From xen-devel-bounces@lists.xenproject.org Sat Apr 01 00:17:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 00:17:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517078.802123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piOvq-0000TO-43; Sat, 01 Apr 2023 00:17:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517078.802123; Sat, 01 Apr 2023 00:17:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piOvq-0000TH-0u; Sat, 01 Apr 2023 00:17:22 +0000
Received: by outflank-mailman (input) for mailman id 517078;
 Sat, 01 Apr 2023 00:17:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piOvn-0000T7-Vr; Sat, 01 Apr 2023 00:17:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piOvn-000769-Ua; Sat, 01 Apr 2023 00:17:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piOvn-00051M-B9; Sat, 01 Apr 2023 00:17:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1piOvn-0005pD-Ah; Sat, 01 Apr 2023 00:17:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y/9+DwORxecq4Ne3hKBud5y8i4JQb4mLTXawEpL15Lw=; b=L9jfMX5zu7wio3KGD+zg9xcer4
	GEEXTZae6k9z3ZZdHg2ZL70n3gRtH8K9RqeNwNh/DSsPA2FU+cFClRWIaGFr+JJSQ6RQfpunGVuER
	COdwMG1ybn3w/PY/n1dUdI4DOx+sBPm/kY4ija+aSUFbTIZ3RzTxRjmFpw62N9hRaSdk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180095-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180095: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=eb6a74827200eedc81b8f45f332d6e9f3b3d2906
X-Osstest-Versions-That:
    ovmf=66f4b1b0d2e603c2b0a3f4149976cdc297a3606d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Apr 2023 00:17:19 +0000

flight 180095 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180095/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 eb6a74827200eedc81b8f45f332d6e9f3b3d2906
baseline version:
 ovmf                 66f4b1b0d2e603c2b0a3f4149976cdc297a3606d

Last test of basis   180091  2023-03-31 17:40:47 Z    0 days
Testing same since   180095  2023-03-31 21:10:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Albecki, Mateusz <mateusz.albecki@intel.com>
  Mateusz Albecki <mateusz.albecki@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   66f4b1b0d2..eb6a748272  eb6a74827200eedc81b8f45f332d6e9f3b3d2906 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Apr 01 00:23:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 00:23:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517092.802164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piP1Z-0002Em-4N; Sat, 01 Apr 2023 00:23:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517092.802164; Sat, 01 Apr 2023 00:23:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piP1Z-0002Ed-0q; Sat, 01 Apr 2023 00:23:17 +0000
Received: by outflank-mailman (input) for mailman id 517092;
 Sat, 01 Apr 2023 00:23:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piP1X-0002EF-PG; Sat, 01 Apr 2023 00:23:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piP1X-0007Cr-NX; Sat, 01 Apr 2023 00:23:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piP1X-0005He-7R; Sat, 01 Apr 2023 00:23:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1piP1X-0000Nj-6x; Sat, 01 Apr 2023 00:23:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+PqZPbf946pt3R4KbMh7ORmK4a0OXHF9xb1nidei2sY=; b=QqZFIfnfAYSYPFvmZsLddeeRpd
	fpDq6WBArkaMcwCLyQC/H4xNpS9egM1QzB7VK97v6Zp5ma4bcV3Hzyb6WjVKqYpDmyFTNMAty1tSk
	dFWmJatQbh0zXf7gpfXmDHiXTQIn46dmHsSIysUWGsSftsWr/Nv+zeaQK2uo1dIYIOxU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180084-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 180084: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-4.17-testing:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=7758cd57e002c5096b2296ede67c59fca68724d7
X-Osstest-Versions-That:
    xen=3eac216e6e60860bbc030602c401d3ef8efce8d9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Apr 2023 00:23:15 +0000

flight 180084 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180084/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 179869
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 179869
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 179869
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 179869
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 179869
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 179869
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 179869
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 179869
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 179869
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  7758cd57e002c5096b2296ede67c59fca68724d7
baseline version:
 xen                  3eac216e6e60860bbc030602c401d3ef8efce8d9

Last test of basis   179869  2023-03-22 11:15:49 Z    9 days
Testing same since   180084  2023-03-31 06:37:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3eac216e6e..7758cd57e0  7758cd57e002c5096b2296ede67c59fca68724d7 -> stable-4.17


From xen-devel-bounces@lists.xenproject.org Sat Apr 01 03:33:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 03:33:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517105.802192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piRzQ-0002Kb-O1; Sat, 01 Apr 2023 03:33:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517105.802192; Sat, 01 Apr 2023 03:33:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piRzQ-0002KD-Hl; Sat, 01 Apr 2023 03:33:16 +0000
Received: by outflank-mailman (input) for mailman id 517105;
 Sat, 01 Apr 2023 03:33:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piRzP-0002K3-5Q; Sat, 01 Apr 2023 03:33:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piRzP-0006qb-1z; Sat, 01 Apr 2023 03:33:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piRzO-0007ct-JL; Sat, 01 Apr 2023 03:33:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1piRzO-0000xl-Ip; Sat, 01 Apr 2023 03:33:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bGttNYeg4tpFhyWMo8aXw7Bm5/7Jf5XskJBEDQYBhWg=; b=HhkxbCaHT0kzCYNydzhuWBImNG
	1ZkFbr7k9k5+WO4MXEbDFMlovrexQrLXtRNVk4OOKHsgHjzrIAxjaSzO/1wkKZa9WTizbDLrGG/K3
	xRbvabyGA1UJOxfGTK58RR8wToyTnh5tJPpHLEu8K24aVqOaTeEeS68osCjA/dzOUcg0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180099-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180099: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=2f499c36db51980ad43fc6b578c7678a1720bd9c
X-Osstest-Versions-That:
    ovmf=eb6a74827200eedc81b8f45f332d6e9f3b3d2906
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Apr 2023 03:33:14 +0000

flight 180099 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180099/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 2f499c36db51980ad43fc6b578c7678a1720bd9c
baseline version:
 ovmf                 eb6a74827200eedc81b8f45f332d6e9f3b3d2906

Last test of basis   180095  2023-03-31 21:10:41 Z    0 days
Testing same since   180099  2023-04-01 00:44:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Ray Ni <ray.ni@intel.com>
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   eb6a748272..2f499c36db  2f499c36db51980ad43fc6b578c7678a1720bd9c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Apr 01 05:43:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 05:43:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517111.802201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piU0g-0007Ec-UA; Sat, 01 Apr 2023 05:42:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517111.802201; Sat, 01 Apr 2023 05:42:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piU0g-0007EV-RV; Sat, 01 Apr 2023 05:42:42 +0000
Received: by outflank-mailman (input) for mailman id 517111;
 Sat, 01 Apr 2023 05:42:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piU0f-0007EL-RM; Sat, 01 Apr 2023 05:42:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piU0f-0001ry-Oy; Sat, 01 Apr 2023 05:42:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piU0f-0003Xp-Di; Sat, 01 Apr 2023 05:42:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1piU0f-0005cH-DI; Sat, 01 Apr 2023 05:42:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/cpMKIZ1/NjyUFabX38tOt0vuWWogMjA76BUJDVGFEc=; b=WRAb7IiFMyReZBINyO7ZSbps7u
	ONGPRzaRXXOEXncQbR7CO5ivCyhAUKTNu/ohcTe0Kd9w0xBn1Aoza74bqPruzYtHhVfijEboLtIWC
	94k9c/xl2mQ47raIK30JUe4ZXD4786+oVijZTws3iNcBFnd+ELcalJSDnKv4dI/xS9CE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180101-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180101: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=67a6f414aa0e2a9cac965fcc6d83b6cbd6e893c0
X-Osstest-Versions-That:
    ovmf=2f499c36db51980ad43fc6b578c7678a1720bd9c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Apr 2023 05:42:41 +0000

flight 180101 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180101/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 67a6f414aa0e2a9cac965fcc6d83b6cbd6e893c0
baseline version:
 ovmf                 2f499c36db51980ad43fc6b578c7678a1720bd9c

Last test of basis   180099  2023-04-01 00:44:02 Z    0 days
Testing same since   180101  2023-04-01 03:35:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Lendacky, Thomas via groups.io <thomas.lendacky=amd.com@groups.io>
  Ray Ni <ray.ni@intel.com>
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   2f499c36db..67a6f414aa  67a6f414aa0e2a9cac965fcc6d83b6cbd6e893c0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Apr 01 06:21:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 06:21:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517118.802212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piUbv-0003KA-Sr; Sat, 01 Apr 2023 06:21:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517118.802212; Sat, 01 Apr 2023 06:21:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piUbv-0003K3-PG; Sat, 01 Apr 2023 06:21:11 +0000
Received: by outflank-mailman (input) for mailman id 517118;
 Sat, 01 Apr 2023 06:21:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piUbu-0003Jt-Mm; Sat, 01 Apr 2023 06:21:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piUbu-0002kQ-Jh; Sat, 01 Apr 2023 06:21:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piUbu-0004j9-7O; Sat, 01 Apr 2023 06:21:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1piUbu-0001xb-70; Sat, 01 Apr 2023 06:21:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qNwdL1YNq/AZ5YzHZqCjmU0GKyK+xl8l4MqIeM25Yt0=; b=Xw8iivLjhPL+59rwlEKBru/627
	FGhBsKs+J6lXmL9jZ6RV6Po25w9+JWkNWnaxqXkZp4T8gwbttxzJHXqt52JWEs2LHplkKHm9svjNf
	Wq4HMPSXcoNxqks8nwd3VTgFhXUcPE+eukFw7HhAGxWxzWptT/BvEMj7a1WDKBJTlPqQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180088-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180088: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=efcd0ec14b0fe9ee0ee70277763b2d538d19238d
X-Osstest-Versions-That:
    qemuu=f00506aeca2f6d92318967693f8da8c713c163f3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Apr 2023 06:21:10 +0000

flight 180088 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180088/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180057
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180057
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180057
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180057
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180057
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                efcd0ec14b0fe9ee0ee70277763b2d538d19238d
baseline version:
 qemuu                f00506aeca2f6d92318967693f8da8c713c163f3

Last test of basis   180057  2023-03-29 21:10:00 Z    2 days
Testing same since   180088  2023-03-31 12:08:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Laurent Vivier <laurent@vivier.eu>
  Nathan Chancellor <nathan@kernel.org>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Titus Rwantare <titusr@google.com>
  Zach van Rijn <me@zv.io>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   f00506aeca..efcd0ec14b  efcd0ec14b0fe9ee0ee70277763b2d538d19238d -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat Apr 01 06:37:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 06:37:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517124.802221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piUrE-0004vp-7a; Sat, 01 Apr 2023 06:37:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517124.802221; Sat, 01 Apr 2023 06:37:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piUrE-0004vi-4o; Sat, 01 Apr 2023 06:37:00 +0000
Received: by outflank-mailman (input) for mailman id 517124;
 Sat, 01 Apr 2023 06:36:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iFEF=7Y=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1piUrD-0004vc-2s
 for xen-devel@lists.xenproject.org; Sat, 01 Apr 2023 06:36:59 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 98a66c50-d057-11ed-b464-930f4c7d94ae;
 Sat, 01 Apr 2023 08:36:56 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 85D9F1F8B9;
 Sat,  1 Apr 2023 06:36:55 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 10300134FB;
 Sat,  1 Apr 2023 06:36:55 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id AzBZAgfRJ2QGdwAAMHmgww
 (envelope-from <jgross@suse.com>); Sat, 01 Apr 2023 06:36:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98a66c50-d057-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680331015; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=fwlqVEU+eay08IeeOOBfH65TQZ/XQ2nl5oYOkjj7LF0=;
	b=MLhGuHY3CSmYqr2j6E4yBaM89chPjWoCvhK+8ZTpJAQ4LqUfQfGaDbjFkxkQbSoUvT19LR
	UnX7QI+Uvo4rZ2gsQVxltDevlaQX+pUb0sZdTlyT42PQAUrR4sbpAQ0h0YQABpzteF3uPT
	KiNeYdvQLVNqIhlg1bIFWY1mzObA3oI=
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org,
	x86@kernel.org,
	linux-hyperv@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>,
	Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: [PATCH v5 00/15] x86/mtrr: fix handling with PAT but without MTRR
Date: Sat,  1 Apr 2023 08:36:37 +0200
Message-Id: <20230401063652.23522-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series tries to fix the rather special case of PAT being available
without MTRRs (either due to CONFIG_MTRR not being set, or because the
feature has been disabled, e.g. by a hypervisor).

The main use cases are Xen PV guests and SEV-SNP guests running under
Hyper-V.

Instead of trying to work around all the issues by adding if statements
here and there, just try to use the complete available infrastructure
by setting up a read-only MTRR state when needed.

In the Xen PV case the current MTRR MSR values can be read from the
hypervisor, while for the SEV-SNP case all that is needed is to set the
default caching mode to "WB".
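The two setup paths above can be sketched as a simplified model. All names
here (struct and function names, the stubbed hypervisor read) are
illustrative assumptions, not the kernel interfaces the series actually
adds; only the MTRR type encoding (write-back = 6) follows the architecture:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of a software-defined, read-only MTRR state. */
#define MTRR_TYPE_WRBACK 6
#define NUM_VAR_RANGES   8

struct mtrr_var_range { uint64_t base, mask; };

struct mtrr_state {
	struct mtrr_var_range var[NUM_VAR_RANGES];
	unsigned int num_var;
	uint8_t def_type;  /* caching type used where no range matches */
	int enabled;       /* state is valid, but never written to HW */
};

/* SEV-SNP under Hyper-V: no MTRR MSRs to consult; it suffices to
 * declare "write-back everywhere" as the default caching mode. */
static void snp_init_mtrr_state(struct mtrr_state *s)
{
	s->num_var = 0;
	s->def_type = MTRR_TYPE_WRBACK;
	s->enabled = 1;
}

/* Xen PV initial domain: the current MTRR MSR values can be read from
 * the hypervisor and copied into the same read-only state (the actual
 * hypervisor reads are stubbed out as the hv_vals parameter here). */
static void xen_pv_init_mtrr_state(struct mtrr_state *s,
				   const struct mtrr_var_range *hv_vals,
				   unsigned int n, uint8_t hv_def)
{
	for (unsigned int i = 0; i < n && i < NUM_VAR_RANGES; i++)
		s->var[i] = hv_vals[i];
	s->num_var = n;
	s->def_type = hv_def;
	s->enabled = 1;
}
```

With such a state in place, the generic MTRR type-conflict and lookup code
can run unchanged, instead of being bypassed with scattered if statements.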

I have added more cleanup, which was discussed when looking into the
most recent failures.

Note that I couldn't test the Hyper-V related change (patch 3).

Running on bare metal and with Xen didn't show any problems with the
series applied.

It should be noted that patches 9+10 replace today's way of looking up
the MTRR cache type for a memory region: instead of scanning the MTRR
register values on each lookup, a memory map with the cache types is
built once. This should make the lookup much faster and much easier to
understand.
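The memory-map idea can be illustrated with a minimal sketch. The
structure, entry values and function names are assumptions for
illustration, not the series' actual code: the map is built once from the
MTRR registers as a sorted, disjoint list of ranges with their effective
cache type, and a lookup becomes a plain binary search:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MTRR_TYPE_UNCACHABLE 0
#define MTRR_TYPE_WRBACK     6

/* One entry of the pre-built map: [start, end) with an effective type. */
struct cache_map_entry { uint64_t start, end; uint8_t type; };

/* Hypothetical pre-built map; in the series this would be constructed
 * once from the MTRR register values. Entries are sorted and disjoint. */
static const struct cache_map_entry cache_map[] = {
	{ 0x00000000, 0x000a0000, MTRR_TYPE_WRBACK },
	{ 0x000a0000, 0x00100000, MTRR_TYPE_UNCACHABLE },
	{ 0x00100000, 0x80000000, MTRR_TYPE_WRBACK },
};

/* Lookup: binary search over the sorted map; addresses not covered by
 * any entry fall back to the default type (write-back here). */
static uint8_t cache_type_lookup(uint64_t addr)
{
	size_t lo = 0, hi = sizeof(cache_map) / sizeof(cache_map[0]);

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (addr < cache_map[mid].start)
			hi = mid;
		else if (addr >= cache_map[mid].end)
			lo = mid + 1;
		else
			return cache_map[mid].type;
	}
	return MTRR_TYPE_WRBACK;
}
```

Compared with re-evaluating every fixed and variable MTRR per lookup, the
search over a pre-resolved map is both cheaper and easier to reason about.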

Changes in V2:
- replaced former patches 1+2 with new patches 1-4, avoiding in
  particular the rather hacky approach of V1, while making all the MTRR
  type conflict tests available for the Xen PV case
- updated patch 6 (was patch 4 in V1)

Changes in V3:
- dropped patch 5 of V2, as already applied
- split patch 1 of V2 into 2 patches
- new patches 6-10
- addressed comments

Changes in V4:
- addressed comments

Changes in V5:
- addressed comments
- some other small fixes
- new patches 3, 8 and 15


Juergen Gross (15):
  x86/mtrr: split off physical address size calculation
  x86/mtrr: optimize mtrr_calc_physbits()
  x86/mtrr: replace some constants with defines
  x86/mtrr: support setting MTRR state for software defined MTRRs
  x86/hyperv: set MTRR state when running as SEV-SNP Hyper-V guest
  x86/xen: set MTRR state when running as Xen PV initial domain
  x86/mtrr: replace vendor tests in MTRR code
  x86/mtrr: have only one set_mtrr() variant
  x86/mtrr: allocate mtrr_value array dynamically
  x86/mtrr: add get_effective_type() service function
  x86/mtrr: construct a memory map with cache modes
  x86/mtrr: use new cache_map in mtrr_type_lookup()
  x86/mtrr: don't let mtrr_type_lookup() return MTRR_TYPE_INVALID
  x86/mm: only check uniform after calling mtrr_type_lookup()
  x86/mtrr: remove unused code

 arch/x86/include/asm/mtrr.h        |  44 ++-
 arch/x86/include/uapi/asm/mtrr.h   |   6 +-
 arch/x86/kernel/cpu/mshyperv.c     |   4 +
 arch/x86/kernel/cpu/mtrr/amd.c     |   2 +-
 arch/x86/kernel/cpu/mtrr/centaur.c |  11 +-
 arch/x86/kernel/cpu/mtrr/cleanup.c |   6 +-
 arch/x86/kernel/cpu/mtrr/cyrix.c   |   2 +-
 arch/x86/kernel/cpu/mtrr/generic.c | 578 +++++++++++++++++++----------
 arch/x86/kernel/cpu/mtrr/mtrr.c    | 146 ++++----
 arch/x86/kernel/cpu/mtrr/mtrr.h    |   7 +-
 arch/x86/kernel/setup.c            |   2 +
 arch/x86/mm/pgtable.c              |  24 +-
 arch/x86/xen/enlighten_pv.c        |  52 +++
 13 files changed, 573 insertions(+), 311 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sat Apr 01 06:37:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 06:37:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517126.802232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piUrm-0005Oo-Fr; Sat, 01 Apr 2023 06:37:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517126.802232; Sat, 01 Apr 2023 06:37:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piUrm-0005Og-Ci; Sat, 01 Apr 2023 06:37:34 +0000
Received: by outflank-mailman (input) for mailman id 517126;
 Sat, 01 Apr 2023 06:37:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iFEF=7Y=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1piUrk-0005OF-GR
 for xen-devel@lists.xenproject.org; Sat, 01 Apr 2023 06:37:32 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id acec79e9-d057-11ed-b464-930f4c7d94ae;
 Sat, 01 Apr 2023 08:37:30 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BC0A921A6E;
 Sat,  1 Apr 2023 06:37:29 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 75458134FB;
 Sat,  1 Apr 2023 06:37:29 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 2J9VGynRJ2RUdwAAMHmgww
 (envelope-from <jgross@suse.com>); Sat, 01 Apr 2023 06:37:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acec79e9-d057-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680331049; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BBSVwIjfh8GqolcmguccgHbT9FXyjh+eQq+oIFQmSnw=;
	b=ajS/iUmKoQdV4uRxzu4IIGIVP4xHuK0N0mFHi+m4Vq9E4sezhaed9RpDBa3+M5YQFdJCaq
	gZbqX3tuVGAXhzB3za13DOYXPeqgOYoxxl+pC7dSY9e8MohVhKnXH+ccd+fRqFRO43AjOz
	n25N0ONmdBS7BJtWgUpIwdIAaFHoHiM=
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org,
	x86@kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	xen-devel@lists.xenproject.org
Subject: [PATCH v5 06/15] x86/xen: set MTRR state when running as Xen PV initial domain
Date: Sat,  1 Apr 2023 08:36:43 +0200
Message-Id: <20230401063652.23522-7-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230401063652.23522-1-jgross@suse.com>
References: <20230401063652.23522-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When running as Xen PV initial domain (aka dom0), MTRRs are disabled
by the hypervisor, but the system should nevertheless use correct
cache memory types. This has always mostly worked, as disabled MTRRs
resulted in PAT being disabled, too, so the kernel avoided the code
paths that could produce inconsistencies. This bypassed all of the
sanity checks the kernel performs with MTRRs enabled in order to
avoid memory mappings with conflicting memory types.

This changed recently: PAT is now accepted as enabled while MTRRs
stay disabled. As a result, mtrr_type_lookup() no longer honors all
memory type requests, but returns WB even if UC- was requested. This
led to driver failures during the initialization of some devices.

In reality, MTRRs are still in effect, but they are under the
complete control of the Xen hypervisor. It is possible, however, to
retrieve the MTRR settings from the hypervisor.

In order to fix those problems, overwrite the MTRR state via
mtrr_overwrite_state() with the MTRR data from the hypervisor if the
system is running as a Xen dom0.

Fixes: 72cbc8f04fe2 ("x86/PAT: Have pat_enabled() properly reflect state when running on Xen")
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
V2:
- new patch
V3:
- move the call of mtrr_overwrite_state() to xen_pv_init_platform()
V4:
- only call mtrr_overwrite_state() if any MTRRs were obtained from Xen
  (Boris Ostrovsky)
---
 arch/x86/xen/enlighten_pv.c | 52 +++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 093b78c8bbec..fdaea02ab5ab 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -68,6 +68,7 @@
 #include <asm/reboot.h>
 #include <asm/hypervisor.h>
 #include <asm/mach_traps.h>
+#include <asm/mtrr.h>
 #include <asm/mwait.h>
 #include <asm/pci_x86.h>
 #include <asm/cpu.h>
@@ -119,6 +120,54 @@ static int __init parse_xen_msr_safe(char *str)
 }
 early_param("xen_msr_safe", parse_xen_msr_safe);
 
+/* Get MTRR settings from Xen and put them into mtrr_state. */
+static void __init xen_set_mtrr_data(void)
+{
+#ifdef CONFIG_MTRR
+	struct xen_platform_op op = {
+		.cmd = XENPF_read_memtype,
+		.interface_version = XENPF_INTERFACE_VERSION,
+	};
+	unsigned int reg;
+	unsigned long mask;
+	uint32_t eax, width;
+	static struct mtrr_var_range var[MTRR_MAX_VAR_RANGES] __initdata;
+
+	/* Get physical address width (only 64-bit cpus supported). */
+	width = 36;
+	eax = cpuid_eax(0x80000000);
+	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
+		eax = cpuid_eax(0x80000008);
+		width = eax & 0xff;
+	}
+
+	for (reg = 0; reg < MTRR_MAX_VAR_RANGES; reg++) {
+		op.u.read_memtype.reg = reg;
+		if (HYPERVISOR_platform_op(&op))
+			break;
+
+		/*
+		 * Only called in dom0, which has all RAM PFNs mapped at
+		 * RAM MFNs, and all PCI space etc. is identity mapped.
+		 * This means we can treat MFN == PFN regarding MTRR settings.
+		 */
+		var[reg].base_lo = op.u.read_memtype.type;
+		var[reg].base_lo |= op.u.read_memtype.mfn << PAGE_SHIFT;
+		var[reg].base_hi = op.u.read_memtype.mfn >> (32 - PAGE_SHIFT);
+		mask = ~((op.u.read_memtype.nr_mfns << PAGE_SHIFT) - 1);
+		mask &= (1UL << width) - 1;
+		if (mask)
+			mask |= MTRR_MASK_VALID;
+		var[reg].mask_lo = mask;
+		var[reg].mask_hi = mask >> 32;
+	}
+
+	/* Only overwrite MTRR state if any MTRR could be got from Xen. */
+	if (reg)
+		mtrr_overwrite_state(var, reg, MTRR_TYPE_UNCACHABLE);
+#endif
+}
+
 static void __init xen_pv_init_platform(void)
 {
 	/* PV guests can't operate virtio devices without grants. */
@@ -135,6 +184,9 @@ static void __init xen_pv_init_platform(void)
 
 	/* pvclock is in shared info area */
 	xen_init_time_ops();
+
+	if (xen_initial_domain())
+		xen_set_mtrr_data();
 }
 
 static void __init xen_pv_guest_late_init(void)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sat Apr 01 12:04:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 12:04:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517145.802242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piZxP-0004TY-PU; Sat, 01 Apr 2023 12:03:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517145.802242; Sat, 01 Apr 2023 12:03:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piZxP-0004TR-MV; Sat, 01 Apr 2023 12:03:43 +0000
Received: by outflank-mailman (input) for mailman id 517145;
 Sat, 01 Apr 2023 12:03:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piZxO-0004TH-9z; Sat, 01 Apr 2023 12:03:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piZxN-0002HU-Lz; Sat, 01 Apr 2023 12:03:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piZxN-0004K0-7L; Sat, 01 Apr 2023 12:03:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1piZxN-0002qa-6w; Sat, 01 Apr 2023 12:03:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pawRJLIzYdZWALZWGxQzinqL2TxXv/FjxJVI6HqfdO8=; b=QfISLqBMVEqnoLOngr0izvdZ6V
	xufyKJTVBDxe6CfMHjhPzwQAgecLB1BCdrmbe0iGtI2NK5YavpfIx+Q8t7oisZ40ioDK/a4dFLhmf
	JrH4EXQuvlulrcSQBqyjvMe7QjeSX7R7W3ebTGfDI8/T+sgSXBK7vkcbXtRxaZit6dsM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180093-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180093: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=5a57b48fdfcb1e196292665d87fac46180344f8a
X-Osstest-Versions-That:
    linux=62bad54b26db8bc98e28749cd76b2d890edb4258
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Apr 2023 12:03:41 +0000

flight 180093 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180093/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180081
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180081
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180081
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180081
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180081
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                5a57b48fdfcb1e196292665d87fac46180344f8a
baseline version:
 linux                62bad54b26db8bc98e28749cd76b2d890edb4258

Last test of basis   180081  2023-03-31 05:40:32 Z    1 days
Testing same since   180093  2023-03-31 20:41:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alyssa Ross <hi@alyssa.is>
  Bjorn Helgaas <bhelgaas@google.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Conor Dooley <conor.dooley@microchip.com>
  David Arcari <darcari@redhat.com>
  David E. Box <david.e.box@linux.intel.com>
  Hans de Goede <hdegoede@redhat.com>
  Jens Axboe <axboe@kernel.dk>
  Juraj Pecigos <kernel@juraj.dev>
  Linus Torvalds <torvalds@linux-foundation.org>
  Mark Brown <broonie@kernel.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Pavel Begunkov <asml.silence@gmail.com>
  Pierre Asselin <pa@panix.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rajvi Jingar <rajvi.jingar@linux.intel.com>
  Sagi Grimberg <sagi@grimberg.me>
  Song Liu <song@kernel.org>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Weißschuh <linux@weissschuh.net>
  weiliang1503 <weiliang1503@gmail.com>
  Yanjun Zhang <zhangyanjun@cestc.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Yu Kuai <yukuai3@huawei.com>
  Álvaro Fernández Rojas <noltari@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   62bad54b26db..5a57b48fdfcb  5a57b48fdfcb1e196292665d87fac46180344f8a -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat Apr 01 14:51:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 14:51:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517156.802252 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1picZi-0004I9-R6; Sat, 01 Apr 2023 14:51:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517156.802252; Sat, 01 Apr 2023 14:51:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1picZi-0004I2-Nz; Sat, 01 Apr 2023 14:51:26 +0000
Received: by outflank-mailman (input) for mailman id 517156;
 Sat, 01 Apr 2023 14:51:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1picZh-0004Hs-6s; Sat, 01 Apr 2023 14:51:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1picZg-0006Aa-Va; Sat, 01 Apr 2023 14:51:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1picZg-0002sN-Fd; Sat, 01 Apr 2023 14:51:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1picZg-0006wp-FA; Sat, 01 Apr 2023 14:51:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vCi06dSwS+d4DBuc2qFuQMH+WfY+TYSEAjYP5ee0eXA=; b=pQskkCLlg9wAVnIV+riDcMxt6F
	s3yVxOQ0Ro5c9WIz3j2DEBwtqik5FwgXn8G/GjLjjymIsYb3zSHbwmv8aWaaiBojRoiK8Nkn0UTRb
	3CvURdi9nU+70/vbQ1ViFPdSQ8dKYlJfdMt1MBuIsHSx8uS10lra8kWFxOJoHjenSqJg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180098-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180098: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=d6e0b4c41a38655ade7ecb566e8b2961282769fb
X-Osstest-Versions-That:
    xen=eef4608fe71feddb5fea86678cf3acaf84d10fd2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Apr 2023 14:51:24 +0000

flight 180098 xen-unstable real [real]
flight 180104 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180098/
http://logs.test-lab.xenproject.org/osstest/logs/180104/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180104-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180062
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180062
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180062
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180062
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180062
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180062
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180062
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180062
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180062
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  d6e0b4c41a38655ade7ecb566e8b2961282769fb
baseline version:
 xen                  eef4608fe71feddb5fea86678cf3acaf84d10fd2

Last test of basis   180062  2023-03-30 05:16:10 Z    2 days
Failing since        180073  2023-03-30 20:38:51 Z    1 days    3 attempts
Testing same since   180098  2023-03-31 23:09:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Christian Lindig <christian.lindig@cloud.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   eef4608fe7..d6e0b4c41a  d6e0b4c41a38655ade7ecb566e8b2961282769fb -> master


From xen-devel-bounces@lists.xenproject.org Sat Apr 01 18:34:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 18:34:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517171.802286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pig3R-0002Dz-U3; Sat, 01 Apr 2023 18:34:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517171.802286; Sat, 01 Apr 2023 18:34:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pig3R-0002Dr-R1; Sat, 01 Apr 2023 18:34:21 +0000
Received: by outflank-mailman (input) for mailman id 517171;
 Sat, 01 Apr 2023 18:34:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pig3Q-0002Dh-CT; Sat, 01 Apr 2023 18:34:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pig3Q-0003JE-9i; Sat, 01 Apr 2023 18:34:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pig3P-0005TD-Pp; Sat, 01 Apr 2023 18:34:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pig3P-0005Oq-P5; Sat, 01 Apr 2023 18:34:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vmJ/mrxAtETkvtbEGndNmPrBzF1Sd/I6f1TIhhvYF28=; b=meTo3fFp+nUm5PNzEvelPme9XY
	q1Ed4ipDYeQ9L8YkPtrYkNWGB78LAQkcbBlwAVqaJjb1rLEIooHMqlFcikdcsvgPY8PwXpH8jqGdM
	d/hm+cfV4kKRlsKtT9eZLgtwMaJpcGHxC+7rB9F4PgU1LdHBWfyqwtKgrOJfLK19oJao=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180102-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180102: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=ac8e8ef24e3be018ec25f961033fed6c841bad02
X-Osstest-Versions-That:
    libvirt=2c6b5a84257379e516ca1999782dca88dfd8a9de
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Apr 2023 18:34:19 +0000

flight 180102 libvirt real [real]
flight 180106 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180102/
http://logs.test-lab.xenproject.org/osstest/logs/180106/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 180106-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              ac8e8ef24e3be018ec25f961033fed6c841bad02
baseline version:
 libvirt              2c6b5a84257379e516ca1999782dca88dfd8a9de

Last test of basis   180061  2023-03-30 04:20:17 Z    2 days
Failing since        180080  2023-03-31 04:21:50 Z    1 days    2 attempts
Testing same since   180102  2023-04-01 04:20:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pavel Borecki <pavel.borecki@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   2c6b5a8425..ac8e8ef24e  ac8e8ef24e3be018ec25f961033fed6c841bad02 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Apr 01 22:00:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 22:00:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517205.802312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pijHB-0007CB-3C; Sat, 01 Apr 2023 22:00:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517205.802312; Sat, 01 Apr 2023 22:00:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pijHB-0007C4-0b; Sat, 01 Apr 2023 22:00:45 +0000
Received: by outflank-mailman (input) for mailman id 517205;
 Sat, 01 Apr 2023 22:00:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pijH9-0007Bu-QW; Sat, 01 Apr 2023 22:00:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pijH9-0008SW-Ml; Sat, 01 Apr 2023 22:00:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pijH9-0003Hp-43; Sat, 01 Apr 2023 22:00:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pijH9-0006uM-3Z; Sat, 01 Apr 2023 22:00:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JYN/ZPe6KCf+8w4JVTDl2IXoPLNaO80KmtZ4P2ZNBm0=; b=0AJj/pVSkxGwAzqW9KH9UAMZz8
	xfD4YiTk0ZlDhJ2FTol9kCmfNaofbCbDexP34V540S2nWD+O3FwPOqu+oNzDxCk49eBtRkXjP5hR2
	BE8QCU6xGo9Dq8iLabUMpLb1If5Mu8rbjFaYVlme0x+INzQ/M9p6XTtcwAUwUSryUcEk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180103-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180103: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=7b50567bdcad8925ca1e075feb7171c12015afd1
X-Osstest-Versions-That:
    linux=5a57b48fdfcb1e196292665d87fac46180344f8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Apr 2023 22:00:43 +0000

flight 180103 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180103/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180093
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180093
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180093
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180093
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180093
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                7b50567bdcad8925ca1e075feb7171c12015afd1
baseline version:
 linux                5a57b48fdfcb1e196292665d87fac46180344f8a

Last test of basis   180093  2023-03-31 20:41:50 Z    1 days
Testing same since   180103  2023-04-01 12:08:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Arnd Bergmann <arnd@arndb.de>
  Linus Torvalds <torvalds@linux-foundation.org>
  Siddharth Kawar <Siddharth.Kawar@microsoft.com>
  Siddharth Rajendra Kawar <sikawar@microsoft.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   5a57b48fdfcb..7b50567bdcad  7b50567bdcad8925ca1e075feb7171c12015afd1 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat Apr 01 22:37:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Apr 2023 22:37:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517210.802322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pijqA-0002GR-T2; Sat, 01 Apr 2023 22:36:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517210.802322; Sat, 01 Apr 2023 22:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pijqA-0002GK-QU; Sat, 01 Apr 2023 22:36:54 +0000
Received: by outflank-mailman (input) for mailman id 517210;
 Sat, 01 Apr 2023 22:36:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fl6z=7Y=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pijqA-0002GE-0L
 for xen-devel@lists.xenproject.org; Sat, 01 Apr 2023 22:36:54 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b27e29a3-d0dd-11ed-85db-49a42c6b2330;
 Sun, 02 Apr 2023 00:36:52 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id ew6so103437853edb.7
 for <xen-devel@lists.xenproject.org>; Sat, 01 Apr 2023 15:36:52 -0700 (PDT)
Received: from [127.0.0.1] (dynamic-077-013-027-219.77.13.pool.telefonica.de.
 [77.13.27.219]) by smtp.gmail.com with ESMTPSA id
 bq18-20020a056402215200b00501c2a9e16dsm2525599edb.74.2023.04.01.15.36.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 01 Apr 2023 15:36:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b27e29a3-d0dd-11ed-85db-49a42c6b2330
Date: Sat, 01 Apr 2023 22:36:45 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: qemu-devel@nongnu.org, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>, David Woodhouse <dwmw@amazon.co.uk>,
 Hervé Poussineau <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost <eduardo@habkost.net>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 Philippe Mathieu-Daudé <philmd@linaro.org>,
 Chuck Zmudzinski <brchuckz@aol.com>
Subject: Re: [PATCH v3 2/6] hw/isa/piix3: Reuse piix3_realize() in piix3_xen_realize()
In-Reply-To: <f52c41f7-e662-4afd-8ac9-ce2c0da2b1be@perard>
References: <20230312120221.99183-1-shentey@gmail.com> <20230312120221.99183-3-shentey@gmail.com> <f52c41f7-e662-4afd-8ac9-ce2c0da2b1be@perard>
Message-ID: <7F45B51F-F1E3-4F04-A46F-4C80509C7195@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: 8bit



On 30 March 2023 13:00:25 UTC, Anthony PERARD
<anthony.perard@citrix.com> wrote:
>On Sun, Mar 12, 2023 at 01:02:17PM +0100, Bernhard Beschow wrote:
>> This is a preparatory patch for the next one, to make the following
>> more obvious:
>>
>> First, pci_bus_irqs() is now called twice in the Xen case, where the
>> second call overrides the pci_set_irq_fn with the Xen variant.
>
>pci_bus_irqs() does allocate pci_bus->irq_count, so the second call in
>piix3_xen_realize() will leak `pci_bus->irq_count`. Could you check
>whether pci_bus_irqs_cleanup() can be called before the second
>pci_bus_irqs() call, or find some other way to avoid the leak?

Thanks for catching this! I'll post a v4.

I think the most fool-proof way to fix this is to free irq_count just
before the assignment. pci_bus_irqs_cleanup() would then have to NULL
the attribute such that pci_bus_irqs() can be called afterwards.
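
The pattern described above (free before reassigning, NULL after
freeing) can be sketched outside QEMU. The following is a minimal,
self-contained model only: ToyBus, bus_irqs() and bus_irqs_cleanup()
are hypothetical stand-ins, not QEMU's real PCIBus API.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-in for the one PCIBus field under discussion. */
typedef struct ToyBus {
    int *irq_count; /* allocated by bus_irqs(); the field that leaked */
    int  nirq;
} ToyBus;

/* Models pci_bus_irqs(): freeing any previous allocation just before
 * the assignment makes a second call (as on the Xen realize path)
 * leak-free. free(NULL) is a no-op, so the first call is fine too. */
static void bus_irqs(ToyBus *bus, int nirq)
{
    free(bus->irq_count);
    bus->irq_count = calloc(nirq, sizeof(*bus->irq_count));
    assert(bus->irq_count != NULL);
    bus->nirq = nirq;
}

/* Models pci_bus_irqs_cleanup(): NULLing the pointer after freeing it
 * lets bus_irqs() be called again afterwards without a double free. */
static void bus_irqs_cleanup(ToyBus *bus)
{
    free(bus->irq_count);
    bus->irq_count = NULL;
    bus->nirq = 0;
}
```

Freeing before the assignment makes the allocation idempotent, so a
caller that re-registers IRQ handling cannot leak regardless of how
many times it runs.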

BTW: I tried running qemu-system-x86_64 with PIIX4 rather than PIIX3
as a Xen guest with my pc-piix4 branch, without success. This branch
essentially just provides slightly different PCI IDs for PIIX. Does xl
or something else in Xen check these? If not, then I'm still missing
something. Under KVM this branch works just fine. Any idea?

Thanks,
Bernhard

>
>> Second, pci_bus_set_route_irq_fn() is now also called in Xen mode.
>>
>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>> Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
>
>Aside from the leak, which I think can happen only once, the patch is
>fine:
>Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
>
>Thanks,
>


From xen-devel-bounces@lists.xenproject.org Sun Apr 02 02:14:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Apr 2023 02:14:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517214.802333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pinEv-0000GD-5O; Sun, 02 Apr 2023 02:14:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517214.802333; Sun, 02 Apr 2023 02:14:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pinEu-0000Fp-VI; Sun, 02 Apr 2023 02:14:40 +0000
Received: by outflank-mailman (input) for mailman id 517214;
 Sun, 02 Apr 2023 02:14:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pinEu-0000Ff-0z; Sun, 02 Apr 2023 02:14:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pinEt-0007lI-VA; Sun, 02 Apr 2023 02:14:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pinEs-0004MT-F4; Sun, 02 Apr 2023 02:14:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pinEs-0005Jp-Ec; Sun, 02 Apr 2023 02:14:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180108-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180108: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=4ca4d2b9df27f9c58009d623678ac911c544d36c
X-Osstest-Versions-That:
    ovmf=67a6f414aa0e2a9cac965fcc6d83b6cbd6e893c0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Apr 2023 02:14:38 +0000

flight 180108 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180108/

Perfect :-)
All tests in this flight passed as required

version targeted for testing:
 ovmf                 4ca4d2b9df27f9c58009d623678ac911c544d36c
baseline version:
 ovmf                 67a6f414aa0e2a9cac965fcc6d83b6cbd6e893c0

Last test of basis   180101  2023-04-01 03:35:21 Z    0 days
Testing same since   180108  2023-04-01 23:42:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Marvin Häuser <mhaeuser@posteo.de>
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   67a6f414aa..4ca4d2b9df  4ca4d2b9df27f9c58009d623678ac911c544d36c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Apr 02 02:22:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Apr 2023 02:22:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517219.802343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pinMe-0001jc-Va; Sun, 02 Apr 2023 02:22:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517219.802343; Sun, 02 Apr 2023 02:22:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pinMe-0001jU-PS; Sun, 02 Apr 2023 02:22:40 +0000
Received: by outflank-mailman (input) for mailman id 517219;
 Sun, 02 Apr 2023 02:22:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pinMd-0001jI-OV; Sun, 02 Apr 2023 02:22:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pinMd-000849-KX; Sun, 02 Apr 2023 02:22:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pinMc-0004X3-Tj; Sun, 02 Apr 2023 02:22:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pinMc-00075v-T6; Sun, 02 Apr 2023 02:22:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180105-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180105: tolerable trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=d6e0b4c41a38655ade7ecb566e8b2961282769fb
X-Osstest-Versions-That:
    xen=d6e0b4c41a38655ade7ecb566e8b2961282769fb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Apr 2023 02:22:38 +0000

flight 180105 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180105/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 180098 pass in 180105
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat  fail pass in 180098

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180098
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180098
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180098
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180098
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180098
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180098
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180098
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180098
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180098
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm       3 hosts-allocate           starved in 180098 n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  d6e0b4c41a38655ade7ecb566e8b2961282769fb
baseline version:
 xen                  d6e0b4c41a38655ade7ecb566e8b2961282769fb

Last test of basis   180105  2023-04-01 14:53:47 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Apr 02 04:32:35 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180109-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180109: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b4af23aaab8a44341e43713a71cbebf23df2c27d
X-Osstest-Versions-That:
    ovmf=4ca4d2b9df27f9c58009d623678ac911c544d36c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Apr 2023 04:32:19 +0000

flight 180109 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180109/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b4af23aaab8a44341e43713a71cbebf23df2c27d
baseline version:
 ovmf                 4ca4d2b9df27f9c58009d623678ac911c544d36c

Last test of basis   180108  2023-04-01 23:42:19 Z    0 days
Testing same since   180109  2023-04-02 02:15:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   4ca4d2b9df..b4af23aaab  b4af23aaab8a44341e43713a71cbebf23df2c27d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Apr 02 07:14:11 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180112-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180112: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=fc00ff286a541c047b7d343e66ec10890b80d3ea
X-Osstest-Versions-That:
    ovmf=b4af23aaab8a44341e43713a71cbebf23df2c27d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Apr 2023 07:13:42 +0000

flight 180112 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180112/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 fc00ff286a541c047b7d343e66ec10890b80d3ea
baseline version:
 ovmf                 b4af23aaab8a44341e43713a71cbebf23df2c27d

Last test of basis   180109  2023-04-02 02:15:13 Z    0 days
Testing same since   180112  2023-04-02 04:43:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b4af23aaab..fc00ff286a  fc00ff286a541c047b7d343e66ec10890b80d3ea -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Apr 02 07:42:29 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180107-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180107: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=00c7b5f4ddc5b346df62b757ec73f9357bb452af
X-Osstest-Versions-That:
    linux=7b50567bdcad8925ca1e075feb7171c12015afd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Apr 2023 07:42:19 +0000

flight 180107 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180107/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180103
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180103
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180103
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180103
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180103
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check       fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                00c7b5f4ddc5b346df62b757ec73f9357bb452af
baseline version:
 linux                7b50567bdcad8925ca1e075feb7171c12015afd1

Last test of basis   180103  2023-04-01 12:08:07 Z    0 days
Testing same since   180107  2023-04-01 22:11:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Arınç ÜNAL <arinc.unal@arinc9.com>
  Ben Hutchings <ben@decadent.org.uk>
  Claudiu Beznea <claudiu.beznea@microchip.com> # on SAMA7G5
  Daniel Golle <daniel@makrotopia.org>
  Dario Binacchi <dario.binacchi@amarulasolutions.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Hans de Goede <hdegoede@redhat.com>
  Horatiu Vultur <horatiu.vultur@microchip.com>
  Jacob Pan <jacob.jun.pan@linux.intel.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan+linaro@kernel.org>
  Jonathan Denose <jdenose@chromium.org>
  Jonathan Denose <jdenose@google.com>
  Kan Liang <kan.liang@linux.intel.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kornel Dulęba <korneld@chromium.org>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matthias Benkmann <matthias.benkmann@gmail.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  msizanoen <msizanoen@qtmlabs.xyz>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Peter Oberparleiter <oberpar@linux.ibm.com>
  Randy Dunlap <rdunlap@infradead.org>
  Werner Sembach <wse@tuxedocomputers.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   7b50567bdcad..00c7b5f4ddc5  00c7b5f4ddc5b346df62b757ec73f9357bb452af -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun Apr 02 12:27:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Apr 2023 12:27:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517261.802383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piwo4-0003v3-Uy; Sun, 02 Apr 2023 12:27:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517261.802383; Sun, 02 Apr 2023 12:27:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piwo4-0003uw-Ql; Sun, 02 Apr 2023 12:27:36 +0000
Received: by outflank-mailman (input) for mailman id 517261;
 Sun, 02 Apr 2023 12:27:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piwo3-0003um-Ne; Sun, 02 Apr 2023 12:27:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piwo3-0006Vk-Kt; Sun, 02 Apr 2023 12:27:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piwo3-0001aq-9a; Sun, 02 Apr 2023 12:27:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1piwo3-0004Oy-9B; Sun, 02 Apr 2023 12:27:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/cBpI/4E6k6V/qUkmQWocwkX5ClhcjfWBFPfX4qFeQE=; b=VCp92ilZyt8O5k/eIU4M2AD93k
	Ul3mv1CnOFjsKRz5XOCdfvcVlvx+DLaaFLYpl1KJ3zCDvhhpVGvE0rD71hRFyliypGhbB4e+z5BZC
	94h+xi28sIxVEjtm5NGMgsWWBmZPAEUyo/dTcxgwuxrFbkrX5lRvSXHRcOKKc1kGfm6E=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180110-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180110: tolerable trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=d6e0b4c41a38655ade7ecb566e8b2961282769fb
X-Osstest-Versions-That:
    xen=d6e0b4c41a38655ade7ecb566e8b2961282769fb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Apr 2023 12:27:35 +0000

flight 180110 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180110/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180105
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180105
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180105
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180105
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180105
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180105
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180105
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180105
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180105
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  d6e0b4c41a38655ade7ecb566e8b2961282769fb
baseline version:
 xen                  d6e0b4c41a38655ade7ecb566e8b2961282769fb

Last test of basis   180110  2023-04-02 02:26:14 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Apr 02 14:51:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Apr 2023 14:51:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517274.802392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piz3I-0001bH-1y; Sun, 02 Apr 2023 14:51:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517274.802392; Sun, 02 Apr 2023 14:51:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1piz3H-0001bA-Ve; Sun, 02 Apr 2023 14:51:27 +0000
Received: by outflank-mailman (input) for mailman id 517274;
 Sun, 02 Apr 2023 14:51:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piz3G-0001ax-Qe; Sun, 02 Apr 2023 14:51:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piz3G-0001nD-Ij; Sun, 02 Apr 2023 14:51:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1piz3G-0004qz-3O; Sun, 02 Apr 2023 14:51:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1piz3G-0005nm-2x; Sun, 02 Apr 2023 14:51:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TNn5vtC4ToGuGqyrVZ0zKZE88Z3HMhTGO119drbis84=; b=OBxLq/8S/cSSHVA+6NblknYJVk
	yI1jIP7p7UDrCqFf7eW0Rhwh8fkNir58hofx68VsJvrrLFSQ4Ad0LoyJHQVp3IUZZfZ8MHN24IIIJ
	CwUTiqkkfNUO+P++rsZ8DiLXONnjyCi/OcIMNyXsXTA9HvYuIj6Ucmv3aYCxuwpyNHyo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180111-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180111: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=145886e36dba23ed3756ab9c52b2ac77c506943f
X-Osstest-Versions-That:
    libvirt=ac8e8ef24e3be018ec25f961033fed6c841bad02
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Apr 2023 14:51:26 +0000

flight 180111 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180111/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              145886e36dba23ed3756ab9c52b2ac77c506943f
baseline version:
 libvirt              ac8e8ef24e3be018ec25f961033fed6c841bad02

Last test of basis   180102  2023-04-01 04:20:15 Z    1 days
Testing same since   180111  2023-04-02 04:23:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Jiri Denemark <jdenemar@redhat.com>
  Pavel Borecki <pavel.borecki@gmail.com>
  Weblate <noreply@weblate.org>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   ac8e8ef24e..145886e36d  145886e36dba23ed3756ab9c52b2ac77c506943f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Apr 02 23:16:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Apr 2023 23:16:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517288.802402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pj6vZ-000827-Dk; Sun, 02 Apr 2023 23:16:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517288.802402; Sun, 02 Apr 2023 23:16:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pj6vZ-000820-B8; Sun, 02 Apr 2023 23:16:01 +0000
Received: by outflank-mailman (input) for mailman id 517288;
 Sun, 02 Apr 2023 23:15:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wpuV=7Z=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pj6vW-00081r-V8
 for xen-devel@lists.xenproject.org; Sun, 02 Apr 2023 23:15:59 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 509a565c-d1ac-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 01:15:54 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0A0C660ADB;
 Sun,  2 Apr 2023 23:15:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2C052C433EF;
 Sun,  2 Apr 2023 23:15:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 509a565c-d1ac-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680477352;
	bh=y9VtgQP6ZdJ/r0jZwGmf39KrAbBONFdJ6pJJcZhU74A=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=a2MFJ3H6Gn1tjnpI5LBMv2hkvh5b6kihL2UQ0j9jVZfNOOqgunxgrbtWuoTFhghga
	 XIT1v0csT9NcPZgvAxJsbdax32G8BMUZN/W9vrkfJs5GVHDf5e+BJCxNOjxKEcH0f8
	 LGM5nHo7YHx264blzlCuAct1O/h1TyB12N40PQS3v2LDE6CuFa3HGArveTV4acpX65
	 /nkwDz3zxB5xqiGn7IwIc2uk1gPRW5nJfG1fZDPraEHYhHA1oxognGQ5OJ9DF+8nv0
	 O0CK9zgxZDwsST2fWJCJpdHUlauFpQGke5SGYqZbAHLjdbqCJNy1n7jcvIPxRDNYka
	 jW/nrh3oxGQIA==
Date: Sun, 2 Apr 2023 16:15:49 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>, 
    xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Gianluca Guida <gianluca@rivosinc.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v9 4/5] xen/arm: switch ARM to use generic implementation
 of bug.h
In-Reply-To: <3a94ad32-159d-d06f-cba6-3bb40f9f2085@xen.org>
Message-ID: <alpine.DEB.2.22.394.2304021557140.4566@ubuntu-linux-20-04-desktop>
References: <cover.1680086655.git.oleksii.kurochko@gmail.com> <8fdb98350ae4fc6029738d0aabe13a57e1945a50.1680086655.git.oleksii.kurochko@gmail.com> <3a94ad32-159d-d06f-cba6-3bb40f9f2085@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 31 Mar 2023, Julien Grall wrote:
> Hi Oleksii,
> 
> I was going to ack the patch but then I spotted something that would want some
> clarification.
> 
> On 29/03/2023 11:50, Oleksii Kurochko wrote:
> > diff --git a/xen/arch/arm/include/asm/bug.h b/xen/arch/arm/include/asm/bug.h
> > index cacaf014ab..3fb0471a9b 100644
> > --- a/xen/arch/arm/include/asm/bug.h
> > +++ b/xen/arch/arm/include/asm/bug.h
> > @@ -1,6 +1,24 @@
> >   #ifndef __ARM_BUG_H__
> >   #define __ARM_BUG_H__
> >   +/*
> > + * Please do not include in the header any header that might
> > + * use BUG/ASSERT/etc macros, as they will be defined later, after
> > + * the return to <xen/bug.h> from the current header:
> > + *
> > + * <xen/bug.h>:
> > + *  ...
> > + *   <asm/bug.h>:
> > + *     ...
> > + *     <any_header_which_uses_BUG/ASSERT/etc macros.h>
> > + *     ...
> > + *  ...
> > + *  #define BUG() ...
> > + *  ...
> > + *  #define ASSERT() ...
> > + *  ...
> > + */
> > +
> >   #include <xen/types.h>
> >     #if defined(CONFIG_ARM_32)
> > @@ -11,76 +29,7 @@
> >   # error "unknown ARM variant"
> >   #endif
> >   -#define BUG_FRAME_STRUCT
> > -
> > -struct bug_frame {
> > -    signed int loc_disp;    /* Relative address to the bug address */
> > -    signed int file_disp;   /* Relative address to the filename */
> > -    signed int msg_disp;    /* Relative address to the predicate (for
> > ASSERT) */
> > -    uint16_t line;          /* Line number */
> > -    uint32_t pad0:16;       /* Padding for 8-bytes align */
> > -};
> > -
> > -#define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
> > -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
> > -#define bug_line(b) ((b)->line)
> > -#define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
> > -
> > -/* Many versions of GCC doesn't support the asm %c parameter which would
> > - * be preferable to this unpleasantness. We use mergeable string
> > - * sections to avoid multiple copies of the string appearing in the
> > - * Xen image. BUGFRAME_run_fn needs to be handled separately.
> > - */
> 
> Given this comment ...
> 
> > -#define BUG_FRAME(type, line, file, has_msg, msg) do {
> > \
> > -    BUILD_BUG_ON((line) >> 16);
> > \
> > -    BUILD_BUG_ON((type) >= BUGFRAME_NR);
> > \
> > -    asm ("1:"BUG_INSTR"\n"
> > \
> > -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"
> > \
> > -         "2:\t.asciz " __stringify(file) "\n"
> > \
> > -         "3:\n"
> > \
> > -         ".if " #has_msg "\n"
> > \
> > -         "\t.asciz " #msg "\n"
> > \
> > -         ".endif\n"
> > \
> > -         ".popsection\n"
> > \
> > -         ".pushsection .bug_frames." __stringify(type) ", \"a\",
> > %progbits\n"\
> > -         "4:\n"
> > \
> > -         ".p2align 2\n"
> > \
> > -         ".long (1b - 4b)\n"
> > \
> > -         ".long (2b - 4b)\n"
> > \
> > -         ".long (3b - 4b)\n"
> > \
> > -         ".hword " __stringify(line) ", 0\n"
> > \
> > -         ".popsection");
> > \
> > -} while (0)
> > -
> > -/*
> > - * GCC will not allow to use "i"  when PIE is enabled (Xen doesn't set the
> > - * flag but instead rely on the default value from the compiler). So the
> > - * easiest way to implement run_in_exception_handler() is to pass the to
> > - * be called function in a fixed register.
> > - */
> > -#define  run_in_exception_handler(fn) do {
> > \
> > -    asm ("mov " __stringify(BUG_FN_REG) ", %0\n"
> > \
> > -         "1:"BUG_INSTR"\n"
> > \
> > -         ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn) ","
> > \
> > -         "             \"a\", %%progbits\n"
> > \
> > -         "2:\n"
> > \
> > -         ".p2align 2\n"
> > \
> > -         ".long (1b - 2b)\n"
> > \
> > -         ".long 0, 0, 0\n"
> > \
> > -         ".popsection" :: "r" (fn) : __stringify(BUG_FN_REG) );
> > \
> > -} while (0)
> > -
> > -#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 0, "")
> > -
> > -#define BUG() do {                                              \
> > -    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 0, "");        \
> > -    unreachable();                                              \
> > -} while (0)
> > -
> > -#define assert_failed(msg) do {                                 \
> > -    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, msg);     \
> > -    unreachable();                                              \
> > -} while (0)
> > +#define BUG_ASM_CONST   "c"
> 
> ... you should explain in the commit message why this is needed and the
> problem described above is not a problem anymore.
> 
> For instance, I managed to build it without 'c' on arm64 [1]. But it does
> break on arm32 [2]. I know that Arm is also where '%c' was an issue.
> 
> Skimming through linux, the reason seems to be that GCC may add '#' when it
> should not. That said, I haven't looked at the impact on the generic
> implementation. Nor have I looked at which versions may be affected (the
> original message was from 2011).
> 
> However, without an explanation, I am afraid this can't go in because I am
> worried we may break some users (thankfully that might just be a compilation
> issue rather than weird behavior).
> 
> Bertrand, Stefano, do you know if this is still an issue?

I don't know, but I confirm your observation.

In my system, both ARM64 and ARM32 compile without problems with "c".
Without "c", only ARM64 compiles without problems, while ARM32 breaks.

My ARM32 compiler is:
arm-linux-gnueabihf-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0

Adding a meaningful explanation to the commit message might be
difficult in this case.

Maybe instead we could run a few tests with different versions of arm64
and arm32 gcc to check that everything still works? If everything checks
out, given that the issue has been unchanged for 10+ years we could just
keep "c" and move forward with it?


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 01:28:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 01:28:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517294.802412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pj8zJ-0002p2-Me; Mon, 03 Apr 2023 01:28:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517294.802412; Mon, 03 Apr 2023 01:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pj8zJ-0002ov-Jt; Mon, 03 Apr 2023 01:28:01 +0000
Received: by outflank-mailman (input) for mailman id 517294;
 Mon, 03 Apr 2023 01:28:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PW/x=72=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pj8zI-0002op-5S
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 01:28:00 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0621.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::621])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c30196a8-d1be-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 03:27:57 +0200 (CEST)
Received: from AM6PR01CA0047.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::24) by AS2PR08MB10296.eurprd08.prod.outlook.com
 (2603:10a6:20b:648::5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.29; Mon, 3 Apr
 2023 01:27:54 +0000
Received: from AM7EUR03FT036.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::b1) by AM6PR01CA0047.outlook.office365.com
 (2603:10a6:20b:e0::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.22 via Frontend
 Transport; Mon, 3 Apr 2023 01:27:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT036.mail.protection.outlook.com (100.127.140.93) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6277.16 via Frontend Transport; Mon, 3 Apr 2023 01:27:54 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Mon, 03 Apr 2023 01:27:53 +0000
Received: from 1a285a08504b.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 12AA0D9A-322B-424F-AD6A-F43BE9627906.1; 
 Mon, 03 Apr 2023 01:27:48 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1a285a08504b.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 03 Apr 2023 01:27:48 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by GVXPR08MB7798.eurprd08.prod.outlook.com (2603:10a6:150:4::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 01:27:44 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6254.029; Mon, 3 Apr 2023
 01:27:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c30196a8-d1be-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/xI7pG/+thWbOKdjBTxM27elUXU8IQePp3oEeA3Z1fQ=;
 b=UtDDT2pVj/+MT0Pk/1dqk5JZdWSJ2b4hPn9K6xyQz4PZYsIbl+EtRJG+KZs7OgqzrqjIPLFhXkFNXfC9f8egS8NaAPYxe928EC8lmi0iEitl7h3txhkrAdBMKu6j23aJvA1t1qqnHt++h1inIDxxqyarU+bu94bV68hWxS3s1iY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SJy9kGR8eXSoorFf8+qhdJlRjF8EvRRrL2XGFQio3XiFgGgS5tXzAjwIYGxqYYax12yQH3enSAAvfvaQv+u96ybCNdtHZt4vnLvV6CD1sM+atyfLYfDY/sooY0ha8/bynrMnW+EPQg7Z0/0rdoxaLHsI82KhxmyCIFgYpirCxsAKAfGOsWsCchhLFCRUszRWDnAITFi9sTHedG3VlNz7ziVk1IvlLzZE5Rkr2cXuXFrlMRcwmaw1Llc0pCZwenJGXB/7zLMQirEPGzjNdLVlprsjtammD5qQ1XUcZCjb9+wJmPhbFCWpzo4bbz19HCBjGjyVpg9nA9fR+YvcR5fIPw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/xI7pG/+thWbOKdjBTxM27elUXU8IQePp3oEeA3Z1fQ=;
 b=OQtLOSbyDMwDxiueCrf+PF/+i1o93IdItpg2MKKzgAJ5+9TI4Xjl8aWRsLxN2ytmpQ8eqYV8F50hRkMylpzsYfkWHCwIx7BpWa0lJIbFNnTweJ8KCpbxq4iOdZHRqHcq/4Cg3GhHWxTt3F7hFZCK5hD12Ebf+9v/uBDw8Bx5QDk2771hhzts0Fg2NPJglXj7yQKDfmQoq2BwyPwp8+exaTk1i/KqCNEI3WZiThp2ML3oNvDScJIcCI+tIX4VB2IE9vfTPN0AGsvXwr/BDeh8PdFpmebZiGDFZ5WQZA5ac4Q47F76368OMvipKAvLYrg4akGBQ150d6g0sft0ta+B2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/xI7pG/+thWbOKdjBTxM27elUXU8IQePp3oEeA3Z1fQ=;
 b=UtDDT2pVj/+MT0Pk/1dqk5JZdWSJ2b4hPn9K6xyQz4PZYsIbl+EtRJG+KZs7OgqzrqjIPLFhXkFNXfC9f8egS8NaAPYxe928EC8lmi0iEitl7h3txhkrAdBMKu6j23aJvA1t1qqnHt++h1inIDxxqyarU+bu94bV68hWxS3s1iY=
From: Henry Wang <Henry.Wang@arm.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "community.manager@xenproject.org" <community.manager@xenproject.org>,
	Julien Grall <julien@xen.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Anthony PERARD
	<anthony.perard@citrix.com>, George Dunlap <george.dunlap@cloud.com>, Juergen
 Gross <jgross@suse.com>, Wei Chen <Wei.Chen@arm.com>
Subject: Xen 4.18 release: Proposed release schedule
Thread-Topic: Xen 4.18 release: Proposed release schedule
Thread-Index: AdllyZg0WLVAnXgvTY+2GvS7dGkAyQ==
Date: Mon, 3 Apr 2023 01:27:42 +0000
Message-ID:
 <AS8PR08MB7991424A3167C70A9B29530C92929@AS8PR08MB7991.eurprd08.prod.outlook.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 572C6A4C7311C9449A271E8C1E3D6915.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|GVXPR08MB7798:EE_|AM7EUR03FT036:EE_|AS2PR08MB10296:EE_
X-MS-Office365-Filtering-Correlation-Id: a4762508-037e-494e-4f81-08db33e2a5f2
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 QfQatOycnpIGKtE3P+2SHbE8E6VbhOzrdGxin3WqgfOjCB/WkUUUgjP42gkT849kPCrHlcaKqe+yLBjGuS2ap4MSJ1RiJg/xBAPs2UlX/K7ybP6HRRdIbbPDM2XQgPO+XeSzMGPL/JPjJl0dwEpTqEXWIhQQ+jJChWRb6D0k/jGKD8OQpoUkspbKf+cADU0ISQ5AefVJ1/6/oK6cCpgqflNRRbehu24ELhzVdwUQAqyPOeS/4s6mFSqJcxTLi5t8gCjBxbTZdKeqCJZ60Q8lNqHY7lmot7RcD4/PxcOFiyvzyhnI8h7ZlEk/h+wGPUhA3lL55dT6fYkl1MpiaJDwxZrIJpkUTNQ/W6WQ1PMaL0wPBhVbpQM4OzkzKS2bbGF0tSx8wPWA3bJpHnL2I6upMTlFCksFCBQV7e6PZDfPmqmTTSZbpg7z6geFAj35KSKvlMZJQQNZtr++kTpvP36TS7ljOGMn7ZLuHWIJLp6SW6NRUlessZpS8mclfyGFri02HTTVflplVlyTY2/KeElq+0rkitdglUxS2CUrxtpLt3tznkyp6C6xGR+uBjucRSSTuQlEoOj6zmuYRARIKgYzifbYPwnk4VchMXXxoaYkkY752BC04A7M70rtBKCu1hCc
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(39860400002)(136003)(346002)(396003)(376002)(451199021)(5660300002)(8936002)(41300700001)(54906003)(52536014)(478600001)(2906002)(66446008)(64756008)(76116006)(7696005)(4326008)(316002)(71200400001)(7416002)(66946007)(66556008)(66476007)(8676002)(6916009)(9686003)(186003)(6506007)(26005)(122000001)(55016003)(83380400001)(33656002)(86362001)(38100700002)(38070700005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR08MB7798
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	10da1716-30c4-4707-fc51-08db33e29f01
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	usNsLZ3d2D+/KMoucZdM2VyDDJ3RboOvpupsjnpaX9nRK/SnhtfC7uUb5XObPi3ZhKwQhfUeGq/5ANlZ1+yBZgcubGWgEvadHM7xRPsc7tGa5Imea8NDvI/pI8nUW2o/Gy/GK+xLDR1negkCFCRVn5wcD1UGh3iY0BFxXU6s2LMRYUnSf5d6Hof6x6TDYK6BG+itkKB2qVPyjzr3YpTAz10cADrAdLJzgeiBabxgnrdYL2yCOpnFqs7EaVSmksNDbg6jQkd7voYCACwMGrifOD640bLkUMZECRE9FQ11sV7oiw4e1JFw49CIZ+WRvNVOqG0ewhJ9vP9GA28AWg8+yR8p+FS5nYCgAGyEv1UMwSczfGHJBl/ZPUTrzx4pAzITH1ev5orepKPAg66GfKuuWRAlS/S3lEYhoAsO0Om6q/3V/mUtWXujVzRJtdRDHnqZmPCdo2JDpo6LOCVUEgiI1vvMIaxvlySiqsa63QzzKIQTIrPCjNVshQkH1BCrTF7rLEdust98Yp50zQj5eDeBMQJqdmv1r2kfgjrF/0HNTkeSOTnsQHRtqjdbR0VTXhhYFxR9vUPOhjY5A6/tt+L9O6jO5sxaxQJMurkzwXsiFrE6E14ZDlWI883FTsch7EloKWHg0qO7FfiNVFAuyalzT4Eh2XCkW8Pgs7NhkWimYUs=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(346002)(136003)(396003)(451199021)(36840700001)(46966006)(86362001)(82310400005)(2906002)(33656002)(55016003)(40480700001)(7696005)(83380400001)(336012)(186003)(47076005)(26005)(6506007)(9686003)(4326008)(70586007)(8676002)(36860700001)(478600001)(70206006)(81166007)(6916009)(41300700001)(5660300002)(82740400003)(52536014)(356005)(54906003)(316002)(8936002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 01:27:54.2178
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a4762508-037e-494e-4f81-08db33e2a5f2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB10296

Hi everyone,

Following the 8-month release cycle, and taking into account the Xen
Summit and the summer holidays, below is the proposed release schedule
that I came up with for Xen 4.18. Please don't hesitate to provide
your feedback. Thanks!

** Proposed option: Wed Aug 2, 2023 **
(+8 months from Xen 4.17 release)

- Last posting date          Fri May 12, 2023 (+6 weeks from now)

Patches adding new features are expected to be posted to the mailing
list by this date, although perhaps not in their final version.

- Feature freeze             Fri Jun 2, 2023 (+3 weeks from Last posting date)

Patches adding new features should be committed by this date.
Straightforward bugfixes may continue to be accepted by maintainers.

- Code freeze                Fri Jun 23, 2023 (+3 weeks from Feature freeze)

Bugfixes only.

(Note that Xen Summit is Jun 24 - 26, 2023)

- Hard code freeze           Fri Jul 14, 2023 (+3 weeks from Code freeze)

Bugfixes for serious bugs (including regressions), and low-risk fixes only.

- Final commits              Fri Jul 28, 2023 (+2 weeks from Hard code freeze)

Branch off staging-4.18.

- Release                    Wed Aug 2, 2023

Kind regards,
Henry



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 02:24:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 02:24:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517297.802423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pj9rg-00012J-Tg; Mon, 03 Apr 2023 02:24:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517297.802423; Mon, 03 Apr 2023 02:24:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pj9rg-00012A-Ll; Mon, 03 Apr 2023 02:24:12 +0000
Received: by outflank-mailman (input) for mailman id 517297;
 Mon, 03 Apr 2023 02:24:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pj9rf-000120-Vk; Mon, 03 Apr 2023 02:24:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pj9rf-0000XH-TW; Mon, 03 Apr 2023 02:24:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pj9rf-0006si-C7; Mon, 03 Apr 2023 02:24:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pj9rf-0003bU-Be; Mon, 03 Apr 2023 02:24:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1jGnIj+2z8g+2HncjObUUrmONjjD3Lp7FuLc/1bwoFM=; b=sjjEawTUjrwTHqxuPjcB+lECfl
	jUGkl3o2BdP25n0HwNoHaxniSdliTdsfmtRDcE2i78cnpTjMD9+c3wP3oNh99tTlQHkiRCuMYxEQu
	QjXwkd3uG1kWZEKQrFSsQgl+D8bFbH3i+SyVsU9BS7cz0DsUPFqSjdMduOjrBQhLnpJY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180113-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180113: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=6ab608fe852b50fe809b22cdf7db6cbe006d7cb3
X-Osstest-Versions-That:
    linux=00c7b5f4ddc5b346df62b757ec73f9357bb452af
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 Apr 2023 02:24:11 +0000

flight 180113 linux-linus real [real]
flight 180114 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180113/
http://logs.test-lab.xenproject.org/osstest/logs/180114/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180114-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180107
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180107
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180107
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180107
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180107
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                6ab608fe852b50fe809b22cdf7db6cbe006d7cb3
baseline version:
 linux                00c7b5f4ddc5b346df62b757ec73f9357bb452af

Last test of basis   180107  2023-04-01 22:11:37 Z    1 days
Testing same since   180113  2023-04-02 18:13:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anand Jain <anand.jain@oracle.com>
  Benjamin Gray <bgray@linux.ibm.com>
  Carlos Bilbao <carlos.bilbao@amd.com>
  David Disseldorp <ddiss@suse.de>
  David Sterba <dsterba@suse.com>
  Federico Vaga <federico.vaga@vaga.pv.it>
  Filipe Manana <fdmanana@suse.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Haren Myneni <haren@linux.ibm.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jens Axboe <axboe@kernel.dk>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael Ellerman <mpe@ellerman.id.au>
  Paulo Alcantara (SUSE) <pc@manguebit.com>
  Paulo Alcantara <pc@manguebit.com>
  Steve French <stfrench@microsoft.com>
  Vegard Nossum <vegard.nossum@oracle.com>
  Yicong Yang <yangyicong@hisilicon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   00c7b5f4ddc5..6ab608fe852b  6ab608fe852b50fe809b22cdf7db6cbe006d7cb3 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 04:22:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 04:22:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517303.802433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjBhq-0004wZ-1N; Mon, 03 Apr 2023 04:22:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517303.802433; Mon, 03 Apr 2023 04:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjBhp-0004wS-UO; Mon, 03 Apr 2023 04:22:09 +0000
Received: by outflank-mailman (input) for mailman id 517303;
 Mon, 03 Apr 2023 04:22:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dhnu=72=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pjBho-0004wL-Vv
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 04:22:09 +0000
Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com
 [66.111.4.27]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 14bb3ad6-d1d7-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 06:22:05 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 27AE95C0092;
 Mon,  3 Apr 2023 00:22:01 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute4.internal (MEProxy); Mon, 03 Apr 2023 00:22:01 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 3 Apr 2023 00:21:59 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14bb3ad6-d1d7-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm2; t=
	1680495721; x=1680582121; bh=1gwzkrabqfP4SsGBsML6M961/r3JQJiJG2e
	NmXnX7VU=; b=nrtW0pt1TMXD9x28v4nCVj/tJTS5U0fl0q7fBuWrt/lvz/zXPHI
	2XkCaHMYhO0Wq7KPy67u8uTsOxiyFp9nO8WYsvx88ADhzAXQg5CahaEFaiP7M2Kd
	6Hc0gXsKSAwq2QKKiSm/AI+big7PYsnsMnpCh5ak0C92NqhtDhmZBve2MQ5l0XZ/
	pOFVvFw1z0/2hKXyABT9m2d73dFALCIdKwWkXmKVZhu9oPWghHJ3kUODuj74fG55
	fqY0UcuF5TI9wHhdNXi92SZA8EhkLKQiQOFjqshw8gjGCqFzyLEX/ufTJT+bGRC3
	ioyNkVbWu1VCaIAvOwjjM5XrAneAMaGWrhA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm2; t=1680495721; x=1680582121; bh=1gwzkrabqfP4S
	sGBsML6M961/r3JQJiJG2eNmXnX7VU=; b=aV6O5NGj48VocE/YyaJ2CwDB8/bcr
	Wy6SkEaJXXiqCGWrcyANGHNPgh/ZQgTTqAGe1vC8Fwdu0aPOrcX5L5YYsDiafBRz
	rStVAdUMgAUxI4dRBU4DiQXkWhMT2pwYWcX8QPjuEAVuD4AcnxCXmIQ9S1YGQwKk
	gU5/PSioO4mmDN01CgX0hA8qqlY+aWWtrbe0P2mDtzKWt0JfN4XqoZyeH1LOUj5N
	4bCo9w7UUtgrg1rD5HkeKj+5mcjB92hDmZlZB+f5cFxqGFWvCKdujBWGIVpUrl/O
	VeiCv8YLAFpeXRLiUrBOuvFCP1wZqmjhuqHO5Kn2Nzo3RuvSD2rJACgJw==
X-ME-Sender: <xms:aFQqZPP7CdbjUKNyZMCUpJ3P39BryApmWK0yJi5XILeJFSISyyUtdg>
    <xme:aFQqZJ-0lmBCaJvgzlVjgIJFvYHb_Ipv5xF1cTO2yCwAYmPvw0bkfW-TdE52z3HuL
    IBIGhP1bxbULg>
X-ME-Received: <xmr:aFQqZOQZZuIcgcE6_8c4emub8xYpbbs2jPq8AauTklSPJohudewAP_-KbaVfwgqwaMAXBL4sbSBTNX8oo8rCbaOECojh9YRY4xg>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrvdeiiedgkeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:aFQqZDtKz_vrM44o4LRie6U05VUnfMVJCoHLuhB3tO3IXTP2lzlqnw>
    <xmx:aFQqZHdhm0ihGetD5z3CMjcOg4Z-ly1N4VMdtDFuCk5ngqX3flDTyA>
    <xmx:aFQqZP0FVBDaSwtIUYtWH5-PPgZjysFPe0i2gIl2zNc8hDNwpnp5ow>
    <xmx:aVQqZD5OxjCeuoTl4mEczbpDKaWDMGn5gHWYj2ciJFkAuEK-1MVnYg>
Feedback-ID: i1568416f:Fastmail
Date: Mon, 3 Apr 2023 06:21:56 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Jason Andryuk <jandryuk@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH v2 2/3] x86/hvm: Allow writes to registers on the same
 page as MSI-X table
Message-ID: <ZCpUZI6HmzYKDhAz@mail-itl>
References: <20230325024924.882883-1-marmarek@invisiblethingslab.com>
 <20230325024924.882883-2-marmarek@invisiblethingslab.com>
 <ZCLNQGXvUBxZbIGS@Air-de-Roger>
 <ZCLX1qD/FmbF5ulu@mail-itl>
 <540906f7-4543-9d01-2b2b-a3bd70eda74b@suse.com>
 <ZCLjGhbzGD2jykT9@mail-itl>
 <9eb7b538-4074-4b15-4ea2-67d9cc0bf85d@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Sx4KoBD4ntkktgRN"
Content-Disposition: inline
In-Reply-To: <9eb7b538-4074-4b15-4ea2-67d9cc0bf85d@suse.com>


--Sx4KoBD4ntkktgRN
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 3 Apr 2023 06:21:56 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Jason Andryuk <jandryuk@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH v2 2/3] x86/hvm: Allow writes to registers on the same
 page as MSI-X table

On Tue, Mar 28, 2023 at 03:03:17PM +0200, Jan Beulich wrote:
> On 28.03.2023 14:52, Marek Marczykowski-Górecki wrote:
> > On Tue, Mar 28, 2023 at 02:34:23PM +0200, Jan Beulich wrote:
> >> On 28.03.2023 14:05, Marek Marczykowski-Górecki wrote:
> >>> On Tue, Mar 28, 2023 at 01:28:44PM +0200, Roger Pau Monné wrote:
> >>>> On Sat, Mar 25, 2023 at 03:49:23AM +0100, Marek Marczykowski-Górecki wrote:
> >>>>> +static bool cf_check msixtbl_page_accept(
> >>>>> +        const struct hvm_io_handler *handler, const ioreq_t *r)
> >>>>> +{
> >>>>> +    ASSERT(r->type == IOREQ_TYPE_COPY);
> >>>>> +
> >>>>> +    return msixtbl_page_handler_get_hwaddr(
> >>>>> +            current->domain, r->addr, r->dir == IOREQ_WRITE);
> >>>>
> >>>> I think you want to accept it also if it's a write to the PBA, and
> >>>> just drop it.  You should always pass write=false and then drop it in
> >>>> msixtbl_page_write() if it falls in the PBA region (but still return
> >>>> X86EMUL_OKAY).
> >>>
> >>> I don't want to interfere with msixtbl_mmio_page_ops, this handler is
> >>> only about accesses not hitting actual MSI-X structures.
> >>
> >> In his functionally similar vPCI change I did ask Roger to handle the
> >> "extra" space right from the same handlers. Maybe that's going to be
> >> best here, too.
> >
> > I have considered this option, but msixtbl_range() is already quite
> > complex, adding yet another case there won't make it easier to follow.
>
> Do you care about the case of msixtbl_addr_to_desc() returning NULL at
> all for the purpose you have?

IIUC I care specifically about this case.

> Like in Roger's patch I'd assume
> msixtbl_find_entry() needs extending what ranges it accepts; if need
> be another parameter may be added to cover cases where the extended
> coverage isn't wanted.
>
> > I mean, technically I can probably merge those two handlers together,
> > but I don't think it will result in nicer code. Especially since the
> > general direction is to abandon split of MSI-X table access handling
> > between Xen and QEMU and go with just QEMU doing it, hopefully at some
> > point not needing msixtbl_mmio_ops anymore (but still needing the one
> > for adjacent accesses).
>
> Hmm, at this point I'm not convinced of this plan. Instead I was hoping
> that once vPCI properly supports PVH DomU-s, we may also be able to make
> use of it for HVM, delegating less to qemu rather than more.

In that case, this code won't be needed anymore, which will also make
this handler unnecessary.

Anyway, I tried to merge this handling into existing handlers and the
resulting patch is slightly bigger, so it doesn't seem to avoid any
duplication. The only benefit I can think of is avoiding iterating
msixtbl_list twice (for respective accept callbacks) on each access. Is
it worth a bit more complicated handlers?

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--Sx4KoBD4ntkktgRN
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmQqVGQACgkQ24/THMrX
1ywJRAf/XIkiO20k0CXZsGxJ23AOQC91NVznTfOfXKg0aH+JjhwdUVKFQ2MCPQPF
sC2oziwPlxHDDGg6i/dY3Il63mRNYTiDZ9t/i1UF1lwWp3IVRv90fTUyeNnjtjYP
VCeYi5Ht1TE0wYN19MHb7A+6rvG33ZRFGiWxMF/IKMYR+uA3FwH6UYhTTRk5RflA
hujDod7MiRgRCrD3YDBH8bffDxj+Atq/eFaySaQ+WbbaRD0HegT0h83nHfWkvvTz
xjlr7lK38vTwr+6zVuRur0EE0IEl+dJO67UogTo0H1uf8DWJeY7QelHh2swV5YvX
OQVbWBoerAdPBoBA5ILi/SWWEfozKw==
=IgLb
-----END PGP SIGNATURE-----

--Sx4KoBD4ntkktgRN--


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 07:41:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 07:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517309.802447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEot-0000WN-WA; Mon, 03 Apr 2023 07:41:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517309.802447; Mon, 03 Apr 2023 07:41:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEot-0000V4-So; Mon, 03 Apr 2023 07:41:39 +0000
Received: by outflank-mailman (input) for mailman id 517309;
 Mon, 03 Apr 2023 07:41:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qzQh=72=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pjEos-0000Sg-9M
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 07:41:38 +0000
Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com
 [2a00:1450:4864:20::334])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f4e4850c-d1f2-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 09:41:35 +0200 (CEST)
Received: by mail-wm1-x334.google.com with SMTP id p34so16484258wms.3
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 00:41:34 -0700 (PDT)
Received: from Provence.localdomain
 (dynamic-078-055-162-106.78.55.pool.telefonica.de. [78.55.162.106])
 by smtp.gmail.com with ESMTPSA id
 s11-20020a5d424b000000b002e5f6f8fc4fsm8414960wrr.100.2023.04.03.00.41.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 Apr 2023 00:41:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4e4850c-d1f2-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680507693;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=hqktkp0iv4ik+R4dzmjAFeY0JZolWYG3VGRe4IrvPbY=;
        b=LVlZ2cuXFFwexXKoLI9ZG5QLEbGwOeE1vppWSymN0IQlwF89mbzooulggZ9QEfulr8
         PcmbIyQvJCwRAHRXGOCyARG18K25hcG1Z79E7hTODBbHIK5gRp8lt+RZ/NW7FpVGFaXg
         UiyqH4AVBIN7OzYQiYUpLFf0VmO/k9PzcY6G0ppf9uuG6chbW5qZDutq4dzRlzjqDcdl
         kBLiKAIa+gs0ikEQx09bluaV4HWsb28uN9XufuY9U6tt5vV79zff5a7CVVxoUUffTFnZ
         gomowXcFGaVrFrZduuwRYSaVlDcVSGOGAgvm8M/QM0eOxop+1AsCKWsROZrw7JaX9d3K
         eKWA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680507693;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=hqktkp0iv4ik+R4dzmjAFeY0JZolWYG3VGRe4IrvPbY=;
        b=vrYHaRJ77eeVFnTy8dRZcjTnyvTC8iMW7LmhO+3LOUgLuVVdQcqGOnblLz7iN6WPde
         zEGaYmwdzRJJZjTE1FUBTFoCctw1wd1sG9SiqHdgdybPgHzN7DqZqlRjltaORpBAMxAK
         LFoVUOfDujp8BWuo+HRuXDNu28M3GvX8mzk2AJbzqL5Ch8hxlNFViZwFwU0LRGrXEruQ
         I4gjVp6NkF0vA2YhFx8xYS1HBnQu6greJzd7C5Q+msldCeHvHdhv4arTVQCwvLui1Cfu
         E8YZ//d8DVWDOD3TtSWPS34Pg3oNbJN6P8XGFA3y0E0da1QKEdxVDujgBiy0PSN+VOpW
         ooZg==
X-Gm-Message-State: AO0yUKX2Tl0QIY6URJ0kC2F+5ttB30r9j6EyLY+9KIIQrcpkhBO2wnCK
	EkBfQb7yS1D1PfjTv/t+EaM=
X-Google-Smtp-Source: AK7set+w4NV/Q17dNwmnKiNJ9TGNyLmYq7XLiiKjnzxMzffvXWPMfja/NHU/YbSqWzE9T9gy1AwaVA==
X-Received: by 2002:a05:600c:ad0:b0:3ed:abb9:7515 with SMTP id c16-20020a05600c0ad000b003edabb97515mr26505746wmr.11.1680507693506;
        Mon, 03 Apr 2023 00:41:33 -0700 (PDT)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v4 0/7] Resolve TYPE_PIIX3_XEN_DEVICE
Date: Mon,  3 Apr 2023 09:41:17 +0200
Message-Id: <20230403074124.3925-1-shentey@gmail.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There is currently a dedicated PIIX3 device model for use under Xen. By reusing
the existing PCI API during initialization, this device model can be eliminated
and the plain PIIX3 device model can be used instead.

Resolving TYPE_PIIX3_XEN_DEVICE results in less code while also making Xen
agnostic towards the precise south bridge being used in the PC machine. The
latter might become particularly interesting once PIIX4 becomes usable in the
PC machine, avoiding the "Frankenstein" use of PIIX4_ACPI in PIIX3.

Testing done:
- `make check`
- Run `xl create` with the following config:
    name = "Manjaro"
    type = 'hvm'
    memory = 1536
    apic = 1
    usb = 1
    disk = [ "file:manjaro-kde-21.2.6-220416-linux515.iso,hdc:cdrom,r" ]
    device_model_override = "/usr/bin/qemu-system-x86_64"
    vga = "stdvga"
    sdl = 1
- `qemu-system-x86_64 -M pc -m 2G -cpu host -accel kvm \
    -cdrom manjaro-kde-21.2.6-220416-linux515.iso`

v4:
- Add a patch fixing a latent memory leak in pci_bus_irqs() (Anthony)

v3:
- Rebase onto master

v2:
- xen_piix3_set_irq() is already generic. Just rename it. (Chuck)

Tested-by: Chuck Zmudzinski <brchuckz@aol.com>

Bernhard Beschow (7):
  include/hw/xen/xen: Rename xen_piix3_set_irq() to xen_intx_set_irq()
  hw/pci/pci.c: Don't leak PCIBus::irq_count[] in pci_bus_irqs()
  hw/isa/piix3: Reuse piix3_realize() in piix3_xen_realize()
  hw/isa/piix3: Wire up Xen PCI IRQ handling outside of PIIX3
  hw/isa/piix3: Avoid Xen-specific variant of piix3_write_config()
  hw/isa/piix3: Resolve redundant k->config_write assignments
  hw/isa/piix3: Resolve redundant TYPE_PIIX3_XEN_DEVICE

 include/hw/southbridge/piix.h |  1 -
 include/hw/xen/xen.h          |  2 +-
 hw/i386/pc_piix.c             | 36 +++++++++++++++++++--
 hw/i386/xen/xen-hvm.c         |  2 +-
 hw/isa/piix3.c                | 60 +----------------------------------
 hw/pci/pci.c                  |  2 ++
 stubs/xen-hw-stub.c           |  2 +-
 7 files changed, 39 insertions(+), 66 deletions(-)

-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 07:41:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 07:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517314.802503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEp0-000206-6x; Mon, 03 Apr 2023 07:41:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517314.802503; Mon, 03 Apr 2023 07:41:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEoz-0001zq-WC; Mon, 03 Apr 2023 07:41:45 +0000
Received: by outflank-mailman (input) for mailman id 517314;
 Mon, 03 Apr 2023 07:41:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qzQh=72=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pjEox-0000Sg-Vv
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 07:41:44 +0000
Received: from mail-wm1-x331.google.com (mail-wm1-x331.google.com
 [2a00:1450:4864:20::331])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f9d7953e-d1f2-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 09:41:42 +0200 (CEST)
Received: by mail-wm1-x331.google.com with SMTP id
 i5-20020a05600c354500b003edd24054e0so19144066wmq.4
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 00:41:42 -0700 (PDT)
Received: from Provence.localdomain
 (dynamic-078-055-162-106.78.55.pool.telefonica.de. [78.55.162.106])
 by smtp.gmail.com with ESMTPSA id
 s11-20020a5d424b000000b002e5f6f8fc4fsm8414960wrr.100.2023.04.03.00.41.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 Apr 2023 00:41:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9d7953e-d1f2-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680507702;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=xfruqCTvdLb6qQjpwkCpa8vgF1IvHI8+CFma9+fgJv8=;
        b=EAQ6n7WWQXPUjZaBzzBlO5S8qeAHPTlk8U7zS3dcf+8IIdkYRF+t76elBK5JMC6VvR
         MlevLDBVnopzGlcluD5E3paMkO9gGmS9AiWXwbBxrmfV6aY677J1hA7sEoOMZsusRhCO
         WtNuC4TBrqU9huTBhT4qaoJ3hKFDLVLwR3Q492RbdYmNN96sEoQRc05zQjYNBPUaYgFS
         wlTsF3d6qB4+ivcvqD/tRWrRuT7ASZ49jVudbsUe4HaCkTh0Jf88+s+oA9GGmu4u4jN4
         MYC4GsrKHzZxdgEsprYEQvYQ9+pe1/Pfl0wcjSFYsZgQhf3t0h8B31q//5p9hTJOJ1Gu
         dYlw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680507702;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=xfruqCTvdLb6qQjpwkCpa8vgF1IvHI8+CFma9+fgJv8=;
        b=Enkqb0ZqCfehwenYG9frKYj8zkkeBn40frY8jQ9i+tFNFIziC3p1O7KqWkAYCSC2Wt
         DLAfN403eElT8qoLHR97cSZDeZiCFTO794t0jZg/P0IBl6/U5oWlGafWiI7qs0MghWBL
         l6jvVNvMBr+UhLs3hX7VSfs/5KGsekTXahH2+R9QI12QmHrWBa99P/TgbEwRhGl0t94D
         RNyz3YInT4pTBqv9K3uyrAixugtu7JNJUpBg6hrWI0DBpAthCBvvcuyN4EUV9rLnO+9X
         QO/S6jWk7TRerZbug6725JYJ7AINZq106tnIDYcFl464N4h5mX/sb2/35ilSTpX3At0Y
         tfmg==
X-Gm-Message-State: AO0yUKX1lAKtH8emrffrpDfAf4dTi/U7FY3N/EaGeH0COB39+XX2iEWx
	M97HPHRG4wNTFIRE6+wrwXs=
X-Google-Smtp-Source: AK7set9rrvyB0aTyq/IlC7IjQSr4MJ3b4hfbs5XNaEHQCEylCJMqyaqqp21kfc6RgszwCmJPmkED8Q==
X-Received: by 2002:a1c:770c:0:b0:3ed:454d:36ab with SMTP id t12-20020a1c770c000000b003ed454d36abmr25688857wmi.16.1680507701818;
        Mon, 03 Apr 2023 00:41:41 -0700 (PDT)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v4 6/7] hw/isa/piix3: Resolve redundant k->config_write assignments
Date: Mon,  3 Apr 2023 09:41:23 +0200
Message-Id: <20230403074124.3925-7-shentey@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230403074124.3925-1-shentey@gmail.com>
References: <20230403074124.3925-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The previous patch unified the handling of piix3_write_config() across the
PIIX3 device models, which allows assigning k->config_write once in the
base class.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
Message-Id: <20230312120221.99183-6-shentey@gmail.com>
---
 hw/isa/piix3.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
index 737f5c6a5d..418940139d 100644
--- a/hw/isa/piix3.c
+++ b/hw/isa/piix3.c
@@ -308,6 +308,7 @@ static void pci_piix3_class_init(ObjectClass *klass, void *data)
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
     AcpiDevAmlIfClass *adevc = ACPI_DEV_AML_IF_CLASS(klass);
 
+    k->config_write = piix3_write_config;
     dc->reset       = piix3_reset;
     dc->desc        = "ISA bridge";
     dc->vmsd        = &vmstate_piix3;
@@ -356,7 +357,6 @@ static void piix3_class_init(ObjectClass *klass, void *data)
 {
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
-    k->config_write = piix3_write_config;
     k->realize = piix3_realize;
 }
 
@@ -370,7 +370,6 @@ static void piix3_xen_class_init(ObjectClass *klass, void *data)
 {
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
-    k->config_write = piix3_write_config;
     k->realize = piix3_realize;
 }
 
-- 
2.40.0
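
The class hierarchy change above relies on how QOM initializes classes: the
parent's class_init runs before the subclass's, so a function pointer set once
in the base class is inherited unless a subclass overrides it. Below is a
minimal, self-contained sketch of that pattern; the `Class`, `base_class_init`
and `make_derived` names are illustrative stand-ins, not QEMU's actual QOM
machinery.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a QOM class struct holding virtual methods. */
typedef struct {
    void (*config_write)(void);
    void (*realize)(void);
} Class;

static void base_write(void) {}
static void derived_realize(void) {}

/* Base class_init sets config_write once, as pci_piix3_class_init now does. */
static void base_class_init(Class *k)
{
    k->config_write = base_write;
}

/* Subclass class_init only sets what it actually overrides. */
static void derived_class_init(Class *k)
{
    k->realize = derived_realize;
}

/* QOM-style instantiation: parent init runs first, then the subclass's. */
static Class make_derived(void)
{
    Class k = {0};
    base_class_init(&k);     /* inherited: config_write */
    derived_class_init(&k);  /* subclass leaves config_write untouched */
    return k;
}
```

This is why dropping the duplicate `k->config_write` assignments from the two
subclasses is behavior-preserving: both inherit the base class's assignment.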



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 07:41:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 07:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517310.802453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEou-0000bP-CC; Mon, 03 Apr 2023 07:41:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517310.802453; Mon, 03 Apr 2023 07:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEou-0000Xy-7F; Mon, 03 Apr 2023 07:41:40 +0000
Received: by outflank-mailman (input) for mailman id 517310;
 Mon, 03 Apr 2023 07:41:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qzQh=72=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pjEos-0000Sg-Jq
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 07:41:38 +0000
Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com
 [2a00:1450:4864:20::42c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f686a8af-d1f2-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 09:41:36 +0200 (CEST)
Received: by mail-wr1-x42c.google.com with SMTP id m2so28256759wrh.6
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 00:41:36 -0700 (PDT)
Received: from Provence.localdomain
 (dynamic-078-055-162-106.78.55.pool.telefonica.de. [78.55.162.106])
 by smtp.gmail.com with ESMTPSA id
 s11-20020a5d424b000000b002e5f6f8fc4fsm8414960wrr.100.2023.04.03.00.41.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 Apr 2023 00:41:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f686a8af-d1f2-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680507696;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=+2bxbqf6j4QPQJIwH9ivxbi0oLjhsXkO+rq7lp2Eh9U=;
        b=WC5szrsbsTKCY9H3NIUWRSbAOse9TsOnH1X01M7jJFEyCnqYih84GDL/KdBhBXgvfZ
         cxx71VQpcKF9zWdzMnSbkpc1hfesym5CJWKDjgWmFqmwEAZxtxhq8nOHsCyYPynIYrS9
         BaLfK4jsrhCeSuXHGsXYL7fNzlE5hnc0hfC+2XsB/zKpBEIWgD6sfzdiEyxE4zaNCnEv
         162tllbbB0sUjNuop5fu0nl/n19meZoUfNaKBn50XCNcWvdCal884QeT/q0SEepuboYT
         CVmM6NveJXfoFPYDUH3UGFCSnAI2AjW/EN4i3xu23N5hvy6np1jlESM9iiQdcFx39cPw
         K01w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680507696;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=+2bxbqf6j4QPQJIwH9ivxbi0oLjhsXkO+rq7lp2Eh9U=;
        b=LhOJ1lrSYnxj9MUFQKCgjRAj4wr02EKrwj0qKL4X0Q1KJ6nnICaPdUCcN83T8IibM5
         OC+Rw7sovmPOurnDKhNCYvMU4jGl7C1z6N+813c3+bREstJvF5T4cp+bgw9ox981fj9F
         SXkmRGdJpvr6F638m5Wh20ROF4bUF0uI/81/DWJsko4WOQtBbVoQUNJvUFBivV23gTE9
         ImdwRgEo08kqM7gi+PZ2lGDJNj1ugsutJ1PFgbENyHeGyam153ZwxgFwPcL3Jr4r/MnP
         8n+JBC9zZL2jZO73i8j/15Cl2yzn0+LljrmWM1Rm70tYL1LCC+MGkNimz4Suc7ctOtPN
         rMyw==
X-Gm-Message-State: AAQBX9e6sQsde7vVErMC/G6l2yhvuNy3gBcdboCWV/dCPuzxYhE7UP8k
	OBb4IvexUDRVN5Qg/BfLn5E=
X-Google-Smtp-Source: AKy350a8w/UjVGBXm/IvW1eiUEjt7WBOj3GvSpZIn/V9+4KAg1DkfQwNZGfsneF3GohAYNphgxwWGA==
X-Received: by 2002:a05:6000:ca:b0:2d6:5afe:7b91 with SMTP id q10-20020a05600000ca00b002d65afe7b91mr11692241wrx.30.1680507696091;
        Mon, 03 Apr 2023 00:41:36 -0700 (PDT)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v4 2/7] hw/pci/pci.c: Don't leak PCIBus::irq_count[] in pci_bus_irqs()
Date: Mon,  3 Apr 2023 09:41:19 +0200
Message-Id: <20230403074124.3925-3-shentey@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230403074124.3925-1-shentey@gmail.com>
References: <20230403074124.3925-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When pci_bus_irqs() is called multiple times on the same object without calling
pci_bus_irqs_cleanup() in between, PCIBus::irq_count[] is currently leaked.
Let's fix this because Xen will do just that in a few commits, and because
requiring a pci_bus_irqs_cleanup() call in between seems fragile and
cumbersome.

Note that pci_bus_irqs_cleanup() now has to set irq_count to NULL so that a
subsequent pci_bus_irqs() call doesn't cause a double free.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
---
 hw/pci/pci.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index def5000e7b..be1c5d16ec 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -558,6 +558,7 @@ void pci_bus_irqs(PCIBus *bus, pci_set_irq_fn set_irq,
     bus->set_irq = set_irq;
     bus->irq_opaque = irq_opaque;
     bus->nirq = nirq;
+    g_free(bus->irq_count);
     bus->irq_count = g_malloc0(nirq * sizeof(bus->irq_count[0]));
 }
 
@@ -573,6 +574,7 @@ void pci_bus_irqs_cleanup(PCIBus *bus)
     bus->irq_opaque = NULL;
     bus->nirq = 0;
     g_free(bus->irq_count);
+    bus->irq_count = NULL;
 }
 
 PCIBus *pci_register_root_bus(DeviceState *parent, const char *name,
-- 
2.40.0
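
The free-before-realloc plus NULL-on-cleanup pattern from the patch above can
be sketched outside QEMU as follows. This is a simplified illustration, not
QEMU code: `FakeBus`, `bus_irqs_init` and `bus_irqs_cleanup` are hypothetical
names, and plain `calloc`/`free` stand in for QEMU's `g_malloc0`/`g_free`.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for the PCIBus fields touched by the patch. */
typedef struct {
    int nirq;
    int *irq_count;
} FakeBus;

/* Mirrors the fixed pci_bus_irqs(): free any previous array before
 * allocating, so calling this repeatedly doesn't leak. free(NULL) is a
 * no-op, so the first call is safe on a zero-initialized bus. */
static void bus_irqs_init(FakeBus *bus, int nirq)
{
    bus->nirq = nirq;
    free(bus->irq_count);
    bus->irq_count = calloc(nirq, sizeof(bus->irq_count[0]));
}

/* Mirrors the fixed pci_bus_irqs_cleanup(): NULL the pointer so a later
 * bus_irqs_init() (or cleanup) doesn't free it twice. */
static void bus_irqs_cleanup(FakeBus *bus)
{
    bus->nirq = 0;
    free(bus->irq_count);
    bus->irq_count = NULL;
}
```

Without the `free()` in `bus_irqs_init()`, a second init call would overwrite
the pointer to the first array and leak it; without the `NULL` assignment in
cleanup, init-after-cleanup would free stale memory.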



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 07:41:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 07:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517315.802512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEp1-0002GR-L7; Mon, 03 Apr 2023 07:41:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517315.802512; Mon, 03 Apr 2023 07:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEp1-0002FC-DC; Mon, 03 Apr 2023 07:41:47 +0000
Received: by outflank-mailman (input) for mailman id 517315;
 Mon, 03 Apr 2023 07:41:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qzQh=72=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pjEoz-0000Sg-4z
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 07:41:45 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fa87ecf0-d1f2-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 09:41:43 +0200 (CEST)
Received: by mail-wr1-x429.google.com with SMTP id d17so28232998wrb.11
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 00:41:43 -0700 (PDT)
Received: from Provence.localdomain
 (dynamic-078-055-162-106.78.55.pool.telefonica.de. [78.55.162.106])
 by smtp.gmail.com with ESMTPSA id
 s11-20020a5d424b000000b002e5f6f8fc4fsm8414960wrr.100.2023.04.03.00.41.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 Apr 2023 00:41:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa87ecf0-d1f2-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680507703;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bFBFz6LEBXPSgQcOvP8yki+MQL0SlRe8vWuNcy1ofyc=;
        b=JnS3k4VnUoa2yBgFGDVDqacBd2DdaZDlI+9R3AWOFhe3KjznQtcT72nyHz3M0NNxPI
         BHk7+yVafEuJZsDmpI86KxWZdZSFBDFQejFdT++2yketIkY99aI1HrLv2dP9MnPjHQBs
         epvt+PyDpEF8mjOGr9ueoeGBlgS296BEv6sUEJ1Plv/gcSrfBpTUUOoN369ZxcDeBnJJ
         fC5SM5o/FYODXJeS0jB6XOoeGVOgSkOpHdlRV5MgT+SPRphnIdELNl11dGfm/A0PKLPN
         PjSXjmBBRUqR2fA/eTLELJx/ttMNy1M7gpFYGMCRRK1Cbtc4/GQjStDehV6+jgrR63Ry
         NCWA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680507703;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=bFBFz6LEBXPSgQcOvP8yki+MQL0SlRe8vWuNcy1ofyc=;
        b=UR1KvIKcPOl6dT4+2vN9pqbgNlDFTic4ZAxUwuA3FGCvBiPiBvmevhg1RfaVWlkJ+x
         8/DA8+1pGHG7AHdsTg5haT0GB9l0zaVkML0NksXGpwJECNWEM/56FuVNc3tAYZY+aJi6
         trc0TFIw3JNoOM4O0pQ2ME4JQJw+jgo226wR3LUlldYcP5fPPJaRtJYgMA0Lx+U+EeGd
         4nG8JiXLhXYomHd2wJ68GLlStQHroCgWmryKKi7LL7iTLe4kP+KvTEySaOW6rpRrO0YC
         ekIvMGz3nFFogtYeHP4IAdBbG/IRJXhrRVIdFiFAi3+ZBZT3v1TLypWiaTJ+ToadNm2m
         zP6A==
X-Gm-Message-State: AAQBX9caf9M+lsw9VWm9dKozHB/Zikroy5ktDGtOvN6muE2M3EXavdfp
	17S66CJH6ZIX6tn9BBc65fs=
X-Google-Smtp-Source: AKy350YQYzXtoXYV33vsug2IkHk8dVMjKWMW4Vgyul9fIIKt/0YfXdy2ovrIHW0LfhNXns7sxctNuA==
X-Received: by 2002:adf:fd51:0:b0:2ce:adda:f45a with SMTP id h17-20020adffd51000000b002ceaddaf45amr25195402wrs.62.1680507703148;
        Mon, 03 Apr 2023 00:41:43 -0700 (PDT)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v4 7/7] hw/isa/piix3: Resolve redundant TYPE_PIIX3_XEN_DEVICE
Date: Mon,  3 Apr 2023 09:41:24 +0200
Message-Id: <20230403074124.3925-8-shentey@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230403074124.3925-1-shentey@gmail.com>
References: <20230403074124.3925-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Over the course of the previous patches, TYPE_PIIX3_XEN_DEVICE has become a
clone of TYPE_PIIX3_DEVICE. Remove this redundancy.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
Message-Id: <20230312120221.99183-7-shentey@gmail.com>
---
 include/hw/southbridge/piix.h |  1 -
 hw/i386/pc_piix.c             |  5 ++---
 hw/isa/piix3.c                | 15 ---------------
 3 files changed, 2 insertions(+), 19 deletions(-)

diff --git a/include/hw/southbridge/piix.h b/include/hw/southbridge/piix.h
index 0bf48e936d..51be04e984 100644
--- a/include/hw/southbridge/piix.h
+++ b/include/hw/southbridge/piix.h
@@ -64,7 +64,6 @@ DECLARE_INSTANCE_CHECKER(PIIX3State, PIIX3_PCI_DEVICE,
                          TYPE_PIIX3_PCI_DEVICE)
 
 #define TYPE_PIIX3_DEVICE "PIIX3"
-#define TYPE_PIIX3_XEN_DEVICE "PIIX3-xen"
 #define TYPE_PIIX4_PCI_DEVICE "piix4-isa"
 
 #endif
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 1b70470dcd..7ca0d6d14e 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -237,8 +237,6 @@ static void pc_init1(MachineState *machine,
     if (pcmc->pci_enabled) {
         PIIX3State *piix3;
         PCIDevice *pci_dev;
-        const char *type = xen_enabled() ? TYPE_PIIX3_XEN_DEVICE
-                                         : TYPE_PIIX3_DEVICE;
 
         pci_bus = i440fx_init(pci_type,
                               i440fx_host,
@@ -251,7 +249,8 @@ static void pc_init1(MachineState *machine,
                                        : pc_pci_slot_get_pirq);
         pcms->bus = pci_bus;
 
-        pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
+        pci_dev = pci_create_simple_multifunction(pci_bus, -1, true,
+                                                  TYPE_PIIX3_DEVICE);
 
         if (xen_enabled()) {
             pci_device_set_intx_routing_notifier(
diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
index 418940139d..0d6992af67 100644
--- a/hw/isa/piix3.c
+++ b/hw/isa/piix3.c
@@ -29,7 +29,6 @@
 #include "hw/southbridge/piix.h"
 #include "hw/irq.h"
 #include "hw/isa/isa.h"
-#include "hw/xen/xen.h"
 #include "sysemu/runstate.h"
 #include "migration/vmstate.h"
 #include "hw/acpi/acpi_aml_interface.h"
@@ -366,24 +365,10 @@ static const TypeInfo piix3_info = {
     .class_init    = piix3_class_init,
 };
 
-static void piix3_xen_class_init(ObjectClass *klass, void *data)
-{
-    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
-
-    k->realize = piix3_realize;
-}
-
-static const TypeInfo piix3_xen_info = {
-    .name          = TYPE_PIIX3_XEN_DEVICE,
-    .parent        = TYPE_PIIX3_PCI_DEVICE,
-    .class_init    = piix3_xen_class_init,
-};
-
 static void piix3_register_types(void)
 {
     type_register_static(&piix3_pci_type_info);
     type_register_static(&piix3_info);
-    type_register_static(&piix3_xen_info);
 }
 
 type_init(piix3_register_types)
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 07:41:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 07:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517313.802489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEox-0001Vc-Qg; Mon, 03 Apr 2023 07:41:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517313.802489; Mon, 03 Apr 2023 07:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEox-0001Uw-Fe; Mon, 03 Apr 2023 07:41:43 +0000
Received: by outflank-mailman (input) for mailman id 517313;
 Mon, 03 Apr 2023 07:41:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qzQh=72=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pjEow-0000Sg-LN
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 07:41:42 +0000
Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com
 [2a00:1450:4864:20::434])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f900dbb0-d1f2-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 09:41:41 +0200 (CEST)
Received: by mail-wr1-x434.google.com with SMTP id q19so25188101wrc.5
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 00:41:41 -0700 (PDT)
Received: from Provence.localdomain
 (dynamic-078-055-162-106.78.55.pool.telefonica.de. [78.55.162.106])
 by smtp.gmail.com with ESMTPSA id
 s11-20020a5d424b000000b002e5f6f8fc4fsm8414960wrr.100.2023.04.03.00.41.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 Apr 2023 00:41:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f900dbb0-d1f2-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680507700;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/EuJujhhzp9QT+Va/F4M2q8YBrUEweb2g4rehnqe99A=;
        b=hcViKZ3qPuwgtA4SsDAiRVSOd64zBPnpHxw+0VBLBeB50VFc/rKXi7fhu7V71wdQ1h
         FxpfO/RXYDuxhNYh2ttxusHsPNk+t2vE9+H9K9bhKMFQmuSkwffp5fkmeD16xMkbxQQ3
         LXpNEKoDVtB45+OBM29Ey1FV8APapXHq6pMgasM66BMhL7Waih4O+UCiJoBRyIbd16zz
         253w1Lis8Qh0rRlo2FH1+2zh1X8aqANyAQo//WYFAC4MkQPKgVheB4UURa3XhtGzyOrn
         Ar8I+OmZhfK340m5fuvAT+mWzcInzHAYzg7cJgstXkUtstVizW6nuDwjuLuv0636VIOC
         vMaQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680507700;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=/EuJujhhzp9QT+Va/F4M2q8YBrUEweb2g4rehnqe99A=;
        b=6plzb2sA43gsEP/CVhZbUjoeZvud1nFOfz1DbYNc95xZQOw+BubRGk+v5Ou4sDwStb
         FAqIhkMt3BxAtbcybU4854I6+kHTDmBGWL0HcQtopakmt4f1b0n8DA5VF/MjOTzvTL8k
         YM5pX35klguz1oKeujsBaxwE9UkoOBOiNVLgwFlOQSdrc6qt7dppZq5tuSyMvXW48Vyd
         QsajlBU66Quqi2/ZvHLSVyq0j5kSnnz6oIcb0vsbkvpUsa3+mma0kPJ+pFvVyzBCtUE7
         9w2pUwFjrpY6QxoUtUgsaBxrL0LuRUY0/aIyHj2fEdmusQVieCqLxC7e0XIjpCcvDpCw
         P/UQ==
X-Gm-Message-State: AAQBX9fLrG0UrKej9kIIo9Yd4CJFt3RFLJVpHiqRmrKJ8cp24I75pwbw
	UxoFljOlS5XR9wKflhbVRoA=
X-Google-Smtp-Source: AKy350ZZP0vLfS334AXJ1QOOCUu1Iz6IfelFWZ0j6MBXF/9ON1OR+oP0rMSuxPQc4qp6H8kpAFlguw==
X-Received: by 2002:a5d:68cb:0:b0:2cf:fd6:b83f with SMTP id p11-20020a5d68cb000000b002cf0fd6b83fmr12153147wrw.8.1680507700653;
        Mon, 03 Apr 2023 00:41:40 -0700 (PDT)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v4 5/7] hw/isa/piix3: Avoid Xen-specific variant of piix3_write_config()
Date: Mon,  3 Apr 2023 09:41:22 +0200
Message-Id: <20230403074124.3925-6-shentey@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230403074124.3925-1-shentey@gmail.com>
References: <20230403074124.3925-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Subscribe to pci_bus_fire_intx_routing_notifier() instead, which allows
having a common piix3_write_config() for all PIIX3 device models.

While at it, move the subscription into machine code to facilitate resolving
TYPE_PIIX3_XEN_DEVICE.

In a possible future follow-up, pci_bus_fire_intx_routing_notifier() could
be adjusted in such a way that subscribing to it doesn't require
knowledge of the device firing it.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
Message-Id: <20230312120221.99183-5-shentey@gmail.com>
---
 hw/i386/pc_piix.c | 18 ++++++++++++++++++
 hw/isa/piix3.c    | 22 +---------------------
 2 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 99232701b1..1b70470dcd 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -88,6 +88,21 @@ static int pc_pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
     return (pci_intx + slot_addend) & 3;
 }
 
+static void piix_intx_routing_notifier_xen(PCIDevice *dev)
+{
+    int i;
+
+    /* Scan for updates to PCI link routes (0x60-0x63). */
+    for (i = 0; i < PIIX_NUM_PIRQS; i++) {
+        uint8_t v = dev->config_read(dev, PIIX_PIRQCA + i, 1);
+        if (v & 0x80) {
+            v = 0;
+        }
+        v &= 0xf;
+        xen_set_pci_link_route(i, v);
+    }
+}
+
 /* PC hardware initialisation */
 static void pc_init1(MachineState *machine,
                      const char *host_type, const char *pci_type)
@@ -239,6 +254,9 @@ static void pc_init1(MachineState *machine,
         pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
 
         if (xen_enabled()) {
+            pci_device_set_intx_routing_notifier(
+                        pci_dev, piix_intx_routing_notifier_xen);
+
             /*
              * Xen supports additional interrupt routes from the PCI devices to
              * the IOAPIC: the four pins of each PCI device on the bus are also
diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
index 7a31caf2b6..737f5c6a5d 100644
--- a/hw/isa/piix3.c
+++ b/hw/isa/piix3.c
@@ -121,26 +121,6 @@ static void piix3_write_config(PCIDevice *dev,
     }
 }
 
-static void piix3_write_config_xen(PCIDevice *dev,
-                                   uint32_t address, uint32_t val, int len)
-{
-    int i;
-
-    /* Scan for updates to PCI link routes (0x60-0x63). */
-    for (i = 0; i < len; i++) {
-        uint8_t v = (val >> (8 * i)) & 0xff;
-        if (v & 0x80) {
-            v = 0;
-        }
-        v &= 0xf;
-        if (((address + i) >= PIIX_PIRQCA) && ((address + i) <= PIIX_PIRQCD)) {
-            xen_set_pci_link_route(address + i - PIIX_PIRQCA, v);
-        }
-    }
-
-    piix3_write_config(dev, address, val, len);
-}
-
 static void piix3_reset(DeviceState *dev)
 {
     PIIX3State *d = PIIX3_PCI_DEVICE(dev);
@@ -390,7 +370,7 @@ static void piix3_xen_class_init(ObjectClass *klass, void *data)
 {
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
-    k->config_write = piix3_write_config_xen;
+    k->config_write = piix3_write_config;
     k->realize = piix3_realize;
 }
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 07:41:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 07:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517308.802443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEot-0000T8-Nk; Mon, 03 Apr 2023 07:41:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517308.802443; Mon, 03 Apr 2023 07:41:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEot-0000T1-Jh; Mon, 03 Apr 2023 07:41:39 +0000
Received: by outflank-mailman (input) for mailman id 517308;
 Mon, 03 Apr 2023 07:41:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qzQh=72=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pjEor-0000Sf-M9
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 07:41:37 +0000
Received: from mail-wm1-x336.google.com (mail-wm1-x336.google.com
 [2a00:1450:4864:20::336])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f57e5bd8-d1f2-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 09:41:35 +0200 (CEST)
Received: by mail-wm1-x336.google.com with SMTP id
 o24-20020a05600c511800b003ef59905f26so17440294wms.2
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 00:41:35 -0700 (PDT)
Received: from Provence.localdomain
 (dynamic-078-055-162-106.78.55.pool.telefonica.de. [78.55.162.106])
 by smtp.gmail.com with ESMTPSA id
 s11-20020a5d424b000000b002e5f6f8fc4fsm8414960wrr.100.2023.04.03.00.41.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 Apr 2023 00:41:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f57e5bd8-d1f2-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680507695;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=TeCzVhGVlV6a+Zg2qRJBs5g8PJcLaKDhbJ2idHJxxx0=;
        b=NOsYWiO46TFwc5m5xQHDhQb58P1Be5veIezV+JjfGZagw4FzoW9qAT0JRFYCG1Egj2
         6q00XPm7KcKE7MUNFTIM7GgjU98aTY1zJm5D4vB8qAjZlbnErUSS4LUsH+5+df/dxoUP
         0E/k4gNZfqlF5NggV1/s4+AQHdipEvxQQ1nQUjtsp2ehdEujBeHt8RnlyjvDRq6SdcgL
         EQ6RsxkTM5ehELnOpAbiPcdPxEwrfnGGyLKjQpveSExcoF6JzPNodfRySaFCXvOVzjgt
         NLBM7Dq02dF3TYQjQ8bHDGAKajXemkJTLwVjK9LwE5h5Pq9U48zyJqpvvli/cdn+9UVR
         ptSA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680507695;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=TeCzVhGVlV6a+Zg2qRJBs5g8PJcLaKDhbJ2idHJxxx0=;
        b=yhYHq1tfu5ZzukYJEqg1RGVi/gPUAqb6Hi9bex8vvT3rJNDChryZxgqqNURiRjxkPT
         2jhOEQRC7/0ksbQr+y7u75mAJI27ULKINXucBLp/+WAv83RkMu+GC1t/A341wPM1nYI8
         RIEPvPNffuQpPUfuioMB9yfH4B+wT0QFSx/AgYije3Z2S3P5fwk86kS2ter5xlbL1Scb
         mbXxKRwKAb2SfqewAJgfeSXyoFo0COTy+Jiub5ATR/0S+M5n5rHqsDbURnbuauxMzUlT
         LDdrKnogyNz7YyI8XWKJcJ/Mh507QyoOuVZ9Oaa3mqj3sAEt9xoarFdHVEL9z5j43QVy
         3j9Q==
X-Gm-Message-State: AO0yUKXKozgJLbOf7J7nXCClVU6UVscYId6K+uQXh8eop8V0BPLbAqNP
	K6bDQabeMsdlyxkbxeO1tcI=
X-Google-Smtp-Source: AK7set93HQ7bKPaN49jz207odhGu/2UkBML7wwW8EKMOxHqY7JzqavV7AWSsQWIMxqbtp26LC7xE8A==
X-Received: by 2002:a1c:7406:0:b0:3ed:320a:3721 with SMTP id p6-20020a1c7406000000b003ed320a3721mr27549019wmc.22.1680507694746;
        Mon, 03 Apr 2023 00:41:34 -0700 (PDT)
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Hervé Poussineau <hpoussin@reactos.org>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v4 1/7] include/hw/xen/xen: Rename xen_piix3_set_irq() to xen_intx_set_irq()
Date: Mon,  3 Apr 2023 09:41:18 +0200
Message-Id: <20230403074124.3925-2-shentey@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230403074124.3925-1-shentey@gmail.com>
References: <20230403074124.3925-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xen_piix3_set_irq() isn't PIIX specific: PIIX is a single PCI device
while xen_piix3_set_irq() maps multiple PCI devices to their respective
IRQs, which is board-specific. Rename xen_piix3_set_irq() to communicate
this.

Also rename XEN_PIIX_NUM_PIRQS to XEN_IOAPIC_NUM_PIRQS, since it is Xen's
IOAPIC, rather than the PIIX, that has this many interrupt routes.
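
The board-level encoding behind the rename can be sketched standalone (illustrative names, not the QEMU API): each PCI slot contributes four INTx pins, so 128 routes cover 32 slots.

```c
/*
 * Illustrative sketch (not the QEMU API) of the board-specific PIRQ
 * encoding: four INTx pins per PCI slot, so XEN_IOAPIC_NUM_PIRQS (128)
 * covers 32 slots.
 */
static int slot_intx_to_pirq(int slot, int intx)
{
    return intx + (slot << 2);  /* mirrors xen_pci_slot_get_pirq() */
}

static int pirq_to_slot(int pirq)
{
    return pirq >> 2;           /* decomposition as in xen_intx_set_irq() */
}

static int pirq_to_intx(int pirq)
{
    return pirq & 3;
}
```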

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
Message-Id: <20230312120221.99183-2-shentey@gmail.com>
---
 include/hw/xen/xen.h  | 2 +-
 hw/i386/xen/xen-hvm.c | 2 +-
 hw/isa/piix3.c        | 4 ++--
 stubs/xen-hw-stub.c   | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index 2bd8ec742d..37ecc91fc3 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -39,7 +39,7 @@ extern bool xen_domid_restrict;
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
 int xen_set_pci_link_route(uint8_t link, uint8_t irq);
-void xen_piix3_set_irq(void *opaque, int irq_num, int level);
+void xen_intx_set_irq(void *opaque, int irq_num, int level);
 void xen_hvm_inject_msi(uint64_t addr, uint32_t data);
 int xen_is_pirq_msi(uint32_t msi_data);
 
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 56641a550e..ab8f1b61ee 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -143,7 +143,7 @@ int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
     return irq_num + (PCI_SLOT(pci_dev->devfn) << 2);
 }
 
-void xen_piix3_set_irq(void *opaque, int irq_num, int level)
+void xen_intx_set_irq(void *opaque, int irq_num, int level)
 {
     xen_set_pci_intx_level(xen_domid, 0, 0, irq_num >> 2,
                            irq_num & 3, level);
diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
index a9cb39bf21..1b3e23f0d7 100644
--- a/hw/isa/piix3.c
+++ b/hw/isa/piix3.c
@@ -34,7 +34,7 @@
 #include "migration/vmstate.h"
 #include "hw/acpi/acpi_aml_interface.h"
 
-#define XEN_PIIX_NUM_PIRQS      128ULL
+#define XEN_IOAPIC_NUM_PIRQS    128ULL
 
 static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
 {
@@ -405,7 +405,7 @@ static void piix3_xen_realize(PCIDevice *dev, Error **errp)
      * connected to the IOAPIC directly.
      * These additional routes can be discovered through ACPI.
      */
-    pci_bus_irqs(pci_bus, xen_piix3_set_irq, piix3, XEN_PIIX_NUM_PIRQS);
+    pci_bus_irqs(pci_bus, xen_intx_set_irq, piix3, XEN_IOAPIC_NUM_PIRQS);
 }
 
 static void piix3_xen_class_init(ObjectClass *klass, void *data)
diff --git a/stubs/xen-hw-stub.c b/stubs/xen-hw-stub.c
index 34a22f2ad7..7d7ffe83a9 100644
--- a/stubs/xen-hw-stub.c
+++ b/stubs/xen-hw-stub.c
@@ -15,7 +15,7 @@ int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
     return -1;
 }
 
-void xen_piix3_set_irq(void *opaque, int irq_num, int level)
+void xen_intx_set_irq(void *opaque, int irq_num, int level)
 {
 }
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 07:41:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 07:41:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517311.802462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEou-0000kS-QN; Mon, 03 Apr 2023 07:41:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517311.802462; Mon, 03 Apr 2023 07:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEou-0000ie-J4; Mon, 03 Apr 2023 07:41:40 +0000
Received: by outflank-mailman (input) for mailman id 517311;
 Mon, 03 Apr 2023 07:41:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qzQh=72=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pjEos-0000Sf-IM
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 07:41:38 +0000
Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com
 [2a00:1450:4864:20::432])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f7281ec2-d1f2-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 09:41:37 +0200 (CEST)
Received: by mail-wr1-x432.google.com with SMTP id t4so23024662wra.7
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 00:41:38 -0700 (PDT)
Received: from Provence.localdomain
 (dynamic-078-055-162-106.78.55.pool.telefonica.de. [78.55.162.106])
 by smtp.gmail.com with ESMTPSA id
 s11-20020a5d424b000000b002e5f6f8fc4fsm8414960wrr.100.2023.04.03.00.41.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 Apr 2023 00:41:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7281ec2-d1f2-11ed-85db-49a42c6b2330
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Hervé Poussineau <hpoussin@reactos.org>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v4 3/7] hw/isa/piix3: Reuse piix3_realize() in piix3_xen_realize()
Date: Mon,  3 Apr 2023 09:41:20 +0200
Message-Id: <20230403074124.3925-4-shentey@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230403074124.3925-1-shentey@gmail.com>
References: <20230403074124.3925-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a preparatory patch for the next one, making the following
more obvious:

First, pci_bus_irqs() is now called twice in the Xen case, where the
second call overrides the pci_set_irq_fn with the Xen variant.

Second, pci_bus_set_route_irq_fn() is now also called in Xen mode.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
Message-Id: <20230312120221.99183-3-shentey@gmail.com>
---
 hw/isa/piix3.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
index 1b3e23f0d7..a86cd23ef4 100644
--- a/hw/isa/piix3.c
+++ b/hw/isa/piix3.c
@@ -394,7 +394,7 @@ static void piix3_xen_realize(PCIDevice *dev, Error **errp)
     PIIX3State *piix3 = PIIX3_PCI_DEVICE(dev);
     PCIBus *pci_bus = pci_get_bus(dev);
 
-    pci_piix3_realize(dev, errp);
+    piix3_realize(dev, errp);
     if (*errp) {
         return;
     }
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 07:41:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 07:41:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517312.802483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEox-0001R2-37; Mon, 03 Apr 2023 07:41:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517312.802483; Mon, 03 Apr 2023 07:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjEow-0001Qp-UM; Mon, 03 Apr 2023 07:41:42 +0000
Received: by outflank-mailman (input) for mailman id 517312;
 Mon, 03 Apr 2023 07:41:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qzQh=72=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pjEov-0000Sg-Cv
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 07:41:41 +0000
Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com
 [2a00:1450:4864:20::42c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f8582e01-d1f2-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 09:41:39 +0200 (CEST)
Received: by mail-wr1-x42c.google.com with SMTP id m2so28256907wrh.6
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 00:41:39 -0700 (PDT)
Received: from Provence.localdomain
 (dynamic-078-055-162-106.78.55.pool.telefonica.de. [78.55.162.106])
 by smtp.gmail.com with ESMTPSA id
 s11-20020a5d424b000000b002e5f6f8fc4fsm8414960wrr.100.2023.04.03.00.41.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 Apr 2023 00:41:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8582e01-d1f2-11ed-b464-930f4c7d94ae
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	Hervé Poussineau <hpoussin@reactos.org>,
	Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Bernhard Beschow <shentey@gmail.com>
Subject: [PATCH v4 4/7] hw/isa/piix3: Wire up Xen PCI IRQ handling outside of PIIX3
Date: Mon,  3 Apr 2023 09:41:21 +0200
Message-Id: <20230403074124.3925-5-shentey@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230403074124.3925-1-shentey@gmail.com>
References: <20230403074124.3925-1-shentey@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xen_intx_set_irq() doesn't depend on PIIX3State. In order to resolve
TYPE_PIIX3_XEN_DEVICE, and to make Xen agnostic about the precise south
bridge in use, set up Xen's PCI IRQ handling for PIIX3 in board code.

Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
Message-Id: <20230312120221.99183-4-shentey@gmail.com>
---
 hw/i386/pc_piix.c | 13 +++++++++++++
 hw/isa/piix3.c    | 24 +-----------------------
 2 files changed, 14 insertions(+), 23 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 30eedd62a3..99232701b1 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -69,6 +69,7 @@
 #include "kvm/kvm-cpu.h"
 
 #define MAX_IDE_BUS 2
+#define XEN_IOAPIC_NUM_PIRQS 128ULL
 
 #ifdef CONFIG_IDE_ISA
 static const int ide_iobase[MAX_IDE_BUS] = { 0x1f0, 0x170 };
@@ -236,6 +237,18 @@ static void pc_init1(MachineState *machine,
         pcms->bus = pci_bus;
 
         pci_dev = pci_create_simple_multifunction(pci_bus, -1, true, type);
+
+        if (xen_enabled()) {
+            /*
+             * Xen supports additional interrupt routes from the PCI devices to
+             * the IOAPIC: the four pins of each PCI device on the bus are also
+             * connected to the IOAPIC directly.
+             * These additional routes can be discovered through ACPI.
+             */
+            pci_bus_irqs(pci_bus, xen_intx_set_irq, pci_dev,
+                         XEN_IOAPIC_NUM_PIRQS);
+        }
+
         piix3 = PIIX3_PCI_DEVICE(pci_dev);
         piix3->pic = x86ms->gsi;
         piix3_devfn = piix3->dev.devfn;
diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
index a86cd23ef4..7a31caf2b6 100644
--- a/hw/isa/piix3.c
+++ b/hw/isa/piix3.c
@@ -34,8 +34,6 @@
 #include "migration/vmstate.h"
 #include "hw/acpi/acpi_aml_interface.h"
 
-#define XEN_IOAPIC_NUM_PIRQS    128ULL
-
 static void piix3_set_irq_pic(PIIX3State *piix3, int pic_irq)
 {
     qemu_set_irq(piix3->pic[pic_irq],
@@ -388,32 +386,12 @@ static const TypeInfo piix3_info = {
     .class_init    = piix3_class_init,
 };
 
-static void piix3_xen_realize(PCIDevice *dev, Error **errp)
-{
-    ERRP_GUARD();
-    PIIX3State *piix3 = PIIX3_PCI_DEVICE(dev);
-    PCIBus *pci_bus = pci_get_bus(dev);
-
-    piix3_realize(dev, errp);
-    if (*errp) {
-        return;
-    }
-
-    /*
-     * Xen supports additional interrupt routes from the PCI devices to
-     * the IOAPIC: the four pins of each PCI device on the bus are also
-     * connected to the IOAPIC directly.
-     * These additional routes can be discovered through ACPI.
-     */
-    pci_bus_irqs(pci_bus, xen_intx_set_irq, piix3, XEN_IOAPIC_NUM_PIRQS);
-}
-
 static void piix3_xen_class_init(ObjectClass *klass, void *data)
 {
     PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
 
     k->config_write = piix3_write_config_xen;
-    k->realize = piix3_xen_realize;
+    k->realize = piix3_realize;
 }
 
 static const TypeInfo piix3_xen_info = {
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 08:09:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 08:09:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517336.802523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjFFP-0007bL-5b; Mon, 03 Apr 2023 08:09:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517336.802523; Mon, 03 Apr 2023 08:09:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjFFO-0007bC-Vy; Mon, 03 Apr 2023 08:09:02 +0000
Received: by outflank-mailman (input) for mailman id 517336;
 Mon, 03 Apr 2023 08:09:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qzQh=72=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pjFFN-0007b6-Pk
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 08:09:01 +0000
Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com
 [2a00:1450:4864:20::534])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c9b68f38-d1f6-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 10:08:59 +0200 (CEST)
Received: by mail-ed1-x534.google.com with SMTP id eh3so113709905edb.11
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 01:08:59 -0700 (PDT)
Received: from [127.0.0.1] ([62.214.191.67]) by smtp.gmail.com with ESMTPSA id
 r30-20020a50aade000000b004f9e6495f94sm4252444edc.50.2023.04.03.01.08.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Apr 2023 01:08:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9b68f38-d1f6-11ed-b464-930f4c7d94ae
Date: Mon, 03 Apr 2023 08:08:49 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: qemu-devel@nongnu.org, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>, David Woodhouse <dwmw@amazon.co.uk>,
 Hervé Poussineau <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost <eduardo@habkost.net>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 Philippe Mathieu-Daudé <philmd@linaro.org>,
 Chuck Zmudzinski <brchuckz@aol.com>
Subject: Re: [PATCH v3 2/6] hw/isa/piix3: Reuse piix3_realize() in piix3_xen_realize()
In-Reply-To: <7F45B51F-F1E3-4F04-A46F-4C80509C7195@gmail.com>
References: <20230312120221.99183-1-shentey@gmail.com> <20230312120221.99183-3-shentey@gmail.com> <f52c41f7-e662-4afd-8ac9-ce2c0da2b1be@perard> <7F45B51F-F1E3-4F04-A46F-4C80509C7195@gmail.com>
Message-ID: <B4AB508E-C750-4D5E-BF89-908082A1CD84@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



Am 1=2E April 2023 22:36:45 UTC schrieb Bernhard Beschow <shentey@gmail=2E=
com>:
>
>
>Am 30=2E M=C3=A4rz 2023 13:00:25 UTC schrieb Anthony PERARD <anthony=2Epe=
rard@citrix=2Ecom>:
>>On Sun, Mar 12, 2023 at 01:02:17PM +0100, Bernhard Beschow wrote:
>>> This is a preparational patch for the next one to make the following
>>> more obvious:
>>>
>>> First, pci_bus_irqs() is now called twice in the Xen case, where the
>>> second call overrides the pci_set_irq_fn with the Xen variant.
>>
>>pci_bus_irqs() allocates pci_bus->irq_count, so the second call in
>>piix3_xen_realize() will leak `pci_bus->irq_count`. Could you check whether
>>pci_bus_irqs_cleanup() can be called before the second pci_bus_irqs()
>>call, or find some other way to avoid the leak?
>
>Thanks for catching this! I'll post a v4.

V4 is out.

Thanks,
Bernhard

>
>I think the most fool-proof way to fix this is to free irq_count just
>before the assignment. pci_bus_irqs_cleanup() would then have to NULL the
>attribute such that pci_bus_irqs() can be called afterwards.
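[Editor's note: the allocate-twice leak and the free-before-reassign fix described above can be modeled in isolation. This is a sketch only; `bus_irqs_set`, `bus_irqs_cleanup`, and the `struct bus` fields are illustrative stand-ins for QEMU's pci_bus_irqs()/pci_bus_irqs_cleanup(), not the real API.]

```c
#include <assert.h>
#include <stdlib.h>

struct bus {
    int *irq_count;   /* allocated by bus_irqs_set(), like pci_bus->irq_count */
    int nirq;
};

/* Allocates irq_count.  Freeing any previous allocation first makes a
 * second call (the Xen realize path) safe instead of leaking. */
static void bus_irqs_set(struct bus *b, int nirq)
{
    free(b->irq_count);               /* no-op on the first call (NULL) */
    b->irq_count = calloc(nirq, sizeof(*b->irq_count));
    b->nirq = nirq;
}

/* NULLs the pointer after freeing, so bus_irqs_set() may be called again. */
static void bus_irqs_cleanup(struct bus *b)
{
    free(b->irq_count);
    b->irq_count = NULL;
    b->nirq = 0;
}
```

With this shape, calling bus_irqs_set() twice, as piix3_xen_realize() effectively does, leaves exactly one live allocation.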
>
>BTW: I tried running qemu-system-x86_64 with PIIX4 rather than PIIX3 as a
>Xen guest with my pc-piix4 branch, without success. This branch essentially
>just provides slightly different PCI IDs for PIIX. Does xl or something
>else in Xen check these? If not, then I'm still missing something.
>Under KVM this branch works just fine. Any idea?
>
>Thanks,
>Bernhard
>
>>
>>> Second, pci_bus_set_route_irq_fn() is now also called in Xen mode.
>>>
>>> Signed-off-by: Bernhard Beschow <shentey@gmail.com>
>>> Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
>>
>>Besides the leak, which I think can happen only once, the patch is fine:
>>Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
>>
>>Thanks,
>>


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 08:58:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 08:58:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517340.802533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjG17-00053x-Rx; Mon, 03 Apr 2023 08:58:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517340.802533; Mon, 03 Apr 2023 08:58:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjG16-00053c-Pe; Mon, 03 Apr 2023 08:58:20 +0000
Received: by outflank-mailman (input) for mailman id 517340;
 Mon, 03 Apr 2023 08:58:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W/KQ=72=citrix.com=prvs=450b71a79=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pjG12-00051H-RF
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 08:58:17 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a8ff499d-d1fd-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 10:58:13 +0200 (CEST)
Received: from mail-dm6nam10lp2103.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.103])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Apr 2023 04:58:01 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by BY5PR03MB4950.namprd03.prod.outlook.com (2603:10b6:a03:1e1::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 08:57:58 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Mon, 3 Apr 2023
 08:57:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8ff499d-d1fd-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680512293;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=sIJr1DpFzFESjuG658Iht8tRP4KhsgK2Pqw9uQ1tb0k=;
  b=BqDIm73rGlRPu7zFnE0/Q/cmp3tPB7Gmc75YLjVS9FjYHlouKO9rso7W
   eQgJRLj4fl4VtxymQThyjzPOwGmHXQxxjW3TK3M8emetyAYivEB4jjpXL
   eUfCfTZPeIaB9NPUFesobDasi9TC4G8xVK+I8FeivkSsR05cHk6SNLYwL
   A=;
X-IronPort-RemoteIP: 104.47.58.103
X-IronPort-MID: 104500074
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,314,1673931600"; 
   d="scan'208";a="104500074"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i0CkQzkp9lDYeT1y5/ecnK9MeaqWs+QPfjTuNQvSByp3ULQw6QKvoT+pqlcYKicsHlYoDPh5YS8iXdW0L++ftTowkzK3BEgEIS5lzwydy7ByP/si9hpVCYB5KD7Hbynj232bb+fr+tIp9cYDxSkSF1uf9QtNmL78tVYKKywdDwhCvsnWaYsE+ts1FoSZSC+U+mOjdIyqfKKphwsY2o9IEczrQXwktqBETDpfP2p/EUZlWS9Fypg4yPyoqr8OGl7hfK73dD1wh0MPhkzTSmDYRUEIqIP5VVnzI0sLjDnAvbJBX45h2EDglI1in/cjShSnMu5/IUGDvH8T8HuNU9O8YA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xf3yZJpNniIwd+LVSfWWmL7nhLZsXRRI19tvZcvt07g=;
 b=HAK+4Uny4llysaBC7W4EPOpcM5b26rinJB3crCbrkepXYk8rdRPuRLFOHTVe5kPVWH2yjCSVHWMCmSZ+svNru/ToOYLhz6vBISSgIY8lmZIyBcSqIXY8O3KSoCLl3QNWjdfI4iGN4ukZjusC/GOQP9d1z+bjBcCvacuPyYy+eo83ivXBs6GoDDBTpaXsd1ofqfSFP4BnTjAWoQchAjdeYAvNGppaWeMblUaP/DjOhnF/HM3lLtT7U1sXCtuJup9vUKP/umgg5aW1yhs3DrI033M4b57pMiUYG+NVUexyr2+GqmdwUc4P3OlOh/m/RFCYiMsUCtztZoKRbBxLXw7+iA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xf3yZJpNniIwd+LVSfWWmL7nhLZsXRRI19tvZcvt07g=;
 b=JPNT9cnkz5roi6FXlBBe+OPXbxNa4cCsw/B6QqJQN1yY/NvxKwN197EK7eWBcbdPQ0LXttYNtKGc38hFfK4iX/r68xmCfj9Jzmxtb+6wnJx5nQYaauSVrfeGnkoMF0zYCuh0EAGqMuQ3ll/FfA/pbRk70xlIsp+pig6JsGsW3TI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 3 Apr 2023 10:57:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/pci: Correct ECS handling with CF8/CFC emulation
Message-ID: <ZCqVEHe1Qo3skeVf@Air-de-Roger>
References: <20230331175719.500285-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230331175719.500285-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: LO4P123CA0032.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:151::19) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6360:EE_|BY5PR03MB4950:EE_
X-MS-Office365-Filtering-Correlation-Id: 6b6339fd-7c7c-4ba8-b985-08db34218595
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6b6339fd-7c7c-4ba8-b985-08db34218595
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 08:57:58.4949
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: x4sf1BWTabxoH/ANDiNIm8XLslx6OjkoxyLk3/RQqHVQ9zJhleequIS75bAWjSaYgn66FzA+MjVlTf6xaMPwJA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4950

On Fri, Mar 31, 2023 at 06:57:19PM +0100, Andrew Cooper wrote:
> This started by noticing the dubious Fam17h check in
> arch_ioreq_server_get_type_addr(), and then realising that checking the host
> CF8_EXT setting is utterly bogus for the guest <-> qemu emulation path.
> 
> What should be consulted here is the guest's MSR_AMD64_NB_CFG setting, but a)
> there isn't one, and b) the vestigial remnants of the cross-vendor migration
> logic cause MSR_AMD64_NB_CFG to be unconditionally read-as-zero, making the
> CF8_EXT path unused by any suitably-written OS in the first place.
> 
> MSR_AMD64_NB_CFG really has been removed on Fam17h (it's now just a read-zero,
> write-discard stub), and the ECS extension is unconditionally active, meaning
> it is not correct for Xen to ignore the ExtRegNo field on newer AMD CPUs.
> 
> It turns out that Xen even had this behaviour in 4.5 and earlier, with
> this problematic CF8_EXT checking being added in 4.6.  Therefore, revert
> to Xen's older behaviour - it is objectively less wrong than the current
> logic.
> 
> While fixing this, get rid of hvm_pci_decode_addr() - it is more complicated
> to follow (and to call) than using the CF8* macros in the calling context.
> Rename CF8_ADDR() to CF8_REG() to better describe what it does, and write a
> comment explaining all about CF8/CFC accesses.
> 
> There's one rare case when CF8_EXT is visible to guests, and that is for a
> pinned hwdom.  Right now, we permit such a dom0 to modify the CF8_EXT bit, but
> this seems like a very unwise idea.  Leave a TODO for people to consider.

One weirdness I've noticed is that, after this change, for vPCI we
decode the accesses taking the extended CF8 bits into account, but if
the access is relayed to the hardware using vpci_{read,write}_hw it
will be forwarded using pci_conf_{read,write}<size>, which has no
support for CF8_EXT.  So if the underlying hardware doesn't have MMCFG
support and the register is > 255, the access will be dropped.
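[Editor's note: the truncation described above falls straight out of the CF8 layout: a legacy (non-ECS) CF8 value carries only register bits 2-7, so a register number above 0xff simply cannot be encoded. A small standalone sketch; the helper names are illustrative, not Xen code:]

```c
#include <assert.h>
#include <stdint.h>

/* Legacy CF8 layout: bit 31 enable, bits 8-23 BDF, bits 2-7 register.
 * Without the AMD ECS nibble (bits 24-27) there is nowhere to put
 * register address bits 8-11. */
static uint32_t cf8_legacy(uint16_t bdf, uint16_t reg)
{
    return 0x80000000u | ((uint32_t)bdf << 8) | (reg & 0xfcu);
}

/* Mirrors the patch's CF8_REG(): ECS nibble supplies reg bits 8-11. */
static uint16_t cf8_reg(uint32_t cf8)
{
    return ((cf8 & 0x0f000000) >> 16) | (cf8 & 0xfc);
}
```

So an access to, say, register 0x100 relayed through legacy CF8/CFC decodes as register 0, i.e. it cannot reach the intended extended register, matching the concern above.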

> Fixes: e0fbf3bf9871 ("x86/AMD: correct certain Fam17 checks")
> Fixes: 2d67a7a4d37a ("x86: synchronize PCI config space access decoding")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> Whoever reviewed those two patches originally was clearly a fool...
> ---
>  xen/arch/x86/hvm/io.c             | 24 ++++++------------------
>  xen/arch/x86/hvm/ioreq.c          | 19 ++-----------------
>  xen/arch/x86/include/asm/hvm/io.h |  4 ----
>  xen/arch/x86/include/asm/pci.h    | 26 ++++++++++++++++++++++++--
>  xen/arch/x86/pv/emul-priv-op.c    | 19 ++++++-------------
>  5 files changed, 38 insertions(+), 54 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> index 5ae209d3b6b3..b0d3c236e985 100644
> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -248,20 +248,6 @@ void register_g2m_portio_handler(struct domain *d)
>      handler->ops = &g2m_portio_ops;
>  }
>  
> -unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
> -                                 pci_sbdf_t *sbdf)
> -{
> -    ASSERT(CF8_ENABLED(cf8));
> -
> -    sbdf->bdf = CF8_BDF(cf8);
> -    sbdf->seg = 0;
> -    /*
> -     * NB: the lower 2 bits of the register address are fetched from the
> -     * offset into the 0xcfc register when reading/writing to it.
> -     */
> -    return CF8_ADDR_LO(cf8) | (addr & 3);
> -}
> -
>  /* vPCI config space IO ports handlers (0xcf8/0xcfc). */
>  static bool cf_check vpci_portio_accept(
>      const struct hvm_io_handler *handler, const ioreq_t *p)
> @@ -275,7 +261,7 @@ static int cf_check vpci_portio_read(
>  {
>      const struct domain *d = current->domain;
>      unsigned int reg;
> -    pci_sbdf_t sbdf;
> +    pci_sbdf_t sbdf = {};
>      uint32_t cf8;
>  
>      *data = ~(uint64_t)0;
> @@ -292,7 +278,8 @@ static int cf_check vpci_portio_read(
>      if ( !CF8_ENABLED(cf8) )
>          return X86EMUL_UNHANDLEABLE;
>  
> -    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
> +    sbdf.bdf = CF8_BDF(cf8);
> +    reg = CF8_REG(cf8) | (addr & 3);
>  
>      if ( !vpci_access_allowed(reg, size) )
>          return X86EMUL_OKAY;
> @@ -308,7 +295,7 @@ static int cf_check vpci_portio_write(
>  {
>      struct domain *d = current->domain;
>      unsigned int reg;
> -    pci_sbdf_t sbdf;
> +    pci_sbdf_t sbdf = {};
>      uint32_t cf8;
>  
>      if ( addr == 0xcf8 )
> @@ -323,7 +310,8 @@ static int cf_check vpci_portio_write(
>      if ( !CF8_ENABLED(cf8) )
>          return X86EMUL_UNHANDLEABLE;
>  
> -    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
> +    sbdf.bdf = CF8_BDF(cf8);
> +    reg = CF8_REG(cf8) | (addr & 3);
>  
>      if ( !vpci_access_allowed(reg, size) )
>          return X86EMUL_OKAY;
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 0bdcca1e1a5f..325a9d118e52 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -285,27 +285,12 @@ bool arch_ioreq_server_get_type_addr(const struct domain *d,
>           (p->addr & ~3) == 0xcfc &&
>           CF8_ENABLED(cf8) )
>      {
> -        unsigned int x86_fam, reg;
> -        pci_sbdf_t sbdf;
> -
> -        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
> +        pci_sbdf_t sbdf = { .bdf = CF8_BDF(cf8) };
> +        unsigned int reg = CF8_REG(cf8) | (p->addr & 3);
>  
>          /* PCI config data cycle */
>          *type = XEN_DMOP_IO_RANGE_PCI;
>          *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
> -        /* AMD extended configuration space access? */
> -        if ( CF8_ADDR_HI(cf8) &&
> -             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
> -             (x86_fam = get_cpu_family(
> -                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 &&
> -             x86_fam < 0x17 )
> -        {
> -            uint64_t msr_val;
> -
> -            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
> -                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
> -                *addr |= CF8_ADDR_HI(cf8);
> -        }
>      }
>      else
>      {
> diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h
> index 54e0161b492c..3f3fb6403ccb 100644
> --- a/xen/arch/x86/include/asm/hvm/io.h
> +++ b/xen/arch/x86/include/asm/hvm/io.h
> @@ -144,10 +144,6 @@ void stdvga_deinit(struct domain *d);
>  
>  extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
>  
> -/* Decode a PCI port IO access into a bus/slot/func/reg. */
> -unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
> -                                 pci_sbdf_t *sbdf);
> -
>  /*
>   * HVM port IO handler that performs forwarding of guest IO ports into machine
>   * IO ports.
> diff --git a/xen/arch/x86/include/asm/pci.h b/xen/arch/x86/include/asm/pci.h
> index f4a58c8acf13..3b814f4ebacf 100644
> --- a/xen/arch/x86/include/asm/pci.h
> +++ b/xen/arch/x86/include/asm/pci.h
> @@ -3,10 +3,32 @@
>  
>  #include <xen/mm.h>
>  
> +/*
> + * PCI config space accesses with CF8/CFC:
> + *
> + * 1) Write {Enable | BDF | Reg} to CF8 to set an address
> + * 2) Read or write CF{C..F} to access the register
> + *
> + * For sub-dword register accesses, the bottom two register address bits come
> + * from the CF{C..F} address, not from CF8.
> + *
> + * AMD have an extension to this protocol to access PCIe Extended Config
> + * Space, storing the 4 extra register address bits in the penultimate nibble
> + * of CF8.  This extension:
> + *  - Is unconditionally active on Fam17h and later
> + *  - Has model-specific enablement on Fam10h through Fam16h
> + *  - Has reserved behaviour in all other cases, including other vendors
> + *
> + * For simplicity and because we are permitted to, given "reserved", Xen
> + * always treats ECS as active when emulating guest PCI config space accesses.
> + */
>  #define CF8_BDF(cf8)     (  ((cf8) & 0x00ffff00) >> 8)
> -#define CF8_ADDR_LO(cf8) (   (cf8) & 0x000000fc)
> -#define CF8_ADDR_HI(cf8) (  ((cf8) & 0x0f000000) >> 16)
>  #define CF8_ENABLED(cf8) (!!((cf8) & 0x80000000))
> +#define CF8_REG(cf8)                                    \
> +    ({                                                  \
> +        unsigned int _c = cf8;                          \
> +        ((_c & 0x0f000000) >> 16) | (_c & 0xfc);        \
> +    })

What happens on Intel when the bit is set? Is it just ignored?

We only allow such accesses for dom0 anyway.
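[Editor's note: the macros from the hunk above can be exercised on their own to see where the ECS nibble lands. CF8_REG() uses a GCC statement expression, as in the patch; the sample value in the checks below is made up for illustration.]

```c
#include <assert.h>
#include <stdint.h>

/* Copied from the patch's xen/arch/x86/include/asm/pci.h hunk. */
#define CF8_BDF(cf8)     (  ((cf8) & 0x00ffff00) >> 8)
#define CF8_ENABLED(cf8) (!!((cf8) & 0x80000000))
#define CF8_REG(cf8)                                    \
    ({                                                  \
        unsigned int _c = (cf8);                        \
        ((_c & 0x0f000000) >> 16) | (_c & 0xfc);        \
    })
```

E.g. for cf8 = 0x82000310 (enable bit set, ECS nibble 0x2, BDF 0x0003, low register byte 0x10), CF8_REG() yields 0x210: the ECS nibble becomes register bits 8-11.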

>  
>  #define IS_SNB_GFX(id) (id == 0x01068086 || id == 0x01168086 \
>                          || id == 0x01268086 || id == 0x01028086 \
> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
> index 5da00e24e4ff..008367195c78 100644
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -245,19 +245,7 @@ static bool pci_cfg_ok(struct domain *currd, unsigned int start,
>          if ( ro_map && test_bit(machine_bdf, ro_map) )
>              return false;
>      }
> -    start |= CF8_ADDR_LO(currd->arch.pci_cf8);
> -    /* AMD extended configuration space access? */
> -    if ( CF8_ADDR_HI(currd->arch.pci_cf8) &&
> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
> -         boot_cpu_data.x86 >= 0x10 && boot_cpu_data.x86 < 0x17 )
> -    {
> -        uint64_t msr_val;
> -
> -        if ( rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) )
> -            return false;
> -        if ( msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT) )
> -            start |= CF8_ADDR_HI(currd->arch.pci_cf8);
> -    }
> +    start |= CF8_REG(currd->arch.pci_cf8);
>  
>      return !write ?
>             xsm_pci_config_permission(XSM_HOOK, currd, machine_bdf,
> @@ -1104,6 +1092,11 @@ static int cf_check write_msr(
>          if ( !is_hwdom_pinned_vcpu(curr) )
>              return X86EMUL_OKAY;
>          if ( (rdmsr_safe(MSR_AMD64_NB_CFG, temp) != 0) ||
> +             /*
> +              * TODO: this is broken.  What happens when dom0 is pinned but
> +              * can't see the full system?  CF8_EXT probably ought to be a
> +              * Xen-owned setting, and made symmetric across the system.
> +              */

I would assume CF8_EXT would be symmetric across the system, especially
given that it seems to be phased out and only used in older AMD
families that were all symmetric?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 09:25:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 09:25:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517346.802543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjGQx-0001Re-Tw; Mon, 03 Apr 2023 09:25:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517346.802543; Mon, 03 Apr 2023 09:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjGQx-0001RX-R0; Mon, 03 Apr 2023 09:25:03 +0000
Received: by outflank-mailman (input) for mailman id 517346;
 Mon, 03 Apr 2023 09:25:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjGQw-0001RN-4j; Mon, 03 Apr 2023 09:25:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjGQw-0003Zu-2A; Mon, 03 Apr 2023 09:25:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjGQv-0000tz-Jj; Mon, 03 Apr 2023 09:25:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjGQv-0006cu-JB; Mon, 03 Apr 2023 09:25:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cQfBgwAWojfT8AROnVhNOIPTdz50paFl2+p9bIXtBX0=; b=d2Ny+2m8385vWDIL5jidQ/rLmf
	r5Fk2K3oAAoUvYzIuna0bbqZDxnl1BJ7a77+lCp+Vl/DYisD/GH43P6V4txq3RrUpM0cSzcdsjfPN
	ZO/K0exePLn821k9GpBKv2COkto+VkChjl2YfjYELl11MjtIu9FoBUaEUgkm3lg02vRw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180115-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180115: tolerable trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=d6e0b4c41a38655ade7ecb566e8b2961282769fb
X-Osstest-Versions-That:
    xen=d6e0b4c41a38655ade7ecb566e8b2961282769fb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 Apr 2023 09:25:01 +0000

flight 180115 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180115/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 180110
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180110

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180110
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180110
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180110
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180110
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180110
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180110
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180110
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180110
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180110
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  d6e0b4c41a38655ade7ecb566e8b2961282769fb
baseline version:
 xen                  d6e0b4c41a38655ade7ecb566e8b2961282769fb

Last test of basis   180115  2023-04-03 01:52:10 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 09:27:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 09:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517351.802553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjGT8-000216-9M; Mon, 03 Apr 2023 09:27:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517351.802553; Mon, 03 Apr 2023 09:27:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjGT8-00020z-6Z; Mon, 03 Apr 2023 09:27:18 +0000
Received: by outflank-mailman (input) for mailman id 517351;
 Mon, 03 Apr 2023 09:27:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L93W=72=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjGT6-00020t-Fx
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 09:27:16 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b82df015-d201-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 11:27:15 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1EAD21F8B8;
 Mon,  3 Apr 2023 09:27:14 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E1C8513416;
 Mon,  3 Apr 2023 09:27:13 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 95KzNfGbKmT0YQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 03 Apr 2023 09:27:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b82df015-d201-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680514034; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=fPwYyujXO4mPZ107oiABO3rd/KXZ8IqVN1G60GG3sas=;
	b=iI0+eaaRv6fdlTpnFRhsapTgfyCXMoahX7PzJ0gySxojgtteAmnpXUZBnB49ZGzqLBPSKf
	Ditz6CbiSLcL5+eYo82Skh5xXV2wuNmqe5W88s8G8G6LNxQhgXvHIXRGMQi1SeSP7xOlGI
	Ic6Jq9TW/PcdIhLhAfFfFoHKlp5Bgls=
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	xen-devel@lists.xenproject.org,
	Dan Carpenter <error27@gmail.com>
Subject: [PATCH v2] xen/pvcalls: don't call bind_evtchn_to_irqhandler() under lock
Date: Mon,  3 Apr 2023 11:27:11 +0200
Message-Id: <20230403092711.15285-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

bind_evtchn_to_irqhandler() shouldn't be called while holding a spinlock,
as it can sleep.

This requires moving the calls of create_active() out of the locked
regions. This is not a problem, as the worst that could happen is a
spurious call of the interrupt handler, causing a spurious wake_up().
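The reordering above can be illustrated with a small userspace analog: do the step that may sleep (the stand-in for create_active()/bind_evtchn_to_irqhandler()) before taking the lock, and hold the lock only for the quick bookkeeping. This is a sketch, not the kernel code; the names and the pthread mutex standing in for the spinlock are illustrative assumptions.

```c
#include <assert.h>
#include <pthread.h>

/* Userspace analog of the reordering in this patch: run the work that
 * may sleep BEFORE taking the lock, then take the lock only for the
 * short, non-sleeping bookkeeping. All names are illustrative. */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int requests_issued;

/* Stand-in for create_active(): may block, so it must run unlocked. */
static int create_active_stub(int *evtchn)
{
	*evtchn = 42;	/* pretend an event channel was bound */
	return 0;
}

static int connect_like(void)
{
	int evtchn, ret;

	ret = create_active_stub(&evtchn);	/* sleeping work, no lock held */
	if (ret < 0)
		return ret;

	pthread_mutex_lock(&lock);		/* short critical section only */
	requests_issued++;
	pthread_mutex_unlock(&lock);
	return evtchn;
}
```

The cost of this ordering, as the commit message notes, is only that the handler may fire before the bookkeeping is done, which is harmless here.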

Reported-by: Dan Carpenter <error27@gmail.com>
Link: https://lore.kernel.org/lkml/Y+JUIl64UDmdkboh@kadam/
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- remove stale spin_unlock() (Oleksandr Tyshchenko)
---
 drivers/xen/pvcalls-front.c | 46 +++++++++++++++++++++----------------
 1 file changed, 26 insertions(+), 20 deletions(-)

diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
index d5d589bda243..b72ee9379d77 100644
--- a/drivers/xen/pvcalls-front.c
+++ b/drivers/xen/pvcalls-front.c
@@ -227,22 +227,30 @@ static irqreturn_t pvcalls_front_event_handler(int irq, void *dev_id)
 
 static void free_active_ring(struct sock_mapping *map);
 
-static void pvcalls_front_free_map(struct pvcalls_bedata *bedata,
-				   struct sock_mapping *map)
+static void pvcalls_front_destroy_active(struct pvcalls_bedata *bedata,
+					 struct sock_mapping *map)
 {
 	int i;
 
 	unbind_from_irqhandler(map->active.irq, map);
 
-	spin_lock(&bedata->socket_lock);
-	if (!list_empty(&map->list))
-		list_del_init(&map->list);
-	spin_unlock(&bedata->socket_lock);
+	if (bedata) {
+		spin_lock(&bedata->socket_lock);
+		if (!list_empty(&map->list))
+			list_del_init(&map->list);
+		spin_unlock(&bedata->socket_lock);
+	}
 
 	for (i = 0; i < (1 << PVCALLS_RING_ORDER); i++)
 		gnttab_end_foreign_access(map->active.ring->ref[i], NULL);
 	gnttab_end_foreign_access(map->active.ref, NULL);
 	free_active_ring(map);
+}
+
+static void pvcalls_front_free_map(struct pvcalls_bedata *bedata,
+				   struct sock_mapping *map)
+{
+	pvcalls_front_destroy_active(bedata, map);
 
 	kfree(map);
 }
@@ -433,19 +441,18 @@ int pvcalls_front_connect(struct socket *sock, struct sockaddr *addr,
 		pvcalls_exit_sock(sock);
 		return ret;
 	}
-
-	spin_lock(&bedata->socket_lock);
-	ret = get_request(bedata, &req_id);
+	ret = create_active(map, &evtchn);
 	if (ret < 0) {
-		spin_unlock(&bedata->socket_lock);
 		free_active_ring(map);
 		pvcalls_exit_sock(sock);
 		return ret;
 	}
-	ret = create_active(map, &evtchn);
+
+	spin_lock(&bedata->socket_lock);
+	ret = get_request(bedata, &req_id);
 	if (ret < 0) {
 		spin_unlock(&bedata->socket_lock);
-		free_active_ring(map);
+		pvcalls_front_destroy_active(NULL, map);
 		pvcalls_exit_sock(sock);
 		return ret;
 	}
@@ -821,28 +828,27 @@ int pvcalls_front_accept(struct socket *sock, struct socket *newsock, int flags)
 		pvcalls_exit_sock(sock);
 		return ret;
 	}
-	spin_lock(&bedata->socket_lock);
-	ret = get_request(bedata, &req_id);
+	ret = create_active(map2, &evtchn);
 	if (ret < 0) {
-		clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
-			  (void *)&map->passive.flags);
-		spin_unlock(&bedata->socket_lock);
 		free_active_ring(map2);
 		kfree(map2);
+		clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
+			  (void *)&map->passive.flags);
 		pvcalls_exit_sock(sock);
 		return ret;
 	}
 
-	ret = create_active(map2, &evtchn);
+	spin_lock(&bedata->socket_lock);
+	ret = get_request(bedata, &req_id);
 	if (ret < 0) {
-		free_active_ring(map2);
-		kfree(map2);
 		clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
 			  (void *)&map->passive.flags);
 		spin_unlock(&bedata->socket_lock);
+		pvcalls_front_free_map(bedata, map2);
 		pvcalls_exit_sock(sock);
 		return ret;
 	}
+
 	list_add_tail(&map2->list, &bedata->socket_mappings);
 
 	req = RING_GET_REQUEST(&bedata->ring, req_id);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 09:33:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 09:33:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517354.802563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjGYb-0003SB-RO; Mon, 03 Apr 2023 09:32:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517354.802563; Mon, 03 Apr 2023 09:32:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjGYb-0003S4-Oj; Mon, 03 Apr 2023 09:32:57 +0000
Received: by outflank-mailman (input) for mailman id 517354;
 Mon, 03 Apr 2023 09:32:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pdzk=72=citrix.com=prvs=450632f3c=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pjGYa-0003Ry-GR
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 09:32:56 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 81830ab4-d202-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 11:32:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81830ab4-d202-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680514374;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=T2DC+/OPrwDbeBS4v/+rJb1xT1/fC0TmvToGkyyaFpk=;
  b=Vb++4ivAJ98oU3zThEQzwRNI1RBC4bZWFUPqilCoAmHkIfa4V0ueCc49
   zoxnZ9GThJVgHAJjTukAqhQ35wWmq4B5jA+I4jPRdsgguSL28P0hiZxnk
   co52g//8YzFfKitjCbZhA6Km6e/QOvbyGYxfmPNwZh1BMrRGf1ntK7DAg
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106520249
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:l8VL2aqdlaonzn1/kayP8Vs8ttleBmILYhIvgKrLsJaIsI4StFCzt
 garIBmAOPvcZ2OmL4t3YI6x8kIE6sXTz9MyQFFk/CFnEC8Qo5uZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WNwUmAWP6gR5weCzyJNVfrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXABdKNTOanOKn+oKASfhF1uYpF5jFPKpK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVRrk6VoqwmpXDe1gVr3JDmMcbPe8zMTsJQ9qqdj
 jueoTmjWktGZbRzzxKp+XLrhtfQkRn7e6g1KIGU/+ZYvEaqkzl75Bo+CgLg/KjRZlSFc8lfJ
 koI9zsGoq079UjtRd74NzWhrXuZ+xIRRddUO+s97g6L1+zT+QnxLm0NVDVMbJovrME7QTYjy
 1qhkNbgBDgpu7qQIVqF/LCSvzK0OAAPIGMCbDNCRgwAi/Hvv4QsphvKR8RkFui+iZvoGlnYw
 yiNsTMlhrM7l8MC3Lm85hbAmT3EjpHUVAMx5wjRdmu49A59P9TjYYG0gXDW4etJNoqeZlmIt
 nsDgNTY6u0SZbmVnTGJaPUAGveu/fntGC3RhxtjEocs8xyp+mW/ZsZA7TdmPkBrP80YPzjzb
 yf7vBhNzIVeMHujcel8ZIfZI98x0aHqGNDhV/bVRtlDeJ58cEmA5i4GTVSR1GDkikRqkaw5N
 ZqBasGqJXIbD619y3yxXeh1+acrxyQ7yUvXRJby1RPh1qCRDFafU7wFLVCNfMgw66fCqwLQm
 /5WL8aQwg9TePH/aCLQt4UUKDg3wWMTXM6s7ZYNL6jaf1QgQTt6YxPM/V8/U6sixIdMjefZx
 EGGWV5xmHj1vizpKSzfPxiPd4jTsYZDQWMTZHJ8ZQzzhyh+Me5D/49EKcJpIOBPGPhLiKctE
 qJbI5jo7uFnEGyvxtgLUXXqQGWOnjyPjBnGASeqaSNXk3VIF12QoY+MkucCGUAz4suLWSgW+
 efIOvvzG8ZreuibJJ++hAiT512wp2MBv+l5QlHFJNJeEG21rtgyd3Kp0qFueptVQfkm+td9/
 1/PaSr0WMGX+9NlmDU3rfvsQ3iV/xtWQRMBQjizAUeePijG5GuzqbJ9vBKzVWmFDgvcofzyD
 di5OtmgaJXran4W6dsje1uqpIpij+bSS0hylVg1RiSVNQ3zUtuN4BCuhKFyi0GE/ZcB0SPeZ
 65F0oMy1WmhUC89LGMsGQ==
IronPort-HdrOrdr: A9a23:MrQ0wais0fnonQkgXQQkTigvOnBQXisji2hC6mlwRA09TyXPrb
 HJoB17726XtN91YhpLpTnuAtj5fZqiz+8P3WB8B9qftUzd2FdAT7sSiLcKoQeAJ8SWzIc06U
 4jSdkcNDSXNzRHZK3BjjVQfexO/DCPytHTuc7ui01pRQtpL41m8gtjEx2aCEEzZCQuP+tcKL
 OsovBDrzCjPVAadN6yCHVAf8WrnayzqLvWJSQCDxQkrDSDhTftyJOSKWn+4isj
X-IronPort-AV: E=Sophos;i="5.98,314,1673931600"; 
   d="scan'208";a="106520249"
Date: Mon, 3 Apr 2023 10:32:37 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Bernhard Beschow <shentey@gmail.com>
CC: <qemu-devel@nongnu.org>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Paolo Bonzini <pbonzini@redhat.com>, David Woodhouse <dwmw@amazon.co.uk>,
	=?iso-8859-1?Q?Herv=E9?= Poussineau <hpoussin@reactos.org>, Aurelien Jarno
	<aurelien@aurel32.net>, Eduardo Habkost <eduardo@habkost.net>, Paul Durrant
	<paul@xen.org>, <xen-devel@lists.xenproject.org>, "Michael S. Tsirkin"
	<mst@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, Richard
 Henderson <richard.henderson@linaro.org>, Philippe
 =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>, Chuck Zmudzinski
	<brchuckz@aol.com>
Subject: Re: [PATCH v3 2/6] hw/isa/piix3: Reuse piix3_realize() in
 piix3_xen_realize()
Message-ID: <622b9674-fffd-4634-ac30-d0db3230478e@perard>
References: <20230312120221.99183-1-shentey@gmail.com>
 <20230312120221.99183-3-shentey@gmail.com>
 <f52c41f7-e662-4afd-8ac9-ce2c0da2b1be@perard>
 <7F45B51F-F1E3-4F04-A46F-4C80509C7195@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7F45B51F-F1E3-4F04-A46F-4C80509C7195@gmail.com>

On Sat, Apr 01, 2023 at 10:36:45PM +0000, Bernhard Beschow wrote:
> 
> 
> On 30 Mar 2023 13:00:25 UTC, Anthony PERARD <anthony.perard@citrix.com> wrote:
> >On Sun, Mar 12, 2023 at 01:02:17PM +0100, Bernhard Beschow wrote:
> >> This is a preparational patch for the next one to make the following
> >> more obvious:
> >> 
> >> First, pci_bus_irqs() is now called twice in case of Xen where the
> >> second call overrides the pci_set_irq_fn with the Xen variant.
> >
> >pci_bus_irqs() does allocate pci_bus->irq_count, so the second call in
> >piix3_xen_realize() will leak `pci_bus->irq_count`. Could you check whether
> >pci_bus_irqs_cleanup() can be called before the second pci_bus_irqs()
> >call, or find some other way to avoid the leak?
> 
> Thanks for catching this! I'll post a v4.
> 
> I think the most fool-proof way to fix this is to free irq_count just before the assignment. pci_bus_irqs_cleanup() would then have to NULL the attribute such that pci_bus_irqs() can be called afterwards.
> 
> BTW: I tried running qemu-system-x86_64 with PIIX4 rather than PIIX3 as a Xen guest with my pc-piix4 branch, without success. This branch essentially just provides slightly different PCI IDs for PIIX. Does xl or something else in Xen check these? If not, then I'm still missing something. Under KVM this branch works just fine. Any idea?

Maybe the ACPI tables provided by libxl need to be updated.
Or maybe something in the firmware (SeaBIOS or OVMF/OvmfXen) checks the
ID (I know that the PCI ID of the root bus is checked, but I don't know
whether that's the one that's been changed).
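The irq_count leak discussed earlier in the thread comes down to re-running an allocation without releasing the previous one. Bernhard's proposed fix (free just before the assignment, and NULL the pointer in the cleanup helper) can be sketched as a standalone pattern; the names mirror QEMU's pci_bus_irqs()/pci_bus_irqs_cleanup() but this is an illustration under that assumption, not QEMU code.

```c
#include <assert.h>
#include <stdlib.h>

/* Idempotent (re)init pattern: setup tolerates being called twice, and
 * cleanup NULLs the pointer so a later setup is safe. Illustrative
 * stand-ins for pci_bus_irqs()/pci_bus_irqs_cleanup(). */

struct bus {
	int *irq_count;
};

static void bus_irqs(struct bus *b, int nirqs)
{
	free(b->irq_count);	/* release any earlier allocation first */
	b->irq_count = calloc(nirqs, sizeof(int));
}

static void bus_irqs_cleanup(struct bus *b)
{
	free(b->irq_count);
	b->irq_count = NULL;	/* safe to re-init, safe to cleanup twice */
}
```

With this shape, the Xen realize path can call the setup a second time to override the IRQ mapping without leaking the first allocation.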

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 10:15:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 10:15:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517361.802573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjHDX-0007wd-36; Mon, 03 Apr 2023 10:15:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517361.802573; Mon, 03 Apr 2023 10:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjHDW-0007wW-Vq; Mon, 03 Apr 2023 10:15:14 +0000
Received: by outflank-mailman (input) for mailman id 517361;
 Mon, 03 Apr 2023 10:15:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W/KQ=72=citrix.com=prvs=450b71a79=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pjHDV-0007wD-7U
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 10:15:13 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6854e9c6-d208-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 12:15:10 +0200 (CEST)
Received: from mail-bn7nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Apr 2023 06:15:05 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by SJ0PR03MB6534.namprd03.prod.outlook.com (2603:10b6:a03:38e::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 10:15:03 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Mon, 3 Apr 2023
 10:15:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6854e9c6-d208-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680516910;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=kyjBEWr4eKfciZIpQc5jF5H6iyzmnfPSIHD1SdEfCtg=;
  b=ff8PnGMXVFvycCk2sGi8FlRgjeARZAHoMuvK68x0zNRMkDK9hqKO03xH
   xEv3mxyH+YG9/DQ2teUe97a0/lh+H7zyef73oPn2bM6TUyfwApDKARQS5
   xYOhkRAoyYJrv1z1ER/Azvk0krN8KqTuhw03IbFVuP3hO84OZZ40xY+SI
   c=;
X-IronPort-RemoteIP: 104.47.70.107
X-IronPort-MID: 104127755
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:e7v/ZaJpa6pe36ieFE+R95QlxSXFcZb7ZxGr2PjKsXjdYENS0DVSy
 TEaWTzVPviJYzT9fYh1a4Wz9U1V6MDSmNNmTgtlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPTwP9TlK6q4mhA4gRlPakjUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c4sXktX/
 6FfCwwofzndo8fp3ZiDFeZV05FLwMnDZOvzu1lG5BSAVbMMZ8+GRK/Ho9hFwD03m8ZCW+7EY
 NYUYiZuaxKGZABTPlAQC9Q1m+LAanvXKmUE7g7K4/VvpTGLk2Sd05C0WDbRUsaNSshP2F6Ru
 0rN/njjAwFcP9uaodaA2iv02raVwnymBer+EpWkp9pvpw2W61UrM0UIdwKbudWy2nyHDoc3x
 0s8v3BGQbIJ3FymSJzxUgO1pFaAvwUAQJxAHusi8gaPx6HIpQGDCQAsTDRMddgnv88eXiEx2
 xmCmNaBLSNrmK2YTzSa7Lj8kN+pES0cLGtHbylbSwIAuoHnuNtq1k2JSct/GqmoiNGzASv33
 z2BsCk5gfMUkNIP0KK4u1vAhlpAu6T0c+L83S2PNkrN0++zTNTNi1CAgbQD0ct9EQ==
IronPort-HdrOrdr: A9a23:lJsV9qDt6wCk8mHlHelo55DYdb4zR+YMi2TDt3oddfU1SL38qy
 nKpp4mPHDP5wr5NEtPpTniAtjjfZq/z/5ICOAqVN/PYOCPggCVxepZnOjfKlPbehEX9oRmpN
 1dm6oVMqyMMbCt5/yKnDVRELwbsaa6GLjDv5a785/0JzsaE52J6W1Ce2GmO3wzfiZqL7wjGq
 GR48JWzgDQAkj+PqyAdx84t/GonayzqK7b
X-IronPort-AV: E=Sophos;i="5.98,314,1673931600"; 
   d="scan'208";a="104127755"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JLNLFtP/mLsiN0Spq6kEemtwDqhnWaMyyjNvSsWBS4Amj7Ya+uUaODmSW/4XJTlnbUHlUXlXLmIUsNilA9Ta/TVtcONn6r55cFfG71fsz0ldzUvdKM5RPsQqKlBoNuEJ9v0TLLBvWNXwmotrh2wfoCOyzMvMdeHKMiRhQkTRKHch/3BTMVzUDahK4EFjNbihBNpVlMZAktk/VxOvlaZtFsOWMePFgGBbi65/SXnBXaxtxRw2LCKa0u7PJNA1eb4O7EpZnMiraWxdQljdApjCZ/oD9sXLAJFcc/VorFDNedcKulredIY53/XfwQiTDHHrPHLxioOPEZZ5afp9Swl2oA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=BkDRg+VPEDuLPHDY4NyiLNhDqmUdPszHPJ+4Hgl9ygo=;
 b=DhcRLAqhhOZQNew+oeJUXeRyqtacTvgq3kWDfnT/9sQ0NrW8wA2OZSmGWr2dR3+ElffqNc454lBP33JjEHMuPtynjbhB1CNpmCOh1L1VMqrd+VRgt+CdlL3bPtnxH82lxuTjCiPsMnbAdO0OHp3Ygp5qJpYfvPBbt9XZQpxwTlfuSEsIq8hN5xD5VQaz/o1wQbKi49IzM0G/IbKHaFPpCL3b7LC1hwg4cBp26nwDRnxKoG5EQX1c325sTg5/gEygaBOCZaN7cP9K5qXBKxcG8I8RuoFF+awZdcvba3VmH7ncx+ktYnGbYJVmW/5N8KbO4w5temK7DUwqxW7e8ENaJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BkDRg+VPEDuLPHDY4NyiLNhDqmUdPszHPJ+4Hgl9ygo=;
 b=XtReE/+iXbtnyK8pcscBzJ833wREmKbrrHf3Txuz/s4zOSJ4ho2HVknJNZ12GR6C1gFTDxWkT7FJkPIRMeau31eKYCZdLTMb+C5CfU4x9BDnuyHVMI34v8VUrFHDlL7pr2AFCLC09cg8ChgpI6dIzkYGC1lcdq2stqzXBS9Z4xk=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [RFC PATCH] x86/p2m-pt: do type recalculations with p2m read lock
Date: Mon,  3 Apr 2023 12:14:49 +0200
Message-Id: <20230403101449.93323-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: BN1PR13CA0024.namprd13.prod.outlook.com
 (2603:10b6:408:e2::29) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6360:EE_|SJ0PR03MB6534:EE_
X-MS-Office365-Filtering-Correlation-Id: 37b3c7fd-2b73-4bd3-d6a2-08db342c49d5
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	r4a2/2p7PxspTwtFRTS1DSY18LBpP3penWC+ZlwH+an/NIlxG1kxBVTeFBUd60muuzrCXNFkDsrS6kvw6a8GYmG0YyV5wiBSglkrD1Cc4jxrGkQ26kzMCUovCMhFvkmgCohzxuPzlj7V0HcY4xCBnatHD9/80jEw9aqDGS2UBGA2VxOZVhpzM4ai3RvGrTgFIB8yUwoTU7n+C96zjCupIq7ruMVOMirv6qiDbPgMBcZ/X7zoJ5rysTRbrg05GITkFTPu3a3sv+ZfrQeOYdN1Adld7F3AAEahfjGSdIfAV1uxpnoCEC0uKddwoJdCXgYwLtJCEiN6LcCbTYu1VGwBw6qaeWhtaax7fWIn7SKRXFhaiZbWxf2TVzWtKJhcdSl8MsYSyj/JmZSNKqK/Elyk8mKnuVl2SvCY3vftIpJMkfgTSJmKFRloOP01y+u8ks2gdEckMeDuj39SJvmk5xYqJCqP0pE77dFzfUIhrRv54wUsNW6h8sKfBee0J8n9MDNqF/K8swQrlHrumbET9X5tD+Q7xxb3/7Ugby9f0llkeU8LkZ/AdakjGcSvTMWKFjWL
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6360.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(366004)(39860400002)(136003)(376002)(451199021)(2906002)(83380400001)(6666004)(478600001)(8936002)(41300700001)(8676002)(5660300002)(54906003)(6486002)(66476007)(66946007)(66556008)(4326008)(316002)(36756003)(6916009)(38100700002)(186003)(82960400001)(26005)(6512007)(1076003)(2616005)(6506007)(86362001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RTd0MjkxK296bzI2Q2ZSYmlsR2x5bSswdnhiZDFCYlRhSzNLOHhRQytVNnFs?=
 =?utf-8?B?bC9pVkFUeTMxR1hDNGRpcFhWRVNnZXV6NEpWYjlVRndYeXZDTEJZT0JnS1VO?=
 =?utf-8?B?anlrSmlFd0dLS3paT0k1eWFTbkpYSWo4bzk2WEsya1NaWHV2bGpkVzNLdjQ4?=
 =?utf-8?B?czV5Y284SDBDeVZxMmFkTTlBdGRXbjQ0NFZBUzE3QnZ4UmtjQWdDNElic0gz?=
 =?utf-8?B?c3NVaWpQeEdBWlpxRS9FZ09FVGpjQ0o5R2EzaFZ5Umlrdk5xN001VXU1dWl5?=
 =?utf-8?B?ZVd2MnJKYTR2MC9iU21BMEkzQUtGWW1nK0JsdnpwTnVhei93N0Q1VkVZVFZ2?=
 =?utf-8?B?aGEyNnVJYnBxNHRtWEN6S050a1FVd05idzIzVGZ2QzhBRk9GYzlOb0hSNGhn?=
 =?utf-8?B?SEM2L0dHOWFLblVXSVFILytnSU5aNkF3bUZWQkJjSkpUb2c5aVRhOVRUemFn?=
 =?utf-8?B?dG1waExqaXlyTUwzaGY4VU10bTV6UTlBcjdpTHlpZ1RUNWdhVzJYZTJCNjdy?=
 =?utf-8?B?VWZ3bUU1SEswTG8xaFoxVnZCaGVnYU0wSlZmcURiOUJZMFpueDZGZlZncUVk?=
 =?utf-8?B?L3ZRekoxUUVhYlJCaWRsblh4Ykk1Ym1ZMVlVbk1UdCtEdTdLVHVpeFBsWUs5?=
 =?utf-8?B?eGhHL3cyY1FMdUlRQ2NCVk9nd0Z1NTNOVkdVMzlVSXhJRFVhV1MzRzhrM283?=
 =?utf-8?B?MVM2cy9LWUhhSzFkdDBPSW1rc1F6N1JRTkVqbnNPRGhGUEh4d2NqZGJQeWRR?=
 =?utf-8?B?TGxDOEswNS9RbmdzZ0JQYUMzaHVkeWVBaTdwNUdIdWpYWjNPSUhndzk1Qklx?=
 =?utf-8?B?aFYyRDFMYmo4NGpSak15YWllY0JHeGY2RHowMW1Rd1ozRmdLcWEvUmE1NzEx?=
 =?utf-8?B?dk1hUlF0YTVnL1kxNVFOdzhwRVVpQWh1emJIb1ZpaVpMRlU5T2o3Smlvb3Nu?=
 =?utf-8?B?alNZWHVaRkd6Mm85dlFIUGVZTXhzb1ZqcXF6R2YybjVtRzY4SVlMQnhMYUcy?=
 =?utf-8?B?dlo4VVB2T3Y1NVhwMnh1QlRyRCtxcHRxaXNwVnBoVzRIc2NPc3lWdTJ2Q0NC?=
 =?utf-8?B?eHJuMjE2MC9BeWxVNkJRWXA4TzA0QmJvdFUxMXIvTEZTdXJyNEhsQTVIN0w0?=
 =?utf-8?B?QjVHUFYzKzhNSUo4ajl5OWtBaW00OVhFUW83YktGYmwwbXFSU21kbXU5L21i?=
 =?utf-8?B?ZDc1dEU3L3hIZnRrcXdZU0t5Z2RpMmdlWlJGM3Rqc0kzSEd5c0N4OGVqSEIr?=
 =?utf-8?B?NmF4ZUw1WnNueFY0SWN1TmhpQ25TRHg5NmhOR29vZTB3SHVBVmtBT0xKWmFH?=
 =?utf-8?B?ck5oZmRQY1dzRVpCYkZxQXdPd1U1RERHa2wvUlBYQ2xIZkVOallQVUtpQUI1?=
 =?utf-8?B?TGJFTmtDOGxaZ2hJQXJtdEw0d1pCUkxLUXl2aXpIVnppUGJTdW9zOER1RVVh?=
 =?utf-8?B?aXU3aVZteGtKRE9IN3ZjNUF4Nk11TCs2WlRCd2c0Q2dzV1dzVTkySTFqUTJM?=
 =?utf-8?B?M2lLdUp5TXVZbEVBbzJzUHZiQ2c5cnV1Unl6RFpmdzdOVzBHeklQVGU5VnI0?=
 =?utf-8?B?Zkk3ZWxkL3J5VjI0ZzZvQ0JrMkd0SStNUkh6eklRWFgweExQWHhaMXY4Qktj?=
 =?utf-8?B?SHliUStUaXg5L1g1bFNyb1Yyb3VJUk1XRVpJVUtWem5sQUQyNEw5V0lLTGtB?=
 =?utf-8?B?d2tIRDdJTXNLMWM0V1BialQrNDdjS2NZYk1MT1BTRWI5ZXR0aCsrUkdpUURi?=
 =?utf-8?B?V3M1TUQ0RXRnT0p1R3lMVjlVZUJGbFR4T2Q1dzZ1MmJibnp6Y3VpNzB1Y2ll?=
 =?utf-8?B?R1hOZXlNRTdPQzQyM3BmN1o0NDJWemZjQktDa0RIell5blRjclhjbDVyNGZL?=
 =?utf-8?B?SXRWRk9yNTVidE94NEF5d3Ayc05SbXIvT0ZEelVRemd2RlhKVkxCWXNlbThQ?=
 =?utf-8?B?YU9BZ1YycVQyUVJjalFqNnBiRnZXdnJ3U0tNZ1FTY1VxcHZIdjRRMi9CZzhY?=
 =?utf-8?B?NFhFa2FiZThSaE85TkdHR3hKa3o4dVkvUmRRNCs3WXpRdGM5ZHVoOS9hQlF2?=
 =?utf-8?B?K013c1J2VzM4SHZ1dUtoU25WbG1IWW9WSTNCbzdQRnRQTmJSaUVNc2lEekhI?=
 =?utf-8?Q?uObHq7/IVswXyzu6baCEMs4+8?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	LGdZ4IvkOJ6kxNB8OzqEPVwbXPipV3L7pwdrU+zZiGChl+JPAtCeN131yJXv3OT9yvi8GloHBIi83EACwBrWcoRGfz61fhgloQLSIhKcMYV9ojY9B/hZNFs2JsZO3yZWC+48tjXRupUHvppQFDE/rSzm9sid141qJ/t8QmMC75EzOo+C8pd+ZQLnJxSOCPdUO7aHvmKeTMu9FxSn7Z/hcIOR9SGookMFpdZM2CVHTIiBt5tQ1ZlMmYgo0gfnFTkQ7T2prm//pyzpzMU7W+AxC28k0sxigEh2rLzXlEF4cc9Q/eJxy2tODixEgB4FO/DzdLLeujUt0EnJ6L0JPxJIYxGdIepgdnfUrlVuMrq72k1ySIjlkTHUsJPYDoliCxKK7D5rItlTK60/Ov/q15mDZcJdBZELaFTNlzX2cyBA9Yf+KUI8C5b9kZmcdyhWooEdXQgcs9Bs6RHdautoUTHewTUUdczmotHVYgJKlXHc9nO/+uDK+QQ+3GHguTmb3epU+6yEnmHhwm3gWecJkhC9SjM2wFD8YEdRbWjBXmihdjETb9knSUXFqbsgGO+dOm2/YJg5NY5yZH0Xky6SW5Rz/8AR59pZydYnau+ibJr/9678KnSPN8amexbbQWaCbpWD+Iy5YIJ5bOoGbwtfL/nm/i5YoMTvBG5lRDZDhNZdexwQVWfx3Uu6XU7jywETmpMWQqHPpLvW+TPS65A1aUrU2W/jEGCSGOQAiXkgkvsOciOSDf9ZcCRYWSAs1x0lWgf6Y0fTAwqLUIuI333FhN0LCjQKT3EVvpWtik62FXHAgM0qvcOmAVLGtJUNP+adB/Ow
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 37b3c7fd-2b73-4bd3-d6a2-08db342c49d5
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 10:15:02.8220
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LPpA3ijAkE+/tmy/NVyPp9dME6OBy74Hkicfn53oIZfiFZ/8JHj8qnIC4TAfNg2JLQbKbjHNxzPy7bRfqg93Jg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6534

Global p2m type recalculations (as triggered by logdirty) can create
so much contention on the p2m lock that simple guest operations like
VCPUOP_set_singleshot_timer on guests with a high number of vCPUs (32)
stop completing in a timely manner, to the point that Linux kernel
versions that still use the VCPU_SSHOTTMR_future flag with the
singleshot timer will cease to work:

[   82.779470] CE: xen increased min_delta_ns to 1000000 nsec
[   82.793075] CE: Reprogramming failure. Giving up
[   82.779470] CE: Reprogramming failure. Giving up
[   82.821864] CE: xen increased min_delta_ns to 506250 nsec
[   82.821864] CE: xen increased min_delta_ns to 759375 nsec
[   82.821864] CE: xen increased min_delta_ns to 1000000 nsec
[   82.821864] CE: Reprogramming failure. Giving up
[   82.856256] CE: Reprogramming failure. Giving up
[   84.566279] CE: Reprogramming failure. Giving up
[   84.649493] Freezing user space processes ...
[  130.604032] INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
[  130.604032] Task dump for CPU 14:
[  130.604032] swapper/14      R  running task        0     0      1 0x00000000
[  130.604032] Call Trace:
[  130.604032]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
[  130.604032]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
[  130.604032]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
[  130.604032]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
[  130.604032]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
[  130.604032]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
[  549.654536] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
[  549.655463] Task dump for CPU 26:
[  549.655463] swapper/26      R  running task        0     0      1 0x00000000
[  549.655463] Call Trace:
[  549.655463]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
[  549.655463]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
[  549.655463]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
[  549.655463]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
[  549.655463]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
[  549.655463]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
[  821.888478] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
[  821.888596] Task dump for CPU 26:
[  821.888622] swapper/26      R  running task        0     0      1 0x00000000
[  821.888677] Call Trace:
[  821.888712]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
[  821.888771]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
[  821.888818]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
[  821.888865]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
[  821.888917]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
[  821.888966]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14

This is obviously undesirable.  One way to bodge the issue would be to
ignore VCPU_SSHOTTMR_future, but that's a deliberate breakage of the
hypercall ABI.

Instead, lower the contention on the lock by doing the recalculation
with the lock held in read mode.  This is safe because only the
flags/type are changed; there's no PTE mfn change in the AMD
recalculation logic.  The Intel (EPT) case is likely more complicated,
as superpage splitting for diverging EMT values must be done with the
p2m lock taken in write mode.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
I'm unsure whether such a modification is fully safe:  I think changing
the flags/type should be fine, as the PTE write is performed using
write_p2m_entry(), which must be atomic (the guest is still running
and accessing the page tables).  I'm slightly worried about PTE
readers not using atomic accesses (ie: the pointer returned by
p2m_find_entry() should be read atomically), and about code assuming
that a gfn type cannot change while holding the p2m lock in read mode.

Wanted to post early in case someone knows of any showstoppers that
make this approach a no-go, before I try to further evaluate the
callers.
---
 xen/arch/x86/mm/p2m-pt.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index cd1af33b67..f145647f01 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -486,9 +486,6 @@ static int cf_check do_recalc(struct p2m_domain *p2m, unsigned long gfn)
         p2m_type_t ot, nt;
         unsigned long mask = ~0UL << (level * PAGETABLE_ORDER);
 
-        if ( !valid_recalc(l1, e) )
-            P2M_DEBUG("bogus recalc leaf at d%d:%lx:%u\n",
-                      p2m->domain->domain_id, gfn, level);
         ot = p2m_flags_to_type(l1e_get_flags(e));
         nt = p2m_recalc_type_range(true, ot, p2m, gfn & mask, gfn | ~mask);
         if ( nt != ot )
@@ -538,9 +535,9 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa)
      */
     ASSERT(!altp2m_active(current->domain));
 
-    p2m_lock(p2m);
+    p2m_read_lock(p2m);
     rc = do_recalc(p2m, PFN_DOWN(gpa));
-    p2m_unlock(p2m);
+    p2m_read_unlock(p2m);
 
     return rc;
 }
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 10:17:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 10:17:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517364.802583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjHFc-0008VT-Fm; Mon, 03 Apr 2023 10:17:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517364.802583; Mon, 03 Apr 2023 10:17:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjHFc-0008VM-Bz; Mon, 03 Apr 2023 10:17:24 +0000
Received: by outflank-mailman (input) for mailman id 517364;
 Mon, 03 Apr 2023 10:17:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OGRT=72=citrix.com=prvs=45084431a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjHFb-0008VE-87
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 10:17:23 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b724bc29-d208-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 12:17:21 +0200 (CEST)
Received: from mail-bn7nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Apr 2023 06:17:01 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH7PR03MB6995.namprd03.prod.outlook.com (2603:10b6:510:12f::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.28; Mon, 3 Apr
 2023 10:16:58 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%5]) with mapi id 15.20.6254.030; Mon, 3 Apr 2023
 10:16:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b724bc29-d208-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680517041;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=NJHBp8nY60BOtf1vDWBU2/LaHN57DMrJQ7Fa47wamQ4=;
  b=UtQwbGWnepMshyZ77M6JfhLfkudJsZjsEgsHMH7iR7/1eoGJU0d++Hxj
   Nf2+deF40423cFeO1Wu9Kp3iyIVWN2E0kChdxu3kYJE17NunaFObVGTxF
   U23wbuigrVrkhELg5Ny+JzviuT5BWw1PnyqTpkaPi4PYA9TmG9j/udB/Y
   8=;
X-IronPort-RemoteIP: 104.47.70.108
X-IronPort-MID: 103470109
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:cUGwyqif1xAN3cxzictcwVL4X161TBEKZh0ujC45NGQN5FlHY01je
 htvUWuHO6yCYTHyLt0iOY+yph8CvcTUydJkQAo6/i4yQSkb9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWi0N8klgZmP6sT4AeFzyB94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQXDzIHSxaA3Nuuwb2pYdQ22McJDs/0adZ3VnFIlVk1DN4AaLWaG+Dgw4Ad2z09wMdTAfzZe
 swVLyJ1awjNaAFOPVFRD48imOCvhT/0dDgwRFC9/PJrpTSMilMpluG1YLI5efTTLSlRtm+eq
 njL4CLSBRYCOcbE4TGE7mitlqnEmiaTtIc6TeXjqqEy2QXCroAVIDA2aH+DrKiXsECzcMJ+B
 nAZpwg1v4FnoSRHSfG4BXVUukWsrhMaHtZdDeA+wAWM0bbPpRaUAHAeSTxMY8Bgs9U5LRQI/
 FKUm9LiBRR0raaYD3ma89+8sjeaKSUTa2gYakc5oRAt5tDipMQ5iE3JR9M6SKqt1IStSXf33
 iyAqzU4i/MLl8kX2q6n/FfBxTWxupzOSQ1z7QLSNo640j5EiEeeT9TAwTDmATxodt7xooWp1
 JTcp/Wj0Q==
IronPort-HdrOrdr: A9a23:rh5QuqFqAHsk1/C1pLqF9ZLXdLJyesId70hD6qkvc3Fom52j/f
 xGws5x6faVslkssb8b6Km90dq7MBThHPlOkPQs1NaZLXPbUQ6TQL2KgrGSoAEIdxeOk9K1kJ
 0QCJSWa+eAc2SS7/yb3ODQKb9Jrri6GeKT9J/jJh9WPH5XgspbnmNE42igYytLrUV9dPgE/M
 323Ls6m9PsQwVeUiz9bUN1LdTrlpnurtbLcBQGDxko5E2nii6p0qfzF1y1zwoTSDRGxJYl6C
 zgnxbi7quunvmnwluEvlWjo6h+qZ/E8J9uFcaMgs8aJnHFjRupXp1oX/mvrS04u+am7XctiZ
 3prw07N8p+xnvNdiWeoAfr2SPnzDEygkWShGOwsD/Gm4jUVTg6A81OicZwdQbY0VMpuJVZ3L
 hQ12yUmpJLBVeY9R6NreTgZlVPrA6ZsHAimekcgzh2VpYfUqZYqcg68FlOGJkNMSrm4MQMEf
 VoDuvb+PFKGGnqJEzxjy1K+piBT34zFhCJTgwrvdGU6SFfmDRDw04R1KUk7wA93aN4b6MBy/
 XPM6xumr0LZNQRd7hBCOAIRtbyInDRQDrXWVjiYWjPJeUiATbgupT36LI66KWBY5oT1qY/n5
 zHTRdxqXMyQUTzEseDtac7sywleF/NHwgF9/suoqSQ4tbHNf7W2Gy4OR4TevKb0rYi6paxYY
 f1BHpUa8WTWVcGV7w5mTEWYKMiWkX2YPdly+rTZGj+0v4jCreawNAzI8yjbYbFIHIDZl7VJE
 clcXzaGPhgh3rbL0MQxiKhFE/QRg==
X-IronPort-AV: E=Sophos;i="5.98,314,1673931600"; 
   d="scan'208";a="103470109"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U5Lur6ZyE0aACszxuCNOIXsT6RRjPrhVIVvHgKHoclm829RtHI3+Z08iFsyahZyOj94qB8coR7J2EpOinJ1lH5oQjU7fjhSCXSQFSxDc2ocJQ9oyYxe4dd2pfu4TNzIukXujds3StmkSRdesZLz+OgVi135RhxNVbTFmy7RD+oHrWw8mLXEzlYE0ZmRin/lGB18CZM0ZpMBzz/St4s7LHkGa/LwQuiziBYUNjdXnHkZLpJ+6T7VaRO/lFd3Wu+zqRWlE2tQomO67p3cNj1Zme+pezHp84ndoFnpJ8xr9F7tXS5aF3m66DgfkcwS5RuaMgyPnLfJBD/nxKA1ZVJJOPg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ybvD7N7hjF7U5elqg+fiOU6qIf03CIqjL/+AdVJOIWg=;
 b=FHmsqiINj7FQ0uXpccFF+uvDRGkotWDy7t/fdePZohheXwfB36Hd9EcLpKpnk5xY7Tg+Ayhw+pBlHwhyDr6YpG63NndpT4AKxBnmmCpn7GcVBnQPpKEVl1PaC3l7UpQIMm+8BGZgx8BSpWY/Q4WuZyaS26EMiqcOBCzqPXip3phO+ZabtbakotFkRMlLaCDuILA5givxfElzSsTdlkiglhnAVbU8u48UdV3p+oA9OM652Jqx6lUu7fSO8Z2jehVY9KK6AaqSfIbeYIJb0ozRtaTosuFvn2QxB7izQ0PF+MTSt1oldMhVXuzKLEP91egSBk819q9uftcvoYWSDpFQHg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ybvD7N7hjF7U5elqg+fiOU6qIf03CIqjL/+AdVJOIWg=;
 b=PbQekA4iIB5zDkXUwEgJNxL7SXYXtPu3RoNSrMHuiaCZFfkZSP3otWvKZmsyx6WaNW4TI3AU5shbVG5VN1HygYEP/ZAU2aptahVC4/KYN3TYV4/oCoDFj2I8UmFofOzwasMB0DlzUBEWtzuGsFrj4twf42IBl0xLCmJsKZvLTSo=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <4b76def9-9940-ccf0-8050-12ddf2c1253c@citrix.com>
Date: Mon, 3 Apr 2023 11:16:52 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/pci: Correct ECS handling with CF8/CFC emulation
Content-Language: en-GB
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20230331175719.500285-1-andrew.cooper3@citrix.com>
 <ZCqVEHe1Qo3skeVf@Air-de-Roger>
In-Reply-To: <ZCqVEHe1Qo3skeVf@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LNXP265CA0046.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5c::34) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH7PR03MB6995:EE_
X-MS-Office365-Filtering-Correlation-Id: bb4db509-371d-41e7-d5c5-08db342c8eb9
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GstqtAe623VcD3PaR5SL1OsCm35wA/8D/WdrH/9lAOdpOctcedUXTPUe31HVANhrkveJAQYZjIChO0hxxqAKNNst8fdkzTjiWyQNw+xDqesm3VAQOdbU2WwprdnAOMMU+hmTnF62n3iLbw5qOUxp1j6leRqoRW8RE43/NZFjqyTEn1DmoqGmk/IjjMyQR2Z3jbesgkK0ZseDj6IneLBDD3kgyUbo8P/IYVc0xavu7UUYlh4+u8oIvtoZKDIHWXbByra5T+L6j6ITPm/CMYQhxRNtp6BjahRHDyd6cNBycw8gMlou6ijEqWczu6Gjtq/UgY2alcHki504VMUwPiB9umwW69ec120QpLnR1S8oU/smP8hsmqYDx4ZRMLCBId28nbFcP99BW4XbKlmwSLxd7Pt4LuTbC8fTvz/nwkSzPuGgFD2VV2VdOs7bisUA9X3guFrbmNTY2wrc5xL9MpPaBQ69oLo9S99Yo0qGV0cR5R0dshbqN+5nulPC+uSwPM2Usx3IUxeuzy1X6hiKxdsV/elpy9j9cOobEoUO6tpCCTjmSCaxqk2i5sEW3Fu2pVKo/Ub89Cp+1a0IyONYcqrcA6DIa5LDcVUuvcGXH/3oTUTtO/cTyx3ZsyWMzY5T3SH6boWYbLAGyQvB3WoGBDlsxA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(136003)(39860400002)(376002)(396003)(346002)(451199021)(30864003)(2906002)(6512007)(6506007)(26005)(38100700002)(53546011)(31696002)(86362001)(83380400001)(36756003)(2616005)(82960400001)(186003)(6666004)(41300700001)(6862004)(6486002)(8936002)(8676002)(31686004)(6636002)(54906003)(4326008)(66556008)(66476007)(66946007)(478600001)(316002)(37006003)(5660300002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MlpSUXlZbFJMRmJ0RWRQQ0g2OUZlUjd0U09mSlcrZHBHcldiYVNqdzZoQU9C?=
 =?utf-8?B?MVR6WXNaVjZMaE9nT2FjU051d09DT2gxRHhteEw2dHRqMEprdklEb1QreE90?=
 =?utf-8?B?ejZvY0Jub0pDWE52Q013OWZHSzBDRGFqaUFCSkNGUzQwQklYRFY5NWtKRkp2?=
 =?utf-8?B?RTg4MENDU0ZzbWJpd2trbFlWQ3B4MGZtVk5zOEEzb1dLU1dOZW56aFZRQUNN?=
 =?utf-8?B?SGliL1BvVitLZXFtQWpXNHdPWEcyUXV3RGlROTVnMjQ5Z1AvaEJlWFpkSTRV?=
 =?utf-8?B?NVRrS2Q3L3dWbFRNdFRiSm82VWJFeFdVS2tobXNsejBISldBWVc2VUEzSHNz?=
 =?utf-8?B?b3drREE0ZEM5aXdwU1lQeWZsQ1B6MzNMVFUrdlJqNG9wMzNQQTdMMkFadUYy?=
 =?utf-8?B?MUJuUjQxU2FQdm81Mlhnekx1dEdXaDEwaHE1SjljTEtZM2U0OHBJUEZIR0Qw?=
 =?utf-8?B?SVJZVkZXS0dQQnZBSjNYUWJTdzcrV084Z1ZFcWRtWFJDUS91bkIwMm5FS3li?=
 =?utf-8?B?VGI1MUJ6TksxVEhZZnpBT3ZLYmZxVFZWMVNZRWNRdjdnNVVwMmQzK1FiemdY?=
 =?utf-8?B?QTB6WjZBYVhacXdmd0FwbzZudTRvNmN1VXA3dlBNalp3ZzRsT2o2eFRsQ2JS?=
 =?utf-8?B?dVl3NWpML2J6Qng4TVN4MWsyM2NObWV4cjkzTTlUblRlZGpocFhBSHZUS3NI?=
 =?utf-8?B?UG52TmVQd0d5RndPVzFMU2JlbFVOSm51QVRzOWljc2dmR0lZWWhMZW1CTW5Q?=
 =?utf-8?B?M0RIUDZ1M2k0ZU53NjkrMUhIdTlqV3F3Sit6cmIxQUxiOUpsc0thN2g1TUNE?=
 =?utf-8?B?UUczSjRsejVCSEw2V3lDcFN3dytibUN2akU2V0U2RTBkZmFlVDhBVnBpQ3hG?=
 =?utf-8?B?cmZpb3FpV2pvbXdnaS9BcHhiY1dVVDFzQlExc0NLYytKVmZyekh3OVZjUFV2?=
 =?utf-8?B?ZEIvNGJOcWFiT2NqQmRoSjRNajAyU1llNUFHSy9pSjhzU3pjaWxCaHFiZGJu?=
 =?utf-8?B?QWk3SkU4R2MzUjZqdnp0WHlDcC9lOW1DV1I0MUVZYURJeGdKbHduZ05HSGVZ?=
 =?utf-8?B?a1hjd3JXWmJiUTNVcHRzUUJGWGJQSkswK2VEZERpUEJiOVVIaWdnaGdjbDlw?=
 =?utf-8?B?M3QyQ1lBamJSN2R3UDhqNWFXSU84R1RmS0JjMDYzcWVpYm5PTXhBWnQxaXRB?=
 =?utf-8?B?aEVqb29zWXdiaEJTak55L3VENFhDRzFhMWJLcWZJU0JHY3pCSWx5bDJvclVC?=
 =?utf-8?B?djBoQmNIeGpyOCtRNEliWndRQloxYklaSUdPaEVqNFRwMDEwVGE4MXp5Yzlt?=
 =?utf-8?B?RU91QzlVOVNvQm5xcFhEaUpBOGJzSHJyUk11dkFDa2RkQ3RjS1diZjRlSGp4?=
 =?utf-8?B?QlJ1d1hxTFc0NzZpTi82WTJobDdhWjl1RTlPM09rWVZiQmF0STJUQWpnSkZR?=
 =?utf-8?B?bEdGckRLS0Y5Y3NFQTQrVHdCeG1YZHA2VG9BbWxObjUwL0N1VE1VS3RYTkVw?=
 =?utf-8?B?SlpDNE8xYjY3UVhxSnFvYTZmMUJZQTA0cnE2QWlsekxyVjRNazVSTVpESTdn?=
 =?utf-8?B?TkliSEx6YzBkZnZ6a0o0WTR6NHgyenJPZzJlSkhkck9xUHNHOWRmcUI2SVJF?=
 =?utf-8?B?OGNxSUM3QXVDbU1NTGVDYTViL2FZSzdFM3FERDhhSkpyQzRka09oZkxkVkRk?=
 =?utf-8?B?MURNTVJ0MG5LNVFGM1ZBK2R3c2twRkt6REVOSHBsbGRZc3pyanJ0eEcweWJa?=
 =?utf-8?B?K1MrcmNhVlpGQW9RSHkzMVRBcUpWcWZyY3dNZjNaNkFjV0RkUVYzVUt1ZWxM?=
 =?utf-8?B?Ui9qMWlBb085VEhFZFAvNXFtWmJqNnE1RmRWckd0TzZ1SFdoT0podTRsSTRl?=
 =?utf-8?B?Nk5TcXV3bFdRaDNDaWh3T1lqK2V0VTRmd29DMkFyTytCTm5pR1ROdGRpZjdT?=
 =?utf-8?B?UWNUZ2hQWkNiSEh3ZHlEZmZEdlZzcW5YaGM2VnRkdDJYcU1LZG9MblFoaEMv?=
 =?utf-8?B?S215MUF0bU1lR3pkV1Z5QmlXeU9VbklEcWJ3M09HZHJuUVpoNnQvN3h3elZx?=
 =?utf-8?B?VXh5V0tzS3IrM3F0K2xSQ21LcWZRVTRNRmtzSm5JWFhWaWpXb216cjNPcjhG?=
 =?utf-8?B?NVYvTy9QVDVaZTJLOHZWK2tSRDc5NjFrTDJ0V1hNa29KV0hhdzYra2JRQ0FU?=
 =?utf-8?B?VWc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	6xQ7WrnMkvZOy0HC6j5Ux70YVQIxeMJbsq5NhhW83077hZINysldrrU6ShmesiyzVhuCGVfa5PXwVZNLaGA3ZAyXFR6oI5y8sa4knGyw+rwBi9945Xk+W7/Wselats0HyidgQGUGdL4gOrSRv10f+ZXkTW3nsg4fTuXcbyoS+QsYMYM7OPC5kVXkhMrTOt7Tg/5KoxlNF+N6O1z+Tz8BotPkIKJmlzgM1xObRRM4SiGfl6Xe4SmTJYg6bd0yG/25QziJcEJ8uM7Ip7Rgd2fSQNHTRVJU/JXunAVjSsvbh+yA22+f4gtM8iJrdYQqjJJndvr+1onzNNAdYItyBQM1Zav/UZQwXWt9swcNA2ratA0b6G5P22HWpWaqa6dTOToUWG7WUIR7c0BMQ4+l1z95FyV1KM+dtmfB7YzBGcIM8upE+japTBGic/KeM8G70ctzQ4p+R9zT32l/0LHTmtjgLw66mT2NAAbsFUfi4cNQcUcD5pG1X/+bYXHFUYJOW1ZnOtXZMdkGAzPl+YZJCcqIrwa9gPHZFdsWPTP1YB+MWVImmANSAUgB+XKJvRc7CtOQNS7RKmzZ4gWUPq8rbGRixR4l/fNOQfmSqbkSBYVoUFgNk+QWWNbGKHnKCev42BuKTFoEe3kmnUF0sNwsDwvQyVvaKTRZvRRzNBOkwWxXaVaJgIJHfTuXjItX+/falFDIkwxVetkPSZv9zTZ3s4os+AS1keIc+XbTxXXdu/Xo45DLh6Ci4M6V1u8SB4mpP0hCQ8uWGq00b6qGK67Q9amxl11plvC2bekRD1tVbsMc5VfGkgH4fEdQEgMpHCJ+Ljou
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bb4db509-371d-41e7-d5c5-08db342c8eb9
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 10:16:58.3744
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: RTeM3/YP8WHkeszVLhnoJvM2RvVl+J4FsxU6TaD+Yjxt2mk4zKKJ2irrGFKNYZlATTveczY0ExWP2HzyTUjBF3NfZX8sFZNdvrCRvtaZJzc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR03MB6995

On 03/04/2023 9:57 am, Roger Pau Monné wrote:
> On Fri, Mar 31, 2023 at 06:57:19PM +0100, Andrew Cooper wrote:
>> This started by noticing the dubious Fam17h check in
>> arch_ioreq_server_get_type_addr(), and then realising that checking the host
>> CF8_EXT setting is utterly bogus for the guest <-> qemu emulation path.
>>
>> What should be consulted here is the guest's MSR_AMD64_NB_CFG setting, but a)
>> there isn't one, and b) the vestigial remnants of the cross-vendor migration
>> logic cause MSR_AMD64_NB_CFG to be unconditionally read-as-zero, making the
>> CF8_EXT path unused by any suitably-written OS in the first place.
>>
>> MSR_AMD64_NB_CFG really has been removed on Fam17h (it's now just a read-zero,
>> write-discard stub), and the ECS extension is unconditionally active, meaning
>> it is not correct for Xen to ignore the ExtRegNo field on newer AMD CPUs.
>>
>> It turns out that Xen even had this behaviour in 4.5 and earlier, with
>> this problematic CF8_EXT checking being added in 4.6.  Therefore, revert back
>> to Xen's older behaviour - it is objectively less wrong than the current
>> logic.
>>
>> While fixing this, get rid of hvm_pci_decode_addr() - it is more complicated
>> to follow (and to call) than using the CF8* macros in the calling context.
>> Rename CF8_ADDR() to CF8_REG() to better describe what it does, and write a
>> comment explaining all about CF8/CFC accesses.
>>
>> There's one rare case when CF8_EXT is visible to guests, and that is for a
>> pinned hwdom.  Right now, we permit such a dom0 to modify the CF8_EXT bit, but
>> this seems like a very unwise idea.  Leave a TODO for people to consider.
> One weirdness I've noticed is that for vPCI we decode the accesses
> taking the extended CF8 bit after this change, but then if the access
> is relayed to the hardware using vpci_{read,write}_hw it will be
> forwarded to the hardware using pci_conf_{read,write}<size> which
> doesn't have support for CF8_EXT.  So if the underlying hardware
> doesn't have MMCFG support and the reg is > 255 it will be dropped.

It is important to stress that this change does not influence whether
the guest issues ECS accesses or not.  All it does is change Xen's
handling of such accesses.

Previously vPCI blindly ignored ECS accesses, so the vPCI layer
effectively truncated them to BCS accesses.

Now, from your analysis, when MMCFG isn't active, Xen's PCI layer will
effectively terminate ECS accesses with default behaviour, even on
systems where IO ECS is available.

So we've changed one valid behaviour for a different valid behaviour.
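To make the before/after concrete, here is a small standalone sketch of the
decode (macros reconstructed from the patch under its names; not the exact
Xen source):

```c
#include <stdint.h>

/* Reconstructed sketch of the CF8 decode discussed above (field layout:
 * enable bit 31, ECS nibble bits 27:24, BDF bits 23:8, low register
 * address bits 7:2). */
#define CF8_BDF(cf8)     (((cf8) & 0x00ffff00u) >> 8)
#define CF8_ENABLED(cf8) (!!((cf8) & 0x80000000u))

/* Old vPCI behaviour: ECS nibble ignored, the access truncates to BCS. */
#define CF8_ADDR_LO(cf8) ((cf8) & 0x000000fcu)

/* New behaviour: ECS nibble folded in as register address bits 11:8. */
#define CF8_REG(cf8)     ((((cf8) & 0x0f000000u) >> 16) | ((cf8) & 0xfcu))

/* A CF{C..F} data cycle then decodes to CF8_REG(cf8) | (port & 3). */
```

With e.g. cf8 = 0x8100c040 (enable set, ECS nibble 1), the old decode yields
register 0x40 while the new one yields 0x140, which is why an ECS-aware
guest can now reach registers above 255 in the emulation path.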


(Quick tangent...  Our PCI handling is currently very dumb. 
pci_mmcfg_read() returns its value by pointer but the callers never
check.  Swapping it to return by value would improve code gen quite a
lot.  Also, when MMCFG is active we still pass BCS accesses to IO ports.)
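A by-value variant could take roughly the following shape (hypothetical
helper name and stubbed body for illustration; the real pci_mmcfg_read()
signature differs):

```c
#include <stdint.h>

/*
 * Hypothetical return-by-value shape for an MMCFG read helper.  Failed or
 * malformed accesses return all-ones, matching what a config read to a
 * non-existent device yields, so callers need no separate error path.
 */
static uint32_t mmcfg_read_val(unsigned int reg, unsigned int len)
{
    uint32_t val = 0x12345678u;     /* stand-in for the real MMIO read */

    if ( len == 0 || len > 4 || (len & (len - 1)) ||
         reg > 0xfff || (reg & (len - 1)) )
        return ~0u;                 /* out of range or misaligned */

    return val & (0xffffffffu >> (32 - 8 * len));
}
```

Callers can then just consume the value, and the common "did it fail?"
check collapses into comparing against ~0 where anyone actually cares.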

So I think we do want to improve Xen's general behaviour too, but this
difference here doesn't concern me.

>
>> Fixes: e0fbf3bf9871 ("x86/AMD: correct certain Fam17 checks")
>> Fixes: 2d67a7a4d37a ("x86: synchronize PCI config space access decoding")
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> Whoever reviewed those two patches originally was clearly a fool...
>> ---
>>  xen/arch/x86/hvm/io.c             | 24 ++++++------------------
>>  xen/arch/x86/hvm/ioreq.c          | 19 ++-----------------
>>  xen/arch/x86/include/asm/hvm/io.h |  4 ----
>>  xen/arch/x86/include/asm/pci.h    | 26 ++++++++++++++++++++++++--
>>  xen/arch/x86/pv/emul-priv-op.c    | 19 ++++++-------------
>>  5 files changed, 38 insertions(+), 54 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
>> index 5ae209d3b6b3..b0d3c236e985 100644
>> --- a/xen/arch/x86/hvm/io.c
>> +++ b/xen/arch/x86/hvm/io.c
>> @@ -248,20 +248,6 @@ void register_g2m_portio_handler(struct domain *d)
>>      handler->ops = &g2m_portio_ops;
>>  }
>>  
>> -unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
>> -                                 pci_sbdf_t *sbdf)
>> -{
>> -    ASSERT(CF8_ENABLED(cf8));
>> -
>> -    sbdf->bdf = CF8_BDF(cf8);
>> -    sbdf->seg = 0;
>> -    /*
>> -     * NB: the lower 2 bits of the register address are fetched from the
>> -     * offset into the 0xcfc register when reading/writing to it.
>> -     */
>> -    return CF8_ADDR_LO(cf8) | (addr & 3);
>> -}
>> -
>>  /* vPCI config space IO ports handlers (0xcf8/0xcfc). */
>>  static bool cf_check vpci_portio_accept(
>>      const struct hvm_io_handler *handler, const ioreq_t *p)
>> @@ -275,7 +261,7 @@ static int cf_check vpci_portio_read(
>>  {
>>      const struct domain *d = current->domain;
>>      unsigned int reg;
>> -    pci_sbdf_t sbdf;
>> +    pci_sbdf_t sbdf = {};
>>      uint32_t cf8;
>>  
>>      *data = ~(uint64_t)0;
>> @@ -292,7 +278,8 @@ static int cf_check vpci_portio_read(
>>      if ( !CF8_ENABLED(cf8) )
>>          return X86EMUL_UNHANDLEABLE;
>>  
>> -    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
>> +    sbdf.bdf = CF8_BDF(cf8);
>> +    reg = CF8_REG(cf8) | (addr & 3);
>>  
>>      if ( !vpci_access_allowed(reg, size) )
>>          return X86EMUL_OKAY;
>> @@ -308,7 +295,7 @@ static int cf_check vpci_portio_write(
>>  {
>>      struct domain *d = current->domain;
>>      unsigned int reg;
>> -    pci_sbdf_t sbdf;
>> +    pci_sbdf_t sbdf = {};
>>      uint32_t cf8;
>>  
>>      if ( addr == 0xcf8 )
>> @@ -323,7 +310,8 @@ static int cf_check vpci_portio_write(
>>      if ( !CF8_ENABLED(cf8) )
>>          return X86EMUL_UNHANDLEABLE;
>>  
>> -    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
>> +    sbdf.bdf = CF8_BDF(cf8);
>> +    reg = CF8_REG(cf8) | (addr & 3);
>>  
>>      if ( !vpci_access_allowed(reg, size) )
>>          return X86EMUL_OKAY;
>> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
>> index 0bdcca1e1a5f..325a9d118e52 100644
>> --- a/xen/arch/x86/hvm/ioreq.c
>> +++ b/xen/arch/x86/hvm/ioreq.c
>> @@ -285,27 +285,12 @@ bool arch_ioreq_server_get_type_addr(const struct domain *d,
>>           (p->addr & ~3) == 0xcfc &&
>>           CF8_ENABLED(cf8) )
>>      {
>> -        unsigned int x86_fam, reg;
>> -        pci_sbdf_t sbdf;
>> -
>> -        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
>> +        pci_sbdf_t sbdf = { .bdf = CF8_BDF(cf8) };
>> +        unsigned int reg = CF8_REG(cf8) | (p->addr & 3);
>>  
>>          /* PCI config data cycle */
>>          *type = XEN_DMOP_IO_RANGE_PCI;
>>          *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
>> -        /* AMD extended configuration space access? */
>> -        if ( CF8_ADDR_HI(cf8) &&
>> -             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
>> -             (x86_fam = get_cpu_family(
>> -                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 &&
>> -             x86_fam < 0x17 )
>> -        {
>> -            uint64_t msr_val;
>> -
>> -            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
>> -                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
>> -                *addr |= CF8_ADDR_HI(cf8);
>> -        }
>>      }
>>      else
>>      {
>> diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h
>> index 54e0161b492c..3f3fb6403ccb 100644
>> --- a/xen/arch/x86/include/asm/hvm/io.h
>> +++ b/xen/arch/x86/include/asm/hvm/io.h
>> @@ -144,10 +144,6 @@ void stdvga_deinit(struct domain *d);
>>  
>>  extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
>>  
>> -/* Decode a PCI port IO access into a bus/slot/func/reg. */
>> -unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
>> -                                 pci_sbdf_t *sbdf);
>> -
>>  /*
>>   * HVM port IO handler that performs forwarding of guest IO ports into machine
>>   * IO ports.
>> diff --git a/xen/arch/x86/include/asm/pci.h b/xen/arch/x86/include/asm/pci.h
>> index f4a58c8acf13..3b814f4ebacf 100644
>> --- a/xen/arch/x86/include/asm/pci.h
>> +++ b/xen/arch/x86/include/asm/pci.h
>> @@ -3,10 +3,32 @@
>>  
>>  #include <xen/mm.h>
>>  
>> +/*
>> + * PCI config space accesses with CF8/CFC:
>> + *
>> + * 1) Write {Enable | BDF | Reg} to CF8 to set an address
>> + * 2) Read or write CF{C..F} to access the register
>> + *
>> + * For sub-dword register accesses, the bottom two register address bits come
>> + * from the CF{C..F} address, not from CF8.
>> + *
>> + * AMD have an extension to this protocol to access PCIe Extended Config
>> + * Space by storing the 4 extra register address bits in the penultimate
>> + * nibble of CF8.  This extension:
>> + *  - Is unconditionally active on Fam17h and later
>> + *  - Has model specific enablement on Fam10h thru Fam16h
>> + *  - Has reserved behaviour in all other cases, including other vendors
>> + *
>> + * For simplicity and because we are permitted to, given "reserved", Xen
>> + * always treats ECS as active when emulating guest PCI config space accesses.
>> + */
>>  #define CF8_BDF(cf8)     (  ((cf8) & 0x00ffff00) >> 8)
>> -#define CF8_ADDR_LO(cf8) (   (cf8) & 0x000000fc)
>> -#define CF8_ADDR_HI(cf8) (  ((cf8) & 0x0f000000) >> 16)
>>  #define CF8_ENABLED(cf8) (!!((cf8) & 0x80000000))
>> +#define CF8_REG(cf8)                                    \
>> +    ({                                                  \
>> +        unsigned int _c = cf8;                          \
>> +        ((_c & 0x0f000000) >> 16) | (_c & 0xfc);        \
>> +    })
> What happens on Intel when the bit is set, is it just ignored?

"the bit" => the ECS nibble, or the CF8_EXT bit?

The ECS nibble is ignored on Intel AFAICT, while the CF8_EXT bit is in a
very AMD-only MSR, so won't exist on Intel.
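For reference, the enablement the removed code was probing is a single bit
in that MSR; as a pure-function sketch (bit position per my reading of the
AMD BKDGs, and taking the MSR value as a parameter rather than doing a
rdmsr):

```c
#include <stdint.h>

/* EnableCf8ExtCfg lives at bit 46 of MSR_AMD64_NB_CFG (0xc001001f) on the
 * Fam10h..Fam16h parts in question, per my reading of the BKDGs. */
#define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT 46

static int cf8_ext_enabled(uint64_t nb_cfg)
{
    return !!(nb_cfg & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT));
}
```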

> We only allow such accesses for dom0 anyway.

And guests running on AMD hardware where CF8_EXT is active on the
northbridge of the core we are instantaneously scheduled on.

>>  
>>  #define IS_SNB_GFX(id) (id == 0x01068086 || id == 0x01168086 \
>>                          || id == 0x01268086 || id == 0x01028086 \
>> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
>> index 5da00e24e4ff..008367195c78 100644
>> --- a/xen/arch/x86/pv/emul-priv-op.c
>> +++ b/xen/arch/x86/pv/emul-priv-op.c
>> @@ -245,19 +245,7 @@ static bool pci_cfg_ok(struct domain *currd, unsigned int start,
>>          if ( ro_map && test_bit(machine_bdf, ro_map) )
>>              return false;
>>      }
>> -    start |= CF8_ADDR_LO(currd->arch.pci_cf8);
>> -    /* AMD extended configuration space access? */
>> -    if ( CF8_ADDR_HI(currd->arch.pci_cf8) &&
>> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
>> -         boot_cpu_data.x86 >= 0x10 && boot_cpu_data.x86 < 0x17 )
>> -    {
>> -        uint64_t msr_val;
>> -
>> -        if ( rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) )
>> -            return false;
>> -        if ( msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT) )
>> -            start |= CF8_ADDR_HI(currd->arch.pci_cf8);
>> -    }
>> +    start |= CF8_REG(currd->arch.pci_cf8);
>>  
>>      return !write ?
>>             xsm_pci_config_permission(XSM_HOOK, currd, machine_bdf,
>> @@ -1104,6 +1092,11 @@ static int cf_check write_msr(
>>          if ( !is_hwdom_pinned_vcpu(curr) )
>>              return X86EMUL_OKAY;
>>          if ( (rdmsr_safe(MSR_AMD64_NB_CFG, temp) != 0) ||
>> +             /*
>> +              * TODO: this is broken.  What happens when dom0 is pinned but
>> +              * can't see the full system?  CF8_EXT probably ought to be a
>> +              * Xen-owned setting, and made symmetric across the system.
>> +              */
> I would assume CF8_EXT would be symmetric across the system, especially
> given that it seems to be phased out and only used in older AMD
> families that were all symmetric?

The CF8_EXT bit has been phased out.  The IO ECS functionality still exists.

But yes, the more I think about letting dom0 play with this, the more I
think it is a fundamentally broken idea...  I bet it was from the very
early AMD Fam10h days where dom0 knew how to turn it on, and Xen was
trying to pretend it didn't have to touch any PCI devices.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 10:56:24 2023
Message-ID: <14d8128d-6f50-99e5-ec98-366318e7be1b@citrix.com>
Date: Mon, 3 Apr 2023 11:55:57 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH RFC 0/9] x86: Merge cpuid and msr policy
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20230329205137.323253-1-andrew.cooper3@citrix.com>
 <ZCVtcR5u/14/WmCU@Air-de-Roger>
 <9108a58f-da8f-14e4-de88-a7c8c8abb0f7@citrix.com>
 <ZCWgHxCL4yXD6CxC@Air-de-Roger>
In-Reply-To: <ZCWgHxCL4yXD6CxC@Air-de-Roger>
On 30/03/2023 3:43 pm, Roger Pau Monné wrote:
> On Thu, Mar 30, 2023 at 01:59:37PM +0100, Andrew Cooper wrote:
>> On 30/03/2023 12:07 pm, Roger Pau Monné wrote:
>>> On Wed, Mar 29, 2023 at 09:51:28PM +0100, Andrew Cooper wrote:
>>>> tl;dr to add MSR_ARCH_CAPS features sensibly, cpu_{featureset<->policy}() need
>>>> to not operate on objects of differing lifetimes, so structs
>>>> {cpuid,msr}_policy need merging and cpu_policy is the obvious name.
>>> So the problem is that there's a chance we might get a cpu_policy
>>> object that contains a valid (allocated) cpuid object, but not an msr
>>> one?
>> No - not cpu_policy.  It is that we can get a cpuid_policy and an
>> msr_policy that aren't at the same point in their lifecycle.
>>
>> ... which is exactly what happens right now for the raw/host msr if you
>> featureset_to_policy() to include MSR data.
> I see, but that's mostly because we handle the featureset_to_policy()
> in two different places for CPUID vs MSR, those need to be unified
> into a single helper that does both at the same point.
>
> I assume not having such pointers inside of cpu_policy makes it
> clearer that both msr and cpuid should be handled at the same time,
> but ultimately this would imply passing a cpu_policy object to
> featureset_to_policy() so that both CPUID and MSR sub-structs are
> filled from the same featureset.
>
> Sorry, maybe I'm being a bit dull here, just would like to understand
> the motivation of the change.

That's pretty much it.  Forcing them to be one object removes a class of
errors, and makes the resulting code easier to follow.

(Based on having tried to do the non-merged approach first, and deciding
that it's not code I'm willing to try putting upstream...)

>> Merging the two together into cpu_policy causes there to be a single
>> object lifecycle.
>>
>>
>> It's probably worth repeating the advice from the footnote in
>> https://lwn.net/Articles/193245/ again.  Get your datastructures right,
>> and the code takes care of itself.  Don't get them right, and the code
>> tends to be unmaintainable.
>>
>>
>>>> But this does mean that we now have
>>>>
>>>>   cpu_policy->basic.$X
>>>>   cpu_policy->feat.$Y
>>>>   cpu_policy->arch_caps.$Z
>>> I'm not sure I like the fact that we now can't differentiate between
>>> policy fields related to MSRs or CPUID leafs.
>>>
>>> Isn't there a chance we might in the future get some name space
>>> collision by us having decided to unify both?
>> The names are chosen by me so far, and the compiler will tell us if
>> things actually collide.
>>
>> And renaming the existing field is a perfectly acceptable way of
>> resolving a conflict which arises in the future.
>>
>> But yes - this was the whole point of asking the question.
> I think I would prefer to keep the cpu_policy->{cpuid,msr}.
> distinction if it doesn't interfere with further work.

Unfortunately that's the opposite of what Jan asked for.  What I have
done, based on the prior conversation is:

struct arch_domain {
    ...

    /*
     * The domain's CPU Policy.  "cpu_policy" is considered the canonical
     * pointer, but the "cpuid" and "msr" aliases exist so the most
     * appropriate one can be used for local code clarity.
     */
    union {
        struct cpu_policy *cpu_policy;
        struct cpu_policy *cpuid;
        struct cpu_policy *msr;
    };

So all the cases where you have d->arch.cpuid->feat.$X continue to work.

Where you pull a cpu_policy out into a local variable there will be no
cpuid or msr infix, but those call sites already have no cpuid/msr
part to their naming.
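The aliasing relies on C11 anonymous-union member access; a minimal mock
(toy structs, not the real arch_domain) shows why existing call sites keep
compiling:

```c
#include <stddef.h>

/* Toy stand-ins; the real struct cpu_policy / arch_domain are far larger. */
struct cpu_policy { unsigned int max_leaf; };

struct arch_domain {
    union {
        struct cpu_policy *cpu_policy;
        struct cpu_policy *cpuid;
        struct cpu_policy *msr;
    };
};

/* All three names denote the same storage, so d->cpuid, d->msr and
 * d->cpu_policy always read back the identical pointer. */
static int aliases_agree(void)
{
    static struct cpu_policy p = { .max_leaf = 7 };
    struct arch_domain d = { .cpu_policy = &p };

    return d.cpuid == d.cpu_policy && d.msr == d.cpu_policy &&
           d.cpuid->max_leaf == 7;
}
```

Assigning through any one member and reading through another is fine here
because all three are pointers of the same type occupying the same bytes.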

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 11:03:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 11:03:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517377.802603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjHy1-000603-8t; Mon, 03 Apr 2023 11:03:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517377.802603; Mon, 03 Apr 2023 11:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjHy1-0005zw-4b; Mon, 03 Apr 2023 11:03:17 +0000
Received: by outflank-mailman (input) for mailman id 517377;
 Mon, 03 Apr 2023 11:03:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dypz=72=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjHxz-0005zo-Ow
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 11:03:15 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0604.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1f9f8b19-d20f-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 13:03:11 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8457.eurprd04.prod.outlook.com (2603:10a6:102:1d8::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 11:03:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.033; Mon, 3 Apr 2023
 11:03:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f9f8b19-d20f-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TTxWN8JN1/kZw5Cftr7clthGSLpLFjA0BKR50jnkAp4WRPxKXTbguD6xtJIvekBYNcEY1j6iiX/0JQ+R3Ccc0PFaz1MlcLyiCfITKkzFcWU+AxrgClZhz7Jp5HUnM+HaPGw+dw92JmYIKfR5I/HC6vf5KPjVqT6MN6lBwNrTQMu+sQoOlo4bLd0xeTXlP/B2nwpo85rM3dU1YLPPJ1cnnB5B7BhTRuQTpT/mIFK30dUp3PKk13XgBh7luU5thWGYeubuBIKPVxgDBh/cQVqgoiLuvU06Neiw5SrEAsz8Mx8CmXXmxoS6A6p71Xbwt4FmxdNxojIfoe073SBEXdxjwQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=stlEyM17FXGz/0HvkYCKDBzaDmPVE+3EdmWVSIMQhbI=;
 b=bNG1yLlsmEZQOIGZ6sERDnD5C856fqnwm+sJ1B55y/S1uQoIk68BURMyBtHiXz7AWkBNdknl6e0znLVrV6SOKhMMt6S2WBqOYnmkIWm+K26uZc/KINhh1/Ma9Iq3jHzOkoBDueCIJTLophgDrl1z9Yv8qC4afoKw36qb6nUPPokOhiLiqZrTP2hnLMgs8Bv23UYhWCawhVxOfTiFg7A5mO89grGlfR87o4l2kSVp4i4mB9tTMaSqHOqOZeoqJ7xSiwJw9c4c6IsTggvOuw3waAjPUPxsZawTYpd5PlTU6oNXfZdrU/6t/Cb0KDwGcDqcXoNR7oRmTSNS8FZG+nxDrg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=stlEyM17FXGz/0HvkYCKDBzaDmPVE+3EdmWVSIMQhbI=;
 b=3kwxwvug9kGJfRYWGcwUtIvDOiEgr5YRPUCMvOuapZU8104YGYjfvKAQ+T134IQ7jhyFJkeuVOG6a0VV0KkX4aNCRpfyK7cSbgqwaNcZBEKLEOu/Lt2TM3pjJezCRDIA65MXIdTNbL5NpD4Lb4yqMmozatnD8JwFNYuQLhjfkyTZxigznCLEvlUZ2fmlh4HoFrUb2EE0qwIKosvupn1qbdrY1oIfrm9iD0sURqm6x4WHu4UGaq5SN7hP32iPbcsZd/NxSOI6PsIDZ8vrhNbHjlLj09UTcJuU2Rv+gJJVBB5EUU8JzMGDfHyPtgOU8SNVmsN9b5RFVCLtMK3DhJC6wA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d05f606b-6a74-dcd6-2ea3-7ca28aad7284@suse.com>
Date: Mon, 3 Apr 2023 13:03:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH 2/5] efi: only set a console mode if the current one is
 invalid
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: marmarek@invisiblethingslab.com, xen-devel@lists.xenproject.org
References: <20221123154525.63068-1-roger.pau@citrix.com>
 <20221123154525.63068-3-roger.pau@citrix.com>
 <c62446e1-8e47-5fa9-1c7b-a441d38711e6@suse.com>
 <ZCWuYjP7L4obvXt9@Air-de-Roger>
 <50fe2ff9-9633-1cbb-4afb-b577778d3edd@suse.com>
 <ZCW2IHKP4GHNmBuk@Air-de-Roger>
 <6dcc187a-5e81-2f36-4104-d9caac148cdd@suse.com>
 <ZCaNwp6dJg5MhRpP@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZCaNwp6dJg5MhRpP@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0035.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8457:EE_
X-MS-Office365-Filtering-Correlation-Id: 44b503a8-60bd-49df-14ae-08db343302b4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 44b503a8-60bd-49df-14ae-08db343302b4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 11:03:09.7992
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wkXWOw90cf/XFu9WZvwt9RYmliXX8ZBwXj+/fAkHQzw73zrExqYIx+EQK+WOZMNQ0Ur60Vk12QD3c28xkAEqNQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8457

On 31.03.2023 09:37, Roger Pau Monné wrote:
> On Fri, Mar 31, 2023 at 08:51:46AM +0200, Jan Beulich wrote:
>> On 30.03.2023 18:17, Roger Pau Monné wrote:
>>> On Thu, Mar 30, 2023 at 06:07:57PM +0200, Jan Beulich wrote:
>>>> On 30.03.2023 17:44, Roger Pau Monné wrote:
>>>>> I guess I'm slightly confused by the usage of both GOP and StdOut; I
>>>>> would assume that if we have a GOP and can correctly initialize it,
>>>>> there's no need to fiddle with StdOut also?
>>>>
>>>> Setting the GOP mode is done last before exiting boot services; this
>>>> may be a graphics mode which doesn't support a text output protocol.
>>>
>>> Right, that's what I was missing.  I assumed that all modes available
>>> in GOP would be compatible with the ConOut mode.
>>>
>>> Would you be OK with leaving StdOut as-is when booted from multiboot2,
>>> or there's a chance of things not being properly setup?
>>
>> On modern UEFI it may be unlikely, but I think it's not impossible (see
>> below).
>>
>>> IMO it's not very friendly to change the StdOut mode if not explicitly
>>> requested, as in the multiboot2 case that gets setup by the
>>> bootloader.
>>
>> May get set up, that is. If it was set up, then yes, we probably should
>> leave it alone unless told to use another mode. I.e. no vga= or
>> vga=current should minimally result in no further mode change. Aiui we
>> can't easily honor vga=gfx-... in that case, so leaving the mode alone
>> there may also be better than trying to guess a mode. The only time
>> where I would think it would be nice to switch by default even in the
>> xen.gz case is if the boot loader handed us the screen in some text
>> mode.
> 
> How would you detect such case?
> 
> ConOut is always text-mode-like because it's an
> EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL interface.
> 
> Would it be a matter of checking whether the current GOP mode is
> valid, and if so leave it as-is unless told otherwise by a command
> line parameter?

I think so, yes.

> I would also like to avoid the unconditional resizing of the ConOut
> interface that's done in efi_console_set_mode(), as that has the side
> effect of changing the GOP mode, so I would only call
> efi_console_set_mode() if there's no GOP.

Or maybe when the set mode isn't text-output capable.

> Not sure it's meaningful to change the ConOut number of cols/rows if
> there's no GOP, maybe it's possible to have some kind of screen that's
> usable for EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL but not as a GOP?

Of course there is. As said, earlier on screens started in 80x25 mode.
Even going to 80x50 or 80x60 is already an improvement. Plus there are
systems which support wider-than-80-cols text modes.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 11:09:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 11:09:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517383.802612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjI3w-0006dd-TG; Mon, 03 Apr 2023 11:09:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517383.802612; Mon, 03 Apr 2023 11:09:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjI3w-0006dW-QG; Mon, 03 Apr 2023 11:09:24 +0000
Received: by outflank-mailman (input) for mailman id 517383;
 Mon, 03 Apr 2023 11:09:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W/KQ=72=citrix.com=prvs=450b71a79=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pjI3v-0006dQ-GU
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 11:09:23 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fa06ae9f-d20f-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 13:09:20 +0200 (CEST)
Received: from mail-mw2nam12lp2041.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.41])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Apr 2023 07:09:14 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by SJ0PR03MB5469.namprd03.prod.outlook.com (2603:10b6:a03:28a::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 11:09:12 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Mon, 3 Apr 2023
 11:09:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa06ae9f-d20f-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680520160;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=v4J3NCj80jQPpfZWL0DWy5puK/iCxs6yTjvQS4+UW8Y=;
  b=HPA5GIvpmEiwDkhU3W3O8zcbeQIGLiQxOxevDrQi7/uJH8ErnHpq1uOc
   KbpNxBx+dM7Ibz4vPQWXK43Sh9h6YQYY3Vq5Gt7nxaFUhBKPT4CemjoNT
   oRlg8Hb5Y389Xgm99b2dugxe79LtYI+fop0tjugQtp9XpAqTcQpl379He
   M=;
X-IronPort-RemoteIP: 104.47.66.41
X-IronPort-MID: 104022316
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,314,1673931600"; 
   d="scan'208";a="104022316"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NvvgfIZ7HE4PDCUUdyqVH/IDs0l3vXHznj9PPIL0J9zvMjGbcOJoQy02KBiWEA+P/ehA5GsQvypLTUchucUjvNlNsrRQ08VQui7I682ikN0DjBzUG5Qsl7FpOnBd7cEY3sIvVRR9kb6u+C7YyHCjUObNqCQYMZz9sqI86RO5q/sKZKLYW5GFWSKLwZnJzwTdidkcncw9TIVS1duWzn67Qi5meJM2plEm80X4w3Q+7I/RPTYDz+XEnTOAIy57bNuSLyEnibd+Mfdhs1VcrGHaDhX2sx1n1M1C6B0gwa99lwgr+1r8mw/jaDQy/f/60EYcDI2PgaVDdzT9HCP511jjGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=t14AAj2zeHUEhhx/uWYA+ay1WChw3nkDJJQ/8Wx9Pg8=;
 b=mJ8dVNHlcA+Sy/uERac9nF3krVvp8bOhZNGZAxFP8CmFF7ecw9aZ8qxlnSvQfmf1P5p2d8QLYFrSYD9svbwAdpGJYjJb8f4Ok8nyQ3Z7ufhu/R999vFfk5duKQ9dB2xfCnADeE2JZIZkLsYO56L0/dwfkbAXrCSwnFNnDzV2EcSusU4p9YWaV/ouuJieoX3HdqDnMsksZRKJxACXQWwA9GDXO4cK8B9kwB5/jEIoUIpTKYODp8BVxhfPTcsNnig2Ab3NQOYeJwItv5QSJh4tyqehFcu2E4BKN6RPxx+VmxpS9/pj/x+TvMj/cAl1G87OMnHZY/X3NZr/2okB15KZ9A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=t14AAj2zeHUEhhx/uWYA+ay1WChw3nkDJJQ/8Wx9Pg8=;
 b=DcKHQO47nOrDSL1rEl6GlJ1zhYy5O4CNf+7VDyvEDtVeHi9adu5d4KdZaCPiXm3KHGm14bVpjozVYHcsZ8mccRECNkreIGk4rPHIOg3/YxMGOkATUOTVTsKjmjyO4qEi1HVgV16gSt0UgS3G7j8fAyzZgQopQkIvwXaVeZQLdps=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 3 Apr 2023 13:09:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Message-ID: <ZCqz0YCFUifIlthC@Air-de-Roger>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <8f2fa47d-89b7-b39c-e60f-edee1de5ca82@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <8f2fa47d-89b7-b39c-e60f-edee1de5ca82@suse.com>
X-ClientProxiedBy: LO4P123CA0575.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:276::7) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6360:EE_|SJ0PR03MB5469:EE_
X-MS-Office365-Filtering-Correlation-Id: 58fa5742-ebc5-4235-149f-08db3433dacb
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 58fa5742-ebc5-4235-149f-08db3433dacb
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 11:09:12.5348
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7lpLMQ8Z7Ntr6GVcLgbwRZKFq184XxS9nt3ilPLqY2nijENSFfjjXRiQypU1i1Hau5AGfKYVUTXN2uUrSLoRJQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5469

On Thu, Mar 30, 2023 at 12:40:38PM +0200, Jan Beulich wrote:
> ... in order to also intercept Dom0 accesses through the alias ports.
> 
> Also stop intercepting accesses to the CMOS ports if we won't ourselves
> use the CMOS RTC, because of there being none.

So it's fine for dom0 to switch off NMIs if Xen isn't using the RTC?
Seems like a weird side-effect of Xen not using the RTC (seeing as we
would otherwise mask bit 8 from dom0 RTC accesses).

Also I'm worried that when Xen doesn't intercept the RTC ports,
accesses from dom0 could be interrupted, for example by the vCPU being
scheduled out: a vCPU might perform a write to the index port and then
be scheduled out, leaving the RTC in an undefined state.

I've read claims online that the RTC is not reset by the firmware, and
since it has a battery its state is kept across reboots, so
interrupting an access like that could leave the RTC in a broken state
across reboots.

> Note that rtc_init() deliberately uses 16 as the upper loop bound,
> despite probe_cmos_alias() using 8: The higher bound is benign now, but
> would save us touching the code (or, worse, missing to touch it) in case
> the lower one was doubled.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v5: Simplify logic in is_cmos_port(). Limit the scope of a local
>     variable. Adjust a comment that's being moved.
> v4: Also conditionally mask top bit for guest index port accesses. Add
>     missing adjustments to rtc_init(). Re-work to avoid recursive
>     read_lock(). Also adjust guest_io_{read,write}(). Re-base.
> v3: Re-base over change to earlier patch.
> v2: Re-base.
> 
> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -27,7 +27,7 @@
>  #include <asm/hvm/vpt.h>
>  #include <asm/hvm/io.h>
>  #include <asm/hvm/save.h>
> -#include <asm/current.h>
> +#include <asm/iocap.h>
>  #include <xen/trace.h>
>  #include <public/hvm/params.h>
>  
> @@ -836,10 +836,18 @@ void rtc_init(struct domain *d)
>  
>      if ( !has_vrtc(d) )
>      {
> -        if ( is_hardware_domain(d) )
> -            /* Hardware domain gets mediated access to the physical RTC. */
> -            register_portio_handler(d, RTC_PORT(0), 2, hw_rtc_io);
> -        return;
> +        unsigned int port;
> +
> +        if ( !is_hardware_domain(d) )
> +            return;
> +
> +        /*
> +         * Hardware domain gets mediated access to the physical RTC/CMOS (of
> +         * course unless we don't use it ourselves, for there being none).
> +         */
> +        for ( port = RTC_PORT(0); port < RTC_PORT(0) + 0x10; port += 2 )
> +            if ( is_cmos_port(port, 2, d) )
> +                register_portio_handler(d, port, 2, hw_rtc_io);

You seem to have dropped a return from here, as for PVH dom0 the
initialization below shouldn't be done.

>      }
>  
>      spin_lock_init(&s->lock);
> --- a/xen/arch/x86/include/asm/mc146818rtc.h
> +++ b/xen/arch/x86/include/asm/mc146818rtc.h
> @@ -9,6 +9,10 @@
>  
>  extern spinlock_t rtc_lock;             /* serialize CMOS RAM access */
>  
> +struct domain;
> +bool is_cmos_port(unsigned int port, unsigned int bytes,
> +                  const struct domain *d);
> +
>  /**********************************************************************
>   * register summary
>   **********************************************************************/
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -220,7 +220,7 @@ static bool admin_io_okay(unsigned int p
>          return false;
>  
>      /* We also never permit direct access to the RTC/CMOS registers. */
> -    if ( port <= RTC_PORT(1) && port + bytes > RTC_PORT(0) )
> +    if ( is_cmos_port(port, bytes, d) )
>          return false;
>  
>      return ioports_access_permitted(d, port, port + bytes - 1);
> @@ -290,7 +290,7 @@ static uint32_t guest_io_read(unsigned i
>          {
>              sub_data = pv_pit_handler(port, 0, 0);
>          }
> -        else if ( port == RTC_PORT(0) || port == RTC_PORT(1) )
> +        else if ( is_cmos_port(port, 1, currd) )
>          {
>              sub_data = rtc_guest_read(port);
>          }
> @@ -436,7 +436,7 @@ static void guest_io_write(unsigned int
>          {
>              pv_pit_handler(port, (uint8_t)data, 1);
>          }
> -        else if ( port == RTC_PORT(0) || port == RTC_PORT(1) )
> +        else if ( is_cmos_port(port, 1, currd) )
>          {
>              rtc_guest_write(port, data);
>          }
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -2131,37 +2131,36 @@ int __hwdom_init xen_in_range(unsigned l
>  static int __hwdom_init cf_check io_bitmap_cb(
>      unsigned long s, unsigned long e, void *ctx)
>  {
> -    struct domain *d = ctx;
> +    const struct domain *d = ctx;
>      unsigned int i;
>  
>      ASSERT(e <= INT_MAX);
>      for ( i = s; i <= e; i++ )
> -        __clear_bit(i, d->arch.hvm.io_bitmap);
> +        /*
> +         * Accesses to RTC ports also need to be trapped in order to keep
> +         * consistency with hypervisor accesses.
> +         */
> +        if ( !is_cmos_port(i, 1, d) )
> +            __clear_bit(i, d->arch.hvm.io_bitmap);
>  
>      return 0;
>  }
>  
>  void __hwdom_init setup_io_bitmap(struct domain *d)
>  {
> -    int rc;
> +    if ( !is_hvm_domain(d) )
> +        return;
>  
> -    if ( is_hvm_domain(d) )
> -    {
> -        bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
> -        rc = rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
> -                                    io_bitmap_cb, d);
> -        BUG_ON(rc);
> -        /*
> -         * NB: we need to trap accesses to 0xcf8 in order to intercept
> -         * 4 byte accesses, that need to be handled by Xen in order to
> -         * keep consistency.
> -         * Access to 1 byte RTC ports also needs to be trapped in order
> -         * to keep consistency with PV.
> -         */
> -        __set_bit(0xcf8, d->arch.hvm.io_bitmap);
> -        __set_bit(RTC_PORT(0), d->arch.hvm.io_bitmap);
> -        __set_bit(RTC_PORT(1), d->arch.hvm.io_bitmap);
> -    }
> +    bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
> +    if ( rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
> +                                io_bitmap_cb, d) )
> +        BUG();
> +
> +    /*
> +     * We need to trap 4-byte accesses to 0xcf8 (see admin_io_okay(),
> +     * guest_io_read(), and guest_io_write()).
> +     */
> +    __set_bit(0xcf8, d->arch.hvm.io_bitmap);
>  }
>  
>  /*
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1234,7 +1234,10 @@ static unsigned long get_cmos_time(void)
>          if ( seconds < 60 )
>          {
>              if ( rtc.sec != seconds )
> +            {
>                  cmos_rtc_probe = false;
> +                acpi_gbl_FADT.boot_flags &= ~ACPI_FADT_NO_CMOS_RTC;
> +            }
>              break;
>          }
>  
> @@ -1249,6 +1252,77 @@ static unsigned long get_cmos_time(void)
>      return mktime(rtc.year, rtc.mon, rtc.day, rtc.hour, rtc.min, rtc.sec);
>  }
>  
> +static unsigned int __ro_after_init cmos_alias_mask;
> +
> +static int __init cf_check probe_cmos_alias(void)
> +{
> +    unsigned int offs;
> +
> +    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
> +        return 0;
> +
> +    for ( offs = 2; offs < 8; offs <<= 1 )
> +    {
> +        unsigned int i;
> +        bool read = true;
> +
> +        for ( i = RTC_REG_D + 1; i < 0x80; ++i )
> +        {
> +            uint8_t normal, alt;
> +            unsigned long flags;
> +
> +            if ( i == acpi_gbl_FADT.century )
> +                continue;
> +
> +            spin_lock_irqsave(&rtc_lock, flags);
> +
> +            normal = CMOS_READ(i);
> +            if ( inb(RTC_PORT(offs)) != i )
> +                read = false;
> +
> +            alt = inb(RTC_PORT(offs + 1));
> +
> +            spin_unlock_irqrestore(&rtc_lock, flags);
> +
> +            if ( normal != alt )
> +                break;
> +
> +            process_pending_softirqs();
> +        }
> +        if ( i == 0x80 )
> +        {
> +            cmos_alias_mask |= offs;
> +            printk(XENLOG_INFO "CMOS aliased at %02x, index %s\n",
> +                   RTC_PORT(offs), read ? "r/w" : "w/o");

I would consider making this a DEBUG message; I'm not sure it's that
useful for a normal end user, and printing to the console can be slow.

> +        }
> +    }
> +
> +    return 0;
> +}
> +__initcall(probe_cmos_alias);
> +
> +bool is_cmos_port(unsigned int port, unsigned int bytes, const struct domain *d)
> +{
> +    unsigned int offs;
> +
> +    if ( !is_hardware_domain(d) ||
> +         !(acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) )
> +        return port <= RTC_PORT(1) && port + bytes > RTC_PORT(0);
> +
> +    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
> +        return false;
> +
> +    for ( offs = 2; offs <= cmos_alias_mask; offs <<= 1 )
> +    {
> +        if ( !(offs & cmos_alias_mask) )
> +            continue;
> +        if ( port <= RTC_PORT(offs | 1) && port + bytes > RTC_PORT(offs) )
> +            return true;
> +    }

Maybe I'm confused, but doesn't this loop start at RTC_PORT(2), and
hence you need to check for the RTC_PORT(0,1) pair outside of the
loop?

> +
> +    return false;
> +}
> +
>  /* Helpers for guest accesses to the physical RTC. */
>  unsigned int rtc_guest_read(unsigned int port)
>  {
> @@ -1256,23 +1330,25 @@ unsigned int rtc_guest_read(unsigned int
>      unsigned long flags;
>      unsigned int data = ~0;
>  
> -    switch ( port )
> +    switch ( port & ~cmos_alias_mask )

Given that the call is gated with is_cmos_port() it would be clearer
to just use RTC_PORT(1) as the mask here IMO.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 11:09:36 2023
Message-ID: <245d1a4d-3bdc-21e3-8ee8-2909d6fe7b60@suse.com>
Date: Mon, 3 Apr 2023 13:09:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 2/3] x86/hvm: Allow writes to registers on the same
 page as MSI-X table
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20230325024924.882883-1-marmarek@invisiblethingslab.com>
 <20230325024924.882883-2-marmarek@invisiblethingslab.com>
 <ZCLNQGXvUBxZbIGS@Air-de-Roger> <ZCLX1qD/FmbF5ulu@mail-itl>
 <540906f7-4543-9d01-2b2b-a3bd70eda74b@suse.com> <ZCLjGhbzGD2jykT9@mail-itl>
 <9eb7b538-4074-4b15-4ea2-67d9cc0bf85d@suse.com> <ZCpUZI6HmzYKDhAz@mail-itl>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZCpUZI6HmzYKDhAz@mail-itl>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 03.04.2023 06:21, Marek Marczykowski-Górecki wrote:
> On Tue, Mar 28, 2023 at 03:03:17PM +0200, Jan Beulich wrote:
>> On 28.03.2023 14:52, Marek Marczykowski-Górecki wrote:
>>> I mean, technically I can probably merge those two handlers together,
>>> but I don't think it will result in nicer code. Especially since the
>>> general direction is to abandon split of MSI-X table access handling
>>> between Xen and QEMU and go with just QEMU doing it, hopefully at some
>>> point not needing msixtbl_mmio_ops anymore (but still needing the one
>>> for adjacent accesses).
>>
>> Hmm, at this point I'm not convinced of this plan. Instead I was hoping
>> that once vPCI properly supports PVH DomU-s, we may also be able to make
>> use of it for HVM, delegating less to qemu rather than more.
> 
> In that case, this code won't be needed anymore, which will also make
> this handler unnecessary.
> 
> Anyway, I tried to merge this handling into existing handlers and the
> resulting patch is slightly bigger, so it doesn't seem to avoid any
> duplication. The only benefit I can think of is avoiding iterating
> msixtbl_list twice (for the respective accept callbacks) on each access. Is
> it worth having somewhat more complicated handlers?

Well, limiting duplication (if possible) is only one aspect. Again
referring to Roger's functionally similar vPCI work, an important (from
my pov) aspect is to avoid consuming another handler slot (via a new
call to hvm_next_io_handler()).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 11:27:13 2023
Message-ID: <be49b5d2-f4a0-44f7-0f6e-56c0e63e9da0@suse.com>
Date: Mon, 3 Apr 2023 13:26:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v5] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <8f2fa47d-89b7-b39c-e60f-edee1de5ca82@suse.com>
 <ZCqz0YCFUifIlthC@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZCqz0YCFUifIlthC@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 03.04.2023 13:09, Roger Pau Monné wrote:
> On Thu, Mar 30, 2023 at 12:40:38PM +0200, Jan Beulich wrote:
>> ... in order to also intercept Dom0 accesses through the alias ports.
>>
>> Also stop intercepting accesses to the CMOS ports if we won't ourselves
>> use the CMOS RTC, because of there being none.
> 
> So it's fine for dom0 to switch off NMIs if Xen isn't using the RTC?
> Seems like a weird side-effect of Xen not using the RTC (seeing as we
> would otherwise mask bit 8 from dom0 RTC accesses).

I haven't been able to find documentation on this single bit in the
absence of RTC / CMOS.

> Also I'm worried that when Xen doesn't intercept the RTC ports,
> accesses from dom0 could be interrupted, for example by the vCPU being
> scheduled out: a vCPU might perform a write to the index port and then
> be scheduled out, leaving the RTC in an undefined state.

I did specifically add "because of there being none" to the sentence
to clarify in which case we avoid intercepting.

> I've read claims online that the RTC is not reset by the firmware, and
> since it has a battery the state is kept across reboots, so
> interrupting an access like that could leave the RTC in a broken state
> across reboots.

I can easily imagine such firmware exists.

>> @@ -836,10 +836,18 @@ void rtc_init(struct domain *d)
>>  
>>      if ( !has_vrtc(d) )
>>      {
>> -        if ( is_hardware_domain(d) )
>> -            /* Hardware domain gets mediated access to the physical RTC. */
>> -            register_portio_handler(d, RTC_PORT(0), 2, hw_rtc_io);
>> -        return;
>> +        unsigned int port;
>> +
>> +        if ( !is_hardware_domain(d) )
>> +            return;
>> +
>> +        /*
>> +         * Hardware domain gets mediated access to the physical RTC/CMOS (of
>> +         * course unless we don't use it ourselves, for there being none).
>> +         */
>> +        for ( port = RTC_PORT(0); port < RTC_PORT(0) + 0x10; port += 2 )
>> +            if ( is_cmos_port(port, 2, d) )
>> +                register_portio_handler(d, port, 2, hw_rtc_io);
> 
> You seem to have dropped a return from here, as for PVH dom0 the
> initialization below shouldn't be done.

Oh, indeed, thanks for spotting. (The excess init is benign afaict, but
I still shouldn't have dropped that "return".)

>> @@ -1249,6 +1252,77 @@ static unsigned long get_cmos_time(void)
>>      return mktime(rtc.year, rtc.mon, rtc.day, rtc.hour, rtc.min, rtc.sec);
>>  }
>>  
>> +static unsigned int __ro_after_init cmos_alias_mask;
>> +
>> +static int __init cf_check probe_cmos_alias(void)
>> +{
>> +    unsigned int offs;
>> +
>> +    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
>> +        return 0;
>> +
>> +    for ( offs = 2; offs < 8; offs <<= 1 )
>> +    {
>> +        unsigned int i;
>> +        bool read = true;
>> +
>> +        for ( i = RTC_REG_D + 1; i < 0x80; ++i )
>> +        {
>> +            uint8_t normal, alt;
>> +            unsigned long flags;
>> +
>> +            if ( i == acpi_gbl_FADT.century )
>> +                continue;
>> +
>> +            spin_lock_irqsave(&rtc_lock, flags);
>> +
>> +            normal = CMOS_READ(i);
>> +            if ( inb(RTC_PORT(offs)) != i )
>> +                read = false;
>> +
>> +            alt = inb(RTC_PORT(offs + 1));
>> +
>> +            spin_unlock_irqrestore(&rtc_lock, flags);
>> +
>> +            if ( normal != alt )
>> +                break;
>> +
>> +            process_pending_softirqs();
>> +        }
>> +        if ( i == 0x80 )
>> +        {
>> +            cmos_alias_mask |= offs;
>> +            printk(XENLOG_INFO "CMOS aliased at %02x, index %s\n",
>> +                   RTC_PORT(offs), read ? "r/w" : "w/o");
> 
> I would consider making this a DEBUG message, not sure it's that
> useful for a normal end user, and printing to the console can be slow.

Can do, sure.

>> +bool is_cmos_port(unsigned int port, unsigned int bytes, const struct domain *d)
>> +{
>> +    unsigned int offs;
>> +
>> +    if ( !is_hardware_domain(d) ||
>> +         !(acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) )
>> +        return port <= RTC_PORT(1) && port + bytes > RTC_PORT(0);
>> +
>> +    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
>> +        return false;
>> +
>> +    for ( offs = 2; offs <= cmos_alias_mask; offs <<= 1 )
>> +    {
>> +        if ( !(offs & cmos_alias_mask) )
>> +            continue;
>> +        if ( port <= RTC_PORT(offs | 1) && port + bytes > RTC_PORT(offs) )
>> +            return true;
>> +    }
> 
> Maybe I'm confused, but doesn't this loop start at RTC_PORT(2), and
> hence you need to check for the RTC_PORT(0,1) pair outside of the
> loop?

The loop starts at offset 2, yes, but see the initial if() in the
function. Or at least I thought I got that right, but it looks like
I didn't (a failed attempt to address a specific request of
yours, iirc).

>> @@ -1256,23 +1330,25 @@ unsigned int rtc_guest_read(unsigned int
>>      unsigned long flags;
>>      unsigned int data = ~0;
>>  
>> -    switch ( port )
>> +    switch ( port & ~cmos_alias_mask )
> 
> Given that the call is gated with is_cmos_port() it would be clearer
> to just use RTC_PORT(1) as the mask here IMO.

Hmm, personally I wouldn't consider RTC_PORT(1) to be reasonable to
use as a mask (even if technically it would be okay).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 11:44:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 11:44:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517396.802643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjIbv-0003ZO-6g; Mon, 03 Apr 2023 11:44:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517396.802643; Mon, 03 Apr 2023 11:44:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjIbv-0003ZH-3i; Mon, 03 Apr 2023 11:44:31 +0000
Received: by outflank-mailman (input) for mailman id 517396;
 Mon, 03 Apr 2023 11:44:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W/KQ=72=citrix.com=prvs=450b71a79=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pjIbt-0003ZB-IY
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 11:44:29 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e2224a1c-d214-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 13:44:27 +0200 (CEST)
Received: from mail-sn1nam02lp2047.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.47])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Apr 2023 07:44:16 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by SJ0PR03MB5853.namprd03.prod.outlook.com (2603:10b6:a03:2d0::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.34; Mon, 3 Apr
 2023 11:44:10 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Mon, 3 Apr 2023
 11:44:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Mon, 3 Apr 2023 13:44:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Message-ID: <ZCq8BQU3lqgxSp6Q@Air-de-Roger>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <8f2fa47d-89b7-b39c-e60f-edee1de5ca82@suse.com>
 <ZCqz0YCFUifIlthC@Air-de-Roger>
 <be49b5d2-f4a0-44f7-0f6e-56c0e63e9da0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <be49b5d2-f4a0-44f7-0f6e-56c0e63e9da0@suse.com>
MIME-Version: 1.0

On Mon, Apr 03, 2023 at 01:26:41PM +0200, Jan Beulich wrote:
> On 03.04.2023 13:09, Roger Pau Monné wrote:
> > On Thu, Mar 30, 2023 at 12:40:38PM +0200, Jan Beulich wrote:
> >> ... in order to also intercept Dom0 accesses through the alias ports.
> >>
> >> Also stop intercepting accesses to the CMOS ports if we won't ourselves
> >> use the CMOS RTC, because of there being none.
> > 
> > So it's fine for dom0 to switch off NMIs if Xen isn't using the RTC?
> > Seems like a weird side-effect of Xen not using the RTC (seeing as we
> > would otherwise mask bit 8 from dom0 RTC accesses).
> 
> I haven't been able to find documentation on this single bit in the
> absence of RTC / CMOS.
> 
> > Also I'm worried that when Xen doesn't intercept the RTC ports,
> > accesses from dom0 could be interrupted, for example by the vCPU
> > being scheduled out: a vCPU might perform a write to the index port
> > and then be scheduled out, leaving the RTC in an undefined state.
> 
> I did specifically add "because of there being none" to the sentence
> to clarify in which case we avoid intercepting.

Oh, right, sorry for the noise, I didn't parse that last bit of the
sentence.  I'm fine with the current wording then.

> >> +bool is_cmos_port(unsigned int port, unsigned int bytes, const struct domain *d)
> >> +{
> >> +    unsigned int offs;
> >> +
> >> +    if ( !is_hardware_domain(d) ||
> >> +         !(acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) )
> >> +        return port <= RTC_PORT(1) && port + bytes > RTC_PORT(0);
> >> +
> >> +    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
> >> +        return false;
> >> +
> >> +    for ( offs = 2; offs <= cmos_alias_mask; offs <<= 1 )
> >> +    {
> >> +        if ( !(offs & cmos_alias_mask) )
> >> +            continue;
> >> +        if ( port <= RTC_PORT(offs | 1) && port + bytes > RTC_PORT(offs) )
> >> +            return true;
> >> +    }
> > 
> > Maybe I'm confused, but doesn't this loop start at RTC_PORT(2), and
> > hence you need to check for the RTC_PORT(0,1) pair outside of the
> > loop?
> 
> The loop starts at offset 2, yes, but see the initial if() in the
> function. Or at least I thought I got that right, but it looks like
> I didn't (a failed attempt to address a specific request of yours,
> iirc).

Hm, doesn't that first if() mean that on all systems with an RTC only
the RTC_PORT(0,1) pair is allowed?

> >> @@ -1256,23 +1330,25 @@ unsigned int rtc_guest_read(unsigned int
> >>      unsigned long flags;
> >>      unsigned int data = ~0;
> >>  
> >> -    switch ( port )
> >> +    switch ( port & ~cmos_alias_mask )
> > 
> > Given that the call is gated with is_cmos_port() it would be clearer
> > to just use RTC_PORT(1) as the mask here IMO.
> 
> Hmm, personally I wouldn't consider RTC_PORT(1) to be reasonable to
> use as a mask (even if technically it would be okay).

OK, never mind then.
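
The `port & ~cmos_alias_mask` folding discussed above can be illustrated
standalone.  A hedged sketch (the mask value 0x06 and the helper name are
made up for illustration; the real mask is computed by the patch's alias
detection):

```c
#include <assert.h>

/* Fold an aliased CMOS port back onto the canonical 0x70/0x71 pair by
 * clearing the address bits the chipset ignores when decoding.  E.g.
 * with cmos_alias_mask == 0x06, ports 0x72-0x77 alias 0x70/0x71. */
static unsigned int fold_cmos_alias(unsigned int port,
                                    unsigned int cmos_alias_mask)
{
    return port & ~cmos_alias_mask;
}
```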

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 11:47:43 2023
Date: Mon, 3 Apr 2023 13:47:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH RFC 0/9] x86: Merge cpuid and msr policy
Message-ID: <ZCq8yDv/FaOvtTPo@Air-de-Roger>
References: <20230329205137.323253-1-andrew.cooper3@citrix.com>
 <ZCVtcR5u/14/WmCU@Air-de-Roger>
 <9108a58f-da8f-14e4-de88-a7c8c8abb0f7@citrix.com>
 <ZCWgHxCL4yXD6CxC@Air-de-Roger>
 <14d8128d-6f50-99e5-ec98-366318e7be1b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <14d8128d-6f50-99e5-ec98-366318e7be1b@citrix.com>
MIME-Version: 1.0

On Mon, Apr 03, 2023 at 11:55:57AM +0100, Andrew Cooper wrote:
> On 30/03/2023 3:43 pm, Roger Pau Monné wrote:
> > On Thu, Mar 30, 2023 at 01:59:37PM +0100, Andrew Cooper wrote:
> >> On 30/03/2023 12:07 pm, Roger Pau Monné wrote:
> >>> On Wed, Mar 29, 2023 at 09:51:28PM +0100, Andrew Cooper wrote:
> >>>> But this does mean that we now have
> >>>>
> >>>>   cpu_policy->basic.$X
> >>>>   cpu_policy->feat.$Y
> >>>>   cpu_policy->arch_caps.$Z
> >>> I'm not sure I like the fact that we now can't differentiate between
> >>> policy fields related to MSRs and CPUID leaves.
> >>>
> >>> Isn't there a chance we might in the future get a namespace
> >>> collision as a result of having unified both?
> >> The names are chosen by me so far, and the compiler will tell us if
> >> things actually collide.
> >>
> >> And renaming the existing field is a perfectly acceptable way of
> >> resolving a conflict which arises in the future.
> >>
> >> But yes - this was the whole point of asking the question.
> > I think I would prefer to keep the cpu_policy->{cpuid,msr}.
> > distinction if it doesn't interfere with further work.
> 
> Unfortunately that's the opposite of what Jan asked for.  What I have
> done, based on the prior conversation, is:
> 
> struct arch_domain {
>     ...
> 
>     /*
>      * The domain's CPU Policy.  "cpu_policy" is considered the canonical
>      * pointer, but the "cpuid" and "msr" aliases exist so the most
>      * appropriate one can be used for local code clarity.
>      */
>     union {
>         struct cpu_policy *cpu_policy;
>         struct cpu_policy *cpuid;
>         struct cpu_policy *msr;
>     };
> 
> So all the cases where you have d->arch.cpuid.feat.$X continue to
> work.
> 
> In the cases where you pull a cpu_policy out into a local variable,
> there will be no cpuid or msr infix, but those cases already have no
> cpuid/msr part in their naming.

I see.  I'm fine with this.  There's still the remote possibility of a
field-name clash between cpuid and msr names, but we can likely sort
that out if we ever end up in that position.
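
As an illustration of the union trick quoted above (a sketch with
hypothetical names, not the actual Xen declarations): because all three
members have the same pointer type and share one storage slot, assigning any
one of them makes the same pointer visible through the other two names.

```c
#include <assert.h>

struct cpu_policy { int leaf_count; };

/* Anonymous-union aliasing (C11): one storage slot, three names, so
 * local code can use whichever name reads most naturally. */
struct arch_domain_sketch {
    union {
        struct cpu_policy *cpu_policy;
        struct cpu_policy *cpuid;
        struct cpu_policy *msr;
    };
};
```

Since every member is the same type, reading a member other than the one
last written is still well-defined; a collision could only arise if the
member types ever diverged.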

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 11:49:08 2023
Message-ID: <6788d260-fe5c-f3e8-d479-329a2149fba4@citrix.com>
Date: Mon, 3 Apr 2023 12:48:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH RFC 0/9] x86: Merge cpuid and msr policy
Content-Language: en-GB
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20230329205137.323253-1-andrew.cooper3@citrix.com>
 <ZCVtcR5u/14/WmCU@Air-de-Roger>
 <9108a58f-da8f-14e4-de88-a7c8c8abb0f7@citrix.com>
 <ZCWgHxCL4yXD6CxC@Air-de-Roger>
 <14d8128d-6f50-99e5-ec98-366318e7be1b@citrix.com>
 <ZCq8yDv/FaOvtTPo@Air-de-Roger>
In-Reply-To: <ZCq8yDv/FaOvtTPo@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 03/04/2023 12:47 pm, Roger Pau Monné wrote:
> On Mon, Apr 03, 2023 at 11:55:57AM +0100, Andrew Cooper wrote:
>> On 30/03/2023 3:43 pm, Roger Pau Monné wrote:
>>> On Thu, Mar 30, 2023 at 01:59:37PM +0100, Andrew Cooper wrote:
>>>> On 30/03/2023 12:07 pm, Roger Pau Monné wrote:
>>>>> On Wed, Mar 29, 2023 at 09:51:28PM +0100, Andrew Cooper wrote:
>>>>>> But this does mean that we now have
>>>>>>
>>>>>>   cpu_policy->basic.$X
>>>>>>   cpu_policy->feat.$Y
>>>>>>   cpu_policy->arch_caps.$Z
>>>>> I'm not sure I like the fact that we now can't differentiate between
>>>>> policy fields related to MSRs and CPUID leaves.
>>>>>
>>>>> Isn't there a chance we might get a namespace collision in the
>>>>> future, having decided to unify both?
>>>> The names are chosen by me so far, and the compiler will tell us if
>>>> things actually collide.
>>>>
>>>> And renaming the existing field is a perfectly acceptable way of
>>>> resolving a conflict which arises in the future.
>>>>
>>>> But yes - this was the whole point of asking the question.
>>> I think I would prefer to keep the cpu_policy->{cpuid,msr}.
>>> distinction if it doesn't interfere with further work.
>> Unfortunately that's the opposite of what Jan asked for.  What I have
>> done, based on the prior conversation, is:
>>
>> struct arch_domain {
>>     ...
>>
>>     /*
>>      * The domain's CPU Policy.  "cpu_policy" is considered the canonical
>>      * pointer, but the "cpuid" and "msr" aliases exist so the most
>>      * appropriate one can be used for local code clarity.
>>      */
>>     union {
>>         struct cpu_policy *cpu_policy;
>>         struct cpu_policy *cpuid;
>>         struct cpu_policy *msr;
>>     };
>>
>> So in all the cases where you do have d->arch.cpuid.feat.$X, this
>> continues to work.
>>
>> In the cases where you pull a cpu_policy out into a local variable, there
>> will be no cpuid or msr infix, but those cases already have no cpuid/msr
>> part to their naming.
> I see.  I'm fine with this.  There's still the remote possibility of
> field name clash between cpuid and msr names, but we can likely sort
> this out if we ever get into this position.

Thanks.  Yeah, we can rename if things become problematic.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 12:24:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 12:24:51 +0000
Message-ID: <50420132-70e7-0cb5-6929-fda11b2cf05a@suse.com>
Date: Mon, 3 Apr 2023 14:24:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v5] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <8f2fa47d-89b7-b39c-e60f-edee1de5ca82@suse.com>
 <ZCqz0YCFUifIlthC@Air-de-Roger>
 <be49b5d2-f4a0-44f7-0f6e-56c0e63e9da0@suse.com>
 <ZCq8BQU3lqgxSp6Q@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZCq8BQU3lqgxSp6Q@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 03.04.2023 13:44, Roger Pau Monné wrote:
> On Mon, Apr 03, 2023 at 01:26:41PM +0200, Jan Beulich wrote:
>> On 03.04.2023 13:09, Roger Pau Monné wrote:
>>> On Thu, Mar 30, 2023 at 12:40:38PM +0200, Jan Beulich wrote:
>>>> +bool is_cmos_port(unsigned int port, unsigned int bytes, const struct domain *d)
>>>> +{
>>>> +    unsigned int offs;
>>>> +
>>>> +    if ( !is_hardware_domain(d) ||
>>>> +         !(acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) )
>>>> +        return port <= RTC_PORT(1) && port + bytes > RTC_PORT(0);
>>>> +
>>>> +    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
>>>> +        return false;
>>>> +
>>>> +    for ( offs = 2; offs <= cmos_alias_mask; offs <<= 1 )
>>>> +    {
>>>> +        if ( !(offs & cmos_alias_mask) )
>>>> +            continue;
>>>> +        if ( port <= RTC_PORT(offs | 1) && port + bytes > RTC_PORT(offs) )
>>>> +            return true;
>>>> +    }
>>>
>>> Maybe I'm confused, but doesn't this loop start at RTC_PORT(2), and
>>> hence you need to check for the RTC_PORT(0,1) pair outside of the
>>> loop?
>>
>> The loop starts at offset 2, yes, but see the initial if() in the
>> function. Or at least I thought I got that right, but it looks like
>> I didn't (failed attempt to try to address a specific request of
>> yours, iirc).
> 
> Hm, doesn't that first if() mean that on all systems with an RTC only
> the RTC_PORT(0,1) pair is allowed?

Indeed, which is why I said "failed attempt". Looking at it now, I really
don't know what I was thinking when I wrote it that way.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 12:25:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 12:25:44 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/emul: Fix test harness build with blk.c moved out of x86_emulate.c
Date: Mon, 3 Apr 2023 13:25:35 +0100
Message-ID: <20230403122535.724250-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Trying to build the test harness fails with:

  x86_emulate/blk.c: In function 'x86_emul_blk':
  x86_emulate/blk.c:74:15: error: expected ':' or ')' before 'ASM_FLAG_OUT'
     74 |               ASM_FLAG_OUT(, "; setz %[zf]")
        |               ^~~~~~~~~~~~

This is because ASM_FLAG_OUT() is still local to x86_emulate.c.  Move it into
x86-emulate.h instead so it is visible to all files which include private.h.
The main Xen build gets this macro from compiler.h.

Fixes: c80243f94386 ("x86emul: move x86_emul_blk() to separate source file")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 tools/tests/x86_emulator/x86-emulate.h | 6 ++++++
 xen/arch/x86/x86_emulate/x86_emulate.c | 6 ------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/tests/x86_emulator/x86-emulate.h b/tools/tests/x86_emulator/x86-emulate.h
index 0ae528a741ed..942b4cdd47d1 100644
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -65,6 +65,12 @@
 #define AC_(n,t) (n##t)
 #define _AC(n,t) AC_(n,t)
 
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+# define ASM_FLAG_OUT(yes, no) yes
+#else
+# define ASM_FLAG_OUT(yes, no) no
+#endif
+
 #define hweight32 __builtin_popcount
 #define hweight64 __builtin_popcountll
 
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index b84e9ee54dae..5a0ec5900a93 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -132,12 +132,6 @@ static const uint8_t sse_prefix[] = { 0x66, 0xf3, 0xf2 };
         (void *)(((long)__##var + __alignof(type) - __alignof(__##var))   \
                  & -__alignof(type))
 
-#ifdef __GCC_ASM_FLAG_OUTPUTS__
-# define ASM_FLAG_OUT(yes, no) yes
-#else
-# define ASM_FLAG_OUT(yes, no) no
-#endif
-
 /* MXCSR bit definitions. */
 #define MXCSR_MM  (1U << 17)
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 12:27:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 12:27:38 +0000
X-Inumbo-ID: e56cc482-d21a-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680524847;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tiE7fi8L9R9ZN6JJC1ebw86MnpKbIthjLM1TAQ6wdds=;
        b=SLWKoRIjJSc15Z3iJSvUSE+xWKgnWLllIi48RMO+zJuuH7Zh+wfu6mQWj4g/CMbfvm
         LWfIFDCmD3now078RwoRVxGcd8QMPVa+yEXuGB84+ZsddkMy2ELo100vpXSifKl7azhg
         G71yRxDmgpp1CcWAbCQh56I92k7s7x2wSPsH3y0Dt5YrQvWc+cFhTfnwthmTY/O6uIAv
         ISyhl5lRlzuTNHuNdBi+JjIHUTb8vHrXTyHW4xoJCTvMXX1xftq912OTVPjEAFu5wsXj
         l7W7wkKhz8g9ZI8NY7YozGaSM3disnulAoQVpVCB88s5dMMeJb6JWKiQ5BugpfwBvtvR
         jGZw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680524847;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=tiE7fi8L9R9ZN6JJC1ebw86MnpKbIthjLM1TAQ6wdds=;
        b=aKMCnrmbzJqodlJB1K5kGbrvjSDS2c7Xblt37z5/bitJp6AoVKfPYR5gIKaVQ1B//S
         RjxNtkej100WOGd2zNNukHnD2yoYr53PCq0woROJD6u0WuF6oGqscIJLdKCzfh9huFb1
         x/oLWz7IuLnIMA7nPIjA4czrKp92yV/obhE9WZl399ias3S/sl9OmfiF/CPwPkrS3eEd
         2AlZ2LMc+0VTA/zUVUhaauZCSksKPbfkngAMo5KfrcHxWwbR64HtvnKZKVjSqkQ3VZnB
         Dps9w8GRlVFQVhUYrD5Pjdw0hXeB4mtdtpThRm4erbFqv47trBiob3iQptb04tz3Jw+K
         Xk2g==
X-Gm-Message-State: AAQBX9cVmqtMNPlQtAX/KvD7cSzSkNtduqoOV2GJtsM0sCKk0GLt2Wao
	HjZcfhwWdPgVcVsuGIhGJH+crohUSG4KQnHjPN0=
X-Google-Smtp-Source: AKy350bvj5HIZOXxODdl9HmyUeSupI0oslUbBLSEUl1HB1gkoU4xmW0A4xcnaidpOajdABK2WAs4EAaIQ4xHo6NgWLI=
X-Received: by 2002:a17:907:20bc:b0:92a:581:ac49 with SMTP id
 pw28-20020a17090720bc00b0092a0581ac49mr15671469ejb.3.1680524847399; Mon, 03
 Apr 2023 05:27:27 -0700 (PDT)
MIME-Version: 1.0
References: <20230312120221.99183-1-shentey@gmail.com> <20230312120221.99183-3-shentey@gmail.com>
 <f52c41f7-e662-4afd-8ac9-ce2c0da2b1be@perard> <7F45B51F-F1E3-4F04-A46F-4C80509C7195@gmail.com>
 <622b9674-fffd-4634-ac30-d0db3230478e@perard>
In-Reply-To: <622b9674-fffd-4634-ac30-d0db3230478e@perard>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 3 Apr 2023 08:27:14 -0400
Message-ID: <CAKf6xpvxf=F52etJ8o3eLQV4JVD5WM57znGoP3ctONRf7uPisA@mail.gmail.com>
Subject: Re: [PATCH v3 2/6] hw/isa/piix3: Reuse piix3_realize() in piix3_xen_realize()
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Bernhard Beschow <shentey@gmail.com>, qemu-devel@nongnu.org, 
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	David Woodhouse <dwmw@amazon.co.uk>, =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>, 
	Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost <eduardo@habkost.net>, Paul Durrant <paul@xen.org>, 
	xen-devel@lists.xenproject.org, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Richard Henderson <richard.henderson@linaro.org>, 
	=?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@linaro.org>, 
	Chuck Zmudzinski <brchuckz@aol.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Apr 3, 2023 at 5:33 AM Anthony PERARD <anthony.perard@citrix.com> wrote:
>
> On Sat, Apr 01, 2023 at 10:36:45PM +0000, Bernhard Beschow wrote:
> >
> >
> > On 30 March 2023 13:00:25 UTC, Anthony PERARD <anthony.perard@citrix.com> wrote:
> > >On Sun, Mar 12, 2023 at 01:02:17PM +0100, Bernhard Beschow wrote:
> > >> This is a preparatory patch for the next one to make the following
> > >> more obvious:
> > >>
> > >> First, pci_bus_irqs() is now called twice in case of Xen where the
> > >> second call overrides the pci_set_irq_fn with the Xen variant.
> > >
> > >pci_bus_irqs() does allocate pci_bus->irq_count, so the second call in
> > >piix3_xen_realize() will leak `pci_bus->irq_count`. Could you look into whether
> > >pci_bus_irqs_cleanup() can be called before the second pci_bus_irqs()
> > >call, or find some other way to avoid the leak?
> >
> > Thanks for catching this! I'll post a v4.
> >
> > I think the most fool-proof way to fix this is to free irq_count just
> > before the assignment. pci_bus_irqs_cleanup() would then have to NULL the
> > attribute such that pci_bus_irqs() can be called afterwards.
> >
> > BTW: I tried running qemu-system-x86_64 with PIIX4 rather than PIIX3 as a
> > Xen guest with my pc-piix4 branch, without success. This branch essentially
> > just provides slightly different PCI IDs for PIIX. Does xl or something else
> > in Xen check these? If not, then I'm still missing something. Under KVM this
> > branch works just fine. Any idea?
>
> Maybe the ACPI tables provided by libxl need to be updated.
> Or maybe something in the firmware (SeaBIOS or OVMF/OvmfXen) checks the
> id (I know that the PCI id of the root bus is checked, but I don't know
> if that's the one that's been changed).

Xen also has hvmloader, which runs before SeaBIOS/OVMF.  Looking at
tools/firmware/hvmloader/pci.c, it has
        ASSERT((devfn != PCI_ISA_DEVFN) ||
               ((vendor_id == 0x8086) && (device_id == 0x7000)));

From QEMU, it looks like 0x7000 is PCI_DEVICE_ID_INTEL_82371SB_0, but
PIIX4 uses 0x7110 (PCI_DEVICE_ID_INTEL_82371AB_0).  Maybe try removing
that check?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 12:32:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 12:32:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517422.802703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjJMB-0003Fx-9Y; Mon, 03 Apr 2023 12:32:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517422.802703; Mon, 03 Apr 2023 12:32:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjJMB-0003Fq-5R; Mon, 03 Apr 2023 12:32:19 +0000
Received: by outflank-mailman (input) for mailman id 517422;
 Mon, 03 Apr 2023 12:32:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dypz=72=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjJM9-0003Fk-IY
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 12:32:17 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2062f.outbound.protection.outlook.com
 [2a01:111:f400:fe12::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 90bfa3f6-d21b-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 14:32:16 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8241.eurprd04.prod.outlook.com (2603:10a6:20b:3e4::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.23; Mon, 3 Apr
 2023 12:32:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.033; Mon, 3 Apr 2023
 12:32:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90bfa3f6-d21b-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OhK0du4NTETNhaiN/TwaYdavWbFHxsw9tKVF8toKz/I8u9V/YiUhhvjmm3oQHlj+Sz3XsLhQ1ZCZ7MhiftzotZ3ueRwHkakoUpwjL+aNTtekj0p9MtLjNFrLLHqV4XQuMDiZ1Ee3NGU9SqhlL4r9AgOIeMUtiwqG13kTEiYcBQVwbJjT3/r1MqPLB9DYpHNl4fL9ipOVXkdWo+SCKIXaNT0DZAnpsUtVZ2SiUdxEJ335oiYjjKnCGL1onXiQjij2xjec0uVaT6HJn24S8FYa3DeiIeQqbSqZaUMY7PO0z7kyzXClJNYDKfcmJc3R+47Ka1y8qnftSfOgL8u8fxAi9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=T85vDg5TUrhPHf2JrkJQCK2w9XzTQ/j04O5QemxmJV8=;
 b=C7xnCkHfZJJ96+a26QG3zWITD/wL8KM9sGutj7YrjnGDNmj3L2RhI0gwr/j9vwZFZxQ2ybTR18wyexJusYMDsjbYdO6v41Z621bnxAjn93R3ubnPtxCJPtoxIl7NlHfdmF+feXnl3oQCLz42dzBJgA7u1EX+oKHMOVnAZ4vKktOkX2WAZGsa1yJxuE6w8MmUdU5fi12v2+h3ryF7J0oSVmP5EY2wGo7hQz1TSHgpToEFPcUdZmx0vzvRgA+j2Z/46E3kCeem2vHg8qf9fs/3oZ81k8dELu/rW18mUaU1ufgD/y3uFQjrvOg5FwjnbX8t3ICK3n6wgMnZiJzNB5QzPA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=T85vDg5TUrhPHf2JrkJQCK2w9XzTQ/j04O5QemxmJV8=;
 b=zWPNfIBFs5t1L8n19gz5XapdI1mIOaVCGyuBqh1fVofD+8k8pogW1O6SSWobBjBRpst4xClYOfmo5pcl6ThbmH4ng0b0CBraONZP4ssbbsmlscxJTSfwhLLekosGcQ+OmTux2qpTGIfdgJnzdde8hw9VJ0tHjVwRKjRpxSaMLHXUrJruVAWhkPK+x62H/i+PmAFf4uQGTOX8Q0A/DpKe61pR/drwwTdyIRTFdKhSrU9KaI8Cm4eeqKdtC5In49jWfAHAt9e1jq5F1xP7KAQxq2gzDrnOSVlpsPBkFxAKEvrS3xT6AzF7kE08vsWcQkDzlXk9uuAtSFTCXqhf/cyTQg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <efe42c36-8f5a-8600-dcc7-fc1dc8980775@suse.com>
Date: Mon, 3 Apr 2023 14:32:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/emul: Fix test harness build with blk.c moved out of
 x86_emulate.c
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230403122535.724250-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230403122535.724250-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0178.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8241:EE_
X-MS-Office365-Filtering-Correlation-Id: 729229be-f0d1-4914-e799-08db343f742c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7dOPxVcOMXHPaO4ZObhVfOLrJT9V5rehAiwRJCyd0y5TbCtnz0MSLGE/rW2ymvl99oAGBESsKxb/k4h3yvpULMIgzzFxJ1jFhZsiHuUYOOdYAc7Se97rzSZyiUJXvPxOzW3CptCwDlAMnPPR3+Fvh2U/laIh5UpmDk6CsoWp2OxXnZzXuAJutw2hm+bW1ndjjLJgDIyv3K97WOuSx3o6fLJV/+mIsb1kK3z+VheKxKBH0Wl0fy9t80i+J4fj0T7MYPYIhkAC+TkHg3/aOMQC+V6Ma2Fibty/3kaFQjiVagyieswq6qmUuoWdqYgFjW3BSbrQsbyqTkxfIyOvpmsKPtmmCp4Hzo5EuCWOy5rF1eWW0v04jRIiz2SDme107TM68zfY8mRhdYL9OjnVz4f5eqKyI8PMh0mKKDS2VrO4sz94e9sSNFDbKqpO2ZJ2BEp725+6qek4m0UhqyTppz2uQ+LEnvRdrEQLhqYExUC2/SmOHyUzvB4osRUHyFgkH5QqUMZqm3cj/mRDHF7x9LAb2/3Tt6bZw3PRK6+UNQvBUk9vBAqY3CbCDE91ESBseTHK6GdaHCKprLTlLRgJcdAn96U1VsNkdV/AEPPZoK+HhFM4rE+i7QlcgvhpPgWMD03Ub/jHlT52wIXz0GGePVV+qA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(396003)(366004)(136003)(346002)(376002)(451199021)(31686004)(6486002)(26005)(8676002)(66476007)(6916009)(66946007)(54906003)(4326008)(66556008)(6512007)(316002)(36756003)(53546011)(6506007)(6666004)(38100700002)(2616005)(186003)(2906002)(8936002)(5660300002)(478600001)(41300700001)(31696002)(86362001)(4744005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 729229be-f0d1-4914-e799-08db343f742c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 12:32:14.0724
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: PILZt4P8qTgiY8fRyso//prdafEPV17MYrqV3dRrBnRGd7uadUsSEFqsjLHWU1iDWieB6IVhfdK0aBMuq07Vzw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8241

On 03.04.2023 14:25, Andrew Cooper wrote:
> Trying to build the test harness fails with:
> 
>   x86_emulate/blk.c: In function 'x86_emul_blk':
>   x86_emulate/blk.c:74:15: error: expected ':' or ')' before 'ASM_FLAG_OUT'
>      74 |               ASM_FLAG_OUT(, "; setz %[zf]")
>         |               ^~~~~~~~~~~~
> 
> This is because ASM_FLAG_OUT() is still local to x86_emulate.c.  Move it into
> x86-emulate.h instead so it ends up in all files including private.h.  The
> main Xen build gets this macro from compiler.h.
> 
> Fixes: c80243f94386 ("x86emul: move x86_emul_blk() to separate source file")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Locally I actually have this exact same change, but in the MCOMMIT patch
(which predates the splitting work by quite a bit).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 12:39:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 12:39:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517426.802713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjJSv-0003w3-44; Mon, 03 Apr 2023 12:39:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517426.802713; Mon, 03 Apr 2023 12:39:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjJSu-0003vw-Vt; Mon, 03 Apr 2023 12:39:16 +0000
Received: by outflank-mailman (input) for mailman id 517426;
 Mon, 03 Apr 2023 12:39:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dypz=72=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjJSt-0003vq-Bt
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 12:39:15 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on0613.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::613])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 89915a07-d21c-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 14:39:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7674.eurprd04.prod.outlook.com (2603:10a6:10:1f5::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 12:39:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.033; Mon, 3 Apr 2023
 12:39:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89915a07-d21c-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XWlLfxr7YGHGmlmKrz95obtczKkY1oeQ5/h0vLS+ODpFmYmzMlIAnadBrCtRoVmcKvChwGngargD02uF7H7234r7fh4D7xa4JZFWB/YWVpXEI/neAADufIsDcCFCLi402NFl/D5S9uAbJmgluSBKnPMcACkhcxkL+0fD5wu2Vo5yRkIMWy0r2pIwYDxukZBEwnF54iPAl8lpgG+hbXZ7LlG6tuqOeypuTPP3WhqR2kpdeM+G/DBgCVY/8jLNlCiVZI74MISO3pTyDJoTDswZqJxh55ioQxZjhKgqwbf/inCYjmGGe569+MU4hyohqenhhDZb1F+70rx+16DuDNNQ9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kkNr5iDw5JV3J8qNnpUmRp1zna4a5Qb3MIEAArRhfSo=;
 b=cth6HxMJ4PI/GbDW7F48qzEVzyzVTE7tT9b4HASmsQyiX33kb0Hj1vQEUD5b7tYGOTz52BW6J1KKBPkANUKMfdxVu022QQ0Pma0QpqUfFZHxmtwAZFrEK9Zvja80Uwla/2Gh1E8YFpbPhdetVYnqRelrz+x2YCWgTdD+3I7yNsyiVZW0jeYn64px1N5y+yW9Y5LzpGihVLpX0l0StoiJ9U1wp06HCphwzvxvw/3Teirc5kUZcf675RRkmQXgHRkZk7dpflvIoHf+NBfaRqUC4LqLMgnkXIkaDmlpn1jQgX6stzwEW416CZbjfWLg6HlJIDoKxxejsR4usdr1dDPUiQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kkNr5iDw5JV3J8qNnpUmRp1zna4a5Qb3MIEAArRhfSo=;
 b=ehROP3MTKk07w6eSJVVPBduwfGEuNn67O8JK2VxS2BSTIxfBgu1uKoJpIx+hzU7JUshvWxp9pu2hdmmrI5K/aC4gO6zOCxro1iyfsyf5K8wlF7q1cNDMy7KUMckYHy5dS/4ps1DjLbrihR6rFk9uIDyZ1lhXUITRiDpgcN4uaUMgd+EPc4Cxo2msgI3LKRIkT0cQ/9R6ljWuUScssLI7Mu6n5a/eA4gHtADzcY7nT5x7LwL04qnExtYmAg0Pd5AEZUpRC3KSerjjw6QHQgD7TiKH/bIxdWl8XzA5SadlzCazcOJ5rufIZkNv+9gd4kVMlGlHiAoihw0fOJ10k5qLhg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8d976d34-8a1e-95ad-3bc9-3cb704c1fae7@suse.com>
Date: Mon, 3 Apr 2023 14:39:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [RFC PATCH] x86/p2m-pt: do type recalculations with p2m read lock
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230403101449.93323-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230403101449.93323-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0136.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7674:EE_
X-MS-Office365-Filtering-Correlation-Id: cba0af35-addf-4185-71f7-08db34406c43
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	w2SsTqvPKN2/J2avpwsiVBPF0e9qWqIqa7RcFEAAY6LbqU5D/qfzyQeH4qWfuQUkOmHWYtI1NLq2G++hklHqoJyyeqRByeTz2MYBpM7HL43O82nAhIYYIvd75PoIoUV3h29ISRVYjsAGgxj76GzW6AhhHRz4Z1WNeLIxiE2Re/dcZTFFBhWs40rL1vJdtDXj3M8tilhnWMj6a4FrW+/OcH7D+xP4pz2P0jVTwU84J/pVASnSY1ts6iKIxJv32B49zn40fueR+85DxJ3ZXBYt4XZ8qub2p530lfSoSxvpBtQrMbekshyiq4GNkMmObpzfBso1D3eBeoysPJRXl/+4/RbqOsPzNus/s12tyYSuemFFUpWX9U+uw6p9c/j31adnqhMlfOTra5jt8hpIntkhDhHnR6k0GLtF3BmasOrGWwthbZSPvG8ciJy3ZuUylK7wuuRW1ch75D2Pj85cYWUTfiUM37fy3bm67vU+38wGsbXfLU9DFgZ1lf+Zyb4G/uzF5baRFeKOhqtt23TbFpFERDDAhmrqya0rS2NXKkIzhKWTIwtwqqHV5ZLZekuH/Grat27IpMN0l1cNkZbXplsoQNxt1EczarqcM/3w2p7eEjnol8m6uOh9qAvsey2wj30kF70NdkAbena5mAPjocdA7A==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(396003)(366004)(39860400002)(376002)(136003)(451199021)(38100700002)(5660300002)(31696002)(2616005)(54906003)(316002)(83380400001)(6486002)(186003)(478600001)(53546011)(86362001)(6512007)(26005)(6506007)(41300700001)(36756003)(4326008)(8676002)(6916009)(66476007)(8936002)(66556008)(66946007)(31686004)(2906002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cba0af35-addf-4185-71f7-08db34406c43
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 12:39:10.6983
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DplGsPZUMXXL/hlB9n1zPu0fTyYp7Nf5kJM20tTTfxjxx7HlFMw3YbXbWXQU05HQxvfvSaOFbIOMHpyc8XSrMQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7674

On 03.04.2023 12:14, Roger Pau Monne wrote:
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -486,9 +486,6 @@ static int cf_check do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>          p2m_type_t ot, nt;
>          unsigned long mask = ~0UL << (level * PAGETABLE_ORDER);
>  
> -        if ( !valid_recalc(l1, e) )
> -            P2M_DEBUG("bogus recalc leaf at d%d:%lx:%u\n",
> -                      p2m->domain->domain_id, gfn, level);
>          ot = p2m_flags_to_type(l1e_get_flags(e));
>          nt = p2m_recalc_type_range(true, ot, p2m, gfn & mask, gfn | ~mask);
>          if ( nt != ot )

I'm afraid I neither understand why you make this change, nor why you
then leave the other use of valid_recalc() in place.

> @@ -538,9 +535,9 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa)
>       */
>      ASSERT(!altp2m_active(current->domain));
>  
> -    p2m_lock(p2m);
> +    p2m_read_lock(p2m);
>      rc = do_recalc(p2m, PFN_DOWN(gpa));
> -    p2m_unlock(p2m);
> +    p2m_read_unlock(p2m);
>  
>      return rc;
>  }

How can this be safe, when do_recalc() involves p2m_next_level(), which
may install new (intermediate) page tables?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 12:50:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 12:50:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517429.802723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjJdF-0005RY-14; Mon, 03 Apr 2023 12:49:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517429.802723; Mon, 03 Apr 2023 12:49:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjJdE-0005RR-UU; Mon, 03 Apr 2023 12:49:56 +0000
Received: by outflank-mailman (input) for mailman id 517429;
 Mon, 03 Apr 2023 12:49:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UzYg=72=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pjJdE-0005RL-4H
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 12:49:56 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2062d.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 07444ab3-d21e-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 14:49:54 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by DM4PR12MB6063.namprd12.prod.outlook.com (2603:10b6:8:b1::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 12:49:51 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::659f:af8f:6d3e:8242]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::659f:af8f:6d3e:8242%4]) with mapi id 15.20.6254.030; Mon, 3 Apr 2023
 12:49:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07444ab3-d21e-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SoyMUN8lkB4TWJczrrnC+webiCjV0i3IcQSOHW8nq+ZhmaW/OP9RCfYYY+rbE6aJjm+SxcVlnrWZGdu6RuZ2gNC2B7HW8EQvS8ApRu1zTT1D7BzI84HZ6Hxl8Ka2r6yDSh4wIbdULKi2AVfUQbAT7dQi2EVmlfJ+k65XRwz99RFIF6jeK38UzcG0gMB3FuqOjbzDckBHkCEp7H8WQEGVpvTL7d1inTWG9kK5iOdLdz37KPE0BnfqBQP78z9NlQ3vv6PRr9rHJh7/lnGlchfCHfmSP6IWs7vEQ11sGLgzCjHk/9QXz6KbqeuuWnkIK9vX9wRnTFzJ0sD4EWQIhIUiQQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0ulHxYGxKal1Hs+seRR7jT18eEPRUjrcMb+GtA8Atv4=;
 b=YCCk+OSU9li5Wc/nNrAYmffOEHb8tirpsk8gA8UnrCA/O20cRK/EtsuKVxHGXO3wVN/qDXBRxVJmSR9U/JUbnM4p5ckaansoE3Ws5wXdoTRh1+alkUooltZJ55OmKoKQjJ9wxs245H3YyYSLMhEV22MTNqrMA+E8BVLreyMu48xK+iVSCNcxvpyxwkGDiZ2FboPaMRPYd1+FAX7RQAmgPFKtXxG9VUbJNxJK5WeAUAQuZ8nX07GkaeNxCG8YbdrUKzvv18sF6FzWeQ4Dj6tFXazF8WJoel4M7cCKVyHyO21jCLHPVO4iqiXs+vfMCWJKjYSvNqwoe7cn+gN7bAw4/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0ulHxYGxKal1Hs+seRR7jT18eEPRUjrcMb+GtA8Atv4=;
 b=rJVZcICm9E8Y9WhzweWI1oAc1+dk7enAelJgJQW1M9KlSaFmNRYZyNRDkn7Tw2TSMRdnPkpR/qlOcDCMDPpio7FL7IG72a0epePfqDPEnPfoPw0La31GNzI5XT0z9xBdhmb5HaLCtZQJZ1eLPTuNR/NfbW5K7HR6Z5xR+EAOJyA=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <049cc80b-9fe3-1b01-67a6-112bd5e46443@amd.com>
Date: Mon, 3 Apr 2023 13:49:44 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [XEN v4 06/11] xen/arm: smmu: Use writeq_relaxed_non_atomic() for
 writing to SMMU_CBn_TTBR0
To: Julien Grall <julien@xen.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230321140357.24094-1-ayan.kumar.halder@amd.com>
 <20230321140357.24094-7-ayan.kumar.halder@amd.com>
 <f1e638f2-28c8-fcee-bfb4-a7d459281420@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <f1e638f2-28c8-fcee-bfb4-a7d459281420@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0450.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a9::23) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|DM4PR12MB6063:EE_
X-MS-Office365-Filtering-Correlation-Id: c15ac467-100a-41e8-1b0b-08db3441ea04
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	lnkDqbon/RKPst/EN01pi4SOxk2ooXqNrL/jeH1V0X/2bBRJcIiYIGM7Qm7KYQXHIYoLQKkjkmw+bVrXygVUDyOwsx1kAGj9sIl+ZL/nYV+H72p3jMkOIVsJ4nMhvL5s07SfaRb5WjQumGxbUZ12gHyHd2xOBy14Qs0iVKpzsOWhyStvHTfcXAhokVjB5gxUsuORrlDLfrVRxwmfi3vEAwieocbbnvgWX8RS+Vg25oZd0iiqxZtv8+f46EVJPq3t3vkyVvuIM2IMiYo67wS3zqYob6G1auEbyj80Ox9bzjg9lGG5ySCuM52oz6mGXPAqq3nXTk6DeGgsAgC12WRFXFyasNHH5myR+UrZI3LHlA3wdPNUsTkOY9X86EIpcaVU46/vMhAGvYbzqNq45AmsU43EcsqsoevUx1Lxq/6K/Bhi/GpeLMmAs4+n/DqlHroQ7pPQ2GvsL+/3c8Qkeo9kXRIhGIArTP2LPI9oTDsR1EiY6FK4ZPYNRUrEKYgCknYM0Zc4OGSaBqRF/ABMPAB0lEE/Njsl+/q5Kwb8gVWX1BdxmbfKq37r0uRz2vNEq+7GzPqFNljt8vWbIyD1fbnNOXcsCMW/MqvV4XcZ44UGJXTeEPEfzAJjVYHbGewiJfdUMlQNsPWD+b/UOhEYJ6qMZg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN6PR12MB2621.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(136003)(376002)(396003)(39860400002)(346002)(451199021)(8676002)(7416002)(8936002)(66556008)(2616005)(31696002)(186003)(26005)(6506007)(6512007)(38100700002)(83380400001)(53546011)(110136005)(316002)(478600001)(6486002)(6666004)(5660300002)(66476007)(41300700001)(4326008)(36756003)(66946007)(2906002)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WVFUcTJEVVM5MEZiMUo1SWxoWUcycUt1ZUtheUcrcUU4eng4RTVjUDE0ZS8y?=
 =?utf-8?B?TlFyNFBNOFFVM05EdVp2S01MSUVpc3JtVTlST2svT2VFblV1cXd6SWlHMFpm?=
 =?utf-8?B?Smpqc1VIbkJCTUNLL0UxL2xnQkc1UDdIM0o4bTJ4V0hmVjlmaFpNV1NkQTZV?=
 =?utf-8?B?MUs1Y0s3QWUwU1lIMDREWjNVVmJoZ3pmc0FSM29RdEtENi85OTRNVkxEWlRx?=
 =?utf-8?B?L0VNR0tmTW53dTF3ck93MFBudHY0VERSdWd2WjNrRTZhay80QkJFYlQxUG1D?=
 =?utf-8?B?ZGY5YkdIRWhlRFM2NUdqamg0NE0rSTlsa2ZReCszSURvZ2JRSmx0ZGpsWC9E?=
 =?utf-8?B?OFNMaHMyaFY3aHBIVTUvT1B6enJwZkc2Y3dWZUZBUmZGWllUQzVOQXVsS25l?=
 =?utf-8?B?d3pWVXNwc0VKaG1SK24zY2JBRkU1TEJjWE9oWFN4TDV2OWhVVG54NE5McitJ?=
 =?utf-8?B?bnpTM3p0WXdmRjY1MXlmZzUraUcrelp4L3VGaVd3L2U1RFVnS3haRXFBMlha?=
 =?utf-8?B?SFhtdVg4N0dNTE04WXhoSkVuRVV2TWExZUhnNEVQbzZGQ3lJeUdnMDFYN0VT?=
 =?utf-8?B?S1RqMm5JMU5RYzlEQnlnUW9XbG5oUzgvNitNMlozR0VHMVpGWUViSnJEVXJR?=
 =?utf-8?B?blRibWtjeTd1R2xFaXB0d1RERnB2ZCtxc0k0NEo1SXdITWV2eE4xcWV5cTg5?=
 =?utf-8?B?YzNZR3lrNTVoQ2RBUUdxTk1aa0xTUSthbjhYdXBOWFRSVUZhdk5LbnR2Vm5m?=
 =?utf-8?B?ckcyUUpYdmRNQXpxZit4cXhzVEJTRFMwaVRvejhzOUNGNzdLOTAzSmQ4eWpl?=
 =?utf-8?B?NUYrYjlBNHp4NzdjbzVNVUN1SEZNOFdMcmJuaDgwQkRvT1lCSEpKYitDMytD?=
 =?utf-8?B?OSt1aVJ4empTOTlvN3BiYVIrSVU1OU1Lc05RdnJXb0xJOE1ibHZHV3R1djBQ?=
 =?utf-8?B?MitqVXU2a1Q3bE5CeVMwaytIamN1RkZhdlV0R1JNZDc5dWRVNWRUUnh1bEM4?=
 =?utf-8?B?TFpzNWp3eEdpZ3FBZXhsQUovM0djbncvWmV3VFFKMVR4VkVYNnRnaTlzc056?=
 =?utf-8?B?cDhpS0hKQ1hITFBZRGJ1RWdyeXlGTzNDSXJsUlIva2Fvdnh0dzNJemN5Y3dV?=
 =?utf-8?B?L0VPazdtZkZRRk5aV0pvMjNTNEI0K21MN0VSQlJnMXY4WXhMU0xSc2VzbExO?=
 =?utf-8?B?VXBUVzBRcWw5S0MwSEs3MkhRODZLUjRCYmNvTDIwbE9HVHA3aUtrNUxHWDVT?=
 =?utf-8?B?Smw1K0VjSExhdjZTSEdoaXRQZW9rbkovcXBUZlRYUk5MaDc5QkJ6SFlGNUE2?=
 =?utf-8?B?T2UyYlJkK3VUVkZUKzI4Sy9DSHdaVG15TkRDczlvUkV2S2kweUEyRTZlRi84?=
 =?utf-8?B?U3Z0Y1Y4K1o0STgzeXpNdEhRVW5PSXl2ODNqMzBpYTRxSVFKcFppUUlLM3BN?=
 =?utf-8?B?OUlZbHJWdXFQQkVtT0pnWHVYTDZEN05NUlNCaHRDcXhJMDlKWVg4KzdoSnE4?=
 =?utf-8?B?ZGhBd1Y2ZWRiVS8xKzJrTnpHbmJNV3YwckZISGsvOGFKS2lqbjk0N0lIUmQ5?=
 =?utf-8?B?enNhaVdyN1JkMDJ6SEdDQlBKb2dGdUtjWVBUdVhCWDdGZG5WSW8ydGF6clNU?=
 =?utf-8?B?WnMvOG8rc0xybmEvckZ5M1VKY3JUcWM1cnNKbGdkUGRIWlNyQ3FQVzZiNDJm?=
 =?utf-8?B?aWJsdVFZbmtZWjRYenBEM0N0aTZRRXlMQTRVVFl0VXU2SGtZWmRvT1V1Wnhp?=
 =?utf-8?B?ZXVhaE9PaXJyanVHOTBCWlI4L3ZqUC9RVVlUQWs3TXdEWlk2c1dSTEdQWkJU?=
 =?utf-8?B?VDFkMm0yQ25YV2t4OXJKR29MNzVmQlpldjhmci9wMDFNSFFOZmpVVjZPY0Zu?=
 =?utf-8?B?VzlCRm5MWS9iSkZ6MlB2bCs2alhYakN4ell2UEExbWM5S3BuNUhhSC9PTkxV?=
 =?utf-8?B?YXdZOGdZS282ZS81MGhMcG1lRjhNZHdtOUtoQ3FNT1JUUm1yOEV0enA4bUMy?=
 =?utf-8?B?VzhNMnFGL2dMcFJJSHNEdzJHT3g5TmQrRmhrMVYwekhIVVRUUFpYUS9lKzFu?=
 =?utf-8?B?ZGlVOGNxRDJxNFU4WHJkTVkrc3ZrMTRhb0hWWTFNb0g4cVU5dVJTN2hkakh6?=
 =?utf-8?Q?uIUPQ6c9/3zcHS+B90E+Cgh7b?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c15ac467-100a-41e8-1b0b-08db3441ea04
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 12:49:50.9217
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OUEqTcO+joF1U4qTClxAY7vT18IHh1gzwx4Uu0jn7OwC0pnGbCB2jn7JJKzkCG+skkYHQxF97G+fPKewuqcSSQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6063


On 30/03/2023 22:27, Julien Grall wrote:
> Hi Ayan,
Hi Julien,
>
> On 21/03/2023 14:03, Ayan Kumar Halder wrote:
>> Per ARM IHI 0062D.c ID070116 (SMMU 2.0 spec), 17-360, 17.3.9,
>> SMMU_CBn_TTBR0 is a 64-bit register. Thus, one can use
>> writeq_relaxed_non_atomic() to write to it instead of invoking
>> writel_relaxed() twice for the lower and upper halves of the register.
>>
>> This also helps us as p2maddr is 'paddr_t' (which may be u32 in the
>> future). Thus, one can assign p2maddr to a 64-bit register and do the
>> bit manipulations on it to generate the value for SMMU_CBn_TTBR0.
>>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>
> The tags should be ordered in a timeline. So Signed-off-by should be 
> first.
Ack. I will take care of this henceforth.
>
> I am happy to do it on commit if you can confirm that this patch doesn't
> depend on the patches before.

Yes, please commit this patch as it is independent of the patch series.

- Ayan

>
> Cheers,
>
> -- 
> Julien Grall
>
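For readers following the thread, the distinction being discussed (one 64-bit store versus two explicit 32-bit writel_relaxed() calls) can be sketched as below. This is an illustrative model only, assuming a little-endian layout; it is not Xen's actual arch-specific implementation of the helper:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch (not Xen's real implementation): a "non-atomic"
 * 64-bit MMIO write composed of two 32-bit stores, lower half first.
 * Assumes a little-endian layout. On arm32 there is no single 64-bit
 * store available to plain C, so a 64-bit register such as
 * SMMU_CBn_TTBR0 is written as two halves; wrapping that in one helper
 * keeps callers simple, and a 64-bit build can provide the same helper
 * as a single store instead.
 */
static inline void writeq_relaxed_non_atomic(uint64_t val, volatile void *addr)
{
    volatile uint32_t *p = (volatile uint32_t *)addr;

    p[0] = (uint32_t)val;          /* lower 32 bits */
    p[1] = (uint32_t)(val >> 32);  /* upper 32 bits */
}
```

With a helper like this, the caller can build the whole TTBR0 value in a single 64-bit variable (as the commit message describes for p2maddr) and issue one call, rather than open-coding two writel_relaxed() invocations.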


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 12:51:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 12:51:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517432.802733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjJes-0006oG-Bp; Mon, 03 Apr 2023 12:51:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517432.802733; Mon, 03 Apr 2023 12:51:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjJes-0006o9-8s; Mon, 03 Apr 2023 12:51:38 +0000
Received: by outflank-mailman (input) for mailman id 517432;
 Mon, 03 Apr 2023 12:51:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjJeq-0006nz-Uf; Mon, 03 Apr 2023 12:51:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjJeq-0000Kh-RG; Mon, 03 Apr 2023 12:51:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjJeq-0006Wy-Ac; Mon, 03 Apr 2023 12:51:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjJeq-0000zg-AA; Mon, 03 Apr 2023 12:51:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dpMlPzhdn2xiggfFd3IuEvqdtUJGVsKIy+S3+BQTpqs=; b=R3KRY6zRMW+GO/FkAfxsvodHlb
	CQDzmh4/U5tEe7+jZ/WGnsytTcuKs2Pht/CfKOEUfpJwLgWOyOJmWE9unnY1x17ppxJiBBr+viWpj
	2HaXaXpD8UcdpFxHqQQlVK6NIryRfGqyBI7aKYUG88MnpxKCaAQaxEYBB+VIU6BeIaRo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180117-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180117: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=720ebfbad3e3bee8aa18e37e08ef597f493f8bf8
X-Osstest-Versions-That:
    xen=d6e0b4c41a38655ade7ecb566e8b2961282769fb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 Apr 2023 12:51:36 +0000

flight 180117 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180117/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180085

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  720ebfbad3e3bee8aa18e37e08ef597f493f8bf8
baseline version:
 xen                  d6e0b4c41a38655ade7ecb566e8b2961282769fb

Last test of basis   180085  2023-03-31 07:01:54 Z    3 days
Testing same since   180117  2023-04-03 11:03:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 720ebfbad3e3bee8aa18e37e08ef597f493f8bf8
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Mon Apr 3 12:53:29 2023 +0200

    xen/x86: switch to use generic implementation of bug.h
    
    The following changes were made:
    * Make GENERIC_BUG_FRAME mandatory for X86
    * Update asm/bug.h using generic implementation in <xen/bug.h>
    * Update do_invalid_op using generic do_bug_frame()
    * Define BUG_DEBUGGER_TRAP_FATAL to debugger_trap_fatal(X86_EXC_GP,regs)
    * type of eip variable was changed to 'const void *'
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 71efa7b868e64d29b2a0488e015e80798f1fde8a
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Mon Apr 3 12:52:02 2023 +0200

    xen: change <asm/bug.h> to <xen/bug.h>
    
    The idea of the patch is to change all <asm/bug.h> to <xen/bug.h> while
    keeping Xen compilable with only a minimal amount of changes:
    1. "#include <xen/types.h>" was added to ARM's <asm/bug.h>, as it
      uses uint{16,32}_t in 'struct bug_frame'.
    2. '#define BUG_FRAME_STRUCT' was added to ARM's <asm/bug.h>, meaning
      that ARM hasn't been switched to the generic implementation yet.
    3. '#define BUG_FRAME_STRUCT' was added to x86's <asm/bug.h>, meaning
      that x86 hasn't been switched to the generic implementation yet.
    4. BUGFRAME_* and _start_bug_frame[], _stop_bug_frame_*[] were removed
      for ARM & x86 to deal with compilation errors such as:
          redundant redeclaration of ...
    5. BUG_DISP_WIDTH, BUG_LINE_LO_WIDTH and BUG_LINE_HI_WIDTH were removed
      from x86's <asm/bug.h> so as not to produce an #undef for them
      followed by a #define with the same values as in <xen/bug.h>. These
      #undef/#define pairs will be removed anyway in patch [2].
    6. <asm/bug.h> was removed from <x86/acpi/cpufreq/cpufreq.c> and
      <drivers/cpufreq/cpufreq.c>, as nothing from <xen/bug.h> is used in
      <*/cpufreq.c>.
    
    In the following two patches the x86 and ARM architectures will be
    fully switched:
    [1] xen/arm: switch ARM to use generic implementation of bug.h
    [2] xen/x86: switch x86 to use generic implementation of bug.h
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit faafb5cb736db67a5790854c63bf3c76dd4df7e0
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Mon Apr 3 12:50:56 2023 +0200

    xen/arm: remove unused defines in <asm/bug.h>
    
    The defines BUG_DISP_WIDTH, BUG_LINE_LO_WIDTH and
    BUG_LINE_HI_WIDTH aren't used on ARM, so they can be purged
    as unused.
    
    Requested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 60a9b07150558b212918aa8fedd532be246b03d7
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Mon Apr 3 12:50:06 2023 +0200

    xen: introduce CONFIG_GENERIC_BUG_FRAME
    
    A large part of the content of bug.h is repeated among all
    architectures, so it was decided to introduce a new config option,
    CONFIG_GENERIC_BUG_FRAME.
    
    The version of <bug.h> from x86 was taken as the base version.
    
    The patch introduces the following stuff:
      * common bug.h header
      * generic implementation of do_bug_frame
      * new config CONFIG_GENERIC_BUG_FRAME
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
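
The generic bug-frame machinery introduced above centres on per-site descriptors (file, line) that a common do_bug_frame() handler looks up when a BUG()/WARN() trap fires. The sketch below is a deliberately simplified, hypothetical model of that lookup; real Xen emits the descriptors into dedicated linker sections via inline assembly, and every name and address here is illustrative:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical sketch of the generic bug-frame idea: every BUG()/WARN()
 * site contributes a small descriptor recording where it lives, and a
 * single common handler resolves a trapping address back to file/line.
 * Real Xen builds the table in dedicated linker sections via inline
 * assembly; a plain static array stands in for that here.
 */
struct bug_frame {
    const void *addr;    /* address of the trapping instruction */
    const char *file;    /* source file of the BUG()/WARN() site */
    unsigned int line;   /* source line of the site */
};

/* Example descriptors; addresses and locations are made up. */
static const struct bug_frame bug_frames[] = {
    { (const void *)0x1000, "arch/x86/traps.c",    42  },
    { (const void *)0x2000, "common/sched/core.c", 128 },
};

/* The generic-handler part: map a faulting address to its descriptor. */
static const struct bug_frame *find_bug_frame(const void *eip)
{
    for (size_t i = 0; i < sizeof(bug_frames) / sizeof(bug_frames[0]); i++)
        if (bug_frames[i].addr == eip)
            return &bug_frames[i];
    return NULL;  /* not a recognised bug-frame site */
}
```

Because the lookup and the reporting live in common code, each architecture only needs to define how a descriptor is emitted and which trap reaches the handler, which is essentially what CONFIG_GENERIC_BUG_FRAME factors out.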

commit de7d113212b0e28423b6d0e983aa164e76b415b7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 3 12:48:12 2023 +0200

    x86emul: move various utility functions to separate source files
    
    Many are needed by the hypervisor only - have one file for this purpose.
    Some are also needed by the harness (but not the fuzzer) - have another
    file for these.
    
    Code moved gets slightly adjusted in a few places, e.g. replacing
    "state" by "s" (as was done for other code that has been split off).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit c80243f94386f64f85c5d92ef0bb19dc406eefc2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 3 12:47:08 2023 +0200

    x86emul: move x86_emul_blk() to separate source file
    
    The function is already non-trivial and is expected to further grow.
    
    Code moved gets slightly adjusted in a few places, e.g. replacing EXC_*
    by X86_EXC_* (such that EXC_* don't need to move as well; we want these
    to be phased out anyway).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 1939403104965b091feb7712430ec5d7645a8d30
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 3 12:46:08 2023 +0200

    x86emul: split off insn decoding
    
    This is a fair chunk of code and data and can easily live separate from
    the main emulation function.
    
    Code moved gets slightly adjusted in a few places, e.g. replacing EXC_*
    by X86_EXC_* (such that EXC_* don't need to move as well; we want these
    to be phased out anyway).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8f196c12eec7f90bcf31f86312b8fe5ee12b41be
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 3 12:44:59 2023 +0200

    x86emul: split off FPU opcode handling
    
    Some of the helper functions/macros are needed only for this, and the
    code is otherwise relatively independent of other parts of the emulator.
    
    Code moved gets slightly adjusted in a few places, e.g. replacing EXC_*
    by X86_EXC_* (such that EXC_* don't need to move as well; we want these
    to be phased out anyway).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 0bae69c96b32963b535bb569d6b41f96a7d72617
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 3 12:43:51 2023 +0200

    x86emul: split off opcode 0fc7 handling
    
    There's a fair number of sub-cases (with some yet to be implemented), so
    a separate function seems warranted.
    
    Code moved gets slightly adjusted in a few places, e.g. replacing EXC_*
    by X86_EXC_* (such that EXC_* don't need to move as well; we want these
    to be phased out anyway).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 3e957de632532dc287ae4cd356fd8d7882d4f233
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 3 12:42:44 2023 +0200

    x86emul: split off opcode 0fae handling
    
    There's a fair number of sub-cases (with some yet to be implemented), so
    a separate function seems warranted.
    
    Code moved gets slightly adjusted in a few places, e.g. replacing EXC_*
    by X86_EXC_* (such that EXC_* don't need to move as well; we want these
    to be phased out anyway).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 9ace97ab9b87924477bbaea0a5a1378e106951cb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 3 12:41:08 2023 +0200

    x86emul: split off opcode 0f01 handling
    
    There's a fair number of sub-cases (with some yet to be implemented), so
    a separate function seems warranted.
    
    Code moved gets slightly adjusted in a few places, e.g. replacing EXC_*
    by X86_EXC_* (such that EXC_* don't need to move as well; we want these
    to be phased out anyway).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 12:57:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 12:57:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517438.802743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjJkV-0007WG-4j; Mon, 03 Apr 2023 12:57:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517438.802743; Mon, 03 Apr 2023 12:57:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjJkV-0007W9-26; Mon, 03 Apr 2023 12:57:27 +0000
Received: by outflank-mailman (input) for mailman id 517438;
 Mon, 03 Apr 2023 12:57:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjJkU-0007Vz-6B; Mon, 03 Apr 2023 12:57:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjJkU-0000Qv-0g; Mon, 03 Apr 2023 12:57:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjJkT-0006eA-Ee; Mon, 03 Apr 2023 12:57:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjJkT-0004Um-EB; Mon, 03 Apr 2023 12:57:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c4BQUKfw/IhXa4GW4aJy10c/+1PyMTSGePVS06txcuo=; b=QBRInCoJJuJWz/C9Rtf1fkvUT5
	YtUg8FQFfpx21IBFi5+vDdsCxXgE09LabOZ4LqfDLS3f7QeilcuMRYlpAgSLHBZbSZ2SwITAToknw
	RdPL5j+Uh1UrMnopMEdYnQYElzA2qiesWBpD9TIplLqC+wXYoE0XR7Ye2J1IA9GtKBiM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180116-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180116: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=7e364e56293bb98cae1b55fd835f5991c4e96e7d
X-Osstest-Versions-That:
    linux=6ab608fe852b50fe809b22cdf7db6cbe006d7cb3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 Apr 2023 12:57:25 +0000

flight 180116 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180116/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180113
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180113
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180113
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180113
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180113
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                7e364e56293bb98cae1b55fd835f5991c4e96e7d
baseline version:
 linux                6ab608fe852b50fe809b22cdf7db6cbe006d7cb3

Last test of basis   180113  2023-04-02 18:13:46 Z    0 days
Testing same since   180116  2023-04-03 02:26:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Torvalds <torvalds@linux-foundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   6ab608fe852b..7e364e56293b  7e364e56293bb98cae1b55fd835f5991c4e96e7d -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 13:25:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 13:25:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517446.802762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjKBG-0002Um-D8; Mon, 03 Apr 2023 13:25:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517446.802762; Mon, 03 Apr 2023 13:25:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjKBG-0002Uf-A9; Mon, 03 Apr 2023 13:25:06 +0000
Received: by outflank-mailman (input) for mailman id 517446;
 Mon, 03 Apr 2023 13:25:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dypz=72=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjKBE-0002UZ-Ot
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 13:25:04 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on061e.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f08920c2-d222-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 15:25:03 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8401.eurprd04.prod.outlook.com (2603:10a6:20b:3f3::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.30; Mon, 3 Apr
 2023 13:25:00 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.033; Mon, 3 Apr 2023
 13:25:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f08920c2-d222-11ed-85db-49a42c6b2330
Message-ID: <335f22be-cd5b-8cc9-cd9a-25caa59df384@suse.com>
Date: Mon, 3 Apr 2023 15:24:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/pci: Correct ECS handling with CF8/CFC emulation
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230331175719.500285-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230331175719.500285-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0132.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 31.03.2023 19:57, Andrew Cooper wrote:
> This started by noticing the dubious Fam17h check in
> arch_ioreq_server_get_type_addr(), and then realising that checking the host
> CF8_EXT setting is utterly bogus for the guest <-> qemu emulation path.
> 
> What should be consulted here is the guest's MSR_AMD64_NB_CFG setting, but a)
> there isn't one, and b) the vestigial remnants of the cross-vendor migration
> logic cause MSR_AMD64_NB_CFG to be unconditionally read-as-zero, making the
> CF8_EXT path unused by any suitably-written OS in the first place.
> 
> MSR_AMD64_NB_CFG really has been removed on Fam17h (it's now just a read-zero,
> write-discard stub), and the ECS extension is unconditionally active, meaning
> it is not correct for Xen to ignore the ExtRegNo field on newer AMD CPUs.

I don't buy "unconditionally active". Whether to enable this by default is
up to firmware; I view it as not unlikely that some firmware actually has
a setup control for it. Fam17 controls it through D1[8-F]F4x044:0, while
Fam19 has D1[8-F]F0xC00:0 for this purpose. (I've had a work item for quite
some time to actually make use of this bit to enable ECS.)

> It turns out that Xen even had this behaviour in 4.5 and earlier, with
> this problematic CF8_EXT checking being added in 4.6.  Therefore, revert back
> to Xen's older behaviour - it is objectively less wrong than the current
> logic.
> 
> While fixing this, get rid of hvm_pci_decode_addr() - it is more complicated
> to follow (and to call) than using the CF8* macros in the calling context.
> Rename CF8_ADDR() to CF8_REG() to better describe what it does, and write a
> comment explaining all about CF8/CFC accesses.
> 
> There's one rare case when CF8_EXT is visible to guests, and that is for a
> pinned hwdom.  Right now, we permit such a dom0 to modify the CF8_EXT bit, but
> this seems like a very unwise idea.  Leave a TODO for people to consider.
> 
> Fixes: e0fbf3bf9871 ("x86/AMD: correct certain Fam17 checks")

Therefore I'm not convinced this Fixes: tag is warranted.

> Fixes: 2d67a7a4d37a ("x86: synchronize PCI config space access decoding")

I'm also curious which particular aspect of that change you are considering
to be fixed here.

Your claim about the behavior being reserved is perhaps okay as far as
hardware is concerned, but by removing the checks you e.g. mislead
xsm_pci_config_permission() in case it handles certain register ranges
specially. Dealing with that was, according to the description, one of the
purposes of the commit above (it's been long enough that I can only go by
the description in git).

> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -248,20 +248,6 @@ void register_g2m_portio_handler(struct domain *d)
>      handler->ops = &g2m_portio_ops;
>  }
>  
> -unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
> -                                 pci_sbdf_t *sbdf)
> -{
> -    ASSERT(CF8_ENABLED(cf8));
> -
> -    sbdf->bdf = CF8_BDF(cf8);
> -    sbdf->seg = 0;
> -    /*
> -     * NB: the lower 2 bits of the register address are fetched from the
> -     * offset into the 0xcfc register when reading/writing to it.
> -     */
> -    return CF8_ADDR_LO(cf8) | (addr & 3);
> -}

I have to admit I'm surprised that you fold the replacement of this
function (purely mechanical afaict) into a patch making a significant
functional change. Wouldn't you agree that this may better be split off?
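For context, the legacy CF8 decode being removed above, plus the AMD ECS
extension under discussion, can be sketched roughly as follows. This is a
hedged sketch with an illustrative helper name, not Xen's actual code; the
bit layout follows the standard CF8 format, with ExtRegNo in bits 27:24 as
AMD documents for ECS:

```c
#include <stdint.h>

/*
 * Sketch of CF8/CFC config-address decoding, including the AMD ECS
 * (Extended Configuration Space) extension.
 *
 * CF8 layout: [31] enable, [27:24] ExtRegNo (AMD ECS only),
 * [23:16] bus, [15:11] device, [10:8] function, [7:2] register.
 */
static unsigned int cf8_decode_reg(uint32_t cf8, unsigned int cfc_offset,
                                   int ecs_enabled)
{
    unsigned int reg = cf8 & 0xfc;        /* dword-aligned register bits */

    if ( ecs_enabled )
        reg |= ((cf8 >> 24) & 0xf) << 8;  /* ExtRegNo -> reg bits 11:8 */

    /* The low 2 bits come from the offset of the 0xcfc..0xcff access. */
    return reg | (cfc_offset & 3);
}
```

With ECS enabled this yields a 12-bit register number (4K config space);
ignoring ExtRegNo, as the hunk above reverts to doing conditionally, simply
drops the top four bits.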

> @@ -1104,6 +1092,11 @@ static int cf_check write_msr(
>          if ( !is_hwdom_pinned_vcpu(curr) )
>              return X86EMUL_OKAY;
>          if ( (rdmsr_safe(MSR_AMD64_NB_CFG, temp) != 0) ||
> +             /*
> +              * TODO: this is broken.  What happens when dom0 is pinned but
> +              * can't see the full system?  CF8_EXT probably ought to be a
> +              * Xen-owned setting, and made symmetric across the system.
> +              */
>               ((val ^ temp) & ~(1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
>              goto invalid;
>          if ( wrmsr_safe(MSR_AMD64_NB_CFG, val) == 0 )

What do you mean by "can't see the full system"? Originally Linux used
only the MSR view of the bit to enable ECS, but this was specifically
extended to prefer the PCI config space view instead (24d9b70b8c67
"x86: Use PCI method for enabling AMD extended config space before MSR
method").

Since here we're dealing with the MSR flavor, and since Linux shouldn't
be using this anymore (due to checking whether the bit is clear before
trying to set it), we may be okay with simply purging this code if we
don't care about very old Linux Dom0 anymore (or if we use your
argument of it not reliably affecting the entire system).
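The filtering in the quoted write_msr() hunk amounts to: a pinned hwdom may
toggle only the CF8_EXT enable bit, and any other deviation from the host
MSR value is rejected. A minimal sketch of that check (the function name is
illustrative; the bit position matches AMD64_NB_CFG_CF8_EXT_ENABLE_BIT,
assumed here to be bit 46):

```c
#include <stdint.h>

#define CF8_EXT_ENABLE_BIT 46  /* assumed position of the CF8_EXT bit */

/*
 * Return nonzero iff the guest-written value differs from the host's
 * MSR_AMD64_NB_CFG value in at most the CF8_EXT enable bit.
 */
static int nb_cfg_write_allowed(uint64_t host_val, uint64_t new_val)
{
    return ((new_val ^ host_val) & ~(1ULL << CF8_EXT_ENABLE_BIT)) == 0;
}
```

Any scheme making the bit Xen-owned would presumably drop this per-write
comparison in favour of a fixed, system-wide setting.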

As to the setting becoming Xen-owned: with Dom0 being responsible for
(almost) all of PCI, it would be somewhat odd to take this level of
control away from it.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 13:27:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 13:27:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517449.802772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjKDG-000345-QL; Mon, 03 Apr 2023 13:27:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517449.802772; Mon, 03 Apr 2023 13:27:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjKDG-00033y-Mz; Mon, 03 Apr 2023 13:27:10 +0000
Received: by outflank-mailman (input) for mailman id 517449;
 Mon, 03 Apr 2023 13:27:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W/KQ=72=citrix.com=prvs=450b71a79=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pjKDE-00033s-Im
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 13:27:08 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 388c2b9d-d223-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 15:27:05 +0200 (CEST)
Received: from mail-mw2nam12lp2047.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.47])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Apr 2023 09:26:52 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by BLAPR03MB5410.namprd03.prod.outlook.com (2603:10b6:208:29c::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 13:26:49 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Mon, 3 Apr 2023
 13:26:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 388c2b9d-d223-11ed-b464-930f4c7d94ae
Date: Mon, 3 Apr 2023 15:26:42 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/pci: Correct ECS handling with CF8/CFC emulation
Message-ID: <ZCrUErZZkd6co1Dq@Air-de-Roger>
References: <20230331175719.500285-1-andrew.cooper3@citrix.com>
 <ZCqVEHe1Qo3skeVf@Air-de-Roger>
 <4b76def9-9940-ccf0-8050-12ddf2c1253c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4b76def9-9940-ccf0-8050-12ddf2c1253c@citrix.com>
X-ClientProxiedBy: LO2P265CA0142.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::34) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 47bd20a4-5c87-4e72-e82c-08db34471472
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 13:26:49.5696
 (UTC)

On Mon, Apr 03, 2023 at 11:16:52AM +0100, Andrew Cooper wrote:
> On 03/04/2023 9:57 am, Roger Pau Monné wrote:
> > On Fri, Mar 31, 2023 at 06:57:19PM +0100, Andrew Cooper wrote:
> >> This started by noticing the dubious Fam17h check in
> >> arch_ioreq_server_get_type_addr(), and then realising that checking the host
> >> CF8_EXT setting is utterly bogus for the guest <-> qemu emulation path.
> >>
> >> What should be consulted here is the guest's MSR_AMD64_NB_CFG setting, but a)
> >> there isn't one, and b) the vestigial remnants of the cross-vendor migration
> >> logic cause MSR_AMD64_NB_CFG to be unconditionally read-as-zero, making the
> >> CF8_EXT path unused by any suitably-written OS in the first place.
> >>
> >> MSR_AMD64_NB_CFG really has been removed on Fam17h (it's now just a read-zero,
> >> write-discard stub), and the ECS extension is unconditionally active, meaning
> >> it is not correct for Xen to ignore the ExtRegNo field on newer AMD CPUs.
> >>
> >> It turns out that Xen even had this behaviour in 4.5 and earlier, with the
> >> problematic CF8_EXT checking being added in 4.6.  Therefore, revert back
> >> to Xen's older behaviour - it is objectively less wrong than the current
> >> logic.
> >>
> >> While fixing this, get rid of hvm_pci_decode_addr() - it is more complicated
> >> to follow (and to call) than using the CF8* macros in the calling context.
> >> Rename CF8_ADDR() to CF8_REG() to better describe what it does, and write a
> >> comment explaining all about CF8/CFC accesses.
> >>
> >> There's one rare case when CF8_EXT is visible to guests, and that is for a
> >> pinned hwdom.  Right now, we permit such a dom0 to modify the CF8_EXT bit, but
> >> this seems like a very unwise idea.  Leave a TODO for people to consider.
> > One weirdness I've noticed is that for vPCI we decode the accesses
> > taking the extended CF8 bit after this change, but then if the access
> > is relayed to the hardware using vpci_{read,write}_hw it will be
> > forwarded to the hardware using pci_conf_{read,write}<size> which
> > doesn't have support for CF8_EXT.  So if the underlying hardware
> > doesn't have MMCFG support and the reg is > 255 it will be dropped.
> 
> It is important to stress that this change does not influence whether
> the guest issues ECS accesses or not.  All it does is change Xen's
> handling of such accesses.
> 
> Previously vPCI blindly ignored ECS accesses, so the vPCI layer
> effectively truncated them to BCS accesses.
> 
> Now, from your analysis, when MMCFG isn't active, Xen's PCI layer will
> effectively terminate ECS accesses with default behaviour, even on
> systems where IO ECS is available.
> 
> So we've changed one valid behaviour for a different valid behaviour.

Given that vPCI is currently limited to PVH dom0, I doubt there are many
systems capable of running PVH dom0 that lack MMCFG support.

The best way to fix this would be to implement ECS accesses in
pci_conf_{read,write}<size>() when the register is > 255 and there's no
MMCFG.

> 
> (Quick tangent...  Our PCI handling is currently very dumb. 
> pci_mmcfg_read() returns its value by pointer but the callers never
> check.  Swapping it to return by value would improve code gen quite a
> lot.  Also, when MMCFG is active we still pass BCS accesses to IO ports.)

I wonder if it's really preferable to access registers below 256 using
the IO ports; Linux seems to do the same (prefer IO port access if
possible).

> So I think we do want to improve Xen's general behaviour too, but this
> difference here doesn't concern me.
> 
> >
> >> Fixes: e0fbf3bf9871 ("x86/AMD: correct certain Fam17 checks")
> >> Fixes: 2d67a7a4d37a ("x86: synchronize PCI config space access decoding")
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> ---
> >> CC: Jan Beulich <JBeulich@suse.com>
> >> CC: Roger Pau Monné <roger.pau@citrix.com>
> >> CC: Wei Liu <wl@xen.org>
> >>
> >> Whoever reviewed those two patches originally was clearly a fool...
> >> ---
> >>  xen/arch/x86/hvm/io.c             | 24 ++++++------------------
> >>  xen/arch/x86/hvm/ioreq.c          | 19 ++-----------------
> >>  xen/arch/x86/include/asm/hvm/io.h |  4 ----
> >>  xen/arch/x86/include/asm/pci.h    | 26 ++++++++++++++++++++++++--
> >>  xen/arch/x86/pv/emul-priv-op.c    | 19 ++++++-------------
> >>  5 files changed, 38 insertions(+), 54 deletions(-)
> >>
> >> diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> >> index 5ae209d3b6b3..b0d3c236e985 100644
> >> --- a/xen/arch/x86/hvm/io.c
> >> +++ b/xen/arch/x86/hvm/io.c
> >> @@ -248,20 +248,6 @@ void register_g2m_portio_handler(struct domain *d)
> >>      handler->ops = &g2m_portio_ops;
> >>  }
> >>  
> >> -unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
> >> -                                 pci_sbdf_t *sbdf)
> >> -{
> >> -    ASSERT(CF8_ENABLED(cf8));
> >> -
> >> -    sbdf->bdf = CF8_BDF(cf8);
> >> -    sbdf->seg = 0;
> >> -    /*
> >> -     * NB: the lower 2 bits of the register address are fetched from the
> >> -     * offset into the 0xcfc register when reading/writing to it.
> >> -     */
> >> -    return CF8_ADDR_LO(cf8) | (addr & 3);
> >> -}
> >> -
> >>  /* vPCI config space IO ports handlers (0xcf8/0xcfc). */
> >>  static bool cf_check vpci_portio_accept(
> >>      const struct hvm_io_handler *handler, const ioreq_t *p)
> >> @@ -275,7 +261,7 @@ static int cf_check vpci_portio_read(
> >>  {
> >>      const struct domain *d = current->domain;
> >>      unsigned int reg;
> >> -    pci_sbdf_t sbdf;
> >> +    pci_sbdf_t sbdf = {};
> >>      uint32_t cf8;
> >>  
> >>      *data = ~(uint64_t)0;
> >> @@ -292,7 +278,8 @@ static int cf_check vpci_portio_read(
> >>      if ( !CF8_ENABLED(cf8) )
> >>          return X86EMUL_UNHANDLEABLE;
> >>  
> >> -    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
> >> +    sbdf.bdf = CF8_BDF(cf8);
> >> +    reg = CF8_REG(cf8) | (addr & 3);
> >>  
> >>      if ( !vpci_access_allowed(reg, size) )
> >>          return X86EMUL_OKAY;
> >> @@ -308,7 +295,7 @@ static int cf_check vpci_portio_write(
> >>  {
> >>      struct domain *d = current->domain;
> >>      unsigned int reg;
> >> -    pci_sbdf_t sbdf;
> >> +    pci_sbdf_t sbdf = {};
> >>      uint32_t cf8;
> >>  
> >>      if ( addr == 0xcf8 )
> >> @@ -323,7 +310,8 @@ static int cf_check vpci_portio_write(
> >>      if ( !CF8_ENABLED(cf8) )
> >>          return X86EMUL_UNHANDLEABLE;
> >>  
> >> -    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
> >> +    sbdf.bdf = CF8_BDF(cf8);
> >> +    reg = CF8_REG(cf8) | (addr & 3);
> >>  
> >>      if ( !vpci_access_allowed(reg, size) )
> >>          return X86EMUL_OKAY;
> >> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> >> index 0bdcca1e1a5f..325a9d118e52 100644
> >> --- a/xen/arch/x86/hvm/ioreq.c
> >> +++ b/xen/arch/x86/hvm/ioreq.c
> >> @@ -285,27 +285,12 @@ bool arch_ioreq_server_get_type_addr(const struct domain *d,
> >>           (p->addr & ~3) == 0xcfc &&
> >>           CF8_ENABLED(cf8) )
> >>      {
> >> -        unsigned int x86_fam, reg;
> >> -        pci_sbdf_t sbdf;
> >> -
> >> -        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
> >> +        pci_sbdf_t sbdf = { .bdf = CF8_BDF(cf8) };
> >> +        unsigned int reg = CF8_REG(cf8) | (p->addr & 3);
> >>  
> >>          /* PCI config data cycle */
> >>          *type = XEN_DMOP_IO_RANGE_PCI;
> >>          *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
> >> -        /* AMD extended configuration space access? */
> >> -        if ( CF8_ADDR_HI(cf8) &&
> >> -             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
> >> -             (x86_fam = get_cpu_family(
> >> -                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 &&
> >> -             x86_fam < 0x17 )
> >> -        {
> >> -            uint64_t msr_val;
> >> -
> >> -            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
> >> -                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
> >> -                *addr |= CF8_ADDR_HI(cf8);
> >> -        }
> >>      }
> >>      else
> >>      {
> >> diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h
> >> index 54e0161b492c..3f3fb6403ccb 100644
> >> --- a/xen/arch/x86/include/asm/hvm/io.h
> >> +++ b/xen/arch/x86/include/asm/hvm/io.h
> >> @@ -144,10 +144,6 @@ void stdvga_deinit(struct domain *d);
> >>  
> >>  extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
> >>  
> >> -/* Decode a PCI port IO access into a bus/slot/func/reg. */
> >> -unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
> >> -                                 pci_sbdf_t *sbdf);
> >> -
> >>  /*
> >>   * HVM port IO handler that performs forwarding of guest IO ports into machine
> >>   * IO ports.
> >> diff --git a/xen/arch/x86/include/asm/pci.h b/xen/arch/x86/include/asm/pci.h
> >> index f4a58c8acf13..3b814f4ebacf 100644
> >> --- a/xen/arch/x86/include/asm/pci.h
> >> +++ b/xen/arch/x86/include/asm/pci.h
> >> @@ -3,10 +3,32 @@
> >>  
> >>  #include <xen/mm.h>
> >>  
> >> +/*
> >> + * PCI config space accesses with CF8/CFC:
> >> + *
> >> + * 1) Write {Enable | BDF | Reg} to CF8 to set an address
> >> + * 2) Read or write CF{C..F} to access the register
> >> + *
> >> + * For sub-dword register accesses, the bottom two register address bits come
> >> + * from the CF{C..F} address, not from CF8.
> >> + *
> >> + * AMD have an extension to this protocol to access PCIe Extended Config
> >> + * Space, storing the 4 extra register address bits in the penultimate
> >> + * nibble of CF8.  This extension:
> >> + *  - Is unconditionally active on Fam17h and later
> >> + *  - Has model specific enablement on Fam10h thru Fam16h
> >> + *  - Has reserved behaviour in all other cases, including other vendors
> >> + *
> >> + * For simplicity and because we are permitted to, given "reserved", Xen
> >> + * always treats ECS as active when emulating guest PCI config space accesses.
> >> + */
> >>  #define CF8_BDF(cf8)     (  ((cf8) & 0x00ffff00) >> 8)
> >> -#define CF8_ADDR_LO(cf8) (   (cf8) & 0x000000fc)
> >> -#define CF8_ADDR_HI(cf8) (  ((cf8) & 0x0f000000) >> 16)
> >>  #define CF8_ENABLED(cf8) (!!((cf8) & 0x80000000))
> >> +#define CF8_REG(cf8)                                    \
> >> +    ({                                                  \
> >> +        unsigned int _c = cf8;                          \
> >> +        ((_c & 0x0f000000) >> 16) | (_c & 0xfc);        \
> >> +    })
> > What happens on Intel when the bit is set, is it just ignored?
> 
> "the bit" => the ECS nibble, or the CF8_EXT bit?

The ECS nibble.

> 
> The ECS nibble is ignored on Intel AFAICT, while the CF8_EXT bit is in a
> very AMD-only MSR, so won't exist on Intel.
> 
> > We only allow such accesses for dom0 anyway.
> 
> And guests running on AMD hardware where CF8_EXT is active on the
> northbridge of the core we are instantaneously scheduled on.
> 
> >>  
> >>  #define IS_SNB_GFX(id) (id == 0x01068086 || id == 0x01168086 \
> >>                          || id == 0x01268086 || id == 0x01028086 \
> >> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
> >> index 5da00e24e4ff..008367195c78 100644
> >> --- a/xen/arch/x86/pv/emul-priv-op.c
> >> +++ b/xen/arch/x86/pv/emul-priv-op.c
> >> @@ -245,19 +245,7 @@ static bool pci_cfg_ok(struct domain *currd, unsigned int start,
> >>          if ( ro_map && test_bit(machine_bdf, ro_map) )
> >>              return false;
> >>      }
> >> -    start |= CF8_ADDR_LO(currd->arch.pci_cf8);
> >> -    /* AMD extended configuration space access? */
> >> -    if ( CF8_ADDR_HI(currd->arch.pci_cf8) &&
> >> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
> >> -         boot_cpu_data.x86 >= 0x10 && boot_cpu_data.x86 < 0x17 )
> >> -    {
> >> -        uint64_t msr_val;
> >> -
> >> -        if ( rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) )
> >> -            return false;
> >> -        if ( msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT) )
> >> -            start |= CF8_ADDR_HI(currd->arch.pci_cf8);
> >> -    }
> >> +    start |= CF8_REG(currd->arch.pci_cf8);
> >>  
> >>      return !write ?
> >>             xsm_pci_config_permission(XSM_HOOK, currd, machine_bdf,
> >> @@ -1104,6 +1092,11 @@ static int cf_check write_msr(
> >>          if ( !is_hwdom_pinned_vcpu(curr) )
> >>              return X86EMUL_OKAY;
> >>          if ( (rdmsr_safe(MSR_AMD64_NB_CFG, temp) != 0) ||
> >> +             /*
> >> +              * TODO: this is broken.  What happens when dom0 is pinned but
> >> +              * can't see the full system?  CF8_EXT probably ought to be a
> >> +              * Xen-owned setting, and made symmetric across the system.
> >> +              */
> I would assume CF8_EXT would be symmetric across the system, especially
> given that it seems to be phased out and only used in older AMD
> families which were all symmetric?
> 
> The CF8_EXT bit has been phased out.  The IO ECS functionality still exists.
> 
> But yes, the more I think about letting dom0 play with this, the more I
> think it is a fundamentally broken idea...  I bet it was from the very
> early AMD Fam10h days where dom0 knew how to turn it on, and Xen was
> trying to pretend it didn't have to touch any PCI devices.

It seems to me Xen should set CF8_EXT on all threads (when available)
and expose it to dom0, so that accesses using pci_conf_{read,write}()
work as expected?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 13:41:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 13:41:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517456.802787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjKQh-0005Xq-9R; Mon, 03 Apr 2023 13:41:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517456.802787; Mon, 03 Apr 2023 13:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjKQh-0005Xj-6V; Mon, 03 Apr 2023 13:41:03 +0000
Received: by outflank-mailman (input) for mailman id 517456;
 Mon, 03 Apr 2023 13:41:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Dtn=72=gmail.com=tamas.k.lengyel@srs-se1.protection.inumbo.net>)
 id 1pjKQf-0005Xd-PG
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 13:41:01 +0000
Received: from mail-wr1-x435.google.com (mail-wr1-x435.google.com
 [2a00:1450:4864:20::435])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2b01bb91-d225-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 15:40:59 +0200 (CEST)
Received: by mail-wr1-x435.google.com with SMTP id v1so29397987wrv.1
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 06:40:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b01bb91-d225-11ed-85db-49a42c6b2330
MIME-Version: 1.0
References: <MN2PR07MB6045100322F58085DD6B1488E4BF9@MN2PR07MB6045.namprd07.prod.outlook.com>
 <c12ff321-e1ad-1377-2158-195594fdbe04@citrix.com> <MN2PR07MB6045B965DD2DA308C55905F9E4BF9@MN2PR07MB6045.namprd07.prod.outlook.com>
 <a1a814cd-9a76-9828-ffab-5590fcd5925f@citrix.com> <BYAPR07MB6040EB2AEC1567C5982FBD51E48E9@BYAPR07MB6040.namprd07.prod.outlook.com>
In-Reply-To: <BYAPR07MB6040EB2AEC1567C5982FBD51E48E9@BYAPR07MB6040.namprd07.prod.outlook.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Mon, 3 Apr 2023 09:40:22 -0400
Message-ID: <CABfawhnOvSGeAQPxdm8Yrm8iRswiZ=r4g+B6ZhLVx4bYV5y7GA@mail.gmail.com>
Subject: Re: Best way to use altp2m to support VMFUNC EPT-switching?
To: "Johnson, Ethan" <ejohns48@cs.rochester.edu>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="00000000000082fece05f86eb58f"

--00000000000082fece05f86eb58f
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Mar 29, 2023 at 10:29 PM Johnson, Ethan <ejohns48@cs.rochester.edu>
wrote:
>
> On 2023-03-16 02:14:18 +0000, Andrew Cooper wrote:
> > Ok, so there is a lot here.  Apologies in advance for the overly long
> > answer.
> >
> > First, while altp2m was developed in parallel with EPTP-switching, we
> > took care to split the vendor neutral parts from the vendor specific
> > bits.  So while we do have VMFUNC support, that's considered "just" a
> > hardware optimisation to speed up the HVMOP_altp2m_switch_p2m hypercall.
> >
> > But before you start, it is important to understand your security
> > boundaries.  You've found external mode, and this is all about
> > controlling which aspects of altp2m the guest can invoke itself, and
> > modes other than external let the guest issue HVMOP_altp2m ops itself.
> >
> > If you permit the guest to change views itself, either with VMFUNC, or
> > HVMOP_altp2m_switch_p2m, you have to realise that these are just
> > "regular" CPL0 actions, and can be invoked by any kernel code, not just
> > your driver.  i.e. the union of all primary and alternative views is one
> > single security domain.
> >
> > For some usecases this is fine, but yours doesn't look like it fits in
> > this category.  In particular, no amount of protection on the trampoline
> > pages stops someone writing a VMFUNC instruction elsewhere in kernel
> > space and executing it.
> >
> > (I have seen plenty of research papers try to construct a security
> > boundary around VMFUNC.  I have yet to see one that does so robustly, but I
> > do enjoy being surprised on occasion...)
> >
> > The first production use this technology I'm aware of was Bitdefender's
> > HVMI, where the guest had no control at all, and was subject to the
> > permission restrictions imposed on it by the agent in dom0.  The agent
> > trapped everything it considered sensitive, including writes to
> > sensitive areas of memory using reduced EPT permissions, and either
> > permitted execution to continue, or took other preventative action.
> >
> > This highlights another key point.  Some entity in the system needs to
> > deal with faults that occur when the guest accidentally (or otherwise)
> > violates the reduced EPT permissions.  #VE is, again, an optimisation to
> > let violations be handled in guest context, rather than taking a VMExit,
> > but even with #VE the complicated corner cases are left to the external
> > agent.
> >
> > With HVMI, #VE (but not VMFUNC IIRC) did get used as an optimisation to
> > mitigate the perf hit from Window's Meltdown mitigation electing to use
> > LOCK'd BTS/BTC operations on pagetables (which were write protected
> > behind the scenes), but I'm reliably informed that the hoops required to
> > jump through to make that work, and in particular avoid the notice of
> > PatchGuard, were substantial.
> >
> > Perhaps a more accessible example is
> > https://github.com/intel/kernel-fuzzer-for-xen-project and the
> > underlying libvmi.  There is also a very basic example in
> > tools/misc/xen-access.c in the Xen tree.
> >
> > For your question specifically about mapping other frames, we do have
> > hypercalls to map other frames (it's necessary for e.g. mapping BARs of
> > passed-through PCI devices), but for obvious reasons, it's restricted to
> > control software (Qemu) in dom0.  I suspect we don't actually have a
> > hypercall to map MMIO into an alternative view, but it shouldn't be hard
> > to add (if you still decide you want it by the end of this email).
> >
> >
> > But on to the specifics of mapping the xAPIC page.  Sorry, but
> > irrespective of altp2m, that is a non-starter, for reasons that date
> > back to ~1997 or thereabouts.
> >
> > It's worth saying that AMD can fully virtualise IPI delivery from one
> > vCPU to another without either taking a VMExit in the common case, since
> > Zen1 (IIRC).  Intel has a similar capability since Sapphire Rapids
> > (IIRC).  Xen doesn't support either yet, because there are only so many
> > hours in the day...
> >
> > It is technically possible to map the xAPIC window into a guest, and
> > such a guest could interact the real interrupt controller.  But now
> > you've got the problem that two bits of software (Xen, and your magic
> > piece of guest kernel) are trying to driver the same single interrupt
> > controller.
> >
> > Even if you were to say that the guest would only use ICR to send
> > interrupts, that still doesn't work.  In xAPIC, ICR is formed of two
> > half registers, as it dates from the days of 32bit processors, with a
> > large stride between the two half registers.
> >
> > Therefore, it is a minimum of two separate instructions (set destination
> > in ICR_HI, set type/mode/etc in ICR_LO) to send an interrupt.
> >
> > A common bug in kernels is to try and send IPIs when interrupts are
> > enabled, or in NMI context, both of which could interrupt an IPI
> > sequence.  This results in a sequence of writes (from the LAPIC's point
> > of view) of ICR_HI, ICR_HI, ICR_LO, ICR_LO, which causes the outer IPI
> > to be sent with the wrong destination.
> >
> > Guests always execute with IRQs enabled, but can take a VMExit on any
> > arbitrary instruction boundary for other reasons, so the guest kernel
> > can never be sure that ICR_HI hasn't been modified by Xen in the
> > background, even if it used two adjacent instructions to send the IPI.
> >
> > Now, if you were to swap xAPIC for x2APIC, one of the bigger changes was
> > making ICR a single register, so it could be written atomically.  But
> > now you have an MSR based interface, not an MMIO based interface.
> >
> > It's also worth noting that any system with >254 CPUs is necessarily
> > operating in x2APIC mode (so there isn't an xAPIC window to map, even if
> > you wanted to try), and because of the ÆPIC Leak vulnerability, IceLake
> > and later CPUs are locked into x2APIC mode by firmware, with no option
> > to revert back into xAPIC mode even on smaller systems.
> >
> > On top of that, you've still got the problem of determining the
> > destination.  Even if the guest could send an IPI, it still has to know
> > the physical APIC ID of the CPU the target vCPU is currently scheduled
> > on.  And you'd have to ignore things like the logical mode or
> > destination shorthands, because multi/broadcast IPIs will hit incorrect
> > targets.
> >
> > On top of that, even if you can determine the right destination, how
> > does the target receive the interrupt?  There can only be one entity in
> > the system receiving INTR, and that's Xen.  So you've got to pick some
> > vector that Xen knows what to do with, but isn't otherwise using.
> >
> > Not to mention there's a(nother) giant security hole... A guest able to
> > issue interrupts could just send INIT-SIPI-SIPI and reset the target CPU
> > back into real mode behind Xen's back.  Xen will not take kindly to this.
> >
> >
> > So while I expect there's plenty of room to innovate on the realm switch
> > aspect of EPTP-switching, trying to send IPIs from within guest context
> > is something that I will firmly suggest you avoid.  There are good
> > reasons why it is so complicated to get VMExit-less guest IPIs working.
> >
> > ~Andrew
>
> Thank you for the detailed answers and context. I am somewhat encouraged to
> note that most of the roadblocks you mentioned are issues we've specifically
> considered (and think we have solutions for) in our design. :-) We're using
> some rather exotic compiler-based instrumentation on the guest kernel (plus
> some tricks with putting the "secure realm"'s page tables in a nonoverlapping
> guest-physical address range that isn't present in the primary p2m used by
> untrusted code) to prevent the guest from doing things it isn't supposed to
> with VMFUNC and (x2)APIC access, despite running in ring 0 within non-root
> mode.
>
> On a more concrete level, I am looking to do the following from within the
> hypervisor (specifically, from within a new hypercall I've added):
>
> 1) Get some (host-)physical memory frames from the domain heap and "pin" them
> to make sure they won't be swapped out.
>
> 2) Create an altp2m for the calling (current) domain.
>
> 3) Map some of the newly-allocated physical frames into both the domain's
> primary p2m and its altp2m, with R/X permissions.
>
> 4) Map the rest of the physical frames into only the altp2m (as R/W), at a
> guest-physical address higher than the end of the main p2m's mapped range
> (such that when the primary p2m is active, the guest cannot access these
> pages without taking a hard VM-exit fault).
>
> I've been poring through Xen's p2m code (e.g. xen/arch/x86/mm/p2m.c) to try
> to understand how to achieve these goals, but with little success. Comments
> in the p2m code seem to be rather sparse, and mostly unhelpful for
> understanding (without pre-understood context) what many of the functions do
> and what is the intended workflow for using them. For instance,
> similarly-named functions like guest_remove_page() and
> guest_physmap_remove_page() seem to operate at different levels of
> abstraction (in terms of memory management, refcount bookkeeping, etc.) but
> it isn't externally obvious how they're meant to all fit together and be used
> by client code.
>
> Any suggestions on which p2m (or other) APIs I should be focusing on, and
how
> they're meant to be used, would be greatly appreciated. I suppose in
theory I
> could just bypass p2m entirely, and populate one of the VMCS's
EPTP-switching
> array's slots directly with my own manually constructed paging hierarchy
> (since I'm envisioning the memory layout of our "secure realm" as being
quite
> simple - it only needs a handful of pages). But I'd rather "color within
the
> lines" of the existing APIs if possible, especially since some of the
pages
> will need to be mapped into the existing primary p2m (for the "insecure
> realm") as well.

You can find an example workflow for creating altp2m's and changing memory
permissions in the different views here:
https://github.com/xen-project/xen/blob/master/tools/misc/xen-access.c#L517
To add a new page to the VM you can use xc_domain_populate_physmap_exact.
If you add the page after the VM has already booted, the main kernel is
unaware of these extra pages, but that doesn't mean it can't try to poke
them. Similarly, using any type of memory map to keep the kernel from
accessing these pages is just wishful thinking; the memory map is, after
all, just a hint to the OS about what to look for, not an access-control
mechanism.

Also keep in mind that altp2m's get CoW-populated from the hostp2m. You can
still keep your altp2m to "only a couple pages" by either 1) ensuring no
other pages ever get touched while running the vCPU with the altp2m, so as
not to trigger the CoW mechanism; or 2) manually changing the memaccess
permissions to "n" on every page you want to be inaccessible in the altp2m.
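Stitched together, that dom0-side flow (create a view, add a page, restrict
access with memaccess "n", switch views) might look roughly like the
libxenctrl sketch below. This is an untested sketch, not a definitive
implementation: it assumes a running HVM guest with altp2m enabled in
external mode, the domid and GFN values are placeholders, and error handling
is elided.

```c
#include <xenctrl.h>
#include <stdint.h>

int main(void)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    uint32_t domid = 1;            /* placeholder: target HVM guest */
    uint16_t view_id;

    /* Create an alternate view; its entries are lazily (CoW) populated
     * from the host p2m as the guest touches pages. */
    xc_altp2m_create_view(xch, domid, XENMEM_access_rwx, &view_id);

    /* Add a fresh page at a GFN above the guest's normal memory map.
     * The guest kernel doesn't know about it, but nothing short of EPT
     * permissions actually stops it from poking that address. */
    xen_pfn_t gfn = 0x100000;      /* placeholder: beyond the guest E820 */
    xc_domain_populate_physmap_exact(xch, domid, 1, 0, 0, &gfn);

    /* Make a range of pages inaccessible in the alternate view by
     * setting their memaccess permissions to 'n'. */
    for ( xen_pfn_t g = 0x1000; g < 0x1010; g++ )
        xc_altp2m_set_mem_access(xch, domid, view_id, g, XENMEM_access_n);

    /* Switch all vCPUs to the restricted view. */
    xc_altp2m_switch_to_view(xch, domid, view_id);

    xc_interface_close(xch);
    return 0;
}
```

The same calls appear (with proper error checking) in tools/misc/xen-access.c,
which is the better template to copy from.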

You'll likely want to have pages such as where the IDT and GDT are mapped in
the altp2m, alongside the pagetable pages. An easy way to check which pages
are needed for execution in a given code context is to use the VM forking
mechanism: create a fork at the point where the code you want to run in the
altp2m is about to execute, singlestep the fork by a single instruction,
then examine the fork's EPT using xl debug-keys D. Anything you see mapped
into the fork's memory would similarly need to be accessible in the altp2m.
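That fork-and-inspect trick can be scripted roughly as below. This is a
hedged sketch: `xl fork-vm` is an experimental command whose exact flags
vary by Xen version, the domain names are placeholders, and the
single-stepping step is done out-of-band (e.g. via a monitor/vm_event
client), so check `xl help` on your build before relying on it.

```shell
# Pause the parent at the point of interest, then fork it
# (hypothetical invocation; flags differ between Xen releases).
xl pause mydomain
xl fork-vm -p mydomain

# ...single-step the fork one instruction via your monitor client...

# Dump the EPT/p2m state into the hypervisor log and read it back.
xl debug-keys D
xl dmesg | less
```

The pages that show up as populated in the fork's dump are the ones the
altp2m view will need to contain.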

Cheers,
Tamas

--00000000000082fece05f86eb58f--


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 14:28:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 14:28:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517465.802816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLAI-0001cW-0K; Mon, 03 Apr 2023 14:28:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517465.802816; Mon, 03 Apr 2023 14:28:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLAH-0001cP-Sn; Mon, 03 Apr 2023 14:28:09 +0000
Received: by outflank-mailman (input) for mailman id 517465;
 Mon, 03 Apr 2023 14:28:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W/KQ=72=citrix.com=prvs=450b71a79=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pjLAF-0001cJ-LF
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 14:28:08 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bbe73976-d22b-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 16:28:01 +0200 (CEST)
Received: from mail-dm6nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.102])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Apr 2023 10:27:52 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by DM6PR03MB5241.namprd03.prod.outlook.com (2603:10b6:5:24c::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.34; Mon, 3 Apr
 2023 14:27:50 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Mon, 3 Apr 2023
 14:27:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bbe73976-d22b-11ed-b464-930f4c7d94ae
Date: Mon, 3 Apr 2023 16:27:46 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH] x86/p2m-pt: do type recalculations with p2m read lock
Message-ID: <ZCriYs9y6JU1gat9@Air-de-Roger>
References: <20230403101449.93323-1-roger.pau@citrix.com>
 <8d976d34-8a1e-95ad-3bc9-3cb704c1fae7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <8d976d34-8a1e-95ad-3bc9-3cb704c1fae7@suse.com>
MIME-Version: 1.0

On Mon, Apr 03, 2023 at 02:39:08PM +0200, Jan Beulich wrote:
> On 03.04.2023 12:14, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/mm/p2m-pt.c
> > +++ b/xen/arch/x86/mm/p2m-pt.c
> > @@ -486,9 +486,6 @@ static int cf_check do_recalc(struct p2m_domain *p2m, unsigned long gfn)
> >          p2m_type_t ot, nt;
> >          unsigned long mask = ~0UL << (level * PAGETABLE_ORDER);
> >  
> > -        if ( !valid_recalc(l1, e) )
> > -            P2M_DEBUG("bogus recalc leaf at d%d:%lx:%u\n",
> > -                      p2m->domain->domain_id, gfn, level);
> >          ot = p2m_flags_to_type(l1e_get_flags(e));
> >          nt = p2m_recalc_type_range(true, ot, p2m, gfn & mask, gfn | ~mask);
> >          if ( nt != ot )
> 
> I'm afraid I neither understand why you make this change, nor why you
> then leave the other use of valid_recalc() in place.

The message can be bogus if we allow concurrent do_recalc() calls, and I
did miss the other use of valid_recalc() at the top.  Originally I wanted
to send the RFC with just the lock changed to read mode, but then I
thought I might as well fix that (now bogus) print message.

> > @@ -538,9 +535,9 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa)
> >       */
> >      ASSERT(!altp2m_active(current->domain));
> >  
> > -    p2m_lock(p2m);
> > +    p2m_read_lock(p2m);
> >      rc = do_recalc(p2m, PFN_DOWN(gpa));
> > -    p2m_unlock(p2m);
> > +    p2m_read_unlock(p2m);
> >  
> >      return rc;
> >  }
> 
> How can this be safe, when do_recalc() involves p2m_next_level(), which
> may install new (intermediate) page tables?

Oh, great, I didn't realize it was capable of doing so; it's more hidden
than in the EPT case.  It seems this will only happen if a superpage
needs to be split because a lower-order frame is being used as an
ioreq server page.

Do you think it would be safe to attempt the recalc with the read lock
only, and fall back to the write lock if there's a need to call
p2m_next_level()?

Do you agree it might be possible to do the recalc with just the
read lock if only the PTE type / recalc flags need updating?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 14:41:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 14:41:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517470.802826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLMf-00040T-7V; Mon, 03 Apr 2023 14:40:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517470.802826; Mon, 03 Apr 2023 14:40:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLMf-00040M-4r; Mon, 03 Apr 2023 14:40:57 +0000
Received: by outflank-mailman (input) for mailman id 517470;
 Mon, 03 Apr 2023 14:40:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjLMd-00040C-5h; Mon, 03 Apr 2023 14:40:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjLMd-000359-4R; Mon, 03 Apr 2023 14:40:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjLMc-0000bd-SU; Mon, 03 Apr 2023 14:40:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjLMc-0006fm-Rw; Mon, 03 Apr 2023 14:40:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mUemyF7kJ+WSqtFthDJ62h4efiBNP1EudohcMPCEIsE=; b=GdUkNQJAW/5r8uL3bhNlxmE7YY
	LpdTPf/WuI6S2gTm4RvVrGHv9UVXESrHdOw4klCVzMVPbbKbFWpTIAZfUV96+eLj4uwh3qFiRbpTn
	bjBWYS74chQ3gVFqwPnejhe2LzN3SGsh6y2V5SvDXeJr7umsSOhp4ILG2xXQYMNT47D4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180119-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180119: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=bfa2e6a246225233f09a2523939e01dcf83bca4c
X-Osstest-Versions-That:
    xen=d6e0b4c41a38655ade7ecb566e8b2961282769fb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 Apr 2023 14:40:54 +0000

flight 180119 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180119/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  bfa2e6a246225233f09a2523939e01dcf83bca4c
baseline version:
 xen                  d6e0b4c41a38655ade7ecb566e8b2961282769fb

Last test of basis   180085  2023-03-31 07:01:54 Z    3 days
Failing since        180117  2023-04-03 11:03:28 Z    0 days    2 attempts
Testing same since   180119  2023-04-03 13:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d6e0b4c41a..bfa2e6a246  bfa2e6a246225233f09a2523939e01dcf83bca4c -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 14:56:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 14:56:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517478.802839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLbW-0005bS-HY; Mon, 03 Apr 2023 14:56:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517478.802839; Mon, 03 Apr 2023 14:56:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLbW-0005bL-Et; Mon, 03 Apr 2023 14:56:18 +0000
Received: by outflank-mailman (input) for mailman id 517478;
 Mon, 03 Apr 2023 14:56:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dypz=72=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjLbV-0005bE-7T
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 14:56:17 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0601.outbound.protection.outlook.com
 [2a01:111:f400:fe02::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aebab964-d22f-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 16:56:16 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8969.eurprd04.prod.outlook.com (2603:10a6:20b:408::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 14:56:13 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.033; Mon, 3 Apr 2023
 14:56:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aebab964-d22f-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hezAqPL86OnmI4MejXQqvnV4J4QmjgUxuIoLbrPRLyFIyRfycMRuJCCjiKWmTVgDyQQPrE/JJRedDSTwy2IUxYnd/KH2zPQ4PredvVwxtkENfr3eUb8IiYDhwY5GI9DSBrnR14ABkc6vsPHlZZve7CfRLuf0Pp5vf8j+Cy6UxgSdmZ/ub24N9Uw6qS0VreDxzqtizqIXFkY90fhHz3rECpB03wLX9BpSBebf2xowzEtYL2CsQ0qgUoWJj9sOrPVH1g/mdlxxqQtG0ATxfqU+7y5vb2Lr+xueOU4VmOSxJFgGshLhVwB1luBXbR5L1nQX89fqMuqdceMnrwhKzu6ttQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hCUUEF17r9XW3dkx0a5vTv5SWd+1bdYAn0PcErwXMKY=;
 b=ARuvEzmApDObq3RFi38OlrGxxfnWI8BdYeD5W/UIbHSg9xPVIpO7Ct63yq75wBeuEX4J0cszOAQCdfrdbhJZQbaVCU+SyBcExF5Rj6DiQ45HYBQ6rbMJbDhsAGwIgzB9Uq8lQF2b8zQYPAQYx5p/5ldGSSZ8E03AcYxk0kmc6bvhB9Z5uhd4Oid4gzLSnPFfuWDupgy7mD6JKHqEOh55DpPdRpPmu41COtQPgn9nUf1qnLh+1eszdCRYdWXZNEOmKL/m1P4eBZydaCXuiad/66BHHD2SQi5QoGjW0hZkAPCThhai8Zs/x8gdskJLUvLbDyKwp6WF3heTa9gxQQUpJA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hCUUEF17r9XW3dkx0a5vTv5SWd+1bdYAn0PcErwXMKY=;
 b=I+o87UIhGCP86CGyaG1lc2Z1IOCm8JK8RXLiLirKaZhVdkvrpMpj0Hh3szau9P0zr6RXBhNL4NBI70vRYwREL412WjBqVj2vgUFkNGEcEYVJtYtYI5uXFYS00MJFrzWu53lHoZl8ynQqdSPy4fmQPDq5kTTlW4n+s3oZ/GfVud3dIrKqIpaJ2pYogeui7FyUp6Xw42QffMYJZML2e+KmFB9hUORA7Vkyv8QzEPHFe/GrRPoZJ82ySzWwnIki/BvE9ZsPv4l8mf345CZDZTA7kLkBBYyDGDmbJewqJ8Cyxigk7XDd/QqKjxiE2GzS9ugSjP9NYN/evP2f2r1E/g2Iww==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
Date: Mon, 3 Apr 2023 16:56:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 00/10] x86: support AVX512-FP16
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0113.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8969:EE_
X-MS-Office365-Filtering-Correlation-Id: 176e682b-0784-4677-f35f-08db3453918c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 176e682b-0784-4677-f35f-08db3453918c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 14:56:13.4152
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ajySwkRM0E9DQaQy721e9SfyfqDo48EUlpawqs2qMG9cmXiwyzmEhBAxbKSN5HoijtvZ9LaTP48TSnFNC+MyIg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8969

While I (quite obviously) don't have any suitable hardware, Intel's
SDE allows testing the implementation. And since there's no new
state (registers) associated with this ISA extension, this should
suffice for integration.
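
For reference, running the emulator test harness under SDE might look
like the following (the sde64 invocation and test binary path are
illustrative; -spr and -future select the emulated CPU model):

```shell
# Emulate a Sapphire Rapids CPU (has AVX512-FP16):
sde64 -spr -- ./test_x86_emulator

# Or emulate a "future" CPU model:
sde64 -future -- ./test_x86_emulator
```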

01: handle AVX512-FP16 insns encoded in 0f3a opcode map
02: handle AVX512-FP16 Map5 arithmetic insns
03: handle AVX512-FP16 move insns
04: handle AVX512-FP16 fma-like insns
05: handle AVX512-FP16 Map6 misc insns
06: handle AVX512-FP16 complex multiplication insns
07: handle AVX512-FP16 conversion to/from (packed) int16 insns
08: handle AVX512-FP16 floating point conversion insns
09: handle AVX512-FP16 conversion to/from (packed) int{32,64} insns
10: AVX512-FP16 testing

I've re-based this ahead of the also-pending AMX series (and,
obviously, ahead of the not-yet-submitted KeyLocker one), in the hope
that this may find its way in sooner than that other series.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 14:57:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 14:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517481.802849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLcO-00066n-QV; Mon, 03 Apr 2023 14:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517481.802849; Mon, 03 Apr 2023 14:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLcO-00066g-N9; Mon, 03 Apr 2023 14:57:12 +0000
Received: by outflank-mailman (input) for mailman id 517481;
 Mon, 03 Apr 2023 14:57:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dypz=72=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjLcM-00066R-HX
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 14:57:10 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20607.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::607])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cea60352-d22f-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 16:57:09 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8760.eurprd04.prod.outlook.com (2603:10a6:10:2e3::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 14:57:06 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.033; Mon, 3 Apr 2023
 14:57:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cea60352-d22f-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=acCpsPXMXDMlfJg+cL1cmT0lBWOGMvb0w6OND+xqasBt6i3ccE5k5hoFWdaDYgubxzEUjC6htBqA0VrC0BKbAwXj8qQO2Mem2RSueRbovXVGshtRa+PCcbRTQikw+ive3cjWXwT4ZsYJXRWnrFFpUdCT2W1jtlZ6gkucsObn8ajfn2tYGtLZekQeFsuUixRicjiC83nFY3Lt3YcuuGyHVBQrCbNMmCZyCdH7Huqc148L/3QkkFvcqxZRhVu2s6SYhOxi+OaojF6208TtD55Pyon7nVBDpQ3Ms198ZamRAEgp2k8+/ixQIWmzO0zrYCdMhQQSkuGzofFKGd28EXnMlw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FCrSDs0f16ekqy4Eik8NqBW3LlGngJ7nD0dM0P0nsr8=;
 b=LghBN9P3jwxWVOMeSwJ43Kb5OL1TweMekV/42QexP0f6FAbrIO2GgsEjwTqLinPWtmuOVKR6F36XlcIWsP/Td36Id92iO72bz2m5FAh/I5wNm3PHCMt7fdw6g076upvL0t7iFtarPrDPh2ap13H4j2f2bYey3AGDTyByIQl06X78qhRSFg/0zwGQ956n10KWTcJ31tEOAk/F+HmzYx8L0X5T+g6QK64xJL4g8fbqL8mY2vnkU1xahqZs4OzQKBGuOrQCViqvM2pSrCHvwQJNPcSHq48bSTQ8JR9XM0daDwagi/hRe6FP+JhKY1IVCodzuhri+wg79aFScQCLtnMhAg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FCrSDs0f16ekqy4Eik8NqBW3LlGngJ7nD0dM0P0nsr8=;
 b=ihTbTw4ZFCEFdExKJRV/bfiTEuRtUrMpXtcJfFUlF8yCM7qIur9ff6Pnekrm5r83cuRDsQl7NzwSDBvuLxjvxIX9LCBr20elx8GjPIOg04aSCiicHwhkCA0hQEm56DKx+WCE6mP4qIvBoVfE/XS5sb7FLH3CEwqRkDb9Zp0gC8T3WP7XU1dHg0m0lGocC98sokQ/6GSaK6ETzhAcTzA+GyYk8xcIdwrlfGPpx7vwsvofQCAHPJn2AZU1u/7boyUYMcbSzjXwa2tiOmvkYz/LLSCKpTEFRbQa2NeFztkxSpxZFM2qU8AF8hJtCggJcMlb7NmXL+Ec3o2FnFQ7YPf57g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6ce78724-a826-f53d-4ff2-701cc279289a@suse.com>
Date: Mon, 3 Apr 2023 16:57:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH v2 01/10] x86emul: handle AVX512-FP16 insns encoded in 0f3a
 opcode map
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
In-Reply-To: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0120.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8760:EE_
X-MS-Office365-Filtering-Correlation-Id: 1398e316-7ab8-459b-e2ad-08db3453b0f3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1398e316-7ab8-459b-e2ad-08db3453b0f3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 14:57:06.0431
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: faOVin8bABF1uGPHQt/CB5svc+hwVdgJ9qE3Q3n21/t+VwxkmC3XvN5/SOwOKZ4W0K6pA5AaKHx0p0NDCsLHCg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8760

In order to re-use (also in subsequent patches) existing code and tables
as much as possible, simply introduce a new boolean field in emulator
state indicating whether an insn is one with a half-precision source.
Everything else then follows "naturally".

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
SDE: -spr or -future

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -76,6 +76,7 @@ enum esz {
     ESZ_b,
     ESZ_w,
     ESZ_bw,
+    ESZ_fp16,
 };
 
 #ifndef __i386__
@@ -601,6 +602,19 @@ static const struct test avx512_vpopcntd
     INSN(popcnt, 66, 0f38, 55, vl, dq, vl)
 };
 
+static const struct test avx512_fp16_all[] = {
+    INSN(cmpph,           , 0f3a, c2,    vl, fp16, vl),
+    INSN(cmpsh,         f3, 0f3a, c2,    el, fp16, el),
+    INSN(fpclassph,       , 0f3a, 66,    vl, fp16, vl),
+    INSN(fpclasssh,       , 0f3a, 67,    el, fp16, el),
+    INSN(getmantph,       , 0f3a, 26,    vl, fp16, vl),
+    INSN(getmantsh,       , 0f3a, 27,    el, fp16, el),
+    INSN(reduceph,        , 0f3a, 56,    vl, fp16, vl),
+    INSN(reducesh,        , 0f3a, 57,    el, fp16, el),
+    INSN(rndscaleph,      , 0f3a, 08,    vl, fp16, vl),
+    INSN(rndscalesh,      , 0f3a, 0a,    el, fp16, el),
+};
+
 static const struct test gfni_all[] = {
     INSN(gf2p8affineinvqb, 66, 0f3a, cf, vl, q, vl),
     INSN(gf2p8affineqb,    66, 0f3a, ce, vl, q, vl),
@@ -728,8 +742,10 @@ static void test_one(const struct test *
         break;
 
     case ESZ_w:
-        esz = 2;
         evex.w = 1;
+        /* fall through */
+    case ESZ_fp16:
+        esz = 2;
         break;
 
 #ifdef __i386__
@@ -845,7 +861,7 @@ static void test_one(const struct test *
     case ESZ_b: case ESZ_w: case ESZ_bw:
         return;
 
-    case ESZ_d: case ESZ_q:
+    case ESZ_d: case ESZ_q: case ESZ_fp16:
         break;
 
     default:
@@ -1002,6 +1018,7 @@ void evex_disp8_test(void *instr, struct
     RUN(avx512_vnni, all);
     RUN(avx512_vp2intersect, all);
     RUN(avx512_vpopcntdq, all);
+    RUN(avx512_fp16, all);
 
     if ( cpu_has_avx512f )
     {
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -1972,8 +1972,10 @@ static const struct evex {
     { { 0x03 }, 3, T, R, pfx_66, Wn, Ln }, /* valign{d,q} */
     { { 0x04 }, 3, T, R, pfx_66, W0, Ln }, /* vpermilps */
     { { 0x05 }, 3, T, R, pfx_66, W1, Ln }, /* vpermilpd */
+    { { 0x08 }, 3, T, R, pfx_no, W0, Ln }, /* vrndscaleph */
     { { 0x08 }, 3, T, R, pfx_66, W0, Ln }, /* vrndscaleps */
     { { 0x09 }, 3, T, R, pfx_66, W1, Ln }, /* vrndscalepd */
+    { { 0x0a }, 3, T, R, pfx_no, W0, LIG }, /* vrndscalesh */
     { { 0x0a }, 3, T, R, pfx_66, W0, LIG }, /* vrndscaless */
     { { 0x0b }, 3, T, R, pfx_66, W1, LIG }, /* vrndscalesd */
     { { 0x0f }, 3, T, R, pfx_66, WIG, Ln }, /* vpalignr */
@@ -1993,7 +1995,9 @@ static const struct evex {
     { { 0x22 }, 3, T, R, pfx_66, Wn, L0 }, /* vpinsr{d,q} */
     { { 0x23 }, 3, T, R, pfx_66, Wn, L1|L2 }, /* vshuff{32x4,64x2} */
     { { 0x25 }, 3, T, R, pfx_66, Wn, Ln }, /* vpternlog{d,q} */
+    { { 0x26 }, 3, T, R, pfx_no, W0, Ln }, /* vgetmantph */
     { { 0x26 }, 3, T, R, pfx_66, Wn, Ln }, /* vgetmantp{s,d} */
+    { { 0x27 }, 3, T, R, pfx_no, W0, LIG }, /* vgetmantsh */
     { { 0x27 }, 3, T, R, pfx_66, Wn, LIG }, /* vgetmants{s,d} */
     { { 0x38 }, 3, T, R, pfx_66, Wn, L1|L2 }, /* vinserti{32x4,64x2} */
     { { 0x39 }, 3, T, W, pfx_66, Wn, L1|L2 }, /* vextracti{32x4,64x2} */
@@ -2008,14 +2012,20 @@ static const struct evex {
     { { 0x51 }, 3, T, R, pfx_66, Wn, LIG }, /* vranges{s,d} */
     { { 0x54 }, 3, T, R, pfx_66, Wn, Ln }, /* vfixupimmp{s,d} */
     { { 0x55 }, 3, T, R, pfx_66, Wn, LIG }, /* vfixumpimms{s,d} */
+    { { 0x56 }, 3, T, R, pfx_no, W0, Ln }, /* vreduceph */
     { { 0x56 }, 3, T, R, pfx_66, Wn, Ln }, /* vreducep{s,d} */
+    { { 0x57 }, 3, T, R, pfx_no, W0, LIG }, /* vreducesh */
     { { 0x57 }, 3, T, R, pfx_66, Wn, LIG }, /* vreduces{s,d} */
+    { { 0x66 }, 3, T, R, pfx_no, W0, Ln }, /* vfpclassph */
     { { 0x66 }, 3, T, R, pfx_66, Wn, Ln }, /* vfpclassp{s,d} */
+    { { 0x67 }, 3, T, R, pfx_no, W0, LIG }, /* vfpclasssh */
     { { 0x67 }, 3, T, R, pfx_66, Wn, LIG }, /* vfpclasss{s,d} */
     { { 0x70 }, 3, T, R, pfx_66, W1, Ln }, /* vshldw */
     { { 0x71 }, 3, T, R, pfx_66, Wn, Ln }, /* vshld{d,q} */
     { { 0x72 }, 3, T, R, pfx_66, W1, Ln }, /* vshrdw */
     { { 0x73 }, 3, T, R, pfx_66, Wn, Ln }, /* vshrd{d,q} */
+    { { 0xc2 }, 3, T, R, pfx_no, W0, Ln }, /* vcmpph */
+    { { 0xc2 }, 3, T, R, pfx_f3, W0, LIG }, /* vcmpsh */
     { { 0xce }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineqb */
     { { 0xcf }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineinvqb */
 };
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -4677,6 +4677,44 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing vfpclassphz $0x46,128(%ecx),%k3...");
+    if ( stack_exec && cpu_has_avx512_fp16 )
+    {
+        decl_insn(vfpclassph);
+
+        asm volatile ( put_insn(vfpclassph,
+                                /* 0x46: check for +/- 0 and neg. */
+                                /* vfpclassphz $0x46, 128(%0), %%k3 */
+                                ".byte 0x62, 0xf3, 0x7c, 0x48\n\t"
+                                ".byte 0x66, 0x59, 0x02, 0x46")
+                       :: "c" (NULL) );
+
+        set_insn(vfpclassph);
+        for ( i = 0; i < 3; ++i )
+        {
+            res[16 + i * 5 + 0] = 0x7fff0000; /* +0 / +NaN */
+            res[16 + i * 5 + 1] = 0xffff8000; /* -0 / -NaN */
+            res[16 + i * 5 + 2] = 0x80010001; /* +DEN / -DEN */
+            res[16 + i * 5 + 3] = 0xfc00f800; /* -FIN / -INF */
+            res[16 + i * 5 + 4] = 0x7c007800; /* +FIN / +INF */
+        }
+        res[31] = 0;
+        regs.ecx = (unsigned long)res - 64;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( rc != X86EMUL_OKAY || !check_eip(vfpclassph) )
+            goto fail;
+        asm volatile ( "kmovd %%k3, %0" : "=g" (rc) );
+        /*
+         * 0b11(0001100101)*3
+         * 0b1100_0110_0101_0001_1001_0100_0110_0101
+         */
+        if ( rc != 0xc6519465 )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     /*
      * The following compress/expand tests are not only making sure the
      * accessed data is correct, but they also verify (by placing operands
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -183,6 +183,7 @@ void wrpkru(unsigned int val);
 #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6))
 #define cpu_has_avx512_vp2intersect (cp.feat.avx512_vp2intersect && xcr0_mask(0xe6))
 #define cpu_has_serialize  cp.feat.serialize
+#define cpu_has_avx512_fp16 (cp.feat.avx512_fp16 && xcr0_mask(0xe6))
 #define cpu_has_avx_vnni   (cp.feat.avx_vnni && xcr0_mask(6))
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
 
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -518,6 +518,7 @@ static const struct ext0f3a_table {
     [0x7a ... 0x7b] = { .simd_size = simd_scalar_opc, .four_op = 1 },
     [0x7c ... 0x7d] = { .simd_size = simd_packed_fp, .four_op = 1 },
     [0x7e ... 0x7f] = { .simd_size = simd_scalar_opc, .four_op = 1 },
+    [0xc2] = { .simd_size = simd_any_fp, .d8s = d8s_vl },
     [0xcc] = { .simd_size = simd_other },
     [0xce ... 0xcf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0xdf] = { .simd_size = simd_packed_int, .two_op = 1 },
@@ -579,7 +580,7 @@ static unsigned int decode_disp8scale(en
         if ( s->evex.brs )
         {
     case d8s_dq:
-            return 2 + s->evex.w;
+            return 1 + !s->fp16 + s->evex.w;
         }
         break;
 
@@ -596,7 +597,7 @@ static unsigned int decode_disp8scale(en
         /* fall through */
     case simd_scalar_opc:
     case simd_scalar_vexw:
-        return 2 + s->evex.w;
+        return 1 + !s->fp16 + s->evex.w;
 
     case simd_128:
         /* These should have an explicit size specified. */
@@ -1417,7 +1418,29 @@ int x86emul_decode(struct x86_emulate_st
              */
             s->simd_size = ext0f3a_table[b].simd_size;
             if ( evex_encoded() )
+            {
+                switch ( b )
+                {
+                case 0x08: /* vrndscaleph */
+                case 0x0a: /* vrndscalesh */
+                case 0x26: /* vfpclassph */
+                case 0x27: /* vfpclasssh */
+                case 0x56: /* vgetmantph */
+                case 0x57: /* vgetmantsh */
+                case 0x66: /* vreduceph */
+                case 0x67: /* vreducesh */
+                    if ( !s->evex.pfx )
+                        s->fp16 = true;
+                    break;
+
+                case 0xc2: /* vcmp{p,s}h */
+                    if ( !(s->evex.pfx & VEX_PREFIX_DOUBLE_MASK) )
+                        s->fp16 = true;
+                    break;
+                }
+
                 disp8scale = decode_disp8scale(ext0f3a_table[b].d8s, s);
+            }
             break;
 
         case ext_8f09:
@@ -1712,7 +1735,7 @@ int x86emul_decode(struct x86_emulate_st
             break;
         case vex_f3:
             generate_exception_if(evex_encoded() && s->evex.w, X86_EXC_UD);
-            s->op_bytes = 4;
+            s->op_bytes = 4 >> s->fp16;
             break;
         case vex_f2:
             generate_exception_if(evex_encoded() && !s->evex.w, X86_EXC_UD);
@@ -1722,11 +1745,11 @@ int x86emul_decode(struct x86_emulate_st
         break;
 
     case simd_scalar_opc:
-        s->op_bytes = 4 << (ctxt->opcode & 1);
+        s->op_bytes = 2 << (!s->fp16 + (ctxt->opcode & 1));
         break;
 
     case simd_scalar_vexw:
-        s->op_bytes = 4 << s->vex.w;
+        s->op_bytes = 2 << (!s->fp16 + s->vex.w);
         break;
 
     case simd_128:
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -305,6 +305,7 @@ struct x86_emulate_state {
     bool lock_prefix;
     bool not_64bit; /* Instruction not available in 64bit. */
     bool fpu_ctrl;  /* Instruction is an FPU control one. */
+    bool fp16;      /* Instruction has half-precision FP source operand. */
     opcode_desc_t desc;
     union vex vex;
     union evex evex;
@@ -592,6 +593,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_avx512_vp2intersect() (ctxt->cpuid->feat.avx512_vp2intersect)
 #define vcpu_has_serialize()   (ctxt->cpuid->feat.serialize)
 #define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
+#define vcpu_has_avx512_fp16() (ctxt->cpuid->feat.avx512_fp16)
 #define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
 
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1300,7 +1300,7 @@ x86_emulate(
     b = ctxt->opcode;
     d = state.desc;
 #define state (&state)
-    elem_bytes = 4 << evex.w;
+    elem_bytes = 2 << (!state->fp16 + evex.w);
 
     generate_exception_if(state->not_64bit && mode_64bit(), EXC_UD);
 
@@ -7145,6 +7145,15 @@ x86_emulate(
         avx512_vlen_check(b & 2);
         goto simd_imm8_zmm;
 
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x0a): /* vrndscalesh $imm8,xmm/mem,xmm,xmm{k} */
+        generate_exception_if(ea.type != OP_REG && evex.brs, EXC_UD);
+        /* fall through */
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x08): /* vrndscaleph $imm8,[xyz]mm/mem,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        avx512_vlen_check(b & 2);
+        goto simd_imm8_zmm;
+
 #endif /* X86EMUL_NO_SIMD */
 
     CASE_SIMD_PACKED_INT(0x0f3a, 0x0f): /* palignr $imm8,{,x}mm/mem,{,x}mm */
@@ -7455,6 +7464,14 @@ x86_emulate(
             avx512_vlen_check(false);
         goto simd_imm8_zmm;
 
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x26): /* vgetmantph $imm8,[xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x56): /* vreduceph $imm8,[xyz]mm/mem,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(false);
+        goto simd_imm8_zmm;
+
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x51): /* vranges{s,d} $imm8,xmm/mem,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x57): /* vreduces{s,d} $imm8,xmm/mem,xmm,xmm{k} */
         host_and_vcpu_must_have(avx512dq);
@@ -7467,6 +7484,16 @@ x86_emulate(
             avx512_vlen_check(true);
         goto simd_imm8_zmm;
 
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x27): /* vgetmantsh $imm8,xmm/mem,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x57): /* vreducesh $imm8,xmm/mem,xmm,xmm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        if ( !evex.brs )
+            avx512_vlen_check(true);
+        else
+            generate_exception_if(ea.type != OP_REG, EXC_UD);
+        goto simd_imm8_zmm;
+
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x30): /* kshiftr{b,w} $imm8,k,k */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x32): /* kshiftl{b,w} $imm8,k,k */
         if ( !vex.w )
@@ -7630,6 +7657,16 @@ x86_emulate(
         avx512_vlen_check(true);
         goto simd_imm8_zmm;
 
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x66): /* vfpclassph $imm8,[xyz]mm/mem,k{k} */
+    case X86EMUL_OPC_EVEX(0x0f3a, 0x67): /* vfpclasssh $imm8,xmm/mem,k{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w || !evex.r || !evex.R || evex.z, EXC_UD);
+        if ( !(b & 1) )
+            goto avx512f_imm8_no_sae;
+        generate_exception_if(evex.brs, EXC_UD);
+        avx512_vlen_check(true);
+        goto simd_imm8_zmm;
+
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x70): /* vpshldw $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x72): /* vpshrdw $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(!evex.w, EXC_UD);
@@ -7640,6 +7677,16 @@ x86_emulate(
         host_and_vcpu_must_have(avx512_vbmi2);
         goto avx512f_imm8_no_sae;
 
+    case X86EMUL_OPC_EVEX_F3(0x0f3a, 0xc2): /* vcmpsh $imm8,xmm/mem,xmm,k{k} */
+        generate_exception_if(ea.type != OP_REG && evex.brs, EXC_UD);
+        /* fall through */
+    case X86EMUL_OPC_EVEX(0x0f3a, 0xc2): /* vcmpph $imm8,[xyz]mm/mem,[xyz]mm,k{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w || !evex.r || !evex.R || evex.z, EXC_UD);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(evex.pfx & VEX_PREFIX_SCALAR_MASK);
+        goto simd_imm8_zmm;
+
     case X86EMUL_OPC(0x0f3a, 0xcc):     /* sha1rnds4 $imm8,xmm/m128,xmm */
         host_and_vcpu_must_have(sha);
         op_bytes = 16;



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 14:57:42 2023
Message-ID: <111f6a32-0c2c-f586-f599-dd0c4451ba4e@suse.com>
Date: Mon, 3 Apr 2023 16:57:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH v2 02/10] x86emul: handle AVX512-FP16 Map5 arithmetic insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
In-Reply-To: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

This encoding space is a very sparse clone of the "twobyte" one. Re-use
that table, as the entries corresponding to invalid opcodes in Map5 are
simply benign with simd_size forced to other than simd_none (preventing
undue memory reads in SrcMem handling early in x86_emulate()).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Add comments.

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -6,7 +6,7 @@
 struct test {
     const char *mnemonic;
     unsigned int opc:8;
-    unsigned int spc:2;
+    unsigned int spc:3;
     unsigned int pfx:2;
     unsigned int vsz:3;
     unsigned int esz:4;
@@ -19,6 +19,10 @@ enum spc {
     SPC_0f,
     SPC_0f38,
     SPC_0f3a,
+    SPC_unused4,
+    SPC_map5,
+    SPC_map6,
+    SPC_unused7,
 };
 
 enum pfx {
@@ -603,16 +607,32 @@ static const struct test avx512_vpopcntd
 };
 
 static const struct test avx512_fp16_all[] = {
+    INSN(addph,           , map5, 58,    vl, fp16, vl),
+    INSN(addsh,         f3, map5, 58,    el, fp16, el),
     INSN(cmpph,           , 0f3a, c2,    vl, fp16, vl),
     INSN(cmpsh,         f3, 0f3a, c2,    el, fp16, el),
+    INSN(comish,          , map5, 2f,    el, fp16, el),
+    INSN(divph,           , map5, 5e,    vl, fp16, vl),
+    INSN(divsh,         f3, map5, 5e,    el, fp16, el),
     INSN(fpclassph,       , 0f3a, 66,    vl, fp16, vl),
     INSN(fpclasssh,       , 0f3a, 67,    el, fp16, el),
     INSN(getmantph,       , 0f3a, 26,    vl, fp16, vl),
     INSN(getmantsh,       , 0f3a, 27,    el, fp16, el),
+    INSN(maxph,           , map5, 5f,    vl, fp16, vl),
+    INSN(maxsh,         f3, map5, 5f,    el, fp16, el),
+    INSN(minph,           , map5, 5d,    vl, fp16, vl),
+    INSN(minsh,         f3, map5, 5d,    el, fp16, el),
+    INSN(mulph,           , map5, 59,    vl, fp16, vl),
+    INSN(mulsh,         f3, map5, 59,    el, fp16, el),
     INSN(reduceph,        , 0f3a, 56,    vl, fp16, vl),
     INSN(reducesh,        , 0f3a, 57,    el, fp16, el),
     INSN(rndscaleph,      , 0f3a, 08,    vl, fp16, vl),
     INSN(rndscalesh,      , 0f3a, 0a,    el, fp16, el),
+    INSN(sqrtph,          , map5, 51,    vl, fp16, vl),
+    INSN(sqrtsh,        f3, map5, 51,    el, fp16, el),
+    INSN(subph,           , map5, 5c,    vl, fp16, vl),
+    INSN(subsh,         f3, map5, 5c,    el, fp16, el),
+    INSN(ucomish,         , map5, 2e,    el, fp16, el),
 };
 
 static const struct test gfni_all[] = {
@@ -713,8 +733,8 @@ static void test_one(const struct test *
     union evex {
         uint8_t raw[3];
         struct {
-            uint8_t opcx:2;
-            uint8_t mbz:2;
+            uint8_t opcx:3;
+            uint8_t mbz:1;
             uint8_t R:1;
             uint8_t b:1;
             uint8_t x:1;
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2028,6 +2028,23 @@ static const struct evex {
     { { 0xc2 }, 3, T, R, pfx_f3, W0, LIG }, /* vcmpsh */
     { { 0xce }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineqb */
     { { 0xcf }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineinvqb */
+}, evex_map5[] = {
+    { { 0x2e }, 2, T, R, pfx_no, W0, LIG }, /* vucomish */
+    { { 0x2f }, 2, T, R, pfx_no, W0, LIG }, /* vcomish */
+    { { 0x51 }, 2, T, R, pfx_no, W0, Ln }, /* vsqrtph */
+    { { 0x51 }, 2, T, R, pfx_f3, W0, LIG }, /* vsqrtsh */
+    { { 0x58 }, 2, T, R, pfx_no, W0, Ln }, /* vaddph */
+    { { 0x58 }, 2, T, R, pfx_f3, W0, LIG }, /* vaddsh */
+    { { 0x59 }, 2, T, R, pfx_no, W0, Ln }, /* vmulph */
+    { { 0x59 }, 2, T, R, pfx_f3, W0, LIG }, /* vmulsh */
+    { { 0x5c }, 2, T, R, pfx_no, W0, Ln }, /* vsubph */
+    { { 0x5c }, 2, T, R, pfx_f3, W0, LIG }, /* vsubsh */
+    { { 0x5d }, 2, T, R, pfx_no, W0, Ln }, /* vminph */
+    { { 0x5d }, 2, T, R, pfx_f3, W0, LIG }, /* vminsh */
+    { { 0x5e }, 2, T, R, pfx_no, W0, Ln }, /* vdivph */
+    { { 0x5e }, 2, T, R, pfx_f3, W0, LIG }, /* vdivsh */
+    { { 0x5f }, 2, T, R, pfx_no, W0, Ln }, /* vmaxph */
+    { { 0x5f }, 2, T, R, pfx_f3, W0, LIG }, /* vmaxsh */
 };
 
 static const struct {
@@ -2037,6 +2054,8 @@ static const struct {
     { evex_0f,   ARRAY_SIZE(evex_0f) },
     { evex_0f38, ARRAY_SIZE(evex_0f38) },
     { evex_0f3a, ARRAY_SIZE(evex_0f3a) },
+    { NULL,      0 },
+    { evex_map5, ARRAY_SIZE(evex_map5) },
 };
 
 #undef Wn
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -1219,9 +1219,22 @@ int x86emul_decode(struct x86_emulate_st
                         opcode |= MASK_INSR(0x0f3a, X86EMUL_OPC_EXT_MASK);
                         d = twobyte_table[0x3a].desc;
                         break;
+
+                    case evex_map5:
+                        if ( !evex_encoded() )
+                        {
                     default:
-                        rc = X86EMUL_UNRECOGNIZED;
-                        goto done;
+                            rc = X86EMUL_UNRECOGNIZED;
+                            goto done;
+                        }
+                        opcode |= MASK_INSR(5, X86EMUL_OPC_EXT_MASK);
+                        /*
+                         * Re-use twobyte_table[] here, for the similarity of
+                         * the entries valid in map 5.
+                         */
+                        d = twobyte_table[b].desc;
+                        s->simd_size = twobyte_table[b].size ?: simd_other;
+                        break;
                     }
                 }
                 else if ( s->ext < ext_8f08 + ARRAY_SIZE(xop_table) )
@@ -1443,6 +1456,25 @@ int x86emul_decode(struct x86_emulate_st
             }
             break;
 
+        case ext_map5:
+            switch ( b )
+            {
+            default:
+                if ( !(s->evex.pfx & VEX_PREFIX_DOUBLE_MASK) )
+                    s->fp16 = true;
+                break;
+
+            case 0x2e: case 0x2f: /* v{,u}comish */
+                if ( !s->evex.pfx )
+                    s->fp16 = true;
+                s->simd_size = simd_none;
+                break;
+            }
+
+            /* Like above re-use twobyte_table[] here. */
+            disp8scale = decode_disp8scale(twobyte_table[b].d8s, s);
+            break;
+
         case ext_8f09:
             if ( ext8f09_table[b].two_op )
                 d |= TwoOp;
@@ -1661,6 +1693,7 @@ int x86emul_decode(struct x86_emulate_st
         s->simd_size = ext8f08_table[b].simd_size;
         break;
 
+    case ext_map5:
     case ext_8f09:
     case ext_8f0a:
         break;
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -195,6 +195,7 @@ enum vex_opcx {
     vex_0f = vex_none + 1,
     vex_0f38,
     vex_0f3a,
+    evex_map5 = 5,
 };
 
 enum vex_pfx {
@@ -223,8 +224,8 @@ union vex {
 union evex {
     uint8_t raw[3];
     struct {             /* SDM names */
-        uint8_t opcx:2;  /* mm */
-        uint8_t mbz:2;
+        uint8_t opcx:3;  /* mmm */
+        uint8_t mbz:1;
         uint8_t R:1;     /* R' */
         uint8_t b:1;     /* B */
         uint8_t x:1;     /* X */
@@ -249,6 +250,7 @@ struct x86_emulate_state {
         ext_0f   = vex_0f,
         ext_0f38 = vex_0f38,
         ext_0f3a = vex_0f3a,
+        ext_map5 = evex_map5,
         /*
          * For XOP use values such that the respective instruction field
          * can be used without adjustment.
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -3756,6 +3756,13 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
+    case X86EMUL_OPC_EVEX(5, 0x2e): /* vucomish xmm/m16,xmm */
+    case X86EMUL_OPC_EVEX(5, 0x2f): /* vcomish xmm/m16,xmm */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        /* fall through */
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x2e): /* vucomis{s,d} xmm/mem,xmm */
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x2f): /* vcomis{s,d} xmm/mem,xmm */
         generate_exception_if((evex.reg != 0xf || !evex.RX || evex.opmsk ||
@@ -3768,9 +3775,11 @@ x86_emulate(
         get_fpu(X86EMUL_FPU_zmm);
 
         opc = init_evex(stub);
-        op_bytes = 4 << evex.w;
+        op_bytes = 2 << (!state->fp16 + evex.w);
         goto vcomi;
 
+#endif
+
     case X86EMUL_OPC(0x0f, 0x30): /* wrmsr */
         generate_exception_if(!mode_ring0(), EXC_GP, 0);
         fail_if(ops->write_msr == NULL);
@@ -7736,6 +7745,20 @@ x86_emulate(
 
 #ifndef X86EMUL_NO_SIMD
 
+    case X86EMUL_OPC_EVEX_F3(5, 0x51):   /* vsqrtsh xmm/m16,xmm,xmm{k} */
+        d &= ~TwoOp;
+        /* fall through */
+    case X86EMUL_OPC_EVEX(5, 0x51):      /* vsqrtph [xyz]mm/mem,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x58): /* vadd{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x59): /* vmul{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x5c): /* vsub{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x5d): /* vmin{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x5e): /* vdiv{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x5f): /* vmax{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        goto avx512f_all_fp;
+
     case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -619,6 +619,7 @@ struct x86_emulate_ctxt
  *    0x0fxxxx for 0f-prefixed opcodes (or their VEX/EVEX equivalents)
  *  0x0f38xxxx for 0f38-prefixed opcodes (or their VEX/EVEX equivalents)
  *  0x0f3axxxx for 0f3a-prefixed opcodes (or their VEX/EVEX equivalents)
+ *     0x5xxxx for Map5 opcodes (EVEX only)
  *  0x8f08xxxx for 8f/8-prefixed XOP opcodes
  *  0x8f09xxxx for 8f/9-prefixed XOP opcodes
  *  0x8f0axxxx for 8f/a-prefixed XOP opcodes



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 14:57:59 2023
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e55c288d-d22f-11ed-85db-49a42c6b2330
Message-ID: <d6c6a796-04fb-8156-e005-a2138be5a99e@suse.com>
Date: Mon, 3 Apr 2023 16:57:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH v2 03/10] x86emul: handle AVX512-FP16 move insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
In-Reply-To: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -622,6 +622,8 @@ static const struct test avx512_fp16_all
     INSN(maxsh,         f3, map5, 5f,    el, fp16, el),
     INSN(minph,           , map5, 5d,    vl, fp16, vl),
     INSN(minsh,         f3, map5, 5d,    el, fp16, el),
+    INSN(movsh,         f3, map5, 10,    el, fp16, el),
+    INSN(movsh,         f3, map5, 11,    el, fp16, el),
     INSN(mulph,           , map5, 59,    vl, fp16, vl),
     INSN(mulsh,         f3, map5, 59,    el, fp16, el),
     INSN(reduceph,        , 0f3a, 56,    vl, fp16, vl),
@@ -635,6 +637,11 @@ static const struct test avx512_fp16_all
     INSN(ucomish,         , map5, 2e,    el, fp16, el),
 };
 
+static const struct test avx512_fp16_128[] = {
+    INSN(movw, 66, map5, 6e, el, fp16, el),
+    INSN(movw, 66, map5, 7e, el, fp16, el),
+};
+
 static const struct test gfni_all[] = {
     INSN(gf2p8affineinvqb, 66, 0f3a, cf, vl, q, vl),
     INSN(gf2p8affineqb,    66, 0f3a, ce, vl, q, vl),
@@ -1039,6 +1046,7 @@ void evex_disp8_test(void *instr, struct
     RUN(avx512_vp2intersect, all);
     RUN(avx512_vpopcntdq, all);
     RUN(avx512_fp16, all);
+    RUN(avx512_fp16, 128);
 
     if ( cpu_has_avx512f )
     {
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2029,6 +2029,8 @@ static const struct evex {
     { { 0xce }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineqb */
     { { 0xcf }, 3, T, R, pfx_66, W1, Ln }, /* vgf2p8affineinvqb */
 }, evex_map5[] = {
+    { { 0x10 }, 2, T, R, pfx_f3, W0, LIG }, /* vmovsh */
+    { { 0x11 }, 2, T, W, pfx_f3, W0, LIG }, /* vmovsh */
     { { 0x2e }, 2, T, R, pfx_no, W0, LIG }, /* vucomish */
     { { 0x2f }, 2, T, R, pfx_no, W0, LIG }, /* vcomish */
     { { 0x51 }, 2, T, R, pfx_no, W0, Ln }, /* vsqrtph */
@@ -2045,6 +2047,8 @@ static const struct evex {
     { { 0x5e }, 2, T, R, pfx_f3, W0, LIG }, /* vdivsh */
     { { 0x5f }, 2, T, R, pfx_no, W0, Ln }, /* vmaxph */
     { { 0x5f }, 2, T, R, pfx_f3, W0, LIG }, /* vmaxsh */
+    { { 0x6e }, 2, T, R, pfx_66, WIG, L0 }, /* vmovw */
+    { { 0x7e }, 2, T, W, pfx_66, WIG, L0 }, /* vmovw */
 };
 
 static const struct {
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -5140,6 +5140,76 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing vmovsh 8(%ecx),%xmm5...");
+    if ( stack_exec && cpu_has_avx512_fp16 )
+    {
+        decl_insn(vmovsh_from_mem);
+        decl_insn(vmovw_to_gpr);
+
+        asm volatile ( "vpcmpeqw %%ymm5, %%ymm5, %%ymm5\n\t"
+                       put_insn(vmovsh_from_mem,
+                                /* vmovsh 8(%0), %%xmm5 */
+                                ".byte 0x62, 0xf5, 0x7e, 0x08\n\t"
+                                ".byte 0x10, 0x69, 0x04")
+                       :: "c" (NULL) );
+
+        set_insn(vmovsh_from_mem);
+        res[2] = 0x3c00bc00;
+        regs.ecx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) || !check_eip(vmovsh_from_mem) )
+            goto fail;
+        asm volatile ( "kmovw     %2, %%k1\n\t"
+                       "vmovdqu16 %1, %%zmm4%{%%k1%}%{z%}\n\t"
+                       "vpcmpeqw  %%zmm4, %%zmm5, %%k0\n\t"
+                       "kmovw     %%k0, %0"
+                       : "=g" (rc)
+                       : "m" (res[2]), "r" (1) );
+        if ( rc != 0xffff )
+            goto fail;
+        printf("okay\n");
+
+        printf("%-40s", "Testing vmovsh %xmm4,2(%eax){%k3}...");
+        memset(res, ~0, 8);
+        res[2] = 0xbc00ffff;
+        memset(res + 3, ~0, 8);
+        regs.eax = (unsigned long)res;
+        regs.ecx = ~0;
+        for ( i = 0; i < 2; ++i )
+        {
+            decl_insn(vmovsh_to_mem);
+
+            asm volatile ( "kmovw %1, %%k3\n\t"
+                           put_insn(vmovsh_to_mem,
+                                    /* vmovsh %%xmm4, 2(%0)%{%%k3%} */
+                                    ".byte 0x62, 0xf5, 0x7e, 0x0b\n\t"
+                                    ".byte 0x11, 0x60, 0x01")
+                           :: "a" (NULL), "r" (i) );
+
+            set_insn(vmovsh_to_mem);
+            rc = x86_emulate(&ctxt, &emulops);
+            if ( (rc != X86EMUL_OKAY) || !check_eip(vmovsh_to_mem) ||
+                 memcmp(res, res + 3 - i, 8) )
+                goto fail;
+        }
+        printf("okay\n");
+
+        printf("%-40s", "Testing vmovw %xmm5,%ecx...");
+        asm volatile ( put_insn(vmovw_to_gpr,
+                                /* vmovw %%xmm5, %0 */
+                                ".byte 0x62, 0xf5, 0x7d, 0x08\n\t"
+                                ".byte 0x7e, 0xe9")
+                       :: "c" (NULL) );
+        set_insn(vmovw_to_gpr);
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) || !check_eip(vmovw_to_gpr) ||
+             regs.ecx != 0xbc00 )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     printf("%-40s", "Testing invpcid 16(%ecx),%%edx...");
     if ( stack_exec )
     {
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -585,7 +585,7 @@ static unsigned int decode_disp8scale(en
         break;
 
     case d8s_dq64:
-        return 2 + (s->op_bytes == 8);
+        return 1 + !s->fp16 + (s->op_bytes == 8);
     }
 
     switch ( s->simd_size )
@@ -1469,6 +1469,15 @@ int x86emul_decode(struct x86_emulate_st
                     s->fp16 = true;
                 s->simd_size = simd_none;
                 break;
+
+            case 0x6e: /* vmovw r/m16, xmm */
+                d = (d & ~SrcMask) | SrcMem16;
+                /* fall through */
+            case 0x7e: /* vmovw xmm, r/m16 */
+                if ( s->evex.pfx == vex_66 )
+                    s->fp16 = true;
+                s->simd_size = simd_none;
+                break;
             }
 
             /* Like above re-use twobyte_table[] here. */
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -4390,6 +4390,15 @@ x86_emulate(
 
 #ifndef X86EMUL_NO_SIMD
 
+    case X86EMUL_OPC_EVEX_66(5, 0x7e): /* vmovw xmm,r/m16 */
+        ASSERT(dst.bytes >= 4);
+        if ( dst.type == OP_MEM )
+            dst.bytes = 2;
+        /* fall through */
+    case X86EMUL_OPC_EVEX_66(5, 0x6e): /* vmovw r/m16,xmm */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x6e): /* vmov{d,q} r/m,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x7e): /* vmov{d,q} xmm,r/m */
         generate_exception_if((evex.lr || evex.opmsk || evex.brs ||
@@ -7745,8 +7754,18 @@ x86_emulate(
 
 #ifndef X86EMUL_NO_SIMD
 
+    case X86EMUL_OPC_EVEX_F3(5, 0x10):   /* vmovsh m16,xmm{k} */
+                                         /* vmovsh xmm,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_F3(5, 0x11):   /* vmovsh xmm,m16{k} */
+                                         /* vmovsh xmm,xmm,xmm{k} */
+        generate_exception_if(evex.brs, EXC_UD);
+        if ( ea.type == OP_MEM )
+            d |= TwoOp;
+        else
+        {
     case X86EMUL_OPC_EVEX_F3(5, 0x51):   /* vsqrtsh xmm/m16,xmm,xmm{k} */
-        d &= ~TwoOp;
+            d &= ~TwoOp;
+        }
         /* fall through */
     case X86EMUL_OPC_EVEX(5, 0x51):      /* vsqrtph [xyz]mm/mem,[xyz]mm{k} */
     CASE_SIMD_SINGLE_FP(_EVEX, 5, 0x58): /* vadd{p,s}h [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 14:58:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 14:58:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517490.802879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLdS-0007l0-Oj; Mon, 03 Apr 2023 14:58:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517490.802879; Mon, 03 Apr 2023 14:58:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLdS-0007kt-Lk; Mon, 03 Apr 2023 14:58:18 +0000
Received: by outflank-mailman (input) for mailman id 517490;
 Mon, 03 Apr 2023 14:58:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dypz=72=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjLdQ-00066R-Ob
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 14:58:16 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2062e.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f6539507-d22f-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 16:58:16 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8760.eurprd04.prod.outlook.com (2603:10a6:10:2e3::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 14:58:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.033; Mon, 3 Apr 2023
 14:58:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6539507-d22f-11ed-85db-49a42c6b2330
Message-ID: <bd6c1183-f9aa-dbd1-84d8-c91a89dbc50f@suse.com>
Date: Mon, 3 Apr 2023 16:58:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH v2 04/10] x86emul: handle AVX512-FP16 fma-like insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
In-Reply-To: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

The Map6 encoding space is a very sparse clone of the "0f38" one. Once
again re-use that table: the entries corresponding to opcodes invalid in
Map6 are benign, as their simd_size is forced to something other than
simd_none (preventing undue memory reads in SrcMem handling early in
x86_emulate()).
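The table re-use described above can be pictured with the toy model below. All names here are hypothetical stand-ins, not Xen's real structures; the point is only that both the 0f38 map and Map6 decode through one shared descriptor array, so sparse Map6 needs no table of its own.

```c
#include <stdint.h>

/* Toy model of sharing one opcode descriptor table between two maps. */
enum ext_map { ext_0f38 = 2, ext_map6 = 6 };

static const uint8_t desc_0f38[256] = {
    [0x98] = 0x12, /* placeholder descriptor for the vfmadd132 slot */
};

static uint8_t opcode_desc(enum ext_map map, uint8_t byte)
{
    switch ( map )
    {
    case ext_0f38:
    case ext_map6: /* deliberately aliased onto the 0f38 table */
        return desc_0f38[byte];
    }
    return 0;
}
```

Any opcode byte thus yields the same descriptor from either map, which is safe precisely because the invalid Map6 slots stay benign as argued above.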

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Add comments.

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -614,6 +614,36 @@ static const struct test avx512_fp16_all
     INSN(comish,          , map5, 2f,    el, fp16, el),
     INSN(divph,           , map5, 5e,    vl, fp16, vl),
     INSN(divsh,         f3, map5, 5e,    el, fp16, el),
+    INSN(fmadd132ph,    66, map6, 98,    vl, fp16, vl),
+    INSN(fmadd132sh,    66, map6, 99,    el, fp16, el),
+    INSN(fmadd213ph,    66, map6, a8,    vl, fp16, vl),
+    INSN(fmadd213sh,    66, map6, a9,    el, fp16, el),
+    INSN(fmadd231ph,    66, map6, b8,    vl, fp16, vl),
+    INSN(fmadd231sh,    66, map6, b9,    el, fp16, el),
+    INSN(fmaddsub132ph, 66, map6, 96,    vl, fp16, vl),
+    INSN(fmaddsub213ph, 66, map6, a6,    vl, fp16, vl),
+    INSN(fmaddsub231ph, 66, map6, b6,    vl, fp16, vl),
+    INSN(fmsub132ph,    66, map6, 9a,    vl, fp16, vl),
+    INSN(fmsub132sh,    66, map6, 9b,    el, fp16, el),
+    INSN(fmsub213ph,    66, map6, aa,    vl, fp16, vl),
+    INSN(fmsub213sh,    66, map6, ab,    el, fp16, el),
+    INSN(fmsub231ph,    66, map6, ba,    vl, fp16, vl),
+    INSN(fmsub231sh,    66, map6, bb,    el, fp16, el),
+    INSN(fmsubadd132ph, 66, map6, 97,    vl, fp16, vl),
+    INSN(fmsubadd213ph, 66, map6, a7,    vl, fp16, vl),
+    INSN(fmsubadd231ph, 66, map6, b7,    vl, fp16, vl),
+    INSN(fnmadd132ph,   66, map6, 9c,    vl, fp16, vl),
+    INSN(fnmadd132sh,   66, map6, 9d,    el, fp16, el),
+    INSN(fnmadd213ph,   66, map6, ac,    vl, fp16, vl),
+    INSN(fnmadd213sh,   66, map6, ad,    el, fp16, el),
+    INSN(fnmadd231ph,   66, map6, bc,    vl, fp16, vl),
+    INSN(fnmadd231sh,   66, map6, bd,    el, fp16, el),
+    INSN(fnmsub132ph,   66, map6, 9e,    vl, fp16, vl),
+    INSN(fnmsub132sh,   66, map6, 9f,    el, fp16, el),
+    INSN(fnmsub213ph,   66, map6, ae,    vl, fp16, vl),
+    INSN(fnmsub213sh,   66, map6, af,    el, fp16, el),
+    INSN(fnmsub231ph,   66, map6, be,    vl, fp16, vl),
+    INSN(fnmsub231sh,   66, map6, bf,    el, fp16, el),
     INSN(fpclassph,       , 0f3a, 66,    vl, fp16, vl),
     INSN(fpclasssh,       , 0f3a, 67,    el, fp16, el),
     INSN(getmantph,       , 0f3a, 26,    vl, fp16, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2049,6 +2049,37 @@ static const struct evex {
     { { 0x5f }, 2, T, R, pfx_f3, W0, LIG }, /* vmaxsh */
     { { 0x6e }, 2, T, R, pfx_66, WIG, L0 }, /* vmovw */
     { { 0x7e }, 2, T, W, pfx_66, WIG, L0 }, /* vmovw */
+}, evex_map6[] = {
+    { { 0x96 }, 2, T, R, pfx_66, W0, Ln }, /* vfmaddsub132ph */
+    { { 0x97 }, 2, T, R, pfx_66, W0, Ln }, /* vfmsubadd132ph */
+    { { 0x98 }, 2, T, R, pfx_66, W0, Ln }, /* vfmadd132ph */
+    { { 0x99 }, 2, T, R, pfx_66, W0, LIG }, /* vfmadd132sh */
+    { { 0x9a }, 2, T, R, pfx_66, W0, Ln }, /* vfmsub132ph */
+    { { 0x9b }, 2, T, R, pfx_66, W0, LIG }, /* vfmsub132sh */
+    { { 0x9c }, 2, T, R, pfx_66, W0, Ln }, /* vfnmadd132ph */
+    { { 0x9d }, 2, T, R, pfx_66, W0, LIG }, /* vfnmadd132sh */
+    { { 0x9e }, 2, T, R, pfx_66, W0, Ln }, /* vfnmsub132ph */
+    { { 0x9f }, 2, T, R, pfx_66, W0, LIG }, /* vfnmsub132sh */
+    { { 0xa6 }, 2, T, R, pfx_66, W0, Ln }, /* vfmaddsub213ph */
+    { { 0xa7 }, 2, T, R, pfx_66, W0, Ln }, /* vfmsubadd213ph */
+    { { 0xa8 }, 2, T, R, pfx_66, W0, Ln }, /* vfmadd213ph */
+    { { 0xa9 }, 2, T, R, pfx_66, W0, LIG }, /* vfmadd213sh */
+    { { 0xaa }, 2, T, R, pfx_66, W0, Ln }, /* vfmsub213ph */
+    { { 0xab }, 2, T, R, pfx_66, W0, LIG }, /* vfmsub213sh */
+    { { 0xac }, 2, T, R, pfx_66, W0, Ln }, /* vfnmadd213ph */
+    { { 0xad }, 2, T, R, pfx_66, W0, LIG }, /* vfnmadd213sh */
+    { { 0xae }, 2, T, R, pfx_66, W0, Ln }, /* vfnmsub213ph */
+    { { 0xaf }, 2, T, R, pfx_66, W0, LIG }, /* vfnmsub213sh */
+    { { 0xb6 }, 2, T, R, pfx_66, W0, Ln }, /* vfmaddsub231ph */
+    { { 0xb7 }, 2, T, R, pfx_66, W0, Ln }, /* vfmsubadd231ph */
+    { { 0xb8 }, 2, T, R, pfx_66, W0, Ln }, /* vfmadd231ph */
+    { { 0xb9 }, 2, T, R, pfx_66, W0, LIG }, /* vfmadd231sh */
+    { { 0xba }, 2, T, R, pfx_66, W0, Ln }, /* vfmsub231ph */
+    { { 0xbb }, 2, T, R, pfx_66, W0, LIG }, /* vfmsub231sh */
+    { { 0xbc }, 2, T, R, pfx_66, W0, Ln }, /* vfnmadd231ph */
+    { { 0xbd }, 2, T, R, pfx_66, W0, LIG }, /* vfnmadd231sh */
+    { { 0xbe }, 2, T, R, pfx_66, W0, Ln }, /* vfnmsub231ph */
+    { { 0xbf }, 2, T, R, pfx_66, W0, LIG }, /* vfnmsub231sh */
 };
 
 static const struct {
@@ -2060,6 +2091,7 @@ static const struct {
     { evex_0f3a, ARRAY_SIZE(evex_0f3a) },
     { NULL,      0 },
     { evex_map5, ARRAY_SIZE(evex_map5) },
+    { evex_map6, ARRAY_SIZE(evex_map6) },
 };
 
 #undef Wn
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -1235,6 +1235,20 @@ int x86emul_decode(struct x86_emulate_st
                         d = twobyte_table[b].desc;
                         s->simd_size = twobyte_table[b].size ?: simd_other;
                         break;
+
+                    case evex_map6:
+                        if ( !evex_encoded() )
+                        {
+                            rc = X86EMUL_UNRECOGNIZED;
+                            goto done;
+                        }
+                        opcode |= MASK_INSR(6, X86EMUL_OPC_EXT_MASK);
+                        /*
+                         * Re-use twobyte_table[]'s 0x38 entry here, for the
+                         * similarity of the 0F38 entries with map 6.
+                         */
+                        d = twobyte_table[0x38].desc;
+                        break;
                     }
                 }
                 else if ( s->ext < ext_8f08 + ARRAY_SIZE(xop_table) )
@@ -1484,6 +1498,28 @@ int x86emul_decode(struct x86_emulate_st
             disp8scale = decode_disp8scale(twobyte_table[b].d8s, s);
             break;
 
+        case ext_map6:
+            /*
+             * Re-use ext0f38_table[] here, for the similarity of the entries
+             * valid in map 6.
+             */
+            d = ext0f38_table[b].to_mem ? DstMem | SrcReg
+                                        : DstReg | SrcMem;
+            if ( ext0f38_table[b].two_op )
+                d |= TwoOp;
+            s->simd_size = ext0f38_table[b].simd_size ?: simd_other;
+
+            switch ( b )
+            {
+            default:
+                if ( s->evex.pfx == vex_66 )
+                    s->fp16 = true;
+                break;
+            }
+
+            disp8scale = decode_disp8scale(ext0f38_table[b].d8s, s);
+            break;
+
         case ext_8f09:
             if ( ext8f09_table[b].two_op )
                 d |= TwoOp;
@@ -1703,6 +1739,7 @@ int x86emul_decode(struct x86_emulate_st
         break;
 
     case ext_map5:
+    case ext_map6:
     case ext_8f09:
     case ext_8f0a:
         break;
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -196,6 +196,7 @@ enum vex_opcx {
     vex_0f38,
     vex_0f3a,
     evex_map5 = 5,
+    evex_map6,
 };
 
 enum vex_pfx {
@@ -251,6 +252,7 @@ struct x86_emulate_state {
         ext_0f38 = vex_0f38,
         ext_0f3a = vex_0f3a,
         ext_map5 = evex_map5,
+        ext_map6 = evex_map6,
         /*
          * For XOP use values such that the respective instruction field
          * can be used without adjustment.
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7778,6 +7778,49 @@ x86_emulate(
         generate_exception_if(evex.w, EXC_UD);
         goto avx512f_all_fp;
 
+    case X86EMUL_OPC_EVEX_66(6, 0x96): /* vfmaddsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x97): /* vfmsubadd132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x98): /* vfmadd132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9a): /* vfmsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9c): /* vfnmadd132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9e): /* vfnmsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xa6): /* vfmaddsub213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xa7): /* vfmsubadd213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xa8): /* vfmadd213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xaa): /* vfmsub213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xac): /* vfnmadd213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xae): /* vfnmsub213ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xb6): /* vfmaddsub231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xb7): /* vfmsubadd231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xb8): /* vfmadd231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xba): /* vfmsub231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xbc): /* vfnmadd231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xbe): /* vfnmsub231ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(false);
+        goto simd_zmm;
+
+    case X86EMUL_OPC_EVEX_66(6, 0x99): /* vfmadd132sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9b): /* vfmsub132sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9d): /* vfnmadd132sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x9f): /* vfnmsub132sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xa9): /* vfmadd213sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xab): /* vfmsub213sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xad): /* vfnmadd213sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xaf): /* vfnmsub213sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xb9): /* vfmadd231sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xbb): /* vfmsub231sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xbd): /* vfnmadd231sh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0xbf): /* vfnmsub231sh xmm/m16,xmm,xmm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w || (ea.type != OP_REG && evex.brs),
+                              EXC_UD);
+        if ( !evex.brs )
+            avx512_vlen_check(true);
+        goto simd_zmm;
+
     case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -620,6 +620,7 @@ struct x86_emulate_ctxt
  *  0x0f38xxxx for 0f38-prefixed opcodes (or their VEX/EVEX equivalents)
  *  0x0f3axxxx for 0f3a-prefixed opcodes (or their VEX/EVEX equivalents)
  *     0x5xxxx for Map5 opcodes (EVEX only)
+ *     0x6xxxx for Map6 opcodes (EVEX only)
  *  0x8f08xxxx for 8f/8-prefixed XOP opcodes
  *  0x8f09xxxx for 8f/9-prefixed XOP opcodes
  *  0x8f0axxxx for 8f/a-prefixed XOP opcodes



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 14:58:49 2023
Message-ID: <5ae781d5-0c8d-3c7e-31ea-b54e1eee6573@suse.com>
Date: Mon, 3 Apr 2023 16:58:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH v2 05/10] x86emul: handle AVX512-FP16 Map6 misc insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
In-Reply-To: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>

As before, this leverages the fact that the Map6 encoding space is a very
sparse clone of the "0f38" one. In addition, switch around the simd_size
override for opcode 2D, so that fewer separate overrides are needed.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -646,6 +646,8 @@ static const struct test avx512_fp16_all
     INSN(fnmsub231sh,   66, map6, bf,    el, fp16, el),
     INSN(fpclassph,       , 0f3a, 66,    vl, fp16, vl),
     INSN(fpclasssh,       , 0f3a, 67,    el, fp16, el),
+    INSN(getexpph,      66, map6, 42,    vl, fp16, vl),
+    INSN(getexpsh,      66, map6, 43,    el, fp16, el),
     INSN(getmantph,       , 0f3a, 26,    vl, fp16, vl),
     INSN(getmantsh,       , 0f3a, 27,    el, fp16, el),
     INSN(maxph,           , map5, 5f,    vl, fp16, vl),
@@ -656,10 +658,16 @@ static const struct test avx512_fp16_all
     INSN(movsh,         f3, map5, 11,    el, fp16, el),
     INSN(mulph,           , map5, 59,    vl, fp16, vl),
     INSN(mulsh,         f3, map5, 59,    el, fp16, el),
+    INSN(rcpph,         66, map6, 4c,    vl, fp16, vl),
+    INSN(rcpsh,         66, map6, 4d,    el, fp16, el),
     INSN(reduceph,        , 0f3a, 56,    vl, fp16, vl),
     INSN(reducesh,        , 0f3a, 57,    el, fp16, el),
     INSN(rndscaleph,      , 0f3a, 08,    vl, fp16, vl),
     INSN(rndscalesh,      , 0f3a, 0a,    el, fp16, el),
+    INSN(rsqrtph,       66, map6, 4e,    vl, fp16, vl),
+    INSN(rsqrtsh,       66, map6, 4f,    el, fp16, el),
+    INSN(scalefph,      66, map6, 2c,    vl, fp16, vl),
+    INSN(scalefsh,      66, map6, 2d,    el, fp16, el),
     INSN(sqrtph,          , map5, 51,    vl, fp16, vl),
     INSN(sqrtsh,        f3, map5, 51,    el, fp16, el),
     INSN(subph,           , map5, 5c,    vl, fp16, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2050,6 +2050,14 @@ static const struct evex {
     { { 0x6e }, 2, T, R, pfx_66, WIG, L0 }, /* vmovw */
     { { 0x7e }, 2, T, W, pfx_66, WIG, L0 }, /* vmovw */
 }, evex_map6[] = {
+    { { 0x2c }, 2, T, R, pfx_66, W0, Ln }, /* vscalefph */
+    { { 0x2d }, 2, T, R, pfx_66, W0, LIG }, /* vscalefsh */
+    { { 0x42 }, 2, T, R, pfx_66, W0, Ln }, /* vgetexpph */
+    { { 0x43 }, 2, T, R, pfx_66, W0, LIG }, /* vgetexpsh */
+    { { 0x4c }, 2, T, R, pfx_66, W0, Ln }, /* vrcpph */
+    { { 0x4d }, 2, T, R, pfx_66, W0, LIG }, /* vrcpsh */
+    { { 0x4e }, 2, T, R, pfx_66, W0, Ln }, /* vrsqrtph */
+    { { 0x4f }, 2, T, R, pfx_66, W0, LIG }, /* vrsqrtsh */
     { { 0x96 }, 2, T, R, pfx_66, W0, Ln }, /* vfmaddsub132ph */
     { { 0x97 }, 2, T, R, pfx_66, W0, Ln }, /* vfmsubadd132ph */
     { { 0x98 }, 2, T, R, pfx_66, W0, Ln }, /* vfmadd132ph */
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -358,7 +358,7 @@ static const struct ext0f38_table {
     [0x2a] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
     [0x2b] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0x2c] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
-    [0x2d] = { .simd_size = simd_packed_fp, .d8s = d8s_dq },
+    [0x2d] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0x2e ... 0x2f] = { .simd_size = simd_packed_fp, .to_mem = 1 },
     [0x30] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_2 },
     [0x31] = { .simd_size = simd_other, .two_op = 1, .d8s = d8s_vl_by_4 },
@@ -909,8 +909,8 @@ decode_0f38(struct x86_emulate_state *s,
         ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
         break;
 
-    case X86EMUL_OPC_EVEX_66(0, 0x2d): /* vscalefs{s,d} */
-        s->simd_size = simd_scalar_vexw;
+    case X86EMUL_OPC_VEX_66(0, 0x2d): /* vmaskmovpd */
+        s->simd_size = simd_packed_fp;
         break;
 
     case X86EMUL_OPC_EVEX_66(0, 0x7a): /* vpbroadcastb */
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7778,6 +7778,8 @@ x86_emulate(
         generate_exception_if(evex.w, EXC_UD);
         goto avx512f_all_fp;
 
+    case X86EMUL_OPC_EVEX_66(6, 0x2c): /* vscalefph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x42): /* vgetexpph [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x96): /* vfmaddsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x97): /* vfmsubadd132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x98): /* vfmadd132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -7802,6 +7804,8 @@ x86_emulate(
             avx512_vlen_check(false);
         goto simd_zmm;
 
+    case X86EMUL_OPC_EVEX_66(6, 0x2d): /* vscalefsh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x43): /* vgetexpsh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x99): /* vfmadd132sh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x9b): /* vfmsub132sh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x9d): /* vfnmadd132sh xmm/m16,xmm,xmm{k} */
@@ -7821,6 +7825,19 @@ x86_emulate(
             avx512_vlen_check(true);
         goto simd_zmm;
 
+    case X86EMUL_OPC_EVEX_66(6, 0x4c): /* vrcpph [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x4e): /* vrsqrtph [xyz]mm/mem,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        goto avx512f_no_sae;
+
+    case X86EMUL_OPC_EVEX_66(6, 0x4d): /* vrcpsh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_66(6, 0x4f): /* vrsqrtsh xmm/m16,xmm,xmm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w || evex.brs, EXC_UD);
+        avx512_vlen_check(true);
+        goto simd_zmm;
+
     case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 14:59:07 2023
Message-ID: <25c100d7-fae7-5aa7-1d6e-3c06774a33f6@suse.com>
Date: Mon, 3 Apr 2023 16:58:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH v2 06/10] x86emul: handle AVX512-FP16 complex multiplication
 insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
In-Reply-To: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
X-ClientProxiedBy: FR3P281CA0055.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8760:EE_
X-MS-Office365-Filtering-Correlation-Id: 15772e1f-0528-4b4b-fe97-08db3453f0c7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:

Aspects to consider are that these insns have 32-bit element size (pairs of
FP16 values) and that there are restrictions on which registers are valid to
use.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -614,12 +614,18 @@ static const struct test avx512_fp16_all
     INSN(comish,          , map5, 2f,    el, fp16, el),
     INSN(divph,           , map5, 5e,    vl, fp16, vl),
     INSN(divsh,         f3, map5, 5e,    el, fp16, el),
+    INSNX(fcmaddcph,    f2, map6, 56, 1, vl,    d, vl),
+    INSNX(fcmaddcsh,    f2, map6, 57, 1, el,    d, el),
+    INSNX(fcmulcph,     f2, map6, d6, 1, vl,    d, vl),
+    INSNX(fcmulcsh,     f2, map6, d7, 1, el,    d, el),
     INSN(fmadd132ph,    66, map6, 98,    vl, fp16, vl),
     INSN(fmadd132sh,    66, map6, 99,    el, fp16, el),
     INSN(fmadd213ph,    66, map6, a8,    vl, fp16, vl),
     INSN(fmadd213sh,    66, map6, a9,    el, fp16, el),
     INSN(fmadd231ph,    66, map6, b8,    vl, fp16, vl),
     INSN(fmadd231sh,    66, map6, b9,    el, fp16, el),
+    INSNX(fmaddcph,     f3, map6, 56, 1, vl,    d, vl),
+    INSNX(fmaddcsh,     f3, map6, 57, 1, el,    d, el),
     INSN(fmaddsub132ph, 66, map6, 96,    vl, fp16, vl),
     INSN(fmaddsub213ph, 66, map6, a6,    vl, fp16, vl),
     INSN(fmaddsub231ph, 66, map6, b6,    vl, fp16, vl),
@@ -632,6 +638,8 @@ static const struct test avx512_fp16_all
     INSN(fmsubadd132ph, 66, map6, 97,    vl, fp16, vl),
     INSN(fmsubadd213ph, 66, map6, a7,    vl, fp16, vl),
     INSN(fmsubadd231ph, 66, map6, b7,    vl, fp16, vl),
+    INSNX(fmulcph,      f3, map6, d6, 1, vl,    d, vl),
+    INSNX(fmulcsh,      f3, map6, d7, 1, el,    d, el),
     INSN(fnmadd132ph,   66, map6, 9c,    vl, fp16, vl),
     INSN(fnmadd132sh,   66, map6, 9d,    el, fp16, el),
     INSN(fnmadd213ph,   66, map6, ac,    vl, fp16, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2058,6 +2058,10 @@ static const struct evex {
     { { 0x4d }, 2, T, R, pfx_66, W0, LIG }, /* vrcpsh */
     { { 0x4e }, 2, T, R, pfx_66, W0, Ln }, /* vrsqrtph */
     { { 0x4f }, 2, T, R, pfx_66, W0, LIG }, /* vrsqrtsh */
+    { { 0x56 }, 2, T, R, pfx_f3, W0, Ln }, /* vfmaddcph */
+    { { 0x56 }, 2, T, R, pfx_f2, W0, Ln }, /* vfcmaddcph */
+    { { 0x57 }, 2, T, R, pfx_f3, W0, LIG }, /* vfmaddcsh */
+    { { 0x57 }, 2, T, R, pfx_f2, W0, LIG }, /* vfcmaddcsh */
     { { 0x96 }, 2, T, R, pfx_66, W0, Ln }, /* vfmaddsub132ph */
     { { 0x97 }, 2, T, R, pfx_66, W0, Ln }, /* vfmsubadd132ph */
     { { 0x98 }, 2, T, R, pfx_66, W0, Ln }, /* vfmadd132ph */
@@ -2088,6 +2092,10 @@ static const struct evex {
     { { 0xbd }, 2, T, R, pfx_66, W0, LIG }, /* vfnmadd231sh */
     { { 0xbe }, 2, T, R, pfx_66, W0, Ln }, /* vfnmsub231ph */
     { { 0xbf }, 2, T, R, pfx_66, W0, LIG }, /* vfnmsub231sh */
+    { { 0xd6 }, 2, T, R, pfx_f3, W0, Ln }, /* vfmulcph */
+    { { 0xd6 }, 2, T, R, pfx_f2, W0, Ln }, /* vfcmulcph */
+    { { 0xd7 }, 2, T, R, pfx_f3, W0, LIG }, /* vfmulcsh */
+    { { 0xd7 }, 2, T, R, pfx_f2, W0, LIG }, /* vfcmulcsh */
 };
 
 static const struct {
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -379,6 +379,8 @@ static const struct ext0f38_table {
     [0x4f] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0x50 ... 0x53] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0x54 ... 0x55] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_vl },
+    [0x56] = { .simd_size = simd_other, .d8s = d8s_vl },
+    [0x57] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0x58] = { .simd_size = simd_other, .two_op = 1, .d8s = 2 },
     [0x59] = { .simd_size = simd_other, .two_op = 1, .d8s = 3 },
     [0x5a] = { .simd_size = simd_128, .two_op = 1, .d8s = 4 },
@@ -441,6 +443,8 @@ static const struct ext0f38_table {
     [0xcc] = { .simd_size = simd_packed_fp, .two_op = 1, .d8s = d8s_vl },
     [0xcd] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0xcf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0xd6] = { .simd_size = simd_other, .d8s = d8s_vl },
+    [0xd7] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0xdb] = { .simd_size = simd_packed_int, .two_op = 1 },
     [0xdc ... 0xdf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0xf0] = { .two_op = 1 },
@@ -1515,6 +1519,10 @@ int x86emul_decode(struct x86_emulate_st
                 if ( s->evex.pfx == vex_66 )
                     s->fp16 = true;
                 break;
+
+            case 0x56: case 0x57: /* vf{,c}maddc{p,s}h */
+            case 0xd6: case 0xd7: /* vf{,c}mulc{p,s}h */
+                break;
             }
 
             disp8scale = decode_disp8scale(ext0f38_table[b].d8s, s);
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7838,6 +7838,34 @@ x86_emulate(
         avx512_vlen_check(true);
         goto simd_zmm;
 
+    case X86EMUL_OPC_EVEX_F3(6, 0x56): /* vfmaddcph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F2(6, 0x56): /* vfcmaddcph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F3(6, 0xd6): /* vfmulcph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F2(6, 0xd6): /* vfcmulcph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
+        op_bytes = 16 << evex.lr;
+        /* fall through */
+    case X86EMUL_OPC_EVEX_F3(6, 0x57): /* vfmaddcsh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_F2(6, 0x57): /* vfcmaddcsh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_F3(6, 0xd7): /* vfmulcsh xmm/m16,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX_F2(6, 0xd7): /* vfcmulcsh xmm/m16,xmm,xmm{k} */
+    {
+        unsigned int src1 = ~evex.reg;
+
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w || ((b & 1) && ea.type != OP_REG && evex.brs),
+                              EXC_UD);
+        if ( mode_64bit() )
+            src1 = (src1 & 0xf) | (!evex.RX << 4);
+        else
+            src1 &= 7;
+        generate_exception_if(modrm_reg == src1 ||
+                              (ea.type != OP_MEM && modrm_reg == modrm_rm),
+                              EXC_UD);
+        if ( ea.type != OP_REG || (b & 1) || !evex.brs )
+            avx512_vlen_check(!(b & 1));
+        goto simd_zmm;
+    }
+
     case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 14:59:33 2023
Message-ID: <aedc8c13-c8c6-1752-0b1e-67a8a2e1b38f@suse.com>
Date: Mon, 3 Apr 2023 16:59:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH v2 07/10] x86emul: handle AVX512-FP16 conversion to/from
 (packed) int16 insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
In-Reply-To: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

These are the easiest of the conversions in that they have same-size source
and destination vectors. They differ from other conversion insns, though, in
that their opcodes have a different meaning in the 0F encoding space
({,V}H{ADD,SUB}P{S,D}), hence requiring a little bit of overriding.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -612,6 +612,12 @@ static const struct test avx512_fp16_all
     INSN(cmpph,           , 0f3a, c2,    vl, fp16, vl),
     INSN(cmpsh,         f3, 0f3a, c2,    el, fp16, el),
     INSN(comish,          , map5, 2f,    el, fp16, el),
+    INSN(cvtph2uw,        , map5, 7d,    vl, fp16, vl),
+    INSN(cvtph2w,       66, map5, 7d,    vl, fp16, vl),
+    INSN(cvttph2uw,       , map5, 7c,    vl, fp16, vl),
+    INSN(cvttph2w,      66, map5, 7c,    vl, fp16, vl),
+    INSN(cvtuw2ph,      f2, map5, 7d,    vl, fp16, vl),
+    INSN(cvtw2ph,       f3, map5, 7d,    vl, fp16, vl),
     INSN(divph,           , map5, 5e,    vl, fp16, vl),
     INSN(divsh,         f3, map5, 5e,    el, fp16, el),
     INSNX(fcmaddcph,    f2, map6, 56, 1, vl,    d, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2048,6 +2048,12 @@ static const struct evex {
     { { 0x5f }, 2, T, R, pfx_no, W0, Ln }, /* vmaxph */
     { { 0x5f }, 2, T, R, pfx_f3, W0, LIG }, /* vmaxsh */
     { { 0x6e }, 2, T, R, pfx_66, WIG, L0 }, /* vmovw */
+    { { 0x7c }, 2, T, R, pfx_no, W0, Ln }, /* vcvttph2uw */
+    { { 0x7c }, 2, T, R, pfx_66, W0, Ln }, /* vcvttph2w */
+    { { 0x7d }, 2, T, R, pfx_no, W0, Ln }, /* vcvtph2uw */
+    { { 0x7d }, 2, T, R, pfx_66, W0, Ln }, /* vcvtph2w */
+    { { 0x7d }, 2, T, R, pfx_f3, W0, Ln }, /* vcvtw2ph */
+    { { 0x7d }, 2, T, R, pfx_f2, W0, Ln }, /* vcvtuw2ph */
     { { 0x7e }, 2, T, W, pfx_66, WIG, L0 }, /* vmovw */
 }, evex_map6[] = {
     { { 0x2c }, 2, T, R, pfx_66, W0, Ln }, /* vscalefph */
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -259,7 +259,7 @@ static const struct twobyte_table {
     [0x78 ... 0x79] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_vl },
     [0x7a] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
     [0x7b] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, d8s_dq64 },
-    [0x7c ... 0x7d] = { DstImplicit|SrcMem|ModRM, simd_other },
+    [0x7c ... 0x7d] = { DstImplicit|SrcMem|ModRM, simd_other, d8s_vl },
     [0x7e] = { DstMem|SrcImplicit|ModRM|Mov, simd_none, d8s_dq64 },
     [0x7f] = { DstMem|SrcImplicit|ModRM|Mov, simd_packed_int, d8s_vl },
     [0x80 ... 0x8f] = { DstImplicit|SrcImm },
@@ -1496,6 +1496,12 @@ int x86emul_decode(struct x86_emulate_st
                     s->fp16 = true;
                 s->simd_size = simd_none;
                 break;
+
+            case 0x7c: /* vcvttph2{,u}w */
+            case 0x7d: /* vcvtph2{,u}w / vcvt{,u}w2ph */
+                d = DstReg | SrcMem | TwoOp;
+                s->fp16 = true;
+                break;
             }
 
             /* Like above re-use twobyte_table[] here. */
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7778,6 +7778,14 @@ x86_emulate(
         generate_exception_if(evex.w, EXC_UD);
         goto avx512f_all_fp;
 
+    case X86EMUL_OPC_EVEX   (5, 0x7c): /* vcvttph2uw [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(5, 0x7c): /* vcvttph2w [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX   (5, 0x7d): /* vcvtph2uw [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(5, 0x7d): /* vcvtph2w [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F3(5, 0x7d): /* vcvtw2ph [xyz]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F2(5, 0x7d): /* vcvtuw2ph [xyz]mm/mem,[xyz]mm{k} */
+        op_bytes = 16 << evex.lr;
+        /* fall through */
     case X86EMUL_OPC_EVEX_66(6, 0x2c): /* vscalefph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x42): /* vgetexpph [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x96): /* vfmaddsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 14:59:46 2023
Message-ID: <607132c8-2693-667d-f6a8-09f362cc7bce@suse.com>
Date: Mon, 3 Apr 2023 16:59:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH v2 08/10] x86emul: handle AVX512-FP16 floating point
 conversion insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
In-Reply-To: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -612,8 +612,16 @@ static const struct test avx512_fp16_all
     INSN(cmpph,           , 0f3a, c2,    vl, fp16, vl),
     INSN(cmpsh,         f3, 0f3a, c2,    el, fp16, el),
     INSN(comish,          , map5, 2f,    el, fp16, el),
+    INSN(cvtpd2ph,      66, map5, 5a,    vl,    q, vl),
+    INSN(cvtph2pd,        , map5, 5a,  vl_4, fp16, vl),
+    INSN(cvtph2psx,     66, map6, 13,  vl_2, fp16, vl),
     INSN(cvtph2uw,        , map5, 7d,    vl, fp16, vl),
     INSN(cvtph2w,       66, map5, 7d,    vl, fp16, vl),
+    INSN(cvtps2phx,     66, map5, 1d,    vl,    d, vl),
+    INSN(cvtsd2sh,      f2, map5, 5a,    el,    q, el),
+    INSN(cvtsh2sd,      f3, map5, 5a,    el, fp16, el),
+    INSN(cvtsh2ss,        , map6, 13,    el, fp16, el),
+    INSN(cvtss2sh,        , map5, 1d,    el,    d, el),
     INSN(cvttph2uw,       , map5, 7c,    vl, fp16, vl),
     INSN(cvttph2w,      66, map5, 7c,    vl, fp16, vl),
     INSN(cvtuw2ph,      f2, map5, 7d,    vl, fp16, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2031,6 +2031,8 @@ static const struct evex {
 }, evex_map5[] = {
     { { 0x10 }, 2, T, R, pfx_f3, W0, LIG }, /* vmovsh */
     { { 0x11 }, 2, T, W, pfx_f3, W0, LIG }, /* vmovsh */
+    { { 0x1d }, 2, T, R, pfx_66, W0, Ln }, /* vcvtps2phx */
+    { { 0x1d }, 2, T, R, pfx_no, W0, LIG }, /* vcvtss2sh */
     { { 0x2e }, 2, T, R, pfx_no, W0, LIG }, /* vucomish */
     { { 0x2f }, 2, T, R, pfx_no, W0, LIG }, /* vcomish */
     { { 0x51 }, 2, T, R, pfx_no, W0, Ln }, /* vsqrtph */
@@ -2039,6 +2041,10 @@ static const struct evex {
     { { 0x58 }, 2, T, R, pfx_f3, W0, LIG }, /* vaddsh */
     { { 0x59 }, 2, T, R, pfx_no, W0, Ln }, /* vmulph */
     { { 0x59 }, 2, T, R, pfx_f3, W0, LIG }, /* vmulsh */
+    { { 0x5a }, 2, T, R, pfx_no, W0, Ln }, /* vcvtph2pd */
+    { { 0x5a }, 2, T, R, pfx_66, W1, Ln }, /* vcvtpd2ph */
+    { { 0x5a }, 2, T, R, pfx_f3, W0, LIG }, /* vcvtsh2sd */
+    { { 0x5a }, 2, T, R, pfx_f2, W1, LIG }, /* vcvtsd2sh */
     { { 0x5c }, 2, T, R, pfx_no, W0, Ln }, /* vsubph */
     { { 0x5c }, 2, T, R, pfx_f3, W0, LIG }, /* vsubsh */
     { { 0x5d }, 2, T, R, pfx_no, W0, Ln }, /* vminph */
@@ -2056,6 +2062,8 @@ static const struct evex {
     { { 0x7d }, 2, T, R, pfx_f2, W0, Ln }, /* vcvtuw2ph */
     { { 0x7e }, 2, T, W, pfx_66, WIG, L0 }, /* vmovw */
 }, evex_map6[] = {
+    { { 0x13 }, 2, T, R, pfx_66, W0, Ln }, /* vcvtph2psx */
+    { { 0x13 }, 2, T, R, pfx_no, W0, LIG }, /* vcvtsh2ss */
     { { 0x2c }, 2, T, R, pfx_66, W0, Ln }, /* vscalefph */
     { { 0x2d }, 2, T, R, pfx_66, W0, LIG }, /* vscalefsh */
     { { 0x42 }, 2, T, R, pfx_66, W0, Ln }, /* vgetexpph */
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -224,7 +224,9 @@ static const struct twobyte_table {
     [0x14 ... 0x15] = { DstImplicit|SrcMem|ModRM, simd_packed_fp, d8s_vl },
     [0x16] = { DstImplicit|SrcMem|ModRM|Mov, simd_other, 3 },
     [0x17] = { DstMem|SrcImplicit|ModRM|Mov, simd_other, 3 },
-    [0x18 ... 0x1f] = { ImplicitOps|ModRM },
+    [0x18 ... 0x1c] = { ImplicitOps|ModRM },
+    [0x1d] = { ImplicitOps|ModRM, simd_none, d8s_vl },
+    [0x1e ... 0x1f] = { ImplicitOps|ModRM },
     [0x20 ... 0x21] = { DstMem|SrcImplicit|ModRM },
     [0x22 ... 0x23] = { DstImplicit|SrcMem|ModRM },
     [0x28] = { DstImplicit|SrcMem|ModRM|Mov, simd_packed_fp, d8s_vl },
@@ -1482,6 +1484,19 @@ int x86emul_decode(struct x86_emulate_st
                     s->fp16 = true;
                 break;
 
+            case 0x1d: /* vcvtps2phx / vcvtss2sh */
+                if ( s->evex.pfx & VEX_PREFIX_SCALAR_MASK )
+                    break;
+                d = DstReg | SrcMem;
+                if ( s->evex.pfx & VEX_PREFIX_DOUBLE_MASK )
+                {
+                    s->simd_size = simd_packed_fp;
+                    d |= TwoOp;
+                }
+                else
+                    s->simd_size = simd_scalar_vexw;
+                break;
+
             case 0x2e: case 0x2f: /* v{,u}comish */
                 if ( !s->evex.pfx )
                     s->fp16 = true;
@@ -1506,6 +1521,15 @@ int x86emul_decode(struct x86_emulate_st
 
             /* Like above re-use twobyte_table[] here. */
             disp8scale = decode_disp8scale(twobyte_table[b].d8s, s);
+
+            switch ( b )
+            {
+            case 0x5a: /* vcvtph2pd needs special casing */
+                if ( !s->evex.pfx && !s->evex.brs )
+                    disp8scale -= 2;
+                break;
+            }
+
             break;
 
         case ext_map6:
@@ -1526,6 +1550,17 @@ int x86emul_decode(struct x86_emulate_st
                     s->fp16 = true;
                 break;
 
+            case 0x13: /* vcvtph2psx / vcvtsh2ss */
+                if ( s->evex.pfx & VEX_PREFIX_SCALAR_MASK )
+                    break;
+                s->fp16 = true;
+                if ( !(s->evex.pfx & VEX_PREFIX_DOUBLE_MASK) )
+                {
+                    s->simd_size = simd_scalar_vexw;
+                    d &= ~TwoOp;
+                }
+                break;
+
             case 0x56: case 0x57: /* vf{,c}maddc{p,s}h */
             case 0xd6: case 0xd7: /* vf{,c}mulc{p,s}h */
                 break;
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7778,14 +7778,25 @@ x86_emulate(
         generate_exception_if(evex.w, EXC_UD);
         goto avx512f_all_fp;
 
+    CASE_SIMD_ALL_FP(_EVEX, 5, 0x5a):  /* vcvtp{h,d}2p{d,h} [xyz]mm/mem,[xyz]mm{k} */
+                                       /* vcvts{h,d}2s{d,h} xmm/mem,xmm,xmm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        if ( vex.pfx & VEX_PREFIX_SCALAR_MASK )
+            d &= ~TwoOp;
+        op_bytes = 2 << (((evex.pfx & VEX_PREFIX_SCALAR_MASK) ? 0 : 1 + evex.lr) +
+                         2 * evex.w);
+        goto avx512f_all_fp;
+
     case X86EMUL_OPC_EVEX   (5, 0x7c): /* vcvttph2uw [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(5, 0x7c): /* vcvttph2w [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX   (5, 0x7d): /* vcvtph2uw [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(5, 0x7d): /* vcvtph2w [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F3(5, 0x7d): /* vcvtw2ph [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F2(5, 0x7d): /* vcvtuw2ph [xyz]mm/mem,[xyz]mm{k} */
-        op_bytes = 16 << evex.lr;
+    case X86EMUL_OPC_EVEX_66(6, 0x13): /* vcvtph2psx [xy]mm/mem,[xyz]mm{k} */
+        op_bytes = 8 << ((ext == ext_map5) + evex.lr);
         /* fall through */
+    case X86EMUL_OPC_EVEX_66(5, 0x1d): /* vcvtps2phx [xyz]mm/mem,[xy]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x2c): /* vscalefph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x42): /* vgetexpph [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x96): /* vfmaddsub132ph [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -7812,6 +7823,8 @@ x86_emulate(
             avx512_vlen_check(false);
         goto simd_zmm;
 
+    case X86EMUL_OPC_EVEX(5, 0x1d):    /* vcvtss2sh xmm/mem,xmm,xmm{k} */
+    case X86EMUL_OPC_EVEX(6, 0x13):    /* vcvtsh2ss xmm/mem,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x2d): /* vscalefsh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x43): /* vgetexpsh xmm/m16,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(6, 0x99): /* vfmadd132sh xmm/m16,xmm,xmm{k} */
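As a cross-check of the new 0x5a case: the single op_bytes expression folds all four source widths (scalar fp16, scalar double, quarter-width fp16 vector, full-width double vector) into one shift. A small Python sketch reproduces it; the helper name src_bytes is illustrative, not from the Xen tree:

```python
# Source-operand width in bytes for the map5 0x5a conversion group,
# mirroring the patch's expression:
#   op_bytes = 2 << (((pfx & SCALAR) ? 0 : 1 + lr) + 2 * w)
VEX_PREFIX_SCALAR_MASK = 2  # set for the F3/F2 (scalar) prefixes

def src_bytes(pfx, lr, w):
    """pfx: 0=none, 1=0x66, 2=F3, 3=F2; lr: 0/1/2 = xmm/ymm/zmm; w: EVEX.W."""
    return 2 << ((0 if pfx & VEX_PREFIX_SCALAR_MASK else 1 + lr) + 2 * w)

print(src_bytes(2, 0, 0))  # vcvtsh2sd: reads one fp16 element (2 bytes)
print(src_bytes(3, 0, 1))  # vcvtsd2sh: reads one double element (8 bytes)
print(src_bytes(0, 2, 0))  # vcvtph2pd zmm: quarter-width fp16 source (16 bytes)
print(src_bytes(1, 2, 1))  # vcvtpd2ph zmm: full-width double source (64 bytes)
```

The scalar forms always read one element regardless of vector length, hence the `? 0 :` branch dropping the `1 + lr` term.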



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 15:00:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 15:00:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517506.802929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLfG-0002y8-Bq; Mon, 03 Apr 2023 15:00:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517506.802929; Mon, 03 Apr 2023 15:00:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjLfG-0002y1-8s; Mon, 03 Apr 2023 15:00:10 +0000
Received: by outflank-mailman (input) for mailman id 517506;
 Mon, 03 Apr 2023 15:00:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dypz=72=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjLfF-00025F-0y
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 15:00:09 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20621.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 39430082-d230-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 17:00:08 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8760.eurprd04.prod.outlook.com (2603:10a6:10:2e3::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 15:00:06 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.033; Mon, 3 Apr 2023
 15:00:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39430082-d230-11ed-85db-49a42c6b2330
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <283bef89-b29a-3cd3-56db-a34dc4f786ab@suse.com>
Date: Mon, 3 Apr 2023 17:00:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH v2 09/10] x86emul: handle AVX512-FP16 conversion to/from
 (packed) int{32,64} insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
In-Reply-To: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0242.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:af::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8760:EE_
X-MS-Office365-Filtering-Correlation-Id: 86295e5c-9d05-4e11-d2e6-08db34541cb7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 86295e5c-9d05-4e11-d2e6-08db34541cb7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 15:00:06.8062
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: iqNOpu3XvHJjWnAGvsYlCgKKbuFjNmm+BVpulfyJ2cixIFA7b4GEb9W3VBNqJhkbr93z+xKum8DNv95lhr4xOg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8760

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -612,18 +612,36 @@ static const struct test avx512_fp16_all
     INSN(cmpph,           , 0f3a, c2,    vl, fp16, vl),
     INSN(cmpsh,         f3, 0f3a, c2,    el, fp16, el),
     INSN(comish,          , map5, 2f,    el, fp16, el),
+    INSN(cvtdq2ph,        , map5, 5b,    vl,    d, vl),
     INSN(cvtpd2ph,      66, map5, 5a,    vl,    q, vl),
+    INSN(cvtph2dq,      66, map5, 5b,  vl_2, fp16, vl),
     INSN(cvtph2pd,        , map5, 5a,  vl_4, fp16, vl),
     INSN(cvtph2psx,     66, map6, 13,  vl_2, fp16, vl),
+    INSN(cvtph2qq,      66, map5, 7b,  vl_4, fp16, vl),
+    INSN(cvtph2udq,       , map5, 79,  vl_2, fp16, vl),
+    INSN(cvtph2uqq,     66, map5, 79,  vl_4, fp16, vl),
     INSN(cvtph2uw,        , map5, 7d,    vl, fp16, vl),
     INSN(cvtph2w,       66, map5, 7d,    vl, fp16, vl),
     INSN(cvtps2phx,     66, map5, 1d,    vl,    d, vl),
+    INSN(cvtqq2ph,        , map5, 5b,    vl,    q, vl),
     INSN(cvtsd2sh,      f2, map5, 5a,    el,    q, el),
     INSN(cvtsh2sd,      f3, map5, 5a,    el, fp16, el),
+    INSN(cvtsh2si,      f3, map5, 2d,    el, fp16, el),
     INSN(cvtsh2ss,        , map6, 13,    el, fp16, el),
+    INSN(cvtsh2usi,     f3, map5, 79,    el, fp16, el),
+    INSN(cvtsi2sh,      f3, map5, 2a,    el, dq64, el),
     INSN(cvtss2sh,        , map5, 1d,    el,    d, el),
+    INSN(cvttph2dq,     f3, map5, 5b,  vl_2, fp16, vl),
+    INSN(cvttph2qq,     66, map5, 7a,  vl_4, fp16, vl),
+    INSN(cvttph2udq,      , map5, 78,  vl_2, fp16, vl),
+    INSN(cvttph2uqq,    66, map5, 78,  vl_4, fp16, vl),
     INSN(cvttph2uw,       , map5, 7c,    vl, fp16, vl),
     INSN(cvttph2w,      66, map5, 7c,    vl, fp16, vl),
+    INSN(cvttsh2si,     f3, map5, 2c,    el, fp16, el),
+    INSN(cvttsh2usi,    f3, map5, 78,    el, fp16, el),
+    INSN(cvtudq2ph,     f2, map5, 7a,    vl,    d, vl),
+    INSN(cvtuqq2ph,     f2, map5, 7a,    vl,    q, vl),
+    INSN(cvtusi2sh,     f3, map5, 7b,    el, dq64, el),
     INSN(cvtuw2ph,      f2, map5, 7d,    vl, fp16, vl),
     INSN(cvtw2ph,       f3, map5, 7d,    vl, fp16, vl),
     INSN(divph,           , map5, 5e,    vl, fp16, vl),
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2033,6 +2033,9 @@ static const struct evex {
     { { 0x11 }, 2, T, W, pfx_f3, W0, LIG }, /* vmovsh */
     { { 0x1d }, 2, T, R, pfx_66, W0, Ln }, /* vcvtps2phx */
     { { 0x1d }, 2, T, R, pfx_no, W0, LIG }, /* vcvtss2sh */
+    { { 0x2a }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvtsi2sh */
+    { { 0x2c }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvttsh2si */
+    { { 0x2d }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvtsh2si */
     { { 0x2e }, 2, T, R, pfx_no, W0, LIG }, /* vucomish */
     { { 0x2f }, 2, T, R, pfx_no, W0, LIG }, /* vcomish */
     { { 0x51 }, 2, T, R, pfx_no, W0, Ln }, /* vsqrtph */
@@ -2045,6 +2048,10 @@ static const struct evex {
     { { 0x5a }, 2, T, R, pfx_66, W1, Ln }, /* vcvtpd2ph */
     { { 0x5a }, 2, T, R, pfx_f3, W0, LIG }, /* vcvtsh2sd */
     { { 0x5a }, 2, T, R, pfx_f2, W1, LIG }, /* vcvtsd2sh */
+    { { 0x5b }, 2, T, R, pfx_no, W0, Ln }, /* vcvtdq2ph */
+    { { 0x5b }, 2, T, R, pfx_no, W1, Ln }, /* vcvtqq2ph */
+    { { 0x5b }, 2, T, R, pfx_66, W0, Ln }, /* vcvtph2dq */
+    { { 0x5b }, 2, T, R, pfx_f3, W0, Ln }, /* vcvttph2dq */
     { { 0x5c }, 2, T, R, pfx_no, W0, Ln }, /* vsubph */
     { { 0x5c }, 2, T, R, pfx_f3, W0, LIG }, /* vsubsh */
     { { 0x5d }, 2, T, R, pfx_no, W0, Ln }, /* vminph */
@@ -2054,6 +2061,17 @@ static const struct evex {
     { { 0x5f }, 2, T, R, pfx_no, W0, Ln }, /* vmaxph */
     { { 0x5f }, 2, T, R, pfx_f3, W0, LIG }, /* vmaxsh */
     { { 0x6e }, 2, T, R, pfx_66, WIG, L0 }, /* vmovw */
+    { { 0x78 }, 2, T, R, pfx_no, W0, Ln }, /* vcvttph2udq */
+    { { 0x78 }, 2, T, R, pfx_66, W0, Ln }, /* vcvttph2uqq */
+    { { 0x78 }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvttsh2usi */
+    { { 0x79 }, 2, T, R, pfx_no, W0, Ln }, /* vcvtph2udq */
+    { { 0x79 }, 2, T, R, pfx_66, W0, Ln }, /* vcvtph2uqq */
+    { { 0x79 }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvtsh2usi */
+    { { 0x7a }, 2, T, R, pfx_66, W0, Ln }, /* vcvttph2qq */
+    { { 0x7a }, 2, T, R, pfx_f2, W0, Ln }, /* vcvtudq2ph */
+    { { 0x7a }, 2, T, R, pfx_f2, W1, Ln }, /* vcvtuqq2ph */
+    { { 0x7b }, 2, T, R, pfx_66, W0, Ln }, /* vcvtph2qq */
+    { { 0x7b }, 2, T, R, pfx_f3, Wn, LIG }, /* vcvtusi2sh */
     { { 0x7c }, 2, T, R, pfx_no, W0, Ln }, /* vcvttph2uw */
     { { 0x7c }, 2, T, R, pfx_66, W0, Ln }, /* vcvttph2w */
     { { 0x7d }, 2, T, R, pfx_no, W0, Ln }, /* vcvtph2uw */
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -1497,12 +1497,25 @@ int x86emul_decode(struct x86_emulate_st
                     s->simd_size = simd_scalar_vexw;
                 break;
 
+            case 0x2a: /* vcvtsi2sh */
+                break;
+
+            case 0x2c: case 0x2d: /* vcvt{,t}sh2si */
+                if ( s->evex.pfx == vex_f3 )
+                    s->fp16 = true;
+                break;
+
             case 0x2e: case 0x2f: /* v{,u}comish */
                 if ( !s->evex.pfx )
                     s->fp16 = true;
                 s->simd_size = simd_none;
                 break;
 
+            case 0x5b: /* vcvt{d,q}q2ph, vcvt{,t}ph2dq */
+                if ( s->evex.pfx && s->evex.pfx != vex_f2 )
+                    s->fp16 = true;
+                break;
+
             case 0x6e: /* vmovw r/m16, xmm */
                 d = (d & ~SrcMask) | SrcMem16;
                 /* fall through */
@@ -1512,6 +1525,17 @@ int x86emul_decode(struct x86_emulate_st
                 s->simd_size = simd_none;
                 break;
 
+            case 0x78: case 0x79: /* vcvt{,t}ph2u{d,q}q, vcvt{,t}sh2usi */
+                if ( s->evex.pfx != vex_f2 )
+                    s->fp16 = true;
+                break;
+
+            case 0x7a: /* vcvttph2qq, vcvtu{d,q}q2ph */
+            case 0x7b: /* vcvtph2qq, vcvtusi2sh */
+                if ( s->evex.pfx == vex_66 )
+                    s->fp16 = true;
+                break;
+
             case 0x7c: /* vcvttph2{,u}w */
             case 0x7d: /* vcvtph2{,u}w / vcvt{,u}w2ph */
                 d = DstReg | SrcMem | TwoOp;
@@ -1524,10 +1548,34 @@ int x86emul_decode(struct x86_emulate_st
 
             switch ( b )
             {
+            case 0x78:
+            case 0x79:
+                /* vcvt{,t}ph2u{d,q}q need special casing */
+                if ( s->evex.pfx <= vex_66 )
+                {
+                    if ( !s->evex.brs )
+                        disp8scale -= 1 + (s->evex.pfx == vex_66);
+                    break;
+                }
+                /* vcvt{,t}sh2usi needs special casing: fall through */
+            case 0x2c: case 0x2d: /* vcvt{,t}sh2si need special casing */
+                disp8scale = 1;
+                break;
+
             case 0x5a: /* vcvtph2pd needs special casing */
                 if ( !s->evex.pfx && !s->evex.brs )
                     disp8scale -= 2;
                 break;
+
+            case 0x5b: /* vcvt{,t}ph2dq need special casing */
+                if ( s->evex.pfx && !s->evex.brs )
+                    --disp8scale;
+                break;
+
+            case 0x7a: case 0x7b: /* vcvt{,t}ph2qq need special casing */
+                if ( s->evex.pfx == vex_66 )
+                    disp8scale = s->evex.brs ? 1 : 2 + s->evex.lr;
+                break;
             }
 
             break;
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -3577,6 +3577,12 @@ x86_emulate(
         state->simd_size = simd_none;
         goto simd_0f_rm;
 
+#ifndef X86EMUL_NO_SIMD
+
+    case X86EMUL_OPC_EVEX_F3(5, 0x2a):      /* vcvtsi2sh r/m,xmm,xmm */
+    case X86EMUL_OPC_EVEX_F3(5, 0x7b):      /* vcvtusi2sh r/m,xmm,xmm */
+        host_and_vcpu_must_have(avx512_fp16);
+        /* fall through */
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x2a): /* vcvtsi2s{s,d} r/m,xmm,xmm */
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x7b): /* vcvtusi2s{s,d} r/m,xmm,xmm */
         generate_exception_if(evex.opmsk || (ea.type != OP_REG && evex.brs),
@@ -3655,7 +3661,9 @@ x86_emulate(
             opc[1] = 0x01;
 
             rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp,
-                           vex.pfx & VEX_PREFIX_DOUBLE_MASK ? 8 : 4, ctxt);
+                           vex.pfx & VEX_PREFIX_DOUBLE_MASK
+                           ? 8 : 2 << !state->fp16,
+                           ctxt);
             if ( rc != X86EMUL_OKAY )
                 goto done;
         }
@@ -3685,6 +3693,12 @@ x86_emulate(
         state->simd_size = simd_none;
         break;
 
+    case X86EMUL_OPC_EVEX_F3(5, 0x2c):      /* vcvttsh2si xmm/mem,reg */
+    case X86EMUL_OPC_EVEX_F3(5, 0x2d):      /* vcvtsh2si xmm/mem,reg */
+    case X86EMUL_OPC_EVEX_F3(5, 0x78):      /* vcvttsh2usi xmm/mem,reg */
+    case X86EMUL_OPC_EVEX_F3(5, 0x79):      /* vcvtsh2usi xmm/mem,reg */
+        host_and_vcpu_must_have(avx512_fp16);
+        /* fall through */
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x2c): /* vcvtts{s,d}2si xmm/mem,reg */
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x2d): /* vcvts{s,d}2si xmm/mem,reg */
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x78): /* vcvtts{s,d}2usi xmm/mem,reg */
@@ -3756,8 +3770,6 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
-#ifndef X86EMUL_NO_SIMD
-
     case X86EMUL_OPC_EVEX(5, 0x2e): /* vucomish xmm/m16,xmm */
     case X86EMUL_OPC_EVEX(5, 0x2f): /* vcomish xmm/m16,xmm */
         host_and_vcpu_must_have(avx512_fp16);
@@ -7787,6 +7799,38 @@ x86_emulate(
                          2 * evex.w);
         goto avx512f_all_fp;
 
+    case X86EMUL_OPC_EVEX   (5, 0x5b): /* vcvtdq2ph [xyz]mm/mem,[xy]mm{k} */
+                                       /* vcvtqq2ph [xyz]mm/mem,xmm{k} */
+    case X86EMUL_OPC_EVEX_F2(5, 0x7a): /* vcvtudq2ph [xyz]mm/mem,[xy]mm{k} */
+                                       /* vcvtuqq2ph [xyz]mm/mem,xmm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(false);
+        op_bytes = 16 << evex.lr;
+        goto simd_zmm;
+
+    case X86EMUL_OPC_EVEX_66(5, 0x5b): /* vcvtph2dq [xy]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_F3(5, 0x5b): /* vcvttph2dq [xy]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX   (5, 0x78): /* vcvttph2udq [xy]mm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX   (5, 0x79): /* vcvtph2udq [xy]mm/mem,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(false);
+        op_bytes = 8 << evex.lr;
+        goto simd_zmm;
+
+    case X86EMUL_OPC_EVEX_66(5, 0x78): /* vcvttph2uqq xmm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(5, 0x79): /* vcvtph2uqq xmm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(5, 0x7a): /* vcvttph2qq xmm/mem,[xyz]mm{k} */
+    case X86EMUL_OPC_EVEX_66(5, 0x7b): /* vcvtph2qq xmm/mem,[xyz]mm{k} */
+        host_and_vcpu_must_have(avx512_fp16);
+        generate_exception_if(evex.w, EXC_UD);
+        if ( ea.type != OP_REG || !evex.brs )
+            avx512_vlen_check(false);
+        op_bytes = 4 << evex.lr;
+        goto simd_zmm;
+
     case X86EMUL_OPC_EVEX   (5, 0x7c): /* vcvttph2uw [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(5, 0x7c): /* vcvttph2w [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX   (5, 0x7d): /* vcvtph2uw [xyz]mm/mem,[xyz]mm{k} */
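For the 66-prefix 0x7a/0x7b forms, the memory operand is only vl/4 wide (fp16 elements widening to qwords), so decode.c sets the compressed-displacement scale directly instead of deriving it from d8s_vl. A sketch of the values it produces; the helper name is hypothetical:

```python
# EVEX compressed-displacement (disp8 * N) scale for vcvt{,t}ph2qq,
# mirroring the 0x7a/0x7b special case in decode.c: the memory operand is a
# quarter-width fp16 vector, or a single 2-byte element under broadcast.

def disp8scale_ph2qq(brs, lr):
    """brs: EVEX.b (embedded broadcast); lr: 0/1/2 = xmm/ymm/zmm."""
    return 1 if brs else 2 + lr

print(1 << disp8scale_ph2qq(False, 0))  # xmm form: disp8 scaled by 4 bytes
print(1 << disp8scale_ph2qq(False, 2))  # zmm form: disp8 scaled by 16 bytes
print(1 << disp8scale_ph2qq(True, 2))   # broadcast: one fp16 element, 2 bytes
```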



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 15:00:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 15:00:53 +0000
Message-ID: <bf8fa747-d2df-8340-5f7c-6b29ef3bb543@suse.com>
Date: Mon, 3 Apr 2023 17:00:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH v2 10/10] x86emul: AVX512-FP16 testing
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
In-Reply-To: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Naming of some of the builtins isn't fully consistent with that of pre-
existing ones, so there's a need for a new BR2() wrapper macro.

With the tests providing some proof of the emulator code functioning
properly, also enable use of the feature by guests, as there's no other
infrastructure involved in enabling this ISA extension.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Add CHANGELOG.md entry.
---
This is Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> under the
condition that public/arch-x86/cpufeatureset.h use 'a', not 'A'. But I
had called that choice into question, so far without further response.
---
SDE: -spr or -future
---
In the course of putting together the FMA part of the test I had noticed
that we no longer tested scalar FMA insns (FMA, FMA4, AVX512F), due to
gcc (then) no longer recognizing the pattern in version 9 or later. See
gcc bug 105965, which apparently has already gained a fix for version
13. (Using intrinsics for scalar operations is prohibitive, as they have
full-vector parameters.) I'm taking this as one of several reasons why
here I'm not even trying to make the compiler spot the complex FMA
patterns, using a mixture of intrinsics and inline assembly instead.

--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -14,6 +14,7 @@ The format is based on [Keep a Changelog
    - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
      wide impact of a guest misusing atomic instructions.
  - xl/libxl can customize SMBIOS strings for HVM guests.
+ - x86 AVX512-FP16
 
 ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
 
--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -16,7 +16,7 @@ vpath %.c $(XEN_ROOT)/xen/lib/x86
 
 CFLAGS += $(CFLAGS_xeninclude)
 
-SIMD := 3dnow sse sse2 sse4 avx avx2 xop avx512f avx512bw avx512dq avx512er avx512vbmi
+SIMD := 3dnow sse sse2 sse4 avx avx2 xop avx512f avx512bw avx512dq avx512er avx512vbmi avx512fp16
 FMA := fma4 fma
 SG := avx2-sg avx512f-sg avx512vl-sg
 AES := ssse3-aes avx-aes avx2-vaes avx512bw-vaes
@@ -91,6 +91,9 @@ avx512vbmi-vecs := $(avx512bw-vecs)
 avx512vbmi-ints := $(avx512bw-ints)
 avx512vbmi-flts := $(avx512bw-flts)
 avx512vbmi2-vecs := $(avx512bw-vecs)
+avx512fp16-vecs := $(avx512bw-vecs)
+avx512fp16-ints :=
+avx512fp16-flts := 2
 
 avx512f-opmask-vecs := 2
 avx512dq-opmask-vecs := 1 2
@@ -246,7 +249,7 @@ $(addsuffix .c,$(GF)):
 
 $(addsuffix .h,$(SIMD) $(FMA) $(SG) $(AES) $(CLMUL) $(SHA) $(GF)): simd.h
 
-xop.h avx512f.h: simd-fma.c
+xop.h avx512f.h avx512fp16.h: simd-fma.c
 
 endif # 32-bit override
 
--- a/tools/tests/x86_emulator/simd.c
+++ b/tools/tests/x86_emulator/simd.c
@@ -20,6 +20,14 @@ ENTRY(simd_test);
     asm ( "vcmpsd $0, %1, %2, %0"  : "=k" (r_) : "m" (x_), "v" (y_) ); \
     r_ == 1; \
 })
+# elif VEC_SIZE == 2
+#  define eq(x, y) ({ \
+    _Float16 x_ = (x)[0]; \
+    _Float16 __attribute__((vector_size(16))) y_ = { (y)[0] }; \
+    unsigned int r_; \
+    asm ( "vcmpsh $0, %1, %2, %0"  : "=k" (r_) : "m" (x_), "v" (y_) ); \
+    r_ == 1; \
+})
 # elif FLOAT_SIZE == 4
 /*
  * gcc's (up to at least 8.2) __builtin_ia32_cmpps256_mask() has an anomaly in
@@ -31,6 +39,8 @@ ENTRY(simd_test);
 #  define eq(x, y) ((BR(cmpps, _mask, x, y, 0, -1) & ALL_TRUE) == ALL_TRUE)
 # elif FLOAT_SIZE == 8
 #  define eq(x, y) (BR(cmppd, _mask, x, y, 0, -1) == ALL_TRUE)
+# elif FLOAT_SIZE == 2
+#  define eq(x, y) (B(cmpph, _mask, x, y, 0, -1) == ALL_TRUE)
 # elif (INT_SIZE == 1 || UINT_SIZE == 1) && defined(__AVX512BW__)
 #  define eq(x, y) (B(pcmpeqb, _mask, (vqi_t)(x), (vqi_t)(y), -1) == ALL_TRUE)
 # elif (INT_SIZE == 2 || UINT_SIZE == 2) && defined(__AVX512BW__)
@@ -116,6 +126,14 @@ static inline bool _to_bool(byte_vec_t b
     asm ( "vcvtusi2sd%z1 %1, %0, %0" : "=v" (t_) : "m" (u_) ); \
     (vec_t){ t_[0] }; \
 })
+#  elif FLOAT_SIZE == 2
+#   define to_u_int(type, x) ({ \
+    unsigned type u_; \
+    _Float16 __attribute__((vector_size(16))) t_; \
+    asm ( "vcvtsh2usi %1, %0" : "=r" (u_) : "m" ((x)[0]) ); \
+    asm ( "vcvtusi2sh%z1 %1, %0, %0" : "=v" (t_) : "m" (u_) ); \
+    (vec_t){ t_[0] }; \
+})
 #  endif
 #  define to_uint(x) to_u_int(int, x)
 #  ifdef __x86_64__
@@ -153,6 +171,43 @@ static inline bool _to_bool(byte_vec_t b
 #   define to_wint(x) BR(cvtqq2pd, _mask, BR(cvtpd2qq, _mask, x, (vdi_t)undef(), ~0), undef(), ~0)
 #   define to_uwint(x) BR(cvtuqq2pd, _mask, BR(cvtpd2uqq, _mask, x, (vdi_t)undef(), ~0), undef(), ~0)
 #  endif
+# elif FLOAT_SIZE == 2
+#  define to_int(x) BR2(vcvtw2ph, _mask, BR2(vcvtph2w, _mask, x, (vhi_t)undef(), ~0), undef(), ~0)
+#  define to_uint(x) BR2(vcvtuw2ph, _mask, BR2(vcvtph2uw, _mask, x, (vhi_t)undef(), ~0), undef(), ~0)
+#  if VEC_SIZE == 16
+#   define low_half(x) (x)
+#   define high_half(x) ((vec_t)B_(movhlps, , (vsf_t)undef(), (vsf_t)(x)))
+#   define insert_half(x, y, p) ((vec_t)((p) ? B_(movlhps, , (vsf_t)(x), (vsf_t)(y)) \
+                                             : B_(shufps, , (vsf_t)(y), (vsf_t)(x), 0b11100100)))
+#  elif VEC_SIZE == 32
+#   define _half(x, lh) ((vhf_half_t)B(extracti32x4_, _mask, (vsi_t)(x), lh, (vsi_half_t){}, ~0))
+#   define low_half(x)  _half(x, 0)
+#   define high_half(x) _half(x, 1)
+#   define insert_half(x, y, p) \
+    ((vec_t)B(inserti32x4_, _mask, (vsi_t)(x), (vsi_half_t)(y), p, (vsi_t)undef(), ~0))
+#  elif VEC_SIZE == 64
+#   define _half(x, lh) \
+    ((vhf_half_t)__builtin_ia32_extracti64x4_mask((vdi_t)(x), lh, (vdi_half_t){}, ~0))
+#   define low_half(x)  _half(x, 0)
+#   define high_half(x) _half(x, 1)
+#   define insert_half(x, y, p) \
+    ((vec_t)__builtin_ia32_inserti64x4_mask((vdi_t)(x), (vdi_half_t)(y), p, (vdi_t)undef(), ~0))
+#  endif
+#  define to_w_int(x, s) ({ \
+    vhf_half_t t_ = low_half(x); \
+    vsi_t lo_, hi_; \
+    touch(t_); \
+    lo_ = BR2(vcvtph2 ## s ## dq, _mask, t_, (vsi_t)undef(), ~0); \
+    t_ = high_half(x); \
+    touch(t_); \
+    hi_ = BR2(vcvtph2 ## s ## dq, _mask, t_, (vsi_t)undef(), ~0); \
+    touch(lo_); touch(hi_); \
+    insert_half(insert_half(undef(), \
+                            BR2(vcvt ## s ## dq2ph, _mask, lo_, (vhf_half_t){}, ~0), 0), \
+                BR2(vcvt ## s ## dq2ph, _mask, hi_, (vhf_half_t){}, ~0), 1); \
+})
+#  define to_wint(x) to_w_int(x, )
+#  define to_uwint(x) to_w_int(x, u)
 # endif
 #elif VEC_SIZE == 16 && defined(__SSE2__)
 # if FLOAT_SIZE == 4
@@ -240,10 +295,18 @@ static inline vec_t movlhps(vec_t x, vec
 #  define scale(x, y) scalar_2op(x, y, "vscalefsd %[in2], %[in1], %[out]")
 #  define sqrt(x) scalar_1op(x, "vsqrtsd %[in], %[out], %[out]")
 #  define trunc(x) scalar_1op(x, "vrndscalesd $0b1011, %[in], %[out], %[out]")
+# elif FLOAT_SIZE == 2
+#  define getexp(x) scalar_1op(x, "vgetexpsh %[in], %[out], %[out]")
+#  define getmant(x) scalar_1op(x, "vgetmantsh $0, %[in], %[out], %[out]")
+#  define recip(x) scalar_1op(x, "vrcpsh %[in], %[out], %[out]")
+#  define rsqrt(x) scalar_1op(x, "vrsqrtsh %[in], %[out], %[out]")
+#  define scale(x, y) scalar_2op(x, y, "vscalefsh %[in2], %[in1], %[out]")
+#  define sqrt(x) scalar_1op(x, "vsqrtsh %[in], %[out], %[out]")
+#  define trunc(x) scalar_1op(x, "vrndscalesh $0b1011, %[in], %[out], %[out]")
 # endif
 #elif defined(FLOAT_SIZE) && defined(__AVX512F__) && \
       (VEC_SIZE == 64 || defined(__AVX512VL__))
-# if ELEM_COUNT == 8 /* vextractf{32,64}x4 */ || \
+# if (ELEM_COUNT == 8 && ELEM_SIZE >= 4) /* vextractf{32,64}x4 */ || \
      (ELEM_COUNT == 16 && ELEM_SIZE == 4 && defined(__AVX512DQ__)) /* vextractf32x8 */ || \
      (ELEM_COUNT == 4 && ELEM_SIZE == 8 && defined(__AVX512DQ__)) /* vextractf64x2 */
 #  define _half(x, lh) ({ \
@@ -398,6 +461,21 @@ static inline vec_t movlhps(vec_t x, vec
                          VEC_SIZE == 32 ? 0b01 : 0b00011011, undef(), ~0), \
                        0b01010101, undef(), ~0)
 #  endif
+# elif FLOAT_SIZE == 2
+#  define frac(x) BR2(reduceph, _mask, x, 0b00001011, undef(), ~0)
+#  define getexp(x) BR(getexpph, _mask, x, undef(), ~0)
+#  define getmant(x) BR(getmantph, _mask, x, 0, undef(), ~0)
+#  define max(x, y) BR2(maxph, _mask, x, y, undef(), ~0)
+#  define min(x, y) BR2(minph, _mask, x, y, undef(), ~0)
+#  define scale(x, y) BR2(scalefph, _mask, x, y, undef(), ~0)
+#  define recip(x) B(rcpph, _mask, x, undef(), ~0)
+#  define rsqrt(x) B(rsqrtph, _mask, x, undef(), ~0)
+#  define shrink1(x) BR2(vcvtps2phx, _mask, (vsf_t)(x), (vhf_half_t){}, ~0)
+#  define shrink2(x) BR2(vcvtpd2ph, _mask, (vdf_t)(x), (vhf_quarter_t){}, ~0)
+#  define sqrt(x) BR2(sqrtph, _mask, x, undef(), ~0)
+#  define trunc(x) BR2(rndscaleph, _mask, x, 0b1011, undef(), ~0)
+#  define widen1(x) ((vec_t)BR2(vcvtph2psx, _mask, x, (vsf_t)undef(), ~0))
+#  define widen2(x) ((vec_t)BR2(vcvtph2pd, _mask, x, (vdf_t)undef(), ~0))
 # endif
 #elif FLOAT_SIZE == 4 && defined(__SSE__)
 # if VEC_SIZE == 32 && defined(__AVX__)
@@ -920,6 +998,16 @@ static inline vec_t movlhps(vec_t x, vec
 #  define dup_lo(x) B(movddup, _mask, x, undef(), ~0)
 # endif
 #endif
+#if FLOAT_SIZE == 2 && ELEM_COUNT > 1
+# define dup_hi(x) ((vec_t)B(pshufhw, _mask, \
+                             B(pshuflw, _mask, (vhi_t)(x), 0b11110101, \
+                               (vhi_t)undef(), ~0), \
+                             0b11110101, (vhi_t)undef(), ~0))
+# define dup_lo(x) ((vec_t)B(pshufhw, _mask, \
+                             B(pshuflw, _mask, (vhi_t)(x), 0b10100000, \
+                               (vhi_t)undef(), ~0), \
+                             0b10100000, (vhi_t)undef(), ~0))
+#endif
 #if VEC_SIZE == 16 && defined(__SSSE3__) && !defined(__AVX512VL__)
 # if INT_SIZE == 1
 #  define abs(x) ((vec_t)__builtin_ia32_pabsb128((vqi_t)(x)))
--- a/tools/tests/x86_emulator/simd.h
+++ b/tools/tests/x86_emulator/simd.h
@@ -53,6 +53,9 @@ float
 # elif FLOAT_SIZE == 8
 #  define MODE DF
 #  define ELEM_SFX "d"
+# elif FLOAT_SIZE == 2
+#  define MODE HF
+#  define ELEM_SFX "h"
 # endif
 #endif
 #ifndef VEC_SIZE
@@ -67,7 +70,10 @@ typedef unsigned int __attribute__((mode
 /* Various builtins want plain char / int / long long vector types ... */
 typedef char __attribute__((vector_size(VEC_SIZE))) vqi_t;
 typedef short __attribute__((vector_size(VEC_SIZE))) vhi_t;
+#if VEC_SIZE >= 4
 typedef int __attribute__((vector_size(VEC_SIZE))) vsi_t;
+typedef float __attribute__((vector_size(VEC_SIZE))) vsf_t;
+#endif
 #if VEC_SIZE >= 8
 typedef long long __attribute__((vector_size(VEC_SIZE))) vdi_t;
 typedef double __attribute__((vector_size(VEC_SIZE))) vdf_t;
@@ -96,6 +102,9 @@ typedef char __attribute__((vector_size(
 typedef short __attribute__((vector_size(HALF_SIZE))) vhi_half_t;
 typedef int __attribute__((vector_size(HALF_SIZE))) vsi_half_t;
 typedef long long __attribute__((vector_size(HALF_SIZE))) vdi_half_t;
+#ifdef __AVX512FP16__
+typedef _Float16 __attribute__((vector_size(HALF_SIZE))) vhf_half_t;
+#endif
 typedef float __attribute__((vector_size(HALF_SIZE))) vsf_half_t;
 # endif
 
@@ -110,6 +119,9 @@ typedef char __attribute__((vector_size(
 typedef short __attribute__((vector_size(QUARTER_SIZE))) vhi_quarter_t;
 typedef int __attribute__((vector_size(QUARTER_SIZE))) vsi_quarter_t;
 typedef long long __attribute__((vector_size(QUARTER_SIZE))) vdi_quarter_t;
+#ifdef __AVX512FP16__
+typedef _Float16 __attribute__((vector_size(QUARTER_SIZE))) vhf_quarter_t;
+#endif
 # endif
 
 # if ELEM_COUNT >= 8
@@ -163,6 +175,7 @@ DECL_OCTET(half);
 #elif VEC_SIZE == 64
 # define B(n, s, a...)   __builtin_ia32_ ## n ## 512 ## s(a)
 # define BR(n, s, a...)  __builtin_ia32_ ## n ## 512 ## s(a, 4)
+# define BR2(n, s, a...) __builtin_ia32_ ## n ## 512 ## s ## _round(a, 4)
 #endif
 #ifndef B_
 # define B_ B
@@ -171,6 +184,9 @@ DECL_OCTET(half);
 # define BR B
 # define BR_ B_
 #endif
+#ifndef BR2
+# define BR2 BR
+#endif
 #ifndef BR_
 # define BR_ BR
 #endif
--- a/tools/tests/x86_emulator/simd-fma.c
+++ b/tools/tests/x86_emulator/simd-fma.c
@@ -28,6 +28,8 @@ ENTRY(fma_test);
 #  define fmaddsub(x, y, z) BR(vfmaddsubps, _mask, x, y, z, ~0)
 # elif FLOAT_SIZE == 8
 #  define fmaddsub(x, y, z) BR(vfmaddsubpd, _mask, x, y, z, ~0)
+# elif FLOAT_SIZE == 2
+#  define fmaddsub(x, y, z) BR(vfmaddsubph, _mask, x, y, z, ~0)
 # endif
 #elif VEC_SIZE == 16
 # if FLOAT_SIZE == 4
@@ -70,6 +72,75 @@ ENTRY(fma_test);
 # endif
 #endif
 
+#ifdef __AVX512FP16__
+# define I (1.if16)
+# if VEC_SIZE > FLOAT_SIZE
+#  define CELEM_COUNT (ELEM_COUNT / 2)
+static const unsigned int conj_mask = 0x80000000;
+#  define conj(z) ({ \
+    vec_t r_; \
+    asm ( "vpxord %2%{1to%c3%}, %1, %0" \
+          : "=v" (r_) \
+          : "v" (z), "m" (conj_mask), "i" (CELEM_COUNT) ); \
+    r_; \
+})
+#  define _cmul_vv(a, b, c)  BR2(vf##c##mulcph, , a, b)
+#  define _cmul_vs(a, b, c) ({ \
+    vec_t r_; \
+    _Complex _Float16 b_ = (b); \
+    asm ( "vf"#c"mulcph %2%{1to%c3%}, %1, %0" \
+          : "=v" (r_) \
+          : "v" (a), "m" (b_), "i" (CELEM_COUNT) ); \
+    r_; \
+})
+#  define cmadd_vv(a, b, c) BR2(vfmaddcph, , a, b, c)
+#  define cmadd_vs(a, b, c) ({ \
+    _Complex _Float16 b_ = (b); \
+    vec_t r_; \
+    asm ( "vfmaddcph %2%{1to%c3%}, %1, %0" \
+          : "=v" (r_) \
+          : "v" (a), "m" (b_), "i" (CELEM_COUNT), "0" (c) ); \
+    r_; \
+})
+# else
+#  define CELEM_COUNT 1
+typedef _Float16 __attribute__((vector_size(4))) cvec_t;
+#  define conj(z) ({ \
+    cvec_t r_; \
+    asm ( "xor $0x80000000, %0" : "=rm" (r_) : "0" (z) ); \
+    r_; \
+})
+#  define _cmul_vv(a, b, c) ({ \
+    cvec_t r_; \
+    /* "=&x" to force destination to be different from both sources */ \
+    asm ( "vf"#c"mulcsh %2, %1, %0" : "=&x" (r_) : "x" (a), "m" (b) ); \
+    r_; \
+})
+#  define _cmul_vs(a, b, c) ({ \
+    _Complex _Float16 b_ = (b); \
+    cvec_t r_; \
+    /* "=&x" to force destination to be different from both sources */ \
+    asm ( "vf"#c"mulcsh %2, %1, %0" : "=&x" (r_) : "x" (a), "m" (b_) ); \
+    r_; \
+})
+#  define cmadd_vv(a, b, c) ({ \
+    cvec_t r_ = (c); \
+    asm ( "vfmaddcsh %2, %1, %0" : "+x" (r_) : "x" (a), "m" (b) ); \
+    r_; \
+})
+#  define cmadd_vs(a, b, c) ({ \
+    _Complex _Float16 b_ = (b); \
+    cvec_t r_ = (c); \
+    asm ( "vfmaddcsh %2, %1, %0" : "+x" (r_) : "x" (a), "m" (b_) ); \
+    r_; \
+})
+# endif
+# define cmul_vv(a, b) _cmul_vv(a, b, )
+# define cmulc_vv(a, b) _cmul_vv(a, b, c)
+# define cmul_vs(a, b) _cmul_vs(a, b, )
+# define cmulc_vs(a, b) _cmul_vs(a, b, c)
+#endif
+
 int fma_test(void)
 {
     unsigned int i;
@@ -156,5 +227,99 @@ int fma_test(void)
     touch(inv);
 #endif
 
+#ifdef CELEM_COUNT
+
+# if VEC_SIZE > FLOAT_SIZE
+#  define cvec_t vec_t
+#  define ceq eq
+# else
+  {
+    /* Cannot re-use the function-scope variables (for being too small). */
+    cvec_t x, y, z, src = { 1, 2 }, inv = { 2, 1 }, one = { 1, 1 };
+#  define ceq(x, y) ({ \
+    unsigned int r_; \
+    asm ( "vcmpph $0, %1, %2, %0"  : "=k" (r_) : "x" (x), "x" (y) ); \
+    (r_ & 3) == 3; \
+})
+# endif
+
+    /* (a * i)² == -a² */
+    x = cmul_vs(src, I);
+    y = cmul_vv(x, x);
+    x = -src;
+    touch(src);
+    z = cmul_vv(x, src);
+    if ( !ceq(y, z) ) return __LINE__;
+
+    /* conj(a * b) == conj(a) * conj(b) */
+    touch(src);
+    x = conj(src);
+    touch(inv);
+    y = cmulc_vv(x, inv);
+    touch(src);
+    touch(inv);
+    z = conj(cmul_vv(src, inv));
+    if ( !ceq(y, z) ) return __LINE__;
+
+    /* a * conj(a) == |a|² */
+    touch(src);
+    y = src;
+    touch(src);
+    x = cmulc_vv(y, src);
+    y *= y;
+    for ( i = 0; i < ELEM_COUNT; i += 2 )
+    {
+        if ( x[i] != y[i] + y[i + 1] ) return __LINE__;
+        if ( x[i + 1] ) return __LINE__;
+    }
+
+    /* a * b == b * a + 0 */
+    touch(src);
+    touch(inv);
+    x = cmul_vv(src, inv);
+    touch(src);
+    touch(inv);
+    y = cmadd_vv(inv, src, (cvec_t){});
+    if ( !ceq(x, y) ) return __LINE__;
+
+    /* a * 1 + b == b * 1 + a */
+    touch(src);
+    touch(inv);
+    x = cmadd_vs(src, 1, inv);
+    for ( i = 0; i < ELEM_COUNT; i += 2 )
+    {
+        z[i] = 1;
+        z[i + 1] = 0;
+    }
+    touch(z);
+    y = cmadd_vv(inv, z, src);
+    if ( !ceq(x, y) ) return __LINE__;
+
+    /* (a + b) * c == a * c + b * c */
+    touch(one);
+    touch(inv);
+    x = cmul_vv(src + one, inv);
+    touch(inv);
+    y = cmul_vv(one, inv);
+    touch(inv);
+    z = cmadd_vv(src, inv, y);
+    if ( !ceq(x, z) ) return __LINE__;
+
+    /* a * i + conj(a) == (Re(a) - Im(a)) * (1 + i) */
+    x = cmadd_vs(src, I, conj(src));
+    for ( i = 0; i < ELEM_COUNT; i += 2 )
+    {
+        typeof(x[0]) val = src[i] - src[i + 1];
+
+        if ( x[i] != val ) return __LINE__;
+        if ( x[i + 1] != val ) return __LINE__;
+    }
+
+# if VEC_SIZE == FLOAT_SIZE
+  }
+# endif
+
+#endif /* CELEM_COUNT */
+
     return 0;
 }
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -43,6 +43,7 @@ asm ( ".pushsection .test, \"ax\", @prog
 #include "avx512er.h"
 #include "avx512vbmi.h"
 #include "avx512vbmi2-vpclmulqdq.h"
+#include "avx512fp16.h"
 
 #define verbose false /* Switch to true for far more logging. */
 
@@ -249,6 +250,16 @@ static bool simd_check_avx512bw_gf_vl(vo
     return cpu_has_gfni && cpu_has_avx512vl;
 }
 
+static bool simd_check_avx512fp16(void)
+{
+    return cpu_has_avx512_fp16;
+}
+
+static bool simd_check_avx512fp16_vl(void)
+{
+    return cpu_has_avx512_fp16 && cpu_has_avx512vl;
+}
+
 static void simd_set_regs(struct cpu_user_regs *regs)
 {
     if ( cpu_has_mmx )
@@ -513,6 +524,10 @@ static const struct {
     AVX512VL(_VBMI+VL u16x8, avx512vbmi,    16u2),
     AVX512VL(_VBMI+VL s16x16, avx512vbmi,   32i2),
     AVX512VL(_VBMI+VL u16x16, avx512vbmi,   32u2),
+    SIMD(AVX512_FP16 f16 scal,avx512fp16,     f2),
+    SIMD(AVX512_FP16 f16x32, avx512fp16,    64f2),
+    AVX512VL(_FP16+VL f16x8, avx512fp16,    16f2),
+    AVX512VL(_FP16+VL f16x16,avx512fp16,    32f2),
     SIMD(SHA,                sse4_sha,        16),
     SIMD(AVX+SHA,             avx_sha,        16),
     AVX512VL(VL+SHA,      avx512f_sha,        16),
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -267,7 +267,7 @@ XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13)
 XEN_CPUFEATURE(SERIALIZE,     9*32+14) /*A  SERIALIZE insn */
 XEN_CPUFEATURE(TSXLDTRK,      9*32+16) /*a  TSX load tracking suspend/resume insns */
 XEN_CPUFEATURE(CET_IBT,       9*32+20) /*   CET - Indirect Branch Tracking */
-XEN_CPUFEATURE(AVX512_FP16,   9*32+23) /*   AVX512 FP16 instructions */
+XEN_CPUFEATURE(AVX512_FP16,   9*32+23) /*A  AVX512 FP16 instructions */
 XEN_CPUFEATURE(IBRSB,         9*32+26) /*A  IBRS and IBPB support (used by Intel) */
 XEN_CPUFEATURE(STIBP,         9*32+27) /*A  STIBP */
 XEN_CPUFEATURE(L1D_FLUSH,     9*32+28) /*S  MSR_FLUSH_CMD and L1D flush. */



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 15:30:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 15:30:56 +0000
Message-ID: <5fc00c1a-11ea-905e-49eb-d70caaf71041@suse.com>
Date: Mon, 3 Apr 2023 17:30:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [RFC PATCH] x86/p2m-pt: do type recalculations with p2m read lock
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230403101449.93323-1-roger.pau@citrix.com>
 <8d976d34-8a1e-95ad-3bc9-3cb704c1fae7@suse.com>
 <ZCriYs9y6JU1gat9@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZCriYs9y6JU1gat9@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0064.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VE1PR04MB7295:EE_
X-MS-Office365-Filtering-Correlation-Id: e711b2bc-2f86-472d-a0a4-08db345860bc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GGPZlNp0ygXGJkHBr4ME9ooyBKI1D52o6tbcPC+uuHFWYSMwyYzWMVRMd64/qYo71yC+Q/3V2ZcDd41VwRzeXl/u1U0x7gGBgAZWDwdxSR1IOmAK/lclypGPRmKOBOB8eT/H3kkyHEbGHH0BC/8yRRdZe60qjGfZm7Hz1xTPWpARts+GYLJwFvjlBgGQuAQIK+s+SAGWEXPpsKLrKKntep+sA7z9GHyl5uZOagUlhP0o/vNX4Wh60YSpA2DMc+/HwliZYMargKLwnLDXMRlwiEcACRRehZehs9tcuKEhd8yOZYL+oKokNH0UiV6esyq37uXyDhEEuqjXJDLd30D1rCWRYSGqv711LiJ8/9fqzWHnvxKoHj0Q9IzTt/PE/jlBqxf0s8VAtwapRoWTpziJT77Gb7Q4BaYa3sa0IcU1mD2uBvCGMKajozIxoY4nG4P8LNfn1vTF9mSf8o8+SpHa4IrW2C0vMw/n0xBfWhVCzWIFW5/9q8AbV76KbKEucpu5SL6GVPMaMYP3r1QJ0mKO6PuLr+KM+4vhLOrp69aXXMHopjx+tSTAn6buH03f2Zd/T7IVIS6szc9SeMWp1MoWn7Ai7eYtHuvyE3qvxQ8f6dEviduqzxmz9FpJN8x/d2rkWIc4EqKnW60fOniUhfkuqQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39850400004)(346002)(396003)(376002)(366004)(136003)(451199021)(31696002)(86362001)(36756003)(2906002)(31686004)(2616005)(53546011)(186003)(83380400001)(6512007)(6506007)(26005)(6486002)(8676002)(6916009)(4326008)(66946007)(66556008)(66476007)(41300700001)(5660300002)(38100700002)(54906003)(316002)(478600001)(8936002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?L3h6NGRZR3Y4Q01VZHdKcGZER1ZQdTFFWGZPc0ZENzk5RlQzYWgyNUVPUWlN?=
 =?utf-8?B?U1JZYUNISy9WakhHaXdWck9JRVR0V05FOGR4VDFNc3crWHVUdnhUREpnUHVu?=
 =?utf-8?B?T2svMkVCbUdrNEUxU0FkTVZBRURmcXJmcnl6bDd6U1JqMG9wNEZqTWRBSWRK?=
 =?utf-8?B?aUNPSnpKNnNFZE1XaTZyN3g2QVRkc3BxaWRvaERiaEM2azlHTDBWbjd1U1o1?=
 =?utf-8?B?VkxOYVZ3U2toVlUyaytSemd2cWIxbHFUSWdCM3dqWFB0MExaQldNSXFUWW9x?=
 =?utf-8?B?QWlTRDFsbHZHTG1sR3lNNFMyMlJrNDI1cnBEOHlNZTJCS0xHc1NUK0FTVkMr?=
 =?utf-8?B?Y1FwK01GL1E0QjhWSFJxKzROVVBJU0lJQTdBVnZOMnRQb280WFhZbzBCanBE?=
 =?utf-8?B?ZEd1MllTNFJLanlOSzFuaUN2RHJ2Zyt3bi9KZWdFTmpGbFJ2Wi9OQUlENHlO?=
 =?utf-8?B?VitLVzQ0U0ZOQk5pQ080L3BXL0xkS1V1dFBtMXpQQm1KM2RsTUxJUHVCUHY5?=
 =?utf-8?B?dE5tK2gzQWZKQmV1QXYwNE40WFpmK3NGV01JWWRLMitXYVVtSWtTcDFETE8y?=
 =?utf-8?B?bXphZS8xNHBIQ2Z0bFhMamQ3V3Q5RVVZS3BHdnJtT3o4ckpvVkhDcHJITjFD?=
 =?utf-8?B?MXRiQmtKMm9kVVBhZWp6aDZLa1hNS2pORk1MK0pWZ3N1dXczVDk3YWdURFNw?=
 =?utf-8?B?Q0xCV0RQZEEyL0lyWDJScEdNN1ZWUGRkVHFkSlRHa1BGTFhzclFqNUFTbUpi?=
 =?utf-8?B?S3FBMkgrY2I1d29KK0xCTlRPOHUzWnM3RWM0WlN3V2lTVUJyYXVxamdKZ3F1?=
 =?utf-8?B?K3FMdXIyQUxOcWZJM3dpUC9nL2xkUEc0ZmkwYUdkU1NpTk8yRUo0RVBjZXRt?=
 =?utf-8?B?bDhibm1tOGhpMmdHNFVnaDAvTzQwUThtVUxyaVBEQjhsQjZFeWRJTUExVUgw?=
 =?utf-8?B?eTJyNzdNUVJwbk1Ydk5QWTZPYU5tYmVTNmx4VjFqMlBXaUlMN3JiRkkrRmJT?=
 =?utf-8?B?RDJUM21sVXZ4aGt6K2Q1VVAvQkF3ajNxSWJRMU93UWxQenM2MmVnVUgrV1dv?=
 =?utf-8?B?QWhmWGZsQmp3UWU2WU5qaHFIZlV2RHkzM2tmcUxmak1WcWFZSC84bTBzSmpN?=
 =?utf-8?B?SlBDekMrb3JpM1RodVhWYjB4UTdwWStIMzIvZXhXL1NVem5mSHhFYm5JWjNj?=
 =?utf-8?B?V1dwSnNmMUhTKzNid0RWOWxiTjYwUHZES1B3NmF3bTZvQzFJeUxhaktnWGw3?=
 =?utf-8?B?Sk1jNVhSMjBnbmdVOVZWYlJmMDlKcENCWjh1WDlPRURMQWw0dFcrNGRhR1pi?=
 =?utf-8?B?R09KbnhYb3FJVGRaMkdQc01wRHJzRzJuN1cvcEN4OHNPNER0citaZElkSGh6?=
 =?utf-8?B?eGwwakZ5T29jcWJiMjZGR3hWSGRqTDdEMXdzTnZMWThHcVp4c1hpWnRCTmYw?=
 =?utf-8?B?dmdtVG84ZEMvZ2IybGV6TGNXNkxDditQTThMQk5JRWlxVnBGOHM1b1JUcE4w?=
 =?utf-8?B?Um9ibW5LQnFlRG1SV2lFdTRaRUpZQXlUdnY4YXFEUVZXYzV5RFNsK05KUkV5?=
 =?utf-8?B?QjdHQTkzSHJ0SkhvUS85cEJJeFowdmNhVHJXVXgyWlFXTU5oOHR6N1M4QTQ4?=
 =?utf-8?B?RFk5d3g3Z0V3V0JyOWwzRk0yTHVGNGI0eDZ2UTF2WGRPdHdQbzJhZmw4S1dj?=
 =?utf-8?B?OExZaDAzclFaVzRTQlJ2ZC9Ddzg4U21yN0RNTnJHQVA3dU1hMG1zYWx6YXNJ?=
 =?utf-8?B?ODFpKzYzUG96eXBNbEUxUjhjdGtYSUNHV25mM2pxR1Fsd1JyTUFITjVxOU1F?=
 =?utf-8?B?dm5JZlpyek9qQzVRMFNqdm8rWUZKYWFhdFZKYmNVTjhTQzhGcG1kRExPbWJZ?=
 =?utf-8?B?eUUwUVgvYmpRTmNrb1Q5cUZJKzFERnNncEFiMHVkY3RNR3dSaWZKalAveUNv?=
 =?utf-8?B?NEhaYlU1enZCQkxXOHJQcVZlOVNGdTY2allackFCQTlqRS9HZ2lrbkk2YUNs?=
 =?utf-8?B?T05NY0grQy9xNnhJSWgwY1JmYmdvOEYrc1U1Z2R2NlpHSXdPMUp0ei8rdE5l?=
 =?utf-8?B?b1BCVjhyYUhKZUh5ZjdLRU5SbFRRWkE0cE9RMlRaa24yREVlZnFrT0h1eHdB?=
 =?utf-8?Q?nkMqpk5M+A1jyC12MKNQgVhBL?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e711b2bc-2f86-472d-a0a4-08db345860bc
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 15:30:39.0138
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /JDFppl6blr1xh4u3Q+C/fE7Rd6JnYPrzmUmgP9KvlJ4Eo8KgYml4+0RrrGByGWPXdROxWm3lzdSj9XiCGvv8A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7295

On 03.04.2023 16:27, Roger Pau Monné wrote:
> On Mon, Apr 03, 2023 at 02:39:08PM +0200, Jan Beulich wrote:
>> On 03.04.2023 12:14, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/mm/p2m-pt.c
>>> +++ b/xen/arch/x86/mm/p2m-pt.c
>>> @@ -486,9 +486,6 @@ static int cf_check do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>>>          p2m_type_t ot, nt;
>>>          unsigned long mask = ~0UL << (level * PAGETABLE_ORDER);
>>>  
>>> -        if ( !valid_recalc(l1, e) )
>>> -            P2M_DEBUG("bogus recalc leaf at d%d:%lx:%u\n",
>>> -                      p2m->domain->domain_id, gfn, level);
>>>          ot = p2m_flags_to_type(l1e_get_flags(e));
>>>          nt = p2m_recalc_type_range(true, ot, p2m, gfn & mask, gfn | ~mask);
>>>          if ( nt != ot )
>>
>> I'm afraid I neither understand why you make this change, nor why you
>> then leave the other use of valid_recalc() in place.
> 
> The message can be bogus if we allow concurrent do_recalc(), and I
> did miss the previous one.
> 
> I missed the one at the top.  Originally I wanted to send the RFC with
> just changing the lock to read mode, but then I thought I might as
> well fix that (now bogus) print message.
> 
>>> @@ -538,9 +535,9 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa)
>>>       */
>>>      ASSERT(!altp2m_active(current->domain));
>>>  
>>> -    p2m_lock(p2m);
>>> +    p2m_read_lock(p2m);
>>>      rc = do_recalc(p2m, PFN_DOWN(gpa));
>>> -    p2m_unlock(p2m);
>>> +    p2m_read_unlock(p2m);
>>>  
>>>      return rc;
>>>  }
>>
>> How can this be safe, when do_recalc() involves p2m_next_level(), which
>> may install new (intermediate) page tables?
> 
> Oh, great, didn't realize it was capable of doing so, it's more hidden
> than in the EPT case.  Seems like this will only happen if a superpage
> needs to be split because a lower order frame is being used as an
> ioreq server page.
> 
> Do you think it would be safe to attempt to perform the recalc
> with the read lock only and fall back to the write lock if there's a
> need to call p2m_next_level()?

Yes, that ought to be okay.

> Do you agree it might be possible to do the recalc with just the read
> lock if it's only updating the PTE type / recalc flags?

Technically this looks to be possible, yes. The question is whether we do
ourselves much good by introducing such a special case, permitting
a certain kind of write with the lock only held in read mode. At the
latest when we find a second (more or less similar) use case, things
are likely to become difficult.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 15:32:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 15:32:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517519.802959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjMAp-0007xb-JJ; Mon, 03 Apr 2023 15:32:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517519.802959; Mon, 03 Apr 2023 15:32:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjMAp-0007xU-FM; Mon, 03 Apr 2023 15:32:47 +0000
Received: by outflank-mailman (input) for mailman id 517519;
 Mon, 03 Apr 2023 15:32:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dypz=72=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjMAo-0007xO-45
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 15:32:46 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2061e.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c5749c1b-d234-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 17:32:43 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7295.eurprd04.prod.outlook.com (2603:10a6:800:1ac::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 15:32:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.033; Mon, 3 Apr 2023
 15:32:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5749c1b-d234-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=m9npcgi1+bMVFezilYeE8bOu6Olj2VyP3KcCmsg5G+ZpVs5+CIR1JFLB82e6G0IWFqV/131USArPUZhODtJpu4UjJaMMZf2B5QUzz5iv8hEikgtejyK/zBfcBsXGNkKj67qCI/StS9mJ4P2esUQrKEpgnVmIQLkrN1tgH76o9siOtvVblmmNLlE58RR7+BQOyHy7P+UtRdjfCLqFamAN3Isn1nnpGzavlCqGrvnrEC6SHWhGNbngMRw/t42kuTrtcVS4FgiUUjQkvk6UEg1LR4m3F+qtAb9dyLzigBtATiPfAHaqUroUyl1IYmnTE8XxUJlfH9erzh35RsQFPOht+A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yW/TYZcHxb7whSHZnCCIC13DmOy6RkRRJZkmFNF2v9c=;
 b=m07aEJNuiy5pMk3GTfVwPhYPoiLta95wHv8rSvFoSgwewluzk1bNm2C+WCzi0LXWoNYa21BA5tLlaxvSn8DLC+k4yakUfCCRTl/IbLVEsrlvFRpL6iylIULND4qcbYc7m8PMKli5u1M2ORHTVHT6jf6ImgeK8+KsYBg65NtggybXxcGg9XM4cwiJc1lWkqvTShcoVUqku/u4u5MOMYh2H2oQPHYpQpb8+wCMP6TOlwu/C72s1B/amlro4SI5uq4OS1X1Ji43E6Lab6fjYw520lEg4u7wYqfk4E0GAchzLEx6cYR3JQavnW8G+8Ti8+hIeCo/yh4uVZR3KJ21fdQSiA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yW/TYZcHxb7whSHZnCCIC13DmOy6RkRRJZkmFNF2v9c=;
 b=ZXt+lPT94rfssoFdpe3MjBBAmny+NMLUUwkVfv1QgwmVyCgP3UdKtfHehG2flaRCKarzJmVLwt3Ukf2twlDXKQjimtb0dxhr236aV3Q5QTpqo/RnWHKNlkyCEHcB2QGAcSdc2InTH6bGAYXzHK8jrzMaGUnRTBgkPVvR3hoVe7ZEQTNZRnHDS6KPOjxCnQ4pvd7JspWWgOgv60UFoEYxWgRzCFWfvaHpIJpq7+DQqQsMAGjhbpyeEkafT/7kPIpI2UHlVn6rhxCFw+x4vjf5ubseXgRFFQAFUwLMv8G+7wneWzafqXV0Z9x3QrN/GguUDu8/bRJP/IyRou6YfZw5Hg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0adcb388-d2a9-e43e-ec20-de1df51f33d7@suse.com>
Date: Mon, 3 Apr 2023 17:32:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [RFC PATCH] x86/p2m-pt: do type recalculations with p2m read lock
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230403101449.93323-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230403101449.93323-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0008.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VE1PR04MB7295:EE_
X-MS-Office365-Filtering-Correlation-Id: 7ae16332-831e-4b02-4488-08db3458a93c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	IjDEoTk22sxoybyjViHc6gj6hhsNoWi93iVA7Pb4rGtPRH2gPmmSOjwKbazmNqa/FsrtUKeK3/+1KJrQ9MzU9cBSkLdsJQ8fCACHke+KKdpeRJvaIjQaebQcWn8+1bRYKaBLW43VoIwDgXb1Ab5vl2d/HzlZ3MYTo6U2tPFip+IfJcH2/O8gXHHdsEVP95m8vNFyVs8DLfuVeRwY+ok11oRzU5BfPJWeTa9RBIBBgwQiRPlv3XMagUqqafeAoe1l76tHmJt0tpAH51dxrE8p2usDYTJqzpsPceXCwXPDYKY9l+tu0iuQpeem7W2wMTtCFciV9mEPkl1/VgecfW8BrXl2XqTMHkmvIh3xKYecP6HkWDkjvXaTVKb0cnJLAOChZrbUvgDuCmzYLNtkW6delazM867P4ARZ8YD0QZJWiq9O25eN8ALJZBPY2gHGDec2WiIrmAzh8kRT1aAalu10Ex0oWXscE7X9B1hvFyY10VDbO5yL1rxeBTKWf41G7kBM+a2i7Y++OdWbH3T/6MBcEtQqzwILjnWWZDfBkh1Lb/IMk8QX7c/NeL97h2jGon1jT1MC5+YzZTfg+564T7lFZS9V/Bxi0iyFHG5YsvNwQzXjaV81sXYWW9ZN8BvUC735NgPIdv0X2+5wFiWDvEv0yw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39850400004)(346002)(396003)(376002)(366004)(136003)(451199021)(31696002)(86362001)(36756003)(2906002)(31686004)(2616005)(53546011)(186003)(83380400001)(6512007)(6506007)(26005)(6486002)(8676002)(6916009)(4326008)(66946007)(66556008)(66476007)(41300700001)(5660300002)(38100700002)(54906003)(316002)(478600001)(8936002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Q09rTXp4cnFMeDBjWGtGMWRPU3hFbmp6bUNSbEVRSmlXNm0zWlpSVTFwRWY4?=
 =?utf-8?B?QW93NEFtUUcrcEgwalJKdW1PY1B1YUVJbGZBbVdyYzlOZjB6YmNGcWlpdWxN?=
 =?utf-8?B?WUhBc0VJamd0N1prdFF6VXZ1NEJIeXB0ZzVpbUtxcUptallYY29hUEJTb0RL?=
 =?utf-8?B?cG9Jb3RRQnZJRGx1NmRLRUtZZDVEKys5VUw1ZGxXNFRvajJialkxSHVKRG8z?=
 =?utf-8?B?Skk5N0Fld0dDRmR3bTlHeFpQL1RRSTR6LzJ2Slc2QjE3ZjR4NHdEZCtORDNN?=
 =?utf-8?B?RmRMcXBFSnoybi81blY4QUI2VVhhS3FzK3JqdlRlTHJmbWpIQ2ZQNTJxT1hR?=
 =?utf-8?B?SVNKSnFZMWlqSmtEa0pralZHMkt5VWNXcStzcitKaDByVnRjcmZYaDErUUox?=
 =?utf-8?B?SkFPNVByckJQZG1OMlJWQXV3SkZDY2N4L0RTVnFLd05VNForU0tFTXpWMlE5?=
 =?utf-8?B?OHE1bENUZ0ROSGUwakNJLzh6MThCRWsvS0hDYVBkb3YycW5tTVg5ZHBWQTk5?=
 =?utf-8?B?aHMxSVgwN1Q4YURBWHlSRVg4anprenZRNFRJcWliVVcyVlhDTU5yVHpZTHB2?=
 =?utf-8?B?b3FZZWgyUm5KVlVKMUw0V1dZM1hKYnEvbGZ1ZGFBc0UxWU1hNXFzU1RhSmJP?=
 =?utf-8?B?aU5KUGVtWk1SOUFtWGVxT04rQ3pDUkJwYWdEbGhCSk5HUG03RHk2cXV5eHlX?=
 =?utf-8?B?K2w4YVN2VEo5K3ZLdDh0K2xvT3h4SFhseWZJb3FOM0tBT1VQUXdaUExHZm9S?=
 =?utf-8?B?d1BVc2xWVVRFUWJsOXQrUk43NGI4WFgwNi9sTGhScm5FWnZWTXhOTWE2MERQ?=
 =?utf-8?B?TldSaytyQXNWTmFLUVJUSWh4QndBMGp1c3Iyc05zdDF2SytLSWI5TjhMTFFW?=
 =?utf-8?B?c3kwbk1zaWhPK1Y3NUIyb0xETTRaZy9vQXVwTTZwOEh1bnEyajdpREc2ak9s?=
 =?utf-8?B?dXVpd0I4VzE5ZFZnbjJUL0I4cVRzYWZXQTExb0pzOEJoUEJkK29tV3Q2UUti?=
 =?utf-8?B?UnVjcUJ0STlwZTY1M1p3UFJ0L0FHeWc2WDdDdXhwdmFQb2ZKd1I3akZEdHF1?=
 =?utf-8?B?cGtHQTRjK0lmQUQvd3VzWHY3SHlRTWd6bWV6SWVmMUFnUitQQ091T2tncmVP?=
 =?utf-8?B?aEhVK3l5L1M4YnhNYi9hYkh1K25oSXpPNTdrOGdzNDNqMjdJc0R2dXZBWWhL?=
 =?utf-8?B?OGtYeTU1c2J3K0VzY3p4QzhMMk96RElmaGJOWVpqc1c5ejJRcG9RRHR6bXNK?=
 =?utf-8?B?cUdwU0ZKR2ZFUzRPTTdhL21OdHdMWlMwM0s2QUJXSk5idS9ROFRiaDl4Rnho?=
 =?utf-8?B?d2ovMW0wRnZhcHBIRndnZGRxOWJPb2FZanVMNGRERHNyZHhPT3ZHWFljaFBD?=
 =?utf-8?B?THpic2ptSHVjOGc2a3dpMHB3Ymx6T3VqbTYzZEdweWhXS0lEVVdMQ1hyQi9C?=
 =?utf-8?B?aFB3Ymh2NnRROThkMURSZHZLdVlyWmZLdi9nZmc3L0ZYUHJmUnlOZ2RNRGkr?=
 =?utf-8?B?Lyt1ZDBTV3F5dWNreml4aTFvODFjRXhTa3diM0RDQnRIZDlVVGZyVGZZOE9R?=
 =?utf-8?B?N2NBRDZmN2QvQ1VLc3B6cDBPTkY4QlpGNXphSUtvSU1hT0phQWdyaE10QzVX?=
 =?utf-8?B?Q3A1eUEyRFBjSlAyRmRlNEV5T3ZQZmhucnArelZNaEx3MzFQdjlzUE9xamtS?=
 =?utf-8?B?MGxnRUhrY0lIM1RkQ2J0V3R3K0JKc0tRSG5mVFFuZFdLTFkvYnVuL2E3RG01?=
 =?utf-8?B?emtCK0tNOFRLVmtHNWVNZXYraWhKK3dPUW42dWFESk15ZTRWc0pjTGkvN1J6?=
 =?utf-8?B?WldNZ2kxa0NyWmxlNzlacGdIdTFoMzJkVDYvNGRZdkJzMzVZUVZ4OVBXOWNl?=
 =?utf-8?B?NWlUb3JmN0U2aHltOVJ1L2VLc1VwQmk3WllMbUJMLzhUVHhFRnVRZHhUK2ww?=
 =?utf-8?B?VERvcU5lTm1JazBtcTVnOEFYOU11TkhlUUp3bHRtMkJoOGppN3kyeWVzVW9B?=
 =?utf-8?B?T3BMbjNDZUZOOFFCZVZEQWs1U3NMV2ltS0lWM2psc1ZibEFlN3kzMEpjb1Fh?=
 =?utf-8?B?czRsTHVIV1pGT1dVZGd2V3FqR09jZGdYSGhVOEl4SnVYVkd3a3duRytPN292?=
 =?utf-8?Q?l8DkJLD+eAv6UMp1/BAa65Bse?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7ae16332-831e-4b02-4488-08db3458a93c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 15:32:40.5559
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WwAw3nFK9ODvlJA9bLC1jdt0LCDwYbvXPrJ3bsFZrnCL2TN9MMmC2Q/Eb3M2V3l/vyD1maJZA/zJNahBH06ujg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7295

On 03.04.2023 12:14, Roger Pau Monne wrote:
> Global p2m type recalculations (as triggered by logdirty) can create
> so much contention on the p2m lock that simple guest operations like
> VCPUOP_set_singleshot_timer on guests with a high amount of vCPUs (32)
> will cease to work in a timely manner, up to the point that Linux
> kernel versions that still use the VCPU_SSHOTTMR_future flag with the
> singleshot timer will cease to work:
> 
> [   82.779470] CE: xen increased min_delta_ns to 1000000 nsec
> [   82.793075] CE: Reprogramming failure. Giving up
> [   82.779470] CE: Reprogramming failure. Giving up
> [   82.821864] CE: xen increased min_delta_ns to 506250 nsec
> [   82.821864] CE: xen increased min_delta_ns to 759375 nsec
> [   82.821864] CE: xen increased min_delta_ns to 1000000 nsec
> [   82.821864] CE: Reprogramming failure. Giving up
> [   82.856256] CE: Reprogramming failure. Giving up
> [   84.566279] CE: Reprogramming failure. Giving up
> [   84.649493] Freezing user space processes ...
> [  130.604032] INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
> [  130.604032] Task dump for CPU 14:
> [  130.604032] swapper/14      R  running task        0     0      1 0x00000000
> [  130.604032] Call Trace:
> [  130.604032]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> [  130.604032]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> [  130.604032]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> [  130.604032]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> [  130.604032]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> [  130.604032]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> [  549.654536] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
> [  549.655463] Task dump for CPU 26:
> [  549.655463] swapper/26      R  running task        0     0      1 0x00000000
> [  549.655463] Call Trace:
> [  549.655463]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> [  549.655463]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> [  549.655463]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> [  549.655463]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> [  549.655463]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> [  549.655463]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> [  821.888478] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
> [  821.888596] Task dump for CPU 26:
> [  821.888622] swapper/26      R  running task        0     0      1 0x00000000
> [  821.888677] Call Trace:
> [  821.888712]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> [  821.888771]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> [  821.888818]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> [  821.888865]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> [  821.888917]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> [  821.888966]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> 
> This is obviously undesirable.  One way to bodge the issue would be to
> ignore VCPU_SSHOTTMR_future, but that's a deliberate breakage of the
> hypercall ABI.
> 
> Instead lower the contention in the lock by doing the recalculation
> with the lock in read mode.  This is safe because only the flags/type
> are changed, there's no PTE mfn change in the AMD recalculation logic.
> The Intel (EPT) case is likely more complicated, as superpage
> splitting for diverging EMT values must be done with the p2m lock
> taken in write mode.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> I'm unsure whether such a modification is fully safe.  I think changing
> the flags/type should be fine: the PTE write is performed using
> safwrite_p2m_entry() which must be atomic (as the guest is still
> running and accessing the page tables).  I'm slightly worried about
> all PTE readers not using atomic accesses to do so (ie: the pointer
> returned by p2m_find_entry() should be read atomically), and about code
> assuming that a gfn type cannot change while holding the p2m lock in
> read mode.

Coming back to this: Yes, I think reads (at least the ones in do_recalc()
which can now be done in parallel) will need to be tightened if this is a
road we want to follow.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 15:39:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 15:39:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517522.802969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjMH4-0000Cf-9P; Mon, 03 Apr 2023 15:39:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517522.802969; Mon, 03 Apr 2023 15:39:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjMH4-0000CY-5b; Mon, 03 Apr 2023 15:39:14 +0000
Received: by outflank-mailman (input) for mailman id 517522;
 Mon, 03 Apr 2023 15:39:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W/KQ=72=citrix.com=prvs=450b71a79=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pjMH2-0000CO-R4
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 15:39:13 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aaac4a50-d235-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 17:39:07 +0200 (CEST)
Received: from mail-co1nam11lp2175.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Apr 2023 11:39:05 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by SA1PR03MB6451.namprd03.prod.outlook.com (2603:10b6:806:1c2::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.30; Mon, 3 Apr
 2023 15:39:02 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Mon, 3 Apr 2023
 15:39:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aaac4a50-d235-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680536348;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=TJSScj+YE23rouMUnkriZyhGcby3yNfMK2DU1gUCJ1g=;
  b=cNZzINFWJkz1LDra47OnhGPEfaRBOHUINB8+XwMNoebaR7G8GjdVp8fd
   1EzwppEFGVxHXrq1cJIfLjtBN2qp6oP0AcvxFG+uFfzzmLy2Ac4AIZjAM
   87HobsU5AjqSwyXwPrSyqKfQrU8LIrs/M78vqiyt5n0kA/afrknfJ4LXr
   8=;
X-IronPort-RemoteIP: 104.47.56.175
X-IronPort-MID: 104555290
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:NiGY1qqicGihICXy111aXZKb5DBeBmI+ZBIvgKrLsJaIsI4StFCzt
 garIBmAOv6Ja2WmKtpzPdi+808A7MTTztJmQQY5+y5nECgS9JuZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WNwUmAWP6gR5weCzyJNVvrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXACADdhmBid+p+ZGcdNB83e8FF+fIDapK7xmMzRmBZRonabbqZvyToPR/hXI3jM0IGuvCb
 c0EbzYpdA7HfxBEJlYQDtQ5gfusgX78NTZfrTp5p4JuuzSVkFM3jearaYWIEjCJbZw9ckKwv
 GXJ8n6/GhgHHNee1SCE4jSngeqncSbTAdpOSeDlrK8y6LGV7koPD0YwdUaCneGWpnLufcJ8A
 X0X3QN7+MDe82TuFLERRSaQonSJoxodUNp4CPAh5UeGza+8yxmdLngJSHhGctNOnN87Q3km2
 0GEm/vtBCdzq/uFRHSF7LCWoDiufy8PIgc/iTQsSAIE55zop9g1hxeWF9J7Svfq05vyBC36x
 C2MoG4mnbIPgMUX1qK9u1fanzaroZuPRQkwjunKYl+YAspCTNbNT+SVBZLztJ6s8K7xooG9g
 UU5
IronPort-HdrOrdr: A9a23:BLE14KEdxIFUGe1gpLqEKMeALOsnbusQ8zAXPiBKJCC9E/bo8v
 xG+c5w6faaslkssR0b9+xoW5PwI080l6QU3WB5B97LMDUO0FHCEGgI1/qA/9SPIUzDHu4279
 YbT4FOTOfeIHI/p/zciTPId+rJwrO8gd2VbTG19QYQceloAZsQkDuQEmygYypLrJEtP+tDKH
 KbjPA3wQZJKRwsH72G7mBuZZm6m+H2
X-IronPort-AV: E=Sophos;i="5.98,315,1673931600"; 
   d="scan'208";a="104555290"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q49pNWpv+VT7Gzrwq5lev7rcI8DZXndkMrhV0q0sNM5Ij4ygMI8avG3L0O1E1kg1gTBdt/rwJH1r6crwDZTKnE7pwgYdQp0d0XGL4Sk6VPhy4UK8chnoYZ2OaIKiFzxKRprL1k5N70ojrtGzXca22zKXhLCExVpWOgyJzWleQGhSMdUn2v3Vu6b8c6bdbU4vliIL3TXDFUV5oSv30JvafqJwNczCLp4YKo4Ul9u0pAgMwbHN7rLZXJjuYkL7sHBH/BglfZFcqBxTYxPeaNoXsBNrqoXeQRM+nv79k2LSqxTtulUM18A/EeVPPm/42zL3vYGfCRD/QSeXfQqy/1dl7A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eMoA0HbVGxqEhBwVsdwSPwO02be0FeKEZK3UFGRXJsY=;
 b=HYrrU+6IzxuzIphui2P1kCAl+onAl9SOhBStcOLrVBHwkKOMqSCa3ayHfaj4SMn4Tt5gENsWNtfolTzvsv5jCfMsiWuGcuHz4ZSAHjCWrzeouFI3hE+viLYbu+c/nQ9vtUgz7J3lcKyUvNocabAvm2l/B6eHzXVZ/so+tZOUBvJrW8klkPRBMJneLEXEvYYCB0jp5ISymrdSJ0n91PYcKABGS6LxnN+p8buzz8Jc9bvxYHpabq8F2sb6Y/2OCfUYljCcZ+bUmB5PZMnOp7fSnfViWe80+OgOCzdUjPFoerMhOxD0J6H1NsUS1fG+JtJu3owOV++MB8FJrlWd+tkLFQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eMoA0HbVGxqEhBwVsdwSPwO02be0FeKEZK3UFGRXJsY=;
 b=PPzhaNi2uYgHAusErCqvkdUVvut4o16rIuj+fTyVyS05Kgznm8N3h1842V92kS31tEeeey0rLAwN0rmEzg+yev4onwyLmA0yk/HwUSi1+fEttbBy1QrRHgoDGe9+TNw07+bMQzjJUGUb8y2Jy4jW4/i2i54NZDJjIFbemjRh17A=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 3 Apr 2023 17:38:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH] x86/p2m-pt: do type recalculations with p2m read lock
Message-ID: <ZCrzD7tH5WXARIvF@Air-de-Roger>
References: <20230403101449.93323-1-roger.pau@citrix.com>
 <0adcb388-d2a9-e43e-ec20-de1df51f33d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0adcb388-d2a9-e43e-ec20-de1df51f33d7@suse.com>
X-ClientProxiedBy: LO4P265CA0154.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c7::11) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 950c5119-e7c4-4b50-8036-08db34598c81
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 15:39:01.9891
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6451

On Mon, Apr 03, 2023 at 05:32:39PM +0200, Jan Beulich wrote:
> On 03.04.2023 12:14, Roger Pau Monne wrote:
> > Global p2m type recalculations (as triggered by logdirty) can create
> > so much contention on the p2m lock that simple guest operations like
> > VCPUOP_set_singleshot_timer on guests with a high amount of vCPUs (32)
> > will cease to work in a timely manner, up to the point that Linux
> > kernel versions that still use the VCPU_SSHOTTMR_future flag with the
> > singleshot timer will cease to work:
> > 
> > [   82.779470] CE: xen increased min_delta_ns to 1000000 nsec
> > [   82.793075] CE: Reprogramming failure. Giving up
> > [   82.779470] CE: Reprogramming failure. Giving up
> > [   82.821864] CE: xen increased min_delta_ns to 506250 nsec
> > [   82.821864] CE: xen increased min_delta_ns to 759375 nsec
> > [   82.821864] CE: xen increased min_delta_ns to 1000000 nsec
> > [   82.821864] CE: Reprogramming failure. Giving up
> > [   82.856256] CE: Reprogramming failure. Giving up
> > [   84.566279] CE: Reprogramming failure. Giving up
> > [   84.649493] Freezing user space processes ...
> > [  130.604032] INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
> > [  130.604032] Task dump for CPU 14:
> > [  130.604032] swapper/14      R  running task        0     0      1 0x00000000
> > [  130.604032] Call Trace:
> > [  130.604032]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> > [  130.604032]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> > [  130.604032]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> > [  130.604032]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> > [  130.604032]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> > [  130.604032]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> > [  549.654536] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
> > [  549.655463] Task dump for CPU 26:
> > [  549.655463] swapper/26      R  running task        0     0      1 0x00000000
> > [  549.655463] Call Trace:
> > [  549.655463]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> > [  549.655463]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> > [  549.655463]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> > [  549.655463]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> > [  549.655463]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> > [  549.655463]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> > [  821.888478] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
> > [  821.888596] Task dump for CPU 26:
> > [  821.888622] swapper/26      R  running task        0     0      1 0x00000000
> > [  821.888677] Call Trace:
> > [  821.888712]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> > [  821.888771]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> > [  821.888818]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> > [  821.888865]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> > [  821.888917]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> > [  821.888966]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> > 
> > This is obviously undesirable.  One way to bodge the issue would be to
> > ignore VCPU_SSHOTTMR_future, but that's a deliberate breakage of the
> > hypercall ABI.
> > 
> > Instead lower the contention in the lock by doing the recalculation
> > with the lock in read mode.  This is safe because only the flags/type
> > are changed, there's no PTE mfn change in the AMD recalculation logic.
> > The Intel (EPT) case is likely more complicated, as superpage
> > splitting for diverging EMT values must be done with the p2m lock
> > taken in write mode.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > I'm unsure whether such a modification is fully safe:  I think changing
> > the flags/type should be fine: the PTE write is performed using
> > safwrite_p2m_entry() which must be atomic (as the guest is still
> > running and accessing the page tables).  I'm slightly worried about
> > all PTE readers not using atomic accesses to do so (ie: pointer
> > returned by p2m_find_entry() should be read atomically), and code
> > assuming that a gfn type cannot change while holding the p2m lock in
> > read mode.
> 
> Coming back to this: Yes, I think reads (at least the ones in do_recalc()
> which can now be done in parallel) will need to be tightened if this is a
> road we want to follow.

There are likely a lot of reads done under the p2m read lock outside
of do_recalc() that would ideally also need to be switched to atomic
accesses?

I'm open to suggestions for other ways to get this sorted.  And that's
with a guest with 'just' 32 vCPUs; as the vCPU count goes up, the
contention on the p2m lock during recalcs/misconfigs is going to
increase massively.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 15:43:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 15:43:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517527.802979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjMKf-0001hJ-RO; Mon, 03 Apr 2023 15:42:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517527.802979; Mon, 03 Apr 2023 15:42:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjMKf-0001hC-ON; Mon, 03 Apr 2023 15:42:57 +0000
Received: by outflank-mailman (input) for mailman id 517527;
 Mon, 03 Apr 2023 15:42:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W/KQ=72=citrix.com=prvs=450b71a79=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pjMKe-0001h4-NC
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 15:42:56 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 324d790a-d236-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 17:42:55 +0200 (CEST)
Received: from mail-co1nam11lp2173.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 Apr 2023 11:42:51 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by SA1PR03MB6451.namprd03.prod.outlook.com (2603:10b6:806:1c2::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.30; Mon, 3 Apr
 2023 15:42:49 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Mon, 3 Apr 2023
 15:42:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 324d790a-d236-11ed-85db-49a42c6b2330
Date: Mon, 3 Apr 2023 17:42:42 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH] x86/p2m-pt: do type recalculations with p2m read lock
Message-ID: <ZCrz8kDM6EB8t0Le@Air-de-Roger>
References: <20230403101449.93323-1-roger.pau@citrix.com>
 <8d976d34-8a1e-95ad-3bc9-3cb704c1fae7@suse.com>
 <ZCriYs9y6JU1gat9@Air-de-Roger>
 <5fc00c1a-11ea-905e-49eb-d70caaf71041@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5fc00c1a-11ea-905e-49eb-d70caaf71041@suse.com>
X-ClientProxiedBy: LO2P265CA0207.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9e::27) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e22b7550-ec20-4828-d277-08db345a13c3
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 15:42:48.8089
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6451

On Mon, Apr 03, 2023 at 05:30:37PM +0200, Jan Beulich wrote:
> On 03.04.2023 16:27, Roger Pau Monné wrote:
> > On Mon, Apr 03, 2023 at 02:39:08PM +0200, Jan Beulich wrote:
> >> On 03.04.2023 12:14, Roger Pau Monne wrote:
> >>> --- a/xen/arch/x86/mm/p2m-pt.c
> >>> +++ b/xen/arch/x86/mm/p2m-pt.c
> >>> @@ -486,9 +486,6 @@ static int cf_check do_recalc(struct p2m_domain *p2m, unsigned long gfn)
> >>>          p2m_type_t ot, nt;
> >>>          unsigned long mask = ~0UL << (level * PAGETABLE_ORDER);
> >>>  
> >>> -        if ( !valid_recalc(l1, e) )
> >>> -            P2M_DEBUG("bogus recalc leaf at d%d:%lx:%u\n",
> >>> -                      p2m->domain->domain_id, gfn, level);
> >>>          ot = p2m_flags_to_type(l1e_get_flags(e));
> >>>          nt = p2m_recalc_type_range(true, ot, p2m, gfn & mask, gfn | ~mask);
> >>>          if ( nt != ot )
> >>
> >> I'm afraid I neither understand why you make this change, nor why you
> >> then leave the other use of valid_recalc() in place.
> > 
> > The message can be bogus if we allow concurrent do_recalc(), and I
> > did miss the previous one.
> > 
> > I missed the one at the top.  Originally I wanted to send the RFC with
> > just changing the lock to read mode, but then I thought I might as
> > well fix that (now bogus) print message.
> > 
> >>> @@ -538,9 +535,9 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa)
> >>>       */
> >>>      ASSERT(!altp2m_active(current->domain));
> >>>  
> >>> -    p2m_lock(p2m);
> >>> +    p2m_read_lock(p2m);
> >>>      rc = do_recalc(p2m, PFN_DOWN(gpa));
> >>> -    p2m_unlock(p2m);
> >>> +    p2m_read_unlock(p2m);
> >>>  
> >>>      return rc;
> >>>  }
> >>
> >> How can this be safe, when do_recalc() involves p2m_next_level(), which
> >> may install new (intermediate) page tables?
> > 
> > Oh, great, I didn't realize it was capable of doing so; it's more
> > hidden than in the EPT case.  Seems like this will only happen if a
> > superpage needs to be split because a lower-order frame is being used
> > as an ioreq server page.
> > 
> > Do you think it would be safe to attempt the recalc with the read
> > lock only and fall back to the write lock if there's a need to call
> > p2m_next_level()?
> 
> Yes, that ought to be okay.
> 
> > Do you agree it might be possible to do the recalc with just the read
> > lock if it's updating of PTE type / recalc flags only?
> 
> Technically this looks to be possible, yes. The question is whether we
> do ourselves much good by introducing such a special case, permitting
> a certain kind of write with the lock only held in read mode. At the
> latest when we find a second (more or less similar) use case, things
> are likely to become difficult.

Yes, it's not very nice.  I'm open to suggestions for other ways to
remove some of the contention.  If we go down this route, it would
need to be clearly documented.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 16:07:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 16:07:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517531.802988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjMiT-0004pI-Nk; Mon, 03 Apr 2023 16:07:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517531.802988; Mon, 03 Apr 2023 16:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjMiT-0004pB-Kh; Mon, 03 Apr 2023 16:07:33 +0000
Received: by outflank-mailman (input) for mailman id 517531;
 Mon, 03 Apr 2023 16:07:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dypz=72=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjMiS-0004p3-Nb
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 16:07:32 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20627.outbound.protection.outlook.com
 [2a01:111:f400:fe12::627])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a26a9d1e-d239-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 18:07:31 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB8057.eurprd04.prod.outlook.com (2603:10a6:10:1f1::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Mon, 3 Apr
 2023 16:07:28 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.033; Mon, 3 Apr 2023
 16:07:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a26a9d1e-d239-11ed-85db-49a42c6b2330
Message-ID: <a247059e-27a8-0569-93af-de03e842e341@suse.com>
Date: Mon, 3 Apr 2023 18:07:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [RFC PATCH] x86/p2m-pt: do type recalculations with p2m read lock
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230403101449.93323-1-roger.pau@citrix.com>
 <0adcb388-d2a9-e43e-ec20-de1df51f33d7@suse.com>
 <ZCrzD7tH5WXARIvF@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZCrzD7tH5WXARIvF@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0111.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB8057:EE_
X-MS-Office365-Filtering-Correlation-Id: be3aaf3e-6f7d-4270-f90c-08db345d85b5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: be3aaf3e-6f7d-4270-f90c-08db345d85b5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Apr 2023 16:07:28.4645
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: O+MRNcqNC+Qvg3LpF4nKwOIfuzA2PPrvVKQ45l8ougXdhIek+Ca8Hffuw6pn0unRZWHg4XCLNogPbdJ8W+zXYQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB8057

On 03.04.2023 17:38, Roger Pau Monné wrote:
> On Mon, Apr 03, 2023 at 05:32:39PM +0200, Jan Beulich wrote:
>> On 03.04.2023 12:14, Roger Pau Monne wrote:
>>> Global p2m type recalculations (as triggered by logdirty) can create
>>> so much contention on the p2m lock that simple guest operations like
>>> VCPUOP_set_singleshot_timer on guests with a large number of vCPUs (32)
>>> will cease to work in a timely manner, up to the point that Linux
>>> kernel versions that still use the VCPU_SSHOTTMR_future flag with the
>>> singleshot timer will cease to work:
>>>
>>> [   82.779470] CE: xen increased min_delta_ns to 1000000 nsec
>>> [   82.793075] CE: Reprogramming failure. Giving up
>>> [   82.779470] CE: Reprogramming failure. Giving up
>>> [   82.821864] CE: xen increased min_delta_ns to 506250 nsec
>>> [   82.821864] CE: xen increased min_delta_ns to 759375 nsec
>>> [   82.821864] CE: xen increased min_delta_ns to 1000000 nsec
>>> [   82.821864] CE: Reprogramming failure. Giving up
>>> [   82.856256] CE: Reprogramming failure. Giving up
>>> [   84.566279] CE: Reprogramming failure. Giving up
>>> [   84.649493] Freezing user space processes ...
>>> [  130.604032] INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
>>> [  130.604032] Task dump for CPU 14:
>>> [  130.604032] swapper/14      R  running task        0     0      1 0x00000000
>>> [  130.604032] Call Trace:
>>> [  130.604032]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
>>> [  130.604032]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
>>> [  130.604032]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
>>> [  130.604032]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
>>> [  130.604032]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
>>> [  130.604032]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
>>> [  549.654536] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
>>> [  549.655463] Task dump for CPU 26:
>>> [  549.655463] swapper/26      R  running task        0     0      1 0x00000000
>>> [  549.655463] Call Trace:
>>> [  549.655463]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
>>> [  549.655463]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
>>> [  549.655463]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
>>> [  549.655463]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
>>> [  549.655463]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
>>> [  549.655463]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
>>> [  821.888478] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
>>> [  821.888596] Task dump for CPU 26:
>>> [  821.888622] swapper/26      R  running task        0     0      1 0x00000000
>>> [  821.888677] Call Trace:
>>> [  821.888712]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
>>> [  821.888771]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
>>> [  821.888818]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
>>> [  821.888865]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
>>> [  821.888917]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
>>> [  821.888966]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
>>>
>>> This is obviously undesirable.  One way to bodge the issue would be to
>>> ignore VCPU_SSHOTTMR_future, but that's a deliberate breakage of the
>>> hypercall ABI.
>>>
>>> Instead lower the contention in the lock by doing the recalculation
>>> with the lock in read mode.  This is safe because only the flags/type
>>> are changed, there's no PTE mfn change in the AMD recalculation logic.
>>> The Intel (EPT) case is likely more complicated, as superpage
>>> splitting for diverging EMT values must be done with the p2m lock
>>> taken in write mode.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>> I'm unsure whether such modification is fully safe:  I think changing
>>> the flags/type should be fine: the PTE write is performed using
>>> safwrite_p2m_entry() which must be atomic (as the guest is still
>>> running and accessing the page tables).  I'm slightly worried about
>>> all PTE readers not using atomic accesses to do so (ie: pointer
>>> returned by p2m_find_entry() should be read atomically), and code
>>> assuming that a gfn type cannot change while holding the p2m lock in
>>> read mode.
>>
>> Coming back to this: Yes, I think reads (at least the ones in do_recalc()
>> which can now be done in parallel) will need to be tightened if this is a
>> road we want to follow.
> 
> There are likely a lot of reads under the p2m read lock outside of
> do_recalc() that will ideally need to be switched to use atomic
> accesses also?

Possibly, perhaps even likely. I specifically said "at least". But ones
clearly on write-locked paths could probably be left alone.

> I'm open to suggestions for other ways to get this sorted.  And that's
> a guest with 'just' 32 vCPUs; as we go up, the contention on the p2m
> lock during recalcs/misconfigs is going to increase massively.

I'm afraid I don't have any really good idea, but I'm wondering whether
trylock with (almost?) immediate exit back to guest might make this any
better. At least the guest could then take interrupts before the insn
is retried. Another thought in this direction would be to have a variant
of trylock which "senses" how contended the lock is, to spin if it's the
first one to wait, but exit (fail) otherwise.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 16:28:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 16:28:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517535.802999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjN2j-0007KT-GJ; Mon, 03 Apr 2023 16:28:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517535.802999; Mon, 03 Apr 2023 16:28:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjN2j-0007KM-Dc; Mon, 03 Apr 2023 16:28:29 +0000
Received: by outflank-mailman (input) for mailman id 517535;
 Mon, 03 Apr 2023 16:28:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L93W=72=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjN2h-0007KG-O2
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 16:28:27 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8f30543a-d23c-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 18:28:26 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B70561FE5E;
 Mon,  3 Apr 2023 16:28:25 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7AA9D13416;
 Mon,  3 Apr 2023 16:28:25 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id eBFRHKn+KmSoXgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 03 Apr 2023 16:28:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f30543a-d23c-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680539305; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=BkZFoojUM+8fz39pkxsEFi2atMuRoJfIlL2aw5KVwkg=;
	b=hdAKyfRvOTyqR6Ux0Ll7IvLjr+WjyD/MIyYC3S90Jzwl7MO+MsAQyTRMP2AUwmsipxhTZR
	nbQ9V+PeJ1+C3W6xO+jbfALsVJMaB8K4gi5pAj7IDkUnwwbaXBNMWaquRTjORpebjBDHwY
	gC79h9DBvyLaUjb0cYAT4zJ8JUyXqEE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 0/2] xen: some CONFIG_DEBUG_INFO changes
Date: Mon,  3 Apr 2023 18:28:21 +0200
Message-Id: <20230403162823.30681-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Enabling crash dump analysis of the hypervisor requires that the
hypervisor has been built with CONFIG_DEBUG_INFO enabled. Today this
requires either CONFIG_DEBUG or CONFIG_EXPERT to be set, neither of
which is security supported.

This small series changes that in order to allow security supported
Xen builds with the capability to do crash dump analysis via the
"crash" tool.

Note that, due to problems with test machines, proper support for
EFI-booted systems hasn't been verified, so this will likely need some
more work.

Changes in V2:
- comments addressed

Juergen Gross (2):
  xen: move CONFIG_DEBUG_INFO out of EXPERT section
  xen: update CONFIG_DEBUG_INFO help text

 xen/Kconfig.debug | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 16:28:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 16:28:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517536.803009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjN2o-0007a8-OS; Mon, 03 Apr 2023 16:28:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517536.803009; Mon, 03 Apr 2023 16:28:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjN2o-0007Zz-LY; Mon, 03 Apr 2023 16:28:34 +0000
Received: by outflank-mailman (input) for mailman id 517536;
 Mon, 03 Apr 2023 16:28:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L93W=72=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjN2o-0007Zc-6r
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 16:28:34 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9272d6a0-d23c-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 18:28:31 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 64B821FE73;
 Mon,  3 Apr 2023 16:28:31 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 2652A13416;
 Mon,  3 Apr 2023 16:28:31 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id S5DsB6/+KmS7XgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 03 Apr 2023 16:28:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9272d6a0-d23c-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680539311; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=51wZc2vktLAGkJYuxQvGjNEZLX6vduj0/qi4N/4LX8o=;
	b=sRrt0UANSYHctURzJeHSOgf3dlCzRjDQ92tgQODJD6alTJAPtn1UxexgDFEfzy2T8gmn/8
	ruCf7Gnuto4HknA4fzPaDAGuYOlM4fTyuYT728oRKQCTumjkruhJW9/pnLKwjVbmv0iSKY
	d5TR+k960biF2bUT+SQZOsQmWyrAR6s=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/2] xen: move CONFIG_DEBUG_INFO out of EXPERT section
Date: Mon,  3 Apr 2023 18:28:22 +0200
Message-Id: <20230403162823.30681-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230403162823.30681-1-jgross@suse.com>
References: <20230403162823.30681-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to support hypervisor analysis of crash dumps, xen-syms needs
to contain debug_info. It should be allowed to configure the hypervisor
to be built with CONFIG_DEBUG_INFO in non-debug builds without having
to enable EXPERT.

Using a rather old gcc (7.5) it was verified that code generation
doesn't really differ with CONFIG_DEBUG_INFO on or off, as long as
CONFIG_DEBUG is not set (the only observed differences were slightly
different symbol addresses, verified via "objdump -d", resulting from
the different config.gz in the binary). The choice of an old gcc
version was based on the assumption that newer gcc won't regress in
this regard.

So move CONFIG_DEBUG_INFO out of the section guarded by EXPERT.

It should be mentioned that there have been reports that linking
xen.efi might take considerably longer with CONFIG_DEBUG_INFO selected
when using newer binutils.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- expanded commit message (Jan Beulich)
---
 xen/Kconfig.debug | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/Kconfig.debug b/xen/Kconfig.debug
index fad3050d4f..e0d773d347 100644
--- a/xen/Kconfig.debug
+++ b/xen/Kconfig.debug
@@ -11,6 +11,13 @@ config DEBUG
 
 	  You probably want to say 'N' here.
 
+config DEBUG_INFO
+	bool "Compile Xen with debug info"
+	default DEBUG
+	help
+	  If you say Y here the resulting Xen will include debugging info
+	  resulting in a larger binary image.
+
 if DEBUG || EXPERT
 
 config CRASH_DEBUG
@@ -28,13 +35,6 @@ config GDBSX
 	  If you want to enable support for debugging guests from dom0 via
 	  gdbsx then say Y.
 
-config DEBUG_INFO
-	bool "Compile Xen with debug info"
-	default y
-	---help---
-	  If you say Y here the resulting Xen will include debugging info
-	  resulting in a larger binary image.
-
 config FRAME_POINTER
 	bool "Compile Xen with frame pointers"
 	default DEBUG
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 16:28:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 16:28:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517537.803019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjN2t-0007rf-0i; Mon, 03 Apr 2023 16:28:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517537.803019; Mon, 03 Apr 2023 16:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjN2s-0007rY-Tt; Mon, 03 Apr 2023 16:28:38 +0000
Received: by outflank-mailman (input) for mailman id 517537;
 Mon, 03 Apr 2023 16:28:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L93W=72=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjN2r-0007KG-So
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 16:28:37 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 95cbb07e-d23c-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 18:28:37 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 163CF21EEC;
 Mon,  3 Apr 2023 16:28:37 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C8A5B13416;
 Mon,  3 Apr 2023 16:28:36 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id SzKZL7T+KmTtXgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 03 Apr 2023 16:28:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95cbb07e-d23c-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680539317; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=221NmFTRmK1PFAHtiBP2d1/nR5dmCSk7Sk74pCThNJQ=;
	b=YegnYhORMyDnRciHjmFHW0hridB1GwSWIr7sp9x6vZe6gEmxyWp/09cWdROCKDz5gIFDQf
	xhWiKsrzneX7RzVcXgd3jMji8mHElRvxRsHn4ub8UYk6asn2WwrY+hSCRibOX5+iFi1SPg
	mDwiWinrWYxP6pURfCjjz8hvGe+aFwk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/2] xen: update CONFIG_DEBUG_INFO help text
Date: Mon,  3 Apr 2023 18:28:23 +0200
Message-Id: <20230403162823.30681-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230403162823.30681-1-jgross@suse.com>
References: <20230403162823.30681-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update the help text of the CONFIG_DEBUG_INFO option to be a little
bit more specific.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- expand help text, especially mentioning INSTALL_EFI_STRIP
  (Jan Beulich)
---
 xen/Kconfig.debug | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/Kconfig.debug b/xen/Kconfig.debug
index e0d773d347..e6e609f813 100644
--- a/xen/Kconfig.debug
+++ b/xen/Kconfig.debug
@@ -15,8 +15,14 @@ config DEBUG_INFO
 	bool "Compile Xen with debug info"
 	default DEBUG
 	help
-	  If you say Y here the resulting Xen will include debugging info
-	  resulting in a larger binary image.
+	  Say Y here if you want to build Xen with debug information. This
+	  information is needed e.g. for doing crash dump analysis of the
+	  hypervisor via the "crash" tool.
+	  Saying Y will increase the size of the xen-syms and xen.efi
+	  binaries. In case the space on the EFI boot partition is rather
+	  limited, you may want to make use of the INSTALL_EFI_STRIP make
+	  variable when building the hypervisor, in order to strip xen.efi
+	  before installing it to the EFI partition.
 
 if DEBUG || EXPERT
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517549.803038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwp-0004Xh-UQ; Mon, 03 Apr 2023 18:30:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517549.803038; Mon, 03 Apr 2023 18:30:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwp-0004Xa-Rk; Mon, 03 Apr 2023 18:30:31 +0000
Received: by outflank-mailman (input) for mailman id 517549;
 Mon, 03 Apr 2023 18:30:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOwo-0004HH-5Z
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:30 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9b015c4b-d24d-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 20:30:28 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-519-3isl-chxN3OIRuBPnd0Pfg-1; Mon, 03 Apr 2023 14:30:21 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 71BBF385F379;
 Mon,  3 Apr 2023 18:30:14 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id D2F9E400F4F;
 Mon,  3 Apr 2023 18:30:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b015c4b-d24d-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546627;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=e0HFVVv61t7j/DUMbeicZfUXtZsQPz0KQrxVQvmIXfw=;
	b=VM3hW7NwOHLGEw7z2TtAkaM9y+rDacCrFrDh2TCrbnubtXCyVGSV17liiQ5mgbFdza1Jic
	Q2QqKnofNnVRFGtZkh9OpKE80WPzwBqW5+qlNwVg2u9Abup3t4lZXGrUypOlnGm8quH8+g
	yHlDjsb/8OFN1+Rwu7LI7yWeXQhMif4=
X-MC-Unique: 3isl-chxN3OIRuBPnd0Pfg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Zhengui Li <lizhengui@huawei.com>
Subject: [PATCH 02/13] virtio-scsi: stop using aio_disable_external() during unplug
Date: Mon,  3 Apr 2023 14:29:53 -0400
Message-Id: <20230403183004.347205-3-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

This patch is part of an effort to remove the aio_disable_external()
API because it does not fit in a multi-queue block layer world where
many AioContexts may be submitting requests to the same disk.

The SCSI emulation code is already in good shape to stop using
aio_disable_external(). It was only used by commit 9c5aad84da1c
("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
disk") to ensure that virtio_scsi_hotunplug() works while the guest
driver is submitting I/O.

Ensure virtio_scsi_hotunplug() is safe as follows:

1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
   device_set_realized() calls qatomic_set(&dev->realized, false) so
   that future scsi_device_get() calls return NULL because they exclude
   SCSIDevices with realized=false.

   That means virtio-scsi will reject new I/O requests to this
   SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
   virtio_scsi_hotunplug() is still executing. We are protected against
   new requests!

2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
   that in-flight requests are cancelled synchronously. This ensures
   that no in-flight requests remain once qdev_simple_device_unplug_cb()
   returns.

Thanks to these two conditions we don't need aio_disable_external()
anymore.

Cc: Zhengui Li <lizhengui@huawei.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/scsi-disk.c   | 1 +
 hw/scsi/virtio-scsi.c | 3 ---
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 97c9b1c8cd..e01bd84541 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2522,6 +2522,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
 
 static void scsi_unrealize(SCSIDevice *dev)
 {
+    scsi_device_purge_requests(dev, SENSE_CODE(RESET));
     del_boot_device_lchs(&dev->qdev, NULL);
 }
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 000961446c..a02f9233ec 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1061,11 +1061,8 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
-    AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
-    aio_enable_external(ctx);
 
     if (s->ctx) {
         virtio_scsi_acquire(s);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517550.803048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwr-0004mc-7G; Mon, 03 Apr 2023 18:30:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517550.803048; Mon, 03 Apr 2023 18:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwr-0004mT-4H; Mon, 03 Apr 2023 18:30:33 +0000
Received: by outflank-mailman (input) for mailman id 517550;
 Mon, 03 Apr 2023 18:30:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOwp-0004HH-5d
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:31 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9baee2a3-d24d-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 20:30:29 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-604-2uGakXqQNEinBtqwH2bvkA-1; Mon, 03 Apr 2023 14:30:25 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 07C581C27DBC;
 Mon,  3 Apr 2023 18:30:17 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5F35E2166B26;
 Mon,  3 Apr 2023 18:30:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9baee2a3-d24d-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546628;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cpef4kT6gqf7n/rR0/ynntTmCBLqt8QRSEC7iM5/ics=;
	b=ffR9Y3K7u6+01Mx2Q47gj+8wq/GyXU0w05hTOcdsfp3Aewhmmd9SpVIs+UPuDgm4qXvbdl
	Uq4PldBdZbSCy/MljL4kEmO0k5iA9PXy6yku4nOslN8OZWoSycHJfzbSD77K1t3or/GZiK
	CiJi195aeO5BZvOeY0rVXdthKlTf1dg=
X-MC-Unique: 2uGakXqQNEinBtqwH2bvkA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 03/13] block/export: only acquire AioContext once for vhost_user_server_stop()
Date: Mon,  3 Apr 2023 14:29:54 -0400
Message-Id: <20230403183004.347205-4-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

vhost_user_server_stop() uses AIO_WAIT_WHILE(). AIO_WAIT_WHILE()
requires that the AioContext be acquired only once.

Since blk_exp_request_shutdown() already acquires the AioContext, it
must not be acquired again in vhost_user_server_stop().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 40f36ea214..5b6216069c 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -346,10 +346,9 @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
     aio_context_release(server->ctx);
 }
 
+/* server->ctx acquired by caller */
 void vhost_user_server_stop(VuServer *server)
 {
-    aio_context_acquire(server->ctx);
-
     qemu_bh_delete(server->restart_listener_bh);
     server->restart_listener_bh = NULL;
 
@@ -366,8 +365,6 @@ void vhost_user_server_stop(VuServer *server)
         AIO_WAIT_WHILE(server->ctx, server->co_trip);
     }
 
-    aio_context_release(server->ctx);
-
     if (server->listener) {
         qio_net_listener_disconnect(server->listener);
         object_unref(OBJECT(server->listener));
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517553.803073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwu-0005Ny-B9; Mon, 03 Apr 2023 18:30:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517553.803073; Mon, 03 Apr 2023 18:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwu-0005LQ-2c; Mon, 03 Apr 2023 18:30:36 +0000
Received: by outflank-mailman (input) for mailman id 517553;
 Mon, 03 Apr 2023 18:30:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOws-0004HH-9S
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:34 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9df5a3e9-d24d-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 20:30:33 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-127-rxZjFOypOgG8cMOE2p_TKw-1; Mon, 03 Apr 2023 14:30:27 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8F1D68C447E;
 Mon,  3 Apr 2023 18:30:19 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id CC2A9483EC1;
 Mon,  3 Apr 2023 18:30:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9df5a3e9-d24d-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546632;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TKqwyQ4Hx/raK7iVoVjGcYzVd4PPhwE8qnDjfh3DtNg=;
	b=URu9BWdvsR30tJn1o9ILEciRQJDN6aQR/LQ34COhjAr41NM9w0luWbVDE2Fopal95lT5a4
	dPKaZHwJCbahzydaAA3tNYRtPCYgHjJU8Az79cykjloJEmwPhp8WgC+KBmmSupMbTKqC7r
	0TyJqYPIL+P9rEqRVhsS8ZCNDavhVWI=
X-MC-Unique: rxZjFOypOgG8cMOE2p_TKw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 04/13] util/vhost-user-server: rename refcount to in_flight counter
Date: Mon,  3 Apr 2023 14:29:55 -0400
Message-Id: <20230403183004.347205-5-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

The VuServer object has a refcount field and ref/unref APIs. The name is
confusing because it's actually an in-flight request counter instead of
a refcount.

Normally a refcount destroys the object upon reaching zero. The VuServer
counter is instead used to wake up the vhost-user coroutine when there
are no more in-flight requests.

Avoid confusion by renaming refcount and ref/unref to in_flight and
inc/dec.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/vhost-user-server.h     |  6 +++---
 block/export/vhost-user-blk-server.c | 11 +++++++----
 util/vhost-user-server.c             | 14 +++++++-------
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index 25c72433ca..bc0ac9ddb6 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -41,7 +41,7 @@ typedef struct {
     const VuDevIface *vu_iface;
 
     /* Protected by ctx lock */
-    unsigned int refcount;
+    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -60,8 +60,8 @@ bool vhost_user_server_start(VuServer *server,
 
 void vhost_user_server_stop(VuServer *server);
 
-void vhost_user_server_ref(VuServer *server);
-void vhost_user_server_unref(VuServer *server);
+void vhost_user_server_inc_in_flight(VuServer *server);
+void vhost_user_server_dec_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 3409d9e02e..e93f2ed6b4 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -49,7 +49,10 @@ static void vu_blk_req_complete(VuBlkReq *req, size_t in_len)
     free(req);
 }
 
-/* Called with server refcount increased, must decrease before returning */
+/*
+ * Called with server in_flight counter increased, must decrease before
+ * returning.
+ */
 static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
 {
     VuBlkReq *req = opaque;
@@ -67,12 +70,12 @@ static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
                                     in_num, out_num);
     if (in_len < 0) {
         free(req);
-        vhost_user_server_unref(server);
+        vhost_user_server_dec_in_flight(server);
         return;
     }
 
     vu_blk_req_complete(req, in_len);
-    vhost_user_server_unref(server);
+    vhost_user_server_dec_in_flight(server);
 }
 
 static void vu_blk_process_vq(VuDev *vu_dev, int idx)
@@ -94,7 +97,7 @@ static void vu_blk_process_vq(VuDev *vu_dev, int idx)
         Coroutine *co =
             qemu_coroutine_create(vu_blk_virtio_process_req, req);
 
-        vhost_user_server_ref(server);
+        vhost_user_server_inc_in_flight(server);
         qemu_coroutine_enter(co);
     }
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 5b6216069c..1622f8cfb3 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -75,16 +75,16 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
     error_report("vu_panic: %s", buf);
 }
 
-void vhost_user_server_ref(VuServer *server)
+void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->refcount++;
+    server->in_flight++;
 }
 
-void vhost_user_server_unref(VuServer *server)
+void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->refcount--;
-    if (server->wait_idle && !server->refcount) {
+    server->in_flight--;
+    if (server->wait_idle && !server->in_flight) {
         aio_co_wake(server->co_trip);
     }
 }
@@ -192,13 +192,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->refcount) {
+    if (server->in_flight) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
-    assert(server->refcount == 0);
+    assert(server->in_flight == 0);
 
     vu_deinit(vu_dev);
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517548.803029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwo-0004HZ-MV; Mon, 03 Apr 2023 18:30:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517548.803029; Mon, 03 Apr 2023 18:30:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwo-0004HS-Jq; Mon, 03 Apr 2023 18:30:30 +0000
Received: by outflank-mailman (input) for mailman id 517548;
 Mon, 03 Apr 2023 18:30:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOwn-0004HH-HE
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:29 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9addabbf-d24d-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 20:30:28 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-654-gJweyXgPPBCNaCJfjwIa3g-1; Mon, 03 Apr 2023 14:30:25 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0DAFE3C20EE3;
 Mon,  3 Apr 2023 18:30:22 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 63861440D8;
 Mon,  3 Apr 2023 18:30:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9addabbf-d24d-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546627;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+bLg6uTzP7Qgt6lQ3xKdY10AY3Nt3xLke1uGL1u9ShU=;
	b=Pq2O9MN9GXcdwkjsOEXC2ygoZmqJr/kEU4W7AK0q6j9jVTcqTNpCNN+GkCurWt9ZTXcZDr
	cWsXfXcmXuSV2JbzMfZGjFywQYe5D67mOdFgj/SCkcgvvMP2r1WwRHh09k/JSLX69u8JT+
	coX+sYvJTKC3zXKhJeSkaOVEOAzsNp0=
X-MC-Unique: gJweyXgPPBCNaCJfjwIa3g-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 05/13] block/export: wait for vhost-user-blk requests when draining
Date: Mon,  3 Apr 2023 14:29:56 -0400
Message-Id: <20230403183004.347205-6-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5

Each vhost-user-blk request runs in a coroutine. When the BlockBackend
enters a drained section we need to enter a quiescent state. Currently
any in-flight requests race with bdrv_drained_begin() because it is
unaware of vhost-user-blk requests.

When blk_co_preadv/pwritev()/etc returns, it wakes the
bdrv_drained_begin() thread, but vhost-user-blk request processing has
not yet finished. The request coroutine continues executing while the
main loop thread thinks it is in a drained section.

One example where this is unsafe is for blk_set_aio_context() where
bdrv_drained_begin() is called before .aio_context_detached() and
.aio_context_attach(). If request coroutines are still running after
bdrv_drained_begin(), then the AioContext could change underneath them
and they race with new requests processed in the new AioContext. This
could lead to virtqueue corruption, for example.

(This example is theoretical; I came across it while reading the code
and have not tried to reproduce it.)

It's easy to make bdrv_drained_begin() wait for in-flight requests: add
a .drained_poll() callback that checks the VuServer's in-flight counter.
VuServer just needs an API that returns true when there are requests in
flight. The in-flight counter needs to be atomic.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/vhost-user-server.h     |  4 +++-
 block/export/vhost-user-blk-server.c | 19 +++++++++++++++++++
 util/vhost-user-server.c             | 14 ++++++++++----
 3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index bc0ac9ddb6..b1c1cda886 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -40,8 +40,9 @@ typedef struct {
     int max_queues;
     const VuDevIface *vu_iface;
 
+    unsigned int in_flight; /* atomic */
+
     /* Protected by ctx lock */
-    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -62,6 +63,7 @@ void vhost_user_server_stop(VuServer *server);
 
 void vhost_user_server_inc_in_flight(VuServer *server);
 void vhost_user_server_dec_in_flight(VuServer *server);
+bool vhost_user_server_has_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index e93f2ed6b4..dbf5207162 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -254,6 +254,22 @@ static void vu_blk_exp_request_shutdown(BlockExport *exp)
     vhost_user_server_stop(&vexp->vu_server);
 }
 
+/*
+ * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
+ *
+ * Called with vexp->export.ctx acquired.
+ */
+static bool vu_blk_drained_poll(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    return vhost_user_server_has_in_flight(&vexp->vu_server);
+}
+
+static const BlockDevOps vu_blk_dev_ops = {
+    .drained_poll  = vu_blk_drained_poll,
+};
+
 static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              Error **errp)
 {
@@ -292,6 +308,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
                              logical_block_size, num_queues);
 
+    blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vexp);
 
@@ -299,6 +316,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                                  num_queues, &vu_blk_iface, errp)) {
         blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
                                         blk_aio_detach, vexp);
+        blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
     }
@@ -312,6 +330,7 @@ static void vu_blk_exp_delete(BlockExport *exp)
 
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vexp);
+    blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
 
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 1622f8cfb3..2e6b640050 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
 void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->in_flight++;
+    qatomic_inc(&server->in_flight);
 }
 
 void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->in_flight--;
-    if (server->wait_idle && !server->in_flight) {
-        aio_co_wake(server->co_trip);
+    if (qatomic_fetch_dec(&server->in_flight) == 1) {
+        if (server->wait_idle) {
+            aio_co_wake(server->co_trip);
+        }
     }
 }
 
+bool vhost_user_server_has_in_flight(VuServer *server)
+{
+    return qatomic_load_acquire(&server->in_flight) > 0;
+}
+
 static bool coroutine_fn
 vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517552.803069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwt-0005JH-TH; Mon, 03 Apr 2023 18:30:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517552.803069; Mon, 03 Apr 2023 18:30:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwt-0005J6-PZ; Mon, 03 Apr 2023 18:30:35 +0000
Received: by outflank-mailman (input) for mailman id 517552;
 Mon, 03 Apr 2023 18:30:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOwr-0004HH-VE
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:34 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9db74a78-d24d-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 20:30:33 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-281-cN00YhrQM7uB9EfAu9sSvQ-1; Mon, 03 Apr 2023 14:30:26 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 416D13C10EEF;
 Mon,  3 Apr 2023 18:30:12 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 751A6C15BA0;
 Mon,  3 Apr 2023 18:30:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9db74a78-d24d-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546631;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oL+HrJxL6ytOleAzvmZ4SJ3Zeezd7vV75aui9s0hUdo=;
	b=dI+yjjuDwKKViSZTb1wuclFWGfaAIY4B8fBgPsiEvcqMluIfiN33JcNCj4A5KlkR1WTkz6
	vj61EvXAzYx80j80rRhCLyGfHn8Se3lfYfZD51uWRrbQzZRFUUapSK/N67fQDa9wTwq0O3
	fum+ALo6iIoQJmuSCHUgxKt8wCcQc9Q=
X-MC-Unique: cN00YhrQM7uB9EfAu9sSvQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 01/13] virtio-scsi: avoid race between unplug and transport event
Date: Mon,  3 Apr 2023 14:29:52 -0400
Message-Id: <20230403183004.347205-2-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

Only report a transport reset event to the guest after the SCSIDevice
has been unrealized by qdev_simple_device_unplug_cb().

qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
to false so that scsi_device_find/get() no longer see it.

scsi_target_emulate_report_luns() also needs to be updated to filter out
SCSIDevices that are unrealized.

These changes ensure that the guest driver does not see the SCSIDevice
that is being unplugged, even when it responds very quickly to the
transport reset event.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/scsi-bus.c    |  3 ++-
 hw/scsi/virtio-scsi.c | 18 +++++++++---------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index c97176110c..f9bd064833 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -487,7 +487,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
             DeviceState *qdev = kid->child;
             SCSIDevice *dev = SCSI_DEVICE(qdev);
 
-            if (dev->channel == channel && dev->id == id && dev->lun != 0) {
+            if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
+                qatomic_load_acquire(&dev->qdev.realized)) {
                 store_lun(tmp, dev->lun);
                 g_byte_array_append(buf, tmp, 8);
                 len += 8;
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..000961446c 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1063,15 +1063,6 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     SCSIDevice *sd = SCSI_DEVICE(dev);
     AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
-        virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, sd,
-                               VIRTIO_SCSI_T_TRANSPORT_RESET,
-                               VIRTIO_SCSI_EVT_RESET_REMOVED);
-        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
-        virtio_scsi_release(s);
-    }
-
     aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
     aio_enable_external(ctx);
@@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
         virtio_scsi_release(s);
     }
+
+    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
+        virtio_scsi_acquire(s);
+        virtio_scsi_push_event(s, sd,
+                               VIRTIO_SCSI_T_TRANSPORT_RESET,
+                               VIRTIO_SCSI_EVT_RESET_REMOVED);
+        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
+        virtio_scsi_release(s);
+    }
 }
 
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517554.803079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwu-0005Wh-Rc; Mon, 03 Apr 2023 18:30:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517554.803079; Mon, 03 Apr 2023 18:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwu-0005Vk-K5; Mon, 03 Apr 2023 18:30:36 +0000
Received: by outflank-mailman (input) for mailman id 517554;
 Mon, 03 Apr 2023 18:30:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOwt-00058E-6y
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:35 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9cf0bae2-d24d-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 20:30:32 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-274-Q2KxycS7PhakuOgluqVi7g-1; Mon, 03 Apr 2023 14:30:26 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4CDFC88FA4C;
 Mon,  3 Apr 2023 18:30:09 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 16D77140EBF4;
 Mon,  3 Apr 2023 18:30:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9cf0bae2-d24d-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546630;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=CjLb/A1nTO72rleg3tgXtfILFOYKYMPMoC5OW2A0i4g=;
	b=iNAIOtkg78eEKcmpYmR0bZGz8Jp4V+2PT9d/T93TVVpQuQfWrBcjWDFicoh/J7sND0V+LD
	XEqoahedqHHkFAQXdF5L9jCjERMESwkprs71k+g2QL1AioEy7X2Vbrn2jcL0RCa7TEZzQn
	LoHFi1UDp6kiqr7Bf699zpcmZ/xR+Rk=
X-MC-Unique: Q2KxycS7PhakuOgluqVi7g-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 00/13] block: remove aio_disable_external() API
Date: Mon,  3 Apr 2023 14:29:51 -0400
Message-Id: <20230403183004.347205-1-stefanha@redhat.com>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

The aio_disable_external() API temporarily suspends file descriptor monitoring
in the event loop. The block layer uses this to prevent new I/O requests from
being submitted by the guest and elsewhere between bdrv_drained_begin() and
bdrv_drained_end().

While the block layer still needs to prevent new I/O requests in drained
sections, the aio_disable_external() API can be replaced with
.drained_begin/end/poll() callbacks that have been added to BdrvChildClass and
BlockDevOps.

This newer .drained_begin/end/poll() approach is attractive because it works
without specifying a specific AioContext. The block layer is moving towards
multi-queue, which means multiple AioContexts may be processing I/O
simultaneously.

The aio_disable_external() API was always somewhat hacky. It suspends all file
descriptors that were registered with is_external=true, even if they have
nothing to do with the BlockDriverState graph nodes that are being drained.
It's better to solve a block layer problem in the block layer than to have an
odd event loop API solution.

That covers the motivation for this change, now on to the specifics of this
series:

While it would be nice if a single conceptual approach could be applied to all
is_external=true file descriptors, I ended up looking at callers on a
case-by-case basis. There are two general ways I migrated code away from
is_external=true:

1. Block exports are typically best off unregistering fds in .drained_begin()
   and registering them again in .drained_end(). The .drained_poll() function
   waits for in-flight requests to finish using a reference counter.

2. Emulated storage controllers like virtio-blk and virtio-scsi are a little
   simpler. They can rely on BlockBackend's request queuing feature during
   drain. Guest I/O request coroutines are suspended in a drained section and
   resume when the drained section ends.

The first two virtio-scsi patches were already sent as a separate series. I
have included them here because they are necessary to fully remove
aio_disable_external().

Based-on: 087bc644b7634436ca9d52fe58ba9234e2bef026 (kevin/block-next)

Stefan Hajnoczi (13):
  virtio-scsi: avoid race between unplug and transport event
  virtio-scsi: stop using aio_disable_external() during unplug
  block/export: only acquire AioContext once for
    vhost_user_server_stop()
  util/vhost-user-server: rename refcount to in_flight counter
  block/export: wait for vhost-user-blk requests when draining
  block/export: stop using is_external in vhost-user-blk server
  virtio: do not set is_external=true on host notifiers
  hw/xen: do not use aio_set_fd_handler(is_external=true) in
    xen_xenstore
  hw/xen: do not set is_external=true on evtchn fds
  block/export: rewrite vduse-blk drain code
  block/fuse: take AioContext lock around blk_exp_ref/unref()
  block/fuse: do not set is_external=true on FUSE fd
  aio: remove aio_disable_external() API

 include/block/aio.h                  |  55 -----------
 include/qemu/vhost-user-server.h     |   8 +-
 util/aio-posix.h                     |   1 -
 block.c                              |   7 --
 block/blkio.c                        |  15 +--
 block/curl.c                         |  10 +-
 block/export/fuse.c                  |  62 ++++++++++++-
 block/export/vduse-blk.c             | 132 +++++++++++++++++++--------
 block/export/vhost-user-blk-server.c |  73 +++++++++------
 block/io.c                           |   2 -
 block/io_uring.c                     |   4 +-
 block/iscsi.c                        |   3 +-
 block/linux-aio.c                    |   4 +-
 block/nfs.c                          |   5 +-
 block/nvme.c                         |   8 +-
 block/ssh.c                          |   4 +-
 block/win32-aio.c                    |   6 +-
 hw/i386/kvm/xen_xenstore.c           |   2 +-
 hw/scsi/scsi-bus.c                   |   3 +-
 hw/scsi/scsi-disk.c                  |   1 +
 hw/scsi/virtio-scsi.c                |  21 ++---
 hw/virtio/virtio.c                   |   6 +-
 hw/xen/xen-bus.c                     |   6 +-
 io/channel-command.c                 |   6 +-
 io/channel-file.c                    |   3 +-
 io/channel-socket.c                  |   3 +-
 migration/rdma.c                     |  16 ++--
 tests/unit/test-aio.c                |  27 +-----
 tests/unit/test-fdmon-epoll.c        |  73 ---------------
 util/aio-posix.c                     |  20 +---
 util/aio-win32.c                     |   8 +-
 util/async.c                         |   3 +-
 util/fdmon-epoll.c                   |  10 --
 util/fdmon-io_uring.c                |   8 +-
 util/fdmon-poll.c                    |   3 +-
 util/main-loop.c                     |   7 +-
 util/qemu-coroutine-io.c             |   7 +-
 util/vhost-user-server.c             |  38 ++++----
 tests/unit/meson.build               |   3 -
 39 files changed, 298 insertions(+), 375 deletions(-)
 delete mode 100644 tests/unit/test-fdmon-epoll.c

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517551.803055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwr-0004s2-M3; Mon, 03 Apr 2023 18:30:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517551.803055; Mon, 03 Apr 2023 18:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwr-0004qd-Fl; Mon, 03 Apr 2023 18:30:33 +0000
Received: by outflank-mailman (input) for mailman id 517551;
 Mon, 03 Apr 2023 18:30:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOwq-0004HH-5i
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:32 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9c1693d8-d24d-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 20:30:30 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-270-yDm9BV4sNRCoPTzB8fuUCw-1; Mon, 03 Apr 2023 14:30:25 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A606E885620;
 Mon,  3 Apr 2023 18:30:24 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id ECAF31121314;
 Mon,  3 Apr 2023 18:30:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c1693d8-d24d-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546629;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rd8h2e2Z0uhjua/p0PDxA8jGvK3/DbHaZUIqG5W/DmA=;
	b=TlI8CNolDE3g8bsImw9FNt8x40JPFGkUBySN9BQP+3GjyzqjIIxRGy+AfzPyW0ldWpqr4V
	kZLSEAa487q8i718P8/eG/s1EoWGN1VTHrH++d1e5o9jIW/g/nHzbvB8y0X2oIGIgmQwk+
	UkICGXaHU2W1Kb0v/6RCdJKtrFRAf8Y=
X-MC-Unique: yDm9BV4sNRCoPTzB8fuUCw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 06/13] block/export: stop using is_external in vhost-user-blk server
Date: Mon,  3 Apr 2023 14:29:57 -0400
Message-Id: <20230403183004.347205-7-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3

vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.

Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used, since multiple AioContexts may be processing I/O, not
just one.

Switch to BlockDevOps->drained_begin/end() callbacks.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
 util/vhost-user-server.c             | 10 +++----
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index dbf5207162..6e1bc196fb 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -207,22 +207,6 @@ static const VuDevIface vu_blk_iface = {
     .process_msg           = vu_blk_process_msg,
 };
 
-static void blk_aio_attached(AioContext *ctx, void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vexp->export.ctx = ctx;
-    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
-}
-
-static void blk_aio_detach(void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vhost_user_server_detach_aio_context(&vexp->vu_server);
-    vexp->export.ctx = NULL;
-}
-
 static void
 vu_blk_initialize_config(BlockDriverState *bs,
                          struct virtio_blk_config *config,
@@ -254,6 +238,25 @@ static void vu_blk_exp_request_shutdown(BlockExport *exp)
     vhost_user_server_stop(&vexp->vu_server);
 }
 
+/* Called with vexp->export.ctx acquired */
+static void vu_blk_drained_begin(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
+}
+
+/* Called with vexp->export.blk AioContext acquired */
+static void vu_blk_drained_end(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    vexp->export.ctx = blk_get_aio_context(vexp->export.blk);
+
+    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
+}
+
 /*
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
  *
@@ -267,6 +270,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end   = vu_blk_drained_end,
     .drained_poll  = vu_blk_drained_poll,
 };
 
@@ -309,13 +314,9 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              logical_block_size, num_queues);
 
     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
-    blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                 vexp);
 
     if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
                                  num_queues, &vu_blk_iface, errp)) {
-        blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
-                                        blk_aio_detach, vexp);
         blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
@@ -328,8 +329,6 @@ static void vu_blk_exp_delete(BlockExport *exp)
 {
     VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
 
-    blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                    vexp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 2e6b640050..332aea9306 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                        NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517555.803099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwx-0006AV-7f; Mon, 03 Apr 2023 18:30:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517555.803099; Mon, 03 Apr 2023 18:30:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwx-0006A0-46; Mon, 03 Apr 2023 18:30:39 +0000
Received: by outflank-mailman (input) for mailman id 517555;
 Mon, 03 Apr 2023 18:30:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOwv-00058E-F1
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:37 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f6e1b61-d24d-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 20:30:35 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-453-9CKa4rlENESBndwEgTyXrw-1; Mon, 03 Apr 2023 14:30:31 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id AC85B185A790;
 Mon,  3 Apr 2023 18:30:29 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1ED14400F4F;
 Mon,  3 Apr 2023 18:30:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f6e1b61-d24d-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546634;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=02kXX9aX6Yr2x3VYWWh4xRNvlIxdZtHtMoEfskS16XA=;
	b=jG/URGlzduMp2atXz4KaNq3IKZO6oJIHO0i5UZihTsZ1MocxkdAM98rtBCvP7YZGG6WByY
	GluIqgXkZH/9e/fZj2iM1JdrVfoiobkwrMrhMTdqDgAAt0Q+c34ldRkCdmeb6CCCAJruct
	0V59YdLPn84w4MwihA1UoDBLiAAhgBc=
X-MC-Unique: 9CKa4rlENESBndwEgTyXrw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 08/13] hw/xen: do not use aio_set_fd_handler(is_external=true) in xen_xenstore
Date: Mon,  3 Apr 2023 14:29:59 -0400
Message-Id: <20230403183004.347205-9-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

There is no need to suspend activity between aio_disable_external() and
aio_enable_external(); that mechanism is mainly used for the block layer's
drain operation.

This is part of ongoing work to remove the aio_disable_external() API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/i386/kvm/xen_xenstore.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 900679af8a..6e81bc8791 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), true,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517556.803109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwz-0006VT-GF; Mon, 03 Apr 2023 18:30:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517556.803109; Mon, 03 Apr 2023 18:30:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwz-0006VK-Cm; Mon, 03 Apr 2023 18:30:41 +0000
Received: by outflank-mailman (input) for mailman id 517556;
 Mon, 03 Apr 2023 18:30:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOwx-00058E-3X
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:39 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a068c5f4-d24d-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 20:30:37 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-439-74AM9wfyN9eiYjF1jls1ng-1; Mon, 03 Apr 2023 14:30:33 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id EE94B811E7C;
 Mon,  3 Apr 2023 18:30:31 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 521171121314;
 Mon,  3 Apr 2023 18:30:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a068c5f4-d24d-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546636;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=w78ixlnTJkfmit1c5uhGwpfg3e2qbfo6w6b+6TW1iVA=;
	b=D8g8DNdYotY3VuhWIN5vTd5CmEn80Ry48uLp2xBWgJ+aguPaY/WH8O/z7PWXXhydT3zCDm
	ZgDNTHxlvXdxOeh9rjIagscMlbLbDKot1VRaMi+JENdwF2VzC0/CShZeyCjooamKM5YXMm
	9R6V1BPLX2Fp3UVDHCidsLd4YD8gV8Y=
X-MC-Unique: 74AM9wfyN9eiYjF1jls1ng-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 09/13] hw/xen: do not set is_external=true on evtchn fds
Date: Mon,  3 Apr 2023 14:30:00 -0400
Message-Id: <20230403183004.347205-10-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3

is_external=true suspends fd handlers between aio_disable_external() and
aio_enable_external(). The block layer's drain operation uses this
mechanism to prevent new I/O from sneaking in between
bdrv_drained_begin() and bdrv_drained_end().

The xen-block device actually works fine with is_external=false because
BlockBackend requests are already queued between bdrv_drained_begin()
and bdrv_drained_end(). Since the Xen ring size is finite, request
queuing will stop once the ring is full and memory usage is bounded.
After bdrv_drained_end() the BlockBackend requests will resume and
xen-block's processing will continue.
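The bounded-memory argument above rests on the ring being a fixed-size
structure: once it is full, the frontend simply cannot submit more work. A
minimal self-contained sketch of that backpressure property (illustrative
only, not the real Xen ring API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical fixed-size ring in the spirit of a Xen I/O ring. Once the
 * ring is full, ring_push() fails and the producer must wait, so memory
 * consumed by queued requests is bounded by RING_SIZE. */
#define RING_SIZE 8

typedef struct {
    int slots[RING_SIZE];
    size_t prod, cons; /* free-running indices, masked on access */
} Ring;

static bool ring_full(const Ring *r)  { return r->prod - r->cons == RING_SIZE; }
static bool ring_empty(const Ring *r) { return r->prod == r->cons; }

static bool ring_push(Ring *r, int req)
{
    if (ring_full(r)) {
        return false; /* backpressure: caller must retry later */
    }
    r->slots[r->prod++ % RING_SIZE] = req;
    return true;
}

static bool ring_pop(Ring *r, int *req)
{
    if (ring_empty(r)) {
        return false;
    }
    *req = r->slots[r->cons++ % RING_SIZE];
    return true;
}
```

With the consumer paused (as during a drained section), pushes succeed only
until the ring fills; draining one entry frees exactly one slot.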

This is part of ongoing work to remove the aio_disable_external() API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/xen/xen-bus.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c59850b1de..c4fd26abe1 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,11 +842,11 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                        xen_device_event, NULL, xen_device_poll, NULL, channel);
 }
 
@@ -920,7 +920,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517557.803115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwz-0006a7-W0; Mon, 03 Apr 2023 18:30:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517557.803115; Mon, 03 Apr 2023 18:30:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOwz-0006ZY-Om; Mon, 03 Apr 2023 18:30:41 +0000
Received: by outflank-mailman (input) for mailman id 517557;
 Mon, 03 Apr 2023 18:30:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOwx-00058E-FX
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:39 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a0980774-d24d-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 20:30:37 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-37-LlfGX1TgPQ-M7RE1UMTAgw-1; Mon, 03 Apr 2023 14:30:34 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 408E3101A551;
 Mon,  3 Apr 2023 18:30:27 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 9B806140EBF4;
 Mon,  3 Apr 2023 18:30:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0980774-d24d-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546636;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hEwtCA8s9JuIEhTUsGQuJCNn3dK7Smgq43ityTlAMA0=;
	b=Xcp4WtcqbfNKMOakkTqb4qmt7Tm+ZsvjLsYs4l1qKl45bB3w/KSqrfXNUWwmk0tydT95ez
	Ltj8xIHGvxr0HOvGhFmmFFc1PbDlC/PZCR7ZjmVwU4UszcBPcMRVWP1b7eJ0/iF2NB1l7e
	zSUfRYz8Svn4VgimrgfHMFDLwZY6H/E=
X-MC-Unique: LlfGX1TgPQ-M7RE1UMTAgw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 07/13] virtio: do not set is_external=true on host notifiers
Date: Mon,  3 Apr 2023 14:29:58 -0400
Message-Id: <20230403183004.347205-8-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

Host notifiers trigger virtqueue processing. There are critical sections
when new I/O requests must not be submitted because they would cause
interference.

In the past this was solved using aio_set_event_notifier()'s
is_external=true argument, which disables fd monitoring between
aio_disable/enable_external() calls. This API is not multi-queue block
layer friendly because it requires knowledge of the specific AioContext.
In a multi-queue block layer world any thread can submit I/O and we
don't know which AioContexts are currently involved.

virtio-blk and virtio-scsi are the only users that depend on
is_external=true. Both rely on the block layer, where we can take
advantage of the existing request queuing behavior that happens during
drained sections. The block layer's drained sections are the only user
of aio_disable_external().

After this patch the virtqueues will be processed during drained
sections, but submitted I/O requests will be queued in the BlockBackend.
Queued requests are resumed when the drained section ends. Therefore,
the BlockBackend is still quiesced during drained sections but we no
longer rely on is_external=true to achieve this.

Note that virtqueues have a finite size, so queuing requests does not
lead to unbounded memory usage.
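The queuing behaviour relied on here can be sketched in a few lines. This is
an illustrative model, not the real BlockBackend API: a quiesce flag parks
submissions while drained, and drained_end() replays them.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical miniature of BlockBackend request queuing during drain.
 * Requests submitted inside a drained section are parked (bounded, like a
 * virtqueue) instead of reaching the backend; drained_end() replays them. */
#define MAX_QUEUED 16

typedef struct {
    bool quiesced;
    int queued[MAX_QUEUED];
    size_t nqueued;
    int completed;   /* requests that actually reached the backend */
} MiniBackend;

static void backend_process(MiniBackend *b, int req)
{
    (void)req;
    b->completed++;
}

static void backend_submit(MiniBackend *b, int req)
{
    if (b->quiesced) {
        assert(b->nqueued < MAX_QUEUED); /* bounded by queue depth */
        b->queued[b->nqueued++] = req;
        return;
    }
    backend_process(b, req);
}

static void drained_begin(MiniBackend *b) { b->quiesced = true; }

static void drained_end(MiniBackend *b)
{
    b->quiesced = false;
    for (size_t i = 0; i < b->nqueued; i++) {
        backend_process(b, b->queued[i]); /* resume queued requests */
    }
    b->nqueued = 0;
}
```

The key point mirrored from the commit message: the fd handler can stay
enabled during drain, because submissions are parked rather than executed.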

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/virtio/virtio.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 98c4819fcc..dcd7aabb4e 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
     /* Test and clear notifier before after disabling event,
      * in case poll callback didn't have time to run. */
     virtio_queue_host_notifier_read(&vq->host_notifier);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517558.803130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOx4-0007G8-Fj; Mon, 03 Apr 2023 18:30:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517558.803130; Mon, 03 Apr 2023 18:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOx4-0007Fn-6f; Mon, 03 Apr 2023 18:30:46 +0000
Received: by outflank-mailman (input) for mailman id 517558;
 Mon, 03 Apr 2023 18:30:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOx2-00058E-7u
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:44 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a36de5a6-d24d-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 20:30:42 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-614-fKluM2IgOGOWbZExTvDE9Q-1; Mon, 03 Apr 2023 14:30:35 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B9501101A553;
 Mon,  3 Apr 2023 18:30:34 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1B0242166B26;
 Mon,  3 Apr 2023 18:30:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a36de5a6-d24d-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546641;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Kvgpibm7uG8By4vHBOsqesPFxvQEinDBgjjk4WE12qY=;
	b=cTWL+vRTyf9kGd0m6cmuoYc28Vx4UD7RwyLLvHzchBpGTT6VkoKaph4sD/+9WbPPGqreUG
	HCgnsRbV04m6H4h/FQpiR1YRVXVZL8NIicSIlgSDu1NpgjqfkBkya7EPsSs7l9zif+b+or
	tik5MuKYVOOWWM18/47bXu0Dt7FWMqQ=
X-MC-Unique: fKluM2IgOGOWbZExTvDE9Q-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 10/13] block/export: rewrite vduse-blk drain code
Date: Mon,  3 Apr 2023 14:30:01 -0400
Message-Id: <20230403183004.347205-11-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

vduse_blk_detach_ctx() waits for in-flight requests using
AIO_WAIT_WHILE(). This is not allowed according to a comment in
bdrv_set_aio_context_commit():

  /*
   * Take the old AioContex when detaching it from bs.
   * At this point, new_context lock is already acquired, and we are now
   * also taking old_context. This is safe as long as bdrv_detach_aio_context
   * does not call AIO_POLL_WHILE().
   */

Use this opportunity to rewrite the drain code in vduse-blk:

- Use the BlockExport refcount so that vduse_blk_exp_delete() is only
  called when there are no more requests in flight.

- Implement .drained_poll() so in-flight request coroutines are stopped
  by the time .bdrv_detach_aio_context() is called.

- Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
  .bdrv_detach_aio_context() constraint violation. It's no longer
  needed due to the previous changes.

- Always handle the VDUSE file descriptor, even in drained sections. The
  VDUSE file descriptor doesn't submit I/O, so it's safe to handle it in
  drained sections. This ensures that the VDUSE kernel code gets a fast
  response.

- Suspend virtqueue fd handlers in .drained_begin() and resume them in
  .drained_end(). This eliminates the need for the
  aio_set_fd_handler(is_external=true) flag, which is being removed from
  QEMU.

This is a long list, but splitting it into individual commits would
probably lead to git bisect failures because the changes are all related.
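The first two bullets combine into one pattern: an atomic in-flight counter
where the first request pins the export with a reference and the last
completion releases it, while .drained_poll() simply reports whether the
counter is nonzero. A minimal sketch under those assumptions (MiniExport and
the helper names are illustrative, not the real BlockExport API):

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical model of the in-flight counter: 0 -> 1 takes a reference on
 * the export, 1 -> 0 drops it, so deletion can only run once no requests
 * remain in flight. */
typedef struct {
    atomic_uint inflight;
    atomic_uint refcount;   /* export lifetime */
} MiniExport;

static void exp_ref(MiniExport *e)   { atomic_fetch_add(&e->refcount, 1); }
static void exp_unref(MiniExport *e) { atomic_fetch_sub(&e->refcount, 1); }

static void inflight_inc(MiniExport *e)
{
    if (atomic_fetch_add(&e->inflight, 1) == 0) {
        exp_ref(e); /* first in-flight request: pin the export */
    }
}

static void inflight_dec(MiniExport *e)
{
    if (atomic_fetch_sub(&e->inflight, 1) == 1) {
        exp_unref(e); /* last completion: allow deletion */
    }
}

/* .drained_poll() equivalent: drain completes once this returns 0 */
static unsigned inflight_pending(MiniExport *e)
{
    return atomic_load(&e->inflight);
}
```

Note the reference is taken once per burst of activity, not once per
request, which keeps the refcount traffic off the hot path.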

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
 1 file changed, 93 insertions(+), 39 deletions(-)

diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index f7ae44e3ce..35dc8fcf45 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
     VduseDev *dev;
     uint16_t num_queues;
     char *recon_file;
-    unsigned int inflight;
+    unsigned int inflight; /* atomic */
+    bool vqs_started;
 } VduseBlkExport;
 
 typedef struct VduseBlkReq {
@@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
 
 static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
-    vblk_exp->inflight++;
+    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
+        /* Prevent export from being deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_ref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
+    }
 }
 
 static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
 {
-    if (--vblk_exp->inflight == 0) {
+    if (qatomic_fetch_dec(&vblk_exp->inflight) == 1) {
+        /* Wake AIO_WAIT_WHILE() */
         aio_wait_kick();
+
+        /* Now the export can be deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_unref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -124,8 +136,12 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
 
+    if (!vblk_exp->vqs_started) {
+        return; /* vduse_blk_drained_end() will start vqs later */
+    }
+
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -133,9 +149,14 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
+    int fd = vduse_queue_get_fd(vq);
 
-    aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, NULL, NULL, NULL, NULL, NULL);
+    if (fd < 0) {
+        return;
+    }
+
+    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+                       NULL, NULL, NULL, NULL, NULL);
 }
 
 static const VduseOps vduse_blk_ops = {
@@ -152,42 +173,19 @@ static void on_vduse_dev_kick(void *opaque)
 
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
-    int i;
-
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, on_vduse_dev_kick, NULL, NULL, NULL,
+                       false, on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd, true,
-                           on_vduse_vq_kick, NULL, NULL, NULL, vq);
-    }
+    /* Virtqueues are handled by vduse_blk_drained_end() */
 }
 
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
-    int i;
-
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd,
-                           true, NULL, NULL, NULL, NULL, NULL);
-    }
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, NULL, NULL, NULL, NULL, NULL);
+                       false, NULL, NULL, NULL, NULL, NULL);
 
-    AIO_WAIT_WHILE(vblk_exp->export.ctx, vblk_exp->inflight > 0);
+    /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
 
 
@@ -220,8 +218,55 @@ static void vduse_blk_resize(void *opaque)
                             (char *)&config.capacity);
 }
 
+static void vduse_blk_stop_virtqueues(VduseBlkExport *vblk_exp)
+{
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_disable_queue(vblk_exp->dev, vq);
+    }
+
+    vblk_exp->vqs_started = false;
+}
+
+static void vduse_blk_start_virtqueues(VduseBlkExport *vblk_exp)
+{
+    vblk_exp->vqs_started = true;
+
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_enable_queue(vblk_exp->dev, vq);
+    }
+}
+
+static void vduse_blk_drained_begin(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_stop_virtqueues(vblk_exp);
+}
+
+static void vduse_blk_drained_end(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_start_virtqueues(vblk_exp);
+}
+
+static bool vduse_blk_drained_poll(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    return qatomic_read(&vblk_exp->inflight) > 0;
+}
+
 static const BlockDevOps vduse_block_ops = {
-    .resize_cb = vduse_blk_resize,
+    .resize_cb     = vduse_blk_resize,
+    .drained_begin = vduse_blk_drained_begin,
+    .drained_end   = vduse_blk_drained_end,
+    .drained_poll  = vduse_blk_drained_poll,
 };
 
 static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
@@ -268,6 +313,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vblk_exp->handler.serial = g_strdup(vblk_opts->serial ?: "");
     vblk_exp->handler.logical_block_size = logical_block_size;
     vblk_exp->handler.writable = opts->writable;
+    vblk_exp->vqs_started = true;
 
     config.capacity =
             cpu_to_le64(blk_getlength(exp->blk) >> VIRTIO_BLK_SECTOR_BITS);
@@ -322,14 +368,20 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), true,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vblk_exp);
-
     blk_set_dev_ops(exp->blk, &vduse_block_ops, exp);
 
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * virtqueue fd handlers. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->blk, true);
+
     return 0;
 err:
     vduse_dev_destroy(vblk_exp->dev);
@@ -344,6 +396,9 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
     int ret;
 
+    assert(qatomic_read(&vblk_exp->inflight) == 0);
+
+    vduse_blk_detach_ctx(vblk_exp);
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vblk_exp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
@@ -355,13 +410,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     g_free(vblk_exp->handler.serial);
 }
 
+/* Called with exp->ctx acquired */
 static void vduse_blk_exp_request_shutdown(BlockExport *exp)
 {
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
 
-    aio_context_acquire(vblk_exp->export.ctx);
-    vduse_blk_detach_ctx(vblk_exp);
-    aio_context_acquire(vblk_exp->export.ctx);
+    vduse_blk_stop_virtqueues(vblk_exp);
 }
 
 const BlockExportDriver blk_exp_vduse_blk = {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517559.803138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOx6-0007j5-7m; Mon, 03 Apr 2023 18:30:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517559.803138; Mon, 03 Apr 2023 18:30:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOx6-0007hW-0C; Mon, 03 Apr 2023 18:30:48 +0000
Received: by outflank-mailman (input) for mailman id 517559;
 Mon, 03 Apr 2023 18:30:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOx4-0004HH-KI
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:46 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a580eb64-d24d-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 20:30:46 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-86-LcBnC9OFNAS0HwwcyCiR0g-1; Mon, 03 Apr 2023 14:30:41 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A154885A5B1;
 Mon,  3 Apr 2023 18:30:39 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 062F21121314;
 Mon,  3 Apr 2023 18:30:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a580eb64-d24d-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546644;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9pwn7OV2gNoHAp8haSXqR0tkaO5uu/ou2YhRF+j7lsw=;
	b=ZNlbdW6cE/ITRX9LLxw3u14JWZGqJgNuahR70l+HYnq3NDX7c4jb3ehsLSIxBovKHoi1l7
	dER6Akem132n0C2X3TtxRgOWSB1mmc6ErOI5tMhLivYCLNp4hM/QJnAeZ8IhRfsXCgE70B
	FFXOFexhjwfdYk9OaoXJeJvCKYesiew=
X-MC-Unique: LcBnC9OFNAS0HwwcyCiR0g-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 12/13] block/fuse: do not set is_external=true on FUSE fd
Date: Mon,  3 Apr 2023 14:30:03 -0400
Message-Id: <20230403183004.347205-13-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3

This is part of ongoing work to remove the aio_disable_external() API.

Use BlockDevOps .drained_begin/end/poll() instead of
aio_set_fd_handler(is_external=true).

As a side effect, the FUSE export now follows AioContext changes like the
other export types.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/fuse.c | 58 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 56 insertions(+), 2 deletions(-)

diff --git a/block/export/fuse.c b/block/export/fuse.c
index 18394f9e07..83bccf046b 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -50,6 +50,7 @@ typedef struct FuseExport {
 
     struct fuse_session *fuse_session;
     struct fuse_buf fuse_buf;
+    unsigned int in_flight; /* atomic */
     bool mounted, fd_handler_set_up;
 
     char *mountpoint;
@@ -78,6 +79,42 @@ static void read_from_fuse_export(void *opaque);
 static bool is_regular_file(const char *path, Error **errp);
 
 
+static void fuse_export_drained_begin(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       NULL, NULL, NULL, NULL, NULL);
+    exp->fd_handler_set_up = false;
+}
+
+static void fuse_export_drained_end(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    exp->common.ctx = blk_get_aio_context(exp->common.blk);
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       read_from_fuse_export, NULL, NULL, NULL, exp);
+    exp->fd_handler_set_up = true;
+}
+
+static bool fuse_export_drained_poll(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    return qatomic_read(&exp->in_flight) > 0;
+}
+
+static const BlockDevOps fuse_export_blk_dev_ops = {
+    .drained_begin = fuse_export_drained_begin,
+    .drained_end   = fuse_export_drained_end,
+    .drained_poll  = fuse_export_drained_poll,
+};
+
 static int fuse_export_create(BlockExport *blk_exp,
                               BlockExportOptions *blk_exp_args,
                               Error **errp)
@@ -101,6 +138,15 @@ static int fuse_export_create(BlockExport *blk_exp,
         }
     }
 
+    blk_set_dev_ops(exp->common.blk, &fuse_export_blk_dev_ops, exp);
+
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * the FUSE fd handler. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->common.blk, true);
+
     init_exports_table();
 
     /*
@@ -224,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), true,
+                       fuse_session_fd(exp->fuse_session), false,
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -248,6 +294,8 @@ static void read_from_fuse_export(void *opaque)
     blk_exp_ref(&exp->common);
     aio_context_release(exp->common.ctx);
 
+    qatomic_inc(&exp->in_flight);
+
     do {
         ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
     } while (ret == -EINTR);
@@ -258,6 +306,10 @@ static void read_from_fuse_export(void *opaque)
     fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);
 
 out:
+    if (qatomic_fetch_dec(&exp->in_flight) == 1) {
+        aio_wait_kick(); /* wake AIO_WAIT_WHILE() */
+    }
+
     aio_context_acquire(exp->common.ctx);
     blk_exp_unref(&exp->common);
     aio_context_release(exp->common.ctx);
@@ -272,7 +324,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
         if (exp->fd_handler_set_up) {
             aio_set_fd_handler(exp->common.ctx,
-                               fuse_session_fd(exp->fuse_session), true,
+                               fuse_session_fd(exp->fuse_session), false,
                                NULL, NULL, NULL, NULL, NULL);
             exp->fd_handler_set_up = false;
         }
@@ -291,6 +343,8 @@ static void fuse_export_delete(BlockExport *blk_exp)
 {
     FuseExport *exp = container_of(blk_exp, FuseExport, common);
 
+    blk_set_dev_ops(exp->common.blk, NULL, NULL);
+
     if (exp->fuse_session) {
         if (exp->mounted) {
             fuse_session_unmount(exp->fuse_session);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:30:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517560.803149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOxA-0008VW-MY; Mon, 03 Apr 2023 18:30:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517560.803149; Mon, 03 Apr 2023 18:30:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjOxA-0008Sh-IR; Mon, 03 Apr 2023 18:30:52 +0000
Received: by outflank-mailman (input) for mailman id 517560;
 Mon, 03 Apr 2023 18:30:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjOx8-0004HH-IU
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:30:50 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a7013ff6-d24d-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 20:30:48 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-436-vCApPF78Ozqd8fbf0y-d6A-1; Mon, 03 Apr 2023 14:30:44 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 25B603C20EE6;
 Mon,  3 Apr 2023 18:30:43 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id D84B140C83A9;
 Mon,  3 Apr 2023 18:30:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7013ff6-d24d-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546647;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LN11HOmb2+zF0uNZ22EvLdu7DtwYKh88e9e0E+eLcmM=;
	b=gL14HbSADW2gwQ0ncwOvH5wYgDWFBpRpxmyqosZ/+uVGw/TZE28V0W2Jn2YIf2PY76nALb
	EKVCVkxs4/ED5laBfU9NgCHe7ys10tvVFA++F/J8rkVXr4GaN9LHljVViYQjheFKuRFT5J
	X5FdIokvqMciwcgfAbYGc7KZr8UJyWs=
X-MC-Unique: vCApPF78Ozqd8fbf0y-d6A-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 13/13] aio: remove aio_disable_external() API
Date: Mon,  3 Apr 2023 14:30:04 -0400
Message-Id: <20230403183004.347205-14-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

All callers now pass is_external=false to aio_set_fd_handler() and
aio_set_event_notifier(). The aio_disable_external() API that
temporarily disables fd handlers that were registered with is_external=true
is therefore dead code.

Remove aio_disable_external(), aio_enable_external(), and the
is_external arguments to aio_set_fd_handler() and
aio_set_event_notifier().

The entire test-fdmon-epoll test is removed because its sole purpose was
testing aio_disable_external().

Parts of this patch were generated using the following coccinelle
(https://coccinelle.lip6.fr/) semantic patch:

  @@
  expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
  @@
  - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
  + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)

  @@
  expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
  @@
  - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
  + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)
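A semantic patch like the first rule above would be applied roughly as follows (illustrative sketch; the file name is hypothetical, and running spatch requires Coccinelle to be installed):

```shell
# Save the first semantic patch rule to a file.
cat > remove-is-external.cocci <<'EOF'
@@
expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
@@
- aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
+ aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)
EOF

# Apply it in place across the source tree (needs Coccinelle's spatch):
# spatch --sp-file remove-is-external.cocci --in-place --dir .
```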

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/aio.h           | 55 --------------------------
 util/aio-posix.h              |  1 -
 block.c                       |  7 ----
 block/blkio.c                 | 15 +++----
 block/curl.c                  | 10 ++---
 block/export/fuse.c           |  8 ++--
 block/export/vduse-blk.c      | 10 ++---
 block/io.c                    |  2 -
 block/io_uring.c              |  4 +-
 block/iscsi.c                 |  3 +-
 block/linux-aio.c             |  4 +-
 block/nfs.c                   |  5 +--
 block/nvme.c                  |  8 ++--
 block/ssh.c                   |  4 +-
 block/win32-aio.c             |  6 +--
 hw/i386/kvm/xen_xenstore.c    |  2 +-
 hw/virtio/virtio.c            |  6 +--
 hw/xen/xen-bus.c              |  6 +--
 io/channel-command.c          |  6 +--
 io/channel-file.c             |  3 +-
 io/channel-socket.c           |  3 +-
 migration/rdma.c              | 16 ++++----
 tests/unit/test-aio.c         | 27 +------------
 tests/unit/test-fdmon-epoll.c | 73 -----------------------------------
 util/aio-posix.c              | 20 +++-------
 util/aio-win32.c              |  8 +---
 util/async.c                  |  3 +-
 util/fdmon-epoll.c            | 10 -----
 util/fdmon-io_uring.c         |  8 +---
 util/fdmon-poll.c             |  3 +-
 util/main-loop.c              |  7 ++--
 util/qemu-coroutine-io.c      |  7 ++--
 util/vhost-user-server.c      | 11 +++---
 tests/unit/meson.build        |  3 --
 34 files changed, 75 insertions(+), 289 deletions(-)
 delete mode 100644 tests/unit/test-fdmon-epoll.c

diff --git a/include/block/aio.h b/include/block/aio.h
index e267d918fd..d4ce01ea08 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -467,7 +467,6 @@ bool aio_poll(AioContext *ctx, bool blocking);
  */
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -483,7 +482,6 @@ void aio_set_fd_handler(AioContext *ctx,
  */
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            bool is_external,
                             EventNotifierHandler *io_read,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready);
@@ -612,59 +610,6 @@ static inline void aio_timer_init(AioContext *ctx,
  */
 int64_t aio_compute_timeout(AioContext *ctx);
 
-/**
- * aio_disable_external:
- * @ctx: the aio context
- *
- * Disable the further processing of external clients.
- */
-static inline void aio_disable_external(AioContext *ctx)
-{
-    qatomic_inc(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_enable_external:
- * @ctx: the aio context
- *
- * Enable the processing of external clients.
- */
-static inline void aio_enable_external(AioContext *ctx)
-{
-    int old;
-
-    old = qatomic_fetch_dec(&ctx->external_disable_cnt);
-    assert(old > 0);
-    if (old == 1) {
-        /* Kick event loop so it re-arms file descriptors */
-        aio_notify(ctx);
-    }
-}
-
-/**
- * aio_external_disabled:
- * @ctx: the aio context
- *
- * Return true if the external clients are disabled.
- */
-static inline bool aio_external_disabled(AioContext *ctx)
-{
-    return qatomic_read(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_node_check:
- * @ctx: the aio context
- * @is_external: Whether or not the checked node is an external event source.
- *
- * Check if the node's is_external flag is okay to be polled by the ctx at this
- * moment. True means green light.
- */
-static inline bool aio_node_check(AioContext *ctx, bool is_external)
-{
-    return !is_external || !qatomic_read(&ctx->external_disable_cnt);
-}
-
 /**
  * aio_co_schedule:
  * @ctx: the aio context
diff --git a/util/aio-posix.h b/util/aio-posix.h
index 80b927c7f4..4264c518be 100644
--- a/util/aio-posix.h
+++ b/util/aio-posix.h
@@ -38,7 +38,6 @@ struct AioHandler {
 #endif
     int64_t poll_idle_timeout; /* when to stop userspace polling */
     bool poll_ready; /* has polling detected an event? */
-    bool is_external;
 };
 
 /* Add a handler to a ready list */
diff --git a/block.c b/block.c
index a79297f99b..e9625ffeee 100644
--- a/block.c
+++ b/block.c
@@ -7254,9 +7254,6 @@ static void bdrv_detach_aio_context(BlockDriverState *bs)
         bs->drv->bdrv_detach_aio_context(bs);
     }
 
-    if (bs->quiesce_counter) {
-        aio_enable_external(bs->aio_context);
-    }
     bs->aio_context = NULL;
 }
 
@@ -7266,10 +7263,6 @@ static void bdrv_attach_aio_context(BlockDriverState *bs,
     BdrvAioNotifier *ban, *ban_tmp;
     GLOBAL_STATE_CODE();
 
-    if (bs->quiesce_counter) {
-        aio_disable_external(new_context);
-    }
-
     bs->aio_context = new_context;
 
     if (bs->drv && bs->drv->bdrv_attach_aio_context) {
diff --git a/block/blkio.c b/block/blkio.c
index 0cdc99a729..72117fa005 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -306,23 +306,18 @@ static void blkio_attach_aio_context(BlockDriverState *bs,
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(new_context,
-                       s->completion_fd,
-                       false,
-                       blkio_completion_fd_read,
-                       NULL,
+    aio_set_fd_handler(new_context, s->completion_fd,
+                       blkio_completion_fd_read, NULL,
                        blkio_completion_fd_poll,
-                       blkio_completion_fd_poll_ready,
-                       bs);
+                       blkio_completion_fd_poll_ready, bs);
 }
 
 static void blkio_detach_aio_context(BlockDriverState *bs)
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(bdrv_get_aio_context(bs),
-                       s->completion_fd,
-                       false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(bdrv_get_aio_context(bs), s->completion_fd, NULL, NULL,
+                       NULL, NULL, NULL);
 }
 
 /* Call with s->blkio_lock held to submit I/O after enqueuing a new request */
diff --git a/block/curl.c b/block/curl.c
index 8bb39a134e..0fc42d03d7 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -132,7 +132,7 @@ static gboolean curl_drop_socket(void *key, void *value, void *opaque)
     CURLSocket *socket = value;
     BDRVCURLState *s = socket->s;
 
-    aio_set_fd_handler(s->aio_context, socket->fd, false,
+    aio_set_fd_handler(s->aio_context, socket->fd,
                        NULL, NULL, NULL, NULL, NULL);
     return true;
 }
@@ -180,20 +180,20 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     trace_curl_sock_cb(action, (int)fd);
     switch (action) {
         case CURL_POLL_IN:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                curl_multi_do, NULL, NULL, NULL, socket);
             break;
         case CURL_POLL_OUT:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                NULL, curl_multi_do, NULL, NULL, socket);
             break;
         case CURL_POLL_INOUT:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                curl_multi_do, curl_multi_do,
                                NULL, NULL, socket);
             break;
         case CURL_POLL_REMOVE:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                NULL, NULL, NULL, NULL, NULL);
             break;
     }
diff --git a/block/export/fuse.c b/block/export/fuse.c
index 83bccf046b..9d4c6c4dee 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -84,7 +84,7 @@ static void fuse_export_drained_begin(void *opaque)
     FuseExport *exp = opaque;
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        NULL, NULL, NULL, NULL, NULL);
     exp->fd_handler_set_up = false;
 }
@@ -97,7 +97,7 @@ static void fuse_export_drained_end(void *opaque)
     exp->common.ctx = blk_get_aio_context(exp->common.blk);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 }
@@ -270,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -324,7 +324,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
         if (exp->fd_handler_set_up) {
             aio_set_fd_handler(exp->common.ctx,
-                               fuse_session_fd(exp->fuse_session), false,
+                               fuse_session_fd(exp->fuse_session),
                                NULL, NULL, NULL, NULL, NULL);
             exp->fd_handler_set_up = false;
         }
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index 35dc8fcf45..94e2491e4a 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -141,7 +141,7 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
     }
 
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -155,7 +155,7 @@ static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
         return;
     }
 
-    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+    aio_set_fd_handler(vblk_exp->export.ctx, fd,
                        NULL, NULL, NULL, NULL, NULL);
 }
 
@@ -174,7 +174,7 @@ static void on_vduse_dev_kick(void *opaque)
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, on_vduse_dev_kick, NULL, NULL, NULL,
+                       on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
     /* Virtqueues are handled by vduse_blk_drained_end() */
@@ -183,7 +183,7 @@ static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
 
     /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
@@ -368,7 +368,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev),
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
diff --git a/block/io.c b/block/io.c
index db438c7657..5b0b126422 100644
--- a/block/io.c
+++ b/block/io.c
@@ -356,7 +356,6 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent,
 
     /* Stop things in parent-to-child order */
     if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
-        aio_disable_external(bdrv_get_aio_context(bs));
         bdrv_parent_drained_begin(bs, parent);
         if (bs->drv && bs->drv->bdrv_drain_begin) {
             bs->drv->bdrv_drain_begin(bs);
@@ -409,7 +408,6 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
             bs->drv->bdrv_drain_end(bs);
         }
         bdrv_parent_drained_end(bs, parent);
-        aio_enable_external(bdrv_get_aio_context(bs));
     }
 }
 
diff --git a/block/io_uring.c b/block/io_uring.c
index 989f9a99ed..b64a3e6285 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -406,7 +406,7 @@ int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
 
 void luring_detach_aio_context(LuringState *s, AioContext *old_context)
 {
-    aio_set_fd_handler(old_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(old_context, s->ring.ring_fd,
                        NULL, NULL, NULL, NULL, s);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
@@ -416,7 +416,7 @@ void luring_attach_aio_context(LuringState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_luring_completion_bh, s);
-    aio_set_fd_handler(s->aio_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(s->aio_context, s->ring.ring_fd,
                        qemu_luring_completion_cb, NULL,
                        qemu_luring_poll_cb, qemu_luring_poll_ready, s);
 }
diff --git a/block/iscsi.c b/block/iscsi.c
index 9fc0bed90b..34f97ab646 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -363,7 +363,6 @@ iscsi_set_events(IscsiLun *iscsilun)
 
     if (ev != iscsilun->events) {
         aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsi),
-                           false,
                            (ev & POLLIN) ? iscsi_process_read : NULL,
                            (ev & POLLOUT) ? iscsi_process_write : NULL,
                            NULL, NULL,
@@ -1540,7 +1539,7 @@ static void iscsi_detach_aio_context(BlockDriverState *bs)
     IscsiLun *iscsilun = bs->opaque;
 
     aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsilun->iscsi),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     iscsilun->events = 0;
 
     if (iscsilun->nop_timer) {
diff --git a/block/linux-aio.c b/block/linux-aio.c
index fc50cdd1bf..129908531a 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -443,7 +443,7 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
 
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &s->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &s->e, NULL, NULL, NULL);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
 }
@@ -452,7 +452,7 @@ void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_laio_completion_bh, s);
-    aio_set_event_notifier(new_context, &s->e, false,
+    aio_set_event_notifier(new_context, &s->e,
                            qemu_laio_completion_cb,
                            qemu_laio_poll_cb,
                            qemu_laio_poll_ready);
diff --git a/block/nfs.c b/block/nfs.c
index 351dc6ec8d..2f603bba42 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -195,7 +195,6 @@ static void nfs_set_events(NFSClient *client)
     int ev = nfs_which_events(client->context);
     if (ev != client->events) {
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false,
                            (ev & POLLIN) ? nfs_process_read : NULL,
                            (ev & POLLOUT) ? nfs_process_write : NULL,
                            NULL, NULL, client);
@@ -373,7 +372,7 @@ static void nfs_detach_aio_context(BlockDriverState *bs)
     NFSClient *client = bs->opaque;
 
     aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     client->events = 0;
 }
 
@@ -391,7 +390,7 @@ static void nfs_client_close(NFSClient *client)
     if (client->context) {
         qemu_mutex_lock(&client->mutex);
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false, NULL, NULL, NULL, NULL, NULL);
+                           NULL, NULL, NULL, NULL, NULL);
         qemu_mutex_unlock(&client->mutex);
         if (client->fh) {
             nfs_close(client->context, client->fh);
diff --git a/block/nvme.c b/block/nvme.c
index 5b744c2bda..17937d398d 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -862,7 +862,7 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
     }
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     if (!nvme_identify(bs, namespace, errp)) {
@@ -948,7 +948,7 @@ static void nvme_close(BlockDriverState *bs)
     g_free(s->queues);
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
     event_notifier_cleanup(&s->irq_notifier[MSIX_SHARED_IRQ_IDX]);
     qemu_vfio_pci_unmap_bar(s->vfio, 0, s->bar0_wo_map,
                             0, sizeof(NvmeBar) + NVME_DOORBELL_SIZE);
@@ -1546,7 +1546,7 @@ static void nvme_detach_aio_context(BlockDriverState *bs)
 
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
 }
 
 static void nvme_attach_aio_context(BlockDriverState *bs,
@@ -1556,7 +1556,7 @@ static void nvme_attach_aio_context(BlockDriverState *bs,
 
     s->aio_context = new_context;
     aio_set_event_notifier(new_context, &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     for (unsigned i = 0; i < s->queue_count; i++) {
diff --git a/block/ssh.c b/block/ssh.c
index b3b3352075..2748253d4a 100644
--- a/block/ssh.c
+++ b/block/ssh.c
@@ -1019,7 +1019,7 @@ static void restart_coroutine(void *opaque)
     AioContext *ctx = bdrv_get_aio_context(bs);
 
     trace_ssh_restart_coroutine(restart->co);
-    aio_set_fd_handler(ctx, s->sock, false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(ctx, s->sock, NULL, NULL, NULL, NULL, NULL);
 
     aio_co_wake(restart->co);
 }
@@ -1049,7 +1049,7 @@ static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs)
     trace_ssh_co_yield(s->sock, rd_handler, wr_handler);
 
     aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock,
-                       false, rd_handler, wr_handler, NULL, NULL, &restart);
+                       rd_handler, wr_handler, NULL, NULL, &restart);
     qemu_coroutine_yield();
     trace_ssh_co_yield_back(s->sock);
 }
diff --git a/block/win32-aio.c b/block/win32-aio.c
index ee87d6048f..6327861e1d 100644
--- a/block/win32-aio.c
+++ b/block/win32-aio.c
@@ -174,7 +174,7 @@ int win32_aio_attach(QEMUWin32AIOState *aio, HANDLE hfile)
 void win32_aio_detach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &aio->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &aio->e, NULL, NULL, NULL);
     aio->aio_ctx = NULL;
 }
 
@@ -182,8 +182,8 @@ void win32_aio_attach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *new_context)
 {
     aio->aio_ctx = new_context;
-    aio_set_event_notifier(new_context, &aio->e, false,
-                           win32_aio_completion_cb, NULL, NULL);
+    aio_set_event_notifier(new_context, &aio->e, win32_aio_completion_cb,
+                           NULL, NULL);
 }
 
 QEMUWin32AIOState *win32_aio_init(void)
diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 6e81bc8791..0b189c6ab8 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh),
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index dcd7aabb4e..6125e4d556 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, NULL, NULL, NULL);
     /* Test and clear notifier before after disabling event,
      * in case poll callback didn't have time to run. */
     virtio_queue_host_notifier_read(&vq->host_notifier);
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c4fd26abe1..7b5c338ea1 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,11 +842,11 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                        xen_device_event, NULL, xen_device_poll, NULL, channel);
 }
 
@@ -920,7 +920,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
diff --git a/io/channel-command.c b/io/channel-command.c
index e7edd091af..7ed726c802 100644
--- a/io/channel-command.c
+++ b/io/channel-command.c
@@ -337,10 +337,8 @@ static void qio_channel_command_set_aio_fd_handler(QIOChannel *ioc,
                                                    void *opaque)
 {
     QIOChannelCommand *cioc = QIO_CHANNEL_COMMAND(ioc);
-    aio_set_fd_handler(ctx, cioc->readfd, false,
-                       io_read, NULL, NULL, NULL, opaque);
-    aio_set_fd_handler(ctx, cioc->writefd, false,
-                       NULL, io_write, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->readfd, io_read, NULL, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->writefd, NULL, io_write, NULL, NULL, opaque);
 }
 
 
diff --git a/io/channel-file.c b/io/channel-file.c
index d76663e6ae..8b5821f452 100644
--- a/io/channel-file.c
+++ b/io/channel-file.c
@@ -198,8 +198,7 @@ static void qio_channel_file_set_aio_fd_handler(QIOChannel *ioc,
                                                 void *opaque)
 {
     QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
-    aio_set_fd_handler(ctx, fioc->fd, false, io_read, io_write,
-                       NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, fioc->fd, io_read, io_write, NULL, NULL, opaque);
 }
 
 static GSource *qio_channel_file_create_watch(QIOChannel *ioc,
diff --git a/io/channel-socket.c b/io/channel-socket.c
index b0ea7d48b3..d99945ebec 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -899,8 +899,7 @@ static void qio_channel_socket_set_aio_fd_handler(QIOChannel *ioc,
                                                   void *opaque)
 {
     QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
-    aio_set_fd_handler(ctx, sioc->fd, false,
-                       io_read, io_write, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, sioc->fd, io_read, io_write, NULL, NULL, opaque);
 }
 
 static GSource *qio_channel_socket_create_watch(QIOChannel *ioc,
diff --git a/migration/rdma.c b/migration/rdma.c
index df646be35e..aee41ca43e 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3104,15 +3104,15 @@ static void qio_channel_rdma_set_aio_fd_handler(QIOChannel *ioc,
 {
     QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
     if (io_read) {
-        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
     } else {
-        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
     }
 }
 
diff --git a/tests/unit/test-aio.c b/tests/unit/test-aio.c
index 321d7ab01a..519440eed3 100644
--- a/tests/unit/test-aio.c
+++ b/tests/unit/test-aio.c
@@ -130,7 +130,7 @@ static void *test_acquire_thread(void *opaque)
 static void set_event_notifier(AioContext *ctx, EventNotifier *notifier,
                                EventNotifierHandler *handler)
 {
-    aio_set_event_notifier(ctx, notifier, false, handler, NULL, NULL);
+    aio_set_event_notifier(ctx, notifier, handler, NULL, NULL);
 }
 
 static void dummy_notifier_read(EventNotifier *n)
@@ -383,30 +383,6 @@ static void test_flush_event_notifier(void)
     event_notifier_cleanup(&data.e);
 }
 
-static void test_aio_external_client(void)
-{
-    int i, j;
-
-    for (i = 1; i < 3; i++) {
-        EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
-        event_notifier_init(&data.e, false);
-        aio_set_event_notifier(ctx, &data.e, true, event_ready_cb, NULL, NULL);
-        event_notifier_set(&data.e);
-        for (j = 0; j < i; j++) {
-            aio_disable_external(ctx);
-        }
-        for (j = 0; j < i; j++) {
-            assert(!aio_poll(ctx, false));
-            assert(event_notifier_test_and_clear(&data.e));
-            event_notifier_set(&data.e);
-            aio_enable_external(ctx);
-        }
-        assert(aio_poll(ctx, false));
-        set_event_notifier(ctx, &data.e, NULL);
-        event_notifier_cleanup(&data.e);
-    }
-}
-
 static void test_wait_event_notifier_noflush(void)
 {
     EventNotifierTestData data = { .n = 0 };
@@ -935,7 +911,6 @@ int main(int argc, char **argv)
     g_test_add_func("/aio/event/wait",              test_wait_event_notifier);
     g_test_add_func("/aio/event/wait/no-flush-cb",  test_wait_event_notifier_noflush);
     g_test_add_func("/aio/event/flush",             test_flush_event_notifier);
-    g_test_add_func("/aio/external-client",         test_aio_external_client);
     g_test_add_func("/aio/timer/schedule",          test_timer_schedule);
 
     g_test_add_func("/aio/coroutine/queue-chaining", test_queue_chaining);
diff --git a/tests/unit/test-fdmon-epoll.c b/tests/unit/test-fdmon-epoll.c
deleted file mode 100644
index ef5a856d09..0000000000
--- a/tests/unit/test-fdmon-epoll.c
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * fdmon-epoll tests
- *
- * Copyright (c) 2020 Red Hat, Inc.
- */
-
-#include "qemu/osdep.h"
-#include "block/aio.h"
-#include "qapi/error.h"
-#include "qemu/main-loop.h"
-
-static AioContext *ctx;
-
-static void dummy_fd_handler(EventNotifier *notifier)
-{
-    event_notifier_test_and_clear(notifier);
-}
-
-static void add_event_notifiers(EventNotifier *notifiers, size_t n)
-{
-    for (size_t i = 0; i < n; i++) {
-        event_notifier_init(&notifiers[i], false);
-        aio_set_event_notifier(ctx, &notifiers[i], false,
-                               dummy_fd_handler, NULL, NULL);
-    }
-}
-
-static void remove_event_notifiers(EventNotifier *notifiers, size_t n)
-{
-    for (size_t i = 0; i < n; i++) {
-        aio_set_event_notifier(ctx, &notifiers[i], false, NULL, NULL, NULL);
-        event_notifier_cleanup(&notifiers[i]);
-    }
-}
-
-/* Check that fd handlers work when external clients are disabled */
-static void test_external_disabled(void)
-{
-    EventNotifier notifiers[100];
-
-    /* fdmon-epoll is only enabled when many fd handlers are registered */
-    add_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
-
-    event_notifier_set(&notifiers[0]);
-    assert(aio_poll(ctx, true));
-
-    aio_disable_external(ctx);
-    event_notifier_set(&notifiers[0]);
-    assert(aio_poll(ctx, true));
-    aio_enable_external(ctx);
-
-    remove_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
-}
-
-int main(int argc, char **argv)
-{
-    /*
-     * This code relies on the fact that fdmon-io_uring disables itself when
-     * the glib main loop is in use. The main loop uses fdmon-poll and upgrades
-     * to fdmon-epoll when the number of fds exceeds a threshold.
-     */
-    qemu_init_main_loop(&error_fatal);
-    ctx = qemu_get_aio_context();
-
-    while (g_main_context_iteration(NULL, false)) {
-        /* Do nothing */
-    }
-
-    g_test_init(&argc, &argv, NULL);
-    g_test_add_func("/fdmon-epoll/external-disabled", test_external_disabled);
-    return g_test_run();
-}
diff --git a/util/aio-posix.c b/util/aio-posix.c
index a8be940f76..934b1bbb85 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -99,7 +99,6 @@ static bool aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
 
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -144,7 +143,6 @@ void aio_set_fd_handler(AioContext *ctx,
         new_node->io_poll = io_poll;
         new_node->io_poll_ready = io_poll_ready;
         new_node->opaque = opaque;
-        new_node->is_external = is_external;
 
         if (is_new) {
             new_node->pfd.fd = fd;
@@ -196,12 +194,11 @@ static void aio_set_fd_poll(AioContext *ctx, int fd,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            bool is_external,
                             EventNotifierHandler *io_read,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready)
 {
-    aio_set_fd_handler(ctx, event_notifier_get_fd(notifier), is_external,
+    aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
                        (IOHandler *)io_read, NULL, io_poll,
                        (IOHandler *)io_poll_ready, notifier);
 }
@@ -285,13 +282,11 @@ bool aio_pending(AioContext *ctx)
 
         /* TODO should this check poll ready? */
         revents = node->pfd.revents & node->pfd.events;
-        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
-            aio_node_check(ctx, node->is_external)) {
+        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
             result = true;
             break;
         }
-        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
-            aio_node_check(ctx, node->is_external)) {
+        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
             result = true;
             break;
         }
@@ -350,9 +345,7 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
         QLIST_INSERT_HEAD(&ctx->poll_aio_handlers, node, node_poll);
     }
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
-        poll_ready && revents == 0 &&
-        aio_node_check(ctx, node->is_external) &&
-        node->io_poll_ready) {
+        poll_ready && revents == 0 && node->io_poll_ready) {
         node->io_poll_ready(node->opaque);
 
         /*
@@ -364,7 +357,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
 
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
         (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
-        aio_node_check(ctx, node->is_external) &&
         node->io_read) {
         node->io_read(node->opaque);
 
@@ -375,7 +367,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
     }
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
         (revents & (G_IO_OUT | G_IO_ERR)) &&
-        aio_node_check(ctx, node->is_external) &&
         node->io_write) {
         node->io_write(node->opaque);
         progress = true;
@@ -436,8 +427,7 @@ static bool run_poll_handlers_once(AioContext *ctx,
     AioHandler *tmp;
 
     QLIST_FOREACH_SAFE(node, &ctx->poll_aio_handlers, node_poll, tmp) {
-        if (aio_node_check(ctx, node->is_external) &&
-            node->io_poll(node->opaque)) {
+        if (node->io_poll(node->opaque)) {
             aio_add_poll_ready_handler(ready_list, node);
 
             node->poll_idle_timeout = now + POLL_IDLE_INTERVAL_NS;
diff --git a/util/aio-win32.c b/util/aio-win32.c
index 6bded009a4..948ef47a4d 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -32,7 +32,6 @@ struct AioHandler {
     GPollFD pfd;
     int deleted;
     void *opaque;
-    bool is_external;
     QLIST_ENTRY(AioHandler) node;
 };
 
@@ -64,7 +63,6 @@ static void aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
 
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -111,7 +109,6 @@ void aio_set_fd_handler(AioContext *ctx,
         node->opaque = opaque;
         node->io_read = io_read;
         node->io_write = io_write;
-        node->is_external = is_external;
 
         if (io_read) {
             bitmask |= FD_READ | FD_ACCEPT | FD_CLOSE;
@@ -135,7 +132,6 @@ void aio_set_fd_handler(AioContext *ctx,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *e,
-                            bool is_external,
                             EventNotifierHandler *io_notify,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready)
@@ -161,7 +157,6 @@ void aio_set_event_notifier(AioContext *ctx,
             node->e = e;
             node->pfd.fd = (uintptr_t)event_notifier_get_handle(e);
             node->pfd.events = G_IO_IN;
-            node->is_external = is_external;
             QLIST_INSERT_HEAD_RCU(&ctx->aio_handlers, node, node);
 
             g_source_add_poll(&ctx->source, &node->pfd);
@@ -368,8 +363,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     /* fill fd sets */
     count = 0;
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
-        if (!node->deleted && node->io_notify
-            && aio_node_check(ctx, node->is_external)) {
+        if (!node->deleted && node->io_notify) {
             assert(count < MAXIMUM_WAIT_OBJECTS);
             events[count++] = event_notifier_get_handle(node->e);
         }
diff --git a/util/async.c b/util/async.c
index 21016a1ac7..be0726038e 100644
--- a/util/async.c
+++ b/util/async.c
@@ -377,7 +377,7 @@ aio_ctx_finalize(GSource     *source)
         g_free(bh);
     }
 
-    aio_set_event_notifier(ctx, &ctx->notifier, false, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL, NULL);
     event_notifier_cleanup(&ctx->notifier);
     qemu_rec_mutex_destroy(&ctx->lock);
     qemu_lockcnt_destroy(&ctx->list_lock);
@@ -561,7 +561,6 @@ AioContext *aio_context_new(Error **errp)
     QSLIST_INIT(&ctx->scheduled_coroutines);
 
     aio_set_event_notifier(ctx, &ctx->notifier,
-                           false,
                            aio_context_notifier_cb,
                            aio_context_notifier_poll,
                            aio_context_notifier_poll_ready);
diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c
index e11a8a022e..ef3eacacd2 100644
--- a/util/fdmon-epoll.c
+++ b/util/fdmon-epoll.c
@@ -64,11 +64,6 @@ static int fdmon_epoll_wait(AioContext *ctx, AioHandlerList *ready_list,
     int i, ret = 0;
     struct epoll_event events[128];
 
-    /* Fall back while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return fdmon_poll_ops.wait(ctx, ready_list, timeout);
-    }
-
     if (timeout > 0) {
         ret = qemu_poll_ns(&pfd, 1, timeout);
         if (ret > 0) {
@@ -131,11 +126,6 @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned npfd)
         return false;
     }
 
-    /* Do not upgrade while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return false;
-    }
-
     if (npfd >= EPOLL_ENABLE_THRESHOLD) {
         if (fdmon_epoll_try_enable(ctx)) {
             return true;
diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
index ab43052dd7..17ec18b7bd 100644
--- a/util/fdmon-io_uring.c
+++ b/util/fdmon-io_uring.c
@@ -276,11 +276,6 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
     unsigned wait_nr = 1; /* block until at least one cqe is ready */
     int ret;
 
-    /* Fall back while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return fdmon_poll_ops.wait(ctx, ready_list, timeout);
-    }
-
     if (timeout == 0) {
         wait_nr = 0; /* non-blocking */
     } else if (timeout > 0) {
@@ -315,8 +310,7 @@ static bool fdmon_io_uring_need_wait(AioContext *ctx)
         return true;
     }
 
-    /* Are we falling back to fdmon-poll? */
-    return qatomic_read(&ctx->external_disable_cnt);
+    return false;
 }
 
 static const FDMonOps fdmon_io_uring_ops = {
diff --git a/util/fdmon-poll.c b/util/fdmon-poll.c
index 5fe3b47865..17df917cf9 100644
--- a/util/fdmon-poll.c
+++ b/util/fdmon-poll.c
@@ -65,8 +65,7 @@ static int fdmon_poll_wait(AioContext *ctx, AioHandlerList *ready_list,
     assert(npfd == 0);
 
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
-        if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events
-                && aio_node_check(ctx, node->is_external)) {
+        if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events) {
             add_pollfd(node);
         }
     }
diff --git a/util/main-loop.c b/util/main-loop.c
index e180c85145..3e43a9cd38 100644
--- a/util/main-loop.c
+++ b/util/main-loop.c
@@ -642,14 +642,13 @@ void qemu_set_fd_handler(int fd,
                          void *opaque)
 {
     iohandler_init();
-    aio_set_fd_handler(iohandler_ctx, fd, false,
-                       fd_read, fd_write, NULL, NULL, opaque);
+    aio_set_fd_handler(iohandler_ctx, fd, fd_read, fd_write, NULL, NULL,
+                       opaque);
 }
 
 void event_notifier_set_handler(EventNotifier *e,
                                 EventNotifierHandler *handler)
 {
     iohandler_init();
-    aio_set_event_notifier(iohandler_ctx, e, false,
-                           handler, NULL, NULL);
+    aio_set_event_notifier(iohandler_ctx, e, handler, NULL, NULL);
 }
diff --git a/util/qemu-coroutine-io.c b/util/qemu-coroutine-io.c
index d791932d63..364f4d5abf 100644
--- a/util/qemu-coroutine-io.c
+++ b/util/qemu-coroutine-io.c
@@ -74,8 +74,7 @@ typedef struct {
 static void fd_coroutine_enter(void *opaque)
 {
     FDYieldUntilData *data = opaque;
-    aio_set_fd_handler(data->ctx, data->fd, false,
-                       NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(data->ctx, data->fd, NULL, NULL, NULL, NULL, NULL);
     qemu_coroutine_enter(data->co);
 }
 
@@ -87,7 +86,7 @@ void coroutine_fn yield_until_fd_readable(int fd)
     data.ctx = qemu_get_current_aio_context();
     data.co = qemu_coroutine_self();
     data.fd = fd;
-    aio_set_fd_handler(
-        data.ctx, fd, false, fd_coroutine_enter, NULL, NULL, NULL, &data);
+    aio_set_fd_handler(data.ctx, fd, fd_coroutine_enter, NULL, NULL, NULL,
+                       &data);
     qemu_coroutine_yield();
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 332aea9306..9ba19121a2 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,8 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, false,
-                       NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(server->ioc->ctx, fd, NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
     g_free(vu_fd_watch);
@@ -362,7 +361,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +402,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +416,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
diff --git a/tests/unit/meson.build b/tests/unit/meson.build
index fa63cfe6ff..2980170397 100644
--- a/tests/unit/meson.build
+++ b/tests/unit/meson.build
@@ -121,9 +121,6 @@ if have_block
   if nettle.found() or gcrypt.found()
     tests += {'test-crypto-pbkdf': [io]}
   endif
-  if config_host_data.get('CONFIG_EPOLL_CREATE1')
-    tests += {'test-fdmon-epoll': [testblock]}
-  endif
 endif
 
 if have_system
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:34:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:34:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517570.803159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjP0B-00039i-Ht; Mon, 03 Apr 2023 18:33:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517570.803159; Mon, 03 Apr 2023 18:33:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjP0B-00039Z-F2; Mon, 03 Apr 2023 18:33:59 +0000
Received: by outflank-mailman (input) for mailman id 517570;
 Mon, 03 Apr 2023 18:33:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1aQ=72=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjP0A-00038w-HI
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:33:58 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 177b4932-d24e-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 20:33:57 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-117-j26lxWggN_O159O5n3-PNg-1; Mon, 03 Apr 2023 14:33:46 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 3118C885620;
 Mon,  3 Apr 2023 18:30:37 +0000 (UTC)
Received: from localhost (unknown [10.39.192.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 71C5440C6EC4;
 Mon,  3 Apr 2023 18:30:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 177b4932-d24e-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680546835;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hLGTNhZNr3bIXX7Mx11PRwkKdI3CfrxoXkyIIe5WNhM=;
	b=ex87W1Y2rVLKmb4iIysD6LEcc+IrdiGRxrTImhC5fU5oW9/2y581T4DsPBNiyueuHpq6Li
	KvpiWb1NsWVBioKxv5i5yW33Gr+3vluq1C0jFv+BtC47HIMa2tHZ8b7xBF/Xxgoln6O/HC
	rqYr6PiZ0kz3vdcKaxMIeQmHDHs1W8c=
X-MC-Unique: j26lxWggN_O159O5n3-PNg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	<qemu-block@nongnu.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH 11/13] block/fuse: take AioContext lock around blk_exp_ref/unref()
Date: Mon,  3 Apr 2023 14:30:02 -0400
Message-Id: <20230403183004.347205-12-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

These functions must be called with the AioContext acquired:

  /* Callers must hold exp->ctx lock */
  void blk_exp_ref(BlockExport *exp)
  ...
  /* Callers must hold exp->ctx lock */
  void blk_exp_unref(BlockExport *exp)

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/fuse.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/block/export/fuse.c b/block/export/fuse.c
index 06fa41079e..18394f9e07 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -244,7 +244,9 @@ static void read_from_fuse_export(void *opaque)
     FuseExport *exp = opaque;
     int ret;
 
+    aio_context_acquire(exp->common.ctx);
     blk_exp_ref(&exp->common);
+    aio_context_release(exp->common.ctx);
 
     do {
         ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
@@ -256,7 +258,9 @@ static void read_from_fuse_export(void *opaque)
     fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);
 
 out:
+    aio_context_acquire(exp->common.ctx);
     blk_exp_unref(&exp->common);
+    aio_context_release(exp->common.ctx);
 }
 
 static void fuse_export_shutdown(BlockExport *blk_exp)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 18:40:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 18:40:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517586.803168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjP6Q-0004m8-8E; Mon, 03 Apr 2023 18:40:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517586.803168; Mon, 03 Apr 2023 18:40:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjP6Q-0004m1-5f; Mon, 03 Apr 2023 18:40:26 +0000
Received: by outflank-mailman (input) for mailman id 517586;
 Mon, 03 Apr 2023 18:40:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=umFA=72=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pjP6P-0004lv-3S
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 18:40:25 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fdeb178b-d24e-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 20:40:22 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id h9so31381937ljq.2
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 11:40:22 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 u22-20020a2ea176000000b002935005f782sm940636ljl.57.2023.04.03.11.40.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 Apr 2023 11:40:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdeb178b-d24e-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680547222;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=D1mf9FE6OGtx421qkqQ1ut1+Ibbt2CZmN0oBWVfBFIo=;
        b=gI1euHdMIe085Z55oMOZQGbeJQyoGYIuimgyfobH023UYgD0ikSCYkI1pbIYPwL0HH
         +gUiDYb0pIAv9BmWevsS9TpgsYNl69UR+KzNQibGQVVPqd58i0ZXaQQ+XaHVZQvYKw7h
         F9+TI0kGQHH9cvuIQAMsMaIxxMvIewuCiBmn5fi54Hc9aqLFTzNQ3J/ptY18BzSRmr0Z
         P/1MTm/QSRlRO9CuoMBE7Ebp5n9b9ugCI4GTn/ko1lwU9oMrInvXLH/Z2vqbJLVgmW0g
         ZXl2+nV30rYzNqiLHg4BOeV4jZXs1vueeM6AcOV7bpXL7a7BhvhscXfffKF6IDAflOGc
         cUYQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680547222;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=D1mf9FE6OGtx421qkqQ1ut1+Ibbt2CZmN0oBWVfBFIo=;
        b=lxEBbvIVoai+Co67+J2BeOHNbALrTAeOX+qufkz5f0eyFxJZiWjZ9kvy0dMIV651id
         C0Go7Rzc/esIwxFVNsq2JLYOlh3YfwFEzkE6cE8QDAqNED1zNAO9hnbOYQAEMAH1yV2+
         LwVA0rvjQUJ5VtRuwxRXT0OwHuJ7i1mYABX1zbWe72HacvEq4IWQzVeIHWXNtZU+UM3s
         HvszJgHV5/P5zNh7T+aGVxjcopoXs8IYUxn8ljHwFsDCtuxdSkN1SzpxSw+uAAyk5A/b
         IpK44gZlHYdOPl6Zk2hHuwpRxFaRoq6aKpyDGMyKWTbUnmMW6zerIBfi1DrT30DoprLR
         wfiw==
X-Gm-Message-State: AAQBX9eZAawIv4rPL6qXgb6TnR5rtyOkG+e1Ac0N7tR3t9dkHnsc0M39
	WNxIB8ok7avznnBNUEA+/C8=
X-Google-Smtp-Source: AKy350a0lRWSBr0kRwjKoJJMRBpD5euDxKz53pnqpvwnWtA8j+SJucGmsWro9c/7CzV4EaSMFjzDHA==
X-Received: by 2002:a2e:7314:0:b0:298:a89b:fb66 with SMTP id o20-20020a2e7314000000b00298a89bfb66mr111082ljc.42.1680547222051;
        Mon, 03 Apr 2023 11:40:22 -0700 (PDT)
Message-ID: <605245331bb93b7e60a4a9d65b19b6642d897034.camel@gmail.com>
Subject: Re: [PATCH v9 4/5] xen/arm: switch ARM to use generic
 implementation of bug.h
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bertrand Marquis
 <bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Date: Mon, 03 Apr 2023 21:40:21 +0300
In-Reply-To: <3a94ad32-159d-d06f-cba6-3bb40f9f2085@xen.org>
References: <cover.1680086655.git.oleksii.kurochko@gmail.com>
	 <8fdb98350ae4fc6029738d0aabe13a57e1945a50.1680086655.git.oleksii.kurochko@gmail.com>
	 <3a94ad32-159d-d06f-cba6-3bb40f9f2085@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: base64
User-Agent: Evolution 3.46.4 (3.46.4-1.fc37) 
MIME-Version: 1.0

SGVsbG8gSnVsaWVuLCAKT24gRnJpLCAyMDIzLTAzLTMxIGF0IDIyOjA1ICswMTAwLCBKdWxpZW4g
R3JhbGwgd3JvdGU6Cj4gSGkgT2xla3NpaSwKPiAKPiBJIHdhcyBnb2luZyB0byBhY2sgdGhlIHBh
dGNoIGJ1dCB0aGVuIEkgc3BvdHRlZCBzb21ldGhpbmcgdGhhdCB3b3VsZCAKPiB3YW50IHNvbWUg
Y2xhcmlmaWNhdGlvbi4KPiAKPiBPbiAyOS8wMy8yMDIzIDExOjUwLCBPbGVrc2lpIEt1cm9jaGtv
IHdyb3RlOgo+ID4gZGlmZiAtLWdpdCBhL3hlbi9hcmNoL2FybS9pbmNsdWRlL2FzbS9idWcuaAo+
ID4gYi94ZW4vYXJjaC9hcm0vaW5jbHVkZS9hc20vYnVnLmgKPiA+IGluZGV4IGNhY2FmMDE0YWIu
LjNmYjA0NzFhOWIgMTAwNjQ0Cj4gPiAtLS0gYS94ZW4vYXJjaC9hcm0vaW5jbHVkZS9hc20vYnVn
LmgKPiA+ICsrKyBiL3hlbi9hcmNoL2FybS9pbmNsdWRlL2FzbS9idWcuaAo+ID4gQEAgLTEsNiAr
MSwyNCBAQAo+ID4gwqAgI2lmbmRlZiBfX0FSTV9CVUdfSF9fCj4gPiDCoCAjZGVmaW5lIF9fQVJN
X0JVR19IX18KPiA+IMKgIAo+ID4gKy8qCj4gPiArICogUGxlYXNlIGRvIG5vdCBpbmNsdWRlIGlu
IHRoZSBoZWFkZXIgYW55IGhlYWRlciB0aGF0IG1pZ2h0Cj4gPiArICogdXNlIEJVRy9BU1NFUlQv
ZXRjIG1hcm9zIGFzdGhleSB3aWxsIGJlIGRlZmluZWQgbGF0ZXIgYWZ0ZXIKPiA+ICsgKiB0aGUg
cmV0dXJuIHRvIDx4ZW4vYnVnLmg+IGZyb20gdGhlIGN1cnJlbnQgaGVhZGVyOgo+ID4gKyAqCj4g
PiArICogPHhlbi9idWcuaD46Cj4gPiArICrCoCAuLi4KPiA+ICsgKsKgwqAgPGFzbS9idWcuaD46
Cj4gPiArICrCoMKgwqDCoCAuLi4KPiA+ICsgKsKgwqDCoMKgIDxhbnlfaGVhZGVyX3doaWNoX3Vz
ZXNfQlVHL0FTU0VSVC9ldGMgbWFjcm9zLmg+Cj4gPiArICrCoMKgwqDCoCAuLi4KPiA+ICsgKsKg
IC4uLgo+ID4gKyAqwqAgI2RlZmluZSBCVUcoKSAuLi4KPiA+ICsgKsKgIC4uLgo+ID4gKyAqwqAg
I2RlZmluZSBBU1NFUlQoKSAuLi4KPiA+ICsgKsKgIC4uLgo+ID4gKyAqLwo+ID4gKwo+ID4gwqAg
I2luY2x1ZGUgPHhlbi90eXBlcy5oPgo+ID4gwqAgCj4gPiDCoCAjaWYgZGVmaW5lZChDT05GSUdf
QVJNXzMyKQo+ID4gQEAgLTExLDc2ICsyOSw3IEBACj4gPiDCoCAjIGVycm9yICJ1bmtub3duIEFS
TSB2YXJpYW50Igo+ID4gwqAgI2VuZGlmCj4gPiDCoCAKPiA+IC0jZGVmaW5lIEJVR19GUkFNRV9T
VFJVQ1QKPiA+IC0KPiA+IC1zdHJ1Y3QgYnVnX2ZyYW1lIHsKPiA+IC3CoMKgwqAgc2lnbmVkIGlu
dCBsb2NfZGlzcDvCoMKgwqAgLyogUmVsYXRpdmUgYWRkcmVzcyB0byB0aGUgYnVnIGFkZHJlc3MK
PiA+ICovCj4gPiAtwqDCoMKgIHNpZ25lZCBpbnQgZmlsZV9kaXNwO8KgwqAgLyogUmVsYXRpdmUg
YWRkcmVzcyB0byB0aGUgZmlsZW5hbWUgKi8KPiA+IC3CoMKgwqAgc2lnbmVkIGludCBtc2dfZGlz
cDvCoMKgwqAgLyogUmVsYXRpdmUgYWRkcmVzcyB0byB0aGUgcHJlZGljYXRlCj4gPiAoZm9yIEFT
U0VSVCkgKi8KPiA+IC3CoMKgwqAgdWludDE2X3QgbGluZTvCoMKgwqDCoMKgwqDCoMKgwqAgLyog
TGluZSBudW1iZXIgKi8KPiA+IC3CoMKgwqAgdWludDMyX3QgcGFkMDoxNjvCoMKgwqDCoMKgwqAg
LyogUGFkZGluZyBmb3IgOC1ieXRlcyBhbGlnbiAqLwo+ID4gLX07Cj4gPiAtCj4gPiAtI2RlZmlu
ZSBidWdfbG9jKGIpICgoY29uc3Qgdm9pZCAqKShiKSArIChiKS0+bG9jX2Rpc3ApCj4gPiAtI2Rl
ZmluZSBidWdfZmlsZShiKSAoKGNvbnN0IHZvaWQgKikoYikgKyAoYiktPmZpbGVfZGlzcCk7Cj4g
PiAtI2RlZmluZSBidWdfbGluZShiKSAoKGIpLT5saW5lKQo+ID4gLSNkZWZpbmUgYnVnX21zZyhi
KSAoKGNvbnN0IGNoYXIgKikoYikgKyAoYiktPm1zZ19kaXNwKQo+ID4gLQo+ID4gLS8qIE1hbnkg
dmVyc2lvbnMgb2YgR0NDIGRvZXNuJ3Qgc3VwcG9ydCB0aGUgYXNtICVjIHBhcmFtZXRlciB3aGlj
aAo+ID4gd291bGQKPiA+IC0gKiBiZSBwcmVmZXJhYmxlIHRvIHRoaXMgdW5wbGVhc2FudG5lc3Mu
IFdlIHVzZSBtZXJnZWFibGUgc3RyaW5nCj4gPiAtICogc2VjdGlvbnMgdG8gYXZvaWQgbXVsdGlw
bGUgY29waWVzIG9mIHRoZSBzdHJpbmcgYXBwZWFyaW5nIGluCj4gPiB0aGUKPiA+IC0gKiBYZW4g
aW1hZ2UuIEJVR0ZSQU1FX3J1bl9mbiBuZWVkcyB0byBiZSBoYW5kbGVkIHNlcGFyYXRlbHkuCj4g
PiAtICovCj4gCj4gR2l2ZW4gdGhpcyBjb21tZW50IC4uLgo+IAo+ID4gLSNkZWZpbmUgQlVHX0ZS
QU1FKHR5cGUsIGxpbmUsIGZpbGUsIGhhc19tc2csIG1zZykgZG8KPiA+IHvCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgXAo+ID4gLcKgwqDCoCBCVUlMRF9CVUdfT04o
KGxpbmUpID4+Cj4gPiAxNik7wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiAt
wqDCoMKgIEJVSUxEX0JVR19PTigodHlwZSkgPj0KPiA+IEJVR0ZSQU1FX05SKTvCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIFwKPiA+IC3CoMKgwqAgYXNtCj4gPiAoIjE6IkJVR19JTlNUUiJcbiLCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoAo+ID4gXAo+ID4gLcKgwqDCoMKgwqDCoMKgwqAg
Ii5wdXNoc2VjdGlvbiAucm9kYXRhLnN0ciwgXCJhTVNcIiwgJXByb2diaXRzLAo+ID4gMVxuIsKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiAtwqDCoMKgwqDCoMKgwqDCoCAiMjpc
dC5hc2NpeiAiIF9fc3RyaW5naWZ5KGZpbGUpCj4gPiAiXG4iwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIFwKPiA+IC3CoMKgwqDCoMKg
wqDCoMKgCj4gPiAiMzpcbiLCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAKPiA+IFwKPiA+IC3CoMKgwqDCoMKgwqDCoMKgICIuaWYg
IiAjaGFzX21zZwo+ID4gIlxuIsKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIFwK
PiA+IC3CoMKgwqDCoMKgwqDCoMKgICJcdC5hc2NpeiAiICNtc2cKPiA+ICJcbiLCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgXAo+ID4gLcKgwqDCoMKgwqDCoMKgwqAKPiA+ICIuZW5k
aWZcbiLCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgCj4gPiBcCj4gPiAtwqDCoMKgwqDCoMKgwqDCoAo+ID4gIi5wb3BzZWN0aW9uXG4iwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgCj4gPiBcCj4gPiAtwqDCoMKg
wqDCoMKgwqDCoCAiLnB1c2hzZWN0aW9uIC5idWdfZnJhbWVzLiIgX19zdHJpbmdpZnkodHlwZSkg
IiwgXCJhXCIsCj4gPiAlcHJvZ2JpdHNcbiJcCj4gPiAtwqDCoMKgwqDCoMKgwqDCoAo+ID4gIjQ6
XG4iwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgCj4gPiBcCj4gPiAtwqDCoMKgwqDCoMKgwqDCoCAiLnAyYWxpZ24KPiA+IDJcbiLC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiAtwqDC
oMKgwqDCoMKgwqDCoCAiLmxvbmcgKDFiIC0KPiA+IDRiKVxuIsKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqAgXAo+ID4gLcKgwqDCoMKgwqDCoMKgwqAgIi5sb25nICgyYiAtCj4g
PiA0YilcbiLCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIFwKPiA+IC3CoMKg
wqDCoMKgwqDCoMKgICIubG9uZyAoM2IgLQo+ID4gNGIpXG4iwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoCBcCj4gPiAtwqDCoMKgwqDCoMKgwqDCoCAiLmh3b3JkICIgX19zdHJp
bmdpZnkobGluZSkgIiwKPiA+IDBcbiLCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiAtwqDCoMKgwqDCoMKgwqDCoAo+ID4g
Ii5wb3BzZWN0aW9uIik7wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgCj4gPiBcCj4gPiAtfSB3aGlsZSAoMCkKPiA+IC0KPiA+IC0vKgo+ID4gLSAqIEdDQyB3aWxs
IG5vdCBhbGxvdyB0byB1c2UgImkiwqAgd2hlbiBQSUUgaXMgZW5hYmxlZCAoWGVuIGRvZXNuJ3QK
PiA+IHNldCB0aGUKPiA+IC0gKiBmbGFnIGJ1dCBpbnN0ZWFkIHJlbHkgb24gdGhlIGRlZmF1bHQg
dmFsdWUgZnJvbSB0aGUgY29tcGlsZXIpLgo+ID4gU28gdGhlCj4gPiAtICogZWFzaWVzdCB3YXkg
dG8gaW1wbGVtZW50IHJ1bl9pbl9leGNlcHRpb25faGFuZGxlcigpIGlzIHRvIHBhc3MKPiA+IHRo
ZSB0bwo+ID4gLSAqIGJlIGNhbGxlZCBmdW5jdGlvbiBpbiBhIGZpeGVkIHJlZ2lzdGVyLgo+ID4g
LSAqLwo+ID4gLSNkZWZpbmXCoCBydW5faW5fZXhjZXB0aW9uX2hhbmRsZXIoZm4pIGRvCj4gPiB7
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgIFwKPiA+IC3CoMKgwqAgYXNtICgibW92ICIgX19zdHJpbmdpZnkoQlVHX0ZOX1JF
RykgIiwKPiA+ICUwXG4iwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgIFwKPiA+IC3CoMKgwqDCoMKgwqDCoMKgCj4gPiAiMToiQlVHX0lOU1RSIlxu
IsKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgCj4gPiBcCj4gPiAtwqDC
oMKgwqDCoMKgwqDCoCAiLnB1c2hzZWN0aW9uIC5idWdfZnJhbWVzLiIgX19zdHJpbmdpZnkoQlVH
RlJBTUVfcnVuX2ZuKQo+ID4gIiwiwqDCoMKgwqDCoMKgIFwKPiA+IC3CoMKgwqDCoMKgwqDCoMKg
ICLCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgXCJhXCIsCj4gPiAlJXByb2diaXRzXG4iwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oCBcCj4gPiAtwqDCoMKgwqDCoMKgwqDCoAo+ID4gIjI6XG4iwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgCj4gPiBcCj4gPiAtwqDC
oMKgwqDCoMKgwqDCoCAiLnAyYWxpZ24KPiA+IDJcbiLCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiAtwqDCoMKgwqDCoMKgwqDCoCAiLmxvbmcgKDFi
IC0KPiA+IDJiKVxuIsKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgXAo+ID4g
LcKgwqDCoMKgwqDCoMKgwqAgIi5sb25nIDAsIDAsCj4gPiAwXG4iwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgXAo+ID4gLcKgwqDCoMKgwqDCoMKgwqAgIi5wb3BzZWN0
aW9uIiA6OiAiciIgKGZuKSA6IF9fc3RyaW5naWZ5KEJVR19GTl9SRUcpCj4gPiApO8KgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoCBcCj4gPiAtfSB3aGlsZSAoMCkKPiA+IC0KPiA+IC0jZGVmaW5lIFdB
Uk4oKSBCVUdfRlJBTUUoQlVHRlJBTUVfd2FybiwgX19MSU5FX18sIF9fRklMRV9fLCAwLCAiIikK
PiA+IC0KPiA+IC0jZGVmaW5lIEJVRygpIGRvIHvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqAgXAo+ID4gLcKgwqDCoCBCVUdfRlJBTUUoQlVHRlJBTUVfYnVnLMKgIF9fTElORV9fLCBf
X0ZJTEVfXywgMCwgIiIpO8KgwqDCoMKgwqDCoMKgIFwKPiA+IC3CoMKgwqAgdW5yZWFjaGFibGUo
KTvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgXAo+ID4gLX0gd2hpbGUgKDApCj4g
PiAtCj4gPiAtI2RlZmluZSBhc3NlcnRfZmFpbGVkKG1zZykgZG8ge8KgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgXAo+ID4gLcKg
wqDCoCBCVUdfRlJBTUUoQlVHRlJBTUVfYXNzZXJ0LCBfX0xJTkVfXywgX19GSUxFX18sIDEsIG1z
Zyk7wqDCoMKgwqAgXAo+ID4gLcKgwqDCoCB1bnJlYWNoYWJsZSgpO8KgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoCBcCj4gPiAtfSB3aGlsZSAoMCkKPiA+ICsjZGVmaW5lIEJVR19BU01f
Q09OU1TCoMKgICJjIgo+IAo+IC4uLiB5b3Ugc2hvdWxkIGV4cGxhaW4gaW4gdGhlIGNvbW1pdCBt
ZXNzYWdlIHdoeSB0aGlzIGlzIG5lZWRlZCBhbmQKPiB0aGUgCj4gcHJvYmxlbSBkZXNjcmliZWQg
YWJvdmUgaXMgbm90IGEgcHJvYmxlbSBhbnltb3JlLgo+IAo+IEZvciBpbnN0YW5jZSwgSSBtYW5h
Z2VkIHRvIGJ1aWxkIGl0IHdpdGhvdXQgJ2MnIG9uIGFybTY0IFsxXS4gQnV0IGl0IAo+IGRvZXMg
YnJlYWsgb24gYXJtMzIgWzJdLiBJIGtub3cgdGhhdCBBcm0gaXMgYWxzbyB3aGVyZSAnJWMnIHdh
cyBhbgo+IGlzc3VlLgo+IAo+IFNraW1taW5nIHRocm91Z2ggbGludXgsIHRoZSByZWFzb24gc2Vl
bXMgdG8gYmUgdGhhdCBHQ0MgbWF5IGFkZCAnIycKPiB3aGVuIAo+IGl0IHNob3VsZCBub3QuIFRo
YXQgc2FpZCwgSSBoYXZlbid0IGxvb2sgYXQgdGhlIGltcGFjdCBvbiB0aGUgZ2VuZXJpYwo+IGlt
cGxlbWVudGF0aW9uLiBOZWl0aGVyIEkgbG9va2VkIGF0IHdoaWNoIHZlcnNpb24gbWF5IGJlIGFm
ZmVjdGVkCj4gKHRoZSAKPiBvcmlnaW5hbCBtZXNzYWdlIHdhcyBmcm9tIDIwMTEpLgpZb3UgYXJl
IHJpZ2h0IHRoYXQgc29tZSBjb21waWxlcnMgYWRkICcjJyB3aGVuIGl0IHNob3VsZG4ndC4gVGhl
IHNhbWUKdGhpbmcgaGFwcGVucyB3aXRoIFJJU0MtVi4KClNvIEknbGwgdXBkYXRlIGJvdGggdGhl
IGNvbW1pdCBtZXNzYWdlIGFuZCBjb21tZW50LgoKPiAKPiBIb3dldmVyLCB3aXRob3V0IGFuIGV4
cGxhbmF0aW9uLCBJIGFtIGFmcmFpZCB0aGlzIGNhbid0IGdvIGluIGJlY2F1c2UKPiBJIAo+IGFt
IHdvcnJ5IHdlIG1heSBicmVhayBzb21lIHVzZXJzICh0aGFua2Z1bGx5IHRoYXQgbWlnaHQganVz
dCBiZSBhIAo+IGNvbXBpbGF0aW9uIGlzc3VlcyByYXRoZXIgdGhhbiB3ZWlyZCBiZWhhdmlvciku
Cj4gCj4gQmVydHJhbmQsIFN0ZWZhbm8sIGRvIHlvdSBrbm93IGlmIHRoaXMgaXMgc3RpbGwgYW4g
aXNzdWU/Cj4gCj4gQ2hlZXJzLAo+IAo+IFsxXSBhYXJjaDY0LWxpbnV4LWdudS1nY2MgKExpbmFy
byBHQ0MgNy41LTIwMTkuMTIpIDcuNS4wCj4gWzJdIGFybS1ub25lLWxpbnV4LWdudWVhYmloZi1n
Y2MgKEdOVSBUb29sY2hhaW4gZm9yIHRoZSBBLXByb2ZpbGUgCj4gQXJjaGl0ZWN0dXJlIDEwLjMt
MjAyMS4wNyAoYXJtLTEwLjI5KSkgMTAuMy4xIDIwMjEwNjIxCj4gCn4gT2xla3NpaQoK



From xen-devel-bounces@lists.xenproject.org Mon Apr 03 20:00:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 20:00:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517597.803178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjQLo-0004fx-Qx; Mon, 03 Apr 2023 20:00:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517597.803178; Mon, 03 Apr 2023 20:00:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjQLo-0004fq-OO; Mon, 03 Apr 2023 20:00:24 +0000
Received: by outflank-mailman (input) for mailman id 517597;
 Mon, 03 Apr 2023 20:00:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjQLn-0004fg-1o; Mon, 03 Apr 2023 20:00:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjQLm-0002wk-Rl; Mon, 03 Apr 2023 20:00:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjQLm-0006gu-FJ; Mon, 03 Apr 2023 20:00:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjQLm-00054e-Eq; Mon, 03 Apr 2023 20:00:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iVHKg7EU5CIRCK4JX8fX/BslmcDwyivBEC/bwGpywxU=; b=VO67zF9qbZc3DvCHrj08igDKbS
	UBAbD5Qo/zdedOiUYSSmempc813BDtuSxmhVZau031h07gj5WtzJsdrxW4vZWpUsqOkDrkReWWb8x
	qpmZGlKz2tfbzLkzpY73FWQ7+3r6dq4L5SYXmFuZ5mQDoT1sDWi1A5bpN7BFcD5MHIJg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180127-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180127: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=26997800c991f934b57ebd91de2edcd93312f756
X-Osstest-Versions-That:
    ovmf=fc00ff286a541c047b7d343e66ec10890b80d3ea
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 Apr 2023 20:00:22 +0000

flight 180127 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180127/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 26997800c991f934b57ebd91de2edcd93312f756
baseline version:
 ovmf                 fc00ff286a541c047b7d343e66ec10890b80d3ea

Last test of basis   180112  2023-04-02 04:43:57 Z    1 days
Testing same since   180127  2023-04-03 15:40:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Erich McMillan <emcmillan@microsoft.com>
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   fc00ff286a..26997800c9  26997800c991f934b57ebd91de2edcd93312f756 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 20:36:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 20:36:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517602.803189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjQum-00087Y-Hq; Mon, 03 Apr 2023 20:36:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517602.803189; Mon, 03 Apr 2023 20:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjQum-00087R-F3; Mon, 03 Apr 2023 20:36:32 +0000
Received: by outflank-mailman (input) for mailman id 517602;
 Mon, 03 Apr 2023 20:36:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qzQh=72=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pjQuk-00087L-Sh
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 20:36:31 +0000
Received: from mail-wm1-x335.google.com (mail-wm1-x335.google.com
 [2a00:1450:4864:20::335])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 35dc50ff-d25f-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 22:36:29 +0200 (CEST)
Received: by mail-wm1-x335.google.com with SMTP id
 d11-20020a05600c3acb00b003ef6e6754c5so15162021wms.5
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 13:36:28 -0700 (PDT)
Received: from [127.0.0.1] (dynamic-078-055-162-106.78.55.pool.telefonica.de.
 [78.55.162.106]) by smtp.gmail.com with ESMTPSA id
 bg11-20020a05600c3c8b00b003f04057bf1bsm13656303wmb.18.2023.04.03.13.36.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Apr 2023 13:36:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35dc50ff-d25f-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680554188;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=hNpPseAD6nHZ7v+hgrSa5UPOdQF/BU4JEX0k08mqk2E=;
        b=BzrQf8shkKF3ucc8v0yanJ+Y+GcgPtueiVt7X99aCBS4DeY+XPagZA/aqqJeocKnGQ
         vAm80zoeHOANwvXNddMOL7IBIjwphOJFStdydNflih2ttP8KROWr9FklP2PJqrEH9o2d
         rGPbRwTINAhRpF9n42/WlHkMeEI5FWxRKLf2neETIG2bLlXvVmsqSehsrk8w4iLJddGC
         mGuxAFIV6d3eGN5jxxOL+borlmXOai7qhD0ZogU9fxDMZqYbUKUAImap3hlnON7jQTh6
         DZW1lb3DC88qHwebbilU0AuHzQJ4gYOCqKu5vPU2JqMqVY6Y7UOaKS6kJiuRcQHgol5N
         +MUg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680554188;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=hNpPseAD6nHZ7v+hgrSa5UPOdQF/BU4JEX0k08mqk2E=;
        b=F49eC6NbCmqqZF8W81npFZJoF4NY9L4ObYkudTNvIzJaT0KmkUqPG6thbgACLkmlob
         dfQxTiFznoBzV/2q4GJAUX/OMw/8hgyngzHBjqpLYukcEtlfi/9lqdCvcx3PWv6zoyw/
         VS2XLLNQMVU4/SEtEeoZ4Be15Mozzzjj6Fz5Pvu3MwJFq+R8mYkdVGdn5iznNAw0kRmP
         /agCbF7wQiv6CDYhZbE/phfO24bQIuAlrot0c9fGdATjg2EGpLxL7HhRqb9iVUBq7KUG
         pbbXmdlalsZgR92+wYMy1Tb9FuUISDbnQa6UjQ0MRSkUU8SKnCOmE2+Q9cOq5NAjIAo7
         ojpA==
X-Gm-Message-State: AAQBX9fca5f1MDGgktpp7ziIgXImedy70KyYhivv/V7CcmDj+L9AcT3V
	C5Pr8AsPpCXdGKDO2iyTXe8=
X-Google-Smtp-Source: AKy350bKz/PYk6lgC93kgyq++7qdvvptIvcgqHIM2SizcDYoNqvyp2ruZaSzDSuJzx2fXfh1j29U3A==
X-Received: by 2002:a1c:f208:0:b0:3eb:4150:a476 with SMTP id s8-20020a1cf208000000b003eb4150a476mr497072wmc.0.1680554188103;
        Mon, 03 Apr 2023 13:36:28 -0700 (PDT)
Date: Mon, 03 Apr 2023 20:36:22 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Jason Andryuk <jandryuk@gmail.com>,
 Anthony PERARD <anthony.perard@citrix.com>
CC: qemu-devel@nongnu.org, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>, David Woodhouse <dwmw@amazon.co.uk>,
 =?ISO-8859-1?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Aurelien Jarno <aurelien@aurel32.net>, Eduardo Habkost <eduardo@habkost.net>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 =?ISO-8859-1?Q?Philippe_Mathieu-Daud=E9?= <philmd@linaro.org>,
 Chuck Zmudzinski <brchuckz@aol.com>
Subject: =?US-ASCII?Q?Re=3A_=5BPATCH_v3_2/6=5D_hw/isa/piix3=3A_Reuse?= =?US-ASCII?Q?_piix3=5Frealize=28=29_in_piix3=5Fxen=5Frealize=28=29?=
In-Reply-To: <CAKf6xpvxf=F52etJ8o3eLQV4JVD5WM57znGoP3ctONRf7uPisA@mail.gmail.com>
References: <20230312120221.99183-1-shentey@gmail.com> <20230312120221.99183-3-shentey@gmail.com> <f52c41f7-e662-4afd-8ac9-ce2c0da2b1be@perard> <7F45B51F-F1E3-4F04-A46F-4C80509C7195@gmail.com> <622b9674-fffd-4634-ac30-d0db3230478e@perard> <CAKf6xpvxf=F52etJ8o3eLQV4JVD5WM57znGoP3ctONRf7uPisA@mail.gmail.com>
Message-ID: <3D51F8CC-6909-4777-9C43-5E277650331C@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: 8bit



On 3 April 2023 12:27:14 UTC, Jason Andryuk <jandryuk@gmail.com> wrote:
>On Mon, Apr 3, 2023 at 5:33 AM Anthony PERARD <anthony.perard@citrix.com> wrote:
>>
>> On Sat, Apr 01, 2023 at 10:36:45PM +0000, Bernhard Beschow wrote:
>> >
>> >
>> > On 30 March 2023 13:00:25 UTC, Anthony PERARD <anthony.perard@citrix.com> wrote:
>> > >On Sun, Mar 12, 2023 at 01:02:17PM +0100, Bernhard Beschow wrote:
>> > >> This is a preparational patch for the next one to make the following
>> > >> more obvious:
>> > >>
>> > >> First, pci_bus_irqs() is now called twice in case of Xen where the
>> > >> second call overrides the pci_set_irq_fn with the Xen variant.
>> > >
>> > >pci_bus_irqs() does allocate pci_bus->irq_count, so the second call in
>> > >piix3_xen_realize() will leak `pci_bus->irq_count`. Could you look if
>> > >pci_bus_irqs_cleanup() can be called before the second pci_bus_irqs()
>> > >call, or maybe some other way to avoid the leak?
>> >
>> > Thanks for catching this! I'll post a v4.
>> >
>> > I think the most fool-proof way to fix this is to free irq_count just before the assignment. pci_bus_irqs_cleanup() would then have to NULL the attribute such that pci_bus_irqs() can be called afterwards.
>> >
>> > BTW: I tried running qemu-system-x86_64 with PIIX4 rather than PIIX3 as Xen guest with my pc-piix4 branch without success. This branch essentially just provides slightly different PCI IDs for PIIX. Does xl or something else in Xen check these? If not then this means I'm still missing something. Under KVM this branch works just fine. Any idea?
>>
>> Maybe the ACPI tables provided by libxl need to be updated.
>> Or maybe something in the firmware (SeaBIOS or OVMF/OvmfXen) checks the
>> id (I know that the PCI id of the root bus is checked, but I don't know
>> if that's the one that's been changed).
>
>Xen also has hvmloader, which runs before SeaBIOS/OVMF.  Looking at
>tools/firmware/hvmloader/pci.c, it has
>        ASSERT((devfn != PCI_ISA_DEVFN) ||
>               ((vendor_id == 0x8086) && (device_id == 0x7000)));
>
>From QEMU, it looks like 0x7000 is PCI_DEVICE_ID_INTEL_82371SB_0, but
>PIIX4 uses 0x7110 (PCI_DEVICE_ID_INTEL_82371AB_0).  Maybe try removing
>that check?

Sounds promising indeed. I'll give it a try!

Regards,
Bernhard

>
>Regards,
>Jason


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 20:47:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 20:47:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517606.803198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjR5C-0001EA-L8; Mon, 03 Apr 2023 20:47:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517606.803198; Mon, 03 Apr 2023 20:47:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjR5C-0001E3-IF; Mon, 03 Apr 2023 20:47:18 +0000
Received: by outflank-mailman (input) for mailman id 517606;
 Mon, 03 Apr 2023 20:47:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fvMM=72=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pjR5A-0001Dx-IY
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 20:47:16 +0000
Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com
 [2a00:1450:4864:20::32b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b77cc6db-d260-11ed-85db-49a42c6b2330;
 Mon, 03 Apr 2023 22:47:15 +0200 (CEST)
Received: by mail-wm1-x32b.google.com with SMTP id
 d11-20020a05600c3acb00b003ef6e6754c5so15174647wms.5
 for <xen-devel@lists.xenproject.org>; Mon, 03 Apr 2023 13:47:15 -0700 (PDT)
Received: from [192.168.69.115] (pas38-h02-176-184-5-132.dsl.sta.abo.bbox.fr.
 [176.184.5.132]) by smtp.gmail.com with ESMTPSA id
 l32-20020a05600c1d2000b003f0321c22basm19001040wms.12.2023.04.03.13.47.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Apr 2023 13:47:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b77cc6db-d260-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680554835;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=lQW83CaEFIKTAbTGMK7zJAXx2zCsHH8oRPYKrzlIxPw=;
        b=fNGh2NV6/uj7FXzhYWT2H2X1iMgLhYqx5DQL6GwdoEYEHkvIK7KGPOYxFcq407H4T+
         SPpDo7z8o68YlOOG65fHhKTjYHMuzp3pjBoINet7XgMWmiIsuJq1JC7vZu25hbshHqkm
         4ePpZieKILGkS/7UVLBCblFqD/mw2XE4aUt4Op92vlqL4PkWxCqswfJkmLEw9eJJvlmP
         BOjt3AZDAkZEszXzDqBSBhAJ0++TjQg0Qhn2xutvRsF0zqVsrFK49JBMJ94rsnhv/u5Z
         UHdh2GTjPUtx1NU5SFlV6jvmMTsWTt8WzXQWdclyISmEQ/TtGy22fxDk3RZzbbD9OIHP
         dbUQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680554835;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=lQW83CaEFIKTAbTGMK7zJAXx2zCsHH8oRPYKrzlIxPw=;
        b=M8YdRhSIDmLFK17YKeoEiDcq/gmUuUqaiSkGZ7ygQVVKeOIV4NR8nwP5vIXo35bmR4
         h+weTKi+Y4+XACyJ4KLUN7goNemqtXa4TNrsklyHhYM8Tmn8YMLFcVnqtme/5Nj6C75+
         V2j3RTlknhpnzf7PpXQwMti3ZMZvwRQKlcuKG9PfYLrjUpqCYJt/yMUPXRRXjiKYsFtl
         Wp72D1SW1TZX3QXsH5QqsLVlA2Qj1Tx/0BWqzzhMydfBzuGZWauXpQygtlVr1IN5E54p
         R3f/BWk2VURVZj2c0FxEnujgZ0u67dISFaHqAbgpLIx8VdJYcON85RewfE1tNgP0bWmu
         xOxQ==
X-Gm-Message-State: AAQBX9cqQwhdwD5DvpT9g51M+o6btFiW86yCqL2/hWdfS6cCZi4rd+gO
	Hy3PY3gghG/ja+O/jr/TXBLexA==
X-Google-Smtp-Source: AKy350YJckSFRktwO+Ap2UM2V7mr6N7rsDxT5yrMXbEjRUK205rt4FU3Ijr4q2cDcOnc7yvmq+n7YQ==
X-Received: by 2002:a05:600c:3789:b0:3ee:b3bf:5f7c with SMTP id o9-20020a05600c378900b003eeb3bf5f7cmr491399wmr.23.1680554835048;
        Mon, 03 Apr 2023 13:47:15 -0700 (PDT)
Message-ID: <2bbe988c-0802-55c3-b2a3-05e3f94e2f04@linaro.org>
Date: Mon, 3 Apr 2023 22:47:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [PATCH 01/13] virtio-scsi: avoid race between unplug and
 transport event
Content-Language: en-US
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, Julia Suvorova <jusual@redhat.com>,
 Kevin Wolf <kwolf@redhat.com>, Peter Lieven <pl@kamp.de>,
 Coiby Xu <Coiby.Xu@gmail.com>, xen-devel@lists.xenproject.org,
 Richard Henderson <richard.henderson@linaro.org>,
 Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
 Eduardo Habkost <eduardo@habkost.net>, Paul Durrant <paul@xen.org>,
 "Richard W.M. Jones" <rjones@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Aarushi Mehta <mehta.aaru20@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Fam Zheng <fam@euphon.net>,
 David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>,
 Juan Quintela <quintela@redhat.com>, Xie Yongji <xieyongji@bytedance.com>,
 Hanna Reitz <hreitz@redhat.com>, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
 eesposit@redhat.com, "Michael S. Tsirkin" <mst@redhat.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
 <20230403183004.347205-2-stefanha@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <20230403183004.347205-2-stefanha@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 3/4/23 20:29, Stefan Hajnoczi wrote:
> Only report a transport reset event to the guest after the SCSIDevice
> has been unrealized by qdev_simple_device_unplug_cb().
> 
> qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
> to false so that scsi_device_find/get() no longer see it.
> 
> scsi_target_emulate_report_luns() also needs to be updated to filter out
> SCSIDevices that are unrealized.
> 
> These changes ensure that the guest driver does not see the SCSIDevice
> that's being unplugged if it responds very quickly to the transport
> reset event.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   hw/scsi/scsi-bus.c    |  3 ++-
>   hw/scsi/virtio-scsi.c | 18 +++++++++---------
>   2 files changed, 11 insertions(+), 10 deletions(-)
> 
> diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
> index c97176110c..f9bd064833 100644
> --- a/hw/scsi/scsi-bus.c
> +++ b/hw/scsi/scsi-bus.c
> @@ -487,7 +487,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
>               DeviceState *qdev = kid->child;
>               SCSIDevice *dev = SCSI_DEVICE(qdev);
>   
> -            if (dev->channel == channel && dev->id == id && dev->lun != 0) {
> +            if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
> +                qatomic_load_acquire(&dev->qdev.realized)) {

Would this be more useful as a qdev_is_realized() helper?


From xen-devel-bounces@lists.xenproject.org Mon Apr 03 21:03:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Apr 2023 21:03:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517609.803209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjRKw-0003ax-1d; Mon, 03 Apr 2023 21:03:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517609.803209; Mon, 03 Apr 2023 21:03:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjRKv-0003aq-Tj; Mon, 03 Apr 2023 21:03:33 +0000
Received: by outflank-mailman (input) for mailman id 517609;
 Mon, 03 Apr 2023 21:03:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/kjH=72=collabora.com=dmitry.osipenko@srs-se1.protection.inumbo.net>)
 id 1pjRKv-0003ak-8J
 for xen-devel@lists.xenproject.org; Mon, 03 Apr 2023 21:03:33 +0000
Received: from madras.collabora.co.uk (madras.collabora.co.uk
 [2a00:1098:0:82:1000:25:2eeb:e5ab])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fac0cbc5-d262-11ed-b464-930f4c7d94ae;
 Mon, 03 Apr 2023 23:03:28 +0200 (CEST)
Received: from [192.168.2.163] (109-252-124-32.nat.spd-mgts.ru
 [109.252.124.32])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: dmitry.osipenko)
 by madras.collabora.co.uk (Postfix) with ESMTPSA id 7E46C66003B2;
 Mon,  3 Apr 2023 22:03:25 +0100 (BST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fac0cbc5-d262-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com;
	s=mail; t=1680555807;
	bh=Q2hdVjNQTK+EUWtWpCirhufnEV7FHiF9pY75Mt779/o=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=lSLI99hnfSoTDZNCbIO7NvqqKE1AY7Sp3cn/LgehMOJ5vXw5XO2KoBa+lQQeCO5bd
	 OP+7K5/00TbABp2dHoU+RS6AigJN09CS3ANla0f8/Ycdu4YQQTVffyIxOm2rn67//I
	 uuBD4Pky5TjcN6f8kBfyO/KOfc2OxUZiLi2+6MvH/KkQo74OlVTA7Pj2rLL6UmB8+4
	 y/+dHo2GKHOnrS4U0MRoUBwrVm2gDVgC68qQ+kEmhY5XN0vHKOSHUSYUdxrAy8Pr2p
	 uwjs8vaYrkGffNsznnM+mFxL1ctoxvapXD7xQ0I/d5xRelHCPUTmnPYMSupsYY9C17
	 9g5F+KZcB8LzA==
Message-ID: <626c649b-e82c-d660-5015-6dd64e48a4a0@collabora.com>
Date: Tue, 4 Apr 2023 00:03:22 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [RFC QEMU PATCH 08/18] virtio-gpu: Initialize Venus
Content-Language: en-US
To: Huang Rui <ray.huang@amd.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>, "Michael S . Tsirkin"
 <mst@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jan Beulich <jbeulich@suse.com>,
 "Dr . David Alan Gilbert" <dgilbert@redhat.com>,
 Robert Beckett <bob.beckett@collabora.com>,
 "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Deucher, Alexander" <Alexander.Deucher@amd.com>,
 "Koenig, Christian" <Christian.Koenig@amd.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 Xenia Ragiadakou <burzalodowa@gmail.com>,
 "Huang, Honglei1" <Honglei1.Huang@amd.com>,
 "Zhang, Julia" <Julia.Zhang@amd.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
References: <20230312092244.451465-1-ray.huang@amd.com>
 <20230312092244.451465-9-ray.huang@amd.com>
 <68195782-0309-2f81-7f1f-84a7fe7bb05c@collabora.com>
 <ZA9HWRYxPUk1OeGe@amd.com>
 <53c25304-bc30-b5af-846e-b247aab67be9@collabora.com>
 <ZB2kGABHUKc+Bk5H@amd.com>
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
In-Reply-To: <ZB2kGABHUKc+Bk5H@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 3/24/23 16:22, Huang Rui wrote:
> On Thu, Mar 16, 2023 at 07:14:47AM +0800, Dmitry Osipenko wrote:
>> On 3/13/23 18:55, Huang Rui wrote:
>>> On Mon, Mar 13, 2023 at 01:51:03AM +0800, Dmitry Osipenko wrote:
>>>> On 3/12/23 12:22, Huang Rui wrote:
>>>>> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>>>>>
>>>>> Request Venus when initializing VirGL.
>>>>>
>>>>> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
>>>>> ---
>>>>>  hw/display/virtio-gpu-virgl.c | 4 ++++
>>>>>  1 file changed, 4 insertions(+)
>>>>>
>>>>> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
>>>>> index fe03dc916f..f5ce206b93 100644
>>>>> --- a/hw/display/virtio-gpu-virgl.c
>>>>> +++ b/hw/display/virtio-gpu-virgl.c
>>>>> @@ -803,7 +803,11 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
>>>>>  {
>>>>>      int ret;
>>>>>  
>>>>> +#ifdef VIRGL_RENDERER_VENUS
>>>>> +    ret = virgl_renderer_init(g, VIRGL_RENDERER_VENUS, &virtio_gpu_3d_cbs);
>>>>> +#else
>>>>>      ret = virgl_renderer_init(g, 0, &virtio_gpu_3d_cbs);
>>>>> +#endif
>>>>
>>>> Note that Venus now requires VIRGL_RENDERER_RENDER_SERVER flag to be
>>>> set. Please test the patches with the latest virglrenderer and etc.
>>>>
>>>> The #ifdef also doesn't allow adding new flags, it should look like:
>>>>
>>>> #ifdef VIRGL_RENDERER_VENUS
>>>>     flags |= VIRGL_RENDERER_RENDER_SERVER;
>>>> #endif
>>>>
>>>>     ret = virgl_renderer_init(g, flags, &virtio_gpu_3d_cbs);
>>>
>>> In fact, we have rebased to the latest virglrenderer:
>>>
>>> We check for both VIRGL_RENDERER_RENDER_SERVER and VIRGL_RENDERER_VENUS
>>> in virglrenderer; either of them works.
>>>
>>> https://gitlab.freedesktop.org/rui/virglrenderer/-/commit/c1322a8a84379b1ef7939f56c6761b0114716f45
>>
>> All the extra changes you made to virglrenderer that QEMU depends on
>> need to go upstream. Please open all the relevant merge requests. Thanks!
>>
> 
> Dmitry, sorry for the late response. I have created the relevant merge
> requests below:
> 
> Virglrenderer:
> https://gitlab.freedesktop.org/virgl/virglrenderer/-/merge_requests/1068
> 
> Mesa:
> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/22108
> 
> I'd appreciate any comments. :-)

Thanks, Ray. I'll try to get to the patches soon.


-- 
Best regards,
Dmitry



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 01:11:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 01:11:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517616.803221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjVCX-0001pF-HO; Tue, 04 Apr 2023 01:11:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517616.803221; Tue, 04 Apr 2023 01:11:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjVCX-0001p8-EE; Tue, 04 Apr 2023 01:11:09 +0000
Received: by outflank-mailman (input) for mailman id 517616;
 Tue, 04 Apr 2023 01:11:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjVCW-0001ow-31; Tue, 04 Apr 2023 01:11:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjVCV-0000ix-VQ; Tue, 04 Apr 2023 01:11:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjVCV-00064T-EH; Tue, 04 Apr 2023 01:11:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjVCV-0006FR-Dj; Tue, 04 Apr 2023 01:11:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NGuRm6LRtwdRikEnpctEtxAxJAPOwnanjb4jTii76sI=; b=cEOjz3wxxOzMyX912Cb+XwCiV2
	eBOwdOP6mKtcDnteOJrxrhnASU9qpCGCMEaznPCgue9V8jwufWgfoqcPzYOWtG1S19Yvvmb3+JSYp
	wFYkQa5Do7uMJOkg0uUNVu8pWnt3QYx/Z/+2VMQ2bgijlwPybTwEKnYJohe/kkrKhGXw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180130-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180130: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-freebsd12-amd64:guest-localmigrate:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=148341f0a2f53b5e8808d093333d85170586a15d
X-Osstest-Versions-That:
    linux=7e364e56293bb98cae1b55fd835f5991c4e96e7d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Apr 2023 01:11:07 +0000

flight 180130 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180130/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-freebsd12-amd64 17 guest-localmigrate   fail REGR. vs. 180116
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 180116
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180116

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180116
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                148341f0a2f53b5e8808d093333d85170586a15d
baseline version:
 linux                7e364e56293bb98cae1b55fd835f5991c4e96e7d

Last test of basis   180116  2023-04-03 02:26:51 Z    0 days
Testing same since   180130  2023-04-03 17:12:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Brauner <brauner@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael Kelley <mikelley@microsoft.com>
  Mohammed Gamal <mgamal@redhat.com>
  Wei Liu <wei.liu@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 148341f0a2f53b5e8808d093333d85170586a15d
Merge: 2d72ab2449fa cb2239c198ad
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Apr 3 09:41:24 2023 -0700

    Merge tag 'vfs.misc.fixes.v6.3-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/idmapping
    
    Pull vfs fix from Christian Brauner:
     "When a mount or mount tree is made shared the vfs allocates new peer
      group ids for all mounts that have no peer group id set. Only mounts
      that aren't marked with MNT_SHARED are relevant here as MNT_SHARED
      indicates that the mount has fully transitioned to a shared mount. The
      peer group id handling is done with namespace lock held.
    
      On failure, the peer group id settings of mounts for which a new peer
      group id was allocated need to be reverted and the allocated peer
      group id freed. The cleanup_group_ids() helper can identify the mounts
      to cleanup by checking whether a given mount has a peer group id set
      but isn't marked MNT_SHARED. The deallocation always needs to happen
      with namespace lock held to protect against concurrent modifications
      of the propagation settings.
    
      This fixes the one place where the namespace lock was dropped before
      calling cleanup_group_ids()"
    
    * tag 'vfs.misc.fixes.v6.3-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/idmapping:
      fs: drop peer group ids under namespace lock

commit 2d72ab2449fa9fce8f6898fd5adda10497f7c111
Merge: 7e364e56293b f8acb24aaf89
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Apr 3 09:34:08 2023 -0700

    Merge tag 'hyperv-fixes-signed-20230402' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux
    
    Pull hyperv fixes from Wei Liu:
    
     - Fix a bug in channel allocation for VMbus (Mohammed Gamal)
    
     - Do not allow root partition functionality in CVM (Michael Kelley)
    
    * tag 'hyperv-fixes-signed-20230402' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux:
      x86/hyperv: Block root partition functionality in a Confidential VM
      Drivers: vmbus: Check for channel allocation before looking up relids

commit cb2239c198ad9fbd5aced22cf93e45562da781eb
Author: Christian Brauner <brauner@kernel.org>
Date:   Thu Mar 30 09:13:16 2023 +0200

    fs: drop peer group ids under namespace lock
    
    When cleaning up peer group ids in the failure path we need to make sure
    to hold on to the namespace lock. Otherwise another thread might just
    turn the mount from a shared into a non-shared mount concurrently.
    
    Link: https://lore.kernel.org/lkml/00000000000088694505f8132d77@google.com
    Fixes: 2a1867219c7b ("fs: add mount_setattr()")
    Reported-by: syzbot+8ac3859139c685c4f597@syzkaller.appspotmail.com
    Cc: stable@vger.kernel.org # 5.12+
    Message-Id: <20230330-vfs-mount_setattr-propagation-fix-v1-1-37548d91533b@kernel.org>
    Signed-off-by: Christian Brauner <brauner@kernel.org>

commit f8acb24aaf89fc46cd953229462ea8abe31b395f
Author: Michael Kelley <mikelley@microsoft.com>
Date:   Wed Mar 15 08:34:13 2023 -0700

    x86/hyperv: Block root partition functionality in a Confidential VM
    
    Hyper-V should never specify a VM that is a Confidential VM and also
    running in the root partition.  Nonetheless, explicitly block such a
    combination to guard against a compromised Hyper-V maliciously trying to
    exploit root partition functionality in a Confidential VM to expose
    Confidential VM secrets. No known bug is being fixed, but the attack
    surface for Confidential VMs on Hyper-V is reduced.
    
    Signed-off-by: Michael Kelley <mikelley@microsoft.com>
    Link: https://lore.kernel.org/r/1678894453-95392-1-git-send-email-mikelley@microsoft.com
    Signed-off-by: Wei Liu <wei.liu@kernel.org>

commit 1eb65c8687316c65140b48fad27133d583178e15
Author: Mohammed Gamal <mgamal@redhat.com>
Date:   Fri Feb 17 22:44:11 2023 +0200

    Drivers: vmbus: Check for channel allocation before looking up relids
    
    relid2channel() assumes the vmbus channel array has been allocated when
    it is called. However, in cases such as kdump/kexec, not all relids will
    be reset by the host. If the guest receives a vmbus interrupt while the
    second kernel boots, during vmbus driver initialization before
    vmbus_connect() is called, before it finishes, or after it fails, the
    vmbus interrupt service routine is invoked; it in turn calls
    relid2channel(), which can cause a NULL pointer dereference.
    
    Print a warning and error out in relid2channel() for a channel id that's invalid
    in the second kernel.
    
    Fixes: 8b6a877c060e ("Drivers: hv: vmbus: Replace the per-CPU channel lists with a global array of channels")
    
    Signed-off-by: Mohammed Gamal <mgamal@redhat.com>
    Reviewed-by: Dexuan Cui <decui@microsoft.com>
    Link: https://lore.kernel.org/r/20230217204411.212709-1-mgamal@redhat.com
    Signed-off-by: Wei Liu <wei.liu@kernel.org>


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 04:10:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 04:10:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517624.803232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjY0E-0003Cy-C3; Tue, 04 Apr 2023 04:10:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517624.803232; Tue, 04 Apr 2023 04:10:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjY0E-0003Cr-8Q; Tue, 04 Apr 2023 04:10:38 +0000
Received: by outflank-mailman (input) for mailman id 517624;
 Tue, 04 Apr 2023 04:10:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjY0D-0003Ch-FR; Tue, 04 Apr 2023 04:10:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjY0D-00067W-Bt; Tue, 04 Apr 2023 04:10:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjY0C-0004Hf-SP; Tue, 04 Apr 2023 04:10:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjY0C-0005Ir-RP; Tue, 04 Apr 2023 04:10:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UaSrjTFr7NlGVotpBsXurKBxjodXIRhN7GGHzpHI4EU=; b=6yuVXNHZNy+n7XXri1U9eMQCX5
	f0t1EwzH4StdYjnCNsdcQlnG2b8g3KwEOzIpR1AOe323swtopah6xAph8PTk2HUtIDnQ3RQyolxls
	x6iffoh1+ugM6eyqxgKj6IQwYSjxIS1QHxueIad0aGnn3+ggyBYNDCtihxH+bYlzv4rk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180126-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180126: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-shadow:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=bfa2e6a246225233f09a2523939e01dcf83bca4c
X-Osstest-Versions-That:
    xen=d6e0b4c41a38655ade7ecb566e8b2961282769fb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Apr 2023 04:10:36 +0000

flight 180126 xen-unstable real [real]
flight 180132 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180126/
http://logs.test-lab.xenproject.org/osstest/logs/180132/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-shadow     7 xen-install         fail pass in 180132-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180115
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180115
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180115
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180115
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180115
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180115
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180115
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180115
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180115
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  bfa2e6a246225233f09a2523939e01dcf83bca4c
baseline version:
 xen                  d6e0b4c41a38655ade7ecb566e8b2961282769fb

Last test of basis   180115  2023-04-03 01:52:10 Z    1 days
Testing same since   180126  2023-04-03 15:09:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d6e0b4c41a..bfa2e6a246  bfa2e6a246225233f09a2523939e01dcf83bca4c -> master


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 06:37:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 06:37:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517642.803290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaIH-0001Gw-Un; Tue, 04 Apr 2023 06:37:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517642.803290; Tue, 04 Apr 2023 06:37:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaIH-0001Gp-R6; Tue, 04 Apr 2023 06:37:25 +0000
Received: by outflank-mailman (input) for mailman id 517642;
 Tue, 04 Apr 2023 06:37:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjaIG-0001Gj-Fi
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 06:37:24 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20613.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::613])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2642e993-d2b3-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 08:37:20 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6989.eurprd04.prod.outlook.com (2603:10a6:803:131::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 06:37:18 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 06:37:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2642e993-d2b3-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kgiPZ83m5kwIHcBbRDEFp48bVUAOOTXKJlxGJP62hYe0uFNRjUzi8SuyN/Wdfp+Ky85EbnKQ8yrRWKZrVRek2Y9EEPaYEm9siYtbUL4MfYgiiUlPrByZzNBOcvzgDS1T7hjv712sn6LMOBeqNsHHWklKwiOhjufT0ELARwpkoI27D4GhnWiZFeQdUStKyVaNVNsZchdH/bIpV9Z3kKgZoE0LUGq5/+fLUF0uYpLLMsMrrjEzfTGZnmDvrSV8aJeidc83nTOlybt6uSs7IjLsMpckYBrzyjQeDb7C3sOm/mc00VAhDcLwRN2sTbb+q19DkT2WwN/aVQwb3ep2QjwXXw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=WCK7+KxNPVlBfCrlOenGmTflrOgFep/xixoemxsHHV8=;
 b=nPf/VyMWmsahspydfceckauCxIwnDIpDHn6ElcXJQrZhMj5ert2z8uS5NdY6moFrBLTS44eivQ85S5/BZp2/uINqnkbZUSaGXL4e8pgp6r5EecBDkMvquH6a7350NVShdCb9xAkAe7A7CQ3eopYfubJhk+4nZTfUFzVRJhJLP+vxBFhKJ6bXLDoTpC/U1vZzv0rjlPd7LzXH/EkcxbN8B/hYxNWJd1Odd4pRIfHNO1xp/p/4e6RY77suVJ2NSrwgaJiuHJMn1F4CE4SwLfT+5H88UzgPXJamP4OKQvKz1Ou+vZNd6E6DYEL8NyyQ2AqybF62seByNM9T2Y1K/3a+Ng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WCK7+KxNPVlBfCrlOenGmTflrOgFep/xixoemxsHHV8=;
 b=wiXbskfFx4wIgpYSY1DL1Vu+JhDBjd1wGekAVbx0f9StkxQeLYuEGdtrauxHC6AROi7hGX5o7I9H/wWNHla46GY6pAVz8XAlM1tXUC2kXwtWRmt9CJkr9LpdEzMkH66ois0Sox8GBQhmG6JMg/VW67F+8zzn0DAePCKhUPafN2TIdlmVb6uCVqd69ncimIsJnqGFDIPKa1AVNWw2K7KwLGuBaeHJjefd/0VJ9azniEe94Bgx0MgBYbqvyKDzu5eyxtrBF5vSHmGCXpLEiYGkt0W2g8hlQggCQYp1dNZiaHcD6zeZdZM+76miVA/5tmZNtar6sXBgHStb8uUpYlQtmQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8353df4d-44ef-e39c-9a66-b6a7a73d5ff8@suse.com>
Date: Tue, 4 Apr 2023 08:37:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul/fuzzer: re-arrange cleaning
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0045.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB6989:EE_
X-MS-Office365-Filtering-Correlation-Id: 6aab12ff-9bb6-49b7-2e32-08db34d708de
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6aab12ff-9bb6-49b7-2e32-08db34d708de
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 06:37:17.6602
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UUHPJcy7JpkbBQ+uNhXTpuxt8zLGz0Jc9k0bIEkk+bFSIGxoVJsrL5rgu2tROfGDcFCRiOnE2aKSe6W+eEMTpA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6989

The latter of the two commits referenced below converted x86_emulate
from a symlinked dir to a real one, holding symlinked files. Yet even
before that, the split between distclean and clean was suspicious: a
similar split, removing symlinks only in distclean, doesn't exist
anywhere else in the tree afaics.

Fixes: c808475882ef ("tools/fuzz: introduce x86 instruction emulator target")
Fixes: 9ace97ab9b87 ("x86emul: split off opcode 0f01 handling")
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
The use of FORCE also looks suspicious to me in the rules creating the
symlinks. Supposedly that's to deal with the source tree moving, but is
that really something we need to care about (and if so, here but not
elsewhere)?

--- a/tools/fuzz/x86_instruction_emulator/Makefile
+++ b/tools/fuzz/x86_instruction_emulator/Makefile
@@ -60,11 +60,11 @@ all: x86-insn-fuzz-all
 
 .PHONY: distclean
 distclean: clean
-	rm -f x86_emulate x86-emulate.c x86-emulate.h wrappers.c cpuid.c
 
 .PHONY: clean
 clean:
 	rm -f *.a *.o $(DEPS_RM) afl-harness afl-harness-cov *.gcda *.gcno *.gcov
+	rm -rf x86_emulate x86-emulate.c x86-emulate.h wrappers.c cpuid.c
 
 .PHONY: install
 install: all


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 06:38:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 06:38:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517646.803300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaIs-0001mk-CI; Tue, 04 Apr 2023 06:38:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517646.803300; Tue, 04 Apr 2023 06:38:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaIs-0001md-8b; Tue, 04 Apr 2023 06:38:02 +0000
Received: by outflank-mailman (input) for mailman id 517646;
 Tue, 04 Apr 2023 06:38:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjaIq-0001kf-1I
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 06:38:00 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20608.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::608])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3c90218e-d2b3-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 08:37:58 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6989.eurprd04.prod.outlook.com (2603:10a6:803:131::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 06:37:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 06:37:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c90218e-d2b3-11ed-85db-49a42c6b2330
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <515bbf07-91fa-1932-1be1-1411f7814e6e@suse.com>
Date: Tue, 4 Apr 2023 08:37:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul/test: drop bogus .PHONY
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0047.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB6989:EE_
X-MS-Office365-Filtering-Correlation-Id: 14c0fabe-515c-4cf5-de33-08db34d72023
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 14c0fabe-515c-4cf5-de33-08db34d72023
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 06:37:56.7225
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: C+KkG4e+DOvljm5O5UTrR5Wlwak3JMgdN6WCWw9TBiQsBTJHhXNdG1FKtPIAgrVhDXao+0thRzFhPfnML1Xp4g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6989

x86_emulate is a real (directory) target.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -278,7 +278,6 @@ else
 run32 clean32: %32: %
 endif
 
-.PHONY: x86_emulate
 x86_emulate:
 	mkdir -p $@
 	ln -sf $(XEN_ROOT)/xen/arch/x86/$@/*.[ch] $@/


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 06:41:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 06:41:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517649.803310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaLu-0003FR-Q5; Tue, 04 Apr 2023 06:41:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517649.803310; Tue, 04 Apr 2023 06:41:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaLu-0003FK-NA; Tue, 04 Apr 2023 06:41:10 +0000
Received: by outflank-mailman (input) for mailman id 517649;
 Tue, 04 Apr 2023 06:41:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjaLt-0003FC-MR
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 06:41:09 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0611.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::611])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ad84d469-d2b3-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 08:41:07 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6989.eurprd04.prod.outlook.com (2603:10a6:803:131::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 06:41:06 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 06:41:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad84d469-d2b3-11ed-b464-930f4c7d94ae
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9c4ca4a1-1b68-5ee0-0434-e6c9ec7d1ef6@suse.com>
Date: Tue, 4 Apr 2023 08:41:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v9 4/5] xen/arm: switch ARM to use generic implementation
 of bug.h
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org, Julien Grall <julien@xen.org>
References: <cover.1680086655.git.oleksii.kurochko@gmail.com>
 <8fdb98350ae4fc6029738d0aabe13a57e1945a50.1680086655.git.oleksii.kurochko@gmail.com>
 <3a94ad32-159d-d06f-cba6-3bb40f9f2085@xen.org>
 <605245331bb93b7e60a4a9d65b19b6642d897034.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <605245331bb93b7e60a4a9d65b19b6642d897034.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0154.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB6989:EE_
X-MS-Office365-Filtering-Correlation-Id: edc3c366-953d-4c12-fb39-08db34d790f3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: edc3c366-953d-4c12-fb39-08db34d790f3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 06:41:05.9235
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dD4ZRKvkD68n4VjLQhm144hMBJOLoL/onhONJiRvxCztAAo6iEtfp5XHqfagCldncCnl2lxituXGGUxwaIr8LA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6989

On 03.04.2023 20:40, Oleksii wrote:
> Hello Julien, 
> On Fri, 2023-03-31 at 22:05 +0100, Julien Grall wrote:
>> Hi Oleksii,
>>
>> I was going to ack the patch but then I spotted something that would 
>> want some clarification.
>>
>> On 29/03/2023 11:50, Oleksii Kurochko wrote:
>>> diff --git a/xen/arch/arm/include/asm/bug.h b/xen/arch/arm/include/asm/bug.h
>>> index cacaf014ab..3fb0471a9b 100644
>>> --- a/xen/arch/arm/include/asm/bug.h
>>> +++ b/xen/arch/arm/include/asm/bug.h
>>> @@ -1,6 +1,24 @@
>>>   #ifndef __ARM_BUG_H__
>>>   #define __ARM_BUG_H__
>>>   
>>> +/*
>>> + * Please do not include in the header any header that might
>>> + * use BUG/ASSERT/etc macros as they will be defined later after
>>> + * the return to <xen/bug.h> from the current header:
>>> + *
>>> + * <xen/bug.h>:
>>> + *  ...
>>> + *   <asm/bug.h>:
>>> + *     ...
>>> + *     <any_header_which_uses_BUG/ASSERT/etc macros.h>
>>> + *     ...
>>> + *  ...
>>> + *  #define BUG() ...
>>> + *  ...
>>> + *  #define ASSERT() ...
>>> + *  ...
>>> + */
>>> +
>>>   #include <xen/types.h>
>>>   
>>>   #if defined(CONFIG_ARM_32)
>>> @@ -11,76 +29,7 @@
>>>   # error "unknown ARM variant"
>>>   #endif
>>>   
>>> -#define BUG_FRAME_STRUCT
>>> -
>>> -struct bug_frame {
>>> -    signed int loc_disp;    /* Relative address to the bug address */
>>> -    signed int file_disp;   /* Relative address to the filename */
>>> -    signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>>> -    uint16_t line;          /* Line number */
>>> -    uint32_t pad0:16;       /* Padding for 8-bytes align */
>>> -};
>>> -
>>> -#define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
>>> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
>>> -#define bug_line(b) ((b)->line)
>>> -#define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>>> -
>>> -/* Many versions of GCC doesn't support the asm %c parameter which would
>>> - * be preferable to this unpleasantness. We use mergeable string
>>> - * sections to avoid multiple copies of the string appearing in the
>>> - * Xen image. BUGFRAME_run_fn needs to be handled separately.
>>> - */
>>
>> Given this comment ...
>>
>>> -#define BUG_FRAME(type, line, file, has_msg, msg) do {                      \
>>> -    BUILD_BUG_ON((line) >> 16);                                             \
>>> -    BUILD_BUG_ON((type) >= BUGFRAME_NR);                                    \
>>> -    asm ("1:"BUG_INSTR"\n"                                                  \
>>> -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"                \
>>> -         "2:\t.asciz " __stringify(file) "\n"                               \
>>> -         "3:\n"                                                             \
>>> -         ".if " #has_msg "\n"                                               \
>>> -         "\t.asciz " #msg "\n"                                              \
>>> -         ".endif\n"                                                         \
>>> -         ".popsection\n"                                                    \
>>> -         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
>>> -         "4:\n"                                                             \
>>> -         ".p2align 2\n"                                                     \
>>> -         ".long (1b - 4b)\n"                                                \
>>> -         ".long (2b - 4b)\n"                                                \
>>> -         ".long (3b - 4b)\n"                                                \
>>> -         ".hword " __stringify(line) ", 0\n"                                \
>>> -         ".popsection");                                                    \
>>> -} while (0)
>>> -
>>> -/*
>>> - * GCC will not allow to use "i"  when PIE is enabled (Xen doesn't set the
>>> - * flag but instead rely on the default value from the compiler). So the
>>> - * easiest way to implement run_in_exception_handler() is to pass the to
>>> - * be called function in a fixed register.
>>> - */
>>> -#define  run_in_exception_handler(fn) do {                                  \
>>> -    asm ("mov " __stringify(BUG_FN_REG) ", %0\n"                            \
>>> -         "1:"BUG_INSTR"\n"                                                  \
>>> -         ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn) ","       \
>>> -         "             \"a\", %%progbits\n"                                 \
>>> -         "2:\n"                                                             \
>>> -         ".p2align 2\n"                                                     \
>>> -         ".long (1b - 2b)\n"                                                \
>>> -         ".long 0, 0, 0\n"                                                  \
>>> -         ".popsection" :: "r" (fn) : __stringify(BUG_FN_REG) );             \
>>> -} while (0)
>>> -
>>> -#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 0, "")
>>> -
>>> -#define BUG() do {                                              \
>>> -    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 0, "");        \
>>> -    unreachable();                                              \
>>> -} while (0)
>>> -
>>> -#define assert_failed(msg) do {                                 \
>>> -    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, msg);     \
>>> -    unreachable();                                              \
>>> -} while (0)
>>> +#define BUG_ASM_CONST   "c"
>>
>> ... you should explain in the commit message why this is needed and
>> the 
>> problem described above is not a problem anymore.
>>
>> For instance, I managed to build it without 'c' on arm64 [1]. But it 
>> does break on arm32 [2]. I know that Arm is also where '%c' was an
>> issue.
>>
>> Skimming through linux, the reason seems to be that GCC may add '#'
>> when 
>> it should not. That said, I haven't look at the impact on the generic
>> implementation. Neither I looked at which version may be affected
>> (the 
>> original message was from 2011).
> You are right that some compilers add '#' when they shouldn't. The same
> thing happens with RISC-V.

RISC-V doesn't know of a '#' prefix, does it? '#' is a comment character
there afaik, like for many other architectures.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 06:47:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 06:47:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517652.803320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaRQ-0003uq-Da; Tue, 04 Apr 2023 06:46:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517652.803320; Tue, 04 Apr 2023 06:46:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaRQ-0003uj-AV; Tue, 04 Apr 2023 06:46:52 +0000
Received: by outflank-mailman (input) for mailman id 517652;
 Tue, 04 Apr 2023 06:46:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjaRP-0003uN-70
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 06:46:51 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2062a.outbound.protection.outlook.com
 [2a01:111:f400:fe13::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7937d7a4-d2b4-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 08:46:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7972.eurprd04.prod.outlook.com (2603:10a6:20b:236::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 06:46:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 06:46:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7937d7a4-d2b4-11ed-b464-930f4c7d94ae
Message-ID: <d4fc10a3-abe7-1298-daac-00c96147c7a5@suse.com>
Date: Tue, 4 Apr 2023 08:46:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] cmdline: document "extra_guest_irqs" upper bound
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0044.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7972:EE_
X-MS-Office365-Filtering-Correlation-Id: 437e3e65-02c1-4b8f-0f43-08db34d85a9c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com

PHYSDEVOP_pirq_eoi_gmfn_v<N> accepting just a single GFN implies that no
more than 32k pIRQ-s can be used by a domain. Document this upper bound.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I was uncertain about also introducing a bounds check in code: We don't
check for bogus / abusive values elsewhere either.

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1130,7 +1130,8 @@ common for all domUs, while the optional
 is for dom0.  Changing the setting for domU has no impact on dom0 and vice
 versa.  For example to change dom0 without changing domU, use
 `extra_guest_irqs=,512`.  The default value for Dom0 and an eventual separate
-hardware domain is architecture dependent.
+hardware domain is architecture dependent.  The upper limit for both values is
+32768.
 Note that specifying zero as domU value means zero, while for dom0 it means
 to use the default.
 


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 06:55:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 06:55:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517656.803329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaZ9-0005Py-Ap; Tue, 04 Apr 2023 06:54:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517656.803329; Tue, 04 Apr 2023 06:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaZ9-0005Pb-87; Tue, 04 Apr 2023 06:54:51 +0000
Received: by outflank-mailman (input) for mailman id 517656;
 Tue, 04 Apr 2023 06:54:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjaZ8-0005PV-5c
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 06:54:50 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 95647f52-d2b5-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 08:54:47 +0200 (CEST)
Received: from mail-bn7nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 02:54:40 -0400
Received: from BN7PR03MB3618.namprd03.prod.outlook.com (2603:10b6:406:c3::27)
 by BN9PR03MB6073.namprd03.prod.outlook.com (2603:10b6:408:136::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 06:54:36 +0000
Received: from BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::a84f:cb5:8471:f9d6]) by BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::a84f:cb5:8471:f9d6%7]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 06:54:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95647f52-d2b5-11ed-b464-930f4c7d94ae
Message-ID: <38a2565d-da05-7a20-c144-6fb388b12716@citrix.com>
Date: Tue, 4 Apr 2023 07:54:31 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86emul/fuzzer: re-arrange cleaning
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Anthony Perard <anthony.perard@citrix.com>
References: <8353df4d-44ef-e39c-9a66-b6a7a73d5ff8@suse.com>
In-Reply-To: <8353df4d-44ef-e39c-9a66-b6a7a73d5ff8@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0598.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:295::22) To BN7PR03MB3618.namprd03.prod.outlook.com
 (2603:10b6:406:c3::27)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN7PR03MB3618:EE_|BN9PR03MB6073:EE_
X-MS-Office365-Filtering-Correlation-Id: 56e33d06-9eee-4cff-9634-08db34d973dc
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com

On 04/04/2023 7:37 am, Jan Beulich wrote:
> The latter of the two commits referenced below converted x86_emulate
> from a symlinked dir to a real one, holding symlinked files. Yet even
> before that the split between distclean and clean was suspicious: A
> similar split, removing symlinks only in distclean, doesn't exist
> anywhere else in the tree afaics.
>
> Fixes: c808475882ef ("tools/fuzz: introduce x86 instruction emulator target")
> Fixes: 9ace97ab9b87 ("x86emul: split off opcode 0f01 handling")
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 06:55:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 06:55:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517657.803340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaZT-0005p4-JE; Tue, 04 Apr 2023 06:55:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517657.803340; Tue, 04 Apr 2023 06:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjaZT-0005ow-Ff; Tue, 04 Apr 2023 06:55:11 +0000
Received: by outflank-mailman (input) for mailman id 517657;
 Tue, 04 Apr 2023 06:55:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjaZS-0005n7-3q
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 06:55:10 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a20f5801-d2b5-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 08:55:08 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 02:55:04 -0400
Received: from BN7PR03MB3618.namprd03.prod.outlook.com (2603:10b6:406:c3::27)
 by CH2PR03MB5237.namprd03.prod.outlook.com (2603:10b6:610:9c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 06:55:02 +0000
Received: from BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::a84f:cb5:8471:f9d6]) by BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::a84f:cb5:8471:f9d6%7]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 06:55:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a20f5801-d2b5-11ed-85db-49a42c6b2330
Message-ID: <7de7a0a9-dd1b-c98b-9663-23c4aedb65b1@citrix.com>
Date: Tue, 4 Apr 2023 07:54:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86emul/test: drop bogus .PHONY
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Anthony Perard <anthony.perard@citrix.com>
References: <515bbf07-91fa-1932-1be1-1411f7814e6e@suse.com>
In-Reply-To: <515bbf07-91fa-1932-1be1-1411f7814e6e@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0596.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:295::10) To BN7PR03MB3618.namprd03.prod.outlook.com
 (2603:10b6:406:c3::27)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN7PR03MB3618:EE_|CH2PR03MB5237:EE_
X-MS-Office365-Filtering-Correlation-Id: d3fb6780-f414-48cc-fb03-08db34d98369
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d3fb6780-f414-48cc-fb03-08db34d98369
X-MS-Exchange-CrossTenant-AuthSource: BN7PR03MB3618.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 06:55:02.1914
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7CXdfMX8kUGciHWFxH1v3VxUTlN/qZQRURPMKZLs+1ido4nN4m6lPwTT3Hp1NNlphc7lXzgOMYmuki7kRKAbU1MRdgnH5bs/XNhrk0o2Vts=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR03MB5237

On 04/04/2023 7:37 am, Jan Beulich wrote:
> x86_emulate is a real (directory) target.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
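For readers less familiar with make: `.PHONY` marks a target name as a pure action rather than a file, so its recipe runs unconditionally. When the target names something that really exists on disk (as the `x86_emulate` directory does here), a `.PHONY` declaration defeats make's timestamp check. A minimal sketch of the distinction, with a hypothetical rule rather than the actual test harness Makefile:

```make
# Hypothetical sketch: x86_emulate names a real directory created by its
# rule, so make should compare timestamps and skip the recipe once the
# directory exists.
x86_emulate:
	mkdir -p $@

# Declaring ".PHONY: x86_emulate" would be bogus for such a target: make
# would treat the name as a pure action and re-run the recipe (and
# rebuild anything depending on it) on every invocation.
```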


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 07:09:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 07:09:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517664.803350 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjanR-0007ec-R1; Tue, 04 Apr 2023 07:09:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517664.803350; Tue, 04 Apr 2023 07:09:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjanR-0007eV-ON; Tue, 04 Apr 2023 07:09:37 +0000
Received: by outflank-mailman (input) for mailman id 517664;
 Tue, 04 Apr 2023 07:09:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjanQ-0007eN-SZ
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 07:09:36 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a63cb010-d2b7-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 09:09:34 +0200 (CEST)
Received: from mail-mw2nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 03:09:31 -0400
Received: from BN7PR03MB3618.namprd03.prod.outlook.com (2603:10b6:406:c3::27)
 by SJ0PR03MB5696.namprd03.prod.outlook.com (2603:10b6:a03:2d6::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 07:09:29 +0000
Received: from BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::a84f:cb5:8471:f9d6]) by BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::a84f:cb5:8471:f9d6%7]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 07:09:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a63cb010-d2b7-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680592174;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=wFGQFCHVvkmCrHJ1jX9q5LMd/o9f8WZjGtAJf7Jd0UE=;
  b=dJu7SBLs4BhfaUBslisJdVjswQzl/Sx5pfPFdrrCfTri+7Un9L+qbMeI
   eUGRARbOzIvk0B/4qCYTWhZvYv+ezpY9mG4eg5S4E1gYTNS1Wg3sOdEVJ
   0kq2aqUPQ2tNiEyW2eVCnsFw/8l7S7cJR623wLPFEpvguSlSoWfFf+zRU
   w=;
X-IronPort-RemoteIP: 104.47.55.108
X-IronPort-MID: 104636992
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="104636992"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VAd9ThiVB1C/euKnE+DNBs86UG+rK+QzGvc4Qy+vcHCkQlfSpokUaK/PuMKk812NYbwjt/+XwCiqXiGTBa6oW/mNrPGlyX4ngPf+1k/xXjWJD/dDUbdykrpBTxjd5/J7sGTjXDeDSEg3aduIce8XooOgVPM0j6IyHyNs1nZboYO3qW55gY7PXz5Htld4B/YImUQGbUqy5QrtJRg8oWqUWNRbfDcKMXTnZdOtlM2XI701ACoucXhBU9Kj14wlI5tqxpat3x38b4XSq9GambHru80NkMfWRiSBCCjPsVDTLrfApJnDWT0l0z7tHdoDKoYHbkAU9QFC2IpD+/IietN7Cw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wFGQFCHVvkmCrHJ1jX9q5LMd/o9f8WZjGtAJf7Jd0UE=;
 b=hg6On8P1fYABJ+Q9Pns0LSJaW0DLCCB1B3GA573Y7dl0NSEF+SlxrN0F44MzHWKQBDcLUbr3qMFpHuE+2CaYnKVJ6Et7tUV1Brgio2LfrtFbCHF0uYWXVYnAi+hygKTbSvZVXOOmy/HMxsx0eR/qcYqtWhC52X5zKk9jPZTUOBwifFTSIZo/sqVbMIaokzhs23U4JBxOH6+nmfftQNxusyjXsKCJ9h0Alt33VEqhzBmAbiJUGOjGdh0Ra4Wka5zOVj0iHj3KzOUuWGvZP5cTtpnH+15skuP+i6bcB5r4CAfTHplcDiORRgSYWUeHd/BRUU20M7gRPHemwAxqekLzrg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wFGQFCHVvkmCrHJ1jX9q5LMd/o9f8WZjGtAJf7Jd0UE=;
 b=ij63QU8l+Qz8QF4JeGPe38hrJWuSDxesxfMh2ki96JmxIvhcZMG1WZvCQhadQ3/dstdHHUkIJMQFnkw2NMJ66Iw5SZsFzQRnG0l56qplT+HUplhKCIjYYE4pFhyyRLRNoGZdDly//ZQxpZAd8nEwEi/kOl9yp8sAfgRiC/WcghE=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <0e165632-938d-596b-3087-75da66641d85@citrix.com>
Date: Tue, 4 Apr 2023 08:09:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] cmdline: document "extra_guest_irqs" upper bound
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <d4fc10a3-abe7-1298-daac-00c96147c7a5@suse.com>
In-Reply-To: <d4fc10a3-abe7-1298-daac-00c96147c7a5@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0122.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c6::15) To BN7PR03MB3618.namprd03.prod.outlook.com
 (2603:10b6:406:c3::27)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN7PR03MB3618:EE_|SJ0PR03MB5696:EE_
X-MS-Office365-Filtering-Correlation-Id: 5b4b58ae-eb1c-483a-c2a9-08db34db875a
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5b4b58ae-eb1c-483a-c2a9-08db34db875a
X-MS-Exchange-CrossTenant-AuthSource: BN7PR03MB3618.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 07:09:28.0290
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KkMYjRKcRyMclxKryR9ViLLMAHYE7YSojcUV/DGx0H44jnT/V1wDZNaU5HRcK6VYwVxUICHuZIFd6bVROi36cuBz9npwuoUshJv0wNi9J0s=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5696

On 04/04/2023 7:46 am, Jan Beulich wrote:
> PHYSDEVOP_pirq_eoi_gmfn_v<N> accepting just a single GFN implies that no
> more than 32k pIRQ-s can be used by a domain. Document this upper bound.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
> I was uncertain about also introducing a bounds check in code: We don't
> check for bogus / abusive values elsewhere either.

Normally not, but in this case I suspect it's worth it.  Without a
bounds check, don't we risk wandering off the page?
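The 32k figure follows from the interface's layout: PHYSDEVOP_pirq_eoi_gmfn_v&lt;N&gt; registers a single guest frame as an EOI bitmap with one bit per pIRQ, so the bound is simply the number of bits in one page. A quick sketch of the arithmetic (assuming the usual 4 KiB page size):

```python
# One guest frame (GFN) backs the pIRQ EOI bitmap, one bit per pIRQ.
# Assumption: 4 KiB pages, as on x86.
PAGE_SIZE_BYTES = 4096
BITS_PER_BYTE = 8

max_pirqs = PAGE_SIZE_BYTES * BITS_PER_BYTE
print(max_pirqs)  # 32768, i.e. the 32k upper bound being documented
```

A pIRQ number at or above this count would index a bit past the end of the registered page, which is the out-of-bounds concern behind the bounds-check question.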



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 07:25:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 07:25:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517669.803360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjb2x-0001pf-BG; Tue, 04 Apr 2023 07:25:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517669.803360; Tue, 04 Apr 2023 07:25:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjb2x-0001pY-8Q; Tue, 04 Apr 2023 07:25:39 +0000
Received: by outflank-mailman (input) for mailman id 517669;
 Tue, 04 Apr 2023 07:25:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMxR=73=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pjb2w-0001pS-9E
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 07:25:38 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20613.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::613])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e43c1df3-d2b9-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 09:25:36 +0200 (CEST)
Received: from AS9PR07CA0032.eurprd07.prod.outlook.com (2603:10a6:20b:46b::26)
 by GV1PR08MB8129.eurprd08.prod.outlook.com (2603:10a6:150:93::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.30; Tue, 4 Apr
 2023 07:25:32 +0000
Received: from AM7EUR03FT015.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:46b:cafe::ac) by AS9PR07CA0032.outlook.office365.com
 (2603:10a6:20b:46b::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.28 via Frontend
 Transport; Tue, 4 Apr 2023 07:25:32 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT015.mail.protection.outlook.com (100.127.140.173) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6277.18 via Frontend Transport; Tue, 4 Apr 2023 07:25:32 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Tue, 04 Apr 2023 07:25:31 +0000
Received: from 03dfef89983d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4964DB8F-5F00-4319-BF1E-C01212FD3F3E.1; 
 Tue, 04 Apr 2023 07:25:30 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 03dfef89983d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 04 Apr 2023 07:25:30 +0000
Received: from AM0PR08MB3745.eurprd08.prod.outlook.com (2603:10a6:208:ff::27)
 by AS8PR08MB8222.eurprd08.prod.outlook.com (2603:10a6:20b:52a::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 07:25:20 +0000
Received: from AM0PR08MB3745.eurprd08.prod.outlook.com
 ([fe80::7b58:72de:8c37:5104]) by AM0PR08MB3745.eurprd08.prod.outlook.com
 ([fe80::7b58:72de:8c37:5104%6]) with mapi id 15.20.6254.033; Tue, 4 Apr 2023
 07:25:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e43c1df3-d2b9-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZkOr/QC/Awt/0jF1p4/c1SFts/o/0u4Id40Ai/aNGcQ=;
 b=l7d7PF75xOf0+Q7cFkpC1cRWrySrS6yFJNuJygewysDhZXySsk1G+/NQ9Wk4aQ01zQk1b1f3JDZ4FF4xlynH76jreLA2nQsRCTXnvl8O65P8bAorkoi0ObkfIXdeXbo5pEYcs2PGmsyi8TCQ+QNiHOBkI9780D/+8IReUoCerQ4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1ca0b615dc84edd7
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HxKdMJXd2jEqkpgCcKZCXRw1v5eIYNjEmyQdHMpCgCRbGtTTHGxRr7BaRh/4Ka8qYpSKjR0wEwR1vEzkhhRPqzd0AspF44sT+vkmhD9FopEOG/IGZx48lHuDQUlHr+r1ZpDSAC9ZivUlgDnqVUcuCGJDbW7DRUe05ECfEJlxMVZcrx2wbDC3sYdhm63ehPw8LekBV48zl2MImBfuzplzkaBt5K3nBB9yerdtWKE56Tnftd/YRut4p5Y1Xx7eAj8JhiRfPyMguZKnoD0qRhB88g65OQQIrYN2kJFiJjtqNc4KBVshIuF2DcwRRgIsftjNV6A66Qf7jMoWPZD9Fx1EGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ZkOr/QC/Awt/0jF1p4/c1SFts/o/0u4Id40Ai/aNGcQ=;
 b=F8OQ/IkAsMBW0RYR3vRffvTPEgAJJTwnpp8gRmjf4O2v6tvjncM9FiXLSBqAjueBRZa2ueoqqd+/axDmYcPZ8y5IGz4X+5IAD462JtBtHGEw6BOt69TcgEmv6Qf/JQKnDTeGho2kYOQaCHNMOvMYIEMJmJZPXVG2MRVLs0OrIWtJ9k15WgK13DOMh+yXSMq2KoL/1IzqlowQucrZrgOi8SBPEa4WP4mx1kkTtUKN+3aCE8M1QpcRqAlwCnB0inuAtoaLy91ge3rRaSLpeVjGgovUSKWvXnh4OfrBD4WX2tYyzSe4QP1JBNqiXw4/rfcx0iUhK06ccRDXBbg8ow7taA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZkOr/QC/Awt/0jF1p4/c1SFts/o/0u4Id40Ai/aNGcQ=;
 b=l7d7PF75xOf0+Q7cFkpC1cRWrySrS6yFJNuJygewysDhZXySsk1G+/NQ9Wk4aQ01zQk1b1f3JDZ4FF4xlynH76jreLA2nQsRCTXnvl8O65P8bAorkoi0ObkfIXdeXbo5pEYcs2PGmsyi8TCQ+QNiHOBkI9780D/+8IReUoCerQ4=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu
	<wl@xen.org>, Juergen Gross <jgross@suse.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>,
	=?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>, Christian Lindig
	<christian.lindig@cloud.com>
Subject: Re: [PATCH v4 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Topic: [PATCH v4 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Index: AQHZYJtfUiyGg0Un4ki5c+B28igcyq8TjeGAgAc+FQA=
Date: Tue, 4 Apr 2023 07:25:20 +0000
Message-ID: <6373AE6D-DD75-45FE-B549-DC540453508C@arm.com>
References: <20230327105944.1360856-1-luca.fancellu@arm.com>
 <20230327105944.1360856-10-luca.fancellu@arm.com>
 <9804fbab-3e8d-49dc-847b-4804a6de286b@perard>
In-Reply-To: <9804fbab-3e8d-49dc-847b-4804a6de286b@perard>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.400.51.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB3745:EE_|AS8PR08MB8222:EE_|AM7EUR03FT015:EE_|GV1PR08MB8129:EE_
X-MS-Office365-Filtering-Correlation-Id: 8ead9dac-8345-4ada-d05f-08db34ddc64f
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <80EFDB6C63C4DA4FB717E4EFAF5C8824@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8222
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT015.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	04d21212-0789-43a2-b402-08db34ddbf71
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 07:25:32.0874
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8ead9dac-8345-4ada-d05f-08db34ddc64f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT015.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8129

SGkgQW50aG9ueSwNCg0KVGhhbmsgeW91IGZvciB5b3VyIHJldmlldw0KDQo+IE9uIDMwIE1hciAy
MDIzLCBhdCAxNzo0OSwgQW50aG9ueSBQRVJBUkQgPGFudGhvbnkucGVyYXJkQGNpdHJpeC5jb20+
IHdyb3RlOg0KPiANCj4gT24gTW9uLCBNYXIgMjcsIDIwMjMgYXQgMTE6NTk6NDFBTSArMDEwMCwg
THVjYSBGYW5jZWxsdSB3cm90ZToNCj4+IC0tLQ0KPj4gdG9vbHMvZ29sYW5nL3hlbmxpZ2h0L2hl
bHBlcnMuZ2VuLmdvICAgIHwgIDIgKysNCj4+IHRvb2xzL2dvbGFuZy94ZW5saWdodC90eXBlcy5n
ZW4uZ28gICAgICB8ICAxICsNCj4+IHRvb2xzL2luY2x1ZGUvYXJtLWFyY2gtY2FwYWJpbGl0aWVz
LmggICB8IDMzICsrKysrKysrKysrKysrKysrKysrKysrKysNCj4gDQo+IENvdWxkIHlvdSBtb3Zl
IHRoYXQgbmV3IGZpbGUgaW50byAidG9vbHMvaW5jbHVkZS94ZW4tdG9vbHMvIiwgd2hlcmUNCj4g
ImNvbW1vbi1tYWNyb3MuaCIgaXMuIFRoZSB0b3AtZGlyICJ0b29scy9pbmNsdWRlIiBhbHJlYWR5
IGhhcyBhIG1peHR1cmUNCj4gb2YgaW5zdGFsbGVkIGFuZCBpbnRlcm5hbCBoZWFkZXJzLCBidXQg
bW9zdCBvZiB0aGVtIGFyZSBpbnN0YWxsZWQuIFNvDQo+IHRoZSBmYWlybHkgcmVjZW50ICJ4ZW4t
dG9vbHMiIGRpciB3aGljaCBoYXZlIGJlZW4gaW50cm9kdWNlZCB0byBzaGFyZQ0KPiBtYWNyb3Mg
YXQgYnVpbGQgdGltZSBzZWVtcyBtb3JlIGFwcHJvcHJpYXRlIGZvciBhbm90aGVyIGJ1aWx0LXRp
bWUNCj4gbWFjcm8uDQoNClllcyBJ4oCZbGwgZG8NCg0KPiANCj4+IHRvb2xzL2luY2x1ZGUveGVu
LXRvb2xzL2NvbW1vbi1tYWNyb3MuaCB8ICAyICsrDQo+PiANCj4+IGRpZmYgLS1naXQgYS90b29s
cy9pbmNsdWRlL2FybS1hcmNoLWNhcGFiaWxpdGllcy5oIGIvdG9vbHMvaW5jbHVkZS9hcm0tYXJj
aC1jYXBhYmlsaXRpZXMuaA0KPj4gbmV3IGZpbGUgbW9kZSAxMDA2NDQNCj4+IGluZGV4IDAwMDAw
MDAwMDAwMC4uNDZlODc2NjUxMDUyDQo+PiAtLS0gL2Rldi9udWxsDQo+PiArKysgYi90b29scy9p
bmNsdWRlL2FybS1hcmNoLWNhcGFiaWxpdGllcy5oDQo+PiBAQCAtMCwwICsxLDMzIEBADQo+PiAr
LyogU1BEWC1MaWNlbnNlLUlkZW50aWZpZXI6IEdQTC0yLjAgKi8NCj4+ICsvKg0KPj4gKyAqIENv
cHlyaWdodCAoQykgMjAyMyBBUk0gTHRkLg0KPj4gKyAqLw0KPj4gKw0KPj4gKyNpZm5kZWYgQVJN
X0FSQ0hfQ0FQQUJJTElUSUVTX0gNCj4+ICsjZGVmaW5lIEFSTV9BUkNIX0NBUEFCSUxJVElFU19I
DQo+PiArDQo+PiArLyogVGVsbCB0aGUgWGVuIHB1YmxpYyBoZWFkZXJzIHdlIGFyZSBhIHVzZXIt
c3BhY2UgdG9vbHMgYnVpbGQuICovDQo+PiArI2lmbmRlZiBfX1hFTl9UT09MU19fDQo+PiArI2Rl
ZmluZSBfX1hFTl9UT09MU19fIDENCj4gDQo+IFNvbWVob3csIHRoaXMgZG9lc24ndCBzZWVtcyBh
cHByb3ByaWF0ZSBpbiB0aGlzIGhlYWRlci4gVGhpcyBtYWNybw0KPiBzaG91bGQgaW5zdGVhZCBi
ZSBzZXQgb24gdGhlIGNvbW1hbmQgbGluZS4gQW55IHJlYXNvbiB5b3UndmUgYWRkZWQgdGhpcw0K
PiBpbiB0aGUgaGVhZGVyPw0KDQpJ4oCZdmUgYWRkZWQgdGhhdCBiZWNhdXNlIHN5c2N0bC5oIGlz
IGRvaW5nIHRoaXM6DQoNCiNpZiAhZGVmaW5lZChfX1hFTl9fKSAmJiAhZGVmaW5lZChfX1hFTl9U
T09MU19fKQ0KI2Vycm9yICJzeXNjdGwgb3BlcmF0aW9ucyBhcmUgaW50ZW5kZWQgZm9yIHVzZSBi
eSBub2RlIGNvbnRyb2wgdG9vbHMgb25seSINCiNlbmRpZg0KDQpCdXQgSeKAmXZlIG5vdCBjaGVj
a2VkIGlmIHRoZSBtYWNybyBpcyBhbHJlYWR5IHBhc3NlZCB0aHJvdWdoIHRoZSBidWlsZCBzeXN0
ZW0sIEnigJlsbA0KdHJ5IGFuZCBJ4oCZbGwgcmVtb3ZlIGl0IGlmIGl04oCZcyB0aGUgY2FzZQ0K
DQo+IA0KPj4gKyNlbmRpZg0KPj4gKw0KPj4gKyNpbmNsdWRlIDxzdGRpbnQuaD4NCj4+ICsjaW5j
bHVkZSA8eGVuL3N5c2N0bC5oPg0KPj4gKw0KPj4gKyNpbmNsdWRlIDx4ZW4tdG9vbHMvY29tbW9u
LW1hY3Jvcy5oPg0KPj4gKw0KPj4gK3N0YXRpYyBpbmxpbmUNCj4+ICt1bnNpZ25lZCBpbnQgYXJj
aF9jYXBhYmlsaXRpZXNfYXJtX3N2ZSh1bnNpZ25lZCBpbnQgYXJjaF9jYXBhYmlsaXRpZXMpDQo+
PiArew0KPj4gKyNpZiBkZWZpbmVkKF9fYWFyY2g2NF9fKQ0KPj4gKyAgICB1bnNpZ25lZCBpbnQg
c3ZlX3ZsID0gTUFTS19FWFRSKGFyY2hfY2FwYWJpbGl0aWVzLA0KPj4gKyAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIFhFTl9TWVNDVExfUEhZU0NBUF9BUk1fU1ZFX01BU0spOw0K
Pj4gKw0KPj4gKyAgICAvKiBWZWN0b3IgbGVuZ3RoIGlzIGRpdmlkZWQgYnkgMTI4IGJlZm9yZSBz
dG9yaW5nIGl0IGluIGFyY2hfY2FwYWJpbGl0aWVzICovDQo+PiArICAgIHJldHVybiBzdmVfdmwg
KiAxMjhVOw0KPj4gKyNlbHNlDQo+PiArICAgIHJldHVybiAwOw0KPj4gKyNlbmRpZg0KPj4gK30N
Cj4+ICsNCj4+ICsjZW5kaWYgLyogQVJNX0FSQ0hfQ0FQQUJJTElUSUVTX0ggKi8NCj4+IGRpZmYg
LS1naXQgYS90b29scy9saWJzL2xpZ2h0L2xpYnhsX3R5cGVzLmlkbCBiL3Rvb2xzL2xpYnMvbGln
aHQvbGlieGxfdHlwZXMuaWRsDQo+PiBpbmRleCBjMTAyOTJlMGQ3ZTMuLmZkMzFkYWNmN2Q1YSAx
MDA2NDQNCj4+IC0tLSBhL3Rvb2xzL2xpYnMvbGlnaHQvbGlieGxfdHlwZXMuaWRsDQo+PiArKysg
Yi90b29scy9saWJzL2xpZ2h0L2xpYnhsX3R5cGVzLmlkbA0KPj4gQEAgLTExMzMsNiArMTEzMyw3
IEBAIGxpYnhsX3BoeXNpbmZvID0gU3RydWN0KCJwaHlzaW5mbyIsIFsNCj4+ICAgICAoImNhcF92
cG11IiwgYm9vbCksDQo+PiAgICAgKCJjYXBfZ250dGFiX3YxIiwgYm9vbCksDQo+PiAgICAgKCJj
YXBfZ250dGFiX3YyIiwgYm9vbCksDQo+PiArICAgICgiYXJjaF9jYXBhYmlsaXRpZXMiLCB1aW50
MzIpLA0KPiANCj4gVGhpcyBhZGRpdGlvbmFsIGZpZWxkIG5lZWRzIGEgbmV3IExJQlhMX0hBVkVf
IG1hY3JvIGluICJsaWJ4bC5oIi4NCg0KSeKAmWxsIGFkZA0KDQo+IA0KPj4gICAgIF0sIGRpcj1E
SVJfT1VUKQ0KPj4gDQo+PiBsaWJ4bF9jb25uZWN0b3JpbmZvID0gU3RydWN0KCJjb25uZWN0b3Jp
bmZvIiwgWw0KPj4gZGlmZiAtLWdpdCBhL3Rvb2xzL3B5dGhvbi94ZW4vbG93bGV2ZWwveGMveGMu
YyBiL3Rvb2xzL3B5dGhvbi94ZW4vbG93bGV2ZWwveGMveGMuYw0KPj4gaW5kZXggMzU5MDFjMmQ2
M2I2Li4yNTRkM2I1ZGNjZDIgMTAwNjQ0DQo+PiAtLS0gYS90b29scy9weXRob24veGVuL2xvd2xl
dmVsL3hjL3hjLmMNCj4+ICsrKyBiL3Rvb2xzL3B5dGhvbi94ZW4vbG93bGV2ZWwveGMveGMuYw0K
Pj4gQEAgLTcsNiArNyw3IEBADQo+PiAjZGVmaW5lIFBZX1NTSVpFX1RfQ0xFQU4NCj4+ICNpbmNs
dWRlIDxQeXRob24uaD4NCj4+ICNkZWZpbmUgWENfV0FOVF9DT01QQVRfTUFQX0ZPUkVJR05fQVBJ
DQo+PiArI2luY2x1ZGUgPGFybS1hcmNoLWNhcGFiaWxpdGllcy5oPg0KPiANCj4gQ291bGQgeW91
IGFkZCB0aGlzIGhlYWRlciAuLi4NCj4gDQo+PiAjaW5jbHVkZSA8eGVuY3RybC5oPg0KPj4gI2lu
Y2x1ZGUgPHhlbmd1ZXN0Lmg+DQo+PiAjaW5jbHVkZSA8ZmNudGwuaD4NCj4+IEBAIC0yMiw4ICsy
Myw2IEBADQo+PiAjaW5jbHVkZSA8eGVuL2h2bS9odm1faW5mb190YWJsZS5oPg0KPj4gI2luY2x1
ZGUgPHhlbi9odm0vcGFyYW1zLmg+DQo+PiANCj4+IC0jaW5jbHVkZSA8eGVuLXRvb2xzL2NvbW1v
bi1tYWNyb3MuaD4NCj4+IC0NCj4gDQo+IC4uLiBoZXJlLCBpbnN0ZWFkPw0KPiANCj4gQWxzbywg
SSB0aGluayAjaW5jbHVkZSBjb21tb24tbWFjcm9zLCBjYW4gc3RheS4NCg0KT2sgSeKAmWxsIGRv
IHRoZSBtb2RpZmljYXRpb25zDQoNCj4gDQo+PiAvKiBOZWVkZWQgZm9yIFB5dGhvbiB2ZXJzaW9u
cyBlYXJsaWVyIHRoYW4gMi4zLiAqLw0KPj4gI2lmbmRlZiBQeU1PRElOSVRfRlVOQw0KPj4gI2Rl
ZmluZSBQeU1PRElOSVRfRlVOQyBETF9FWFBPUlQodm9pZCkNCj4+IEBAIC04OTcsNyArODk2LDcg
QEAgc3RhdGljIFB5T2JqZWN0ICpweXhjX3BoeXNpbmZvKFhjT2JqZWN0ICpzZWxmKQ0KPj4gICAg
IGlmICggcCAhPSB2aXJ0X2NhcHMgKQ0KPj4gICAgICAgKihwLTEpID0gJ1wwJzsNCj4+IA0KPj4g
LSAgICByZXR1cm4gUHlfQnVpbGRWYWx1ZSgie3M6aSxzOmksczppLHM6aSxzOmwsczpsLHM6bCxz
OmksczpzLHM6c30iLA0KPj4gKyAgICByZXR1cm4gUHlfQnVpbGRWYWx1ZSgie3M6aSxzOmksczpp
LHM6aSxzOmwsczpsLHM6bCxzOmksczpzLHM6cyxzOml9IiwNCj4+ICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAibnJfbm9kZXMiLCAgICAgICAgIHBpbmZvLm5yX25vZGVzLA0KPj4gICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICJ0aHJlYWRzX3Blcl9jb3JlIiwgcGluZm8udGhyZWFkc19w
ZXJfY29yZSwNCj4+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAiY29yZXNfcGVyX3NvY2tl
dCIsIHBpbmZvLmNvcmVzX3Blcl9zb2NrZXQsDQo+PiBAQCAtOTA3LDcgKzkwNiwxMCBAQCBzdGF0
aWMgUHlPYmplY3QgKnB5eGNfcGh5c2luZm8oWGNPYmplY3QgKnNlbGYpDQo+PiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgInNjcnViX21lbW9yeSIsICAgICBwYWdlc190b19raWIocGluZm8u
c2NydWJfcGFnZXMpLA0KPj4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICJjcHVfa2h6Iiwg
ICAgICAgICAgcGluZm8uY3B1X2toeiwNCj4+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAi
aHdfY2FwcyIsICAgICAgICAgIGNwdV9jYXAsDQo+PiAtICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICJ2aXJ0X2NhcHMiLCAgICAgICAgdmlydF9jYXBzKTsNCj4+ICsgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgInZpcnRfY2FwcyIsICAgICAgICB2aXJ0X2NhcHMsDQo+PiArICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICJhcm1fc3ZlX3ZsIiwNCj4+ICsgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBhcmNoX2NhcGFiaWxpdGllc19hcm1fc3ZlKHBpbmZvLmFyY2hfY2FwYWJpbGl0
aWVzKQ0KPiANCj4gYXJjaF9jYXBhYmlsaXRpZXNfYXJtX3N2ZSgpIHJldHVybnMgYW4gInVuc2ln
bmVkIGludCIsIGJ1dCB0aGUgZm9ybWF0IGlzDQo+ICJpIiwgd2hpY2ggc2VlbXMgdG8gYmUgYW4g
ImludCIuIFNob3VsZG4ndCB0aGUgZm9ybWF0IGJlICJJIiBpbnN0ZWFkPw0KPiANCj4gaHR0cHM6
Ly9kb2NzLnB5dGhvbi5vcmcvMy9jLWFwaS9hcmcuaHRtbCNjLlB5X0J1aWxkVmFsdWUNCg0KWWVz
IHlvdSBhcmUgcmlnaHQsIEnigJlsbCBjaGFuZ2UgaXQNCg0KPiANCj4+ICsgICAgICAgICAgICAg
ICAgICAgICAgICApOw0KPj4gfQ0KPj4gDQo+PiBzdGF0aWMgUHlPYmplY3QgKnB5eGNfZ2V0Y3B1
aW5mbyhYY09iamVjdCAqc2VsZiwgUHlPYmplY3QgKmFyZ3MsIFB5T2JqZWN0ICprd2RzKQ0KPj4g
ZGlmZiAtLWdpdCBhL3Rvb2xzL3hsL3hsX2luZm8uYyBiL3Rvb2xzL3hsL3hsX2luZm8uYw0KPj4g
aW5kZXggNzEyYjc2MzhiMDEzLi5iZjE4YmEyNDQ5ZWYgMTAwNjQ0DQo+PiAtLS0gYS90b29scy94
bC94bF9pbmZvLmMNCj4+ICsrKyBiL3Rvb2xzL3hsL3hsX2luZm8uYw0KPj4gQEAgLTE0LDYgKzE0
LDcgQEANCj4+IA0KPj4gI2RlZmluZSBfR05VX1NPVVJDRQ0KPj4gDQo+PiArI2luY2x1ZGUgPGFy
bS1hcmNoLWNhcGFiaWxpdGllcy5oPg0KPiANCj4gQW55IHJlYXNvbiByZWFzb24gdG8gaGF2ZSB0
aGlzIGhlYWRlciBmaXJzdD8NCj4gSSBmZWVsIGxpa2UgcHJpdmF0ZSBoZWFkZXJzIHNob3VsZCBj
b21lIGFmdGVyIHB1YmxpYyBvbmVzLCBzbyBoZXJlLCB0aGlzDQo+IGluY2x1ZGUgd291bGQgYmUg
YWRkZWQgYmV0d2VlbiA8bGlieGx1dGlsLmg+IGFuZCAieGwuaCIuDQoNCk9rIEnigJlsbCBtb3Zl
IGl0DQoNCj4gDQo+PiAjaW5jbHVkZSA8ZmNudGwuaD4NCj4+ICNpbmNsdWRlIDxpbnR0eXBlcy5o
Pg0KPj4gI2luY2x1ZGUgPHN0ZGxpYi5oPg0KPj4gQEAgLTIyNCw2ICsyMjUsMTMgQEAgc3RhdGlj
IHZvaWQgb3V0cHV0X3BoeXNpbmZvKHZvaWQpDQo+PiAgICAgICAgICBpbmZvLmNhcF9nbnR0YWJf
djIgPyAiIGdudHRhYi12MiIgOiAiIg0KPj4gICAgICAgICApOw0KPj4gDQo+PiArICAgIC8qIFBy
aW50IGFybSBTVkUgdmVjdG9yIGxlbmd0aCBvbmx5IG9uIEFSTSBwbGF0Zm9ybXMgKi8NCj4+ICsj
aWYgZGVmaW5lZChfX2FhcmNoNjRfXykNCj4+ICsgICAgbWF5YmVfcHJpbnRmKCJhcm1fc3ZlX3Zl
Y3Rvcl9sZW5ndGggIDogJXVcbiIsDQo+PiArICAgICAgICAgYXJjaF9jYXBhYmlsaXRpZXNfYXJt
X3N2ZShpbmZvLmFyY2hfY2FwYWJpbGl0aWVzKQ0KPj4gKyAgICAgICAgKTsNCj4+ICsjZW5kaWYN
Cj4+ICsNCj4+ICAgICB2aW5mbyA9IGxpYnhsX2dldF92ZXJzaW9uX2luZm8oY3R4KTsNCj4+ICAg
ICBpZiAodmluZm8pIHsNCj4+ICAgICAgICAgaSA9ICgxIDw8IDIwKSAvIHZpbmZvLT5wYWdlc2l6
ZTsNCj4gDQo+IFRoYW5rcywNCj4gDQo+IC0tIA0KPiBBbnRob255IFBFUkFSRA0KDQo=


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 07:38:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 07:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517672.803369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjbEk-0003Sh-E0; Tue, 04 Apr 2023 07:37:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517672.803369; Tue, 04 Apr 2023 07:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjbEk-0003SZ-BG; Tue, 04 Apr 2023 07:37:50 +0000
Received: by outflank-mailman (input) for mailman id 517672;
 Tue, 04 Apr 2023 07:37:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjbEi-0003SQ-C5
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 07:37:48 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20606.outbound.protection.outlook.com
 [2a01:111:f400:7d00::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9828afdf-d2bb-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 09:37:47 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB9099.eurprd04.prod.outlook.com (2603:10a6:10:2f2::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 07:37:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 07:37:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9828afdf-d2bb-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CyQ2v5VCAOBCG5+IGWoKhxOJ1219jOEz/0JvInID30pnHeWRk6DDpnJqTGSBY3pFcfmQK7K7GN9cnIp8liFKoQbYON1gq1xonS4pS3T7k/g3HotSPA+7lSj1vtMjafTDCDdXjTIDruMVy/K0CIMZI7w5E5pobGk269SP0Qr3ptoLYxS6JAKfP9O6QQBhkDBy6w8r3YPlJHEssPJA4kPrWVD6TNGK1NuyaJyB1qPPK3hp02+GV0P4L9bpgpWxjFSDF55uNcxm2zMbmVWlv6+4NCh19gHyIXWPj8FK3jwYoVZXRnF02ptq5CjOmuvvli6zcSTa9nye5cRWfmCZGDIB+Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=A9iXgmTsj+0Rxvl5Ewhk+C1qP1W68DSWKVggTNGwczk=;
 b=b2moBjgtdKWmeSf+pGQNi7a9hyOV/1n8s+Irq9Suj4upp2azGhTyG3i1AQsdPao4B6xP+U0G7qg37KnUDkZSDEep6w9/QTivXTwY2HHOYM7d+vMu6ZzgZP+5Fusm6kPFMXGoIG8wkTAQJSpzd2yc96VjYaiWe0SQ4HU72i/8O85vOKmne4ClRdGqmSP5pCSj0Uf57dyjBLXe9eUYqQUIRA2y/vAECrX+JWfM07ciXwjSqkNpSDJMctzg4sOKTbr43yPI0iJWxZjw9sEBlXLAcgdwblIUqtEr3aTOlk38dOvEjMXGLdwj/rft/T79YMD4uMKVN9JxYRTSBXBWK3VIVg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A9iXgmTsj+0Rxvl5Ewhk+C1qP1W68DSWKVggTNGwczk=;
 b=2ZJCzns286owsxMv5qGWrWcrScvHmuVa8qEzbNc7bAAFyANOw6jyWLtROjFHo1JvrL5IgqaxOkeZ9k65Bzjjdgzj4hl1M0FjHNI0A8/UkHSnvBr7jdExkeM6xhppfLEj8KfEO9JyOtdJMUBPCepEZRM1HGsJLSprM+1KU6AKEY37h8BZrYsy6E+9nN7VkxFNVNEU3hGujdg4q1w1iyMwCmpFpkYPG+Azmin65Ca87+4IaOKa9ygv0G25qG+wog+G1E+3rb5R+okROP9LQ7fdlLLTR4XyXr8+XB5NGBTJryD4wajbUcmIq7CskVihZCYwzckoqpsEVUQQKuAm1BP0tg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8ab50964-bf29-b580-7ef6-78aa0e6a7a7f@suse.com>
Date: Tue, 4 Apr 2023 09:37:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] cmdline: document "extra_guest_irqs" upper bound
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d4fc10a3-abe7-1298-daac-00c96147c7a5@suse.com>
 <0e165632-938d-596b-3087-75da66641d85@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <0e165632-938d-596b-3087-75da66641d85@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0073.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9a::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB9099:EE_
X-MS-Office365-Filtering-Correlation-Id: ca3bb805-3ab5-4dbb-82b3-08db34df7a97
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nBRF3TZYN6rv0oEUEaEKC7i7sRmzECA3RfX76xp6QWSSR7TszHtdXwgbHV23zcStLqhYFiSPiuH2cHHmR6MDQa4gIXi1YoRcG4iaGsJVhQ8zi6hRnu9wP5QkUhHn/9zEj7/LeyQLsN9cpmlEMlg1xakp/5TQRu0saUIzcmrjJGcfs3K0IDGlt7L2cfXMPFYdZmoetfO+8Ir/LlkRd0npQdswmhNCGFJJ87Ct4ecQT65zlV+dteLCp/w0m9oTUl3ajsmjK9pb2AjACt3Xj4qwaJRFebix2qKtT2odYvQUc/K8skG2rw1rGHVUs8TtwBWR/DURr46MQIph3qN/msiYYnMU1T1J0dNzA4/xIzgqaltVM6au4Te9Q8CNa7xatx80bUXYfHITmLk6Gjtmf3ooNeUI/bhIdXuRWBPoaoN6W4P18L4ow66vAmsP163smxvG6ugnzmFgEHxKftzLuaalPXVQy1kuNDaPHM7CgBKhyqQA+mZ0Ndi/YZajAqCsgJg2h6HE9B/Vsz5+ABWxtjXG8lyoym65ET/qO+0EnZ9BYHA4VWDdjBt+ZkSHrtiI7T6pjSkHkIE4JfIFLBpKwIWhK7pLByWiaPOkTiGrci/tCGYMxb6q5woD7LLyoyawxzStw/IWVkUu/R+MhAqxXH+LKw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(366004)(39860400002)(136003)(346002)(376002)(451199021)(5660300002)(31686004)(316002)(2906002)(54906003)(41300700001)(8936002)(186003)(478600001)(6916009)(8676002)(4326008)(66946007)(66476007)(66556008)(4744005)(6486002)(53546011)(6512007)(26005)(6506007)(2616005)(83380400001)(36756003)(86362001)(31696002)(38100700002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ca3bb805-3ab5-4dbb-82b3-08db34df7a97
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 07:37:44.3466
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NWpUQGb+Gs1qMsDd5LTCGitg9nqJSA2jARTRPKXonESYUPLkQDDomKgMQTAl7JtvrJD1st4FCdfVoht0FGky3A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB9099

On 04.04.2023 09:09, Andrew Cooper wrote:
> On 04/04/2023 7:46 am, Jan Beulich wrote:
>> PHYSDEVOP_pirq_eoi_gmfn_v<N> accepting just a single GFN implies that no
>> more than 32k pIRQ-s can be used by a domain. Document this upper bound.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks, but because of the below I guess I'll make a v2.

>> ---
>> I was uncertain about also introducing a bounds check in code: We don't
>> check for bogus / abusive values elsewhere either.
> 
> Normally not, but in this case I suspect it's worth it.  Without a
> bounds check, don't we risk wandering off the page?

Indeed we do; in debug builds we hit assertions.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 08:10:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 08:10:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517680.803380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjbjm-0007Ua-AA; Tue, 04 Apr 2023 08:09:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517680.803380; Tue, 04 Apr 2023 08:09:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjbjm-0007UT-7H; Tue, 04 Apr 2023 08:09:54 +0000
Received: by outflank-mailman (input) for mailman id 517680;
 Tue, 04 Apr 2023 08:09:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HcaL=73=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pjbjl-0007UN-H1
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 08:09:53 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 132c36a6-d2c0-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 10:09:51 +0200 (CEST)
Received: by mail-lf1-x12c.google.com with SMTP id g19so28180438lfr.9
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 01:09:51 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 l10-20020ac24a8a000000b004db3e445f1fsm2205676lfp.97.2023.04.04.01.09.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 Apr 2023 01:09:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 132c36a6-d2c0-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680595791;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=5z+oL0CEH0hrowCjVXAlWzw0WLtvmNnQxZR5UH2VOjM=;
        b=bVbpgCtfMLjb4MUQAJF9M7j3q5DlwGRXTIaRBHi/TVwj7VWYneEepIVp1C1X2RXAl6
         lcVyqlhciICOg5kpi9DavDk9UnXwo2mYBt9nmjXNa69o0etNJnxoS9Hf1hWkv1YsY8I+
         ai8lQcRqsbkup05rZmyUnETgnsLfOayvypXa6NIdbJ3CS8c1G0I7pO38kJAaRHQjpq88
         E6RHEV1mw2oapFIVsuF9UaKDNTdnYq4xBNKJQSExh3waEw1fMdkq/+GsVru+cMWQ+S1t
         qGOV8SVscyJFxLmp1U44XdeM3bKZTPGi529lHSvIxSWWT5A2mDZPV0zUeuL/MgPCw14i
         ICMQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680595791;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=5z+oL0CEH0hrowCjVXAlWzw0WLtvmNnQxZR5UH2VOjM=;
        b=2wzHrZq4YojvjbCCrUZBVnl2qsrXJS9F45o60njsqKtwurUB9xWS6oblYclNg9LyF/
         CM0cFE6Cvtyyv41BPoVgk7kgxouFwpZqbCP3KAvCXD0RrN/9cKlMGFmXcP3wssZpu4x9
         knrHIKnRSyGo1dZjKMtyB1dSJZmMGXjt5/msmYkBs+kEy2Co1v2i2RY9z3BJYFjF7/pP
         pVq/CoEQewQMZqY7EM1iwjdJEggQgjjBcju9zFZV28gZ8weM0bg7vJfcKTyymiLx9wtz
         fUjIkAPeBqS78pJyee5pWu92ftxEYAPfXl1qYo3c+SgvraDtPfk/I9inOlK9FU1uR6qS
         i3QA==
X-Gm-Message-State: AAQBX9f2X6Kj4egXItb5wf6atrir4cUgoy+ee7xVyYvjjT1pGOtg3CUm
	Wig+0b9wzhMWyKZe4043qO8=
X-Google-Smtp-Source: AKy350bxrCz6UxynTAaF3n2L7sCDwK6TCD2MGrvMRukGdH6BghtA4gpnUPdyvRSSa6dz7SsBubnAJQ==
X-Received: by 2002:ac2:42c4:0:b0:4e9:ad85:aa09 with SMTP id n4-20020ac242c4000000b004e9ad85aa09mr404758lfl.68.1680595791004;
        Tue, 04 Apr 2023 01:09:51 -0700 (PDT)
Message-ID: <d351a7b6d673b70d45e809123e6e42abbf7b8014.camel@gmail.com>
Subject: Re: [PATCH v9 4/5] xen/arm: switch ARM to use generic
 implementation of bug.h
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bertrand
 Marquis <bertrand.marquis@arm.com>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>,  xen-devel@lists.xenproject.org, Julien Grall
 <julien@xen.org>
Date: Tue, 04 Apr 2023 11:09:49 +0300
In-Reply-To: <9c4ca4a1-1b68-5ee0-0434-e6c9ec7d1ef6@suse.com>
References: <cover.1680086655.git.oleksii.kurochko@gmail.com>
	 <8fdb98350ae4fc6029738d0aabe13a57e1945a50.1680086655.git.oleksii.kurochko@gmail.com>
	 <3a94ad32-159d-d06f-cba6-3bb40f9f2085@xen.org>
	 <605245331bb93b7e60a4a9d65b19b6642d897034.camel@gmail.com>
	 <9c4ca4a1-1b68-5ee0-0434-e6c9ec7d1ef6@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.46.4 (3.46.4-1.fc37) 
MIME-Version: 1.0

On Tue, 2023-04-04 at 08:41 +0200, Jan Beulich wrote:
> On 03.04.2023 20:40, Oleksii wrote:
> > Hello Julien,
> > On Fri, 2023-03-31 at 22:05 +0100, Julien Grall wrote:
> > > Hi Oleksii,
> > > 
> > > I was going to ack the patch but then I spotted something that
> > > would want some clarification.
> > > 
> > > On 29/03/2023 11:50, Oleksii Kurochko wrote:
> > > > diff --git a/xen/arch/arm/include/asm/bug.h
> > > > b/xen/arch/arm/include/asm/bug.h
> > > > index cacaf014ab..3fb0471a9b 100644
> > > > --- a/xen/arch/arm/include/asm/bug.h
> > > > +++ b/xen/arch/arm/include/asm/bug.h
> > > > @@ -1,6 +1,24 @@
> > > >  #ifndef __ARM_BUG_H__
> > > >  #define __ARM_BUG_H__
> > > >  
> > > > +/*
> > > > + * Please do not include in the header any header that might
> > > > + * use BUG/ASSERT/etc maros asthey will be defined later after
> > > > + * the return to <xen/bug.h> from the current header:
> > > > + *
> > > > + * <xen/bug.h>:
> > > > + *  ...
> > > > + *   <asm/bug.h>:
> > > > + *     ...
> > > > + *     <any_header_which_uses_BUG/ASSERT/etc macros.h>
> > > > + *     ...
> > > > + *  ...
> > > > + *  #define BUG() ...
> > > > + *  ...
> > > > + *  #define ASSERT() ...
> > > > + *  ...
> > > > + */
> > > > +
> > > >  #include <xen/types.h>
> > > >  
> > > >  #if defined(CONFIG_ARM_32)
> > > > @@ -11,76 +29,7 @@
> > > >  # error "unknown ARM variant"
> > > >  #endif
> > > >  
> > > > -#define BUG_FRAME_STRUCT
> > > > -
> > > > -struct bug_frame {
> > > > -    signed int loc_disp;    /* Relative address to the bug address */
> > > > -    signed int file_disp;   /* Relative address to the filename */
> > > > -    signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
> > > > -    uint16_t line;          /* Line number */
> > > > -    uint32_t pad0:16;       /* Padding for 8-bytes align */
> > > > -};
> > > > -
> > > > -#define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
> > > > -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
> > > > -#define bug_line(b) ((b)->line)
> > > > -#define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
> > > > -
> > > > -/* Many versions of GCC doesn't support the asm %c parameter which would
> > > > - * be preferable to this unpleasantness. We use mergeable string
> > > > - * sections to avoid multiple copies of the string appearing in the
> > > > - * Xen image. BUGFRAME_run_fn needs to be handled separately.
> > > > - */
> > > 
> > > Given this comment ...
> > > 
> > > > -#define BUG_FRAME(type, line, file, has_msg, msg) do {              \
> > > > -    BUILD_BUG_ON((line) >> 16);                                     \
> > > > -    BUILD_BUG_ON((type) >= BUGFRAME_NR);                            \
> > > > -    asm ("1:"BUG_INSTR"\n"                                          \
> > > > -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"        \
> > > > -         "2:\t.asciz " __stringify(file) "\n"                       \
> > > > -         "3:\n"                                                     \
> > > > -         ".if " #has_msg "\n"                                       \
> > > > -         "\t.asciz " #msg "\n"                                      \
> > > > -         ".endif\n"                                                 \
> > > > -         ".popsection\n"                                            \
> > > > -         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n" \
> > > > -         "4:\n"                                                     \
> > > > -         ".p2align 2\n"                                             \
> > > > -         ".long (1b - 4b)\n"                                        \
> > > > -         ".long (2b - 4b)\n"                                        \
> > > > -         ".long (3b - 4b)\n"                                        \
> > > > -         ".hword " __stringify(line) ", 0\n"                        \
> > > > -         ".popsection");                                            \
> > > > -} while (0)
> > > > -
> > > > -/*
> > > > - * GCC will not allow to use "i"  when PIE is enabled (Xen doesn't set the
> > > > - * flag but instead rely on the default value from the compiler). So the
ZGVmYXVsdCB2YWx1ZSBmcm9tIHRoZQo+ID4gPiA+IGNvbXBpbGVyKS4KPiA+ID4gPiBTbyB0aGUK
PiA+ID4gPiAtICogZWFzaWVzdCB3YXkgdG8gaW1wbGVtZW50IHJ1bl9pbl9leGNlcHRpb25faGFu
ZGxlcigpIGlzIHRvCj4gPiA+ID4gcGFzcwo+ID4gPiA+IHRoZSB0bwo+ID4gPiA+IC0gKiBiZSBj
YWxsZWQgZnVuY3Rpb24gaW4gYSBmaXhlZCByZWdpc3Rlci4KPiA+ID4gPiAtICovCj4gPiA+ID4g
LSNkZWZpbmXCoCBydW5faW5fZXhjZXB0aW9uX2hhbmRsZXIoZm4pIGRvCj4gPiA+ID4ge8KgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoCBcCj4gPiA+ID4gLcKgwqDCoCBhc20gKCJtb3YgIiBfX3N0cmluZ2lmeShCVUdfRk5fUkVH
KSAiLAo+ID4gPiA+ICUwXG4iwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgIFwKPiA+ID4gPiAtwqDCoMKgwqDCoMKgwqDCoAo+ID4gPiA+ICIxOiJC
VUdfSU5TVFIiXG4iwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAKPiA+ID4gPiDC
oMKgwqAKPiA+ID4gPiBcCj4gPiA+ID4gLcKgwqDCoMKgwqDCoMKgwqAgIi5wdXNoc2VjdGlvbiAu
YnVnX2ZyYW1lcy4iCj4gPiA+ID4gX19zdHJpbmdpZnkoQlVHRlJBTUVfcnVuX2ZuKQo+ID4gPiA+
ICIsIsKgwqDCoMKgwqDCoCBcCj4gPiA+ID4gLcKgwqDCoMKgwqDCoMKgwqAgIsKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoCBcImFcIiwKPiA+ID4gPiAlJXByb2diaXRzXG4iwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiA+
ID4gLcKgwqDCoMKgwqDCoMKgwqAKPiA+ID4gPiAiMjpcbiLCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAKPiA+ID4gPiDCoMKgwqAKPiA+ID4g
PiBcCj4gPiA+ID4gLcKgwqDCoMKgwqDCoMKgwqAgIi5wMmFsaWduCj4gPiA+ID4gMlxuIsKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIFwKPiA+ID4gPiAtwqDC
oMKgwqDCoMKgwqDCoCAiLmxvbmcgKDFiIC0KPiA+ID4gPiAyYilcbiLCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgIFwKPiA+ID4gPiAtwqDCoMKgwqDCoMKgwqDCoCAiLmxvbmcg
MCwgMCwKPiA+ID4gPiAwXG4iwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqAgXAo+ID4gPiA+IC3CoMKgwqDCoMKgwqDCoMKgICIucG9wc2VjdGlvbiIgOjogInIiIChmbikg
OiBfX3N0cmluZ2lmeShCVUdfRk5fUkVHKQo+ID4gPiA+ICk7wqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIFwKPiA+ID4gPiAtfSB3aGlsZSAoMCkKPiA+ID4gPiAtCj4gPiA+ID4gLSNkZWZpbmUgV0FS
TigpIEJVR19GUkFNRShCVUdGUkFNRV93YXJuLCBfX0xJTkVfXywgX19GSUxFX18sIDAsCj4gPiA+
ID4gIiIpCj4gPiA+ID4gLQo+ID4gPiA+IC0jZGVmaW5lIEJVRygpIGRvCj4gPiA+ID4ge8KgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiA+ID4gLcKgwqDCoCBCVUdfRlJBTUUo
QlVHRlJBTUVfYnVnLMKgIF9fTElORV9fLCBfX0ZJTEVfXywgMCwKPiA+ID4gPiAiIik7wqDCoMKg
wqDCoMKgwqAgXAo+ID4gPiA+IC3CoMKgwqAKPiA+ID4gPiB1bnJlYWNoYWJsZSgpO8KgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiA+ID4gLX0gd2hpbGUgKDApCj4gPiA+ID4g
LQo+ID4gPiA+IC0jZGVmaW5lIGFzc2VydF9mYWlsZWQobXNnKSBkbwo+ID4gPiA+IHvCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
IFwKPiA+ID4gPiAtwqDCoMKgIEJVR19GUkFNRShCVUdGUkFNRV9hc3NlcnQsIF9fTElORV9fLCBf
X0ZJTEVfXywgMSwKPiA+ID4gPiBtc2cpO8KgwqDCoMKgIFwKPiA+ID4gPiAtwqDCoMKgCj4gPiA+
ID4gdW5yZWFjaGFibGUoKTvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgXAo+ID4g
PiA+IC19IHdoaWxlICgwKQo+ID4gPiA+ICsjZGVmaW5lIEJVR19BU01fQ09OU1TCoMKgICJjIgo+
ID4gPiAKPiA+ID4gLi4uIHlvdSBzaG91bGQgZXhwbGFpbiBpbiB0aGUgY29tbWl0IG1lc3NhZ2Ug
d2h5IHRoaXMgaXMgbmVlZGVkCj4gPiA+IGFuZAo+ID4gPiB0aGUgCj4gPiA+IHByb2JsZW0gZGVz
Y3JpYmVkIGFib3ZlIGlzIG5vdCBhIHByb2JsZW0gYW55bW9yZS4KPiA+ID4gCj4gPiA+IEZvciBp
bnN0YW5jZSwgSSBtYW5hZ2VkIHRvIGJ1aWxkIGl0IHdpdGhvdXQgJ2MnIG9uIGFybTY0IFsxXS4g
QnV0Cj4gPiA+IGl0IAo+ID4gPiBkb2VzIGJyZWFrIG9uIGFybTMyIFsyXS4gSSBrbm93IHRoYXQg
QXJtIGlzIGFsc28gd2hlcmUgJyVjJyB3YXMKPiA+ID4gYW4KPiA+ID4gaXNzdWUuCj4gPiA+IAo+
ID4gPiBTa2ltbWluZyB0aHJvdWdoIGxpbnV4LCB0aGUgcmVhc29uIHNlZW1zIHRvIGJlIHRoYXQg
R0NDIG1heSBhZGQKPiA+ID4gJyMnCj4gPiA+IHdoZW4gCj4gPiA+IGl0IHNob3VsZCBub3QuIFRo
YXQgc2FpZCwgSSBoYXZlbid0IGxvb2sgYXQgdGhlIGltcGFjdCBvbiB0aGUKPiA+ID4gZ2VuZXJp
Ywo+ID4gPiBpbXBsZW1lbnRhdGlvbi4gTmVpdGhlciBJIGxvb2tlZCBhdCB3aGljaCB2ZXJzaW9u
IG1heSBiZSBhZmZlY3RlZAo+ID4gPiAodGhlIAo+ID4gPiBvcmlnaW5hbCBtZXNzYWdlIHdhcyBm
cm9tIDIwMTEpLgo+ID4gWW91IGFyZSByaWdodCB0aGF0IHNvbWUgY29tcGlsZXJzIGFkZCAnIycg
d2hlbiBpdCBzaG91bGRuJ3QuIFRoZQo+ID4gc2FtZQo+ID4gdGhpbmcgaGFwcGVucyB3aXRoIFJJ
U0MtVi4KPiAKPiBSSVNDLVYgZG9lc24ndCBrbm93IG9mIGEgJyMnIHByZWZpeCwgZG9lcyBpdD8g
JyMnIGlzIGEgY29tbWVudAo+IGNoYXJhY3Rlcgo+IHRoZXJlIGFmYWlrLCBsaWtlIGZvciBtYW55
IG90aGVyIGFyY2hpdGVjdHVyZXMuCkl0IGRvZXNuJ3QgYW5kIGZvciBSSVNDLVYgaXQncyBhIGNv
bW1lbnQgY2hhcmFjdGVyLgoKYWZhaWsgJyVjJyBpcyBuZWVkZWQgdG8gc2tpcCBwcmVmaXgoJ3Np
Z24nICkgKCMgb3IgJCAoIGluIGNhc2Ugb2YKeDg2KSkuCgpJIG1lYW4gdGhhdCBSSVNDLVYgZG9l
c24ndCBwdXQgYW55dGhpbmcgYmVmb3JlIGltbWVkaWF0ZSBzbyB0aGVyZSBpcyBubwpuZWVkIHRv
IHVzZSAlYyBhcyB3ZSBkb24ndCBuZWVkIHRvIHNraXAgcHJlZml4LydzaWduJyBiZWZvcmUgaW1t
ZWRpYXRlCmJ1dCBpZiBzdGFydCB0byB1c2UgJyVjJyBpdCB3aWxsIGNhdXNlIGFuIGNvbXBpbGVy
IGlzc3VlLgoKfiBPbGVrc2lpCg==



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:10:23 2023
Message-ID: <358e9788-b930-5c51-1e89-232be43f83e5@suse.com>
Date: Tue, 4 Apr 2023 11:09:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 2/2] xen: update CONFIG_DEBUG_INFO help text
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230403162823.30681-1-jgross@suse.com>
 <20230403162823.30681-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230403162823.30681-3-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.04.2023 18:28, Juergen Gross wrote:
> --- a/xen/Kconfig.debug
> +++ b/xen/Kconfig.debug
> @@ -15,8 +15,14 @@ config DEBUG_INFO
>  	bool "Compile Xen with debug info"
>  	default DEBUG
>  	help
> -	  If you say Y here the resulting Xen will include debugging info
> -	  resulting in a larger binary image.
> +	  Say Y here if you want to build Xen with debug information. This
> +	  information is needed e.g. for doing crash dump analysis of the
> +	  hypervisor via the "crash" tool.
> +	  Saying Y will increase the size of the xen-syms and xen.efi
> +	  binaries. In case the space on the EFI boot partition is rather
> +	  limited, you may want to make use of the INSTALL_EFI_STRIP make
> +	  variable when building the hypervisor, in order to strip xen.efi
> +	  before installing it to the EFI partition.

Hmm, INSTALL_EFI_STRIP is only a courtesy to developers wanting to install
xen.efi directly into the EFI partition. It wouldn't affect the normal
flow, and hence I think this wants expressing here such that both kinds of
people have at least a hint what they need to do. I.e. in the normal case
they'd need to adjust the way xen.efi is "propagated" from its installed
location onto the EFI partition, to do the desired stripping at that time.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:17:01 2023
From: Juan Quintela <quintela@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,  Paolo Bonzini <pbonzini@redhat.com>,  Julia
 Suvorova <jusual@redhat.com>,  Kevin Wolf <kwolf@redhat.com>,  Peter
 Lieven <pl@kamp.de>,  Coiby Xu <Coiby.Xu@gmail.com>,
  xen-devel@lists.xenproject.org,  Richard Henderson
 <richard.henderson@linaro.org>,  Stefano Garzarella <sgarzare@redhat.com>,
  <qemu-block@nongnu.org>,  Eduardo Habkost <eduardo@habkost.net>,
  Philippe Mathieu-Daudé <philmd@linaro.org>,  Paul Durrant
 <paul@xen.org>,
  "Richard W.M. Jones" <rjones@redhat.com>,  "Dr. David Alan Gilbert"
 <dgilbert@redhat.com>,  Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
  Aarushi Mehta <mehta.aaru20@gmail.com>,  Stefano Stabellini
 <sstabellini@kernel.org>,  Fam Zheng <fam@euphon.net>,  David Woodhouse
 <dwmw2@infradead.org>,  Stefan Weil <sw@weilnetz.de>,  Xie Yongji
 <xieyongji@bytedance.com>,  Hanna Reitz <hreitz@redhat.com>,  Ronnie
 Sahlberg <ronniesahlberg@gmail.com>,  eesposit@redhat.com,  "Michael S.
 Tsirkin" <mst@redhat.com>,  Daniel P. Berrangé
 <berrange@redhat.com>,
  Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH 13/13] aio: remove aio_disable_external() API
In-Reply-To: <20230403183004.347205-14-stefanha@redhat.com> (Stefan Hajnoczi's
	message of "Mon, 3 Apr 2023 14:30:04 -0400")
References: <20230403183004.347205-1-stefanha@redhat.com>
	<20230403183004.347205-14-stefanha@redhat.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)
Reply-To: quintela@redhat.com
Date: Tue, 04 Apr 2023 11:16:46 +0200
Message-ID: <877cusroqp.fsf@secure.mitica>
MIME-Version: 1.0
Content-Type: text/plain

Stefan Hajnoczi <stefanha@redhat.com> wrote:
> All callers now pass is_external=false to aio_set_fd_handler() and
> aio_set_event_notifier(). The aio_disable_external() API that
> temporarily disables fd handlers that were registered is_external=true
> is therefore dead code.
>
> Remove aio_disable_external(), aio_enable_external(), and the
> is_external arguments to aio_set_fd_handler() and
> aio_set_event_notifier().
>
> The entire test-fdmon-epoll test is removed because its sole purpose was
> testing aio_disable_external().
>
> Parts of this patch were generated using the following coccinelle
> (https://coccinelle.lip6.fr/) semantic patch:
>
>   @@
>   expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
>   @@
>   - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
>   + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)
>
>   @@
>   expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
>   @@
>   - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
>   + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

[....]

> diff --git a/migration/rdma.c b/migration/rdma.c
> index df646be35e..aee41ca43e 100644
> --- a/migration/rdma.c
> +++ b/migration/rdma.c
> @@ -3104,15 +3104,15 @@ static void qio_channel_rdma_set_aio_fd_handler(QIOChannel *ioc,
>  {
>      QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
>      if (io_read) {
> -        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd,
> -                           false, io_read, io_write, NULL, NULL, opaque);
> -        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd,
> -                           false, io_read, io_write, NULL, NULL, opaque);
> +        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, io_read,
> +                           io_write, NULL, NULL, opaque);
> +        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, io_read,
> +                           io_write, NULL, NULL, opaque);
>      } else {
> -        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd,
> -                           false, io_read, io_write, NULL, NULL, opaque);
> -        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd,
> -                           false, io_read, io_write, NULL, NULL, opaque);
> +        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, io_read,
> +                           io_write, NULL, NULL, opaque);
> +        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, io_read,
> +                           io_write, NULL, NULL, opaque);
>      }
>  }

Reviewed-by: Juan Quintela <quintela@redhat.com>

For the migration bits.
I don't even want to know why the RDMA code uses a low level block layer API.
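For illustration only (the real change used the coccinelle semantic patch quoted in the commit message, which also handles calls split across lines): a toy regex stand-in showing the mechanical shape of the rewrite — dropping the removed third (is_external) argument from single-line calls.

```python
import re

def drop_is_external(src: str) -> str:
    """Remove the third (is_external) argument from single-line
    aio_set_fd_handler(...) calls. A toy stand-in for the coccinelle
    rule; it does not handle multi-line calls."""
    pattern = re.compile(r"(aio_set_fd_handler\(\s*[^,]+,\s*[^,]+,)\s*[^,]+,")
    return pattern.sub(r"\1", src)

before = "aio_set_fd_handler(ctx, fd, false, io_read, io_write, NULL, NULL, opaque);"
print(drop_is_external(before))
# aio_set_fd_handler(ctx, fd, io_read, io_write, NULL, NULL, opaque);
```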



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:19:25 2023
Message-ID: <f52befc9-f19c-12fb-b0db-b6c4219999b2@suse.com>
Date: Tue, 4 Apr 2023 11:19:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86/PV: ignore PAE_MODE ELF note for 64-bit Dom0
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0102.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB9915:EE_
X-MS-Office365-Filtering-Correlation-Id: 377ba6c8-602c-49ac-00ba-08db34eda5c3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 377ba6c8-602c-49ac-00ba-08db34eda5c3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 09:19:09.8621
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tEidxJEAFU9n8QurnQZpXBuQbrcU2rn6fKoGU2O42ACtUe0gZQRAwZuT+ZL9nSydLI4ZYFhiEXhdCmuxs6cAxg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB9915

Besides a printk(), the main effect is slight corruption of the start
info magic: while that's meant to be xen-3.0-x86_64, it wrongly ended
up as xen-3.0-x86_64p.

Note that no known users exist that would have developed a dependency on
the bogus magic string. In particular Linux, NetBSD, and mini-os have
been checked.

Fixes: 460060f83d41 ("libelf: use for x86 dom0 builder")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: While Linux works fine with the adjustment, I'm not entirely
     certain whether external tools (crash?) have grown a dependency.
     It may be worth noting that XenoLinux and its forward ports never
     had this ELF note in 64-bit kernels, so in principle it may be
     reasonable to expect that no such dependency exists anywhere.

Prior to "x86/PV32: restore PAE-extended-CR3 logic", that VM-assist
(meaningless for 64-bit domains) could also be engaged, based on the
ELF note's value. I expect that change to go in first, at which point
the description here is going to be correct (in not mentioning this
VM-assist aspect).
---
v2: Extend description.

--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -459,8 +459,13 @@ int __init dom0_construct_pv(struct doma
     compat = is_pv_32bit_domain(d);
 
     if ( elf_64bit(&elf) && machine == EM_X86_64 )
+    {
         compatible = true;
 
+        /* Zap meaningless setting which kernels may carry by mistake. */
+        parms.pae = 0;
+    }
+
     if ( elf_msb(&elf) )
         compatible = false;
 


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:21:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:21:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517698.803420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjcqP-0000Nn-CR; Tue, 04 Apr 2023 09:20:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517698.803420; Tue, 04 Apr 2023 09:20:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjcqP-0000Ng-9G; Tue, 04 Apr 2023 09:20:49 +0000
Received: by outflank-mailman (input) for mailman id 517698;
 Tue, 04 Apr 2023 09:20:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjcqO-0000Na-Ht
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:20:48 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0610.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fae3a15c-d2c9-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 11:20:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8422.eurprd04.prod.outlook.com (2603:10a6:20b:3ea::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 09:20:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 09:20:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fae3a15c-d2c9-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OZgDFb4IqiwfLkjEEmwkZ+1ljIk4DDuHVz3oIuLuTMqfbpBYTuRORqo87+SSzSrFmshEsQr6S/+V1siJ0GWhOWJrpv3uZ+JbfEPh8yX1RSVqSOzmeILDLK8gZDKJ9Yvap7MwtF6l6djAmWwmCnF+6vHKrCJ3QySBex7nY9iAK2QaqoZyBz6ERgoRYswfCVdCmRraEWap5C6fhJmnDCOsfE3DBj+cb1Ya9Fq6UHYhV4A3gIU53AQ+iegHf/+0OtAAvb5/zXbDcj9NOnZEIo1Fiz6RawfieAzS/bu9/ZKmvk/ZJOjjm3unV2HvwMvS6Qr6BqVWYXe7V2X6Zm5zO3/Q0A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SbtH2o36X0z7MtAmMJTg+4apLZdjst0d0KZ8EsroDUQ=;
 b=MsKkMCZI4y6TPZj02Zdv6lLEBKO4Ewt+9tL2CJv5exJK9qL+GZlP8OeyfYeUDk99R33XBowojuRPs4AnvjuKzILOr1iiJfQ0OS2fCaVSpzBvTDIojqYTCXR8RyVt/o449PVY0Zf4fs0fOJhWNieMw165DJxPoD1pQ5ACCBVGMnNId8GNzCDzaq8SopYovrb+/jKTCbkjnXkYMmrl634Yv404Ub4xdulXc3yfv58SAuDRq44sXhUfQTmQNBWw05C0ZOYh1CIt6/5Nd8u95PyuBOwmg6GjrGxSW91AZvTBKtu0Vsf+KALUXel4WIHwK0Ovj/YfZYs2NVlGP4197rXSFA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SbtH2o36X0z7MtAmMJTg+4apLZdjst0d0KZ8EsroDUQ=;
 b=v0cSlHWMihQ2ahcbgOv1/1zcFACldyujn7pVwC5UPApOF3AvzywRR1Xl7e8jHsADnc3QZ4mgW2E1RdO0jXBGsvJZo7t1l3xJPmLXR3ToZsg9n12lx/D9ZbX8p+tMgIAHW2vmcDqCqnyFOUBc0OLav9J8alFZ1b5gq0epzGlYsF0UmM+FrDDWL1pdKibXb8wSB1702i7sB/DJMkDluHx3Y0qBpJHp2YgNGzhDRA8K1xnDNYGecb0tDT7gUK2GkiXaTosUylgHl+kwQGGNuYFrboDLzClUB432aaHajzsdETvDtYorenruGHh2sS7DvurLNx6DZijbtD8RqPIxOGFzLQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <54e126fc-484b-92fa-ce66-f901f92ec19c@suse.com>
Date: Tue, 4 Apr 2023 11:20:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] cmdline: document and enforce "extra_guest_irqs" upper bounds
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0049.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8422:EE_
X-MS-Office365-Filtering-Correlation-Id: b91f59e7-56c9-467e-0c21-08db34edde56
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b91f59e7-56c9-467e-0c21-08db34edde56
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 09:20:44.6882
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: G9qH83bYv/CAcTpMap9oTVfpNRDRO4ovFGwcImjmw7BSntIKnzR4nW551GsntFag8CHNUT1G7YgodBNzJDe1Hg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8422

PHYSDEVOP_pirq_eoi_gmfn_v<N> accepting just a single GFN implies that no
more than 32k pIRQ-s can be used by a domain on x86. Document this upper
bound.

To also enforce the limit, (ab)use both arch_hwdom_irqs() (changing its
parameter type) and setup_system_domains(). This is primarily to avoid
exposing the two static variables or introducing yet further arch hooks.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Instead of passing dom_xen into arch_hwdom_irqs(), NULL could also be
used. That would make the connection to setup_system_domains() yet more
weak, though.

On Arm the upper limit right now is effectively zero, albeit with -
afaict - no impact if a higher value were used (and hence permitting up
to the default of 32 is okay, albeit useless). The question though is
whether the command line option as a whole shouldn't be x86-only.

Passing the domain pointer instead of the domain ID would also allow
returning a possibly different value if sensible for PVH Dom0 (which
presently has no access to PHYSDEVOP_pirq_eoi_gmfn_v<N> in the first
place).
---
v2: Also enforce these bounds. Adjust doc to constrain the bound to x86
    only.

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1130,7 +1130,8 @@ common for all domUs, while the optional
 is for dom0.  Changing the setting for domU has no impact on dom0 and vice
 versa.  For example to change dom0 without changing domU, use
 `extra_guest_irqs=,512`.  The default value for Dom0 and an eventual separate
-hardware domain is architecture dependent.
+hardware domain is architecture dependent.  The upper limit for both values on
+x86 is such that the resulting total number of IRQs can't be higher than 32768.
 Note that specifying zero as domU value means zero, while for dom0 it means
 to use the default.
 
--- a/xen/arch/arm/include/asm/irq.h
+++ b/xen/arch/arm/include/asm/irq.h
@@ -52,7 +52,7 @@ struct arch_irq_desc {
 
 extern const unsigned int nr_irqs;
 #define nr_static_irqs NR_IRQS
-#define arch_hwdom_irqs(domid) NR_IRQS
+#define arch_hwdom_irqs(d) NR_IRQS
 
 struct irq_desc;
 struct irqaction;
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2665,18 +2665,21 @@ void __init ioapic_init(void)
            nr_irqs_gsi, nr_irqs - nr_irqs_gsi);
 }
 
-unsigned int arch_hwdom_irqs(domid_t domid)
+unsigned int arch_hwdom_irqs(const struct domain *d)
 {
     unsigned int n = fls(num_present_cpus());
 
-    if ( !domid )
+    if ( is_system_domain(d) )
+        return PAGE_SIZE * BITS_PER_BYTE;
+
+    if ( !d->domain_id )
         n = min(n, dom0_max_vcpus());
     n = min(nr_irqs_gsi + n * NR_DYNAMIC_VECTORS, nr_irqs);
 
     /* Bounded by the domain pirq eoi bitmap gfn. */
     n = min_t(unsigned int, n, PAGE_SIZE * BITS_PER_BYTE);
 
-    printk("Dom%d has maximum %u PIRQs\n", domid, n);
+    printk("%pd has maximum %u PIRQs\n", d, n);
 
     return n;
 }
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -659,7 +659,7 @@ struct domain *domain_create(domid_t dom
             d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
         else
             d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
-                                           : arch_hwdom_irqs(domid);
+                                           : arch_hwdom_irqs(d);
         d->nr_pirqs = min(d->nr_pirqs, nr_irqs);
 
         radix_tree_init(&d->pirq_tree);
@@ -771,6 +771,8 @@ struct domain *domain_create(domid_t dom
 
 void __init setup_system_domains(void)
 {
+    unsigned int n;
+
     /*
      * Initialise our DOMID_XEN domain.
      * Any Xen-heap pages that we will allow to be mapped will have
@@ -782,6 +784,19 @@ void __init setup_system_domains(void)
     if ( IS_ERR(dom_xen) )
         panic("Failed to create d[XEN]: %ld\n", PTR_ERR(dom_xen));
 
+    /* Bound-check values passed via "extra_guest_irqs=". */
+    n = max(arch_hwdom_irqs(dom_xen), nr_static_irqs);
+    if ( extra_hwdom_irqs > n - nr_static_irqs )
+    {
+        extra_hwdom_irqs = n - nr_static_irqs;
+        printk(XENLOG_WARNING "hwdom IRQs bounded to %u\n", n);
+    }
+    if ( extra_domU_irqs > max(32U, n - nr_static_irqs) )
+    {
+        extra_domU_irqs = n - nr_static_irqs;
+        printk(XENLOG_WARNING "domU IRQs bounded to %u\n", n);
+    }
+
     /*
      * Initialise our DOMID_IO domain.
      * This domain owns I/O pages that are within the range of the page_info
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -173,8 +173,9 @@ extern irq_desc_t *pirq_spin_lock_irq_de
 
 unsigned int set_desc_affinity(struct irq_desc *, const cpumask_t *);
 
+/* When passed a system domain, this returns the maximum permissible value. */
 #ifndef arch_hwdom_irqs
-unsigned int arch_hwdom_irqs(domid_t);
+unsigned int arch_hwdom_irqs(const struct domain *);
 #endif
 
 #ifndef arch_evtchn_bind_pirq


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:38:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:38:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517701.803430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjd7Y-00025P-RH; Tue, 04 Apr 2023 09:38:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517701.803430; Tue, 04 Apr 2023 09:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjd7Y-00025I-Mz; Tue, 04 Apr 2023 09:38:32 +0000
Received: by outflank-mailman (input) for mailman id 517701;
 Tue, 04 Apr 2023 09:38:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tGS1=73=redhat.com=dgilbert@srs-se1.protection.inumbo.net>)
 id 1pjd7X-00025C-9e
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:38:31 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74b0c826-d2cc-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 11:38:30 +0200 (CEST)
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-541-4kZDA1kzP8OvR8XertBUoQ-1; Tue, 04 Apr 2023 05:38:25 -0400
Received: by mail-wr1-f70.google.com with SMTP id
 t9-20020adfba49000000b002dd3986083bso3601087wrg.12
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 02:38:25 -0700 (PDT)
Received: from work-vm
 (ward-16-b2-v4wan-166627-cust863.vm18.cable.virginm.net. [81.97.203.96])
 by smtp.gmail.com with ESMTPSA id
 e5-20020a05600c218500b003ed243222adsm14539157wme.42.2023.04.04.02.38.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 Apr 2023 02:38:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74b0c826-d2cc-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680601108;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=PrhRpuE5PfYdyTK6FnQG64gKAkt74Bkdl20J+vUSjAY=;
	b=FCGjmNrGWmqVC6GsMsCbqwGSvXl+NyJS+9aMQ4g8aI6GWehtHWt1yIuOIdu1ubhi347BjA
	VXgzuL/f5SnK9PZD2bPsMA+griQj6gPlNnMTiwa1DQ2NQ9Jj2vTGivcyj9SwhBnbe+eKCL
	WdHKv9QBovcwgjsLoTpQMpQNTHOmbqo=
X-MC-Unique: 4kZDA1kzP8OvR8XertBUoQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680601104; x=1683193104;
        h=user-agent:in-reply-to:content-disposition:mime-version:references
         :message-id:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=PrhRpuE5PfYdyTK6FnQG64gKAkt74Bkdl20J+vUSjAY=;
        b=1kJDy+OxjQOX+xUYuwBsQUedTf5o+ipR1S+z5p79ZfV0w3dRcW7DAtv4mD54u6p62q
         IkAlEpnV7d7ITGnXjR1xyQFrj/GzFtDJIpROOhyYw91iHf/kSYJcUIeMAA1HxK1gbH9D
         nX9yiJCTymjUrEQlqC0K4YadqGHVkP9evkG8dqLfY3q7wpqz3Svyewd3sKGWfPGhQgN6
         3KrqJCEJEYf2iulrZ1hGH5vdL3L5cWoDlJ68Dh74PiSpCM+3DOaVBrjQIIflXIxNL5kk
         DDM9jqwhAr/Zwvw91l72OLmsewanIIUVwmThYSy+9Iscgh87z30LdtJcUW+HFKEqxy+q
         i/vQ==
X-Gm-Message-State: AAQBX9f8fYMXvTsIKVwW5urqyO232RlJniZW69YPgQH1nWoDiwEaaS87
	3ylo1ZTczkQsiKGWAwDTrGZNcpNlB7Wl1tZtU7OWmLdUV7rJ9Tyq22d8OZt0lHkN2zWUWQCO1Sx
	DGRW7OFs/jfY33qo+eMtTaefYiyE=
X-Received: by 2002:a7b:c38a:0:b0:3ed:e715:1784 with SMTP id s10-20020a7bc38a000000b003ede7151784mr1947188wmj.15.1680601104666;
        Tue, 04 Apr 2023 02:38:24 -0700 (PDT)
X-Google-Smtp-Source: AKy350arlH3TpIuzppyhWEkgvJ11yISyREQNvnJp5gdw8zP6hTf84qhuHwXkNDKAhWVO4OJfBTXc1g==
X-Received: by 2002:a7b:c38a:0:b0:3ed:e715:1784 with SMTP id s10-20020a7bc38a000000b003ede7151784mr1947163wmj.15.1680601104292;
        Tue, 04 Apr 2023 02:38:24 -0700 (PDT)
Date: Tue, 4 Apr 2023 10:38:21 +0100
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Juan Quintela <quintela@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org,
	Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>, Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>, David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>, eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH 13/13] aio: remove aio_disable_external() API
Message-ID: <ZCvwDVPTNS8VUtVb@work-vm>
References: <20230403183004.347205-1-stefanha@redhat.com>
 <20230403183004.347205-14-stefanha@redhat.com>
 <877cusroqp.fsf@secure.mitica>
MIME-Version: 1.0
In-Reply-To: <877cusroqp.fsf@secure.mitica>
User-Agent: Mutt/2.2.9 (2022-11-12)
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

* Juan Quintela (quintela@redhat.com) wrote:
> Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > All callers now pass is_external=false to aio_set_fd_handler() and
> > aio_set_event_notifier(). The aio_disable_external() API that
> > temporarily disables fd handlers that were registered is_external=true
> > is therefore dead code.
> >
> > Remove aio_disable_external(), aio_enable_external(), and the
> > is_external arguments to aio_set_fd_handler() and
> > aio_set_event_notifier().
> >
> > The entire test-fdmon-epoll test is removed because its sole purpose was
> > testing aio_disable_external().
> >
> > Parts of this patch were generated using the following coccinelle
> > (https://coccinelle.lip6.fr/) semantic patch:
> >
> >   @@
> >   expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
> >   @@
> >   - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
> >   + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)
> >
> >   @@
> >   expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
> >   @@
> >   - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
> >   + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> 
> [....]
> 
> > diff --git a/migration/rdma.c b/migration/rdma.c
> > index df646be35e..aee41ca43e 100644
> > --- a/migration/rdma.c
> > +++ b/migration/rdma.c
> > @@ -3104,15 +3104,15 @@ static void qio_channel_rdma_set_aio_fd_handler(QIOChannel *ioc,
> >  {
> >      QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
> >      if (io_read) {
> > -        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd,
> > -                           false, io_read, io_write, NULL, NULL, opaque);
> > -        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd,
> > -                           false, io_read, io_write, NULL, NULL, opaque);
> > +        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, io_read,
> > +                           io_write, NULL, NULL, opaque);
> > +        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, io_read,
> > +                           io_write, NULL, NULL, opaque);
> >      } else {
> > -        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd,
> > -                           false, io_read, io_write, NULL, NULL, opaque);
> > -        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd,
> > -                           false, io_read, io_write, NULL, NULL, opaque);
> > +        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, io_read,
> > +                           io_write, NULL, NULL, opaque);
> > +        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, io_read,
> > +                           io_write, NULL, NULL, opaque);
> >      }
> >  }
> 
> Reviewed-by: Juan Quintela <quintela@redhat.com>
> 
> For the migration bits.
> I don't even want to know why the RDMA code uses a low level block layer API.

I don't think it's block specific.
It looks like it's because qio_channel uses aio in the case where
something returns QIO_CHANNEL_ERR_BLOCK and then waits until it can
make progress again; see commit 4d9f675, which added it.
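The wait-until-ready pattern behind QIO_CHANNEL_ERR_BLOCK can be sketched with
plain POSIX primitives (a generic illustration, not QEMU's AioContext code;
fill_until_block and wait_writable are made-up helper names):

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <string.h>
#include <unistd.h>

/* Fill a non-blocking fd until the kernel reports "would block" --
 * the moral equivalent of QIO_CHANNEL_ERR_BLOCK.  Returns the number
 * of bytes accepted before blocking, or -1 on unexpected error. */
static long fill_until_block(int wfd)
{
    char buf[4096];
    long total = 0;
    ssize_t n;

    memset(buf, 'x', sizeof(buf));
    while ((n = write(wfd, buf, sizeof(buf))) > 0)
        total += n;
    if (errno != EAGAIN && errno != EWOULDBLOCK)
        return -1;
    return total;
}

/* Wait until the fd is writable again, much as an fd handler
 * registered via aio_set_fd_handler() would be woken.  Returns 1 if
 * writable within the timeout, 0 otherwise. */
static int wait_writable(int wfd, int timeout_ms)
{
    struct pollfd pfd = { .fd = wfd, .events = POLLOUT };

    return poll(&pfd, 1, timeout_ms) == 1 && (pfd.revents & POLLOUT) ? 1 : 0;
}
```

Once the peer drains some data, poll() reports POLLOUT and the blocked
channel operation can be retried.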

Dave
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:49:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:49:46 +0000
Date: Tue, 4 Apr 2023 11:49:14 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH] x86/p2m-pt: do type recalculations with p2m read lock
Message-ID: <ZCvymhQClg8bzzwv@Air-de-Roger>
References: <20230403101449.93323-1-roger.pau@citrix.com>
 <0adcb388-d2a9-e43e-ec20-de1df51f33d7@suse.com>
 <ZCrzD7tH5WXARIvF@Air-de-Roger>
 <a247059e-27a8-0569-93af-de03e842e341@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a247059e-27a8-0569-93af-de03e842e341@suse.com>
MIME-Version: 1.0

On Mon, Apr 03, 2023 at 06:07:26PM +0200, Jan Beulich wrote:
> On 03.04.2023 17:38, Roger Pau Monné wrote:
> > On Mon, Apr 03, 2023 at 05:32:39PM +0200, Jan Beulich wrote:
> >> On 03.04.2023 12:14, Roger Pau Monne wrote:
> >>> Global p2m type recalculations (as triggered by logdirty) can create
> >>> so much contention on the p2m lock that simple guest operations like
> >>> VCPUOP_set_singleshot_timer on guests with a high amount of vCPUs (32)
> >>> will cease to work in a timely manner, up to the point that Linux
> >>> kernel versions that still use the VCPU_SSHOTTMR_future flag with the
> >>> singleshot timer will cease to work:
> >>>
> >>> [   82.779470] CE: xen increased min_delta_ns to 1000000 nsec
> >>> [   82.793075] CE: Reprogramming failure. Giving up
> >>> [   82.779470] CE: Reprogramming failure. Giving up
> >>> [   82.821864] CE: xen increased min_delta_ns to 506250 nsec
> >>> [   82.821864] CE: xen increased min_delta_ns to 759375 nsec
> >>> [   82.821864] CE: xen increased min_delta_ns to 1000000 nsec
> >>> [   82.821864] CE: Reprogramming failure. Giving up
> >>> [   82.856256] CE: Reprogramming failure. Giving up
> >>> [   84.566279] CE: Reprogramming failure. Giving up
> >>> [   84.649493] Freezing user space processes ...
> >>> [  130.604032] INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
> >>> [  130.604032] Task dump for CPU 14:
> >>> [  130.604032] swapper/14      R  running task        0     0      1 0x00000000
> >>> [  130.604032] Call Trace:
> >>> [  130.604032]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> >>> [  130.604032]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> >>> [  130.604032]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> >>> [  130.604032]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> >>> [  130.604032]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> >>> [  130.604032]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> >>> [  549.654536] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
> >>> [  549.655463] Task dump for CPU 26:
> >>> [  549.655463] swapper/26      R  running task        0     0      1 0x00000000
> >>> [  549.655463] Call Trace:
> >>> [  549.655463]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> >>> [  549.655463]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> >>> [  549.655463]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> >>> [  549.655463]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> >>> [  549.655463]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> >>> [  549.655463]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> >>> [  821.888478] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
> >>> [  821.888596] Task dump for CPU 26:
> >>> [  821.888622] swapper/26      R  running task        0     0      1 0x00000000
> >>> [  821.888677] Call Trace:
> >>> [  821.888712]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> >>> [  821.888771]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> >>> [  821.888818]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> >>> [  821.888865]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> >>> [  821.888917]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> >>> [  821.888966]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> >>>
> >>> This is obviously undesirable.  One way to bodge the issue would be to
> >>> ignore VCPU_SSHOTTMR_future, but that's a deliberate breakage of the
> >>> hypercall ABI.
> >>>
> >>> Instead lower the contention in the lock by doing the recalculation
> >>> with the lock in read mode.  This is safe because only the flags/type
> >>> are changed, there's no PTE mfn change in the AMD recalculation logic.
> >>> The Intel (EPT) case is likely more complicated, as superpage
> >>> splitting for diverging EMT values must be done with the p2m lock
> >>> taken in write mode.
> >>>
> >>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >>> ---
> >>> I'm unsure whether such modification is fully safe:  I think changing
> >>> the flags/type should be fine: the PTE write is performed using
> >>> safwrite_p2m_entry() which must be atomic (as the guest is still
> >>> running and accessing the page tables).  I'm slightly worried about
> >>> all PTE readers not using atomic accesses to do so (ie: pointer
> >>> returned by p2m_find_entry() should be read atomically), and code
> >>> assuming that a gfn type cannot change while holding the p2m lock in
> >>> read mode.
> >>
> >> Coming back to this: Yes, I think reads (at least the ones in do_recalc()
> >> which can now be done in parallel) will need to be tightened if this is a
> >> road we want to follow.
> > 
> > There are likely a lot of reads under the p2m read lock outside of
> > do_recalc() that will ideally need to be switched to use atomic
> > accesses also?
> 
> Possibly, perhaps even likely. I specifically said "at least". But ones
> clearly on write-locked paths could probably be left alone.
> 
> > I'm open to suggestions to other ways to get this sorted.  And that's
> > a guest with 'just' 32 vCPUs, as we go up the contention on the p2m
> > lock during recalcs/misconfigs is going to increase massively.
> 
> I'm afraid I don't have any really good idea, but I'm wondering whether
> trylock with (almost?) immediate exit back to guest might make this any
> better. At least the guest could then take interrupts before the insn
> is retried. Another thought in this direction would be to have a variant
> of trylock which "senses" how contended the lock is, to spin if it's the
> first one to wait, but exit (fail) otherwise.

Using trylock in the recalc path could starve the recalc quite badly,
as readers can acquire the lock concurrently.  We would also lose
fairness.

Using trylock on VCPUOP_set_singleshot_timer (in order to fetch the
data from the guest-provided pointer) would lead to the same situation
AFAICT, as guests using VCPU_SSHOTTMR_future will likely see the time
already expired by the time the hypercall checks it.

One thing I've noticed is that copy_from_guest for HVM
(__hvm_copy()) takes and releases the p2m lock in read mode at several
points.  It would likely be better if the whole GVA -> MFN translation
were resolved inside a single read-locked region, as that would avoid
stalls in the middle of the operation, but I don't think this will
solve the issue at hand.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:52:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:52:48 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 02/15] x86: Rename {domctl,sysctl}.cpu_policy.{cpuid,msr}_policy fields
Date: Tue, 4 Apr 2023 10:52:09 +0100
Message-ID: <20230404095222.1373721-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

These weren't great names to begin with, and using {leaves,msrs} matches up
better with the existing nr_{leaves,msrs} parameters anyway.

Furthermore, by renaming these fields we can get away with using some #define
trickery to avoid the struct {cpuid,msr}_policy merge needing to happen in a
single changeset.

No functional change.
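The "#define trickery" mentioned above can be sketched as a transitional
alias: the struct gains the new field names while a macro keeps old spellings
compiling until every caller is converted (hypothetical shim for illustration;
struct cpu_policy_args and legacy_caller are made-up names, not the actual
series' code):

```c
#include <assert.h>
#include <string.h>

/* New field names land first... */
struct cpu_policy_args {
    unsigned int nr_leaves;
    unsigned int nr_msrs;
    void *leaves;   /* was: cpuid_policy */
    void *msrs;     /* was: msr_policy   */
};

/* ...while transitional aliases keep unconverted code building.
 * They are deleted once the rename is complete. */
#define cpuid_policy leaves
#define msr_policy   msrs

/* A caller still using the old name compiles unchanged: the macro
 * expands args->cpuid_policy to args->leaves. */
static void legacy_caller(struct cpu_policy_args *args, void *buf)
{
    args->cpuid_policy = buf;
}
```

This lets the field rename and the later struct merge happen in separate,
individually bisectable changesets.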

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Fix typo in commit message
---
 tools/libs/guest/xg_cpuid_x86.c | 12 ++++++------
 xen/arch/x86/domctl.c           | 12 ++++++------
 xen/arch/x86/sysctl.c           |  8 ++++----
 xen/include/public/domctl.h     |  4 ++--
 xen/include/public/sysctl.h     |  4 ++--
 5 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 1b02bc987af7..5fae06e77804 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -145,9 +145,9 @@ static int get_system_cpu_policy(xc_interface *xch, uint32_t index,
     sysctl.cmd = XEN_SYSCTL_get_cpu_policy;
     sysctl.u.cpu_policy.index = index;
     sysctl.u.cpu_policy.nr_leaves = *nr_leaves;
-    set_xen_guest_handle(sysctl.u.cpu_policy.cpuid_policy, leaves);
+    set_xen_guest_handle(sysctl.u.cpu_policy.leaves, leaves);
     sysctl.u.cpu_policy.nr_msrs = *nr_msrs;
-    set_xen_guest_handle(sysctl.u.cpu_policy.msr_policy, msrs);
+    set_xen_guest_handle(sysctl.u.cpu_policy.msrs, msrs);
 
     ret = do_sysctl(xch, &sysctl);
 
@@ -183,9 +183,9 @@ static int get_domain_cpu_policy(xc_interface *xch, uint32_t domid,
     domctl.cmd = XEN_DOMCTL_get_cpu_policy;
     domctl.domain = domid;
     domctl.u.cpu_policy.nr_leaves = *nr_leaves;
-    set_xen_guest_handle(domctl.u.cpu_policy.cpuid_policy, leaves);
+    set_xen_guest_handle(domctl.u.cpu_policy.leaves, leaves);
     domctl.u.cpu_policy.nr_msrs = *nr_msrs;
-    set_xen_guest_handle(domctl.u.cpu_policy.msr_policy, msrs);
+    set_xen_guest_handle(domctl.u.cpu_policy.msrs, msrs);
 
     ret = do_domctl(xch, &domctl);
 
@@ -232,9 +232,9 @@ int xc_set_domain_cpu_policy(xc_interface *xch, uint32_t domid,
     domctl.cmd = XEN_DOMCTL_set_cpu_policy;
     domctl.domain = domid;
     domctl.u.cpu_policy.nr_leaves = nr_leaves;
-    set_xen_guest_handle(domctl.u.cpu_policy.cpuid_policy, leaves);
+    set_xen_guest_handle(domctl.u.cpu_policy.leaves, leaves);
     domctl.u.cpu_policy.nr_msrs = nr_msrs;
-    set_xen_guest_handle(domctl.u.cpu_policy.msr_policy, msrs);
+    set_xen_guest_handle(domctl.u.cpu_policy.msrs, msrs);
     domctl.u.cpu_policy.err_leaf = -1;
     domctl.u.cpu_policy.err_subleaf = -1;
     domctl.u.cpu_policy.err_msr = -1;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 0b41b279507e..944af63e68d0 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -54,10 +54,10 @@ static int update_domain_cpu_policy(struct domain *d,
 
     /* Merge the toolstack provided data. */
     if ( (ret = x86_cpuid_copy_from_buffer(
-              new.cpuid, xdpc->cpuid_policy, xdpc->nr_leaves,
+              new.cpuid, xdpc->leaves, xdpc->nr_leaves,
               &err.leaf, &err.subleaf)) ||
          (ret = x86_msr_copy_from_buffer(
-              new.msr, xdpc->msr_policy, xdpc->nr_msrs, &err.msr)) )
+              new.msr, xdpc->msrs, xdpc->nr_msrs, &err.msr)) )
         goto out;
 
     /* Trim any newly-stale out-of-range leaves. */
@@ -1317,20 +1317,20 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_get_cpu_policy:
         /* Process the CPUID leaves. */
-        if ( guest_handle_is_null(domctl->u.cpu_policy.cpuid_policy) )
+        if ( guest_handle_is_null(domctl->u.cpu_policy.leaves) )
             domctl->u.cpu_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
         else if ( (ret = x86_cpuid_copy_to_buffer(
                        d->arch.cpuid,
-                       domctl->u.cpu_policy.cpuid_policy,
+                       domctl->u.cpu_policy.leaves,
                        &domctl->u.cpu_policy.nr_leaves)) )
             break;
 
         /* Process the MSR entries. */
-        if ( guest_handle_is_null(domctl->u.cpu_policy.msr_policy) )
+        if ( guest_handle_is_null(domctl->u.cpu_policy.msrs) )
             domctl->u.cpu_policy.nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
         else if ( (ret = x86_msr_copy_to_buffer(
                        d->arch.msr,
-                       domctl->u.cpu_policy.msr_policy,
+                       domctl->u.cpu_policy.msrs,
                        &domctl->u.cpu_policy.nr_msrs)) )
             break;
 
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 3f5b092df16a..3ed7c69f4315 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -411,11 +411,11 @@ long arch_do_sysctl(
         }
 
         /* Process the CPUID leaves. */
-        if ( guest_handle_is_null(sysctl->u.cpu_policy.cpuid_policy) )
+        if ( guest_handle_is_null(sysctl->u.cpu_policy.leaves) )
             sysctl->u.cpu_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
         else if ( (ret = x86_cpuid_copy_to_buffer(
                        policy->cpuid,
-                       sysctl->u.cpu_policy.cpuid_policy,
+                       sysctl->u.cpu_policy.leaves,
                        &sysctl->u.cpu_policy.nr_leaves)) )
             break;
 
@@ -427,11 +427,11 @@ long arch_do_sysctl(
         }
 
         /* Process the MSR entries. */
-        if ( guest_handle_is_null(sysctl->u.cpu_policy.msr_policy) )
+        if ( guest_handle_is_null(sysctl->u.cpu_policy.msrs) )
             sysctl->u.cpu_policy.nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
         else if ( (ret = x86_msr_copy_to_buffer(
                        policy->msr,
-                       sysctl->u.cpu_policy.msr_policy,
+                       sysctl->u.cpu_policy.msrs,
                        &sysctl->u.cpu_policy.nr_msrs)) )
             break;
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 7280e9f96816..529801c89ba3 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -683,8 +683,8 @@ struct xen_domctl_cpu_policy {
                          * 'cpuid_policy'. */
     uint32_t nr_msrs;   /* IN/OUT: Number of MSRs in/written to
                          * 'msr_policy' */
-    XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_policy; /* IN/OUT */
-    XEN_GUEST_HANDLE_64(xen_msr_entry_t) msr_policy;    /* IN/OUT */
+    XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) leaves; /* IN/OUT */
+    XEN_GUEST_HANDLE_64(xen_msr_entry_t)  msrs;   /* IN/OUT */
 
     /*
      * OUT, set_policy only.  Written in some (but not all) error cases to
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index e8dded9fb94a..2b24d6bfd00e 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -1050,8 +1050,8 @@ struct xen_sysctl_cpu_policy {
                            * 'msr_policy', or the maximum number of MSRs if
                            * the guest handle is NULL. */
     uint32_t _rsvd;       /* Must be zero. */
-    XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_policy; /* OUT */
-    XEN_GUEST_HANDLE_64(xen_msr_entry_t) msr_policy;    /* OUT */
+    XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) leaves; /* OUT */
+    XEN_GUEST_HANDLE_64(xen_msr_entry_t)  msrs;   /* OUT */
 };
 typedef struct xen_sysctl_cpu_policy xen_sysctl_cpu_policy_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpu_policy_t);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:52:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:52:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517714.803496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLM-0006FC-8d; Tue, 04 Apr 2023 09:52:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517714.803496; Tue, 04 Apr 2023 09:52:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLM-0006Ds-2q; Tue, 04 Apr 2023 09:52:48 +0000
Received: by outflank-mailman (input) for mailman id 517714;
 Tue, 04 Apr 2023 09:52:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdLK-0005bo-MN
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:52:46 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 71dfcd27-d2ce-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 11:52:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71dfcd27-d2ce-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680601964;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=lvleilS5BRDLhyzG1k+YQ1TdfDraad9QCp1ZsHyb0Xk=;
  b=WunbCIuEUdHCEfvZ4bFM7QNzud+F87X+JbejyrRmxpS2SqPUiPLvnKcW
   th6/wWn72x1OdW5pTTLauqQZI+Eus8pu50njyMdyQH6CqpcZmdHFiYsXx
   +GliXG8kQQgPqMUQCK0PpWy0zbmos0WPl4+rm7nkcGE7G7v3XLVU5mAIc
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106670669
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:j7CdmqBPer0whRVW/1bjw5YqxClBgxIJ4kV8jS/XYbTApDMl12FWy
 jYWUWmEbPyKZjOkc9txbt/l9E4EsJOAy9M2QQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFu8pvlDs15K6p4G9A4gRnDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw171LJ0Fj6
 vUhM3NXZBSgjsm376Kbc7w57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTILs4kP2lmT/UdDpApUjOjaE2/3LS3Ep6172F3N/9I4TUG5oNwBjJz
 o7A1z6lEz8mC9Xc8wbftULrhajxpAWjdqtHQdVU8dY12QbOlwT/EiY+Sl+TsfS/zEmkVLp3O
 0ESvyYjs6U23EiqVcXmGQ21pmaeuRwRUMYWFPc1gCmPwKfJ5weSBkAfUyVMLtchsacLqScCj
 wHT2YmzXHo27ePTECjGnluJkd+sEQVOEUkiSDANdBBGudPng4gYjz3fbu82RcZZkebJ9SHML
 yGi9XZu3+hM05RUjs1X7nic3Wvy+8Ghohodo1yOAzn7tl4RiJuNPdTA1LTN0RpXwG91pHGlt
 WNMpcWR5ftm4XqlxH3UG7Vl8F1ECp+43Nzgbb1HRcNJG8yFoSLLQGypyGgWyL1VGsgFYyT1R
 0TYpBlc4pReVFPzM/8vPt7vWp5xl/awfTgAahwzRoMWCqWdiSfdpH0+DaJu9zuFfLcQfVEXZ
 s7ALJfE4YcyAqV71jumL9ogPUsQ7nlmnwv7HMmrpylLJJLCPBZ5v59ZagrRBg34hYvYyDjoH
 yF3bJTVl08GDLKnMkE6M+c7dDg3EJTyPriuw+Q/SwJJClMO9L0JYxMJ/Y4cRg==
IronPort-HdrOrdr: A9a23:3Hy5EK55FGsbrdfn5wPXwAzXdLJyesId70hD6qkQc3Fom62j5q
 WTdZEgvyMc5wx/ZJhNo7690cq7MBHhHPxOgbX5VI3KNGXbUQOTR72KhrGSoAEIdReeygZcv5
 0QCZSXCrfLfCVHZRCR2njFLz4iquP3j5xBnY3lvhNQpZkBUdAZ0+9+YDzrdXFedU19KrcSMo
 GT3cZDryrIQwVtUizqbkN1OdQqvrfw5evbXSI=
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="106670669"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 04/15] x86: Merge struct msr_policy into struct cpu_policy
Date: Tue, 4 Apr 2023 10:52:11 +0100
Message-ID: <20230404095222.1373721-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

As with the cpuid side, use a temporary define to make struct msr_policy still
work.

Note, this means that domains now have two separate struct cpu_policy
allocations with disjoint information, and system policies are in a similar
position, as well as xc_cpu_policy objects in libxenguest.  All of these
duplications will be addressed in the following patches.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Alter msr_policy -> cpu_policy in comments.
---
 tools/fuzz/cpu-policy/afl-policy-fuzzer.c |   1 -
 xen/arch/x86/include/asm/msr.h            |   3 +-
 xen/include/xen/lib/x86/cpu-policy.h      |  81 ++++++++++++++++-
 xen/include/xen/lib/x86/msr.h             | 104 ----------------------
 xen/lib/x86/msr.c                         |   2 +-
 5 files changed, 83 insertions(+), 108 deletions(-)
 delete mode 100644 xen/include/xen/lib/x86/msr.h

diff --git a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
index 79e42e8bfd04..0ce3d8e16626 100644
--- a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
+++ b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
@@ -10,7 +10,6 @@
 
 #include <xen-tools/common-macros.h>
 #include <xen/lib/x86/cpu-policy.h>
-#include <xen/lib/x86/msr.h>
 #include <xen/domctl.h>
 
 static bool debug;
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index 7946b6b24c11..02eddd919c27 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -6,8 +6,9 @@
 #include <xen/types.h>
 #include <xen/percpu.h>
 #include <xen/errno.h>
+#include <xen/kernel.h>
 
-#include <xen/lib/x86/msr.h>
+#include <xen/lib/x86/cpu-policy.h>
 
 #include <asm/asm_defns.h>
 #include <asm/cpufeature.h>
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 666505964d00..53fffca55211 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -3,7 +3,6 @@
 #define XEN_LIB_X86_POLICIES_H
 
 #include <xen/lib/x86/cpuid-autogen.h>
-#include <xen/lib/x86/msr.h>
 
 #define FEATURESET_1d     0 /* 0x00000001.edx      */
 #define FEATURESET_1c     1 /* 0x00000001.ecx      */
@@ -107,6 +106,9 @@ const char *x86_cpuid_vendor_to_str(unsigned int vendor);
      CPUID_GUEST_NR_XSTATE - !!CPUID_GUEST_NR_XSTATE +  \
      CPUID_GUEST_NR_EXTD + 2 /* hv_limit and hv2_limit */ )
 
+/* Maximum number of MSRs written when serialising a cpu_policy. */
+#define MSR_MAX_SERIALISED_ENTRIES 2
+
 struct cpu_policy
 {
 #define DECL_BITFIELD(word) _DECL_BITFIELD(FEATURESET_ ## word)
@@ -324,6 +326,44 @@ struct cpu_policy
         };
     } extd;
 
+    /*
+     * 0x000000ce - MSR_INTEL_PLATFORM_INFO
+     *
+     * This MSR is non-architectural, but for simplicity we allow it to be read
+     * unconditionally.  CPUID Faulting support can be fully emulated for HVM
+     * guests so can be offered unconditionally, while support for PV guests
+     * is dependent on real hardware support.
+     */
+    union {
+        uint32_t raw;
+        struct {
+            uint32_t :31;
+            bool cpuid_faulting:1;
+        };
+    } platform_info;
+
+    /*
+     * 0x0000010a - MSR_ARCH_CAPABILITIES
+     *
+     * This is an Intel-only MSR, which provides miscellaneous enumeration,
+     * including those which indicate that microarchitectural sidechannels are
+     * fixed in hardware.
+     */
+    union {
+        uint32_t raw;
+        struct {
+            bool rdcl_no:1;
+            bool ibrs_all:1;
+            bool rsba:1;
+            bool skip_l1dfl:1;
+            bool ssb_no:1;
+            bool mds_no:1;
+            bool if_pschange_mc_no:1;
+            bool tsx_ctrl:1;
+            bool taa_no:1;
+        };
+    } arch_caps;
+
 #undef __DECL_BITFIELD
 #undef _DECL_BITFIELD
 #undef DECL_BITFIELD
@@ -337,6 +377,7 @@ struct cpu_policy
 
 /* Temporary */
 #define cpuid_policy cpu_policy
+#define msr_policy cpu_policy
 
 struct old_cpu_policy
 {
@@ -438,9 +479,11 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
 #ifdef __XEN__
 #include <public/arch-x86/xen.h>
 typedef XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_leaf_buffer_t;
+typedef XEN_GUEST_HANDLE_64(xen_msr_entry_t) msr_entry_buffer_t;
 #else
 #include <xen/arch-x86/xen.h>
 typedef xen_cpuid_leaf_t cpuid_leaf_buffer_t[];
+typedef xen_msr_entry_t msr_entry_buffer_t[];
 #endif
 
 /**
@@ -480,6 +523,42 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
                                uint32_t nr_entries, uint32_t *err_leaf,
                                uint32_t *err_subleaf);
 
+/**
+ * Serialise an msr_policy object into an array.
+ *
+ * @param policy     The msr_policy to serialise.
+ * @param msrs       The array of msrs to serialise into.
+ * @param nr_entries The number of entries in 'msrs'.
+ * @returns -errno
+ *
+ * Writes at most MSR_MAX_SERIALISED_ENTRIES.  May fail with -ENOBUFS if the
+ * buffer array is too short.  On success, nr_entries is updated with the
+ * actual number of msrs written.
+ */
+int x86_msr_copy_to_buffer(const struct msr_policy *policy,
+                           msr_entry_buffer_t msrs, uint32_t *nr_entries);
+
+/**
+ * Unserialise an msr_policy object from an array of msrs.
+ *
+ * @param policy     The msr_policy object to unserialise into.
+ * @param msrs       The array of msrs to unserialise from.
+ * @param nr_entries The number of entries in 'msrs'.
+ * @param err_msr    Optional hint for error diagnostics.
+ * @returns -errno
+ *
+ * Reads at most MSR_MAX_SERIALISED_ENTRIES.  May fail for a number of reasons
+ * based on the content in an individual 'msrs' entry, including the MSR index
+ * not being valid in the policy, the flags field being nonzero, or if the
+ * value provided would truncate when stored in the policy.  In such cases,
+ * the optional err_* pointer will identify the problematic MSR.
+ *
+ * No content validation is performed on the data stored in the policy object.
+ */
+int x86_msr_copy_from_buffer(struct msr_policy *policy,
+                             const msr_entry_buffer_t msrs, uint32_t nr_entries,
+                             uint32_t *err_msr);
+
 /*
  * Calculate whether two policies are compatible.
  *
diff --git a/xen/include/xen/lib/x86/msr.h b/xen/include/xen/lib/x86/msr.h
deleted file mode 100644
index 48ba4a59c036..000000000000
--- a/xen/include/xen/lib/x86/msr.h
+++ /dev/null
@@ -1,104 +0,0 @@
-/* Common data structures and functions consumed by hypervisor and toolstack */
-#ifndef XEN_LIB_X86_MSR_H
-#define XEN_LIB_X86_MSR_H
-
-/* Maximum number of MSRs written when serialising msr_policy. */
-#define MSR_MAX_SERIALISED_ENTRIES 2
-
-/* MSR policy object for shared per-domain MSRs */
-struct msr_policy
-{
-    /*
-     * 0x000000ce - MSR_INTEL_PLATFORM_INFO
-     *
-     * This MSR is non-architectural, but for simplicy we allow it to be read
-     * unconditionally.  CPUID Faulting support can be fully emulated for HVM
-     * guests so can be offered unconditionally, while support for PV guests
-     * is dependent on real hardware support.
-     */
-    union {
-        uint32_t raw;
-        struct {
-            uint32_t :31;
-            bool cpuid_faulting:1;
-        };
-    } platform_info;
-
-    /*
-     * 0x0000010a - MSR_ARCH_CAPABILITIES
-     *
-     * This is an Intel-only MSR, which provides miscellaneous enumeration,
-     * including those which indicate that microarchitectrual sidechannels are
-     * fixed in hardware.
-     */
-    union {
-        uint32_t raw;
-        struct {
-            bool rdcl_no:1;
-            bool ibrs_all:1;
-            bool rsba:1;
-            bool skip_l1dfl:1;
-            bool ssb_no:1;
-            bool mds_no:1;
-            bool if_pschange_mc_no:1;
-            bool tsx_ctrl:1;
-            bool taa_no:1;
-        };
-    } arch_caps;
-};
-
-#ifdef __XEN__
-#include <public/arch-x86/xen.h>
-typedef XEN_GUEST_HANDLE_64(xen_msr_entry_t) msr_entry_buffer_t;
-#else
-#include <xen/arch-x86/xen.h>
-typedef xen_msr_entry_t msr_entry_buffer_t[];
-#endif
-
-/**
- * Serialise an msr_policy object into an array.
- *
- * @param policy     The msr_policy to serialise.
- * @param msrs       The array of msrs to serialise into.
- * @param nr_entries The number of entries in 'msrs'.
- * @returns -errno
- *
- * Writes at most MSR_MAX_SERIALISED_ENTRIES.  May fail with -ENOBUFS if the
- * buffer array is too short.  On success, nr_entries is updated with the
- * actual number of msrs written.
- */
-int x86_msr_copy_to_buffer(const struct msr_policy *policy,
-                           msr_entry_buffer_t msrs, uint32_t *nr_entries);
-
-/**
- * Unserialise an msr_policy object from an array of msrs.
- *
- * @param policy     The msr_policy object to unserialise into.
- * @param msrs       The array of msrs to unserialise from.
- * @param nr_entries The number of entries in 'msrs'.
- * @param err_msr    Optional hint for error diagnostics.
- * @returns -errno
- *
- * Reads at most MSR_MAX_SERIALISED_ENTRIES.  May fail for a number of reasons
- * based on the content in an individual 'msrs' entry, including the MSR index
- * not being valid in the policy, the flags field being nonzero, or if the
- * value provided would truncate when stored in the policy.  In such cases,
- * the optional err_* pointer will identify the problematic MSR.
- *
- * No content validation is performed on the data stored in the policy object.
- */
-int x86_msr_copy_from_buffer(struct msr_policy *policy,
-                             const msr_entry_buffer_t msrs, uint32_t nr_entries,
-                             uint32_t *err_msr);
-
-#endif /* !XEN_LIB_X86_MSR_H */
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * tab-width: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/lib/x86/msr.c b/xen/lib/x86/msr.c
index 7d71e92a380a..c4d885e7b568 100644
--- a/xen/lib/x86/msr.c
+++ b/xen/lib/x86/msr.c
@@ -1,6 +1,6 @@
 #include "private.h"
 
-#include <xen/lib/x86/msr.h>
+#include <xen/lib/x86/cpu-policy.h>
 
 /*
  * Copy a single MSR into the provided msr_entry_buffer_t buffer, performing a
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:52:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:52:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517710.803455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLH-00059k-LP; Tue, 04 Apr 2023 09:52:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517710.803455; Tue, 04 Apr 2023 09:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLH-00058c-G1; Tue, 04 Apr 2023 09:52:43 +0000
Received: by outflank-mailman (input) for mailman id 517710;
 Tue, 04 Apr 2023 09:52:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdLG-00056d-Dy
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:52:42 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6e4dedd2-d2ce-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 11:52:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e4dedd2-d2ce-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680601958;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=26klonSMws4SwwrhUJvMYhga9nBaQ/a3jGXfpoulioI=;
  b=d/YAm3j1rzvxGBlaOJyg/vwXopWL36Ba0QNi1HIbUXHH7iL0mrvYZLpp
   +TAzt9ErqBxZXnoG1sgg2q92GP7hsgLNiGzKKLJD1L0PZUuEcfNbC+01e
   7jFMCfdD5uS76SeyBKeA0i5pHqvJdwOfpuo+H4sH2ADdHOmzOl6963T/X
   s=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104161361
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:vNzrr606nWyAYg4k/PbD5axxkn2cJEfYwER7XKvMYLTBsI5bp2NVz
 TQcCm2OO/3YZWqje9AkPNjipBkG7JbQzddrG1RspC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS+HuDgNyo4GlD5gBmOagS1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfJ3Fny
 f0oBxQ0bjOSmcmMx6jqd+triZF2RCXrFNt3VnBIyDjYCbAtQIzZQrWM7thdtNsyrpkQR7CEP
 ZNfMGcxKk2aOHWjOX9OYH46tM6uimPybHtzr1WNqLBsy2PS0BZwwP7mN9+9ltmiHJ0JxxjB/
 Tyfl4j/KgoIEd+55DeYy0qLtP3VwmDXSbsNE4Tto5aGh3XMnzdOWXX6T2CTsfS/z0KzRd9bA
 0gV4TY167g/8lSxSdvwVAH+p2SL1jY+cddNF+wx6CmW17HZpQ2eAwAsUTppeNEg8sgsSlQXO
 kShxo2zQ2Y16fvMFCzbr+3Pxd+vBcQLBUkSTBMFfSQh2tnAsZ8YtBXVH+thF6Hg27UZBgrML
 yC2QDkW3utD1ZNUif/kpDgrkBr3+MGXE1ddChH/Gzv8s1gnPNPNi5mAswCz0BpWEGqOorBtV
 lAgktPW0u0BBIrleMelELRUR+HBCxpo3VThbb9T83oJrW7FF4aLJ9w43d2HDB4B3jw4UTHoe
 lTPngha+YVeOnCnBYcuPdLpVph0nPK7T4q1PhwxUjapSsEpHDJrAQk0PRLAt4wTuBNEfV4D1
 WezLp/3UCdy5VVPxzuqXeYNuYIWKtQF7TqLH/jTlk33uYdykVbJEd/pxnPSNLFmhE5FyS2Jm
 +ti2zyikEoADLenPnaOoeb+7zkidBAGOHw/kOQPHsbrH+asMDhJ5yP5qV/5R7FYog==
IronPort-HdrOrdr: A9a23:9DL2QqBOw0rCwfjlHelo55DYdb4zR+YMi2TDt3oddfU1SL38qy
 nKpp4mPHDP5wr5NEtPpTniAtjjfZq/z/5ICOAqVN/PYOCPggCVxepZnOjfKlPbehEX9oRmpN
 1dm6oVMqyMMbCt5/yKnDVRELwbsaa6GLjDv5a785/0JzsaE52J6W1Ce2GmO3wzfiZqL7wjGq
 GR48JWzgDQAkj+PqyAdx84t/GonayzqK7b
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="104161361"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 09/15] x86: Out-of-inline the policy<->featureset convertors
Date: Tue, 4 Apr 2023 10:52:16 +0100
Message-ID: <20230404095222.1373721-10-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

These are already getting over-large for being inline functions, and are only
going to grow more over time.  Out of line them, yielding the following net
delta from bloat-o-meter:

  add/remove: 2/0 grow/shrink: 0/4 up/down: 276/-1877 (-1601)

Switch to the newer cpu_policy terminology while doing so.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * New
---
 tools/libs/guest/xg_cpuid_x86.c      |  2 +-
 xen/arch/x86/cpuid.c                 | 28 +++++++--------
 xen/arch/x86/sysctl.c                |  2 +-
 xen/include/xen/lib/x86/cpu-policy.h | 52 ++++++----------------------
 xen/lib/x86/cpuid.c                  | 42 ++++++++++++++++++++++
 5 files changed, 68 insertions(+), 58 deletions(-)

diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 259029be8b36..33d366a8eb43 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -565,7 +565,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
             }
         }
 
-        cpuid_featureset_to_policy(feat, p);
+        x86_cpu_featureset_to_policy(feat, p);
     }
     else
     {
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index df3e503ced9d..5eb5f1893516 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -368,7 +368,7 @@ static void __init calculate_host_policy(void)
     p->extd.max_leaf = 0x80000000 | min_t(uint32_t, max_extd_leaf & 0xffff,
                                           ARRAY_SIZE(p->extd.raw) - 1);
 
-    cpuid_featureset_to_policy(boot_cpu_data.x86_capability, p);
+    x86_cpu_featureset_to_policy(boot_cpu_data.x86_capability, p);
     recalculate_xstate(p);
     recalculate_misc(p);
 
@@ -450,7 +450,7 @@ static void __init calculate_pv_max_policy(void)
     unsigned int i;
 
     *p = host_cpu_policy;
-    cpuid_policy_to_featureset(p, pv_featureset);
+    x86_cpu_policy_to_featureset(p, pv_featureset);
 
     for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
         pv_featureset[i] &= pv_max_featuremask[i];
@@ -468,7 +468,7 @@ static void __init calculate_pv_max_policy(void)
     guest_common_feature_adjustments(pv_featureset);
 
     sanitise_featureset(pv_featureset);
-    cpuid_featureset_to_policy(pv_featureset, p);
+    x86_cpu_featureset_to_policy(pv_featureset, p);
     recalculate_xstate(p);
 
     p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
@@ -481,7 +481,7 @@ static void __init calculate_pv_def_policy(void)
     unsigned int i;
 
     *p = pv_max_cpu_policy;
-    cpuid_policy_to_featureset(p, pv_featureset);
+    x86_cpu_policy_to_featureset(p, pv_featureset);
 
     for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
         pv_featureset[i] &= pv_def_featuremask[i];
@@ -490,7 +490,7 @@ static void __init calculate_pv_def_policy(void)
     guest_common_default_feature_adjustments(pv_featureset);
 
     sanitise_featureset(pv_featureset);
-    cpuid_featureset_to_policy(pv_featureset, p);
+    x86_cpu_featureset_to_policy(pv_featureset, p);
     recalculate_xstate(p);
 }
 
@@ -502,7 +502,7 @@ static void __init calculate_hvm_max_policy(void)
     const uint32_t *hvm_featuremask;
 
     *p = host_cpu_policy;
-    cpuid_policy_to_featureset(p, hvm_featureset);
+    x86_cpu_policy_to_featureset(p, hvm_featureset);
 
     hvm_featuremask = hvm_hap_supported() ?
         hvm_hap_max_featuremask : hvm_shadow_max_featuremask;
@@ -581,7 +581,7 @@ static void __init calculate_hvm_max_policy(void)
     guest_common_feature_adjustments(hvm_featureset);
 
     sanitise_featureset(hvm_featureset);
-    cpuid_featureset_to_policy(hvm_featureset, p);
+    x86_cpu_featureset_to_policy(hvm_featureset, p);
     recalculate_xstate(p);
 }
 
@@ -593,7 +593,7 @@ static void __init calculate_hvm_def_policy(void)
     const uint32_t *hvm_featuremask;
 
     *p = hvm_max_cpu_policy;
-    cpuid_policy_to_featureset(p, hvm_featureset);
+    x86_cpu_policy_to_featureset(p, hvm_featureset);
 
     hvm_featuremask = hvm_hap_supported() ?
         hvm_hap_def_featuremask : hvm_shadow_def_featuremask;
@@ -612,7 +612,7 @@ static void __init calculate_hvm_def_policy(void)
         __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
 
     sanitise_featureset(hvm_featureset);
-    cpuid_featureset_to_policy(hvm_featureset, p);
+    x86_cpu_featureset_to_policy(hvm_featureset, p);
     recalculate_xstate(p);
 }
 
@@ -682,8 +682,8 @@ void recalculate_cpuid_policy(struct domain *d)
                                             ? CPUID_GUEST_NR_EXTD_AMD
                                             : CPUID_GUEST_NR_EXTD_INTEL) - 1);
 
-    cpuid_policy_to_featureset(p, fs);
-    cpuid_policy_to_featureset(max, max_fs);
+    x86_cpu_policy_to_featureset(p, fs);
+    x86_cpu_policy_to_featureset(max, max_fs);
 
     if ( is_hvm_domain(d) )
     {
@@ -740,7 +740,7 @@ void recalculate_cpuid_policy(struct domain *d)
                            (cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
                             cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
 
-    cpuid_featureset_to_policy(fs, p);
+    x86_cpu_featureset_to_policy(fs, p);
 
     /* Pass host cacheline size through to guests. */
     p->basic.clflush_size = max->basic.clflush_size;
@@ -806,7 +806,7 @@ void __init init_dom0_cpuid_policy(struct domain *d)
         uint32_t fs[FSCAPINTS];
         unsigned int i;
 
-        cpuid_policy_to_featureset(p, fs);
+        x86_cpu_policy_to_featureset(p, fs);
 
         for ( i = 0; i < ARRAY_SIZE(fs); ++i )
         {
@@ -814,7 +814,7 @@ void __init init_dom0_cpuid_policy(struct domain *d)
             fs[i] &= ~dom0_disable_feat[i];
         }
 
-        cpuid_featureset_to_policy(fs, p);
+        x86_cpu_featureset_to_policy(fs, p);
 
         recalculate_cpuid_policy(d);
     }
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 43a241f2090f..c107f40c6283 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -338,7 +338,7 @@ long arch_do_sysctl(
             ret = -EINVAL;
 
         if ( !ret )
-            cpuid_policy_to_featureset(p, featureset);
+            x86_cpu_policy_to_featureset(p, featureset);
 
         /* Copy the requested featureset into place. */
         if ( !ret && copy_to_guest(sysctl->u.cpu_featureset.features,
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 8b27a0725b8e..57b4633c861e 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -387,49 +387,17 @@ struct cpu_policy_errors
 
 #define INIT_CPU_POLICY_ERRORS { -1, -1, -1 }
 
-/* Fill in a featureset bitmap from a CPUID policy. */
-static inline void cpuid_policy_to_featureset(
-    const struct cpuid_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
-{
-    fs[FEATURESET_1d]  = p->basic._1d;
-    fs[FEATURESET_1c]  = p->basic._1c;
-    fs[FEATURESET_e1d] = p->extd.e1d;
-    fs[FEATURESET_e1c] = p->extd.e1c;
-    fs[FEATURESET_Da1] = p->xstate.Da1;
-    fs[FEATURESET_7b0] = p->feat._7b0;
-    fs[FEATURESET_7c0] = p->feat._7c0;
-    fs[FEATURESET_e7d] = p->extd.e7d;
-    fs[FEATURESET_e8b] = p->extd.e8b;
-    fs[FEATURESET_7d0] = p->feat._7d0;
-    fs[FEATURESET_7a1] = p->feat._7a1;
-    fs[FEATURESET_e21a] = p->extd.e21a;
-    fs[FEATURESET_7b1] = p->feat._7b1;
-    fs[FEATURESET_7d2] = p->feat._7d2;
-    fs[FEATURESET_7c1] = p->feat._7c1;
-    fs[FEATURESET_7d1] = p->feat._7d1;
-}
+/**
+ * Copy the featureset words out of a cpu_policy object.
+ */
+void x86_cpu_policy_to_featureset(const struct cpu_policy *p,
+                                  uint32_t fs[FEATURESET_NR_ENTRIES]);
 
-/* Fill in a CPUID policy from a featureset bitmap. */
-static inline void cpuid_featureset_to_policy(
-    const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpuid_policy *p)
-{
-    p->basic._1d  = fs[FEATURESET_1d];
-    p->basic._1c  = fs[FEATURESET_1c];
-    p->extd.e1d   = fs[FEATURESET_e1d];
-    p->extd.e1c   = fs[FEATURESET_e1c];
-    p->xstate.Da1 = fs[FEATURESET_Da1];
-    p->feat._7b0  = fs[FEATURESET_7b0];
-    p->feat._7c0  = fs[FEATURESET_7c0];
-    p->extd.e7d   = fs[FEATURESET_e7d];
-    p->extd.e8b   = fs[FEATURESET_e8b];
-    p->feat._7d0  = fs[FEATURESET_7d0];
-    p->feat._7a1  = fs[FEATURESET_7a1];
-    p->extd.e21a  = fs[FEATURESET_e21a];
-    p->feat._7b1  = fs[FEATURESET_7b1];
-    p->feat._7d2  = fs[FEATURESET_7d2];
-    p->feat._7c1  = fs[FEATURESET_7c1];
-    p->feat._7d1  = fs[FEATURESET_7d1];
-}
+/**
+ * Copy the featureset words back into a cpu_policy object.
+ */
+void x86_cpu_featureset_to_policy(const uint32_t fs[FEATURESET_NR_ENTRIES],
+                                  struct cpu_policy *p);
 
 static inline uint64_t cpuid_policy_xcr0_max(const struct cpuid_policy *p)
 {
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index e81f76c779c0..734e90823a63 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -60,6 +60,48 @@ const char *x86_cpuid_vendor_to_str(unsigned int vendor)
     }
 }
 
+void x86_cpu_policy_to_featureset(
+    const struct cpu_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
+{
+    fs[FEATURESET_1d]        = p->basic._1d;
+    fs[FEATURESET_1c]        = p->basic._1c;
+    fs[FEATURESET_e1d]       = p->extd.e1d;
+    fs[FEATURESET_e1c]       = p->extd.e1c;
+    fs[FEATURESET_Da1]       = p->xstate.Da1;
+    fs[FEATURESET_7b0]       = p->feat._7b0;
+    fs[FEATURESET_7c0]       = p->feat._7c0;
+    fs[FEATURESET_e7d]       = p->extd.e7d;
+    fs[FEATURESET_e8b]       = p->extd.e8b;
+    fs[FEATURESET_7d0]       = p->feat._7d0;
+    fs[FEATURESET_7a1]       = p->feat._7a1;
+    fs[FEATURESET_e21a]      = p->extd.e21a;
+    fs[FEATURESET_7b1]       = p->feat._7b1;
+    fs[FEATURESET_7d2]       = p->feat._7d2;
+    fs[FEATURESET_7c1]       = p->feat._7c1;
+    fs[FEATURESET_7d1]       = p->feat._7d1;
+}
+
+void x86_cpu_featureset_to_policy(
+    const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpu_policy *p)
+{
+    p->basic._1d             = fs[FEATURESET_1d];
+    p->basic._1c             = fs[FEATURESET_1c];
+    p->extd.e1d              = fs[FEATURESET_e1d];
+    p->extd.e1c              = fs[FEATURESET_e1c];
+    p->xstate.Da1            = fs[FEATURESET_Da1];
+    p->feat._7b0             = fs[FEATURESET_7b0];
+    p->feat._7c0             = fs[FEATURESET_7c0];
+    p->extd.e7d              = fs[FEATURESET_e7d];
+    p->extd.e8b              = fs[FEATURESET_e8b];
+    p->feat._7d0             = fs[FEATURESET_7d0];
+    p->feat._7a1             = fs[FEATURESET_7a1];
+    p->extd.e21a             = fs[FEATURESET_e21a];
+    p->feat._7b1             = fs[FEATURESET_7b1];
+    p->feat._7d2             = fs[FEATURESET_7d2];
+    p->feat._7c1             = fs[FEATURESET_7c1];
+    p->feat._7d1             = fs[FEATURESET_7d1];
+}
+
 void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p)
 {
     p->x86_vendor = x86_cpuid_lookup_vendor(
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:52:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:52:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517709.803450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLH-00056z-C4; Tue, 04 Apr 2023 09:52:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517709.803450; Tue, 04 Apr 2023 09:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLH-00056s-9R; Tue, 04 Apr 2023 09:52:43 +0000
Received: by outflank-mailman (input) for mailman id 517709;
 Tue, 04 Apr 2023 09:52:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdLF-00056d-Oj
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:52:41 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6de41a44-d2ce-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 11:52:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6de41a44-d2ce-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680601958;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=yusP9sqEdAZaFQzY4EzOz4zVJrPEbY9vi/S0gYtOA9g=;
  b=B5JQhePuqD6Lt87DrutIo4qUR9ovRVw5Nq4j3mOFPi86L18IR5qTK7yb
   bG6pWaYepYhUKVcPC+kjBXYbw4hTOTBtC4UBmENXgO/JZpXA/B8+2+AoU
   uUPWn2Q46ABlHzDQjWFJMRDjmskZ5XCWlszrNey/ujeIHHjbnJY0GbuXr
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104656504
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:UXTOm601L5VneFYFKPbD5axxkn2cJEfYwER7XKvMYLTBsI5bp2MOn
 zZKUG7QPfeJYDH8KNslboSw9k4E7cCGz9dmTwJkpC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS+HuDgNyo4GlD5gBmOagS1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfHT5o3
 /AiIQw3MB2Fht6S55KDVeAymZF2RCXrFNt3VnBIyDjYCbAtQIzZQrWM7thdtNsyrpkQR7CEP
 ZNfMGcxKk2aOHWjOX9OYH46tM6uimPybHtzr1WNqLBsy2PS0BZwwP7mN9+9ltmiHJ0JwBvG+
 zqal4j/Kj8Vbv+w6Be7y2mHo9TTnw7Ae7w/KLLto5aGh3XMnzdOWXX6T2CTsfS/z0KzRd9bA
 0gV4TY167g/8lSxSdvwVAH+p2SL1jY+cddNF+wx6CmW17HZpQ2eAwAsUTppeNEg8sgsSlQXO
 kShxo2zQ2Y16fvMFCzbr+3Pxd+vBcQLBXNdQwUIdwI52YfupoAPqjXvdslqTrHg27UZBgrML
 yC2QDkW3utD1ZNUif/kpDgrkBr3+MGXE1ddChH/Gzv8s1gnPNPNi5mAswCz0BpWEGqOorBtV
 lAgktPW0u0BBIrleMelELRUR+HBCxpo3VThbb9T83oJrW7FF4aLJ9w43d2HDB4B3jw4UTHoe
 lTPngha+YVeOnCnBYcuPdLpVph0nPK7T4q1PhwxUjapSsEpHDJrAQk0PRLAt4wTuBNEfV4D1
 WezLp/3UCdy5VVPxzuqXeYNuYIWKtQF7TqLH/jTlk33uYdykVbJEd/pxnPSNLFmhE5FyS2Jm
 +ti2zyikEoADLenPnaOoeb+7zkidBAGOHw/kOQPHsbrH+asMDhJ5yP5qV/5R7FYog==
IronPort-HdrOrdr: A9a23:GRI8zauwCmGdDUojUt+TfATN7skDstV00zEX/kB9WHVpm6yj+v
 xG/c5rsCMc7Qx6ZJhOo7+90cW7L080lqQFg7X5X43DYOCOggLBQL2KhbGI/9SKIVycygcy78
 Zdm6gVMqyLMbB55/yKnTVRxbwbsaW6GKPDv5ag8590JzsaD52Jd21Ce36m+ksdfnggObMJUK
 Cyy+BgvDSadXEefq2AdwI4t7iqnaysqHr+CyR2fiIa1A==
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="104656504"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 00/15]  x86: Merge cpuid and msr policy objects
Date: Tue, 4 Apr 2023 10:52:07 +0100
Message-ID: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This is in order to be able to put MSR_ARCH_CAPS in a featureset.  In
hindsight, splitting the CPUID and read-only MSR data into two separate
structs was a mistake.

Patches 1-8 were posted previously and have had the feedback addressed.
Patches 9-15 are the result of splitting the older RFC patch 9 apart, with the
discussed adjustment to aliases accounted for.

Gitlab run showing the series to be buildable at each changeset:

  https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4053607867

P.S. I'm not sure I believe the net diffstat below.  I think the cpuid.c =>
cpu-policy.c rename is confusing it.

Andrew Cooper (15):
  x86: Rename struct cpu_policy to struct old_cpuid_policy
  x86: Rename {domctl,sysctl}.cpu_policy.{cpuid,msr}_policy fields
  x86: Rename struct cpuid_policy to struct cpu_policy
  x86: Merge struct msr_policy into struct cpu_policy
  x86: Merge the system {cpuid,msr} policy objects
  x86: Merge a domain's {cpuid,msr} policy objects
  x86: Merge xc_cpu_policy's cpuid and msr objects
  x86: Drop struct old_cpu_policy
  x86: Out-of-inline the policy<->featureset convertors
  x86/boot: Move MSR policy initialisation logic into cpu-policy.c
  x86/boot: Merge CPUID policy initialisation logic into cpu-policy.c
  x86/emul: Switch x86_emulate_ctxt to cpu_policy
  tools/fuzz: Rework afl-policy-fuzzer
  libx86: Update library API for cpu_policy
  x86: Remove temporary {cpuid,msr}_policy defines

 tools/fuzz/cpu-policy/afl-policy-fuzzer.c     |  64 +-
 .../fuzz/x86_instruction_emulator/fuzz-emul.c |   2 +-
 tools/libs/guest/xg_cpuid_x86.c               |  50 +-
 tools/libs/guest/xg_private.h                 |   5 +-
 tools/tests/cpu-policy/test-cpu-policy.c      |  54 +-
 tools/tests/tsx/test-tsx.c                    |  71 +-
 tools/tests/x86_emulator/Makefile             |   2 +-
 tools/tests/x86_emulator/test_x86_emulator.c  |   2 +-
 tools/tests/x86_emulator/x86-emulate.c        |   4 +-
 tools/tests/x86_emulator/x86-emulate.h        |   2 +-
 xen/arch/x86/Makefile                         |   1 +
 xen/arch/x86/{cpuid.c => cpu-policy.c}        | 692 ++++----------
 xen/arch/x86/cpu/common.c                     |   4 +-
 xen/arch/x86/cpu/mcheck/mce_intel.c           |   2 +-
 xen/arch/x86/cpuid.c                          | 856 +-----------------
 xen/arch/x86/domain.c                         |  17 +-
 xen/arch/x86/domctl.c                         |  51 +-
 xen/arch/x86/hvm/emulate.c                    |   4 +-
 xen/arch/x86/hvm/hvm.c                        |   5 +-
 xen/arch/x86/hvm/svm/svm.c                    |   2 +-
 xen/arch/x86/hvm/vlapic.c                     |   2 +-
 xen/arch/x86/hvm/vmx/vmx.c                    |   8 +-
 xen/arch/x86/include/asm/cpu-policy.h         |  27 +
 xen/arch/x86/include/asm/cpuid.h              |  21 +-
 xen/arch/x86/include/asm/domain.h             |  13 +-
 xen/arch/x86/include/asm/msr.h                |  14 +-
 xen/arch/x86/mm/mem_sharing.c                 |   3 +-
 xen/arch/x86/mm/shadow/hvm.c                  |   2 +-
 xen/arch/x86/msr.c                            | 160 +---
 xen/arch/x86/pv/domain.c                      |   3 +-
 xen/arch/x86/pv/emul-priv-op.c                |   6 +-
 xen/arch/x86/pv/ro-page-fault.c               |   2 +-
 xen/arch/x86/setup.c                          |   5 +-
 xen/arch/x86/sysctl.c                         |  79 +-
 xen/arch/x86/traps.c                          |   2 +-
 xen/arch/x86/x86_emulate/private.h            |   4 +-
 xen/arch/x86/x86_emulate/x86_emulate.c        |   2 +-
 xen/arch/x86/x86_emulate/x86_emulate.h        |   9 +-
 xen/arch/x86/xstate.c                         |   4 +-
 xen/include/public/domctl.h                   |   4 +-
 xen/include/public/sysctl.h                   |   4 +-
 xen/include/xen/lib/x86/cpu-policy.h          | 510 ++++++++++-
 xen/include/xen/lib/x86/cpuid.h               | 475 ----------
 xen/include/xen/lib/x86/msr.h                 | 104 ---
 xen/lib/x86/cpuid.c                           |  68 +-
 xen/lib/x86/msr.c                             |   6 +-
 xen/lib/x86/policy.c                          |   8 +-
 47 files changed, 1005 insertions(+), 2430 deletions(-)
 copy xen/arch/x86/{cpuid.c => cpu-policy.c} (52%)
 create mode 100644 xen/arch/x86/include/asm/cpu-policy.h
 delete mode 100644 xen/include/xen/lib/x86/cpuid.h
 delete mode 100644 xen/include/xen/lib/x86/msr.h

-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:52:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:52:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517715.803509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLN-0006c1-JQ; Tue, 04 Apr 2023 09:52:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517715.803509; Tue, 04 Apr 2023 09:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLN-0006bS-D8; Tue, 04 Apr 2023 09:52:49 +0000
Received: by outflank-mailman (input) for mailman id 517715;
 Tue, 04 Apr 2023 09:52:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdLL-0005bo-RA
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:52:48 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 72bcdab1-d2ce-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 11:52:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72bcdab1-d2ce-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680601966;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=LIoXi1ZnCa4bmwRZ1DkqTbadm8IWoD0szu1Mk05QyrI=;
  b=EB8sJLmqiDRgiMc11ykK76kB3fOu2av1Q7msVMhaFmno2RVkTNY8UDJH
   DTnuYUUkpt7PyodeHLk8xriIjUJhSw7B4j4XaTn42qD8TwdSzUtYdEGUa
   xi1aiDMpeyYPmu3o4uIdPo5OTfPMq/UwBQd+vhR539IwHFooEjScot1yh
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104275103
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:cy1kG65e3HpQFt0c1t7M3gxRtEfHchMFZxGqfqrLsTDasY5as4F+v
 jAcD2yCOKrbYzOhKdwiPdy/p0oOsJTRnYBrGVZrpStmHi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+7JwehBtC5gZlPawT4AeH/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5mz
 PoRJxw3Sz+/mfuG+O+CbNsr3v8FFZy+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrlD5fydVtxS+oq0v7nKI5AdwzKLsIJzefdniqcB9xx7E/
 D2bpjyiav0cHOaa1Ced/UqourTGnw6nXIM7Hbj/z+E/1TV/wURMUUZLBDNXu8KRmkO4Ht5SN
 UEQ0i4vtrQpslymSMHnWB+1q2LCuQQTM/JSGeAn7ACGyoLP/h2UQGMDS1Zpd9gOpMIwAzsw2
 Te0c8jBXGI19ufPEDTEq+nS9GnpUcQIEYMcTRYCRAQp2fzMnJ8qviqSd/BRV/aOqOSgTFkc3
 Au2QDgCa6Q71JBbj/jkowqY2lpAtbCSEFdru1y/snaNq1ogOdX7P9HABU3zt64oEWqPcrWWU
 JHoceC65ftGM5yCnTflrA4lTODwvKbt3NExbDdS83gdG9eFoSTLkXh4um0WGauQGp9slcXVS
 EHSoxhNw5RYIWGna6R6C6roVZRykPS+RI6+DK6EBjarXnSWXFbflByCmGbKhzy9+KTSufpX1
 WinnTaEUi9BVPUPIMueTOYBy747rh0DKZfobcmjlXyPiOPODEN5vJ9ZaDNimMhltvLbyOgUm
 v4DX/a3J+J3C7KhPnOOrdFKfTjn7xETXPjLliCeTcbbSiIOJY3rI6a5LW8JE2C9o5loqw==
IronPort-HdrOrdr: A9a23:zRzflaEGZxpovFrjpLqEMceALOsnbusQ8zAXPo5KOHlom7+j5r
 +TdZUgpGXJYVMqMk3I9urwQZVoLUmslqKdpLNhWotKPzOWw1dATrsSlbcKqgeIc0afygce79
 YFT0EXMrzN5DNB/KDHCXyDYqodKbe8gcKVbCTlo0uFjzsGV0it1WhE436gYzBLrcB9a6YEKA
 ==
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="104275103"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 06/15] x86: Merge a domain's {cpuid,msr} policy objects
Date: Tue, 4 Apr 2023 10:52:13 +0100
Message-ID: <20230404095222.1373721-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Right now, they're the same underlying type, containing disjoint information.

Drop the d->arch.msr pointer, and union d->arch.cpuid to give it a second name
of cpu_policy in the interim.

Merge init_domain_{cpuid,msr}_policy() into a single init_domain_cpu_policy(),
moving the implementation into cpu-policy.c

No practical change.  This undoes the transient doubling of storage space from
earlier patches.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Reword commit message.
 * Undo accidental deletion of v->arch.msrs.
---
 xen/arch/x86/cpu-policy.c             | 49 +++++++++++++++++++++++++++
 xen/arch/x86/cpuid.c                  | 23 -------------
 xen/arch/x86/domain.c                 | 15 +++-----
 xen/arch/x86/domctl.c                 | 35 ++++++++++---------
 xen/arch/x86/include/asm/cpu-policy.h |  4 +++
 xen/arch/x86/include/asm/cpuid.h      |  3 --
 xen/arch/x86/include/asm/domain.h     | 13 +++++--
 xen/arch/x86/include/asm/msr.h        |  1 -
 xen/arch/x86/mm/mem_sharing.c         |  3 +-
 xen/arch/x86/msr.c                    | 44 ------------------------
 10 files changed, 86 insertions(+), 104 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 663e9a084c53..e9ac1269c35a 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -1,10 +1,13 @@
 /* SPDX-License-Identifier: GPL-2.0-or-later */
 #include <xen/cache.h>
 #include <xen/kernel.h>
+#include <xen/sched.h>
 
 #include <xen/lib/x86/cpu-policy.h>
 
 #include <asm/cpu-policy.h>
+#include <asm/msr-index.h>
+#include <asm/setup.h>
 
 struct cpu_policy __ro_after_init     raw_cpu_policy;
 struct cpu_policy __ro_after_init    host_cpu_policy;
@@ -16,3 +19,49 @@ struct cpu_policy __ro_after_init  pv_def_cpu_policy;
 struct cpu_policy __ro_after_init hvm_max_cpu_policy;
 struct cpu_policy __ro_after_init hvm_def_cpu_policy;
 #endif
+
+int init_domain_cpu_policy(struct domain *d)
+{
+    struct cpu_policy *p = is_pv_domain(d)
+        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_def_cpu_policy : NULL)
+        : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpu_policy : NULL);
+
+    if ( !p )
+    {
+        ASSERT_UNREACHABLE();
+        return -EOPNOTSUPP;
+    }
+
+    p = xmemdup(p);
+    if ( !p )
+        return -ENOMEM;
+
+    /* See comment in ctxt_switch_levelling() */
+    if ( !opt_dom0_cpuid_faulting && is_control_domain(d) && is_pv_domain(d) )
+        p->platform_info.cpuid_faulting = false;
+
+    /*
+     * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
+     * so dom0 can turn off workarounds as appropriate.  Temporary, until the
+     * domain policy logic gains a better understanding of MSRs.
+     */
+    if ( is_hardware_domain(d) && cpu_has_arch_caps )
+    {
+        uint64_t val;
+
+        rdmsrl(MSR_ARCH_CAPABILITIES, val);
+
+        p->arch_caps.raw = val &
+            (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
+             ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
+             ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
+             ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
+             ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
+    }
+
+    d->arch.cpu_policy = p;
+
+    recalculate_cpuid_policy(d);
+
+    return 0;
+}
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 0916bfe175c8..df3e503ced9d 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -784,29 +784,6 @@ void recalculate_cpuid_policy(struct domain *d)
         p->extd.raw[0x19] = EMPTY_LEAF;
 }
 
-int init_domain_cpuid_policy(struct domain *d)
-{
-    struct cpuid_policy *p = is_pv_domain(d)
-        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_def_cpu_policy : NULL)
-        : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpu_policy : NULL);
-
-    if ( !p )
-    {
-        ASSERT_UNREACHABLE();
-        return -EOPNOTSUPP;
-    }
-
-    p = xmemdup(p);
-    if ( !p )
-        return -ENOMEM;
-
-    d->arch.cpuid = p;
-
-    recalculate_cpuid_policy(d);
-
-    return 0;
-}
-
 void __init init_dom0_cpuid_policy(struct domain *d)
 {
     struct cpuid_policy *p = d->arch.cpuid;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index d5847f70f890..b23e5014d1d3 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -66,6 +66,7 @@
 #ifdef CONFIG_COMPAT
 #include <compat/vcpu.h>
 #endif
+#include <asm/cpu-policy.h>
 #include <asm/psr.h>
 #include <asm/pv/domain.h>
 #include <asm/pv/mm.h>
@@ -743,8 +744,7 @@ int arch_domain_create(struct domain *d,
 
         d->arch.ctxt_switch = &idle_csw;
 
-        d->arch.cpuid = ZERO_BLOCK_PTR; /* Catch stray misuses. */
-        d->arch.msr = ZERO_BLOCK_PTR;
+        d->arch.cpu_policy = ZERO_BLOCK_PTR; /* Catch stray misuses. */
 
         return 0;
     }
@@ -799,10 +799,7 @@ int arch_domain_create(struct domain *d,
         goto fail;
     paging_initialised = true;
 
-    if ( (rc = init_domain_cpuid_policy(d)) )
-        goto fail;
-
-    if ( (rc = init_domain_msr_policy(d)) )
+    if ( (rc = init_domain_cpu_policy(d)) )
         goto fail;
 
     d->arch.ioport_caps =
@@ -873,8 +870,7 @@ int arch_domain_create(struct domain *d,
     iommu_domain_destroy(d);
     cleanup_domain_irq_mapping(d);
     free_xenheap_page(d->shared_info);
-    xfree(d->arch.cpuid);
-    xfree(d->arch.msr);
+    XFREE(d->arch.cpu_policy);
     if ( paging_initialised )
         paging_final_teardown(d);
     free_perdomain_mappings(d);
@@ -888,8 +884,7 @@ void arch_domain_destroy(struct domain *d)
         hvm_domain_destroy(d);
 
     xfree(d->arch.e820);
-    xfree(d->arch.cpuid);
-    xfree(d->arch.msr);
+    XFREE(d->arch.cpu_policy);
 
     free_domain_pirqs(d);
     if ( !is_idle_domain(d) )
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 5800bb10bc4a..81be25c67731 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -40,11 +40,11 @@
 static int update_domain_cpu_policy(struct domain *d,
                                     xen_domctl_cpu_policy_t *xdpc)
 {
-    struct old_cpu_policy new = {};
+    struct cpu_policy *new;
     struct cpu_policy *sys = is_pv_domain(d)
         ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
         : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
-    struct old_cpu_policy old_sys = { sys, sys };
+    struct old_cpu_policy old_sys = { sys, sys }, old_new;
     struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
     int ret = -ENOMEM;
 
@@ -54,33 +54,33 @@ static int update_domain_cpu_policy(struct domain *d,
         return -EOPNOTSUPP;
     }
 
-    /* Start by copying the domain's existing policies. */
-    if ( !(new.cpuid = xmemdup(d->arch.cpuid)) ||
-         !(new.msr   = xmemdup(d->arch.msr)) )
+    /* Start by copying the domain's existing policy. */
+    if ( !(new = xmemdup(d->arch.cpu_policy)) )
         goto out;
 
+    old_new = (struct old_cpu_policy){ new, new };
+
     /* Merge the toolstack provided data. */
     if ( (ret = x86_cpuid_copy_from_buffer(
-              new.cpuid, xdpc->leaves, xdpc->nr_leaves,
+              new, xdpc->leaves, xdpc->nr_leaves,
               &err.leaf, &err.subleaf)) ||
          (ret = x86_msr_copy_from_buffer(
-              new.msr, xdpc->msrs, xdpc->nr_msrs, &err.msr)) )
+              new, xdpc->msrs, xdpc->nr_msrs, &err.msr)) )
         goto out;
 
     /* Trim any newly-stale out-of-range leaves. */
-    x86_cpuid_policy_clear_out_of_range_leaves(new.cpuid);
+    x86_cpuid_policy_clear_out_of_range_leaves(new);
 
     /* Audit the combined dataset. */
-    ret = x86_cpu_policies_are_compatible(&old_sys, &new, &err);
+    ret = x86_cpu_policies_are_compatible(&old_sys, &old_new, &err);
     if ( ret )
         goto out;
 
     /*
-     * Audit was successful.  Replace existing policies, leaving the old
-     * policies to be freed.
+     * Audit was successful.  Replace the existing policy, leaving the old one
+     * to be freed.
      */
-    SWAP(new.cpuid, d->arch.cpuid);
-    SWAP(new.msr,   d->arch.msr);
+    SWAP(new, d->arch.cpu_policy);
 
     /* TODO: Drop when x86_cpu_policies_are_compatible() is completed. */
     recalculate_cpuid_policy(d);
@@ -89,9 +89,8 @@ static int update_domain_cpu_policy(struct domain *d,
     domain_cpu_policy_changed(d);
 
  out:
-    /* Free whichever cpuid/msr structs are not installed in struct domain. */
-    xfree(new.cpuid);
-    xfree(new.msr);
+    /* Free whichever struct is not installed in struct domain. */
+    xfree(new);
 
     if ( ret )
     {
@@ -1327,7 +1326,7 @@ long arch_do_domctl(
         if ( guest_handle_is_null(domctl->u.cpu_policy.leaves) )
             domctl->u.cpu_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
         else if ( (ret = x86_cpuid_copy_to_buffer(
-                       d->arch.cpuid,
+                       d->arch.cpu_policy,
                        domctl->u.cpu_policy.leaves,
                        &domctl->u.cpu_policy.nr_leaves)) )
             break;
@@ -1336,7 +1335,7 @@ long arch_do_domctl(
         if ( guest_handle_is_null(domctl->u.cpu_policy.msrs) )
             domctl->u.cpu_policy.nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
         else if ( (ret = x86_msr_copy_to_buffer(
-                       d->arch.msr,
+                       d->arch.cpu_policy,
                        domctl->u.cpu_policy.msrs,
                        &domctl->u.cpu_policy.nr_msrs)) )
             break;
diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
index eef14bb4267e..9ba34bbf5ea1 100644
--- a/xen/arch/x86/include/asm/cpu-policy.h
+++ b/xen/arch/x86/include/asm/cpu-policy.h
@@ -3,6 +3,7 @@
 #define X86_CPU_POLICY_H
 
 struct cpu_policy;
+struct domain;
 
 extern struct cpu_policy     raw_cpu_policy;
 extern struct cpu_policy    host_cpu_policy;
@@ -11,4 +12,7 @@ extern struct cpu_policy  pv_def_cpu_policy;
 extern struct cpu_policy hvm_max_cpu_policy;
 extern struct cpu_policy hvm_def_cpu_policy;
 
+/* Allocate and initialise a CPU policy suitable for the domain. */
+int init_domain_cpu_policy(struct domain *d);
+
 #endif /* X86_CPU_POLICY_H */
diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
index ea0586277331..7f81b998ce01 100644
--- a/xen/arch/x86/include/asm/cpuid.h
+++ b/xen/arch/x86/include/asm/cpuid.h
@@ -49,9 +49,6 @@ extern struct cpuidmasks cpuidmask_defaults;
 /* Check that all previously present features are still available. */
 bool recheck_cpu_features(unsigned int cpu);
 
-/* Allocate and initialise a CPUID policy suitable for the domain. */
-int init_domain_cpuid_policy(struct domain *d);
-
 /* Apply dom0-specific tweaks to the CPUID policy. */
 void init_dom0_cpuid_policy(struct domain *d);
 
diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h
index 17780ad9db2f..466388a98e12 100644
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -386,9 +386,16 @@ struct arch_domain
      */
     uint8_t x87_fip_width;
 
-    /* CPUID and MSR policy objects. */
-    struct cpuid_policy *cpuid;
-    struct msr_policy *msr;
+    /*
+     * The domain's CPU Policy.  "cpu_policy" is considered the canonical
+     * pointer, but the "cpuid" and "msr" aliases exist so the most
+     * appropriate one can be used for local code clarity.
+     */
+    union {
+        struct cpu_policy *cpu_policy;
+        struct cpu_policy *cpuid;
+        struct cpu_policy *msr;
+    };
 
     struct PITState vpit;
 
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index 022230acc0af..b59a51d238a7 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -419,7 +419,6 @@ struct vcpu_msrs
 };
 
 void init_guest_msr_policy(void);
-int init_domain_msr_policy(struct domain *d);
 int init_vcpu_msr_policy(struct vcpu *v);
 
 /*
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 649d93dc5444..5b3449db7a11 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1902,8 +1902,7 @@ static int fork(struct domain *cd, struct domain *d)
 
         domain_pause(d);
         cd->max_pages = d->max_pages;
-        *cd->arch.cpuid = *d->arch.cpuid;
-        *cd->arch.msr = *d->arch.msr;
+        *cd->arch.cpu_policy = *d->arch.cpu_policy;
         cd->vmtrace_size = d->vmtrace_size;
         cd->parent = d;
     }
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index bff26bc4e2b5..93bd93feb644 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -122,50 +122,6 @@ void __init init_guest_msr_policy(void)
     }
 }
 
-int init_domain_msr_policy(struct domain *d)
-{
-    struct msr_policy *mp = is_pv_domain(d)
-        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_def_cpu_policy : NULL)
-        : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpu_policy : NULL);
-
-    if ( !mp )
-    {
-        ASSERT_UNREACHABLE();
-        return -EOPNOTSUPP;
-    }
-
-    mp = xmemdup(mp);
-    if ( !mp )
-        return -ENOMEM;
-
-    /* See comment in ctxt_switch_levelling() */
-    if ( !opt_dom0_cpuid_faulting && is_control_domain(d) && is_pv_domain(d) )
-        mp->platform_info.cpuid_faulting = false;
-
-    /*
-     * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
-     * so dom0 can turn off workarounds as appropriate.  Temporary, until the
-     * domain policy logic gains a better understanding of MSRs.
-     */
-    if ( is_hardware_domain(d) && cpu_has_arch_caps )
-    {
-        uint64_t val;
-
-        rdmsrl(MSR_ARCH_CAPABILITIES, val);
-
-        mp->arch_caps.raw = val &
-            (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
-             ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
-             ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
-             ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
-             ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
-    }
-
-    d->arch.msr = mp;
-
-    return 0;
-}
-
 int init_vcpu_msr_policy(struct vcpu *v)
 {
     struct vcpu_msrs *msrs = xzalloc(struct vcpu_msrs);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:52:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:52:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517711.803470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLK-0005cY-09; Tue, 04 Apr 2023 09:52:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517711.803470; Tue, 04 Apr 2023 09:52:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLJ-0005cM-Su; Tue, 04 Apr 2023 09:52:45 +0000
Received: by outflank-mailman (input) for mailman id 517711;
 Tue, 04 Apr 2023 09:52:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdLI-00056d-HN
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:52:44 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 70eae744-d2ce-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 11:52:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70eae744-d2ce-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680601962;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=aLMfsVFJMPWfdGfDVV87nevqNr05P5LkUPZ6r3/rMcw=;
  b=IAJ1862dGKNvFB/BTlLqBMI1KFmgaP4pS38PwQOZBK4kWd3N7pJEO9DL
   b6d4u0QMrtoCU+shhxXmc6p+gda0cfszpM9pCYDmrgH4asbhbh1t9Cwt2
   EGonPLpEOypR67dLz71cuvY09Kae3+X2JzOc4bxJbMHovM59yBzIIK8hl
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104656513
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:we0psq/4EOmL4Du6egCdDrUD/36TJUtcMsCJ2f8bNWPcYEJGY0x3n
 DQXWj2FbKzcYWD1fN9/bN628UoHv8eEydVqHQtprn08E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKicYXoZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ire7kI/1BjOkGlA5AdmOagX5Aa2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDklrp
 cBIEigHRCrbrP2t5vW/WM0ruPo8eZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAr3/zaTBH7nmSorI6+TP7xw1tyrn9dtHSf7RmQO0MxhrG+
 zybpj+R7hcyEfy/wzCb0k6XmsDTkAHKQ6c5MJeH+as/6LGU7jNKU0BHPbehmtGph0j7V99BJ
 kg8/is1sbN05EGtVsP6XRCzvDiDpBF0c9haHvA+6QqN4rHJ+AvfDW8BJhZebPQ2uclwQiYlv
 mJlhPuwW2Yp6ufMDyvAqPHN92ja1TUpwXEqQH84HTEd6fPZ+KoslTSISsRHV5CLkYigcd3v+
 AxmvBTSlp1K055Tivrlpw+e696/jsOXF1Bov207Skrgt1okP9D9OuRE/HCBtZ59wJClok5tV
 ZTus+yX96gwAJ6Ej0Rhq81dTejyt55p3NAx6GOD/qXNFBz3oRZPhagKvFlDyL5Ba67ogwPBb
 k7Joh9275ROJnasZqIfS9vvW5x3kfaxT4+/CKC8gj9yjn9ZLVfvwc2TTRTIgzCFfLYEysnTx
 qt3ge7zVC1HWMyLPRK9RvsH0K9D+x3SMVj7HMihpzz+iOr2WZJgYetdWLd4RrxjvfzsTcS82
 4o3CvZmPD0ED7KiOHCLrtdDRb3IRFBiba3LRwVsXrbrCmJb9KsJUZc9HZtJl1RZoplo
IronPort-HdrOrdr: A9a23:enJGDKjcmqWppJkVrY3NCTzegnBQXh4ji2hC6mlwRA09TyX5ra
 2TdZUgpHrJYVMqMk3I9uruBEDtex3hHP1OkOss1NWZPDUO0VHARO1fBOPZqAEIcBeOldK1u5
 0AT0B/YueAd2STj6zBkXSF+wBL+qj6zEiq792usEuEVWtRGsVdB58SMHfiLqVxLjM2YqYRJd
 6nyedsgSGvQngTZtTTPAh/YwCSz+e78q4PeHQ9dmca1DU=
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="104656513"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 01/15] x86: Rename struct cpu_policy to struct old_cpu_policy
Date: Tue, 4 Apr 2023 10:52:08 +0100
Message-ID: <20230404095222.1373721-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

We want to merge struct cpuid_policy and struct msr_policy together, and the
result wants to be called struct cpu_policy.

The current struct cpu_policy, being just a pair of pointers, isn't terribly
useful.  Rename the type to struct old_cpu_policy; it will disappear
entirely once the merge is complete.

No functional change.
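
For illustration only (not part of the patch, and with stand-in definitions
rather than the real Xen types), the renamed wrapper is nothing more than a
pair of pointers to separately allocated objects, which is why it is slated
for removal:

```c
#include <stddef.h>

/* Stand-ins for the real Xen types, for illustration only. */
struct cpuid_policy { unsigned int max_leaf; };
struct msr_policy   { unsigned int platform_info; };

/*
 * The renamed wrapper: just a pair of pointers to separately
 * allocated objects.  Every use involves extra pointer chasing.
 */
struct old_cpu_policy {
    struct cpuid_policy *cpuid;
    struct msr_policy *msr;
};

/* Example accessor showing the indirection the merge will remove. */
static inline unsigned int old_policy_max_leaf(const struct old_cpu_policy *p)
{
    return p->cpuid ? p->cpuid->max_leaf : 0;
}
```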

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 tools/libs/guest/xg_cpuid_x86.c          | 4 ++--
 tools/tests/cpu-policy/test-cpu-policy.c | 4 ++--
 xen/arch/x86/domctl.c                    | 4 ++--
 xen/arch/x86/include/asm/cpuid.h         | 2 +-
 xen/arch/x86/sysctl.c                    | 4 ++--
 xen/include/xen/lib/x86/cpu-policy.h     | 6 +++---
 xen/lib/x86/policy.c                     | 4 ++--
 7 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 4542878bbe88..1b02bc987af7 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -868,8 +868,8 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
                                  xc_cpu_policy_t *guest)
 {
     struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
-    struct cpu_policy h = { &host->cpuid, &host->msr };
-    struct cpu_policy g = { &guest->cpuid, &guest->msr };
+    struct old_cpu_policy h = { &host->cpuid, &host->msr };
+    struct old_cpu_policy g = { &guest->cpuid, &guest->msr };
     int rc = x86_cpu_policies_are_compatible(&h, &g, &err);
 
     if ( !rc )
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index d3f24fd6d274..909d6272f875 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -602,7 +602,7 @@ static void test_is_compatible_success(void)
     for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
     {
         struct test *t = &tests[i];
-        struct cpu_policy sys = {
+        struct old_cpu_policy sys = {
             &t->host_cpuid,
             &t->host_msr,
         }, new = {
@@ -654,7 +654,7 @@ static void test_is_compatible_failure(void)
     for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
     {
         struct test *t = &tests[i];
-        struct cpu_policy sys = {
+        struct old_cpu_policy sys = {
             &t->host_cpuid,
             &t->host_msr,
         }, new = {
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 2118fcad5dfe..0b41b279507e 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -40,8 +40,8 @@
 static int update_domain_cpu_policy(struct domain *d,
                                     xen_domctl_cpu_policy_t *xdpc)
 {
-    struct cpu_policy new = {};
-    const struct cpu_policy *sys = is_pv_domain(d)
+    struct old_cpu_policy new = {};
+    const struct old_cpu_policy *sys = is_pv_domain(d)
         ? &system_policies[XEN_SYSCTL_cpu_policy_pv_max]
         : &system_policies[XEN_SYSCTL_cpu_policy_hvm_max];
     struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
index 9c3637549a10..49b3128f06f9 100644
--- a/xen/arch/x86/include/asm/cpuid.h
+++ b/xen/arch/x86/include/asm/cpuid.h
@@ -51,7 +51,7 @@ extern struct cpuid_policy raw_cpuid_policy, host_cpuid_policy,
     pv_max_cpuid_policy, pv_def_cpuid_policy,
     hvm_max_cpuid_policy, hvm_def_cpuid_policy;
 
-extern const struct cpu_policy system_policies[];
+extern const struct old_cpu_policy system_policies[];
 
 /* Check that all previously present features are still available. */
 bool recheck_cpu_features(unsigned int cpu);
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 16625b57f01f..3f5b092df16a 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -32,7 +32,7 @@
 #include <asm/psr.h>
 #include <asm/cpuid.h>
 
-const struct cpu_policy system_policies[6] = {
+const struct old_cpu_policy system_policies[6] = {
     [ XEN_SYSCTL_cpu_policy_raw ] = {
         &raw_cpuid_policy,
         &raw_msr_policy,
@@ -391,7 +391,7 @@ long arch_do_sysctl(
 
     case XEN_SYSCTL_get_cpu_policy:
     {
-        const struct cpu_policy *policy;
+        const struct old_cpu_policy *policy;
 
         /* Reserved field set, or bad policy index? */
         if ( sysctl->u.cpu_policy._rsvd ||
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 5a2c4c7b2d90..3a5300d1078c 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -5,7 +5,7 @@
 #include <xen/lib/x86/cpuid.h>
 #include <xen/lib/x86/msr.h>
 
-struct cpu_policy
+struct old_cpu_policy
 {
     struct cpuid_policy *cpuid;
     struct msr_policy *msr;
@@ -33,8 +33,8 @@ struct cpu_policy_errors
  * incompatibility is detected, the optional err pointer may identify the
  * problematic leaf/subleaf and/or MSR.
  */
-int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
-                                    const struct cpu_policy *guest,
+int x86_cpu_policies_are_compatible(const struct old_cpu_policy *host,
+                                    const struct old_cpu_policy *guest,
                                     struct cpu_policy_errors *err);
 
 #endif /* !XEN_LIB_X86_POLICIES_H */
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index f6cea4e2f9bd..2975711d7c6c 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -2,8 +2,8 @@
 
 #include <xen/lib/x86/cpu-policy.h>
 
-int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
-                                    const struct cpu_policy *guest,
+int x86_cpu_policies_are_compatible(const struct old_cpu_policy *host,
+                                    const struct old_cpu_policy *guest,
                                     struct cpu_policy_errors *err)
 {
     struct cpu_policy_errors e = INIT_CPU_POLICY_ERRORS;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:52:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:52:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517716.803520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLP-0006z0-6n; Tue, 04 Apr 2023 09:52:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517716.803520; Tue, 04 Apr 2023 09:52:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLP-0006y7-1o; Tue, 04 Apr 2023 09:52:51 +0000
Received: by outflank-mailman (input) for mailman id 517716;
 Tue, 04 Apr 2023 09:52:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdLN-0005bo-Ai
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:52:49 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 742f8e97-d2ce-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 11:52:47 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 742f8e97-d2ce-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680601967;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=9UHULmNeog4OHP6x3IyWXzySNECyTqt0kgFmTV9Rc0k=;
  b=IK9AzNLBAoh2XvuW4teqM9D3HYqjy1lkNvNy4Fijj04eQXjLcHu2yzSW
   pHu2QuGXLqUJ9+Pw+ApWjcAxbbDuS03H6pVlniQhg89SGTkQQKep/nf7p
   99JlWV0xEreuhWI+/TI5u1qy6xEm8P+DNdJkoUwfINPRhdRG8LtkXJ0zB
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106670672
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:I3hCz6wXoWh3baxkQiN6t+dIxirEfRIJ4+MujC+fZmUNrF6WrkUFz
 TYWUDyGOqrcN2X0eYsiYIXj80gPusSHz9Y1TFNoriAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UMHUMja4mtC5QRiPawT5TcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KXlM2
 Ow+OD8vVEySvOW82Jfic7k9ptt2eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+BgHXlfiIeg1WSvactuEDYzRBr0airO93QEjCPbZwNwBjH/
 jyZpQwVBDk5C5+BmAWeo0muufDU3grwY4EdHqa3o6sCbFq7mTVIVUx+uUGAiea9ol6zXZRYM
 UN80jojq+0++VKmSvH5XgakuziUsxgEQd1SHuYmrgaXxcL84QyUG2wFRT5pc8E9uYk9QjlC6
 7OSt4q3X3o16uTTEC/DsO7O9lteJBT5M0caZwIUaxsKweO/sbo0hArqF+8zQY6q24id9S7L/
 9yakMQvr+xN3ZZWiPvhogmvbyGE/caQEFNsjunDdif8t14iOtb4D2C9wQKDhcusOrp1WbVoU
 JIsv8GFpN4DApiW/MBmaLVcRer5jxpp3dC1vLKOI3XC3273k5JbVdoMiAyS3W8wWir+RRfnY
 VXIpSRa74JJMX2hYMdfOtzhU5l2k/m6To67Bpg4i+aihbAoLGe6ENxGPxbMjwgBbmB3+U3AB
 XtrWZn1VitLYUiW5DG3W/0cwdcW+8zK/kuKHcqT503+gdKjiIu9Fe9t3K2mMrpos8tpYWz9r
 75iCid9404AD7GkO3WIqN57wJJjBSFTOK0aYvd/LoarSjeK0kl4YxMN6dvNo7BYopk=
IronPort-HdrOrdr: A9a23:ZQBS36OH2TnlpcBcTgWjsMiBIKoaSvp037BK7S1MoH1uA6mlfq
 WV9sjzuiWatN98Yh8dcLO7Scu9qBHnlaKdiLN5VduftWHd01dAR7sSjrcKrQeAJ8X/nNQtr5
 uJccJFeaDN5Y4Rt7eH3OG6eexQv+Vu6MqT9IPjJ+8Gd3ATV0lnhT0JbTqzIwlNayRtI4E2L5
 aY7tovnUvaRZxGBv7LYEXsRoL41qT2qK4=
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="106670672"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 07/15] x86: Merge xc_cpu_policy's cpuid and msr objects
Date: Tue, 4 Apr 2023 10:52:14 +0100
Message-ID: <20230404095222.1373721-8-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Right now, they're the same underlying type, containing disjoint information.

Use a single object instead.  Also take the opportunity to rename 'entries' to
'msrs', which is more descriptive and more in line with nr_msrs being the
count of MSR entries in the API.

test-tsx uses xg_private.h to access the internals of xc_cpu_policy, so it
needs updating at the same time.  Take the opportunity to improve code
clarity by passing a cpu_policy rather than an xc_cpu_policy into some
functions.

No practical change.  This undoes the transient doubling of storage space from
earlier patches.
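
As a sketch of the resulting layout (simplified stand-in types, not the
actual Xen headers), xc_cpu_policy ends up holding one merged policy object
plus serialisation buffers whose names match the API's counts:

```c
/* Simplified stand-ins for illustration; not the real Xen definitions. */
struct cpu_policy { unsigned int feat_rtm; };
typedef struct { unsigned int leaf, subleaf, a, b, c, d; } xen_cpuid_leaf_t;
typedef struct { unsigned int idx; unsigned long long val; } xen_msr_entry_t;

#define CPUID_MAX_SERIALISED_LEAVES 4
#define MSR_MAX_SERIALISED_ENTRIES  4

/*
 * After the merge: one cpu_policy member instead of separate
 * cpuid/msr objects, and the MSR serialisation buffer is named
 * 'msrs' to line up with nr_msrs in the API.
 */
struct xc_cpu_policy {
    struct cpu_policy policy;
    xen_cpuid_leaf_t leaves[CPUID_MAX_SERIALISED_LEAVES];
    xen_msr_entry_t msrs[MSR_MAX_SERIALISED_ENTRIES];
};
```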

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Reword the commit message.
 * Clean up test-tsx a bit more.
---
 tools/libs/guest/xg_cpuid_x86.c | 36 ++++++++---------
 tools/libs/guest/xg_private.h   |  5 +--
 tools/tests/tsx/test-tsx.c      | 71 +++++++++++++++------------------
 3 files changed, 53 insertions(+), 59 deletions(-)

diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 5fae06e77804..5061fe357767 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -431,7 +431,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     xc_dominfo_t di;
     unsigned int i, nr_leaves, nr_msrs;
     xen_cpuid_leaf_t *leaves = NULL;
-    struct cpuid_policy *p = NULL;
+    struct cpu_policy *p = NULL;
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     uint32_t host_featureset[FEATURESET_NR_ENTRIES] = {};
     uint32_t len = ARRAY_SIZE(host_featureset);
@@ -692,7 +692,7 @@ static int deserialize_policy(xc_interface *xch, xc_cpu_policy_t *policy,
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     int rc;
 
-    rc = x86_cpuid_copy_from_buffer(&policy->cpuid, policy->leaves,
+    rc = x86_cpuid_copy_from_buffer(&policy->policy, policy->leaves,
                                     nr_leaves, &err_leaf, &err_subleaf);
     if ( rc )
     {
@@ -702,7 +702,7 @@ static int deserialize_policy(xc_interface *xch, xc_cpu_policy_t *policy,
         return rc;
     }
 
-    rc = x86_msr_copy_from_buffer(&policy->msr, policy->entries,
+    rc = x86_msr_copy_from_buffer(&policy->policy, policy->msrs,
                                   nr_entries, &err_msr);
     if ( rc )
     {
@@ -719,18 +719,18 @@ int xc_cpu_policy_get_system(xc_interface *xch, unsigned int policy_idx,
                              xc_cpu_policy_t *policy)
 {
     unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
-    unsigned int nr_entries = ARRAY_SIZE(policy->entries);
+    unsigned int nr_msrs = ARRAY_SIZE(policy->msrs);
     int rc;
 
     rc = get_system_cpu_policy(xch, policy_idx, &nr_leaves, policy->leaves,
-                               &nr_entries, policy->entries);
+                               &nr_msrs, policy->msrs);
     if ( rc )
     {
         PERROR("Failed to obtain %u policy", policy_idx);
         return rc;
     }
 
-    rc = deserialize_policy(xch, policy, nr_leaves, nr_entries);
+    rc = deserialize_policy(xch, policy, nr_leaves, nr_msrs);
     if ( rc )
     {
         errno = -rc;
@@ -744,18 +744,18 @@ int xc_cpu_policy_get_domain(xc_interface *xch, uint32_t domid,
                              xc_cpu_policy_t *policy)
 {
     unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
-    unsigned int nr_entries = ARRAY_SIZE(policy->entries);
+    unsigned int nr_msrs = ARRAY_SIZE(policy->msrs);
     int rc;
 
     rc = get_domain_cpu_policy(xch, domid, &nr_leaves, policy->leaves,
-                               &nr_entries, policy->entries);
+                               &nr_msrs, policy->msrs);
     if ( rc )
     {
         PERROR("Failed to obtain domain %u policy", domid);
         return rc;
     }
 
-    rc = deserialize_policy(xch, policy, nr_leaves, nr_entries);
+    rc = deserialize_policy(xch, policy, nr_leaves, nr_msrs);
     if ( rc )
     {
         errno = -rc;
@@ -770,16 +770,16 @@ int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
 {
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
-    unsigned int nr_entries = ARRAY_SIZE(policy->entries);
+    unsigned int nr_msrs = ARRAY_SIZE(policy->msrs);
     int rc;
 
     rc = xc_cpu_policy_serialise(xch, policy, policy->leaves, &nr_leaves,
-                                 policy->entries, &nr_entries);
+                                 policy->msrs, &nr_msrs);
     if ( rc )
         return rc;
 
     rc = xc_set_domain_cpu_policy(xch, domid, nr_leaves, policy->leaves,
-                                  nr_entries, policy->entries,
+                                  nr_msrs, policy->msrs,
                                   &err_leaf, &err_subleaf, &err_msr);
     if ( rc )
     {
@@ -802,7 +802,7 @@ int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *p,
 
     if ( leaves )
     {
-        rc = x86_cpuid_copy_to_buffer(&p->cpuid, leaves, nr_leaves);
+        rc = x86_cpuid_copy_to_buffer(&p->policy, leaves, nr_leaves);
         if ( rc )
         {
             ERROR("Failed to serialize CPUID policy");
@@ -813,7 +813,7 @@ int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *p,
 
     if ( msrs )
     {
-        rc = x86_msr_copy_to_buffer(&p->msr, msrs, nr_msrs);
+        rc = x86_msr_copy_to_buffer(&p->policy, msrs, nr_msrs);
         if ( rc )
         {
             ERROR("Failed to serialize MSR policy");
@@ -831,7 +831,7 @@ int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
                                uint32_t nr)
 {
     unsigned int err_leaf = -1, err_subleaf = -1;
-    int rc = x86_cpuid_copy_from_buffer(&policy->cpuid, leaves, nr,
+    int rc = x86_cpuid_copy_from_buffer(&policy->policy, leaves, nr,
                                         &err_leaf, &err_subleaf);
 
     if ( rc )
@@ -850,7 +850,7 @@ int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
                               const xen_msr_entry_t *msrs, uint32_t nr)
 {
     unsigned int err_msr = -1;
-    int rc = x86_msr_copy_from_buffer(&policy->msr, msrs, nr, &err_msr);
+    int rc = x86_msr_copy_from_buffer(&policy->policy, msrs, nr, &err_msr);
 
     if ( rc )
     {
@@ -868,8 +868,8 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
                                  xc_cpu_policy_t *guest)
 {
     struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
-    struct old_cpu_policy h = { &host->cpuid, &host->msr };
-    struct old_cpu_policy g = { &guest->cpuid, &guest->msr };
+    struct old_cpu_policy h = { &host->policy, &host->policy };
+    struct old_cpu_policy g = { &guest->policy, &guest->policy };
     int rc = x86_cpu_policies_are_compatible(&h, &g, &err);
 
     if ( !rc )
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index 09e24f122760..e729a8106c3e 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -173,10 +173,9 @@ int pin_table(xc_interface *xch, unsigned int type, unsigned long mfn,
 #include <xen/lib/x86/cpu-policy.h>
 
 struct xc_cpu_policy {
-    struct cpuid_policy cpuid;
-    struct msr_policy msr;
+    struct cpu_policy policy;
     xen_cpuid_leaf_t leaves[CPUID_MAX_SERIALISED_LEAVES];
-    xen_msr_entry_t entries[MSR_MAX_SERIALISED_ENTRIES];
+    xen_msr_entry_t msrs[MSR_MAX_SERIALISED_ENTRIES];
 };
 #endif /* x86 */
 
diff --git a/tools/tests/tsx/test-tsx.c b/tools/tests/tsx/test-tsx.c
index d6d98c299bf9..b7e1972ce8a7 100644
--- a/tools/tests/tsx/test-tsx.c
+++ b/tools/tests/tsx/test-tsx.c
@@ -151,15 +151,15 @@ static void test_tsx_msrs(void)
 {
     printf("Testing MSR_TSX_FORCE_ABORT consistency\n");
     test_tsx_msr_consistency(
-        MSR_TSX_FORCE_ABORT, host.cpuid.feat.tsx_force_abort);
+        MSR_TSX_FORCE_ABORT, host.policy.feat.tsx_force_abort);
 
     printf("Testing MSR_TSX_CTRL consistency\n");
     test_tsx_msr_consistency(
-        MSR_TSX_CTRL, host.msr.arch_caps.tsx_ctrl);
+        MSR_TSX_CTRL, host.policy.arch_caps.tsx_ctrl);
 
     printf("Testing MSR_MCU_OPT_CTRL consistency\n");
     test_tsx_msr_consistency(
-        MSR_MCU_OPT_CTRL, host.cpuid.feat.srbds_ctrl);
+        MSR_MCU_OPT_CTRL, host.policy.feat.srbds_ctrl);
 }
 
 /*
@@ -281,7 +281,7 @@ static void test_rtm_behaviour(void)
     else
         return fail("  Got unexpected behaviour %d\n", rtm_behaviour);
 
-    if ( host.cpuid.feat.rtm )
+    if ( host.policy.feat.rtm )
     {
         if ( rtm_behaviour == RTM_UD )
             fail("  Host reports RTM, but appears unavailable\n");
@@ -293,57 +293,52 @@ static void test_rtm_behaviour(void)
     }
 }
 
-static void dump_tsx_details(const struct xc_cpu_policy *p, const char *pref)
+static void dump_tsx_details(const struct cpu_policy *p, const char *pref)
 {
     printf("  %s RTM %u, HLE %u, TSX_FORCE_ABORT %u, RTM_ALWAYS_ABORT %u, TSX_CTRL %u\n",
            pref,
-           p->cpuid.feat.rtm,
-           p->cpuid.feat.hle,
-           p->cpuid.feat.tsx_force_abort,
-           p->cpuid.feat.rtm_always_abort,
-           p->msr.arch_caps.tsx_ctrl);
+           p->feat.rtm,
+           p->feat.hle,
+           p->feat.tsx_force_abort,
+           p->feat.rtm_always_abort,
+           p->arch_caps.tsx_ctrl);
 }
 
 /* Sanity test various invariants we expect in the default/max policies. */
-static void test_guest_policies(const struct xc_cpu_policy *max,
-                                const struct xc_cpu_policy *def)
+static void test_guest_policies(const struct cpu_policy *max,
+                                const struct cpu_policy *def)
 {
-    const struct cpuid_policy *cm = &max->cpuid;
-    const struct cpuid_policy *cd = &def->cpuid;
-    const struct msr_policy *mm = &max->msr;
-    const struct msr_policy *md = &def->msr;
-
     dump_tsx_details(max, "Max:");
     dump_tsx_details(def, "Def:");
 
-    if ( ((cm->feat.raw[0].d | cd->feat.raw[0].d) &
+    if ( ((max->feat.raw[0].d | def->feat.raw[0].d) &
           (bitmaskof(X86_FEATURE_TSX_FORCE_ABORT) |
            bitmaskof(X86_FEATURE_RTM_ALWAYS_ABORT) |
            bitmaskof(X86_FEATURE_SRBDS_CTRL))) ||
-         ((mm->arch_caps.raw | md->arch_caps.raw) & ARCH_CAPS_TSX_CTRL) )
+         ((max->arch_caps.raw | def->arch_caps.raw) & ARCH_CAPS_TSX_CTRL) )
         fail("  Xen-only TSX controls offered to guest\n");
 
     switch ( rtm_behaviour )
     {
     case RTM_UD:
-        if ( (cm->feat.raw[0].b | cd->feat.raw[0].b) &
+        if ( (max->feat.raw[0].b | def->feat.raw[0].b) &
              (bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)) )
              fail("  HLE/RTM offered to guests despite not being available\n");
         break;
 
     case RTM_ABORT:
-        if ( cd->feat.raw[0].b &
+        if ( def->feat.raw[0].b &
              (bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)) )
              fail("  HLE/RTM offered to guests by default despite not being usable\n");
         break;
 
     case RTM_OK:
-        if ( !cm->feat.rtm || !cd->feat.rtm )
+        if ( !max->feat.rtm || !def->feat.rtm )
              fail("  RTM not offered to guests despite being available\n");
         break;
     }
 
-    if ( cd->feat.hle )
+    if ( def->feat.hle )
         fail("  Fail: HLE offered in default policy\n");
 }
 
@@ -352,13 +347,13 @@ static void test_def_max_policies(void)
     if ( xen_has_pv )
     {
         printf("Testing PV default/max policies\n");
-        test_guest_policies(&pv_max, &pv_default);
+        test_guest_policies(&pv_max.policy, &pv_default.policy);
     }
 
     if ( xen_has_hvm )
     {
         printf("Testing HVM default/max policies\n");
-        test_guest_policies(&hvm_max, &hvm_default);
+        test_guest_policies(&hvm_max.policy, &hvm_default.policy);
     }
 }
 
@@ -382,23 +377,23 @@ static void test_guest(struct xen_domctl_createdomain *c)
         goto out;
     }
 
-    dump_tsx_details(&guest_policy, "Cur:");
+    dump_tsx_details(&guest_policy.policy, "Cur:");
 
     /*
      * Check defaults given to the guest.
      */
-    if ( guest_policy.cpuid.feat.rtm != (rtm_behaviour == RTM_OK) )
+    if ( guest_policy.policy.feat.rtm != (rtm_behaviour == RTM_OK) )
         fail("  RTM %u in guest, despite rtm behaviour\n",
-             guest_policy.cpuid.feat.rtm);
+             guest_policy.policy.feat.rtm);
 
-    if ( guest_policy.cpuid.feat.hle ||
-         guest_policy.cpuid.feat.tsx_force_abort ||
-         guest_policy.cpuid.feat.rtm_always_abort ||
-         guest_policy.cpuid.feat.srbds_ctrl ||
-         guest_policy.msr.arch_caps.tsx_ctrl )
+    if ( guest_policy.policy.feat.hle ||
+         guest_policy.policy.feat.tsx_force_abort ||
+         guest_policy.policy.feat.rtm_always_abort ||
+         guest_policy.policy.feat.srbds_ctrl ||
+         guest_policy.policy.arch_caps.tsx_ctrl )
         fail("  Unexpected features advertised\n");
 
-    if ( host.cpuid.feat.rtm )
+    if ( host.policy.feat.rtm )
     {
         unsigned int _7b0;
 
@@ -406,7 +401,7 @@ static void test_guest(struct xen_domctl_createdomain *c)
          * If host RTM is available, all combinations of guest flags should be
          * possible.  Flip both HLE/RTM to check non-default settings.
          */
-        _7b0 = (guest_policy.cpuid.feat.raw[0].b ^=
+        _7b0 = (guest_policy.policy.feat.raw[0].b ^=
                 (bitmaskof(X86_FEATURE_HLE) | bitmaskof(X86_FEATURE_RTM)));
 
         /* Set the new policy. */
@@ -427,12 +422,12 @@ static void test_guest(struct xen_domctl_createdomain *c)
             goto out;
         }
 
-        dump_tsx_details(&guest_policy, "Cur:");
+        dump_tsx_details(&guest_policy.policy, "Cur:");
 
-        if ( guest_policy.cpuid.feat.raw[0].b != _7b0 )
+        if ( guest_policy.policy.feat.raw[0].b != _7b0 )
         {
             fail("  Expected CPUID.7[1].b 0x%08x differs from actual 0x%08x\n",
-                 _7b0, guest_policy.cpuid.feat.raw[0].b);
+                 _7b0, guest_policy.policy.feat.raw[0].b);
             goto out;
         }
     }
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:52:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:52:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517712.803477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLK-0005g8-CZ; Tue, 04 Apr 2023 09:52:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517712.803477; Tue, 04 Apr 2023 09:52:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLK-0005fO-5e; Tue, 04 Apr 2023 09:52:46 +0000
Received: by outflank-mailman (input) for mailman id 517712;
 Tue, 04 Apr 2023 09:52:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdLI-00056d-Rv
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:52:44 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 71331b36-d2ce-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 11:52:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71331b36-d2ce-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680601962;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=IL5jduQ9wz95TuOVb8Hg3lcqfJg4oc1M0v7wklWBR7U=;
  b=fqdWjunhNnUkRvKB57Auje5ZtBaKjup87SJp1Mp+1mFJ7h8VO2z2ZQ9m
   VSRHs5OEyNgf8++03Tl0yYWjo+nUcYK/IwZu7hGG+24Y7xmkEYlIIL3ZG
   yrHPjG7Vx2ag6RJh5dBMGaavU1qzBkVOqnZhq4mPQriuJLmedpHVF8ijY
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104161363
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:TTtdRaPFHGHio0PvrR3al8FynXyQoLVcMsEvi/4bfWQNrUohhj0Cy
 WtLWWHXPf+CY2LxfYtxaorn/BgB757Tm9QwTgto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvLrRC9H5qyo42tE5gBmPJingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0sBWOkdp2
 u4JE2EicT7Yos+RzeyhRMA506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLoXmuuyi2a5WDpfsF+P/oI84nTJzRw327/oWDbQUoXSGZwLxx3G/
 Qoq+UzAJwoHCNiulQHC2WCPnPXQ3g/bZYYrQejQGvlC3wTImz175ActfUu2p7y1h1CzX/pbK
 lcI4Ww+oK4q7kupQ9LhGRqirxasvBQRRt5RGO0S8xyWx+zf5APxLncAZi5MbpohrsBeeNAx/
 gbXxZWzX2Up6eDLDyvHrd94sA9eJwAzDFQkQgAWXDBUzMbN+6QeqR+RVNhKRfvdYsLOJd3g/
 9ybhHFg1+1O0pBRiPzTEUPv2Gz1+MWQJuIhzkCOBz/+sFskDGKwT9bwgWU3+8qsO2pworOpm
 HEf0/aT4+kVZX1mvHzcGb5ddF1FChvsDdE9vbKMN8N7n9hV0yT/Fb28GRknTKuTDu4KeCXyf
 GjYsh5L6ZlYMROCNPEnO9/tVZVwlvK+RbwJs8w4ifIXOvBMmPKvpnkyNSZ8IUi2+KTTrU3PE
 cjCKpv9ZZrrIa9m0CC3V48g7FPf/QhnnTm7bcmin3yaPU+2OCb9pUEtbAHfMYjULcqs/G3oz
 jqoH5DVlEkFCbGhO3m/HEx6BQliEEXXzKve86R/HtNv6CI/cI39I5c9GY8cRrE=
IronPort-HdrOrdr: A9a23:igteTK1T07G5OI8/NpHhKwqjBHYkLtp133Aq2lEZdPU0SKGlfq
 GV7ZEmPHrP4gr5N0tOpTntAse9qBDnhPxICOsqXYtKNTOO0AeVxelZhrcKqAeQeBEWmNQ96U
 9hGZIOcuEZDzJB/LvHCN/TKadd/DGFmprY+ts31x1WPGVXgzkL1XYANu6ceHcGIzVuNN4CO7
 e3wNFInDakcWR/VLXBOpFUN9KzweEijfjdEGc7OyI=
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="104161363"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 08/15] x86: Drop struct old_cpu_policy
Date: Tue, 4 Apr 2023 10:52:15 +0100
Message-ID: <20230404095222.1373721-9-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

With all the complicated callers of x86_cpu_policies_are_compatible() updated
to use a single cpu_policy object, we can drop the final user of struct
old_cpu_policy.

Update x86_cpu_policies_are_compatible() to take (new) cpu_policy pointers,
reducing the amount of internal pointer chasing, and update all callers to
pass their cpu_policy objects directly.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Fix host/guest typo in xc_cpu_policy_is_compatible()
---
 tools/libs/guest/xg_cpuid_x86.c          |  4 +-
 tools/tests/cpu-policy/test-cpu-policy.c | 50 +++++++-----------------
 xen/arch/x86/domctl.c                    |  7 +---
 xen/include/xen/lib/x86/cpu-policy.h     | 12 ++----
 xen/lib/x86/policy.c                     | 12 +++---
 5 files changed, 27 insertions(+), 58 deletions(-)

diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 5061fe357767..259029be8b36 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -868,9 +868,7 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
                                  xc_cpu_policy_t *guest)
 {
     struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
-    struct old_cpu_policy h = { &host->policy, &host->policy };
-    struct old_cpu_policy g = { &guest->policy, &guest->policy };
-    int rc = x86_cpu_policies_are_compatible(&h, &g, &err);
+    int rc = x86_cpu_policies_are_compatible(&host->policy, &guest->policy, &err);
 
     if ( !rc )
         return true;
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 909d6272f875..a4ca07f33973 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -98,7 +98,7 @@ static bool msrs_are_sorted(const xen_msr_entry_t *entries, unsigned int nr)
 
 static void test_cpuid_current(void)
 {
-    struct cpuid_policy p;
+    struct cpu_policy p;
     xen_cpuid_leaf_t leaves[CPUID_MAX_SERIALISED_LEAVES];
     unsigned int nr = ARRAY_SIZE(leaves);
     int rc;
@@ -118,7 +118,7 @@ static void test_cpuid_current(void)
 static void test_cpuid_serialise_success(void)
 {
     static const struct test {
-        struct cpuid_policy p;
+        struct cpu_policy p;
         const char *name;
         unsigned int nr_leaves;
     } tests[] = {
@@ -242,7 +242,7 @@ static void test_cpuid_serialise_success(void)
 static void test_msr_serialise_success(void)
 {
     static const struct test {
-        struct msr_policy p;
+        struct cpu_policy p;
         const char *name;
         unsigned int nr_msrs;
     } tests[] = {
@@ -430,7 +430,7 @@ static void test_cpuid_out_of_range_clearing(void)
     static const struct test {
         const char *name;
         unsigned int nr_markers;
-        struct cpuid_policy p;
+        struct cpu_policy p;
     } tests[] = {
         {
             .name = "basic",
@@ -550,7 +550,7 @@ static void test_cpuid_out_of_range_clearing(void)
     for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
     {
         const struct test *t = &tests[i];
-        struct cpuid_policy *p = memdup(&t->p);
+        struct cpu_policy *p = memdup(&t->p);
         void *ptr;
         unsigned int nr_markers;
 
@@ -574,23 +574,20 @@ static void test_is_compatible_success(void)
 {
     static struct test {
         const char *name;
-        struct cpuid_policy host_cpuid;
-        struct cpuid_policy guest_cpuid;
-        struct msr_policy host_msr;
-        struct msr_policy guest_msr;
+        struct cpu_policy host, guest;
     } tests[] = {
         {
             .name = "Host CPUID faulting, Guest not",
-            .host_msr = {
+            .host = {
                 .platform_info.cpuid_faulting = true,
             },
         },
         {
             .name = "Host CPUID faulting, Guest wanted",
-            .host_msr = {
+            .host = {
                 .platform_info.cpuid_faulting = true,
             },
-            .guest_msr = {
+            .guest = {
                 .platform_info.cpuid_faulting = true,
             },
         },
@@ -602,15 +599,8 @@ static void test_is_compatible_success(void)
     for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
     {
         struct test *t = &tests[i];
-        struct old_cpu_policy sys = {
-            &t->host_cpuid,
-            &t->host_msr,
-        }, new = {
-            &t->guest_cpuid,
-            &t->guest_msr,
-        };
         struct cpu_policy_errors e;
-        int res = x86_cpu_policies_are_compatible(&sys, &new, &e);
+        int res = x86_cpu_policies_are_compatible(&t->host, &t->guest, &e);
 
         /* Check the expected error output. */
         if ( res != 0 || memcmp(&no_errors, &e, sizeof(no_errors)) )
@@ -624,25 +614,22 @@ static void test_is_compatible_failure(void)
 {
     static struct test {
         const char *name;
-        struct cpuid_policy host_cpuid;
-        struct cpuid_policy guest_cpuid;
-        struct msr_policy host_msr;
-        struct msr_policy guest_msr;
+        struct cpu_policy host, guest;
         struct cpu_policy_errors e;
     } tests[] = {
         {
             .name = "Host basic.max_leaf out of range",
-            .guest_cpuid.basic.max_leaf = 1,
+            .guest.basic.max_leaf = 1,
             .e = { 0, -1, -1 },
         },
         {
             .name = "Host extd.max_leaf out of range",
-            .guest_cpuid.extd.max_leaf = 1,
+            .guest.extd.max_leaf = 1,
             .e = { 0x80000000, -1, -1 },
         },
         {
             .name = "Host no CPUID faulting, Guest wanted",
-            .guest_msr = {
+            .guest = {
                 .platform_info.cpuid_faulting = true,
             },
             .e = { -1, -1, 0xce },
@@ -654,15 +641,8 @@ static void test_is_compatible_failure(void)
     for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
     {
         struct test *t = &tests[i];
-        struct old_cpu_policy sys = {
-            &t->host_cpuid,
-            &t->host_msr,
-        }, new = {
-            &t->guest_cpuid,
-            &t->guest_msr,
-        };
         struct cpu_policy_errors e;
-        int res = x86_cpu_policies_are_compatible(&sys, &new, &e);
+        int res = x86_cpu_policies_are_compatible(&t->host, &t->guest, &e);
 
         /* Check the expected error output. */
         if ( res == 0 || memcmp(&t->e, &e, sizeof(t->e)) )
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 81be25c67731..c02528594102 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -41,10 +41,9 @@ static int update_domain_cpu_policy(struct domain *d,
                                     xen_domctl_cpu_policy_t *xdpc)
 {
     struct cpu_policy *new;
-    struct cpu_policy *sys = is_pv_domain(d)
+    const struct cpu_policy *sys = is_pv_domain(d)
         ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
         : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
-    struct old_cpu_policy old_sys = { sys, sys }, old_new;
     struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
     int ret = -ENOMEM;
 
@@ -58,8 +57,6 @@ static int update_domain_cpu_policy(struct domain *d,
     if ( !(new = xmemdup(d->arch.cpu_policy)) )
         goto out;
 
-    old_new = (struct old_cpu_policy){ new, new };
-
     /* Merge the toolstack provided data. */
     if ( (ret = x86_cpuid_copy_from_buffer(
               new, xdpc->leaves, xdpc->nr_leaves,
@@ -72,7 +69,7 @@ static int update_domain_cpu_policy(struct domain *d,
     x86_cpuid_policy_clear_out_of_range_leaves(new);
 
     /* Audit the combined dataset. */
-    ret = x86_cpu_policies_are_compatible(&old_sys, &old_new, &err);
+    ret = x86_cpu_policies_are_compatible(sys, new, &err);
     if ( ret )
         goto out;
 
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 53fffca55211..8b27a0725b8e 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -379,12 +379,6 @@ struct cpu_policy
 #define cpuid_policy cpu_policy
 #define msr_policy cpu_policy
 
-struct old_cpu_policy
-{
-    struct cpuid_policy *cpuid;
-    struct msr_policy *msr;
-};
-
 struct cpu_policy_errors
 {
     uint32_t leaf, subleaf;
@@ -559,7 +553,7 @@ int x86_msr_copy_from_buffer(struct msr_policy *policy,
                              const msr_entry_buffer_t msrs, uint32_t nr_entries,
                              uint32_t *err_msr);
 
-/*
+/**
  * Calculate whether two policies are compatible.
  *
  * i.e. Can a VM configured with @guest run on a CPU supporting @host.
@@ -573,8 +567,8 @@ int x86_msr_copy_from_buffer(struct msr_policy *policy,
  * incompatibility is detected, the optional err pointer may identify the
  * problematic leaf/subleaf and/or MSR.
  */
-int x86_cpu_policies_are_compatible(const struct old_cpu_policy *host,
-                                    const struct old_cpu_policy *guest,
+int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
+                                    const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err);
 
 #endif /* !XEN_LIB_X86_POLICIES_H */
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index 2975711d7c6c..a9c60000af9d 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -2,8 +2,8 @@
 
 #include <xen/lib/x86/cpu-policy.h>
 
-int x86_cpu_policies_are_compatible(const struct old_cpu_policy *host,
-                                    const struct old_cpu_policy *guest,
+int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
+                                    const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err)
 {
     struct cpu_policy_errors e = INIT_CPU_POLICY_ERRORS;
@@ -15,18 +15,18 @@ int x86_cpu_policies_are_compatible(const struct old_cpu_policy *host,
 #define FAIL_MSR(m) \
     do { e.msr = (m); goto out; } while ( 0 )
 
-    if ( guest->cpuid->basic.max_leaf > host->cpuid->basic.max_leaf )
+    if ( guest->basic.max_leaf > host->basic.max_leaf )
         FAIL_CPUID(0, NA);
 
-    if ( guest->cpuid->feat.max_subleaf > host->cpuid->feat.max_subleaf )
+    if ( guest->feat.max_subleaf > host->feat.max_subleaf )
         FAIL_CPUID(7, 0);
 
-    if ( guest->cpuid->extd.max_leaf > host->cpuid->extd.max_leaf )
+    if ( guest->extd.max_leaf > host->extd.max_leaf )
         FAIL_CPUID(0x80000000, NA);
 
     /* TODO: Audit more CPUID data. */
 
-    if ( ~host->msr->platform_info.raw & guest->msr->platform_info.raw )
+    if ( ~host->platform_info.raw & guest->platform_info.raw )
         FAIL_MSR(MSR_INTEL_PLATFORM_INFO);
 
 #undef FAIL_MSR
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:52:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:52:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517717.803525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLP-00074K-Pc; Tue, 04 Apr 2023 09:52:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517717.803525; Tue, 04 Apr 2023 09:52:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdLP-00072q-EI; Tue, 04 Apr 2023 09:52:51 +0000
Received: by outflank-mailman (input) for mailman id 517717;
 Tue, 04 Apr 2023 09:52:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdLN-00056d-7T
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:52:49 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 71129d6f-d2ce-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 11:52:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71129d6f-d2ce-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680601964;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=HneZ+FyiXcZ5gyPQXO36bg8Dh9bI0rc/BYX3S3biViY=;
  b=XOLkP+Qp9vWrMcjAxNov3/hjB2+dj3ygmlmKOaGt+ltkfcGIOnueYnbq
   DaC1w+NZ2JFaNphpZKZeSy07N082DO/hS4t/r21RXi1bTXGOnDOv5aSvf
   jgEC9T0kR82hwXv9T1zifq1xXnMrkles5qt8dmaJKCBhU4wQI2Ivtqr68
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104275102
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:xF1lj6zryNkAuFjMYsx6t+dIxirEfRIJ4+MujC+fZmUNrF6WrkUCy
 jYcDG/TPfyJMWb1eooga9+0pEsBup/UmtJkTVFt/yAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UMHUMja4mtC5QRiPawT5TcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KVhI/
 PM0EhUiUg6at9rr+ZSJDbFXoP12eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+BgHXlfiIeg1WSvactuEDYzRBr0airO93QEjCPbZwNwhzH+
 zuepQwVBDkbO9e1ygTez06Lh82SlDH6G7AIM6Cno6sCbFq7mTVIVUx+uUGAiem0jAuyVsxSL
 2QQ+zEytu4i+UqzVN7/Uhak5nmesXY0WdBdDuk74wGl0bfP7kCSAW1sZiFFQMwrsokxXzNC6
 7OSt4q3X3o16uTTEC/DsO7O9lteJBT5M0c7YyYLYTEgzOX9ubEL0yvpfJE7K4iM24id9S7L/
 9yakMQvr+xN3ZZWiPvhogmvbyGE/caQEFNsjunDdif8t14iOtb4D2C9wQKDhcusOrp1WbVoU
 JIsv8GFpN4DApiW/MBmaLVcRer5jxpp3dC1vLKOI3XC3273k5JbVdoMiAyS3W8wWir+RRfnY
 VXIpSRa74JJMX2hYMdfOtzhU5l2k/m6To67Bpg4i+aihbAoLGe6ENxGPxbMjwgBbmB3+U3AB
 XtrWZn1VitLYUiW5DG3W/0cwdcW+8zK/kuKHcqT503+gdKjiIu9Fe9t3K2mMrpos8tpYWz9r
 75iCid9404AD7GkO3WIqN57wJJjBSFTOK0aYvd/LoarSjeK0kl4YxMN6dvNo7BYopk=
IronPort-HdrOrdr: A9a23:1SF98aiMBDuv+B6mbtonImLiPnBQXgMji2hC6mlwRA09TyVXrb
 HKoB1p726KtN9xYgBbpTnkAsO9qBznhOdICOUqTNSftUzdyQyVxeJZnPDfKl/balXDH4dmvM
 8KE5SWSueAaWSS5fya3ODSKadH/DDzytHVuQ6x9QYOceioUc1dBsVCZzpz3ncYeOA/P+tFKH
 NU3KR6mwY=
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="104275102"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 05/15] x86: Merge the system {cpuid,msr} policy objects
Date: Tue, 4 Apr 2023 10:52:12 +0100
Message-ID: <20230404095222.1373721-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Right now, they're the same underlying type, containing disjoint information.

Introduce a new cpu-policy.{h,c} to be the new location for all policy
handling logic.  Place the combined objects in __ro_after_init, which is new
since the original logic was written.

As we're trying to phase out the use of struct old_cpu_policy entirely, rework
update_domain_cpu_policy() to not pointer-chase through system_policies[].

This in turn allows system_policies[] in sysctl.c to become static and reduced
in scope to XEN_SYSCTL_get_cpu_policy.

No practical change.  This undoes the transient doubling of storage space from
earlier patches.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Reword commit message
 * Reintroduce dropped const from system_policies[]
---
 xen/arch/x86/Makefile                 |  1 +
 xen/arch/x86/cpu-policy.c             | 18 +++++++
 xen/arch/x86/cpu/common.c             |  4 +-
 xen/arch/x86/cpuid.c                  | 66 +++++++++++--------------
 xen/arch/x86/domctl.c                 | 17 +++++--
 xen/arch/x86/include/asm/cpu-policy.h | 14 ++++++
 xen/arch/x86/include/asm/cpuid.h      |  6 ---
 xen/arch/x86/include/asm/msr.h        |  7 ---
 xen/arch/x86/msr.c                    | 38 ++++++--------
 xen/arch/x86/sysctl.c                 | 71 ++++++++++-----------------
 10 files changed, 116 insertions(+), 126 deletions(-)
 create mode 100644 xen/arch/x86/cpu-policy.c
 create mode 100644 xen/arch/x86/include/asm/cpu-policy.h

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 08b110592dcc..fc9487aa4023 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -19,6 +19,7 @@ obj-y += bitops.o
 obj-bin-y += bzimage.init.o
 obj-bin-y += clear_page.o
 obj-bin-y += copy_page.o
+obj-y += cpu-policy.o
 obj-y += cpuid.o
 obj-$(CONFIG_PV) += compat.o
 obj-$(CONFIG_PV32) += x86_64/compat.o
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
new file mode 100644
index 000000000000..663e9a084c53
--- /dev/null
+++ b/xen/arch/x86/cpu-policy.c
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#include <xen/cache.h>
+#include <xen/kernel.h>
+
+#include <xen/lib/x86/cpu-policy.h>
+
+#include <asm/cpu-policy.h>
+
+struct cpu_policy __ro_after_init     raw_cpu_policy;
+struct cpu_policy __ro_after_init    host_cpu_policy;
+#ifdef CONFIG_PV
+struct cpu_policy __ro_after_init  pv_max_cpu_policy;
+struct cpu_policy __ro_after_init  pv_def_cpu_policy;
+#endif
+#ifdef CONFIG_HVM
+struct cpu_policy __ro_after_init hvm_max_cpu_policy;
+struct cpu_policy __ro_after_init hvm_def_cpu_policy;
+#endif
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 5ad347534a22..f11dcda57a69 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -3,6 +3,8 @@
 #include <xen/delay.h>
 #include <xen/param.h>
 #include <xen/smp.h>
+
+#include <asm/cpu-policy.h>
 #include <asm/current.h>
 #include <asm/debugreg.h>
 #include <asm/processor.h>
@@ -141,7 +143,7 @@ bool __init probe_cpuid_faulting(void)
 		return false;
 
 	if ((rc = rdmsr_safe(MSR_INTEL_PLATFORM_INFO, val)) == 0)
-		raw_msr_policy.platform_info.cpuid_faulting =
+		raw_cpu_policy.platform_info.cpuid_faulting =
 			val & MSR_PLATFORM_INFO_CPUID_FAULTING;
 
 	if (rc ||
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index b22725c492e7..0916bfe175c8 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -4,6 +4,7 @@
 #include <xen/sched.h>
 #include <xen/nospec.h>
 #include <asm/amd.h>
+#include <asm/cpu-policy.h>
 #include <asm/cpuid.h>
 #include <asm/hvm/hvm.h>
 #include <asm/hvm/nestedhvm.h>
@@ -142,17 +143,6 @@ static void zero_leaves(struct cpuid_leaf *l,
     memset(&l[first], 0, sizeof(*l) * (last - first + 1));
 }
 
-struct cpuid_policy __read_mostly     raw_cpuid_policy,
-                    __read_mostly    host_cpuid_policy;
-#ifdef CONFIG_PV
-struct cpuid_policy __read_mostly  pv_max_cpuid_policy;
-struct cpuid_policy __read_mostly  pv_def_cpuid_policy;
-#endif
-#ifdef CONFIG_HVM
-struct cpuid_policy __read_mostly hvm_max_cpuid_policy;
-struct cpuid_policy __read_mostly hvm_def_cpuid_policy;
-#endif
-
 static void sanitise_featureset(uint32_t *fs)
 {
     /* for_each_set_bit() uses unsigned longs.  Extend with zeroes. */
@@ -344,7 +334,7 @@ static void recalculate_misc(struct cpuid_policy *p)
 
 static void __init calculate_raw_policy(void)
 {
-    struct cpuid_policy *p = &raw_cpuid_policy;
+    struct cpuid_policy *p = &raw_cpu_policy;
 
     x86_cpuid_policy_fill_native(p);
 
@@ -354,10 +344,10 @@ static void __init calculate_raw_policy(void)
 
 static void __init calculate_host_policy(void)
 {
-    struct cpuid_policy *p = &host_cpuid_policy;
+    struct cpuid_policy *p = &host_cpu_policy;
     unsigned int max_extd_leaf;
 
-    *p = raw_cpuid_policy;
+    *p = raw_cpu_policy;
 
     p->basic.max_leaf =
         min_t(uint32_t, p->basic.max_leaf,   ARRAY_SIZE(p->basic.raw) - 1);
@@ -449,17 +439,17 @@ static void __init guest_common_feature_adjustments(uint32_t *fs)
      * of IBRS by using the AMD feature bit.  An administrator may wish for
      * performance reasons to offer IBPB without IBRS.
      */
-    if ( host_cpuid_policy.feat.ibrsb )
+    if ( host_cpu_policy.feat.ibrsb )
         __set_bit(X86_FEATURE_IBPB, fs);
 }
 
 static void __init calculate_pv_max_policy(void)
 {
-    struct cpuid_policy *p = &pv_max_cpuid_policy;
+    struct cpuid_policy *p = &pv_max_cpu_policy;
     uint32_t pv_featureset[FSCAPINTS];
     unsigned int i;
 
-    *p = host_cpuid_policy;
+    *p = host_cpu_policy;
     cpuid_policy_to_featureset(p, pv_featureset);
 
     for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
@@ -486,11 +476,11 @@ static void __init calculate_pv_max_policy(void)
 
 static void __init calculate_pv_def_policy(void)
 {
-    struct cpuid_policy *p = &pv_def_cpuid_policy;
+    struct cpuid_policy *p = &pv_def_cpu_policy;
     uint32_t pv_featureset[FSCAPINTS];
     unsigned int i;
 
-    *p = pv_max_cpuid_policy;
+    *p = pv_max_cpu_policy;
     cpuid_policy_to_featureset(p, pv_featureset);
 
     for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
@@ -506,12 +496,12 @@ static void __init calculate_pv_def_policy(void)
 
 static void __init calculate_hvm_max_policy(void)
 {
-    struct cpuid_policy *p = &hvm_max_cpuid_policy;
+    struct cpuid_policy *p = &hvm_max_cpu_policy;
     uint32_t hvm_featureset[FSCAPINTS];
     unsigned int i;
     const uint32_t *hvm_featuremask;
 
-    *p = host_cpuid_policy;
+    *p = host_cpu_policy;
     cpuid_policy_to_featureset(p, hvm_featureset);
 
     hvm_featuremask = hvm_hap_supported() ?
@@ -539,7 +529,7 @@ static void __init calculate_hvm_max_policy(void)
      * HVM guests are able if running in protected mode.
      */
     if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
-         raw_cpuid_policy.basic.sep )
+         raw_cpu_policy.basic.sep )
         __set_bit(X86_FEATURE_SEP, hvm_featureset);
 
     /*
@@ -597,12 +587,12 @@ static void __init calculate_hvm_max_policy(void)
 
 static void __init calculate_hvm_def_policy(void)
 {
-    struct cpuid_policy *p = &hvm_def_cpuid_policy;
+    struct cpuid_policy *p = &hvm_def_cpu_policy;
     uint32_t hvm_featureset[FSCAPINTS];
     unsigned int i;
     const uint32_t *hvm_featuremask;
 
-    *p = hvm_max_cpuid_policy;
+    *p = hvm_max_cpu_policy;
     cpuid_policy_to_featureset(p, hvm_featureset);
 
     hvm_featuremask = hvm_hap_supported() ?
@@ -670,8 +660,8 @@ void recalculate_cpuid_policy(struct domain *d)
 {
     struct cpuid_policy *p = d->arch.cpuid;
     const struct cpuid_policy *max = is_pv_domain(d)
-        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpuid_policy : NULL)
-        : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpuid_policy : NULL);
+        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
+        : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
     uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
     unsigned int i;
 
@@ -746,7 +736,7 @@ void recalculate_cpuid_policy(struct domain *d)
     /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
     fs[FEATURESET_7b0] &= ~(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
                             cpufeat_mask(X86_FEATURE_NO_FPU_SEL));
-    fs[FEATURESET_7b0] |= (host_cpuid_policy.feat._7b0 &
+    fs[FEATURESET_7b0] |= (host_cpu_policy.feat._7b0 &
                            (cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
                             cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
 
@@ -797,8 +787,8 @@ void recalculate_cpuid_policy(struct domain *d)
 int init_domain_cpuid_policy(struct domain *d)
 {
     struct cpuid_policy *p = is_pv_domain(d)
-        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_def_cpuid_policy : NULL)
-        : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpuid_policy : NULL);
+        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_def_cpu_policy : NULL)
+        : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpu_policy : NULL);
 
     if ( !p )
     {
@@ -1102,7 +1092,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         if ( is_pv_domain(d) && is_hardware_domain(d) &&
              guest_kernel_mode(v, regs) && cpu_has_monitor &&
              regs->entry_vector == TRAP_gp_fault )
-            *res = raw_cpuid_policy.basic.raw[5];
+            *res = raw_cpu_policy.basic.raw[5];
         break;
 
     case 0x7:
@@ -1234,14 +1224,14 @@ static void __init __maybe_unused build_assertions(void)
     /* Find some more clever allocation scheme if this trips. */
     BUILD_BUG_ON(sizeof(struct cpuid_policy) > PAGE_SIZE);
 
-    BUILD_BUG_ON(sizeof(raw_cpuid_policy.basic) !=
-                 sizeof(raw_cpuid_policy.basic.raw));
-    BUILD_BUG_ON(sizeof(raw_cpuid_policy.feat) !=
-                 sizeof(raw_cpuid_policy.feat.raw));
-    BUILD_BUG_ON(sizeof(raw_cpuid_policy.xstate) !=
-                 sizeof(raw_cpuid_policy.xstate.raw));
-    BUILD_BUG_ON(sizeof(raw_cpuid_policy.extd) !=
-                 sizeof(raw_cpuid_policy.extd.raw));
+    BUILD_BUG_ON(sizeof(raw_cpu_policy.basic) !=
+                 sizeof(raw_cpu_policy.basic.raw));
+    BUILD_BUG_ON(sizeof(raw_cpu_policy.feat) !=
+                 sizeof(raw_cpu_policy.feat.raw));
+    BUILD_BUG_ON(sizeof(raw_cpu_policy.xstate) !=
+                 sizeof(raw_cpu_policy.xstate.raw));
+    BUILD_BUG_ON(sizeof(raw_cpu_policy.extd) !=
+                 sizeof(raw_cpu_policy.extd.raw));
 }
 
 /*
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 944af63e68d0..5800bb10bc4a 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -35,18 +35,25 @@
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
 #include <asm/psr.h>
-#include <asm/cpuid.h>
+#include <asm/cpu-policy.h>
 
 static int update_domain_cpu_policy(struct domain *d,
                                     xen_domctl_cpu_policy_t *xdpc)
 {
     struct old_cpu_policy new = {};
-    const struct old_cpu_policy *sys = is_pv_domain(d)
-        ? &system_policies[XEN_SYSCTL_cpu_policy_pv_max]
-        : &system_policies[XEN_SYSCTL_cpu_policy_hvm_max];
+    struct cpu_policy *sys = is_pv_domain(d)
+        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
+        : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
+    struct old_cpu_policy old_sys = { sys, sys };
     struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
     int ret = -ENOMEM;
 
+    if ( !sys )
+    {
+        ASSERT_UNREACHABLE();
+        return -EOPNOTSUPP;
+    }
+
     /* Start by copying the domain's existing policies. */
     if ( !(new.cpuid = xmemdup(d->arch.cpuid)) ||
          !(new.msr   = xmemdup(d->arch.msr)) )
@@ -64,7 +71,7 @@ static int update_domain_cpu_policy(struct domain *d,
     x86_cpuid_policy_clear_out_of_range_leaves(new.cpuid);
 
     /* Audit the combined dataset. */
-    ret = x86_cpu_policies_are_compatible(sys, &new, &err);
+    ret = x86_cpu_policies_are_compatible(&old_sys, &new, &err);
     if ( ret )
         goto out;
 
diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
new file mode 100644
index 000000000000..eef14bb4267e
--- /dev/null
+++ b/xen/arch/x86/include/asm/cpu-policy.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef X86_CPU_POLICY_H
+#define X86_CPU_POLICY_H
+
+struct cpu_policy;
+
+extern struct cpu_policy     raw_cpu_policy;
+extern struct cpu_policy    host_cpu_policy;
+extern struct cpu_policy  pv_max_cpu_policy;
+extern struct cpu_policy  pv_def_cpu_policy;
+extern struct cpu_policy hvm_max_cpu_policy;
+extern struct cpu_policy hvm_def_cpu_policy;
+
+#endif /* X86_CPU_POLICY_H */
diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
index d418e8100dde..ea0586277331 100644
--- a/xen/arch/x86/include/asm/cpuid.h
+++ b/xen/arch/x86/include/asm/cpuid.h
@@ -46,12 +46,6 @@ DECLARE_PER_CPU(struct cpuidmasks, cpuidmasks);
 /* Default masking MSR values, calculated at boot. */
 extern struct cpuidmasks cpuidmask_defaults;
 
-extern struct cpuid_policy raw_cpuid_policy, host_cpuid_policy,
-    pv_max_cpuid_policy, pv_def_cpuid_policy,
-    hvm_max_cpuid_policy, hvm_def_cpuid_policy;
-
-extern const struct old_cpu_policy system_policies[];
-
 /* Check that all previously present features are still available. */
 bool recheck_cpu_features(unsigned int cpu);
 
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index 02eddd919c27..022230acc0af 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -292,13 +292,6 @@ static inline void wrmsr_tsc_aux(uint32_t val)
 
 uint64_t msr_spec_ctrl_valid_bits(const struct cpuid_policy *cp);
 
-extern struct msr_policy     raw_msr_policy,
-                            host_msr_policy,
-                          pv_max_msr_policy,
-                          pv_def_msr_policy,
-                         hvm_max_msr_policy,
-                         hvm_def_msr_policy;
-
 /* Container object for per-vCPU MSRs */
 struct vcpu_msrs
 {
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 7ddf0078c3a2..bff26bc4e2b5 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -25,6 +25,7 @@
 #include <xen/sched.h>
 
 #include <asm/amd.h>
+#include <asm/cpu-policy.h>
 #include <asm/debugreg.h>
 #include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/viridian.h>
@@ -37,20 +38,9 @@
 
 DEFINE_PER_CPU(uint32_t, tsc_aux);
 
-struct msr_policy __read_mostly     raw_msr_policy,
-                  __read_mostly    host_msr_policy;
-#ifdef CONFIG_PV
-struct msr_policy __read_mostly  pv_max_msr_policy;
-struct msr_policy __read_mostly  pv_def_msr_policy;
-#endif
-#ifdef CONFIG_HVM
-struct msr_policy __read_mostly hvm_max_msr_policy;
-struct msr_policy __read_mostly hvm_def_msr_policy;
-#endif
-
 static void __init calculate_raw_policy(void)
 {
-    struct msr_policy *mp = &raw_msr_policy;
+    struct msr_policy *mp = &raw_cpu_policy;
 
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     /* Was already added by probe_cpuid_faulting() */
@@ -61,9 +51,9 @@ static void __init calculate_raw_policy(void)
 
 static void __init calculate_host_policy(void)
 {
-    struct msr_policy *mp = &host_msr_policy;
+    struct msr_policy *mp = &host_cpu_policy;
 
-    *mp = raw_msr_policy;
+    *mp = raw_cpu_policy;
 
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
@@ -81,25 +71,25 @@ static void __init calculate_host_policy(void)
 
 static void __init calculate_pv_max_policy(void)
 {
-    struct msr_policy *mp = &pv_max_msr_policy;
+    struct msr_policy *mp = &pv_max_cpu_policy;
 
-    *mp = host_msr_policy;
+    *mp = host_cpu_policy;
 
     mp->arch_caps.raw = 0; /* Not supported yet. */
 }
 
 static void __init calculate_pv_def_policy(void)
 {
-    struct msr_policy *mp = &pv_def_msr_policy;
+    struct msr_policy *mp = &pv_def_cpu_policy;
 
-    *mp = pv_max_msr_policy;
+    *mp = pv_max_cpu_policy;
 }
 
 static void __init calculate_hvm_max_policy(void)
 {
-    struct msr_policy *mp = &hvm_max_msr_policy;
+    struct msr_policy *mp = &hvm_max_cpu_policy;
 
-    *mp = host_msr_policy;
+    *mp = host_cpu_policy;
 
     /* It's always possible to emulate CPUID faulting for HVM guests */
     mp->platform_info.cpuid_faulting = true;
@@ -109,9 +99,9 @@ static void __init calculate_hvm_max_policy(void)
 
 static void __init calculate_hvm_def_policy(void)
 {
-    struct msr_policy *mp = &hvm_def_msr_policy;
+    struct msr_policy *mp = &hvm_def_cpu_policy;
 
-    *mp = hvm_max_msr_policy;
+    *mp = hvm_max_cpu_policy;
 }
 
 void __init init_guest_msr_policy(void)
@@ -135,8 +125,8 @@ void __init init_guest_msr_policy(void)
 int init_domain_msr_policy(struct domain *d)
 {
     struct msr_policy *mp = is_pv_domain(d)
-        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_def_msr_policy : NULL)
-        : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_msr_policy : NULL);
+        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_def_cpu_policy : NULL)
+        : (IS_ENABLED(CONFIG_HVM) ? &hvm_def_cpu_policy : NULL);
 
     if ( !mp )
     {
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 3ed7c69f4315..43a241f2090f 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -30,38 +30,7 @@
 #include <xen/cpu.h>
 #include <xsm/xsm.h>
 #include <asm/psr.h>
-#include <asm/cpuid.h>
-
-const struct old_cpu_policy system_policies[6] = {
-    [ XEN_SYSCTL_cpu_policy_raw ] = {
-        &raw_cpuid_policy,
-        &raw_msr_policy,
-    },
-    [ XEN_SYSCTL_cpu_policy_host ] = {
-        &host_cpuid_policy,
-        &host_msr_policy,
-    },
-#ifdef CONFIG_PV
-    [ XEN_SYSCTL_cpu_policy_pv_max ] = {
-        &pv_max_cpuid_policy,
-        &pv_max_msr_policy,
-    },
-    [ XEN_SYSCTL_cpu_policy_pv_default ] = {
-        &pv_def_cpuid_policy,
-        &pv_def_msr_policy,
-    },
-#endif
-#ifdef CONFIG_HVM
-    [ XEN_SYSCTL_cpu_policy_hvm_max ] = {
-        &hvm_max_cpuid_policy,
-        &hvm_max_msr_policy,
-    },
-    [ XEN_SYSCTL_cpu_policy_hvm_default ] = {
-        &hvm_def_cpuid_policy,
-        &hvm_def_msr_policy,
-    },
-#endif
-};
+#include <asm/cpu-policy.h>
 
 struct l3_cache_info {
     int ret;
@@ -326,19 +295,19 @@ long arch_do_sysctl(
 
     case XEN_SYSCTL_get_cpu_featureset:
     {
-        static const struct cpuid_policy *const policy_table[6] = {
-            [XEN_SYSCTL_cpu_featureset_raw]  = &raw_cpuid_policy,
-            [XEN_SYSCTL_cpu_featureset_host] = &host_cpuid_policy,
+        static const struct cpu_policy *const policy_table[6] = {
+            [XEN_SYSCTL_cpu_featureset_raw]     = &raw_cpu_policy,
+            [XEN_SYSCTL_cpu_featureset_host]    = &host_cpu_policy,
 #ifdef CONFIG_PV
-            [XEN_SYSCTL_cpu_featureset_pv]   = &pv_def_cpuid_policy,
-            [XEN_SYSCTL_cpu_featureset_pv_max] = &pv_max_cpuid_policy,
+            [XEN_SYSCTL_cpu_featureset_pv]      = &pv_def_cpu_policy,
+            [XEN_SYSCTL_cpu_featureset_pv_max]  = &pv_max_cpu_policy,
 #endif
 #ifdef CONFIG_HVM
-            [XEN_SYSCTL_cpu_featureset_hvm]  = &hvm_def_cpuid_policy,
-            [XEN_SYSCTL_cpu_featureset_hvm_max] = &hvm_max_cpuid_policy,
+            [XEN_SYSCTL_cpu_featureset_hvm]     = &hvm_def_cpu_policy,
+            [XEN_SYSCTL_cpu_featureset_hvm_max] = &hvm_max_cpu_policy,
 #endif
         };
-        const struct cpuid_policy *p = NULL;
+        const struct cpu_policy *p = NULL;
         uint32_t featureset[FSCAPINTS];
         unsigned int nr;
 
@@ -391,7 +360,19 @@ long arch_do_sysctl(
 
     case XEN_SYSCTL_get_cpu_policy:
     {
-        const struct old_cpu_policy *policy;
+        static const struct cpu_policy *const system_policies[6] = {
+            [XEN_SYSCTL_cpu_policy_raw]         = &raw_cpu_policy,
+            [XEN_SYSCTL_cpu_policy_host]        = &host_cpu_policy,
+#ifdef CONFIG_PV
+            [XEN_SYSCTL_cpu_policy_pv_max]      = &pv_max_cpu_policy,
+            [XEN_SYSCTL_cpu_policy_pv_default]  = &pv_def_cpu_policy,
+#endif
+#ifdef CONFIG_HVM
+            [XEN_SYSCTL_cpu_policy_hvm_max]     = &hvm_max_cpu_policy,
+            [XEN_SYSCTL_cpu_policy_hvm_default] = &hvm_def_cpu_policy,
+#endif
+        };
+        const struct cpu_policy *policy;
 
         /* Reserved field set, or bad policy index? */
         if ( sysctl->u.cpu_policy._rsvd ||
@@ -400,11 +381,11 @@ long arch_do_sysctl(
             ret = -EINVAL;
             break;
         }
-        policy = &system_policies[
+        policy = system_policies[
             array_index_nospec(sysctl->u.cpu_policy.index,
                                ARRAY_SIZE(system_policies))];
 
-        if ( !policy->cpuid || !policy->msr )
+        if ( !policy )
         {
             ret = -EOPNOTSUPP;
             break;
@@ -414,7 +395,7 @@ long arch_do_sysctl(
         if ( guest_handle_is_null(sysctl->u.cpu_policy.leaves) )
             sysctl->u.cpu_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
         else if ( (ret = x86_cpuid_copy_to_buffer(
-                       policy->cpuid,
+                       policy,
                        sysctl->u.cpu_policy.leaves,
                        &sysctl->u.cpu_policy.nr_leaves)) )
             break;
@@ -430,7 +411,7 @@ long arch_do_sysctl(
         if ( guest_handle_is_null(sysctl->u.cpu_policy.msrs) )
             sysctl->u.cpu_policy.nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
         else if ( (ret = x86_msr_copy_to_buffer(
-                       policy->msr,
+                       policy,
                        sysctl->u.cpu_policy.msrs,
                        &sysctl->u.cpu_policy.nr_msrs)) )
             break;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:52:53 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 03/15] x86: Rename struct cpuid_policy to struct cpu_policy
Date: Tue, 4 Apr 2023 10:52:10 +0100
Message-ID: <20230404095222.1373721-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Also merge lib/x86/cpuid.h entirely into lib/x86/cpu-policy.h

Use a temporary #define so that the name struct cpuid_policy continues to
work.

There's one forward declaration of struct cpuid_policy in
tools/tests/x86_emulator/x86-emulate.h that isn't covered by the define, and
it's easier to rename that now than to rearrange the includes.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Retain test/x86_emulator handcoded dependency on cpuid-autogen.h
 * Rebase over x86_emulate() split
---
 tools/fuzz/cpu-policy/afl-policy-fuzzer.c |   2 +-
 tools/tests/x86_emulator/Makefile         |   2 +-
 tools/tests/x86_emulator/x86-emulate.h    |   2 +-
 xen/arch/x86/include/asm/cpuid.h          |   1 -
 xen/arch/x86/x86_emulate/x86_emulate.h    |   2 +-
 xen/include/xen/lib/x86/cpu-policy.h      | 463 ++++++++++++++++++++-
 xen/include/xen/lib/x86/cpuid.h           | 475 ----------------------
 xen/lib/x86/cpuid.c                       |   2 +-
 8 files changed, 467 insertions(+), 482 deletions(-)
 delete mode 100644 xen/include/xen/lib/x86/cpuid.h

diff --git a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
index 7d0f274c6cdd..79e42e8bfd04 100644
--- a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
+++ b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
@@ -9,7 +9,7 @@
 #include <getopt.h>
 
 #include <xen-tools/common-macros.h>
-#include <xen/lib/x86/cpuid.h>
+#include <xen/lib/x86/cpu-policy.h>
 #include <xen/lib/x86/msr.h>
 #include <xen/domctl.h>
 
diff --git a/tools/tests/x86_emulator/Makefile b/tools/tests/x86_emulator/Makefile
index f5d88fb9f681..4b1f75de052e 100644
--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -292,7 +292,7 @@ HOSTCFLAGS += $(CFLAGS_xeninclude) -I. $(HOSTCFLAGS-$(XEN_COMPILE_ARCH))
 x86.h := $(addprefix $(XEN_ROOT)/tools/include/xen/asm/,\
                      x86-vendors.h x86-defns.h msr-index.h) \
          $(addprefix $(XEN_ROOT)/tools/include/xen/lib/x86/, \
-                     cpuid.h cpuid-autogen.h)
+                     cpu-policy.h cpuid-autogen.h)
 x86_emulate.h := x86-emulate.h x86_emulate/x86_emulate.h x86_emulate/private.h $(x86.h)
 
 $(OBJS): %.o: %.c $(x86_emulate.h)
diff --git a/tools/tests/x86_emulator/x86-emulate.h b/tools/tests/x86_emulator/x86-emulate.h
index 942b4cdd47d1..02922d0c5a19 100644
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -77,7 +77,7 @@
 #define is_canonical_address(x) (((int64_t)(x) >> 47) == ((int64_t)(x) >> 63))
 
 extern uint32_t mxcsr_mask;
-extern struct cpuid_policy cp;
+extern struct cpu_policy cp;
 
 #define MMAP_SZ 16384
 bool emul_test_init(void);
diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
index 49b3128f06f9..d418e8100dde 100644
--- a/xen/arch/x86/include/asm/cpuid.h
+++ b/xen/arch/x86/include/asm/cpuid.h
@@ -9,7 +9,6 @@
 #include <xen/percpu.h>
 
 #include <xen/lib/x86/cpu-policy.h>
-#include <xen/lib/x86/cpuid.h>
 
 #include <public/sysctl.h>
 
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.h b/xen/arch/x86/x86_emulate/x86_emulate.h
index bb7af967ffee..75015104fbdb 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -23,7 +23,7 @@
 #ifndef __X86_EMULATE_H__
 #define __X86_EMULATE_H__
 
-#include <xen/lib/x86/cpuid.h>
+#include <xen/lib/x86/cpu-policy.h>
 
 #define MAX_INST_LEN 15
 
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 3a5300d1078c..666505964d00 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -2,9 +2,342 @@
 #ifndef XEN_LIB_X86_POLICIES_H
 #define XEN_LIB_X86_POLICIES_H
 
-#include <xen/lib/x86/cpuid.h>
+#include <xen/lib/x86/cpuid-autogen.h>
 #include <xen/lib/x86/msr.h>
 
+#define FEATURESET_1d     0 /* 0x00000001.edx      */
+#define FEATURESET_1c     1 /* 0x00000001.ecx      */
+#define FEATURESET_e1d    2 /* 0x80000001.edx      */
+#define FEATURESET_e1c    3 /* 0x80000001.ecx      */
+#define FEATURESET_Da1    4 /* 0x0000000d:1.eax    */
+#define FEATURESET_7b0    5 /* 0x00000007:0.ebx    */
+#define FEATURESET_7c0    6 /* 0x00000007:0.ecx    */
+#define FEATURESET_e7d    7 /* 0x80000007.edx      */
+#define FEATURESET_e8b    8 /* 0x80000008.ebx      */
+#define FEATURESET_7d0    9 /* 0x00000007:0.edx    */
+#define FEATURESET_7a1   10 /* 0x00000007:1.eax    */
+#define FEATURESET_e21a  11 /* 0x80000021.eax      */
+#define FEATURESET_7b1   12 /* 0x00000007:1.ebx    */
+#define FEATURESET_7d2   13 /* 0x00000007:2.edx    */
+#define FEATURESET_7c1   14 /* 0x00000007:1.ecx    */
+#define FEATURESET_7d1   15 /* 0x00000007:1.edx    */
+
+struct cpuid_leaf
+{
+    uint32_t a, b, c, d;
+};
+
+/*
+ * Versions of GCC before 5 unconditionally reserve %rBX as the PIC hard
+ * register, and are unable to cope with spilling it.  This results in a
+ * rather cryptic error:
+ *    error: inconsistent operand constraints in an ‘asm’
+ *
+ * In affected situations, work around the issue by using a separate register
+ * to hold the %rBX output, and xchg twice to leave %rBX preserved around
+ * the asm() statement.
+ */
+#if defined(__PIC__) && __GNUC__ < 5 && !defined(__clang__) && defined(__i386__)
+# define XCHG_BX "xchg %%ebx, %[bx];"
+# define BX_CON [bx] "=&r"
+#elif defined(__PIC__) && __GNUC__ < 5 && !defined(__clang__) && \
+    defined(__x86_64__) && (defined(__code_model_medium__) || \
+                            defined(__code_model_large__))
+# define XCHG_BX "xchg %%rbx, %q[bx];"
+# define BX_CON [bx] "=&r"
+#else
+# define XCHG_BX ""
+# define BX_CON "=&b"
+#endif
+
+static inline void cpuid_leaf(uint32_t leaf, struct cpuid_leaf *l)
+{
+    asm ( XCHG_BX
+          "cpuid;"
+          XCHG_BX
+          : "=a" (l->a), BX_CON (l->b), "=&c" (l->c), "=&d" (l->d)
+          : "a" (leaf) );
+}
+
+static inline void cpuid_count_leaf(
+    uint32_t leaf, uint32_t subleaf, struct cpuid_leaf *l)
+{
+    asm ( XCHG_BX
+          "cpuid;"
+          XCHG_BX
+          : "=a" (l->a), BX_CON (l->b), "=c" (l->c), "=&d" (l->d)
+          : "a" (leaf), "c" (subleaf) );
+}
+
+#undef BX_CON
+#undef XCHG_BX
+
+/**
+ * Given the vendor id from CPUID leaf 0, look up Xen's internal integer
+ * vendor ID.  Returns X86_VENDOR_UNKNOWN for any unknown vendor.
+ */
+unsigned int x86_cpuid_lookup_vendor(uint32_t ebx, uint32_t ecx, uint32_t edx);
+
+/**
+ * Given Xen's internal vendor ID, return a string suitable for printing.
+ * Returns "Unknown" for any unrecognised ID.
+ */
+const char *x86_cpuid_vendor_to_str(unsigned int vendor);
+
+#define CPUID_GUEST_NR_BASIC      (0xdu + 1)
+#define CPUID_GUEST_NR_CACHE      (5u + 1)
+#define CPUID_GUEST_NR_FEAT       (2u + 1)
+#define CPUID_GUEST_NR_TOPO       (1u + 1)
+#define CPUID_GUEST_NR_XSTATE     (62u + 1)
+#define CPUID_GUEST_NR_EXTD_INTEL (0x8u + 1)
+#define CPUID_GUEST_NR_EXTD_AMD   (0x21u + 1)
+#define CPUID_GUEST_NR_EXTD       MAX(CPUID_GUEST_NR_EXTD_INTEL, \
+                                      CPUID_GUEST_NR_EXTD_AMD)
+
+/*
+ * Maximum number of leaves a struct cpu_policy turns into when serialised for
+ * interaction with the toolstack.  (Sum of all leaves in each union, less the
+ * entries in basic which sub-unions hang off of.)
+ */
+#define CPUID_MAX_SERIALISED_LEAVES                     \
+    (CPUID_GUEST_NR_BASIC +                             \
+     CPUID_GUEST_NR_FEAT   - !!CPUID_GUEST_NR_FEAT +    \
+     CPUID_GUEST_NR_CACHE  - !!CPUID_GUEST_NR_CACHE +   \
+     CPUID_GUEST_NR_TOPO   - !!CPUID_GUEST_NR_TOPO +    \
+     CPUID_GUEST_NR_XSTATE - !!CPUID_GUEST_NR_XSTATE +  \
+     CPUID_GUEST_NR_EXTD + 2 /* hv_limit and hv2_limit */ )
+
+struct cpu_policy
+{
+#define DECL_BITFIELD(word) _DECL_BITFIELD(FEATURESET_ ## word)
+#define _DECL_BITFIELD(x)   __DECL_BITFIELD(x)
+#define __DECL_BITFIELD(x)  CPUID_BITFIELD_ ## x
+
+    /* Basic leaves: 0x000000xx */
+    union {
+        struct cpuid_leaf raw[CPUID_GUEST_NR_BASIC];
+        struct {
+            /* Leaf 0x0 - Max and vendor. */
+            uint32_t max_leaf, vendor_ebx, vendor_ecx, vendor_edx;
+
+            /* Leaf 0x1 - Family/model/stepping and features. */
+            uint32_t raw_fms;
+            uint8_t :8,       /* Brand ID. */
+                clflush_size, /* Number of 8-byte blocks per cache line. */
+                lppp,         /* Logical processors per package. */
+                apic_id;      /* Initial APIC ID. */
+            union {
+                uint32_t _1c;
+                struct { DECL_BITFIELD(1c); };
+            };
+            union {
+                uint32_t _1d;
+                struct { DECL_BITFIELD(1d); };
+            };
+
+            /* Leaf 0x2 - TLB/Cache/Prefetch. */
+            uint8_t l2_nr_queries; /* Documented as fixed to 1. */
+            uint8_t l2_desc[15];
+
+            uint64_t :64, :64; /* Leaf 0x3 - PSN. */
+            uint64_t :64, :64; /* Leaf 0x4 - Structured Cache. */
+            uint64_t :64, :64; /* Leaf 0x5 - MONITOR. */
+            uint64_t :64, :64; /* Leaf 0x6 - Therm/Perf. */
+            uint64_t :64, :64; /* Leaf 0x7 - Structured Features. */
+            uint64_t :64, :64; /* Leaf 0x8 - rsvd */
+            uint64_t :64, :64; /* Leaf 0x9 - DCA */
+
+            /* Leaf 0xa - Intel PMU. */
+            uint8_t pmu_version, _pmu[15];
+
+            uint64_t :64, :64; /* Leaf 0xb - Topology. */
+            uint64_t :64, :64; /* Leaf 0xc - rsvd */
+            uint64_t :64, :64; /* Leaf 0xd - XSTATE. */
+        };
+    } basic;
+
+    /* Structured cache leaf: 0x00000004[xx] */
+    union {
+        struct cpuid_leaf raw[CPUID_GUEST_NR_CACHE];
+        struct cpuid_cache_leaf {
+            uint32_t /* a */ type:5, level:3;
+            bool self_init:1, fully_assoc:1;
+            uint32_t :4, threads_per_cache:12, cores_per_package:6;
+            uint32_t /* b */ line_size:12, partitions:10, ways:10;
+            uint32_t /* c */ sets;
+            bool /* d */ wbinvd:1, inclusive:1, complex:1;
+        } subleaf[CPUID_GUEST_NR_CACHE];
+    } cache;
+
+    /* Structured feature leaf: 0x00000007[xx] */
+    union {
+        struct cpuid_leaf raw[CPUID_GUEST_NR_FEAT];
+        struct {
+            /* Subleaf 0. */
+            uint32_t max_subleaf;
+            union {
+                uint32_t _7b0;
+                struct { DECL_BITFIELD(7b0); };
+            };
+            union {
+                uint32_t _7c0;
+                struct { DECL_BITFIELD(7c0); };
+            };
+            union {
+                uint32_t _7d0;
+                struct { DECL_BITFIELD(7d0); };
+            };
+
+            /* Subleaf 1. */
+            union {
+                uint32_t _7a1;
+                struct { DECL_BITFIELD(7a1); };
+            };
+            union {
+                uint32_t _7b1;
+                struct { DECL_BITFIELD(7b1); };
+            };
+            union {
+                uint32_t _7c1;
+                struct { DECL_BITFIELD(7c1); };
+            };
+            union {
+                uint32_t _7d1;
+                struct { DECL_BITFIELD(7d1); };
+            };
+
+            /* Subleaf 2. */
+            uint32_t /* a */:32, /* b */:32, /* c */:32;
+            union {
+                uint32_t _7d2;
+                struct { DECL_BITFIELD(7d2); };
+            };
+        };
+    } feat;
+
+    /* Extended topology enumeration: 0x0000000B[xx] */
+    union {
+        struct cpuid_leaf raw[CPUID_GUEST_NR_TOPO];
+        struct cpuid_topo_leaf {
+            uint32_t id_shift:5, :27;
+            uint16_t nr_logical, :16;
+            uint8_t level, type, :8, :8;
+            uint32_t x2apic_id;
+        } subleaf[CPUID_GUEST_NR_TOPO];
+    } topo;
+
+    /* Xstate feature leaf: 0x0000000D[xx] */
+    union {
+        struct cpuid_leaf raw[CPUID_GUEST_NR_XSTATE];
+
+        struct {
+            /* Subleaf 0. */
+            uint32_t xcr0_low, /* b */:32, max_size, xcr0_high;
+
+            /* Subleaf 1. */
+            union {
+                uint32_t Da1;
+                struct { DECL_BITFIELD(Da1); };
+            };
+            uint32_t /* b */:32, xss_low, xss_high;
+        };
+
+        /* Per-component common state.  Valid for i >= 2. */
+        struct {
+            uint32_t size, offset;
+            bool xss:1, align:1;
+            uint32_t _res_d;
+        } comp[CPUID_GUEST_NR_XSTATE];
+    } xstate;
+
+    /* Extended leaves: 0x800000xx */
+    union {
+        struct cpuid_leaf raw[CPUID_GUEST_NR_EXTD];
+        struct {
+            /* Leaf 0x80000000 - Max and vendor. */
+            uint32_t max_leaf, vendor_ebx, vendor_ecx, vendor_edx;
+
+            /* Leaf 0x80000001 - Family/model/stepping and features. */
+            uint32_t raw_fms, /* b */:32;
+            union {
+                uint32_t e1c;
+                struct { DECL_BITFIELD(e1c); };
+            };
+            union {
+                uint32_t e1d;
+                struct { DECL_BITFIELD(e1d); };
+            };
+
+            uint64_t :64, :64; /* Brand string. */
+            uint64_t :64, :64; /* Brand string. */
+            uint64_t :64, :64; /* Brand string. */
+            uint64_t :64, :64; /* L1 cache/TLB. */
+            uint64_t :64, :64; /* L2/3 cache/TLB. */
+
+            /* Leaf 0x80000007 - Advanced Power Management. */
+            uint32_t /* a */:32, /* b */:32, /* c */:32;
+            union {
+                uint32_t e7d;
+                struct { DECL_BITFIELD(e7d); };
+            };
+
+            /* Leaf 0x80000008 - Misc addr/feature info. */
+            uint8_t maxphysaddr, maxlinaddr, :8, :8;
+            union {
+                uint32_t e8b;
+                struct { DECL_BITFIELD(e8b); };
+            };
+            uint32_t nc:8, :4, apic_id_size:4, :16;
+            uint32_t /* d */:32;
+
+            uint64_t :64, :64; /* Leaf 0x80000009. */
+            uint64_t :64, :64; /* Leaf 0x8000000a - SVM rev and features. */
+            uint64_t :64, :64; /* Leaf 0x8000000b. */
+            uint64_t :64, :64; /* Leaf 0x8000000c. */
+            uint64_t :64, :64; /* Leaf 0x8000000d. */
+            uint64_t :64, :64; /* Leaf 0x8000000e. */
+            uint64_t :64, :64; /* Leaf 0x8000000f. */
+            uint64_t :64, :64; /* Leaf 0x80000010. */
+            uint64_t :64, :64; /* Leaf 0x80000011. */
+            uint64_t :64, :64; /* Leaf 0x80000012. */
+            uint64_t :64, :64; /* Leaf 0x80000013. */
+            uint64_t :64, :64; /* Leaf 0x80000014. */
+            uint64_t :64, :64; /* Leaf 0x80000015. */
+            uint64_t :64, :64; /* Leaf 0x80000016. */
+            uint64_t :64, :64; /* Leaf 0x80000017. */
+            uint64_t :64, :64; /* Leaf 0x80000018. */
+            uint64_t :64, :64; /* Leaf 0x80000019 - TLB 1GB Identifiers. */
+            uint64_t :64, :64; /* Leaf 0x8000001a - Performance related info. */
+            uint64_t :64, :64; /* Leaf 0x8000001b - IBS feature information. */
+            uint64_t :64, :64; /* Leaf 0x8000001c. */
+            uint64_t :64, :64; /* Leaf 0x8000001d - Cache properties. */
+            uint64_t :64, :64; /* Leaf 0x8000001e - Extd APIC/Core/Node IDs. */
+            uint64_t :64, :64; /* Leaf 0x8000001f - AMD Secure Encryption. */
+            uint64_t :64, :64; /* Leaf 0x80000020 - Platform QoS. */
+
+            /* Leaf 0x80000021 - Extended Feature 2 */
+            union {
+                uint32_t e21a;
+                struct { DECL_BITFIELD(e21a); };
+            };
+            uint32_t /* b */:32, /* c */:32, /* d */:32;
+        };
+    } extd;
+
+#undef __DECL_BITFIELD
+#undef _DECL_BITFIELD
+#undef DECL_BITFIELD
+
+    /* Toolstack selected Hypervisor max_leaf (if non-zero). */
+    uint8_t hv_limit, hv2_limit;
+
+    /* Value calculated from raw data above. */
+    uint8_t x86_vendor;
+};
+
+/* Temporary */
+#define cpuid_policy cpu_policy
+
 struct old_cpu_policy
 {
     struct cpuid_policy *cpuid;
@@ -19,6 +352,134 @@ struct cpu_policy_errors
 
 #define INIT_CPU_POLICY_ERRORS { -1, -1, -1 }
 
+/* Fill in a featureset bitmap from a CPUID policy. */
+static inline void cpuid_policy_to_featureset(
+    const struct cpuid_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
+{
+    fs[FEATURESET_1d]  = p->basic._1d;
+    fs[FEATURESET_1c]  = p->basic._1c;
+    fs[FEATURESET_e1d] = p->extd.e1d;
+    fs[FEATURESET_e1c] = p->extd.e1c;
+    fs[FEATURESET_Da1] = p->xstate.Da1;
+    fs[FEATURESET_7b0] = p->feat._7b0;
+    fs[FEATURESET_7c0] = p->feat._7c0;
+    fs[FEATURESET_e7d] = p->extd.e7d;
+    fs[FEATURESET_e8b] = p->extd.e8b;
+    fs[FEATURESET_7d0] = p->feat._7d0;
+    fs[FEATURESET_7a1] = p->feat._7a1;
+    fs[FEATURESET_e21a] = p->extd.e21a;
+    fs[FEATURESET_7b1] = p->feat._7b1;
+    fs[FEATURESET_7d2] = p->feat._7d2;
+    fs[FEATURESET_7c1] = p->feat._7c1;
+    fs[FEATURESET_7d1] = p->feat._7d1;
+}
+
+/* Fill in a CPUID policy from a featureset bitmap. */
+static inline void cpuid_featureset_to_policy(
+    const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpuid_policy *p)
+{
+    p->basic._1d  = fs[FEATURESET_1d];
+    p->basic._1c  = fs[FEATURESET_1c];
+    p->extd.e1d   = fs[FEATURESET_e1d];
+    p->extd.e1c   = fs[FEATURESET_e1c];
+    p->xstate.Da1 = fs[FEATURESET_Da1];
+    p->feat._7b0  = fs[FEATURESET_7b0];
+    p->feat._7c0  = fs[FEATURESET_7c0];
+    p->extd.e7d   = fs[FEATURESET_e7d];
+    p->extd.e8b   = fs[FEATURESET_e8b];
+    p->feat._7d0  = fs[FEATURESET_7d0];
+    p->feat._7a1  = fs[FEATURESET_7a1];
+    p->extd.e21a  = fs[FEATURESET_e21a];
+    p->feat._7b1  = fs[FEATURESET_7b1];
+    p->feat._7d2  = fs[FEATURESET_7d2];
+    p->feat._7c1  = fs[FEATURESET_7c1];
+    p->feat._7d1  = fs[FEATURESET_7d1];
+}
+
+static inline uint64_t cpuid_policy_xcr0_max(const struct cpuid_policy *p)
+{
+    return ((uint64_t)p->xstate.xcr0_high << 32) | p->xstate.xcr0_low;
+}
+
+static inline uint64_t cpuid_policy_xstates(const struct cpuid_policy *p)
+{
+    uint64_t val = p->xstate.xcr0_high | p->xstate.xss_high;
+
+    return (val << 32) | p->xstate.xcr0_low | p->xstate.xss_low;
+}
+
+const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature);
+
+/**
+ * Recalculate the content in a CPUID policy which is derived from raw data.
+ */
+void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p);
+
+/**
+ * Fill a CPUID policy using the native CPUID instruction.
+ *
+ * No sanitisation is performed, but synthesised values are calculated.
+ * Values may be influenced by a hypervisor or from masking/faulting
+ * configuration.
+ */
+void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
+
+/**
+ * Clear leaf data beyond the policy's max leaf/subleaf settings.
+ *
+ * Policy serialisation purposefully omits out-of-range leaves, because there
+ * are a large number of them due to vendor differences.  However, when
+ * constructing new policies (e.g. levelling down), it is possible to end up
+ * with out-of-range leaves with stale content in them.  This helper clears
+ * them.
+ */
+void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
+
+#ifdef __XEN__
+#include <public/arch-x86/xen.h>
+typedef XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_leaf_buffer_t;
+#else
+#include <xen/arch-x86/xen.h>
+typedef xen_cpuid_leaf_t cpuid_leaf_buffer_t[];
+#endif
+
+/**
+ * Serialise a cpuid_policy object into an array of cpuid leaves.
+ *
+ * @param policy     The cpuid_policy to serialise.
+ * @param leaves     The array of leaves to serialise into.
+ * @param nr_entries The number of entries in 'leaves'.
+ * @returns -errno
+ *
+ * Writes at most CPUID_MAX_SERIALISED_LEAVES.  May fail with -ENOBUFS if the
+ * leaves array is too short.  On success, nr_entries is updated with the
+ * actual number of leaves written.
+ */
+int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
+                             cpuid_leaf_buffer_t leaves, uint32_t *nr_entries);
+
+/**
+ * Unserialise a cpuid_policy object from an array of cpuid leaves.
+ *
+ * @param policy      The cpuid_policy to unserialise into.
+ * @param leaves      The array of leaves to unserialise from.
+ * @param nr_entries  The number of entries in 'leaves'.
+ * @param err_leaf    Optional hint for error diagnostics.
+ * @param err_subleaf Optional hint for error diagnostics.
+ * @returns -errno
+ *
+ * Reads at most CPUID_MAX_SERIALISED_LEAVES.  May return -ERANGE if an
+ * incoming leaf is out of range of cpuid_policy, in which case the optional
+ * err_* pointers will identify the out-of-range indices.
+ *
+ * No content validation of in-range leaves is performed.  Synthesised data is
+ * recalculated.
+ */
+int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
+                               const cpuid_leaf_buffer_t leaves,
+                               uint32_t nr_entries, uint32_t *err_leaf,
+                               uint32_t *err_subleaf);
+
 /*
  * Calculate whether two policies are compatible.
  *
diff --git a/xen/include/xen/lib/x86/cpuid.h b/xen/include/xen/lib/x86/cpuid.h
deleted file mode 100644
index fa98b371eef4..000000000000
--- a/xen/include/xen/lib/x86/cpuid.h
+++ /dev/null
@@ -1,475 +0,0 @@
-/* Common data structures and functions consumed by hypervisor and toolstack */
-#ifndef XEN_LIB_X86_CPUID_H
-#define XEN_LIB_X86_CPUID_H
-
-#include <xen/lib/x86/cpuid-autogen.h>
-
-#define FEATURESET_1d     0 /* 0x00000001.edx      */
-#define FEATURESET_1c     1 /* 0x00000001.ecx      */
-#define FEATURESET_e1d    2 /* 0x80000001.edx      */
-#define FEATURESET_e1c    3 /* 0x80000001.ecx      */
-#define FEATURESET_Da1    4 /* 0x0000000d:1.eax    */
-#define FEATURESET_7b0    5 /* 0x00000007:0.ebx    */
-#define FEATURESET_7c0    6 /* 0x00000007:0.ecx    */
-#define FEATURESET_e7d    7 /* 0x80000007.edx      */
-#define FEATURESET_e8b    8 /* 0x80000008.ebx      */
-#define FEATURESET_7d0    9 /* 0x00000007:0.edx    */
-#define FEATURESET_7a1   10 /* 0x00000007:1.eax    */
-#define FEATURESET_e21a  11 /* 0x80000021.eax      */
-#define FEATURESET_7b1   12 /* 0x00000007:1.ebx    */
-#define FEATURESET_7d2   13 /* 0x00000007:2.edx    */
-#define FEATURESET_7c1   14 /* 0x00000007:1.ecx    */
-#define FEATURESET_7d1   15 /* 0x00000007:1.edx    */
-
-struct cpuid_leaf
-{
-    uint32_t a, b, c, d;
-};
-
-/*
- * Versions of GCC before 5 unconditionally reserve %rBX as the PIC hard
- * register, and are unable to cope with spilling it.  This results in a
- * rather cryptic error:
- *    error: inconsistent operand constraints in an ‘asm’
- *
- * In affected situations, work around the issue by using a separate register
- * to hold the %rBX output, and xchg twice to leave %rBX preserved around
- * the asm() statement.
- */
-#if defined(__PIC__) && __GNUC__ < 5 && !defined(__clang__) && defined(__i386__)
-# define XCHG_BX "xchg %%ebx, %[bx];"
-# define BX_CON [bx] "=&r"
-#elif defined(__PIC__) && __GNUC__ < 5 && !defined(__clang__) && \
-    defined(__x86_64__) && (defined(__code_model_medium__) || \
-                            defined(__code_model_large__))
-# define XCHG_BX "xchg %%rbx, %q[bx];"
-# define BX_CON [bx] "=&r"
-#else
-# define XCHG_BX ""
-# define BX_CON "=&b"
-#endif
-
-static inline void cpuid_leaf(uint32_t leaf, struct cpuid_leaf *l)
-{
-    asm ( XCHG_BX
-          "cpuid;"
-          XCHG_BX
-          : "=a" (l->a), BX_CON (l->b), "=&c" (l->c), "=&d" (l->d)
-          : "a" (leaf) );
-}
-
-static inline void cpuid_count_leaf(
-    uint32_t leaf, uint32_t subleaf, struct cpuid_leaf *l)
-{
-    asm ( XCHG_BX
-          "cpuid;"
-          XCHG_BX
-          : "=a" (l->a), BX_CON (l->b), "=c" (l->c), "=&d" (l->d)
-          : "a" (leaf), "c" (subleaf) );
-}
-
-#undef BX_CON
-#undef XCHG_BX
-
-/**
- * Given the vendor id from CPUID leaf 0, look up Xen's internal integer
- * vendor ID.  Returns X86_VENDOR_UNKNOWN for any unknown vendor.
- */
-unsigned int x86_cpuid_lookup_vendor(uint32_t ebx, uint32_t ecx, uint32_t edx);
-
-/**
- * Given Xen's internal vendor ID, return a string suitable for printing.
- * Returns "Unknown" for any unrecognised ID.
- */
-const char *x86_cpuid_vendor_to_str(unsigned int vendor);
-
-#define CPUID_GUEST_NR_BASIC      (0xdu + 1)
-#define CPUID_GUEST_NR_CACHE      (5u + 1)
-#define CPUID_GUEST_NR_FEAT       (2u + 1)
-#define CPUID_GUEST_NR_TOPO       (1u + 1)
-#define CPUID_GUEST_NR_XSTATE     (62u + 1)
-#define CPUID_GUEST_NR_EXTD_INTEL (0x8u + 1)
-#define CPUID_GUEST_NR_EXTD_AMD   (0x21u + 1)
-#define CPUID_GUEST_NR_EXTD       MAX(CPUID_GUEST_NR_EXTD_INTEL, \
-                                      CPUID_GUEST_NR_EXTD_AMD)
-
-/*
- * Maximum number of leaves a struct cpuid_policy turns into when serialised
- * for interaction with the toolstack.  (Sum of all leaves in each union, less
- * the entries in basic which sub-unions hang off of.)
- */
-#define CPUID_MAX_SERIALISED_LEAVES                     \
-    (CPUID_GUEST_NR_BASIC +                             \
-     CPUID_GUEST_NR_FEAT   - !!CPUID_GUEST_NR_FEAT +    \
-     CPUID_GUEST_NR_CACHE  - !!CPUID_GUEST_NR_CACHE +   \
-     CPUID_GUEST_NR_TOPO   - !!CPUID_GUEST_NR_TOPO +    \
-     CPUID_GUEST_NR_XSTATE - !!CPUID_GUEST_NR_XSTATE +  \
-     CPUID_GUEST_NR_EXTD + 2 /* hv_limit and hv2_limit */ )
-
-struct cpuid_policy
-{
-#define DECL_BITFIELD(word) _DECL_BITFIELD(FEATURESET_ ## word)
-#define _DECL_BITFIELD(x)   __DECL_BITFIELD(x)
-#define __DECL_BITFIELD(x)  CPUID_BITFIELD_ ## x
-
-    /* Basic leaves: 0x000000xx */
-    union {
-        struct cpuid_leaf raw[CPUID_GUEST_NR_BASIC];
-        struct {
-            /* Leaf 0x0 - Max and vendor. */
-            uint32_t max_leaf, vendor_ebx, vendor_ecx, vendor_edx;
-
-            /* Leaf 0x1 - Family/model/stepping and features. */
-            uint32_t raw_fms;
-            uint8_t :8,       /* Brand ID. */
-                clflush_size, /* Number of 8-byte blocks per cache line. */
-                lppp,         /* Logical processors per package. */
-                apic_id;      /* Initial APIC ID. */
-            union {
-                uint32_t _1c;
-                struct { DECL_BITFIELD(1c); };
-            };
-            union {
-                uint32_t _1d;
-                struct { DECL_BITFIELD(1d); };
-            };
-
-            /* Leaf 0x2 - TLB/Cache/Prefetch. */
-            uint8_t l2_nr_queries; /* Documented as fixed to 1. */
-            uint8_t l2_desc[15];
-
-            uint64_t :64, :64; /* Leaf 0x3 - PSN. */
-            uint64_t :64, :64; /* Leaf 0x4 - Structured Cache. */
-            uint64_t :64, :64; /* Leaf 0x5 - MONITOR. */
-            uint64_t :64, :64; /* Leaf 0x6 - Therm/Perf. */
-            uint64_t :64, :64; /* Leaf 0x7 - Structured Features. */
-            uint64_t :64, :64; /* Leaf 0x8 - rsvd */
-            uint64_t :64, :64; /* Leaf 0x9 - DCA */
-
-            /* Leaf 0xa - Intel PMU. */
-            uint8_t pmu_version, _pmu[15];
-
-            uint64_t :64, :64; /* Leaf 0xb - Topology. */
-            uint64_t :64, :64; /* Leaf 0xc - rsvd */
-            uint64_t :64, :64; /* Leaf 0xd - XSTATE. */
-        };
-    } basic;
-
-    /* Structured cache leaf: 0x00000004[xx] */
-    union {
-        struct cpuid_leaf raw[CPUID_GUEST_NR_CACHE];
-        struct cpuid_cache_leaf {
-            uint32_t /* a */ type:5, level:3;
-            bool self_init:1, fully_assoc:1;
-            uint32_t :4, threads_per_cache:12, cores_per_package:6;
-            uint32_t /* b */ line_size:12, partitions:10, ways:10;
-            uint32_t /* c */ sets;
-            bool /* d */ wbinvd:1, inclusive:1, complex:1;
-        } subleaf[CPUID_GUEST_NR_CACHE];
-    } cache;
-
-    /* Structured feature leaf: 0x00000007[xx] */
-    union {
-        struct cpuid_leaf raw[CPUID_GUEST_NR_FEAT];
-        struct {
-            /* Subleaf 0. */
-            uint32_t max_subleaf;
-            union {
-                uint32_t _7b0;
-                struct { DECL_BITFIELD(7b0); };
-            };
-            union {
-                uint32_t _7c0;
-                struct { DECL_BITFIELD(7c0); };
-            };
-            union {
-                uint32_t _7d0;
-                struct { DECL_BITFIELD(7d0); };
-            };
-
-            /* Subleaf 1. */
-            union {
-                uint32_t _7a1;
-                struct { DECL_BITFIELD(7a1); };
-            };
-            union {
-                uint32_t _7b1;
-                struct { DECL_BITFIELD(7b1); };
-            };
-            union {
-                uint32_t _7c1;
-                struct { DECL_BITFIELD(7c1); };
-            };
-            union {
-                uint32_t _7d1;
-                struct { DECL_BITFIELD(7d1); };
-            };
-
-            /* Subleaf 2. */
-            uint32_t /* a */:32, /* b */:32, /* c */:32;
-            union {
-                uint32_t _7d2;
-                struct { DECL_BITFIELD(7d2); };
-            };
-        };
-    } feat;
-
-    /* Extended topology enumeration: 0x0000000B[xx] */
-    union {
-        struct cpuid_leaf raw[CPUID_GUEST_NR_TOPO];
-        struct cpuid_topo_leaf {
-            uint32_t id_shift:5, :27;
-            uint16_t nr_logical, :16;
-            uint8_t level, type, :8, :8;
-            uint32_t x2apic_id;
-        } subleaf[CPUID_GUEST_NR_TOPO];
-    } topo;
-
-    /* Xstate feature leaf: 0x0000000D[xx] */
-    union {
-        struct cpuid_leaf raw[CPUID_GUEST_NR_XSTATE];
-
-        struct {
-            /* Subleaf 0. */
-            uint32_t xcr0_low, /* b */:32, max_size, xcr0_high;
-
-            /* Subleaf 1. */
-            union {
-                uint32_t Da1;
-                struct { DECL_BITFIELD(Da1); };
-            };
-            uint32_t /* b */:32, xss_low, xss_high;
-        };
-
-        /* Per-component common state.  Valid for i >= 2. */
-        struct {
-            uint32_t size, offset;
-            bool xss:1, align:1;
-            uint32_t _res_d;
-        } comp[CPUID_GUEST_NR_XSTATE];
-    } xstate;
-
-    /* Extended leaves: 0x800000xx */
-    union {
-        struct cpuid_leaf raw[CPUID_GUEST_NR_EXTD];
-        struct {
-            /* Leaf 0x80000000 - Max and vendor. */
-            uint32_t max_leaf, vendor_ebx, vendor_ecx, vendor_edx;
-
-            /* Leaf 0x80000001 - Family/model/stepping and features. */
-            uint32_t raw_fms, /* b */:32;
-            union {
-                uint32_t e1c;
-                struct { DECL_BITFIELD(e1c); };
-            };
-            union {
-                uint32_t e1d;
-                struct { DECL_BITFIELD(e1d); };
-            };
-
-            uint64_t :64, :64; /* Brand string. */
-            uint64_t :64, :64; /* Brand string. */
-            uint64_t :64, :64; /* Brand string. */
-            uint64_t :64, :64; /* L1 cache/TLB. */
-            uint64_t :64, :64; /* L2/3 cache/TLB. */
-
-            /* Leaf 0x80000007 - Advanced Power Management. */
-            uint32_t /* a */:32, /* b */:32, /* c */:32;
-            union {
-                uint32_t e7d;
-                struct { DECL_BITFIELD(e7d); };
-            };
-
-            /* Leaf 0x80000008 - Misc addr/feature info. */
-            uint8_t maxphysaddr, maxlinaddr, :8, :8;
-            union {
-                uint32_t e8b;
-                struct { DECL_BITFIELD(e8b); };
-            };
-            uint32_t nc:8, :4, apic_id_size:4, :16;
-            uint32_t /* d */:32;
-
-            uint64_t :64, :64; /* Leaf 0x80000009. */
-            uint64_t :64, :64; /* Leaf 0x8000000a - SVM rev and features. */
-            uint64_t :64, :64; /* Leaf 0x8000000b. */
-            uint64_t :64, :64; /* Leaf 0x8000000c. */
-            uint64_t :64, :64; /* Leaf 0x8000000d. */
-            uint64_t :64, :64; /* Leaf 0x8000000e. */
-            uint64_t :64, :64; /* Leaf 0x8000000f. */
-            uint64_t :64, :64; /* Leaf 0x80000010. */
-            uint64_t :64, :64; /* Leaf 0x80000011. */
-            uint64_t :64, :64; /* Leaf 0x80000012. */
-            uint64_t :64, :64; /* Leaf 0x80000013. */
-            uint64_t :64, :64; /* Leaf 0x80000014. */
-            uint64_t :64, :64; /* Leaf 0x80000015. */
-            uint64_t :64, :64; /* Leaf 0x80000016. */
-            uint64_t :64, :64; /* Leaf 0x80000017. */
-            uint64_t :64, :64; /* Leaf 0x80000018. */
-            uint64_t :64, :64; /* Leaf 0x80000019 - TLB 1GB Identifiers. */
-            uint64_t :64, :64; /* Leaf 0x8000001a - Performance related info. */
-            uint64_t :64, :64; /* Leaf 0x8000001b - IBS feature information. */
-            uint64_t :64, :64; /* Leaf 0x8000001c. */
-            uint64_t :64, :64; /* Leaf 0x8000001d - Cache properties. */
-            uint64_t :64, :64; /* Leaf 0x8000001e - Extd APIC/Core/Node IDs. */
-            uint64_t :64, :64; /* Leaf 0x8000001f - AMD Secure Encryption. */
-            uint64_t :64, :64; /* Leaf 0x80000020 - Platform QoS. */
-
-            /* Leaf 0x80000021 - Extended Feature 2 */
-            union {
-                uint32_t e21a;
-                struct { DECL_BITFIELD(e21a); };
-            };
-            uint32_t /* b */:32, /* c */:32, /* d */:32;
-        };
-    } extd;
-
-#undef __DECL_BITFIELD
-#undef _DECL_BITFIELD
-#undef DECL_BITFIELD
-
-    /* Toolstack selected Hypervisor max_leaf (if non-zero). */
-    uint8_t hv_limit, hv2_limit;
-
-    /* Value calculated from raw data above. */
-    uint8_t x86_vendor;
-};
-
-/* Fill in a featureset bitmap from a CPUID policy. */
-static inline void cpuid_policy_to_featureset(
-    const struct cpuid_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
-{
-    fs[FEATURESET_1d]  = p->basic._1d;
-    fs[FEATURESET_1c]  = p->basic._1c;
-    fs[FEATURESET_e1d] = p->extd.e1d;
-    fs[FEATURESET_e1c] = p->extd.e1c;
-    fs[FEATURESET_Da1] = p->xstate.Da1;
-    fs[FEATURESET_7b0] = p->feat._7b0;
-    fs[FEATURESET_7c0] = p->feat._7c0;
-    fs[FEATURESET_e7d] = p->extd.e7d;
-    fs[FEATURESET_e8b] = p->extd.e8b;
-    fs[FEATURESET_7d0] = p->feat._7d0;
-    fs[FEATURESET_7a1] = p->feat._7a1;
-    fs[FEATURESET_e21a] = p->extd.e21a;
-    fs[FEATURESET_7b1] = p->feat._7b1;
-    fs[FEATURESET_7d2] = p->feat._7d2;
-    fs[FEATURESET_7c1] = p->feat._7c1;
-    fs[FEATURESET_7d1] = p->feat._7d1;
-}
-
-/* Fill in a CPUID policy from a featureset bitmap. */
-static inline void cpuid_featureset_to_policy(
-    const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpuid_policy *p)
-{
-    p->basic._1d  = fs[FEATURESET_1d];
-    p->basic._1c  = fs[FEATURESET_1c];
-    p->extd.e1d   = fs[FEATURESET_e1d];
-    p->extd.e1c   = fs[FEATURESET_e1c];
-    p->xstate.Da1 = fs[FEATURESET_Da1];
-    p->feat._7b0  = fs[FEATURESET_7b0];
-    p->feat._7c0  = fs[FEATURESET_7c0];
-    p->extd.e7d   = fs[FEATURESET_e7d];
-    p->extd.e8b   = fs[FEATURESET_e8b];
-    p->feat._7d0  = fs[FEATURESET_7d0];
-    p->feat._7a1  = fs[FEATURESET_7a1];
-    p->extd.e21a  = fs[FEATURESET_e21a];
-    p->feat._7b1  = fs[FEATURESET_7b1];
-    p->feat._7d2  = fs[FEATURESET_7d2];
-    p->feat._7c1  = fs[FEATURESET_7c1];
-    p->feat._7d1  = fs[FEATURESET_7d1];
-}
-
-static inline uint64_t cpuid_policy_xcr0_max(const struct cpuid_policy *p)
-{
-    return ((uint64_t)p->xstate.xcr0_high << 32) | p->xstate.xcr0_low;
-}
-
-static inline uint64_t cpuid_policy_xstates(const struct cpuid_policy *p)
-{
-    uint64_t val = p->xstate.xcr0_high | p->xstate.xss_high;
-
-    return (val << 32) | p->xstate.xcr0_low | p->xstate.xss_low;
-}
-
-const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature);
-
-/**
- * Recalculate the content in a CPUID policy which is derived from raw data.
- */
-void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p);
-
-/**
- * Fill a CPUID policy using the native CPUID instruction.
- *
- * No sanitisation is performed, but synthesised values are calculated.
- * Values may be influenced by a hypervisor or from masking/faulting
- * configuration.
- */
-void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
-
-/**
- * Clear leaf data beyond the policy's max leaf/subleaf settings.
- *
- * Policy serialisation purposefully omits out-of-range leaves, because there
- * are a large number of them due to vendor differences.  However, when
- * constructing new policies (e.g. levelling down), it is possible to end up
- * with out-of-range leaves with stale content in them.  This helper clears
- * them.
- */
-void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
-
-#ifdef __XEN__
-#include <public/arch-x86/xen.h>
-typedef XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) cpuid_leaf_buffer_t;
-#else
-#include <xen/arch-x86/xen.h>
-typedef xen_cpuid_leaf_t cpuid_leaf_buffer_t[];
-#endif
-
-/**
- * Serialise a cpuid_policy object into an array of cpuid leaves.
- *
- * @param policy     The cpuid_policy to serialise.
- * @param leaves     The array of leaves to serialise into.
- * @param nr_entries The number of entries in 'leaves'.
- * @returns -errno
- *
- * Writes at most CPUID_MAX_SERIALISED_LEAVES.  May fail with -ENOBUFS if the
- * leaves array is too short.  On success, nr_entries is updated with the
- * actual number of leaves written.
- */
-int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
-                             cpuid_leaf_buffer_t leaves, uint32_t *nr_entries);
-
-/**
- * Unserialise a cpuid_policy object from an array of cpuid leaves.
- *
- * @param policy      The cpuid_policy to unserialise into.
- * @param leaves      The array of leaves to unserialise from.
- * @param nr_entries  The number of entries in 'leaves'.
- * @param err_leaf    Optional hint for error diagnostics.
- * @param err_subleaf Optional hint for error diagnostics.
- * @returns -errno
- *
- * Reads at most CPUID_MAX_SERIALISED_LEAVES.  May return -ERANGE if an
- * incoming leaf is out of range of cpuid_policy, in which case the optional
- * err_* pointers will identify the out-of-range indices.
- *
- * No content validation of in-range leaves is performed.  Synthesised data is
- * recalculated.
- */
-int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
-                               const cpuid_leaf_buffer_t leaves,
-                               uint32_t nr_entries, uint32_t *err_leaf,
-                               uint32_t *err_subleaf);
-
-#endif /* !XEN_LIB_X86_CPUID_H */
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * tab-width: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 8eb88314f53c..e81f76c779c0 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -1,6 +1,6 @@
 #include "private.h"
 
-#include <xen/lib/x86/cpuid.h>
+#include <xen/lib/x86/cpu-policy.h>
 
 static void zero_leaves(struct cpuid_leaf *l,
                         unsigned int first, unsigned int last)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:53:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:53:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e112011-d2ce-11ed-85db-49a42c6b2330
Message-ID: <d5c612131f3e47cb08c090bae00b8b0dba58f11e.camel@infradead.org>
Subject: Re: [PATCH 08/13] hw/xen: do not use
 aio_set_fd_handler(is_external=true) in xen_xenstore
From: David Woodhouse <dwmw2@infradead.org>
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, Julia Suvorova <jusual@redhat.com>,
  Kevin Wolf <kwolf@redhat.com>, Peter Lieven <pl@kamp.de>, Coiby Xu
 <Coiby.Xu@gmail.com>,  xen-devel@lists.xenproject.org, Richard Henderson
 <richard.henderson@linaro.org>,  Stefano Garzarella <sgarzare@redhat.com>,
 qemu-block@nongnu.org, Eduardo Habkost <eduardo@habkost.net>, Philippe
 =?ISO-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>, Paul Durrant
 <paul@xen.org>, "Richard W.M. Jones" <rjones@redhat.com>, "Dr. David Alan
 Gilbert" <dgilbert@redhat.com>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, Aarushi Mehta <mehta.aaru20@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Fam Zheng <fam@euphon.net>,
 Stefan Weil <sw@weilnetz.de>,  Juan Quintela <quintela@redhat.com>, Xie
 Yongji <xieyongji@bytedance.com>, Hanna Reitz <hreitz@redhat.com>,  Ronnie
 Sahlberg <ronniesahlberg@gmail.com>, eesposit@redhat.com, "Michael S.
 Tsirkin" <mst@redhat.com>, "Daniel P." =?ISO-8859-1?Q?Berrang=E9?=
 <berrange@redhat.com>, Anthony Perard <anthony.perard@citrix.com>
Date: Tue, 04 Apr 2023 10:52:32 +0100
In-Reply-To: <20230403183004.347205-9-stefanha@redhat.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
	 <20230403183004.347205-9-stefanha@redhat.com>
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html



On Mon, 2023-04-03 at 14:29 -0400, Stefan Hajnoczi wrote:
> There is no need to suspend activity between aio_disable_external() and
> aio_enable_external(), which is mainly used for the block layer's drain
> operation.
> 
> This is part of ongoing work to remove the aio_disable_external() API.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>

Thanks.

> ---
>  hw/i386/kvm/xen_xenstore.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
> index 900679af8a..6e81bc8791 100644
> --- a/hw/i386/kvm/xen_xenstore.c
> +++ b/hw/i386/kvm/xen_xenstore.c
> @@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
>          error_setg(errp, "Xenstore evtchn port init failed");
>          return;
>      }
> -    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), true,
> +    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
>                        xen_xenstore_event, NULL, NULL, NULL, s);
> 
>      s->impl = xs_impl_create(xen_domid);




From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:59:13 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, Roger Pau Monné
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 11/15] x86/boot: Merge CPUID policy initialisation logic into cpu-policy.c
Date: Tue, 4 Apr 2023 10:52:18 +0100
Message-ID: <20230404095222.1373721-12-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Switch to the newer cpu_policy nomenclature.  Do some easy cleanup of
includes.

No practical change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * New
---
 xen/arch/x86/cpu-policy.c             | 752 ++++++++++++++++++++++++
 xen/arch/x86/cpuid.c                  | 817 +-------------------------
 xen/arch/x86/hvm/hvm.c                |   1 -
 xen/arch/x86/include/asm/cpu-policy.h |   6 +
 xen/arch/x86/include/asm/cpuid.h      |  11 +-
 xen/arch/x86/pv/domain.c              |   1 +
 xen/arch/x86/setup.c                  |   2 -
 7 files changed, 764 insertions(+), 826 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index f6a2317ed7bd..83186e940ca7 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -1,13 +1,19 @@
 /* SPDX-License-Identifier: GPL-2.0-or-later */
 #include <xen/cache.h>
 #include <xen/kernel.h>
+#include <xen/param.h>
 #include <xen/sched.h>
 
 #include <xen/lib/x86/cpu-policy.h>
 
+#include <asm/amd.h>
 #include <asm/cpu-policy.h>
+#include <asm/hvm/nestedhvm.h>
+#include <asm/hvm/svm/svm.h>
 #include <asm/msr-index.h>
+#include <asm/paging.h>
 #include <asm/setup.h>
+#include <asm/xstate.h>
 
 struct cpu_policy __ro_after_init     raw_cpu_policy;
 struct cpu_policy __ro_after_init    host_cpu_policy;
@@ -20,10 +26,332 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
 struct cpu_policy __ro_after_init hvm_def_cpu_policy;
 #endif
 
+const uint32_t known_features[] = INIT_KNOWN_FEATURES;
+
+static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
+static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
+static const uint32_t __initconst hvm_hap_max_featuremask[] =
+    INIT_HVM_HAP_MAX_FEATURES;
+static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
+static const uint32_t __initconst hvm_shadow_def_featuremask[] =
+    INIT_HVM_SHADOW_DEF_FEATURES;
+static const uint32_t __initconst hvm_hap_def_featuremask[] =
+    INIT_HVM_HAP_DEF_FEATURES;
+static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
+
+static const struct feature_name {
+    const char *name;
+    unsigned int bit;
+} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
+
+/*
+ * Parse a list of cpuid feature names -> bool, calling the callback for any
+ * matches found.
+ *
+ * always_inline, because this is init code only and we really don't want a
+ * function pointer call in the middle of the loop.
+ */
+static int __init always_inline parse_cpuid(
+    const char *s, void (*callback)(unsigned int feat, bool val))
+{
+    const char *ss;
+    int val, rc = 0;
+
+    do {
+        const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
+        const char *feat;
+
+        ss = strchr(s, ',');
+        if ( !ss )
+            ss = strchr(s, '\0');
+
+        /* Skip the 'no-' prefix for name comparisons. */
+        feat = s;
+        if ( strncmp(s, "no-", 3) == 0 )
+            feat += 3;
+
+        /* (Re)initialise lhs and rhs for binary search. */
+        lhs = feature_names;
+        rhs = feature_names + ARRAY_SIZE(feature_names);
+
+        while ( lhs < rhs )
+        {
+            int res;
+
+            mid = lhs + (rhs - lhs) / 2;
+            res = cmdline_strcmp(feat, mid->name);
+
+            if ( res < 0 )
+            {
+                rhs = mid;
+                continue;
+            }
+            if ( res > 0 )
+            {
+                lhs = mid + 1;
+                continue;
+            }
+
+            if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
+            {
+                callback(mid->bit, val);
+                mid = NULL;
+            }
+
+            break;
+        }
+
+        /*
+         * Mid being NULL means that the name and boolean were successfully
+         * identified.  Everything else is an error.
+         */
+        if ( mid )
+            rc = -EINVAL;
+
+        s = ss + 1;
+    } while ( *ss );
+
+    return rc;
+}
+
+static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
+{
+    if ( !val )
+        setup_clear_cpu_cap(feat);
+    else if ( feat == X86_FEATURE_RDRAND &&
+              (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
+        setup_force_cpu_cap(X86_FEATURE_RDRAND);
+}
+
+static int __init cf_check parse_xen_cpuid(const char *s)
+{
+    return parse_cpuid(s, _parse_xen_cpuid);
+}
+custom_param("cpuid", parse_xen_cpuid);
+
+static bool __initdata dom0_cpuid_cmdline;
+static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
+static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
+
+static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
+{
+    __set_bit  (feat, val ? dom0_enable_feat  : dom0_disable_feat);
+    __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
+}
+
+static int __init cf_check parse_dom0_cpuid(const char *s)
+{
+    dom0_cpuid_cmdline = true;
+
+    return parse_cpuid(s, _parse_dom0_cpuid);
+}
+custom_param("dom0-cpuid", parse_dom0_cpuid);
+
+#define EMPTY_LEAF ((struct cpuid_leaf){})
+static void zero_leaves(struct cpuid_leaf *l,
+                        unsigned int first, unsigned int last)
+{
+    memset(&l[first], 0, sizeof(*l) * (last - first + 1));
+}
+
+static void sanitise_featureset(uint32_t *fs)
+{
+    /* for_each_set_bit() uses unsigned longs.  Extend with zeroes. */
+    uint32_t disabled_features[
+        ROUNDUP(FSCAPINTS, sizeof(unsigned long)/sizeof(uint32_t))] = {};
+    unsigned int i;
+
+    for ( i = 0; i < FSCAPINTS; ++i )
+    {
+        /* Clamp to known mask. */
+        fs[i] &= known_features[i];
+
+        /*
+         * Identify which features with deep dependencies have been
+         * disabled.
+         */
+        disabled_features[i] = ~fs[i] & deep_features[i];
+    }
+
+    for_each_set_bit(i, (void *)disabled_features,
+                     sizeof(disabled_features) * 8)
+    {
+        const uint32_t *dfs = x86_cpuid_lookup_deep_deps(i);
+        unsigned int j;
+
+        ASSERT(dfs); /* deep_features[] should guarantee this. */
+
+        for ( j = 0; j < FSCAPINTS; ++j )
+        {
+            fs[j] &= ~dfs[j];
+            disabled_features[j] &= ~dfs[j];
+        }
+    }
+}
+
+static void recalculate_xstate(struct cpu_policy *p)
+{
+    uint64_t xstates = XSTATE_FP_SSE;
+    uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
+    unsigned int i, Da1 = p->xstate.Da1;
+
+    /*
+     * The Da1 leaf is the only piece of information preserved in the common
+     * case.  Everything else is derived from other feature state.
+     */
+    memset(&p->xstate, 0, sizeof(p->xstate));
+
+    if ( !p->basic.xsave )
+        return;
+
+    if ( p->basic.avx )
+    {
+        xstates |= X86_XCR0_YMM;
+        xstate_size = max(xstate_size,
+                          xstate_offsets[X86_XCR0_YMM_POS] +
+                          xstate_sizes[X86_XCR0_YMM_POS]);
+    }
+
+    if ( p->feat.mpx )
+    {
+        xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
+        xstate_size = max(xstate_size,
+                          xstate_offsets[X86_XCR0_BNDCSR_POS] +
+                          xstate_sizes[X86_XCR0_BNDCSR_POS]);
+    }
+
+    if ( p->feat.avx512f )
+    {
+        xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
+        xstate_size = max(xstate_size,
+                          xstate_offsets[X86_XCR0_HI_ZMM_POS] +
+                          xstate_sizes[X86_XCR0_HI_ZMM_POS]);
+    }
+
+    if ( p->feat.pku )
+    {
+        xstates |= X86_XCR0_PKRU;
+        xstate_size = max(xstate_size,
+                          xstate_offsets[X86_XCR0_PKRU_POS] +
+                          xstate_sizes[X86_XCR0_PKRU_POS]);
+    }
+
+    p->xstate.max_size  =  xstate_size;
+    p->xstate.xcr0_low  =  xstates & ~XSTATE_XSAVES_ONLY;
+    p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
+
+    p->xstate.Da1 = Da1;
+    if ( p->xstate.xsaves )
+    {
+        p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
+        p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
+    }
+    else
+        xstates &= ~XSTATE_XSAVES_ONLY;
+
+    for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
+    {
+        uint64_t curr_xstate = 1ul << i;
+
+        if ( !(xstates & curr_xstate) )
+            continue;
+
+        p->xstate.comp[i].size   = xstate_sizes[i];
+        p->xstate.comp[i].offset = xstate_offsets[i];
+        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
+        p->xstate.comp[i].align  = curr_xstate & xstate_align;
+    }
+}
+
+/*
+ * Misc adjustments to the policy.  Mostly clobbering reserved fields and
+ * duplicating shared fields.  Intentionally hidden fields are annotated.
+ */
+static void recalculate_misc(struct cpu_policy *p)
+{
+    p->basic.raw_fms &= 0x0fff0fff; /* Clobber Processor Type on Intel. */
+    p->basic.apic_id = 0; /* Dynamic. */
+
+    p->basic.raw[0x5] = EMPTY_LEAF; /* MONITOR not exposed to guests. */
+    p->basic.raw[0x6] = EMPTY_LEAF; /* Therm/Power not exposed to guests. */
+
+    p->basic.raw[0x8] = EMPTY_LEAF;
+
+    /* TODO: Rework topology logic. */
+    memset(p->topo.raw, 0, sizeof(p->topo.raw));
+
+    p->basic.raw[0xc] = EMPTY_LEAF;
+
+    p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
+
+    /* Most of Power/RAS hidden from guests. */
+    p->extd.raw[0x7].a = p->extd.raw[0x7].b = p->extd.raw[0x7].c = 0;
+
+    p->extd.raw[0x8].d = 0;
+
+    switch ( p->x86_vendor )
+    {
+    case X86_VENDOR_INTEL:
+        p->basic.l2_nr_queries = 1; /* Fixed to 1 query. */
+        p->basic.raw[0x3] = EMPTY_LEAF; /* PSN - always hidden. */
+        p->basic.raw[0x9] = EMPTY_LEAF; /* DCA - always hidden. */
+
+        p->extd.vendor_ebx = 0;
+        p->extd.vendor_ecx = 0;
+        p->extd.vendor_edx = 0;
+
+        p->extd.raw[0x1].a = p->extd.raw[0x1].b = 0;
+
+        p->extd.raw[0x5] = EMPTY_LEAF;
+        p->extd.raw[0x6].a = p->extd.raw[0x6].b = p->extd.raw[0x6].d = 0;
+
+        p->extd.raw[0x8].a &= 0x0000ffff;
+        p->extd.raw[0x8].c = 0;
+        break;
+
+    case X86_VENDOR_AMD:
+    case X86_VENDOR_HYGON:
+        zero_leaves(p->basic.raw, 0x2, 0x3);
+        memset(p->cache.raw, 0, sizeof(p->cache.raw));
+        zero_leaves(p->basic.raw, 0x9, 0xa);
+
+        p->extd.vendor_ebx = p->basic.vendor_ebx;
+        p->extd.vendor_ecx = p->basic.vendor_ecx;
+        p->extd.vendor_edx = p->basic.vendor_edx;
+
+        p->extd.raw_fms = p->basic.raw_fms;
+        p->extd.raw[0x1].b &= 0xff00ffff;
+        p->extd.e1d |= p->basic._1d & CPUID_COMMON_1D_FEATURES;
+
+        p->extd.raw[0x8].a &= 0x0000ffff; /* GuestMaxPhysAddr hidden. */
+        p->extd.raw[0x8].c &= 0x0003f0ff;
+
+        p->extd.raw[0x9] = EMPTY_LEAF;
+
+        zero_leaves(p->extd.raw, 0xb, 0x18);
+
+        /* 0x19 - TLB details.  Pass through. */
+        /* 0x1a - Perf hints.   Pass through. */
+
+        p->extd.raw[0x1b] = EMPTY_LEAF; /* IBS - not supported. */
+        p->extd.raw[0x1c] = EMPTY_LEAF; /* LWP - not supported. */
+        p->extd.raw[0x1d] = EMPTY_LEAF; /* TopoExt Cache */
+        p->extd.raw[0x1e] = EMPTY_LEAF; /* TopoExt APIC ID/Core/Node */
+        p->extd.raw[0x1f] = EMPTY_LEAF; /* SEV */
+        p->extd.raw[0x20] = EMPTY_LEAF; /* Platform QoS */
+        break;
+    }
+}
+
 static void __init calculate_raw_policy(void)
 {
     struct cpu_policy *p = &raw_cpu_policy;
 
+    x86_cpuid_policy_fill_native(p);
+
+    /* Nothing good will come from Xen and libx86 disagreeing on vendor. */
+    ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
+
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     /* Was already added by probe_cpuid_faulting() */
 
@@ -34,9 +362,50 @@ static void __init calculate_raw_policy(void)
 static void __init calculate_host_policy(void)
 {
     struct cpu_policy *p = &host_cpu_policy;
+    unsigned int max_extd_leaf;
 
     *p = raw_cpu_policy;
 
+    p->basic.max_leaf =
+        min_t(uint32_t, p->basic.max_leaf,   ARRAY_SIZE(p->basic.raw) - 1);
+    p->feat.max_subleaf =
+        min_t(uint32_t, p->feat.max_subleaf, ARRAY_SIZE(p->feat.raw) - 1);
+
+    max_extd_leaf = p->extd.max_leaf;
+
+    /*
+     * For AMD/Hygon hardware before Zen3, we unilaterally modify LFENCE to be
+     * dispatch serialising for Spectre mitigations.  Extend max_extd_leaf
+     * beyond what hardware supports, to include the feature leaf containing
+     * this information.
+     */
+    if ( cpu_has_lfence_dispatch )
+        max_extd_leaf = max(max_extd_leaf, 0x80000021);
+
+    p->extd.max_leaf = 0x80000000 | min_t(uint32_t, max_extd_leaf & 0xffff,
+                                          ARRAY_SIZE(p->extd.raw) - 1);
+
+    x86_cpu_featureset_to_policy(boot_cpu_data.x86_capability, p);
+    recalculate_xstate(p);
+    recalculate_misc(p);
+
+    /* When vPMU is disabled, drop it from the host policy. */
+    if ( vpmu_mode == XENPMU_MODE_OFF )
+        p->basic.raw[0xa] = EMPTY_LEAF;
+
+    if ( p->extd.svm )
+    {
+        /* Clamp to implemented features which require hardware support. */
+        p->extd.raw[0xa].d &= ((1u << SVM_FEATURE_NPT) |
+                               (1u << SVM_FEATURE_LBRV) |
+                               (1u << SVM_FEATURE_NRIPS) |
+                               (1u << SVM_FEATURE_PAUSEFILTER) |
+                               (1u << SVM_FEATURE_DECODEASSISTS));
+        /* Enable features which are always emulated. */
+        p->extd.raw[0xa].d |= ((1u << SVM_FEATURE_VMCBCLEAN) |
+                               (1u << SVM_FEATURE_TSCRATEMSR));
+    }
+
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
     p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
@@ -51,11 +420,88 @@ static void __init calculate_host_policy(void)
          ARCH_CAPS_PBRSB_NO);
 }
 
+static void __init guest_common_default_feature_adjustments(uint32_t *fs)
+{
+    /*
+     * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
+     * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
+     * compensate.
+     *
+     * Mitigate by hiding RDRAND from guests by default, unless explicitly
+     * overridden on the Xen command line (cpuid=rdrand).  Irrespective of the
+     * default setting, guests can use RDRAND if explicitly enabled
+     * (cpuid="host,rdrand=1") in the VM's config file, and VMs which were
+     * previously using RDRAND can migrate in.
+     */
+    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+         boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x3a &&
+         cpu_has_rdrand && !is_forced_cpu_cap(X86_FEATURE_RDRAND) )
+        __clear_bit(X86_FEATURE_RDRAND, fs);
+
+    /*
+     * On certain hardware, speculative or errata workarounds can result in
+     * TSX being placed in "force-abort" mode, where it doesn't actually
+     * function as expected, but is technically compatible with the ISA.
+     *
+     * Do not advertise RTM to guests by default if it won't actually work.
+     */
+    if ( rtm_disabled )
+        __clear_bit(X86_FEATURE_RTM, fs);
+}
+
+static void __init guest_common_feature_adjustments(uint32_t *fs)
+{
+    /* Unconditionally claim to be able to set the hypervisor bit. */
+    __set_bit(X86_FEATURE_HYPERVISOR, fs);
+
+    /*
+     * If IBRS is offered to the guest, unconditionally offer STIBP.  It is a
+     * nop on non-HT hardware, and has this behaviour to make heterogeneous
+     * setups easier to manage.
+     */
+    if ( test_bit(X86_FEATURE_IBRSB, fs) )
+        __set_bit(X86_FEATURE_STIBP, fs);
+    if ( test_bit(X86_FEATURE_IBRS, fs) )
+        __set_bit(X86_FEATURE_AMD_STIBP, fs);
+
+    /*
+     * On hardware which supports IBRS/IBPB, we can offer IBPB independently
+     * of IBRS by using the AMD feature bit.  An administrator may wish for
+     * performance reasons to offer IBPB without IBRS.
+     */
+    if ( host_cpu_policy.feat.ibrsb )
+        __set_bit(X86_FEATURE_IBPB, fs);
+}
+
 static void __init calculate_pv_max_policy(void)
 {
     struct cpu_policy *p = &pv_max_cpu_policy;
+    uint32_t fs[FSCAPINTS];
+    unsigned int i;
 
     *p = host_cpu_policy;
+    x86_cpu_policy_to_featureset(p, fs);
+
+    for ( i = 0; i < ARRAY_SIZE(fs); ++i )
+        fs[i] &= pv_max_featuremask[i];
+
+    /*
+     * If Xen isn't virtualising MSR_SPEC_CTRL for PV guests (functional
+     * availability, or admin choice), hide the feature.
+     */
+    if ( !boot_cpu_has(X86_FEATURE_SC_MSR_PV) )
+    {
+        __clear_bit(X86_FEATURE_IBRSB, fs);
+        __clear_bit(X86_FEATURE_IBRS, fs);
+    }
+
+    guest_common_feature_adjustments(fs);
+
+    sanitise_featureset(fs);
+    x86_cpu_featureset_to_policy(fs, p);
+    recalculate_xstate(p);
+
+    p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
 
     p->arch_caps.raw = 0; /* Not supported yet. */
 }
@@ -63,15 +509,112 @@ static void __init calculate_pv_max_policy(void)
 static void __init calculate_pv_def_policy(void)
 {
     struct cpu_policy *p = &pv_def_cpu_policy;
+    uint32_t fs[FSCAPINTS];
+    unsigned int i;
 
     *p = pv_max_cpu_policy;
+    x86_cpu_policy_to_featureset(p, fs);
+
+    for ( i = 0; i < ARRAY_SIZE(fs); ++i )
+        fs[i] &= pv_def_featuremask[i];
+
+    guest_common_feature_adjustments(fs);
+    guest_common_default_feature_adjustments(fs);
+
+    sanitise_featureset(fs);
+    x86_cpu_featureset_to_policy(fs, p);
+    recalculate_xstate(p);
 }
 
 static void __init calculate_hvm_max_policy(void)
 {
     struct cpu_policy *p = &hvm_max_cpu_policy;
+    uint32_t fs[FSCAPINTS];
+    unsigned int i;
+    const uint32_t *mask;
 
     *p = host_cpu_policy;
+    x86_cpu_policy_to_featureset(p, fs);
+
+    mask = hvm_hap_supported() ?
+        hvm_hap_max_featuremask : hvm_shadow_max_featuremask;
+
+    for ( i = 0; i < ARRAY_SIZE(fs); ++i )
+        fs[i] &= mask[i];
+
+    /*
+     * Xen can provide an (x2)APIC emulation to HVM guests even if the host's
+     * (x2)APIC isn't enabled.
+     */
+    __set_bit(X86_FEATURE_APIC, fs);
+    __set_bit(X86_FEATURE_X2APIC, fs);
+
+    /*
+     * We don't support EFER.LMSLE at all.  AMD has dropped the feature from
+     * hardware and allocated a CPUID bit to indicate its absence.
+     */
+    __set_bit(X86_FEATURE_NO_LMSL, fs);
+
+    /*
+     * On AMD, PV guests are entirely unable to use SYSENTER as Xen runs in
+     * long mode (and init_amd() has cleared it out of host capabilities), but
+     * HVM guests are able if running in protected mode.
+     */
+    if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
+         raw_cpu_policy.basic.sep )
+        __set_bit(X86_FEATURE_SEP, fs);
+
+    /*
+     * VIRT_SSBD is exposed in the default policy as a result of
+     * amd_virt_spec_ctrl being set, it also needs exposing in the max policy.
+     */
+    if ( amd_virt_spec_ctrl )
+        __set_bit(X86_FEATURE_VIRT_SSBD, fs);
+
+    /*
+     * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
+     * availability, or admin choice), hide the feature.
+     */
+    if ( !boot_cpu_has(X86_FEATURE_SC_MSR_HVM) )
+    {
+        __clear_bit(X86_FEATURE_IBRSB, fs);
+        __clear_bit(X86_FEATURE_IBRS, fs);
+    }
+    else if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
+        /*
+         * If SPEC_CTRL.SSBD is available VIRT_SPEC_CTRL.SSBD can be exposed
+         * and implemented using the former. Expose in the max policy only as
+         * the preference is for guests to use SPEC_CTRL.SSBD if available.
+         */
+        __set_bit(X86_FEATURE_VIRT_SSBD, fs);
+
+    /*
+     * With VT-x, some features are only supported by Xen if dedicated
+     * hardware support is also available.
+     */
+    if ( cpu_has_vmx )
+    {
+        if ( !cpu_has_vmx_mpx )
+            __clear_bit(X86_FEATURE_MPX, fs);
+
+        if ( !cpu_has_vmx_xsaves )
+            __clear_bit(X86_FEATURE_XSAVES, fs);
+    }
+
+    /*
+     * Xen doesn't use PKS, so the guest support for it has opted to not use
+     * the VMCS load/save controls for efficiency reasons.  This depends on
+     * the exact vmentry/exit behaviour, so don't expose PKS in other
+     * situations until someone has cross-checked the behaviour for safety.
+     */
+    if ( !cpu_has_vmx )
+        __clear_bit(X86_FEATURE_PKS, fs);
+
+    guest_common_feature_adjustments(fs);
+
+    sanitise_featureset(fs);
+    x86_cpu_featureset_to_policy(fs, p);
+    recalculate_xstate(p);
 
     /* It's always possible to emulate CPUID faulting for HVM guests */
     p->platform_info.cpuid_faulting = true;
@@ -82,8 +625,32 @@ static void __init calculate_hvm_max_policy(void)
 static void __init calculate_hvm_def_policy(void)
 {
     struct cpu_policy *p = &hvm_def_cpu_policy;
+    uint32_t fs[FSCAPINTS];
+    unsigned int i;
+    const uint32_t *mask;
 
     *p = hvm_max_cpu_policy;
+    x86_cpu_policy_to_featureset(p, fs);
+
+    mask = hvm_hap_supported() ?
+        hvm_hap_def_featuremask : hvm_shadow_def_featuremask;
+
+    for ( i = 0; i < ARRAY_SIZE(fs); ++i )
+        fs[i] &= mask[i];
+
+    guest_common_feature_adjustments(fs);
+    guest_common_default_feature_adjustments(fs);
+
+    /*
+     * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
+     * amd_virt_spec_ctrl is set.
+     */
+    if ( amd_virt_spec_ctrl )
+        __set_bit(X86_FEATURE_VIRT_SSBD, fs);
+
+    sanitise_featureset(fs);
+    x86_cpu_featureset_to_policy(fs, p);
+    recalculate_xstate(p);
 }
 
 void __init init_guest_cpu_policies(void)
@@ -149,3 +716,188 @@ int init_domain_cpu_policy(struct domain *d)
 
     return 0;
 }
+
+void recalculate_cpuid_policy(struct domain *d)
+{
+    struct cpu_policy *p = d->arch.cpuid;
+    const struct cpu_policy *max = is_pv_domain(d)
+        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
+        : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
+    uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
+    unsigned int i;
+
+    if ( !max )
+    {
+        ASSERT_UNREACHABLE();
+        return;
+    }
+
+    p->x86_vendor = x86_cpuid_lookup_vendor(
+        p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
+
+    p->basic.max_leaf   = min(p->basic.max_leaf,   max->basic.max_leaf);
+    p->feat.max_subleaf = min(p->feat.max_subleaf, max->feat.max_subleaf);
+    p->extd.max_leaf    = 0x80000000 | min(p->extd.max_leaf & 0xffff,
+                                           ((p->x86_vendor & (X86_VENDOR_AMD |
+                                                              X86_VENDOR_HYGON))
+                                            ? CPUID_GUEST_NR_EXTD_AMD
+                                            : CPUID_GUEST_NR_EXTD_INTEL) - 1);
+
+    x86_cpu_policy_to_featureset(p, fs);
+    x86_cpu_policy_to_featureset(max, max_fs);
+
+    if ( is_hvm_domain(d) )
+    {
+        /*
+         * HVM domains using Shadow paging have further restrictions on their
+         * available paging features.
+         */
+        if ( !hap_enabled(d) )
+        {
+            for ( i = 0; i < ARRAY_SIZE(max_fs); i++ )
+                max_fs[i] &= hvm_shadow_max_featuremask[i];
+        }
+
+        /* Hide nested-virt if it hasn't been explicitly configured. */
+        if ( !nestedhvm_enabled(d) )
+        {
+            __clear_bit(X86_FEATURE_VMX, max_fs);
+            __clear_bit(X86_FEATURE_SVM, max_fs);
+        }
+    }
+
+    /*
+     * Allow the toolstack to set HTT, X2APIC and CMP_LEGACY.  These bits
+     * affect how to interpret topology information in other cpuid leaves.
+     */
+    __set_bit(X86_FEATURE_HTT, max_fs);
+    __set_bit(X86_FEATURE_X2APIC, max_fs);
+    __set_bit(X86_FEATURE_CMP_LEGACY, max_fs);
+
+    /*
+     * 32bit PV domains can't use any Long Mode features, and cannot use
+     * SYSCALL on non-AMD hardware.
+     */
+    if ( is_pv_32bit_domain(d) )
+    {
+        __clear_bit(X86_FEATURE_LM, max_fs);
+        if ( !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
+            __clear_bit(X86_FEATURE_SYSCALL, max_fs);
+    }
+
+    /* Clamp the toolstack's choices to reality. */
+    for ( i = 0; i < ARRAY_SIZE(fs); i++ )
+        fs[i] &= max_fs[i];
+
+    if ( p->basic.max_leaf < XSTATE_CPUID )
+        __clear_bit(X86_FEATURE_XSAVE, fs);
+
+    sanitise_featureset(fs);
+
+    /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
+    fs[FEATURESET_7b0] &= ~(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
+                            cpufeat_mask(X86_FEATURE_NO_FPU_SEL));
+    fs[FEATURESET_7b0] |= (host_cpu_policy.feat._7b0 &
+                           (cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
+                            cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
+
+    x86_cpu_featureset_to_policy(fs, p);
+
+    /* Pass host cacheline size through to guests. */
+    p->basic.clflush_size = max->basic.clflush_size;
+
+    p->extd.maxphysaddr = min(p->extd.maxphysaddr, max->extd.maxphysaddr);
+    p->extd.maxphysaddr = min_t(uint8_t, p->extd.maxphysaddr,
+                                paging_max_paddr_bits(d));
+    p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr,
+                                (p->basic.pae || p->basic.pse36) ? 36 : 32);
+
+    p->extd.maxlinaddr = p->extd.lm ? 48 : 32;
+
+    recalculate_xstate(p);
+    recalculate_misc(p);
+
+    for ( i = 0; i < ARRAY_SIZE(p->cache.raw); ++i )
+    {
+        if ( p->cache.subleaf[i].type >= 1 &&
+             p->cache.subleaf[i].type <= 3 )
+        {
+            /* Subleaf has a valid cache type. Zero reserved fields. */
+            p->cache.raw[i].a &= 0xffffc3ffu;
+            p->cache.raw[i].d &= 0x00000007u;
+        }
+        else
+        {
+            /* Subleaf is not valid.  Zero the rest of the union. */
+            zero_leaves(p->cache.raw, i, ARRAY_SIZE(p->cache.raw) - 1);
+            break;
+        }
+    }
+
+    if ( vpmu_mode == XENPMU_MODE_OFF ||
+         ((vpmu_mode & XENPMU_MODE_ALL) && !is_hardware_domain(d)) )
+        p->basic.raw[0xa] = EMPTY_LEAF;
+
+    if ( !p->extd.svm )
+        p->extd.raw[0xa] = EMPTY_LEAF;
+
+    if ( !p->extd.page1gb )
+        p->extd.raw[0x19] = EMPTY_LEAF;
+}
+
+void __init init_dom0_cpuid_policy(struct domain *d)
+{
+    struct cpu_policy *p = d->arch.cpuid;
+
+    /* dom0 can't migrate.  Give it ITSC if available. */
+    if ( cpu_has_itsc )
+        p->extd.itsc = true;
+
+    /*
+     * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
+     * so dom0 can turn off workarounds as appropriate.  Temporary, until the
+     * domain policy logic gains a better understanding of MSRs.
+     */
+    if ( cpu_has_arch_caps )
+        p->feat.arch_caps = true;
+
+    /* Apply dom0-cpuid= command line settings, if provided. */
+    if ( dom0_cpuid_cmdline )
+    {
+        uint32_t fs[FSCAPINTS];
+        unsigned int i;
+
+        x86_cpu_policy_to_featureset(p, fs);
+
+        for ( i = 0; i < ARRAY_SIZE(fs); ++i )
+        {
+            fs[i] |=  dom0_enable_feat [i];
+            fs[i] &= ~dom0_disable_feat[i];
+        }
+
+        x86_cpu_featureset_to_policy(fs, p);
+
+        recalculate_cpuid_policy(d);
+    }
+}
+
+static void __init __maybe_unused build_assertions(void)
+{
+    BUILD_BUG_ON(ARRAY_SIZE(known_features) != FSCAPINTS);
+    BUILD_BUG_ON(ARRAY_SIZE(pv_max_featuremask) != FSCAPINTS);
+    BUILD_BUG_ON(ARRAY_SIZE(hvm_shadow_max_featuremask) != FSCAPINTS);
+    BUILD_BUG_ON(ARRAY_SIZE(hvm_hap_max_featuremask) != FSCAPINTS);
+    BUILD_BUG_ON(ARRAY_SIZE(deep_features) != FSCAPINTS);
+
+    /* Find some more clever allocation scheme if this trips. */
+    BUILD_BUG_ON(sizeof(struct cpu_policy) > PAGE_SIZE);
+
+    BUILD_BUG_ON(sizeof(raw_cpu_policy.basic) !=
+                 sizeof(raw_cpu_policy.basic.raw));
+    BUILD_BUG_ON(sizeof(raw_cpu_policy.feat) !=
+                 sizeof(raw_cpu_policy.feat.raw));
+    BUILD_BUG_ON(sizeof(raw_cpu_policy.xstate) !=
+                 sizeof(raw_cpu_policy.xstate.raw));
+    BUILD_BUG_ON(sizeof(raw_cpu_policy.extd) !=
+                 sizeof(raw_cpu_policy.extd.raw));
+}
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 5eb5f1893516..3f20c342fde8 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -1,638 +1,14 @@
-#include <xen/init.h>
-#include <xen/lib.h>
-#include <xen/param.h>
 #include <xen/sched.h>
-#include <xen/nospec.h>
-#include <asm/amd.h>
+#include <xen/types.h>
+
+#include <public/hvm/params.h>
+
 #include <asm/cpu-policy.h>
 #include <asm/cpuid.h>
-#include <asm/hvm/hvm.h>
-#include <asm/hvm/nestedhvm.h>
-#include <asm/hvm/svm/svm.h>
 #include <asm/hvm/viridian.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <asm/paging.h>
-#include <asm/processor.h>
 #include <asm/xstate.h>
 
-const uint32_t known_features[] = INIT_KNOWN_FEATURES;
-
-static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
-static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
-static const uint32_t __initconst hvm_hap_max_featuremask[] =
-    INIT_HVM_HAP_MAX_FEATURES;
-static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
-static const uint32_t __initconst hvm_shadow_def_featuremask[] =
-    INIT_HVM_SHADOW_DEF_FEATURES;
-static const uint32_t __initconst hvm_hap_def_featuremask[] =
-    INIT_HVM_HAP_DEF_FEATURES;
-static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
-
-static const struct feature_name {
-    const char *name;
-    unsigned int bit;
-} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
-
-/*
- * Parse a list of cpuid feature names -> bool, calling the callback for any
- * matches found.
- *
- * always_inline, because this is init code only and we really don't want a
- * function pointer call in the middle of the loop.
- */
-static int __init always_inline parse_cpuid(
-    const char *s, void (*callback)(unsigned int feat, bool val))
-{
-    const char *ss;
-    int val, rc = 0;
-
-    do {
-        const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
-        const char *feat;
-
-        ss = strchr(s, ',');
-        if ( !ss )
-            ss = strchr(s, '\0');
-
-        /* Skip the 'no-' prefix for name comparisons. */
-        feat = s;
-        if ( strncmp(s, "no-", 3) == 0 )
-            feat += 3;
-
-        /* (Re)initalise lhs and rhs for binary search. */
-        lhs = feature_names;
-        rhs = feature_names + ARRAY_SIZE(feature_names);
-
-        while ( lhs < rhs )
-        {
-            int res;
-
-            mid = lhs + (rhs - lhs) / 2;
-            res = cmdline_strcmp(feat, mid->name);
-
-            if ( res < 0 )
-            {
-                rhs = mid;
-                continue;
-            }
-            if ( res > 0 )
-            {
-                lhs = mid + 1;
-                continue;
-            }
-
-            if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
-            {
-                callback(mid->bit, val);
-                mid = NULL;
-            }
-
-            break;
-        }
-
-        /*
-         * Mid being NULL means that the name and boolean were successfully
-         * identified.  Everything else is an error.
-         */
-        if ( mid )
-            rc = -EINVAL;
-
-        s = ss + 1;
-    } while ( *ss );
-
-    return rc;
-}
-
-static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
-{
-    if ( !val )
-        setup_clear_cpu_cap(feat);
-    else if ( feat == X86_FEATURE_RDRAND &&
-              (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
-        setup_force_cpu_cap(X86_FEATURE_RDRAND);
-}
-
-static int __init cf_check parse_xen_cpuid(const char *s)
-{
-    return parse_cpuid(s, _parse_xen_cpuid);
-}
-custom_param("cpuid", parse_xen_cpuid);
-
-static bool __initdata dom0_cpuid_cmdline;
-static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
-static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
-
-static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
-{
-    __set_bit  (feat, val ? dom0_enable_feat  : dom0_disable_feat);
-    __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
-}
-
-static int __init cf_check parse_dom0_cpuid(const char *s)
-{
-    dom0_cpuid_cmdline = true;
-
-    return parse_cpuid(s, _parse_dom0_cpuid);
-}
-custom_param("dom0-cpuid", parse_dom0_cpuid);
-
 #define EMPTY_LEAF ((struct cpuid_leaf){})
-static void zero_leaves(struct cpuid_leaf *l,
-                        unsigned int first, unsigned int last)
-{
-    memset(&l[first], 0, sizeof(*l) * (last - first + 1));
-}
-
-static void sanitise_featureset(uint32_t *fs)
-{
-    /* for_each_set_bit() uses unsigned longs.  Extend with zeroes. */
-    uint32_t disabled_features[
-        ROUNDUP(FSCAPINTS, sizeof(unsigned long)/sizeof(uint32_t))] = {};
-    unsigned int i;
-
-    for ( i = 0; i < FSCAPINTS; ++i )
-    {
-        /* Clamp to known mask. */
-        fs[i] &= known_features[i];
-
-        /*
-         * Identify which features with deep dependencies have been
-         * disabled.
-         */
-        disabled_features[i] = ~fs[i] & deep_features[i];
-    }
-
-    for_each_set_bit(i, (void *)disabled_features,
-                     sizeof(disabled_features) * 8)
-    {
-        const uint32_t *dfs = x86_cpuid_lookup_deep_deps(i);
-        unsigned int j;
-
-        ASSERT(dfs); /* deep_features[] should guarentee this. */
-
-        for ( j = 0; j < FSCAPINTS; ++j )
-        {
-            fs[j] &= ~dfs[j];
-            disabled_features[j] &= ~dfs[j];
-        }
-    }
-}
-
-static void recalculate_xstate(struct cpuid_policy *p)
-{
-    uint64_t xstates = XSTATE_FP_SSE;
-    uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
-    unsigned int i, Da1 = p->xstate.Da1;
-
-    /*
-     * The Da1 leaf is the only piece of information preserved in the common
-     * case.  Everything else is derived from other feature state.
-     */
-    memset(&p->xstate, 0, sizeof(p->xstate));
-
-    if ( !p->basic.xsave )
-        return;
-
-    if ( p->basic.avx )
-    {
-        xstates |= X86_XCR0_YMM;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_YMM_POS] +
-                          xstate_sizes[X86_XCR0_YMM_POS]);
-    }
-
-    if ( p->feat.mpx )
-    {
-        xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_BNDCSR_POS] +
-                          xstate_sizes[X86_XCR0_BNDCSR_POS]);
-    }
-
-    if ( p->feat.avx512f )
-    {
-        xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_HI_ZMM_POS] +
-                          xstate_sizes[X86_XCR0_HI_ZMM_POS]);
-    }
-
-    if ( p->feat.pku )
-    {
-        xstates |= X86_XCR0_PKRU;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_PKRU_POS] +
-                          xstate_sizes[X86_XCR0_PKRU_POS]);
-    }
-
-    p->xstate.max_size  =  xstate_size;
-    p->xstate.xcr0_low  =  xstates & ~XSTATE_XSAVES_ONLY;
-    p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
-
-    p->xstate.Da1 = Da1;
-    if ( p->xstate.xsaves )
-    {
-        p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
-        p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
-    }
-    else
-        xstates &= ~XSTATE_XSAVES_ONLY;
-
-    for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
-    {
-        uint64_t curr_xstate = 1ul << i;
-
-        if ( !(xstates & curr_xstate) )
-            continue;
-
-        p->xstate.comp[i].size   = xstate_sizes[i];
-        p->xstate.comp[i].offset = xstate_offsets[i];
-        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
-        p->xstate.comp[i].align  = curr_xstate & xstate_align;
-    }
-}
-
-/*
- * Misc adjustments to the policy.  Mostly clobbering reserved fields and
- * duplicating shared fields.  Intentionally hidden fields are annotated.
- */
-static void recalculate_misc(struct cpuid_policy *p)
-{
-    p->basic.raw_fms &= 0x0fff0fff; /* Clobber Processor Type on Intel. */
-    p->basic.apic_id = 0; /* Dynamic. */
-
-    p->basic.raw[0x5] = EMPTY_LEAF; /* MONITOR not exposed to guests. */
-    p->basic.raw[0x6] = EMPTY_LEAF; /* Therm/Power not exposed to guests. */
-
-    p->basic.raw[0x8] = EMPTY_LEAF;
-
-    /* TODO: Rework topology logic. */
-    memset(p->topo.raw, 0, sizeof(p->topo.raw));
-
-    p->basic.raw[0xc] = EMPTY_LEAF;
-
-    p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
-
-    /* Most of Power/RAS hidden from guests. */
-    p->extd.raw[0x7].a = p->extd.raw[0x7].b = p->extd.raw[0x7].c = 0;
-
-    p->extd.raw[0x8].d = 0;
-
-    switch ( p->x86_vendor )
-    {
-    case X86_VENDOR_INTEL:
-        p->basic.l2_nr_queries = 1; /* Fixed to 1 query. */
-        p->basic.raw[0x3] = EMPTY_LEAF; /* PSN - always hidden. */
-        p->basic.raw[0x9] = EMPTY_LEAF; /* DCA - always hidden. */
-
-        p->extd.vendor_ebx = 0;
-        p->extd.vendor_ecx = 0;
-        p->extd.vendor_edx = 0;
-
-        p->extd.raw[0x1].a = p->extd.raw[0x1].b = 0;
-
-        p->extd.raw[0x5] = EMPTY_LEAF;
-        p->extd.raw[0x6].a = p->extd.raw[0x6].b = p->extd.raw[0x6].d = 0;
-
-        p->extd.raw[0x8].a &= 0x0000ffff;
-        p->extd.raw[0x8].c = 0;
-        break;
-
-    case X86_VENDOR_AMD:
-    case X86_VENDOR_HYGON:
-        zero_leaves(p->basic.raw, 0x2, 0x3);
-        memset(p->cache.raw, 0, sizeof(p->cache.raw));
-        zero_leaves(p->basic.raw, 0x9, 0xa);
-
-        p->extd.vendor_ebx = p->basic.vendor_ebx;
-        p->extd.vendor_ecx = p->basic.vendor_ecx;
-        p->extd.vendor_edx = p->basic.vendor_edx;
-
-        p->extd.raw_fms = p->basic.raw_fms;
-        p->extd.raw[0x1].b &= 0xff00ffff;
-        p->extd.e1d |= p->basic._1d & CPUID_COMMON_1D_FEATURES;
-
-        p->extd.raw[0x8].a &= 0x0000ffff; /* GuestMaxPhysAddr hidden. */
-        p->extd.raw[0x8].c &= 0x0003f0ff;
-
-        p->extd.raw[0x9] = EMPTY_LEAF;
-
-        zero_leaves(p->extd.raw, 0xb, 0x18);
-
-        /* 0x19 - TLB details.  Pass through. */
-        /* 0x1a - Perf hints.   Pass through. */
-
-        p->extd.raw[0x1b] = EMPTY_LEAF; /* IBS - not supported. */
-        p->extd.raw[0x1c] = EMPTY_LEAF; /* LWP - not supported. */
-        p->extd.raw[0x1d] = EMPTY_LEAF; /* TopoExt Cache */
-        p->extd.raw[0x1e] = EMPTY_LEAF; /* TopoExt APIC ID/Core/Node */
-        p->extd.raw[0x1f] = EMPTY_LEAF; /* SEV */
-        p->extd.raw[0x20] = EMPTY_LEAF; /* Platform QoS */
-        break;
-    }
-}
-
-static void __init calculate_raw_policy(void)
-{
-    struct cpuid_policy *p = &raw_cpu_policy;
-
-    x86_cpuid_policy_fill_native(p);
-
-    /* Nothing good will come from Xen and libx86 disagreeing on vendor. */
-    ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
-}
-
-static void __init calculate_host_policy(void)
-{
-    struct cpuid_policy *p = &host_cpu_policy;
-    unsigned int max_extd_leaf;
-
-    *p = raw_cpu_policy;
-
-    p->basic.max_leaf =
-        min_t(uint32_t, p->basic.max_leaf,   ARRAY_SIZE(p->basic.raw) - 1);
-    p->feat.max_subleaf =
-        min_t(uint32_t, p->feat.max_subleaf, ARRAY_SIZE(p->feat.raw) - 1);
-
-    max_extd_leaf = p->extd.max_leaf;
-
-    /*
-     * For AMD/Hygon hardware before Zen3, we unilaterally modify LFENCE to be
-     * dispatch serialising for Spectre mitigations.  Extend max_extd_leaf
-     * beyond what hardware supports, to include the feature leaf containing
-     * this information.
-     */
-    if ( cpu_has_lfence_dispatch )
-        max_extd_leaf = max(max_extd_leaf, 0x80000021);
-
-    p->extd.max_leaf = 0x80000000 | min_t(uint32_t, max_extd_leaf & 0xffff,
-                                          ARRAY_SIZE(p->extd.raw) - 1);
-
-    x86_cpu_featureset_to_policy(boot_cpu_data.x86_capability, p);
-    recalculate_xstate(p);
-    recalculate_misc(p);
-
-    /* When vPMU is disabled, drop it from the host policy. */
-    if ( vpmu_mode == XENPMU_MODE_OFF )
-        p->basic.raw[0xa] = EMPTY_LEAF;
-
-    if ( p->extd.svm )
-    {
-        /* Clamp to implemented features which require hardware support. */
-        p->extd.raw[0xa].d &= ((1u << SVM_FEATURE_NPT) |
-                               (1u << SVM_FEATURE_LBRV) |
-                               (1u << SVM_FEATURE_NRIPS) |
-                               (1u << SVM_FEATURE_PAUSEFILTER) |
-                               (1u << SVM_FEATURE_DECODEASSISTS));
-        /* Enable features which are always emulated. */
-        p->extd.raw[0xa].d |= ((1u << SVM_FEATURE_VMCBCLEAN) |
-                               (1u << SVM_FEATURE_TSCRATEMSR));
-    }
-}
-
-static void __init guest_common_default_feature_adjustments(uint32_t *fs)
-{
-    /*
-     * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
-     * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
-     * compensate.
-     *
-     * Mitigate by hiding RDRAND from guests by default, unless explicitly
-     * overridden on the Xen command line (cpuid=rdrand).  Irrespective of the
-     * default setting, guests can use RDRAND if explicitly enabled
-     * (cpuid="host,rdrand=1") in the VM's config file, and VMs which were
-     * previously using RDRAND can migrate in.
-     */
-    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
-         boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x3a &&
-         cpu_has_rdrand && !is_forced_cpu_cap(X86_FEATURE_RDRAND) )
-        __clear_bit(X86_FEATURE_RDRAND, fs);
-
-    /*
-     * On certain hardware, speculative or errata workarounds can result in
-     * TSX being placed in "force-abort" mode, where it doesn't actually
-     * function as expected, but is technically compatible with the ISA.
-     *
-     * Do not advertise RTM to guests by default if it won't actually work.
-     */
-    if ( rtm_disabled )
-        __clear_bit(X86_FEATURE_RTM, fs);
-}
-
-static void __init guest_common_feature_adjustments(uint32_t *fs)
-{
-    /* Unconditionally claim to be able to set the hypervisor bit. */
-    __set_bit(X86_FEATURE_HYPERVISOR, fs);
-
-    /*
-     * If IBRS is offered to the guest, unconditionally offer STIBP.  It is a
-     * nop on non-HT hardware, and has this behaviour to make heterogeneous
-     * setups easier to manage.
-     */
-    if ( test_bit(X86_FEATURE_IBRSB, fs) )
-        __set_bit(X86_FEATURE_STIBP, fs);
-    if ( test_bit(X86_FEATURE_IBRS, fs) )
-        __set_bit(X86_FEATURE_AMD_STIBP, fs);
-
-    /*
-     * On hardware which supports IBRS/IBPB, we can offer IBPB independently
-     * of IBRS by using the AMD feature bit.  An administrator may wish for
-     * performance reasons to offer IBPB without IBRS.
-     */
-    if ( host_cpu_policy.feat.ibrsb )
-        __set_bit(X86_FEATURE_IBPB, fs);
-}
-
-static void __init calculate_pv_max_policy(void)
-{
-    struct cpuid_policy *p = &pv_max_cpu_policy;
-    uint32_t pv_featureset[FSCAPINTS];
-    unsigned int i;
-
-    *p = host_cpu_policy;
-    x86_cpu_policy_to_featureset(p, pv_featureset);
-
-    for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
-        pv_featureset[i] &= pv_max_featuremask[i];
-
-    /*
-     * If Xen isn't virtualising MSR_SPEC_CTRL for PV guests (functional
-     * availability, or admin choice), hide the feature.
-     */
-    if ( !boot_cpu_has(X86_FEATURE_SC_MSR_PV) )
-    {
-        __clear_bit(X86_FEATURE_IBRSB, pv_featureset);
-        __clear_bit(X86_FEATURE_IBRS, pv_featureset);
-    }
-
-    guest_common_feature_adjustments(pv_featureset);
-
-    sanitise_featureset(pv_featureset);
-    x86_cpu_featureset_to_policy(pv_featureset, p);
-    recalculate_xstate(p);
-
-    p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
-}
-
-static void __init calculate_pv_def_policy(void)
-{
-    struct cpuid_policy *p = &pv_def_cpu_policy;
-    uint32_t pv_featureset[FSCAPINTS];
-    unsigned int i;
-
-    *p = pv_max_cpu_policy;
-    x86_cpu_policy_to_featureset(p, pv_featureset);
-
-    for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
-        pv_featureset[i] &= pv_def_featuremask[i];
-
-    guest_common_feature_adjustments(pv_featureset);
-    guest_common_default_feature_adjustments(pv_featureset);
-
-    sanitise_featureset(pv_featureset);
-    x86_cpu_featureset_to_policy(pv_featureset, p);
-    recalculate_xstate(p);
-}
-
-static void __init calculate_hvm_max_policy(void)
-{
-    struct cpuid_policy *p = &hvm_max_cpu_policy;
-    uint32_t hvm_featureset[FSCAPINTS];
-    unsigned int i;
-    const uint32_t *hvm_featuremask;
-
-    *p = host_cpu_policy;
-    x86_cpu_policy_to_featureset(p, hvm_featureset);
-
-    hvm_featuremask = hvm_hap_supported() ?
-        hvm_hap_max_featuremask : hvm_shadow_max_featuremask;
-
-    for ( i = 0; i < ARRAY_SIZE(hvm_featureset); ++i )
-        hvm_featureset[i] &= hvm_featuremask[i];
-
-    /*
-     * Xen can provide an (x2)APIC emulation to HVM guests even if the host's
-     * (x2)APIC isn't enabled.
-     */
-    __set_bit(X86_FEATURE_APIC, hvm_featureset);
-    __set_bit(X86_FEATURE_X2APIC, hvm_featureset);
-
-    /*
-     * We don't support EFER.LMSLE at all.  AMD has dropped the feature from
-     * hardware and allocated a CPUID bit to indicate its absence.
-     */
-    __set_bit(X86_FEATURE_NO_LMSL, hvm_featureset);
-
-    /*
-     * On AMD, PV guests are entirely unable to use SYSENTER as Xen runs in
-     * long mode (and init_amd() has cleared it out of host capabilities), but
-     * HVM guests are able if running in protected mode.
-     */
-    if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
-         raw_cpu_policy.basic.sep )
-        __set_bit(X86_FEATURE_SEP, hvm_featureset);
-
-    /*
-     * VIRT_SSBD is exposed in the default policy as a result of
-     * amd_virt_spec_ctrl being set, it also needs exposing in the max policy.
-     */
-    if ( amd_virt_spec_ctrl )
-        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
-
-    /*
-     * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
-     * availability, or admin choice), hide the feature.
-     */
-    if ( !boot_cpu_has(X86_FEATURE_SC_MSR_HVM) )
-    {
-        __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
-        __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
-    }
-    else if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
-        /*
-         * If SPEC_CTRL.SSBD is available VIRT_SPEC_CTRL.SSBD can be exposed
-         * and implemented using the former. Expose in the max policy only as
-         * the preference is for guests to use SPEC_CTRL.SSBD if available.
-         */
-        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
-
-    /*
-     * With VT-x, some features are only supported by Xen if dedicated
-     * hardware support is also available.
-     */
-    if ( cpu_has_vmx )
-    {
-        if ( !cpu_has_vmx_mpx )
-            __clear_bit(X86_FEATURE_MPX, hvm_featureset);
-
-        if ( !cpu_has_vmx_xsaves )
-            __clear_bit(X86_FEATURE_XSAVES, hvm_featureset);
-    }
-
-    /*
-     * Xen doesn't use PKS, so the guest support for it has opted to not use
-     * the VMCS load/save controls for efficiency reasons.  This depends on
-     * the exact vmentry/exit behaviour, so don't expose PKS in other
-     * situations until someone has cross-checked the behaviour for safety.
-     */
-    if ( !cpu_has_vmx )
-        __clear_bit(X86_FEATURE_PKS, hvm_featureset);
-
-    guest_common_feature_adjustments(hvm_featureset);
-
-    sanitise_featureset(hvm_featureset);
-    x86_cpu_featureset_to_policy(hvm_featureset, p);
-    recalculate_xstate(p);
-}
-
-static void __init calculate_hvm_def_policy(void)
-{
-    struct cpuid_policy *p = &hvm_def_cpu_policy;
-    uint32_t hvm_featureset[FSCAPINTS];
-    unsigned int i;
-    const uint32_t *hvm_featuremask;
-
-    *p = hvm_max_cpu_policy;
-    x86_cpu_policy_to_featureset(p, hvm_featureset);
-
-    hvm_featuremask = hvm_hap_supported() ?
-        hvm_hap_def_featuremask : hvm_shadow_def_featuremask;
-
-    for ( i = 0; i < ARRAY_SIZE(hvm_featureset); ++i )
-        hvm_featureset[i] &= hvm_featuremask[i];
-
-    guest_common_feature_adjustments(hvm_featureset);
-    guest_common_default_feature_adjustments(hvm_featureset);
-
-    /*
-     * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
-     * amd_virt_spec_ctrl is set.
-     */
-    if ( amd_virt_spec_ctrl )
-        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
-
-    sanitise_featureset(hvm_featureset);
-    x86_cpu_featureset_to_policy(hvm_featureset, p);
-    recalculate_xstate(p);
-}
-
-void __init init_guest_cpuid(void)
-{
-    calculate_raw_policy();
-    calculate_host_policy();
-
-    if ( IS_ENABLED(CONFIG_PV) )
-    {
-        calculate_pv_max_policy();
-        calculate_pv_def_policy();
-    }
-
-    if ( hvm_enabled )
-    {
-        calculate_hvm_max_policy();
-        calculate_hvm_def_policy();
-    }
-}
 
 bool recheck_cpu_features(unsigned int cpu)
 {
@@ -656,170 +32,6 @@ bool recheck_cpu_features(unsigned int cpu)
     return okay;
 }
 
-void recalculate_cpuid_policy(struct domain *d)
-{
-    struct cpuid_policy *p = d->arch.cpuid;
-    const struct cpuid_policy *max = is_pv_domain(d)
-        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
-        : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
-    uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
-    unsigned int i;
-
-    if ( !max )
-    {
-        ASSERT_UNREACHABLE();
-        return;
-    }
-
-    p->x86_vendor = x86_cpuid_lookup_vendor(
-        p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
-
-    p->basic.max_leaf   = min(p->basic.max_leaf,   max->basic.max_leaf);
-    p->feat.max_subleaf = min(p->feat.max_subleaf, max->feat.max_subleaf);
-    p->extd.max_leaf    = 0x80000000 | min(p->extd.max_leaf & 0xffff,
-                                           ((p->x86_vendor & (X86_VENDOR_AMD |
-                                                              X86_VENDOR_HYGON))
-                                            ? CPUID_GUEST_NR_EXTD_AMD
-                                            : CPUID_GUEST_NR_EXTD_INTEL) - 1);
-
-    x86_cpu_policy_to_featureset(p, fs);
-    x86_cpu_policy_to_featureset(max, max_fs);
-
-    if ( is_hvm_domain(d) )
-    {
-        /*
-         * HVM domains using Shadow paging have further restrictions on their
-         * available paging features.
-         */
-        if ( !hap_enabled(d) )
-        {
-            for ( i = 0; i < ARRAY_SIZE(max_fs); i++ )
-                max_fs[i] &= hvm_shadow_max_featuremask[i];
-        }
-
-        /* Hide nested-virt if it hasn't been explicitly configured. */
-        if ( !nestedhvm_enabled(d) )
-        {
-            __clear_bit(X86_FEATURE_VMX, max_fs);
-            __clear_bit(X86_FEATURE_SVM, max_fs);
-        }
-    }
-
-    /*
-     * Allow the toolstack to set HTT, X2APIC and CMP_LEGACY.  These bits
-     * affect how to interpret topology information in other cpuid leaves.
-     */
-    __set_bit(X86_FEATURE_HTT, max_fs);
-    __set_bit(X86_FEATURE_X2APIC, max_fs);
-    __set_bit(X86_FEATURE_CMP_LEGACY, max_fs);
-
-    /*
-     * 32bit PV domains can't use any Long Mode features, and cannot use
-     * SYSCALL on non-AMD hardware.
-     */
-    if ( is_pv_32bit_domain(d) )
-    {
-        __clear_bit(X86_FEATURE_LM, max_fs);
-        if ( !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
-            __clear_bit(X86_FEATURE_SYSCALL, max_fs);
-    }
-
-    /* Clamp the toolstacks choices to reality. */
-    for ( i = 0; i < ARRAY_SIZE(fs); i++ )
-        fs[i] &= max_fs[i];
-
-    if ( p->basic.max_leaf < XSTATE_CPUID )
-        __clear_bit(X86_FEATURE_XSAVE, fs);
-
-    sanitise_featureset(fs);
-
-    /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
-    fs[FEATURESET_7b0] &= ~(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
-                            cpufeat_mask(X86_FEATURE_NO_FPU_SEL));
-    fs[FEATURESET_7b0] |= (host_cpu_policy.feat._7b0 &
-                           (cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
-                            cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
-
-    x86_cpu_featureset_to_policy(fs, p);
-
-    /* Pass host cacheline size through to guests. */
-    p->basic.clflush_size = max->basic.clflush_size;
-
-    p->extd.maxphysaddr = min(p->extd.maxphysaddr, max->extd.maxphysaddr);
-    p->extd.maxphysaddr = min_t(uint8_t, p->extd.maxphysaddr,
-                                paging_max_paddr_bits(d));
-    p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr,
-                                (p->basic.pae || p->basic.pse36) ? 36 : 32);
-
-    p->extd.maxlinaddr = p->extd.lm ? 48 : 32;
-
-    recalculate_xstate(p);
-    recalculate_misc(p);
-
-    for ( i = 0; i < ARRAY_SIZE(p->cache.raw); ++i )
-    {
-        if ( p->cache.subleaf[i].type >= 1 &&
-             p->cache.subleaf[i].type <= 3 )
-        {
-            /* Subleaf has a valid cache type. Zero reserved fields. */
-            p->cache.raw[i].a &= 0xffffc3ffu;
-            p->cache.raw[i].d &= 0x00000007u;
-        }
-        else
-        {
-            /* Subleaf is not valid.  Zero the rest of the union. */
-            zero_leaves(p->cache.raw, i, ARRAY_SIZE(p->cache.raw) - 1);
-            break;
-        }
-    }
-
-    if ( vpmu_mode == XENPMU_MODE_OFF ||
-         ((vpmu_mode & XENPMU_MODE_ALL) && !is_hardware_domain(d)) )
-        p->basic.raw[0xa] = EMPTY_LEAF;
-
-    if ( !p->extd.svm )
-        p->extd.raw[0xa] = EMPTY_LEAF;
-
-    if ( !p->extd.page1gb )
-        p->extd.raw[0x19] = EMPTY_LEAF;
-}
-
-void __init init_dom0_cpuid_policy(struct domain *d)
-{
-    struct cpuid_policy *p = d->arch.cpuid;
-
-    /* dom0 can't migrate.  Give it ITSC if available. */
-    if ( cpu_has_itsc )
-        p->extd.itsc = true;
-
-    /*
-     * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
-     * so dom0 can turn off workarounds as appropriate.  Temporary, until the
-     * domain policy logic gains a better understanding of MSRs.
-     */
-    if ( cpu_has_arch_caps )
-        p->feat.arch_caps = true;
-
-    /* Apply dom0-cpuid= command line settings, if provided. */
-    if ( dom0_cpuid_cmdline )
-    {
-        uint32_t fs[FSCAPINTS];
-        unsigned int i;
-
-        x86_cpu_policy_to_featureset(p, fs);
-
-        for ( i = 0; i < ARRAY_SIZE(fs); ++i )
-        {
-            fs[i] |=  dom0_enable_feat [i];
-            fs[i] &= ~dom0_disable_feat[i];
-        }
-
-        x86_cpu_featureset_to_policy(fs, p);
-
-        recalculate_cpuid_policy(d);
-    }
-}
-
 void guest_cpuid(const struct vcpu *v, uint32_t leaf,
                  uint32_t subleaf, struct cpuid_leaf *res)
 {
@@ -1190,27 +402,6 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
     }
 }
 
-static void __init __maybe_unused build_assertions(void)
-{
-    BUILD_BUG_ON(ARRAY_SIZE(known_features) != FSCAPINTS);
-    BUILD_BUG_ON(ARRAY_SIZE(pv_max_featuremask) != FSCAPINTS);
-    BUILD_BUG_ON(ARRAY_SIZE(hvm_shadow_max_featuremask) != FSCAPINTS);
-    BUILD_BUG_ON(ARRAY_SIZE(hvm_hap_max_featuremask) != FSCAPINTS);
-    BUILD_BUG_ON(ARRAY_SIZE(deep_features) != FSCAPINTS);
-
-    /* Find some more clever allocation scheme if this trips. */
-    BUILD_BUG_ON(sizeof(struct cpuid_policy) > PAGE_SIZE);
-
-    BUILD_BUG_ON(sizeof(raw_cpu_policy.basic) !=
-                 sizeof(raw_cpu_policy.basic.raw));
-    BUILD_BUG_ON(sizeof(raw_cpu_policy.feat) !=
-                 sizeof(raw_cpu_policy.feat.raw));
-    BUILD_BUG_ON(sizeof(raw_cpu_policy.xstate) !=
-                 sizeof(raw_cpu_policy.xstate.raw));
-    BUILD_BUG_ON(sizeof(raw_cpu_policy.extd) !=
-                 sizeof(raw_cpu_policy.extd.raw));
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d326fa1c0136..675c523d9909 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -77,7 +77,6 @@
 #include <public/memory.h>
 #include <public/vm_event.h>
 #include <public/arch-x86/cpuid.h>
-#include <asm/cpuid.h>
 
 #include <compat/hvm/hvm_op.h>
 
diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
index 13e2a1f86d13..b361537a602b 100644
--- a/xen/arch/x86/include/asm/cpu-policy.h
+++ b/xen/arch/x86/include/asm/cpu-policy.h
@@ -18,4 +18,10 @@ void init_guest_cpu_policies(void);
 /* Allocate and initialise a CPU policy suitable for the domain. */
 int init_domain_cpu_policy(struct domain *d);
 
+/* Apply dom0-specific tweaks to the CPUID policy. */
+void init_dom0_cpuid_policy(struct domain *d);
+
+/* Clamp the CPUID policy to reality. */
+void recalculate_cpuid_policy(struct domain *d);
+
 #endif /* X86_CPU_POLICY_H */
diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
index 7f81b998ce01..b32ba0bbfe5c 100644
--- a/xen/arch/x86/include/asm/cpuid.h
+++ b/xen/arch/x86/include/asm/cpuid.h
@@ -8,14 +8,10 @@
 #include <xen/kernel.h>
 #include <xen/percpu.h>
 
-#include <xen/lib/x86/cpu-policy.h>
-
 #include <public/sysctl.h>
 
 extern const uint32_t known_features[FSCAPINTS];
 
-void init_guest_cpuid(void);
-
 /*
  * Expected levelling capabilities (given cpuid vendor/family information),
  * and levelling capabilities actually available (given MSR probing).
@@ -49,13 +45,8 @@ extern struct cpuidmasks cpuidmask_defaults;
 /* Check that all previously present features are still available. */
 bool recheck_cpu_features(unsigned int cpu);
 
-/* Apply dom0-specific tweaks to the CPUID policy. */
-void init_dom0_cpuid_policy(struct domain *d);
-
-/* Clamp the CPUID policy to reality. */
-void recalculate_cpuid_policy(struct domain *d);
-
 struct vcpu;
+struct cpuid_leaf;
 void guest_cpuid(const struct vcpu *v, uint32_t leaf,
                  uint32_t subleaf, struct cpuid_leaf *res);
 
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index f94f28c8e271..95492715d8ad 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -10,6 +10,7 @@
 #include <xen/param.h>
 #include <xen/sched.h>
 
+#include <asm/cpu-policy.h>
 #include <asm/cpufeature.h>
 #include <asm/invpcid.h>
 #include <asm/spec_ctrl.h>
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 51a19b9019eb..08ade715a3ce 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -51,7 +51,6 @@
 #include <asm/alternative.h>
 #include <asm/mc146818rtc.h>
 #include <asm/cpu-policy.h>
-#include <asm/cpuid.h>
 #include <asm/spec_ctrl.h>
 #include <asm/guest.h>
 #include <asm/microcode.h>
@@ -1991,7 +1990,6 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     if ( !tboot_protect_mem_regions() )
         panic("Could not protect TXT memory regions\n");
 
-    init_guest_cpuid();
     init_guest_cpu_policies();
 
     if ( xen_cpuidle )
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:59:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:59:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517749.803570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdRX-0003Rx-UL; Tue, 04 Apr 2023 09:59:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517749.803570; Tue, 04 Apr 2023 09:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdRX-0003Rj-Qz; Tue, 04 Apr 2023 09:59:11 +0000
Received: by outflank-mailman (input) for mailman id 517749;
 Tue, 04 Apr 2023 09:59:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdMC-0005bo-Gy
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:53:40 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 926ea3a0-d2ce-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 11:53:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 926ea3a0-d2ce-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680602019;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=GbFah8Uej44Bj9X6DdnsRE7vn00ll+/fTK8duGH2gAE=;
  b=RJH/ZgNLZ3lSIqMB3bEan5pDxw5I7g09rzjIzs17XzMBe07KFGPmXftg
   Jwz9ThgHQxbFFZOtKRxnbTOYYZJq4mC70NJur+z1TNFaqmKFPe8OyC42V
   AFY1cI5DZkwSCJOjIJksb0amE++Clo9uFJ4FEVPXWIQXM2zX9TRrsid87
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104161466
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:uoLQVq9Ye3x1JcrNB3lyDrUD/36TJUtcMsCJ2f8bNWPcYEJGY0x3z
 moaD2rSOqqLYGL0edlxbNu+pBlVvpDUyNJiHFRs+X88E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKicYXoZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ire7kI/1BjOkGlA5AdmOagX5Aa2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDklu9
 uIXeQoIYCmZntio+6C4W8hJq8gaeZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAr3/zaTBH7nmSorI6+TP7xw1tyrn9dtHSf7RmQO0MxhrB+
 DuaoTqR7hcyN8yVxj28z1KVgKzsrBnVAYxJNKGWz6s/6LGU7jNKU0BHPbehmtGph0j7V99BJ
 kg8/is1sbN05EGtVsP6XRCzvDiDpBF0c/h6HvA+6QqN4rHJ+AvfDW8BJhZebPQ2uclwQiYlv
 mJlhPuwW2Yp6ufMDyvAqPHN92ja1TUpwXEqRSwaQlo/5tfaupgJkCndTcdCNYO5t4igcd3v+
 AxmvBTSlp1K055Tivrlpw+e696/jsOXF1Bov207Skrgt1okP9D9OuRE/HCBtZ59wJClok5tV
 ZTus+yX96gwAJ6Ej0Rhq81dTejyt55p3NAx6GOD/qXNFBz3oRZPhagKvFlDyL5Ba67ogwPBb
 k7Joh9275ROJnasZqIfS9vvW5x3kfaxT4+/CKC8gj9yjn9ZLVfvwc2TTRTIgzCFfLYEysnTx
 qt3ge7zVC1HWMyLPRK9RvsH0K9D+x3SMVj7HMihpzz+iOr2WZJgYetdWLd4RrxjvfzsTcS82
 4o3CvZmPD0ED7KiOHCLrtdDRb3IRFBiba3LRwVsXrbrCmJb9KsJUpc9HZtJl1RZoplo
IronPort-HdrOrdr: A9a23:59oaB6CIYAkWZ2LlHelo55DYdb4zR+YMi2TDt3oddfU1SL38qy
 nKpp4mPHDP5wr5NEtPpTniAtjjfZq/z/5ICOAqVN/PYOCPggCVxepZnOjfKlPbehEX9oRmpN
 1dm6oVMqyMMbCt5/yKnDVRELwbsaa6GLjDv5a785/0JzsaE52J6W1Ce2GmO3wzfiZqL7wjGq
 GR48JWzgDQAkj+PqyAdx84t/GonayzqK7b
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="104161466"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 10/15] x86/boot: Move MSR policy initialisation logic into cpu-policy.c
Date: Tue, 4 Apr 2023 10:52:17 +0100
Message-ID: <20230404095222.1373721-11-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Switch to the newer cpu_policy nomenclature.

No practical change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * New
---
 xen/arch/x86/cpu-policy.c             | 84 +++++++++++++++++++++++++++
 xen/arch/x86/include/asm/cpu-policy.h |  3 +
 xen/arch/x86/include/asm/msr.h        |  1 -
 xen/arch/x86/msr.c                    | 84 ---------------------------
 xen/arch/x86/setup.c                  |  3 +-
 5 files changed, 89 insertions(+), 86 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index e9ac1269c35a..f6a2317ed7bd 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -20,6 +20,90 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
 struct cpu_policy __ro_after_init hvm_def_cpu_policy;
 #endif
 
+static void __init calculate_raw_policy(void)
+{
+    struct cpu_policy *p = &raw_cpu_policy;
+
+    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
+    /* Was already added by probe_cpuid_faulting() */
+
+    if ( cpu_has_arch_caps )
+        rdmsrl(MSR_ARCH_CAPABILITIES, p->arch_caps.raw);
+}
+
+static void __init calculate_host_policy(void)
+{
+    struct cpu_policy *p = &host_cpu_policy;
+
+    *p = raw_cpu_policy;
+
+    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
+    /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
+    p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
+
+    /* Temporary, until we have known_features[] for feature bits in MSRs. */
+    p->arch_caps.raw &=
+        (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
+         ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
+         ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO |
+         ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO | ARCH_CAPS_PSDP_NO |
+         ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA | ARCH_CAPS_BHI_NO |
+         ARCH_CAPS_PBRSB_NO);
+}
+
+static void __init calculate_pv_max_policy(void)
+{
+    struct cpu_policy *p = &pv_max_cpu_policy;
+
+    *p = host_cpu_policy;
+
+    p->arch_caps.raw = 0; /* Not supported yet. */
+}
+
+static void __init calculate_pv_def_policy(void)
+{
+    struct cpu_policy *p = &pv_def_cpu_policy;
+
+    *p = pv_max_cpu_policy;
+}
+
+static void __init calculate_hvm_max_policy(void)
+{
+    struct cpu_policy *p = &hvm_max_cpu_policy;
+
+    *p = host_cpu_policy;
+
+    /* It's always possible to emulate CPUID faulting for HVM guests */
+    p->platform_info.cpuid_faulting = true;
+
+    p->arch_caps.raw = 0; /* Not supported yet. */
+}
+
+static void __init calculate_hvm_def_policy(void)
+{
+    struct cpu_policy *p = &hvm_def_cpu_policy;
+
+    *p = hvm_max_cpu_policy;
+}
+
+void __init init_guest_cpu_policies(void)
+{
+    calculate_raw_policy();
+    calculate_host_policy();
+
+    if ( IS_ENABLED(CONFIG_PV) )
+    {
+        calculate_pv_max_policy();
+        calculate_pv_def_policy();
+    }
+
+    if ( hvm_enabled )
+    {
+        calculate_hvm_max_policy();
+        calculate_hvm_def_policy();
+    }
+}
+
 int init_domain_cpu_policy(struct domain *d)
 {
     struct cpu_policy *p = is_pv_domain(d)
diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
index 9ba34bbf5ea1..13e2a1f86d13 100644
--- a/xen/arch/x86/include/asm/cpu-policy.h
+++ b/xen/arch/x86/include/asm/cpu-policy.h
@@ -12,6 +12,9 @@ extern struct cpu_policy  pv_def_cpu_policy;
 extern struct cpu_policy hvm_max_cpu_policy;
 extern struct cpu_policy hvm_def_cpu_policy;
 
+/* Initialise the guest cpu_policy objects. */
+void init_guest_cpu_policies(void);
+
 /* Allocate and initialise a CPU policy suitable for the domain. */
 int init_domain_cpu_policy(struct domain *d);
 
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index b59a51d238a7..458841733e18 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -418,7 +418,6 @@ struct vcpu_msrs
     uint32_t dr_mask[4];
 };
 
-void init_guest_msr_policy(void);
 int init_vcpu_msr_policy(struct vcpu *v);
 
 /*
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 93bd93feb644..802fc60baf81 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -38,90 +38,6 @@
 
 DEFINE_PER_CPU(uint32_t, tsc_aux);
 
-static void __init calculate_raw_policy(void)
-{
-    struct msr_policy *mp = &raw_cpu_policy;
-
-    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
-    /* Was already added by probe_cpuid_faulting() */
-
-    if ( cpu_has_arch_caps )
-        rdmsrl(MSR_ARCH_CAPABILITIES, mp->arch_caps.raw);
-}
-
-static void __init calculate_host_policy(void)
-{
-    struct msr_policy *mp = &host_cpu_policy;
-
-    *mp = raw_cpu_policy;
-
-    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
-    /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
-    mp->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
-
-    /* Temporary, until we have known_features[] for feature bits in MSRs. */
-    mp->arch_caps.raw &=
-        (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
-         ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
-         ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO |
-         ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO | ARCH_CAPS_PSDP_NO |
-         ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA | ARCH_CAPS_BHI_NO |
-         ARCH_CAPS_PBRSB_NO);
-}
-
-static void __init calculate_pv_max_policy(void)
-{
-    struct msr_policy *mp = &pv_max_cpu_policy;
-
-    *mp = host_cpu_policy;
-
-    mp->arch_caps.raw = 0; /* Not supported yet. */
-}
-
-static void __init calculate_pv_def_policy(void)
-{
-    struct msr_policy *mp = &pv_def_cpu_policy;
-
-    *mp = pv_max_cpu_policy;
-}
-
-static void __init calculate_hvm_max_policy(void)
-{
-    struct msr_policy *mp = &hvm_max_cpu_policy;
-
-    *mp = host_cpu_policy;
-
-    /* It's always possible to emulate CPUID faulting for HVM guests */
-    mp->platform_info.cpuid_faulting = true;
-
-    mp->arch_caps.raw = 0; /* Not supported yet. */
-}
-
-static void __init calculate_hvm_def_policy(void)
-{
-    struct msr_policy *mp = &hvm_def_cpu_policy;
-
-    *mp = hvm_max_cpu_policy;
-}
-
-void __init init_guest_msr_policy(void)
-{
-    calculate_raw_policy();
-    calculate_host_policy();
-
-    if ( IS_ENABLED(CONFIG_PV) )
-    {
-        calculate_pv_max_policy();
-        calculate_pv_def_policy();
-    }
-
-    if ( hvm_enabled )
-    {
-        calculate_hvm_max_policy();
-        calculate_hvm_def_policy();
-    }
-}
-
 int init_vcpu_msr_policy(struct vcpu *v)
 {
     struct vcpu_msrs *msrs = xzalloc(struct vcpu_msrs);
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b29229933d8c..51a19b9019eb 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -50,6 +50,7 @@
 #include <asm/nmi.h>
 #include <asm/alternative.h>
 #include <asm/mc146818rtc.h>
+#include <asm/cpu-policy.h>
 #include <asm/cpuid.h>
 #include <asm/spec_ctrl.h>
 #include <asm/guest.h>
@@ -1991,7 +1992,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
         panic("Could not protect TXT memory regions\n");
 
     init_guest_cpuid();
-    init_guest_msr_policy();
+    init_guest_cpu_policies();
 
     if ( xen_cpuidle )
         xen_processor_pmbits |= XEN_PROCESSOR_PM_CX;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:59:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:59:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517751.803580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdRZ-0003hx-6H; Tue, 04 Apr 2023 09:59:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517751.803580; Tue, 04 Apr 2023 09:59:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdRZ-0003hq-1O; Tue, 04 Apr 2023 09:59:13 +0000
Received: by outflank-mailman (input) for mailman id 517751;
 Tue, 04 Apr 2023 09:59:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdMS-0005bo-R1
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:53:56 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9b5e9743-d2ce-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 11:53:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b5e9743-d2ce-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680602034;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=g6soTwUYllGEYZrZeu6VHps1q/P3LIhMMKfu395S8Yw=;
  b=Nf0ZU6SdlXkjQwjPFuLSMIH3fVlpq3UFTM19kMDfK/O8oWiBivO69uiF
   TqT+bWjNGD1vR+PqRcBqnyol607dnJBz9PgQSZxNJ0ryg8G2KBKDiLZ1L
   OVZ52db+q6jQYdnpFNZGLnSXEmsEf9FCDURWD4Dv/JVIMQrK6Zp0d6njz
   0=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104161492
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:ZPnmvaN/QfuEnHrvrR3al8FynXyQoLVcMsEvi/4bfWQNrUp2hDRTn
 WtLC2jSMvePNzOhe4t+Pdmw8UoEscXdzNEyGgto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvLrRC9H5qyo42tE5gBmPJingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0rd+DUURy
 9IxEQ4ESxqlqvOcwLjqe+Y506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLoXmuuyi2a5WDpfsF+P/oI84nTJzRw327/oWDbQUoXSGZwLxxrJ/
 Aoq+UzEC0gYGoKS2wCOrHi2jOjWwWT8dK4NQejQGvlC3wTImz175ActfUS/iem0jAi5Qd03A
 1wZ/G8ioLY/8GSvT8LhRFuorXicpBkeVtFMVeog52ml6IDZ/gKYDWgsVSNaZZots8peeNAx/
 gbXxZWzX2Up6eDLDyvHrd94sA9eJwBEJGMmOhEqZDI76vzphqwipBLFH+dsRfvdYsLOJd3g/
 9ybhHFg1+1O0pBRiPzTEUPv2Gz1+MWQJuIhzkCOBz/+sFskDGKwT9bwgWU3+8qsO2pworOpm
 HEf0/aT4+kVZX1mvHzcGb5ddF1FChvsDdE9vbKMN8N7n9hV0yT/Fb28GRknTKuTDu4KeCXyf
 GjYsh5L6ZlYMROCNPEnO9/tVZVwlvK+RbwJs8w4ifIXOvBMmPKvpnkyNSZ8IUi2+KTTrU3PE
 cjCKpv9ZZrrIa9m0CC3V48g7FPf/QhnnTm7bcmin3yaPU+2OCb9pUEtbAHfMYjULcqs/G3oz
 jqoH5DVlEkFCbGhO3m/HEx6BQliEEXXzKve86R/HtNv6CI/cI39I5c9GY8cRrE=
IronPort-HdrOrdr: A9a23:fE8LuaNbuPenKsBcTgWjsMiBIKoaSvp037BK7S1MoH1uA6mlfq
 WV9sjzuiWatN98Yh8dcLO7Scu9qBHnlaKdiLN5VduftWHd01dAR7sSjrcKrQeAJ8X/nNQtr5
 uJccJFeaDN5Y4Rt7eH3OG6eexQv+Vu6MqT9IPjJ+8Gd3ATV0lnhT0JbTqzIwlNayRtI4E2L5
 aY7tovnUvaRZxGBv7LYEXsRoL41qT2qK4=
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="104161492"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 12/15] x86/emul: Switch x86_emulate_ctxt to cpu_policy
Date: Tue, 4 Apr 2023 10:52:19 +0100
Message-ID: <20230404095222.1373721-13-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

As with struct domain, retain cpuid as a valid alias for local code clarity.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Retain cpuid compatibility alias.
 * Split out of RFC patch.
---
 tools/fuzz/x86_instruction_emulator/fuzz-emul.c | 2 +-
 tools/tests/x86_emulator/test_x86_emulator.c    | 2 +-
 tools/tests/x86_emulator/x86-emulate.c          | 2 +-
 xen/arch/x86/hvm/emulate.c                      | 4 ++--
 xen/arch/x86/mm/shadow/hvm.c                    | 2 +-
 xen/arch/x86/pv/emul-priv-op.c                  | 2 +-
 xen/arch/x86/pv/ro-page-fault.c                 | 2 +-
 xen/arch/x86/x86_emulate/private.h              | 4 ++--
 xen/arch/x86/x86_emulate/x86_emulate.h          | 7 +++++--
 9 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
index 966e46bee199..4885a68210d0 100644
--- a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
+++ b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
@@ -893,7 +893,7 @@ int LLVMFuzzerTestOneInput(const uint8_t *data_p, size_t size)
     struct x86_emulate_ctxt ctxt = {
         .data = &state,
         .regs = &input.regs,
-        .cpuid = &cp,
+        .cpu_policy = &cp,
         .addr_size = 8 * sizeof(void *),
         .sp_size = 8 * sizeof(void *),
     };
diff --git a/tools/tests/x86_emulator/test_x86_emulator.c b/tools/tests/x86_emulator/test_x86_emulator.c
index 31586f805726..7b7fbaaf45ec 100644
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -909,7 +909,7 @@ int main(int argc, char **argv)
 
     ctxt.regs = &regs;
     ctxt.force_writeback = 0;
-    ctxt.cpuid     = &cp;
+    ctxt.cpu_policy = &cp;
     ctxt.lma       = sizeof(void *) == 8;
     ctxt.addr_size = 8 * sizeof(void *);
     ctxt.sp_size   = 8 * sizeof(void *);
diff --git a/tools/tests/x86_emulator/x86-emulate.c b/tools/tests/x86_emulator/x86-emulate.c
index f6ee09439751..2692404df906 100644
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -25,7 +25,7 @@
 #endif
 
 uint32_t mxcsr_mask = 0x0000ffbf;
-struct cpuid_policy cp;
+struct cpu_policy cp;
 
 static char fpu_save_area[0x4000] __attribute__((__aligned__((64))));
 static bool use_xsave;
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 95364deb1996..5691725d6c6f 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2771,7 +2771,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
 void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     unsigned int errcode)
 {
-    struct hvm_emulate_ctxt ctx = {{ 0 }};
+    struct hvm_emulate_ctxt ctx = {};
     int rc;
 
     hvm_emulate_init_once(&ctx, NULL, guest_cpu_user_regs());
@@ -2846,7 +2846,7 @@ void hvm_emulate_init_once(
 
     hvmemul_ctxt->validate = validate;
     hvmemul_ctxt->ctxt.regs = regs;
-    hvmemul_ctxt->ctxt.cpuid = curr->domain->arch.cpuid;
+    hvmemul_ctxt->ctxt.cpu_policy = curr->domain->arch.cpu_policy;
     hvmemul_ctxt->ctxt.force_writeback = true;
 }
 
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index e2ee1c77056f..cc84af01925a 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -319,7 +319,7 @@ const struct x86_emulate_ops *shadow_init_emulation(
     memset(sh_ctxt, 0, sizeof(*sh_ctxt));
 
     sh_ctxt->ctxt.regs = regs;
-    sh_ctxt->ctxt.cpuid = curr->domain->arch.cpuid;
+    sh_ctxt->ctxt.cpu_policy = curr->domain->arch.cpu_policy;
     sh_ctxt->ctxt.lma = hvm_long_mode_active(curr);
 
     /* Segment cache initialisation. Primed with CS. */
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 5da00e24e4ff..ab52768271c5 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -1327,7 +1327,7 @@ int pv_emulate_privileged_op(struct cpu_user_regs *regs)
     struct domain *currd = curr->domain;
     struct priv_op_ctxt ctxt = {
         .ctxt.regs = regs,
-        .ctxt.cpuid = currd->arch.cpuid,
+        .ctxt.cpu_policy = currd->arch.cpu_policy,
         .ctxt.lma = !is_pv_32bit_domain(currd),
     };
     int rc;
diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
index 5963f5ee2d51..0d02c7d2ab10 100644
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -356,7 +356,7 @@ int pv_ro_page_fault(unsigned long addr, struct cpu_user_regs *regs)
     unsigned int addr_size = is_pv_32bit_domain(currd) ? 32 : BITS_PER_LONG;
     struct x86_emulate_ctxt ctxt = {
         .regs      = regs,
-        .cpuid     = currd->arch.cpuid,
+        .cpu_policy = currd->arch.cpu_policy,
         .addr_size = addr_size,
         .sp_size   = addr_size,
         .lma       = addr_size > 32,
diff --git a/xen/arch/x86/x86_emulate/private.h b/xen/arch/x86/x86_emulate/private.h
index 653a298c705b..8dee019731ae 100644
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -505,7 +505,7 @@ in_protmode(
 })
 
 static inline bool
-_amd_like(const struct cpuid_policy *cp)
+_amd_like(const struct cpu_policy *cp)
 {
     return cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON);
 }
@@ -513,7 +513,7 @@ _amd_like(const struct cpuid_policy *cp)
 static inline bool
 amd_like(const struct x86_emulate_ctxt *ctxt)
 {
-    return _amd_like(ctxt->cpuid);
+    return _amd_like(ctxt->cpu_policy);
 }
 
 #define vcpu_has_fpu()         (ctxt->cpuid->basic.fpu)
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.h b/xen/arch/x86/x86_emulate/x86_emulate.h
index 75015104fbdb..0139d16da70c 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -565,8 +565,11 @@ struct x86_emulate_ctxt
      * Input-only state:
      */
 
-    /* CPUID Policy for the domain. */
-    const struct cpuid_policy *cpuid;
+    /* CPU policy for the domain.  Allow aliases for local code clarity. */
+    union {
+        struct cpu_policy *cpu_policy;
+        struct cpu_policy *cpuid;
+    };
 
     /* Set this if writes may have side effects. */
     bool force_writeback;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:59:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 09:59:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517752.803585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdRZ-0003mr-FD; Tue, 04 Apr 2023 09:59:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517752.803585; Tue, 04 Apr 2023 09:59:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdRZ-0003mO-Bx; Tue, 04 Apr 2023 09:59:13 +0000
Received: by outflank-mailman (input) for mailman id 517752;
 Tue, 04 Apr 2023 09:59:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdMK-00056d-2X
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 09:53:48 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 962f50e3-d2ce-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 11:53:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 962f50e3-d2ce-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680602025;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=BD8/V6HTXPI8zjfRwIUp+LKbifOL/pPEYn5jnL6tGjU=;
  b=aEhT89AkO8KrOFxlJwGFdsHqSg/+e2oJSGcN+/+4ebY3pOECirJTasTI
   NJJW2kRRMv4SSesCIJgTczQdOJ1FVlZKpkVyg9387ZRhlEgNqiFgc7JS7
   UNO4QXgeunyYt0obMr3+kHTO9kb+XUujwqDpGY3MDaNprTswMPVREsXIh
   E=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 103612437
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:KGRsUKqQKYmTq8pX0IbW2gF8deReBmJnZRIvgKrLsJaIsI4StFCzt
 garIBmFPPyKZGb0KY0iOd61oRsDvpDXm4VhSwo5pSgxHy4ao5uZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WNwUmAWP6gR5weCzyVNVfrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXAGxUbhuAm6WU+7m2d8Q9tu4mEMX6A6pK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVxrl6PqLVxyG/U1AFri5DmMcbPe8zMTsJQ9qqdj
 juepDqgWE1Ba7RzzxKD/lb0q+qSkBj6G4YOO5eU8cN2sgeMkzl75Bo+CgLg/KjRZlSFc8JSL
 QkY9zQjqYA29Ve3VZ/tUhugunmGsxUAHd1KHIUSyiuA167V6AaxHXUfQ3hKb9lOnNAybSwn0
 BmOhdyBONB0mOTLEzTHrO7S9G7sf3FPdgfueBPoUyNZutnoo510rCnEQ+tOQZ6fi+H5CA7Zl
 mXiQDcFu1kDsSIa//zlrQia3Gz2+cGhoh0dvVuOAD/8hu9tTMv8PtHztwCGhRpVBNzBJmRtq
 kTojCR3AAomKZiW3BKAT+wWdF1Cz6bUaWaM6bKD8nRIythMx5JAVdoKiN2GDB01WvvogBewC
 KMphStf5YVIIFyhZrJtboS6BqwClPawTYm5CKGONYAQMvCdkTNrGwk3PSatM53FyhBwwcnTx
 7/AGSpTMZrqIfs+l2fnLwvs+bQq2jo/1QvueHwP9Dz+ieD2TCfMGd843K6mMrhRAFWs/F+Er
 L6y9qKil31ibQEJSnKIrtJJdAxVdChT6FKfg5U/S9Nv6zFOQAkJY8I9C5t4E2C5t8y5Ttv1w
 0w=
IronPort-HdrOrdr: A9a23:XbA7vaCUYNajGozlHelo55DYdb4zR+YMi2TDt3oddfU1SL38qy
 nKpp4mPHDP5wr5NEtPpTniAtjjfZq/z/5ICOAqVN/PYOCPggCVxepZnOjfKlPbehEX9oRmpN
 1dm6oVMqyMMbCt5/yKnDVRELwbsaa6GLjDv5a785/0JzsaE52J6W1Ce2GmO3wzfiZqL7wjGq
 GR48JWzgDQAkj+PqyAdx84t/GonayzqK7b
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="103612437"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 15/15] x86: Remove temporary {cpuid,msr}_policy defines
Date: Tue, 4 Apr 2023 10:52:22 +0100
Message-ID: <20230404095222.1373721-16-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

With all code areas updated, drop the temporary defines and adjust all
remaining users.

No practical change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Split out of RFC patch
---
 xen/arch/x86/cpu/mcheck/mce_intel.c    |  2 +-
 xen/arch/x86/cpuid.c                   |  2 +-
 xen/arch/x86/domain.c                  |  2 +-
 xen/arch/x86/hvm/hvm.c                 |  4 ++--
 xen/arch/x86/hvm/svm/svm.c             |  2 +-
 xen/arch/x86/hvm/vlapic.c              |  2 +-
 xen/arch/x86/hvm/vmx/vmx.c             |  8 ++++----
 xen/arch/x86/include/asm/msr.h         |  2 +-
 xen/arch/x86/msr.c                     | 20 +++++++++-----------
 xen/arch/x86/pv/domain.c               |  2 +-
 xen/arch/x86/pv/emul-priv-op.c         |  4 ++--
 xen/arch/x86/traps.c                   |  2 +-
 xen/arch/x86/x86_emulate/x86_emulate.c |  2 +-
 xen/include/xen/lib/x86/cpu-policy.h   |  4 ----
 14 files changed, 26 insertions(+), 32 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index 301533722d1a..2f23f02923d2 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -1008,7 +1008,7 @@ int vmce_intel_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 
 int vmce_intel_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 {
-    const struct cpuid_policy *cp = v->domain->arch.cpuid;
+    const struct cpu_policy *cp = v->domain->arch.cpu_policy;
     unsigned int bank = msr - MSR_IA32_MC0_CTL2;
 
     switch ( msr )
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 3f20c342fde8..f311372cdf1f 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -36,7 +36,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
                  uint32_t subleaf, struct cpuid_leaf *res)
 {
     const struct domain *d = v->domain;
-    const struct cpuid_policy *p = d->arch.cpuid;
+    const struct cpu_policy *p = d->arch.cpu_policy;
 
     *res = EMPTY_LEAF;
 
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index b23e5014d1d3..91f57e3a3b17 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -283,7 +283,7 @@ void update_guest_memory_policy(struct vcpu *v,
 
 void domain_cpu_policy_changed(struct domain *d)
 {
-    const struct cpuid_policy *p = d->arch.cpuid;
+    const struct cpu_policy *p = d->arch.cpu_policy;
     struct vcpu *v;
 
     if ( is_pv_domain(d) )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 675c523d9909..7020fdce995c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -924,7 +924,7 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
                            signed int cr0_pg)
 {
     const struct domain *d = v->domain;
-    const struct cpuid_policy *p = d->arch.cpuid;
+    const struct cpu_policy *p = d->arch.cpu_policy;
 
     if ( value & ~EFER_KNOWN_MASK )
         return "Unknown bits set";
@@ -961,7 +961,7 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
 /* These bits in CR4 can be set by the guest. */
 unsigned long hvm_cr4_guest_valid_bits(const struct domain *d)
 {
-    const struct cpuid_policy *p = d->arch.cpuid;
+    const struct cpu_policy *p = d->arch.cpu_policy;
     bool mce, vmxe, cet;
 
     /* Logic broken out simply to aid readability below. */
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 02563e4b7027..b8fe759db456 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -583,7 +583,7 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
 {
     struct svm_vcpu *svm = &v->arch.hvm.svm;
     struct vmcb_struct *vmcb = svm->vmcb;
-    const struct cpuid_policy *cp = v->domain->arch.cpuid;
+    const struct cpu_policy *cp = v->domain->arch.cpu_policy;
     u32 bitmap = vmcb_get_exception_intercepts(vmcb);
 
     if ( opt_hvm_fep ||
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index dc93b5e930b1..f4f5ffc673e5 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1083,7 +1083,7 @@ static void set_x2apic_id(struct vlapic *vlapic)
 
 int guest_wrmsr_apic_base(struct vcpu *v, uint64_t value)
 {
-    const struct cpuid_policy *cp = v->domain->arch.cpuid;
+    const struct cpu_policy *cp = v->domain->arch.cpu_policy;
     struct vlapic *vlapic = vcpu_vlapic(v);
 
     if ( !has_vlapic(v->domain) )
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index e05588505871..ee4c41628cc3 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -710,7 +710,7 @@ static void vmx_restore_host_msrs(void)
 
 static void vmx_save_guest_msrs(struct vcpu *v)
 {
-    const struct cpuid_policy *cp = v->domain->arch.cpuid;
+    const struct cpu_policy *cp = v->domain->arch.cpu_policy;
     struct vcpu_msrs *msrs = v->arch.msrs;
 
     /*
@@ -731,7 +731,7 @@ static void vmx_save_guest_msrs(struct vcpu *v)
 
 static void vmx_restore_guest_msrs(struct vcpu *v)
 {
-    const struct cpuid_policy *cp = v->domain->arch.cpuid;
+    const struct cpu_policy *cp = v->domain->arch.cpu_policy;
     const struct vcpu_msrs *msrs = v->arch.msrs;
 
     write_gs_shadow(v->arch.hvm.vmx.shadow_gs);
@@ -784,7 +784,7 @@ void vmx_update_exception_bitmap(struct vcpu *v)
 
 static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
 {
-    const struct cpuid_policy *cp = v->domain->arch.cpuid;
+    const struct cpu_policy *cp = v->domain->arch.cpu_policy;
     int rc = 0;
 
     if ( opt_hvm_fep ||
@@ -3521,7 +3521,7 @@ static int cf_check vmx_msr_write_intercept(
     unsigned int msr, uint64_t msr_content)
 {
     struct vcpu *v = current;
-    const struct cpuid_policy *cp = v->domain->arch.cpuid;
+    const struct cpu_policy *cp = v->domain->arch.cpu_policy;
 
     HVM_DBG_LOG(DBG_LEVEL_MSR, "ecx=%#x, msr_value=%#"PRIx64, msr, msr_content);
 
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index 458841733e18..1d8ea9f26faa 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -290,7 +290,7 @@ static inline void wrmsr_tsc_aux(uint32_t val)
     }
 }
 
-uint64_t msr_spec_ctrl_valid_bits(const struct cpuid_policy *cp);
+uint64_t msr_spec_ctrl_valid_bits(const struct cpu_policy *cp);
 
 /* Container object for per-vCPU MSRs */
 struct vcpu_msrs
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 802fc60baf81..2e16818bf509 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -54,8 +54,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
 {
     const struct vcpu *curr = current;
     const struct domain *d = v->domain;
-    const struct cpuid_policy *cp = d->arch.cpuid;
-    const struct msr_policy *mp = d->arch.msr;
+    const struct cpu_policy *cp = d->arch.cpu_policy;
     const struct vcpu_msrs *msrs = v->arch.msrs;
     int ret = X86EMUL_OKAY;
 
@@ -139,13 +138,13 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         goto get_reg;
 
     case MSR_INTEL_PLATFORM_INFO:
-        *val = mp->platform_info.raw;
+        *val = cp->platform_info.raw;
         break;
 
     case MSR_ARCH_CAPABILITIES:
         if ( !cp->feat.arch_caps )
             goto gp_fault;
-        *val = mp->arch_caps.raw;
+        *val = cp->arch_caps.raw;
         break;
 
     case MSR_INTEL_MISC_FEATURES_ENABLES:
@@ -326,7 +325,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
  * separate CPUID features for this functionality, but only set will be
  * active.
  */
-uint64_t msr_spec_ctrl_valid_bits(const struct cpuid_policy *cp)
+uint64_t msr_spec_ctrl_valid_bits(const struct cpu_policy *cp)
 {
     bool ssbd = cp->feat.ssbd || cp->extd.amd_ssbd;
     bool psfd = cp->feat.intel_psfd || cp->extd.psfd;
@@ -345,8 +344,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 {
     const struct vcpu *curr = current;
     struct domain *d = v->domain;
-    const struct cpuid_policy *cp = d->arch.cpuid;
-    const struct msr_policy *mp = d->arch.msr;
+    const struct cpu_policy *cp = d->arch.cpu_policy;
     struct vcpu_msrs *msrs = v->arch.msrs;
     int ret = X86EMUL_OKAY;
 
@@ -387,7 +385,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
          * for backwards compatiblity, the OS should write 0 to it before
          * trying to access the current microcode version.
          */
-        if ( d->arch.cpuid->x86_vendor != X86_VENDOR_INTEL || val != 0 )
+        if ( cp->x86_vendor != X86_VENDOR_INTEL || val != 0 )
             goto gp_fault;
         break;
 
@@ -397,7 +395,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
          * to AMD CPUs as well (at least the architectural/CPUID part does).
          */
         if ( is_pv_domain(d) ||
-             d->arch.cpuid->x86_vendor != X86_VENDOR_AMD )
+             cp->x86_vendor != X86_VENDOR_AMD )
             goto gp_fault;
         break;
 
@@ -409,7 +407,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
          * by any CPUID bit.
          */
         if ( is_pv_domain(d) ||
-             d->arch.cpuid->x86_vendor != X86_VENDOR_INTEL )
+             cp->x86_vendor != X86_VENDOR_INTEL )
             goto gp_fault;
         break;
 
@@ -446,7 +444,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         bool old_cpuid_faulting = msrs->misc_features_enables.cpuid_faulting;
 
         rsvd = ~0ull;
-        if ( mp->platform_info.cpuid_faulting )
+        if ( cp->platform_info.cpuid_faulting )
             rsvd &= ~MSR_MISC_FEATURES_CPUID_FAULTING;
 
         if ( val & rsvd )
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 95492715d8ad..5c92812dc67a 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -146,7 +146,7 @@ static void release_compat_l4(struct vcpu *v)
 
 unsigned long pv_fixup_guest_cr4(const struct vcpu *v, unsigned long cr4)
 {
-    const struct cpuid_policy *p = v->domain->arch.cpuid;
+    const struct cpu_policy *p = v->domain->arch.cpu_policy;
 
     /* Discard attempts to set guest controllable bits outside of the policy. */
     cr4 &= ~((p->basic.tsc     ? 0 : X86_CR4_TSD)      |
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index ab52768271c5..04416f197951 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -885,7 +885,7 @@ static int cf_check read_msr(
 {
     struct vcpu *curr = current;
     const struct domain *currd = curr->domain;
-    const struct cpuid_policy *cp = currd->arch.cpuid;
+    const struct cpu_policy *cp = currd->arch.cpu_policy;
     bool vpmu_msr = false, warn = false;
     uint64_t tmp;
     int ret;
@@ -1034,7 +1034,7 @@ static int cf_check write_msr(
 {
     struct vcpu *curr = current;
     const struct domain *currd = curr->domain;
-    const struct cpuid_policy *cp = currd->arch.cpuid;
+    const struct cpu_policy *cp = currd->arch.cpu_policy;
     bool vpmu_msr = false;
     int ret;
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index c36e3f855bd9..e4f8b158e1ed 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1036,7 +1036,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
                              uint32_t subleaf, struct cpuid_leaf *res)
 {
     const struct domain *d = v->domain;
-    const struct cpuid_policy *p = d->arch.cpuid;
+    const struct cpu_policy *p = d->arch.cpu_policy;
     uint32_t base = is_viridian_domain(d) ? 0x40000100 : 0x40000000;
     uint32_t idx  = leaf - base;
     unsigned int limit = is_viridian_domain(d) ? p->hv2_limit : p->hv_limit;
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index 5a0ec5900a93..c69f7c65f526 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -848,7 +848,7 @@ protmode_load_seg(
     struct x86_emulate_ctxt *ctxt,
     const struct x86_emulate_ops *ops)
 {
-    const struct cpuid_policy *cp = ctxt->cpuid;
+    const struct cpu_policy *cp = ctxt->cpu_policy;
     enum x86_segment sel_seg = (sel & 4) ? x86_seg_ldtr : x86_seg_gdtr;
     struct { uint32_t a, b; } desc, desc_hi = {};
     uint8_t dpl, rpl;
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index dee46adeff17..182cf77cffaf 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -375,10 +375,6 @@ struct cpu_policy
     uint8_t x86_vendor;
 };
 
-/* Temporary */
-#define cpuid_policy cpu_policy
-#define msr_policy cpu_policy
-
 struct cpu_policy_errors
 {
     uint32_t leaf, subleaf;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:59:36 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 13/15] tools/fuzz: Rework afl-policy-fuzzer
Date: Tue, 4 Apr 2023 10:52:20 +0100
Message-ID: <20230404095222.1373721-14-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

With cpuid_policy and msr_policy merged to form cpu_policy, merge the
respective fuzzing logic.
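[Editor's illustration] The reworked check_policy() below follows the standard serialisation fuzzing invariant: serialise the (fuzzer-mutated) policy, deserialise into a fresh object, and require the result to be bitwise identical. A standalone sketch of that invariant with a toy struct (not the libx86 API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy stand-in for cpu_policy; 16 + 16 bytes, no padding. */
struct toy_policy { uint32_t leaf[4]; uint64_t msr[2]; };

/* Toy serialiser/deserialiser: a flat copy either way. */
static size_t toy_copy_to_buffer(const struct toy_policy *p, uint8_t *buf)
{
    memcpy(buf, p, sizeof(*p));
    return sizeof(*p);
}

static void toy_copy_from_buffer(struct toy_policy *p, const uint8_t *buf)
{
    memcpy(p, buf, sizeof(*p));
}

/* The fuzzer's invariant: a round trip must be the identity. */
static int toy_round_trip_ok(const struct toy_policy *p)
{
    uint8_t buf[sizeof(*p)];
    struct toy_policy reloaded = { 0 };

    toy_copy_to_buffer(p, buf);
    toy_copy_from_buffer(&reloaded, buf);

    return memcmp(p, &reloaded, sizeof(*p)) == 0;
}
```

In the real fuzzer the two directions go through x86_cpuid_copy_to/from_buffer() and x86_msr_copy_to/from_buffer(), and out-of-range leaves are cleared first precisely because they are not expected to survive the round trip.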

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * New
---
 tools/fuzz/cpu-policy/afl-policy-fuzzer.c | 57 ++++++++---------------
 1 file changed, 20 insertions(+), 37 deletions(-)

diff --git a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
index 0ce3d8e16626..466bdbb1d91a 100644
--- a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
+++ b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
@@ -16,16 +16,19 @@ static bool debug;
 
 #define EMPTY_LEAF ((struct cpuid_leaf){})
 
-static void check_cpuid(struct cpuid_policy *cp)
+static void check_policy(struct cpu_policy *cp)
 {
-    struct cpuid_policy new = {};
+    struct cpu_policy new = {};
     size_t data_end;
     xen_cpuid_leaf_t *leaves = malloc(CPUID_MAX_SERIALISED_LEAVES *
                                       sizeof(xen_cpuid_leaf_t));
-    unsigned int nr = CPUID_MAX_SERIALISED_LEAVES;
+    xen_msr_entry_t *msrs = malloc(MSR_MAX_SERIALISED_ENTRIES *
+                                   sizeof(xen_msr_entry_t));
+    unsigned int nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
+    unsigned int nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
     int rc;
 
-    if ( !leaves )
+    if ( !leaves || !msrs )
         return;
 
     /*
@@ -49,12 +52,19 @@ static void check_cpuid(struct cpuid_policy *cp)
     x86_cpuid_policy_recalc_synth(cp);
 
     /* Serialise... */
-    rc = x86_cpuid_copy_to_buffer(cp, leaves, &nr);
+    rc = x86_cpuid_copy_to_buffer(cp, leaves, &nr_leaves);
+    assert(rc == 0);
+    assert(nr_leaves <= CPUID_MAX_SERIALISED_LEAVES);
+
+    rc = x86_msr_copy_to_buffer(cp, msrs, &nr_msrs);
     assert(rc == 0);
-    assert(nr <= CPUID_MAX_SERIALISED_LEAVES);
+    assert(nr_msrs <= MSR_MAX_SERIALISED_ENTRIES);
 
     /* ... and deserialise. */
-    rc = x86_cpuid_copy_from_buffer(&new, leaves, nr, NULL, NULL);
+    rc = x86_cpuid_copy_from_buffer(&new, leaves, nr_leaves, NULL, NULL);
+    assert(rc == 0);
+
+    rc = x86_msr_copy_from_buffer(&new, msrs, nr_msrs, NULL);
     assert(rc == 0);
 
     /* The result after serialisation/deserialisaion should be identical... */
@@ -76,28 +86,6 @@ static void check_cpuid(struct cpuid_policy *cp)
     free(leaves);
 }
 
-static void check_msr(struct msr_policy *mp)
-{
-    struct msr_policy new = {};
-    xen_msr_entry_t *msrs = malloc(MSR_MAX_SERIALISED_ENTRIES *
-                                   sizeof(xen_msr_entry_t));
-    unsigned int nr = MSR_MAX_SERIALISED_ENTRIES;
-    int rc;
-
-    if ( !msrs )
-        return;
-
-    rc = x86_msr_copy_to_buffer(mp, msrs, &nr);
-    assert(rc == 0);
-    assert(nr <= MSR_MAX_SERIALISED_ENTRIES);
-
-    rc = x86_msr_copy_from_buffer(&new, msrs, nr, NULL);
-    assert(rc == 0);
-    assert(memcmp(mp, &new, sizeof(*mp)) == 0);
-
-    free(msrs);
-}
-
 int main(int argc, char **argv)
 {
     FILE *fp = NULL;
@@ -144,8 +132,7 @@ int main(int argc, char **argv)
     while ( __AFL_LOOP(1000) )
 #endif
     {
-        struct cpuid_policy *cp = NULL;
-        struct msr_policy *mp = NULL;
+        struct cpu_policy *cp = NULL;
 
         if ( fp != stdin )
         {
@@ -160,22 +147,18 @@ int main(int argc, char **argv)
         }
 
         cp = calloc(1, sizeof(*cp));
-        mp = calloc(1, sizeof(*mp));
-        if ( !cp || !mp )
+        if ( !cp )
             goto skip;
 
         fread(cp, sizeof(*cp), 1, fp);
-        fread(mp, sizeof(*mp), 1, fp);
 
         if ( !feof(fp) )
             goto skip;
 
-        check_cpuid(cp);
-        check_msr(mp);
+        check_policy(cp);
 
     skip:
         free(cp);
-        free(mp);
 
         if ( fp != stdin )
         {
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 09:59:37 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 14/15] libx86: Update library API for cpu_policy
Date: Tue, 4 Apr 2023 10:52:21 +0100
Message-ID: <20230404095222.1373721-15-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Adjust the API and comments appropriately.

x86_cpu_policy_fill_native() will eventually contain MSR reads too; for now,
leave a TODO in their place.

No practical change.
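[Editor's illustration] Among the renamed helpers below, cpu_policy_xcr0_max() simply folds the two 32-bit CPUID register halves back into one 64-bit XCR0 mask. A self-contained sketch of that computation (field names assumed from the diff context, not the full header):

```c
#include <stdint.h>

/* Hypothetical mirror of the xstate fields visible in the diff. */
struct xstate_bits {
    uint32_t xcr0_low, xcr0_high;
    uint32_t xss_low,  xss_high;
};

/*
 * Same shape as cpu_policy_xcr0_max(): CPUID leaf 0xd reports the
 * supported XCR0 bits split across two 32-bit registers; recombine
 * them into a single 64-bit value.
 */
static inline uint64_t xcr0_max(const struct xstate_bits *x)
{
    return ((uint64_t)x->xcr0_high << 32) | x->xcr0_low;
}
```

The cast before the shift matters: shifting a plain uint32_t left by 32 is undefined behaviour in C, so the high half must be widened first.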

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * New
---
 tools/fuzz/cpu-policy/afl-policy-fuzzer.c |  4 +--
 tools/tests/cpu-policy/test-cpu-policy.c  |  4 +--
 tools/tests/x86_emulator/x86-emulate.c    |  2 +-
 xen/arch/x86/cpu-policy.c                 |  2 +-
 xen/arch/x86/domctl.c                     |  2 +-
 xen/arch/x86/xstate.c                     |  4 +--
 xen/include/xen/lib/x86/cpu-policy.h      | 42 ++++++++++++-----------
 xen/lib/x86/cpuid.c                       | 24 +++++++------
 xen/lib/x86/msr.c                         |  4 +--
 9 files changed, 46 insertions(+), 42 deletions(-)

diff --git a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
index 466bdbb1d91a..7d8467b4b258 100644
--- a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
+++ b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
@@ -48,8 +48,8 @@ static void check_policy(struct cpu_policy *cp)
      * Fix up the data in the source policy which isn't expected to survive
      * serialisation.
      */
-    x86_cpuid_policy_clear_out_of_range_leaves(cp);
-    x86_cpuid_policy_recalc_synth(cp);
+    x86_cpu_policy_clear_out_of_range_leaves(cp);
+    x86_cpu_policy_recalc_synth(cp);
 
     /* Serialise... */
     rc = x86_cpuid_copy_to_buffer(cp, leaves, &nr_leaves);
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index a4ca07f33973..f1d968adfc39 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -105,7 +105,7 @@ static void test_cpuid_current(void)
 
     printf("Testing CPUID on current CPU\n");
 
-    x86_cpuid_policy_fill_native(&p);
+    x86_cpu_policy_fill_native(&p);
 
     rc = x86_cpuid_copy_to_buffer(&p, leaves, &nr);
     if ( rc != 0 )
@@ -554,7 +554,7 @@ static void test_cpuid_out_of_range_clearing(void)
         void *ptr;
         unsigned int nr_markers;
 
-        x86_cpuid_policy_clear_out_of_range_leaves(p);
+        x86_cpu_policy_clear_out_of_range_leaves(p);
 
         /* Count the number of 0xc2's still remaining. */
         for ( ptr = p, nr_markers = 0;
diff --git a/tools/tests/x86_emulator/x86-emulate.c b/tools/tests/x86_emulator/x86-emulate.c
index 2692404df906..7d2d57f7591a 100644
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -75,7 +75,7 @@ bool emul_test_init(void)
 
     unsigned long sp;
 
-    x86_cpuid_policy_fill_native(&cp);
+    x86_cpu_policy_fill_native(&cp);
 
     /*
      * The emulator doesn't use these instructions, so can always emulate
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 83186e940ca7..1140f0b365cd 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -347,7 +347,7 @@ static void __init calculate_raw_policy(void)
 {
     struct cpu_policy *p = &raw_cpu_policy;
 
-    x86_cpuid_policy_fill_native(p);
+    x86_cpu_policy_fill_native(p);
 
     /* Nothing good will come from Xen and libx86 disagreeing on vendor. */
     ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index c02528594102..1a8b4cff48ee 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -66,7 +66,7 @@ static int update_domain_cpu_policy(struct domain *d,
         goto out;
 
     /* Trim any newly-stale out-of-range leaves. */
-    x86_cpuid_policy_clear_out_of_range_leaves(new);
+    x86_cpu_policy_clear_out_of_range_leaves(new);
 
     /* Audit the combined dataset. */
     ret = x86_cpu_policies_are_compatible(sys, new, &err);
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index d481e1db3e7e..92496f379546 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -684,7 +684,7 @@ void xstate_init(struct cpuinfo_x86 *c)
 int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
                     const struct xsave_hdr *hdr)
 {
-    uint64_t xcr0_max = cpuid_policy_xcr0_max(d->arch.cpuid);
+    uint64_t xcr0_max = cpu_policy_xcr0_max(d->arch.cpuid);
     unsigned int i;
 
     if ( (hdr->xstate_bv & ~xcr0_accum) ||
@@ -708,7 +708,7 @@ int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
 int handle_xsetbv(u32 index, u64 new_bv)
 {
     struct vcpu *curr = current;
-    uint64_t xcr0_max = cpuid_policy_xcr0_max(curr->domain->arch.cpuid);
+    uint64_t xcr0_max = cpu_policy_xcr0_max(curr->domain->arch.cpuid);
     u64 mask;
 
     if ( index != XCR_XFEATURE_ENABLED_MASK )
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 57b4633c861e..dee46adeff17 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -399,12 +399,12 @@ void x86_cpu_policy_to_featureset(const struct cpu_policy *p,
 void x86_cpu_featureset_to_policy(const uint32_t fs[FEATURESET_NR_ENTRIES],
                                   struct cpu_policy *p);
 
-static inline uint64_t cpuid_policy_xcr0_max(const struct cpuid_policy *p)
+static inline uint64_t cpu_policy_xcr0_max(const struct cpu_policy *p)
 {
     return ((uint64_t)p->xstate.xcr0_high << 32) | p->xstate.xcr0_low;
 }
 
-static inline uint64_t cpuid_policy_xstates(const struct cpuid_policy *p)
+static inline uint64_t cpu_policy_xstates(const struct cpu_policy *p)
 {
     uint64_t val = p->xstate.xcr0_high | p->xstate.xss_high;
 
@@ -414,18 +414,18 @@ static inline uint64_t cpuid_policy_xstates(const struct cpuid_policy *p)
 const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature);
 
 /**
- * Recalculate the content in a CPUID policy which is derived from raw data.
+ * Recalculate the content in a CPU policy which is derived from raw data.
  */
-void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p);
+void x86_cpu_policy_recalc_synth(struct cpu_policy *p);
 
 /**
- * Fill a CPUID policy using the native CPUID instruction.
+ * Fill a CPU policy using the native CPUID/RDMSR instructions.
  *
  * No sanitisation is performed, but synthesised values are calculated.
  * Values may be influenced by a hypervisor or from masking/faulting
  * configuration.
  */
-void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
+void x86_cpu_policy_fill_native(struct cpu_policy *p);
 
 /**
  * Clear leaf data beyond the policies max leaf/subleaf settings.
@@ -436,7 +436,7 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
  * with out-of-range leaves with stale content in them.  This helper clears
  * them.
  */
-void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
+void x86_cpu_policy_clear_out_of_range_leaves(struct cpu_policy *p);
 
 #ifdef __XEN__
 #include <public/arch-x86/xen.h>
@@ -449,9 +449,10 @@ typedef xen_msr_entry_t msr_entry_buffer_t[];
 #endif
 
 /**
- * Serialise a cpuid_policy object into an array of cpuid leaves.
+ * Serialise the CPUID leaves of a cpu_policy object into an array of cpuid
+ * leaves.
  *
- * @param policy     The cpuid_policy to serialise.
+ * @param policy     The cpu_policy to serialise.
  * @param leaves     The array of leaves to serialise into.
  * @param nr_entries The number of entries in 'leaves'.
  * @returns -errno
@@ -460,13 +461,14 @@ typedef xen_msr_entry_t msr_entry_buffer_t[];
  * leaves array is too short.  On success, nr_entries is updated with the
  * actual number of leaves written.
  */
-int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
+int x86_cpuid_copy_to_buffer(const struct cpu_policy *policy,
                              cpuid_leaf_buffer_t leaves, uint32_t *nr_entries);
 
 /**
- * Unserialise a cpuid_policy object from an array of cpuid leaves.
+ * Unserialise the CPUID leaves of a cpu_policy object from an array of cpuid
+ * leaves.
  *
- * @param policy      The cpuid_policy to unserialise into.
+ * @param policy      The cpu_policy to unserialise into.
  * @param leaves      The array of leaves to unserialise from.
  * @param nr_entries  The number of entries in 'leaves'.
  * @param err_leaf    Optional hint for error diagnostics.
@@ -474,21 +476,21 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
  * @returns -errno
  *
  * Reads at most CPUID_MAX_SERIALISED_LEAVES.  May return -ERANGE if an
- * incoming leaf is out of range of cpuid_policy, in which case the optional
+ * incoming leaf is out of range of cpu_policy, in which case the optional
 * err_* pointers will identify the out-of-range indices.
  *
  * No content validation of in-range leaves is performed.  Synthesised data is
  * recalculated.
  */
-int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
+int x86_cpuid_copy_from_buffer(struct cpu_policy *policy,
                                const cpuid_leaf_buffer_t leaves,
                                uint32_t nr_entries, uint32_t *err_leaf,
                                uint32_t *err_subleaf);
 
 /**
- * Serialise an msr_policy object into an array.
+ * Serialise the MSRs of a cpu_policy object into an array.
  *
- * @param policy     The msr_policy to serialise.
+ * @param policy     The cpu_policy to serialise.
  * @param msrs       The array of msrs to serialise into.
  * @param nr_entries The number of entries in 'msrs'.
  * @returns -errno
@@ -497,13 +499,13 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
  * buffer array is too short.  On success, nr_entries is updated with the
  * actual number of msrs written.
  */
-int x86_msr_copy_to_buffer(const struct msr_policy *policy,
+int x86_msr_copy_to_buffer(const struct cpu_policy *policy,
                            msr_entry_buffer_t msrs, uint32_t *nr_entries);
 
 /**
- * Unserialise an msr_policy object from an array of msrs.
+ * Unserialise the MSRs of a cpu_policy object from an array of msrs.
  *
- * @param policy     The msr_policy object to unserialise into.
+ * @param policy     The cpu_policy object to unserialise into.
  * @param msrs       The array of msrs to unserialise from.
  * @param nr_entries The number of entries in 'msrs'.
  * @param err_msr    Optional hint for error diagnostics.
@@ -517,7 +519,7 @@ int x86_msr_copy_to_buffer(const struct msr_policy *policy,
  *
  * No content validation is performed on the data stored in the policy object.
  */
-int x86_msr_copy_from_buffer(struct msr_policy *policy,
+int x86_msr_copy_from_buffer(struct cpu_policy *policy,
                              const msr_entry_buffer_t msrs, uint32_t nr_entries,
                              uint32_t *err_msr);
 
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 734e90823a63..7c7b092736ff 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -102,13 +102,13 @@ void x86_cpu_featureset_to_policy(
     p->feat._7d1             = fs[FEATURESET_7d1];
 }
 
-void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p)
+void x86_cpu_policy_recalc_synth(struct cpu_policy *p)
 {
     p->x86_vendor = x86_cpuid_lookup_vendor(
         p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
 }
 
-void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
+void x86_cpu_policy_fill_native(struct cpu_policy *p)
 {
     unsigned int i;
 
@@ -199,7 +199,7 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
         cpuid_count_leaf(0xd, 0, &p->xstate.raw[0]);
         cpuid_count_leaf(0xd, 1, &p->xstate.raw[1]);
 
-        xstates = cpuid_policy_xstates(p);
+        xstates = cpu_policy_xstates(p);
 
         /* This logic will probably need adjusting when XCR0[63] gets used. */
         BUILD_BUG_ON(ARRAY_SIZE(p->xstate.raw) > 63);
@@ -222,10 +222,12 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
     p->hv_limit = 0;
     p->hv2_limit = 0;
 
-    x86_cpuid_policy_recalc_synth(p);
+    /* TODO MSRs */
+
+    x86_cpu_policy_recalc_synth(p);
 }
 
-void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
+void x86_cpu_policy_clear_out_of_range_leaves(struct cpu_policy *p)
 {
     unsigned int i;
 
@@ -260,7 +262,7 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
         zero_leaves(p->topo.raw, i, ARRAY_SIZE(p->topo.raw) - 1);
     }
 
-    if ( p->basic.max_leaf < 0xd || !cpuid_policy_xstates(p) )
+    if ( p->basic.max_leaf < 0xd || !cpu_policy_xstates(p) )
         memset(p->xstate.raw, 0, sizeof(p->xstate.raw));
     else
     {
@@ -268,7 +270,7 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
         BUILD_BUG_ON(ARRAY_SIZE(p->xstate.raw) > 63);
 
         /* First two leaves always valid.  Rest depend on xstates. */
-        i = max(2, 64 - __builtin_clzll(cpuid_policy_xstates(p)));
+        i = max(2, 64 - __builtin_clzll(cpu_policy_xstates(p)));
 
         zero_leaves(p->xstate.raw, i,
                     ARRAY_SIZE(p->xstate.raw) - 1);
@@ -333,7 +335,7 @@ static int copy_leaf_to_buffer(uint32_t leaf, uint32_t subleaf,
     return 0;
 }
 
-int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
+int x86_cpuid_copy_to_buffer(const struct cpu_policy *p,
                              cpuid_leaf_buffer_t leaves, uint32_t *nr_entries_p)
 {
     const uint32_t nr_entries = *nr_entries_p;
@@ -383,7 +385,7 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
 
         case 0xd:
         {
-            uint64_t xstates = cpuid_policy_xstates(p);
+            uint64_t xstates = cpu_policy_xstates(p);
 
             COPY_LEAF(leaf, 0, &p->xstate.raw[0]);
             COPY_LEAF(leaf, 1, &p->xstate.raw[1]);
@@ -419,7 +421,7 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
     return 0;
 }
 
-int x86_cpuid_copy_from_buffer(struct cpuid_policy *p,
+int x86_cpuid_copy_from_buffer(struct cpu_policy *p,
                                const cpuid_leaf_buffer_t leaves,
                                uint32_t nr_entries, uint32_t *err_leaf,
                                uint32_t *err_subleaf)
@@ -522,7 +524,7 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *p,
         }
     }
 
-    x86_cpuid_policy_recalc_synth(p);
+    x86_cpu_policy_recalc_synth(p);
 
     return 0;
 
diff --git a/xen/lib/x86/msr.c b/xen/lib/x86/msr.c
index c4d885e7b568..e04b9ca01302 100644
--- a/xen/lib/x86/msr.c
+++ b/xen/lib/x86/msr.c
@@ -23,7 +23,7 @@ static int copy_msr_to_buffer(uint32_t idx, uint64_t val,
     return 0;
 }
 
-int x86_msr_copy_to_buffer(const struct msr_policy *p,
+int x86_msr_copy_to_buffer(const struct cpu_policy *p,
                            msr_entry_buffer_t msrs, uint32_t *nr_entries_p)
 {
     const uint32_t nr_entries = *nr_entries_p;
@@ -48,7 +48,7 @@ int x86_msr_copy_to_buffer(const struct msr_policy *p,
     return 0;
 }
 
-int x86_msr_copy_from_buffer(struct msr_policy *p,
+int x86_msr_copy_from_buffer(struct cpu_policy *p,
                              const msr_entry_buffer_t msrs, uint32_t nr_entries,
                              uint32_t *err_msr)
 {
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 10:05:01 2023
Date: Tue, 4 Apr 2023 12:04:21 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] x86/PV: ignore PAE_MODE ELF note for 64-bit Dom0
Message-ID: <ZCv2JR74mi0Gfqs/@Air-de-Roger>
References: <f52befc9-f19c-12fb-b0db-b6c4219999b2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f52befc9-f19c-12fb-b0db-b6c4219999b2@suse.com>
MIME-Version: 1.0

On Tue, Apr 04, 2023 at 11:19:08AM +0200, Jan Beulich wrote:
> Besides a printk() the main effect is slight corruption of the start
> info magic: While that's meant to be xen-3.0-x86_64, it wrongly ended
> up as xen-3.0-x86_64p.
> 
> Note that no known users exist that would have developed a dependency on
> the bogus magic string. In particular Linux, NetBSD, and mini-os have
> been checked.
> 
> Fixes: 460060f83d41 ("libelf: use for x86 dom0 builder")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> RFC: While Linux works fine with the adjustment, I'm not entirely
>      certain of external tools (crash?) having grown a dependency. It
>      may be worth noting that XenoLinux and its forward ports never had
>      this ELF note in 64-bit kernels, so in principle it may be
>      reasonable to expect that no such dependency exists anywhere.
> 
> Prior to "x86/PV32: restore PAE-extended-CR3 logic" that (meaningless
> for 64-bit domains) VM-assist could also be engaged, based on the ELF
> note's value. I expect that change to go in first, at which point the
> description here is going to be correct (in not mentioning this VM-
> assist aspect).

Will look at it now.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 10:12:37 2023
Date: Tue, 4 Apr 2023 12:12:09 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Message-ID: <ZCv3+cpzJ52Y679G@Air-de-Roger>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6360:EE_|SJ0PR03MB5758:EE_
X-MS-Office365-Filtering-Correlation-Id: 7babf69e-4352-4220-091a-08db34f510a3
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7babf69e-4352-4220-091a-08db34f510a3
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 10:12:15.6654
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5758

On Wed, Feb 15, 2023 at 03:54:11PM +0100, Jan Beulich wrote:
> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
> applies to guests also when run on a 64-bit hypervisor: The "extended
> CR3" format has to be used there as well, to fit the address in the only
> 32-bit wide register there. As a result it was a mistake that the check
> was never enabled for that case, and was then mistakenly deleted in the
> course of removal of 32-bit-Xen code (218adf199e68 ["x86: We can assume
> CONFIG_PAGING_LEVELS==4"]).
> 
> Similarly during Dom0 construction kernel awareness needs to be taken
> into account, and respective code was again mistakenly never enabled for
> 32-bit Dom0 when running on 64-bit Xen (and thus wrongly deleted by
> 5d1181a5ea5e ["xen: Remove x86_32 build target"]).
> 
> At the same time restrict enabling of the assist for Dom0 to just the
> 32-bit case. Furthermore there's no need for an atomic update there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I was uncertain whether to add a check to the CR3 guest read path,
> raising e.g. #GP(0) when the value read wouldn't fit but also may not
> be converted to "extended" format (overflow is possible there in
> principle because of the control tools "slack" in promote_l3_table()).
> 
> In that context I was puzzled to find no check on the CR3 guest write
> path even in 4.2: A guest (bogusly) setting the PCD or PWT bits (or any
> of the low reserved ones) could observe anomalous behavior rather than
> plain failure.
> 
> As to a Fixes: tag - it's pretty unclear which of the many original
> 32-on-64 changes to blame. I don't think the two cited commits should
> be referenced there, as they didn't break anything that wasn't already
> broken.
> 
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
>      unsigned int   partial_flags = page->partial_flags;
>      l3_pgentry_t   l3e = l3e_empty();
>  
> +    /*
> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
> +     * understand the weird 'extended cr3' format for dealing with high-order
> +     * address bits. We cut some slack for control tools (before vcpu0 is
> +     * initialised).

Don't we then need some check in the vCPU init path to ensure that the
CR3 value fits in 32 bits, if we allow such values to be set initially?

Or will the initialization unconditionally overwrite any previously set
CR3 value?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 10:16:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 10:16:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517802.803639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdiI-0001Ik-DR; Tue, 04 Apr 2023 10:16:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517802.803639; Tue, 04 Apr 2023 10:16:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdiI-0001Id-A6; Tue, 04 Apr 2023 10:16:30 +0000
Received: by outflank-mailman (input) for mailman id 517802;
 Tue, 04 Apr 2023 10:16:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjdiH-0001IT-3v; Tue, 04 Apr 2023 10:16:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjdiG-0007rv-T3; Tue, 04 Apr 2023 10:16:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjdiG-0004X2-FX; Tue, 04 Apr 2023 10:16:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjdiG-0004ZC-F7; Tue, 04 Apr 2023 10:16:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180131-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180131: regressions - trouble: fail/pass/starved
X-Osstest-Failures:
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:guest-localmigrate:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=148341f0a2f53b5e8808d093333d85170586a15d
X-Osstest-Versions-That:
    linux=7e364e56293bb98cae1b55fd835f5991c4e96e7d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Apr 2023 10:16:28 +0000

flight 180131 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180131/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-pvops             6 kernel-build   fail in 180130 REGR. vs. 180116

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-freebsd12-amd64 17 guest-localmigrate fail in 180130 pass in 180131
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail in 180130 pass in 180131
 test-amd64-amd64-xl-pvshim   22 guest-start/debian.repeat  fail pass in 180130

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 180130 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 180130 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 180130 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 180130 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 180130 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 180130 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 180130 n/a
 test-arm64-arm64-examine      1 build-check(1)           blocked in 180130 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 180130 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180116
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                148341f0a2f53b5e8808d093333d85170586a15d
baseline version:
 linux                7e364e56293bb98cae1b55fd835f5991c4e96e7d

Last test of basis   180116  2023-04-03 02:26:51 Z    1 days
Testing same since   180130  2023-04-03 17:12:08 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Brauner <brauner@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael Kelley <mikelley@microsoft.com>
  Mohammed Gamal <mgamal@redhat.com>
  Wei Liu <wei.liu@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 148341f0a2f53b5e8808d093333d85170586a15d
Merge: 2d72ab2449fa cb2239c198ad
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Apr 3 09:41:24 2023 -0700

    Merge tag 'vfs.misc.fixes.v6.3-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/idmapping
    
    Pull vfs fix from Christian Brauner:
     "When a mount or mount tree is made shared the vfs allocates new peer
      group ids for all mounts that have no peer group id set. Only mounts
      that aren't marked with MNT_SHARED are relevant here as MNT_SHARED
      indicates that the mount has fully transitioned to a shared mount. The
      peer group id handling is done with namespace lock held.
    
      On failure, the peer group id settings of mounts for which a new peer
      group id was allocated need to be reverted and the allocated peer
      group id freed. The cleanup_group_ids() helper can identify the mounts
      to cleanup by checking whether a given mount has a peer group id set
      but isn't marked MNT_SHARED. The deallocation always needs to happen
      with namespace lock held to protect against concurrent modifications
      of the propagation settings.
    
      This fixes the one place where the namespace lock was dropped before
      calling cleanup_group_ids()"
    
    * tag 'vfs.misc.fixes.v6.3-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/idmapping:
      fs: drop peer group ids under namespace lock

commit 2d72ab2449fa9fce8f6898fd5adda10497f7c111
Merge: 7e364e56293b f8acb24aaf89
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Apr 3 09:34:08 2023 -0700

    Merge tag 'hyperv-fixes-signed-20230402' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux
    
    Pull hyperv fixes from Wei Liu:
    
     - Fix a bug in channel allocation for VMbus (Mohammed Gamal)
    
     - Do not allow root partition functionality in CVM (Michael Kelley)
    
    * tag 'hyperv-fixes-signed-20230402' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux:
      x86/hyperv: Block root partition functionality in a Confidential VM
      Drivers: vmbus: Check for channel allocation before looking up relids

commit cb2239c198ad9fbd5aced22cf93e45562da781eb
Author: Christian Brauner <brauner@kernel.org>
Date:   Thu Mar 30 09:13:16 2023 +0200

    fs: drop peer group ids under namespace lock
    
    When cleaning up peer group ids in the failure path we need to make sure
    to hold on to the namespace lock. Otherwise another thread might just
    turn the mount from a shared into a non-shared mount concurrently.
    
    Link: https://lore.kernel.org/lkml/00000000000088694505f8132d77@google.com
    Fixes: 2a1867219c7b ("fs: add mount_setattr()")
    Reported-by: syzbot+8ac3859139c685c4f597@syzkaller.appspotmail.com
    Cc: stable@vger.kernel.org # 5.12+
    Message-Id: <20230330-vfs-mount_setattr-propagation-fix-v1-1-37548d91533b@kernel.org>
    Signed-off-by: Christian Brauner <brauner@kernel.org>

commit f8acb24aaf89fc46cd953229462ea8abe31b395f
Author: Michael Kelley <mikelley@microsoft.com>
Date:   Wed Mar 15 08:34:13 2023 -0700

    x86/hyperv: Block root partition functionality in a Confidential VM
    
    Hyper-V should never specify a VM that is a Confidential VM and also
    running in the root partition.  Nonetheless, explicitly block such a
    combination to guard against a compromised Hyper-V maliciously trying to
    exploit root partition functionality in a Confidential VM to expose
    Confidential VM secrets. No known bug is being fixed, but the attack
    surface for Confidential VMs on Hyper-V is reduced.
    
    Signed-off-by: Michael Kelley <mikelley@microsoft.com>
    Link: https://lore.kernel.org/r/1678894453-95392-1-git-send-email-mikelley@microsoft.com
    Signed-off-by: Wei Liu <wei.liu@kernel.org>

commit 1eb65c8687316c65140b48fad27133d583178e15
Author: Mohammed Gamal <mgamal@redhat.com>
Date:   Fri Feb 17 22:44:11 2023 +0200

    Drivers: vmbus: Check for channel allocation before looking up relids
    
    relid2channel() assumes the vmbus channel array is allocated when called.
    However, in cases such as kdump/kexec, not all relids will be reset by the
    host. If the guest receives a vmbus interrupt during vmbus driver
    initialization in the second kernel, before vmbus_connect() is called,
    before it finishes, or after it fails, the vmbus interrupt service routine
    calls relid2channel() and can hit a null pointer dereference.
    
    Print a warning and error out in relid2channel() for a channel id that's invalid
    in the second kernel.
    
    Fixes: 8b6a877c060e ("Drivers: hv: vmbus: Replace the per-CPU channel lists with a global array of channels")
    
    Signed-off-by: Mohammed Gamal <mgamal@redhat.com>
    Reviewed-by: Dexuan Cui <decui@microsoft.com>
    Link: https://lore.kernel.org/r/20230217204411.212709-1-mgamal@redhat.com
    Signed-off-by: Wei Liu <wei.liu@kernel.org>


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 10:31:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 10:31:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517808.803651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdwx-0003hZ-SR; Tue, 04 Apr 2023 10:31:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517808.803651; Tue, 04 Apr 2023 10:31:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdwx-0003hS-NT; Tue, 04 Apr 2023 10:31:39 +0000
Received: by outflank-mailman (input) for mailman id 517808;
 Tue, 04 Apr 2023 10:31:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjdww-0003hM-LU
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 10:31:38 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on062e.outbound.protection.outlook.com
 [2a01:111:f400:fe1e::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dfa9873c-d2d3-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 12:31:35 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8788.eurprd04.prod.outlook.com (2603:10a6:20b:42f::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 10:31:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 10:31:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfa9873c-d2d3-11ed-b464-930f4c7d94ae
Message-ID: <3752672b-f4a0-5ffb-9759-dd315ce31079@suse.com>
Date: Tue, 4 Apr 2023 12:31:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <ZCv3+cpzJ52Y679G@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZCv3+cpzJ52Y679G@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 04.04.2023 12:12, Roger Pau Monné wrote:
> On Wed, Feb 15, 2023 at 03:54:11PM +0100, Jan Beulich wrote:
>> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
>> applies to guests also when run on a 64-bit hypervisor: The "extended
>> CR3" format has to be used there as well, to fit the address in the only
>> 32-bit wide register there. As a result it was a mistake that the check
>> was never enabled for that case, and was then mistakenly deleted in the
>> course of removal of 32-bit-Xen code (218adf199e68 ["x86: We can assume
>> CONFIG_PAGING_LEVELS==4"]).
>>
>> Similarly during Dom0 construction kernel awareness needs to be taken
>> into account, and respective code was again mistakenly never enabled for
>> 32-bit Dom0 when running on 64-bit Xen (and thus wrongly deleted by
>> 5d1181a5ea5e ["xen: Remove x86_32 build target"]).
>>
>> At the same time restrict enabling of the assist for Dom0 to just the
>> 32-bit case. Furthermore there's no need for an atomic update there.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> I was uncertain whether to add a check to the CR3 guest read path,
>> raising e.g. #GP(0) when the value read wouldn't fit but also may not
>> be converted to "extended" format (overflow is possible there in
>> principle because of the control tools "slack" in promote_l3_table()).
>>
>> In that context I was puzzled to find no check on the CR3 guest write
>> path even in 4.2: A guest (bogusly) setting the PCD or PWT bits (or any
>> of the low reserved ones) could observe anomalous behavior rather than
>> plain failure.
>>
>> As to a Fixes: tag - it's pretty unclear which of the many original
>> 32-on-64 changes to blame. I don't think the two cited commits should
>> be referenced there, as they didn't break anything that wasn't already
>> broken.
>>
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
>>      unsigned int   partial_flags = page->partial_flags;
>>      l3_pgentry_t   l3e = l3e_empty();
>>  
>> +    /*
>> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
>> +     * understand the weird 'extended cr3' format for dealing with high-order
>> +     * address bits. We cut some slack for control tools (before vcpu0 is
>> +     * initialised).
> 
> Don't we then need some check in the vCPU init path to assure that the
> cr3 is < 32bits if we allow those to initially be set?
> 
> Or will the initialization unconditionally overwrite any previous cr3
> value?

That's not the way I understand this "cut some slack". Instead I read it
as being meant to cover for the VM-assist bit not being set yet. Beyond
that, it is assumed to be the tool stack's responsibility to constrain
addresses suitably. If it fails to do so, it will simply break the guest.
(There is some guessing on my part involved here, as the original
introduction of that code didn't explain things further.)

Nevertheless, going beyond what was there originally might be desirable.
Yet it's not really clear to me when or how to carry out such further
checking. For example, I don't fancy walking all of the domain's pages
when it's about to be unpaused for the first time.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 10:34:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 10:34:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517811.803659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdzZ-0004Fp-7h; Tue, 04 Apr 2023 10:34:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517811.803659; Tue, 04 Apr 2023 10:34:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjdzZ-0004Fi-4e; Tue, 04 Apr 2023 10:34:21 +0000
Received: by outflank-mailman (input) for mailman id 517811;
 Tue, 04 Apr 2023 10:34:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjdzX-0004Fc-PX
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 10:34:19 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4008d808-d2d4-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 12:34:18 +0200 (CEST)
Received: from mail-bn8nam12lp2174.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 06:34:14 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB7152.namprd03.prod.outlook.com (2603:10b6:a03:4d5::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 10:34:11 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 10:34:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4008d808-d2d4-11ed-85db-49a42c6b2330
Message-ID: <6c5cdffa-f3fb-8f40-c44f-ad7431451929@citrix.com>
Date: Tue, 4 Apr 2023 11:34:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] cmdline: document and enforce "extra_guest_irqs" upper
 bounds
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>
References: <54e126fc-484b-92fa-ce66-f901f92ec19c@suse.com>
In-Reply-To: <54e126fc-484b-92fa-ce66-f901f92ec19c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 04/04/2023 10:20 am, Jan Beulich wrote:
> PHYSDEVOP_pirq_eoi_gmfn_v<N> accepting just a single GFN implies that no
> more than 32k pIRQ-s can be used by a domain on x86. Document this upper
> bound.
>
> To also enforce the limit, (ab)use both arch_hwdom_irqs() (changing its
> parameter type) and setup_system_domains(). This is primarily to avoid
> exposing the two static variables or introducing yet further arch hooks.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Instead of passing dom_xen into arch_hwdom_irqs(), NULL could also be
> used. That would make the connection to setup_system_domains() yet more
> weak, though.
>
> On Arm the upper limit right now effectively is zero, albeit with -
> afaict - no impact if a higher value was used (and hence permitting up
> to the default of 32 is okay albeit useless). The question though is
> whether the command line option as a whole shouldn't be x86-only.
>
> Passing the domain pointer instead of the domain ID would also allow
> to return a possibly different value if sensible for PVH Dom0 (which
> presently has no access to PHYSDEVOP_pirq_eoi_gmfn_v<N> in the first
> place).
> ---
> v2: Also enforce these bounds. Adjust doc to constrain the bound to x86
>     only.
>
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1130,7 +1130,8 @@ common for all domUs, while the optional
>  is for dom0.  Changing the setting for domU has no impact on dom0 and vice
>  versa.  For example to change dom0 without changing domU, use
>  `extra_guest_irqs=,512`.  The default value for Dom0 and an eventual separate
> -hardware domain is architecture dependent.
> +hardware domain is architecture dependent.  The upper limit for both values on
> +x86 is such that the resulting total number of IRQs can't be higher than 32768.
>  Note that specifying zero as domU value means zero, while for dom0 it means
>  to use the default.
>  
> --- a/xen/arch/arm/include/asm/irq.h
> +++ b/xen/arch/arm/include/asm/irq.h
> @@ -52,7 +52,7 @@ struct arch_irq_desc {
>  
>  extern const unsigned int nr_irqs;
>  #define nr_static_irqs NR_IRQS
> -#define arch_hwdom_irqs(domid) NR_IRQS
> +#define arch_hwdom_irqs(d) NR_IRQS

I know it's not your bug, but this ought to be (d, NR_IRQS) as you're
changing it.

>  
>  struct irq_desc;
>  struct irqaction;
> --- a/xen/arch/x86/io_apic.c
> +++ b/xen/arch/x86/io_apic.c
> @@ -2665,18 +2665,21 @@ void __init ioapic_init(void)
>             nr_irqs_gsi, nr_irqs - nr_irqs_gsi);
>  }
>  
> -unsigned int arch_hwdom_irqs(domid_t domid)
> +unsigned int arch_hwdom_irqs(const struct domain *d)
>  {
>      unsigned int n = fls(num_present_cpus());
>  
> -    if ( !domid )
> +    if ( is_system_domain(d) )
> +        return PAGE_SIZE * BITS_PER_BYTE;

System domains never reach here, because ...

> +
> +    if ( !d->domain_id )
>          n = min(n, dom0_max_vcpus());
>      n = min(nr_irqs_gsi + n * NR_DYNAMIC_VECTORS, nr_irqs);
>  
>      /* Bounded by the domain pirq eoi bitmap gfn. */
>      n = min_t(unsigned int, n, PAGE_SIZE * BITS_PER_BYTE);
>  
> -    printk("Dom%d has maximum %u PIRQs\n", domid, n);
> +    printk("%pd has maximum %u PIRQs\n", d, n);
>  
>      return n;
>  }
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c

... just out of context here is the system domain early exit from
domain_create().

> @@ -659,7 +659,7 @@ struct domain *domain_create(domid_t dom
>              d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
>          else
>              d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
> -                                           : arch_hwdom_irqs(domid);
> +                                           : arch_hwdom_irqs(d);
>          d->nr_pirqs = min(d->nr_pirqs, nr_irqs);
>  
>          radix_tree_init(&d->pirq_tree);
> @@ -771,6 +771,8 @@ struct domain *domain_create(domid_t dom
>  
>  void __init setup_system_domains(void)
>  {
> +    unsigned int n;
> +
>      /*
>       * Initialise our DOMID_XEN domain.
>       * Any Xen-heap pages that we will allow to be mapped will have
> @@ -782,6 +784,19 @@ void __init setup_system_domains(void)
>      if ( IS_ERR(dom_xen) )
>          panic("Failed to create d[XEN]: %ld\n", PTR_ERR(dom_xen));
>  
> +    /* Bound-check values passed via "extra_guest_irqs=". */
> +    n = max(arch_hwdom_irqs(dom_xen), nr_static_irqs);
> +    if ( extra_hwdom_irqs > n - nr_static_irqs )
> +    {
> +        extra_hwdom_irqs = n - nr_static_irqs;
> +        printk(XENLOG_WARNING "hwdom IRQs bounded to %u\n", n);
> +    }
> +    if ( extra_domU_irqs > max(32U, n - nr_static_irqs) )
> +    {
> +        extra_domU_irqs = n - nr_static_irqs;

Why the extra 32 here?

> +        printk(XENLOG_WARNING "domU IRQs bounded to %u\n", n);
> +    }
> +
>      /*
>       * Initialise our DOMID_IO domain.
>       * This domain owns I/O pages that are within the range of the page_info
> --- a/xen/include/xen/irq.h
> +++ b/xen/include/xen/irq.h
> @@ -173,8 +173,9 @@ extern irq_desc_t *pirq_spin_lock_irq_de
>  
>  unsigned int set_desc_affinity(struct irq_desc *, const cpumask_t *);
>  
> +/* When passed a system domain, this returns the maximum permissible value. */

This comment is technically true, but it probably doesn't want to stay.

~Andrew

>  #ifndef arch_hwdom_irqs
> -unsigned int arch_hwdom_irqs(domid_t);
> +unsigned int arch_hwdom_irqs(const struct domain *);
>  #endif
>  
>  #ifndef arch_evtchn_bind_pirq
>



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 10:35:08 2023
Message-ID: <0eba4f65-c054-531b-2998-6038f17af0f8@citrix.com>
Date: Tue, 4 Apr 2023 11:34:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] x86/PV: ignore PAE_MODE ELF note for 64-bit Dom0
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <f52befc9-f19c-12fb-b0db-b6c4219999b2@suse.com>
In-Reply-To: <f52befc9-f19c-12fb-b0db-b6c4219999b2@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 04/04/2023 10:19 am, Jan Beulich wrote:
> Besides a printk() the main effect is slight corruption of the start
> info magic: While that's meant to be xen-3.0-x86_64, it wrongly ended
> up as xen-3.0-x86_64p.
>
> Note that no known users exist that would have developed a dependency on
> the bogus magic string. In particular Linux, NetBSD, and mini-os have
> been checked.
>
> Fixes: 460060f83d41 ("libelf: use for x86 dom0 builder")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 10:39:13 2023
Message-ID: <c080cabc-0d69-3c32-213b-06261b2f5899@amd.com>
Date: Tue, 4 Apr 2023 11:38:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [XEN v4 11/11] xen/arm: p2m: Enable support for 32bit IPA for
 ARM_32
To: Julien Grall <julien@xen.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230321140357.24094-1-ayan.kumar.halder@amd.com>
 <20230321140357.24094-12-ayan.kumar.halder@amd.com>
 <22ce7663-e63c-a3b8-9444-8f43cc4620c4@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <22ce7663-e63c-a3b8-9444-8f43cc4620c4@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0


On 30/03/2023 22:45, Julien Grall wrote:
>
> Hi Ayan,

Hi Julien,

I need some clarifications.

>
> On 21/03/2023 14:03, Ayan Kumar Halder wrote:
>> The pabits, t0sz, root_order and sl0 values are the same as those for
>> ARM_64.
>
> To me this read as the line should be common. But you still duplicate it.
>
> In any case, you should justify this change with a pointer to the Arm
> Arm. Not just saying they are common.

Does the following commit message read fine?

Refer ARM DDI 0406C.d ID040418, B3-1345:

    "Use of concatenated second-level translation tables

    A stage 2 translation with an input address range of 31-34 bits can
    start the translation either:

    * With a first-level lookup, accessing a first-level translation
      table with 2-16 entries.

    * With a second-level lookup, accessing a set of concatenated
      second-level translation tables"


Thus, for a 32-bit IPA, there will be only one root-level translation
table. This is because, as the paragraph quoted above explains, a 35-bit
IPA is the minimum required to support two root-level translation tables.

The root order for a 32-bit IPA will therefore be 0. (Refer to
xen/arch/arm/p2m.c: "#define P2M_ROOT_PAGES (1<<P2M_ROOT_ORDER)")

Please clarify if I misunderstood something.

- Ayan

>
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>>
>> Changes from -
>>
>> v1 - New patch.
>>
>> v2 - 1. Added Ack.
>>
>> v3 - 1. Dropped Ack.
>> 2. Rebased the patch based on the previous change.
>>
>>   xen/arch/arm/p2m.c | 5 +++--
>>   1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index f34b6e6f11..20beecc6e8 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -2272,8 +2272,9 @@ void __init setup_virt_paging(void)
>>           unsigned int sl0;    /* Desired SL0, maximum in comment */
>>       } pa_range_info[] __initconst = {
>>   #ifdef CONFIG_ARM_32
>> -        [0] = { 40,      24/*24*/,  1,          1 },
>> -        [1] = { 0 } /* Invalid */
>> +        [0] = { 32,      32/*32*/,  0,          1 },
>
> As I pointed out in one of the previous version, the root order is
> different than ...
>
>> +        [1] = { 40,      24/*24*/, 1,          1 },
>
> ... here. Yet, you still keep P2M_ROOT_ORDER and P2M_ROOT_LEVEL
> hardcoded. Your previous patch wants to define p2M_root_order and
> p2m_root_level (lower-case intended). IOW making more code common
> between arm64 and arm32.
>
> Cheers,
>
> -- 
> Julien Grall
>


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 10:40:37 2023
Message-ID: <f3a11fa7-6e39-f7a9-7705-17c3af34273e@suse.com>
Date: Tue, 4 Apr 2023 12:40:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] cmdline: document and enforce "extra_guest_irqs" upper
 bounds
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <54e126fc-484b-92fa-ce66-f901f92ec19c@suse.com>
 <6c5cdffa-f3fb-8f40-c44f-ad7431451929@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6c5cdffa-f3fb-8f40-c44f-ad7431451929@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.04.2023 12:34, Andrew Cooper wrote:
> On 04/04/2023 10:20 am, Jan Beulich wrote:
>> --- a/xen/arch/arm/include/asm/irq.h
>> +++ b/xen/arch/arm/include/asm/irq.h
>> @@ -52,7 +52,7 @@ struct arch_irq_desc {
>>  
>>  extern const unsigned int nr_irqs;
>>  #define nr_static_irqs NR_IRQS
>> -#define arch_hwdom_irqs(domid) NR_IRQS
>> +#define arch_hwdom_irqs(d) NR_IRQS
> 
> I know it's not your bug, but this ought to be (d, NR_IRQS) as you're
> changing it.

I can add this (with a cast to void), but I'll leave the final say to
Arm maintainers.

>> --- a/xen/arch/x86/io_apic.c
>> +++ b/xen/arch/x86/io_apic.c
>> @@ -2665,18 +2665,21 @@ void __init ioapic_init(void)
>>             nr_irqs_gsi, nr_irqs - nr_irqs_gsi);
>>  }
>>  
>> -unsigned int arch_hwdom_irqs(domid_t domid)
>> +unsigned int arch_hwdom_irqs(const struct domain *d)
>>  {
>>      unsigned int n = fls(num_present_cpus());
>>  
>> -    if ( !domid )
>> +    if ( is_system_domain(d) )
>> +        return PAGE_SIZE * BITS_PER_BYTE;
> 
> System domains never reach here, because ...
> 
>> +
>> +    if ( !d->domain_id )
>>          n = min(n, dom0_max_vcpus());
>>      n = min(nr_irqs_gsi + n * NR_DYNAMIC_VECTORS, nr_irqs);
>>  
>>      /* Bounded by the domain pirq eoi bitmap gfn. */
>>      n = min_t(unsigned int, n, PAGE_SIZE * BITS_PER_BYTE);
>>  
>> -    printk("Dom%d has maximum %u PIRQs\n", domid, n);
>> +    printk("%pd has maximum %u PIRQs\n", d, n);
>>  
>>      return n;
>>  }
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
> 
> ... just out of context here is the system domain early exit from
> domain_create().

Of course. But that's not the path I care about; this ...

>> @@ -659,7 +659,7 @@ struct domain *domain_create(domid_t dom
>>              d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
>>          else
>>              d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
>> -                                           : arch_hwdom_irqs(domid);
>> +                                           : arch_hwdom_irqs(d);
>>          d->nr_pirqs = min(d->nr_pirqs, nr_irqs);
>>  
>>          radix_tree_init(&d->pirq_tree);
>> @@ -771,6 +771,8 @@ struct domain *domain_create(domid_t dom
>>  
>>  void __init setup_system_domains(void)
>>  {
>> +    unsigned int n;
>> +
>>      /*
>>       * Initialise our DOMID_XEN domain.
>>       * Any Xen-heap pages that we will allow to be mapped will have
>> @@ -782,6 +784,19 @@ void __init setup_system_domains(void)
>>      if ( IS_ERR(dom_xen) )
>>          panic("Failed to create d[XEN]: %ld\n", PTR_ERR(dom_xen));
>>  
>> +    /* Bound-check values passed via "extra_guest_irqs=". */
>> +    n = max(arch_hwdom_irqs(dom_xen), nr_static_irqs);

... is the one.

>> +    if ( extra_hwdom_irqs > n - nr_static_irqs )
>> +    {
>> +        extra_hwdom_irqs = n - nr_static_irqs;
>> +        printk(XENLOG_WARNING "hwdom IRQs bounded to %u\n", n);
>> +    }
>> +    if ( extra_domU_irqs > max(32U, n - nr_static_irqs) )
>> +    {
>> +        extra_domU_irqs = n - nr_static_irqs;
> 
> Why the extra 32 here?

On Arm we would warn even if the command line option wasn't used. Plus
I view it as bogus to warn for any value up to the default.

>> --- a/xen/include/xen/irq.h
>> +++ b/xen/include/xen/irq.h
>> @@ -173,8 +173,9 @@ extern irq_desc_t *pirq_spin_lock_irq_de
>>  
>>  unsigned int set_desc_affinity(struct irq_desc *, const cpumask_t *);
>>  
>> +/* When passed a system domain, this returns the maximum permissible value. */
> 
> This comment is technically true, but it probably doesn't want to stay.

Why not? We (now) depend on this property.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 10:48:59 2023
Message-ID: <558b1cc5-e294-87b3-676b-68d7e55065f5@suse.com>
Date: Tue, 4 Apr 2023 12:48:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: correct AVX512VL+VPCLMUL test descriptions
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

The stride values (based on 32-bit element size) were wrong for these
two tests, yielding misleading output (especially when comparing with
the test variants also involving AVX512-VBMI2).

Also insert a missing blank on a nearby, related line.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -520,10 +520,10 @@ static const struct {
     SIMD(VAES (EVEX/x64), avx512bw_vaes,      64),
     AVX512VL(VL+VAES (x16), avx512bw_vaes,    16),
     AVX512VL(VL+VAES (x32), avx512bw_vaes,    32),
-    SIMD(VPCLMUL (VEX/x4), avx2_vpclmulqdq,  32),
+    SIMD(VPCLMUL (VEX/x4), avx2_vpclmulqdq,   32),
     SIMD(VPCLMUL (EVEX/x8), avx512bw_vpclmulqdq, 64),
-    AVX512VL(VL+VPCLMUL (x4), avx512bw_vpclmulqdq, 16),
-    AVX512VL(VL+VPCLMUL (x8), avx512bw_vpclmulqdq, 32),
+    AVX512VL(VL+VPCLMUL (x2), avx512bw_vpclmulqdq, 16),
+    AVX512VL(VL+VPCLMUL (x4), avx512bw_vpclmulqdq, 32),
     SIMD(AVX512_VBMI2+VPCLMUL (x8), avx512vbmi2_vpclmulqdq, 64),
     AVX512VL(_VBMI2+VL+VPCLMUL (x2), avx512vbmi2_vpclmulqdq, 16),
     AVX512VL(_VBMI2+VL+VPCLMUL (x4), avx512vbmi2_vpclmulqdq, 32),


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 10:58:30 2023
Message-ID: <3d5c84c4-d7f4-57ba-ef0e-b190ccadae14@citrix.com>
Date: Tue, 4 Apr 2023 11:57:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86emul: correct AVX512VL+VPCLMUL test descriptions
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <558b1cc5-e294-87b3-676b-68d7e55065f5@suse.com>
In-Reply-To: <558b1cc5-e294-87b3-676b-68d7e55065f5@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04/04/2023 11:48 am, Jan Beulich wrote:
> The stride values (based on 32-bit element size) were wrong for these
> two tests, yielding misleading output (especially when comparing with the
> test variants also involving AVX512-VBMI2).
>
> Also insert a missing blank on a nearby, related line.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 11:41:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 11:41:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517836.803721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjf2Z-0005o6-AY; Tue, 04 Apr 2023 11:41:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517836.803721; Tue, 04 Apr 2023 11:41:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjf2Z-0005nz-4s; Tue, 04 Apr 2023 11:41:31 +0000
Received: by outflank-mailman (input) for mailman id 517836;
 Tue, 04 Apr 2023 11:41:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b6be=73=citrix.com=prvs=4510202f8=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pjf2X-0005nt-4E
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 11:41:29 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f7f29a8-d2dd-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 13:41:24 +0200 (CEST)
Received: from mail-dm6nam11lp2171.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 07:41:18 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by MN2PR03MB5247.namprd03.prod.outlook.com (2603:10b6:208:19f::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 11:41:16 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Tue, 4 Apr 2023
 11:41:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f7f29a8-d2dd-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680608484;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Z59NWup1eQF6ib+PLgar8cI684KTkUwJrqiiGycw+Pc=;
  b=GDvwjNXyFmFJ6mJpTClHcLZW718vNMCM/oqdm0dZVlHqyLxKj55HR7ND
   Ar/PpVX+WdfdEhCucsSjnjhIa9LHSkGgugM0uzz5fGLzZi21hwk3dhSDI
   j/YkPldIR3IK5txbbH9xRuuChvjKyzLqY//SLuFyGrvJbEL19bd086umk
   A=;
X-IronPort-RemoteIP: 104.47.57.171
X-IronPort-MID: 103050501
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="103050501"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZC03thCXRgw54az2y05hx8DexSyGUvMa9mH7mgU5GD5pMa9WSAH5QD7eXUzhXLAu42AhVp6dOfvCqZjjUz3TEAjFrSrmsZ8QBx6chYyjj+704kZXWACEFS30nb7s9dLg1QEnUpnDUHkcuawqC7Fq+RGUjdqmH9ge+ENNk6nnM8RM74DvD3qUN9KP7GsFIdHx6j3knDVJKWsdAdHK/yJx0BPiMh6IIkP1rDcN3SOM/ODnNQy+ielhcpXWp3r4rdPLY9s2kSo7tJTYGuoh/6jW/fkmcojD0e3ctCaFtArqbRBdOGsx6e9ZZ9L1ukog3mv3yzasPlT2Apibq3t+CKASQQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bvN/doML6DAbPVu6JvfmZv122kJ+JIPSdwlwdDpfzbY=;
 b=M83JZd9I95AlpR5Fu+1nMf9H85bxtQSejGxqA52iygrwmCWp4Hy7SMkN/cYaz/IIk6XRxCrzyobgabpdhm8AFX6RQhpgkJp0iZD6NQesAA6qEsjlkMoV6tbS0mSDy6GFd3NxfaM1et/KIk4G+j0bsSiWx/iMRBj2AvGhWUCecphynh7LBZmK4U2oTgArpNyzBbNuMd0R7s4Lr8FwdqrGBQjkOIMM0fVHhzepeUhdOM1WWkK8ursXUjXkPe//zwH/M6AwtMWBR9ROSHQxiYipkBr+w4eJdQb7bmfGd7NcqaC6d9YSsFqUHwDmUuafjS/XYCcqU8XYMIVVQl4uY49E6w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bvN/doML6DAbPVu6JvfmZv122kJ+JIPSdwlwdDpfzbY=;
 b=mFCXZv7rqiuiC3swS/1monEZnzTlJKpo9QUuHAO/lQHbhI15mS3uON1zNIk4G65Yfh0+aH5LzPKSpcEQip9/nKO8o9oNpAubDI3kMz8Q9j7VMqwUX9LegoGDgg9Tzb6BXcxXAcdxTZc8A8H0oyj2L/tjYfeivRfUxkTGIdOdzi8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 4 Apr 2023 13:41:09 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Message-ID: <ZCwM1SfCAfh2koBD@Air-de-Roger>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <ZCv3+cpzJ52Y679G@Air-de-Roger>
 <3752672b-f4a0-5ffb-9759-dd315ce31079@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3752672b-f4a0-5ffb-9759-dd315ce31079@suse.com>
X-ClientProxiedBy: LO4P265CA0240.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:350::20) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6360:EE_|MN2PR03MB5247:EE_
X-MS-Office365-Filtering-Correlation-Id: 8068abbe-50bd-40d5-b06b-08db35017f97
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8068abbe-50bd-40d5-b06b-08db35017f97
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 11:41:15.6901
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xpIW4ULc3Z7MUY9IPGv4R0RFiNMGZV9TbxxrqYxbu4IpwkxREJoM/xUmC/DqWX7Qbf32e3/VIxOQN29yoACZug==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5247

On Tue, Apr 04, 2023 at 12:31:31PM +0200, Jan Beulich wrote:
> On 04.04.2023 12:12, Roger Pau Monné wrote:
> > On Wed, Feb 15, 2023 at 03:54:11PM +0100, Jan Beulich wrote:
> >> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
> >> applies to guests also when run on a 64-bit hypervisor: The "extended
> >> CR3" format has to be used there as well, to fit the address in the only
> >> 32-bit wide register there. As a result it was a mistake that the check
> >> was never enabled for that case, and was then mistakenly deleted in the
> >> course of removal of 32-bit-Xen code (218adf199e68 ["x86: We can assume
> >> CONFIG_PAGING_LEVELS==4"]).
> >>
> >> Similarly during Dom0 construction kernel awareness needs to be taken
> >> into account, and respective code was again mistakenly never enabled for
> >> 32-bit Dom0 when running on 64-bit Xen (and thus wrongly deleted by
> >> 5d1181a5ea5e ["xen: Remove x86_32 build target"]).
> >>
> >> At the same time restrict enabling of the assist for Dom0 to just the
> >> 32-bit case. Furthermore there's no need for an atomic update there.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> ---
> >> I was uncertain whether to add a check to the CR3 guest read path,
> >> raising e.g. #GP(0) when the value read wouldn't fit but also may not
> >> be converted to "extended" format (overflow is possible there in
> >> principle because of the control tools "slack" in promote_l3_table()).
> >>
> >> In that context I was puzzled to find no check on the CR3 guest write
> >> path even in 4.2: A guest (bogusly) setting the PCD or PWT bits (or any
> >> of the low reserved ones) could observe anomalous behavior rather than
> >> plain failure.
> >>
> >> As to a Fixes: tag - it's pretty unclear which of the many original
> >> 32-on-64 changes to blame. I don't think the two cited commits should
> >> be referenced there, as they didn't break anything that wasn't already
> >> broken.
> >>
> >> --- a/xen/arch/x86/mm.c
> >> +++ b/xen/arch/x86/mm.c
> >> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
> >>      unsigned int   partial_flags = page->partial_flags;
> >>      l3_pgentry_t   l3e = l3e_empty();
> >>  
> >> +    /*
> >> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
> >> +     * understand the weird 'extended cr3' format for dealing with high-order
> >> +     * address bits. We cut some slack for control tools (before vcpu0 is
> >> +     * initialised).
> > 
> > Don't we then need some check in the vCPU init path to ensure that the
> > cr3 fits in 32 bits if we allow those to initially be set?
> > 
> > Or will the initialization unconditionally overwrite any previous cr3
> > value?
> 
> That's not the way I understand this "cut some slack". Instead I read it
> to be meant to cover for the VM-assist bit not being set, yet. Beyond
> that it is assumed to be tool stack's responsibility to constrain
> addresses suitably. If it doesn't, it'll simply break the guest. (There
> is some guessing on my part involved here, as the original introduction
> of that code didn't further explain things.)

If it's just the guest that's broken I would think it's fine.  As long
as such a mismatch doesn't cause issues in the hypervisor's internal state.

Did you see a toolstack setting such entries before pae_extended_cr3
is set?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 13:07:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 13:07:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517844.803730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgNU-0005iV-Jq; Tue, 04 Apr 2023 13:07:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517844.803730; Tue, 04 Apr 2023 13:07:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgNU-0005iO-Gw; Tue, 04 Apr 2023 13:07:12 +0000
Received: by outflank-mailman (input) for mailman id 517844;
 Tue, 04 Apr 2023 13:07:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z4sN=73=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjgNT-0005iI-7p
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 13:07:11 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9ac532a6-d2e9-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 15:07:09 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-342-aEEHYlrtMj-0qGW-NiOQLQ-1; Tue, 04 Apr 2023 09:07:02 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8CE59101A54F;
 Tue,  4 Apr 2023 13:07:01 +0000 (UTC)
Received: from localhost (unknown [10.39.194.166])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4DFB4440BC;
 Tue,  4 Apr 2023 13:07:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ac532a6-d2e9-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680613628;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=oHqf0BVn2cr4iUWHULKZ+lPM/LaK6kyDq2wLGk1dnWI=;
	b=N5XFXZU8AK5znY3a3LiDnV5C4UIohAhrK99Ia21wBSIh4FlIQzeEfbTqLwpqnG/D5Xp18b
	uw0MYWXlRdxTUn1jQw2Kp6Vqn/FjOPrJA9xMizsiPYyFxZ0h5ah+uEo0ETQIgh05k3MLqt
	ArncaLxKJ1Hyaso00fg/HEF+YI/bxS0=
X-MC-Unique: aEEHYlrtMj-0qGW-NiOQLQ-1
Date: Tue, 4 Apr 2023 09:06:58 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>, Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>, David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>, Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>, eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH 01/13] virtio-scsi: avoid race between unplug and
 transport event
Message-ID: <20230404130658.GG428487@fedora>
References: <20230403183004.347205-1-stefanha@redhat.com>
 <20230403183004.347205-2-stefanha@redhat.com>
 <2bbe988c-0802-55c3-b2a3-05e3f94e2f04@linaro.org>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="a21Ii3lSZbeXw64P"
Content-Disposition: inline
In-Reply-To: <2bbe988c-0802-55c3-b2a3-05e3f94e2f04@linaro.org>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5


--a21Ii3lSZbeXw64P
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Mon, Apr 03, 2023 at 10:47:11PM +0200, Philippe Mathieu-Daudé wrote:
> On 3/4/23 20:29, Stefan Hajnoczi wrote:
> > Only report a transport reset event to the guest after the SCSIDevice
> > has been unrealized by qdev_simple_device_unplug_cb().
> > 
> > qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
> > to false so that scsi_device_find/get() no longer see it.
> > 
> > scsi_target_emulate_report_luns() also needs to be updated to filter out
> > SCSIDevices that are unrealized.
> > 
> > These changes ensure that the guest driver does not see the SCSIDevice
> > that's being unplugged if it responds very quickly to the transport
> > reset event.
> > 
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >   hw/scsi/scsi-bus.c    |  3 ++-
> >   hw/scsi/virtio-scsi.c | 18 +++++++++---------
> >   2 files changed, 11 insertions(+), 10 deletions(-)
> > 
> > diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
> > index c97176110c..f9bd064833 100644
> > --- a/hw/scsi/scsi-bus.c
> > +++ b/hw/scsi/scsi-bus.c
> > @@ -487,7 +487,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
> >               DeviceState *qdev = kid->child;
> >               SCSIDevice *dev = SCSI_DEVICE(qdev);
> > -            if (dev->channel == channel && dev->id == id && dev->lun != 0) {
> > +            if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
> > +                qatomic_load_acquire(&dev->qdev.realized)) {
> 
> Would this be more useful as a qdev_is_realized() helper?

Yes. There are no other users, but I think a helper makes sense.

Stefan

--a21Ii3lSZbeXw64P
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmQsIPIACgkQnKSrs4Gr
c8iK9gf/TFi/RWboEn8yEQ4zpL0td9Nd4+0XYH85P9acscF+6TcA+W70ehstRChY
twRsDr7bNxiglT+w01MBLdladQFM/J7c58OUZQ9OdEgkhFY1iP3DEIod+7sHIb/D
kkxW8D3uzTtzVRBGt+bIiFI44sBkukGcHVSrIyk3Rc57amd0T9ceSmGuXanmq0sp
kOpaQ/yt4dmAACGfJfcBYXfQE9QMr465W/YJQDjH+wrT49e9SkDbgw3MYjo6zYcm
BTH9t6XKAHp8b7OKZ8F8ibkALAXU553zogaq2j5Qlra0fzk/PsgyYwkni8qaTw8V
AUhiPKNh5ppuxq6VEfzReJrAyoIS5g==
=Pths
-----END PGP SIGNATURE-----

--a21Ii3lSZbeXw64P--



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 13:08:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 13:08:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517847.803740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgPA-0006Fg-VF; Tue, 04 Apr 2023 13:08:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517847.803740; Tue, 04 Apr 2023 13:08:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgPA-0006FZ-SL; Tue, 04 Apr 2023 13:08:56 +0000
Received: by outflank-mailman (input) for mailman id 517847;
 Tue, 04 Apr 2023 13:08:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjgP9-0006Ey-9u
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 13:08:55 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d7513fd9-d2e9-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 15:08:51 +0200 (CEST)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 09:08:48 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CO1PR03MB5697.namprd03.prod.outlook.com (2603:10b6:303:94::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.34; Tue, 4 Apr
 2023 13:08:45 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 13:08:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7513fd9-d2e9-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680613731;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=u/A/J0571XYKFGpeBNTr7AZ4HdpqCckhPqJWR+JnJrE=;
  b=Cn4lpczMcXVKzxVvcqyIQMMd31cGrnHpg+weUuaCf2fD2ZCkIx1vDpkI
   xpc1TZF9s/BXZtyu4DD69cUeuaeNs1ErjKMvaqITchYRRjqBGvCyoqtzQ
   6LU2zZq2+Ueeq+VeF4lx7Ol66h2PupqkTVKqtRSL9xKevhrpxPCe6EkHg
   o=;
X-IronPort-RemoteIP: 104.47.58.169
X-IronPort-MID: 104184127
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="104184127"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CKVqkx0Baz9qaPqvPEZWjAy/JF4w4Qh/1Y/mzk7WParBecgAD8AVorJTCGQ9LSOlwvbH04ABdSCq2Z2Uyce2gv8NsdJUlYEBis1lnYD5psmaHVQMBWcXFcNCh5QJa5OLu6E9jLyBu1XbXlscoudF4WltyEnmlAioQD4P9PZMmvOpVCcBPeKCoaDWNowGnBH/d4PFmAN6nUVTR/SCAbOWWNjVmomyArJ9IR1hDt5YQn5RhZyX3CB/n1pWhU0lHrBLJVlUy6Om6XTHBTZJVXk76awSM8KQ8TBwClwXE9B6DgKFc3PFDhOBkf43ZnvLKfWFPOEKFZYrzUstevmo+4pYbA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mlemeQ7aWqQk8Ow5+xbGNCu6nFUcKoMWH3TsirNkFks=;
 b=dpyCRInhvZO+CBErJZN6GMaiDlpXgiT8qki7n85VCz7I6d6V9wREhAoHqtSQxuV+nVrUZZvfZcgcOQp3qls6tj84v84DmHMNmuoOlll7RFCjsiBqLunAz5EAT1yMgZmT31pWoz0Wxyfhe65LHymW+xrRE3AS9zQ+q7LxfSHIzMhJVN6fv9tFXlkX7gc8mgzC9UnQH61dymiiRUkUNE1qMN24fbNOcjxIZwqllAfTnBaJvL8mo53dz/riZ9GeVvCppLW5nnbLS0nCkQce1gJQHJZSswvffNppAyucBG/0hscQbi8W2Yu9pt3TrDMjFZoDZNRooRMiUBAu6/4/b0aSAg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mlemeQ7aWqQk8Ow5+xbGNCu6nFUcKoMWH3TsirNkFks=;
 b=M5aIjkNNE5aZCKHE6i+oahL3k53cU94WRM/AabclROSXeGFvAF0pTnKG+r53p/0ic+o2+FpRFqKS9VS+hgyOKd5JG9ywEnKVCD7U9aCfS4v2ZpmHc3h9vxWEcVOZ3LLSVUquI1yI1T9gfgNYPVBhbhdWxGNEVnSlwmtQCKkZUD4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <6d64dd4a-5b25-ddca-5c07-7b4c0fc48c0c@citrix.com>
Date: Tue, 4 Apr 2023 14:08:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
In-Reply-To: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 15/02/2023 2:54 pm, Jan Beulich wrote:
> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
> applies to guests also when run on a 64-bit hypervisor:

Is this really true?  Even when looking at Xen 4.2, 32bit guests are
required to pass a full 4k page, not a 32-bit value.

Which makes complete sense.  It was a hard requirement of 32bit non-PAE
guests, so it was a natural restriction to maintain into 32bit PAE guests.

This is *only* a 32-on-64 issue, because that is the only case in which
a 32bit guest could in principle have an L3 placed above the 4G boundary.

>  The "extended
> CR3" format has to be used there as well, to fit the address in the only
> 32-bit wide register there. As a result it was a mistake that the check
> was never enabled for that case, and was then mistakenly deleted in the
> course of removal of 32-bit-Xen code (218adf199e68 ["x86: We can assume
> CONFIG_PAGING_LEVELS==4"]).
>
> Similarly during Dom0 construction kernel awareness needs to be taken
> into account, and respective code was again mistakenly never enabled for
> 32-bit Dom0 when running on 64-bit Xen (and thus wrongly deleted by
> 5d1181a5ea5e ["xen: Remove x86_32 build target"]).
>
> At the same time restrict enabling of the assist for Dom0 to just the
> 32-bit case. Furthermore there's no need for an atomic update there.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I was uncertain whether to add a check to the CR3 guest read path,
> raising e.g. #GP(0) when the value read wouldn't fit but also may not
> be converted to "extended" format (overflow is possible there in
> principle because of the control tools "slack" in promote_l3_table()).
>
> In that context I was puzzled to find no check on the CR3 guest write
> path even in 4.2: A guest (bogusly) setting the PCD or PWT bits (or any
> of the low reserved ones) could observe anomalous behavior rather than
> plain failure.
>
> As to a Fixes: tag - it's pretty unclear which of the many original
> 32-on-64 changes to blame. I don't think the two cited commits should
> be referenced there, as they didn't break anything that wasn't already
> broken.
>
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
>      unsigned int   partial_flags = page->partial_flags;
>      l3_pgentry_t   l3e = l3e_empty();
>  
> +    /*
> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
> +     * understand the weird 'extended cr3' format for dealing with high-order
> +     * address bits. We cut some slack for control tools (before vcpu0 is
> +     * initialised).
> +     */
> +    if ( is_pv_32bit_domain(d) &&
> +         unlikely(!VM_ASSIST(d, pae_extended_cr3)) &&
> +         mfn_x(l3mfn) >= 0x100000 &&
> +         d->vcpu[0] && d->vcpu[0]->is_initialised )
> +    {
> +        gdprintk(XENLOG_WARNING,
> +                 "PAE pgd must be below 4GB (%#lx >= 0x100000)",
> +                 mfn_x(l3mfn));
> +        return -ERANGE;
> +    }

Having dug through source history, I see this is largely the form that
it used to be.

But I'm unconvinced by the "cut control tools some slack".  I'm quite
tired of different bits of Xen taking on unnecessary complexity because
people are unwilling to fix the problem at the correct layer.

A toolstack which has a non-pae_extended_cr3 guest on its hands will know
this before any pagetables get allocated.

That said...

I don't recall encountering this in migration v2, and looking at the
logic now, I'm pretty sure it will malfunction with a
non-pae_extended_cr3 guest.  When interpreting the guest cr3 value, we
blindly make the transform on the save and restore side, and I can't
spot anything limiting the L3 tables to below the 4G boundary.

So I'm reasonably sure I accidentally broke such guests in Xen 4.6(?)
and the absence of complaints in the intervening 8(?) years shows how
many are in use in practice.

For this check specifically, I'd suggest prohibiting non-32bit-PAE guests
from setting pae_extended_cr3 in the first place (I see no such limit
currently), and then simplifying the check to just

if ( unlikely(!VM_ASSIST(d, pae_extended_cr3)) &&
     mfn_x(l3mfn) >= PFN_DOWN(GB(4)) )


And I suppose I need to make a non-pae_extended_cr3 XTF test which is
migrate-capable...

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 13:28:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 13:28:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517853.803750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjghY-0000Ff-He; Tue, 04 Apr 2023 13:27:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517853.803750; Tue, 04 Apr 2023 13:27:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjghY-0000FY-ES; Tue, 04 Apr 2023 13:27:56 +0000
Received: by outflank-mailman (input) for mailman id 517853;
 Tue, 04 Apr 2023 13:27:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjghW-0000F9-9t
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 13:27:54 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7f0d471a-d2ec-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 15:27:51 +0200 (CEST)
Received: from mail-dm6nam11lp2170.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 09:27:46 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB6268.namprd03.prod.outlook.com (2603:10b6:510:ea::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 13:27:42 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 13:27:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <91fc0c1f-a985-17bd-2011-f4964d82e008@citrix.com>
Date: Tue, 4 Apr 2023 14:27:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/pci: Correct ECS handling with CF8/CFC emulation
Content-Language: en-GB
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20230331175719.500285-1-andrew.cooper3@citrix.com>
 <ZCqVEHe1Qo3skeVf@Air-de-Roger>
 <4b76def9-9940-ccf0-8050-12ddf2c1253c@citrix.com>
 <ZCrUErZZkd6co1Dq@Air-de-Roger>
In-Reply-To: <ZCrUErZZkd6co1Dq@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 03/04/2023 2:26 pm, Roger Pau Monné wrote:
> On Mon, Apr 03, 2023 at 11:16:52AM +0100, Andrew Cooper wrote:
>> On 03/04/2023 9:57 am, Roger Pau Monné wrote:
>> (Quick tangent...  Our PCI handling is currently very dumb. 
>> pci_mmcfg_read() returns its value by pointer but the callers never
>> check.  Swapping it to return by value would improve code gen quite a
>> lot.  Also, when MMCFG is active we still pass ECS accesses to IO ports.)
> I wonder if it's really preferred to access registers below 255 using
> the IO ports, as Linux seems to do the same (prefer IO port access if
> possible).

And see how many attempts there have been to change this, only blocked
on untangling the IO port mess on other architectures (a problem Xen
doesn't have to contend with).

MMCFG, when available, is strictly preferable to IO ports.

An MMCFG access is a single UC read or write, whereas IO ports are a
pair of UC accesses *and* a global spinlock.

>>>>  
>>>>  #define IS_SNB_GFX(id) (id == 0x01068086 || id == 0x01168086 \
>>>>                          || id == 0x01268086 || id == 0x01028086 \
>>>> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
>>>> index 5da00e24e4ff..008367195c78 100644
>>>> --- a/xen/arch/x86/pv/emul-priv-op.c
>>>> +++ b/xen/arch/x86/pv/emul-priv-op.c
>>>> @@ -245,19 +245,7 @@ static bool pci_cfg_ok(struct domain *currd, unsigned int start,
>>>>          if ( ro_map && test_bit(machine_bdf, ro_map) )
>>>>              return false;
>>>>      }
>>>> -    start |= CF8_ADDR_LO(currd->arch.pci_cf8);
>>>> -    /* AMD extended configuration space access? */
>>>> -    if ( CF8_ADDR_HI(currd->arch.pci_cf8) &&
>>>> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
>>>> -         boot_cpu_data.x86 >= 0x10 && boot_cpu_data.x86 < 0x17 )
>>>> -    {
>>>> -        uint64_t msr_val;
>>>> -
>>>> -        if ( rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) )
>>>> -            return false;
>>>> -        if ( msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT) )
>>>> -            start |= CF8_ADDR_HI(currd->arch.pci_cf8);
>>>> -    }
>>>> +    start |= CF8_REG(currd->arch.pci_cf8);
>>>>  
>>>>      return !write ?
>>>>             xsm_pci_config_permission(XSM_HOOK, currd, machine_bdf,
>>>> @@ -1104,6 +1092,11 @@ static int cf_check write_msr(
>>>>          if ( !is_hwdom_pinned_vcpu(curr) )
>>>>              return X86EMUL_OKAY;
>>>>          if ( (rdmsr_safe(MSR_AMD64_NB_CFG, temp) != 0) ||
>>>> +             /*
>>>> +              * TODO: this is broken.  What happens when dom0 is pinned but
>>>> +              * can't see the full system?  CF8_EXT probably ought to be a
>>>> +              * Xen-owned setting, and made symmetric across the system.
>>>> +              */
>>> I would assume CF8_EXT would be symmetric across the system, specially
>>> given that it seems to be phased out and only used in older AMD
>>> families that where all symmetric?
>> The CF8_EXT bit has been phased out.  The IO ECS functionality still exists.
>>
>> But yes, the more I think about letting dom0 play with this, the more I
>> think it is a fundamentally broken idea...  I bet it was from the very
>> early AMD Fam10h days where dom0 knew how to turn it on, and Xen was
>> trying to pretend it didn't have to touch any PCI devices.
> It seems to me Xen should set CF8_EXT on all threads (when available)
> and expose it to dom0, so that accesses using pci_conf_{read,write}()
> work as expected?

It's per northbridge in the system, not per thread.  Hence the need for
the spinlock protecting the CF8/CFC IO port pair, and why MMCFG is
strictly preferable.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 13:31:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 13:31:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517858.803760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgka-0001i3-4e; Tue, 04 Apr 2023 13:31:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517858.803760; Tue, 04 Apr 2023 13:31:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgka-0001hw-1R; Tue, 04 Apr 2023 13:31:04 +0000
Received: by outflank-mailman (input) for mailman id 517858;
 Tue, 04 Apr 2023 13:31:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjgkZ-0001hk-3U; Tue, 04 Apr 2023 13:31:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjgkZ-0003lX-01; Tue, 04 Apr 2023 13:31:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjgkY-00030w-Ej; Tue, 04 Apr 2023 13:31:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjgkY-0005k4-EF; Tue, 04 Apr 2023 13:31:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180133-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180133: tolerable trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=bfa2e6a246225233f09a2523939e01dcf83bca4c
X-Osstest-Versions-That:
    xen=bfa2e6a246225233f09a2523939e01dcf83bca4c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Apr 2023 13:31:02 +0000

flight 180133 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180133/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180126
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180126
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180126
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180126
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180126
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180126
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180126
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180126
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180126
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  bfa2e6a246225233f09a2523939e01dcf83bca4c
baseline version:
 xen                  bfa2e6a246225233f09a2523939e01dcf83bca4c

Last test of basis   180133  2023-04-04 04:12:59 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 13:39:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 13:39:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517863.803769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgsI-0002NJ-0W; Tue, 04 Apr 2023 13:39:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517863.803769; Tue, 04 Apr 2023 13:39:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgsH-0002NC-Ts; Tue, 04 Apr 2023 13:39:01 +0000
Received: by outflank-mailman (input) for mailman id 517863;
 Tue, 04 Apr 2023 13:39:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8slJ=73=redhat.com=pbonzini@srs-se1.protection.inumbo.net>)
 id 1pjgsG-0002Mh-Rr
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 13:39:00 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0c4bf896-d2ee-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 15:38:57 +0200 (CEST)
Received: from mail-ed1-f70.google.com (mail-ed1-f70.google.com
 [209.85.208.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-512-zhTRkDGhMQ6fbcLtkdxDKQ-1; Tue, 04 Apr 2023 09:38:55 -0400
Received: by mail-ed1-f70.google.com with SMTP id
 b1-20020aa7dc01000000b004ad062fee5eso45756877edu.17
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 06:38:54 -0700 (PDT)
Received: from ?IPV6:2001:b07:6468:f312:9af8:e5f5:7516:fa89?
 ([2001:b07:6468:f312:9af8:e5f5:7516:fa89])
 by smtp.googlemail.com with ESMTPSA id
 f22-20020a50d556000000b005002daeb27asm5937325edj.37.2023.04.04.06.38.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 04 Apr 2023 06:38:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c4bf896-d2ee-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680615536;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=s8A+0/77ETVjCC+da2gmyQ8uuifyTw2Q7q/NY4s52Vs=;
	b=YeNAUa4hhtiLj8Dgxhn8DBcPGnaSTKjzqrFH7nzX4P8fYwSYtIDpYrTAx3+7EhEwXOZjR2
	XSMggwyIxIMbZ6drTBbN3ET4R+4vgtB8cSvsxz9sbgAN71yUHvGpJMRtek/xJ8R/1Sqo4E
	3Gvrh/0lqXP4AJHnfCMNGXNHlYyUGVo=
X-MC-Unique: zhTRkDGhMQ6fbcLtkdxDKQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680615534; x=1683207534;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=s8A+0/77ETVjCC+da2gmyQ8uuifyTw2Q7q/NY4s52Vs=;
        b=IRWuGvVoowvrgRYDukE2Gv00bxP6Ic3GHanhNdMAKmiuUW7mjcW6hAwPDpgT0+JQMh
         lw5oEiORcTmtXa5wmPixVnS8O1n9SaEvG8kUkZ+FNEjNHzzFVqILDYm8XIBgwZmGILeW
         HUdGYbY2KIQ42yWpxxx0ldrI3BavGB+o+DJD3WmCnes0EkXzT8XRJnTbJq4zms5bZ3Kh
         AHxsbU/pHO2OCoBulja4aWrcspqLpDuJwhJUwznllXPZn3TiMGqDJCscJ/dmIh2XbnjG
         xq7/stN8cMZdLdMEnO0lxrdroWp+PIpVLyvTFyXIlh5oTVIOOQRcxBueju9cPD2WzV28
         i/kQ==
X-Gm-Message-State: AAQBX9fYh6rJ2yudQIl7iZ5zSJugzqFK5lP5V51KfNMeDcmtQBeGXsan
	weVFbQqMAQ1v6gdOwo2g+yjMVSzxrshKwHWxxt6QP6wYI3p3ERFyXRgiI7z4sX+oRgHm0fT6BkW
	WjQtTI5maLo8s94CfcIgEQapn8Wo=
X-Received: by 2002:a05:6402:4cf:b0:4fa:e8f3:968b with SMTP id n15-20020a05640204cf00b004fae8f3968bmr2551233edw.19.1680615533971;
        Tue, 04 Apr 2023 06:38:53 -0700 (PDT)
X-Google-Smtp-Source: AKy350al+iU0IKXO90b1D9to94YRCbqQRKUwSNuKsjsPFReXh3S7dIzKpCXzXyF4ywwP2kJFgg+jKA==
X-Received: by 2002:a05:6402:4cf:b0:4fa:e8f3:968b with SMTP id n15-20020a05640204cf00b004fae8f3968bmr2551207edw.19.1680615533676;
        Tue, 04 Apr 2023 06:38:53 -0700 (PDT)
Message-ID: <8f871654-064c-4782-e99f-c81f5935b23a@redhat.com>
Date: Tue, 4 Apr 2023 15:38:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 02/13] virtio-scsi: stop using aio_disable_external()
 during unplug
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Julia Suvorova <jusual@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
 Peter Lieven <pl@kamp.de>, Coiby Xu <Coiby.Xu@gmail.com>,
 xen-devel@lists.xenproject.org,
 Richard Henderson <richard.henderson@linaro.org>,
 Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
 Eduardo Habkost <eduardo@habkost.net>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Paul Durrant <paul@xen.org>, "Richard W.M. Jones" <rjones@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Aarushi Mehta <mehta.aaru20@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Fam Zheng <fam@euphon.net>,
 David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>,
 Juan Quintela <quintela@redhat.com>, Xie Yongji <xieyongji@bytedance.com>,
 Hanna Reitz <hreitz@redhat.com>, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
 eesposit@redhat.com, "Michael S. Tsirkin" <mst@redhat.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Zhengui Li <lizhengui@huawei.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
 <20230403183004.347205-3-stefanha@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <20230403183004.347205-3-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 4/3/23 20:29, Stefan Hajnoczi wrote:
> This patch is part of an effort to remove the aio_disable_external()
> API because it does not fit in a multi-queue block layer world where
> many AioContexts may be submitting requests to the same disk.
> 
> The SCSI emulation code is already in good shape to stop using
> aio_disable_external(). It was only used by commit 9c5aad84da1c
> ("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
> disk") to ensure that virtio_scsi_hotunplug() works while the guest
> driver is submitting I/O.
> 
> Ensure virtio_scsi_hotunplug() is safe as follows:
> 
> 1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
>     device_set_realized() calls qatomic_set(&dev->realized, false) so
>     that future scsi_device_get() calls return NULL because they exclude
>     SCSIDevices with realized=false.
> 
>     That means virtio-scsi will reject new I/O requests to this
>     SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
>     virtio_scsi_hotunplug() is still executing. We are protected against
>     new requests!
> 
> 2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
>     that in-flight requests are cancelled synchronously. This ensures
>     that no in-flight requests remain once qdev_simple_device_unplug_cb()
>     returns.
> 
> Thanks to these two conditions we don't need aio_disable_external()
> anymore.
> 
> Cc: Zhengui Li <lizhengui@huawei.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   hw/scsi/scsi-disk.c   | 1 +
>   hw/scsi/virtio-scsi.c | 3 ---
>   2 files changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
> index 97c9b1c8cd..e01bd84541 100644
> --- a/hw/scsi/scsi-disk.c
> +++ b/hw/scsi/scsi-disk.c
> @@ -2522,6 +2522,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
>   
>   static void scsi_unrealize(SCSIDevice *dev)
>   {
> +    scsi_device_purge_requests(dev, SENSE_CODE(RESET));
>       del_boot_device_lchs(&dev->qdev, NULL);
>   }
>   
> diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
> index 000961446c..a02f9233ec 100644
> --- a/hw/scsi/virtio-scsi.c
> +++ b/hw/scsi/virtio-scsi.c
> @@ -1061,11 +1061,8 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
>       VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
>       VirtIOSCSI *s = VIRTIO_SCSI(vdev);
>       SCSIDevice *sd = SCSI_DEVICE(dev);
> -    AioContext *ctx = s->ctx ?: qemu_get_aio_context();
>   
> -    aio_disable_external(ctx);
>       qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
> -    aio_enable_external(ctx);
>   
>       if (s->ctx) {
>           virtio_scsi_acquire(s);

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 13:39:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 13:39:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517866.803780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgsu-0002rs-9I; Tue, 04 Apr 2023 13:39:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517866.803780; Tue, 04 Apr 2023 13:39:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgsu-0002rl-68; Tue, 04 Apr 2023 13:39:40 +0000
Received: by outflank-mailman (input) for mailman id 517866;
 Tue, 04 Apr 2023 13:39:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8slJ=73=redhat.com=pbonzini@srs-se1.protection.inumbo.net>)
 id 1pjgst-0002po-9E
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 13:39:39 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 23f4759e-d2ee-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 15:39:37 +0200 (CEST)
Received: from mail-ed1-f71.google.com (mail-ed1-f71.google.com
 [209.85.208.71]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-205-6Lp5ym6kPTCntbQ4WZMqJg-1; Tue, 04 Apr 2023 09:39:34 -0400
Received: by mail-ed1-f71.google.com with SMTP id
 n6-20020a5099c6000000b00502c2f26133so1037639edb.12
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 06:39:34 -0700 (PDT)
Received: from ?IPV6:2001:b07:6468:f312:9af8:e5f5:7516:fa89?
 ([2001:b07:6468:f312:9af8:e5f5:7516:fa89])
 by smtp.googlemail.com with ESMTPSA id
 gv19-20020a1709072bd300b00931db712768sm5987531ejc.4.2023.04.04.06.39.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 04 Apr 2023 06:39:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23f4759e-d2ee-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680615576;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+fw7/8SirqBr4X2qtQdxLuMPR3GvOcViyBmOU340OWg=;
	b=abfSQhgaYtzYpsQ6Rnh+lCpLuRkHbswRSzvuRSvhDXXxvi4ca8/57IokU611F8OvMTEHdt
	ndEIEJLfAVib4Zc2VIjRYQZ3COnR830XjvAxZVYqEegXHTr7CxqWXkAGOImlaaZGUx7ofy
	XtDFVsigw4y8zV2Jpnsw38pU7snn5yk=
X-MC-Unique: 6Lp5ym6kPTCntbQ4WZMqJg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680615574; x=1683207574;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=+fw7/8SirqBr4X2qtQdxLuMPR3GvOcViyBmOU340OWg=;
        b=LB+SM2z/jixszcC4S+kU/T+5kPythdG193JJuhyjXTFlk5xHQCw7+BJpagOMTeoUrh
         CCi9YUMB1lPkte5FoCYtvXUQP4/pAAtSfQQPWq2mGANRP+pruJbSD88tkUqcqLD5BnEt
         Y6iszod9UKM7AHL71sojZfNlWuOaD2UEUNNgnp/WE3CdGtP1NCd3KqmNI0KlJS7HY29I
         XFabsI3IfeAZ8BYelHjVUPAm8ogLbzreXfJHWycckaBwWv4Vp5hzubPRpKj9Fie7H/tH
         2k9IsS9apxXcRZA/z5dY2HCYQKbFpc++T3OHpiwZnCDK+zdxsdKM8IHJE+1ccv0fnh4M
         9HGA==
X-Gm-Message-State: AAQBX9euaW3OLVE3BFrbnRj85lEq+Cld+jkG6qDNvryEUb2sYyBgXvVz
	jzuEJTTurKMdq+1rufFwpLuXPPb/0pSjfzWyhaWkl7LQnmex39ISZmgkrl13TWB4fyU8vGi1yoc
	Q3uDzUJrvygqsGdnqbUDIZoopW1o=
X-Received: by 2002:a17:906:aad3:b0:921:5cce:6599 with SMTP id kt19-20020a170906aad300b009215cce6599mr2419094ejb.41.1680615573825;
        Tue, 04 Apr 2023 06:39:33 -0700 (PDT)
X-Google-Smtp-Source: AKy350aaCkFAzW7d1iA6E5boAcJaqOBPLxKcd5Wvt9uXai5VAMRolWR6H8rwuM6Bp4D7FgjG04Wj5g==
X-Received: by 2002:a17:906:aad3:b0:921:5cce:6599 with SMTP id kt19-20020a170906aad300b009215cce6599mr2419071ejb.41.1680615573528;
        Tue, 04 Apr 2023 06:39:33 -0700 (PDT)
Message-ID: <c6ba263d-25e5-fde4-e46d-12929b2cd080@redhat.com>
Date: Tue, 4 Apr 2023 15:39:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 04/13] util/vhost-user-server: rename refcount to
 in_flight counter
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Julia Suvorova <jusual@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
 Peter Lieven <pl@kamp.de>, Coiby Xu <Coiby.Xu@gmail.com>,
 xen-devel@lists.xenproject.org,
 Richard Henderson <richard.henderson@linaro.org>,
 Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
 Eduardo Habkost <eduardo@habkost.net>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Paul Durrant <paul@xen.org>, "Richard W.M. Jones" <rjones@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Aarushi Mehta <mehta.aaru20@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Fam Zheng <fam@euphon.net>,
 David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>,
 Juan Quintela <quintela@redhat.com>, Xie Yongji <xieyongji@bytedance.com>,
 Hanna Reitz <hreitz@redhat.com>, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
 eesposit@redhat.com, "Michael S. Tsirkin" <mst@redhat.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
 <20230403183004.347205-5-stefanha@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <20230403183004.347205-5-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 4/3/23 20:29, Stefan Hajnoczi wrote:
> The VuServer object has a refcount field and ref/unref APIs. The name is
> confusing because it's actually an in-flight request counter instead of
> a refcount.
> 
> Normally a refcount destroys the object upon reaching zero. The VuServer
> counter is used to wake up the vhost-user coroutine when there are no
> more requests.
> 
> Avoid confusion by renaming refcount and ref/unref to in_flight and
> inc/dec.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   include/qemu/vhost-user-server.h     |  6 +++---
>   block/export/vhost-user-blk-server.c | 11 +++++++----
>   util/vhost-user-server.c             | 14 +++++++-------
>   3 files changed, 17 insertions(+), 14 deletions(-)
> 
> diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
> index 25c72433ca..bc0ac9ddb6 100644
> --- a/include/qemu/vhost-user-server.h
> +++ b/include/qemu/vhost-user-server.h
> @@ -41,7 +41,7 @@ typedef struct {
>       const VuDevIface *vu_iface;
>   
>       /* Protected by ctx lock */
> -    unsigned int refcount;
> +    unsigned int in_flight;
>       bool wait_idle;
>       VuDev vu_dev;
>       QIOChannel *ioc; /* The I/O channel with the client */
> @@ -60,8 +60,8 @@ bool vhost_user_server_start(VuServer *server,
>   
>   void vhost_user_server_stop(VuServer *server);
>   
> -void vhost_user_server_ref(VuServer *server);
> -void vhost_user_server_unref(VuServer *server);
> +void vhost_user_server_inc_in_flight(VuServer *server);
> +void vhost_user_server_dec_in_flight(VuServer *server);
>   
>   void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
>   void vhost_user_server_detach_aio_context(VuServer *server);
> diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
> index 3409d9e02e..e93f2ed6b4 100644
> --- a/block/export/vhost-user-blk-server.c
> +++ b/block/export/vhost-user-blk-server.c
> @@ -49,7 +49,10 @@ static void vu_blk_req_complete(VuBlkReq *req, size_t in_len)
>       free(req);
>   }
>   
> -/* Called with server refcount increased, must decrease before returning */
> +/*
> + * Called with server in_flight counter increased, must decrease before
> + * returning.
> + */
>   static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
>   {
>       VuBlkReq *req = opaque;
> @@ -67,12 +70,12 @@ static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
>                                       in_num, out_num);
>       if (in_len < 0) {
>           free(req);
> -        vhost_user_server_unref(server);
> +        vhost_user_server_dec_in_flight(server);
>           return;
>       }
>   
>       vu_blk_req_complete(req, in_len);
> -    vhost_user_server_unref(server);
> +    vhost_user_server_dec_in_flight(server);
>   }
>   
>   static void vu_blk_process_vq(VuDev *vu_dev, int idx)
> @@ -94,7 +97,7 @@ static void vu_blk_process_vq(VuDev *vu_dev, int idx)
>           Coroutine *co =
>               qemu_coroutine_create(vu_blk_virtio_process_req, req);
>   
> -        vhost_user_server_ref(server);
> +        vhost_user_server_inc_in_flight(server);
>           qemu_coroutine_enter(co);
>       }
>   }
> diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
> index 5b6216069c..1622f8cfb3 100644
> --- a/util/vhost-user-server.c
> +++ b/util/vhost-user-server.c
> @@ -75,16 +75,16 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
>       error_report("vu_panic: %s", buf);
>   }
>   
> -void vhost_user_server_ref(VuServer *server)
> +void vhost_user_server_inc_in_flight(VuServer *server)
>   {
>       assert(!server->wait_idle);
> -    server->refcount++;
> +    server->in_flight++;
>   }
>   
> -void vhost_user_server_unref(VuServer *server)
> +void vhost_user_server_dec_in_flight(VuServer *server)
>   {
> -    server->refcount--;
> -    if (server->wait_idle && !server->refcount) {
> +    server->in_flight--;
> +    if (server->wait_idle && !server->in_flight) {
>           aio_co_wake(server->co_trip);
>       }
>   }
> @@ -192,13 +192,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
>           /* Keep running */
>       }
>   
> -    if (server->refcount) {
> +    if (server->in_flight) {
>           /* Wait for requests to complete before we can unmap the memory */
>           server->wait_idle = true;
>           qemu_coroutine_yield();
>           server->wait_idle = false;
>       }
> -    assert(server->refcount == 0);
> +    assert(server->in_flight == 0);
>   
>       vu_deinit(vu_dev);
>   

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 13:43:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 13:43:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517870.803790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgwb-0004PC-TQ; Tue, 04 Apr 2023 13:43:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517870.803790; Tue, 04 Apr 2023 13:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgwb-0004P5-QW; Tue, 04 Apr 2023 13:43:29 +0000
Received: by outflank-mailman (input) for mailman id 517870;
 Tue, 04 Apr 2023 13:43:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8slJ=73=redhat.com=pbonzini@srs-se1.protection.inumbo.net>)
 id 1pjgwa-0004Oz-2D
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 13:43:28 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ac7d1973-d2ee-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 15:43:26 +0200 (CEST)
Received: from mail-ed1-f72.google.com (mail-ed1-f72.google.com
 [209.85.208.72]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-401-sOwozmLYNka3ioryJDhGRA-1; Tue, 04 Apr 2023 09:43:24 -0400
Received: by mail-ed1-f72.google.com with SMTP id
 s30-20020a508d1e000000b005005cf48a93so45831605eds.8
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 06:43:23 -0700 (PDT)
Received: from ?IPV6:2001:b07:6468:f312:9af8:e5f5:7516:fa89?
 ([2001:b07:6468:f312:9af8:e5f5:7516:fa89])
 by smtp.googlemail.com with ESMTPSA id
 tp24-20020a170907c49800b00948c320fcfdsm2099432ejc.202.2023.04.04.06.43.21
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 04 Apr 2023 06:43:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac7d1973-d2ee-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680615805;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=banha77N8+B6dxSVSxrCs92gu1rjIC58TeHKkpj1HpY=;
	b=EhtOVeFUA+/eJpC5yUkkyuanpN2dGoPHlXpaEhKQ++x02jdlobL4TbEMdL2o+Tzf2AkIHT
	Z1p+waU+cYdo9f9hkMScwjxGXa/NTPftL2GUEiJAOk2F3rXD5tzUL0/WmNbkzL304Dg7Am
	nKydMtZCcY44tC4vVPVYQL57SykOgm8=
X-MC-Unique: sOwozmLYNka3ioryJDhGRA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680615803; x=1683207803;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=banha77N8+B6dxSVSxrCs92gu1rjIC58TeHKkpj1HpY=;
        b=dxGkdqudtTO4x5Asj2HNhA34vnvbGlZ7DLyRbk4L4yfhOVImESaCkyMN7Hqn4BVHmh
         DfhhY4Z3qT8tFIUUzpOyHhs86R2Ct+XNCYDcMcCsRAhfelYsib0DOhyHKqWt8Nlx0knQ
         8sKr482fIogH7YS3n5P2Uhd0gGDQY7/F/IViLZKPnCrp6O4TRCUYJ6L2vmGDtHmZDvGj
         PtQoEdX99U9gNrVNucskcY6hefxPrrrLqUu+x87Y+wBJfWEG412Sb4ARIJke95JYQSOP
         DivZapQO7Vjv/7i7KlBT8otkvqk6r1GOK6Z4IN+RSw97fAOdcwQtKtNL0jaar1pw9OrF
         QlUA==
X-Gm-Message-State: AAQBX9dEwKl3sD9K4+TwL80eECOh8uXbu97OYtxQ0DX9Xb+DsxLxVfWl
	kLExiaDIUbzHcacJyb7d2Eqhi8SMaYwlSJAIuUDd9QP9Gti1fBTK03RZzATH7Q/348YUOhmhAGO
	9nbHGirbAlDIKGfR0QV1Zx59qQ0U=
X-Received: by 2002:a17:907:6b87:b0:93c:efaf:ba75 with SMTP id rg7-20020a1709076b8700b0093cefafba75mr2282159ejc.37.1680615802912;
        Tue, 04 Apr 2023 06:43:22 -0700 (PDT)
X-Google-Smtp-Source: AKy350ZNWNaNuwDkGHAuaeEc6X1s7r9xa0mDxtFhP32ZOx76xJcO7cMfakVgb34HP4JHYtrJgHQzvw==
X-Received: by 2002:a17:907:6b87:b0:93c:efaf:ba75 with SMTP id rg7-20020a1709076b8700b0093cefafba75mr2282128ejc.37.1680615802626;
        Tue, 04 Apr 2023 06:43:22 -0700 (PDT)
Message-ID: <261efade-683e-84dc-d402-7143be7199c3@redhat.com>
Date: Tue, 4 Apr 2023 15:43:20 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 00/13] block: remove aio_disable_external() API
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Julia Suvorova <jusual@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
 Peter Lieven <pl@kamp.de>, Coiby Xu <Coiby.Xu@gmail.com>,
 xen-devel@lists.xenproject.org,
 Richard Henderson <richard.henderson@linaro.org>,
 Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
 Eduardo Habkost <eduardo@habkost.net>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Paul Durrant <paul@xen.org>, "Richard W.M. Jones" <rjones@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Aarushi Mehta <mehta.aaru20@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Fam Zheng <fam@euphon.net>,
 David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>,
 Juan Quintela <quintela@redhat.com>, Xie Yongji <xieyongji@bytedance.com>,
 Hanna Reitz <hreitz@redhat.com>, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
 eesposit@redhat.com, "Michael S. Tsirkin" <mst@redhat.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <20230403183004.347205-1-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 4/3/23 20:29, Stefan Hajnoczi wrote:
> The aio_disable_external() API temporarily suspends file descriptor monitoring
> in the event loop. The block layer uses this to prevent new I/O requests being
> submitted from the guest and elsewhere between bdrv_drained_begin() and
> bdrv_drained_end().
> 
> While the block layer still needs to prevent new I/O requests in drained
> sections, the aio_disable_external() API can be replaced with
> .drained_begin/end/poll() callbacks that have been added to BdrvChildClass and
> BlockDevOps.
> 
> This newer .drained_begin/end/poll() approach is attractive because it works
> without specifying a specific AioContext. The block layer is moving towards
> multi-queue and that means multiple AioContexts may be processing I/O
> simultaneously.
> 
> The aio_disable_external() was always somewhat hacky. It suspends all file
> descriptors that were registered with is_external=true, even if they have
> nothing to do with the BlockDriverState graph nodes that are being drained.
> It's better to solve a block layer problem in the block layer than to have an
> odd event loop API solution.
> 
> That covers the motivation for this change, now on to the specifics of this
> series:
> 
> While it would be nice if a single conceptual approach could be applied to all
> is_external=true file descriptors, I ended up looking at callers on a
> case-by-case basis. There are two general ways I migrated code away from
> is_external=true:
> 
> 1. Block exports are typically best off unregistering fds in .drained_begin()
>     and registering them again in .drained_end(). The .drained_poll() function
>     waits for in-flight requests to finish using a reference counter.
> 
> 2. Emulated storage controllers like virtio-blk and virtio-scsi are a little
>     simpler. They can rely on BlockBackend's request queuing feature during
>     drain. Guest I/O request coroutines are suspended in a drained section and
>     resume upon the end of the drained section.

Sorry, I disagree with this.

Request queuing was shown to cause deadlocks; Hanna's latest patch is 
piling another hack upon it. In my opinion we should instead go in the 
direction of relying _less_ (or not at all) on request queuing.

I am strongly convinced that request queuing must apply only after 
bdrv_drained_begin has returned, which would also fix the IDE TRIM bug 
reported by Fiona Ebner.  The possible livelock scenario is generally 
not a problem because 1) outside an iothread you have the BQL anyway, 
which prevents a vCPU from issuing more I/O operations during 
bdrv_drained_begin, and 2) in iothreads you have aio_disable_external() 
instead of .drained_begin().

It is also less tidy to start a request during the drained_begin phase, 
because a request that has been submitted has to be completed 
(cancellation doesn't really work).

So in an ideal world, request queuing would not only apply after 
bdrv_drained_begin has returned, it would also log a warning, and 
.drained_begin() should set things up so that no such warnings occur.

Thanks,

Paolo



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 13:46:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 13:46:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517873.803799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgzj-000505-B6; Tue, 04 Apr 2023 13:46:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517873.803799; Tue, 04 Apr 2023 13:46:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjgzj-0004zy-8S; Tue, 04 Apr 2023 13:46:43 +0000
Received: by outflank-mailman (input) for mailman id 517873;
 Tue, 04 Apr 2023 13:46:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8slJ=73=redhat.com=pbonzini@srs-se1.protection.inumbo.net>)
 id 1pjgzi-0004zr-P8
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 13:46:42 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 20184499-d2ef-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 15:46:40 +0200 (CEST)
Received: from mail-ed1-f71.google.com (mail-ed1-f71.google.com
 [209.85.208.71]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-341-mct8Gp3fNoy_iDMr5PIPuw-1; Tue, 04 Apr 2023 09:46:38 -0400
Received: by mail-ed1-f71.google.com with SMTP id
 j21-20020a508a95000000b004fd82403c91so45818213edj.3
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 06:46:37 -0700 (PDT)
Received: from ?IPV6:2001:b07:6468:f312:9af8:e5f5:7516:fa89?
 ([2001:b07:6468:f312:9af8:e5f5:7516:fa89])
 by smtp.googlemail.com with ESMTPSA id
 ae14-20020a17090725ce00b00947a939f6e0sm5693767ejc.77.2023.04.04.06.46.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 04 Apr 2023 06:46:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20184499-d2ef-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680615999;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EOC5xMNB8TKUwhrqSALMGxTnLF38Hs5nrhrqBLl6nYM=;
	b=diswIHp3Kk8bHBFXDoC1xDGSmIUco8xblMdh2DwNsGJqx1gOWL07Ka+N52QlWzwF93nF0Y
	4H0C/jfL5xGjK0fSqDPcgKJW+Nd72ImQc18OHjU8TOVNZ/QviIk806kC1RIUdmSW9j/877
	NKbRnQML6TtJFAdS4bIiqz4d/B2bvUU=
X-MC-Unique: mct8Gp3fNoy_iDMr5PIPuw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680615997; x=1683207997;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=EOC5xMNB8TKUwhrqSALMGxTnLF38Hs5nrhrqBLl6nYM=;
        b=Xas72rhRFB8hxvoiNnmzetw0397IEIEncBJ8+v2jPKQK9nvqgKN5V5GYyc/7y3ruUL
         bdqevhapMi8Bbt80J8Y00t8VKIJ+7l1KUv/vmxisPEd5ghgpefqN+AMhLa/2sb3hHgWJ
         Hf8zYj7aEO0xbjciL33uusWrCXsk++oRHllgV4EeKFQ1c8s9G8qX87wEwr4Vakncyxpo
         fhnAlnUM1TDPIT6bi2pZbOC2aHv7Xg5U3hn9nFPhvGld7GHzhOG2xNGXjD4OWrRbl7Hn
         OAXuiShk9R5yiATRuLyRY9ZfolAC/Dy1CSFdRpRJ1zB2y1V3oPt8uKdWNoPJ77z9qQcw
         T3bQ==
X-Gm-Message-State: AAQBX9ejd+YzgALL8NJ1fI1t4t/25vFeadOHKgTLFCkuKp3OWluBtl5x
	LLJSFzDKV0ZuC9/F5SWhvHYUjvxuqf9EpKGvTyOwcR/0shyx72EHDvUKkVzz09JuUW3xFNhxHsx
	BmvIPiJd1FLfMGSwyuDjKNLFP7U8=
X-Received: by 2002:a17:907:8a24:b0:947:791b:fdcb with SMTP id sc36-20020a1709078a2400b00947791bfdcbmr2640629ejc.21.1680615996937;
        Tue, 04 Apr 2023 06:46:36 -0700 (PDT)
X-Google-Smtp-Source: AKy350ZKGn72BpwOfdhCDqdjHSKetYDAkrw483POzylVVBOQOp0vZ1sAfoVpUPVCPr98oRDnQvzcCQ==
X-Received: by 2002:a17:907:8a24:b0:947:791b:fdcb with SMTP id sc36-20020a1709078a2400b00947791bfdcbmr2640574ejc.21.1680615996616;
        Tue, 04 Apr 2023 06:46:36 -0700 (PDT)
Message-ID: <92b731c7-81d4-ef54-cca9-9dcb944e94f0@redhat.com>
Date: Tue, 4 Apr 2023 15:46:34 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 11/13] block/fuse: take AioContext lock around
 blk_exp_ref/unref()
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Julia Suvorova <jusual@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
 Peter Lieven <pl@kamp.de>, Coiby Xu <Coiby.Xu@gmail.com>,
 xen-devel@lists.xenproject.org,
 Richard Henderson <richard.henderson@linaro.org>,
 Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
 Eduardo Habkost <eduardo@habkost.net>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Paul Durrant <paul@xen.org>, "Richard W.M. Jones" <rjones@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Aarushi Mehta <mehta.aaru20@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Fam Zheng <fam@euphon.net>,
 David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>,
 Juan Quintela <quintela@redhat.com>, Xie Yongji <xieyongji@bytedance.com>,
 Hanna Reitz <hreitz@redhat.com>, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
 eesposit@redhat.com, "Michael S. Tsirkin" <mst@redhat.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
 <20230403183004.347205-12-stefanha@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <20230403183004.347205-12-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 4/3/23 20:30, Stefan Hajnoczi wrote:
> These functions must be called with the AioContext acquired:
> 
>    /* Callers must hold exp->ctx lock */
>    void blk_exp_ref(BlockExport *exp)
>    ...
>    /* Callers must hold exp->ctx lock */
>    void blk_exp_unref(BlockExport *exp)
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block/export/fuse.c | 4 ++++
>   1 file changed, 4 insertions(+)
> 
> diff --git a/block/export/fuse.c b/block/export/fuse.c
> index 06fa41079e..18394f9e07 100644
> --- a/block/export/fuse.c
> +++ b/block/export/fuse.c
> @@ -244,7 +244,9 @@ static void read_from_fuse_export(void *opaque)
>       FuseExport *exp = opaque;
>       int ret;
>   
> +    aio_context_acquire(exp->common.ctx);
>       blk_exp_ref(&exp->common);
> +    aio_context_release(exp->common.ctx);
>   
>       do {
>           ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
> @@ -256,7 +258,9 @@ static void read_from_fuse_export(void *opaque)
>       fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);
>   
>   out:
> +    aio_context_acquire(exp->common.ctx);
>       blk_exp_unref(&exp->common);
> +    aio_context_release(exp->common.ctx);
>   }

Since the actual thread-unsafe work is done in a bottom half, perhaps 
you can instead use qatomic_inc() and qatomic_fetch_dec() in 
blk_exp_{ref,unref}()?

Paolo



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 13:48:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 13:48:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517876.803810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjh0r-0005X0-M4; Tue, 04 Apr 2023 13:47:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517876.803810; Tue, 04 Apr 2023 13:47:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjh0r-0005Ws-I7; Tue, 04 Apr 2023 13:47:53 +0000
Received: by outflank-mailman (input) for mailman id 517876;
 Tue, 04 Apr 2023 13:47:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8slJ=73=redhat.com=pbonzini@srs-se1.protection.inumbo.net>)
 id 1pjh0q-0005Wk-3m
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 13:47:52 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 49ac961a-d2ef-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 15:47:50 +0200 (CEST)
Received: from mail-ed1-f69.google.com (mail-ed1-f69.google.com
 [209.85.208.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-447-soMn4lKEOgS4lM9Mr4vtAQ-1; Tue, 04 Apr 2023 09:47:45 -0400
Received: by mail-ed1-f69.google.com with SMTP id
 u30-20020a50c05e000000b0050299de3f82so10374672edd.10
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 06:47:45 -0700 (PDT)
Received: from ?IPV6:2001:b07:6468:f312:9af8:e5f5:7516:fa89?
 ([2001:b07:6468:f312:9af8:e5f5:7516:fa89])
 by smtp.googlemail.com with ESMTPSA id
 u25-20020a170906109900b008e22978b98bsm5996684eju.61.2023.04.04.06.47.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 04 Apr 2023 06:47:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49ac961a-d2ef-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680616069;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NXZVAuXURCCUeEyCJ3Apt7MqMRoeGh4PQRIsfTgOkBg=;
	b=Y+VGdb+cQ0Un1Bfr731INk2BFXRgrmxlmd8dH6m6QAfIFciwB3qKeAeZOSPPUZthpsF70O
	KajTWvXkThnPdZNW6Ha+ZIMFj9qdbmdudX8LZ56oCrE7EootRaLN/kcyps5BhCAnay0yVl
	SzxUZzQ97067GL9bRpzUqiNiNcVxVF4=
X-MC-Unique: soMn4lKEOgS4lM9Mr4vtAQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680616065; x=1683208065;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=NXZVAuXURCCUeEyCJ3Apt7MqMRoeGh4PQRIsfTgOkBg=;
        b=GJjzaPP6j2RNmErkH7tg+w6Kj1i5jqPJsLvOdm7QCAlks+ZSDmGbUlo672hUG4KHar
         WUxNt+thSENTuQTJ2ODSydGI4IW2d6fT9JIrBdqbStM58RC4VpXVYB/RtcgXc9+bUbpL
         zy4Ik3ekZhUFqctgUL9F1SlL26tEjajc2pVGnJtocAJ9ME7TeEFkIR01lmyXREb8QwcE
         zId1quFZQwvz2PMmMPc+pIklOUWNeOTCWO02CyEZnEmYbI5WFGbn4cQ67dJHGmrFdaWn
         zx+AIy/gs/3AFVDq89mRL4OC1RRVAWfEVmR6co5d9soiAb2lfnUYsY9pbOF6RKniSYmx
         6D2w==
X-Gm-Message-State: AAQBX9cs3N+bWxWlBOF+nz5NAVCMim7xKlscvwYP2Mjh/w02DHgmiGWW
	A+HjG197yAkxxYe3M5+PBPIL1VZIP4zBrY4ZDJxrH3UeQTRxzEQPDXiQvwjCrt12new5m0PuY8n
	QZNPbB/8hbBLwRBprvkF3LYfk73g=
X-Received: by 2002:a17:906:68ca:b0:931:b4d3:fc7f with SMTP id y10-20020a17090668ca00b00931b4d3fc7fmr2638023ejr.30.1680616064890;
        Tue, 04 Apr 2023 06:47:44 -0700 (PDT)
X-Google-Smtp-Source: AKy350bTUWpDfEkQMztCXSp3lAzmKMLlNkSrHzGGYgK9EE7DOW/ArYgtjIo2Ha6U6gSgxkvegrzexA==
X-Received: by 2002:a17:906:68ca:b0:931:b4d3:fc7f with SMTP id y10-20020a17090668ca00b00931b4d3fc7fmr2638000ejr.30.1680616064630;
        Tue, 04 Apr 2023 06:47:44 -0700 (PDT)
Message-ID: <df512475-d1b0-eb76-9a0b-28760b5a73d2@redhat.com>
Date: Tue, 4 Apr 2023 15:47:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 01/13] virtio-scsi: avoid race between unplug and
 transport event
To: Stefan Hajnoczi <stefanha@redhat.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
Cc: qemu-devel@nongnu.org, Julia Suvorova <jusual@redhat.com>,
 Kevin Wolf <kwolf@redhat.com>, Peter Lieven <pl@kamp.de>,
 Coiby Xu <Coiby.Xu@gmail.com>, xen-devel@lists.xenproject.org,
 Richard Henderson <richard.henderson@linaro.org>,
 Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
 Eduardo Habkost <eduardo@habkost.net>, Paul Durrant <paul@xen.org>,
 "Richard W.M. Jones" <rjones@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Aarushi Mehta <mehta.aaru20@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Fam Zheng <fam@euphon.net>,
 David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>,
 Juan Quintela <quintela@redhat.com>, Xie Yongji <xieyongji@bytedance.com>,
 Hanna Reitz <hreitz@redhat.com>, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
 eesposit@redhat.com, "Michael S. Tsirkin" <mst@redhat.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>
References: <20230403183004.347205-1-stefanha@redhat.com>
 <20230403183004.347205-2-stefanha@redhat.com>
 <2bbe988c-0802-55c3-b2a3-05e3f94e2f04@linaro.org>
 <20230404130658.GG428487@fedora>
From: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <20230404130658.GG428487@fedora>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 4/4/23 15:06, Stefan Hajnoczi wrote:
>> Would this be more useful as a qdev_is_realized() helper?
> Yes. There are no other users, but I think a helper makes sense.

Agreed; anyway,

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

Paolo



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 13:48:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 13:48:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517878.803820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjh1s-00064o-W9; Tue, 04 Apr 2023 13:48:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517878.803820; Tue, 04 Apr 2023 13:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjh1s-00064g-T7; Tue, 04 Apr 2023 13:48:56 +0000
Received: by outflank-mailman (input) for mailman id 517878;
 Tue, 04 Apr 2023 13:48:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMxR=73=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pjh1r-000645-2z
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 13:48:55 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20617.outbound.protection.outlook.com
 [2a01:111:f400:7d00::617])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6ea399f3-d2ef-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 15:48:51 +0200 (CEST)
Received: from DU2PR04CA0327.eurprd04.prod.outlook.com (2603:10a6:10:2b5::32)
 by DU0PR08MB9824.eurprd08.prod.outlook.com (2603:10a6:10:443::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 13:48:49 +0000
Received: from DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b5:cafe::dc) by DU2PR04CA0327.outlook.office365.com
 (2603:10a6:10:2b5::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.22 via Frontend
 Transport; Tue, 4 Apr 2023 13:48:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT019.mail.protection.outlook.com (100.127.142.129) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6222.22 via Frontend Transport; Tue, 4 Apr 2023 13:48:48 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Tue, 04 Apr 2023 13:48:48 +0000
Received: from b5273e95bc26.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C0E0D05A-41F1-4CB3-8A13-790796B09FB8.1; 
 Tue, 04 Apr 2023 13:48:37 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b5273e95bc26.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 04 Apr 2023 13:48:37 +0000
Received: from AM0PR08MB3745.eurprd08.prod.outlook.com (2603:10a6:208:ff::27)
 by DU0PR08MB9396.eurprd08.prod.outlook.com (2603:10a6:10:423::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 13:48:34 +0000
Received: from AM0PR08MB3745.eurprd08.prod.outlook.com
 ([fe80::7b58:72de:8c37:5104]) by AM0PR08MB3745.eurprd08.prod.outlook.com
 ([fe80::7b58:72de:8c37:5104%6]) with mapi id 15.20.6254.033; Tue, 4 Apr 2023
 13:48:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ea399f3-d2ef-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Yu4G5C5PAz4602WPmtEDrxcOgB32F/GfLy1j4u4kXq4=;
 b=ChSO1tDiwBHwHSg4cspdN2v239HQfqI+k3VD3oq2iNo5a2J22+UolWFWprwAbHyg4Kk7xxWWi17R9oZTVYf4zg4fUsf0lNobqlXRMUgwoK/2wRy3dPfapJdsoR6CZ8yEFOQOXxWCEm0O9npWKiyWGbYxdNilUAQwEWBVDwpyLmU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8a7c52f910f13e3e
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y77Pmyo5vpXOmzKkJG6aKdl4TwdtzFrg5QSdlxAubVExdq/OevnWC5wlV5EnYeNzQ0OcrH1cSRI7JRpaURti++raDcHth06HESKl7yIBTByJxDL9oZ4vWD0TCowVkAj6eaXO2909yR4P7CYhdaNreqfawhHxX5/o7GcB9ji/hgiPYSwDyqSisrDmuiYKd3U4iyWjCJwjjSMgUJg7uiZ0nta7sDiFOkLKUMmfWDSRnDtQ4HlpNJ0kBL1WT/BvOM1gEMgQp3cycSIyr25BdOG0LrINCjbSji3nSpDSHuAffUAg5AcZ5xKuTGZsEsDMHd0OG9a19MxRLUbNchSQMZG8fg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Yu4G5C5PAz4602WPmtEDrxcOgB32F/GfLy1j4u4kXq4=;
 b=j2WZ6zbuRzT8+PlbW470uczrj7E7v1D+xhwtFA2MbAthaofo8Ly/xZ6mCRxnhDl7z2o9CrVXqHnbzl06xla8S8s5w2bg3LW2L2d6nE8Ii9i+vJDmPGusXNeCWNHMhGgCJTKr8UIom4/5FDYp65jwtq2REeg4IvszYCoffLUalNaseG6a7pNfrd9iyBDxICwrpqDngU5sTHoXGM/Fspk/jAbW2Aqe5wBKhRIkmEBRvHQw6dlN7NYGa8HRpmTTbysw/b5DcwWzN4TJ81Ya6ySMndGxmJiO/KSzcQgn7CWBWdqcqA7qcEJ5WRyvxPMcJWbrXvIIVasCrxJ7MQVDFPZx1A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Yu4G5C5PAz4602WPmtEDrxcOgB32F/GfLy1j4u4kXq4=;
 b=ChSO1tDiwBHwHSg4cspdN2v239HQfqI+k3VD3oq2iNo5a2J22+UolWFWprwAbHyg4Kk7xxWWi17R9oZTVYf4zg4fUsf0lNobqlXRMUgwoK/2wRy3dPfapJdsoR6CZ8yEFOQOXxWCEm0O9npWKiyWGbYxdNilUAQwEWBVDwpyLmU=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@cloud.com>
Subject: Re: [PATCH v4 10/12] xen/tools: add sve parameter in XL configuration
Thread-Topic: [PATCH v4 10/12] xen/tools: add sve parameter in XL
 configuration
Thread-Index: AQHZYJtmX5JG1Zpok0mjr+irvDjf6q8U92AAgAY/qIA=
Date: Tue, 4 Apr 2023 13:48:34 +0000
Message-ID: <328A9CBD-5FCE-481B-93AF-D139963488D5@arm.com>
References: <20230327105944.1360856-1-luca.fancellu@arm.com>
 <20230327105944.1360856-11-luca.fancellu@arm.com>
 <9bd2924b-bb4a-440d-ae31-0253e66c56e5@perard>
In-Reply-To: <9bd2924b-bb4a-440d-ae31-0253e66c56e5@perard>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.400.51.1.1)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB3745:EE_|DU0PR08MB9396:EE_|DBAEUR03FT019:EE_|DU0PR08MB9824:EE_
X-MS-Office365-Filtering-Correlation-Id: d29897bf-f5a4-4ab3-b278-08db35135169
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <73907EEA8D744543B8238637242D9E44@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9396
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	21826133-0d7d-48dc-9e91-08db351348b2
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 13:48:48.8808
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d29897bf-f5a4-4ab3-b278-08db35135169
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9824

Hi Anthony,

Thanks for your review.

> On 31 Mar 2023, at 15:23, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> On Mon, Mar 27, 2023 at 11:59:42AM +0100, Luca Fancellu wrote:
>> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
>> index 10f37990be57..adf48fe8ac1d 100644
>> --- a/docs/man/xl.cfg.5.pod.in
>> +++ b/docs/man/xl.cfg.5.pod.in
>> @@ -2952,6 +2952,17 @@ Currently, only the "sbsa_uart" model is supported for ARM.
>> 
>> =back
>> 
>> +=item B<sve="NUMBER">
>> +
>> +To enable SVE, user must specify a number different from zero, maximum 2048 and
>> +multiple of 128. That value will be the maximum number of SVE registers bits
>> +that the hypervisor will impose to this guest. If the platform has a lower
> 
> Maybe start by describing what the "sve" value is before imposing
> limits. Maybe something like:
> 
>    Set the maximum vector length that a guest's Scalable Vector
>    Extension (SVE) can use. Or disable it by specifying 0, the default.
> 
>    Value needs to be a multiple of 128, with a maximum of 2048 or the
>    maximum supported by the platform.
> 
> Would this, or something like that, be a good explanation of the "sve"
> configuration option?

Yes, I can change it; I need to do it anyway because I think the suggestion
from Jan can also apply here, and we could pass a negative value that means
"max VL supported by the platform".

> 
>> +supported bits value, then the domain creation will fail.
>> +A value equal to zero is the default and it means this guest is not allowed to
>> +use SVE.
>> +
>> +=back
>> +
>> =head3 x86
>> 
>> =over 4
>> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
>> index ddc7b2a15975..16a49031fd51 100644
>> --- a/tools/libs/light/libxl_arm.c
>> +++ b/tools/libs/light/libxl_arm.c
>> @@ -211,6 +211,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>>         return ERROR_FAIL;
>>     }
>> 
>> +    config->arch.sve_vl = d_config->b_info.arch_arm.sve;
> 
> This truncates a 16-bit value into an 8-bit value; I think you should
> check that the value can actually fit.
> 
> And maybe check the `d_config->b_info.arch_arm.sve` value here instead
> of in `xl`, as commented later.

Yes, I can do it. One question: can I use xc_physinfo here to retrieve the
maximum vector length from arch_capabilities?
I mean, is there a better way, or can I go for that?

> 
>> +
>>     return 0;
>> }
>> 
>> diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
>> index fd31dacf7d5a..ef4a8358e54e 100644
>> --- a/tools/libs/light/libxl_types.idl
>> +++ b/tools/libs/light/libxl_types.idl
>> @@ -690,6 +690,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>> 
>>     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
>>                                ("vuart", libxl_vuart_type),
>> +                               ("sve", uint16),
> 
> I wonder if renaming "sve" to "sve_vl" here would make sense, seeing
> that "sve_vl" is actually used in other places.

Yes, I can rename it as sve_vl; I will also change the type to "integer".

> 
>>                                ])),
>>     ("arch_x86", Struct(None, [("msr_relaxed", libxl_defbool),
>>                               ])),
>> diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
>> index 1f6f47daf4e1..3cbc23b36952 100644
>> --- a/tools/xl/xl_parse.c
>> +++ b/tools/xl/xl_parse.c
>> @@ -12,6 +12,7 @@
>>  * GNU Lesser General Public License for more details.
>>  */
>> 
>> +#include <arm-arch-capabilities.h>
> 
> Could you add this header after the public ones?

Yes.

> 
>> #include <ctype.h>
>> #include <inttypes.h>
>> #include <limits.h>
>> @@ -1312,8 +1313,6 @@ void parse_config_data(const char *config_source,
>>         exit(EXIT_FAILURE);
>>     }
>> 
>> -    libxl_physinfo_dispose(&physinfo);
>> -
>>     config= xlu_cfg_init(stderr, config_source);
>>     if (!config) {
>>         fprintf(stderr, "Failed to allocate for configuration\n");
>> @@ -2887,6 +2886,29 @@ skip_usbdev:
>>         }
>>     }
>> 
>> +    if (!xlu_cfg_get_long (config, "sve", &l, 0)) {
>> +        unsigned int arm_sve_vl =
>> +            arch_capabilities_arm_sve(physinfo.arch_capabilities);
>> +        if (!arm_sve_vl) {
>> +            fprintf(stderr, "SVE is not supported by the platform\n");
>> +            exit(-ERROR_FAIL);
> 
> "ERROR_FAIL" is a "libxl_error"; exit with "EXIT_FAILURE" instead.

OK, I will use the right type.

> 
>> +        } else if (((l % 128) != 0) || (l > 2048)) {
>> +            fprintf(stderr,
>> +                    "Invalid sve value: %ld. Needs to be <= 2048 and multiple"
>> +                    " of 128\n", l);
>> +            exit(-ERROR_FAIL);
>> +        } else if (l > arm_sve_vl) {
>> +            fprintf(stderr,
>> +                    "Invalid sve value: %ld. Platform supports up to %u bits\n",
>> +                    l, arm_sve_vl);
>> +            exit(-ERROR_FAIL);
>> +        }
>> +        /* Vector length is divided by 128 in domain configuration struct */
> 
> That's wrong: besides this comment, there's nothing that says that
> `b_info->arch_arm.sve` needs to hold a value divided by 128.
> `b_info->arch_arm.sve` is just of type uint16_t (libxl_types.idl).
> 
> BTW, "tools/xl" (xl) is just one user of "tools/libs/light" (libxl), so
> it's possible that other users would set `sve` to a value that hasn't
> been checked. So I think all the checks that the `sve` value is correct
> could be done in libxl instead.

Sure, I will do that.

> 
> 
>> +        b_info->arch_arm.sve = l / 128U;
>> +    }
>> +
>> +    libxl_physinfo_dispose(&physinfo);
>> +
>>     parse_vkb_list(config, d_config);
>> 
>>     d_config->virtios = NULL;
> 
> Thanks,
> 
> -- 
> Anthony PERARD
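The validation rules discussed in this thread (value must be non-zero, a multiple of 128, at most 2048, within the platform maximum, and must fit the narrow field it is stored into) can be sketched as a stand-alone check. This is illustrative only, not the actual libxl code: `sve_vl_is_valid` and `max_platform_vl` are hypothetical names, with `max_platform_vl` standing in for the value derived from physinfo arch_capabilities.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: validate an SVE vector length from a guest config.
 * max_platform_vl == 0 means the platform has no SVE support at all. */
static bool sve_vl_is_valid(long vl, unsigned int max_platform_vl)
{
    if (max_platform_vl == 0)       /* platform does not support SVE */
        return false;
    if (vl <= 0 || vl > 2048)       /* must be non-zero, at most 2048 */
        return false;
    if (vl % 128 != 0)              /* must be a multiple of 128 */
        return false;
    if ((unsigned long)vl > max_platform_vl)  /* within platform maximum */
        return false;
    /* Anthony's truncation point: the value stored is vl / 128, which
     * must fit the destination field (here checked against uint8_t). */
    return vl / 128 <= UINT8_MAX;
}
```

Doing this check once in libxl, rather than in xl, covers every libxl client, which is the restructuring Anthony suggests.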


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 13:51:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 13:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517883.803830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjh4C-0007ZA-GM; Tue, 04 Apr 2023 13:51:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517883.803830; Tue, 04 Apr 2023 13:51:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjh4C-0007Z3-Df; Tue, 04 Apr 2023 13:51:20 +0000
Received: by outflank-mailman (input) for mailman id 517883;
 Tue, 04 Apr 2023 13:51:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G7Av=73=citrix.com=prvs=45137d3e2=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pjh4B-0007Yx-0X
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 13:51:19 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c3bf9ca4-d2ef-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 15:51:16 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3bf9ca4-d2ef-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680616276;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=x0PaaBcjCtio6kfl56+tYMtF5+jnjOaDFGTwPHgx+5s=;
  b=Nt1t4USWaFa/DfpUMnaGqWyZ6raGi82wb+reqZ3jUZ6Ogzr7Udt42iCi
   huTWCtS7w7uR1U3NS6rGn4FiQvDD78RDvFvYFO0+laqtdea8sRCp2KVkl
   ws9E+kgJUTOrlz/Z9Hu8zQHz8gwOFUubIKTGKb09fljTJKWFo4R5X+F8X
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 103067229
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,317,1673931600"; 
   d="scan'208";a="103067229"
Date: Tue, 4 Apr 2023 14:51:01 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Demi Marie Obenour
	<demi@invisiblethingslab.com>
Subject: Re: [xen-unstable-smoke test] 179929: regressions - trouble:
 blocked/fail/pass/starved
Message-ID: <20d41dd0-19d1-47fb-92ab-4de458ddd56f@perard>
References: <osstest-179929-mainreport@xen.org>
 <fbe7ded7-47f6-caec-dabb-6978d9e2a192@citrix.com>
 <17e95e93-a3a9-962c-1563-f9fc526320df@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <17e95e93-a3a9-962c-1563-f9fc526320df@citrix.com>

On Fri, Mar 24, 2023 at 08:37:06PM +0000, Andrew Cooper wrote:
> On 24/03/2023 8:28 pm, Andrew Cooper wrote:
> > On 24/03/2023 6:58 pm, osstest service owner wrote:
> >> flight 179929 xen-unstable-smoke real [real]
> >> http://logs.test-lab.xenproject.org/osstest/logs/179929/
> >>
> >> Regressions :-(
> >>
> >> Tests which did not succeed and are blocking,
> >> including tests which could not be run:
> >>  build-amd64                   6 xen-build                fail REGR. vs. 179926
> >
> > Bah.
> >
> > make[6]: Entering directory '/home/osstest/build.179929.build-amd64/xen/tools/firmware/etherboot'
> > set -e; if ! /usr/bin/wget -c -O _ipxe.tar.gz https://xenbits.xen.org/xen-extfiles/ipxe-git-3c040ad387099483102708bb1839110bc788cefb.tar.gz; then \
> > 	git clone file:////osstest/IPXE-GIT-FORBIDDEN ipxe.git; \
> > 	(cd ipxe.git && git archive --format=tar --prefix=ipxe/ \
> > 	3c040ad387099483102708bb1839110bc788cefb | gzip -n >../_ipxe.tar.gz); \
> > 	rm -rf ipxe.git; \
> > fi
> > --2023-03-24 17:06:51--  https://xenbits.xen.org/xen-extfiles/ipxe-git-3c040ad387099483102708bb1839110bc788cefb.tar.gz
> > Resolving cache (cache)... 172.16.148.6
> > Connecting to cache (cache)|172.16.148.6|:3128... connected.
> > ERROR: The certificate of 'xenbits.xen.org' is not trusted.
> > ERROR: The certificate of 'xenbits.xen.org' has expired.
> > Cloning into 'ipxe.git'...
> > fatal: '//osstest/IPXE-GIT-FORBIDDEN' does not appear to be a git repository
> > fatal: Could not read from remote repository.
> >
> > That's OSSTest choking, apparently with the same LE root cert problem?
> 
> Given that there's plenty of content wanting testing right now, and no
> chance of this being looked at until next week, I've reverted e1d750844
> (which was just a single hunk anyway) in the hopes that we can still get
> a useful weekend of testing.

The certificate of the HTTPS proxy has been renewed, and osstest has
been updated to use the new certificates. So that commit should work.
In other words, osstest is ready for a revert of b5cc3c25a242 ("Revert
"build: Change remaining xenbits.xen.org link to HTTPS"")

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:21:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 14:21:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517887.803840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjhXA-0002eo-PP; Tue, 04 Apr 2023 14:21:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517887.803840; Tue, 04 Apr 2023 14:21:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjhXA-0002eh-MZ; Tue, 04 Apr 2023 14:21:16 +0000
Received: by outflank-mailman (input) for mailman id 517887;
 Tue, 04 Apr 2023 14:21:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjhX9-0002eb-Nh
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 14:21:16 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2062e.outbound.protection.outlook.com
 [2a01:111:f400:fe16::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f3a42a1d-d2f3-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 16:21:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8482.eurprd04.prod.outlook.com (2603:10a6:20b:34a::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 14:21:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 14:21:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3a42a1d-d2f3-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZLumDTvjYmBBEicEU0W6WDIxirkjIhnXI5vpIqaPT7dMG7NMsPunJMrVgAveEZLRLdEJY9bPG9pSy8GJh+htYDYGiix5D9vpyuO2QHZyjP32AzpeGVX4Kd+FSZ3E6evqikndHZPJUdFZ6UDeGh18JUR/mSUEbLroOVAPImJBUHzZC+RKN/dnZkYuUGHk65tO51P5zEOjGXlg1nM9HvYeN2kPDnb0Igd4Idekbjzelj78c0+GanuGpGpRDaA2QwjnP5FGtZ1CPSLZVVVNHtHYTDqaGSqxyvOEbE2asV6j/mv3+KyMTd8y0DfCXdVJpf8vT+3R7pcwbcP1q4ScxzzQPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bHyzEfPI/zvXf0RCfyg2mKP5kKbOBRpTIxetoWwctGc=;
 b=YexxhU6wCTE2CvMpprh6bSX4hQRhZQH7hpTRGKGUtf2Eh+l8J9npDodirU0B05IRp8tkqsT/8pjzvBVoaHlg+taD4o03hIGAwR13COwepumDdJcWbfPcYQ9VlNH/vVCzhWzkMw6ltvO+zC8IZ4UTIndCpTRPo3r+oPlcsarkm7wX4Q6k2fKMxsTgUAmZbT/FFD3VRU9svaM6kBaQ/wE8uSffBRe7ARV2j7wdrY81W0rvFOkYLPXTGDJmV9DzuNYrRtTpqgJs+qXlHY1/sHP+JqTJMJZ6JhpFeBWYlHbutG9fNFZx30Q5jhUxGn7zBghGGPstdF+pddGixRzkp/28Iw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bHyzEfPI/zvXf0RCfyg2mKP5kKbOBRpTIxetoWwctGc=;
 b=r+KkM2vefq7hV25iKWPcPAzcoJswnkYl1WB4mRXucN31p0akV9fDOiU7K+lHJKIww0XOfR/cKJakN44tfZvKPP93zbqxixACw1Yqd3CDxwjGH7RhAGl2SjiVgiRu207P3uTI6es4+zjYzm0Xm9Izmhzfs1SaYACcahoxCFuv/SqUMb2KvBUv7uXmH1VmhX6d6H0rmeIZDgkk1HNJ/twppXCuUjkaSx0YV9lqu08GCQlBe8mnGEj7z9nv8m37+kMSn684iZCeB3QEohZAQ8qr+WGkitnyercpo+R9uvOkMNPEVjM4hnElnkngiV8wPD6tl54qsnZGX31QGan0RkUEGg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5f9218c1-9ee9-c5bd-af8b-003084aa66e4@suse.com>
Date: Tue, 4 Apr 2023 16:21:07 +0200
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <6d64dd4a-5b25-ddca-5c07-7b4c0fc48c0c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6d64dd4a-5b25-ddca-5c07-7b4c0fc48c0c@citrix.com>

On 04.04.2023 15:08, Andrew Cooper wrote:
> On 15/02/2023 2:54 pm, Jan Beulich wrote:
>> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
>> applies to guests also when run on a 64-bit hypervisor:
> 
> Is this really true?  Even when looking at Xen 4.2, 32bit guests are
> required to pass a full 4k page, not a 32b quad.

The full-page vs 32b-quad aspect is orthogonal. This VM-assist is solely
about where that data structure is, not what size it is.

> Which makes complete sense.  It was a hard requirement of 32bit non-PAE
> guests, so it was a natural restriction to maintain into 32bit PAE guests.
> 
> This is *only* a 32-on-64 issue, because this is the only case a 32bit
> guest could in principle use an L3 placed above the 4G boundary.

Not exactly. 32-bit Xen maintained a 4-entry "shadow" array below 4G
that it would copy (massage) the guest entries into upon CR3 reload
(just look for struct pae_l3_cache in the old sources). So above-4G
page table base was possible there as well.
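(Purely for illustration - a much-simplified toy model of that old
pae_l3_cache scheme; the names and layout here are hypothetical, not the
actual 32-bit Xen code:)

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of 32-bit Xen's pae_l3_cache idea: the guest's 4-entry PAE
 * L3 table may live anywhere (even above 4GB), but hardware CR3 always
 * points at a per-vCPU shadow copy, which real Xen guarantees sits
 * below the 4GB boundary. */
struct pae_l3_cache {
    uint64_t shadow[4];   /* in real Xen: allocated from below-4GB memory */
};

/* On CR3 reload, copy (and in real Xen, massage) the guest entries into
 * the shadow, and return the address that hardware CR3 actually gets. */
static uintptr_t reload_cr3(struct pae_l3_cache *cache,
                            const uint64_t guest_l3[4])
{
    memcpy(cache->shadow, guest_l3, sizeof(cache->shadow));
    return (uintptr_t)cache->shadow;  /* not the guest table's address */
}
```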

>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
>>      unsigned int   partial_flags = page->partial_flags;
>>      l3_pgentry_t   l3e = l3e_empty();
>>  
>> +    /*
>> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
>> +     * understand the weird 'extended cr3' format for dealing with high-order
>> +     * address bits. We cut some slack for control tools (before vcpu0 is
>> +     * initialised).
>> +     */
>> +    if ( is_pv_32bit_domain(d) &&
>> +         unlikely(!VM_ASSIST(d, pae_extended_cr3)) &&
>> +         mfn_x(l3mfn) >= 0x100000 &&
>> +         d->vcpu[0] && d->vcpu[0]->is_initialised )
>> +    {
>> +        gdprintk(XENLOG_WARNING,
>> +                 "PAE pgd must be below 4GB (%#lx >= 0x100000)",
>> +                 mfn_x(l3mfn));
>> +        return -ERANGE;
>> +    }
> 
> Having dug through source history, I see this is largely the form that
> it used to be.
> 
> But I'm unconvinced by the "cut control tools some slack".  I'm quite
> tired of different bits of Xen taking on unnecessary complexity because
> people are unwilling to fix the problem at the correct layer.

But anything tools do before having created the first vCPU would not
have had any means to engage the VM-assist. I.e. ...

> A toolstack which has non-pae_extended_cr3 guest on its hand will know
> this before any pagetables get allocated.

... this knowledge buys it nothing: It would need to move the table
to below 4G irrespective of knowing that the guest can deal with
bigger addresses, just to get past this check.

> For this check specifically, I'd suggest prohibiting non-32p guests from
> setting pae_extended_cr3 in the first place (I see no limit currently),
> and then simplifying the check to just
> 
> if ( unlikely(!VM_ASSIST(d, pae_extended_cr3)) &&
>      mfn_x(l3mfn) >= PFN_DOWN(GB(4)) )

Dropping the is_pv_32bit_domain() check isn't possible because we can't,
all of a sudden, fail 64-bit guests' requests to enable this
VM-assist (no matter that we know it is of no use to them). Dropping
the control-tools part of the condition is at least problematic as well,
as per above. Albeit I'll admit I didn't check whether nowadays vCPU 0
is initialized before page tables are built. But I think it's more
sensible the other way around: CR3 setting (in the hypervisor) is less
involved when the page was already validated as an L3 one.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:24:23 2023
Message-ID: <ac13fa57-ceb2-0aaa-dcfa-42d8d01ee6d7@suse.com>
Date: Tue, 4 Apr 2023 16:24:16 +0200
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <ZCv3+cpzJ52Y679G@Air-de-Roger>
 <3752672b-f4a0-5ffb-9759-dd315ce31079@suse.com>
 <ZCwM1SfCAfh2koBD@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZCwM1SfCAfh2koBD@Air-de-Roger>

On 04.04.2023 13:41, Roger Pau Monné wrote:
> On Tue, Apr 04, 2023 at 12:31:31PM +0200, Jan Beulich wrote:
>> On 04.04.2023 12:12, Roger Pau Monné wrote:
>>> On Wed, Feb 15, 2023 at 03:54:11PM +0100, Jan Beulich wrote:
>>>> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
>>>> applies to guests also when run on a 64-bit hypervisor: The "extended
>>>> CR3" format has to be used there as well, to fit the address in the only
>>>> 32-bit wide register there. As a result it was a mistake that the check
>>>> was never enabled for that case, and was then mistakenly deleted in the
>>>> course of removal of 32-bit-Xen code (218adf199e68 ["x86: We can assume
>>>> CONFIG_PAGING_LEVELS==4"]).
>>>>
>>>> Similarly during Dom0 construction kernel awareness needs to be taken
>>>> into account, and respective code was again mistakenly never enabled for
>>>> 32-bit Dom0 when running on 64-bit Xen (and thus wrongly deleted by
>>>> 5d1181a5ea5e ["xen: Remove x86_32 build target"]).
>>>>
>>>> At the same time restrict enabling of the assist for Dom0 to just the
>>>> 32-bit case. Furthermore there's no need for an atomic update there.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> I was uncertain whether to add a check to the CR3 guest read path,
>>>> raising e.g. #GP(0) when the value read wouldn't fit but also may not
>>>> be converted to "extended" format (overflow is possible there in
>>>> principle because of the control tools "slack" in promote_l3_table()).
>>>>
>>>> In that context I was puzzled to find no check on the CR3 guest write
>>>> path even in 4.2: A guest (bogusly) setting the PCD or PWT bits (or any
>>>> of the low reserved ones) could observe anomalous behavior rather than
>>>> plain failure.
>>>>
>>>> As to a Fixes: tag - it's pretty unclear which of the many original
>>>> 32-on-64 changes to blame. I don't think the two cited commits should
>>>> be referenced there, as they didn't break anything that wasn't already
>>>> broken.
>>>>
>>>> --- a/xen/arch/x86/mm.c
>>>> +++ b/xen/arch/x86/mm.c
>>>> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
>>>>      unsigned int   partial_flags = page->partial_flags;
>>>>      l3_pgentry_t   l3e = l3e_empty();
>>>>  
>>>> +    /*
>>>> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
>>>> +     * understand the weird 'extended cr3' format for dealing with high-order
>>>> +     * address bits. We cut some slack for control tools (before vcpu0 is
>>>> +     * initialised).
>>>
>>> Don't we then need some check in the vCPU init path to assure that the
>>> cr3 is < 32bits if we allow those to initially be set?
>>>
>>> Or will the initialization unconditionally overwrite any previous cr3
>>> value?
>>
>> That's not the way I understand this "cut some slack". Instead I read it
>> to be meant to cover for the VM-assist bit not being set, yet. Beyond
>> that it is assumed to be tool stack's responsibility to constrain
>> addresses suitably. If it doesn't, it'll simply break the guest. (There
>> is some guessing on my part involved here, as the original introduction
>> of that code didn't further explain things.)
> 
> If it's just the guest that's broken I would think it's fine.  As long
> as such mismatch doesn't cause issues in the hypervisor internal state.
> 
> Did you see a toolstack setting such entries before pae_extended_cr3
> is set?

To be honest - I didn't look. As said in the longer reply to Andrew, I
think it is more logical this way (the page table root already being
validated as an L3 table when vCPU 0 is initialized, which includes
setting its CR3). Hence even if right now the order was the other way
around (which I doubt it is), I wouldn't want to make it impossible to
restore the original ordering again.
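(Again purely illustrative - a boiled-down model of the condition quoted
from promote_l3_table() above, with hypothetical names:)

```c
#include <assert.h>
#include <stdbool.h>

#define MFN_4GB 0x100000UL   /* first MFN at/above the 4GB boundary */

/* Simplified model of the quoted check: the below-4GB restriction only
 * bites for a 32-bit PV guest without the pae_extended_cr3 assist, and
 * only once vCPU 0 is initialised - the control-tools "slack" being
 * discussed in this thread. */
static bool l3_rejected(bool pv32, bool assist, bool vcpu0_init,
                        unsigned long mfn)
{
    return pv32 && !assist && mfn >= MFN_4GB && vcpu0_init;
}
```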

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:39:18 2023
Date: Tue, 4 Apr 2023 10:38:53 -0400
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
	Peter Lieven <pl@kamp.de>, Coiby Xu <Coiby.Xu@gmail.com>,
	xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>, David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>, Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>, eesposit@redhat.com,
	Daniel P. Berrangé <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH 01/13] virtio-scsi: avoid race between unplug and
 transport event
Message-ID: <20230404103838-mutt-send-email-mst@kernel.org>
References: <20230403183004.347205-1-stefanha@redhat.com>
 <20230403183004.347205-2-stefanha@redhat.com>
In-Reply-To: <20230403183004.347205-2-stefanha@redhat.com>

On Mon, Apr 03, 2023 at 02:29:52PM -0400, Stefan Hajnoczi wrote:
> Only report a transport reset event to the guest after the SCSIDevice
> has been unrealized by qdev_simple_device_unplug_cb().
> 
> qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
> to false so that scsi_device_find/get() no longer see it.
> 
> scsi_target_emulate_report_luns() also needs to be updated to filter out
> SCSIDevices that are unrealized.
> 
> These changes ensure that the guest driver does not see the SCSIDevice
> that's being unplugged if it responds very quickly to the transport
> reset event.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>


Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

Feel free to merge.

> ---
>  hw/scsi/scsi-bus.c    |  3 ++-
>  hw/scsi/virtio-scsi.c | 18 +++++++++---------
>  2 files changed, 11 insertions(+), 10 deletions(-)
> 
> diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
> index c97176110c..f9bd064833 100644
> --- a/hw/scsi/scsi-bus.c
> +++ b/hw/scsi/scsi-bus.c
> @@ -487,7 +487,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
>              DeviceState *qdev = kid->child;
>              SCSIDevice *dev = SCSI_DEVICE(qdev);
>  
> -            if (dev->channel == channel && dev->id == id && dev->lun != 0) {
> +            if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
> +                qatomic_load_acquire(&dev->qdev.realized)) {
>                  store_lun(tmp, dev->lun);
>                  g_byte_array_append(buf, tmp, 8);
>                  len += 8;
> diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
> index 612c525d9d..000961446c 100644
> --- a/hw/scsi/virtio-scsi.c
> +++ b/hw/scsi/virtio-scsi.c
> @@ -1063,15 +1063,6 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
>      SCSIDevice *sd = SCSI_DEVICE(dev);
>      AioContext *ctx = s->ctx ?: qemu_get_aio_context();
>  
> -    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
> -        virtio_scsi_acquire(s);
> -        virtio_scsi_push_event(s, sd,
> -                               VIRTIO_SCSI_T_TRANSPORT_RESET,
> -                               VIRTIO_SCSI_EVT_RESET_REMOVED);
> -        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
> -        virtio_scsi_release(s);
> -    }
> -
>      aio_disable_external(ctx);
>      qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
>      aio_enable_external(ctx);
> @@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
>          blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
>          virtio_scsi_release(s);
>      }
> +
> +    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
> +        virtio_scsi_acquire(s);
> +        virtio_scsi_push_event(s, sd,
> +                               VIRTIO_SCSI_T_TRANSPORT_RESET,
> +                               VIRTIO_SCSI_EVT_RESET_REMOVED);
> +        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
> +        virtio_scsi_release(s);
> +    }
>  }
>  
>  static struct SCSIBusInfo virtio_scsi_scsi_info = {
> -- 
> 2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:48:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 14:48:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517900.803870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjhxK-0006Jp-GV; Tue, 04 Apr 2023 14:48:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517900.803870; Tue, 04 Apr 2023 14:48:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjhxK-0006Ji-Dj; Tue, 04 Apr 2023 14:48:18 +0000
Received: by outflank-mailman (input) for mailman id 517900;
 Tue, 04 Apr 2023 14:48:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjhxJ-0006Jc-1N
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 14:48:17 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2062b.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bb17d4b9-d2f7-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 16:48:15 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB8059.eurprd04.prod.outlook.com (2603:10a6:10:1e9::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 14:48:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 14:48:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb17d4b9-d2f7-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bEx35mZh57eCns829b3WkMAk1YGOrap6lKU3x/Nb7alj3JPFiBKghdZv6v7Duj5arsEYtDlmDWjV+r1KE5HjeSa1uxseOu7Cf0w2LML+8vBXRw0INRCMx3DazKd5b37nky1SH+l5o/9ozf+L6PLOhloRnnOEGpcybHjxU1odamDCw67CKbJ4mrN2KY6mVY9vl9AwlNRjz42/j0USv4fXAArBWgWxzy6DDPUeZ/9mK45AI6c/3l7ZtTnmwqeucJmujl8JUMh1AUKx+wE8Mn2tqAYQe3OFSLH/ZsxxsVF94qcikOfdf5TXJVasvMhElmT0rWVSQsAS5COUTdOG0SyxHg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3CGLe7T2yLSk6DDDg8vDLU1RI5JXzolsx2xLHPjcZd4=;
 b=hYa0A9uQNpT4C95PaHjmeOnQG+9ZAxMDLoprR/SMJA8cqg/W+WUf+XFGZ8OsVjfSI/PweMBBbayB2Jsg6dR52+3fdIiRGsUXl8q512ja7gnsW5lZceUkwWMMKDmyKqTsVdSycNFVl7cl0Ghc9NgrbIPg/KMriyPgElzKr/qqPFNtanP3qO/g8dm7Epz8xkjRQ2H/w+Smm+MfUcHbQUoW04jJziKLIWhF/eg31Ndx6TU8uhSPuJbWmfJocIYmhioYlSbAi6ZGJp9qmOG6qOsjmPjLUld0xeBY5b9bX/5rWqssOG0gHVzjyOxOZqyw7ZUHZnKUncb1Gip+u2tot6AYkA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3CGLe7T2yLSk6DDDg8vDLU1RI5JXzolsx2xLHPjcZd4=;
 b=WkXVOlx5SAhYHTg1/0ib7ySM69twvBxCsj1D5ghDNLd94Z2GjfH+jHDKJsLq1GW7NIxjxBbZBsTSa5BffSAB8CSZY5IAB4BtSfsuzo8z3dW0kuZ30dI2WWFr19TTUu5LO25s0jsAkd3MPLfesunQZFqeQL8ubUYCvI1YKXAsdGwaqQNAhurNpedeiLL1hYIvEiOrTndyjXA7NCm39Wcu99DDPj7WG5mcQiZf7ML2lSQviJg67Ea3yqSi3HpxkTwC64Ho9kHkxafm363wtbbJo7ik7NMYBNPPlZKsvahJHUH28iEh/pCTrNAHBLLjpQAqD2b2tPn3U4oomUmXVT+Uiw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
Date: Tue, 4 Apr 2023 16:48:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/9] x86emul: misc additions
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0092.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB8059:EE_
X-MS-Office365-Filtering-Correlation-Id: a5821fa2-48e4-42cb-331e-08db351b9e69
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a5821fa2-48e4-42cb-331e-08db351b9e69
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 14:48:14.2488
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: U4YN9jr9R2pFriNrbrm1T5wfy9xng6XtyAE3yaFONvkNVtBxHjScuVwCroJVxbNklaeeJ+xFMUkXeD1TMh117w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB8059

This series adds support for a number of more or less recently announced
ISA extensions, plus a little bit of (more or less related) cleanup. The
series interacts mildly (and only contextually) with the AVX512-FP16 one.
Note that patch 1 was previously posted standalone; the posting here is
unchanged, so it isn't labeled "v2". Note further that while the last
patch is somewhat incomplete (it doesn't enable the feature for guest
use), it could still be applied ahead of the further VMX-specific work
needed there (which depends on the documentation becoming more complete).

Apart from contextual interaction, the patches should be largely
independent of one another; only patch 5 strictly depends on patch 4.

1: support LKGS
2: support WRMSRNS
3: drop regs field from emulator state structure
4: support CMPccXADD
5: re-use new stub_exn field in state structure
6: support AVX-IFMA insns
7: support AVX-VNNI-INT8
8: support AVX-NE-CONVERT insns
9: support {RD,WR}MSRLIST

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:49:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 14:49:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517904.803880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjhyB-0006ps-ST; Tue, 04 Apr 2023 14:49:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517904.803880; Tue, 04 Apr 2023 14:49:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjhyB-0006pl-Oj; Tue, 04 Apr 2023 14:49:11 +0000
Received: by outflank-mailman (input) for mailman id 517904;
 Tue, 04 Apr 2023 14:49:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjhy9-0006pZ-Pj
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 14:49:09 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2061f.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::61f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d99ae6e5-d2f7-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 16:49:07 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB8059.eurprd04.prod.outlook.com (2603:10a6:10:1e9::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 14:49:05 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 14:49:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d99ae6e5-d2f7-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fhNa3gQVqEqVWOn1BQiWAUkMLvttOvoMTYXO6hoFgS0pyVeMkKly/otz5kRIMRz0WZtPgsKfRWB1KV1eJD4aGGzk4g8jHwXW4gBdhawPpYOC3nHZnfGvBuT2dnzkw60kJEz2EAkKsWFUqQicHtfT4zGOxwtlkwgYzz0z+6nCNztdv2wqFNOidDUxQ23Y552pThpgz0crjXFAtgzdjSvgxgrOnRuZStgOEZqNWXZfE6/1K8NHMg9ZiZn/0xXvcaSO/r2pjq8BZgmoM4FiRcI0i5j7sDEPQbYVh3BWdeF2ae+v3L68XHf1o5owZQO2CPjYwKgABnjPDmiuJ41TW08vvw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=1gpGtiRKwEgaAgJAWR+acZezpCQEWfuFPhZDqqfukzE=;
 b=lZtQDbpvs4EEYH9MHKzoQHu/gBzR7BFyLKl5iU7wsxqx19j+/KNIJNHJ0W9Q1ziByIefiYPK5Ze70vGK9Uo+GkG5/fNcYU6UO5OpEAI5WSidJpR8nKBhk6yvaopf6UZeG8CtsyGEXk6qWiMD828tIBijfUQ76oViF4+WoIgUMEPAeuYwxuikxXEZVJtWCZBPsBJMVj9pVSu1H77n+w5Rb8CuNf5DeByMDTMR6ZlpZ1nsBeIEh0pOZIijHHezxGCl4Nvc9qjnaQKn08oV10OBFBNkoz06BO+ZKJQOR8mbVo8kEbxra0xPT+bu0DgaQ+Q+tkdgx20ZTdzS0RcDiqcf7Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1gpGtiRKwEgaAgJAWR+acZezpCQEWfuFPhZDqqfukzE=;
 b=cPOsI1bQRTbggibijwxFPtr05jC7HHOl5FaH6GijrFXx5PlPBjkX7Fxk/Tc6dKYf2v2U+kxi7hQwDzse3SjCWH3OmpDPscm/H0TA5FRLRMmnERC4oubIgwCRah2DrLLso2TDTNP/RceqTXjOzY6L5SZw3ELqErtNdmOeKtL6SfIa7cXrpdxbTRxv8v3GJCqR3USJCKjzQCV5kltJ7V7WyN09QYUUsTjx8xBusaoBaUgVHXvUEld1kMYoWOacnsd6Mx2O3stSvPlfOaW3kFap+rohcDTeuu4GmhXavGeHNGXoOYhV7IjevwZeHWDR/xGp/vy432vOjHln82fOo5dVSA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1eb21ece-9d33-d8e1-1c2b-c682dbb1cda1@suse.com>
Date: Tue, 4 Apr 2023 16:49:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 1/9] x86emul: support LKGS
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
In-Reply-To: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0093.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB8059:EE_
X-MS-Office365-Filtering-Correlation-Id: 809a8e30-a5b7-4f0b-85d9-08db351bbce6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 809a8e30-a5b7-4f0b-85d9-08db351bbce6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 14:49:05.5014
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lL3awT8mHOCociecza1dYY2PnNYlnbDr5CCMg801VVTaV5tDIljqgqL7uDlmbEhOJ6W2fwppeND0zrq5HuuldQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB8059

Provide support for this insn, which is a prereq to FRED. On the CPUID
side, introduce both its and FRED's feature bits at the same time, so
that the dependency between them can be expressed right away.

While adding a testcase, also add a SWAPGS one. In order to not affect
the behavior of pre-existing tests, install write_{segment,msr} hooks
only transiently.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Instead of ->read_segment() we could of course also use ->read_msr() to
fetch the original GS base. I can't see a clear advantage to either
approach; as done here, it matches how we handle SWAPGS.

For PV save_segments() would need adjustment, but the insn being
restricted to ring 0 means PV guests can't use it anyway (unless we
wanted to emulate it as another privileged insn).
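The GS-base bookkeeping that the new testcases exercise can be summed up in a small toy model (hypothetical Python, not the emulator code; only the two bases are modeled, the descriptor's selector and attributes are ignored): SWAPGS exchanges the GS base with the shadow (kernel) GS base, while LKGS loads only the shadow GS base from the new descriptor and leaves the visible GS base untouched.

```python
# Toy model of SWAPGS vs. LKGS, using the values from the testcases.

state = {"gs_base": 0, "gs_base_shadow": 0}

def swapgs(s):
    # SWAPGS: exchange GS base with IA32_KERNEL_GS_BASE.
    s["gs_base"], s["gs_base_shadow"] = s["gs_base_shadow"], s["gs_base"]

def lkgs(s, new_base):
    # LKGS: the freshly loaded descriptor's base goes into
    # SHADOW_GS_BASE; the currently visible GS base is preserved.
    s["gs_base_shadow"] = new_base

# SWAPGS testcase values:
state["gs_base"] = 0xffffeeeecccc8888
state["gs_base_shadow"] = 0x0000111122224444
swapgs(state)
assert state["gs_base"] == 0x0000111122224444
assert state["gs_base_shadow"] == 0xffffeeeecccc8888

# LKGS testcase: the descriptor loaded from 2(%rdx) has base 0.
lkgs(state, 0)
assert state["gs_base"] == 0x0000111122224444
assert state["gs_base_shadow"] == 0
```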

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -235,6 +235,8 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"fzrm",         0x00000007,  1, CPUID_REG_EAX, 10,  1},
         {"fsrs",         0x00000007,  1, CPUID_REG_EAX, 11,  1},
         {"fsrcs",        0x00000007,  1, CPUID_REG_EAX, 12,  1},
+        {"fred",         0x00000007,  1, CPUID_REG_EAX, 17,  1},
+        {"lkgs",         0x00000007,  1, CPUID_REG_EAX, 18,  1},
         {"wrmsrns",      0x00000007,  1, CPUID_REG_EAX, 19,  1},
 
         {"cet-sss",      0x00000007,  1, CPUID_REG_EDX, 18,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -190,7 +190,8 @@ static const char *const str_7a1[32] =
     [10] = "fzrm",          [11] = "fsrs",
     [12] = "fsrcs",
 
-    /* 18 */                [19] = "wrmsrns",
+    /* 16 */                [17] = "fred",
+    [18] = "lkgs",          [19] = "wrmsrns",
 };
 
 static const char *const str_e21a[32] =
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -326,6 +326,7 @@ static const struct {
     { { 0x00, 0x18 }, { 2, 2 }, T, R }, /* ltr */
     { { 0x00, 0x20 }, { 2, 2 }, T, R }, /* verr */
     { { 0x00, 0x28 }, { 2, 2 }, T, R }, /* verw */
+    { { 0x00, 0x30 }, { 0, 2 }, T, R, pfx_f2 }, /* lkgs */
     { { 0x01, 0x00 }, { 2, 2 }, F, W }, /* sgdt */
     { { 0x01, 0x08 }, { 2, 2 }, F, W }, /* sidt */
     { { 0x01, 0x10 }, { 2, 2 }, F, R }, /* lgdt */
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -666,6 +666,10 @@ static int blk(
     return x86_emul_blk((void *)offset, p_data, bytes, eflags, state, ctxt);
 }
 
+#ifdef __x86_64__
+static unsigned long gs_base, gs_base_shadow;
+#endif
+
 static int read_segment(
     enum x86_segment seg,
     struct segment_register *reg,
@@ -675,8 +679,30 @@ static int read_segment(
         return X86EMUL_UNHANDLEABLE;
     memset(reg, 0, sizeof(*reg));
     reg->p = 1;
+
+#ifdef __x86_64__
+    if ( seg == x86_seg_gs )
+        reg->base = gs_base;
+#endif
+
+    return X86EMUL_OKAY;
+}
+
+#ifdef __x86_64__
+static int write_segment(
+    enum x86_segment seg,
+    const struct segment_register *reg,
+    struct x86_emulate_ctxt *ctxt)
+{
+    if ( !is_x86_user_segment(seg) )
+        return X86EMUL_UNHANDLEABLE;
+
+    if ( seg == x86_seg_gs )
+        gs_base = reg->base;
+
     return X86EMUL_OKAY;
 }
+#endif
 
 static int read_msr(
     unsigned int reg,
@@ -689,6 +715,20 @@ static int read_msr(
         *val = ctxt->addr_size > 32 ? 0x500 /* LME|LMA */ : 0;
         return X86EMUL_OKAY;
 
+#ifdef __x86_64__
+    case 0xc0000101: /* GS_BASE */
+        if ( ctxt->addr_size < 64 )
+            break;
+        *val = gs_base;
+        return X86EMUL_OKAY;
+
+    case 0xc0000102: /* SHADOW_GS_BASE */
+        if ( ctxt->addr_size < 64 )
+            break;
+        *val = gs_base_shadow;
+        return X86EMUL_OKAY;
+#endif
+
     case 0xc0000103: /* TSC_AUX */
 #define TSC_AUX_VALUE 0xCACACACA
         *val = TSC_AUX_VALUE;
@@ -698,6 +738,31 @@ static int read_msr(
     return X86EMUL_UNHANDLEABLE;
 }
 
+#ifdef __x86_64__
+static int write_msr(
+    unsigned int reg,
+    uint64_t val,
+    struct x86_emulate_ctxt *ctxt)
+{
+    switch ( reg )
+    {
+    case 0xc0000101: /* GS_BASE */
+        if ( ctxt->addr_size < 64 || !is_canonical_address(val) )
+            break;
+        gs_base = val;
+        return X86EMUL_OKAY;
+
+    case 0xc0000102: /* SHADOW_GS_BASE */
+        if ( ctxt->addr_size < 64 || !is_canonical_address(val) )
+            break;
+        gs_base_shadow = val;
+        return X86EMUL_OKAY;
+    }
+
+    return X86EMUL_UNHANDLEABLE;
+}
+#endif
+
 #define INVPCID_ADDR 0x12345678
 #define INVPCID_PCID 0x123
 
@@ -1331,6 +1396,41 @@ int main(int argc, char **argv)
         printf("%u bytes read - ", bytes_read);
         goto fail;
     }
+    printf("okay\n");
+
+    emulops.write_segment = write_segment;
+    emulops.write_msr     = write_msr;
+
+    printf("%-40s", "Testing swapgs...");
+    instr[0] = 0x0f; instr[1] = 0x01; instr[2] = 0xf8;
+    regs.eip = (unsigned long)&instr[0];
+    gs_base = 0xffffeeeecccc8888UL;
+    gs_base_shadow = 0x0000111122224444UL;
+    rc = x86_emulate(&ctxt, &emulops);
+    if ( (rc != X86EMUL_OKAY) ||
+         (regs.eip != (unsigned long)&instr[3]) ||
+         (gs_base != 0x0000111122224444UL) ||
+         (gs_base_shadow != 0xffffeeeecccc8888UL) )
+        goto fail;
+    printf("okay\n");
+
+    printf("%-40s", "Testing lkgs 2(%rdx)...");
+    instr[0] = 0xf2; instr[1] = 0x0f; instr[2] = 0x00; instr[3] = 0x72; instr[4] = 0x02;
+    regs.eip = (unsigned long)&instr[0];
+    regs.edx = (unsigned long)res;
+    res[0]   = 0x00004444;
+    res[1]   = 0x8888cccc;
+    i = cp.extd.nscb; cp.extd.nscb = true; /* for AMD */
+    rc = x86_emulate(&ctxt, &emulops);
+    if ( (rc != X86EMUL_OKAY) ||
+         (regs.eip != (unsigned long)&instr[5]) ||
+         (gs_base != 0x0000111122224444UL) ||
+         gs_base_shadow )
+        goto fail;
+
+    cp.extd.nscb = i;
+    emulops.write_segment = NULL;
+    emulops.write_msr     = NULL;
 #endif
     printf("okay\n");
 
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -86,6 +86,7 @@ bool emul_test_init(void)
     cp.feat.adx = true;
     cp.feat.avx512pf = cp.feat.avx512f;
     cp.feat.rdpid = true;
+    cp.feat.lkgs = true;
     cp.extd.clzero = true;
 
     if ( cpu_has_xsave )
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -744,8 +744,12 @@ decode_twobyte(struct x86_emulate_state
         case 0:
             s->desc |= DstMem | SrcImplicit | Mov;
             break;
+        case 6:
+            if ( !(s->modrm_reg & 1) && mode_64bit() )
+            {
         case 2: case 4:
-            s->desc |= SrcMem16;
+                s->desc |= SrcMem16;
+            }
             break;
         }
         break;
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -594,6 +594,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
 #define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
+#define vcpu_has_lkgs()        (ctxt->cpuid->feat.lkgs)
 
 #define vcpu_must_have(feat) \
     generate_exception_if(!vcpu_has_##feat(), X86_EXC_UD)
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -2886,8 +2886,31 @@ x86_emulate(
                 break;
             }
             break;
-        default:
-            generate_exception_if(true, EXC_UD);
+        case 6: /* lkgs */
+            generate_exception_if((modrm_reg & 1) || vex.pfx != vex_f2, EXC_UD);
+            generate_exception_if(!mode_64bit() || !mode_ring0(), EXC_UD);
+            vcpu_must_have(lkgs);
+            fail_if(!ops->read_segment || !ops->read_msr ||
+                    !ops->write_segment || !ops->write_msr);
+            if ( (rc = ops->read_msr(MSR_SHADOW_GS_BASE, &msr_val,
+                                     ctxt)) != X86EMUL_OKAY ||
+                 (rc = ops->read_segment(x86_seg_gs, &sreg,
+                                         ctxt)) != X86EMUL_OKAY )
+                goto done;
+            dst.orig_val = sreg.base;
+            if ( (rc = protmode_load_seg(x86_seg_gs, src.val, false, &sreg,
+                                         ctxt, ops)) != X86EMUL_OKAY ||
+                 (rc = ops->write_msr(MSR_SHADOW_GS_BASE, sreg.base,
+                                      ctxt)) != X86EMUL_OKAY )
+                goto done;
+            sreg.base = dst.orig_val;
+            if ( (rc = ops->write_segment(x86_seg_gs, &sreg,
+                                          ctxt)) != X86EMUL_OKAY )
+            {
+                /* Best effort unwind (i.e. no error checking). */
+                ops->write_msr(MSR_SHADOW_GS_BASE, msr_val, ctxt);
+                goto done;
+            }
             break;
         }
         break;
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -281,6 +281,8 @@ XEN_CPUFEATURE(AVX512_BF16,  10*32+ 5) /
 XEN_CPUFEATURE(FZRM,         10*32+10) /*A  Fast Zero-length REP MOVSB */
 XEN_CPUFEATURE(FSRS,         10*32+11) /*A  Fast Short REP STOSB */
 XEN_CPUFEATURE(FSRCS,        10*32+12) /*A  Fast Short REP CMPSB/SCASB */
+XEN_CPUFEATURE(FRED,         10*32+17) /*   Flexible Return and Event Delivery */
+XEN_CPUFEATURE(LKGS,         10*32+18) /*S  Load Kernel GS Base */
 XEN_CPUFEATURE(WRMSRNS,      10*32+19) /*   WRMSR Non-Serialising */
 
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -295,6 +295,9 @@ def crunch_numbers(state):
 
         # In principle the TSXLDTRK insns could also be considered independent.
         RTM: [TSXLDTRK],
+
+        # FRED builds on the LKGS instruction.
+        LKGS: [FRED],
     }
 
     deep_features = tuple(sorted(deps.keys()))



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:50:14 2023
Message-ID: <0c2ddae9-3222-9755-b6e1-35e51410093b@suse.com>
Date: Tue, 4 Apr 2023 16:50:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 2/9] x86emul: support WRMSRNS
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
In-Reply-To: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>

This insn differs from WRMSR solely in the lack of serialization. Hence
the code used there can simply be reused here, plus a feature check of
course. As no other infrastructure is needed beyond permitting the insn
for PV privileged-op emulation (in particular, no separate new VMEXIT),
the insn can be exposed to guests right away.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -341,6 +341,7 @@ static const struct {
     /*{ 0x01, 0xc3 }, { 2, 2 }, F, R }, vmresume */
     { { 0x01, 0xc4 }, { 2, 2 }, F, N }, /* vmxoff */
     { { 0x01, 0xc5 }, { 2, 2 }, F, N }, /* pconfig */
+    { { 0x01, 0xc6 }, { 2, 2 }, F, N }, /* wrmsrns */
     { { 0x01, 0xc8 }, { 2, 2 }, F, N }, /* monitor */
     { { 0x01, 0xc9 }, { 2, 2 }, F, N }, /* mwait */
     { { 0x01, 0xca }, { 2, 2 }, F, N }, /* clac */
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -87,6 +87,7 @@ bool emul_test_init(void)
     cp.feat.avx512pf = cp.feat.avx512f;
     cp.feat.rdpid = true;
     cp.feat.lkgs = true;
+    cp.feat.wrmsrns = true;
     cp.extd.clzero = true;
 
     if ( cpu_has_xsave )
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -1252,8 +1252,11 @@ static int cf_check validate(
     {
         unsigned int modrm_rm, modrm_reg;
 
-        if ( x86_insn_modrm(state, &modrm_rm, &modrm_reg) != 3 ||
-             (modrm_rm & 7) != 1 )
+        if ( x86_insn_modrm(state, &modrm_rm, &modrm_reg) != 3 )
+            break;
+        if ( (modrm_rm & 7) == 6 && !(modrm_reg & 7) ) /* wrmsrns, {rd,wr}msrlist */
+            return X86EMUL_OKAY;
+        if ( (modrm_rm & 7) != 1 )
             break;
         switch ( modrm_reg & 7 )
         {
--- a/xen/arch/x86/x86_emulate/0f01.c
+++ b/xen/arch/x86/x86_emulate/0f01.c
@@ -43,6 +43,20 @@ int x86emul_0f01(struct x86_emulate_stat
         struct segment_register sreg;
         uint64_t msr_val;
 
+    case 0xc6:
+        switch ( s->vex.pfx )
+        {
+        case vex_none: /* wrmsrns */
+            vcpu_must_have(wrmsrns);
+            generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
+            fail_if(!ops->write_msr);
+            rc = ops->write_msr(regs->ecx,
+                                ((uint64_t)regs->r(dx) << 32) | regs->eax,
+                                ctxt);
+            goto done;
+        }
+        generate_exception(X86_EXC_UD);
+
     case 0xca: /* clac */
     case 0xcb: /* stac */
         vcpu_must_have(smap);
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -595,6 +595,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
 #define vcpu_has_lkgs()        (ctxt->cpuid->feat.lkgs)
+#define vcpu_has_wrmsrns()     (ctxt->cpuid->feat.wrmsrns)
 
 #define vcpu_must_have(feat) \
     generate_exception_if(!vcpu_has_##feat(), X86_EXC_UD)
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -283,7 +283,7 @@ XEN_CPUFEATURE(FSRS,         10*32+11) /
 XEN_CPUFEATURE(FSRCS,        10*32+12) /*A  Fast Short REP CMPSB/SCASB */
 XEN_CPUFEATURE(FRED,         10*32+17) /*   Flexible Return and Event Delivery */
 XEN_CPUFEATURE(LKGS,         10*32+18) /*S  Load Kernel GS Base */
-XEN_CPUFEATURE(WRMSRNS,      10*32+19) /*   WRMSR Non-Serialising */
+XEN_CPUFEATURE(WRMSRNS,      10*32+19) /*A  WRMSR Non-Serialising */
 
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
 XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:51:42 2023
Message-ID: <cf5580db-3573-ec73-9e59-61aee337b2c6@suse.com>
Date: Tue, 4 Apr 2023 16:51:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 3/9] x86emul: drop regs field from emulator state structure
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
In-Reply-To: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>

For an unclear reason, 0552a8cfda43 ("x86emul: track only rIP in emulator
state") converted the original struct cpu_user_regs instance to a
pointer rather than dropping the field altogether: the pointer merely
aliases the one in the context structure.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -1013,7 +1013,6 @@ int x86emul_decode(struct x86_emulate_st
     s->ea.type = OP_NONE;
     s->ea.mem.seg = x86_seg_ds;
     s->ea.reg = PTR_POISON;
-    s->regs = ctxt->regs;
     s->ip = ctxt->regs->r(ip);
 
     s->op_bytes = def_op_bytes = ad_bytes = def_ad_bytes =
@@ -1129,7 +1128,7 @@ int x86emul_decode(struct x86_emulate_st
             default:
                 BUG(); /* Shouldn't be possible. */
             case 2:
-                if ( s->regs->eflags & X86_EFLAGS_VM )
+                if ( ctxt->regs->eflags & X86_EFLAGS_VM )
                     break;
                 /* fall through */
             case 4:
@@ -1458,33 +1457,33 @@ int x86emul_decode(struct x86_emulate_st
             switch ( s->modrm_rm )
             {
             case 0:
-                s->ea.mem.off = s->regs->bx + s->regs->si;
+                s->ea.mem.off = ctxt->regs->bx + ctxt->regs->si;
                 break;
             case 1:
-                s->ea.mem.off = s->regs->bx + s->regs->di;
+                s->ea.mem.off = ctxt->regs->bx + ctxt->regs->di;
                 break;
             case 2:
                 s->ea.mem.seg = x86_seg_ss;
-                s->ea.mem.off = s->regs->bp + s->regs->si;
+                s->ea.mem.off = ctxt->regs->bp + ctxt->regs->si;
                 break;
             case 3:
                 s->ea.mem.seg = x86_seg_ss;
-                s->ea.mem.off = s->regs->bp + s->regs->di;
+                s->ea.mem.off = ctxt->regs->bp + ctxt->regs->di;
                 break;
             case 4:
-                s->ea.mem.off = s->regs->si;
+                s->ea.mem.off = ctxt->regs->si;
                 break;
             case 5:
-                s->ea.mem.off = s->regs->di;
+                s->ea.mem.off = ctxt->regs->di;
                 break;
             case 6:
                 if ( s->modrm_mod == 0 )
                     break;
                 s->ea.mem.seg = x86_seg_ss;
-                s->ea.mem.off = s->regs->bp;
+                s->ea.mem.off = ctxt->regs->bp;
                 break;
             case 7:
-                s->ea.mem.off = s->regs->bx;
+                s->ea.mem.off = ctxt->regs->bx;
                 break;
             }
             switch ( s->modrm_mod )
@@ -1517,7 +1516,7 @@ int x86emul_decode(struct x86_emulate_st
                                      !s->evex.RX) << 4;
                 else if ( s->sib_index != 4 )
                 {
-                    s->ea.mem.off = *decode_gpr(s->regs, s->sib_index);
+                    s->ea.mem.off = *decode_gpr(ctxt->regs, s->sib_index);
                     s->ea.mem.off <<= s->sib_scale;
                 }
                 if ( (s->modrm_mod == 0) && ((sib_base & 7) == 5) )
@@ -1525,7 +1524,7 @@ int x86emul_decode(struct x86_emulate_st
                 else if ( sib_base == 4 )
                 {
                     s->ea.mem.seg  = x86_seg_ss;
-                    s->ea.mem.off += s->regs->r(sp);
+                    s->ea.mem.off += ctxt->regs->r(sp);
                     if ( !s->ext && (b == 0x8f) )
                         /* POP <rm> computes its EA post increment. */
                         s->ea.mem.off += ((mode_64bit() && (s->op_bytes == 4))
@@ -1534,16 +1533,16 @@ int x86emul_decode(struct x86_emulate_st
                 else if ( sib_base == 5 )
                 {
                     s->ea.mem.seg  = x86_seg_ss;
-                    s->ea.mem.off += s->regs->r(bp);
+                    s->ea.mem.off += ctxt->regs->r(bp);
                 }
                 else
-                    s->ea.mem.off += *decode_gpr(s->regs, sib_base);
+                    s->ea.mem.off += *decode_gpr(ctxt->regs, sib_base);
             }
             else
             {
                 generate_exception_if(d & vSIB, X86_EXC_UD);
                 s->modrm_rm |= (s->rex_prefix & 1) << 3;
-                s->ea.mem.off = *decode_gpr(s->regs, s->modrm_rm);
+                s->ea.mem.off = *decode_gpr(ctxt->regs, s->modrm_rm);
                 if ( (s->modrm_rm == 5) && (s->modrm_mod != 0) )
                     s->ea.mem.seg = x86_seg_ss;
             }
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -321,7 +321,6 @@ struct x86_emulate_state {
 #define imm2 ea.orig_val
 
     unsigned long ip;
-    struct cpu_user_regs *regs;
 
 #ifndef NDEBUG
     /*



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:52:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 14:52:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517917.803909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pji1J-0000xt-ST; Tue, 04 Apr 2023 14:52:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517917.803909; Tue, 04 Apr 2023 14:52:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pji1J-0000xm-Pp; Tue, 04 Apr 2023 14:52:25 +0000
Received: by outflank-mailman (input) for mailman id 517917;
 Tue, 04 Apr 2023 14:52:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pji1I-0000xc-89
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 14:52:24 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0607.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::607])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4e22134c-d2f8-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 16:52:22 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB9979.eurprd04.prod.outlook.com (2603:10a6:800:1da::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 14:52:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 14:52:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e22134c-d2f8-11ed-85db-49a42c6b2330
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7fdf882f-0667-e0f1-8183-2dc1a344f4fb@suse.com>
Date: Tue, 4 Apr 2023 16:52:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 4/9] x86emul: support CMPccXADD
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
In-Reply-To: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0152.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB9979:EE_
X-MS-Office365-Filtering-Correlation-Id: 26a2b5d1-4b23-43b1-bff5-08db351c3185
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 26a2b5d1-4b23-43b1-bff5-08db351c3185
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 14:52:21.1560
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB9979

Unconditionally wire this through the ->rmw() hook. Since x86_emul_rmw()
now wants to construct and invoke a stub, make stub_exn available to it
via a new field in the emulator state structure.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
# SDE: -grr (Grand Ridge) or -srf (Sierra Forest), i.e. Intel SDE CPU
# models which provide CMPccXADD.

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -232,6 +232,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
 
         {"avx-vnni",     0x00000007,  1, CPUID_REG_EAX,  4,  1},
         {"avx512-bf16",  0x00000007,  1, CPUID_REG_EAX,  5,  1},
+        {"cmpccxadd",    0x00000007,  1, CPUID_REG_EAX,  7,  1},
         {"fzrm",         0x00000007,  1, CPUID_REG_EAX, 10,  1},
         {"fsrs",         0x00000007,  1, CPUID_REG_EAX, 11,  1},
         {"fsrcs",        0x00000007,  1, CPUID_REG_EAX, 12,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -186,6 +186,7 @@ static const char *const str_7d0[32] =
 static const char *const str_7a1[32] =
 {
     [ 4] = "avx-vnni",      [ 5] = "avx512-bf16",
+    /* 6 */                 [ 7] = "cmpccxadd",
 
     [10] = "fzrm",          [11] = "fsrs",
     [12] = "fsrcs",
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -1388,6 +1388,22 @@ static const struct vex {
     { { 0xdd }, 2, T, R, pfx_66, WIG, Ln }, /* vaesenclast */
     { { 0xde }, 2, T, R, pfx_66, WIG, Ln }, /* vaesdec */
     { { 0xdf }, 2, T, R, pfx_66, WIG, Ln }, /* vaesdeclast */
+    { { 0xe0 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpoxadd */
+    { { 0xe1 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpnoxadd */
+    { { 0xe2 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpbxadd */
+    { { 0xe3 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpnbxadd */
+    { { 0xe4 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpexadd */
+    { { 0xe5 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpnexadd */
+    { { 0xe6 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpbexadd */
+    { { 0xe7 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpaxadd */
+    { { 0xe8 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpsxadd */
+    { { 0xe9 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpnsxadd */
+    { { 0xea }, 2, F, W, pfx_66, Wn, L0 }, /* cmppxadd */
+    { { 0xeb }, 2, F, W, pfx_66, Wn, L0 }, /* cmpnpxadd */
+    { { 0xec }, 2, F, W, pfx_66, Wn, L0 }, /* cmplxadd */
+    { { 0xed }, 2, F, W, pfx_66, Wn, L0 }, /* cmpgexadd */
+    { { 0xee }, 2, F, W, pfx_66, Wn, L0 }, /* cmplexadd */
+    { { 0xef }, 2, F, W, pfx_66, Wn, L0 }, /* cmpgxadd */
     { { 0xf2 }, 2, T, R, pfx_no, Wn, L0 }, /* andn */
     { { 0xf3, 0x08 }, 2, T, R, pfx_no, Wn, L0 }, /* blsr */
     { { 0xf3, 0x10 }, 2, T, R, pfx_no, Wn, L0 }, /* blsmsk */
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -1398,6 +1398,78 @@ int main(int argc, char **argv)
     }
     printf("okay\n");
 
+    printf("%-40s", "Testing cmpbxadd %rbx,%r9,(%rdx)...");
+    if ( stack_exec && cpu_has_cmpccxadd )
+    {
+        instr[0] = 0xc4; instr[1] = 0x62; instr[2] = 0xe1; instr[3] = 0xe2; instr[4] = 0x0a;
+        regs.rip = (unsigned long)&instr[0];
+        regs.eflags = EFLAGS_ALWAYS_SET;
+        res[0] = 0x11223344;
+        res[1] = 0x01020304;
+        regs.rdx = (unsigned long)res;
+        regs.r9  = 0x0001020300112233UL;
+        regs.rbx = 0x0101010101010101UL;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[5]) ||
+             (regs.r9 != 0x0102030411223344UL) ||
+             (regs.rbx != 0x0101010101010101UL) ||
+             ((regs.eflags & EFLAGS_MASK) !=
+              (X86_EFLAGS_PF | EFLAGS_ALWAYS_SET)) ||
+             (res[0] != 0x11223344) ||
+             (res[1] != 0x01020304) )
+            goto fail;
+
+        regs.rip = (unsigned long)&instr[0];
+        regs.r9 <<= 8;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[5]) ||
+             (regs.r9 != 0x0102030411223344UL) ||
+             (regs.rbx != 0x0101010101010101UL) ||
+             ((regs.eflags & EFLAGS_MASK) !=
+              (X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_SF |
+               EFLAGS_ALWAYS_SET)) ||
+             (res[0] != 0x12233445) ||
+             (res[1] != 0x02030405) )
+            goto fail;
+        printf("okay\n");
+
+        printf("%-40s", "Testing cmpsxadd %r9d,%ebx,4(%r10)...");
+        instr[1] = 0xc2; instr[2] = 0x31; instr[3] = 0xe8; instr[4] = 0x5a; instr[5] = 0x04;
+        regs.rip = (unsigned long)&instr[0];
+        res[2] = res[0] = ~0;
+        regs.r10 = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[6]) ||
+             (regs.r9 != 0x0102030411223344UL) ||
+             (regs.rbx != 0x02030405) ||
+             ((regs.eflags & EFLAGS_MASK) != EFLAGS_ALWAYS_SET) ||
+             (res[0] + 1) ||
+             (res[1] != 0x02030405) ||
+             (res[2] + 1) )
+            goto fail;
+
+        regs.rip = (unsigned long)&instr[0];
+        regs.rbx <<= 8;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[6]) ||
+             (regs.r9 != 0x0102030411223344UL) ||
+             (regs.rbx != 0x02030405) ||
+             ((regs.eflags & EFLAGS_MASK) !=
+              (X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_SF |
+               EFLAGS_ALWAYS_SET)) ||
+             (res[0] + 1) ||
+             (res[1] != 0x13253749) ||
+             (res[2] + 1) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     emulops.write_segment = write_segment;
     emulops.write_msr     = write_msr;
 
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -185,6 +185,7 @@ void wrpkru(unsigned int val);
 #define cpu_has_serialize  cp.feat.serialize
 #define cpu_has_avx_vnni   (cp.feat.avx_vnni && xcr0_mask(6))
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
+#define cpu_has_cmpccxadd  cp.feat.cmpccxadd
 
 #define cpu_has_xgetbv1   (cpu_has_xsave && cp.xstate.xgetbv1)
 
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -170,6 +170,7 @@ extern struct cpuinfo_x86 boot_cpu_data;
 /* CPUID level 0x00000007:1.eax */
 #define cpu_has_avx_vnni        boot_cpu_has(X86_FEATURE_AVX_VNNI)
 #define cpu_has_avx512_bf16     boot_cpu_has(X86_FEATURE_AVX512_BF16)
+#define cpu_has_cmpccxadd       boot_cpu_has(X86_FEATURE_CMPCCXADD)
 
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -443,6 +443,7 @@ static const struct ext0f38_table {
     [0xcf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0xdb] = { .simd_size = simd_packed_int, .two_op = 1 },
     [0xdc ... 0xdf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0xe0 ... 0xef] = { .to_mem = 1 },
     [0xf0] = { .two_op = 1 },
     [0xf1] = { .to_mem = 1, .two_op = 1 },
     [0xf2 ... 0xf3] = {},
@@ -934,6 +935,8 @@ decode_0f38(struct x86_emulate_state *s,
             ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
         break;
 
+    case X86EMUL_OPC_VEX_66(0, 0xe0)
+     ... X86EMUL_OPC_VEX_66(0, 0xef): /* cmp<cc>xadd */
     case X86EMUL_OPC_VEX(0, 0xf2):    /* andn */
     case X86EMUL_OPC_VEX(0, 0xf3):    /* Grp 17 */
     case X86EMUL_OPC_VEX(0, 0xf5):    /* bzhi */
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -265,6 +265,7 @@ struct x86_emulate_state {
         rmw_btc,
         rmw_btr,
         rmw_bts,
+        rmw_cmpccxadd,
         rmw_dec,
         rmw_inc,
         rmw_neg,
@@ -322,6 +323,8 @@ struct x86_emulate_state {
 
     unsigned long ip;
 
+    struct stub_exn *stub_exn;
+
 #ifndef NDEBUG
     /*
      * Track caller of x86_decode_insn() to spot missing as well as
@@ -593,6 +596,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
 #define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
+#define vcpu_has_cmpccxadd()   (ctxt->cpuid->feat.cmpccxadd)
 #define vcpu_has_lkgs()        (ctxt->cpuid->feat.lkgs)
 #define vcpu_has_wrmsrns()     (ctxt->cpuid->feat.wrmsrns)
 
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -6881,6 +6881,15 @@ x86_emulate(
 
 #endif /* !X86EMUL_NO_SIMD */
 
+    case X86EMUL_OPC_VEX_66(0x0f38, 0xe0)
+     ... X86EMUL_OPC_VEX_66(0x0f38, 0xef): /* cmp<cc>xadd r,r,m */
+        generate_exception_if(!mode_64bit() || dst.type != OP_MEM || vex.l,
+                              EXC_UD);
+        host_and_vcpu_must_have(cmpccxadd);
+        fail_if(!ops->rmw);
+        state->rmw = rmw_cmpccxadd;
+        break;
+
     case X86EMUL_OPC(0x0f38, 0xf0): /* movbe m,r */
     case X86EMUL_OPC(0x0f38, 0xf1): /* movbe r,m */
         vcpu_must_have(movbe);
@@ -7942,14 +7951,20 @@ x86_emulate(
     {
         ea.val = src.val;
         op_bytes = dst.bytes;
+        state->stub_exn = &stub_exn;
         rc = ops->rmw(dst.mem.seg, dst.mem.off, dst.bytes, &_regs.eflags,
                       state, ctxt);
+#ifdef __XEN__
+        if ( rc == X86EMUL_stub_failure )
+            goto emulation_stub_failure;
+#endif
         if ( rc != X86EMUL_OKAY )
             goto done;
 
         /* Some operations require a register to be written. */
         switch ( state->rmw )
         {
+        case rmw_cmpccxadd:
         case rmw_xchg:
         case rmw_xadd:
             switch ( dst.bytes )
@@ -8224,6 +8239,7 @@ int x86_emul_rmw(
     uint32_t *eflags,
     struct x86_emulate_state *state,
     struct x86_emulate_ctxt *ctxt)
+#define stub_exn (*state->stub_exn) /* for invoke_stub() */
 {
     unsigned long *dst = ptr;
 
@@ -8289,6 +8305,37 @@ int x86_emul_rmw(
 #undef BINOP
 #undef SHIFT
 
+#ifdef __x86_64__
+    case rmw_cmpccxadd:
+    {
+        struct x86_emulate_stub stub = {};
+        uint8_t *buf = get_stub(stub);
+        typeof(state->vex) *pvex = container_of(buf + 1, typeof(state->vex),
+                                                raw[0]);
+        unsigned long dummy;
+
+        buf[0] = 0xc4;
+        *pvex = state->vex;
+        pvex->b = 1;
+        pvex->r = 1;
+        pvex->reg = 0xf; /* rAX */
+        buf[3] = ctxt->opcode;
+        buf[4] = 0x11; /* reg=rDX r/m=(%RCX) */
+        buf[5] = 0xc3;
+
+        *eflags &= ~EFLAGS_MASK;
+        invoke_stub("",
+                    _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
+                    "+m" (*dst), "+d" (state->ea.val),
+                    [tmp] "=&r" (dummy), [eflags] "+g" (*eflags)
+                    : "a" (*decode_vex_gpr(state->vex.reg, ctxt->regs, ctxt)),
+                      "c" (dst), [mask] "i" (EFLAGS_MASK));
+
+        put_stub(stub);
+        break;
+    }
+#endif
+
     case rmw_not:
         switch ( state->op_bytes )
         {
@@ -8384,7 +8431,13 @@ int x86_emul_rmw(
 #undef JCXZ
 
     return X86EMUL_OKAY;
+
+#if defined(__XEN__) && defined(__x86_64__)
+ emulation_stub_failure:
+    return X86EMUL_stub_failure;
+#endif
 }
+#undef stub_exn
 
 static void __init __maybe_unused build_assertions(void)
 {
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -278,6 +278,7 @@ XEN_CPUFEATURE(SSBD,          9*32+31) /
 /* Intel-defined CPU features, CPUID level 0x00000007:1.eax, word 10 */
 XEN_CPUFEATURE(AVX_VNNI,     10*32+ 4) /*A  AVX-VNNI Instructions */
 XEN_CPUFEATURE(AVX512_BF16,  10*32+ 5) /*A  AVX512 BFloat16 Instructions */
+XEN_CPUFEATURE(CMPCCXADD,    10*32+ 7) /*A  CMPccXADD Instructions */
 XEN_CPUFEATURE(FZRM,         10*32+10) /*A  Fast Zero-length REP MOVSB */
 XEN_CPUFEATURE(FSRS,         10*32+11) /*A  Fast Short REP STOSB */
 XEN_CPUFEATURE(FSRCS,        10*32+12) /*A  Fast Short REP CMPSB/SCASB */



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:53:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 14:53:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517922.803920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pji26-0001ZI-Af; Tue, 04 Apr 2023 14:53:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517922.803920; Tue, 04 Apr 2023 14:53:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pji26-0001ZB-7Q; Tue, 04 Apr 2023 14:53:14 +0000
Received: by outflank-mailman (input) for mailman id 517922;
 Tue, 04 Apr 2023 14:53:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pji25-0001Yx-5r
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 14:53:13 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0624.outbound.protection.outlook.com
 [2a01:111:f400:fe02::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6b18823a-d2f8-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 16:53:11 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8126.eurprd04.prod.outlook.com (2603:10a6:102:1bc::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 14:53:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 14:53:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b18823a-d2f8-11ed-b464-930f4c7d94ae
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a9c212a8-8c63-91e0-eb07-8c927b62c1ca@suse.com>
Date: Tue, 4 Apr 2023 16:53:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 5/9] x86emul: re-use new stub_exn field in state structure
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
In-Reply-To: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0119.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8126:EE_
X-MS-Office365-Filtering-Correlation-Id: 1c8fae01-9a9b-4b73-bf58-08db351c4dbf
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1c8fae01-9a9b-4b73-bf58-08db351c4dbf
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 14:53:08.4398
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8126

This can now also be used to reduce the number of parameters
x86emul_fpu() needs to take.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
We could of course set the struct field once early in x86_emulate(), but
for now I think we're better off leaving it as NULL where not actually
needed.

--- a/xen/arch/x86/x86_emulate/fpu.c
+++ b/xen/arch/x86/x86_emulate/fpu.c
@@ -90,9 +90,8 @@ int x86emul_fpu(struct x86_emulate_state
                 unsigned int *insn_bytes,
                 enum x86_emulate_fpu_type *fpu_type,
 #define fpu_type (*fpu_type) /* for get_fpu() */
-                struct stub_exn *stub_exn,
-#define stub_exn (*stub_exn) /* for invoke_stub() */
                 mmval_t *mmvalp)
+#define stub_exn (*s->stub_exn) /* for invoke_stub() */
 {
     uint8_t b;
     int rc;
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -764,7 +764,6 @@ int x86emul_fpu(struct x86_emulate_state
                 const struct x86_emulate_ops *ops,
                 unsigned int *insn_bytes,
                 enum x86_emulate_fpu_type *fpu_type,
-                struct stub_exn *stub_exn,
                 mmval_t *mmvalp);
 int x86emul_0f01(struct x86_emulate_state *s,
                  struct cpu_user_regs *regs,
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -2058,8 +2058,9 @@ x86_emulate(
 #ifndef X86EMUL_NO_FPU
     case 0x9b:  /* wait/fwait */
     case 0xd8 ... 0xdf: /* FPU */
+        state->stub_exn = &stub_exn;
         rc = x86emul_fpu(state, &_regs, &dst, &src, ctxt, ops,
-                         &insn_bytes, &fpu_type, &stub_exn, mmvalp);
+                         &insn_bytes, &fpu_type, mmvalp);
         goto dispatch_from_helper;
 #endif
 



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:53:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 14:53:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517924.803930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pji2T-00021x-IZ; Tue, 04 Apr 2023 14:53:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517924.803930; Tue, 04 Apr 2023 14:53:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pji2T-00021q-Ep; Tue, 04 Apr 2023 14:53:37 +0000
Received: by outflank-mailman (input) for mailman id 517924;
 Tue, 04 Apr 2023 14:53:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pji2S-0001Yx-7x
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 14:53:36 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0619.outbound.protection.outlook.com
 [2a01:111:f400:fe02::619])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 78833a08-d2f8-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 16:53:34 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8126.eurprd04.prod.outlook.com (2603:10a6:102:1bc::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 14:53:32 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 14:53:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78833a08-d2f8-11ed-b464-930f4c7d94ae
Message-ID: <c5358413-2638-23c2-4d44-a925c6f08d49@suse.com>
Date: Tue, 4 Apr 2023 16:53:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 6/9] x86emul: support AVX-IFMA insns
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
In-Reply-To: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0162.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

As in a few cases before (in particular AVX512-IFMA), the insns here, and
especially their memory access patterns, follow the usual scheme, so I
didn't think it necessary to add a contrived test specifically for them.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -239,6 +239,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"fred",         0x00000007,  1, CPUID_REG_EAX, 17,  1},
         {"lkgs",         0x00000007,  1, CPUID_REG_EAX, 18,  1},
         {"wrmsrns",      0x00000007,  1, CPUID_REG_EAX, 19,  1},
+        {"avx-ifma",     0x00000007,  1, CPUID_REG_EAX, 23,  1},
 
         {"cet-sss",      0x00000007,  1, CPUID_REG_EDX, 18,  1},
 
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -193,6 +193,8 @@ static const char *const str_7a1[32] =
 
     /* 16 */                [17] = "fred",
     [18] = "lkgs",          [19] = "wrmsrns",
+
+    /* 22 */                [23] = "avx-ifma",
 };
 
 static const char *const str_e21a[32] =
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -1372,6 +1372,8 @@ static const struct vex {
     { { 0xad }, 2, T, R, pfx_66, Wn, LIG }, /* vnmadd213s{s,d} */
     { { 0xae }, 2, T, R, pfx_66, Wn, Ln }, /* vnmsub213p{s,d} */
     { { 0xaf }, 2, T, R, pfx_66, Wn, LIG }, /* vnmsub213s{s,d} */
+    { { 0xb4 }, 2, T, R, pfx_66, W1, Ln }, /* vpmadd52luq */
+    { { 0xb5 }, 2, T, R, pfx_66, W1, Ln }, /* vpmadd52huq */
     { { 0xb6 }, 2, T, R, pfx_66, Wn, Ln }, /* vmaddsub231p{s,d} */
     { { 0xb7 }, 2, T, R, pfx_66, Wn, Ln }, /* vmsubadd231p{s,d} */
     { { 0xb8 }, 2, T, R, pfx_66, Wn, Ln }, /* vmadd231p{s,d} */
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -186,6 +186,7 @@ void wrpkru(unsigned int val);
 #define cpu_has_avx_vnni   (cp.feat.avx_vnni && xcr0_mask(6))
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
 #define cpu_has_cmpccxadd  cp.feat.cmpccxadd
+#define cpu_has_avx_ifma   (cp.feat.avx_ifma && xcr0_mask(6))
 
 #define cpu_has_xgetbv1   (cpu_has_xsave && cp.xstate.xgetbv1)
 
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -171,6 +171,7 @@ extern struct cpuinfo_x86 boot_cpu_data;
 #define cpu_has_avx_vnni        boot_cpu_has(X86_FEATURE_AVX_VNNI)
 #define cpu_has_avx512_bf16     boot_cpu_has(X86_FEATURE_AVX512_BF16)
 #define cpu_has_cmpccxadd       boot_cpu_has(X86_FEATURE_CMPCCXADD)
+#define cpu_has_avx_ifma        boot_cpu_has(X86_FEATURE_AVX_IFMA)
 
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -599,6 +599,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_cmpccxadd()   (ctxt->cpuid->feat.cmpccxadd)
 #define vcpu_has_lkgs()        (ctxt->cpuid->feat.lkgs)
 #define vcpu_has_wrmsrns()     (ctxt->cpuid->feat.wrmsrns)
+#define vcpu_has_avx_ifma()    (ctxt->cpuid->feat.avx_ifma)
 
 #define vcpu_must_have(feat) \
     generate_exception_if(!vcpu_has_##feat(), X86_EXC_UD)
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -6727,6 +6727,12 @@ x86_emulate(
         break;
     }
 
+    case X86EMUL_OPC_VEX_66(0x0f38, 0xb4): /* vpmadd52luq [xy]mm/mem,[xy]mm,[xy]mm */
+    case X86EMUL_OPC_VEX_66(0x0f38, 0xb5): /* vpmadd52huq [xy]mm/mem,[xy]mm,[xy]mm */
+        host_and_vcpu_must_have(avx_ifma);
+        generate_exception_if(!vex.w, EXC_UD);
+        goto simd_0f_ymm;
+
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xb4): /* vpmadd52luq [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xb5): /* vpmadd52huq [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512_ifma);
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -285,6 +285,7 @@ XEN_CPUFEATURE(FSRCS,        10*32+12) /
 XEN_CPUFEATURE(FRED,         10*32+17) /*   Flexible Return and Event Delivery */
 XEN_CPUFEATURE(LKGS,         10*32+18) /*S  Load Kernel GS Base */
 XEN_CPUFEATURE(WRMSRNS,      10*32+19) /*A  WRMSR Non-Serialising */
+XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
 
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
 XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -254,7 +254,7 @@ def crunch_numbers(state):
         # feature flags.  If want to use AVX512, AVX2 must be supported and
         # enabled.  Certain later extensions, acting on 256-bit vectors of
         # integers, better depend on AVX2 than AVX.
-        AVX2: [AVX512F, VAES, VPCLMULQDQ, AVX_VNNI],
+        AVX2: [AVX512F, VAES, VPCLMULQDQ, AVX_VNNI, AVX_IFMA],
 
         # AVX512F is taken to mean hardware support for 512bit registers
         # (which in practice depends on the EVEX prefix to encode) as well



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:54:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 14:54:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517929.803940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pji33-0002ci-R6; Tue, 04 Apr 2023 14:54:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517929.803940; Tue, 04 Apr 2023 14:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pji33-0002cZ-Ne; Tue, 04 Apr 2023 14:54:13 +0000
Received: by outflank-mailman (input) for mailman id 517929;
 Tue, 04 Apr 2023 14:54:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pji32-0001Yx-68
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 14:54:12 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20631.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8e50941e-d2f8-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 16:54:10 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8126.eurprd04.prod.outlook.com (2603:10a6:102:1bc::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 14:54:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 14:54:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e50941e-d2f8-11ed-b464-930f4c7d94ae
Message-ID: <e5c21de7-8802-9226-82f6-505c8f4d6ac8@suse.com>
Date: Tue, 4 Apr 2023 16:54:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 7/9] x86emul: support AVX-VNNI-INT8
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
In-Reply-To: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0195.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

These are close relatives of the AVX-VNNI ISA extension. Since the insns
here, and in particular their memory access patterns, follow the usual
scheme (and especially that of AVX-VNNI's byte variants), I didn't think
it necessary to add a contrived test specifically for them.

While making the addition, also re-wire AVX-VNNI's handling to
simd_0f_ymm: There's no reason to check the AVX feature alongside the
one actually of interest (a few features do require two checks, e.g.
GFNI+AVX, but that isn't the case here).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -241,6 +241,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"wrmsrns",      0x00000007,  1, CPUID_REG_EAX, 19,  1},
         {"avx-ifma",     0x00000007,  1, CPUID_REG_EAX, 23,  1},
 
+        {"avx-vnni-int8",0x00000007,  1, CPUID_REG_EDX,  4,  1},
         {"cet-sss",      0x00000007,  1, CPUID_REG_EDX, 18,  1},
 
         {"intel-psfd",   0x00000007,  2, CPUID_REG_EDX,  0,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -214,6 +214,8 @@ static const char *const str_7c1[32] =
 
 static const char *const str_7d1[32] =
 {
+    [ 4] = "avx-vnni-int8",
+
     [18] = "cet-sss",
 };
 
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -1337,8 +1337,14 @@ static const struct vex {
     { { 0x45 }, 2, T, R, pfx_66, Wn, Ln }, /* vpsrlv{d,q} */
     { { 0x46 }, 2, T, R, pfx_66, W0, Ln }, /* vpsravd */
     { { 0x47 }, 2, T, R, pfx_66, Wn, Ln }, /* vpsllv{d,q} */
+    { { 0x50 }, 2, T, R, pfx_no, W0, Ln }, /* vpdpbuud */
     { { 0x50 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpbusd */
+    { { 0x50 }, 2, T, R, pfx_f3, W0, Ln }, /* vpdpbsud */
+    { { 0x50 }, 2, T, R, pfx_f2, W0, Ln }, /* vpdpbssd */
+    { { 0x51 }, 2, T, R, pfx_no, W0, Ln }, /* vpdpbuuds */
     { { 0x51 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpbusds */
+    { { 0x51 }, 2, T, R, pfx_f3, W0, Ln }, /* vpdpbsuds */
+    { { 0x51 }, 2, T, R, pfx_f2, W0, Ln }, /* vpdpbssds */
     { { 0x52 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpwssd */
     { { 0x53 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpwssds */
     { { 0x58 }, 2, T, R, pfx_66, W0, Ln }, /* vpbroadcastd */
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -187,6 +187,7 @@ void wrpkru(unsigned int val);
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
 #define cpu_has_cmpccxadd  cp.feat.cmpccxadd
 #define cpu_has_avx_ifma   (cp.feat.avx_ifma && xcr0_mask(6))
+#define cpu_has_avx_vnni_int8 (cp.feat.avx_vnni_int8 && xcr0_mask(6))
 
 #define cpu_has_xgetbv1   (cpu_has_xsave && cp.xstate.xgetbv1)
 
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -173,6 +173,9 @@ extern struct cpuinfo_x86 boot_cpu_data;
 #define cpu_has_cmpccxadd       boot_cpu_has(X86_FEATURE_CMPCCXADD)
 #define cpu_has_avx_ifma        boot_cpu_has(X86_FEATURE_AVX_IFMA)
 
+/* CPUID level 0x00000007:1.edx */
+#define cpu_has_avx_vnni_int8   boot_cpu_has(X86_FEATURE_AVX_VNNI_INT8)
+
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
 #define cpu_has_cpuid_faulting  boot_cpu_has(X86_FEATURE_CPUID_FAULTING)
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -600,6 +600,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_lkgs()        (ctxt->cpuid->feat.lkgs)
 #define vcpu_has_wrmsrns()     (ctxt->cpuid->feat.wrmsrns)
 #define vcpu_has_avx_ifma()    (ctxt->cpuid->feat.avx_ifma)
+#define vcpu_has_avx_vnni_int8() (ctxt->cpuid->feat.avx_vnni_int8)
 
 #define vcpu_must_have(feat) \
     generate_exception_if(!vcpu_has_##feat(), X86_EXC_UD)
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -6077,13 +6077,23 @@ x86_emulate(
         generate_exception_if(vex.l, EXC_UD);
         goto simd_0f_avx;
 
+    case X86EMUL_OPC_VEX   (0x0f38, 0x50): /* vpdpbuud [xy]mm/mem,[xy]mm,[xy]mm */
+    case X86EMUL_OPC_VEX_F3(0x0f38, 0x50): /* vpdpbsud [xy]mm/mem,[xy]mm,[xy]mm */
+    case X86EMUL_OPC_VEX_F2(0x0f38, 0x50): /* vpdpbssd [xy]mm/mem,[xy]mm,[xy]mm */
+    case X86EMUL_OPC_VEX   (0x0f38, 0x51): /* vpdpbuuds [xy]mm/mem,[xy]mm,[xy]mm */
+    case X86EMUL_OPC_VEX_F3(0x0f38, 0x51): /* vpdpbsuds [xy]mm/mem,[xy]mm,[xy]mm */
+    case X86EMUL_OPC_VEX_F2(0x0f38, 0x51): /* vpdpbssds [xy]mm/mem,[xy]mm,[xy]mm */
+        host_and_vcpu_must_have(avx_vnni_int8);
+        generate_exception_if(vex.w, EXC_UD);
+        goto simd_0f_ymm;
+
     case X86EMUL_OPC_VEX_66(0x0f38, 0x50): /* vpdpbusd [xy]mm/mem,[xy]mm,[xy]mm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x51): /* vpdpbusds [xy]mm/mem,[xy]mm,[xy]mm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x52): /* vpdpwssd [xy]mm/mem,[xy]mm,[xy]mm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x53): /* vpdpwssds [xy]mm/mem,[xy]mm,[xy]mm */
         host_and_vcpu_must_have(avx_vnni);
         generate_exception_if(vex.w, EXC_UD);
-        goto simd_0f_avx;
+        goto simd_0f_ymm;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x50): /* vpdpbusd [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x51): /* vpdpbusds [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -305,6 +305,7 @@ XEN_CPUFEATURE(MCDT_NO,            13*32
 /* Intel-defined CPU features, CPUID level 0x00000007:1.ecx, word 14 */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1.edx, word 15 */
+XEN_CPUFEATURE(AVX_VNNI_INT8,      15*32+ 4) /*A  AVX-VNNI-INT8 Instructions */
 XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
 
 #endif /* XEN_CPUFEATURE */
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -254,7 +254,7 @@ def crunch_numbers(state):
         # feature flags.  If want to use AVX512, AVX2 must be supported and
         # enabled.  Certain later extensions, acting on 256-bit vectors of
         # integers, better depend on AVX2 than AVX.
-        AVX2: [AVX512F, VAES, VPCLMULQDQ, AVX_VNNI, AVX_IFMA],
+        AVX2: [AVX512F, VAES, VPCLMULQDQ, AVX_VNNI, AVX_IFMA, AVX_VNNI_INT8],
 
         # AVX512F is taken to mean hardware support for 512bit registers
         # (which in practice depends on the EVEX prefix to encode) as well
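For background on the arithmetic being emulated: the vpdpb* cases added above belong to the AVX-VNNI-INT8 byte dot-product family. A scalar sketch of the per-dword step of the non-saturating signed x signed form, vpdpbssd (the helper name is mine, not Xen's):

```c
#include <stdint.h>

/* Per-dword step of vpdpbssd (signed x signed bytes, non-saturating):
 * four int8 x int8 products are summed and added into an int32
 * accumulator.  The 's'-suffixed forms additionally saturate. */
static int32_t dpbssd_dword(int32_t acc, uint32_t src1, uint32_t src2)
{
    unsigned int i;

    for ( i = 0; i < 4; ++i )
        acc += (int32_t)(int8_t)(src1 >> (8 * i)) *
               (int8_t)(src2 >> (8 * i));

    return acc;
}
```

The u/s letters in the mnemonics (vpdpbuud, vpdpbsud, vpdpbssd) encode the signedness of the first and second byte source respectively.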



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:54:41 2023
Message-ID: <bdcb4822-397a-0795-08eb-74e661d9b7ae@suse.com>
Date: Tue, 4 Apr 2023 16:54:32 +0200
Subject: [PATCH 8/9] x86emul: support AVX-NE-CONVERT insns
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
In-Reply-To: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>

Matching what was done earlier, explicit tests are added only for
irregular insn / memory access patterns.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
SDE: -grr or -srf
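For background on what these insns compute: BF16 is simply the high half of an IEEE-754 binary32, so the BF16 -> single conversion is an exact 16-bit widening shift that cannot fault ("NE" = no exceptions). A minimal scalar sketch, not part of the patch (helper name is mine):

```c
#include <stdint.h>
#include <string.h>

/* BF16 occupies the upper 16 bits of a binary32; converting to float
 * is therefore a plain widening shift followed by a bit-cast. */
static float bf16_to_f32(uint16_t bf)
{
    uint32_t bits = (uint32_t)bf << 16;
    float f;

    memcpy(&f, &bits, sizeof(f));
    return f;
}
```

The vbcstnebf162ps test further down relies on exactly this: it stores 0x43210000 in memory, has the insn read the two bytes 0x4321 at offset 2, and checks the broadcast result against the single with bit pattern 0x43210000.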

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -242,6 +242,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"avx-ifma",     0x00000007,  1, CPUID_REG_EAX, 23,  1},
 
         {"avx-vnni-int8",0x00000007,  1, CPUID_REG_EDX,  4,  1},
+        {"avx-ne-convert",0x00000007, 1, CPUID_REG_EDX,  5,  1},
         {"cet-sss",      0x00000007,  1, CPUID_REG_EDX, 18,  1},
 
         {"intel-psfd",   0x00000007,  2, CPUID_REG_EDX,  0,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -214,7 +214,7 @@ static const char *const str_7c1[32] =
 
 static const char *const str_7d1[32] =
 {
-    [ 4] = "avx-vnni-int8",
+    [ 4] = "avx-vnni-int8", [ 5] = "avx-ne-convert",
 
     [18] = "cet-sss",
 };
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -1350,6 +1350,7 @@ static const struct vex {
     { { 0x58 }, 2, T, R, pfx_66, W0, Ln }, /* vpbroadcastd */
     { { 0x59 }, 2, T, R, pfx_66, W0, Ln }, /* vpbroadcastq */
     { { 0x5a }, 2, F, R, pfx_66, W0, L1 }, /* vbroadcasti128 */
+    { { 0x72 }, 2, T, R, pfx_f3, W0, Ln }, /* vcvtneps2bf16 */
     { { 0x78 }, 2, T, R, pfx_66, W0, Ln }, /* vpbroadcastb */
     { { 0x79 }, 2, T, R, pfx_66, W0, Ln }, /* vpbroadcastw */
     { { 0x8c }, 2, F, R, pfx_66, Wn, Ln }, /* vpmaskmov{d,q} */
@@ -1378,6 +1379,12 @@ static const struct vex {
     { { 0xad }, 2, T, R, pfx_66, Wn, LIG }, /* vnmadd213s{s,d} */
     { { 0xae }, 2, T, R, pfx_66, Wn, Ln }, /* vnmsub213p{s,d} */
     { { 0xaf }, 2, T, R, pfx_66, Wn, LIG }, /* vnmsub213s{s,d} */
+    { { 0xb0 }, 2, F, R, pfx_no, W0, Ln }, /* vcvtneoph2ps */
+    { { 0xb0 }, 2, F, R, pfx_66, W0, Ln }, /* vcvtneeph2ps */
+    { { 0xb0 }, 2, F, R, pfx_f3, W0, Ln }, /* vcvtneebf162ps */
+    { { 0xb0 }, 2, F, R, pfx_f2, W0, Ln }, /* vcvtneobf162ps */
+    { { 0xb1 }, 2, F, R, pfx_66, W0, Ln }, /* vbcstnesh2ps */
+    { { 0xb1 }, 2, F, R, pfx_f3, W0, Ln }, /* vbcstnebf162ps */
     { { 0xb4 }, 2, T, R, pfx_66, W1, Ln }, /* vpmadd52luq */
     { { 0xb5 }, 2, T, R, pfx_66, W1, Ln }, /* vpmadd52huq */
     { { 0xb6 }, 2, T, R, pfx_66, Wn, Ln }, /* vmaddsub231p{s,d} */
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -4572,6 +4572,39 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing vbcstnebf162ps 2(%ecx),%ymm3...");
+    if ( stack_exec && cpu_has_avx_ne_convert )
+    {
+        decl_insn(vbcstnebf162ps);
+
+        asm volatile ( /* vbcstnebf162ps 2(%0), %%ymm3 */
+                       put_insn(vbcstnebf162ps,
+                                ".byte 0xc4, 0xe2, 0x7e, 0xb1, 0x59, 0x02 ")
+                       :: "c" (NULL) );
+
+        res[0] = 0x43210000;
+        regs.ecx = (unsigned long)res;
+        set_insn(vbcstnebf162ps);
+        bytes_read  = 0;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( rc != X86EMUL_OKAY || !check_eip(vbcstnebf162ps) ||
+             bytes_read != 2 )
+            goto fail;
+
+        asm volatile ( "vbroadcastss %1, %%ymm2;"
+                       "vsubps %%ymm3, %%ymm2, %%ymm1;"
+                       "vptest %%ymm1, %%ymm1;"
+                       "setc %b0; setz %h0"
+                       : "=&Q" (rc)
+                       : "m" (res[0]) );
+        if ( (rc & 0xffff) != 0x0101 )
+            goto fail;
+
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     printf("%-40s", "Testing stmxcsr (%edx)...");
     if ( cpu_has_sse )
     {
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -188,6 +188,7 @@ void wrpkru(unsigned int val);
 #define cpu_has_cmpccxadd  cp.feat.cmpccxadd
 #define cpu_has_avx_ifma   (cp.feat.avx_ifma && xcr0_mask(6))
 #define cpu_has_avx_vnni_int8 (cp.feat.avx_vnni_int8 && xcr0_mask(6))
+#define cpu_has_avx_ne_convert (cp.feat.avx_ne_convert && xcr0_mask(6))
 
 #define cpu_has_xgetbv1   (cpu_has_xsave && cp.xstate.xgetbv1)
 
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -175,6 +175,7 @@ extern struct cpuinfo_x86 boot_cpu_data;
 
 /* CPUID level 0x00000007:1.edx */
 #define cpu_has_avx_vnni_int8   boot_cpu_has(X86_FEATURE_AVX_VNNI_INT8)
+#define cpu_has_avx_ne_convert  boot_cpu_has(X86_FEATURE_AVX_NE_CONVERT)
 
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -423,6 +423,8 @@ static const struct ext0f38_table {
     [0xad] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
     [0xae] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
     [0xaf] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
+    [0xb0] = { .simd_size = simd_other, .two_op = 1 },
+    [0xb1] = { .simd_size = simd_other, .two_op = 1 },
     [0xb4 ... 0xb5] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0xb6 ... 0xb8] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
     [0xb9] = { .simd_size = simd_scalar_vexw, .d8s = d8s_dq },
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -601,6 +601,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_wrmsrns()     (ctxt->cpuid->feat.wrmsrns)
 #define vcpu_has_avx_ifma()    (ctxt->cpuid->feat.avx_ifma)
 #define vcpu_has_avx_vnni_int8() (ctxt->cpuid->feat.avx_vnni_int8)
+#define vcpu_has_avx_ne_convert() (ctxt->cpuid->feat.avx_ne_convert)
 
 #define vcpu_must_have(feat) \
     generate_exception_if(!vcpu_has_##feat(), X86_EXC_UD)
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -6208,6 +6208,19 @@ x86_emulate(
         host_and_vcpu_must_have(avx512_vbmi2);
         goto avx512f_no_sae;
 
+    case X86EMUL_OPC_VEX   (0x0f38, 0xb0): /* vcvtneoph2ps mem,[xy]mm */
+    case X86EMUL_OPC_VEX_66(0x0f38, 0xb0): /* vcvtneeph2ps mem,[xy]mm */
+    case X86EMUL_OPC_VEX_F3(0x0f38, 0xb0): /* vcvtneebf162ps mem,[xy]mm */
+    case X86EMUL_OPC_VEX_F2(0x0f38, 0xb0): /* vcvtneobf162ps mem,[xy]mm */
+        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        /* fall through */
+    case X86EMUL_OPC_VEX_F3(0x0f38, 0x72): /* vcvtneps2bf16 [xy]mm/mem,xmm */
+        host_and_vcpu_must_have(avx_ne_convert);
+        generate_exception_if(vex.w, EXC_UD);
+        d |= TwoOp;
+        op_bytes = 16 << vex.l;
+        goto simd_0f_ymm;
+
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x75): /* vpermi2{b,w} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x7d): /* vpermt2{b,w} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x8d): /* vperm{b,w} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -6737,6 +6750,13 @@ x86_emulate(
         break;
     }
 
+    case X86EMUL_OPC_VEX_66(0x0f38, 0xb1): /* vbcstnesh2ps mem,[xy]mm */
+    case X86EMUL_OPC_VEX_F3(0x0f38, 0xb1): /* vbcstnebf162ps mem,[xy]mm */
+        host_and_vcpu_must_have(avx_ne_convert);
+        generate_exception_if(vex.w || ea.type != OP_MEM, EXC_UD);
+        op_bytes = 2;
+        goto simd_0f_ymm;
+
     case X86EMUL_OPC_VEX_66(0x0f38, 0xb4): /* vpmadd52luq [xy]mm/mem,[xy]mm,[xy]mm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0xb5): /* vpmadd52huq [xy]mm/mem,[xy]mm,[xy]mm */
         host_and_vcpu_must_have(avx_ifma);
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -306,6 +306,7 @@ XEN_CPUFEATURE(MCDT_NO,            13*32
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1.edx, word 15 */
 XEN_CPUFEATURE(AVX_VNNI_INT8,      15*32+ 4) /*A  AVX-VNNI-INT8 Instructions */
+XEN_CPUFEATURE(AVX_NE_CONVERT,     15*32+ 5) /*A  AVX-NE-CONVERT Instructions */
 XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
 
 #endif /* XEN_CPUFEATURE */
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -232,7 +232,7 @@ def crunch_numbers(state):
         # for the XOP prefix).  VEX/XOP-encoded GPR instructions, such as
         # those from the BMI{1,2}, TBM and LWP sets function fine in the
         # absence of any enabled xstate.
-        AVX: [FMA, FMA4, F16C, AVX2, XOP],
+        AVX: [FMA, FMA4, F16C, AVX2, XOP, AVX_NE_CONVERT],
 
         # This dependency exists solely for the shadow pagetable code.  If the
         # host doesn't have NX support, the shadow pagetable code can't handle



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:55:47 2023
Message-ID: <b567e068-dcab-b294-9706-ffbecb36de3c@suse.com>
Date: Tue, 4 Apr 2023 16:55:27 +0200
Subject: [PATCH 9/9] x86emul+VMX: support {RD,WR}MSRLIST
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
In-Reply-To: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>

These are "compound" instructions, issuing a series of RDMSR / WRMSR
operations respectively. In the emulator we can therefore implement them
by using the existing msr_{read,write}() hooks. The memory accesses rely
on the HVM ->read() / ->write() hooks already being linear-address
(x86_seg_none) aware (by way of hvmemul_virtual_to_linear() handling
this case).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TODO: Use VMX tertiary execution control (once bit is known; see
      //todo-s) and then further adjust cpufeatureset.h.

RFC: In vmx_vmexit_handler() handling is forwarded to the emulator
     blindly. Alternatively we could consult the exit qualification and
     process just a single MSR at a time (without involving the
     emulator), exiting back to the guest after every iteration. (I
     don't think a mix of both models makes a lot of sense.)

RFC: For PV, priv_op_ops would need to gain proper read/write hooks,
     which doesn't look desirable (albeit there we could refuse to
     handle anything other than x86_seg_none); we may want to consider
     instead not supporting the feature for PV guests, requiring e.g.
     Linux to process the lists in new pvops hooks.

RFC: I wasn't sure whether to add preemption checks to the loops -
     thoughts?

With the VMX side of the spec still unclear (tertiary execution control
bit unspecified in ISE 046) we can't enable the insn yet for (HVM) guest
use. The precise behavior of MSR_BARRIER is also not spelled out, so the
(minimal) implementation is a guess for now.
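The architectural loop being emulated, per the ISE description: %rsi points at a table of MSR indices, %rdi at the corresponding data table, and %rcx holds a 64-bit bitmap selecting table entries; each processed bit is cleared from %rcx, which is what keeps the insns restartable after a fault part way through. A sketch of the WRMSRLIST case, with a hypothetical hook type standing in for the emulator's msr_write() hook (all names here are mine):

```c
#include <stdint.h>

/* Hypothetical stand-in for the emulator's msr_write() hook. */
typedef int (*wrmsr_hook_t)(uint32_t idx, uint64_t val, void *ctxt);

/*
 * idx_tbl mirrors the table at %rsi, val_tbl the one at %rdi, and
 * *mask is %rcx.  Clearing each bit before moving on means a failure
 * in the middle leaves state from which the insn can be restarted.
 */
static int emul_wrmsrlist(const uint64_t *idx_tbl, const uint64_t *val_tbl,
                          uint64_t *mask, wrmsr_hook_t wrmsr, void *ctxt)
{
    unsigned int i;

    for ( i = 0; i < 64; ++i )
    {
        uint64_t bit = 1ULL << i;
        int rc;

        if ( !(*mask & bit) )
            continue;

        rc = wrmsr(idx_tbl[i], val_tbl[i], ctxt);
        if ( rc )
            return rc;

        *mask &= ~bit;
    }

    return 0;
}

/* Tiny demo hook recording the last write, for illustration only. */
static uint64_t seen_idx, seen_val;
static int demo_wr(uint32_t idx, uint64_t val, void *ctxt)
{
    (void)ctxt;
    seen_idx = idx;
    seen_val = val;
    return 0;
}
```

RDMSRLIST is the mirror image: the hook reads the MSR named by idx_tbl[i] and the result is stored into the table at %rdi.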

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -240,6 +240,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"lkgs",         0x00000007,  1, CPUID_REG_EAX, 18,  1},
         {"wrmsrns",      0x00000007,  1, CPUID_REG_EAX, 19,  1},
         {"avx-ifma",     0x00000007,  1, CPUID_REG_EAX, 23,  1},
+        {"msrlist",      0x00000007,  1, CPUID_REG_EAX, 27,  1},
 
         {"avx-vnni-int8",0x00000007,  1, CPUID_REG_EDX,  4,  1},
         {"avx-ne-convert",0x00000007, 1, CPUID_REG_EDX,  5,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -195,6 +195,8 @@ static const char *const str_7a1[32] =
     [18] = "lkgs",          [19] = "wrmsrns",
 
     /* 22 */                [23] = "avx-ifma",
+
+    /* 26 */                [27] = "msrlist",
 };
 
 static const char *const str_e21a[32] =
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -342,6 +342,8 @@ static const struct {
     { { 0x01, 0xc4 }, { 2, 2 }, F, N }, /* vmxoff */
     { { 0x01, 0xc5 }, { 2, 2 }, F, N }, /* pconfig */
     { { 0x01, 0xc6 }, { 2, 2 }, F, N }, /* wrmsrns */
+    { { 0x01, 0xc6 }, { 0, 2 }, F, W, pfx_f2 }, /* rdmsrlist */
+    { { 0x01, 0xc6 }, { 0, 2 }, F, R, pfx_f3 }, /* wrmsrlist */
     { { 0x01, 0xc8 }, { 2, 2 }, F, N }, /* monitor */
     { { 0x01, 0xc9 }, { 2, 2 }, F, N }, /* mwait */
     { { 0x01, 0xca }, { 2, 2 }, F, N }, /* clac */
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -589,6 +589,7 @@ static int read(
     default:
         if ( !is_x86_user_segment(seg) )
             return X86EMUL_UNHANDLEABLE;
+    case x86_seg_none:
         bytes_read += bytes;
         break;
     }
@@ -619,7 +620,7 @@ static int write(
     if ( verbose )
         printf("** %s(%u, %p,, %u,)\n", __func__, seg, (void *)offset, bytes);
 
-    if ( !is_x86_user_segment(seg) )
+    if ( !is_x86_user_segment(seg) && seg != x86_seg_none )
         return X86EMUL_UNHANDLEABLE;
     memcpy((void *)offset, p_data, bytes);
     return X86EMUL_OKAY;
@@ -711,6 +712,10 @@ static int read_msr(
 {
     switch ( reg )
     {
+    case 0x0000002f: /* BARRIER */
+        *val = 0;
+        return X86EMUL_OKAY;
+
     case 0xc0000080: /* EFER */
         *val = ctxt->addr_size > 32 ? 0x500 /* LME|LMA */ : 0;
         return X86EMUL_OKAY;
@@ -1499,9 +1504,53 @@ int main(int argc, char **argv)
          (gs_base != 0x0000111122224444UL) ||
          gs_base_shadow )
         goto fail;
+    printf("okay\n");
 
     cp.extd.nscb = i;
     emulops.write_segment = NULL;
+
+    printf("%-40s", "Testing rdmsrlist...");
+    instr[0] = 0xf2; instr[1] = 0x0f; instr[2] = 0x01; instr[3] = 0xc6;
+    regs.rip = (unsigned long)&instr[0];
+    regs.rsi = (unsigned long)(res + 0x80);
+    regs.rdi = (unsigned long)(res + 0x80 + 0x40 * 2);
+    regs.rcx = 0x0002000100008000UL;
+    gs_base_shadow = 0x0000222244446666UL;
+    memset(res + 0x80, ~0, 0x40 * 8 * 2);
+    res[0x80 + 0x0f * 2] = 0xc0000101; /* GS_BASE */
+    res[0x80 + 0x0f * 2 + 1] = 0;
+    res[0x80 + 0x20 * 2] = 0xc0000102; /* SHADOW_GS_BASE */
+    res[0x80 + 0x20 * 2 + 1] = 0;
+    res[0x80 + 0x31 * 2] = 0x2f; /* BARRIER */
+    res[0x80 + 0x31 * 2 + 1] = 0;
+    rc = x86_emulate(&ctxt, &emulops);
+    if ( (rc != X86EMUL_OKAY) ||
+         (regs.rip != (unsigned long)&instr[4]) ||
+         regs.rcx ||
+         (res[0x80 + (0x40 + 0x0f) * 2] != (unsigned int)gs_base) ||
+         (res[0x80 + (0x40 + 0x0f) * 2 + 1] != (gs_base >> (8 * sizeof(int)))) ||
+         (res[0x80 + (0x40 + 0x20) * 2] != (unsigned int)gs_base_shadow) ||
+         (res[0x80 + (0x40 + 0x20) * 2 + 1] != (gs_base_shadow >> (8 * sizeof(int)))) ||
+         res[0x80 + (0x40 + 0x31) * 2] || res[0x80 + (0x40 + 0x31) * 2 + 1] )
+        goto fail;
+    printf("okay\n");
+
+    printf("%-40s", "Testing wrmsrlist...");
+    instr[0] = 0xf3; instr[1] = 0x0f; instr[2] = 0x01; instr[3] = 0xc6;
+    regs.eip = (unsigned long)&instr[0];
+    regs.rsi -= 0x11 * 8;
+    regs.rdi -= 0x11 * 8;
+    regs.rcx = 0x0002000100000000UL;
+    res[0x80 + 0x0f * 2] = 0xc0000102; /* SHADOW_GS_BASE */
+    res[0x80 + 0x20 * 2] = 0xc0000101; /* GS_BASE */
+    rc = x86_emulate(&ctxt, &emulops);
+    if ( (rc != X86EMUL_OKAY) ||
+         (regs.rip != (unsigned long)&instr[4]) ||
+         regs.rcx ||
+         (gs_base != 0x0000222244446666UL) ||
+         (gs_base_shadow != 0x0000111122224444UL) )
+        goto fail;
+
     emulops.write_msr     = NULL;
 #endif
     printf("okay\n");
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -88,6 +88,7 @@ bool emul_test_init(void)
     cp.feat.rdpid = true;
     cp.feat.lkgs = true;
     cp.feat.wrmsrns = true;
+    cp.feat.msrlist = true;
     cp.extd.clzero = true;
 
     if ( cpu_has_xsave )
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -835,6 +835,17 @@ static void cf_check vmx_cpuid_policy_ch
     else
         vmx_set_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
 
+    if ( cp->feat.msrlist )
+    {
+        vmx_clear_msr_intercept(v, MSR_BARRIER, VMX_MSR_RW);
+        //todo enable MSRLIST tertiary execution control
+    }
+    else
+    {
+        vmx_set_msr_intercept(v, MSR_BARRIER, VMX_MSR_RW);
+        //todo disable MSRLIST tertiary execution control
+    }
+
  out:
     vmx_vmcs_exit(v);
 
@@ -3705,6 +3716,22 @@ gp_fault:
     return X86EMUL_EXCEPTION;
 }
 
+static bool cf_check is_msrlist(
+    const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt)
+{
+
+    if ( ctxt->opcode == X86EMUL_OPC(0x0f, 0x01) )
+    {
+        unsigned int rm, reg;
+        int mode = x86_insn_modrm(state, &rm, &reg);
+
+        /* This also includes WRMSRNS; should be okay. */
+        return mode == 3 && rm == 6 && !reg;
+    }
+
+    return false;
+}
+
 static void vmx_do_extint(struct cpu_user_regs *regs)
 {
     unsigned long vector;
@@ -4513,6 +4540,17 @@ void vmx_vmexit_handler(struct cpu_user_
         }
         break;
 
+    case EXIT_REASON_RDMSRLIST:
+    case EXIT_REASON_WRMSRLIST:
+        if ( vmx_guest_x86_mode(v) != 8 || !currd->arch.cpuid->feat.msrlist )
+        {
+            ASSERT_UNREACHABLE();
+            hvm_inject_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC);
+        }
+        else if ( !hvm_emulate_one_insn(is_msrlist, "MSR list") )
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        break;
+
     case EXIT_REASON_VMXOFF:
     case EXIT_REASON_VMXON:
     case EXIT_REASON_VMCLEAR:
--- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
@@ -211,6 +211,8 @@ static inline void pi_clear_sn(struct pi
 #define EXIT_REASON_XRSTORS             64
 #define EXIT_REASON_BUS_LOCK            74
 #define EXIT_REASON_NOTIFY              75
+#define EXIT_REASON_RDMSRLIST           78
+#define EXIT_REASON_WRMSRLIST           79
 /* Remember to also update VMX_PERF_EXIT_REASON_SIZE! */
 
 /*
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -24,6 +24,8 @@
 #define  APIC_BASE_ENABLE                   (_AC(1, ULL) << 11)
 #define  APIC_BASE_ADDR_MASK                0x000ffffffffff000ULL
 
+#define MSR_BARRIER                         0x0000002f
+
 #define MSR_TEST_CTRL                       0x00000033
 #define  TEST_CTRL_SPLITLOCK_DETECT         (_AC(1, ULL) << 29)
 #define  TEST_CTRL_SPLITLOCK_DISABLE        (_AC(1, ULL) << 31)
--- a/xen/arch/x86/include/asm/perfc_defn.h
+++ b/xen/arch/x86/include/asm/perfc_defn.h
@@ -6,7 +6,7 @@ PERFCOUNTER_ARRAY(exceptions,
 
 #ifdef CONFIG_HVM
 
-#define VMX_PERF_EXIT_REASON_SIZE 76
+#define VMX_PERF_EXIT_REASON_SIZE 80
 #define VMEXIT_NPF_PERFC 143
 #define SVM_PERF_EXIT_REASON_SIZE (VMEXIT_NPF_PERFC + 1)
 PERFCOUNTER_ARRAY(vmexits,              "vmexits",
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -223,6 +223,12 @@ int guest_rdmsr(struct vcpu *v, uint32_t
     case MSR_AMD_PPIN:
         goto gp_fault;
 
+    case MSR_BARRIER:
+        if ( !cp->feat.msrlist )
+            goto gp_fault;
+        *val = 0;
+        break;
+
     case MSR_IA32_FEATURE_CONTROL:
         /*
          * Architecturally, availability of this MSR is enumerated by the
@@ -493,6 +499,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t
         uint64_t rsvd;
 
         /* Read-only */
+    case MSR_BARRIER:
     case MSR_IA32_PLATFORM_ID:
     case MSR_CORE_CAPABILITIES:
     case MSR_INTEL_CORE_THREAD_COUNT:
--- a/xen/arch/x86/x86_emulate/0f01.c
+++ b/xen/arch/x86/x86_emulate/0f01.c
@@ -40,6 +40,7 @@ int x86emul_0f01(struct x86_emulate_stat
     switch ( s->modrm )
     {
         unsigned long base, limit, cr0, cr0w, cr4;
+        unsigned int n;
         struct segment_register sreg;
         uint64_t msr_val;
 
@@ -54,6 +55,56 @@ int x86emul_0f01(struct x86_emulate_stat
                                 ((uint64_t)regs->r(dx) << 32) | regs->eax,
                                 ctxt);
             goto done;
+
+        case vex_f3: /* wrmsrlist */
+            vcpu_must_have(msrlist);
+            generate_exception_if(!mode_64bit(), X86_EXC_UD);
+            generate_exception_if(!mode_ring0() || (regs->r(si) & 7) ||
+                                  (regs->r(di) & 7),
+                                  X86_EXC_GP, 0);
+            fail_if(!ops->write_msr);
+            while ( regs->r(cx) )
+            {
+                n = __builtin_ffsl(regs->r(cx)) - 1;
+                if ( (rc = ops->read(x86_seg_none, regs->r(si) + n * 8,
+                                     &msr_val, 8, ctxt)) != X86EMUL_OKAY )
+                    break;
+                generate_exception_if(msr_val != (uint32_t)msr_val,
+                                      X86_EXC_GP, 0);
+                base = msr_val;
+                if ( (rc = ops->read(x86_seg_none, regs->r(di) + n * 8,
+                                     &msr_val, 8, ctxt)) != X86EMUL_OKAY ||
+                     (rc = ops->write_msr(base, msr_val, ctxt)) != X86EMUL_OKAY )
+                    break;
+                regs->r(cx) &= ~(1UL << n);
+            }
+            goto done;
+
+        case vex_f2: /* rdmsrlist */
+            vcpu_must_have(msrlist);
+            generate_exception_if(!mode_64bit(), X86_EXC_UD);
+            generate_exception_if(!mode_ring0() || (regs->r(si) & 7) ||
+                                  (regs->r(di) & 7),
+                                  X86_EXC_GP, 0);
+            fail_if(!ops->read_msr || !ops->write);
+            while ( regs->r(cx) )
+            {
+                n = __builtin_ffsl(regs->r(cx)) - 1;
+                if ( (rc = ops->read(x86_seg_none, regs->r(si) + n * 8,
+                                     &msr_val, 8, ctxt)) != X86EMUL_OKAY )
+                    break;
+                generate_exception_if(msr_val != (uint32_t)msr_val,
+                                      X86_EXC_GP, 0);
+                if ( (rc = ops->read_msr(msr_val, &msr_val,
+                                         ctxt)) != X86EMUL_OKAY ||
+                     (rc = ops->write(x86_seg_none, regs->r(di) + n * 8,
+                                      &msr_val, 8, ctxt)) != X86EMUL_OKAY )
+                    break;
+                regs->r(cx) &= ~(1UL << n);
+            }
+            if ( rc != X86EMUL_OKAY )
+                ctxt->regs->r(cx) = regs->r(cx);
+            goto done;
         }
         generate_exception(X86_EXC_UD);
 
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -600,6 +600,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_lkgs()        (ctxt->cpuid->feat.lkgs)
 #define vcpu_has_wrmsrns()     (ctxt->cpuid->feat.wrmsrns)
 #define vcpu_has_avx_ifma()    (ctxt->cpuid->feat.avx_ifma)
+#define vcpu_has_msrlist()     (ctxt->cpuid->feat.msrlist)
 #define vcpu_has_avx_vnni_int8() (ctxt->cpuid->feat.avx_vnni_int8)
 #define vcpu_has_avx_ne_convert() (ctxt->cpuid->feat.avx_ne_convert)
 
--- a/xen/arch/x86/x86_emulate/util.c
+++ b/xen/arch/x86/x86_emulate/util.c
@@ -112,6 +112,9 @@ bool cf_check x86_insn_is_mem_access(con
         break;
 
     case X86EMUL_OPC(0x0f, 0x01):
+        /* {RD,WR}MSRLIST */
+        if ( mode_64bit() && s->modrm == 0xc6 )
+            return s->vex.pfx >= vex_f3;
         /* Cover CLZERO. */
         return (s->modrm_rm & 7) == 4 && (s->modrm_reg & 7) == 7;
     }
@@ -172,7 +175,11 @@ bool cf_check x86_insn_is_mem_write(cons
         case 0xff: /* Grp5 */
             break;
 
-        case X86EMUL_OPC(0x0f, 0x01): /* CLZERO is the odd one. */
+        case X86EMUL_OPC(0x0f, 0x01):
+            /* RDMSRLIST */
+            if ( mode_64bit() && s->modrm == 0xc6 )
+                return s->vex.pfx == vex_f2;
+            /* CLZERO is another odd one. */
             return (s->modrm_rm & 7) == 4 && (s->modrm_reg & 7) == 7;
 
         default:
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -286,6 +286,7 @@ XEN_CPUFEATURE(FRED,         10*32+17) /
 XEN_CPUFEATURE(LKGS,         10*32+18) /*S  Load Kernel GS Base */
 XEN_CPUFEATURE(WRMSRNS,      10*32+19) /*A  WRMSR Non-Serialising */
 XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
+XEN_CPUFEATURE(MSRLIST,      10*32+27) /*   MSR list instructions */
 
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
 XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 14:57:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 14:57:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517944.803969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pji6a-0004X7-2Q; Tue, 04 Apr 2023 14:57:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517944.803969; Tue, 04 Apr 2023 14:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pji6Z-0004X0-Vi; Tue, 04 Apr 2023 14:57:51 +0000
Received: by outflank-mailman (input) for mailman id 517944;
 Tue, 04 Apr 2023 14:57:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pji6Y-0004Wr-Jx
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 14:57:50 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0630.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 104f1fcb-d2f9-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 16:57:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7387.eurprd04.prod.outlook.com (2603:10a6:102:91::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 14:57:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 14:57:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 104f1fcb-d2f9-11ed-b464-930f4c7d94ae
Message-ID: <31d27d25-f576-d605-d3d1-b56b543df568@suse.com>
Date: Tue, 4 Apr 2023 16:57:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH 9/9] x86emul+VMX: support {RD,WR}MSRLIST
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <b567e068-dcab-b294-9706-ffbecb36de3c@suse.com>
In-Reply-To: <b567e068-dcab-b294-9706-ffbecb36de3c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: suse.com

On 04.04.2023 16:55, Jan Beulich wrote:
> These are "compound" instructions which issue a series of RDMSR / WRMSR
> operations respectively. In the emulator we can therefore implement them
> using the existing msr_{read,write}() hooks. The memory accesses rely on
> the HVM ->read() / ->write() hooks already being linear-address
> (x86_seg_none) aware (by way of hvmemul_virtual_to_linear() handling
> this case).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TODO: Use VMX tertiary execution control (once bit is known; see
>       //todo-s) and then further adjust cpufeatureset.h.

Argh, should have Cc-ed Kevin and Jun, even if there wasn't this issue.

Jan

> RFC: In vmx_vmexit_handler() handling is forwarded to the emulator
>      blindly. Alternatively we could consult the exit qualification and
>      process just a single MSR at a time (without involving the
>      emulator), exiting back to the guest after every iteration. (I
>      don't think a mix of both models makes a lot of sense.)
> 
> RFC: For PV, priv_op_ops would need to gain proper read/write hooks,
>      which doesn't look desirable (albeit there we could refuse to
>      handle anything other than x86_seg_none); we may want to consider
>      instead not supporting the feature for PV guests, requiring e.g.
>      Linux to process the lists in new pvops hooks.
> 
> RFC: I wasn't sure whether to add preemption checks to the loops -
>      thoughts?
> 
> With the VMX side of the spec still unclear (tertiary execution control
> bit unspecified in ISE 046) we can't enable the insn yet for (HVM) guest
> use. The precise behavior of MSR_BARRIER is also not spelled out, so the
> (minimal) implementation is a guess for now.
> 
> --- a/tools/libs/light/libxl_cpuid.c
> +++ b/tools/libs/light/libxl_cpuid.c
> @@ -240,6 +240,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
>          {"lkgs",         0x00000007,  1, CPUID_REG_EAX, 18,  1},
>          {"wrmsrns",      0x00000007,  1, CPUID_REG_EAX, 19,  1},
>          {"avx-ifma",     0x00000007,  1, CPUID_REG_EAX, 23,  1},
> +        {"msrlist",      0x00000007,  1, CPUID_REG_EAX, 27,  1},
>  
>          {"avx-vnni-int8",0x00000007,  1, CPUID_REG_EDX,  4,  1},
>          {"avx-ne-convert",0x00000007, 1, CPUID_REG_EDX,  5,  1},
> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -195,6 +195,8 @@ static const char *const str_7a1[32] =
>      [18] = "lkgs",          [19] = "wrmsrns",
>  
>      /* 22 */                [23] = "avx-ifma",
> +
> +    /* 26 */                [27] = "msrlist",
>  };
>  
>  static const char *const str_e21a[32] =
> --- a/tools/tests/x86_emulator/predicates.c
> +++ b/tools/tests/x86_emulator/predicates.c
> @@ -342,6 +342,8 @@ static const struct {
>      { { 0x01, 0xc4 }, { 2, 2 }, F, N }, /* vmxoff */
>      { { 0x01, 0xc5 }, { 2, 2 }, F, N }, /* pconfig */
>      { { 0x01, 0xc6 }, { 2, 2 }, F, N }, /* wrmsrns */
> +    { { 0x01, 0xc6 }, { 0, 2 }, F, W, pfx_f2 }, /* rdmsrlist */
> +    { { 0x01, 0xc6 }, { 0, 2 }, F, R, pfx_f3 }, /* wrmsrlist */
>      { { 0x01, 0xc8 }, { 2, 2 }, F, N }, /* monitor */
>      { { 0x01, 0xc9 }, { 2, 2 }, F, N }, /* mwait */
>      { { 0x01, 0xca }, { 2, 2 }, F, N }, /* clac */
> --- a/tools/tests/x86_emulator/test_x86_emulator.c
> +++ b/tools/tests/x86_emulator/test_x86_emulator.c
> @@ -589,6 +589,7 @@ static int read(
>      default:
>          if ( !is_x86_user_segment(seg) )
>              return X86EMUL_UNHANDLEABLE;
> +    case x86_seg_none:
>          bytes_read += bytes;
>          break;
>      }
> @@ -619,7 +620,7 @@ static int write(
>      if ( verbose )
>          printf("** %s(%u, %p,, %u,)\n", __func__, seg, (void *)offset, bytes);
>  
> -    if ( !is_x86_user_segment(seg) )
> +    if ( !is_x86_user_segment(seg) && seg != x86_seg_none )
>          return X86EMUL_UNHANDLEABLE;
>      memcpy((void *)offset, p_data, bytes);
>      return X86EMUL_OKAY;
> @@ -711,6 +712,10 @@ static int read_msr(
>  {
>      switch ( reg )
>      {
> +    case 0x0000002f: /* BARRIER */
> +        *val = 0;
> +        return X86EMUL_OKAY;
> +
>      case 0xc0000080: /* EFER */
>          *val = ctxt->addr_size > 32 ? 0x500 /* LME|LMA */ : 0;
>          return X86EMUL_OKAY;
> @@ -1499,9 +1504,53 @@ int main(int argc, char **argv)
>           (gs_base != 0x0000111122224444UL) ||
>           gs_base_shadow )
>          goto fail;
> +    printf("okay\n");
>  
>      cp.extd.nscb = i;
>      emulops.write_segment = NULL;
> +
> +    printf("%-40s", "Testing rdmsrlist...");
> +    instr[0] = 0xf2; instr[1] = 0x0f; instr[2] = 0x01; instr[3] = 0xc6;
> +    regs.rip = (unsigned long)&instr[0];
> +    regs.rsi = (unsigned long)(res + 0x80);
> +    regs.rdi = (unsigned long)(res + 0x80 + 0x40 * 2);
> +    regs.rcx = 0x0002000100008000UL;
> +    gs_base_shadow = 0x0000222244446666UL;
> +    memset(res + 0x80, ~0, 0x40 * 8 * 2);
> +    res[0x80 + 0x0f * 2] = 0xc0000101; /* GS_BASE */
> +    res[0x80 + 0x0f * 2 + 1] = 0;
> +    res[0x80 + 0x20 * 2] = 0xc0000102; /* SHADOW_GS_BASE */
> +    res[0x80 + 0x20 * 2 + 1] = 0;
> +    res[0x80 + 0x31 * 2] = 0x2f; /* BARRIER */
> +    res[0x80 + 0x31 * 2 + 1] = 0;
> +    rc = x86_emulate(&ctxt, &emulops);
> +    if ( (rc != X86EMUL_OKAY) ||
> +         (regs.rip != (unsigned long)&instr[4]) ||
> +         regs.rcx ||
> +         (res[0x80 + (0x40 + 0x0f) * 2] != (unsigned int)gs_base) ||
> +         (res[0x80 + (0x40 + 0x0f) * 2 + 1] != (gs_base >> (8 * sizeof(int)))) ||
> +         (res[0x80 + (0x40 + 0x20) * 2] != (unsigned int)gs_base_shadow) ||
> +         (res[0x80 + (0x40 + 0x20) * 2 + 1] != (gs_base_shadow >> (8 * sizeof(int)))) ||
> +         res[0x80 + (0x40 + 0x31) * 2] || res[0x80 + (0x40 + 0x31) * 2 + 1] )
> +        goto fail;
> +    printf("okay\n");
> +
> +    printf("%-40s", "Testing wrmsrlist...");
> +    instr[0] = 0xf3; instr[1] = 0x0f; instr[2] = 0x01; instr[3] = 0xc6;
> +    regs.eip = (unsigned long)&instr[0];
> +    regs.rsi -= 0x11 * 8;
> +    regs.rdi -= 0x11 * 8;
> +    regs.rcx = 0x0002000100000000UL;
> +    res[0x80 + 0x0f * 2] = 0xc0000102; /* SHADOW_GS_BASE */
> +    res[0x80 + 0x20 * 2] = 0xc0000101; /* GS_BASE */
> +    rc = x86_emulate(&ctxt, &emulops);
> +    if ( (rc != X86EMUL_OKAY) ||
> +         (regs.rip != (unsigned long)&instr[4]) ||
> +         regs.rcx ||
> +         (gs_base != 0x0000222244446666UL) ||
> +         (gs_base_shadow != 0x0000111122224444UL) )
> +        goto fail;
> +
>      emulops.write_msr     = NULL;
>  #endif
>      printf("okay\n");
> --- a/tools/tests/x86_emulator/x86-emulate.c
> +++ b/tools/tests/x86_emulator/x86-emulate.c
> @@ -88,6 +88,7 @@ bool emul_test_init(void)
>      cp.feat.rdpid = true;
>      cp.feat.lkgs = true;
>      cp.feat.wrmsrns = true;
> +    cp.feat.msrlist = true;
>      cp.extd.clzero = true;
>  
>      if ( cpu_has_xsave )
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -835,6 +835,17 @@ static void cf_check vmx_cpuid_policy_ch
>      else
>          vmx_set_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
>  
> +    if ( cp->feat.msrlist )
> +    {
> +        vmx_clear_msr_intercept(v, MSR_BARRIER, VMX_MSR_RW);
> +        //todo enable MSRLIST tertiary execution control
> +    }
> +    else
> +    {
> +        vmx_set_msr_intercept(v, MSR_BARRIER, VMX_MSR_RW);
> +        //todo disable MSRLIST tertiary execution control
> +    }
> +
>   out:
>      vmx_vmcs_exit(v);
>  
> @@ -3705,6 +3716,22 @@ gp_fault:
>      return X86EMUL_EXCEPTION;
>  }
>  
> +static bool cf_check is_msrlist(
> +    const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt)
> +{
> +
> +    if ( ctxt->opcode == X86EMUL_OPC(0x0f, 0x01) )
> +    {
> +        unsigned int rm, reg;
> +        int mode = x86_insn_modrm(state, &rm, &reg);
> +
> +        /* This also includes WRMSRNS; should be okay. */
> +        return mode == 3 && rm == 6 && !reg;
> +    }
> +
> +    return false;
> +}
> +
>  static void vmx_do_extint(struct cpu_user_regs *regs)
>  {
>      unsigned long vector;
> @@ -4513,6 +4540,17 @@ void vmx_vmexit_handler(struct cpu_user_
>          }
>          break;
>  
> +    case EXIT_REASON_RDMSRLIST:
> +    case EXIT_REASON_WRMSRLIST:
> +        if ( vmx_guest_x86_mode(v) != 8 || !currd->arch.cpuid->feat.msrlist )
> +        {
> +            ASSERT_UNREACHABLE();
> +            hvm_inject_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC);
> +        }
> +        else if ( !hvm_emulate_one_insn(is_msrlist, "MSR list") )
> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +        break;
> +
>      case EXIT_REASON_VMXOFF:
>      case EXIT_REASON_VMXON:
>      case EXIT_REASON_VMCLEAR:
> --- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> @@ -211,6 +211,8 @@ static inline void pi_clear_sn(struct pi
>  #define EXIT_REASON_XRSTORS             64
>  #define EXIT_REASON_BUS_LOCK            74
>  #define EXIT_REASON_NOTIFY              75
> +#define EXIT_REASON_RDMSRLIST           78
> +#define EXIT_REASON_WRMSRLIST           79
>  /* Remember to also update VMX_PERF_EXIT_REASON_SIZE! */
>  
>  /*
> --- a/xen/arch/x86/include/asm/msr-index.h
> +++ b/xen/arch/x86/include/asm/msr-index.h
> @@ -24,6 +24,8 @@
>  #define  APIC_BASE_ENABLE                   (_AC(1, ULL) << 11)
>  #define  APIC_BASE_ADDR_MASK                0x000ffffffffff000ULL
>  
> +#define MSR_BARRIER                         0x0000002f
> +
>  #define MSR_TEST_CTRL                       0x00000033
>  #define  TEST_CTRL_SPLITLOCK_DETECT         (_AC(1, ULL) << 29)
>  #define  TEST_CTRL_SPLITLOCK_DISABLE        (_AC(1, ULL) << 31)
> --- a/xen/arch/x86/include/asm/perfc_defn.h
> +++ b/xen/arch/x86/include/asm/perfc_defn.h
> @@ -6,7 +6,7 @@ PERFCOUNTER_ARRAY(exceptions,
>  
>  #ifdef CONFIG_HVM
>  
> -#define VMX_PERF_EXIT_REASON_SIZE 76
> +#define VMX_PERF_EXIT_REASON_SIZE 80
>  #define VMEXIT_NPF_PERFC 143
>  #define SVM_PERF_EXIT_REASON_SIZE (VMEXIT_NPF_PERFC + 1)
>  PERFCOUNTER_ARRAY(vmexits,              "vmexits",
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -223,6 +223,12 @@ int guest_rdmsr(struct vcpu *v, uint32_t
>      case MSR_AMD_PPIN:
>          goto gp_fault;
>  
> +    case MSR_BARRIER:
> +        if ( !cp->feat.msrlist )
> +            goto gp_fault;
> +        *val = 0;
> +        break;
> +
>      case MSR_IA32_FEATURE_CONTROL:
>          /*
>           * Architecturally, availability of this MSR is enumerated by the
> @@ -493,6 +499,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t
>          uint64_t rsvd;
>  
>          /* Read-only */
> +    case MSR_BARRIER:
>      case MSR_IA32_PLATFORM_ID:
>      case MSR_CORE_CAPABILITIES:
>      case MSR_INTEL_CORE_THREAD_COUNT:
> --- a/xen/arch/x86/x86_emulate/0f01.c
> +++ b/xen/arch/x86/x86_emulate/0f01.c
> @@ -40,6 +40,7 @@ int x86emul_0f01(struct x86_emulate_stat
>      switch ( s->modrm )
>      {
>          unsigned long base, limit, cr0, cr0w, cr4;
> +        unsigned int n;
>          struct segment_register sreg;
>          uint64_t msr_val;
>  
> @@ -54,6 +55,56 @@ int x86emul_0f01(struct x86_emulate_stat
>                                  ((uint64_t)regs->r(dx) << 32) | regs->eax,
>                                  ctxt);
>              goto done;
> +
> +        case vex_f3: /* wrmsrlist */
> +            vcpu_must_have(msrlist);
> +            generate_exception_if(!mode_64bit(), X86_EXC_UD);
> +            generate_exception_if(!mode_ring0() || (regs->r(si) & 7) ||
> +                                  (regs->r(di) & 7),
> +                                  X86_EXC_GP, 0);
> +            fail_if(!ops->write_msr);
> +            while ( regs->r(cx) )
> +            {
> +                n = __builtin_ffsl(regs->r(cx)) - 1;
> +                if ( (rc = ops->read(x86_seg_none, regs->r(si) + n * 8,
> +                                     &msr_val, 8, ctxt)) != X86EMUL_OKAY )
> +                    break;
> +                generate_exception_if(msr_val != (uint32_t)msr_val,
> +                                      X86_EXC_GP, 0);
> +                base = msr_val;
> +                if ( (rc = ops->read(x86_seg_none, regs->r(di) + n * 8,
> +                                     &msr_val, 8, ctxt)) != X86EMUL_OKAY ||
> +                     (rc = ops->write_msr(base, msr_val, ctxt)) != X86EMUL_OKAY )
> +                    break;
> +                regs->r(cx) &= ~(1UL << n);
> +            }
> +            goto done;
> +
> +        case vex_f2: /* rdmsrlist */
> +            vcpu_must_have(msrlist);
> +            generate_exception_if(!mode_64bit(), X86_EXC_UD);
> +            generate_exception_if(!mode_ring0() || (regs->r(si) & 7) ||
> +                                  (regs->r(di) & 7),
> +                                  X86_EXC_GP, 0);
> +            fail_if(!ops->read_msr || !ops->write);
> +            while ( regs->r(cx) )
> +            {
> +                n = __builtin_ffsl(regs->r(cx)) - 1;
> +                if ( (rc = ops->read(x86_seg_none, regs->r(si) + n * 8,
> +                                     &msr_val, 8, ctxt)) != X86EMUL_OKAY )
> +                    break;
> +                generate_exception_if(msr_val != (uint32_t)msr_val,
> +                                      X86_EXC_GP, 0);
> +                if ( (rc = ops->read_msr(msr_val, &msr_val,
> +                                         ctxt)) != X86EMUL_OKAY ||
> +                     (rc = ops->write(x86_seg_none, regs->r(di) + n * 8,
> +                                      &msr_val, 8, ctxt)) != X86EMUL_OKAY )
> +                    break;
> +                regs->r(cx) &= ~(1UL << n);
> +            }
> +            if ( rc != X86EMUL_OKAY )
> +                ctxt->regs->r(cx) = regs->r(cx);
> +            goto done;
>          }
>          generate_exception(X86_EXC_UD);
>  
> --- a/xen/arch/x86/x86_emulate/private.h
> +++ b/xen/arch/x86/x86_emulate/private.h
> @@ -600,6 +600,7 @@ amd_like(const struct x86_emulate_ctxt *
>  #define vcpu_has_lkgs()        (ctxt->cpuid->feat.lkgs)
>  #define vcpu_has_wrmsrns()     (ctxt->cpuid->feat.wrmsrns)
>  #define vcpu_has_avx_ifma()    (ctxt->cpuid->feat.avx_ifma)
> +#define vcpu_has_msrlist()     (ctxt->cpuid->feat.msrlist)
>  #define vcpu_has_avx_vnni_int8() (ctxt->cpuid->feat.avx_vnni_int8)
>  #define vcpu_has_avx_ne_convert() (ctxt->cpuid->feat.avx_ne_convert)
>  
> --- a/xen/arch/x86/x86_emulate/util.c
> +++ b/xen/arch/x86/x86_emulate/util.c
> @@ -112,6 +112,9 @@ bool cf_check x86_insn_is_mem_access(con
>          break;
>  
>      case X86EMUL_OPC(0x0f, 0x01):
> +        /* {RD,WR}MSRLIST */
> +        if ( mode_64bit() && s->modrm == 0xc6 )
> +            return s->vex.pfx >= vex_f3;
>          /* Cover CLZERO. */
>          return (s->modrm_rm & 7) == 4 && (s->modrm_reg & 7) == 7;
>      }
> @@ -172,7 +175,11 @@ bool cf_check x86_insn_is_mem_write(cons
>          case 0xff: /* Grp5 */
>              break;
>  
> -        case X86EMUL_OPC(0x0f, 0x01): /* CLZERO is the odd one. */
> +        case X86EMUL_OPC(0x0f, 0x01):
> +            /* RDMSRLIST */
> +            if ( mode_64bit() && s->modrm == 0xc6 )
> +                return s->vex.pfx == vex_f2;
> +            /* CLZERO is another odd one. */
>              return (s->modrm_rm & 7) == 4 && (s->modrm_reg & 7) == 7;
>  
>          default:
> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -286,6 +286,7 @@ XEN_CPUFEATURE(FRED,         10*32+17) /
>  XEN_CPUFEATURE(LKGS,         10*32+18) /*S  Load Kernel GS Base */
>  XEN_CPUFEATURE(WRMSRNS,      10*32+19) /*A  WRMSR Non-Serialising */
>  XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
> +XEN_CPUFEATURE(MSRLIST,      10*32+27) /*   MSR list instructions */
>  
>  /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
>  XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
> 
> 



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:01:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 15:01:42 +0000
Message-ID: <63395f4e-2272-5537-190e-27318d4057ea@suse.com>
Date: Tue, 4 Apr 2023 17:01:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 09/15] x86: Out-of-inline the policy<->featureset
 convertors
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-10-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230404095222.1373721-10-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.04.2023 11:52, Andrew Cooper wrote:
> These are already getting over-large for being inline functions, and are only
> going to grow more over time.  Out of line them, yielding the following net
> delta from bloat-o-meter:
> 
>   add/remove: 2/0 grow/shrink: 0/4 up/down: 276/-1877 (-1601)
> 
> Switch to the newer cpu_policy terminology while doing so.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

I take it you have a reason to ...

> --- a/xen/lib/x86/cpuid.c
> +++ b/xen/lib/x86/cpuid.c
> @@ -60,6 +60,48 @@ const char *x86_cpuid_vendor_to_str(unsigned int vendor)
>      }
>  }
>  
> +void x86_cpu_policy_to_featureset(
> +    const struct cpu_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
> +{
> +    fs[FEATURESET_1d]        = p->basic._1d;
> +    fs[FEATURESET_1c]        = p->basic._1c;
> +    fs[FEATURESET_e1d]       = p->extd.e1d;
> +    fs[FEATURESET_e1c]       = p->extd.e1c;
> +    fs[FEATURESET_Da1]       = p->xstate.Da1;
> +    fs[FEATURESET_7b0]       = p->feat._7b0;
> +    fs[FEATURESET_7c0]       = p->feat._7c0;
> +    fs[FEATURESET_e7d]       = p->extd.e7d;
> +    fs[FEATURESET_e8b]       = p->extd.e8b;
> +    fs[FEATURESET_7d0]       = p->feat._7d0;
> +    fs[FEATURESET_7a1]       = p->feat._7a1;
> +    fs[FEATURESET_e21a]      = p->extd.e21a;
> +    fs[FEATURESET_7b1]       = p->feat._7b1;
> +    fs[FEATURESET_7d2]       = p->feat._7d2;
> +    fs[FEATURESET_7c1]       = p->feat._7c1;
> +    fs[FEATURESET_7d1]       = p->feat._7d1;
> +}
> +
> +void x86_cpu_featureset_to_policy(
> +    const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpu_policy *p)
> +{
> +    p->basic._1d             = fs[FEATURESET_1d];
> +    p->basic._1c             = fs[FEATURESET_1c];
> +    p->extd.e1d              = fs[FEATURESET_e1d];
> +    p->extd.e1c              = fs[FEATURESET_e1c];
> +    p->xstate.Da1            = fs[FEATURESET_Da1];
> +    p->feat._7b0             = fs[FEATURESET_7b0];
> +    p->feat._7c0             = fs[FEATURESET_7c0];
> +    p->extd.e7d              = fs[FEATURESET_e7d];
> +    p->extd.e8b              = fs[FEATURESET_e8b];
> +    p->feat._7d0             = fs[FEATURESET_7d0];
> +    p->feat._7a1             = fs[FEATURESET_7a1];
> +    p->extd.e21a             = fs[FEATURESET_e21a];
> +    p->feat._7b1             = fs[FEATURESET_7b1];
> +    p->feat._7d2             = fs[FEATURESET_7d2];
> +    p->feat._7c1             = fs[FEATURESET_7c1];
> +    p->feat._7d1             = fs[FEATURESET_7d1];
> +}

... add quite a few padding blanks in here, unlike in the originals?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:04:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 15:04:18 +0000
Message-ID: <60e92022-e383-6382-ff50-d6c67e2be227@suse.com>
Date: Tue, 4 Apr 2023 17:04:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 10/15] x86/boot: Move MSR policy initialisation logic
 into cpu-policy.c
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-11-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230404095222.1373721-11-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.04.2023 11:52, Andrew Cooper wrote:
> Switch to the newer cpu_policy nomenclature.
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:04:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 15:04:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05f58427-d2fa-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680620682;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=igp8vhRXqLzi6dgIlBWd0iZCPWb4KnQmsCPvlHrRiqk=;
  b=eioSmmy8ZGpEcGI/xJrHVL2RHJ51Zs+ko5+0S0Xl7wC9HgLxbmjsUJ+p
   BXcDdQUl2fU5pjwkWl0lrzo1wzAUH5XA2KSFWyHajUYm2SS5mUCs8m66L
   8CiS+C/JncpoXjwS3Bb1ySGJR7ixWw4xVR0GvO9bkefb7AP3YE0W5XtoY
   Q=;
X-IronPort-RemoteIP: 104.47.57.46
X-IronPort-MID: 103079324
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:xDo0HKKNRu6hUnKtFE+R9pQlxSXFcZb7ZxGr2PjKsXjdYENSgTVWy
 TQZXWmCM67fM2qmLd90bN7l9BwDsZ7Vm4c2GgRlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPTwP9TlK6q4mhA4gRiPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5zGWZB2
 tw9MwoPTTCIt6G0kJCLYeVF05FLwMnDZOvzu1lG5BSBV7MdZ8mGRK/Ho9hFwD03m8ZCW+7EY
 NYUYiZuaxKGZABTPlAQC9Q1m+LAanvXKmUE7g7K4/dopTSNpOBy+OGF3N79YNuFSN8Thk+Fj
 mnH4374ElcRM9n3JT+tqyr33rCexnmnMG4UPIaBptI0knbU/X0WMw8/eEGWoP6Tj0HrDrqzL
 GRRoELCt5Ma5EGtC9XwQRC8iHqFpQIHHcpdFfUg7wOAwbaS5ByWbkAmZDNcbN0ttOctWCcnk
 FSOmrvBGjhHoLCTD3WH+d+pQSiaPCEUKSoZY3YCRA5dud37+tlv0FTIU8ppF7OzgpvtAzbsz
 juWrS84wbIOkcoM0Kb99lfC696xmqX0oscOzl2/dgqYAslRP+ZJu6TABYDn0Mt9
IronPort-HdrOrdr: A9a23:EWHwRqoO3mHraeV0iCd3j2AaV5oJeYIsimQD101hICG9E/bo8v
 xG+c5wuCMc5wx8ZJhNo7+90dC7MBThHP1OkOss1NWZPDUO0VHARL2Ki7GN/9SKIVycygcy78
 Zdmp9FebnN5AhB5voSODPIaerIGuP3iJxAWN2uqUuFkTsaEJ2IMT0JdzpyfSVNNXB7OaY=
X-IronPort-AV: E=Sophos;i="5.98,318,1673931600"; 
   d="scan'208";a="103079324"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Zep+hn5yw3NA1jJHq1tf6kXrYY45O8INcCRz3PgUYawe2hIl5bbZfvzjeEgM0JNNJIrg8awLHkpdrN5MDrlr0rGqDtL1yedDV7ecWCCu8sctv+WMUc7wAjO1+pJetJOnVX4zvuqvMVZm14cKTU9NzXEkJvD67l1WSTtR8jQT/3l5t3CZ/P98CSuUYli8AcpiXZxOPjf/GZElTekluiUBMqooAQmVB6lLjRpbMKrU3qvd0Vf0PRn7e4i0h1R7Y9BAnlLhBYhMDaIgAFeO7zQVjjtKwRfaav2+1O+ayIKETvwSR1N6msfuoEdTuyd7xhIXgatujSy2wG2WbImoH2SjBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SAJceG8SYRB7dmczQIz9NoKmgKqs2EQZOu9yMBGtoAw=;
 b=lCH1Z81IlhvjG1CmJLos7NLbWThErhel9E0JUo9j7JXZw7iR1oRZv2bw8dAY4Egc7xCeec9KbG1bJCPIjrpoZnCfiadBA/wADOgYJXAS74U7jjiRKreLjccJRxLBEmompdQ3Ux/WXtdYef8Cj3kb//DYchPMr3Wg4lYr2CrWc7B0UDqN38E0p2Dm8qRjQON2FUW7tHfTTRIe/nA6RAw42dZ0rLU1kYwnQwY3VSoY0v4kxAcVBapm6P2CH3r/kzWQDv4i6AoP8lZkaEQfFHMvXmflgLwCL32/Ht3TRkTS8C4QsuyRq7/84E/Ij64LaZnPDjhQrVBu+nNd3OElk01gGg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SAJceG8SYRB7dmczQIz9NoKmgKqs2EQZOu9yMBGtoAw=;
 b=SKzQywZPOCsvd/BQswUJk7wabPiUfz762Nqii8wYcsOXkW7qn8WWVZdZhKRUaDAE4Gz234ayWrDFuTFh6e05r1aAsiTJ3pZCz/J2Y/1zt3QyyA/0KGqps/70ZqDqW6+1Nt24HhIX4hCe1TT5G7PfClCZ42bx9H/8X7DiEzU7ijU=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 4 Apr 2023 17:04:25 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/pci: Correct ECS handling with CF8/CFC emulation
Message-ID: <ZCw8eQSIN0FpXAhX@Air-de-Roger>
References: <20230331175719.500285-1-andrew.cooper3@citrix.com>
 <ZCqVEHe1Qo3skeVf@Air-de-Roger>
 <4b76def9-9940-ccf0-8050-12ddf2c1253c@citrix.com>
 <ZCrUErZZkd6co1Dq@Air-de-Roger>
 <91fc0c1f-a985-17bd-2011-f4964d82e008@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <91fc0c1f-a985-17bd-2011-f4964d82e008@citrix.com>

On Tue, Apr 04, 2023 at 02:27:36PM +0100, Andrew Cooper wrote:
> On 03/04/2023 2:26 pm, Roger Pau Monné wrote:
> > On Mon, Apr 03, 2023 at 11:16:52AM +0100, Andrew Cooper wrote:
> >> On 03/04/2023 9:57 am, Roger Pau Monné wrote:
> >> (Quick tangent...  Our PCI handling is currently very dumb. 
> >> pci_mmcfg_read() returns its value by pointer but the callers never
> >> check.  Swapping it to return by value would improve code gen quite a
> >> lot.  Also, when MMCFG is active we still pass BCS accesses to IO ports.)
> > I wonder whether it's really preferable to access registers below 256
> > using the IO ports, as Linux seems to do the same (preferring IO port
> > access when possible).
> 
> And see how many attempts there have been to change this, only blocked
> on untangling the IO port mess on other architectures (a problem Xen
> doesn't have to contend with).
> 
> MMCFG, when available, is strictly preferable to IO ports.
> 
> An MMCFG access is a single UC read or write, whereas IO ports are a
> pair of UC accesses *and* a global spinlock.

Right, I know it's better from a performance PoV, but I didn't know
whether there were any known glitches from not using MMCFG when
accessing registers below 256.

> >>>>  
> >>>>  #define IS_SNB_GFX(id) (id == 0x01068086 || id == 0x01168086 \
> >>>>                          || id == 0x01268086 || id == 0x01028086 \
> >>>> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
> >>>> index 5da00e24e4ff..008367195c78 100644
> >>>> --- a/xen/arch/x86/pv/emul-priv-op.c
> >>>> +++ b/xen/arch/x86/pv/emul-priv-op.c
> >>>> @@ -245,19 +245,7 @@ static bool pci_cfg_ok(struct domain *currd, unsigned int start,
> >>>>          if ( ro_map && test_bit(machine_bdf, ro_map) )
> >>>>              return false;
> >>>>      }
> >>>> -    start |= CF8_ADDR_LO(currd->arch.pci_cf8);
> >>>> -    /* AMD extended configuration space access? */
> >>>> -    if ( CF8_ADDR_HI(currd->arch.pci_cf8) &&
> >>>> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
> >>>> -         boot_cpu_data.x86 >= 0x10 && boot_cpu_data.x86 < 0x17 )
> >>>> -    {
> >>>> -        uint64_t msr_val;
> >>>> -
> >>>> -        if ( rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) )
> >>>> -            return false;
> >>>> -        if ( msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT) )
> >>>> -            start |= CF8_ADDR_HI(currd->arch.pci_cf8);
> >>>> -    }
> >>>> +    start |= CF8_REG(currd->arch.pci_cf8);
> >>>>  
> >>>>      return !write ?
> >>>>             xsm_pci_config_permission(XSM_HOOK, currd, machine_bdf,
> >>>> @@ -1104,6 +1092,11 @@ static int cf_check write_msr(
> >>>>          if ( !is_hwdom_pinned_vcpu(curr) )
> >>>>              return X86EMUL_OKAY;
> >>>>          if ( (rdmsr_safe(MSR_AMD64_NB_CFG, temp) != 0) ||
> >>>> +             /*
> >>>> +              * TODO: this is broken.  What happens when dom0 is pinned but
> >>>> +              * can't see the full system?  CF8_EXT probably ought to be a
> >>>> +              * Xen-owned setting, and made symmetric across the system.
> >>>> +              */
> >>> I would assume CF8_EXT would be symmetric across the system, especially
> >>> given that it seems to be phased out and only used in older AMD
> >>> families that were all symmetric?
> >> The CF8_EXT bit has been phased out.  The IO ECS functionality still exists.
> >>
> >> But yes, the more I think about letting dom0 play with this, the more I
> >> think it is a fundamentally broken idea...  I bet it was from the very
> >> early AMD Fam10h days where dom0 knew how to turn it on, and Xen was
> >> trying to pretend it didn't have to touch any PCI devices.
> > It seems to me Xen should set CF8_EXT on all threads (when available)
> > and expose it to dom0, so that accesses using pci_conf_{read,write}()
> > work as expected?
> 
> It's per northbridge in the system, not per thread.  Hence needing the
> spinlock protecting the CF8/CFC IO port pair access, and why MMCFG is
> strictly preferable.

So just setting CF8_EXT_ENABLE in MSR_AMD64_NB_CFG from the BSP should
be enough to have it enabled?  I expect all other threads would then
see the bit as set in the MSR.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:17:03 2023
Message-ID: <087536dd-96cf-84f0-4b8f-d4de4d6bd093@suse.com>
Date: Tue, 4 Apr 2023 17:16:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 11/15] x86/boot: Merge CPUID policy initialisation
 logic into cpu-policy.c
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-12-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230404095222.1373721-12-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 04.04.2023 11:52, Andrew Cooper wrote:
> Switch to the newer cpu_policy nomenclature.  Do some easy cleanup of
> includes.
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> v2:
>  * New
> ---
>  xen/arch/x86/cpu-policy.c             | 752 ++++++++++++++++++++++++
>  xen/arch/x86/cpuid.c                  | 817 +-------------------------
>  xen/arch/x86/hvm/hvm.c                |   1 -
>  xen/arch/x86/include/asm/cpu-policy.h |   6 +
>  xen/arch/x86/include/asm/cpuid.h      |  11 +-
>  xen/arch/x86/pv/domain.c              |   1 +
>  xen/arch/x86/setup.c                  |   2 -
>  7 files changed, 764 insertions(+), 826 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
> index f6a2317ed7bd..83186e940ca7 100644
> --- a/xen/arch/x86/cpu-policy.c
> +++ b/xen/arch/x86/cpu-policy.c
> @@ -1,13 +1,19 @@
>  /* SPDX-License-Identifier: GPL-2.0-or-later */
>  #include <xen/cache.h>
>  #include <xen/kernel.h>
> +#include <xen/param.h>
>  #include <xen/sched.h>
>  
>  #include <xen/lib/x86/cpu-policy.h>
>  
> +#include <asm/amd.h>
>  #include <asm/cpu-policy.h>
> +#include <asm/hvm/nestedhvm.h>
> +#include <asm/hvm/svm/svm.h>
>  #include <asm/msr-index.h>
> +#include <asm/paging.h>
>  #include <asm/setup.h>
> +#include <asm/xstate.h>
>  
>  struct cpu_policy __ro_after_init     raw_cpu_policy;
>  struct cpu_policy __ro_after_init    host_cpu_policy;
> @@ -20,10 +26,332 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
>  struct cpu_policy __ro_after_init hvm_def_cpu_policy;
>  #endif
>  
> +const uint32_t known_features[] = INIT_KNOWN_FEATURES;
> +
> +static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
> +static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
> +static const uint32_t __initconst hvm_hap_max_featuremask[] =
> +    INIT_HVM_HAP_MAX_FEATURES;
> +static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
> +static const uint32_t __initconst hvm_shadow_def_featuremask[] =
> +    INIT_HVM_SHADOW_DEF_FEATURES;
> +static const uint32_t __initconst hvm_hap_def_featuremask[] =
> +    INIT_HVM_HAP_DEF_FEATURES;
> +static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
> +
> +static const struct feature_name {
> +    const char *name;
> +    unsigned int bit;
> +} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
> +
> +/*
> + * Parse a list of cpuid feature names -> bool, calling the callback for any
> + * matches found.
> + *
> + * always_inline, because this is init code only and we really don't want a
> + * function pointer call in the middle of the loop.
> + */
> +static int __init always_inline parse_cpuid(
> +    const char *s, void (*callback)(unsigned int feat, bool val))
> +{
> +    const char *ss;
> +    int val, rc = 0;
> +
> +    do {
> +        const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
> +        const char *feat;
> +
> +        ss = strchr(s, ',');
> +        if ( !ss )
> +            ss = strchr(s, '\0');
> +
> +        /* Skip the 'no-' prefix for name comparisons. */
> +        feat = s;
> +        if ( strncmp(s, "no-", 3) == 0 )
> +            feat += 3;
> +
> +        /* (Re)initialise lhs and rhs for binary search. */
> +        lhs = feature_names;
> +        rhs = feature_names + ARRAY_SIZE(feature_names);
> +
> +        while ( lhs < rhs )
> +        {
> +            int res;
> +
> +            mid = lhs + (rhs - lhs) / 2;
> +            res = cmdline_strcmp(feat, mid->name);
> +
> +            if ( res < 0 )
> +            {
> +                rhs = mid;
> +                continue;
> +            }
> +            if ( res > 0 )
> +            {
> +                lhs = mid + 1;
> +                continue;
> +            }
> +
> +            if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
> +            {
> +                callback(mid->bit, val);
> +                mid = NULL;
> +            }
> +
> +            break;
> +        }
> +
> +        /*
> +         * Mid being NULL means that the name and boolean were successfully
> +         * identified.  Everything else is an error.
> +         */
> +        if ( mid )
> +            rc = -EINVAL;
> +
> +        s = ss + 1;
> +    } while ( *ss );
> +
> +    return rc;
> +}
> +
> +static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
> +{
> +    if ( !val )
> +        setup_clear_cpu_cap(feat);
> +    else if ( feat == X86_FEATURE_RDRAND &&
> +              (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
> +        setup_force_cpu_cap(X86_FEATURE_RDRAND);
> +}
> +
> +static int __init cf_check parse_xen_cpuid(const char *s)
> +{
> +    return parse_cpuid(s, _parse_xen_cpuid);
> +}
> +custom_param("cpuid", parse_xen_cpuid);
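As an aside for anyone reading along: the name resolution above can be illustrated standalone. The sketch below mirrors the "no-" stripping and the binary search over a sorted name table; feature_names[], the bit numbers, and the omission of the "name=<bool>" forms are toy stand-ins, not Xen's generated tables:

```c
#include <stdbool.h>
#include <string.h>

/* Toy stand-in for Xen's generated INIT_FEATURE_NAMES table (must be sorted). */
static const struct feature_name {
    const char *name;
    unsigned int bit;
} feature_names[] = {
    { "avx",    1 },
    { "rdrand", 2 },
    { "sse",    3 },
};

/*
 * Resolve one "name" / "no-name" element to (bit, val), mirroring the
 * binary search in parse_cpuid().  The "name=<bool>" forms handled by
 * parse_boolean() are elided.  Returns 0 on success, -1 if unrecognised.
 */
static int lookup(const char *s, unsigned int *bit, bool *val)
{
    const struct feature_name *lhs = feature_names;
    const struct feature_name *rhs =
        feature_names + sizeof(feature_names) / sizeof(feature_names[0]);
    const char *feat = s;
    bool negate = strncmp(s, "no-", 3) == 0;

    if ( negate )
        feat += 3;

    while ( lhs < rhs )
    {
        const struct feature_name *mid = lhs + (rhs - lhs) / 2;
        int res = strcmp(feat, mid->name);

        if ( res < 0 )
            rhs = mid;
        else if ( res > 0 )
            lhs = mid + 1;
        else
        {
            *bit = mid->bit;
            *val = !negate;
            return 0;
        }
    }

    return -1;
}
```

(cmdline_strcmp() additionally treats '-' and '_' as interchangeable, which plain strcmp() here obviously doesn't.)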
> +
> +static bool __initdata dom0_cpuid_cmdline;
> +static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
> +static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
> +
> +static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
> +{
> +    __set_bit  (feat, val ? dom0_enable_feat  : dom0_disable_feat);
> +    __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
> +}
> +
> +static int __init cf_check parse_dom0_cpuid(const char *s)
> +{
> +    dom0_cpuid_cmdline = true;
> +
> +    return parse_cpuid(s, _parse_dom0_cpuid);
> +}
> +custom_param("dom0-cpuid", parse_dom0_cpuid);

Unless the plan is to remove cpuid.c entirely, this command line
handling would imo fit better there. I understand that keeping
dom0_{en,dis}able_feat[] static would then require exposing the
_parse_dom0_cpuid() helper (under a different name), but I think
that's quite okay, especially as it's an __init function.

> +#define EMPTY_LEAF ((struct cpuid_leaf){})
> +static void zero_leaves(struct cpuid_leaf *l,
> +                        unsigned int first, unsigned int last)
> +{
> +    memset(&l[first], 0, sizeof(*l) * (last - first + 1));
> +}
> +
> +static void sanitise_featureset(uint32_t *fs)
> +{
> +    /* for_each_set_bit() uses unsigned longs.  Extend with zeroes. */
> +    uint32_t disabled_features[
> +        ROUNDUP(FSCAPINTS, sizeof(unsigned long)/sizeof(uint32_t))] = {};
> +    unsigned int i;
> +
> +    for ( i = 0; i < FSCAPINTS; ++i )
> +    {
> +        /* Clamp to known mask. */
> +        fs[i] &= known_features[i];
> +
> +        /*
> +         * Identify which features with deep dependencies have been
> +         * disabled.
> +         */
> +        disabled_features[i] = ~fs[i] & deep_features[i];
> +    }
> +
> +    for_each_set_bit(i, (void *)disabled_features,
> +                     sizeof(disabled_features) * 8)
> +    {
> +        const uint32_t *dfs = x86_cpuid_lookup_deep_deps(i);
> +        unsigned int j;
> +
> +        ASSERT(dfs); /* deep_features[] should guarantee this. */
> +
> +        for ( j = 0; j < FSCAPINTS; ++j )
> +        {
> +            fs[j] &= ~dfs[j];
> +            disabled_features[j] &= ~dfs[j];
> +        }
> +    }
> +}
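To make the dependency pass concrete, here is a one-word sketch of what sanitise_featureset() does, with made-up bit assignments and a hand-flattened deep_deps[] table (Xen's real tables span FSCAPINTS words and are generated, already flattened, at build time):

```c
#include <stdint.h>

/* Toy one-word featureset: bit 0 = XSAVE, 1 = AVX, 2 = AVX2. */
#define F_XSAVE (1u << 0)
#define F_AVX   (1u << 1)
#define F_AVX2  (1u << 2)

/* deep_deps[i] = everything depending (transitively) on feature i. */
static const uint32_t deep_deps[] = {
    [0] = F_AVX | F_AVX2,    /* XSAVE */
    [1] = F_AVX2,            /* AVX */
    [2] = 0,                 /* AVX2 */
};

/* Features which something else depends on. */
static const uint32_t deep_features = F_XSAVE | F_AVX;

/* One-word version of the dependency pass in sanitise_featureset(). */
static uint32_t sanitise(uint32_t fs)
{
    uint32_t disabled = ~fs & deep_features;
    unsigned int i;

    for ( i = 0; i < 3; ++i )
        if ( disabled & (1u << i) )
        {
            /* Clear all dependents of the disabled feature. */
            fs &= ~deep_deps[i];
            disabled &= ~deep_deps[i];
        }

    return fs;
}
```

i.e. clearing XSAVE takes AVX and AVX2 with it, while clearing only AVX leaves XSAVE intact but still drops AVX2.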
> +
> +static void recalculate_xstate(struct cpu_policy *p)
> +{
> +    uint64_t xstates = XSTATE_FP_SSE;
> +    uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
> +    unsigned int i, Da1 = p->xstate.Da1;
> +
> +    /*
> +     * The Da1 leaf is the only piece of information preserved in the common
> +     * case.  Everything else is derived from other feature state.
> +     */
> +    memset(&p->xstate, 0, sizeof(p->xstate));
> +
> +    if ( !p->basic.xsave )
> +        return;
> +
> +    if ( p->basic.avx )
> +    {
> +        xstates |= X86_XCR0_YMM;
> +        xstate_size = max(xstate_size,
> +                          xstate_offsets[X86_XCR0_YMM_POS] +
> +                          xstate_sizes[X86_XCR0_YMM_POS]);
> +    }
> +
> +    if ( p->feat.mpx )
> +    {
> +        xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
> +        xstate_size = max(xstate_size,
> +                          xstate_offsets[X86_XCR0_BNDCSR_POS] +
> +                          xstate_sizes[X86_XCR0_BNDCSR_POS]);
> +    }
> +
> +    if ( p->feat.avx512f )
> +    {
> +        xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
> +        xstate_size = max(xstate_size,
> +                          xstate_offsets[X86_XCR0_HI_ZMM_POS] +
> +                          xstate_sizes[X86_XCR0_HI_ZMM_POS]);
> +    }
> +
> +    if ( p->feat.pku )
> +    {
> +        xstates |= X86_XCR0_PKRU;
> +        xstate_size = max(xstate_size,
> +                          xstate_offsets[X86_XCR0_PKRU_POS] +
> +                          xstate_sizes[X86_XCR0_PKRU_POS]);
> +    }
> +
> +    p->xstate.max_size  =  xstate_size;
> +    p->xstate.xcr0_low  =  xstates & ~XSTATE_XSAVES_ONLY;
> +    p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
> +
> +    p->xstate.Da1 = Da1;
> +    if ( p->xstate.xsaves )
> +    {
> +        p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
> +        p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
> +    }
> +    else
> +        xstates &= ~XSTATE_XSAVES_ONLY;
> +
> +    for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
> +    {
> +        uint64_t curr_xstate = 1ul << i;
> +
> +        if ( !(xstates & curr_xstate) )
> +            continue;
> +
> +        p->xstate.comp[i].size   = xstate_sizes[i];
> +        p->xstate.comp[i].offset = xstate_offsets[i];
> +        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
> +        p->xstate.comp[i].align  = curr_xstate & xstate_align;
> +    }
> +}
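The xstate_size accumulation above is just "highest end of any enabled component, floored at the legacy area size". A standalone sketch, with illustrative offset/size numbers (the YMM figures match typical Intel parts; component 3 is made up — the hypervisor reads the real values from CPUID leaf 0xd):

```c
#include <stdint.h>

/* Illustrative per-component offsets/sizes for XSAVE components 0-3. */
static const uint32_t offsets[] = { 0, 160, 576, 960 };
static const uint32_t sizes[]   = { 160, 416, 256, 512 };

/* 512-byte legacy region + 64-byte XSAVE header. */
#define XSTATE_AREA_MIN_SIZE 576u

/* The max(offset + size) accumulation from recalculate_xstate(). */
static uint32_t xsave_size(uint64_t xstates)
{
    uint32_t size = XSTATE_AREA_MIN_SIZE;
    unsigned int i;

    /* Bits 0/1 (x87/SSE) live in the legacy area, so start at bit 2. */
    for ( i = 2; i < 4; ++i )
        if ( xstates & (1ull << i) )
        {
            uint32_t end = offsets[i] + sizes[i];

            if ( end > size )
                size = end;
        }

    return size;
}
```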
> +
> +/*
> + * Misc adjustments to the policy.  Mostly clobbering reserved fields and
> + * duplicating shared fields.  Intentionally hidden fields are annotated.
> + */
> +static void recalculate_misc(struct cpu_policy *p)
> +{
> +    p->basic.raw_fms &= 0x0fff0fff; /* Clobber Processor Type on Intel. */
> +    p->basic.apic_id = 0; /* Dynamic. */
> +
> +    p->basic.raw[0x5] = EMPTY_LEAF; /* MONITOR not exposed to guests. */
> +    p->basic.raw[0x6] = EMPTY_LEAF; /* Therm/Power not exposed to guests. */
> +
> +    p->basic.raw[0x8] = EMPTY_LEAF;
> +
> +    /* TODO: Rework topology logic. */
> +    memset(p->topo.raw, 0, sizeof(p->topo.raw));
> +
> +    p->basic.raw[0xc] = EMPTY_LEAF;
> +
> +    p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
> +
> +    /* Most of Power/RAS hidden from guests. */
> +    p->extd.raw[0x7].a = p->extd.raw[0x7].b = p->extd.raw[0x7].c = 0;
> +
> +    p->extd.raw[0x8].d = 0;
> +
> +    switch ( p->x86_vendor )
> +    {
> +    case X86_VENDOR_INTEL:
> +        p->basic.l2_nr_queries = 1; /* Fixed to 1 query. */
> +        p->basic.raw[0x3] = EMPTY_LEAF; /* PSN - always hidden. */
> +        p->basic.raw[0x9] = EMPTY_LEAF; /* DCA - always hidden. */
> +
> +        p->extd.vendor_ebx = 0;
> +        p->extd.vendor_ecx = 0;
> +        p->extd.vendor_edx = 0;
> +
> +        p->extd.raw[0x1].a = p->extd.raw[0x1].b = 0;
> +
> +        p->extd.raw[0x5] = EMPTY_LEAF;
> +        p->extd.raw[0x6].a = p->extd.raw[0x6].b = p->extd.raw[0x6].d = 0;
> +
> +        p->extd.raw[0x8].a &= 0x0000ffff;
> +        p->extd.raw[0x8].c = 0;
> +        break;
> +
> +    case X86_VENDOR_AMD:
> +    case X86_VENDOR_HYGON:
> +        zero_leaves(p->basic.raw, 0x2, 0x3);
> +        memset(p->cache.raw, 0, sizeof(p->cache.raw));
> +        zero_leaves(p->basic.raw, 0x9, 0xa);
> +
> +        p->extd.vendor_ebx = p->basic.vendor_ebx;
> +        p->extd.vendor_ecx = p->basic.vendor_ecx;
> +        p->extd.vendor_edx = p->basic.vendor_edx;
> +
> +        p->extd.raw_fms = p->basic.raw_fms;
> +        p->extd.raw[0x1].b &= 0xff00ffff;
> +        p->extd.e1d |= p->basic._1d & CPUID_COMMON_1D_FEATURES;
> +
> +        p->extd.raw[0x8].a &= 0x0000ffff; /* GuestMaxPhysAddr hidden. */
> +        p->extd.raw[0x8].c &= 0x0003f0ff;
> +
> +        p->extd.raw[0x9] = EMPTY_LEAF;
> +
> +        zero_leaves(p->extd.raw, 0xb, 0x18);
> +
> +        /* 0x19 - TLB details.  Pass through. */
> +        /* 0x1a - Perf hints.   Pass through. */
> +
> +        p->extd.raw[0x1b] = EMPTY_LEAF; /* IBS - not supported. */
> +        p->extd.raw[0x1c] = EMPTY_LEAF; /* LWP - not supported. */
> +        p->extd.raw[0x1d] = EMPTY_LEAF; /* TopoExt Cache */
> +        p->extd.raw[0x1e] = EMPTY_LEAF; /* TopoExt APIC ID/Core/Node */
> +        p->extd.raw[0x1f] = EMPTY_LEAF; /* SEV */
> +        p->extd.raw[0x20] = EMPTY_LEAF; /* Platform QoS */
> +        break;
> +    }
> +}
> +
>  static void __init calculate_raw_policy(void)
>  {
>      struct cpu_policy *p = &raw_cpu_policy;
>  
> +    x86_cpuid_policy_fill_native(p);
> +
> +    /* Nothing good will come from Xen and libx86 disagreeing on vendor. */
> +    ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
> +
>      /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
>      /* Was already added by probe_cpuid_faulting() */
>  
> @@ -34,9 +362,50 @@ static void __init calculate_raw_policy(void)
>  static void __init calculate_host_policy(void)
>  {
>      struct cpu_policy *p = &host_cpu_policy;
> +    unsigned int max_extd_leaf;
>  
>      *p = raw_cpu_policy;
>  
> +    p->basic.max_leaf =
> +        min_t(uint32_t, p->basic.max_leaf,   ARRAY_SIZE(p->basic.raw) - 1);
> +    p->feat.max_subleaf =
> +        min_t(uint32_t, p->feat.max_subleaf, ARRAY_SIZE(p->feat.raw) - 1);
> +
> +    max_extd_leaf = p->extd.max_leaf;
> +
> +    /*
> +     * For AMD/Hygon hardware before Zen3, we unilaterally modify LFENCE to be
> +     * dispatch serialising for Spectre mitigations.  Extend max_extd_leaf
> +     * beyond what hardware supports, to include the feature leaf containing
> +     * this information.
> +     */
> +    if ( cpu_has_lfence_dispatch )
> +        max_extd_leaf = max(max_extd_leaf, 0x80000021);
> +
> +    p->extd.max_leaf = 0x80000000 | min_t(uint32_t, max_extd_leaf & 0xffff,
> +                                          ARRAY_SIZE(p->extd.raw) - 1);
> +
> +    x86_cpu_featureset_to_policy(boot_cpu_data.x86_capability, p);
> +    recalculate_xstate(p);
> +    recalculate_misc(p);
> +
> +    /* When vPMU is disabled, drop it from the host policy. */
> +    if ( vpmu_mode == XENPMU_MODE_OFF )
> +        p->basic.raw[0xa] = EMPTY_LEAF;
> +
> +    if ( p->extd.svm )
> +    {
> +        /* Clamp to implemented features which require hardware support. */
> +        p->extd.raw[0xa].d &= ((1u << SVM_FEATURE_NPT) |
> +                               (1u << SVM_FEATURE_LBRV) |
> +                               (1u << SVM_FEATURE_NRIPS) |
> +                               (1u << SVM_FEATURE_PAUSEFILTER) |
> +                               (1u << SVM_FEATURE_DECODEASSISTS));
> +        /* Enable features which are always emulated. */
> +        p->extd.raw[0xa].d |= ((1u << SVM_FEATURE_VMCBCLEAN) |
> +                               (1u << SVM_FEATURE_TSCRATEMSR));
> +    }
> +
>      /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
>      /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
>      p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
> @@ -51,11 +420,88 @@ static void __init calculate_host_policy(void)
>           ARCH_CAPS_PBRSB_NO);
>  }
>  
> +static void __init guest_common_default_feature_adjustments(uint32_t *fs)
> +{
> +    /*
> +     * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
> +     * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
> +     * compensate.
> +     *
> +     * Mitigate by hiding RDRAND from guests by default, unless explicitly
> +     * overridden on the Xen command line (cpuid=rdrand).  Irrespective of the
> +     * default setting, guests can use RDRAND if explicitly enabled
> +     * (cpuid="host,rdrand=1") in the VM's config file, and VMs which were
> +     * previously using RDRAND can migrate in.
> +     */
> +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
> +         boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x3a &&
> +         cpu_has_rdrand && !is_forced_cpu_cap(X86_FEATURE_RDRAND) )
> +        __clear_bit(X86_FEATURE_RDRAND, fs);
> +
> +    /*
> +     * On certain hardware, speculative or errata workarounds can result in
> +     * TSX being placed in "force-abort" mode, where it doesn't actually
> +     * function as expected, but is technically compatible with the ISA.
> +     *
> +     * Do not advertise RTM to guests by default if it won't actually work.
> +     */
> +    if ( rtm_disabled )
> +        __clear_bit(X86_FEATURE_RTM, fs);
> +}
> +
> +static void __init guest_common_feature_adjustments(uint32_t *fs)
> +{
> +    /* Unconditionally claim to be able to set the hypervisor bit. */
> +    __set_bit(X86_FEATURE_HYPERVISOR, fs);
> +
> +    /*
> +     * If IBRS is offered to the guest, unconditionally offer STIBP.  It is a
> +     * nop on non-HT hardware, and has this behaviour to make heterogeneous
> +     * setups easier to manage.
> +     */
> +    if ( test_bit(X86_FEATURE_IBRSB, fs) )
> +        __set_bit(X86_FEATURE_STIBP, fs);
> +    if ( test_bit(X86_FEATURE_IBRS, fs) )
> +        __set_bit(X86_FEATURE_AMD_STIBP, fs);
> +
> +    /*
> +     * On hardware which supports IBRS/IBPB, we can offer IBPB independently
> +     * of IBRS by using the AMD feature bit.  An administrator may wish for
> +     * performance reasons to offer IBPB without IBRS.
> +     */
> +    if ( host_cpu_policy.feat.ibrsb )
> +        __set_bit(X86_FEATURE_IBPB, fs);
> +}
> +
>  static void __init calculate_pv_max_policy(void)
>  {
>      struct cpu_policy *p = &pv_max_cpu_policy;
> +    uint32_t fs[FSCAPINTS];
> +    unsigned int i;
>  
>      *p = host_cpu_policy;
> +    x86_cpu_policy_to_featureset(p, fs);
> +
> +    for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> +        fs[i] &= pv_max_featuremask[i];
> +
> +    /*
> +     * If Xen isn't virtualising MSR_SPEC_CTRL for PV guests (functional
> +     * availability, or admin choice), hide the feature.
> +     */
> +    if ( !boot_cpu_has(X86_FEATURE_SC_MSR_PV) )
> +    {
> +        __clear_bit(X86_FEATURE_IBRSB, fs);
> +        __clear_bit(X86_FEATURE_IBRS, fs);
> +    }
> +
> +    guest_common_feature_adjustments(fs);
> +
> +    sanitise_featureset(fs);
> +    x86_cpu_featureset_to_policy(fs, p);
> +    recalculate_xstate(p);
> +
> +    p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
>  
>      p->arch_caps.raw = 0; /* Not supported yet. */
>  }
> @@ -63,15 +509,112 @@ static void __init calculate_pv_max_policy(void)
>  static void __init calculate_pv_def_policy(void)
>  {
>      struct cpu_policy *p = &pv_def_cpu_policy;
> +    uint32_t fs[FSCAPINTS];
> +    unsigned int i;
>  
>      *p = pv_max_cpu_policy;
> +    x86_cpu_policy_to_featureset(p, fs);
> +
> +    for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> +        fs[i] &= pv_def_featuremask[i];
> +
> +    guest_common_feature_adjustments(fs);
> +    guest_common_default_feature_adjustments(fs);
> +
> +    sanitise_featureset(fs);
> +    x86_cpu_featureset_to_policy(fs, p);
> +    recalculate_xstate(p);
>  }
>  
>  static void __init calculate_hvm_max_policy(void)
>  {
>      struct cpu_policy *p = &hvm_max_cpu_policy;
> +    uint32_t fs[FSCAPINTS];
> +    unsigned int i;
> +    const uint32_t *mask;
>  
>      *p = host_cpu_policy;
> +    x86_cpu_policy_to_featureset(p, fs);
> +
> +    mask = hvm_hap_supported() ?
> +        hvm_hap_max_featuremask : hvm_shadow_max_featuremask;
> +
> +    for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> +        fs[i] &= mask[i];
> +
> +    /*
> +     * Xen can provide an (x2)APIC emulation to HVM guests even if the host's
> +     * (x2)APIC isn't enabled.
> +     */
> +    __set_bit(X86_FEATURE_APIC, fs);
> +    __set_bit(X86_FEATURE_X2APIC, fs);
> +
> +    /*
> +     * We don't support EFER.LMSLE at all.  AMD has dropped the feature from
> +     * hardware and allocated a CPUID bit to indicate its absence.
> +     */
> +    __set_bit(X86_FEATURE_NO_LMSL, fs);
> +
> +    /*
> +     * On AMD, PV guests are entirely unable to use SYSENTER as Xen runs in
> +     * long mode (and init_amd() has cleared it out of host capabilities), but
> +     * HVM guests are able if running in protected mode.
> +     */
> +    if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
> +         raw_cpu_policy.basic.sep )
> +        __set_bit(X86_FEATURE_SEP, fs);
> +
> +    /*
> +     * VIRT_SSBD is exposed in the default policy as a result of
> +     * amd_virt_spec_ctrl being set; it also needs exposing in the max policy.
> +     */
> +    if ( amd_virt_spec_ctrl )
> +        __set_bit(X86_FEATURE_VIRT_SSBD, fs);
> +
> +    /*
> +     * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
> +     * availability, or admin choice), hide the feature.
> +     */
> +    if ( !boot_cpu_has(X86_FEATURE_SC_MSR_HVM) )
> +    {
> +        __clear_bit(X86_FEATURE_IBRSB, fs);
> +        __clear_bit(X86_FEATURE_IBRS, fs);
> +    }
> +    else if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
> +        /*
> +         * If SPEC_CTRL.SSBD is available VIRT_SPEC_CTRL.SSBD can be exposed
> +         * and implemented using the former. Expose in the max policy only as
> +         * the preference is for guests to use SPEC_CTRL.SSBD if available.
> +         */
> +        __set_bit(X86_FEATURE_VIRT_SSBD, fs);
> +
> +    /*
> +     * With VT-x, some features are only supported by Xen if dedicated
> +     * hardware support is also available.
> +     */
> +    if ( cpu_has_vmx )
> +    {
> +        if ( !cpu_has_vmx_mpx )
> +            __clear_bit(X86_FEATURE_MPX, fs);
> +
> +        if ( !cpu_has_vmx_xsaves )
> +            __clear_bit(X86_FEATURE_XSAVES, fs);
> +    }
> +
> +    /*
> +     * Xen doesn't use PKS, so the guest support for it has opted to not use
> +     * the VMCS load/save controls for efficiency reasons.  This depends on
> +     * the exact vmentry/exit behaviour, so don't expose PKS in other
> +     * situations until someone has cross-checked the behaviour for safety.
> +     */
> +    if ( !cpu_has_vmx )
> +        __clear_bit(X86_FEATURE_PKS, fs);
> +
> +    guest_common_feature_adjustments(fs);
> +
> +    sanitise_featureset(fs);
> +    x86_cpu_featureset_to_policy(fs, p);
> +    recalculate_xstate(p);
>  
>      /* It's always possible to emulate CPUID faulting for HVM guests */
>      p->platform_info.cpuid_faulting = true;
> @@ -82,8 +625,32 @@ static void __init calculate_hvm_max_policy(void)
>  static void __init calculate_hvm_def_policy(void)
>  {
>      struct cpu_policy *p = &hvm_def_cpu_policy;
> +    uint32_t fs[FSCAPINTS];
> +    unsigned int i;
> +    const uint32_t *mask;
>  
>      *p = hvm_max_cpu_policy;
> +    x86_cpu_policy_to_featureset(p, fs);
> +
> +    mask = hvm_hap_supported() ?
> +        hvm_hap_def_featuremask : hvm_shadow_def_featuremask;
> +
> +    for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> +        fs[i] &= mask[i];
> +
> +    guest_common_feature_adjustments(fs);
> +    guest_common_default_feature_adjustments(fs);
> +
> +    /*
> +     * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
> +     * amd_virt_spec_ctrl is set.
> +     */
> +    if ( amd_virt_spec_ctrl )
> +        __set_bit(X86_FEATURE_VIRT_SSBD, fs);
> +
> +    sanitise_featureset(fs);
> +    x86_cpu_featureset_to_policy(fs, p);
> +    recalculate_xstate(p);
>  }
>  
>  void __init init_guest_cpu_policies(void)
> @@ -149,3 +716,188 @@ int init_domain_cpu_policy(struct domain *d)
>  
>      return 0;
>  }
> +
> +void recalculate_cpuid_policy(struct domain *d)
> +{
> +    struct cpu_policy *p = d->arch.cpuid;
> +    const struct cpu_policy *max = is_pv_domain(d)
> +        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
> +        : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);

While this is what the original code did, wouldn't this want to use
hvm_enabled, just like init_guest_cpu_policies() does (patch 10)?

> +    uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
> +    unsigned int i;
> +
> +    if ( !max )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return;
> +    }
> +
> +    p->x86_vendor = x86_cpuid_lookup_vendor(
> +        p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
> +
> +    p->basic.max_leaf   = min(p->basic.max_leaf,   max->basic.max_leaf);
> +    p->feat.max_subleaf = min(p->feat.max_subleaf, max->feat.max_subleaf);
> +    p->extd.max_leaf    = 0x80000000 | min(p->extd.max_leaf & 0xffff,
> +                                           ((p->x86_vendor & (X86_VENDOR_AMD |
> +                                                              X86_VENDOR_HYGON))
> +                                            ? CPUID_GUEST_NR_EXTD_AMD
> +                                            : CPUID_GUEST_NR_EXTD_INTEL) - 1);
> +
> +    x86_cpu_policy_to_featureset(p, fs);
> +    x86_cpu_policy_to_featureset(max, max_fs);
> +
> +    if ( is_hvm_domain(d) )
> +    {
> +        /*
> +         * HVM domains using Shadow paging have further restrictions on their
> +         * available paging features.
> +         */
> +        if ( !hap_enabled(d) )
> +        {
> +            for ( i = 0; i < ARRAY_SIZE(max_fs); i++ )
> +                max_fs[i] &= hvm_shadow_max_featuremask[i];
> +        }
> +
> +        /* Hide nested-virt if it hasn't been explicitly configured. */
> +        if ( !nestedhvm_enabled(d) )
> +        {
> +            __clear_bit(X86_FEATURE_VMX, max_fs);
> +            __clear_bit(X86_FEATURE_SVM, max_fs);
> +        }
> +    }
> +
> +    /*
> +     * Allow the toolstack to set HTT, X2APIC and CMP_LEGACY.  These bits
> +     * affect how to interpret topology information in other cpuid leaves.
> +     */
> +    __set_bit(X86_FEATURE_HTT, max_fs);
> +    __set_bit(X86_FEATURE_X2APIC, max_fs);
> +    __set_bit(X86_FEATURE_CMP_LEGACY, max_fs);
> +
> +    /*
> +     * 32bit PV domains can't use any Long Mode features, and cannot use
> +     * SYSCALL on non-AMD hardware.
> +     */
> +    if ( is_pv_32bit_domain(d) )
> +    {
> +        __clear_bit(X86_FEATURE_LM, max_fs);
> +        if ( !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> +            __clear_bit(X86_FEATURE_SYSCALL, max_fs);
> +    }
> +
> +    /* Clamp the toolstack's choices to reality. */
> +    for ( i = 0; i < ARRAY_SIZE(fs); i++ )
> +        fs[i] &= max_fs[i];
> +
> +    if ( p->basic.max_leaf < XSTATE_CPUID )
> +        __clear_bit(X86_FEATURE_XSAVE, fs);
> +
> +    sanitise_featureset(fs);
> +
> +    /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
> +    fs[FEATURESET_7b0] &= ~(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
> +                            cpufeat_mask(X86_FEATURE_NO_FPU_SEL));
> +    fs[FEATURESET_7b0] |= (host_cpu_policy.feat._7b0 &
> +                           (cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
> +                            cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
> +
> +    x86_cpu_featureset_to_policy(fs, p);
> +
> +    /* Pass host cacheline size through to guests. */
> +    p->basic.clflush_size = max->basic.clflush_size;
> +
> +    p->extd.maxphysaddr = min(p->extd.maxphysaddr, max->extd.maxphysaddr);
> +    p->extd.maxphysaddr = min_t(uint8_t, p->extd.maxphysaddr,
> +                                paging_max_paddr_bits(d));
> +    p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr,
> +                                (p->basic.pae || p->basic.pse36) ? 36 : 32);
> +
> +    p->extd.maxlinaddr = p->extd.lm ? 48 : 32;
> +
> +    recalculate_xstate(p);
> +    recalculate_misc(p);
> +
> +    for ( i = 0; i < ARRAY_SIZE(p->cache.raw); ++i )
> +    {
> +        if ( p->cache.subleaf[i].type >= 1 &&
> +             p->cache.subleaf[i].type <= 3 )
> +        {
> +            /* Subleaf has a valid cache type. Zero reserved fields. */
> +            p->cache.raw[i].a &= 0xffffc3ffu;
> +            p->cache.raw[i].d &= 0x00000007u;
> +        }
> +        else
> +        {
> +            /* Subleaf is not valid.  Zero the rest of the union. */
> +            zero_leaves(p->cache.raw, i, ARRAY_SIZE(p->cache.raw) - 1);
> +            break;
> +        }
> +    }
> +
> +    if ( vpmu_mode == XENPMU_MODE_OFF ||
> +         ((vpmu_mode & XENPMU_MODE_ALL) && !is_hardware_domain(d)) )
> +        p->basic.raw[0xa] = EMPTY_LEAF;
> +
> +    if ( !p->extd.svm )
> +        p->extd.raw[0xa] = EMPTY_LEAF;
> +
> +    if ( !p->extd.page1gb )
> +        p->extd.raw[0x19] = EMPTY_LEAF;
> +}
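For reference, the three-step maxphysaddr clamping above reduces to: no wider than the max policy or what paging can handle, but never narrower than the architectural floors implied by PAE/PSE36. A standalone sketch (clamp_maxphysaddr() is a hypothetical helper, not a Xen function):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Mirror of the maxphysaddr handling in recalculate_cpuid_policy():
 * min() against the max policy and paging limits, then max() against
 * the 36-bit (PAE/PSE36) or 32-bit architectural minimum.
 */
static uint8_t clamp_maxphysaddr(uint8_t guest, uint8_t max_policy,
                                 uint8_t paging_bits, bool pae_or_pse36)
{
    uint8_t v = guest < max_policy ? guest : max_policy;

    if ( v > paging_bits )
        v = paging_bits;

    if ( pae_or_pse36 && v < 36 )
        v = 36;
    else if ( v < 32 )
        v = 32;

    return v;
}
```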
> +
> +void __init init_dom0_cpuid_policy(struct domain *d)
> +{
> +    struct cpu_policy *p = d->arch.cpuid;
> +
> +    /* dom0 can't migrate.  Give it ITSC if available. */
> +    if ( cpu_has_itsc )
> +        p->extd.itsc = true;
> +
> +    /*
> +     * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
> +     * so dom0 can turn off workarounds as appropriate.  Temporary, until the
> +     * domain policy logic gains a better understanding of MSRs.
> +     */
> +    if ( cpu_has_arch_caps )
> +        p->feat.arch_caps = true;
> +
> +    /* Apply dom0-cpuid= command line settings, if provided. */
> +    if ( dom0_cpuid_cmdline )
> +    {
> +        uint32_t fs[FSCAPINTS];
> +        unsigned int i;
> +
> +        x86_cpu_policy_to_featureset(p, fs);
> +
> +        for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> +        {
> +            fs[i] |=  dom0_enable_feat [i];
> +            fs[i] &= ~dom0_disable_feat[i];
> +        }
> +
> +        x86_cpu_featureset_to_policy(fs, p);
> +
> +        recalculate_cpuid_policy(d);
> +    }
> +}
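The overlay applied here is simply "explicit enables OR in, explicit disables mask out", with _parse_dom0_cpuid() having kept the two bitmaps mutually exclusive so the last command line setting wins. A minimal sketch (two-word featureset for illustration only):

```c
#include <stdint.h>

#define FSCAPINTS 2 /* Toy value; the real count comes from the gen-cpuid machinery. */

/* The fs[] adjustment loop from init_dom0_cpuid_policy(). */
static void apply_dom0_cpuid(uint32_t fs[FSCAPINTS],
                             const uint32_t enable[FSCAPINTS],
                             const uint32_t disable[FSCAPINTS])
{
    unsigned int i;

    for ( i = 0; i < FSCAPINTS; ++i )
    {
        fs[i] |=  enable[i];   /* Force-enable requested features... */
        fs[i] &= ~disable[i];  /* ...and strip the force-disabled ones. */
    }
}
```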
> +
> +static void __init __maybe_unused build_assertions(void)
> +{
> +    BUILD_BUG_ON(ARRAY_SIZE(known_features) != FSCAPINTS);
> +    BUILD_BUG_ON(ARRAY_SIZE(pv_max_featuremask) != FSCAPINTS);
> +    BUILD_BUG_ON(ARRAY_SIZE(hvm_shadow_max_featuremask) != FSCAPINTS);
> +    BUILD_BUG_ON(ARRAY_SIZE(hvm_hap_max_featuremask) != FSCAPINTS);
> +    BUILD_BUG_ON(ARRAY_SIZE(deep_features) != FSCAPINTS);
> +
> +    /* Find some more clever allocation scheme if this trips. */
> +    BUILD_BUG_ON(sizeof(struct cpu_policy) > PAGE_SIZE);
> +
> +    BUILD_BUG_ON(sizeof(raw_cpu_policy.basic) !=
> +                 sizeof(raw_cpu_policy.basic.raw));
> +    BUILD_BUG_ON(sizeof(raw_cpu_policy.feat) !=
> +                 sizeof(raw_cpu_policy.feat.raw));
> +    BUILD_BUG_ON(sizeof(raw_cpu_policy.xstate) !=
> +                 sizeof(raw_cpu_policy.xstate.raw));
> +    BUILD_BUG_ON(sizeof(raw_cpu_policy.extd) !=
> +                 sizeof(raw_cpu_policy.extd.raw));
> +}
> diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
> index 5eb5f1893516..3f20c342fde8 100644
> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -1,638 +1,14 @@
> -#include <xen/init.h>
> -#include <xen/lib.h>
> -#include <xen/param.h>
>  #include <xen/sched.h>
> -#include <xen/nospec.h>
> -#include <asm/amd.h>
> +#include <xen/types.h>
> +
> +#include <public/hvm/params.h>
> +
>  #include <asm/cpu-policy.h>
>  #include <asm/cpuid.h>
> -#include <asm/hvm/hvm.h>
> -#include <asm/hvm/nestedhvm.h>
> -#include <asm/hvm/svm/svm.h>
>  #include <asm/hvm/viridian.h>
> -#include <asm/hvm/vmx/vmcs.h>
> -#include <asm/paging.h>
> -#include <asm/processor.h>
>  #include <asm/xstate.h>
>  
> -const uint32_t known_features[] = INIT_KNOWN_FEATURES;
> -
> -static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
> -static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
> -static const uint32_t __initconst hvm_hap_max_featuremask[] =
> -    INIT_HVM_HAP_MAX_FEATURES;
> -static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
> -static const uint32_t __initconst hvm_shadow_def_featuremask[] =
> -    INIT_HVM_SHADOW_DEF_FEATURES;
> -static const uint32_t __initconst hvm_hap_def_featuremask[] =
> -    INIT_HVM_HAP_DEF_FEATURES;
> -static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
> -
> -static const struct feature_name {
> -    const char *name;
> -    unsigned int bit;
> -} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
> -
> -/*
> - * Parse a list of cpuid feature names -> bool, calling the callback for any
> - * matches found.
> - *
> - * always_inline, because this is init code only and we really don't want a
> - * function pointer call in the middle of the loop.
> - */
> -static int __init always_inline parse_cpuid(
> -    const char *s, void (*callback)(unsigned int feat, bool val))
> -{
> -    const char *ss;
> -    int val, rc = 0;
> -
> -    do {
> -        const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
> -        const char *feat;
> -
> -        ss = strchr(s, ',');
> -        if ( !ss )
> -            ss = strchr(s, '\0');
> -
> -        /* Skip the 'no-' prefix for name comparisons. */
> -        feat = s;
> -        if ( strncmp(s, "no-", 3) == 0 )
> -            feat += 3;
> -
> -        /* (Re)initalise lhs and rhs for binary search. */
> -        lhs = feature_names;
> -        rhs = feature_names + ARRAY_SIZE(feature_names);
> -
> -        while ( lhs < rhs )
> -        {
> -            int res;
> -
> -            mid = lhs + (rhs - lhs) / 2;
> -            res = cmdline_strcmp(feat, mid->name);
> -
> -            if ( res < 0 )
> -            {
> -                rhs = mid;
> -                continue;
> -            }
> -            if ( res > 0 )
> -            {
> -                lhs = mid + 1;
> -                continue;
> -            }
> -
> -            if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
> -            {
> -                callback(mid->bit, val);
> -                mid = NULL;
> -            }
> -
> -            break;
> -        }
> -
> -        /*
> -         * Mid being NULL means that the name and boolean were successfully
> -         * identified.  Everything else is an error.
> -         */
> -        if ( mid )
> -            rc = -EINVAL;
> -
> -        s = ss + 1;
> -    } while ( *ss );
> -
> -    return rc;
> -}
> -
> -static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
> -{
> -    if ( !val )
> -        setup_clear_cpu_cap(feat);
> -    else if ( feat == X86_FEATURE_RDRAND &&
> -              (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
> -        setup_force_cpu_cap(X86_FEATURE_RDRAND);
> -}
> -
> -static int __init cf_check parse_xen_cpuid(const char *s)
> -{
> -    return parse_cpuid(s, _parse_xen_cpuid);
> -}
> -custom_param("cpuid", parse_xen_cpuid);
> -
> -static bool __initdata dom0_cpuid_cmdline;
> -static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
> -static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
> -
> -static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
> -{
> -    __set_bit  (feat, val ? dom0_enable_feat  : dom0_disable_feat);
> -    __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
> -}
> -
> -static int __init cf_check parse_dom0_cpuid(const char *s)
> -{
> -    dom0_cpuid_cmdline = true;
> -
> -    return parse_cpuid(s, _parse_dom0_cpuid);
> -}
> -custom_param("dom0-cpuid", parse_dom0_cpuid);
> -
>  #define EMPTY_LEAF ((struct cpuid_leaf){})
> -static void zero_leaves(struct cpuid_leaf *l,
> -                        unsigned int first, unsigned int last)
> -{
> -    memset(&l[first], 0, sizeof(*l) * (last - first + 1));
> -}
> -
> -static void sanitise_featureset(uint32_t *fs)
> -{
> -    /* for_each_set_bit() uses unsigned longs.  Extend with zeroes. */
> -    uint32_t disabled_features[
> -        ROUNDUP(FSCAPINTS, sizeof(unsigned long)/sizeof(uint32_t))] = {};
> -    unsigned int i;
> -
> -    for ( i = 0; i < FSCAPINTS; ++i )
> -    {
> -        /* Clamp to known mask. */
> -        fs[i] &= known_features[i];
> -
> -        /*
> -         * Identify which features with deep dependencies have been
> -         * disabled.
> -         */
> -        disabled_features[i] = ~fs[i] & deep_features[i];
> -    }
> -
> -    for_each_set_bit(i, (void *)disabled_features,
> -                     sizeof(disabled_features) * 8)
> -    {
> -        const uint32_t *dfs = x86_cpuid_lookup_deep_deps(i);
> -        unsigned int j;
> -
> -        ASSERT(dfs); /* deep_features[] should guarentee this. */
> -
> -        for ( j = 0; j < FSCAPINTS; ++j )
> -        {
> -            fs[j] &= ~dfs[j];
> -            disabled_features[j] &= ~dfs[j];
> -        }
> -    }
> -}
> -
> -static void recalculate_xstate(struct cpuid_policy *p)
> -{
> -    uint64_t xstates = XSTATE_FP_SSE;
> -    uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
> -    unsigned int i, Da1 = p->xstate.Da1;
> -
> -    /*
> -     * The Da1 leaf is the only piece of information preserved in the common
> -     * case.  Everything else is derived from other feature state.
> -     */
> -    memset(&p->xstate, 0, sizeof(p->xstate));
> -
> -    if ( !p->basic.xsave )
> -        return;
> -
> -    if ( p->basic.avx )
> -    {
> -        xstates |= X86_XCR0_YMM;
> -        xstate_size = max(xstate_size,
> -                          xstate_offsets[X86_XCR0_YMM_POS] +
> -                          xstate_sizes[X86_XCR0_YMM_POS]);
> -    }
> -
> -    if ( p->feat.mpx )
> -    {
> -        xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
> -        xstate_size = max(xstate_size,
> -                          xstate_offsets[X86_XCR0_BNDCSR_POS] +
> -                          xstate_sizes[X86_XCR0_BNDCSR_POS]);
> -    }
> -
> -    if ( p->feat.avx512f )
> -    {
> -        xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
> -        xstate_size = max(xstate_size,
> -                          xstate_offsets[X86_XCR0_HI_ZMM_POS] +
> -                          xstate_sizes[X86_XCR0_HI_ZMM_POS]);
> -    }
> -
> -    if ( p->feat.pku )
> -    {
> -        xstates |= X86_XCR0_PKRU;
> -        xstate_size = max(xstate_size,
> -                          xstate_offsets[X86_XCR0_PKRU_POS] +
> -                          xstate_sizes[X86_XCR0_PKRU_POS]);
> -    }
> -
> -    p->xstate.max_size  =  xstate_size;
> -    p->xstate.xcr0_low  =  xstates & ~XSTATE_XSAVES_ONLY;
> -    p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
> -
> -    p->xstate.Da1 = Da1;
> -    if ( p->xstate.xsaves )
> -    {
> -        p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
> -        p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
> -    }
> -    else
> -        xstates &= ~XSTATE_XSAVES_ONLY;
> -
> -    for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
> -    {
> -        uint64_t curr_xstate = 1ul << i;
> -
> -        if ( !(xstates & curr_xstate) )
> -            continue;
> -
> -        p->xstate.comp[i].size   = xstate_sizes[i];
> -        p->xstate.comp[i].offset = xstate_offsets[i];
> -        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
> -        p->xstate.comp[i].align  = curr_xstate & xstate_align;
> -    }
> -}
> -
> -/*
> - * Misc adjustments to the policy.  Mostly clobbering reserved fields and
> - * duplicating shared fields.  Intentionally hidden fields are annotated.
> - */
> -static void recalculate_misc(struct cpuid_policy *p)
> -{
> -    p->basic.raw_fms &= 0x0fff0fff; /* Clobber Processor Type on Intel. */
> -    p->basic.apic_id = 0; /* Dynamic. */
> -
> -    p->basic.raw[0x5] = EMPTY_LEAF; /* MONITOR not exposed to guests. */
> -    p->basic.raw[0x6] = EMPTY_LEAF; /* Therm/Power not exposed to guests. */
> -
> -    p->basic.raw[0x8] = EMPTY_LEAF;
> -
> -    /* TODO: Rework topology logic. */
> -    memset(p->topo.raw, 0, sizeof(p->topo.raw));
> -
> -    p->basic.raw[0xc] = EMPTY_LEAF;
> -
> -    p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
> -
> -    /* Most of Power/RAS hidden from guests. */
> -    p->extd.raw[0x7].a = p->extd.raw[0x7].b = p->extd.raw[0x7].c = 0;
> -
> -    p->extd.raw[0x8].d = 0;
> -
> -    switch ( p->x86_vendor )
> -    {
> -    case X86_VENDOR_INTEL:
> -        p->basic.l2_nr_queries = 1; /* Fixed to 1 query. */
> -        p->basic.raw[0x3] = EMPTY_LEAF; /* PSN - always hidden. */
> -        p->basic.raw[0x9] = EMPTY_LEAF; /* DCA - always hidden. */
> -
> -        p->extd.vendor_ebx = 0;
> -        p->extd.vendor_ecx = 0;
> -        p->extd.vendor_edx = 0;
> -
> -        p->extd.raw[0x1].a = p->extd.raw[0x1].b = 0;
> -
> -        p->extd.raw[0x5] = EMPTY_LEAF;
> -        p->extd.raw[0x6].a = p->extd.raw[0x6].b = p->extd.raw[0x6].d = 0;
> -
> -        p->extd.raw[0x8].a &= 0x0000ffff;
> -        p->extd.raw[0x8].c = 0;
> -        break;
> -
> -    case X86_VENDOR_AMD:
> -    case X86_VENDOR_HYGON:
> -        zero_leaves(p->basic.raw, 0x2, 0x3);
> -        memset(p->cache.raw, 0, sizeof(p->cache.raw));
> -        zero_leaves(p->basic.raw, 0x9, 0xa);
> -
> -        p->extd.vendor_ebx = p->basic.vendor_ebx;
> -        p->extd.vendor_ecx = p->basic.vendor_ecx;
> -        p->extd.vendor_edx = p->basic.vendor_edx;
> -
> -        p->extd.raw_fms = p->basic.raw_fms;
> -        p->extd.raw[0x1].b &= 0xff00ffff;
> -        p->extd.e1d |= p->basic._1d & CPUID_COMMON_1D_FEATURES;
> -
> -        p->extd.raw[0x8].a &= 0x0000ffff; /* GuestMaxPhysAddr hidden. */
> -        p->extd.raw[0x8].c &= 0x0003f0ff;
> -
> -        p->extd.raw[0x9] = EMPTY_LEAF;
> -
> -        zero_leaves(p->extd.raw, 0xb, 0x18);
> -
> -        /* 0x19 - TLB details.  Pass through. */
> -        /* 0x1a - Perf hints.   Pass through. */
> -
> -        p->extd.raw[0x1b] = EMPTY_LEAF; /* IBS - not supported. */
> -        p->extd.raw[0x1c] = EMPTY_LEAF; /* LWP - not supported. */
> -        p->extd.raw[0x1d] = EMPTY_LEAF; /* TopoExt Cache */
> -        p->extd.raw[0x1e] = EMPTY_LEAF; /* TopoExt APIC ID/Core/Node */
> -        p->extd.raw[0x1f] = EMPTY_LEAF; /* SEV */
> -        p->extd.raw[0x20] = EMPTY_LEAF; /* Platform QoS */
> -        break;
> -    }
> -}
> -
> -static void __init calculate_raw_policy(void)
> -{
> -    struct cpuid_policy *p = &raw_cpu_policy;
> -
> -    x86_cpuid_policy_fill_native(p);
> -
> -    /* Nothing good will come from Xen and libx86 disagreeing on vendor. */
> -    ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
> -}
> -
> -static void __init calculate_host_policy(void)
> -{
> -    struct cpuid_policy *p = &host_cpu_policy;
> -    unsigned int max_extd_leaf;
> -
> -    *p = raw_cpu_policy;
> -
> -    p->basic.max_leaf =
> -        min_t(uint32_t, p->basic.max_leaf,   ARRAY_SIZE(p->basic.raw) - 1);
> -    p->feat.max_subleaf =
> -        min_t(uint32_t, p->feat.max_subleaf, ARRAY_SIZE(p->feat.raw) - 1);
> -
> -    max_extd_leaf = p->extd.max_leaf;
> -
> -    /*
> -     * For AMD/Hygon hardware before Zen3, we unilaterally modify LFENCE to be
> -     * dispatch serialising for Spectre mitigations.  Extend max_extd_leaf
> -     * beyond what hardware supports, to include the feature leaf containing
> -     * this information.
> -     */
> -    if ( cpu_has_lfence_dispatch )
> -        max_extd_leaf = max(max_extd_leaf, 0x80000021);
> -
> -    p->extd.max_leaf = 0x80000000 | min_t(uint32_t, max_extd_leaf & 0xffff,
> -                                          ARRAY_SIZE(p->extd.raw) - 1);
> -
> -    x86_cpu_featureset_to_policy(boot_cpu_data.x86_capability, p);
> -    recalculate_xstate(p);
> -    recalculate_misc(p);
> -
> -    /* When vPMU is disabled, drop it from the host policy. */
> -    if ( vpmu_mode == XENPMU_MODE_OFF )
> -        p->basic.raw[0xa] = EMPTY_LEAF;
> -
> -    if ( p->extd.svm )
> -    {
> -        /* Clamp to implemented features which require hardware support. */
> -        p->extd.raw[0xa].d &= ((1u << SVM_FEATURE_NPT) |
> -                               (1u << SVM_FEATURE_LBRV) |
> -                               (1u << SVM_FEATURE_NRIPS) |
> -                               (1u << SVM_FEATURE_PAUSEFILTER) |
> -                               (1u << SVM_FEATURE_DECODEASSISTS));
> -        /* Enable features which are always emulated. */
> -        p->extd.raw[0xa].d |= ((1u << SVM_FEATURE_VMCBCLEAN) |
> -                               (1u << SVM_FEATURE_TSCRATEMSR));
> -    }
> -}
> -
> -static void __init guest_common_default_feature_adjustments(uint32_t *fs)
> -{
> -    /*
> -     * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
> -     * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
> -     * compensate.
> -     *
> -     * Mitigate by hiding RDRAND from guests by default, unless explicitly
> -     * overridden on the Xen command line (cpuid=rdrand).  Irrespective of the
> -     * default setting, guests can use RDRAND if explicitly enabled
> -     * (cpuid="host,rdrand=1") in the VM's config file, and VMs which were
> -     * previously using RDRAND can migrate in.
> -     */
> -    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
> -         boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x3a &&
> -         cpu_has_rdrand && !is_forced_cpu_cap(X86_FEATURE_RDRAND) )
> -        __clear_bit(X86_FEATURE_RDRAND, fs);
> -
> -    /*
> -     * On certain hardware, speculative or errata workarounds can result in
> -     * TSX being placed in "force-abort" mode, where it doesn't actually
> -     * function as expected, but is technically compatible with the ISA.
> -     *
> -     * Do not advertise RTM to guests by default if it won't actually work.
> -     */
> -    if ( rtm_disabled )
> -        __clear_bit(X86_FEATURE_RTM, fs);
> -}
> -
> -static void __init guest_common_feature_adjustments(uint32_t *fs)
> -{
> -    /* Unconditionally claim to be able to set the hypervisor bit. */
> -    __set_bit(X86_FEATURE_HYPERVISOR, fs);
> -
> -    /*
> -     * If IBRS is offered to the guest, unconditionally offer STIBP.  It is a
> -     * nop on non-HT hardware, and has this behaviour to make heterogeneous
> -     * setups easier to manage.
> -     */
> -    if ( test_bit(X86_FEATURE_IBRSB, fs) )
> -        __set_bit(X86_FEATURE_STIBP, fs);
> -    if ( test_bit(X86_FEATURE_IBRS, fs) )
> -        __set_bit(X86_FEATURE_AMD_STIBP, fs);
> -
> -    /*
> -     * On hardware which supports IBRS/IBPB, we can offer IBPB independently
> -     * of IBRS by using the AMD feature bit.  An administrator may wish for
> -     * performance reasons to offer IBPB without IBRS.
> -     */
> -    if ( host_cpu_policy.feat.ibrsb )
> -        __set_bit(X86_FEATURE_IBPB, fs);
> -}
> -
> -static void __init calculate_pv_max_policy(void)
> -{
> -    struct cpuid_policy *p = &pv_max_cpu_policy;
> -    uint32_t pv_featureset[FSCAPINTS];
> -    unsigned int i;
> -
> -    *p = host_cpu_policy;
> -    x86_cpu_policy_to_featureset(p, pv_featureset);
> -
> -    for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
> -        pv_featureset[i] &= pv_max_featuremask[i];
> -
> -    /*
> -     * If Xen isn't virtualising MSR_SPEC_CTRL for PV guests (functional
> -     * availability, or admin choice), hide the feature.
> -     */
> -    if ( !boot_cpu_has(X86_FEATURE_SC_MSR_PV) )
> -    {
> -        __clear_bit(X86_FEATURE_IBRSB, pv_featureset);
> -        __clear_bit(X86_FEATURE_IBRS, pv_featureset);
> -    }
> -
> -    guest_common_feature_adjustments(pv_featureset);
> -
> -    sanitise_featureset(pv_featureset);
> -    x86_cpu_featureset_to_policy(pv_featureset, p);
> -    recalculate_xstate(p);
> -
> -    p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
> -}
> -
> -static void __init calculate_pv_def_policy(void)
> -{
> -    struct cpuid_policy *p = &pv_def_cpu_policy;
> -    uint32_t pv_featureset[FSCAPINTS];
> -    unsigned int i;
> -
> -    *p = pv_max_cpu_policy;
> -    x86_cpu_policy_to_featureset(p, pv_featureset);
> -
> -    for ( i = 0; i < ARRAY_SIZE(pv_featureset); ++i )
> -        pv_featureset[i] &= pv_def_featuremask[i];
> -
> -    guest_common_feature_adjustments(pv_featureset);
> -    guest_common_default_feature_adjustments(pv_featureset);
> -
> -    sanitise_featureset(pv_featureset);
> -    x86_cpu_featureset_to_policy(pv_featureset, p);
> -    recalculate_xstate(p);
> -}
> -
> -static void __init calculate_hvm_max_policy(void)
> -{
> -    struct cpuid_policy *p = &hvm_max_cpu_policy;
> -    uint32_t hvm_featureset[FSCAPINTS];
> -    unsigned int i;
> -    const uint32_t *hvm_featuremask;
> -
> -    *p = host_cpu_policy;
> -    x86_cpu_policy_to_featureset(p, hvm_featureset);
> -
> -    hvm_featuremask = hvm_hap_supported() ?
> -        hvm_hap_max_featuremask : hvm_shadow_max_featuremask;
> -
> -    for ( i = 0; i < ARRAY_SIZE(hvm_featureset); ++i )
> -        hvm_featureset[i] &= hvm_featuremask[i];
> -
> -    /*
> -     * Xen can provide an (x2)APIC emulation to HVM guests even if the host's
> -     * (x2)APIC isn't enabled.
> -     */
> -    __set_bit(X86_FEATURE_APIC, hvm_featureset);
> -    __set_bit(X86_FEATURE_X2APIC, hvm_featureset);
> -
> -    /*
> -     * We don't support EFER.LMSLE at all.  AMD has dropped the feature from
> -     * hardware and allocated a CPUID bit to indicate its absence.
> -     */
> -    __set_bit(X86_FEATURE_NO_LMSL, hvm_featureset);
> -
> -    /*
> -     * On AMD, PV guests are entirely unable to use SYSENTER as Xen runs in
> -     * long mode (and init_amd() has cleared it out of host capabilities), but
> -     * HVM guests are able if running in protected mode.
> -     */
> -    if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) &&
> -         raw_cpu_policy.basic.sep )
> -        __set_bit(X86_FEATURE_SEP, hvm_featureset);
> -
> -    /*
> -     * VIRT_SSBD is exposed in the default policy as a result of
> -     * amd_virt_spec_ctrl being set, it also needs exposing in the max policy.
> -     */
> -    if ( amd_virt_spec_ctrl )
> -        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> -
> -    /*
> -     * If Xen isn't virtualising MSR_SPEC_CTRL for HVM guests (functional
> -     * availability, or admin choice), hide the feature.
> -     */
> -    if ( !boot_cpu_has(X86_FEATURE_SC_MSR_HVM) )
> -    {
> -        __clear_bit(X86_FEATURE_IBRSB, hvm_featureset);
> -        __clear_bit(X86_FEATURE_IBRS, hvm_featureset);
> -    }
> -    else if ( boot_cpu_has(X86_FEATURE_AMD_SSBD) )
> -        /*
> -         * If SPEC_CTRL.SSBD is available VIRT_SPEC_CTRL.SSBD can be exposed
> -         * and implemented using the former. Expose in the max policy only as
> -         * the preference is for guests to use SPEC_CTRL.SSBD if available.
> -         */
> -        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> -
> -    /*
> -     * With VT-x, some features are only supported by Xen if dedicated
> -     * hardware support is also available.
> -     */
> -    if ( cpu_has_vmx )
> -    {
> -        if ( !cpu_has_vmx_mpx )
> -            __clear_bit(X86_FEATURE_MPX, hvm_featureset);
> -
> -        if ( !cpu_has_vmx_xsaves )
> -            __clear_bit(X86_FEATURE_XSAVES, hvm_featureset);
> -    }
> -
> -    /*
> -     * Xen doesn't use PKS, so the guest support for it has opted to not use
> -     * the VMCS load/save controls for efficiency reasons.  This depends on
> -     * the exact vmentry/exit behaviour, so don't expose PKS in other
> -     * situations until someone has cross-checked the behaviour for safety.
> -     */
> -    if ( !cpu_has_vmx )
> -        __clear_bit(X86_FEATURE_PKS, hvm_featureset);
> -
> -    guest_common_feature_adjustments(hvm_featureset);
> -
> -    sanitise_featureset(hvm_featureset);
> -    x86_cpu_featureset_to_policy(hvm_featureset, p);
> -    recalculate_xstate(p);
> -}
> -
> -static void __init calculate_hvm_def_policy(void)
> -{
> -    struct cpuid_policy *p = &hvm_def_cpu_policy;
> -    uint32_t hvm_featureset[FSCAPINTS];
> -    unsigned int i;
> -    const uint32_t *hvm_featuremask;
> -
> -    *p = hvm_max_cpu_policy;
> -    x86_cpu_policy_to_featureset(p, hvm_featureset);
> -
> -    hvm_featuremask = hvm_hap_supported() ?
> -        hvm_hap_def_featuremask : hvm_shadow_def_featuremask;
> -
> -    for ( i = 0; i < ARRAY_SIZE(hvm_featureset); ++i )
> -        hvm_featureset[i] &= hvm_featuremask[i];
> -
> -    guest_common_feature_adjustments(hvm_featureset);
> -    guest_common_default_feature_adjustments(hvm_featureset);
> -
> -    /*
> -     * Only expose VIRT_SSBD if AMD_SSBD is not available, and thus
> -     * amd_virt_spec_ctrl is set.
> -     */
> -    if ( amd_virt_spec_ctrl )
> -        __set_bit(X86_FEATURE_VIRT_SSBD, hvm_featureset);
> -
> -    sanitise_featureset(hvm_featureset);
> -    x86_cpu_featureset_to_policy(hvm_featureset, p);
> -    recalculate_xstate(p);
> -}
> -
> -void __init init_guest_cpuid(void)
> -{
> -    calculate_raw_policy();
> -    calculate_host_policy();
> -
> -    if ( IS_ENABLED(CONFIG_PV) )
> -    {
> -        calculate_pv_max_policy();
> -        calculate_pv_def_policy();
> -    }
> -
> -    if ( hvm_enabled )
> -    {
> -        calculate_hvm_max_policy();
> -        calculate_hvm_def_policy();
> -    }
> -}
>  
>  bool recheck_cpu_features(unsigned int cpu)
>  {
> @@ -656,170 +32,6 @@ bool recheck_cpu_features(unsigned int cpu)
>      return okay;
>  }
>  
> -void recalculate_cpuid_policy(struct domain *d)
> -{
> -    struct cpuid_policy *p = d->arch.cpuid;
> -    const struct cpuid_policy *max = is_pv_domain(d)
> -        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
> -        : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
> -    uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
> -    unsigned int i;
> -
> -    if ( !max )
> -    {
> -        ASSERT_UNREACHABLE();
> -        return;
> -    }
> -
> -    p->x86_vendor = x86_cpuid_lookup_vendor(
> -        p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
> -
> -    p->basic.max_leaf   = min(p->basic.max_leaf,   max->basic.max_leaf);
> -    p->feat.max_subleaf = min(p->feat.max_subleaf, max->feat.max_subleaf);
> -    p->extd.max_leaf    = 0x80000000 | min(p->extd.max_leaf & 0xffff,
> -                                           ((p->x86_vendor & (X86_VENDOR_AMD |
> -                                                              X86_VENDOR_HYGON))
> -                                            ? CPUID_GUEST_NR_EXTD_AMD
> -                                            : CPUID_GUEST_NR_EXTD_INTEL) - 1);
> -
> -    x86_cpu_policy_to_featureset(p, fs);
> -    x86_cpu_policy_to_featureset(max, max_fs);
> -
> -    if ( is_hvm_domain(d) )
> -    {
> -        /*
> -         * HVM domains using Shadow paging have further restrictions on their
> -         * available paging features.
> -         */
> -        if ( !hap_enabled(d) )
> -        {
> -            for ( i = 0; i < ARRAY_SIZE(max_fs); i++ )
> -                max_fs[i] &= hvm_shadow_max_featuremask[i];
> -        }
> -
> -        /* Hide nested-virt if it hasn't been explicitly configured. */
> -        if ( !nestedhvm_enabled(d) )
> -        {
> -            __clear_bit(X86_FEATURE_VMX, max_fs);
> -            __clear_bit(X86_FEATURE_SVM, max_fs);
> -        }
> -    }
> -
> -    /*
> -     * Allow the toolstack to set HTT, X2APIC and CMP_LEGACY.  These bits
> -     * affect how to interpret topology information in other cpuid leaves.
> -     */
> -    __set_bit(X86_FEATURE_HTT, max_fs);
> -    __set_bit(X86_FEATURE_X2APIC, max_fs);
> -    __set_bit(X86_FEATURE_CMP_LEGACY, max_fs);
> -
> -    /*
> -     * 32bit PV domains can't use any Long Mode features, and cannot use
> -     * SYSCALL on non-AMD hardware.
> -     */
> -    if ( is_pv_32bit_domain(d) )
> -    {
> -        __clear_bit(X86_FEATURE_LM, max_fs);
> -        if ( !(boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> -            __clear_bit(X86_FEATURE_SYSCALL, max_fs);
> -    }
> -
> -    /* Clamp the toolstack's choices to reality. */
> -    for ( i = 0; i < ARRAY_SIZE(fs); i++ )
> -        fs[i] &= max_fs[i];
> -
> -    if ( p->basic.max_leaf < XSTATE_CPUID )
> -        __clear_bit(X86_FEATURE_XSAVE, fs);
> -
> -    sanitise_featureset(fs);
> -
> -    /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
> -    fs[FEATURESET_7b0] &= ~(cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
> -                            cpufeat_mask(X86_FEATURE_NO_FPU_SEL));
> -    fs[FEATURESET_7b0] |= (host_cpu_policy.feat._7b0 &
> -                           (cpufeat_mask(X86_FEATURE_FDP_EXCP_ONLY) |
> -                            cpufeat_mask(X86_FEATURE_NO_FPU_SEL)));
> -
> -    x86_cpu_featureset_to_policy(fs, p);
> -
> -    /* Pass host cacheline size through to guests. */
> -    p->basic.clflush_size = max->basic.clflush_size;
> -
> -    p->extd.maxphysaddr = min(p->extd.maxphysaddr, max->extd.maxphysaddr);
> -    p->extd.maxphysaddr = min_t(uint8_t, p->extd.maxphysaddr,
> -                                paging_max_paddr_bits(d));
> -    p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr,
> -                                (p->basic.pae || p->basic.pse36) ? 36 : 32);
> -
> -    p->extd.maxlinaddr = p->extd.lm ? 48 : 32;
> -
> -    recalculate_xstate(p);
> -    recalculate_misc(p);
> -
> -    for ( i = 0; i < ARRAY_SIZE(p->cache.raw); ++i )
> -    {
> -        if ( p->cache.subleaf[i].type >= 1 &&
> -             p->cache.subleaf[i].type <= 3 )
> -        {
> -            /* Subleaf has a valid cache type. Zero reserved fields. */
> -            p->cache.raw[i].a &= 0xffffc3ffu;
> -            p->cache.raw[i].d &= 0x00000007u;
> -        }
> -        else
> -        {
> -            /* Subleaf is not valid.  Zero the rest of the union. */
> -            zero_leaves(p->cache.raw, i, ARRAY_SIZE(p->cache.raw) - 1);
> -            break;
> -        }
> -    }
> -
> -    if ( vpmu_mode == XENPMU_MODE_OFF ||
> -         ((vpmu_mode & XENPMU_MODE_ALL) && !is_hardware_domain(d)) )
> -        p->basic.raw[0xa] = EMPTY_LEAF;
> -
> -    if ( !p->extd.svm )
> -        p->extd.raw[0xa] = EMPTY_LEAF;
> -
> -    if ( !p->extd.page1gb )
> -        p->extd.raw[0x19] = EMPTY_LEAF;
> -}
> -
> -void __init init_dom0_cpuid_policy(struct domain *d)
> -{
> -    struct cpuid_policy *p = d->arch.cpuid;
> -
> -    /* dom0 can't migrate.  Give it ITSC if available. */
> -    if ( cpu_has_itsc )
> -        p->extd.itsc = true;
> -
> -    /*
> -     * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
> -     * so dom0 can turn off workarounds as appropriate.  Temporary, until the
> -     * domain policy logic gains a better understanding of MSRs.
> -     */
> -    if ( cpu_has_arch_caps )
> -        p->feat.arch_caps = true;
> -
> -    /* Apply dom0-cpuid= command line settings, if provided. */
> -    if ( dom0_cpuid_cmdline )
> -    {
> -        uint32_t fs[FSCAPINTS];
> -        unsigned int i;
> -
> -        x86_cpu_policy_to_featureset(p, fs);
> -
> -        for ( i = 0; i < ARRAY_SIZE(fs); ++i )
> -        {
> -            fs[i] |=  dom0_enable_feat [i];
> -            fs[i] &= ~dom0_disable_feat[i];
> -        }
> -
> -        x86_cpu_featureset_to_policy(fs, p);
> -
> -        recalculate_cpuid_policy(d);
> -    }
> -}
> -
>  void guest_cpuid(const struct vcpu *v, uint32_t leaf,
>                   uint32_t subleaf, struct cpuid_leaf *res)
>  {
> @@ -1190,27 +402,6 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
>      }
>  }
>  
> -static void __init __maybe_unused build_assertions(void)
> -{
> -    BUILD_BUG_ON(ARRAY_SIZE(known_features) != FSCAPINTS);
> -    BUILD_BUG_ON(ARRAY_SIZE(pv_max_featuremask) != FSCAPINTS);
> -    BUILD_BUG_ON(ARRAY_SIZE(hvm_shadow_max_featuremask) != FSCAPINTS);
> -    BUILD_BUG_ON(ARRAY_SIZE(hvm_hap_max_featuremask) != FSCAPINTS);
> -    BUILD_BUG_ON(ARRAY_SIZE(deep_features) != FSCAPINTS);
> -
> -    /* Find some more clever allocation scheme if this trips. */
> -    BUILD_BUG_ON(sizeof(struct cpuid_policy) > PAGE_SIZE);
> -
> -    BUILD_BUG_ON(sizeof(raw_cpu_policy.basic) !=
> -                 sizeof(raw_cpu_policy.basic.raw));
> -    BUILD_BUG_ON(sizeof(raw_cpu_policy.feat) !=
> -                 sizeof(raw_cpu_policy.feat.raw));
> -    BUILD_BUG_ON(sizeof(raw_cpu_policy.xstate) !=
> -                 sizeof(raw_cpu_policy.xstate.raw));
> -    BUILD_BUG_ON(sizeof(raw_cpu_policy.extd) !=
> -                 sizeof(raw_cpu_policy.extd.raw));
> -}
> -
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index d326fa1c0136..675c523d9909 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -77,7 +77,6 @@
>  #include <public/memory.h>
>  #include <public/vm_event.h>
>  #include <public/arch-x86/cpuid.h>
> -#include <asm/cpuid.h>
>  
>  #include <compat/hvm/hvm_op.h>
>  
> diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
> index 13e2a1f86d13..b361537a602b 100644
> --- a/xen/arch/x86/include/asm/cpu-policy.h
> +++ b/xen/arch/x86/include/asm/cpu-policy.h
> @@ -18,4 +18,10 @@ void init_guest_cpu_policies(void);
>  /* Allocate and initialise a CPU policy suitable for the domain. */
>  int init_domain_cpu_policy(struct domain *d);
>  
> +/* Apply dom0-specific tweaks to the CPUID policy. */
> +void init_dom0_cpuid_policy(struct domain *d);
> +
> +/* Clamp the CPUID policy to reality. */
> +void recalculate_cpuid_policy(struct domain *d);
> +
>  #endif /* X86_CPU_POLICY_H */
> diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
> index 7f81b998ce01..b32ba0bbfe5c 100644
> --- a/xen/arch/x86/include/asm/cpuid.h
> +++ b/xen/arch/x86/include/asm/cpuid.h
> @@ -8,14 +8,10 @@
>  #include <xen/kernel.h>
>  #include <xen/percpu.h>
>  
> -#include <xen/lib/x86/cpu-policy.h>
> -
>  #include <public/sysctl.h>
>  
>  extern const uint32_t known_features[FSCAPINTS];
>  
> -void init_guest_cpuid(void);
> -
>  /*
>   * Expected levelling capabilities (given cpuid vendor/family information),
>   * and levelling capabilities actually available (given MSR probing).
> @@ -49,13 +45,8 @@ extern struct cpuidmasks cpuidmask_defaults;
>  /* Check that all previously present features are still available. */
>  bool recheck_cpu_features(unsigned int cpu);
>  
> -/* Apply dom0-specific tweaks to the CPUID policy. */
> -void init_dom0_cpuid_policy(struct domain *d);
> -
> -/* Clamp the CPUID policy to reality. */
> -void recalculate_cpuid_policy(struct domain *d);
> -
>  struct vcpu;
> +struct cpuid_leaf;
>  void guest_cpuid(const struct vcpu *v, uint32_t leaf,
>                   uint32_t subleaf, struct cpuid_leaf *res);
>  
> diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
> index f94f28c8e271..95492715d8ad 100644
> --- a/xen/arch/x86/pv/domain.c
> +++ b/xen/arch/x86/pv/domain.c
> @@ -10,6 +10,7 @@
>  #include <xen/param.h>
>  #include <xen/sched.h>
>  
> +#include <asm/cpu-policy.h>
>  #include <asm/cpufeature.h>
>  #include <asm/invpcid.h>
>  #include <asm/spec_ctrl.h>
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 51a19b9019eb..08ade715a3ce 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -51,7 +51,6 @@
>  #include <asm/alternative.h>
>  #include <asm/mc146818rtc.h>
>  #include <asm/cpu-policy.h>
> -#include <asm/cpuid.h>
>  #include <asm/spec_ctrl.h>
>  #include <asm/guest.h>
>  #include <asm/microcode.h>
> @@ -1991,7 +1990,6 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      if ( !tboot_protect_mem_regions() )
>          panic("Could not protect TXT memory regions\n");
>  
> -    init_guest_cpuid();
>      init_guest_cpu_policies();
>  
>      if ( xen_cpuidle )



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:18:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 15:18:29 +0000
Message-ID: <6e1d7c6f-82c8-418d-922a-e3f6f24c8d21@suse.com>
Date: Tue, 4 Apr 2023 17:18:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/pci: Correct ECS handling with CF8/CFC emulation
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20230331175719.500285-1-andrew.cooper3@citrix.com>
 <ZCqVEHe1Qo3skeVf@Air-de-Roger>
 <4b76def9-9940-ccf0-8050-12ddf2c1253c@citrix.com>
 <ZCrUErZZkd6co1Dq@Air-de-Roger>
 <91fc0c1f-a985-17bd-2011-f4964d82e008@citrix.com>
 <ZCw8eQSIN0FpXAhX@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZCw8eQSIN0FpXAhX@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9261:EE_
X-MS-Office365-Filtering-Correlation-Id: d03ec373-8be0-4f2b-c7c3-08db351fcf5d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d03ec373-8be0-4f2b-c7c3-08db351fcf5d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 15:18:14.3795
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: f1mXU8/tG1ZLKM3mtuX9BztNW+MgdDyPFrLWBjAM8GGFCQpYtQx5/OuPaih60R2hE5nAqDllbLwQs9a/WyKpMQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9261

On 04.04.2023 17:04, Roger Pau Monné wrote:
> On Tue, Apr 04, 2023 at 02:27:36PM +0100, Andrew Cooper wrote:
>> On 03/04/2023 2:26 pm, Roger Pau Monné wrote:
>>> On Mon, Apr 03, 2023 at 11:16:52AM +0100, Andrew Cooper wrote:
>>>> On 03/04/2023 9:57 am, Roger Pau Monné wrote:
>>>>>> @@ -1104,6 +1092,11 @@ static int cf_check write_msr(
>>>>>>          if ( !is_hwdom_pinned_vcpu(curr) )
>>>>>>              return X86EMUL_OKAY;
>>>>>>          if ( (rdmsr_safe(MSR_AMD64_NB_CFG, temp) != 0) ||
>>>>>> +             /*
>>>>>> +              * TODO: this is broken.  What happens when dom0 is pinned but
>>>>>> +              * can't see the full system?  CF8_EXT probably ought to be a
>>>>>> +              * Xen-owned setting, and made symmetric across the system.
>>>>>> +              */
>>>>> I would assume CF8_EXT would be symmetric across the system, especially
>>>>> given that it seems to be phased out and only used in older AMD
>>>>> families that were all symmetric?
>>>> The CF8_EXT bit has been phased out.  The IO ECS functionality still exists.
>>>>
>>>> But yes, the more I think about letting dom0 play with this, the more I
>>>> think it is a fundamentally broken idea...  I bet it was from the very
>>>> early AMD Fam10h days where dom0 knew how to turn it on, and Xen was
>>>> trying to pretend it didn't have to touch any PCI devices.
>>> It seems to me Xen should set CF8_EXT on all threads (when available)
>>> and expose it to dom0, so that accesses using pci_conf_{read,write}()
>>> work as expected?
>>
>> It's per northbridge in the system, not per thread.  Hence needing the
>> spinlock protecting the CF8/CFC IO port pair access, and why MMCFG is
>> strictly preferable.
> 
> So just setting CF8_EXT_ENABLE on MSR_AMD64_NB_CFG by the BSP should
> be enough to have it enabled?  I expect all other threads will see the
> bit as set in the MSR then.

No, it's one bit per socket iirc.

Jan
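[Archive editor's note: a minimal sketch, not from the thread and not Xen code, of the CF8_EXT address encoding under discussion. The bit layout follows AMD's BKDG as I understand it (EnableCf8ExtCfg in MSR_AMD64_NB_CFG routes register-offset bits 11:8 through CONFIG_ADDRESS bits 27:24); treat the exact bit positions as an assumption to verify against the manual.]

```c
/* Sketch of a Type-1 CONFIG_ADDRESS (port 0xCF8) value with AMD's
 * CF8_EXT extension.  When EnableCf8ExtCfg is set in MSR_AMD64_NB_CFG,
 * bits 27:24 carry register-offset bits 11:8, extending the legacy
 * 256-byte config space to the full 4KiB ECS via the CF8/CFC pair. */
#include <assert.h>
#include <stdint.h>

static uint32_t cf8_addr(unsigned int bus, unsigned int dev,
                         unsigned int func, unsigned int reg /* 0-0xfff */)
{
    return 0x80000000u             /* enable bit */
           | ((reg & 0xf00) << 16) /* reg[11:8] -> CF8[27:24] (CF8_EXT) */
           | (bus  << 16)
           | (dev  << 11)
           | (func <<  8)
           | (reg & 0xfc);         /* reg[7:2], dword-aligned */
}
```

A register offset below 0x100 degenerates to a plain Type-1 address, so the encoding is backwards compatible; and because the enable bit lives in a per-northbridge MSR rather than per thread, concurrent users of the shared CF8/CFC port pair still need the spinlock mentioned in the thread, which is why MMCFG is preferable.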


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:22:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 15:22:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517971.804029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjiUW-0002S7-H1; Tue, 04 Apr 2023 15:22:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517971.804029; Tue, 04 Apr 2023 15:22:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjiUW-0002S0-EM; Tue, 04 Apr 2023 15:22:36 +0000
Received: by outflank-mailman (input) for mailman id 517971;
 Tue, 04 Apr 2023 15:22:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjiUU-0002Rr-QI
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 15:22:34 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20625.outbound.protection.outlook.com
 [2a01:111:f400:7d00::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 858a8cf5-d2fc-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 17:22:33 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8264.eurprd04.prod.outlook.com (2603:10a6:20b:3fd::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 15:22:31 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 15:22:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 858a8cf5-d2fc-11ed-85db-49a42c6b2330
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3c132e7c-8582-874a-1964-31368fbd5872@suse.com>
Date: Tue, 4 Apr 2023 17:22:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 12/15] x86/emul: Switch x86_emulate_ctxt to cpu_policy
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-13-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230404095222.1373721-13-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0195.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8264:EE_
X-MS-Office365-Filtering-Correlation-Id: f807b7e2-69ae-4c90-4094-08db35206845
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f807b7e2-69ae-4c90-4094-08db35206845
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 15:22:30.9659
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0uACBSfMpPjYGvyypp/S7Ax6HnYNQKwhvxq6ZI2/qbBpryMOtUxVMG5ApJ7NvbnV1UdDKERnMHh7bUHGre0Qtw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8264

On 04.04.2023 11:52, Andrew Cooper wrote:
> As with struct domain, retain cpuid as a valid alias for local code clarity.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:25:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 15:25:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517974.804040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjiXK-00032w-02; Tue, 04 Apr 2023 15:25:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517974.804040; Tue, 04 Apr 2023 15:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjiXJ-00032p-Sz; Tue, 04 Apr 2023 15:25:29 +0000
Received: by outflank-mailman (input) for mailman id 517974;
 Tue, 04 Apr 2023 15:25:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjiXH-00032i-Se
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 15:25:27 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20622.outbound.protection.outlook.com
 [2a01:111:f400:7d00::622])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ec1a3515-d2fc-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 17:25:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7223.eurprd04.prod.outlook.com (2603:10a6:10:1a4::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 15:25:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 15:25:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec1a3515-d2fc-11ed-b464-930f4c7d94ae
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d569eec6-10b1-3d79-c7c5-14943ed8c78f@suse.com>
Date: Tue, 4 Apr 2023 17:25:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 13/15] tools/fuzz: Rework afl-policy-fuzzer
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-14-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230404095222.1373721-14-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0172.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7223:EE_
X-MS-Office365-Filtering-Correlation-Id: 736dabe2-4459-426e-7a95-08db3520cf7b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 736dabe2-4459-426e-7a95-08db3520cf7b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 15:25:24.1808
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GHojTxkwcnM5KxxqMO0v1yoxd+LWglPSyzc8t7y099zMTc7kyXO0PthkIyVXjB9glU2aTEU3n88geekI9I3fTQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7223

On 04.04.2023 11:52, Andrew Cooper wrote:
> With cpuid_policy and msr_policy merged to form cpu_policy, merge the
> respective fuzzing logic.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:26:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 15:26:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517978.804050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjiYR-0003f6-Ec; Tue, 04 Apr 2023 15:26:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517978.804050; Tue, 04 Apr 2023 15:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjiYR-0003ez-BC; Tue, 04 Apr 2023 15:26:39 +0000
Received: by outflank-mailman (input) for mailman id 517978;
 Tue, 04 Apr 2023 15:26:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjiYQ-0003Zp-4t
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 15:26:38 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 15526ce0-d2fd-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 17:26:36 +0200 (CEST)
Received: from mail-bn7nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 11:26:31 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5214.namprd03.prod.outlook.com (2603:10b6:208:1ef::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 15:26:29 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 15:26:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15526ce0-d2fd-11ed-85db-49a42c6b2330
X-IronPort-RemoteIP: 104.47.70.109
X-IronPort-MID: 103655417
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,318,1673931600"; 
   d="scan'208";a="103655417"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CmeBLmSo8iEfGAX3TVuaJ34sj3E++BLrRrum8Z3kVjU4sNFBdGQlkblGN1rPb9lpzHzvceWb1kYxq57sen5irYB7xbOPZ6EIHionwOCbkcu15APDwfBdt4kSwjjBm5dzQ8HyHv3kKkdySjf+uo2tX1fsQnlSqRwV0RyZypzHuv9GA33wIK/J+4xxRN01fjUjADrfj1E2LwqrvLO9fPxUyvzouU7IPWVfdjxq75/esbdTBrGeth0jo/RepqIslP+wN2dAkrqR9ys2IfzwABSWQtQWXzY0bo6yvBAPWn9yFoTalXC0CrOcS9NiHnQBBO5Lft0Itwj75ZsD3yHE6ilBng==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z6y40F/Oe0xUll3zPKQlK+LZEkQkCQrpdsfcxvK2eJ4=;
 b=atvqcQmtsorYmY8GzSWFUpzGz3sy4P3lv7rj6flg8Zq19qQBVM8T+XSCZmM1ag/Dg1G8YaKYeI6p7/szcNtHClRQ/+HLOL1ufX1/711k/1f/fiJuU/qGRgt2oMQCUmKaABNyIXasY6RPIoGe+Jvr9BqaH+ppxOZ3bFc7IGjzyuLUBz5SeU8pKkqVZ8UAIHy0o5+dtOKwwp5xkc5jaVj9gew0ZopXknx+5mTalcUR4BScOaHTGuRGoRUwD8GfLzt23Cbd5NRCpI0APlfLpIfbpdfNqGI1Spcby8qImr+/loCehkPN+/SpsMEl+HrOT4DpAn//Ac/92Il3I51dKjEpbg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z6y40F/Oe0xUll3zPKQlK+LZEkQkCQrpdsfcxvK2eJ4=;
 b=csrxbuSiUtosL82hT3yP3QCHTuxkS7b1J9wznSzrUCiySvY6Kq6924991JE1B+iPxKbHqM3dA6Fs57fqXueA5e29bjgJdbuVvkXe48khYqxrtxS1ROHx59/ktGt1t408OzfKYNPBaoqmhSVmXPjIynxrkN+UaRjyJT2WvDWI1Yc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <ef49980f-1d03-387c-d343-7eb8256b6227@citrix.com>
Date: Tue, 4 Apr 2023 16:26:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 09/15] x86: Out-of-inline the policy<->featureset
 convertors
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-10-andrew.cooper3@citrix.com>
 <63395f4e-2272-5537-190e-27318d4057ea@suse.com>
In-Reply-To: <63395f4e-2272-5537-190e-27318d4057ea@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0627.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:294::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 04/04/2023 4:01 pm, Jan Beulich wrote:
> On 04.04.2023 11:52, Andrew Cooper wrote:
>> These are already getting over-large for being inline functions, and are only
>> going to grow more over time.  Out of line them, yielding the following net
>> delta from bloat-o-meter:
>>
>>   add/remove: 2/0 grow/shrink: 0/4 up/down: 276/-1877 (-1601)
>>
>> Switch to the newer cpu_policy terminology while doing so.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
> I take it you have a reason to ...
>
>> --- a/xen/lib/x86/cpuid.c
>> +++ b/xen/lib/x86/cpuid.c
>> @@ -60,6 +60,48 @@ const char *x86_cpuid_vendor_to_str(unsigned int vendor)
>>      }
>>  }
>>  
>> +void x86_cpu_policy_to_featureset(
>> +    const struct cpu_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
>> +{
>> +    fs[FEATURESET_1d]        = p->basic._1d;
>> +    fs[FEATURESET_1c]        = p->basic._1c;
>> +    fs[FEATURESET_e1d]       = p->extd.e1d;
>> +    fs[FEATURESET_e1c]       = p->extd.e1c;
>> +    fs[FEATURESET_Da1]       = p->xstate.Da1;
>> +    fs[FEATURESET_7b0]       = p->feat._7b0;
>> +    fs[FEATURESET_7c0]       = p->feat._7c0;
>> +    fs[FEATURESET_e7d]       = p->extd.e7d;
>> +    fs[FEATURESET_e8b]       = p->extd.e8b;
>> +    fs[FEATURESET_7d0]       = p->feat._7d0;
>> +    fs[FEATURESET_7a1]       = p->feat._7a1;
>> +    fs[FEATURESET_e21a]      = p->extd.e21a;
>> +    fs[FEATURESET_7b1]       = p->feat._7b1;
>> +    fs[FEATURESET_7d2]       = p->feat._7d2;
>> +    fs[FEATURESET_7c1]       = p->feat._7c1;
>> +    fs[FEATURESET_7d1]       = p->feat._7d1;
>> +}
>> +
>> +void x86_cpu_featureset_to_policy(
>> +    const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpu_policy *p)
>> +{
>> +    p->basic._1d             = fs[FEATURESET_1d];
>> +    p->basic._1c             = fs[FEATURESET_1c];
>> +    p->extd.e1d              = fs[FEATURESET_e1d];
>> +    p->extd.e1c              = fs[FEATURESET_e1c];
>> +    p->xstate.Da1            = fs[FEATURESET_Da1];
>> +    p->feat._7b0             = fs[FEATURESET_7b0];
>> +    p->feat._7c0             = fs[FEATURESET_7c0];
>> +    p->extd.e7d              = fs[FEATURESET_e7d];
>> +    p->extd.e8b              = fs[FEATURESET_e8b];
>> +    p->feat._7d0             = fs[FEATURESET_7d0];
>> +    p->feat._7a1             = fs[FEATURESET_7a1];
>> +    p->extd.e21a             = fs[FEATURESET_e21a];
>> +    p->feat._7b1             = fs[FEATURESET_7b1];
>> +    p->feat._7d2             = fs[FEATURESET_7d2];
>> +    p->feat._7c1             = fs[FEATURESET_7c1];
>> +    p->feat._7d1             = fs[FEATURESET_7d1];
>> +}
> ... add quite a few padding blanks in here, unlike in the originals?

Yeah.  There was already one misalignment, and I haven't quite decided
on the MSR syntax yet, but it's going to be longer still.

Here specifically, we've got p->arch_caps.{a,d} at a minimum, so column
width is based on the MSR name.

This is just a guesstimate of "plenty for now".

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:35:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 15:35:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517985.804060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjigL-0005Bs-95; Tue, 04 Apr 2023 15:34:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517985.804060; Tue, 04 Apr 2023 15:34:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjigL-0005Bl-5W; Tue, 04 Apr 2023 15:34:49 +0000
Received: by outflank-mailman (input) for mailman id 517985;
 Tue, 04 Apr 2023 15:34:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjigJ-0005Bf-Ol
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 15:34:47 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20601.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3a33a9d7-d2fe-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 17:34:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB9095.eurprd04.prod.outlook.com (2603:10a6:20b:446::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 15:34:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 15:34:44 +0000
X-Inumbo-ID: 3a33a9d7-d2fe-11ed-85db-49a42c6b2330
Message-ID: <74e716df-b27f-a86b-a257-fcdc2526a820@suse.com>
Date: Tue, 4 Apr 2023 17:34:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 14/15] libx86: Update library API for cpu_policy
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-15-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230404095222.1373721-15-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0209.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 04.04.2023 11:52, Andrew Cooper wrote:
> Adjust the API and comments appropriately.
> 
> x86_cpu_policy_fill_native() will eventually contain MSR reads, but leave a
> TODO in the short term.

That'll then require passing in a callback function anyway, such that
different environments can use different ways of getting at the wanted
MSR values. (IOW a bigger change anyway.)

> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

What about x86_cpuid_lookup_deep_deps()? That'll be looking at more than
just CPUID bits as well, won't it?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:36:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 15:36:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517989.804070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjihs-0005le-Iu; Tue, 04 Apr 2023 15:36:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517989.804070; Tue, 04 Apr 2023 15:36:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjihs-0005lX-Fd; Tue, 04 Apr 2023 15:36:24 +0000
Received: by outflank-mailman (input) for mailman id 517989;
 Tue, 04 Apr 2023 15:36:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjihr-0005lQ-9q
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 15:36:23 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 726b7179-d2fe-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 17:36:21 +0200 (CEST)
Received: from mail-dm6nam12lp2177.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 11:36:16 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5263.namprd03.prod.outlook.com (2603:10b6:208:1f1::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 15:36:14 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 15:36:14 +0000
X-Inumbo-ID: 726b7179-d2fe-11ed-85db-49a42c6b2330
Message-ID: <f76f8bfe-e9ed-11ec-d0d2-16292e1adeb2@citrix.com>
Date: Tue, 4 Apr 2023 16:36:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 14/15] libx86: Update library API for cpu_policy
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-15-andrew.cooper3@citrix.com>
 <74e716df-b27f-a86b-a257-fcdc2526a820@suse.com>
In-Reply-To: <74e716df-b27f-a86b-a257-fcdc2526a820@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LNXP265CA0010.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5e::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MN2PR03MB5263:EE_
X-MS-Office365-Filtering-Correlation-Id: f33def41-2bbd-46c8-f4a8-08db35225277
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f33def41-2bbd-46c8-f4a8-08db35225277
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 15:36:13.9605
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mQFCXW8SpmHgk7fDe+7P+unfZOzafi0yafXJcBz5nPVDjaZcLkPj4p72Opn9HxENgQOjbsUO3hdj3W8wqPX2U7QKEIpfb0uK8cdTNiyTxMQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5263

On 04/04/2023 4:34 pm, Jan Beulich wrote:
> On 04.04.2023 11:52, Andrew Cooper wrote:
>> Adjust the API and comments appropriately.
>>
>> x86_cpu_policy_fill_native() will eventually contain MSR reads, but leave a
>> TODO in the short term.
> That'll then require passing in a callback function anyway, such that
> different environments can use different ways of getting at the wanted
> MSR values. (IOW a bigger change anyway.)

We've already got #if __XEN__'s in there.  I was going to add one more
in the short term.

>
>> No practical change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> What about x86_cpuid_lookup_deep_deps()? That'll be looking at more than
> just CPUID bits as well, won't it?

Good point.  I'll adjust.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:37:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 15:37:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517994.804080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjiih-0006KZ-Vz; Tue, 04 Apr 2023 15:37:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517994.804080; Tue, 04 Apr 2023 15:37:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjiih-0006KS-TH; Tue, 04 Apr 2023 15:37:15 +0000
Received: by outflank-mailman (input) for mailman id 517994;
 Tue, 04 Apr 2023 15:37:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=re+z=73=epam.com=prvs=8458147038=oleksandr_tyshchenko@srs-se1.protection.inumbo.net>)
 id 1pjiih-0006HI-05
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 15:37:15 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 905ea8b0-d2fe-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 17:37:12 +0200 (CEST)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 334E34CU009007; Tue, 4 Apr 2023 15:37:01 GMT
Received: from eur03-dba-obe.outbound.protection.outlook.com
 (mail-dbaeur03lp2173.outbound.protection.outlook.com [104.47.51.173])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3prnabrcac-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 04 Apr 2023 15:37:00 +0000
Received: from DB8PR03MB6108.eurprd03.prod.outlook.com (2603:10a6:10:ed::15)
 by AM7PR03MB6497.eurprd03.prod.outlook.com (2603:10a6:20b:1be::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 15:36:58 +0000
Received: from DB8PR03MB6108.eurprd03.prod.outlook.com
 ([fe80::bdd:a497:66a0:a342]) by DB8PR03MB6108.eurprd03.prod.outlook.com
 ([fe80::bdd:a497:66a0:a342%5]) with mapi id 15.20.6254.033; Tue, 4 Apr 2023
 15:36:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 905ea8b0-d2fe-11ed-b464-930f4c7d94ae
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
From: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>
To: Juergen Gross <jgross@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Dan
 Carpenter <error27@gmail.com>,
        "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2] xen/pvcalls: don't call bind_evtchn_to_irqhandler()
 under lock
Thread-Topic: [PATCH v2] xen/pvcalls: don't call bind_evtchn_to_irqhandler()
 under lock
Thread-Index: AQHZZg59s4tqkaRFM0qaTjkzIaVOrq8bSm4A
Date: Tue, 4 Apr 2023 15:36:57 +0000
Message-ID: <9c0b195d-181e-f34c-6b42-7d7e75da901c@epam.com>
References: <20230403092711.15285-1-jgross@suse.com>
In-Reply-To: <20230403092711.15285-1-jgross@suse.com>
Accept-Language: en-US, ru-RU
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: DB8PR03MB6108:EE_|AM7PR03MB6497:EE_
x-ms-office365-filtering-correlation-id: ff1414c5-0aad-4191-ae65-08db35226d39
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <79D6B25CF8F4024C906AC9FE71CB979D@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DB8PR03MB6108.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ff1414c5-0aad-4191-ae65-08db35226d39
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Apr 2023 15:36:58.0108
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: heFIkc22fH/IuXh4nrj466lbfx7hM1xyKvNw0Nbl7mck+r3PePMDcUmJY+NqNJ+vLgXYG4oIXBLigLA2qySVJrM8AdXmZX2GlMJI2ssZRJY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR03MB6497
X-Proofpoint-ORIG-GUID: YrqL6bX1DrWCmLv2kdDaV9yGdaEas3wu
X-Proofpoint-GUID: YrqL6bX1DrWCmLv2kdDaV9yGdaEas3wu
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22
 definitions=2023-04-04_07,2023-04-04_04,2023-02-09_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 adultscore=0 mlxscore=0
 clxscore=1015 suspectscore=0 phishscore=0 mlxlogscore=769 spamscore=0
 bulkscore=0 priorityscore=1501 lowpriorityscore=0 impostorscore=0
 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2303200000 definitions=main-2304040144



On 03.04.23 12:27, Juergen Gross wrote:


Hello Juergen

> bind_evtchn_to_irqhandler() shouldn't be called under spinlock, as it
> can sleep.
> 
> This requires to move the calls of create_active() out of the locked
> regions. This is no problem, as the worst which could happen would be
> a spurious call of the interrupt handler, causing a spurious wake_up().
> 
> Reported-by: Dan Carpenter <error27@gmail.com>
> Link: https://urldefense.com/v3/__https://lore.kernel.org/lkml/Y*JUIl64UDmdkboh@kadam/__;Kw!!GF_29dbcQIUBPA!wTyU032PQPxqlpIfuWRwb-DYE1K8P0bRWJyJICa7IEbAwQ0_aeZwknAWwxJ_cv_tWGY42f5NPgn6JHtZsiGP$ [lore[.]kernel[.]org]
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - remove stale spin_unlock() (Oleksandr Tyshchenko)


Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>


> ---
>   drivers/xen/pvcalls-front.c | 46 +++++++++++++++++++++----------------
>   1 file changed, 26 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> index d5d589bda243..b72ee9379d77 100644
> --- a/drivers/xen/pvcalls-front.c
> +++ b/drivers/xen/pvcalls-front.c
> @@ -227,22 +227,30 @@ static irqreturn_t pvcalls_front_event_handler(int irq, void *dev_id)
>   
>   static void free_active_ring(struct sock_mapping *map);
>   
> -static void pvcalls_front_free_map(struct pvcalls_bedata *bedata,
> -				   struct sock_mapping *map)
> +static void pvcalls_front_destroy_active(struct pvcalls_bedata *bedata,
> +					 struct sock_mapping *map)
>   {
>   	int i;
>   
>   	unbind_from_irqhandler(map->active.irq, map);
>   
> -	spin_lock(&bedata->socket_lock);
> -	if (!list_empty(&map->list))
> -		list_del_init(&map->list);
> -	spin_unlock(&bedata->socket_lock);
> +	if (bedata) {
> +		spin_lock(&bedata->socket_lock);
> +		if (!list_empty(&map->list))
> +			list_del_init(&map->list);
> +		spin_unlock(&bedata->socket_lock);
> +	}
>   
>   	for (i = 0; i < (1 << PVCALLS_RING_ORDER); i++)
>   		gnttab_end_foreign_access(map->active.ring->ref[i], NULL);
>   	gnttab_end_foreign_access(map->active.ref, NULL);
>   	free_active_ring(map);
> +}
> +
> +static void pvcalls_front_free_map(struct pvcalls_bedata *bedata,
> +				   struct sock_mapping *map)
> +{
> +	pvcalls_front_destroy_active(bedata, map);
>   
>   	kfree(map);
>   }
> @@ -433,19 +441,18 @@ int pvcalls_front_connect(struct socket *sock, struct sockaddr *addr,
>   		pvcalls_exit_sock(sock);
>   		return ret;
>   	}
> -
> -	spin_lock(&bedata->socket_lock);
> -	ret = get_request(bedata, &req_id);
> +	ret = create_active(map, &evtchn);
>   	if (ret < 0) {
> -		spin_unlock(&bedata->socket_lock);
>   		free_active_ring(map);
>   		pvcalls_exit_sock(sock);
>   		return ret;
>   	}
> -	ret = create_active(map, &evtchn);
> +
> +	spin_lock(&bedata->socket_lock);
> +	ret = get_request(bedata, &req_id);
>   	if (ret < 0) {
>   		spin_unlock(&bedata->socket_lock);
> -		free_active_ring(map);
> +		pvcalls_front_destroy_active(NULL, map);
>   		pvcalls_exit_sock(sock);
>   		return ret;
>   	}
> @@ -821,28 +828,27 @@ int pvcalls_front_accept(struct socket *sock, struct socket *newsock, int flags)
>   		pvcalls_exit_sock(sock);
>   		return ret;
>   	}
> -	spin_lock(&bedata->socket_lock);
> -	ret = get_request(bedata, &req_id);
> +	ret = create_active(map2, &evtchn);
>   	if (ret < 0) {
> -		clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
> -			  (void *)&map->passive.flags);
> -		spin_unlock(&bedata->socket_lock);
>   		free_active_ring(map2);
>   		kfree(map2);
> +		clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
> +			  (void *)&map->passive.flags);
>   		pvcalls_exit_sock(sock);
>   		return ret;
>   	}
>   
> -	ret = create_active(map2, &evtchn);
> +	spin_lock(&bedata->socket_lock);
> +	ret = get_request(bedata, &req_id);
>   	if (ret < 0) {
> -		free_active_ring(map2);
> -		kfree(map2);
>   		clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
>   			  (void *)&map->passive.flags);
>   		spin_unlock(&bedata->socket_lock);
> +		pvcalls_front_free_map(bedata, map2);
>   		pvcalls_exit_sock(sock);
>   		return ret;
>   	}
> +
>   	list_add_tail(&map2->list, &bedata->socket_mappings);
>   
>   	req = RING_GET_REQUEST(&bedata->ring, req_id);


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:37:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 15:37:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.517995.804090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjij5-0006nq-7I; Tue, 04 Apr 2023 15:37:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 517995.804090; Tue, 04 Apr 2023 15:37:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjij5-0006nj-4R; Tue, 04 Apr 2023 15:37:39 +0000
Received: by outflank-mailman (input) for mailman id 517995;
 Tue, 04 Apr 2023 15:37:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjij4-0006nR-AF
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 15:37:38 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0601.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a019c2d6-d2fe-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 17:37:37 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7818.eurprd04.prod.outlook.com (2603:10a6:10:1f2::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 15:37:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 15:37:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a019c2d6-d2fe-11ed-85db-49a42c6b2330
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5abc440d-a484-34a1-589b-420902d44d6c@suse.com>
Date: Tue, 4 Apr 2023 17:37:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 15/15] x86: Remove temporary {cpuid,msr}_policy defines
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-16-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230404095222.1373721-16-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0117.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7818:EE_
X-MS-Office365-Filtering-Correlation-Id: 893548af-ce87-480e-3bbb-08db352282ae
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 893548af-ce87-480e-3bbb-08db352282ae
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 15:37:34.2467
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JF+c51SdEOCP2+ywgvXhNrF7Aei2KGMr4dMfGUhuEiXXXIjJpDmhNSgd5Yj48w+riBrDnEhZxA8fcFPWgw8NiQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7818

On 04.04.2023 11:52, Andrew Cooper wrote:
> With all code areas updated, drop the temporary defines and adjust all
> remaining users.
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:45:25 2023
Message-ID: <0c43546e-0333-af19-efa5-71cfaf5efa3f@citrix.com>
Date: Tue, 4 Apr 2023 16:45:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 11/15] x86/boot: Merge CPUID policy initialisation
 logic into cpu-policy.c
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-12-andrew.cooper3@citrix.com>
 <087536dd-96cf-84f0-4b8f-d4de4d6bd093@suse.com>
In-Reply-To: <087536dd-96cf-84f0-4b8f-d4de4d6bd093@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 04/04/2023 4:16 pm, Jan Beulich wrote:
> On 04.04.2023 11:52, Andrew Cooper wrote:
>> Switch to the newer cpu_policy nomenclature.  Do some easy cleanup of
>> includes.
>>
>> No practical change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> v2:
>>  * New
>> ---
>>  xen/arch/x86/cpu-policy.c             | 752 ++++++++++++++++++++++++
>>  xen/arch/x86/cpuid.c                  | 817 +-------------------------
>>  xen/arch/x86/hvm/hvm.c                |   1 -
>>  xen/arch/x86/include/asm/cpu-policy.h |   6 +
>>  xen/arch/x86/include/asm/cpuid.h      |  11 +-
>>  xen/arch/x86/pv/domain.c              |   1 +
>>  xen/arch/x86/setup.c                  |   2 -
>>  7 files changed, 764 insertions(+), 826 deletions(-)
>>
>> diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
>> index f6a2317ed7bd..83186e940ca7 100644
>> --- a/xen/arch/x86/cpu-policy.c
>> +++ b/xen/arch/x86/cpu-policy.c
>> @@ -1,13 +1,19 @@
>>  /* SPDX-License-Identifier: GPL-2.0-or-later */
>>  #include <xen/cache.h>
>>  #include <xen/kernel.h>
>> +#include <xen/param.h>
>>  #include <xen/sched.h>
>>  
>>  #include <xen/lib/x86/cpu-policy.h>
>>  
>> +#include <asm/amd.h>
>>  #include <asm/cpu-policy.h>
>> +#include <asm/hvm/nestedhvm.h>
>> +#include <asm/hvm/svm/svm.h>
>>  #include <asm/msr-index.h>
>> +#include <asm/paging.h>
>>  #include <asm/setup.h>
>> +#include <asm/xstate.h>
>>  
>>  struct cpu_policy __ro_after_init     raw_cpu_policy;
>>  struct cpu_policy __ro_after_init    host_cpu_policy;
>> @@ -20,10 +26,332 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
>>  struct cpu_policy __ro_after_init hvm_def_cpu_policy;
>>  #endif
>>  
>> +const uint32_t known_features[] = INIT_KNOWN_FEATURES;
>> +
>> +static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
>> +static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
>> +static const uint32_t __initconst hvm_hap_max_featuremask[] =
>> +    INIT_HVM_HAP_MAX_FEATURES;
>> +static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
>> +static const uint32_t __initconst hvm_shadow_def_featuremask[] =
>> +    INIT_HVM_SHADOW_DEF_FEATURES;
>> +static const uint32_t __initconst hvm_hap_def_featuremask[] =
>> +    INIT_HVM_HAP_DEF_FEATURES;
>> +static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
>> +
>> +static const struct feature_name {
>> +    const char *name;
>> +    unsigned int bit;
>> +} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
>> +
>> +/*
>> + * Parse a list of cpuid feature names -> bool, calling the callback for any
>> + * matches found.
>> + *
>> + * always_inline, because this is init code only and we really don't want a
>> + * function pointer call in the middle of the loop.
>> + */
>> +static int __init always_inline parse_cpuid(
>> +    const char *s, void (*callback)(unsigned int feat, bool val))
>> +{
>> +    const char *ss;
>> +    int val, rc = 0;
>> +
>> +    do {
>> +        const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
>> +        const char *feat;
>> +
>> +        ss = strchr(s, ',');
>> +        if ( !ss )
>> +            ss = strchr(s, '\0');
>> +
>> +        /* Skip the 'no-' prefix for name comparisons. */
>> +        feat = s;
>> +        if ( strncmp(s, "no-", 3) == 0 )
>> +            feat += 3;
>> +
>> +        /* (Re)initialise lhs and rhs for binary search. */
>> +        lhs = feature_names;
>> +        rhs = feature_names + ARRAY_SIZE(feature_names);
>> +
>> +        while ( lhs < rhs )
>> +        {
>> +            int res;
>> +
>> +            mid = lhs + (rhs - lhs) / 2;
>> +            res = cmdline_strcmp(feat, mid->name);
>> +
>> +            if ( res < 0 )
>> +            {
>> +                rhs = mid;
>> +                continue;
>> +            }
>> +            if ( res > 0 )
>> +            {
>> +                lhs = mid + 1;
>> +                continue;
>> +            }
>> +
>> +            if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
>> +            {
>> +                callback(mid->bit, val);
>> +                mid = NULL;
>> +            }
>> +
>> +            break;
>> +        }
>> +
>> +        /*
>> +         * Mid being NULL means that the name and boolean were successfully
>> +         * identified.  Everything else is an error.
>> +         */
>> +        if ( mid )
>> +            rc = -EINVAL;
>> +
>> +        s = ss + 1;
>> +    } while ( *ss );
>> +
>> +    return rc;
>> +}
>> +
>> +static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
>> +{
>> +    if ( !val )
>> +        setup_clear_cpu_cap(feat);
>> +    else if ( feat == X86_FEATURE_RDRAND &&
>> +              (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
>> +        setup_force_cpu_cap(X86_FEATURE_RDRAND);
>> +}
>> +
>> +static int __init cf_check parse_xen_cpuid(const char *s)
>> +{
>> +    return parse_cpuid(s, _parse_xen_cpuid);
>> +}
>> +custom_param("cpuid", parse_xen_cpuid);
>> +
>> +static bool __initdata dom0_cpuid_cmdline;
>> +static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
>> +static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
>> +
>> +static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
>> +{
>> +    __set_bit  (feat, val ? dom0_enable_feat  : dom0_disable_feat);
>> +    __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
>> +}
>> +
>> +static int __init cf_check parse_dom0_cpuid(const char *s)
>> +{
>> +    dom0_cpuid_cmdline = true;
>> +
>> +    return parse_cpuid(s, _parse_dom0_cpuid);
>> +}
>> +custom_param("dom0-cpuid", parse_dom0_cpuid);
> Unless the plan is to completely remove cpuid.c, this command line
> handling would imo better fit there. I understand that to keep
> dom0_{en,dis}able_feat[] static, the _parse_dom0_cpuid() helper
> would then need to be exposed (under a different name), but I think
> that's quite okay, the more that it's an __init function.

I'm not sure I agree.  (I did debate this for a while before moving the
cmdline parsing.)

I do have some cleanup plans which will move code into cpuid.c, and
guest_cpuid() absolutely still lives there, but for these options
specifically, the moment I add MSR_ARCH_CAPS into a featureset, their
bit names will work here too.

So arguably {dom0-}cpuid= won't be a great name moving forwards, but it
is absolutely more cpu-policy.c content than cpuid.c content.

We can't get rid of the existing cmdline names, and I think documenting
our way out of the "it's not only CPUID bits any more" problem is better
than adding yet another name.
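The name -> bool parsing being discussed boils down to a binary search
over a sorted name table, with an optional "no-" prefix inverting the
value.  A standalone sketch of that pattern (the table entries here are
hypothetical stand-ins, and plain strcmp() replaces Xen's
cmdline_strcmp()):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for Xen's feature_names[]; must be sorted by name. */
static const struct feature_name {
    const char *name;
    unsigned int bit;
} feature_names[] = {
    { "avx",    2 },
    { "rdrand", 5 },
    { "smep",   7 },
};

/*
 * Resolve "name" or "no-name" to a feature bit, writing the boolean to
 * *val.  Returns -1 if the name is unknown.  Plain strcmp() stands in
 * for cmdline_strcmp(), which additionally treats '-' and '_' as equal.
 */
static int lookup_feature(const char *s, int *val)
{
    const char *feat = s;
    size_t lhs = 0, rhs = sizeof(feature_names) / sizeof(feature_names[0]);

    *val = 1;
    if ( strncmp(s, "no-", 3) == 0 )
    {
        feat += 3;
        *val = 0;
    }

    while ( lhs < rhs )
    {
        size_t mid = lhs + (rhs - lhs) / 2;
        int res = strcmp(feat, feature_names[mid].name);

        if ( res < 0 )
            rhs = mid;           /* search lower half */
        else if ( res > 0 )
            lhs = mid + 1;       /* search upper half */
        else
            return feature_names[mid].bit;
    }

    return -1;
}
```

With this table, lookup_feature("no-smep", &val) yields bit 7 with
val == 0, mirroring the parse_boolean()-plus-callback step in the real
parser.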

>> @@ -149,3 +716,188 @@ int init_domain_cpu_policy(struct domain *d)
>>  
>>      return 0;
>>  }
>> +
>> +void recalculate_cpuid_policy(struct domain *d)
>> +{
>> +    struct cpu_policy *p = d->arch.cpuid;
>> +    const struct cpu_policy *max = is_pv_domain(d)
>> +        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
>> +        : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
> While this is how the original code was, wouldn't this want to use
> hvm_enabled, just like init_guest_cpu_policies() does (patch 10)?

No.  That will fail to link.

This trickery is necessary to drop the compiler-visible reference to
hvm_max_cpu_policy in !CONFIG_HVM builds.

This function is only called after the domain type has already been
established, which precludes calling it in a case where max will
evaluate to NULL, hence the ASSERT_UNREACHABLE() just below.
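For reference, the reason this links is that IS_ENABLED() expands to a
compile-time constant, so the dead arm of the conditional folds away
before any reference to the missing object is emitted.  A minimal
standalone illustration, with the CONFIG values and policy objects as
stand-ins (both objects are defined here so the sketch builds either
way):

```c
#include <stddef.h>

/* Stand-ins: in Xen these come from Kconfig and asm/cpu-policy.h. */
#define IS_ENABLED(option) option
#define CONFIG_PV  1
#define CONFIG_HVM 0            /* pretend this is a !HVM build */

struct cpu_policy { int x; };

static struct cpu_policy pv_max_cpu_policy;
static struct cpu_policy hvm_max_cpu_policy;

static const struct cpu_policy *max_policy(int is_pv)
{
    /*
     * IS_ENABLED(CONFIG_HVM) is the constant 0, so the second arm folds
     * to NULL at compile time; in a real !CONFIG_HVM build this is what
     * stops an undefined reference to hvm_max_cpu_policy being emitted.
     */
    return is_pv
        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
        : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
}
```

With these stand-in values, max_policy(0) returns NULL — the case that
recalculate_cpuid_policy() can never hit once the domain type has been
established, hence the ASSERT_UNREACHABLE().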

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 15:57:47 2023
Date: Tue, 4 Apr 2023 17:57:11 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Message-ID: <ZCxI18gb8zK5X+nR@Air-de-Roger>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <ZCv3+cpzJ52Y679G@Air-de-Roger>
 <3752672b-f4a0-5ffb-9759-dd315ce31079@suse.com>
 <ZCwM1SfCAfh2koBD@Air-de-Roger>
 <ac13fa57-ceb2-0aaa-dcfa-42d8d01ee6d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ac13fa57-ceb2-0aaa-dcfa-42d8d01ee6d7@suse.com>
X-ClientProxiedBy: LO4P123CA0409.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:189::18) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ac7192a8-c1a5-49c6-3c33-08db352543bf
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 15:57:17.2939
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB7101

On Tue, Apr 04, 2023 at 04:24:16PM +0200, Jan Beulich wrote:
> On 04.04.2023 13:41, Roger Pau Monné wrote:
> > On Tue, Apr 04, 2023 at 12:31:31PM +0200, Jan Beulich wrote:
> >> On 04.04.2023 12:12, Roger Pau Monné wrote:
> >>> On Wed, Feb 15, 2023 at 03:54:11PM +0100, Jan Beulich wrote:
> >>>> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
> >>>> applies to guests also when run on a 64-bit hypervisor: The "extended
> >>>> CR3" format has to be used there as well, to fit the address in the only
> >>>> 32-bit wide register there. As a result it was a mistake that the check
> >>>> was never enabled for that case, and was then mistakenly deleted in the
> >>>> course of removal of 32-bit-Xen code (218adf199e68 ["x86: We can assume
> >>>> CONFIG_PAGING_LEVELS==4"]).
> >>>>
> >>>> Similarly during Dom0 construction kernel awareness needs to be taken
> >>>> into account, and respective code was again mistakenly never enabled for
> >>>> 32-bit Dom0 when running on 64-bit Xen (and thus wrongly deleted by
> >>>> 5d1181a5ea5e ["xen: Remove x86_32 build target"]).
> >>>>
> >>>> At the same time restrict enabling of the assist for Dom0 to just the
> >>>> 32-bit case. Furthermore there's no need for an atomic update there.
> >>>>
> >>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>>> ---
> >>>> I was uncertain whether to add a check to the CR3 guest read path,
> >>>> raising e.g. #GP(0) when the value read wouldn't fit but also may not
> >>>> be converted to "extended" format (overflow is possible there in
> >>>> principle because of the control tools "slack" in promote_l3_table()).
> >>>>
> >>>> In that context I was puzzled to find no check on the CR3 guest write
> >>>> path even in 4.2: A guest (bogusly) setting the PCD or PWT bits (or any
> >>>> of the low reserved ones) could observe anomalous behavior rather than
> >>>> plain failure.
> >>>>
> >>>> As to a Fixes: tag - it's pretty unclear which of the many original
> >>>> 32-on-64 changes to blame. I don't think the two cited commits should
> >>>> be referenced there, as they didn't break anything that wasn't already
> >>>> broken.
> >>>>
> >>>> --- a/xen/arch/x86/mm.c
> >>>> +++ b/xen/arch/x86/mm.c
> >>>> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
> >>>>      unsigned int   partial_flags = page->partial_flags;
> >>>>      l3_pgentry_t   l3e = l3e_empty();
> >>>>  
> >>>> +    /*
> >>>> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
> >>>> +     * understand the weird 'extended cr3' format for dealing with high-order
> >>>> +     * address bits. We cut some slack for control tools (before vcpu0 is
> >>>> +     * initialised).
> >>>
> >>> Don't we then need some check in the vCPU init path to assure that the
> >>> cr3 is < 32bits if we allow those to initially be set?
> >>>
> >>> Or will the initialization unconditionally overwrite any previous cr3
> >>> value?
> >>
> >> That's not the way I understand this "cut some slack". Instead I read it
> >> to be meant to cover for the VM-assist bit not being set, yet. Beyond
> >> that it is assumed to be tool stack's responsibility to constrain
> >> addresses suitably. If it doesn't, it'll simply break the guest. (There
> >> is some guessing on my part involved here, as the original introduction
> >> of that code didn't further explain things.)
> > 
> > If it's just the guest that's broken I would think it's fine.  As long
> > as such mismatch doesn't cause issues in the hypervisor internal state.
> > 
> > Did you see a toolstack setting such entries before pae_extended_cr3
> > is set?
> 
> To be honest - I didn't look. As said in the longer reply to Andrew, I
> think it is more logical this way (the page table root already being
> validated as an L3 table when vCPU 0 is initialized, which includes
> setting its CR3). Hence even if right now the order was the other way
> around (which I doubt it is), I wouldn't want to make it impossible to
> restore the original ordering again.

I think it would be better if we could report an error already at
domain creation time when the toolstack attempts to create a domain
that the hypervisor knows is not going to work properly, rather than
allowing the creation and having the guest fail in possibly
non-obvious ways.

It seems to me, however, that we would need to fix xc_dom_boot_image()
in order to set up the vCPU before creating the initial page tables
(currently the ->setup_pgtables() hook is called before the ->vcpu()
hook).

So I don't think this is strictly worse than what we have, but it
would also be nice to get things sorted out so that the toolstack's
ability to shoot itself in the foot is limited.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:07:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 16:07:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518014.804119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjBg-0003lo-1l; Tue, 04 Apr 2023 16:07:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518014.804119; Tue, 04 Apr 2023 16:07:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjBf-0003lh-VF; Tue, 04 Apr 2023 16:07:11 +0000
Received: by outflank-mailman (input) for mailman id 518014;
 Tue, 04 Apr 2023 16:07:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjjBe-0003lX-1C; Tue, 04 Apr 2023 16:07:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjjBd-00086J-Uw; Tue, 04 Apr 2023 16:07:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjjBd-0002Tf-Et; Tue, 04 Apr 2023 16:07:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjjBd-0003So-EA; Tue, 04 Apr 2023 16:07:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6n06JrAZS3LVFrbsQfYq6ZCfdJtlwUHjqhGaejujYdo=; b=zyYMPm84PUmkpyg4CrTmmo+BPa
	5rEVBxpaUJzDwtO1l8iEOVuqWIfGwldcyeyaO1dLfswZgt3Xegek1ILuA25u1BMEHteONNX32r8/U
	ph8Ztnsdz3kTmDLfHAblaZ/823Dux9FmfAi8v7jyXZlwTL0HF3PLJtJgy48JvfS47t1U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180137-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180137: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e
X-Osstest-Versions-That:
    xen=bfa2e6a246225233f09a2523939e01dcf83bca4c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Apr 2023 16:07:09 +0000

flight 180137 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180137/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e
baseline version:
 xen                  bfa2e6a246225233f09a2523939e01dcf83bca4c

Last test of basis   180119  2023-04-03 13:00:25 Z    1 days
Testing same since   180137  2023-04-04 14:03:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   bfa2e6a246..658fcb7ac9  658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:07:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 16:07:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518017.804130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjBn-00042m-9r; Tue, 04 Apr 2023 16:07:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518017.804130; Tue, 04 Apr 2023 16:07:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjBn-00042f-6Y; Tue, 04 Apr 2023 16:07:19 +0000
Received: by outflank-mailman (input) for mailman id 518017;
 Tue, 04 Apr 2023 16:07:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gCXw=73=citrix.com=prvs=451435b33=sergey.dyasli@srs-se1.protection.inumbo.net>)
 id 1pjjBm-00042B-23
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 16:07:18 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c3963683-d302-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 18:07:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3963683-d302-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680624435;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=QsmKjz9FSvqTsAQ5kNWHSGgpoV1tw0uvDbd0t6jEgvo=;
  b=NauPj0zhH9qqNc6uPFGBX8VXr0Up6GZZd0gvygJjwLe7C7Koahpvl82A
   0MpOcsKx1psg4lBqMqXAlqfnIiBHHHgsZTkqjp0iffgB1d2qv1x6/gHy6
   GSAVjeFsxOw+iaWQY/ITmdWbaCOZWZ9XB5qtHI3G4ndR1UTWIelxf/4ou
   0=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104328623
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Sergey Dyasli <sergey.dyasli@citrix.com>
Subject: [PATCH v4 2/3] x86/platform: introduce XENPF_get_ucode_revision
Date: Tue, 4 Apr 2023 17:06:54 +0100
Message-ID: <20230404160655.2354-3-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230404160655.2354-1-sergey.dyasli@citrix.com>
References: <20230404160655.2354-1-sergey.dyasli@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Currently it's impossible to get a CPU's microcode revision from Xen
after late loading without looking into the Xen logs, which is not
always convenient.

Add a new platform op in order to get the required data from Xen, and
provide a wrapper for libxenctrl.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v3 --> v4:
- clarified the commit message
- Renamed "ucode version" to "ucode revision"
- Removed DECLARE_PLATFORM_OP and NULL checks
- Added a TODO comment about parked CPUs
- Renamed struct xenpf_ucode_revision fields
---
 tools/include/xenctrl.h                  |  2 ++
 tools/libs/ctrl/xc_misc.c                | 18 +++++++++++++++
 xen/arch/x86/platform_hypercall.c        | 29 ++++++++++++++++++++++++
 xen/arch/x86/x86_64/platform_hypercall.c |  4 ++++
 xen/include/public/platform.h            | 11 +++++++++
 xen/include/xlat.lst                     |  1 +
 6 files changed, 65 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 34b3b25289..1149f805ba 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1187,6 +1187,8 @@ int xc_cputopoinfo(xc_interface *xch, unsigned *max_cpus,
                    xc_cputopo_t *cputopo);
 int xc_microcode_update(xc_interface *xch, const void *buf, size_t len);
 int xc_get_cpu_version(xc_interface *xch, struct xenpf_pcpu_version *cpu_ver);
+int xc_get_ucode_revision(xc_interface *xch,
+                          struct xenpf_ucode_revision *ucode_rev);
 int xc_numainfo(xc_interface *xch, unsigned *max_nodes,
                 xc_meminfo_t *meminfo, uint32_t *distance);
 int xc_pcitopoinfo(xc_interface *xch, unsigned num_devs,
diff --git a/tools/libs/ctrl/xc_misc.c b/tools/libs/ctrl/xc_misc.c
index 90d50faa4f..4159294b2e 100644
--- a/tools/libs/ctrl/xc_misc.c
+++ b/tools/libs/ctrl/xc_misc.c
@@ -243,6 +243,24 @@ int xc_get_cpu_version(xc_interface *xch, struct xenpf_pcpu_version *cpu_ver)
     return 0;
 }
 
+int xc_get_ucode_revision(xc_interface *xch,
+                          struct xenpf_ucode_revision *ucode_rev)
+{
+    int ret;
+    struct xen_platform_op op = {
+        .cmd = XENPF_get_ucode_revision,
+        .u.ucode_revision.cpu = ucode_rev->cpu,
+    };
+
+    ret = do_platform_op(xch, &op);
+    if ( ret != 0 )
+        return ret;
+
+    *ucode_rev = op.u.ucode_revision;
+
+    return 0;
+}
+
 int xc_cputopoinfo(xc_interface *xch, unsigned *max_cpus,
                    xc_cputopo_t *cputopo)
 {
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index a2d9526355..9ff2da8fc3 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -640,6 +640,35 @@ ret_t do_platform_op(
     }
     break;
 
+    case XENPF_get_ucode_revision:
+    {
+        struct xenpf_ucode_revision *rev = &op->u.ucode_revision;
+
+        if ( !get_cpu_maps() )
+        {
+            ret = -EBUSY;
+            break;
+        }
+
+        /* TODO: make it possible to know ucode revisions for parked CPUs */
+        if ( (rev->cpu >= nr_cpu_ids) || !cpu_online(rev->cpu) )
+            ret = -ENOENT;
+        else
+        {
+            const struct cpu_signature *sig = &per_cpu(cpu_sig, rev->cpu);
+
+            rev->signature = sig->sig;
+            rev->pf = sig->pf;
+            rev->revision = sig->rev;
+        }
+
+        put_cpu_maps();
+
+        if ( __copy_field_to_guest(u_xenpf_op, op, u.ucode_revision) )
+            ret = -EFAULT;
+    }
+    break;
+
     case XENPF_cpu_online:
     {
         int cpu = op->u.cpu_ol.cpuid;
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index 5bf6b958d2..99440f4076 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -28,6 +28,10 @@ CHECK_pf_pcpuinfo;
 CHECK_pf_pcpu_version;
 #undef xen_pf_pcpu_version
 
+#define xen_pf_ucode_revision xenpf_ucode_revision
+CHECK_pf_ucode_revision;
+#undef xen_pf_ucode_revision
+
 #define xen_pf_enter_acpi_sleep xenpf_enter_acpi_sleep
 CHECK_pf_enter_acpi_sleep;
 #undef xen_pf_enter_acpi_sleep
diff --git a/xen/include/public/platform.h b/xen/include/public/platform.h
index 60caa5ce7e..15777b5416 100644
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -614,6 +614,16 @@ DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);
 typedef struct dom0_vga_console_info xenpf_dom0_console_t;
 DEFINE_XEN_GUEST_HANDLE(xenpf_dom0_console_t);
 
+#define XENPF_get_ucode_revision 65
+struct xenpf_ucode_revision {
+    uint32_t cpu;             /* IN:  CPU number to get the revision from.  */
+    uint32_t signature;       /* OUT: CPU signature (CPUID.1.EAX).          */
+    uint32_t pf;              /* OUT: Platform Flags (Intel only)           */
+    uint32_t revision;        /* OUT: Microcode Revision.                   */
+};
+typedef struct xenpf_ucode_revision xenpf_ucode_revision_t;
+DEFINE_XEN_GUEST_HANDLE(xenpf_ucode_revision_t);
+
 /*
  * ` enum neg_errnoval
  * ` HYPERVISOR_platform_op(const struct xen_platform_op*);
@@ -645,6 +655,7 @@ struct xen_platform_op {
         xenpf_resource_op_t           resource_op;
         xenpf_symdata_t               symdata;
         xenpf_dom0_console_t          dom0_console;
+        xenpf_ucode_revision_t        ucode_revision;
         uint8_t                       pad[128];
     } u;
 };
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index d601a8a984..9c41948514 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -157,6 +157,7 @@
 ?	xenpf_pcpuinfo			platform.h
 ?	xenpf_pcpu_version		platform.h
 ?	xenpf_resource_entry		platform.h
+?	xenpf_ucode_revision		platform.h
 ?	pmu_data			pmu.h
 ?	pmu_params			pmu.h
 !	sched_poll			sched.h
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:07:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 16:07:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518019.804139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjC2-0004Qd-NP; Tue, 04 Apr 2023 16:07:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518019.804139; Tue, 04 Apr 2023 16:07:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjC2-0004QW-Kb; Tue, 04 Apr 2023 16:07:34 +0000
Received: by outflank-mailman (input) for mailman id 518019;
 Tue, 04 Apr 2023 16:07:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gCXw=73=citrix.com=prvs=451435b33=sergey.dyasli@srs-se1.protection.inumbo.net>)
 id 1pjjC1-0004Q2-Mv
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 16:07:33 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cca51aa8-d302-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 18:07:31 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cca51aa8-d302-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680624451;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=OIB4XE7cSQQtZzws9p9Ga2irS8OnpzPLnMZtppaosBg=;
  b=GsB7Yy1Jo4Ow6+EFtZiwVjKi0AULlLC1H/ys9LaFkAvARqHohNo0SPOP
   tToe0+ra4bSeoli/yY5QUf1cAN/w4sMwzy0jegLkfLOG1jVaOxSAbkyvc
   UsFMfByJHhSwbITbCUCsp3ESrwL0EtYQcRiYOXTihIoHUyHypkIZwab1N
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106724130
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Sergey Dyasli <sergey.dyasli@citrix.com>
Subject: [PATCH v4 1/3] tools/xenctrl: add xc_get_cpu_version()
Date: Tue, 4 Apr 2023 17:06:53 +0100
Message-ID: <20230404160655.2354-2-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230404160655.2354-1-sergey.dyasli@citrix.com>
References: <20230404160655.2354-1-sergey.dyasli@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Add a wrapper for the XENPF_get_cpu_version platform op.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v3 --> v4:
- Replaced DECLARE_PLATFORM_OP
- Removed NULL checks
---
 tools/include/xenctrl.h   |  1 +
 tools/libs/ctrl/xc_misc.c | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 05967ecc92..34b3b25289 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1186,6 +1186,7 @@ int xc_physinfo(xc_interface *xch, xc_physinfo_t *info);
 int xc_cputopoinfo(xc_interface *xch, unsigned *max_cpus,
                    xc_cputopo_t *cputopo);
 int xc_microcode_update(xc_interface *xch, const void *buf, size_t len);
+int xc_get_cpu_version(xc_interface *xch, struct xenpf_pcpu_version *cpu_ver);
 int xc_numainfo(xc_interface *xch, unsigned *max_nodes,
                 xc_meminfo_t *meminfo, uint32_t *distance);
 int xc_pcitopoinfo(xc_interface *xch, unsigned num_devs,
diff --git a/tools/libs/ctrl/xc_misc.c b/tools/libs/ctrl/xc_misc.c
index 265f15ec2d..90d50faa4f 100644
--- a/tools/libs/ctrl/xc_misc.c
+++ b/tools/libs/ctrl/xc_misc.c
@@ -226,6 +226,23 @@ int xc_microcode_update(xc_interface *xch, const void *buf, size_t len)
     return ret;
 }
 
+int xc_get_cpu_version(xc_interface *xch, struct xenpf_pcpu_version *cpu_ver)
+{
+    int ret;
+    struct xen_platform_op op = {
+        .cmd = XENPF_get_cpu_version,
+        .u.pcpu_version.xen_cpuid = cpu_ver->xen_cpuid,
+    };
+
+    ret = do_platform_op(xch, &op);
+    if ( ret != 0 )
+        return ret;
+
+    *cpu_ver = op.u.pcpu_version;
+
+    return 0;
+}
+
 int xc_cputopoinfo(xc_interface *xch, unsigned *max_cpus,
                    xc_cputopo_t *cputopo)
 {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:07:36 2023
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Sergey Dyasli <sergey.dyasli@citrix.com>
Subject: [PATCH v4 0/3] xen-ucode: print information about currently loaded ucode
Date: Tue, 4 Apr 2023 17:06:52 +0100
Message-ID: <20230404160655.2354-1-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain

Posting v4 with review comments addressed. The per-patch changelogs list the
changes from v3.

Sergey Dyasli (3):
  tools/xenctrl: add xc_get_cpu_version()
  x86/platform: introduce XENPF_get_ucode_revision
  tools/xen-ucode: print information about currently loaded ucode

 tools/include/xenctrl.h                  |  3 +
 tools/libs/ctrl/xc_misc.c                | 35 ++++++++++
 tools/misc/xen-ucode.c                   | 83 ++++++++++++++++++++----
 xen/arch/x86/platform_hypercall.c        | 29 +++++++++
 xen/arch/x86/x86_64/platform_hypercall.c |  4 ++
 xen/include/public/platform.h            | 11 ++++
 xen/include/xlat.lst                     |  1 +
 7 files changed, 154 insertions(+), 12 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:07:37 2023
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Sergey Dyasli <sergey.dyasli@citrix.com>
Subject: [PATCH v4 3/3] tools/xen-ucode: print information about currently loaded ucode
Date: Tue, 4 Apr 2023 17:06:55 +0100
Message-ID: <20230404160655.2354-4-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230404160655.2354-1-sergey.dyasli@citrix.com>
References: <20230404160655.2354-1-sergey.dyasli@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Add an option to the xen-ucode tool to print the currently loaded ucode
revision, and also print it as part of the usage info.  Print the CPU
signature and platform flags as well.  The raw data comes from the
XENPF_get_cpu_version and XENPF_get_ucode_revision platform ops.

Example output:
    Intel:
    CPU signature 06-55-04 (raw 0x00050654) pf 0x1 revision 0x02006e05

    AMD:
    CPU signature fam19h (raw 0x00a00f11) revision 0x0a0011ce

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v3 --> v4:
- changed the output to be 1-line long
- made xc_interface *xch global
- added error checking to xc calls
- added error for unsupported CPU vendor
- changed printf format to 0x%08x for raw signature and revision values
---
 tools/misc/xen-ucode.c | 83 ++++++++++++++++++++++++++++++++++++------
 1 file changed, 71 insertions(+), 12 deletions(-)

diff --git a/tools/misc/xen-ucode.c b/tools/misc/xen-ucode.c
index ad32face2b..bd0bfaaa00 100644
--- a/tools/misc/xen-ucode.c
+++ b/tools/misc/xen-ucode.c
@@ -12,22 +12,89 @@
 #include <fcntl.h>
 #include <xenctrl.h>
 
+static xc_interface *xch;
+
+static const char intel_id[] = "GenuineIntel";
+static const char   amd_id[] = "AuthenticAMD";
+
+static void show_curr_cpu(FILE *f)
+{
+    int ret;
+    struct xenpf_pcpu_version cpu_ver = { .xen_cpuid = 0 };
+    struct xenpf_ucode_revision ucode_rev = { .cpu = 0 };
+
+    ret = xc_get_cpu_version(xch, &cpu_ver);
+    if ( ret )
+    {
+        fprintf(f, "Failed to get CPU information. (err: %s)\n",
+                strerror(errno));
+        exit(1);
+    }
+
+    ret = xc_get_ucode_revision(xch, &ucode_rev);
+    if ( ret )
+    {
+        fprintf(f, "Failed to get microcode information. (err: %s)\n",
+                strerror(errno));
+        exit(1);
+    }
+
+    /*
+     * Print signature in a form that allows to quickly identify which ucode
+     * blob to load, e.g.:
+     *
+     *      Intel:   /lib/firmware/intel-ucode/06-55-04
+     *      AMD:     /lib/firmware/amd-ucode/microcode_amd_fam19h.bin
+     */
+    if ( memcmp(cpu_ver.vendor_id, intel_id,
+                sizeof(cpu_ver.vendor_id)) == 0 )
+    {
+        fprintf(f, "CPU signature %02x-%02x-%02x (raw 0x%08x) pf %#x revision 0x%08x\n",
+                   cpu_ver.family, cpu_ver.model, cpu_ver.stepping,
+                   ucode_rev.signature, ucode_rev.pf, ucode_rev.revision);
+    }
+    else if ( memcmp(cpu_ver.vendor_id, amd_id,
+                     sizeof(cpu_ver.vendor_id)) == 0 )
+    {
+        fprintf(f, "CPU signature fam%xh (raw 0x%08x) revision 0x%08x\n",
+                   cpu_ver.family, ucode_rev.signature, ucode_rev.revision);
+    }
+    else
+    {
+        fprintf(f, "Unsupported CPU vendor: %s\n", cpu_ver.vendor_id);
+        exit(3);
+    }
+}
+
 int main(int argc, char *argv[])
 {
     int fd, ret;
     char *filename, *buf;
     size_t len;
     struct stat st;
-    xc_interface *xch;
+
+    xch = xc_interface_open(NULL, NULL, 0);
+    if ( xch == NULL )
+    {
+        fprintf(stderr, "Error opening xc interface. (err: %s)\n",
+                strerror(errno));
+        exit(1);
+    }
 
     if ( argc < 2 )
     {
-        fprintf(stderr,
-                "xen-ucode: Xen microcode updating tool\n"
-                "Usage: %s <microcode blob>\n", argv[0]);
+        fprintf(stderr, "xen-ucode: Xen microcode updating tool\n");
+        show_curr_cpu(stderr);
+        fprintf(stderr, "Usage: %s <microcode blob>\n", argv[0]);
         exit(2);
     }
 
+    if ( !strcmp(argv[1], "show-cpu-info") )
+    {
+        show_curr_cpu(stdout);
+        return 0;
+    }
+
     filename = argv[1];
     fd = open(filename, O_RDONLY);
     if ( fd < 0 )
@@ -52,14 +119,6 @@ int main(int argc, char *argv[])
         exit(1);
     }
 
-    xch = xc_interface_open(NULL, NULL, 0);
-    if ( xch == NULL )
-    {
-        fprintf(stderr, "Error opening xc interface. (err: %s)\n",
-                strerror(errno));
-        exit(1);
-    }
-
     ret = xc_microcode_update(xch, buf, len);
     if ( ret )
     {
-- 
2.17.1
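[Editorial note] The family/model/stepping fields in the Intel example output above are derived from the raw CPUID leaf-1 signature. As a reference, here is a minimal sketch of the standard CPUID signature decoding (plain Python, not part of the patch; the function name is illustrative):

```python
def decode_signature(raw):
    """Decode a raw CPUID leaf-1 EAX value into (family, model, stepping)."""
    stepping = raw & 0xF
    base_model = (raw >> 4) & 0xF
    base_family = (raw >> 8) & 0xF
    ext_model = (raw >> 16) & 0xF
    ext_family = (raw >> 20) & 0xFF
    # The extended family field only applies when the base family is 0xF;
    # the extended model field applies for base families 0x6 and 0xF.
    family = base_family + ext_family if base_family == 0xF else base_family
    model = (ext_model << 4) | base_model if base_family in (0x6, 0xF) else base_model
    return family, model, stepping

# Intel example from the commit message: raw 0x00050654 -> 06-55-04
print("%02x-%02x-%02x" % decode_signature(0x00050654))   # 06-55-04
# AMD example: raw 0x00a00f11 -> family 0x19, i.e. fam19h
print("fam%xh" % decode_signature(0x00a00f11)[0])        # fam19h
```

This matches the two example lines in the commit message: 0x00050654 yields 06-55-04 (the intel-ucode blob name) and 0x00a00f11 yields family 0x19 (microcode_amd_fam19h.bin).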



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:07:45 2023
Message-ID: <e80cbe9d-8e35-027d-7765-396816bae85f@suse.com>
Date: Tue, 4 Apr 2023 18:07:38 +0200
Subject: Re: [PATCH v2 1/3] efi: try to use the currently set GOP mode
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20230331095946.45024-1-roger.pau@citrix.com>
 <20230331095946.45024-2-roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230331095946.45024-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 31.03.2023 11:59, Roger Pau Monne wrote:
> Modify efi_find_gop_mode() so that passing cols or rows as 0 is
> interpreted as a request to attempt to keep the currently set mode,
> and do so if the mode query for information is successful and the depth
> is supported.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v1:
>  - Only update cols or rows if the value is 0.
>  - Leave depth alone.
> ---
>  xen/common/efi/boot.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)

While I realize that docs/misc/efi.pandoc currently doesn't describe
what rows/cols being zero means, I think that file needs updating in
the course of what you're doing. Irrespective of this I'm uncertain
about the change in behavior: Presently both values being 0 means
"find biggest resolution mode", in an attempt to have as much info on
the screen at a time as possible (in particular in case of problems).
I certainly appreciate the desire to have a way to say "keep the
current mode", but I don't think this should alter behavior of what's
presently considered valid config settings.

> --- a/xen/common/efi/boot.c
> +++ b/xen/common/efi/boot.c
> @@ -930,6 +930,27 @@ static UINTN __init efi_find_gop_mode(EFI_GRAPHICS_OUTPUT_PROTOCOL *gop,
>      UINTN gop_mode = ~0, info_size, size;
>      unsigned int i;
>  
> +    if ( (!cols || !rows) && gop->Mode->Mode < gop->Mode->MaxMode )
> +    {
> +        /* If no (valid) resolution suggested, try to use the current mode. */
> +        status = gop->QueryMode(gop, gop->Mode->Mode, &info_size, &mode_info);
> +        if ( EFI_ERROR(status) )
> +            PrintErr(L"Invalid current graphics mode\r\n");
> +        else if ( mode_info->PixelFormat < PixelBltOnly )
> +            return gop->Mode->Mode;

What if one of cols/rows was non-zero? You then wouldn't fulfill the request.
"depth", if non-zero, is also entirely ignored. (We don't fulfill such a
request right now either, but not doing so becomes more odd now imo. In fact
right now in this case we leave the screen alone, if I'm not mistaken, so
there already looks to be a way to achieve what you're after.)

> +        else
> +        {
> +            /*
> +             * Try to find a mode with the same resolution and a valid pixel
> +             * format.
> +             */

Is "valid" the right word here? I think you mean "usable for us" or some
such?

> +            if ( !cols )
> +                cols = mode_info->HorizontalResolution;
> +            if ( !rows )
> +                rows = mode_info->VerticalResolution;
> +        }
> +    }

Overall with the resulting behavior I'm not sure the title really describes
what's being done. You "try" only in certain cases.

Jan
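[Editorial note] The semantics under discussion can be made concrete with a toy model (plain Python, not the actual EFI code; a simplification that ignores the patch's early return for a directly usable current mode): a zero cols/rows request falls back to the current mode's resolution, while non-zero values are kept, which is exactly why a request with both values 0 no longer means "find biggest resolution mode".

```python
def resolve_mode_request(cols, rows, cur_cols, cur_rows, cur_mode_usable):
    """Toy model of the patch's fallback: zero cols/rows are replaced by
    the current mode's resolution (if the current mode could be queried
    and has a usable pixel format) before the normal mode search runs."""
    if (cols == 0 or rows == 0) and cur_mode_usable:
        cols = cols or cur_cols
        rows = rows or cur_rows
    return cols, rows

# Both zero: with the patch this keeps the current 1024x768 mode
# instead of searching for the biggest available resolution.
print(resolve_mode_request(0, 0, 1024, 768, True))    # (1024, 768)
# Partially specified: only the zero dimension is filled in.
print(resolve_mode_request(80, 0, 1024, 768, True))   # (80, 768)
# Current mode unusable: the request is passed through unchanged.
print(resolve_mode_request(0, 0, 1024, 768, False))   # (0, 0)
```

The second case illustrates the review question above: a non-zero cols with zero rows is silently mixed with the current mode's rows rather than being treated as an explicit request.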


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:11:19 2023
Date: Tue, 4 Apr 2023 11:11:01 -0500
From: Bjorn Helgaas <helgaas@kernel.org>
To: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: =?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>,
	Krzysztof =?utf-8?Q?Wilczy=C5=84ski?= <kw@linux.com>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Randy Dunlap <rdunlap@infradead.org>, Arnd Bergmann <arnd@arndb.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Pali =?iso-8859-1?Q?Roh=E1r?= <pali@kernel.org>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>,
	Juergen Gross <jgross@suse.com>,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-pci@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Matt Turner <mattst88@gmail.com>,
	Russell King <linux@armlinux.org.uk>, Andrew Lunn <andrew@lunn.ch>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	Gregory Clement <gregory.clement@bootlin.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Nicholas Piggin <npiggin@gmail.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	Anatolij Gustschin <agust@denx.de>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Rich Felker <dalias@libc.org>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	"David S. Miller" <davem@davemloft.net>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH v8 0/7] Add pci_dev_for_each_resource() helper and update
 users
Message-ID: <20230404161101.GA3554747@bhelgaas>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230330162434.35055-1-andriy.shevchenko@linux.intel.com>

On Thu, Mar 30, 2023 at 07:24:27PM +0300, Andy Shevchenko wrote:
> Provide two new helper macros to iterate over PCI device resources and
> convert users.
> 
> Looking at it, refactor existing pci_bus_for_each_resource() and convert
> users accordingly.
> 
> Note, the number of lines grew due to the documentation update.
> 
> Changelog v8:
> - fixed issue with pci_bus_for_each_resource() macro (LKP)
> - due to above added a new patch to document how it works
> - moved the last patch to be #2 (Philippe)
> - added tags (Philippe)
> 
> Changelog v7:
> - made both macros share the same name (Bjorn)

I didn't actually request the same name for both; I would have had no
idea how to even do that :)

v6 had:

  pci_dev_for_each_resource_p(dev, res)
  pci_dev_for_each_resource(dev, res, i)

and I suggested:

  pci_dev_for_each_resource(dev, res)
  pci_dev_for_each_resource_idx(dev, res, i)

because that pattern is used elsewhere.  But you figured out how to do
it, and having one name is even better, so thanks for that extra work!

> - split out the pci_resource_n() conversion (Bjorn)
> 
> Changelog v6:
> - dropped unused variable in PPC code (LKP)
> 
> Changelog v5:
> - renamed loop variable to minimize the clash (Keith)
> - addressed smatch warning (Dan)
> - addressed 0-day bot findings (LKP)
> 
> Changelog v4:
> - rebased on top of v6.3-rc1
> - added tag (Krzysztof)
> 
> Changelog v3:
> - rebased on top of v2 by Mika, see above
> - added tag to pcmcia patch (Dominik)
> 
> Changelog v2:
> - refactor to have two macros
> - refactor existing pci_bus_for_each_resource() in the same way and
>   convert users
> 
> Andy Shevchenko (6):
>   kernel.h: Split out COUNT_ARGS() and CONCATENATE()
>   PCI: Introduce pci_resource_n()
>   PCI: Document pci_bus_for_each_resource() to avoid confusion
>   PCI: Allow pci_bus_for_each_resource() to take less arguments
>   EISA: Convert to use less arguments in pci_bus_for_each_resource()
>   pcmcia: Convert to use less arguments in pci_bus_for_each_resource()
> 
> Mika Westerberg (1):
>   PCI: Introduce pci_dev_for_each_resource()
> 
>  .clang-format                             |  1 +
>  arch/alpha/kernel/pci.c                   |  5 +-
>  arch/arm/kernel/bios32.c                  | 16 +++--
>  arch/arm/mach-dove/pcie.c                 | 10 ++--
>  arch/arm/mach-mv78xx0/pcie.c              | 10 ++--
>  arch/arm/mach-orion5x/pci.c               | 10 ++--
>  arch/mips/pci/ops-bcm63xx.c               |  8 +--
>  arch/mips/pci/pci-legacy.c                |  3 +-
>  arch/powerpc/kernel/pci-common.c          | 21 +++----
>  arch/powerpc/platforms/4xx/pci.c          |  8 +--
>  arch/powerpc/platforms/52xx/mpc52xx_pci.c |  5 +-
>  arch/powerpc/platforms/pseries/pci.c      | 16 ++---
>  arch/sh/drivers/pci/pcie-sh7786.c         | 10 ++--
>  arch/sparc/kernel/leon_pci.c              |  5 +-
>  arch/sparc/kernel/pci.c                   | 10 ++--
>  arch/sparc/kernel/pcic.c                  |  5 +-
>  drivers/eisa/pci_eisa.c                   |  4 +-
>  drivers/pci/bus.c                         |  7 +--
>  drivers/pci/hotplug/shpchp_sysfs.c        |  8 +--
>  drivers/pci/pci.c                         |  3 +-
>  drivers/pci/probe.c                       |  2 +-
>  drivers/pci/remove.c                      |  5 +-
>  drivers/pci/setup-bus.c                   | 37 +++++-------
>  drivers/pci/setup-res.c                   |  4 +-
>  drivers/pci/vgaarb.c                      | 17 ++----
>  drivers/pci/xen-pcifront.c                |  4 +-
>  drivers/pcmcia/rsrc_nonstatic.c           |  9 +--
>  drivers/pcmcia/yenta_socket.c             |  3 +-
>  drivers/pnp/quirks.c                      | 29 ++++-----
>  include/linux/args.h                      | 13 ++++
>  include/linux/kernel.h                    |  8 +--
>  include/linux/pci.h                       | 72 +++++++++++++++++++----
>  32 files changed, 190 insertions(+), 178 deletions(-)
>  create mode 100644 include/linux/args.h

Applied 2-7 to pci/resource for v6.4, thanks, I really like this!

I omitted

  [1/7] kernel.h: Split out COUNT_ARGS() and CONCATENATE()

only because it's not essential to this series and has only a trivial
one-line impact on include/linux/pci.h.

Bjorn


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:14:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 16:14:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518050.804190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjIv-0000D6-RF; Tue, 04 Apr 2023 16:14:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518050.804190; Tue, 04 Apr 2023 16:14:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjIv-0000Cz-NV; Tue, 04 Apr 2023 16:14:41 +0000
Received: by outflank-mailman (input) for mailman id 518050;
 Tue, 04 Apr 2023 16:14:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7Jsu=73=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjjIu-0000Ch-1P
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 16:14:40 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2061c.outbound.protection.outlook.com
 [2a01:111:f400:fe13::61c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cca72ce1-d303-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 18:14:39 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8965.eurprd04.prod.outlook.com (2603:10a6:10:2e0::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 16:14:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 16:14:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cca72ce1-d303-11ed-85db-49a42c6b2330
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EnWiy6JScAf/NRInOHVOq/udM6RNapUPBQzbUYxr1Xs=;
 b=aRGcCZEMN56MdYhmcec1BPO4ub485kwyRfbxRipYq++NU3MnxG3p/7FpVSjm6TVst+Avk7T+mxoFLV9kt+dnpSpV0r7qlSeoDJwvAGKaE+meqCJoUPTRzUSJMT5Skzh932llx2tDtPTT+YI8BDvXfsPSKZHhrc7hRialGroUjolV68jNQgxsXTKz6+sbvtEA2UvRwfMJ8mfc/K3YlgNCtlPzPRxuHQUeXtvZrEd2+sMoR2Ex5C2OiCj3BDpXIC4a24x57DZZ25DWiDDg8Y2CmituFkAxLbTdTJG04T3Q93mOzCMHvktuS377HW/u94k5TMrXNPFnGUmh4wUZjoR4Uw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <dfd78e18-9f78-dd44-c19d-3a5263285d4f@suse.com>
Date: Tue, 4 Apr 2023 18:14:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 11/15] x86/boot: Merge CPUID policy initialisation
 logic into cpu-policy.c
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-12-andrew.cooper3@citrix.com>
 <087536dd-96cf-84f0-4b8f-d4de4d6bd093@suse.com>
 <0c43546e-0333-af19-efa5-71cfaf5efa3f@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <0c43546e-0333-af19-efa5-71cfaf5efa3f@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0225.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ac::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8965:EE_
X-MS-Office365-Filtering-Correlation-Id: 26effeed-589d-4817-d268-08db3527adc5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 26effeed-589d-4817-d268-08db3527adc5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 16:14:34.2684
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3yKb9yRXea4X5BM7aNLQ8NbPiS8M2JSCDoFV5ZKBo9aEbpiRmi6P8ly+V8cr87yONEErXwVNPEwTt4PhNN8iaQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8965

On 04.04.2023 17:45, Andrew Cooper wrote:
> On 04/04/2023 4:16 pm, Jan Beulich wrote:
>> On 04.04.2023 11:52, Andrew Cooper wrote:
>>> @@ -20,10 +26,332 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
>>>  struct cpu_policy __ro_after_init hvm_def_cpu_policy;
>>>  #endif
>>>  
>>> +const uint32_t known_features[] = INIT_KNOWN_FEATURES;
>>> +
>>> +static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
>>> +static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
>>> +static const uint32_t __initconst hvm_hap_max_featuremask[] =
>>> +    INIT_HVM_HAP_MAX_FEATURES;
>>> +static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
>>> +static const uint32_t __initconst hvm_shadow_def_featuremask[] =
>>> +    INIT_HVM_SHADOW_DEF_FEATURES;
>>> +static const uint32_t __initconst hvm_hap_def_featuremask[] =
>>> +    INIT_HVM_HAP_DEF_FEATURES;
>>> +static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
>>> +
>>> +static const struct feature_name {
>>> +    const char *name;
>>> +    unsigned int bit;
>>> +} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
>>> +
>>> +/*
>>> + * Parse a list of cpuid feature names -> bool, calling the callback for any
>>> + * matches found.
>>> + *
>>> + * always_inline, because this is init code only and we really don't want a
>>> + * function pointer call in the middle of the loop.
>>> + */
>>> +static int __init always_inline parse_cpuid(
>>> +    const char *s, void (*callback)(unsigned int feat, bool val))
>>> +{
>>> +    const char *ss;
>>> +    int val, rc = 0;
>>> +
>>> +    do {
>>> +        const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
>>> +        const char *feat;
>>> +
>>> +        ss = strchr(s, ',');
>>> +        if ( !ss )
>>> +            ss = strchr(s, '\0');
>>> +
>>> +        /* Skip the 'no-' prefix for name comparisons. */
>>> +        feat = s;
>>> +        if ( strncmp(s, "no-", 3) == 0 )
>>> +            feat += 3;
>>> +
>>> +        /* (Re)initialise lhs and rhs for binary search. */
>>> +        lhs = feature_names;
>>> +        rhs = feature_names + ARRAY_SIZE(feature_names);
>>> +
>>> +        while ( lhs < rhs )
>>> +        {
>>> +            int res;
>>> +
>>> +            mid = lhs + (rhs - lhs) / 2;
>>> +            res = cmdline_strcmp(feat, mid->name);
>>> +
>>> +            if ( res < 0 )
>>> +            {
>>> +                rhs = mid;
>>> +                continue;
>>> +            }
>>> +            if ( res > 0 )
>>> +            {
>>> +                lhs = mid + 1;
>>> +                continue;
>>> +            }
>>> +
>>> +            if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
>>> +            {
>>> +                callback(mid->bit, val);
>>> +                mid = NULL;
>>> +            }
>>> +
>>> +            break;
>>> +        }
>>> +
>>> +        /*
>>> +         * Mid being NULL means that the name and boolean were successfully
>>> +         * identified.  Everything else is an error.
>>> +         */
>>> +        if ( mid )
>>> +            rc = -EINVAL;
>>> +
>>> +        s = ss + 1;
>>> +    } while ( *ss );
>>> +
>>> +    return rc;
>>> +}
>>> +
>>> +static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
>>> +{
>>> +    if ( !val )
>>> +        setup_clear_cpu_cap(feat);
>>> +    else if ( feat == X86_FEATURE_RDRAND &&
>>> +              (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
>>> +        setup_force_cpu_cap(X86_FEATURE_RDRAND);
>>> +}
>>> +
>>> +static int __init cf_check parse_xen_cpuid(const char *s)
>>> +{
>>> +    return parse_cpuid(s, _parse_xen_cpuid);
>>> +}
>>> +custom_param("cpuid", parse_xen_cpuid);
>>> +
>>> +static bool __initdata dom0_cpuid_cmdline;
>>> +static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
>>> +static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
>>> +
>>> +static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
>>> +{
>>> +    __set_bit  (feat, val ? dom0_enable_feat  : dom0_disable_feat);
>>> +    __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
>>> +}
>>> +
>>> +static int __init cf_check parse_dom0_cpuid(const char *s)
>>> +{
>>> +    dom0_cpuid_cmdline = true;
>>> +
>>> +    return parse_cpuid(s, _parse_dom0_cpuid);
>>> +}
>>> +custom_param("dom0-cpuid", parse_dom0_cpuid);
>> Unless the plan is to completely remove cpuid.c, this command line
>> handling would imo better fit there. I understand that to keep
>> dom0_{en,dis}able_feat[] static, the _parse_dom0_cpuid() helper
>> would then need to be exposed (under a different name), but I think
>> that's quite okay, the more that it's an __init function.
> 
> I'm not sure I agree.  (I did debate this for a while before moving the
> cmdline parsing.)
> 
> I do have some cleanup plans which will move code into cpuid.c, and
> guest_cpuid() absolutely still lives there, but for these options
> specifically, the moment I add MSR_ARCH_CAPS into a featureset, their
> bit names will work here too.
> 
> So arguably {dom0-}cpuid= won't be a great name moving forwards, but it
> is absolutely more cpu-policy.c content than cpuid.c content.
> 
> We can't get rid of the existing cmdline names, and I think documenting
> our way out of the "it's not only CPUID bits any more" situation is
> better than adding yet another name.

Hmm, yes:
Acked-by: Jan Beulich <jbeulich@suse.com>

>>> @@ -149,3 +716,188 @@ int init_domain_cpu_policy(struct domain *d)
>>>  
>>>      return 0;
>>>  }
>>> +
>>> +void recalculate_cpuid_policy(struct domain *d)
>>> +{
>>> +    struct cpu_policy *p = d->arch.cpuid;
>>> +    const struct cpu_policy *max = is_pv_domain(d)
>>> +        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
>>> +        : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
>> While this is how the original code was, wouldn't this want to use
>> hvm_enabled, just like init_guest_cpu_policies() does (patch 10)?
> 
> No.  That will fail to link.

Why? hvm_enabled is a #define (to false) only when !HVM.

> This trickery is necessary to drop the compiler-visible reference to
> hvm_max_cpu_policy in !CONFIG_HVM builds.
> 
> This function is only called after the domain type has already been
> established, which precludes calling it in a case where max will
> evaluate to NULL, hence the ASSERT_UNREACHABLE() just below.

Right, and this will hold when HVM=y but no VMX/SVM was found.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:16:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 16:16:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518053.804200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjKR-0000ml-4i; Tue, 04 Apr 2023 16:16:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518053.804200; Tue, 04 Apr 2023 16:16:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjKR-0000me-1v; Tue, 04 Apr 2023 16:16:15 +0000
Received: by outflank-mailman (input) for mailman id 518053;
 Tue, 04 Apr 2023 16:16:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjjKP-0000mU-Fm; Tue, 04 Apr 2023 16:16:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjjKP-0008Tj-Bq; Tue, 04 Apr 2023 16:16:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjjKO-00032c-Rr; Tue, 04 Apr 2023 16:16:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjjKO-00034b-RI; Tue, 04 Apr 2023 16:16:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RqEEFWst+GWVnEQHrqAj94ZB3VIu3nRPZx4GnxqzVl4=; b=FV6aJFQYzlnefih0VN/FqEvdFe
	xN/jmek009GB+Y20gvPE7zC59FK2dIbg5klzvkIeXp+zOkraiReUW7dhV4QWkVUiJhaSHw9hzwZcD
	MawGJ3hqz/ToHX9krbH4AN4TJ51DpciV1DlGDFNi3SitVF8JQUNTjCq7MWBAWh3WMI9Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180134-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180134: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=d292ddf1cc268bdd8a494f8e7ce76dc3445c26ab
X-Osstest-Versions-That:
    libvirt=145886e36dba23ed3756ab9c52b2ac77c506943f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Apr 2023 16:16:12 +0000

flight 180134 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180134/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              d292ddf1cc268bdd8a494f8e7ce76dc3445c26ab
baseline version:
 libvirt              145886e36dba23ed3756ab9c52b2ac77c506943f

Last test of basis   180111  2023-04-02 04:23:21 Z    2 days
Testing same since   180134  2023-04-04 04:20:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ján Tomko <jtomko@redhat.com>
  Martin Kletzander <mkletzan@redhat.com>
  Michael Ablassmeier <abi@grinser.de>
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/libvirt.git
   145886e36d..d292ddf1cc  d292ddf1cc268bdd8a494f8e7ce76dc3445c26ab -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:20:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 16:20:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518061.804210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjOV-0002Is-QA; Tue, 04 Apr 2023 16:20:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518061.804210; Tue, 04 Apr 2023 16:20:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjOV-0002Il-MW; Tue, 04 Apr 2023 16:20:27 +0000
Received: by outflank-mailman (input) for mailman id 518061;
 Tue, 04 Apr 2023 16:20:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjjOU-0002Ic-E3
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 16:20:26 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 98cfb28c-d304-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 18:20:23 +0200 (CEST)
Received: from mail-bn8nam04lp2042.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.42])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 12:20:20 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH8PR03MB7305.namprd03.prod.outlook.com (2603:10b6:510:251::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.30; Tue, 4 Apr
 2023 16:20:18 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 16:20:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98cfb28c-d304-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680625223;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=acb0Vjzb2vomYZXvOLj89NVgZUGBCCnPCoP9oO0tFqo=;
  b=GIFvxWaUrEDLx4U7ujAV2SQrFIkan9GMvBbMBaMIVj9hBIUr0Hu9sQfI
   kZ+f56ZWUF9lnq9SxrxzU6wrJ0qTgLFTkR31QXg6oVgm+wUh8Q8lAvmfT
   DpFXawoi43FFuG4qK0na0GBqRr1umZABtfn4NWUPrycYyRqqWijQQHI1c
   0=;
X-IronPort-RemoteIP: 104.47.74.42
X-IronPort-MID: 104214592
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,318,1673931600"; 
   d="scan'208";a="104214592"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QiXHEQoJ7gUssHZa+/QRf6pnqZ9f0amGUC3btt+EemYAQg0/mFVrNh7hf0PAz8PX6znV4FMGKQ60B5AXwq9Lb6qEVL5Gw8FtFnmg5ae+aUQjqf0Gvh8GmvLh1JOa8GQD+4kxhlPOgEaKG+fJbWOhi2Tk1YMe79gdJvn/nbs78J1BrjjJiENARpdoSdmnwDpTveILtZtftUTu+1n7zba6XuArYKEIFfDIM5yTm+NJaq2Km0/96JdDfNkH+q2a+XBnDQUrlwsStlefo3GO2B3Qa2rOdP98DkEfRVmL6H7/hf29A37YGrGt32zL7Ql4MY9U9G0/8lD+KapQpiH/1hqFfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jj4gGm4oTx0u/+xZtcHtoeuCVzi/i1cnClh6OAVuhZQ=;
 b=g9iZNjydfgTds69Db0ukoTAOMb0VxG6QXO+v6DSQn1vQUCGNanuiQNQpWb4cAUh9lK/eAClwmiBJ0q48gn96CE7d6EYyzuc4oeWIYDH3bwqUf8JcaimFX11yaPXlwSr97p+iPORd6qM80mEDp08nuq6PzeIbo88I7q2xenvOjA5cVVx+xn2GRNdvgn11JTO4WpxD4Wfjuvae8GdVBBO1UQSKNcHG0p+LEyGCESKul/THwP+CkLCd4jynOF6cHsg1gcrFsQm/o4EzFbGgxlfixGdUN6Z/XRwd2KOaF5GFdw110WBTKSJuh8zYD6SxqrOsgdEHULmGclaDwMJo6Dil3A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jj4gGm4oTx0u/+xZtcHtoeuCVzi/i1cnClh6OAVuhZQ=;
 b=eHBzTi2uAY7MMIL1PhD18PK+hbEHHXKJUeiLi/aUuNkDQznEDBEbLSVyLR0Rc6+T/S1qESO1lYe4zdz2+a1YD0jwEi6V5bhr6ZqEam3Z5Nf7FUlnjlQQ/ya2ljb+jzSooOQqducGoTZtzehzQ9bjt2rhu8qcn5O0E6LGQjLc/sE=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <697cec91-0f58-9f50-33db-d2d92d321267@citrix.com>
Date: Tue, 4 Apr 2023 17:20:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [xen-unstable-smoke test] 179929: regressions - trouble:
 blocked/fail/pass/starved
Content-Language: en-GB
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Demi Marie Obenour <demi@invisiblethingslab.com>
References: <osstest-179929-mainreport@xen.org>
 <fbe7ded7-47f6-caec-dabb-6978d9e2a192@citrix.com>
 <17e95e93-a3a9-962c-1563-f9fc526320df@citrix.com>
 <20d41dd0-19d1-47fb-92ab-4de458ddd56f@perard>
In-Reply-To: <20d41dd0-19d1-47fb-92ab-4de458ddd56f@perard>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0034.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ae::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH8PR03MB7305:EE_
X-MS-Office365-Filtering-Correlation-Id: 8659ba86-3a78-4f89-3a25-08db35287a8d
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8659ba86-3a78-4f89-3a25-08db35287a8d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 16:20:17.8494
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: R/HM4adQ7hPrS1jTS0Dt26pvhHSBjIArHqWrA6+A4RFDMpzsoMOyiW2CQEBLXTBeIR8QayIKJ/7S8CYbjTCIhWzf2tw63CH4pPEHsrrWOcs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR03MB7305

On 04/04/2023 2:51 pm, Anthony PERARD wrote:
> On Fri, Mar 24, 2023 at 08:37:06PM +0000, Andrew Cooper wrote:
>> On 24/03/2023 8:28 pm, Andrew Cooper wrote:
>>> On 24/03/2023 6:58 pm, osstest service owner wrote:
>>>> flight 179929 xen-unstable-smoke real [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/179929/
>>>>
>>>> Regressions :-(
>>>>
>>>> Tests which did not succeed and are blocking,
>>>> including tests which could not be run:
>>>>  build-amd64                   6 xen-build                fail REGR. vs. 179926
>>> Bah.
>>>
>>> make[6]: Entering directory '/home/osstest/build.179929.build-amd64/xen/tools/firmware/etherboot'
>>> set -e; if ! /usr/bin/wget -c -O _ipxe.tar.gz https://xenbits.xen.org/xen-extfiles/ipxe-git-3c040ad387099483102708bb1839110bc788cefb.tar.gz; then \
>>> 	git clone file:////osstest/IPXE-GIT-FORBIDDEN ipxe.git; \
>>> 	(cd ipxe.git && git archive --format=tar --prefix=ipxe/ \
>>> 	3c040ad387099483102708bb1839110bc788cefb | gzip -n >../_ipxe.tar.gz); \
>>> 	rm -rf ipxe.git; \
>>> fi
>>> --2023-03-24 17:06:51--  https://xenbits.xen.org/xen-extfiles/ipxe-git-3c040ad387099483102708bb1839110bc788cefb.tar.gz
>>> Resolving cache (cache)... 172.16.148.6
>>> Connecting to cache (cache)|172.16.148.6|:3128... connected.
>>> ERROR: The certificate of 'xenbits.xen.org' is not trusted.
>>> ERROR: The certificate of 'xenbits.xen.org' has expired.
>>> Cloning into 'ipxe.git'...
>>> fatal: '//osstest/IPXE-GIT-FORBIDDEN' does not appear to be a git repository
>>> fatal: Could not read from remote repository.
>>>
>>> That's OSSTest choking, apparently with the same LE root cert problem?
>> Given that there's plenty of content wanting testing right now, and no
>> chance of this being looked at until next week, I've reverted e1d750844
>> (which was just a single hunk anyway) in the hopes that we can still get
>> a useful weekend of testing.
> The certificate of the HTTPS proxy has been renewed, and osstest has
> been updated to use the new certificates. So that commit should work.
> In other words, osstest is ready for a revert of b5cc3c25a242 ("Revert
> "build: Change remaining xenbits.xen.org link to HTTPS"")
>
> Cheers,

Thanks.  I'll revert the revert.

Fingers crossed...

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:27:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 16:27:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518066.804219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjUu-0002xZ-Gz; Tue, 04 Apr 2023 16:27:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518066.804219; Tue, 04 Apr 2023 16:27:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjUu-0002xS-EK; Tue, 04 Apr 2023 16:27:04 +0000
Received: by outflank-mailman (input) for mailman id 518066;
 Tue, 04 Apr 2023 16:27:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HcaL=73=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pjjUt-0002xK-2n
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 16:27:03 +0000
Received: from mail-lf1-x12d.google.com (mail-lf1-x12d.google.com
 [2a00:1450:4864:20::12d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 86249a4d-d305-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 18:26:59 +0200 (CEST)
Received: by mail-lf1-x12d.google.com with SMTP id br6so43025676lfb.11
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 09:26:59 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 w7-20020ac25987000000b004d5786b7299sm2393695lfn.5.2023.04.04.09.26.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 Apr 2023 09:26:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86249a4d-d305-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680625619;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=OmE3+4mkKRClmf97yq5669zSeIG+gT8wURrmTIS+7sg=;
        b=OSFBnMwyzxDpJUp2dHCeg7rzHTOaTjujj/6I42VEd9kOWuiWeXIzI5qCdpWtSZZ8b0
         D8jBXn+kbirgwJMhnaibRRJusCs2ifhQb6V6enzIsDxiZUBUfglbSqICznk5OGEBq6k8
         qA0Y7YP2VjIUyDu6fOaz7fi/IfPbO+/+KpUOu/mv/ELS4octetJr+yHrmU6nmHPwmhP4
         ZA3Slvz9K2IPtPfozOH+qbdqGgXfOikvjhNFnVwKXr8zIE8JUZadWUkKTZcA3VJu5pBG
         O2oM1b5z2dRvroZgtXT9YP5kO4i6ob2siBbA9pDbaBO5Box7j3VxK+RCeeSS/3AjDB6v
         igHA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680625619;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=OmE3+4mkKRClmf97yq5669zSeIG+gT8wURrmTIS+7sg=;
        b=UZO06bw78qzTBMzceL1nL1bQLc1sl5jQYg8sOSMEzBuSzp/bPD/h6iLY1naDyWjDon
         Ot773JOUY05I/jeHNY8VP2l1bIzgzu8Jy3uu5aGfJqnWnFIPc35f5Cs5zLoKZyRolGBz
         JhDlIysiPic4gusoE4sbUM9x3IM1H9pI9m5t5lV5fF3S9VXQfXJvgk5hJ7ExgDD1WuzX
         0ogR+gPe6LDZm1VW27JFCesShQmgkfRCGp3V7LUW2c2JrztjB2XKEeVI+S0V+Ph/uSzI
         OR854AWB9CfkTEGLeCuPt2YG5Im4AB/NfTEh/vWbjh2fCEn/4WtS093HrF8kuQ7BkyX6
         tg7A==
X-Gm-Message-State: AAQBX9cxyqo0M+2RdF79E81iFjG3vU+trLlE/gbLlYSJrhTVDKDXCT/V
	hbdCjpxJvfGfCMwUmrt1/CU=
X-Google-Smtp-Source: AKy350Y7stklTXPa+K/oV7wrJeU6Tm1Gp8FOdgiWdJxinC8a8+cNvUU/yYakY30v/7Enn+G90YTRHQ==
X-Received: by 2002:ac2:4823:0:b0:4eb:edf:fb5e with SMTP id 3-20020ac24823000000b004eb0edffb5emr863032lft.44.1680625618897;
        Tue, 04 Apr 2023 09:26:58 -0700 (PDT)
Message-ID: <d2c63b45e269fb7442486e33592e03f55c9c2d6e.camel@gmail.com>
Subject: Re: [PATCH v3 1/3] xen/riscv: introduce setup_initial_pages
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Tue, 04 Apr 2023 19:26:58 +0300
In-Reply-To: <621425bb-03ad-1a5b-f53f-553eeefebf44@suse.com>
References: <cover.1679934166.git.oleksii.kurochko@gmail.com>
	 <93da6ba381604748e2c71e2ebd35e80798ec4bb2.1679934166.git.oleksii.kurochko@gmail.com>
	 <0a16b79e-8292-6947-24d4-dd027113943f@suse.com>
	 <f83cf0373bdcf31d6d273d53949cb81f54f74d5a.camel@gmail.com>
	 <621425bb-03ad-1a5b-f53f-553eeefebf44@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.4 (3.46.4-1.fc37) 
MIME-Version: 1.0

Hi Julien,

On Wed, 2023-03-29 at 14:06 +0200, Jan Beulich wrote:
> > > > +void __init setup_initial_pagetables(void)
> > > > +{
> > > > +    struct mmu_desc mmu_desc = { 0, 0, NULL, 0 };
> > > > +
> > > > +    /*
> > > > +     * Access to _{stard, end } is always PC-relative
> > > > +     * thereby when access them we will get load adresses
> > > > +     * of start and end of Xen
> > > > +     * To get linker addresses LOAD_TO_LINK() is required
> > > > +     * to use
> > > > +     */
> > > > +    unsigned long load_start    = (unsigned long)_start;
> > > > +    unsigned long load_end      = (unsigned long)_end;
> > > > +    unsigned long linker_start  = LOAD_TO_LINK(load_start);
> > > > +    unsigned long linker_end    = LOAD_TO_LINK(load_end);
> > > > +
> > > > +    if ( (linker_start != load_start) &&
> > > > +         (linker_start <= load_end) && (load_start <= linker_end) ) {
> > > > +        early_printk("(XEN) linker and load address ranges overlap\n");
> > > > +        die();
> > > > +    }
> > > > +
> > > > +    calc_pgtbl_lvls_num(&mmu_desc);
> > > > +
> > > > +    mmu_desc.pgtbl_base = stage1_pgtbl_root;
> > > > +    mmu_desc.next_pgtbl = stage1_pgtbl_nonroot;
> > > > +
> > > > +    setup_initial_mapping(&mmu_desc,
> > > > +                          linker_start,
> > > > +                          linker_end,
> > > > +                          load_start,
> > > > +                          PTE_LEAF_DEFAULT);
> > > > +
> > > > +    setup_ptes_permission(&mmu_desc);
> > > 
> > > ...: Why does this require a 2nd pass / function in the first
> > > place?
> > Probably I misunderstood Julien and it setup_pte_permission can be
> > done
> > in setup_initial_mapping() but here is the reply:
> > https://lore.kernel.org/xen-devel/79e83610-5980-d9b5-7994-6b0cb2b9049a@xen.org/
> 
> Hmm, yes, his option 2 looks like what you've implemented. Still I
> don't see why the permissions can't be got right on the first pass.
I would like to ask you again about the separation of mapping Xen and
setting permissions for specific sections.

I am no longer using setup_initial_mapping() to change permission flags
as it did before, so do we still need two passes? Can't we set the
permissions during the first pass?

It looks like it would make sense to merge setup_initial_mapping() and
setup_ptes_permission().

Probably I misunderstood your opinion from the link above.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:29:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 16:29:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518070.804230 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjWx-0003Vr-Tj; Tue, 04 Apr 2023 16:29:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518070.804230; Tue, 04 Apr 2023 16:29:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjWx-0003Vk-R8; Tue, 04 Apr 2023 16:29:11 +0000
Received: by outflank-mailman (input) for mailman id 518070;
 Tue, 04 Apr 2023 16:29:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G7Av=73=citrix.com=prvs=45137d3e2=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pjjWx-0003Ve-0a
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 16:29:11 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d279d7c6-d305-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 18:29:09 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d279d7c6-d305-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680625749;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=muyEy+imng+MiuS0rC7GNqhnQwFZmakSOGOJA1lr+IM=;
  b=EZQUfEaG6DqHe/5dnXKjbQUx7Cl9BPUw6Dy5D0Qf6JeaKkhPJM+sUtF/
   LMb2mCwsSoFuhJYHw/gNTg2OaBR8Adfup2PHb/AXqzscXcPu06ncCTQfJ
   zZROcIDK1a/f9dPDX2dOmsTYLNki0wnpXCipBv/PzHtMj89UZPRCUDMhG
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106727183
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,318,1673931600"; 
   d="scan'208";a="106727183"
Date: Tue, 4 Apr 2023 17:28:58 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Peter Hoyes <peter.hoyes@arm.com>
CC: <xen-devel@lists.xenproject.org>, <wei.chen@arm.com>,
	<bertrand.marquis@arm.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] tools/xendomains: Only save/restore/migrate if supported
 by xenlight
Message-ID: <fa320fd7-31fa-4e96-a804-172e70ef1c80@perard>
References: <20230322135800.3869458-1-peter.hoyes@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230322135800.3869458-1-peter.hoyes@arm.com>

On Wed, Mar 22, 2023 at 01:58:00PM +0000, Peter Hoyes wrote:
> From: Peter Hoyes <Peter.Hoyes@arm.com>
> 
> Saving, restoring and migrating domains are not currently supported on
> arm and arm64 platforms, so xendomains prints the warning:
> 
>   An error occurred while saving domain:
>   command not implemented
> 
> when attempting to run `xendomains stop`. It otherwise continues to shut
> down the domains cleanly, with the unsupported steps skipped.

The patch looks kind of ok, but shouldn't $XENDOMAINS_SAVE be set to an
empty string in the config by the admin instead?

Or is the issue that $XENDOMAINS_SAVE is set by default, even on arm*?

Maybe it's easier to check that the command is implemented at run time
rather than trying to have a good default value for XENDOMAINS_SAVE at
install/package time.

> Use `xl help` to detect whether save/restore/migrate is supported by the
> platform. If not, do not attempt to run the corresponding command.
> 
> Signed-off-by: Peter Hoyes <Peter.Hoyes@arm.com>
> ---
>  tools/hotplug/Linux/xendomains.in | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/hotplug/Linux/xendomains.in b/tools/hotplug/Linux/xendomains.in
> index 70f4129ef4..bafcb874e1 100644
> --- a/tools/hotplug/Linux/xendomains.in
> +++ b/tools/hotplug/Linux/xendomains.in
> @@ -229,6 +229,15 @@ parseln()
>      [ -n "$name" -a -n "$id" ] && return 0 || return 1
>  }
>  
> +subcmd_supported()
> +{
> +    local output
> +    output=$("$CMD help | grep "^ $1"")
> +    if [ ! "$output" ]; then
> +        return 1
> +    fi

It looks like some quotes are in the wrong place. You probably wanted to
write:
    output="$($CMD help | grep "^ $1")"

But I'd like this to be slightly more robust, matching the whole
command name rather than just the beginning.
(For example `subcmd_supported "pci"` would return true even if no "pci"
command exists.)

Something like:
    $CMD help | grep "^ $1\( \|$\)"
To check that the command name is followed by a space or end-of-line
(even if it seems that there's always a space printed.)
A similar pattern is used in "tools/xl/bash-completion", so it should
remain fine in the future.

Then, we don't really need the "$output" from grep; we can just ask it
whether there's a match, with --quiet:
    $CMD help | grep -q "^ $1\( \|$\)"

And that would be the whole function. Would that work for you?
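For illustration, here is a self-contained sketch of that one-line
version. `xl_help_stub` is a hypothetical stand-in for `$CMD help`, just
so the matching behaviour can be shown without an xl binary:

```shell
# Hypothetical stand-in for `$CMD help`: prints an indented command
# list in the same shape as xl's help output.
xl_help_stub()
{
    printf ' save\n pci-attach\n restore\n'
}

# Succeeds iff "$1" is listed as a whole command name: a leading space,
# the name, then a space or end-of-line (so "pci" does not match
# "pci-attach").
subcmd_supported()
{
    xl_help_stub | grep -q "^ $1\( \|$\)"
}
```

With this, `subcmd_supported save` succeeds while `subcmd_supported pci`
fails, which is the whole-name matching described above.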


The rest of the patch looks fine.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:39:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 16:39:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518078.804240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjgj-00057Y-16; Tue, 04 Apr 2023 16:39:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518078.804240; Tue, 04 Apr 2023 16:39:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjgi-00057R-U4; Tue, 04 Apr 2023 16:39:16 +0000
Received: by outflank-mailman (input) for mailman id 518078;
 Tue, 04 Apr 2023 16:39:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b6be=73=citrix.com=prvs=4510202f8=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pjjgh-00057L-Tn
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 16:39:16 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 39fe83bc-d307-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 18:39:13 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 12:39:08 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by DS7PR03MB5575.namprd03.prod.outlook.com (2603:10b6:5:2cd::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.26; Tue, 4 Apr
 2023 16:39:06 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Tue, 4 Apr 2023
 16:39:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39fe83bc-d307-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680626353;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=2VwF/6oP5Vnmo3WD3Q2qsw9zTr5lv4aZJIMcQTK7mI4=;
  b=B0J8c8OGyqgbNN2QaFK9dphCrDwk7IcEPvZ0wJThj8n4pCTipvLKyQ4g
   gad72IxUZzNpKWd64xfPfIb1vODqkShLJ4im3qF+z2X/JGKfrN9EsfTqQ
   QxyG3r0m+qaFBU90zSVasEecPURDbvo7oPcqFlP/Werzx69wezF/NFEKj
   Q=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 103095625
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:MfBVxquAUHPrImC4G4Oovuo0dOfnVHBfMUV32f8akzHdYApBsoF/q
 tZmKW+GOfuCNGD3eYt+YY6z/EkBvMPXm99lQVQ+rilgHigR+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg3HVQ+IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4bKj6Fv0gnRkPaoQ5AOGyyFPZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwOm00MTGMueKK45GYZ/Buo/wHNtb3FdZK0p1g5Wmx4fcOZ7nmGvyPz/kImTA6i4ZJAOrUY
 NcfZXx3dhPcbhZTO1ARTpUjgOOvgXq5eDpdwL6XjfNvvy6Pk0osgP60boq9lt+iHK25mm6Co
 W3L5SLhCwwyP92D0zuVtHmrg4cjmAuiANlKReDmrKMCbFu7wzc1FDI9TUCBn8Kp1FWnR9lTM
 HwF9X97xUQ13AnxJjXnZDW6qnOZuh8XW/JLDvY3rgqKz8L88wufQ2QJUDNFQNgnr9MtAywn0
 EeTmNHkDiApt6eaIVqC8p+EoDX0PjIaRVLufgcBRAoBptPl8Ic6i0uWSs45SfDlyNroBTv33
 jaG6jAkgKkehtIK0KP9+k3bhzWrpd7CSQtdChjrY19JJzhRPOaND7FEI3CChRqcBO51lmW8g
 UU=
IronPort-HdrOrdr: A9a23:F5uwxa4OYAhfktoyKwPXwPrXdLJyesId70hD6qkRc202TiX8ra
 uTdZsguCMc9wxhIE3I9ertBED4ewK6yXct2/h2AV7AZniEhILLFuBfBOLZqlWLJ8SZzIFgPM
 xbE5SWZuefMbAP5vyKhDVRYb0bsby6GDbCv5am80tQ
X-IronPort-AV: E=Sophos;i="5.98,318,1673931600"; 
   d="scan'208";a="103095625"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=G1xyXshKzn3AsI5nin9LNHyjXA0cPNdeavPitItCBWSomeaH3/DoDbCTWHYsoo9mm2Yg2T0gIFNA4E3ph5E8tU1wBQm1+IS64spzlRulJKJ9SKAc3MNH69qO9KyKnakN+Wajs/hfxHG6UNvGgcnJurSETJnqOoVVVqt6tN4ED9SgdaO1lXJ8dAem/cfrXsQTB9Rfo6tbDRzh3nlxjiEiXXmw8RmqEHYedjwT65Tk0EeQp4BlWKvc0C7m12BdbXNIpT3WUXKfDJw6Ef0eS/rDCpCNMTgy4Fn1DLCfw190m4ixxUKB0JLUsVwEfopfZ2axb7y4/sVQpuaNoa/EnXW0qA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XvcsTwnXdBTocmvXSNfMLcZeG+qa8zME+K8EMsoSI54=;
 b=l7e9HwHQX4As9Vs/3989o2ktcL19LTV1yaXyy77VxtE4ZybBLhY1Un1eAQb8qDrnglkWqjYZbOAvSXrTrB4fKMkib4tfX8FSVzeYExAKHCMdLJVCkGpce463nEpLtfpH81vXjbFXa22z8JNpjbLE2UAEbvG93QvKqKkdEIdvALBu0yEGva+UincS6K31jG0qzrhUrcBDqbElbNo8Ku0yDxLLwV7jdT2t+cx5tNtZnT6beR50uWWvWyAZGdv9UlN2HvoLluIiNLQkra37SFKLXOvZzG6nJgnvrTlvxzO7aJMuGP3lEQ70kEAF67Cz+3loVOrylXf90hYfVo/DMwamOg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XvcsTwnXdBTocmvXSNfMLcZeG+qa8zME+K8EMsoSI54=;
 b=CsKf6VHnyg4m3lMY1SpsdQhXEcgeYfnWrzl9f9jzAFx1hQ6v+4qnZDPvZASCgc2aTwPE+ruaFkbjB6I7yI+Z+vr3DlJI2hdbw3Rr7lvblQ3HxLuqmpAs8K6I13mbnovs28FpOk+xX6jLuW2YNf4B0ujWt8iqDwrvkFVFeNc7zio=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 4 Apr 2023 18:38:58 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Message-ID: <ZCxSooPqPwpGW6yv@Air-de-Roger>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <ZCv3+cpzJ52Y679G@Air-de-Roger>
 <3752672b-f4a0-5ffb-9759-dd315ce31079@suse.com>
 <ZCwM1SfCAfh2koBD@Air-de-Roger>
 <ac13fa57-ceb2-0aaa-dcfa-42d8d01ee6d7@suse.com>
 <ZCxI18gb8zK5X+nR@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZCxI18gb8zK5X+nR@Air-de-Roger>
X-ClientProxiedBy: LO2P265CA0001.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:62::13) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6360:EE_|DS7PR03MB5575:EE_
X-MS-Office365-Filtering-Correlation-Id: 0675f27c-2a71-4a0c-b622-08db352b1adc
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ZDSkCjgLsp6KxiXQAwTEj08tBX4RF0YyhYbw/9cBAac5ZUzeOuqDuPdrUx2i3kPHU5Rizfrw/lOpXBsHylrYYsxL4xoEsfwc686EBT1Vw9oXIkFI9WYqZrp8mK3OugkoO748INZSZ+AifJsoQNEF9nWKbBYfPOeDgPwOy8LSROBMxaVCgidQZVNV7eSlQJ8AzAX4so/stoc8l0rn+9rz4qoUnYWx/qozs9AE2zG+sYJE16Pla/qPH8VK9b4XRLSuPmOBraC5504ywJK/R1HdAh5Y8sfRWLrqeZz4e6YRpJ5Ry4Cugjt3E1R5FFSDYnpJ5f3UxA3WOStBHpeK1QwQHH+xb8weSqGz2twxgjwbLq4lWGpgusEIU0n/givyzcVoEjjvfmV2mk1wP4jqZyBVF4P4FcAlqNjcEt6edKgeQPauq9/tQ7fBaud03gSPNJjN8rwRecvw0W0QXIiql+AA5wCwXalcmxXA7BMttzsYLtXrjDj8zg346dg0bzSOyK9p5xBa6W9gS5N7z+niUerFR/FoIopcnaKKR/UeElA2vaC80RDzcW8L3KjE10a+3g2hHVSD+QlqEV8QM5HSwCwf0vHx0CqRz5PydZZxLrZvOrc=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6360.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(7916004)(39860400002)(376002)(136003)(346002)(396003)(366004)(451199021)(86362001)(41300700001)(5660300002)(9686003)(186003)(6666004)(53546011)(26005)(6506007)(38100700002)(6512007)(6486002)(82960400001)(33716001)(66946007)(8936002)(4326008)(478600001)(54906003)(8676002)(66476007)(83380400001)(316002)(66556008)(6916009)(85182001)(66899021)(2906002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?eHFRdHBwUmdlbWFtWTlqenZHM0czNzJaZUd0Q1hkNWFhQlpuZzZ3WmVyV0tN?=
 =?utf-8?B?RW1YQ204TVF6MjNhMnQxdVRqYm04d3czTjBTb1o5NCtITG9QUVFkUWtqcUJa?=
 =?utf-8?B?ZVFWRmtvQ3ArcC9qalRUQWtsKzBSMnhrU3ZOVUp0cUMvcnA0R3I3TEJ1dEt5?=
 =?utf-8?B?VWZyKy9VeUcyUkNGYkZHUXU4OFVjaGZ3S3V4V05OYnA3OG5mWEVXMnMza0oy?=
 =?utf-8?B?Y3pJQlQrMDg0VFlxTWtEYVYxZzUzdEYzZXBtSFQvUmhzN3I0OEphUVFVSzNW?=
 =?utf-8?B?d0VGbllpRFBMN0xQRjRhbjdRclU4MGxMWVFiUzVyYUY0alZ2MEFWQ2lLRDVo?=
 =?utf-8?B?b0ZmRjV5ejZXK05QT1IvR0FWdjN2b1R4bVQzVG14ZlgzOG1GaXpZMUxYZ0ZX?=
 =?utf-8?B?Vkx3TG41NWFRVkVEWGtxMHNuUWF4ZDE5U1Z4OHl1VXFtbXcrSkUwQ0ZsK1Vj?=
 =?utf-8?B?Vm03VmNuV0ZPV3llOFJEOVN4RHhUTWpRZGgyT2xnbUsvcllXaVNLQWJxWVdU?=
 =?utf-8?B?RS9ZdklRdmpoYmpjR1VnNnVOR0NlMmh3MmtIV3R3czJKQzB2Y1pidmdYSUlG?=
 =?utf-8?B?eFFZa3FlVWNhcEgwYUV3RnNIZlFQMUFjajc3SzlIM1lDZ3BZU2xyaUwzS0Fl?=
 =?utf-8?B?WXlMS3NlQzRDM0lMRTZUNGhXSmc5dmp3Q0NoY2tzYjNRN1B4aFBPS3hxQWFZ?=
 =?utf-8?B?TWtUa1BsNUVwWFNrbkQ4bDZ0eGd3NE11SlhkNXk4Mnl0eG9TTFcxNldvcndN?=
 =?utf-8?B?YmJtbUg4blAyRm9ndTJvdWEzay9KTE92Qk9qZnhrVVo0NVJ1MUVIL0RTZ1E0?=
 =?utf-8?B?TlY3K002RGNuQkVJdTc5ay9vaVJacWRnMVJ0NDVVemJqRHdQNlRtRktyOFc1?=
 =?utf-8?B?Z04xeEN1TXA3cGkvWlg4Q2Vndkxwc0c2ODRaaVFZWVlvdUZEdVdzVlE4bXFw?=
 =?utf-8?B?QmEwQ3BNYnFDUkRsRDJtOHAwdWlVZ1QyOUFyeDZWbVlBc1BVT1Zaa2ZubjRj?=
 =?utf-8?B?RXpWSDZGSzRBQjZGNDM2SGVNOWdXazhhRXVZS24zWHQrbHZGVzNUK3E1dm5i?=
 =?utf-8?B?dUwzOUp3WUFLZmhjdXJGVytmZEJBZXZ0VC8rVnRPWDhvTDdoaTcwY3JBUGUz?=
 =?utf-8?B?dGJBZlcrcnpBMytUemNQb0EwU1RjcHdpWHNlRmkxTXhRNTFUaW9UWFdwYTZV?=
 =?utf-8?B?eFJwZVZabTdwWWlFN1IySVRNaVhabmpJejQ5SmduKzVJVi9Yd00wR2ZFMlNz?=
 =?utf-8?B?VjlYK25lNkcwZnUyWTluY3pCVkY3NGdkU01GVXRXYmkzT0ZPQldDV2pnZzVM?=
 =?utf-8?B?TTM2T29HMHZ3T240U3QwZHN0d1h0UUNKN1dmeE1nVlZTa1BDTXRTeFpaRHdC?=
 =?utf-8?B?c1dJNkpxbk1qamNwbkJ6QkxtMVlTK0QzeVhROWpaMy9TT1BTbi9NTHh1Rmxk?=
 =?utf-8?B?YXd6azhmZ3V5RElMY0JqK21VNm1PYUtKWlIrbTFkU1ozR1pHaU9oNVR1em1z?=
 =?utf-8?B?RG5IYjJXcUkzVVYxNjBic0orcnd3dCtpazM1dGlrSW9haDMzRXU5MzJGdWZy?=
 =?utf-8?B?ajJ6c3cyaW1vU1dmT2toR094K2xzaU1WTEwyU1hXYTlpMUQ5Mjc4T0tlQito?=
 =?utf-8?B?UU0xYmZsNHFFRzVDU0RUVUYwRmhGQy9KbGl0NFUzUWhEM2dDbk1lbVZvYVAw?=
 =?utf-8?B?anNtQnVsSXV0NHhpRk1kbHdESHEvN0R4bG1RTzAzYkUrTGs4YnE3WHZOMEJr?=
 =?utf-8?B?dTlDbXEyZ3ZubUhuQ1hqaEZndFZpNzBNaVFDZnZiYWZJckdPaDlZZTFUeHVF?=
 =?utf-8?B?MER6aUFhTyszejJ5SllUSVpTWlhUYjlMUCtkVzNMSDhEZnBjVnNEczV0UHdo?=
 =?utf-8?B?Zk9GOUE1UlVrVlNXVVgyZTdnNm5WZTZsbURLR28zcTNxTkc0Nm4ySkc4b3Bk?=
 =?utf-8?B?QStVOUJMa29kVlEyb3FuenhHclJ0bjBQa3dDQXZrV1dNNEgrbThkSHZURllt?=
 =?utf-8?B?WkxqMzBHaTVTNHUxY0d2RXEzSmVuNlZybG9NT2R6blVSZjczZmZaNVJ0L2hX?=
 =?utf-8?B?S3VnZFczejNCQUYvbGpSR2d6RnhkQVhQdkgwbVVKUnU3dE9rTUNOSitxVXJU?=
 =?utf-8?B?R1B1eXBXY0NKRHU1VmJ5VE5SdGxxMDRNMUNvK1czQjBpYVBrQ0hJK2t5SXFD?=
 =?utf-8?B?bUE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	AAG3/3lI25PQSsvGZf/rleKCs2H+ojUcmHJ5gWMItwZIOV9LYoqMPXEHeOdsT2386g+7eNbe7fY37fywrzltsQYzAdEq2nXLULaykcuRXxZelPeGAFMTzGpRvcE/XVh+TXd33MvPfq/Em40TQDzr2pt5HkCDARrQtLTzAGv3GjDZJ+l693U1SOHofc0KOF6bQkDpLOOHIo+FbT0E13AikpcQl34KirRKPJ3tgqImdU4jnwd1bxTqP0j9vdkobiP2NNqkGLCz47UTGHwW6XbnseO6FSyoe4XEYmxVzCArlssoJ13FKNWmV4MBR+fFe3z/M2dqTSb/im+sIh0eM9Bh3wHGv4fUIBGhtFvVWICaCcOYDwRviW3opENkE3MURfNZoyeX4rmYipIo/G+Gww+A/HGUJrEv4fQJzKTHFnvbs69t4MW/7sbarWXaBETdeiFfdbQghp6DhUWt53kwY4uFbt8gf+7FvzNGT0G00yOExQaon/+NWvZ372DcPto5ymYv5YEC82pujjWmbBDQrHNGgf+EdF3NuE7kbUZ7aobx4P5P0rqw8OyneLqKIhSnvJV8AMHhrzW7LADpL7l+drWAs5Wbxxtk4TCEPw3XXWmjUhvX6icLnbbUBankeHv4ZTBZJrNgFxVIi5aDsMvOsHbwpTS1iqxFbIJ4F7SXuO6rMlwlNsCgxV0Jwf6/VyWDNChM6OVgkbFuDx58SHGexwnSnpPYnG42B8MJvdPgwQuDG4zWekJ6S6jCiepl0E2J3+u45ci2q326d5iCApeOMqHy/LItbXb1zdqbls12BSwsaFswqGOCKbaSLizDktbNUdwP
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0675f27c-2a71-4a0c-b622-08db352b1adc
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 16:39:05.5814
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8GaC8WohKXghUOV/L+fW+g86GpluO/f+jHS26OpCPYDrZYdb9HdzBOIN73SneRZSeKmAlmYaBE2CbAU7mOZOxw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5575

On Tue, Apr 04, 2023 at 05:57:11PM +0200, Roger Pau Monné wrote:
> On Tue, Apr 04, 2023 at 04:24:16PM +0200, Jan Beulich wrote:
> > On 04.04.2023 13:41, Roger Pau Monné wrote:
> > > On Tue, Apr 04, 2023 at 12:31:31PM +0200, Jan Beulich wrote:
> > >> On 04.04.2023 12:12, Roger Pau Monné wrote:
> > >>> On Wed, Feb 15, 2023 at 03:54:11PM +0100, Jan Beulich wrote:
> > >>>> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
> > >>>> applies to guests also when run on a 64-bit hypervisor: The "extended
> > >>>> CR3" format has to be used there as well, to fit the address in the only
> > >>>> 32-bit wide register there. As a result it was a mistake that the check
> > >>>> was never enabled for that case, and was then mistakenly deleted in the
> > >>>> course of removal of 32-bit-Xen code (218adf199e68 ["x86: We can assume
> > >>>> CONFIG_PAGING_LEVELS==4"]).
> > >>>>
> > >>>> Similarly during Dom0 construction kernel awareness needs to be taken
> > >>>> into account, and respective code was again mistakenly never enabled for
> > >>>> 32-bit Dom0 when running on 64-bit Xen (and thus wrongly deleted by
> > >>>> 5d1181a5ea5e ["xen: Remove x86_32 build target"]).
> > >>>>
> > >>>> At the same time restrict enabling of the assist for Dom0 to just the
> > >>>> 32-bit case. Furthermore there's no need for an atomic update there.
> > >>>>
> > >>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > >>>> ---
> > >>>> I was uncertain whether to add a check to the CR3 guest read path,
> > >>>> raising e.g. #GP(0) when the value read wouldn't fit but also may not
> > >>>> be converted to "extended" format (overflow is possible there in
> > >>>> principle because of the control tools "slack" in promote_l3_table()).
> > >>>>
> > >>>> In that context I was puzzled to find no check on the CR3 guest write
> > >>>> path even in 4.2: A guest (bogusly) setting the PCD or PWT bits (or any
> > >>>> of the low reserved ones) could observe anomalous behavior rather than
> > >>>> plain failure.
> > >>>>
> > >>>> As to a Fixes: tag - it's pretty unclear which of the many original
> > >>>> 32-on-64 changes to blame. I don't think the two cited commits should
> > >>>> be referenced there, as they didn't break anything that wasn't already
> > >>>> broken.
> > >>>>
> > >>>> --- a/xen/arch/x86/mm.c
> > >>>> +++ b/xen/arch/x86/mm.c
> > >>>> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
> > >>>>      unsigned int   partial_flags = page->partial_flags;
> > >>>>      l3_pgentry_t   l3e = l3e_empty();
> > >>>>  
> > >>>> +    /*
> > >>>> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
> > >>>> +     * understand the weird 'extended cr3' format for dealing with high-order
> > >>>> +     * address bits. We cut some slack for control tools (before vcpu0 is
> > >>>> +     * initialised).
> > >>>
> > >>> Don't we then need some check in the vCPU init path to assure that the
> > >>> cr3 is < 32bits if we allow those to initially be set?
> > >>>
> > >>> Or will the initialization unconditionally overwrite any previous cr3
> > >>> value?
> > >>
> > >> That's not the way I understand this "cut some slack". Instead I read it
> > >> to be meant to cover for the VM-assist bit not being set, yet. Beyond
> > >> that it is assumed to be tool stack's responsibility to constrain
> > >> addresses suitably. If it doesn't, it'll simply break the guest. (There
> > >> is some guessing on my part involved here, as the original introduction
> > >> of that code didn't further explain things.)
> > > 
> > > If it's just the guest that's broken I would think it's fine.  As long
> > > as such mismatch doesn't cause issues in the hypervisor internal state.
> > > 
> > > Did you see a toolstack setting such entries before pae_extended_cr3
> > > is set?
> > 
> > To be honest - I didn't look. As said in the longer reply to Andrew, I
> > think it is more logical this way (the page table root already being
> > validated as an L3 table when vCPU 0 is inititalized, which includes
> > setting its CR3). Hence even if right now the order was the other way
> > around (which I doubt it is), I wouldn't want to make impossible to
> > restore the original ordering again.
> 
> IMO I think it would be better if we could already report error at
> domain creation time if the toolstack is attempting to create a domain
> that the hypervisor knows is not going to work properly, rather than
> allowing it and the guest failing in maybe non obvious ways.
> 
> It seems to me however that we would need to fix xc_dom_boot_image()
> in order to setup the vCPU before creating the initial page-tables.
> (->setup_pgtables() hook being called before ->vcpu() hook)
> 
> So I don't think this is strictly worse than what we have, but it
> would also be nice to get things sorted out so the ability of the
> toolstack to shot its own foot is limited.

Maybe I'm confused after all day, but isn't the hypercall used by the
toolstack to set CR3 the same one used to set the vm_assist bits?
(XEN_DOMCTL_setvcpucontext)

At which point we just need to make sure d->vm_assist gets set before
attempting to load the new CR3 (it seems to be that way from a quick
look at arch_set_info_guest()).

And so there should be no need to give extra slack to toolstack
operations.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 16:51:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 16:51:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518083.804251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjsi-0007Tj-5C; Tue, 04 Apr 2023 16:51:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518083.804251; Tue, 04 Apr 2023 16:51:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjjsi-0007Tc-0N; Tue, 04 Apr 2023 16:51:40 +0000
Received: by outflank-mailman (input) for mailman id 518083;
 Tue, 04 Apr 2023 16:51:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rvLN=73=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1pjjsg-0007TW-Pv
 for xen-devel@lists.xen.org; Tue, 04 Apr 2023 16:51:38 +0000
Received: from mail-wm1-x336.google.com (mail-wm1-x336.google.com
 [2a00:1450:4864:20::336])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f5ef77c0-d308-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 18:51:36 +0200 (CEST)
Received: by mail-wm1-x336.google.com with SMTP id
 j1-20020a05600c1c0100b003f04da00d07so992074wms.1
 for <xen-devel@lists.xen.org>; Tue, 04 Apr 2023 09:51:35 -0700 (PDT)
Received: from [192.168.0.106] ([91.123.150.38])
 by smtp.gmail.com with ESMTPSA id
 e5-20020a05600c218500b003ed243222adsm15616711wme.42.2023.04.04.09.51.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 04 Apr 2023 09:51:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5ef77c0-d308-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680627095;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=nmwA5QIE12SS7JgIVsKKCsxGeOjasYgBEbSu07ZYTSU=;
        b=j3X7CQ1RSjFMeq3h/sDNd516v3GtX5TcOf28kc7cih/YdK3d1j1bIgD6r8dQyy+CTV
         xS7FxlaA2mI9xXehp9sLxN2YiEzkisHp8N5GqnVLZGgOJbRKbcqftf4G50nEtCDPxUqy
         s82n2D6C8ucroeun906j3Hn/5n+h85GFAX88qGxpEMlAzDZVzcEF/p3iLVLTlqHlv+Kt
         eYyAj3fluApLMwniSQ6PXyLsuow0ouar1hI10a2VpDrbaI2MoPA37FhXvJkNLJpqBVRw
         G5dde3U+AI/b6zw4vpME7WRGv2wG9HW/fQu1IPEPHfph5u0XjDYFYkNkKmltMFO/UlCl
         0GCA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680627095;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=nmwA5QIE12SS7JgIVsKKCsxGeOjasYgBEbSu07ZYTSU=;
        b=Tcyvfu7cgAZzDpwl3v5r55wBKyD3SrXwk801YUGzylqlGESoaMrXEZkrj2+Cyb6lY0
         UQnp6xGi5dwBNIMPiFeJgOM51iY4aFw6hXC7ZtxANQcPkfU8HAUdQlqQKtf3s2ukUbeQ
         bHiUvvUCsoFiYsDgLkhT0LierbcNSNjREsq6aB3+6ppwT2i4LuGeIWJBbUjEOec+FVQL
         jWJ2vpl6BH4krmof4SIb9kVVQyrTfH2eqZ/I1Lkp53kDOYx5Z2A1QtxmkIQqlRZ2sZJ3
         h6AAz4xXbd/3pFeb5A78Nro9EdN41pVCpbzR+pCLMo0n2Q0r30DaRQhOidX/z2G+163x
         3fRw==
X-Gm-Message-State: AAQBX9e/uI3BSS7qgeWcgdPSH0YwkcDYU41w1I6k463yQDwTyyw+pGoR
	RyS0VAUyH7VouxpPNZK52Ro=
X-Google-Smtp-Source: AKy350ZPTHLXZZtEJJV/KLDfQHEAcggBiApR/vQ6qZwddyf+FZnWqHowVpVwxPjRx3L0JA7kCZ0Q4w==
X-Received: by 2002:a05:600c:254:b0:3ee:2552:7512 with SMTP id 20-20020a05600c025400b003ee25527512mr2628533wmj.13.1680627095070;
        Tue, 04 Apr 2023 09:51:35 -0700 (PDT)
Message-ID: <44ec8244-d172-8f21-d7c1-a070ff0a5cb5@gmail.com>
Date: Tue, 4 Apr 2023 19:51:33 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH] libxl: fix matching of generic virtio device
Content-Language: en-US
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
 Julien Grall <julien@xen.org>, xen-devel@lists.xen.org,
 Juergen Gross <jgross@suse.com>, stratos-dev@op-lists.linaro.org,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Mathieu Poirier <mathieu.poirier@linaro.com>,
 Erik Schilling <erik.schilling@linaro.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <210b5be4b7e84fce1519663f28ca24f6761fb2cb.1680161663.git.viresh.kumar@linaro.org>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
In-Reply-To: <210b5be4b7e84fce1519663f28ca24f6761fb2cb.1680161663.git.viresh.kumar@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 30.03.23 10:35, Viresh Kumar wrote:


Hello Viresh


> The strings won't be an exact match, and we are only looking to match
> the prefix here, i.e. "virtio,device". This is already done properly in
> libxl_virtio.c file, lets do the same here too.
> 
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>


It feels to me this patch wants to gain the following tag:

Fixes: 43ba5202e2ee ("libxl: add support for generic virtio device")



> ---
>   tools/libs/light/libxl_arm.c | 12 ++++++++----
>   1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> index ddc7b2a15975..97c80d7ed0fa 100644
> --- a/tools/libs/light/libxl_arm.c
> +++ b/tools/libs/light/libxl_arm.c
> @@ -1033,10 +1033,14 @@ static int make_virtio_mmio_node_device(libxl__gc *gc, void *fdt, uint64_t base,
>       } else if (!strcmp(type, VIRTIO_DEVICE_TYPE_GPIO)) {
>           res = make_virtio_mmio_node_gpio(gc, fdt);
>           if (res) return res;
> -    } else if (strcmp(type, VIRTIO_DEVICE_TYPE_GENERIC)) {
> -        /* Doesn't match generic virtio device */
> -        LOG(ERROR, "Invalid type for virtio device: %s", type);
> -        return -EINVAL;
> +    } else {
> +        int len = sizeof(VIRTIO_DEVICE_TYPE_GENERIC) - 1;
> +
> +        if (strncmp(type, VIRTIO_DEVICE_TYPE_GENERIC, len)) {
> +            /* Doesn't match generic virtio device */
> +            LOG(ERROR, "Invalid type for virtio device: %s", type);
> +            return -EINVAL;
> +        }


I agree that the code is now aligned with what we have in the
libxl_virtio.c file, but I am afraid I cannot connect this sentence from
the commit description:

"The strings won't be an exact match, and we are only looking to match
the prefix here, i.e. "virtio,device"."

with the sentence from docs/man/xl.cfg.5.pod.in:

"For generic virtio devices, where we don't need to set special or 
compatible properties in the Device Tree, the type field must be set to 
"virtio,device"."

I might be missing something, but shouldn't we clarify the documentation?
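For what it's worth, the prefix check the patch implements can be
sketched on its own like this (is_generic_virtio is a hypothetical
helper name for illustration, not something from libxl):

```c
#include <string.h>

/* Same value as the macro in libxl_arm.c. */
#define VIRTIO_DEVICE_TYPE_GENERIC "virtio,device"

/* Returns 1 if `type` starts with the "virtio,device" prefix, using
 * the strncmp(..., sizeof(...) - 1) pattern from the patch; the "- 1"
 * excludes the terminating NUL from the comparison length. */
static int is_generic_virtio(const char *type)
{
    size_t len = sizeof(VIRTIO_DEVICE_TYPE_GENERIC) - 1;

    return strncmp(type, VIRTIO_DEVICE_TYPE_GENERIC, len) == 0;
}
```

So "virtio,device" and e.g. "virtio,device26" both pass, while
"virtio,gpio" does not — which is the prefix-match behaviour the commit
message describes, as opposed to the exact match the docs suggest.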



>       }
>   
>       return fdt_end_node(fdt);


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 17:21:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 17:21:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518087.804260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjkLF-0002R0-Dz; Tue, 04 Apr 2023 17:21:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518087.804260; Tue, 04 Apr 2023 17:21:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjkLF-0002Qt-AA; Tue, 04 Apr 2023 17:21:09 +0000
Received: by outflank-mailman (input) for mailman id 518087;
 Tue, 04 Apr 2023 17:21:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjkLE-0002Qj-6c; Tue, 04 Apr 2023 17:21:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjkLE-0001YS-10; Tue, 04 Apr 2023 17:21:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjkLD-0005UC-HA; Tue, 04 Apr 2023 17:21:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjkLD-0002Pi-Gf; Tue, 04 Apr 2023 17:21:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VidBrDSsszNUGskm2BZWHSEq+ojhFlf1TW60WYaUgyk=; b=slHygYNrUlkk7ZjNVvZD8kHIKU
	wv9sgvVwbzRYAQz3nVpSHpdJkrju+uGveQN9yoPCn07nP8VvkF+iyd3OGMBkON/s7ca8+6FifOi5e
	tALlzh8zA9f0BQfRhLcN/ZzpKyLT72zQyoF+r3cZ82EQvtnzV4MjeflY2ahjbfM7gkWo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180138-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180138: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=fb89f62d2702faf7db7f7afef342467d4f0fba3c
X-Osstest-Versions-That:
    ovmf=26997800c991f934b57ebd91de2edcd93312f756
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Apr 2023 17:21:07 +0000

flight 180138 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180138/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 fb89f62d2702faf7db7f7afef342467d4f0fba3c
baseline version:
 ovmf                 26997800c991f934b57ebd91de2edcd93312f756

Last test of basis   180127  2023-04-03 15:40:46 Z    1 days
Testing same since   180138  2023-04-04 15:10:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   26997800c9..fb89f62d27  fb89f62d2702faf7db7f7afef342467d4f0fba3c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 17:36:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 17:36:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518093.804270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjkZd-000418-LE; Tue, 04 Apr 2023 17:36:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518093.804270; Tue, 04 Apr 2023 17:36:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjkZd-000411-IR; Tue, 04 Apr 2023 17:36:01 +0000
Received: by outflank-mailman (input) for mailman id 518093;
 Tue, 04 Apr 2023 17:35:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jyst=73=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pjkZb-00040c-Ia
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 17:35:59 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2391ff87-d30f-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 19:35:49 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id eg48so133624781edb.13
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 10:35:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2391ff87-d30f-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680629749;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=B++ILPDB3X+VuBU8jlV4qBmXikQNjKUxckqbnWCshEs=;
        b=nEMEpYDPKqf3YLC0FS2dnk3CaMTs5JeR/WJE92Vclc3fGNqm3Q4jcJ1saTecAW50kK
         SkZAAQedIQyx1QshTDj2dqfqdrj3L3TA98J03DxnWXM6lx5vIF1Y2qtzGFQYUnwou0NI
         0RdoLDwN3COEIJxX6HOFtQ68IcXGl00Q/Cro7ywaKLa/H7gl07KR0KPsmFRtsgxmVdPQ
         T++ulO6Hu1d7QX+FbmmPNluD3/GfPKnxXHpahwlm/bJjy3Lw/5Ek1C2Q4ArXC/Xs0TwO
         5CJyUvZoK5f8yImPnWACOMke+eMrjrzi/9c6YdRo3viKtljNnygfDHcE96qEVB9WWz+H
         RAPw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680629749;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=B++ILPDB3X+VuBU8jlV4qBmXikQNjKUxckqbnWCshEs=;
        b=AFT/M0Nw/UsOxWECXxMxd8uhjZ9DnVBrOD/nFh0igUX567IGw7new+d4TQsufbavEa
         pmVXIht3Bi4kGL/VWu01A4rRO/qlWQxbc54FaGVeqvhxZPTiOK0E6F3mIULWJkw9cTic
         O7LfzWxckbwJ1LbdWmlIeJMiWwNdFhYXSkQYk/9UztBdGc89mCmW6BN7BaAOwYKgBYD3
         QWz7ABzKStfTGy7eQaTaaPsyag4FFqQLWSQavvkcLJcwZvPvW6Ute0hEDxzXiL/0Uvsq
         dAedwIWEspZ7vp0Za1ygv9cEWTC68TvnVqtTeadCwbQXGwLJyqYCs470F0Bx5+4jxk6p
         5hYQ==
X-Gm-Message-State: AAQBX9etrsUW5ctkTP92hVHnAffy1Z5i9upsdayUpNcaNWR/mJkzsYWj
	wcVHgU2kwf4pt62HBNGRvQJR4/pnaE94lLdqaKFDvA==
X-Google-Smtp-Source: AKy350bFZ5U/HvpJohqoyPJEv2MbHDufBnPeztQmU14vuRQWrqhanpo8MykXvxhbF2/kYUPL7BROh16KxkmoJmjzqmw=
X-Received: by 2002:a17:906:9bde:b0:924:efbb:8a8b with SMTP id
 de30-20020a1709069bde00b00924efbb8a8bmr173409ejc.6.1680629748942; Tue, 04 Apr
 2023 10:35:48 -0700 (PDT)
MIME-Version: 1.0
References: <20230307182707.2298618-1-dwmw2@infradead.org> <20230307182707.2298618-14-dwmw2@infradead.org>
In-Reply-To: <20230307182707.2298618-14-dwmw2@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 4 Apr 2023 18:35:38 +0100
Message-ID: <CAFEAcA_SS8xRjGKZoSyGc0nh_-C2Wh8hauGzR82Aj8S1g8xBOQ@mail.gmail.com>
Subject: Re: [PULL 13/27] hw/xen: Add xenstore operations to allow redirection
 to internal emulation
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, 
	Joao Martins <joao.m.martins@oracle.com>, Ankur Arora <ankur.a.arora@oracle.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, vikram.garhwal@amd.com, 
	Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org, 
	Juan Quintela <quintela@redhat.com>, "Dr . David Alan Gilbert" <dgilbert@redhat.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, 7 Mar 2023 at 18:27, David Woodhouse <dwmw2@infradead.org> wrote:
>
> From: Paul Durrant <pdurrant@amazon.com>
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Reviewed-by: Paul Durrant <paul@xen.org>
> ---

Hi; Coverity points out a memory leak in this code (CID 1508098):

> +static struct qemu_xs_handle *libxenstore_open(void)
> +{
> +    struct xs_handle *xsh = xs_open(0);
> +    struct qemu_xs_handle *h = g_new0(struct qemu_xs_handle, 1);

Here we allocate memory...

> +
> +    if (!xsh) {
> +        return NULL;

...but here we can return without freeing it...

> +    }
> +
> +    h = g_new0(struct qemu_xs_handle, 1);

...and here we allocate a second time and overwrite the
pointer to the first allocation.

Deleting the first call to g_new0() would fix both of these.

> +    h->xsh = xsh;
> +
> +    notifier_list_init(&h->notifiers);
> +    qemu_set_fd_handler(xs_fileno(h->xsh), watch_event, NULL, h);
> +
> +    return h;
> +}

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 17:45:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 17:45:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518098.804280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjkid-0005Yq-MI; Tue, 04 Apr 2023 17:45:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518098.804280; Tue, 04 Apr 2023 17:45:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjkid-0005Yj-IW; Tue, 04 Apr 2023 17:45:19 +0000
Received: by outflank-mailman (input) for mailman id 518098;
 Tue, 04 Apr 2023 17:45:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8u37=73=casper.srs.infradead.org=BATV+8e7372aa539f26de88ef+7163+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pjkib-0005YX-Qt
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 17:45:18 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75862dc2-d310-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 19:45:16 +0200 (CEST)
Received: from [2001:8b0:10b:5:99d7:d5a0:55b7:41c3]
 (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pjkiS-00FZi4-QD; Tue, 04 Apr 2023 17:45:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75862dc2-d310-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=g8SnSDCU1FbwVqsBDwgLKPNWtPpuDwLN/0oRMertQBw=; b=h/snio6bG0Vu7iuAlLVf8ry4b+
	wuNF+uHu0wBcluW7ML+QR5KwB29+bVr49ZFQCCh72LjZOhFw5NnL1snhHVvec4OLHOJ6ft8eUrx8a
	Cs8rC8H+BkIT65DXYCK/NZVAvDTTJsasSXv54KemN1IqALr2sTy2QAn6gJy+o7XOvUJPYtqAPrxX8
	4DeFjZ4yXEcFsZ2EeLgbXAKNt62l6C0kp67Dt7p/+WRyiatMRmM2ToOfyhpODf1KNoIUfLj5PDsWN
	4W63StUsJUZnZqg8ociRHA7R1znRtZwU+xl7JVtX+xImPrlRLaJ7pafY2mP7EfhOTJt5BZUc2YdCK
	pAyE54uw==;
Message-ID: <d079d8c1f455c96203dc44906d37c2ac8963a6ae.camel@infradead.org>
Subject: Re: [PULL 13/27] hw/xen: Add xenstore operations to allow
 redirection to internal emulation
From: David Woodhouse <dwmw2@infradead.org>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant
	 <paul@xen.org>, Joao Martins <joao.m.martins@oracle.com>, Ankur Arora
	 <ankur.a.arora@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	vikram.garhwal@amd.com, Anthony Perard <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org, Juan Quintela <quintela@redhat.com>, "Dr .
	David Alan Gilbert"
	 <dgilbert@redhat.com>
Date: Tue, 04 Apr 2023 18:45:07 +0100
In-Reply-To: <CAFEAcA_SS8xRjGKZoSyGc0nh_-C2Wh8hauGzR82Aj8S1g8xBOQ@mail.gmail.com>
References: <20230307182707.2298618-1-dwmw2@infradead.org>
	 <20230307182707.2298618-14-dwmw2@infradead.org>
	 <CAFEAcA_SS8xRjGKZoSyGc0nh_-C2Wh8hauGzR82Aj8S1g8xBOQ@mail.gmail.com>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-trzKSnN3SH5xbBzkYnH5"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-trzKSnN3SH5xbBzkYnH5
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Tue, 2023-04-04 at 18:35 +0100, Peter Maydell wrote:
> On Tue, 7 Mar 2023 at 18:27, David Woodhouse <dwmw2@infradead.org>
> wrote:
> >
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> > Reviewed-by: Paul Durrant <paul@xen.org>
> > ---
>
> Hi; Coverity points out a memory leak in this code (CID 1508098):
>
> > +static struct qemu_xs_handle *libxenstore_open(void)
> > +{
> > +    struct xs_handle *xsh = xs_open(0);
> > +    struct qemu_xs_handle *h = g_new0(struct qemu_xs_handle, 1);
>
> Here we allocate memory...
>
> > +
> > +    if (!xsh) {
> > +        return NULL;
>
> ...but here we can return without freeing it...
>
> > +    }
> > +
> > +    h = g_new0(struct qemu_xs_handle, 1);
>
> ...and here we allocate a second time and overwrite the
> pointer to the first allocation.
>
> Deleting the first call to g_new0() would fix both of these.

Indeed, thanks. Do you want a

Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>

or would you prefer me to submit the actual patch as described?


--=-trzKSnN3SH5xbBzkYnH5
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Disposition: attachment; filename="smime.p7s"

[base64-encoded S/MIME signature attachment omitted]

--=-trzKSnN3SH5xbBzkYnH5--


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 17:46:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 17:46:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518102.804289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjkjB-00060t-UN; Tue, 04 Apr 2023 17:45:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518102.804289; Tue, 04 Apr 2023 17:45:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjkjB-00060m-Rj; Tue, 04 Apr 2023 17:45:53 +0000
Received: by outflank-mailman (input) for mailman id 518102;
 Tue, 04 Apr 2023 17:45:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jyst=73=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pjkjA-00060Z-Qx
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 17:45:52 +0000
Received: from mail-ed1-x531.google.com (mail-ed1-x531.google.com
 [2a00:1450:4864:20::531])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8abb3184-d310-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 19:45:52 +0200 (CEST)
Received: by mail-ed1-x531.google.com with SMTP id ew6so133753336edb.7
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 10:45:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8abb3184-d310-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680630351;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=zM6C51SzsToixyfPpduAVipa4SnNg5IMo4YirkkilTI=;
        b=m79o95DJlnvNJ4KH0Md0JB/qcLndytfefpLDAV0P5BOEkg/Vua/5v5yhxJiEAvb7w5
         zX4nTiEhJCuiD5E98BTa7eiGfw2OqM2TpwCvm1TcLI6Hk2miGK5+V4kheAWEibmMUm3R
         txiTr0hc/XRHsz4NUYkN/ovZ+XzpI7r9BRvI0kulU38aQadsOpX6GPzfUPpyJlXcZabB
         84rq8WrNEjrkLZ0VGXADyKLyZT0AVrpSg1AHZWPisb5dFOkkIAV3isD22l9nLcJ0JZJA
         mp4WPZrOD0ZIlWi0yX0/5QNNpEwJEB7WKHjVk6zXuk9UI4SAqf9oZJg3HYcHG/MKsiKz
         j7Ag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680630351;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=zM6C51SzsToixyfPpduAVipa4SnNg5IMo4YirkkilTI=;
        b=vTlh7o9YqVL4pn9P4Jf9rDIZvvJFx5sn/95zoBemeVsjY0M8VFzS0BIG2nykF8pXjH
         /smTRZ18TWRHAPNY5Ncp7UDQ1B8jl/QswepLMHF7Ah9+d5+3xknJQsyPNwBmaK0ZSB/u
         cgUwkctZFXHb8FoafH1QeEGPYTeYtzjsAVcZvibiTKbjnFfcmQSsAA3+BkTRyyw677+7
         FqekhSqi1Dgcv9CsQkehob+J2XeK/FubeYTTjwW7+8Y7kxrotajqZwN55mhwuu+SimAd
         m6462gsp+gX2gg1F9oj/0DPQaBHumoWvRGKY/NPFIVV7yw7GtNbHIyj5TVWPstjqiE0U
         dRpA==
X-Gm-Message-State: AAQBX9fjR0CIBh2He4Cmey5xaK06dFlw3ZhkHsTPf90lJf5sjxyFBxQE
	g4lUvAFpPgU6x/nXxGNnRolKdiLf30oc8zTxy7h6XQ==
X-Google-Smtp-Source: AKy350aUoDCSACn1ctQt94298XZZELptJd4vlQvIrHKhZ59SL3gN/lMGnaDhhMIf4vhQdc2A1ymiyiiUf29jIj2J2NY=
X-Received: by 2002:a17:907:118d:b0:932:4577:6705 with SMTP id
 uz13-20020a170907118d00b0093245776705mr203873ejb.6.1680630351647; Tue, 04 Apr
 2023 10:45:51 -0700 (PDT)
MIME-Version: 1.0
References: <20230307182707.2298618-1-dwmw2@infradead.org> <20230307182707.2298618-14-dwmw2@infradead.org>
 <CAFEAcA_SS8xRjGKZoSyGc0nh_-C2Wh8hauGzR82Aj8S1g8xBOQ@mail.gmail.com> <d079d8c1f455c96203dc44906d37c2ac8963a6ae.camel@infradead.org>
In-Reply-To: <d079d8c1f455c96203dc44906d37c2ac8963a6ae.camel@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 4 Apr 2023 18:45:41 +0100
Message-ID: <CAFEAcA-DT-990Y81mh0rgBp-P0fdLTYCD=DN7m1qued7VFVrVg@mail.gmail.com>
Subject: Re: [PULL 13/27] hw/xen: Add xenstore operations to allow redirection
 to internal emulation
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, 
	Joao Martins <joao.m.martins@oracle.com>, Ankur Arora <ankur.a.arora@oracle.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, vikram.garhwal@amd.com, 
	Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org, 
	Juan Quintela <quintela@redhat.com>, "Dr . David Alan Gilbert" <dgilbert@redhat.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, 4 Apr 2023 at 18:45, David Woodhouse <dwmw2@infradead.org> wrote:
>
> On Tue, 2023-04-04 at 18:35 +0100, Peter Maydell wrote:
> > On Tue, 7 Mar 2023 at 18:27, David Woodhouse <dwmw2@infradead.org>
> > wrote:
> > >
> > > From: Paul Durrant <pdurrant@amazon.com>
> > >
> > > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > > Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> > > Reviewed-by: Paul Durrant <paul@xen.org>
> > > ---
> >
> > Hi; Coverity points out a memory leak in this code (CID 1508098):
> >
> > > +static struct qemu_xs_handle *libxenstore_open(void)
> > > +{
> > > +    struct xs_handle *xsh = xs_open(0);
> > > +    struct qemu_xs_handle *h = g_new0(struct qemu_xs_handle, 1);
> >
> > Here we allocate memory...
> >
> > > +
> > > +    if (!xsh) {
> > > +        return NULL;
> >
> > ...but here we can return without freeing it...
> >
> > > +    }
> > > +
> > > +    h = g_new0(struct qemu_xs_handle, 1);
> >
> > ...and here we allocate a second time and overwrite the
> > pointer to the first allocation.
> >
> > Deleting the first call to g_new0() would fix both of these.
>
> Indeed, thanks. Do you want a
>
> Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
>
> or would you prefer me to submit the actual patch as described?

If you could submit the patch that would be easiest -- you're in
a better position to test it.

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 18:16:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 18:16:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518107.804300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjlCe-0001BC-6W; Tue, 04 Apr 2023 18:16:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518107.804300; Tue, 04 Apr 2023 18:16:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjlCe-0001B5-3Y; Tue, 04 Apr 2023 18:16:20 +0000
Received: by outflank-mailman (input) for mailman id 518107;
 Tue, 04 Apr 2023 18:16:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rvLN=73=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1pjlCc-0001Az-Hg
 for xen-devel@lists.xen.org; Tue, 04 Apr 2023 18:16:18 +0000
Received: from mail-wm1-x336.google.com (mail-wm1-x336.google.com
 [2a00:1450:4864:20::336])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ca1fe744-d314-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 20:16:17 +0200 (CEST)
Received: by mail-wm1-x336.google.com with SMTP id
 v20-20020a05600c471400b003ed8826253aso1247661wmo.0
 for <xen-devel@lists.xen.org>; Tue, 04 Apr 2023 11:16:16 -0700 (PDT)
Received: from [192.168.0.106] ([91.123.150.38])
 by smtp.gmail.com with ESMTPSA id
 a14-20020a05600c224e00b003ef5f77901dsm15728970wmm.45.2023.04.04.11.16.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 04 Apr 2023 11:16:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca1fe744-d314-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680632176;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=BCirR0DIa/QvXBN4OfZB+iKk1mo5LyWqNWEty91n8xk=;
        b=AtiqUKvX4AsMbxrDCCur/zsKdRRuNzxsmivIphJuZbGWHLchRBrx3Bu4Y4pL5NNuu9
         gG6iZU31EeFyCtKIbboiP4hpV+3JSbb27lTu6SwnAbGghTjdYSOQF7jOCCOIlt0gGTeD
         cTJnVwEFjieRi6H5T2uuNorqzdPtikxwMAMRXRMjEo6YDSodpV+Da/BY4hq/1Vw9nXe2
         9EHOrmLX0skiNRcFnWM2ss0FWcAo6YYaDG9Qm1LneIjjY7E6iyeafJn4KIvcBlBPIcIn
         p66Aaq8eGNhCFwmPwJ/FslvOgQt+xNZKNgoFpD3HuDs4JG5IRvoArFfxYMwL4TgZFqKS
         EaAA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680632176;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=BCirR0DIa/QvXBN4OfZB+iKk1mo5LyWqNWEty91n8xk=;
        b=nPO27ig5JVXN1XEbk5nrz4Wr5ovUfkHcmSTTI9r0mc1yk9xxNpwgQqAts8ttCOXQ1Z
         Kl7YvqxlhqWtGeHFDdCRTK/0+6PZKWCRm8IJCxOOV4Sng4hGxTTcofbZxIP0zffG1Biw
         vuKuRL2YvaYVux4+mbx5/eLxtIn4APdY2qpW0YPmc/zvpOrKtfMlZV87PRg5kfNf4haH
         TaH8fSniYujLfl/0+3Cdvjz9BlQ5dAXrJG/c3ZMl/FSTHl3q2ceVMNbSdIIib3lW2nhM
         beF9mM29Hyd3zfoNSi+CsT3ThWGGRMTfb1cRte2Qm6JUQ/lW438NNR31183d8DVgNPqU
         B4UA==
X-Gm-Message-State: AAQBX9e+UnCY1EqIdAUmqdKDIwURBE3euTT9j1Zm0ip99QYwGgXvhb6w
	Ynl7BKnQNHgDbuZpeQzEqEg=
X-Google-Smtp-Source: AKy350YOF8FfjzS9Aayo5g4ZcE9hqYtbwccdFVeyWjdirZtdMoLh46cHTvoZGnOtElOFokm2PtkFKw==
X-Received: by 2002:a05:600c:acb:b0:3dc:55d9:ec8 with SMTP id c11-20020a05600c0acb00b003dc55d90ec8mr2596346wmr.41.1680632175538;
        Tue, 04 Apr 2023 11:16:15 -0700 (PDT)
Message-ID: <25fb2b71-b663-b712-01cd-5c75aa4ccf9b@gmail.com>
Date: Tue, 4 Apr 2023 21:16:13 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH] libxl: arm: Allow grant mappings for backends running on
 Dom0
Content-Language: en-US
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>, xen-devel@lists.xen.org,
 stratos-dev@op-lists.linaro.org, Juergen Gross <jgross@suse.com>,
 Julien Grall <julien@xen.org>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Mathieu Poirier <mathieu.poirier@linaro.org>,
 Erik Schilling <erik.schilling@linaro.org>
References: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
In-Reply-To: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 30.03.23 11:43, Viresh Kumar wrote:

Hello Viresh

> Currently, we add grant mapping related device tree properties if the
> backend domain is not Dom0. While Dom0 is privileged and can do foreign
> mapping for the entire guest memory, it is still okay for Dom0 to access
> guest's memory via grant mappings and hence map only what is required.

ok, probably makes sense

> 
> This commit adds another parameter for virtio devices, with which they
> can do forced grant mappings irrespective of the backend domain id.
> 
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>


In general the patch LGTM, just a few comments below


> ---
>   docs/man/xl.cfg.5.pod.in         |  4 ++++
>   tools/libs/light/libxl_arm.c     | 21 ++++++++++++---------
>   tools/libs/light/libxl_types.idl |  1 +
>   tools/libs/light/libxl_virtio.c  | 11 +++++++++++
>   tools/xl/xl_parse.c              |  2 ++
>   5 files changed, 30 insertions(+), 9 deletions(-)
> 
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index 10f37990be57..4879f136aab8 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -1616,6 +1616,10 @@ properties in the Device Tree, the type field must be set to "virtio,device".
>   Specifies the transport mechanism for the Virtio device, only "mmio" is
>   supported for now.
>   
> +=item B<forced_grant=BOOLEAN>
> +
> +Allows Xen Grant memory mapping to be done from Dom0.


Assuming it is disabled by default, I would add the following:

The default is (0) false.
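It may also help man page readers to see the option in a guest config. A hypothetical xl.cfg fragment (the device spec string here is illustrative only, not taken from the patch):

```
# Force grant mappings for this virtio device even though its backend
# runs in Dom0 (the toolstack domain).
virtio = [ "type=virtio,device, transport=mmio, forced_grant=1" ]
```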

> +
>   =back
>   
>   =item B<tee="STRING">
> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> index 97c80d7ed0fa..ec2f1844e9b3 100644
> --- a/tools/libs/light/libxl_arm.c
> +++ b/tools/libs/light/libxl_arm.c
> @@ -922,7 +922,8 @@ static int make_xen_iommu_node(libxl__gc *gc, void *fdt)
>   
>   /* The caller is responsible to complete / close the fdt node */
>   static int make_virtio_mmio_node_common(libxl__gc *gc, void *fdt, uint64_t base,
> -                                        uint32_t irq, uint32_t backend_domid)
> +                                        uint32_t irq, uint32_t backend_domid,
> +                                        bool forced_grant)
>   {
>       int res;
>       gic_interrupt intr;
> @@ -945,7 +946,7 @@ static int make_virtio_mmio_node_common(libxl__gc *gc, void *fdt, uint64_t base,
>       res = fdt_property(fdt, "dma-coherent", NULL, 0);
>       if (res) return res;
>   
> -    if (backend_domid != LIBXL_TOOLSTACK_DOMID) {
> +    if (forced_grant || backend_domid != LIBXL_TOOLSTACK_DOMID) {
>           uint32_t iommus_prop[2];
>   
>           iommus_prop[0] = cpu_to_fdt32(GUEST_PHANDLE_IOMMU);
> @@ -959,11 +960,12 @@ static int make_virtio_mmio_node_common(libxl__gc *gc, void *fdt, uint64_t base,
>   }
>   
>   static int make_virtio_mmio_node(libxl__gc *gc, void *fdt, uint64_t base,
> -                                 uint32_t irq, uint32_t backend_domid)
> +                                 uint32_t irq, uint32_t backend_domid,
> +                                 bool forced_grant)
>   {
>       int res;
>   
> -    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid);
> +    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid, forced_grant);
>       if (res) return res;
>   
>       return fdt_end_node(fdt);
> @@ -1019,11 +1021,11 @@ static int make_virtio_mmio_node_gpio(libxl__gc *gc, void *fdt)
>   
>   static int make_virtio_mmio_node_device(libxl__gc *gc, void *fdt, uint64_t base,
>                                           uint32_t irq, const char *type,
> -                                        uint32_t backend_domid)
> +                                        uint32_t backend_domid, bool forced_grant)
>   {
>       int res;
>   
> -    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid);
> +    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid, forced_grant);
>       if (res) return res;
>   
>       /* Add device specific nodes */
> @@ -1363,7 +1365,7 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
>                       iommu_needed = true;
>   
>                   FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq,
> -                                           disk->backend_domid) );
> +                                           disk->backend_domid, false) );
>               }
>           }
>   
> @@ -1373,12 +1375,13 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
>               if (virtio->transport != LIBXL_VIRTIO_TRANSPORT_MMIO)
>                   continue;
>   
> -            if (virtio->backend_domid != LIBXL_TOOLSTACK_DOMID)
> +            if (virtio->forced_grant || virtio->backend_domid != LIBXL_TOOLSTACK_DOMID)
>                   iommu_needed = true;
>   
>               FDT( make_virtio_mmio_node_device(gc, fdt, virtio->base,
>                                                 virtio->irq, virtio->type,
> -                                              virtio->backend_domid) );
> +                                              virtio->backend_domid,
> +                                              virtio->forced_grant) );
>           }
>   
>           /*
> diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
> index c10292e0d7e3..cfbcd717dc7f 100644
> --- a/tools/libs/light/libxl_types.idl
> +++ b/tools/libs/light/libxl_types.idl
> @@ -740,6 +740,7 @@ libxl_device_virtio = Struct("device_virtio", [
>       ("backend_domname", string),
>       ("type", string),
>       ("transport", libxl_virtio_transport),
> +    ("forced_grant", bool),


If I remember correctly, when making any updates here we also need to 
regenerate the golang bindings.


>       ("devid", libxl_devid),
>       # Note that virtio-mmio parameters (irq and base) are for internal
>       # use by libxl and can't be modified.
> diff --git a/tools/libs/light/libxl_virtio.c b/tools/libs/light/libxl_virtio.c
> index faada49e184e..e1f15344ef97 100644
> --- a/tools/libs/light/libxl_virtio.c
> +++ b/tools/libs/light/libxl_virtio.c
> @@ -48,11 +48,13 @@ static int libxl__set_xenstore_virtio(libxl__gc *gc, uint32_t domid,
>       flexarray_append_pair(back, "base", GCSPRINTF("%#"PRIx64, virtio->base));
>       flexarray_append_pair(back, "type", GCSPRINTF("%s", virtio->type));
>       flexarray_append_pair(back, "transport", GCSPRINTF("%s", transport));
> +    flexarray_append_pair(back, "forced_grant", GCSPRINTF("%u", virtio->forced_grant));
>   
>       flexarray_append_pair(front, "irq", GCSPRINTF("%u", virtio->irq));
>       flexarray_append_pair(front, "base", GCSPRINTF("%#"PRIx64, virtio->base));
>       flexarray_append_pair(front, "type", GCSPRINTF("%s", virtio->type));
>       flexarray_append_pair(front, "transport", GCSPRINTF("%s", transport));
> +    flexarray_append_pair(front, "forced_grant", GCSPRINTF("%u", virtio->forced_grant));
>   
>       return 0;
>   }
> @@ -104,6 +106,15 @@ static int libxl__virtio_from_xenstore(libxl__gc *gc, const char *libxl_path,
>           }
>       }
>   
> +    tmp = NULL;
> +    rc = libxl__xs_read_checked(gc, XBT_NULL,
> +				GCSPRINTF("%s/forced_grant", be_path), &tmp);
> +    if (rc) goto out;
> +
> +    if (tmp) {
> +        virtio->forced_grant = strtoul(tmp, NULL, 0);
> +    }

I would add an "else" case, something like:

{
    LOG(DEBUG, "Missing xenstore node %s/forced_grant, "
               "assuming it is disabled", libxl_path);
    virtio->forced_grant = 0;
}
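
The read-with-default behaviour sketched above can be modelled standalone. This is a simplified sketch, not libxl code: `parse_forced_grant` is a made-up helper showing how a missing node (NULL) or a numeric xenstore value maps to the boolean, mirroring the `strtoul(tmp, NULL, 0)` call in the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical helper, not libxl code: maps the raw xenstore string
 * for "forced_grant" to a boolean. NULL models a missing node. */
static bool parse_forced_grant(const char *tmp)
{
    if (!tmp)
        return false;           /* node absent: default to disabled */
    return strtoul(tmp, NULL, 0) != 0;
}
```

Base 0 in strtoul() means both decimal and "0x" hexadecimal spellings of the value are accepted.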

> +
>       tmp = NULL;
>       rc = libxl__xs_read_checked(gc, XBT_NULL,
>   				GCSPRINTF("%s/type", be_path), &tmp);
> diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
> index 1f6f47daf4e1..3e34da099785 100644
> --- a/tools/xl/xl_parse.c
> +++ b/tools/xl/xl_parse.c
> @@ -1215,6 +1215,8 @@ static int parse_virtio_config(libxl_device_virtio *virtio, char *token)
>       } else if (MATCH_OPTION("transport", token, oparg)) {
>           rc = libxl_virtio_transport_from_string(oparg, &virtio->transport);
>           if (rc) return rc;
> +    } else if (MATCH_OPTION("forced_grant", token, oparg)) {
> +        virtio->forced_grant = strtoul(oparg, NULL, 0);
>       } else {
>           fprintf(stderr, "Unknown string \"%s\" in virtio spec\n", token);
>           return -1;


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 18:22:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 18:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518112.804309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjlIT-0002f3-Vp; Tue, 04 Apr 2023 18:22:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518112.804309; Tue, 04 Apr 2023 18:22:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjlIT-0002ew-Sv; Tue, 04 Apr 2023 18:22:21 +0000
Received: by outflank-mailman (input) for mailman id 518112;
 Tue, 04 Apr 2023 18:22:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8u37=73=casper.srs.infradead.org=BATV+8e7372aa539f26de88ef+7163+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pjlIR-0002eq-RA
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 18:22:19 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9edc7384-d315-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 20:22:16 +0200 (CEST)
Received: from [2001:8b0:10b:5:99d7:d5a0:55b7:41c3]
 (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pjlIB-00Fbbn-KB; Tue, 04 Apr 2023 18:22:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9edc7384-d315-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=Lu8jZhJaTM8m5Xr4ZDYrxrjU1MfxECiW0p4YuCvnGkU=; b=p1Vzi9eB2flWEAyLvCdP73AYed
	5cxIUcKX2e4Hj3GBqR/JPEJjgr/GVER5EDKP7iF5VfmYr+uSi6nhaNXHLE0ZBmwMv4+e4MvoiqtHd
	8RCwRAdlISuh7VlCxlnbuIT/lWgZs+NR+ScqzUTbFVkCUEYsq3XgGfLPXjgGpmYQzN5xEBK2HFpA+
	7SXBDrfuAsizV++3ptudrG4I0Y07Mnw6egH2DTCSyYmZVQ15SPHHIjddJwp6RlnIhyem75hDjwrsb
	ojy6qE07qp6nad4Rw2PlbthbVobe25Zyo9JhYOsInVqt28A/RHbe7XIFHX4bxocciULrlgR3Uzf0w
	Uf/iNuMQ==;
Message-ID: <59486d12e4c22fccf5fad4a34cfae68547c759c2.camel@infradead.org>
Subject: Re: [PULL 13/27] hw/xen: Add xenstore operations to allow
 redirection to internal emulation
From: David Woodhouse <dwmw2@infradead.org>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant
	 <paul@xen.org>, Joao Martins <joao.m.martins@oracle.com>, Ankur Arora
	 <ankur.a.arora@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	vikram.garhwal@amd.com, Anthony Perard <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org, Juan Quintela <quintela@redhat.com>, "Dr .
	David Alan Gilbert"
	 <dgilbert@redhat.com>
Date: Tue, 04 Apr 2023 19:21:53 +0100
In-Reply-To: <CAFEAcA-DT-990Y81mh0rgBp-P0fdLTYCD=DN7m1qued7VFVrVg@mail.gmail.com>
References: <20230307182707.2298618-1-dwmw2@infradead.org>
	 <20230307182707.2298618-14-dwmw2@infradead.org>
	 <CAFEAcA_SS8xRjGKZoSyGc0nh_-C2Wh8hauGzR82Aj8S1g8xBOQ@mail.gmail.com>
	 <d079d8c1f455c96203dc44906d37c2ac8963a6ae.camel@infradead.org>
	 <CAFEAcA-DT-990Y81mh0rgBp-P0fdLTYCD=DN7m1qued7VFVrVg@mail.gmail.com>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-5qfME1/1OsAkh+xXm9Bs"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-5qfME1/1OsAkh+xXm9Bs
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2023-04-04 at 18:45 +0100, Peter Maydell wrote:
> On Tue, 4 Apr 2023 at 18:45, David Woodhouse <dwmw2@infradead.org>
> wrote:
> >
> > On Tue, 2023-04-04 at 18:35 +0100, Peter Maydell wrote:
> > > On Tue, 7 Mar 2023 at 18:27, David Woodhouse
> > > <dwmw2@infradead.org>
> > > wrote:
> > > >
> > > > From: Paul Durrant <pdurrant@amazon.com>
> > > >
> > > > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > > > Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> > > > Reviewed-by: Paul Durrant <paul@xen.org>
> > > > ---
> > >
> > > Hi; Coverity points out a memory leak in this code (CID 1508098):
> > >
> > > > +static struct qemu_xs_handle *libxenstore_open(void)
> > > > +{
> > > > +    struct xs_handle *xsh = xs_open(0);
> > > > +    struct qemu_xs_handle *h = g_new0(struct qemu_xs_handle,
> > > > 1);
> > >
> > > Here we allocate memory...
> > >
> > > > +
> > > > +    if (!xsh) {
> > > > +        return NULL;
> > >
> > > ...but here we can return without freeing it...
> > >
> > > > +    }
> > > > +
> > > > +    h = g_new0(struct qemu_xs_handle, 1);
> > >
> > > ...and here we allocate a second time and overwrite the
> > > pointer to the first allocation.
> > >
> > > Deleting the first call to g_new0() would fix both of these.
> >
> > Indeed, thanks. Do you want a
> >
> > Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
> >
> > or would you prefer me to submit the actual patch as described?
>
> If you could submit the patch that would be easiest -- you're in
> a better position to test it.

I've been getting Paul to test the parts on actual Xen, so I'll send it
and let him test.
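
Reduced to a standalone sketch (with xs_open() and g_new0() replaced by stand-ins, since this is not the real QEMU code), the fixed ordering Coverity's finding calls for looks like this: allocate only after every early-return check has passed, so the error path has nothing to free.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct xs_handle { int dummy; };
struct qemu_xs_handle { struct xs_handle *xsh; };

/* Stand-in for xs_open(): returns NULL when asked to simulate failure. */
static struct xs_handle *xs_open_stub(int fail)
{
    return fail ? NULL : calloc(1, sizeof(struct xs_handle));
}

/* Fixed shape of libxenstore_open(): nothing is allocated until the
 * xs handle is known to be valid, so the error path leaks nothing. */
static struct qemu_xs_handle *open_handle(int fail)
{
    struct xs_handle *xsh = xs_open_stub(fail);
    struct qemu_xs_handle *h;

    if (!xsh) {
        return NULL;            /* no allocation made yet: no leak */
    }

    h = calloc(1, sizeof(struct qemu_xs_handle));
    h->xsh = xsh;
    return h;
}
```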



--=-5qfME1/1OsAkh+xXm9Bs--


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 18:25:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 18:25:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518117.804320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjlL5-0003Do-EN; Tue, 04 Apr 2023 18:25:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518117.804320; Tue, 04 Apr 2023 18:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjlL5-0003Dh-BE; Tue, 04 Apr 2023 18:25:03 +0000
Received: by outflank-mailman (input) for mailman id 518117;
 Tue, 04 Apr 2023 18:25:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8u37=73=casper.srs.infradead.org=BATV+8e7372aa539f26de88ef+7163+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pjlL4-0003Db-DB
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 18:25:02 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 02b171ab-d316-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 20:25:01 +0200 (CEST)
Received: from [2001:8b0:10b:5:99d7:d5a0:55b7:41c3]
 (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pjlL2-00Fbl8-3s; Tue, 04 Apr 2023 18:25:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02b171ab-d316-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:Date:Cc:To:
	From:Subject:Message-ID:Sender:Reply-To:Content-Transfer-Encoding:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=480JJEkRxs3cmNNvJ6vGQyvJ1jJTqswEUwkTvI30tHc=; b=onf3GHa5oA8L0Q8WHPctBlPRUW
	zHbocjgALQoABlp3ZuxwkP093PFoR8fNhfXGq7VdpaL5Jqn70+CPmKTlOK8ypnn2CK7y4Wt6WY/+s
	Qv4L+OTeCABVyU9PvgyFa02QfsH0g6IFbUszb+3kih8UgsPVsrxkjXb4gX7T1EE3QVeU2+nc3Qn8i
	8GxXREN4h3+7tm7jBzGEGDKqjpEpd2W6OOhMROVwD6Ey5HTd+KtCWjzcM33o0XSGfYSr2Qwrz2iDz
	ZB4GbV+pPvGdBodDDwbV1QSX3LjjC4VicQIXWXPeumJDV6w+qmydTovyCnDE3NDDODkkkb7nrKq20
	/e0Vmrng==;
Message-ID: <daaa71eea7fa0c4bdb70131d794ce8e5cee0e0c2.camel@infradead.org>
Subject: [PATCH] hw/xen: Fix memory leak in libxenstore_open() for Xen
From: David Woodhouse <dwmw2@infradead.org>
To: qemu-devel <qemu-devel@nongnu.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Paul Durrant <paul@xen.org>,
  Anthony Perard <anthony.perard@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 04 Apr 2023 19:24:59 +0100
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-uipq9uCR3mh5JLaPWdR1"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-uipq9uCR3mh5JLaPWdR1
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

From: David Woodhouse <dwmw@amazon.co.uk>

There was a superfluous allocation of the XS handle, leading to it
being leaked on both the error path and the success path (where it gets
allocated again).

Spotted by Coverity (CID 1508098).

Fixes: ba2a92db1ff6 ("hw/xen: Add xenstore operations to allow redirection to internal emulation")
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 hw/xen/xen-operations.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/xen/xen-operations.c b/hw/xen/xen-operations.c
index 4b78fbf4bd..3d213d28df 100644
--- a/hw/xen/xen-operations.c
+++ b/hw/xen/xen-operations.c
@@ -287,7 +287,7 @@ static void watch_event(void *opaque)
 static struct qemu_xs_handle *libxenstore_open(void)
 {
     struct xs_handle *xsh = xs_open(0);
-    struct qemu_xs_handle *h = g_new0(struct qemu_xs_handle, 1);
+    struct qemu_xs_handle *h;
 
     if (!xsh) {
         return NULL;
-- 
2.34.1





--=-uipq9uCR3mh5JLaPWdR1--


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 18:28:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 18:28:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518122.804330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjlOM-0003q0-Sw; Tue, 04 Apr 2023 18:28:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518122.804330; Tue, 04 Apr 2023 18:28:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjlOM-0003pt-QE; Tue, 04 Apr 2023 18:28:26 +0000
Received: by outflank-mailman (input) for mailman id 518122;
 Tue, 04 Apr 2023 18:28:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jyst=73=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pjlOL-0003pn-Ov
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 18:28:25 +0000
Received: from mail-ed1-x52e.google.com (mail-ed1-x52e.google.com
 [2a00:1450:4864:20::52e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7b76eb4c-d316-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 20:28:23 +0200 (CEST)
Received: by mail-ed1-x52e.google.com with SMTP id t10so134161537edd.12
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 11:28:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b76eb4c-d316-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680632903;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=r+d5Q9YLXYIEcRrYG4TRRbzODXP0vlh3YF3733MY38Y=;
        b=f3BpCBNxoILDtSyKj6axT8mrPBlR5NN0lRzyoX5tdjNO/89HEPknpa4WvDWvmDY58B
         BI0hYUwNBgaMA8tYJ7g1HLqAvUExLBy3m1U1+4LSeQHitztp3uTADQEFWKk3ejKh8zHG
         d8eIbwe2BJ3fYazGz2Q4bSoA4I96qXA5V+g/6roXRdYcnwfhMpurLv7XqBXSibhOa8T4
         uZcgEoBlNaqEyFJ1VY53AKfCGcQ/Cel/7ok7tvgZq4mf9GXwVfV67QswY7pBPxG4byhj
         5Gzf52tGeynPD3urXrj/SVyzZcSHBswX6C3Zinr9W2c1498cvHXAwVpExI0VTjq7CdSU
         bAWQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680632903;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=r+d5Q9YLXYIEcRrYG4TRRbzODXP0vlh3YF3733MY38Y=;
        b=qYebnK8ESyCUChQ7qXgwN+NFPf8mORiaU+6gEYjhfEdCVUFrqG/qGWI2z/Xc3zOkoC
         6cNrzBjp5ulnoxbOCK1kdd9xc/Q8G3Vs2TkI44laZAE3KgSyYFk0t1j+h+ruR2WKu1wp
         lQsfLWeNeMeukKUvGbpXNdpvASPc8MqKjfhMRAEKqn5nFUMQgK25331VzBZ4lUd299Fs
         NoG91vXsQhPiycsrTk/UuMMMCFDKLoEzGdEjZiuZqHKiRTcVCFDsY6EK0y7IEJwru+//
         0jmRFIc61PS8QK2PcZpgUM+4ec/bH7TAPFOOhIU6iE9xV9LdL+mT/36XC1fNi+rYIsDf
         P2MA==
X-Gm-Message-State: AAQBX9dyddlt8xdILmbfHkfqeEM4keUcyB1dgSUB5liKMK38x6Xls6lA
	MTnPRKy2SLRiUp0Xbp38FoZXZDG0sDXuY5i1eqRLiQ==
X-Google-Smtp-Source: AKy350ZT252GSKd/98j6Mu/b60fK17QV1dP+JE4fqqkBHdumFzII2W394UiPvTI+M3TSV0LF0wvFMeuQ5PxM5EcMK/M=
X-Received: by 2002:a50:bb43:0:b0:4fb:7e7a:ebf1 with SMTP id
 y61-20020a50bb43000000b004fb7e7aebf1mr222707ede.6.1680632902832; Tue, 04 Apr
 2023 11:28:22 -0700 (PDT)
MIME-Version: 1.0
References: <daaa71eea7fa0c4bdb70131d794ce8e5cee0e0c2.camel@infradead.org>
In-Reply-To: <daaa71eea7fa0c4bdb70131d794ce8e5cee0e0c2.camel@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 4 Apr 2023 19:28:12 +0100
Message-ID: <CAFEAcA_-ejPodtPeaJb1xpS7aK1ApQtE1qdRU7L5aJO_XzgPZA@mail.gmail.com>
Subject: Re: [PATCH] hw/xen: Fix memory leak in libxenstore_open() for Xen
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel <qemu-devel@nongnu.org>, xen-devel <xen-devel@lists.xenproject.org>, 
	Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, 4 Apr 2023 at 19:25, David Woodhouse <dwmw2@infradead.org> wrote:
>
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> There was a superfluous allocation of the XS handle, leading to it
> being leaked on both the error path and the success path (where it gets
> allocated again).
>
> Spotted by Coverity (CID 1508098).
>
> Fixes: ba2a92db1ff6 ("hw/xen: Add xenstore operations to allow redirection to internal emulation")
> Suggested-by: Peter Maydell <peter.maydell@linaro.org>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>  hw/xen/xen-operations.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/hw/xen/xen-operations.c b/hw/xen/xen-operations.c
> index 4b78fbf4bd..3d213d28df 100644
> --- a/hw/xen/xen-operations.c
> +++ b/hw/xen/xen-operations.c
> @@ -287,7 +287,7 @@ static void watch_event(void *opaque)
>  static struct qemu_xs_handle *libxenstore_open(void)
>  {
>      struct xs_handle *xsh = xs_open(0);
> -    struct qemu_xs_handle *h = g_new0(struct qemu_xs_handle, 1);
> +    struct qemu_xs_handle *h;
>
>      if (!xsh) {
>          return NULL;
> --
> 2.34.1

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 18:51:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 18:51:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518127.804340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjlk1-0007AD-Op; Tue, 04 Apr 2023 18:50:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518127.804340; Tue, 04 Apr 2023 18:50:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjlk1-0007A6-LM; Tue, 04 Apr 2023 18:50:49 +0000
Received: by outflank-mailman (input) for mailman id 518127;
 Tue, 04 Apr 2023 18:50:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjljz-00079w-Pr; Tue, 04 Apr 2023 18:50:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjljz-0003dr-OB; Tue, 04 Apr 2023 18:50:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjljz-0001mv-Bl; Tue, 04 Apr 2023 18:50:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjljz-0005Y1-B2; Tue, 04 Apr 2023 18:50:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=V3Qio32KoVcucSR4U73bxtt9sgm0NSQNh+tVBbU+crI=; b=L/xh3opJhIgwI+dm6A8BdInCIK
	bRLhSzhmXNCVeXubtTWVliP095JiFaMs+JKFkrbCSucDFhOKGtlFUoqmX1UyFURWl+CwqwtDHuwxy
	4RR5AcDxmo2XJEzM5I1aj8ILQaL4tqX70PNicoOstzH63yVxSg/73S2fHsUkg64qqB+4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180140-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180140: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=415f7d9404171cbc968b1ea22e7d3523ac2f3fc1
X-Osstest-Versions-That:
    xen=658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Apr 2023 18:50:47 +0000

flight 180140 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180140/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180137

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  415f7d9404171cbc968b1ea22e7d3523ac2f3fc1
baseline version:
 xen                  658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e

Last test of basis   180137  2023-04-04 14:03:39 Z    0 days
Testing same since   180140  2023-04-04 17:01:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 415f7d9404171cbc968b1ea22e7d3523ac2f3fc1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Apr 4 17:35:52 2023 +0100

    Revert "Revert "build: Change remaining xenbits.xen.org link to HTTPS""
    
    This reverts commit b5cc3c25a242ddb9c5b108884061b17f35c3084b, reinstating the
    original change as per e1d75084443f676be681fdaf47585cc9a5f5b820.
    
    We think the OSSTest failure has been addressed now.
    
    Link: https://lore.kernel.org/xen-devel/20d41dd0-19d1-47fb-92ab-4de458ddd56f@perard/
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 20:41:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 20:41:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518136.804359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjnSg-0001j9-Jk; Tue, 04 Apr 2023 20:41:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518136.804359; Tue, 04 Apr 2023 20:41:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjnSg-0001j2-Gb; Tue, 04 Apr 2023 20:41:02 +0000
Received: by outflank-mailman (input) for mailman id 518136;
 Tue, 04 Apr 2023 20:41:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjnSe-0001ig-Gd
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 20:41:01 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ff4003aa-d328-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 22:40:57 +0200 (CEST)
Received: from mail-dm6nam04lp2042.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.42])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 16:40:53 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB5080.namprd03.prod.outlook.com (2603:10b6:a03:1ea::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.34; Tue, 4 Apr
 2023 20:40:51 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 20:40:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff4003aa-d328-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680640857;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=OcfyXQakXa/Js9T7ldqQkijECtim2exxVgrREpNG/r4=;
  b=TUR/yCv4mOIAz/Lqa8cCNKL1teeH1Ar2Mu6E1Muz1r9yPRWFW9BL0Vq3
   Ajx6inXcUrKSC3LIcZ5BvYRHGuZ7fYudZtuamcje1PcdUvmumdDClZyuF
   pPvD0uSMxTF6Y0my1Uy1jxlwkQcOQLPHMtQPftxmcphcJSzgPDi/B6TOj
   A=;
X-IronPort-RemoteIP: 104.47.73.42
X-IronPort-MID: 104360082
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,318,1673931600"; 
   d="scan'208";a="104360082"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oDD7Lo4ulL5XlkD7IBdPqELv80gFlCsQk5bIvQqJUC0S/iZ49ZmUwIYxBEOW8n/BdnD3Kyyklo7ZwlAIZ+gWZqp3t21LyQwClzbmfpe6/jpGDFoG/V5wjy8kJ+ZBd3ywbo+lfnb9disbQKF0arcCYyL1KGSca2qWlbyDY0itKQaw8DprHheZruf/gaIuJ+r1BHW72sp+FzT8NWIrD9H+xihupgpALEXglqx7YwIjJNpt6LbXDLQ46ol4cC3JGLSlx4zItTAnTcWLH6pceuknR/1nFmOi613JGX4cSWoINa9bi9pP8IJMQg16B1/yIWfjiCbV0gWDgfAuJ7+cdyQHDg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hKY0pwn6ZmXRuUf4xs2ZJ/Z+kpIVH1u23Ydggy2dA1g=;
 b=ELbxaJHSD+s/lc2W70HNi8YXaJhEjXt7TNn15AydiYWBTaCBHCFdB3al3VJxwNK78c1JhYD3DIMMzOqbYKwf8lMZ9hNyqydt6Lwr108wh98nH//CXTJTD+PBd26UT1vZ1EyarcHJqRWJP2ez0EnXJ4xzt56ycDfNnE11kV+wLR4suCPfHs0ha/qxWMJa10iGom50AEjJ9DsaGWCuSXZWpiRHZOzJ2ezPupZyILaQHyr1V3FicfWZhYmhrMH4lHxzpR6xkD5YMuEH6evyDQL8zh3RlyUF+ikwUfi4aCoPYeywFBqxO5r/s7xPcDP7MXNJ1PP0CyBWbBfMa51nH21W0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hKY0pwn6ZmXRuUf4xs2ZJ/Z+kpIVH1u23Ydggy2dA1g=;
 b=igwCiu7Vzps1m5JGPnEDqgXvhOxLAw6l0a9gXXUEra7ay5onT+CHFyUOTQu//zbjKUhU+Zr4HqXayLFLLq+vjAd7noiitqCLnuyYUCxtGArnh0WU6+snZbaAYKeqcmBHQYL0nq/RgPeZ8vUHqRZMi02X/ylctTQzyaIiEnKGE+s=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <78e72635-6ebc-b3b7-83d5-134d5c63d561@citrix.com>
Date: Tue, 4 Apr 2023 21:40:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <6d64dd4a-5b25-ddca-5c07-7b4c0fc48c0c@citrix.com>
 <5f9218c1-9ee9-c5bd-af8b-003084aa66e4@suse.com>
In-Reply-To: <5f9218c1-9ee9-c5bd-af8b-003084aa66e4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0368.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a3::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BY5PR03MB5080:EE_
X-MS-Office365-Filtering-Correlation-Id: a7a4647d-0572-4568-dc69-08db354ce052
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a7a4647d-0572-4568-dc69-08db354ce052
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 20:40:50.4683
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aEdlN5k5uNAlP4wjMv01l2s/bDL3zNg0pXn5LMMNqZb+nwvL4SPOqcmBMqXcB3jHaExtNPHqOqvc41MC6Zfq1aaUS1BEqSzGoV4dYsTyd0g=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5080

On 04/04/2023 3:21 pm, Jan Beulich wrote:
> On 04.04.2023 15:08, Andrew Cooper wrote:
>> On 15/02/2023 2:54 pm, Jan Beulich wrote:
>>> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
>>> applies to guests also when run on a 64-bit hypervisor:
>> Is this really true?  Even when looking at Xen 4.2, 32bit guests are
>> required to pass a full 4k page, not a 32b quad.
> The full-page vs 32b-quad aspect is orthogonal. This VM-assist is solely
> about where that data structure is, not what size it is.
>
>> Which makes complete sense.  It was a hard requirement of 32bit non-PAE
>> guests, so it was a natural restriction to maintain into 32bit PAE guests.
>>
>> This is *only* a 32-on-64 issue, because this is the only case a 32bit
>> guest could in principle use an L3 placed above the 4G boundary.
> Not exactly. 32-bit Xen maintained a 4-entry "shadow" array below 4G
> that it would copy (massage) the guest entries into upon CR3 reload
> (just look for struct pae_l3_cache in the old sources). So above-4G
> page table base was possible there as well.

Oh eww, so while Xen never gained an optimisation to permit only a 32b
quad in place of a full 4k L3 table, it did support having the full
tables higher.

(This code is especially hard to follow with #ifdefary in the common
mm.c when there are perfectly good x86_{32,64}/mm.c's to use for
differing function implementations...)

>>> --- a/xen/arch/x86/mm.c
>>> +++ b/xen/arch/x86/mm.c
>>> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
>>>      unsigned int   partial_flags = page->partial_flags;
>>>      l3_pgentry_t   l3e = l3e_empty();
>>>  
>>> +    /*
>>> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
>>> +     * understand the weird 'extended cr3' format for dealing with high-order
>>> +     * address bits. We cut some slack for control tools (before vcpu0 is
>>> +     * initialised).
>>> +     */
>>> +    if ( is_pv_32bit_domain(d) &&
>>> +         unlikely(!VM_ASSIST(d, pae_extended_cr3)) &&
>>> +         mfn_x(l3mfn) >= 0x100000 &&
>>> +         d->vcpu[0] && d->vcpu[0]->is_initialised )
>>> +    {
>>> +        gdprintk(XENLOG_WARNING,
>>> +                 "PAE pgd must be below 4GB (%#lx >= 0x100000)",
>>> +                 mfn_x(l3mfn));
>>> +        return -ERANGE;
>>> +    }
>> Having dug through source history, I see this is largely the form that
>> it used to be.
>>
>> But I'm unconvinced by the "cut control tools some slack".  I'm quite
>> tired of different bits of Xen taking on unnecessary complexity because
>> people are unwilling to fix the problem at the correct layer.
> But anything tools do before having created the first vCPU would not
> have had any means to engage the VM-assist. I.e. ...
>
>> A toolstack which has non-pae_extended_cr3 guest on its hand will know
>> this before any pagetables get allocated.
> ... this knowledge buys it nothing: It would need to move the table
> to below 4G irrespective of knowing that the guest can deal with
> bigger addresses, just to get past this check.

This just goes from bad to worse.  It is mad that the VMASSIST flags
can't be set ahead of a vcpu initialise hypercall.

But.

The code in xg_dom_x86.c unconditionally moves the L3 below the 4G
boundary, so the thing actually pinned as an L3 will always pass the check.

Which is just as well because it too blindly applies the extended-cr3
transform momentarily after conditionally setting
VMASST_TYPE_pae_extended_cr3...

So 32bit PV guests will pass the check irrespective of their
pae_extended_cr3 setting.


I note looking at this code that it's absurd.  The single L3 ought to be
allocated with memf(32), rather than being allocated regularly and then
reallocated lower if it happened to be too high (which will be the
increasingly common case as it's getting harder and harder to find
systems with <4G of RAM).  Making the memf conditional on the
pae_extended_cr3 needs to come with some better xen<->tools APIs.

>> For this check specifically, I'd suggest prohibiting non-32p guests from
>> setting pae_extended_cr3 in the first place (I see no limit currently),
>> and then simplifying the check to just
>>
>> if ( unlikely(!VM_ASSIST(d, pae_extended_cr3)) &&
>>      mfn_x(l3mfn) >= PFN_DOWN(GB(4)) )
> Dropping the is_pv_32bit_domain() check isn't possible because we can't,
> all of a sudden, fail 64-bit guests' requests to enable this VM-
> assist (no matter that we know that it is of no use to them).

I'm not so sure about this.  This VMASSIST cannot credibly be set at
runtime, and making a restriction here is not usefully different from
prior patches of yours that relax checks in Xen that still break on
older builds.

But as I know you're going to argue with that position, I'll at least
note that ignoring a 64bit guest's request to set that bit would be less
bad than the current behaviour.

> Dropping
> the control-tools part of the condition is at least problematic as well,
> as per above. Albeit I'll admit I didn't check whether nowadays vCPU 0
> is initialized before page tables are built. But I think it's more
> sensible the other way around: CR3 setting (in the hypervisor) is less
> involved when the page was already validated as an L3 one.

All of this is before the guest starts running, so it doesn't matter.

The most efficient way (from Xen's point of view) is to pin the L1s,
then L2s, then L3s and then set vCR3, because this is the only order
where we don't have to do recursive type acquisition.

But, the most efficient way for the toolstack to do this is the opposite
way around, because making Xen do recursive type acquisition is faster
than other ways, and turns all subsequent hypercalls into almost no-ops.

I doubt there is a relevant difference between these two approaches.


And it doesn't matter either.  The check won't ever trip from domain
creation (see above), nor from migration (we set vcpu context before
pinning the pagetables, and a non pae_extended_cr3 will have exploded on
the source side).

So there really are no toolstack codepaths that can trip the check. 
Future improvements that might trip the check can come with a less
broken hypercall as a prerequisite.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 20:51:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 20:51:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518141.804369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjncY-0003EW-JS; Tue, 04 Apr 2023 20:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518141.804369; Tue, 04 Apr 2023 20:51:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjncY-0003EP-Ge; Tue, 04 Apr 2023 20:51:14 +0000
Received: by outflank-mailman (input) for mailman id 518141;
 Tue, 04 Apr 2023 20:51:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjncX-0003EJ-3l
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 20:51:13 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6dc6f685-d32a-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 22:51:11 +0200 (CEST)
Received: from mail-dm6nam12lp2175.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 16:51:02 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB6176.namprd03.prod.outlook.com (2603:10b6:5:39c::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 4 Apr
 2023 20:51:00 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 20:51:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6dc6f685-d32a-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680641471;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=d/iOhb27zQl0Tuqnt544c6sjqrsZzaAcvW83EZJK5kY=;
  b=SBiv/y7gCNqL3Fl+wYSnk7ASKU2ztZxQpoWIzCkFs9+muxJ4zDKyAdWa
   0tOpLE48qPCkk3Em4yAEmD1jMPI9vfb1Td3sZUYCYmwzAL/MZW7EGYMmg
   DGFBTVJCtyVnJX0Uu/d1GdThAFZXGYpl4Gon3Ot8SvYVJHD0b7YRwP0pw
   I=;
X-IronPort-RemoteIP: 104.47.59.175
X-IronPort-MID: 106757052
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,318,1673931600"; 
   d="scan'208";a="106757052"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PZKFQ28R07lFlepFv/LBj/DbBqCnCyggutGWyuhaKCI5lClBeI0x0iIqJmGfwnLCLpX7VMz672kod6gCymRePumxxu0clr3wYBorKAMJyk8WAQpOiu7MfsNNSsVox4fOwRq9Emsl513D1NPth8wBWMDXX5jd30CQdxnqaxXAoQpoemxxwqyYBtRDHwVMUcXdbo8PKt3Dw24Q25AEx3hT836GStm38ClJ2rrJP1oryw+LerbQmt4sGgnuiShbfLlVzAMTgNnSUy/WCEmwREH8MvEU6OnIgtUM8eVWNJoEyOFuP1OFKR69Bb3P0uJ+2o6nCPgBvDPq6+mLjNjzZ7PhRg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=WDKuqcvC7AlVLeucoWBHyztl7KBKwvZVo/uJNac1XF8=;
 b=kiislyOBD3iUhTHjccwpgwVu+AHAlGX1ad47FUIyP7dWoT1amm4nY+dQ98kMiwCTniK7BhHzTEutEsMwV+cIKJTs22Lx7r7jg1l+SG81oCfTBnrNBZJR0JgshzyeNcPw8cJKX+1jyUV79iYAPeFgm6Ey7F+t7FVMTYzd9fHQhJy2v/SFxAQm9HWiwA3ib+vIeR68RO1Q54BgFrEsGoPpPx+JtLvNEvj5XC4MCghuwtRZKy7JoIaV7nSfSqPt8TJ5xKV/n+NOv0pAnmqUpXAqk5XRRpSOwaMem5wT53Kt9aY7eZlS4kdUUYOHDptl1BWkosvkMj+dV7uIQS7bN1iF3w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WDKuqcvC7AlVLeucoWBHyztl7KBKwvZVo/uJNac1XF8=;
 b=PInC0lXGhivAMelWCiL5n5Ci9biOfsq7O0W/8u1bxqsaHQPC8Pa3FY7FIi6RN4sSRK9FqWnNFINv0Q9yVyuWnDA/XNmfCF2/Sktojh6Rw5rJY93FRBWXz25l9eAUGhIzj1Ayjam3CoNqppQZ1pNe5hkoOTWFe3b1fFUCn/mm3DQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <fb22a998-1d2c-b723-a2a3-7dec8946cb78@citrix.com>
Date: Tue, 4 Apr 2023 21:50:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 11/15] x86/boot: Merge CPUID policy initialisation
 logic into cpu-policy.c
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-1-andrew.cooper3@citrix.com>
 <20230404095222.1373721-12-andrew.cooper3@citrix.com>
 <087536dd-96cf-84f0-4b8f-d4de4d6bd093@suse.com>
 <0c43546e-0333-af19-efa5-71cfaf5efa3f@citrix.com>
 <dfd78e18-9f78-dd44-c19d-3a5263285d4f@suse.com>
In-Reply-To: <dfd78e18-9f78-dd44-c19d-3a5263285d4f@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0495.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1ab::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DM4PR03MB6176:EE_
X-MS-Office365-Filtering-Correlation-Id: afc277d6-5a40-455f-dab8-08db354e4c2e
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: afc277d6-5a40-455f-dab8-08db354e4c2e
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 20:51:00.6659
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZsIuDZu2geNTIV0P+Z7CYFuPJqIKOIgfGsEeqg9uE2RnMWa3/NTzATZMrF/D9GCu6UVPgW2HLhcnE4xZDcR5xvi/CqH6w/gbPlss9s5DLI4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6176

On 04/04/2023 5:14 pm, Jan Beulich wrote:
> On 04.04.2023 17:45, Andrew Cooper wrote:
>> On 04/04/2023 4:16 pm, Jan Beulich wrote:
>>> On 04.04.2023 11:52, Andrew Cooper wrote:
>>>> @@ -20,10 +26,332 @@ struct cpu_policy __ro_after_init hvm_max_cpu_policy;
>>>>  struct cpu_policy __ro_after_init hvm_def_cpu_policy;
>>>>  #endif
>>>>  
>>>> +const uint32_t known_features[] = INIT_KNOWN_FEATURES;
>>>> +
>>>> +static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
>>>> +static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
>>>> +static const uint32_t __initconst hvm_hap_max_featuremask[] =
>>>> +    INIT_HVM_HAP_MAX_FEATURES;
>>>> +static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
>>>> +static const uint32_t __initconst hvm_shadow_def_featuremask[] =
>>>> +    INIT_HVM_SHADOW_DEF_FEATURES;
>>>> +static const uint32_t __initconst hvm_hap_def_featuremask[] =
>>>> +    INIT_HVM_HAP_DEF_FEATURES;
>>>> +static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
>>>> +
>>>> +static const struct feature_name {
>>>> +    const char *name;
>>>> +    unsigned int bit;
>>>> +} feature_names[] __initconstrel = INIT_FEATURE_NAMES;
>>>> +
>>>> +/*
>>>> + * Parse a list of cpuid feature names -> bool, calling the callback for any
>>>> + * matches found.
>>>> + *
>>>> + * always_inline, because this is init code only and we really don't want a
>>>> + * function pointer call in the middle of the loop.
>>>> + */
>>>> +static int __init always_inline parse_cpuid(
>>>> +    const char *s, void (*callback)(unsigned int feat, bool val))
>>>> +{
>>>> +    const char *ss;
>>>> +    int val, rc = 0;
>>>> +
>>>> +    do {
>>>> +        const struct feature_name *lhs, *rhs, *mid = NULL /* GCC... */;
>>>> +        const char *feat;
>>>> +
>>>> +        ss = strchr(s, ',');
>>>> +        if ( !ss )
>>>> +            ss = strchr(s, '\0');
>>>> +
>>>> +        /* Skip the 'no-' prefix for name comparisons. */
>>>> +        feat = s;
>>>> +        if ( strncmp(s, "no-", 3) == 0 )
>>>> +            feat += 3;
>>>> +
>>>> +        /* (Re)initialise lhs and rhs for binary search. */
>>>> +        lhs = feature_names;
>>>> +        rhs = feature_names + ARRAY_SIZE(feature_names);
>>>> +
>>>> +        while ( lhs < rhs )
>>>> +        {
>>>> +            int res;
>>>> +
>>>> +            mid = lhs + (rhs - lhs) / 2;
>>>> +            res = cmdline_strcmp(feat, mid->name);
>>>> +
>>>> +            if ( res < 0 )
>>>> +            {
>>>> +                rhs = mid;
>>>> +                continue;
>>>> +            }
>>>> +            if ( res > 0 )
>>>> +            {
>>>> +                lhs = mid + 1;
>>>> +                continue;
>>>> +            }
>>>> +
>>>> +            if ( (val = parse_boolean(mid->name, s, ss)) >= 0 )
>>>> +            {
>>>> +                callback(mid->bit, val);
>>>> +                mid = NULL;
>>>> +            }
>>>> +
>>>> +            break;
>>>> +        }
>>>> +
>>>> +        /*
>>>> +         * Mid being NULL means that the name and boolean were successfully
>>>> +         * identified.  Everything else is an error.
>>>> +         */
>>>> +        if ( mid )
>>>> +            rc = -EINVAL;
>>>> +
>>>> +        s = ss + 1;
>>>> +    } while ( *ss );
>>>> +
>>>> +    return rc;
>>>> +}
>>>> +
>>>> +static void __init cf_check _parse_xen_cpuid(unsigned int feat, bool val)
>>>> +{
>>>> +    if ( !val )
>>>> +        setup_clear_cpu_cap(feat);
>>>> +    else if ( feat == X86_FEATURE_RDRAND &&
>>>> +              (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
>>>> +        setup_force_cpu_cap(X86_FEATURE_RDRAND);
>>>> +}
>>>> +
>>>> +static int __init cf_check parse_xen_cpuid(const char *s)
>>>> +{
>>>> +    return parse_cpuid(s, _parse_xen_cpuid);
>>>> +}
>>>> +custom_param("cpuid", parse_xen_cpuid);
>>>> +
>>>> +static bool __initdata dom0_cpuid_cmdline;
>>>> +static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
>>>> +static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
>>>> +
>>>> +static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
>>>> +{
>>>> +    __set_bit  (feat, val ? dom0_enable_feat  : dom0_disable_feat);
>>>> +    __clear_bit(feat, val ? dom0_disable_feat : dom0_enable_feat );
>>>> +}
>>>> +
>>>> +static int __init cf_check parse_dom0_cpuid(const char *s)
>>>> +{
>>>> +    dom0_cpuid_cmdline = true;
>>>> +
>>>> +    return parse_cpuid(s, _parse_dom0_cpuid);
>>>> +}
>>>> +custom_param("dom0-cpuid", parse_dom0_cpuid);
>>> Unless the plan is to completely remove cpuid.c, this command line
>>> handling would imo better fit there. I understand that to keep
>>> dom0_{en,dis}able_feat[] static, the _parse_dom0_cpuid() helper
>>> would then need to be exposed (under a different name), but I think
>>> that's quite okay, the more that it's an __init function.
>> I'm not sure I agree.  (I did debate this for a while before moving the
>> cmdline parsing.)
>>
>> I do have some cleanup plans which will move code into cpuid.c, and
>> guest_cpuid() absolutely still lives there, but for these options
>> specifically, the moment I add MSR_ARCH_CAPS into a featureset, their
>> bit names will work here too.
>>
>> So arguably {dom0-}cpuid= won't be a great name moving forwards, but it
>> is absolutely more cpu-policy.c content than cpuid.c content.
>>
>> We can't get rid of the existing cmdline names, and I think documenting
>> our way out of the "it's not only CPUID bits any more" is better than
>> adding yet a new name.
> Hmm, yes:
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
>>>> @@ -149,3 +716,188 @@ int init_domain_cpu_policy(struct domain *d)
>>>>  
>>>>      return 0;
>>>>  }
>>>> +
>>>> +void recalculate_cpuid_policy(struct domain *d)
>>>> +{
>>>> +    struct cpu_policy *p = d->arch.cpuid;
>>>> +    const struct cpu_policy *max = is_pv_domain(d)
>>>> +        ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
>>>> +        : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
>>> While this is how the original code was, wouldn't this want to use
>>> hvm_enabled, just like init_guest_cpu_policies() does (patch 10)?
>> No.  That will fail to link.
> Why? hvm_enabled is a #define (to false) only when !HVM.

Hmm, maybe.

But honestly, I want to keep the code as it is because this is trying to
only be code-movement, and because it's currently symmetric between the
two cases.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 21:02:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 21:02:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518148.804381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjnmq-0004qF-NH; Tue, 04 Apr 2023 21:01:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518148.804381; Tue, 04 Apr 2023 21:01:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjnmq-0004q8-Ka; Tue, 04 Apr 2023 21:01:52 +0000
Received: by outflank-mailman (input) for mailman id 518148;
 Tue, 04 Apr 2023 21:01:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z4sN=73=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjnmp-0004q2-NV
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 21:01:51 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e8d75946-d32b-11ed-b464-930f4c7d94ae;
 Tue, 04 Apr 2023 23:01:47 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-410-nUUcCrVBN7ilOFNHk6niDw-1; Tue, 04 Apr 2023 17:01:41 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id BE052185A78F;
 Tue,  4 Apr 2023 21:01:39 +0000 (UTC)
Received: from localhost (unknown [10.39.194.166])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 857D82166B26;
 Tue,  4 Apr 2023 21:01:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8d75946-d32b-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680642106;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hU59+CgsTGXo7IuR485kJWxhopO+syI1zdWvSx0jC0o=;
	b=AI9Hyzooera9BZj6dkrOIyvAgCKRsx+3X0Vwe+q8tKnka5KArvHymKdUdovOEaNhZMDbpa
	/6hcWa6p0r0wODyUjl38WYDI7bg8C+1ein3BFnVgvUGUoS/DvtNPzpgBXU+GRvdOMvvjby
	MHoNMZTrePbngIVAm2ASy1wpsCCXtOY=
X-MC-Unique: nUUcCrVBN7ilOFNHk6niDw-1
Date: Tue, 4 Apr 2023 17:01:36 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>, Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>, xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>, David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>, Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>, eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH 11/13] block/fuse: take AioContext lock around
 blk_exp_ref/unref()
Message-ID: <20230404210136.GB603232@fedora>
References: <20230403183004.347205-1-stefanha@redhat.com>
 <20230403183004.347205-12-stefanha@redhat.com>
 <92b731c7-81d4-ef54-cca9-9dcb944e94f0@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="neTXd3RKoERDfsz0"
Content-Disposition: inline
In-Reply-To: <92b731c7-81d4-ef54-cca9-9dcb944e94f0@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6


--neTXd3RKoERDfsz0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Apr 04, 2023 at 03:46:34PM +0200, Paolo Bonzini wrote:
> On 4/3/23 20:30, Stefan Hajnoczi wrote:
> > These functions must be called with the AioContext acquired:
> >
> >    /* Callers must hold exp->ctx lock */
> >    void blk_exp_ref(BlockExport *exp)
> >    ...
> >    /* Callers must hold exp->ctx lock */
> >    void blk_exp_unref(BlockExport *exp)
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >   block/export/fuse.c | 4 ++++
> >   1 file changed, 4 insertions(+)
> >
> > diff --git a/block/export/fuse.c b/block/export/fuse.c
> > index 06fa41079e..18394f9e07 100644
> > --- a/block/export/fuse.c
> > +++ b/block/export/fuse.c
> > @@ -244,7 +244,9 @@ static void read_from_fuse_export(void *opaque)
> >       FuseExport *exp = opaque;
> >       int ret;
> > +    aio_context_acquire(exp->common.ctx);
> >       blk_exp_ref(&exp->common);
> > +    aio_context_release(exp->common.ctx);
> >       do {
> >           ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
> > @@ -256,7 +258,9 @@ static void read_from_fuse_export(void *opaque)
> >       fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);
> >   out:
> > +    aio_context_acquire(exp->common.ctx);
> >       blk_exp_unref(&exp->common);
> > +    aio_context_release(exp->common.ctx);
> >   }
>
> Since the actual thread-unsafe work is done in a bottom half, perhaps
> instead you can use qatomic_inc and qatomic_fetch_dec in
> blk_exp_{ref,unref}?

Sure, I'll give that a try in the next revision.

Stefan

--neTXd3RKoERDfsz0
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmQskC8ACgkQnKSrs4Gr
c8ge+Af/fN0G3NFCPv1dW59nNxnKDFCC+GIm48Kk+qOWta6oggk2ADk7czgcS3fN
BHk3zVgrvaoGZ8seFxN3CqKsCfYCFCkcws2IzVErHRIGpVCN6kpakDinR244u+cF
qRwtk+ME1+Yv+IcOEk/Lj+UfS6YUD0TLY8LxfbjlsQFQV5RlsRY2/f+4FzbWrFJx
+9dicHDGkWs35LeabbAWl/mki6TGMh2/APr8a87gMlRHONc4gG/ZcDOdWj+Dg9bz
ODgE6eDuf/dHMk/xrfGzwRUly4gTrZLIUZ5ADes+ZhUipijL/8fSQ35gW2tTVzNC
8shYSvBKrBNF6oFfIUluQ/7doZbp0g==
=85Ek
-----END PGP SIGNATURE-----

--neTXd3RKoERDfsz0--



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 21:04:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 21:04:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518152.804392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjnpl-0005PG-4F; Tue, 04 Apr 2023 21:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518152.804392; Tue, 04 Apr 2023 21:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjnpl-0005P9-1P; Tue, 04 Apr 2023 21:04:53 +0000
Received: by outflank-mailman (input) for mailman id 518152;
 Tue, 04 Apr 2023 21:04:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z4sN=73=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pjnpk-0005P1-3e
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 21:04:52 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 56876968-d32c-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 23:04:51 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-550-DimVpMkPOYuLTzIRXxdNzg-1; Tue, 04 Apr 2023 17:04:45 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 74F9B887400;
 Tue,  4 Apr 2023 21:04:44 +0000 (UTC)
Received: from localhost (unknown [10.39.194.166])
 by smtp.corp.redhat.com (Postfix) with ESMTP id CEA78C35999;
 Tue,  4 Apr 2023 21:04:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56876968-d32c-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680642290;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=nzP6fmPsLpxmtr/dHctXT699XVp9501+3aNzUwPLVNM=;
	b=cYMzFtJ+W6fEGyny5QnxWoAs17dHAjvd/g+STtG7QlTGiiYriSWNJaj6fITNe69asaYeft
	0QXhIbI9B9rcDw+yCL9/yx9lVhU7i6mEA5wRcr5GOZW65eHKLLfIylle4SMA/gWAeRsWwx
	9/jgU39MnuojQP+GPqr/uv7OZAymb1M=
X-MC-Unique: DimVpMkPOYuLTzIRXxdNzg-1
Date: Tue, 4 Apr 2023 17:04:42 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>, Peter Lieven <pl@kamp.de>,
	Coiby Xu <Coiby.Xu@gmail.com>, xen-devel@lists.xenproject.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Fam Zheng <fam@euphon.net>, David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>, Juan Quintela <quintela@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>, eesposit@redhat.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH 00/13] block: remove aio_disable_external() API
Message-ID: <20230404210442.GC603232@fedora>
References: <20230403183004.347205-1-stefanha@redhat.com>
 <261efade-683e-84dc-d402-7143be7199c3@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="snMIsWYZfG/EJhXo"
Content-Disposition: inline
In-Reply-To: <261efade-683e-84dc-d402-7143be7199c3@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8


--snMIsWYZfG/EJhXo
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Apr 04, 2023 at 03:43:20PM +0200, Paolo Bonzini wrote:
> On 4/3/23 20:29, Stefan Hajnoczi wrote:
> > The aio_disable_external() API temporarily suspends file descriptor monitoring
> > in the event loop. The block layer uses this to prevent new I/O requests being
> > submitted from the guest and elsewhere between bdrv_drained_begin() and
> > bdrv_drained_end().
> >
> > While the block layer still needs to prevent new I/O requests in drained
> > sections, the aio_disable_external() API can be replaced with
> > .drained_begin/end/poll() callbacks that have been added to BdrvChildClass and
> > BlockDevOps.
> >
> > This newer .drained_begin/end/poll() approach is attractive because it works
> > without specifying a specific AioContext. The block layer is moving towards
> > multi-queue and that means multiple AioContexts may be processing I/O
> > simultaneously.
> >
> > The aio_disable_external() was always somewhat hacky. It suspends all file
> > descriptors that were registered with is_external=true, even if they have
> > nothing to do with the BlockDriverState graph nodes that are being drained.
> > It's better to solve a block layer problem in the block layer than to have an
> > odd event loop API solution.
> >
> > That covers the motivation for this change, now on to the specifics of this
> > series:
> >
> > While it would be nice if a single conceptual approach could be applied to all
> > is_external=true file descriptors, I ended up looking at callers on a
> > case-by-case basis. There are two general ways I migrated code away from
> > is_external=true:
> >
> > 1. Block exports are typically best off unregistering fds in .drained_begin()
> >     and registering them again in .drained_end(). The .drained_poll() function
> >     waits for in-flight requests to finish using a reference counter.
> >
> > 2. Emulated storage controllers like virtio-blk and virtio-scsi are a little
> >     simpler. They can rely on BlockBackend's request queuing during drain
> >     feature. Guest I/O request coroutines are suspended in a drained section and
> >     resume upon the end of the drained section.
>
> Sorry, I disagree with this.
>
> Request queuing was shown to cause deadlocks; Hanna's latest patch is piling
> another hack upon it, instead in my opinion we should go in the direction of
> relying _less_ (or not at all) on request queuing.
>
> I am strongly convinced that request queuing must apply only after
> bdrv_drained_begin has returned, which would also fix the IDE TRIM bug
> reported by Fiona Ebner.  The possible livelock scenario is generally not a
> problem because 1) outside an iothread you have anyway the BQL that prevents
> a vCPU from issuing more I/O operations during bdrv_drained_begin 2) in
> iothreads you have aio_disable_external() instead of .drained_begin().
>
> It is also less tidy to start a request during the drained_begin phase,
> because a request that has been submitted has to be completed (cancel
> doesn't really work).
>
> So in an ideal world, request queuing would not only apply only after
> bdrv_drained_begin has returned, it would log a warning and .drained_begin()
> should set up things so that there are no such warnings.

That's fine, I will give .drained_begin/end/poll() a try with virtio-blk
and virtio-scsi in the next revision.

Stefan

--snMIsWYZfG/EJhXo
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmQskOoACgkQnKSrs4Gr
c8jsQAf+PJyaSj96NmHb9lkSsM0rDSEoQ56fJV7vTR+eUUuvgwbRl1u07Nrka9s6
mwJjQLVUZpsLuOTCC1ZOvePeeo0daHHZ6pkLMI9xTh2noy8/ka3KUSkbYAE2y8+/
CYqbu19QuEIEEjjPEXl7xda50dSz16N27J+A367ZKS2jcB+p4hr73gII1vLwCnbi
3xuLsNRy/f9mcOiv+30w/fiGLxmVDYlq/5msaY5JlLkNzbeAWC4s06Pcvoxa71Jr
RgB1YLWcwI5x+AIZQ6Je1bZyK5beUHJGxIKEkAq5CyiaDaBCwRpoiuFAbI4lXz4X
2Pwa17IPZ6Zgqvf4L6E9LAMB/nCygA==
=UdcK
-----END PGP SIGNATURE-----

--snMIsWYZfG/EJhXo--



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 21:06:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 21:06:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518156.804402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjnr6-0005zD-FP; Tue, 04 Apr 2023 21:06:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518156.804402; Tue, 04 Apr 2023 21:06:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjnr6-0005z6-CO; Tue, 04 Apr 2023 21:06:16 +0000
Received: by outflank-mailman (input) for mailman id 518156;
 Tue, 04 Apr 2023 21:06:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjnr5-0005yw-3x
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 21:06:15 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 86d294f0-d32c-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 23:06:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86d294f0-d32c-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680642372;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=SZkHVZ2NWT/4Tsj7fqIvLvzNKb7GyKuoPQiH3xqCu+A=;
  b=IkzEPp7N7dzi8H/BX/JnfrFqHorwdvXIOJVrT6Bjo/DYdE3oZtPAQaw+
   7jAGaOk9r0uiiUHpRQyJW7YwPZ+gvWo/aoANm8UTT9VppE9v37wPkQokm
   8N9cNVWeWtWm6aiMOTnqUstFVo7zas5O0fE40/K0CQl1aS7cA0CCMR1ri
   Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104742219
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:FAAa7aI4UetFx3TcFE+RrZUlxSXFcZb7ZxGr2PjKsXjdYENS0jJSn
 zNKDW3VaP3eZ2ugf99xboyw8ElXup/Sm9JjGVFlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPTwP9TlK6q4mhA4gRiPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5HAH8S2
 eQCLAkpUQ2CmNK68papFLlj05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWteGknHTgNRZfr0qYv/Ef6GnP1g1hlrPqNbI5f/TTHZkKwhzC/
 zOuE2LRLBQHDOeVzSi/3TH3n8GXtBrHf743LejtnhJtqALKnTFCYPEMbnOkpdGph0j4XMhQQ
 2QE9yxroaUs+UiDStjmQwb+sHOCpgQbWddbD6s98g7l4oj+7hudB2MEZiVcc9Fgv8gzLQHGz
 XfQwYmvX2Y29uTIFzTErOz8QS6O1TY9HW4cOQMcVw88x+b+oZ4DgiDrXIgzH/vg5jHqIg0c0
 wxmvQBn2eVL05ZXjPTnlbzUq2ny/8aUF2bZ8i2SBzv4tV0hOeZJcqTysTDmAeB8wJF1p7Vrl
 FwNgICg4e8HFvlhfwTdEbxWTNlFCxtoWQAwYGKD/LF7rVxBA1b5IehtDMhWfS+FyPosdz7ze
 1P0sghM/pJVN3bCRfYpM9rvUJ9wnPOwRIyNuhXogj1mO8EZSeN61Hs2OR74M57FyyDAbp3Ty
 b/EKJ3xXB72+IxszSasRvd17ILHMhsWnDuJLbiilkTP7FZrTCLNIVvzGAfUP79RAWLtiFm9z
 uuzwOPTkk4GD7WhMni/HEx6BQliEEXXzKve86R/HtNv6CI8cI39I5c9GY8cRrE=
IronPort-HdrOrdr: A9a23:+dSzMKiU91avdV2rpsYin75tXHBQXioji2hC6mlwRA09TyX5ra
 2TdZUgpHvJYVMqMk3I9uruBEDtex3hHP1OkOws1NWZLWrbUQKTRekP0WKF+Vzd8kXFndK1vp
 0QEZSWZueRMbEAt7ec3OG5eexQvOVu8sqT9JjjJ6EGd3AVV0lihT0JezpyCidNNW977QJSLu
 vn2iJAzQDQAEg/X4CAKVQuefPMnNHPnIKOW296O/Z2gDP+9Q9B8dTBYmOl4is=
X-IronPort-AV: E=Sophos;i="5.98,318,1673931600"; 
   d="scan'208";a="104742219"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v3] libx86: Update library API for cpu_policy
Date: Tue, 4 Apr 2023 22:06:00 +0100
Message-ID: <20230404210600.1404532-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230404095222.1373721-15-andrew.cooper3@citrix.com>
References: <20230404095222.1373721-15-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Adjust the API and comments appropriately.

x86_cpu_policy_fill_native() will eventually contain MSR reads, but leave a
TODO in the short term.

No practical change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * New
v3:
 * Update x86_cpu_policy_lookup_deep_deps() too and write an API doc.
---
 tools/fuzz/cpu-policy/afl-policy-fuzzer.c |  4 +-
 tools/tests/cpu-policy/test-cpu-policy.c  |  4 +-
 tools/tests/x86_emulator/x86-emulate.c    |  2 +-
 xen/arch/x86/cpu-policy.c                 |  4 +-
 xen/arch/x86/cpu/common.c                 |  2 +-
 xen/arch/x86/domctl.c                     |  2 +-
 xen/arch/x86/xstate.c                     |  4 +-
 xen/include/xen/lib/x86/cpu-policy.h      | 49 +++++++++++++----------
 xen/lib/x86/cpuid.c                       | 26 ++++++------
 xen/lib/x86/msr.c                         |  4 +-
 10 files changed, 55 insertions(+), 46 deletions(-)

diff --git a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
index 466bdbb1d91a..7d8467b4b258 100644
--- a/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
+++ b/tools/fuzz/cpu-policy/afl-policy-fuzzer.c
@@ -48,8 +48,8 @@ static void check_policy(struct cpu_policy *cp)
      * Fix up the data in the source policy which isn't expected to survive
      * serialisation.
      */
-    x86_cpuid_policy_clear_out_of_range_leaves(cp);
-    x86_cpuid_policy_recalc_synth(cp);
+    x86_cpu_policy_clear_out_of_range_leaves(cp);
+    x86_cpu_policy_recalc_synth(cp);
 
     /* Serialise... */
     rc = x86_cpuid_copy_to_buffer(cp, leaves, &nr_leaves);
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index a4ca07f33973..f1d968adfc39 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -105,7 +105,7 @@ static void test_cpuid_current(void)
 
     printf("Testing CPUID on current CPU\n");
 
-    x86_cpuid_policy_fill_native(&p);
+    x86_cpu_policy_fill_native(&p);
 
     rc = x86_cpuid_copy_to_buffer(&p, leaves, &nr);
     if ( rc != 0 )
@@ -554,7 +554,7 @@ static void test_cpuid_out_of_range_clearing(void)
         void *ptr;
         unsigned int nr_markers;
 
-        x86_cpuid_policy_clear_out_of_range_leaves(p);
+        x86_cpu_policy_clear_out_of_range_leaves(p);
 
         /* Count the number of 0xc2's still remaining. */
         for ( ptr = p, nr_markers = 0;
diff --git a/tools/tests/x86_emulator/x86-emulate.c b/tools/tests/x86_emulator/x86-emulate.c
index 2692404df906..7d2d57f7591a 100644
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -75,7 +75,7 @@ bool emul_test_init(void)
 
     unsigned long sp;
 
-    x86_cpuid_policy_fill_native(&cp);
+    x86_cpu_policy_fill_native(&cp);
 
     /*
      * The emulator doesn't use these instructions, so can always emulate
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 83186e940ca7..a58bf6cad54e 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -176,7 +176,7 @@ static void sanitise_featureset(uint32_t *fs)
     for_each_set_bit(i, (void *)disabled_features,
                      sizeof(disabled_features) * 8)
     {
-        const uint32_t *dfs = x86_cpuid_lookup_deep_deps(i);
+        const uint32_t *dfs = x86_cpu_policy_lookup_deep_deps(i);
         unsigned int j;
 
         ASSERT(dfs); /* deep_features[] should guarentee this. */
@@ -347,7 +347,7 @@ static void __init calculate_raw_policy(void)
 {
     struct cpu_policy *p = &raw_cpu_policy;
 
-    x86_cpuid_policy_fill_native(p);
+    x86_cpu_policy_fill_native(p);
 
     /* Nothing good will come from Xen and libx86 disagreeing on vendor. */
     ASSERT(p->x86_vendor == boot_cpu_data.x86_vendor);
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index f11dcda57a69..edc4db1335eb 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -75,7 +75,7 @@ void __init setup_clear_cpu_cap(unsigned int cap)
 		       __builtin_return_address(0), cap);
 
 	__clear_bit(cap, boot_cpu_data.x86_capability);
-	dfs = x86_cpuid_lookup_deep_deps(cap);
+	dfs = x86_cpu_policy_lookup_deep_deps(cap);
 
 	if (!dfs)
 		return;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index c02528594102..1a8b4cff48ee 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -66,7 +66,7 @@ static int update_domain_cpu_policy(struct domain *d,
         goto out;
 
     /* Trim any newly-stale out-of-range leaves. */
-    x86_cpuid_policy_clear_out_of_range_leaves(new);
+    x86_cpu_policy_clear_out_of_range_leaves(new);
 
     /* Audit the combined dataset. */
     ret = x86_cpu_policies_are_compatible(sys, new, &err);
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index d481e1db3e7e..92496f379546 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -684,7 +684,7 @@ void xstate_init(struct cpuinfo_x86 *c)
 int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
                     const struct xsave_hdr *hdr)
 {
-    uint64_t xcr0_max = cpuid_policy_xcr0_max(d->arch.cpuid);
+    uint64_t xcr0_max = cpu_policy_xcr0_max(d->arch.cpuid);
     unsigned int i;
 
     if ( (hdr->xstate_bv & ~xcr0_accum) ||
@@ -708,7 +708,7 @@ int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
 int handle_xsetbv(u32 index, u64 new_bv)
 {
     struct vcpu *curr = current;
-    uint64_t xcr0_max = cpuid_policy_xcr0_max(curr->domain->arch.cpuid);
+    uint64_t xcr0_max = cpu_policy_xcr0_max(curr->domain->arch.cpuid);
     u64 mask;
 
     if ( index != XCR_XFEATURE_ENABLED_MASK )
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 57b4633c861e..cf7de0f29ccd 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -399,33 +399,38 @@ void x86_cpu_policy_to_featureset(const struct cpu_policy *p,
 void x86_cpu_featureset_to_policy(const uint32_t fs[FEATURESET_NR_ENTRIES],
                                   struct cpu_policy *p);
 
-static inline uint64_t cpuid_policy_xcr0_max(const struct cpuid_policy *p)
+static inline uint64_t cpu_policy_xcr0_max(const struct cpu_policy *p)
 {
     return ((uint64_t)p->xstate.xcr0_high << 32) | p->xstate.xcr0_low;
 }
 
-static inline uint64_t cpuid_policy_xstates(const struct cpuid_policy *p)
+static inline uint64_t cpu_policy_xstates(const struct cpu_policy *p)
 {
     uint64_t val = p->xstate.xcr0_high | p->xstate.xss_high;
 
     return (val << 32) | p->xstate.xcr0_low | p->xstate.xss_low;
 }
 
-const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature);
+/**
+ * For a specific feature, look up the dependent features.  Returns NULL if
+ * this feature has no dependencies.  Otherwise return a featureset of
+ * dependent features, which has been recursively flattened.
+ */
+const uint32_t *x86_cpu_policy_lookup_deep_deps(uint32_t feature);
 
 /**
- * Recalculate the content in a CPUID policy which is derived from raw data.
+ * Recalculate the content in a CPU policy which is derived from raw data.
  */
-void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p);
+void x86_cpu_policy_recalc_synth(struct cpu_policy *p);
 
 /**
- * Fill a CPUID policy using the native CPUID instruction.
+ * Fill CPU policy using the native CPUID/RDMSR instruction.
  *
  * No sanitisation is performed, but synthesised values are calculated.
  * Values may be influenced by a hypervisor or from masking/faulting
  * configuration.
  */
-void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
+void x86_cpu_policy_fill_native(struct cpu_policy *p);
 
 /**
  * Clear leaf data beyond the policies max leaf/subleaf settings.
@@ -436,7 +441,7 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p);
  * with out-of-range leaves with stale content in them.  This helper clears
  * them.
  */
-void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p);
+void x86_cpu_policy_clear_out_of_range_leaves(struct cpu_policy *p);
 
 #ifdef __XEN__
 #include <public/arch-x86/xen.h>
@@ -449,9 +454,10 @@ typedef xen_msr_entry_t msr_entry_buffer_t[];
 #endif
 
 /**
- * Serialise a cpuid_policy object into an array of cpuid leaves.
+ * Serialise the CPUID leaves of a cpu_policy object into an array of cpuid
+ * leaves.
  *
- * @param policy     The cpuid_policy to serialise.
+ * @param policy     The cpu_policy to serialise.
  * @param leaves     The array of leaves to serialise into.
  * @param nr_entries The number of entries in 'leaves'.
  * @returns -errno
@@ -460,13 +466,14 @@ typedef xen_msr_entry_t msr_entry_buffer_t[];
  * leaves array is too short.  On success, nr_entries is updated with the
  * actual number of leaves written.
  */
-int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
+int x86_cpuid_copy_to_buffer(const struct cpu_policy *policy,
                              cpuid_leaf_buffer_t leaves, uint32_t *nr_entries);
 
 /**
- * Unserialise a cpuid_policy object from an array of cpuid leaves.
+ * Unserialise the CPUID leaves of a cpu_policy object into an array of cpuid
+ * leaves.
  *
- * @param policy      The cpuid_policy to unserialise into.
+ * @param policy      The cpu_policy to unserialise into.
  * @param leaves      The array of leaves to unserialise from.
  * @param nr_entries  The number of entries in 'leaves'.
  * @param err_leaf    Optional hint for error diagnostics.
@@ -474,21 +481,21 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *policy,
  * @returns -errno
  *
  * Reads at most CPUID_MAX_SERIALISED_LEAVES.  May return -ERANGE if an
- * incoming leaf is out of range of cpuid_policy, in which case the optional
+ * incoming leaf is out of range of cpu_policy, in which case the optional
  * err_* pointers will identify the out-of-range indicies.
  *
  * No content validation of in-range leaves is performed.  Synthesised data is
  * recalculated.
  */
-int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
+int x86_cpuid_copy_from_buffer(struct cpu_policy *policy,
                                const cpuid_leaf_buffer_t leaves,
                                uint32_t nr_entries, uint32_t *err_leaf,
                                uint32_t *err_subleaf);
 
 /**
- * Serialise an msr_policy object into an array.
+ * Serialise the MSRs of a cpu_policy object into an array.
  *
- * @param policy     The msr_policy to serialise.
+ * @param policy     The cpu_policy to serialise.
  * @param msrs       The array of msrs to serialise into.
  * @param nr_entries The number of entries in 'msrs'.
  * @returns -errno
@@ -497,13 +504,13 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
  * buffer array is too short.  On success, nr_entries is updated with the
  * actual number of msrs written.
  */
-int x86_msr_copy_to_buffer(const struct msr_policy *policy,
+int x86_msr_copy_to_buffer(const struct cpu_policy *policy,
                            msr_entry_buffer_t msrs, uint32_t *nr_entries);
 
 /**
- * Unserialise an msr_policy object from an array of msrs.
+ * Unserialise the MSRs of a cpu_policy object from an array of msrs.
  *
- * @param policy     The msr_policy object to unserialise into.
+ * @param policy     The cpu_policy object to unserialise into.
  * @param msrs       The array of msrs to unserialise from.
  * @param nr_entries The number of entries in 'msrs'.
  * @param err_msr    Optional hint for error diagnostics.
@@ -517,7 +524,7 @@ int x86_msr_copy_to_buffer(const struct msr_policy *policy,
  *
  * No content validation is performed on the data stored in the policy object.
  */
-int x86_msr_copy_from_buffer(struct msr_policy *policy,
+int x86_msr_copy_from_buffer(struct cpu_policy *policy,
                              const msr_entry_buffer_t msrs, uint32_t nr_entries,
                              uint32_t *err_msr);
 
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 734e90823a63..68aafb404927 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -102,13 +102,13 @@ void x86_cpu_featureset_to_policy(
     p->feat._7d1             = fs[FEATURESET_7d1];
 }
 
-void x86_cpuid_policy_recalc_synth(struct cpuid_policy *p)
+void x86_cpu_policy_recalc_synth(struct cpu_policy *p)
 {
     p->x86_vendor = x86_cpuid_lookup_vendor(
         p->basic.vendor_ebx, p->basic.vendor_ecx, p->basic.vendor_edx);
 }
 
-void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
+void x86_cpu_policy_fill_native(struct cpu_policy *p)
 {
     unsigned int i;
 
@@ -199,7 +199,7 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
         cpuid_count_leaf(0xd, 0, &p->xstate.raw[0]);
         cpuid_count_leaf(0xd, 1, &p->xstate.raw[1]);
 
-        xstates = cpuid_policy_xstates(p);
+        xstates = cpu_policy_xstates(p);
 
         /* This logic will probably need adjusting when XCR0[63] gets used. */
         BUILD_BUG_ON(ARRAY_SIZE(p->xstate.raw) > 63);
@@ -222,10 +222,12 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
     p->hv_limit = 0;
     p->hv2_limit = 0;
 
-    x86_cpuid_policy_recalc_synth(p);
+    /* TODO MSRs */
+
+    x86_cpu_policy_recalc_synth(p);
 }
 
-void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
+void x86_cpu_policy_clear_out_of_range_leaves(struct cpu_policy *p)
 {
     unsigned int i;
 
@@ -260,7 +262,7 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
         zero_leaves(p->topo.raw, i, ARRAY_SIZE(p->topo.raw) - 1);
     }
 
-    if ( p->basic.max_leaf < 0xd || !cpuid_policy_xstates(p) )
+    if ( p->basic.max_leaf < 0xd || !cpu_policy_xstates(p) )
         memset(p->xstate.raw, 0, sizeof(p->xstate.raw));
     else
     {
@@ -268,7 +270,7 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
         BUILD_BUG_ON(ARRAY_SIZE(p->xstate.raw) > 63);
 
         /* First two leaves always valid.  Rest depend on xstates. */
-        i = max(2, 64 - __builtin_clzll(cpuid_policy_xstates(p)));
+        i = max(2, 64 - __builtin_clzll(cpu_policy_xstates(p)));
 
         zero_leaves(p->xstate.raw, i,
                     ARRAY_SIZE(p->xstate.raw) - 1);
@@ -278,7 +280,7 @@ void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
                 ARRAY_SIZE(p->extd.raw) - 1);
 }
 
-const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature)
+const uint32_t *x86_cpu_policy_lookup_deep_deps(uint32_t feature)
 {
     static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
     static const struct {
@@ -333,7 +335,7 @@ static int copy_leaf_to_buffer(uint32_t leaf, uint32_t subleaf,
     return 0;
 }
 
-int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
+int x86_cpuid_copy_to_buffer(const struct cpu_policy *p,
                              cpuid_leaf_buffer_t leaves, uint32_t *nr_entries_p)
 {
     const uint32_t nr_entries = *nr_entries_p;
@@ -383,7 +385,7 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
 
         case 0xd:
         {
-            uint64_t xstates = cpuid_policy_xstates(p);
+            uint64_t xstates = cpu_policy_xstates(p);
 
             COPY_LEAF(leaf, 0, &p->xstate.raw[0]);
             COPY_LEAF(leaf, 1, &p->xstate.raw[1]);
@@ -419,7 +421,7 @@ int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
     return 0;
 }
 
-int x86_cpuid_copy_from_buffer(struct cpuid_policy *p,
+int x86_cpuid_copy_from_buffer(struct cpu_policy *p,
                                const cpuid_leaf_buffer_t leaves,
                                uint32_t nr_entries, uint32_t *err_leaf,
                                uint32_t *err_subleaf)
@@ -522,7 +524,7 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *p,
         }
     }
 
-    x86_cpuid_policy_recalc_synth(p);
+    x86_cpu_policy_recalc_synth(p);
 
     return 0;
 
diff --git a/xen/lib/x86/msr.c b/xen/lib/x86/msr.c
index c4d885e7b568..e04b9ca01302 100644
--- a/xen/lib/x86/msr.c
+++ b/xen/lib/x86/msr.c
@@ -23,7 +23,7 @@ static int copy_msr_to_buffer(uint32_t idx, uint64_t val,
     return 0;
 }
 
-int x86_msr_copy_to_buffer(const struct msr_policy *p,
+int x86_msr_copy_to_buffer(const struct cpu_policy *p,
                            msr_entry_buffer_t msrs, uint32_t *nr_entries_p)
 {
     const uint32_t nr_entries = *nr_entries_p;
@@ -48,7 +48,7 @@ int x86_msr_copy_to_buffer(const struct msr_policy *p,
     return 0;
 }
 
-int x86_msr_copy_from_buffer(struct msr_policy *p,
+int x86_msr_copy_from_buffer(struct cpu_policy *p,
                              const msr_entry_buffer_t msrs, uint32_t nr_entries,
                              uint32_t *err_msr)
 {
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 04 21:29:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 21:29:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518162.804411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjoDQ-00008i-Dn; Tue, 04 Apr 2023 21:29:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518162.804411; Tue, 04 Apr 2023 21:29:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjoDQ-00008b-B2; Tue, 04 Apr 2023 21:29:20 +0000
Received: by outflank-mailman (input) for mailman id 518162;
 Tue, 04 Apr 2023 21:29:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjoDO-00008P-ID; Tue, 04 Apr 2023 21:29:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjoDO-0007a2-Er; Tue, 04 Apr 2023 21:29:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjoDO-0000I2-0U; Tue, 04 Apr 2023 21:29:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjoDO-0006sT-02; Tue, 04 Apr 2023 21:29:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3nsbpyvV8cjcptqjQ7Ew7Ma4IEDCW/Qt4OJrV0bNQmU=; b=qJ+h/1t9wqUMb6LXvzIjXuK19P
	njbdecVdmZ7M1s18BEUDBZVr+0gyKPXuzdOagDvaeK13PNITYFBxBiPS3pZqRmt3F3qOig2oWzd+w
	dt4GX+CXpaFeXleN6/B1yCsDrjuNNbanc/I2+Y2OnpWlCYZfKRvs0ciLNKWJSc2WRHd4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180141-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180141: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=7df447930c42addaf2cc0d32916141d95ded677e
X-Osstest-Versions-That:
    ovmf=fb89f62d2702faf7db7f7afef342467d4f0fba3c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Apr 2023 21:29:18 +0000

flight 180141 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180141/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 7df447930c42addaf2cc0d32916141d95ded677e
baseline version:
 ovmf                 fb89f62d2702faf7db7f7afef342467d4f0fba3c

Last test of basis   180138  2023-04-04 15:10:48 Z    0 days
Testing same since   180141  2023-04-04 17:42:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chasel Chiu <chasel.chiu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   fb89f62d27..7df447930c  7df447930c42addaf2cc0d32916141d95ded677e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 21:41:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 21:41:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518168.804422 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjoP5-0002ZD-Hn; Tue, 04 Apr 2023 21:41:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518168.804422; Tue, 04 Apr 2023 21:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjoP5-0002Z6-Eq; Tue, 04 Apr 2023 21:41:23 +0000
Received: by outflank-mailman (input) for mailman id 518168;
 Tue, 04 Apr 2023 21:41:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjoP3-0002Yw-99; Tue, 04 Apr 2023 21:41:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjoP3-0007wZ-3X; Tue, 04 Apr 2023 21:41:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjoP2-0000pd-SU; Tue, 04 Apr 2023 21:41:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjoP2-0000wk-Ry; Tue, 04 Apr 2023 21:41:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R1thnMV2UW1eOgIhj7W2vBfzbZkG2iiyfLzU8SqQdQw=; b=Y9zvq2B7gUFQbkIw7MKcIFpYU/
	IIxeyb2/z+yplktTbT0hs5Y0sM5ORuOvxZy+GVeuk41HP8GDgrk1RONTVMamkmoIhRRZm0T7b0Fcd
	HwsI+SXRLUHS08AMzUu81VDDISTTZ7VxU0TAgut7/TdWDCpocq50dX57iduWWcQXN2ZM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180143-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180143: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=415f7d9404171cbc968b1ea22e7d3523ac2f3fc1
X-Osstest-Versions-That:
    xen=658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Apr 2023 21:41:20 +0000

flight 180143 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180143/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  415f7d9404171cbc968b1ea22e7d3523ac2f3fc1
baseline version:
 xen                  658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e

Last test of basis   180137  2023-04-04 14:03:39 Z    0 days
Testing same since   180140  2023-04-04 17:01:55 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   658fcb7ac9..415f7d9404  415f7d9404171cbc968b1ea22e7d3523ac2f3fc1 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 21:54:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 21:54:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518174.804432 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjoby-00047h-OD; Tue, 04 Apr 2023 21:54:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518174.804432; Tue, 04 Apr 2023 21:54:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjoby-00047a-Km; Tue, 04 Apr 2023 21:54:42 +0000
Received: by outflank-mailman (input) for mailman id 518174;
 Tue, 04 Apr 2023 21:54:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Oy2y=73=citrix.com=prvs=4518c43dc=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pjobx-00047T-0I
 for xen-devel@lists.xenproject.org; Tue, 04 Apr 2023 21:54:41 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4af19420-d333-11ed-85db-49a42c6b2330;
 Tue, 04 Apr 2023 23:54:38 +0200 (CEST)
Received: from mail-dm6nam04lp2040.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 Apr 2023 17:54:13 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA1PR03MB6387.namprd03.prod.outlook.com (2603:10b6:806:1c2::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 4 Apr
 2023 21:54:11 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Tue, 4 Apr 2023
 21:54:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4af19420-d333-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680645278;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=9ZabhdgXSSrBvWvumOUTfZeknY8WfKfHQ46NHrpbX7k=;
  b=T8UVvalobsGjTF4yqjXPU5rD1D/ARvgJEKkr8xwycMMYrsDzD8MLjc6/
   TiSCwOfysvjEIFlLXpnwej8Sugxwm4mBd2Ros4lq1E2VXBOtdQ6qXk9KW
   Cq/7aKe5NccuBJBhmWJKfsNpIOgGyEHTQ1/0WSVHkv70WTUMn47JFReHm
   M=;
X-IronPort-RemoteIP: 104.47.73.40
X-IronPort-MID: 104746601
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,319,1673931600"; 
   d="scan'208";a="104746601"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VfH9SsgOx/ZeREsVz8Hab5u5UhAcj+Z2iq5n/jncm5e17uW0Dq6ueRdrOBL+Diyx+kLU7wL1Upb3YgVhlMe4Eh3CYHyLSWqW5E97bRRNzyMV4sE4Am6KR8jKSiTuABcDygtQ3mRmx+dBRC3oNjlM3GfcxOzlgTIsFuCtL8uU4VOHTVnKtTQeL6xizUCC8fkgXZcOtJ/qiWyZPcWQ6k7HD9plMQGeHk8QPRBVUY/Bp8uvAIzB3boPZz/QOjUr/Uys1tgd+sCRSQHSb9jqbdWrt0CN2QWl4C7+T//6VBxZ3pawpSceQhwP0w14JENuC1XNN58uy3NLOT/MzUyMMAi+Og==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=93vfan2gsno1DCeqXREVT5h6NSIC8R4Twl29c7aP6Js=;
 b=Cm8dYvRAb3/24djSiEWENn3XfuoXr1ty5Xo7TvmGLTv8wQIeW+CxZ4itd50W9ElinYgrY1Bf3/St7r3sABdEGV2sdQsyE6BMRZTCosPBNcSqmq5ZVxj8Z9aJHwZdTQAAn/SA5efshvhTydxEjjzdovpVZ1tIrakbLQ78jbBJ8pux8JVjv7ickzqfkSJWOFhcuFNI+aBAlK1UH+BK8lQ6OUP0IUJdJ1zh/pDs9ZuWBnPvi74DCydBierEBEAx4qgsgAYRwZNpPVbVQ+3pyu16w6WgFZAb+2ZsbyntqaX/44Ry8ntQJqRV38QVUw/OFI4YW+Lj82ih3AStLd4e6WySsA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=93vfan2gsno1DCeqXREVT5h6NSIC8R4Twl29c7aP6Js=;
 b=H9A6pqwWufacSJRIvj+05gR/t5sOu9t5PhoE+y8etv9+5c68HH1X/ic+uyHlWP3VtlK0N2aKKdVKiMZmqmfWpQqahdatubiipaJPoSaXusQB5n16boG4S3zVE+2O6Z0TPUlwUDGMeO3ZXUxKryV9UPAvHKotsx8cq55wvLHYgXg=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <d8d72cc8-f477-abb1-f6fb-5aa1909b36aa@citrix.com>
Date: Tue, 4 Apr 2023 22:54:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/9] x86emul: support LKGS
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <1eb21ece-9d33-d8e1-1c2b-c682dbb1cda1@suse.com>
In-Reply-To: <1eb21ece-9d33-d8e1-1c2b-c682dbb1cda1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO0P265CA0006.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:355::17) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SA1PR03MB6387:EE_
X-MS-Office365-Filtering-Correlation-Id: 81c19b71-470e-415a-c6ad-08db35571f01
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 81c19b71-470e-415a-c6ad-08db35571f01
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 Apr 2023 21:54:10.5766
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6387

On 04/04/2023 3:49 pm, Jan Beulich wrote:
> Provide support for this insn, which is a prereq to FRED. CPUID-wise,
> introduce both its bit and FRED's on this occasion, which allows the
> dependency between them to be expressed right away.
>
> While adding a testcase, also add a SWAPGS one. In order to not affect
> the behavior of pre-existing tests, install write_{segment,msr} hooks
> only transiently.

IMO, the emulator is already complicated enough without us having
fallback logic to cope with callers that don't set up all the hooks.

Nor do I think making these hooks transient in the test harness is a
clever idea.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Instead of ->read_segment() we could of course also use ->read_msr() to
> fetch the original GS base. I don't think I can see a clear advantage of
> either approach; the way it's done it matches how we handle SWAPGS.

read_segment() is a much shorter logic chain under the hood, so will be
marginally faster, but it will be a couple of unnecessary VMREADs (on
Intel at least).

We could expose the get/set reg paths for cases where we know we're not
going to need sanity checks, but I'm not sure it's worth it in this case.

> For PV save_segments() would need adjustment, but the insn being
> restricted to ring 0 means PV guests can't use it anyway (unless we
> wanted to emulate it as another privileged insn).

I know, it's on the list.

What is rather irritating is that whether the guest user or guest kernel
GS base is in context is inverted depending on whether FRED is active.
Sadly Intel refused my request for a control knob to turn off FRED's
auto-SWAPGS, but I didn't really push them on it because for practically
all other circumstances, it would just be a way for OSes to shoot
themselves in the foot.

For PV guests, our regular ABI is half-way to FRED anyway.  I suspect we
can get most of the interesting rest of the functionality by adding an
ERET bit to the HYPERCALL_iret flags.  I'm not sure yet if we ought to
bother exposing CSL in the pvFRED ABI or not, but doing so would reduce
the divergence from native even further.

> --- a/xen/arch/x86/x86_emulate/private.h
> +++ b/xen/arch/x86/x86_emulate/private.h
> @@ -594,6 +594,7 @@ amd_like(const struct x86_emulate_ctxt *
>  #define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
>  #define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
>  #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
> +#define vcpu_has_lkgs()        (ctxt->cpuid->feat.lkgs)
>  
>  #define vcpu_must_have(feat) \
>      generate_exception_if(!vcpu_has_##feat(), X86_EXC_UD)
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -2886,8 +2886,31 @@ x86_emulate(
>                  break;
>              }
>              break;
> -        default:
> -            generate_exception_if(true, EXC_UD);
> +        case 6: /* lkgs */
> +            generate_exception_if((modrm_reg & 1) || vex.pfx != vex_f2, EXC_UD);
> +            generate_exception_if(!mode_64bit() || !mode_ring0(), EXC_UD);

Can we switch to X86_*, please?  Alternatively, I've got such a patch
which I've just rebased over all your emulator changes anyway, if we're
happy to fix this in one fell swoop.

(Sadly, you did move some TRAP_* names into util-xen.c which I fixed up
in my other tree-wide exception constant patch.)

> +            vcpu_must_have(lkgs);
> +            fail_if(!ops->read_segment || !ops->read_msr ||
> +                    !ops->write_segment || !ops->write_msr);
> +            if ( (rc = ops->read_msr(MSR_SHADOW_GS_BASE, &msr_val,
> +                                     ctxt)) != X86EMUL_OKAY ||
> +                 (rc = ops->read_segment(x86_seg_gs, &sreg,
> +                                         ctxt)) != X86EMUL_OKAY )
> +                goto done;
> +            dst.orig_val = sreg.base;
> +            if ( (rc = protmode_load_seg(x86_seg_gs, src.val, false, &sreg,
> +                                         ctxt, ops)) != X86EMUL_OKAY ||
> +                 (rc = ops->write_msr(MSR_SHADOW_GS_BASE, sreg.base,
> +                                      ctxt)) != X86EMUL_OKAY )
> +                goto done;
> +            sreg.base = dst.orig_val;

Honestly, I think a comment is needed here, because I'm struggling to
work out if this is correct or not.

There is a 64->32 bit truncation of the base with LKGS, just as there is
with MOV GS.

Which I think does happen as a side effect of protmode_load_seg() only
filling in the lower half of sreg.base, but I think it would be nicer to
have:

+            dst.orig_val = sreg.base; /* Preserve full GS Base */
+            if ( (rc = protmode_load_seg(x86_seg_gs, src.val, false, &sreg,
+                                         ctxt, ops)) != X86EMUL_OKAY ||
+                 /* Write truncated base into GS_SHADOW */
+                 (rc = ops->write_msr(MSR_SHADOW_GS_BASE, sreg.base,
+                                      ctxt)) != X86EMUL_OKAY )
+                goto done;
+            sreg.base = dst.orig_val; /* Reinstate full GS Base */

Or so, because it's weird not to see a (uint32_t) somewhere in this logic.

> +            if ( (rc = ops->write_segment(x86_seg_gs, &sreg,
> +                                          ctxt)) != X86EMUL_OKAY )
> +            {
> +                /* Best effort unwind (i.e. no error checking). */
> +                ops->write_msr(MSR_SHADOW_GS_BASE, msr_val, ctxt);

write_segment() can't fail.  (The sanity checks are actually deferred
until after emulation is complete, and I'm not sure if that's behaviour
we want...)

However, more importantly, if we actually take this error path (for some
future reason) then we've created a security vulnerability in the guest.

It will be strictly better to crash the domain in this case than to try
to let it continue in this state.

> +                goto done;
> +            }
>              break;
>          }
>          break;
> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -281,6 +281,8 @@ XEN_CPUFEATURE(AVX512_BF16,  10*32+ 5) /
>  XEN_CPUFEATURE(FZRM,         10*32+10) /*A  Fast Zero-length REP MOVSB */
>  XEN_CPUFEATURE(FSRS,         10*32+11) /*A  Fast Short REP STOSB */
>  XEN_CPUFEATURE(FSRCS,        10*32+12) /*A  Fast Short REP CMPSB/SCASB */
> +XEN_CPUFEATURE(FRED,         10*32+17) /*   Flexible Return and Event Delivery */
> +XEN_CPUFEATURE(LKGS,         10*32+18) /*S  Load Kernel GS Base */
>  XEN_CPUFEATURE(WRMSRNS,      10*32+19) /*   WRMSR Non-Serialising */
>  
>  /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
> --- a/xen/tools/gen-cpuid.py
> +++ b/xen/tools/gen-cpuid.py
> @@ -295,6 +295,9 @@ def crunch_numbers(state):
>  
>          # In principle the TSXLDTRK insns could also be considered independent.
>          RTM: [TSXLDTRK],
> +
> +        # FRED builds on the LKGS instruction.
> +        LKGS: [FRED],

Hmm...  This is the first case (I think) where we've got a dependency
that goes backwards numerically in terms of feature number.

Obviously we need to support it, but I'm not sure if the deep_deps loop
will cope in its current form.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 23:43:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 23:43:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518181.804442 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjqIj-0006jp-LB; Tue, 04 Apr 2023 23:42:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518181.804442; Tue, 04 Apr 2023 23:42:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjqIj-0006ji-HB; Tue, 04 Apr 2023 23:42:57 +0000
Received: by outflank-mailman (input) for mailman id 518181;
 Tue, 04 Apr 2023 23:42:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zi2d=73=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1pjqIi-0006jc-73
 for xen-devel@lists.xen.org; Tue, 04 Apr 2023 23:42:56 +0000
Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com
 [2607:f8b0:4864:20::632])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 680bcece-d342-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 01:42:49 +0200 (CEST)
Received: by mail-pl1-x632.google.com with SMTP id w4so32818573plg.9
 for <xen-devel@lists.xen.org>; Tue, 04 Apr 2023 16:42:49 -0700 (PDT)
Received: from localhost ([122.172.85.8]) by smtp.gmail.com with ESMTPSA id
 y6-20020a170902864600b0019a97a4324dsm8937815plt.5.2023.04.04.16.42.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 Apr 2023 16:42:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 680bcece-d342-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680651768;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=feIblo+o8kbpXNJYhzwlTFiRu9n2TLSdCk9P+Iemon0=;
        b=Yan3Q9i7usnOf/ue1Ycdr66MdZ/nw/QgnlqaS4UNZDvl86rfJ9JCpESP5YkLjkYU2h
         ypWspWID3xtm7mfI2MG/9QdNp/C7Njy0uSzzOeY6mBxt0hhJTbZ5n3AQiFh2gixGgn8X
         OnKq6gCfWgacDC7Y9wkL4k8rtMMP/QUTSPLkw+nLWCnjvVmkGx2v9qBnMUS17sp86BSW
         ir9I0Zmzy9ZGad7QpnpU+/ARWvp8T5Z+N2OFKBEL54Dz26YGb86FeiB8+j4NMJXPUiWo
         9RMfyrAa8yxV3pOFIHADm/xxHiGZ4gT9EMFKoEfJIM8Vwtl50/BxI4UHwL5Fb+8z7OnL
         KP0g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680651768;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=feIblo+o8kbpXNJYhzwlTFiRu9n2TLSdCk9P+Iemon0=;
        b=K1stWgcAyZf8ClVwrQ7be0SWWjLvxgZbspSfEORaID8mEcL+JKZlRWmvJExjTXKdB5
         u9wOB+jC3BQk5V7p2IN22tugSaCntRHHai7aAo5R1xN2QoBtM4NsRFQ8PF6KGw0enj69
         x02tNv2gg16Fz7JC6ybt4vjwkkn8dmWG7xNOuWbBZLOgu6CdNFBloJbceS910L5IGbif
         9ZYFtzIbiCmdjvBdI2wUUeQgB1pkeSa+fDq1hRIWj3o3o9ZcuVqArO+007hrg3Eseh/z
         +4CdOMZWuclLA/mrCD6Ql6HnyDNs7Yk9WJ69yZmNjVqLwMzTDJpyHVVjucoRPoINnqDE
         bq4w==
X-Gm-Message-State: AAQBX9dGHVGm2QuAKjSb26u1w+lz75XzYulQxLHvj9UIHih05SzA8TiC
	ZFropbMcWQp0ql5yxJHfIBLNSg==
X-Google-Smtp-Source: AKy350ba2lwW+XWrJEcaG00ngHmXaxvRAhtHPRrC1Vt33qoLwtfK/oXIIbLdeQDDb2MkXMlAm5GfPQ==
X-Received: by 2002:a17:902:f68f:b0:19c:be0c:738 with SMTP id l15-20020a170902f68f00b0019cbe0c0738mr5157163plg.59.1680651767862;
        Tue, 04 Apr 2023 16:42:47 -0700 (PDT)
Date: Wed, 5 Apr 2023 05:12:28 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>, xen-devel@lists.xen.org,
	stratos-dev@op-lists.linaro.org, Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Erik Schilling <erik.schilling@linaro.org>
Subject: Re: [PATCH] libxl: arm: Allow grant mappings for backends running on
 Dom0
Message-ID: <20230404234228.vghxrrj6auy7zw4c@vireshk-i7>
References: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
 <25fb2b71-b663-b712-01cd-5c75aa4ccf9b@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <25fb2b71-b663-b712-01cd-5c75aa4ccf9b@gmail.com>

On 04-04-23, 21:16, Oleksandr Tyshchenko wrote:
> ok, probably makes sense

While testing both foreign and grant mappings I stumbled upon another
related problem. How do I control the creation of the iommu node from
the guest configuration file, irrespective of the domain the backend is
running in? This is what we have right now:

- always create iommu nodes if backend-dom != 0
- always create iommu nodes if forced_grant == 1

What I need to cover is:
- don't create iommu nodes, irrespective of the domain

This is required if you want to test both foreign and grant memory
allocations with different guest kernels, i.e. one guest kernel for a
device with grant mappings and another guest for a device with foreign
mappings. There is no way, that I know of, to disable the creation of
iommu nodes. Of course we would want to use the same images for the
kernel and other stuff, so this needs to be controlled from the guest
configuration file.

-- 
viresh


From xen-devel-bounces@lists.xenproject.org Tue Apr 04 23:56:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Apr 2023 23:56:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518186.804451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjqW7-0008Qd-UU; Tue, 04 Apr 2023 23:56:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518186.804451; Tue, 04 Apr 2023 23:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjqW7-0008QW-Rn; Tue, 04 Apr 2023 23:56:47 +0000
Received: by outflank-mailman (input) for mailman id 518186;
 Tue, 04 Apr 2023 23:56:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjqW6-0008QM-5z; Tue, 04 Apr 2023 23:56:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjqW6-0002cI-2s; Tue, 04 Apr 2023 23:56:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjqW5-00051d-LU; Tue, 04 Apr 2023 23:56:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjqW5-0004RP-Ky; Tue, 04 Apr 2023 23:56:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j/m17Yn7/eQs4vcsv8wd4f6/0NDyHCDEjIIcGGNnV5A=; b=TBjn4Y0TFf7LqRFxCYWUXLBttA
	D0nYLYyg8GekKlUK5MesY9SIUOeKRfBdVZ70tuULpCd62YEGkCdrIC9sdGGGq4HwyU0hq73ZZMLV9
	ELfacJx6VA9pFbn5K+Nbh/h2YcAxlnEX7pZ/kI2Iy47sH+GTFeRBzug49ywmequg65U0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180135-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180135: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-pvshim:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-freebsd12-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=148341f0a2f53b5e8808d093333d85170586a15d
X-Osstest-Versions-That:
    linux=7e364e56293bb98cae1b55fd835f5991c4e96e7d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Apr 2023 23:56:45 +0000

flight 180135 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180135/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvshim 22 guest-start/debian.repeat fail in 180131 pass in 180135
 test-amd64-amd64-freebsd12-amd64 21 guest-start/freebsd.repeat fail pass in 180131
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180131

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180116
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180116
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                148341f0a2f53b5e8808d093333d85170586a15d
baseline version:
 linux                7e364e56293bb98cae1b55fd835f5991c4e96e7d

Last test of basis   180116  2023-04-03 02:26:51 Z    1 days
Testing same since   180130  2023-04-03 17:12:08 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Brauner <brauner@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael Kelley <mikelley@microsoft.com>
  Mohammed Gamal <mgamal@redhat.com>
  Wei Liu <wei.liu@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   7e364e56293b..148341f0a2f5  148341f0a2f53b5e8808d093333d85170586a15d -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 00:13:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 00:13:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518194.804472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjqli-0003Cq-H8; Wed, 05 Apr 2023 00:12:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518194.804472; Wed, 05 Apr 2023 00:12:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjqli-0003Cj-Dd; Wed, 05 Apr 2023 00:12:54 +0000
Received: by outflank-mailman (input) for mailman id 518194;
 Wed, 05 Apr 2023 00:12:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=enMU=74=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1pjqlh-0002xi-H7
 for xen-devel@lists.xen.org; Wed, 05 Apr 2023 00:12:53 +0000
Received: from mail-pj1-x102d.google.com (mail-pj1-x102d.google.com
 [2607:f8b0:4864:20::102d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a5e91b4-d346-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 02:12:51 +0200 (CEST)
Received: by mail-pj1-x102d.google.com with SMTP id j13so32305906pjd.1
 for <xen-devel@lists.xen.org>; Tue, 04 Apr 2023 17:12:51 -0700 (PDT)
Received: from localhost ([122.172.85.8]) by smtp.gmail.com with ESMTPSA id
 z3-20020a170902ee0300b001a1ea1d6d6esm8851436plb.290.2023.04.04.17.12.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 Apr 2023 17:12:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a5e91b4-d346-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680653570;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=mYxdQefgqzq2X8Q8qR8DL2/WAeN3DrYXWhA7psmOG80=;
        b=tzmH9TBXP3GLNCGLUPpOx4TttKCBmol8VdJA6+UQj486ul/mT6/7hcIp6NJQc4wMbq
         WfrnQHg0d5FE8BoZ+pORuig9piAflF/mel91+K1+Mb5MkgBp+E4vzEupCjnuuiRSeaUX
         sRR6x0OfIml2X7W/sZqZrwDJ4uIL2NfsQ/4s72WUpB43/9QfiBpV2M2YUJw11AtvFYEW
         nv66L+qr4kJQ5ganRgEFds/KFds1DLRrf5fFhiu/A2+2UYdPxH258Q/hzgiOn2wWIwM/
         1Ldud6iAarCJ2TmueSghvBXJ4lUdwxT03TM96N4q02SHBzvW0L4kDw1EnL6wqlCOOWMF
         KQlw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680653570;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=mYxdQefgqzq2X8Q8qR8DL2/WAeN3DrYXWhA7psmOG80=;
        b=GUu6wSDAMyEFpzl5D4lur6Bfuwm6xfpmJQLDL1YaXjXiVvyTFG5+yiIBOj844Ak61W
         fCa6+JA4NyN6mhUg99UajP5Y8dB1H5k32NmOph+pTDIlZeHwuXi+Axb5SKVimS70tlHb
         umRuMFVdNVBzNWCUI2KhAhjqBkFC66Aw+ZqNcVbc7zcQdmqDHDMQXAlrSC7yw7n8Uf/w
         vg5U3w9KtMn+6ug6Llo4VGaieK5Tsy26ufecm6OD1HEMrQVsxsdYigTmbaoxGoEpzSUy
         D6yajaSofqK0QlqBLKXDLLCgeuM2V6yV8ipBGiPWjNIXePZACicGgcD+IcCnNXciBHAH
         cFfw==
X-Gm-Message-State: AAQBX9cwOIOvRrRqV5v25Tv2wxT49ySgIeZtpcTvCM0rlkkbO7c5KqqR
	Aeedze2Kq57twibw7hPeizJDyeIYu3MtMNsOX8E=
X-Google-Smtp-Source: AKy350b5vjoTBYkLa2izjFoNXcHxKSMw3SrIT5qvguVpK4gjXabDgou1jc1K8ftTFJBNnxwFcqQy9w==
X-Received: by 2002:a17:90b:3b8a:b0:237:161d:f5ac with SMTP id pc10-20020a17090b3b8a00b00237161df5acmr4435401pjb.36.1680653570386;
        Tue, 04 Apr 2023 17:12:50 -0700 (PDT)
From: Viresh Kumar <viresh.kumar@linaro.org>
To: xen-devel@lists.xen.org,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	stratos-dev@op-lists.linaro.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>,
	Erik Schilling <erik.schilling@linaro.org>
Subject: [PATCH V2 2/2] libxl: fix matching of generic virtio device
Date: Wed,  5 Apr 2023 05:42:36 +0530
Message-Id: <62f2603d8b3fba1efb236063a0819fb95285b0ae.1680653504.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 2.31.1.272.g89b43f80a514
In-Reply-To: <c5d2ab978255ca84197c980cbfb9a504e7c625f8.1680653504.git.viresh.kumar@linaro.org>
References: <c5d2ab978255ca84197c980cbfb9a504e7c625f8.1680653504.git.viresh.kumar@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The strings won't be an exact match, as we only want to match the
prefix here, i.e. "virtio,device". This is already done properly in
libxl_virtio.c; let's do the same here too.

Fixes: 43ba5202e2ee ("libxl: add support for generic virtio device")
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
V1->V2: Add the missing fixes tag.

 tools/libs/light/libxl_arm.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index ddc7b2a15975..97c80d7ed0fa 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -1033,10 +1033,14 @@ static int make_virtio_mmio_node_device(libxl__gc *gc, void *fdt, uint64_t base,
     } else if (!strcmp(type, VIRTIO_DEVICE_TYPE_GPIO)) {
         res = make_virtio_mmio_node_gpio(gc, fdt);
         if (res) return res;
-    } else if (strcmp(type, VIRTIO_DEVICE_TYPE_GENERIC)) {
-        /* Doesn't match generic virtio device */
-        LOG(ERROR, "Invalid type for virtio device: %s", type);
-        return -EINVAL;
+    } else {
+        int len = sizeof(VIRTIO_DEVICE_TYPE_GENERIC) - 1;
+
+        if (strncmp(type, VIRTIO_DEVICE_TYPE_GENERIC, len)) {
+            /* Doesn't match generic virtio device */
+            LOG(ERROR, "Invalid type for virtio device: %s", type);
+            return -EINVAL;
+        }
     }
 
     return fdt_end_node(fdt);
-- 
2.31.1.272.g89b43f80a514



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 00:13:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 00:13:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518193.804462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjqlg-0002xv-8K; Wed, 05 Apr 2023 00:12:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518193.804462; Wed, 05 Apr 2023 00:12:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjqlg-0002xo-5M; Wed, 05 Apr 2023 00:12:52 +0000
Received: by outflank-mailman (input) for mailman id 518193;
 Wed, 05 Apr 2023 00:12:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=enMU=74=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1pjqle-0002xi-ID
 for xen-devel@lists.xen.org; Wed, 05 Apr 2023 00:12:50 +0000
Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com
 [2607:f8b0:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 97e26084-d346-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 02:12:47 +0200 (CEST)
Received: by mail-pl1-x62a.google.com with SMTP id le6so32869820plb.12
 for <xen-devel@lists.xen.org>; Tue, 04 Apr 2023 17:12:47 -0700 (PDT)
Received: from localhost ([122.172.85.8]) by smtp.gmail.com with ESMTPSA id
 jn22-20020a170903051600b0019a95baaaa6sm8852381plb.222.2023.04.04.17.12.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 Apr 2023 17:12:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97e26084-d346-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680653566;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=OWw80btV/vkY8XMbURM92c6D4GYeLA/5q3xU6Aoi0EQ=;
        b=vR5dx0yJebVkrvv+aB71InjcvrW6v9q0rNb2PTlKa4lWPnr4/+1CT8KntvSHFQKpCM
         siaDKPZwRnkot4U8RUR7yYpA9CR+79a+NByNqKCtWM8XTMRw7aQ+4NXSgLSHbK0ZGE50
         QPgCl4tZtf45W8+N9nUlXaG6eFFXj8MPsvunicqk18zheXGfYpoV+f5uKDHDy00GGHhZ
         Ouz/Q5yQrF5BC8YUFzktg3F9hE3rKvbZtGqxMOZ1cLD7mQPziXcKw6Tv6t2hTlB5FJZM
         O4zNi823xfywpse/OuLgtWfVK9WPUeINoe/ziK2zK2CAr/UlvgUI5QTAqYf2Fm32YanH
         mMRw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680653566;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=OWw80btV/vkY8XMbURM92c6D4GYeLA/5q3xU6Aoi0EQ=;
        b=4108WNwuCYfF/epZqox5Xyu1aP5matl6SCPcbeJ4v0m/Sk0M8D5G9Anj0IYiiVcfJB
         IQQOUmzuSSwDf1Sfh9mUqm4Ch8Azve8JDb15w2pcK4sCIZZjeZ+yvvj85L5gJ78IkXrH
         /qwZNByua02iD2cxqRUWgSMfxSLTtLjlE+RMm8B5Q+YGR/ZQ0nfurnpfweEs+CKDZRuq
         Zz/dpTukqwgZONRGTThXENQ2mIzyfrSQseVG93zLgxrvERGXCcgA1X08IOa145aeRuZt
         RrslJGIjcwuWktnzSaEMvn5KJheyFWRnMURRZkO3tFA7v8gbWxP5lPumwJSI718Zwrw3
         WiBA==
X-Gm-Message-State: AAQBX9cpEsa513P8xGij+CB1pcKHbPofYvTwKwoB/nVXG0O2U84J7fMf
	sCj7Jl5oZwWXtzM8oDZoF3quT7pP6CG/3u2fQOQ=
X-Google-Smtp-Source: AKy350bnELvarxqhwrfDobQ4HgHVzxA23wxmz2u35PJ8qLOp7TTqjYlqBLgYLfiN6mn/KoxMHgp9gg==
X-Received: by 2002:a17:90a:1902:b0:240:5c43:7766 with SMTP id 2-20020a17090a190200b002405c437766mr4844628pjg.4.1680653566069;
        Tue, 04 Apr 2023 17:12:46 -0700 (PDT)
From: Viresh Kumar <viresh.kumar@linaro.org>
To: xen-devel@lists.xen.org,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	stratos-dev@op-lists.linaro.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>,
	Erik Schilling <erik.schilling@linaro.org>
Subject: [PATCH V2 1/2] docs: Allow generic virtio device types to contain device-id
Date: Wed,  5 Apr 2023 05:42:35 +0530
Message-Id: <c5d2ab978255ca84197c980cbfb9a504e7c625f8.1680653504.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 2.31.1.272.g89b43f80a514
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For generic virtio devices, where we don't need to add compatible or
other special DT properties, the type field is set to "virtio,device".

But this misses the case where the user sets the type with a valid
virtio device id as well, like "virtio,device26" for the file system
device.

Update the documentation to allow that as well.

Fixes: dd54ea500be8 ("docs: add documentation for generic virtio devices")
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
V1->V2: New patch.

 docs/man/xl.cfg.5.pod.in | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 10f37990be57..ea20eac0ba32 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1608,8 +1608,9 @@ example, "type=virtio,device22" for the I2C device, whose device-tree binding is
 
 L<https://www.kernel.org/doc/Documentation/devicetree/bindings/i2c/i2c-virtio.yaml>
 
-For generic virtio devices, where we don't need to set special or compatible
-properties in the Device Tree, the type field must be set to "virtio,device".
+For other generic virtio devices, where we don't need to set special or
+compatible properties in the Device Tree, the type field must be set to
+"virtio,device" or "virtio,device<N>", where "N" is the virtio device id.
 
 =item B<transport=STRING>
 
-- 
2.31.1.272.g89b43f80a514



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 03:05:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 03:05:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518204.804482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjtSM-0002Ay-R5; Wed, 05 Apr 2023 03:05:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518204.804482; Wed, 05 Apr 2023 03:05:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjtSM-0002Aq-MK; Wed, 05 Apr 2023 03:05:06 +0000
Received: by outflank-mailman (input) for mailman id 518204;
 Wed, 05 Apr 2023 03:05:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjtSL-0002Ag-0n; Wed, 05 Apr 2023 03:05:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjtSK-0006WQ-He; Wed, 05 Apr 2023 03:05:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjtSJ-0002z8-UR; Wed, 05 Apr 2023 03:05:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjtSJ-0008Lu-Te; Wed, 05 Apr 2023 03:05:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Sfpbcr+nPnZUZKjN++iIYImzR/J4uxMaWfRdEUAx0/c=; b=nRBU+RtBybYwK0NAD0YzaGQT6l
	RXattxOWVDxhmWIi8ZnwvadaZ+tEiianNT1PFRG2ev9tC2dHpcqnLUPbZa/G7vwWtnYG5YFGYEuCz
	xgPZoQj5dc3y604ka+k20ZXGkmb2GdBqotjNBDo4GU03UZfhyDpFuF25SnTe9fHu93nc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180136-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180136: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=51a6dc9d394098e8f4141fad869a1ee9585f54f8
X-Osstest-Versions-That:
    qemuu=efcd0ec14b0fe9ee0ee70277763b2d538d19238d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 03:05:03 +0000

flight 180136 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180136/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180088
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180088
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180088
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180088
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180088
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                51a6dc9d394098e8f4141fad869a1ee9585f54f8
baseline version:
 qemuu                efcd0ec14b0fe9ee0ee70277763b2d538d19238d

Last test of basis   180088  2023-03-31 12:08:44 Z    4 days
Testing same since   180136  2023-04-04 13:08:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chris Rauer <crauer@google.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Markus Armbruster <armbru@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   efcd0ec14b..51a6dc9d39  51a6dc9d394098e8f4141fad869a1ee9585f54f8 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 04:46:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 04:46:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518211.804492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjv2Y-0003vX-Tg; Wed, 05 Apr 2023 04:46:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518211.804492; Wed, 05 Apr 2023 04:46:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjv2Y-0003vQ-Po; Wed, 05 Apr 2023 04:46:34 +0000
Received: by outflank-mailman (input) for mailman id 518211;
 Wed, 05 Apr 2023 04:46:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WpID=74=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1pjv2W-0003vK-SQ
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 04:46:33 +0000
Received: from mail-ua1-x92f.google.com (mail-ua1-x92f.google.com
 [2607:f8b0:4864:20::92f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d45cb58b-d36c-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 06:46:30 +0200 (CEST)
Received: by mail-ua1-x92f.google.com with SMTP id i22so24745423uat.8
 for <xen-devel@lists.xenproject.org>; Tue, 04 Apr 2023 21:46:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d45cb58b-d36c-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680669988;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1mHfEQJVJxa7vmszfTcCwqaAU9CfuBGkxiTaGBwcxQI=;
        b=pHPuOmM0hqQsCVM5oqdsVCnq3cSErus6CZp0nlP2tSmPudXWNmJYrcuslv5bpRnY1S
         ckmspC8qR/oeJoQJe1O3CwIlQXciszzKY25cTaftbrf+Syk988VYnWgIl6c4Trk8Aqlf
         +dE74/BHzzcT433EpKmBWnsex5YSQrAq/bgExJvfKEnAa2UmiRd4Qg3fguvVa9qlmzSM
         k1UArUywPOjKy5YM88rjCAPsMY+SuBa2dDUl4nodcv16OwVw6O+OWen4Ro/6jYxIzkft
         WndR0bqRj7ECxGx6lU7MWBa5mUKbn8IcKDJ/wvNlDJQoo2d/rbEHxiy0VrpYrrWsiCJh
         PlMg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680669988;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=1mHfEQJVJxa7vmszfTcCwqaAU9CfuBGkxiTaGBwcxQI=;
        b=VViNmlLci6wAIqJLohkMEOu87FeIp9qb1Tw3jYIb/zxJkb6Z3VTbICHABCNcAy/fEj
         nMJy46I8n/PW+Ghlk5Nt+yF+eICiM/iKPE9x3E8GfuNzNckqWIjckrkXLDqO/3T0zMxI
         dDjuP3++EXyMhXLsYHcFXPw1M02/4EqfD22LskYvZ9MHQQ83JVUmZ8N+sxunX2Q9wvD5
         kcVoHVI0Ixl0XNOhxo5ibFd2IXnDbC1gJ2wUqzofCddZqc985B0hMwYowYCOK0M8BuD7
         fEgxX7E2kXOkMXszY5KvBAWIc465rKCUCY00IJMrosAsKp/KtN4Z02JvUm3sTW6tmXkm
         nkiA==
X-Gm-Message-State: AAQBX9f+4QWdUaD6ebMjHX1ePeQQBEV+Fr2KK1gL6iG3HmEMyNhG4zf7
	NJkHwyEkhMQ94O+99EB7Mk37iCi9tRjht44iOo4=
X-Google-Smtp-Source: AKy350akIvZyr5+Ra8ZTKGtad0acZ4zqrDFkn2DwRV4Aaw/MuEbufeHM0ZDMDtFHbQ1E1lUoQ7I8bv21YCVj7QVKxkE=
X-Received: by 2002:a9f:37cd:0:b0:764:64c1:9142 with SMTP id
 q71-20020a9f37cd000000b0076464c19142mr1418967uaq.0.1680669988687; Tue, 04 Apr
 2023 21:46:28 -0700 (PDT)
MIME-Version: 1.0
References: <20230324222451.3295023-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230324222451.3295023-1-andrew.cooper3@citrix.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Wed, 5 Apr 2023 14:46:02 +1000
Message-ID: <CAKmqyKPDef0BdO0iK5Vx4-fycG3e7k0_oP_tFXDqRF5vUDXwDw@mail.gmail.com>
Subject: Re: [PATCH] ARM+RISC-V: BSS handling improvements
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>, Oleksii Kurochko <oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sat, Mar 25, 2023 at 8:25 AM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
>  * Correct comments in arm{32,64}/head.S
>  * Provide Linker assertions to check the safety of the zeroing loops
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> CC: Bertrand Marquis <bertrand.marquis@arm.com>
> CC: Bob Eshleman <bobbyeshleman@gmail.com>
> CC: Alistair Francis <alistair.francis@wdc.com>
> CC: Connor Davis <connojdavis@gmail.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>
> Pulled out of the very start of my work to try and unify the handling of
> xen_phys_addr across architectures.
> ---
>  xen/arch/arm/arm32/head.S | 2 +-
>  xen/arch/arm/arm64/head.S | 2 +-
>  xen/arch/arm/xen.lds.S    | 2 ++
>  xen/arch/riscv/xen.lds.S  | 4 ++++
>  4 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index df51550baa8a..f9f7be9588b1 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -301,7 +301,7 @@ ENDPROC(check_cpu_mode)
>  zero_bss:
>          PRINT("- Zero BSS -\r\n")
>          mov_w r0, __bss_start        /* r0 := vaddr(__bss_start) */
> -        mov_w r1, __bss_end          /* r1 := vaddr(__bss_start) */
> +        mov_w r1, __bss_end          /* r1 := vaddr(__bss_end)   */
>
>          mov   r2, #0
>  1:      str   r2, [r0], #4
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 4a3f87117c83..8a4dd64c99ad 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -437,7 +437,7 @@ zero_bss:
>
>          PRINT("- Zero BSS -\r\n")
>          ldr   x0, =__bss_start       /* x0 := vaddr(__bss_start) */
> -        ldr   x1, =__bss_end         /* x1 := vaddr(__bss_start) */
> +        ldr   x1, =__bss_end         /* x1 := vaddr(__bss_end)   */
>
>  1:      str   xzr, [x0], #8
>          cmp   x0, x1
> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
> index 1b392345bc3b..6ca3caefe607 100644
> --- a/xen/arch/arm/xen.lds.S
> +++ b/xen/arch/arm/xen.lds.S
> @@ -240,3 +240,5 @@ ASSERT(_idmap_end - _idmap_start <= PAGE_SIZE, "Identity mapped code is larger t
>   */
>  ASSERT(IS_ALIGNED(__init_begin,     4), "__init_begin is misaligned")
>  ASSERT(IS_ALIGNED(__init_end,       4), "__init_end is misaligned")
> +ASSERT(IS_ALIGNED(__bss_start,      POINTER_ALIGN), "__bss_start is misaligned")
> +ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
> diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
> index ca57cce75cba..2ed70eccc62a 100644
> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -1,3 +1,4 @@
> +#include <xen/lib.h>
>  #include <xen/xen.lds.h>
>
>  #undef ENTRY
> @@ -156,3 +157,6 @@ SECTIONS
>
>      ELF_DETAILS_SECTIONS
>  }
> +
> +ASSERT(IS_ALIGNED(__bss_start,      POINTER_ALIGN), "__bss_start is misaligned")
> +ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
> --
> 2.30.2
>
>
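[Editorial note: the alignment assertions matter because both zeroing loops in the patch clear one word per iteration and terminate only on exact pointer equality. A hypothetical C model of the arm64 loop, not actual Xen code, illustrates the invariant:]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical C model (not Xen code) of the arm64 zero_bss loop:
 * it stores one 64-bit word per iteration and stops only on exact
 * pointer equality, so the linker assertions in the patch must
 * guarantee that both bounds are POINTER_ALIGN-aligned, or the loop
 * would step past __bss_end and never terminate at the boundary. */
static void zero_bss(uint64_t *bss_start, uint64_t *bss_end)
{
    /* Mirrors ASSERT(IS_ALIGNED(..., POINTER_ALIGN), ...) at runtime. */
    assert(((uintptr_t)bss_start % sizeof(uint64_t)) == 0);
    assert(((uintptr_t)bss_end   % sizeof(uint64_t)) == 0);

    for ( uint64_t *p = bss_start; p != bss_end; p++ )
        *p = 0;                 /* corresponds to: str xzr, [x0], #8 */
}
```

Checking alignment at link time rather than in the loop keeps the early-boot path branch-free while still catching a misplaced section boundary.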


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 06:40:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 06:40:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518218.804502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjwoB-00075x-Ut; Wed, 05 Apr 2023 06:39:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518218.804502; Wed, 05 Apr 2023 06:39:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjwoB-00075q-Rn; Wed, 05 Apr 2023 06:39:51 +0000
Received: by outflank-mailman (input) for mailman id 518218;
 Wed, 05 Apr 2023 06:39:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjwoA-00075f-Sk; Wed, 05 Apr 2023 06:39:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjwoA-0003uT-Pg; Wed, 05 Apr 2023 06:39:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pjwoA-0006po-7I; Wed, 05 Apr 2023 06:39:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pjwoA-0000Ao-6q; Wed, 05 Apr 2023 06:39:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pA769ubfFR/4arVACp/5eOEiHrl/x/uoU+musyhZ/fQ=; b=HXFCsxbPN3tyk80pmx5V6cV0rX
	c6KDU/Qs7kURpTa2FBkns4M3N21HYOtOZHjXtvnwgKoPzJz1WQwy2yIpp55PryufdU86xVmVa4uVo
	9LyDvtzmx1DA2iJLBChu8Hv9ODO6Siw721/AmTivV2CU/f2uM20pfHu28HRrX57oMzCg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180139-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180139: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e
X-Osstest-Versions-That:
    xen=bfa2e6a246225233f09a2523939e01dcf83bca4c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 06:39:50 +0000

flight 180139 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180139/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180133
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180133
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180133
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180133
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180133
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180133
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180133
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180133
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180133
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e
baseline version:
 xen                  bfa2e6a246225233f09a2523939e01dcf83bca4c

Last test of basis   180133  2023-04-04 04:12:59 Z    1 days
Testing same since   180139  2023-04-04 16:39:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   bfa2e6a246..658fcb7ac9  658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e -> master


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:04:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:04:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518228.804528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBU-0002F6-0b; Wed, 05 Apr 2023 07:03:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518228.804528; Wed, 05 Apr 2023 07:03:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBT-0002Ez-Sq; Wed, 05 Apr 2023 07:03:55 +0000
Received: by outflank-mailman (input) for mailman id 518228;
 Wed, 05 Apr 2023 07:03:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxBS-0002Et-Ox
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:03:54 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 057a42a7-d380-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 09:03:52 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 95B26204F8;
 Wed,  5 Apr 2023 07:03:51 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4F8F313A31;
 Wed,  5 Apr 2023 07:03:51 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id e+f0EVcdLWRFEwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:03:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 057a42a7-d380-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678231; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=/ixJC+7prIJuoskG0EIHqu4CVsVfCk74G0QSnhmZDzM=;
	b=h9rvUgdzBA7Hqu85sjtwMNHuaiHefCCwEO4cBTJl1JOo0xqHAY56pYccr5q6+hL8S8o7l6
	1lq5Psheupci+F6672Sv4aetHL273Mw/3QrfDm17//EKW16oGoVyWR3LdNVWOw/qfYhFsW
	oumo0p1Ov8K5U8/KaUR3sTShX4GUxNM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v4 00/13] tools/xenstore: rework internal accounting
Date: Wed,  5 Apr 2023 09:03:36 +0200
Message-Id: <20230405070349.25293-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series reworks the Xenstore internal accounting to use a uniform
generic framework. It also adds some useful diagnostic information,
such as an accounting trace and the maximum per-domain and global quota
values seen.
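[Editorial note: as a rough illustration only, not the actual xenstored implementation, a uniform table-based accounting framework of the kind described can be modelled as an array of counters indexed by account type, with each entry tracking both the current value and the maximum ever seen:]

```c
/* Illustrative sketch, not xenstored code: per-domain accounting as a
 * table of counters indexed by a hypothetical account_id enum. */
enum account_id { ACC_NODES, ACC_WATCHES, ACC_TRANSACTIONS, ACC_N };

struct acc_entry { unsigned int cur, max; };
struct domain_acct { struct acc_entry a[ACC_N]; };

/* Apply a signed delta; returns 0 on success, -1 if it would exceed
 * the quota.  The max field records the peak value for diagnostics. */
static int acct_add(struct domain_acct *d, enum account_id id,
                    int delta, unsigned int quota)
{
    unsigned int val = d->a[id].cur + delta;

    if ( delta > 0 && val > quota )
        return -1;
    d->a[id].cur = val;
    if ( val > d->a[id].max )
        d->a[id].max = val;     /* maximum-seen value, never reset */
    return 0;
}
```

Keeping every account in one table means quota checks, tracing, and max-value reporting need a single code path instead of one per resource type.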

Changes in V2:
- added patch 1 (leftover from previous series)
- rebase

Changes in V3:
- addressed comments

Changes in V4:
- fixed patch 3

Juergen Gross (13):
  tools/xenstore: take transaction internal nodes into account for quota
  tools/xenstore: manage per-transaction domain accounting data in an
    array
  tools/xenstore: introduce accounting data array for per-domain values
  tools/xenstore: add framework to commit accounting data on success
    only
  tools/xenstore: use accounting buffering for node accounting
  tools/xenstore: add current connection to domain_memory_add()
    parameters
  tools/xenstore: use accounting data array for per-domain values
  tools/xenstore: add accounting trace support
  tools/xenstore: add TDB access trace support
  tools/xenstore: switch transaction accounting to generic accounting
  tools/xenstore: remember global and per domain max accounting values
  tools/xenstore: use generic accounting for remaining quotas
  tools/xenstore: switch quota management to be table based

 docs/misc/xenstore.txt                 |   5 +-
 tools/xenstore/xenstored_control.c     |  65 ++--
 tools/xenstore/xenstored_core.c        | 168 +++++-----
 tools/xenstore/xenstored_core.h        |  24 +-
 tools/xenstore/xenstored_domain.c      | 433 ++++++++++++++++++-------
 tools/xenstore/xenstored_domain.h      |  60 +++-
 tools/xenstore/xenstored_transaction.c |  22 +-
 tools/xenstore/xenstored_watch.c       |  15 +-
 8 files changed, 514 insertions(+), 278 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:04:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:04:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518229.804537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBY-0002Uy-AS; Wed, 05 Apr 2023 07:04:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518229.804537; Wed, 05 Apr 2023 07:04:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBY-0002Un-7C; Wed, 05 Apr 2023 07:04:00 +0000
Received: by outflank-mailman (input) for mailman id 518229;
 Wed, 05 Apr 2023 07:03:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxBX-0002UC-4m
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:03:59 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 08a76ca9-d380-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 09:03:57 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 35663229E9;
 Wed,  5 Apr 2023 07:03:57 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 0655A13A31;
 Wed,  5 Apr 2023 07:03:56 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ZzxgO1wdLWRUEwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:03:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08a76ca9-d380-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678237; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jR2CaVJDZ2qBizxTmbgLfYMIkR5bcQwiTfYf2yndaVw=;
	b=PNm3qfd93Y9Gavr2aSfwatm6iAcArvUAkI79Ec4Tyz5JmZRSAEqnX+J++bZAENfZdrmhrV
	c0EkT/oJCwY1tWWec5KmRLnWNFn+p233Vqh67Dit3FLRkxAE9kWF+lGf+1CSX4WsBiX6gG
	WFxSSwcrukx/2dldNX1uTN9jlLZrpUY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 01/13] tools/xenstore: take transaction internal nodes into account for quota
Date: Wed,  5 Apr 2023 09:03:37 +0200
Message-Id: <20230405070349.25293-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The accounting for the number of nodes owned by a domain inside an
active transaction is not working correctly: the node quota is checked
only against the number of nodes outside the transaction.

This can result in the transaction failing at the end after all, as the
node quota is checked again when the transaction is committed.

Conversely, even a transaction deleting many nodes might be unable to
create new ones, in case the node quota had already been reached at the
start of the transaction.
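
As a stand-alone sketch of the intended semantics (illustrative names,
not the xenstored implementation): inside a transaction the quota has to
be checked against the domain's node count plus the delta accumulated by
the transaction, not against the outside count alone.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Minimal model of the fixed check: "outside_nodes" is the domain's
 * node count outside the transaction, "transaction_delta" the net
 * number of nodes created (positive) or deleted (negative) inside it.
 */
static bool quota_ok(unsigned int outside_nodes, int transaction_delta,
		     unsigned int quota)
{
	int effective = (int)outside_nodes + transaction_delta;

	/* More deletions than existing nodes: treat as empty. */
	if (effective < 0)
		effective = 0;

	return (unsigned int)effective < quota;
}
```

With this model a transaction that deletes nodes frees quota
immediately, and one that only creates nodes is stopped before the
final commit-time check would fail.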

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V3:
- rewrite of commit message (Julien Grall)
---
 tools/xenstore/xenstored_domain.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f62be2245c..dbbf97accc 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1116,9 +1116,8 @@ int domain_nbentry_fix(unsigned int domid, int num, bool update)
 
 int domain_nbentry(struct connection *conn)
 {
-	return (domain_is_unprivileged(conn))
-		? conn->domain->nbentry
-		: 0;
+	return domain_is_unprivileged(conn)
+	       ? domain_nbentry_add(conn, conn->id, 0, true) : 0;
 }
 
 static bool domain_chk_quota(struct domain *domain, int mem)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:04:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:04:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518230.804547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBd-0002nk-Jj; Wed, 05 Apr 2023 07:04:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518230.804547; Wed, 05 Apr 2023 07:04:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBd-0002nZ-Ga; Wed, 05 Apr 2023 07:04:05 +0000
Received: by outflank-mailman (input) for mailman id 518230;
 Wed, 05 Apr 2023 07:04:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxBb-0002UC-L1
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:04:03 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0bfdcdaa-d380-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 09:04:03 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C870B205F9;
 Wed,  5 Apr 2023 07:04:02 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 95F1F13A31;
 Wed,  5 Apr 2023 07:04:02 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id NXw7I2IdLWRfEwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:04:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bfdcdaa-d380-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678242; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JlFoqnoU3I4JMyJapnFZtevlraEzWmolnTL1zQ4kqcs=;
	b=DDAP5/NyAduIWDEb6ICBx4jdNEjhte34fHoXhVU634jNXxaIfwnKOrqGKNI6jeg6ph566v
	KkAXwau9ss9pN8JmnivV8yWp18Ak6MDK0OfL+QBwKc0N0CJhcN/LNEJ3kRN5lboJWw0jU+
	I9r22p9nwPylJiSigy3yVyLqeuejJZA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 02/13] tools/xenstore: manage per-transaction domain accounting data in an array
Date: Wed,  5 Apr 2023 09:03:38 +0200
Message-Id: <20230405070349.25293-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In preparation for keeping accounting data in an array instead of in
independent fields, switch the struct changed_domain accounting data to
that scheme, for now using an array with just one element.

To make this scheme extensible, add the needed indexing enum to
xenstored_domain.h.
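
The idea can be sketched stand-alone (condensed from the patch, names
as in the diff below): one array indexed by an enum replaces a named
field per accounting item, so adding an item later only needs a new
enum entry.

```c
#include <assert.h>
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Indexing enum; the final entry is the per-scope element count. */
enum accitem {
	ACC_NODES,
	ACC_TR_N,	/* number of elements per transaction */
};

/* Per-domain data changed by a transaction. */
struct changed_domain {
	unsigned int domid;
	int acc[ACC_TR_N];
};

static int acc_add(struct changed_domain *cd, enum accitem what, int val)
{
	/* The enum must never index past the accounting array. */
	assert(what < ARRAY_SIZE(cd->acc));

	cd->acc[what] += val;
	return cd->acc[what];
}
```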

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- make "what" parameter of acc_add_changed_dom() an enum type, and
  assert() that it won't exceed the accounting array (Julien Grall)
---
 tools/xenstore/xenstored_domain.c | 19 +++++++++++--------
 tools/xenstore/xenstored_domain.h | 10 ++++++++++
 2 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index dbbf97accc..609a9a13ab 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -99,8 +99,8 @@ struct changed_domain
 	/* Identifier of the changed domain. */
 	unsigned int domid;
 
-	/* Amount by which this domain's nbentry field has changed. */
-	int nbentry;
+	/* Accounting data. */
+	int acc[ACC_TR_N];
 };
 
 static struct hashtable *domhash;
@@ -550,7 +550,7 @@ int acc_fix_domains(struct list_head *head, bool chk_quota, bool update)
 	int cnt;
 
 	list_for_each_entry(cd, head, list) {
-		cnt = domain_nbentry_fix(cd->domid, cd->nbentry, update);
+		cnt = domain_nbentry_fix(cd->domid, cd->acc[ACC_NODES], update);
 		if (!update) {
 			if (chk_quota && cnt >= quota_nb_entry_per_domain)
 				return ENOSPC;
@@ -595,19 +595,21 @@ static struct changed_domain *acc_get_changed_domain(const void *ctx,
 	return cd;
 }
 
-static int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
-			       unsigned int domid)
+static int acc_add_changed_dom(const void *ctx, struct list_head *head,
+			       enum accitem what, int val, unsigned int domid)
 {
 	struct changed_domain *cd;
 
+	assert(what < ARRAY_SIZE(cd->acc));
+
 	cd = acc_get_changed_domain(ctx, head, domid);
 	if (!cd)
 		return 0;
 
 	errno = 0;
-	cd->nbentry += val;
+	cd->acc[what] += val;
 
-	return cd->nbentry;
+	return cd->acc[what];
 }
 
 static void domain_conn_reset(struct domain *domain)
@@ -1071,7 +1073,8 @@ static int domain_nbentry_add(struct connection *conn, unsigned int domid,
 
 	if (conn && conn->transaction) {
 		head = transaction_get_changed_domains(conn->transaction);
-		ret = acc_add_dom_nbentry(conn->transaction, head, add, domid);
+		ret = acc_add_changed_dom(conn->transaction, head, ACC_NODES,
+					  add, domid);
 		if (errno) {
 			fail_transaction(conn->transaction);
 			return -1;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 279cccb3ad..40803574f6 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -19,6 +19,16 @@
 #ifndef _XENSTORED_DOMAIN_H
 #define _XENSTORED_DOMAIN_H
 
+/*
+ * All accounting data is stored in a per-domain array.
+ * Depending on the account item there might be other scopes as well, like e.g.
+ * a per transaction array.
+ */
+enum accitem {
+	ACC_NODES,
+	ACC_TR_N,		/* Number of elements per transaction. */
+};
+
 void handle_event(void);
 
 void check_domains(void);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:04:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:04:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518232.804558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBk-0003C5-T7; Wed, 05 Apr 2023 07:04:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518232.804558; Wed, 05 Apr 2023 07:04:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBk-0003By-Q8; Wed, 05 Apr 2023 07:04:12 +0000
Received: by outflank-mailman (input) for mailman id 518232;
 Wed, 05 Apr 2023 07:04:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxBi-0002Et-Ga
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:04:10 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0f586cb1-d380-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 09:04:08 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 67D92229E5;
 Wed,  5 Apr 2023 07:04:08 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3834213A31;
 Wed,  5 Apr 2023 07:04:08 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id NaRUDGgdLWSUEwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:04:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f586cb1-d380-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678248; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XGeyzFUz5tXMQxwcHUnMlsPDW6IPPfsMHnMCy5JL5Tc=;
	b=hz+PU6USwBStzOH8qIlujgCh7UZln1zWU3HxdKKBJcFUaj1pIdwd6spHFZdSJpaFSJqT5R
	wikn4kZb07OYCFVKm2y78r1ZqeS1PweDdHmGtfO1ZKjw0Q2jYT3LkErSnWa52fsf7XAqYc
	GQ/ohUuVYPPMu6oEmN4BIBfLILN8bbY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 03/13] tools/xenstore: introduce accounting data array for per-domain values
Date: Wed,  5 Apr 2023 09:03:39 +0200
Message-Id: <20230405070349.25293-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce an accounting data array for per-domain accounting data and
use it initially for the number of nodes owned by a domain.

Make the accounting data type unsigned int, as no value is allowed to
be negative at any time.
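
The clamping logic added by this patch can be modelled stand-alone
(illustrative simplification of domain_acc_add_valid(), with a global
array instead of a struct domain):

```c
#include <assert.h>
#include <limits.h>

enum accitem { ACC_NODES, ACC_N };

static unsigned int acc[ACC_N];	/* per-domain counters, never negative */

/*
 * Clamp instead of wrapping: a result below 0 saturates to 0, one above
 * INT_MAX to INT_MAX.  A clamped (wrong) value is tolerable, as the
 * conflicting transaction will fail anyway.
 */
static unsigned int acc_add_valid(enum accitem what, int add)
{
	assert(what < ACC_N);

	if ((add < 0 && (unsigned int)-add > acc[what]) ||
	    (add > 0 && INT_MAX - acc[what] < (unsigned int)add))
		return (add < 0) ? 0 : INT_MAX;

	return acc[what] + add;
}
```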

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- remove domid parameter from domain_acc_add_chk() (Julien Grall)
- rename domain_acc_add_chk() (Julien Grall)
- modify overflow check (Julien Grall)
V4:
- fix overflow check
---
 tools/xenstore/xenstored_domain.c | 70 ++++++++++++++++++-------------
 tools/xenstore/xenstored_domain.h |  3 +-
 2 files changed, 43 insertions(+), 30 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 609a9a13ab..30fb9acec6 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -69,8 +69,8 @@ struct domain
 	/* Has domain been officially introduced? */
 	bool introduced;
 
-	/* number of entry from this domain in the store */
-	int nbentry;
+	/* Accounting data for this domain. */
+	unsigned int acc[ACC_N];
 
 	/* Amount of memory allocated for this domain. */
 	int memory;
@@ -246,7 +246,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 
 	if (keep_orphans) {
 		set_tdb_key(node->name, &key);
-		domain->nbentry--;
+		domain_nbentry_dec(NULL, domain->domid);
 		node->perms.p[0].id = priv_domid;
 		node->acc.memory = 0;
 		domain_nbentry_inc(NULL, priv_domid);
@@ -270,7 +270,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 		ret = WALK_TREE_SKIP_CHILDREN;
 	}
 
-	return domain->nbentry > 0 ? ret : WALK_TREE_SUCCESS_STOP;
+	return domain->acc[ACC_NODES] ? ret : WALK_TREE_SUCCESS_STOP;
 }
 
 static void domain_tree_remove(struct domain *domain)
@@ -278,7 +278,7 @@ static void domain_tree_remove(struct domain *domain)
 	int ret;
 	struct walk_funcs walkfuncs = { .enter = domain_tree_remove_sub };
 
-	if (domain->nbentry > 0) {
+	if (domain->acc[ACC_NODES]) {
 		ret = walk_node_tree(domain, NULL, "/", &walkfuncs, domain);
 		if (ret == WALK_TREE_ERROR_STOP)
 			syslog(LOG_ERR,
@@ -437,7 +437,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	resp = talloc_asprintf_append(resp, "%-16s: %8d\n", #t, e); \
 	if (!resp) return ENOMEM
 
-	ent(nodes, d->nbentry);
+	ent(nodes, d->acc[ACC_NODES]);
 	ent(watches, d->nbwatch);
 	ent(transactions, ta);
 	ent(outstanding, d->nboutstanding);
@@ -1047,8 +1047,27 @@ int domain_adjust_node_perms(struct node *node)
 	return 0;
 }
 
-static int domain_nbentry_add(struct connection *conn, unsigned int domid,
-			      int add, bool no_dom_alloc)
+static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
+{
+	assert(what < ARRAY_SIZE(d->acc));
+
+	if ((add < 0 && -add > d->acc[what]) ||
+	    (add > 0 && (INT_MAX - d->acc[what]) < add)) {
+		/*
+		 * In a transaction when a node is being added/removed AND the
+		 * same node has been added/removed outside the transaction in
+		 * parallel, the resulting value will be wrong. This is no
+		 * problem, as the transaction will fail due to the resulting
+		 * conflict.
+		 */
+		return (add < 0) ? 0 : INT_MAX;
+	}
+
+	return d->acc[what] + add;
+}
+
+static int domain_acc_add(struct connection *conn, unsigned int domid,
+			  enum accitem what, int add, bool no_dom_alloc)
 {
 	struct domain *d;
 	struct list_head *head;
@@ -1071,56 +1090,49 @@ static int domain_nbentry_add(struct connection *conn, unsigned int domid,
 		}
 	}
 
-	if (conn && conn->transaction) {
+	if (conn && conn->transaction && what < ACC_TR_N) {
 		head = transaction_get_changed_domains(conn->transaction);
-		ret = acc_add_changed_dom(conn->transaction, head, ACC_NODES,
+		ret = acc_add_changed_dom(conn->transaction, head, what,
 					  add, domid);
 		if (errno) {
 			fail_transaction(conn->transaction);
 			return -1;
 		}
-		/*
-		 * In a transaction when a node is being added/removed AND the
-		 * same node has been added/removed outside the transaction in
-		 * parallel, the resulting number of nodes will be wrong. This
-		 * is no problem, as the transaction will fail due to the
-		 * resulting conflict.
-		 * In the node remove case the resulting number can be even
-		 * negative, which should be avoided.
-		 */
-		return max(d->nbentry + ret, 0);
+		return domain_acc_add_valid(d, what, ret);
 	}
 
-	d->nbentry += add;
+	d->acc[what] = domain_acc_add_valid(d, what, add);
 
-	return d->nbentry;
+	return d->acc[what];
 }
 
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
-	return (domain_nbentry_add(conn, domid, 1, false) < 0) ? errno : 0;
+	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
+	       ? errno : 0;
 }
 
 int domain_nbentry_dec(struct connection *conn, unsigned int domid)
 {
-	return (domain_nbentry_add(conn, domid, -1, true) < 0) ? errno : 0;
+	return (domain_acc_add(conn, domid, ACC_NODES, -1, true) < 0)
+	       ? errno : 0;
 }
 
 int domain_nbentry_fix(unsigned int domid, int num, bool update)
 {
 	int ret;
 
-	ret = domain_nbentry_add(NULL, domid, update ? num : 0, update);
+	ret = domain_acc_add(NULL, domid, ACC_NODES, update ? num : 0, update);
 	if (ret < 0 || update)
 		return ret;
 
 	return domid_is_unprivileged(domid) ? ret + num : 0;
 }
 
-int domain_nbentry(struct connection *conn)
+unsigned int domain_nbentry(struct connection *conn)
 {
 	return domain_is_unprivileged(conn)
-	       ? domain_nbentry_add(conn, conn->id, 0, true) : 0;
+	       ? domain_acc_add(conn, conn->id, ACC_NODES, 0, true) : 0;
 }
 
 static bool domain_chk_quota(struct domain *domain, int mem)
@@ -1597,7 +1609,7 @@ static int domain_check_acc_init_sub(const void *k, void *v, void *arg)
 	 * If everything is correct incrementing the value for each node will
 	 * result in dom->nodes being 0 at the end.
 	 */
-	dom->nodes = -d->nbentry;
+	dom->nodes = -d->acc[ACC_NODES];
 
 	if (!hashtable_insert(domains, &dom->domid, dom)) {
 		talloc_free(dom);
@@ -1652,7 +1664,7 @@ static int domain_check_acc_cb(const void *k, void *v, void *arg)
 	if (!d)
 		return 0;
 
-	d->nbentry += dom->nodes;
+	d->acc[ACC_NODES] += dom->nodes;
 
 	return 0;
 }
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 40803574f6..9d05eb01da 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -27,6 +27,7 @@
 enum accitem {
 	ACC_NODES,
 	ACC_TR_N,		/* Number of elements per transaction. */
+	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
 };
 
 void handle_event(void);
@@ -77,7 +78,7 @@ int domain_alloc_permrefs(struct node_perms *perms);
 int domain_nbentry_inc(struct connection *conn, unsigned int domid);
 int domain_nbentry_dec(struct connection *conn, unsigned int domid);
 int domain_nbentry_fix(unsigned int domid, int num, bool update);
-int domain_nbentry(struct connection *conn);
+unsigned int domain_nbentry(struct connection *conn);
 int domain_memory_add(unsigned int domid, int mem, bool no_quota_check);
 
 /*
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:04:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:04:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518234.804568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBo-0003Xm-54; Wed, 05 Apr 2023 07:04:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518234.804568; Wed, 05 Apr 2023 07:04:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBo-0003XU-1p; Wed, 05 Apr 2023 07:04:16 +0000
Received: by outflank-mailman (input) for mailman id 518234;
 Wed, 05 Apr 2023 07:04:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxBn-0002UC-09
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:04:15 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 12a74feb-d380-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 09:04:14 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 013E520258;
 Wed,  5 Apr 2023 07:04:14 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C962013A31;
 Wed,  5 Apr 2023 07:04:13 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Ac+pL20dLWSiEwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:04:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12a74feb-d380-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678254; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PmKRxa2YuBmHM00MgxuyH9GXZoeQCdBxFLkVz4Hk1ys=;
	b=W+adN7dxvkw5e9v+LkLSUx0dG3FvIDFTdGoBe9KhCde/ai3BrSwTIYgCAj8J7s7tF8uXkS
	p98RJu10dza9JeLqj2N3J+HuYsat3qpyE2/okY2BJAT3AOxZZ818/HCwRF4nLZvFPKvWie
	BvgAhmbL7B8wqZFtw7FrbTN2Ct6q1kY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 04/13] tools/xenstore: add framework to commit accounting data on success only
Date: Wed,  5 Apr 2023 09:03:40 +0200
Message-Id: <20230405070349.25293-5-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of modifying accounting data and undoing those modifications in
case of an error during further processing, add a framework for
collecting the needed changes and committing them only when the whole
operation has succeeded.

This scheme can reuse large parts of the per-transaction accounting:
the changed_domain handling can be shared, but the accounting array
must be allowed to have different sizes for the two use cases.
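
The commit-on-success scheme can be sketched stand-alone (illustrative
names such as acc_queue and struct pending are not part of the patch;
a singly linked list stands in for the list.h based acc_list):

```c
#include <assert.h>
#include <stdlib.h>

enum accitem { ACC_NODES, ACC_N };

/* One queued, not yet committed set of accounting deltas. */
struct pending {
	struct pending *next;
	unsigned int domid;
	int acc[ACC_N];
};

struct conn {
	struct pending *acc_list;	/* deltas of the current request */
	unsigned int domain_acc[ACC_N];	/* committed counters */
};

static void acc_queue(struct conn *c, unsigned int domid,
		      enum accitem what, int val)
{
	struct pending *p = calloc(1, sizeof(*p));

	if (!p)
		return;
	p->domid = domid;
	p->acc[what] = val;
	p->next = c->acc_list;
	c->acc_list = p;
}

/* On error: throw the queued deltas away, counters stay untouched. */
static void acc_drop(struct conn *c)
{
	struct pending *p;

	while ((p = c->acc_list)) {
		c->acc_list = p->next;
		free(p);
	}
}

/* On success: fold every queued delta into the committed counters. */
static void acc_commit(struct conn *c)
{
	struct pending *p;

	while ((p = c->acc_list)) {
		c->acc_list = p->next;
		for (int i = 0; i < ACC_N; i++)
			c->domain_acc[i] += p->acc[i];
		free(p);
	}
}
```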

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- call acc_commit() earlier (Julien Grall)
- add assert() to acc_commit()
- use fixed sized acc array in struct changed_domain (Julien Grall)
---
 tools/xenstore/xenstored_core.c   |  9 ++++--
 tools/xenstore/xenstored_core.h   |  3 ++
 tools/xenstore/xenstored_domain.c | 53 ++++++++++++++++++++++++++++++-
 tools/xenstore/xenstored_domain.h |  5 ++-
 4 files changed, 66 insertions(+), 4 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 3ca68681e3..84335f5f3d 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1023,6 +1023,9 @@ static void send_error(struct connection *conn, int error)
 			break;
 		}
 	}
+
+	acc_drop(conn);
+
 	send_reply(conn, XS_ERROR, xsd_errors[i].errstring,
 			  strlen(xsd_errors[i].errstring) + 1);
 }
@@ -1034,6 +1037,9 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 
 	assert(type != XS_WATCH_EVENT);
 
+	conn->in = NULL;
+	acc_commit(conn);
+
 	if ( len > XENSTORE_PAYLOAD_MAX ) {
 		send_error(conn, E2BIG);
 		return;
@@ -1059,8 +1065,6 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 		}
 	}
 
-	conn->in = NULL;
-
 	/* Update relevant header fields and fill in the message body. */
 	bdata->hdr.msg.type = type;
 	bdata->hdr.msg.len = len;
@@ -2195,6 +2199,7 @@ struct connection *new_connection(const struct interface_funcs *funcs)
 	new->is_stalled = false;
 	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
+	INIT_LIST_HEAD(&new->acc_list);
 	INIT_LIST_HEAD(&new->ref_list);
 	INIT_LIST_HEAD(&new->watches);
 	INIT_LIST_HEAD(&new->transaction_list);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index c59b06551f..1f811f38cb 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -139,6 +139,9 @@ struct connection
 	struct list_head out_list;
 	uint64_t timeout_msec;
 
+	/* Not yet committed accounting data (valid if in != NULL). */
+	struct list_head acc_list;
+
 	/* Referenced requests no longer pending. */
 	struct list_head ref_list;
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 30fb9acec6..144cbafb73 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -91,6 +91,8 @@ struct domain
 	bool wrl_delay_logged;
 };
 
+#define ACC_CHD_N (ACC_TR_N < ACC_REQ_N ? ACC_REQ_N : ACC_TR_N)
+
 struct changed_domain
 {
 	/* List of all changed domains. */
@@ -100,7 +102,7 @@ struct changed_domain
 	unsigned int domid;
 
 	/* Accounting data. */
-	int acc[ACC_TR_N];
+	int acc[ACC_CHD_N];
 };
 
 static struct hashtable *domhash;
@@ -1070,6 +1072,7 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 			  enum accitem what, int add, bool no_dom_alloc)
 {
 	struct domain *d;
+	struct changed_domain *cd;
 	struct list_head *head;
 	int ret;
 
@@ -1090,6 +1093,22 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 		}
 	}
 
+	/* Temporary accounting data until final commit? */
+	if (conn && conn->in && what < ACC_REQ_N) {
+		/* Consider transaction local data. */
+		ret = 0;
+		if (conn->transaction && what < ACC_TR_N) {
+			head = transaction_get_changed_domains(
+				conn->transaction);
+			cd = acc_find_changed_domain(head, domid);
+			if (cd)
+				ret = cd->acc[what];
+		}
+		ret += acc_add_changed_dom(conn->in, &conn->acc_list, what,
+					   add, domid);
+		return errno ? -1 : domain_acc_add_valid(d, what, ret);
+	}
+
 	if (conn && conn->transaction && what < ACC_TR_N) {
 		head = transaction_get_changed_domains(conn->transaction);
 		ret = acc_add_changed_dom(conn->transaction, head, what,
@@ -1106,6 +1125,38 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 	return d->acc[what];
 }
 
+void acc_drop(struct connection *conn)
+{
+	struct changed_domain *cd;
+
+	while ((cd = list_top(&conn->acc_list, struct changed_domain, list))) {
+		list_del(&cd->list);
+		talloc_free(cd);
+	}
+}
+
+void acc_commit(struct connection *conn)
+{
+	struct changed_domain *cd;
+	enum accitem what;
+
+	/*
+	 * Make sure domain_acc_add() below can't add additional data to
+	 * the to-be-committed accounting records.
+	 */
+	assert(!conn->in);
+
+	while ((cd = list_top(&conn->acc_list, struct changed_domain, list))) {
+		list_del(&cd->list);
+		for (what = 0; what < ACC_REQ_N; what++)
+			if (cd->acc[what])
+				domain_acc_add(conn, cd->domid, what,
+					       cd->acc[what], true);
+
+		talloc_free(cd);
+	}
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 9d05eb01da..6355ad4f37 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -25,7 +25,8 @@
  * a per transaction array.
  */
 enum accitem {
-	ACC_NODES,
+	ACC_REQ_N,		/* Number of elements per request. */
+	ACC_NODES = ACC_REQ_N,
 	ACC_TR_N,		/* Number of elements per transaction. */
 	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
 };
@@ -113,6 +114,8 @@ int domain_get_quota(const void *ctx, struct connection *conn,
  * If "update" is true, "chk_quota" is ignored.
  */
 int acc_fix_domains(struct list_head *head, bool chk_quota, bool update);
+void acc_drop(struct connection *conn);
+void acc_commit(struct connection *conn);
 
 /* Write rate limiting */
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:04:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:04:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518237.804578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBt-0004Ar-HX; Wed, 05 Apr 2023 07:04:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518237.804578; Wed, 05 Apr 2023 07:04:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxBt-0004Ak-Em; Wed, 05 Apr 2023 07:04:21 +0000
Received: by outflank-mailman (input) for mailman id 518237;
 Wed, 05 Apr 2023 07:04:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxBs-0002UC-Dn
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:04:20 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 15fee7ff-d380-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 09:04:19 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 943DD229D0;
 Wed,  5 Apr 2023 07:04:19 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6637513A31;
 Wed,  5 Apr 2023 07:04:19 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id qRGQF3MdLWSuEwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:04:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15fee7ff-d380-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678259; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zpWITZM/jycFaeqzkIbP5IXI6C0MvtvZCqGsXHa9q9E=;
	b=kTpVk4DwUfgPK7n3BwU0oPzleMoDtWJ447NcVfWzVjgXB+gyyHKQnZFa0k/Vr2vrOq9/LD
	HF4LHvHcGRpZHl9AJDKHA0UZmHI75Hu6Rn8600M4F1KRoyM3hM4yAivBTD1QMT3z8+BXXv
	8+7tdsP+1izd8Xl53RPN1wneBPWYAXY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 05/13] tools/xenstore: use accounting buffering for node accounting
Date: Wed,  5 Apr 2023 09:03:41 +0200
Message-Id: <20230405070349.25293-6-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the node accounting to the accounting information buffering in
order to avoid having to undo it in case of failure.
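
The buffering scheme can be illustrated with a minimal, self-contained sketch
(all names and types here — conn_acc, acc_add, acc_commit, acc_drop — are
simplified stand-ins, not the actual xenstored code): deltas are collected per
connection while a request is processed, then either folded into the real
counters on success or simply forgotten on failure, so no undo logic is needed.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define ACC_NODES 0
#define ACC_REQ_N 1		/* number of per-request accounting items */
#define MAX_DOMS  4

struct dom_acc {
	bool used;
	int acc[ACC_REQ_N];	/* buffered deltas, not yet committed */
};

struct conn_acc {
	struct dom_acc buf[MAX_DOMS];		/* per-request buffer */
	int committed[MAX_DOMS][ACC_REQ_N];	/* the "real" counters */
};

/* Buffer a delta for one domain; nothing is visible globally yet. */
static void acc_add(struct conn_acc *c, unsigned int domid, int what, int n)
{
	c->buf[domid].used = true;
	c->buf[domid].acc[what] += n;
}

/* Request succeeded: fold all buffered deltas into the real counters. */
static void acc_commit(struct conn_acc *c)
{
	for (unsigned int d = 0; d < MAX_DOMS; d++) {
		if (!c->buf[d].used)
			continue;
		for (int w = 0; w < ACC_REQ_N; w++)
			c->committed[d][w] += c->buf[d].acc[w];
		memset(&c->buf[d], 0, sizeof(c->buf[d]));
	}
}

/* Request failed: just forget the deltas - no undo needed. */
static void acc_drop(struct conn_acc *c)
{
	memset(c->buf, 0, sizeof(c->buf));
}
```

This is why the error paths in do_set_perms() and destroy_node() can shrink:
dropping the buffer replaces the explicit "re-increment after a failed
decrement" dance.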

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   | 21 ++-------------------
 tools/xenstore/xenstored_domain.h |  4 ++--
 2 files changed, 4 insertions(+), 21 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 84335f5f3d..92a40ccf3f 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1452,7 +1452,6 @@ static void destroy_node_rm(struct connection *conn, struct node *node)
 static int destroy_node(struct connection *conn, struct node *node)
 {
 	destroy_node_rm(conn, node);
-	domain_nbentry_dec(conn, get_node_owner(node));
 
 	/*
 	 * It is not possible to easily revert the changes in a transaction.
@@ -1797,27 +1796,11 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 	old_perms = node->perms;
 	domain_nbentry_dec(conn, get_node_owner(node));
 	node->perms = perms;
-	if (domain_nbentry_inc(conn, get_node_owner(node))) {
-		node->perms = old_perms;
-		/*
-		 * This should never fail because we had a reference on the
-		 * domain before and Xenstored is single-threaded.
-		 */
-		domain_nbentry_inc(conn, get_node_owner(node));
+	if (domain_nbentry_inc(conn, get_node_owner(node)))
 		return ENOMEM;
-	}
 
-	if (write_node(conn, node, false)) {
-		int saved_errno = errno;
-
-		domain_nbentry_dec(conn, get_node_owner(node));
-		node->perms = old_perms;
-		/* No failure possible as above. */
-		domain_nbentry_inc(conn, get_node_owner(node));
-
-		errno = saved_errno;
+	if (write_node(conn, node, false))
 		return errno;
-	}
 
 	fire_watches(conn, ctx, name, node, false, &old_perms);
 	send_ack(conn, XS_SET_PERMS);
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 6355ad4f37..e669f57b80 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -25,9 +25,9 @@
  * a per transaction array.
  */
 enum accitem {
+	ACC_NODES,
 	ACC_REQ_N,		/* Number of elements per request. */
-	ACC_NODES = ACC_REQ_N,
-	ACC_TR_N,		/* Number of elements per transaction. */
+	ACC_TR_N = ACC_REQ_N,	/* Number of elements per transaction. */
 	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
 };
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:04:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:04:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518241.804588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxC0-0004lL-QS; Wed, 05 Apr 2023 07:04:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518241.804588; Wed, 05 Apr 2023 07:04:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxC0-0004l9-NG; Wed, 05 Apr 2023 07:04:28 +0000
Received: by outflank-mailman (input) for mailman id 518241;
 Wed, 05 Apr 2023 07:04:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxBz-0002Et-6Z
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:04:27 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 195505f4-d380-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 09:04:25 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2D16D229ED;
 Wed,  5 Apr 2023 07:04:25 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 0042613A31;
 Wed,  5 Apr 2023 07:04:24 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id rYEsOngdLWS3EwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:04:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 195505f4-d380-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678265; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KyUsGkw8xvAHU6kBBf8lvAgf5AlXfalzLddBzhYP/rg=;
	b=hS7Q1J2iA2PuHiGjs8dolF9hQLFzMKmnotf6iaAmoucNaCMbjfrmJqVN10kvlHLumpgt/M
	ncomGyud1nlnq0qh91uVHCijkS+y45dNDfUPzKrtysap+3mypWr4/sKPFJyz/WwzMvg+RY
	6XxHAUO7n0xo+N+DCuNtj6ZUlhAq89M=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 06/13] tools/xenstore: add current connection to domain_memory_add() parameters
Date: Wed,  5 Apr 2023 09:03:42 +0200
Message-Id: <20230405070349.25293-7-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to enable switching memory accounting to the generic array-based
accounting, add the current connection to the parameters of
domain_memory_add().

This requires adding the connection to the parameters of some other
functions, too.
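
The _chk/_nochk split around the new signature can be sketched as follows
(a simplified illustration with made-up quota values; the real functions
operate on the per-domain accounting, not a field in struct connection):
quota checking only ever applies to positive adjustments, while freeing
memory or undoing in an error path must never fail.

```c
#include <assert.h>
#include <stdbool.h>
#include <errno.h>

#define QUOTA_HARD 100		/* illustrative hard quota */

struct connection {
	unsigned int id;
	int mem;		/* stand-in for the accounted memory */
};

/* Simplified stand-in for domain_memory_add(): the connection is now part
 * of the signature so buffered accounting can later hang off it. */
static int domain_memory_add(struct connection *conn, unsigned int domid,
			     int mem, bool no_quota_check)
{
	(void)domid;
	if (!no_quota_check && conn->mem + mem > QUOTA_HARD)
		return ENOMEM;
	conn->mem += mem;
	return 0;
}

/* Quota-checked variant: only for positive adjustments. */
static int domain_memory_add_chk(struct connection *conn, unsigned int domid,
				 int mem)
{
	return domain_memory_add(conn, domid, mem, false);
}

/* Unchecked variant: lowering usage or undoing must always succeed. */
static void domain_memory_add_nochk(struct connection *conn,
				    unsigned int domid, int mem)
{
	domain_memory_add(conn, domid, mem, true);
}
```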

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   | 28 ++++++++++++++++------------
 tools/xenstore/xenstored_domain.c |  3 ++-
 tools/xenstore/xenstored_domain.h | 14 +++++++++-----
 tools/xenstore/xenstored_watch.c  | 11 ++++++-----
 4 files changed, 33 insertions(+), 23 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 92a40ccf3f..88ae674523 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -246,7 +246,8 @@ static void free_buffered_data(struct buffered_data *out,
 		}
 	}
 
-	domain_memory_add_nochk(conn->id, -out->hdr.msg.len - sizeof(out->hdr));
+	domain_memory_add_nochk(conn, conn->id,
+				-out->hdr.msg.len - sizeof(out->hdr));
 
 	if (out->hdr.msg.type == XS_WATCH_EVENT) {
 		req = out->pend.req;
@@ -631,24 +632,25 @@ int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 	 * nodes to new owners.
 	 */
 	if (old_acc.memory)
-		domain_memory_add_nochk(old_domid,
+		domain_memory_add_nochk(conn, old_domid,
 					-old_acc.memory - key->dsize);
-	ret = domain_memory_add(new_domid, data->dsize + key->dsize,
-				no_quota_check);
+	ret = domain_memory_add(conn, new_domid,
+				data->dsize + key->dsize, no_quota_check);
 	if (ret) {
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
-			domain_memory_add_nochk(old_domid,
+			domain_memory_add_nochk(conn, old_domid,
 						old_acc.memory + key->dsize);
 		return ret;
 	}
 
 	/* TDB should set errno, but doesn't even set ecode AFAICT. */
 	if (tdb_store(tdb_ctx, *key, *data, TDB_REPLACE) != 0) {
-		domain_memory_add_nochk(new_domid, -data->dsize - key->dsize);
+		domain_memory_add_nochk(conn, new_domid,
+					-data->dsize - key->dsize);
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
-			domain_memory_add_nochk(old_domid,
+			domain_memory_add_nochk(conn, old_domid,
 						old_acc.memory + key->dsize);
 		errno = EIO;
 		return errno;
@@ -683,7 +685,7 @@ int do_tdb_delete(struct connection *conn, TDB_DATA *key,
 
 	if (acc->memory) {
 		domid = get_acc_domid(conn, key, acc->domid);
-		domain_memory_add_nochk(domid, -acc->memory - key->dsize);
+		domain_memory_add_nochk(conn, domid, -acc->memory - key->dsize);
 	}
 
 	return 0;
@@ -1055,11 +1057,13 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 	if (len <= DEFAULT_BUFFER_SIZE) {
 		bdata->buffer = bdata->default_buffer;
 		/* Don't check quota, path might be used for returning error. */
-		domain_memory_add_nochk(conn->id, len + sizeof(bdata->hdr));
+		domain_memory_add_nochk(conn, conn->id,
+					len + sizeof(bdata->hdr));
 	} else {
 		bdata->buffer = talloc_array(bdata, char, len);
 		if (!bdata->buffer ||
-		    domain_memory_add_chk(conn->id, len + sizeof(bdata->hdr))) {
+		    domain_memory_add_chk(conn, conn->id,
+					  len + sizeof(bdata->hdr))) {
 			send_error(conn, ENOMEM);
 			return;
 		}
@@ -1122,7 +1126,7 @@ void send_event(struct buffered_data *req, struct connection *conn,
 		}
 	}
 
-	if (domain_memory_add_chk(conn->id, len + sizeof(bdata->hdr))) {
+	if (domain_memory_add_chk(conn, conn->id, len + sizeof(bdata->hdr))) {
 		talloc_free(bdata);
 		return;
 	}
@@ -3322,7 +3326,7 @@ static void add_buffered_data(struct buffered_data *bdata,
 	 * be smaller. So ignore it. The limit will be applied for any resource
 	 * after the state has been fully restored.
 	 */
-	domain_memory_add_nochk(conn->id, len + sizeof(bdata->hdr));
+	domain_memory_add_nochk(conn, conn->id, len + sizeof(bdata->hdr));
 }
 
 void read_state_buffered_data(const void *ctx, struct connection *conn,
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 144cbafb73..f94812610a 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1235,7 +1235,8 @@ static bool domain_chk_quota(struct domain *domain, int mem)
 	return false;
 }
 
-int domain_memory_add(unsigned int domid, int mem, bool no_quota_check)
+int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
+		      bool no_quota_check)
 {
 	struct domain *domain;
 
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index e669f57b80..5cfd730cf6 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -80,25 +80,29 @@ int domain_nbentry_inc(struct connection *conn, unsigned int domid);
 int domain_nbentry_dec(struct connection *conn, unsigned int domid);
 int domain_nbentry_fix(unsigned int domid, int num, bool update);
 unsigned int domain_nbentry(struct connection *conn);
-int domain_memory_add(unsigned int domid, int mem, bool no_quota_check);
+int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
+		      bool no_quota_check);
 
 /*
  * domain_memory_add_chk(): to be used when memory quota should be checked.
  * Not to be used when specifying a negative mem value, as lowering the used
  * memory should always be allowed.
  */
-static inline int domain_memory_add_chk(unsigned int domid, int mem)
+static inline int domain_memory_add_chk(struct connection *conn,
+					unsigned int domid, int mem)
 {
-	return domain_memory_add(domid, mem, false);
+	return domain_memory_add(conn, domid, mem, false);
 }
+
 /*
  * domain_memory_add_nochk(): to be used when memory quota should not be
  * checked, e.g. when lowering memory usage, or in an error case for undoing
  * a previous memory adjustment.
  */
-static inline void domain_memory_add_nochk(unsigned int domid, int mem)
+static inline void domain_memory_add_nochk(struct connection *conn,
+					   unsigned int domid, int mem)
 {
-	domain_memory_add(domid, mem, true);
+	domain_memory_add(conn, domid, mem, true);
 }
 void domain_watch_inc(struct connection *conn);
 void domain_watch_dec(struct connection *conn);
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 8ad0229df6..e30cd89be3 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -199,7 +199,7 @@ static struct watch *add_watch(struct connection *conn, char *path, char *token,
 	watch->token = talloc_strdup(watch, token);
 	if (!watch->node || !watch->token)
 		goto nomem;
-	if (domain_memory_add(conn->id, strlen(path) + strlen(token),
+	if (domain_memory_add(conn, conn->id, strlen(path) + strlen(token),
 			      no_quota_check))
 		goto nomem;
 
@@ -274,8 +274,9 @@ int do_unwatch(const void *ctx, struct connection *conn,
 	list_for_each_entry(watch, &conn->watches, list) {
 		if (streq(watch->node, node) && streq(watch->token, vec[1])) {
 			list_del(&watch->list);
-			domain_memory_add_nochk(conn->id, -strlen(watch->node) -
-							  strlen(watch->token));
+			domain_memory_add_nochk(conn, conn->id,
+						-strlen(watch->node) -
+						strlen(watch->token));
 			talloc_free(watch);
 			domain_watch_dec(conn);
 			send_ack(conn, XS_UNWATCH);
@@ -291,8 +292,8 @@ void conn_delete_all_watches(struct connection *conn)
 
 	while ((watch = list_top(&conn->watches, struct watch, list))) {
 		list_del(&watch->list);
-		domain_memory_add_nochk(conn->id, -strlen(watch->node) -
-						  strlen(watch->token));
+		domain_memory_add_nochk(conn, conn->id, -strlen(watch->node) -
+							strlen(watch->token));
 		talloc_free(watch);
 		domain_watch_dec(conn);
 	}
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:04:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:04:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518244.804598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxC6-0005Fx-1l; Wed, 05 Apr 2023 07:04:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518244.804598; Wed, 05 Apr 2023 07:04:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxC5-0005F8-U0; Wed, 05 Apr 2023 07:04:33 +0000
Received: by outflank-mailman (input) for mailman id 518244;
 Wed, 05 Apr 2023 07:04:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxC3-0002UC-Rz
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:04:31 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1cacb26e-d380-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 09:04:31 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BCE9D229FF;
 Wed,  5 Apr 2023 07:04:30 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 8F10E13A31;
 Wed,  5 Apr 2023 07:04:30 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id whOiIX4dLWTHEwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:04:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1cacb26e-d380-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678270; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yXnL6y0xe3ujnGIrpffgz9CI+U2tC4w1dZmiNxkMXsw=;
	b=W7D+JLybanPWNvnVDx6eoSXB/U3WJR6A7WZ9HEE3UUKI5O26jvf9gW0MfxlUvBkM7mydAw
	llKH5a7EwZZXjTgpPQxN7NF4APfwC9TjembZO5rkzqIuoTkLjxb6ghyYN2iBRTzH/hlIyU
	os3novb/1onmO6ME64axrSDcIZqN0po=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 07/13] tools/xenstore: use accounting data array for per-domain values
Date: Wed,  5 Apr 2023 09:03:43 +0200
Message-Id: <20230405070349.25293-8-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the accounting of per-domain usage of Xenstore memory, watches, and
outstanding requests to the array-based mechanism.
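
The core idea can be sketched in a few lines (a simplified illustration; the
real domain_acc_add() also handles connection lookup, quota signs, and the
per-request buffering): the dedicated nbwatch/nboutstanding/memory fields
collapse into one counter array indexed by the accitem enum, so a single
add/subtract/query helper serves every accounting item.

```c
#include <assert.h>

enum accitem { ACC_NODES, ACC_WATCH, ACC_OUTST, ACC_MEM, ACC_N };

struct domain {
	int acc[ACC_N];		/* replaces the individual counters */
};

/* Add "n" (possibly negative) to one item and return the new value;
 * with n == 0 this doubles as a pure query. */
static int domain_acc_add(struct domain *d, enum accitem what, int n)
{
	d->acc[what] += n;
	return d->acc[what];
}
```

With this in place, domain_watch_inc()/dec() and the outstanding-request
helpers become one-line wrappers around the same function.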

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   |   8 +--
 tools/xenstore/xenstored_domain.c | 111 +++++++++++-------------------
 tools/xenstore/xenstored_domain.h |  10 +--
 3 files changed, 52 insertions(+), 77 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 88ae674523..67aa7c1578 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -255,7 +255,7 @@ static void free_buffered_data(struct buffered_data *out,
 			req->pend.ref.event_cnt--;
 			if (!req->pend.ref.event_cnt && !req->on_out_list) {
 				if (req->on_ref_list) {
-					domain_outstanding_domid_dec(
+					domain_outstanding_dec(conn,
 						req->pend.ref.domid);
 					list_del(&req->list);
 				}
@@ -271,7 +271,7 @@ static void free_buffered_data(struct buffered_data *out,
 		out->on_ref_list = true;
 		return;
 	} else
-		domain_outstanding_dec(conn);
+		domain_outstanding_dec(conn, conn->id);
 
 	talloc_free(out);
 }
@@ -1077,7 +1077,7 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 	/* Queue for later transmission. */
 	list_add_tail(&bdata->list, &conn->out_list);
 	bdata->on_out_list = true;
-	domain_outstanding_inc(conn);
+	domain_outstanding_inc(conn, conn->id);
 }
 
 /*
@@ -3320,7 +3320,7 @@ static void add_buffered_data(struct buffered_data *bdata,
 	 * request have been delivered.
 	 */
 	if (bdata->hdr.msg.type != XS_WATCH_EVENT)
-		domain_outstanding_inc(conn);
+		domain_outstanding_inc(conn, conn->id);
 	/*
 	 * We are restoring the state after Live-Update and the new quota may
 	 * be smaller. So ignore it. The limit will be applied for any resource
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f94812610a..22836ae881 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -72,19 +72,12 @@ struct domain
 	/* Accounting data for this domain. */
 	unsigned int acc[ACC_N];
 
-	/* Amount of memory allocated for this domain. */
-	int memory;
+	/* Memory quota data for this domain. */
 	bool soft_quota_reported;
 	bool hard_quota_reported;
 	time_t mem_last_msg;
 #define MEM_WARN_MINTIME_SEC 10
 
-	/* number of watch for this domain */
-	int nbwatch;
-
-	/* Number of outstanding requests. */
-	int nboutstanding;
-
 	/* write rate limit */
 	wrl_creditt wrl_credit; /* [ -wrl_config_writecost, +_dburst ] */
 	struct wrl_timestampt wrl_timestamp;
@@ -202,14 +195,15 @@ static bool domain_can_write(struct connection *conn)
 
 static bool domain_can_read(struct connection *conn)
 {
-	struct xenstore_domain_interface *intf = conn->domain->interface;
+	struct domain *domain = conn->domain;
+	struct xenstore_domain_interface *intf = domain->interface;
 
 	if (domain_is_unprivileged(conn)) {
-		if (conn->domain->wrl_credit < 0)
+		if (domain->wrl_credit < 0)
 			return false;
-		if (conn->domain->nboutstanding >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST] >= quota_req_outstanding)
 			return false;
-		if (conn->domain->memory >= quota_memory_per_domain_hard &&
+		if (domain->acc[ACC_MEM] >= quota_memory_per_domain_hard &&
 		    quota_memory_per_domain_hard)
 			return false;
 	}
@@ -440,10 +434,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	if (!resp) return ENOMEM
 
 	ent(nodes, d->acc[ACC_NODES]);
-	ent(watches, d->nbwatch);
+	ent(watches, d->acc[ACC_WATCH]);
 	ent(transactions, ta);
-	ent(outstanding, d->nboutstanding);
-	ent(memory, d->memory);
+	ent(outstanding, d->acc[ACC_OUTST]);
+	ent(memory, d->acc[ACC_MEM]);
 
 #undef ent
 
@@ -1186,14 +1180,16 @@ unsigned int domain_nbentry(struct connection *conn)
 	       ? domain_acc_add(conn, conn->id, ACC_NODES, 0, true) : 0;
 }
 
-static bool domain_chk_quota(struct domain *domain, int mem)
+static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 {
 	time_t now;
+	struct domain *domain;
 
-	if (!domain || !domid_is_unprivileged(domain->domid) ||
-	    (domain->conn && domain->conn->is_ignored))
+	if (!conn || !domid_is_unprivileged(conn->id) ||
+	    conn->is_ignored)
 		return false;
 
+	domain = conn->domain;
 	now = time(NULL);
 
 	if (mem >= quota_memory_per_domain_hard &&
@@ -1238,80 +1234,57 @@ static bool domain_chk_quota(struct domain *domain, int mem)
 int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
 		      bool no_quota_check)
 {
-	struct domain *domain;
+	int ret;
 
-	domain = find_domain_struct(domid);
-	if (domain) {
-		/*
-		 * domain_chk_quota() will print warning and also store whether
-		 * the soft/hard quota has been hit. So check no_quota_check
-		 * *after*.
-		 */
-		if (domain_chk_quota(domain, domain->memory + mem) &&
-		    !no_quota_check)
-			return ENOMEM;
-		domain->memory += mem;
-	} else {
-		/*
-		 * The domain the memory is to be accounted for should always
-		 * exist, as accounting is done either for a domain related to
-		 * the current connection, or for the domain owning a node
-		 * (which is always existing, as the owner of the node is
-		 * tested to exist and deleted or replaced by domid 0 if not).
-		 * So not finding the related domain MUST be an error in the
-		 * data base.
-		 */
-		errno = ENOENT;
-		corrupt(NULL, "Accounting called for non-existing domain %u\n",
-			domid);
-		return ENOENT;
-	}
+	ret = domain_acc_add(conn, domid, ACC_MEM, 0, true);
+	if (ret < 0)
+		return -ret;
+
+	/*
+	 * domain_chk_quota() will print a warning and also store whether the
+	 * soft/hard quota has been hit. So check no_quota_check *after*.
+	 */
+	if (domain_chk_quota(conn, ret + mem) && !no_quota_check)
+		return ENOMEM;
+
+	/*
+	 * The domain the memory is to be accounted for should always exist,
+	 * as accounting is done either for a domain related to the current
+	 * connection, or for the domain owning a node (which is always
+	 * existing, as the owner of the node is tested to exist and deleted
+	 * or replaced by domid 0 if not).
+	 * So not finding the related domain MUST be an error in the data base.
+	 */
+	domain_acc_add(conn, domid, ACC_MEM, mem, true);
 
 	return 0;
 }
 
 void domain_watch_inc(struct connection *conn)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nbwatch++;
+	domain_acc_add(conn, conn->id, ACC_WATCH, 1, true);
 }
 
 void domain_watch_dec(struct connection *conn)
 {
-	if (!conn || !conn->domain)
-		return;
-	if (conn->domain->nbwatch)
-		conn->domain->nbwatch--;
+	domain_acc_add(conn, conn->id, ACC_WATCH, -1, true);
 }
 
 int domain_watch(struct connection *conn)
 {
 	return (domain_is_unprivileged(conn))
-		? conn->domain->nbwatch
+		? domain_acc_add(conn, conn->id, ACC_WATCH, 0, true)
 		: 0;
 }
 
-void domain_outstanding_inc(struct connection *conn)
+void domain_outstanding_inc(struct connection *conn, unsigned int domid)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nboutstanding++;
+	domain_acc_add(conn, domid, ACC_OUTST, 1, true);
 }
 
-void domain_outstanding_dec(struct connection *conn)
+void domain_outstanding_dec(struct connection *conn, unsigned int domid)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nboutstanding--;
-}
-
-void domain_outstanding_domid_dec(unsigned int domid)
-{
-	struct domain *d = find_domain_by_domid(domid);
-
-	if (d)
-		d->nboutstanding--;
+	domain_acc_add(conn, domid, ACC_OUTST, -1, true);
 }
 
 static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 5cfd730cf6..0d61bf4344 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -28,7 +28,10 @@ enum accitem {
 	ACC_NODES,
 	ACC_REQ_N,		/* Number of elements per request. */
 	ACC_TR_N = ACC_REQ_N,	/* Number of elements per transaction. */
-	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
+	ACC_WATCH = ACC_TR_N,
+	ACC_OUTST,
+	ACC_MEM,
+	ACC_N,			/* Number of elements per domain. */
 };
 
 void handle_event(void);
@@ -107,9 +110,8 @@ static inline void domain_memory_add_nochk(struct connection *conn,
 void domain_watch_inc(struct connection *conn);
 void domain_watch_dec(struct connection *conn);
 int domain_watch(struct connection *conn);
-void domain_outstanding_inc(struct connection *conn);
-void domain_outstanding_dec(struct connection *conn);
-void domain_outstanding_domid_dec(unsigned int domid);
+void domain_outstanding_inc(struct connection *conn, unsigned int domid);
+void domain_outstanding_dec(struct connection *conn, unsigned int domid);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:04:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:04:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518248.804607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxCA-0005jl-AN; Wed, 05 Apr 2023 07:04:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518248.804607; Wed, 05 Apr 2023 07:04:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxCA-0005jc-6c; Wed, 05 Apr 2023 07:04:38 +0000
Received: by outflank-mailman (input) for mailman id 518248;
 Wed, 05 Apr 2023 07:04:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxC9-0002UC-5c
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:04:37 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 200ce63f-d380-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 09:04:36 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 7340C20592;
 Wed,  5 Apr 2023 07:04:36 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3D6E213A31;
 Wed,  5 Apr 2023 07:04:36 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id TNk3DYQdLWTWEwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:04:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 200ce63f-d380-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678276; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0FVX/N0LxAMOywC2jlDscHABd/v4BQEj8zV+MLHaj68=;
	b=kkFJhyYeVvEhEobwXdCyZzLJpbdI1PB+Q0ceqAJQc3IV+ScoJrBQheTkPhfSOhU8j3Zi3j
	3ilgCE1AnCRHhreOXN4+uAJwR+yzepxIshlSbqaD/7DaYRvHBtTmzKm2pa5Bk8NRtaU9Sq
	BdXYeOhCOuCgRFvcGQSoWQSQhTyy5rs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 08/13] tools/xenstore: add accounting trace support
Date: Wed,  5 Apr 2023 09:03:44 +0200
Message-Id: <20230405070349.25293-9-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new trace switch "acc" and the related trace calls.

The "acc" switch is off per default.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c   |  2 +-
 tools/xenstore/xenstored_core.h   |  1 +
 tools/xenstore/xenstored_domain.c | 10 ++++++++++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 67aa7c1578..bd816ce709 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2746,7 +2746,7 @@ static void set_quota(const char *arg, bool soft)
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
 const char *const trace_switches[] = {
-	"obj", "io", "wrl",
+	"obj", "io", "wrl", "acc",
 	NULL
 };
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 1f811f38cb..3e0734a6c6 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -302,6 +302,7 @@ extern unsigned int trace_flags;
 #define TRACE_OBJ	0x00000001
 #define TRACE_IO	0x00000002
 #define TRACE_WRL	0x00000004
+#define TRACE_ACC	0x00000008
 extern const char *const trace_switches[];
 int set_trace_switch(const char *arg);
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 22836ae881..1caa60bb14 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -540,6 +540,12 @@ static struct domain *find_domain_by_domid(unsigned int domid)
 	return (d && d->introduced) ? d : NULL;
 }
 
+#define trace_acc(...)				\
+do {						\
+	if (trace_flags & TRACE_ACC)		\
+		trace("acc: " __VA_ARGS__);	\
+} while (0)
+
 int acc_fix_domains(struct list_head *head, bool chk_quota, bool update)
 {
 	struct changed_domain *cd;
@@ -603,6 +609,8 @@ static int acc_add_changed_dom(const void *ctx, struct list_head *head,
 		return 0;
 
 	errno = 0;
+	trace_acc("local change domid %u: what=%u %d add %d\n", domid, what,
+		  cd->acc[what], val);
 	cd->acc[what] += val;
 
 	return cd->acc[what];
@@ -1114,6 +1122,8 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 		return domain_acc_add_valid(d, what, ret);
 	}
 
+	trace_acc("global change domid %u: what=%u %u add %d\n", domid, what,
+		  d->acc[what], add);
 	d->acc[what] = domain_acc_add_valid(d, what, add);
 
 	return d->acc[what];
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:09:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:09:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518261.804629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxGM-0007Z9-Lm; Wed, 05 Apr 2023 07:08:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518261.804629; Wed, 05 Apr 2023 07:08:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxGM-0007Xk-FN; Wed, 05 Apr 2023 07:08:58 +0000
Received: by outflank-mailman (input) for mailman id 518261;
 Wed, 05 Apr 2023 07:08:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxCR-0002Et-GC
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:04:55 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a24292f-d380-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 09:04:53 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 5B43C229D0;
 Wed,  5 Apr 2023 07:04:53 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 183F713A31;
 Wed,  5 Apr 2023 07:04:53 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id y9STBJUdLWT9EwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:04:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a24292f-d380-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678293; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2ZcKfGTywyhrwZZUE2CTmIRIHQZRyLmi1TNVZ6l6MAw=;
	b=J85PbfdL8z3iCw2B4Q1bk3oMnyCK50oEidaoR42ZPYCoHHWj/yqLNNp6LOgrjYls97N18/
	HkhQHaOPux3ZgThOWGXZFOewgIISa3eegqDDkkgUuomSi7g5tmcBvk97MQJ7ZTrK/9OCsv
	JTZWK5b6oNvQifldTqW7itnzi5IUMd8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 11/13] tools/xenstore: remember global and per domain max accounting values
Date: Wed,  5 Apr 2023 09:03:47 +0200
Message-Id: <20230405070349.25293-12-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Save the maximum values of the different accounting data seen per
domain and (for unprivileged domains) globally, and print those
values via the xenstore-control quota command. Add a sub-command for
resetting the global maximum values seen.

This should help when deciding how to set the related quotas.
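
The max-tracking itself boils down to updating a shadow maximum whenever a counter changes. A minimal sketch of the idea, simplified to a single domain with no overflow clamping (names mirror the patch but the code is illustrative):

```c
/* Per-counter current value plus the maximum ever seen, as in the
 * patch's struct acc; acc_global_max mirrors the global maximum. */
enum accitem { ACC_NODES, ACC_N };

static struct { unsigned int val; unsigned int max; } acc[ACC_N];
static unsigned int acc_global_max[ACC_N];

static unsigned int acc_add(enum accitem what, int add)
{
	unsigned int val = acc[what].val + add;

	if (val > acc[what].max)
		acc[what].max = val;		/* per-domain maximum */
	if (val > acc_global_max[what])
		acc_global_max[what] = val;	/* global maximum */
	acc[what].val = val;

	return val;
}
```

Resetting the global maximum, as "quota max -r" does, then simply means zeroing acc_global_max and re-seeding the per-domain maxima from the current values.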

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/xenstore.txt             |   5 +-
 tools/xenstore/xenstored_control.c |  22 ++++++-
 tools/xenstore/xenstored_domain.c  | 100 +++++++++++++++++++++++------
 tools/xenstore/xenstored_domain.h  |   2 +
 4 files changed, 108 insertions(+), 21 deletions(-)

diff --git a/docs/misc/xenstore.txt b/docs/misc/xenstore.txt
index d807ef0709..38015835b1 100644
--- a/docs/misc/xenstore.txt
+++ b/docs/misc/xenstore.txt
@@ -426,7 +426,7 @@ CONTROL			<command>|[<parameters>|]
 	print|<string>
 		print <string> to syslog (xenstore runs as daemon) or
 		to console (xenstore runs as stubdom)
-	quota|[set <name> <val>|<domid>]
+	quota|[set <name> <val>|<domid>|max [-r]]
 		without parameters: print the current quota settings
 		with "set <name> <val>": set the quota <name> to new value
 		<val> (The admin should make sure all the domain usage is
@@ -435,6 +435,9 @@ CONTROL			<command>|[<parameters>|]
 		violating the new quota setting isn't increased further)
 		with "<domid>": print quota related accounting data for
 		the domain <domid>
+		with "max [-r]": show global per-domain maximum values of all
+		unprivileged domains, optionally reset the values by adding
+		"-r"
 	quota-soft|[set <name> <val>]
 		like the "quota" command, but for soft-quota.
 	help			<supported-commands>
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index cbd62556c3..a2ba64a15c 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -306,6 +306,22 @@ static int quota_get(const void *ctx, struct connection *conn,
 	return domain_get_quota(ctx, conn, atoi(vec[0]));
 }
 
+static int quota_max(const void *ctx, struct connection *conn,
+		     char **vec, int num)
+{
+	if (num > 1)
+		return EINVAL;
+
+	if (num == 1) {
+		if (!strcmp(vec[0], "-r"))
+			domain_reset_global_acc();
+		else
+			return EINVAL;
+	}
+
+	return domain_max_global_acc(ctx, conn);
+}
+
 static int do_control_quota(const void *ctx, struct connection *conn,
 			    char **vec, int num)
 {
@@ -315,6 +331,9 @@ static int do_control_quota(const void *ctx, struct connection *conn,
 	if (!strcmp(vec[0], "set"))
 		return quota_set(ctx, conn, vec + 1, num - 1, hard_quotas);
 
+	if (!strcmp(vec[0], "max"))
+		return quota_max(ctx, conn, vec + 1, num - 1);
+
 	return quota_get(ctx, conn, vec, num);
 }
 
@@ -978,7 +997,8 @@ static struct cmd_s cmds[] = {
 	{ "memreport", do_control_memreport, "[<file>]" },
 #endif
 	{ "print", do_control_print, "<string>" },
-	{ "quota", do_control_quota, "[set <name> <val>|<domid>]" },
+	{ "quota", do_control_quota,
+		"[set <name> <val>|<domid>|max [-r]]" },
 	{ "quota-soft", do_control_quota_s, "[set <name> <val>]" },
 	{ "help", do_control_help, "" },
 };
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 40bcc1dbfa..d21f31da92 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -43,6 +43,8 @@ static evtchn_port_t virq_port;
 
 xenevtchn_handle *xce_handle = NULL;
 
+static unsigned int acc_global_max[ACC_N];
+
 struct domain
 {
 	/* The id of this domain */
@@ -70,7 +72,10 @@ struct domain
 	bool introduced;
 
 	/* Accounting data for this domain. */
-	unsigned int acc[ACC_N];
+	struct acc {
+		unsigned int val;
+		unsigned int max;
+	} acc[ACC_N];
 
 	/* Memory quota data for this domain. */
 	bool soft_quota_reported;
@@ -201,9 +206,9 @@ static bool domain_can_read(struct connection *conn)
 	if (domain_is_unprivileged(conn)) {
 		if (domain->wrl_credit < 0)
 			return false;
-		if (domain->acc[ACC_OUTST] >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST].val >= quota_req_outstanding)
 			return false;
-		if (domain->acc[ACC_MEM] >= quota_memory_per_domain_hard &&
+		if (domain->acc[ACC_MEM].val >= quota_memory_per_domain_hard &&
 		    quota_memory_per_domain_hard)
 			return false;
 	}
@@ -266,7 +271,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 		ret = WALK_TREE_SKIP_CHILDREN;
 	}
 
-	return domain->acc[ACC_NODES] ? ret : WALK_TREE_SUCCESS_STOP;
+	return domain->acc[ACC_NODES].val ? ret : WALK_TREE_SUCCESS_STOP;
 }
 
 static void domain_tree_remove(struct domain *domain)
@@ -274,7 +279,7 @@ static void domain_tree_remove(struct domain *domain)
 	int ret;
 	struct walk_funcs walkfuncs = { .enter = domain_tree_remove_sub };
 
-	if (domain->acc[ACC_NODES]) {
+	if (domain->acc[ACC_NODES].val) {
 		ret = walk_node_tree(domain, NULL, "/", &walkfuncs, domain);
 		if (ret == WALK_TREE_ERROR_STOP)
 			syslog(LOG_ERR,
@@ -428,14 +433,41 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8d\n", #t, e); \
+	resp = talloc_asprintf_append(resp, "%-16s: %8u (max: %8u)\n", #t, \
+				      d->acc[e].val, d->acc[e].max); \
+	if (!resp) return ENOMEM
+
+	ent(nodes, ACC_NODES);
+	ent(watches, ACC_WATCH);
+	ent(transactions, ACC_TRANS);
+	ent(outstanding, ACC_OUTST);
+	ent(memory, ACC_MEM);
+
+#undef ent
+
+	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
+
+	return 0;
+}
+
+int domain_max_global_acc(const void *ctx, struct connection *conn)
+{
+	char *resp;
+
+	resp = talloc_asprintf(ctx, "Max. seen accounting values:\n");
+	if (!resp)
+		return ENOMEM;
+
+#define ent(t, e) \
+	resp = talloc_asprintf_append(resp, "%-16s: %8u\n", #t,   \
+				      acc_global_max[e]);         \
 	if (!resp) return ENOMEM
 
-	ent(nodes, d->acc[ACC_NODES]);
-	ent(watches, d->acc[ACC_WATCH]);
-	ent(transactions, d->acc[ACC_TRANS]);
-	ent(outstanding, d->acc[ACC_OUTST]);
-	ent(memory, d->acc[ACC_MEM]);
+	ent(nodes, ACC_NODES);
+	ent(watches, ACC_WATCH);
+	ent(transactions, ACC_TRANS);
+	ent(outstanding, ACC_OUTST);
+	ent(memory, ACC_MEM);
 
 #undef ent
 
@@ -1051,10 +1083,12 @@ int domain_adjust_node_perms(struct node *node)
 
 static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 {
+	unsigned int val;
+
 	assert(what < ARRAY_SIZE(d->acc));
 
-	if ((add < 0 && -add > d->acc[what]) ||
-	    (add > 0 && (INT_MAX - d->acc[what]) < add)) {
+	if ((add < 0 && -add > d->acc[what].val) ||
+	    (add > 0 && (INT_MAX - d->acc[what].val) < add)) {
 		/*
 		 * In a transaction when a node is being added/removed AND the
 		 * same node has been added/removed outside the transaction in
@@ -1065,7 +1099,13 @@ static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 		return (add < 0) ? 0 : INT_MAX;
 	}
 
-	return d->acc[what] + add;
+	val = d->acc[what].val + add;
+	if (val > d->acc[what].max)
+		d->acc[what].max = val;
+	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
+		acc_global_max[what] = val;
+
+	return val;
 }
 
 static int domain_acc_add(struct connection *conn, unsigned int domid,
@@ -1121,10 +1161,10 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 	}
 
 	trace_acc("global change domid %u: what=%u %u add %d\n", domid, what,
-		  d->acc[what], add);
-	d->acc[what] = domain_acc_add_valid(d, what, add);
+		  d->acc[what].val, add);
+	d->acc[what].val = domain_acc_add_valid(d, what, add);
 
-	return d->acc[what];
+	return d->acc[what].val;
 }
 
 void acc_drop(struct connection *conn)
@@ -1159,6 +1199,28 @@ void acc_commit(struct connection *conn)
 	}
 }
 
+static int domain_reset_global_acc_sub(const void *k, void *v, void *arg)
+{
+	struct domain *d = v;
+	unsigned int i;
+
+	for (i = 0; i < ACC_N; i++)
+		d->acc[i].max = d->acc[i].val;
+
+	return 0;
+}
+
+void domain_reset_global_acc(void)
+{
+	unsigned int i;
+
+	for (i = 0; i < ACC_N; i++)
+		acc_global_max[i] = 0;
+
+	/* Set current max values seen. */
+	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
@@ -1659,7 +1721,7 @@ static int domain_check_acc_init_sub(const void *k, void *v, void *arg)
 	 * If everything is correct incrementing the value for each node will
 	 * result in dom->nodes being 0 at the end.
 	 */
-	dom->nodes = -d->acc[ACC_NODES];
+	dom->nodes = -d->acc[ACC_NODES].val;
 
 	if (!hashtable_insert(domains, &dom->domid, dom)) {
 		talloc_free(dom);
@@ -1714,7 +1776,7 @@ static int domain_check_acc_cb(const void *k, void *v, void *arg)
 	if (!d)
 		return 0;
 
-	d->acc[ACC_NODES] += dom->nodes;
+	d->acc[ACC_NODES].val += dom->nodes;
 
 	return 0;
 }
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index abc766f343..b55c9bcc2d 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -126,6 +126,8 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 int acc_fix_domains(struct list_head *head, bool chk_quota, bool update);
 void acc_drop(struct connection *conn);
 void acc_commit(struct connection *conn);
+int domain_max_global_acc(const void *ctx, struct connection *conn);
+void domain_reset_global_acc(void);
 
 /* Write rate limiting */
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:09:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:09:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518260.804622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxGM-0007T6-9d; Wed, 05 Apr 2023 07:08:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518260.804622; Wed, 05 Apr 2023 07:08:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxGM-0007ST-6U; Wed, 05 Apr 2023 07:08:58 +0000
Received: by outflank-mailman (input) for mailman id 518260;
 Wed, 05 Apr 2023 07:08:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxCL-0002Et-ML
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:04:49 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 26c01ddb-d380-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 09:04:47 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id AA1E22058E;
 Wed,  5 Apr 2023 07:04:47 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7B8FD13A31;
 Wed,  5 Apr 2023 07:04:47 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id aC3eHI8dLWTyEwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:04:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26c01ddb-d380-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678287; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9+svNodvtqEzi347VmDVhAeqbmPB+a6CEyNMcH7A/78=;
	b=tnoiNvEK7wVRD3t0XP4Hb9wj9DRqF9BbSyC6DqYylJERvPZPE+SKEm+XT2wW86hDrWy8xB
	/v4wAUytR0s5QLSPDUknaFsDXTnPllXZP5H+DTDJbjKcl4L5+PBWKVbQU4+YHfxlPv2odP
	WPjJIR4tQYibRtvRn3i3I3iBFXgM4ig=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 10/13] tools/xenstore: switch transaction accounting to generic accounting
Date: Wed,  5 Apr 2023 09:03:46 +0200
Message-Id: <20230405070349.25293-11-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As transaction accounting is active for unprivileged domains only, it
can easily be added to the generic per-domain accounting.
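
A hedged sketch of what the switch buys: the quota check at transaction start becomes a plain comparison against the per-domain counter. Names and the error value below are illustrative, not the exact xenstored code:

```c
static unsigned int quota_max_transaction = 10;
static unsigned int ta_count;	/* stands in for the ACC_TRANS counter */

/* Simplified shape of the do_transaction_start() quota logic. */
static int transaction_start(void)
{
	if (ta_count > quota_max_transaction)
		return -1;	/* ENOSPC in xenstored */
	ta_count++;		/* domain_transaction_inc() */
	return 0;
}

static void transaction_end(void)
{
	ta_count--;		/* domain_transaction_dec() */
}
```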

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        |  3 +--
 tools/xenstore/xenstored_core.h        |  1 -
 tools/xenstore/xenstored_domain.c      | 21 ++++++++++++++++++---
 tools/xenstore/xenstored_domain.h      |  4 ++++
 tools/xenstore/xenstored_transaction.c | 12 +++++-------
 5 files changed, 28 insertions(+), 13 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 2d481fcad9..88c569b7d5 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2083,7 +2083,7 @@ static void consider_message(struct connection *conn)
 	 * stalled. This will ignore new requests until Live-Update happened
 	 * or it was aborted.
 	 */
-	if (lu_is_pending() && conn->transaction_started == 0 &&
+	if (lu_is_pending() && conn->ta_start_time == 0 &&
 	    conn->in->hdr.msg.type == XS_TRANSACTION_START) {
 		trace("Delaying transaction start for connection %p req_id %u\n",
 		      conn, conn->in->hdr.msg.req_id);
@@ -2190,7 +2190,6 @@ struct connection *new_connection(const struct interface_funcs *funcs)
 	new->funcs = funcs;
 	new->is_ignored = false;
 	new->is_stalled = false;
-	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
 	INIT_LIST_HEAD(&new->acc_list);
 	INIT_LIST_HEAD(&new->ref_list);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 5a11dc1231..3564d85d7d 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -151,7 +151,6 @@ struct connection
 	/* List of in-progress transactions. */
 	struct list_head transaction_list;
 	uint32_t next_transaction_id;
-	unsigned int transaction_started;
 	time_t ta_start_time;
 
 	/* List of delayed requests. */
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 1caa60bb14..40bcc1dbfa 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -419,12 +419,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 {
 	struct domain *d = find_domain_struct(domid);
 	char *resp;
-	int ta;
 
 	if (!d)
 		return ENOENT;
 
-	ta = d->conn ? d->conn->transaction_started : 0;
 	resp = talloc_asprintf(ctx, "Domain %u:\n", domid);
 	if (!resp)
 		return ENOMEM;
@@ -435,7 +433,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 
 	ent(nodes, d->acc[ACC_NODES]);
 	ent(watches, d->acc[ACC_WATCH]);
-	ent(transactions, ta);
+	ent(transactions, d->acc[ACC_TRANS]);
 	ent(outstanding, d->acc[ACC_OUTST]);
 	ent(memory, d->acc[ACC_MEM]);
 
@@ -1297,6 +1295,23 @@ void domain_outstanding_dec(struct connection *conn, unsigned int domid)
 	domain_acc_add(conn, domid, ACC_OUTST, -1, true);
 }
 
+void domain_transaction_inc(struct connection *conn)
+{
+	domain_acc_add(conn, conn->id, ACC_TRANS, 1, true);
+}
+
+void domain_transaction_dec(struct connection *conn)
+{
+	domain_acc_add(conn, conn->id, ACC_TRANS, -1, true);
+}
+
+unsigned int domain_transaction_get(struct connection *conn)
+{
+	return (domain_is_unprivileged(conn))
+		? domain_acc_add(conn, conn->id, ACC_TRANS, 0, true)
+		: 0;
+}
+
 static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
 static wrl_creditt wrl_config_rate           = WRL_RATE   * WRL_FACTOR;
 static wrl_creditt wrl_config_dburst         = WRL_DBURST * WRL_FACTOR;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 0d61bf4344..abc766f343 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -31,6 +31,7 @@ enum accitem {
 	ACC_WATCH = ACC_TR_N,
 	ACC_OUTST,
 	ACC_MEM,
+	ACC_TRANS,
 	ACC_N,			/* Number of elements per domain. */
 };
 
@@ -112,6 +113,9 @@ void domain_watch_dec(struct connection *conn);
 int domain_watch(struct connection *conn);
 void domain_outstanding_inc(struct connection *conn, unsigned int domid);
 void domain_outstanding_dec(struct connection *conn, unsigned int domid);
+void domain_transaction_inc(struct connection *conn);
+void domain_transaction_dec(struct connection *conn);
+unsigned int domain_transaction_get(struct connection *conn);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 11c8bcec84..1c14b8579a 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -479,8 +479,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	if (conn->transaction)
 		return EBUSY;
 
-	if (domain_is_unprivileged(conn) &&
-	    conn->transaction_started > quota_max_transaction)
+	if (domain_transaction_get(conn) > quota_max_transaction)
 		return ENOSPC;
 
 	/* Attach transaction to ctx for autofree until it's complete */
@@ -505,9 +504,9 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	list_add_tail(&trans->list, &conn->transaction_list);
 	talloc_steal(conn, trans);
 	talloc_set_destructor(trans, destroy_transaction);
-	if (!conn->transaction_started)
+	if (!conn->ta_start_time)
 		conn->ta_start_time = time(NULL);
-	conn->transaction_started++;
+	domain_transaction_inc(conn);
 	wrl_ntransactions++;
 
 	snprintf(id_str, sizeof(id_str), "%u", trans->id);
@@ -533,8 +532,8 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 
 	conn->transaction = NULL;
 	list_del(&trans->list);
-	conn->transaction_started--;
-	if (!conn->transaction_started)
+	domain_transaction_dec(conn);
+	if (list_empty(&conn->transaction_list))
 		conn->ta_start_time = 0;
 
 	chk_quota = trans->node_created && domain_is_unprivileged(conn);
@@ -588,7 +587,6 @@ void conn_delete_all_transactions(struct connection *conn)
 
 	assert(conn->transaction == NULL);
 
-	conn->transaction_started = 0;
 	conn->ta_start_time = 0;
 }
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:09:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:09:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518257.804618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxGM-0007Qe-2V; Wed, 05 Apr 2023 07:08:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518257.804618; Wed, 05 Apr 2023 07:08:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxGL-0007QS-W0; Wed, 05 Apr 2023 07:08:57 +0000
Received: by outflank-mailman (input) for mailman id 518257;
 Wed, 05 Apr 2023 07:08:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxCX-0002Et-31
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:05:01 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2d7be2a9-d380-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 09:04:59 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id F139B203F7;
 Wed,  5 Apr 2023 07:04:58 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C72B113A31;
 Wed,  5 Apr 2023 07:04:58 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id cxhLL5odLWQKFAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:04:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d7be2a9-d380-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678298; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=aCyLt0zxkjWxud2x9uGxTV8f0obgprf3/tY/K8k0jbw=;
	b=JMkSCFp31r0fIZPNH2AkMftQJA+mpIcj9jvbs5ufb0Ar0XBybtUowBEtZDvBcP2LTjksJF
	fqxANAkevAZlQI+gakDpVPD8TisCAKrF0fcbvK1C5jjSE/LCDkdGF8CT6w2ukbn0+ujGx6
	xNfbNEucYtgjqwKlr68nrI4H020r5Bo=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 12/13] tools/xenstore: use generic accounting for remaining quotas
Date: Wed,  5 Apr 2023 09:03:48 +0200
Message-Id: <20230405070349.25293-13-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The maxrequests, node size, number of node permissions, and path length
quotas are a little special, as they are either active in transactions
only (maxrequests), or they are per-item values instead of counts.
Nevertheless, knowing the maximum of those quota-related values per
domain would be beneficial, so add them to the generic accounting.

The per-domain value will never show a current number other than zero,
but the maximum value seen can be gathered the same way as the number
of nodes during a transaction.

To be able to use the const qualifier in a new function, switch
domain_is_unprivileged() to take a const pointer, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        | 14 ++++-----
 tools/xenstore/xenstored_core.h        |  2 +-
 tools/xenstore/xenstored_domain.c      | 39 ++++++++++++++++++++------
 tools/xenstore/xenstored_domain.h      |  6 ++++
 tools/xenstore/xenstored_transaction.c |  4 +--
 tools/xenstore/xenstored_watch.c       |  2 +-
 6 files changed, 48 insertions(+), 19 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 88c569b7d5..65df2866bf 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -799,8 +799,8 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 		+ node->perms.num * sizeof(node->perms.p[0])
 		+ node->datalen + node->childlen;
 
-	if (!no_quota_check && domain_is_unprivileged(conn) &&
-	    data.dsize >= quota_max_entry_size) {
+	if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
+	    && !no_quota_check) {
 		errno = ENOSPC;
 		return errno;
 	}
@@ -1168,7 +1168,7 @@ static bool valid_chars(const char *node)
 		       "0123456789-/_@") == strlen(node));
 }
 
-bool is_valid_nodename(const char *node)
+bool is_valid_nodename(const struct connection *conn, const char *node)
 {
 	int local_off = 0;
 	unsigned int domid;
@@ -1188,7 +1188,8 @@ bool is_valid_nodename(const char *node)
 	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
 		local_off = 0;
 
-	if (strlen(node) > local_off + quota_max_path_len)
+	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
+			   quota_max_path_len))
 		return false;
 
 	return valid_chars(node);
@@ -1250,7 +1251,7 @@ static struct node *get_node_canonicalized(struct connection *conn,
 	*canonical_name = canonicalize(conn, ctx, name);
 	if (!*canonical_name)
 		return NULL;
-	if (!is_valid_nodename(*canonical_name)) {
+	if (!is_valid_nodename(conn, *canonical_name)) {
 		errno = EINVAL;
 		return NULL;
 	}
@@ -1775,8 +1776,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 		return EINVAL;
 
 	perms.num--;
-	if (domain_is_unprivileged(conn) &&
-	    perms.num > quota_nb_perms_per_node)
+	if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
 		return ENOSPC;
 
 	permstr = in->buffer + strlen(in->buffer) + 1;
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 3564d85d7d..9339820156 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -258,7 +258,7 @@ void check_store(void);
 void corrupt(struct connection *conn, const char *fmt, ...);
 
 /* Is this a valid node name? */
-bool is_valid_nodename(const char *node);
+bool is_valid_nodename(const struct connection *conn, const char *node);
 
 /* Get name of parent node. */
 char *get_parent(const void *ctx, const char *node);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index d21f31da92..49e2c5c82a 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -433,7 +433,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8u (max: %8u\n", #t, \
+	resp = talloc_asprintf_append(resp, "%-17s: %8u (max: %8u)\n", #t, \
 				      d->acc[e].val, d->acc[e].max); \
 	if (!resp) return ENOMEM
 
@@ -442,6 +442,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	ent(transactions, ACC_TRANS);
 	ent(outstanding, ACC_OUTST);
 	ent(memory, ACC_MEM);
+	ent(transaction-nodes, ACC_TRANSNODES);
 
 #undef ent
 
@@ -459,7 +460,7 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8u\n", #t,   \
+	resp = talloc_asprintf_append(resp, "%-17s: %8u\n", #t,   \
 				      acc_global_max[e]);         \
 	if (!resp) return ENOMEM
 
@@ -468,6 +469,7 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
 	ent(transactions, ACC_TRANS);
 	ent(outstanding, ACC_OUTST);
 	ent(memory, ACC_MEM);
+	ent(transaction-nodes, ACC_TRANSNODES);
 
 #undef ent
 
@@ -1081,12 +1083,22 @@ int domain_adjust_node_perms(struct node *node)
 	return 0;
 }
 
+static void domain_acc_valid_max(struct domain *d, enum accitem what,
+				 unsigned int val)
+{
+	assert(what < ARRAY_SIZE(d->acc));
+	assert(what < ARRAY_SIZE(acc_global_max));
+
+	if (val > d->acc[what].max)
+		d->acc[what].max = val;
+	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
+		acc_global_max[what] = val;
+}
+
 static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 {
 	unsigned int val;
 
-	assert(what < ARRAY_SIZE(d->acc));
-
 	if ((add < 0 && -add > d->acc[what].val) ||
 	    (add > 0 && (INT_MAX - d->acc[what].val) < add)) {
 		/*
@@ -1100,10 +1112,7 @@ static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 	}
 
 	val = d->acc[what].val + add;
-	if (val > d->acc[what].max)
-		d->acc[what].max = val;
-	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
-		acc_global_max[what] = val;
+	domain_acc_valid_max(d, what, val);
 
 	return val;
 }
@@ -1221,6 +1230,20 @@ void domain_reset_global_acc(void)
 	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
 }
 
+bool domain_max_chk(const struct connection *conn, enum accitem what,
+		    unsigned int val, unsigned int quota)
+{
+	if (!conn || !conn->domain)
+		return false;
+
+	if (domain_is_unprivileged(conn) && val > quota)
+		return true;
+
+	domain_acc_valid_max(conn->domain, what, val);
+
+	return false;
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index b55c9bcc2d..3e17b63659 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -32,6 +32,10 @@ enum accitem {
 	ACC_OUTST,
 	ACC_MEM,
 	ACC_TRANS,
+	ACC_TRANSNODES,
+	ACC_NPERM,
+	ACC_PATHLEN,
+	ACC_NODESZ,
 	ACC_N,			/* Number of elements per domain. */
 };
 
@@ -128,6 +132,8 @@ void acc_drop(struct connection *conn);
 void acc_commit(struct connection *conn);
 int domain_max_global_acc(const void *ctx, struct connection *conn);
 void domain_reset_global_acc(void);
+bool domain_max_chk(const struct connection *conn, unsigned int what,
+		    unsigned int val, unsigned int quota);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 1c14b8579a..0b256f9b18 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -252,8 +252,8 @@ int access_node(struct connection *conn, struct node *node,
 
 	i = find_accessed_node(trans, node->name);
 	if (!i) {
-		if (trans->nodes >= quota_trans_nodes &&
-		    domain_is_unprivileged(conn)) {
+		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1,
+				   quota_trans_nodes)) {
 			ret = ENOSPC;
 			goto err;
 		}
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index e30cd89be3..61b1e3421e 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -176,7 +176,7 @@ static int check_watch_path(struct connection *conn, const void *ctx,
 		*path = canonicalize(conn, ctx, *path);
 		if (!*path)
 			return errno;
-		if (!is_valid_nodename(*path))
+		if (!is_valid_nodename(conn, *path))
 			goto inval;
 	}
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:09:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:09:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518263.804648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxGP-0008Co-8m; Wed, 05 Apr 2023 07:09:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518263.804648; Wed, 05 Apr 2023 07:09:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxGP-0008Cf-2H; Wed, 05 Apr 2023 07:09:01 +0000
Received: by outflank-mailman (input) for mailman id 518263;
 Wed, 05 Apr 2023 07:08:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxCE-0002UC-Sd
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:04:42 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2367f272-d380-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 09:04:42 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1AC7020298;
 Wed,  5 Apr 2023 07:04:42 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id DFDB813A31;
 Wed,  5 Apr 2023 07:04:41 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id dLRaNYkdLWTnEwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:04:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2367f272-d380-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678282; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GIdwVYCAKepkDxThtVUfw/37sAa1oGJdz4Nx6ZA+2lg=;
	b=JYSdSrraO6+/ewSBe4ComCaeaEt6LaIqm7GpPwvvBB2/50c3lKRhEfexZ/M1PYtUNuOAdp
	cAXPwMDtyPaVr6zj4HebZEt9t+7eJ9G9ICvXqsD4QgpVereucdcAxnztUHXU0/E9EcKMvt
	WCuHsq6yasY6PYD4cdSaJ4rMhd+r4JQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v4 09/13] tools/xenstore: add TDB access trace support
Date: Wed,  5 Apr 2023 09:03:45 +0200
Message-Id: <20230405070349.25293-10-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new trace switch "tdb" and the related trace calls.

The "tdb" switch is off by default.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c        | 8 +++++++-
 tools/xenstore/xenstored_core.h        | 7 +++++++
 tools/xenstore/xenstored_transaction.c | 7 ++++++-
 3 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index bd816ce709..2d481fcad9 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -589,6 +589,8 @@ static void get_acc_data(TDB_DATA *key, struct node_account_data *acc)
 		if (old_data.dptr == NULL) {
 			acc->memory = 0;
 		} else {
+			trace_tdb("read %s size %zu\n", key->dptr,
+				  old_data.dsize + key->dsize);
 			hdr = (void *)old_data.dptr;
 			acc->memory = old_data.dsize;
 			acc->domid = hdr->perms[0].id;
@@ -655,6 +657,7 @@ int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 		errno = EIO;
 		return errno;
 	}
+	trace_tdb("store %s size %zu\n", key->dptr, data->dsize + key->dsize);
 
 	if (acc) {
 		/* Don't use new_domid, as it might be a transaction node. */
@@ -682,6 +685,7 @@ int do_tdb_delete(struct connection *conn, TDB_DATA *key,
 		errno = EIO;
 		return errno;
 	}
+	trace_tdb("delete %s\n", key->dptr);
 
 	if (acc->memory) {
 		domid = get_acc_domid(conn, key, acc->domid);
@@ -731,6 +735,8 @@ struct node *read_node(struct connection *conn, const void *ctx,
 		goto error;
 	}
 
+	trace_tdb("read %s size %zu\n", key.dptr, data.dsize + key.dsize);
+
 	node->parent = NULL;
 	talloc_steal(node, data.dptr);
 
@@ -2746,7 +2752,7 @@ static void set_quota(const char *arg, bool soft)
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
 const char *const trace_switches[] = {
-	"obj", "io", "wrl", "acc",
+	"obj", "io", "wrl", "acc", "tdb",
 	NULL
 };
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 3e0734a6c6..5a11dc1231 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -303,9 +303,16 @@ extern unsigned int trace_flags;
 #define TRACE_IO	0x00000002
 #define TRACE_WRL	0x00000004
 #define TRACE_ACC	0x00000008
+#define TRACE_TDB	0x00000010
 extern const char *const trace_switches[];
 int set_trace_switch(const char *arg);
 
+#define trace_tdb(...)				\
+do {						\
+	if (trace_flags & TRACE_TDB)		\
+		trace("tdb: " __VA_ARGS__);	\
+} while (0)
+
 extern TDB_CONTEXT *tdb_ctx;
 extern int dom0_domid;
 extern int dom0_event;
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 2b15506953..11c8bcec84 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -374,8 +374,11 @@ static int finalize_transaction(struct connection *conn,
 				if (tdb_error(tdb_ctx) != TDB_ERR_NOEXIST)
 					return EIO;
 				gen = NO_GENERATION;
-			} else
+			} else {
+				trace_tdb("read %s size %zu\n", key.dptr,
+					  key.dsize + data.dsize);
 				gen = hdr->generation;
+			}
 			talloc_free(data.dptr);
 			if (i->generation != gen)
 				return EAGAIN;
@@ -399,6 +402,8 @@ static int finalize_transaction(struct connection *conn,
 			set_tdb_key(i->trans_name, &ta_key);
 			data = tdb_fetch(tdb_ctx, ta_key);
 			if (data.dptr) {
+				trace_tdb("read %s size %zu\n", ta_key.dptr,
+					  ta_key.dsize + data.dsize);
 				hdr = (void *)data.dptr;
 				hdr->generation = ++generation;
 				*is_corrupt |= do_tdb_write(conn, &key, &data,
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:09:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:09:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518268.804653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxGP-0008EO-Ic; Wed, 05 Apr 2023 07:09:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518268.804653; Wed, 05 Apr 2023 07:09:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxGP-0008Dn-BL; Wed, 05 Apr 2023 07:09:01 +0000
Received: by outflank-mailman (input) for mailman id 518268;
 Wed, 05 Apr 2023 07:09:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pjxCd-0002Et-6Z
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:05:07 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 30dd4850-d380-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 09:05:04 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 9FAA1201AE;
 Wed,  5 Apr 2023 07:05:04 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 6FFED13A31;
 Wed,  5 Apr 2023 07:05:04 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id sUUcGqAdLWQYFAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 07:05:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30dd4850-d380-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680678304; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WlsEUwLyrbuDwUKcPe0rdVJuR+fsh5G5MKrTkqmSsAU=;
	b=nuCyErAeaJms2nibOGMxAA1WYVt61sEpsQAOH+dBoxFDz3+jeklDHw4Wp+bbUxdmDyiZmk
	H4Qi8RYtT+TRz6nGLb0OcbywNqGLor5XSv80LKOmzU/0XzzhiNw6JI52TaFas4p+uqF8EU
	bpKKxSVcybscwIDkiW4DzfawgVwEszo=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 13/13] tools/xenstore: switch quota management to be table based
Date: Wed,  5 Apr 2023 09:03:49 +0200
Message-Id: <20230405070349.25293-14-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
References: <20230405070349.25293-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of having individual quota variables, switch to a table-based
approach like the generic accounting. Include all the related data in
the same table and add accessor functions.

This enables using the command-line --quota parameter to set all
possible quota values, while keeping the previous parameters for
compatibility.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
One further remark: it would be rather easy to add soft quotas for all
the other quotas (similar to the memory one). These could serve as an
early warning of the need to raise a global quota.
---
 tools/xenstore/xenstored_control.c     |  43 ++------
 tools/xenstore/xenstored_core.c        |  85 ++++++++--------
 tools/xenstore/xenstored_core.h        |  10 --
 tools/xenstore/xenstored_domain.c      | 132 +++++++++++++++++--------
 tools/xenstore/xenstored_domain.h      |  12 ++-
 tools/xenstore/xenstored_transaction.c |   5 +-
 tools/xenstore/xenstored_watch.c       |   2 +-
 7 files changed, 155 insertions(+), 134 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index a2ba64a15c..75f51a80db 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -221,35 +221,6 @@ static int do_control_log(const void *ctx, struct connection *conn,
 	return 0;
 }
 
-struct quota {
-	const char *name;
-	int *quota;
-	const char *descr;
-};
-
-static const struct quota hard_quotas[] = {
-	{ "nodes", &quota_nb_entry_per_domain, "Nodes per domain" },
-	{ "watches", &quota_nb_watch_per_domain, "Watches per domain" },
-	{ "transactions", &quota_max_transaction, "Transactions per domain" },
-	{ "outstanding", &quota_req_outstanding,
-		"Outstanding requests per domain" },
-	{ "transaction-nodes", &quota_trans_nodes,
-		"Max. number of accessed nodes per transaction" },
-	{ "memory", &quota_memory_per_domain_hard,
-		"Total Xenstore memory per domain (error level)" },
-	{ "node-size", &quota_max_entry_size, "Max. size of a node" },
-	{ "path-max", &quota_max_path_len, "Max. length of a node path" },
-	{ "permissions", &quota_nb_perms_per_node,
-		"Max. number of permissions per node" },
-	{ NULL, NULL, NULL }
-};
-
-static const struct quota soft_quotas[] = {
-	{ "memory", &quota_memory_per_domain_soft,
-		"Total Xenstore memory per domain (warning level)" },
-	{ NULL, NULL, NULL }
-};
-
 static int quota_show_current(const void *ctx, struct connection *conn,
 			      const struct quota *quotas)
 {
@@ -260,9 +231,11 @@ static int quota_show_current(const void *ctx, struct connection *conn,
 	if (!resp)
 		return ENOMEM;
 
-	for (i = 0; quotas[i].quota; i++) {
+	for (i = 0; i < ACC_N; i++) {
+		if (!quotas[i].name)
+			continue;
 		resp = talloc_asprintf_append(resp, "%-17s: %8d %s\n",
-					      quotas[i].name, *quotas[i].quota,
+					      quotas[i].name, quotas[i].val,
 					      quotas[i].descr);
 		if (!resp)
 			return ENOMEM;
@@ -274,7 +247,7 @@ static int quota_show_current(const void *ctx, struct connection *conn,
 }
 
 static int quota_set(const void *ctx, struct connection *conn,
-		     char **vec, int num, const struct quota *quotas)
+		     char **vec, int num, struct quota *quotas)
 {
 	unsigned int i;
 	int val;
@@ -286,9 +259,9 @@ static int quota_set(const void *ctx, struct connection *conn,
 	if (val < 1)
 		return EINVAL;
 
-	for (i = 0; quotas[i].quota; i++) {
-		if (!strcmp(vec[0], quotas[i].name)) {
-			*quotas[i].quota = val;
+	for (i = 0; i < ACC_N; i++) {
+		if (quotas[i].name && !strcmp(vec[0], quotas[i].name)) {
+			quotas[i].val = val;
 			send_ack(conn, XS_CONTROL);
 			return 0;
 		}
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 65df2866bf..6e2fc06840 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -89,17 +89,6 @@ unsigned int trace_flags = TRACE_OBJ | TRACE_IO;
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type);
 
-int quota_nb_entry_per_domain = 1000;
-int quota_nb_watch_per_domain = 128;
-int quota_max_entry_size = 2048; /* 2K */
-int quota_max_transaction = 10;
-int quota_nb_perms_per_node = 5;
-int quota_trans_nodes = 1024;
-int quota_max_path_len = XENSTORE_REL_PATH_MAX;
-int quota_req_outstanding = 20;
-int quota_memory_per_domain_soft = 2 * 1024 * 1024; /* 2 MB */
-int quota_memory_per_domain_hard = 2 * 1024 * 1024 + 512 * 1024; /* 2.5 MB */
-
 unsigned int timeout_watch_event_msec = 20000;
 
 void trace(const char *fmt, ...)
@@ -799,7 +788,7 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 		+ node->perms.num * sizeof(node->perms.p[0])
 		+ node->datalen + node->childlen;
 
-	if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
+	if (domain_max_chk(conn, ACC_NODESZ, data.dsize)
 	    && !no_quota_check) {
 		errno = ENOSPC;
 		return errno;
@@ -1188,8 +1177,7 @@ bool is_valid_nodename(const struct connection *conn, const char *node)
 	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
 		local_off = 0;
 
-	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
-			   quota_max_path_len))
+	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off))
 		return false;
 
 	return valid_chars(node);
@@ -1501,7 +1489,7 @@ static struct node *create_node(struct connection *conn, const void *ctx,
 	for (i = node; i; i = i->parent) {
 		/* i->parent is set for each new node, so check quota. */
 		if (i->parent &&
-		    domain_nbentry(conn) >= quota_nb_entry_per_domain) {
+		    domain_nbentry(conn) >= hard_quotas[ACC_NODES].val) {
 			ret = ENOSPC;
 			goto err;
 		}
@@ -1776,7 +1764,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 		return EINVAL;
 
 	perms.num--;
-	if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
+	if (domain_max_chk(conn, ACC_NPERM, perms.num))
 		return ENOSPC;
 
 	permstr = in->buffer + strlen(in->buffer) + 1;
@@ -2644,7 +2632,16 @@ static void usage(void)
 "                          memory: total used memory per domain for nodes,\n"
 "                                  transactions, watches and requests, above\n"
 "                                  which Xenstore will stop talking to domain\n"
+"                          nodes: number of nodes owned by a domain\n"
+"                          node-permissions: number of access permissions per\n"
+"                                            node\n"
+"                          node-size: total size of a node (permissions +\n"
+"                                     children names + content)\n"
 "                          outstanding: number of outstanding requests\n"
+"                          path-length: length of a node path\n"
+"                          transactions: number of concurrent transactions\n"
+"                                        per domain\n"
+"                          watches: number of watches per domain\n"
 "  -q, --quota-soft <what>=<nb> set a soft quota <what> to the value <nb>,\n"
 "                          causing a warning to be issued via syslog() if the\n"
 "                          limit is violated, allowed quotas are:\n"
@@ -2695,12 +2692,12 @@ int dom0_domid = 0;
 int dom0_event = 0;
 int priv_domid = 0;
 
-static int get_optval_int(const char *arg)
+static unsigned int get_optval_int(const char *arg)
 {
 	char *end;
-	long val;
+	unsigned long val;
 
-	val = strtol(arg, &end, 10);
+	val = strtoul(arg, &end, 10);
 	if (!*arg || *end || val < 0 || val > INT_MAX)
 		barf("invalid parameter value \"%s\"\n", arg);
 
@@ -2709,15 +2706,19 @@ static int get_optval_int(const char *arg)
 
 static bool what_matches(const char *arg, const char *what)
 {
-	unsigned int what_len = strlen(what);
+	unsigned int what_len;
+
+	if (!what)
+		return false;
 
+	what_len = strlen(what);
 	return !strncmp(arg, what, what_len) && arg[what_len] == '=';
 }
 
 static void set_timeout(const char *arg)
 {
 	const char *eq = strchr(arg, '=');
-	int val;
+	unsigned int val;
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<seconds>\n");
@@ -2731,22 +2732,22 @@ static void set_timeout(const char *arg)
 static void set_quota(const char *arg, bool soft)
 {
 	const char *eq = strchr(arg, '=');
-	int val;
+	struct quota *q = soft ? soft_quotas : hard_quotas;
+	unsigned int val;
+	unsigned int i;
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<nb>\n");
 	val = get_optval_int(eq + 1);
-	if (what_matches(arg, "outstanding") && !soft)
-		quota_req_outstanding = val;
-	else if (what_matches(arg, "transaction-nodes") && !soft)
-		quota_trans_nodes = val;
-	else if (what_matches(arg, "memory")) {
-		if (soft)
-			quota_memory_per_domain_soft = val;
-		else
-			quota_memory_per_domain_hard = val;
-	} else
-		barf("unknown quota \"%s\"\n", arg);
+
+	for (i = 0; i < ACC_N; i++) {
+		if (what_matches(arg, q[i].name)) {
+			q[i].val = val;
+			return;
+		}
+	}
+
+	barf("unknown quota \"%s\"\n", arg);
 }
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
@@ -2808,7 +2809,7 @@ int main(int argc, char *argv[])
 			no_domain_init = true;
 			break;
 		case 'E':
-			quota_nb_entry_per_domain = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NODES].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'F':
 			pidfile = optarg;
@@ -2826,10 +2827,10 @@ int main(int argc, char *argv[])
 			recovery = false;
 			break;
 		case 'S':
-			quota_max_entry_size = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NODESZ].val = strtoul(optarg, NULL, 10);
 			break;
 		case 't':
-			quota_max_transaction = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_TRANS].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'T':
 			tracefile = optarg;
@@ -2849,15 +2850,17 @@ int main(int argc, char *argv[])
 			verbose = true;
 			break;
 		case 'W':
-			quota_nb_watch_per_domain = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_WATCH].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'A':
-			quota_nb_perms_per_node = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NPERM].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'M':
-			quota_max_path_len = strtol(optarg, NULL, 10);
-			quota_max_path_len = min(XENSTORE_REL_PATH_MAX,
-						 quota_max_path_len);
+			hard_quotas[ACC_PATHLEN].val =
+				strtoul(optarg, NULL, 10);
+			hard_quotas[ACC_PATHLEN].val =
+				 min((unsigned int)XENSTORE_REL_PATH_MAX,
+				     hard_quotas[ACC_PATHLEN].val);
 			break;
 		case 'Q':
 			set_quota(optarg, false);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 9339820156..92d5b50f3c 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -316,16 +316,6 @@ extern TDB_CONTEXT *tdb_ctx;
 extern int dom0_domid;
 extern int dom0_event;
 extern int priv_domid;
-extern int quota_nb_watch_per_domain;
-extern int quota_max_transaction;
-extern int quota_max_entry_size;
-extern int quota_nb_perms_per_node;
-extern int quota_max_path_len;
-extern int quota_nb_entry_per_domain;
-extern int quota_req_outstanding;
-extern int quota_trans_nodes;
-extern int quota_memory_per_domain_soft;
-extern int quota_memory_per_domain_hard;
 extern bool keep_orphans;
 
 extern unsigned int timeout_watch_event_msec;
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 49e2c5c82a..9f6b17cca3 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -43,7 +43,61 @@ static evtchn_port_t virq_port;
 
 xenevtchn_handle *xce_handle = NULL;
 
-static unsigned int acc_global_max[ACC_N];
+struct quota hard_quotas[ACC_N] = {
+	[ACC_NODES] = {
+		.name = "nodes",
+		.descr = "Nodes per domain",
+		.val = 1000,
+	},
+	[ACC_WATCH] = {
+		.name = "watches",
+		.descr = "Watches per domain",
+		.val = 128,
+	},
+	[ACC_OUTST] = {
+		.name = "outstanding",
+		.descr = "Outstanding requests per domain",
+		.val = 20,
+	},
+	[ACC_MEM] = {
+		.name = "memory",
+		.descr = "Total Xenstore memory per domain (error level)",
+		.val = 2 * 1024 * 1024 + 512 * 1024,	/* 2.5 MB */
+	},
+	[ACC_TRANS] = {
+		.name = "transactions",
+		.descr = "Active transactions per domain",
+		.val = 10,
+	},
+	[ACC_TRANSNODES] = {
+		.name = "transaction-nodes",
+		.descr = "Max. number of accessed nodes per transaction",
+		.val = 1024,
+	},
+	[ACC_NPERM] = {
+		.name = "node-permissions",
+		.descr = "Max. number of permissions per node",
+		.val = 5,
+	},
+	[ACC_PATHLEN] = {
+		.name = "path-max",
+		.descr = "Max. length of a node path",
+		.val = XENSTORE_REL_PATH_MAX,
+	},
+	[ACC_NODESZ] = {
+		.name = "node-size",
+		.descr = "Max. size of a node",
+		.val = 2048,
+	},
+};
+
+struct quota soft_quotas[ACC_N] = {
+	[ACC_MEM] = {
+		.name = "memory",
+		.descr = "Total Xenstore memory per domain (warning level)",
+		.val = 2 * 1024 * 1024,			/* 2.0 MB */
+	},
+};
 
 struct domain
 {
@@ -206,10 +260,10 @@ static bool domain_can_read(struct connection *conn)
 	if (domain_is_unprivileged(conn)) {
 		if (domain->wrl_credit < 0)
 			return false;
-		if (domain->acc[ACC_OUTST].val >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST].val >= hard_quotas[ACC_OUTST].val)
 			return false;
-		if (domain->acc[ACC_MEM].val >= quota_memory_per_domain_hard &&
-		    quota_memory_per_domain_hard)
+		if (domain->acc[ACC_MEM].val >= hard_quotas[ACC_MEM].val &&
+		    hard_quotas[ACC_MEM].val)
 			return false;
 	}
 
@@ -424,6 +478,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 {
 	struct domain *d = find_domain_struct(domid);
 	char *resp;
+	unsigned int i;
 
 	if (!d)
 		return ENOENT;
@@ -432,19 +487,15 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	if (!resp)
 		return ENOMEM;
 
-#define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-17s: %8u (max: %8u\n", #t, \
-				      d->acc[e].val, d->acc[e].max); \
-	if (!resp) return ENOMEM
-
-	ent(nodes, ACC_NODES);
-	ent(watches, ACC_WATCH);
-	ent(transactions, ACC_TRANS);
-	ent(outstanding, ACC_OUTST);
-	ent(memory, ACC_MEM);
-	ent(transaction-nodes, ACC_TRANSNODES);
-
-#undef ent
+	for (i = 0; i < ACC_N; i++) {
+		if (!hard_quotas[i].name)
+			continue;
+		resp = talloc_asprintf_append(resp, "%-17s: %8u (max %8u)\n",
+					      hard_quotas[i].name,
+					      d->acc[i].val, d->acc[i].max);
+		if (!resp)
+			return ENOMEM;
+	}
 
 	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
 
@@ -454,24 +505,21 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 int domain_max_global_acc(const void *ctx, struct connection *conn)
 {
 	char *resp;
+	unsigned int i;
 
 	resp = talloc_asprintf(ctx, "Max. seen accounting values:\n");
 	if (!resp)
 		return ENOMEM;
 
-#define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-17s: %8u\n", #t,   \
-				      acc_global_max[e]);         \
-	if (!resp) return ENOMEM
-
-	ent(nodes, ACC_NODES);
-	ent(watches, ACC_WATCH);
-	ent(transactions, ACC_TRANS);
-	ent(outstanding, ACC_OUTST);
-	ent(memory, ACC_MEM);
-	ent(transaction-nodes, ACC_TRANSNODES);
-
-#undef ent
+	for (i = 0; i < ACC_N; i++) {
+		if (!hard_quotas[i].name)
+			continue;
+		resp = talloc_asprintf_append(resp, "%-17s: %8u\n",
+					      hard_quotas[i].name,
+					      hard_quotas[i].max);
+		if (!resp)
+			return ENOMEM;
+	}
 
 	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
 
@@ -586,7 +634,7 @@ int acc_fix_domains(struct list_head *head, bool chk_quota, bool update)
 	list_for_each_entry(cd, head, list) {
 		cnt = domain_nbentry_fix(cd->domid, cd->acc[ACC_NODES], update);
 		if (!update) {
-			if (chk_quota && cnt >= quota_nb_entry_per_domain)
+			if (chk_quota && cnt >= hard_quotas[ACC_NODES].val)
 				return ENOSPC;
 			if (cnt < 0)
 				return ENOMEM;
@@ -1087,12 +1135,12 @@ static void domain_acc_valid_max(struct domain *d, enum accitem what,
 				 unsigned int val)
 {
 	assert(what < ARRAY_SIZE(d->acc));
-	assert(what < ARRAY_SIZE(acc_global_max));
+	assert(what < ARRAY_SIZE(hard_quotas));
 
 	if (val > d->acc[what].max)
 		d->acc[what].max = val;
-	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
-		acc_global_max[what] = val;
+	if (val > hard_quotas[what].max && domid_is_unprivileged(d->domid))
+		hard_quotas[what].max = val;
 }
 
 static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
@@ -1224,19 +1272,19 @@ void domain_reset_global_acc(void)
 	unsigned int i;
 
 	for (i = 0; i < ACC_N; i++)
-		acc_global_max[i] = 0;
+		hard_quotas[i].max = 0;
 
 	/* Set current max values seen. */
 	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
 }
 
 bool domain_max_chk(const struct connection *conn, enum accitem what,
-		    unsigned int val, unsigned int quota)
+		    unsigned int val)
 {
 	if (!conn || !conn->domain)
 		return false;
 
-	if (domain_is_unprivileged(conn) && val > quota)
+	if (domain_is_unprivileged(conn) && val > hard_quotas[what].val)
 		return true;
 
 	domain_acc_valid_max(conn->domain, what, val);
@@ -1285,8 +1333,7 @@ static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 	domain = conn->domain;
 	now = time(NULL);
 
-	if (mem >= quota_memory_per_domain_hard &&
-	    quota_memory_per_domain_hard) {
+	if (mem >= hard_quotas[ACC_MEM].val && hard_quotas[ACC_MEM].val) {
 		if (domain->hard_quota_reported)
 			return true;
 		syslog(LOG_ERR, "Domain %u exceeds hard memory quota, Xenstore interface to domain stalled\n",
@@ -1303,15 +1350,14 @@ static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 			syslog(LOG_INFO, "Domain %u below hard memory quota again\n",
 			       domain->domid);
 		}
-		if (mem >= quota_memory_per_domain_soft &&
-		    quota_memory_per_domain_soft &&
-		    !domain->soft_quota_reported) {
+		if (mem >= soft_quotas[ACC_MEM].val &&
+		    soft_quotas[ACC_MEM].val && !domain->soft_quota_reported) {
 			domain->mem_last_msg = now;
 			domain->soft_quota_reported = true;
 			syslog(LOG_WARNING, "Domain %u exceeds soft memory quota\n",
 			       domain->domid);
 		}
-		if (mem < quota_memory_per_domain_soft &&
+		if (mem < soft_quotas[ACC_MEM].val &&
 		    domain->soft_quota_reported) {
 			domain->mem_last_msg = now;
 			domain->soft_quota_reported = false;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 3e17b63659..b9e38abff5 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -39,6 +39,16 @@ enum accitem {
 	ACC_N,			/* Number of elements per domain. */
 };
 
+struct quota {
+	const char *name;
+	const char *descr;
+	unsigned int val;
+	unsigned int max;
+};
+
+extern struct quota hard_quotas[ACC_N];
+extern struct quota soft_quotas[ACC_N];
+
 void handle_event(void);
 
 void check_domains(void);
@@ -133,7 +143,7 @@ void acc_commit(struct connection *conn);
 int domain_max_global_acc(const void *ctx, struct connection *conn);
 void domain_reset_global_acc(void);
 bool domain_max_chk(const struct connection *conn, unsigned int what,
-		    unsigned int val, unsigned int quota);
+		    unsigned int val);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 0b256f9b18..e8968d7a1d 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -252,8 +252,7 @@ int access_node(struct connection *conn, struct node *node,
 
 	i = find_accessed_node(trans, node->name);
 	if (!i) {
-		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1,
-				   quota_trans_nodes)) {
+		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1)) {
 			ret = ENOSPC;
 			goto err;
 		}
@@ -479,7 +478,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	if (conn->transaction)
 		return EBUSY;
 
-	if (domain_transaction_get(conn) > quota_max_transaction)
+	if (domain_transaction_get(conn) > hard_quotas[ACC_TRANS].val)
 		return ENOSPC;
 
 	/* Attach transaction to ctx for autofree until it's complete */
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 61b1e3421e..e8eb35de02 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -239,7 +239,7 @@ int do_watch(const void *ctx, struct connection *conn, struct buffered_data *in)
 			return EEXIST;
 	}
 
-	if (domain_watch(conn) > quota_nb_watch_per_domain)
+	if (domain_watch(conn) > hard_quotas[ACC_WATCH].val)
 		return E2BIG;
 
 	watch = add_watch(conn, vec[0], vec[1], relative, false);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 07:52:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 07:52:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518289.804668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxw1-0006mQ-M0; Wed, 05 Apr 2023 07:52:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518289.804668; Wed, 05 Apr 2023 07:52:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjxw1-0006mJ-J6; Wed, 05 Apr 2023 07:52:01 +0000
Received: by outflank-mailman (input) for mailman id 518289;
 Wed, 05 Apr 2023 07:52:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gw2r=74=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjxw0-0006mB-9h
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 07:52:00 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on0623.outbound.protection.outlook.com
 [2a01:111:f400:fe1e::623])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bd524ea5-d386-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 09:51:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9940.eurprd04.prod.outlook.com (2603:10a6:10:4c1::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 07:51:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 07:51:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd524ea5-d386-11ed-b464-930f4c7d94ae
Message-ID: <400b70cd-d2a9-60ce-0acc-a429f3f5a958@suse.com>
Date: Wed, 5 Apr 2023 09:51:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH 1/9] x86emul: support LKGS
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <1eb21ece-9d33-d8e1-1c2b-c682dbb1cda1@suse.com>
 <d8d72cc8-f477-abb1-f6fb-5aa1909b36aa@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d8d72cc8-f477-abb1-f6fb-5aa1909b36aa@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 04.04.2023 23:54, Andrew Cooper wrote:
> On 04/04/2023 3:49 pm, Jan Beulich wrote:
>> Provide support for this insn, which is a prereq to FRED. CPUID-wise
>> introduce both its and FRED's bit at this occasion, thus allowing to
>> also express the dependency right away.
>>
>> While adding a testcase, also add a SWAPGS one. In order to not affect
>> the behavior of pre-existing tests, install write_{segment,msr} hooks
>> only transiently.
> 
> IMO, the emulator is already complicated enough without us having
> fallback logic to cope with callers that don't set up all the hooks.
> 
> Nor do I think making these hooks transient in the test harness is a
> clever idea.

Are you suggesting we start to _require_ all hooks to be set? That'll
mean many useless stubs in particular in PV emulation. Furthermore
absent hooks sometimes cause default behavior to apply rather than
outright failure, so altering what hooks are present can affect
overall behavior. Hence the transient establishing of the two hooks
here.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> Instead of ->read_segment() we could of course also use ->read_msr() to
>> fetch the original GS base. I don't think I can see a clear advantage of
>> either approach; the way it's done it matches how we handle SWAPGS.
> 
> read_segment() is a much shorter logic chain under the hood, so will be
> marginally faster, but it will be a couple of unnecessary VMREADs (on
> Intel at least).

And this is precisely why I think it's not entirely clear. Anyway,
the remark is here just in case you (or Roger) thought doing it the
other way might be better.

> We could expose the get/set reg paths for cases where we know we're not
> going to need sanity checks, but I'm not sure it's worth it in this case.

Probably not.

>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -2886,8 +2886,31 @@ x86_emulate(
>>                  break;
>>              }
>>              break;
>> -        default:
>> -            generate_exception_if(true, EXC_UD);
>> +        case 6: /* lkgs */
>> +            generate_exception_if((modrm_reg & 1) || vex.pfx != vex_f2, EXC_UD);
>> +            generate_exception_if(!mode_64bit() || !mode_ring0(), EXC_UD);
> 
> Can we switch to X86_* please.  Alternatively, I've got such a patch
> which I've just rebased over all your emulator changes anyway, if we're
> happy to fix this in one fell swoop.

Of course, and I've applied this transformation to all the emulator
patches I have pending (i.e. no need to re-request this elsewhere).

> (Sadly, you did move some TRAP_* names into util-xen.c which I fixed up
> in my other tree-wide exception constant patch.)

Hmm, yes, I changed EXC_* -> X86_EXC_* but failed to pay attention to
TRAP_* (because no build issue arose there).

>> +            vcpu_must_have(lkgs);
>> +            fail_if(!ops->read_segment || !ops->read_msr ||
>> +                    !ops->write_segment || !ops->write_msr);
>> +            if ( (rc = ops->read_msr(MSR_SHADOW_GS_BASE, &msr_val,
>> +                                     ctxt)) != X86EMUL_OKAY ||
>> +                 (rc = ops->read_segment(x86_seg_gs, &sreg,
>> +                                         ctxt)) != X86EMUL_OKAY )
>> +                goto done;
>> +            dst.orig_val = sreg.base;
>> +            if ( (rc = protmode_load_seg(x86_seg_gs, src.val, false, &sreg,
>> +                                         ctxt, ops)) != X86EMUL_OKAY ||
>> +                 (rc = ops->write_msr(MSR_SHADOW_GS_BASE, sreg.base,
>> +                                      ctxt)) != X86EMUL_OKAY )
>> +                goto done;
>> +            sreg.base = dst.orig_val;
> 
> Honestly, I think a comment is needed here, because I'm struggling to
> work out if this is correct or not.
> 
> There is a 64->32 bit truncation of base with LGKS, just as there is
> with MOV GS.
> 
> Which I think does happen as a side effect of protmode_load_seg() only
> filling in the lower half of sreg.base,

I thought that's obvious, as protmode_load_seg() can't possibly have
any other behavior. But ...

> but I think it would be nicer to
> have:
> 
> +            dst.orig_val = sreg.base; /* Preserve full GS Base */
> +            if ( (rc = protmode_load_seg(x86_seg_gs, src.val, false, &sreg,
> +                                         ctxt, ops)) != X86EMUL_OKAY ||
> +                 /* Write truncated base into GS_SHADOW */
> +                 (rc = ops->write_msr(MSR_SHADOW_GS_BASE, sreg.base,
> +                                      ctxt)) != X86EMUL_OKAY )
> +                goto done;
> +            sreg.base = dst.orig_val; /* Reinstate full GS Base */
> 
> Or so, because it's weird not to see a (uint32_t) somewhere in this logic.

... sure, I've added comments. I don't think "truncated" is correct,
as there's no truncation - there's no more than 32 bits we can read
out of the GDT/LDT. I've used "32-bit" instead.

>> +            if ( (rc = ops->write_segment(x86_seg_gs, &sreg,
>> +                                          ctxt)) != X86EMUL_OKAY )
>> +            {
>> +                /* Best effort unwind (i.e. no error checking). */
>> +                ops->write_msr(MSR_SHADOW_GS_BASE, msr_val, ctxt);
> 
> write_segment() can't fail.  (The sanity checks are actually deferred
> until after emulation is complete, and I'm not sure if that's behaviour
> we want...)

> However, more importantly, if we actually take this error path (for some
> future reason) then we've created a security vulnerability in the guest.

You mean in case additionally the write_msr() also fails? In any
event, ...

> It will be strictly better to crash the domain in this case, than to try
> and let it continue in this state.

... we don't return OKAY in this case, so in most cases the guest
will already be crashed, won't it? Otherwise it's not really clear
to me what action you propose to take to "crash the domain": Right
here we better wouldn't call domain_crash() (or else we'd need yet
another #ifdef __XEN__ around it).

Furthermore - what does what you say here mean for the (existing)
similar code path in SWAPGS handling?

>> --- a/xen/tools/gen-cpuid.py
>> +++ b/xen/tools/gen-cpuid.py
>> @@ -295,6 +295,9 @@ def crunch_numbers(state):
>>  
>>          # In principle the TSXLDTRK insns could also be considered independent.
>>          RTM: [TSXLDTRK],
>> +
>> +        # FRED builds on the LKGS instruction.
>> +        LKGS: [FRED],
> 
> Hmm...  This is the first case (I think) we've got where a dependency
> that goes back numerically in terms of feature number.
> 
> Obviously we need to support it, but I'm not sure if the deep_deps loop
> will cope in its current form.

As you know my Python isn't overly good, but looking at the loop
makes me think it deals with this fine; I can't see an ordering
dependency. Looking at the generated INIT_DEEP_DEPS appears to
confirm this - there is an entry for LKGS, and while I haven't
counted elements, the set bit there looks to be in a plausible
position. The only thing I'm not entirely certain about is whether
a further (hypothetical) transitive dependency would also be dealt
with correctly.

If there was an issue here, I'd really like to leave addressing it
to you, as your Python is surely much better than mine.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 08:05:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 08:05:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518301.804678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjy8w-0000sh-Cx; Wed, 05 Apr 2023 08:05:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518301.804678; Wed, 05 Apr 2023 08:05:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjy8w-0000sa-A9; Wed, 05 Apr 2023 08:05:22 +0000
Received: by outflank-mailman (input) for mailman id 518301;
 Wed, 05 Apr 2023 08:05:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=enMU=74=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1pjy8v-0000sP-Fq
 for xen-devel@lists.xen.org; Wed, 05 Apr 2023 08:05:21 +0000
Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com
 [2607:f8b0:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9986dc51-d388-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 10:05:17 +0200 (CEST)
Received: by mail-pl1-x62c.google.com with SMTP id n14so17852232plc.8
 for <xen-devel@lists.xen.org>; Wed, 05 Apr 2023 01:05:17 -0700 (PDT)
Received: from localhost ([122.172.85.8]) by smtp.gmail.com with ESMTPSA id
 j1-20020aa78d01000000b0062e032b61a8sm6873761pfe.63.2023.04.05.01.05.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 05 Apr 2023 01:05:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9986dc51-d388-11ed-b464-930f4c7d94ae
Date: Wed, 5 Apr 2023 13:35:12 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: qemu-devel@nongnu.org, virtio-dev@lists.oasis-open.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	stratos-dev@op-lists.linaro.org,
	Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xen.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Sebastien Boeuf <sebastien.boeuf@intel.com>,
	Liu Jiang <gerry@linux.alibaba.com>,
	Mathieu Poirier <mathieu.poirier@linaro.org>
Subject: Re: [PATCH V3 0/2] qemu: vhost-user: Support Xen memory mapping
 quirks
Message-ID: <20230405080512.nvxiw4lv7hyuzqej@vireshk-i7>
References: <cover.1678351495.git.viresh.kumar@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <cover.1678351495.git.viresh.kumar@linaro.org>

On 09-03-23, 14:20, Viresh Kumar wrote:
> Hello,
> 
> This patchset tries to update the vhost-user protocol to make it support the
> special memory mapping required in the case of the Xen hypervisor.
> 
> The first patch is mostly cleanup and the second one introduces a new
> Xen-specific feature.

Can we apply this now? I have developed code for the rust-vmm crates
based on this series, and it needs to be merged/finalized before those
changes can go in.

Thanks.

-- 
viresh


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 08:11:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 08:11:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518305.804688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjyF2-0002X9-0h; Wed, 05 Apr 2023 08:11:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518305.804688; Wed, 05 Apr 2023 08:11:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjyF1-0002X2-U6; Wed, 05 Apr 2023 08:11:39 +0000
Received: by outflank-mailman (input) for mailman id 518305;
 Wed, 05 Apr 2023 08:11:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gw2r=74=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjyF1-0002Wv-3Y
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 08:11:39 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2060d.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7bf60ff4-d389-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 10:11:36 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7419.eurprd04.prod.outlook.com (2603:10a6:102:80::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 08:11:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 08:11:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bf60ff4-d389-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VHPWpZdtDyK64D90A3BgbC8RpvNru8I8zGT/6jgkJjlP7r28ypRC/wT+o++dW+LwJGJ83C5wfNjZRo+vYHlNKSE1GJx43FMXFGRbuDVeWEDfA3qInkaXlwypiJOmUBlpa8IwOYFXjM5Kepp7AKmOQ/Qlectsa6VffTrkXKBSAk/xoBXtOicc8NPHzV7RQxNWeFtzTtaxMUdBRTlE5MWJsBNbP9y7OKdXS2OchoYfkxSJjBymSiVpwDmSN9wDIPn6nZCe2bjvVpTr+uYlCmMBowfzg3SdwkzZQLYUKmmnHYUmzBEVs15ugoNjXBEdKXtzHzqDfUCFHM2oD+u6jy7eEA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=tCv6Uw70k5QdWXzfFmwvfNojOrbNjqU6z5B2gibT+ZQ=;
 b=M0C6L9/zmLXJdbjIVGDSVepBP+THbCHR6E/jHZNWOTOIahqULkrxhUu9Iqkm0vWLGgxHPDrcaSnJL6if6su918gLwHibaOpUTl7RbSjdGOhUmlRh/NXTmeLm5HsKYHGAVVtwJ+k5YxO82gSv9kfRObeDE+WJHw6qEHJLdO25qQH02lsZeDN3cI9gOm3WpbsVkO28JKazeO3UgWwj5ck4N6af/jL0pVNWsv/tHXSDYKN3C8bOLtSbPo9lNFG5vMaojFAhVbN8g6vFmfNs665SpoI/7Uf847heZvXVZuORGQ/X5u2f1cw8Yos6+umT7praazRxp6dCvIAChl/87SmiBw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tCv6Uw70k5QdWXzfFmwvfNojOrbNjqU6z5B2gibT+ZQ=;
 b=TnNCEWHzBygeX0PbXe7nF8o7lDoe7B7g2Er+KiHyaHWox9+NjYNrIHktYFyAL52jVhFvgG+Ys7u5kTzMCM3hw0J5cBkJ0P31lioMmsUCtAzcgIjFwLF7QVdobOCb/d4tmYkBS7V3qMKdbGfzoz1zeB5iWRGzuzZuDb7S7shinhbYbGraZ+Xaz3qWxamaubji4V15HmvQEqsHXD0h8AjvcbjAbdTLS4gfDslEPZlbUkJe55cAnClegj04QuL/l8twoitCL7cifwFNqj0y9OZuURfegE0cOosNj4QixwLSZgfmzXr0lPtlB1LIxFzJlU7j+esyJuecEDHbbRCVrPwd5g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5cda250b-57b4-2833-99d3-84ae8ca32059@suse.com>
Date: Wed, 5 Apr 2023 10:11:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <ZCv3+cpzJ52Y679G@Air-de-Roger>
 <3752672b-f4a0-5ffb-9759-dd315ce31079@suse.com>
 <ZCwM1SfCAfh2koBD@Air-de-Roger>
 <ac13fa57-ceb2-0aaa-dcfa-42d8d01ee6d7@suse.com>
 <ZCxI18gb8zK5X+nR@Air-de-Roger> <ZCxSooPqPwpGW6yv@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZCxSooPqPwpGW6yv@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0187.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PR3PR04MB7419:EE_
X-MS-Office365-Filtering-Correlation-Id: 8aaaa2fc-e1cb-454d-229e-08db35ad5e4c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hQrqelugHpzuJ9mUtxy9n/hrtCQ0aTkQUlkXSfWjTcMm7BfT+GaCBG9oeQr7LS3FUn7YSPbEyl+wmj72xBPJmwLiNiLR9ogaCVXzBvXWDQN9UP/AyKSNRjEvvDNr27s+03AC8HhsLLvd/sr69SB2mjZwYzz1qE7nCKcLqtvo7Oh2Rlmzcauqh9M4gVXbIPWOA3HwGFg/RDc8103PzElayvggKvKKLwuD9ametafRZX0Mxv5dsSnxeroVYoAm5ZmgBEvG91NTBBpaWQiDfRHySG7jnLh/0pqkW47bBam1+jY+jqeXI1zp5kwQo1kshKc5tl9xuCl43tJEnVRgEJZKY3i3Rzsmuqd9RNJcpIyXCUsKb2GBHM0m21bYKF0/Sw1kmYkWlshUAO0Q5juDuBACC75V8+v1RuyIOn1qcRTbj3Z05cOem/p1WVWdFQkwXF/eTRvsDN4whbVay91JOpEnj+QsinYJLwGB68l1QsYNV4c0UDplRnSgv2SINgiBJHhHl8d21sRq0mXx9d3IEALQfdzfroHvt/xZlHBhh6WK1wT1DSqD6iG/b7x+7v1pdTD31CfkECxCxFcUpNbdfU/pM5uoRoNppjgiykWRK5u71PhsbPZ/ZfHWAMdubHxMLCA02J5WuI+yp+tRrzaBjZRHeg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(39860400002)(136003)(396003)(366004)(346002)(451199021)(478600001)(66899021)(316002)(54906003)(5660300002)(8936002)(36756003)(2906002)(31696002)(4326008)(6916009)(86362001)(8676002)(38100700002)(66556008)(41300700001)(6512007)(26005)(6506007)(53546011)(66946007)(83380400001)(6486002)(2616005)(31686004)(66476007)(186003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RW0rNksrRFhrczFRYUhZWTVqYWlod2xjajJQK0V5M1ArMmU0YUp3c2RsUE5q?=
 =?utf-8?B?OU4ycC83alpZdk9CUi9LUDNQQUxaRloyaGEwRXJRQjhWUFhEdHpzVGQ4MUJn?=
 =?utf-8?B?WG0yMlJIQU50NjczVlgyUUgxd2NWdzRjSmQvdWNrV3VCMkJXRUkzb1lWRFNU?=
 =?utf-8?B?L3Fhb0RLQXlCOEZBc3RBdE8vK3VNVHRudVRmU1ZVLy9EMDY1bW95bHZnV3BC?=
 =?utf-8?B?SjFML3NLNmFVRVVjY2x6djNmQlFsUHY1WitDaS8ycE1oS2lJbXRMNXlaZmli?=
 =?utf-8?B?R3hra2pYVW54QU5GMi9OdkIvQUlzUGxvbzdvMnlkb2E1OThKcStaZ3V5dnkv?=
 =?utf-8?B?Z3pjQ1M3TUdIalZkc0xyZXUrT0hJNmF1V1F5S05VSjFpcjFBNm4ybnhlYUJq?=
 =?utf-8?B?N3VpZlhZcGlTbzJpWWZrdWc2MTluS0R2WDlUdHNEMWswTitxbVlDM2pYMi9y?=
 =?utf-8?B?OVI1UDhWdFNPaHBoLzc4RHROcHc3LzdCSFFBL2tONFZkVXBDSU9ucmNhUWNy?=
 =?utf-8?B?SEtrSElEYkdaQW8vTWUzdTErZm5EbERiWG1QeVV0MFBuUCtzR254UzJteU9x?=
 =?utf-8?B?THEwNXlaVkc5OGdrZFloTXRWR25Vc0h2RFQveDVIQlZHbkVkRVJhLzhybHB0?=
 =?utf-8?B?c0lHRHNMMllKWDJkR1ZMUUJuOXZwc3ZBRTlzM1VIbWVHT3k0QjhwL3c0Zlp3?=
 =?utf-8?B?K3dGdUsrRFZyTDJHdWdkZVdqaVVDVlR6NnNIMFBPWXlKRnNtZVNtY1VKVFNx?=
 =?utf-8?B?VExjYVllZHljU0pUMHd0ODBpKytTaWR2NGZoMmw5MmkydFNuNlp5eFFsM1ZI?=
 =?utf-8?B?dUZmU1NGNUNjSmRPWmlIeUdPcVY4VmRseU96RzlLZWV2SkhPNzgrVlpPaDBy?=
 =?utf-8?B?MG5ubEpGcFZvNzNSQ0pJY01LU3N0RFE0am5Ed2JPdzdqMVlmQ1k3VmhLamVK?=
 =?utf-8?B?YWxRWVJ2VnJOVy9rbjFHb2NySlBQQWJFVXpWVFRjbzhXS09OOG9CdUhTNTR5?=
 =?utf-8?B?aVlIcUVTNVVlZ3BkSlR4aDIzL2l1Rk5WV0JCVEMrSVJQK1E2RzdJQ2VYYTBt?=
 =?utf-8?B?K2dvYmxDZ0UxL3FKenFEenMwSXFlL1I3akhrL1MrUkl1OW56UER2bm1KQi9m?=
 =?utf-8?B?eGswYnY1R2ZZZy9LY2RsejFsSUIzL1BxNGgzOWUxbFpMbU5JSXdsQlNHUGVY?=
 =?utf-8?B?ekRpdC9Pd3lVbjRFZ0kvS2I3YWFJVzlwcDhwaUdZK3kyUzJsY2Z5SmxWekR3?=
 =?utf-8?B?eE9aVmlqU1FrbFdlU1d2YnEwa3BEcHBNTExvM3hKQVVjcEh3bWxjeTN6SjhG?=
 =?utf-8?B?VS9RVm5pbUhCN05KdHhzcUFLN09UQVFsWCtkcFh0aDZvdWZZQk1jNDlaVHMz?=
 =?utf-8?B?cU52TmYwdXYrTks3amJDdXlMV3BWdC9pQU5HTDFhSytMVW9ELzlkSTFHeXpD?=
 =?utf-8?B?ekJ6cERmSFU1V2R6SDF6Vk41NFc2amRjdS9na1hQVVdnQ0hJcHk0ajRZclZO?=
 =?utf-8?B?WWhlcWZlekJySWwwZkY3WWNvWVN4MmdPYjVKZUZEeDY1SGNNa1hiMWJRLy95?=
 =?utf-8?B?czFyQm5keElpN1VxY3BReStSa093Y0VtR3ZYczJPY1hjWkFSdGYyUTY1VXBk?=
 =?utf-8?B?S1ZxM1pPRnY0S2JsemxlTEdOb0hEdnFSTGltMEVKVUFVVHF0TzFUVUY2alBJ?=
 =?utf-8?B?cDdsd3pRMW9EdFhscXV3UXN3a0FuYmF1WlpKNHFhbGMwaDhIb1ZGeEszL3V0?=
 =?utf-8?B?VzdBckNYSlhmaGp4TXlqR2hsRy9SU0R4dmE4Y2hwSEZUd28rdnREQXVULzU1?=
 =?utf-8?B?UXNaNkl0bjJuZXV1NTl1ays0VWc5cUN4WGFVZXcxcVpvdWhKMlJNdlNER0Uy?=
 =?utf-8?B?UnM2aWxPOWdNS2xPbmR5bnVPYnE0WnlCTjYyOXp2a1lzNXZRTUZXc21NUUJX?=
 =?utf-8?B?YVYxa3l0RnVjZkdlbkgzVEU0ZkNKaTJOZExiOXZ2U3hScThPN2dPKzVUbDFi?=
 =?utf-8?B?aW9GUnVhWk1xM1ZJNmNkNDNlSVgzMDlqQ2t6WXBvSlA4N1cxRnBTTnF1VlQ4?=
 =?utf-8?B?VkdRb01aak1qUDBKYnVndFRaVUI0c2p1dC9JWjhHYzJ5RkJXc3hGMGgrSDZm?=
 =?utf-8?Q?D4litlYL1SyzgG0K5m7OJ+Tsk?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8aaaa2fc-e1cb-454d-229e-08db35ad5e4c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 08:11:33.2429
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: cJwwp4Wu2JW559/hlApxWMpu3OcVu8dpes5DDXmUbyjx8IPoFixFzmUE8KNdr2q+G80Ep2CjpCm22dC8TtNiXQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR04MB7419

On 04.04.2023 18:38, Roger Pau Monné wrote:
> On Tue, Apr 04, 2023 at 05:57:11PM +0200, Roger Pau Monné wrote:
>> On Tue, Apr 04, 2023 at 04:24:16PM +0200, Jan Beulich wrote:
>>> On 04.04.2023 13:41, Roger Pau Monné wrote:
>>>> On Tue, Apr 04, 2023 at 12:31:31PM +0200, Jan Beulich wrote:
>>>>> On 04.04.2023 12:12, Roger Pau Monné wrote:
>>>>>> On Wed, Feb 15, 2023 at 03:54:11PM +0100, Jan Beulich wrote:
>>>>>>> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
>>>>>>> applies to guests also when run on a 64-bit hypervisor: The "extended
>>>>>>> CR3" format has to be used there as well, to fit the address in the only
>>>>>>> 32-bit wide register there. As a result it was a mistake that the check
>>>>>>> was never enabled for that case, and was then mistakenly deleted in the
>>>>>>> course of removal of 32-bit-Xen code (218adf199e68 ["x86: We can assume
>>>>>>> CONFIG_PAGING_LEVELS==4"]).
>>>>>>>
>>>>>>> Similarly during Dom0 construction kernel awareness needs to be taken
>>>>>>> into account, and respective code was again mistakenly never enabled for
>>>>>>> 32-bit Dom0 when running on 64-bit Xen (and thus wrongly deleted by
>>>>>>> 5d1181a5ea5e ["xen: Remove x86_32 build target"]).
>>>>>>>
>>>>>>> At the same time restrict enabling of the assist for Dom0 to just the
>>>>>>> 32-bit case. Furthermore there's no need for an atomic update there.
>>>>>>>
>>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>> ---
>>>>>>> I was uncertain whether to add a check to the CR3 guest read path,
>>>>>>> raising e.g. #GP(0) when the value read wouldn't fit but also may not
>>>>>>> be converted to "extended" format (overflow is possible there in
>>>>>>> principle because of the control tools "slack" in promote_l3_table()).
>>>>>>>
>>>>>>> In that context I was puzzled to find no check on the CR3 guest write
>>>>>>> path even in 4.2: A guest (bogusly) setting the PCD or PWT bits (or any
>>>>>>> of the low reserved ones) could observe anomalous behavior rather than
>>>>>>> plain failure.
>>>>>>>
>>>>>>> As to a Fixes: tag - it's pretty unclear which of the many original
>>>>>>> 32-on-64 changes to blame. I don't think the two cited commits should
>>>>>>> be referenced there, as they didn't break anything that wasn't already
>>>>>>> broken.
>>>>>>>
>>>>>>> --- a/xen/arch/x86/mm.c
>>>>>>> +++ b/xen/arch/x86/mm.c
>>>>>>> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
>>>>>>>      unsigned int   partial_flags = page->partial_flags;
>>>>>>>      l3_pgentry_t   l3e = l3e_empty();
>>>>>>>  
>>>>>>> +    /*
>>>>>>> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
>>>>>>> +     * understand the weird 'extended cr3' format for dealing with high-order
>>>>>>> +     * address bits. We cut some slack for control tools (before vcpu0 is
>>>>>>> +     * initialised).
>>>>>>
>>>>>> Don't we then need some check in the vCPU init path to assure that the
>>>>>> cr3 is < 32bits if we allow those to initially be set?
>>>>>>
>>>>>> Or will the initialization unconditionally overwrite any previous cr3
>>>>>> value?
>>>>>
>>>>> That's not the way I understand this "cut some slack". Instead I read it
>>>>> to be meant to cover for the VM-assist bit not being set, yet. Beyond
>>>>> that it is assumed to be tool stack's responsibility to constrain
>>>>> addresses suitably. If it doesn't, it'll simply break the guest. (There
>>>>> is some guessing on my part involved here, as the original introduction
>>>>> of that code didn't further explain things.)
>>>>
>>>> If it's just the guest that's broken I would think it's fine.  As long
>>>> as such mismatch doesn't cause issues in the hypervisor internal state.
>>>>
>>>> Did you see a toolstack setting such entries before pae_extended_cr3
>>>> is set?
>>>
>>> To be honest - I didn't look. As said in the longer reply to Andrew, I
>>> think it is more logical this way (the page table root already being
>>> validated as an L3 table when vCPU 0 is initialized, which includes
>>> setting its CR3). Hence even if right now the order was the other way
>>> around (which I doubt it is), I wouldn't want to make it impossible to
>>> restore the original ordering again.
>>
>> IMO it would be better if we could already report an error at
>> domain creation time if the toolstack is attempting to create a domain
>> that the hypervisor knows is not going to work properly, rather than
>> allowing it and having the guest fail in maybe non-obvious ways.
>>
>> It seems to me however that we would need to fix xc_dom_boot_image()
>> in order to setup the vCPU before creating the initial page-tables.
>> (->setup_pgtables() hook being called before ->vcpu() hook)

This might be a possibility, yes, but it's (imo severe) scope creep
in the context here. All I'm after is to restore code which was
deleted in error (and which, when it was still there, was not
properly put to use in all the cases where it would have been needed).

>> So I don't think this is strictly worse than what we have, but it
>> would also be nice to get things sorted out so the ability of the
>> toolstack to shoot itself in the foot is limited.
> 
> Maybe I'm confused after all day, but isn't the hypercall used by the
> toolstack to set CR3 the same one used to set the vm_assist bits?
> (XEN_DOMCTL_setvcpucontext)
> 
> At which point we just need to make sure d->vm_assist gets set before
> attempting to load the new CR3 (seems that way from the quick look
> I've given at arch_set_info_guest()).

Right, it is this way already. But that's not the point.
MMUEXT_PIN_L3_TABLE may (will?) come ahead of this.

Jan
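The constraint under discussion can be pictured with a minimal, hypothetical
sketch (the function name and shape below are illustrative, not Xen's actual
promote_l3_table() code): without the pae-extended-cr3 VM assist, a 32-bit PV
guest's CR3 holds mfn << 12, so any L3 pgdir frame at or above 4GB cannot be
represented and must be rejected.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical, simplified model of the restored check: a PAE page
 * directory whose machine frame lies at or above 4GB is only usable by
 * a 32-bit guest that understands the "extended CR3" format, because a
 * plain 32-bit CR3 value (mfn << 12) cannot encode such an address.
 */
static bool pae_l3_frame_acceptable(uint64_t mfn, bool pae_extended_cr3)
{
    if ( pae_extended_cr3 )
        return true;  /* guest can express frames above 4GB */

    /* With 4KiB pages, frames below 4GB have mfn < 4GB >> 12 = 2^20. */
    return mfn < (UINT64_C(1) << 20);
}
```

The actual hypervisor code additionally cuts the toolstack some slack before
vCPU 0 is initialised, which is the window the thread above is debating.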


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 08:20:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 08:20:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518309.804698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjyMv-0003RY-SC; Wed, 05 Apr 2023 08:19:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518309.804698; Wed, 05 Apr 2023 08:19:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjyMv-0003RR-Os; Wed, 05 Apr 2023 08:19:49 +0000
Received: by outflank-mailman (input) for mailman id 518309;
 Wed, 05 Apr 2023 08:19:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gw2r=74=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjyMu-0003RF-AF
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 08:19:48 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0612.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::612])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9fe97a41-d38a-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 10:19:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB7034.eurprd04.prod.outlook.com (2603:10a6:10:128::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.30; Wed, 5 Apr
 2023 08:19:43 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 08:19:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9fe97a41-d38a-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LJIIw9REseH4fauPitjsdNPAjQyVKU8SiTyUGnIXdsFKVMGxsT2Zn+kzY+qR2NaYdBoMfV0W7NuoA0WGtDo6ipN/+eJlF5SrroHmk0sLHikPIjRBdwFnNFwd9fYufaSk2uqr4455R/v+AkjxX4wqDR2WXEJ0spyNgPCay9jeG7Gklqd6UAkrFrUAcZi71y/dyRNYTD3jdr5RtWSzZ6PNoqWdAVrdnnoUitEZ2Q4qYIKtl01689tRGbYL8hSiDcZ0g8AT8oA1eojMcwnA5iUTEaZnrSsRtbJpahiZQVR87v3pzaUg9qFH/Pg+61sGuc6EviojFZnhvQDuhtpotZXOVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KvZ9P6Oc5QfRT6Z13Iipi/AKyrzjUUpibjnnKvc4YWY=;
 b=al34kqBMu4YlavyoKPMQSrm6yUkiv4GyhiWhe+D9VRmXHasiO17/USrJPoKsAX9yECrZEKRUYLERf0TpnfwXsw1e3XZD7i9kGvjgADQag/9z2WSbVT9yM4sfP+N/75VscTYhMqoZQnLDyTXbY5tpg0Y5595lmbkjOBOXG1+kbGXttnhoFyorQrkERLVT00KnGPse+A2lazHx+alU6+w6Omryh5erHJH4L5V4NfWOTASF6JmDJ85ThR3o6rucwHP8LCRFJ3W6O8vYVPQWUrUTF27xMxXF+oB8E9v8MkPX6FS9vB6mt1XJbGzbHQuLoegE4XqQftaAIM81jnsAeUhu2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KvZ9P6Oc5QfRT6Z13Iipi/AKyrzjUUpibjnnKvc4YWY=;
 b=R8cekt3y2+riXqT9vV6/gNV5YGGimhmLaK5sQ1RpRyq4nMqpqGcU7XpgcVP4Iv1FZLNa4CQoOeV5XAHUscDoHb2TskY4J4SgChMm9Tlx7pONBCd9Dl2+e5qpidQKtl4kE4zaXSk4kZk/FHqn2m0C8vJRnGohl+uiu8ozrI3s32A549S77LwLr3WW/zAGAnVRbs1vratXsADCRkTlbWZyvx1WOlDuiKsEFu+KInyXjB9U85giMEKuBxYiv6aN4owAt9bObd2fjk9znDsqXvdlUaTOVbmje7tL+9AIhE92BForPb2oQEHE8tTa1aU2I8vQW8KQP94p7aWLxR7WhwRb2Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2cc24b27-d2d2-4f15-acfc-acfd802f04a1@suse.com>
Date: Wed, 5 Apr 2023 10:19:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v3] libx86: Update library API for cpu_policy
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230404095222.1373721-15-andrew.cooper3@citrix.com>
 <20230404210600.1404532-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230404210600.1404532-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0145.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB7034:EE_
X-MS-Office365-Filtering-Correlation-Id: 87c538e9-cb39-440f-43f0-08db35ae82ad
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	qe+8HgCJYGrx+Vw8t2FhRJqNaoT/N2nahDreqbEKA/aOOLtb5i1flnZO+zSeugY8OiGHAZRSh3o0yM6FCIF4gGGgm5MqglOmFUVyTqa13mI5YM0Jj+MwmrYjI2Pf4+mHlzNdJAoJUB8UIKi+spSNBp7bIxNZSyAUxAzWfZWyw51gp6J+xD+p2oYLcZsmF9jTCzduMvKzrhDyMmx3/GrZnFlN6ONzvyyz56ZFJEN3FrbJ38eeh6Q1OCZECms8epxk0dmBCJ5dK65VANMSQ71gXUSZ6oHag4SyAULXO15HKeXuXIL4XxJzfJtYQNc3mkoPoklJYZ2L97MZHKBSshg1byQ/DHqhYL9neRx7g+W1TUCI1Ul0+XDTKkSkV/vvlRRWteYzqqxQu/PbHX1lEcj9ztM9rZ5bwSRIl+rA/0A87Dy4MCHRA/8yGJUbFLWCg5TaEULsw4Lrv2pG1+sTLGqF9dZRgXLOrkDbyz7ceY9ly0+Tp5PtKjtSgHmfK63OCa8+kweuAblO5xygrAs79senXo6gQ7yIPNdlv8SWSBxs1KdGknzd3uyviWF+Mn0v+gbaYDlIrMAx24qTmdUBCU9qkzTAlcufbWfFSpcqGPw3CqTynDQgdM1EIRcFiEp7VOmANHiIt7jIwjJ2P1/Ln/UH8A==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(366004)(39860400002)(376002)(136003)(346002)(451199021)(31686004)(36756003)(38100700002)(5660300002)(4744005)(2906002)(8936002)(8676002)(66556008)(66476007)(41300700001)(86362001)(31696002)(66946007)(4326008)(6916009)(2616005)(26005)(54906003)(53546011)(6512007)(6506007)(316002)(6486002)(478600001)(186003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Q0pmWHByU0kxNiswQ1pTd1RXdFlYM0F6NHl0VGZlRnowY0pVUXkrelFwSzlP?=
 =?utf-8?B?WlZoeE5QSkwyU0FGMGlUV2ZxR0Nhc2JQUjd0cnJaaFdURGVtM3pQZnhPTjBE?=
 =?utf-8?B?SS9KaStxMlZDcVFPNFo3NnhVS2NtVGoyOEpOcVV0RHFlUGc1ZWRIWGtiODFQ?=
 =?utf-8?B?YzZ1Tmw5WEtFWVVzaWJSb0lMVWVLVXBIRnkrcFlUMkpkaFJ2NnlveUExRThh?=
 =?utf-8?B?SCs3TVdGNUFSK3RySjhQREt2QVZ3dXlOd0dORTVXeEltM29xYTVLZTJSWHNv?=
 =?utf-8?B?c2xyT2dDMm1yTm11ZzlCVkhaZnZ2TkVjQXozVFZML2d0cHF5VTRRZWxBRzhZ?=
 =?utf-8?B?bGZLREhIYTVtQ1MzNFg0YUwxUHFjM1Y5aXZtbjhuRStDdUp6SEhsVlN5WG43?=
 =?utf-8?B?Rk1UeWoyZkd6ZFR4anZwZEhKOUw4ZVZFR1RBbk1lQUhGRlNLZytkRkZRbWVu?=
 =?utf-8?B?STNLNTBnakNHM0VsM05FS3BtYWVCZ3VCOEpYcmx2REhMS2VsYmo4dkluVG95?=
 =?utf-8?B?OHJ3WEpxUnNvK21NMTRMVkpXbHkwQ1k4M3JNUmR5eEJYOWxDUUFaK0pBT3Zy?=
 =?utf-8?B?eGlXOGNJQzRib3k2blpZZjJsdW0yTHZvTGhvSjVKNXBFNnZNaGFKU0lHM2RQ?=
 =?utf-8?B?Nmp4eklaRDVwMHVRbkZXRXprVUczYkgvV3FBY3NONmE2T2xvY2ZRdW1HVDE2?=
 =?utf-8?B?aGRlR1pjcHhsalpMTDNkNjRVY0tzeVo3UjhxNDlyYkZFMDNSTlFVUWcyU2kw?=
 =?utf-8?B?OVZXRHdLdVpJbXFhZnErOFE5TW5qWnRhUlVLeXhhdG5GZnd4M0RFZG1KNHJ3?=
 =?utf-8?B?ODlMZm9McnFpRi9RSW90d3NrUk9tSjgxWmRpd0VxTGJhS2xqYlE3Q3RTaXVl?=
 =?utf-8?B?bHd1UWw5K2RIeGVYcUN0cnNSQ1FVRXZtNmN6ZklXYmtPenB0NG1INXpTamZF?=
 =?utf-8?B?dWlaS2oxYndHMms2TE92Umo5bXE5QVdjeDRBTnEzcENuVUJxcVdPaGFqc3ha?=
 =?utf-8?B?cVFBT3EyQjlPdlQ1QS85bXJKWUNTZEFUbTRYRElndktZNzJoNkZpRXIyRnNI?=
 =?utf-8?B?cjdldjVKaHlCZmthMTBKcStoak82aGwxQUJGR0Z0YitHYm1yaWZSOUkvTE5s?=
 =?utf-8?B?c0hrT0xUcWFWR0t6MzJvM1phNUhucTdWTVlUOEdON21IQVdnL0FGTnpzWGk0?=
 =?utf-8?B?S1Z3MEdkVmEybnBlaEJjRWtUTnFmVHhqbVpYYUpVUll5UFJSUUg3UkNSdm91?=
 =?utf-8?B?UHlTcXh4SSt4aUtwTkFPZlJRSkF5czdMSUJKR3RqV3pmYVo5WEU0TEUxdGw0?=
 =?utf-8?B?QnQwQU9xK2FneE9SbCsraGNZbVUySkNrZHpFeC9OQ2x4L1hXekFBWEhlcFNh?=
 =?utf-8?B?VlZjbmQ3Tm5EakpIOVdOcFcyb0dZZDRNd3E1VEhwc0xjZWp1clE2R1VWTDla?=
 =?utf-8?B?YjlVZEVuV2pvTlBEMFdjdlZpbzVGd0E2dUoxejB1S2xBTnFiUFRxeEozQXhZ?=
 =?utf-8?B?ZjVERXpFY28wU2lIS2IwTmV0emhRUGlZUW9WNDlHY3l6TndpYS8xUzgwbW9j?=
 =?utf-8?B?RHNrOGcrRm9KYmxmdGpBWVNHZGEvaERrcXJidk1xYThYbEEyZ2t5WWlsZ1Zp?=
 =?utf-8?B?U3o5NDVjWEpEWHRqNWxEcU5Dd3RBdk5KVi9FSTRnZWNSSTUyNFY1amF6ZmVa?=
 =?utf-8?B?dVJUU0EzNlBWWDVtT2d5aCtMeG9zT2h4dzYraitrMFZKOFNNUEl0Y1h4Qm1W?=
 =?utf-8?B?dDNFK1RlN0ZJR1BRMTg2ei9WOGtkdmJKK0M5L3FPTzNjaWdETVFVTlY0aS9z?=
 =?utf-8?B?Z3VQMm92N0pOekJZK3dJQjhZQmhwbndvV21uR2FsRGNhb1owZC9LZm1TSGo0?=
 =?utf-8?B?YjF3SWFLbGFPYlpIcENyNkF4TEF1dkhiZUdTYkc0NXhFUkJZbU1rMHNIeDZP?=
 =?utf-8?B?RXlKbFdZanhLY29BRzZic3h2aEIzVStxb2VCd2N4MFFtWkFEaEJUMU9NYzZT?=
 =?utf-8?B?UXJpRm9ReVZHUTZzakcvNFZSVUduSzBSekJicnVpQzhmMkNqUHJiZUVteFRw?=
 =?utf-8?B?RVJmcHZYcjdGYnFkSnRwTU8vaDVVeWF0T2tGRDhZZ0ZwK05tUVJma25ZYUFF?=
 =?utf-8?Q?zOUB7xSA8+xGgda74F7X46e67?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 87c538e9-cb39-440f-43f0-08db35ae82ad
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 08:19:43.8521
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VgEpno+jo7JJVlWQf6WJvItBSVk/RcBNNWqxD8i8MwggAUOzfqXKnYVuvyGR9aqE+feK2PiR+sh0GmmHHY6U6w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB7034

On 04.04.2023 23:06, Andrew Cooper wrote:
> Adjust the API and comments appropriately.
> 
> x86_cpu_policy_fill_native() will eventually contain MSR reads, but leave a
> TODO in the short term.
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Wed Apr 05 08:29:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 08:29:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518313.804708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjyVd-0005CJ-Lf; Wed, 05 Apr 2023 08:28:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518313.804708; Wed, 05 Apr 2023 08:28:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjyVd-0005CC-Iy; Wed, 05 Apr 2023 08:28:49 +0000
Received: by outflank-mailman (input) for mailman id 518313;
 Wed, 05 Apr 2023 08:28:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=//T7=74=linux.intel.com=andriy.shevchenko@srs-se1.protection.inumbo.net>)
 id 1pjyVb-0005C2-KY
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 08:28:48 +0000
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dfb21dff-d38b-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 10:28:44 +0200 (CEST)
Received: from fmsmga008.fm.intel.com ([10.253.24.58])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 05 Apr 2023 01:28:41 -0700
Received: from smile.fi.intel.com ([10.237.72.54])
 by fmsmga008.fm.intel.com with ESMTP; 05 Apr 2023 01:28:32 -0700
Received: from andy by smile.fi.intel.com with local (Exim 4.96)
 (envelope-from <andriy.shevchenko@linux.intel.com>)
 id 1pjyVH-00CkZe-2G; Wed, 05 Apr 2023 11:28:27 +0300
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfb21dff-d38b-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1680683324; x=1712219324;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=wrn29fWEWUh9DQE7oDLC0JoUgdEFYGWX1tgF9OvmiyE=;
  b=Pl3Z1zw4nobkhQz6coU1698+mcSuAFdqkrLJcvMRStDyV1mrbcjwDE8F
   PtI+Nr5V++PRN7Is3hyfrpbvYKboIPPJNPpcB7iO5RCzWlvRqz4x1Y5Ia
   kZBvxgBRGR9HO2jb/dn6zicDAel5/RWqo+U/4MjyKf0UGqUsQg/ZhLdr7
   ogcaezPy6S9+SkEyTeTJ7/vQ7CmR4nX5NmnTFGiTM6kU78BtiHYRziK8F
   4k+G1+WhqqdkcgAIbvmOsYCq/RGietDTHH4jFtbVZV1DKb7VMuneJpTeJ
   K9LgrCDJe4BZpj3GjLi/qJyQOE7ZDE3Cv1tGnP2xyf/AYCzfTD+3FYIuh
   A==;
X-IronPort-AV: E=McAfee;i="6600,9927,10670"; a="344113398"
X-IronPort-AV: E=Sophos;i="5.98,319,1673942400"; 
   d="scan'208";a="344113398"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10670"; a="751192534"
X-IronPort-AV: E=Sophos;i="5.98,319,1673942400"; 
   d="scan'208";a="751192534"
Date: Wed, 5 Apr 2023 11:28:27 +0300
From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
To: Bjorn Helgaas <helgaas@kernel.org>
Cc: =?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>,
	Krzysztof =?utf-8?Q?Wilczy=C5=84ski?= <kw@linux.com>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Randy Dunlap <rdunlap@infradead.org>, Arnd Bergmann <arnd@arndb.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Pali =?iso-8859-1?Q?Roh=E1r?= <pali@kernel.org>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>,
	Juergen Gross <jgross@suse.com>,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-pci@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Matt Turner <mattst88@gmail.com>,
	Russell King <linux@armlinux.org.uk>, Andrew Lunn <andrew@lunn.ch>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	Gregory Clement <gregory.clement@bootlin.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Nicholas Piggin <npiggin@gmail.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	Anatolij Gustschin <agust@denx.de>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Rich Felker <dalias@libc.org>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	"David S. Miller" <davem@davemloft.net>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH v8 0/7] Add pci_dev_for_each_resource() helper and update
 users
Message-ID: <ZC0xK4YJrKga7akk@smile.fi.intel.com>
References: <20230330162434.35055-1-andriy.shevchenko@linux.intel.com>
 <20230404161101.GA3554747@bhelgaas>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230404161101.GA3554747@bhelgaas>
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo

On Tue, Apr 04, 2023 at 11:11:01AM -0500, Bjorn Helgaas wrote:
> On Thu, Mar 30, 2023 at 07:24:27PM +0300, Andy Shevchenko wrote:
> > Provide two new helper macros to iterate over PCI device resources and
> > convert users.
> > 
> > Looking at it, refactor existing pci_bus_for_each_resource() and convert
> > users accordingly.
> > 
> > Note, the amount of lines grew due to the documentation update.
> > 
> > Changelog v8:
> > - fixed issue with pci_bus_for_each_resource() macro (LKP)
> > - due to above added a new patch to document how it works
> > - moved the last patch to be #2 (Philippe)
> > - added tags (Philippe)
> > 
> > Changelog v7:
> > - made both macros to share same name (Bjorn)
> 
> I didn't actually request the same name for both; I would have had no
> idea how to even do that :)
> 
> v6 had:
> 
>   pci_dev_for_each_resource_p(dev, res)
>   pci_dev_for_each_resource(dev, res, i)
> 
> and I suggested:
> 
>   pci_dev_for_each_resource(dev, res)
>   pci_dev_for_each_resource_idx(dev, res, i)
> 
> because that pattern is used elsewhere.

Ah, sorry, I misinterpreted your suggestion (I thought that at the end of
the day you wanted the macro to be less intrusive, so that we would change
less code; that's why I described it in the Changelog the way I did).

> But you figured out how to do
> it, and having one name is even better, so thanks for that extra work!

You are welcome!

> > - split out the pci_resource_n() conversion (Bjorn)
> > 
> > Changelog v6:
> > - dropped unused variable in PPC code (LKP)
> > 
> > Changelog v5:
> > - renamed loop variable to minimize the clash (Keith)
> > - addressed smatch warning (Dan)
> > - addressed 0-day bot findings (LKP)
> > 
> > Changelog v4:
> > - rebased on top of v6.3-rc1
> > - added tag (Krzysztof)
> > 
> > Changelog v3:
> > - rebased on top of v2 by Mika, see above
> > - added tag to pcmcia patch (Dominik)
> > 
> > Changelog v2:
> > - refactor to have two macros
> > - refactor existing pci_bus_for_each_resource() in the same way and
> >   convert users
> > 
> > Andy Shevchenko (6):
> >   kernel.h: Split out COUNT_ARGS() and CONCATENATE()
> >   PCI: Introduce pci_resource_n()
> >   PCI: Document pci_bus_for_each_resource() to avoid confusion
> >   PCI: Allow pci_bus_for_each_resource() to take less arguments
> >   EISA: Convert to use less arguments in pci_bus_for_each_resource()
> >   pcmcia: Convert to use less arguments in pci_bus_for_each_resource()

...

> Applied 2-7 to pci/resource for v6.4, thanks, I really like this!

Btw, can you actually drop patch 7, please?
After updating the documentation I realised why the first chunk is
invalid. It needs a more careful check and rework.

> I omitted
> 
>   [1/7] kernel.h: Split out COUNT_ARGS() and CONCATENATE()"
> 
> only because it's not essential to this series and has only a trivial
> one-line impact on include/linux/pci.h.

I'm not sure I understood what exactly "essentiality" means to you, but I
included that patch because it creates a split that others can reuse
later, and not including kernel.h in the header is the objective I want
to achieve. Without this patch that goal is deferred. Still, as you have
noticed, the patch is what allows the macros to be compiled and used in
the rest of the series.

P.S. Thank you for the review and application of the rest!

-- 
With Best Regards,
Andy Shevchenko




From xen-devel-bounces@lists.xenproject.org Wed Apr 05 08:30:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 08:30:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518319.804718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjyXQ-0006ij-4P; Wed, 05 Apr 2023 08:30:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518319.804718; Wed, 05 Apr 2023 08:30:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjyXQ-0006ia-1a; Wed, 05 Apr 2023 08:30:40 +0000
Received: by outflank-mailman (input) for mailman id 518319;
 Wed, 05 Apr 2023 08:30:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=//T7=74=linux.intel.com=andriy.shevchenko@srs-se1.protection.inumbo.net>)
 id 1pjyXO-0006iP-8w
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 08:30:38 +0000
Received: from mga12.intel.com (mga12.intel.com [192.55.52.136])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 22ae3d07-d38c-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 10:30:36 +0200 (CEST)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 05 Apr 2023 01:30:34 -0700
Received: from smile.fi.intel.com ([10.237.72.54])
 by orsmga005.jf.intel.com with ESMTP; 05 Apr 2023 01:30:23 -0700
Received: from andy by smile.fi.intel.com with local (Exim 4.96)
 (envelope-from <andriy.shevchenko@linux.intel.com>)
 id 1pjyX4-00Ckbm-35; Wed, 05 Apr 2023 11:30:18 +0300
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22ae3d07-d38c-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1680683436; x=1712219436;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=o2D+LfXhLk+QrH3a0th/mLBmSHhjZTr2yV97vm5EWfI=;
  b=DgfI8W389xQ6g+SnxwTeLhQmkeB2ju0MgkeZzkdeom0I2tnAebnK/r7K
   Qoz78Ty8b0NqEL+/SjCIZKu04l6vtBxLrZDu8MSwiw5PBPJRisW6dPHqB
   GkbtSZerwjCBIAwwDQWzkAhsqhGi/RA42hbU7jBgBrzRBN9dmTr5/BXb5
   llt3CWAAoB7L0tSaszYLyXVyiKIHUTUYeOoKY5jSCBSM6Pva7mUQ80wh0
   ObwqRx7LHIGMZ3uvlhghyalLqbn12y6pIL2+IjzMb8EtpyxuHGobA3Xgi
   vIpTcj1c75leYMIo5mVNPx76x7IkUMnPbFfcsxN6YpQ7aIWjDTtNPeYAL
   A==;
X-IronPort-AV: E=McAfee;i="6600,9927,10670"; a="322058593"
X-IronPort-AV: E=Sophos;i="5.98,319,1673942400"; 
   d="scan'208";a="322058593"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10670"; a="860894821"
X-IronPort-AV: E=Sophos;i="5.98,319,1673942400"; 
   d="scan'208";a="860894821"
Date: Wed, 5 Apr 2023 11:30:18 +0300
From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
To: =?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>,
	Krzysztof =?utf-8?Q?Wilczy=C5=84ski?= <kw@linux.com>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Randy Dunlap <rdunlap@infradead.org>, Arnd Bergmann <arnd@arndb.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	Bjorn Helgaas <helgaas@kernel.org>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Pali =?iso-8859-1?Q?Roh=E1r?= <pali@kernel.org>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>,
	Juergen Gross <jgross@suse.com>,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-pci@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org
Cc: Miguel Ojeda <ojeda@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Matt Turner <mattst88@gmail.com>,
	Russell King <linux@armlinux.org.uk>, Andrew Lunn <andrew@lunn.ch>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	Gregory Clement <gregory.clement@bootlin.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Nicholas Piggin <npiggin@gmail.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	Anatolij Gustschin <agust@denx.de>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Rich Felker <dalias@libc.org>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	"David S. Miller" <davem@davemloft.net>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH v8 7/7] pcmcia: Convert to use less arguments in
 pci_bus_for_each_resource()
Message-ID: <ZC0xmk71jQrx3AeH@smile.fi.intel.com>
References: <20230330162434.35055-1-andriy.shevchenko@linux.intel.com>
 <20230330162434.35055-8-andriy.shevchenko@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230330162434.35055-8-andriy.shevchenko@linux.intel.com>
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo

On Thu, Mar 30, 2023 at 07:24:34PM +0300, Andy Shevchenko wrote:

...

> @@ -960,12 +960,9 @@ static int nonstatic_autoadd_resources(struct pcmcia_socket *s)
>  	 */
>  	if (s->cb_dev->bus->number == 0)
>  		return -EINVAL;
> -
> -	for (i = 0; i < PCI_BRIDGE_RESOURCE_NUM; i++) {
> -		res = s->cb_dev->bus->resource[i];
> -#else
> -	pci_bus_for_each_resource(s->cb_dev->bus, res, i) {
>  #endif
> +
> +	pci_bus_for_each_resource(s->cb_dev->bus, res) {
>  		if (!res)
>  			continue;

As pointed out in my reply to Bjorn's email, this hunk needs to be revisited:
while writing the documentation for the above call I started to understand
the reason behind this special treatment of the X86 case.

-- 
With Best Regards,
Andy Shevchenko




From xen-devel-bounces@lists.xenproject.org Wed Apr 05 08:37:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 08:37:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518324.804728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjydJ-0007Ye-Q3; Wed, 05 Apr 2023 08:36:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518324.804728; Wed, 05 Apr 2023 08:36:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjydJ-0007YX-M3; Wed, 05 Apr 2023 08:36:45 +0000
Received: by outflank-mailman (input) for mailman id 518324;
 Wed, 05 Apr 2023 08:36:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mHoi=74=gmail.com=olekstysh@srs-se1.protection.inumbo.net>)
 id 1pjydI-0007YN-Hn
 for xen-devel@lists.xen.org; Wed, 05 Apr 2023 08:36:44 +0000
Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com
 [2a00:1450:4864:20::42b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fd3badf8-d38c-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 10:36:42 +0200 (CEST)
Received: by mail-wr1-x42b.google.com with SMTP id r11so35341365wrr.12
 for <xen-devel@lists.xen.org>; Wed, 05 Apr 2023 01:36:41 -0700 (PDT)
Received: from [192.168.0.106] ([91.123.150.38])
 by smtp.gmail.com with ESMTPSA id
 s11-20020a5d424b000000b002e5f6f8fc4fsm13806796wrr.100.2023.04.05.01.36.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 05 Apr 2023 01:36:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd3badf8-d38c-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680683801;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=pgcwjOGVr1lFUtMdP9HgDE1IUQk05oAJXW/E6R9ojlg=;
        b=nec8S49q1O8S+prmVrvx7Te7QroUTNSdm/2WXcKzONbDjQRldGX1Mr7AS5BMRnpwzP
         Nk1iu58bReXmw1Nv7CXSJm837UmL4XYMoKX7p5TGsKI8pateVn9ytaqVESFUiEYRNDDF
         Wz7crxIvJVPfuF/dxsSOW+MTBhBpyCAZN9nDGEvgnmM1OEeGUTjTUqvIyo261MZyRqhH
         VpkfV3Qil6dw/9FOxhTsQiQGWhQZUcNzYYDfmy1RAsP7FuBsuRk9wuB/q+a7cyMCmnrc
         ntv3d2jT7y7j1QVVSaz6LYyQ43woWI6jy14pLOAFsareox5dkCBhP4XzqMyP4KQPWIXo
         Iylw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680683801;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=pgcwjOGVr1lFUtMdP9HgDE1IUQk05oAJXW/E6R9ojlg=;
        b=v3lpE0u/UpHrPqpzQqzA1X5A4tfsPEEglxuf5Hl6BXJouWd97XxSuOxpgwnxJjcgGY
         inUDQQhXqV/AT1DAW9yKS5FQGQWcxkACF+Gws+Ij0hrpYzbRRqzs18F/gy+hYjS28vRN
         TLpjx6/xF7qv7xXy3doGvsklw6S2d+WSskLOKlZHQ1uMg/FE1yVujMXTEN5chh5DwUP/
         jTBpiBpuUjabGutHHqZFyO5a3WcfdxiXfCT99rwU+rO6X6EF6sWKgIArY7r3RBGzmUH+
         6cvAKJtyxr4arIDR7dhbA6vdCPuCeriSjtP42oQqp/2VdOUxzFzVPqdOuikAHuLSBWEE
         M53g==
X-Gm-Message-State: AAQBX9dQBQWBZlM4dNAxU8uJqnZs8kEkRdtyTQU9FfjDcqAwjKWy3twQ
	Q6VSEAWYM/08rmSa+E+PHxQ=
X-Google-Smtp-Source: AKy350Yx9d0su5dgNq1fXie+FM2THoKzDb9lZNmvrjool5qMebXAgSseo4f31iSy9RiikftaOrgnYw==
X-Received: by 2002:a5d:5188:0:b0:2d6:a357:f138 with SMTP id k8-20020a5d5188000000b002d6a357f138mr3257363wrv.18.1680683800918;
        Wed, 05 Apr 2023 01:36:40 -0700 (PDT)
Message-ID: <37fa3d37-5ee7-863c-48cd-1ce313c8e296@gmail.com>
Date: Wed, 5 Apr 2023 11:36:39 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH V2 1/2] docs: Allow generic virtio device types to contain
 device-id
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
 Juergen Gross <jgross@suse.com>, stratos-dev@op-lists.linaro.org,
 xen-devel@lists.xen.org, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.com>,
 Erik Schilling <erik.schilling@linaro.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <c5d2ab978255ca84197c980cbfb9a504e7c625f8.1680653504.git.viresh.kumar@linaro.org>
Content-Language: en-US
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
In-Reply-To: <c5d2ab978255ca84197c980cbfb9a504e7c625f8.1680653504.git.viresh.kumar@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 05.04.23 03:12, Viresh Kumar wrote:


Hello Viresh

> For generic virtio devices, where we don't need to add compatible or
> other special DT properties, the type field is set to "virtio,device".
> 
> But this misses the case where the user sets the type with a valid
> virtio device id as well, like "virtio,device26" for file system device.


ok. For the record, the valid virtio device IDs can be found at:

https://docs.oasis-open.org/virtio/virtio/v1.2/cs01/virtio-v1.2-cs01.html#x1-2160005

Maybe it is worth adding that link to the commit description.


Also a nit: is the example "virtio,device26" for the file system device
precise?

According to
https://www.kernel.org/doc/Documentation/devicetree/bindings/virtio/virtio-device.yaml

the virtio device id should be in hex, so for the file system device it
should be "virtio,device1a", or have I really missed something?

With the description updated if the nit is correct (maybe this could be
done on commit):
Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>



> 
> Update documentation to support that as well.
> 
> Fixes: dd54ea500be8 ("docs: add documentation for generic virtio devices")
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
> V1->V2: New patch.
> 
>   docs/man/xl.cfg.5.pod.in | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index 10f37990be57..ea20eac0ba32 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -1608,8 +1608,9 @@ example, "type=virtio,device22" for the I2C device, whose device-tree binding is
>   
>   L<https://www.kernel.org/doc/Documentation/devicetree/bindings/i2c/i2c-virtio.yaml>
>   
> -For generic virtio devices, where we don't need to set special or compatible
> -properties in the Device Tree, the type field must be set to "virtio,device".
> +For other generic virtio devices, where we don't need to set special or
> +compatible properties in the Device Tree, the type field must be set to
> +"virtio,device" or "virtio,device<N>", where "N" is the virtio device id.



>   
>   =item B<transport=STRING>
>   


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 08:44:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 08:44:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518328.804738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjyku-0000nn-Hd; Wed, 05 Apr 2023 08:44:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518328.804738; Wed, 05 Apr 2023 08:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjyku-0000ng-ER; Wed, 05 Apr 2023 08:44:36 +0000
Received: by outflank-mailman (input) for mailman id 518328;
 Wed, 05 Apr 2023 08:44:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gw2r=74=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pjykt-0000nW-KI
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 08:44:35 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on0611.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::611])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1617d8f3-d38e-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 10:44:33 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8106.eurprd04.prod.outlook.com (2603:10a6:10:24b::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 08:44:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 08:44:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1617d8f3-d38e-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Qoo4qaPbM9PoWjOzDF+6Kl9PT4V7A1jVIpKBy7iQeEH8hutvXZyTlHOs5poH4GD+OaTn4mjT/Bpe7voidQpDJpnvuk/QuqmLsqhygDYT+b94qMW25YMRsEP14mEphqNhLJWNYNY1pjl8IdFCYDN9HhohmZPyts/9hqgAkFSX6PmOoK/6dZVV5JwnTC5fEmZBHgCsH05SyrP1ghcokLwnuidxdOxkXtJKe27K3J8lswt+0zmWa0RXIIofClw38BHSwijyOuXHIxopWA53ncFjYOOv37UDq7SWrTCh6VavRvWq/OggqmyV8y0FrjerKR7x5tEP/tGACZx8g2EQJlJLUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vupqexl+OQR8xvArBtJGbcSswETVQm/vq3260uXglUI=;
 b=UV52cjepPlk2NsrSaRJo+AN2qYA2PKL+TvfHRev/I3UKhYugjx5z2NBG384EXoUXIyZ6P9Fwjpa/JSDtFaxaK5RB3rEqnGozuhDyT4wOZ/eFEvWrVCJ/DNPEZQWdzyUriSEA+r9tKueIc0gA5Ql1d6+EW5A0Jy8za3MFKGMT1IMCpiNHG2WGpYlA17MWDu/g/WPmfkq+0WRckMaFUjdLZ0iD4nX86Pdwmxq/1Yo1uWtqSLk05ogeh/DuCQMio/b4HF2imp54pC5AD9zm4g4MGTQPj+oYh2V0nje1obEbB7x/rEDiACLSVeixF7tbeFHbsN4Jra8Wh3HXszXJttr7tA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vupqexl+OQR8xvArBtJGbcSswETVQm/vq3260uXglUI=;
 b=RPE6Ka82iyrqAGrDOKh/nTlMt4Weojedes/kjbONb8a9xSqGrJ+QcluHJD1RxiqNxutg3CjrEr7m/goflyoWSjvVWKjeMQlaVbXruVrCCRA3vRXXc1UrvIIoTenDWFc8s4TAMsZ+9Thp8v4vwghhSVzdomejfb12LF8ltboMXk/Rw74X6PajwKLRKpdY06byHGcohV9hgc3IcJgmrI6jK7q4SopuA5/vePjddodVErd8xPBhRyivbCgqd47wx6bB4IgxUMbWRBu/V+usnuOJfwpZ+fe8hn0njJ7wZduoqraoyuUszoSunwmoMfh/x/U6Dsv2u0T8/+5p6JIzAR9uiw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0f997105-4601-683a-7edf-d3fe5ee5aa48@suse.com>
Date: Wed, 5 Apr 2023 10:44:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <6d64dd4a-5b25-ddca-5c07-7b4c0fc48c0c@citrix.com>
 <5f9218c1-9ee9-c5bd-af8b-003084aa66e4@suse.com>
 <78e72635-6ebc-b3b7-83d5-134d5c63d561@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <78e72635-6ebc-b3b7-83d5-134d5c63d561@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FRYP281CA0008.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::18)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8106:EE_
X-MS-Office365-Filtering-Correlation-Id: dc0cea9c-568f-42cd-2b5f-08db35b1f7c7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JBlpMTUQIuvZJuyLUhYSAH8M3MTbbr4yZJ2lV/byyiW3iJekFd8FCNRSKLItXoXyVgU4+f8ZJNn/EasSNhpLaiZP6Rs0Aj2Ht0UhOumg7tHx7gekNi8H53H3Y6kk2AuvoJaf0MAIAUaZbeWAPq6/fpTh+d2/bf9JpueJ1qmki65VJ0COXxfVf+uxAXWT08wm76njYY8ReDwTK+KvM5E3eA6ahbNV0/WO14ZJN1Xbwq6uxmB1Z/YpCAIj9f4Myi3HMjEO+sIjL76lH0yJ5o2dS2uaqiIx1WprsmFX1MzV16Ewv2go+TnoR0njN32+OqyqfAlkYVhPdmUBTNLsMtnaCVQaxWqBaQDAdpQW4qJ4gDSUhQbmM/nvGUkcqMnj4B7hwoBNRqlZXEszWMWc1oqqruYK/B/D7JFy1L3SnZW0r5/sI8kQ50TV9xEycRmKRUH277/IxJZjWVNPC/zz6q7typpP8kPy1SR94M8m9y28OVjgkB0FGhaAFSXegxHCuqIDon2jmzwjF9aq2a5vjFP9EEn/3zl53MTbDvbfqURhmYw2pJdP5QTZQXREh246GPjLhwnWX0+LNCQ15cDiIZoIuvglePz9Tib9xZEl/NADb6TTGSeHO4bC8mE469u7J1mjfMNyjpKcA5583nE2xKFwRA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(396003)(39860400002)(136003)(366004)(346002)(451199021)(8936002)(66946007)(66476007)(41300700001)(66556008)(316002)(2906002)(54906003)(478600001)(31686004)(2616005)(4326008)(186003)(8676002)(6916009)(5660300002)(6512007)(6506007)(26005)(53546011)(83380400001)(66899021)(86362001)(31696002)(6486002)(38100700002)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?blJqNjQ3b1RsZUJJR0dONnRZQ2MrMUdvVGlSK05ud2RKOEJuQm1mdDhMUS94?=
 =?utf-8?B?T3ZHWjNIMlJHWi80akJiN1BQZjhVVHlQUjYxWVRDS1creXRIQXVLNzUxVjht?=
 =?utf-8?B?VU16Z0Y2RlFoUUNyWjMydUJPUTFzQ2ZxcDljMkhBdk5jYzdjdTdxM1hkVkJ2?=
 =?utf-8?B?a0RoWnNsVW11dU9jL0xaTHladVRnZDFMT2ZoN3BydHVOMi9SNjdSL0NSNndk?=
 =?utf-8?B?Nk4wVUxHdFpyNkc3ckUrREhEMENIY1F5ZFZPelRGYzdGOEM4eXZjTUh2ZFJl?=
 =?utf-8?B?YW5SQm9jbDgrMURTK0I1YVRoZU43elQxUXZ3a3ZvZ2hOSHlacU5QL1lqVE80?=
 =?utf-8?B?bmE4SmhMNXpDeDVFZzdUajdDOUxVYXFjTEEyRDREVCtuZFUrNzNDOXFNZktu?=
 =?utf-8?B?M2N2dUV6bWxlRFRGQ080am1pc3RvZTZOdDlMQ2lQdmdiUVkxb21ZMlMxM2dI?=
 =?utf-8?B?bjMvY0dTTk5mdjIyTGREd2d0cmVxSHZwQ2xwNm1yYnNRVXNqSVZUQXZBUHFL?=
 =?utf-8?B?T1ZBK0t1eVJzTGg0WDB4WWZXZHhpZS9CbC9hRllwZ2Q0N0psYjVPUzM4eEpQ?=
 =?utf-8?B?a0lHTk1FaEdzWVpJK3cwZndQSDZ5RkRUTGlSYXJySEdGb29DRktvQWk4VGV1?=
 =?utf-8?B?dEdKNUd2NHFWNVhodktKWHArSURrd0dzUXZMN2taSmtyY1JkZktPRmRvZkgv?=
 =?utf-8?B?c211eFpKeDI4cWtLOHRqMi9yRnlsQ1RjRVBZU05kb3pPZ254K0dDRWNwT1lq?=
 =?utf-8?B?cTR5TTRuMFZlVkwyWkpFSWVJcWlwd1N1eXhmNTc1TEJ5ODJ5MEpYdXl2L1BX?=
 =?utf-8?B?Y1E2RnVqUWRzZGFKVENSMUFkb3cwRWl2R3dmMUl2bWtrSnBKTWROV0FNbklo?=
 =?utf-8?B?K09TaWFlN2tpZG1MMFc1VTVINTJxNTZNY3I5ZXo3alZhbWc1QmNvOVh3aVNo?=
 =?utf-8?B?MXhzeTBkcXZxTVBJYlRwTmp3NVI5VDNSWnNTdWQvbWpkZGhBWlE1MXNJVnQx?=
 =?utf-8?B?SWJjaUJOMUEyQ0lrZWhPMS9VRWZiSXVweGpFSUFOUmVLSDNmclZ5NHpxRW03?=
 =?utf-8?B?QnNMOVRFYmJkZ2Z6Y212dFBtNTVyU3VRTEhhSmJXaDEzQWtxVStReE1NQWgr?=
 =?utf-8?B?ZldickJoTmFWeTRIaGdzSU1xSkVGaTdJQkE2bnFrcEd0VTFNZ053bVo4eUVD?=
 =?utf-8?B?VTZJTUhMNDJ5dGxsWWM1SHhvTlhhV05tMDlCTWFPaDBQa21MU2tLclZyZlU5?=
 =?utf-8?B?dVZ0WVRmMXNaWjB5TS93ZjlzYXRmenZIcFFRanozVkdNaURYWUFFZTBscDJa?=
 =?utf-8?B?VThqZVVQQjBBdFBYVnZLclNmUzREejVSRTJZRG9WaEhSQ203YlhzYSt3Ri9U?=
 =?utf-8?B?cWNDVHNtN0s0YTgwR3RzNGpiSFl4ZFZqdnN1MS81djR6Zks4dXhEQXYwdWdK?=
 =?utf-8?B?NVlIZGRwRDl5YlRZdk9nOXFYVFRmOUh4UlI5NXJ1Rkp5YmRzTEoxYmZaT2c5?=
 =?utf-8?B?NStkSit1VGNFUzJBS2hycHQzSmtNRXBFbVhCNGNnNWpkYXBXT2FERDlnU3Js?=
 =?utf-8?B?c0k2cEZYOUZwZmdyYVUxWldaVFFiaUpSREkwVHo0U2c3UG01VHU3N1prdlNj?=
 =?utf-8?B?d1RJa2hnUUppcVNIVkVuN0piazU2MUsvNllNZ3hlY0pqZkd2Uzk2RVV6ckw2?=
 =?utf-8?B?NlRWbGVzUExwZmRZYkZMQmxza0dvbWc0UncvUEJqaUpjT0lSczcvV1F5aWNu?=
 =?utf-8?B?eUJIb0tTL0pUS3VhRWlUeTllc2pVTFlxdGtJVGJVVlJRaFFnWUliemg5a2pM?=
 =?utf-8?B?WjdUSzBHOVJ6ZEE3RzZpWldaNUlrU3VwNHE2b2JIOTFicmZWT0NwUHFEWXZM?=
 =?utf-8?B?amx1Qk5DdjhNa1BPUEF2Q3NNV09taXExbWZicVZpdS9TV0FVellTUkpsSGV4?=
 =?utf-8?B?QWlSRkZqd1NpajNQQThDWTcybnhOUWppN2hud1NmdW96QmpKTnZTYXV2RlR3?=
 =?utf-8?B?eTNHRVcyVjI2NWN3ZzhEY3VWOXlwRDZ2K3Y4YSszUFM5SjVuT2grbm80MStG?=
 =?utf-8?B?UjU5cHRZMS9mWnJJZENBRGw5TmY5WFk0N2M2VEY3bDVhRnp3Q3pPOEUxRmd4?=
 =?utf-8?Q?1i4sGD69ySDkomvUyND4i2xxG?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dc0cea9c-568f-42cd-2b5f-08db35b1f7c7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 08:44:28.7200
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xTsq/miP+jpQM4NEiJaIaO+6aNs6nHB0kONzJHc+Nnqk69shIcmpoHTNGLBtpv1uwGN1lwtg1BvT+NUXh2K4Wg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8106

On 04.04.2023 22:40, Andrew Cooper wrote:
> On 04/04/2023 3:21 pm, Jan Beulich wrote:
>> On 04.04.2023 15:08, Andrew Cooper wrote:
>>> On 15/02/2023 2:54 pm, Jan Beulich wrote:
>>>> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
>>>> applies to guests also when run on a 64-bit hypervisor:
>>> Is this really true?  Even when looking at Xen 4.2, 32bit guests are
>>> required to pass a full 4k page, not a 32b quad.
>> The full-page vs 32b-quad aspect is orthogonal. This VM-assist is solely
>> about where that data structure is, not what size it is.
>>
>>> Which makes complete sense.  It was a hard requirement of 32bit non-PAE
>>> guests, so it was a natural restriction to maintain into 32bit PAE guests.
>>>
>>> This is *only* a 32-on-64 issue, because this is the only case a 32bit
>>> guest could in principle use an L3 placed above the 4G boundary.
>> Not exactly. 32-bit Xen maintained a 4-entry "shadow" array below 4G
>> that it would copy (massage) the guest entries into upon CR3 reload
>> (just look for struct pae_l3_cache in the old sources). So above-4G
>> page table base was possible there as well.
> 
> Oh eww, so while Xen never gained an optimisation to permit only a 32b
> quad in place of a full 4k L3 table, it did support having the full
> tables higher.
> 
> (This code is especially hard to follow with #ifdefary in the common
> mm.c when there are perfectly good x86_{32,64}/mm.c's to use for
> differing function implementations...)

Except that the #ifdef-ary was wrong, and should have been suitable if()s
instead. Those being #ifdefs is why the respective code got removed in
the course of purging the 32-bit Xen logic.

>>>> --- a/xen/arch/x86/mm.c
>>>> +++ b/xen/arch/x86/mm.c
>>>> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
>>>>      unsigned int   partial_flags = page->partial_flags;
>>>>      l3_pgentry_t   l3e = l3e_empty();
>>>>  
>>>> +    /*
>>>> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
>>>> +     * understand the weird 'extended cr3' format for dealing with high-order
>>>> +     * address bits. We cut some slack for control tools (before vcpu0 is
>>>> +     * initialised).
>>>> +     */
>>>> +    if ( is_pv_32bit_domain(d) &&
>>>> +         unlikely(!VM_ASSIST(d, pae_extended_cr3)) &&
>>>> +         mfn_x(l3mfn) >= 0x100000 &&
>>>> +         d->vcpu[0] && d->vcpu[0]->is_initialised )
>>>> +    {
>>>> +        gdprintk(XENLOG_WARNING,
>>>> +                 "PAE pgd must be below 4GB (%#lx >= 0x100000)",
>>>> +                 mfn_x(l3mfn));
>>>> +        return -ERANGE;
>>>> +    }
>>> Having dug through source history, I see this is largely the form that
>>> it used to be.
>>>
>>> But I'm unconvinced by the "cut control tools some slack".  I'm quite
>>> tired of different bits of Xen taking on unnecessary complexity because
>>> people are unwilling to fix the problem at the correct layer.
>> But anything tools do before having created the first vCPU would not
>> have had any means to engage the VM-assist. I.e. ...
>>
>>> A toolstack which has non-pae_extended_cr3 guest on its hand will know
>>> this before any pagetables get allocated.
>> ... this knowledge buys it nothing: It would need to move the table
>> to below 4G irrespective of knowing that the guest can deal with
>> bigger addresses, just to get past this check.
> 
> This just goes from bad to worse.  It is mad that the VMASSIST flags
> can't be set ahead of a vcpu initialise hypercall.
> 
> But.
> 
> The code in xg_dom_x86.c unconditionally moves the L3 below the 4G
> boundary, so the thing actually pinned as an L3 will always pass the check.

Where do you see this being done unconditionally? I only see this inside
a check of dom->parms->pae being XEN_PAE_YES.

> Which is just as well because it too blindly applies the extended-cr3
> transform momentarily after conditionally setting
> VMASST_TYPE_pae_extended_cr3...

Doing this "blindly" is (kind of) fine, isn't it? The transformation is
an identity one when extended-CR3 isn't in use.

> So a 32bit PV guests will pass the check irrespective of their
> pae_extended_cr3 setting.

As per above - I question this: dom->parms->pae isn't a simple boolean
(see enum xen_pae_type in libelf.h, and as can also be seen from the
condition around the enabling of the VM assist).

>>> For this check specifically, I'd suggest prohibiting non-32p guests from
>>> setting pae_extended_cr3 in the first place (I see no limit currently),
>>> and then simplifying the check to just
>>>
>>> if ( unlikely(!VM_ASSIST(d, pae_extended_cr3)) &&
>>>      mfn_x(l3mfn) >= PFN_DOWN(GB(4)) )
>> Dropping the is_pv_32bit_domain() check isn't possible because we can't,
>> all of a sudden, fail 64-bit guests' requests to enable this VM-
>> assist (no matter that we know that it is of no use to them).
> 
> I'm not so sure about this.  This VMASSIST cannot credibly be set at
> runtime,

Why not? A kernel may statically say XEN_PAE_YES, resulting in its
initial L3 being relocated to below 4G, and then - before any further
page table creation - enable the assist.

> and making a restriction here is not usefully different from
> prior patches of yours that relax checks in Xen that still break on
> older builds.

How is this not "usefully different"? I replace potentially silent
misbehavior with the failing of a hypercall (which ought to be noticed)
plus a (debug-build-only) log message.

> But as I know you're going to argue with that position, I'll at least
> note that ignoring a 64bit guest's request to set that bit would be less
> bad than the current behaviour.

We already ignore this bit (as in: it has no effect). Yet just as we
can't fail the request all of a sudden, we also can't zap the bit
from the supplied mask, as kernels may legitimately check that what
they read back is what they set. (That said - I'm unaware of such
checking anywhere.)

Jan

>> Dropping
>> the control-tools part of the condition is at least problematic as well,
>> as per above. Albeit I'll admit I didn't check whether nowadays vCPU 0
>> is initialized before page tables are built. But I think it's more
>> sensible the other way around: CR3 setting (in the hypervisor) is less
>> involved when the page was already validated as an L3 one.
> 
> All of this is before the guest starts running, so it doesn't matter.
> 
> The most efficient way (from Xen's point of view) is to pin the L1s,
> then L2s, then L3s and then set vCR3, because this is the only order
> where we don't have to do recursive type acquisition.
> 
> But, the most efficient way for the toolstack to do this is the opposite
> way around, because making Xen do recursive type acquisition is faster
> than other ways, and turns all subsequent hypercalls into almost no-ops.
> 
> I doubt there is a relevant difference between these two approaches.
> 
> 
> And it doesn't matter either.  The check won't ever trip from domain
> creation (see above), nor from migration (we set vcpu context before
> pinning the pagetables, and a non-pae_extended_cr3 guest will have exploded on
> the source side).
> 
> So there really are no toolstack codepaths that can trip the check. 
> Future improvements that might trip the check can come with a less
> broken hypercall as a prerequisite.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 08:51:29 2023
Date: Wed, 5 Apr 2023 14:21:16 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
	Juergen Gross <jgross@suse.com>, stratos-dev@op-lists.linaro.org,
	xen-devel@lists.xen.org,
	Alex Bennée <alex.bennee@linaro.org>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Erik Schilling <erik.schilling@linaro.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH V2 1/2] docs: Allow generic virtio device types to
 contain device-id
Message-ID: <20230405085115.dayqa6mrkac372sb@vireshk-i7>
References: <c5d2ab978255ca84197c980cbfb9a504e7c625f8.1680653504.git.viresh.kumar@linaro.org>
 <37fa3d37-5ee7-863c-48cd-1ce313c8e296@gmail.com>
In-Reply-To: <37fa3d37-5ee7-863c-48cd-1ce313c8e296@gmail.com>

On 05-04-23, 11:36, Oleksandr Tyshchenko wrote:
> Also a NIT, is this example "like "virtio,device26" for file system device"
> precise?

No :(

I will send the patch again later; this is how it looks now. I have
also updated the documentation to describe the hexadecimal format for
N.

Author: Viresh Kumar <viresh.kumar@linaro.org>
Date:   Wed Apr 5 05:36:19 2023 +0530

    docs: Allow generic virtio device types to contain device-id

    For generic virtio devices, where we don't need to add compatible or
    other special DT properties, the type field is set to "virtio,device".

    But this misses the case where the user sets the type with a valid
    virtio device id as well, like "virtio,device1a" for file system device.
    The complete list of virtio device ids is mentioned here:

    https://docs.oasis-open.org/virtio/virtio/v1.2/cs01/virtio-v1.2-cs01.html#x1-2160005

    Update documentation to support that as well.

    Fixes: dd54ea500be8 ("docs: add documentation for generic virtio devices")
    Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
    Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
 docs/man/xl.cfg.5.pod.in | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 10f37990be57..938aea22c798 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1608,8 +1608,10 @@ example, "type=virtio,device22" for the I2C device, whose device-tree binding is

 L<https://www.kernel.org/doc/Documentation/devicetree/bindings/i2c/i2c-virtio.yaml>

-For generic virtio devices, where we don't need to set special or compatible
-properties in the Device Tree, the type field must be set to "virtio,device".
+For other generic virtio devices, where we don't need to set special or
+compatible properties in the Device Tree, the type field must be set to
+"virtio,device" or "virtio,device<N>", where "N" is the virtio device id in
+hexadecimal format.

 =item B<transport=STRING>

-- 
viresh


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 08:56:30 2023
Message-ID: <a1c16028-3f33-36bb-36cd-b1ad2664b0f9@suse.com>
Date: Wed, 5 Apr 2023 10:56:02 +0200
Subject: Re: [PATCH v4 2/3] x86/platform: introduce XENPF_get_ucode_revision
To: Sergey Dyasli <sergey.dyasli@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230404160655.2354-1-sergey.dyasli@citrix.com>
 <20230404160655.2354-3-sergey.dyasli@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230404160655.2354-3-sergey.dyasli@citrix.com>

On 04.04.2023 18:06, Sergey Dyasli wrote:
> Currently it's impossible to get CPU's microcode revision from Xen after
> late loading without looking into Xen logs which is not always convenient.
> 
> Add a new platform op in order to get the required data from Xen and
> provide a wrapper for libxenctrl.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with two remarks:

> --- a/tools/libs/ctrl/xc_misc.c
> +++ b/tools/libs/ctrl/xc_misc.c
> @@ -243,6 +243,24 @@ int xc_get_cpu_version(xc_interface *xch, struct xenpf_pcpu_version *cpu_ver)
>      return 0;
>  }
>  
> +int xc_get_ucode_revision(xc_interface *xch,
> +                          struct xenpf_ucode_revision *ucode_rev)
> +{
> +    int ret;
> +    struct xen_platform_op op = {
> +        .cmd = XENPF_get_ucode_revision,
> +        .u.ucode_revision.cpu = ucode_rev->cpu,
> +    };
> +
> +    ret = do_platform_op(xch, &op);
> +    if ( ret != 0 )
> +        return ret;

Is there anything wrong with omitting this if() and ...

> +    *ucode_rev = op.u.ucode_revision;
> +
> +    return 0;

... using "return ret" here?

> --- a/xen/arch/x86/platform_hypercall.c
> +++ b/xen/arch/x86/platform_hypercall.c
> @@ -640,6 +640,35 @@ ret_t do_platform_op(
>      }
>      break;
>  
> +    case XENPF_get_ucode_revision:
> +    {
> +        struct xenpf_ucode_revision *rev = &op->u.ucode_revision;
> +
> +        if ( !get_cpu_maps() )
> +        {
> +            ret = -EBUSY;
> +            break;
> +        }
> +
> +        /* TODO: make it possible to know ucode revisions for parked CPUs */
> +        if ( (rev->cpu >= nr_cpu_ids) || !cpu_online(rev->cpu) )
> +            ret = -ENOENT;

While the cpu_online() check needs to be done under the lock, it's
kind of misleading to tell the caller to try again later when it has
passed an out-of-range CPU number.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 08:59:54 2023
Date: Wed, 5 Apr 2023 10:59:30 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Message-ID: <ZC04cu9sXVeephOf@Air-de-Roger>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <ZCv3+cpzJ52Y679G@Air-de-Roger>
 <3752672b-f4a0-5ffb-9759-dd315ce31079@suse.com>
 <ZCwM1SfCAfh2koBD@Air-de-Roger>
 <ac13fa57-ceb2-0aaa-dcfa-42d8d01ee6d7@suse.com>
 <ZCxI18gb8zK5X+nR@Air-de-Roger>
 <ZCxSooPqPwpGW6yv@Air-de-Roger>
 <5cda250b-57b4-2833-99d3-84ae8ca32059@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5cda250b-57b4-2833-99d3-84ae8ca32059@suse.com>
MIME-Version: 1.0

On Wed, Apr 05, 2023 at 10:11:30AM +0200, Jan Beulich wrote:
> On 04.04.2023 18:38, Roger Pau Monné wrote:
> > On Tue, Apr 04, 2023 at 05:57:11PM +0200, Roger Pau Monné wrote:
> >> On Tue, Apr 04, 2023 at 04:24:16PM +0200, Jan Beulich wrote:
> >>> On 04.04.2023 13:41, Roger Pau Monné wrote:
> >>>> On Tue, Apr 04, 2023 at 12:31:31PM +0200, Jan Beulich wrote:
> >>>>> On 04.04.2023 12:12, Roger Pau Monné wrote:
> >>>>>> On Wed, Feb 15, 2023 at 03:54:11PM +0100, Jan Beulich wrote:
> >>>>>>> While the PAE-extended-CR3 VM assist is a 32-bit only concept, it still
> >>>>>>> applies to guests also when run on a 64-bit hypervisor: The "extended
> >>>>>>> CR3" format has to be used there as well, to fit the address in the only
> >>>>>>> 32-bit wide register there. As a result it was a mistake that the check
> >>>>>>> was never enabled for that case, and was then mistakenly deleted in the
> >>>>>>> course of removal of 32-bit-Xen code (218adf199e68 ["x86: We can assume
> >>>>>>> CONFIG_PAGING_LEVELS==4"]).
> >>>>>>>
> >>>>>>> Similarly during Dom0 construction kernel awareness needs to be taken
> >>>>>>> into account, and respective code was again mistakenly never enabled for
> >>>>>>> 32-bit Dom0 when running on 64-bit Xen (and thus wrongly deleted by
> >>>>>>> 5d1181a5ea5e ["xen: Remove x86_32 build target"]).
> >>>>>>>
> >>>>>>> At the same time restrict enabling of the assist for Dom0 to just the
> >>>>>>> 32-bit case. Furthermore there's no need for an atomic update there.
> >>>>>>>
> >>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>>>>>> ---
> >>>>>>> I was uncertain whether to add a check to the CR3 guest read path,
> >>>>>>> raising e.g. #GP(0) when the value read wouldn't fit but also may not
> >>>>>>> be converted to "extended" format (overflow is possible there in
> >>>>>>> principle because of the control tools "slack" in promote_l3_table()).
> >>>>>>>
> >>>>>>> In that context I was puzzled to find no check on the CR3 guest write
> >>>>>>> path even in 4.2: A guest (bogusly) setting the PCD or PWT bits (or any
> >>>>>>> of the low reserved ones) could observe anomalous behavior rather than
> >>>>>>> plain failure.
> >>>>>>>
> >>>>>>> As to a Fixes: tag - it's pretty unclear which of the many original
> >>>>>>> 32-on-64 changes to blame. I don't think the two cited commits should
> >>>>>>> be referenced there, as they didn't break anything that wasn't already
> >>>>>>> broken.
> >>>>>>>
> >>>>>>> --- a/xen/arch/x86/mm.c
> >>>>>>> +++ b/xen/arch/x86/mm.c
> >>>>>>> @@ -1520,6 +1520,23 @@ static int promote_l3_table(struct page_
> >>>>>>>      unsigned int   partial_flags = page->partial_flags;
> >>>>>>>      l3_pgentry_t   l3e = l3e_empty();
> >>>>>>>  
> >>>>>>> +    /*
> >>>>>>> +     * PAE pgdirs above 4GB are unacceptable if a 32-bit guest does not
> >>>>>>> +     * understand the weird 'extended cr3' format for dealing with high-order
> >>>>>>> +     * address bits. We cut some slack for control tools (before vcpu0 is
> >>>>>>> +     * initialised).
> >>>>>>
> >>>>>> Don't we then need some check in the vCPU init path to assure that the
> >>>>>> cr3 is < 32bits if we allow those to initially be set?
> >>>>>>
> >>>>>> Or will the initialization unconditionally overwrite any previous cr3
> >>>>>> value?
> >>>>>
> >>>>> That's not the way I understand this "cut some slack". Instead I read it
> >>>>> to be meant to cover for the VM-assist bit not being set, yet. Beyond
> >>>>> that it is assumed to be tool stack's responsibility to constrain
> >>>>> addresses suitably. If it doesn't, it'll simply break the guest. (There
> >>>>> is some guessing on my part involved here, as the original introduction
> >>>>> of that code didn't further explain things.)
> >>>>
> >>>> If it's just the guest that's broken I would think it's fine.  As long
> >>>> as such mismatch doesn't cause issues in the hypervisor internal state.
> >>>>
> >>>> Did you see a toolstack setting such entries before pae_extended_cr3
> >>>> is set?
> >>>
> >>> To be honest - I didn't look. As said in the longer reply to Andrew, I
> >>> think it is more logical this way (the page table root already being
> >>> validated as an L3 table when vCPU 0 is initialized, which includes
> >>> setting its CR3). Hence even if right now the order was the other way
> >>> around (which I doubt it is), I wouldn't want to make it impossible to
> >>> restore the original ordering again.
> >>
> >> IMO it would be better if we could already report an error at
> >> domain creation time if the toolstack is attempting to create a domain
> >> that the hypervisor knows is not going to work properly, rather than
> >> allowing it and having the guest fail in possibly non-obvious ways.
> >>
> >> It seems to me however that we would need to fix xc_dom_boot_image()
> >> in order to set up the vCPU before creating the initial page-tables.
> >> (->setup_pgtables() hook being called before ->vcpu() hook)
> 
> This might be a possibility, yes, but it's (imo severe) scope creep
> in the context here. All I'm after is to restore code which was
> deleted in error (and which was, when it was still there, not
> properly put to use in all cases where it would have been needed).

I realize the check was wrongly removed in 218adf199e68, so I guess
it's fine to restore it, although it would be better if we could remove
the weirdness of setting up page tables before the hypervisor has
full information about the domain's capabilities.

> >> So I don't think this is strictly worse than what we have, but it
> >> would also be nice to get things sorted out so that the toolstack's
> >> ability to shoot itself in the foot is limited.
> > 
> > Maybe I'm confused after all day, but isn't the hypercall used by the
> > toolstack to set CR3 the same one used to set the vm_assist bits?
> > (XEN_DOMCTL_setvcpucontext)
> > 
> > At which point we just need to make sure d->vm_assist gets set before
> > attempting to load the new CR3 (it seems that way from the quick look
> > I've taken at arch_set_info_guest()).
> 
> Right, it is this way already. But that's not the point.
> MMUEXT_PIN_L3_TABLE may (will?) come ahead of this.

Oh, I see.

We should likely move the setting of vm_assist to the domain create
hypercall, instead of doing it at vCPU initialization.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 09:11:41 2023
Message-ID: <25c3cc19-e32d-6ced-2853-d1f78e3a476b@gmail.com>
Date: Wed, 5 Apr 2023 12:11:19 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH V2 2/2] libxl: fix matching of generic virtio device
Content-Language: en-US
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Julien Grall <julien@xen.org>,
 Juergen Gross <jgross@suse.com>, xen-devel@lists.xen.org,
 stratos-dev@op-lists.linaro.org, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.com>,
 Erik Schilling <erik.schilling@linaro.org>
References: <c5d2ab978255ca84197c980cbfb9a504e7c625f8.1680653504.git.viresh.kumar@linaro.org>
 <62f2603d8b3fba1efb236063a0819fb95285b0ae.1680653504.git.viresh.kumar@linaro.org>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
In-Reply-To: <62f2603d8b3fba1efb236063a0819fb95285b0ae.1680653504.git.viresh.kumar@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 05.04.23 03:12, Viresh Kumar wrote:


Hello Viresh

> The strings won't be an exact match, as we are only looking to match the
> prefix here, i.e. "virtio,device". This is already done properly in the
> libxl_virtio.c file, so let's do the same here.
> 
> Fixes: 43ba5202e2ee ("libxl: add support for generic virtio device")
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
> V1->V2: Add the missing fixes tag.

Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

> 
>   tools/libs/light/libxl_arm.c | 12 ++++++++----
>   1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> index ddc7b2a15975..97c80d7ed0fa 100644
> --- a/tools/libs/light/libxl_arm.c
> +++ b/tools/libs/light/libxl_arm.c
> @@ -1033,10 +1033,14 @@ static int make_virtio_mmio_node_device(libxl__gc *gc, void *fdt, uint64_t base,
>       } else if (!strcmp(type, VIRTIO_DEVICE_TYPE_GPIO)) {
>           res = make_virtio_mmio_node_gpio(gc, fdt);
>           if (res) return res;
> -    } else if (strcmp(type, VIRTIO_DEVICE_TYPE_GENERIC)) {
> -        /* Doesn't match generic virtio device */
> -        LOG(ERROR, "Invalid type for virtio device: %s", type);
> -        return -EINVAL;
> +    } else {
> +        int len = sizeof(VIRTIO_DEVICE_TYPE_GENERIC) - 1;
> +
> +        if (strncmp(type, VIRTIO_DEVICE_TYPE_GENERIC, len)) {
> +            /* Doesn't match generic virtio device */
> +            LOG(ERROR, "Invalid type for virtio device: %s", type);
> +            return -EINVAL;
> +        }
>       }
>   
>       return fdt_end_node(fdt);


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 09:11:50 2023
Message-ID: <b46ce1ce-19f3-29d2-af50-e1515da4112f@suse.com>
Date: Wed, 5 Apr 2023 11:11:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v4 3/3] tools/xen-ucode: print information about currently
 loaded ucode
Content-Language: en-US
To: Sergey Dyasli <sergey.dyasli@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230404160655.2354-1-sergey.dyasli@citrix.com>
 <20230404160655.2354-4-sergey.dyasli@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230404160655.2354-4-sergey.dyasli@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 04.04.2023 18:06, Sergey Dyasli wrote:
> Add an option to xen-ucode tool to print the currently loaded ucode
> revision and also print it during usage info.  Print CPU signature and
> platform flags as well.  The raw data comes from XENPF_get_cpu_version
> and XENPF_get_ucode_revision platform ops.
> 
> Example output:
>     Intel:
>     CPU signature 06-55-04 (raw 0x00050654) pf 0x1 revision 0x02006e05
> 
>     AMD:
>     CPU signature fam19h (raw 0x00a00f11) revision 0x0a0011ce

Wouldn't it make sense to also report the model number here, even if
ucode blob file names (currently) don't include it?

While always printing 8 digits for the revision is definitely helpful, I
wonder whether the two top nibbles of the raw value wouldn't better be
omitted. (I understand Andrew did ask for this, but it's unclear to me
why he extended the request to the raw fields.)

> --- a/tools/misc/xen-ucode.c
> +++ b/tools/misc/xen-ucode.c
> @@ -12,22 +12,89 @@
>  #include <fcntl.h>
>  #include <xenctrl.h>
>  
> +static xc_interface *xch;
> +
> +static const char intel_id[] = "GenuineIntel";
> +static const char   amd_id[] = "AuthenticAMD";
> +
> +static void show_curr_cpu(FILE *f)
> +{
> +    int ret;
> +    struct xenpf_pcpu_version cpu_ver = { .xen_cpuid = 0 };
> +    struct xenpf_ucode_revision ucode_rev = { .cpu = 0 };

As mentioned before - the current state of the system may be
inconsistent, so I question the complete lack of a way to learn of
this via this tool (even if only through a specific command line
option, defaulting to CPU0-only).

> +    ret = xc_get_cpu_version(xch, &cpu_ver);
> +    if ( ret )
> +    {
> +        fprintf(f, "Failed to get CPU information. (err: %s)\n",
> +                strerror(errno));
> +        exit(1);

Errors want to go to stderr, I would say (just as is already the case
in main()).

> +    }
> +
> +    ret = xc_get_ucode_revision(xch, &ucode_rev);
> +    if ( ret )
> +    {
> +        fprintf(f, "Failed to get microcode information. (err: %s)\n",
> +                strerror(errno));
> +        exit(1);
> +    }
> +
> +    /*
> +     * Print signature in a form that allows to quickly identify which ucode
> +     * blob to load, e.g.:
> +     *
> +     *      Intel:   /lib/firmware/intel-ucode/06-55-04
> +     *      AMD:     /lib/firmware/amd-ucode/microcode_amd_fam19h.bin
> +     */
> +    if ( memcmp(cpu_ver.vendor_id, intel_id,
> +                sizeof(cpu_ver.vendor_id)) == 0 )
> +    {
> +        fprintf(f, "CPU signature %02x-%02x-%02x (raw 0x%08x) pf %#x revision 0x%08x\n",
> +                   cpu_ver.family, cpu_ver.model, cpu_ver.stepping,
> +                   ucode_rev.signature, ucode_rev.pf, ucode_rev.revision);

Nit: Indentation (also below). I think you mean

        fprintf(f,
                "CPU signature %02x-%02x-%02x (raw 0x%08x) pf %#x revision 0x%08x\n",
                cpu_ver.family, cpu_ver.model, cpu_ver.stepping,
                ucode_rev.signature, ucode_rev.pf, ucode_rev.revision);

> +    }
> +    else if ( memcmp(cpu_ver.vendor_id, amd_id,
> +                     sizeof(cpu_ver.vendor_id)) == 0 )
> +    {
> +        fprintf(f, "CPU signature fam%xh (raw 0x%08x) revision 0x%08x\n",
> +                   cpu_ver.family, ucode_rev.signature, ucode_rev.revision);
> +    }
> +    else
> +    {
> +        fprintf(f, "Unsupported CPU vendor: %s\n", cpu_ver.vendor_id);
> +        exit(3);
> +    }
> +}
> +
>  int main(int argc, char *argv[])
>  {
>      int fd, ret;
>      char *filename, *buf;
>      size_t len;
>      struct stat st;
> -    xc_interface *xch;
> +
> +    xch = xc_interface_open(NULL, NULL, 0);
> +    if ( xch == NULL )
> +    {
> +        fprintf(stderr, "Error opening xc interface. (err: %s)\n",
> +                strerror(errno));
> +        exit(1);
> +    }
>  
>      if ( argc < 2 )
>      {
> -        fprintf(stderr,
> -                "xen-ucode: Xen microcode updating tool\n"
> -                "Usage: %s <microcode blob>\n", argv[0]);
> +        fprintf(stderr, "xen-ucode: Xen microcode updating tool\n");
> +        show_curr_cpu(stderr);

I recall you had it this way before, but I don't see why this information
needs to be part of argument error handling. Furthermore, considering the
possibility of hitting an error path in show_curr_cpu(), I think ...

> +        fprintf(stderr, "Usage: %s <microcode blob>\n", argv[0]);

... this would need printing first, and perhaps ...

>          exit(2);

... the exit code also shouldn't be anything other than 2 in that event.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 09:15:29 2023
Message-ID: <b523597b-40b8-7368-4742-9dbce06b4633@suse.com>
Date: Wed, 5 Apr 2023 11:15:15 +0200
Subject: Re: [PATCH V2 1/2] docs: Allow generic virtio device types to contain
 device-id
Content-Language: en-US
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
 Juergen Gross <jgross@suse.com>, stratos-dev@op-lists.linaro.org,
 xen-devel@lists.xen.org, Alex Bennée <alex.bennee@linaro.org>,
 Mathieu Poirier <mathieu.poirier@linaro.com>,
 Erik Schilling <erik.schilling@linaro.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>
References: <c5d2ab978255ca84197c980cbfb9a504e7c625f8.1680653504.git.viresh.kumar@linaro.org>
 <37fa3d37-5ee7-863c-48cd-1ce313c8e296@gmail.com>
 <20230405085115.dayqa6mrkac372sb@vireshk-i7>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230405085115.dayqa6mrkac372sb@vireshk-i7>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 05.04.2023 10:51, Viresh Kumar wrote:
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -1608,8 +1608,10 @@ example, "type=virtio,device22" for the I2C device, whose device-tree binding is
> 
>  L<https://www.kernel.org/doc/Documentation/devicetree/bindings/i2c/i2c-virtio.yaml>
> 
> -For generic virtio devices, where we don't need to set special or compatible
> -properties in the Device Tree, the type field must be set to "virtio,device".
> +For other generic virtio devices, where we don't need to set special or
> +compatible properties in the Device Tree, the type field must be set to
> +"virtio,device" or "virtio,device<N>", where "N" is the virtio device id in
> +hexadecimal format.

Are "virtio,device0x1a" or "virtio,device1A" valid, too? If so, all is fine,
but if not, constraints on the hex representation may want mentioning.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 09:25:20 2023
Date: Wed, 5 Apr 2023 14:54:57 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
	Juergen Gross <jgross@suse.com>, stratos-dev@op-lists.linaro.org,
	xen-devel@lists.xen.org,
	Alex Bennée <alex.bennee@linaro.org>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Erik Schilling <erik.schilling@linaro.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>
Subject: Re: [PATCH V2 1/2] docs: Allow generic virtio device types to
 contain device-id
Message-ID: <20230405092457.m6bi2yvv7s2nasxv@vireshk-i7>
References: <c5d2ab978255ca84197c980cbfb9a504e7c625f8.1680653504.git.viresh.kumar@linaro.org>
 <37fa3d37-5ee7-863c-48cd-1ce313c8e296@gmail.com>
 <20230405085115.dayqa6mrkac372sb@vireshk-i7>
 <b523597b-40b8-7368-4742-9dbce06b4633@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <b523597b-40b8-7368-4742-9dbce06b4633@suse.com>

On 05-04-23, 11:15, Jan Beulich wrote:
> On 05.04.2023 10:51, Viresh Kumar wrote:
> > --- a/docs/man/xl.cfg.5.pod.in
> > +++ b/docs/man/xl.cfg.5.pod.in
> > @@ -1608,8 +1608,10 @@ example, "type=virtio,device22" for the I2C device, whose device-tree binding is
> > 
> >  L<https://www.kernel.org/doc/Documentation/devicetree/bindings/i2c/i2c-virtio.yaml>
> > 
> > -For generic virtio devices, where we don't need to set special or compatible
> > -properties in the Device Tree, the type field must be set to "virtio,device".
> > +For other generic virtio devices, where we don't need to set special or
> > +compatible properties in the Device Tree, the type field must be set to
> > +"virtio,device" or "virtio,device<N>", where "N" is the virtio device id in
> > +hexadecimal format.
> 
> Are "virtio,device0x1a" or "virtio,device1A" valid, too? If so, all is fine,
> but if not, constraints on the hex representation may want mentioning.

From [1], both of the above are invalid. Updated the doc as:

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 10f37990be57..24ac92718288 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1608,8 +1608,11 @@ example, "type=virtio,device22" for the I2C device, whose device-tree binding is

 L<https://www.kernel.org/doc/Documentation/devicetree/bindings/i2c/i2c-virtio.yaml>

-For generic virtio devices, where we don't need to set special or compatible
-properties in the Device Tree, the type field must be set to "virtio,device".
+For other generic virtio devices, where we don't need to set special or
+compatible properties in the Device Tree, the type field must be set to
+"virtio,device" or "virtio,device<N>", where "N" is the virtio device id in
+hexadecimal format, without the "0x" prefix and all in lower case, like
+"virtio,device1a" for the file system device.

-- 
viresh

[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/virtio/virtio-device.yaml


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 09:30:09 2023
Message-ID: <83cc6b93-6a50-1eed-1588-20b3bc38c318@suse.com>
Date: Wed, 5 Apr 2023 11:29:51 +0200
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Content-Language: en-US
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <ZCv3+cpzJ52Y679G@Air-de-Roger>
 <3752672b-f4a0-5ffb-9759-dd315ce31079@suse.com>
 <ZCwM1SfCAfh2koBD@Air-de-Roger>
 <ac13fa57-ceb2-0aaa-dcfa-42d8d01ee6d7@suse.com>
 <ZCxI18gb8zK5X+nR@Air-de-Roger> <ZCxSooPqPwpGW6yv@Air-de-Roger>
 <5cda250b-57b4-2833-99d3-84ae8ca32059@suse.com>
 <ZC04cu9sXVeephOf@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZC04cu9sXVeephOf@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8037

On 05.04.2023 10:59, Roger Pau Monné wrote:
> We should likely move the setting of vm_assist to the domain create
> hypercall, instead of doing it at vCPU initialization.

Perhaps, the more that setting the assist is limited to vCPU 0 init.
Which in a way makes sense when considering domain creation, but it
is odd for the case of vCPU 0 being brought down, reset, and then
re-initialized; IOW I think arch_set_info_guest() should further
have constrained the setting by a !d->creation_finished check.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 09:41:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 09:41:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518368.804828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjzdK-0004Uk-TY; Wed, 05 Apr 2023 09:40:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518368.804828; Wed, 05 Apr 2023 09:40:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjzdK-0004Ud-Qs; Wed, 05 Apr 2023 09:40:50 +0000
Received: by outflank-mailman (input) for mailman id 518368;
 Wed, 05 Apr 2023 09:40:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gxas=74=citrix.com=prvs=452091250=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pjzdJ-0004UV-IC
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 09:40:49 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ef39a336-d395-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 11:40:44 +0200 (CEST)
Received: from mail-dm6nam12lp2171.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Apr 2023 05:40:37 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by DM6PR03MB5195.namprd03.prod.outlook.com (2603:10b6:5:240::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 09:40:35 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Wed, 5 Apr 2023
 09:40:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef39a336-d395-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680687644;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Lg5WL66Cxisvq0Y7xWdHEEoEbUlpjZjugaRK5I+aV/k=;
  b=hzZI/J3q3O12f8xcSWtw6wP7yYjuajU2TL1C36IQlBWYj0ApKnE/j69i
   40Rw7iuPNZJi2xFcHqiIkAkikJqalhGzDnXiZkUnpVOoDW48VhOMmabMj
   NsZE77uTi2A/Dw7+2ioefaaQLhzxJB1Aw5d36S9MUrkrowiCHKDQB0Ozq
   U=;
X-IronPort-RemoteIP: 104.47.59.171
X-IronPort-MID: 106819497
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:H+DvbKLWOGC0tkDYFE+R95QlxSXFcZb7ZxGr2PjKsXjdYENS12dUy
 2oYWWrQb/iDNzb9eNsibo6yoUoBuZbQzd9qSgFlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPTwP9TlK6q4mhA4gRjPakjUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5lCEBN0
 dwoBgkLSR6Km/CuzOuJVNdj05FLwMnDZOvzu1lG5BSAV7MKZM6GRK/Ho9hFwD03m8ZCW+7EY
 NYUYiZuaxKGZABTPlAQC9Q1m+LAanvXKmUE7g7K4/dmpTGMlWSd05C0WDbRUsaNSshP2F6Ru
 0rN/njjAwFcP9uaodaA2iv02L+WzH+qB+r+EpWi5r1U2QK86FAdCVoMWlWXsOWEkESXDoc3x
 0s8v3BGQbIJ3E6hQ8T5Xha4iGWZpRNaUN1Ve8Uq5QfIxqfK7gKxAmkfUiUHeNEgrNUxRzEhy
 hmOhdyBLSNrmK2YTzSa7Lj8kN+pES0cLGtHaSpaSwIAuoPnuNtq0UuJSct/GqmoiNGzASv33
 z2BsCk5gfMUkNIP0KK4u1vAhlpAu6T0c+L83S2PNkrN0++zTNfNi1CAgbQD0ct9EQ==
IronPort-HdrOrdr: A9a23:FhMQ0KvYb5F/RrTRic1Vj8cN7skCM4Mji2hC6mlwRA09TyX4rb
 HaoB1/73SbtN9/YhEdcK+7SdW9qB/nlKKdgrNhTotKIjOW2ldARbsKheHfKlbbak7DH4BmpM
 Jdm6MXMqyOMbAT5/yX3OHSeexO/DFJmprEuc7ui05ICSVWQ+VY6QF9YzzrYHGfhmN9dOQE/F
 733Ls2m9JkE05nH/hTfUN1O9TrlpnwjZf7ZhxDLwc/gTP+9A+A2frBCh2F2RVbeC9OxLpKyx
 m5ryXJop+7tu29yFv632vehq4m/+fJ+594HcmRjcpQDCvqhh3AXvUGZ5Sy+Aotpf2p6hIRsP
 SkmWZZA+1Dr0nJe32zo1/W1xL+3C0I43vvoGXo+kfLkIjCXTcnDMgEuo5DaBve7CMbzatB7J
 4=
X-IronPort-AV: E=Sophos;i="5.98,319,1673931600"; 
   d="scan'208";a="106819497"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nilq3Neu6MV/8GAvmrJwQYMu+/2UeiiSQi3Daacq+/5pk1yK9xekFlIkzXbZTuzpjWm4ohTrCJgwqBRONqpvwj2lmF3xS8+x7oIp0fFGDadxkjkAmMpsLBX6DuqjeYLJeCq7sZbRDqeZIfqhN2yrpHqOXaizo3TGITM/xeRxuEFfi5+Em4lEp6aS6cDY9J/zbhnK/RxLbt6f2z/imrmeuVRPNSFUBufFDI2UMfFjSjej1iheHKH0QpZV3ATzza69a0g9NuCN9olfckYMfWQAaSc1MX3xl43DYL2NWOSUF12CI8xfp4oDoaCLDlzAb5Bd9drgOP4uZhfw2UTryLCwpA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=glMDDIPA9Olrwj9DPq4D9sJNv7NzFuBiUuVcNTR6e80=;
 b=APKEV0hloO0/ehkM0yydxWyZdBS+QvvdnL6iHYP8vXWku+zvdchmif1zYz99lMIsiDAXSlECG1fA5FjTIxkpGO8vQaVOjj31r3CeF4SD4DnnQAyx5FD1PxsHzF8K/HBuX4FT/+oY0TP2Tr4osTutSumrYRtDQpEJLw7LBTFF+yg2uO0YoHcudW7jxuc1PcH9+VGybBVVe+PoXDfYG15mWSjr+f0TMtMHZIk25aeP3ClflNplHctoTAgUqiHMpRCopYQmy0rhWBn5Y7mUVZhO+EbgFu3qYd+4D9noJQMSysuCqw7zvENDsgsxuXvfYFh+2ix+G2+fb7fVsH4RBEqaMA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=glMDDIPA9Olrwj9DPq4D9sJNv7NzFuBiUuVcNTR6e80=;
 b=DAeIalqQ6ha+6gwIGfD50S7a1himneV20yKWHCZv4lfL+GcfOfEVuoYDZvrf6wyBZ9fac0lBaegc+WA9uNS8W678iO7lczAu9yuAIEN9EYrUYWl8iSSgdQP2S7CA9+w90GoLS0FJK+I2v3HtYAZmp2S3T+5RvKZuqdHY5DlK1WM=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 5 Apr 2023 11:40:30 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/PV32: restore PAE-extended-CR3 logic
Message-ID: <ZC1CDs2Rei5OTE6d@Air-de-Roger>
References: <47ab9000-68f6-8925-d814-a3a955b7f6cc@suse.com>
 <ZCv3+cpzJ52Y679G@Air-de-Roger>
 <3752672b-f4a0-5ffb-9759-dd315ce31079@suse.com>
 <ZCwM1SfCAfh2koBD@Air-de-Roger>
 <ac13fa57-ceb2-0aaa-dcfa-42d8d01ee6d7@suse.com>
 <ZCxI18gb8zK5X+nR@Air-de-Roger>
 <ZCxSooPqPwpGW6yv@Air-de-Roger>
 <5cda250b-57b4-2833-99d3-84ae8ca32059@suse.com>
 <ZC04cu9sXVeephOf@Air-de-Roger>
 <83cc6b93-6a50-1eed-1588-20b3bc38c318@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <83cc6b93-6a50-1eed-1588-20b3bc38c318@suse.com>
X-ClientProxiedBy: LO4P265CA0159.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c7::9) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6360:EE_|DM6PR03MB5195:EE_
X-MS-Office365-Filtering-Correlation-Id: 0172ef3d-80a3-4e2f-22dd-08db35b9ceab
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fTS6iAqLmtDEj67Qeq20fxoEeHNKcAMTGEpv97NE+A7xkpbIs9B8k3jZ5p7cecJ6qXMIOpRAgJKv9ZznIUjtBVbs9Gov5IFTgz0XAA0puPsvPE+MqX+P8GPp6HLQ+GeyQ2i71ITwmt2gxrWnC32vARGSo9y5GDnRqJ3at0IiZ6ZLu7YLqdpdffzdvLmXTBMcyFRKZu7sxfYVBmh3FZMNM9VhlQy8xcMTUq/pHCDpxoioQFhjfVe5HfbEI3i5U4guHbiK+n71GG5TZdaR6MR4kRuNXSjkhmriXZOGswOsXUyvPCNwfN5njoPieOi9m3hBQZFnGnOBx1nJtpdRqDEoFqAhWtEAJuvuKPpoINgGFqZ2Nv0K0ohTzHz0pahMZ+Xsz+r2RQopfiL779QRv2ihkUb87HxHaQ9M567dLSs0UO1slDSASKfnen0D3bqCusDAu9/0iAny3j2PV7qaVLlGdTBdSbHVXIa9H0ZqOkIHJAqJ6uvUBRksJ0aSbNS3RdVGRJWHGJIpU+Lub1K8/tnzNX7DjG90XlV6LuUtte1lQ7Bgd443rU4zhkQHXbgQcqfi
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6360.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(7916004)(4636009)(396003)(346002)(39860400002)(376002)(136003)(366004)(451199021)(4326008)(6486002)(5660300002)(82960400001)(478600001)(85182001)(6916009)(8676002)(66476007)(86362001)(66556008)(38100700002)(41300700001)(316002)(54906003)(33716001)(8936002)(4744005)(186003)(66946007)(6506007)(9686003)(6512007)(26005)(53546011)(2906002)(6666004)(83380400001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bENDcHVVYVFLdERmbFdUVzA5dTY5UFo1MWIxM3NVYWxGM1lkTllKaWM3YUtJ?=
 =?utf-8?B?MzdwYlp1dVBZTVJQT2VuZHp1dmF6QjI2aFJLWVNkV2R4cEVmK2lBMkN5RVls?=
 =?utf-8?B?dE41VTFxcXE2dWxickFiQTEyVUZPTUVlL2dTRzdiMUx5VTlWaC9OT3d5TU1G?=
 =?utf-8?B?UDZMODFQYU0zeEVrMUVWdzloWE8wWmZabm83TndITnBYck1adTBlb3N3NUhq?=
 =?utf-8?B?bk5tUXBmcTlXOXA2K2lBTzJnSGRTc0JLTE45VnpzeGNML0Q2a1Z4bm8yaDUz?=
 =?utf-8?B?L21UZFRXM3M3blJvQ3NENjl3bWY4STVVMFl5SjhQc2x2azk2bEJIT3dpNHk3?=
 =?utf-8?B?QlgycUxrTEFOUnB4cXlScG5YS1BkYUtSaEFPMmF6c2dPMVFnQTZTdGRVbU9h?=
 =?utf-8?B?bTZnb3pWOXgyMmpnTnQwaVhTNzVMT3JuZFlzVVZCbURqVWJRc3JXNytCeHRO?=
 =?utf-8?B?WFdvKzFWV2hnSnN2SmdyR1pqU1ZHOG84Z1Flb0Q4Z0VxOXplUmVlY1lzN1JV?=
 =?utf-8?B?TkpQb1RrOHMxMmhkV0NzRE84UGt4Z2pTcmxNai9UVHl2Z3QzeHBPNUdDenNC?=
 =?utf-8?B?QmZBeWI5WXFYdWNjdFZoMGx3K1RVSzlVQVY0RzBLNXcweUFLQ1huYkE1eFBI?=
 =?utf-8?B?L1ZSc0FlMWFVTnVSVkwyL0FNK0dkRmhDUHJac1dtRTRya2dyRUpjd0E2ZFFE?=
 =?utf-8?B?NkNhR0RlMXNnOS81OFE2R2JEV1F1eUdZeWdEWHd4N1ZQT3JJdkVTSEZtT0xw?=
 =?utf-8?B?Q0N3enhHcE9lMFowQVJCZDhIYmh3NFlkUnk4a3k5Yi9UTHlkenFyZUMrZjJG?=
 =?utf-8?B?VVBIZGxrVnl0UHlXWGtNekF4ZG80RG56UVZPbVNVWWRsQTFLMmpZNnJYMjBs?=
 =?utf-8?B?MGh5Q0JUME9RWXVUQ0F4cVJ3WE5yZm94UUt4ejJXU1FqNEpFZ21qZFR1OC9K?=
 =?utf-8?B?cFJBZEdvNnRGL3RXRllBWkZwNVV3eFJhZGJWbHlLMmJMTFFzUy91bEJQTGYz?=
 =?utf-8?B?elc4UUxtalI2M0NCc0w3UmMwcDcxYko5VngrZlF1TW1PaE9YaXI4eWcxeHds?=
 =?utf-8?B?ZElxZTA3SldpMy9uMnBrNXFzeXFPZHVHdE01alJvVUhMbTNIMlIvNWlqR2Yw?=
 =?utf-8?B?eEFjb1hMLzF1VGU1b2VvZmVScWxxMFFzVVdWaCtod3YvM2ZjbVMrNWdMQUx0?=
 =?utf-8?B?NnZmczllQUFIWk80YktST20xbWIyRzhZaER0ZjNIRURURHBmd0tsSXF0YXJN?=
 =?utf-8?B?R1UrYXZ0M3ZZL3BsclNvVk1CM3dFTW5HZ1hrWkVwM3lIWlc3dVI4aCtaajZz?=
 =?utf-8?B?Q2pvNCtvbTBwYmFjNU9WTHpSbUlaQml6c2Z5WHZDeXZxczBORjZYT0MvUzdG?=
 =?utf-8?B?bmRoY0hSU2RqS3NIbENsZWRZZzBMNHlYMC83Lyt1UUR2c2J4Y2xaOHRqUmVj?=
 =?utf-8?B?TVV0ZjlVMjkxakxGVmYvZnRLZ08vV1V4YXZESW5ScDFEOTJlUzJyZFEvaW43?=
 =?utf-8?B?clhNb0FqcXdrSVdPZysrY2orcHJkRnVMbHhRYTJ1dkJRSVdZdkpQMytzZkFL?=
 =?utf-8?B?dFVjQldGR2ZEVUEwNzNGZlZUR0p5OG9nOHNMdzRFSjdXQng1NnN2UnhEa1Ew?=
 =?utf-8?B?YVBNTVdHaEp0aVNkbEFHRjZtRmFwWFljcWhLMmx4Nk1rNjloZVViRm1rUU56?=
 =?utf-8?B?Rnp5U1c3cE9DWElkNGVwMGVyaERTY29DVDVrM25JUk03cUJRN2hMb3NlZkJB?=
 =?utf-8?B?VGpQM2FlZmtzbVU1a1ovcm81amNjUDFHQUhTUEFCREdKajdJbTVkLzlPV01K?=
 =?utf-8?B?WC8zMXU5cTkwQ29jbW1pUjFZalFTY0hnUEI3dUNsY0dKWjJtMVhZMjA2NDhN?=
 =?utf-8?B?T1hZMlBwMnVIUUxsMzFndHR0NUZyeXIwdmtCbXVJRC90Y2Nsd3NMZEtXL0Qr?=
 =?utf-8?B?c2docmllNmg0RTN6UGc4WEtEMFlPQjJQL3g0K2hSUGxpeWpITmFwR0pPTnh3?=
 =?utf-8?B?M3pzK3duVE0zbWhHMjFkVDkwSCtXMlFaWkVQN3pxVVZES05VVFFhNmVxU3ZT?=
 =?utf-8?B?dW9panpuSFIyOU5XNlBNeXJCVDRmbHU5d3gwV08vbDZYZVV4MG4vd1NpQS9M?=
 =?utf-8?B?bGhMTzhLdXVYa1FLYStXRzZUNmtvaE5RUlB6TXJmN1ZnZVFGS3Rpb2xQNXJp?=
 =?utf-8?B?bUE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	4rj4joOK95djKLYXW4ilqscTjL3Dfx6UOafqjbpDVPhNNIsWOf3bDxPE4tjPfQJ7lXDwiZ4nD89SulXlPavea0hjSSkWzTqHYs1t/9S2wUclNqju7wDnVtaM84/gRJ1BrgojPHQONfNUeNzp2PcOessR/2x9rGhrZALV938aNQ6GylOZLwkgEFBB/eILkdrZZ8DTaLUsntIFeTGcS2020o0fRzCxcvhkS0OZyZo93NFBtPZC8g3tX0Wjqq5F8EimhxT1ogwL02YGLlAe0lpS7H3vVQmXqlRKElv63Uj5H7x0rIGXAfawYB9JJwbU4YozLkIakHZIVP3DW1FdOeMYV6tLEtmcCukkir22R+wmjk33woTfApAJyfhd/F1h+baf1W/Sts/ge6FY/Bh52aby1u3mkcwArru6SGICLLziGeLzWkNGRYmVMju8csGMS0s1HpCTRTYT6ropuC9yrW36PuWJHBemWKWN19c8/7w3vUydbHOzTmKRF5Hj+an5mIjo/17Q0MUou/UXNxKnm/QM2MFQ0DjntGYSlsE/w9nS02efzyQlgd/ek63K7BStZqxwHngK0AGK14F+3KtrySPj2fJg362axzG8vbJMpUfHde9M30Pkt+7iI95JQ9yhmoHwWX38XdGmau5hWXk477J4PwtKnqBO1NNspDuJusn/HnUosUpvzXxACmqerD+69xZCjgUOqC40q8wIb0yUM8fLXX8YsYRZypsLSM1BXFJqR7o0jniiTjADt5K3C6vCMm8+soNBfJHC2i1V+cVSwd3TKlvddUds4XbNXTLPJhx9whNu7r4eOh8+GlNiqV6vdRbY
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0172ef3d-80a3-4e2f-22dd-08db35b9ceab
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 09:40:35.7015
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GnKwD9Kayq0djQDEgPywh6RZLNE0F7G5BNYYAU1PRK7lX7dAKY0oUxaEt0L6PHmyrw/Fq/Y4K/XYc0aj7Z8DJw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5195

On Wed, Apr 05, 2023 at 11:29:51AM +0200, Jan Beulich wrote:
> On 05.04.2023 10:59, Roger Pau Monné wrote:
> > We should likely move the setting of vm_assist to the domain create
> > hypercall, instead of doing it at vCPU initialization.
> 
> Perhaps, the more that setting the assist is limited to vCPU 0 init.
> Which in a way makes sense when considering domain creation, but it
> is odd for the case of vCPU 0 being brought down, reset, and then
> re-initialized; IOW I think arch_set_info_guest() should further
> have constrained the setting by a !d->creation_finished check.

Maybe, but still the right fix IMO would be to move this into
domain_create.  We could add the !d->creation_finished check but that
feels more like a bodge than a proper solution.

Restoring the previous check is better than nothing, but it would be
nice if long term we could get rid of the vCPU-related conditions.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:02:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:02:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518373.804838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjzxe-0007Xe-M1; Wed, 05 Apr 2023 10:01:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518373.804838; Wed, 05 Apr 2023 10:01:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pjzxe-0007XX-JM; Wed, 05 Apr 2023 10:01:50 +0000
Received: by outflank-mailman (input) for mailman id 518373;
 Wed, 05 Apr 2023 10:01:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kv/O=74=linaro.org=alex.bennee@srs-se1.protection.inumbo.net>)
 id 1pjzxd-0007XQ-IH
 for xen-devel@lists.xen.org; Wed, 05 Apr 2023 10:01:49 +0000
Received: from mail-wr1-x42a.google.com (mail-wr1-x42a.google.com
 [2a00:1450:4864:20::42a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dfee48be-d398-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 12:01:47 +0200 (CEST)
Received: by mail-wr1-x42a.google.com with SMTP id e18so35587146wra.9
 for <xen-devel@lists.xen.org>; Wed, 05 Apr 2023 03:01:46 -0700 (PDT)
Received: from zen.linaroharston ([85.9.250.243])
 by smtp.gmail.com with ESMTPSA id
 w7-20020adfec47000000b002cfe687fc7asm14532282wrn.67.2023.04.05.03.01.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 05 Apr 2023 03:01:45 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 1E04B1FFB7;
 Wed,  5 Apr 2023 11:01:45 +0100 (BST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfee48be-d398-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680688906;
        h=content-transfer-encoding:mime-version:message-id:in-reply-to:date
         :subject:cc:to:from:user-agent:references:from:to:cc:subject:date
         :message-id:reply-to;
        bh=xDFJjTGQNkjMjQaKDKrEsiOprP+3eY5rMht1R3BPmqI=;
        b=qJuUB8lXKPAV5az5FqyOzh/WnCwEbzeWlRe/pF6mMPsBue+wMOt4jrinivf9lbjUpj
         549SQvsI4JEw1kWHpGuzNjjgtF2FaRY74NPXxJL3rebT/OBdwarrYv3hKyub+n6lHD+v
         W1KKhZQ8xjsJwLmRTzhw1Jk0CGJc9RwPZTQr83EK9C0L96NZNLAlfk8qH1mvrT1mhp9E
         V7+aPEI7hnb4bZ+spDvZL1hbXutqqILlju2OgGdfRbp7wMf7Tpzjfq2e+dBmEqUyZHIe
         XvGZlRwMbSL9yUBa64cqjZlqK/PQIpmHT+l9dqKQsfiMiQ0J67JYpgDz+eVq5o4Ks3H7
         Semg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680688906;
        h=content-transfer-encoding:mime-version:message-id:in-reply-to:date
         :subject:cc:to:from:user-agent:references:x-gm-message-state:from:to
         :cc:subject:date:message-id:reply-to;
        bh=xDFJjTGQNkjMjQaKDKrEsiOprP+3eY5rMht1R3BPmqI=;
        b=XYAdqiSk81kYEuDaSk/klKXTpwdJsJxKFhq960sQLqS1D4lKx3aUOG+xG07OPp6xBN
         M8ZVUgeuuA6yYDUxlBqxPtWTlW0CXPwM3AKna5y0a7E5Qj/0A9otXYwWVGij2xQCCHhV
         zaLO2f9TKNaKwDlqUtqHT6d4nd7tYLrmi1ayfiKlYwuTt2vRGkqNHt+RkRJxpGY916F5
         6F4XLRgVju3J4yHI3sCFmIAGFPFSvXAGW1/kBAI7cOjdohP5t25WmCU91OFPqf9cfChd
         xpiw6em/C+h3vNSesQtZpRX/6telwnr9f4VZjtmUXDNjn6Fx4nYm4HYjNKQXfuXO6v0r
         5sGw==
X-Gm-Message-State: AAQBX9fC2FGlRvloSFXs4NYrxx5q+Jo0LpmzxfFN2LZ/VpDDwSTkcOtG
	SYuz59ZfTbPtiM5fWYXuOJSmyw==
X-Google-Smtp-Source: AKy350ZtC2zcGgpmA1tObIYkI/B3t4R7tGoffPFGOXgc8gOET2D0kOgJ1A31FE4xklaO7MxIT2j8nA==
X-Received: by 2002:adf:f1cc:0:b0:2cf:e67c:8245 with SMTP id z12-20020adff1cc000000b002cfe67c8245mr3708097wro.44.1680688905956;
        Wed, 05 Apr 2023 03:01:45 -0700 (PDT)
References: <cover.1678351495.git.viresh.kumar@linaro.org>
 <20230405080512.nvxiw4lv7hyuzqej@vireshk-i7>
User-agent: mu4e 1.10.0; emacs 29.0.60
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: qemu-devel@nongnu.org, virtio-dev@lists.oasis-open.org, Stefan Hajnoczi
 <stefanha@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, Vincent
 Guittot <vincent.guittot@linaro.org>, stratos-dev@op-lists.linaro.org,
 Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xen.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, Juergen Gross
 <jgross@suse.com>, Sebastien Boeuf <sebastien.boeuf@intel.com>, Liu Jiang
 <gerry@linux.alibaba.com>, Mathieu Poirier <mathieu.poirier@linaro.org>
Subject: Re: [PATCH V3 0/2] qemu: vhost-user: Support Xen memory mapping quirks
Date: Wed, 05 Apr 2023 11:00:34 +0100
In-reply-to: <20230405080512.nvxiw4lv7hyuzqej@vireshk-i7>
Message-ID: <87h6tulkae.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Viresh Kumar <viresh.kumar@linaro.org> writes:

> On 09-03-23, 14:20, Viresh Kumar wrote:
>> Hello,
>> 
>> This patchset tries to update the vhost-user protocol to make it support special
>> memory mapping required in case of Xen hypervisor.
>> 
>> The first patch is mostly cleanup and second one introduces a new xen specific
>> feature.
>
> Can we apply this now? I have developed code for rust-vmm crates
> based on this and we need to get this merged/finalized first before
> merging those changes.


I've queued it into my virtio/vhost-user-device series, so it'll get merged
with that series unless mst wants to take it now.

>
> Thanks.


-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:06:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:06:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518377.804847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk01w-0008Ds-7G; Wed, 05 Apr 2023 10:06:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518377.804847; Wed, 05 Apr 2023 10:06:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk01w-0008Dl-4a; Wed, 05 Apr 2023 10:06:16 +0000
Received: by outflank-mailman (input) for mailman id 518377;
 Wed, 05 Apr 2023 10:06:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E6P8=74=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pk01u-0008Dd-Qa
 for xen-devel@lists.xen.org; Wed, 05 Apr 2023 10:06:14 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7dc9ce65-d399-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 12:06:12 +0200 (CEST)
Received: from mail-ed1-f69.google.com (mail-ed1-f69.google.com
 [209.85.208.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-64-psKlu5H3NgSykp4d88qt7g-1; Wed, 05 Apr 2023 06:06:09 -0400
Received: by mail-ed1-f69.google.com with SMTP id
 c11-20020a509f8b000000b00501e2facf47so49642067edf.16
 for <xen-devel@lists.xen.org>; Wed, 05 Apr 2023 03:06:09 -0700 (PDT)
Received: from redhat.com ([2.52.139.22]) by smtp.gmail.com with ESMTPSA id
 k15-20020a170906158f00b00948cb15c642sm3154354ejd.42.2023.04.05.03.06.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 05 Apr 2023 03:06:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7dc9ce65-d399-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680689171;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=K7tqdHp0z7SrsceuGaqk3f4Dn2eBRCUCgxJHv4nhpXM=;
	b=M+8Y6JcufEf29RyWksXclCE/eIAMBOVfEv75cOKN5hZT4TCCohraVYKKtgw6V6okr/xCia
	4OQQ138nMHjwltpcRq2cXsRcHce4wR7rSzrrbL9IFneEIfDGH3WaZ3Q7iP/BmSkQ/QMMtr
	AVg4vP/6pYP2Qcch+9+L1vrmUZzqy5s=
X-MC-Unique: psKlu5H3NgSykp4d88qt7g-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680689168;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=K7tqdHp0z7SrsceuGaqk3f4Dn2eBRCUCgxJHv4nhpXM=;
        b=V4XSh60QqpVgWLj8Us8kDizNrW2iRvuofYSfbjirvgsS2nv80OGbKm0UO/kplnpG3C
         5qaNlMPSlqvBoBW7rbw+3U5BAX8GasBczfJVl98K5V2eI99pX1t2lUkK0X7mRQFehuWp
         4kqlSLTcE35HmeLuDjeqHfQTYtHeG3ERF2psyoxiPAwck1nC7G3C2LRVGGm5TbskfJvR
         +H0YIUmeFjRt7Q7gm0SFcLvkeIAOoTHBxxkAe13QSQvHvliiTojjjoyszecsal1a3nm1
         aOhKFPt2lFt7IdcHSwMzw5ojhY2j29sIgVp/hZ+FvN7lj4elRF/7haEAWKoIoJDxRKB4
         OlVQ==
X-Gm-Message-State: AAQBX9c7qUP49SLF0NKEKbJD1/xtcg0Z8v0BrSRizPQov05OMENzhwe1
	tuHUd6GWeuB6wjCdE4yUV0x/MDTEm3YlgZ3+tdCFHLFtsphrID+S7ah6j6E6v7hG1L+mPggOTVw
	LCitVesFXLv5irx8//w==
X-Received: by 2002:a17:906:a254:b0:933:37f4:2fe0 with SMTP id bi20-20020a170906a25400b0093337f42fe0mr2333565ejb.46.1680689168547;
        Wed, 05 Apr 2023 03:06:08 -0700 (PDT)
X-Google-Smtp-Source: AKy350Yf1A5XZK4oo80SHWAZ2GiHPkSEf5Qh7hmicDEC8A80aMGY6813loQHdU4+ZIZl0KypQH0u+A==
X-Received: by 2002:a17:906:a254:b0:933:37f4:2fe0 with SMTP id bi20-20020a170906a25400b0093337f42fe0mr2333546ejb.46.1680689168281;
        Wed, 05 Apr 2023 03:06:08 -0700 (PDT)
Date: Wed, 5 Apr 2023 06:06:03 -0400
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Alex =?iso-8859-1?Q?Benn=E9e?= <alex.bennee@linaro.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>, qemu-devel@nongnu.org,
	virtio-dev@lists.oasis-open.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	stratos-dev@op-lists.linaro.org,
	Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xen.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Sebastien Boeuf <sebastien.boeuf@intel.com>,
	Liu Jiang <gerry@linux.alibaba.com>,
	Mathieu Poirier <mathieu.poirier@linaro.org>
Subject: Re: [PATCH V3 0/2] qemu: vhost-user: Support Xen memory mapping
 quirks
Message-ID: <20230405060340-mutt-send-email-mst@kernel.org>
References: <cover.1678351495.git.viresh.kumar@linaro.org>
 <20230405080512.nvxiw4lv7hyuzqej@vireshk-i7>
 <87h6tulkae.fsf@linaro.org>
MIME-Version: 1.0
In-Reply-To: <87h6tulkae.fsf@linaro.org>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Wed, Apr 05, 2023 at 11:00:34AM +0100, Alex Bennée wrote:
> 
> Viresh Kumar <viresh.kumar@linaro.org> writes:
> 
> > On 09-03-23, 14:20, Viresh Kumar wrote:
> >> Hello,
> >> 
> >> This patchset tries to update the vhost-user protocol to make it support special
> >> memory mapping required in case of Xen hypervisor.
> >> 
> >> The first patch is mostly cleanup and second one introduces a new xen specific
> >> feature.
> >
> > Can we apply this now? I have developed code for rust-vmm crates
> > based on this and we need to get this merged/finalized first before
> > merging those changes.
> 
> 
> I've queued it into my virtio/vhost-user-device series, so it'll get merged
> with that series unless mst wants to take it now.

Well, the patches are tagged and I was going to take these after the release.
It's probably easier not to work on this in two trees.  Still, if there's
something in your tree being blocked by these patches, then
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Let me know.


> >
> > Thanks.
> 
> 
> -- 
> Alex Bennée
> Virtualisation Tech Lead @ Linaro



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:15:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:15:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518382.804858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Ay-0001Tl-6J; Wed, 05 Apr 2023 10:15:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518382.804858; Wed, 05 Apr 2023 10:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Ay-0001Te-1U; Wed, 05 Apr 2023 10:15:36 +0000
Received: by outflank-mailman (input) for mailman id 518382;
 Wed, 05 Apr 2023 10:15:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gw2r=74=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pk0Aw-0001TU-Mt
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:15:35 +0000
Received: from outbound.mail.protection.outlook.com
 (mail-ve1eur01on0603.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::603])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb2f1987-d39a-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 12:15:31 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7597.eurprd04.prod.outlook.com (2603:10a6:102:e0::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 10:15:27 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 10:15:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb2f1987-d39a-11ed-85db-49a42c6b2330
Message-ID: <2ee8a4b4-c0ac-8950-297a-e1fe97d2c494@suse.com>
Date: Wed, 5 Apr 2023 12:15:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 2/3] multiboot2: parse console= and vga= options when
 setting GOP mode
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230331095946.45024-1-roger.pau@citrix.com>
 <20230331095946.45024-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230331095946.45024-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 31.03.2023 11:59, Roger Pau Monne wrote:
> Only set the GOP mode if vga is selected in the console option,

This particular aspect of the behavior is inconsistent with legacy boot
behavior: there, "vga=" isn't qualified by what "console=" specifies.

> otherwise just fetch the information from the current mode in order to
> make it available to dom0.
> 
> Introduce support for passing the command line to the efi_multiboot2()
> helper, and parse the console= and vga= options if present.
> 
> Add support for the 'gfx' and 'current' vga options, ignore the 'keep'
> option, and print a warning message about other options not being
> currently implemented.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>[...] 
> --- a/xen/arch/x86/efi/efi-boot.h
> +++ b/xen/arch/x86/efi/efi-boot.h
> @@ -786,7 +786,30 @@ static bool __init efi_arch_use_config_file(EFI_SYSTEM_TABLE *SystemTable)
>  
>  static void __init efi_arch_flush_dcache_area(const void *vaddr, UINTN size) { }
>  
> -void __init efi_multiboot2(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
> +/* Return the next occurrence of opt in cmd. */
> +static const char __init *get_option(const char *cmd, const char *opt)
> +{
> +    const char *s = cmd, *o = NULL;
> +
> +    if ( !cmd || !opt )

I can see why you need to check "cmd", but I would say there's no need to
check "opt".
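For reference, the repeated-scan pattern the patch builds on top of
get_option() ("the last occurrence is the enforced one") can be sketched
standalone. This is a simplified illustration using plain strstr(), without
the word-boundary handling a real get_option() would need; the function name
is illustrative, not Xen's:

```c
#include <stddef.h>
#include <string.h>

/* Return a pointer to the value of the LAST occurrence of opt in cmd,
 * or NULL if opt never occurs. Mirrors the "keep scanning and remember
 * the most recent hit" loop in the patch. */
static const char *last_option(const char *cmd, const char *opt)
{
    const char *hit = NULL, *s = cmd;
    size_t n = strlen(opt);

    if ( !cmd )
        return NULL;
    while ( (s = strstr(s, opt)) != NULL )
    {
        hit = s + n;   /* value starts right after "opt" */
        s += n;
    }
    return hit;
}
```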

> @@ -807,7 +830,60 @@ void __init efi_multiboot2(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable
>  
>      if ( gop )
>      {
> -        gop_mode = efi_find_gop_mode(gop, 0, 0, 0);
> +        const char *opt = NULL, *last = cmdline;
> +        /* Default console selection is "com1,vga". */
> +        bool vga = true;
> +
> +        /* For the console option the last occurrence is the enforced one. */
> +        while ( (last = get_option(last, "console=")) != NULL )
> +            opt = last;
> +
> +        if ( opt )
> +        {
> +            const char *s = strstr(opt, "vga");
> +
> +            if ( !s || s > strpbrk(opt, " ") )

Why strpbrk() and not the simpler strchr()? Or did you mean to also look
for tabs, but then didn't include \t here (and in get_option())? (Legacy
boot code also takes \r and \n as separators, btw, but I'm unconvinced
of the need.)

Also, as I understand it, this is UB when strpbrk() returns NULL, as
relational operators (excluding the equality ones) may only be applied when
both pointers refer to the same object (or to one past the end of the same
array).
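A defensive variant of the check being reviewed, which avoids the NULL
comparison by testing both lookup results first, could look like this
(a sketch with an illustrative name, using strchr() as suggested, not the
patch's actual code):

```c
#include <string.h>

/* Return nonzero if "vga" appears within THIS console= value, i.e. before
 * the space that terminates it. Both strstr() and strchr() may return
 * NULL, so each result is checked before any relational comparison --
 * comparing a NULL pointer with '>' is undefined behaviour in C. */
static int console_selects_vga(const char *opt)
{
    const char *vga = strstr(opt, "vga");
    const char *sep = strchr(opt, ' ');   /* end of this option's value */

    if ( !vga )
        return 0;                          /* no "vga" at all */
    if ( sep && vga > sep )
        return 0;                          /* "vga" belongs to a later option */
    return 1;
}
```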

> +                vga = false;
> +        }
> +
> +        if ( vga )
> +        {
> +            unsigned int width = 0, height = 0, depth = 0;
> +            bool keep_current = false;
> +
> +            last = cmdline;
> +            while ( (last = get_option(last, "vga=")) != NULL )

It's yet different for "vga=", I'm afraid: early boot code (boot/cmdline.c)
finds the first instance only, while normal command line handling respects the
last instance only. So while "vga=gfx-... vga=keep" will have the expected
effect, "vga=keep vga=gfx-..." won't (I think). It is certainly fine to be
more flexible here, but I think this then wants to be accompanied by an update
to the command line doc, no matter that presently it doesn't really
describe these peculiarities. Otoh it would end up being slightly cheaper
to only look for the first instance here as well. In particular ...

> +            {
> +                if ( !strncmp(last, "gfx-", 4) )
> +                {
> +                    width = simple_strtoul(last + 4, &last, 10);
> +                    if ( *last == 'x' )
> +                        height = simple_strtoul(last + 1, &last, 10);
> +                    if ( *last == 'x' )
> +                        depth = simple_strtoul(last + 1, &last, 10);
> +                    /* Allow depth to be 0 or unset. */
> +                    if ( !width || !height )
> +                        width = height = depth = 0;
> +                    keep_current = false;
> +                }
> +                else if ( !strncmp(last, "current", 7) )
> +                    keep_current = true;
> +                else if ( !strncmp(last, "keep", 4) )
> +                {
> +                    /* Ignore. */
> +                }
> +                else
> +                {
> +                    /* Fallback to defaults if unimplemented. */
> +                    width = height = depth = 0;
> +                    keep_current = false;

... this zapping of what was successfully parsed before would then not be
needed in any event (else I would question why this is necessary).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:18:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:18:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518389.804868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0DZ-00026V-Ik; Wed, 05 Apr 2023 10:18:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518389.804868; Wed, 05 Apr 2023 10:18:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0DZ-00026O-Ff; Wed, 05 Apr 2023 10:18:17 +0000
Received: by outflank-mailman (input) for mailman id 518389;
 Wed, 05 Apr 2023 10:18:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0DY-00024X-0j
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:18:16 +0000
Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com
 [2a00:1450:4864:20::32b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2d2ca14d-d39b-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 12:18:15 +0200 (CEST)
Received: by mail-wm1-x32b.google.com with SMTP id
 n9-20020a05600c4f8900b003f05f617f3cso2518425wmq.2
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:18:15 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 t16-20020a05600c451000b003ef66c89af0sm5203608wmo.0.2023.04.05.03.18.13
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:18:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d2ca14d-d39b-11ed-85db-49a42c6b2330
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: [PATCH 00/14] accel: Share CPUState accel context (HAX/NVMM/WHPX/HVF)
Date: Wed,  5 Apr 2023 12:17:57 +0200
Message-Id: <20230405101811.76663-1-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

This series is part of the single binary effort.

All accelerators will share their per-vCPU context via
an opaque 'accel' pointer within the CPUState.

This series first handles HAX/NVMM/WHPX/HVF; KVM and TCG
will follow as two separate (bigger) series.
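The shape the cover letter describes can be sketched minimally as follows.
This is an illustration of the pattern only, not QEMU's actual definitions:
the field layout of AccelvCPUState, the init function name, and the fd value
are all hypothetical; generic code only ever sees the opaque pointer.

```c
#include <stdlib.h>

/* Opaque to generic code: each accelerator defines the real layout. */
typedef struct AccelvCPUState AccelvCPUState;

typedef struct CPUState {
    int cpu_index;
    AccelvCPUState *accel;   /* per-vCPU accelerator context */
} CPUState;

/* Only the accelerator's own files see the concrete definition. */
struct AccelvCPUState {
    int fd;                  /* e.g. a kernel vCPU handle (hypothetical) */
};

/* Hypothetical accelerator init: allocate and populate the context. */
static int accel_init_vcpu(CPUState *cpu)
{
    cpu->accel = calloc(1, sizeof(*cpu->accel));
    if (!cpu->accel)
        return -1;
    cpu->accel->fd = 42;     /* arbitrary placeholder value */
    return 0;
}
```

This replaces one named field per accelerator (hax_vcpu, qemu_vcpu, ...) with
a single pointer whose meaning depends on which accelerator is active.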

Philippe Mathieu-Daudé (14):
  accel: Document generic accelerator headers
  accel: Remove unused hThread variable on TCG/WHPX
  accel: Fix a leak on Windows HAX
  accel: Destroy HAX vCPU threads once done
  accel: Rename 'hax_vcpu' as 'accel' in CPUState
  accel: Use a typedef for struct hax_vcpu_state
  accel: Rename struct hax_vcpu_state -> struct AccelvCPUState
  accel: Move HAX hThread to accelerator context
  accel: Allocate NVMM vCPU using g_try_FOO()
  accel: Rename NVMM struct qemu_vcpu -> struct AccelvCPUState
  accel: Inline NVMM get_qemu_vcpu()
  accel: Rename WHPX struct whpx_vcpu -> struct AccelvCPUState
  accel: Inline WHPX get_whpx_vcpu()
  accel: Rename HVF struct hvf_vcpu_state -> struct AccelvCPUState

 include/hw/core/cpu.h             | 11 ++---
 include/sysemu/hax.h              |  2 +
 include/sysemu/hvf_int.h          |  2 +-
 include/sysemu/kvm.h              |  2 +
 include/sysemu/nvmm.h             |  2 +
 include/sysemu/tcg.h              |  2 +
 include/sysemu/whpx.h             |  2 +
 include/sysemu/xen.h              |  2 +
 target/i386/hax/hax-i386.h        | 14 ++++---
 accel/hvf/hvf-accel-ops.c         | 16 +++----
 accel/tcg/tcg-accel-ops-mttcg.c   |  4 --
 accel/tcg/tcg-accel-ops-rr.c      |  3 --
 target/arm/hvf/hvf.c              | 70 +++++++++++++++----------------
 target/i386/hax/hax-accel-ops.c   |  5 ++-
 target/i386/hax/hax-all.c         | 26 +++++++-----
 target/i386/hax/hax-posix.c       |  4 +-
 target/i386/hax/hax-windows.c     |  6 +--
 target/i386/nvmm/nvmm-all.c       | 38 +++++++----------
 target/i386/whpx/whpx-accel-ops.c |  3 --
 target/i386/whpx/whpx-all.c       | 39 +++++++----------
 20 files changed, 123 insertions(+), 130 deletions(-)

-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:18:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:18:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518390.804878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Dh-0002OR-UE; Wed, 05 Apr 2023 10:18:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518390.804878; Wed, 05 Apr 2023 10:18:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Dh-0002OK-QG; Wed, 05 Apr 2023 10:18:25 +0000
Received: by outflank-mailman (input) for mailman id 518390;
 Wed, 05 Apr 2023 10:18:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0Dg-0002Na-Et
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:18:24 +0000
Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com
 [2a00:1450:4864:20::42c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 31729cae-d39b-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 12:18:22 +0200 (CEST)
Received: by mail-wr1-x42c.google.com with SMTP id t4so30403161wra.7
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:18:22 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 a12-20020a056000100c00b002cea8664304sm14500312wrx.91.2023.04.05.03.18.19
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:18:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31729cae-d39b-11ed-b464-930f4c7d94ae
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Reinoud Zandijk <reinoud@netbsd.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Sunil Muthuswamy <sunilmut@microsoft.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH 01/14] accel: Document generic accelerator headers
Date: Wed,  5 Apr 2023 12:17:58 +0200
Message-Id: <20230405101811.76663-2-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

These headers are meant to be included by any file to check
the availability of accelerators, and are thus not
accelerator-specific.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 include/sysemu/hax.h  | 2 ++
 include/sysemu/kvm.h  | 2 ++
 include/sysemu/nvmm.h | 2 ++
 include/sysemu/tcg.h  | 2 ++
 include/sysemu/whpx.h | 2 ++
 include/sysemu/xen.h  | 2 ++
 6 files changed, 12 insertions(+)

diff --git a/include/sysemu/hax.h b/include/sysemu/hax.h
index bf8f99a824..80fc716f80 100644
--- a/include/sysemu/hax.h
+++ b/include/sysemu/hax.h
@@ -19,6 +19,8 @@
  *
  */
 
+/* header to be included in non-HAX-specific code */
+
 #ifndef QEMU_HAX_H
 #define QEMU_HAX_H
 
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index c8281c07a7..cc6c678ed8 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -11,6 +11,8 @@
  *
  */
 
+/* header to be included in non-KVM-specific code */
+
 #ifndef QEMU_KVM_H
 #define QEMU_KVM_H
 
diff --git a/include/sysemu/nvmm.h b/include/sysemu/nvmm.h
index 833670fccb..be7bc9a62d 100644
--- a/include/sysemu/nvmm.h
+++ b/include/sysemu/nvmm.h
@@ -7,6 +7,8 @@
  * See the COPYING file in the top-level directory.
  */
 
+/* header to be included in non-NVMM-specific code */
+
 #ifndef QEMU_NVMM_H
 #define QEMU_NVMM_H
 
diff --git a/include/sysemu/tcg.h b/include/sysemu/tcg.h
index 53352450ff..5e2ca9aab3 100644
--- a/include/sysemu/tcg.h
+++ b/include/sysemu/tcg.h
@@ -5,6 +5,8 @@
  * See the COPYING file in the top-level directory.
  */
 
+/* header to be included in non-TCG-specific code */
+
 #ifndef SYSEMU_TCG_H
 #define SYSEMU_TCG_H
 
diff --git a/include/sysemu/whpx.h b/include/sysemu/whpx.h
index 2889fa2278..781ca5b2b6 100644
--- a/include/sysemu/whpx.h
+++ b/include/sysemu/whpx.h
@@ -10,6 +10,8 @@
  *
  */
 
+/* header to be included in non-WHPX-specific code */
+
 #ifndef QEMU_WHPX_H
 #define QEMU_WHPX_H
 
diff --git a/include/sysemu/xen.h b/include/sysemu/xen.h
index 0ca25697e4..bc13ad5692 100644
--- a/include/sysemu/xen.h
+++ b/include/sysemu/xen.h
@@ -5,6 +5,8 @@
  * See the COPYING file in the top-level directory.
  */
 
+/* header to be included in non-Xen-specific code */
+
 #ifndef SYSEMU_XEN_H
 #define SYSEMU_XEN_H
 
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:18:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:18:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518391.804888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Dm-0002gl-6A; Wed, 05 Apr 2023 10:18:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518391.804888; Wed, 05 Apr 2023 10:18:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Dm-0002ge-2x; Wed, 05 Apr 2023 10:18:30 +0000
Received: by outflank-mailman (input) for mailman id 518391;
 Wed, 05 Apr 2023 10:18:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0Dk-00024X-Nw
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:18:28 +0000
Received: from mail-wm1-x330.google.com (mail-wm1-x330.google.com
 [2a00:1450:4864:20::330])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 34ea1565-d39b-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 12:18:28 +0200 (CEST)
Received: by mail-wm1-x330.google.com with SMTP id m8so9666674wmq.5
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:18:28 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 ay8-20020a05600c1e0800b003edddae1068sm1744815wmb.9.2023.04.05.03.18.26
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:18:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34ea1565-d39b-11ed-85db-49a42c6b2330
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Sunil Muthuswamy <sunilmut@microsoft.com>
Subject: [PATCH 02/14] accel: Remove unused hThread variable on TCG/WHPX
Date: Wed,  5 Apr 2023 12:17:59 +0200
Message-Id: <20230405101811.76663-3-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On Windows hosts, cpu->hThread is assigned but never accessed:
remove it.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 accel/tcg/tcg-accel-ops-mttcg.c   | 4 ----
 accel/tcg/tcg-accel-ops-rr.c      | 3 ---
 target/i386/whpx/whpx-accel-ops.c | 3 ---
 3 files changed, 10 deletions(-)

diff --git a/accel/tcg/tcg-accel-ops-mttcg.c b/accel/tcg/tcg-accel-ops-mttcg.c
index d50239e0e2..19cfb26c02 100644
--- a/accel/tcg/tcg-accel-ops-mttcg.c
+++ b/accel/tcg/tcg-accel-ops-mttcg.c
@@ -152,8 +152,4 @@ void mttcg_start_vcpu_thread(CPUState *cpu)
 
     qemu_thread_create(cpu->thread, thread_name, mttcg_cpu_thread_fn,
                        cpu, QEMU_THREAD_JOINABLE);
-
-#ifdef _WIN32
-    cpu->hThread = qemu_thread_get_handle(cpu->thread);
-#endif
 }
diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
index 290833a37f..dafff71530 100644
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -291,9 +291,6 @@ void rr_start_vcpu_thread(CPUState *cpu)
 
         single_tcg_halt_cond = cpu->halt_cond;
         single_tcg_cpu_thread = cpu->thread;
-#ifdef _WIN32
-        cpu->hThread = qemu_thread_get_handle(cpu->thread);
-#endif
     } else {
         /* we share the thread */
         cpu->thread = single_tcg_cpu_thread;
diff --git a/target/i386/whpx/whpx-accel-ops.c b/target/i386/whpx/whpx-accel-ops.c
index e8dc4b3a47..67cad86720 100644
--- a/target/i386/whpx/whpx-accel-ops.c
+++ b/target/i386/whpx/whpx-accel-ops.c
@@ -71,9 +71,6 @@ static void whpx_start_vcpu_thread(CPUState *cpu)
              cpu->cpu_index);
     qemu_thread_create(cpu->thread, thread_name, whpx_cpu_thread_fn,
                        cpu, QEMU_THREAD_JOINABLE);
-#ifdef _WIN32
-    cpu->hThread = qemu_thread_get_handle(cpu->thread);
-#endif
 }
 
 static void whpx_kick_vcpu_thread(CPUState *cpu)
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:18:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:18:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518393.804898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Ds-00036C-Et; Wed, 05 Apr 2023 10:18:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518393.804898; Wed, 05 Apr 2023 10:18:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Ds-000365-AP; Wed, 05 Apr 2023 10:18:36 +0000
Received: by outflank-mailman (input) for mailman id 518393;
 Wed, 05 Apr 2023 10:18:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0Dr-0002Na-Sm
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:18:35 +0000
Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com
 [2a00:1450:4864:20::42b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3893df93-d39b-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 12:18:34 +0200 (CEST)
Received: by mail-wr1-x42b.google.com with SMTP id y14so35677419wrq.4
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:18:34 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 d9-20020adff849000000b002c56af32e8csm14637033wrq.35.2023.04.05.03.18.32
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:18:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3893df93-d39b-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680689914;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=E6NW9VhhS5fLeVwZeTyWWyTbAHfvPOC/r0zI8X/AYhw=;
        b=FrtFKE2hFxCwSIOxwL7fJNJgclFtRZYFCKSwPQeumlgt6bklL8ZJLOhaxRanBaGmBN
         sSSdwoBuo0PWQIPC5aVYpH3V4lrlZrxSnF6ec08aPzNngc+4NCJth/v+YEXjEI5hAYZ7
         deplHkaFXLIdswcAyfQjoGU7HHGPGzJHJzztDL7n6LAkZeMYnyvPoi+3fXYTgBgP5XWs
         cPr7wmMyCDh4HsAHrXF/xlBABJq7EFQwNi0Rj4haWnYMHlXIAYY+Lsa8OGwO4Mt6ZR/d
         e7uyZyaesG4XjKKufoRM2SpXpgdAnFrSp25xtnMkjXzs/M6Ow7DNjd1C6Ovjn8q1Umho
         74DQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680689914;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=E6NW9VhhS5fLeVwZeTyWWyTbAHfvPOC/r0zI8X/AYhw=;
        b=QlO71eGYcc4/A839ZcUOtT1xhGgo6p6sBc0/SoL0yHAzIMQ2534YlBNBp/v1Tr4fai
         s3vVURIQcv4z0XHckrG+VVXXPGJ3guwo0ZYGtzdlP/F9OslCrk7a6uBWuLwPH3IFiRca
         Qmq3DXvB57qGFi/61LUjorru7kUl/50Y/OMz8j8tOf4MJIk8el54lkBT8Rv+aCPNuNu/
         sXErcQeE+Chm3WRn/e1NHkM6aGfufph5Ci7HvR1BFJAEIp+ApEOifPcEB52ZxOt7NFjP
         O5KaYAgTE8DzLW/6tkpvcuiZ41x4EPYz59oomc7JclN1ipEOlpkpR83VtMQfuRIHYnGn
         3O2g==
X-Gm-Message-State: AAQBX9fEhyJijfAKsl7+kEZRdXE2m0VA9xBYCVwsI0j8AW0RQfIUgtT9
	OxitiO7hgJWadZg4vPhur9TgRQ==
X-Google-Smtp-Source: AKy350YkinPPwbt/c64vUzk9lM4SXacH8xjrAIzv8V1BeCK80tQHniLUzIE+5w/+4XbN335yprpGWw==
X-Received: by 2002:a5d:4d11:0:b0:2ce:9819:1c1e with SMTP id z17-20020a5d4d11000000b002ce98191c1emr3913343wrt.30.1680689913858;
        Wed, 05 Apr 2023 03:18:33 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: [PATCH 03/14] accel: Fix a leak on Windows HAX
Date: Wed,  5 Apr 2023 12:18:00 +0200
Message-Id: <20230405101811.76663-4-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

cpu->hThread is only used on the error path in hax_kick_vcpu_thread(), and
the handle is never closed: close it when destroying the vCPU to avoid
leaking it.

Fixes: b0cb0a66d6 ("Plumb the HAXM-based hardware acceleration support")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 target/i386/hax/hax-all.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/i386/hax/hax-all.c b/target/i386/hax/hax-all.c
index 3e5992a63b..a2321a1eff 100644
--- a/target/i386/hax/hax-all.c
+++ b/target/i386/hax/hax-all.c
@@ -205,6 +205,9 @@ int hax_vcpu_destroy(CPUState *cpu)
      */
     hax_close_fd(vcpu->fd);
     hax_global.vm->vcpus[vcpu->vcpu_id] = NULL;
+#ifdef _WIN32
+    CloseHandle(cpu->hThread);
+#endif
     g_free(vcpu);
     return 0;
 }
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:18:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:18:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518395.804908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Dx-0003Yv-Mj; Wed, 05 Apr 2023 10:18:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518395.804908; Wed, 05 Apr 2023 10:18:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Dx-0003Ym-Iu; Wed, 05 Apr 2023 10:18:41 +0000
Received: by outflank-mailman (input) for mailman id 518395;
 Wed, 05 Apr 2023 10:18:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0Dw-00024X-Nv
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:18:40 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3c165dc3-d39b-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 12:18:40 +0200 (CEST)
Received: by mail-wm1-x329.google.com with SMTP id
 m6-20020a05600c3b0600b003ee6e324b19so21676991wms.1
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:18:40 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 l6-20020a1c7906000000b003ee4e99a8f6sm1696262wme.33.2023.04.05.03.18.38
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:18:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c165dc3-d39b-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680689919;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=IGm3E04zIWsYk05zgucyUicl5+3W8OYlCuN37xDTmqU=;
        b=rTu1tRZPqV4LFx/UoOZi2WLzS6khixq6d/WYxFS7eNNn/OYGeFnv0qc3Qcr02VSKxy
         rb660oY/fmNu7J89tEYsSKeFFMuIPWe0bwl6jY9zC3o3nLjl9EKCdNxVkCbBuJ6lm5CH
         gjTz1i3s9XYei/E+itmN9I/+2jlgOY1BBVZJB9fjEYyywfgateusSwusPqpu7NrWhMR0
         GMl837ggRVzo1kt3Z33apYrhv71R+7WptfZnBSbUKuQc75ogJNF4kCG2j8Ggv1YnBRCy
         yVn/ZgcJgjoRbF4Cbyz/wswjzVjQZxiYa1fX9y35/BD8s5WDn2b95Nx3tL6XSawRO0qk
         iJOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680689919;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=IGm3E04zIWsYk05zgucyUicl5+3W8OYlCuN37xDTmqU=;
        b=48W1T2IaEf288nn7yfly3SGwmytE2z05UcIbVEjvOhBS0U/It01nMMQxkNiLXdRXRn
         X6HieE5DWVJ0ug2s2PqgAK6QzXz1fJarjAqOuj5WOUYf9ASbNMUEuy2NasnmfVTYy0f1
         42Rs0JXpeMyxtkrixDPuspTI6Ezayv68VfYgUa795y/W9Ahflxs2b5cm963dPvSOEyg+
         dJqnGZPQNEqOwy/DuzZ8wPCRhQ2CaqCLwKXOySKq23Gft5zhc39k7M3vCLTPVGudiz1l
         WHVPDzfZzx+AqHEIMZyu/WXi6eFTLEg+3lBIQFJNasK0OOpI7vV6awsti9S99Ud755qy
         A6Xw==
X-Gm-Message-State: AAQBX9fG1/zDNeZ119JRmWPGiH/38FJMyANdCrMbAzULANK4oLfjfBOb
	Xi2X8hwSreW61CXkMm9OSoRWmg==
X-Google-Smtp-Source: AKy350ZDfAFPi7DjfgX/oekuSK5IMwY5EAU6LrxLx1bImlo8Bj5BMxd5ZdtWz6ZThOU3rdF/7XqStA==
X-Received: by 2002:a1c:770e:0:b0:3ed:8780:f27b with SMTP id t14-20020a1c770e000000b003ed8780f27bmr4338493wmi.16.1680689919738;
        Wed, 05 Apr 2023 03:18:39 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: [PATCH 04/14] accel: Destroy HAX vCPU threads once done
Date: Wed,  5 Apr 2023 12:18:01 +0200
Message-Id: <20230405101811.76663-5-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When the vCPU thread finishes its processing, destroy
it and signal its destruction to the generic vCPU
management layer.

Add a sanity check for the vCPU accelerator context.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 target/i386/hax/hax-accel-ops.c | 3 +++
 target/i386/hax/hax-all.c       | 1 +
 2 files changed, 4 insertions(+)

diff --git a/target/i386/hax/hax-accel-ops.c b/target/i386/hax/hax-accel-ops.c
index 18114fe34d..0157a628a3 100644
--- a/target/i386/hax/hax-accel-ops.c
+++ b/target/i386/hax/hax-accel-ops.c
@@ -53,6 +53,8 @@ static void *hax_cpu_thread_fn(void *arg)
 
         qemu_wait_io_event(cpu);
     } while (!cpu->unplug || cpu_can_run(cpu));
+    hax_vcpu_destroy(cpu);
+    cpu_thread_signal_destroyed(cpu);
     rcu_unregister_thread();
     return NULL;
 }
@@ -69,6 +71,7 @@ static void hax_start_vcpu_thread(CPUState *cpu)
              cpu->cpu_index);
     qemu_thread_create(cpu->thread, thread_name, hax_cpu_thread_fn,
                        cpu, QEMU_THREAD_JOINABLE);
+    assert(cpu->hax_vcpu);
 #ifdef _WIN32
     cpu->hThread = qemu_thread_get_handle(cpu->thread);
 #endif
diff --git a/target/i386/hax/hax-all.c b/target/i386/hax/hax-all.c
index a2321a1eff..38a4323a3c 100644
--- a/target/i386/hax/hax-all.c
+++ b/target/i386/hax/hax-all.c
@@ -209,6 +209,7 @@ int hax_vcpu_destroy(CPUState *cpu)
     CloseHandle(cpu->hThread);
 #endif
     g_free(vcpu);
+    cpu->hax_vcpu = NULL;
     return 0;
 }
 
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:19:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518400.804917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0E5-0004EL-Vm; Wed, 05 Apr 2023 10:18:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518400.804917; Wed, 05 Apr 2023 10:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0E5-0004EE-Su; Wed, 05 Apr 2023 10:18:49 +0000
Received: by outflank-mailman (input) for mailman id 518400;
 Wed, 05 Apr 2023 10:18:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0E4-0002Na-RP
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:18:48 +0000
Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com
 [2a00:1450:4864:20::42d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 403c51b8-d39b-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 12:18:47 +0200 (CEST)
Received: by mail-wr1-x42d.google.com with SMTP id j24so35697941wrd.0
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:18:47 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 c1-20020adfef41000000b002d322b9a7f5sm14646618wrp.88.2023.04.05.03.18.44
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:18:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 403c51b8-d39b-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680689926;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=JKKj8/4pIvX0Y2G0Iw2NvkVvuNOgIGf3wL1krYyOhFI=;
        b=zZVWJRJJOaN+7Gbu5U00rRVxIF+M1REi0nasLd2+TX7rCZ676M8HfGZvzPUiiR+jmJ
         LzK21sAAN+g/5CA7pY0t71M+JRU/D0i2mKppiG11wKsb+ATVhKWStv+h8AcYfYWJ7o7q
         ZkVpkMfk04QTu63lwsYX09N6qWIm7uJJV67xns5BXbF7sXdcTtMRBlCsegTqDBPUcmBh
         ARAhBXK711g76VGI42qECVpEmFzwZ0PK4jFw+pyEwqyKB+BJxwBLM0WD1/uRxXapkmvf
         Ej+a+b8eIjRl0xMwS3Wlq5n5DK62HsO8B/0gtpHvOMwDIRcustaLrz9inDPWuLWLPEfj
         UczQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680689926;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=JKKj8/4pIvX0Y2G0Iw2NvkVvuNOgIGf3wL1krYyOhFI=;
        b=gMbQmKSDOi+AYxv+OFNiSssECuzDxVZ4+jQ0vkHHd6sL12+i9rfOkQ3z9HIyzHKJEG
         a2kZl/t62U5ffqiujfStBt5Hq5t+Kx47ImZNQw986Rg6JSQ5nLDH3q2o2QYQuXuiHgIZ
         2iNAshx14cRuF2GeAKAXDAb1oBzgfODR4DjyZhz6r1qx8CPm3KUfltkotAFKnkYvwrfq
         UeBIIf1UUmc1gNd9LLhOpX1FJGL9K7PiDEUjo/MoCRF37xTZLp1XuTlwtlYVL7ZeL43T
         yfOOvIVpkoJfaSrOZWxMCf/bw0/NWNkKyhbYZzKG5qvPdGxh+U6byCt6KA1Aw+uC7QpQ
         NEyQ==
X-Gm-Message-State: AAQBX9fp/OrcSgArGhdq8SCnz89bmgYh+rQnvv7KegMyR02DqHm90Mgv
	/hJ4OcDAhyZfEnkFjhhw7oufog==
X-Google-Smtp-Source: AKy350bkMz7m3CqlZANDFC+N8H247sFug5Oc9sidBbssiHByDywSp20m46NTY3RF7XV/gx2ANc24Hw==
X-Received: by 2002:a5d:68ca:0:b0:2e4:bfa0:8c32 with SMTP id p10-20020a5d68ca000000b002e4bfa08c32mr3774527wrw.47.1680689926542;
        Wed, 05 Apr 2023 03:18:46 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Yanan Wang <wangyanan55@huawei.com>,
	Reinoud Zandijk <reinoud@netbsd.org>,
	Sunil Muthuswamy <sunilmut@microsoft.com>
Subject: [PATCH 05/14] accel: Rename 'hax_vcpu' as 'accel' in CPUState
Date: Wed,  5 Apr 2023 12:18:02 +0200
Message-Id: <20230405101811.76663-6-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

All accelerators will share a single opaque context
in CPUState. Start by renaming 'hax_vcpu' to 'accel'.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 include/hw/core/cpu.h           |  2 +-
 target/i386/hax/hax-accel-ops.c |  2 +-
 target/i386/hax/hax-all.c       | 18 +++++++++---------
 target/i386/nvmm/nvmm-all.c     |  6 +++---
 target/i386/whpx/whpx-all.c     |  6 +++---
 5 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 397fd3ac68..193494cde4 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -442,7 +442,7 @@ struct CPUState {
     /* Used for user-only emulation of prctl(PR_SET_UNALIGN). */
     bool prctl_unalign_sigbus;
 
-    struct hax_vcpu_state *hax_vcpu;
+    struct hax_vcpu_state *accel;
 
     struct hvf_vcpu_state *hvf;
 
diff --git a/target/i386/hax/hax-accel-ops.c b/target/i386/hax/hax-accel-ops.c
index 0157a628a3..a8512efcd5 100644
--- a/target/i386/hax/hax-accel-ops.c
+++ b/target/i386/hax/hax-accel-ops.c
@@ -71,7 +71,7 @@ static void hax_start_vcpu_thread(CPUState *cpu)
              cpu->cpu_index);
     qemu_thread_create(cpu->thread, thread_name, hax_cpu_thread_fn,
                        cpu, QEMU_THREAD_JOINABLE);
-    assert(cpu->hax_vcpu);
+    assert(cpu->accel);
 #ifdef _WIN32
     cpu->hThread = qemu_thread_get_handle(cpu->thread);
 #endif
diff --git a/target/i386/hax/hax-all.c b/target/i386/hax/hax-all.c
index 38a4323a3c..3865ff9419 100644
--- a/target/i386/hax/hax-all.c
+++ b/target/i386/hax/hax-all.c
@@ -62,7 +62,7 @@ int valid_hax_tunnel_size(uint16_t size)
 
 hax_fd hax_vcpu_get_fd(CPUArchState *env)
 {
-    struct hax_vcpu_state *vcpu = env_cpu(env)->hax_vcpu;
+    struct hax_vcpu_state *vcpu = env_cpu(env)->accel;
     if (!vcpu) {
         return HAX_INVALID_FD;
     }
@@ -188,7 +188,7 @@ int hax_vcpu_create(int id)
 
 int hax_vcpu_destroy(CPUState *cpu)
 {
-    struct hax_vcpu_state *vcpu = cpu->hax_vcpu;
+    struct hax_vcpu_state *vcpu = cpu->accel;
 
     if (!hax_global.vm) {
         fprintf(stderr, "vcpu %x destroy failed, vm is null\n", vcpu->vcpu_id);
@@ -209,7 +209,7 @@ int hax_vcpu_destroy(CPUState *cpu)
     CloseHandle(cpu->hThread);
 #endif
     g_free(vcpu);
-    cpu->hax_vcpu = NULL;
+    cpu->accel = NULL;
     return 0;
 }
 
@@ -223,7 +223,7 @@ int hax_init_vcpu(CPUState *cpu)
         exit(-1);
     }
 
-    cpu->hax_vcpu = hax_global.vm->vcpus[cpu->cpu_index];
+    cpu->accel = hax_global.vm->vcpus[cpu->cpu_index];
     cpu->vcpu_dirty = true;
     qemu_register_reset(hax_reset_vcpu_state, cpu->env_ptr);
 
@@ -415,7 +415,7 @@ static int hax_handle_io(CPUArchState *env, uint32_t df, uint16_t port,
 static int hax_vcpu_interrupt(CPUArchState *env)
 {
     CPUState *cpu = env_cpu(env);
-    struct hax_vcpu_state *vcpu = cpu->hax_vcpu;
+    struct hax_vcpu_state *vcpu = cpu->accel;
     struct hax_tunnel *ht = vcpu->tunnel;
 
     /*
@@ -447,7 +447,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
 
 void hax_raise_event(CPUState *cpu)
 {
-    struct hax_vcpu_state *vcpu = cpu->hax_vcpu;
+    struct hax_vcpu_state *vcpu = cpu->accel;
 
     if (!vcpu) {
         return;
@@ -468,7 +468,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
     int ret = 0;
     CPUState *cpu = env_cpu(env);
     X86CPU *x86_cpu = X86_CPU(cpu);
-    struct hax_vcpu_state *vcpu = cpu->hax_vcpu;
+    struct hax_vcpu_state *vcpu = cpu->accel;
     struct hax_tunnel *ht = vcpu->tunnel;
 
     if (!hax_enabled()) {
@@ -1114,8 +1114,8 @@ void hax_reset_vcpu_state(void *opaque)
 {
     CPUState *cpu;
     for (cpu = first_cpu; cpu != NULL; cpu = CPU_NEXT(cpu)) {
-        cpu->hax_vcpu->tunnel->user_event_pending = 0;
-        cpu->hax_vcpu->tunnel->ready_for_interrupt_injection = 0;
+        cpu->accel->tunnel->user_event_pending = 0;
+        cpu->accel->tunnel->ready_for_interrupt_injection = 0;
     }
 }
 
diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index b75738ee9c..cf4f0af24b 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -52,7 +52,7 @@ static struct qemu_machine qemu_mach;
 static struct qemu_vcpu *
 get_qemu_vcpu(CPUState *cpu)
 {
-    return (struct qemu_vcpu *)cpu->hax_vcpu;
+    return (struct qemu_vcpu *)cpu->accel;
 }
 
 static struct nvmm_machine *
@@ -995,7 +995,7 @@ nvmm_init_vcpu(CPUState *cpu)
     }
 
     cpu->vcpu_dirty = true;
-    cpu->hax_vcpu = (struct hax_vcpu_state *)qcpu;
+    cpu->accel = (struct hax_vcpu_state *)qcpu;
 
     return 0;
 }
@@ -1030,7 +1030,7 @@ nvmm_destroy_vcpu(CPUState *cpu)
     struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
 
     nvmm_vcpu_destroy(mach, &qcpu->vcpu);
-    g_free(cpu->hax_vcpu);
+    g_free(cpu->accel);
 }
 
 /* -------------------------------------------------------------------------- */
diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index 52af81683c..d1ad6f156a 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -262,7 +262,7 @@ static bool whpx_has_xsave(void)
 
 static struct whpx_vcpu *get_whpx_vcpu(CPUState *cpu)
 {
-    return (struct whpx_vcpu *)cpu->hax_vcpu;
+    return (struct whpx_vcpu *)cpu->accel;
 }
 
 static WHV_X64_SEGMENT_REGISTER whpx_seg_q2h(const SegmentCache *qs, int v86,
@@ -2258,7 +2258,7 @@ int whpx_init_vcpu(CPUState *cpu)
 
     vcpu->interruptable = true;
     cpu->vcpu_dirty = true;
-    cpu->hax_vcpu = (struct hax_vcpu_state *)vcpu;
+    cpu->accel = (struct hax_vcpu_state *)vcpu;
     max_vcpu_index = max(max_vcpu_index, cpu->cpu_index);
     qemu_add_vm_change_state_handler(whpx_cpu_update_state, cpu->env_ptr);
 
@@ -2300,7 +2300,7 @@ void whpx_destroy_vcpu(CPUState *cpu)
 
     whp_dispatch.WHvDeleteVirtualProcessor(whpx->partition, cpu->cpu_index);
     whp_dispatch.WHvEmulatorDestroyEmulator(vcpu->emulator);
-    g_free(cpu->hax_vcpu);
+    g_free(cpu->accel);
     return;
 }
 
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:19:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518403.804928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0EG-0004uu-FS; Wed, 05 Apr 2023 10:19:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518403.804928; Wed, 05 Apr 2023 10:19:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0EG-0004un-Ac; Wed, 05 Apr 2023 10:19:00 +0000
Received: by outflank-mailman (input) for mailman id 518403;
 Wed, 05 Apr 2023 10:18:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0EE-0002Na-Mn
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:18:58 +0000
Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com
 [2a00:1450:4864:20::432])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 461dd625-d39b-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 12:18:57 +0200 (CEST)
Received: by mail-wr1-x432.google.com with SMTP id l12so35623144wrm.10
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:18:56 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 l18-20020adfe592000000b002c5534db60bsm14659802wrm.71.2023.04.05.03.18.52
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:18:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 461dd625-d39b-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680689936;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=6tftOh0BEcl8WGCnB6MC//b/5zdxDYjL5goslSUZJws=;
        b=yPbFhCwGeiCV4thDFylzfI/HwDyG5hzAj4MOzixAzKF25y0uzdJIpFxXuplWi9vjHl
         /pOllACxcM9z8xuQDdRw6M+2mhe134Dj+vuHZkVEnkH+iEToGCJ/gsI/e1vCCyvskKgL
         u3Mvtb6MFtubWA04wRDH74PCwnCAY/i2SngbnxTxDSfmMlutOwruOtZ/lxg1P8mD8QYc
         drS8XHL8crBE+lDtcMRmYU91Oi0ecrSiQz2TIPt5ieGnpP+PuypBCXiw3EMqe02ST1rI
         wsMGZCOdiVWLzTUX7Ca+wgJTbV5lZ6+mNbhvpM8NJsdWCvazgFD200r/8aPIN7ho8VP6
         onNw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680689936;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=6tftOh0BEcl8WGCnB6MC//b/5zdxDYjL5goslSUZJws=;
        b=20XlBoaIgxqLyPHspG5cAGPtye59aQMVcmYsl32UwxVf4NrX+gOrYG5gtboAPSTAQC
         yLuyKS0TCWumPj7AuiDkxyYahWSDfVgQxMTJdz0CkM1xlPkX+bROXoWVMjk4MabqZ+Jh
         DRTiItiNZoFR/qJ/lSc5xGiDd3Qwk/tDNesNjh7ui6BeBSuzku0ae5AjJh1ooSv9YJ7D
         N0hW2w8Dy35GoEyulwi3XhedqgeM9UeA+dzszJsZFZGgURmUPGHvJoNcr5h6RWcYe2x/
         ajvrYtqQfopbxUurulANys721/AF+LWZZD/OTx2fXpynL4a9bWn2sLZKKi7BxSNQeXlp
         xcRg==
X-Gm-Message-State: AAQBX9fZa8rtByuIva3RTlYmd567w3Cayzb6nmFrIRcno2dR+5lt93qP
	UWm4JTVCVDdoQjJb6V1JieSAaw==
X-Google-Smtp-Source: AKy350bSSR7dvtgFBRLgKNCzhtqOOQSU+0g5L1LOMUC3nb32jDJaawY9UmLjKaQ4gtEPNgL7gGWKbg==
X-Received: by 2002:a5d:610a:0:b0:2d5:553a:93ac with SMTP id v10-20020a5d610a000000b002d5553a93acmr3978963wrt.7.1680689936532;
        Wed, 05 Apr 2023 03:18:56 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: [PATCH 06/14] accel: Use a typedef for struct hax_vcpu_state
Date: Wed,  5 Apr 2023 12:18:03 +0200
Message-Id: <20230405101811.76663-7-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Use a type definition instead of an explicit structure.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 target/i386/hax/hax-i386.h    | 10 +++++-----
 target/i386/hax/hax-all.c     | 16 ++++++++--------
 target/i386/hax/hax-posix.c   |  4 ++--
 target/i386/hax/hax-windows.c |  4 ++--
 4 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/target/i386/hax/hax-i386.h b/target/i386/hax/hax-i386.h
index 409ebdb4af..3cb3b9bbd0 100644
--- a/target/i386/hax/hax-i386.h
+++ b/target/i386/hax/hax-i386.h
@@ -25,12 +25,12 @@ typedef HANDLE hax_fd;
 #endif
 
 extern struct hax_state hax_global;
-struct hax_vcpu_state {
+typedef struct hax_vcpu_state {
     hax_fd fd;
     int vcpu_id;
     struct hax_tunnel *tunnel;
     unsigned char *iobuf;
-};
+} hax_vcpu_state;
 
 struct hax_state {
     hax_fd fd; /* the global hax device interface */
@@ -46,7 +46,7 @@ struct hax_vm {
     hax_fd fd;
     int id;
     int numvcpus;
-    struct hax_vcpu_state **vcpus;
+    hax_vcpu_state **vcpus;
 };
 
 /* Functions exported to host specific mode */
@@ -57,7 +57,7 @@ int valid_hax_tunnel_size(uint16_t size);
 int hax_mod_version(struct hax_state *hax, struct hax_module_version *version);
 int hax_inject_interrupt(CPUArchState *env, int vector);
 struct hax_vm *hax_vm_create(struct hax_state *hax, int max_cpus);
-int hax_vcpu_run(struct hax_vcpu_state *vcpu);
+int hax_vcpu_run(hax_vcpu_state *vcpu);
 int hax_vcpu_create(int id);
 void hax_kick_vcpu_thread(CPUState *cpu);
 
@@ -76,7 +76,7 @@ int hax_host_create_vm(struct hax_state *hax, int *vm_id);
 hax_fd hax_host_open_vm(struct hax_state *hax, int vm_id);
 int hax_host_create_vcpu(hax_fd vm_fd, int vcpuid);
 hax_fd hax_host_open_vcpu(int vmid, int vcpuid);
-int hax_host_setup_vcpu_channel(struct hax_vcpu_state *vcpu);
+int hax_host_setup_vcpu_channel(hax_vcpu_state *vcpu);
 hax_fd hax_mod_open(void);
 void hax_memory_init(void);
 
diff --git a/target/i386/hax/hax-all.c b/target/i386/hax/hax-all.c
index 3865ff9419..a55b18f353 100644
--- a/target/i386/hax/hax-all.c
+++ b/target/i386/hax/hax-all.c
@@ -62,7 +62,7 @@ int valid_hax_tunnel_size(uint16_t size)
 
 hax_fd hax_vcpu_get_fd(CPUArchState *env)
 {
-    struct hax_vcpu_state *vcpu = env_cpu(env)->accel;
+    hax_vcpu_state *vcpu = env_cpu(env)->accel;
     if (!vcpu) {
         return HAX_INVALID_FD;
     }
@@ -136,7 +136,7 @@ static int hax_version_support(struct hax_state *hax)
 
 int hax_vcpu_create(int id)
 {
-    struct hax_vcpu_state *vcpu = NULL;
+    hax_vcpu_state *vcpu = NULL;
     int ret;
 
     if (!hax_global.vm) {
@@ -149,7 +149,7 @@ int hax_vcpu_create(int id)
         return 0;
     }
 
-    vcpu = g_new0(struct hax_vcpu_state, 1);
+    vcpu = g_new0(hax_vcpu_state, 1);
 
     ret = hax_host_create_vcpu(hax_global.vm->fd, id);
     if (ret) {
@@ -188,7 +188,7 @@ int hax_vcpu_create(int id)
 
 int hax_vcpu_destroy(CPUState *cpu)
 {
-    struct hax_vcpu_state *vcpu = cpu->accel;
+    hax_vcpu_state *vcpu = cpu->accel;
 
     if (!hax_global.vm) {
         fprintf(stderr, "vcpu %x destroy failed, vm is null\n", vcpu->vcpu_id);
@@ -263,7 +263,7 @@ struct hax_vm *hax_vm_create(struct hax_state *hax, int max_cpus)
     }
 
     vm->numvcpus = max_cpus;
-    vm->vcpus = g_new0(struct hax_vcpu_state *, vm->numvcpus);
+    vm->vcpus = g_new0(hax_vcpu_state *, vm->numvcpus);
     for (i = 0; i < vm->numvcpus; i++) {
         vm->vcpus[i] = NULL;
     }
@@ -415,7 +415,7 @@ static int hax_handle_io(CPUArchState *env, uint32_t df, uint16_t port,
 static int hax_vcpu_interrupt(CPUArchState *env)
 {
     CPUState *cpu = env_cpu(env);
-    struct hax_vcpu_state *vcpu = cpu->accel;
+    hax_vcpu_state *vcpu = cpu->accel;
     struct hax_tunnel *ht = vcpu->tunnel;
 
     /*
@@ -447,7 +447,7 @@ static int hax_vcpu_interrupt(CPUArchState *env)
 
 void hax_raise_event(CPUState *cpu)
 {
-    struct hax_vcpu_state *vcpu = cpu->accel;
+    hax_vcpu_state *vcpu = cpu->accel;
 
     if (!vcpu) {
         return;
@@ -468,7 +468,7 @@ static int hax_vcpu_hax_exec(CPUArchState *env)
     int ret = 0;
     CPUState *cpu = env_cpu(env);
     X86CPU *x86_cpu = X86_CPU(cpu);
-    struct hax_vcpu_state *vcpu = cpu->accel;
+    hax_vcpu_state *vcpu = cpu->accel;
     struct hax_tunnel *ht = vcpu->tunnel;
 
     if (!hax_enabled()) {
diff --git a/target/i386/hax/hax-posix.c b/target/i386/hax/hax-posix.c
index ac1a51096e..8ee247845b 100644
--- a/target/i386/hax/hax-posix.c
+++ b/target/i386/hax/hax-posix.c
@@ -205,7 +205,7 @@ hax_fd hax_host_open_vcpu(int vmid, int vcpuid)
     return fd;
 }
 
-int hax_host_setup_vcpu_channel(struct hax_vcpu_state *vcpu)
+int hax_host_setup_vcpu_channel(hax_vcpu_state *vcpu)
 {
     int ret;
     struct hax_tunnel_info info;
@@ -227,7 +227,7 @@ int hax_host_setup_vcpu_channel(struct hax_vcpu_state *vcpu)
     return 0;
 }
 
-int hax_vcpu_run(struct hax_vcpu_state *vcpu)
+int hax_vcpu_run(hax_vcpu_state *vcpu)
 {
     return ioctl(vcpu->fd, HAX_VCPU_IOCTL_RUN, NULL);
 }
diff --git a/target/i386/hax/hax-windows.c b/target/i386/hax/hax-windows.c
index 59afa213a6..08ec93a256 100644
--- a/target/i386/hax/hax-windows.c
+++ b/target/i386/hax/hax-windows.c
@@ -301,7 +301,7 @@ hax_fd hax_host_open_vcpu(int vmid, int vcpuid)
     return hDeviceVCPU;
 }
 
-int hax_host_setup_vcpu_channel(struct hax_vcpu_state *vcpu)
+int hax_host_setup_vcpu_channel(hax_vcpu_state *vcpu)
 {
     hax_fd hDeviceVCPU = vcpu->fd;
     int ret;
@@ -327,7 +327,7 @@ int hax_host_setup_vcpu_channel(struct hax_vcpu_state *vcpu)
     return 0;
 }
 
-int hax_vcpu_run(struct hax_vcpu_state *vcpu)
+int hax_vcpu_run(hax_vcpu_state *vcpu)
 {
     int ret;
     HANDLE hDeviceVCPU = vcpu->fd;
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:19:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:19:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518406.804938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0EM-0005NU-P6; Wed, 05 Apr 2023 10:19:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518406.804938; Wed, 05 Apr 2023 10:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0EM-0005NI-Kv; Wed, 05 Apr 2023 10:19:06 +0000
Received: by outflank-mailman (input) for mailman id 518406;
 Wed, 05 Apr 2023 10:19:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0EL-00024X-5R
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:19:05 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4a6f4b4b-d39b-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 12:19:04 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 l10-20020a05600c1d0a00b003f04bd3691eso6765085wms.5
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:19:04 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 f23-20020a7bcd17000000b003eb966d39desm1762763wmj.2.2023.04.05.03.19.01
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:19:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a6f4b4b-d39b-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680689944;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0bXZGRuYaUWuS+waIlKd8VQv2t6ejAS6CfkyaECLnt0=;
        b=LIHJIKyuI7acMvt8rBrVg6eWFKgBkrJ3/b0arkph/DUHa0BLbU93H3b1GIwH92R4Zy
         +uvzlTCyUDlYf/j4al5z+wQUHx97GbeXdG9kBiFdMAHyviECH6MqzuY0CNALDN8Ap5Kl
         ApXLzY1U62e1horWK6Vt/U+csJdBfLpJs0pQHuzAzmEZEThhQrNbsPj75PM04Up7HqEo
         3/73NqiS9NmRAA8IVw8pYFlfJUzbpOMYXpCwDL0opeM+q2GUARMOBu22yoM5Hh8oWG8N
         qONWesA/cQ68zjyXExg7MhsK0M/t5sPv8yhjX2pllIkZYpVHgJWQ2hcKuv9wWGPJXdDK
         ogxw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680689944;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=0bXZGRuYaUWuS+waIlKd8VQv2t6ejAS6CfkyaECLnt0=;
        b=WF0c0ek4QkPZ8AQoRDWY3zdt98WsSNb0Trd+pbXRgem+gdY/6mFFCD4sGD3FQutWQF
         hPfG5hVNX5ayeDIL321OzbKXi5sUhKW3xudxoQEXAm0xCdFDchAthvCsZ1tk2zyVM8Fn
         Yyv+B86xJyeK6GR6beAHwJ8is7HIdGV+slxP0xvDqE2YWJ4PoPTHfcGJqFgywxlFNFkt
         1+rg0boZy8bn4wo1aaK2PuCcPTdCyp+ow/qalK93j0pqHKgp0+wcXzuW/NtG3bzGFW7H
         AM1OrlIQCEaBqwpb8aiynMKKklkUuVoDXqnmny0SlHVrff3TxkqkO59O/L86KLCG0JB8
         /hig==
X-Gm-Message-State: AAQBX9eEnmHpVNebFr7b2XZbQqYQhPscrSQPgaiuihRa9wFuCjr7X65E
	T29VFWxEbVcQddLhUuGUbnu2jwan/K7RUk0AvaY=
X-Google-Smtp-Source: AKy350aOrY2Jw4wGetHTWaGkqVDR1/1hOaALytcr5CxarBrtXg4uhD0qN6hOdWeu8PVqJ7zVJHPs0g==
X-Received: by 2002:a7b:ca48:0:b0:3ee:ed5:6115 with SMTP id m8-20020a7bca48000000b003ee0ed56115mr4226349wml.19.1680689943740;
        Wed, 05 Apr 2023 03:19:03 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Yanan Wang <wangyanan55@huawei.com>,
	Reinoud Zandijk <reinoud@netbsd.org>,
	Sunil Muthuswamy <sunilmut@microsoft.com>
Subject: [PATCH 07/14] accel: Rename struct hax_vcpu_state -> struct AccelvCPUState
Date: Wed,  5 Apr 2023 12:18:04 +0200
Message-Id: <20230405101811.76663-8-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

We want all accelerators to share the same opaque pointer in
CPUState. Start with the HAX context, renaming its forward-declared
structure 'hax_vcpu_state' to 'AccelvCPUState'.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 include/hw/core/cpu.h       | 7 +++----
 target/i386/hax/hax-i386.h  | 3 ++-
 target/i386/nvmm/nvmm-all.c | 2 +-
 target/i386/whpx/whpx-all.c | 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 193494cde4..173f47d24e 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -235,8 +235,7 @@ typedef struct SavedIOTLB {
 
 struct KVMState;
 struct kvm_run;
-
-struct hax_vcpu_state;
+struct AccelvCPUState;
 struct hvf_vcpu_state;
 
 /* work queue */
@@ -305,6 +304,7 @@ struct qemu_work_item;
  * @next_cpu: Next CPU sharing TB cache.
  * @opaque: User data.
  * @mem_io_pc: Host Program Counter at which the memory was accessed.
+ * @accel: Pointer to accelerator specific state.
  * @kvm_fd: vCPU file descriptor for KVM.
  * @work_mutex: Lock to prevent multiple access to @work_list.
  * @work_list: List of pending asynchronous work.
@@ -423,6 +423,7 @@ struct CPUState {
     uint32_t can_do_io;
     int32_t exception_index;
 
+    struct AccelvCPUState *accel;
     /* shared by kvm, hax and hvf */
     bool vcpu_dirty;
 
@@ -442,8 +443,6 @@ struct CPUState {
     /* Used for user-only emulation of prctl(PR_SET_UNALIGN). */
     bool prctl_unalign_sigbus;
 
-    struct hax_vcpu_state *accel;
-
     struct hvf_vcpu_state *hvf;
 
     /* track IOMMUs whose translations we've cached in the TCG TLB */
diff --git a/target/i386/hax/hax-i386.h b/target/i386/hax/hax-i386.h
index 3cb3b9bbd0..d11d43e857 100644
--- a/target/i386/hax/hax-i386.h
+++ b/target/i386/hax/hax-i386.h
@@ -25,7 +25,8 @@ typedef HANDLE hax_fd;
 #endif
 
 extern struct hax_state hax_global;
-typedef struct hax_vcpu_state {
+
+typedef struct AccelvCPUState {
     hax_fd fd;
     int vcpu_id;
     struct hax_tunnel *tunnel;
diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index cf4f0af24b..3c7bdd560f 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -995,7 +995,7 @@ nvmm_init_vcpu(CPUState *cpu)
     }
 
     cpu->vcpu_dirty = true;
-    cpu->accel = (struct hax_vcpu_state *)qcpu;
+    cpu->accel = (struct AccelvCPUState *)qcpu;
 
     return 0;
 }
diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index d1ad6f156a..70eadb7f05 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -2258,7 +2258,7 @@ int whpx_init_vcpu(CPUState *cpu)
 
     vcpu->interruptable = true;
     cpu->vcpu_dirty = true;
-    cpu->accel = (struct hax_vcpu_state *)vcpu;
+    cpu->accel = (struct AccelvCPUState *)vcpu;
     max_vcpu_index = max(max_vcpu_index, cpu->cpu_index);
     qemu_add_vm_change_state_handler(whpx_cpu_update_state, cpu->env_ptr);
 
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:27:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:27:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518414.804948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Lx-0007kW-G0; Wed, 05 Apr 2023 10:26:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518414.804948; Wed, 05 Apr 2023 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Lx-0007kP-Cb; Wed, 05 Apr 2023 10:26:57 +0000
Received: by outflank-mailman (input) for mailman id 518414;
 Wed, 05 Apr 2023 10:26:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kv/O=74=linaro.org=alex.bennee@srs-se1.protection.inumbo.net>)
 id 1pk0Lv-0007kH-OS
 for xen-devel@lists.xen.org; Wed, 05 Apr 2023 10:26:55 +0000
Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com
 [2a00:1450:4864:20::431])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 624caf44-d39c-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 12:26:54 +0200 (CEST)
Received: by mail-wr1-x431.google.com with SMTP id l12so35644946wrm.10
 for <xen-devel@lists.xen.org>; Wed, 05 Apr 2023 03:26:53 -0700 (PDT)
Received: from zen.linaroharston ([85.9.250.243])
 by smtp.gmail.com with ESMTPSA id
 bd5-20020a05600c1f0500b003f0472ffc7csm1768461wmb.11.2023.04.05.03.26.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 05 Apr 2023 03:26:52 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 7DA471FFB7;
 Wed,  5 Apr 2023 11:26:52 +0100 (BST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 624caf44-d39c-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680690413;
        h=content-transfer-encoding:mime-version:message-id:in-reply-to:date
         :subject:cc:to:from:user-agent:references:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Lww6dDMe0bMEJli6/6ZP6j4C+of80/qR+ErByK0yw5g=;
        b=A+4oG+PJQ4lEZ6T1z8xTGbTG1pEru7t6wl92B6x2UAdtjSn4xJzf7eV7I2j4gpyXbn
         92WZ5xE3JJgQFywvuyLRJMG40XXULmggfQHz69w92CEIWEygoSuigsrjq9ht1V3Zgu4i
         z6TNY9OfzzcZUcU6HOCAxUSIK/3goyowjInTAglW7fsosL6LxLa21DNgd+90SoYxNwuq
         nd1NNnRxAH+BUWyGiJkYqmYw+QSWqTzOkEiAh19YsI+W7wvfl1+9S9BkkI5tN+bxmvJZ
         grxortISRJgmJfB1B0BK78szZCrNSh2iGdM/WCCfqUuU/hheHyQVJZmwC0sSHOmCINwS
         /EBA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680690413;
        h=content-transfer-encoding:mime-version:message-id:in-reply-to:date
         :subject:cc:to:from:user-agent:references:x-gm-message-state:from:to
         :cc:subject:date:message-id:reply-to;
        bh=Lww6dDMe0bMEJli6/6ZP6j4C+of80/qR+ErByK0yw5g=;
        b=RTx3Mudl1P+J4p5Le4iBC1iyLBqzS8Rv8UWS9wPq1ZzUutVHnCs+3eLeYthxVU2nX0
         31kr/iO12ZomSdAtxRNcIb9h0dPtxsuHK86jSsowCwCsvI3eub/gszdTTa9nZOWBZpoS
         32GAss6SdUmPz6jh1oACyDymOkFZEzV4s9EwmIE+lBk5xrYqT89K8olwtxLK8YfY33JY
         3vT3mGzdXJklb0JylTN2KpZb7HvHJXbrNaAyVCdVGZqUBk6l3TPoPejUnPEZCaFdCsvg
         tT5WvsjX5dMl/aP9IBpOWGpeyQ5D/QzwJL3+jJM30ZAqJ0Q28UrbkZxHL+qDAjtcWwIJ
         YAmg==
X-Gm-Message-State: AAQBX9cKsEvctpLDr+b1F5gN7xnbjsS2FisLyz2aCaEJWG0qS51LmgCf
	v7YLj7f+/keL6diCuH+z7mwXkw==
X-Google-Smtp-Source: AKy350bNr4KJH+ZbusBX8ddHYWCR5TF9ug/D4hQCNf5bE3nZzYv03gsenyNx3XTYaQjbqtu0borjcw==
X-Received: by 2002:a5d:6b0a:0:b0:2cd:bc79:5444 with SMTP id v10-20020a5d6b0a000000b002cdbc795444mr3747030wrw.2.1680690413210;
        Wed, 05 Apr 2023 03:26:53 -0700 (PDT)
References: <cover.1678351495.git.viresh.kumar@linaro.org>
 <20230405080512.nvxiw4lv7hyuzqej@vireshk-i7> <87h6tulkae.fsf@linaro.org>
 <20230405060340-mutt-send-email-mst@kernel.org>
User-agent: mu4e 1.10.0; emacs 29.0.60
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>, qemu-devel@nongnu.org,
 virtio-dev@lists.oasis-open.org, Stefan Hajnoczi <stefanha@redhat.com>,
 Vincent Guittot <vincent.guittot@linaro.org>,
 stratos-dev@op-lists.linaro.org, Oleksandr Tyshchenko
 <olekstysh@gmail.com>, xen-devel@lists.xen.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Juergen Gross <jgross@suse.com>, Sebastien
 Boeuf <sebastien.boeuf@intel.com>, Liu Jiang <gerry@linux.alibaba.com>,
 Mathieu Poirier <mathieu.poirier@linaro.org>
Subject: Re: [PATCH V3 0/2] qemu: vhost-user: Support Xen memory mapping quirks
Date: Wed, 05 Apr 2023 11:24:43 +0100
In-reply-to: <20230405060340-mutt-send-email-mst@kernel.org>
Message-ID: <87cz4ilj4j.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


"Michael S. Tsirkin" <mst@redhat.com> writes:

> On Wed, Apr 05, 2023 at 11:00:34AM +0100, Alex Bennée wrote:
>>
>> Viresh Kumar <viresh.kumar@linaro.org> writes:
>>
>> > On 09-03-23, 14:20, Viresh Kumar wrote:
>> >> Hello,
>> >>
>> >> This patchset tries to update the vhost-user protocol to make it support special
>> >> memory mapping required in case of Xen hypervisor.
>> >>
>> >> The first patch is mostly cleanup and second one introduces a new xen specific
>> >> feature.
>> >
>> > Can we apply this now ? I have developed code for rust-vmm crates
>> > based on this and we need to get this merged/finalized first before
>> > merging those changes.
>>
>>
>> I've queued into my virtio/vhost-user-device series so I'll get merged
>> with that series unless mst wants to take it now.
>
> Well the patches are tagged and I was going to take these after the release.
> Probably easier not to work on this in two trees.
> Still if there's something in your tree being blocked
> by these patches then
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> Let me know.

The virtio/vhost-user-device tree work is orthogonal to this vhost-user
enhancement although all the work is related to our latest VirtIO
project inside Linaro, Orko:
https://linaro.atlassian.net/wiki/spaces/ORKO/overview

So if you are happy please take these patches now for when the tree
re-opens.

>
>
>> >
>> > Thanks.
>>
>>
>> --
>> Alex Bennée
>> Virtualisation Tech Lead @ Linaro


--
Alex Bennée
Virtualisation Tech Lead @ Linaro


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:28:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:28:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518418.804958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Nu-0008Kx-RJ; Wed, 05 Apr 2023 10:28:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518418.804958; Wed, 05 Apr 2023 10:28:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Nu-0008Kq-O8; Wed, 05 Apr 2023 10:28:58 +0000
Received: by outflank-mailman (input) for mailman id 518418;
 Wed, 05 Apr 2023 10:28:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0Ej-0002Na-9W
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:19:29 +0000
Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com
 [2a00:1450:4864:20::42b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 586a9388-d39b-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 12:19:27 +0200 (CEST)
Received: by mail-wr1-x42b.google.com with SMTP id e18so35635998wra.9
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:19:27 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 q12-20020adff78c000000b002c5d3f0f737sm14591138wrp.30.2023.04.05.03.19.24
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:19:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 586a9388-d39b-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680689967;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Zfz8X/GUHgutjcuWgCA2LWUZkCc4gRaFUTsRO6OH06g=;
        b=yL8zWIHfSIdBZYPkNDculPyk2YWRIlBQX8fQHgjeuSYeTxaloVwCCglH5DXA7/0zDg
         TZoThoM/7Yzk229VxX9xWe1sKCb9xPkVbKN8Rue8VJ9iqfCxkzpWDThAQYbFg2V+tT4n
         0lFcGByrmLtNLB2oRBl4LbPvsUeFbnwE43+pzddA4+wnnvx5l0rEaM3ytpPw2h5NOvmZ
         DVCYR6+eBbGkSb54pQlKLGqI1fhY/v3/lWGMSe4bZ8LvxqbH70eUp/6qo6vPZ7mUdh5Y
         PeQA12GP5hcgxOvodwGCZ4f1KCUcAPsr/zlEr6sTgBGcMgExpeywl1vstjMaVXIEP1HP
         4juw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680689967;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Zfz8X/GUHgutjcuWgCA2LWUZkCc4gRaFUTsRO6OH06g=;
        b=WLxFXax1asXeE+jvMRFIdNkSkSk9P77rpUcZdvn3JkvbPuii95gVkNcDlzmGd5SXx7
         DMiUX3WKdwBX0dWb377mksbs/cUPW9/LChRVo9qWlDLqB9I5Zj49ZCyd5IVAFi0sxI19
         Ojf9zB4CTEI3xaNxveBLVXGUpwYvP60dh3mNjL75dNVx6Jm2TqxnlubPUJcqrjI3jjdG
         71fu5Hpwnvrm6AYFCrehJxHTZTwZwdbMBywAQO5oEzUw853/xIui3liWeTyIVYmch1vw
         LkG+dlij1s8obBjqwmv0RnG9tTh2GfWY2MNfZEntZjHh0XUfyaFBVyE/N61T3Itdk+Zw
         6huw==
X-Gm-Message-State: AAQBX9ffMYJ2uzDrr8nv++J767qMUJDiO9aqLoDlS2mxitAXmpN5ciLG
	lUz1PtWALgsSbryqZ1zWnid/Mg==
X-Google-Smtp-Source: AKy350Zvml7eWtPRqrRcRnGg6qq51tcU3Hzdj52W85LNQPphjff69u1l2lWyWIRrXMamHJtWfSOkkw==
X-Received: by 2002:adf:e48b:0:b0:2c7:6bb:fb7a with SMTP id i11-20020adfe48b000000b002c706bbfb7amr4270631wrm.54.1680689967157;
        Wed, 05 Apr 2023 03:19:27 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Reinoud Zandijk <reinoud@netbsd.org>
Subject: [PATCH 10/14] accel: Rename NVMM struct qemu_vcpu -> struct AccelvCPUState
Date: Wed,  5 Apr 2023 12:18:07 +0200
Message-Id: <20230405101811.76663-11-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

We want all accelerators to share the same opaque pointer in
CPUState. Rename the NVMM 'qemu_vcpu' struct to 'AccelvCPUState'.

Replace g_try_malloc0() with g_try_new0() for readability.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 target/i386/nvmm/nvmm-all.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index 45fd318d23..97a7225598 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -26,7 +26,7 @@
 
 #include <nvmm.h>
 
-struct qemu_vcpu {
+struct AccelvCPUState {
     struct nvmm_vcpu vcpu;
     uint8_t tpr;
     bool stop;
@@ -49,10 +49,10 @@ struct qemu_machine {
 static bool nvmm_allowed;
 static struct qemu_machine qemu_mach;
 
-static struct qemu_vcpu *
+static struct AccelvCPUState *
 get_qemu_vcpu(CPUState *cpu)
 {
-    return (struct qemu_vcpu *)cpu->accel;
+    return cpu->accel;
 }
 
 static struct nvmm_machine *
@@ -86,7 +86,7 @@ nvmm_set_registers(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
     struct nvmm_machine *mach = get_nvmm_mach();
-    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     struct nvmm_x64_state *state = vcpu->state;
     uint64_t bitmap;
@@ -223,7 +223,7 @@ nvmm_get_registers(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
     struct nvmm_machine *mach = get_nvmm_mach();
-    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct nvmm_x64_state *state = vcpu->state;
@@ -347,7 +347,7 @@ static bool
 nvmm_can_take_int(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
-    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     struct nvmm_machine *mach = get_nvmm_mach();
 
@@ -372,7 +372,7 @@ nvmm_can_take_int(CPUState *cpu)
 static bool
 nvmm_can_take_nmi(CPUState *cpu)
 {
-    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
 
     /*
      * Contrary to INTs, NMIs always schedule an exit when they are
@@ -395,7 +395,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
     struct nvmm_machine *mach = get_nvmm_mach();
-    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct nvmm_x64_state *state = vcpu->state;
@@ -478,7 +478,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
 static void
 nvmm_vcpu_post_run(CPUState *cpu, struct nvmm_vcpu_exit *exit)
 {
-    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
     uint64_t tpr;
@@ -565,7 +565,7 @@ static int
 nvmm_handle_rdmsr(struct nvmm_machine *mach, CPUState *cpu,
     struct nvmm_vcpu_exit *exit)
 {
-    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct nvmm_x64_state *state = vcpu->state;
@@ -610,7 +610,7 @@ static int
 nvmm_handle_wrmsr(struct nvmm_machine *mach, CPUState *cpu,
     struct nvmm_vcpu_exit *exit)
 {
-    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct nvmm_x64_state *state = vcpu->state;
@@ -686,7 +686,7 @@ nvmm_vcpu_loop(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
     struct nvmm_machine *mach = get_nvmm_mach();
-    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct nvmm_vcpu_exit *exit = vcpu->exit;
@@ -892,7 +892,7 @@ static void
 nvmm_ipi_signal(int sigcpu)
 {
     if (current_cpu) {
-        struct qemu_vcpu *qcpu = get_qemu_vcpu(current_cpu);
+        struct AccelvCPUState *qcpu = get_qemu_vcpu(current_cpu);
 #if NVMM_USER_VERSION >= 2
         struct nvmm_vcpu *vcpu = &qcpu->vcpu;
         nvmm_vcpu_stop(vcpu);
@@ -926,7 +926,7 @@ nvmm_init_vcpu(CPUState *cpu)
     struct nvmm_vcpu_conf_cpuid cpuid;
     struct nvmm_vcpu_conf_tpr tpr;
     Error *local_error = NULL;
-    struct qemu_vcpu *qcpu;
+    struct AccelvCPUState *qcpu;
     int ret, err;
 
     nvmm_init_cpu_signals();
@@ -942,7 +942,7 @@ nvmm_init_vcpu(CPUState *cpu)
         }
     }
 
-    qcpu = g_try_malloc0(sizeof(*qcpu));
+    qcpu = g_try_new0(struct AccelvCPUState, 1);
     if (qcpu == NULL) {
         error_report("NVMM: Failed to allocate VCPU context.");
         return -ENOMEM;
@@ -995,7 +995,7 @@ nvmm_init_vcpu(CPUState *cpu)
     }
 
     cpu->vcpu_dirty = true;
-    cpu->accel = (struct AccelvCPUState *)qcpu;
+    cpu->accel = qcpu;
 
     return 0;
 }
@@ -1027,7 +1027,7 @@ void
 nvmm_destroy_vcpu(CPUState *cpu)
 {
     struct nvmm_machine *mach = get_nvmm_mach();
-    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
 
     nvmm_vcpu_destroy(mach, &qcpu->vcpu);
     g_free(cpu->accel);
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:29:02 2023
From: Philippe Mathieu-Daudé <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Alex Bennée <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Sunil Muthuswamy <sunilmut@microsoft.com>
Subject: [PATCH 13/14] accel: Inline WHPX get_whpx_vcpu()
Date: Wed,  5 Apr 2023 12:18:10 +0200
Message-Id: <20230405101811.76663-14-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

There is no need for this helper just to access the CPUState::accel
field; use cpu->accel directly.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 target/i386/whpx/whpx-all.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index 2372c4227a..2cca6bc004 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -256,15 +256,6 @@ static bool whpx_has_xsave(void)
     return whpx_xsave_cap.XsaveSupport;
 }
 
-/*
- * VP support
- */
-
-static struct AccelvCPUState *get_whpx_vcpu(CPUState *cpu)
-{
-    return (struct AccelvCPUState *)cpu->accel;
-}
-
 static WHV_X64_SEGMENT_REGISTER whpx_seg_q2h(const SegmentCache *qs, int v86,
                                              int r86)
 {
@@ -390,7 +381,7 @@ static uint64_t whpx_cr8_to_apic_tpr(uint64_t cr8)
 static void whpx_set_registers(CPUState *cpu, int level)
 {
     struct whpx_state *whpx = &whpx_global;
-    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = cpu->accel;
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct whpx_register_set vcxt;
@@ -609,7 +600,7 @@ static void whpx_get_xcrs(CPUState *cpu)
 static void whpx_get_registers(CPUState *cpu)
 {
     struct whpx_state *whpx = &whpx_global;
-    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = cpu->accel;
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct whpx_register_set vcxt;
@@ -892,7 +883,7 @@ static const WHV_EMULATOR_CALLBACKS whpx_emu_callbacks = {
 static int whpx_handle_mmio(CPUState *cpu, WHV_MEMORY_ACCESS_CONTEXT *ctx)
 {
     HRESULT hr;
-    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = cpu->accel;
     WHV_EMULATOR_STATUS emu_status;
 
     hr = whp_dispatch.WHvEmulatorTryMmioEmulation(
@@ -917,7 +908,7 @@ static int whpx_handle_portio(CPUState *cpu,
                               WHV_X64_IO_PORT_ACCESS_CONTEXT *ctx)
 {
     HRESULT hr;
-    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = cpu->accel;
     WHV_EMULATOR_STATUS emu_status;
 
     hr = whp_dispatch.WHvEmulatorTryIoEmulation(
@@ -1417,7 +1408,7 @@ static vaddr whpx_vcpu_get_pc(CPUState *cpu, bool exit_context_valid)
          * of QEMU, nor this port by calling WHvSetVirtualProcessorRegisters().
          * This is the most common case.
          */
-        struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
+        struct AccelvCPUState *vcpu = cpu->accel;
         return vcpu->exit_ctx.VpContext.Rip;
     } else {
         /*
@@ -1468,7 +1459,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 {
     HRESULT hr;
     struct whpx_state *whpx = &whpx_global;
-    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = cpu->accel;
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
     int irq;
@@ -1590,7 +1581,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 
 static void whpx_vcpu_post_run(CPUState *cpu)
 {
-    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = cpu->accel;
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
 
@@ -1617,7 +1608,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
-    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = cpu->accel;
 
     if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
         !(env->hflags & HF_SMM_MASK)) {
@@ -1656,7 +1647,7 @@ static int whpx_vcpu_run(CPUState *cpu)
 {
     HRESULT hr;
     struct whpx_state *whpx = &whpx_global;
-    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = cpu->accel;
     struct whpx_breakpoint *stepped_over_bp = NULL;
     WhpxStepMode exclusive_step_mode = WHPX_STEP_NONE;
     int ret;
@@ -2296,7 +2287,7 @@ int whpx_vcpu_exec(CPUState *cpu)
 void whpx_destroy_vcpu(CPUState *cpu)
 {
     struct whpx_state *whpx = &whpx_global;
-    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = cpu->accel;
 
     whp_dispatch.WHvDeleteVirtualProcessor(whpx->partition, cpu->cpu_index);
     whp_dispatch.WHvEmulatorDestroyEmulator(vcpu->emulator);
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:29:02 2023
From: Philippe Mathieu-Daudé <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Alex Bennée <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Cameron Esfahani <dirty@apple.com>,
	Roman Bolshakov <r.bolshakov@yadro.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Yanan Wang <wangyanan55@huawei.com>,
	Alexander Graf <agraf@csgraf.de>,
	Peter Maydell <peter.maydell@linaro.org>,
	qemu-arm@nongnu.org
Subject: [PATCH 14/14] accel: Rename HVF struct hvf_vcpu_state -> struct AccelvCPUState
Date: Wed,  5 Apr 2023 12:18:11 +0200
Message-Id: <20230405101811.76663-15-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

We want all accelerators to share the same opaque pointer in
CPUState.

Rename the 'hvf_vcpu_state' structure to 'AccelvCPUState'.

Use the generic 'accel' field of CPUState instead of 'hvf'.

Replace g_malloc0() with g_new0() for readability.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 include/hw/core/cpu.h     |  3 --
 include/sysemu/hvf_int.h  |  2 +-
 accel/hvf/hvf-accel-ops.c | 16 ++++-----
 target/arm/hvf/hvf.c      | 70 +++++++++++++++++++--------------------
 4 files changed, 44 insertions(+), 47 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 8d27861ed5..1dc5efe650 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -236,7 +236,6 @@ typedef struct SavedIOTLB {
 struct KVMState;
 struct kvm_run;
 struct AccelvCPUState;
-struct hvf_vcpu_state;
 
 /* work queue */
 
@@ -442,8 +441,6 @@ struct CPUState {
     /* Used for user-only emulation of prctl(PR_SET_UNALIGN). */
     bool prctl_unalign_sigbus;
 
-    struct hvf_vcpu_state *hvf;
-
     /* track IOMMUs whose translations we've cached in the TCG TLB */
     GArray *iommu_notifiers;
 };
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index 6545f7cd61..96ef51f4df 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -48,7 +48,7 @@ struct HVFState {
 };
 extern HVFState *hvf_state;
 
-struct hvf_vcpu_state {
+struct AccelvCPUState {
     uint64_t fd;
     void *exit;
     bool vtimer_masked;
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 24913ca9c4..06ca1d59a4 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -363,19 +363,19 @@ type_init(hvf_type_init);
 
 static void hvf_vcpu_destroy(CPUState *cpu)
 {
-    hv_return_t ret = hv_vcpu_destroy(cpu->hvf->fd);
+    hv_return_t ret = hv_vcpu_destroy(cpu->accel->fd);
     assert_hvf_ok(ret);
 
     hvf_arch_vcpu_destroy(cpu);
-    g_free(cpu->hvf);
-    cpu->hvf = NULL;
+    g_free(cpu->accel);
+    cpu->accel = NULL;
 }
 
 static int hvf_init_vcpu(CPUState *cpu)
 {
     int r;
 
-    cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
+    cpu->accel = g_new0(struct AccelvCPUState, 1);
 
     /* init cpu signals */
     struct sigaction sigact;
@@ -384,13 +384,13 @@ static int hvf_init_vcpu(CPUState *cpu)
     sigact.sa_handler = dummy_signal;
     sigaction(SIG_IPI, &sigact, NULL);
 
-    pthread_sigmask(SIG_BLOCK, NULL, &cpu->hvf->unblock_ipi_mask);
-    sigdelset(&cpu->hvf->unblock_ipi_mask, SIG_IPI);
+    pthread_sigmask(SIG_BLOCK, NULL, &cpu->accel->unblock_ipi_mask);
+    sigdelset(&cpu->accel->unblock_ipi_mask, SIG_IPI);
 
 #ifdef __aarch64__
-    r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
+    r = hv_vcpu_create(&cpu->accel->fd, (hv_vcpu_exit_t **)&cpu->accel->exit, NULL);
 #else
-    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
+    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->accel->fd, HV_VCPU_DEFAULT);
 #endif
     cpu->vcpu_dirty = 1;
     assert_hvf_ok(r);
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index ad65603445..b85648b61c 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -366,29 +366,29 @@ int hvf_get_registers(CPUState *cpu)
     int i;
 
     for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
-        ret = hv_vcpu_get_reg(cpu->hvf->fd, hvf_reg_match[i].reg, &val);
+        ret = hv_vcpu_get_reg(cpu->accel->fd, hvf_reg_match[i].reg, &val);
         *(uint64_t *)((void *)env + hvf_reg_match[i].offset) = val;
         assert_hvf_ok(ret);
     }
 
     for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
-        ret = hv_vcpu_get_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
+        ret = hv_vcpu_get_simd_fp_reg(cpu->accel->fd, hvf_fpreg_match[i].reg,
                                       &fpval);
         memcpy((void *)env + hvf_fpreg_match[i].offset, &fpval, sizeof(fpval));
         assert_hvf_ok(ret);
     }
 
     val = 0;
-    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPCR, &val);
+    ret = hv_vcpu_get_reg(cpu->accel->fd, HV_REG_FPCR, &val);
     assert_hvf_ok(ret);
     vfp_set_fpcr(env, val);
 
     val = 0;
-    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPSR, &val);
+    ret = hv_vcpu_get_reg(cpu->accel->fd, HV_REG_FPSR, &val);
     assert_hvf_ok(ret);
     vfp_set_fpsr(env, val);
 
-    ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_CPSR, &val);
+    ret = hv_vcpu_get_reg(cpu->accel->fd, HV_REG_CPSR, &val);
     assert_hvf_ok(ret);
     pstate_write(env, val);
 
@@ -397,7 +397,7 @@ int hvf_get_registers(CPUState *cpu)
             continue;
         }
 
-        ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, &val);
+        ret = hv_vcpu_get_sys_reg(cpu->accel->fd, hvf_sreg_match[i].reg, &val);
         assert_hvf_ok(ret);
 
         arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx] = val;
@@ -420,24 +420,24 @@ int hvf_put_registers(CPUState *cpu)
 
     for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
         val = *(uint64_t *)((void *)env + hvf_reg_match[i].offset);
-        ret = hv_vcpu_set_reg(cpu->hvf->fd, hvf_reg_match[i].reg, val);
+        ret = hv_vcpu_set_reg(cpu->accel->fd, hvf_reg_match[i].reg, val);
         assert_hvf_ok(ret);
     }
 
     for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
         memcpy(&fpval, (void *)env + hvf_fpreg_match[i].offset, sizeof(fpval));
-        ret = hv_vcpu_set_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
+        ret = hv_vcpu_set_simd_fp_reg(cpu->accel->fd, hvf_fpreg_match[i].reg,
                                       fpval);
         assert_hvf_ok(ret);
     }
 
-    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPCR, vfp_get_fpcr(env));
+    ret = hv_vcpu_set_reg(cpu->accel->fd, HV_REG_FPCR, vfp_get_fpcr(env));
     assert_hvf_ok(ret);
 
-    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPSR, vfp_get_fpsr(env));
+    ret = hv_vcpu_set_reg(cpu->accel->fd, HV_REG_FPSR, vfp_get_fpsr(env));
     assert_hvf_ok(ret);
 
-    ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_CPSR, pstate_read(env));
+    ret = hv_vcpu_set_reg(cpu->accel->fd, HV_REG_CPSR, pstate_read(env));
     assert_hvf_ok(ret);
 
     aarch64_save_sp(env, arm_current_el(env));
@@ -449,11 +449,11 @@ int hvf_put_registers(CPUState *cpu)
         }
 
         val = arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx];
-        ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, val);
+        ret = hv_vcpu_set_sys_reg(cpu->accel->fd, hvf_sreg_match[i].reg, val);
         assert_hvf_ok(ret);
     }
 
-    ret = hv_vcpu_set_vtimer_offset(cpu->hvf->fd, hvf_state->vtimer_offset);
+    ret = hv_vcpu_set_vtimer_offset(cpu->accel->fd, hvf_state->vtimer_offset);
     assert_hvf_ok(ret);
 
     return 0;
@@ -474,7 +474,7 @@ static void hvf_set_reg(CPUState *cpu, int rt, uint64_t val)
     flush_cpu_state(cpu);
 
     if (rt < 31) {
-        r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_X0 + rt, val);
+        r = hv_vcpu_set_reg(cpu->accel->fd, HV_REG_X0 + rt, val);
         assert_hvf_ok(r);
     }
 }
@@ -487,7 +487,7 @@ static uint64_t hvf_get_reg(CPUState *cpu, int rt)
     flush_cpu_state(cpu);
 
     if (rt < 31) {
-        r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_X0 + rt, &val);
+        r = hv_vcpu_get_reg(cpu->accel->fd, HV_REG_X0 + rt, &val);
         assert_hvf_ok(r);
     }
 
@@ -629,22 +629,22 @@ int hvf_arch_init_vcpu(CPUState *cpu)
     assert(write_cpustate_to_list(arm_cpu, false));
 
     /* Set CP_NO_RAW system registers on init */
-    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MIDR_EL1,
+    ret = hv_vcpu_set_sys_reg(cpu->accel->fd, HV_SYS_REG_MIDR_EL1,
                               arm_cpu->midr);
     assert_hvf_ok(ret);
 
-    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MPIDR_EL1,
+    ret = hv_vcpu_set_sys_reg(cpu->accel->fd, HV_SYS_REG_MPIDR_EL1,
                               arm_cpu->mp_affinity);
     assert_hvf_ok(ret);
 
-    ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, &pfr);
+    ret = hv_vcpu_get_sys_reg(cpu->accel->fd, HV_SYS_REG_ID_AA64PFR0_EL1, &pfr);
     assert_hvf_ok(ret);
     pfr |= env->gicv3state ? (1 << 24) : 0;
-    ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, pfr);
+    ret = hv_vcpu_set_sys_reg(cpu->accel->fd, HV_SYS_REG_ID_AA64PFR0_EL1, pfr);
     assert_hvf_ok(ret);
 
     /* We're limited to underlying hardware caps, override internal versions */
-    ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64MMFR0_EL1,
+    ret = hv_vcpu_get_sys_reg(cpu->accel->fd, HV_SYS_REG_ID_AA64MMFR0_EL1,
                               &arm_cpu->isar.id_aa64mmfr0);
     assert_hvf_ok(ret);
 
@@ -654,7 +654,7 @@ int hvf_arch_init_vcpu(CPUState *cpu)
 void hvf_kick_vcpu_thread(CPUState *cpu)
 {
     cpus_kick_thread(cpu);
-    hv_vcpus_exit(&cpu->hvf->fd, 1);
+    hv_vcpus_exit(&cpu->accel->fd, 1);
 }
 
 static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
@@ -1191,13 +1191,13 @@ static int hvf_inject_interrupts(CPUState *cpu)
 {
     if (cpu->interrupt_request & CPU_INTERRUPT_FIQ) {
         trace_hvf_inject_fiq();
-        hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_FIQ,
+        hv_vcpu_set_pending_interrupt(cpu->accel->fd, HV_INTERRUPT_TYPE_FIQ,
                                       true);
     }
 
     if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
         trace_hvf_inject_irq();
-        hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_IRQ,
+        hv_vcpu_set_pending_interrupt(cpu->accel->fd, HV_INTERRUPT_TYPE_IRQ,
                                       true);
     }
 
@@ -1231,7 +1231,7 @@ static void hvf_wait_for_ipi(CPUState *cpu, struct timespec *ts)
      */
     qatomic_mb_set(&cpu->thread_kicked, false);
     qemu_mutex_unlock_iothread();
-    pselect(0, 0, 0, 0, ts, &cpu->hvf->unblock_ipi_mask);
+    pselect(0, 0, 0, 0, ts, &cpu->accel->unblock_ipi_mask);
     qemu_mutex_lock_iothread();
 }
 
@@ -1252,7 +1252,7 @@ static void hvf_wfi(CPUState *cpu)
         return;
     }
 
-    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
+    r = hv_vcpu_get_sys_reg(cpu->accel->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
     assert_hvf_ok(r);
 
     if (!(ctl & 1) || (ctl & 2)) {
@@ -1261,7 +1261,7 @@ static void hvf_wfi(CPUState *cpu)
         return;
     }
 
-    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0, &cval);
+    r = hv_vcpu_get_sys_reg(cpu->accel->fd, HV_SYS_REG_CNTV_CVAL_EL0, &cval);
     assert_hvf_ok(r);
 
     ticks_to_sleep = cval - hvf_vtimer_val();
@@ -1294,12 +1294,12 @@ static void hvf_sync_vtimer(CPUState *cpu)
     uint64_t ctl;
     bool irq_state;
 
-    if (!cpu->hvf->vtimer_masked) {
+    if (!cpu->accel->vtimer_masked) {
         /* We will get notified on vtimer changes by hvf, nothing to do */
         return;
     }
 
-    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
+    r = hv_vcpu_get_sys_reg(cpu->accel->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
     assert_hvf_ok(r);
 
     irq_state = (ctl & (TMR_CTL_ENABLE | TMR_CTL_IMASK | TMR_CTL_ISTATUS)) ==
@@ -1308,8 +1308,8 @@ static void hvf_sync_vtimer(CPUState *cpu)
 
     if (!irq_state) {
         /* Timer no longer asserting, we can unmask it */
-        hv_vcpu_set_vtimer_mask(cpu->hvf->fd, false);
-        cpu->hvf->vtimer_masked = false;
+        hv_vcpu_set_vtimer_mask(cpu->accel->fd, false);
+        cpu->accel->vtimer_masked = false;
     }
 }
 
@@ -1317,7 +1317,7 @@ int hvf_vcpu_exec(CPUState *cpu)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
     CPUARMState *env = &arm_cpu->env;
-    hv_vcpu_exit_t *hvf_exit = cpu->hvf->exit;
+    hv_vcpu_exit_t *hvf_exit = cpu->accel->exit;
     hv_return_t r;
     bool advance_pc = false;
 
@@ -1332,7 +1332,7 @@ int hvf_vcpu_exec(CPUState *cpu)
     flush_cpu_state(cpu);
 
     qemu_mutex_unlock_iothread();
-    assert_hvf_ok(hv_vcpu_run(cpu->hvf->fd));
+    assert_hvf_ok(hv_vcpu_run(cpu->accel->fd));
 
     /* handle VMEXIT */
     uint64_t exit_reason = hvf_exit->reason;
@@ -1346,7 +1346,7 @@ int hvf_vcpu_exec(CPUState *cpu)
         break;
     case HV_EXIT_REASON_VTIMER_ACTIVATED:
         qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 1);
-        cpu->hvf->vtimer_masked = true;
+        cpu->accel->vtimer_masked = true;
         return 0;
     case HV_EXIT_REASON_CANCELED:
         /* we got kicked, no exit to process */
@@ -1457,10 +1457,10 @@ int hvf_vcpu_exec(CPUState *cpu)
 
         flush_cpu_state(cpu);
 
-        r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_PC, &pc);
+        r = hv_vcpu_get_reg(cpu->accel->fd, HV_REG_PC, &pc);
         assert_hvf_ok(r);
         pc += 4;
-        r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc);
+        r = hv_vcpu_set_reg(cpu->accel->fd, HV_REG_PC, pc);
         assert_hvf_ok(r);
     }
 
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:29:03 2023
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Yanan Wang <wangyanan55@huawei.com>
Subject: [PATCH 08/14] accel: Move HAX hThread to accelerator context
Date: Wed,  5 Apr 2023 12:18:05 +0200
Message-Id: <20230405101811.76663-9-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The hThread variable is only used by the HAX accelerator,
so move it into the accelerator-specific context.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 include/hw/core/cpu.h           | 1 -
 target/i386/hax/hax-i386.h      | 3 +++
 target/i386/hax/hax-accel-ops.c | 2 +-
 target/i386/hax/hax-all.c       | 2 +-
 target/i386/hax/hax-windows.c   | 2 +-
 5 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 173f47d24e..8d27861ed5 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -334,7 +334,6 @@ struct CPUState {
 
     struct QemuThread *thread;
 #ifdef _WIN32
-    HANDLE hThread;
     QemuSemaphore sem;
 #endif
     int thread_id;
diff --git a/target/i386/hax/hax-i386.h b/target/i386/hax/hax-i386.h
index d11d43e857..15d16772db 100644
--- a/target/i386/hax/hax-i386.h
+++ b/target/i386/hax/hax-i386.h
@@ -27,6 +27,9 @@ typedef HANDLE hax_fd;
 extern struct hax_state hax_global;
 
 typedef struct AccelvCPUState {
+#ifdef _WIN32
+    HANDLE hThread;
+#endif
     hax_fd fd;
     int vcpu_id;
     struct hax_tunnel *tunnel;
diff --git a/target/i386/hax/hax-accel-ops.c b/target/i386/hax/hax-accel-ops.c
index a8512efcd5..5031096760 100644
--- a/target/i386/hax/hax-accel-ops.c
+++ b/target/i386/hax/hax-accel-ops.c
@@ -73,7 +73,7 @@ static void hax_start_vcpu_thread(CPUState *cpu)
                        cpu, QEMU_THREAD_JOINABLE);
     assert(cpu->accel);
 #ifdef _WIN32
-    cpu->hThread = qemu_thread_get_handle(cpu->thread);
+    cpu->accel->hThread = qemu_thread_get_handle(cpu->thread);
 #endif
 }
 
diff --git a/target/i386/hax/hax-all.c b/target/i386/hax/hax-all.c
index a55b18f353..c9ccc411e9 100644
--- a/target/i386/hax/hax-all.c
+++ b/target/i386/hax/hax-all.c
@@ -206,7 +206,7 @@ int hax_vcpu_destroy(CPUState *cpu)
     hax_close_fd(vcpu->fd);
     hax_global.vm->vcpus[vcpu->vcpu_id] = NULL;
 #ifdef _WIN32
-    CloseHandle(cpu->hThread);
+    CloseHandle(vcpu->hThread);
 #endif
     g_free(vcpu);
     cpu->accel = NULL;
diff --git a/target/i386/hax/hax-windows.c b/target/i386/hax/hax-windows.c
index 08ec93a256..b907953321 100644
--- a/target/i386/hax/hax-windows.c
+++ b/target/i386/hax/hax-windows.c
@@ -476,7 +476,7 @@ void hax_kick_vcpu_thread(CPUState *cpu)
      */
     cpu->exit_request = 1;
     if (!qemu_cpu_is_self(cpu)) {
-        if (!QueueUserAPC(dummy_apc_func, cpu->hThread, 0)) {
+        if (!QueueUserAPC(dummy_apc_func, cpu->accel->hThread, 0)) {
             fprintf(stderr, "%s: QueueUserAPC failed with error %lu\n",
                     __func__, GetLastError());
             exit(1);
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:29:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:29:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518425.804987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Nz-0000YS-Mz; Wed, 05 Apr 2023 10:29:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518425.804987; Wed, 05 Apr 2023 10:29:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Nz-0000XE-EY; Wed, 05 Apr 2023 10:29:03 +0000
Received: by outflank-mailman (input) for mailman id 518425;
 Wed, 05 Apr 2023 10:29:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0Ep-00024X-Ua
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:19:36 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5cfe3fec-d39b-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 12:19:35 +0200 (CEST)
Received: by mail-wm1-x32d.google.com with SMTP id
 n19-20020a05600c501300b003f064936c3eso858463wmr.0
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:19:35 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 d21-20020a1c7315000000b003ed1f6878a5sm1770353wmb.5.2023.04.05.03.19.32
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:19:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5cfe3fec-d39b-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680689975;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=reJpuebcoI5BKEfS88SjCkcplXXPmguVbMU9Olmibx0=;
        b=Q2+dqv8HKpPQKTf9cQJSeqsGPGkeqsXxu8v+C2JYWo/wkdpZHobUDc9N35z1UbQyXo
         9C6lndVpWvjoGdquQUathNsJoiIiN7nJn3kgNVvT5T7tc01blota5eXmXHc1hE2a/9ew
         2Wf7CCzhPcSAgNhjaTij8S5PpdLVkajwG2fdrCTckKDaeIUqcPO5ZI4SEhoPCSlpK/xc
         jQRVVytzUEGa2Ydc7bSMmlI+Bj89/EomAs72KzqL8mlaSY0qlyPq25y3stucFF3G2pSS
         /sRRUshA2hioO50En8HSMUOtDGPuoJV66S46OhB8lo7qSEotmJH2F75Zpnda8gdb6yr0
         tHhQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680689975;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=reJpuebcoI5BKEfS88SjCkcplXXPmguVbMU9Olmibx0=;
        b=5ElheQRINiX2Ygsg/9tU8tpk0LQunD3JUPGDFeZwRYP1TtH8mbnZJmvwLhbfgeHWHk
         +BQ6UkGmVrNTjCsFcIrf60S3VVPZQPZj2wUs7jR37tjZtSTVxbUInqLz75TuCKxtipsx
         uaeEV2avu4/T4+WKR97w3vwbHeThGL2NwhfCdpnHoIb9w4CGbsvBeC/NNSNfwbojrVI4
         ui6UNsBBHjZWaC+0/HlLawEiG7POxUmobF1nDTXrTWd7DiBuflUl6eINn0Z7ziZfE8jP
         GMgrXbrW2tT7rLhcQAc/bD78jYzDQvGJ/1dhvXA6ga/Yk70b5Beq2Fq5IaBNRG/pb+ZE
         FDHw==
X-Gm-Message-State: AAQBX9ces/GWJ5T9bHIDWyxyrMmG9qEtbpkFLhjcJ9/N6dwDkHW6NXBJ
	ugRwI8rpMBt0S/5COyowdsxMLA==
X-Google-Smtp-Source: AKy350ZAz7PsFWbSoWRxQBkCoHp/n/DjoRXVF/1COGNHcX9m4lAa529X6M54U1ZwDgyYhrM/EeN7Fw==
X-Received: by 2002:a1c:7717:0:b0:3f0:48f4:8454 with SMTP id t23-20020a1c7717000000b003f048f48454mr4350332wmi.27.1680689974915;
        Wed, 05 Apr 2023 03:19:34 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Reinoud Zandijk <reinoud@netbsd.org>
Subject: [PATCH 11/14] accel: Inline NVMM get_qemu_vcpu()
Date: Wed,  5 Apr 2023 12:18:08 +0200
Message-Id: <20230405101811.76663-12-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

There is no need for a helper just to access the CPUState::accel
field; inline the trivial get_qemu_vcpu() accessor.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 target/i386/nvmm/nvmm-all.c | 28 +++++++++++-----------------
 1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index 97a7225598..1c0168d83c 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -49,12 +49,6 @@ struct qemu_machine {
 static bool nvmm_allowed;
 static struct qemu_machine qemu_mach;
 
-static struct AccelvCPUState *
-get_qemu_vcpu(CPUState *cpu)
-{
-    return cpu->accel;
-}
-
 static struct nvmm_machine *
 get_nvmm_mach(void)
 {
@@ -86,7 +80,7 @@ nvmm_set_registers(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
     struct nvmm_machine *mach = get_nvmm_mach();
-    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = cpu->accel;
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     struct nvmm_x64_state *state = vcpu->state;
     uint64_t bitmap;
@@ -223,7 +217,7 @@ nvmm_get_registers(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
     struct nvmm_machine *mach = get_nvmm_mach();
-    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = cpu->accel;
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct nvmm_x64_state *state = vcpu->state;
@@ -347,7 +341,7 @@ static bool
 nvmm_can_take_int(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
-    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = cpu->accel;
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     struct nvmm_machine *mach = get_nvmm_mach();
 
@@ -372,7 +366,7 @@ nvmm_can_take_int(CPUState *cpu)
 static bool
 nvmm_can_take_nmi(CPUState *cpu)
 {
-    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = cpu->accel;
 
     /*
      * Contrary to INTs, NMIs always schedule an exit when they are
@@ -395,7 +389,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
     struct nvmm_machine *mach = get_nvmm_mach();
-    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = cpu->accel;
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct nvmm_x64_state *state = vcpu->state;
@@ -478,7 +472,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
 static void
 nvmm_vcpu_post_run(CPUState *cpu, struct nvmm_vcpu_exit *exit)
 {
-    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = cpu->accel;
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
     uint64_t tpr;
@@ -565,7 +559,7 @@ static int
 nvmm_handle_rdmsr(struct nvmm_machine *mach, CPUState *cpu,
     struct nvmm_vcpu_exit *exit)
 {
-    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = cpu->accel;
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct nvmm_x64_state *state = vcpu->state;
@@ -610,7 +604,7 @@ static int
 nvmm_handle_wrmsr(struct nvmm_machine *mach, CPUState *cpu,
     struct nvmm_vcpu_exit *exit)
 {
-    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = cpu->accel;
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct nvmm_x64_state *state = vcpu->state;
@@ -686,7 +680,7 @@ nvmm_vcpu_loop(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
     struct nvmm_machine *mach = get_nvmm_mach();
-    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = cpu->accel;
     struct nvmm_vcpu *vcpu = &qcpu->vcpu;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct nvmm_vcpu_exit *exit = vcpu->exit;
@@ -892,7 +886,7 @@ static void
 nvmm_ipi_signal(int sigcpu)
 {
     if (current_cpu) {
-        struct AccelvCPUState *qcpu = get_qemu_vcpu(current_cpu);
+        struct AccelvCPUState *qcpu = current_cpu->accel;
 #if NVMM_USER_VERSION >= 2
         struct nvmm_vcpu *vcpu = &qcpu->vcpu;
         nvmm_vcpu_stop(vcpu);
@@ -1027,7 +1021,7 @@ void
 nvmm_destroy_vcpu(CPUState *cpu)
 {
     struct nvmm_machine *mach = get_nvmm_mach();
-    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);
+    struct AccelvCPUState *qcpu = cpu->accel;
 
     nvmm_vcpu_destroy(mach, &qcpu->vcpu);
     g_free(cpu->accel);
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:29:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:29:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518431.805008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0O4-0001Ny-4u; Wed, 05 Apr 2023 10:29:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518431.805008; Wed, 05 Apr 2023 10:29:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0O3-0001Ng-VT; Wed, 05 Apr 2023 10:29:07 +0000
Received: by outflank-mailman (input) for mailman id 518431;
 Wed, 05 Apr 2023 10:29:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0Ex-0002Na-A1
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:19:43 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 60c82f71-d39b-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 12:19:41 +0200 (CEST)
Received: by mail-wm1-x32d.google.com with SMTP id
 l10-20020a05600c1d0a00b003f04bd3691eso6766053wms.5
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:19:41 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 n7-20020a05600c4f8700b003ee9c8cc631sm1780821wmq.23.2023.04.05.03.19.39
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:19:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60c82f71-d39b-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680689981;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=S/MtLkEvv/mpFG8xdDJ5UGIDl6QuLPsc27INO07zRJs=;
        b=j3hNNVtnFSkeMmMMmVP7MwKIlYYAEqbhDpfKTMInG3a+9K0+YXceoJPZJ4jZAHCBFI
         VPa6eMgi/+9TXCgyr86xjuKK9t4zNhPlw7CAX0fvenTdKFSa3t+eBYp0hgH6m98d3/lR
         jnJJMwWe2cN5PCDDRMn3ZiOfvLvOx0KgfKGPNgpITrxaoDPAOFx1uqGJJWFNIk4jpdv7
         NGGSO2xZQkFpOwy7mhzr21QREPPwjCH42qXvKwpRx5Hf1PwrV8q3x4WiQL9Zq/cJZpvA
         +TtnCQgltM297hRRmytaXGZhpTBrxwBYRmMmPEjntzUo7iHBOLTSWiIxfEFT8knJMWNo
         rgog==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680689981;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=S/MtLkEvv/mpFG8xdDJ5UGIDl6QuLPsc27INO07zRJs=;
        b=ENxCQpKaZ/vJYYVAI/pNkWj5blaiw0Y3P01s+6CN3vcTM/wnO5KzZqAqqOxAsyFpie
         QfnEX4I1aWEWKrME2hatZaUin5d2tLTPrXy2j53kEEXCM0U8G5wLqJOODKrPQ29r0ZPb
         dqdceKwV/3insS9Sz3Gkw7sg2oLuBMdT4SbEMa9QRXXJF1Q0n1kr6SeHc6rANMz4ZWQA
         Zub2EHnAYv/TpLWsAPUVM2rFO+mBeZlDci4tSaQOKALtsLWi09g7qnC80CfNfopgY+bx
         KjW7byf3VkBTyx/FtdgyvGefna0HHurow6rJEbnGHgyTa5KGVHp7JQ6/WaI7VUxsQ7xd
         xMzA==
X-Gm-Message-State: AAQBX9ee/CrNb48v9z13d39+WMgzcsZNjSDBQDTBlr7XhRNqq1gqFiLe
	foFpC3UFDFZ8uIL3qxhgYhTy9A==
X-Google-Smtp-Source: AKy350ZceuduIKjGNIdlkcOLlYLJL5KbbisXjbggRXGoX4b3J2s4uJbs/AyE+v9R+USsxTJSDNjt2g==
X-Received: by 2002:a1c:f707:0:b0:3ee:9909:acc8 with SMTP id v7-20020a1cf707000000b003ee9909acc8mr4174177wmh.32.1680689981231;
        Wed, 05 Apr 2023 03:19:41 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Sunil Muthuswamy <sunilmut@microsoft.com>
Subject: [PATCH 12/14] accel: Rename WHPX struct whpx_vcpu -> struct AccelvCPUState
Date: Wed,  5 Apr 2023 12:18:09 +0200
Message-Id: <20230405101811.76663-13-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

We want all accelerators to share the same opaque pointer in
CPUState. Rename the WHPX-specific 'struct whpx_vcpu' to
'struct AccelvCPUState'.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 target/i386/whpx/whpx-all.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index 70eadb7f05..2372c4227a 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -229,7 +229,7 @@ typedef enum WhpxStepMode {
     WHPX_STEP_EXCLUSIVE,
 } WhpxStepMode;
 
-struct whpx_vcpu {
+struct AccelvCPUState {
     WHV_EMULATOR_HANDLE emulator;
     bool window_registered;
     bool interruptable;
@@ -260,9 +260,9 @@ static bool whpx_has_xsave(void)
  * VP support
  */
 
-static struct whpx_vcpu *get_whpx_vcpu(CPUState *cpu)
+static struct AccelvCPUState *get_whpx_vcpu(CPUState *cpu)
 {
-    return (struct whpx_vcpu *)cpu->accel;
+    return (struct AccelvCPUState *)cpu->accel;
 }
 
 static WHV_X64_SEGMENT_REGISTER whpx_seg_q2h(const SegmentCache *qs, int v86,
@@ -390,7 +390,7 @@ static uint64_t whpx_cr8_to_apic_tpr(uint64_t cr8)
 static void whpx_set_registers(CPUState *cpu, int level)
 {
     struct whpx_state *whpx = &whpx_global;
-    struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct whpx_register_set vcxt;
@@ -609,7 +609,7 @@ static void whpx_get_xcrs(CPUState *cpu)
 static void whpx_get_registers(CPUState *cpu)
 {
     struct whpx_state *whpx = &whpx_global;
-    struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
     struct whpx_register_set vcxt;
@@ -892,7 +892,7 @@ static const WHV_EMULATOR_CALLBACKS whpx_emu_callbacks = {
 static int whpx_handle_mmio(CPUState *cpu, WHV_MEMORY_ACCESS_CONTEXT *ctx)
 {
     HRESULT hr;
-    struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
     WHV_EMULATOR_STATUS emu_status;
 
     hr = whp_dispatch.WHvEmulatorTryMmioEmulation(
@@ -917,7 +917,7 @@ static int whpx_handle_portio(CPUState *cpu,
                               WHV_X64_IO_PORT_ACCESS_CONTEXT *ctx)
 {
     HRESULT hr;
-    struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
     WHV_EMULATOR_STATUS emu_status;
 
     hr = whp_dispatch.WHvEmulatorTryIoEmulation(
@@ -1417,7 +1417,7 @@ static vaddr whpx_vcpu_get_pc(CPUState *cpu, bool exit_context_valid)
          * of QEMU, nor this port by calling WHvSetVirtualProcessorRegisters().
          * This is the most common case.
          */
-        struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+        struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
         return vcpu->exit_ctx.VpContext.Rip;
     } else {
         /*
@@ -1468,7 +1468,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 {
     HRESULT hr;
     struct whpx_state *whpx = &whpx_global;
-    struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
     int irq;
@@ -1590,7 +1590,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
 
 static void whpx_vcpu_post_run(CPUState *cpu)
 {
-    struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
 
@@ -1617,7 +1617,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
 {
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
-    struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
 
     if ((cpu->interrupt_request & CPU_INTERRUPT_INIT) &&
         !(env->hflags & HF_SMM_MASK)) {
@@ -1656,7 +1656,7 @@ static int whpx_vcpu_run(CPUState *cpu)
 {
     HRESULT hr;
     struct whpx_state *whpx = &whpx_global;
-    struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
     struct whpx_breakpoint *stepped_over_bp = NULL;
     WhpxStepMode exclusive_step_mode = WHPX_STEP_NONE;
     int ret;
@@ -2154,7 +2154,7 @@ int whpx_init_vcpu(CPUState *cpu)
 {
     HRESULT hr;
     struct whpx_state *whpx = &whpx_global;
-    struct whpx_vcpu *vcpu = NULL;
+    struct AccelvCPUState *vcpu = NULL;
     Error *local_error = NULL;
     CPUX86State *env = cpu->env_ptr;
     X86CPU *x86_cpu = X86_CPU(cpu);
@@ -2177,7 +2177,7 @@ int whpx_init_vcpu(CPUState *cpu)
         }
     }
 
-    vcpu = g_new0(struct whpx_vcpu, 1);
+    vcpu = g_new0(struct AccelvCPUState, 1);
 
     if (!vcpu) {
         error_report("WHPX: Failed to allocte VCPU context.");
@@ -2296,7 +2296,7 @@ int whpx_vcpu_exec(CPUState *cpu)
 void whpx_destroy_vcpu(CPUState *cpu)
 {
     struct whpx_state *whpx = &whpx_global;
-    struct whpx_vcpu *vcpu = get_whpx_vcpu(cpu);
+    struct AccelvCPUState *vcpu = get_whpx_vcpu(cpu);
 
     whp_dispatch.WHvDeleteVirtualProcessor(whpx->partition, cpu->cpu_index);
     whp_dispatch.WHvEmulatorDestroyEmulator(vcpu->emulator);
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:29:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:29:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518433.805014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0O4-0001S9-IF; Wed, 05 Apr 2023 10:29:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518433.805014; Wed, 05 Apr 2023 10:29:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0O4-0001RM-9g; Wed, 05 Apr 2023 10:29:08 +0000
Received: by outflank-mailman (input) for mailman id 518433;
 Wed, 05 Apr 2023 10:29:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk0Ea-0002Na-UF
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:19:20 +0000
Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com
 [2a00:1450:4864:20::434])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 53612bd7-d39b-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 12:19:19 +0200 (CEST)
Received: by mail-wr1-x434.google.com with SMTP id r11so35629067wrr.12
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 03:19:19 -0700 (PDT)
Received: from localhost.localdomain
 (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr. [176.184.52.81])
 by smtp.gmail.com with ESMTPSA id
 s11-20020a5d424b000000b002e5f6f8fc4fsm14040108wrr.100.2023.04.05.03.19.16
 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
 Wed, 05 Apr 2023 03:19:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53612bd7-d39b-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680689959;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=EBsjiWXEyjNdcnK7CdgNfH2aIDA7wl6BmVcuMNTGlRg=;
        b=ZCeEmGx5C1tuByO7Gk8190umAFm3Jq4q/3LEy8V4HB/sgDGk01PdV0MXcQqltklyu3
         DW09L0fwCMzuVJZy82A0vSyQNKZSvZwE3Rakfj2hr1zvFY1eZeaEbcBCDOHIx7fy11jt
         /S2FM9gq9oJe4BBqhtdS5l+3uhXXNdnl7KuLkaG9eftfsUPFv3TGzexWOVF1sWTguVrY
         838K9xTGvVrgnK3wGNr49BTbICPTkdgaMh9RIPXj5OJjY/e393NR9LtEmB+cM2VmI3zz
         RQrk9WLPd1wzBdC0E7STdPo8yjxVpnfYhSiLQXfXjBKsSb4Cc8fPtp7Mrl4DfK8dO1FF
         g35Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680689959;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=EBsjiWXEyjNdcnK7CdgNfH2aIDA7wl6BmVcuMNTGlRg=;
        b=UmyRNXnzdCCa2vrgkknss4VTCQUCx6LmDdnwIHwYLFy2fpolh2KxAJXhSxmJTW65ej
         sivBCc+vTt+UkruP+yjbCc7XZKWYzGSoUigCzprhA7rd77lwYqOd203/dYvWGGAj3kEk
         o1mlTu3a2gpGkDtuGU1Q/AU62kkBKdHU1HM0H01tUehOwEEUhyHKhUL16IxoA09tr8wW
         /YWhe8DzMG03heV+CNx7BE9pTIA9tqLXa+juvAwqpapSV5Luu+bt/x+KEmZxuS0aJgs4
         UeU5YSD8fvXk80+HNfaN5ACHrYAzaMkzOpP024W8hSWT6wr59auuVY5qsrDr6EYFzo2E
         Zr/w==
X-Gm-Message-State: AAQBX9dHHg+FpUOeJzkl16FfeJEzhRjg6MbHvzlQ5Lcm5OM9ZGVM763c
	NTCR2J8qhJt/TeBwQR0DmnaQgEffp8+ee5BrAt4=
X-Google-Smtp-Source: AKy350bqyCS4jO7cj3IxU6M9gEfWmwwxn91ShCFu07OvQwRL7AbHTBtXunKmonFyvH6AyOInd3xSpQ==
X-Received: by 2002:adf:f004:0:b0:2cf:f01f:ed89 with SMTP id j4-20020adff004000000b002cff01fed89mr4200753wro.24.1680689958696;
        Wed, 05 Apr 2023 03:19:18 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Reinoud Zandijk <reinoud@netbsd.org>
Subject: [PATCH 09/14] accel: Allocate NVMM vCPU using g_try_FOO()
Date: Wed,  5 Apr 2023 12:18:06 +0200
Message-Id: <20230405101811.76663-10-philmd@linaro.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230405101811.76663-1-philmd@linaro.org>
References: <20230405101811.76663-1-philmd@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

g_malloc0() cannot fail (it aborts on allocation failure), so the
NULL check after it is dead code. Use g_try_malloc0() instead, which
returns NULL on failure.

https://developer-old.gnome.org/glib/stable/glib-Memory-Allocation.html#glib-Memory-Allocation.description

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 target/i386/nvmm/nvmm-all.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index 3c7bdd560f..45fd318d23 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -942,7 +942,7 @@ nvmm_init_vcpu(CPUState *cpu)
         }
     }
 
-    qcpu = g_malloc0(sizeof(*qcpu));
+    qcpu = g_try_malloc0(sizeof(*qcpu));
     if (qcpu == NULL) {
         error_report("NVMM: Failed to allocate VCPU context.");
         return -ENOMEM;
-- 
2.38.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:37:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:37:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518451.805027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Vl-00050p-8l; Wed, 05 Apr 2023 10:37:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518451.805027; Wed, 05 Apr 2023 10:37:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0Vl-00050i-5z; Wed, 05 Apr 2023 10:37:05 +0000
Received: by outflank-mailman (input) for mailman id 518451;
 Wed, 05 Apr 2023 10:37:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gw2r=74=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pk0Vk-00050a-6o
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 10:37:04 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20622.outbound.protection.outlook.com
 [2a01:111:f400:7d00::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ccbf2377-d39d-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 12:37:02 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8395.eurprd04.prod.outlook.com (2603:10a6:10:247::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 10:37:00 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 10:36:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccbf2377-d39d-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FGXqNsWaiCWBsLGXCHtB9NVu7xQfSwFyFtWPTySNrymtykz1/+WdsRug7SxjuBRlF3TsT9TDBEEEPkwL//RBILfvDt3E2lPF7anKERPEYrIHn5HZdKPJVk6sHRqx2/AqQY+m6wopXEpRrPPeykSQz6b1jQgwLfVVc9VhGJTwGvMiBLvi6IkkxR/gxFSuLUhMFori1qBC62z/nZIRdrLyarZR8e9NJn6Y+Id/kJcnY6ylFn9/4kvfSAfrKhwp52RpYmaBHd/qg4fP74qNhNxnjdAMWWynpuHUU+e/PY+zh4C7+QzZys+x3CvLo9w3NY/wtq5O8xs+cQaTkgSv3JlBjQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=tuyqy/r8c07AnJqD3GHmViiudIVOTpE8t2D0DhGIOh4=;
 b=iPRtFEsJ/uwPPijUhRg9/SdbqBfJwLjReIhYm1w9gZBjH7qUOmY2RC+evVR8bKpyBTPXqFzkA2l+/KvsoOdfTgJkcNN4dr2jwUVC2EDITK0UHkZCvHHjGne0bVJyCt7sxORTL/ltslG1P8OYzEMPrd7+6+ITZ+bFvEbfjFGEe1GxdAEWHlwA5hdAlmEh8ZlcCOcNuKV8avxtEt2EiR5Canw4vYWVkdJQGWScm8A3zDujIpUZVewk6td5lq7zN1LoenWDqvfn2F75BRHeKo/Xsla2NpKlvnHusJ3UkpsXiz5IP1gfZDFJWksJb8pCHOvZmM38xESyCfFLPMovJDhf7w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tuyqy/r8c07AnJqD3GHmViiudIVOTpE8t2D0DhGIOh4=;
 b=qLrD/PpETL/SfonJsHIntiQuT4NtENzH+AslF9NUw1txvptkdHuFPVlcKF8Vfz8JX0Iy7fLHfHFk5oHrZ4XFsfGJOuMXPauklZF3FbtFdLtIjdYBG/nZpGbhRjnEI9ZHAgKPBvwC5vKN5ZE1qElARPSK2zALg/4E3+4mBFecDVbqCOuK3P5fhUGZ9Y7acMli9AhC+Z9pmhpvFjELJIaIFuVGYzjCM5GCQulv+0G22qDFIyUpPBPLfXRUzYQnTtmVzfvJIGDWUezZpS19TZeYsHnAPEZLsxpWGsTeFq9jniRnMvE+NFo37R2pvubp5mGAz7r7feKZg2CVduwTgJacmg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b9bd819d-93ad-d511-4602-8e3f4f515546@suse.com>
Date: Wed, 5 Apr 2023 12:36:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 3/3] multiboot2: do not set StdOut mode unconditionally
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230331095946.45024-1-roger.pau@citrix.com>
 <20230331095946.45024-4-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230331095946.45024-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0068.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8395:EE_
X-MS-Office365-Filtering-Correlation-Id: 49e671e3-8bc6-48d8-689f-08db35c1ae55
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	y4qAoOVY9WyhXGl1mNe/YN26Xi9fe5zmsejsCFHRnDvDHdOtHAuYLkSkS8sxCQiPUZTbGfr2fV1T69pZ/6uRHXRyEXFRYFuJT9N0Ii2dbxJl6qFsFX5s73IVvqZiO72qZ6stLRQCMWWjcsFGsQhAN2rhC9aaq2IBCJauGKW1r4Xf6OPuDSDnQ61+R4FZYA4cfeYdVQ36ukaVhvt09o5821L9DSlWalLSKbAQfhVvTKWQSxYCDN6s86zsMSm8BtvWteEfW+hLzGIBo6KLEH9sDQIbUXbeEo25Do7v2r37g1QGtLGPtIA6cofNvDZa3imZM3xg7WbaBH6zjL7eU2fFA/The4C95+Cr6xAiJmsLMM89VGcMYSi1Vzi4H1WT9oE3ZjJXDVpNrpcGdsCtAW+mS6B0W22Na4YxOk3nlQWW6Jbh4eZ2hovFbBV4PXL7HvZfosiYuzhRVyqPK7FuuNkrLi7Pq3LACd+/aoQHqPgslHdUtJOMEutZZ1qjXPnHjSzfGEgKsQvadowehFd+0bKuaBRhH3LCbXZFn0ZLyKiD5bchj7VLo7RHOH+REESgfjwPxlA0/5Pk/9Cq0tKk2yZedq6i/obqHsRHD7BTNODiA/g5DtvjR4oxyhf2OqNtM5oO1IRP4SwhVRitHLXSvSCs+Q==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(39860400002)(396003)(366004)(346002)(376002)(451199021)(31686004)(6486002)(54906003)(41300700001)(66556008)(66476007)(66946007)(6916009)(8676002)(36756003)(478600001)(316002)(86362001)(4326008)(31696002)(53546011)(6506007)(38100700002)(186003)(6512007)(2616005)(2906002)(5660300002)(26005)(8936002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ejZNY2NkRjdUdEtPdDlndmk2MFZxeFFmOHMyWGdtemd5TElsaUNIUWFFSXE1?=
 =?utf-8?B?NitQVlRwRFVRYWJPV1BLV0N2bFljMmZPT2dVanA4cEQ4RGRFeno5NExxem1G?=
 =?utf-8?B?aEFDbVJQQ25rdUl0OGRLMVc0ZnN1Q2JLSVIwaWRONWo4bU5ScTZqUm8wMkwz?=
 =?utf-8?B?SEpFSGNHNTkreVpxUUlIVlhESXNCa0VxTUN6QkdQVVRIbUJueko0RkRyL0Fa?=
 =?utf-8?B?RG1QSVkyTWJLODJydHc4dDVFU0l6NVVQUSsrcS9HU2hvZUtkUnh5T2xmUmY5?=
 =?utf-8?B?N0VkejBZdEgvN2xJbW9wb1hDWm1hbVRTVzlqYWV6RnZhNGM3MVNvTk9sdHZ3?=
 =?utf-8?B?NWVsWDdtYi9mVFNkRlMwMHEvRTVhb0VUd2JtcmZRRzk2Y0tFTHljajFDbXlr?=
 =?utf-8?B?ZTBRL3lGdDVnZ0NveUp4d3NqNThIdVhYbTNRNzhoK1ltUW1udTNPWm0vcGk5?=
 =?utf-8?B?d1BVY2p4WnpJSWx3dmJvYjYxS1ZOWlV1aXdhUE0wNTNvaEIrQzlTNHo5NEpw?=
 =?utf-8?B?aXlEUTJia0VFbE5hemdEamdCK2ZwS21ycVA2YzZSaEMxUlBtQU5seWt4WXlZ?=
 =?utf-8?B?QWZzTHMwVHJrd2tnK2FZbCtYS1RiRXgrVElmTlY2OFZqbzBKazh3dzBrVDEv?=
 =?utf-8?B?ZjVzOGJOMFloTmFMZ1ZpbDVqdjRoRzhGay9Ya2RadWdSejUyZi92cjBSMTNS?=
 =?utf-8?B?ZHZaZWkwUGpaVlZjSnFJeGpJNnJ6enkvdW9vR1BJQmhxZVh5bGo2QmdKUTF5?=
 =?utf-8?B?LzFBM29yR1JTUWpHckFKR2gwaW11clpaVmZSV3F0cHFBTmczUHkycUZIMW8y?=
 =?utf-8?B?MHVxK0JLdGJrQUdRVDR0QjVmZzc3Q0QwNjBuZlFTV1RpN0IvSUpuYTU5V0pt?=
 =?utf-8?B?RDNKV0IxY2Z6Q0tLejY3SUhUYnFzcElJOHFjVGZtMGZ0dFRHdUlNMFhNVnl6?=
 =?utf-8?B?WSs5UFM3U2J6TDJTZDlYTFByZWRySVUxMGxPT0U4dFdreDJVM0R2aExZdXNt?=
 =?utf-8?B?dWZST2srOWEzdlR4UGtSZUNjWERkS0pSbXNhRHJVaDMwbEl2cTI0ZnU5SXQw?=
 =?utf-8?B?Z1hjajBaMTRTV1ZWRmJhQ1RCZDV3YXNoMmFsZ2RGck9NUXlScHhHek1aa3VM?=
 =?utf-8?B?SEtObGN4MUhyVHB6QXRrQSswdEdGWUJaMnFBUlp4b1dKa1BpTnJrVko4ZWhh?=
 =?utf-8?B?VTk3QXk4aytLTHpuMXIxY1Y2VVJkM2xPVC9aeE1VN2dEcnpUR0lSNjZ4akZo?=
 =?utf-8?B?VERMdmRhTi9SVG1TMFFOQTl2MnBKR05kWWo3OEdSc05sczdKZnlwNUl1bXhT?=
 =?utf-8?B?dUFwZHhIb1B1Y2htV01hcGhtbFdZbEtBZStCZk93TDNlRzdBVFhXZkx5dkE4?=
 =?utf-8?B?TEVsZDBxd2ZhaEIrNVR3YWlNdTNWRVpyeHczc0tsOUFGdm1GZytsTHlvVVZD?=
 =?utf-8?B?ajdpaDgwSGZtZXg4aGtIZlg0SHc3V3dDa3U5UEFLcnZuYS9FMlQzQ3FkV1NT?=
 =?utf-8?B?R241OXZpZk4wMFZ5dXV5aTRScTh2MnpOaWZuY3RFd0dJc2hNemdmR0V5aHBM?=
 =?utf-8?B?UUlIdUlWcFFVQ0xoSklPN3Nub1VPVTJQYnBjVE1ZUDB1dGxDbW01b3l0ZmVR?=
 =?utf-8?B?N3JRUElVRy8vbTJKZjVkMjQ2T2liZlhZVTZKbEJYck9RRE0wbmVmZ0IrSGR1?=
 =?utf-8?B?N0xhWGQ2ZCtkSmdvbWxRemlYaGNGdUZGVVkrZXh3UUFpODVSZ0hhWlc2T0tu?=
 =?utf-8?B?RmtsTUtKeGJnRVlKeGlxTjduTys5NS9WWDVLcnJ3QWlLRlRTcExVejNOaExF?=
 =?utf-8?B?ZlQ2S0dQOG5aTXpmVThYUkx5Snk1cFpuTmYyUkpBSlBXdERZd05mUDJsMDN4?=
 =?utf-8?B?cnh4TVhBcVhhTzZwcUhUU05JUi9QSzlzTzZlZU85elNKNUZiV1FRSDhEbVpS?=
 =?utf-8?B?ZWdpdnBYS0RIamo3SThJQlJZYm1HUUswbzZsSE9HQjBKbks4UTZleG9lOXpW?=
 =?utf-8?B?Y3JvRDZaM28rNktiT0xLTXhvMUZzOUR2bGtTUTdZTVV5QW5aL3dEaTFSS2Zs?=
 =?utf-8?B?SStrMU9lQ05rMVZzV29iazJjOEk1NzZpNEtndW9XeVVtZUh4V1JVbUd5Yk16?=
 =?utf-8?Q?/gj0gRiKLF7swI0MiYUduigvI?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 49e671e3-8bc6-48d8-689f-08db35c1ae55
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 10:36:57.4231
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Fala2Nt6iCUE+kr5I+Y2PhXZQ7K4/lpN9bYc0HCvlji2b3XGVO48EzpdFU4ehG9M0g/5GO18aPex2EQ6OCzZKw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8395

On 31.03.2023 11:59, Roger Pau Monne wrote:
> @@ -887,6 +881,15 @@ void __init efi_multiboot2(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable
>  
>          efi_arch_edid(gop_handle);
>      }
> +    else
> +    {
> +        /* If no GOP, init ConOut (StdOut) to the max supported size. */
> +        efi_console_set_mode();
> +
> +        if ( StdOut->QueryMode(StdOut, StdOut->Mode->Mode,
> +                               &cols, &rows) == EFI_SUCCESS )
> +            efi_arch_console_init(cols, rows);
> +    }

Instead of making this an "else", wouldn't it be better to check that a
valid gop_mode was found? efi_find_gop_mode() can return ~0, after all.

Furthermore, what if the active mode doesn't support text output? (I
consider the spec unclear as to whether this is possible, but maybe I
simply didn't find the right place stating it.)

Finally, I think efi_arch_console_init() wants calling regardless.

So altogether maybe

    if ( gop_mode == ~0 ||
         StdOut->QueryMode(StdOut, StdOut->Mode->Mode,
                           &cols, &rows) != EFI_SUCCESS )
        /* If no usable GOP mode, init ConOut (StdOut) to the max supported size. */
        efi_console_set_mode();

    if ( StdOut->QueryMode(StdOut, StdOut->Mode->Mode,
                           &cols, &rows) == EFI_SUCCESS )
        efi_arch_console_init(cols, rows);

?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 10:45:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 10:45:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518462.805038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0dO-0006kE-5Y; Wed, 05 Apr 2023 10:44:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518462.805038; Wed, 05 Apr 2023 10:44:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk0dO-0006k7-27; Wed, 05 Apr 2023 10:44:58 +0000
Received: by outflank-mailman (input) for mailman id 518462;
 Wed, 05 Apr 2023 10:44:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E6P8=74=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1pk0dM-0006jx-HL
 for xen-devel@lists.xen.org; Wed, 05 Apr 2023 10:44:56 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e55f4197-d39e-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 12:44:53 +0200 (CEST)
Received: from mail-ed1-f70.google.com (mail-ed1-f70.google.com
 [209.85.208.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-510--SGXi9h-OWiGow7FELWTWw-1; Wed, 05 Apr 2023 06:44:49 -0400
Received: by mail-ed1-f70.google.com with SMTP id
 n6-20020a5099c6000000b00502c2f26133so4460794edb.12
 for <xen-devel@lists.xen.org>; Wed, 05 Apr 2023 03:44:48 -0700 (PDT)
Received: from redhat.com ([2.52.139.22]) by smtp.gmail.com with ESMTPSA id
 p25-20020a170906a01900b0093a7952411asm7207199ejy.48.2023.04.05.03.44.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 05 Apr 2023 03:44:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e55f4197-d39e-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1680691492;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kq6oZ7N2bYTf0eEXEKbV1aFmSrlbXHt85D5v+zP8YuE=;
	b=PqPwEh74rny6Rf2DAPJlaQtViJqcvRRT5MlAv6EYUs/M+2iARPJsWMnUGEThKN3DAhKUfJ
	R23xvhARXG6i94e5BILUVlFyxwnsHYlotpQBqhjb3+fSRS3IyXC/TvpnFbJadtn49rWkWx
	9MJjd8qtxWtvLFydBiA4OLy4R4eQh7M=
X-MC-Unique: -SGXi9h-OWiGow7FELWTWw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680691488;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=kq6oZ7N2bYTf0eEXEKbV1aFmSrlbXHt85D5v+zP8YuE=;
        b=EI6O2ucWs6IkbSUkEr55u19Z1IQBb7BtSwHBqwFc6FX4EsnO6FauVB52EvZ/wQiHeN
         qu9xV/+QFDvavxVMZhUNh9jQatUQZ2ks6eqMzpV1ZPjjMYmUtpXj3y4EPYvoTfUs1NFZ
         XukgO14XtWwuhWiklziGDI/XcGcG1GiW2nHIPAUTY31TWr7U6K5rPkPtRicgoat7QUmv
         ZcbJ2ZgRuTlvkGOCdacz4FQYGADpkjXJrMMfkODpWdzmwh7ypTtB4dwI8cBaXvI3Vjj/
         jIKZgBR3q9/fOVnXyt/dYGBHfNx10Iwu1NILM9C/JLENJLOYOGo4eOYIo7TuQEf192Cq
         cQSg==
X-Gm-Message-State: AAQBX9cTMNSx6HQji+T1/+ebewmL/DoBcAOAR2Wrawo9Lu1lxEXbjx4o
	umOMkrfxBJ9wh/vNbQCfT9Y1BwI50+SvrHBwNGwHooKZC3NRVmHdbJpRDs3fTOuTBBJZ/1zIiTa
	RP83Ea7Txxy/irLqK5g==
X-Received: by 2002:a17:906:d045:b0:931:624b:6804 with SMTP id bo5-20020a170906d04500b00931624b6804mr2345359ejb.33.1680691487916;
        Wed, 05 Apr 2023 03:44:47 -0700 (PDT)
X-Google-Smtp-Source: AKy350b5k+F7FQS0kEh/ahvkTDuRpd40DUgmdol+8mUYtq06ub6whVh6ZTdy0b/m/9U1TYG1Nw5Bew==
X-Received: by 2002:a17:906:d045:b0:931:624b:6804 with SMTP id bo5-20020a170906d04500b00931624b6804mr2345345ejb.33.1680691487657;
        Wed, 05 Apr 2023 03:44:47 -0700 (PDT)
Date: Wed, 5 Apr 2023 06:44:42 -0400
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Alex =?iso-8859-1?Q?Benn=E9e?= <alex.bennee@linaro.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>, qemu-devel@nongnu.org,
	virtio-dev@lists.oasis-open.org,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	stratos-dev@op-lists.linaro.org,
	Oleksandr Tyshchenko <olekstysh@gmail.com>, xen-devel@lists.xen.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Sebastien Boeuf <sebastien.boeuf@intel.com>,
	Liu Jiang <gerry@linux.alibaba.com>,
	Mathieu Poirier <mathieu.poirier@linaro.org>
Subject: Re: [PATCH V3 0/2] qemu: vhost-user: Support Xen memory mapping
 quirks
Message-ID: <20230405064417-mutt-send-email-mst@kernel.org>
References: <cover.1678351495.git.viresh.kumar@linaro.org>
 <20230405080512.nvxiw4lv7hyuzqej@vireshk-i7>
 <87h6tulkae.fsf@linaro.org>
 <20230405060340-mutt-send-email-mst@kernel.org>
 <87cz4ilj4j.fsf@linaro.org>
MIME-Version: 1.0
In-Reply-To: <87cz4ilj4j.fsf@linaro.org>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Wed, Apr 05, 2023 at 11:24:43AM +0100, Alex Bennée wrote:
> 
> "Michael S. Tsirkin" <mst@redhat.com> writes:
> 
> > On Wed, Apr 05, 2023 at 11:00:34AM +0100, Alex Bennée wrote:
> >> 
> >> Viresh Kumar <viresh.kumar@linaro.org> writes:
> >> 
> >> > On 09-03-23, 14:20, Viresh Kumar wrote:
> >> >> Hello,
> >> >> 
> >> >> This patchset tries to update the vhost-user protocol to make it support the special
> >> >> memory mapping required in the case of the Xen hypervisor.
> >> >> 
> >> >> The first patch is mostly cleanup, and the second one introduces a new Xen-specific
> >> >> feature.
> >> >
> >> > Can we apply this now? I have developed code for the rust-vmm crates
> >> > based on this, and we need to get this merged/finalized before
> >> > merging those changes.
> >> 
> >> 
> >> I've queued it into my virtio/vhost-user-device series, so it'll get merged
> >> with that series unless mst wants to take it now.
> >
> > Well, the patches are tagged, and I was going to take these after the release.
> > It's probably easier not to work on this in two trees.
> > Still, if there's something in your tree being blocked
> > by these patches, then
> > Acked-by: Michael S. Tsirkin <mst@redhat.com>
> > Let me know.
> 
> The virtio/vhost-user-device tree work is orthogonal to this vhost-user
> enhancement, although all the work is related to our latest VirtIO
> project inside Linaro, Orko:
> https://linaro.atlassian.net/wiki/spaces/ORKO/overview
> 
> So if you are happy, please take these patches now for when the tree
> re-opens.

Yes, I tagged them for when the tree reopens.

> >
> >
> >> >
> >> > Thanks.
> >> 
> >> 
> >> -- 
> >> Alex Bennée
> >> Virtualisation Tech Lead @ Linaro
> 
> 
> -- 
> Alex Bennée
> Virtualisation Tech Lead @ Linaro



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 11:18:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 11:18:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518468.805068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk19p-0003Q1-5M; Wed, 05 Apr 2023 11:18:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518468.805068; Wed, 05 Apr 2023 11:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk19p-0003Pu-0t; Wed, 05 Apr 2023 11:18:29 +0000
Received: by outflank-mailman (input) for mailman id 518468;
 Wed, 05 Apr 2023 11:18:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=btvB=74=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pk19m-0003OA-OQ
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 11:18:26 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2060b.outbound.protection.outlook.com
 [2a01:111:f400:fe59::60b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 91da60b8-d3a3-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 13:18:21 +0200 (CEST)
Received: from DM6PR06CA0007.namprd06.prod.outlook.com (2603:10b6:5:120::20)
 by PH0PR12MB7094.namprd12.prod.outlook.com (2603:10b6:510:21d::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 11:18:17 +0000
Received: from DM6NAM11FT049.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:120:cafe::87) by DM6PR06CA0007.outlook.office365.com
 (2603:10b6:5:120::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.22 via Frontend
 Transport; Wed, 5 Apr 2023 11:18:17 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT049.mail.protection.outlook.com (10.13.172.188) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6277.28 via Frontend Transport; Wed, 5 Apr 2023 11:18:17 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 5 Apr
 2023 06:18:16 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Wed, 5 Apr 2023 06:18:14 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91da60b8-d3a3-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=m7m5NbTPhzb7l3VwIWfMjPF0agHjhvDRI5bjtLX/cGY/YqGVspD8l/HgOc7rheU0HnfO3CrjzYkiYOrHqpAQODmbHfbaWHiG9ILddNkqjOWCR7X9L8/tQsHHdc9nwrISy7ZSkfOKEvAUebAzFOxukXXZFcsmIKPbitVnCglvdEwpl6mwDE0hQiw15v03PL/DMQq50GQVOZbYgen/UHHtnj2MiY90RKeUsyTf7RW4rGx6N7LbihxnNLRlPFf0UCuRsR0MJjhMu6wmP0A9CPPzqmjZkucJPAZyAbJLgKfHeZcaF4GO6eHT/wFyNZSchFqPhBhmA/qOR5fxHdjbW0heZQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=g9H/C1PpC36IRqyjibUsIvzP3hGQMaqzwi6jG5cD2/k=;
 b=LcOD00f4Pwrf/Q9l5/hye4fjRCUSuUfode4Ka8gjlQEMhJsc/4E8M7QcfH7NYuaFLU30b76yQO70tCAjWDUpxjwIzJrI6HKwePMJcMNG8fWhKjftN+/D2kslmBZDPdM+04V+x1CbaK9GUXx8jUOieIJ8/omNZiS2hnQFoIOyABZr53mHKKECVJ5+plWGK97XNNxFZTdV9g7YjV2xnQO78PNe3RKCNehmhjpcoMaW4Dq+zqfsZ167hFYjl5k6DHLJBQBfsCCPAMyzKBai84Ehhsh8YZ8ikuQGhgn4ptdSxmUjml/eBxWwzcgRDEjIQM/sHreLtad//HvjsJJeURTkJg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=g9H/C1PpC36IRqyjibUsIvzP3hGQMaqzwi6jG5cD2/k=;
 b=S8u0uFlo56x4SJF28DGuVASrEGaufHrfkZD42Js/oteIoLFAIadIi4lxOxGCdROXvRJ8x+nl4k8FHV4FbfAp8pAAavf43f1O+PXGMO80GMs9E4avFLObGREL5g14kukD116YjGP1yCSYLRRS4LnwkYEbx/hLPsVp/GKjWxOzfZU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 1/3] xen/arm: vpl011: Fix misleading comments
Date: Wed, 5 Apr 2023 13:17:48 +0200
Message-ID: <20230405111750.12491-2-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230405111750.12491-1-michal.orzel@amd.com>
References: <20230405111750.12491-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT049:EE_|PH0PR12MB7094:EE_
X-MS-Office365-Filtering-Correlation-Id: 042fd97e-6201-403b-d9f8-08db35c7748c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0RALEhyx4hTMbmn4HT+qB3sV+Icf8kyI862pPtJsaCk9JidOAgy4PZV3O8LitYNCN9bFdqVZLx4p3ZP+er+6w8xs7MgX+EMhh8iuQxCWxNT1e85I274W0QwUurjN4ulrH8Y218TPtRV0QegQBQ2+eqJ9vaIRZW3zqJWkMaR2E5X4GkKtG4P/JnGajP8Z1cFXQmOBaMqenVPJRd1y1y1+nUDiq2oW91TbCuE7Xq4i1IjFOkJ61mCZ4WnzbYuuixxi01NqfQDPo7gvbQx1RP/mFkoQRrUZyYVPbSImOqDj5eh9v9l0VSoFc6ByXpRaXvuskf5rGhr+f7LYsE+NsKZTinzNURUvSAb86OfijPa5UaZ+BqTL3LrePiJrAG3HvVsRD95VJgQe90+SIDfVk2r+kXt8YIibBiRk0cLX/dMxV6GO4Fq5bATRsFcFxCKlUNbSwM+C5i4dy6khOjGutBvk9FjIe2p7NO8y/cfGyRoF4UnmS+GQWKIdHbI1aJu1NDJfey0rx2necX9nX6o3/jr3zs5evXyueaV1+MYSRhfm1aYx02NwyDQU6wn1d8bThlVtSs9q9HrViDboFcxFzOLLwd3xmXb7bo2sDAkOFZv7duSem4PComl7rXuji47LGPcwCMvjsXbUGkFkezrOy0YXgD8SwxFc2dkMd2c2Bz4WMU+fuSB2oCLx8eTErmxPQvxXe5pGsMtTcfQfCVZyNQ4SjHUSXcSw40fYaejnS7YVpko=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(39860400002)(136003)(376002)(396003)(451199021)(40470700004)(46966006)(36840700001)(86362001)(2906002)(36756003)(82310400005)(40480700001)(6666004)(2616005)(47076005)(186003)(83380400001)(426003)(336012)(1076003)(26005)(8676002)(4326008)(478600001)(70206006)(36860700001)(41300700001)(40460700003)(70586007)(5660300002)(6916009)(82740400003)(316002)(81166007)(356005)(8936002)(44832011)(54906003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 11:18:17.2427
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 042fd97e-6201-403b-d9f8-08db35c7748c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT049.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB7094

In both vpl011_read_data() and vpl011_read_data_xen(), there is a comment
stating that the guest is expected to read the DR register only if the
TXFE bit of the FR register is not set. This is logically wrong: the bit
should be RXFE (i.e. RX FIFO empty bit set -> nothing to read).

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/arch/arm/vpl011.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
index 2fa80bc15ac4..0186d8a31834 100644
--- a/xen/arch/arm/vpl011.c
+++ b/xen/arch/arm/vpl011.c
@@ -143,8 +143,8 @@ static uint8_t vpl011_read_data_xen(struct domain *d)
     /*
      * It is expected that there will be data in the ring buffer when this
      * function is called since the guest is expected to read the data register
-     * only if the TXFE flag is not set.
-     * If the guest still does read when TXFE bit is set then 0 will be returned.
+     * only if the RXFE flag is not set.
+     * If the guest still does read when RXFE bit is set then 0 will be returned.
      */
     if ( xencons_queued(in_prod, in_cons, sizeof(intf->in)) > 0 )
     {
@@ -202,8 +202,8 @@ static uint8_t vpl011_read_data(struct domain *d)
     /*
      * It is expected that there will be data in the ring buffer when this
      * function is called since the guest is expected to read the data register
-     * only if the TXFE flag is not set.
-     * If the guest still does read when TXFE bit is set then 0 will be returned.
+     * only if the RXFE flag is not set.
+     * If the guest still does read when RXFE bit is set then 0 will be returned.
      */
     if ( xencons_queued(in_prod, in_cons, sizeof(intf->in)) > 0 )
     {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 11:18:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 11:18:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518467.805053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk19j-0002xS-US; Wed, 05 Apr 2023 11:18:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518467.805053; Wed, 05 Apr 2023 11:18:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk19j-0002x0-Q9; Wed, 05 Apr 2023 11:18:23 +0000
Received: by outflank-mailman (input) for mailman id 518467;
 Wed, 05 Apr 2023 11:18:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=btvB=74=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pk19i-0002uD-JG
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 11:18:22 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20627.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::627])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 92183f78-d3a3-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 13:18:21 +0200 (CEST)
Received: from DM6PR06CA0034.namprd06.prod.outlook.com (2603:10b6:5:120::47)
 by DM4PR12MB7669.namprd12.prod.outlook.com (2603:10b6:8:106::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Wed, 5 Apr
 2023 11:18:18 +0000
Received: from DM6NAM11FT049.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:120:cafe::5d) by DM6PR06CA0034.outlook.office365.com
 (2603:10b6:5:120::47) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6178.44 via Frontend
 Transport; Wed, 5 Apr 2023 11:18:18 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT049.mail.protection.outlook.com (10.13.172.188) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6277.28 via Frontend Transport; Wed, 5 Apr 2023 11:18:18 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 5 Apr
 2023 06:18:17 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Wed, 5 Apr 2023 06:18:16 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92183f78-d3a3-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kaV+HV0ycyoxTc4dkHTLNG3K/5Lf8Nrp6RWQmY7xY/RCfHl5wjPN6xLzuACwqAB8LXSj4+l4lQu+ZdOJJYzgJBYwTIIqBWlAiTO5set3uLlIWptzeYAYbTcNr9ET4OSqAR/NTYXftoAqHkOxCUfr497FQLyTir1c3cNidabbBUg1MgoQklCzmc9bEVCTl0izAjlUBALj1p6Ri8Q41wegJoVSjVrcF68eIfuQFavsZUx9cMSVrwHLPvtkAhzf0y/2hYX5pA2Io3pskB+in6gnGQQFsrM4ux17AFxeqmr+bX7dItVsciPcSCjzJmvL5AL5zQHeh/yauswQAikgVCAiKw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=faFuTvjumdAB2CRmAM9G85/4bLPNgqkuwbkGeX5rngE=;
 b=kjcKQVGukCECjqOvmwnNyPf0gBTADZsQJvmNCJaJdAlX8+ngV4qh8nmRmOiZ+RrnbRVTaAZn99vRWxz32iEejmTqhncYyTuOVMjFxOobagQ6GYp4LAxlAIuiBWrPLsEVUz3rLkhDQL1dkvxLNAc7jg9jJCfEKGKd2U0QZJaQDR4LhhIlFOmtlSb/SsssxgGPaFGHSDywdTiiSSXpGEXJpzAyiElsoXVg5p0rvQTlfHljVnIgZI1DFINedb4OuUeH73wSC+5GPumTRTOaqK5rze63t0/LDz3leVeapXIgCWZi3cTurBab/D4huHCQa9FsT33fbwhyC7dR7KD96VHH2A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=faFuTvjumdAB2CRmAM9G85/4bLPNgqkuwbkGeX5rngE=;
 b=RGWTlBzc/JICUHm5hSlDVNLdIDkoqasILDgwRVjJy8Wy5VyTTNIMvLeCRi90OU3dKuueFx5BfRhTFQMoytkKVLLpx8TeRepO1Ht2sFp9EREclTMnzchUyjPPaVnwJOg6GHyhDqYIsuXawjCJvA645nPaTIT6Qp2aa0cFrGZq+8E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 2/3] xen/arm: vpl011: Handle correctly TXFE when backend in Xen
Date: Wed, 5 Apr 2023 13:17:49 +0200
Message-ID: <20230405111750.12491-3-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230405111750.12491-1-michal.orzel@amd.com>
References: <20230405111750.12491-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT049:EE_|DM4PR12MB7669:EE_
X-MS-Office365-Filtering-Correlation-Id: 00ee4a16-6e8e-45a1-6a0d-08db35c77563
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	673hHt9abvZww3bon2Kv6waImqCn0qsquHbGnmusaOu7iVAKUuPp5cX6+6AoDjbO58/zapfGBe7PKWVcNp1zpMIG8Gjy9Tvdcor/6L/iAf8vgPbI2mOQ2ME8tODBjmNyP51Dho96jUFa3eBTG338ii166a4b+m3/eutD+i6Se6O2tdEscxiXIbn7MnDDPfyV0JsVbwHNfeoaZAvhBZUGnGuDl5qbgmi1ZBKaU6AViSlG6J9t6V888PSVNr8dXYWOG6q7d4qmCkO8IzesWXsbXMJBWST/eaPiJ99LLnsYwMtDr+4X177Q5svIQqIC9mCpLT3FEY7VGJD/BkfkdbMn7F33bFWxPa0uysUp9RSO+/AzOeh5+584GxNZPbVn4vixHuSpPk4mZH5hBF53dhWiW0x1iGw4Oa1+RaegIrA/aQKzte7qrCDx6Gc9pmEAYpzaoc3/ZizLqiOgOl/MZcsKaKK7R7PKPfgTlE8Y8nhi/5kghqkRlPIaPWt0lQlhjVC5n8hiBF3DdIpRe8valZsvV3xE8JlBkbHKst7SJePziBKVNs/eXI8uUgYg7ASq3ttqb5fHOSMfsVWjfV4RcDxFEJjIbRyRD/dDY0OVWgpVVb5WDvfqIkdJ25ruz/HLURA0CFnkHHgM3a706skgYR19XL3we5x/Lhfc8P6E0LvgOiSz4MWwdzSnhq85LI+eOuo9ealmotnfaUpcEPp5qmgvpYa1222kEq3j0Wve0Y4v/tM=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(376002)(39860400002)(136003)(451199021)(40470700004)(36840700001)(46966006)(86362001)(426003)(70206006)(70586007)(41300700001)(4326008)(6666004)(316002)(6916009)(54906003)(8676002)(40480700001)(82310400005)(186003)(44832011)(8936002)(36860700001)(5660300002)(2906002)(356005)(82740400003)(81166007)(478600001)(83380400001)(47076005)(336012)(1076003)(2616005)(26005)(36756003)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 11:18:18.6489
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 00ee4a16-6e8e-45a1-6a0d-08db35c77563
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT049.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB7669

When the backend is in Xen, the handling of data written to the DR
register is special: we want to tell the guest that we are always ready
for new data to be written (i.e. there is no real FIFO, TXFF/BUSY are
never set and TXI is always set). This conflicts with the current
handling of the TXFE bit, which we always clear and never set on the
write path (we happen to set it when we receive a character from the
serial input, due to the use of vpl011_data_avail(), but this might
never be called). This can lead to issues if a guest driver relies on
the TXFE bit to check for TX transmission completion (such a guest could
then wait endlessly). Fix it by keeping TXFE always set, to match the
current emulation logic.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
We don't have to look far for an example of a PL011/SBSA driver relying on
TXFE: if a guest used a driver like the one we have in Xen, we would end up
with no messages being printed.
---
 xen/arch/arm/vpl011.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
index 0186d8a31834..ff06deeb645c 100644
--- a/xen/arch/arm/vpl011.c
+++ b/xen/arch/arm/vpl011.c
@@ -112,8 +112,14 @@ static void vpl011_write_data_xen(struct domain *d, uint8_t data)
         }
     }
 
+    /*
+     * When backend is in Xen, we tell guest we are always ready for new data
+     * to be written. This is fulfilled by having:
+     * - TXI/TXFE -> always set,
+     * - TXFF/BUSY -> never set.
+     */
     vpl011->uartris |= TXI;
-    vpl011->uartfr &= ~TXFE;
+    vpl011->uartfr |= TXFE;
     vpl011_update_interrupt_status(d);
 
     VPL011_UNLOCK(d, flags);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 11:18:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 11:18:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518469.805072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk19p-0003Tf-Fq; Wed, 05 Apr 2023 11:18:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518469.805072; Wed, 05 Apr 2023 11:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk19p-0003TA-AT; Wed, 05 Apr 2023 11:18:29 +0000
Received: by outflank-mailman (input) for mailman id 518469;
 Wed, 05 Apr 2023 11:18:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=btvB=74=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pk19n-0003OA-2Z
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 11:18:27 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20626.outbound.protection.outlook.com
 [2a01:111:f400:7e88::626])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 93a97213-d3a3-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 13:18:24 +0200 (CEST)
Received: from DS7P222CA0022.NAMP222.PROD.OUTLOOK.COM (2603:10b6:8:2e::25) by
 IA1PR12MB7734.namprd12.prod.outlook.com (2603:10b6:208:422::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 11:18:20 +0000
Received: from DM6NAM11FT061.eop-nam11.prod.protection.outlook.com
 (2603:10b6:8:2e:cafe::2f) by DS7P222CA0022.outlook.office365.com
 (2603:10b6:8:2e::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.37 via Frontend
 Transport; Wed, 5 Apr 2023 11:18:20 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT061.mail.protection.outlook.com (10.13.173.138) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6277.28 via Frontend Transport; Wed, 5 Apr 2023 11:18:20 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 5 Apr
 2023 06:18:19 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Wed, 5 Apr 2023 06:18:18 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93a97213-d3a3-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HHh3tRuCS0Lg5C3Tq6P6x0xOB0FnbYQVfh4OfOuKxKw3t3a+PTDKaebTimcA2ee2m4MqTeUVlbqQbg5cnhOjC0Bmu5VAzxucmSa841KW+S1wfRDXjvDVtYxUCcjPCcZRTh5SG7jhVI5RUuMjerVOJGqvUVn782ESYZKZ7wuaKmbCSFwjZ4mHfnD4e+hBHgHAVhNIyGAuHO6CmIQwk4cznwLUBNcBDN5cmEQFO+ie73yjrwkFezyx3K4kLkQ2qa5ZB0RYr/uMmhCV/toVY8p95UiyG2DxqSXGcF4dXZ7a1Qjl9x09/2LqJb58YaBrUzWsUuzwRy1cvjW/HbuYE6DMcA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=u4oUaq7lN8GdxSqirWE6VLLbTs/fytDNa65dm3uA6/s=;
 b=mug+DyJJAZSjAG0UIkT8KjUfmhoWC6M965ZyxpF/CHnPOoTSHVuuNcQCzTqrMzXiu31/qbUpqnngH5eSEw4UsPwGJo30bnnytVsoHZ3UlUhkMLuoJ4dMjEEM+4pUUlafB8t8PM1djhU970gHPSMdgsSwdVFk/S7C3jdk9Il3GqlpuXN2sZhwOho/Z5U/G67nmMEG3uPe56R1rE4UIh1leuwauPjcWioHmhFetyGT068OIaXOv78YP5m9FbjB87yQQty/QQrka+VPBT+I0Gmyj2Y/aBuALCcIUojIicGcdGeVge7HBTAq8j5vzy0FYWv1pb2W757Dk6Erh3H4CGE1TQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u4oUaq7lN8GdxSqirWE6VLLbTs/fytDNa65dm3uA6/s=;
 b=GGPjY6Sk8EY31upxf3pvXMKjvhmR5OWXEIsSh3IwDq9BwOXDEDLGF86X7BQh3y5CIUBjLlrlqt+TsC1EdMBtXvHkKfq7lHp8yBy2RtKO00CPZAqsiBlIrC+DqWV52YftIpby/bregZ9ExP9/gTPyBto3Tx+XuCFXJvEkMMdEaAc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 3/3] xen/arm: vpl011: Do not try to handle TX FIFO status when backend in Xen
Date: Wed, 5 Apr 2023 13:17:50 +0200
Message-ID: <20230405111750.12491-4-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230405111750.12491-1-michal.orzel@amd.com>
References: <20230405111750.12491-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT061:EE_|IA1PR12MB7734:EE_
X-MS-Office365-Filtering-Correlation-Id: 3ddf0363-7b25-4ed1-c010-08db35c7765a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7B7zit89wb63fCYp2LFg+8QgEUB4Nx2tYbry3sdtjGK7YV8f0oZfGcKsRh+tuQduXsx3bx4JdJwl+Zy0BgdWsA5sbFH4M3lkEMh24Z1TZ+SoV289UbyghlO5gzxn352weDtNTvaXaAZlN8kH0G/eulweLK4uNuMNNlwDqnkQhkdX5OIwo41rbbSn8Ljt1EsHJmf5RyKH47BzQ3GUrfnyjNmDsF3gf5q4seflzwxPyOi6dQRwGeoPPfW6MCpxMEQkv4/t0CxsQCM6Tj9ECkSvnx3La2Co0+AdGzYNtY10K/dAumfXeLz6qmmhEWAKNVmXLYyEK0UPp5hhUPWC8JY+4N2n+EXJ9X+HMebRy8TYAMFcnUpXE+P5WNYcugAQ+XUSrmzIfcvXwn5QYVu/DWWtq7g+Cm8RyARoARUvIZCAyMsVzdsF9FKnUb0vISBENWhFEzmehgKiyaqBwJokrjMln82CyhQu2HwikmXO9z22bOgNehpMOllzEdXkBn2eMwpkjTkxM9vlxwLD402KFk/sQIwnwHjp6DoRXaA+76W9hYNFqXFZf9SSSNyK28JuTDipwKGVwOFBupeamnwqd75X9I4J+Uyzxolsjhs/kYV+VkkjG0DBtPPRDuH9yS3e8BoXzH8icqDInjU0A0IPHarKY2JMNNVOvvb1AN5jSwkzt8pH1YsvwOCz49RH0Mxvca5NdGy4LIe8satRfLsyf/Y9OUSjr45yQ9izAVICNn6zqyE=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(346002)(39860400002)(396003)(451199021)(40470700004)(46966006)(36840700001)(2616005)(8936002)(186003)(44832011)(5660300002)(2906002)(26005)(81166007)(82740400003)(1076003)(356005)(36860700001)(54906003)(4326008)(8676002)(6916009)(41300700001)(86362001)(40460700003)(426003)(70586007)(336012)(70206006)(478600001)(316002)(40480700001)(82310400005)(36756003)(47076005)(83380400001)(6666004)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 11:18:20.2698
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ddf0363-7b25-4ed1-c010-08db35c7765a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT061.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB7734

From vpl011_rx_char_xen(), we call vpl011_data_avail(), which handles
both the RX and TX state. Because we pass 0 as out_fifo_level and
SBSA_UART_FIFO_SIZE as out_size, we end up calling
vpl011_update_tx_fifo_status(), which performs TXI bit handling
depending on the FIFO trigger level. This does not make sense when the
backend is in Xen, as we maintain a single TX state where data can
always be written, and as such there is no TX FIFO handling.
Furthermore, this function assumes that the backend is in a domain, by
making use of struct xencons_interface unconditionally. Fix it by
calling this function only when the backend is in a domain. Also add an
assertion for sanity.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/arch/arm/vpl011.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
index ff06deeb645c..7856b4b5f5a3 100644
--- a/xen/arch/arm/vpl011.c
+++ b/xen/arch/arm/vpl011.c
@@ -261,6 +261,9 @@ static void vpl011_update_tx_fifo_status(struct vpl011 *vpl011,
     struct xencons_interface *intf = vpl011->backend.dom.ring_buf;
     unsigned int fifo_threshold = sizeof(intf->out) - SBSA_UART_FIFO_LEVEL;
 
+    /* No TX FIFO handling when backend is in Xen */
+    ASSERT(vpl011->backend_in_domain);
+
     BUILD_BUG_ON(sizeof(intf->out) < SBSA_UART_FIFO_SIZE);
 
     /*
@@ -547,7 +550,13 @@ static void vpl011_data_avail(struct domain *d,
          */
         vpl011->uartfr &= ~BUSY;
 
-        vpl011_update_tx_fifo_status(vpl011, out_fifo_level);
+        /*
+         * When backend is in Xen, we are always ready for new data to be
+         * written (i.e. no TX FIFO handling), therefore we do not want
+         * to change the TX FIFO status in such case.
+         */
+        if ( vpl011->backend_in_domain )
+            vpl011_update_tx_fifo_status(vpl011, out_fifo_level);
     }
 
     vpl011_update_interrupt_status(d);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 11:18:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 11:18:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518466.805048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk19j-0002uZ-LU; Wed, 05 Apr 2023 11:18:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518466.805048; Wed, 05 Apr 2023 11:18:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk19j-0002uS-Im; Wed, 05 Apr 2023 11:18:23 +0000
Received: by outflank-mailman (input) for mailman id 518466;
 Wed, 05 Apr 2023 11:18:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=btvB=74=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pk19i-0002uD-2G
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 11:18:22 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2061f.outbound.protection.outlook.com
 [2a01:111:f400:fe59::61f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 90c3ae6a-d3a3-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 13:18:19 +0200 (CEST)
Received: from DM6PR13CA0006.namprd13.prod.outlook.com (2603:10b6:5:bc::19) by
 LV2PR12MB5797.namprd12.prod.outlook.com (2603:10b6:408:17b::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Wed, 5 Apr
 2023 11:18:16 +0000
Received: from DM6NAM11FT110.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:bc:cafe::ca) by DM6PR13CA0006.outlook.office365.com
 (2603:10b6:5:bc::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.28 via Frontend
 Transport; Wed, 5 Apr 2023 11:18:16 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT110.mail.protection.outlook.com (10.13.173.205) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6277.16 via Frontend Transport; Wed, 5 Apr 2023 11:18:15 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 5 Apr
 2023 06:18:14 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Wed, 5 Apr 2023 06:18:02 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90c3ae6a-d3a3-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=egDyTYTFPoXBuaPuUrAD9TIEGGve+2IIoVmgyF6JUzvYgryJWYPHAMYw20B0KuWW0BtTCS9ACMpeMbWPKy5q28j2N7dEQq0KivkJwRURKPcVa/U671ljgAg6sSaFwAE1mhE70OVIkpWYR+SrMiZ7FN5DZiqf//xounPKlBXfio395jiI+LZxX006EG58Ql1PTuSwkTrd0QAUlfgGkAIdV9WGGRQJ3OhdenRhvqb8i7ujEnS+47xrKcHsMIbFaZEllsD4OG8As4s1f4cHYao7ZiL86nYT4S6J7/ua72vanhWNNI78O0UptfXu8KhBCk8AW2u8BzRa8HjyK72pP8xDMg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NJXtiKGy9roPQFWOXibFz3rmlQ5I0iIIqdyGcUL9dLg=;
 b=XKrbqeswns5TthJXJwPqB4Q5KiMpreiKPV12hCDB/K4KweghOGEhuo3QXBrI+DKl1Yher+9Cb2GAEt4VT10Vptox72aFgclSHJiepE8FTZ0f3oeYS1zLcVvKugFRWG2HC65hAORhvJaHKp2Bk2NBENn6eEOebFxj7onjoxw8KVuiIcmao1RbcwRt2EXKe5O5cWqZi4mp11jXeQgCZyDSDqrdJdnfvCsZsdy8uRDiOR5tdCxIXIIX0kZ2nsL1GlQxfNLu4tbcroFA61bSyH9iMr95yiyjAxeO7ZY7L2W00GMwVHn6S/7U8/WmAWVEjZ9cpr6bTsPiEb4OCXNyOjP+uw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NJXtiKGy9roPQFWOXibFz3rmlQ5I0iIIqdyGcUL9dLg=;
 b=xRv0XUvvy2NUHf7BBziOnH3akAQ8WGIbMxKS3dETvK1eGBhCn/iBoxvVf3VLJBAFxg7hYqyDA/HArTxc1UTtmQx9rOBw++4QuQl6jECHRizqNENfgNxDurzsXz+MxztG2WzIEYbGcDXueeSTkM4G/AnRi8zQHGcAHkOtYXzCp70=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 0/3] xen/arm: vpl011 fixes continuation
Date: Wed, 5 Apr 2023 13:17:47 +0200
Message-ID: <20230405111750.12491-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT110:EE_|LV2PR12MB5797:EE_
X-MS-Office365-Filtering-Correlation-Id: 2abba9e8-18d2-44f0-f264-08db35c773cb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+cf5rcUvuzkcGJlAeB5SDbeP4/EoGERkmpIX6kGb86VINHd5Tu9VddoOdFcO3a7+jv2Ou/6WzVAh4FKBG26QCQlGR9iu8qHhEKzaqeskYXd1vRHDfXmk2ocG1TwN+Xok9sqLhLZT6I7RPbW7Qd1V6xop3h+p/AZVfr4LzjiLf3AHqdhW0ubgTJlxiJJkH3J4lhoxdtVXbKqZbCu6CXMU+Nqg5tVu70oBXTN+plfiVkRPjpllgZheYq+rS1ghMNqNH9o0b6VgGgJ6rX7dg3w2HDiP+PeBZM+lTqXm4ZNDHcgOTVy/mk1J6//iRAgsdkdgI4wK0hS3jikN4xRtC7BkBAi5XhQBhBeNKh0sM4ixT5ZGaUghdjRQYCa4TiBcrBMRA4Qg8ivdidF7hJIDXkHFkQIFWznn5e/5mijT8HR5jl1IPJKUOJA/pS4zbLqDzvxEle8ZZ0ip3Il7bKXC9asvdNtKW4IlYnB0goQxLN2KmhxeFPK4PxJD7fEmAqUq3VkssGfyZGvRFmpagOpfBIOV+FsLNkEeyWgwAZhxWfyW9T1XYYyguM6D6bE8jtEYR1MnqcxQoax4WegMCz5I37/DIYh7m1ND2Tk3Q+/e+XoNOIfwh3V3aueE3CVdou3vKVd4yW3Bia8SblDLZowKP6hXqhqLdxOSdKakA7ZndZawRy6ZJ44Tvj+ZAhb1LSOJ00Roi7ns0/S3P0ZSl5zYdeDyZ5KEnjtIx5g+S2UqRjqXYdw=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(136003)(39860400002)(396003)(451199021)(40470700004)(46966006)(36840700001)(2616005)(86362001)(82310400005)(2906002)(36756003)(40480700001)(47076005)(336012)(426003)(83380400001)(186003)(26005)(6666004)(1076003)(4326008)(36860700001)(478600001)(70586007)(8676002)(6916009)(40460700003)(54906003)(70206006)(81166007)(82740400003)(5660300002)(44832011)(41300700001)(356005)(316002)(4744005)(8936002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 11:18:15.8848
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2abba9e8-18d2-44f0-f264-08db35c773cb
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT110.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV2PR12MB5797

Another portion of vpl011 fixes/hardening.

Michal Orzel (3):
  xen/arm: vpl011: Fix misleading comments
  xen/arm: vpl011: Handle correctly TXFE when backend in Xen
  xen/arm: vpl011: Do not try to handle TX FIFO status when backend in
    Xen

 xen/arch/arm/vpl011.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 11:24:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 11:24:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518483.805087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk1FL-0006JW-64; Wed, 05 Apr 2023 11:24:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518483.805087; Wed, 05 Apr 2023 11:24:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk1FL-0006JP-3P; Wed, 05 Apr 2023 11:24:11 +0000
Received: by outflank-mailman (input) for mailman id 518483;
 Wed, 05 Apr 2023 11:24:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pk1FJ-0006Ix-Nu
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 11:24:09 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 60f995d9-d3a4-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 13:24:07 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 01ACB22161;
 Wed,  5 Apr 2023 11:24:07 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BE27A13A31;
 Wed,  5 Apr 2023 11:24:06 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id XviXLFZaLWT9IgAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 11:24:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60f995d9-d3a4-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680693847; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=oUbQqR93vbZJC0urz3kBptj/FjpSXr9L+EvtWJIzkAY=;
	b=ezl7dZOyS77RQ86UNc7n8IlIi5lpR0FyYMjVG9l1Cf5GWUeRcyQOZJ8jU0cd5TjdNlJ94l
	9lCyJ3o6zwcLlKrhnVg3Owaq+PyqRmfz6Je6gx/VRJ620F4w+6GSyvNoK3ugFohdnh1tgG
	2u3UZc5fN6qZhr0n6v7UeNOMtii8Wtc=
Message-ID: <563ff69f-0e9d-c90a-d18c-b3c351575716@suse.com>
Date: Wed, 5 Apr 2023 13:24:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 2/2] xen: update CONFIG_DEBUG_INFO help text
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230403162823.30681-1-jgross@suse.com>
 <20230403162823.30681-3-jgross@suse.com>
 <358e9788-b930-5c51-1e89-232be43f83e5@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <358e9788-b930-5c51-1e89-232be43f83e5@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------3BvgmbWaZapsVCTU88DTFarK"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------3BvgmbWaZapsVCTU88DTFarK
Content-Type: multipart/mixed; boundary="------------hTOaozWB0FeTq5Wz2qESremc";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Message-ID: <563ff69f-0e9d-c90a-d18c-b3c351575716@suse.com>
Subject: Re: [PATCH v2 2/2] xen: update CONFIG_DEBUG_INFO help text
References: <20230403162823.30681-1-jgross@suse.com>
 <20230403162823.30681-3-jgross@suse.com>
 <358e9788-b930-5c51-1e89-232be43f83e5@suse.com>
In-Reply-To: <358e9788-b930-5c51-1e89-232be43f83e5@suse.com>

--------------hTOaozWB0FeTq5Wz2qESremc
Content-Type: multipart/mixed; boundary="------------wL7lCk1Jbl1XcOSznPXSa8r6"

--------------wL7lCk1Jbl1XcOSznPXSa8r6
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 04.04.23 11:09, Jan Beulich wrote:
> On 03.04.2023 18:28, Juergen Gross wrote:
>> --- a/xen/Kconfig.debug
>> +++ b/xen/Kconfig.debug
>> @@ -15,8 +15,14 @@ config DEBUG_INFO
>>   	bool "Compile Xen with debug info"
>>   	default DEBUG
>>   	help
>> -	  If you say Y here the resulting Xen will include debugging info
>> -	  resulting in a larger binary image.
>> +	  Say Y here if you want to build Xen with debug information. This
>> +	  information is needed e.g. for doing crash dump analysis of the
>> +	  hypervisor via the "crash" tool.
>> +	  Saying Y will increase the size of the xen-syms and xen.efi
>> +	  binaries. In case the space on the EFI boot partition is rather
>> +	  limited, you may want to make use of the INSTALL_EFI_STRIP make
>> +	  variable when building the hypervisor, in order to strip xen.efi
>> +	  before installing it to the EFI partition.
> 
> Hmm, INSTALL_EFI_STRIP is only a courtesy to developers wanting to install
> xen.efi directly into the EFI partition. It wouldn't affect the normal
> flow, and hence I think this wants expressing here such that both kinds of
> people have at least a hint what they need to do. I.e. in the normal case
> they'd need to adjust the way xen.efi is "propagated" from its installed
> location onto the EFI partition, to do the desired stripping at that time.

Would you be fine with:

   In case the space on the EFI boot partition is rather
   limited, you may want to install a stripped variant of xen.efi in
   the EFI boot partition (look for "INSTALL_EFI_STRIP" in
   docs/misc/efi.pandoc for more information - when not using
   "make install-xen" for installing xen.efi, stripping needs to be
   done outside the Xen build environment).


Juergen
--------------wL7lCk1Jbl1XcOSznPXSa8r6
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------wL7lCk1Jbl1XcOSznPXSa8r6--

--------------hTOaozWB0FeTq5Wz2qESremc--

--------------3BvgmbWaZapsVCTU88DTFarK
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmQtWlYFAwAAAAAACgkQsN6d1ii/Ey+O
sgf9EmalaxAcqPyJFCxRiUgtTIOgegw1DREeJU+Ds1jGSIz8LoXi8rOJcFj6o+wxtocr67njyeQ2
zb7alst8y00MFFEyUh3Afnke+apjKZ+nIfkYdWrZopTneqcdVDgUr+1COTKiFm5ad6hnMclL7lE9
qSLQTA85Vm4C37k2wQ3BAseVDMMHrcmeUI0FGmPpPTwUT6hi2Ts0/WXqxZh8GU+oB95yMzODX9l1
LG0ilJQr73ZUVn61jm2yFhF3pN5tRX0ttXYLkqDCje76HYWaOWhjYLovt0+/R7dkPDJd7MzqtxtO
ya/gHS0J8Lxci3RUcRo1heb0JoIuSb99wHhQ+kq4TA==
=s4PX
-----END PGP SIGNATURE-----

--------------3BvgmbWaZapsVCTU88DTFarK--


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 11:51:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 11:51:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518487.805097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk1fY-0002DO-AU; Wed, 05 Apr 2023 11:51:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518487.805097; Wed, 05 Apr 2023 11:51:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk1fY-0002DH-7t; Wed, 05 Apr 2023 11:51:16 +0000
Received: by outflank-mailman (input) for mailman id 518487;
 Wed, 05 Apr 2023 11:51:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=//T7=74=linux.intel.com=andriy.shevchenko@srs-se1.protection.inumbo.net>)
 id 1pk1fV-0002D6-Um
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 11:51:14 +0000
Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2480ff71-d3a8-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 13:51:06 +0200 (CEST)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 05 Apr 2023 04:51:02 -0700
Received: from smile.fi.intel.com ([10.237.72.54])
 by fmsmga002.fm.intel.com with ESMTP; 05 Apr 2023 04:50:52 -0700
Received: from andy by smile.fi.intel.com with local (Exim 4.96)
 (envelope-from <andriy.shevchenko@linux.intel.com>)
 id 1pk1f5-00ColQ-2n; Wed, 05 Apr 2023 14:50:47 +0300
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2480ff71-d3a8-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1680695466; x=1712231466;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=CNsMJe3NJRh194M0JTRSE7YPi2b+rJpnYVMeT8Rmlsg=;
  b=beMB0TzY/9t3ZXnHaXJ+nTAQ/kOAeDt1srIWZRgNoYKgvNW9VOT+0lTP
   +33amnHV7nZs/Wln0KwMiaqnKDChCwUi9cMmAGnfIZbZ9XTuK5zXEQQap
   HnriDMwgMQqEta72leV+XgpC3GKh1/+n+Wi8k0OLhNxxK94EZCI6JUxaz
   AR2ZqNNq+sNQFvh+W9dxgQldeJOFQu8xmMpH904yPKDL3gwKzq28OFLRH
   RD6it8dX5N3BZp627RfvjLrxPDYxiAplGQaE7Qof5rj8ju2+cCpT40QCm
   A+OaEMNKhXN2nDAFqfrCuZ8seSnACfClA6MXHoZgFIrNopaiURPShEPJh
   g==;
X-IronPort-AV: E=McAfee;i="6600,9927,10670"; a="405207771"
X-IronPort-AV: E=Sophos;i="5.98,319,1673942400"; 
   d="scan'208";a="405207771"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10670"; a="797887729"
X-IronPort-AV: E=Sophos;i="5.98,319,1673942400"; 
   d="scan'208";a="797887729"
Date: Wed, 5 Apr 2023 14:50:47 +0300
From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
To: Mickaël Salaün <mic@digikod.net>,
	Krzysztof Wilczyński <kw@linux.com>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Randy Dunlap <rdunlap@infradead.org>, Arnd Bergmann <arnd@arndb.de>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	Bjorn Helgaas <helgaas@kernel.org>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Pali Rohár <pali@kernel.org>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>,
	Juergen Gross <jgross@suse.com>,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-pci@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org
Cc: Miguel Ojeda <ojeda@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Matt Turner <mattst88@gmail.com>,
	Russell King <linux@armlinux.org.uk>, Andrew Lunn <andrew@lunn.ch>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	Gregory Clement <gregory.clement@bootlin.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Nicholas Piggin <npiggin@gmail.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	Anatolij Gustschin <agust@denx.de>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Rich Felker <dalias@libc.org>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	"David S. Miller" <davem@davemloft.net>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH v8 5/7] PCI: Allow pci_bus_for_each_resource() to take
 less arguments
Message-ID: <ZC1glzw4F9F8zCK+@smile.fi.intel.com>
References: <20230330162434.35055-1-andriy.shevchenko@linux.intel.com>
 <20230330162434.35055-6-andriy.shevchenko@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230330162434.35055-6-andriy.shevchenko@linux.intel.com>
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo

On Thu, Mar 30, 2023 at 07:24:32PM +0300, Andy Shevchenko wrote:
> Refactor pci_bus_for_each_resource() in the same way as it's done in
> pci_dev_for_each_resource() case. This will allow to hide iterator
> inside the loop, where it's not used otherwise.
> 
> No functional changes intended.

Bjorn, this has the wrong author in your tree:

https://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git/commit/?h=resource&id=46dbad19a59e0dd8f1e7065e5281345797fbb365

Or did I misinterpret something?

-- 
With Best Regards,
Andy Shevchenko




From xen-devel-bounces@lists.xenproject.org Wed Apr 05 12:29:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 12:29:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518492.805108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2GH-0007LU-BM; Wed, 05 Apr 2023 12:29:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518492.805108; Wed, 05 Apr 2023 12:29:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2GH-0007LN-8X; Wed, 05 Apr 2023 12:29:13 +0000
Received: by outflank-mailman (input) for mailman id 518492;
 Wed, 05 Apr 2023 12:29:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2GG-0007LA-2h; Wed, 05 Apr 2023 12:29:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2GF-0007Kf-T4; Wed, 05 Apr 2023 12:29:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2GF-0002Bf-JC; Wed, 05 Apr 2023 12:29:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2GF-0002bc-HU; Wed, 05 Apr 2023 12:29:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rSmjIYVoguXEwcp7J0Zysrmo9pmK9TI7ocrw0C3tQCk=; b=42NFmFsjaNY1OO5PRjMUNrGtJd
	PXP7zhPgpOwc6BzvqFbS9JKHIFqbQNIdd7mSNt2gcsJc1zJRSH2AKwT+4xfQdiq8XWI8P/jT0DCtq
	iUMeQJ2JO5rwLrrQFHsFwFdtBou0SFpz0j9iLTAgQ/3IsxicPYreRhbOlnmvoYEEfJHU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180146-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180146: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=7d0334e49111787ae19fbc8d29ff6e7347f0605e
X-Osstest-Versions-That:
    qemuu=51a6dc9d394098e8f4141fad869a1ee9585f54f8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 12:29:11 +0000

flight 180146 qemu-mainline real [real]
flight 180151 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180146/
http://logs.test-lab.xenproject.org/osstest/logs/180151/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install fail pass in 180151-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop   fail in 180151 like 180136
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180136
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180136
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180136
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180136
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                7d0334e49111787ae19fbc8d29ff6e7347f0605e
baseline version:
 qemuu                51a6dc9d394098e8f4141fad869a1ee9585f54f8

Last test of basis   180136  2023-04-04 13:08:44 Z    0 days
Testing same since   180146  2023-04-05 03:09:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Daniel P. Berrangé <berrange@redhat.com>
  David Woodhouse <dwmw@amazon.co.uk>
  Dr. David Alan Gilbert <dave@treblig.org>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Eric Blake <eblake@redhat.com>
  Junqiang Wang <wangjunqiang@iscas.ac.cn>
  Marco Liebel <quic_mliebel@quicinc.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Song Gao <gaosong@loongson.cn>
  tanhongze <tanhongze@loongson.cn>
  Thomas Huth <thuth@redhat.com>
  Tianrui Zhao <zhaotianrui@loongson.cn>
  Warner Losh <imp@bsdimp.com>
  Weiwei Li <liweiwei@iscas.ac.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   51a6dc9d39..7d0334e491  7d0334e49111787ae19fbc8d29ff6e7347f0605e -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 12:34:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 12:34:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518498.805118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2L5-0000WY-UZ; Wed, 05 Apr 2023 12:34:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518498.805118; Wed, 05 Apr 2023 12:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2L5-0000WR-R8; Wed, 05 Apr 2023 12:34:11 +0000
Received: by outflank-mailman (input) for mailman id 518498;
 Wed, 05 Apr 2023 12:34:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2L4-0000Vv-Gl; Wed, 05 Apr 2023 12:34:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2L4-0007UA-FX; Wed, 05 Apr 2023 12:34:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2L4-0002XQ-07; Wed, 05 Apr 2023 12:34:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2L3-0004rT-Vr; Wed, 05 Apr 2023 12:34:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lla/AEeIulxWhi8A1nh0/EHunR5dFLClMwvtKlMkPYA=; b=nHS4TLucBXJSMlu6sWLYMP+cDH
	QUk3W96Yv5w27xUpUiHc5CVTXZA3pVo+FeIgr1E2BqtQSBqGfbwIY7YkGog2zjAD9DXzXSs1AjJMx
	zS/Zc4L9cvp3tAwjqwjVJbXSvCm92QE43VlCudcPCwpmUdW6AD0VplsVuw9SznAJi1zM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180150-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180150: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=cdd79996c217805a5bd67bb0c0e4ca05474ef92e
X-Osstest-Versions-That:
    ovmf=7df447930c42addaf2cc0d32916141d95ded677e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 12:34:09 +0000

flight 180150 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180150/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cdd79996c217805a5bd67bb0c0e4ca05474ef92e
baseline version:
 ovmf                 7df447930c42addaf2cc0d32916141d95ded677e

Last test of basis   180141  2023-04-04 17:42:43 Z    0 days
Testing same since   180150  2023-04-05 09:43:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   7df447930c..cdd79996c2  cdd79996c217805a5bd67bb0c0e4ca05474ef92e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 12:38:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 12:38:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518504.805128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2Oy-0001H4-Fn; Wed, 05 Apr 2023 12:38:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518504.805128; Wed, 05 Apr 2023 12:38:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2Oy-0001Gx-CJ; Wed, 05 Apr 2023 12:38:12 +0000
Received: by outflank-mailman (input) for mailman id 518504;
 Wed, 05 Apr 2023 12:38:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gV66=74=citrix.com=prvs=45279ec78=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pk2Ox-0001Go-Fu
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 12:38:11 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b72a6f23-d3ae-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 14:38:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b72a6f23-d3ae-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680698288;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=y9fFf4bToVg4q4ZBqyMdCtjoZ6FaJ5VIqXWpwJzTqIM=;
  b=fkghSAuGhX7z90Qt0sbRyLFXhQTnIxYLT5U35M7Ke+g1H0TH8IDjaCJG
   U6cjhjQrAq3RjuWIus3YnbRPeJEAXxUSKNZVLUeHlvzS2tlxoHZqcKZiJ
   cqXVnl07P2OGt/8Yq8UPuQoo1X2h4rnEULH/B2OneOdBtB32aoper+fOS
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104823534
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,319,1673931600"; 
   d="scan'208";a="104823534"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH] tools/libs/guest: Fix build following libx86 changes
Date: Wed, 5 Apr 2023 13:37:55 +0100
Message-ID: <20230405123755.1427246-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

I appear to have lost this hunk somewhere...

Fixes: 1b67fccf3b02 ("libx86: Update library API for cpu_policy")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/guest/xg_cpuid_x86.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 33d366a8eb43..bd16a87e489c 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -555,7 +555,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
             const uint32_t *dfs;
 
             if ( !test_bit(b, disabled_features) ||
-                 !(dfs = x86_cpuid_lookup_deep_deps(b)) )
+                 !(dfs = x86_cpu_policy_lookup_deep_deps(b)) )
                 continue;
 
             for ( i = 0; i < ARRAY_SIZE(disabled_features); ++i )
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 12:50:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 12:50:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518508.805137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2am-0004Ao-Hg; Wed, 05 Apr 2023 12:50:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518508.805137; Wed, 05 Apr 2023 12:50:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2am-0004Ah-Er; Wed, 05 Apr 2023 12:50:24 +0000
Received: by outflank-mailman (input) for mailman id 518508;
 Wed, 05 Apr 2023 12:50:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2al-0004AS-1m; Wed, 05 Apr 2023 12:50:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2ak-0008PD-Sa; Wed, 05 Apr 2023 12:50:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2ak-0003QK-LP; Wed, 05 Apr 2023 12:50:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pk2ak-0004p4-Ku; Wed, 05 Apr 2023 12:50:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/40jwugWICvczquiRvorV8dIpJGVFD+rlTR5bFzBZpg=; b=cL+zKuInMtoVSnS9t9PizpvhTj
	vHjukMvJEjWmPbQ1SpfoYG7JDr9ddND4ywZF4l7sz+9t0x6zlggx4VHL4nJ5vnuq6w/hzc9Yg/7m5
	wgWE/99n5ZAIUI3hv959ULpOCgckwaNmMZ0QqRs6Q00BI4W690z+HwFkDHLrOxbnBxqg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180152-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180152: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=994c1553a158ada9db5ab64c9178a0d23c0a42ce
X-Osstest-Versions-That:
    xen=415f7d9404171cbc968b1ea22e7d3523ac2f3fc1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 12:50:22 +0000

flight 180152 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180152/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180143

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  994c1553a158ada9db5ab64c9178a0d23c0a42ce
baseline version:
 xen                  415f7d9404171cbc968b1ea22e7d3523ac2f3fc1

Last test of basis   180143  2023-04-04 19:00:27 Z    0 days
Testing same since   180152  2023-04-05 11:02:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 994c1553a158ada9db5ab64c9178a0d23c0a42ce
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Mar 29 13:07:03 2023 +0100

    x86: Remove temporary {cpuid,msr}_policy defines
    
    With all code areas updated, drop the temporary defines and adjust all
    remaining users.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 1b67fccf3b02825f6a036bad06cd17963d0972d2
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Apr 3 14:18:43 2023 +0100

    libx86: Update library API for cpu_policy
    
    Adjust the API and comments appropriately.
    
    x86_cpu_policy_fill_native() will eventually contain MSR reads, but leave a
    TODO in the short term.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit a16dcd48c2db3f6820a15ea482551d289bd9cdec
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Apr 3 17:14:14 2023 +0100

    tools/fuzz: Rework afl-policy-fuzzer
    
    With cpuid_policy and msr_policy merged to form cpu_policy, merge the
    respective fuzzing logic.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 441b1b2a50ea3656954d75e06d42c96d619ea0fc
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Apr 3 20:03:57 2023 +0100

    x86/emul: Switch x86_emulate_ctxt to cpu_policy
    
    As with struct domain, retain cpuid as a valid alias for local code clarity.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 8eb56eb959a50bf9afd0fd590ec394e9145970a4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Apr 3 19:06:02 2023 +0100

    x86/boot: Merge CPUID policy initialisation logic into cpu-policy.c
    
    Switch to the newer cpu_policy nomenclature.  Do some easy cleanup of
    includes.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 4f20f596ce9bd95bde077a1ae0d7e07d20a5f6be
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Apr 3 17:48:43 2023 +0100

    x86/boot: Move MSR policy initialisation logic into cpu-policy.c
    
    Switch to the newer cpu_policy nomenclature.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 1027df4c00823f8b448e3a6861cc7b6ce61ba4e4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Mar 30 18:21:01 2023 +0100

    x86: Out-of-inline the policy<->featureset convertors
    
    These are already getting over-large for being inline functions, and are only
    going to grow further over time.  Out of line them, yielding the following net
    delta from bloat-o-meter:
    
      add/remove: 2/0 grow/shrink: 0/4 up/down: 276/-1877 (-1601)
    
    Switch to the newer cpu_policy terminology while doing so.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 66c5c99656314451ff9520f91cff5bb39fee9fed
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Mar 29 12:01:33 2023 +0100

    x86: Drop struct old_cpu_policy
    
    With all the complicated callers of x86_cpu_policies_are_compatible() updated
    to use a single cpu_policy object, we can drop the final user of struct
    old_cpu_policy.
    
    Update x86_cpu_policies_are_compatible() to take (new) cpu_policy pointers,
    reducing the amount of internal pointer chasing, and update all callers to
    pass their cpu_policy objects directly.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit c9985233ca663fea20fc8807cf509d2e3fef0dca
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Mar 29 12:37:33 2023 +0100

    x86: Merge xc_cpu_policy's cpuid and msr objects
    
    Right now, they're the same underlying type, containing disjoint information.
    
    Use a single object instead.  Also take the opportunity to rename 'entries' to
    'msrs' which is more descriptive, and more in line with nr_msrs being the
    count of MSR entries in the API.
    
    test-tsx uses xg_private.h to access the internals of xc_cpu_policy, so needs
    updating at the same time.  Take the opportunity to improve the code clarity
    by passing a cpu_policy rather than an xc_cpu_policy into some functions.
    
    No practical change.  This undoes the transient doubling of storage space from
    earlier patches.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit bd13dae34809e61e37ba1cd5de893c5c10c46256
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Mar 29 11:32:25 2023 +0100

    x86: Merge a domain's {cpuid,msr} policy objects
    
    Right now, they're the same underlying type, containing disjoint information.
    
    Drop the d->arch.msr pointer, and union d->arch.cpuid to give it a second name
    of cpu_policy in the interim.
    
    Merge init_domain_{cpuid,msr}_policy() into a single init_domain_cpu_policy(),
    moving the implementation into cpu-policy.c
    
    No practical change.  This undoes the transient doubling of storage space from
    earlier patches.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 6bc33366795d14a21a3244d0f3b63f7dccea87ef
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Mar 29 07:39:44 2023 +0100

    x86: Merge the system {cpuid,msr} policy objects
    
    Right now, they're the same underlying type, containing disjoint information.
    
    Introduce a new cpu-policy.{h,c} to be the new location for all policy
    handling logic.  Place the combined objects in __ro_after_init, which is new
    since the original logic was written.
    
    As we're trying to phase out the use of struct old_cpu_policy entirely, rework
    update_domain_cpu_policy() to not pointer-chase through system_policies[].
    
    This in turn allows system_policies[] in sysctl.c to become static and reduced
    in scope to XEN_SYSCTL_get_cpu_policy.
    
    No practical change.  This undoes the transient doubling of storage space from
    earlier patches.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 03812da3754d550dd8cbee7289469069ea6f0073
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Mar 28 21:24:20 2023 +0100

    x86: Merge struct msr_policy into struct cpu_policy
    
    As with the cpuid side, use a temporary define to make struct msr_policy still
    work.
    
    Note, this means that domains now have two separate struct cpu_policy
    allocations with disjoint information, and system policies are in a similar
    position, as well as xc_cpu_policy objects in libxenguest.  All of these
    duplications will be addressed in the following patches.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 743e530380a007774017df9dc2d8cb0659040ee3
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Mar 28 18:55:19 2023 +0100

    x86: Rename struct cpuid_policy to struct cpu_policy
    
    Also merge lib/x86/cpuid.h entirely into lib/x86/cpu-policy.h
    
    Use a temporary define to make struct cpuid_policy still work.
    
    There's one forward declaration of struct cpuid_policy in
    tools/tests/x86_emulator/x86-emulate.h that isn't covered by the define, and
    it's easier to rename that now than to rearrange the includes.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 21e3ef57e0406b6b9a783f721f29df8f91a00f99
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Mar 28 20:48:29 2023 +0100

    x86: Rename {domctl,sysctl}.cpu_policy.{cpuid,msr}_policy fields
    
    These weren't great names to begin with, and using {leaves,msrs} matches up
    better with the existing nr_{leaves,msr} parameters anyway.
    
    Furthermore, by renaming these fields we can get away with using some #define
    trickery to avoid the struct {cpuid,msr}_policy merge needing to happen in a
    single changeset.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit c2ec94c370f211d73f336ccfbdb32499f1b05f82
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Mar 28 20:31:33 2023 +0100

    x86: Rename struct cpu_policy to struct old_cpuid_policy
    
    We want to merge struct cpuid_policy and struct msr_policy together, and the
    result wants to be called struct cpu_policy.
    
    The current struct cpu_policy, being a pair of pointers, isn't terribly
    useful.  Rename the type to struct old_cpu_policy, but it will disappear
    entirely once the merge is complete.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 12:56:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 12:56:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518515.805148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2gk-00055I-BC; Wed, 05 Apr 2023 12:56:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518515.805148; Wed, 05 Apr 2023 12:56:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2gk-00055A-8S; Wed, 05 Apr 2023 12:56:34 +0000
Received: by outflank-mailman (input) for mailman id 518515;
 Wed, 05 Apr 2023 12:56:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gxas=74=citrix.com=prvs=452091250=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pk2gi-00054z-Q1
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 12:56:32 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 485f72f6-d3b1-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 14:56:31 +0200 (CEST)
Received: from mail-bn7nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Apr 2023 08:56:28 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by DS7PR03MB5398.namprd03.prod.outlook.com (2603:10b6:5:2c2::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 12:56:25 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Wed, 5 Apr 2023
 12:56:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 485f72f6-d3b1-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680699391;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=LG+xH0EesdaTqZvUOLS0x3iH9+npzY9Dt2rpaa6Wl6A=;
  b=RhOd+GavSqcdW7huMfyp2wWEzFwg2ap/U1CHVuMwmKyKYjCPRmRBAA4K
   YHZ728Vgi1uKRn1+En8KxhzYn4elefJ6KALGQLlrMC64My5JZqtPj56JK
   kRuKoBAUu5JtxZrA9IevStpaZYwwrKzfTTAINJ+jQsqcYRu26mdNvFL7S
   o=;
X-IronPort-RemoteIP: 104.47.70.100
X-IronPort-MID: 104325715
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,319,1673931600"; 
   d="scan'208";a="104325715"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=O+N9EI9TkQ/DhWeNlnE7Yo/sldMCNp1mdQD3T9TuoAYCwkLIviMDTeX2GbQX/ePQTzT5/Vw/6tioKm/LpY0NSMtSiWjv4VCwlDwr/SsP1cNg/3rovm5NsbRviKbUFAi83+zT39tbpu5RWLT4hoDCbqsu82xnv9/GWYG7rmnc9gg4xd7y/bAiUjvra91kWWsgmhKm3kzTVTjbff//E9R0uvEW/H/SV8Jv6eTPIQtE1O9PTArIjscS0S8I5yYpBWcBHFQn7pBQmJoLhSV3SjJExmhCI95ScR5nkOJl8HZRYtgZMNtvWwJMcL07wTvQeu4Vwo0tFaG2NRKDcX7dQM2Tyw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=aBW7ugeVKCpkm57u3b8Y1zOQVaNuGRpQnnTXZkDDejI=;
 b=kN8R7AcTMx8DjQ5lZP2vSJmaJ1XVprwYnCJM8/2e6DhRIg2Z/ALx6HTSDoVr7u54cWRnniGV7icvIsMU9+Xxi0jk0okFVB4VE2F906lY/Ggnsar9rdl0jlHFL8T4pPMvN6RjaJEqvk2S7j21vXKUgEd27G29E6UQ+gwkjj5LEwmIo+37jEIxG+zAlCp8Edq+UyGaeiZPjTkLWrYQWNN3NGxF+4z+0lz2En0lXhT2+gcynm04x0MFdcCH+s3aiNxqYf1ie3e3pccxujVinoeyWUm7YGGFk4n2BvuSUGTWRYAlF50jM48EXd619t5a+fJ2SrbFufWGjun+bq255/cT4w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aBW7ugeVKCpkm57u3b8Y1zOQVaNuGRpQnnTXZkDDejI=;
 b=axbWHihRB9XMHG8McbpiQr6H1zhb6Vf4kvdmbSbk6LYSejwopISc8S1gbIysg86hqT46m/SXEn2MoKFDdHe8fKoLXROXiT7T1w6ya8hlGKZ0qp/Ank75iBJZ7D8FJOoaU0gSoWpuc3HsQ9oN1HdSoU+OYSO/Mxoic5O3wiOChuo=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 5 Apr 2023 14:56:19 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] tools/libs/guest: Fix build following libx86 changes
Message-ID: <ZC1v85y+HQmvBAQ3@Air-de-Roger>
References: <20230405123755.1427246-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230405123755.1427246-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: LO4P265CA0278.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:37a::19) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6360:EE_|DS7PR03MB5398:EE_
X-MS-Office365-Filtering-Correlation-Id: a79f9569-2f32-4237-b2d4-08db35d52a13
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3atwFRkU9JE+3BrAmpS3zE+v6F8oCgtVdjZBXApsWXo4kh291tRI/4EqWutxaDaJ1ihDtg/N/+uhF9A1DjptX5P2AXXcswpU9kgCiDtVW3lIhoYicmE3/7zN8JKlxNicdYEVvIXvEzckWVEgXuk0dJRKA4jAXRL0XWYLgdKOGPDomcnvRfxv/Fu4plHCmSPzcoEWuGP0EuQ5Z7s0CXDApTDQYqRRE7x6EzdNFjypsNnvIibufIL7NDLKtLA/MZQB9NQukpUWHD4XuRoHY3VQCazKsloxIgnDTxcI8kcLH2IHQv9sNsMorvLXdCtZNihY4bx+EgBjAONBRD8j7G+vQysXUoCYnRRhHex6XKdY7vuI9s8vB2Z9SzkCdBNJnUzkn8PTArB34rPtsAqiCarXLltdS7P0SrGkqo4N+bGDsQC9VRIDtrs+JcrOIiRhJU7f30H4Mrv20PtPDA/mIMtyOq7SeU3ulbdmtCyya01l9RTEBlOGXkB5i13B5VXmQoARp3+bmTOYYzE+jJ+/zEicxbaHh7qiII5jLiSACI5t0659b4xIsCHX2PYigH3aA59Ez/2CVsDD5OO5rFHsKtVTxAhucoTsMt1wdUsm+QxyFtw=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6360.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(7916004)(4636009)(346002)(136003)(39860400002)(376002)(366004)(396003)(451199021)(5660300002)(54906003)(316002)(6636002)(6862004)(41300700001)(8936002)(2906002)(186003)(478600001)(66946007)(66556008)(4326008)(66476007)(8676002)(6486002)(107886003)(6666004)(6512007)(6506007)(26005)(9686003)(83380400001)(85182001)(38100700002)(558084003)(82960400001)(33716001)(86362001)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VWEwQUpMTnR6VDlPdmpKclppY2FTQnZNN0VyQ1lySWp4OFNZMFpTY1cvM3Ix?=
 =?utf-8?B?d29hOXU4WVlFbXhFV2lZU3p1OFFvUmZEYjkrSFdHTUd5dlJGUEc2RGowRzda?=
 =?utf-8?B?cGlSdjZobnk3dkJEQWI2dTlRczJoS2VwZXRUcXZPVzUyWWJGZHFCMWNSMDVn?=
 =?utf-8?B?OXU1VXlPV0VrRHhMeUNRNG9TTExNMW45NkVNSXFmWmZWN1pNU3gxWVBNanl0?=
 =?utf-8?B?QTV6VFJ4d3ZjOUUrU1VlOE5TZFhsQ1pKZER6cHhUYTB4dnQ1UzlrVml0RGlE?=
 =?utf-8?B?ZkRHZ0lDQWN5NGp6QVhpSmxOMUE2NTJsSENsNEVxSnNxTG8zZitQTDhScnZm?=
 =?utf-8?B?Vm9La3ZZREdaZG1HSlh3RWpMUmdLTTBIQmcveWhoVFg4NnB4UGZqYWNxazNP?=
 =?utf-8?B?dCtXVFdnbVRMR0NNbVo3R3Y4R3JHRkkvaGl4WWFPM2ZOWGR6bWRtS0lOb0la?=
 =?utf-8?B?RjFqRzg2RmlVVXhXWkdBbDhRRlBJVmM0aDlUb2lnaUhDc1d3R21hTXl4REhJ?=
 =?utf-8?B?c0RCWExGY2ozWDRDb2h5Vk13am9qK2VoeE5Zdzl2RDFFL25uU0w1amQrY0M5?=
 =?utf-8?B?SmI0Umk3ZUdmUW4rcWVTdDUxNjJsbEdZSmVidHRtUUlTZ2didW0ycGxqS0RL?=
 =?utf-8?B?MUFybTR6eFFrUk5UWUhaeGdlNWdYMHg2SmljQnpCRjdJUWNSUkZzem5zbHdH?=
 =?utf-8?B?ak91M3M4bDVCeDV6dENaK1d3c2tDVHNsY1kyUVBQVFJpNjVPeVJZbzhZYk5l?=
 =?utf-8?B?TVFBRWlZSzVLeG1EOGlPWnZPZTMrQ0pkSXExQWtJaEE5VlhKZ21HQlJBaTQ2?=
 =?utf-8?B?K3FpS0xYK1hrUFh5UVhET1dGNFVNV0llOXAvNjUyajlNb3RTUklCcU45eUlh?=
 =?utf-8?B?ay9aaTEwSkxCbCs5R1hrTCt5SDIvMjZXUmRLQjhOYW1HbnhvVEtqM1lQUmhE?=
 =?utf-8?B?YlFvWlJiclVQazNML0xYT3ROd0NLSmYxSmV0ZkNNNUxkQ09lTXlTMGdLSEZz?=
 =?utf-8?B?VGpYZGtKS0tYZTduZG40WUZxUjdrN2NaMDczM2FsSGZQSkkzTnN6REZ3OVJW?=
 =?utf-8?B?VGlrNWY3ZjJhTVF5VE4zeHJhYkdqc2FybTBvcE5iazdzVzltdkpnaVZGTXNi?=
 =?utf-8?B?VGU3ODFla3doUzRaY3VkbC9YOSt3QTVzOTlSV2I1bVI3QzlGVkFBU1k5bndm?=
 =?utf-8?B?QjRNTGkwZGJjNFdXQXQ5TEVXc1d4OGQxSWdjdUpwbmI3aWJzeDZGbkZ6YlVy?=
 =?utf-8?B?ZEdFeE9lSUN6aFlFQm0yV1dEMW5ndzJGY093Z1U2cVJwZFJmbHY1dW1aWjE5?=
 =?utf-8?B?S2J0czExR2g3N016N2s5THB1c2N4cEg4RUd1bEVNZHh2OGlJOHIxRWNadWVn?=
 =?utf-8?B?RjMweWl1WnNOYmhzaHFIV3JqMXpJa2VxN1ltUWlPdXQ1Uytja1dkVnpwcE04?=
 =?utf-8?B?M1FDR0FxVWdBZldPOFhEV0MyMnV2MGRyU3M4QnVTSWRTaytLNmtJQkRVTGpv?=
 =?utf-8?B?TUxLV0xObGFYSFcyWVNYYzU2T3BaenE4NTFBai9xcUJ2ZXBrbTYrUXd3MC9N?=
 =?utf-8?B?TkxtTFdsUWhMTzBJd2htZVcvL3h5WUJDZ0QrWU93RDBicThUZG1yR2lkNU5m?=
 =?utf-8?B?aDk0Vm9CQXNMMVJDeGJta0tnc0JUSEdEdnZuZDJJL2hNUHV2WG5mSkhjdDla?=
 =?utf-8?B?OHEzOWdPYlFwMktEMllKQ2VDYnFiTmViaXhwMkw0K2hxcXByTGhlNFIzYjVp?=
 =?utf-8?B?ZVhxb3MwUDM3S2hWQk1rbVE3RXRMazF6MFZHbjBMaTA1NnR4S2IxT3pqZjZr?=
 =?utf-8?B?b1g3WkxlVVVudUFsWUFMb3dPd0NTS1NrZis5OWJwN2RMcGdqcERWRWpucy9h?=
 =?utf-8?B?Sm9sV3hiV2RMNjZxQXdoMWJreE96bjNENnhXSEYzY25OTVdYLytvRHd1aVVZ?=
 =?utf-8?B?bjlaREpwTFllbTNtcG54MGpyVnFiYTZpeTJxMzBYazNlTFFCcnRZeFY5bFRX?=
 =?utf-8?B?bXV3eHY1alZWTjMzZ09sSkRxa3RJUXQrMHhXZWM5MmNWc2RaVVc1dWloNGI3?=
 =?utf-8?B?M0VjbmYxYVlQdzJjWjBqcXA0MWlXek10UmZzRXBHUGFUV0dYWk05dnVHQVFB?=
 =?utf-8?Q?7S4s1zXHYPXDQdhqCL3SjVdMY?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	UWraG3vZhzBUlEyzSwtwY5VqQKTWGQUD0hSUMLjGIzMxyyvJLbkDUHxdGCnHSw/+r7ASmrmeTxsPxjV7bjNbljIB+AP7abd7CTbqxKeUBN4KWZ6uHDcJHTwEt8m42Qtk6l2P3J27JJn47Uu/ZGM7Ttgw3eZoy2Jx23SUrAFAghS5qIjxxdEV1FYsazQnglue+boMMGsi1kb8Xpgy5U9DgYYMJZrcTIin4e3p/BlWDVT0wmbLuDdq9PM7vusUbxfSfGM69R7mzCRvm6FYucJ29CXv2pXOPLsEliUDuaMSjBKxuM+LuU7cuIv67/2HroUh7Vtvy/oHa5ugo6FmDDVGCF3hfUB5kWhfzpxdsPGa1WZ14ielQ6kA5YXRiDyVDnErW1oPfE3bBjUuNwHtk1TdQ8mGH/ZQtkYuFVAMPqkWJs2DohIzLx438EFicznE4nM+4ZygI4eQUaTZYI+KjnokI6F6nzGKo5l4FwCIDD4vcnnC6zKsyGjXscoYzTahDs0IfXaVfAC6VHx4/T4FJcNGCbjjr4X470UffGeLJXN6eWFWEyaVFlwi1Qv1kxpFICoxLZv9bWRK4gE4UGVSy+maMLYoE50gblWKmeZtgHmgtgjHH3iLbvXdCdWTEQge0VxoNt53JEBLAIxAg3WkQ8ylGaP+x0fKDPhZEp0YjPz695y9hvhHtLAk8hxnO4lYCO/npYjYF1tytcZj3JWSeTDeTW5yW8BUollnXMRujYvrZBqazt8oJYfuhHmUz0glWa1kZgROAyp9lqurIdmqigngIECWdu/IrgM1dzp8P5ikfP3eq3O2B9NJE1L3hX0/0CjZ
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a79f9569-2f32-4237-b2d4-08db35d52a13
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 12:56:25.5555
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BIYHNQVKdrzbaAFn7MhEN7fo3OuO9hUaEl6zuDKmwgcFHuRo4+0zu4n3JX2v3ItuEtxRIGWenhfrd5XsHey0TQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5398

On Wed, Apr 05, 2023 at 01:37:55PM +0100, Andrew Cooper wrote:
> I appear to have lost this hunk somewhere...
> 
> Fixes: 1b67fccf3b02 ("libx86: Update library API for cpu_policy")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 13:00:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 13:00:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518520.805161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2ka-0006jc-Rp; Wed, 05 Apr 2023 13:00:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518520.805161; Wed, 05 Apr 2023 13:00:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2ka-0006jV-P2; Wed, 05 Apr 2023 13:00:32 +0000
Received: by outflank-mailman (input) for mailman id 518520;
 Wed, 05 Apr 2023 13:00:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gw2r=74=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pk2kY-0006jK-W6
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 13:00:31 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on0607.outbound.protection.outlook.com
 [2a01:111:f400:fe1e::607])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d67a0e83-d3b1-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 15:00:28 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6897.eurprd04.prod.outlook.com (2603:10a6:208:184::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 13:00:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 13:00:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d67a0e83-d3b1-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bu1LozaZwSHW0gQ8nnl1ClKXoo70m7hMUZeWF+Rn/E8nfQVmzKSa0TREPaf1/O1cpGT5aS0rAgZyYkVw1H1yrD33T9byTv0uGykw5m/Qa5JBRdcHafJwX8C64RjgVlB+99z64J4TZr4gpBSOQ5lKLY3a47ounbO7KX5KoZQmYc2cOvajwbQaxy1ip/vo3VHOuVlPtsxNvr0VyAtx1kLoIsBfkBsnAFUqeeODi5KIxmCiikJr9/wSztmmeQAI3QHkNHF3ldhyYWzTFfx+RxjjqJjkl4W/Kgd5giICbwsHVLOfoBQiJ7R1e+zYVPG1uM8yj+IioyJKcD3yAVIWhBJwBQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qAfNC/zjVcbX3wOy+qauUhTJ8tcVsFPgjltxjEMksao=;
 b=gJFipnJVgve6cXeZRxscll5tkVoMgsxmfSW4MKJKyUdzEO0gpwx9NXUQHvAWXf8EAYq2JBq85h2UDhYii4i6gnrYTzA33mUzDpbp6ppv3sP9LffHMk98k3gQDrMZ/pE8JP7/3dXONJ3joJTvoAec8cHWoVNWAy/N7d3O3n348MzZ3cWt/B7MvE/R1E9aFdrW9w4bkaccufi4thkzNWo62TEINUXxYylSpQVkq5WyURedb5TyxhfBQDhcHMwZ25oY0HTtIZN7+J4z5ON+q20UNHFjEJ9bKQfPcph4PnPX/M5iLVkCORnLq2bf3L5iHOTx/H0gWiAzglvytxZFa8eVlw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qAfNC/zjVcbX3wOy+qauUhTJ8tcVsFPgjltxjEMksao=;
 b=CeIDe/eMpTRstGsb2sF1ACodnSgpT8dCvKFriNw90ZilNoJV9hgRIgZyuizVpJF39hJrLxk3Kh3mbgN2c9jnrCs/TK4lslutRcTikdq2myKVMw1xC2S5BZoGS3N8XqNCQMS4Xp+dr8dEJTUrCA9BA79kJ8PLWCHh0XyjUEGTqAPI19JmcwXjVehCjWj5LRlXpplohKxoy9YrMyqAzSJD10nRGF3r4LDLS32CQauHnJ14PX4xz037H/d+r4JisSmJD/ESzSgTnwrZIBCYwGzxlfukdd67z2zUwp4mT7j8+b3urRvJFpOFRUV32kWl6MJHjYTgoe039tCkTSopJmvHLw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <52421029-a058-65fc-074b-fbcb1ccfb1b0@suse.com>
Date: Wed, 5 Apr 2023 15:00:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3] x86emul: build with -Os
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0086.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6897:EE_
X-MS-Office365-Filtering-Correlation-Id: 5476f786-e6e4-400a-1910-08db35d5b94a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TvXv/9UOKXtqvCkHycSojunwrjd/9c3/aGQe07liLNbzlncxfsWsyQWYJ4/CEuSz77gpNXKImNA+DNSWTh2Cx0NkDqIFcMLSrTfQUyLd8I333wT2rOYhv+fKj3pVy+lt7D5x5FFEc+5Bf1ZX6saQy8IXQot6cwdMCi9Ehn6lcU2bYiwtkDQ8dATDuvL0KGLu9vTEtM6ggEzG5WS48JsJAaWTTklyIfG9r/laNGG2rmCqfR3orIKKr4MdgPnK8a7OvRsMeY7LCd+LYf5/7mVr8O0e3tEu/fADhUYF7reOz6Tfi9u52uK0xP1+m1qUD+DG+OhcnPE3uNPlaaLeiXNjvfXWSAgsBZ6aPdJgh3CjaakpRp1qTcMBmTAhGeZmCfpc0CVOAJx0TdKxnezd4+XgopwTizGGE7S6NH0VmpB4JEGvcHEHQBbPDk1og/5pSlISi8D7U78WErXr6M4lxye6quFCy4odTdOgteeSIkjZGKGgGciHzIL9/KpVoeqEmlazu4HG32e4H6Kf6JE8tbHkxp+IMjYX0elpHf6kl7F3O/cOkX528D2EtxSuQOaGspbZVHeZZZxznyEg0mtdzc1BkklMf9PgYOTj/FMdF9EDX7rCyPlCdjwqk2k/4oIQfLHL9AsfG+tuyxOah+IxdHy/Yg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(396003)(39850400004)(376002)(346002)(136003)(451199021)(31686004)(2906002)(6512007)(6506007)(26005)(2616005)(186003)(83380400001)(6486002)(5660300002)(8936002)(38100700002)(36756003)(478600001)(54906003)(316002)(41300700001)(31696002)(66556008)(66946007)(86362001)(4326008)(6916009)(8676002)(66476007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?R25rVG55aHQ1YmhTMzd6Q0kxdHRYdzk3WWhmb2w1Y0xnODQ5dnRUbjAyZFNm?=
 =?utf-8?B?cFJHNkd6U2lYTTVHZEpNb3dZZ3RZMDFuL2VKSG8vNTdIQlROeG5QVTdaVm1r?=
 =?utf-8?B?K1hEbWVGRjNhdUU0SE1QR2NmblJJbXNteGtyZ3BUcm5FemZ0Mnp2a0tjOFB2?=
 =?utf-8?B?cW91eThXMU9QNjhwWXJSSzRuRUphcHE1YzlSWTRkc3VkRWlHdC9pbVVoRVZx?=
 =?utf-8?B?dXJ0NEFKNmx6S3JEYnYyR3BGWnVEVmpsUUp6ajF4QTVNUXp5MUdVN1M4alRU?=
 =?utf-8?B?WlllVjRuQnVvRndFR1YvUjhraWdqTjJ0L3JhNUJyRlNxbjEySnlTZGk1bFVG?=
 =?utf-8?B?dFptdHhWMVR4WWpXdzBsNmt2SWRsMm11Q3dQTHVxejV6cGdqWHdCMnpXSHRY?=
 =?utf-8?B?RTNrVkpGTktsQWJ2ckVvcy9IL2luQ044RXFrMStld2VnZDFQWGZJcGFhQTBZ?=
 =?utf-8?B?YWptYlM5NHBPNmdEZERMVTZYMjhmT3JVdHNESkx2OUg1T2pGYndHYjd6L0p2?=
 =?utf-8?B?TWVUajh1ajd1UkJEajZZcnlsbFBQanR4b1I4c2VsOGNSR2d3NjZhMVZLbHhD?=
 =?utf-8?B?QUMrVWlXc0pzbDU0ekV5Zm1KR1Jra254a0Z0TnFBNDZhbXZkWGVVbnlyOG5X?=
 =?utf-8?B?cjhWZDNac0E4dUlmL3FGK2VoU2FUdkh0Rm05NlNYakh0RzZPYXBpdytEejFB?=
 =?utf-8?B?SGpjNUw2M2dzY0RCMjdCK2VLQkdEU1paY3ZMeXcrb2dSTVYxc01VSjRGU3Nl?=
 =?utf-8?B?TW5wVm1uQm8wZUFuUnU3bTlEb1QvRllHNDM4Wi9FMmgwTk1wWmxSVVk1azNS?=
 =?utf-8?B?amUrRFlVcjc5cm02TCtLTWFJam16MDZ5eElMcW1pc0YvUW15d1EzbUVmdkdF?=
 =?utf-8?B?a0habCtaMGpkSE94aEVQbG1TaWpjQzY3VWJVby9PZU4rVWl6NlVQZkd4NkUx?=
 =?utf-8?B?M2hVaGpWdVVIL3RMR3Bwd0RmZkFNcXo2UTJNZlBCSmV6a1RYbEYwd3JNWU52?=
 =?utf-8?B?cFpjcGwrajhSNmxHSEhRMzZmTDRDZCtGVVFRdjU3TTJJN0hSblgvSkZhQUpr?=
 =?utf-8?B?RUNjNEhnQkEvTzVsNThQa3hHbVdoVnFqTWIzUy9NcE5nRXZSV2NzNkg0TXEz?=
 =?utf-8?B?QW1RM0NOczlBZExQYkpsQ2xyTENBTk5LaWx6dE5vSDdET3YyVTZDcmpuK09E?=
 =?utf-8?B?NXpVZE5EUkc0RE41RmtvQTMrd1hvbDBnTGFGQUNtTlc4cWl4WlRDOUl0ampH?=
 =?utf-8?B?L0FHd1dJZHhkZjFreXUreU9IcmJIYVdEMDMrQVEvekF0eUJHelJ0akwxcVYy?=
 =?utf-8?B?RjlpeVBNMk4rRWJac21VSlpnZlRzSG5uS2Z2eXRCUDJxK0dyQzZ4T1dSRVNu?=
 =?utf-8?B?a0JGcHp4aGFxR2tTR0J1SzhLbFcwRnhpVkF3OS8zeXNnUjhwc3RuYXJBQ1c5?=
 =?utf-8?B?dllHQUVwZVUwYzhOOVVNZFFaSkM1dy8rRVB2VzFoVU1Nb2lxRmdpUkZjV1FC?=
 =?utf-8?B?bS9yS0I0cDVjMkIxM0NmWG8zcU85T09IUXJRYzA2RFA3Mit1bURRNEhlL3Zk?=
 =?utf-8?B?WU5TdWN4blpNVk5yS0w5Vldma3pYcU1IWWl2cVd1YTdqemxDcjNwYVlibUVP?=
 =?utf-8?B?MEhuSTZpbEZPYTNRbndJelRWSWJjNk5vakZNL0VSZkhGWjY3TWFZVktRbVpJ?=
 =?utf-8?B?cXY2Wkl3c0hMQVhXbWYzQzBqTTY1NWo5QXhNczIrdENFSkFmeTJvV2J4MFd1?=
 =?utf-8?B?WUJJUDlwaloxQmxqZndqVlBVY3NqVzIyNzduWmE2Sm5jWTNMSmNNK3NDSUpr?=
 =?utf-8?B?d2k3cVAvSEVPMWhRZnh5TVhTUnVVSlIxUUNwRU50T0xqb3VtWVJXOGxpK2l3?=
 =?utf-8?B?WlFvVmhjYW91MDB6WGx3THR2RWFML0VrU003aHdSYk9jUVcveU84R1VvU1Vi?=
 =?utf-8?B?UWxnUmxidVN1SGwydUhFWlJiR014WllUN0NiUzB3c2dRUkpwZFU0MHFyN3lZ?=
 =?utf-8?B?YzlXaDNacnViN3d0TkFRL0RUQllMdkIwOFFobHJySXljY1IxdkJIYjVUWnNp?=
 =?utf-8?B?b1pLM05ldXNzUnBJOVZzZVR0OWJqNXN0Vmh1MGxtSVZwbVdGQ3JXSFl6RFhX?=
 =?utf-8?Q?Ba9RDbU5X5FhBHUBHTDZp0IkH?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5476f786-e6e4-400a-1910-08db35d5b94a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 13:00:25.9426
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wsnxvuPlJ5y0pXxlmL5hrbf0w1PGk5z8hzxFbuqt5uH3GVKJYmmD7BNUaHMmah+qmGP+JCUpx5blrR143FbaUQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6897

The emulator code is large, and involving it in guest operations cannot be
expected to be fast anyway. Help binary size, as well as compile time (for
release builds at least), by building all involved code with size
optimization, independent of the build being a debug or a release one.

The size savings observed in a release build (with AMX and KeyLocker
code in place on top of what's upstream) are above 48k of .text, with
gcc 11.

To keep what is being tested similar to what's in the hypervisor, apply
the override to test and fuzzing harnesses as well (but affecting only
the core emulator files, not any auxiliary ones).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Of course, if we were to gain a Kconfig control for selecting the
optimization level, the override done here may then want to be controllable
that way as well.
---
v3: Also apply to test and fuzzing harnesses.
v2: New.

--- a/tools/fuzz/x86_instruction_emulator/Makefile
+++ b/tools/fuzz/x86_instruction_emulator/Makefile
@@ -39,6 +39,10 @@ x86-emulate.h: x86_emulate/x86_emulate.h
 x86-emulate.o x86-emulate-cov.o: x86-emulate.h x86_emulate/x86_emulate.c x86_emulate/private.h
 fuzz-emul.o fuzz-emul-cov.o wrappers.o: x86-emulate.h
 
+x86-emulate.o x86-emulate-cov.o: CFLAGS += -Os
+$(filter x86_emulate/%.o,$(OBJS)): CFLAGS += -Os
+$(patsubst %.o,%-cov.o,$(filter x86_emulate/%.o,$(OBJS))): CFLAGS += -Os
+
 $(filter x86_emulate/%.o,$(OBJS)): x86_emulate/%.o: x86_emulate/%.c x86_emulate/private.h $(x86_emulate.h)
 	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) -c -o $@ $< $(APPEND_CFLAGS)
 
--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -298,7 +298,7 @@ $(OBJS): %.o: %.c $(x86_emulate.h)
 	$(HOSTCC) $(HOSTCFLAGS) -c -g -o $@ $<
 
 x86-emulate.o: x86_emulate/x86_emulate.c
-x86-emulate.o x86_emulate/%.o: HOSTCFLAGS += -D__XEN_TOOLS__
+x86-emulate.o x86_emulate/%.o: HOSTCFLAGS += -D__XEN_TOOLS__ -Os
 
 # In order for our custom .type assembler directives to reliably land after
 # gcc's, we need to keep it from re-ordering top-level constructs.
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -89,6 +89,7 @@ hostprogs-y += efi/mkreloc
 # Allows usercopy.c to include itself
 $(obj)/usercopy.o: CFLAGS-y += -iquote .
 
+$(obj)/x86_emulate.o: CFLAGS-y += -Os
 ifneq ($(CONFIG_HVM),y)
 $(obj)/x86_emulate.o: CFLAGS-y += -Wno-unused-label
 endif
--- a/xen/arch/x86/x86_emulate/Makefile
+++ b/xen/arch/x86/x86_emulate/Makefile
@@ -1,3 +1,5 @@
+CFLAGS-y += -Os
+
 obj-y += 0f01.o
 obj-y += 0fae.o
 obj-y += 0fc7.o
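
[Editorial sketch, not part of the patch: the hunks above all rely on the same
GNU make idiom, a target-specific variable assignment that appends -Os only
for the named objects while every other object keeps the global flags. A
minimal standalone illustration, with hypothetical file names (core.o,
decode.o, helpers.o):]

```make
# Global flags used for every object by default.
CFLAGS := -O2 -Wall

obj-y := core.o decode.o helpers.o

# Target-specific override: only these objects are additionally built
# with -Os; helpers.o still gets plain -O2.
core.o decode.o: CFLAGS += -Os

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<
```

[The same shape appears in the hunks above, e.g. the fuzzing harness's
`$(filter x86_emulate/%.o,$(OBJS)): CFLAGS += -Os`, which computes the list
of affected targets instead of naming them literally.]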


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 13:04:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 13:04:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518524.805171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2oC-0007ST-CP; Wed, 05 Apr 2023 13:04:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518524.805171; Wed, 05 Apr 2023 13:04:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk2oC-0007SM-8D; Wed, 05 Apr 2023 13:04:16 +0000
Received: by outflank-mailman (input) for mailman id 518524;
 Wed, 05 Apr 2023 13:04:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gw2r=74=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pk2oA-0007SC-Dp
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 13:04:14 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0611.outbound.protection.outlook.com
 [2a01:111:f400:fe02::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5c6942ab-d3b2-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 15:04:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB9099.eurprd04.prod.outlook.com (2603:10a6:10:2f2::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 13:04:09 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 13:04:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c6942ab-d3b2-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T0K2nSDTWCPjyRe5K/YcvofxJxRW4cvVZ3q/yrRSUbbuJuXEfE3+mgtoKr7xphQ4IuYRsVrjj8wj4GpFDMupmwormcIdT4JYXFmLxya7791b3Q/x1stiIXnzOPjDGxmEDWJ5jP3OpJYMP/xH1Ab4xU7ymFeJma0osNeopNoPIjX7aNpWxTE6ZEpKsSQbKjuFChLxKrrle0ERe86Pr40TlPKubM1dhHPw4x/sa4mcYCmFKWsSor7IvgmJx4Ya98h0LYax/tQEdlnFzCIA5BmI8qCp601Y7b9bMeGOXZUOqBBWCHlKiCaVZ3M9jg2k7GkTAFNFIesGljSAJm3RqP8yVA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TuJFnVFq4lYzErg+AcntKHgzOzemz92LMQw4cTfU8u4=;
 b=OKeQPRmFiUELpLtmXyUeRceSqAtltirsMpBjlppkFKlunUlD5gb8/Tqn+DoCEhnVgauyh0yQsT3nmuRsBDGX8malFYbi9cij0yvc1N5Ssp7UMN7hh7GexRSwIf1QZvdE02Wkjgn9vsvcviQw08Hm5VOQfYb2fXf3HAfbMQPEonWW6PYWueyQpI60UP71iC81XB5zhDtUAalVBfHwI8aH2FHNwooIzw6QO6d9ZTZ9eYEWb7csdWPn5aM478sR5vaeFUmTpXG/V8d1uO136h+xL4l3o43pyEdRWB9SEWX76HRarq4h4JJw0lfdH/cnJerAk51+QcRjMtXwj6/0AM0F6w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TuJFnVFq4lYzErg+AcntKHgzOzemz92LMQw4cTfU8u4=;
 b=sc8vbdxCa+AlaKCvvmuaMFLJX1wDgkjEE4BS8c9AJRO8uh+Y7PlViptmWIZTLYizQlDUk5C05c3GDYc6xPNY/qDnz1kUTwXejtA0dyMmjaGCEIt4VldgIxy7zo6vDB9DdYaC1d8gH33grcUI8zkj2+PcbbbLtpjMazZuj2r/IoFOuGI3CRgK8WjJ0DVxJWIUcSJ/I7abhezCz6W7WIbT1/e+eVglTx/astwF9wtMFURIjBFxPT5yFVwnB6FdP7Kunrv+bSbmpmRIr0AdyWuWRiuKSUioEgwPm6ac+Ld/wwsQMWFauWl9SoeUSVO4fCH23KP+YqFl1qFKaIz34b5rbg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1fceb2af-cf5e-9e70-59d5-24cbad3cb81f@suse.com>
Date: Wed, 5 Apr 2023 15:04:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 2/2] xen: update CONFIG_DEBUG_INFO help text
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230403162823.30681-1-jgross@suse.com>
 <20230403162823.30681-3-jgross@suse.com>
 <358e9788-b930-5c51-1e89-232be43f83e5@suse.com>
 <563ff69f-0e9d-c90a-d18c-b3c351575716@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <563ff69f-0e9d-c90a-d18c-b3c351575716@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0156.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB9099:EE_
X-MS-Office365-Filtering-Correlation-Id: 179879b0-b9c3-419e-90d5-08db35d63e3c
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hKfKDVMU20wogdw0QJfXRjMlvULPEWaEZ3neYzqk3CMTs+JF/wAlG5dYdVoSV//iGOkFlJTCYzgNXe09ig7/qTxDXb05nE+APOT/Obr6vrEewgkAD9eh4vCsNz+uFAIY3pz2I6/UusyZ+GS8oB6jnAUS7rDluudsXrjZFQHnDAppgb0Nz4FcrZjG0uMQZ1Q6FCcve9Kfe6b1T4QuVlcRb4Pd/vshuumwJThQkxyWa54JmRDAtW0JXCp934zBaPHzo1T42TUEhJyzYuxHno5paINWTldIJ4Tz6QCBSOI8mC1AQ6/EB1c5HnPR6oOyjs8M2IFamXZC/M2RDyErgYBmxmHxcQ1VnadSW6AcFojf/YIwDchVwXMyJToBfCkYFT0yyjpiXie2G4pjZM43+yICSoqFFtnA7+zTJUQ95Ji6Gr2SsuG/nYkgXGT25UFULUxEJ8WPbsYCBnjqVX9bo0frARcAWScnwdrMxPekdaSS7Hx6Ncaq8IBJAFFeKQMgVe5qz195be3CkOK5+Xuov4wI39fB0guvNhYuVx8piPJ+OZctPvXG/68VOk+9zq8t827/j3N6Xbfyf6kYp8aCbD+KzditrgNRR880jzXDHMZv3j9JbxO8HU+KkzRKKmiSD5Og0cpJUuWGMPqSXRq3+6AQeQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(376002)(39860400002)(396003)(136003)(346002)(451199021)(6486002)(83380400001)(36756003)(2616005)(31696002)(86362001)(38100700002)(53546011)(6512007)(6506007)(26005)(186003)(2906002)(54906003)(6636002)(37006003)(316002)(5660300002)(31686004)(8936002)(41300700001)(66946007)(6862004)(66476007)(8676002)(4326008)(66556008)(478600001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RzZyT0M0QWJiank4NTM5akErcjRiendCTjN2QXdEZ1RqeW10Q2lUYVhobVRH?=
 =?utf-8?B?ajlFMlp3a2lpS2FCd0JvSnBtTzVhQ214YytrNVZmWGVtWFZCQ0NHRGRnNzRx?=
 =?utf-8?B?N2Fnd0J3MW9NTlAyaUVlVzRZbEQreHhNNVlCalN3RE5XeVZ4aEhZU2VSQ2tB?=
 =?utf-8?B?MHpyWFY4andJbk1GR2U5QlBBSE54cHg2UzR5T25OZzRselpXMnlHN2YxVENO?=
 =?utf-8?B?NDhqd0xoYUdqaWJ2bitGYlBtYlBYTWgzOEpZSUJVdWZFU2hRWnpGUWl4TnQr?=
 =?utf-8?B?bHY3NFZHcG5iOHhaT1BlSkZXNG9DODh4YmpzSGl5dmtMenZlazVNdFdxSjky?=
 =?utf-8?B?QjZXZGhMT3B5Vjd4VkwxeGhmWVo4RHd4WFg1MDZHZjErNmxMQWtYZk5lb3Mx?=
 =?utf-8?B?ZmNvM3ZUZzFWMVVBM05palpUVVhmZHgvdGdwcit2VXptMXozbWpUZVd2ZGVD?=
 =?utf-8?B?THZSNjBHSUFqakxxRFZKdGVuQTdXQWo1RkVtQ3FLaTk2WHRkZm1WQ1ZpZUFr?=
 =?utf-8?B?UWt2dzFyRG4xa3NvemJQVTlza0RFUG5DVFBtQjhPNnBFSnVRQmpVV05LRmho?=
 =?utf-8?B?ZHgxQXlhL0NJYTA5NFhLZjdPOU1yV1lpY0Z6M25xY2ttc3BPZHFTN1U4MXZh?=
 =?utf-8?B?RjNoTms4NlRxa0g1U2lDZmtBa0ZUM2E1K3lPQUZteFFzeUpRS2Zrd0poWC9l?=
 =?utf-8?B?elF5NDRZclg5NnN1WVdvblB1VS9DL2o0OXl6a2hyZk1GVTFuRjdjQisyZnNI?=
 =?utf-8?B?RUhHdFhoWTVRdmFOdWxLK0xrbUlDVlh3THJtMVVuejFiTUZlYnBxT1lUS1g4?=
 =?utf-8?B?RXJuRk5GZzd6N1daT053UTBReTBRYTFkTmpFajRQa0Y0cXN4TTFsNG5IN1Zj?=
 =?utf-8?B?ZGFKWGppMFRMdzNlRFNXOUM0UWtMeXZ6SnNaTzFnRkEySWtaY0htVlJPTmRm?=
 =?utf-8?B?aU9yRHRLL3Ixc2FiL000NFVqVml4NzBjaFNjMUVCbGowZTRsT2VLL1BHR25u?=
 =?utf-8?B?L1VISVZGV3RXQ0plK3AzekpyRDRTeStWUmFXdUQwOVZGdXJFVTZPZUJ1R2xu?=
 =?utf-8?B?U003ZG1BaVdxRzVZSmI1TlNsclpiZXQ4amw3ei9xb3VuNTQ0RmowbURoZTZU?=
 =?utf-8?B?dVpFZElIbXoycW84VE9rZGQ4cHh2UUFTbnhpQ2RSZXJTZDMwS1VwcG9aNDNT?=
 =?utf-8?B?WHFwSmxybXk1bVJtWld6NnVQUjFWWGM1bmU5a09qbjh0NjF3bXg4QnB0T2Fn?=
 =?utf-8?B?ckRKUnkrYUhtS1dlQmlOOER1aDJxR0FxKzdKeklxemhZN3c2aE5UbVdPbFlD?=
 =?utf-8?B?M1pjOUNJNnhHT2FLK0JWQ3hmWi9QQmpwMmVnWVc4UWhkdjM4ZXNxNCs3VVVl?=
 =?utf-8?B?THk2SVVKeFltQ3h6S1dTdmNSWEpTM2hIdXlacmdNYzBmTEZqTWU4YVFJVkVy?=
 =?utf-8?B?czJmSkx1bzlmQ2JXeVphL1J2VXU5T2VHWHhqVzV3MWFmZ2phRWE0NDNraVRt?=
 =?utf-8?B?ejA2QWtWZFUrRFhYVW0yNlhSRy9HeTZSUFY0QzJtT2pHN1NZR2hkUDJhaUUz?=
 =?utf-8?B?MW9jODA2U2JOMUVLU2doV2NUL0pxOFoxTjVZSGNxUExKTU1UQmtYejlITVJR?=
 =?utf-8?B?aE9DOUNWWXJwWCtNejg0bWRYd1d6N2tOYlowSGtVY2MxY2VSR2UvVXEzUlhL?=
 =?utf-8?B?azBmTVNZSlRvT05TWFJvRmFHVndCTXNoN0VHYTQ2THFVT2EwRkFQQnZXVU16?=
 =?utf-8?B?b2ViME94Vzc5RnVmQmYyQ2Nza1ZvcGpiSlhxOStRTFd5SHlCTnYxakc4OHBX?=
 =?utf-8?B?Q2FDbDFBdVdqYnh5UWpCam1FenkraXR5RENwNGZKQm1SR0ZFVVBselBVQlJs?=
 =?utf-8?B?TnlXS0JTYjZsVnExMXlLblpUbXhmS0VhenhuQWtOR0FkQkI3SHJNK09qdkNG?=
 =?utf-8?B?R054dGxRREppWW9pQXlKNVRBNSsvdEpQcXVNTFN4eU8yeVE1eE1JZWQvczhI?=
 =?utf-8?B?N04zbDRtSkJZclV0dEk3WGczRzRhS1o5YVE0djJmdHlTcHhVNTNuVUxDS3Bp?=
 =?utf-8?B?bzdFL1JYRTNCeG1wSlRpV3l2YThXVzZFWktKYTZjU1hldUJrQ2NPSUJ6QWtB?=
 =?utf-8?Q?gtPIRz8eZ95Qo7IWIQSZvXlNq?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 179879b0-b9c3-419e-90d5-08db35d63e3c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 13:04:08.8407
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WVMpSXEJ9nO/8PvduEgP3HQOQP6p38FPncw9nweg8QY7ZQtsp7H+BlrYvKY2ilVsiVjOGUXAMEifXsQCzPcMeQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB9099

On 05.04.2023 13:24, Juergen Gross wrote:
> On 04.04.23 11:09, Jan Beulich wrote:
>> On 03.04.2023 18:28, Juergen Gross wrote:
>>> --- a/xen/Kconfig.debug
>>> +++ b/xen/Kconfig.debug
>>> @@ -15,8 +15,14 @@ config DEBUG_INFO
>>>   	bool "Compile Xen with debug info"
>>>   	default DEBUG
>>>   	help
>>> -	  If you say Y here the resulting Xen will include debugging info
>>> -	  resulting in a larger binary image.
>>> +	  Say Y here if you want to build Xen with debug information. This
>>> +	  information is needed e.g. for doing crash dump analysis of the
>>> +	  hypervisor via the "crash" tool.
>>> +	  Saying Y will increase the size of the xen-syms and xen.efi
>>> +	  binaries. In case the space on the EFI boot partition is rather
>>> +	  limited, you may want to make use of the INSTALL_EFI_STRIP make
>>> +	  variable when building the hypervisor, in order to strip xen.efi
>>> +	  before installing it to the EFI partition.
>>
>> Hmm, INSTALL_EFI_STRIP is only a courtesy to developers wanting to install
>> xen.efi directly into the EFI partition. It wouldn't affect the normal
>> flow, and hence I think this wants expressing here such that both kinds of
>> people have at least a hint what they need to do. I.e. in the normal case
>> they'd need to adjust the way xen.efi is "propagated" from its installed
>> location onto the EFI partition, to do the desired stripping at that time.
> 
> Would you be fine with:
> 
>    In case the space on the EFI boot partition is rather
>    limited, you may want to install a stripped variant of xen.efi in
>    the EFI boot partition (look for "INSTALL_EFI_STRIP" in
>    docs/misc/efi.pandoc for more information - when not using
>    "make install-xen" for installing xen.efi, stripping needs to be
>    done outside the Xen build environment).

SGTM, thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 13:15:07 2023
Message-ID: <c22b42fc-9dcb-72cf-ad6c-fa4311502465@suse.com>
Date: Wed, 5 Apr 2023 15:14:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 1/2] xen: move CONFIG_DEBUG_INFO out of EXPERT section
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230403162823.30681-1-jgross@suse.com>
 <20230403162823.30681-2-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230403162823.30681-2-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 03.04.2023 18:28, Juergen Gross wrote:
> In order to support hypervisor analysis of crash dumps, xen-syms needs
> to contain debug_info. It should be allowed to configure the hypervisor
> to be built with CONFIG_DEBUG_INFO in non-debug builds without having
> to enable EXPERT.
> 
> Using a rather oldish gcc (7.5) it was verified that code generation
> doesn't really differ between CONFIG_DEBUG_INFO on or off without
> CONFIG_DEBUG being set (only observed differences were slightly
> different symbol addresses, verified via "objdump -d", resulting from
> the different config.gz in the binary). The old gcc version selection
> was based on the assumption, that newer gcc won't regress in this
> regard.
> 
> So move CONFIG_DEBUG_INFO out of the section guarded by EXPERT.
> 
> It should be mentioned that there have been reports that the linking
> of the xen.efi might take considerably longer with CONFIG_DEBUG_INFO
> selected when using newer binutils.

Thinking of it: Because of the need to deal with older binutils, we
already force --strip-debug as a linking option for xen.efi in
certain cases. Perhaps we could make another (x86-only) Kconfig
control which allows to force this mode even with recent binutils?
If so, would you be willing to include this right here, or should I
take care of this afterwards (or maybe even in parallel)?
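
[A sketch of the kind of Kconfig control being proposed; the option name, prompt, and help text below are invented for illustration and are not taken from the Xen tree:]

```kconfig
# Hypothetical sketch only - the option name and help text are made up
# here to illustrate the idea, not an existing Xen option.
config EFI_FORCE_STRIP_DEBUG
	bool "Always strip debug info when linking xen.efi"
	depends on X86
	help
	  Pass --strip-debug to the linker when producing xen.efi even
	  with recent binutils, mirroring what is already forced when
	  older binutils are in use.
```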

> --- a/xen/Kconfig.debug
> +++ b/xen/Kconfig.debug
> @@ -11,6 +11,13 @@ config DEBUG
>  
>  	  You probably want to say 'N' here.
>  
> +config DEBUG_INFO
> +	bool "Compile Xen with debug info"
> +	default DEBUG
> +	help
> +	  If you say Y here the resulting Xen will include debugging info
> +	  resulting in a larger binary image.
> +
>  if DEBUG || EXPERT

Just to repeat my v1 comment (to which your response was "Fine with me"):

The new placement isn't very helpful when considering some of the ways
kconfig data is presented. At least for the non-graphical presentation
it used to be the case that hierarchies were presented properly only
when dependencies immediately followed their dependents (i.e. here:
DEBUG is a dependent of everything inside the "if" above). Therefore I
think rather than moving the block up you may better move it down past
the "endif".

Jan
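
[Schematically, the placement being suggested would look as follows; the surrounding entries of xen/Kconfig.debug are abbreviated, so this is a sketch rather than the actual file:]

```kconfig
config DEBUG
	bool "Developer Checks"

if DEBUG || EXPERT
# ... options only meaningful for debug/expert builds ...
endif

# DEBUG_INFO placed after the "endif", so the DEBUG hierarchy above stays
# contiguous in non-graphical kconfig frontends.
config DEBUG_INFO
	bool "Compile Xen with debug info"
	default DEBUG
```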


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 13:44:59 2023
Message-ID: <718c7a92-b3ca-8fb3-e698-5e01464258c3@suse.com>
Date: Wed, 5 Apr 2023 15:44:40 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230403162823.30681-1-jgross@suse.com>
 <20230403162823.30681-2-jgross@suse.com>
 <c22b42fc-9dcb-72cf-ad6c-fa4311502465@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 1/2] xen: move CONFIG_DEBUG_INFO out of EXPERT section
In-Reply-To: <c22b42fc-9dcb-72cf-ad6c-fa4311502465@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------o00O0sD67mTCNvhlEJMiCJzA"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------o00O0sD67mTCNvhlEJMiCJzA
Content-Type: multipart/mixed; boundary="------------U1KZpPbs6bUVrEt0X0kQ5wrL";
 protected-headers="v1"

--------------U1KZpPbs6bUVrEt0X0kQ5wrL
Content-Type: multipart/mixed; boundary="------------ZU19NvmbvRm5zir8Zt0wfHpu"

--------------ZU19NvmbvRm5zir8Zt0wfHpu
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 05.04.23 15:14, Jan Beulich wrote:
> On 03.04.2023 18:28, Juergen Gross wrote:
>> In order to support hypervisor analysis of crash dumps, xen-syms needs
>> to contain debug_info. It should be allowed to configure the hypervisor
>> to be built with CONFIG_DEBUG_INFO in non-debug builds without having
>> to enable EXPERT.
>>
>> Using a rather oldish gcc (7.5) it was verified that code generation
>> doesn't really differ between CONFIG_DEBUG_INFO on or off without
>> CONFIG_DEBUG being set (only observed differences were slightly
>> different symbol addresses, verified via "objdump -d", resulting from
>> the different config.gz in the binary). The old gcc version selection
>> was based on the assumption, that newer gcc won't regress in this
>> regard.
>>
>> So move CONFIG_DEBUG_INFO out of the section guarded by EXPERT.
>>
>> It should be mentioned that there have been reports that the linking
>> of the xen.efi might take considerably longer with CONFIG_DEBUG_INFO
>> selected when using newer binutils.
>
> Thinking of it: Because of the need to deal with older binutils, we
> already force --strip-debug as a linking option for xen.efi in
> certain cases. Perhaps we could make another (x86-only) Kconfig
> control which allows to force this mode even with recent binutils?
> If so, would you be willing to include this right here, or should I
> take care of this afterwards (or maybe even in parallel)?

I'm planning to do the EFI side in the next days, hoping that I can setup
the test system.

I'd include that additional config option in the probably needed patch(es)
then (crash isn't happy with xen.efi anyway, so I'll need some kind of
xen-syms.efi or whatever you want to call it).

>
>> --- a/xen/Kconfig.debug
>> +++ b/xen/Kconfig.debug
>> @@ -11,6 +11,13 @@ config DEBUG
>>
>> 	  You probably want to say 'N' here.
>>
>> +config DEBUG_INFO
>> +	bool "Compile Xen with debug info"
>> +	default DEBUG
>> +	help
>> +	  If you say Y here the resulting Xen will include debugging info
>> +	  resulting in a larger binary image.
>> +
>>  if DEBUG || EXPERT
>
> Just to repeat my v1 comment (to which your response was "Fine with me"):
>
> The new placement isn't very helpful when considering some of the ways
> kconfig data is presented. At least for the non-graphical presentation
> it used to be the case that hierarchies were presented properly only
> when dependencies immediately followed their dependents (i.e. here:
> DEBUG is a dependent of everything inside the "if" above). Therefore I
> think rather than moving the block up you may better move it down past
> the "endif".

Oh, I seem to have missed that last paragraph when going through the thread.

Will correct it.


Juergen
--------------ZU19NvmbvRm5zir8Zt0wfHpu
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key

[OpenPGP public key block omitted]

--------------ZU19NvmbvRm5zir8Zt0wfHpu--

--------------U1KZpPbs6bUVrEt0X0kQ5wrL--

--------------o00O0sD67mTCNvhlEJMiCJzA
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

[OpenPGP signature block omitted]

--------------o00O0sD67mTCNvhlEJMiCJzA--


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 13:56:51 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 0/2] xen: some CONFIG_DEBUG_INFO changes
Date: Wed,  5 Apr 2023 15:56:27 +0200
Message-Id: <20230405135629.21829-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Enabling crash dump analysis of the hypervisor requires the hypervisor
having been built with CONFIG_DEBUG_INFO enabled. Today this requires
either CONFIG_DEBUG or CONFIG_EXPERT to be set, neither of which is
security supported.

This small series changes that in order to allow security supported
Xen builds with the capability to do crash dump analysis via the
"crash" tool.

Note that, due to problems with test machines, proper support for
EFI-booted systems hasn't been verified, so this will likely need some
more work.

Changes in V2:
- comments addressed

Changes in V3:
- comments addressed

Juergen Gross (2):
  xen: move CONFIG_DEBUG_INFO out of EXPERT section
  xen: update CONFIG_DEBUG_INFO help text

 xen/Kconfig.debug | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 13:56:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 13:56:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518540.805214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk3cu-00082a-Tw; Wed, 05 Apr 2023 13:56:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518540.805214; Wed, 05 Apr 2023 13:56:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk3cu-00082P-Qq; Wed, 05 Apr 2023 13:56:40 +0000
Received: by outflank-mailman (input) for mailman id 518540;
 Wed, 05 Apr 2023 13:56:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pk3cs-0007mI-TH
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 13:56:38 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aebb4c0b-d3b9-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 15:56:37 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 2780F20634;
 Wed,  5 Apr 2023 13:56:37 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E0A4213A31;
 Wed,  5 Apr 2023 13:56:36 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id cCeONRR+LWTSeAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 13:56:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aebb4c0b-d3b9-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680702997; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QJpVodEY5mLUPTwqhB6PQxZ33ssAbKfS1kQ6CEUaSKQ=;
	b=ICMD0WG228KSxZpnbPT9WhTr77YIy+K2/4tIt2/1rxBAm33MSOIT5lZcnBXbdPAo165omS
	tibC4ydjOIu/4lEN2EmBEmqKafzYNwZ7irgtWCTYjikvA7EkRQlcS64l+jI9hR2k5+WNdS
	Xb7SGc9jYF1y4cFySy5JbUiOwPpTPmA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 1/2] xen: move CONFIG_DEBUG_INFO out of EXPERT section
Date: Wed,  5 Apr 2023 15:56:28 +0200
Message-Id: <20230405135629.21829-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405135629.21829-1-jgross@suse.com>
References: <20230405135629.21829-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To support crash dump analysis of the hypervisor, xen-syms needs to
contain debug info. It should be possible to configure the hypervisor
to be built with CONFIG_DEBUG_INFO in non-debug builds without having
to enable EXPERT.

Using a rather old gcc (7.5) it was verified that code generation
doesn't differ between CONFIG_DEBUG_INFO being on or off while
CONFIG_DEBUG is not set (the only observed differences were slightly
different symbol addresses, verified via "objdump -d", resulting from
the different config.gz embedded in the binary). The old gcc version
was chosen based on the assumption that newer gcc won't regress in this
regard.

So move CONFIG_DEBUG_INFO out of the section guarded by EXPERT.

It should be mentioned that there have been reports of linking xen.efi
taking considerably longer with CONFIG_DEBUG_INFO selected when using
newer binutils.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- expanded commit message (Jan Beulich)
V3:
- move DEBUG_INFO block to the end of the file (Jan Beulich)
---
 xen/Kconfig.debug | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/Kconfig.debug b/xen/Kconfig.debug
index fad3050d4f..279dbe8274 100644
--- a/xen/Kconfig.debug
+++ b/xen/Kconfig.debug
@@ -28,13 +28,6 @@ config GDBSX
 	  If you want to enable support for debugging guests from dom0 via
 	  gdbsx then say Y.
 
-config DEBUG_INFO
-	bool "Compile Xen with debug info"
-	default y
-	---help---
-	  If you say Y here the resulting Xen will include debugging info
-	  resulting in a larger binary image.
-
 config FRAME_POINTER
 	bool "Compile Xen with frame pointers"
 	default DEBUG
@@ -132,4 +125,11 @@ source "arch/$(SRCARCH)/Kconfig.debug"
 
 endif # DEBUG || EXPERT
 
+config DEBUG_INFO
+	bool "Compile Xen with debug info"
+	default DEBUG
+	help
+	  If you say Y here the resulting Xen will include debugging info
+	  resulting in a larger binary image.
+
 endmenu
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 13:56:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 13:56:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518541.805224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk3cz-0008KX-86; Wed, 05 Apr 2023 13:56:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518541.805224; Wed, 05 Apr 2023 13:56:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk3cz-0008KQ-4o; Wed, 05 Apr 2023 13:56:45 +0000
Received: by outflank-mailman (input) for mailman id 518541;
 Wed, 05 Apr 2023 13:56:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wAG=74=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pk3cy-0007mI-Kd
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 13:56:44 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b21a5398-d3b9-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 15:56:43 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C81BD20634;
 Wed,  5 Apr 2023 13:56:42 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 8864513A31;
 Wed,  5 Apr 2023 13:56:42 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id aZsHIBp+LWTeeAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 05 Apr 2023 13:56:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b21a5398-d3b9-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1680703002; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=h7E8xLpIHHyBj6gOvbHVGHsCDEUl39NE+WedwK4I/34=;
	b=MxDaNNi+YFuqUHP7aB796xrZ4HmTYr/kmjo7qmjvN1Tpga2ne9dHvL83hwTuBHGVdDsq/w
	OEPIkmzi/YgnHxOG9TOPMcAeoa923Ia58r3Y3uMV0P71qbeBjCsURWOcMwHXTz9XNH3A8X
	xW0xz7fVyw2rQyr01HbJ/MM/MfsnOFI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 2/2] xen: update CONFIG_DEBUG_INFO help text
Date: Wed,  5 Apr 2023 15:56:29 +0200
Message-Id: <20230405135629.21829-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230405135629.21829-1-jgross@suse.com>
References: <20230405135629.21829-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update the help text of the CONFIG_DEBUG_INFO option to be a little
bit more specific.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- expand help text, especially mentioning INSTALL_EFI_STRIP
  (Jan Beulich)
V3:
- expand help text even more (Jan Beulich)
---
 xen/Kconfig.debug | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/Kconfig.debug b/xen/Kconfig.debug
index 279dbe8274..94e818ee09 100644
--- a/xen/Kconfig.debug
+++ b/xen/Kconfig.debug
@@ -129,7 +129,15 @@ config DEBUG_INFO
 	bool "Compile Xen with debug info"
 	default DEBUG
 	help
-	  If you say Y here the resulting Xen will include debugging info
-	  resulting in a larger binary image.
+	  Say Y here if you want to build Xen with debug information. This
+	  information is needed e.g. for doing crash dump analysis of the
+	  hypervisor via the "crash" tool.
+	  Saying Y will increase the size of the xen-syms and xen.efi
+	  binaries. In case the space on the EFI boot partition is rather
+	  limited, you may want to install a stripped variant of xen.efi in
+	  the EFI boot partition (look for "INSTALL_EFI_STRIP" in
+	  docs/misc/efi.pandoc for more information - when not using
+	  "make install-xen" for installing xen.efi, stripping needs to be
+	  done outside the Xen build environment).
 
 endmenu
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 13:58:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 13:58:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518552.805234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk3e2-00012P-KG; Wed, 05 Apr 2023 13:57:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518552.805234; Wed, 05 Apr 2023 13:57:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk3e2-00012I-HI; Wed, 05 Apr 2023 13:57:50 +0000
Received: by outflank-mailman (input) for mailman id 518552;
 Wed, 05 Apr 2023 13:57:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kv/O=74=linaro.org=alex.bennee@srs-se1.protection.inumbo.net>)
 id 1pk3e1-0000PU-4N
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 13:57:49 +0000
Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com
 [2a00:1450:4864:20::334])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d8f46431-d3b9-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 15:57:48 +0200 (CEST)
Received: by mail-wm1-x334.google.com with SMTP id
 l15-20020a05600c4f0f00b003ef6d684102so18489378wmq.3
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 06:57:48 -0700 (PDT)
Received: from zen.linaroharston ([85.9.250.243])
 by smtp.gmail.com with ESMTPSA id
 t12-20020a7bc3cc000000b003ee42696acesm2281079wmj.16.2023.04.05.06.57.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 05 Apr 2023 06:57:47 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id E0D531FFB7;
 Wed,  5 Apr 2023 14:57:46 +0100 (BST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8f46431-d3b9-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680703067;
        h=content-transfer-encoding:mime-version:message-id:in-reply-to:date
         :subject:cc:to:from:user-agent:references:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7NetuLM1YNy+KIDJo2AuRwk1MQSAg78qWWaqODIp3uM=;
        b=STRahmuuUI8KQBpKkK6/PybRdBaAfHp1W+GWkva9949DvsniQ47A/YBs4/K6otGmIQ
         A7oHq5QZml8oDY9OkaJYXNIDicXknAlahrTy0t885QBs0Lt5BGqY/X1A5hzP+3cG4jyG
         hqExCPaaG4YfjYDF2Lvk4KZCMdUASYLq+3mofYtWUXbekaC+KANqj/gZ2+SaWy51z2Gx
         IK4y7wRVW/r+5NSnoD08WImnjxGuzAjZRzafDn3YXT4lhfyMCg0hk6YePHsX9rGEZ0yO
         5bbeqBjbAXiB4zUJZFVItGQ8ztZK7Dl/a/IVEwL731YIWMclVzlyNQOD4tFezaYuIqhf
         O8lg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680703067;
        h=content-transfer-encoding:mime-version:message-id:in-reply-to:date
         :subject:cc:to:from:user-agent:references:x-gm-message-state:from:to
         :cc:subject:date:message-id:reply-to;
        bh=7NetuLM1YNy+KIDJo2AuRwk1MQSAg78qWWaqODIp3uM=;
        b=1zijq7juwmx3MMCqfyV/5sETlfwybfychoJRxCHE3bQNaUuJac3qIvr1hUIuV4bNN4
         luYyvIX76KaC6Fb2IJ49yOOh1lStzs6nOZxOngo6vZD4Z/f/XvfF6v5LgciIwISdkjLj
         X3Z06k8ZDLBkzHWCG++lXAiWQ62Ul78CS+S+uCQXSfgOuSMT2BaXU4XPT5Y4Kl97ErG7
         nEJsVrkP8Kd5tVu8+RKIqiTsMuYt/vX4njlS2oSzSfynbCX1VlR6UeChanxex0RLF2H0
         8kyq1/637PdJZCLbMxsMofZaHhVz+Sbw3M6DLh6k1Pvxml8rczCRdL/nQHNzAOSSxx/B
         Uesw==
X-Gm-Message-State: AAQBX9flo6vaYCRLpeTPih80o29MQvwCCIUlOSGnmnX/DDPY2lJMpSwO
	mM2hNQljH9D05932O7RgvwHB+Q==
X-Google-Smtp-Source: AKy350amvRA+LsVe3Qrc1qOw8K7TS5SLWtfzWAgqhDokCIDVgX46fW59a21dzTrAyh+jdrLxrPgnHA==
X-Received: by 2002:a1c:e916:0:b0:3f0:4f83:22f4 with SMTP id q22-20020a1ce916000000b003f04f8322f4mr4568690wmc.20.1680703067664;
        Wed, 05 Apr 2023 06:57:47 -0700 (PDT)
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-10-philmd@linaro.org>
User-agent: mu4e 1.10.0; emacs 29.0.60
From: Alex Bennée <alex.bennee@linaro.org>
To: Philippe Mathieu-Daudé <philmd@linaro.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Reinoud Zandijk
 <reinoud@netbsd.org>
Subject: Re: [PATCH 09/14] accel: Allocate NVMM vCPU using g_try_FOO()
Date: Wed, 05 Apr 2023 14:55:41 +0100
In-reply-to: <20230405101811.76663-10-philmd@linaro.org>
Message-ID: <874jpul9d1.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit


Philippe Mathieu-Daudé <philmd@linaro.org> writes:

> g_malloc0() can not fail. Use g_try_malloc0() instead.
>
> https://developer-old.gnome.org/glib/stable/glib-Memory-Allocation.html#glib-Memory-Allocation.description
>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
>  target/i386/nvmm/nvmm-all.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
> index 3c7bdd560f..45fd318d23 100644
> --- a/target/i386/nvmm/nvmm-all.c
> +++ b/target/i386/nvmm/nvmm-all.c
> @@ -942,7 +942,7 @@ nvmm_init_vcpu(CPUState *cpu)
>          }
>      }
>
> -    qcpu = g_malloc0(sizeof(*qcpu));
> +    qcpu = g_try_malloc0(sizeof(*qcpu));
>      if (qcpu == NULL) {
>          error_report("NVMM: Failed to allocate VCPU context.");
>          return -ENOMEM;

Why? If we fail to allocate the vCPU context it's game over anyway, and
established QEMU practice is that it's OK to assert-fail on a malloc
when there isn't enough memory. IOW keep the g_malloc0 and remove the
error handling case.

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 14:37:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 14:37:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518556.805243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk4GK-0006mE-JP; Wed, 05 Apr 2023 14:37:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518556.805243; Wed, 05 Apr 2023 14:37:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk4GK-0006m7-Go; Wed, 05 Apr 2023 14:37:24 +0000
Received: by outflank-mailman (input) for mailman id 518556;
 Wed, 05 Apr 2023 14:37:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk4GI-0006lw-MO; Wed, 05 Apr 2023 14:37:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk4GI-0003mS-J9; Wed, 05 Apr 2023 14:37:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk4GI-0001fk-2w; Wed, 05 Apr 2023 14:37:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pk4GI-0004kj-2N; Wed, 05 Apr 2023 14:37:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wumjcVPGMrsKrTvvi/dv6/gQIBDMtg0YVfVW/SQTiLM=; b=izHWXJdvvNdn02gih1RRf8se6i
	DQW+c97hr1fgy6vo2TfOX+ZDz/YJOR8W3Qm0MDcFv3gWdfwQ5ht4GhRvZ67gcKJGtSQRnelkhe7B/
	1n6pcqgzCLAKLLYYEAPMZFTbsYyJD4BOCO8E4wuG0fMlYxLAN3krtDnTe/G/kL91q6oc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180145-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180145: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=76f598ba7d8e2bfb4855b5298caedd5af0c374a8
X-Osstest-Versions-That:
    linux=148341f0a2f53b5e8808d093333d85170586a15d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 14:37:22 +0000

flight 180145 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180145/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180135
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180135
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180135
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180135
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180135
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                76f598ba7d8e2bfb4855b5298caedd5af0c374a8
baseline version:
 linux                148341f0a2f53b5e8808d093333d85170586a15d

Last test of basis   180135  2023-04-04 10:20:26 Z    1 days
Testing same since   180145  2023-04-05 00:11:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Anup Patel <anup@brainfault.org>
  Chuck Lever <chuck.lever@oracle.com>
  Dai Ngo <dai.ngo@oracle.com>
  Dmytro Maluka <dmy@semihalf.com>
  Janosch Frank <frankja@linux.ibm.com>
  Jeff Layton <jlayton@kernel.org>
  Jeremi Piotrowski <jpiotrowski@linux.microsoft.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Nicholas Piggin <npiggin@gmail.com>
  Nico Boehr <nrb@linux.ibm.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Sean Christopherson <seanjc@google.com>
  Takahiro Itazuri <itazur@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   148341f0a2f5..76f598ba7d8e  76f598ba7d8e2bfb4855b5298caedd5af0c374a8 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 15:12:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 15:12:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518564.805254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk4nj-0003Xl-8b; Wed, 05 Apr 2023 15:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518564.805254; Wed, 05 Apr 2023 15:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk4nj-0003Xe-5v; Wed, 05 Apr 2023 15:11:55 +0000
Received: by outflank-mailman (input) for mailman id 518564;
 Wed, 05 Apr 2023 15:11:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk4nh-0003XQ-Du; Wed, 05 Apr 2023 15:11:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk4nh-00056d-C3; Wed, 05 Apr 2023 15:11:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pk4nh-00040W-03; Wed, 05 Apr 2023 15:11:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pk4ng-0003dd-Vv; Wed, 05 Apr 2023 15:11:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Zq/5szviEDWigi1ZG16ruPTO/mrXRW9mMAb8Jijhpnk=; b=yE8Y8w6dM/KwVmM0EKGKsgKK3y
	kamLDkTyU3rZmiXHzOokcx7Z46w7CWeXr4H0dsAGWFhx6XoLUbFnmblHIj/ITCLVfxtXLt0KMrN8b
	51noCRWuG79wNRMwKd4WtSjBVrJb6JUl6wvSL2pllr5YmxW6KmP/dkXtQml+1IbWY/Z8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180155-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180155: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=48d76e6da92f9ef76c8468e299349a2f698362fa
X-Osstest-Versions-That:
    xen=415f7d9404171cbc968b1ea22e7d3523ac2f3fc1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 15:11:52 +0000

flight 180155 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180155/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  48d76e6da92f9ef76c8468e299349a2f698362fa
baseline version:
 xen                  415f7d9404171cbc968b1ea22e7d3523ac2f3fc1

Last test of basis   180143  2023-04-04 19:00:27 Z    0 days
Failing since        180152  2023-04-05 11:02:36 Z    0 days    2 attempts
Testing same since   180155  2023-04-05 13:01:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   415f7d9404..48d76e6da9  48d76e6da92f9ef76c8468e299349a2f698362fa -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 15:15:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 15:15:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518571.805265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk4qj-0004FZ-Tn; Wed, 05 Apr 2023 15:15:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518571.805265; Wed, 05 Apr 2023 15:15:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk4qj-0004FS-P6; Wed, 05 Apr 2023 15:15:01 +0000
Received: by outflank-mailman (input) for mailman id 518571;
 Wed, 05 Apr 2023 15:15:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4lP2=74=citrix.com=prvs=452340087=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pk4qi-0004FH-Dq
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 15:15:00 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f4a7723-d3c4-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 17:14:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f4a7723-d3c4-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680707697;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=EJ3ev2JBehdWtPeyrAnBZaPmtmxUVr18gM35xQh8XrI=;
  b=ZWjvMY9iypUv4t4Edm2mCdtYVPyqaAWjY64mOgjepXB9T06ulkeCjym5
   mjOoChhX3AmkTj/fe7WVhFwrEilCpXiZAq733V7I60elfxptr0tknE+MN
   TM+sRY8wEhEIQ7VahznJ235p0O0PaIo9Ta3zsTboKqODYJC7bO/JOEQnC
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104465613
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:LVFSxqlOxkdm7OGEc2OAvhvo5gxnJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xIdX2CHaPqCNGTzetEkOY+28ENS75LSndA3T1Fk+Xg2QyMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icfHgqH2eIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7aSaVA8w5ARkPqgX5AKGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 fgpDzsXKTWYvOSrmLS0Dc5WvOJ8Dsa+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglH2dSFYr1SE47I6+WHJwCR60aT3McqTcduPLSlQth/A+
 j6YojqgWXn2MvSkihXd/WKdntTpph3lfaIJOaaEqKVT1Qj7Kms7V0RNCArTTeOComqjUNsZB
 UUS8ScqqbUa/VauCNL6WnWQsHOC+xIRRddUO+k78x2WjLrZ5R6DAWoJRSIHb8Yp3OccSCY2z
 FaPk5XMDCZ2rbyOYXuH8/GfqjbaERcYLCkpZSICZQIf5p/op4Rbpg3LZsZuFuiylNKdMSv32
 DqQqy89gZ0ciMcK0+Ow+lWvqzCjvJ/SVSYu+x7aGGmi62tRbZaofYWy5XDH7PxLK8CSSVzpg
 ZQfs5HAtqZUV8jLzXHTBrxXR9lF+sppLhXYsHlkBd4E7A+r5ialcsd32gtYe2VmZ5NslSDSX
 KPDhe9AzMYNbCP0PPcmPd3Z59cClva5S4m8PhzARp8XO8UqKlfalM17TRTIt10BhnTAhk3W1
 X2zVc+3RUgXBq18pNZdb7dMiOR7rszSKI66eHwa8/hE+eDEDJJtYe1ZWGZil8hghE9+nC3b8
 sxEK+yBwAhFXev1b0H/qNBDdQ5RdiJqXsCn86S7k9JvxSI/QQkc5wL5m+t9K+SJYYwO/gs3w
 p1NchABkweu7ZE2AQ6LdmpieNvSYHqLllpiZXZEFQ/xixAejXOHsP93m20fIeN2q4SODJdcE
 5E4Ril3Kq8VEGiWoGhAMMKVQU4LXE3DuD9i9hGNOFAXF6OMjSSQkjM4VmMDLBUzMxc=
IronPort-HdrOrdr: A9a23:1tJHzKj6rj9X9XshvrlI23ODQ3BQXvgji2hC6mlwRA09TyX4rb
 HKoB1/73WYtN9/Yh0dcLy7V5VoOEmskqKdgrNhX4tKPjOHhILAFugL0WKF+VPd8kbFh41gPM
 lbEpSWP+eAaWSS3fyQ3OBhKadb/DBcytHRuQ4C9QYKcei3UdAa0+6mMHfnLqUYLDM2fKYEKA
 ==
X-IronPort-AV: E=Sophos;i="5.98,321,1673931600"; 
   d="scan'208";a="104465613"
Date: Wed, 5 Apr 2023 16:14:45 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@cloud.com>
Subject: Re: [PATCH v4 10/12] xen/tools: add sve parameter in XL configuration
Message-ID: <8921a9ca-7284-44ce-8ce5-bc631b0980d6@perard>
References: <20230327105944.1360856-1-luca.fancellu@arm.com>
 <20230327105944.1360856-11-luca.fancellu@arm.com>
 <9bd2924b-bb4a-440d-ae31-0253e66c56e5@perard>
 <328A9CBD-5FCE-481B-93AF-D139963488D5@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <328A9CBD-5FCE-481B-93AF-D139963488D5@arm.com>

On Tue, Apr 04, 2023 at 01:48:34PM +0000, Luca Fancellu wrote:
> > On 31 Mar 2023, at 15:23, Anthony PERARD <anthony.perard@citrix.com> wrote:
> > 
> > On Mon, Mar 27, 2023 at 11:59:42AM +0100, Luca Fancellu wrote:
> >> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> >> index 10f37990be57..adf48fe8ac1d 100644
> >> --- a/docs/man/xl.cfg.5.pod.in
> >> +++ b/docs/man/xl.cfg.5.pod.in
> >> @@ -2952,6 +2952,17 @@ Currently, only the "sbsa_uart" model is supported for ARM.
> >> 
> >> =back
> >> 
> >> +=item B<sve="NUMBER">
> >> +
> >> +To enable SVE, user must specify a number different from zero, maximum 2048 and
> >> +multiple of 128. That value will be the maximum number of SVE registers bits
> >> +that the hypervisor will impose to this guest. If the platform has a lower
> > 
> > Maybe start by describing what the "sve" value is before imposing
> > limits. Maybe something like:
> > 
> >    Set the maximum vector length that a guest's Scalable Vector
> >    Extension (SVE) can use. Or disable it by specifying 0, the default.
> > 
> >    Value needs to be a multiple of 128, with a maximum of 2048 or the
> >    maximum supported by the platform.
> > 
> > Would this, or something like that be a good explanation of the "sve"
> > configuration option?
> 
> Yes, I can change it; I need to do it anyway because I think the suggestion
> from Jan can also apply here, and we could pass a negative value that means
> “max VL supported by the platform"

Well, it's a config file, not a C ABI, so the maximum allowed here doesn't
have to be spelled "-1"; it could also be "max", "max-allowed",
"max-size-supported", ... So feel free to deviate from the restricted C
ABI. But "-1" works as long as it's the only allowed negative number.
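As a throwaway illustration of that point (none of these names exist in xl; this is only a sketch of the idea), a config-value parser that accepts a keyword alongside numbers might look like:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical parser for an "sve=" xl config value, illustrating that a
 * config file is free to spell "maximum supported" as a keyword rather
 * than the C-ABI-style "-1".  Returns -1 for the "max" keywords, the
 * numeric value for plain numbers, and -2 for anything unrecognised.
 */
static long parse_sve_value(const char *s)
{
    char *end;
    long v;

    if (!strcmp(s, "max") || !strcmp(s, "max-supported"))
        return -1;               /* keyword spelling of "-1" */

    v = strtol(s, &end, 0);
    if (*end != '\0')
        return -2;               /* not a number, not a known keyword */
    return v;
}
```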

> > 
> >> +supported bits value, then the domain creation will fail.
> >> +A value equal to zero is the default and it means this guest is not allowed to
> >> +use SVE.
> >> +
> >> +=back
> >> +
> >> =head3 x86
> >> 
> >> =over 4
> >> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> >> index ddc7b2a15975..16a49031fd51 100644
> >> --- a/tools/libs/light/libxl_arm.c
> >> +++ b/tools/libs/light/libxl_arm.c
> >> @@ -211,6 +211,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
> >>         return ERROR_FAIL;
> >>     }
> >> 
> >> +    config->arch.sve_vl = d_config->b_info.arch_arm.sve;
> > 
> > This truncate a 16bit value into an 8bit value, I think you should check
> > that the value can actually fit.
> > 
> > And maybe check `d_config->b_info.arch_arm.sve` value here instead of
> > `xl` as commented later.
> 
> Yes, I can do it. One question: can I use xc_physinfo() here to retrieve the
> maximum vector length from arch_capabilities?
> I mean, is there a better way, or can I go with that?

Yeah, there might be a "better" way. I think my suggestion to check the
sve value here was wrong. I still want to have it checked in libxl, but
it might be better to do that in the previous step, that is,
"libxl__domain_config_setdefault". libxl__arch_domain_build_info_setdefault()
will have `physinfo`, so you won't have to call xc_physinfo().


Regarding the default: libxl is capable of selecting a good set of
options depending on the underlying hardware. So is it possible that in
the future we would want to enable SVE by default? If that's even a
remote possibility, the current API wouldn't allow it, because we have
"default" and "disable" being the same.

Since we want to add a max-VL-supported option, maybe we want to
separate the default and disable options. So we could keep the
single field `libxl_domain_build_info.arch_arm.sve` and have, for example,
"-1" for max-supported and "-2" for disabled, while "0" means default.
Or alternatively, add an extra libxl_defbool field (which can be
default/true/false) and keep the second one for the VL. (That extra
"disabled" option would only be for libxl; for xl we can keep "sve=0"
meaning disable, and a missing option just meaning default.)

In any case, libxl__arch_domain_build_info_setdefault() will check
`b_info.arch_arm.sve` and set it to the value that can be given to Xen. And
as of now, libxl__arch_domain_prepare_config() will just copy the value
from `b_info` to `config`. (I guess that last step, _prepare_config(),
could still do the division by 128.)


Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 15:18:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 15:18:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518575.805274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk4tp-0004xY-Bu; Wed, 05 Apr 2023 15:18:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518575.805274; Wed, 05 Apr 2023 15:18:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk4tp-0004xR-8h; Wed, 05 Apr 2023 15:18:13 +0000
Received: by outflank-mailman (input) for mailman id 518575;
 Wed, 05 Apr 2023 15:18:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gFET=74=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pk4to-0004wx-1S
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 15:18:12 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 12e2c4e1-d3c5-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 17:18:09 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 v14-20020a05600c470e00b003f06520825fso1304728wmo.0
 for <xen-devel@lists.xenproject.org>; Wed, 05 Apr 2023 08:18:09 -0700 (PDT)
Received: from [192.168.69.115] (4ab54-h01-176-184-52-81.dsl.sta.abo.bbox.fr.
 [176.184.52.81]) by smtp.gmail.com with ESMTPSA id
 i42-20020a05600c4b2a00b003f0321c22basm2461375wmp.12.2023.04.05.08.18.08
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 05 Apr 2023 08:18:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12e2c4e1-d3c5-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680707889;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=idFXVwnEQw2AeckpahKfXswn5+/Jk4JVx854v+ndXlM=;
        b=P5FRw1Bg8gByNSRzaVrWUJucc+Ia4IMT8SBajxnMMvSH0nmkPoBKvzzOp8cPpm0043
         W55c3pCluAC9R0flUGbEbU7/UOi25PsdH6FiniT+JvE7ldpWa3l6uPy/ngUjkHb2R1nI
         lDct8ip3jjWb9ZROycFg+zRRunyCOZEJtqUtxAV7aEzkeuiimDkDVTD8s1N/T9AVVfcu
         ZKoBR5zXSIvJZ2JIo/R2+DR5oTvV0/L9XzKUlEb5jX9SCauc267Ovw8quhwNhbHCnBZ1
         8EWK7u4/h6p/P9crBvmOAbjQYbSrYrxaz/Jn0hJ/1Wx9uKiQKBukAv/SHkV2N833Rknz
         lSnA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680707889;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=idFXVwnEQw2AeckpahKfXswn5+/Jk4JVx854v+ndXlM=;
        b=dp2/D3O2Z9XxHmDHopLfCTW1NXEJOP3QVxP33zNnk95UAhe9LA6UX9Tj5Vn+/yr2xu
         /wYG08idl9acv6J7vwH+52a2lt8gFF1PcxrnGncqZ3M6l4S3VgwpiE+MWukMQa8nQGXV
         tAioX7lpcKr9IPDds7qZOxaoRvFK0ZRxCRGuJ4ysunfugtscXE7/ZxvBCftXDoVR7Say
         w3IJVtTyrXb45p/ONXyPNzYonf5690mtUiV3nULcaZinknzCfQPYdQHfTk+2oNB3R4a1
         m18m6mFXAHEroRSvbB9sA77ZjvYaZsMhqMwky6905lSiOxu8IOy9fjVqK77ZUrMiJCLg
         Uodw==
X-Gm-Message-State: AAQBX9fJkVekEjq+oAPUgp9albvS0cYku54CDxob6jmI2H+Dwa6aUZlt
	v6jF6OHKIuDPQcRQ9JNa5C4RwQ==
X-Google-Smtp-Source: AKy350Z6S5WSad5iHIzjt49TqgYs41yzdYlN/8d2eI12bwZ9Xb+YKGePa0vAg6iMH5JR4Z/F8GyAAQ==
X-Received: by 2002:a7b:c38b:0:b0:3ed:8bef:6a04 with SMTP id s11-20020a7bc38b000000b003ed8bef6a04mr5244487wmj.27.1680707889387;
        Wed, 05 Apr 2023 08:18:09 -0700 (PDT)
Message-ID: <b84ecc42-4201-d714-364a-6a55682cbce7@linaro.org>
Date: Wed, 5 Apr 2023 17:18:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [PATCH 09/14] accel: Allocate NVMM vCPU using g_try_FOO()
Content-Language: en-US
To: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Reinoud Zandijk <reinoud@netbsd.org>
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-10-philmd@linaro.org> <874jpul9d1.fsf@linaro.org>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <874jpul9d1.fsf@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 5/4/23 15:55, Alex Bennée wrote:
> 
> Philippe Mathieu-Daudé <philmd@linaro.org> writes:
> 
>> g_malloc0() can not fail. Use g_try_malloc0() instead.
>>
>> https://developer-old.gnome.org/glib/stable/glib-Memory-Allocation.html#glib-Memory-Allocation.description
>>
>> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
>> ---
>>   target/i386/nvmm/nvmm-all.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
>> index 3c7bdd560f..45fd318d23 100644
>> --- a/target/i386/nvmm/nvmm-all.c
>> +++ b/target/i386/nvmm/nvmm-all.c
>> @@ -942,7 +942,7 @@ nvmm_init_vcpu(CPUState *cpu)
>>           }
>>       }
>>   
>> -    qcpu = g_malloc0(sizeof(*qcpu));
>> +    qcpu = g_try_malloc0(sizeof(*qcpu));
>>       if (qcpu == NULL) {
>>           error_report("NVMM: Failed to allocate VCPU context.");
>>           return -ENOMEM;
> 
> Why - if we fail to allocate the vCPU context it's game over anyway, and
> established QEMU practice is that it's ok to assert-fail on a malloc when
> there isn't enough memory. IOW keep the g_malloc0 and remove the error
> handling case.

This was my first approach, but then I realized the author took care
to warn / return -ENOMEM, so I went for _try_; but you are right:
since this is "game over", let's simply remove the check.
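The two idioms being weighed here can be sketched with plain calloc() standing in for GLib (an assumption for illustration; the real code uses g_malloc0()/g_try_malloc0()):

```c
#include <stdlib.h>

/* Stand-in for g_malloc0(): never returns NULL, aborts on failure,
 * so any "if (p == NULL)" check after it is dead code. */
static void *xmalloc0(size_t n)
{
    void *p = calloc(1, n);
    if (p == NULL)
        abort();                 /* allocation failure is fatal */
    return p;
}

/* Stand-in for g_try_malloc0(): reports failure by returning NULL,
 * which is what makes a NULL check after it meaningful. */
static void *try_malloc0(size_t n)
{
    return calloc(1, n);
}
```

Under the "game over" reasoning above, the abort-on-failure variant is kept and the dead NULL check is dropped.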



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 15:49:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 15:49:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518579.805284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk5Na-0000a8-IA; Wed, 05 Apr 2023 15:48:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518579.805284; Wed, 05 Apr 2023 15:48:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk5Na-0000a1-FE; Wed, 05 Apr 2023 15:48:58 +0000
Received: by outflank-mailman (input) for mailman id 518579;
 Wed, 05 Apr 2023 15:48:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4lP2=74=citrix.com=prvs=452340087=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pk5NZ-0000Zu-Be
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 15:48:57 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5c6bc2cc-d3c9-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 17:48:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c6bc2cc-d3c9-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680709734;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=8vWpWya0wN4rZ3x0AJ/glU6yInfi2NjKtrZResNFdMc=;
  b=WMhKbBk1POGvT8MOwMgTSPHwTIXjzZVTHhmacbMbkORU4dTA4Rfcs3mF
   l3W3Uq8H0tD9EisJYojgU+CEozK/ZpwSgQ4DzWWQN+MH3NOa8c80nIYcS
   PfrV1llygfSlljBjxHgB+iW/FX53ajL6PyYnq3I5J6nluHloFN2vuj6ZR
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 103233005
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:GsIbL6CCPiJTIRVW/yHjw5YqxClBgxIJ4kV8jS/XYbTApGwghGBSz
 mRMWTjVOazYZmT3KN9xPN6w8UwO6JOEyoJkQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFu8pvlDs15K6p4G9A4wRkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw5+JeKmZu2
 tokISlVUzCD2eOanrm6Vbw57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTIJs4gOevgGi5azBCoUiZjaE2/3LS3Ep6172F3N/9I4TRH54Oxh7Fz
 o7A1zm6OU8eacOm8zSUrW+iurDFwCnUG6tHQdVU8dY12QbOlwT/EiY+V1G2vP24gU6WQM9EJ
 gof/S9Ghbg/8gmnQ8fwWzW8oWWYpVgMVtxICeo45QqRjK3O7G6xGWwsXjNHLts8u6ceTCQnz
 FaTk/v1BDZkt/ueTnf1y1uPhWrsY25PdzZEPHJaC1JfuLEPvb3fkDrSSv9IF46aqOH8AGnN4
 zDUrQRgjLQM2JtjO7qAwbzXv969jsGXHlZrt12GAD3NAhBRP9D8OdHxgbTPxbMZddvCEAHc1
 JQRs5LGhN3iG61hg8BkrA8lOLiyr8iIPzTH6bKEN8lwrm/9k5JPkG053d2fGKuKGpxeEdMRS
 BWP0T69HbcKVJdQUYd5YpiqF+MhxrX6GNLuW5j8N4QeOsYqL1XWp3E/PSZ8OlwBd2B1yMkC1
 WqzK57wXR7294w8pNZJewvt+eBynX1vrY8ibZv60w6mwdKjWZJhcp9caAHmRrlgvMu5TPD9r
 4432z2il08OD4UTo0D/reYuELz9BSNhX8yn8JAKK7Xrz8gPMDhJNsI9CIgJI+RN95m5XM+Sl
 p1hcie0EGbCuEA=
IronPort-HdrOrdr: A9a23:ihJHs6Gzj/JunA9MpLqEHseALOsnbusQ8zAXPiBKJCC9vPb5qy
 nOpoV86faQslwssR4b9uxoVJPvfZqYz+8W3WBzB8bEYOCFghrKEGgK1+KLrwEIWReOk9K1vZ
 0KT0EUMqyVMbEVt6fHCAnTKade/DGEmprY+9s3GR1WPHBXg6IL1XYINu6CeHcGPTWvnfACZe
 ehDswsnUvZRV0nKv6VK1MiROb5q9jChPvdEGI7705O0nj0sduwgoSKaSSl4g==
X-IronPort-AV: E=Sophos;i="5.98,321,1673931600"; 
   d="scan'208";a="103233005"
Date: Wed, 5 Apr 2023 16:48:38 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger Pau
 =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [PATCH] x86emul/test: drop bogus .PHONY
Message-ID: <3d0c36f9-a715-4971-9031-04848bcd2d0d@perard>
References: <515bbf07-91fa-1932-1be1-1411f7814e6e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <515bbf07-91fa-1932-1be1-1411f7814e6e@suse.com>

On Tue, Apr 04, 2023 at 08:37:55AM +0200, Jan Beulich wrote:
> x86_emulate is a real (directory) target.

Indeed, x86_emulate is a directory, so having the target in .PHONY
isn't bogus, but kind of expected in most cases.

Here, the recipe is written on the assumption that .PHONY is used, as
suggested by the use of the "-p" option of `mkdir` and "-f" of `ln`.

Without .PHONY, the recipe will never be executed if the directory
exist. And, if the content of the original x86_emulate directory
change, the linked directory will never be updated.
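Anthony's point can be reproduced with a tiny throwaway Makefile (a standalone sketch, not the Xen tree itself — target and recipe names are illustrative):

```shell
# Hypothetical demo: a target that is a real directory needs .PHONY
# for its recipe to keep running once the directory exists.
tmp=$(mktemp -d)
cd "$tmp"
printf 'x86_emulate:\n\t@echo recipe-ran\n\t@mkdir -p $@\n' > Makefile
run1=$(make -s x86_emulate 2>&1 | grep -c recipe-ran)  # dir absent: recipe runs
run2=$(make -s x86_emulate 2>&1 | grep -c recipe-ran)  # dir exists: recipe skipped
printf '.PHONY: x86_emulate\n' >> Makefile
run3=$(make -s x86_emulate 2>&1 | grep -c recipe-ran)  # .PHONY: recipe runs again
echo "$run1 $run2 $run3"
```

The second invocation runs nothing because make considers the existing directory up to date; only after adding .PHONY does the (idempotent, thanks to `-p`/`-f`) recipe run every time.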

So, the patch description needs to give a reason for removing
x86_emulate from .PHONY. Is there some kind of bug?

> --- a/tools/tests/x86_emulator/Makefile
> +++ b/tools/tests/x86_emulator/Makefile
> @@ -278,7 +278,6 @@ else
>  run32 clean32: %32: %
>  endif
>  
> -.PHONY: x86_emulate
>  x86_emulate:
>  	mkdir -p $@
>  	ln -sf $(XEN_ROOT)/xen/arch/x86/$@/*.[ch] $@/

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 16:17:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 16:17:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518583.805294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk5oZ-0004xD-OK; Wed, 05 Apr 2023 16:16:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518583.805294; Wed, 05 Apr 2023 16:16:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk5oZ-0004x6-LW; Wed, 05 Apr 2023 16:16:51 +0000
Received: by outflank-mailman (input) for mailman id 518583;
 Wed, 05 Apr 2023 16:16:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gw2r=74=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pk5oY-0004wP-MT
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 16:16:50 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20625.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 43a4874c-d3cd-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 18:16:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7209.eurprd04.prod.outlook.com (2603:10a6:102:92::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 16:16:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::154e:166d:ec25:531b%6]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 16:16:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43a4874c-d3cd-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=G51r360AJlTIkzmctJlQi5uTONjZ23w2RzfWg1JTOFp2blJGsYMICvGKY2ZsClzGrsggO/ara3CCk7jKC1owO1iRrXIdZNEBqWkp+/Cpj8clc2XeeCERBrOY5l8JUbMGujzsMWrDolQp2xMouNY9nBBrkUbYwZ0d/qoqMBUKA3ywEAnOWGp0aZ0LHDPUHN+zqE7jyOYUIJE5hQkX5ans0NW6nzSScqq69hNc3A0LESlxmYXa50P15wzs8M9LfSTPPVY8mM0vmrQYkCK8maNDk2q23d9k7B2Tj5jnkFhe4OhfH/FkYACz5j/tqrwY8k0Pu4lfzO9yhr9n5OujtNih7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ED/pJhg31bjLlSP3HBT0D3rUkA+rwZgqc+paNaMixrg=;
 b=NYLlIQ9/1naOK2VdPAIDzUSt1sq7EoBnWB5XjiaqfNMuYNe7CMy5IbQZCjqzFak63WZKU9g+yP75Rz29eSfFwNImoVpeWSNwx7svXn/ZdMbaGoTwcnIfVRhnXBfBkXxSgaAWQ96XNw/yq+fPh7t3bdF/R2/I1W/9g7JEPbR6pv1RvRcpFLLZSrZMHzkkGtd5QnKUQTCYZMzWEskyU9TBSOaKLHMSF9okZkhpbKa/YnNi7uCERW/MgX0gYnh3/5T5xrlUt6a9mpWq7sQJDmopGHcKPp0sJbdszZ0AsuJcYoYdyNwGCd35qGjC+es51/WnhUzi+ghQjlP6UugBb/4Jow==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ED/pJhg31bjLlSP3HBT0D3rUkA+rwZgqc+paNaMixrg=;
 b=mZNGfpiYZhmLFOJvEJcaXtg0pJFMjEjBHRTXNevU1jOosto2obndKIL51/2anZMxSsW9YCyG4je40Xg5VhCYIj91cU7mmWU09M8PNZeVL+spNQE9KY5S/pICGGrAWBUHzV/n9j3jZl1lHUUHle8RHVO9mKX3ugSg/YZU4M1kvQ/FTGUOAaweGgu73TGuONYdmxvXxCuPrjoo/MSQ4JP02uCqCMaaJp+bVI7vVCcBiTOo4nNqioKQBiDHGSMzZr9iFhMZQdEBUNNUi9ksR4a3rG2Se8Ichjg80keqy2WweOBY02bBIQaepoCmFEmg1n3XA1Sd2O2ozxo15qMQXk0mfA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a665b967-3554-4d26-e2d7-e48ffd7b384f@suse.com>
Date: Wed, 5 Apr 2023 18:16:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86emul/test: drop bogus .PHONY
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <515bbf07-91fa-1932-1be1-1411f7814e6e@suse.com>
 <3d0c36f9-a715-4971-9031-04848bcd2d0d@perard>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <3d0c36f9-a715-4971-9031-04848bcd2d0d@perard>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0090.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PR3PR04MB7209:EE_
X-MS-Office365-Filtering-Correlation-Id: 65f408ce-d6fd-4014-eceb-08db35f1265a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VoUg+C1X0ETwSPJ+WM6IWBpLL4EodNswdejUlHFSCoiyLckJMAy58mN1svF7MG2Nc7i573oa9grWPqcQ8Bbq6SxyoAZ3Lw2lLsZIcu52HIo5fdovhdVkVC5wQqeWvynS30Rf0rr0Ma9VXn1A0wk61SndClqL9amarWV4/ykBJS27HScRPnQw/m2vn3bRmejpFCQqBIij4yyxhDq85dANU9JQWty2+U1ITdAtc7SbvMFV1khb6y6vGA7cmVR+tjuDEVltWmgIUegHQubMYrnl+68K42CB2HTMHdtQDcn+A4eo4Q6DD0OSX6hFzJYDALJ6/r4WNpEPfHS1NaxPG2LM6CKzj+Dux37AtOFdlvam3k5XBrFmuvNa14hKBPJbTVosUyNpe5koKI3V6/Co4eJ8ZCWVYGsMxhpdsjrBnEiiCfVHYWrkqq5SJrNvEG7kB9+0+Fm4NFG03nXjRdW6tC6K+tk0xctSmxTzfvLOOFtM/HlwSuLnXna7joCnU8St5sr7PIXNXlltl/NOkRmpuR7dHKPvH2rkq1Zl+Cd+KJyjfOI5vwIa/wzc575V87oxSgEa8U+Qd7yBqPBVZo7wDiznfPDffVbKa/Wf3OSUI4gr4gjdMW7w9h7z8EkkO+rYVvveHJTqP4GrV+niRCGvSpHROWW9KfxwceIxbSUcjP94Z5i0umZPRb81H1vZWSOMJRJp
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(346002)(366004)(396003)(39860400002)(376002)(451199021)(6486002)(83380400001)(2616005)(53546011)(6512007)(6506007)(186003)(38100700002)(26005)(8936002)(2906002)(5660300002)(478600001)(36756003)(41300700001)(6916009)(66476007)(66946007)(66556008)(54906003)(8676002)(4326008)(31696002)(86362001)(316002)(31686004)(142923001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?elhVRmozMGtaeFZSd01EbmZaaXFLR1paSE8xVm12UENrelJuWVJXM0tySUhs?=
 =?utf-8?B?bmljK0JLQ1FBVVdTVlJ0VzZobGlnS0JrWDdTN3I0TWRPNDhnYXVSMEoxNEpu?=
 =?utf-8?B?MkhyMjAzVWNXWHB3SXphU3VqRVU1T0ZxME12TG01RUZRM0FyWksraUdXMVZw?=
 =?utf-8?B?bkJ4Q204Yk1Nclg1YXJEcVNoMFJOMFF2OVgwVitTdGUzOWN1OFFCblUrYU9t?=
 =?utf-8?B?a3d6QzBKR0FnOENhbWt2Ni9wNUZTaEtqTDNybzB3NEFvMStCZExQcW9CNEFt?=
 =?utf-8?B?eER6Ni91SDhkWXZGUHYzcFkwakhIbEc0RnBzZEEremp5Z3FYbUh5ZzI0Zi9I?=
 =?utf-8?B?WHJEcVE5QWJJQUJEc2liYU8xcFdRZ1IzZ1VjdG16Qi9FZ0ltSWF2elpPYmJx?=
 =?utf-8?B?c3Y4NmNqejJoRUpNM1lKaWZMa3pyNEQxcXdaKzNqaEcvZ2MzUVBTZjFaMFFO?=
 =?utf-8?B?RTM2OE9hKzQ0ME9McTFOV3pQZE9NeVdhVzRWeUtqYTlpeTRnNFRIUEc4ZGRJ?=
 =?utf-8?B?TmFrQmhyNmxkdkZkVFpaK2pBYVdIS2daTUU3Ui9sT1l3WWJUTERTREZmSmt2?=
 =?utf-8?B?VTV2OUd1akQ1NzZMNzNrY1RKbzB0c205ZjVKMXdyODhxeE1PTHJwOEFTUSsv?=
 =?utf-8?B?OWFiZWwyK1lvSURncEsrT3h1RHZPdTg4bmw3QjJCWnp2TXRFZEFJcC9Pcmo1?=
 =?utf-8?B?cm9mZHBjYjB6QXJSMmlWM3Faa0QzNHI5V2l3U0VqK3VYRFJHSkpRZkJqZDcw?=
 =?utf-8?B?dlVSVWlWb0UyR2VwMFJleGZlbkZsclR0R0YwQU9IcWdhOGs5djB3UGUzaEdP?=
 =?utf-8?B?V2pHN2h2UFJ0YSt6bFB6czB2b3B6bUUzYS9zRldhS2hGN1d2aFFiTnpZaVNS?=
 =?utf-8?B?MjlWalR2NDI0cllEaFRMSFc0V004OEIraElDMUNNbVJjT1o2S0VNOGhWRE5X?=
 =?utf-8?B?eVdORnJhSEgzdkNKTndQSTJDNXRXQkNkMG0rRlA4OGZjY3BYQ3VtQVM1MDhF?=
 =?utf-8?B?YjhTWTh6RlhOUzZ5NEFZSTVvbGttOFg1aVNYODBUNDF3bEdCaGtpMHdOZWRF?=
 =?utf-8?B?ZDhnOTZYRWFqWFl6QUVHZVFiRWJ3TDRzMWxXVFVwVW41ZlJHMC9sT1ZYaVl2?=
 =?utf-8?B?NkVsclN0M1IxbFNxdzkxUUdUaUZ6ZWY5L3JUM0J2U0NuOTlhb1RseGQyWGsy?=
 =?utf-8?B?cU1lQ09KdWFSMnl3OVNuSU5lUWJtMGZoVkJrRC9IWWF3aXh1Y2FiMXhMclFH?=
 =?utf-8?B?L1lSd2M2cllBc3pLdmVlK2pseCs3aFMzQStZMzVmRmEvU3VBTmxzb0VZMjZi?=
 =?utf-8?B?ekx1VEFBaWlPOEFHM0FNeXVaZjJ6WkdZWDhueG90SzRsdjlGeFk4S0paSG9s?=
 =?utf-8?B?a1A0RGJXVXEvbDErTkFicG9KeklNMmZUVzBMT0JWNHUwdS9FTEZkdHZWNmg3?=
 =?utf-8?B?bm1Wd2NYYlFkUUZ6TFovekdyejZFbzRtNzlTcVVwY0ZUWGwvWnNSUERuYVQ5?=
 =?utf-8?B?cSt6UDBjZW45Q3k2VlhSN2pVVjczeGJYbENoVVJEMDVpZHY5U2htZ2VGWWFa?=
 =?utf-8?B?OTAvSUZNZ3R0c1pmOWgvL3cxWU5IR2FrNjhobTkyeDBFSFV0NDJvcUdhQmtx?=
 =?utf-8?B?RFdQb2ZrUm52bldUZllITkR6WDEya0NYYzFyVWZUeWUrYjdFaXVoRU1QTm5B?=
 =?utf-8?B?NitXVUFjSUVTTjdDa2Q2TmROZUlHVkRxVWhMUXpQd3FNeG9IRmdLNS8xenZD?=
 =?utf-8?B?cG5qb0pLSjZ5MzZUNjcxZXp3M3hUa3JPTkhxQm10ak5USzd6d2JxQktKRU9P?=
 =?utf-8?B?ZUxoN3BwS212YVRJR29hMXB1Z1N2WERCYjVzV0l2MHJCTGt0YWR3ZHFWSjha?=
 =?utf-8?B?Zzc0RW5sZmgyc1dkcEQrMXR1VGtzaWpyc1FweGIxRVFzSC9CQVR1UXBmZElB?=
 =?utf-8?B?QTdjaURqY2Z3UVJxUUhYTTBQR05tNklTTjBpcnNodGIyWThFUExVQTVyUm9x?=
 =?utf-8?B?NktQSVFLWlZMUzRPR1JWMkZ5N1dzNGI5N1V2WHk2cDVUWGU0bWU3VlA5eUl0?=
 =?utf-8?B?NmdCcm40dVVZeExQTDZncTh2OUdYdlh0WGx1T0pIYUpFMXBHTFBqdEx2N29V?=
 =?utf-8?Q?dDVP/ux759MFd0lggR7tSXmPT?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 65f408ce-d6fd-4014-eceb-08db35f1265a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 16:16:45.3713
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /hkqCkxhLiFzUQuOdQ/FnH2qB0YKzXT+Mb9IqHxYVDv/QDHrxcPUCrGBh2BsCAszCgEZeQElyltkw2WiR/SoFg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR04MB7209

On 05.04.2023 17:48, Anthony PERARD wrote:
> On Tue, Apr 04, 2023 at 08:37:55AM +0200, Jan Beulich wrote:
>> x86_emulate is a real (directory) target.
> 
> Indeed, x86_emulate is a directory, so having the target in .PHONY
> isn't bogus; it is in fact expected in most such cases.
> 
> Here, the recipe is written with .PHONY in mind, as suggested by the
> use of the "-p" option of `mkdir` and "-f" of `ln`.
> 
> Without .PHONY, the recipe will never be executed once the directory
> exists. And, if the content of the original x86_emulate directory
> changes, the linked directory will never be updated.
> 
> So, the patch description needs to give a reason for removing
> x86_emulate from .PHONY. Is there some kind of bug?

Yes, but it was my brain that was buggy. I deliberately added that
line long ago when converting the symlink to a real dir, and then for
whatever reason thought recently that it was bogus. Right you are,
and I'll revert this change.

Jan

>> --- a/tools/tests/x86_emulator/Makefile
>> +++ b/tools/tests/x86_emulator/Makefile
>> @@ -278,7 +278,6 @@ else
>>  run32 clean32: %32: %
>>  endif
>>  
>> -.PHONY: x86_emulate
>>  x86_emulate:
>>  	mkdir -p $@
>>  	ln -sf $(XEN_ROOT)/xen/arch/x86/$@/*.[ch] $@/
> 
> Cheers,
> 



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 16:34:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 16:34:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518588.805304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk65M-0007h6-8q; Wed, 05 Apr 2023 16:34:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518588.805304; Wed, 05 Apr 2023 16:34:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk65M-0007gz-55; Wed, 05 Apr 2023 16:34:12 +0000
Received: by outflank-mailman (input) for mailman id 518588;
 Wed, 05 Apr 2023 16:34:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pk65K-0007gq-Ho
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 16:34:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pk65I-0008Jr-Tl; Wed, 05 Apr 2023 16:34:08 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.27.242]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pk65I-0000S4-KD; Wed, 05 Apr 2023 16:34:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=LmgBJib0da20mCASfh4kXw+n8JSxgCsO6eclj0adSe0=; b=FVXkGZQB0TLLEayJf5xZme/TcM
	kyoVaI+PRfMPnuopsqpWBWEZpbZN6cJgjabat8SNyNntVm91JC4wOae/PXs8jZyBbuH70bua3gmSN
	c82jdLukxKyH0/AeyNEbzpSXxTyMIsBWHpZ2+5jaQvrkrD2s09I5DhBSdOnuInExx36c=;
Message-ID: <cce5e277-8692-d339-0283-55620c349fcf@xen.org>
Date: Wed, 5 Apr 2023 17:34:05 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [PATCH v9 4/5] xen/arm: switch ARM to use generic implementation
 of bug.h
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <cover.1680086655.git.oleksii.kurochko@gmail.com>
 <8fdb98350ae4fc6029738d0aabe13a57e1945a50.1680086655.git.oleksii.kurochko@gmail.com>
 <3a94ad32-159d-d06f-cba6-3bb40f9f2085@xen.org>
 <alpine.DEB.2.22.394.2304021557140.4566@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2304021557140.4566@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 03/04/2023 00:15, Stefano Stabellini wrote:
> On Fri, 31 Mar 2023, Julien Grall wrote:
>> Hi Oleksii,
>>
>> I was going to ack the patch but then I spotted something that would want some
>> clarification.
>>
>> On 29/03/2023 11:50, Oleksii Kurochko wrote:
>>> diff --git a/xen/arch/arm/include/asm/bug.h b/xen/arch/arm/include/asm/bug.h
>>> index cacaf014ab..3fb0471a9b 100644
>>> --- a/xen/arch/arm/include/asm/bug.h
>>> +++ b/xen/arch/arm/include/asm/bug.h
>>> @@ -1,6 +1,24 @@
>>>    #ifndef __ARM_BUG_H__
>>>    #define __ARM_BUG_H__
>>>    +/*
>>> + * Please do not include in the header any header that might
>>> + * use BUG/ASSERT/etc macros as they will be defined later after
>>> + * the return to <xen/bug.h> from the current header:
>>> + *
>>> + * <xen/bug.h>:
>>> + *  ...
>>> + *   <asm/bug.h>:
>>> + *     ...
>>> + *     <any_header_which_uses_BUG/ASSERT/etc macros.h>
>>> + *     ...
>>> + *  ...
>>> + *  #define BUG() ...
>>> + *  ...
>>> + *  #define ASSERT() ...
>>> + *  ...
>>> + */
>>> +
>>>    #include <xen/types.h>
>>>      #if defined(CONFIG_ARM_32)
>>> @@ -11,76 +29,7 @@
>>>    # error "unknown ARM variant"
>>>    #endif
>>>    -#define BUG_FRAME_STRUCT
>>> -
>>> -struct bug_frame {
>>> -    signed int loc_disp;    /* Relative address to the bug address */
>>> -    signed int file_disp;   /* Relative address to the filename */
>>> -    signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>>> -    uint16_t line;          /* Line number */
>>> -    uint32_t pad0:16;       /* Padding for 8-bytes align */
>>> -};
>>> -
>>> -#define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
>>> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
>>> -#define bug_line(b) ((b)->line)
>>> -#define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>>> -
>>> -/* Many versions of GCC doesn't support the asm %c parameter which would
>>> - * be preferable to this unpleasantness. We use mergeable string
>>> - * sections to avoid multiple copies of the string appearing in the
>>> - * Xen image. BUGFRAME_run_fn needs to be handled separately.
>>> - */
>>
>> Given this comment ...
>>
>>> -#define BUG_FRAME(type, line, file, has_msg, msg) do {                \
>>> -    BUILD_BUG_ON((line) >> 16);                                      \
>>> -    BUILD_BUG_ON((type) >= BUGFRAME_NR);                             \
>>> -    asm ("1:"BUG_INSTR"\n"                                           \
>>> -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"         \
>>> -         "2:\t.asciz " __stringify(file) "\n"                        \
>>> -         "3:\n"                                                      \
>>> -         ".if " #has_msg "\n"                                        \
>>> -         "\t.asciz " #msg "\n"                                       \
>>> -         ".endif\n"                                                  \
>>> -         ".popsection\n"                                             \
>>> -         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
>>> -         "4:\n"                                                      \
>>> -         ".p2align 2\n"                                              \
>>> -         ".long (1b - 4b)\n"                                         \
>>> -         ".long (2b - 4b)\n"                                         \
>>> -         ".long (3b - 4b)\n"                                         \
>>> -         ".hword " __stringify(line) ", 0\n"                         \
>>> -         ".popsection");                                             \
>>> -} while (0)
>>> -
>>> -/*
>>> - * GCC will not allow to use "i"  when PIE is enabled (Xen doesn't set the
>>> - * flag but instead rely on the default value from the compiler). So the
>>> - * easiest way to implement run_in_exception_handler() is to pass the to
>>> - * be called function in a fixed register.
>>> - */
>>> -#define  run_in_exception_handler(fn) do {                           \
>>> -    asm ("mov " __stringify(BUG_FN_REG) ", %0\n"                     \
>>> -         "1:"BUG_INSTR"\n"                                           \
>>> -         ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn) ","\
>>> -         "             \"a\", %%progbits\n"                          \
>>> -         "2:\n"                                                      \
>>> -         ".p2align 2\n"                                              \
>>> -         ".long (1b - 2b)\n"                                         \
>>> -         ".long 0, 0, 0\n"                                           \
>>> -         ".popsection" :: "r" (fn) : __stringify(BUG_FN_REG) );      \
>>> -} while (0)
>>> -
>>> -#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 0, "")
>>> -
>>> -#define BUG() do {                                              \
>>> -    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 0, "");        \
>>> -    unreachable();                                              \
>>> -} while (0)
>>> -
>>> -#define assert_failed(msg) do {                                 \
>>> -    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, msg);     \
>>> -    unreachable();                                              \
>>> -} while (0)
>>> +#define BUG_ASM_CONST   "c"
>>
>> ... you should explain in the commit message why this is needed and the
>> problem described above is not a problem anymore.
>>
>> For instance, I managed to build it without 'c' on arm64 [1]. But it does
>> break on arm32 [2]. I know that Arm is also where '%c' was an issue.
>>
>> Skimming through Linux, the reason seems to be that GCC may add '#' when it
>> should not. That said, I haven't looked at the impact on the generic
>> implementation. Nor have I looked at which versions may be affected (the
>> original message was from 2011).
>>
>> However, without an explanation, I am afraid this can't go in, because I am
>> worried we may break some users (thankfully that might just be a compilation
>> issue rather than weird behavior).
>>
>> Bertrand, Stefano, do you know if this is still an issue?
> 
> I don't know, but I confirm your observation.
> 
> In my system, both ARM64 and ARM32 compile without problems with "c".
> Without "c", only ARM64 compiles without problems, while ARM32 breaks.
> 
> My ARM32 compiler is:
> arm-linux-gnueabihf-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
> 
> Adding a meaningful explanation to the commit message might be
> difficult in this case.

Agreed. One would need to look at the compiler code to confirm. We should 
at least acknowledge the change in the commit message and also...

> 
> Maybe instead we could run a few tests with different versions of arm64
> and arm32 gcc to check that everything still works? If everything checks
> out, given that the issue has been unchanged for 10+ years we could just
> keep "c" and move forward with it?

... confirm that we are still able to compile with GCC 4.9 (arm32) and 
GCC 5.1 (arm64).

Do we have those compilers in the CI? (I couldn't easily confirm from the 
automation directory.)

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 16:39:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 16:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518592.805314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk6Aa-0008RA-Sw; Wed, 05 Apr 2023 16:39:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518592.805314; Wed, 05 Apr 2023 16:39:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk6Aa-0008R3-PB; Wed, 05 Apr 2023 16:39:36 +0000
Received: by outflank-mailman (input) for mailman id 518592;
 Wed, 05 Apr 2023 16:39:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pk6AY-0008Qu-T3
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 16:39:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pk6AU-0000Bi-ND; Wed, 05 Apr 2023 16:39:30 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.27.242]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pk6AU-0000mO-Fq; Wed, 05 Apr 2023 16:39:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=TghqaubP4quF0whmmYBRFv5KsbLCk+erblh6nd8HG6w=; b=QiXlO7HZ0QxjPJZsO+e3nyzbqF
	AwIYrI0pAx+HYqEH8/5HFLJJPRHh3+KPg6VhLoe82A5Lsm60DdTsqvBHJglBs7zPMkG1d9XufZPsS
	m0Lb5jZPe3LSRkMvFA49/BQ/HkwQ838/4eb29OLFYVQ5fE2ipB+egnlak7xRkihcCaas=;
Message-ID: <fb639472-70f3-f7c9-eaa6-37effd4965b3@xen.org>
Date: Wed, 5 Apr 2023 17:39:28 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [PATCH v9 4/5] xen/arm: switch ARM to use generic implementation
 of bug.h
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>, Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <cover.1680086655.git.oleksii.kurochko@gmail.com>
 <8fdb98350ae4fc6029738d0aabe13a57e1945a50.1680086655.git.oleksii.kurochko@gmail.com>
 <3a94ad32-159d-d06f-cba6-3bb40f9f2085@xen.org>
 <605245331bb93b7e60a4a9d65b19b6642d897034.camel@gmail.com>
 <9c4ca4a1-1b68-5ee0-0434-e6c9ec7d1ef6@suse.com>
 <d351a7b6d673b70d45e809123e6e42abbf7b8014.camel@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d351a7b6d673b70d45e809123e6e42abbf7b8014.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 04/04/2023 09:09, Oleksii wrote:
> On Tue, 2023-04-04 at 08:41 +0200, Jan Beulich wrote:
>> On 03.04.2023 20:40, Oleksii wrote:
>>> Hello Julien,
>>> On Fri, 2023-03-31 at 22:05 +0100, Julien Grall wrote:
>>>> Hi Oleksii,
>>>>
>>>> I was going to ack the patch but then I spotted something that
>>>> would
>>>> want some clarification.
>>>>
>>>> On 29/03/2023 11:50, Oleksii Kurochko wrote:
>>>>> diff --git a/xen/arch/arm/include/asm/bug.h b/xen/arch/arm/include/asm/bug.h
>>>>> index cacaf014ab..3fb0471a9b 100644
>>>>> --- a/xen/arch/arm/include/asm/bug.h
>>>>> +++ b/xen/arch/arm/include/asm/bug.h
>>>>> @@ -1,6 +1,24 @@
>>>>>    #ifndef __ARM_BUG_H__
>>>>>    #define __ARM_BUG_H__
>>>>>    
>>>>> +/*
>>>>> + * Please do not include in the header any header that might
>>>>> + * use BUG/ASSERT/etc macros as they will be defined later after
>>>>> + * the return to <xen/bug.h> from the current header:
>>>>> + *
>>>>> + * <xen/bug.h>:
>>>>> + *  ...
>>>>> + *   <asm/bug.h>:
>>>>> + *     ...
>>>>> + *     <any_header_which_uses_BUG/ASSERT/etc macros.h>
>>>>> + *     ...
>>>>> + *  ...
>>>>> + *  #define BUG() ...
>>>>> + *  ...
>>>>> + *  #define ASSERT() ...
>>>>> + *  ...
>>>>> + */
>>>>> +
>>>>>    #include <xen/types.h>
>>>>>    
>>>>>    #if defined(CONFIG_ARM_32)
>>>>> @@ -11,76 +29,7 @@
>>>>>    # error "unknown ARM variant"
>>>>>    #endif
>>>>>    
>>>>> -#define BUG_FRAME_STRUCT
>>>>> -
>>>>> -struct bug_frame {
>>>>> -    signed int loc_disp;    /* Relative address to the bug address */
>>>>> -    signed int file_disp;   /* Relative address to the filename */
>>>>> -    signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
>>>>> -    uint16_t line;          /* Line number */
>>>>> -    uint32_t pad0:16;       /* Padding for 8-bytes align */
>>>>> -};
>>>>> -
>>>>> -#define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
>>>>> -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
>>>>> -#define bug_line(b) ((b)->line)
>>>>> -#define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
>>>>> -
>>>>> -/* Many versions of GCC doesn't support the asm %c parameter which would
>>>>> - * be preferable to this unpleasantness. We use mergeable string
>>>>> - * sections to avoid multiple copies of the string appearing in the
>>>>> - * Xen image. BUGFRAME_run_fn needs to be handled separately.
>>>>> - */
>>>>
>>>> Given this comment ...
>>>>
>>>>> -#define BUG_FRAME(type, line, file, has_msg, msg) do {                \
>>>>> -    BUILD_BUG_ON((line) >> 16);                                      \
>>>>> -    BUILD_BUG_ON((type) >= BUGFRAME_NR);                             \
>>>>> -    asm ("1:"BUG_INSTR"\n"                                           \
>>>>> -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"         \
>>>>> -         "2:\t.asciz " __stringify(file) "\n"                        \
>>>>> -         "3:\n"                                                      \
>>>>> -         ".if " #has_msg "\n"                                        \
>>>>> -         "\t.asciz " #msg "\n"                                       \
>>>>> -         ".endif\n"                                                  \
>>>>> -         ".popsection\n"                                             \
>>>>> -         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
>>>>> -         "4:\n"                                                      \
>>>>> -         ".p2align 2\n"                                              \
>>>>> -         ".long (1b - 4b)\n"                                         \
>>>>> -         ".long (2b - 4b)\n"                                         \
>>>>> -         ".long (3b - 4b)\n"                                         \
>>>>> -         ".hword " __stringify(line) ", 0\n"                         \
>>>>> -         ".popsection");                                             \
>>>>> -} while (0)
>>>>> -
>>>>> -/*
>>>>> - * GCC will not allow to use "i" when PIE is enabled (Xen doesn't set the
>>>>> - * flag but instead rely on the default value from the compiler). So the
>>>>> - * easiest way to implement run_in_exception_handler() is to pass the to
>>>>> - * be called function in a fixed register.
>>>>> - */
>>>>> -#define  run_in_exception_handler(fn) do {                           \
>>>>> -    asm ("mov " __stringify(BUG_FN_REG) ", %0\n"                     \
>>>>> -         "1:"BUG_INSTR"\n"                                           \
>>>>> -         ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn) ","\
>>>>> -         "             \"a\", %%progbits\n"                          \
>>>>> -         "2:\n"                                                      \
>>>>> -         ".p2align 2\n"                                              \
>>>>> -         ".long (1b - 2b)\n"                                         \
>>>>> -         ".long 0, 0, 0\n"                                           \
>>>>> -         ".popsection" :: "r" (fn) : __stringify(BUG_FN_REG) );      \
>>>>> -} while (0)
>>>>> -
>>>>> -#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 0, "")
>>>>> -
>>>>> -#define BUG() do {                                              \
>>>>> -    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 0, "");        \
>>>>> -    unreachable();                                              \
>>>>> -} while (0)
>>>>> -
>>>>> -#define assert_failed(msg) do {                                 \
>>>>> -    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, msg);     \
>>>>> -    unreachable();                                              \
>>>>> -} while (0)
>>>>> +#define BUG_ASM_CONST   "c"
>>>>
>>>> ... you should explain in the commit message why this is needed and
>>>> the problem described above is not a problem anymore.
>>>>
>>>> For instance, I managed to build it without 'c' on arm64 [1]. But it
>>>> does break on arm32 [2]. I know that Arm is also where '%c' was an
>>>> issue.
>>>>
>>>> Skimming through Linux, the reason seems to be that GCC may add '#'
>>>> when it should not. That said, I haven't looked at the impact on the
>>>> generic implementation, nor have I looked at which versions may be
>>>> affected (the original message was from 2011).
>>> You are right that some compilers add '#' when they shouldn't. The
>>> same thing happens with RISC-V.

I am a bit confused by this answer. My point was that, at least on
Arm 32-bit, we need to use 'c'; otherwise it breaks.

AFAIU, this is not the same problem on RISC-V...

>>
>> RISC-V doesn't know of a '#' prefix, does it? '#' is a comment
>> character
>> there afaik, like for many other architectures.
> It doesn't, and for RISC-V '#' is a comment character.
> 
> AFAIK '%c' is needed to skip the immediate prefix ('#', or '$' in the
> case of x86).
> 
> I mean that RISC-V doesn't put anything before an immediate, so there
> is no need to use '%c' to skip a prefix before the immediate; but if we
> start to use '%c' it will cause a compiler issue.

... because here you say it will break when using 'c'. Did I miss anything?

Anyway, it sounds to me like the default definition in xen/bug.h should 
be using 'c' rather than being empty, because this seems to be the more 
common approach.

To reduce the number of patches to resend, I was actually thinking to 
merge patches #1-3 and #5 (so leave this patch alone) and modify the 
default in a follow-up. Any thoughts?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 18:11:15 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180147-mainreport@xen.org>
Subject: [libvirt test] 180147: tolerable trouble: fail/pass/starved - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 18:10:55 +0000

flight 180147 libvirt real [real]
flight 180158 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180147/
http://logs.test-lab.xenproject.org/osstest/logs/180158/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 180158-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              a56833e47a76e406c62ea0531a99ca199ad5b00a
baseline version:
 libvirt              d292ddf1cc268bdd8a494f8e7ce76dc3445c26ab

Last test of basis   180134  2023-04-04 04:20:21 Z    1 days
Testing same since   180147  2023-04-05 04:18:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   d292ddf1cc..a56833e47a  a56833e47a76e406c62ea0531a99ca199ad5b00a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 18:11:48 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180156-mainreport@xen.org>
Subject: [ovmf test] 180156: all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 18:11:45 +0000

flight 180156 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180156/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 a56ee36c494cb1dfe795259b2cba706ef55b5212
baseline version:
 ovmf                 cdd79996c217805a5bd67bb0c0e4ca05474ef92e

Last test of basis   180150  2023-04-05 09:43:16 Z    0 days
Testing same since   180156  2023-04-05 16:13:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@quicinc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cdd79996c2..a56ee36c49  a56ee36c494cb1dfe795259b2cba706ef55b5212 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 18:48:07 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180157-mainreport@xen.org>
Subject: [xen-unstable-smoke test] 180157: tolerable trouble: pass/starved - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 18:48:00 +0000

flight 180157 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180157/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  55c6d4e2257ca8aabb15232f696ede49642efa3c
baseline version:
 xen                  48d76e6da92f9ef76c8468e299349a2f698362fa

Last test of basis   180155  2023-04-05 13:01:51 Z    0 days
Testing same since   180157  2023-04-05 17:01:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   48d76e6da9..55c6d4e225  55c6d4e2257ca8aabb15232f696ede49642efa3c -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 19:52:06 2023
Message-ID: <fe24d528-3f65-6e26-d467-60794b4f3ac7@citrix.com>
Date: Wed, 5 Apr 2023 20:51:27 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/3] x86/treewide: Drop the TRAP_* legacy names
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230220115956.1522728-1-andrew.cooper3@citrix.com>
 <20230220115956.1522728-4-andrew.cooper3@citrix.com>
 <1684ac85-f9ac-a57e-9529-f2e09f6f81ff@suse.com>
In-Reply-To: <1684ac85-f9ac-a57e-9529-f2e09f6f81ff@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0388.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18f::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|CH0PR03MB6114:EE_
X-MS-Office365-Filtering-Correlation-Id: 3fd15747-8ce3-438f-14fd-08db360f2858
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3fd15747-8ce3-438f-14fd-08db360f2858
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 19:51:34.6495
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: g9hnIYtHOgQybaIJXyLbH9tSRzYg3tkRxsRhWJDzROtSn+IaKmNjLlVrBMsHQhRbduQeRxdYKzcQ0ZCfZMVa7Ih8fhOfzwxyaaBRdHDA1LU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB6114

On 21/02/2023 1:10 pm, Jan Beulich wrote:
> On 20.02.2023 12:59, Andrew Cooper wrote:
>> We have two naming schemes for exceptions - X86_EXC_?? which use the
>> architectural abbreviations, and TRAP_* which is a mix of terminology and
>> nonstandard abbreviations.  Switch to X86_EXC_* uniformly.
>>
>> No functional change, confirmed by diffing the disassembly.  Only 7 binary
>> changes, and they're all __LINE__ being passed into printk().
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

Given that this is proving a pain to rebase, I've pulled it out on its
own and committed it.  I'll rebase patches 1 and 2 over this at some
other point.

>> --- a/xen/arch/x86/mm/shadow/multi.c
>> +++ b/xen/arch/x86/mm/shadow/multi.c
>> @@ -2745,9 +2745,9 @@ static int cf_check sh_page_fault(
>>           * stream under Xen's feet.
>>           */
>>          if ( emul_ctxt.ctxt.event.type == X86_EVENTTYPE_HW_EXCEPTION &&
>> -             ((emul_ctxt.ctxt.event.vector == TRAP_page_fault) ||
>> -              (((emul_ctxt.ctxt.event.vector == TRAP_gp_fault) ||
>> -                (emul_ctxt.ctxt.event.vector == TRAP_stack_error)) &&
>> +             ((emul_ctxt.ctxt.event.vector == X86_EXC_PF) ||
>> +              (((emul_ctxt.ctxt.event.vector == X86_EXC_GP) ||
>> +                (emul_ctxt.ctxt.event.vector == X86_EXC_SS)) &&
>>                 emul_ctxt.ctxt.event.error_code == 0)) )
>>              hvm_inject_event(&emul_ctxt.ctxt.event);
>>          else
> Entirely unrelated to your change, but seeing that this piece of code is
> being touched: Aren't we too lax here with #PF? Shouldn't we refuse
> propagation also for e.g. reserved bit faults and insn fetch ones? Maybe
> even for anything that isn't a supervisor write?

Imagine a guest does a CMPXCHG16B which straddles a page boundary, with
the lower part hitting a shadow page table, and the higher half hitting
a mapping that the guest has mapped with RSVD bits in the PTE.

In this scenario, there would be a VMExit type PF (write to a page which
is actually read-only because it's a shadowed pagetable), and during the
course of emulation we'd permit the lower half, then encounter PF[Rsvd]
for the second half.

And the correct action here is genuinely to pass PF[Rsvd] back to the
guest.  So no, filtering on Rsvd is not the appropriate course of
action.  Similarly for a fetch fault, the guest could genuinely have
paged out the frame we're interested in during the time we've been
spending in the sh_page_fault() handler.


Whether the else path, choosing UNHANDLEABLE and unshadowing the frame,
is really the right course of action is a different question...  I
think the current behaviour is safe, and it's not as if this is a
codepath worth optimising these days.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 20:11:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 20:11:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518621.805363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk9U0-0000Lo-TG; Wed, 05 Apr 2023 20:11:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518621.805363; Wed, 05 Apr 2023 20:11:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk9U0-0000Lh-Q5; Wed, 05 Apr 2023 20:11:52 +0000
Received: by outflank-mailman (input) for mailman id 518621;
 Wed, 05 Apr 2023 20:11:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S8B2=74=kernel.org=helgaas@srs-se1.protection.inumbo.net>)
 id 1pk9U0-0000Lb-9B
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 20:11:52 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19985bcf-d3ee-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 22:11:51 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9660962810;
 Wed,  5 Apr 2023 20:11:49 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id ADAFAC433EF;
 Wed,  5 Apr 2023 20:11:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19985bcf-d3ee-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680725509;
	bh=RtwX6KJxYQmKc5eE6Ge2DG54ynisAFNybPH8hjW2Myw=;
	h=Date:From:To:Cc:Subject:In-Reply-To:From;
	b=dHKfeTPAenu64KkcwFR/WKz+RNZhtj4dMwwpBM3wvHCNU/ZTyHF9/pkW9hknVc0cZ
	 w7fp/t1tuFEZE/K5DTqgSzyGrz1qcWh7zusMfq4giOpy8g4OyLvojp9PCs9I6WB4rK
	 clFPb/fo8eu8ihkbsVTM4o62kBSyBDbIXU0jPgtBXm9exOFcJgVYeiMbQUYkfN6EK9
	 XV0ezMaCDooYBPeJf/A5iyTsnCn5G67bVixFgikYhvxBzUA3VpHE5+iVnBVoFC2sqe
	 xqiRjZGTB6oUN9IMzacEfdZ6ZznSh88QQVnCkDLmIu9+eOrCekx5YQQmw2U3RHRDgl
	 zr5da0P/R5hug==
Date: Wed, 5 Apr 2023 15:11:47 -0500
From: Bjorn Helgaas <helgaas@kernel.org>
To: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: =?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>,
	Krzysztof =?utf-8?Q?Wilczy=C5=84ski?= <kw@linux.com>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Randy Dunlap <rdunlap@infradead.org>, Arnd Bergmann <arnd@arndb.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Pali =?iso-8859-1?Q?Roh=E1r?= <pali@kernel.org>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>,
	Juergen Gross <jgross@suse.com>,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-pci@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
	Andrew Lunn <andrew@lunn.ch>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Gregory Clement <gregory.clement@bootlin.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Russell King <linux@armlinux.org.uk>,
	Nicholas Piggin <npiggin@gmail.com>,
	Bjorn Helgaas <bhelgaas@google.com>, Rich Felker <dalias@libc.org>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	Miguel Ojeda <ojeda@kernel.org>, Matt Turner <mattst88@gmail.com>,
	Anatolij Gustschin <agust@denx.de>,
	"David S. Miller" <davem@davemloft.net>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Subject: Re: [PATCH v8 5/7] PCI: Allow pci_bus_for_each_resource() to take
 less arguments
Message-ID: <20230405201147.GA3637852@bhelgaas>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ZC1glzw4F9F8zCK+@smile.fi.intel.com>

On Wed, Apr 05, 2023 at 02:50:47PM +0300, Andy Shevchenko wrote:
> On Thu, Mar 30, 2023 at 07:24:32PM +0300, Andy Shevchenko wrote:
> > Refactor pci_bus_for_each_resource() in the same way as is done in the
> > pci_dev_for_each_resource() case. This allows the iterator to be hidden
> > inside the loop, where it is not otherwise used.
> > 
> > No functional changes intended.
> 
> Bjorn, this has wrong author in your tree:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git/commit/?h=resource&id=46dbad19a59e0dd8f1e7065e5281345797fbb365

I botched it, sorry, should be fixed now.

Bjorn


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 20:18:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 20:18:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518626.805374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk9ab-00010p-Iw; Wed, 05 Apr 2023 20:18:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518626.805374; Wed, 05 Apr 2023 20:18:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pk9ab-00010i-GI; Wed, 05 Apr 2023 20:18:41 +0000
Received: by outflank-mailman (input) for mailman id 518626;
 Wed, 05 Apr 2023 20:18:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S8B2=74=kernel.org=helgaas@srs-se1.protection.inumbo.net>)
 id 1pk9aa-00010c-05
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 20:18:40 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0b1f14d8-d3ef-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 22:18:37 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id B8C1A62A31;
 Wed,  5 Apr 2023 20:18:34 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CF579C433EF;
 Wed,  5 Apr 2023 20:18:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b1f14d8-d3ef-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680725914;
	bh=nzqjLaTJey3sRM+1TqQcKEYWams7IVp4kAG1Ou4QbaQ=;
	h=Date:From:To:Cc:Subject:In-Reply-To:From;
	b=qSIFfK6lOkdOAB5LY9zLp6kglpMt1j7LNegofExp0NbD3DdGJu2FIQSd9fKDakYSF
	 8EdaJODqwbg6kctJMgL0riv0zeo2GlzJNTyce/xBoaGt5wkbyslXtmEQr79jETD5VN
	 JkGR/SuDh0S5XNaO9WvVcUE8aLyF+fq2KHagkNoYCPN/ufYZsMFzIEL8TOwxz22rru
	 dTfc7KYc7iXQfdVOTiFqncjiOZ6vNACqFFlXmfZkUJ6vh6opIHeXG2F88mLt2+N1/j
	 qSW8PkxMlDLNSe4zXOh05GwayXmUNnmLRiK0jPqk3fXDd1WbPKGLoR9/9OHb8FIaNI
	 uOrsrDeBHNVGg==
Date: Wed, 5 Apr 2023 15:18:32 -0500
From: Bjorn Helgaas <helgaas@kernel.org>
To: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: =?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>,
	Krzysztof =?utf-8?Q?Wilczy=C5=84ski?= <kw@linux.com>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Randy Dunlap <rdunlap@infradead.org>, Arnd Bergmann <arnd@arndb.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Pali =?iso-8859-1?Q?Roh=E1r?= <pali@kernel.org>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>,
	Juergen Gross <jgross@suse.com>,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-pci@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Matt Turner <mattst88@gmail.com>,
	Russell King <linux@armlinux.org.uk>, Andrew Lunn <andrew@lunn.ch>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	Gregory Clement <gregory.clement@bootlin.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Nicholas Piggin <npiggin@gmail.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	Anatolij Gustschin <agust@denx.de>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Rich Felker <dalias@libc.org>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	"David S. Miller" <davem@davemloft.net>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH v8 0/7] Add pci_dev_for_each_resource() helper and update
 users
Message-ID: <20230405201832.GA3638070@bhelgaas>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ZC0xK4YJrKga7akk@smile.fi.intel.com>

On Wed, Apr 05, 2023 at 11:28:27AM +0300, Andy Shevchenko wrote:
> On Tue, Apr 04, 2023 at 11:11:01AM -0500, Bjorn Helgaas wrote:
> > On Thu, Mar 30, 2023 at 07:24:27PM +0300, Andy Shevchenko wrote:
> > > Provide two new helper macros to iterate over PCI device resources and
> > > convert users.
> > > 
> > > Looking at it, refactor existing pci_bus_for_each_resource() and convert
> > > users accordingly.

> > Applied 2-7 to pci/resource for v6.4, thanks, I really like this!
> 
> Btw, can you actually drop patch 7, please?

Done.

> > I omitted
> > 
> >   [1/7] kernel.h: Split out COUNT_ARGS() and CONCATENATE()"
> > 
> > only because it's not essential to this series and has only a trivial
> > one-line impact on include/linux/pci.h.
> 
> I'm not sure exactly what "essential" means to you here, but I included
> that patch because it makes a split which others can reuse later, and
> because keeping kernel.h out of the header is the objective I want to
> achieve. Without this patch that goal is deferred. Still, as you have
> noticed, the macros in the rest of the patches can be compiled and used
> without it.

I haven't followed the kernel.h splitting, and I try to avoid
incidental changes outside of the files I maintain, so I just wanted
to keep this series purely PCI and avoid any possible objections to a
new include file or discussion about how it should be done.


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 20:45:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 20:45:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518634.805384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkA06-0004Vn-Ol; Wed, 05 Apr 2023 20:45:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518634.805384; Wed, 05 Apr 2023 20:45:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkA06-0004Vg-LI; Wed, 05 Apr 2023 20:45:02 +0000
Received: by outflank-mailman (input) for mailman id 518634;
 Wed, 05 Apr 2023 20:45:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gV66=74=citrix.com=prvs=45279ec78=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkA05-0004Va-IR
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 20:45:01 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b9d15acb-d3f2-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 22:44:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9d15acb-d3f2-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680727499;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=eZkTQLlgPViTbuqQR+5y7JsM486poKpDOSK+FX/m5MU=;
  b=MMa1x3ZRZTC0E6IsB9dBmwPGG2jPPZuf9Yv/rWpYOQtX8SoC5+2aSvek
   9SPNS3TiwuD9mkWqzL9W3LbK9n6NmdmS9a8TKHQUvHDK1ddDlXh67XXj9
   dudx8sWL+4NlnWIjUwi1CJV4/UWEPKJ+K4BXF/p7u4UNBcZgIACa7crqO
   o=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 103266266
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,321,1673931600"; 
   d="scan'208";a="103266266"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/svm: Provide EXITINFO decodes for MOV CR/DR intercepts
Date: Wed, 5 Apr 2023 21:44:23 +0100
Message-ID: <20230405204423.2113418-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This removes raw number manipulation, and makes the logic easier to follow.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/hvm/svm/svm.c              | 4 ++--
 xen/arch/x86/include/asm/hvm/svm/vmcb.h | 5 +++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 8d8b250101ce..90c2f89c1b0d 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1740,7 +1740,7 @@ static void svm_vmexit_do_cr_access(
     cr = vmcb->exitcode - VMEXIT_CR0_READ;
     dir = (cr > 15);
     cr &= 0xf;
-    gp = vmcb->exitinfo1 & 0xf;
+    gp = vmcb->ei.mov.gpr;
 
     rc = dir ? hvm_mov_to_cr(cr, gp) : hvm_mov_from_cr(cr, gp);
 
@@ -2961,7 +2961,7 @@ void svm_vmexit_handler(void)
 
     case VMEXIT_CR0_READ ... VMEXIT_CR15_READ:
     case VMEXIT_CR0_WRITE ... VMEXIT_CR15_WRITE:
-        if ( cpu_has_svm_decode && (vmcb->exitinfo1 & (1ULL << 63)) )
+        if ( cpu_has_svm_decode && vmcb->ei.mov.mov_insn )
             svm_vmexit_do_cr_access(vmcb, regs);
         else if ( !hvm_emulate_one_insn(x86_insn_is_cr_access, "CR access") )
             hvm_inject_hw_exception(X86_EXC_GP, 0);
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index b809e85507aa..77e3bd9aa048 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -450,6 +450,11 @@ struct vmcb_struct {
 
                 uint64_t nrip;
             } io;
+            struct {
+                uint64_t gpr:4;
+                uint64_t :59;
+                bool     mov_insn:1; /* MOV, as opposed to LMSW, CLTS, etc */
+            } mov;
             struct {
                 uint16_t sel;
                 uint64_t :48;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 21:21:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 21:21:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518641.805394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkAZA-0000aI-Fg; Wed, 05 Apr 2023 21:21:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518641.805394; Wed, 05 Apr 2023 21:21:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkAZA-0000Zv-CQ; Wed, 05 Apr 2023 21:21:16 +0000
Received: by outflank-mailman (input) for mailman id 518641;
 Wed, 05 Apr 2023 21:21:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gV66=74=citrix.com=prvs=45279ec78=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkAZ8-0000Zn-VU
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 21:21:15 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c89d1104-d3f7-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 23:21:12 +0200 (CEST)
Received: from mail-co1nam11lp2171.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Apr 2023 17:21:07 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5458.namprd03.prod.outlook.com (2603:10b6:208:29d::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 21:21:05 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 21:21:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c89d1104-d3f7-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680729672;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=zgconbBhy1S8kc6pf8iqApafjGUU32fKIG1lZsK0EeY=;
  b=ax6u0s9DmOvQ5TDljPdGxd41NBpGX97+vaS97t0K/ZtbnZz20a5WYKhw
   X6RwQZ/jfQ9CG290CD70qPIM30ciXLYpqNidQZKwziUdlt6TJ4A22c9ca
   uYa21iS6aVbfCdnI1Dh3mHPevkiLse3JOzOZdL5VruLvSihSoYgLbsMtR
   w=;
X-IronPort-RemoteIP: 104.47.56.171
X-IronPort-MID: 104387275
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
Message-ID: <e04f19e5-5be2-beb2-9a07-66f146e1f07c@citrix.com>
Date: Wed, 5 Apr 2023 22:20:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 1/3] tools/xenctrl: add xc_get_cpu_version()
Content-Language: en-GB
To: Sergey Dyasli <sergey.dyasli@citrix.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230404160655.2354-1-sergey.dyasli@citrix.com>
 <20230404160655.2354-2-sergey.dyasli@citrix.com>
In-Reply-To: <20230404160655.2354-2-sergey.dyasli@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 04/04/2023 5:06 pm, Sergey Dyasli wrote:
> As a wrapper for XENPF_get_cpu_version platform op.
>
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 21:30:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 21:30:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518648.805404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkAhq-00027X-Bn; Wed, 05 Apr 2023 21:30:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518648.805404; Wed, 05 Apr 2023 21:30:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkAhq-00027Q-8d; Wed, 05 Apr 2023 21:30:14 +0000
Received: by outflank-mailman (input) for mailman id 518648;
 Wed, 05 Apr 2023 21:30:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gV66=74=citrix.com=prvs=45279ec78=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkAhp-00026F-4u
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 21:30:13 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0a8137d6-d3f9-11ed-b464-930f4c7d94ae;
 Wed, 05 Apr 2023 23:30:10 +0200 (CEST)
Received: from mail-sn1nam02lp2048.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Apr 2023 17:30:01 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5571.namprd03.prod.outlook.com (2603:10b6:208:298::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.23; Wed, 5 Apr
 2023 21:29:59 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 21:29:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a8137d6-d3f9-11ed-b464-930f4c7d94ae
Message-ID: <0c249d10-02c7-9d76-bc3d-2b3e2ece38a9@citrix.com>
Date: Wed, 5 Apr 2023 22:29:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/9] x86emul: support WRMSRNS
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <0c2ddae9-3222-9755-b6e1-35e51410093b@suse.com>
In-Reply-To: <0c2ddae9-3222-9755-b6e1-35e51410093b@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 04/04/2023 3:50 pm, Jan Beulich wrote:
> This insn differs from WRMSR solely in the lack of serialization. Hence
> the code used there can simply be used here as well, plus a feature
> check of course.

I have a feeling this is a bit stale now that 0f01.c has moved into a
separate file?

>  As there's no other infrastructure needed beyond
> permitting the insn for PV privileged-op emulation (in particular no
> separate new VMEXIT) we can expose the insn to guests right away.

There are good reasons not to expose this to PV guests.

The #GP fault to trigger emulation is serialising, so from the guest's
point of view there is literally no difference between WRMSR and WRMSRNS.

All code is going to need to default to WRMSR for compatibility reasons,
so not advertising WRMSRNS will save the guest going through and
self-modifying itself to use a "more efficient" instruction which is not
actually more efficient.


But, in general and as said before, I feel it is irresponsible to be
continuing to add slow guest-fastpaths without introducing a real fastpath.

We desperately need HYPERCALL_pv_priv_op which can take an array of
inputs, and has a sub-op for each of CPUID, RDMSR, WRMSR and whatever
other ops are trapping because of no fastpath.  Perf testing routinely
shows that #GP/#UD faults are only 2 orders of magnitude rarer in PV
guests than #PF, which is an insane wastage when guests are supposed to
be enlightened and use fastpaths in the first place.


The emulation implementation is fine, so Reviewed-by: Andrew Cooper
<andrew.cooper3@citrix.com> but dependent on the PV guest changes being
dropped.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 21:49:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 21:49:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518654.805413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkAzn-0003nI-3Q; Wed, 05 Apr 2023 21:48:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518654.805413; Wed, 05 Apr 2023 21:48:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkAzn-0003nB-0Y; Wed, 05 Apr 2023 21:48:47 +0000
Received: by outflank-mailman (input) for mailman id 518654;
 Wed, 05 Apr 2023 21:48:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkAzl-0003n1-Jw; Wed, 05 Apr 2023 21:48:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkAzl-0005Gx-Ex; Wed, 05 Apr 2023 21:48:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkAzk-0002ce-SJ; Wed, 05 Apr 2023 21:48:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkAzk-0002Ux-Rk; Wed, 05 Apr 2023 21:48:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180160-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180160: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=881ba20eb0222305a9d2cd090c9345992794f4f5
X-Osstest-Versions-That:
    xen=55c6d4e2257ca8aabb15232f696ede49642efa3c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 21:48:44 +0000

flight 180160 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180160/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  881ba20eb0222305a9d2cd090c9345992794f4f5
baseline version:
 xen                  55c6d4e2257ca8aabb15232f696ede49642efa3c

Last test of basis   180157  2023-04-05 17:01:51 Z    0 days
Testing same since   180160  2023-04-05 20:01:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   55c6d4e225..881ba20eb0  881ba20eb0222305a9d2cd090c9345992794f4f5 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 21:53:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 21:53:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518660.805424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkB3n-0005DM-Ky; Wed, 05 Apr 2023 21:52:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518660.805424; Wed, 05 Apr 2023 21:52:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkB3n-0005DF-Hz; Wed, 05 Apr 2023 21:52:55 +0000
Received: by outflank-mailman (input) for mailman id 518660;
 Wed, 05 Apr 2023 21:52:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gV66=74=citrix.com=prvs=45279ec78=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkB3m-0005D9-5k
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 21:52:54 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 35bc4799-d3fc-11ed-85db-49a42c6b2330;
 Wed, 05 Apr 2023 23:52:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35bc4799-d3fc-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680731572;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=YG+tqcOrXnWWW86uHDi9/SgPfWk33wf/QiRsN0bkKP4=;
  b=GvdV6e6Oun+xMb2swa0tfxNE7IRA52enQkyBq9vZdLMkZQo0O/wMXCaI
   1efchE+wz3LUtVY1DigPynNgQH64LAvWwveB4cDkfhrS3/Bojzz99+SOv
   MaIKvyf1gjC2voZC8aXpYOqG+LA6Ekdy1onC+wWixu1wQu2FCNdE9Jlm2
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104889208
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,322,1673931600"; 
   d="scan'208";a="104889208"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering guest"
Date: Wed, 5 Apr 2023 22:52:45 +0100
Message-ID: <20230405215245.2137356-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

At the time of XSA-170, the x86 instruction emulator was genuinely broken.  It
would load arbitrary values into %rip, and putting a check here was probably
the best stopgap security fix.  It should have been reverted following c/s
81d3a0b26c1 "x86emul: limit-check branch targets", which corrected the
emulator's behaviour.

However, everyone involved in XSA-170, myself included, failed to read the SDM
correctly.  On the subject of %rip consistency checks, the SDM stated:

  If the processor supports N < 64 linear-address bits, bits 63:N must be
  identical

A non-canonical %rip (and SSP more recently) is an explicitly legal state in
x86, and the VMEntry consistency checks are intentionally off-by-one from a
regular canonical check.

The consequence of this bug is that Xen will currently take a legal x86 state
which would successfully VMEnter, and corrupt it into having non-architectural
behaviour.

Furthermore, in the time this bugfix has been pending in public, I
successfully persuaded Intel to clarify the SDM by adding the following
text:

  The guest RIP value is not required to be canonical; the value of bit N-1
  may differ from that of bit N.
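
To make the off-by-one concrete, here is a sketch (my illustration, not part
of the patch; it assumes N = 48 linear-address bits) of a strict canonical
check next to the VMEntry consistency check described above:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VADDR_BITS 48 /* assumption: the CPU supports 48 linear-address bits */

/* Strict canonical check: bits 63:47 must be identical
 * (i.e. the value sign-extends from bit 47). */
static bool is_canonical(uint64_t addr)
{
    return (uint64_t)((int64_t)(addr << (64 - VADDR_BITS)) >>
                      (64 - VADDR_BITS)) == addr;
}

/* VMEntry consistency check per the clarified SDM: only bits 63:48 must be
 * identical, so bit 47 may differ from bit 48
 * (i.e. the value sign-extends from bit 48). */
static bool vmentry_rip_ok(uint64_t rip)
{
    return (uint64_t)((int64_t)(rip << (63 - VADDR_BITS)) >>
                      (63 - VADDR_BITS)) == rip;
}
```

For example, 0x0000800000000000 fails the strict check but passes the VMEntry
check: a legal x86 state of exactly the kind the reverted code corrupted.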

Fixes: ffbbfda377 ("x86/VMX: sanitize rIP before re-entering guest")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

Rewrite the commit message, but no change to the fix.

Fun fact... LAM almost required this to be reverted in order to be supported,
but the penultimate draft of the spec backed down and made LAM apply only to
data accesses, not instruction fetches.
---
 xen/arch/x86/hvm/vmx/vmx.c | 34 +---------------------------------
 1 file changed, 1 insertion(+), 33 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index bfc9693f7e43..79ff59b11c6b 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -4035,7 +4035,7 @@ static void undo_nmis_unblocked_by_iret(void)
 void vmx_vmexit_handler(struct cpu_user_regs *regs)
 {
     unsigned long exit_qualification, exit_reason, idtv_info, intr_info = 0;
-    unsigned int vector = 0, mode;
+    unsigned int vector = 0;
     struct vcpu *v = current;
     struct domain *currd = v->domain;
 
@@ -4730,38 +4730,6 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
 out:
     if ( nestedhvm_vcpu_in_guestmode(v) )
         nvmx_idtv_handling();
-
-    /*
-     * VM entry will fail (causing the guest to get crashed) if rIP (and
-     * rFLAGS, but we don't have an issue there) doesn't meet certain
-     * criteria. As we must not allow less than fully privileged mode to have
-     * such an effect on the domain, we correct rIP in that case (accepting
-     * this not being architecturally correct behavior, as the injected #GP
-     * fault will then not see the correct [invalid] return address).
-     * And since we know the guest will crash, we crash it right away if it
-     * already is in most privileged mode.
-     */
-    mode = vmx_guest_x86_mode(v);
-    if ( mode == 8 ? !is_canonical_address(regs->rip)
-                   : regs->rip != regs->eip )
-    {
-        gprintk(XENLOG_WARNING, "Bad rIP %lx for mode %u\n", regs->rip, mode);
-
-        if ( vmx_get_cpl() )
-        {
-            __vmread(VM_ENTRY_INTR_INFO, &intr_info);
-            if ( !(intr_info & INTR_INFO_VALID_MASK) )
-                hvm_inject_hw_exception(X86_EXC_GP, 0);
-            /* Need to fix rIP nevertheless. */
-            if ( mode == 8 )
-                regs->rip = (long)(regs->rip << (64 - VADDR_BITS)) >>
-                            (64 - VADDR_BITS);
-            else
-                regs->rip = regs->eip;
-        }
-        else
-            domain_crash(v->domain);
-    }
 }
 
 static void lbr_tsx_fixup(void)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 05 23:01:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 23:01:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518666.805434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkC7t-0003wv-MR; Wed, 05 Apr 2023 23:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518666.805434; Wed, 05 Apr 2023 23:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkC7t-0003wo-Iu; Wed, 05 Apr 2023 23:01:13 +0000
Received: by outflank-mailman (input) for mailman id 518666;
 Wed, 05 Apr 2023 23:01:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkC7r-0003we-Jg; Wed, 05 Apr 2023 23:01:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkC7r-0000GA-DC; Wed, 05 Apr 2023 23:01:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkC7q-0004Oj-Pg; Wed, 05 Apr 2023 23:01:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkC7q-0005C9-PI; Wed, 05 Apr 2023 23:01:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jc85nSYJ6tQ3khPKcmN4waFRELzzHGSFhiWPVYGOGMM=; b=xH9yNCrZfzHaprp1LSSPt9xDQk
	tkqYgdQ5wj7d1e72SQe5xRWolbhfOGSL/h4gfw7BburUOPBm+Z26gekPaceOXx9HhMJ9sj8OutXdR
	PGJfb57yWyFURwPH77U1ZxjXc5XDMxuGBPXZTC7GYWqACHpST0baX67pcb1PvB7JGgUo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180161-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180161: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=2bb693894920e634153275bea60278a9f192a8ef
X-Osstest-Versions-That:
    ovmf=a56ee36c494cb1dfe795259b2cba706ef55b5212
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 23:01:10 +0000

flight 180161 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180161/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 2bb693894920e634153275bea60278a9f192a8ef
baseline version:
 ovmf                 a56ee36c494cb1dfe795259b2cba706ef55b5212

Last test of basis   180156  2023-04-05 16:13:44 Z    0 days
Testing same since   180161  2023-04-05 20:42:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Patrick Rudolph <patrick.rudolph@9elements.com>
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   a56ee36c49..2bb6938949  2bb693894920e634153275bea60278a9f192a8ef -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 23:02:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 23:02:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518672.805444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkC9J-0004TL-14; Wed, 05 Apr 2023 23:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518672.805444; Wed, 05 Apr 2023 23:02:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkC9I-0004TE-U3; Wed, 05 Apr 2023 23:02:40 +0000
Received: by outflank-mailman (input) for mailman id 518672;
 Wed, 05 Apr 2023 23:02:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gV66=74=citrix.com=prvs=45279ec78=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkC9G-0004T2-If
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 23:02:38 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f349e007-d405-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 01:02:35 +0200 (CEST)
Received: from mail-bn8nam04lp2040.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Apr 2023 19:02:32 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6224.namprd03.prod.outlook.com (2603:10b6:a03:303::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 23:02:29 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 23:02:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f349e007-d405-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680735755;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=kggSEvmhTi06QG6RXgFMaSfrTmF1WUs8NNX6yajr5Kk=;
  b=FK6itah18DsoGv7nQCrxsMZedTmhXnCKqeJt3l+cKCu0DcwYtsSek7Sn
   hl5HiXOJ5CEGKO7wdIZkgFw0MmaRaPniuY+919sbARIaMb5GGxIewJpCt
   ZJVKJzPisYAoGsjuO5iZ4El5bwBpvtpGkCmFfBuZWcjp05ifp1/c5lCIE
   0=;
X-IronPort-RemoteIP: 104.47.74.40
X-IronPort-MID: 104395586
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,322,1673931600"; 
   d="scan'208";a="104395586"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UjqZEF+WjGnPfuswrQ6zTxKr03umwBBR1PBsIS+Rk/I=;
 b=nbEvXB5dvwBhZjLux6RwGzMDUlztqWm4NyFxSdhC1my6uqq3gsLz/9kKxWretGWSs/nKK+yt6eXu/YIbo6YXDN9F+rkvYh4vURbWK5ZHU4q4bP3zry4YHZEpxpmb0zu+RSs+grbzVqw5XfdARfqAcc7Jsi9tKfb4rjeAYqlrdWI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <724da9d4-2c8f-dc45-78de-5a50e87adba1@citrix.com>
Date: Thu, 6 Apr 2023 00:02:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 2/3] x86/platform: introduce XENPF_get_ucode_revision
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>, Sergey Dyasli <sergey.dyasli@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230404160655.2354-1-sergey.dyasli@citrix.com>
 <20230404160655.2354-3-sergey.dyasli@citrix.com>
 <a1c16028-3f33-36bb-36cd-b1ad2664b0f9@suse.com>
In-Reply-To: <a1c16028-3f33-36bb-36cd-b1ad2664b0f9@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0572.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:276::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6224:EE_
X-MS-Office365-Filtering-Correlation-Id: cb537f57-3064-4997-b1db-08db3629d45e
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cb537f57-3064-4997-b1db-08db3629d45e
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 23:02:29.1020
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mWk2ccMoiNjqBD2hJbdD/Xkdjs4obLRkua+HUqZh7ZvINGfQnV1Nda1DdGJXLOkREUeB09r0KU8ipNmkyaM1v4JT1m2rGLQPF2FNgtc90Uw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6224

On 05/04/2023 9:56 am, Jan Beulich wrote:
> On 04.04.2023 18:06, Sergey Dyasli wrote:
>> --- a/tools/libs/ctrl/xc_misc.c
>> +++ b/tools/libs/ctrl/xc_misc.c
>> @@ -243,6 +243,24 @@ int xc_get_cpu_version(xc_interface *xch, struct xenpf_pcpu_version *cpu_ver)
>>      return 0;
>>  }
>>  
>> +int xc_get_ucode_revision(xc_interface *xch,
>> +                          struct xenpf_ucode_revision *ucode_rev)
>> +{
>> +    int ret;
>> +    struct xen_platform_op op = {
>> +        .cmd = XENPF_get_ucode_revision,
>> +        .u.ucode_revision.cpu = ucode_rev->cpu,
>> +    };
>> +
>> +    ret = do_platform_op(xch, &op);
>> +    if ( ret != 0 )
>> +        return ret;
> Is there anything wrong with omitting this if() and ...
>
>> +    *ucode_rev = op.u.ucode_revision;
>> +
>> +    return 0;
> ... using "return ret" here?

Conceptually, yes.  *ucode_rev oughtn't to be written to on failure.

More importantly though, what Sergey wrote is consistent with the vast
majority of libxc, and consistency is far more important than a marginal
perf improvement which you won't be able to measure.
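
The convention being defended here — leave the caller's buffer untouched
unless the operation succeeds — can be sketched generically (hypothetical
names and a toy backend; this is not the actual libxc code):

```c
#include <assert.h>
#include <errno.h>

struct result {
    unsigned int cpu;
    unsigned int rev;
};

/* Hypothetical backend standing in for do_platform_op(). */
static int backend_op(struct result *op, int fail)
{
    if (fail)
        return -EBUSY;
    op->rev = 0x2b;
    return 0;
}

/* libxc style: work on a local copy, and copy it out only on success,
 * so the caller's buffer is never clobbered by a failed call. */
static int get_revision(struct result *out, int fail)
{
    struct result op = { .cpu = out->cpu };
    int ret = backend_op(&op, fail);

    if (ret != 0)
        return ret;

    *out = op;

    return 0;
}
```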

>> --- a/xen/arch/x86/platform_hypercall.c
>> +++ b/xen/arch/x86/platform_hypercall.c
>> @@ -640,6 +640,35 @@ ret_t do_platform_op(
>>      }
>>      break;
>>  
>> +    case XENPF_get_ucode_revision:
>> +    {
>> +        struct xenpf_ucode_revision *rev = &op->u.ucode_revision;
>> +
>> +        if ( !get_cpu_maps() )
>> +        {
>> +            ret = -EBUSY;
>> +            break;
>> +        }
>> +
>> +        /* TODO: make it possible to know ucode revisions for parked CPUs */
>> +        if ( (rev->cpu >= nr_cpu_ids) || !cpu_online(rev->cpu) )
>> +            ret = -ENOENT;
> While the cpu_online() check needs to be done under lock, it's kind of
> misleading to tell the caller to try again later when it has passed an
> out-of-range CPU number.

Honestly, I think you over-estimate the likelihood of the cpu map being
contended, and over-estimate by 100% the chances that an out-of-range
CPU is going to be passed.
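
For what it's worth, one way to give an out-of-range CPU number a stable
-ENOENT regardless of cpu-map contention is to range-check before taking the
maps.  A sketch with toy stand-ins for the hypervisor helpers (assumptions
for illustration, not the real Xen code):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Toy stand-ins: 8 possible CPUs, of which only 0-3 are online. */
static unsigned int nr_cpu_ids = 8;
static bool get_cpu_maps(void) { return true; }
static void put_cpu_maps(void) { }
static bool cpu_online(unsigned int cpu) { return cpu < 4; }

/* Range-check first, so a bogus CPU number always yields -ENOENT even when
 * the cpu maps are contended; only the online check needs the maps held. */
static int ucode_revision_errno(unsigned int cpu)
{
    int ret = 0;

    if (cpu >= nr_cpu_ids)
        return -ENOENT;

    if (!get_cpu_maps())
        return -EBUSY;

    if (!cpu_online(cpu))
        ret = -ENOENT;

    put_cpu_maps();

    return ret;
}
```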

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 23:25:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 23:25:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518680.805458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkCUl-00073L-Ve; Wed, 05 Apr 2023 23:24:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518680.805458; Wed, 05 Apr 2023 23:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkCUl-00073E-Sg; Wed, 05 Apr 2023 23:24:51 +0000
Received: by outflank-mailman (input) for mailman id 518680;
 Wed, 05 Apr 2023 23:24:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkCUl-000734-6N; Wed, 05 Apr 2023 23:24:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkCUl-0001G0-4Z; Wed, 05 Apr 2023 23:24:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkCUk-0004vm-OD; Wed, 05 Apr 2023 23:24:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkCUk-0001vo-Nk; Wed, 05 Apr 2023 23:24:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SlyhLZkFqZWume9UQVwEN6plgx++3D8Lrja8hsqr7b4=; b=Td1R8b4eF86IgMCeK3SbW7rnxJ
	C/kNZjozNJsyiLPSpbdq1tglHugr1+Xc3OJjUGKNkFy0AraCIuQ0MqFTK7baPS7pc7wh6Jo8Kic8Z
	HfkEPlt3goHchoAtdjlxlWck+tqre70piwNMXmDYzBABaUsZva2J1i5a7X6VPkY9miyA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180148-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180148: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-migrupgrade:xen-install/dst_host:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=415f7d9404171cbc968b1ea22e7d3523ac2f3fc1
X-Osstest-Versions-That:
    xen=658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Apr 2023 23:24:50 +0000

flight 180148 xen-unstable real [real]
flight 180162 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180148/
http://logs.test-lab.xenproject.org/osstest/logs/180162/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-migrupgrade 11 xen-install/dst_host fail pass in 180162-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180139
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180139
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180139
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180139
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180139
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180139
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180139
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180139
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180139
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  415f7d9404171cbc968b1ea22e7d3523ac2f3fc1
baseline version:
 xen                  658fcb7ac90a4d8b0f4736a7c8f21d0252cb492e

Last test of basis   180139  2023-04-04 16:39:59 Z    1 days
Testing same since   180148  2023-04-05 06:43:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   658fcb7ac9..415f7d9404  415f7d9404171cbc968b1ea22e7d3523ac2f3fc1 -> master


From xen-devel-bounces@lists.xenproject.org Wed Apr 05 23:49:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Apr 2023 23:49:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518686.805468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkCrw-00017i-UK; Wed, 05 Apr 2023 23:48:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518686.805468; Wed, 05 Apr 2023 23:48:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkCrw-00017b-Qw; Wed, 05 Apr 2023 23:48:48 +0000
Received: by outflank-mailman (input) for mailman id 518686;
 Wed, 05 Apr 2023 23:48:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gV66=74=citrix.com=prvs=45279ec78=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkCrw-00017V-5y
 for xen-devel@lists.xenproject.org; Wed, 05 Apr 2023 23:48:48 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6619712b-d40c-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 01:48:45 +0200 (CEST)
Received: from mail-co1nam11lp2171.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 Apr 2023 19:48:42 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA2PR03MB5947.namprd03.prod.outlook.com (2603:10b6:806:11f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Wed, 5 Apr
 2023 23:48:40 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Wed, 5 Apr 2023
 23:48:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6619712b-d40c-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680738525;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=8svVWt7HH2xIy0tqQ2OwV8bT1afdTLaVAIvTAUGR4FY=;
  b=XhqbzWdvCih1xAX1EhLYVRLljp+M5pxYVNqp5Ve42E1P+FSFPMbGF2st
   0KVm7ZOZnPMXrMpJ2spynMm+VwATXEWwAfpTzBViJLr77sGq+ZCfvLO35
   RFcJ9ZAkGkNr00URqD0jmow+SlIoCf+Sos8bceBVahA61V9pvrIatw+jd
   4=;
X-IronPort-RemoteIP: 104.47.56.171
X-IronPort-MID: 104398281
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,322,1673931600"; 
   d="scan'208";a="104398281"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PJMDTH+pScY8ERFsFjI7qubxn3OCYpihgBKYD7h9l4etFodi0kVnECdtGZjfmTNv0Q45xi152wlqTfBq2hW//uDuXvkEVvxd0qlbZT1tmBlTOaSalifS6KZ/sn1ofq3cHseWEZykbc6aE1FH0dNCdu4Eo1o1lGloxNBYslwR7YP0Y/zraSym2dgbxXv1mTCWPseUZ/c/hKIscGn51KhVmCRAII4/EvLTXtMGri4B1w6CZe9UOP74Xe2dIqzsCRfNII15f1LDNR/fwASA7ISg1jwGxTRy9jG/lLsGYLeDSLcMbBszJ+A9+MCrng9oEJmbiWSIV9pda1IIjLq02nnPow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=b89C+WDjoBzAWdyAvIeyTESJkKHIuxXpo6ReecPT/Co=;
 b=FhL62pFx0ntXoHWKMozas0V4YJ4ys4y3SUDNKNAM93vYurt+h4D2Gb3xjeQNdq3n9lk/IjLLfki+wkeMfDJMo8I3cdG/Lmpii2VI5IkFSWWi3jgPKHELW59qynoC5WxFrBAEu5EzDUJAUf0M/JtZAy/DzjoXwuHrWmH1TdQLAjravwWpEA7J8GAkvzUDCzyvINVSOXxAd/NCIbXI5wNodsxgpDDwSa6MMWFx+34xQ3ErMEanl9Ex11rTAfLmT3wxDgPQxPiZnn6fIBT0vCQSHpwIk3bYKYlfjE1bWdbwouBdQosNouXG9jzD8FzEwT5I+hX83oTPVkHWg2gph7dAqw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b89C+WDjoBzAWdyAvIeyTESJkKHIuxXpo6ReecPT/Co=;
 b=vf74fw8r+BOgpbeRF7ibBD4PvejNVO9DrgALKlOA875ORMoqNJVIsq3lZLJEqFSB3+/By2mcYysX6uO6pZH4VNmA/OvzDb+nD+LJ3hSv+Z+/IydQ6asO/v8YNkSZJf8grWHSJVnKvnQAWHJPL979zFC/jbdFX0ntfiP9O7CzFFs=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <8bde7944-a9aa-a243-c621-9cef1c657552@citrix.com>
Date: Thu, 6 Apr 2023 00:48:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 3/3] tools/xen-ucode: print information about currently
 loaded ucode
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>, Sergey Dyasli <sergey.dyasli@citrix.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230404160655.2354-1-sergey.dyasli@citrix.com>
 <20230404160655.2354-4-sergey.dyasli@citrix.com>
 <b46ce1ce-19f3-29d2-af50-e1515da4112f@suse.com>
In-Reply-To: <b46ce1ce-19f3-29d2-af50-e1515da4112f@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0465.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1aa::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SA2PR03MB5947:EE_
X-MS-Office365-Filtering-Correlation-Id: e4e4afa7-7493-4c6b-076e-08db363047f3
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e4e4afa7-7493-4c6b-076e-08db363047f3
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Apr 2023 23:48:39.9875
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: eCbaLpxVs4iqbCZVncydb/L1V2vcKaKagn7GMT4WJ9dXxBSvu0iP8YwL/b0n6CYmdfCBe/KMOK7BKk3bNT2EAVst0QI78qmIt0KBXlQycXo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5947

On 05/04/2023 10:11 am, Jan Beulich wrote:
> On 04.04.2023 18:06, Sergey Dyasli wrote:
>> Add an option to xen-ucode tool to print the currently loaded ucode
>> revision and also print it during usage info.  Print CPU signature and
>> platform flags as well.  The raw data comes from XENPF_get_cpu_version
>> and XENPF_get_ucode_revision platform ops.
>>
>> Example output:
>>     Intel:
>>     CPU signature 06-55-04 (raw 0x00050654) pf 0x1 revision 0x02006e05
>>
>>     AMD:
>>     CPU signature fam19h (raw 0x00a00f11) revision 0x0a0011ce
> Wouldn't it make sense to also report the model number here, even if
> ucode blob file names (currently) don't include it?

I have debated FF-MM-SS on AMD several times.

AMD do document all their CPUs consistently in hex, and I know the model
tables alarmingly well now.  Furthermore, it's not a straight nibble
shuffle from the raw value.

So yeah, it probably is worth including.  Given that we're not guessing
the Linux path of the microcode, it can replace the fam19h without any
loss of information.
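For reference, the decoding that makes this "not a straight nibble shuffle": the extended family is added to the base family, and the extended model only applies for certain families.  A self-contained sketch (not the patch's code) that reproduces both example signatures from the commit message:

```c
#include <stdint.h>

/* Decode FF-MM-SS from a raw CPUID leaf 1 EAX signature. */
static unsigned int sig_family(uint32_t eax)
{
    unsigned int fam = (eax >> 8) & 0xf;

    if ( fam == 0xf )
        fam += (eax >> 20) & 0xff;      /* extended family is additive */

    return fam;
}

static unsigned int sig_model(uint32_t eax)
{
    unsigned int fam = (eax >> 8) & 0xf;
    unsigned int model = (eax >> 4) & 0xf;

    if ( fam == 0x6 || fam == 0xf )     /* extended model valid only here */
        model |= ((eax >> 16) & 0xf) << 4;

    return model;
}

static unsigned int sig_stepping(uint32_t eax)
{
    return eax & 0xf;
}

/* 0x00050654 decodes to 06-55-04 (Intel); 0x00a00f11 to 19-01-01 (AMD). */
```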

> While for the revision always printing 8 digits is definitely helpful, I
> wonder whether the two top nibbles of the raw value wouldn't better be
> omitted. (I understand Andrew did ask for this, but it's unclear to me
> why he extended this request to the raw fields.)

For precisely the same reason.  (With a question like this, you clearly
don't work frequently with microcode crossing a wide range of CPUs...)

>> --- a/tools/misc/xen-ucode.c
>> +++ b/tools/misc/xen-ucode.c
>> @@ -12,22 +12,89 @@
>>  #include <fcntl.h>
>>  #include <xenctrl.h>
>>  
>> +static xc_interface *xch;
>> +
>> +static const char intel_id[] = "GenuineIntel";
>> +static const char   amd_id[] = "AuthenticAMD";
>> +
>> +static void show_curr_cpu(FILE *f)
>> +{
>> +    int ret;
>> +    struct xenpf_pcpu_version cpu_ver = { .xen_cpuid = 0 };
>> +    struct xenpf_ucode_revision ucode_rev = { .cpu = 0 };
> As mentioned before - the current state of the system may be
> inconsistent, so I question the entire lack of a way to know of
> this by using this tool (even if via a specific command line
> option, defaulting to CPU0-only).

It's a theoretical problem, not a practical one, and definitely not
something worth blocking this patch over.

Software always has more up-to-date microcode than firmware these days,
and it has probably been a decade since there was a system released
which could usefully function with asymmetric microcode.

I know we've seen it in the past, but only on truly ancient systems.

>> +    ret = xc_get_cpu_version(xch, &cpu_ver);
>> +    if ( ret )
>> +    {
>> +        fprintf(f, "Failed to get CPU information. (err: %s)\n",
>> +                strerror(errno));
>> +        exit(1);

Use err(), which removes all the boilerplate here.
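For illustration, the err(3) form of that error path; the libxc call is stubbed here so the sketch is self-contained, and the helper names are hypothetical:

```c
#include <err.h>
#include <stdint.h>

/* Hypothetical stand-in for xc_get_cpu_version(); returns 0 on success. */
static int get_cpu_version(uint32_t *sig)
{
    *sig = 0x00050654;
    return 0;
}

static uint32_t fetch_curr_sig(void)
{
    uint32_t sig;

    /* err() prints argv[0], the message, and strerror(errno), then
     * exits -- one line instead of fprintf() + strerror() + exit(). */
    if ( get_cpu_version(&sig) )
        err(1, "Failed to get CPU information");

    return sig;
}
```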

>> +    }
>> +    else if ( memcmp(cpu_ver.vendor_id, amd_id,
>> +                     sizeof(cpu_ver.vendor_id)) == 0 )
>> +    {
>> +        fprintf(f, "CPU signature fam%xh (raw 0x%08x) revision 0x%08x\n",
>> +                   cpu_ver.family, ucode_rev.signature, ucode_rev.revision);
>> +    }
>> +    else
>> +    {
>> +        fprintf(f, "Unsupported CPU vendor: %s\n", cpu_ver.vendor_id);
>> +        exit(3);
>> +    }
>> +}
>> +
>>  int main(int argc, char *argv[])
>>  {
>>      int fd, ret;
>>      char *filename, *buf;
>>      size_t len;
>>      struct stat st;
>> -    xc_interface *xch;
>> +
>> +    xch = xc_interface_open(NULL, NULL, 0);
>> +    if ( xch == NULL )
>> +    {
>> +        fprintf(stderr, "Error opening xc interface. (err: %s)\n",
>> +                strerror(errno));
>> +        exit(1);
>> +    }
>>  
>>      if ( argc < 2 )
>>      {
>> -        fprintf(stderr,
>> -                "xen-ucode: Xen microcode updating tool\n"
>> -                "Usage: %s <microcode blob>\n", argv[0]);
>> +        fprintf(stderr, "xen-ucode: Xen microcode updating tool\n");
>> +        show_curr_cpu(stderr);
> I recall you had it this way before, but I don't see why this information
> needs to be part of argument error handling.

Because it is specifically useful information to be given when running
xen-ucode with no arguments.

But also because this patch predates Spectre/Meltdown going public, and
is documented this way for XenServer.

That said, the information does need correcting, because it has changed to:

  "Usage: %s [show-cpu-info | <microcode file>]"

in this patch.  (And "file" is deliberately more accurate than calling
it a blob.)

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 03:50:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 03:50:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518695.805478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGdx-0007pl-8L; Thu, 06 Apr 2023 03:50:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518695.805478; Thu, 06 Apr 2023 03:50:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGdx-0007pc-28; Thu, 06 Apr 2023 03:50:37 +0000
Received: by outflank-mailman (input) for mailman id 518695;
 Thu, 06 Apr 2023 03:50:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkGdw-0007pS-Fq; Thu, 06 Apr 2023 03:50:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkGdw-0003xA-C2; Thu, 06 Apr 2023 03:50:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkGdv-0007gf-Oh; Thu, 06 Apr 2023 03:50:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkGdv-00033x-M7; Thu, 06 Apr 2023 03:50:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180164-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180164: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8d185dfb66700e65035d51f149570aeab728c665
X-Osstest-Versions-That:
    ovmf=2bb693894920e634153275bea60278a9f192a8ef
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Apr 2023 03:50:35 +0000

flight 180164 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180164/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8d185dfb66700e65035d51f149570aeab728c665
baseline version:
 ovmf                 2bb693894920e634153275bea60278a9f192a8ef

Last test of basis   180161  2023-04-05 20:42:13 Z    0 days
Testing same since   180164  2023-04-06 01:42:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   2bb6938949..8d185dfb66  8d185dfb66700e65035d51f149570aeab728c665 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 03:57:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 03:57:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518703.805494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGkz-00005S-5x; Thu, 06 Apr 2023 03:57:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518703.805494; Thu, 06 Apr 2023 03:57:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGkz-0008WB-19; Thu, 06 Apr 2023 03:57:53 +0000
Received: by outflank-mailman (input) for mailman id 518703;
 Thu, 06 Apr 2023 03:57:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EHTq=75=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pkGky-0008TM-Dh
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 03:57:52 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 31275807-d42f-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 05:57:48 +0200 (CEST)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id BF15B5C0165;
 Wed,  5 Apr 2023 23:57:46 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Wed, 05 Apr 2023 23:57:46 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Wed,
 5 Apr 2023 23:57:45 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31275807-d42f-11ed-85db-49a42c6b2330
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH v3 0/4] MSI-X support with qemu in stubdomain, and other related changes
Date: Thu,  6 Apr 2023 05:57:22 +0200
Message-Id: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This series includes changes to make MSI-X work with a Linux stubdomain, and
especially with the Intel Wifi 6 AX210 card. It takes care of the remaining
reasons for QEMU to access /dev/mem, and also copes with the Intel Wifi card
violating the spec by putting some registers on the same page as the MSI-X
table.

See individual patches for details.

Marek Marczykowski-Górecki (4):
  x86/msi: passthrough all MSI-X vector ctrl writes to device model
  tools/xendevicemodel: Introduce ..._get_ioreq_server_info_ext
  x86/hvm: Allow writes to registers on the same page as MSI-X table
  x86/msi: clear initial MSI-X state on boot

 tools/include/xendevicemodel.h |  23 ++++-
 tools/libs/devicemodel/core.c  |  16 ++-
 xen/arch/x86/hvm/vmsi.c        | 209 ++++++++++++++++++++++++++++++++--
 xen/arch/x86/include/asm/msi.h |   5 +-
 xen/arch/x86/msi.c             |  38 ++++++-
 xen/common/ioreq.c             |   9 +-
 xen/drivers/passthrough/msi.c  |  17 +++-
 xen/include/public/hvm/dm_op.h |  12 +-
 8 files changed, 310 insertions(+), 19 deletions(-)

base-commit: 881ba20eb0222305a9d2cd090c9345992794f4f5
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 03:57:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 03:57:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518702.805488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGky-0008Te-Tb; Thu, 06 Apr 2023 03:57:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518702.805488; Thu, 06 Apr 2023 03:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGky-0008TX-QJ; Thu, 06 Apr 2023 03:57:52 +0000
Received: by outflank-mailman (input) for mailman id 518702;
 Thu, 06 Apr 2023 03:57:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EHTq=75=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pkGkx-0008TM-OO
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 03:57:52 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3244fcb9-d42f-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 05:57:49 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id D583A5C016C;
 Wed,  5 Apr 2023 23:57:48 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Wed, 05 Apr 2023 23:57:48 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Wed,
 5 Apr 2023 23:57:46 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3244fcb9-d42f-11ed-85db-49a42c6b2330
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v3 1/4] x86/msi: passthrough all MSI-X vector ctrl writes to device model
Date: Thu,  6 Apr 2023 05:57:23 +0200
Message-Id: <f799fdc6b6899fa65a07eae0d6401753f7d61ef2.1680752649.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

QEMU needs to know whether clearing the maskbit of a vector is really
clearing it, or whether the bit was already clear before. Currently Xen
forwards only writes clearing that bit to the device model, not those
setting it, so QEMU cannot tell the difference. Because of that, QEMU
works around the issue by checking via /dev/mem, which isn't the proper
approach: it is just a workaround, and a racy one at that.

Give QEMU all the necessary information by passing through all ctrl
writes, including those masking a vector.

While this commit doesn't move the whole maskbit handling to QEMU (as
discussed on xen-devel as one of the possibilities), it is a necessary
first step anyway, including telling QEMU that it will get all the
information required to do so. The full implementation would additionally
need:
 - a hypercall for QEMU to control just the maskbit (without (re)binding
   the interrupt again)
 - a method for QEMU to tell Xen it will actually do the work
Those are not part of this series.
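For context on what the "ctrl writes" are: each MSI-X table entry is 16 bytes, and bit 0 of its Vector Control dword is the per-vector mask bit (per the PCI spec).  A sketch of that layout; the helper names are illustrative, not the series' code:

```c
#include <stdint.h>

#define PCI_MSIX_ENTRY_SIZE        16         /* bytes per MSI-X table entry */
#define PCI_MSIX_ENTRY_VECTOR_CTRL 12         /* Vector Control dword offset */
#define PCI_MSIX_VECTOR_BITMASK    (1u << 0)  /* per-vector mask bit */

/* Offset of vector n's control dword from the start of the MSI-X table. */
static uint32_t vector_ctrl_offset(unsigned int n)
{
    return n * PCI_MSIX_ENTRY_SIZE + PCI_MSIX_ENTRY_VECTOR_CTRL;
}

/* Whether a value written to the control dword leaves the vector masked. */
static int write_masks_vector(uint32_t val)
{
    return !!(val & PCI_MSIX_VECTOR_BITMASK);
}
```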

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
v3:
 - advertise changed behavior in XEN_DMOP_get_ioreq_server_info - make
   "flags" parameter IN/OUT
 - move len check back to msixtbl_write() - will be needed there anyway
   in a later patch
v2:
 - passthrough quad writes to emulator too (Jan)
 - (ab)use len==0 for write len=4 completion (Jan), but add descriptive
   #define for this magic value

Should flags on output include only "out" values (current version), or
also include those passed in by the caller unchanged?
---
 xen/arch/x86/hvm/vmsi.c        | 18 ++++++++++++++----
 xen/common/ioreq.c             |  9 +++++++--
 xen/include/public/hvm/dm_op.h | 12 ++++++++----
 3 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index 3cd4923060c8..231253a2cbd4 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -272,6 +272,15 @@ out:
     return r;
 }
 
+/*
+ * This function returns X86EMUL_UNHANDLEABLE even if write is properly
+ * handled, to propagate it to the device model (so it can keep its internal
+ * state in sync).
+ * len==0 means really len==4, but as a write completion that will return
+ * X86EMUL_OKAY on successful processing. Use WRITE_LEN4_COMPLETION to make it
+ * less confusing.
+ */
+#define WRITE_LEN4_COMPLETION 0
 static int msixtbl_write(struct vcpu *v, unsigned long address,
                          unsigned int len, unsigned long val)
 {
@@ -283,8 +292,9 @@ static int msixtbl_write(struct vcpu *v, unsigned long address,
     unsigned long flags;
     struct irq_desc *desc;
 
-    if ( (len != 4 && len != 8) || (address & (len - 1)) )
-        return r;
+    if ( (len != 4 && len != 8 && len != WRITE_LEN4_COMPLETION) ||
+         (len && (address & (len - 1))) )
+        return X86EMUL_UNHANDLEABLE;
 
     rcu_read_lock(&msixtbl_rcu_lock);
 
@@ -345,7 +355,7 @@ static int msixtbl_write(struct vcpu *v, unsigned long address,
 
 unlock:
     spin_unlock_irqrestore(&desc->lock, flags);
-    if ( len == 4 )
+    if ( len == WRITE_LEN4_COMPLETION )
         r = X86EMUL_OKAY;
 
 out:
@@ -635,7 +645,7 @@ void msix_write_completion(struct vcpu *v)
         return;
 
     v->arch.hvm.hvm_io.msix_unmask_address = 0;
-    if ( msixtbl_write(v, ctrl_address, 4, 0) != X86EMUL_OKAY )
+    if ( msixtbl_write(v, ctrl_address, WRITE_LEN4_COMPLETION, 0) != X86EMUL_OKAY )
         gdprintk(XENLOG_WARNING, "MSI-X write completion failure\n");
 }
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index ecb8f545e1c4..bd6f074c1e85 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -743,7 +743,8 @@ static int ioreq_server_destroy(struct domain *d, ioservid_t id)
 static int ioreq_server_get_info(struct domain *d, ioservid_t id,
                                  unsigned long *ioreq_gfn,
                                  unsigned long *bufioreq_gfn,
-                                 evtchn_port_t *bufioreq_port)
+                                 evtchn_port_t *bufioreq_port,
+                                 uint16_t *flags)
 {
     struct ioreq_server *s;
     int rc;
@@ -779,6 +780,9 @@ static int ioreq_server_get_info(struct domain *d, ioservid_t id,
             *bufioreq_port = s->bufioreq_evtchn;
     }
 
+    /* Advertise supported features/behaviors. */
+    *flags = XEN_DMOP_all_msix_writes;
+
     rc = 0;
 
  out:
@@ -1374,7 +1378,8 @@ int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op)
                                    NULL : (unsigned long *)&data->ioreq_gfn,
                                    (data->flags & XEN_DMOP_no_gfns) ?
                                    NULL : (unsigned long *)&data->bufioreq_gfn,
-                                   &data->bufioreq_port);
+                                   &data->bufioreq_port, &data->flags);
+
         break;
     }
 
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index acdf91693d0b..490b151c5dd7 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -70,7 +70,9 @@ typedef struct xen_dm_op_create_ioreq_server xen_dm_op_create_ioreq_server_t;
  * not contain XEN_DMOP_no_gfns then these pages will be made available and
  * the frame numbers passed back in gfns <ioreq_gfn> and <bufioreq_gfn>
  * respectively. (If the IOREQ Server is not handling buffered emulation
- * only <ioreq_gfn> will be valid).
+ * only <ioreq_gfn> will be valid). When Xen returns XEN_DMOP_all_msix_writes
+ * flag set, it will notify the IOREQ server about all writes to MSI-X table
+ * (if it's handled by this IOREQ server), not only those clearing a mask bit.
  *
  * NOTE: To access the synchronous ioreq structures and buffered ioreq
  *       ring, it is preferable to use the XENMEM_acquire_resource memory
@@ -81,11 +83,13 @@ typedef struct xen_dm_op_create_ioreq_server xen_dm_op_create_ioreq_server_t;
 struct xen_dm_op_get_ioreq_server_info {
     /* IN - server id */
     ioservid_t id;
-    /* IN - flags */
+    /* IN/OUT - flags */
     uint16_t flags;
 
-#define _XEN_DMOP_no_gfns 0
-#define XEN_DMOP_no_gfns (1u << _XEN_DMOP_no_gfns)
+#define _XEN_DMOP_no_gfns         0  /* IN */
+#define _XEN_DMOP_all_msix_writes 1  /* OUT */
+#define XEN_DMOP_no_gfns         (1u << _XEN_DMOP_no_gfns)
+#define XEN_DMOP_all_msix_writes (1u << _XEN_DMOP_all_msix_writes)
 
     /* OUT - buffered ioreq port */
     evtchn_port_t bufioreq_port;
-- 
git-series 0.9.1
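To sketch how a device model might consume the flag this patch advertises (bit values mirror the dm_op.h hunk above; the helper is illustrative, not QEMU code):

```c
#include <stdint.h>

#define XEN_DMOP_no_gfns         (1u << 0)  /* IN  */
#define XEN_DMOP_all_msix_writes (1u << 1)  /* OUT */

/*
 * After XEN_DMOP_get_ioreq_server_info returns, the (now IN/OUT) flags
 * field reports whether Xen forwards all MSI-X ctrl writes to the IOREQ
 * server, not just those clearing the mask bit.
 */
static int xen_forwards_all_msix_writes(uint16_t flags_out)
{
    return !!(flags_out & XEN_DMOP_all_msix_writes);
}
```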


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 03:57:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 03:57:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518704.805508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGl5-0000au-CR; Thu, 06 Apr 2023 03:57:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518704.805508; Thu, 06 Apr 2023 03:57:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGl5-0000am-8G; Thu, 06 Apr 2023 03:57:59 +0000
Received: by outflank-mailman (input) for mailman id 518704;
 Thu, 06 Apr 2023 03:57:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EHTq=75=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pkGl4-0000Z1-Dt
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 03:57:58 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3328af7d-d42f-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 05:57:52 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 5D6E55C0176;
 Wed,  5 Apr 2023 23:57:50 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Wed, 05 Apr 2023 23:57:50 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Wed,
 5 Apr 2023 23:57:49 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3328af7d-d42f-11ed-b464-930f4c7d94ae
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3 2/4] tools/xendevicemodel: Introduce ..._get_ioreq_server_info_ext
Date: Thu,  6 Apr 2023 05:57:24 +0200
Message-Id: <1f6dc87eebe5d1c27ae15ec8f5d8006e5aa1c36d.1680752649.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add xendevicemodel_get_ioreq_server_info_ext(), which additionally
returns the output flags that XEN_DMOP_get_ioreq_server_info can now
return. The signature of the existing
xendevicemodel_get_ioreq_server_info() is left unchanged, so existing
users do not need to be modified.

This advertises the behavior change made by the "x86/msi: passthrough
all MSI-X vector ctrl writes to device model" patch.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
v3:
 - new patch

Should there be some HAVE_* #define in the header? Does this change
require an soname bump? (I hope it doesn't...)
---
 tools/include/xendevicemodel.h | 23 +++++++++++++++++++++++
 tools/libs/devicemodel/core.c  | 16 ++++++++++++++--
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/tools/include/xendevicemodel.h b/tools/include/xendevicemodel.h
index 797e0c6b2961..77a99e670551 100644
--- a/tools/include/xendevicemodel.h
+++ b/tools/include/xendevicemodel.h
@@ -72,6 +72,29 @@ int xendevicemodel_get_ioreq_server_info(
     evtchn_port_t *bufioreq_port);
 
 /**
+ * This function retrieves the necessary information to allow an
+ * emulator to use an IOREQ Server, including feature flags.
+ *
+ * @parm dmod a handle to an open devicemodel interface.
+ * @parm domid the domain id to be serviced
+ * @parm id the IOREQ Server id.
+ * @parm ioreq_gfn pointer to a xen_pfn_t to receive the synchronous ioreq
+ *                  gfn. (May be NULL if not required)
+ * @parm bufioreq_gfn pointer to a xen_pfn_t to receive the buffered ioreq
+ *                    gfn. (May be NULL if not required)
+ * @parm bufioreq_port pointer to an evtchn_port_t to receive the buffered
+ *                     ioreq event channel. (May be NULL if not required)
+ * @parm flags pointer to receive flags bitmask, see hvm/dm_op.h for details.
+ *             (May be NULL if not required)
+ * @return 0 on success, -1 on failure.
+ */
+int xendevicemodel_get_ioreq_server_info_ext(
+    xendevicemodel_handle *dmod, domid_t domid, ioservid_t id,
+    xen_pfn_t *ioreq_gfn, xen_pfn_t *bufioreq_gfn,
+    evtchn_port_t *bufioreq_port,
+    unsigned int *flags);
+
+/**
  * This function registers a range of memory or I/O ports for emulation.
  *
  * @parm dmod a handle to an open devicemodel interface.
diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index 8e619eeb0a1f..337622e608c2 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -189,10 +189,10 @@ int xendevicemodel_create_ioreq_server(
     return 0;
 }
 
-int xendevicemodel_get_ioreq_server_info(
+int xendevicemodel_get_ioreq_server_info_ext(
     xendevicemodel_handle *dmod, domid_t domid, ioservid_t id,
     xen_pfn_t *ioreq_gfn, xen_pfn_t *bufioreq_gfn,
-    evtchn_port_t *bufioreq_port)
+    evtchn_port_t *bufioreq_port, unsigned int *flags)
 {
     struct xen_dm_op op;
     struct xen_dm_op_get_ioreq_server_info *data;
@@ -226,9 +226,21 @@ int xendevicemodel_get_ioreq_server_info(
     if (bufioreq_port)
         *bufioreq_port = data->bufioreq_port;
 
+    if (flags)
+        *flags = data->flags;
+
     return 0;
 }
 
+int xendevicemodel_get_ioreq_server_info(
+    xendevicemodel_handle *dmod, domid_t domid, ioservid_t id,
+    xen_pfn_t *ioreq_gfn, xen_pfn_t *bufioreq_gfn,
+    evtchn_port_t *bufioreq_port)
+{
+    return xendevicemodel_get_ioreq_server_info_ext(
+        dmod, domid, id, ioreq_gfn, bufioreq_gfn, bufioreq_port, NULL);
+}
+
 int xendevicemodel_map_io_range_to_ioreq_server(
     xendevicemodel_handle *dmod, domid_t domid, ioservid_t id, int is_mmio,
     uint64_t start, uint64_t end)
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 03:58:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 03:58:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518705.805518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGl6-0000ra-P4; Thu, 06 Apr 2023 03:58:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518705.805518; Thu, 06 Apr 2023 03:58:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGl6-0000rP-LT; Thu, 06 Apr 2023 03:58:00 +0000
Received: by outflank-mailman (input) for mailman id 518705;
 Thu, 06 Apr 2023 03:57:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EHTq=75=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pkGl4-0000Z1-OO
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 03:57:58 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 350fda72-d42f-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 05:57:54 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 8C1395C0165;
 Wed,  5 Apr 2023 23:57:53 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Wed, 05 Apr 2023 23:57:53 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Wed,
 5 Apr 2023 23:57:52 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 350fda72-d42f-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm2; t=1680753473; x=1680839873; bh=jp
	uAaW3PocLxTjAz2gz/L9CxKTd3JuRdv2jiYB1F7uE=; b=CMyh8xUt2/XeuWuWly
	QYaK3x37WG6/AWwaIZouaj79YATV2yngaZrUzknYh5QowZWBFh8jqXeuLVObvZsp
	EZE0DMqwRYJUkzmwYTClsVB1dg4iirEZfdyvq0mHZfCz7LNXg5CSauXqCHeUWhiV
	YE4lVs3VIouNSDtQawOyHbvr4APMvSpJXF/IafpEAFG7rFTsoiaODXyGRBT4wj1Z
	ghEhB/i5rGNoRC3hDHGh8QtTA/k51E0jqx7U0R2eLnzNYm5NHSdlkPABbEIBU1Nd
	eUV0aDGEbVyIBDpsZfYMzeCDa0ZA2cpKi5K/Hah2cDHEUdniq446RRdL1xeb2pGY
	iTpg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=
	1680753473; x=1680839873; bh=jpuAaW3PocLxTjAz2gz/L9CxKTd3JuRdv2j
	iYB1F7uE=; b=U4DzV7PsTmqf1zaY1JwFd6kf2wa5MT3kKgH82/AHh332No6k4/d
	QPIvFWtiX3FUUpUkvM4ILD60sx/zRjWLPJPdcmNz0+FJW942jK6mi087jjplNb+C
	SJWxzeY8D6mGriqv6gLVPe+WYVG1nsckhKVUdvY89Rp3DGD82V5AZxvQyiNPABoF
	r3ITXR9HRxVWutswrFJA2h85BTCQoFbku9uLbETBbXxv5TPbD3ozYjWAbuu0onx6
	p3qskz8xSy93PsRGgHXcEwQOOSpJNTjC/iAxbHzfKfZRJiQS7oz8d2XwXS4gqJ4c
	XGWQhfK5yfh+4qubnwk0zJoFTPeD+M3PVPw==
X-ME-Sender: <xms:QUMuZCEnJAzrS8_1heQgGjb4S1Gz6sLM-ULQ-e39y3p12kmMUPcEiw>
    <xme:QUMuZDVr7p5neoqXqR-xqt2ugJd6TpjHo6S5DVBtXfM1q64jZemQCKdV-2CCpulm8
    mdvrsZq9PEryw>
X-ME-Received: <xmr:QUMuZMKbEaLeMyvKPmkpEXYIuERldzqCgKMOSprctK4cGQXrd74dIWH_3ut9VDlYpfEBbdnHJS35VyO3rtZjt33FbIdDjAefwD2GRsXs8caPBy4jv7qL>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrvdejvddgjeekucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:QUMuZMEJK-ssEUCURfCxyuinRUhnJVI06sV9f10a5e1-J1aMYWvvfw>
    <xmx:QUMuZIVRRmR-AkXVZqyVun3HhtGmcK2LwCoEO8y8H2fuebn3scFIOw>
    <xmx:QUMuZPPPsNyqwbuDsByqIMwm9G0BYLBTWQ9FoXCLntN8mlgp9gbSDQ>
    <xmx:QUMuZPQguiSUueALUTVYusHheD_VtEPnGLNQ5mf8YkJ6bM_NlIf48Q>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 4/4] x86/msi: clear initial MSI-X state on boot
Date: Thu,  6 Apr 2023 05:57:26 +0200
Message-Id: <6984a8571dac35d04c85117834d99b00fe1c4184.1680752649.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Some firmware/devices are found not to reset MSI-X properly, leaving
MASKALL set. Jason reports that on his machine MASKALL persists through
a warm reboot, but is cleared on a cold boot. Xen relies on the initial
state having MASKALL clear. In particular, pci_reset_msix_state()
assumes that if MASKALL is set, it was Xen that set it, due to
msix->host_maskall or msix->guest_maskall. Clearing just MASKALL might
be unsafe if ENABLE is set, so clear them both.

Reported-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
v3:
 - update comment
 - clear bits only when they were set
---
 xen/drivers/passthrough/msi.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/xen/drivers/passthrough/msi.c b/xen/drivers/passthrough/msi.c
index ce1a450f6f4a..c9f7eac29ebf 100644
--- a/xen/drivers/passthrough/msi.c
+++ b/xen/drivers/passthrough/msi.c
@@ -46,6 +46,23 @@ int pdev_msi_init(struct pci_dev *pdev)
         spin_lock_init(&msix->table_lock);
 
         ctrl = pci_conf_read16(pdev->sbdf, msix_control_reg(pos));
+
+        if ( ctrl & (PCI_MSIX_FLAGS_MASKALL|PCI_MSIX_FLAGS_ENABLE) )
+        {
+            /*
+             * pci_reset_msix_state() relies on MASKALL not being set
+             * initially, clear it (and ENABLE too - for safety), to meet that
+             * expectation.
+             */
+            printk(XENLOG_WARNING
+                   "%pp: unexpected initial MSI-X state (MASKALL=%d, ENABLE=%d), fixing\n",
+                   &pdev->sbdf,
+                   (ctrl & PCI_MSIX_FLAGS_MASKALL) ? 1 : 0,
+                   (ctrl & PCI_MSIX_FLAGS_ENABLE) ? 1 : 0);
+            ctrl &= ~(PCI_MSIX_FLAGS_ENABLE|PCI_MSIX_FLAGS_MASKALL);
+            pci_conf_write16(pdev->sbdf, msix_control_reg(pos), ctrl);
+        }
+
         msix->nr_entries = msix_table_size(ctrl);
 
         pdev->msix = msix;
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 03:58:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 03:58:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518706.805522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGl7-0000vY-2n; Thu, 06 Apr 2023 03:58:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518706.805522; Thu, 06 Apr 2023 03:58:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkGl6-0000ue-Ur; Thu, 06 Apr 2023 03:58:00 +0000
Received: by outflank-mailman (input) for mailman id 518706;
 Thu, 06 Apr 2023 03:57:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EHTq=75=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pkGl5-0000Z1-3G
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 03:57:59 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3428a853-d42f-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 05:57:52 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 0D6C75C0178;
 Wed,  5 Apr 2023 23:57:52 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Wed, 05 Apr 2023 23:57:52 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Wed,
 5 Apr 2023 23:57:50 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3428a853-d42f-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm2; t=1680753472; x=1680839872; bh=+x
	aJZckXvX1k1Bf8G7KYguuLRWvLEosFkkfVW7RbnBc=; b=kJXc197cjVxU8s7j8L
	ClSe15MEgoDBr304tK+Dvx8Vu/B3o+r+ND+29+fBrtcmiZd9rtUNuUHNx7wvyNzi
	928xmpQWPrY6hF5fJvKrWyokQFVbrhC0PqGQtwP4HCOg0WcnzX+yxtwhpMelDqxX
	3yFPWKS2f5chuIamgCgfpqdZTXh/VW5l2/1N3bRkSHXf51coS2s8LoD9YnPqTZS8
	GElwbf9qNNPIqiTfVIh+LEfUNkfBQ/3XqDoEm98MwINYOgiQYjBA1v7Eur9bX93f
	5LKv6euFZx+xVin8gsVhG2QjJr4320TEdmRxXd7a1JMlT7PpAE3jwM80Pe1MHy7Z
	XeMg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=
	1680753472; x=1680839872; bh=+xaJZckXvX1k1Bf8G7KYguuLRWvLEosFkkf
	VW7RbnBc=; b=fBMwm7xs3ACD/y3xTi8IHfZCtBWI47c3FFWWbP+YTMJ3otiDb3W
	DIrsifgHeTM0BJ1ZshuUJ5H1GbmxliLMJLV2zIqOjopZjNWXOc/GVVE1ALLR4xmz
	a6lTn9xF1Pf3zem7szRxYS/7M2kvlfFpw98mhGRGGWCoyjH2DilDnq2Slj9nXWZE
	5P7/giIpYJcyO3m0btql70vXmnfwgBUIKQ5KmqUai/atmtd0X1AWfSrGPOGB0Fsw
	QQ//Dk4ivg79EDCiN4oaJer8j1W3SpNtx+1U6TztpKjLvwZWcJ7PKvcmcj+4Q0aO
	aTpFCtzCJeQS9KrK1EIodxL+wJeo7MtHTIg==
X-ME-Sender: <xms:P0MuZMJDlEKqPqFEehRkkoMW-fps5CbFSDWOL8FtNNiPyWReJbva-g>
    <xme:P0MuZMJBz4jFXlgfnBbPjyLZ4nTWqtCMHU_Q2L3Y87ZjKJy77aqG1-rrW4-Sas4ap
    R390Kb8I0Qfig>
X-ME-Received: <xmr:P0MuZMuji0elxuQrKGTH_BwwQndEAPoMSit-Y4wOTMftsIA44LETfXkV2ffBeTY5Q0Kszx6DPOTQTMamlmH3bhLlTzY6rW4hVPvGfSYanTWbWAzaSNrC>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrvdejvddgjeekucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:P0MuZJaLpDmGD1lUM2OPzcC50-LW0vlcNjtYfvLdY21Ci8v1wdos1w>
    <xmx:P0MuZDb3A1-lSpUgJX5GAeAxTSZM546b4WGR0NQ7BN4WZ8Q-SXC7HQ>
    <xmx:P0MuZFBieT-vTr3BH_nodMr6rhcHkLgi2xeDMnowE1B_tC4kqkuPbQ>
    <xmx:QEMuZPHB7V1KlqbqgQByiXvkILufK5eRSgB7Hd5MVMszCSkVsFVJAw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 3/4] x86/hvm: Allow writes to registers on the same page as MSI-X table
Date: Thu,  6 Apr 2023 05:57:25 +0200
Message-Id: <3a8f54cf631e0342b144935950c853d1884a7eac.1680752649.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Some devices (notably the Intel Wifi 6 AX210 card) keep auxiliary
registers on the same page as the MSI-X table. A device model
(especially one in a stubdomain) cannot really handle those, as direct
writes to that page are refused (the page is on the mmio_ro_ranges
list). Instead, extend msixtbl_mmio_ops to handle such accesses too.

Doing this requires correlating the write location with the guest's
view of the MSI-X table address. Since QEMU doesn't map the MSI-X table
into the guest, this relies on msixtbl_entry->gtable, which is
HVM-only. A similar feature for PV would need to be done separately.

This will also be used to read the Pending Bit Array (PBA), if it lives
on the same page, so that QEMU does not need /dev/mem access at all
(especially helpful with lockdown enabled in dom0). If the PBA lives on
another page, QEMU will map it into the guest directly.
If the PBA lives on the same page, discard writes and log a message.
Technically, writes outside of the PBA could be allowed, but at this
moment the precise location of the PBA isn't saved, and no known device
abuses the spec in this way (at least not yet).

To access those registers, msixtbl_mmio_ops needs the relevant page
mapped. MSI handling already has infrastructure for that, using fixmap,
so try to map the first/last page of the MSI-X table (if necessary) and
save their fixmap indexes. Note that msix_get_fixmap() does reference
counting and reuses an existing mapping, so just call it directly, even
if the page was mapped before. Also, it uses a specific range of fixmap
indexes which doesn't include 0, so use 0 as the default ("not mapped")
value, which simplifies the code a bit.

GCC gets confused about 'desc' variable:

    arch/x86/hvm/vmsi.c: In function ‘msixtbl_range’:
    arch/x86/hvm/vmsi.c:553:8: error: ‘desc’ may be used uninitialized [-Werror=maybe-uninitialized]
      553 |     if ( desc )
          |        ^
    arch/x86/hvm/vmsi.c:537:28: note: ‘desc’ was declared here
      537 |     const struct msi_desc *desc;
          |                            ^~~~

Its conditional initialization is actually correct (in the case where
it isn't initialized, the function returns early), but to avoid the
build failure initialize it explicitly to NULL anyway.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
v3:
 - merge handling into msixtbl_mmio_ops
 - extend commit message
v2:
 - adjust commit message
 - pass struct domain to msixtbl_page_handler_get_hwaddr()
 - reduce local variables used only once
 - log a warning if a write is forbidden because the MSI-X table and
   the PBA live on the same page
 - do not passthrough unaligned accesses
 - handle accesses both before and after MSI-X table
---
 xen/arch/x86/hvm/vmsi.c        | 199 ++++++++++++++++++++++++++++++++--
 xen/arch/x86/include/asm/msi.h |   5 +-
 xen/arch/x86/msi.c             |  38 ++++++-
 3 files changed, 231 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index 231253a2cbd4..6f49493d3f58 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -181,15 +181,21 @@ static bool msixtbl_initialised(const struct domain *d)
 }
 
 static struct msixtbl_entry *msixtbl_find_entry(
-    struct vcpu *v, unsigned long addr)
+    struct vcpu *v, unsigned long addr, bool same_page)
 {
     struct msixtbl_entry *entry;
     struct domain *d = v->domain;
 
     list_for_each_entry( entry, &d->arch.hvm.msixtbl_list, list )
+    {
+        if ( same_page &&
+             PFN_DOWN(addr) >= PFN_DOWN(entry->gtable) &&
+             PFN_DOWN(addr) <= PFN_DOWN(entry->gtable + entry->table_len) )
+            return entry;
         if ( addr >= entry->gtable &&
              addr < entry->gtable + entry->table_len )
             return entry;
+    }
 
     return NULL;
 }
@@ -213,6 +219,144 @@ static struct msi_desc *msixtbl_addr_to_desc(
     return NULL;
 }
 
+/*
+ * Returns:
+ *  - UINT_MAX if no handling should be done
+ *  - UINT_MAX-1 if write should be discarded
+ *  - a fixmap idx to use for handling
+ */
+#define ADJACENT_DONT_HANDLE UINT_MAX
+#define ADJACENT_DISCARD_WRITE (UINT_MAX - 1)
+static unsigned int adjacent_handle(
+    const struct msixtbl_entry *entry, unsigned long addr, bool write)
+{
+    unsigned int adj_type;
+
+    if ( !entry || !entry->pdev )
+        return ADJACENT_DONT_HANDLE;
+
+    if ( PFN_DOWN(addr) == PFN_DOWN(entry->gtable) && addr < entry->gtable )
+        adj_type = ADJ_IDX_FIRST;
+    else if ( PFN_DOWN(addr) == PFN_DOWN(entry->gtable + entry->table_len - 1) &&
+              addr >= entry->gtable + entry->table_len )
+        adj_type = ADJ_IDX_LAST;
+    else
+        return ADJACENT_DONT_HANDLE;
+
+    ASSERT(entry->pdev->msix);
+
+    if ( !entry->pdev->msix->adj_access_table_idx[adj_type] )
+    {
+        gprintk(XENLOG_WARNING,
+                "Page for adjacent(%d) MSI-X table access not initialized for %pp (addr %#lx, gtable %#lx)\n",
+                adj_type, &entry->pdev->sbdf, addr, entry->gtable);
+
+        return ADJACENT_DONT_HANDLE;
+    }
+
+    /* If PBA lives on the same page too, discard writes. */
+    if ( write && (
+        (adj_type == ADJ_IDX_LAST &&
+         entry->pdev->msix->table.last == entry->pdev->msix->pba.first) ||
+        (adj_type == ADJ_IDX_FIRST &&
+         entry->pdev->msix->table.first == entry->pdev->msix->pba.last)) )
+    {
+        gprintk(XENLOG_WARNING,
+                 "MSI-X table and PBA of %pp live on the same page, "
+                 "writing to other registers there is not implemented\n",
+                 &entry->pdev->sbdf);
+        return ADJACENT_DISCARD_WRITE;
+    }
+
+    return entry->pdev->msix->adj_access_table_idx[adj_type];
+}
+
+static int adjacent_read(
+        unsigned int fixmap_idx,
+        uint64_t address, uint32_t len, uint64_t *pval)
+{
+    const void __iomem *hwaddr;
+
+    *pval = ~0UL;
+
+    if ( !IS_ALIGNED(address, len) )
+    {
+        gdprintk(XENLOG_WARNING,
+                "Dropping unaligned read from MSI-X table page at %" PRIx64 "\n",
+                address);
+        return X86EMUL_OKAY;
+    }
+
+    ASSERT(fixmap_idx != ADJACENT_DISCARD_WRITE);
+
+    hwaddr = fix_to_virt(fixmap_idx) + PAGE_OFFSET(address);
+
+    switch ( len )
+    {
+    case 1:
+        *pval = readb(hwaddr);
+        break;
+
+    case 2:
+        *pval = readw(hwaddr);
+        break;
+
+    case 4:
+        *pval = readl(hwaddr);
+        break;
+
+    case 8:
+        *pval = readq(hwaddr);
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+    }
+    return X86EMUL_OKAY;
+}
+
+static int adjacent_write(
+        unsigned int fixmap_idx,
+        uint64_t address, uint32_t len, uint64_t val)
+{
+    void __iomem *hwaddr;
+
+    if ( !IS_ALIGNED(address, len) )
+    {
+        gdprintk(XENLOG_WARNING,
+                "Dropping unaligned write to MSI-X table page at %" PRIx64 "\n",
+                address);
+        return X86EMUL_OKAY;
+    }
+
+    if ( fixmap_idx == ADJACENT_DISCARD_WRITE )
+        return X86EMUL_OKAY;
+
+    hwaddr = fix_to_virt(fixmap_idx) + PAGE_OFFSET(address);
+
+    switch ( len ) {
+    case 1:
+        writeb(val, hwaddr);
+        break;
+
+    case 2:
+        writew(val, hwaddr);
+        break;
+
+    case 4:
+        writel(val, hwaddr);
+        break;
+
+    case 8:
+        writeq(val, hwaddr);
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+    }
+    return X86EMUL_OKAY;
+}
+
 static int cf_check msixtbl_read(
     const struct hvm_io_handler *handler, uint64_t address, uint32_t len,
     uint64_t *pval)
@@ -220,16 +364,27 @@ static int cf_check msixtbl_read(
     unsigned long offset;
     struct msixtbl_entry *entry;
     unsigned int nr_entry, index;
+    unsigned int adjacent_fixmap;
     int r = X86EMUL_UNHANDLEABLE;
 
-    if ( (len != 4 && len != 8) || (address & (len - 1)) )
+    if ( !IS_ALIGNED(address, len) )
         return r;
 
     rcu_read_lock(&msixtbl_rcu_lock);
-
-    entry = msixtbl_find_entry(current, address);
+    entry = msixtbl_find_entry(current, address, true);
     if ( !entry )
         goto out;
+
+    adjacent_fixmap = adjacent_handle(entry, address, false);
+    if ( adjacent_fixmap != ADJACENT_DONT_HANDLE )
+    {
+        r = adjacent_read(adjacent_fixmap, address, len, pval);
+        goto out;
+    }
+
+    if ( len != 4 && len != 8 )
+        goto out;
+
     offset = address & (PCI_MSIX_ENTRY_SIZE - 1);
 
     if ( offset != PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET )
@@ -291,16 +446,29 @@ static int msixtbl_write(struct vcpu *v, unsigned long address,
     int r = X86EMUL_UNHANDLEABLE;
     unsigned long flags;
     struct irq_desc *desc;
+    unsigned int adjacent_fixmap;
 
-    if ( (len != 4 && len != 8 && len != WRITE_LEN4_COMPLETION) ||
-         (len && (address & (len - 1))) )
-        return X86EMUL_UNHANDLEABLE;
+    if ( len && !IS_ALIGNED(address, len) )
+        return r;
 
     rcu_read_lock(&msixtbl_rcu_lock);
 
-    entry = msixtbl_find_entry(v, address);
+    entry = msixtbl_find_entry(v, address, true);
     if ( !entry )
         goto out;
+
+    if ( len != WRITE_LEN4_COMPLETION )
+    {
+        adjacent_fixmap = adjacent_handle(entry, address, true);
+        if ( adjacent_fixmap != ADJACENT_DONT_HANDLE )
+        {
+            r = adjacent_write(adjacent_fixmap, address, len, val);
+            goto out;
+        }
+        if ( len != 4 && len != 8 )
+            goto out;
+    }
+
     nr_entry = array_index_nospec(((address - entry->gtable) /
                                    PCI_MSIX_ENTRY_SIZE),
                                   MAX_MSIX_TABLE_ENTRIES);
@@ -375,14 +543,23 @@ static bool cf_check msixtbl_range(
 {
     struct vcpu *curr = current;
     unsigned long addr = r->addr;
-    const struct msi_desc *desc;
+    struct msixtbl_entry *entry;
+    const struct msi_desc *desc = NULL;
+    unsigned int adjacent_fixmap;
+
 
     ASSERT(r->type == IOREQ_TYPE_COPY);
 
     rcu_read_lock(&msixtbl_rcu_lock);
-    desc = msixtbl_addr_to_desc(msixtbl_find_entry(curr, addr), addr);
+    entry = msixtbl_find_entry(curr, addr, true);
+    adjacent_fixmap = adjacent_handle(entry, addr, false);
+    if ( adjacent_fixmap == ADJACENT_DONT_HANDLE )
+        desc = msixtbl_addr_to_desc(entry, addr);
     rcu_read_unlock(&msixtbl_rcu_lock);
 
+    if ( adjacent_fixmap != ADJACENT_DONT_HANDLE )
+        return 1;
+
     if ( desc )
         return 1;
 
@@ -627,7 +804,7 @@ void msix_write_completion(struct vcpu *v)
         uint32_t data;
 
         rcu_read_lock(&msixtbl_rcu_lock);
-        desc = msixtbl_addr_to_desc(msixtbl_find_entry(v, snoop_addr),
+        desc = msixtbl_addr_to_desc(msixtbl_find_entry(v, snoop_addr, false),
                                     snoop_addr);
         rcu_read_unlock(&msixtbl_rcu_lock);
 
diff --git a/xen/arch/x86/include/asm/msi.h b/xen/arch/x86/include/asm/msi.h
index a53ade95c9ad..d13cf1c1f873 100644
--- a/xen/arch/x86/include/asm/msi.h
+++ b/xen/arch/x86/include/asm/msi.h
@@ -207,6 +207,10 @@ struct msg_address {
                                        PCI_MSIX_ENTRY_SIZE + \
                                        (~PCI_MSIX_BIRMASK & (PAGE_SIZE - 1)))
 
+/* indexes in adj_access_table_idx[] below */
+#define ADJ_IDX_FIRST 0
+#define ADJ_IDX_LAST  1
+
 struct arch_msix {
     unsigned int nr_entries, used_entries;
     struct {
@@ -214,6 +218,7 @@ struct arch_msix {
     } table, pba;
     int table_refcnt[MAX_MSIX_TABLE_PAGES];
     int table_idx[MAX_MSIX_TABLE_PAGES];
+    int adj_access_table_idx[2];
     spinlock_t table_lock;
     bool host_maskall, guest_maskall;
     domid_t warned;
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index d0bf63df1def..c216acbf0e5d 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -961,6 +961,34 @@ static int msix_capability_init(struct pci_dev *dev,
                 domain_crash(d);
             /* XXX How to deal with existing mappings? */
         }
+
+        /*
+         * If the MSI-X table doesn't start at the page boundary, map the first page for
+         * passthrough accesses.
+         */
+        if ( PAGE_OFFSET(table_paddr) )
+        {
+            int idx = msix_get_fixmap(msix, table_paddr, table_paddr);
+
+            if ( idx > 0 )
+                msix->adj_access_table_idx[ADJ_IDX_FIRST] = idx;
+            else
+                gprintk(XENLOG_ERR, "Failed to map first MSI-X table page: %d\n", idx);
+        }
+        /*
+         * If the MSI-X table doesn't span full page(s), map the last page for
+         * passthrough accesses.
+         */
+        if ( PAGE_OFFSET(table_paddr + msix->nr_entries * PCI_MSIX_ENTRY_SIZE) )
+        {
+            uint64_t entry_paddr = table_paddr + msix->nr_entries * PCI_MSIX_ENTRY_SIZE;
+            int idx = msix_get_fixmap(msix, table_paddr, entry_paddr);
+
+            if ( idx > 0 )
+                msix->adj_access_table_idx[ADJ_IDX_LAST] = idx;
+            else
+                gprintk(XENLOG_ERR, "Failed to map last MSI-X table page: %d\n", idx);
+        }
     }
     WARN_ON(msix->table.first != (table_paddr >> PAGE_SHIFT));
     ++msix->used_entries;
@@ -1090,6 +1118,16 @@ static void _pci_cleanup_msix(struct arch_msix *msix)
             WARN();
         msix->table.first = 0;
         msix->table.last = 0;
+        if ( msix->adj_access_table_idx[ADJ_IDX_FIRST] )
+        {
+            msix_put_fixmap(msix, msix->adj_access_table_idx[ADJ_IDX_FIRST]);
+            msix->adj_access_table_idx[ADJ_IDX_FIRST] = 0;
+        }
+        if ( msix->adj_access_table_idx[ADJ_IDX_LAST] )
+        {
+            msix_put_fixmap(msix, msix->adj_access_table_idx[ADJ_IDX_LAST]);
+            msix->adj_access_table_idx[ADJ_IDX_LAST] = 0;
+        }
 
         if ( rangeset_remove_range(mmio_ro_ranges, msix->pba.first,
                                    msix->pba.last) )
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 05:54:48 2023
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@cloud.com>
Subject: Re: [PATCH v4 10/12] xen/tools: add sve parameter in XL configuration
Thread-Topic: [PATCH v4 10/12] xen/tools: add sve parameter in XL
 configuration
Thread-Index: AQHZYJtmX5JG1Zpok0mjr+irvDjf6q8U92AAgAY/qICAAap2gIAA9ZmA
Date: Thu, 6 Apr 2023 05:53:57 +0000
Message-ID: <15401568-22E5-48B0-8F8C-9CBC50BF883A@arm.com>
References: <20230327105944.1360856-1-luca.fancellu@arm.com>
 <20230327105944.1360856-11-luca.fancellu@arm.com>
 <9bd2924b-bb4a-440d-ae31-0253e66c56e5@perard>
 <328A9CBD-5FCE-481B-93AF-D139963488D5@arm.com>
 <8921a9ca-7284-44ce-8ce5-bc631b0980d6@perard>
In-Reply-To: <8921a9ca-7284-44ce-8ce5-bc631b0980d6@perard>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Anthony,

>> 
>> Yes I can change it, a need to do it anyway because I think also here, the suggestion
>> From Jan can apply and we could pass a negative value that means “max VL supported
>> by the platform"
> 
> Well, it's a config file, not a C ABI, so max allowed here doesn't have to be
> spelled "-1", it could also be "max", "max-allowed",
> "max-size-supported", ... So fill free deviate from the restricted C
> ABI. But "-1" works as long as it's the only allowed negative number.

Yes while working on the patch I’ve found that I could declare this type in Libxl:

libxl_sve_type = Enumeration("sve_type", [
    (0, "disabled"),
    (128, "128"),
    (256, "256"),
    (384, "384"),
    (512, "512"),
    (640, "640"),
    (768, "768"),
    (896, "896"),
    (1024, "1024"),
    (1152, "1152"),
    (1280, "1280"),
    (1408, "1408"),
    (1536, "1536"),
    (1664, "1664"),
    (1792, "1792"),
    (1920, "1920"),
    (2048, "2048"),
    (-1, "hw")
    ], init_val = "LIBXL_SVE_TYPE_DISABLED”)

So that in xl I can just use libxl_sve_type_from_string

> 
>>> 
>>>> +supported bits value, then the domain creation will fail.
>>>> +A value equal to zero is the default and it means this guest is not allowed to
>>>> +use SVE.
>>>> +
>>>> +=back
>>>> +
>>>> =head3 x86
>>>> 
>>>> =over 4
>>>> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
>>>> index ddc7b2a15975..16a49031fd51 100644
>>>> --- a/tools/libs/light/libxl_arm.c
>>>> +++ b/tools/libs/light/libxl_arm.c
>>>> @@ -211,6 +211,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>>>>        return ERROR_FAIL;
>>>>    }
>>>> 
>>>> +    config->arch.sve_vl = d_config->b_info.arch_arm.sve;
>>> 
>>> This truncate a 16bit value into an 8bit value, I think you should check
>>> that the value can actually fit.
>>> 
>>> And maybe check `d_config->b_info.arch_arm.sve` value here instead of
>>> `xl` as commented later.
>> 
>> Yes I can do it, one question, can I use here xc_physinfo to retrieve the maximum
>> Vector length from arch_capabilities?
>> I mean, is there a better way or I can go for that?
> 
> Yeah, there might be a "better" way. I think me suggestion to check the
> sve value here was wrong. I still want to have it checked in libxl, but
> it might be better to do that in the previous step, that is
> "libxl__domain_config_setdefault". libxl__arch_domain_build_info_setdefault()
> will have `physinfo` so you won't have to call xc_physinfo().

Right, I’ve seen it before but I was unsure if it was the right way, now that you
suggested it, I will go for that.

Thank you.

Cheers,
Luca
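A possible shape for the libxl-side check discussed in this thread (the value must fit the field, be a multiple of 128, and not exceed the platform maximum), sketched in Python with illustrative names; this is not libxl's actual API, and the limits are assumptions based on the enumeration above:

```python
# Illustrative sketch of SVE vector-length validation; names and limits
# are hypothetical stand-ins, not libxl's real API.
SVE_VL_STEP = 128       # SVE vector lengths come in 128-bit granules
SVE_VL_MAX_ARCH = 2048  # architectural maximum vector length in bits

def validate_sve_vl(value, platform_max):
    """Return the vector length to use, or raise ValueError.

    value: 0 = SVE disabled, -1 = use the platform maximum ("hw"),
           otherwise an explicit vector length in bits.
    """
    if value == 0:
        return 0
    if value == -1:
        return platform_max
    if value % SVE_VL_STEP or not (SVE_VL_STEP <= value <= SVE_VL_MAX_ARCH):
        raise ValueError(f"invalid SVE vector length {value}")
    if value > platform_max:
        raise ValueError(f"{value} exceeds platform maximum {platform_max}")
    return value

print(validate_sve_vl(256, 512))   # explicit value within limits -> 256
print(validate_sve_vl(-1, 512))    # "hw": pick the platform maximum -> 512
```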


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 06:05:14 2023
Message-ID: <e529da7e-0da6-af2f-e5b1-bb8f361a518c@suse.com>
Date: Thu, 6 Apr 2023 08:05:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v3 2/4] tools/xendevicemodel: Introduce
 ..._get_ioreq_server_info_ext
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
 <1f6dc87eebe5d1c27ae15ec8f5d8006e5aa1c36d.1680752649.git-series.marmarek@invisiblethingslab.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <1f6dc87eebe5d1c27ae15ec8f5d8006e5aa1c36d.1680752649.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 06.04.23 05:57, Marek Marczykowski-Górecki wrote:
> Add xendevicemodel_get_ioreq_server_info_ext() which additionally
> returns output flags that XEN_DMOP_get_ioreq_server_info can now return.
> Do not change signature of existing
> xendevicemodel_get_ioreq_server_info() so existing users will not need
> to be changed.
> 
> This advertises behavior change of "x86/msi: passthrough all MSI-X
> vector ctrl writes to device model" patch.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
> v3:
>   - new patch
> 
> Should there be some HAVE_* #define in the header? Does this change
> require soname bump (I hope it doesn't...).

You need to add version 1.5 to libxendevicemodel.map which should define
the new function.


Juergen
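For context on the reply above: libxendevicemodel publishes its exported symbols through a linker version script (libxendevicemodel.map), so a new public function becomes part of the ABI by adding a new version node rather than bumping the soname. A hedged sketch of what the suggested addition might look like; the inherited node name VERS_1.4 and the exact symbol placement are assumptions:

```
VERS_1.5 {
    global:
        xendevicemodel_get_ioreq_server_info_ext;
} VERS_1.4;
```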


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 06:35:48 2023
X-Inumbo-ID: 3761be9c-d445-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Uja96Q1S7FCrP9pZP8tp2MAeK0JSKB4cr3AcDwQjA9szlGS6RiBsldddF/83fewOCUfJkwCwIdIB5w9kN40+h4cw8pybsEcu9T77Q/w0d4svMbeZF83c+fjhVkJk+kzGkO393UuCgwRU8TYZc/emiyKanurEx74va/VQOqLhIyY0Nit1SVq3yXQ9jD1FJKJ3R3Yq9oWjJskv+FdayRwbb1QBPpk1KNTIfwLRxEG1vdpGF2ncNjjr3nQDVGRfet1cHn4IBpSOMP4yG/nacIx13tAU6cbHp9gDpZpLzLAQM2zcmQ+OUCCN8xTXcEujmQhIytX/4j2U+oxwOrDV9Jlvjw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ynyCJJbqCagedK2F0kJGUmSwDJt20O0KiqLz/Y/RlX0=;
 b=VTbDgGbU4qoYj8Lf8wspTjMIEKwd+u3zxT6O+q/3quSyKnyZhrRCYr1jeCo/owuDpQTd8o5w4Ay74eFUbR1nnX37JY70Gvk5s0FsRDs5uGqrsYvOJg1Dm3P2HTas3IbgH/MIbsduZaIwgKTAdrb2dQrsyVJLUSvJIBuFxAUnyrpOGXnEbvFAtVtxEn5uJAvMxmXh5ijAG2g4c+jvtqjiVs4mTz5iM+LI4UmsXD1LBzr6vEOLdSiq9scrtH//4Jc+3KdVyCko6ccL6tAZBqNovMwgpCY96QhTslBSgF9nBRB+ru6Ly8bs+SkAajsUmaVQSqCTCFD8knjh2LbsLZbLGQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ynyCJJbqCagedK2F0kJGUmSwDJt20O0KiqLz/Y/RlX0=;
 b=ISG2g9EDrIKOo9dx94jF6+IpaT1fD2C5aBguQxP19Wx/TTq5Gzjo57zmJrf7Ni7CFPnWmk415T1jge3qQcmAdU7DiuTd3eIZG3kvb6whDbWJM7eahM03CQPbLZC7VG44mRhWZQAUhAKw5N7nd1yod47Pt2PMpKt99dfCsQ4P942sng5Rl/pO2h/AKRQ231WMTpgZS2XJy1/SPH8MEXSZPKhSnMNlWFyetgRNmepdrH0x/a+rbhEsEbwi+hg2LFgxld8H47g+DSlxx1UArkSUjfC+keZaAR7M7OJj3MBFh964txJ6JMMB6W4nVEDq01I17YhaZr+SBPDjDDSR6m4+hg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8fce2549-a580-6780-759d-f287fa710640@suse.com>
Date: Thu, 6 Apr 2023 08:35:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v9 4/5] xen/arm: switch ARM to use generic implementation
 of bug.h
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org, Oleksii <oleksii.kurochko@gmail.com>
References: <cover.1680086655.git.oleksii.kurochko@gmail.com>
 <8fdb98350ae4fc6029738d0aabe13a57e1945a50.1680086655.git.oleksii.kurochko@gmail.com>
 <3a94ad32-159d-d06f-cba6-3bb40f9f2085@xen.org>
 <605245331bb93b7e60a4a9d65b19b6642d897034.camel@gmail.com>
 <9c4ca4a1-1b68-5ee0-0434-e6c9ec7d1ef6@suse.com>
 <d351a7b6d673b70d45e809123e6e42abbf7b8014.camel@gmail.com>
 <fb639472-70f3-f7c9-eaa6-37effd4965b3@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <fb639472-70f3-f7c9-eaa6-37effd4965b3@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0267.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB6855:EE_
X-MS-Office365-Filtering-Correlation-Id: f7be412f-a9c3-42e1-ea90-08db36691ac1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pR2w+7FRdyc8BXF35h4fu+6riMXivi+oFv4lZtm1CTyYc87EkRwaCARqpgkOAX2/P8XcwchatHniQMvs/GQLjFKTexfjW1JWIjWBwWlHym7HsyM8zJymapZ+XS2P4xcQYVYGxzBtEvh5K688bMW7qFDY7aiBSFxC1dIXrKqOMs1N7UGyapHAc3MxWNQmzZsXrjVJKFqOc8qrY1cAAmowKKe+bZWG6L5Fh+acE278BG8DHy5hW1jT8Qow3Km34Gz5gZQl1IxBa2wHVdk6zdqzKHM5zk8KQdb4Vlflj256ojmxVi6Q6RuV1quwpD4IrxmYgq2IosarQ0ZFxpli425hY7obLVt8obyLMzne64X7YtXkbA0Xd4WBa2W21ErpbuX7cOHUnWM/dNO/wBMAVZRtemQVJbG3Tri0RvKKX6nTftRs1gqwlIF66Y8IZBrslAYjjytrl3DEmn6aH68nLPd4eyEQI2CuZpjFZe0wVXFK3qhM1R80/bjdQt4C/D+XNJY9nTqzhrSIDeUlex8Qhl6byDuWq/pL5yKV8Sh11FYMbLlruL740Zj9k+IGSt7HBjVHUP7o2COD4XYEJfv2yyP3HTQHZA0ZinlXz4lfXrwKO1hzbRcax0a06pq8hDFoucTwZ9qs/NwaW2EjJpT58ddpZ3lUucYSIL2PXNyka47gVnc=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39850400004)(346002)(136003)(366004)(376002)(396003)(451199021)(5660300002)(31686004)(54906003)(316002)(8936002)(41300700001)(66946007)(186003)(2906002)(478600001)(4326008)(66556008)(8676002)(6916009)(66476007)(6486002)(26005)(53546011)(6506007)(6512007)(36756003)(2616005)(558084003)(31696002)(86362001)(38100700002)(41533002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dmNRaHhZOWxzRndZWWRneWlhdVpWcXlzWTRsMWI2M0htTEVhZFVQNFhXbXFV?=
 =?utf-8?B?TWFqajF4djFZa1VmaVUzSmlWSjNaSnZGN0N5TVlzSVBzampKclh4OUwydDh5?=
 =?utf-8?B?dTZZVWhxRU9OM0lmVGFGTVJpUUdFeWdJZFdkU3c1YmwydmtWbDRzMU00NXNB?=
 =?utf-8?B?R20xd0xVdTI3b0x1MzZWUHFrNDQxUzBuci9yTTVQUElIS1cvWWZsTXNMVDlZ?=
 =?utf-8?B?b1lCRkx5eUVRUkxuemhuTzgxT2RIaDdTbnFJUUVBeW1JSkNrTUtWQmFpMGtl?=
 =?utf-8?B?YUU3aDJHNTkzYzZlUFlmVThsNFBJZWJuQWdEd3pXaEtqSjhYejY2ZVAySXRT?=
 =?utf-8?B?YWtaTlJhWnVBSkhYeHpuSGF0UnVNdndhQzdTSmpGUFg2OE5JeVBCWEh1NVhN?=
 =?utf-8?B?WHp2K1ZzQUtBMUswSW9ETWNzbm42ODF2MzduRFM5VTZrMDU3Q0hSSEEya1pD?=
 =?utf-8?B?V3lZU3YrOEJVMUFlTk5wUnk0UEFaRGx1WFNXQVRvZHRaK1hCdUQvRkhQTjRP?=
 =?utf-8?B?cWxlTVR2QkxwUjZ5RUJQOUt6RFZBTkpFeGh0QWQ5djBKNEdZNGZqK29zT2FE?=
 =?utf-8?B?a29Md01uRDIyTGEvQllHZXhzcEZzTk5GcTQ1aFgrNWcyZzBFMHhsdjNiYWJY?=
 =?utf-8?B?NXN0UWJIU0VTS1B5WXJETWdOeWwwTEtzZC85b0FJbm5jRklkU0t3Z1RzUTdN?=
 =?utf-8?B?Y0k0SXRzTXgzb3hocjdrUG1zclFraUhXMXZ5Nk94VTlqYVZiUTl0bG5tdFdy?=
 =?utf-8?B?UWhDbStNQU1aYXkyVTNESldZN2lIbVBTM2x2d2d3ZGV6NTFZcGtyS084aEFy?=
 =?utf-8?B?Qk00SnVzcmUzS1J6dno2ZGdzcnlLUjNBbTdOUjRGdTRnQ0lkWXRaY2xvUEdo?=
 =?utf-8?B?VFhXN1dTZVc4OWpLcWhMSmh2NStTWm9YMkd0M0NQZzJvV1MycmQ5b2R2QkVq?=
 =?utf-8?B?K2R1Z0JtRjBiM0hmMno1ZFFXajVmeWNOZU8vTVVzdEsycWRNOUZDdGNNTWVk?=
 =?utf-8?B?ZWd5d3hJdzdsS01meDJWUlhpbWJGNWh1QzRSQWhiZnRsMy83Tzl3R2ZyYzZO?=
 =?utf-8?B?MEJJUUd4djc4MnVaMUpMcGJ5Q1ZvVnFQaXhTSUJxOXlHeTh4ZU8vRExPcEp1?=
 =?utf-8?B?eGpvT2Q4MHJ3eFVzSmg0bXdCWTV4S2hZNi8xRlRRNlUxY1JlS1gzdlVGSlh1?=
 =?utf-8?B?QkpKMVk1eDFKWXlLUVNoSUtRdmtvN3luWDFjSUQzWXU0NVpIUGJCa3V2cGM5?=
 =?utf-8?B?YnJaNlNZN2hZaEd3VlJQMUJPWjY2NndQQnl6VTZGbVl1MUpoSmIxUjBjSGs0?=
 =?utf-8?B?aU1KaHdJS2UwQ3FOUXdHcmdRQldUTC9tejFDMVVqS3N6QmREZGNwWHpYVHlY?=
 =?utf-8?B?Y1JNV1o3MGo4V3hPL1ZxOHVmN1cyTGNUWXlqdnUvYjlVc0FUc05yVWNRMGg3?=
 =?utf-8?B?dWE2U0taZ3NvSXNBNFU5QkZja0FEWE1Ob1BqVG9GdDlVbEF4ZUt0Y0JRVDYv?=
 =?utf-8?B?WGNoc29PWi94NkNJNGRuaFpjM2NZellmUGJYZS9qdlBSaW1ZTFZNaTVlVi85?=
 =?utf-8?B?M0UveWNQOWFSZmoxd3JvTkh1YlExVE55SXgvZWZEdGxNV0FXOGJFaWNpdGZG?=
 =?utf-8?B?dXBhVkFCOWdTcGp2dDBCVW1wKzhKZzk1ZFltbk1LNFU2amVOMDVNeDdranZN?=
 =?utf-8?B?ai9yRVBZU1FIekJXanhhRmppMEpmc3QvRmdyK2kvaFFpYkJHanN4Wi9SZXNO?=
 =?utf-8?B?cjBUaGRldlJnUmZzTTZxUTBBRnM5bWRxd2t4cVVpUCt2b01kaG45bnlyblA3?=
 =?utf-8?B?TUVWcDh6UnVtejVkcUxva1g0RS82WEN0TXVKSHFTWmpxVEpVVkdacTd4S0hZ?=
 =?utf-8?B?OEt1b3k3UTFVU2dzMkVFdHMraFk4N2xuNVkzaGc1ZnFaQXBFbnVRekNoVk5T?=
 =?utf-8?B?WnM0OFJCWjl3MkYzNUwrTHM3bVU2SGNrMGIvakF2SXBiWFBYRytkUzlDMHNQ?=
 =?utf-8?B?UW9ERG05MkFXWnNKcXgwb1AzcjlUb3QvbkhSWG5QZ09FdEhYZnNSZmJuVEdo?=
 =?utf-8?B?MDVQU0ZKZWdHRnBmTlM3RC9McjJsTjd0b0lGNEYwT24xU2J3TnBrOE1sOE5w?=
 =?utf-8?Q?N/87Obk2KJ4rVoTWJuIH4GIpL?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f7be412f-a9c3-42e1-ea90-08db36691ac1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 06:35:25.2833
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tXjWiTDSv0LxW9uuo0iHnpBMYR66fFoN6d2SOus9IuYdx6/Dxl/qxD1ssdwQ+cMpOtyp7D2JZD0L4j5Hd0RdgA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB6855

On 05.04.2023 18:39, Julien Grall wrote:
> To reduce the number of patches to resend, I was actually thinking of
> merging patches #1-3 and #5 (so leaving this patch alone) and modifying
> the default in a follow-up. Any thoughts?

Well, yes, that's what I did a couple of days ago already.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 06:47:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 06:47:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518743.805567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkJP6-0004te-PK; Thu, 06 Apr 2023 06:47:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518743.805567; Thu, 06 Apr 2023 06:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkJP6-0004tX-Mf; Thu, 06 Apr 2023 06:47:28 +0000
Received: by outflank-mailman (input) for mailman id 518743;
 Thu, 06 Apr 2023 06:47:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LuhO=75=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pkJP5-0004tR-Sr
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 06:47:27 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20624.outbound.protection.outlook.com
 [2a01:111:f400:7d00::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e3742f38-d446-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 08:47:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9237.eurprd04.prod.outlook.com (2603:10a6:20b:4d1::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Thu, 6 Apr
 2023 06:47:22 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6277.030; Thu, 6 Apr 2023
 06:47:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3742f38-d446-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OZfrnU7T+SRufwcOeKKS/RSTyVYU4tJzR9AZhkH/MnZjyo2Map/rNMcIOL/AJxZ0Ab9nuYRSiqAgPosU72iaFlC1TkGy5Uy0UgyXw90YYbAu1YvevdOM3gaR3ghLt5T4vyZNEXXBpSt8xVS2tjZvobWrNJ3ERtHlc4rxphhHn5xBy2qIx9wfqrXEY2aFrIW6RSNrBWeRGAZdvOF4ps6w80U82kAG1ciEl6ZuGayfMyZZIpwFMltB1owD4M3MLrQ+GKj+kvOOtloozMz4WKYPNu6vxrUa5XDu2nTupznJ8gFamASS9k0qOFOBJJUHmXS1NmxBmoXzj98ICBw4tj0WwA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8VAXMuarZPILiWCKWQf+oZsYjafLrQh8Xc8wugtXYnk=;
 b=cgnGf82Z1XpxH3jbq3qeYB40kak/Nr64lckgxFze8TVImlAuz1ieYk5746B7HRejw5dqyWMDmpKKXf4da2ACznmQfPWb958tUADlDhPscRJO5/Cg0v5KYXoAltoylCX6DAExcKgvdQYu2qGOt79nLeYfwoElMrEernTSkHG1Aos3b6cpV87YrHDLxpDiXVrMMr+H4XfCg6X2QNDin7NWLOOAgpKpiOx6n+sHzIoGX+oV0tUPCHzqxlbzDDWeBKHSLBSEiFK2KDrDHzm72dPnH6taZw5jmtrgyORlHZqAnq+g53psLpRw7kxmO7I2N32YbPYXx4w2pb9Cxe8jEPC9qg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8VAXMuarZPILiWCKWQf+oZsYjafLrQh8Xc8wugtXYnk=;
 b=j8OMJ7vPDR078k62lTtRmkm4J/Z3TtzOj3SGCPSkijRHaTSdbKhr7GgLYIsk58UwoFNSXarKVLgc8mCSYErYG2dBickPTADo9ExhmVBCLAgjsLJPMJGUhQbT/d9GRis/5+LyHmHeSKDFCdZqKasn510v3o9Hg5sl63dQtw69v1+/i2ya5dgl67io7sCKTyS23uesZuWZqqDzwWrKz5D1GpKdcrvuHSRuRF9WdOMZOgdkyZnd6Eu/1zvbZt7gZL/OQJgbJ0/haNbTeUvxAZ0D/b3KIWyZBaWbgRX/do38+GY/h/I/NB8ic8MuhFUjtxZ/DOcDVnvt2H02bE4aR0Hn5A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2336d321-1463-a1cd-97e6-10c0d7ef5e39@suse.com>
Date: Thu, 6 Apr 2023 08:47:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH 2/9] x86emul: support WRMSRNS
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <0c2ddae9-3222-9755-b6e1-35e51410093b@suse.com>
 <0c249d10-02c7-9d76-bc3d-2b3e2ece38a9@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <0c249d10-02c7-9d76-bc3d-2b3e2ece38a9@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0162.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9237:EE_
X-MS-Office365-Filtering-Correlation-Id: c8348c93-3ddf-48da-090d-08db366ac660
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0syHUBVdoV5zB1/0SJoHupe6b4juRS2m2Oh3+TkYSaTeP/ww5mXUSmJgEnEUMUnPZRS5K3LTVKmBCZBcWOea71bhWqWF2RQfvltaoVeELHLiMtW7kP5LTXEU0arjkJDLUx9vKal4Ih5dl/i5+zZITbSzZz+jh9I/vSLDqfSzCKJtzP7RLUkrSEQMvG0aOvUH+PtXQFnYbeKKp9shqG75geQb0d40AVoWzHhJanootR0IQKF61MJJ8OdaBsmudW+ms/9GxzjtfBY1Dh0oLl73iMHrAMcPXghRfO5Ck71VFQwBEHAB2x2tqZlirgtlvSKZ/CK+dtD3KjMSaqMCYg12C5RVVtanS6cgu8gx1/BGNt6uIdIpn3WPdNSGZg/74VZ4YkVy6oWwmEIag8cyYs9yZYyf7fry4dI6Y0g3CzmbBRuCfgZTilKdB2+K3l1oAYQz0L/QJK3ucgHbF++KaH7hYRasuRWKFMs/PBhwkRkq1NwY9v+yFjrPs6LZ110vWHrAYJ7jDBjeKTvZ4LjuHgVwSxzNiCQdKFstBNbQJrDxgOt5WA7qgy6UyxxlUEcBeDn8y+eVZC1eMVz8ePO3Sp38q/25/9W6RIFghkbB9dTDwc+/MFSB4RyTLo9Y+KGV5XpJWMiI8TFeS5bVrkD01VF2Uw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(136003)(346002)(39860400002)(376002)(396003)(451199021)(2616005)(6486002)(478600001)(316002)(54906003)(26005)(6506007)(6512007)(186003)(53546011)(5660300002)(2906002)(36756003)(38100700002)(4326008)(6916009)(66946007)(66476007)(8676002)(41300700001)(86362001)(66556008)(31696002)(8936002)(31686004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?T1laOU5XRERsVG0zUXZGNWlOeU91aW0yWWdaSE1zczJSeS9IMG1PWlpISUdR?=
 =?utf-8?B?OU5oM2NnRTA0Sm95WGllN3lQRFB0MG96QVRwRDFVR0VhMklZZjJ5VDZhWFda?=
 =?utf-8?B?RVllTGk2QWc5UkJvT0xIcVZ1SVpCTGpJRmlBMmJzc0N1VkdPWmRoVDh6NXdt?=
 =?utf-8?B?Y3puc2k1MGJTS0VnRWYzM1FHUDlHMnJEQjZwMFZVbUp1N0xzU29yZHRyWUVt?=
 =?utf-8?B?R0tFVzh6N2RUZ214U1M5YStMS0d1aTBpUENNaVFDWVg1cGdrR1lLTktTeitG?=
 =?utf-8?B?ckg5VnZ3aytwUmVTMVlmVnl3d1lsdmJINk1ydmFpODZtNUtHTHFTdkNOTWww?=
 =?utf-8?B?QlFGSjR6d3d5Wk5keXpWVEFTNFhKMlBUYTRYSUROMWxBbkRrOG41ZEd0V0x6?=
 =?utf-8?B?cGZvQUVjRE5iaVYyKzhLVzAxbzAvaTBqSVF6WXdmZy9ZVEVKWWxveFM3UVB6?=
 =?utf-8?B?ZzJsOG1IKzBTTmNQM3AvaVBTVTJrWjZmVWx3MVFxN1F0NXcxYklsZ2FwWVhV?=
 =?utf-8?B?UG9HQlZuUnF4cm8wd0QzdytGMWRmVzRTSzF4R2lqQTcvOHdIZkNtVGJJQWFB?=
 =?utf-8?B?bXAydjI3LzdpVVh2RTlLcmhsSjhSZmUyMWZMUEgrZTAwQ29LUEpjWHhxTVFt?=
 =?utf-8?B?NE9QV3JYNWI4ZGFkNUJ0VVVRSVNBQXpuRnZmbTRZWXNHbE44aDV3b2JYMSs2?=
 =?utf-8?B?TlpSdW01aC9STk5admR3RytacGYyU3VSWVpsNkRYeVdSNGJIeVM1RnpUdDNQ?=
 =?utf-8?B?UW1TOVV3QUlhMEFWM3VxU1d2T1QzZERPNzlYd1BBZ3REc0VsMnlTTDg5M1FN?=
 =?utf-8?B?MlZKM3VNbTU2SDlxSzhneGd4ZnVHa3BuSkN0cWc3UE9iZWdDZGVlQzlEZUxm?=
 =?utf-8?B?QXEwZHUvR05rQW53QnB6ODZVc2laMDN4SG9OTjR1TXZIZWU2c25XR29tMlh6?=
 =?utf-8?B?ZE8wYTVUNDNpS1hwZFV2UjVuelhOTGczOXE5VmFUamVka0JGMkthNEU2bi9D?=
 =?utf-8?B?YytzTTMzRzNMTUFqMGpkZ2lyWHhWbXoxNWF2NmhLQS92K3F4eEVJT3VvZEd1?=
 =?utf-8?B?WkE3emVCaFhjYnc1KzN4UTI1d0hJbHVMZ1phLzRDdG1QYTN5b1J6RlJQM1dW?=
 =?utf-8?B?bGY0TXA2WXNFZ09wR3BxNzZ1SGw3bHBQVXpESWVtTUZOUTJxT1pydkZtenhq?=
 =?utf-8?B?bDVnWHJDaklNaytlZE9Nc2RhZDU1b0llejdwc2p3T21HWjVKS1ArYXA2OG04?=
 =?utf-8?B?RFB1b2ZjNGw1V2RFMXNaRFpIVDFlMlYyWnNNYkx6Sm4rcXBsNUJOcGZyU3N1?=
 =?utf-8?B?NGpUWjhDVnRmQmtjYU1yNjFqaGloZEF4VE5JMnVtMjRTeXBQbWp6MnlMNXBi?=
 =?utf-8?B?eDV5WjBoYVVHNFVQSWxOQlNpcldwMk9uYXFYNGNlUGF4YktlYWpYbmZ0MFZL?=
 =?utf-8?B?c253RnFNZTVKRnNzVzdiaXlxeU5BWUpUV0FsWmhXV2Jac003c2hLVjY3N1NZ?=
 =?utf-8?B?VGQveG1XZUtyM0sxYWJBbDZ5S1gxbFkvS2FYaU9jci9GTExDb3pObkZHWnRT?=
 =?utf-8?B?a2YydHRwaFpDa2dhY0ZJVUdsM3ZFeDZJR2JKQitVWEZiVFV0bDQzaThFN1FL?=
 =?utf-8?B?bUhUUjhyMDBPQkhDdlgzZHJDNXZQSTFhamdrMTdwTHJSMzNoY0J3eWNPL1RY?=
 =?utf-8?B?MEh5VU40aHJzKzlSelZzV3hvY1J5L3VtUE5zVmtySzZja3JvQ005Z1R2UUg0?=
 =?utf-8?B?cE5yMkJDa0xTb0pZM281d0xhMVdqWXZZYnFKZWV0ZmJkTWpDRWVPM2pXNXdq?=
 =?utf-8?B?dXB0YjRYWVlZai9JckQ3VjVINGhDWmdCSXJITnlENmw0SVJTRDREQlRUS0dU?=
 =?utf-8?B?bXdGTWVxbHBFWGlRYUlYYWxJVXpYWEQ4MHg5VTVGcmRRUVU0dFdUMElrM2ta?=
 =?utf-8?B?ZUQwUzFFMXpBY242bWNTMVlOUmc2V05NV2ErdzZDNTNMTUZQL3lLVEtua282?=
 =?utf-8?B?ajRjcFNOdTh1STBoNTdyTTlrUU53VE56SWdTN2YwOEpGT0ROa2UyWnNaZ0l6?=
 =?utf-8?B?Q2JFT3lZcjlwaEtqUUlIdnF5TkFiaGY4K1NSWnJUWXJsSnhVMFBPZjVTRnVI?=
 =?utf-8?Q?Rc49MEsePL/TEtmFmHdIyc8Mr?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c8348c93-3ddf-48da-090d-08db366ac660
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 06:47:22.6966
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FofPGH5EDjFCYUXltfHdBcXI2prkXYR7+iojQePo7Blgolnud4oVDSdJuALh3fbKu/z4zUGDGTgLpSwdwWZN+A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9237

On 05.04.2023 23:29, Andrew Cooper wrote:
> On 04/04/2023 3:50 pm, Jan Beulich wrote:
>> This insn differs from WRMSR solely in the lack of serialization. Hence
>> the code used there can simply be used here as well, plus a feature
>> check of course.
> 
> I have a feeling this is a bit stale now that 0f01.c has moved into a
> separate file ?

Hmm, no, not really. This code was written long after the split anyway.
"Used here as well" is meant as "copy that code", not "goto ...".

>>  As there's no other infrastructure needed beyond
>> permitting the insn for PV privileged-op emulation (in particular no
>> separate new VMEXIT) we can expose the insn to guests right away.
> 
> There are good reasons not to expose this to PV guests.
> 
> The #GP fault to trigger emulation is serialising, so from the guest's
> point of view there is literally no difference between WRMSR and WRMSRNS.
> 
> All code is going to need to default to WRMSR for compatibility reasons,
> so not advertising WRMSRNS will save the guest going through and
> self-modifying itself to use a "more efficient" instruction which is not
> actually more efficient.

Well, sure, I can drop that again; I don't view a guest "self-modifying
itself" as meaningful wastage. Plus I'd consider it reasonable to expose
the feature by default only to HVM, while permitting (non-default) exposure
to PV (in which case the emul-priv-op.c change would want retaining),
except that afaict we can't express this. Even without exposing it at all,
I'd consider it kind of reasonable to allow use of the insn, in case a
guest infers its availability (or simply knows it from executing raw CPUID).

> The emulation implementation is fine, so Reviewed-by: Andrew Cooper
> <andrew.cooper3@citrix.com> but dependent on the PV guest changes being
> dropped.

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 06:53:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 06:53:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518749.805578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkJUD-0006OR-Fi; Thu, 06 Apr 2023 06:52:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518749.805578; Thu, 06 Apr 2023 06:52:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkJUD-0006OK-D4; Thu, 06 Apr 2023 06:52:45 +0000
Received: by outflank-mailman (input) for mailman id 518749;
 Thu, 06 Apr 2023 06:52:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LuhO=75=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pkJUC-0006OE-2Y
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 06:52:44 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on0606.outbound.protection.outlook.com
 [2a01:111:f400:fe0d::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a0e65c27-d447-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 08:52:42 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9449.eurprd04.prod.outlook.com (2603:10a6:10:36a::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Thu, 6 Apr
 2023 06:52:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6277.030; Thu, 6 Apr 2023
 06:52:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0e65c27-d447-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CQRus2hIwPyTwzZUg4hBt3SpxMA4KjJ/hphX9qfOsStxXMzYrywRJcWNCVIjSUSFGPAkD/Kep6eDAjcZR3vB8RsPX96jfRo13YnXXvEJnM//XD8t8f90naUIxcKPdPYxF4lL0afRQZqPN120+la6tJU2b/uf8PNMFs2hH43pwpp+/HGetk4lbmSXrn5Fa83WrYv0uSA5asW47LZrUsUSsJqXBW7bDLBAcNpgqNlHnhZpYLka4AzEbnxb2RulErtFtz17h/kp0i67raqHdaPriWUTpWTskJwJE1iLvs0wIIwDxItnF9tNZsdTKRN4CDiH1Z9IZfElDu+vaaxcYONgeg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Mp8t5y9kQGZBW8iIjdSHnxCFO8OaixoI+/AozAavHPo=;
 b=E7AqMNIop6TvoYF9NZlXhQSrGw3xLZKkZuuTpEA8UJodrKMG+d0IZrDYzG7r56rzMLwGNp83Z2Img13yEvOKEDCfWiHIv4ayjTFCN1mq8+iko7U02LAB4drz3jVvHyM0DRhEAaI7zt9EjGU5/adi1l7r8pJ8dBXgoVhz5mDmgTLswoYwVM7F/pixeFpq8Sc7cAZ/PFa/MljelFV3fZppbDz8b1nT9u3kCZZAQLzH5IllBhoZzR+/lCh0nJn67j/lI4RT1QMOJnGFjxvHuv1CDQEa8tp2DjH01sE6jpia4wog55cdg+9y9z2nFuqba9qnjxcoltUB8GG/RByuUJylew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Mp8t5y9kQGZBW8iIjdSHnxCFO8OaixoI+/AozAavHPo=;
 b=YFRhgK5+WWZPpSTR/uJQFx0UFzSCgyTDrNxtgMPCYrz3gPtIbs3vXI/obaZhpaVrIYPLI2uFyewdgjHLPRPwBqtZZkH41VZI0wDytirAQ4Tydztlaa4lJn1hYngkC5hFKP++0SekrvZ9J4bazjkPTYVY5xmaH+RSPlBBpyQi+eIKoxBYf1nzB1/EZ54Kh4tfQQMjpfyE1fw7wMb0ewg19IuLSKKZ7Z04rbefOSufetPPtVgDTulg3659HfdmH3z0AtQT1UKpFKtBfItwVUJZIga6qjn7Hjzwpj1/+LADuCtrIFg/CQS7H6ZEPwnTkic87vzeauip6nfm0lQK6ZgIWg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d322737f-3df2-5e81-8f1e-e6bee30627a8@suse.com>
Date: Thu, 6 Apr 2023 08:52:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v4 2/3] x86/platform: introduce XENPF_get_ucode_revision
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Sergey Dyasli <sergey.dyasli@citrix.com>
References: <20230404160655.2354-1-sergey.dyasli@citrix.com>
 <20230404160655.2354-3-sergey.dyasli@citrix.com>
 <a1c16028-3f33-36bb-36cd-b1ad2664b0f9@suse.com>
 <724da9d4-2c8f-dc45-78de-5a50e87adba1@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <724da9d4-2c8f-dc45-78de-5a50e87adba1@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0050.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB9449:EE_
X-MS-Office365-Filtering-Correlation-Id: d240eac0-f57c-4b59-6be0-08db366b81ee
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KKcth/ur1s3WuiUanGPzFocdWfxoN/+2Zy+ro2zGSCl0PphPqsGiVZAKVzYxXuSQop6VDIT1Tg51DUWD6u5x9F176D6skktfmdlEiTIR+IwhsBEGmHMKH7t9dzf9gf5L5LMbiV4GhfeHdu8OFEKsCnnisx+HK6nVrHBKvmUnTh8IBxCVw3FKD285okEptZZHhHugQ8u0OJf06fwuYTA4u7dRfpxhbFhWL8PPL7wmE/SDpXPm3j4NxW3CutaB6f4X01wQsQcO16UOZ/Vw8CsD3H2hFpSVPT9aUg7yiCEuJA0BMQ4RWfvnukAI9KMGOa5KIULlDChqiegcNyoudL+7knzkDDpHH/VEMLSWTUBjQfXyPO8VDQrVWbpyUFLPtk+9qfnvEomG4zactuCLUtejmQNiuyY+0sJGGLcwwJBK01Ebct0/yboFirZngc1Nr9KEZ1bi5QcyGBvqlhIWvIzlGHZttusWjwmjgIA7SdBpQh74j+EsRQSVFlFDHPikB5R3m1SkV9+Xmv7j6OSwLAZCE0bjhu76+FWcfSC/xjYA7hF1+8r16b+z9ygDOnD2EDUqU53BFdeC3tB4xxI/S4Ons0lDraez1jcvLV1fIxd4E9ATtt4NH5k60GD3aj9/VEM8fHZBiXRXnhzBEP/cbf8www==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(396003)(346002)(39860400002)(376002)(366004)(451199021)(186003)(6512007)(2906002)(6506007)(26005)(53546011)(38100700002)(31696002)(86362001)(36756003)(2616005)(83380400001)(8936002)(41300700001)(5660300002)(316002)(6486002)(478600001)(54906003)(31686004)(66556008)(66476007)(8676002)(6916009)(4326008)(66946007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Z1psTzFpdkZINlJyWWRzRTI2bHArRU1uSEk0OTMzTC9ydzFTdlJFa2tnNmtW?=
 =?utf-8?B?eCtZZ1FQMHRRYllYdjE4NlJGKzB6KzJtQlNlQ3kwN1N4Rm1LbkZjYjU1ZEpV?=
 =?utf-8?B?cWl5Vy9ZRjBCZUVRaXd6czdpKys0WFN5ckNSVWgzZHQ1Y3pvdnVUODJyZ21h?=
 =?utf-8?B?ZFFzNXRsOVRlZDRHaDBpZ0syem5XSmxCb0N4ckE0NFNBSHBwclRVRjJPM2Z0?=
 =?utf-8?B?TVpLaU51QXh0K1hjSHVGaUpmVlRmTVlFZkZjNlBEU3BLdjg1YW50Q2NrRU5E?=
 =?utf-8?B?RStXeTBvYnlyRHBaL2tVamRrWUJXR285S3ZzYmZTYmN5aG5IVzcyTndwL1RN?=
 =?utf-8?B?Q2NNUUpRNXk5bWNwRDJXMTJlaW9qMEN6MWFKS2U0R0xzOVgvM0h3MklMbnE5?=
 =?utf-8?B?ZEkrLzF6aGZHMGtsZGlFU3k5cVZ2UndZcnFraHIzK2RtQUJBWk5VU1oyUkM4?=
 =?utf-8?B?NlFOOU5VRVpuc2o2Y0t0VWZLRk1Xa2RoMUh4ejVwQ3JPekY3dGRBdGdteDNv?=
 =?utf-8?B?NzFQMW50a2NuRzNueUlOSEZQdkYvTitjWUZuOFFjK2J5YkR0aDdXbmNuZjJ1?=
 =?utf-8?B?VnZGMnRVL3g2SmQ5YUFTdUhNVTZ5Z1pYU0V5K25sa0FNcksrVHYzUW9HRVJu?=
 =?utf-8?B?Z1lLcVU4NHdkZEhJdjVSYk9MNFhuZWU3RjQveDMyQ2I1ZUF2SkRDT0daZWUv?=
 =?utf-8?B?Qm03Z2lIbXZMS1lPMWZNYktWYlRmOE1icWUzZVVUNjhtbzg3cTVLMEpzZlRN?=
 =?utf-8?B?WVhUbmRPa0tXamJpUnV2NmFDTDRWSDlhbHRkdEI3cG0raTBraG16eFJyVm1I?=
 =?utf-8?B?Z3BXbjV6b2xNUkhodFJldDlVemlGS1E5Rk4zcWFQVjFOblVES09oUVNtblpH?=
 =?utf-8?B?Yld5Zmp0SEtFNTVZR0lWZk5IK0Y3VEEvRlFjRTMrQ3RDaFVJb1d6MGhsU1ZQ?=

On 06.04.2023 01:02, Andrew Cooper wrote:
> On 05/04/2023 9:56 am, Jan Beulich wrote:
>> On 04.04.2023 18:06, Sergey Dyasli wrote:
>>> --- a/tools/libs/ctrl/xc_misc.c
>>> +++ b/tools/libs/ctrl/xc_misc.c
>>> @@ -243,6 +243,24 @@ int xc_get_cpu_version(xc_interface *xch, struct xenpf_pcpu_version *cpu_ver)
>>>      return 0;
>>>  }
>>>  
>>> +int xc_get_ucode_revision(xc_interface *xch,
>>> +                          struct xenpf_ucode_revision *ucode_rev)
>>> +{
>>> +    int ret;
>>> +    struct xen_platform_op op = {
>>> +        .cmd = XENPF_get_ucode_revision,
>>> +        .u.ucode_revision.cpu = ucode_rev->cpu,
>>> +    };
>>> +
>>> +    ret = do_platform_op(xch, &op);
>>> +    if ( ret != 0 )
>>> +        return ret;
>> Is there anything wrong with omitting this if() and ...
>>
>>> +    *ucode_rev = op.u.ucode_revision;
>>> +
>>> +    return 0;
>> ... using "return ret" here?
> 
> Conceptually, yes.  *ucode_rev oughtn't to be written to on failure.
> 
> More importantly though, what Sergey wrote is consistent with the vast
> majority of libxc, and consistency is far more important than a marginal
> perf improvement which you won't be able to measure.

My remark was entirely unrelated to performance, and instead solely to
(source) code size.

Jan
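[Editorial note: for reference, a compilable sketch of the variant under discussion — a single return statement as Jan asks about, while still leaving *ucode_rev untouched on failure, as Andrew prefers. The stub types and the stub do_platform_op() below are hypothetical stand-ins so the control flow can be exercised outside the tree; the real definitions live in xen/include/public/platform.h and libxc.]

```c
/* Minimal hypothetical stand-ins for the Xen/libxc types involved. */
typedef struct xc_interface_core xc_interface;

struct xenpf_ucode_revision {
    unsigned int cpu;
    unsigned int signature;
    unsigned int revision;
};

struct xen_platform_op {
    unsigned int cmd;
    union { struct xenpf_ucode_revision ucode_revision; } u;
};

#define XENPF_get_ucode_revision 65 /* illustrative value only */

/* Stub hypercall: fail for cpu 99, otherwise report revision 0x1234. */
static int do_platform_op(xc_interface *xch, struct xen_platform_op *op)
{
    (void)xch;
    if ( op->u.ucode_revision.cpu == 99 )
        return -1;
    op->u.ucode_revision.revision = 0x1234;
    return 0;
}

/* One return statement, but *ucode_rev is still only written on success. */
int xc_get_ucode_revision(xc_interface *xch,
                          struct xenpf_ucode_revision *ucode_rev)
{
    struct xen_platform_op op = {
        .cmd = XENPF_get_ucode_revision,
        .u.ucode_revision.cpu = ucode_rev->cpu,
    };
    int ret = do_platform_op(xch, &op);

    if ( !ret )
        *ucode_rev = op.u.ucode_revision;

    return ret;
}
```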


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 07:10:45 2023
Message-ID: <b771061a-4fc1-8c15-17c5-8696f515d021@suse.com>
Date: Thu, 6 Apr 2023 09:10:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering
 guest"
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230405215245.2137356-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230405215245.2137356-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 05.04.2023 23:52, Andrew Cooper wrote:
> At the time of XSA-170, the x86 instruction emulator was genuinely broken.  It
> would load arbitrary values into %rip and putting a check here probably was
> the best stopgap security fix.  It should have been reverted following c/s
> 81d3a0b26c1 "x86emul: limit-check branch targets" which corrected the emulator
> behaviour.
> 
> However, everyone involved in XSA-170, myself included, failed to read the SDM
> correctly.  On the subject of %rip consistency checks, the SDM stated:
> 
>   If the processor supports N < 64 linear-address bits, bits 63:N must be
>   identical
> 
> A non-canonical %rip (and SSP more recently) is an explicitly legal state in
> x86, and the VMEntry consistency checks are intentionally off-by-one from a
> regular canonical check.
> 
> The consequence of this bug is that Xen will currently take a legal x86 state
> which would successfully VMEnter, and corrupt it into having non-architectural
> behaviour.
> 
> Furthermore, in the time this bugfix has been pending in public, I
> successfully persuaded Intel to clarify the SDM, adding the following
> clarification:
> 
>   The guest RIP value is not required to be canonical; the value of bit N-1
>   may differ from that of bit N.
> 
> Fixes: ffbbfda377 ("x86/VMX: sanitize rIP before re-entering guest")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

I am kind of okay with such a full revert, but I'd consider it quite helpful
if the description made clear why the alternative of instead performing the
spec-mandated (off-by-one) check isn't necessary or useful. The emulator
having gained the respective checking is only part of the reason for this, as
I understand it. Plus bugs may be introduced into the emulator again, in
which case the checking here could guard against needing to issue an XSA.

Jan
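[Editorial note: to make the off-by-one concrete — assuming a CPU with 48 linear-address bits, a regular canonical check requires bits 63:47 of an address to be identical, while the VM-entry consistency check on guest %rip only requires bits 63:48 to match, so bit 47 may legally differ from bit 48. A standalone sketch, not Xen's actual code; LA_BITS and both helper names are illustrative:]

```c
#include <stdbool.h>
#include <stdint.h>

#define LA_BITS 48 /* assumed linear-address width */

/* Regular canonical check: bits 63:(LA_BITS-1) must all be identical,
 * i.e. the address sign-extends from bit 47. */
static bool is_canonical(uint64_t addr)
{
    return (uint64_t)(((int64_t)(addr << (64 - LA_BITS))) >> (64 - LA_BITS))
           == addr;
}

/* VM-entry consistency check on guest %rip: only bits 63:LA_BITS must be
 * identical, i.e. the value sign-extends from bit 48 - one bit more
 * permissive than the regular canonical check. */
static bool vmentry_rip_ok(uint64_t rip)
{
    return (uint64_t)(((int64_t)(rip << (63 - LA_BITS))) >> (63 - LA_BITS))
           == rip;
}
```

A value such as 0x0000800000000000 (bit 47 set, bits 63:48 clear) fails the regular canonical check but passes the VM-entry one — exactly the class of %rip values the reverted sanitisation was corrupting.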


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 08:00:32 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180149-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 180149: tolerable trouble: fail/pass/starved - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Apr 2023 08:00:17 +0000

flight 180149 linux-5.4 real [real]
flight 180165 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180149/
http://logs.test-lab.xenproject.org/osstest/logs/180165/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl            7 xen-install         fail pass in 180165-retest
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install  fail pass in 180165-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180066
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180066
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180066
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180066
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180066
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180066
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180066
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180066
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180066
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                32bea3bac5ca484c6f7e302c8c96fc686f62e7b4
baseline version:
 linux                09b1a76e7879184fb35d71a221cae9451b895fff

Last test of basis   180066  2023-03-30 13:12:10 Z    6 days
Testing same since   180149  2023-04-05 09:43:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrien Thierry <athierry@redhat.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alexander Aring <aahringo@redhat.com>
  Alexander Lobakin <aleksander.lobakin@intel.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Alexandre Ghiti <alex@ghiti.fr>
  Alexei Starovoitov <ast@kernel.org>
  Alvin Šipraga <alsi@bang-olufsen.dk>
  Anand Jain <anand.jain@oracle.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Arpana Arland <arpanax.arland@intel.com> (A Contingent worker at Intel)
  Arseniy Krasnov <AVKrasnov@sberdevices.ru>
  Bharath SM <bharathsm@microsoft.com>
  Caleb Sander <csander@purestorage.com>
  Chris Paterson (CIP) <chris.paterson2@renesas.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Claudiu Beznea <claudiu.beznea@microchip.com> # on SAMA7G5
  Colin Ian King <colin.king@canonical.com>
  Coly Li <colyli@suse.de>
  Corinna Vinschen <vinschen@redhat.com>
  Cristian Marussi <cristian.marussi@arm.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniil Tatianin <d-tatianin@yandex-team.ru>
  David Disseldorp <ddiss@suse.de>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dragos-Marian Panait <dragos.panait@windriver.com>
  Emanuele Ghidoli <emanuele.ghidoli@toradex.com>
  Enrico Sau <enrico.sau@gmail.com>
  Eric Biggers <ebiggers@google.com>
  Eric Dumazet <edumazet@google.com>
  Fabio Estevam <festevam@denx.de>
  Faicker Mo <faicker.mo@ucloud.cn>
  Fedor Pchelkin <pchelkin@ispras.ru>
  Felix Fietkau <nbd@nbd.name>
  Finn Thain <fthain@linux-m68k.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Frank Crawford <frank@crawford.emu.id.au>
  Gaosheng Cui <cuigaosheng1@huawei.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Geoff Levand <geoff@infradead.org>
  George Kennedy <george.kennedy@oracle.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hangyu Hua <hbh25y@gmail.com>
  Hans de Goede <hdegoede@redhat.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Heiko Carstens <hca@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Horatiu Vultur <horatiu.vultur@microchip.com>
  Ivan Bornyakov <i.bornyakov@metrotek.ru>
  Ivan Orlov <ivan.orlov0322@gmail.com>
  Jakob Koschel <jkl820.git@gmail.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan Kara <jack@suse.cz>
  Jan Kara via Ocfs2-devel <ocfs2-devel@oss.oracle.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Joel Selvaraj <joelselvaraj.oss@gmail.com>
  Johan Hovold <johan+linaro@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Joseph Qi <joseph.qi@linux.alibaba.com>
  Juergen Gross <jgross@suse.com>
  Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Li Zetao <lizetao1@huawei.com>
  Liang He <windhl@126.com>
  Lin Li <lilin@redhat.com>
  Lin Ma <linma@zju.edu.cn>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenz Bauer <lmb@isovalent.com>
  Lorenz Bauer <lorenz.bauer@isovalent.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lucas Stach <l.stach@pengutronix.de>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Maher Sanalla <msanalla@nvidia.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marek Szlosek <marek.szlosek@intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin KaFai Lau <martin.lau@kernel.org>
  Maurizio Lombardi <mlombard@redhat.com>
  Meena Shanmugam <meenashanmugam@google.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Schmitz <schmitzmic@gmail.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Snitzer <snitzer@kernel.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  msizanoen <msizanoen@qtmlabs.xyz>
  Nathan Huckleberry <nhuck@google.com>
  NeilBrown <neilb@suse.de>
  Nilesh Javali <njavali@marvell.com>
  Oliver Hartkopp <socketcan@hartkopp.net>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Abeni <pabeni@redhat.com>
  Paulo Alcantara (SUSE) <pc@manguebit.com>
  Paulo Alcantara <pc@manguebit.com>
  Pawel Laszczak <pawell@cadence.com>
  Peter Chen <peter.chen@kernel.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Radoslaw Tyl <radoslawx.tyl@intel.com>
  Rafal Romanowski <rafal.romanowski@intel.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Roman Kagan <rkagan@amazon.de>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shawn Guo <shawnguo@kernel.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Shyam Prasad N <sprasad@microsoft.com>
  Song Liu <song@kernel.org>
  SongJingyi <u201912584@hust.edu.cn>
  Stan Johnson <userm57@yahoo.com>
  Stefan Schmidt <stefan@datenfreihafen.org>
  Steffen Bätz <steffen@innosonix.de>
  Stephan Gerhold <stephan.gerhold@kernkonzept.com>
  Steve French <stfrench@microsoft.com>
  Sudeep Holla <sudeep.holla@arm.com>
  Szymon Heidrich <szymon.heidrich@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Theodore Ts'o <tytso@mit.edu>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Tomas Henzl <thenzl@redhat.com>
  Tony Krowiak <akrowiak@linux.ibm.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Tudor Ambarus <tudor.ambarus@linaro.org>
  Tzung-Bi Shih <tzungbi@kernel.org>
  Vasily Gorbik <gor@linux.ibm.com>
  Vincent Guittot <vincent.guittot@linaro.org>
  Wei Chen <harperchen1110@gmail.com>
  Wolfram Sang <wsa@kernel.org>
  Xu Yang <xu.yang_2@nxp.com>
  Yaroslav Furman <yaro330@gmail.com>
  Ye Bin <yebin10@huawei.com>
  Yu Kuai <yukuai3@huawei.com>
  Zhang Changzhong <zhangchangzhong@huawei.com>
  Zhang Qiao <zhangqiao22@huawei.com>
  Zheng Wang <zyytlz.wz@163.com>
  Zubin Mithra <zsm@google.com>
  Álvaro Fernández Rojas <noltari@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   09b1a76e7879..32bea3bac5ca  32bea3bac5ca484c6f7e302c8c96fc686f62e7b4 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 08:44:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 08:44:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518770.805608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLEE-0002C8-7i; Thu, 06 Apr 2023 08:44:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518770.805608; Thu, 06 Apr 2023 08:44:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLEE-0002C1-3X; Thu, 06 Apr 2023 08:44:22 +0000
Received: by outflank-mailman (input) for mailman id 518770;
 Thu, 06 Apr 2023 08:44:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pkLED-0002Bv-A9
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 08:44:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pkLE9-0004hU-1e; Thu, 06 Apr 2023 08:44:17 +0000
Received: from [54.239.6.184] (helo=[192.168.21.55])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pkLE8-0004Fp-PP; Thu, 06 Apr 2023 08:44:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=cfEE4BoPCplRbPqVm1JQdcTkjW5f9vjLSMyN/3E6LSo=; b=X2Ie0M4qskuaS7WagpeRGxeCsL
	sjWn8UDIVD0sBJQX23cB+Qkaopo5mVqh+wuLcGfyY3j0QVphtiD9oC/03OsSTL8CtA6muRKuYnn7Z
	lZFrFd9CotT0PcacAyrGh80BKBSGOB7OyXmQDrKXvucl6MxqWAMxoeJWcySCS5Lhubk8=;
Message-ID: <cce9bac0-0b1f-eca1-6d21-10b2371fbe25@xen.org>
Date: Thu, 6 Apr 2023 09:44:14 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [PATCH v9 4/5] xen/arm: switch ARM to use generic implementation
 of bug.h
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org, Oleksii <oleksii.kurochko@gmail.com>
References: <cover.1680086655.git.oleksii.kurochko@gmail.com>
 <8fdb98350ae4fc6029738d0aabe13a57e1945a50.1680086655.git.oleksii.kurochko@gmail.com>
 <3a94ad32-159d-d06f-cba6-3bb40f9f2085@xen.org>
 <605245331bb93b7e60a4a9d65b19b6642d897034.camel@gmail.com>
 <9c4ca4a1-1b68-5ee0-0434-e6c9ec7d1ef6@suse.com>
 <d351a7b6d673b70d45e809123e6e42abbf7b8014.camel@gmail.com>
 <fb639472-70f3-f7c9-eaa6-37effd4965b3@xen.org>
 <8fce2549-a580-6780-759d-f287fa710640@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <8fce2549-a580-6780-759d-f287fa710640@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 06/04/2023 07:35, Jan Beulich wrote:
> On 05.04.2023 18:39, Julien Grall wrote:
>> To reduce the amount of patch to resend, I was actually thinking to
>> merge patch #1-3 and #5 (so leave this patch alone) and modify the
>> default in a follow-up. Any thoughts?
> 
> Well, yes, that's what I did a couple of days ago already.

Ah. I didn't check the tree when replying. So ignore me.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 08:58:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 08:58:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518776.805629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLRw-000446-OO; Thu, 06 Apr 2023 08:58:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518776.805629; Thu, 06 Apr 2023 08:58:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLRw-00043z-JQ; Thu, 06 Apr 2023 08:58:32 +0000
Received: by outflank-mailman (input) for mailman id 518776;
 Thu, 06 Apr 2023 08:58:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ozA9=75=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1pkLRv-0003pI-NJ
 for xen-devel@lists.xen.org; Thu, 06 Apr 2023 08:58:31 +0000
Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com
 [2607:f8b0:4864:20::102a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 32127944-d459-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 10:58:30 +0200 (CEST)
Received: by mail-pj1-x102a.google.com with SMTP id x15so36653780pjk.2
 for <xen-devel@lists.xen.org>; Thu, 06 Apr 2023 01:58:28 -0700 (PDT)
Received: from localhost ([122.172.85.8]) by smtp.gmail.com with ESMTPSA id
 x22-20020aa793b6000000b0062d85a1df56sm837151pff.178.2023.04.06.01.58.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 Apr 2023 01:58:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32127944-d459-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680771507;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=wJIrzTwFQA01WlKUp5gnPqmqgvFFVnJLpG/b7bte7e8=;
        b=fYj9IhtspZTO1c45rT5cqb2UnaJWejpmy/I7Jb3yPwkHvmLkvAK3m4181eanFeguwO
         SeDbDT7dPWprn5RlScKWm3FaPgdvFky7QhW80ffdPOLiGzm6WXwdxWZ5U6S3Bb+e9l/C
         Yi9H5wbFVxuyI7f7PvJiQ53X9ftCPgmsswvAfuG7TrwuGfHnbsaQdLLhq64h8suekOuZ
         xFYPfQMLzF64xH5EDGcAkqzG9axCSQa/U81f3qPm1ffOGvWlAJDiOl7wbH+xqAPVDvmH
         Sk+CfT78gQcLg9hXyuQ6pdRjvJuz12Ukytnr2gkEZYB8s4Wxf3qOK9F5AyqYTgzgfoI0
         DYjQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680771507;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=wJIrzTwFQA01WlKUp5gnPqmqgvFFVnJLpG/b7bte7e8=;
        b=TzD7GvWb5wFXdWAICq7MDrGRuVOX163coNn3cs4vWx6+xDoSnJnqVDo2oYV3B+TdeM
         M6QIYJt0uLeDPqVfa1qRg+hoTNi+bEx/jCAYqaV1JXwdkPuaqlnJ/VNKW+0w4QPJZwx+
         +CCdh+R0+BDOWwoiqnt7zzAtax0bdFPfnX9aAFmzebKT+AcvoVFBdL7miw/vKJvDgxvP
         eWxVaNyNHemVtcZXWqqCQw/aNcYcRv31yfFv37/u5yJsFganXFRCW2YhGztVqQiTu1gR
         rCvHTRHh4BRXRNCKxhiwhfM8WRfcOaDVYvO0FLv5YKWw+53LZAqYx9alOtLstEJrjK/K
         ZILw==
X-Gm-Message-State: AAQBX9cO677LMLjxtf9H/ch+OtZ5HkZ1Z4Nm0UsGdP9BvLmg2BRfs0yi
	eCQWw2qGLVhiFlJTOEzMcwrYDAGFf7mR9vGKkQw=
X-Google-Smtp-Source: AKy350aan+Rv6En7fZ329g/XSQU5SCipZqx70PrdyPW+xTyx5BJ0yf8Nh3iuU7/nwLuqljfErs6vsw==
X-Received: by 2002:a05:6a20:ba9c:b0:d9:4d77:643e with SMTP id fb28-20020a056a20ba9c00b000d94d77643emr2166510pzb.4.1680771506861;
        Thu, 06 Apr 2023 01:58:26 -0700 (PDT)
From: Viresh Kumar <viresh.kumar@linaro.org>
To: xen-devel@lists.xen.org,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	stratos-dev@op-lists.linaro.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>,
	Erik Schilling <erik.schilling@linaro.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V3 2/2] libxl: fix matching of generic virtio device
Date: Thu,  6 Apr 2023 14:28:18 +0530
Message-Id: <888e60d2ec49f53230bc82df393b6bed4180cb8a.1680771422.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 2.31.1.272.g89b43f80a514
In-Reply-To: <18458fa39433ce4ac950a0a20cc64da93db0b03a.1680771422.git.viresh.kumar@linaro.org>
References: <18458fa39433ce4ac950a0a20cc64da93db0b03a.1680771422.git.viresh.kumar@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The strings won't be an exact match, as we are only looking to match the
prefix here, i.e. "virtio,device". This is already done properly in the
libxl_virtio.c file; let's do the same here too.
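As a standalone sketch of the prefix match this patch performs (assuming, as in
libxl, that VIRTIO_DEVICE_TYPE_GENERIC expands to the string "virtio,device";
the helper name is hypothetical):

```c
#include <string.h>

/* Assumed to match libxl's definition of the generic virtio type prefix. */
#define VIRTIO_DEVICE_TYPE_GENERIC "virtio,device"

/* Return 1 if `type` begins with the "virtio,device" prefix, so both
 * "virtio,device" and "virtio,device1a" are accepted.  sizeof includes
 * the trailing NUL, hence the "- 1" for the prefix length. */
static int is_generic_virtio(const char *type)
{
    return strncmp(type, VIRTIO_DEVICE_TYPE_GENERIC,
                   sizeof(VIRTIO_DEVICE_TYPE_GENERIC) - 1) == 0;
}
```

Using sizeof on the string literal (rather than strlen) lets the compiler fold
the prefix length to a constant.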

Fixes: 43ba5202e2ee ("libxl: add support for generic virtio device")
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
V2->V3:
- Tag from Oleksandr.

 tools/libs/light/libxl_arm.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index ddc7b2a15975..97c80d7ed0fa 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -1033,10 +1033,14 @@ static int make_virtio_mmio_node_device(libxl__gc *gc, void *fdt, uint64_t base,
     } else if (!strcmp(type, VIRTIO_DEVICE_TYPE_GPIO)) {
         res = make_virtio_mmio_node_gpio(gc, fdt);
         if (res) return res;
-    } else if (strcmp(type, VIRTIO_DEVICE_TYPE_GENERIC)) {
-        /* Doesn't match generic virtio device */
-        LOG(ERROR, "Invalid type for virtio device: %s", type);
-        return -EINVAL;
+    } else {
+        int len = sizeof(VIRTIO_DEVICE_TYPE_GENERIC) - 1;
+
+        if (strncmp(type, VIRTIO_DEVICE_TYPE_GENERIC, len)) {
+            /* Doesn't match generic virtio device */
+            LOG(ERROR, "Invalid type for virtio device: %s", type);
+            return -EINVAL;
+        }
     }
 
     return fdt_end_node(fdt);
-- 
2.31.1.272.g89b43f80a514



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 08:58:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 08:58:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518775.805618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLRv-0003pG-E2; Thu, 06 Apr 2023 08:58:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518775.805618; Thu, 06 Apr 2023 08:58:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLRv-0003p9-BC; Thu, 06 Apr 2023 08:58:31 +0000
Received: by outflank-mailman (input) for mailman id 518775;
 Thu, 06 Apr 2023 08:58:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ozA9=75=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1pkLRu-0003p3-7w
 for xen-devel@lists.xen.org; Thu, 06 Apr 2023 08:58:30 +0000
Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com
 [2607:f8b0:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 304a2780-d459-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 10:58:26 +0200 (CEST)
Received: by mail-pl1-x62d.google.com with SMTP id ix20so36940117plb.3
 for <xen-devel@lists.xen.org>; Thu, 06 Apr 2023 01:58:25 -0700 (PDT)
Received: from localhost ([122.172.85.8]) by smtp.gmail.com with ESMTPSA id
 f8-20020a170902860800b001a1adbe215asm904655plo.142.2023.04.06.01.58.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 Apr 2023 01:58:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 304a2780-d459-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680771504;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=DEQiq5V6EhExtdzg7l+GGgGWzMI4MYV9sltCbuia9S8=;
        b=DqZf+45MshOs1nlVyinOhqFrKCuzw0gQlsLOP0ABgx6laBf7Pg+beSuaze8QVVfghR
         awdDutM8sqXBsgP+BXTWzNrINKIlxL9iBKaOC6FIfHodelL7FbNSBDgVYjhwvLKOgE5m
         MdjjBz40SWhgR+pdZt5+jIRFtwB9GjMGosHqmqyaRwzYl/f1JBuTl1aMIqxNxdBI9KHf
         zicgU3cKtHZbiUWCVP2oqG6BYFOJGaCiOczU/tO9FR/zPQr/Xvr7opcZkYDOOoS4h7PS
         iQfhUMh6M2zx9Qn2jzsUzYLyunaEwrK0rd1FhMpQJRTeuVUHLGXqkeJrdlGtH/t/RBDy
         5U7Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680771504;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=DEQiq5V6EhExtdzg7l+GGgGWzMI4MYV9sltCbuia9S8=;
        b=VnPy/njpmlu+i3yzYGBra32JD7STFd7O+TZPEwsAD91nD84kQB3vuNVkx929KUxpYB
         bG3cbUt53eDvw1JmQ5rFFTWRS0aoNPBPTTJ1TxerJn5LVKYT7JSRpJYlDLakdIj/KqIb
         YAqwF30DS1qPucvs0A61m/tRiKb4zLHkEZMsL4/lhptcR4ygQHQsnLfmc9rEiIguGCVa
         TqapjUmyTqHJ7DH6q/dRAqTsORxSlMQ4viKwjP+gu54ASlQFUIu+FwbqJaxkgBZvsVdN
         nzsj9lYPRvIw4Q6ZBjpj6KeMUQh3TXrS+zVwB5dSIJxYxOy8AnWMJFuWPBzxxGIuCD2y
         XMVg==
X-Gm-Message-State: AAQBX9d1iJ5bnqB/FBUhJ77brRGKdW0E/Sru9nzIZhiUhPxWoJuamnc0
	rILMSPNqqjIiz+8iAqE/aJLFQaK7ONAXOf45zFc=
X-Google-Smtp-Source: AKy350YftpXWO2PU4R2xgveFOOV/hWCgUBa/eRONakgCC5lW+aSPlmCOp88OEaOYp2Sq6SUnduQGYA==
X-Received: by 2002:a17:902:f986:b0:1a2:85f0:e73d with SMTP id ky6-20020a170902f98600b001a285f0e73dmr8431947plb.33.1680771503940;
        Thu, 06 Apr 2023 01:58:23 -0700 (PDT)
From: Viresh Kumar <viresh.kumar@linaro.org>
To: xen-devel@lists.xen.org,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	stratos-dev@op-lists.linaro.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>,
	Erik Schilling <erik.schilling@linaro.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V3 1/2] docs: Allow generic virtio device types to contain device-id
Date: Thu,  6 Apr 2023 14:28:17 +0530
Message-Id: <18458fa39433ce4ac950a0a20cc64da93db0b03a.1680771422.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 2.31.1.272.g89b43f80a514
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For generic virtio devices, where we don't need to add compatible or
other special DT properties, the type field is set to "virtio,device".

But this misses the case where the user sets the type with a valid
virtio device id as well, like "virtio,device1a" for the file system
device. The complete list of virtio device ids is available here:

https://docs.oasis-open.org/virtio/virtio/v1.2/cs01/virtio-v1.2-cs01.html#x1-2160005

Update documentation to support that as well.
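As an illustrative sketch (not part of the patch), the expected type string for
a given virtio device id — lower-case hex, no "0x" prefix — could be formed
like this (the helper name is hypothetical):

```c
#include <stdio.h>

/* Build a "virtio,device<N>" type string, where N is the virtio device
 * id rendered as lower-case hex without a "0x" prefix,
 * e.g. id 0x1a yields "virtio,device1a" (the file system device). */
static void virtio_type(unsigned int id, char *buf, size_t len)
{
    /* %x already prints lower-case hex with no prefix. */
    snprintf(buf, len, "virtio,device%x", id);
}
```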

Fixes: dd54ea500be8 ("docs: add documentation for generic virtio devices")
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
V2->V3:
- Updated commit log and clarified / fixed doc.
- Tag from Oleksandr.

 docs/man/xl.cfg.5.pod.in | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 10f37990be57..24ac92718288 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1608,8 +1608,11 @@ example, "type=virtio,device22" for the I2C device, whose device-tree binding is
 
 L<https://www.kernel.org/doc/Documentation/devicetree/bindings/i2c/i2c-virtio.yaml>
 
-For generic virtio devices, where we don't need to set special or compatible
-properties in the Device Tree, the type field must be set to "virtio,device".
+For other generic virtio devices, where we don't need to set special or
+compatible properties in the Device Tree, the type field must be set to
+"virtio,device" or "virtio,device<N>", where "N" is the virtio device id in
+hexadecimal format, without the "0x" prefix and all in lower case, like
+"virtio,device1a" for the file system device.
 
 =item B<transport=STRING>
 
-- 
2.31.1.272.g89b43f80a514



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 09:08:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 09:08:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518783.805638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLbQ-0005uo-Hf; Thu, 06 Apr 2023 09:08:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518783.805638; Thu, 06 Apr 2023 09:08:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLbQ-0005uh-Em; Thu, 06 Apr 2023 09:08:20 +0000
Received: by outflank-mailman (input) for mailman id 518783;
 Thu, 06 Apr 2023 09:08:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkLbP-0005uX-65; Thu, 06 Apr 2023 09:08:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkLbP-0005HC-3Y; Thu, 06 Apr 2023 09:08:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkLbO-0001MB-KJ; Thu, 06 Apr 2023 09:08:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkLbO-0006w7-Jr; Thu, 06 Apr 2023 09:08:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EnX4CyUyIlLVd/OPkcT7LpM7eS6oFPZK49HT/hv48gc=; b=VhHd+yKtHh7/uMgxLaj/88lmA4
	L/Wbjadli4j3miqIQ7ev2XOovUH3e260o03+XF/4ERal06MhMGsUf/3TSOYIogP2jaIt84qlVLYcr
	JA4L2vg3kOS7mxHTpTO6/44suuTYUxge/w8vxdPStwcoRMONsqGo7wIPcku3yGQAOVkY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180166-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180166: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=3e3be2cbc286e773ed5bd3afd5942440546888de
X-Osstest-Versions-That:
    ovmf=8d185dfb66700e65035d51f149570aeab728c665
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Apr 2023 09:08:18 +0000

flight 180166 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180166/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3e3be2cbc286e773ed5bd3afd5942440546888de
baseline version:
 ovmf                 8d185dfb66700e65035d51f149570aeab728c665

Last test of basis   180164  2023-04-06 01:42:38 Z    0 days
Testing same since   180166  2023-04-06 07:10:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Jiewen Yao <jiewen.yao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8d185dfb66..3e3be2cbc2  3e3be2cbc286e773ed5bd3afd5942440546888de -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 09:18:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 09:18:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518789.805648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLlC-0007Pg-Hd; Thu, 06 Apr 2023 09:18:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518789.805648; Thu, 06 Apr 2023 09:18:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLlC-0007PZ-Du; Thu, 06 Apr 2023 09:18:26 +0000
Received: by outflank-mailman (input) for mailman id 518789;
 Thu, 06 Apr 2023 09:18:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RM5I=75=citrix.com=prvs=453d769fd=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pkLlA-0007PT-TN
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 09:18:25 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f8cc340a-d45b-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 11:18:22 +0200 (CEST)
Received: from mail-bn1nam02lp2045.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Apr 2023 05:18:19 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by CH2PR03MB5237.namprd03.prod.outlook.com (2603:10b6:610:9c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Thu, 6 Apr
 2023 09:18:16 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Thu, 6 Apr 2023
 09:18:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8cc340a-d45b-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680772702;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=b86fja9ztHW6IR8ClYPj80TtEDvVmQLZHm9O83fq7MY=;
  b=W3f3tELm6H/Hf6gcDEbhdl02fg4o56//CNrFDMBkatgM9j4ftNEjon1+
   D6zx7A+IpMH4+V+Xhvd2Mxaatr6AQd251xOJU3v7vlnnOk8ldDTldSJuz
   VinhDfw7JneaDWvq6NvBGnJm3IcGxAAVD0EmCe6iYR/Y/kQZgpJuXXvx7
   A=;
X-IronPort-RemoteIP: 104.47.51.45
X-IronPort-MID: 104565972
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,323,1673931600"; 
   d="scan'208";a="104565972"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=C8xNq3oRb7VmN0cOOS3xR+/cJyW6MpesmT4uBtWuSj4MWt6qAdmGPYVKqabs3Cv3uWojdV5QJzwvcZUKwIeXitKV+8xnmN8cZ+FAh4gBFYMfIXfKLF+U8mA4C3diyS/nl7HkOLh9BqDSvPuLDIMBsBWlBir18GTKAu5gzbsJ9E4K/gdItVVRdvpTpMuMfcqzs1bFGJbt/WgjcY1RY9gWyZKyviMMaHZ8z/p/EH8pCAPEwBBi3RYYT0z3s1tMdB+EI4dXJawHUsZ6J1obXw29meFddreNTD3ugZ/ZKuDE74SEgkeqKtSe88yPdjZqsWFL/iO1/5MAqmMWwN5s6n8z6w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nnCA7B702KreQwdKNNPADcBQDwWIhYrArbxnmVXIaVQ=;
 b=fbeeVzboX55rGvvwI/c99TC0gDegBVWafk/j4PfFdAgQIFl0FcXjcF8ARgxYusCYWB4L667FizjWMc+t0i96UzR7d4uHNJNhvxg1R66pE0waP+MlbuR+3sedBxx/gPuGJP0VjN/MZxaDLScU9cosx+3FoSCuORLP7D4FATZt7ELvJBXwj7r1Aigp5wVjvkddoZ4u7GmrgdYwX3/aGyM/OSfnsqFG6Ow8NmtkQTvMxDNSoXIqt7x82zvnM7SJawvrZK/a9CHPQ4ZXRBauiDrcP/fu9MOg+GWsBIzjQwt+4ltWsdCtTbxZewogiwwjU4tLmhTyr9cpc2ojXc0cws9hNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nnCA7B702KreQwdKNNPADcBQDwWIhYrArbxnmVXIaVQ=;
 b=nWUEDexLPu1CjrQiNvZNAs7jniUtBMzVS4Ct64mNPCKngRmy3Y7CcZD8PtObmDeM2vPGdEDdB9crA89Rf6gfUNZ9cGoSqWogdmRY4nsNAMdYp2zBVFRIWv2NBcqyCoQYsF5OCMbLXj+xUH7Xv/jYquQkbe2gKa8PFPPEQMda4N0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: [PATCH] livepatch-tools: remove usage of error.h
Date: Thu,  6 Apr 2023 11:18:07 +0200
Message-Id: <20230406091807.49028-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P123CA0091.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:139::6) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6360:EE_|CH2PR03MB5237:EE_
X-MS-Office365-Filtering-Correlation-Id: 994f8dae-6a35-4811-52a4-08db367fda7d
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 994f8dae-6a35-4811-52a4-08db367fda7d
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 09:18:16.0178
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /+Rk58LVKarZQjUR0WYvbYpyzKb5FBmroMkER7wA+KUhP6fVEOxInWsm2A4dAnXRbdV+6/IXF2jZZX2nA+lYXQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR03MB5237

error.h is a GNU libc-specific header, which prevents building against
other C libraries such as musl.  Instead, open-code an equivalent
replacement for the error() calls in the ERROR() and DIFF_FATAL() macros.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>
---
 common.h             | 10 ++++++----
 create-diff-object.c |  1 -
 lookup.c             |  7 +++++--
 prelink.c            |  1 -
 4 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/common.h b/common.h
index 9a9da79..ec2ea33 100644
--- a/common.h
+++ b/common.h
@@ -1,18 +1,20 @@
 #ifndef _COMMON_H_
 #define _COMMON_H_
 
-#include <error.h>
-
 extern char *childobj;
 
 #define ERROR(format, ...) \
-	error(1, 0, "ERROR: %s: %s: %d: " format, childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__)
+({ \
+	fflush(stdout); \
+	fprintf(stderr, "ERROR: %s: %s: %d: " format "\n", childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__); \
+	exit(1); \
+})
 
 #define DIFF_FATAL(format, ...) \
 ({ \
 	fflush(stdout); \
 	fprintf(stderr, "ERROR: %s: " format "\n", childobj, ##__VA_ARGS__); \
-	error(2, 0, "unreconcilable difference"); \
+	exit(2); \
 })
 
 #define log_debug(format, ...) log(DEBUG, format, ##__VA_ARGS__)
diff --git a/create-diff-object.c b/create-diff-object.c
index 780e6c8..d8a0032 100644
--- a/create-diff-object.c
+++ b/create-diff-object.c
@@ -45,7 +45,6 @@
 #include <string.h>
 #include <libgen.h>
 #include <argp.h>
-#include <error.h>
 #include <unistd.h>
 #include <time.h>
 #include <gelf.h>
diff --git a/lookup.c b/lookup.c
index 39125c6..b440102 100644
--- a/lookup.c
+++ b/lookup.c
@@ -28,14 +28,17 @@
 #include <stdlib.h>
 #include <stdio.h>
 #include <string.h>
-#include <error.h>
 #include <gelf.h>
 #include <unistd.h>
 
 #include "lookup.h"
 
 #define ERROR(format, ...) \
-	error(1, 0, "%s: %d: " format, __FUNCTION__, __LINE__, ##__VA_ARGS__)
+({ \
+	fflush(stdout); \
+	fprintf(stderr, "%s: %d: " format "\n", __FUNCTION__, __LINE__, ##__VA_ARGS__); \
+	exit(1); \
+})
 
 struct symbol {
 	unsigned long value;
diff --git a/prelink.c b/prelink.c
index 2039e5b..18c5159 100644
--- a/prelink.c
+++ b/prelink.c
@@ -27,7 +27,6 @@
 #include <string.h>
 #include <libgen.h>
 #include <argp.h>
-#include <error.h>
 #include <unistd.h>
 #include <gelf.h>
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 09:32:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 09:32:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518794.805658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLyJ-0001O1-PV; Thu, 06 Apr 2023 09:31:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518794.805658; Thu, 06 Apr 2023 09:31:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkLyJ-0001Nu-MT; Thu, 06 Apr 2023 09:31:59 +0000
Received: by outflank-mailman (input) for mailman id 518794;
 Thu, 06 Apr 2023 09:31:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LuhO=75=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pkLyI-0001No-3R
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 09:31:58 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2062b.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id df6d4c47-d45d-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 11:31:56 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAWPR04MB10005.eurprd04.prod.outlook.com (2603:10a6:102:385::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Thu, 6 Apr
 2023 09:31:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6277.030; Thu, 6 Apr 2023
 09:31:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df6d4c47-d45d-11ed-85db-49a42c6b2330
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RjJ5lKuGzxJuKMynp6s9tI0FCv0ig1su6zgBMKdjBtkQ8YKf3DGlk6mIaBFLzudZDP61QN1g03oegdIGYRJsm0ditPz5/fAq91NuJ7JRtePOuPXWmdGBtU4ebA3J4ICKx++XFnzOf71mH8n7mAKStkfk9B1dtkEz6DCqrnSBcBdS/KFH6rjksfh6U8zxu9Xca7g43FhqgcmR4fTzONlND64SlypI10/jHbdkydyRvMv9IzdIvN1KIr2kWR8+fsuOgbFC6izlIZSzErwQj8JEw6Xt+dtXmyRzdOBXtWO5LHe4oFj18jNQCNH4J0ciV65ik1xhQVJx1cQqCzlL37Ai8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9L2hFRv6P4F0H7tzS8f0dl8uOz+tEs6WwzZa6b6gSKs=;
 b=nXV2A2i6P+hMU4lbN/Yp6U51DBK5FNU9CXz02XFhvEe3RgWFVy8rr9nWS+RTaqI13Z1c5qdKMUstGhIA/gA0MbCXI0GXdrUxsF6xA/nuzoLTVcUxCB3RCoLLgfzP9rU/oTe7Y6LRKrs/RMK2iJVfFIbByQqnYnpPlxk4LMJJjO2rlPOXA1OWuJ+ORPIbFYJ89tQ9+VxuyMvf0C5yXvFV0gUXAVrMei0bJOr9UKHYt3gBB5s0XfCtWJfwIB/55NXwbQUsG2Uif0SdMODeKaf0pmu2+ejWfsFgfdkwfgva+i6Q06aWR7UwiWNaQZfJzaOLieQhqE1S+8ho+5f0T45usQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9L2hFRv6P4F0H7tzS8f0dl8uOz+tEs6WwzZa6b6gSKs=;
 b=xePubQNtrt7ggDcGOloQ5z1xZmsB/XH7hlv+c1xNY7p2mSGZ2k9fy0heqMgXFRnvhSGRY9laP8+/1+S04y5koiVZbKffKroHgO7swo7/I65hU4qeR7DBItroeFQYdvHXD9Z21ZdS+4zfSCA9KNdGEqeqtfVdSFf4zmKX36jALftU6FGSIWzz7LB0VHL8ryQ4QYJDcOk3IS3d6tmNiRF3v0x8nytMh/rSkeH38Kkw8oRFdO2cqSpMJcU+qMRUJ1RcdfLmEH+tr9j9FwMWSyIfhinVm1DtHuwNEQZiZvk2ev94S4qexBZf7+LauD9s+cDfqlWpfGhtg5Vd3Sa7ZtQf+A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <313a2a18-020c-ca76-f620-f5694a74efeb@suse.com>
Date: Thu, 6 Apr 2023 11:31:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/svm: Provide EXITINFO decodes for MOV CR/DR
 intercepts
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230405204423.2113418-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230405204423.2113418-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0074.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9a::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAWPR04MB10005:EE_
X-MS-Office365-Filtering-Correlation-Id: 244fe29d-eeb7-414f-bcaf-08db3681c243
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 244fe29d-eeb7-414f-bcaf-08db3681c243
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 09:31:54.3479
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bCzUK8qWMIuLtNu2WlU18xzKwmmhgdW9NTOTlvOa/8pOfmiwHbWUfOiaPA/SwNCTokFiioFHSaYIRcyziyy1Lw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR04MB10005

On 05.04.2023 22:44, Andrew Cooper wrote:
> This removes raw number manipulation, and makes the logic easier to follow.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

One remark though:

> --- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
> +++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
> @@ -450,6 +450,11 @@ struct vmcb_struct {
>  
>                  uint64_t nrip;
>              } io;
> +            struct {
> +                uint64_t gpr:4;
> +                uint64_t :59;
> +                bool     mov_insn:1; /* MOV, as opposed to LMSW, CLTS, etc */
> +            } mov;

The field being named just "mov" makes it apparently applicable to DRn
moves, too (and the title supports this), yet the top bit doesn't have
this meaning there. So perhaps say "MOV-CR" (or similar) in the comment?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 09:37:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 09:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518798.805667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkM3A-000200-Bi; Thu, 06 Apr 2023 09:37:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518798.805667; Thu, 06 Apr 2023 09:37:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkM3A-0001zt-8y; Thu, 06 Apr 2023 09:37:00 +0000
Received: by outflank-mailman (input) for mailman id 518798;
 Thu, 06 Apr 2023 09:36:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w3yY=75=citrix.com=prvs=453e3c94d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkM39-0001zn-9i
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 09:36:59 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 91e3816d-d45e-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 11:36:57 +0200 (CEST)
Received: from mail-mw2nam12lp2045.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Apr 2023 05:36:45 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6746.namprd03.prod.outlook.com (2603:10b6:a03:409::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Thu, 6 Apr
 2023 09:36:43 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Thu, 6 Apr 2023
 09:36:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91e3816d-d45e-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680773817;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=t2dkUTvPu4iQJ9JC/sfi0IQMhAaSfTsB0BbB2aOUB28=;
  b=CuhwXZz4WMnkWtCo5qkT9hVSwszfRKaHJePzURmeCxqIySucWgK0PqM/
   7j3st3rzub/j/aZuuJmMuF07vCKZLP07BM9GwUcyuIm85/iaibm5ymijl
   JCbkSgcqBxrK81EUURveIQUuJznEc5/Pz/AXacAN1MJYuUbm2yUAH+8V3
   U=;
X-IronPort-RemoteIP: 104.47.66.45
X-IronPort-MID: 106966057
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,323,1673931600"; 
   d="scan'208";a="106966057"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RsYT0v2X80oAAjqV4g7zDqizvfABnFPRX73JuHqfW3Ov+3zsqZ/j6kN8XMKzrdHB2pAdyLgxCZj1eXODSPX06csYq6nZfQQQz+z5fjRUUeWe7fum8NRwL6QMLD2HuF49TMLEXUDsSbQy1/phnoCJtFmZgQvgBpb9eyIMY3mu7WJif8OtG9mxfs8qT9APMyRlN/f2sQaw5IIU+pXYl8TDDgKFHOA4XDjZ2e/rpTrvuvdQKsU+Y8yqa/ljD6ge9cMLmwVqEwo3maNO71cNTyx/3bbqRxoLHbt9lJurDfWzUKBd0ucRinVR9bOfBxpM3vQ1y5dP6zdK7wjK6Evd8+PUpA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HUMCD5S95GqMGMWGQq2L1ujqOJ5fEhqwkrZ8ijeAgxo=;
 b=UViZDI0R8Ha17jHq1c/T1+gXV66iacOq9W8upkgSRJWuEz7SgfZA5UNGZTsVnBbXqIuTMaqJUt6XVyVYm4MzCoqHdp8M+EMLqM/S8eN1ycfF1PIpKslB06nlF9G/Kcnmm+24pvR/EkQjOuP1Q0TEj0joKEFjtB7Dq+1DaxNBsNy5lGlP30CqWP8Pyxfj/fjzxtondXVXlCbd9vyINE4kfu8WdTD3HYlTP9MnL88xJparkkE5bIhJpj3Do761zKw3yvRqpAe2b/vNAMF9qdUgfSzU+GouNqJrAj3LAUHPkTn9Z8JaIlejlqshYp7+7msMABwq6Mdipn5hFTf23KAYtw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HUMCD5S95GqMGMWGQq2L1ujqOJ5fEhqwkrZ8ijeAgxo=;
 b=SI5eQs9S2u2nkY1YkizmTwuJNinX5HfJBhQvSOe7v0zqDmbw3WfJFUgDWxTKJr1QeI21KVPRMid5fNGUhGzAGDfdw9+Ynoz5XLmNoAZVBN5QZVlzmSA7yCRiTB7un84yLtQSux47DVF221/073HHjEsZm5mQzGbutUc8zoNJuYg=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <4525547c-e309-c994-3dc7-5d1b398aabe1@citrix.com>
Date: Thu, 6 Apr 2023 10:36:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH] livepatch-tools: remove usage of error.h
Content-Language: en-GB
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>
References: <20230406091807.49028-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230406091807.49028-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0386.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:f::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6746:EE_
X-MS-Office365-Filtering-Correlation-Id: f1f54af3-d120-4e7e-9e17-08db36826e73
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	xytTK0eHICpzCvw98rOmBHwGjIsEWgkycFYWS8PXkZLpIeUI8k8eCsu2TOAGl2sP2zLy69zJbeqmQ8XUMvubtn6rzXqOGCX/HAZGQgPaHqIf4Qy94fDede6cIIWuvxDMJiWUVFLLduVpV/EsQH4qoCqgXS+crkFtfBcoXHJ2COGczAEvFWBQS8JVYU4WgTe0VNU4m6qJBcNo8OdCTZrc//AFT5JslRspWhpdCKrrElq9f1fR1bSFBIeyQC/xIi581eC3+k2DVVii8Sbz5sSq3XGHtCZQofTGjjF0cbmbUxrWjdNMWYey/7XKLw0CYUa9iHCqBfHUwe8MPrY5Z9ZvpYuR7CZtqwABx26BE1A4+unV+KklaLYrEOD7+su5bjZuAWipBdf/oqAJS4KThkTC3qYpYuL7EmNymkXtoRURAivtDddYreEtMH+qum2EVT7SKoGYPzSqBsjXPmvqPVAVMeUnBosGDFSb2GlFrjDIx/APLy790baGRu4SeugnQ7Uklg+KbNOu6RnvFIo7hZTC+SoDWmAfCnDX3z/xM5KWrUgn8dAsxFgGbOQt/RdCAevv5FzmJtm2o11t9xY1IQ7m76czDyGfMFhLSPxtJZmlA5OXtyr3tYeL62nApf0wk7i4D9yvZjfYqAqnoqdsecp0ew==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(366004)(376002)(136003)(39860400002)(451199021)(26005)(31696002)(54906003)(478600001)(86362001)(53546011)(107886003)(6506007)(6486002)(6512007)(6666004)(36756003)(186003)(316002)(82960400001)(66476007)(66556008)(66946007)(4326008)(8676002)(5660300002)(38100700002)(31686004)(83380400001)(41300700001)(2906002)(8936002)(2616005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZkxyRWdkY1Fpang1VGtETTN6V0cxVU9LMGxvNitiaUZsUFdNUWxFUENaWHFl?=
 =?utf-8?B?T2pyOVdlK3NESitGajJaTlZYdzhSMlhpelVLTlAvSjVTMzFRZzNrNzdidVlz?=
 =?utf-8?B?RDRsUzlzTnVuWHgrSUhRVHdCOUR6dkVLczluVWM1N0p5NXk2NCtVSUoyemht?=
 =?utf-8?B?YzFBM2p0bDRIeVgrVHNCRm4yd3ZySTMxTkVoUC9EaldvMnArL0tyVW94b0hy?=
 =?utf-8?B?WUVFZ0VBL0RwVWVJRHAzbllNb0pBOGZnMDlkZVhRSTFnVDZmR0xQbEJWRWJP?=
 =?utf-8?B?N0g3QkFkakNUZjZqWXpqMXlDWU9mZGh5S1V6WE9FNkRYREVWQzlJa0ZBc1N2?=
 =?utf-8?B?OGVtcHhMZTZZS29BWmdwdUNacnZRQkhCNDEzalY1NVhsSnIyNTBOR3ZZWVlv?=
 =?utf-8?B?RGJ1aGI5Uk9XRVFGd1VLSHV2T1U5aU5xOU5tOStlb0RtanBiM1hZUjk1OWJC?=
 =?utf-8?B?TlU0TFBVZ1BCSGMweStJSFFFL2NzUzB3NC9QenU5WldmalVndC9Eam9iandF?=
 =?utf-8?B?Y2dVK1Yzb2pCZFVQMEhXQ21IZGtKSElOdDZYVHEyVGZ0OXFWVkNMRWMrZm1w?=
 =?utf-8?B?dmtjLzdsRVRjNXhxOWpFQlZFUjdpemNpcWZpVzZad0E5MXA0QVljN2FhRThG?=
 =?utf-8?B?ZUpLK3M3bWllc2ZTNlFXQ0lZaTFZd0ZLdmtyS0VXbXNnL3Rrd2RZaUg5NjFx?=
 =?utf-8?B?UDRjWU9scktVS1FxTngyQ0ZoZktnTXhHS24waHNDOWZpVk11V1ZpUGNKMjIy?=
 =?utf-8?B?KzdzMzJBcFU4VlhwVUsxTTliN2lzKzJrd0grbFFYVndCRE93M2d4L01id0FC?=
 =?utf-8?B?TDlKYjRJUm0xNVlLdWlPYzJVaVRXZXovRUg1R2lTd1VwOUhzeE0wSFoxbksv?=
 =?utf-8?B?WEJpcUNxYXBlRERkZjJlUFhMR1BJaDRvbjJNY01WbzUvMGI4VFlWZjEvU0Ro?=
 =?utf-8?B?WDdJc2NFdFpqOFhsTlpuY2tNT0pPTFgvMUxFa1pSSDdMWjlnU2VOeVVuYUZm?=
 =?utf-8?B?RUozTEtDYjR4dkM2aFdwb0VWTFJPblpXUitmVWsrc0VHQlI1RUlwamtWL0py?=
 =?utf-8?B?clZKSmNhcXkwc0k3RHlVYkxMaDlLbDlQY0hWNzQzVGkyS0FJeEJLWkl6Ykpl?=
 =?utf-8?B?VXljMi9FamRJQnZRNXNNMHRwaXY4MEpuOERpek92aW5lN3o0eVhmWGdvQjRx?=
 =?utf-8?B?cExiVnBHbGVxbzRtMElhVTRwS2hpcHh5ZjB6NmtKV0lVZVFBZHNySjFyWUY1?=
 =?utf-8?B?V0FpYmIrZm9CMGxnTjVSQTlRK1VVREZsZ1IrSDU4dm81N3h5OWZXVG9aN0lk?=
 =?utf-8?B?Wi82aEJHVGx6cGxYd2JSZU5sWENUY3UrZVNPRmtzWVp6Rmc0OTdKU0FuUERU?=
 =?utf-8?B?Z21tdncxM3dxWDJ5dmw2SmIwU2RFd1ZteVlaeG9pV3hhTG1jcXlPR3NzcTRZ?=
 =?utf-8?B?ZXlGN0FncFlRWlZERzUwaE95ZmpEWW5jNUN1eG8rV1g0MXNFVm5LemVDNW1t?=
 =?utf-8?B?R3I5dEZ6N0M0b0tld2haenBvQ3d1TmtGSi9landNeUdzZS9XbldPTjBZbC9n?=
 =?utf-8?B?VjNDK2FsYVhsY3Jad3E4a1dUTWZCRGFTb0FJUGc3U2plYkdyb0J0YzBFN0xF?=
 =?utf-8?B?VXUrTVl5dGlyVkJjNkhWak9Oa3dMZEZqczVSazAvTXpCNytXaWNCVFIweXU4?=
 =?utf-8?B?TlZNSFFvR0FtUVFndlNsVGpkeTNsamNscExtZlpSRUJUVXF3SElQcUVRNERo?=
 =?utf-8?B?SjNSalh3Si9nVW05Q3YzSGg3dDVMMEJLZ3FtM3JHdFBiaExDWHJONlJUeXlv?=
 =?utf-8?B?VE10WmQzRnlQeXBKU2VDRXZFNzhhbzExVTlFanl4Mkl1RDExZjRxbkJkNDZL?=
 =?utf-8?B?MTRscTJ3eGZOeHU4Yk5aVjM3MloyOVB1WnMyTWhKYlphV2F4R3dzTDN1Yk1S?=
 =?utf-8?B?UkY4SXVpZmNHcXMvanB0cGMrcFBJc3BINGx4WHJROEJFWWUwTUlQVmN6N3d6?=
 =?utf-8?B?ZHU0SG1jVXZEaTd3T1QzVGx1SGV4M2tJWUhjWDJiQmVkcjFVY1p5a0Q0YnBo?=
 =?utf-8?B?aUkwZVZ0aVFSZ011K0V3WUVoRGNKS0FZRzJBTjNJTDJaY2lOc3krQmZHSGlX?=
 =?utf-8?B?SERTTitHSWtUYit4cE9rZjFqSkh2amV4VDB0ZnRjNmYwWGY1UmxkellOZHhD?=
 =?utf-8?B?eVE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	LhhrI/jqbfEPskn7w6jPDSHyaa62bJi5V1fE2eDGjY5pjo0ROcK7Hb+BELIAfKfULYDpzl5c7xlB1Kg58yC53lQzXLm2UJR+b/KbHgUpbh+ncmy1FlqWLmvzfPfk5HnCMWL9hlRU5M3/KjOi2yIC42n7D9tevZsjx0hZIPu+4nu7XRuIovkruVnaR/ttJHOuwyiAHc/sok/DoXDNJ+uebS1sGXV8YYRFc36YZCYO/9GvMBRiC5UbukmNaI8dPTFLzQ/SvPS5fh8JSkQ8vWQhycR9+BNrV2x1JahlhpHm7mMgJucmKTLyWgrXlfALc08A1wNjzS8H3B6O3JhEqHLZGWNUiLPBaovCwGYKGQ7CCXhOGgahoSWH19WYO1scKUAv+YaeGGkb76VviCX8Er3RvtfbF/HZz+r7gZAKzJezaW0oQGQsMJ3GLqR73Jb484v/hWjTTuvwI9ZM+b9F75mh2bkFfSUfeKvqUgeNqxo7ItQqP5gOjkhNGAdbBh4L9irFRHmXUslNWqGLg6SyEd7wiIwmDDXC7NN41a8jfj9La1+IvyZw5iyVs+AHbODM2zWBwDw4OmeAkDfieF7HL7kkBhvxuitvOXd8WOA/zujj6B74NR2RihiTns3uPxTP7r/07pK9cpt/rH4/m4vMSoJ3LbSXt9TxVaYDI5tnqXHs7tUuc8i/e2oyh8GEAJzO4qDgXgOEXQVzqAp6KMzHGYpRBywk1xfgXDEZBrYSKK8NzsiL7rp67xmsQQjwaxnt/5rILY9+DO5TXpG2fLbyi9VqZkn3WWBeeTeAi3jzykX+hx0kd3VM9g7CpBQWdTT0TEEC
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f1f54af3-d120-4e7e-9e17-08db36826e73
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 09:36:43.2601
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Mvy9NxCbeIM0kxk05t6FwnTQmAPzKGA6lxtD7Ar9Aj8ZI1JMk1vsI2TPz7upBmfrT3qdHTZqpHxYx+LL0wH4RhCalN5OXHo1+svlgID/ukk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6746

On 06/04/2023 10:18 am, Roger Pau Monne wrote:
> It's a GNU libc specific header which prevents building on musl for
> example.  Instead open code an equivalent replacement for the usage
> of ERROR() and DIFF_FATAL() macros.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Ross Lagerwall <ross.lagerwall@citrix.com>
> ---
>  common.h             | 10 ++++++----
>  create-diff-object.c |  1 -
>  lookup.c             |  7 +++++--
>  prelink.c            |  1 -
>  4 files changed, 11 insertions(+), 8 deletions(-)
>
> diff --git a/common.h b/common.h
> index 9a9da79..ec2ea33 100644
> --- a/common.h
> +++ b/common.h
> @@ -1,18 +1,20 @@
>  #ifndef _COMMON_H_
>  #define _COMMON_H_
>  
> -#include <error.h>
> -
>  extern char *childobj;
>  
>  #define ERROR(format, ...) \
> -	error(1, 0, "ERROR: %s: %s: %d: " format, childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__)
> +({ \
> +	fflush(stdout); \
> +	fprintf(stderr, "ERROR: %s: %s: %d: " format "\n", childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__); \
> +	exit(1); \
> +})
>  
>  #define DIFF_FATAL(format, ...) \
>  ({ \
>  	fflush(stdout); \
>  	fprintf(stderr, "ERROR: %s: " format "\n", childobj, ##__VA_ARGS__); \
> -	error(2, 0, "unreconcilable difference"); \
> +	exit(2); \
>  })

Looking at the usage, can't we just use err() instead?

Also, I suspect you didn't intend to delete the error message in
DIFF_FATAL()?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 09:53:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 09:53:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518802.805678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkMIG-0004NT-N9; Thu, 06 Apr 2023 09:52:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518802.805678; Thu, 06 Apr 2023 09:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkMIG-0004NM-Jp; Thu, 06 Apr 2023 09:52:36 +0000
Received: by outflank-mailman (input) for mailman id 518802;
 Thu, 06 Apr 2023 09:52:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w3yY=75=citrix.com=prvs=453e3c94d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkMIF-0004NG-Ub
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 09:52:35 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id be09b5b3-d460-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 11:52:32 +0200 (CEST)
Received: from mail-bn7nam10lp2106.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.106])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Apr 2023 05:52:27 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA1PR03MB6324.namprd03.prod.outlook.com (2603:10b6:806:1b7::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.30; Thu, 6 Apr
 2023 09:52:25 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Thu, 6 Apr 2023
 09:52:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be09b5b3-d460-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680774752;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=dpDlh5icSRVarVCK54bUow0cqNLbjUdRZ+A09A0AoZU=;
  b=evogiJ7wbkdQ6efMCOnaTUFQVV7YHBDA4N3VDTmVteOCEbyrslsGVDfK
   gLr69bvW8jW5mEJ3FYm0hlHV1RvVhvCJt8l7hqB0+tTEhst9s2ne5G0ev
   o4yyKYbulrL+k5Q3+6OBdeNVdb6ozNjU5A+Jh0E6vtb3+QQllZN86zYTr
   8=;
X-IronPort-RemoteIP: 104.47.70.106
X-IronPort-MID: 104949579
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:A13gU6uuSGDMQvKzDCQyQQdrYefnVHtfMUV32f8akzHdYApBsoF/q
 tZmKWDSP/mIYmukeN12a4W1900P7JPVnIQ1GQc/qio2RnwS+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg3HVQ+IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4bKj6Fv0gnRkPaoQ5AOGySFMZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwOnMWQhyNruuPzpmWFeR0puUGLJTsM9ZK0p1g5Wmx4fcOZ7nmGv2Pz/kHmTA6i4ZJAOrUY
 NcfZXx3dhPcbhZTO1ARTpUjgOOvgXq5eDpdwL6XjfNvvy6Pk0ouiP60aIS9lt+iHK25mm6xo
 G7c8nu/KRYdLNGFkhKO8262h/+JliT+MG4XPOTgqq820QDMlwT/DjUWc3S6jPu+0nWlXuJFI
 WlJoBRtvJMLoRnDot7VGkfQTGS/lhwWVsdUEuY6wBqQ0aeS6AGcbkAbShZRZdpgs9U5LRQ62
 1nMk973CDhHtLyOVWnb5rqStSm1OyUeMSkFfyBscOcey9zqoYV2hBSQSN9mSfaxloesQW+2x
 C2Wpi8jgblVldQMy6iw4VHAhXSru4TNSQk2oA7QWwpJ8z9EWWJsXKTwgXCz0BqKBN/xooWp1
 JTcp/Wj0Q==
IronPort-HdrOrdr: A9a23:PE+laqsY6QuPr8Fa8uEVgEDT7skDS9V00zEX/kB9WHVpm62j5q
 OTdZEgviMc5wxxZJhNo7690cq7Lk80l6QFhLX5VI3KNGOK1FdARLsSj7cKqAeBJ8SRzJ866Y
 5QN4R4Fd3sHRxboK/BkXCF+51J+qjizElwv5a480tQ
X-IronPort-AV: E=Sophos;i="5.98,323,1673931600"; 
   d="scan'208";a="104949579"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=l0hgq3DTqbUwTIZVf4TGGn1CCsbF2o2inFaaliXgsGg/Qt9FB8YDTubMOXkcTbIIi3hHlQRPUtTmvZSA/kP1Yw4fci2sjE9BsDVz+jmExBVgVTQ3xN5Qo7hDZm2NCszd2/H/74rWCmAyjl3Mgxx+5bBRbNAPf3hM8iyCXkX6O2m9ObE4RPjgCuLxBgatsNxNR8pf3LStGsGsICEhL43Voj0GkVlR7qwjV4B3+Z9dkUYBaAhd9oDPqnFMyJOM0SKYQBUgMDhvL72245hr7kCihe96B8XcYQV92k3I+4lnG4w1vxi3zb0wbsnaKipL6tdGPnmqX3sioMy/eNdhauqzyQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DryfQqvKLhPUZKX04YcojEKABkspmMe186zBuIMmdKw=;
 b=aAw0Cc+ACP1iF9TYt8/EeRM80CqVx07H+cewXsgJuIR+nH3cjk+bcnn34zJJm7k9Jw/avcYtQllTdTmTzwp5ynkR53OLbqLCGfWOXMcSPOJzq1F1ELXQwpnLKTwf89hG/KeZIT+22YN60kZpOGxNgcXeKd0HZc50H2fHfcMGiluAPr8g4d8GEJGlUgTEmQehe5IgjhEL0liuSWw4p26LPWBmUktUJn3B+/+PIIdGb/3WoMXN1Z5MrP8dHqSkjWs34MZ+8pvycznqmzJNrU24M7FrIT5Yumi4CXHIiIM6YyB7gG4YDmDs76IjRN4dnyUOhGa/SYPHJuDHLdTNW9qDnA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DryfQqvKLhPUZKX04YcojEKABkspmMe186zBuIMmdKw=;
 b=ZT5lApxF3kZmSdxwV26m7d6Qzv4i9JatLKBewaOT6U82TJzvm8WR21Qe9cYNnRTSZsf13buFhwH8XupUifx+T1Ja0qGDbo1d0J/UMNaYliaBTbBT7aTv26J0DYL/dKhaoHlMd3BCZebs1aWaWL/sI2BPa8+4bdLroN4aX4J44ik=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <24133c6e-3e66-7be9-41af-daa3db4fa961@citrix.com>
Date: Thu, 6 Apr 2023 10:52:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/svm: Provide EXITINFO decodes for MOV CR/DR
 intercepts
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230405204423.2113418-1-andrew.cooper3@citrix.com>
 <313a2a18-020c-ca76-f620-f5694a74efeb@suse.com>
In-Reply-To: <313a2a18-020c-ca76-f620-f5694a74efeb@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0354.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18d::17) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SA1PR03MB6324:EE_
X-MS-Office365-Filtering-Correlation-Id: 10c28d9d-431c-4afd-d13d-08db36849fdb
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OthXj+pOKt1VjDuRVUYnyZle4NED55VQao1ZA0+jjFEfu1W2hD2O+P3fBh99E3DNakAnOihdwBXPxZnbgocj4ml83Sr53wQnfKUcMKqIB1B7f2Ktc06ptuonE2wCj5+/czdM4DiMv5KgO9ihlVLLEHMl3PvHW/zcobk22qQVKbBEYIAcB5eo3OJvWqqoMWlB3zZIhppoVMp+6yJgnxMPVwJXeAHL0Dw+2Pb40qzXHdJJYiFkrBASobF+f4Cf83cN3hpOSbBPP65zI+ck+EB6XR9OtdC91JcmVGkdXHFoVc34Hc+tJtsj3/xY6odd0U+5vipNFq3RCUsNzftK55PccZAcu++RuE0gxv+ihhBpuyJBtla0w4Zs3NsaGRcAMd+fVjLjVsIwPPiB8rn+89Dx/MuXf0YaS2GbUDTIalMyg/XzouJKUPsjq62y1sldk4b0lDmljJeSsy2+qK7/o71jk1YX2vnf4msQcUts7/a1tZrH/+M78u+ksreu/QIayoTV7VHOumIcfbfvG4hcp2/hqvlI6AE3DZb+3I3msxzfdekuOcbVL9OWdQoXuv6dQzlo146XsQsbJSr05rp8G1B3vveUFQ30fw0DrMkeQmT44oOCMjRNKtL7AU/W+v3PlPXmfzKc6NpmYbZ52AXcLqHB1Q==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(346002)(136003)(366004)(396003)(451199021)(6486002)(6506007)(38100700002)(26005)(6512007)(4326008)(41300700001)(53546011)(6916009)(66946007)(66556008)(66476007)(8676002)(82960400001)(8936002)(2616005)(83380400001)(478600001)(31696002)(186003)(36756003)(5660300002)(2906002)(31686004)(86362001)(6666004)(316002)(54906003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZVJtODBENTd4aVFXNjNsdktuNk1aVzJOUFlKVGxCc3c1QXNtVDdMNGVic2RB?=
 =?utf-8?B?YVBMNjNaYnRnTXROS3NvaDhaSE5sVVdsM0JoRmMybEIwUU1HczBvc0dnR2VQ?=
 =?utf-8?B?Snl1OThhT3pubjdFRWZoMzZxbG9sL3E4NVVWK2VISkpHZVJmdTEyMHpFUnkz?=
 =?utf-8?B?M2NRRGJJOGJTWnJhbTBLOHhVN2VQbGFuQnlyblZFckE3L0J1Q2FNTm9ldWRr?=
 =?utf-8?B?UVZsL0FzalpJTzdBUGNYLzV3ZzVVUmhGTVpEZ1FlRHgzN2RmcUFnNnQ5U0lj?=
 =?utf-8?B?NVRnTllBa0JmcDVlRnFoWnYyMG1qdGNtZ1hpd0lEVzYxa1MyVFpPeXQ3R2pk?=
 =?utf-8?B?ZmRGMC9BaW9lRG1tWUVqUW9KNEtqWjZsT0VWQkRuaVdFcVhHOTRLcTNsN0oy?=
 =?utf-8?B?ZXlaTGpzTXY1ODVhdURQK015WGNzU0dkTDR1N3FiaUVEWXdwS0ttaFptOUxL?=
 =?utf-8?B?c2NPVXhoN3hYOXplWDVWYURFdXZ1V3JZTEVpaElORVl1OWdEMjhHSXRJNTIr?=
 =?utf-8?B?ZDNrendsYkxCNTd4TmZRdjhDcnVEaWx5TzBuVTVUdUs5ZGVhaTJHT2QrZjVL?=
 =?utf-8?B?cGZKUHVEWmw3bko3aE1wQ09kbDlkTXdwOHhRcE5nTGNkU25TVjJUZ0RJaFVV?=
 =?utf-8?B?MEdJWWpITE51a1pUTVVRaWNnWUl2U3NWR282djZ1N1JVYU9vYmNBalB4RGMx?=
 =?utf-8?B?V2lwTGU0NjJ6aWhCaXl2RkhxV3ZzeC9rMjJidDdIQTQ2a2pIL3hGb0UzZ2RO?=
 =?utf-8?B?d0VBOE90d1B3YnpvaG1SMlRHMjlONVFYL2J4QUtURjgwUDU3eVpFd1JWTnJE?=
 =?utf-8?B?eDVWMEEwWGdmNUYxdEhzeElSeFA4Z01rOGN6c2J2c1k5bVNkZkhHSWJJd21u?=
 =?utf-8?B?TUs2aFpyUE1TQ3lnQWtsNGdmL0JxY0ZQNUZiREhaejU3UE5GelY2cnFEVER2?=
 =?utf-8?B?WU82N1Z4d0RkOHBUYzlvalVoRjNTK2phSjRFMFVpaVNzZFZrM0EyTlhCL2lp?=
 =?utf-8?B?VUFXSFFNZHZGQlZTNVEyKzBHR2JQUmxHczFtUm9Ma3lxSHlrYVI3UWxMNzgr?=
 =?utf-8?B?Ym1reTNvL0FVUG04Rnp1U3hWQkpjMGY0YVVtTzB3KzBLQjNja21WdXo3S1pi?=
 =?utf-8?B?c29QaXdUU2tUWHo1c2hSOTd1QXBpRzY0aTZQVFB4d0hxT1BEZ05NZlV4RlJu?=
 =?utf-8?B?TlFyTDBHaTBPRFBsb1dVdm8zQ05PUFFyRDFzeEtXWjV6YTVUdkVHLzV6Vk1r?=
 =?utf-8?B?N2J6Rm1uUHhSaDdpUjlVMXpMbjlGeExnSjdYdDIwcU5zTlN6TVRXQ1hRT1o2?=
 =?utf-8?B?ODVIb0ZUbFlTSHNNN2pVMm1ZcDhxZFFDTXEzNHRrR29Mams2cVhzTU45d09r?=
 =?utf-8?B?V3kwQTdVdHQ5Qm5vSURWc3VVSy9tTW83ZEVqN08rL1VTNzVseVhFd0szMTM5?=
 =?utf-8?B?ZlFkeGV3VVBBeGN0VkRuVW82eWx0VUlYVHBLMU5IY3FOeEdZZFkvMktSRXBZ?=
 =?utf-8?B?d1kySUVJQ2NqUFlXcllIRTJnK2I3OHVNRU5qUzFMVnFZcmRtRWo4UU5GcnE0?=
 =?utf-8?B?UExKVWN3TktJb0Y2YThVU1FOSmQxNFFwT2FrdGI0dC9ONUlOR3VzbzNobUVS?=
 =?utf-8?B?VDJyQnArMzBjU1ZvR2x3TlVoYUNRcUhlRFFJYWNwOEczdml2MGVZL0MrS2dI?=
 =?utf-8?B?aW03ZXlSbGpyNDdDVG1vK0haS25COG5ORDVjMGlFeC9hZkRra3RwUWNlcExF?=
 =?utf-8?B?cGUxZHBQVFlhMEhDK0VpU1RBdTFUem1tNzVsbG1PVGVlUnMrdHpzOFBwTlVv?=
 =?utf-8?B?MzQ5RlZkWGhwalBhbDdvbFBINFo0QTlnWTlNVWlzT29zNXpacGtqNUYyK1I3?=
 =?utf-8?B?aXY4S3pzbHpQSUhIYm4yRU9CU1liSzlJTyt1Y3M1QXVjcG5oZERaWkg3MzZK?=
 =?utf-8?B?RkM1WHlOM05Qbk1nUHhwdnJueWRoQVNPRVNGK3RFSXUwdlphWmg5MkMxZDl0?=
 =?utf-8?B?SmRuZmhqZ1gzUzcxa0NWeXdSZFZ6US9xSUJweHVNRjc4aEFXbjhkbXowN3lp?=
 =?utf-8?B?UUloVnVJR0NFZkFMVFQzNk9CNUVtVFpQTDM0NzdrclAxdEVxNzNPZHc5Yzhm?=
 =?utf-8?B?c2F5bnlrNmdtR2QwK2tVK0cyOWk3YmF2dzlVV1JrUW14Y3lSYXhldXFVRUNF?=
 =?utf-8?B?REE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	xoB2nCLB3QYiYYq/Lad3yuxFJbvLGMs3hDn9A7JItNHGbfF37QJrwzkY2zcuiP/4lfvzzN2ahKaM5kU5GZWdmDjvNWYlNALB12ImIC+WH9DTs1DU4lF3MoM3rgPJViyWZtDKk7A+0JoLF5wLcDiOlfAnAaT3Xp8ybTUo5PW9fRKgs55uz1WlOQi2PLWI5N3cGWp+zsny+hugRJt1c4PAxAUcF8E5ZC/sMudIVlEuJul+l47xUau1KtBZ3rDw0Bqyf85Tuh9q9uVaTaB58kyk760QznlkaeibeU2nWXFrIZuaGl8dpZ0gGe3DOjOf549Hj0c/0/17Rgy6wF8BdNwZwK8pw1mWSRM/iRDgZh1GWffvA0ouoWmy6tPbSGFTwEdmyjxIgJt3J6givk31ffIEbgKgftCgUXNE/FYUFtF+riAP9vQa3nPnb+RARmofEBnyVzptZQ+dP1GhRT/lh6mrNSz4TpfCZ39dGUkObriTTmQ6UbEyCf4BsjebN/ZA1f+7jTSzh/8gg6HWlkUrrz4LcGBuKxCTWnFqVrue9eKFH+/r2sUKJdDtB6JN8etshDQ8ivkfX/4dUrJ4eDf9//7p3YLk+3Z+Hue1KUtURy9/j3FZM3nfKUlKYyqjv8wWQLJji7/xFJFbfIeodJ93kTKAOu5snOfq4rl2p8O7DtQ68DWHsh6qzf8WpbYrAXu3sSHJJR9qBwDbEtNN5XkAJ+Nh0hC32gmP8uXuKRGdCUq3k39fadsY0vYVkljjHh8gELlVwocZbChWmLieXnNbP7T8frIAZASXTa9FkSiv43jEeFaBLN7eYKraVOO65mit3ZY6
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 10c28d9d-431c-4afd-d13d-08db36849fdb
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 09:52:25.2381
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BjvxDHqIuGW3szLS4Yif1KB+nZk4wRF0TVRFdetZKOQ3mwHCgLkUn4L3WxBCsCgVga/DKoeCBHSRst8jjG49tzzLEXihYvLHMF0lwcTZk5c=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6324

On 06/04/2023 10:31 am, Jan Beulich wrote:
> On 05.04.2023 22:44, Andrew Cooper wrote:
>> This removes raw number manipulation, and makes the logic easier to follow.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
> One remark though:
>
>> --- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>> +++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>> @@ -450,6 +450,11 @@ struct vmcb_struct {
>>  
>>                  uint64_t nrip;
>>              } io;
>> +            struct {
>> +                uint64_t gpr:4;
>> +                uint64_t :59;
>> +                bool     mov_insn:1; /* MOV, as opposed to LMSW, CLTS, etc */
>> +            } mov;
> The field being named just "mov" makes it apparently applicable to DRn
> moves, too (and the title supports this), yet the top bit doesn't have
> this meaning there. So perhaps say "MOV-CR" (or alike) in the comment?

Hmm - I'd not spotted that distinction.

Xen never decodes the exitinfo for a DR access - we just resync DR
state, drop the intercept, and re-enter the guest.

Therefore I think it would be better to rename "mov" to "mov_cr", so the
mov_insn field can't be used in a context that plausibly looks like a DR
access.

Thoughts?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 09:59:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 09:59:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518807.805688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkMOv-00054t-HH; Thu, 06 Apr 2023 09:59:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518807.805688; Thu, 06 Apr 2023 09:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkMOv-00054m-Eg; Thu, 06 Apr 2023 09:59:29 +0000
Received: by outflank-mailman (input) for mailman id 518807;
 Thu, 06 Apr 2023 09:59:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LuhO=75=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pkMOt-00054c-SQ
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 09:59:27 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20621.outbound.protection.outlook.com
 [2a01:111:f400:fe12::621])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b635d1c0-d461-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 11:59:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8890.eurprd04.prod.outlook.com (2603:10a6:20b:409::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Thu, 6 Apr
 2023 09:59:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6277.030; Thu, 6 Apr 2023
 09:59:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b635d1c0-d461-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NsuT9/OJ8GPjepXuRPCmPvTPmbs7xk68/ESIer9nl2tqQjMpJ2daWbDz7z5ODaIG5ZPfT64vwsWTvvkRs04xCAfinUDCfPEMBtDxMgFJYCRFpHEViW81Cng85Dt+kOSBpOAo5oYstxvEN4N+t2YpFj+6w3X0UxkE2Amjhxj0egHVZOPZ6KVgqwmkNTTAVH1sddwa8HecFLctoJRhuxAC8eiq+mb4JExosBEDblu0m3yoxW73UqtP6ce8XzXfCM0WtBBoESGkjUpyQ4t/7PggEiFTso5SOZDhRKLZqOui2mZCIu6VcFM0iWZLMtrl2M/LZX+r8skYlmWd0dyvqWpWYQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=g5SbSAc4iG86323+6XSRQQY0CLJ/fvoZzH2h7YjVi1I=;
 b=Z+7PiX+QUjf+kxPWOVWOgdwhN5384o8xHWKu0S2bWCvO2Unf/XdSvdX/cjeHMZ7LR/1SuU91SHzI0XZFRxHBdkfGPCYodr77hUjq94hhWd8/wnA4lTeF9KN3t3UDOj/GGNnByha4pkCH3T6djvPfRmwwBtnQZJln8n2NY8FTEibZ3UKXRtLwxkDYFNzJ0U5jF/hMAUysvzo2qrJYMVTdLkWEuG4I0l9JlQ4Y5+jnwSpCXx87CL+84I32pJ8ASRAnfJdJRw57RU6MW/3EABaCXEKLBtmsUXNB+I5HE8W4335NyYHOPesZEfQVOt8F/5paQ1td9haKqL8m3u2YwPNsuw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=g5SbSAc4iG86323+6XSRQQY0CLJ/fvoZzH2h7YjVi1I=;
 b=NYKomSGhPpA4Ayzb9KyC/uyuSHtAwivRZfBH8nRxaJvZ1Hs1NNN3I0gx+iqf3tITqjCEltb6mpXhzNF5Uo5gtaRBckj8COlNPsepD5PN28bnFB8MBjZjJHz5706qTt27+7KgeLEz1WNECh5TNyB5KIRXnHTXbVIN6nYPPPGrvzWBHpY7MGapWSUVrF6GQZWvRTdCz4TOAWtNqXpcfwTqkA6mf0MNWGZGrebJYiZqIy4Oj5KdnoB81BGksKtaqgVsH6oIM/cN8wXMLTIRbsDCf8Tp9Z/MrHMJ/AqZzPLqyEBcsRTLGKq7i7ct4CO+h7mf4ekLjHFS3HbHoHQeMV6nhQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e9bedb21-c081-9a00-e147-7528c28fc3f8@suse.com>
Date: Thu, 6 Apr 2023 11:59:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/svm: Provide EXITINFO decodes for MOV CR/DR
 intercepts
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230405204423.2113418-1-andrew.cooper3@citrix.com>
 <313a2a18-020c-ca76-f620-f5694a74efeb@suse.com>
 <24133c6e-3e66-7be9-41af-daa3db4fa961@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <24133c6e-3e66-7be9-41af-daa3db4fa961@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0017.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8890:EE_
X-MS-Office365-Filtering-Correlation-Id: e4f7aec9-5edb-45d3-e5ca-08db368598fa
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e4f7aec9-5edb-45d3-e5ca-08db368598fa
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 09:59:23.0005
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2E2aTZy9ZUWeMG/mW12zpYyEJT4jAfAAwEiUuv1eYt+m1Uqelzz1jVe9is1K15+vbkDQHQqgz2fjTGt0+pI1GQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8890

On 06.04.2023 11:52, Andrew Cooper wrote:
> On 06/04/2023 10:31 am, Jan Beulich wrote:
>> On 05.04.2023 22:44, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>>> +++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>>> @@ -450,6 +450,11 @@ struct vmcb_struct {
>>>  
>>>                  uint64_t nrip;
>>>              } io;
>>> +            struct {
>>> +                uint64_t gpr:4;
>>> +                uint64_t :59;
>>> +                bool     mov_insn:1; /* MOV, as opposed to LMSW, CLTS, etc */
>>> +            } mov;
>> The field being named just "mov" makes it apparently applicable to DRn
>> moves, too (and the title supports this), yet the top bit doesn't have
>> this meaning there. So perhaps say "MOV-CR" (or alike) in the comment?
> 
> Hmm - I'd not spotted that distinction.
> 
> Xen never decodes the exitinfo for a DR access - we just resync dr
> state, drop the intercept and reenter the guest.
> 
> Therefore I think it would be better to rename "mov" to "mov_cr" so you
> can't use the mov_insn field in a context that plausibly looks like a dr
> access.
> 
> Thoughts?

That was my other thought; it merely seemed to me that updating the comment
would also allow for possible future use with MOV-DR. So yes, if you prefer
going that route, that's fine with me.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 10:31:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 10:31:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518811.805698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkMtz-00011X-2F; Thu, 06 Apr 2023 10:31:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518811.805698; Thu, 06 Apr 2023 10:31:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkMty-00011Q-VR; Thu, 06 Apr 2023 10:31:34 +0000
Received: by outflank-mailman (input) for mailman id 518811;
 Thu, 06 Apr 2023 10:31:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dy8+=75=linux.intel.com=andriy.shevchenko@srs-se1.protection.inumbo.net>)
 id 1pkMtv-00011K-KL
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 10:31:33 +0000
Received: from mga04.intel.com (mga04.intel.com [192.55.52.120])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2eca9e7b-d466-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 12:31:28 +0200 (CEST)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 06 Apr 2023 03:31:23 -0700
Received: from smile.fi.intel.com ([10.237.72.54])
 by orsmga001.jf.intel.com with ESMTP; 06 Apr 2023 03:31:12 -0700
Received: from andy by smile.fi.intel.com with local (Exim 4.96)
 (envelope-from <andriy.shevchenko@linux.intel.com>)
 id 1pkMtX-00DJOA-0t; Thu, 06 Apr 2023 13:31:07 +0300
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2eca9e7b-d466-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1680777088; x=1712313088;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=3ceVcmjc6Ptm9PzaLBlyJdk3QXqcq40nfjFwb4WczVs=;
  b=XPv9RYSEoAesxqju4O68ghh3S3S58xwk8Zh0KHafHRFDrQ4S799NN2fa
   tXTdOrzvey0chQpNAXBZLNCHGUWTq9Dl7jJCEwZGpLbZMoXy/HFGBmSKS
   MdYca52S3+0UwEO2TXkCCicsyTk6APuUt6WtcC8/yK3P7Gn/8h47KtYTl
   PBrB3CQJWvxzf7ECP6K9OhkH+7FQgOYQJmkW4h9MgBKuqDLzNpJZdxivx
   zRnCS8Vg4svbmVD/irflttaBnDAKw/HlKL+es58oDscqy933hnlWeTT79
   H3DcaV4M5SRhJpGY+qJRwp1/Bu2g3MIofRI+NFkt2ZWPwNxnbRsfTGxk0
   g==;
X-IronPort-AV: E=McAfee;i="6600,9927,10671"; a="341435044"
X-IronPort-AV: E=Sophos;i="5.98,323,1673942400"; 
   d="scan'208";a="341435044"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10671"; a="719666156"
X-IronPort-AV: E=Sophos;i="5.98,323,1673942400"; 
   d="scan'208";a="719666156"
Date: Thu, 6 Apr 2023 13:31:07 +0300
From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
To: Bjorn Helgaas <helgaas@kernel.org>
Cc: =?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>,
	Krzysztof =?utf-8?Q?Wilczy=C5=84ski?= <kw@linux.com>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Randy Dunlap <rdunlap@infradead.org>, Arnd Bergmann <arnd@arndb.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Pali =?iso-8859-1?Q?Roh=E1r?= <pali@kernel.org>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>,
	Juergen Gross <jgross@suse.com>,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-pci@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Matt Turner <mattst88@gmail.com>,
	Russell King <linux@armlinux.org.uk>, Andrew Lunn <andrew@lunn.ch>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	Gregory Clement <gregory.clement@bootlin.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Nicholas Piggin <npiggin@gmail.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	Anatolij Gustschin <agust@denx.de>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Rich Felker <dalias@libc.org>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	"David S. Miller" <davem@davemloft.net>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH v8 0/7] Add pci_dev_for_each_resource() helper and update
 users
Message-ID: <ZC6fa1vJGOOI7t8a@smile.fi.intel.com>
References: <ZC0xK4YJrKga7akk@smile.fi.intel.com>
 <20230405201832.GA3638070@bhelgaas>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230405201832.GA3638070@bhelgaas>
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo

On Wed, Apr 05, 2023 at 03:18:32PM -0500, Bjorn Helgaas wrote:
> On Wed, Apr 05, 2023 at 11:28:27AM +0300, Andy Shevchenko wrote:
> > On Tue, Apr 04, 2023 at 11:11:01AM -0500, Bjorn Helgaas wrote:
> > > On Thu, Mar 30, 2023 at 07:24:27PM +0300, Andy Shevchenko wrote:

...

> > > I omitted
> > > 
> > >   [1/7] kernel.h: Split out COUNT_ARGS() and CONCATENATE()"
> > > 
> > > only because it's not essential to this series and has only a trivial
> > > one-line impact on include/linux/pci.h.
> > 
> > I'm not sure exactly what "essential" means to you, but I included that
> > patch because it makes the split available for later use by others, and
> > not including kernel.h in the header is the objective I want to achieve.
> > Without this patch that goal is deferred. Also, as you have noticed, it
> > allows the macros to be compiled and used in the rest of the patches.
> 
> I haven't followed the kernel.h splitting, and I try to avoid
> incidental changes outside of the files I maintain, so I just wanted
> to keep this series purely PCI and avoid any possible objections to a
> new include file or discussion about how it should be done.

Okay, fair enough :-) Thank you for the elaboration; I will send the new
version of patch 7 separately.

-- 
With Best Regards,
Andy Shevchenko




From xen-devel-bounces@lists.xenproject.org Thu Apr 06 10:34:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 10:34:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518815.805709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkMwy-0001Zf-Jv; Thu, 06 Apr 2023 10:34:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518815.805709; Thu, 06 Apr 2023 10:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkMwy-0001ZY-EB; Thu, 06 Apr 2023 10:34:40 +0000
Received: by outflank-mailman (input) for mailman id 518815;
 Thu, 06 Apr 2023 10:34:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkMwx-0001ZN-HJ; Thu, 06 Apr 2023 10:34:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkMwx-0007FK-DV; Thu, 06 Apr 2023 10:34:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkMwx-0003Qy-00; Thu, 06 Apr 2023 10:34:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkMww-0007Tz-Vm; Thu, 06 Apr 2023 10:34:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B2mENXBHBfZcHVvc4Pu1jqgSR0+uiWBGOkqDxS3HZvE=; b=59p0mfYzDYcllL3p4VeiLIjiP7
	h3ldrAyscs00RD6FImwYVD9i2HZLgdrD6hSnIQOGPemXol+9ntuOVEsFUlwLd4gJmmKXsEUO1qOaw
	Ahy/49N1vEOYyysEYsRvFAHm+P6t9sR9T+tukpQo/tNCQKHjgJ9ZkUCa/evdzD62So5Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180153-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180153: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=b5fba99ec7969054ab2f3727d2df014b5b72e4f1
X-Osstest-Versions-That:
    qemuu=7d0334e49111787ae19fbc8d29ff6e7347f0605e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Apr 2023 10:34:38 +0000

flight 180153 qemu-mainline real [real]
flight 180167 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180153/
http://logs.test-lab.xenproject.org/osstest/logs/180167/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail pass in 180167-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop       fail blocked in 180146
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180146
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180146
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180146
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180146
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                b5fba99ec7969054ab2f3727d2df014b5b72e4f1
baseline version:
 qemuu                7d0334e49111787ae19fbc8d29ff6e7347f0605e

Last test of basis   180146  2023-04-05 03:09:37 Z    1 days
Testing same since   180153  2023-04-05 12:38:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   7d0334e491..b5fba99ec7  b5fba99ec7969054ab2f3727d2df014b5b72e4f1 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 10:38:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 10:38:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518822.805717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkN0x-0002Nn-5q; Thu, 06 Apr 2023 10:38:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518822.805717; Thu, 06 Apr 2023 10:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkN0x-0002Ng-2X; Thu, 06 Apr 2023 10:38:47 +0000
Received: by outflank-mailman (input) for mailman id 518822;
 Thu, 06 Apr 2023 10:38:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cYqh=75=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pkN0v-0002Na-QM
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 10:38:45 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20600.outbound.protection.outlook.com
 [2a01:111:f400:7eae::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 32f213f8-d467-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 12:38:43 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by SN7PR12MB7369.namprd12.prod.outlook.com (2603:10b6:806:298::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.23; Thu, 6 Apr
 2023 10:38:39 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::659f:af8f:6d3e:8242]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::659f:af8f:6d3e:8242%4]) with mapi id 15.20.6254.035; Thu, 6 Apr 2023
 10:38:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32f213f8-d467-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ck/VP7+dRStZS8MyRj85qyc5ntU7HrUhkMjLWeSLsVvjRy8spyopnKIJzdudnmpYCK8v2xisC95TvVzjdNbWWxHzVZc2dT3KP15FdmPBHkSnqZtVEp/cfjNXYVTxaeaUTr7c4vaQ3xhGYm7sCjhkmh5B4MVln2ab+mmIrUTE+a0xMTpwQYOJHjMXNw+NYZIsKXPuwQRg4GxwviG2dEiDxuIiILn6kijCGdsEL3bB1vPhqVd2BB+3s1MgknqcJjpOMS4PftvuZJAcFGHUcj02amXqSUhe60QV8CAgfz56G3c0ol8sk47jpSRhGEvu2V8QyBAcOiN2jWSx3PDB2nR0sA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LWk4Hi1mF1FVJiebOCrXEQgZdKxWG/iAYVMdyPMXGEw=;
 b=N8CSQWbxBtrxUGLZRwDlbonfxKxV//WIZlPHbGH27KVWvLDL0QGz2EQQBlE+kxJkci+im1N8K4mxsGc2OS7OAZaVBTAKfi7/q7E1sDtFzdbX74G6osW2E9Bm9H0r1FGNbbyxPCOC0njFV0sIeHDHUmbSuwopHd4Ai8xDs22aQj0LzrldyDeyol5oCjqXen0XSFp0lPcufqTD4pEkw4ov48pdnwbX582Zc7+MA6kRdXd6VZNMp+mLjsdM6uCVdswVPwdQtvlEpnpEE+rmTno+jjP2hmqNyzRjMrosnd2e1XeqaPSq7mVyi3o1LyC38K8m/va+f3LMZseTMwwsA8bk4Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LWk4Hi1mF1FVJiebOCrXEQgZdKxWG/iAYVMdyPMXGEw=;
 b=adywm6TV6IZYJ9WuZ456p9i70XkZT4KlGLF5NHc1DkjBJvQ1e0QBFRUHx8XSzVPTlaWrlH14njKHpS5fZyTGqUowL1oNa8qm28rTjcBHP1A06esKD4WrKmhMEVqdjUF1LqzBOdC7ut3rL+KlDoONAWTkV0XoBaSaLlyFzMWX2hc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <7dc2a51a-f1bd-6370-6b42-0bcf1adad619@amd.com>
Date: Thu, 6 Apr 2023 11:38:33 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [PATCH 1/3] xen/arm: vpl011: Fix misleading comments
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230405111750.12491-1-michal.orzel@amd.com>
 <20230405111750.12491-2-michal.orzel@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20230405111750.12491-2-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P265CA0139.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c4::14) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|SN7PR12MB7369:EE_
X-MS-Office365-Filtering-Correlation-Id: 91dd2ef6-1c2a-4cb8-4c11-08db368b158d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Cmccy00UDRbYGrcMOtcO1l0CkJmtyHIEdGCU5UJVNvJz6R6qoY+GRmtu6wZp1hvw4omb5g5gn9CCMk+lmtBBj081vvqxZK+A2yELKcB/pbKqSz6gNf1fNcevbGxHpR5OcbqGtQ45tcO3zh2GgrffcGJIrhpEhpycDUbMYZeaeRApe7mkjdOj90oBtqfAzx4pPAP82DB/lUkN4gtj/aURz8H25mOKoTYSo29rzTj2NFkub+Uq4aMeZAFuQQkRbwr4aFzZCvX1cjmDz+kjRhRRuraaJchIPvAsiQ5/m8yAt/IpjQZZZz/GWhqfZ3HbUqDKQBp5joyhjiRGy115UlqzGuA9v5x73TkUZyGleklGwIsC53eFvQDn7d/b6hWoJ6kg8REUNK29djlXeNo3p+u5+0xc6D+dslK2R31RInQqmm3cC7ENOq9RnoMfC5JyFVlWqSm1CYFGKR16Dm6uXMbbMtxY/LeBvHox1rTeL2SYnXpebyTjx9ccpBy55oyNMWGZnRdhU6j1jMLjah8/mp3rOzbwUeXhZJvjL8KI0uyfHRAUQ/rZWlMAmD2/6gAtrcRE1g/w8z7ihmCdquUZAtqUHG2r/8tU6DmeI1M9VHzteQvcQmp653qEmOUc0uCtuU/vmhgOV6GEdjFsqgdGBoyHwg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN6PR12MB2621.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(376002)(396003)(136003)(366004)(39860400002)(451199021)(53546011)(6506007)(6666004)(6512007)(186003)(66556008)(4326008)(316002)(83380400001)(26005)(54906003)(2906002)(8676002)(6486002)(36756003)(478600001)(38100700002)(66946007)(2616005)(66476007)(5660300002)(31696002)(41300700001)(8936002)(31686004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dmx0d1RBdzIyRE53Wk9HNUhjRU5UNVZkVVdvTmdJT25GejM4UGZuS1hIQ0Q1?=
 =?utf-8?B?djJiL2xURm54WC9nMGJQS0FudW5CeXZhNk9YRk95VDdtdTVMMjl5Z1JPQ2Zs?=
 =?utf-8?B?Sm5saWFFSTBVNmlUSTN3WE1hODEreEVxYkt1d1l1cVpqNEkzQkt4ckJ1bENE?=
 =?utf-8?B?QVM0aWlYdXA3TU0vTlRvYWpNZUZxYWhSUWN0SkhtNTJxcmo5dzBYWXFjZGZV?=
 =?utf-8?B?MnNGYXNZazBPblBtNzQ2UFY0WW5hZVN0Y1JqTkFpVWNvQXN3OEhnYWpTRHhL?=
 =?utf-8?B?OVdQanVnVGdsS3pjbVQrRVpveVNOUy9BMW8vamtJNjNmTy9MQzdIdDJ5aCsx?=
 =?utf-8?B?c2Y2ejFOU01XTU9od1UxczVDVkluZlNURjJjc3JFeE9rZTlXOHVkbCtjU3VQ?=
 =?utf-8?B?aGFmZU52TGxyNnZHbnRXbzdscWNvL0xKVVUyaDh5Tk50RHZjZ3lmcWVGWGNG?=
 =?utf-8?B?YTdCSGlXUXA3c0ZPV3RYYUQ3OEN3OE5WVWYvZVRLUTBwUStTa2NlZWgvcndS?=
 =?utf-8?B?V1duZmxTU3U5YURLamlKK0RtT0d2clc3M3JJNWFxd0d3L2d4aDFIbVNodksz?=
 =?utf-8?B?TVE3UzdIRWR1clJaMnZlUG1vN3N4bWdzclJrSkd1SFU3N2RzL0FhMEk3MDEw?=
 =?utf-8?B?dGVrOG16a1hTTHpNVjhKRFFUcGJvQk1hU1cybi9UWWFjbnBid05nZ0U0WFNt?=
 =?utf-8?B?eEQyNkhINnlYVEJKWk9sbXpseDdSMVE3TUw5WFA5SDk1UFNmb0pJQ21rcVc5?=
 =?utf-8?B?N2xOdDBzMTBqcUozOE52ZXV2OG5CYnJQZ3R5RTNkNUhybzUyMFVQUnRDUjJC?=
 =?utf-8?B?U0dMTURaaWlxa3VQd1VTWGlsbFJJZy91RFhLU2ZSakpqTDBEZTQ3MXd6ek5O?=
 =?utf-8?B?bHBMcGZrNElmc3RGTENkNDJ1TitmVGxSc1QvbUR4OW83YkhkM3RJd0pUa2po?=
 =?utf-8?B?eEVRcEI3NDlvSGc0bHNjd1BKL1gyUmtTRTlTTG9KaGdNZ0xzN1pSeWZsQ2o2?=
 =?utf-8?B?cEJxc0s1M2hoRFk1NVl3VVR0TG8zOFYrMGlhS3ZiVyt4MWlaVE1KS0tQcVNT?=
 =?utf-8?B?cFl3UXJPOEZlUExqeWpsVVhJY0VBWDFXK2Q1SkVydGpvZGgycTlhU1JUdFhn?=
 =?utf-8?B?eng1RW9Ib2NmL2pCZXJXaXF4ay9LMSthVU8wY1ZHcTdqem9WY3l1SjBCVEJI?=
 =?utf-8?B?bjRzaHM4dkFpRTE2azJaUGl1bTNFMUIzYng4azJEMWx6dXR3Vk51VlQzWmhX?=
 =?utf-8?B?QU9JdTdVS1h3b29sMGdmc0lpbndNanZLUFVZc1NzaXdJMzdab3k4LzVab21l?=
 =?utf-8?B?dnRJRXp3bXBMV0h1ZE5oRjkxcFZiYUViZmhqMUZKSm9reGtERWhlNm9oZHhi?=
 =?utf-8?B?ZHpDMGlhSEh1ZjdPRjIramE5V3BTZDBmYXlTU2pCRmx0cWFZZCs1V25yK1lD?=
 =?utf-8?B?NXBYaFpsREZvdCt0VjEwUlN2MDJ0S096WnRSa21Jb3JSdDVvT2lFcUFoUWhB?=
 =?utf-8?B?ajM2cHBIZzNxZFA3a0syd2puQUVVU05tMDRvYXMxcmxFb24yWUlBQWMxbHR3?=
 =?utf-8?B?WmV4OTZ0K25LU1o1cjF1dXRhZTh5THFnMDVYN1ltZ3lvTHJ0Zk9xMGZ1aEln?=
 =?utf-8?B?WE9yMFRLSVlsMGRra29OQ29kK0ludEpUL25jbGY4cWZ0Sm1OSUwwOURxOU5y?=
 =?utf-8?B?ZnJNQUgxaTZjNFlQMmRSaG82UFArRjR4TmZHUEtEVnhpcmpjSk9xMlRranZY?=
 =?utf-8?B?Z3JVUVZTUHFVQ1cra2prRzhaUGx5bnRSUlR0Y0p3L0h5NmFCL2F0WDdsYWZG?=
 =?utf-8?B?U055R09qd01vS3dNYVFFNnM2Mk12dGRVc1JaOVBEVXhPeHZ1SGpWQmFkSHRm?=
 =?utf-8?B?NFhuMUxNbUJZZUFnQ3JwVGdiTTBSS25HUm1GUUxwdTlURUl1RzZuSTVtRCtT?=
 =?utf-8?B?ZzdIWG5nclhxcGJCZ1dwQ0ZCQ0tVcStoMFZSdGtjNXkwQ2dEbWdHajRDMnhI?=
 =?utf-8?B?UEI5WkdpblJtOEpQbEIwQ1gydittYytDSTNuMjRrMXZoKytJV2hvaHh3TkNk?=
 =?utf-8?B?TCtlQ2tCQ3ZpZEFXemxqV2NodXZqMnRCRUZNWEFVSlhRaXptQUE3N2l0d1hW?=
 =?utf-8?Q?5JEcC0ml3Np+eL/ElogBAamLf?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 91dd2ef6-1c2a-4cb8-4c11-08db368b158d
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 10:38:39.5147
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: k9eMoZak6cnD1XgZC1ZkWHY/Utova0ZYqWhGJocqykjw+yIXfR/DOzac8gON58H+TQFUO0heL43VSp4iuTqscw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB7369


On 05/04/2023 12:17, Michal Orzel wrote:
> In both vpl011_read_data() and vpl011_read_data_xen(), there is a comment
> stating that the guest is expected to read the DR register only if the
> TXFE bit of FR register is not set. This is obviously logically wrong and
> it should be RXFE (i.e. RX FIFO empty bit set -> nothing to read).
NIT: I would prefer that the PL011 TRM be referenced, along with the relevant section.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
>   xen/arch/arm/vpl011.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
> index 2fa80bc15ac4..0186d8a31834 100644
> --- a/xen/arch/arm/vpl011.c
> +++ b/xen/arch/arm/vpl011.c
> @@ -143,8 +143,8 @@ static uint8_t vpl011_read_data_xen(struct domain *d)
>       /*
>        * It is expected that there will be data in the ring buffer when this
>        * function is called since the guest is expected to read the data register
> -     * only if the TXFE flag is not set.
> -     * If the guest still does read when TXFE bit is set then 0 will be returned.
> +     * only if the RXFE flag is not set.
> +     * If the guest still does read when RXFE bit is set then 0 will be returned.
>        */
>       if ( xencons_queued(in_prod, in_cons, sizeof(intf->in)) > 0 )
>       {
> @@ -202,8 +202,8 @@ static uint8_t vpl011_read_data(struct domain *d)
>       /*
>        * It is expected that there will be data in the ring buffer when this
>        * function is called since the guest is expected to read the data register
> -     * only if the TXFE flag is not set.
> -     * If the guest still does read when TXFE bit is set then 0 will be returned.
> +     * only if the RXFE flag is not set.
> +     * If the guest still does read when RXFE bit is set then 0 will be returned.
>        */
>       if ( xencons_queued(in_prod, in_cons, sizeof(intf->in)) > 0 )
>       {
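For reference, the intended RXFE semantics can be sketched with a minimal model (hypothetical names and layout invented for illustration; this is not the actual vpl011/xencons code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical minimal model of a xencons-style ring buffer. */
#define RING_SIZE 16u

struct ring {
    uint8_t buf[RING_SIZE];
    uint32_t prod, cons;
};

static uint32_t queued(const struct ring *r)
{
    return r->prod - r->cons;
}

/* RXFE (RX FIFO empty): set when nothing is queued to read. */
static int rxfe(const struct ring *r)
{
    return queued(r) == 0;
}

static void ring_put(struct ring *r, uint8_t c)
{
    r->buf[r->prod++ % RING_SIZE] = c;
}

/* Mirrors the corrected comment: the guest should read the data
 * register only when RXFE is clear; reading while RXFE is set
 * returns 0, as in vpl011_read_data(). */
static uint8_t read_data(struct ring *r)
{
    if ( queued(r) > 0 )
        return r->buf[r->cons++ % RING_SIZE];
    return 0;
}
```

TXFE (TX FIFO empty) by contrast only tells the guest that its writes have drained, which is why the old comment was misleading.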


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 10:54:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 10:54:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518826.805728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNG7-0004m5-HZ; Thu, 06 Apr 2023 10:54:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518826.805728; Thu, 06 Apr 2023 10:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNG7-0004ly-Eg; Thu, 06 Apr 2023 10:54:27 +0000
Received: by outflank-mailman (input) for mailman id 518826;
 Thu, 06 Apr 2023 10:54:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RM5I=75=citrix.com=prvs=453d769fd=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pkNG5-0004lZ-Eu
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 10:54:25 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6290d447-d469-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 12:54:22 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Apr 2023 06:54:19 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by SA1PR03MB6530.namprd03.prod.outlook.com (2603:10b6:806:1c5::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Thu, 6 Apr
 2023 10:54:15 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Thu, 6 Apr 2023
 10:54:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6290d447-d469-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680778462;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=n0HOUBqDK03BjZGCqepxi1ytGZNaJm+Aysk+6zq3pl4=;
  b=P6iNJTYzTbsVP8NYMp9yd6Zu6LfctSAtPQjvo5xczKR8txJ02P9kuiSf
   J6JZ46q475vUv4lPxg3QMWCfE700yXznOw5pXhsfbdpnqdBSLycZSR+T4
   KFD3GjwxdqdYLWpWJc3g6j7J3lWeJ3LmhhU0mhZUUufP04tyIsaGWMwig
   c=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 104575953
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:/tfCuapO27wDsfOlmQ95dtii73BeBmIBZBIvgKrLsJaIsI4StFCzt
 garIBnTM/uMY2D0c9hxadjjoRsCvcPRyodrSQE4/ik8HilBopuZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WNwUmAWP6gR5weCzydNVfrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXAG0DXBS7hLmX+oiqE8JhtOM4LZC1YIxK7xmMzRmBZRonabbqZvyToPV+jHI3jM0IGuvCb
 c0EbzYpdA7HfxBEJlYQDtQ5gfusgX78NTZfrTp5p4JuuzSVkFM3jeiraYSFEjCJbZw9ckKwv
 GXJ8n6/GhgHHNee1SCE4jSngeqncSbTAdpOSeLlp6Mx6LGV7kwMUQUqdHu+m+CgyRW/acldC
 hJP/zV7+MDe82TuFLERRSaQsHOC+xIRRddUO+k78x2WjLrZ5R6DAWoJRSIHb8Yp3Oc1Qjow3
 1PPgNLtBhRoqrSeTX/b/bCRxQ5eIgAQJG4GICUCHQ0M5oC6pJlp10yfCNF+DKSyk9v5Xynqx
 CyHpzQ/gLNVitMX06K8/hbMhDfESoX1czPZLz7/BgqNhj6Vrqb8D2B0wTA3Ncp9Ebs=
IronPort-HdrOrdr: A9a23:xJ9JzapBcIjOKmXmsKZA85oaV5oCeYIsimQD101hICG9JPbo7/
 xG+85rsSMc6QxhPU3J+7i7UpVoJEmwyXcb2+Us1NuZMzUO21HYTr2Kj7GD/9S6IVyGygc178
 4JGJSWbuefMbEQt7eY3ODXKbcdKHbsytHSuQ9zpU0dKj2DystbnmFENjo=
X-IronPort-AV: E=Sophos;i="5.98,323,1673931600"; 
   d="scan'208";a="104575953"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nMzOjBpvtoyIEGeguEMooEmr//5NpK3Fr4JcucJ6gUDN3xVfVVntM2UkpXPqW/hWtKvws88vpYWEI5Cau/LbsqAHBzSxoS4HKrNPIxAXG6N5qRs1nVWHl8KsYrilkef+4jq921ZwEWf8Equlc4XqMJanVPEqA/e4JaYOxvz9H1u9viE31sTcaUp8aUiwAubHoiBsP0doNQI0n9kmJUIh74Zo5OOv/yRZParxVMu5xv0s4S1+AM+DdvWMSAaBUgXE0n8vbAlRYtARhf3waGlOTpbDeZRqg3wLl/lBV5tlklRXQUan+HfFQ0PJ0LkPIOdplXFPYuBCDIFB+EBsZgavBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EyMfEoLHM3v5KuwRZuXhJRCyB5szuGQEJtrTdAsqC3U=;
 b=KQaTi60ZmaEH28TooI+J0pzkXFgw7i/SxrqQ0ACRxpVWlf8YPyYDmc/f6+NLtB5EWZVptzAZvStb21jGlLXIaCDHVpbKEmc6smK79Wdu2NabxGSLIPuuFW80ORduR7DDt1XNdz9IB72pereHlE3p1G0xTWZ4BODhf8or4UG7LgKNxMoNWnaVA/MvSsl0u5Elp1G5iiQbOgZLvvp/0ihAcwoSczB4sRqy9ZtLGUeCusZEQITjb4V+/8vplKDJcbww5WNm/pO4JHI2b1I2GpC2LKJIK5KdQVpOCLe6ZomG8/O5FgUKm14w9Zkca8PnrWhcHZsAp2BOoIqFSm0+bavGxw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EyMfEoLHM3v5KuwRZuXhJRCyB5szuGQEJtrTdAsqC3U=;
 b=iYgYwePhRgcO2OHymBU+JzIBB8lrYeQTsUphjVswsAOIAlsMhFOEdpVvm2mEgE7MgQfEFOIM5Z9vMtU+puSheCUmiZHt7B1luMHu3o50NsMiRV0KfXbdANm9CJZbXkj6kpAVFTFa02t5hr76msG4fGceiyj+jC9KCZAEyx+r5BI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 6 Apr 2023 12:54:09 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: Re: [PATCH] livepatch-tools: remove usage of error.h
Message-ID: <ZC6k0VNwb/+VzWHP@Air-de-Roger>
References: <20230406091807.49028-1-roger.pau@citrix.com>
 <4525547c-e309-c994-3dc7-5d1b398aabe1@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4525547c-e309-c994-3dc7-5d1b398aabe1@citrix.com>
X-ClientProxiedBy: BN9PR03CA0972.namprd03.prod.outlook.com
 (2603:10b6:408:109::17) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6360:EE_|SA1PR03MB6530:EE_
X-MS-Office365-Filtering-Correlation-Id: 84be8364-0641-432d-7f17-08db368d42a5
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uypdFIJuvCrUsSgTSZ6hd6E7ZM9+1myKDvLfju0AVOt6arscxnMFBo1cQBS28vWQFqBvtRh5XYLK8y/rAJruY5mYLrWFDA8tUQ4VLfZyN/xQOERZZ1Z+YBdMbLjb7+ZA6Bfkp3ZmJ8WK2CiDRYXkkhoJKHPEvqUtnk/DbEEmLD9vgGV0u9FU7rJ6rkbWhJxbTTz/tLuRcseHW0LXIBMpuAZNuQpzVcMZy8EB7EXnwthbrl/GgUsk/OzpASUvX4V2xRFtbzTuwzG+lYRvZEpdxVXN9DdRaghr4FJ4tsuAjQIN2Ag1X8a0xC/i1jasM0RtFE42WgXSvgb4P+MEqmHhxn99kx2c7uEwjX0Ks3TwQPqrEUXUkCwz5KWg3+ghw6WcC7HN2/924OGay4erAcueHndpgylzY10wrDJ4M7nP0TeRvRIeMOmaHdpCBjXfleOxeMRqK5/Z37iWVoEe9E+9AuPZlIlOnRMw0qgX7X504Ur1csrbJBe0oJ5E1hlKWAtWBtsVmlVuWjxsnCDmRAfpY+Hu36nE70ilXVnUlzHsc5SJMfMaPzctAU0GDVxkleHw
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6360.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(7916004)(4636009)(396003)(39860400002)(346002)(136003)(376002)(366004)(451199021)(186003)(5660300002)(85182001)(38100700002)(66946007)(8676002)(8936002)(66476007)(66556008)(82960400001)(6862004)(41300700001)(86362001)(4326008)(33716001)(83380400001)(53546011)(107886003)(2906002)(9686003)(54906003)(6512007)(6636002)(6506007)(6486002)(6666004)(26005)(478600001)(316002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?eE1xZ3h5L3pOVDV3R3FoTGpab3p4YWNXUDRvTDhqR0U3NzJJb1FVMWhnRXNO?=
 =?utf-8?B?NVJDNHJ0QUJMOVlPdnVlT1IyZHNQdDlDdUFFT1FJZkc0b21lajBSYUdJU1Zn?=
 =?utf-8?B?VnlDdzZseW4yMTBYOEFZODVMMWdRdVIxcWdoZDhsOFFDdHVGRG9ONnVHOG5C?=
 =?utf-8?B?NGNWdVBlamhjRXFLeEJFRXZLV3hiSFhKZTVZZjErcDBEbUVmTzNqeE9WcE9X?=
 =?utf-8?B?MEExUWVUT3JBTDBvSXV5MDMwUHg4dEhLV2Yza0ZaWkNPbS83MzVmY2FoMjZv?=
 =?utf-8?B?cTEva0lkOHNmZlNNMkpnektlaHQ0QjNGYk5BT3h2dVFzdjdDeWVLcjhMZlNC?=
 =?utf-8?B?Q2x1dXE3cDlWTTNESGxSUkRzNDNHY0NIY2NOaGJYYU9OMWFHM3c4YWFmeW0w?=
 =?utf-8?B?UDhwbG12Q29FdXJlMlQrN1JQQVl1SXpHM2l5K1k1WWVpbHVxNVhYOC94NS92?=
 =?utf-8?B?V1BwM1pNQVFuTVZjR1pyTFRWQkRZc2xubGxOVUl5N1ZEdEc0K3RuSGxjTThx?=
 =?utf-8?B?Tnk3RFpHdEE0WHlsMTkwekhQZTFMVkE2amtFK0tzRnNWbXprcXBzeEs5VXBq?=
 =?utf-8?B?WC9ZU0lQME5obmNOckJZTGdFMTZ3c2JZSWMwTk9oOTMzVUZzcFFUajVvUDZa?=
 =?utf-8?B?SlJ4ZDZoWUdkaUR2dGVXdTlEcFBsNUdlL1ZLai9abVc4SEJRajJiZHMvWnZp?=
 =?utf-8?B?alVDMVNPZEcwTHE2SHB6dG0vRVZCZzRRM2lHdFNFM1BwZStrR1QrNGMxeWV4?=
 =?utf-8?B?Q0hBNEMwcis4ODFlSXVZQmg4aUkyUzhFUU0zVUExQlg5dHltdk16K1ZnazF6?=
 =?utf-8?B?dEw4b2ZNSWo0T1JaZ1U3TlYyL2ZCcDgwVE93YmxKdlJVQkhTN2k5WHRlMFl5?=
 =?utf-8?B?VEpPa3cxQStzTkhxRldmdExMaG5RaUpaMUhXb3kya2p2UncxZVVqYWNUU1VC?=
 =?utf-8?B?eTJLWHdGTFg3UlpzaE1IN3Zmd0lzVnAxN094Qkh3dXdOZGNueDBER0Q3amww?=
 =?utf-8?B?K3VLWDZtN04vbWtFSnlPcjVvNWxvbjNCUHJBZkNrejFLRWlVSERpbHdrTTJX?=
 =?utf-8?B?MnNQWWQvYlFWbVk0SlFrUk5UTWduSWN4TFdYTkZ3MUVVNU1penFKcURmK3U0?=
 =?utf-8?B?OHZ5NzJuUno1Zk1rWGRFSzROelY2VitBZDdBQmJjTDFPNG83bHVTZFVpVTFj?=
 =?utf-8?B?OWNlZ0psNkpjRzd3eTZHcjB1WXpSNlBVMVIxOUZURVk2ZnRGQ3J2NWZ1eDVG?=
 =?utf-8?B?UWhDVVpsVkxDeENYTmd0Y011RFpEaGdaOXBNaGFZTWUwdzYySEFGWkdsMFZj?=
 =?utf-8?B?TXJtYVZwenZIL2lUT1FnRGdPRnFKY1VXenZRUjhRVmY1Ukp5VWZKbzUxY0Jz?=
 =?utf-8?B?SW02SUZwYlhvYjB6dDRRL1g1c0pEcnRLd3YraEc0bG9qWDNuK05XcjdBNTNE?=
 =?utf-8?B?SVRGeUpXMnFUL1lNM1VoTEE5QlNlR3AzMmhyUURYMVVpZ29LTGVhNUFxcCt5?=
 =?utf-8?B?ZG92ajA0akVqMy9hT0QrTThBN1FaZDNzeVdWeTAxbTVtMzVOY2V1dUZ4YUZQ?=
 =?utf-8?B?WXJ6bWRzVHowdXU1dTBsM2h4Q3FqOFMwdlFSZy96Nk1FdHV1V2hUS2U1QW9v?=
 =?utf-8?B?VzJSc2srUWV0dlgxMWU1VDZ6QmNtbVlwU3ovWTdFWXp3SktmdGdCd2s1Vk9G?=
 =?utf-8?B?V01FN0xyS2pyeCt4T0FNbW9GTmRBYzk0VkRFbURicGhmWnhuaG5zK1NsQ1A3?=
 =?utf-8?B?U3lYNHNsSmUwUml4N2FOdjhyQSsvVG9MQWk0NlMrbDArS0JTRDkveXhocjYv?=
 =?utf-8?B?c0JFaWZDUmduU2FWclllOEhFdkVCRllWRG5RQ09IbjJLY0hEQk9ZQzlVTEN1?=
 =?utf-8?B?VWl0NHFHZXJWK25Vc3ZSSDBvRzNqTUdGN0pidHhFdG92K3NZV0J1c3hsV3U2?=
 =?utf-8?B?eDhoSVM3V1gvOXpPZXQzZ3B2SDhWQml4R2lrVVVvMTZkalROM0FYZXVRTkw2?=
 =?utf-8?B?N2NYZWQ3aWZSeko4SHBnY0tYM3NZRVZNSmx4R084Z1FDN1lSb0EyaFk4dE5V?=
 =?utf-8?B?amlXQW1tS3BiUTNPMFdxSVhrVUU0QkxzU1Roa0ROcFVaMDBRSGExUFVVc0lF?=
 =?utf-8?Q?WY9ugsLyQinCesbQI2l1/4iN0?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	lwdgjmBzfvTGfdt6gPVzOLn5HcKBede/ss6WUPooGDk4+TI2IBtpoP+UQ9Pb3U66wai5+Inf1XDlYZCa7P81k1sguf9b1cETuRxLmg99iuCNNyH37Smd1u560buiO9mwiar42GIuT4Gbxg8aLMmWVDCWmg9OkE29DFs8ivuhVyJVSkKvS+CckWK/LzDRzLDKBpKja+ynVsjhTj38m2R9bOaPfjHVG7UIMjyv89VephZYkv/r42bQ8nejC4OkKbANIlnwAzbxh57cRy+y+U5pZrYc5gL//6gUPC00urpLp2IV/b+tkhIRk/E5LrnuFoo9ytJSKxqDmEn8CTi0lddG0nedIsTV1d+WlnW8ghkTD1u8034/oeA+8mb6LCdk4brilk+vEWd95eIlPL2BZ6/Ok1YBBxBPI1y5esqwRPfP0RLveDeKLl4LuRzKajC6ZzZFXi7ojsIDfvhBYSIBpTGW2+XMCEoPcbeooBeHS/m8wTHbxolnfnLCbUwsCJUD+L+J/LNz/qHAcD6zduBBHMUhbIU5I259GqAjVAurh07x990/ALa2lR5zfpu/ZxGswk5yYAwznRome1XNsxtoqbNd4rDnl9fIbMCHcvQloBK7YqhqfNnKMHW+wg6tEZgcdKO9ziBM4OqIbNVuIvHathd9qnt9CkZjMssXYYNqRi5Bf/PfStGYIaVpKEDDYCxNzL5EN+EF5l5JyLvFpV5fayUIoLCXGya5X5TEQboFgGN4cs0SArwU8Tj8H/ejpG3YfB1YACJXqWnIJhzpmAXUHu7spPNT3qz+4lhL6D8yK7P7kL6qIWs6MI9+9/jnLAOeCIv3
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 84be8364-0641-432d-7f17-08db368d42a5
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 10:54:14.6232
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: klYusHizOEgwOdStYM8ivWH9VR9ypk8LSM2NRSrxJIGgSrQJW0PW13Hj7+wbybHnMOTLJpiZxUd1iRSlpVcq+g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6530

On Thu, Apr 06, 2023 at 10:36:37AM +0100, Andrew Cooper wrote:
> On 06/04/2023 10:18 am, Roger Pau Monne wrote:
> > It's a GNU libc specific header which prevents building on musl for
> > example.  Instead open code an equivalent replacement for the usage
> > of ERROR() and DIFF_FATAL() macros.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: Ross Lagerwall <ross.lagerwall@citrix.com>
> > ---
> >  common.h             | 10 ++++++----
> >  create-diff-object.c |  1 -
> >  lookup.c             |  7 +++++--
> >  prelink.c            |  1 -
> >  4 files changed, 11 insertions(+), 8 deletions(-)
> >
> > diff --git a/common.h b/common.h
> > index 9a9da79..ec2ea33 100644
> > --- a/common.h
> > +++ b/common.h
> > @@ -1,18 +1,20 @@
> >  #ifndef _COMMON_H_
> >  #define _COMMON_H_
> >  
> > -#include <error.h>
> > -
> >  extern char *childobj;
> >  
> >  #define ERROR(format, ...) \
> > -	error(1, 0, "ERROR: %s: %s: %d: " format, childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__)
> > +({ \
> > +	fflush(stdout); \
> > +	fprintf(stderr, "ERROR: %s: %s: %d: " format "\n", childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__); \
> > +	exit(1); \
> > +})
> >  
> >  #define DIFF_FATAL(format, ...) \
> >  ({ \
> >  	fflush(stdout); \
> >  	fprintf(stderr, "ERROR: %s: " format "\n", childobj, ##__VA_ARGS__); \
> > -	error(2, 0, "unreconcilable difference"); \
> > +	exit(2); \
> >  })
> 
> Looking at the usage, can't we just use err() instead?

Hm, err() will unconditionally use errno, which doesn't seem wanted
here, as in both cases errnum is passed as 0, effectively preventing
it from being printed.

I could use errx(), as that doesn't append the errno message; I think
that's available on musl.

Let me know if you agree with that substitution.
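A sketch of the difference being discussed, under the assumption that the
proposed substitution is made (the format_error() helper and the literal
"example.o" object name are invented here purely for illustration):

```c
#include <err.h>
#include <stdio.h>
#include <stdlib.h>

char *childobj = "example.o"; /* stands in for the extern in common.h */

/* Helper that builds the "ERROR: <obj>: <func>: <line>: <msg>" prefix,
 * so the formatting can be exercised without exiting. */
static int format_error(char *buf, size_t len, const char *obj,
                        const char *func, int line, const char *msg)
{
    return snprintf(buf, len, "ERROR: %s: %s: %d: %s", obj, func, line, msg);
}

/* errx()-based variant: like the open-coded fflush+fprintf+exit in the
 * patch, it exits without appending strerror(errno) -- unlike err(),
 * which always prints the errno message.  <err.h> is BSD-derived but
 * provided by both glibc and musl. */
#define ERROR_ERRX(format, ...) \
    errx(1, "ERROR: %s: %s: %d: " format, \
         childobj, __func__, __LINE__, ##__VA_ARGS__)
```

So errx(2, "unreconcilable difference") would preserve the exit code of the
original DIFF_FATAL() without dragging in the glibc-only error() interface.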

> Also, I suspect you don't intend to delete the error message in
> DIFF_FATAL() ?

I didn't think it was that helpful, but I could keep it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 11:00:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 11:00:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518830.805737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNMK-0006GZ-7Y; Thu, 06 Apr 2023 11:00:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518830.805737; Thu, 06 Apr 2023 11:00:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNMK-0006GS-4v; Thu, 06 Apr 2023 11:00:52 +0000
Received: by outflank-mailman (input) for mailman id 518830;
 Thu, 06 Apr 2023 11:00:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w3yY=75=citrix.com=prvs=453e3c94d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkNMI-0006GK-I6
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 11:00:50 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 48f4623b-d46a-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 13:00:49 +0200 (CEST)
Received: from mail-dm6nam10lp2106.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.106])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Apr 2023 07:00:46 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CO1PR03MB5683.namprd03.prod.outlook.com (2603:10b6:303:9a::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Thu, 6 Apr
 2023 11:00:44 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Thu, 6 Apr 2023
 11:00:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48f4623b-d46a-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680778849;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=IlpTgSJI9pfA1iEPVzxBYcTZr86Ef7T3f6t/Sx2AMZU=;
  b=NlJFMGcRN/Txd6Dek8Ua9BNaPR1Vqx6S7XEG8/FMBz5hWvPz1fCyXRJK
   f9mbYp017oQtBZqnU+HBpUYKQtRM3VeaUwX/G70BIAOmja6VfdQfUa/iU
   SDFSF7HejnPP06CUptxk3bo6V1i6T4oVCTUy+AAuGcYKMMiXi5hxCV3dj
   k=;
X-IronPort-RemoteIP: 104.47.58.106
X-IronPort-MID: 104458869
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:qZix3KDgzAsPuhVW/w3iw5YqxClBgxIJ4kV8jS/XYbTApD50hmcFx
 mNKCmCPPPjZNGqnL41zPYvgo01SuZOAn9BhQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFu8pvlDs15K6p4G9A4ARnDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw+PReBjl+9
 NAiChMTZzCxpsyTy5ikc7w57igjBJGD0II3nFhFlGmcIdN4BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTI9exuuza7IA9ZidABNPL8fNCQSNoTtUGfv
 m/cpEzyAw0ANczZwj2Amp6prraXwX+mCNJMRdVU8NZMqmPL5lE1VSFLdmmRmueVsUGbA+pmf
 hl8Fi0G6PJaGFaQZsnwWVi0rWCJujYYWsFMCKsq5QeV0K3W7g2FQG8eQVZpa9sgrsY6AyMr0
 lyhmMngDjhi9raSTBqgGqy8qDqzPW0ZKDEEbCpdFQ8duYC7/sc0kw7FSctlHOitlNrpFDrsw
 jeM6i8jm7EUis1N3KK+lbzavw+RSlHyZlZdzm3qsqiNt2uVuKbNi1SU1GXm
IronPort-HdrOrdr: A9a23:gzQPWq88LBiHJJZZ7a9uk+EKdb1zdoMgy1knxilNoENuH/Bwxv
 rFoB1E73TJYVcqKRcdcLW7VJVoLkmskaKdjbNhX4tKPzOW21dATrsSlLcKqgeIc0KRltK1vZ
 0QC5SWY+eAamSS4/yKhTVQJ+xQu+VvvZrY9tv2/jNId0VHeqtg5wB2BkKyFVB3fhBPAd4cGI
 CH7sRKijK8cTBPB/7Lc0UtbqzmnZnmhZjmaRkJC1oO7xSPtyqh7PrfHwKD1hkTfjtTyfMJ8H
 TDkSb++qK/2svLuCP05iv21dB7idHhwtxMCIiljdUUECzljkKSaIFoS9S5zU4ISLXE0jcXue
 iJhy1lE9V46nvXcG3wiwDqwRPc3DEn7GKn4UOEgFP4yPaJDg4SOo5kv8Z0YxHZ400vsJVXy6
 RQxV+UsJJREFfpgDn93d7VTBtn/3DE6kbKqdRjwkC3bLFuIYO57LZvin+9Ka1wax4SPbpXWN
 WHD6nnlYlrmB2hHjzkV1JUsaCRt0QIb2q7q3c5y7aoOhht7QFEJhgjtbwidzE7heYAd6U=
X-IronPort-AV: E=Sophos;i="5.98,323,1673931600"; 
   d="scan'208";a="104458869"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aM9cyZ+1c6w089f298g0+DD8RyVvyvnLcmRb6lNec9hy7OGsbGDTU4r1pXVKx6jLFfharGl06WBt44Q+kXm/WZYrO3onCH2MubAh+BflPbe4nUH9qIwGLmU2XEx0ViXETqg/Tmu8GUm6sTap6eBhEae+LDPXJntoHvZP5x+3noBExAmFK8By9E5oQ9LPIoySTnuT62vyTspDUvZzGi9+7R6NNSFOtI7RQ90zSIzcD0kphXDrB2k9pR57Q8zV+ozLr8xBhcmAZJMwvBU17F2tspc3fr4edYLajkaxPEScDgzvfG0V3Y6gwAeCCFL10piMa1ykXLY71qmWjTx7jsGtOw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2xNSNKUP7UWFvHKNJukQfcArgYn3YiCjUe8bsow8ens=;
 b=OGjC0bWBb+6lF1MCSGZalYbQXc9yKq3TeSsYdK1KFeMkFiB8F0RLz7S1PAuo8eegAfvSAdYhbF6RSuogiRdjOJopRzxi+jKNxcu84bvYzB4jV2R6MllPL0ld9m3vPckmJeqwCXba8NuCyA+leol3JeEDu2q53mcOoa1ewhywd+Gqo5A2lFkc/+Uqt5LlsEOS2aNbgzx4ws2HjPwQgaZHzPGZ8EL039jFEjpP/ezjXw9oqHLkTESpiftfo8P9ja53PMlbTAozEOekg6E5pWl8YqXB2n0m5QGGrbp3z6j09Camt7LVi1gEXNU7ReY8e/nh0oHXZZib0IxwmZ0OMIanGA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2xNSNKUP7UWFvHKNJukQfcArgYn3YiCjUe8bsow8ens=;
 b=dYzRhre05yY9j+3nbifdi0G7JLp25/Uwx96uMIy9mUQ+98RA9YxzXfGxTCR4oQhYfguMoJICMMtkIKVndnQzEnHQQh0mDtcD3gigBECN/dcZwgYMzbSUlr2rNZmaOezk0S9g1fcDIXpGcXfY6NOHTXGs9i/1h0vee3JqzPi5uHc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <00396729-4e60-902a-dbe4-e2ef9ab3fc19@citrix.com>
Date: Thu, 6 Apr 2023 12:00:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] livepatch-tools: remove usage of error.h
Content-Language: en-GB
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>
References: <20230406091807.49028-1-roger.pau@citrix.com>
 <4525547c-e309-c994-3dc7-5d1b398aabe1@citrix.com>
 <ZC6k0VNwb/+VzWHP@Air-de-Roger>
In-Reply-To: <ZC6k0VNwb/+VzWHP@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 06/04/2023 11:54 am, Roger Pau Monné wrote:
> On Thu, Apr 06, 2023 at 10:36:37AM +0100, Andrew Cooper wrote:
>> On 06/04/2023 10:18 am, Roger Pau Monne wrote:
>>> It's a GNU libc specific header which prevents building on musl for
>>> example.  Instead open code an equivalent replacement for the usage
>>> of ERROR() and DIFF_FATAL() macros.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>> Cc: Ross Lagerwall <ross.lagerwall@citrix.com>
>>> ---
>>>  common.h             | 10 ++++++----
>>>  create-diff-object.c |  1 -
>>>  lookup.c             |  7 +++++--
>>>  prelink.c            |  1 -
>>>  4 files changed, 11 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/common.h b/common.h
>>> index 9a9da79..ec2ea33 100644
>>> --- a/common.h
>>> +++ b/common.h
>>> @@ -1,18 +1,20 @@
>>>  #ifndef _COMMON_H_
>>>  #define _COMMON_H_
>>>  
>>> -#include <error.h>
>>> -
>>>  extern char *childobj;
>>>  
>>>  #define ERROR(format, ...) \
>>> -	error(1, 0, "ERROR: %s: %s: %d: " format, childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__)
>>> +({ \
>>> +	fflush(stdout); \
>>> +	fprintf(stderr, "ERROR: %s: %s: %d: " format "\n", childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__); \
>>> +	exit(1); \
>>> +})
>>>  
>>>  #define DIFF_FATAL(format, ...) \
>>>  ({ \
>>>  	fflush(stdout); \
>>>  	fprintf(stderr, "ERROR: %s: " format "\n", childobj, ##__VA_ARGS__); \
>>> -	error(2, 0, "unreconcilable difference"); \
>>> +	exit(2); \
>>>  })
>> Looking at the usage, can't we just use err() instead?
> Hm, err() will unconditionally use errno, which doesn't seem wanted
> here, as in both cases errnum is passed as 0, effectively preventing
> printing it.
>
> I could use errx(), as that doesn't append an error message, I think
> that's available on musl.
>
> Let me know if you agree with that substitution.

Yeah, anything in err.h ought to be fine.

>
>> Also, I suspect you don't intend to delete the error message in
>> DIFF_FATAL() ?
> I didn't think it was that helpful, but I could keep it.

I'd be hesitant to drop it, considering how much shell parsing there is
of these tools.

But ultimately it's up to Konrad/Ross.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 11:11:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 11:11:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518836.805748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNWU-0007pd-A7; Thu, 06 Apr 2023 11:11:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518836.805748; Thu, 06 Apr 2023 11:11:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNWU-0007pW-7M; Thu, 06 Apr 2023 11:11:22 +0000
Received: by outflank-mailman (input) for mailman id 518836;
 Thu, 06 Apr 2023 11:11:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w3yY=75=citrix.com=prvs=453e3c94d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkNWS-0007pP-Gs
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 11:11:20 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c099e94a-d46b-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 13:11:19 +0200 (CEST)
Received: from mail-mw2nam12lp2041.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.41])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Apr 2023 07:11:17 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW5PR03MB6957.namprd03.prod.outlook.com (2603:10b6:303:1a8::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Thu, 6 Apr
 2023 11:11:10 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Thu, 6 Apr 2023
 11:11:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c099e94a-d46b-11ed-85db-49a42c6b2330
Message-ID: <0a5ce571-bd1e-e25a-fb76-a3a2ec6de201@citrix.com>
Date: Thu, 6 Apr 2023 12:11:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/svm: Provide EXITINFO decodes for MOV CR/DR
 intercepts
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230405204423.2113418-1-andrew.cooper3@citrix.com>
 <313a2a18-020c-ca76-f620-f5694a74efeb@suse.com>
 <24133c6e-3e66-7be9-41af-daa3db4fa961@citrix.com>
 <e9bedb21-c081-9a00-e147-7528c28fc3f8@suse.com>
In-Reply-To: <e9bedb21-c081-9a00-e147-7528c28fc3f8@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 06/04/2023 10:59 am, Jan Beulich wrote:
> On 06.04.2023 11:52, Andrew Cooper wrote:
>> On 06/04/2023 10:31 am, Jan Beulich wrote:
>>> On 05.04.2023 22:44, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>>>> +++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
>>>> @@ -450,6 +450,11 @@ struct vmcb_struct {
>>>>  
>>>>                  uint64_t nrip;
>>>>              } io;
>>>> +            struct {
>>>> +                uint64_t gpr:4;
>>>> +                uint64_t :59;
>>>> +                bool     mov_insn:1; /* MOV, as opposed to LMSW, CLTS, etc */
>>>> +            } mov;
>>> The field being named just "mov" makes it apparently applicable to DRn
>>> moves, too (and the title supports this), yet the top bit doesn't have
>>> this meaning there. So perhaps say "MOV-CR" (or alike) in the comment?
>> Hmm - I'd not spotted that distinction.
>>
>> Xen never decodes the exitinfo for a DR access - we just resync dr
>> state, drop the intercept and reenter the guest.
>>
>> Therefore I think it would be better to rename "mov" to "mov_cr" so you
>> can't use the mov_insn field in a context that plausibly looks like a dr
>> access.
>>
>> Thoughts?
> That was my other thought; it merely seemed to me that updating the comment
> would allow future (if ever) use with MOV-DR. So yes, if you prefer going
> that route: Fine with me.

Will do.  I don't foresee us ever needing to inspect the dr decode
information.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 11:32:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 11:32:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518840.805757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNqc-0001qn-UE; Thu, 06 Apr 2023 11:32:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518840.805757; Thu, 06 Apr 2023 11:32:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNqc-0001qg-R5; Thu, 06 Apr 2023 11:32:10 +0000
Received: by outflank-mailman (input) for mailman id 518840;
 Thu, 06 Apr 2023 11:32:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=15Bo=75=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1pkNqc-0001qa-Fu
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 11:32:10 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a9e897c1-d46e-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 13:32:09 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 7F35664650;
 Thu,  6 Apr 2023 11:32:07 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B761BC433A1;
 Thu,  6 Apr 2023 11:32:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9e897c1-d46e-11ed-85db-49a42c6b2330
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paolo Abeni <pabeni@redhat.com>,
	Sasha Levin <sashal@kernel.org>,
	wei.liu@kernel.org,
	paul@xen.org,
	davem@davemloft.net,
	edumazet@google.com,
	kuba@kernel.org,
	xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org
Subject: [PATCH AUTOSEL 6.2 14/17] xen/netback: use same error messages for same errors
Date: Thu,  6 Apr 2023 07:31:28 -0400
Message-Id: <20230406113131.648213-14-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230406113131.648213-1-sashal@kernel.org>
References: <20230406113131.648213-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Juergen Gross <jgross@suse.com>

[ Upstream commit 2eca98e5b24d01c02b46c67be05a5f98cc9789b1 ]

Issue the same error message in case an illegal page boundary crossing
has been detected in both cases where this is tested.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Link: https://lore.kernel.org/r/20230329080259.14823-1-jgross@suse.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netback/netback.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 5c266062c08f0..c35c085dbc877 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -996,10 +996,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
-			netdev_err(queue->vif->dev,
-				   "txreq.offset: %u, size: %u, end: %lu\n",
-				   txreq.offset, txreq.size,
-				   (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size);
+			netdev_err(queue->vif->dev, "Cross page boundary, txreq.offset: %u, size: %u\n",
+				   txreq.offset, txreq.size);
 			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 11:32:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 11:32:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518843.805768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNrF-0002K5-7f; Thu, 06 Apr 2023 11:32:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518843.805768; Thu, 06 Apr 2023 11:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNrF-0002Jy-4h; Thu, 06 Apr 2023 11:32:49 +0000
Received: by outflank-mailman (input) for mailman id 518843;
 Thu, 06 Apr 2023 11:32:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=15Bo=75=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1pkNrD-0001qa-N3
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 11:32:47 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c0bfc78e-d46e-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 13:32:47 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E10B66459D;
 Thu,  6 Apr 2023 11:32:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 22A6CC4339B;
 Thu,  6 Apr 2023 11:32:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0bfc78e-d46e-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680780765;
	bh=JYv8anrELkRS2InyKfowv6LTjAw5NsWku3VF0hOAG1c=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=TlhXg6GL5mf5rV0n3VwXmCe9PulH8a8MtoXyFv6OONrS0HyqnnsaUhkG+tnx+i6FS
	 hMzSiofU9zHbmdEeAgUwJa9tuO8xehyJ3ftsoCqe89JcT9bVyrwFkc4DJ7KvnfFQ1d
	 7S2h1mZ62MyPSdt9F4WDrpuD4fWI1QVLi/rjQYN9enSxrd6eddA7pXAeDyOs1G4+ZD
	 tPGNPiivyC7pQgnZFdLVFBb6B7cHKFSyhn+fFKK9OoVcvWvoSkMT9z8QA25keq1oJY
	 AEh/LKzK5g2/5vkxiOysqp5YTp1eQhxV4QuNkuzw0xeY/SUVx7uFBgsCQyDtYcSuEK
	 siLcWKzzJ1m8g==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paolo Abeni <pabeni@redhat.com>,
	Sasha Levin <sashal@kernel.org>,
	wei.liu@kernel.org,
	paul@xen.org,
	davem@davemloft.net,
	edumazet@google.com,
	kuba@kernel.org,
	xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org
Subject: [PATCH AUTOSEL 6.1 14/17] xen/netback: use same error messages for same errors
Date: Thu,  6 Apr 2023 07:32:08 -0400
Message-Id: <20230406113211.648424-14-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230406113211.648424-1-sashal@kernel.org>
References: <20230406113211.648424-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Juergen Gross <jgross@suse.com>

[ Upstream commit 2eca98e5b24d01c02b46c67be05a5f98cc9789b1 ]

Issue the same error message in case an illegal page boundary crossing
has been detected in both cases where this is tested.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Link: https://lore.kernel.org/r/20230329080259.14823-1-jgross@suse.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netback/netback.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 5c266062c08f0..c35c085dbc877 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -996,10 +996,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
-			netdev_err(queue->vif->dev,
-				   "txreq.offset: %u, size: %u, end: %lu\n",
-				   txreq.offset, txreq.size,
-				   (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size);
+			netdev_err(queue->vif->dev, "Cross page boundary, txreq.offset: %u, size: %u\n",
+				   txreq.offset, txreq.size);
 			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 11:33:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 11:33:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518847.805778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNrg-0002nP-Fa; Thu, 06 Apr 2023 11:33:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518847.805778; Thu, 06 Apr 2023 11:33:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNrg-0002nI-Cg; Thu, 06 Apr 2023 11:33:16 +0000
Received: by outflank-mailman (input) for mailman id 518847;
 Thu, 06 Apr 2023 11:33:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=15Bo=75=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1pkNrf-0001qa-CJ
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 11:33:15 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d13ff762-d46e-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 13:33:14 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id BC7A4645A1;
 Thu,  6 Apr 2023 11:33:13 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 61EE2C433A1;
 Thu,  6 Apr 2023 11:33:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d13ff762-d46e-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680780793;
	bh=92M1GKdP/0zGE9SErJaH+kBPncxID6AmAyQ6G+mp5Gk=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=TGGRLJnk8qjic7eN7c12nElKyNd5bXmMCjG7/bjB3PqqoN+SbVBlfKzalzvLQIReV
	 tyHgF0TW5nnzvp8CWHRO4EaeJATBDVEDxFWA392f+CJK3udrAgiShc9sjRBvpGK98w
	 Ikza+T29g9Yv3i7hYwz7694Oje2lYmQ5nb9hggHiBk8wFQ8OUCg+RV2z+WbjtDEi/p
	 0yomVtcyjlIngUdgNwxW8JuUFqmJ7R4JNLwFVGLyLHmWcLeI/8NUVpNaYJaWRrCg1z
	 E3fK0pol1vJ/MZ5i1ENnKUYgfVacwAkF3TcmYFGX66k3yYNhUpTiYTfZpzz5ym6ogO
	 giGGe/45zFh6g==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paolo Abeni <pabeni@redhat.com>,
	Sasha Levin <sashal@kernel.org>,
	wei.liu@kernel.org,
	paul@xen.org,
	davem@davemloft.net,
	edumazet@google.com,
	kuba@kernel.org,
	xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org
Subject: [PATCH AUTOSEL 5.15 10/11] xen/netback: use same error messages for same errors
Date: Thu,  6 Apr 2023 07:32:49 -0400
Message-Id: <20230406113250.648634-10-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230406113250.648634-1-sashal@kernel.org>
References: <20230406113250.648634-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Juergen Gross <jgross@suse.com>

[ Upstream commit 2eca98e5b24d01c02b46c67be05a5f98cc9789b1 ]

Issue the same error message in case an illegal page boundary crossing
has been detected in both cases where this is tested.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Link: https://lore.kernel.org/r/20230329080259.14823-1-jgross@suse.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netback/netback.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 303d8ebbaafc4..63118b56c5289 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -996,10 +996,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
-			netdev_err(queue->vif->dev,
-				   "txreq.offset: %u, size: %u, end: %lu\n",
-				   txreq.offset, txreq.size,
-				   (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size);
+			netdev_err(queue->vif->dev, "Cross page boundary, txreq.offset: %u, size: %u\n",
+				   txreq.offset, txreq.size);
 			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 11:33:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 11:33:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518850.805787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNs7-0003Lu-Og; Thu, 06 Apr 2023 11:33:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518850.805787; Thu, 06 Apr 2023 11:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNs7-0003Ln-Lz; Thu, 06 Apr 2023 11:33:43 +0000
Received: by outflank-mailman (input) for mailman id 518850;
 Thu, 06 Apr 2023 11:33:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=15Bo=75=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1pkNs6-0003Jk-5x
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 11:33:42 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id df5972cb-d46e-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 13:33:38 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 6B75764661;
 Thu,  6 Apr 2023 11:33:37 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1936CC43446;
 Thu,  6 Apr 2023 11:33:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df5972cb-d46e-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680780817;
	bh=PlfyoNysZ9FML7GR+HPb+prC5EAQJfTUe15Aue3wG40=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=ivl/5fVh1TWiHJGMYcvdcLUZdx09TpE1iQos1OZrSEMA8SocOAi29lAQKD+myDqYl
	 z3QJJHtZEkKrfLHK3dZB5ePP4Md9cSt8YENZWIMOAwjhts3wAXGq9Bc5UEpUXLeLjD
	 /dbs2l8sYNJGLhpj3pn1CRF3XwO29rf8Q7WrV9yX6i6/ZRbqcTJBYOOW7TXf90e/p4
	 9T3S3KHsypRMOqPo/RR79Vay66qGR3RxuBUybVA3l5TOZAPFZOyzVKJ27Z7QGbpgjK
	 9jQQfhItmZTG7hi67yje0e88CltNV9UxivunEZztcWoN/LPUmQG9gV5y9joQSIeaMn
	 sDizZyNEsCOdg==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paolo Abeni <pabeni@redhat.com>,
	Sasha Levin <sashal@kernel.org>,
	wei.liu@kernel.org,
	paul@xen.org,
	davem@davemloft.net,
	edumazet@google.com,
	kuba@kernel.org,
	xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org
Subject: [PATCH AUTOSEL 5.10 9/9] xen/netback: use same error messages for same errors
Date: Thu,  6 Apr 2023 07:33:15 -0400
Message-Id: <20230406113315.648777-9-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230406113315.648777-1-sashal@kernel.org>
References: <20230406113315.648777-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Juergen Gross <jgross@suse.com>

[ Upstream commit 2eca98e5b24d01c02b46c67be05a5f98cc9789b1 ]

Issue the same error message in case an illegal page boundary crossing
has been detected in both cases where this is tested.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Link: https://lore.kernel.org/r/20230329080259.14823-1-jgross@suse.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netback/netback.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 67614e7166ac8..379ac9ca60b70 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -996,10 +996,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
-			netdev_err(queue->vif->dev,
-				   "txreq.offset: %u, size: %u, end: %lu\n",
-				   txreq.offset, txreq.size,
-				   (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size);
+			netdev_err(queue->vif->dev, "Cross page boundary, txreq.offset: %u, size: %u\n",
+				   txreq.offset, txreq.size);
 			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 11:34:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 11:34:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518853.805798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNsS-0003tA-2l; Thu, 06 Apr 2023 11:34:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518853.805798; Thu, 06 Apr 2023 11:34:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNsS-0003t3-00; Thu, 06 Apr 2023 11:34:04 +0000
Received: by outflank-mailman (input) for mailman id 518853;
 Thu, 06 Apr 2023 11:34:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=15Bo=75=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1pkNsQ-0001qa-7h
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 11:34:02 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ed2220a7-d46e-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 13:34:01 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 90DCA6446C;
 Thu,  6 Apr 2023 11:34:00 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CDE3BC433EF;
 Thu,  6 Apr 2023 11:33:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed2220a7-d46e-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680780840;
	bh=DPDY4dLUTQFSWfyffi7VSXhzq2fp9bKn4Z6f3hsXLJY=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=I3O+lWgS9za640uc/O0WqKXjg9ExsPtbEkIL3gWU41hf7E1ctsPbsjZ9en7ZEPKq2
	 BfttYxCflZFHdJZWU74UrushclmrnUqLbOYP6DPlvJLhrsviv0nUnM0CvbJMj2WNrn
	 dS+KSjtYq2r/RmdaCmu4swItk+NekGlwVYBMR8Y77ngGriccJgRhW0MuARmSNf3OoB
	 Vyyv3//853X8cfh/V8ICCmxawrg92SvsJkxUFix4nS0Oi548sNtsNHVH2voZSkr8ey
	 oIql7dWyzbs7znGxHoZSz1DQTUAFpKjkHCfAHjOJJ/s6TBUx7Qho7jiR6kgpNRPGSS
	 Zp2cMUbK6Z3CQ==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paolo Abeni <pabeni@redhat.com>,
	Sasha Levin <sashal@kernel.org>,
	wei.liu@kernel.org,
	paul@xen.org,
	davem@davemloft.net,
	edumazet@google.com,
	kuba@kernel.org,
	xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org
Subject: [PATCH AUTOSEL 5.4 9/9] xen/netback: use same error messages for same errors
Date: Thu,  6 Apr 2023 07:33:37 -0400
Message-Id: <20230406113337.648916-9-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230406113337.648916-1-sashal@kernel.org>
References: <20230406113337.648916-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Juergen Gross <jgross@suse.com>

[ Upstream commit 2eca98e5b24d01c02b46c67be05a5f98cc9789b1 ]

Issue the same error message in case an illegal page boundary crossing
has been detected in both cases where this is tested.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Link: https://lore.kernel.org/r/20230329080259.14823-1-jgross@suse.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netback/netback.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 3dfc5c66f1408..a3078755939e3 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -989,10 +989,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
-			netdev_err(queue->vif->dev,
-				   "txreq.offset: %u, size: %u, end: %lu\n",
-				   txreq.offset, txreq.size,
-				   (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size);
+			netdev_err(queue->vif->dev, "Cross page boundary, txreq.offset: %u, size: %u\n",
+				   txreq.offset, txreq.size);
 			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 11:34:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 11:34:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518858.805808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNt5-0004Yi-D4; Thu, 06 Apr 2023 11:34:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518858.805808; Thu, 06 Apr 2023 11:34:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNt5-0004Yb-8x; Thu, 06 Apr 2023 11:34:43 +0000
Received: by outflank-mailman (input) for mailman id 518858;
 Thu, 06 Apr 2023 11:34:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=15Bo=75=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1pkNt3-0003Jk-VV
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 11:34:41 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 04309ea0-d46f-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 13:34:40 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 3DD046464C;
 Thu,  6 Apr 2023 11:34:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 73C4CC433D2;
 Thu,  6 Apr 2023 11:34:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04309ea0-d46f-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680780878;
	bh=FyLMAWNPRcnaESuiKhWJ8kZc0wNVgsQO4bWtiLKhM+Q=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=ZyTB2qmVRdeJYEe1AkE9jSXueWn3qkEgoSP1C47Qx/eiipH68OWecCWliK78mIAai
	 LFSpTul+sm4j3wjuRghYvUAtMoDyf2qA7+NxuCoAr+BJ/Vv32cbGIldMlTWEoNeNJ1
	 QOHF6VM9vXmz6inUdpkLdQYuZ98FjKxYxjA6J+9UK3Ph5pzq91fFFf/VsUKJJhkk95
	 fCcY3a6W11lmsF9bls5tbHXrYAeDhOkiIxq+hVtAZMj5gB7g1oyqOdvktfMVVrYGS1
	 zMPjgYh22yHEvbF8fKUpW2zSXcQYiskZlzB4IFaGM8z5wwoBli11H+KL48MTl0Nwcb
	 yOf0RawYxaYcA==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paolo Abeni <pabeni@redhat.com>,
	Sasha Levin <sashal@kernel.org>,
	wei.liu@kernel.org,
	paul@xen.org,
	davem@davemloft.net,
	edumazet@google.com,
	kuba@kernel.org,
	xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org
Subject: [PATCH AUTOSEL 4.14 7/7] xen/netback: use same error messages for same errors
Date: Thu,  6 Apr 2023 07:34:21 -0400
Message-Id: <20230406113421.649149-7-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230406113421.649149-1-sashal@kernel.org>
References: <20230406113421.649149-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Juergen Gross <jgross@suse.com>

[ Upstream commit 2eca98e5b24d01c02b46c67be05a5f98cc9789b1 ]

Issue the same error message in case an illegal page boundary crossing
has been detected in both cases where this is tested.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Link: https://lore.kernel.org/r/20230329080259.14823-1-jgross@suse.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netback/netback.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 252414a9293db..a141db3f0dc7c 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -991,10 +991,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
-			netdev_err(queue->vif->dev,
-				   "txreq.offset: %u, size: %u, end: %lu\n",
-				   txreq.offset, txreq.size,
-				   (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size);
+			netdev_err(queue->vif->dev, "Cross page boundary, txreq.offset: %u, size: %u\n",
+				   txreq.offset, txreq.size);
 			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 11:39:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 11:39:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518864.805818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNxI-0005JT-Uu; Thu, 06 Apr 2023 11:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518864.805818; Thu, 06 Apr 2023 11:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNxI-0005JM-RI; Thu, 06 Apr 2023 11:39:04 +0000
Received: by outflank-mailman (input) for mailman id 518864;
 Thu, 06 Apr 2023 11:39:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=15Bo=75=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1pkNsl-0001qa-Cd
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 11:34:23 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f9976a13-d46e-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 13:34:22 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 78EB96468D;
 Thu,  6 Apr 2023 11:34:21 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B342DC433EF;
 Thu,  6 Apr 2023 11:34:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9976a13-d46e-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680780860;
	bh=fHd5p+u3wf0voZ3oDttRIL4X67Yo6YPvWVEiI3rElds=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=kEWGd4RgSq03YTvEn7O6jvEUYCm9BsS4kA2Z3p0phclRBFfJd/F/qJqjZt7gMZ0kP
	 XQRRBx4fVJ9UwU0akyxEjw0p2elkuFIgJEAvFZEPXOKUAZEVJfQ/6cr4xPzoip9F1o
	 jP8NcTU8tTdAHgMpyNUFE5+4YPaxGg0lFOqJosnn79KgKc1MmFLNiRm6NeQQPuVcmm
	 RyQYP4AJj/FlITLV8bTsPEB7P4ucqPmcxiX2a4RGxTntCrVRr3l4sk+WAkeq8MUr/K
	 MdSUZP+goACcxZRbTRV9TqTVP2o0YVaNy2qpZSHQC4CLq3wWDh2i2Udqgc5FpaCQWb
	 7RZ3OxTrAxs+Q==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paolo Abeni <pabeni@redhat.com>,
	Sasha Levin <sashal@kernel.org>,
	wei.liu@kernel.org,
	paul@xen.org,
	davem@davemloft.net,
	edumazet@google.com,
	kuba@kernel.org,
	xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org
Subject: [PATCH AUTOSEL 4.19 8/8] xen/netback: use same error messages for same errors
Date: Thu,  6 Apr 2023 07:34:00 -0400
Message-Id: <20230406113400.649038-8-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230406113400.649038-1-sashal@kernel.org>
References: <20230406113400.649038-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Juergen Gross <jgross@suse.com>

[ Upstream commit 2eca98e5b24d01c02b46c67be05a5f98cc9789b1 ]

Issue the same error message in case an illegal page boundary crossing
has been detected in both cases where this is tested.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Link: https://lore.kernel.org/r/20230329080259.14823-1-jgross@suse.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netback/netback.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index ed644b6824cef..d2b79d7c0b881 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -989,10 +989,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
-			netdev_err(queue->vif->dev,
-				   "txreq.offset: %u, size: %u, end: %lu\n",
-				   txreq.offset, txreq.size,
-				   (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size);
+			netdev_err(queue->vif->dev, "Cross page boundary, txreq.offset: %u, size: %u\n",
+				   txreq.offset, txreq.size);
 			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 11:41:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 11:41:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518869.805827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNzZ-0006iy-AN; Thu, 06 Apr 2023 11:41:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518869.805827; Thu, 06 Apr 2023 11:41:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkNzZ-0006ir-7j; Thu, 06 Apr 2023 11:41:25 +0000
Received: by outflank-mailman (input) for mailman id 518869;
 Thu, 06 Apr 2023 11:41:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RM5I=75=citrix.com=prvs=453d769fd=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pkNzY-0006ik-Aw
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 11:41:24 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f2df4311-d46f-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 13:41:21 +0200 (CEST)
Received: from mail-dm3nam02lp2044.outbound.protection.outlook.com (HELO
 NAM02-DM3-obe.outbound.protection.outlook.com) ([104.47.56.44])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Apr 2023 07:41:18 -0400
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com (2603:10b6:a03:395::11)
 by SJ0PR03MB6949.namprd03.prod.outlook.com (2603:10b6:a03:43f::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Thu, 6 Apr
 2023 11:41:17 +0000
Received: from SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda]) by SJ0PR03MB6360.namprd03.prod.outlook.com
 ([fe80::48a7:d1ab:897:acda%6]) with mapi id 15.20.6254.026; Thu, 6 Apr 2023
 11:41:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2df4311-d46f-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680781281;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=FSOJcFuyJLoPTZUkurwEgHtgOIn3wm4+zs0pUUG072c=;
  b=UdNLsgv7Zw0ebBotoJrJ/IXeVIfpuYHfruLA44e80NzD3zj1rlsi1IHT
   JIHAlaZVqbJrocWwtMycLCoc/bpuNfJiFtbBWJz2eQxGc28S6htlnc+MR
   Vv5qtikT5zc54E6klymWJYousWiOKnONZ3xYLtdmaRM92HG+Nm/cx/B2I
   0=;
X-IronPort-RemoteIP: 104.47.56.44
X-IronPort-MID: 104580366
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,323,1673931600"; 
   d="scan'208";a="104580366"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CNotY7T17C5arjjADDIK3ynwFtruXaQ7q4h/OD3p/0r6yDIqG3JJD/g+e1g/KpQYZTMg2Bsybd2W7di75xTIQNd+IW5Plqe7GM2bBD/iXdr48jAKEVKYCXdAl2rOZhqRhYzo29tlvveVmCD6lAKC9nmOoNj0QK66/w661yeFnCFgFUYoD42bYiAjUrNhZYrQPHZieXrcgQV8Aqfe8N0y1q8fgxg7fEd/ncVigSZhu2a5pAiu6TqwPa9cAIJxqGKAfDRdq/UzoSIc52ctdq6jyTpU6FKQageiwCGLCnnnuWd28gzddwXw9852B6p+aELBFhdriuv2ve7eXVzKsXdVDA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=waqCQJYQpwRKwKZIL9b/VKrKkOu4lwBg+iCU1akNzjg=;
 b=oYzQEFjLYksgKWJd+UorPFNdZg7T0OHHgi5aa3/qeED2A5HPLcVlO5tBDN8gspmWigJ3cm2XwteQgAF0/VwfMrnJ/VQYHO1wi98Q7sUXI81hsB/IGSENv0RioWPCf7wHZUhbCVx9uHR3QBgOP5CdfChsSHqo4T+LgvQz9dBxOCwtu2NEV+/NT5iCS5dWh3AwwChVapk+XLDArRuEk3z1v/f+26g7OZFSxEaplVZLPRfV2iy+/jx0SkzXoqk7NxRkcT+TpvF2I/zgmkkwOL5mxGukTbwZ/tAcikKoqkT4tDntWGRaUs7Dwp+pk/WOY9nwHjpWlpHQjx0eLj4SabaSJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=waqCQJYQpwRKwKZIL9b/VKrKkOu4lwBg+iCU1akNzjg=;
 b=qBH5Jmh0SW1RvRPcbcu9JbI5+BSlpja7zNTjGKDBtuHnkUp9fEEuQrj3UzAR3u8nD0CEBkfss0QtrapPOibsGO1dS8jll+ZZVpUuZi5ZTUD+5OaqaagO/ZDPWhFXy6QCspTsWvXfNU3K2Mh0qfJswIJWrsaKBwuW/O+9ljpJP5Q=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: [PATCH v2] livepatch-tools: remove usage of error.h
Date: Thu,  6 Apr 2023 13:41:06 +0200
Message-Id: <20230406114106.54735-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: BN0PR08CA0001.namprd08.prod.outlook.com
 (2603:10b6:408:142::6) To SJ0PR03MB6360.namprd03.prod.outlook.com
 (2603:10b6:a03:395::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6360:EE_|SJ0PR03MB6949:EE_
X-MS-Office365-Filtering-Correlation-Id: bd345b09-697c-4b94-987a-08db3693d501
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bd345b09-697c-4b94-987a-08db3693d501
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6360.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 11:41:16.8040
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2OHCIy5rjj/eTxstseBPzwbouXinUgoFwC9dxeabwWXUYcr2PFFObgQdxG4Z8VVSoD1N6nJsBcYBSb/KF+aeIA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6949

error.h is a GNU libc-specific header, so including it prevents building
against other C libraries such as musl.  Use errx() from the portable
err.h in the ERROR() and DIFF_FATAL() macros instead.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>
---
Changes since v1:
 - Use errx().
---
 common.h             | 9 ++++++---
 create-diff-object.c | 1 -
 lookup.c             | 7 +++++--
 prelink.c            | 1 -
 4 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/common.h b/common.h
index 9a9da79..bbaa950 100644
--- a/common.h
+++ b/common.h
@@ -1,18 +1,21 @@
 #ifndef _COMMON_H_
 #define _COMMON_H_
 
-#include <error.h>
+#include <err.h>
 
 extern char *childobj;
 
 #define ERROR(format, ...) \
-	error(1, 0, "ERROR: %s: %s: %d: " format, childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__)
+({ \
+	fflush(stdout); \
+	errx(1, "ERROR: %s: %s: %d: " format "\n", childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__); \
+})
 
 #define DIFF_FATAL(format, ...) \
 ({ \
 	fflush(stdout); \
 	fprintf(stderr, "ERROR: %s: " format "\n", childobj, ##__VA_ARGS__); \
-	error(2, 0, "unreconcilable difference"); \
+	errx(2, "unreconcilable difference"); \
 })
 
 #define log_debug(format, ...) log(DEBUG, format, ##__VA_ARGS__)
diff --git a/create-diff-object.c b/create-diff-object.c
index 780e6c8..d8a0032 100644
--- a/create-diff-object.c
+++ b/create-diff-object.c
@@ -45,7 +45,6 @@
 #include <string.h>
 #include <libgen.h>
 #include <argp.h>
-#include <error.h>
 #include <unistd.h>
 #include <time.h>
 #include <gelf.h>
diff --git a/lookup.c b/lookup.c
index 39125c6..9633ea2 100644
--- a/lookup.c
+++ b/lookup.c
@@ -28,14 +28,17 @@
 #include <stdlib.h>
 #include <stdio.h>
 #include <string.h>
-#include <error.h>
+#include <err.h>
 #include <gelf.h>
 #include <unistd.h>
 
 #include "lookup.h"
 
 #define ERROR(format, ...) \
-	error(1, 0, "%s: %d: " format, __FUNCTION__, __LINE__, ##__VA_ARGS__)
+({ \
+	fflush(stdout); \
+	errx(1, "%s: %d: " format, __FUNCTION__, __LINE__, ##__VA_ARGS__); \
+})
 
 struct symbol {
 	unsigned long value;
diff --git a/prelink.c b/prelink.c
index 2039e5b..18c5159 100644
--- a/prelink.c
+++ b/prelink.c
@@ -27,7 +27,6 @@
 #include <string.h>
 #include <libgen.h>
 #include <argp.h>
-#include <error.h>
 #include <unistd.h>
 #include <gelf.h>
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 11:55:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 11:55:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518873.805838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkODF-0008Im-HE; Thu, 06 Apr 2023 11:55:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518873.805838; Thu, 06 Apr 2023 11:55:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkODF-0008If-E4; Thu, 06 Apr 2023 11:55:33 +0000
Received: by outflank-mailman (input) for mailman id 518873;
 Thu, 06 Apr 2023 11:55:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkODD-0008IV-SW; Thu, 06 Apr 2023 11:55:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkODD-0000it-Pv; Thu, 06 Apr 2023 11:55:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkODD-0006QK-As; Thu, 06 Apr 2023 11:55:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkODD-0000AV-AL; Thu, 06 Apr 2023 11:55:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UqB+yz6/8FouBPCs62FcLaZy51fC8d0roc/O0QwAfrA=; b=J6kMLmu6CcVIBCzfzMp6udlC1a
	4H2165MNnbc4XNRWY7EXg50oYitTJlFnF2/3F7GvetPksS5KA6GVktvT3E/D2WQCpNY/TNPGnxLfV
	VZE3HvT19zHKP8QKTw1h9tG7fgozpY5FZATa/QivBsWq3tAZiGgcnme+oMjvSMnHjMD4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180159-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180159: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=99ddf2254febae9eab7fb0bcc02c5322243f5c49
X-Osstest-Versions-That:
    linux=76f598ba7d8e2bfb4855b5298caedd5af0c374a8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Apr 2023 11:55:31 +0000

flight 180159 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180159/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180145
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180145
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180145
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180145
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180145
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                99ddf2254febae9eab7fb0bcc02c5322243f5c49
baseline version:
 linux                76f598ba7d8e2bfb4855b5298caedd5af0c374a8

Last test of basis   180145  2023-04-05 00:11:48 Z    1 days
Testing same since   180159  2023-04-05 18:14:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel Bristot de Oliveira <bristot@kernel.org>
  John Keeping <john@metanate.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Mirsad Todorovac <mirsad.todorovac@alu.unizg.hr>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Tze-nan Wu <Tze-nan.Wu@mediatek.com>
  Zheng Yejian <zhengyejian1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
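The hints above come from git's advice.ignoredHook setting: a hook file
that exists but lacks the execute bit is skipped with a warning.  A small
sketch of the two remedies in a throwaway repository (paths and hook name
are illustrative):

```shell
set -e
# Create a scratch repository with a hook that is present but not executable.
tmp=$(mktemp -d)
git init --quiet "$tmp"
printf '#!/bin/sh\nexit 0\n' > "$tmp/.git/hooks/post-commit"

# Fix 1: mark the hook executable so git runs it instead of warning.
chmod +x "$tmp/.git/hooks/post-commit"

# Fix 2: or silence the advice message for this repository.
git -C "$tmp" config advice.ignoredHook false
echo "advice.ignoredHook: $(git -C "$tmp" config advice.ignoredHook)"
```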
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   76f598ba7d8e..99ddf2254feb  99ddf2254febae9eab7fb0bcc02c5322243f5c49 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 12:03:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 12:03:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518880.805848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkOLC-0001Pp-HE; Thu, 06 Apr 2023 12:03:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518880.805848; Thu, 06 Apr 2023 12:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkOLC-0001Pi-EZ; Thu, 06 Apr 2023 12:03:46 +0000
Received: by outflank-mailman (input) for mailman id 518880;
 Thu, 06 Apr 2023 12:03:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w3yY=75=citrix.com=prvs=453e3c94d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkOLA-0001Pc-87
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 12:03:44 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 110cf3b7-d473-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 14:03:41 +0200 (CEST)
Received: from mail-dm6nam10lp2103.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.103])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Apr 2023 08:03:30 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6987.namprd03.prod.outlook.com (2603:10b6:a03:43b::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.28; Thu, 6 Apr
 2023 12:03:23 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Thu, 6 Apr 2023
 12:03:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 110cf3b7-d473-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680782621;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=eU7mxl30llzqrhO3LN68GckiTQC15SCNsmT0mt0lHKk=;
  b=RsmdA1coAMUTfAEsi4DV7EJkcr/XSdVJUcVWzGtjVv5KGdXJ45HhCNQJ
   zFdG7dpn8q9uKX9RS/QRsBziWSYiKKCCEqizNlYFsi/OCdSVgki7cUNmh
   WQ121H2bHvFYOAxnLHLt/t3s+2Kp0DeXaZKc1d5Lh6wXedW1AbaqjWD7k
   U=;
X-IronPort-RemoteIP: 104.47.58.103
X-IronPort-MID: 103916226
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,323,1673931600"; 
   d="scan'208";a="103916226"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aTrJRnWWTdBIZe9cID1HrVYEwuVs1kjWJsCkk/VDHpCXDubzlEAetgOxHYrZjxU9k53JekxQdjocgkyhUel+V7GPbAcgORyP91wpp5gRZYFLLR0tX55DEPyN/IYKprrxlwdgAkeX4Ft86p8gfhgqrriMn6F+QhzCUmXpl1xUslKFxg2QLxCj5ienJgDwWiShm8EwdfZDHNJZIjMlfvBBYxdfA4U0x+BWfQ7PwElqDvcqcQZNkbkBkqmnOOvxBllMp7/yVQGSY+ElBIOMuWz2UKjKmT15rk6L8quA+Kg3TDMjAed8ktxlzuhfEKZ5iFHWsmzWk8ABP9IOhd1FdaIuaA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6i38wPZdc2HHQZDfVFyx6muok1UOMPpytAyqDoQ5UiU=;
 b=huPk0GdIFidvY78QhSIK1PJBqR9UbA5G4c0VhKON1juM459RyoZOk/O/ZO3Md5jMY4ojfQRATRs48l09O8MCP891JJlQGjGfPTE4EmhiUGFPIC27Z13yGTQY0hRyyhlQcAbBKtJxPkXLpMloZ6HYo/qCydL3MR1YOC5ilLH02V9qBlADXmCuy7x725k4i+6N4e4N33B99StETO9AbcUNyM/706TGHvn4nmOQ5FveRrE6QAfdXQWUQWKuoKQGg+J978y0IRld7HpjnrL8ALtvaTgmN+O7esaCrHB+SAOAZVs3s/oh9K5Sie4/vzkn0EZYmIIuQ/O4y8CrtDZd2NZifg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6i38wPZdc2HHQZDfVFyx6muok1UOMPpytAyqDoQ5UiU=;
 b=RrCF3KP2eeOX71SIYDrm34Us1n/6qH7VKSBOUyHb5n3QIzWYh+NGEGwoLV78Q177IyVtrd+aAYO/GM7cr+LcGQJDJHrm/A2/9F7a6fjdBrivQtNe/Weq1k49erPrQWQmZE7OcTpNITi5rFLG7QP+ZFXR7lxMHpsUDaa8JlUchG4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <df05f3fe-41e9-5ddc-2a9a-00882d4e54f7@citrix.com>
Date: Thu, 6 Apr 2023 13:03:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH v2] livepatch-tools: remove usage of error.h
Content-Language: en-GB
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>
References: <20230406114106.54735-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230406114106.54735-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0410.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a0::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6987:EE_
X-MS-Office365-Filtering-Correlation-Id: ad3f8de8-525c-45d0-37bb-08db3696eb15
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ad3f8de8-525c-45d0-37bb-08db3696eb15
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 12:03:22.6327
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XXfAgakRayJAD1BS3vL5Obk7GYlbbIHKFzZ1wjfg4KzSGOSLu4IZW5DxyamHlgn09yzQzvVs3m2uQo8JDl0cJ1x6oOtuTPM2K3tyYUNbdzQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6987

On 06/04/2023 12:41 pm, Roger Pau Monne wrote:
> It's a GNU libc specific header which prevents building on musl for
> example.  Instead use errx() in ERROR() and DIFF_FATAL() macros.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 12:18:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 12:18:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518884.805859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkOZ2-0002x1-Oz; Thu, 06 Apr 2023 12:18:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518884.805859; Thu, 06 Apr 2023 12:18:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkOZ2-0002wu-Jf; Thu, 06 Apr 2023 12:18:04 +0000
Received: by outflank-mailman (input) for mailman id 518884;
 Thu, 06 Apr 2023 12:18:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w3yY=75=citrix.com=prvs=453e3c94d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkOZ0-0002wo-L0
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 12:18:03 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 10d84c89-d475-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 14:17:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10d84c89-d475-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680783479;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=kLcrD6Qx/8oKyhJYXTcLhHRibh3zjUkt2HxDgLq/hoo=;
  b=fpbF+zfMqGb3S/LWSVfuA3cx2OxWT0k40ByJnRa52Jfk8vT8ZY3ENxKv
   oB3fetyL1J2hfz/2gOY/fm7KgJ8mtqI50BiQgUsIvv9GyisokmDFvwPM/
   KMJsCZCr2WEBVQ2esH+VJ3TpGPJzqXhK9NdT1DjRGcsvAY1hEQF+mvkPq
   4=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 103918414
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,323,1673931600"; 
   d="scan'208";a="103918414"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/emul: Use existing X86_EXC_* constants
Date: Thu, 6 Apr 2023 13:17:53 +0100
Message-ID: <20230406121753.2205968-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

... rather than having separate definitions locally.  EXC_HAS_EC in particular
is missing #CP, #VC and #SX when compared with X86_EXC_HAVE_EC.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/x86_emulate/x86_emulate.c | 608 ++++++++++++-------------
 1 file changed, 294 insertions(+), 314 deletions(-)

diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index c69f7c65f526..8aa23b929c07 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -135,28 +135,6 @@ static const uint8_t sse_prefix[] = { 0x66, 0xf3, 0xf2 };
 /* MXCSR bit definitions. */
 #define MXCSR_MM  (1U << 17)
 
-/* Exception definitions. */
-#define EXC_DE  0
-#define EXC_DB  1
-#define EXC_BP  3
-#define EXC_OF  4
-#define EXC_BR  5
-#define EXC_UD  6
-#define EXC_NM  7
-#define EXC_DF  8
-#define EXC_TS 10
-#define EXC_NP 11
-#define EXC_SS 12
-#define EXC_GP 13
-#define EXC_PF 14
-#define EXC_MF 16
-#define EXC_AC 17
-#define EXC_XM 19
-
-#define EXC_HAS_EC                                                      \
-    ((1u << EXC_DF) | (1u << EXC_TS) | (1u << EXC_NP) |                 \
-     (1u << EXC_SS) | (1u << EXC_GP) | (1u << EXC_PF) | (1u << EXC_AC))
-
 /* Segment selector error code bits. */
 #define ECODE_EXT (1 << 0)
 #define ECODE_IDT (1 << 1)
@@ -360,11 +338,11 @@ do {                                                                    \
 #define validate_far_branch(cs, ip) ({                                  \
     if ( sizeof(ip) <= 4 ) {                                            \
         ASSERT(!ctxt->lma);                                             \
-        generate_exception_if((ip) > (cs)->limit, EXC_GP, 0);           \
+        generate_exception_if((ip) > (cs)->limit, X86_EXC_GP, 0);       \
     } else                                                              \
         generate_exception_if(ctxt->lma && (cs)->l                      \
                               ? !is_canonical_address(ip)               \
-                              : (ip) > (cs)->limit, EXC_GP, 0);         \
+                              : (ip) > (cs)->limit, X86_EXC_GP, 0);     \
 })
 
 #define commit_far_branch(cs, newip) ({                                 \
@@ -431,7 +409,7 @@ int x86emul_get_fpu(
                 return rc;
             generate_exception_if(!(cr4 & ((type == X86EMUL_FPU_xmm)
                                            ? X86_CR4_OSFXSR : X86_CR4_OSXSAVE)),
-                                  EXC_UD);
+                                  X86_EXC_UD);
         }
 
         rc = ops->read_cr(0, &cr0, ctxt);
@@ -444,13 +422,13 @@ int x86emul_get_fpu(
         }
         if ( cr0 & X86_CR0_EM )
         {
-            generate_exception_if(type == X86EMUL_FPU_fpu, EXC_NM);
-            generate_exception_if(type == X86EMUL_FPU_mmx, EXC_UD);
-            generate_exception_if(type == X86EMUL_FPU_xmm, EXC_UD);
+            generate_exception_if(type == X86EMUL_FPU_fpu, X86_EXC_NM);
+            generate_exception_if(type == X86EMUL_FPU_mmx, X86_EXC_UD);
+            generate_exception_if(type == X86EMUL_FPU_xmm, X86_EXC_UD);
         }
         generate_exception_if((cr0 & X86_CR0_TS) &&
                               (type != X86EMUL_FPU_wait || (cr0 & X86_CR0_MP)),
-                              EXC_NM);
+                              X86_EXC_NM);
     }
 
  done:
@@ -776,7 +754,7 @@ static int ioport_access_check(
         return rc == X86EMUL_DONE ? X86EMUL_OKAY : rc;
 
     /* Ensure the TSS has an io-bitmap-offset field. */
-    generate_exception_if(tr.type != 0xb, EXC_GP, 0);
+    generate_exception_if(tr.type != 0xb, X86_EXC_GP, 0);
 
     switch ( rc = read_ulong(x86_seg_tr, 0x66, &iobmp, 2, ctxt, ops) )
     {
@@ -784,7 +762,7 @@ static int ioport_access_check(
         break;
 
     case X86EMUL_EXCEPTION:
-        generate_exception_if(!ctxt->event_pending, EXC_GP, 0);
+        generate_exception_if(!ctxt->event_pending, X86_EXC_GP, 0);
         /* fallthrough */
 
     default:
@@ -799,7 +777,7 @@ static int ioport_access_check(
         break;
 
     case X86EMUL_EXCEPTION:
-        generate_exception_if(!ctxt->event_pending, EXC_GP, 0);
+        generate_exception_if(!ctxt->event_pending, X86_EXC_GP, 0);
         /* fallthrough */
 
     default:
@@ -807,7 +785,7 @@ static int ioport_access_check(
     }
 
     generate_exception_if(iobmp & (((1 << bytes) - 1) << (first_port & 7)),
-                          EXC_GP, 0);
+                          X86_EXC_GP, 0);
 
  done:
     return rc;
@@ -854,7 +832,7 @@ protmode_load_seg(
     uint8_t dpl, rpl;
     int cpl = x86emul_get_cpl(ctxt, ops);
     uint32_t a_flag = 0x100;
-    int rc, fault_type = EXC_GP;
+    int rc, fault_type = X86_EXC_GP;
 
     if ( cpl < 0 )
         return X86EMUL_UNHANDLEABLE;
@@ -982,7 +960,7 @@ protmode_load_seg(
     /* Segment present in memory? */
     if ( !(desc.b & (1 << 15)) && seg != x86_seg_none )
     {
-        fault_type = seg != x86_seg_ss ? EXC_NP : EXC_SS;
+        fault_type = seg != x86_seg_ss ? X86_EXC_NP : X86_EXC_SS;
         goto raise_exn;
     }
 
@@ -1163,7 +1141,7 @@ static unsigned long *decode_vex_gpr(
     switch ( evex.lr ) \
     { \
     default: \
-        generate_exception(EXC_UD); \
+        generate_exception(X86_EXC_UD); \
     case 2: \
         break; \
     case 0: case 1: \
@@ -1273,7 +1251,7 @@ x86_emulate(
     generate_exception_if((mode_vif() &&
                            (_regs.eflags & X86_EFLAGS_VIF) &&
                            (_regs.eflags & X86_EFLAGS_VIP)),
-                          EXC_GP, 0);
+                          X86_EXC_GP, 0);
 
     rc = x86emul_decode(&state, ctxt, ops);
     if ( rc != X86EMUL_OKAY )
@@ -1302,7 +1280,7 @@ x86_emulate(
 #define state (&state)
     elem_bytes = 4 << evex.w;
 
-    generate_exception_if(state->not_64bit && mode_64bit(), EXC_UD);
+    generate_exception_if(state->not_64bit && mode_64bit(), X86_EXC_UD);
 
     if ( ea.type == OP_REG )
         ea.reg = _decode_gpr(&_regs, modrm_rm, (d & ByteOp) && !rex_prefix && !vex.opcx);
@@ -1420,12 +1398,12 @@ x86_emulate(
         generate_exception_if(lock_prefix &&
                               (vex.opcx || ext != ext_0f || b != 0xc7 ||
                                (modrm_reg & 7) != 1 || ea.type != OP_MEM),
-                              EXC_UD);
+                              X86_EXC_UD);
         dst.type = OP_NONE;
         break;
 
     case DstReg:
-        generate_exception_if(lock_prefix, EXC_UD);
+        generate_exception_if(lock_prefix, X86_EXC_UD);
         dst.type = OP_REG;
         if ( d & ByteOp )
         {
@@ -1477,17 +1455,17 @@ x86_emulate(
         d = (d & ~DstMask) | DstMem;
         /* Becomes a normal DstMem operation from here on. */
     case DstMem:
-        generate_exception_if(ea.type == OP_MEM && evex.z, EXC_UD);
+        generate_exception_if(ea.type == OP_MEM && evex.z, X86_EXC_UD);
         if ( state->simd_size )
         {
-            generate_exception_if(lock_prefix, EXC_UD);
+            generate_exception_if(lock_prefix, X86_EXC_UD);
             break;
         }
         ea.bytes = (d & ByteOp) ? 1 : op_bytes;
         dst = ea;
         if ( dst.type == OP_REG )
         {
-            generate_exception_if(lock_prefix, EXC_UD);
+            generate_exception_if(lock_prefix, X86_EXC_UD);
             switch ( dst.bytes )
             {
             case 1: dst.val = *(uint8_t  *)dst.reg; break;
@@ -1499,7 +1477,7 @@ x86_emulate(
         else if ( d & Mov ) /* optimisation - avoid slow emulated read */
         {
             /* Lock prefix is allowed only on RMW instructions. */
-            generate_exception_if(lock_prefix, EXC_UD);
+            generate_exception_if(lock_prefix, X86_EXC_UD);
             fail_if(!ops->write);
         }
         else if ( !ops->rmw )
@@ -1730,14 +1708,14 @@ x86_emulate(
     case 0x62: /* bound */ {
         int lb, ub, idx;
 
-        generate_exception_if(src.type != OP_MEM, EXC_UD);
+        generate_exception_if(src.type != OP_MEM, X86_EXC_UD);
         if ( (rc = read_ulong(src.mem.seg, truncate_ea(src.mem.off + op_bytes),
                               &ea.val, op_bytes, ctxt, ops)) )
             goto done;
         ub  = (op_bytes == 2) ? (int16_t)ea.val   : (int32_t)ea.val;
         lb  = (op_bytes == 2) ? (int16_t)src.val  : (int32_t)src.val;
         idx = (op_bytes == 2) ? (int16_t)dst.val  : (int32_t)dst.val;
-        generate_exception_if((idx < lb) || (idx > ub), EXC_BR);
+        generate_exception_if((idx < lb) || (idx > ub), X86_EXC_BR);
         dst.type = OP_NONE;
         break;
     }
@@ -1760,7 +1738,7 @@ x86_emulate(
             /* arpl */
             unsigned int src_rpl = dst.val & 3;
 
-            generate_exception_if(!in_protmode(ctxt, ops), EXC_UD);
+            generate_exception_if(!in_protmode(ctxt, ops), X86_EXC_UD);
 
             dst = ea;
             dst.bytes = 2;
@@ -1957,7 +1935,7 @@ x86_emulate(
             dst.type = OP_NONE;
             break;
         }
-        generate_exception_if((modrm_reg & 7) != 0, EXC_UD);
+        generate_exception_if((modrm_reg & 7) != 0, X86_EXC_UD);
     case 0x88 ... 0x8b: /* mov */
     case 0xa0 ... 0xa1: /* mov mem.offs,{%al,%ax,%eax,%rax} */
     case 0xa2 ... 0xa3: /* mov {%al,%ax,%eax,%rax},mem.offs */
@@ -1966,7 +1944,7 @@ x86_emulate(
 
     case 0x8c: /* mov Sreg,r/m */
         seg = modrm_reg & 7; /* REX.R is ignored. */
-        generate_exception_if(!is_x86_user_segment(seg), EXC_UD);
+        generate_exception_if(!is_x86_user_segment(seg), X86_EXC_UD);
     store_selector:
         fail_if(ops->read_segment == NULL);
         if ( (rc = ops->read_segment(seg, &sreg, ctxt)) != 0 )
@@ -1977,14 +1955,14 @@ x86_emulate(
         break;
 
     case 0x8d: /* lea */
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
         dst.val = ea.mem.off;
         break;
 
     case 0x8e: /* mov r/m,Sreg */
         seg = modrm_reg & 7; /* REX.R is ignored. */
         generate_exception_if(!is_x86_user_segment(seg) ||
-                              seg == x86_seg_cs, EXC_UD);
+                              seg == x86_seg_cs, X86_EXC_UD);
         if ( (rc = load_seg(seg, src.val, 0, NULL, ctxt, ops)) != 0 )
             goto done;
         if ( seg == x86_seg_ss )
@@ -1993,7 +1971,7 @@ x86_emulate(
         break;
 
     case 0x8f: /* pop (sole member of Grp1a) */
-        generate_exception_if((modrm_reg & 7) != 0, EXC_UD);
+        generate_exception_if((modrm_reg & 7) != 0, X86_EXC_UD);
         /* 64-bit mode: POP defaults to a 64-bit operand. */
         if ( mode_64bit() && (dst.bytes == 4) )
             dst.bytes = 8;
@@ -2074,7 +2052,7 @@ x86_emulate(
                 if ( rc != X86EMUL_OKAY )
                     goto done;
             }
-            generate_exception_if(!(cr4 & X86_CR4_VME), EXC_GP, 0);
+            generate_exception_if(!(cr4 & X86_CR4_VME), X86_EXC_GP, 0);
             src.val = (_regs.flags & ~X86_EFLAGS_IF) | X86_EFLAGS_IOPL;
             if ( _regs.eflags & X86_EFLAGS_VIF )
                 src.val |= X86_EFLAGS_IF;
@@ -2104,7 +2082,7 @@ x86_emulate(
                 /* All IOPL != 3 POPFs fail, except in vm86 mode. */
                 generate_exception_if(!(cr4 & X86_CR4_VME) &&
                                       MASK_EXTR(_regs.eflags, X86_EFLAGS_IOPL) != 3,
-                                      EXC_GP, 0);
+                                      X86_EXC_GP, 0);
             }
             /*
              * IOPL cannot be modified outside of CPL 0.  IF cannot be
@@ -2128,11 +2106,11 @@ x86_emulate(
             if ( (cr4 & X86_CR4_VME) &&
                  MASK_EXTR(_regs.eflags, X86_EFLAGS_IOPL) != 3 )
             {
-                generate_exception_if(dst.val & X86_EFLAGS_TF, EXC_GP, 0);
+                generate_exception_if(dst.val & X86_EFLAGS_TF, X86_EXC_GP, 0);
                 if ( dst.val & X86_EFLAGS_IF )
                 {
                     generate_exception_if(_regs.eflags & X86_EFLAGS_VIP,
-                                          EXC_GP, 0);
+                                          X86_EXC_GP, 0);
                     dst.val |= X86_EFLAGS_VIF;
                 }
                 else
@@ -2267,7 +2245,7 @@ x86_emulate(
         break;
 
     case 0xc0 ... 0xc1: grp2: /* Grp2 */
-        generate_exception_if(lock_prefix, EXC_UD);
+        generate_exception_if(lock_prefix, X86_EXC_UD);
 
         switch ( modrm_reg & 7 )
         {
@@ -2307,7 +2285,7 @@ x86_emulate(
     case 0xc5: /* lds */
         seg = (b & 1) * 3; /* es = 0, ds = 3 */
     les:
-        generate_exception_if(src.type != OP_MEM, EXC_UD);
+        generate_exception_if(src.type != OP_MEM, X86_EXC_UD);
         if ( (rc = read_ulong(src.mem.seg, truncate_ea(src.mem.off + src.bytes),
                               &dst.val, 2, ctxt, ops)) != X86EMUL_OKAY )
             goto done;
@@ -2386,7 +2364,7 @@ x86_emulate(
         switch ( ctxt->opcode )
         {
         case 0xcc: /* int3 */
-            ctxt->event.vector = EXC_BP;
+            ctxt->event.vector = X86_EXC_BP;
             ctxt->event.type = X86_EVENTTYPE_SW_EXCEPTION;
             break;
         case 0xcd: /* int imm8 */
@@ -2394,11 +2372,11 @@ x86_emulate(
             ctxt->event.type = X86_EVENTTYPE_SW_INTERRUPT;
             break;
         case 0xce: /* into */
-            ctxt->event.vector = EXC_OF;
+            ctxt->event.vector = X86_EXC_OF;
             ctxt->event.type = X86_EVENTTYPE_SW_EXCEPTION;
             break;
         case 0xf1: /* icebp */
-            ctxt->event.vector = EXC_DB;
+            ctxt->event.vector = X86_EXC_DB;
             ctxt->event.type = X86_EVENTTYPE_PRI_SW_EXCEPTION;
             break;
         }
@@ -2447,7 +2425,7 @@ x86_emulate(
             _regs.ax = (uint8_t)(_regs.al + (_regs.ah * n));
         else
         {
-            generate_exception_if(!n, EXC_DE);
+            generate_exception_if(!n, X86_EXC_DE);
             _regs.al = _regs.al % n;
             _regs.ah = _regs.al / n;
         }
@@ -2551,7 +2529,7 @@ x86_emulate(
         break;
 
     case 0xf4: /* hlt */
-        generate_exception_if(!mode_ring0(), EXC_GP, 0);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
         ctxt->retire.hlt = true;
         break;
 
@@ -2670,7 +2648,7 @@ x86_emulate(
                 v    = (uint8_t)src.val;
                 generate_exception_if(
                     div_dbl(u, v) || ((uint8_t)u[0] != (uint16_t)u[0]),
-                    EXC_DE);
+                    X86_EXC_DE);
                 dst.val = (uint8_t)u[0];
                 _regs.ah = u[1];
                 break;
@@ -2680,7 +2658,7 @@ x86_emulate(
                 v    = (uint16_t)src.val;
                 generate_exception_if(
                     div_dbl(u, v) || ((uint16_t)u[0] != (uint32_t)u[0]),
-                    EXC_DE);
+                    X86_EXC_DE);
                 dst.val = (uint16_t)u[0];
                 _regs.dx = u[1];
                 break;
@@ -2691,7 +2669,7 @@ x86_emulate(
                 v    = (uint32_t)src.val;
                 generate_exception_if(
                     div_dbl(u, v) || ((uint32_t)u[0] != u[0]),
-                    EXC_DE);
+                    X86_EXC_DE);
                 dst.val   = (uint32_t)u[0];
                 _regs.rdx = (uint32_t)u[1];
                 break;
@@ -2700,7 +2678,7 @@ x86_emulate(
                 u[0] = _regs.r(ax);
                 u[1] = _regs.r(dx);
                 v    = src.val;
-                generate_exception_if(div_dbl(u, v), EXC_DE);
+                generate_exception_if(div_dbl(u, v), X86_EXC_DE);
                 dst.val     = u[0];
                 _regs.r(dx) = u[1];
                 break;
@@ -2715,7 +2693,7 @@ x86_emulate(
                 v    = (int8_t)src.val;
                 generate_exception_if(
                     idiv_dbl(u, v) || ((int8_t)u[0] != (int16_t)u[0]),
-                    EXC_DE);
+                    X86_EXC_DE);
                 dst.val = (int8_t)u[0];
                 _regs.ah = u[1];
                 break;
@@ -2725,7 +2703,7 @@ x86_emulate(
                 v    = (int16_t)src.val;
                 generate_exception_if(
                     idiv_dbl(u, v) || ((int16_t)u[0] != (int32_t)u[0]),
-                    EXC_DE);
+                    X86_EXC_DE);
                 dst.val = (int16_t)u[0];
                 _regs.dx = u[1];
                 break;
@@ -2736,7 +2714,7 @@ x86_emulate(
                 v    = (int32_t)src.val;
                 generate_exception_if(
                     idiv_dbl(u, v) || ((int32_t)u[0] != u[0]),
-                    EXC_DE);
+                    X86_EXC_DE);
                 dst.val   = (int32_t)u[0];
                 _regs.rdx = (uint32_t)u[1];
                 break;
@@ -2745,7 +2723,7 @@ x86_emulate(
                 u[0] = _regs.r(ax);
                 u[1] = _regs.r(dx);
                 v    = src.val;
-                generate_exception_if(idiv_dbl(u, v), EXC_DE);
+                generate_exception_if(idiv_dbl(u, v), X86_EXC_DE);
                 dst.val     = u[0];
                 _regs.r(dx) = u[1];
                 break;
@@ -2767,7 +2745,7 @@ x86_emulate(
             _regs.eflags &= ~X86_EFLAGS_IF;
         else
         {
-            generate_exception_if(!mode_vif(), EXC_GP, 0);
+            generate_exception_if(!mode_vif(), X86_EXC_GP, 0);
             _regs.eflags &= ~X86_EFLAGS_VIF;
         }
         break;
@@ -2783,7 +2761,7 @@ x86_emulate(
         {
             generate_exception_if((_regs.eflags & X86_EFLAGS_VIP) ||
                                   !mode_vif(),
-                                  EXC_GP, 0);
+                                  X86_EXC_GP, 0);
             if ( !(_regs.eflags & X86_EFLAGS_VIF) )
                 ctxt->retire.sti = true;
             _regs.eflags |= X86_EFLAGS_VIF;
@@ -2799,7 +2777,7 @@ x86_emulate(
         break;
 
     case 0xfe: /* Grp4 */
-        generate_exception_if((modrm_reg & 7) >= 2, EXC_UD);
+        generate_exception_if((modrm_reg & 7) >= 2, X86_EXC_UD);
         /* Fallthrough. */
     case 0xff: /* Grp5 */
         switch ( modrm_reg & 7 )
@@ -2833,7 +2811,7 @@ x86_emulate(
             break;
         case 3: /* call (far, absolute indirect) */
         case 5: /* jmp (far, absolute indirect) */
-            generate_exception_if(src.type != OP_MEM, EXC_UD);
+            generate_exception_if(src.type != OP_MEM, X86_EXC_UD);
 
             if ( (rc = read_ulong(src.mem.seg,
                                   truncate_ea(src.mem.off + op_bytes),
@@ -2846,20 +2824,20 @@ x86_emulate(
         case 6: /* push */
             goto push;
         case 7:
-            generate_exception(EXC_UD);
+            generate_exception(X86_EXC_UD);
         }
         break;
 
     case X86EMUL_OPC(0x0f, 0x00): /* Grp6 */
         seg = (modrm_reg & 1) ? x86_seg_tr : x86_seg_ldtr;
-        generate_exception_if(!in_protmode(ctxt, ops), EXC_UD);
+        generate_exception_if(!in_protmode(ctxt, ops), X86_EXC_UD);
         switch ( modrm_reg & 6 )
         {
         case 0: /* sldt / str */
-            generate_exception_if(umip_active(ctxt, ops), EXC_GP, 0);
+            generate_exception_if(umip_active(ctxt, ops), X86_EXC_GP, 0);
             goto store_selector;
         case 2: /* lldt / ltr */
-            generate_exception_if(!mode_ring0(), EXC_GP, 0);
+            generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
             if ( (rc = load_seg(seg, src.val, 0, NULL, ctxt, ops)) != 0 )
                 goto done;
             break;
@@ -2877,7 +2855,7 @@ x86_emulate(
             case X86EMUL_EXCEPTION:
                 if ( ctxt->event_pending )
                 {
-                    ASSERT(ctxt->event.vector == EXC_PF);
+                    ASSERT(ctxt->event.vector == X86_EXC_PF);
             default:
                     goto done;
                 }
@@ -2887,7 +2865,7 @@ x86_emulate(
             }
             break;
         default:
-            generate_exception_if(true, EXC_UD);
+            generate_exception_if(true, X86_EXC_UD);
             break;
         }
         break;
@@ -2897,7 +2875,7 @@ x86_emulate(
         goto dispatch_from_helper;
 
     case X86EMUL_OPC(0x0f, 0x02): /* lar */
-        generate_exception_if(!in_protmode(ctxt, ops), EXC_UD);
+        generate_exception_if(!in_protmode(ctxt, ops), X86_EXC_UD);
         _regs.eflags &= ~X86_EFLAGS_ZF;
         switch ( rc = protmode_load_seg(x86_seg_none, src.val, false, &sreg,
                                         ctxt, ops) )
@@ -2928,7 +2906,7 @@ x86_emulate(
         case X86EMUL_EXCEPTION:
             if ( ctxt->event_pending )
             {
-                ASSERT(ctxt->event.vector == EXC_PF);
+                ASSERT(ctxt->event.vector == X86_EXC_PF);
         default:
                 goto done;
             }
@@ -2945,7 +2923,7 @@ x86_emulate(
         break;
 
     case X86EMUL_OPC(0x0f, 0x03): /* lsl */
-        generate_exception_if(!in_protmode(ctxt, ops), EXC_UD);
+        generate_exception_if(!in_protmode(ctxt, ops), X86_EXC_UD);
         _regs.eflags &= ~X86_EFLAGS_ZF;
         switch ( rc = protmode_load_seg(x86_seg_none, src.val, false, &sreg,
                                         ctxt, ops) )
@@ -2973,7 +2951,7 @@ x86_emulate(
         case X86EMUL_EXCEPTION:
             if ( ctxt->event_pending )
             {
-                ASSERT(ctxt->event.vector == EXC_PF);
+                ASSERT(ctxt->event.vector == X86_EXC_PF);
         default:
                 goto done;
             }
@@ -2996,8 +2974,8 @@ x86_emulate(
         fail_if(ops->read_msr == NULL);
         if ( (rc = ops->read_msr(MSR_EFER, &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
-        generate_exception_if((msr_val & EFER_SCE) == 0, EXC_UD);
-        generate_exception_if(!amd_like(ctxt) && !mode_64bit(), EXC_UD);
+        generate_exception_if((msr_val & EFER_SCE) == 0, X86_EXC_UD);
+        generate_exception_if(!amd_like(ctxt) && !mode_64bit(), X86_EXC_UD);
 
         if ( (rc = ops->read_msr(MSR_STAR, &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
@@ -3065,7 +3043,7 @@ x86_emulate(
         break;
 
     case X86EMUL_OPC(0x0f, 0x06): /* clts */
-        generate_exception_if(!mode_ring0(), EXC_GP, 0);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
         fail_if((ops->read_cr == NULL) || (ops->write_cr == NULL));
         if ( (rc = ops->read_cr(0, &dst.val, ctxt)) != X86EMUL_OKAY ||
              (rc = ops->write_cr(0, dst.val & ~X86_CR0_TS, ctxt)) != X86EMUL_OKAY )
@@ -3081,10 +3059,10 @@ x86_emulate(
         fail_if(!ops->read_msr);
         if ( (rc = ops->read_msr(MSR_EFER, &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
-        generate_exception_if(!(msr_val & EFER_SCE), EXC_UD);
-        generate_exception_if(!amd_like(ctxt) && !mode_64bit(), EXC_UD);
-        generate_exception_if(!mode_ring0(), EXC_GP, 0);
-        generate_exception_if(!in_protmode(ctxt, ops), EXC_GP, 0);
+        generate_exception_if(!(msr_val & EFER_SCE), X86_EXC_UD);
+        generate_exception_if(!amd_like(ctxt) && !mode_64bit(), X86_EXC_UD);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
+        generate_exception_if(!in_protmode(ctxt, ops), X86_EXC_GP, 0);
 #ifdef __x86_64__
         /*
          * Doing this for just Intel (rather than e.g. !amd_like()) as this is
@@ -3093,7 +3071,7 @@ x86_emulate(
          */
         generate_exception_if(ctxt->cpuid->x86_vendor == X86_VENDOR_INTEL &&
                               op_bytes == 8 && !is_canonical_address(_regs.rcx),
-                              EXC_GP, 0);
+                              X86_EXC_GP, 0);
 #endif
 
         if ( (rc = ops->read_msr(MSR_STAR, &msr_val, ctxt)) != X86EMUL_OKAY )
@@ -3152,7 +3130,7 @@ x86_emulate(
 
     case X86EMUL_OPC(0x0f, 0x08): /* invd */
     case X86EMUL_OPC(0x0f, 0x09): /* wbinvd / wbnoinvd */
-        generate_exception_if(!mode_ring0(), EXC_GP, 0);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
         fail_if(!ops->cache_op);
         if ( (rc = ops->cache_op(b == 0x09 ? !repe_prefix() ||
                                              !vcpu_has_wbnoinvd()
@@ -3167,7 +3145,7 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0x0b): /* ud2 */
     case X86EMUL_OPC(0x0f, 0xb9): /* ud1 */
     case X86EMUL_OPC(0x0f, 0xff): /* ud0 */
-        generate_exception(EXC_UD);
+        generate_exception(X86_EXC_UD);
 
     case X86EMUL_OPC(0x0f, 0x0d): /* GrpP (prefetch) */
     case X86EMUL_OPC(0x0f, 0x18): /* Grp16 (prefetch/nop) */
@@ -3187,7 +3165,7 @@ x86_emulate(
         else if ( _3dnow_ext_table[(imm1 >> 4) & 0xf] & (1 << (imm1 & 0xf)) )
             host_and_vcpu_must_have(3dnow_ext);
         else
-            generate_exception(EXC_UD);
+            generate_exception(X86_EXC_UD);
 
         get_fpu(X86EMUL_FPU_mmx);
 
@@ -3266,7 +3244,7 @@ x86_emulate(
         /* fall through */
     CASE_SIMD_PACKED_FP_VEX(0x0f, 0x2b):   /* movntp{s,d} xmm,m128 */
                                            /* vmovntp{s,d} {x,y}mm,mem */
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
         sfence = true;
         /* fall through */
     CASE_SIMD_ALL_FP_VEX(0x0f, 0x10):      /* mov{up,s}{s,d} xmm/mem,xmm */
@@ -3352,7 +3330,7 @@ x86_emulate(
         break;
 
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x2b): /* vmovntp{s,d} [xyz]mm,mem */
-        generate_exception_if(ea.type != OP_MEM || evex.opmsk, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM || evex.opmsk, X86_EXC_UD);
         sfence = true;
         /* fall through */
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x10): /* vmovup{s,d} [xyz]mm/mem,[xyz]mm{k} */
@@ -3366,7 +3344,7 @@ x86_emulate(
         /* vmovs{s,d} to/from memory have only two operands. */
         if ( (b & ~1) == 0x10 && ea.type == OP_MEM )
             d |= TwoOp;
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         /* fall through */
     CASE_SIMD_ALL_FP(_EVEX, 0x0f, 0x51):    /* vsqrtp{s,d} [xyz]mm/mem,[xyz]mm{k} */
                                             /* vsqrts{s,d} xmm/m32,xmm,xmm{k} */
@@ -3380,7 +3358,7 @@ x86_emulate(
         generate_exception_if((evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK) ||
                                (ea.type != OP_REG && evex.brs &&
                                 (evex.pfx & VEX_PREFIX_SCALAR_MASK))),
-                              EXC_UD);
+                              X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         if ( ea.type != OP_REG || !evex.brs )
             avx512_vlen_check(evex.pfx & VEX_PREFIX_SCALAR_MASK);
@@ -3408,7 +3386,7 @@ x86_emulate(
     case X86EMUL_OPC_VEX_66(0x0f, 0x16):   /* vmovhpd m64,xmm,xmm */
     CASE_SIMD_PACKED_FP_VEX(0x0f, 0x17):   /* movhp{s,d} xmm,m64 */
                                            /* vmovhp{s,d} xmm,m64 */
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC(0x0f, 0x12):          /* movlps m64,xmm */
                                            /* movhlps xmm,xmm */
@@ -3418,7 +3396,7 @@ x86_emulate(
                                            /* movlhps xmm,xmm */
     case X86EMUL_OPC_VEX(0x0f, 0x16):      /* vmovhps m64,xmm,xmm */
                                            /* vmovlhps xmm,xmm,xmm */
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         if ( (d & DstMask) != DstMem )
             d &= ~TwoOp;
         op_bytes = 8;
@@ -3428,7 +3406,7 @@ x86_emulate(
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x13): /* vmovlp{s,d} xmm,m64 */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x16):   /* vmovhpd m64,xmm,xmm */
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x17): /* vmovhp{s,d} xmm,m64 */
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX(0x0f, 0x12):      /* vmovlps m64,xmm,xmm */
                                             /* vmovhlps xmm,xmm,xmm */
@@ -3436,7 +3414,7 @@ x86_emulate(
                                             /* vmovlhps xmm,xmm,xmm */
         generate_exception_if((evex.lr || evex.opmsk || evex.brs ||
                                evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK)),
-                              EXC_UD);
+                              X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         if ( (d & DstMask) != DstMem )
             d &= ~TwoOp;
@@ -3463,7 +3441,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_F3(0x0f, 0x16):   /* vmovshdup [xyz]mm/mem,[xyz]mm{k} */
         generate_exception_if((evex.brs ||
                                evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK)),
-                              EXC_UD);
+                              X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         avx512_vlen_check(false);
         d |= TwoOp;
@@ -3475,7 +3453,7 @@ x86_emulate(
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x14): /* vunpcklp{s,d} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x15): /* vunpckhp{s,d} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK),
-                              EXC_UD);
+                              X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x76): /* vpermi2{d,q} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x77): /* vpermi2p{s,d} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -3502,7 +3480,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x65): /* vblendmp{s,d} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     avx512f_no_sae:
         host_and_vcpu_must_have(avx512f);
-        generate_exception_if(ea.type != OP_MEM && evex.brs, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM && evex.brs, X86_EXC_UD);
         avx512_vlen_check(false);
         goto simd_zmm;
 
@@ -3512,7 +3490,7 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0x21): /* mov dr,reg */
     case X86EMUL_OPC(0x0f, 0x22): /* mov reg,cr */
     case X86EMUL_OPC(0x0f, 0x23): /* mov reg,dr */
-        generate_exception_if(!mode_ring0(), EXC_GP, 0);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
         if ( b & 2 )
         {
             /* Write to CR/DR. */
@@ -3580,7 +3558,7 @@ x86_emulate(
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x2a): /* vcvtsi2s{s,d} r/m,xmm,xmm */
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x7b): /* vcvtusi2s{s,d} r/m,xmm,xmm */
         generate_exception_if(evex.opmsk || (ea.type != OP_REG && evex.brs),
-                              EXC_UD);
+                              X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         if ( !evex.brs )
             avx512_vlen_check(true);
@@ -3632,7 +3610,7 @@ x86_emulate(
         }
         else
         {
-            generate_exception_if(vex.reg != 0xf, EXC_UD);
+            generate_exception_if(vex.reg != 0xf, X86_EXC_UD);
             host_and_vcpu_must_have(avx);
             get_fpu(X86EMUL_FPU_ymm);
 
@@ -3691,7 +3669,7 @@ x86_emulate(
     CASE_SIMD_SCALAR_FP(_EVEX, 0x0f, 0x79): /* vcvts{s,d}2usi xmm/mem,reg */
         generate_exception_if((evex.reg != 0xf || !evex.RX || evex.opmsk ||
                                (ea.type != OP_REG && evex.brs)),
-                              EXC_UD);
+                              X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         if ( !evex.brs )
             avx512_vlen_check(true);
@@ -3711,7 +3689,7 @@ x86_emulate(
         }
         else
         {
-            generate_exception_if(vex.reg != 0xf, EXC_UD);
+            generate_exception_if(vex.reg != 0xf, X86_EXC_UD);
             host_and_vcpu_must_have(avx);
             get_fpu(X86EMUL_FPU_ymm);
         }
@@ -3761,7 +3739,7 @@ x86_emulate(
         generate_exception_if((evex.reg != 0xf || !evex.RX || evex.opmsk ||
                                (ea.type != OP_REG && evex.brs) ||
                                evex.w != evex.pfx),
-                              EXC_UD);
+                              X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         if ( !evex.brs )
             avx512_vlen_check(true);
@@ -3772,7 +3750,7 @@ x86_emulate(
         goto vcomi;
 
     case X86EMUL_OPC(0x0f, 0x30): /* wrmsr */
-        generate_exception_if(!mode_ring0(), EXC_GP, 0);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
         fail_if(ops->write_msr == NULL);
         if ( (rc = ops->write_msr(_regs.ecx,
                                   ((uint64_t)_regs.r(dx) << 32) | _regs.eax,
@@ -3786,7 +3764,7 @@ x86_emulate(
             fail_if(ops->read_cr == NULL);
             if ( (rc = ops->read_cr(4, &cr4, ctxt)) )
                 goto done;
-            generate_exception_if(cr4 & X86_CR4_TSD, EXC_GP, 0);
+            generate_exception_if(cr4 & X86_CR4_TSD, X86_EXC_GP, 0);
         }
         fail_if(ops->read_msr == NULL);
         if ( (rc = ops->read_msr(MSR_IA32_TSC,
@@ -3797,7 +3775,7 @@ x86_emulate(
         break;
 
     case X86EMUL_OPC(0x0f, 0x32): /* rdmsr */
-        generate_exception_if(!mode_ring0(), EXC_GP, 0);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
         fail_if(ops->read_msr == NULL);
         if ( (rc = ops->read_msr(_regs.ecx, &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
@@ -3807,15 +3785,15 @@ x86_emulate(
 
     case X86EMUL_OPC(0x0f, 0x34): /* sysenter */
         vcpu_must_have(sep);
-        generate_exception_if(amd_like(ctxt) && ctxt->lma, EXC_UD);
-        generate_exception_if(!in_protmode(ctxt, ops), EXC_GP, 0);
+        generate_exception_if(amd_like(ctxt) && ctxt->lma, X86_EXC_UD);
+        generate_exception_if(!in_protmode(ctxt, ops), X86_EXC_GP, 0);
 
         fail_if(ops->read_msr == NULL);
         if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_CS,
                                  &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
 
-        generate_exception_if(!(msr_val & 0xfffc), EXC_GP, 0);
+        generate_exception_if(!(msr_val & 0xfffc), X86_EXC_GP, 0);
 
         _regs.eflags &= ~(X86_EFLAGS_VM | X86_EFLAGS_IF | X86_EFLAGS_RF);
 
@@ -3856,20 +3834,20 @@ x86_emulate(
 
     case X86EMUL_OPC(0x0f, 0x35): /* sysexit */
         vcpu_must_have(sep);
-        generate_exception_if(amd_like(ctxt) && ctxt->lma, EXC_UD);
-        generate_exception_if(!mode_ring0(), EXC_GP, 0);
-        generate_exception_if(!in_protmode(ctxt, ops), EXC_GP, 0);
+        generate_exception_if(amd_like(ctxt) && ctxt->lma, X86_EXC_UD);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
+        generate_exception_if(!in_protmode(ctxt, ops), X86_EXC_GP, 0);
 
         fail_if(ops->read_msr == NULL);
         if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_CS,
                                  &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
 
-        generate_exception_if(!(msr_val & 0xfffc), EXC_GP, 0);
+        generate_exception_if(!(msr_val & 0xfffc), X86_EXC_GP, 0);
         generate_exception_if(op_bytes == 8 &&
                               (!is_canonical_address(_regs.r(dx)) ||
                                !is_canonical_address(_regs.r(cx))),
-                              EXC_GP, 0);
+                              X86_EXC_GP, 0);
 
         cs.sel = (msr_val | 3) + /* SELECTOR_RPL_MASK */
                  (op_bytes == 8 ? 32 : 16);
@@ -3917,7 +3895,7 @@ x86_emulate(
     case X86EMUL_OPC_VEX(0x0f, 0x47):    /* kxor{w,q} k,k,k */
     case X86EMUL_OPC_VEX_66(0x0f, 0x47): /* kxor{b,d} k,k,k */
     case X86EMUL_OPC_VEX_66(0x0f, 0x4a): /* kadd{b,d} k,k,k */
-        generate_exception_if(!vex.l, EXC_UD);
+        generate_exception_if(!vex.l, X86_EXC_UD);
     opmask_basic:
         if ( vex.w )
             host_and_vcpu_must_have(avx512bw);
@@ -3926,7 +3904,7 @@ x86_emulate(
     opmask_common:
         host_and_vcpu_must_have(avx512f);
         generate_exception_if(!vex.r || (mode_64bit() && !(vex.reg & 8)) ||
-                              ea.type != OP_REG, EXC_UD);
+                              ea.type != OP_REG, X86_EXC_UD);
 
         vex.reg |= 8;
         d &= ~TwoOp;
@@ -3944,16 +3922,16 @@ x86_emulate(
 
     case X86EMUL_OPC_VEX(0x0f, 0x44):    /* knot{w,q} k,k */
     case X86EMUL_OPC_VEX_66(0x0f, 0x44): /* knot{b,d} k,k */
-        generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
+        generate_exception_if(vex.l || vex.reg != 0xf, X86_EXC_UD);
         goto opmask_basic;
 
     case X86EMUL_OPC_VEX(0x0f, 0x4b):    /* kunpck{w,d}{d,q} k,k,k */
-        generate_exception_if(!vex.l, EXC_UD);
+        generate_exception_if(!vex.l, X86_EXC_UD);
         host_and_vcpu_must_have(avx512bw);
         goto opmask_common;
 
     case X86EMUL_OPC_VEX_66(0x0f, 0x4b): /* kunpckbw k,k,k */
-        generate_exception_if(!vex.l || vex.w, EXC_UD);
+        generate_exception_if(!vex.l || vex.w, X86_EXC_UD);
         goto opmask_common;
 
 #endif /* X86EMUL_NO_SIMD */
@@ -3974,7 +3952,7 @@ x86_emulate(
     simd_0f_to_gpr:
         opc[insn_bytes - PFX_BYTES] = 0xc3;
 
-        generate_exception_if(ea.type != OP_REG, EXC_UD);
+        generate_exception_if(ea.type != OP_REG, X86_EXC_UD);
 
         if ( vex.opcx == vex_none )
         {
@@ -3997,7 +3975,7 @@ x86_emulate(
         }
         else
         {
-            generate_exception_if(vex.reg != 0xf, EXC_UD);
+            generate_exception_if(vex.reg != 0xf, X86_EXC_UD);
             if ( b == 0x50 || !vex.l )
                 host_and_vcpu_must_have(avx);
             else
@@ -4020,7 +3998,7 @@ x86_emulate(
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0x57): /* vxorp{s,d} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if((evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK) ||
                                (ea.type != OP_MEM && evex.brs)),
-                              EXC_UD);
+                              X86_EXC_UD);
         host_and_vcpu_must_have(avx512dq);
         avx512_vlen_check(false);
         goto simd_zmm;
@@ -4053,7 +4031,7 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f, 0x5b): /* vcvtps2dq [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F3(0x0f, 0x5b): /* vcvttps2dq [xyz]mm/mem,[xyz]mm{k} */
-        generate_exception_if(evex.w, EXC_UD);
+        generate_exception_if(evex.w, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX(0x0f, 0x5b):    /* vcvtdq2ps [xyz]mm/mem,[xyz]mm{k} */
                                           /* vcvtqq2ps [xyz]mm/mem,{x,y}mm{k} */
@@ -4245,7 +4223,7 @@ x86_emulate(
 #ifndef X86EMUL_NO_SIMD
 
     case X86EMUL_OPC_EVEX_66(0x0f, 0xf6): /* vpsadbw [xyz]mm/mem,[xyz]mm,[xyz]mm */
-        generate_exception_if(evex.opmsk, EXC_UD);
+        generate_exception_if(evex.opmsk, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x60): /* vpunpcklbw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x61): /* vpunpcklwd [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -4283,13 +4261,13 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x1c): /* vpabsb [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x1d): /* vpabsw [xyz]mm/mem,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512bw);
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         elem_bytes = 1 << (b & 1);
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_EVEX_66(0x0f, 0x62): /* vpunpckldq [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x6a): /* vpunpckhdq [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        generate_exception_if(evex.w, EXC_UD);
+        generate_exception_if(evex.w, X86_EXC_UD);
         fault_suppression = false;
         op_bytes = 16 << evex.lr;
         goto avx512f_no_sae;
@@ -4308,21 +4286,21 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x27): /* vptestm{d,q} [xyz]mm/mem,[xyz]mm,k{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x29): /* vpcmpeqq [xyz]mm/mem,[xyz]mm,k{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x37): /* vpcmpgtq [xyz]mm/mem,[xyz]mm,k{k} */
-        generate_exception_if(!evex.r || !evex.R || evex.z, EXC_UD);
+        generate_exception_if(!evex.r || !evex.R || evex.z, X86_EXC_UD);
         if ( b & (ext == ext_0f38 ? 1 : 2) )
         {
-            generate_exception_if(b != 0x27 && evex.w != (b & 1), EXC_UD);
+            generate_exception_if(b != 0x27 && evex.w != (b & 1), X86_EXC_UD);
             goto avx512f_no_sae;
         }
         host_and_vcpu_must_have(avx512bw);
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         elem_bytes = 1 << (ext == ext_0f ? b & 1 : evex.w);
         avx512_vlen_check(false);
         goto simd_zmm;
 
     case X86EMUL_OPC_EVEX_66(0x0f, 0x6b): /* vpackssdw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x2b): /* vpackusdw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        generate_exception_if(evex.w || evex.brs, EXC_UD);
+        generate_exception_if(evex.w || evex.brs, X86_EXC_UD);
         fault_suppression = false;
         goto avx512f_no_sae;
 
@@ -4333,7 +4311,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f, 0xd4): /* vpaddq [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xf4): /* vpmuludq [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x28): /* vpmuldq [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        generate_exception_if(!evex.w, EXC_UD);
+        generate_exception_if(!evex.w, X86_EXC_UD);
         goto avx512f_no_sae;
 
 #endif /* X86EMUL_NO_SIMD */
@@ -4344,7 +4322,7 @@ x86_emulate(
                                           /* vmov{d,q} xmm,r/m */
         if ( vex.opcx != vex_none )
         {
-            generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
+            generate_exception_if(vex.l || vex.reg != 0xf, X86_EXC_UD);
             host_and_vcpu_must_have(avx);
             get_fpu(X86EMUL_FPU_ymm);
         }
@@ -4385,7 +4363,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f, 0x7e): /* vmov{d,q} xmm,r/m */
         generate_exception_if((evex.lr || evex.opmsk || evex.brs ||
                                evex.reg != 0xf || !evex.RX),
-                              EXC_UD);
+                              X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         get_fpu(X86EMUL_FPU_zmm);
 
@@ -4409,7 +4387,7 @@ x86_emulate(
 
     case X86EMUL_OPC_66(0x0f, 0xe7):     /* movntdq xmm,m128 */
     case X86EMUL_OPC_VEX_66(0x0f, 0xe7): /* vmovntdq {x,y}mm,mem */
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
         sfence = true;
         /* fall through */
     case X86EMUL_OPC_66(0x0f, 0x6f):     /* movdqa xmm/m128,xmm */
@@ -4429,7 +4407,7 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f, 0xe7): /* vmovntdq [xyz]mm,mem */
         generate_exception_if(ea.type != OP_MEM || evex.opmsk || evex.w,
-                              EXC_UD);
+                              X86_EXC_UD);
         sfence = true;
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x6f): /* vmovdqa{32,64} [xyz]mm/mem,[xyz]mm{k} */
@@ -4437,7 +4415,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f, 0x7f): /* vmovdqa{32,64} [xyz]mm,[xyz]mm/mem{k} */
     case X86EMUL_OPC_EVEX_F3(0x0f, 0x7f): /* vmovdqu{32,64} [xyz]mm,[xyz]mm/mem{k} */
     vmovdqa:
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         d |= TwoOp;
         op_bytes = 16 << evex.lr;
         goto avx512f_no_sae;
@@ -4449,7 +4427,7 @@ x86_emulate(
         goto vmovdqa;
 
     case X86EMUL_OPC_VEX_66(0x0f, 0xd6): /* vmovq xmm,xmm/m64 */
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         d |= TwoOp;
         /* fall through */
     case X86EMUL_OPC_66(0x0f, 0xd6):     /* movq xmm,xmm/m64 */
@@ -4534,11 +4512,11 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_F3(0x0f, 0x70): /* vpshufhw $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F2(0x0f, 0x70): /* vpshuflw $imm8,[xyz]mm/mem,[xyz]mm{k} */
         if ( evex.pfx == vex_66 )
-            generate_exception_if(evex.w, EXC_UD);
+            generate_exception_if(evex.w, X86_EXC_UD);
         else
         {
             host_and_vcpu_must_have(avx512bw);
-            generate_exception_if(evex.brs, EXC_UD);
+            generate_exception_if(evex.brs, X86_EXC_UD);
         }
         d = (d & ~SrcMask) | SrcMem | TwoOp;
         op_bytes = 16 << evex.lr;
@@ -4562,7 +4540,7 @@ x86_emulate(
             goto unrecognized_insn;
         }
     simd_0f_shift_imm:
-        generate_exception_if(ea.type != OP_REG, EXC_UD);
+        generate_exception_if(ea.type != OP_REG, X86_EXC_UD);
 
         if ( vex.opcx != vex_none )
         {
@@ -4622,7 +4600,7 @@ x86_emulate(
         {
         case 2: /* vpsrld $imm8,[xyz]mm/mem,[xyz]mm{k} */
         case 6: /* vpslld $imm8,[xyz]mm/mem,[xyz]mm{k} */
-            generate_exception_if(evex.w, EXC_UD);
+            generate_exception_if(evex.w, X86_EXC_UD);
             /* fall through */
         case 0: /* vpror{d,q} $imm8,[xyz]mm/mem,[xyz]mm{k} */
         case 1: /* vprol{d,q} $imm8,[xyz]mm/mem,[xyz]mm{k} */
@@ -4670,11 +4648,11 @@ x86_emulate(
         {
         case 2: /* vpsrlq $imm8,[xyz]mm/mem,[xyz]mm{k} */
         case 6: /* vpsllq $imm8,[xyz]mm/mem,[xyz]mm{k} */
-            generate_exception_if(!evex.w, EXC_UD);
+            generate_exception_if(!evex.w, X86_EXC_UD);
             goto avx512f_shift_imm;
         case 3: /* vpsrldq $imm8,[xyz]mm/mem,[xyz]mm */
         case 7: /* vpslldq $imm8,[xyz]mm/mem,[xyz]mm */
-            generate_exception_if(evex.opmsk, EXC_UD);
+            generate_exception_if(evex.opmsk, X86_EXC_UD);
             goto avx512bw_shift_imm;
         }
         goto unrecognized_insn;
@@ -4688,7 +4666,7 @@ x86_emulate(
     case X86EMUL_OPC_VEX(0x0f, 0x77):    /* vzero{all,upper} */
         if ( vex.opcx != vex_none )
         {
-            generate_exception_if(vex.reg != 0xf, EXC_UD);
+            generate_exception_if(vex.reg != 0xf, X86_EXC_UD);
             host_and_vcpu_must_have(avx);
             get_fpu(X86EMUL_FPU_ymm);
 
@@ -4756,7 +4734,7 @@ x86_emulate(
         }
         /* fall through */
     case X86EMUL_OPC_F2(0x0f, 0x78):     /* insertq $imm8,$imm8,xmm,xmm */
-        generate_exception_if(ea.type != OP_REG, EXC_UD);
+        generate_exception_if(ea.type != OP_REG, X86_EXC_UD);
 
         host_and_vcpu_must_have(sse4a);
         get_fpu(X86EMUL_FPU_xmm);
@@ -4771,14 +4749,14 @@ x86_emulate(
 
     case X86EMUL_OPC_66(0x0f, 0x79):     /* extrq xmm,xmm */
     case X86EMUL_OPC_F2(0x0f, 0x79):     /* insertq xmm,xmm */
-        generate_exception_if(ea.type != OP_REG, EXC_UD);
+        generate_exception_if(ea.type != OP_REG, X86_EXC_UD);
         host_and_vcpu_must_have(sse4a);
         op_bytes = 8;
         goto simd_0f_xmm;
 
     case X86EMUL_OPC_EVEX_66(0x0f, 0xe6):   /* vcvttpd2dq [xyz]mm/mem,{x,y}mm{k} */
     case X86EMUL_OPC_EVEX_F2(0x0f, 0xe6):   /* vcvtpd2dq [xyz]mm/mem,{x,y}mm{k} */
-        generate_exception_if(!evex.w, EXC_UD);
+        generate_exception_if(!evex.w, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_F3(0x0f, 0x7a):   /* vcvtudq2pd {x,y}mm/mem,[xyz]mm{k} */
                                             /* vcvtuqq2pd [xyz]mm/mem,[xyz]mm{k} */
@@ -4801,7 +4779,7 @@ x86_emulate(
         else
         {
             host_and_vcpu_must_have(avx512f);
-            generate_exception_if(ea.type != OP_MEM && evex.brs, EXC_UD);
+            generate_exception_if(ea.type != OP_MEM && evex.brs, X86_EXC_UD);
         }
         if ( ea.type != OP_REG || !evex.brs )
             avx512_vlen_check(false);
@@ -4811,7 +4789,7 @@ x86_emulate(
 
     case X86EMUL_OPC_F2(0x0f, 0xf0):     /* lddqu m128,xmm */
     case X86EMUL_OPC_VEX_F2(0x0f, 0xf0): /* vlddqu mem,{x,y}mm */
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_66(0x0f, 0x7c):     /* haddpd xmm/m128,xmm */
     case X86EMUL_OPC_F2(0x0f, 0x7c):     /* haddps xmm/m128,xmm */
@@ -4830,14 +4808,14 @@ x86_emulate(
 
     case X86EMUL_OPC_F3(0x0f, 0x7e):     /* movq xmm/m64,xmm */
     case X86EMUL_OPC_VEX_F3(0x0f, 0x7e): /* vmovq xmm/m64,xmm */
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         op_bytes = 8;
         goto simd_0f_int;
 
     case X86EMUL_OPC_EVEX_F3(0x0f, 0x7e): /* vmovq xmm/m64,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xd6): /* vmovq xmm,xmm/m64 */
         generate_exception_if(evex.lr || !evex.w || evex.opmsk || evex.brs,
-                              EXC_UD);
+                              X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         d |= TwoOp;
         op_bytes = 8;
@@ -4859,11 +4837,11 @@ x86_emulate(
 
     case X86EMUL_OPC_VEX(0x0f, 0x91):    /* kmov{w,q} k,mem */
     case X86EMUL_OPC_VEX_66(0x0f, 0x91): /* kmov{b,d} k,mem */
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_VEX(0x0f, 0x90):    /* kmov{w,q} k/mem,k */
     case X86EMUL_OPC_VEX_66(0x0f, 0x90): /* kmov{b,d} k/mem,k */
-        generate_exception_if(vex.l || !vex.r, EXC_UD);
+        generate_exception_if(vex.l || !vex.r, X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         if ( vex.w )
         {
@@ -4896,14 +4874,14 @@ x86_emulate(
     case X86EMUL_OPC_VEX_66(0x0f, 0x92): /* kmovb r32,k */
     case X86EMUL_OPC_VEX_F2(0x0f, 0x92): /* kmov{d,q} reg,k */
         generate_exception_if(vex.l || !vex.r || vex.reg != 0xf ||
-                              ea.type != OP_REG, EXC_UD);
+                              ea.type != OP_REG, X86_EXC_UD);
 
         host_and_vcpu_must_have(avx512f);
         if ( vex.pfx == vex_f2 )
             host_and_vcpu_must_have(avx512bw);
         else
         {
-            generate_exception_if(vex.w, EXC_UD);
+            generate_exception_if(vex.w, X86_EXC_UD);
             if ( vex.pfx )
                 host_and_vcpu_must_have(avx512dq);
         }
@@ -4933,7 +4911,7 @@ x86_emulate(
     case X86EMUL_OPC_VEX_66(0x0f, 0x93): /* kmovb k,r32 */
     case X86EMUL_OPC_VEX_F2(0x0f, 0x93): /* kmov{d,q} k,reg */
         generate_exception_if(vex.l || vex.reg != 0xf || ea.type != OP_REG,
-                              EXC_UD);
+                              X86_EXC_UD);
         dst = ea;
         dst.reg = decode_gpr(&_regs, modrm_reg);
 
@@ -4945,7 +4923,7 @@ x86_emulate(
         }
         else
         {
-            generate_exception_if(vex.w, EXC_UD);
+            generate_exception_if(vex.w, X86_EXC_UD);
             dst.bytes = 4;
             if ( vex.pfx )
                 host_and_vcpu_must_have(avx512dq);
@@ -4978,7 +4956,7 @@ x86_emulate(
     case X86EMUL_OPC_VEX_66(0x0f, 0x98): /* kortest{b,d} k,k */
     case X86EMUL_OPC_VEX_66(0x0f, 0x99): /* ktest{b,d} k,k */
         generate_exception_if(vex.l || !vex.r || vex.reg != 0xf ||
-                              ea.type != OP_REG, EXC_UD);
+                              ea.type != OP_REG, X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         if ( vex.w )
             host_and_vcpu_must_have(avx512bw);
@@ -5025,7 +5003,7 @@ x86_emulate(
             goto done;
 
         generate_exception_if((msr_val & MSR_MISC_FEATURES_CPUID_FAULTING),
-                              EXC_GP, 0); /* Faulting active? (Inc. CPL test) */
+                              X86_EXC_GP, 0); /* Faulting active? (Inc. CPL test) */
 
         rc = ops->cpuid(_regs.eax, _regs.ecx, &cpuid_leaf, ctxt);
         if ( rc != X86EMUL_OKAY )
@@ -5037,7 +5015,7 @@ x86_emulate(
         break;
 
     case X86EMUL_OPC(0x0f, 0xa3): bt: /* bt */
-        generate_exception_if(lock_prefix, EXC_UD);
+        generate_exception_if(lock_prefix, X86_EXC_UD);
 
         if ( ops->rmw && dst.type == OP_MEM &&
              (rc = read_ulong(dst.mem.seg, dst.mem.off, &dst.val,
@@ -5054,7 +5032,7 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0xad): /* shrd %%cl,r,r/m */ {
         uint8_t shift, width = dst.bytes << 3;
 
-        generate_exception_if(lock_prefix, EXC_UD);
+        generate_exception_if(lock_prefix, X86_EXC_UD);
 
         if ( b & 1 )
             shift = _regs.cl;
@@ -5202,7 +5180,7 @@ x86_emulate(
         case 5: goto bts;
         case 6: goto btr;
         case 7: goto btc;
-        default: generate_exception(EXC_UD);
+        default: generate_exception(X86_EXC_UD);
         }
         break;
 
@@ -5320,7 +5298,7 @@ x86_emulate(
                                (ea.type != OP_REG && evex.brs &&
                                 (evex.pfx & VEX_PREFIX_SCALAR_MASK)) ||
                                !evex.r || !evex.R || evex.z),
-                              EXC_UD);
+                              X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         if ( ea.type != OP_REG || !evex.brs )
             avx512_vlen_check(evex.pfx & VEX_PREFIX_SCALAR_MASK);
@@ -5352,7 +5330,7 @@ x86_emulate(
 
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xc4):  /* pinsrw $imm8,r32/m16,{,x}mm */
                                            /* vpinsrw $imm8,r32/m16,xmm,xmm */
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         memcpy(mmvalp, &src.val, 2);
         ea.type = OP_MEM;
         state->simd_size = simd_other;
@@ -5363,7 +5341,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f, 0xc4):   /* vpinsrw $imm8,r32/m16,xmm,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x20): /* vpinsrb $imm8,r32/m8,xmm,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x22): /* vpinsr{d,q} $imm8,r/m,xmm,xmm */
-        generate_exception_if(evex.lr || evex.opmsk || evex.brs, EXC_UD);
+        generate_exception_if(evex.lr || evex.opmsk || evex.brs, X86_EXC_UD);
         if ( b & 2 )
             host_and_vcpu_must_have(avx512dq);
         else
@@ -5380,7 +5358,7 @@ x86_emulate(
 
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xc5):  /* pextrw $imm8,{,x}mm,reg */
                                            /* vpextrw $imm8,xmm,reg */
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         opc = init_prefixes(stub);
         opc[0] = b;
         /* Convert GPR destination to %rAX. */
@@ -5397,7 +5375,7 @@ x86_emulate(
 
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0xc6): /* vshufp{s,d} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK),
-                              EXC_UD);
+                              X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x03): /* valign{d,q} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         fault_suppression = false;
@@ -5405,7 +5383,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x25): /* vpternlog{d,q} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     avx512f_imm8_no_sae:
         host_and_vcpu_must_have(avx512f);
-        generate_exception_if(ea.type != OP_MEM && evex.brs, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM && evex.brs, X86_EXC_UD);
         avx512_vlen_check(false);
         goto simd_imm8_zmm;
 
@@ -5442,7 +5420,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f, 0xe2): /* vpsra{d,q} xmm/m128,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xf2): /* vpslld xmm/m128,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xf3): /* vpsllq xmm/m128,[xyz]mm,[xyz]mm{k} */
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x0c): /* vpermilps [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x0d): /* vpermilpd [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -5455,7 +5433,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f, 0xfe): /* vpaddd [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x1e): /* vpabsd [xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x1f): /* vpabsq [xyz]mm/mem,[xyz]mm{k} */
-        generate_exception_if(evex.w != (b & 1), EXC_UD);
+        generate_exception_if(evex.w != (b & 1), X86_EXC_UD);
         goto avx512f_no_sae;
 
 #endif /* !X86EMUL_NO_SIMD */
@@ -5472,7 +5450,7 @@ x86_emulate(
 
     case X86EMUL_OPC_F3(0x0f, 0xd6):     /* movq2dq mm,xmm */
     case X86EMUL_OPC_F2(0x0f, 0xd6):     /* movdq2q xmm,mm */
-        generate_exception_if(ea.type != OP_REG, EXC_UD);
+        generate_exception_if(ea.type != OP_REG, X86_EXC_UD);
         op_bytes = 8;
         host_and_vcpu_must_have(mmx);
         goto simd_0f_int;
@@ -5481,7 +5459,7 @@ x86_emulate(
 #ifndef X86EMUL_NO_MMX
 
     case X86EMUL_OPC(0x0f, 0xe7):        /* movntq mm,m64 */
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
         sfence = true;
         /* fall through */
     case X86EMUL_OPC(0x0f, 0xda):        /* pminub mm/m64,mm */
@@ -5504,7 +5482,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f, 0xea): /* vpminsw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xee): /* vpmaxsw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512bw);
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         elem_bytes = b & 0x10 ? 1 : 2;
         goto avx512f_no_sae;
 
@@ -5521,10 +5499,10 @@ x86_emulate(
 #endif /* !X86EMUL_NO_SIMD */
 
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xf7): /* {,v}maskmov{q,dqu} {,x}mm,{,x}mm */
-        generate_exception_if(ea.type != OP_REG, EXC_UD);
+        generate_exception_if(ea.type != OP_REG, X86_EXC_UD);
         if ( vex.opcx != vex_none )
         {
-            generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
+            generate_exception_if(vex.l || vex.reg != 0xf, X86_EXC_UD);
             d |= TwoOp;
             host_and_vcpu_must_have(avx);
             get_fpu(X86EMUL_FPU_ymm);
@@ -5627,23 +5605,23 @@ x86_emulate(
 
     case X86EMUL_OPC_VEX_66(0x0f38, 0x19): /* vbroadcastsd xmm/m64,ymm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x1a): /* vbroadcastf128 m128,ymm */
-        generate_exception_if(!vex.l, EXC_UD);
+        generate_exception_if(!vex.l, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x18): /* vbroadcastss xmm/m32,{x,y}mm */
         if ( ea.type != OP_MEM )
         {
-            generate_exception_if(b & 2, EXC_UD);
+            generate_exception_if(b & 2, X86_EXC_UD);
             host_and_vcpu_must_have(avx2);
         }
         /* fall through */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x0c): /* vpermilps {x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x0d): /* vpermilpd {x,y}mm/mem,{x,y}mm,{x,y}mm */
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         goto simd_0f_avx;
 
     case X86EMUL_OPC_VEX_66(0x0f38, 0x0e): /* vtestps {x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x0f): /* vtestpd {x,y}mm/mem,{x,y}mm */
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_66(0x0f38, 0x17):     /* ptest xmm/m128,xmm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x17): /* vptest {x,y}mm/mem,{x,y}mm */
@@ -5654,7 +5632,7 @@ x86_emulate(
         }
         else
         {
-            generate_exception_if(vex.reg != 0xf, EXC_UD);
+            generate_exception_if(vex.reg != 0xf, X86_EXC_UD);
             host_and_vcpu_must_have(avx);
             get_fpu(X86EMUL_FPU_ymm);
         }
@@ -5729,7 +5707,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x11): /* vpsravw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x12): /* vpsllvw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512bw);
-        generate_exception_if(!evex.w || evex.brs, EXC_UD);
+        generate_exception_if(!evex.w || evex.brs, X86_EXC_UD);
         elem_bytes = 2;
         goto avx512f_no_sae;
 
@@ -5751,7 +5729,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x33): /* vpmovzxwd {x,y}mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x34): /* vpmovzxwq xmm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x35): /* vpmovzxdq {x,y}mm/mem,[xyz]mm{k} */
-            generate_exception_if(evex.w && (b & 7) == 5, EXC_UD);
+            generate_exception_if(evex.w && (b & 7) == 5, X86_EXC_UD);
         }
         else
         {
@@ -5770,22 +5748,22 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_F3(0x0f38, 0x33): /* vpmovdw [xyz]mm,{x,y}mm/mem{k} */
     case X86EMUL_OPC_EVEX_F3(0x0f38, 0x34): /* vpmovqw [xyz]mm,xmm/mem{k} */
     case X86EMUL_OPC_EVEX_F3(0x0f38, 0x35): /* vpmovqd [xyz]mm,{x,y}mm/mem{k} */
-            generate_exception_if(evex.w || (ea.type != OP_REG && evex.z), EXC_UD);
+            generate_exception_if(evex.w || (ea.type != OP_REG && evex.z), X86_EXC_UD);
             d = DstMem | SrcReg | TwoOp;
         }
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         op_bytes = 32 >> (pmov_convert_delta[b & 7] + 1 - evex.lr);
         elem_bytes = (b & 7) < 3 ? 1 : (b & 7) != 5 ? 2 : 4;
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_VEX_66(0x0f38, 0x13): /* vcvtph2ps xmm/mem,{x,y}mm */
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         host_and_vcpu_must_have(f16c);
         op_bytes = 8 << vex.l;
         goto simd_0f_ymm;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x13): /* vcvtph2ps {x,y}mm/mem,[xyz]mm{k} */
-        generate_exception_if(evex.w || (ea.type != OP_REG && evex.brs), EXC_UD);
+        generate_exception_if(evex.w || (ea.type != OP_REG && evex.brs), X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         if ( !evex.brs )
             avx512_vlen_check(false);
@@ -5795,19 +5773,19 @@ x86_emulate(
 
     case X86EMUL_OPC_VEX_66(0x0f38, 0x16): /* vpermps ymm/m256,ymm,ymm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x36): /* vpermd ymm/m256,ymm,ymm */
-        generate_exception_if(!vex.l || vex.w, EXC_UD);
+        generate_exception_if(!vex.l || vex.w, X86_EXC_UD);
         goto simd_0f_avx2;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x16): /* vpermp{s,d} {y,z}mm/mem,{y,z}mm,{y,z}mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x36): /* vperm{d,q} {y,z}mm/mem,{y,z}mm,{y,z}mm{k} */
-        generate_exception_if(!evex.lr, EXC_UD);
+        generate_exception_if(!evex.lr, X86_EXC_UD);
         fault_suppression = false;
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x18): /* vbroadcastss xmm/m32,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x58): /* vpbroadcastd xmm/m32,[xyz]mm{k} */
         op_bytes = elem_bytes;
-        generate_exception_if(evex.w || evex.brs, EXC_UD);
+        generate_exception_if(evex.w || evex.brs, X86_EXC_UD);
     avx512_broadcast:
         /*
          * For the respective code below the main switch() to work we need to
@@ -5828,17 +5806,17 @@ x86_emulate(
                                             /* vbroadcastf64x4 m256,zmm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x5b): /* vbroadcasti32x8 m256,zmm{k} */
                                             /* vbroadcasti64x4 m256,zmm{k} */
-        generate_exception_if(ea.type != OP_MEM || evex.lr != 2, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM || evex.lr != 2, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x19): /* vbroadcastsd xmm/m64,{y,z}mm{k} */
                                             /* vbroadcastf32x2 xmm/m64,{y,z}mm{k} */
-        generate_exception_if(!evex.lr, EXC_UD);
+        generate_exception_if(!evex.lr, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x59): /* vpbroadcastq xmm/m64,[xyz]mm{k} */
                                             /* vbroadcasti32x2 xmm/m64,[xyz]mm{k} */
         if ( b == 0x59 )
             op_bytes = 8;
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         if ( !evex.w )
             host_and_vcpu_must_have(avx512dq);
         goto avx512_broadcast;
@@ -5848,7 +5826,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x5a): /* vbroadcasti32x4 m128,{y,z}mm{k} */
                                             /* vbroadcasti64x2 m128,{y,z}mm{k} */
         generate_exception_if(ea.type != OP_MEM || !evex.lr || evex.brs,
-                              EXC_UD);
+                              X86_EXC_UD);
         if ( evex.w )
             host_and_vcpu_must_have(avx512dq);
         goto avx512_broadcast;
@@ -5870,7 +5848,7 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_F3(0x0f38, 0x29): /* vpmov{b,w}2m [xyz]mm,k */
     case X86EMUL_OPC_EVEX_F3(0x0f38, 0x39): /* vpmov{d,q}2m [xyz]mm,k */
-        generate_exception_if(!evex.r || !evex.R, EXC_UD);
+        generate_exception_if(!evex.r || !evex.R, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_F3(0x0f38, 0x28): /* vpmovm2{b,w} k,[xyz]mm */
     case X86EMUL_OPC_EVEX_F3(0x0f38, 0x38): /* vpmovm2{d,q} k,[xyz]mm */
@@ -5878,14 +5856,14 @@ x86_emulate(
             host_and_vcpu_must_have(avx512dq);
         else
             host_and_vcpu_must_have(avx512bw);
-        generate_exception_if(evex.opmsk || ea.type != OP_REG, EXC_UD);
+        generate_exception_if(evex.opmsk || ea.type != OP_REG, X86_EXC_UD);
         d |= TwoOp;
         op_bytes = 16 << evex.lr;
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_66(0x0f38, 0x2a):     /* movntdqa m128,xmm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x2a): /* vmovntdqa mem,{x,y}mm */
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
         /* Ignore the non-temporal hint for now, using movdqa instead. */
         asm volatile ( "mfence" ::: "memory" );
         b = 0x6f;
@@ -5901,7 +5879,7 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x2a): /* vmovntdqa mem,[xyz]mm */
         generate_exception_if(ea.type != OP_MEM || evex.opmsk || evex.w,
-                              EXC_UD);
+                              X86_EXC_UD);
         /* Ignore the non-temporal hint for now, using vmovdqa32 instead. */
         asm volatile ( "mfence" ::: "memory" );
         b = 0x6f;
@@ -5912,7 +5890,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_F3(0x0f38, 0x3a): /* vpbroadcastmw2d k,[xyz]mm */
         generate_exception_if((ea.type != OP_REG || evex.opmsk ||
                                evex.w == ((b >> 4) & 1)),
-                              EXC_UD);
+                              X86_EXC_UD);
         d |= TwoOp;
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xc4): /* vpconflict{d,q} [xyz]mm/mem,[xyz]mm{k} */
@@ -5929,7 +5907,7 @@ x86_emulate(
     {
         typeof(vex) *pvex;
 
-        generate_exception_if(ea.type != OP_MEM || vex.w, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM || vex.w, X86_EXC_UD);
         host_and_vcpu_must_have(avx);
         elem_bytes = 4 << (b & 1);
     vmaskmov:
@@ -6016,7 +5994,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xbf): /* vfnmsub231s{s,d} xmm/mem,xmm,xmm{k} */
         host_and_vcpu_must_have(avx512f);
     simd_zmm_scalar_sae:
-        generate_exception_if(ea.type != OP_REG && evex.brs, EXC_UD);
+        generate_exception_if(ea.type != OP_REG && evex.brs, X86_EXC_UD);
         if ( !evex.brs )
             avx512_vlen_check(true);
         goto simd_zmm;
@@ -6030,7 +6008,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x3c): /* vpmaxsb [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x3e): /* vpmaxuw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512bw);
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         elem_bytes = b & 2 ?: 1;
         goto avx512f_no_sae;
 
@@ -6050,7 +6028,7 @@ x86_emulate(
             goto simd_0f38_common;
         /* fall through */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x41): /* vphminposuw xmm/m128,xmm,xmm */
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         goto simd_0f_avx;
 
     case X86EMUL_OPC_VEX_66(0x0f38, 0x50): /* vpdpbusd [xy]mm/mem,[xy]mm,[xy]mm */
@@ -6058,7 +6036,7 @@ x86_emulate(
     case X86EMUL_OPC_VEX_66(0x0f38, 0x52): /* vpdpwssd [xy]mm/mem,[xy]mm,[xy]mm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x53): /* vpdpwssds [xy]mm/mem,[xy]mm,[xy]mm */
         host_and_vcpu_must_have(avx_vnni);
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         goto simd_0f_avx;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x50): /* vpdpbusd [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -6066,7 +6044,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x52): /* vpdpwssd [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x53): /* vpdpwssds [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512_vnni);
-        generate_exception_if(evex.w, EXC_UD);
+        generate_exception_if(evex.w, X86_EXC_UD);
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_EVEX_F2(0x0f38, 0x72): /* vcvtne2ps2bf16 [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -6078,7 +6056,7 @@ x86_emulate(
         /* fall through */
     case X86EMUL_OPC_EVEX_F3(0x0f38, 0x52): /* vdpbf16ps [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512_bf16);
-        generate_exception_if(evex.w, EXC_UD);
+        generate_exception_if(evex.w, X86_EXC_UD);
         op_bytes = 16 << evex.lr;
         goto avx512f_no_sae;
 
@@ -6089,13 +6067,13 @@ x86_emulate(
         op_bytes = 1 << ((!(b & 0x20) * 2) + (b & 1));
         /* fall through */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x46): /* vpsravd {x,y}mm/mem,{x,y}mm,{x,y}mm */
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         goto simd_0f_avx2;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x4d): /* vrcp14s{s,d} xmm/mem,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x4f): /* vrsqrt14s{s,d} xmm/mem,xmm,xmm{k} */
         host_and_vcpu_must_have(avx512f);
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         avx512_vlen_check(true);
         goto simd_zmm;
 
@@ -6104,19 +6082,19 @@ x86_emulate(
         host_and_vcpu_must_have(avx512_4vnniw);
         generate_exception_if((ea.type != OP_MEM || evex.w || evex.brs ||
                                evex.lr != 2),
-                              EXC_UD);
+                              X86_EXC_UD);
         op_mask = op_mask & 0xffff ? 0xf : 0;
         goto simd_zmm;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x8f): /* vpshufbitqmb [xyz]mm/mem,[xyz]mm,k{k} */
-        generate_exception_if(evex.w || !evex.r || !evex.R || evex.z, EXC_UD);
+        generate_exception_if(evex.w || !evex.r || !evex.R || evex.z, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x54): /* vpopcnt{b,w} [xyz]mm/mem,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512_bitalg);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x66): /* vpblendm{b,w} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512bw);
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         elem_bytes = 1 << evex.w;
         goto avx512f_no_sae;
 
@@ -6125,7 +6103,7 @@ x86_emulate(
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_VEX_66(0x0f38, 0x5a): /* vbroadcasti128 m128,ymm */
-        generate_exception_if(ea.type != OP_MEM || !vex.l || vex.w, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM || !vex.l || vex.w, X86_EXC_UD);
         goto simd_0f_avx2;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x62): /* vpexpand{b,w} [xyz]mm/mem,[xyz]mm{k} */
@@ -6138,7 +6116,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x8a): /* vcompressp{s,d} [xyz]mm,[xyz]mm/mem{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x8b): /* vpcompress{d,q} [xyz]mm,[xyz]mm/mem{k} */
         host_and_vcpu_must_have(avx512f);
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         avx512_vlen_check(false);
         /*
          * For the respective code below the main switch() to work we need to
@@ -6160,13 +6138,13 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_F2(0x0f38, 0x68): /* vp2intersect{d,q} [xyz]mm/mem,[xyz]mm,k+1 */
         host_and_vcpu_must_have(avx512_vp2intersect);
-        generate_exception_if(evex.opmsk || !evex.r || !evex.R, EXC_UD);
+        generate_exception_if(evex.opmsk || !evex.r || !evex.R, X86_EXC_UD);
         op_bytes = 16 << evex.lr;
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x70): /* vpshldvw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x72): /* vpshrdvw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        generate_exception_if(!evex.w, EXC_UD);
+        generate_exception_if(!evex.w, X86_EXC_UD);
         elem_bytes = 2;
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x71): /* vpshldv{d,q} [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -6181,14 +6159,14 @@ x86_emulate(
             host_and_vcpu_must_have(avx512_vbmi);
         else
             host_and_vcpu_must_have(avx512bw);
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         fault_suppression = false;
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x78): /* vpbroadcastb xmm/m8,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x79): /* vpbroadcastw xmm/m16,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512bw);
-        generate_exception_if(evex.w || evex.brs, EXC_UD);
+        generate_exception_if(evex.w || evex.brs, X86_EXC_UD);
         op_bytes = elem_bytes = 1 << (b & 1);
         /* See the comment at the avx512_broadcast label. */
         op_mask |= !(b & 1 ? !(uint32_t)op_mask : !op_mask);
@@ -6197,12 +6175,12 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x7a): /* vpbroadcastb r32,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x7b): /* vpbroadcastw r32,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512bw);
-        generate_exception_if(evex.w, EXC_UD);
+        generate_exception_if(evex.w, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x7c): /* vpbroadcast{d,q} reg,[xyz]mm{k} */
         generate_exception_if((ea.type != OP_REG || evex.brs ||
                                evex.reg != 0xf || !evex.RX),
-                              EXC_UD);
+                              X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         avx512_vlen_check(false);
         get_fpu(X86EMUL_FPU_zmm);
@@ -6228,21 +6206,21 @@ x86_emulate(
 
     case X86EMUL_OPC_66(0x0f38, 0x82): /* invpcid reg,m128 */
         vcpu_must_have(invpcid);
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
-        generate_exception_if(!mode_ring0(), EXC_GP, 0);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
+        generate_exception_if(!mode_ring0(), X86_EXC_GP, 0);
 
         if ( (rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp, 16,
                              ctxt)) != X86EMUL_OKAY )
             goto done;
 
-        generate_exception_if(mmvalp->xmm[0] & ~0xfff, EXC_GP, 0);
+        generate_exception_if(mmvalp->xmm[0] & ~0xfff, X86_EXC_GP, 0);
         dst.val = mode_64bit() ? *dst.reg : (uint32_t)*dst.reg;
 
         switch ( dst.val )
         {
         case X86_INVPCID_INDIV_ADDR:
              generate_exception_if(!is_canonical_address(mmvalp->xmm[1]),
-                                   EXC_GP, 0);
+                                   X86_EXC_GP, 0);
              /* fall through */
         case X86_INVPCID_SINGLE_CTXT:
              if ( !mode_64bit() || !ops->read_cr )
@@ -6250,13 +6228,13 @@ x86_emulate(
              else if ( (rc = ops->read_cr(4, &cr4, ctxt)) != X86EMUL_OKAY )
                  goto done;
              generate_exception_if(!(cr4 & X86_CR4_PCIDE) && mmvalp->xmm[0],
-                                   EXC_GP, 0);
+                                   X86_EXC_GP, 0);
              break;
         case X86_INVPCID_ALL_INCL_GLOBAL:
         case X86_INVPCID_ALL_NON_GLOBAL:
              break;
         default:
-             generate_exception(EXC_GP, 0);
+             generate_exception(X86_EXC_GP, 0);
         }
 
         fail_if(!ops->tlb_op);
@@ -6271,14 +6249,14 @@ x86_emulate(
 #ifndef X86EMUL_NO_SIMD
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x83): /* vpmultishiftqb [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        generate_exception_if(!evex.w, EXC_UD);
+        generate_exception_if(!evex.w, X86_EXC_UD);
         host_and_vcpu_must_have(avx512_vbmi);
         fault_suppression = false;
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_VEX_66(0x0f38, 0x8c): /* vpmaskmov{d,q} mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x8e): /* vpmaskmov{d,q} {x,y}mm,{x,y}mm,mem */
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
         host_and_vcpu_must_have(avx2);
         elem_bytes = 4 << vex.w;
         goto vmaskmov;
@@ -6299,8 +6277,8 @@ x86_emulate(
         ASSERT(ea.type == OP_MEM);
         generate_exception_if(modrm_reg == state->sib_index ||
                               modrm_reg == mask_reg ||
-                              state->sib_index == mask_reg, EXC_UD);
-        generate_exception_if(!cpu_has_avx, EXC_UD);
+                              state->sib_index == mask_reg, X86_EXC_UD);
+        generate_exception_if(!cpu_has_avx, X86_EXC_UD);
         vcpu_must_have(avx2);
         get_fpu(X86EMUL_FPU_ymm);
 
@@ -6419,7 +6397,7 @@ x86_emulate(
         generate_exception_if((!evex.opmsk || evex.brs || evex.z ||
                                evex.reg != 0xf ||
                                modrm_reg == state->sib_index),
-                              EXC_UD);
+                              X86_EXC_UD);
         avx512_vlen_check(false);
         host_and_vcpu_must_have(avx512f);
         get_fpu(X86EMUL_FPU_zmm);
@@ -6569,7 +6547,7 @@ x86_emulate(
         host_and_vcpu_must_have(avx512_4fmaps);
         generate_exception_if((ea.type != OP_MEM || evex.w || evex.brs ||
                                evex.lr != 2),
-                              EXC_UD);
+                              X86_EXC_UD);
         op_mask = op_mask & 0xffff ? 0xf : 0;
         goto simd_zmm;
 
@@ -6578,7 +6556,7 @@ x86_emulate(
         host_and_vcpu_must_have(avx512_4fmaps);
         generate_exception_if((ea.type != OP_MEM || evex.w || evex.brs ||
                                evex.lr == 3),
-                              EXC_UD);
+                              X86_EXC_UD);
         op_mask = op_mask & 1 ? 0xf : 0;
         goto simd_zmm;
 
@@ -6599,7 +6577,7 @@ x86_emulate(
         generate_exception_if((!evex.opmsk || evex.brs || evex.z ||
                                evex.reg != 0xf ||
                                modrm_reg == state->sib_index),
-                              EXC_UD);
+                              X86_EXC_UD);
         avx512_vlen_check(false);
         host_and_vcpu_must_have(avx512f);
         get_fpu(X86EMUL_FPU_zmm);
@@ -6706,7 +6684,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xb4): /* vpmadd52luq [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xb5): /* vpmadd52huq [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512_ifma);
-        generate_exception_if(!evex.w, EXC_UD);
+        generate_exception_if(!evex.w, X86_EXC_UD);
         goto avx512f_no_sae;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xc6):
@@ -6723,7 +6701,7 @@ x86_emulate(
         ASSERT(ea.type == OP_MEM);
         generate_exception_if((!cpu_has_avx512f || !evex.opmsk || evex.brs ||
                                evex.z || evex.reg != 0xf || evex.lr != 2),
-                              EXC_UD);
+                              X86_EXC_UD);
 
         switch ( modrm_reg & 7 )
         {
@@ -6734,7 +6712,7 @@ x86_emulate(
             vcpu_must_have(avx512pf);
             break;
         default:
-            generate_exception(EXC_UD);
+            generate_exception(X86_EXC_UD);
         }
 
         get_fpu(X86EMUL_FPU_zmm);
@@ -6815,7 +6793,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xcc): /* vrsqrt28p{s,d} zmm/mem,zmm{k} */
         host_and_vcpu_must_have(avx512er);
         generate_exception_if((ea.type != OP_REG || !evex.brs) && evex.lr != 2,
-                              EXC_UD);
+                              X86_EXC_UD);
         goto simd_zmm;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xcb): /* vrcp28s{s,d} xmm/mem,xmm,xmm{k} */
@@ -6829,12 +6807,12 @@ x86_emulate(
 
     case X86EMUL_OPC_VEX_66(0x0f38, 0xcf):  /* vgf2p8mulb {x,y}mm/mem,{x,y}mm,{x,y}mm */
         host_and_vcpu_must_have(gfni);
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         goto simd_0f_avx;
 
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xcf): /* vgf2p8mulb [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         host_and_vcpu_must_have(gfni);
-        generate_exception_if(evex.w || evex.brs, EXC_UD);
+        generate_exception_if(evex.w || evex.brs, X86_EXC_UD);
         elem_bytes = 1;
         goto avx512f_no_sae;
 
@@ -6853,7 +6831,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xde): /* vaesdec [xyz]mm/mem,[xyz]mm,[xyz]mm */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0xdf): /* vaesdeclast [xyz]mm/mem,[xyz]mm,[xyz]mm */
         host_and_vcpu_must_have(vaes);
-        generate_exception_if(evex.brs || evex.opmsk, EXC_UD);
+        generate_exception_if(evex.brs || evex.opmsk, X86_EXC_UD);
         goto avx512f_no_sae;
 
 #endif /* !X86EMUL_NO_SIMD */
@@ -6926,7 +6904,7 @@ x86_emulate(
             host_and_vcpu_must_have(bmi2);
         else
             host_and_vcpu_must_have(bmi1);
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
 
         buf[0] = 0xc4;
         *pvex = vex;
@@ -6960,7 +6938,7 @@ x86_emulate(
             goto unrecognized_insn;
         }
 
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
 
         buf[0] = 0xc4;
         *pvex = vex;
@@ -7013,7 +6991,7 @@ x86_emulate(
 
     case X86EMUL_OPC_VEX_F2(0x0f38, 0xf6): /* mulx r/m,r,r */
         vcpu_must_have(bmi2);
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         ea.reg = decode_vex_gpr(vex.reg, &_regs, ctxt);
         if ( mode_64bit() && vex.w )
             asm ( "mulq %3" : "=a" (*ea.reg), "=d" (dst.val)
@@ -7025,10 +7003,10 @@ x86_emulate(
 
     case X86EMUL_OPC_66(0x0f38, 0xf8): /* movdir64b r,m512 */
         host_and_vcpu_must_have(movdir64b);
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
         src.val = truncate_ea(*dst.reg);
         generate_exception_if(!is_aligned(x86_seg_es, src.val, 64, ctxt, ops),
-                              EXC_GP, 0);
+                              X86_EXC_GP, 0);
         fail_if(!ops->blk);
         state->blk = blk_movdir;
         BUILD_BUG_ON(sizeof(*mmvalp) < 64);
@@ -7043,11 +7021,11 @@ x86_emulate(
     case X86EMUL_OPC_F2(0x0f38, 0xf8): /* enqcmd r,m512 */
     case X86EMUL_OPC_F3(0x0f38, 0xf8): /* enqcmds r,m512 */
         host_and_vcpu_must_have(enqcmd);
-        generate_exception_if(ea.type != OP_MEM, EXC_UD);
-        generate_exception_if(vex.pfx != vex_f2 && !mode_ring0(), EXC_GP, 0);
+        generate_exception_if(ea.type != OP_MEM, X86_EXC_UD);
+        generate_exception_if(vex.pfx != vex_f2 && !mode_ring0(), X86_EXC_GP, 0);
         src.val = truncate_ea(*dst.reg);
         generate_exception_if(!is_aligned(x86_seg_es, src.val, 64, ctxt, ops),
-                              EXC_GP, 0);
+                              X86_EXC_GP, 0);
         fail_if(!ops->blk);
         BUILD_BUG_ON(sizeof(*mmvalp) < 64);
         if ( (rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp, 64,
@@ -7055,16 +7033,16 @@ x86_emulate(
             goto done;
         if ( vex.pfx == vex_f2 ) /* enqcmd */
         {
-            generate_exception_if(mmvalp->data32[0], EXC_GP, 0);
+            generate_exception_if(mmvalp->data32[0], X86_EXC_GP, 0);
             fail_if(!ops->read_msr);
             if ( (rc = ops->read_msr(MSR_PASID, &msr_val,
                                      ctxt)) != X86EMUL_OKAY )
                 goto done;
-            generate_exception_if(!(msr_val & PASID_VALID), EXC_GP, 0);
+            generate_exception_if(!(msr_val & PASID_VALID), X86_EXC_GP, 0);
             mmvalp->data32[0] = MASK_EXTR(msr_val, PASID_PASID_MASK);
         }
         else
-            generate_exception_if(mmvalp->data32[0] & 0x7ff00000, EXC_GP, 0);
+            generate_exception_if(mmvalp->data32[0] & 0x7ff00000, X86_EXC_GP, 0);
         state->blk = blk_enqcmd;
         if ( (rc = ops->blk(x86_seg_es, src.val, mmvalp, 64, &_regs.eflags,
                             state, ctxt)) != X86EMUL_OKAY )
@@ -7074,7 +7052,7 @@ x86_emulate(
 
     case X86EMUL_OPC(0x0f38, 0xf9): /* movdiri mem,r */
         host_and_vcpu_must_have(movdiri);
-        generate_exception_if(dst.type != OP_MEM, EXC_UD);
+        generate_exception_if(dst.type != OP_MEM, X86_EXC_UD);
         fail_if(!ops->blk);
         state->blk = blk_movdir;
         if ( (rc = ops->blk(dst.mem.seg, dst.mem.off, &src.val, op_bytes,
@@ -7087,37 +7065,37 @@ x86_emulate(
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x00): /* vpermq $imm8,ymm/m256,ymm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x01): /* vpermpd $imm8,ymm/m256,ymm */
-        generate_exception_if(!vex.l || !vex.w, EXC_UD);
+        generate_exception_if(!vex.l || !vex.w, X86_EXC_UD);
         goto simd_0f_imm8_avx2;
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x00): /* vpermq $imm8,{y,z}mm/mem,{y,z}mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x01): /* vpermpd $imm8,{y,z}mm/mem,{y,z}mm{k} */
-        generate_exception_if(!evex.lr || !evex.w, EXC_UD);
+        generate_exception_if(!evex.lr || !evex.w, X86_EXC_UD);
         fault_suppression = false;
         goto avx512f_imm8_no_sae;
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x38): /* vinserti128 $imm8,xmm/m128,ymm,ymm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x39): /* vextracti128 $imm8,ymm,xmm/m128 */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x46): /* vperm2i128 $imm8,ymm/m256,ymm,ymm */
-        generate_exception_if(!vex.l, EXC_UD);
+        generate_exception_if(!vex.l, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x02): /* vpblendd $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         goto simd_0f_imm8_avx2;
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x06): /* vperm2f128 $imm8,ymm/m256,ymm,ymm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x18): /* vinsertf128 $imm8,xmm/m128,ymm,ymm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x19): /* vextractf128 $imm8,ymm,xmm/m128 */
-        generate_exception_if(!vex.l, EXC_UD);
+        generate_exception_if(!vex.l, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x04): /* vpermilps $imm8,{x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x05): /* vpermilpd $imm8,{x,y}mm/mem,{x,y}mm */
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         goto simd_0f_imm8_avx;
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x04): /* vpermilps $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x05): /* vpermilpd $imm8,[xyz]mm/mem,[xyz]mm{k} */
-        generate_exception_if(evex.w != (b & 1), EXC_UD);
+        generate_exception_if(evex.w != (b & 1), X86_EXC_UD);
         fault_suppression = false;
         goto avx512f_imm8_no_sae;
 
@@ -7136,12 +7114,12 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x0a): /* vrndscaless $imm8,xmm/mem,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x0b): /* vrndscalesd $imm8,xmm/mem,xmm,xmm{k} */
-        generate_exception_if(ea.type != OP_REG && evex.brs, EXC_UD);
+        generate_exception_if(ea.type != OP_REG && evex.brs, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x08): /* vrndscaleps $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x09): /* vrndscalepd $imm8,[xyz]mm/mem,[xyz]mm{k} */
         host_and_vcpu_must_have(avx512f);
-        generate_exception_if(evex.w != (b & 1), EXC_UD);
+        generate_exception_if(evex.w != (b & 1), X86_EXC_UD);
         avx512_vlen_check(b & 2);
         goto simd_imm8_zmm;
 
@@ -7177,7 +7155,7 @@ x86_emulate(
 #ifndef X86EMUL_NO_SIMD
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x42): /* vdbpsadbw $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        generate_exception_if(evex.w, EXC_UD);
+        generate_exception_if(evex.w, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x0f): /* vpalignr $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         fault_suppression = false;
@@ -7225,7 +7203,7 @@ x86_emulate(
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x15): /* vpextrw $imm8,xmm,r/m */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x16): /* vpextr{d,q} $imm8,xmm,r/m */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x17): /* vextractps $imm8,xmm,r/m */
-        generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
+        generate_exception_if(vex.l || vex.reg != 0xf, X86_EXC_UD);
         host_and_vcpu_must_have(avx);
         get_fpu(X86EMUL_FPU_ymm);
 
@@ -7237,7 +7215,7 @@ x86_emulate(
         goto pextr;
 
     case X86EMUL_OPC_EVEX_66(0x0f, 0xc5):   /* vpextrw $imm8,xmm,reg */
-        generate_exception_if(ea.type != OP_REG, EXC_UD);
+        generate_exception_if(ea.type != OP_REG, X86_EXC_UD);
         /* Convert to alternative encoding: We want to use a memory operand. */
         evex.opcx = ext_0f3a;
         b = 0x15;
@@ -7251,7 +7229,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x17): /* vextractps $imm8,xmm,r/m */
         generate_exception_if((evex.lr || evex.reg != 0xf || !evex.RX ||
                                evex.opmsk || evex.brs),
-                              EXC_UD);
+                              X86_EXC_UD);
         if ( !(b & 2) )
             host_and_vcpu_must_have(avx512bw);
         else if ( !(b & 1) )
@@ -7272,13 +7250,13 @@ x86_emulate(
                                             /* vextracti64x2 $imm8,{y,z}mm,xmm/m128{k} */
         if ( evex.w )
             host_and_vcpu_must_have(avx512dq);
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x23): /* vshuff32x4 $imm8,{y,z}mm/mem,{y,z}mm,{y,z}mm{k} */
                                             /* vshuff64x2 $imm8,{y,z}mm/mem,{y,z}mm,{y,z}mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x43): /* vshufi32x4 $imm8,{y,z}mm/mem,{y,z}mm,{y,z}mm{k} */
                                             /* vshufi64x2 $imm8,{y,z}mm/mem,{y,z}mm,{y,z}mm{k} */
-        generate_exception_if(!evex.lr, EXC_UD);
+        generate_exception_if(!evex.lr, X86_EXC_UD);
         fault_suppression = false;
         goto avx512f_imm8_no_sae;
 
@@ -7292,7 +7270,7 @@ x86_emulate(
                                             /* vextracti64x4 $imm8,zmm,ymm/m256{k} */
         if ( !evex.w )
             host_and_vcpu_must_have(avx512dq);
-        generate_exception_if(evex.lr != 2 || evex.brs, EXC_UD);
+        generate_exception_if(evex.lr != 2 || evex.brs, X86_EXC_UD);
         fault_suppression = false;
         goto avx512f_imm8_no_sae;
 
@@ -7306,14 +7284,14 @@ x86_emulate(
         {
             generate_exception_if((evex.w || evex.reg != 0xf || !evex.RX ||
                                    (ea.type != OP_REG && (evex.z || evex.brs))),
-                                  EXC_UD);
+                                  X86_EXC_UD);
             host_and_vcpu_must_have(avx512f);
             avx512_vlen_check(false);
             opc = init_evex(stub);
         }
         else
         {
-            generate_exception_if(vex.w || vex.reg != 0xf, EXC_UD);
+            generate_exception_if(vex.w || vex.reg != 0xf, X86_EXC_UD);
             host_and_vcpu_must_have(f16c);
             opc = init_prefixes(stub);
         }
@@ -7395,12 +7373,12 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x1f): /* vpcmp{d,q} $imm8,[xyz]mm/mem,[xyz]mm,k{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x3e): /* vpcmpu{b,w} $imm8,[xyz]mm/mem,[xyz]mm,k{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x3f): /* vpcmp{b,w} $imm8,[xyz]mm/mem,[xyz]mm,k{k} */
-        generate_exception_if(!evex.r || !evex.R || evex.z, EXC_UD);
+        generate_exception_if(!evex.r || !evex.R || evex.z, X86_EXC_UD);
         if ( !(b & 0x20) )
             goto avx512f_imm8_no_sae;
     avx512bw_imm:
         host_and_vcpu_must_have(avx512bw);
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         elem_bytes = 1 << evex.w;
         avx512_vlen_check(false);
         goto simd_imm8_zmm;
@@ -7416,7 +7394,7 @@ x86_emulate(
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x20): /* vpinsrb $imm8,r32/m8,xmm,xmm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x22): /* vpinsr{d,q} $imm8,r/m,xmm,xmm */
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         if ( !mode_64bit() )
             vex.w = 0;
         memcpy(mmvalp, &src.val, src.bytes);
@@ -7434,13 +7412,13 @@ x86_emulate(
         op_bytes = 4;
         /* fall through */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x41): /* vdppd $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         goto simd_0f_imm8_avx;
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x21): /* vinsertps $imm8,xmm/m32,xmm,xmm */
         host_and_vcpu_must_have(avx512f);
         generate_exception_if(evex.lr || evex.w || evex.opmsk || evex.brs,
-                              EXC_UD);
+                              X86_EXC_UD);
         op_bytes = 4;
         goto simd_imm8_zmm;
 
@@ -7462,7 +7440,7 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x27): /* vgetmants{s,d} $imm8,xmm/mem,xmm,xmm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x55): /* vfixupimms{s,d} $imm8,xmm/mem,xmm,xmm{k} */
         host_and_vcpu_must_have(avx512f);
-        generate_exception_if(ea.type != OP_REG && evex.brs, EXC_UD);
+        generate_exception_if(ea.type != OP_REG && evex.brs, X86_EXC_UD);
         if ( !evex.brs )
             avx512_vlen_check(true);
         goto simd_imm8_zmm;
@@ -7473,7 +7451,7 @@ x86_emulate(
             host_and_vcpu_must_have(avx512dq);
     opmask_shift_imm:
         generate_exception_if(vex.l || !vex.r || vex.reg != 0xf ||
-                              ea.type != OP_REG, EXC_UD);
+                              ea.type != OP_REG, X86_EXC_UD);
         host_and_vcpu_must_have(avx512f);
         get_fpu(X86EMUL_FPU_opmask);
         op_bytes = 1; /* Any non-zero value will do. */
@@ -7495,7 +7473,7 @@ x86_emulate(
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x44): /* vpclmulqdq $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm */
         host_and_vcpu_must_have(vpclmulqdq);
-        generate_exception_if(evex.brs || evex.opmsk, EXC_UD);
+        generate_exception_if(evex.brs || evex.opmsk, X86_EXC_UD);
         goto avx512f_imm8_no_sae;
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x48): /* vpermil2ps $imm,{x,y}mm/mem,{x,y}mm,{x,y}mm,{x,y}mm */
@@ -7507,11 +7485,11 @@ x86_emulate(
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x4a): /* vblendvps {x,y}mm,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x4b): /* vblendvpd {x,y}mm,{x,y}mm/mem,{x,y}mm,{x,y}mm */
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         goto simd_0f_imm8_avx;
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x4c): /* vpblendvb {x,y}mm,{x,y}mm/mem,{x,y}mm,{x,y}mm */
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         goto simd_0f_int_imm8;
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x5c): /* vfmaddsubps {x,y}mm,{x,y}mm/mem,{x,y}mm,{x,y}mm */
@@ -7572,7 +7550,7 @@ x86_emulate(
         }
         else
         {
-            generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
+            generate_exception_if(vex.l || vex.reg != 0xf, X86_EXC_UD);
             host_and_vcpu_must_have(avx);
             get_fpu(X86EMUL_FPU_ymm);
         }
@@ -7623,16 +7601,16 @@ x86_emulate(
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x66): /* vfpclassp{s,d} $imm8,[xyz]mm/mem,k{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x67): /* vfpclasss{s,d} $imm8,xmm/mem,k{k} */
         host_and_vcpu_must_have(avx512dq);
-        generate_exception_if(!evex.r || !evex.R || evex.z, EXC_UD);
+        generate_exception_if(!evex.r || !evex.R || evex.z, X86_EXC_UD);
         if ( !(b & 1) )
             goto avx512f_imm8_no_sae;
-        generate_exception_if(evex.brs, EXC_UD);
+        generate_exception_if(evex.brs, X86_EXC_UD);
         avx512_vlen_check(true);
         goto simd_imm8_zmm;
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x70): /* vpshldw $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x72): /* vpshrdw $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
-        generate_exception_if(!evex.w, EXC_UD);
+        generate_exception_if(!evex.w, X86_EXC_UD);
         elem_bytes = 2;
         /* fall through */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x71): /* vpshld{d,q} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -7653,13 +7631,13 @@ x86_emulate(
     case X86EMUL_OPC_VEX_66(0x0f3a, 0xce):  /* vgf2p8affineqb $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0xcf):  /* vgf2p8affineinvqb $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
         host_and_vcpu_must_have(gfni);
-        generate_exception_if(!vex.w, EXC_UD);
+        generate_exception_if(!vex.w, X86_EXC_UD);
         goto simd_0f_imm8_avx;
 
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0xce): /* vgf2p8affineqb $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0xcf): /* vgf2p8affineinvqb $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         host_and_vcpu_must_have(gfni);
-        generate_exception_if(!evex.w, EXC_UD);
+        generate_exception_if(!evex.w, X86_EXC_UD);
         fault_suppression = false;
         goto avx512f_imm8_no_sae;
 
@@ -7668,14 +7646,14 @@ x86_emulate(
         host_and_vcpu_must_have(aesni);
         if ( vex.opcx == vex_none )
             goto simd_0f3a_common;
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         goto simd_0f_imm8_avx;
 
 #endif /* X86EMUL_NO_SIMD */
 
     case X86EMUL_OPC_VEX_F2(0x0f3a, 0xf0): /* rorx imm,r/m,r */
         vcpu_must_have(bmi2);
-        generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
+        generate_exception_if(vex.l || vex.reg != 0xf, X86_EXC_UD);
         if ( ea.type == OP_REG )
             src.val = *ea.reg;
         else if ( (rc = read_ulong(ea.mem.seg, ea.mem.off, &src.val, op_bytes,
@@ -7713,11 +7691,11 @@ x86_emulate(
     case X86EMUL_OPC_XOP(08, 0xed): /* vpcomuw $imm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0xee): /* vpcomud $imm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0xef): /* vpcomuq $imm,xmm/m128,xmm,xmm */
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_XOP(08, 0xa3): /* vpperm xmm/m128,xmm,xmm,xmm */
                                     /* vpperm xmm,xmm/m128,xmm,xmm */
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_XOP(08, 0xa2): /* vpcmov {x,y}mm/mem,{x,y}mm,{x,y}mm,{x,y}mm */
                                     /* vpcmov {x,y}mm,{x,y}mm/mem,{x,y}mm,{x,y}mm */
@@ -7747,7 +7725,7 @@ x86_emulate(
         uint8_t *buf = get_stub(stub);
         typeof(vex) *pxop = container_of(buf + 1, typeof(vex), raw[0]);
 
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
 
         buf[0] = 0x8f;
         *pxop = vex;
@@ -7781,7 +7759,7 @@ x86_emulate(
         case 0: /* llwpcb r */
         case 1: /* slwpcb r */
             /* LWP is unsupported, so produce #UD unconditionally. */
-            generate_exception(EXC_UD);
+            generate_exception(X86_EXC_UD);
         }
         goto unrecognized_insn;
 
@@ -7789,12 +7767,12 @@ x86_emulate(
 
     case X86EMUL_OPC_XOP(09, 0x82): /* vfrczss xmm/m128,xmm */
     case X86EMUL_OPC_XOP(09, 0x83): /* vfrczsd xmm/m128,xmm */
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_XOP(09, 0x80): /* vfrczps {x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_XOP(09, 0x81): /* vfrczpd {x,y}mm/mem,{x,y}mm */
         host_and_vcpu_must_have(xop);
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         goto simd_0f_ymm;
 
     case X86EMUL_OPC_XOP(09, 0xc1): /* vphaddbw xmm/m128,xmm */
@@ -7812,7 +7790,7 @@ x86_emulate(
     case X86EMUL_OPC_XOP(09, 0xe2): /* vphsubwd xmm/m128,xmm */
     case X86EMUL_OPC_XOP(09, 0xe3): /* vphsubdq xmm/m128,xmm */
     case X86EMUL_OPC_XOP(09, 0xe1): /* vphsubbw xmm/m128,xmm */
-        generate_exception_if(vex.w, EXC_UD);
+        generate_exception_if(vex.w, X86_EXC_UD);
         /* fall through */
     case X86EMUL_OPC_XOP(09, 0x90): /* vprotb xmm/m128,xmm,xmm */
                                     /* vprotb xmm,xmm/m128,xmm */
@@ -7838,7 +7816,7 @@ x86_emulate(
                                     /* vpshad xmm,xmm/m128,xmm */
     case X86EMUL_OPC_XOP(09, 0x9b): /* vpshaq xmm/m128,xmm,xmm */
                                     /* vpshaq xmm,xmm/m128,xmm */
-        generate_exception_if(vex.l, EXC_UD);
+        generate_exception_if(vex.l, X86_EXC_UD);
         host_and_vcpu_must_have(xop);
         goto simd_0f_ymm;
 
@@ -7850,7 +7828,7 @@ x86_emulate(
         typeof(vex) *pxop = container_of(buf + 1, typeof(vex), raw[0]);
 
         host_and_vcpu_must_have(tbm);
-        generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
+        generate_exception_if(vex.l || vex.reg != 0xf, X86_EXC_UD);
 
         if ( ea.type == OP_REG )
             src.val = *ea.reg;
@@ -7879,7 +7857,7 @@ x86_emulate(
         case 0: /* lwpins $imm32,r/m,r */
         case 1: /* lwpval $imm32,r/m,r */
             /* LWP is unsupported, so produce #UD unconditionally. */
-            generate_exception(EXC_UD);
+            generate_exception(X86_EXC_UD);
         }
         goto unrecognized_insn;
 
@@ -7946,10 +7924,10 @@ x86_emulate(
     }
     else if ( state->simd_size )
     {
-        generate_exception_if(!op_bytes, EXC_UD);
+        generate_exception_if(!op_bytes, X86_EXC_UD);
         generate_exception_if((vex.opcx && (d & TwoOp) &&
                                (vex.reg != 0xf || (evex_encoded() && !evex.RX))),
-                              EXC_UD);
+                              X86_EXC_UD);
 
         if ( !opc )
             BUG();
@@ -7988,7 +7966,7 @@ x86_emulate(
             generate_exception_if(!(mxcsr & MXCSR_MM) &&
                                   !is_aligned(ea.mem.seg, ea.mem.off, op_bytes,
                                               ctxt, ops),
-                                  EXC_GP, 0);
+                                  X86_EXC_GP, 0);
 
             EXPECT(elem_bytes > 0);
             if ( evex.brs )
@@ -8159,14 +8137,15 @@ x86_emulate(
 
 #ifdef __XEN__
  emulation_stub_failure:
-    generate_exception_if(stub_exn.info.fields.trapnr == EXC_MF, EXC_MF);
-    if ( stub_exn.info.fields.trapnr == EXC_XM )
+    if ( stub_exn.info.fields.trapnr == X86_EXC_MF )
+        generate_exception(X86_EXC_MF);
+    if ( stub_exn.info.fields.trapnr == X86_EXC_XM )
     {
         unsigned long cr4;
 
         if ( !ops->read_cr || ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
             cr4 = X86_CR4_OSXMMEXCPT;
-        generate_exception(cr4 & X86_CR4_OSXMMEXCPT ? EXC_XM : EXC_UD);
+        generate_exception(cr4 & X86_CR4_OSXMMEXCPT ? X86_EXC_XM : X86_EXC_UD);
     }
     gprintk(XENLOG_WARNING,
             "exception %u (ec=%04x) in emulation stub (line %u)\n",
@@ -8174,7 +8153,8 @@ x86_emulate(
             stub_exn.line);
     gprintk(XENLOG_INFO, "  stub: %"__stringify(MAX_INST_LEN)"ph\n",
             stub.func);
-    generate_exception_if(stub_exn.info.fields.trapnr == EXC_UD, EXC_UD);
+    if ( stub_exn.info.fields.trapnr == X86_EXC_UD )
+        generate_exception(X86_EXC_UD);
     domain_crash(current->domain);
 #endif
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 12:51:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 12:51:16 +0000
Message-ID: <44079a26-4631-135e-6593-78f7c2d2319f@suse.com>
Date: Thu, 6 Apr 2023 14:50:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/emul: Use existing X86_EXC_* constants
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230406121753.2205968-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230406121753.2205968-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 06.04.2023 14:17, Andrew Cooper wrote:
> ... rather than having separate definitions locally.  EXC_HAS_EC in particular
> is missing #CP, #VC and #SX vs X86_EXC_HAVE_EC.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

Yet more re-basing for me to do ... But yes, it needs to happen at some
point, and I guess there never really is a good time.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 12:55:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 12:55:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518893.805877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkP9Q-0007qi-2b; Thu, 06 Apr 2023 12:55:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518893.805877; Thu, 06 Apr 2023 12:55:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkP9P-0007qb-WC; Thu, 06 Apr 2023 12:55:40 +0000
Received: by outflank-mailman (input) for mailman id 518893;
 Thu, 06 Apr 2023 12:55:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w3yY=75=citrix.com=prvs=453e3c94d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkP9P-0007qV-Cf
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 12:55:39 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 52aa5002-d47a-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 14:55:37 +0200 (CEST)
Received: from mail-bn8nam12lp2171.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Apr 2023 08:55:34 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB6055.namprd03.prod.outlook.com (2603:10b6:208:31b::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.30; Thu, 6 Apr
 2023 12:55:32 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Thu, 6 Apr 2023
 12:55:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <18f32ddf-a62f-d3b9-5b57-f808b5b72f06@citrix.com>
Date: Thu, 6 Apr 2023 13:55:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/emul: Use existing X86_EXC_* constants
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230406121753.2205968-1-andrew.cooper3@citrix.com>
 <44079a26-4631-135e-6593-78f7c2d2319f@suse.com>
In-Reply-To: <44079a26-4631-135e-6593-78f7c2d2319f@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0462.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::18) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 06/04/2023 1:50 pm, Jan Beulich wrote:
> On 06.04.2023 14:17, Andrew Cooper wrote:
>> ... rather than having separate definitions locally.  EXC_HAS_EC in particular
>> is missing #CP, #VC and #SX vs X86_EXC_HAVE_EC.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> Yet more re-basing for me to do ...

And me... I'm somewhat alarmed by how many branch collisions I've had
with the TRAP_* change.

>  But yes, it needs to happen at some
> point, and I guess there never really is a good time.

Yeah, but doing it together with the TRAP_* change is going to be least
disruptive (overall) to others.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 13:27:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 13:27:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518897.805887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkPe3-0002nq-Gv; Thu, 06 Apr 2023 13:27:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518897.805887; Thu, 06 Apr 2023 13:27:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkPe3-0002nj-ED; Thu, 06 Apr 2023 13:27:19 +0000
Received: by outflank-mailman (input) for mailman id 518897;
 Thu, 06 Apr 2023 13:27:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w3yY=75=citrix.com=prvs=453e3c94d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkPe1-0002nd-NT
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 13:27:17 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be64ea88-d47e-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 15:27:16 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/svm: Provide EXITINFO decodes for Exceptions/NPT intercepts
Date: Thu, 6 Apr 2023 14:27:08 +0100
Message-ID: <20230406132708.2251000-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Exception and NPT intercepts have almost the same layout, but NPT uses bits
above 31 in the error code, and the name for exitinfo2 really does want to
distinguish between cr2 and gpa.

In nsvm_vcpu_vmexit_inject(), rearrange VMEXIT_NPF to fall through instead of
repeating the exitinfo1 write.  Use the fallthrough pseudo-keyword instead of
a comment.

In VMEXIT_NPF, as we're editing the printk() anyway, switch to using the newer
domain_crash() form.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/hvm/svm/nestedsvm.c        |  9 +++------
 xen/arch/x86/hvm/svm/svm.c              | 23 ++++++++++-------------
 xen/arch/x86/include/asm/hvm/svm/vmcb.h | 10 ++++++++++
 3 files changed, 23 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 2003f28f66f4..51e437e73a20 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -824,14 +824,11 @@ nsvm_vcpu_vmexit_inject(struct vcpu *v, struct cpu_user_regs *regs,
                 ns_vmcb->exit_int_info = ns_vmcb->event_inj;
             break;
         case VMEXIT_EXCEPTION_PF:
-            ns_vmcb->_cr2 = ns_vmcb->exitinfo2;
-            /* fall through */
+            ns_vmcb->_cr2 = ns_vmcb->ei.exc.cr2;
+            fallthrough;
         case VMEXIT_NPF:
-            /* PF error code */
-            ns_vmcb->exitinfo1 = svm->ns_vmexit.exitinfo1;
-            /* fault address */
             ns_vmcb->exitinfo2 = svm->ns_vmexit.exitinfo2;
-            break;
+            fallthrough;
         case VMEXIT_EXCEPTION_NP:
         case VMEXIT_EXCEPTION_SS:
         case VMEXIT_EXCEPTION_GP:
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 1b32ef77b643..17359047b396 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2808,9 +2808,9 @@ void svm_vmexit_handler(void)
 
     case VMEXIT_EXCEPTION_PF:
     {
-        unsigned long va;
-        va = vmcb->exitinfo2;
-        regs->error_code = vmcb->exitinfo1;
+        unsigned long va = vmcb->ei.exc.cr2;
+
+        regs->error_code = vmcb->ei.exc.ec;
         HVM_DBG_LOG(DBG_LEVEL_VMMU,
                     "eax=%lx, ebx=%lx, ecx=%lx, edx=%lx, esi=%lx, edi=%lx",
                     regs->rax, regs->rbx, regs->rcx,
@@ -2838,7 +2838,7 @@ void svm_vmexit_handler(void)
 
     case VMEXIT_EXCEPTION_AC:
         HVMTRACE_1D(TRAP, X86_EXC_AC);
-        hvm_inject_hw_exception(X86_EXC_AC, vmcb->exitinfo1);
+        hvm_inject_hw_exception(X86_EXC_AC, vmcb->ei.exc.ec);
         break;
 
     case VMEXIT_EXCEPTION_UD:
@@ -3051,18 +3051,15 @@ void svm_vmexit_handler(void)
     case VMEXIT_NPF:
         if ( cpu_has_svm_decode )
             v->arch.hvm.svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
-        rc = vmcb->exitinfo1 & PFEC_page_present
-             ? p2m_pt_handle_deferred_changes(vmcb->exitinfo2) : 0;
+        rc = vmcb->ei.npt.ec & PFEC_page_present
+             ? p2m_pt_handle_deferred_changes(vmcb->ei.npt.gpa) : 0;
         if ( rc == 0 )
             /* If no recal adjustments were being made - handle this fault */
-            svm_do_nested_pgfault(v, regs, vmcb->exitinfo1, vmcb->exitinfo2);
+            svm_do_nested_pgfault(v, regs, vmcb->ei.npt.ec, vmcb->ei.npt.gpa);
         else if ( rc < 0 )
-        {
-            printk(XENLOG_G_ERR
-                   "%pv: Error %d handling NPF (gpa=%08lx ec=%04lx)\n",
-                   v, rc, vmcb->exitinfo2, vmcb->exitinfo1);
-            domain_crash(v->domain);
-        }
+            domain_crash(v->domain,
+                         "%pv: Error %d handling NPF (gpa=%08lx ec=%04lx)\n",
+                         v, rc, vmcb->ei.npt.gpa, vmcb->ei.npt.ec);
         v->arch.hvm.svm.cached_insn_len = 0;
         break;
 
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index 6cd1c4cfe4fa..05b3fb82bd12 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -436,6 +436,12 @@ struct vmcb_struct {
             uint64_t exitinfo2; /* offset 0x80 */
         };
         union {
+            struct {
+                uint32_t ec; /* #NP, #SS, #GP, #PF, #AC */
+                uint32_t :32;
+
+                uint64_t cr2; /* #PF */
+            } exc;
             struct {
                 bool     in:1;
                 bool     :1;
@@ -455,6 +461,10 @@ struct vmcb_struct {
                 uint64_t :59;
                 bool     mov_insn:1; /* MOV, as opposed to LMSW, CLTS, etc */
             } mov_cr;
+            struct {
+                uint64_t ec;
+                uint64_t gpa;
+            } npt;
             struct {
                 uint16_t sel;
                 uint64_t :48;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 06 14:15:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 14:15:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518903.805898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkQOs-000894-5v; Thu, 06 Apr 2023 14:15:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518903.805898; Thu, 06 Apr 2023 14:15:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkQOs-00088x-1r; Thu, 06 Apr 2023 14:15:42 +0000
Received: by outflank-mailman (input) for mailman id 518903;
 Thu, 06 Apr 2023 14:15:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LuhO=75=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pkQOq-00088r-83
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 14:15:40 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20630.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8106c404-d485-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 16:15:38 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7937.eurprd04.prod.outlook.com (2603:10a6:20b:248::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.26; Thu, 6 Apr
 2023 14:15:35 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6277.030; Thu, 6 Apr 2023
 14:15:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <e79b3b8e-666e-b94f-d2c8-1ec2f6f5453f@suse.com>
Date: Thu, 6 Apr 2023 16:15:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: avoid triggering event related assertions
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0004.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7937:EE_
X-MS-Office365-Filtering-Correlation-Id: d03a05bf-ad74-47e4-1d0b-08db36a96378
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d03a05bf-ad74-47e4-1d0b-08db36a96378
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 14:15:35.2498
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QOOChQkCN52JTcS9te/MurkdoyWj9sLm1WRZ2ldaYtE4DuQeV9SKSm049Lwq9tAMmbhIUaMEmm4yNsr/T/B/RQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7937

The assertion at the end of x86_emulate_wrapper() as well as the ones
in x86_emul_{hw_exception,pagefault}() can trigger if we ignore
X86EMUL_EXCEPTION coming back from certain hook functions. Squash
exceptions when merely probing MSRs, as well as on SWAPGS's "best
effort" error handling path.

In adjust_bnd() add another assertion after the read_xcr(0, ...)
invocation, paralleling the one in x86emul_get_fpu(): XCR0 reads should
never fault when XSAVE is (implicitly) known to be available.

Fixes: 14a6be89ec04 ("x86emul: correct EFLAGS.TF handling")
Fixes: cb2626c75813 ("x86emul: conditionally clear BNDn for branches")
Fixes: 6eb43fcf8a0b ("x86emul: support SWAPGS")
Reported-by: AFL
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
EFER reads won't fault in any of the handlers we have, so in principle
the respective check could be omitted (and hence has no Fixes: tag).
Thoughts?

The Fixes: tags are for the commits introducing the affected code; I'm
not entirely sure whether the raising of exceptions from hook functions
actually pre-dates them, but even if it does, the issues were at least
latent ones already before.

--- a/xen/arch/x86/x86_emulate/0f01.c
+++ b/xen/arch/x86/x86_emulate/0f01.c
@@ -198,8 +198,10 @@ int x86emul_0f01(struct x86_emulate_stat
         if ( (rc = ops->write_segment(x86_seg_gs, &sreg,
                                       ctxt)) != X86EMUL_OKAY )
         {
-            /* Best effort unwind (i.e. no error checking). */
-            ops->write_msr(MSR_SHADOW_GS_BASE, msr_val, ctxt);
+            /* Best effort unwind (i.e. no real error checking). */
+            if ( ops->write_msr(MSR_SHADOW_GS_BASE, msr_val,
+                                ctxt) == X86EMUL_EXCEPTION )
+                x86_emul_reset_event(ctxt);
             goto done;
         }
         break;
--- a/xen/arch/x86/x86_emulate/0fae.c
+++ b/xen/arch/x86/x86_emulate/0fae.c
@@ -67,7 +67,10 @@ int x86emul_0fae(struct x86_emulate_stat
                     cr4 = X86_CR4_OSFXSR;
                 if ( !ops->read_msr ||
                      ops->read_msr(MSR_EFER, &msr_val, ctxt) != X86EMUL_OKAY )
+                {
+                    x86_emul_reset_event(ctxt);
                     msr_val = 0;
+                }
                 if ( !(cr4 & X86_CR4_OSFXSR) ||
                      (mode_64bit() && mode_ring0() && (msr_val & EFER_FFXSE)) )
                     s->op_bytes = offsetof(struct x86_fxsr, xmm[0]);
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1177,10 +1177,18 @@ static bool is_branch_step(struct x86_em
                            const struct x86_emulate_ops *ops)
 {
     uint64_t debugctl;
+    int rc = X86EMUL_UNHANDLEABLE;
 
-    return ops->read_msr &&
-           ops->read_msr(MSR_IA32_DEBUGCTLMSR, &debugctl, ctxt) == X86EMUL_OKAY &&
-           (debugctl & IA32_DEBUGCTLMSR_BTF);
+    if ( !ops->read_msr ||
+         (rc = ops->read_msr(MSR_IA32_DEBUGCTLMSR, &debugctl,
+                             ctxt)) != X86EMUL_OKAY )
+    {
+        if ( rc == X86EMUL_EXCEPTION )
+            x86_emul_reset_event(ctxt);
+        debugctl = 0;
+    }
+
+    return debugctl & IA32_DEBUGCTLMSR_BTF;
 }
 
 static void adjust_bnd(struct x86_emulate_ctxt *ctxt,
@@ -1194,13 +1202,21 @@ static void adjust_bnd(struct x86_emulat
 
     if ( !ops->read_xcr || ops->read_xcr(0, &xcr0, ctxt) != X86EMUL_OKAY ||
          !(xcr0 & X86_XCR0_BNDREGS) || !(xcr0 & X86_XCR0_BNDCSR) )
+    {
+        ASSERT(!ctxt->event_pending);
         return;
+    }
 
     if ( !mode_ring0() )
         bndcfg = read_bndcfgu();
     else if ( !ops->read_msr ||
-              ops->read_msr(MSR_IA32_BNDCFGS, &bndcfg, ctxt) != X86EMUL_OKAY )
+              (rc = ops->read_msr(MSR_IA32_BNDCFGS, &bndcfg,
+                                  ctxt)) != X86EMUL_OKAY )
+    {
+        if ( rc == X86EMUL_EXCEPTION )
+            x86_emul_reset_event(ctxt);
         return;
+    }
     if ( (bndcfg & IA32_BNDCFGS_ENABLE) && !(bndcfg & IA32_BNDCFGS_PRESERVE) )
     {
         /*


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 14:28:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 14:28:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518907.805908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkQba-0001Dc-Az; Thu, 06 Apr 2023 14:28:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518907.805908; Thu, 06 Apr 2023 14:28:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkQba-0001DV-8D; Thu, 06 Apr 2023 14:28:50 +0000
Received: by outflank-mailman (input) for mailman id 518907;
 Thu, 06 Apr 2023 14:28:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LuhO=75=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pkQbY-0001DP-VW
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 14:28:48 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2061e.outbound.protection.outlook.com
 [2a01:111:f400:fe13::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 57183b7d-d487-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 16:28:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7764.eurprd04.prod.outlook.com (2603:10a6:20b:244::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Thu, 6 Apr
 2023 14:28:45 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6277.030; Thu, 6 Apr 2023
 14:28:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57183b7d-d487-11ed-b464-930f4c7d94ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d9tlqmYMa4a+dzcHIM/YNWEvRKX1eO6UviwX7D7S/mSeqNwBHGQIBc9TVNLjnilBeqvVTeQmzY3BTRfmzKnBAYiJm5x0jgqnv1+BGi7LBll58f6XL5+z2MXCtcvrq81TBakAkhVwddhn4vwvpBSbJn7D2Zh4yNLF24YvxV390UCc2FJGXtEw5In/F6oIBpsxajZ75Gp+yi9HloH8+k4T/wU4/HLENwZXSi/pSDhngSIl53CtV+lL/gOtF5P9MAalxqdVp8KpFR/rWTAakl/0iQ9hfIxQraWn68vNy9+HreUKqYNvVNCfYeR6ThSxomFLRQkKhslddZxzvusjcxqXEA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=VTFReQIaRtGB9vNQPjk8BpE+DVD5nFEnKxcMO3XVxCs=;
 b=i1gdlzc4dD5l57hRl4NBR+6KR2B5Kevn2va713rycesXIsTn7jpHLFl0MNc4w7e7GOZHVdQHe2lBlkhafS9Gw6neYPERdxChPaDF/HAbXvTaKz9ihXPTY3Ooa1ZLn9AhwKdMASNrCIhE7kSeZyqsvtpzZAuNzcPB2btMq32NUow4KF0vXDjU9RRLVs4nW8y4CHCAG5iJQMhEbD+lO0efkpb0n/WhcwHLrOcU7jhgkWIvRss1RGnJhltYF8yDSAyzMYDgRhPso5iCrH0FmirlmnrN+LLIjCnExV8Z5N79SSc0wzNhj2ABRh9uypQV0hcklbBWuG2lXVEnUDudkU3efg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VTFReQIaRtGB9vNQPjk8BpE+DVD5nFEnKxcMO3XVxCs=;
 b=M8KqmYvYOCIqx/TaHCcmZss4Iej05KLuYXEPJsGpTI/To38Ljll7jPrGG0rU8L4J23WGkITJf+x1A+6injV1ovT5Ds3uetKWP6eav8pDhbyG/VkBzaMk7TQLWFG8SSUjNgOFC24lL3D/DhE+jROiNmKbWSM5r3OcDRho0bYMBXFyz7pFpCBa1qaCukP4SkEyx5sqUPjaDrdIR6i6vQ2LYIQ5U8+qKkfcZ6Lg1HgFvcSG33VuH5r16LKnHjRMycLByvI0NMMaKrGi7i1zYbqR34kZcJUXd/Yo2UkxrFFdVPKIwIi0b9yX9oQu+ezgRG6yCb6vYf3QQwSugWdFQfwsIA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9edbf4a4-978e-e1bb-43b7-971bdab8f41c@suse.com>
Date: Thu, 6 Apr 2023 16:28:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/svm: Provide EXITINFO decodes for Exceptions/NPT
 intercepts
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230406132708.2251000-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230406132708.2251000-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0027.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7764:EE_
X-MS-Office365-Filtering-Correlation-Id: f821e5b6-9f97-4528-6bd7-08db36ab3a66
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f821e5b6-9f97-4528-6bd7-08db36ab3a66
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 14:28:45.1854
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: d3Hjug0Bdx1LReMT1M4mN+K1gr3cahc2K9MIHw7uxlWTLCqapfb6OzB9p78tbp3c7YSROS5R9TNQPekA+lRtYg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7764

On 06.04.2023 15:27, Andrew Cooper wrote:
> Exceptions and NPT intercepts almost have the same layout, but NPT has bits
> above 31 in the error code, and the name for exitinfo2 really does want
> distinguishing between cr2 and gpa.
> 
> In nsvm_vcpu_vmexit_inject() rearrange VMEXIT_NPF to fall through instead of
> repeating the exitinfo1 write.  Use the fallthrough pseudo keyword instead of
> a comment.
> 
> In VMEXIT_NPF, as we're editing the printk() anyway, switch to using the newer
> domain_crash() form.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

with one remark / suggestion:

> @@ -455,6 +461,10 @@ struct vmcb_struct {
>                  uint64_t :59;
>                  bool     mov_insn:1; /* MOV, as opposed to LMSW, CLTS, etc */
>              } mov_cr;
> +            struct {
> +                uint64_t ec;
> +                uint64_t gpa;
> +            } npt;

Maybe better "npf" than "npt", as it's describing the exit/fault?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 14:31:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 14:31:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518911.805918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkQdu-0002bA-Na; Thu, 06 Apr 2023 14:31:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518911.805918; Thu, 06 Apr 2023 14:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkQdu-0002b3-Kf; Thu, 06 Apr 2023 14:31:14 +0000
Received: by outflank-mailman (input) for mailman id 518911;
 Thu, 06 Apr 2023 14:31:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w3yY=75=citrix.com=prvs=453e3c94d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkQdt-0002ax-Tl
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 14:31:14 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a9381bbf-d487-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 16:31:06 +0200 (CEST)
Received: from mail-mw2nam10lp2103.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.103])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Apr 2023 10:31:03 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH0PR03MB6098.namprd03.prod.outlook.com (2603:10b6:610:b9::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Thu, 6 Apr
 2023 14:31:01 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Thu, 6 Apr 2023
 14:31:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9381bbf-d487-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1680791466;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ogUa40awd4m5Ri/1ZLuwSC2FpKaHSx3ZJ1giQPYxjhs=;
  b=QqGDjkqo71QIM42GzUfB9MODpPxsSutjxb0luFM8gdDs64jYNp2RsXk3
   NzM5PcN3Ip0dcnVUKZ4u4CL9WBbbEQhQw8wiIiICbFI8AkO1HKpZpwdU5
   +maFT/BARSJT+HG4gppDxLdnu8WrKNeodMPqewVK4E7eUS3OoaU2x1Fvy
   k=;
X-IronPort-RemoteIP: 104.47.55.103
X-IronPort-MID: 104604306
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,323,1673931600"; 
   d="scan'208";a="104604306"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KUI/wd0uTYaYDhzqC4rWnh5Zm2HCNWOzTXZhYNM9ZfPYqhdYv9YhH2nS0YvJbL6kqxSuzgU1+NbqhqlOKvr7tjsRUwlafo53RClSqeAZUMaYmWM2PohthRsVEqE/8JTmK5LqUX+S9/Po6k/1IqBxP4wdaEfGtTsiXRJIa1vkCwgCzAhQmp2nXJp2D0ib8v1XvYTHp7lWbv20zqKVYsP2NQwxWTYan0ioAWQc0Xp4A6Mgcfwv3LMRkYdZTH1MMvcDuNNhcgn8nYXxq8hbY+NMSf7DvQiT85xNxeetphh9vgO/GMJy+ha7EuekfzoMuN1RzPsXuwIFk3GmAPWhDlUCxA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lbKvxOKsESnEm+UoB7N2Ts/1VLqyZuSkZzZthBYVx30=;
 b=ddqTz98C+ZtTqdRpjVfm0PQbhwmlBxBiDjZkb2DZwuuX3khzrmt5bXkF3ngKKebfujMZWMnRaEhhBJrTTWh3W/tGaGgKyqird4BIdfNM2n0THmEiAb2SekNz20xmAsiMt5+gsPOkGGBBtQm3zJxKlj4N45W52JJOgycMSRdfOh9OSDgt3JcJTsQrLNeAp8nO/Rm1J3Eo0rkfZESYOj9JXEBEcnAxSoXnkXVTy5ahoctS6XAjbDAMmFmTsKlYziFljf0Vi1Rqm/Uio7YS1LuDezn9TGtQaTRXcnzbgGUCDqQklTk+MMM8KRIshW1fT3wTrDIr0vQCALbTBejdhOit4A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lbKvxOKsESnEm+UoB7N2Ts/1VLqyZuSkZzZthBYVx30=;
 b=nQOYLUrgEEm94YjOfO/6jGvC8Ju/Buj3aDHG8bBCjEB2u++RrFjNHR7WmJw33smpbkJ7TXIiuGx7fvxerss/b9mC/umOkDQ387kc0TzapGrROMv+KvDaMyz70DRkY6dCH8wMk5X3JAG387VsqMKmPpfqxpdf8ZScWMdJMvprrsI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <cbaac839-f884-2aee-473f-2f68a68ccaaa@citrix.com>
Date: Thu, 6 Apr 2023 15:30:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/svm: Provide EXITINFO decodes for Exceptions/NPT
 intercepts
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230406132708.2251000-1-andrew.cooper3@citrix.com>
 <9edbf4a4-978e-e1bb-43b7-971bdab8f41c@suse.com>
In-Reply-To: <9edbf4a4-978e-e1bb-43b7-971bdab8f41c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0219.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a6::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|CH0PR03MB6098:EE_
X-MS-Office365-Filtering-Correlation-Id: 2e6bbdcb-f2be-4f3d-324e-08db36ab8b26
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2e6bbdcb-f2be-4f3d-324e-08db36ab8b26
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 14:31:01.0888
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ThLXgCfdekdOAR1A9BHd/npWORY43QrLvPtkQmim0EON22jQmuv30Qhn0kdQMKMScRgg4umC/SocPzWFPKMtRpPxcLML7qKYOHgk5SXdEk8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB6098

On 06/04/2023 3:28 pm, Jan Beulich wrote:
> On 06.04.2023 15:27, Andrew Cooper wrote:
>> Exceptions and NPT intercepts almost have the same layout, but NPT has bits
>> above 31 in the error code, and the name for exitinfo2 really does want
>> distinguishing between cr2 and gpa.
>>
>> In nsvm_vcpu_vmexit_inject() rearrange VMEXIT_NPF to fall through instead of
>> repeating the exitinfo1 write.  Use the fallthrough pseudo keyword instead of
>> a comment.
>>
>> In VMEXIT_NPF, as we're editing the printk() anyway, switch to using the newer
>> domain_crash() form.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> with one remark / suggestion:
>
>> @@ -455,6 +461,10 @@ struct vmcb_struct {
>>                  uint64_t :59;
>>                  bool     mov_insn:1; /* MOV, as opposed to LMSW, CLTS, etc */
>>              } mov_cr;
>> +            struct {
>> +                uint64_t ec;
>> +                uint64_t gpa;
>> +            } npt;
> Maybe better "npf" than "npt", as it's describing the exit/fault?

Oh yes, of course.  That is what I'd intended to put here.
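For clarity, here is a standalone sketch of the resulting view (an illustrative stand-in: the real VMCB union has many more members, and only the two exit-info views relevant to this hunk are shown):

```c
#include <stdint.h>

/*
 * Illustrative stand-in for the VMCB exit-info union being extended:
 * the same pair of 64-bit exit-info words reads as (ec, cr2) for
 * exceptions and as (ec, gpa) for nested page faults.
 */
union exitinfo {
    struct {
        uint64_t ec;
        uint64_t cr2;
    } exc;
    struct {
        uint64_t ec;   /* error code, with NPT bits above bit 31 */
        uint64_t gpa;  /* guest-physical address of the fault */
    } npf;             /* "npf" rather than "npt", per the review */
};
```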

Thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 14:57:37 2023
Message-ID: <6b98b95d-d737-5495-9c03-7857886cb1ce@arm.com>
Date: Thu, 6 Apr 2023 15:57:12 +0100
Subject: Re: [PATCH] tools/xendomains: Only save/restore/migrate if supported
 by xenlight
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, wei.chen@arm.com,
 bertrand.marquis@arm.com, Wei Liu <wl@xen.org>
References: <20230322135800.3869458-1-peter.hoyes@arm.com>
 <fa320fd7-31fa-4e96-a804-172e70ef1c80@perard>
From: Peter Hoyes <Peter.Hoyes@arm.com>
In-Reply-To: <fa320fd7-31fa-4e96-a804-172e70ef1c80@perard>

On 04/04/2023 17:28, Anthony PERARD wrote:
> On Wed, Mar 22, 2023 at 01:58:00PM +0000, Peter Hoyes wrote:
>> From: Peter Hoyes <Peter.Hoyes@arm.com>
>>
>> Saving, restoring and migrating domains are not currently supported on
>> arm and arm64 platforms, so xendomains prints the warning:
>>
>>    An error occurred while saving domain:
>>    command not implemented
>>
>> when attempting to run `xendomains stop`. It otherwise continues to shut
>> down the domains cleanly, with the unsupported steps skipped.
> The patch looks kind of ok, but shouldn't $XENDOMAINS_SAVE be set to an
> empty string in the config by the admin instead?
>
> Or is the issue that $XENDOMAINS_SAVE is set by default, even on arm* ?
Yeah, the default is the issue. We are building for embedded, using Yocto, 
so there isn't really an admin.
>
> Maybe it's easier to check that the command is implemented at run time
> rather than trying to have a good default value for XENDOMAINS_SAVE at
> install/package time.

It would be cleaner to do this at build time, for sure, but I'm not sure 
the autotools config-file approach for sysconfig.xendomains.in can 
handle the logic for this.
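A run-time check along the lines Anthony suggests could be quite small. A sketch (the helper name `save_supported` is hypothetical, and it only classifies captured error text; in the real script the text would come from an attempted `xl save`):

```shell
# Decide at run time whether domain save is implemented, instead of
# baking the answer into XENDOMAINS_SAVE at install/package time.
# The "command not implemented" string is the error xendomains already
# prints on arm/arm64.
save_supported() {
    case "$1" in
        *"command not implemented"*) return 1 ;;
        *) return 0 ;;
    esac
}

# Usage sketch:
#   if save_supported "$err"; then do_save; else skip_save; fi
```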

Thanks,

Peter




From xen-devel-bounces@lists.xenproject.org Thu Apr 06 15:18:39 2023
References: <20230307182707.2298618-1-dwmw2@infradead.org> <20230307182707.2298618-15-dwmw2@infradead.org>
In-Reply-To: <20230307182707.2298618-15-dwmw2@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Thu, 6 Apr 2023 16:18:14 +0100
Message-ID: <CAFEAcA-9GDCa8ZrxjZJBq7wx=pVDAdvvDvTQs_oVyhD-HNSsrA@mail.gmail.com>
Subject: Re: [PULL 14/27] hw/xen: Move xenstore_store_pv_console_info to xen_console.c
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, 
	Joao Martins <joao.m.martins@oracle.com>, Ankur Arora <ankur.a.arora@oracle.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, vikram.garhwal@amd.com, 
	Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org, 
	Juan Quintela <quintela@redhat.com>, "Dr . David Alan Gilbert" <dgilbert@redhat.com>

On Tue, 7 Mar 2023 at 18:28, David Woodhouse <dwmw2@infradead.org> wrote:
>
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> There's no need for this to be in the Xen accel code, and as we want to
> use the Xen console support with KVM-emulated Xen we'll want to have a
> platform-agnostic version of it. Make it use GString to build up the
> path while we're at it.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Reviewed-by: Paul Durrant <paul@xen.org>

Hi; Coverity points out a double-free here (CID 1508254):

> +static int store_con_info(struct XenConsole *con)
> +{
> +    Chardev *cs = qemu_chr_fe_get_driver(&con->chr);
> +    char *pts = NULL;
> +    char *dom_path;
> +    GString *path;
> +    int ret = -1;
> +
> +    /* Only continue if we're talking to a pty. */
> +    if (!CHARDEV_IS_PTY(cs)) {
> +        return 0;
> +    }
> +    pts = cs->filename + 4;
> +
> +    dom_path = qemu_xen_xs_get_domain_path(xenstore, xen_domid);
> +    if (!dom_path) {
> +        return 0;
> +    }
> +
> +    path = g_string_new(dom_path);
> +    free(dom_path);
> +
> +    if (con->xendev.dev) {
> +        g_string_append_printf(path, "/device/console/%d", con->xendev.dev);
> +    } else {
> +        g_string_append(path, "/console");
> +    }
> +    g_string_append(path, "/tty");
> +
> +    if (xenstore_write_str(con->console, path->str, pts)) {
> +        fprintf(stderr, "xenstore_write_str for '%s' fail", path->str);
> +        goto out;
> +    }
> +    ret = 0;
> +
> +out:
> +    g_string_free(path, true);
> +    free(path);

g_string_free() frees the GString, but then we call free() on it
as well. Presumably the free() should just be deleted?
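The ownership rule can be shown with a minimal stand-in (plain C, not GLib itself; `string_new`/`string_free` are hypothetical mirrors of `g_string_new`/`g_string_free`'s free_segment behaviour):

```c
#include <stdlib.h>
#include <string.h>

/*
 * Stand-in mirroring g_string_free() ownership: when free_segment is
 * nonzero, both the wrapper and its character buffer are released and
 * NULL comes back, so any subsequent free() on either pointer is a
 * double free.  The fix in store_con_info() is simply to drop the
 * trailing free(path).
 */
struct str {
    char *buf;
};

struct str *string_new(const char *init)
{
    struct str *s = malloc(sizeof(*s));

    s->buf = malloc(strlen(init) + 1);
    strcpy(s->buf, init);
    return s;
}

char *string_free(struct str *s, int free_segment)
{
    char *seg = NULL;

    if (free_segment)
        free(s->buf);        /* buffer released here */
    else
        seg = s->buf;        /* caller takes over the buffer */
    free(s);                 /* wrapper always released */

    return seg;
}
```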

> +
> +    return ret;
> +}

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 15:33:59 2023
Message-ID: <4fc4f5a9-0b9e-bb30-df32-09b5e9e63a57@citrix.com>
Date: Thu, 6 Apr 2023 16:33:29 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86emul: avoid triggering event related assertions
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <e79b3b8e-666e-b94f-d2c8-1ec2f6f5453f@suse.com>
In-Reply-To: <e79b3b8e-666e-b94f-d2c8-1ec2f6f5453f@suse.com>

On 06/04/2023 3:15 pm, Jan Beulich wrote:
> The assertion at the end of x86_emulate_wrapper() as well as the ones
> in x86_emul_{hw_exception,pagefault}() can trigger if we ignore
> X86EMUL_EXCEPTION coming back from certain hook functions.

Looking at the comment I wrote back then, I don't think I'd considered
this case.

What I was fixing at the time was the case where
hvm_inject_hw_exception() had raised an exception behind the back of the
emulator, and any subsequent exception escalates to #DF.

I guess the comment wants updating to discuss this problem too, where
the hook queued an exception but we didn't squash it properly when
deciding to ignore it.

As it's debugging-only anyway, it might be worth rearranging the
expression to be

if ( ctxt->event_pending )
    ASSERT(rc == X86EMUL_EXCEPTION);
else
    ASSERT(rc != X86EMUL_EXCEPTION);

because it distinguishes the two cases without an intermediate round of
debugging.
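To make the difference concrete, a standalone stand-in (illustrative values, not Xen's actual X86EMUL_* constants) showing why the split form pinpoints which invariant broke:

```c
/*
 * Stand-in for the wrapper's exit invariant.  classify() returns 0
 * when the invariant holds, and otherwise says which half failed -
 * exactly the information the split ASSERT()s give for free, where a
 * single combined assertion only reports "invariant violated".
 */
enum { X86EMUL_OKAY, X86EMUL_EXCEPTION };

int classify(int rc, int event_pending)
{
    if (event_pending)
        return rc == X86EMUL_EXCEPTION ? 0 : 1; /* event queued, wrong rc */
    else
        return rc != X86EMUL_EXCEPTION ? 0 : 2; /* rc claims exception, none queued */
}
```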

>  Squash
> exceptions when merely probing MSRs, plus on SWAPGS'es "best effort"
> error handling path.
>
> In adjust_bnd() add another assertion after the read_xcr(0, ...)
> invocation, paralleling the one in x86emul_get_fpu() - XCR0 reads should
> never fault when XSAVE is (implicitly) known to be available.
>
> Fixes: 14a6be89ec04 ("x86emul: correct EFLAGS.TF handling")
> Fixes: cb2626c75813 ("x86emul: conditionally clear BNDn for branches")
> Fixes: 6eb43fcf8a0b ("x86emul: support SWAPGS")
> Reported-by: AFL
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> EFER reads won't fault in any of the handlers we have, so in principle
> the respective check could be omitted (and hence has no Fixes: tag).
> Thoughts?

We already require LMA as input state, and don't have an execution model
where EFER is actually absent in the first place, so passing the whole
thing wouldn't really be an issue.

I have previously considered doing the same for CR0 too, but at best
it's marginal so I haven't got around to trying it.

> --- a/xen/arch/x86/x86_emulate/0fae.c
> +++ b/xen/arch/x86/x86_emulate/0fae.c
> @@ -67,7 +67,10 @@ int x86emul_0fae(struct x86_emulate_stat
>                      cr4 = X86_CR4_OSFXSR;
>                  if ( !ops->read_msr ||
>                       ops->read_msr(MSR_EFER, &msr_val, ctxt) != X86EMUL_OKAY )
> +                {
> +                    x86_emul_reset_event(ctxt);

This is the only path you've introduced that doesn't restrict the reset
to the X86EMUL_EXCEPTION case.

In principle you can get things like RETRY for introspection.
Internally, UNHANDLEABLE is used, but I hope that never escapes from
guest_{rd,wr}msr().
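i.e. the narrowed shape would be something like this (a simplified stand-in with illustrative types and values, not the actual Xen hooks):

```c
/*
 * Simplified stand-in for the MSR-probe error path under discussion:
 * squash only a queued exception (rc == X86EMUL_EXCEPTION) before
 * falling back to a default value, and let other failures such as
 * RETRY propagate untouched.
 */
enum { X86EMUL_OKAY, X86EMUL_EXCEPTION, X86EMUL_RETRY };

struct emul_ctxt {
    int event_pending;
};

static void reset_event(struct emul_ctxt *ctxt)
{
    ctxt->event_pending = 0;
}

/* Returns the rc the caller should act on. */
int probe_msr_rc(int rc, struct emul_ctxt *ctxt)
{
    if (rc == X86EMUL_EXCEPTION) {
        reset_event(ctxt);   /* squash the queued exception: benign probe */
        return X86EMUL_OKAY;
    }
    return rc;               /* RETRY etc. propagate unchanged */
}
```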

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 15:34:24 2023
 15:34:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f13c028-d490-11ed-b464-930f4c7d94ae
Message-ID: <4dabc9c7-7c9f-1fb4-5555-827d282f0567@citrix.com>
Date: Thu, 6 Apr 2023 16:34:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/9] x86emul: drop regs field from emulator state
 structure
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <cf5580db-3573-ec73-9e59-61aee337b2c6@suse.com>
In-Reply-To: <cf5580db-3573-ec73-9e59-61aee337b2c6@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P265CA0169.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:312::6) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 04/04/2023 3:51 pm, Jan Beulich wrote:
> For an unclear reason 0552a8cfda43 ("x86emul: track only rIP in emulator
> state") converted the original struct cpu_user_regs instance to a
> pointer, rather than dropping the field altogether: The pointer merely
> aliases the one in the context structure.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 15:46:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 15:46:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518938.805969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkRoT-0003oL-GE; Thu, 06 Apr 2023 15:46:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518938.805969; Thu, 06 Apr 2023 15:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkRoT-0003oE-Bf; Thu, 06 Apr 2023 15:46:13 +0000
Received: by outflank-mailman (input) for mailman id 518938;
 Thu, 06 Apr 2023 15:46:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LuhO=75=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pkRoS-0003o8-4E
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 15:46:12 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0623.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::623])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 247cc80c-d492-11ed-85db-49a42c6b2330;
 Thu, 06 Apr 2023 17:46:06 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7053.eurprd04.prod.outlook.com (2603:10a6:800:12f::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Thu, 6 Apr
 2023 15:46:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6277.030; Thu, 6 Apr 2023
 15:46:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 247cc80c-d492-11ed-85db-49a42c6b2330
Message-ID: <9b1ea79e-013e-f5a5-eb36-a79e400a87dd@suse.com>
Date: Thu, 6 Apr 2023 17:46:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86emul: avoid triggering event related assertions
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e79b3b8e-666e-b94f-d2c8-1ec2f6f5453f@suse.com>
 <4fc4f5a9-0b9e-bb30-df32-09b5e9e63a57@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <4fc4f5a9-0b9e-bb30-df32-09b5e9e63a57@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0141.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 06.04.2023 17:33, Andrew Cooper wrote:
> On 06/04/2023 3:15 pm, Jan Beulich wrote:
>> The assertion at the end of x86_emulate_wrapper() as well as the ones
>> in x86_emul_{hw_exception,pagefault}() can trigger if we ignore
>> X86EMUL_EXCEPTION coming back from certain hook functions.
> 
> Looking at the comment I wrote back then, I don't think I'd considered
> this case.
> 
> What I was fixing at the time was the case where
> hvm_inject_hw_exception() had raised an exception behind the back of the
> emulator, and any subsequent exception escalates to #DF.
> 
> I guess the comment wants updating to discuss this problem too, where
> the hook queued an exception but we didn't squash it properly when
> deciding to ignore it.

Can do.

> As it's debugging-only anyway, it might be worth rearranging the
> expression to be
> 
> if ( ctxt->event_pending )
>     ASSERT(rc == X86EMUL_EXCEPTION);
> else
>     ASSERT(rc != X86EMUL_EXCEPTION);
> 
> because it distinguishes the two cases without an intermediate round of
> debugging.

Maybe, but I don't think this adjustment would belong here.

>> ---
>> EFER reads won't fault in any of the handlers we have, so in principle
>> the respective check could be omitted (and hence has no Fixes: tag).
>> Thoughts?
> 
> We already require LMA as input state, and don't have an execution model
> where EFER is actually absent in the first place, so passing the whole
> thing wouldn't really be an issue.
> 
> I have previously considered doing the same for CR0 too, but at best
> it's marginal so I haven't got around to trying it.

Well, those are more longer-term plans (and, as you say, not very clear ones).
I'm afraid this doesn't answer my question, though: Do you think the respective
adjustment should stay, or be dropped?

>> --- a/xen/arch/x86/x86_emulate/0fae.c
>> +++ b/xen/arch/x86/x86_emulate/0fae.c
>> @@ -67,7 +67,10 @@ int x86emul_0fae(struct x86_emulate_stat
>>                      cr4 = X86_CR4_OSFXSR;
>>                  if ( !ops->read_msr ||
>>                       ops->read_msr(MSR_EFER, &msr_val, ctxt) != X86EMUL_OKAY )
>> +                {
>> +                    x86_emul_reset_event(ctxt);
> 
> This is the only path you've introduced that doesn't restrict the reset
> to the X86EMUL_EXCEPTION case.

Right, other similar ones had to go into individual other patches I have
pending. The distinction I made was whether the call was in a helper
function (where I want to be careful, because I don't know what state we
may have accumulated) vs mainline code. IOW ...

> In principle you can get things like RETRY for introspection. 
> Internally, UNHANDLEABLE is used but I hope that never escapes from
> guest_{rd,wr}msr().

... other errors are possible, yes, but those shouldn't cause an event
to be recorded. Hence it seemed reasonable to me to flush events
unconditionally here, i.e. even if none is pending in the first place.
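To make the distinction concrete, here is a minimal sketch (hypothetical names, not the actual Xen sources) contrasting the two squash styles: the restricted form used in helper paths, where only an X86EMUL_EXCEPTION outcome discards the queued event, versus the unconditional flush acceptable in mainline code, where other rc values should never have recorded an event in the first place:

```c
#include <assert.h>
#include <stdbool.h>

#define X86EMUL_OKAY       0
#define X86EMUL_EXCEPTION  2
#define X86EMUL_RETRY      3

struct ctxt { bool event_pending; };

/* Stand-in for x86_emul_reset_event(). */
static void reset_event(struct ctxt *c) { c->event_pending = false; }

/* Helper-style squash: discard the event only for the EXCEPTION case,
 * leaving whatever state other outcomes (e.g. RETRY) may carry. */
static void squash_restricted(struct ctxt *c, int rc)
{
    if ( rc == X86EMUL_EXCEPTION )
        reset_event(c);
}

/* Mainline-style squash: flush unconditionally.  Harmless when no
 * event is pending, and correct because non-EXCEPTION rc values are
 * not supposed to have recorded one. */
static void squash_unconditional(struct ctxt *c, int rc)
{
    (void)rc;
    reset_event(c);
}
```

The restricted form is the cautious choice when accumulated state is unknown; the unconditional form trades that caution for simplicity where the surrounding code guarantees the invariant.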

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 16:22:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 16:22:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518942.805978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkSNg-00008l-8u; Thu, 06 Apr 2023 16:22:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518942.805978; Thu, 06 Apr 2023 16:22:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkSNg-00008e-5b; Thu, 06 Apr 2023 16:22:36 +0000
Received: by outflank-mailman (input) for mailman id 518942;
 Thu, 06 Apr 2023 16:22:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkSNe-00008U-LJ; Thu, 06 Apr 2023 16:22:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkSNe-0007s8-Gz; Thu, 06 Apr 2023 16:22:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkSNe-0007Ki-03; Thu, 06 Apr 2023 16:22:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkSNd-0006yQ-Vt; Thu, 06 Apr 2023 16:22:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180169-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180169: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=a5087069a8c40541ba81fa0e2850471c949932b3
X-Osstest-Versions-That:
    xen=881ba20eb0222305a9d2cd090c9345992794f4f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Apr 2023 16:22:33 +0000

flight 180169 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180169/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  a5087069a8c40541ba81fa0e2850471c949932b3
baseline version:
 xen                  881ba20eb0222305a9d2cd090c9345992794f4f5

Last test of basis   180160  2023-04-05 20:01:54 Z    0 days
Testing same since   180169  2023-04-06 14:03:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   881ba20eb0..a5087069a8  a5087069a8c40541ba81fa0e2850471c949932b3 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 19:29:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 19:29:16 +0000
Message-ID: <c973ddcf-506b-8318-07cc-bb177541619a@citrix.com>
Date: Thu, 6 Apr 2023 20:28:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 4/9] x86emul: support CMPccXADD
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <7fdf882f-0667-e0f1-8183-2dc1a344f4fb@suse.com>
In-Reply-To: <7fdf882f-0667-e0f1-8183-2dc1a344f4fb@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 04/04/2023 3:52 pm, Jan Beulich wrote:
> Unconditionally wire this through the ->rmw() hook. Since x86_emul_rmw()
> now wants to construct and invoke a stub, make stub_exn available to it
> via a new field in the emulator state structure.

IMO, patch 5 should be re-ordered with this, because it removes one
incidental change that's not really related to CMPccXADD.

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> # SDE: -grr or -srf

The ISE makes a point of noting that CMPccXADD is implicitly locked,
like XCHG.  (Unlike XCHG, there isn't a valid reg/reg encoding.)

Right now, the xchg emulation overrides lock_prefix, but I have a
feeling that's stale now with the rmw() hook in place.  But it is
dubious that we let xchg fall back to a non-atomic exchange if the rmw()
hook is missing.

Either way, I think it would be nice to clean that up so we don't have
differences in the handling of instructions which the ISE at least
claims are similar.

Tangentially, what about the RAO instructions?

> --- a/tools/tests/x86_emulator/x86-emulate.h
> +++ b/tools/tests/x86_emulator/x86-emulate.h
> @@ -934,6 +935,8 @@ decode_0f38(struct x86_emulate_state *s,
>              ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
>          break;
>  
> +    case X86EMUL_OPC_VEX_66(0, 0xe0)
> +     ... X86EMUL_OPC_VEX_66(0, 0xef): /* cmp<cc>xadd */

I know the style is a little mixed in the emulator, but

+    case X86EMUL_OPC_VEX_66(0, 0xe0) ...
+         X86EMUL_OPC_VEX_66(0, 0xef): /* cmp<cc>xadd */

is more consistent with Xen style (because it's somewhat of a binary
operator), and more readable IMO.

> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -278,6 +278,7 @@ XEN_CPUFEATURE(SSBD,          9*32+31) /
>  /* Intel-defined CPU features, CPUID level 0x00000007:1.eax, word 10 */
>  XEN_CPUFEATURE(AVX_VNNI,     10*32+ 4) /*A  AVX-VNNI Instructions */
>  XEN_CPUFEATURE(AVX512_BF16,  10*32+ 5) /*A  AVX512 BFloat16 Instructions */
> +XEN_CPUFEATURE(CMPCCXADD,    10*32+ 7) /*A  CMPccXADD Instructions */

Given the non-triviality of this instruction, I'd prefer to keep this
"a" until we've tried it on real hardware.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 19:47:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 19:47:47 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180163-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180163: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=881ba20eb0222305a9d2cd090c9345992794f4f5
X-Osstest-Versions-That:
    xen=415f7d9404171cbc968b1ea22e7d3523ac2f3fc1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Apr 2023 19:47:39 +0000

flight 180163 xen-unstable real [real]
flight 180170 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180163/
http://logs.test-lab.xenproject.org/osstest/logs/180170/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install   fail pass in 180170-retest
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install fail pass in 180170-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 180170 like 180148
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180148
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180148
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180148
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180148
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180148
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180148
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180148
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180148
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  881ba20eb0222305a9d2cd090c9345992794f4f5
baseline version:
 xen                  415f7d9404171cbc968b1ea22e7d3523ac2f3fc1

Last test of basis   180148  2023-04-05 06:43:52 Z    1 days
Testing same since   180163  2023-04-05 23:38:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   415f7d9404..881ba20eb0  881ba20eb0222305a9d2cd090c9345992794f4f5 -> master


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 19:48:43 2023
Message-ID: <197c2d8a-0a41-da10-771a-87843c9a007d@citrix.com>
Date: Thu, 6 Apr 2023 20:48:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 5/9] x86emul: re-use new stub_exn field in state structure
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <a9c212a8-8c63-91e0-eb07-8c927b62c1ca@suse.com>
In-Reply-To: <a9c212a8-8c63-91e0-eb07-8c927b62c1ca@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 04/04/2023 3:53 pm, Jan Beulich wrote:
> This can now also be used to reduce the number of parameters
> x86emul_fpu() needs to take.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

As said in the previous patch, I think this patch wants reordering
forwards and picking up the addition into state.

"Because we're going to need it in another hook, and it simplifies an
existing one" is a perfectly fine justification in isolation.

With that done, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>,
although...

> ---
> We could of course set the struct field once early in x86_emulate(), but
> for now I think we're better off leaving it as NULL where not actually
> needed.

Do we gain anything useful from not doing it once?  It's certainly more
to remember, and more code overall, to assign when needed.

>
> --- a/xen/arch/x86/x86_emulate/fpu.c
> +++ b/xen/arch/x86/x86_emulate/fpu.c
> @@ -90,9 +90,8 @@ int x86emul_fpu(struct x86_emulate_state
>                  unsigned int *insn_bytes,
>                  enum x86_emulate_fpu_type *fpu_type,
>  #define fpu_type (*fpu_type) /* for get_fpu() */
> -                struct stub_exn *stub_exn,
> -#define stub_exn (*stub_exn) /* for invoke_stub() */
>                  mmval_t *mmvalp)
> +#define stub_exn (*s->stub_exn) /* for invoke_stub() */

... honestly, I'd really like to see these macros purged.

I know the general style was done like this to avoid code churn, but
hiding indirection is a very rude thing to do, and none of these are
usefully shortening the expressions they replace.

Also, putting stub_exn in the K&R type space is still weird to read.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 20:52:22 2023
Message-ID: <0c94594e-2eb9-b94f-11a2-1c7cb937534b@citrix.com>
Date: Thu, 6 Apr 2023 21:51:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 6/9] x86emul: support AVX-IFMA insns
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <c5358413-2638-23c2-4d44-a925c6f08d49@suse.com>
In-Reply-To: <c5358413-2638-23c2-4d44-a925c6f08d49@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	2RR74rlYCM+wQ0wfVRUJEgO4zba2qaJrjBMZd1R5ebeABfxRa/Pz5fJ8zd9b6O9z0eXbD3lINMKDQXVMaMD5i8Q+cQEO/J8NkIrALHneF+ALE9ovlUbh27qIg+NcQ/H1gRbAl+LTfJUHlK/jnlZLY5ilG5WUbyplhbcLROngh+REeeCKAklqGwVauxpHXyQ5SKAkjxFTc/BgTwZjuwyCuvTQEGEt9mEZHxJF/4tgDJZ+RYEtXbNmsVJh2EcXRShi+qoZhX56Q3ysFB+Xbkam+DMaWsE+abR33g/yhykR2S9uC28A287B0xi9/LYVB8950Z0LvJOYTbxMt/RWv5xQ1NOkcmyYaPxLEQeuDIlGZJx2XbK5lfrTkt3HO+DXv7ViE0odDqkmT6DhGgGiJgXEE/wluXewtpkWrcgz/HDprGue4Ohp65UxePSsgldkD9ha3khR7FDLW1LEEwOBMaxKE+uqt2kBScmwPxmFDdUPbDMg38cg5ZIa99pF7IsYRjh1iH9H3r1PRYv8SbKAGLs2q0p3i3aJ4blxM18wXuPzwxJ1foNex0xFkoiRLF1pevM3s9BNQrIL5w7QvXEtPt20fHe3BDqodvpHV//7Np52K//vmoIr+oCpKb8gFzBANgSVfuhwQ35G3XZj24UMJxwc8ofEAMV4NIjh8c/Eh+3oCzvN6KnKezkMDjdr5BhutHjZZCLpIKyJjXH2bchdRBE6mAkQGyvxcIiaVqujyx30A+nt1ygxBTT3KhIGOj3phIDxZF89d4wsx7RqxeHyEWy2oAZ7FEJivJwZ0J8JDcRinHJ3SbtLrU72KnwXttuGfdcr
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f81b2d59-eaa7-40cb-ea10-08db36e0bff8
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 20:51:52.8281
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5053

On 04/04/2023 3:53 pm, Jan Beulich wrote:
> As in a few cases before (in particular: AVX512_IFMA), since the insns
> here and in particular their memory access patterns follow the usual
> scheme, I didn't think it was necessary to add a contrived test
> specifically for them.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

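For context on the precedent cited above: AVX512_IFMA's VPMADD52LUQ accumulates the low 52 bits of a 52-bit by 52-bit product into each 64-bit lane, which is why its memory access pattern is "usual" and needs no dedicated test. A rough scalar model of one lane, assuming a compiler providing unsigned __int128 (the function name is illustrative, not taken from the emulator):

```c
#include <stdint.h>

/* Illustrative scalar model of one 64-bit lane of VPMADD52LUQ: mask both
 * sources to 52 bits, multiply into a 104-bit product, and add the low
 * 52 bits of that product to the accumulator (wrapping mod 2^64). */
static uint64_t vpmadd52luq_lane(uint64_t acc, uint64_t src1, uint64_t src2)
{
    const uint64_t mask52 = (1ULL << 52) - 1;
    unsigned __int128 prod =
        (unsigned __int128)(src1 & mask52) * (src2 & mask52);

    return acc + (uint64_t)(prod & mask52);
}
```

The high-half counterpart (VPMADD52HUQ) would instead accumulate prod >> 52.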

From xen-devel-bounces@lists.xenproject.org Thu Apr 06 20:56:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 20:56:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518990.806100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkWfC-0004wg-Mf; Thu, 06 Apr 2023 20:56:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518990.806100; Thu, 06 Apr 2023 20:56:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkWfC-0004wZ-J5; Thu, 06 Apr 2023 20:56:58 +0000
Received: by outflank-mailman (input) for mailman id 518990;
 Thu, 06 Apr 2023 20:56:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkWfB-0004wP-TJ; Thu, 06 Apr 2023 20:56:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkWfB-0006Fe-RH; Thu, 06 Apr 2023 20:56:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkWfB-0007FG-DK; Thu, 06 Apr 2023 20:56:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkWfB-0008Tl-Ct; Thu, 06 Apr 2023 20:56:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180171-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180171: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
X-Osstest-Versions-That:
    xen=a5087069a8c40541ba81fa0e2850471c949932b3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Apr 2023 20:56:57 +0000

flight 180171 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180171/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2
baseline version:
 xen                  a5087069a8c40541ba81fa0e2850471c949932b3

Last test of basis   180169  2023-04-06 14:03:37 Z    0 days
Testing same since   180171  2023-04-06 19:00:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a5087069a8..ddaf7bb0cf  ddaf7bb0cfd27369252de52e4b03410c4065bad2 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 21:02:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 21:02:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.518996.806110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkWjy-0006Pt-9W; Thu, 06 Apr 2023 21:01:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 518996.806110; Thu, 06 Apr 2023 21:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkWjy-0006Pm-68; Thu, 06 Apr 2023 21:01:54 +0000
Received: by outflank-mailman (input) for mailman id 518996;
 Thu, 06 Apr 2023 21:01:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w3yY=75=citrix.com=prvs=453e3c94d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pkWjx-0006Pg-CZ
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 21:01:53 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3f41457f-d4be-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 23:01:50 +0200 (CEST)
Received: from mail-bn8nam12lp2173.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 06 Apr 2023 17:01:44 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN9PR03MB6092.namprd03.prod.outlook.com (2603:10b6:408:11d::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Thu, 6 Apr
 2023 21:01:42 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6254.035; Thu, 6 Apr 2023
 21:01:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f41457f-d4be-11ed-b464-930f4c7d94ae
Message-ID: <1fa826b6-7d85-9f1b-037d-7d6f3d35461a@citrix.com>
Date: Thu, 6 Apr 2023 22:01:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 7/9] x86emul: support AVX-VNNI-INT8
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <e5c21de7-8802-9226-82f6-505c8f4d6ac8@suse.com>
In-Reply-To: <e5c21de7-8802-9226-82f6-505c8f4d6ac8@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0017.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BN9PR03MB6092:EE_
X-MS-Office365-Filtering-Correlation-Id: 27cd20b8-7707-4c6d-5132-08db36e21f53
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 27cd20b8-7707-4c6d-5132-08db36e21f53
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Apr 2023 21:01:42.2335
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR03MB6092

On 04/04/2023 3:54 pm, Jan Beulich wrote:
> These are close relatives of the AVX-VNNI ISA extension. Since the insns
> here and in particular their memory access patterns follow the usual
> scheme (and especially the byte variants of AVX-VNNI), I didn't think it
> was necessary to add a contrived test specifically for them.
>
> While making the addition also re-wire AVX-VNNI's handling to
> simd_0f_ymm: There's no reason to check the AVX feature alongside the
> one actually of interest (there are a few features where two checks are
> actually necessary, e.g. GFNI+AVX, but this isn't the case here).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

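For context, the byte variants discussed here (e.g. VPDPBSSD from AVX-VNNI-INT8) compute a per-dword dot product of four signed-byte pairs, accumulated into each 32-bit lane. A rough scalar model of one lane (the function name is illustrative, not taken from the patch or the emulator):

```c
#include <stdint.h>

/* Illustrative scalar model of one 32-bit lane of VPDPBSSD: multiply the
 * four signed-byte pairs of src1 and src2 and accumulate the products
 * into acc (the non-saturating form wraps on overflow). */
static int32_t vpdpbssd_lane(int32_t acc, uint32_t src1, uint32_t src2)
{
    for (int i = 0; i < 4; i++) {
        int8_t a = (int8_t)(src1 >> (8 * i));
        int8_t b = (int8_t)(src2 >> (8 * i));

        acc += (int32_t)a * b;
    }

    return acc;
}
```

The SS/SU/UU name suffixes select the signedness of the two sources; the forms with a trailing S saturate instead of wrapping.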

From xen-devel-bounces@lists.xenproject.org Thu Apr 06 21:03:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Apr 2023 21:03:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519000.806120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkWla-00071C-Ne; Thu, 06 Apr 2023 21:03:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519000.806120; Thu, 06 Apr 2023 21:03:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkWla-000715-Km; Thu, 06 Apr 2023 21:03:34 +0000
Received: by outflank-mailman (input) for mailman id 519000;
 Thu, 06 Apr 2023 21:03:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S/rR=75=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pkWlZ-00070z-Fw
 for xen-devel@lists.xenproject.org; Thu, 06 Apr 2023 21:03:33 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7b75b6c1-d4be-11ed-b464-930f4c7d94ae;
 Thu, 06 Apr 2023 23:03:30 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4ABD860F8A;
 Thu,  6 Apr 2023 21:03:29 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 77CD6C433EF;
 Thu,  6 Apr 2023 21:03:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b75b6c1-d4be-11ed-b464-930f4c7d94ae
Date: Thu, 6 Apr 2023 14:03:25 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>, 
    xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    Gianluca Guida <gianluca@rivosinc.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v9 4/5] xen/arm: switch ARM to use generic implementation
 of bug.h
In-Reply-To: <cce5e277-8692-d339-0283-55620c349fcf@xen.org>
Message-ID: <alpine.DEB.2.22.394.2304061403190.111906@ubuntu-linux-20-04-desktop>
References: <cover.1680086655.git.oleksii.kurochko@gmail.com> <8fdb98350ae4fc6029738d0aabe13a57e1945a50.1680086655.git.oleksii.kurochko@gmail.com> <3a94ad32-159d-d06f-cba6-3bb40f9f2085@xen.org> <alpine.DEB.2.22.394.2304021557140.4566@ubuntu-linux-20-04-desktop>
 <cce5e277-8692-d339-0283-55620c349fcf@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 5 Apr 2023, Julien Grall wrote:
> On 03/04/2023 00:15, Stefano Stabellini wrote:
> > On Fri, 31 Mar 2023, Julien Grall wrote:
> > > Hi Oleksii,
> > > 
> > > I was going to ack the patch but then I spotted something that would want some clarification.
> > > 
> > > On 29/03/2023 11:50, Oleksii Kurochko wrote:
> > > > diff --git a/xen/arch/arm/include/asm/bug.h b/xen/arch/arm/include/asm/bug.h
> > > > index cacaf014ab..3fb0471a9b 100644
> > > > --- a/xen/arch/arm/include/asm/bug.h
> > > > +++ b/xen/arch/arm/include/asm/bug.h
> > > > @@ -1,6 +1,24 @@
> > > >    #ifndef __ARM_BUG_H__
> > > >    #define __ARM_BUG_H__
> > > >    +/*
> > > > + * Please do not include in the header any header that might
> > > > + * use BUG/ASSERT/etc macros as they will be defined later after
> > > > + * the return to <xen/bug.h> from the current header:
> > > > + *
> > > > + * <xen/bug.h>:
> > > > + *  ...
> > > > + *   <asm/bug.h>:
> > > > + *     ...
> > > > + *     <any_header_which_uses_BUG/ASSERT/etc macros.h>
> > > > + *     ...
> > > > + *  ...
> > > > + *  #define BUG() ...
> > > > + *  ...
> > > > + *  #define ASSERT() ...
> > > > + *  ...
> > > > + */
> > > > +
> > > >    #include <xen/types.h>
> > > >      #if defined(CONFIG_ARM_32)
> > > > @@ -11,76 +29,7 @@
> > > >    # error "unknown ARM variant"
> > > >    #endif
> > > >    -#define BUG_FRAME_STRUCT
> > > > -
> > > > -struct bug_frame {
> > > > -    signed int loc_disp;    /* Relative address to the bug address */
> > > > -    signed int file_disp;   /* Relative address to the filename */
> > > > -    signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
> > > > -    uint16_t line;          /* Line number */
> > > > -    uint32_t pad0:16;       /* Padding for 8-bytes align */
> > > > -};
> > > > -
> > > > -#define bug_loc(b) ((const void *)(b) + (b)->loc_disp)
> > > > -#define bug_file(b) ((const void *)(b) + (b)->file_disp);
> > > > -#define bug_line(b) ((b)->line)
> > > > -#define bug_msg(b) ((const char *)(b) + (b)->msg_disp)
> > > > -
> > > > -/* Many versions of GCC doesn't support the asm %c parameter which would
> > > > - * be preferable to this unpleasantness. We use mergeable string
> > > > - * sections to avoid multiple copies of the string appearing in the
> > > > - * Xen image. BUGFRAME_run_fn needs to be handled separately.
> > > > - */
> > > 
> > > Given this comment ...
> > > 
> > > > -#define BUG_FRAME(type, line, file, has_msg, msg) do {                  \
> > > > -    BUILD_BUG_ON((line) >> 16);                                         \
> > > > -    BUILD_BUG_ON((type) >= BUGFRAME_NR);                                \
> > > > -    asm ("1:"BUG_INSTR"\n"                                              \
> > > > -         ".pushsection .rodata.str, \"aMS\", %progbits, 1\n"            \
> > > > -         "2:\t.asciz " __stringify(file) "\n"                           \
> > > > -         "3:\n"                                                         \
> > > > -         ".if " #has_msg "\n"                                           \
> > > > -         "\t.asciz " #msg "\n"                                          \
> > > > -         ".endif\n"                                                     \
> > > > -         ".popsection\n"                                                \
> > > > -         ".pushsection .bug_frames." __stringify(type) ", \"a\", %progbits\n"\
> > > > -         "4:\n"                                                         \
> > > > -         ".p2align 2\n"                                                 \
> > > > -         ".long (1b - 4b)\n"                                            \
> > > > -         ".long (2b - 4b)\n"                                            \
> > > > -         ".long (3b - 4b)\n"                                            \
> > > > -         ".hword " __stringify(line) ", 0\n"                            \
> > > > -         ".popsection");                                                \
> > > > -} while (0)
> > > > -
> > > > -/*
> > > > - * GCC will not allow to use "i"  when PIE is enabled (Xen doesn't set
> > > > the
> > > > - * flag but instead rely on the default value from the compiler). So
> > > > the
> > > > - * easiest way to implement run_in_exception_handler() is to pass the
> > > > to
> > > > - * be called function in a fixed register.
> > > > - */
> > > > -#define  run_in_exception_handler(fn) do {
> > > > \
> > > > -    asm ("mov " __stringify(BUG_FN_REG) ", %0\n"
> > > > \
> > > > -         "1:"BUG_INSTR"\n"
> > > > \
> > > > -         ".pushsection .bug_frames." __stringify(BUGFRAME_run_fn) ","
> > > > \
> > > > -         "             \"a\", %%progbits\n"
> > > > \
> > > > -         "2:\n"
> > > > \
> > > > -         ".p2align 2\n"
> > > > \
> > > > -         ".long (1b - 2b)\n"
> > > > \
> > > > -         ".long 0, 0, 0\n"
> > > > \
> > > > -         ".popsection" :: "r" (fn) : __stringify(BUG_FN_REG) );
> > > > \
> > > > -} while (0)
> > > > -
> > > > -#define WARN() BUG_FRAME(BUGFRAME_warn, __LINE__, __FILE__, 0, "")
> > > > -
> > > > -#define BUG() do {                                              \
> > > > -    BUG_FRAME(BUGFRAME_bug,  __LINE__, __FILE__, 0, "");        \
> > > > -    unreachable();                                              \
> > > > -} while (0)
> > > > -
> > > > -#define assert_failed(msg) do {                                 \
> > > > -    BUG_FRAME(BUGFRAME_assert, __LINE__, __FILE__, 1, msg);     \
> > > > -    unreachable();                                              \
> > > > -} while (0)
> > > > +#define BUG_ASM_CONST   "c"
> > > 
> > > ... you should explain in the commit message why this is needed and the
> > > problem described above is not a problem anymore.
> > > 
> > > For instance, I managed to build it without 'c' on arm64 [1]. But it does
> > > break on arm32 [2]. I know that Arm is also where '%c' was an issue.
> > > 
> > > Skimming through Linux, the reason seems to be that GCC may add '#' when
> > > it should not. That said, I haven't looked at the impact on the generic
> > > implementation. Nor have I looked at which versions may be affected (the
> > > original message was from 2011).
> > > 
> > > However, without an explanation, I am afraid this can't go in, because I am
> > > worried we may break some users (thankfully that might just be a compilation
> > > issue rather than weird behavior).
> > > 
> > > Bertrand, Stefano, do you know if this is still an issue?
> > 
> > I don't know, but I confirm your observation.
> > 
> > On my system, both ARM64 and ARM32 compile without problems with "c".
> > Without "c", only ARM64 compiles without problems, while ARM32 breaks.
> > 
> > My ARM32 compiler is:
> > arm-linux-gnueabihf-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
> > 
> > Adding a meaningful explanation to the commit message might be
> > difficult in this case.
> 
> Agree. One would need to look at the compiler code to confirm. We should at
> least acknowledge the change in the commit message and also...
> 
> > 
> > Maybe instead we could run a few tests with different versions of arm64
> > and arm32 gcc to check that everything still works? If everything checks
> > out, given that the issue has been unchanged for 10+ years we could just
> > keep "c" and move forward with it?
> 
> ... confirm that we are still able to compile with GCC 4.9 (arm32) and GCC 5.1
> (arm64).
> 
Do we have those compilers in the CI? (I couldn't easily confirm from the
automation directory).

In the CI we have:
- arm64v8/alpine:3.12, gcc 9.3.0
- arm64v8/debian:unstable, gcc 12.2.0
- arm64v8/debian:unstable cross arm32, gcc 12.2.0
- yocto, not sure exactly but it is going to be 9.x or newer


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 21:23:04 2023
Message-ID: <385b175a-5123-3b1f-0663-1a956f5ca3a7@citrix.com>
Date: Thu, 6 Apr 2023 22:22:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 8/9] x86emul: support AVX-NE-CONVERT insns
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <bdcb4822-397a-0795-08eb-74e661d9b7ae@suse.com>
In-Reply-To: <bdcb4822-397a-0795-08eb-74e661d9b7ae@suse.com>

On 04/04/2023 3:54 pm, Jan Beulich wrote:
> Matching what was done earlier, explicit tests are added only for
> irregular insn / memory access patterns.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>, with two minor
requests.

> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -214,7 +214,7 @@ static const char *const str_7c1[32] =
>  
>  static const char *const str_7d1[32] =
>  {
> -    [ 4] = "avx-vnni-int8",
> +    [ 4] = "avx-vnni-int8", [ 5] = "avx-ne-convert",

I'd leave a bit more horizontal space.  These names are getting rather
long, and we're only 10% into this word.

> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -6208,6 +6208,19 @@ x86_emulate(
>          host_and_vcpu_must_have(avx512_vbmi2);
>          goto avx512f_no_sae;
>  
> +    case X86EMUL_OPC_VEX   (0x0f38, 0xb0): /* vcvtneoph2ps mem,[xy]mm */
> +    case X86EMUL_OPC_VEX_66(0x0f38, 0xb0): /* vcvtneeph2ps mem,[xy]mm */
> +    case X86EMUL_OPC_VEX_F3(0x0f38, 0xb0): /* vcvtneebf162ps mem,[xy]mm */
> +    case X86EMUL_OPC_VEX_F2(0x0f38, 0xb0): /* vcvtneobf162ps mem,[xy]mm */
> +        generate_exception_if(ea.type != OP_MEM, EXC_UD);
> +        /* fall through */

Only just occurred to me, but we should probably be using fallthrough;
in new code, now that there's a real attribute to use.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 06 22:04:15 2023
Message-ID: <1c2cfeec-d64a-3e4b-013a-840f812da12b@citrix.com>
Date: Thu, 6 Apr 2023 23:03:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 9/9] x86emul+VMX: support {RD,WR}MSRLIST
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Kevin Tian <kevin.tian@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <b567e068-dcab-b294-9706-ffbecb36de3c@suse.com>
In-Reply-To: <b567e068-dcab-b294-9706-ffbecb36de3c@suse.com>

On 04/04/2023 3:55 pm, Jan Beulich wrote:
> These are "compound" instructions to issue a series of RDMSR / WRMSR
> respectively. In the emulator we can therefore implement them by using
> the existing msr_{read,write}() hooks. The memory accesses rely on the
> HVM ->read() / ->write() hooks already being linear-address
> (x86_seg_none) aware (by way of hvmemul_virtual_to_linear() handling
> this case).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TODO: Use VMX tertiary execution control (once bit is known; see
>       //todo-s) and then further adjust cpufeatureset.h.
>
> RFC: In vmx_vmexit_handler() handling is forwarded to the emulator
>      blindly. Alternatively we could consult the exit qualification and
>      process just a single MSR at a time (without involving the
>      emulator), exiting back to the guest after every iteration. (I
>      don't think a mix of both models makes a lot of sense.)

{RD,WR}MSRLIST are supposed to be used for context switch paths.  They
really shouldn't be exiting in the common case.

What matters here is the conditional probability of a second MSR being
intercepted too, given that one already has been.  And I don't have a
clue how to answer this.

I would not expect Introspection to be intercepting a fastpath MSR. 
(And if it does, frankly it can live with the consequences.)

For future scenarios such as reloading PMU/LBR/whatever, these will be
all-or-nothing and we'd expect to have them eagerly in context anyway.

If I were going to guess, I'd say that probably MSR_XSS or MSR_SPEC_CTRL
will be giving us the most grief here, because they're both ones that
are liable to be touched on a context switch path, and have split
host/guest bits.

> RFC: For PV, priv_op_ops would need to gain proper read/write hooks,
>      which doesn't look desirable (albeit there we could refuse to
>      handle anything other than x86_seg_none); we may instead want to
>      consider not supporting the feature for PV guests, requiring e.g.
>      Linux to process the lists in new pvops hooks.

Ah - funny you should ask.  See patch 2.  These are even better reasons
not to support on PV guests.

> RFC: I wasn't sure whether to add preemption checks to the loops -
>      thoughts?

I'd be tempted to.  Mostly because a guest can exert 64x the longest
MSR's worth of pressure here, along with the associated emulation
overhead.

64* "write hypercall page" sounds expensive.  So too does 64* MSR_PAT,
given all the EPT actions behind the scenes.

It's probably one of those.

> With the VMX side of the spec still unclear (tertiary execution control
> bit unspecified in ISE 046) we can't enable the insn yet for (HVM) guest
> use. The precise behavior of MSR_BARRIER is also not spelled out, so the
> (minimal) implementation is a guess for now.

MSRs are expensive for several reasons.  The %ecx decode alone isn't
cheap, nor is the reserved bit checking, and that's before starting the
action itself.

What the list form gets you is the ability to pipeline the checks on the
following MSR while performing the action of the current one.  I suspect
there are plans to try and parallelise the actions too if possible.

As I understand it, BARRIER just halts the pipelined processing of the
next MSR.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 00:13:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 00:13:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519017.806149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkZjM-0001qX-0v; Fri, 07 Apr 2023 00:13:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519017.806149; Fri, 07 Apr 2023 00:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkZjL-0001qQ-UJ; Fri, 07 Apr 2023 00:13:27 +0000
Received: by outflank-mailman (input) for mailman id 519017;
 Fri, 07 Apr 2023 00:13:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkZjK-0001qG-7s; Fri, 07 Apr 2023 00:13:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkZjK-0003EB-5M; Fri, 07 Apr 2023 00:13:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkZjJ-0002Pd-KS; Fri, 07 Apr 2023 00:13:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkZjJ-0007sE-Jx; Fri, 07 Apr 2023 00:13:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dkmuDh8o+p0bO8kRUlosY5ig20axNyNvwTiHf5J2eQA=; b=0qyR5cAbwmuYQWLnCgA2kBgG+E
	fjsni2ta+qTItxdQ4RPP2b3HuBbK20PLGE2cwQAKU042zDAG10JAQFOW2PFWZTEPoan027hf4i+U4
	24cIugsUdomEqV0RcfzeKeE4JRIvoWhy4VRZIL6O9Q6SjregJxO9YOBr7XTs0UqEoFf8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180168-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180168: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-pair:xen-install/src_host:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt-raw:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=c6f3cbca32bde9ee94d9949aa63e8a7ef2d7bc5b
X-Osstest-Versions-That:
    qemuu=b5fba99ec7969054ab2f3727d2df014b5b72e4f1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Apr 2023 00:13:25 +0000

flight 180168 qemu-mainline real [real]
flight 180174 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180168/
http://logs.test-lab.xenproject.org/osstest/logs/180174/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pair        10 xen-install/src_host fail pass in 180174-retest
 test-amd64-i386-libvirt-raw 19 guest-start/debian.repeat fail pass in 180174-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180153
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180153
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180153
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180153
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180153
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                c6f3cbca32bde9ee94d9949aa63e8a7ef2d7bc5b
baseline version:
 qemuu                b5fba99ec7969054ab2f3727d2df014b5b72e4f1

Last test of basis   180153  2023-04-05 12:38:48 Z    1 days
Testing same since   180168  2023-04-06 10:37:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael S. Tsirkin <mst@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   b5fba99ec7..c6f3cbca32  c6f3cbca32bde9ee94d9949aa63e8a7ef2d7bc5b -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 04:59:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 04:59:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519025.806160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkeC6-0002qs-Ia; Fri, 07 Apr 2023 04:59:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519025.806160; Fri, 07 Apr 2023 04:59:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkeC6-0002ql-DZ; Fri, 07 Apr 2023 04:59:26 +0000
Received: by outflank-mailman (input) for mailman id 519025;
 Fri, 07 Apr 2023 04:59:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkeC5-0002qb-9t; Fri, 07 Apr 2023 04:59:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkeC5-0000g0-5I; Fri, 07 Apr 2023 04:59:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkeC4-00032O-MI; Fri, 07 Apr 2023 04:59:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkeC4-0000hf-Ju; Fri, 07 Apr 2023 04:59:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=A17r9BwFn8tKWcPnBxRaVzrXm191/qaVotDZdHCHaAc=; b=EqLnsqEiW14TNhJ5IRpjfNF4wb
	1eOdLoQkho2d4Z2qqXPl3VpHu5EdM2+ilrjdjO3x+MgSj9JgL8X+24boOtLLuQvAJkFjSKqHxQIYW
	SnFx1jED6MxgvmC0DYPHTiWxfttE6WsgZmO1vILb5ykBido4uPrT7vOrKuNsER00HFYY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180172-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180172: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=f2afccfefe7be1f7346564fe619277110d341f9b
X-Osstest-Versions-That:
    linux=99ddf2254febae9eab7fb0bcc02c5322243f5c49
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Apr 2023 04:59:24 +0000

flight 180172 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180172/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180159
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180159
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180159
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180159
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180159
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                f2afccfefe7be1f7346564fe619277110d341f9b
baseline version:
 linux                99ddf2254febae9eab7fb0bcc02c5322243f5c49

Last test of basis   180159  2023-04-05 18:14:53 Z    1 days
Testing same since   180172  2023-04-06 19:10:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Amit Pundir <amit.pundir@linaro.org>
  Andrea Righi <andrea.righi@canonical.com>
  Andy Chi <andy.chi@canonical.com>
  Andy Roulin <aroulin@nvidia.com>
  Anh Tuan Phan <tuananhlfc@gmail.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Arseniy Krasnov <AVKrasnov@sberdevices.ru>
  Bard Liao <yung-chuan.liao@linux.intel.com>
  Ben Greear <greearb@candelatech.com>
  Benjamin Asbach <asbachb.kernel@impl.it>
  Bobby Eshleman <bobby.eshleman@bytedance.com>
  Boris Brezillon <boris.brezillon@collabora.com>
  chowtom <chowtom@gmail.com>
  Christian Brauner <brauner@kernel.org>
  Christian König <christian.koenig@amd.com>
  Christian König <ckoenig.leichtzumerken@gmail.com>
  Corinna Vinschen <vinschen@redhat.com>
  Daniel Golle <daniel@makrotopia.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
  David S. Miller <davem@davemloft.net>
  Duy Nguyen <duy.nguyen.rh@renesas.com>
  Eric Dumazet <edumazet@google.com>
  Eugene Huang <eugene.huang99@gmail.com>
  Felix Fietkau <nbd@nbd.name>
  Frank Wunderlich <frank-w@public-files.de>
  Ge-org Brohammer <gbrohammer@outlook.com>
  Greg Ungerer <gerg@linux-m68k.org>
  Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
  Guenter Roeck <linux@roeck-us.net>
  Guillaume Nault <gnault@redhat.com>
  Gustav Ekelund <gustaek@axis.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Hans de Goede <hdegoede@redhat.com>
  Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jani Nikula <jani.nikula@intel.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jason Montleon <jmontleo@redhat.com>
  Jeremy Soller <jeremy@system76.com>
  Jiri Slaby (SUSE) <jirislaby@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Junfeng Guo <junfeng.guo@intel.com>
  Kalle Valo <kvalo@kernel.org>
  Kalle Valo <quic_kvalo@quicinc.com>
  Karol Herbst <kherbst@redhat.com>
  Karol Wachowski <karol.wachowski@linux.intel.com>
  Khanh Le <khanh.le.xr@renesas.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Lai Peter Jun Ann <peter.jun.ann.lai@intel.com>
  Lingyu Liu <lingyu.liu@intel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Mark Pearson <mpearson-lenovo@squebb.ca>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Matthew Auld <matthew.auld@intel.com>
  Michael Sit Wei Hong <michael.wei.hong.sit@intel.com>
  Michal Sojka <michal.sojka@cvut.cz>
  Min Li <lm0963hack@gmail.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Hartkopp <socketcan@hartkopp.net>
  Paolo Abeni <pabeni@redhat.com>
  Pengfei Xu <pengfei.xu@intel.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafal Romanowski <rafal.romanowski@intel.com>
  Ram Kumar Dharuman <quic_ramd@quicinc.com>
  Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
  Ryder Lee <ryder.lee@mediatek.com>
  Shahab Vahedi <shahab@synopsys.com>
  Shailend Chand <shailend@google.com>
  Shengjiu Wang <shengjiu.wang@nxp.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Shuangpeng Bai <sjb7183@psu.edu>
  Siddharth Vadapalli <s-vadapalli@ti.com>
  Simei Su <simei.su@intel.com>
  Song Yoong Siang <yoong.siang.song@intel.com>
  Sricharan Ramabadhran <quic_srichara@quicinc.com>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
  Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
  Takashi Iwai <tiwai@suse.de>
  Thierry Reding <thierry.reding@gmail.com>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tim Crawford <tcrawford@system76.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tvrtko Ursulin <tvrtko.ursulin@intel.com>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Wojciech Drewek <wojciech.drewek@intel.com>
  Xin Long <lucien.xin@gmail.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   99ddf2254feb..f2afccfefe7b  f2afccfefe7be1f7346564fe619277110d341f9b -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 06:13:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 06:13:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519032.806171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkfLY-0002p6-5D; Fri, 07 Apr 2023 06:13:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519032.806171; Fri, 07 Apr 2023 06:13:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkfLY-0002oz-0X; Fri, 07 Apr 2023 06:13:16 +0000
Received: by outflank-mailman (input) for mailman id 519032;
 Fri, 07 Apr 2023 06:13:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkfLW-0002op-KR; Fri, 07 Apr 2023 06:13:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkfLW-0002qS-I8; Fri, 07 Apr 2023 06:13:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkfLW-00053Z-8K; Fri, 07 Apr 2023 06:13:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkfLW-0008Dr-7o; Fri, 07 Apr 2023 06:13:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c+I7dqIwebt0T0UMEaeP/okWDXEJV3ElKrkGLAFpFH0=; b=2Xnty8S/E/qJ1q6RDGTAIu0cF7
	yM247on3NHUUx9Ku8pKdmrx+38db/GKAb/94f5ifC0V5yE9hPopqC/W0pU1nIoeVuPYU8PxN/UYhx
	ip554JRqYMswqEOQurlyzTmxC3NbRiFj3HKN11DWXlRhFgY04aK1WVTjOESwVW8tD3IA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180175-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180175: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=cdf6ff1719a9453351baec4bd32fcfc30e9ceeac
X-Osstest-Versions-That:
    ovmf=3e3be2cbc286e773ed5bd3afd5942440546888de
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Apr 2023 06:13:14 +0000

flight 180175 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180175/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cdf6ff1719a9453351baec4bd32fcfc30e9ceeac
baseline version:
 ovmf                 3e3be2cbc286e773ed5bd3afd5942440546888de

Last test of basis   180166  2023-04-06 07:10:40 Z    0 days
Testing same since   180175  2023-04-07 04:13:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  KasimX Liu <kasimx.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3e3be2cbc2..cdf6ff1719  cdf6ff1719a9453351baec4bd32fcfc30e9ceeac -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 09:39:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 09:39:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519042.806179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkiYe-0006Uf-T9; Fri, 07 Apr 2023 09:39:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519042.806179; Fri, 07 Apr 2023 09:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkiYe-0006UY-QJ; Fri, 07 Apr 2023 09:39:00 +0000
Received: by outflank-mailman (input) for mailman id 519042;
 Fri, 07 Apr 2023 09:38:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkiYc-0006UO-Vb; Fri, 07 Apr 2023 09:38:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkiYc-0007j0-Sp; Fri, 07 Apr 2023 09:38:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkiYc-0001bz-BB; Fri, 07 Apr 2023 09:38:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkiYc-0003wX-Am; Fri, 07 Apr 2023 09:38:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kBU11CvgCiTZFmiSNigVtmDugrxRWY2IzDHlVYubW8k=; b=OUUSZCIQeWc0HQHwSVIO5JNDVs
	YH81h2jQR9pdDrZRP6app6jxQh8TwTQeic48zWfpf1iOJDzjtnK/lcbYEXSlmfSnevj+ERBOmkSEO
	Z13pYRv0EcC4OiejuAFJguVhr5629DhQQVbNdNzTyKMz5SMjCuRWlVQTSNaOKnMbC6MQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180173-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180173: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-multivcpu:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=a5087069a8c40541ba81fa0e2850471c949932b3
X-Osstest-Versions-That:
    xen=881ba20eb0222305a9d2cd090c9345992794f4f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Apr 2023 09:38:58 +0000

flight 180173 xen-unstable real [real]
flight 180177 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180173/
http://logs.test-lab.xenproject.org/osstest/logs/180177/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-multivcpu 20 guest-localmigrate/x10 fail pass in 180177-retest
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 180177-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop      fail blocked in 180163
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180163
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180163
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180163
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180163
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180163
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180163
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180163
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180163
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  a5087069a8c40541ba81fa0e2850471c949932b3
baseline version:
 xen                  881ba20eb0222305a9d2cd090c9345992794f4f5

Last test of basis   180163  2023-04-05 23:38:37 Z    1 days
Testing same since   180173  2023-04-06 19:49:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   881ba20eb0..a5087069a8  a5087069a8c40541ba81fa0e2850471c949932b3 -> master


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 13:16:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 13:16:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519052.806199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pklxJ-0002s0-I7; Fri, 07 Apr 2023 13:16:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519052.806199; Fri, 07 Apr 2023 13:16:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pklxJ-0002rt-CY; Fri, 07 Apr 2023 13:16:41 +0000
Received: by outflank-mailman (input) for mailman id 519052;
 Fri, 07 Apr 2023 13:16:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pklxI-0002re-Mb; Fri, 07 Apr 2023 13:16:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pklxI-0004CA-Hy; Fri, 07 Apr 2023 13:16:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pklxI-0004FA-5e; Fri, 07 Apr 2023 13:16:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pklxI-0006PZ-5B; Fri, 07 Apr 2023 13:16:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L7vHLPUamHg7iJFcLXHRa8750KD5bY8BXOp7spSyeZE=; b=XHKFx3tnsutCeYus+nY/ST0rOV
	nlyDCPCscIXoJO/EvJOXNjvY/nYJhcH5ZbQcLRhYln8rbdtkXcsp4QMB4Qef7w9zxnB3bNgCqHlDS
	3jTjBN/so5AKlZD/Clem18e8GXIRHGHd5ZHHz/H8H+9To3Lrss2fL+0mgoK8J+IWMXK4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180176-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180176: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    libvirt:test-arm64-arm64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=7eead248c65f336afd9fd57ddcfe9e182cf30a6d
X-Osstest-Versions-That:
    libvirt=a56833e47a76e406c62ea0531a99ca199ad5b00a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Apr 2023 13:16:40 +0000

flight 180176 libvirt real [real]
flight 180179 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180176/
http://logs.test-lab.xenproject.org/osstest/logs/180179/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-qcow2 17 guest-start/debian.repeat fail pass in 180179-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              7eead248c65f336afd9fd57ddcfe9e182cf30a6d
baseline version:
 libvirt              a56833e47a76e406c62ea0531a99ca199ad5b00a

Last test of basis   180147  2023-04-05 04:18:51 Z    2 days
Testing same since   180176  2023-04-07 04:18:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ján Tomko <jtomko@redhat.com>
  Jérémie Tarot <silopolis@gmail.com>
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Borecki <pavel.borecki@gmail.com>
  Yang Yulin <yylteam@icloud.com>
  Yuri Chornoivan <yurchor@ukr.net>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   a56833e47a..7eead248c6  7eead248c65f336afd9fd57ddcfe9e182cf30a6d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 15:26:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 15:26:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519059.806208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pknyE-0007pg-Jx; Fri, 07 Apr 2023 15:25:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519059.806208; Fri, 07 Apr 2023 15:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pknyE-0007pZ-H6; Fri, 07 Apr 2023 15:25:46 +0000
Received: by outflank-mailman (input) for mailman id 519059;
 Fri, 07 Apr 2023 15:25:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pknyD-0007pP-GY; Fri, 07 Apr 2023 15:25:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pknyD-00071O-CH; Fri, 07 Apr 2023 15:25:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pknyC-0000G8-Tc; Fri, 07 Apr 2023 15:25:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pknyC-0003Y4-T2; Fri, 07 Apr 2023 15:25:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ByD4i+VqFUirE9zzFzftvTCEaF5vHCznD5cfuN/FWwE=; b=J5QmbukJ8nQyNvcUO3XqgVGoTv
	1kKhNZhHdOJETe3BzaGVLte5s3FrwvSIrMLjGtdisPWqc5rYmTShisZoTz8WlDX1BA2R/cAOK47N9
	dnQLXt4J5viNEL2toHVlpEaaYUuChMewflfUcOE+Zbf0iFIGtzLwI+9DEmfk5VwFZTA8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180180-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180180: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6405cd03048c1850d5047bf66fefa04189a05c94
X-Osstest-Versions-That:
    ovmf=cdf6ff1719a9453351baec4bd32fcfc30e9ceeac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Apr 2023 15:25:44 +0000

flight 180180 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180180/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6405cd03048c1850d5047bf66fefa04189a05c94
baseline version:
 ovmf                 cdf6ff1719a9453351baec4bd32fcfc30e9ceeac

Last test of basis   180175  2023-04-07 04:13:37 Z    0 days
Testing same since   180180  2023-04-07 13:43:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cdf6ff1719..6405cd0304  6405cd03048c1850d5047bf66fefa04189a05c94 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 15:48:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 15:48:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519066.806223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkoKV-0001tv-O1; Fri, 07 Apr 2023 15:48:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519066.806223; Fri, 07 Apr 2023 15:48:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkoKV-0001tG-JD; Fri, 07 Apr 2023 15:48:47 +0000
Received: by outflank-mailman (input) for mailman id 519066;
 Fri, 07 Apr 2023 15:48:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aVqb=76=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pkoKU-0001rF-LK
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 15:48:46 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ad953bc0-d55b-11ed-85db-49a42c6b2330;
 Fri, 07 Apr 2023 17:48:45 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id s20so23593292ljp.7
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 08:48:45 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 v12-20020a2e9f4c000000b00295a583a20bsm874765ljk.74.2023.04.07.08.48.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 07 Apr 2023 08:48:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad953bc0-d55b-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680882524; x=1683474524;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Bvofdso7h4C0wbCuGQpq+8NVtokRUIZ6ky3V3PNneG0=;
        b=UDjqnxv9yR2iTme8IfxrqJoKtTT30hQa3MiTyTzUHwl68nl1Pg/RDJfQve/u477hyf
         lkcy8haZp0tzhaVuLFKsyA1honQArutrJbWZz954UQQqOpeE7WQ/eO8dJoHJh2OP78oA
         p43yE+1vK+VNYdcRlgMOimU+ZWNBeUqBTXgbrjzcAVRBZUu1CCU/5thirqS7DOILUpow
         ABKPjWDmFHYBhcH8HCYvQBZwvv1Vxyk1hd0sehTQBcsv2Dckr2aQqwkVZBoRCri/f9ma
         3kyCh2nskDtgpRY+A+KbKJLHMr7MWfb9GSvh67L1+MgwBfQWZMWfL7By8YkqeKPctRGc
         pdkA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680882524; x=1683474524;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Bvofdso7h4C0wbCuGQpq+8NVtokRUIZ6ky3V3PNneG0=;
        b=vDRAlXCCUkc4Te2ROzUpdVqAMmxUbtj2FvPGyDoXytQMoFHn2IXHjgb6eTGr/4tNZw
         n28C77aQf85vyYZEJy6+t/ZRpMDlvwHY0dVstkXlwI4UDe1N04mPaYEQDrltc2/gILvq
         mfVU/LS5jypSWnYsOoeJJKfirECK4aS17UQw+/LzRGvs8X6wX6gftsUbXG0djpU6bIGm
         T+apnjLgCmRC+Gkt6Pd5LJbC2aVoVofuTYcuYWPrpW3r5b6TjBLkZuujblKTpFsiVwSV
         Ec8pYSy48pDUQM2MMTM76/HdW13hWWWBhy+ocRrwAiCHV3UCPtLUJPbZg04GLQQJRIhk
         0o7Q==
X-Gm-Message-State: AAQBX9cmEqj/nKBrvoqLHy8RtBWYN81iq9jdSo+u3D2MM8R/uzuJ10d/
	F3+/bkSF+RH3i2q7Jo0zi4gOHBBP6UE=
X-Google-Smtp-Source: AKy350bLe1FW59P/4vvlykz2DGsyUiixloBEybTWDJDX7QqQWSCeG7bP8ED/tSCAql+uRecHRGgN5w==
X-Received: by 2002:a2e:9214:0:b0:29c:9223:2f5e with SMTP id k20-20020a2e9214000000b0029c92232f5emr750160ljg.48.1680882524338;
        Fri, 07 Apr 2023 08:48:44 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v4 2/3] xen/riscv: setup initial pagetables
Date: Fri,  7 Apr 2023 18:48:38 +0300
Message-Id: <2cbb70c8b29713f1faf235b4a90722d78bf258eb.1680882176.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.1680882176.git.oleksii.kurochko@gmail.com>
References: <cover.1680882176.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch does two things:
1. Sets up the initial pagetables.
2. Enables the MMU, after which execution continues in
   cont_after_mmu_is_enabled().

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 - update the commit message to mention that the MMU is also enabled here
 - remove early_printk("All set up\n") as it was moved to the
   cont_after_mmu_is_enabled() function, which runs after the MMU is enabled.
---
Changes in V2:
 * Update the commit message
---
 xen/arch/riscv/setup.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 315804aa87..cf5dc5824e 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -21,7 +21,10 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 {
     early_printk("Hello from C env\n");
 
-    early_printk("All set up\n");
+    setup_initial_pagetables();
+
+    enable_mmu();
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Apr 07 15:48:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 15:48:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519065.806217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkoKV-0001rX-Ew; Fri, 07 Apr 2023 15:48:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519065.806217; Fri, 07 Apr 2023 15:48:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkoKV-0001rQ-C5; Fri, 07 Apr 2023 15:48:47 +0000
Received: by outflank-mailman (input) for mailman id 519065;
 Fri, 07 Apr 2023 15:48:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aVqb=76=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pkoKU-0001rE-Hs
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 15:48:46 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ad08272b-d55b-11ed-b464-930f4c7d94ae;
 Fri, 07 Apr 2023 17:48:44 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id a44so25247209ljr.10
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 08:48:44 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 v12-20020a2e9f4c000000b00295a583a20bsm874765ljk.74.2023.04.07.08.48.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 07 Apr 2023 08:48:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad08272b-d55b-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680882523; x=1683474523;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=n5vGzdg5KR3sFgMIx6uCZ4DHW95hzYjHqWeD/lfZZlE=;
        b=DvyRYkPNMC1FS6LbAG0WIH+w+5ZqtKJkon2VKeUvTeA3MfxSThlGVOFOrMkGX+ibM0
         m6EiRv2Buf7bi0hj9PJuW2UTk/BDLLX4HP7HGGGEnPAORpG7AcezXnPdGgW5Dbz6EE23
         E/lnMv5uJhrkxImToTWgwDA8LUQee052WQbTxsbk83Z4CcH5jbe6Hg57yJbBEz/oqUrF
         Y0m8L31CU9FAov86ikTe6OBcZ5DzkEj9e9HM1m5Nhu+r9WYHZ0yM1L51OCJoFjb7KjaJ
         /flVT5RLe4Vg3xPX5h2B1bXWAuywEBgu9KFzrbu5MTtGjfh1+jgGS/MecoaQWjC+rzjV
         pBZg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680882523; x=1683474523;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=n5vGzdg5KR3sFgMIx6uCZ4DHW95hzYjHqWeD/lfZZlE=;
        b=G74LdkQI6Bow3HCuaGVOE62zVhKVuaaO5r6b7zD211FzBuJ+suWusHuoH4kxAfqrIR
         ciBuM4rXM1IgFA4lhcAsxfZI3+vL7jx7wY6J1ZsLqTvkhGTvSnVYZkFMABEOSZjP6GI5
         yAXBhUkmBi5qR58twSucAGSTgolMUUGHNRl14grGHLagJX/CZVDKWhzjq/7XLHOOFHFq
         kyBNcNWeTh5ookSpw5G1O1E8BSK9LzItfnhAyOXzKzJ9qXUOYMRE9dAqkUClIrdmDzuk
         9dwjSqvcGf4SLn3QyZan2SPf+YqksnFcsgkB6/hDZiJywEbljwpgYF7AMLTo0s03jWsH
         rZ2g==
X-Gm-Message-State: AAQBX9eFOhLDt7Rs7iPj9jmGPv0O6FYeUIr/5qfbx2tFn1qYd9b2eOWo
	1tMRI8ma1WClXloxNB05ECDboBUZn6Q=
X-Google-Smtp-Source: AKy350YgFmRQ1UTY2zWln2c8Tl1UmqPWJBIhChxvfhk5ZQJuAPi68iEhGpcXUSckFx7SHu+pHMDJBQ==
X-Received: by 2002:a2e:97c1:0:b0:29e:a3a0:ee2f with SMTP id m1-20020a2e97c1000000b0029ea3a0ee2fmr649529ljj.30.1680882522804;
        Fri, 07 Apr 2023 08:48:42 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v4 0/3] enable MMU for RISC-V
Date: Fri,  7 Apr 2023 18:48:36 +0300
Message-Id: <cover.1680882176.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series is based on top of patches [1] and [2].

The patch series introduces the following things:
1. Functionality to build the page tables for Xen that map
   link-time addresses to load-time (physical) addresses.
2. Check that Xen's size is less than the page size.
3. Check that load addresses don't overlap with linker addresses.
4. Preparations for a proper switch to the virtual memory world.
5. Load the built page table into the SATP register.
6. Enable the MMU.

[1] https://lore.kernel.org/xen-devel/2785518800dce64fafb3096480a5ae4c4e026bcb.1678970065.git.oleksii.kurochko@gmail.com/
[2] https://lore.kernel.org/xen-devel/7c066d6dcc0618749df04785b34b93819148087d.1678970065.git.oleksii.kurochko@gmail.com/

---
Changes in V4:
  * use GB() macros instead of defining SZ_1G
  * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))
  * remove unnecessary 'asm' word at the end of #error
  * encapsulate pte_t definition in a struct
  * rename addr_to_ppn() to ppn_to_paddr().
  * change type of paddr argument from const unsigned long to paddr_t
  * update the prototype of pte_to_paddr().
  * calculate the size of the Xen binary based on the number of page tables
  * use unsigned int instead of uint32_t as its use isn't warranted.
  * remove extern of bss_{start,end} as they aren't used in mm.c anymore
  * fix code style
  * add argument for HANDLE_PGTBL macros instead of curr_lvl_num variable
  * make enable_mmu() noinline to prevent issues under link-time
    optimization, given the nature of enable_mmu()
  * add function to check that SATP_MODE is supported.
  * update the commit message
  * update setup_initial_pagetables to set correct PTE flags in one pass
    instead of calling setup_pte_permissions after setup_initial_pagetables()
    as setup_initial_pagetables() isn't used to change permission flags.
---
Changes in V3:
  * Update the cover letter message: the patch series doesn't depend on
    [ RISC-V basic exception handling implementation ] as it was decided
    to enable the MMU before the implementation of exception handling. Also,
    the MMU patch series is based on two other patches which weren't
    merged: [1] and [2]
  - Update the commit message for [ [PATCH v3 1/3]
    xen/riscv: introduce setup_initial_pages ].
  - update definition of pte_t structure to have a proper size of pte_t in case of RV32.
  - update asm/mm.h with new functions and remove unnecessary 'extern'.
  - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
  - update paddr_to_pte() to receive permissions as an argument.
  - add check that map_start & pa_start is properly aligned.
  - move  defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to <asm/page-bits.h>
  - Rename PTE_SHIFT to PTE_PPN_SHIFT
  - refactor setup_initial_pagetables: map all LINK addresses to LOAD addresses and
    afterwards set up PTE permissions for sections; update the check that linker and
    load addresses don't overlap.
  - refactor setup_initial_mapping: allocate pagetable 'dynamically' if it is necessary.
  - rewrite enable_mmu in C; add the check that map_start and pa_start are aligned on 4k
    boundary.
  - update the comment for the setup_initial_pagetable function
  - Add RV_STAGE1_MODE to support different MMU modes.
  - update the commit message that MMU is also enabled here
  - set XEN_VIRT_START very high to not overlap with load address range
  - align bss section
---
Changes in V2:
  * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK,SIZE,...} and
    introduce XEN_PT_LEVEL_*() and LEVEL_* instead
  * Rework pt_linear_offset() and pt_index based on  XEN_PT_LEVEL_*()
  * Remove clear_pagetables() functions as pagetables were zeroed during
    .bss initialization
  * Rename _setup_initial_pagetables() to setup_initial_mapping()
  * Make PTE_DEFAULT equal to RX.
  * Update prototype of setup_initial_mapping(..., bool writable) -> 
    setup_initial_mapping(..., UL flags)  
  * Update calls of setup_initial_mapping according to new prototype
  * Remove unnecessary call of:
    _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
  * Define index* in the loop of setup_initial_mapping
  * Remove attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
    as we don't have such section
  * make arguments of paddr_to_pte() and pte_is_valid() as const.
  * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
  * update  'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
  * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
  * set __section(".bss.page_aligned") for page tables arrays
  * fix indentation
  * Change '__attribute__((section(".entry")))' to '__init'
  * Remove alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
    setup_initial_mapping() as they should already be aligned.
  * Remove clear_pagetables() as initial pagetables will be
    zeroed during bss initialization
  * Remove __attribute__((section(".entry"))) for setup_initial_pagetables()
    as there is no such section in xen.lds.S
  * Update the argument of pte_is_valid() to "const pte_t *p"
  * Remove patch "[PATCH v1 3/3] automation: update RISC-V smoke test" from the patch series
    as a simplified approach for the RISC-V smoke test was introduced by Andrew Cooper
  * Add patch [ xen/riscv: remove dummy_bss variable ] as the dummy_bss variable
    no longer makes sense after the introduction of initial page tables.
---

Oleksii Kurochko (3):
  xen/riscv: introduce setup_initial_pages
  xen/riscv: setup initial pagetables
  xen/riscv: remove dummy_bss variable

 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  12 +-
 xen/arch/riscv/include/asm/mm.h        |   9 +
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  65 +++++
 xen/arch/riscv/mm.c                    | 319 +++++++++++++++++++++++++
 xen/arch/riscv/riscv64/head.S          |   2 +
 xen/arch/riscv/setup.c                 |  22 +-
 xen/arch/riscv/xen.lds.S               |   4 +
 9 files changed, 435 insertions(+), 9 deletions(-)
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Apr 07 15:48:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 15:48:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519068.806242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkoKX-0002Mg-7D; Fri, 07 Apr 2023 15:48:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519068.806242; Fri, 07 Apr 2023 15:48:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkoKX-0002Ls-1z; Fri, 07 Apr 2023 15:48:49 +0000
Received: by outflank-mailman (input) for mailman id 519068;
 Fri, 07 Apr 2023 15:48:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aVqb=76=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pkoKV-0001rF-An
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 15:48:47 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ad7100aa-d55b-11ed-85db-49a42c6b2330;
 Fri, 07 Apr 2023 17:48:45 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id bd22so1726694ljb.5
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 08:48:44 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 v12-20020a2e9f4c000000b00295a583a20bsm874765ljk.74.2023.04.07.08.48.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 07 Apr 2023 08:48:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad7100aa-d55b-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20210112; t=1680882524; x=1683474524;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LmPb2JvpArMyMhsNogOHkOdzUdZrOYWvyQ6G5yv0Ex4=;
        b=Du9Fd7xXhKP5Ec9O6tMgoW2N50Btzk+xQq9lW8JFxZ/x2P+PsDdzMMbkUpzhSS6IaK
         bEyAV6A7SmEqK5FHXZwfNUr2f7x2IS/U0FUkKjth13iJJYU6FsUnapmHi/ApoCvP4SBj
         viaa/nUeklzEPMaak0qYM+zXZ1wrKaa7wdtQCVnTC9U0RVxit62/nKCdBK5mHAm7VKMU
         ayhlGXBvxFYnlzB8XAGcaW5tFkKP7ciP6Z8zE/aiPwi6BWs8q6GFxE4+6qzDXQOJXidS
         nFih+KlCk5GT4WONIeUJeRX/gG1IF6DU0NhIOK84X350KMuuVmO8nGLBX9O0K8qUqG8I
         X+DA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680882524; x=1683474524;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=LmPb2JvpArMyMhsNogOHkOdzUdZrOYWvyQ6G5yv0Ex4=;
        b=Ou5wx2qUr9UJZR///L75JIld1txxDsC44atFWMn+F8iG4VSgvYUcLjoWTt6LeSA+9M
         283m6VUFlBbPuqP4dmVDVaEf4QOURLfhzX9zDrB4+IgV6wVL78I0mqklJUYbLnfGLr/O
         2+fnOpIypFlOIFbXgjq7MJgDfrjGUhRZBf2Nv5YQWwnzhFyQe+rbSsQHd9QYvd6ZRoLI
         tnyvYKAQ0FSdjINKEcOjnMzDBC6Ukgbf0aGJwMtwJ841p4a4TkAtUuaDVkH4VF6QWntq
         PFeRQk6joQnOMa1BlC4iJjfZO1lyoxsa7n6OcGMxVHhfSPe9lchaLr3XF4riV/gsSZt/
         dnHQ==
X-Gm-Message-State: AAQBX9c6NYaQnPdvUd532BUU+Qd7VaVCt4QyslKKuAcRafhvDEgV5IkO
	lWtFY90PRMqaQw/pwS8A+F0hJoQs1eg=
X-Google-Smtp-Source: AKy350ZWfsrVbxSNagZoTRjLwxe5AJ9ycaTov9PXA5oVeOpvcWSZ3SsT7pnbDsGhGlaKl/LEna51nw==
X-Received: by 2002:a2e:9698:0:b0:298:b045:af96 with SMTP id q24-20020a2e9698000000b00298b045af96mr695028lji.9.1680882523577;
        Fri, 07 Apr 2023 08:48:43 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v4 1/3] xen/riscv: introduce setup_initial_pages
Date: Fri,  7 Apr 2023 18:48:37 +0300
Message-Id: <50ed83073ccb440fb651070de8b0abebd3888b43.1680882176.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.1680882176.git.oleksii.kurochko@gmail.com>
References: <cover.1680882176.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The idea was taken from xvisor, but the following changes
were made:
* Use only a minimal part of the code, enough to enable the MMU
* rename the {_}setup_initial_pagetables functions
* add an argument for setup_initial_mapping to have
  an opportunity to set PTE flags.
* update setup_initial_pagetables function to map sections
  with correct PTE flags.
* Rewrite enable_mmu() to C.
* map the linker address range to the load address range without
  a 1:1 mapping. It will be 1:1 only when
  load_start_addr is equal to linker_start_addr.
* add safety checks such as:
  * Xen's size is less than the page size
  * the linker address range doesn't overlap the load
    address range
* Rework macros {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK}
* change PTE_LEAF_DEFAULT to RW instead of RWX.
* Remove phys_offset as it is not used now
* Remove alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0);
  in setup_initial_mapping() as they should already be aligned.
  Add a check that {map, pa}_start are aligned.
* Remove clear_pagetables() as initial pagetables will be
  zeroed during bss initialization
* Remove __attribute__((section(".entry"))) for setup_initial_pagetables()
  as there is no such section in xen.lds.S
* Update the argument of pte_is_valid() to "const pte_t *p"
* Add check that Xen's load address is aligned at 4k boundary
* Refactor setup_initial_pagetables() so it maps the linker
  address range to the load address range and afterwards sets the
  needed permissions for specific sections ( such as .text, .rodata, etc );
  otherwise RW permissions are set by default.
* Add function to check that requested SATP_MODE is supported

Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
  * use GB() macros instead of defining SZ_1G
  * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))
  * remove unnecessary 'asm' word at the end of #error
  * encapsulate pte_t definition in a struct
  * rename addr_to_ppn() to ppn_to_paddr().
  * change type of paddr argument from const unsigned long to paddr_t
  * update the prototype of pte_to_paddr().
  * calculate the size of the Xen binary based on the number of page tables
  * use unsigned int instead of uint32_t as its use isn't warranted.
  * remove extern of bss_{start,end} as they aren't used in mm.c anymore
  * fix code style
  * add argument for HANDLE_PGTBL macros instead of curr_lvl_num variable
  * make enable_mmu() noinline to prevent issues under link-time
    optimization, given the nature of enable_mmu()
  * add function to check that SATP_MODE is supported.
  * update the commit message
  * update setup_initial_pagetables to set correct PTE flags in one pass
    instead of calling setup_pte_permissions after setup_initial_pagetables()
    as setup_initial_pagetables() isn't used to change permission flags.
---
Changes in V3:
 - update definition of pte_t structure to have a proper size of pte_t
   in case of RV32.
 - update asm/mm.h with new functions and remove unnecessary 'extern'.
 - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
 - update paddr_to_pte() to receive permissions as an argument.
 - add check that map_start & pa_start is properly aligned.
 - move  defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to
   <asm/page-bits.h>
 - Rename PTE_SHIFT to PTE_PPN_SHIFT
 - refactor setup_initial_pagetables: map all LINK addresses to LOAD addresses
   and afterwards set up PTE permissions for sections; update the check that
   linker and load addresses don't overlap.
 - refactor setup_initial_mapping: allocate pagetable 'dynamically' if it is
   necessary.
 - rewrite enable_mmu in C; add the check that map_start and pa_start are
   aligned on 4k boundary.
 - update the comment for the setup_initial_pagetable function
 - Add RV_STAGE1_MODE to support different MMU modes
 - set XEN_VIRT_START very high to not overlap with load address range
 - align bss section
---
Changes in V2:
 * update the commit message:
 * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK,SIZE,...} and
   introduce XEN_PT_LEVEL_*() and LEVEL_* instead
 * Rework pt_linear_offset() and pt_index based on  XEN_PT_LEVEL_*()
 * Remove clear_pagetables() functions as pagetables were zeroed during
   .bss initialization
 * Rename _setup_initial_pagetables() to setup_initial_mapping()
 * Make PTE_DEFAULT equal to RX.
 * Update prototype of setup_initial_mapping(..., bool writable) -> 
   setup_initial_mapping(..., UL flags)  
 * Update calls of setup_initial_mapping according to new prototype
 * Remove unnecessary call of:
   _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
 * Define index* in the loop of setup_initial_mapping
 * Remove attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
   as we don't have such section
 * make arguments of paddr_to_pte() and pte_is_valid() as const.
 * make xen_second_pagetable static.
 * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
 * update  'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
 * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
 * set __section(".bss.page_aligned") for page tables arrays
 * fix indentation
 * Change '__attribute__((section(".entry")))' to '__init'
 * Remove phys_offset as it isn't used now.
 * Remove alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
   setup_initial_mapping() as they should already be aligned.
 * Remove clear_pagetables() as initial pagetables will be
   zeroed during bss initialization
 * Remove __attribute__((section(".entry"))) for setup_initial_pagetables()
   as there is no such section in xen.lds.S
 * Update the argument of pte_is_valid() to "const pte_t *p"
---

 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  12 +-
 xen/arch/riscv/include/asm/mm.h        |   9 +
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  65 +++++
 xen/arch/riscv/mm.c                    | 319 +++++++++++++++++++++++++
 xen/arch/riscv/riscv64/head.S          |   2 +
 xen/arch/riscv/setup.c                 |  11 +
 xen/arch/riscv/xen.lds.S               |   4 +
 9 files changed, 432 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 443f6bf15f..956ceb02df 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += entry.o
+obj-y += mm.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 763a922a04..0cf9673558 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -39,12 +39,22 @@
   name:
 #endif
 
-#define XEN_VIRT_START  _AT(UL, 0x80200000)
+#ifdef CONFIG_RISCV_64
+#define XEN_VIRT_START 0xFFFFFFFFC0000000 /* (_AC(-1, UL) + 1 - GB(1)) */
+#else
+#error "RV32 isn't supported"
+#endif
 
 #define SMP_CACHE_BYTES (1 << 6)
 
 #define STACK_SIZE PAGE_SIZE
 
+#ifdef CONFIG_RISCV_64
+#define RV_STAGE1_MODE SATP_MODE_SV39
+#else
+#define RV_STAGE1_MODE SATP_MODE_SV32
+#endif
+
 #endif /* __RISCV_CONFIG_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
new file mode 100644
index 0000000000..e16ce66fae
--- /dev/null
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -0,0 +1,9 @@
+#ifndef _ASM_RISCV_MM_H
+#define _ASM_RISCV_MM_H
+
+void setup_initial_pagetables(void);
+
+void enable_mmu(void);
+void cont_after_mmu_is_enabled(void);
+
+#endif /* _ASM_RISCV_MM_H */
diff --git a/xen/arch/riscv/include/asm/page-bits.h b/xen/arch/riscv/include/asm/page-bits.h
index 1801820294..0879a527f2 100644
--- a/xen/arch/riscv/include/asm/page-bits.h
+++ b/xen/arch/riscv/include/asm/page-bits.h
@@ -1,6 +1,16 @@
 #ifndef __RISCV_PAGE_BITS_H__
 #define __RISCV_PAGE_BITS_H__
 
+#ifdef CONFIG_RISCV_64
+#define PAGETABLE_ORDER         (9)
+#else /* CONFIG_RISCV_32 */
+#define PAGETABLE_ORDER         (10)
+#endif
+
+#define PAGETABLE_ENTRIES       (1 << PAGETABLE_ORDER)
+
+#define PTE_PPN_SHIFT           10
+
 #define PAGE_SHIFT              12 /* 4 KiB Pages */
 #define PADDR_BITS              56 /* 44-bit PPN */
 
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
new file mode 100644
index 0000000000..30406aa614
--- /dev/null
+++ b/xen/arch/riscv/include/asm/page.h
@@ -0,0 +1,65 @@
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <xen/const.h>
+#include <xen/types.h>
+
+#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
+
+#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
+#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
+#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
+#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
+#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
+
+#define PTE_VALID                   BIT(0, UL)
+#define PTE_READABLE                BIT(1, UL)
+#define PTE_WRITABLE                BIT(2, UL)
+#define PTE_EXECUTABLE              BIT(3, UL)
+#define PTE_USER                    BIT(4, UL)
+#define PTE_GLOBAL                  BIT(5, UL)
+#define PTE_ACCESSED                BIT(6, UL)
+#define PTE_DIRTY                   BIT(7, UL)
+#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
+
+#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
+#define PTE_TABLE                   (PTE_VALID)
+
+/* Calculate the offsets into the pagetables for a given VA */
+#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
+
+#define pt_index(lvl, va) pt_linear_offset(lvl, (va) & XEN_PT_LEVEL_MASK(lvl))
+
+/* Page Table entry */
+typedef struct {
+#ifdef CONFIG_RISCV_64
+    uint64_t pte;
+#else
+    uint32_t pte;
+#endif
+} pte_t;
+
+#define addr_to_pte(x) (((x) >> PTE_PPN_SHIFT) << PAGE_SHIFT)
+
+/* Shift the VPN[x] or PPN[x] fields of a virtual or physical address
+ * to become the shifted PPN[x] fields of a page table entry */
+#define ppn_to_paddr(x) (((x) >> PAGE_SHIFT) << PTE_PPN_SHIFT)
+
+static inline pte_t paddr_to_pte(const paddr_t paddr,
+                                 const unsigned long permissions)
+{
+    unsigned long tmp = ppn_to_paddr(paddr);
+    return (pte_t) { .pte = tmp | permissions };
+}
+
+static inline paddr_t pte_to_paddr(const pte_t pte)
+{
+    return addr_to_pte(pte.pte);
+}
+
+static inline bool pte_is_valid(const pte_t p)
+{
+    return p.pte & PTE_VALID;
+}
+
+#endif /* _ASM_RISCV_PAGE_H */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
new file mode 100644
index 0000000000..43b7181c33
--- /dev/null
+++ b/xen/arch/riscv/mm.c
@@ -0,0 +1,319 @@
+#include <xen/compiler.h>
+#include <xen/init.h>
+#include <xen/kernel.h>
+
+#include <asm/early_printk.h>
+#include <asm/config.h>
+#include <asm/csr.h>
+#include <asm/mm.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+
+struct mmu_desc {
+    unsigned long num_levels;
+    unsigned int pgtbl_count;
+    pte_t *next_pgtbl;
+    pte_t *pgtbl_base;
+};
+
+extern unsigned char cpu0_boot_stack[STACK_SIZE];
+
+#define PHYS_OFFSET ((unsigned long)_start - XEN_VIRT_START)
+#define LOAD_TO_LINK(addr) ((addr) - PHYS_OFFSET)
+#define LINK_TO_LOAD(addr) ((addr) + PHYS_OFFSET)
+
+
+/*
+ * It is expected that Xen won't be larger than 2 MB.
+ * The check in xen.lds.S guarantees that.
+ * At least 4 page tables (in case Sv48 or Sv57 is used)
+ * are needed to cover 2 MB: one page table for each
+ * level, with PAGE_SIZE = 4 KB.
+ *
+ * One L0 page table can cover 2 MB
+ * (512 entries of one page table * PAGE_SIZE).
+ *
+ * One more page table might be needed in case the
+ * Xen load address isn't 2 MB aligned.
+ *
+ */
+#define PGTBL_INITIAL_COUNT     (5)
+
+#define PGTBL_ENTRY_AMOUNT  (PAGE_SIZE / sizeof(pte_t))
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_root[PGTBL_ENTRY_AMOUNT];
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_nonroot[PGTBL_INITIAL_COUNT * PGTBL_ENTRY_AMOUNT];
+
+#define HANDLE_PGTBL(curr_lvl_num)                                          \
+    index = pt_index(curr_lvl_num, page_addr);                              \
+    if ( pte_is_valid(pgtbl[index]) )                                       \
+    {                                                                       \
+        /* Find L{ 0-3 } table */                                           \
+        pgtbl = (pte_t *)pte_to_paddr(pgtbl[index]);                        \
+    }                                                                       \
+    else                                                                    \
+    {                                                                       \
+        /* Allocate new L{0-3} page table */                                \
+        if ( mmu_desc->pgtbl_count == PGTBL_INITIAL_COUNT )                 \
+        {                                                                   \
+            early_printk("(XEN) No initial table available\n");             \
+            /* panic(), BUG() or ASSERT() aren't ready now. */              \
+            die();                                                          \
+        }                                                                   \
+        mmu_desc->pgtbl_count++;                                            \
+        pgtbl[index] = paddr_to_pte((unsigned long)mmu_desc->next_pgtbl,    \
+                                    PTE_VALID);                             \
+        pgtbl = mmu_desc->next_pgtbl;                                       \
+        mmu_desc->next_pgtbl += PGTBL_ENTRY_AMOUNT;                         \
+    }
+
+static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
+                                         unsigned long map_start,
+                                         unsigned long map_end,
+                                         unsigned long pa_start,
+                                         unsigned long permissions)
+{
+    unsigned int index;
+    pte_t *pgtbl;
+    unsigned long page_addr;
+    pte_t pte_to_be_written;
+    unsigned long paddr;
+    unsigned long tmp_permissions;
+
+    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
+    {
+        early_printk("(XEN) Xen should be loaded at 4k boundary\n");
+        die();
+    }
+
+    if ( map_start & ~XEN_PT_LEVEL_MAP_MASK(0) ||
+         pa_start & ~XEN_PT_LEVEL_MAP_MASK(0) )
+    {
+        early_printk("(XEN) map and pa start addresses should be aligned\n");
+        /* panic(), BUG() or ASSERT() aren't ready now. */
+        die();
+    }
+
+    page_addr = map_start;
+    while ( page_addr < map_end )
+    {
+        pgtbl = mmu_desc->pgtbl_base;
+
+        switch (mmu_desc->num_levels)
+        {
+            case 4: /* Level 3 */
+                HANDLE_PGTBL(3);
+            case 3: /* Level 2 */
+                HANDLE_PGTBL(2);
+            case 2: /* Level 1 */
+                HANDLE_PGTBL(1);
+            case 1: /* Level 0 */
+                index = pt_index(0, page_addr);
+                paddr = (page_addr - map_start) + pa_start;
+
+                tmp_permissions = permissions;
+
+                if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
+                     is_kernel_inittext(LINK_TO_LOAD(page_addr)) )
+                    tmp_permissions =
+                        PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
+
+                if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
+                    tmp_permissions = PTE_READABLE | PTE_VALID;
+
+                pte_to_be_written = paddr_to_pte(paddr, tmp_permissions);
+
+                if ( !pte_is_valid(pgtbl[index]) )
+                    pgtbl[index] = pte_to_be_written;
+                else
+                {
+                    /*
+                     * Get the addresses of the current PTE and the one
+                     * to be written, without permission flags.
+                     */
+                    unsigned long curr_pte =
+                        pgtbl[index].pte & ~((1 << PTE_PPN_SHIFT) - 1);
+
+                    pte_to_be_written.pte &= ~((1 << PTE_PPN_SHIFT) - 1);
+
+                    if ( curr_pte != pte_to_be_written.pte )
+                    {
+                        early_printk("PTE to be written isn't the "
+                                     "same as the one being overwritten\n");
+                        /* panic(), <asm/bug.h> aren't ready now. */
+                        die();
+                    }
+                }
+        }
+
+        /* Point to next page */
+        page_addr += XEN_PT_LEVEL_SIZE(0);
+    }
+}
+
+static void __init calc_pgtbl_lvls_num(struct mmu_desc *mmu_desc)
+{
+    unsigned long satp_mode = RV_STAGE1_MODE;
+
+    /* Number of page table levels */
+    switch (satp_mode)
+    {
+        case SATP_MODE_SV32:
+            mmu_desc->num_levels = 2;
+            break;
+        case SATP_MODE_SV39:
+            mmu_desc->num_levels = 3;
+            break;
+        case SATP_MODE_SV48:
+            mmu_desc->num_levels = 4;
+            break;
+        default:
+            early_printk("(XEN) Unsupported SATP_MODE\n");
+            die();
+    }
+}
+
+static bool __init check_pgtbl_mode_support(struct mmu_desc *mmu_desc,
+                                            unsigned long load_start,
+                                            unsigned long satp_mode)
+{
+    bool is_mode_supported = false;
+    unsigned int index;
+    unsigned int page_table_level = (mmu_desc->num_levels - 1);
+    unsigned level_map_mask = XEN_PT_LEVEL_MAP_MASK(page_table_level);
+
+    unsigned long aligned_load_start = load_start & level_map_mask;
+    unsigned long aligned_page_size = XEN_PT_LEVEL_SIZE(page_table_level);
+    unsigned long xen_size = (unsigned long)(_end - _start);
+
+    if ( (load_start + xen_size) > (aligned_load_start + aligned_page_size) )
+    {
+        early_printk("please place Xen within a region of PAGE_SIZE, "
+                     "where PAGE_SIZE is XEN_PT_LEVEL_SIZE( {L3 | L2 | L1} ) "
+                     "depending on the expected SATP_MODE;\n"
+                     "XEN_PT_LEVEL_SIZE is defined in <asm/page.h>\n");
+        die();
+    }
+
+    index = pt_index(page_table_level, aligned_load_start);
+    stage1_pgtbl_root[index] = paddr_to_pte(aligned_load_start,
+                                            PTE_LEAF_DEFAULT | PTE_EXECUTABLE);
+
+    asm volatile("sfence.vma");
+    csr_write(CSR_SATP,
+              ((unsigned long)stage1_pgtbl_root >> PAGE_SHIFT) |
+              satp_mode << SATP_MODE_SHIFT);
+
+    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == satp_mode )
+        is_mode_supported = true;
+
+    /* Clean MMU root page table and disable MMU */
+    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
+
+    csr_write(CSR_SATP, 0);
+    asm volatile("sfence.vma");
+
+    return is_mode_supported;
+}
+
+/*
+ * setup_initial_pagetables:
+ *
+ * Build the initial page tables for Xen:
+ *  1. Calculate the number of page table levels.
+ *  2. Initialise the MMU description structure.
+ *  3. Check that the linker address range doesn't overlap
+ *     with the load address range.
+ *  4. Map all linker and load addresses (the mapping isn't
+ *     necessarily 1:1; it is 1:1 only when the linker address
+ *     equals the load address) with RW permissions by default.
+ *  5. Set up the proper PTE permissions for each section.
+ */
+void __init setup_initial_pagetables(void)
+{
+    struct mmu_desc mmu_desc = { 0, 0, NULL, 0 };
+
+    /*
+     * Access to _start and _end is always PC-relative, so
+     * reading them yields the load addresses of the start
+     * and end of Xen.
+     * To get the linker addresses, LOAD_TO_LINK() has to
+     * be used.
+     */
+    unsigned long load_start    = (unsigned long)_start;
+    unsigned long load_end      = (unsigned long)_end;
+    unsigned long linker_start  = LOAD_TO_LINK(load_start);
+    unsigned long linker_end    = LOAD_TO_LINK(load_end);
+
+    if ( (linker_start != load_start) &&
+         (linker_start <= load_end) && (load_start <= linker_end) )
+    {
+        early_printk("(XEN) linker and load address ranges overlap\n");
+        die();
+    }
+
+    calc_pgtbl_lvls_num(&mmu_desc);
+
+    if ( !check_pgtbl_mode_support(&mmu_desc, load_start, RV_STAGE1_MODE) )
+    {
+        early_printk("requested MMU mode isn't supported by the CPU\n"
+                     "Please choose a different one in <asm/config.h>\n");
+        die();
+    }
+
+    mmu_desc.pgtbl_base = stage1_pgtbl_root;
+    mmu_desc.next_pgtbl = stage1_pgtbl_nonroot;
+
+    setup_initial_mapping(&mmu_desc,
+                          linker_start,
+                          linker_end,
+                          load_start,
+                          PTE_LEAF_DEFAULT);
+}
+
+void __init noinline enable_mmu(void)
+{
+    /*
+     * Calculate the link-time address of the mmu_is_enabled
+     * label and update CSR_STVEC with it.
+     * The MMU is configured so that linker addresses are mapped
+     * onto load addresses; hence, when the linker addresses differ
+     * from the load addresses, enabling the MMU causes an exception
+     * that transfers control to the link-time addresses.
+     * Otherwise, if the load addresses equal the linker addresses,
+     * the code after the mmu_is_enabled label is executed without
+     * an exception.
+     */
+    csr_write(CSR_STVEC, LOAD_TO_LINK((unsigned long)&&mmu_is_enabled));
+
+    /* Ensure page table writes precede loading the SATP */
+    asm volatile("sfence.vma");
+
+    /* Enable the MMU and load the new pagetable for Xen */
+    csr_write(CSR_SATP,
+              ((unsigned long)stage1_pgtbl_root >> PAGE_SHIFT) |
+              RV_STAGE1_MODE << SATP_MODE_SHIFT);
+
+    asm volatile(".align 2");
+ mmu_is_enabled:
+    /*
+     * The stack should be re-initialised because:
+     * 1. The current stack pointer holds a load-time address,
+     *    which is a problem when the load start address differs
+     *    from the linker start address.
+     * 2. Addresses stored on the stack are all load-time
+     *    relative, which is likewise a problem when the load
+     *    start address differs from the linker start address.
+     */
+    asm volatile ("mv sp, %0"
+                  : : "r"((unsigned long)cpu0_boot_stack + STACK_SIZE));
+
+    /*
+     * We can't return to the caller because the stack has been
+     * reset and the caller may have stashed some variables on it.
+     * Jump to a brand new function instead.
+     */
+    cont_after_mmu_is_enabled();
+}
+
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 8887f0cbd4..b3309d902c 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,4 +1,5 @@
 #include <asm/asm.h>
+#include <asm/asm-offsets.h>
 #include <asm/riscv_encoding.h>
 
         .section .text.header, "ax", %progbits
@@ -32,3 +33,4 @@ ENTRY(start)
         add     sp, sp, t0
 
         tail    start_xen
+
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 3786f337e0..315804aa87 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -2,6 +2,7 @@
 #include <xen/init.h>
 
 #include <asm/early_printk.h>
+#include <asm/mm.h>
 
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
@@ -26,3 +27,13 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 
     unreachable();
 }
+
+void __init noreturn cont_after_mmu_is_enabled(void)
+{
+    early_printk("All set up\n");
+
+    for ( ;; )
+        asm volatile ("wfi");
+
+    unreachable();
+}
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index f299ea8422..e00b5c803a 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -136,6 +136,7 @@ SECTIONS
     . = ALIGN(POINTER_ALIGN);
     __init_end = .;
 
+    . = ALIGN(PAGE_SIZE);
     .bss : {                     /* BSS */
         __bss_start = .;
         *(.bss.stack_aligned)
@@ -169,3 +170,6 @@ SECTIONS
 
 ASSERT(!SIZEOF(.got),      ".got non-empty")
 ASSERT(!SIZEOF(.got.plt),  ".got.plt non-empty")
+
+ASSERT(_end - _start <= MB(2), "Xen too large for early-boot assumptions")
+
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Apr 07 15:48:51 2023
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v4 3/3] xen/riscv: remove dummy_bss variable
Date: Fri,  7 Apr 2023 18:48:39 +0300
Message-Id: <a9794d6c3990c5656aaf417f6d94228bb5749a75.1680882176.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.1680882176.git.oleksii.kurochko@gmail.com>
References: <cover.1680882176.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

After the introduction of the initial page tables, the dummy_bss
variable no longer makes sense, as the .bss section will not be
empty anymore.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 * the patch was introduced in this patch series (v3).
---
Changes in V2:
 * the patch was introduced in this patch series (v2).
---
 xen/arch/riscv/setup.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index cf5dc5824e..845d18d86f 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -8,14 +8,6 @@
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
-/*  
- * To be sure that .bss isn't zero. It will simplify code of
- * .bss initialization.
- * TODO:
- *   To be deleted when the first real .bss user appears
- */
-int dummy_bss __attribute__((unused));
-
 void __init noreturn start_xen(unsigned long bootcpu_id,
                                paddr_t dtb_addr)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Apr 07 17:37:28 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180178-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180178: regressions - trouble: broken/fail/pass/starved
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Apr 2023 17:37:11 +0000

flight 180178 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180178/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-credit2     <job status>                 broken
 test-amd64-amd64-xl-credit2  26 capture-logs(26)       broken REGR. vs. 180173
 test-amd64-i386-examine-uefi  6 xen-install              fail REGR. vs. 180173
 test-amd64-amd64-qemuu-freebsd11-amd64 21 guest-start/freebsd.repeat fail REGR. vs. 180173

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180173
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180173
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180173
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180173
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180173
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180173
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180173
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180173
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180173
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180173
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2
baseline version:
 xen                  a5087069a8c40541ba81fa0e2850471c949932b3

Last test of basis   180173  2023-04-06 19:49:58 Z    0 days
Testing same since   180178  2023-04-07 09:41:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  broken  
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-credit2 broken
broken-step test-amd64-amd64-xl-credit2 capture-logs(26)

Not pushing.

------------------------------------------------------------
commit ddaf7bb0cfd27369252de52e4b03410c4065bad2
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Mar 17 11:10:06 2023 +0000

    x86/svm: Provide EXITINFO decodes for Exceptions/NPF intercepts
    
    Exceptions and NPF intercepts almost have the same layout, but NPF has bits
    above 31 in the error code, and the name for exitinfo2 really does want
    distinguishing between cr2 and gpa.
    
    In nsvm_vcpu_vmexit_inject() rearrange VMEXIT_NPF to fall through instead of
    repeating the exitinfo1 write.  Use the fallthrough pseudo keyword instead of
    a comment.
    
    In VMEXIT_NPF, as we're editing the printk() anyway, switch to using the newer
    domain_crash() form.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)
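The fall-through rearrangement described in the commit message can be sketched
as follows. This is a minimal illustration under assumed names (struct, field
and function names here are hypothetical, not the actual Xen code): VMEXIT_NPF
writes its own exitinfo2 meaning (gpa rather than cr2), then falls through to
the shared exitinfo1 write instead of repeating it.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical exit reasons, stand-ins for the SVM exit codes. */
#define VMEXIT_EXCEPTION 0x40
#define VMEXIT_NPF       0x400

struct exit_state {
    uint64_t exitinfo1;   /* error code; NPF may use bits above 31 */
    uint64_t exitinfo2;   /* cr2 for exceptions, gpa for NPF */
};

static void inject_vmexit(struct exit_state *s, unsigned int reason,
                          uint64_t error_code, uint64_t addr)
{
    switch ( reason )
    {
    case VMEXIT_NPF:
        s->exitinfo2 = addr;        /* gpa rather than cr2 */
        /* fallthrough */
    case VMEXIT_EXCEPTION:
        s->exitinfo1 = error_code;  /* shared write, done once */
        break;
    }
}
```

The point of the pattern is that the exitinfo1 write exists in exactly one
place, so the two layouts cannot drift apart.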


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 21:33:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 21:33:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519089.806268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkthr-0004tO-4g; Fri, 07 Apr 2023 21:33:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519089.806268; Fri, 07 Apr 2023 21:33:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkthr-0004tH-0T; Fri, 07 Apr 2023 21:33:15 +0000
Received: by outflank-mailman (input) for mailman id 519089;
 Fri, 07 Apr 2023 21:33:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wEkE=76=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pkthp-0004t6-Dn
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 21:33:13 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c9901076-d58b-11ed-b464-930f4c7d94ae;
 Fri, 07 Apr 2023 23:33:09 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 5148D654EC;
 Fri,  7 Apr 2023 21:33:07 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A3D73C433D2;
 Fri,  7 Apr 2023 21:33:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9901076-d58b-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680903186;
	bh=KgPVxGeJyjHgHWJ1MvOr151jsd1aMT+s+wUu8MCW3MA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=amCe+dN+/bqX1XCsKNLagV3HpIosJbF9FlPy2W51kopEMdLznuiUZF+oveoR+86Sq
	 qXYze2QyhfQOP6pC6wdmzAsxe50fv6TXFNsN6fI7uscQB4pPNpkxyhIrhzXUHCVy/c
	 ioxWxo1YIGRX3533z9pzWXSW/82P2ILXsUyKYHx1y9FhEBfjgduhQb+G5ZS2zwHA8/
	 JJ7GicQKB1RmOnnKHLdrXnZgXVg9DWjMh7koEGhcAMc1hh327Xyb6k2DiZp4Y7Gzvf
	 3aNK1gFBTeIlU76UA1f7Kshq0DHQMKgHV7LKMq7L2rAQdGAQtjXmYnEZzeAjphe304
	 y0GAafVn2dicg==
Date: Fri, 7 Apr 2023 14:33:03 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayankuma@amd.com>
cc: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 1/3] xen/arm: vpl011: Fix misleading comments
In-Reply-To: <7dc2a51a-f1bd-6370-6b42-0bcf1adad619@amd.com>
Message-ID: <alpine.DEB.2.22.394.2304071432510.111906@ubuntu-linux-20-04-desktop>
References: <20230405111750.12491-1-michal.orzel@amd.com> <20230405111750.12491-2-michal.orzel@amd.com> <7dc2a51a-f1bd-6370-6b42-0bcf1adad619@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 6 Apr 2023, Ayan Kumar Halder wrote:
> On 05/04/2023 12:17, Michal Orzel wrote:
> > In both vpl011_read_data() and vpl011_read_data_xen(), there is a comment
> > stating that the guest is expected to read the DR register only if the
> > TXFE bit of FR register is not set. This is obviously logically wrong and
> > it should be RXFE (i.e. RX FIFO empty bit set -> nothing to read).
> NIT: I would prefer if the PL011 TRM were mentioned, with the relevant section.
> > 
> > Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> > ---
> >   xen/arch/arm/vpl011.c | 8 ++++----
> >   1 file changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
> > index 2fa80bc15ac4..0186d8a31834 100644
> > --- a/xen/arch/arm/vpl011.c
> > +++ b/xen/arch/arm/vpl011.c
> > @@ -143,8 +143,8 @@ static uint8_t vpl011_read_data_xen(struct domain *d)
> >       /*
> >        * It is expected that there will be data in the ring buffer when this
> >        * function is called since the guest is expected to read the data
> > register
> > -     * only if the TXFE flag is not set.
> > -     * If the guest still does read when TXFE bit is set then 0 will be
> > returned.
> > +     * only if the RXFE flag is not set.
> > +     * If the guest still does read when RXFE bit is set then 0 will be
> > returned.
> >        */
> >       if ( xencons_queued(in_prod, in_cons, sizeof(intf->in)) > 0 )
> >       {
> > @@ -202,8 +202,8 @@ static uint8_t vpl011_read_data(struct domain *d)
> >       /*
> >        * It is expected that there will be data in the ring buffer when this
> >        * function is called since the guest is expected to read the data
> > register
> > -     * only if the TXFE flag is not set.
> > -     * If the guest still does read when TXFE bit is set then 0 will be
> > returned.
> > +     * only if the RXFE flag is not set.
> > +     * If the guest still does read when RXFE bit is set then 0 will be
> > returned.
> >        */
> >       if ( xencons_queued(in_prod, in_cons, sizeof(intf->in)) > 0 )
> >       {
> 
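The corrected comment describes the expected guest behaviour: read the data
register (DR) only while the RXFE bit of the flag register (FR) is clear. A
minimal polled-read sketch of that contract (register access modelled with
plain pointers; this is illustrative, not a real driver):

```c
#include <stdint.h>

#define FR_RXFE (1u << 4)   /* RX FIFO empty bit in FR, per the PL011 TRM */

/* Read one character, waiting until the RX FIFO is non-empty. Reading DR
 * while RXFE is set would merely return 0 from the emulator, so a correct
 * guest polls RXFE first. */
static uint8_t pl011_getc(const volatile uint32_t *fr,
                          const volatile uint32_t *dr)
{
    while ( *fr & FR_RXFE )
        ;                   /* FIFO empty: nothing to read yet */
    return (uint8_t)*dr;    /* safe: RXFE is clear */
}
```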


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 21:42:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 21:42:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519094.806278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pktrD-0006T9-1L; Fri, 07 Apr 2023 21:42:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519094.806278; Fri, 07 Apr 2023 21:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pktrC-0006T2-Uk; Fri, 07 Apr 2023 21:42:54 +0000
Received: by outflank-mailman (input) for mailman id 519094;
 Fri, 07 Apr 2023 21:42:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wEkE=76=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pktrB-0006Su-3S
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 21:42:53 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2577c318-d58d-11ed-85db-49a42c6b2330;
 Fri, 07 Apr 2023 23:42:52 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id F03066409B;
 Fri,  7 Apr 2023 21:42:50 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5DECBC433EF;
 Fri,  7 Apr 2023 21:42:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2577c318-d58d-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680903770;
	bh=lVWwJy+ZKB+OJ2ogGpVmHsex9c5zOoBzr31cr4qWLaU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=d04Gd5vbHDZwvIeRDa2zeLFqGpWu/YOu4ZH0lL9AtbfXnD7XAEexo030Ac6lXvs+m
	 qMGuYNuEuLGKtr7SpKYMh7BEPV2yuBztRjmfJxhJ002fEj1aNvaz2+HcycQpVgGD72
	 On68uJPoLQap8VKjYTEZ00auiZ99xcOaV7Q4eFDT+pSLwgGQeOuveRxWaZoQitV1MX
	 jgURF+cw4zEm28G14eA2GXDGwqBPuy+DfUeRh1z9SGAswEFIy8JrDtxCYfhEVmi5gx
	 +5tO4fx4/PYfwVR56pFYGJryuC7JIGi6HX4EScnvrTDnzH9zKey6VrtWRMpYInk4Vz
	 DKr2aucx6qJRg==
Date: Fri, 7 Apr 2023 14:42:47 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Michal Orzel <michal.orzel@amd.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 2/3] xen/arm: vpl011: Handle correctly TXFE when backend
 in Xen
In-Reply-To: <20230405111750.12491-3-michal.orzel@amd.com>
Message-ID: <alpine.DEB.2.22.394.2304071442110.111906@ubuntu-linux-20-04-desktop>
References: <20230405111750.12491-1-michal.orzel@amd.com> <20230405111750.12491-3-michal.orzel@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 5 Apr 2023, Michal Orzel wrote:
> When the backend is in Xen, the handling of data written to the DR
> register is a bit special because we want to tell the guest that we are
> always ready for new data to be written (i.e. no real FIFO, TXFF/BUSY
> never set and TXI always set). This conflicts with the current handling
> of the TXFE bit, which we always clear and never set on the write path
> (we happen to set it when we receive a char from serial input due to the
> use of vpl011_data_avail(), but this might never be called). This can
> lead to issues if a guest driver makes use of the TXFE bit to check for
> TX transmission completion (such a guest could then wait endlessly). Fix
> it by keeping TXFE always set to match the current emulation logic.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
> We don't have to look far for an example of a PL011/SBSA driver relying on TXFE.
> If a guest had a driver like the one we have in Xen, we would end up with
> no messages being printed.
> ---
>  xen/arch/arm/vpl011.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
> index 0186d8a31834..ff06deeb645c 100644
> --- a/xen/arch/arm/vpl011.c
> +++ b/xen/arch/arm/vpl011.c
> @@ -112,8 +112,14 @@ static void vpl011_write_data_xen(struct domain *d, uint8_t data)
>          }
>      }
>  
> +    /*
> +     * When backend is in Xen, we tell guest we are always ready for new data
> +     * to be written. This is fulfilled by having:
> +     * - TXI/TXFE -> always set,
> +     * - TXFF/BUSY -> never set.
> +     */
>      vpl011->uartris |= TXI;
> -    vpl011->uartfr &= ~TXFE;
> +    vpl011->uartfr |= TXFE;
>      vpl011_update_interrupt_status(d);
>  
>      VPL011_UNLOCK(d, flags);
> -- 
> 2.25.1
> 
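The hang the commit message warns about can be sketched with a bounded poll
loop. This is a hypothetical driver-side wait (not the Xen code): a guest that
polls TXFE for transmit completion would spin forever against an emulator that
clears TXFE and never sets it again; with the fix, TXFE is always set when the
backend is in Xen, so the wait returns immediately.

```c
#include <stdint.h>

#define FR_TXFE (1u << 7)   /* TX FIFO empty bit in FR, per the PL011 TRM */

/* Wait for TX completion by polling TXFE, bounded so the broken case can
 * be observed instead of hanging. Returns 0 on completion, -1 if TXFE
 * never became set within max_polls iterations. */
static int pl011_wait_tx_done(const volatile uint32_t *fr,
                              unsigned int max_polls)
{
    while ( !(*fr & FR_TXFE) )
    {
        if ( max_polls-- == 0 )
            return -1;      /* would be an endless wait without the fix */
    }
    return 0;
}
```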


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 21:45:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 21:45:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519099.806288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkttk-00074i-EN; Fri, 07 Apr 2023 21:45:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519099.806288; Fri, 07 Apr 2023 21:45:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkttk-00074Z-BI; Fri, 07 Apr 2023 21:45:32 +0000
Received: by outflank-mailman (input) for mailman id 519099;
 Fri, 07 Apr 2023 21:45:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wEkE=76=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pktti-00074P-Nl
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 21:45:30 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 83004f4b-d58d-11ed-85db-49a42c6b2330;
 Fri, 07 Apr 2023 23:45:29 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 041B6654D8;
 Fri,  7 Apr 2023 21:45:28 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 725BAC4339B;
 Fri,  7 Apr 2023 21:45:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83004f4b-d58d-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680903927;
	bh=eBSbdsqfNNt6f8OYUMdxJVCz21US87S40JrcPEDoLQA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=MLZKZBmudlRnVWoZB4Uwz0xTgAFEZ+RyjUrsS8h+3RaUn0O0fPoczVmlmtrWjHoQX
	 8PnG/E5E4Ejw3i3yl6aDslCywXyDAe1Q9ciHSIyky0KIxxsmEc311ka5AVerlzRrQ5
	 QO6fQvM8enrWMyPa/DfSWOxPOdWlDzXsuFQpQON8MwIfGMiORjHHvzef4OCGp/09YX
	 4eBdCtQWosz0b2N4cFY3zuIyqCBMioH8UG14tilQu+T77lWHHxdYleG6ekeBufirSk
	 ZkBjQuCN1l9+MM5470jpe8m7FvNG9iKRpz6mOSUg+t0lFP2cjzGJtAuH67+qwS+mUt
	 SR1LMxa/Hwmpg==
Date: Fri, 7 Apr 2023 14:45:24 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Michal Orzel <michal.orzel@amd.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 3/3] xen/arm: vpl011: Do not try to handle TX FIFO status
 when backend in Xen
In-Reply-To: <20230405111750.12491-4-michal.orzel@amd.com>
Message-ID: <alpine.DEB.2.22.394.2304071445111.111906@ubuntu-linux-20-04-desktop>
References: <20230405111750.12491-1-michal.orzel@amd.com> <20230405111750.12491-4-michal.orzel@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 5 Apr 2023, Michal Orzel wrote:
> From vpl011_rx_char_xen(), we call vpl011_data_avail() that handles both
> RX and TX state. Because we are passing 0 as out_fifo_level and
> SBSA_UART_FIFO_SIZE as out_size, we end up calling a function
> vpl011_update_tx_fifo_status() which performs TXI bit handling
> depending on the FIFO trigger level. This does not make sense when backend
> is in Xen, as we maintain a single TX state where data can always be
> written and as such there is no TX FIFO handling. Furthermore, this
> function assumes that the backend is in a domain by making use of struct
> xencons_interface unconditionally. Fix it by calling this function only
> when the backend is in a domain. Also add an assert for sanity.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/vpl011.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
> index ff06deeb645c..7856b4b5f5a3 100644
> --- a/xen/arch/arm/vpl011.c
> +++ b/xen/arch/arm/vpl011.c
> @@ -261,6 +261,9 @@ static void vpl011_update_tx_fifo_status(struct vpl011 *vpl011,
>      struct xencons_interface *intf = vpl011->backend.dom.ring_buf;
>      unsigned int fifo_threshold = sizeof(intf->out) - SBSA_UART_FIFO_LEVEL;
>  
> +    /* No TX FIFO handling when backend is in Xen */
> +    ASSERT(vpl011->backend_in_domain);
> +
>      BUILD_BUG_ON(sizeof(intf->out) < SBSA_UART_FIFO_SIZE);
>  
>      /*
> @@ -547,7 +550,13 @@ static void vpl011_data_avail(struct domain *d,
>           */
>          vpl011->uartfr &= ~BUSY;
>  
> -        vpl011_update_tx_fifo_status(vpl011, out_fifo_level);
> +        /*
> +         * When backend is in Xen, we are always ready for new data to be
> +         * written (i.e. no TX FIFO handling), therefore we do not want
> +         * to change the TX FIFO status in such case.
> +         */
> +        if ( vpl011->backend_in_domain )
> +            vpl011_update_tx_fifo_status(vpl011, out_fifo_level);
>      }
>  
>      vpl011_update_interrupt_status(d);
> -- 
> 2.25.1
> 
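The shape of the guard plus assert added by this patch can be sketched in
miniature. The struct and field names below are simplified stand-ins for the
real vpl011 state, and the bit update is a placeholder for the real TXI
handling; only the control flow mirrors the patch.

```c
#include <assert.h>
#include <stdbool.h>

struct vpl011_sketch {
    bool backend_in_domain;
    unsigned int uartris;
};

static void update_tx_fifo_status(struct vpl011_sketch *v)
{
    /* No TX FIFO handling when the backend is in Xen */
    assert(v->backend_in_domain);
    v->uartris |= 1;            /* stand-in for the real TXI update */
}

static void data_avail(struct vpl011_sketch *v)
{
    /*
     * When the backend is in Xen, data can always be written, so there
     * is no TX FIFO status to maintain: skip the helper entirely.
     */
    if ( v->backend_in_domain )
        update_tx_fifo_status(v);
}
```

The assert documents the precondition; the caller-side check is what actually
prevents the Xen-backend path from touching struct xencons_interface.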


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 22:09:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 22:09:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519105.806300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkuGV-0001DL-BB; Fri, 07 Apr 2023 22:09:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519105.806300; Fri, 07 Apr 2023 22:09:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkuGV-0001DE-8S; Fri, 07 Apr 2023 22:09:03 +0000
Received: by outflank-mailman (input) for mailman id 519105;
 Fri, 07 Apr 2023 22:09:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wEkE=76=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pkuGU-0001D8-Oc
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 22:09:02 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cc414c02-d590-11ed-85db-49a42c6b2330;
 Sat, 08 Apr 2023 00:09:01 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 5DF2A64F3C;
 Fri,  7 Apr 2023 22:08:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C9A69C433EF;
 Fri,  7 Apr 2023 22:08:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc414c02-d590-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1680905338;
	bh=BShoxTWx4HlNjbO6Scv7dYkopvwzFqmn1E8X+A2sdI0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=kx76WgITtpOTf5Oj8/rBT7WSkFWZXSM8UTVCTXKjXva63R6gQzvk6w+oDGL/MxYtL
	 pMcPNC9lKEGoyorJ87ePmxgZ62sN5ifaC9ee9Ccd0oZIW8+hg9TZBpJ48O7xy/XW1n
	 aFhiovcvXtDqP6jDIGorEjiK1cJ8ZflXiWldjsKgUxhYDCBOpaA/tz9Xva3w27OWxf
	 Bts6jkI72ebLZC+MXyBUqwL1I37R16QQxvHJ0zmqayZyeHTIsZzK1rZG1L/cI+yEWs
	 8SnvU07q+9K3pDqDHPMNqW6jK5lV6N5Ccl1ymsWQtRCdKpPRL3IXmmz6ugIEKhwM03
	 K4H3ft/HkAtng==
Date: Fri, 7 Apr 2023 15:08:55 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Juergen Gross <jgross@suse.com>
cc: linux-kernel@vger.kernel.org, Stefano Stabellini <sstabellini@kernel.org>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    xen-devel@lists.xenproject.org, Dan Carpenter <error27@gmail.com>
Subject: Re: [PATCH v2] xen/pvcalls: don't call bind_evtchn_to_irqhandler()
 under lock
In-Reply-To: <20230403092711.15285-1-jgross@suse.com>
Message-ID: <alpine.DEB.2.22.394.2304071508480.111906@ubuntu-linux-20-04-desktop>
References: <20230403092711.15285-1-jgross@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 3 Apr 2023, Juergen Gross wrote:
> bind_evtchn_to_irqhandler() shouldn't be called under spinlock, as it
> can sleep.
> 
> This requires moving the calls of create_active() out of the locked
> regions. This is no problem, as the worst that could happen would be
> a spurious call of the interrupt handler, causing a spurious wake_up().
> 
> Reported-by: Dan Carpenter <error27@gmail.com>
> Link: https://lore.kernel.org/lkml/Y+JUIl64UDmdkboh@kadam/
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> V2:
> - remove stale spin_unlock() (Oleksandr Tyshchenko)
> ---
>  drivers/xen/pvcalls-front.c | 46 +++++++++++++++++++++----------------
>  1 file changed, 26 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> index d5d589bda243..b72ee9379d77 100644
> --- a/drivers/xen/pvcalls-front.c
> +++ b/drivers/xen/pvcalls-front.c
> @@ -227,22 +227,30 @@ static irqreturn_t pvcalls_front_event_handler(int irq, void *dev_id)
>  
>  static void free_active_ring(struct sock_mapping *map);
>  
> -static void pvcalls_front_free_map(struct pvcalls_bedata *bedata,
> -				   struct sock_mapping *map)
> +static void pvcalls_front_destroy_active(struct pvcalls_bedata *bedata,
> +					 struct sock_mapping *map)
>  {
>  	int i;
>  
>  	unbind_from_irqhandler(map->active.irq, map);
>  
> -	spin_lock(&bedata->socket_lock);
> -	if (!list_empty(&map->list))
> -		list_del_init(&map->list);
> -	spin_unlock(&bedata->socket_lock);
> +	if (bedata) {
> +		spin_lock(&bedata->socket_lock);
> +		if (!list_empty(&map->list))
> +			list_del_init(&map->list);
> +		spin_unlock(&bedata->socket_lock);
> +	}
>  
>  	for (i = 0; i < (1 << PVCALLS_RING_ORDER); i++)
>  		gnttab_end_foreign_access(map->active.ring->ref[i], NULL);
>  	gnttab_end_foreign_access(map->active.ref, NULL);
>  	free_active_ring(map);
> +}
> +
> +static void pvcalls_front_free_map(struct pvcalls_bedata *bedata,
> +				   struct sock_mapping *map)
> +{
> +	pvcalls_front_destroy_active(bedata, map);
>  
>  	kfree(map);
>  }
> @@ -433,19 +441,18 @@ int pvcalls_front_connect(struct socket *sock, struct sockaddr *addr,
>  		pvcalls_exit_sock(sock);
>  		return ret;
>  	}
> -
> -	spin_lock(&bedata->socket_lock);
> -	ret = get_request(bedata, &req_id);
> +	ret = create_active(map, &evtchn);
>  	if (ret < 0) {
> -		spin_unlock(&bedata->socket_lock);
>  		free_active_ring(map);
>  		pvcalls_exit_sock(sock);
>  		return ret;
>  	}
> -	ret = create_active(map, &evtchn);
> +
> +	spin_lock(&bedata->socket_lock);
> +	ret = get_request(bedata, &req_id);
>  	if (ret < 0) {
>  		spin_unlock(&bedata->socket_lock);
> -		free_active_ring(map);
> +		pvcalls_front_destroy_active(NULL, map);
>  		pvcalls_exit_sock(sock);
>  		return ret;
>  	}
> @@ -821,28 +828,27 @@ int pvcalls_front_accept(struct socket *sock, struct socket *newsock, int flags)
>  		pvcalls_exit_sock(sock);
>  		return ret;
>  	}
> -	spin_lock(&bedata->socket_lock);
> -	ret = get_request(bedata, &req_id);
> +	ret = create_active(map2, &evtchn);
>  	if (ret < 0) {
> -		clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
> -			  (void *)&map->passive.flags);
> -		spin_unlock(&bedata->socket_lock);
>  		free_active_ring(map2);
>  		kfree(map2);
> +		clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
> +			  (void *)&map->passive.flags);
>  		pvcalls_exit_sock(sock);
>  		return ret;
>  	}
>  
> -	ret = create_active(map2, &evtchn);
> +	spin_lock(&bedata->socket_lock);
> +	ret = get_request(bedata, &req_id);
>  	if (ret < 0) {
> -		free_active_ring(map2);
> -		kfree(map2);
>  		clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
>  			  (void *)&map->passive.flags);
>  		spin_unlock(&bedata->socket_lock);
> +		pvcalls_front_free_map(bedata, map2);
>  		pvcalls_exit_sock(sock);
>  		return ret;
>  	}
> +
>  	list_add_tail(&map2->list, &bedata->socket_mappings);
>  
>  	req = RING_GET_REQUEST(&bedata->ring, req_id);
> -- 
> 2.35.3
> 
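The essence of the reordering in this patch is "do the sleeping setup before
taking the spinlock". A minimal sketch, with the lock modelled as a flag and
all names hypothetical (may_sleep_setup() stands in for create_active(), which
ends up in bind_evtchn_to_irqhandler() and may sleep):

```c
#include <stdbool.h>

static bool lock_held;          /* stand-in for bedata->socket_lock */

/* Stand-in for create_active(): a call that may sleep and therefore must
 * not run while a spinlock is held. Here it reports the bug instead of
 * sleeping. */
static int may_sleep_setup(void)
{
    return lock_held ? -1 : 0;
}

static int connect_sketch(void)
{
    int ret = may_sleep_setup(); /* moved before the locked region */

    if ( ret < 0 )
        return ret;

    lock_held = true;            /* spin_lock(&bedata->socket_lock) */
    /* ... get_request() and request setup would go here ... */
    lock_held = false;           /* spin_unlock(&bedata->socket_lock) */
    return 0;
}
```

The cost of the reordering, as the commit message notes, is only a possible
spurious interrupt before the request is queued, which is harmless.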


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:01:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:01:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519110.806311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkv4x-0007Np-8N; Fri, 07 Apr 2023 23:01:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519110.806311; Fri, 07 Apr 2023 23:01:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkv4x-0007Ni-5U; Fri, 07 Apr 2023 23:01:11 +0000
Received: by outflank-mailman (input) for mailman id 519110;
 Fri, 07 Apr 2023 23:01:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkv4v-0007Nc-EQ
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:01:09 +0000
Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com
 [2607:f8b0:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 12bf9c98-d598-11ed-b464-930f4c7d94ae;
 Sat, 08 Apr 2023 01:01:06 +0200 (CEST)
Received: by mail-pl1-x62f.google.com with SMTP id n14so25231792plc.8
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:01:05 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 s21-20020aa78295000000b0062dc14ee2a7sm3505522pfm.211.2023.04.07.16.01.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:01:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12bf9c98-d598-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680908464;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Pwlm1lMKQRjwxkmrW2F8t3gfYFcIvBu5GbvK5p1f3qI=;
        b=WYmgbbs2HSI1ZfANhB99p0It58GikHCFHg8GiCJM3VQAfDfBj7TpC/bvTiV2pJ79UV
         KAS5g4RAW3gC6aKuL00Z27MQAUOxZbmVQcMfZCDSB1r0VuBIuxLsMFHZKi1LaGTK3hGD
         QdhxTCrlzTDRZmVjPx1eYN/x1bPBMtI6NsRmoM2cgteOFw+t/qEnW3IymFjABKUtyO2w
         8JLLbEMqf+MaY87jjDzMDFdSjZ/ZhpXDV5gbgrECBEdb7dvChp+koaU4p/FdyjA+cGPu
         Q4it2C7c8GRxrjcQrXOBMNxBTHsarpA5kw1bkpmHXyGaIy0+DjId1bO+doBUisVu8LKa
         wxmA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680908464;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Pwlm1lMKQRjwxkmrW2F8t3gfYFcIvBu5GbvK5p1f3qI=;
        b=yNJr1H9IkWTa1Ljy+zSaY3DaEi4NWBkZr4ST4bnHfs8mw99lzSnOX8DUeWu/QfqKHD
         c7O1b7pLgHoUBZSCz6N1zHMGANRuue3EeqOI6Uum+s0lCPif5rx+2KzyKO0cUUSY3Ajz
         J2Mp+GF/pbEI3rgu36gKNxNVFtjwWGM9LYeca1fToYOyDrnMWZajYMSVZsKUHkRjYBG3
         BOz/uIJaMpk7TbRToCEmkrASoR9x8sMWCk0Il2Ln00LVHhtuFHyV+ugpaZNluGcLx9xb
         vs1Fj3d0hgkfAH0UcsBY9XdAscZN5sNf1W+O1+apjUpgw0BeVXaNb3stU0CkPwCUV7mt
         K07A==
X-Gm-Message-State: AAQBX9etslf8UpUgY9sxt4Xb9VUO3huLLjgZjXb5O1LWLy4lkTIxuF/7
	PSGfB4VQfRlYuuCJ0I2VvgRQeg==
X-Google-Smtp-Source: AKy350aJv8JM6VFQdChjoum7LkzJsp3BEooDRP1tQQsqzUK9IOYyaf2AaDi9zqTn0y042mdujiyLbw==
X-Received: by 2002:a05:6a20:cd5d:b0:d9:f539:727f with SMTP id hn29-20020a056a20cd5d00b000d9f539727fmr4424102pzb.28.1680908463869;
        Fri, 07 Apr 2023 16:01:03 -0700 (PDT)
Message-ID: <8174dba8-3d19-58e6-9bcc-cb8b58d76c1b@linaro.org>
Date: Fri, 7 Apr 2023 16:01:01 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 01/14] accel: Document generic accelerator headers
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Reinoud Zandijk <reinoud@netbsd.org>,
 Sunil Muthuswamy <sunilmut@microsoft.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-2-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-2-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:17, Philippe Mathieu-Daudé wrote:
> These headers are meant to be included by any file to check
> the availability of accelerators, and are thus not accelerator
> specific.
> 
> Signed-off-by: Philippe Mathieu-Daudé<philmd@linaro.org>
> ---
>   include/sysemu/hax.h  | 2 ++
>   include/sysemu/kvm.h  | 2 ++
>   include/sysemu/nvmm.h | 2 ++
>   include/sysemu/tcg.h  | 2 ++
>   include/sysemu/whpx.h | 2 ++
>   include/sysemu/xen.h  | 2 ++
>   6 files changed, 12 insertions(+)

Acked-by: Richard Henderson <richard.henderson@linaro.org>

r~


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:01:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:01:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519111.806321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkv57-0007eD-IB; Fri, 07 Apr 2023 23:01:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519111.806321; Fri, 07 Apr 2023 23:01:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkv57-0007e2-Ds; Fri, 07 Apr 2023 23:01:21 +0000
Received: by outflank-mailman (input) for mailman id 519111;
 Fri, 07 Apr 2023 23:01:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkv55-0007Nc-Pr
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:01:19 +0000
Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com
 [2607:f8b0:4864:20::1034])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1a6d04b8-d598-11ed-b464-930f4c7d94ae;
 Sat, 08 Apr 2023 01:01:18 +0200 (CEST)
Received: by mail-pj1-x1034.google.com with SMTP id
 pc4-20020a17090b3b8400b0024676052044so87304pjb.1
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:01:18 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 n10-20020a170902968a00b0019f3cc463absm3447735plp.0.2023.04.07.16.01.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:01:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a6d04b8-d598-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680908477;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=FM1IoSWydLocj7jgKW9MEFCM/UyoS3Vw58k8MbYXpPI=;
        b=VQRL/jwZ/iwhTjW4uDgB/xQLypDFRo9dz1YtZKQWTGVtHNnhpUq2Kgr4tJlXpVc9Qj
         KBbQLfG2Z5FHXSI2VMMAmE9q1dOQk2iWj8kn9Q3lFDBbsPO0CVUPQMhVWuKl2kkGhFG/
         8Zx43AFsVbKUnCTXY9xLDWzf05mzNP0m7zfiIrbp+mM1baQPyagVzxLIe6pjmdO2GbAF
         FeHj3l+btpKL7NfDJ58MeRoUTWNwI8jp8DY7i9J3B7PBjUoMr2KbS1QbvXFBz9wfhpPH
         DHITQUJiwZ2kYMJ1/oiVdG61WqG2rvBx9o0dwXq2wqhvT0EMxe8M67t1dsAj1odQ8rE9
         zrLw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680908477;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=FM1IoSWydLocj7jgKW9MEFCM/UyoS3Vw58k8MbYXpPI=;
        b=5N9o5spoxtihNK7B9eTgJNVFIfR+RVt1Z+5LgP67uiDzv48u+6RwJ/zItYipYouDnu
         nGG9+YPw404v6hD0j3W5XmMwerDQyrvQXvgtDjVmTWDJpdFXwgbzYTer1yoPRIFQE1W8
         h1dUDBos83KP8i/NpKK/wkBOvmjijSpVf2ZG/sDcNhepuSrKBFj9rWxxz2yuNg7UBaX8
         ptEfX4LxrWaAD+3QQsDX9Axe4lfoYk1Ro76WGehicKcad8aTN9VgVmSxCo3K5VOHTtm4
         6xXCvdvPqT+XtLVi4ksjxurwRVaJG6XUt10zSsiTD8PafhK4y8jQKKUYSWa5/MYP8KXo
         X7Kg==
X-Gm-Message-State: AAQBX9e1er5oa5MXn0J02hb8cx7Jlz8UWkQ7eEQFjqKE0+jsU9Mjv8hg
	+5HWD2+52M7R9zBabBV1ZoaGcQ==
X-Google-Smtp-Source: AKy350bRqvhbX5rOLuwqRdImyfLdTcBcyn1BquKHrKT2CT3Zjs0A2Q1f98kdBv1POsH49ZV8G3BXyw==
X-Received: by 2002:a17:902:d2c9:b0:1a0:50bd:31a8 with SMTP id n9-20020a170902d2c900b001a050bd31a8mr5700663plc.26.1680908476761;
        Fri, 07 Apr 2023 16:01:16 -0700 (PDT)
Message-ID: <af0397ba-f714-427f-c050-10b423cc772e@linaro.org>
Date: Fri, 7 Apr 2023 16:01:14 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 02/14] accel: Remove unused hThread variable on TCG/WHPX
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Sunil Muthuswamy <sunilmut@microsoft.com>
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-3-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-3-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:17, Philippe Mathieu-Daudé wrote:
> On Windows hosts, cpu->hThread is assigned but never accessed:
> remove it.
> 
> Signed-off-by: Philippe Mathieu-Daudé<philmd@linaro.org>
> ---
>   accel/tcg/tcg-accel-ops-mttcg.c   | 4 ----
>   accel/tcg/tcg-accel-ops-rr.c      | 3 ---
>   target/i386/whpx/whpx-accel-ops.c | 3 ---
>   3 files changed, 10 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:01:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:01:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519116.806331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkv5S-000891-Vb; Fri, 07 Apr 2023 23:01:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519116.806331; Fri, 07 Apr 2023 23:01:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkv5S-00088s-RZ; Fri, 07 Apr 2023 23:01:42 +0000
Received: by outflank-mailman (input) for mailman id 519116;
 Fri, 07 Apr 2023 23:01:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkv5R-0007Nc-SY
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:01:41 +0000
Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com
 [2607:f8b0:4864:20::1036])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 27981dfc-d598-11ed-b464-930f4c7d94ae;
 Sat, 08 Apr 2023 01:01:40 +0200 (CEST)
Received: by mail-pj1-x1036.google.com with SMTP id q102so40802556pjq.3
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:01:40 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 l2-20020a17090aec0200b0023fda235bb5sm3162546pjy.33.2023.04.07.16.01.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:01:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27981dfc-d598-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680908499;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=TyPr1D+jmxon9z6cKE0o6xLl1ClF9NGYyNAHslJg3cw=;
        b=qw2wYpt5HKJz5r2a7OsRAzIXXkpT+qRLgdZ/nuCwMrIJrOzNH3fD9ex8LmkqgpG98t
         P7PSjtCmHjdONgfStzQ+Ft1yGw+mfPAJthFu3pkWjLS+Cx48M7Vchp8/VYXnCx8F0Q2l
         rMqzBqzxBQ+l/wpnRRrS/rTAmA759a2Cy1DvVHxMSI3cQ01dOcepbymKuIRh7i5+Rj3y
         2E5GOExso1G5XPyhGSXfoim2hbUkG2QTnQTTe8bwyEhXxDoEojVOcZO2EHhYS1wbqqGZ
         YjJzG7gBuAZ1vZfTsQ5mtob4797mWInf+aP3yIRHgPoJedv7oF+XRHI69l3pt0FE3cNd
         WV3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680908499;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=TyPr1D+jmxon9z6cKE0o6xLl1ClF9NGYyNAHslJg3cw=;
        b=5Nv2YnOaUPSyVfNAU5+T6mPGNcdg54VgISDv3PbHoMA1TNdXuLkEpK/jkjgLjSb6jO
         diFZhfCMm8NUNEwYRKjVxJQRS9vJFPDk+B/RgmeXL17d/5WNYG3iK5KyIpCl6o0U5HPK
         M2ka5debBPzKlLk9ZrA5PFqqbriqCb3jpv5d3a86uuEvDkLUs0VXESiaYrurlGLKL1iE
         yPEpxeDWihzwLgC3nDMkLCsHU1r2v2vZJ/blv0C1ORmXN4nP+fMZO2NZX6DzXh30uL17
         z3tL33fYzIQ7oriln5lKyFFBeybgPNT/081WmIMrpDPTSyKYZKbKCpkAL0tW16upOZuZ
         PR9g==
X-Gm-Message-State: AAQBX9eNu7cb6qA+EWmGAnJgzhs7G5m88+CnFMuuFa7WUnUnnAMQLD8V
	KME7XS3TbtMnoNhdlgoTsejV9VamNSiMgNhW8SI=
X-Google-Smtp-Source: AKy350ZQLIgsGvtQbrBz25NftlqZ9SOXyQK0RAxFlNjlFwKEYf49WMV87F8ttHZLtHbQzsn5rLkPTA==
X-Received: by 2002:a17:90a:34c2:b0:246:6b3e:38dc with SMTP id m2-20020a17090a34c200b002466b3e38dcmr1539291pjf.10.1680908498904;
        Fri, 07 Apr 2023 16:01:38 -0700 (PDT)
Message-ID: <ba69b6b3-1b4f-6cc0-34a6-4c19fa3dbb8d@linaro.org>
Date: Fri, 7 Apr 2023 16:01:36 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 03/14] accel: Fix a leak on Windows HAX
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org, kvm@vger.kernel.org
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-4-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-4-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:18, Philippe Mathieu-Daudé wrote:
> hThread is only used on the error path in hax_kick_vcpu_thread().
> 
> Fixes: b0cb0a66d6 ("Plumb the HAXM-based hardware acceleration support")
> Signed-off-by: Philippe Mathieu-Daudé<philmd@linaro.org>
> ---
>   target/i386/hax/hax-all.c | 3 +++
>   1 file changed, 3 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:03:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:03:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519126.806341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkv7c-0000ch-B6; Fri, 07 Apr 2023 23:03:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519126.806341; Fri, 07 Apr 2023 23:03:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkv7c-0000ca-7q; Fri, 07 Apr 2023 23:03:56 +0000
Received: by outflank-mailman (input) for mailman id 519126;
 Fri, 07 Apr 2023 23:03:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkv7b-0000cU-Eu
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:03:55 +0000
Received: from mail-pf1-x42f.google.com (mail-pf1-x42f.google.com
 [2607:f8b0:4864:20::42f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 76e36e35-d598-11ed-b464-930f4c7d94ae;
 Sat, 08 Apr 2023 01:03:53 +0200 (CEST)
Received: by mail-pf1-x42f.google.com with SMTP id
 d2e1a72fcca58-63255e756bfso12345b3a.2
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:03:53 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 b13-20020aa7870d000000b0062e26487e7esm3525658pfo.155.2023.04.07.16.03.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:03:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76e36e35-d598-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680908632;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=hZqP/mMtrcpUQSJ0MIEuq6L7i9BJbItpoxQhlUdV2j0=;
        b=OScsKdJq+aKS4EMfnsQDF1A4ZpE9lHLC3dBugCWoYJ+T+zxfKuy7Bcfowv1Z4sMNGt
         5Tvqtj2tiQHF/cjx4hdeKJ6bZl9Y/HxUihnLOA+u6o+0o8KEhFMiBHXApK9qw2CCJXQY
         B/O1pBojgVNojn4tHZwtZ9Gj8964Bhc4vqVPOXtmkWUbZHDB/khpku9T9hNbVRaYpnKf
         6WkmzIgkqbHy5Lu85TuUalah4TTDdXAr+WRzB+W03rr+p/gGfvJi+W66/flHhGLrSK6a
         w5iWxWFxH44qZpC/JwuKPFVtQLmfZeKHX2acjMCobg3Mcp8TCtZQwPxUU3D6tMzl+TdP
         a4Tg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680908632;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=hZqP/mMtrcpUQSJ0MIEuq6L7i9BJbItpoxQhlUdV2j0=;
        b=cVKd6ApFSDrvpps4mp1Z1h9LxXqqMZr+SFFMZ7Jdwi+vZZP+uNCLWuTIw7RVPeiEcf
         T8cbxN7C9ekuMM2/ZeZb0W/9YY6Q/Sp5PyAQVUfQO8saKDGZSU8mRh4y73a8a9RitXe5
         7KbAYr+n0XcbsMjLGCOcETx+4h1lsTR4coWSwkZTr6iGHuYntW2iyomqAbkwX7RAuWXj
         JlkRJprgQ/hkFjGeLW3oGtqmI/rnHP+jbbF1/gdOe3FooLNAQ6nKYswxHR+KqZB4GUoY
         bUElkvLtPE1Y4cmiwMZQ8mEELIJru2r5fNI5tWQnSCffqxPtrOU0ngFAmZbtOcuV+6NF
         K88w==
X-Gm-Message-State: AAQBX9cTFy3HiAbv/vP6eSQZyxVbzhlKP7h8JHuFewCFTCtgnBeUDWIp
	tFFoFpWLmyp5Mx7cZZfZehEmjQ==
X-Google-Smtp-Source: AKy350an52aELf6hlPTEwYKQg2oGfdoio44PzUivo7lHIUSuO2HA+GtnFtvlc2ICRNAVAvj8q2M6tQ==
X-Received: by 2002:a62:5e41:0:b0:626:2ae6:31f6 with SMTP id s62-20020a625e41000000b006262ae631f6mr3993369pfb.7.1680908631953;
        Fri, 07 Apr 2023 16:03:51 -0700 (PDT)
Message-ID: <f28d2337-37c0-5a93-438f-7ce0ea7fc565@linaro.org>
Date: Fri, 7 Apr 2023 16:03:49 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 05/14] accel: Rename 'hax_vcpu' as 'accel' in CPUState
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Yanan Wang <wangyanan55@huawei.com>, Reinoud Zandijk <reinoud@netbsd.org>,
 Sunil Muthuswamy <sunilmut@microsoft.com>
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-6-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-6-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:18, Philippe Mathieu-Daudé wrote:
> All accelerators will share a single opaque context
> in CPUState. Start by renaming 'hax_vcpu' as 'accelCPUState'.

Pasto in 'accel' here.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:07:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:07:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519131.806351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvAu-0001Hd-Px; Fri, 07 Apr 2023 23:07:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519131.806351; Fri, 07 Apr 2023 23:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvAu-0001HW-N3; Fri, 07 Apr 2023 23:07:20 +0000
Received: by outflank-mailman (input) for mailman id 519131;
 Fri, 07 Apr 2023 23:07:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkvAt-0001HQ-LZ
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:07:19 +0000
Received: from mail-pl1-x634.google.com (mail-pl1-x634.google.com
 [2607:f8b0:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f0989c9b-d598-11ed-b464-930f4c7d94ae;
 Sat, 08 Apr 2023 01:07:17 +0200 (CEST)
Received: by mail-pl1-x634.google.com with SMTP id m18so64689plx.5
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:07:17 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 w8-20020a63c108000000b005141e2c733dsm3117978pgf.11.2023.04.07.16.07.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:07:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0989c9b-d598-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680908836;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=JAx0qx3C6KbkeA3/DQYQ7BheR/mOc8zbOwn50MI/ySU=;
        b=mbYbKfx8mUmFoekiQdkNphSpXLmmbqH8glL3AwV4W3HVWtDJAciucY0lPbRfBpEDFq
         gLlOvNgZ1aLfgvbPXG+Qx/8/igjS3HTbQe0TOOhogHHLu6CsdelnF0yAQM3P4NvkeUlZ
         /xxR7Zdqi7rV2zCgA4fHc1MzHVBEyEVNb2vgXSmk+/mwYBQWFUuxDl3NGR/k474i1eEt
         GapH/ZtfQMmDXQlxBXlVSl3mvt6+PQYPigGuP+NMhSb2Yk4I8i4WH+/5bJi5N0/JyBg7
         qvyMaJ5KjlSKrhsg0GppELpMFE4U4M6ddit4DbdCD0iuZxi2OYxIn/vTltNF+s2Tq4mQ
         uM6Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680908836;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=JAx0qx3C6KbkeA3/DQYQ7BheR/mOc8zbOwn50MI/ySU=;
        b=xzX81L5bM0BKKevQDGg89IiHg8hwZ31qiy3DHjC4XsaSyHRNayCJzh41GxLdXbP8fU
         UK5timuTba37tgsIRU3roOZC/qUezamZ3MhByWctbbA9XdRgy3+S+uuGTbwzzlP2MYe/
         wNTjkF88G22frSqnb3g/rbGOd827ysb4jtulPivqnPDa4sQXeyCv0YS1AK/ctOM/C9oL
         NmGIYmrc8VE6alTTi4tC64KiIkz2ERtW+9PVsfEQNBhNtwRqVZtGwtUhHrQa633cVic3
         qb8GGJ961CT9nPPSFiaYS/I/dWAu8wgZQtj+kN98VzovLkXk62rZB/Ut2rKYuP4023zx
         TAgA==
X-Gm-Message-State: AAQBX9fPZlHKA7KCt3uvXmUCyJPp2QQS/Ju/GnhXuVv+IQD4aTvagxsc
	3Lk63sUK07vXABx4vCzujhmw6Q==
X-Google-Smtp-Source: AKy350ZaLWvNZM1+jRdEw1wsK7EItizE9LE7fAUvdw6zSKnESjRPfP7KFdel4FWtCgR78LROUj0RRA==
X-Received: by 2002:a05:6a20:ca4e:b0:d7:34a1:85b9 with SMTP id hg14-20020a056a20ca4e00b000d734a185b9mr108846pzb.7.1680908836058;
        Fri, 07 Apr 2023 16:07:16 -0700 (PDT)
Message-ID: <7f1385f0-b0bc-090a-4437-434cb72a7cc6@linaro.org>
Date: Fri, 7 Apr 2023 16:07:13 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 07/14] accel: Rename struct hax_vcpu_state -> struct
 AccelvCPUState
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Yanan Wang <wangyanan55@huawei.com>, Reinoud Zandijk <reinoud@netbsd.org>,
 Sunil Muthuswamy <sunilmut@microsoft.com>
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-8-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-8-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:18, Philippe Mathieu-Daudé wrote:
> We want all accelerators to share the same opaque pointer in
> CPUState. Start with the HAX context, renaming its forward-
> declared structure 'hax_vcpu_state' as 'AccelvCPUState'.
> 
> Signed-off-by: Philippe Mathieu-Daudé<philmd@linaro.org>
> ---
>   include/hw/core/cpu.h       | 7 +++----
>   target/i386/hax/hax-i386.h  | 3 ++-
>   target/i386/nvmm/nvmm-all.c | 2 +-
>   target/i386/whpx/whpx-all.c | 2 +-
>   4 files changed, 7 insertions(+), 7 deletions(-)

Can this be squashed with previous?  It seems odd to change the name twice in a row.
Is the "v" in AccelvCPUState helpful?

> +    struct AccelvCPUState *accel;
>      /* shared by kvm, hax and hvf */
>      bool vcpu_dirty;

Move below the comment?  Or is that later?


r~


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:07:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:07:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519134.806360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvBM-0001kb-25; Fri, 07 Apr 2023 23:07:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519134.806360; Fri, 07 Apr 2023 23:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvBL-0001kU-Vf; Fri, 07 Apr 2023 23:07:47 +0000
Received: by outflank-mailman (input) for mailman id 519134;
 Fri, 07 Apr 2023 23:07:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkvBK-0001kC-Fz
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:07:46 +0000
Received: from mail-pj1-x102d.google.com (mail-pj1-x102d.google.com
 [2607:f8b0:4864:20::102d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 01520507-d599-11ed-85db-49a42c6b2330;
 Sat, 08 Apr 2023 01:07:45 +0200 (CEST)
Received: by mail-pj1-x102d.google.com with SMTP id
 l9-20020a17090a3f0900b0023d32684e7fso8842973pjc.1
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:07:45 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 p2-20020a17090adf8200b002465e66256asm1238484pjv.11.2023.04.07.16.07.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:07:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01520507-d599-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680908864;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=wgiL/ifYNfwo++KDFPg/Dio8zORaAFAk5bB5YtGhmQk=;
        b=MEGGWwlxDpwujW+GajFwMICW/+ZkQuiv23sCYl3JUT/QGW1SCye8aT/VwuveZyUz00
         danOwxCDQO8rPdclT/eLT0VCKJMmNVgE8cYX5i7MRWsQ4coyPuqnScvQZwZTRvGxdYvC
         FhJkRJCGIWp5W6kFxgNUJaKiGOLx9njvZfXCvRgcLvuhm7FACslFqNbhm4hJWCgiDvo2
         7hWTApcGKjnMgABAvptKlm34znz02qQ8vUOi0oekStr3YQbTAa5sGe+i21fWdcw/Aq1j
         c4DzqVmZej5HD6YvnXJJwa+uihhiloPCbj7INDw5KTL3mAU/KBhEKOkI7ZdD4FMRBbcu
         Juyw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680908864;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=wgiL/ifYNfwo++KDFPg/Dio8zORaAFAk5bB5YtGhmQk=;
        b=iCinFRRNOkTvhEtv3cPKeJoygZJfq96etAATtQ04vE2gCNSfNwJ6Plnk659phgaIpy
         AdRLe12sT2uGVXlzMMXlOBgr/bO0cgQ9nTRpFs5KH4YV9F8SZpUDW2VVLrjmCDdviBoi
         7hE3T8YI5AHBXAa9DNhhnm2NGDn2mLuShFo0VL0XYhsSZiuv+D93eE//JX+gG/HHN/hA
         qz1+e+20TLDr3KPhXPT/8DSVCOo6A7kf2uMrNjtxkQpIcHckKeWaefvkFigHMkN+gX9o
         Qj5YFp21uDwFNxSXY8rdCHY358mVjoXQATWhuusoC+QB2AeqC3ZM/Iy+I6i4XOF3rC7b
         OKvg==
X-Gm-Message-State: AAQBX9fWhbQkMX6X2+R29jLA+9WDD7wPGNWN/RGWKWgwVwUehUHD/DhK
	TeaxGvaUBc6WIk40w5nyXg1tcg==
X-Google-Smtp-Source: AKy350bXcZjgyQUuRpp6nyP3eJ+x1rlN/eQIhLU/UutS8vew5xsari6fdOQWku6HCBWiasGwTXkzQw==
X-Received: by 2002:a17:902:e5cb:b0:1a1:e112:461c with SMTP id u11-20020a170902e5cb00b001a1e112461cmr12302648plf.30.1680908864146;
        Fri, 07 Apr 2023 16:07:44 -0700 (PDT)
Message-ID: <a3c36603-6fd0-b093-0e8f-97e15b8e6bc9@linaro.org>
Date: Fri, 7 Apr 2023 16:07:41 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 08/14] accel: Move HAX hThread to accelerator context
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Yanan Wang <wangyanan55@huawei.com>
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-9-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-9-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:18, Philippe Mathieu-Daudé wrote:
> hThread variable is only used by the HAX accelerator,
> so move it to the accelerator specific context.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
>   include/hw/core/cpu.h           | 1 -
>   target/i386/hax/hax-i386.h      | 3 +++
>   target/i386/hax/hax-accel-ops.c | 2 +-
>   target/i386/hax/hax-all.c       | 2 +-
>   target/i386/hax/hax-windows.c   | 2 +-
>   5 files changed, 6 insertions(+), 4 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:09:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:09:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519141.806371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvD8-0002Ot-DL; Fri, 07 Apr 2023 23:09:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519141.806371; Fri, 07 Apr 2023 23:09:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvD8-0002Om-AL; Fri, 07 Apr 2023 23:09:38 +0000
Received: by outflank-mailman (input) for mailman id 519141;
 Fri, 07 Apr 2023 23:09:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkvD6-0002Oe-BF
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:09:36 +0000
Received: from mail-pg1-x532.google.com (mail-pg1-x532.google.com
 [2607:f8b0:4864:20::532])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 42e029cd-d599-11ed-85db-49a42c6b2330;
 Sat, 08 Apr 2023 01:09:35 +0200 (CEST)
Received: by mail-pg1-x532.google.com with SMTP id
 41be03b00d2f7-517c01e6660so18595a12.2
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:09:35 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 f3-20020aa78b03000000b005aa60d8545esm3564626pfd.61.2023.04.07.16.09.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:09:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42e029cd-d599-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680908974;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=XvSIGGl7kyzrRPKD+1h4YAIP7+IU0OISKySWyKHO+jg=;
        b=iv6gr2hKVqwKYI+iTQN+8od7y6Vm/m73ky9/2diyxBhZPqEorU219WD2XJzyvYDYRT
         EQnhzs/V9uvYpJLy4VBiCLJDPSSyy/mk7kdUnBMVS5pdIZkqIfGf52EThRLsFjAEdEsI
         75R64mVKrvYX9fSs1K2fFq6b2G/rdvEnYm5oV115etWVrXcUd06SIATm2soLvb5zJ2rS
         aLj36ZCRtc9CMFkr4v9c3m54c5MWwJYwceHUVQL+Hc3QAWiaTznLDbwG3uZRQ1o+xHSV
         QewZOhFy9l4j9VEGUvSVoAuQtIMQ9HwTJxUwxMmvV2wdOiXGWIwjq9EvrUIb00tY50Wg
         1WyQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680908974;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=XvSIGGl7kyzrRPKD+1h4YAIP7+IU0OISKySWyKHO+jg=;
        b=t2GXLzkczR/QAZSWUSfUCNrJslAw5GKzXI0k/tmGeEpqRIJYFtGIXRn0P1cNTzrj6/
         tek5e2pMsneZygp3zp85yTA2jztAQSSze+/aL58mtj3YC9YjAyjqsN5JwqFFXbciRwhW
         3J841P1XfKkwNASXu5r2jNuD/RbRH0UmVufTypPlox9kYAdDTGe6N52yoGRxTpfrzcdn
         F3ovDdI/fza0rdfoqB/QaJCkwnfkfyZpBnX+OCJVtP+cRNtYC/GYrlCSMfMoZZG2I9i+
         +T76fOFDzD0m1tLDVgVclKSeWq8ZSiW9AhTcCR9fqXfvSjdRYXmp5Yl1VDo9HM5Y04Zb
         5LRQ==
X-Gm-Message-State: AAQBX9djz6kNUvIcSkPqTxNZuLCyKJrdHkaxiJNNqH4DCQMXjDMaVJsf
	oY9+mapUoACU7zJxN1RjxKoDxQ==
X-Google-Smtp-Source: AKy350Yqp9FrNOy78lp7BrA74JA2K1IJ9euoV6m53eCZnb3f4VQZGuFjTlRtLSsG6RBgf5M2spk2Qw==
X-Received: by 2002:aa7:8bd1:0:b0:62a:9d6f:98dc with SMTP id s17-20020aa78bd1000000b0062a9d6f98dcmr3877311pfd.11.1680908974132;
        Fri, 07 Apr 2023 16:09:34 -0700 (PDT)
Message-ID: <48b9e338-e20e-95f9-c611-3878b24ccec0@linaro.org>
Date: Fri, 7 Apr 2023 16:09:31 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 07/14] accel: Rename struct hax_vcpu_state -> struct
 AccelvCPUState
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Yanan Wang <wangyanan55@huawei.com>, Reinoud Zandijk <reinoud@netbsd.org>,
 Sunil Muthuswamy <sunilmut@microsoft.com>
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-8-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-8-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:18, Philippe Mathieu-Daudé wrote:
> +struct AccelvCPUState;

Missing typedef?


r~


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:11:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:11:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519146.806381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvEX-0003nm-P4; Fri, 07 Apr 2023 23:11:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519146.806381; Fri, 07 Apr 2023 23:11:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvEX-0003nf-L7; Fri, 07 Apr 2023 23:11:05 +0000
Received: by outflank-mailman (input) for mailman id 519146;
 Fri, 07 Apr 2023 23:11:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkvEW-0003nU-P6
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:11:04 +0000
Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com
 [2607:f8b0:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 77979410-d599-11ed-85db-49a42c6b2330;
 Sat, 08 Apr 2023 01:11:04 +0200 (CEST)
Received: by mail-pl1-x636.google.com with SMTP id p8so184862plk.9
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:11:04 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 ik23-20020a170902ab1700b001967580f60fsm3360481plb.260.2023.04.07.16.11.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:11:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77979410-d599-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680909062;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=+8sjrQkKmFa0fDEbK0ByEbj8xRVyyoXwt3mONWUrCjg=;
        b=fu0fc9kQ3M2sfVtK3t68zTbYFucNlj5oFSUSteqUYRu6VDv3uTItwTOkzCaPFNHOei
         Df/jKMmHFnnEBVa21r/eZ50XSQaDi0Hdouvc5bo4bnXn3NaTSvMg4uk2qfa5HE0KEHPK
         G/hnmvZSmlnUfmxSrzAzCG2somHL6hF+eFvhEfvqMO4TRTM4MdDjEbP4M3dgQLLmAJUt
         NQ70V3GEiNeBy6LgXX+ZifscCDpUsnjTSs3RYovPOLuMKgkROChad5HQjq9TyjzVOy/W
         ikj4D19qkBrRoB4IJxSKljZsM9kEwp6vq/D91lh+Kurhgan8M73NHLiK5l1wy/jRVm4c
         8CpA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680909062;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=+8sjrQkKmFa0fDEbK0ByEbj8xRVyyoXwt3mONWUrCjg=;
        b=4H6Y7kdbGPpD+gmoFNrjvHAumoTzlCcxscceakYxWRHVoG2uTPFocs0S1hzoeTlamv
         axA/ZL2nt0rvlCxfto1/wdEFtxVTF6/flH+o22ChOPxyd8zWlYyrClfp7inXy5jOvhm6
         h2PAMKNlIyMF9u7ocxn1gwPd1E54B9ZcvFWLFui809/ZxjNzcveBl+SYOWHnhZZdUz84
         v9miEQs7JqjtH0Hi4SoRIDkpFyUBjAFBIAdD6uoeFfAB20glm1w1AJ6QS+X0M50uJ4Dq
         6I973CApWEZS62GrGoFDFRU0Hq/fbJ7GyjXV1lJgvYdKjqn0Yw0umXODALgYUTUPx1+H
         VkUQ==
X-Gm-Message-State: AAQBX9cphCe+e4Fx6f3T5Dd9T6Mvzh4iqB7cQwALVzY6TkFkYCCIIyey
	b4cJ+5AiW5zoHTPfq0W/aaxG8g==
X-Google-Smtp-Source: AKy350Z01RjDdvFrn3SDcNSR06u3kONrO463lNJW2mYVi5QEQlyqB2hcRXm9iVwIpLBrSzeEsGCq9A==
X-Received: by 2002:a17:903:3093:b0:1a2:ca:c6cd with SMTP id u19-20020a170903309300b001a200cac6cdmr3648934plc.43.1680909062629;
        Fri, 07 Apr 2023 16:11:02 -0700 (PDT)
Message-ID: <e37bd0f9-b577-dbf4-6d34-a17a9d5cfbaf@linaro.org>
Date: Fri, 7 Apr 2023 16:11:00 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 10/14] accel: Rename NVMM struct qemu_vcpu -> struct
 AccelvCPUState
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Reinoud Zandijk <reinoud@netbsd.org>
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-11-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-11-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:18, Philippe Mathieu-Daudé wrote:
> -    struct qemu_vcpu *qcpu = get_qemu_vcpu(cpu);
> +    struct AccelvCPUState *qcpu = get_qemu_vcpu(cpu);

With the typedef in hw/core/cpu.h, you can drop the 'struct' at the same time.

Otherwise,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

> -    qcpu = g_try_malloc0(sizeof(*qcpu));
> +    qcpu = g_try_new0(struct AccelvCPUState, 1);

Another 'try' to clean up.  :-)


r~


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:11:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:11:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519150.806391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvF3-0004I5-0n; Fri, 07 Apr 2023 23:11:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519150.806391; Fri, 07 Apr 2023 23:11:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvF2-0004Hy-UB; Fri, 07 Apr 2023 23:11:36 +0000
Received: by outflank-mailman (input) for mailman id 519150;
 Fri, 07 Apr 2023 23:11:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkvF1-0003nU-T6
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:11:35 +0000
Received: from mail-pg1-x536.google.com (mail-pg1-x536.google.com
 [2607:f8b0:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8a40117d-d599-11ed-85db-49a42c6b2330;
 Sat, 08 Apr 2023 01:11:35 +0200 (CEST)
Received: by mail-pg1-x536.google.com with SMTP id
 41be03b00d2f7-51396f3e6b0so146058a12.1
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:11:35 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 80-20020a630253000000b005136d5a2b26sm3141345pgc.60.2023.04.07.16.11.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:11:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a40117d-d599-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680909094;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=4cErFy4VVSXH+T35LT4D8bhMQRXEc0jWj0rx0ggjSPk=;
        b=OJFinG4r+Jx35MVNhHZcNTMCoIxRVzugHa8wQb0aF3/WTzIXXBfPwEM81GLW5qzvA3
         5ETEJsnkXy63xKZpvH1BGx4E4mJabdebFUPIucQkQ109TgyzR/OJANaStfogZeUwJSvB
         +n0AkGdqt/wNTf7EIX5oMoy0GvlBdDdwbxgBQ/FNmniodkjKXrGRz7rWA6vw+T9AgziT
         RkNSL9rUUEBJmTGgAlLH2V6qPGr0J/0/wvBQXd6nbZl2I2R617/+2FmNyTRd56JrEd17
         LWo9ZJvzBF+Jg+CjaCPLtDsd5FPc+J0a2N1UTbzoR7A8cxU06vM7OEL1iLwmOTtAizig
         nHRA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680909094;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=4cErFy4VVSXH+T35LT4D8bhMQRXEc0jWj0rx0ggjSPk=;
        b=3pFIiv4VH49Ty4AxFiMV6H1p3/kEbOLEyi4uMyuEaGkoJEdNsDrJUXKGTt7gXMFKvS
         kCEUDOFdiF0sssitNZJUc4FGSv7jwXXfEpOn6kDC6Mr7nDgL81dYCSBWdXHREuAbiwiz
         B/Epsw8DZGP4Ofy23BM7d15sRl5IOwdGVCYEf9dLkZcC0xJNuRKLNDlSH1KY6hhxyA+R
         cY0hAIEzUp9P66rzVaufsH0A8R6wiIgeWI+tTcfE+iENpApkKWD1CoSX36YkPzUZznWY
         hzTagxPIJyfOrbQq1KYTDlOq4hXI0dblsM++sZTyS7io6SPn+pq82NTW+x03KFHvsMpu
         1abw==
X-Gm-Message-State: AAQBX9dPzzt7hIoim3z0/OxRd7UZkRQWPO+an0hfFPcTtHbOwcCvDfoP
	DvrS8b7KXyPLR8C0RlfDxAPCyOFQdhcTVk39VgI=
X-Google-Smtp-Source: AKy350bXu0YXNAHiNTKVWIi38x7qw1rEbHmvanIsHsxL3sMeptcsGTDsDeyKqZftiwBuvO2w43WOyw==
X-Received: by 2002:a62:1850:0:b0:62d:8376:3712 with SMTP id 77-20020a621850000000b0062d83763712mr155681pfy.28.1680909093896;
        Fri, 07 Apr 2023 16:11:33 -0700 (PDT)
Message-ID: <b2778af0-52e1-88fa-f2a8-cc4060835464@linaro.org>
Date: Fri, 7 Apr 2023 16:11:31 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 11/14] accel: Inline NVMM get_qemu_vcpu()
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Reinoud Zandijk <reinoud@netbsd.org>
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-12-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-12-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:18, Philippe Mathieu-Daudé wrote:
> No need for this helper to access the CPUState::accel field.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
>   target/i386/nvmm/nvmm-all.c | 28 +++++++++++-----------------
>   1 file changed, 11 insertions(+), 17 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:13:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:13:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519157.806400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvGU-0004xa-Gx; Fri, 07 Apr 2023 23:13:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519157.806400; Fri, 07 Apr 2023 23:13:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvGU-0004xT-E3; Fri, 07 Apr 2023 23:13:06 +0000
Received: by outflank-mailman (input) for mailman id 519157;
 Fri, 07 Apr 2023 23:13:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkvGT-0004xG-6F
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:13:05 +0000
Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com
 [2607:f8b0:4864:20::1030])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bf128a8b-d599-11ed-85db-49a42c6b2330;
 Sat, 08 Apr 2023 01:13:03 +0200 (CEST)
Received: by mail-pj1-x1030.google.com with SMTP id
 r21-20020a17090aa09500b0024663a79050so1560399pjp.4
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:13:03 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 g24-20020a63dd58000000b004fcda0e59c3sm3091950pgj.69.2023.04.07.16.13.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:13:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf128a8b-d599-11ed-85db-49a42c6b2330
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680909182;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=aYIkZBIFK2ZpHWvSfO0jyCWHXEmHKEa00eKczku2HgE=;
        b=vKfScPpZZuTSQjVvlLzoq0nu/oNEzhegH0VN4qBbTDX1lF4jluykpN2c1wbw5Qjmnj
         Rb66ndQEM5IDPq+PkYMGfaYi9++yYdHMhB0dAtUmxQ0utcdT4d11dzuZmSQZmMmkR0YZ
         f19qZTPpNtjEBVCVK7fCUEY08KnyfjOg+z/5iwxAqvdo+K6M9MEOq/E2MXr0jeAqB+Kq
         suEXAJqiAwgQTQ/MyX+nLK78aB5JsXiZWTSWLDWgP46FZ/9UiE6oqKOC79oVNQyZQwyZ
         gs3Q/speYE/HWvXViIfmClwbFvilbHwN6YPaYwEE4apNDw1NyVzvGBqI622hYSkVaakq
         kurg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680909182;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=aYIkZBIFK2ZpHWvSfO0jyCWHXEmHKEa00eKczku2HgE=;
        b=RqTvOVAYeEv0Ows1Ns1anfM6cwHRbFxZ0jbi+wByDR38H73wsgRLMZl+A+PSH7uXP9
         p+kVey7gkhrvhnLLqTXcNohwPutBfrO/UDnSOAAPYciEJ6pltIz2QHEnQp7MHcRJ/5v6
         bCCpKVSY3OKjgfh7BVeLnBhs5QqNcGXupAKz2+i67bZm852M+qr4cOqrl+O82VsRCQ/Q
         Qybr/O9NKoLvXX1QozIHYWDBNSrrnwAxBBxpLP99wB8X9/dcbo+Fyqhbg91ETgGHJUGa
         YNAvEH0eLfuna8q50nB0DXfu2gjV0dlE6We+CtWzJqg7bhjS6AwfyfdaHe5HmoI0H9xk
         +6bg==
X-Gm-Message-State: AAQBX9edAbuYN1jdNl/xbt/1U9hHCu1ysePdQFe4kordyozCK+UlUvpA
	JFaK63NkJssbjX/FdGfAGSDLeg==
X-Google-Smtp-Source: AKy350bDgHCmk88fi7lKmF1o3TyQSTwibPI+P0wHGQ0hTOjZ8sVom4CqWEwPGykeBzb76cAn0hafQQ==
X-Received: by 2002:a05:6a20:4ca3:b0:d9:dbb6:2e67 with SMTP id fq35-20020a056a204ca300b000d9dbb62e67mr4159798pzb.31.1680909182482;
        Fri, 07 Apr 2023 16:13:02 -0700 (PDT)
Message-ID: <5a0297f8-ddc6-4fac-b896-1b1ecede844e@linaro.org>
Date: Fri, 7 Apr 2023 16:13:00 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 12/14] accel: Rename WHPX struct whpx_vcpu -> struct
 AccelvCPUState
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Sunil Muthuswamy <sunilmut@microsoft.com>
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-13-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-13-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:18, Philippe Mathieu-Daudé wrote:
> We want all accelerators to share the same opaque pointer in
> CPUState. Rename WHPX 'whpx_vcpu' as 'AccelvCPUState'.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
>   target/i386/whpx/whpx-all.c | 30 +++++++++++++++---------------
>   1 file changed, 15 insertions(+), 15 deletions(-)
> 
> diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
> index 70eadb7f05..2372c4227a 100644
> --- a/target/i386/whpx/whpx-all.c
> +++ b/target/i386/whpx/whpx-all.c
> @@ -229,7 +229,7 @@ typedef enum WhpxStepMode {
>       WHPX_STEP_EXCLUSIVE,
>   } WhpxStepMode;
>   
> -struct whpx_vcpu {
> +struct AccelvCPUState {
>       WHV_EMULATOR_HANDLE emulator;
>       bool window_registered;
>       bool interruptable;
> @@ -260,9 +260,9 @@ static bool whpx_has_xsave(void)
>    * VP support
>    */
>   
> -static struct whpx_vcpu *get_whpx_vcpu(CPUState *cpu)
> +static struct AccelvCPUState *get_whpx_vcpu(CPUState *cpu)
>   {
> -    return (struct whpx_vcpu *)cpu->accel;
> +    return (struct AccelvCPUState *)cpu->accel;

Same comment about removing 'struct'.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


> -    vcpu = g_new0(struct whpx_vcpu, 1);
> +    vcpu = g_new0(struct AccelvCPUState, 1);
>   
>       if (!vcpu) {
>           error_report("WHPX: Failed to allocte VCPU context.");

Hah.  And a "can't happen" error_report, since we're not actually using try here.  :-P


r~



From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:13:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:13:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519159.806410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvGk-0005KH-Nv; Fri, 07 Apr 2023 23:13:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519159.806410; Fri, 07 Apr 2023 23:13:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvGk-0005KA-LL; Fri, 07 Apr 2023 23:13:22 +0000
Received: by outflank-mailman (input) for mailman id 519159;
 Fri, 07 Apr 2023 23:13:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkvGi-0005Ev-KI
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:13:20 +0000
Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com
 [2607:f8b0:4864:20::632])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c80f6c5d-d599-11ed-b464-930f4c7d94ae;
 Sat, 08 Apr 2023 01:13:19 +0200 (CEST)
Received: by mail-pl1-x632.google.com with SMTP id o2so122996plg.4
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:13:19 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 x22-20020a170902821600b001a199972508sm3382209pln.124.2023.04.07.16.13.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:13:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c80f6c5d-d599-11ed-b464-930f4c7d94ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1680909197;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=+ItnoDxjRyq5NxF6TJi+AQI7ZLK0+7Nql3EFIwnRinE=;
        b=tBpx2Whb4QzegaMnt95h9v7OIs3R0veH7QI0AUG8m9HbqbV69oRmE1Lp3HyvnMwLin
         McrVCRHDPd0x2sKbwU88FmIlMhQnR+/8x5bkAEnXgzjwWrTRhDDlXblkJAD/Gz648Xwc
         5Raj8RHXLDPLqPir8wxUa9xK9fb7aGTo1JHNiEa3mGeWRYIsyKtHdFiZJYaxFCiRB9Sw
         e3YuDa9UieI9e3U+MI+GMOtmitZ6G286gVAMwWk6LF3ANI4RVLMGHfTrXI4s+jKap2JY
         LvI5E5XUdmS2/NCFq1bHDfM8/DxpdIw7Sp0m9wmFWfggE3O13TldNX65vBYEOTNlWaI7
         hICg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1680909197;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=+ItnoDxjRyq5NxF6TJi+AQI7ZLK0+7Nql3EFIwnRinE=;
        b=bhqY53vuCmAMHhHohma2geZjl7/NAIYvFF1DNaALZHlB4/cy2EKZqOy31mzv+cYXUG
         MoqRvh93MOZ+plSvZaYv227CassUKdG1Iaai+T9mTL8nb1t4YRVTzSji2xoZ4N2PMt6n
         Yrm9qIHsNcc04/1YGqEAVi52r3vczrX/Vn/mImvEJsGGOAVtFhR5S/9/hnfakkwolqnw
         D0MbjQV9PEEdIxtxf+Sz37uSnFiOyI2lMSMWDQFODxaz1bcGfV90781h2T4V6z2N7mJi
         SpzyW27raEcfh0ooyVa3NXsaukf3jT5tiwxseX7778Q0UJbs2A8e6IQtG0PnVpnGXkkZ
         26Qg==
X-Gm-Message-State: AAQBX9fG5Ag8agHXX0xA4mMrdN3MzC/xy77nScHoXNQ169AgJjeccNrc
	FtSm2efSfaXBDq+83BMwNq7GZA==
X-Google-Smtp-Source: AKy350Y3lzjtZw4RK/yOE1pq/rsEAaEwouoQEVbujzz2mrQjMkOvcprqVjsdpaOumVHAhvkYih/Nlw==
X-Received: by 2002:a17:902:f544:b0:1a1:b8cc:59da with SMTP id h4-20020a170902f54400b001a1b8cc59damr5210653plf.33.1680909197678;
        Fri, 07 Apr 2023 16:13:17 -0700 (PDT)
Message-ID: <9a1a1a4a-ef80-61a8-8d2c-55817fd12496@linaro.org>
Date: Fri, 7 Apr 2023 16:13:15 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 13/14] accel: Inline WHPX get_whpx_vcpu()
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Sunil Muthuswamy <sunilmut@microsoft.com>
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-14-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-14-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:18, Philippe Mathieu-Daudé wrote:
> No need for this helper to access the CPUState::accel field.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
>   target/i386/whpx/whpx-all.c | 29 ++++++++++-------------------
>   1 file changed, 10 insertions(+), 19 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


From xen-devel-bounces@lists.xenproject.org Fri Apr 07 23:14:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Apr 2023 23:14:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519167.806420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvHW-00061Y-0U; Fri, 07 Apr 2023 23:14:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519167.806420; Fri, 07 Apr 2023 23:14:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkvHV-00061R-UC; Fri, 07 Apr 2023 23:14:09 +0000
Received: by outflank-mailman (input) for mailman id 519167;
 Fri, 07 Apr 2023 23:14:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dUW=76=linaro.org=richard.henderson@srs-se1.protection.inumbo.net>)
 id 1pkvHU-0005Ev-QZ
 for xen-devel@lists.xenproject.org; Fri, 07 Apr 2023 23:14:08 +0000
Received: from mail-pj1-x102b.google.com (mail-pj1-x102b.google.com
 [2607:f8b0:4864:20::102b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e4ce64e0-d599-11ed-b464-930f4c7d94ae;
 Sat, 08 Apr 2023 01:14:07 +0200 (CEST)
Received: by mail-pj1-x102b.google.com with SMTP id
 pc4-20020a17090b3b8400b0024676052044so104462pjb.1
 for <xen-devel@lists.xenproject.org>; Fri, 07 Apr 2023 16:14:07 -0700 (PDT)
Received: from ?IPV6:2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8?
 ([2602:ae:1541:f901:8bb4:5a9d:7ab7:b4b8])
 by smtp.gmail.com with ESMTPSA id
 jh3-20020a170903328300b001963a178dfcsm1477074plb.244.2023.04.07.16.14.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Apr 2023 16:14:05 -0700 (PDT)
X-Inumbo-ID: e4ce64e0-d599-11ed-b464-930f4c7d94ae
Message-ID: <e2687d91-85d8-d11f-4cea-c7363927cb9b@linaro.org>
Date: Fri, 7 Apr 2023 16:14:03 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [PATCH 14/14] accel: Rename HVF struct hvf_vcpu_state -> struct
 AccelvCPUState
Content-Language: en-US
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Cameron Esfahani <dirty@apple.com>,
 Roman Bolshakov <r.bolshakov@yadro.com>,
 Eduardo Habkost <eduardo@habkost.net>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Yanan Wang <wangyanan55@huawei.com>, Alexander Graf <agraf@csgraf.de>,
 Peter Maydell <peter.maydell@linaro.org>, qemu-arm@nongnu.org
References: <20230405101811.76663-1-philmd@linaro.org>
 <20230405101811.76663-15-philmd@linaro.org>
From: Richard Henderson <richard.henderson@linaro.org>
In-Reply-To: <20230405101811.76663-15-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/5/23 03:18, Philippe Mathieu-Daudé wrote:
> We want all accelerators to share the same opaque pointer in
> CPUState.
> 
> Rename the 'hvf_vcpu_state' structure as 'AccelvCPUState'.
> 
> Use the generic 'accel' field of CPUState instead of 'hvf'.
> 
> Replace g_malloc0() with g_new0() for readability.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
>   include/hw/core/cpu.h     |  3 --
>   include/sysemu/hvf_int.h  |  2 +-
>   accel/hvf/hvf-accel-ops.c | 16 ++++-----
>   target/arm/hvf/hvf.c      | 70 +++++++++++++++++++--------------------
>   4 files changed, 44 insertions(+), 47 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~
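[Editor's note: the g_malloc0() -> g_new0() change mentioned in the patch is a readability/type-safety swap. A self-contained C sketch, where the g_new0 macro is a stand-in approximating GLib's semantics (GLib's real g_new0 aborts on allocation failure rather than returning NULL) and the AccelvCPUState fields are illustrative, not QEMU's actual layout:]

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in approximating GLib's g_new0(Type, count): allocate 'count'
 * zero-filled instances of 'Type'.  The type appears in the call itself,
 * so the result type and the element size cannot drift apart. */
#define g_new0(Type, count) ((Type *)calloc((count), sizeof(Type)))

/* Illustrative shape only; the real AccelvCPUState lives in QEMU's HVF code. */
typedef struct AccelvCPUState {
    int fd;
    unsigned long long vtimer_offset;
} AccelvCPUState;

/* The readability point: 'g_new0(AccelvCPUState, 1)' names the type once,
 * where 'g_malloc0(sizeof(struct hvf_vcpu_state))' repeated it and relied
 * on the caller getting the sizeof argument right. */
static AccelvCPUState *accel_vcpu_new(void)
{
    return g_new0(AccelvCPUState, 1);
}
```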


From xen-devel-bounces@lists.xenproject.org Sat Apr 08 04:04:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Apr 2023 04:04:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519178.806431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkznc-0000AX-RW; Sat, 08 Apr 2023 04:03:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519178.806431; Sat, 08 Apr 2023 04:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pkznc-0000A7-Ky; Sat, 08 Apr 2023 04:03:36 +0000
Received: by outflank-mailman (input) for mailman id 519178;
 Sat, 08 Apr 2023 04:03:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkzna-00009p-TB; Sat, 08 Apr 2023 04:03:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkzna-0000qp-P5; Sat, 08 Apr 2023 04:03:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pkzna-0007Md-CU; Sat, 08 Apr 2023 04:03:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pkzna-0004ib-A1; Sat, 08 Apr 2023 04:03:34 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180182-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180182: trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-credit2:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-credit2:capture-logs(26):broken:heisenbug
    xen-unstable:test-amd64-i386-examine-uefi:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
X-Osstest-Versions-That:
    xen=a5087069a8c40541ba81fa0e2850471c949932b3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 Apr 2023 04:03:34 +0000

flight 180182 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180182/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-credit2     <job status>                 broken  in 180178

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-credit2 26 capture-logs(26) broken in 180178 pass in 180182
 test-amd64-i386-examine-uefi  6 xen-install      fail in 180178 pass in 180182
 test-amd64-amd64-qemuu-freebsd11-amd64 21 guest-start/freebsd.repeat fail in 180178 pass in 180182
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install    fail pass in 180178

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 180178 like 180173
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180173
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180173
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180173
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180173
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180173
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180173
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180173
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180173
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180173
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2
baseline version:
 xen                  a5087069a8c40541ba81fa0e2850471c949932b3

Last test of basis   180173  2023-04-06 19:49:58 Z    1 days
Testing same since   180178  2023-04-07 09:41:23 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-credit2 broken

Not pushing.

------------------------------------------------------------
commit ddaf7bb0cfd27369252de52e4b03410c4065bad2
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Mar 17 11:10:06 2023 +0000

    x86/svm: Provide EXITINFO decodes for Exceptions/NPF intercepts
    
    Exceptions and NPF intercepts almost have the same layout, but NPF has bits
    above 31 in the error code, and the name for exitinfo2 really does want
    distinguishing between cr2 and gpa.
    
    In nsvm_vcpu_vmexit_inject() rearrange VMEXIT_NPF to fall through instead of
    repeating the exitinfo1 write.  Use the fallthrough pseudo keyword instead of
    a comment.
    
    In VMEXIT_NPF, as we're editing the printk() anyway, switch to using the newer
    domain_crash() form.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)
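[Editor's note: the commit above replaces a repeated exitinfo1 write with a case fall-through marked by the 'fallthrough' pseudo-keyword. A self-contained C sketch of that restructuring; the fallthrough macro below is a portable stand-in for the one the commit refers to, and the struct/field names are illustrative, not Xen's actual VMCB layout:]

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the 'fallthrough' pseudo-keyword: the compiler's
 * fall-through statement attribute where available, a no-op otherwise. */
#if defined(__has_attribute)
# if __has_attribute(__fallthrough__)
#  define fallthrough __attribute__((__fallthrough__))
# endif
#endif
#ifndef fallthrough
# define fallthrough ((void)0)
#endif

enum { VMEXIT_EXCEPTION = 0, VMEXIT_NPF = 1 };

/* Illustrative exit-info pair; Xen's real fields live in the VMCB. */
struct exitinfo { uint64_t exitinfo1, exitinfo2; };

/* The pattern from the commit: VMEXIT_NPF stores its extra field (the
 * guest-physical address) and then falls through to the common
 * exitinfo1 write, instead of duplicating that write in both cases. */
static void fill_exitinfo(struct exitinfo *ei, int reason,
                          uint64_t error_code, uint64_t gpa)
{
    switch (reason) {
    case VMEXIT_NPF:
        ei->exitinfo2 = gpa;
        fallthrough;
    case VMEXIT_EXCEPTION:
        ei->exitinfo1 = error_code;
        break;
    }
}
```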


From xen-devel-bounces@lists.xenproject.org Sat Apr 08 06:34:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Apr 2023 06:34:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519186.806441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pl29C-0007Uk-FZ; Sat, 08 Apr 2023 06:34:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519186.806441; Sat, 08 Apr 2023 06:34:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pl29C-0007Ud-BA; Sat, 08 Apr 2023 06:34:02 +0000
Received: by outflank-mailman (input) for mailman id 519186;
 Sat, 08 Apr 2023 06:34:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pl29B-0007UT-3f; Sat, 08 Apr 2023 06:34:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pl29A-0004vQ-Si; Sat, 08 Apr 2023 06:34:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pl29A-000610-6M; Sat, 08 Apr 2023 06:34:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pl29A-0007hp-5v; Sat, 08 Apr 2023 06:34:00 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180183-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180183: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=aa318c48808c0aa73216bd94c54c4553d3663add
X-Osstest-Versions-That:
    linux=f2afccfefe7be1f7346564fe619277110d341f9b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 Apr 2023 06:34:00 +0000

flight 180183 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180183/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180172
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180172
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180172
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180172
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180172
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                aa318c48808c0aa73216bd94c54c4553d3663add
baseline version:
 linux                f2afccfefe7be1f7346564fe619277110d341f9b

Last test of basis   180172  2023-04-06 19:10:15 Z    1 days
Testing same since   180183  2023-04-07 21:10:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Dhruva Gole <d-gole@ti.com>
  Hans de Goede <hdegoede@redhat.com>
  Keerthy <j-keerthy@ti.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marios Makassikis <mmakassikis@freebox.fr>
  Michael Walle <michael@walle.cc>
  Namjae Jeon <linkinjeon@kernel.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Randy Dunlap <rdunlap@infradead.org>
  Steve French <stfrench@microsoft.com>
  Tom Rix <trix@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   f2afccfefe7b..aa318c48808c  aa318c48808c0aa73216bd94c54c4553d3663add -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat Apr 08 11:47:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Apr 2023 11:47:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519215.806455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pl71c-00047q-Ko; Sat, 08 Apr 2023 11:46:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519215.806455; Sat, 08 Apr 2023 11:46:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pl71c-00047j-Hx; Sat, 08 Apr 2023 11:46:32 +0000
Received: by outflank-mailman (input) for mailman id 519215;
 Sat, 08 Apr 2023 11:46:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pl71a-00047Z-I0; Sat, 08 Apr 2023 11:46:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pl71a-0003mX-GB; Sat, 08 Apr 2023 11:46:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pl71Z-000830-R4; Sat, 08 Apr 2023 11:46:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pl71Z-0003r4-Qi; Sat, 08 Apr 2023 11:46:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2DNR3E2BaaMGtoxrKAIZ0b2+O9pm4xdmdk0kLASLRC0=; b=z6J5C+2Wxdhu8OCzsS3VUG0uB/
	MsPQgPUQrdvEMergKCAZ5Ygj35zqwDfy4laAKX99bzDIgm//xf2LYNW1p1v4mCMkIVXCW4eV2o/ej
	6JPss9diBM7AoqYBj62WRT5tHntFX6HazzlTjaLt5J3QvPA5M8YLF5i2Eh7RS2cijEnY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180184-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180184: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
X-Osstest-Versions-That:
    xen=a5087069a8c40541ba81fa0e2850471c949932b3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 Apr 2023 11:46:29 +0000

flight 180184 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180184/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install fail in 180182 pass in 180184
 test-amd64-i386-libvirt-raw   7 xen-install                fail pass in 180182
 test-amd64-i386-xl-qemut-debianhvm-amd64 18 guest-localmigrate/x10 fail pass in 180182

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 180182 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180173
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180173
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180173
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180173
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180173
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180173
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180173
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180173
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180173
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180173
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2
baseline version:
 xen                  a5087069a8c40541ba81fa0e2850471c949932b3

Last test of basis   180173  2023-04-06 19:49:58 Z    1 days
Testing same since   180178  2023-04-07 09:41:23 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a5087069a8..ddaf7bb0cf  ddaf7bb0cfd27369252de52e4b03410c4065bad2 -> master


From xen-devel-bounces@lists.xenproject.org Sun Apr 09 03:09:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Apr 2023 03:09:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519282.806465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plLQh-0007JN-Pk; Sun, 09 Apr 2023 03:09:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519282.806465; Sun, 09 Apr 2023 03:09:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plLQh-0007JF-JS; Sun, 09 Apr 2023 03:09:23 +0000
Received: by outflank-mailman (input) for mailman id 519282;
 Sun, 09 Apr 2023 03:09:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plLQg-0007J5-6F; Sun, 09 Apr 2023 03:09:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plLQg-0007jj-2u; Sun, 09 Apr 2023 03:09:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plLQf-0007bo-OZ; Sun, 09 Apr 2023 03:09:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1plLQf-0005hQ-Nm; Sun, 09 Apr 2023 03:09:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4m4qTnMRLOEtgWnq9vxv0V/Hwr0o4k8E0CuYD8A7GoY=; b=vK6dXHQYXQ3r9sGTsmPrdUGqpB
	dY/ryEDv1Ii3LK9ORQcym9u8+P08Cac8+QBKh0cd4wN7suxcDEK21NnxN+BZAOW5DeXOx5TWsXYXv
	OexbQLQP0SAl3E+vbHZWX5qTIqKquMyrAkUs9F3c70kXkpdt6lsDK/1Y6FIp+0vRy/rQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180185-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180185: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=a79d5c76f705de81cb6b55ad279dde9759da06d2
X-Osstest-Versions-That:
    linux=aa318c48808c0aa73216bd94c54c4553d3663add
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 Apr 2023 03:09:21 +0000

flight 180185 linux-linus real [real]
flight 180186 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180185/
http://logs.test-lab.xenproject.org/osstest/logs/180186/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180186-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180183
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180183
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180183
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180183
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180183
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                a79d5c76f705de81cb6b55ad279dde9759da06d2
baseline version:
 linux                aa318c48808c0aa73216bd94c54c4553d3663add

Last test of basis   180183  2023-04-07 21:10:18 Z    1 days
Testing same since   180185  2023-04-08 19:13:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrien Thierry <athierry@redhat.com>
  Alistair Popple <apopple@nvidia.com>
  Andrew Morton <akpm@linux-foundation.org>
  Christoph Hellwig <hch@lst.de>
  Daniel Wagner <dwagner@suse.de>
  David Hildenbrand <david@redhat.com>
  Dexuan Cui <decui@microsoft.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Jens Axboe <axboe@kernel.dk>
  Keith Busch <kbusch@kernel.org>
  Laurence Oberman <loberman@redhat.com>
  Leonard Göhrs <l.goehrs@pengutronix.de>
  Li Zetao <lizetao1@huawei.com>
  Liam Howlett <Liam.Howlett@oracle.com>
  Liam R. Howlett <Liam.Howlett@oracle.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Ming Lei <ming.lei@redhat.com>
  Muchun Song <songmuchun@bytedance.com>
  Muhammad Usama Anjum <usama.anjum@collabora.com>
  Peng Zhang <zhangpeng.00@bytedance.com>
  Peter Xu <peterx@redhat.com>
  Petr Tesarik <petr.tesarik.ext@huawei.com>
  Ranjan Kumar <ranjan.kumar@broadcom.com>
  Rongwei Wang <rongwei.wang@linux.alibaba.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sergey Senozhatsky <senozhatsky@chromium.org>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Shiyang Ruan <ruansy.fnst@fujitsu.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Suren Baghdasaryan <surenb@google.com>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Wojciech Lukowicz <wlukowicz01@gmail.com>
  Yafang Shao <laoar.shao@gmail.com>
  ye xingchen <ye.xingchen@zte.com.cn>
  Yongchen Yin <wb-yyc939293@alibaba-inc.com>
  Yu Kuai <yukuai3@huawei.com>
  Zheng Yejian <zhengyejian1@huawei.com>
  Zhong Jinghua <zhongjinghua@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   aa318c48808c..a79d5c76f705  a79d5c76f705de81cb6b55ad279dde9759da06d2 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun Apr 09 08:49:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Apr 2023 08:49:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519308.806475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plQjn-0007Bq-7N; Sun, 09 Apr 2023 08:49:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519308.806475; Sun, 09 Apr 2023 08:49:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plQjn-0007Bj-4S; Sun, 09 Apr 2023 08:49:27 +0000
Received: by outflank-mailman (input) for mailman id 519308;
 Sun, 09 Apr 2023 08:49:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plQjl-0007BX-Ny; Sun, 09 Apr 2023 08:49:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plQjl-0007qJ-LB; Sun, 09 Apr 2023 08:49:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plQjl-0008Qw-5l; Sun, 09 Apr 2023 08:49:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1plQjl-0005mC-5P; Sun, 09 Apr 2023 08:49:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jAzyI+BdnrlSbxvQ+lRAeHkj6vabYYoxfXFA+nn9zdY=; b=pEsuTej0/t8XXEqMBHXgJXuJIM
	cy+Ghf6m9RGFXgVVmrGNlz6gwBzWL7NUs2UeSUD+HCjJ8ufmfAsKEyOxrovHQgg5knMAd8Im/GG60
	tzL0v87UJhlKGIrg+jxzLPRxbZMyrD5dT+iBXZM85UHSWYrVixPWtdhdXfSuSoGBFmR4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180187-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180187: tolerable trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
X-Osstest-Versions-That:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 Apr 2023 08:49:25 +0000

flight 180187 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180187/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-raw   7 xen-install      fail in 180184 pass in 180187
 test-amd64-i386-xl-qemut-debianhvm-amd64 18 guest-localmigrate/x10 fail in 180184 pass in 180187
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install          fail pass in 180184
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install          fail pass in 180184

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180184
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180184
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180184
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180184
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180184
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180184
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180184
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180184
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180184
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180184
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2
baseline version:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2

Last test of basis   180187  2023-04-09 01:51:59 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Apr 09 14:21:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Apr 2023 14:21:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519363.806485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plVuJ-0006DH-E4; Sun, 09 Apr 2023 14:20:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519363.806485; Sun, 09 Apr 2023 14:20:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plVuJ-0006DA-BH; Sun, 09 Apr 2023 14:20:39 +0000
Received: by outflank-mailman (input) for mailman id 519363;
 Sun, 09 Apr 2023 14:20:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plVuI-0006D0-51; Sun, 09 Apr 2023 14:20:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plVuI-0006hs-0z; Sun, 09 Apr 2023 14:20:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plVuH-0007VC-HE; Sun, 09 Apr 2023 14:20:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1plVuH-0005Ih-Gp; Sun, 09 Apr 2023 14:20:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qRKBS9wYy29ngsH/xxR4oW0rnZ3RhvasB0bX3I7iegI=; b=LTpX3h8nhh1UDMf/3SFg+B+q3t
	Lgxz4Tlpebw/QwL/YmOTU780jUk17TnnBkDyJpTkgCWNDqfkHWoxUUZKYcSwSli0nrtNPl316qlqM
	6dkjynoZAFNqnnoel+fKCtFZwGxHerR4LfuukJjsjPV0tt2dH9JQg7HyLJSx/Uk6MuSo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180188-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180188: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=cdc9718d5e590d6905361800b938b93f2b66818e
X-Osstest-Versions-That:
    linux=a79d5c76f705de81cb6b55ad279dde9759da06d2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 Apr 2023 14:20:37 +0000

flight 180188 linux-linus real [real]
flight 180189 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180188/
http://logs.test-lab.xenproject.org/osstest/logs/180189/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180189-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180185
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180185
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180185
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180185
 test-amd64-amd64-xl-vhd      21 guest-start/debian.repeat    fail  like 180185
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180185
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                cdc9718d5e590d6905361800b938b93f2b66818e
baseline version:
 linux                a79d5c76f705de81cb6b55ad279dde9759da06d2

Last test of basis   180185  2023-04-08 19:13:27 Z    0 days
Testing same since   180188  2023-04-09 03:13:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Biju Das <biju.das.jz@bp.renesas.com>
  Bjørn Mork <bjorn@mork.no>
  D Scott Phillips <scott@os.amperecomputing.com>
  Dan Carpenter <error27@gmail.com>
  David Lechner <david@lechnology.com>
  Enrico Sau <enrico.sau@gmail.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hans de Goede <hdegoede@redhat.com>
  Haotien Hsu <haotienh@nvidia.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Ian Ray <ian.ray@ge.com>
  Ibrahim Tilki <Ibrahim.Tilki@analog.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Jens Axboe <axboe@kernel.dk>
  Johan Hovold <johan@kernel.org>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kees Jan Koster <kjkoster@kjkoster.org>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Lars-Peter Clausen <lars@metafoo.de>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marc Zyngier <maz@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Mehdi Djait <mehdi.djait.k@gmail.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Mårten Lindahl <marten.lindahl@axis.com>
  Nuno Sá <nuno.sa@analog.com>
  Patrik Dahlström <risca@dalakolonin.se>
  Pawel Laszczak <pawell@cadence.com>
  RD Babiera <rdbabiera@google.com>
  Sandeep Dhavale <dhavale@google.com>
  Sherry Sun <sherry.sun@nxp.com>
  Steve Clevenger <scclevenger@os.amperecomputing.com>
  Steve French <stfrench@microsoft.com>
  Suzuki K Poulose <suzuki.poulose@arm.com>
  Thiago Rafael Becker <tbecker@redhat.com>
  Wayne Chang <waynec@nvidia.com>
  William Breathitt Gray <william.gray@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   a79d5c76f705..cdc9718d5e59  cdc9718d5e590d6905361800b938b93f2b66818e -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Apr 10 00:24:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Apr 2023 00:24:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519430.806494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plfK2-00060i-4K; Mon, 10 Apr 2023 00:23:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519430.806494; Mon, 10 Apr 2023 00:23:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plfK2-00060b-19; Mon, 10 Apr 2023 00:23:50 +0000
Received: by outflank-mailman (input) for mailman id 519430;
 Mon, 10 Apr 2023 00:23:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plfK0-00060R-Mk; Mon, 10 Apr 2023 00:23:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plfK0-0004uX-JX; Mon, 10 Apr 2023 00:23:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plfK0-0007Tm-2l; Mon, 10 Apr 2023 00:23:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1plfK0-0006io-2C; Mon, 10 Apr 2023 00:23:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rxTbtFR6M4X3MyhMwnvdYHD1x4laf0t2NbiwMzo84FQ=; b=TDxLM/SDDO6FFefQB0pD0j5ULB
	5zOU8Qg4X/zJy0D0fvyvpPbHzxpYgZV8Y54zsyIK1aJ8XszZmEVsyawOchMbJMBFW0Z5Ts4qUWFX2
	5/0N1rO0p8yyYUOUxUvOGVzT1gNs1UlEVyPK6LwX4lFzrGO/YxjlOnZnDEW/C8Z+XPcw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180190-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180190: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-vhd:guest-start.2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=faf8f41858e2792925b2c526e16d2f539a53a730
X-Osstest-Versions-That:
    linux=cdc9718d5e590d6905361800b938b93f2b66818e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Apr 2023 00:23:48 +0000

flight 180190 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180190/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-vhd      22 guest-start.2           fail blocked in 180188
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180188
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180188
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180188
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180188
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180188
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                faf8f41858e2792925b2c526e16d2f539a53a730
baseline version:
 linux                cdc9718d5e590d6905361800b938b93f2b66818e

Last test of basis   180188  2023-04-09 03:13:59 Z    0 days
Testing same since   180190  2023-04-09 17:41:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bjorn Helgaas <bhelgaas@google.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Dan Williams <dan.j.williams@intel.com>
  Dave Jiang <dave.jiang@intel.com>
  David R <david@unsolicited.net>
  Eric DeVolder <eric.devolder@oracle.com>
  Gregory Price <gregory.price@memverge.com>
  Ira Weiny <ira.weiny@intel.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kan Liang <kan.liang@linux.intel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lukas Wunner <lukas@wunner.de>
  Mario Limonciello <mario.limonciello@amd.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Thomas Gleixner <tglx@linutronix.de>
  Tony Luck <tony.luck@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   cdc9718d5e59..faf8f41858e2  faf8f41858e2792925b2c526e16d2f539a53a730 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Apr 10 08:00:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Apr 2023 08:00:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519456.806504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plmRF-0007Tb-Ae; Mon, 10 Apr 2023 07:59:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519456.806504; Mon, 10 Apr 2023 07:59:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plmRF-0007TU-82; Mon, 10 Apr 2023 07:59:45 +0000
Received: by outflank-mailman (input) for mailman id 519456;
 Mon, 10 Apr 2023 07:59:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plmRD-0007TK-PX; Mon, 10 Apr 2023 07:59:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plmRD-0006kl-Mr; Mon, 10 Apr 2023 07:59:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plmRD-0003O8-7E; Mon, 10 Apr 2023 07:59:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1plmRD-0000bK-6k; Mon, 10 Apr 2023 07:59:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qyD7GKUih4UvNk+sg61LBgr1MK3ZJtttrdTHE7luv1A=; b=q4H+tE++JWEe5hqwxVoB0YNVjX
	+2Ok7Y8ozwLSjejII5ATrTtmvYyFWpfRYymBFZuhdp+1uCQvyn9vKzzN+Dc5e9nPXDb6HY/xfma7p
	TIVLvA/A+6qEmqaM0G8U09BiVSz+bq+prBQ7DZdVdY1RySg1JAFY7kYodMzT3KIpeJzo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180193-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180193: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=cf4af503fb61e7a942e05e61627364cad48c5781
X-Osstest-Versions-That:
    ovmf=6405cd03048c1850d5047bf66fefa04189a05c94
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Apr 2023 07:59:43 +0000

flight 180193 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180193/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cf4af503fb61e7a942e05e61627364cad48c5781
baseline version:
 ovmf                 6405cd03048c1850d5047bf66fefa04189a05c94

Last test of basis   180180  2023-04-07 13:43:54 Z    2 days
Testing same since   180193  2023-04-10 06:12:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chris Johnson <chris.n.johnson@intel.com>
  Jiewen Yao <jiewen.yao@intel.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Michael D Kinney <michael.d.kinney@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6405cd0304..cf4af503fb  cf4af503fb61e7a942e05e61627364cad48c5781 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Apr 10 08:40:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Apr 2023 08:40:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519469.806515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pln4D-00040m-RW; Mon, 10 Apr 2023 08:40:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519469.806515; Mon, 10 Apr 2023 08:40:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pln4D-000409-Nv; Mon, 10 Apr 2023 08:40:01 +0000
Received: by outflank-mailman (input) for mailman id 519469;
 Mon, 10 Apr 2023 08:40:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pln4C-0003xJ-Hf; Mon, 10 Apr 2023 08:40:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pln4C-0008Ji-Fa; Mon, 10 Apr 2023 08:40:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pln4C-0004Oz-1Z; Mon, 10 Apr 2023 08:40:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pln4C-0002k9-16; Mon, 10 Apr 2023 08:40:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oI5t8EYVe3CMtZPAvTZbwvGad2l6d5EFXj8gs9Vhisw=; b=YRZ2BsolizuwYyTbkklK7DFcRq
	ukDNtTNawioYbzM3VEfJD0mRfpzhTfLyLNc1IIa+Y4Q71OfS3NHNZQQsKOL7wpbQwC9KSQx0tIGb+
	oops6BOUZIGIi6/4u4067hgzQmx69HnyU68cc+rHQ/baf8Wh79GAvLFFvx21eFtDB9IE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180191-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180191: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=09a9639e56c01c7a00d6c0ca63f4c7c41abe075d
X-Osstest-Versions-That:
    linux=faf8f41858e2792925b2c526e16d2f539a53a730
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Apr 2023 08:40:00 +0000

flight 180191 linux-linus real [real]
flight 180194 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180191/
http://logs.test-lab.xenproject.org/osstest/logs/180194/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180194-retest
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180194-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180190
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180190
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180190
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180190
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180190
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                09a9639e56c01c7a00d6c0ca63f4c7c41abe075d
baseline version:
 linux                faf8f41858e2792925b2c526e16d2f539a53a730

Last test of basis   180190  2023-04-09 17:41:25 Z    0 days
Testing same since   180191  2023-04-10 00:42:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Torvalds <torvalds@linux-foundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   faf8f41858e2..09a9639e56c0  09a9639e56c01c7a00d6c0ca63f4c7c41abe075d -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Apr 10 11:08:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Apr 2023 11:08:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519509.806525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plpO3-0001rf-BG; Mon, 10 Apr 2023 11:08:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519509.806525; Mon, 10 Apr 2023 11:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plpO3-0001rY-6v; Mon, 10 Apr 2023 11:08:39 +0000
Received: by outflank-mailman (input) for mailman id 519509;
 Mon, 10 Apr 2023 11:08:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plpO1-0001rO-JU; Mon, 10 Apr 2023 11:08:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plpO1-0003ED-FZ; Mon, 10 Apr 2023 11:08:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plpO0-0000De-T2; Mon, 10 Apr 2023 11:08:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1plpO0-0006ov-SY; Mon, 10 Apr 2023 11:08:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Eo8V4wRWDDNdeDmFAFd+nCc9EVo1NzjJ+qXiNc4szrU=; b=CHsExxIDbGYnjW5aG0B3jDExVi
	/+3xoVzRHspMzeUogS03g7c6Va1x06wgZZgkAuyVy1YniAXML09fshlHQBtog5a5Cmee6uZk1Mu4b
	ix0U1Bi2H2rlTBxXV1KYv2UGHBmAsYPT2ijkilXcNx2f3QmiHYIHqSRwxDj69/dr9iD4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180192-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180192: tolerable trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
X-Osstest-Versions-That:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Apr 2023 11:08:36 +0000

flight 180192 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180192/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd 7 xen-install fail in 180187 pass in 180192
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 180187 pass in 180192
 test-amd64-i386-libvirt       7 xen-install                fail pass in 180187
 test-amd64-i386-pair         11 xen-install/dst_host       fail pass in 180187
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat  fail pass in 180187

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 180187 like 180184
 test-amd64-i386-libvirt     15 migrate-support-check fail in 180187 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180187
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180187
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180187
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180187
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180187
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180187
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180187
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180187
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180187
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2
baseline version:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2

Last test of basis   180192  2023-04-10 01:52:35 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Apr 10 16:39:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Apr 2023 16:39:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519542.806535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pluYE-0000Ov-Fw; Mon, 10 Apr 2023 16:39:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519542.806535; Mon, 10 Apr 2023 16:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pluYE-0000Oo-CR; Mon, 10 Apr 2023 16:39:30 +0000
Received: by outflank-mailman (input) for mailman id 519542;
 Mon, 10 Apr 2023 16:39:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pluYD-0000Oe-E7; Mon, 10 Apr 2023 16:39:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pluYD-0003La-C8; Mon, 10 Apr 2023 16:39:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pluYC-0002ek-SF; Mon, 10 Apr 2023 16:39:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pluYC-0008L7-Ro; Mon, 10 Apr 2023 16:39:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XB5ke6pxsM4SVEjmueJhrWhiqKxPUEEamVn9vkeaWdQ=; b=du472UvvHqmQdReA30qT0QKeMK
	sYZowh5ACodDh5u75gyl8EBfxZaq/EFmzEo1tXAf5ED0iFy4FHw3Xh03soUXe1U3lz4YqtRKklyLE
	4YXXm+ecfYVf99BMLVuvUXERO0b7Y+U/Tmvai4qMzUOJkl6ynotVJaRbkoIHDt4/Zz3w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180196-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180196: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=61652efd04a7585a779226137d0f9739a75aac69
X-Osstest-Versions-That:
    ovmf=cf4af503fb61e7a942e05e61627364cad48c5781
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Apr 2023 16:39:28 +0000

flight 180196 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180196/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 61652efd04a7585a779226137d0f9739a75aac69
baseline version:
 ovmf                 cf4af503fb61e7a942e05e61627364cad48c5781

Last test of basis   180193  2023-04-10 06:12:15 Z    0 days
Testing same since   180196  2023-04-10 14:40:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cf4af503fb..61652efd04  61652efd04a7585a779226137d0f9739a75aac69 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Apr 10 20:47:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Apr 2023 20:47:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519558.806546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plyPb-00089O-Di; Mon, 10 Apr 2023 20:46:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519558.806546; Mon, 10 Apr 2023 20:46:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1plyPb-00089H-8R; Mon, 10 Apr 2023 20:46:51 +0000
Received: by outflank-mailman (input) for mailman id 519558;
 Mon, 10 Apr 2023 20:46:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plyPa-000896-KH; Mon, 10 Apr 2023 20:46:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plyPa-0000Mu-DF; Mon, 10 Apr 2023 20:46:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1plyPZ-0000pn-Vi; Mon, 10 Apr 2023 20:46:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1plyPZ-0001L0-VF; Mon, 10 Apr 2023 20:46:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EPF9VAc5EtJ5H7fVFrLJnYoesRL+m7pjnVnsj9CoN9Y=; b=xENq8GQMab2xlrMV4iuXLfcHaR
	DQsvkwMylpH8UsPe29P5s7k9RzzaYpxpC1l7EemyuHtb/jinaM0PMaJG9xCB2O7IM9luV58p40yEJ
	wVm24/chcQjbSw6Tj2yRkFMeLkwaawTBlmwIS9IKaThpP81bPrVml/fops6r9aRbNYG4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180195-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180195: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-freebsd10-i386:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-freebsd10-amd64:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start.2:fail:heisenbug
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=08dede07030973c1053868bc64de7e10bfa02ad6
X-Osstest-Versions-That:
    qemuu=c6f3cbca32bde9ee94d9949aa63e8a7ef2d7bc5b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Apr 2023 20:46:49 +0000

flight 180195 qemu-mainline real [real]
flight 180197 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180195/
http://logs.test-lab.xenproject.org/osstest/logs/180197/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-i386  7 xen-install       fail pass in 180197-retest
 test-amd64-i386-freebsd10-amd64  7 xen-install      fail pass in 180197-retest
 test-amd64-amd64-xl-qcow2    22 guest-start.2       fail pass in 180197-retest
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 180197-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180168
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180168
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180168
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180168
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180168
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                08dede07030973c1053868bc64de7e10bfa02ad6
baseline version:
 qemuu                c6f3cbca32bde9ee94d9949aa63e8a7ef2d7bc5b

Last test of basis   180168  2023-04-06 10:37:17 Z    4 days
Testing same since   180195  2023-04-10 13:39:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Cédric Le Goater <clg@kaod.org>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   c6f3cbca32..08dede0703  08dede07030973c1053868bc64de7e10bfa02ad6 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 05:59:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 05:59:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519581.806555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pm72N-0001a0-Ts; Tue, 11 Apr 2023 05:59:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519581.806555; Tue, 11 Apr 2023 05:59:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pm72N-0001Zs-PT; Tue, 11 Apr 2023 05:59:27 +0000
Received: by outflank-mailman (input) for mailman id 519581;
 Tue, 11 Apr 2023 05:59:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pm72M-0001Zi-6K; Tue, 11 Apr 2023 05:59:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pm72M-0004U3-3o; Tue, 11 Apr 2023 05:59:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pm72L-00022j-LT; Tue, 11 Apr 2023 05:59:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pm72L-0008D8-Kg; Tue, 11 Apr 2023 05:59:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UEQ4XIEi+F4a7HOzfrNhoD8BNy8nW7nRLP12fi23a3A=; b=FNtbrqeh8kSoPG5QOeqhDAGACc
	wa2X881yVsa6BQKRTn+qE5t0aUg0a7nLSMHOQ1irETLujkhIb1kvnnlnOXpF+iXpX/2dB5QWOSDtx
	6l9j8W5tGQRh4brveOzVEzS6NaXqWUuAD04AcFfSCrzV7QqjQv9NGQs2GnhYxorPllNE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180198-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180198: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=26aeb3b5894e0f3d17354e306002c59fa060e1c5
X-Osstest-Versions-That:
    qemuu=08dede07030973c1053868bc64de7e10bfa02ad6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 Apr 2023 05:59:25 +0000

flight 180198 qemu-mainline real [real]
flight 180201 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180198/
http://logs.test-lab.xenproject.org/osstest/logs/180201/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail pass in 180201-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180195
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180195
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180195
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180195
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180195
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                26aeb3b5894e0f3d17354e306002c59fa060e1c5
baseline version:
 qemuu                08dede07030973c1053868bc64de7e10bfa02ad6

Last test of basis   180195  2023-04-10 13:39:02 Z    0 days
Testing same since   180198  2023-04-10 21:08:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   08dede0703..26aeb3b589  26aeb3b5894e0f3d17354e306002c59fa060e1c5 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 08:50:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 08:50:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519598.806565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pm9h4-00029h-IY; Tue, 11 Apr 2023 08:49:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519598.806565; Tue, 11 Apr 2023 08:49:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pm9h4-00029a-Fg; Tue, 11 Apr 2023 08:49:38 +0000
Received: by outflank-mailman (input) for mailman id 519598;
 Tue, 11 Apr 2023 08:49:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pm9h3-00029Q-LL; Tue, 11 Apr 2023 08:49:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pm9h3-0000OV-GF; Tue, 11 Apr 2023 08:49:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pm9h3-0006aQ-8v; Tue, 11 Apr 2023 08:49:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pm9h3-0007YC-8U; Tue, 11 Apr 2023 08:49:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zOoTt19u0fkUjIzYFKjN5Jbw40+8n+0j0w+ATl6eIBY=; b=FxpUwTLjcSalwFAzO3ctEnb7CR
	qxJPkrAufrqFI73OB39dvWK1h7AyxqmB9N1R0g0d5lyS3wkNvZM4/SfppG66ZrWGjgqSlF15DJJNc
	EuHD9jhPt4UtoTKRjeU0HHM7ac1zoZWlMDPGVujUeHtKxUPZFvuJ75gtdGvDrdJUnN1M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180199-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180199: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl:debian-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=0d3eb744aed40ffce820cded61d7eac515199165
X-Osstest-Versions-That:
    linux=09a9639e56c01c7a00d6c0ca63f4c7c41abe075d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 Apr 2023 08:49:37 +0000

flight 180199 linux-linus real [real]
flight 180202 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180199/
http://logs.test-lab.xenproject.org/osstest/logs/180202/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl          12 debian-install      fail pass in 180202-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl         15 migrate-support-check fail in 180202 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 180202 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180191
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180191
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180191
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180191
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180191
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                0d3eb744aed40ffce820cded61d7eac515199165
baseline version:
 linux                09a9639e56c01c7a00d6c0ca63f4c7c41abe075d

Last test of basis   180191  2023-04-10 00:42:31 Z    1 days
Testing same since   180199  2023-04-10 21:41:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Arthur Grillo <arthurgrillo@riseup.net>
  Christian Schoenebeck <linux_oss@crudebyte.com>
  David Gow <davidgow@google.com>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Eli Cohen <elic@nvidia.com>
  Eric Van Hensbergen <ericvh@kernel.org>
  Ivan Orlov <ivan.orlov0322@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Christie <michael.christie@oracle.com>
  Paul E. McKenney <paulmck@kernel.org>
  Richard Weinberger <richard@nod.at>
  Roberto Sassu <roberto.sassu@huaweicloud.com>
  Ross Zwisler <zwisler@chromium.org>
  Ross Zwisler <zwisler@google.com>
  SeongJae Park <sj@kernel.org>
  Stefano Garzarella <sgarzare@redhat.com>
  Uladzislau Rezki (Sony) <urezki@gmail.com>
  Zheng Wang <zyytlz.wz@163.com>
  Ziwei Dai <ziwei.dai@unisoc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
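The hints above describe two independent remedies: give the server-side hook the executable bit, or silence the advice message via `git config`. A minimal sketch of both, using a throwaway bare repository (the temp-dir path and hook body are illustrative, not from the osstest setup):

```shell
set -e
repo=$(mktemp -d)
git init --bare -q "$repo"

# Hooks that lack the executable bit are skipped with the warning above.
printf '#!/bin/sh\nexit 0\n' > "$repo/hooks/post-update"

chmod +x "$repo/hooks/post-update"              # fix 1: make the hook executable
git -C "$repo" config advice.ignoredHook false  # fix 2: suppress the warning instead

test -x "$repo/hooks/post-update" && echo ok
```

Either change alone stops the warning; making the hook executable also means it actually runs on the next push.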
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   09a9639e56c0..0d3eb744aed4  0d3eb744aed40ffce820cded61d7eac515199165 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 09:47:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 09:47:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519609.806574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmAbD-0008QT-Sm; Tue, 11 Apr 2023 09:47:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519609.806574; Tue, 11 Apr 2023 09:47:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmAbD-0008QM-Q9; Tue, 11 Apr 2023 09:47:39 +0000
Received: by outflank-mailman (input) for mailman id 519609;
 Tue, 11 Apr 2023 09:47:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmAbC-0008QC-Dw; Tue, 11 Apr 2023 09:47:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmAbC-0001eW-Bx; Tue, 11 Apr 2023 09:47:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmAbB-0000Rv-W6; Tue, 11 Apr 2023 09:47:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmAbB-0007si-VT; Tue, 11 Apr 2023 09:47:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Vxyce2jNMGY/qFEBEkuv4jDLk2/6we2Tca4ZRPSHPDw=; b=qLXVCOFUSvZ6x2qBJG1Izx3VEj
	z7O49UtdsKJn6Jl/LjkEx/ggD8gLaVIv5htGYbwNJJxvBlMaokY/EmIROgTTAsNfsZdfwX53MdI6O
	/iq8taNwS6qyHZxmNqeFpEAvcGkXo1kKA6oHE/qADjAghy8iTwwnoztqktwtYKUqe1Gs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180203-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180203: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=51734dfc48466eddfb0f8acdb24518266c36c905
X-Osstest-Versions-That:
    ovmf=61652efd04a7585a779226137d0f9739a75aac69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 Apr 2023 09:47:37 +0000

flight 180203 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180203/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 51734dfc48466eddfb0f8acdb24518266c36c905
baseline version:
 ovmf                 61652efd04a7585a779226137d0f9739a75aac69

Last test of basis   180196  2023-04-10 14:40:49 Z    0 days
Testing same since   180203  2023-04-11 07:40:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Lin, MillerX <millerx.lin@intel.com>
  MillerX Lin <millerx.lin@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   61652efd04..51734dfc48  51734dfc48466eddfb0f8acdb24518266c36c905 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 10:31:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 10:31:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519618.806585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmBGs-0005Hl-6I; Tue, 11 Apr 2023 10:30:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519618.806585; Tue, 11 Apr 2023 10:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmBGs-0005He-3S; Tue, 11 Apr 2023 10:30:42 +0000
Received: by outflank-mailman (input) for mailman id 519618;
 Tue, 11 Apr 2023 10:30:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R6J/=AC=invisiblethingslab.com=simon@srs-se1.protection.inumbo.net>)
 id 1pmBGq-0005HY-7j
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 10:30:40 +0000
Received: from out5-smtp.messagingengine.com (out5-smtp.messagingengine.com
 [66.111.4.29]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e56b8fec-d853-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 12:30:37 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id 9CA585C01FA;
 Tue, 11 Apr 2023 06:30:35 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute2.internal (MEProxy); Tue, 11 Apr 2023 06:30:35 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 11 Apr 2023 06:30:33 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e56b8fec-d853-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:message-id:mime-version:reply-to
	:sender:subject:subject:to:to; s=fm2; t=1681209035; x=
	1681295435; bh=uvIpKHbegzj4xJPF2DM2w/cwsEorqqtDcMgBzDcQiro=; b=M
	XRgYMx/i98nvYqN4igFeDSN8a7RAmtIwdDTTj/cwqHrzkOrZO6kl1Sht0uxQsxa+
	awf/kIEfOVg9PVGUSd34h3lvxW4QzsdGcPZBcixeC6trV83NmvfitqSm81nWC/Ci
	jgSdgn0JovxylGbSFyR3KmZMFY5pjPYq4rJjjY04Wmcr9R6RNA1gaeZT4Z/3ElG8
	DTd2iKQ5xMZZQaHbycv13Z8lZNMEhjPg9O5FAuyCpdEgRi8LPLtkfEoefJoLVq6m
	s2CiKi0J3Ut/19RRRQsSAaQU0XdlFRESEFRmP51+MtbGbPeIW3r4UL0C5RphgfeF
	4wa6+tMQ8sAqaCGJ+o2oQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:message-id
	:mime-version:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1681209035; x=1681295435; bh=uvIpKHbegzj4xJPF2DM2w/cwsEorqqtDcMg
	BzDcQiro=; b=K1avxbKVpjhEKX4yRjjGqz78k5/JPOdDrS3fgKY0SqX/EcEePoE
	nr8y7H/A1f5i8ePjjXo91IV+IPybpB8jTpR/96NGtwt4nuclMGORuD4ZCrbQUnYt
	aAFrT9Cq2FXEwYhTxHy5N71e6Lz1DaWRt/MXDehDCygdQoBgQ1/BOn1lEESnbo+K
	h9rUn7yXRl4zccj9q3kyPiAVW0f68ieT+H4DIMTzXQk2IX4JEOOtUPuJ4nniOJMl
	SZ95SVTk2vNbt2rYh1FXMCR7ZD4uvYMb4nHRN4hYlT6zQwEliuPsGKIQYOn/ZLwm
	fPi8Kf54eqCXyz2hb3m6lPZXWjoRY7c7rLw==
Feedback-ID: idc5945a3:Fastmail
Message-ID: <cb408368-077d-edb5-b4ad-f80086db48c1@invisiblethingslab.com>
Date: Tue, 11 Apr 2023 12:30:10 +0200
MIME-Version: 1.0
Content-Language: en-US
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
From: Simon Gaiser <simon@invisiblethingslab.com>
Subject: RFC: disable HPET legacy mode after timer check
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------xK2STNKhiONHp1GmYwabHL6P"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------xK2STNKhiONHp1GmYwabHL6P
Content-Type: multipart/mixed; boundary="------------voXpEBUiwVU5bYblL6WJgkAT";
 protected-headers="v1"
From: Simon Gaiser <simon@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Message-ID: <cb408368-077d-edb5-b4ad-f80086db48c1@invisiblethingslab.com>
Subject: RFC: disable HPET legacy mode after timer check

--------------voXpEBUiwVU5bYblL6WJgkAT
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hi,

I have been recently looking into getting S0ix working on Xen [1].

Thanks to a tip from Andrew I found that the HPET legacy mode was
preventing my test system from reaching a package C-state lower than PC7
and thereby also preventing S0ix residency.

For testing I simply modified check_timer() to disable it again after it
checked the timer irq:

--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1966,6 +1969,8 @@ static void __init check_timer(void)
 
             if ( timer_irq_works() )
             {
+                hpet_disable_legacy_replacement_mode();
                 local_irq_restore(flags);
                 return;
             }
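
For readers without the Xen tree at hand, here is a rough, self-contained sketch of what such a disable helper boils down to at the hardware level, per the HPET specification's general configuration register (offset 0x10, LEG_RT_CNF bit 1). The register accessors, macro names, and base pointer below are illustrative for this sketch, not Xen's actual definitions:

```c
#include <stdint.h>

#define HPET_CFG        0x010           /* general configuration register */
#define HPET_CFG_ENABLE (1u << 0)       /* ENABLE_CNF: main counter runs   */
#define HPET_CFG_LEGACY (1u << 1)       /* LEG_RT_CNF: legacy replacement  */

/* Mapped HPET MMIO region; assumed to be set up by platform init. */
static volatile uint8_t *hpet_base;

static uint32_t hpet_read32(unsigned int off)
{
    return *(volatile uint32_t *)(hpet_base + off);
}

static void hpet_write32(uint32_t val, unsigned int off)
{
    *(volatile uint32_t *)(hpet_base + off) = val;
}

void hpet_disable_legacy_replacement_mode(void)
{
    uint32_t cfg = hpet_read32(HPET_CFG);

    /* Clear LEG_RT_CNF so IRQ0/IRQ8 routing reverts to the PIT/RTC,
     * leaving ENABLE_CNF and the rest of the register untouched. */
    if ( cfg & HPET_CFG_LEGACY )
        hpet_write32(cfg & ~HPET_CFG_LEGACY, HPET_CFG);
}
```

With legacy replacement routing cleared, the HPET no longer drives the timer interrupt lines, which is what lets the package reach the deeper C-states discussed below.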


With this [2] I'm able to reach S0ix residency for some time, and for short
periods the system's power consumption goes down to the same level as with
native Linux!

It reaches low power states only for a fraction of the suspend to idle
time, so something still makes the CPU/chipset think it should leave the
low power mode, but that's another topic.

I tried to understand how all the timer code interacts with disabling
the legacy mode. I think it would only break cpuidle if X86_FEATURE_ARAT
is not available (which it is on my test system, and indeed I didn't run
into obvious breakage).

Is this (disabled PIT && !ARAT) a configuration that exists (and needs
to be supported)?

Did I miss something else? (Very much possible, given that this is way
above my existing experience with X86 and Xen internals.)

Simon

[1]: https://lore.kernel.org/xen-devel/9051e484-b128-715a-9253-48af8e47bb9d@invisiblethingslab.com/
[2]: Plus [3] and some hack to have mwait-idle on Tiger Lake.
[3]: https://lore.kernel.org/xen-devel/20230313134102.3157-1-simon@invisiblethingslab.com/

--------------voXpEBUiwVU5bYblL6WJgkAT--

--------------xK2STNKhiONHp1GmYwabHL6P
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEE3E8ezGzG3N1CTQ//kO9xfO/xly8FAmQ1NrQACgkQkO9xfO/x
ly82/hAAnCpnkQefmNsAVFIbmpZOLKU3sQajgcA5wswNrvFXk9ApnZ3MbGfgjYxv
gyZtgTjbl1yTdC07CNA5upmqu2jO1wFmSyq/dpVgWHlXVtbBF/zxPf+AbFk252wb
f4eEm7hSjBnUr8TrG59ZcM5VrQDa/jQkzodE0C7pEpzyBt9HECFc0blmPNL1Ot85
FdjvZvHfbXP1wn8/5znpArBldab68vh4bA7puA9Wrmr2zuBr/Jo/7pA/+7DACo4I
wYMBLUkAra8fA+EDKT14OfwvDGkF5MuutFQRkgYDgFNP0j0qrZ0HgqTCmOJgp2+Z
WjKi5FSab50KPTPJhFO7fjsvEXwHrT9i/p6gB+LsVxTbCJwLFH8wbMsjkvaowzKA
H6CK1AN3yUSQo1uW59bmZYqXe5ZlaNIM2l8dGpX/CsKQ83z5NdlhwalkQIadYy5t
nCDrV02fQzIQw+otyoj23rkzKU9mJOabw5q0kfDHy6t6Wr9GD930SdLb5nK+Rl5z
XWbv5CM/pVoJjXv2ksDPqdKnv5uqIi8ZIEtiuYicaJM3bOFrqG4uGZmdP4yX021D
cNThqzSnMb46QihF/rbaHOh4rEYTM9eUXWvSqTjxeoh7Hjuih7hOYupARwpoy64U
D/xfOVZEPt2zwuQMgrBuUj4VnAMd78vaXGa4vUKzmO2A5yAzdrk=
=zCNu
-----END PGP SIGNATURE-----

--------------xK2STNKhiONHp1GmYwabHL6P--


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 11:21:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 11:21:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519629.806594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmC3q-0002GM-TV; Tue, 11 Apr 2023 11:21:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519629.806594; Tue, 11 Apr 2023 11:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmC3q-0002GF-Qq; Tue, 11 Apr 2023 11:21:18 +0000
Received: by outflank-mailman (input) for mailman id 519629;
 Tue, 11 Apr 2023 11:21:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7mzh=AC=citrix.com=prvs=458d1a855=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pmC3o-0002G8-Oj
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 11:21:16 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f77e551e-d85a-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 13:21:14 +0200 (CEST)
Received: from mail-dm6nam12lp2175.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 11 Apr 2023 07:21:12 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM8PR03MB6229.namprd03.prod.outlook.com (2603:10b6:8:24::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6277.34; Tue, 11 Apr 2023 11:21:08 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6277.036; Tue, 11 Apr 2023
 11:21:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f77e551e-d85a-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681212074;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=xv3mdbnecSply7SEWRhsOO1AtD/e1KmcibV1xhGs/Ho=;
  b=TVs12RONVfH/f6QThQ30ixMlr4zHUWWaLpJYAqtruPEqZTytrc0bt3O3
   y3IcB34ZQjs96UXs+sPcq6CY/qYc8Xyj3Zs7NkzCuGnVrsEq5z8h8cM1W
   ALhCdpgAQYfSS8Y/XVHQmGGZ3OXcOLYf7/lYtm4b1mKwfVkEhj6u/77Oq
   o=;
X-IronPort-RemoteIP: 104.47.59.175
X-IronPort-MID: 105088851
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,336,1673931600"; 
   d="scan'208";a="105088851"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mF6vPimqJVDHjhK2HcaQdB5HMWxQvcp8sa26GFl1IT8=;
 b=tTugQdwpMLyAP33XThmMHWOqaYkKhe36SGPnGeazXeagfU77f2WWTvDZ81FpnkQJyRvRiurPxTzIORpHj/O2zYLDW24WJOUifl0qT39NxOTlSL08S4VQ3poBS53oH2h+YOwmZykvF6QwRm+ZNC2eyleLaBJbDyy1UAWLIh2X/7Y=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <0ac3fce6-dcd2-4521-6207-ede4d90e656b@citrix.com>
Date: Tue, 11 Apr 2023 12:20:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: RFC: disable HPET legacy mode after timer check
Content-Language: en-GB
To: Simon Gaiser <simon@invisiblethingslab.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
References: <cb408368-077d-edb5-b4ad-f80086db48c1@invisiblethingslab.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <cb408368-077d-edb5-b4ad-f80086db48c1@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0680.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:351::9) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DM8PR03MB6229:EE_
X-MS-Office365-Filtering-Correlation-Id: 03dd4f92-6424-4dd0-684e-08db3a7ed8cc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 03dd4f92-6424-4dd0-684e-08db3a7ed8cc
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 11:21:08.3674
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WXRyY/d1FOnC5J8q/+2tF6CwxMHpSnR9th5E1JS8mjs+z0QF5eTf2SoikSBmS6in4708SomZgh7Pu6kGx0B3t0Ifz1qR/W6E/bFNVh/i/6M=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM8PR03MB6229

On 11/04/2023 11:30 am, Simon Gaiser wrote:
> Hi,
>
> I have been recently looking into getting S0ix working on Xen [1].
>
> Thanks to a tip from Andrew I found that the HPET legacy mode was
> preventing my test system from reaching a package C-state lower than PC7
> and thereby also preventing S0ix residency.
>
> For testing I simply modified check_timer() to disable it again after it
> checked the timer irq:
>
> --- a/xen/arch/x86/io_apic.c
> +++ b/xen/arch/x86/io_apic.c
> @@ -1966,6 +1969,8 @@ static void __init check_timer(void)
>  
>              if ( timer_irq_works() )
>              {
> +                hpet_disable_legacy_replacement_mode();
>                  local_irq_restore(flags);
>                  return;
>              }
>
>
> With this [2] I'm able to reach S0ix residency for some time, and for short
> periods the system's power consumption goes down to the same level as with
> native Linux!

Excellent progress!

> It reaches low power states only for a fraction of the suspend to idle
> time, so something still makes the CPU/chipset think it should leave the
> low power mode, but that's another topic.

Do you have any further info here?  There are a range of possibilities,
from excess timers in Xen (e.g. PV guests default to a 100Hz timer even
though no guests actually want it AFAICT), or the 1s TSC rendezvous
(which isn't actually needed on modern systems), all the way to the
platform devices not entering d3hot.

>
> I tried to understand how all the timer code interacts with disabling
> the legacy mode. I think it would only break cpuidle if X86_FEATURE_ARAT
> is not available (which it is on my test system, and indeed I didn't run
> into obvious breakage).
>
> Is this (disabled PIT && !ARAT) a configuration that exists (and needs
> to be supported)?
>
> Did I miss something else? (Very much possible, given that this is way
> above my existing experience with X86 and Xen internals.)

Xen's code is a mess and needs an overhaul.

Right now, we're using the timer as "a source of interrupts" to try and
check that we've got things set up suitably.  But this doesn't need to
be the PIT, or a timer at all - it just needs to be "an interrupt coming
in from the platform".

Furthermore, there will soon be client platforms with no PIT/HPET/etc. at
all (the HPET is known broken in <PC7, and isn't used these days anyway in
order to get deeper sleep states working), so this logic is going to
explode on us yet again.

AFAICT moving forwards, the expectation is to use TSCs as the
clocksource, and the deadline timer for interrupts.
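The two CPUID feature bits this thread hinges on (ARAT, so the local APIC timer keeps running in deep C-states, and TSC-deadline, for the deadline timer mode) can be probed roughly as follows. Bit positions are from the Intel SDM; the helper names are made up for this sketch, not Xen's cpufeature machinery, and a GCC/Clang x86 build is assumed:

```c
#include <stdbool.h>
#include <cpuid.h>

/* CPUID.01H:ECX bit 24: TSC-deadline mode of the local APIC timer. */
static bool cpu_has_tsc_deadline(void)
{
    unsigned int eax, ebx, ecx, edx;

    if ( !__get_cpuid(1, &eax, &ebx, &ecx, &edx) )
        return false;
    return ecx & (1u << 24);
}

/* CPUID.06H:EAX bit 2: ARAT (Always Running APIC Timer), i.e. the
 * APIC timer is not stopped in deep C-states. */
static bool cpu_has_arat(void)
{
    unsigned int eax, ebx, ecx, edx;

    if ( !__get_cpuid(6, &eax, &ebx, &ecx, &edx) )
        return false;
    return eax & (1u << 2);
}
```

On hardware lacking ARAT, some always-running platform timer (PIT or HPET legacy replacement) is still needed as a cpuidle wakeup fallback, which is exactly the "(disabled PIT && !ARAT)" configuration Simon asks about.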

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 12:40:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 12:40:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519649.806611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmDIE-00026P-TA; Tue, 11 Apr 2023 12:40:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519649.806611; Tue, 11 Apr 2023 12:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmDIE-00026I-Ol; Tue, 11 Apr 2023 12:40:14 +0000
Received: by outflank-mailman (input) for mailman id 519649;
 Tue, 11 Apr 2023 12:40:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmDID-00025y-RB
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 12:40:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmDIA-0005UB-L8; Tue, 11 Apr 2023 12:40:10 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239] helo=[192.168.30.1])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmDIA-0003xn-DP; Tue, 11 Apr 2023 12:40:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=UiXpTl1a3tG+ZCl5AQUOykdSeTWIEtGDm225q3yNNVo=; b=mPdQ4uLy7ViBBQ8Lh7Nx61rIce
	XL26Et34t/buVnZkMLO+0k1L966qONZqTYqaXGPugeiGeP77bWbDAsZqt0u8EwoJyPSML8HBjtPbx
	6hwSJHQt/D1H9ZPixPQY7yNEXDKfPtY/W8C68iOL0JRbmy6iG2iZHacYDQ88CcJZcYQ8=;
Message-ID: <235a284c-48d6-fda3-2aa2-3670856d1547@xen.org>
Date: Tue, 11 Apr 2023 13:40:07 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [PATCH v2 2/7] xen/x86: Replace GPL v2.0 license boilerplate with
 an SPDX tag in *.c
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>, Paul Durrant <paul@xen.org>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230327184520.81828-1-julien@xen.org>
 <20230327184520.81828-3-julien@xen.org>
 <alpine.DEB.2.22.394.2303271752210.4066@ubuntu-linux-20-04-desktop>
 <96d6a307-14dd-aa0e-4039-d84d67cf7ed6@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <96d6a307-14dd-aa0e-4039-d84d67cf7ed6@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 28/03/2023 10:16, Jan Beulich wrote:
> On 28.03.2023 02:53, Stefano Stabellini wrote:
>> On Mon, 27 Mar 2023, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> It is easier to understand the license of a file when using SPDX.
>>>
>>> This is replacing the below pattern with the SPDX tag GPL-2.0-only
>>> in xen/arch/x86/*.c:
>>>
>>>   * This program is free software; you can redistribute it and/or modify it
>>>   * under the terms and conditions of the GNU General Public License,
>>>   * version 2, as published by the Free Software Foundation.
>>>   *
>>>   * This program is distributed in the hope it will be useful, but WITHOUT
>>>   * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>>>   * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>>>   * more details.
>>>   *
>>>   * You should have received a copy of the GNU General Public License along with
>>>   * this program; If not, see <http://www.gnu.org/licenses/>.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>
>>> ---
>>>
>>>      Changes in v2:
>>>          * Switch to GPL-2.0-only
>>>          * Rebase
>>>
>>> 42sh> cat gpl-2.0.txt
>>>   * This program is free software; you can redistribute it and/or modify it
>>>   * under the terms and conditions of the GNU General Public License,
>>>   * version 2, as published by the Free Software Foundation.
>>>   *
>>>   * This program is distributed in the hope it will be useful, but WITHOUT
>>>   * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>>>   * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>>>   * more details.
>>>   *
>>>   * You should have received a copy of the GNU General Public License along with
>>>   * this program; If not, see <http://www.gnu.org/licenses/>.
>>> 42sh> find xen/arch/x86/ -name '*.c' -exec replace_license.py gpl-2.0.txt GPL-2.0-only {} \;
>>
>> I confirm that the commands above lead to this exact patch. I ran them
>> on my system and checked that the resulting changes are the same.
>>
>> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> On this basis then also
> Acked-by: Jan Beulich <jbeulich@suse.com>
> for this and the subsequent patches.

Thanks. I have pushed the series with both of your acks.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 12:53:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 12:53:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519653.806621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmDUt-0003dN-0j; Tue, 11 Apr 2023 12:53:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519653.806621; Tue, 11 Apr 2023 12:53:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmDUs-0003dG-TO; Tue, 11 Apr 2023 12:53:18 +0000
Received: by outflank-mailman (input) for mailman id 519653;
 Tue, 11 Apr 2023 12:53:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmDUr-0003d6-TR; Tue, 11 Apr 2023 12:53:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmDUr-0005gW-Pp; Tue, 11 Apr 2023 12:53:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmDUr-0000Qw-BB; Tue, 11 Apr 2023 12:53:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmDUr-0006qH-Af; Tue, 11 Apr 2023 12:53:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zMVQ13PDCxim6hymiWfz/yn93WLKDbNCJcEfrrqDoLQ=; b=f7L++lcvekBnOL7h5pnTtUqw7y
	kvso/712BY1AszIodztTw0QRzWjhuVU06Trt3vY3/1XVTI21f9H5Lx75Nyj2qDGdpMn9nzKv+THTr
	tWl9BEKCNGkOWpYKIzyQGdz5VSvPV1Ht1p7SUuCRZzM0qs5w/3Z3pjoAQiGytyMhnCWc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180200-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180200: tolerable trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-xsm:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-pair:xen-install/dst_host:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
X-Osstest-Versions-That:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 Apr 2023 12:53:17 +0000

flight 180200 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180200/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt       7 xen-install      fail in 180192 pass in 180200
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 180192 pass in 180200
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  7 xen-install fail pass in 180192
 test-amd64-i386-xl-xsm        7 xen-install                fail pass in 180192

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair         11 xen-install/dst_host         fail  like 180192
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180192
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180192
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180192
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180192
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180192
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180192
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180192
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180192
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180192
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2
baseline version:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2

Last test of basis   180200  2023-04-11 01:53:29 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 14:21:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 14:21:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519663.806631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmEro-0004LX-Ek; Tue, 11 Apr 2023 14:21:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519663.806631; Tue, 11 Apr 2023 14:21:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmEro-0004LQ-BG; Tue, 11 Apr 2023 14:21:04 +0000
Received: by outflank-mailman (input) for mailman id 519663;
 Tue, 11 Apr 2023 14:21:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmErn-0004LG-3Z; Tue, 11 Apr 2023 14:21:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmErn-0007i6-14; Tue, 11 Apr 2023 14:21:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmErm-0003TS-HY; Tue, 11 Apr 2023 14:21:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmErm-0006yP-HA; Tue, 11 Apr 2023 14:21:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oViWm03fcP/JOWc+IsE7owOrN8Cx8w79BBCUGqgy9lg=; b=03tu7jy4i8HBMNXouMAfKMYqPG
	TQejnX4HoZp3b+dht7kgn6/EbBiCjFMcVNwyh3+0Om7XYc2kW0wvAFO1GKjBOzhvl/wyWSDoyM2L+
	mnOz/bj8jkTNsfwYFMfidMYBejRuIwB7BvGO8hbtmmxXXLXQFpc/AkfE4rWgcrhbyQI8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180205-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180205: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=5ea03c570c8610d4359f8bbf5f093d215344ce3f
X-Osstest-Versions-That:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 Apr 2023 14:21:02 +0000

flight 180205 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180205/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  5ea03c570c8610d4359f8bbf5f093d215344ce3f
baseline version:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2

Last test of basis   180171  2023-04-06 19:00:24 Z    4 days
Testing same since   180205  2023-04-11 13:03:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ddaf7bb0cf..5ea03c570c  5ea03c570c8610d4359f8bbf5f093d215344ce3f -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 17:47:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 17:47:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519752.806679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmI5k-0001zV-8F; Tue, 11 Apr 2023 17:47:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519752.806679; Tue, 11 Apr 2023 17:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmI5k-0001zO-5k; Tue, 11 Apr 2023 17:47:40 +0000
Received: by outflank-mailman (input) for mailman id 519752;
 Tue, 11 Apr 2023 17:47:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MqSW=AC=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pmI5i-0001zI-P1
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 17:47:39 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f0cae525-d890-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 19:47:35 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-94a356c7419so213604466b.2
 for <xen-devel@lists.xenproject.org>; Tue, 11 Apr 2023 10:47:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0cae525-d890-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681235254; x=1683827254;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=89D/U2/ABD8Hnu1Y3Qqw4jqD4G1xSecynmnwgvqJh1E=;
        b=YnBxk/BjOwvAHI0NJ7QNqauiFlUOEHqqJtYcO4wshZ5d4biGRT67xd+gtKmL2yTAG1
         /spFeX1cHhQgfRHw3qLE5gFE7DRVJIMjMwU0DKeOPP0up7qHE+6ksN0ALNPK6hDQfnEC
         1sV/LtYwA/6V9r7/ZEqPfrGe5yyGQXOSKvcaxXB4v6K6z1xRdLMUuK2JPIv3XQ3tgxtX
         p3Ltf49Z5yLqWpbA8sgb666/3Gc2TXDl8ibXyTw+niQhw+3KVsqdSceIVC0Vi+7rN3c8
         cmNFc/doZli6PfAphk5aF7x2UCDJ1v0sdXqMpgHst/1sSiaUnbaNpir21qrErLs7O43a
         94+A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1681235254; x=1683827254;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=89D/U2/ABD8Hnu1Y3Qqw4jqD4G1xSecynmnwgvqJh1E=;
        b=D+Kug/RWbwvJ/XkWufEbRly7eH5prDZT/oO/GaR0pzBQy1zbH0q75GaZcdg6M8zSFS
         OZgfHVcKIrnx1bhYAo9W7FT5IYnzSNFZzEYBec1uUS8ndNlLc14L2X/wC2WoxTq3BRG+
         MlfYcXRanUoeN/ANTWj7hZQhUtQHQaXSmNhAmpFbLwWdHSVhxtJmST0hoGpPvtecvABe
         IJbwKsFnw9p65/sqbf/d9FqSJ+phPGMlqLh/MdfQLUNXRKgj6EHW608IFdPHhDGo/QFJ
         7E9mSmZu74NnKznqeMBGd6fH22bHBP6d9JL1JZfqVzo64Yk4GMGn1r2YWD3o50cBEYd3
         1/CA==
X-Gm-Message-State: AAQBX9ejJMVs+nsq7XSqtFwcyb+GP87xVkq51m+QglLAukywmDcY9AKO
	ZVVMBOIYWmiWwue4Az6E0k6nRPogCCDgZ0TC0qT8xg==
X-Google-Smtp-Source: AKy350ahP395VbSvIx11nYGRgeo6TcVvzT9+ov1e9nkBImUEMlhVDS3j04fcNMoH4CH/1w9Xg8+N3piLIX+Q3hmx3y4=
X-Received: by 2002:a50:baea:0:b0:4fc:fc86:5f76 with SMTP id
 x97-20020a50baea000000b004fcfc865f76mr5594114ede.6.1681235254106; Tue, 11 Apr
 2023 10:47:34 -0700 (PDT)
MIME-Version: 1.0
References: <20230307182707.2298618-1-dwmw2@infradead.org> <20230307182707.2298618-23-dwmw2@infradead.org>
In-Reply-To: <20230307182707.2298618-23-dwmw2@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 11 Apr 2023 18:47:23 +0100
Message-ID: <CAFEAcA-uebHqs=53w62BiKQGhXZedA5ijAoOefd2pcOFPF_Rpw@mail.gmail.com>
Subject: Re: [PULL 22/27] hw/xen: Add emulated implementation of XenStore operations
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, 
	Joao Martins <joao.m.martins@oracle.com>, Ankur Arora <ankur.a.arora@oracle.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, vikram.garhwal@amd.com, 
	Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org, 
	Juan Quintela <quintela@redhat.com>, "Dr . David Alan Gilbert" <dgilbert@redhat.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, 7 Mar 2023 at 18:27, David Woodhouse <dwmw2@infradead.org> wrote:
>
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> Now that we have an internal implementation of XenStore, we can populate
> the xenstore_backend_ops to allow PV backends to talk to it.
>
> Watches can't be processed with immediate callbacks because that would
> call back into XenBus code recursively. Defer them to a QEMUBH to be run
> as appropriate from the main loop. We use a QEMUBH per XS handle, and it
> walks all the watches (there shouldn't be many per handle) to fire any
> which have pending events. We *could* have done it differently but this
> allows us to use the same struct watch_event as we have for the guest
> side, and keeps things relatively simple.


> +static struct qemu_xs_handle *xs_be_open(void)
> +{
> +    XenXenstoreState *s = xen_xenstore_singleton;
> +    struct qemu_xs_handle *h;
> +
> +    if (!s && !s->impl) {

Coverity points out that this will dereference a NULL pointer when s is
NULL, and will happily let through a XenXenstoreState whose s->impl is
NULL.
Should be "!s || !s->impl", I guess?
CID 1508131.

> +        errno = -ENOSYS;
> +        return NULL;
> +    }
> +
> +    h = g_new0(struct qemu_xs_handle, 1);
> +    h->impl = s->impl;
> +
> +    h->watch_bh = aio_bh_new(qemu_get_aio_context(), be_watch_bh, h);
> +
> +    return h;
> +}

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 18:07:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 18:07:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519756.806689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmIPG-0004aO-S2; Tue, 11 Apr 2023 18:07:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519756.806689; Tue, 11 Apr 2023 18:07:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmIPG-0004aH-PY; Tue, 11 Apr 2023 18:07:50 +0000
Received: by outflank-mailman (input) for mailman id 519756;
 Tue, 11 Apr 2023 18:07:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MqSW=AC=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pmIPF-0004aA-0d
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 18:07:49 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c372ae83-d893-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 20:07:47 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id sg7so33995401ejc.9
 for <xen-devel@lists.xenproject.org>; Tue, 11 Apr 2023 11:07:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c372ae83-d893-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681236466; x=1683828466;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=p+ZXlQzojGZSUw8Sbrj54moAbTxBe6kr2mEZxRk868M=;
        b=htQd6MnaVmc6ChTU9ga2AKBuKN/jMCHPNppmEK68X4N53Qb1vdWgV8+ScDciTYIfNf
         yTTv+p+l/7zd7JnjzLpoI0T9rpGVoM3rBovx9O9MYXZOOZAHEGAEofuZXYNMCMm4S2je
         HrVEnuiyo58GeRHtEWg1acuDtaAm6cAXPjFpqpLP5RG7wvqkVgDZy63tt4L/jkH3PrAd
         Sr0luoY0PXQkxG7c620MMgKqeEVWRqOW0XxeVlYVPHJ9+xt8lWSFR5GsDF9XgX9moyki
         BeRspCtNH3VYVNWx8BrwC9LwJQBkchSkeHcXsjk9cpa6LVSFcMgpqK+vqEBZEk7zBBSP
         tJPw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1681236466; x=1683828466;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=p+ZXlQzojGZSUw8Sbrj54moAbTxBe6kr2mEZxRk868M=;
        b=rxAfwC4h7aLRzzI/ANzKh2HJT5OM+2tRQK+HN+jFo+Lx5sRq2YQEKzHO1zoIOxgN66
         iDM4+1RdeDQ6Q20cK7vBJpeYgn0DQt3MdpTFhHfBNq7AV73Z6IVJh9YEtzSjxLVAZbHL
         hKc6VaFGkBUhKk64k4A2WWZPW5StDONIkbJSbPP+4/F4va3KoYCSXGg3ELF5YAFpg2Uk
         ewN4h0crTHFycOi6XlaQ5j/4NeNkisP1Dd2maj/fCQfM1qd76Sqmlm3bL6YzvUcsWDrg
         BduNI0Ppys5+DQNJf1gbYzOd1sHI4Yn6gWJkWeQHMEPHz9mr+tGfUqkU/fskDBHrx+aL
         nKUw==
X-Gm-Message-State: AAQBX9fezB8PnJNPFbw0R1MEohjWcrVbQY8geySekq1pnzXCxfgawa4g
	BkBCd8OnsVxa6VjmnAunwo2aCdWa00XPUtFI2PuHmA==
X-Google-Smtp-Source: AKy350aJKxcJXaacrbJSaqSrXddCMNffIuvTCBmwTQDw5caxmJIJZoc5CGUDO73D9H9Mawv/lNkDNQ71sykqnrtiSRU=
X-Received: by 2002:a17:906:2a48:b0:920:da8c:f7b0 with SMTP id
 k8-20020a1709062a4800b00920da8cf7b0mr5987227eje.6.1681236466538; Tue, 11 Apr
 2023 11:07:46 -0700 (PDT)
MIME-Version: 1.0
References: <20230307182707.2298618-1-dwmw2@infradead.org> <20230307182707.2298618-23-dwmw2@infradead.org>
In-Reply-To: <20230307182707.2298618-23-dwmw2@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 11 Apr 2023 19:07:35 +0100
Message-ID: <CAFEAcA-vCihVupZsLBdh6+-xjdNX2-K1Ceo+tgsjA=KCdWTjpg@mail.gmail.com>
Subject: Re: [PULL 22/27] hw/xen: Add emulated implementation of XenStore operations
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, 
	Joao Martins <joao.m.martins@oracle.com>, Ankur Arora <ankur.a.arora@oracle.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, vikram.garhwal@amd.com, 
	Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org, 
	Juan Quintela <quintela@redhat.com>, "Dr . David Alan Gilbert" <dgilbert@redhat.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, 7 Mar 2023 at 18:27, David Woodhouse <dwmw2@infradead.org> wrote:
>
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> Now that we have an internal implementation of XenStore, we can populate
> the xenstore_backend_ops to allow PV backends to talk to it.
>
> Watches can't be processed with immediate callbacks because that would
> call back into XenBus code recursively. Defer them to a QEMUBH to be run
> as appropriate from the main loop. We use a QEMUBH per XS handle, and it
> walks all the watches (there shouldn't be many per handle) to fire any
> which have pending events. We *could* have done it differently but this
> allows us to use the same struct watch_event as we have for the guest
> side, and keeps things relatively simple.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Reviewed-by: Paul Durrant <paul@xen.org>



> +static void xs_be_unwatch(struct qemu_xs_handle *h, struct qemu_xs_watch *w)
> +{
> +    xs_impl_unwatch(h->impl, DOMID_QEMU, w->path, NULL, xs_be_watch_cb, w);

Coverity points out that this is the only call to xs_impl_unwatch()
where we don't check the return value. Is there some useful way
we can report the error, or is it a "we're closing everything down
anyway, no way to report anything" situation? (This particular
Coverity heuristic is quite prone to false positives, so if that's
the way it is I'll just mark it as a f-p in the coverity UI.)
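
For the "closing everything down, nowhere to report it" case, one common pattern is to log the failure and continue the teardown rather than silently dropping the return value. A sketch, not QEMU code: impl_unwatch() here is a hypothetical stand-in for xs_impl_unwatch().

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for xs_impl_unwatch(): returns 0 on success,
 * or a positive errno value on failure. */
static int impl_unwatch(const char *path)
{
    return path ? 0 : EINVAL;
}

static int last_logged_err;

/* Teardown path: the error cannot usefully be propagated to the
 * caller, so record/log it instead of ignoring it, then free the
 * watch state regardless. */
static void be_unwatch(const char *path)
{
    int err = impl_unwatch(path);

    if (err) {
        last_logged_err = err;
        fprintf(stderr, "unwatch failed: %s\n", strerror(err));
    }
    /* ... remove from the watch list and free, as xs_be_unwatch() does ... */
}
```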


> +    h->watches = g_list_remove(h->watches, w);
> +    g_list_free_full(w->events, (GDestroyNotify)free_watch_event);
> +    g_free(w->path);
> +    g_free(w);
> +}

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 18:35:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 18:35:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519760.806700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmIqB-0008Al-0d; Tue, 11 Apr 2023 18:35:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519760.806700; Tue, 11 Apr 2023 18:35:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmIqA-0008Ae-U5; Tue, 11 Apr 2023 18:35:38 +0000
Received: by outflank-mailman (input) for mailman id 519760;
 Tue, 11 Apr 2023 18:35:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmIq9-0008AU-LM; Tue, 11 Apr 2023 18:35:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmIq9-0005TK-I7; Tue, 11 Apr 2023 18:35:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmIq9-0005dD-06; Tue, 11 Apr 2023 18:35:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmIq8-0005Sy-Vv; Tue, 11 Apr 2023 18:35:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LBnWzs/U9QCWpGlHuuaTO3d+qvXmIMmgZdWYt5MPk58=; b=SlLgkza13KumwxwdtkX3r14U1i
	e6kF3hkbmbj8dUyf0hrAt6YOC51JvZj0Bh0SbJcWlY9onAzLKrpt6Om7ZEnA2c6undgMyCrVq6Yqv
	CyNo6xuYlbR8BDDv3s7b9VU1dVJCB1iiTgrN+OjjJdtiSgz+4qLqKWz5mzKuyULqPzPc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180204-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180204: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start.2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=dda860b9c031d6a2768f75e5e622545d41d4b688
X-Osstest-Versions-That:
    qemuu=26aeb3b5894e0f3d17354e306002c59fa060e1c5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 Apr 2023 18:35:36 +0000

flight 180204 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180204/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-vhd       22 guest-start.2           fail blocked in 180198
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180198
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180198
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180198
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180198
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180198
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                dda860b9c031d6a2768f75e5e622545d41d4b688
baseline version:
 qemuu                26aeb3b5894e0f3d17354e306002c59fa060e1c5

Last test of basis   180198  2023-04-10 21:08:28 Z    0 days
Testing same since   180204  2023-04-11 09:38:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   26aeb3b589..dda860b9c0  dda860b9c031d6a2768f75e5e622545d41d4b688 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519769.806740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUI-0004zv-8U; Tue, 11 Apr 2023 19:17:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519769.806740; Tue, 11 Apr 2023 19:17:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUI-0004zo-5D; Tue, 11 Apr 2023 19:17:06 +0000
Received: by outflank-mailman (input) for mailman id 519769;
 Tue, 11 Apr 2023 19:17:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUG-0004Ta-DQ
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:04 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2062a.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6f9ec66c-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:02 +0200 (CEST)
Received: from BN9PR03CA0074.namprd03.prod.outlook.com (2603:10b6:408:fc::19)
 by DS0PR12MB7927.namprd12.prod.outlook.com (2603:10b6:8:147::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Tue, 11 Apr
 2023 19:16:59 +0000
Received: from BN8NAM11FT066.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:fc:cafe::4b) by BN9PR03CA0074.outlook.office365.com
 (2603:10b6:408:fc::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:16:59 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT066.mail.protection.outlook.com (10.13.177.138) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:16:59 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:16:58 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:16:57 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:16:57 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f9ec66c-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hsnvahqefEJcrGbq8TaKoxKIRcJ8k6g3h4CnC8ViW6gsJNOkIj7iYXQL9Ii413JsBQybdidGJ6LrYp4fbkQfkCB+uk9G8JtQNGWQEg7I2zZVvPT0EekScDapJsrHJ2zroYwVXqobef3b6SibjN1W95m4XVNV8awgAgRxS+ctdQe76N96+hxiTZeE6koUtR3VImFtKbEN7dLGN9hrSzfE8pxKNgYKtUkx5df4BxHctIT+EYRqjZiKki1Ixz3dScKqCfPRWYYBMY/C9zXeYJx1kov8c46c5urOiOJlWKEhQzyGMoDaeydbYpyedbXcu8qGhtYFZ2jrveqP13TX74XUSQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=aXlXQRcv7U/T/rRXFeXTNVQL0ZVDEHTt5u8gFMIkdTA=;
 b=L19PP9YIoz2ENayQ10udRQtImwhc0jGVL9fhd/yJy3oG4NYnKeU5Uo7S9PfQ9S8C1q+F4AwS7Y77NS815xEiRFBaNIOjsCyiACllRlkf8xpQYjufEogNI1/VWI9eg7Am75r3yQL14wxPA0vDfIwf8/6m9kPfVD1Vqw1QEzg7RC+e1UwfTZBxCjLz0wKU3mIcfVUrw0bJmuGMsNCCKlVGcwz2mvlqPpWHMLSxVYXyT6DuugcjRPXS2YanxJLXxvxMB+b8bGYlUNFrk6RRU3hRJjZHjJkQnbf1S2sz5s3uYyet8zguNevVPntPXjgvfXEy9umt/iGpNr6PZusNGspcJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aXlXQRcv7U/T/rRXFeXTNVQL0ZVDEHTt5u8gFMIkdTA=;
 b=WEz5Ij5Kp8boEyjMUb91XSI7OHDNJpWmshSsP8z3y16orb5bMbZYN4Ifo6hwDNVLqS9pcLJn7M7tz6Hpl6WM6nVl3T2uL2+3WmAzWkPtgLlRVYNE7KBVjbwOq/kRR34spEH0g+H7ltXDphzhcRAXoKFJL30lTxKV8Cm8wEzuk5w=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Vikram Garhwal <fnu.vikram@xilinx.com>, David Gibson
	<david@gibson.dropbear.id.au>
Subject: [XEN][PATCH v5 05/17] libfdt: overlay: change overlay_get_target()
Date: Tue, 11 Apr 2023 12:16:06 -0700
Message-ID: <20230411191636.26926-6-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT066:EE_|DS0PR12MB7927:EE_
X-MS-Office365-Filtering-Correlation-Id: 7f074106-fed4-47f9-7c35-08db3ac1529c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	oC7YWE9imiHPDPvyBdq1PTLkB0ISDaGkF+VlhetZPVZnrzr6qM7i8dNDmV+QzgsOCOuzwqrGgKNhT6Gc22W0h6Jl/VO52RYpH5NKqj6nuXeOvhDgl5n1sZfailtvZQ88ra2YqPqoWwMRZ/ranbQZks7SlxvR9ZlSLAbrSFWtdAI34AOBnT71936i+XeHyBYcVEF/jOWTxjkh/+G45mkAZLAGk32lNWO7+a0YoA55yRJDH5xXnogPn3Oqx1XIFtciV347wsEzKAHR/q5xdxufvwgEuBDl0xOC5mxRQ+FOxXjsenAxKvYFPe5qgzV08T1nisxhvhRijfLnun7V25nvhtfCXFCf+A2UTc1DtYM6oe2k0vbMOqhnAYKnwVMNkHszSH897j6urVNiCCXBv+SeywRIahq1UOkE9rHubtMSbGU7Bgh3uczPNvtnDxuc9yjluZZ/3UFlw9ymuEWobZgLUULm812dpGF2tKKTCu7rUz6qRg0clEn/C4NfeKoeGw2eQnPQRcGNOBdoc38sFsycpXmByMev6cjJxEVKXcaLXzH48i3nRDdI57qPVYuvkt3zWy/ri3F6DbdkhSXIIX6EWE+EbF4BziXbNg/ovhq0lb891h54Gpe7MePtoE+87lj72BOYskDuhcpNVoO/NCN7u9vRlpXEi/C5sECMSM31b4uyhfDxhXHIaU8+69B/HAcsq73Pb+ALVezM4Jeq7j1OBL8v9Evy9xODkhsWbiLMfmw=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(376002)(346002)(136003)(451199021)(46966006)(40470700004)(36840700001)(47076005)(82740400003)(36860700001)(40480700001)(2616005)(426003)(316002)(966005)(26005)(478600001)(6666004)(186003)(1076003)(54906003)(83380400001)(2906002)(336012)(81166007)(8676002)(5660300002)(4326008)(36756003)(356005)(40460700003)(44832011)(6916009)(41300700001)(70586007)(8936002)(82310400005)(70206006)(86362001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:16:59.1674
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7f074106-fed4-47f9-7c35-08db3ac1529c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT066.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB7927

Rename overlay_get_target() to fdt_overlay_target_offset() and remove its
static qualifier, making the function part of the public libfdt API.

This is done to get the target path for the overlay nodes, which is very
useful in many cases. For example, the Xen hypervisor needs it when applying
overlays, because Xen has to do further processing of the overlay nodes, e.g.
mapping resources (IRQs and IOMMUs) to other VMs, creating SMMU page tables,
etc.
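As a rough sketch of the resolution order the exported function follows (try
the "target" phandle first, fall back to "target-path"), the logic can be
modeled in plain C. Note this is an illustrative stand-in, not the real
libfdt implementation: struct fragment, target_offset(), and the fake node
offsets below are invented for the example.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical error codes mirroring libfdt's negative FDT_ERR_* style. */
#define ERR_BADPHANDLE (-1)
#define ERR_NOTFOUND   (-2)

/* Stand-in for an overlay fragment: a phandle target and/or a path. */
struct fragment {
    uint32_t target;          /* 0 means "no phandle, use target-path" */
    const char *target_path;  /* may be NULL */
};

/*
 * Sketch of the lookup order used by fdt_overlay_target_offset():
 * a phandle-based lookup is attempted first; only when no phandle is
 * present does the path-based lookup run. On the path branch, *pathp
 * (if non-NULL) receives the target path, which is what Xen consumes.
 */
static int target_offset(const struct fragment *frag, const char **pathp)
{
    if (frag->target == (uint32_t)-1)
        return ERR_BADPHANDLE;

    if (frag->target != 0) {
        /* phandle lookup; pretend phandle 7 resolves to offset 100 */
        return frag->target == 7 ? 100 : ERR_NOTFOUND;
    }

    if (frag->target_path == NULL)
        return ERR_NOTFOUND;

    if (pathp)
        *pathp = frag->target_path;
    /* path lookup; pretend "/soc/uart" lives at offset 200 */
    return strcmp(frag->target_path, "/soc/uart") == 0 ? 200 : ERR_NOTFOUND;
}
```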

Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-Id: <1637204036-382159-2-git-send-email-fnu.vikram@xilinx.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Origin: https://github.com/dgibson/dtc 45f3d1a095dd

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/libfdt/fdt_overlay.c | 29 +++++++----------------------
 xen/common/libfdt/version.lds   |  1 +
 xen/include/xen/libfdt/libfdt.h | 18 ++++++++++++++++++
 3 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/xen/common/libfdt/fdt_overlay.c b/xen/common/libfdt/fdt_overlay.c
index 7b95e2b639..acf0c4c2a6 100644
--- a/xen/common/libfdt/fdt_overlay.c
+++ b/xen/common/libfdt/fdt_overlay.c
@@ -41,37 +41,22 @@ static uint32_t overlay_get_target_phandle(const void *fdto, int fragment)
 	return fdt32_to_cpu(*val);
 }
 
-/**
- * overlay_get_target - retrieves the offset of a fragment's target
- * @fdt: Base device tree blob
- * @fdto: Device tree overlay blob
- * @fragment: node offset of the fragment in the overlay
- * @pathp: pointer which receives the path of the target (or NULL)
- *
- * overlay_get_target() retrieves the target offset in the base
- * device tree of a fragment, no matter how the actual targeting is
- * done (through a phandle or a path)
- *
- * returns:
- *      the targeted node offset in the base device tree
- *      Negative error code on error
- */
-static int overlay_get_target(const void *fdt, const void *fdto,
-			      int fragment, char const **pathp)
+int fdt_overlay_target_offset(const void *fdt, const void *fdto,
+			      int fragment_offset, char const **pathp)
 {
 	uint32_t phandle;
 	const char *path = NULL;
 	int path_len = 0, ret;
 
 	/* Try first to do a phandle based lookup */
-	phandle = overlay_get_target_phandle(fdto, fragment);
+	phandle = overlay_get_target_phandle(fdto, fragment_offset);
 	if (phandle == (uint32_t)-1)
 		return -FDT_ERR_BADPHANDLE;
 
 	/* no phandle, try path */
 	if (!phandle) {
 		/* And then a path based lookup */
-		path = fdt_getprop(fdto, fragment, "target-path", &path_len);
+		path = fdt_getprop(fdto, fragment_offset, "target-path", &path_len);
 		if (path)
 			ret = fdt_path_offset(fdt, path);
 		else
@@ -638,7 +623,7 @@ static int overlay_merge(void *fdt, void *fdto)
 		if (overlay < 0)
 			return overlay;
 
-		target = overlay_get_target(fdt, fdto, fragment, NULL);
+		target = fdt_overlay_target_offset(fdt, fdto, fragment, NULL);
 		if (target < 0)
 			return target;
 
@@ -781,7 +766,7 @@ static int overlay_symbol_update(void *fdt, void *fdto)
 			return -FDT_ERR_BADOVERLAY;
 
 		/* get the target of the fragment */
-		ret = overlay_get_target(fdt, fdto, fragment, &target_path);
+		ret = fdt_overlay_target_offset(fdt, fdto, fragment, &target_path);
 		if (ret < 0)
 			return ret;
 		target = ret;
@@ -803,7 +788,7 @@ static int overlay_symbol_update(void *fdt, void *fdto)
 
 		if (!target_path) {
 			/* again in case setprop_placeholder changed it */
-			ret = overlay_get_target(fdt, fdto, fragment, &target_path);
+			ret = fdt_overlay_target_offset(fdt, fdto, fragment, &target_path);
 			if (ret < 0)
 				return ret;
 			target = ret;
diff --git a/xen/common/libfdt/version.lds b/xen/common/libfdt/version.lds
index 7ab85f1d9d..cbce5d4a8b 100644
--- a/xen/common/libfdt/version.lds
+++ b/xen/common/libfdt/version.lds
@@ -77,6 +77,7 @@ LIBFDT_1.2 {
 		fdt_appendprop_addrrange;
 		fdt_setprop_inplace_namelen_partial;
 		fdt_create_with_flags;
+		fdt_overlay_target_offset;
 	local:
 		*;
 };
diff --git a/xen/include/xen/libfdt/libfdt.h b/xen/include/xen/libfdt/libfdt.h
index c71689e2be..fabddbee8c 100644
--- a/xen/include/xen/libfdt/libfdt.h
+++ b/xen/include/xen/libfdt/libfdt.h
@@ -2109,6 +2109,24 @@ int fdt_del_node(void *fdt, int nodeoffset);
  */
 int fdt_overlay_apply(void *fdt, void *fdto);
 
+/**
+ * fdt_overlay_target_offset - retrieves the offset of a fragment's target
+ * @fdt: Base device tree blob
+ * @fdto: Device tree overlay blob
+ * @fragment_offset: node offset of the fragment in the overlay
+ * @pathp: pointer which receives the path of the target (or NULL)
+ *
+ * fdt_overlay_target_offset() retrieves the target offset in the base
+ * device tree of a fragment, no matter how the actual targeting is
+ * done (through a phandle or a path)
+ *
+ * returns:
+ *      the targeted node offset in the base device tree
+ *      Negative error code on error
+ */
+int fdt_overlay_target_offset(const void *fdt, const void *fdto,
+			      int fragment_offset, char const **pathp);
+
 /**********************************************************************/
 /* Debugging / informational functions                                */
 /**********************************************************************/
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519767.806720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUF-0004U6-Lt; Tue, 11 Apr 2023 19:17:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519767.806720; Tue, 11 Apr 2023 19:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUF-0004Tx-Hx; Tue, 11 Apr 2023 19:17:03 +0000
Received: by outflank-mailman (input) for mailman id 519767;
 Tue, 11 Apr 2023 19:17:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUE-0004Ta-U4
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:02 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on20623.outbound.protection.outlook.com
 [2a01:111:f400:7e83::623])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6e0c351d-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:00 +0200 (CEST)
Received: from BN9PR03CA0604.namprd03.prod.outlook.com (2603:10b6:408:106::9)
 by MW4PR12MB5601.namprd12.prod.outlook.com (2603:10b6:303:168::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:16:56 +0000
Received: from BN8NAM11FT052.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:106:cafe::3) by BN9PR03CA0604.outlook.office365.com
 (2603:10b6:408:106::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.40 via Frontend
 Transport; Tue, 11 Apr 2023 19:16:56 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT052.mail.protection.outlook.com (10.13.177.210) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.29 via Frontend Transport; Tue, 11 Apr 2023 19:16:55 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:16:55 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:16:55 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:16:54 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e0c351d-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=R5odVlwbleoZs5wA2Fen6R9qmt+TiYhz/GpyppkIzbqswKQkuXyCbkxlQkjB8nSBOC20onufEscuNKm8vNm/j8yfswu3WLAnPgPOmNfR0T9GpLsMwbeW08oSzC1XnlUtEyHEDQlWFwA0VgmbcWHu+8ImpdPqQNNPYJVB9kGOgqahApezPnCwhGAVm6Fk/gJlRBBaxRQU2dbVZQlrSGqFNw5ze+hpUvgkOqmrl0qa67NkZZzimM2in0Ulk6CErHvk43d48O/zoqDAx/6h/d2jN/HJ5cMTC0cdC4wsTytksf5KinwvIQS5rOH9t2nsVXBKrTq1ZGqTcEvHCDNupHfXYA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0odwSwlZEbTJvdLLf27PaAsLK4pjjIm+Jsb7s8Lqv7g=;
 b=fR7WKLv54LP01SNa8MUepB3sId6Q718wi+GEOAuTATUVrbm5NmItXz521gJfoVce67ZdtDj3uDs+G6apBoBxTE+lCnDL+cIRrkpbDo+X29siGtAjog02PguifLjPmmhd8qYgMAvhHaDsn8UsmECzWKFzzUOhiLt4WhPPrKmffXvzcNBvFlzv6jZzite09ifNwof4EuwxN2kq2I3KKwQJaK/2yK/LxpPm5tIOOFf7NE7Y3kRp1BxBnxe34Tr9Stx0nbskob0ru6CJZf1I/vEDbXoZTHIrcEC4b542tPSX1IKI1PJ3H4zpeOLcL8feazxofP/zOOuTgyJ/MLkR38YXNw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0odwSwlZEbTJvdLLf27PaAsLK4pjjIm+Jsb7s8Lqv7g=;
 b=Md5NgPnF/dZc6LSjpF9zQvCHu3SnY1fZQYKEa4Ctv2VO6o6TW+wqH6xsxLNdN88/nIjeb8ZQwPcGMwej8/143x3pPhkp5fPtJvG1Ou/G/K/OCJj1pfKEpnGaKFb417l4ClhSo/MkCvoC3HbxphnPRiypyMP4AOyNZm/zoYvzbgI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 02/17] common/device_tree: change __unflatten_device_tree()
Date: Tue, 11 Apr 2023 12:16:03 -0700
Message-ID: <20230411191636.26926-3-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT052:EE_|MW4PR12MB5601:EE_
X-MS-Office365-Filtering-Correlation-Id: 7a55ab8b-09ca-432d-daf6-08db3ac15098
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	tRF8rJuNDr5YAn1IKIgQmYgV4Xg3sb4+bNUMt0OUvk/7+9ZimF1PYMvu+V9AR319u1kO/w+2FZjcOha9YM1YSyz8nG5g9SE5swyVF/F+bEKokoJeyZlCGBUMjnmu0lJE4JbCc3xzHtHKY5Ms/JgGqNJAt5aPewk/of/WaAL4Lj20+TsmvONcyw99ZcJdonUU5JqIXalTOBeCU6LUDFk2QU1Asw0Vm304GjCHVxWrqTSHrYN0E/jLVwB3f/+YW/1sPJuZeQNFvstg22Jw0OXhnltLLL2MXutboZ3A2s/JUu4+O14c7+3ikq0T0o1Vkeysarod4OYoIVZJaexZXZcmgJ3HapzUw5MVMbTil7/FvNupjYSq4qin63K9Xg/L/zSZqk5ZbWdqZTUCGXxCmlsIme6PMcpMpTSb1rJ73dYjHhzrz0da2vQtGwplOitZCAQm608CDDedenQ66kEZ+Bp31uBouNN4Ysj1zBuMIbCjBzgWSF5zGisvcxfBdidoqHKji3INiI+kzIbrWYl1yUso8UkOxMe+3AwCpH1jx0nxOTOxLi2WV3jUmxGgFs1fCcABD+iXeskYnd9CSFppMreCQiqmZixCd0zBtyd4oMmPL992mXabbPY4c872k51Exb4Q5U+9QvpVvdNU8vuDKrQsP4jufjckzAgiCd21astDPcag/QPqJbpZI2BGHExZBL57fhM9FJWIEbmXcPSsn+fF9vs/Emc8/qMF2RqBHC2Wlhc=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(346002)(376002)(396003)(451199021)(40470700004)(46966006)(36840700001)(36860700001)(54906003)(2616005)(47076005)(478600001)(26005)(1076003)(6666004)(81166007)(82740400003)(356005)(41300700001)(316002)(4326008)(6916009)(186003)(336012)(83380400001)(426003)(70206006)(70586007)(2906002)(5660300002)(44832011)(36756003)(82310400005)(40460700003)(86362001)(40480700001)(8676002)(8936002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:16:55.7882
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7a55ab8b-09ca-432d-daf6-08db3ac15098
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT052.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB5601

The following changes are made to __unflatten_device_tree():
    1. Rename __unflatten_device_tree() to unflatten_device_tree().
    2. Remove the static qualifier.
    3. Handle memory allocation failure. This is useful for dynamic node
        programming, where the device tree is unflattened at runtime and
        memory allocation can fail.
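The added failure handling can be sketched as follows. xmalloc_bytes() and
panic_on() below are illustrative stand-ins for Xen's _xmalloc() and panic(),
not the real interfaces, and the single guard byte stands in for the
0xdeadbeef marker the function writes past the end of the buffer:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-ins for Xen's _xmalloc() and panic(). */
static void *xmalloc_bytes(size_t n) { return malloc(n); }
static void panic_on(const char *msg) { fprintf(stderr, "%s", msg); exit(1); }

/*
 * Allocate the buffer for the expanded device tree, as in
 * unflatten_device_tree(): extra space past `size` holds a guard
 * value, and a NULL return from the allocator is treated as fatal
 * instead of being dereferenced.
 */
static void *alloc_unflatten_buffer(size_t size)
{
    void *mem = xmalloc_bytes(size + 4);

    if (!mem)
        panic_on("Cannot allocate memory for unflattened device tree\n");

    /* Guard value checked later to detect buffer overruns. */
    ((unsigned char *)mem)[size] = 0xde;
    return mem;
}
```

A later version of this handling would likely return an error code rather
than panic, so that runtime overlay application can fail gracefully; panicking
matches the boot-time caller shown in this patch.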

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/device_tree.c      | 10 ++++++----
 xen/include/xen/device_tree.h |  5 +++++
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index aed38ff63c..bf847b2584 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -2047,7 +2047,7 @@ static unsigned long unflatten_dt_node(const void *fdt,
 }
 
 /**
- * __unflatten_device_tree - create tree of device_nodes from flat blob
+ * unflatten_device_tree - create tree of device_nodes from flat blob
  *
  * unflattens a device-tree, creating the
  * tree of struct device_node. It also fills the "name" and "type"
@@ -2056,8 +2056,7 @@ static unsigned long unflatten_dt_node(const void *fdt,
  * @fdt: The fdt to expand
  * @mynodes: The device_node tree created by the call
  */
-static void __unflatten_device_tree(const void *fdt,
-                                    struct dt_device_node **mynodes)
+void unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
 {
     unsigned long start, mem, size;
     struct dt_device_node **allnextp = mynodes;
@@ -2079,6 +2078,9 @@ static void __unflatten_device_tree(const void *fdt,
     /* Allocate memory for the expanded device tree */
     mem = (unsigned long)_xmalloc (size + 4, __alignof__(struct dt_device_node));
 
+    if ( !mem )
+        panic("Cannot allocate memory for unflattened device tree\n");
+
     ((__be32 *)mem)[size / 4] = cpu_to_be32(0xdeadbeef);
 
     dt_dprintk("  unflattening %lx...\n", mem);
@@ -2179,7 +2181,7 @@ dt_find_interrupt_controller(const struct dt_device_match *matches)
 
 void __init dt_unflatten_host_device_tree(void)
 {
-    __unflatten_device_tree(device_tree_flattened, &dt_host);
+    unflatten_device_tree(device_tree_flattened, &dt_host);
     dt_alias_scan();
 }
 
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 19a74909ce..58ac12abe3 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -178,6 +178,11 @@ int device_tree_for_each_node(const void *fdt, int node,
  */
 void dt_unflatten_host_device_tree(void);
 
+/**
+ * unflatten any device tree.
+ */
+void unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes);
+
 /**
  * IRQ translation callback
  * TODO: For the moment we assume that we only have ONE
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519768.806730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUG-0004ji-Ui; Tue, 11 Apr 2023 19:17:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519768.806730; Tue, 11 Apr 2023 19:17:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUG-0004jb-QV; Tue, 11 Apr 2023 19:17:04 +0000
Received: by outflank-mailman (input) for mailman id 519768;
 Tue, 11 Apr 2023 19:17:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUF-0004Ta-KC
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:03 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20601.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6f89fd80-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:02 +0200 (CEST)
Received: from BN9PR03CA0077.namprd03.prod.outlook.com (2603:10b6:408:fc::22)
 by DM6PR12MB4282.namprd12.prod.outlook.com (2603:10b6:5:223::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35; Tue, 11 Apr
 2023 19:16:59 +0000
Received: from BN8NAM11FT066.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:fc:cafe::5b) by BN9PR03CA0077.outlook.office365.com
 (2603:10b6:408:fc::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:16:58 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT066.mail.protection.outlook.com (10.13.177.138) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:16:58 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:16:57 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:16:57 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:16:56 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f89fd80-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dqfHvUQkKRI+G1Rz6oofnKIRzyCknyLcK0EUyws71WyuEo6e0kx4wVRS09qucjTXA9EqkU8e+z63bCu/YkL+kfC8GOSrj7rwUj/hevwvj9CRfVNBXfSRMd26MnOj3P8VppZyJsbOp7oN9ULsAUdBY38gE9097qaUlMR1paZy56dBemNS5UHDTkVZOl1r3tGL1yoFI1mzVv0ozo9bM9zVFJZ/DRtGpVz1Qx60rijmc4vA9vapQqD+RSk23+p1T2rJWJbUrSzG8xgrWNwYoabwKn3Zx7YQvsbVjj6T73nkM0AEDfvtaOEkS0oh/1/GuMZ4JZyfsyyma955nur+H1rWYw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+aOH8wbfxJ3CDB9n8z4otIPVVDXaY3ZnHxlpPeTlbO8=;
 b=XVfer5z5EHUilHhV+QpewCTkSOCiPTyjg/flVMZDEAzRK7xS4U90MmLS2j6XCxJadMrTyfIFtqZgoV8tE5knDGCt9Oy0pJnHl/YgFH12ggxAV7MMV8Fr+WG99/NQCfuORKTWXweHXckRRu6Ghyd9EzVd+cdZhZgDEOTfJrOaND763U12ywlONjxb2o0xNbsw08P4nJPuYUvA34TqM5tdqXE4UKXbylbIoJu4mMYx3dg+VOB48TH9m8UNMBnMYOUnFMnaNJhPMSzo+GyHnp1zexnQxanpVOmx8/4ez4aN6AvQjfluvkCvSi4wU3+b/2iwK54aukTqwBjwLwEyLBAb8A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+aOH8wbfxJ3CDB9n8z4otIPVVDXaY3ZnHxlpPeTlbO8=;
 b=rl/f+0LCUCpB609QADltQg+rTxdAOrQiu8YujzkFf1OQB3AAg5ivux1ExrlGxmgNX4gA13RYWWZLejjd2CoSI8Y6BCICQIu+vn/vyCh/BDlKvHSYiFlDmHWaQLT2ceA64UDpUA/XJ+8ZOo+WqDeTLlbQI3mUA4YsgiIRfgsyJ2A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 04/17] libfdt: Keep fdt functions after init for CONFIG_OVERLAY_DTB.
Date: Tue, 11 Apr 2023 12:16:05 -0700
Message-ID: <20230411191636.26926-5-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT066:EE_|DM6PR12MB4282:EE_
X-MS-Office365-Filtering-Correlation-Id: a9c843cc-4c85-4c96-a8c6-08db3ac15246
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	E5mZY9arVfIz4R/AVfjqkQGX6I4NgpAdkIvjkuVZqz5BKhyPZEDL6b5Xq8b3aZ8yk9JU+rCdeveWZbPStUFbHz4rOiSVlz3wiAZwUYR0jRVcmHdSCANP2OdgfJgacMgBX58liZjifMHUHzC/32dEBes0dT/X0sVh0V0ClZ/pmvVNF2vnoktMpqFl7WQQ6/bzuXUT8t6RLcCdF1n/3N0pUtC3AMR9H2HHoI73W91MU1zSRQ9p3o//hEvudGpbf1Q/O9EyfgJP5cn6D5Wa+pOcUC+a0wuMf94oRCG6cfHd5p9GA/HhF+Rw7SezASSeL6IA6UysRg+9vzAPtm0ZN/6sMk48bGE1upLmFIspEKzE5Xx6I6Oa3iUBQemvUG0GRGf4bdhWntvbrDNu30WquTcxt+EoHMzvNBdW59Z2Q54ECfZK69MyfO0N3SILS823JJrSKR2I444zyf/CPE3tEfFHX91CO9QOXC+5ALFlZGMr2X28jaiAhNzEM19Os8Ea7kh5ySxKv0vuydLq5lJtnjoAQwr5BCuUnxCWZO7KKKfDzxM4jLQXG2exYJ7Yc61JwrHEHD0D39MpQ1pWsM0R7MGeR2RyYy8cukIRVhn2m5REcOmNb6w6YBFsOkQbj9rXjK4kFARvTQo2T2N1Y9AaR02BFnvB9oY1v1UmFgnOGiR1+Qql+LFy4XNhSrlWdx9PgLB67O/ky0Ps9USxEUlIXO9dY9kMQK7XS1nWTKovgA0R31I=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(396003)(376002)(39860400002)(451199021)(46966006)(40470700004)(36840700001)(36756003)(54906003)(86362001)(41300700001)(83380400001)(316002)(70586007)(4326008)(478600001)(70206006)(6916009)(8676002)(82310400005)(5660300002)(40480700001)(8936002)(2906002)(4744005)(44832011)(36860700001)(356005)(186003)(82740400003)(81166007)(6666004)(26005)(1076003)(426003)(336012)(2616005)(47076005)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:16:58.6206
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a9c843cc-4c85-4c96-a8c6-08db3ac15246
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT066.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4282

This is done so that the libfdt library functions remain accessible after
init; they are required for adding device tree overlay nodes during dynamic
node programming.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/libfdt/Makefile | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/common/libfdt/Makefile b/xen/common/libfdt/Makefile
index 75aaefa2e3..d50487aa6e 100644
--- a/xen/common/libfdt/Makefile
+++ b/xen/common/libfdt/Makefile
@@ -1,7 +1,11 @@
 include $(src)/Makefile.libfdt
 
 SECTIONS := text data $(SPECIAL_DATA_SECTIONS)
+
+# For CONFIG_OVERLAY_DTB, libfdt functionalities will be needed during runtime.
+ifneq ($(CONFIG_OVERLAY_DTB),y)
 OBJCOPYFLAGS := $(foreach s,$(SECTIONS),--rename-section .$(s)=.init.$(s))
+endif
 
 obj-y += libfdt.o
 nocov-y += libfdt.o
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519766.806710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJU8-0004De-5r; Tue, 11 Apr 2023 19:16:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519766.806710; Tue, 11 Apr 2023 19:16:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJU8-0004DX-2g; Tue, 11 Apr 2023 19:16:56 +0000
Received: by outflank-mailman (input) for mailman id 519766;
 Tue, 11 Apr 2023 19:16:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJU6-0004DR-Ab
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:16:54 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20610.outbound.protection.outlook.com
 [2a01:111:f400:7e89::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 67dbd682-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:16:50 +0200 (CEST)
Received: from BN9PR03CA0143.namprd03.prod.outlook.com (2603:10b6:408:fe::28)
 by MW6PR12MB8997.namprd12.prod.outlook.com (2603:10b6:303:23e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35; Tue, 11 Apr
 2023 19:16:43 +0000
Received: from BN8NAM11FT090.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:fe:cafe::5d) by BN9PR03CA0143.outlook.office365.com
 (2603:10b6:408:fe::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:16:43 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT090.mail.protection.outlook.com (10.13.177.105) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:16:42 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:16:42 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:16:42 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:16:41 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67dbd682-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ew2aiM153IebdZ51yzNra06Kl9mM25TsDnKfivWygS65BEpXhoawrogqnDdjjIC9vbLaC/wbSt6K7ZMAQ96foHVPQIFMx1z8S8lv/9WNwqxQ/4NqjIgwV+NuwO81GAoRfBFSBSXqUKuOnACRF5ZxiLXm1Ax84gYgRskhpyb3UHE7liJZb+6A6Ij6q0J5WdH7fhx7m6W53pLO8LgDhoY6H01mOpGKtOt8lM1dECSk1P2pddFgjIdc5FTPdCpzuIjErtqaIMFeLQoNvjCabf6aDmWDIW/bPHAG/enaYGCNuMcQsmSUXsRygM5pkOgW657f09s45IDovMPjHcTgP9t7hg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hw2g30xtcvBa51vtOCCSs3IGMhpfBDNnN9dvV4fBjSk=;
 b=RukjOJoqD2n/mHx/m2eV6KVZLukNhlyEkcMgEVEopf/ZBxe0TDwNLA4JpHHwtOle952Btdj+s+d0miNJ2Ilyj2MhN5KkpZWJUP7eXkLepG18xDdf7EJCF16LTgGXFtZH3Uq1u/SUGh/mJrDBrpiSFLU485DCfNpJtMZd6Ulob4GGDfCCqGO4LJnFQ54SR4fsOcHfoRfyswEbAvr6KXF6HJHz8M5Tj5rzPvYe2ho7AB+rMP3hbjM0G1EvvA0w3R88VbaTHu8YT2stmysKTQpP93nDn0JWNTvS09tUSFahLAD64cO30lPXeylHQ+1qWH7cHMjD3wWGQPp58TQdiavBnw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hw2g30xtcvBa51vtOCCSs3IGMhpfBDNnN9dvV4fBjSk=;
 b=5IJhn74+mcYpGGrffiF8APF/b19yvnApDraSDN9to2eQsvF6VxKlzj9FmOSoRnYmsFUIeOgEs6TmVA+fgSlo5IRYW2VsxOHgP3+5ud/SCSaPfDyAyX+AjSc4jWac2ftO92LRzyvCjuXR1nC5+xDZjz7A+L2BNy9zd1JV3mW5Lbc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Wei Liu <wl@xen.org>, Rahul Singh
	<rahul.singh@arm.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>
Subject: [XEN][PATCH v5 00/17] dynamic node programming using overlay dtbo
Date: Tue, 11 Apr 2023 12:16:01 -0700
Message-ID: <20230411191636.26926-1-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT090:EE_|MW6PR12MB8997:EE_
X-MS-Office365-Filtering-Correlation-Id: 2584fd03-1918-43d8-5ac6-08db3ac148f7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	adKKCHa+6ja/Kc6S4MUnTSDiZZSB5vtvNKcKRS9+jOyBL/QW+npLt193WrP38VLHD+S8nfiznhm7bJl3XTBBbTBSrJ9eVf8fLkMSx1fETIZUabDFVB71fVmpdhJeSvmOFf+G7fNPalwQGS/h1ofAN0zchKJWMPVA+DntuF5PMDdOmEgFLaIHZRDodNG/Qh0hYKVO+gvyt1sk2a68HkmP7tTLCc0QIui1iRHFmMhMR9VdTUT7W+UI16ojJEnlPeyxwMoRCdhfKzIiENIsFm7UzgWX9Bnlyw8JgzccBUnKltmS/buWoBlYsLeq4iyDvCmUQMD5VH7qKZQmUq7fjDxU8F2XQgYjkh1ICLcCFviVsQU2iY9Zbwsdypwy3bKNYgKGg7E09emLI56ct5FK0MIUaW7uLveBB6tJ1K6OMs8KblAd25oB5AJTwBqkVcwQQBUfNbmaPFtJ+d6dxh6NtHsEANI5HdXwGIWX+yArXWLkGOzSEfqFLTuH65kjur5yTBOYpsRNIFSDutx9kdkicRKk7nNIQs9bzWTI6wjBOvo2smU4SzVaVR/FYHvSD2ZU4Eu1mNJc9hYNpKo9Qx9E/oo3hWfQscYMoTEM0MvUypzkWXRcrqaeJ2fxZC55/pIS8z3+uzpwmJlfG/V8GUPDvm05f1Ld5iMTfMsgtARx0Ztxm9UH5thxslbp3QMB1PetkRNh8OSNC3mQGX/PbZjPmKjgkeHU8OnVsautIQqTz230kFE=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(396003)(136003)(346002)(451199021)(46966006)(40470700004)(36840700001)(478600001)(83380400001)(86362001)(40460700003)(40480700001)(36756003)(82740400003)(81166007)(356005)(47076005)(36860700001)(336012)(2616005)(426003)(2906002)(1076003)(26005)(316002)(54906003)(186003)(44832011)(5660300002)(82310400005)(8676002)(6666004)(6916009)(7416002)(8936002)(41300700001)(4326008)(70586007)(70206006)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:16:42.9545
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2584fd03-1918-43d8-5ac6-08db3ac148f7
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT090.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW6PR12MB8997

Hi,
The main purpose of this series is to address the first part of dynamic
programming, i.e. making Xen aware of a new device tree node, which means
updating dt_host with the overlay node information. Here we add/remove nodes
from dt_host and check/set IOMMU and IRQ permissions, but never map them to any
domain. Right now, mapping/unmapping happens only when a new domU is
created/destroyed using "xl create".

For adding a node using dynamic programming:
    1. The flattened device tree overlay node is added to an fdt.
    2. The updated fdt is unflattened into a new dt_host_new.
    3. The newly added node's information is extracted from dt_host_new.
    4. The added node is inserted under the correct parent in the original
        dt_host.

For removing a node:
    1. Find the node with the given path.
    2. Check whether the node is used by any domU. Remove the node only when
        it is not used by any domain.
    3. Remove IRQ permissions and MMIO access.
    4. Find the node in dt_host and delete the device node entry from dt_host.
    5. Free the overlay_tracker entry, which also frees dt_host_new (created
        in the node addition step).

To map IOREQ and IOMMU at runtime, there will be another small series after
this one, where we will do the actual IOMMU and IRQ mapping to a running domain
and call unmap_mmio_regions() to remove the mapping.

Change Log:
 v4 -> v5:
    Split patch 01/16 into two patches: one with the function type changes and
        another with the changes inside unflatten_device_tree().
    Changed the dt_overlay xl command to dt-overlay.
    Protected the overlay functionality with CONFIG(arm).
    Fixed rwlock issues.
    Moved the "device_tree.h" include to the C file where arch_cpu_init() is
        called and forward-declared dt_device_node. This was done to avoid a
        circular dependency between device_tree.h and rwlock.h.
    Addressed Michal's comments on coding style.

 v3 -> v4:
    Added support for adding a node's children.
    Added an rwlock to the dt_host functions.
    Corrected an fdt size issue when applying an overlay to it.
    Added memory allocation failure handling for unflatten_device_tree().
    Changed xl overlay to xl dt_overlay.
    Corrected commit messages.
    Addressed code issues from the v3 review.

 v2 -> v3:
    Moved the overlay functionality to the dt_overlay.c file.
    Renamed XEN_SYSCTL_overlay to XEN_SYSCTL_dt_overlay.
    Added the dt_* prefix to overlay_add/remove_nodes.
    Added dtdevs_lock to protect iommu_add_dt_device().
    For the IOMMU, moved the spin_lock to the caller.
    Addressed code issues from the v2 review.

 v1 -> v2:
    Added support for multiple node addition/removal using a dtbo.
    Replaced fpga-add and fpga-remove with one hypercall, overlay_op.
    Moved common domain_build.c functions to device.c.
    Added the OVERLAY_DTB configuration option.
    Renamed overlay_get_target() to fdt_overlay_get_target().
    Split the remove_device patch into two patches.
    Moved the overlay add/remove code and changed the hypercall from a domctl
        to a sysctl.
    Placed all overlay code under CONFIG_OVERLAY_DTB.
    Renamed all tool-domain fpga functions to overlay.
    Addressed code issues from the v1 review.

Regards,
Vikram



Vikram Garhwal (17):
  xen/arm/device: Remove __init from function type
  common/device_tree: change __unflatten_device_tree()
  xen/arm: Add CONFIG_OVERLAY_DTB
  libfdt: Keep fdt functions after init for CONFIG_OVERLAY_DTB.
  libfdt: overlay: change overlay_get_target()
  xen/device-tree: Add device_tree_find_node_by_path() to find nodes in
    device tree
  xen/smmu: Add remove_device callback for smmu_iommu ops
  xen/iommu: Move spin_lock from iommu_dt_device_is_assigned to caller
  xen/iommu: protect iommu_add_dt_device() with dtdevs_lock
  xen/iommu: Introduce iommu_remove_dt_device()
  asm/smp.h: Fix circular dependency for device_tree.h and rwlock.h
  common/device_tree: Add rwlock for dt_host
  xen/arm: Implement device tree node removal functionalities
  xen/arm: Implement device tree node addition functionalities
  tools/libs/ctrl: Implement new xc interfaces for dt overlay
  tools/libs/light: Implement new libxl functions for device tree
    overlay ops
  tools/xl: Add new xl command overlay for device tree overlay support

 SUPPORT.md                              |   6 +
 tools/include/libxl.h                   |  11 +
 tools/include/xenctrl.h                 |   5 +
 tools/libs/ctrl/Makefile.common         |   1 +
 tools/libs/ctrl/xc_dt_overlay.c         |  48 ++
 tools/libs/light/Makefile               |   3 +
 tools/libs/light/libxl_dt_overlay.c     |  71 ++
 tools/xl/xl.h                           |   1 +
 tools/xl/xl_cmdtable.c                  |   6 +
 tools/xl/xl_vmcontrol.c                 |  52 ++
 xen/arch/arm/Kconfig                    |   5 +
 xen/arch/arm/device.c                   | 145 ++++
 xen/arch/arm/domain_build.c             | 142 ----
 xen/arch/arm/include/asm/domain_build.h |   2 -
 xen/arch/arm/include/asm/setup.h        |   6 +
 xen/arch/arm/include/asm/smp.h          |   3 +-
 xen/arch/arm/smpboot.c                  |   1 +
 xen/arch/arm/sysctl.c                   |  16 +-
 xen/common/Makefile                     |   1 +
 xen/common/device_tree.c                |  30 +-
 xen/common/dt_overlay.c                 | 897 ++++++++++++++++++++++++
 xen/common/libfdt/Makefile              |   4 +
 xen/common/libfdt/fdt_overlay.c         |  29 +-
 xen/common/libfdt/version.lds           |   1 +
 xen/drivers/passthrough/arm/smmu.c      |  56 ++
 xen/drivers/passthrough/device_tree.c   | 109 ++-
 xen/include/public/sysctl.h             |  24 +
 xen/include/xen/device_tree.h           |  28 +-
 xen/include/xen/dt_overlay.h            |  59 ++
 xen/include/xen/iommu.h                 |   2 +
 xen/include/xen/libfdt/libfdt.h         |  18 +
 31 files changed, 1595 insertions(+), 187 deletions(-)
 create mode 100644 tools/libs/ctrl/xc_dt_overlay.c
 create mode 100644 tools/libs/light/libxl_dt_overlay.c
 create mode 100644 xen/common/dt_overlay.c
 create mode 100644 xen/include/xen/dt_overlay.h

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519770.806750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUK-0005JE-Lj; Tue, 11 Apr 2023 19:17:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519770.806750; Tue, 11 Apr 2023 19:17:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUK-0005J7-I8; Tue, 11 Apr 2023 19:17:08 +0000
Received: by outflank-mailman (input) for mailman id 519770;
 Tue, 11 Apr 2023 19:17:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUH-0004Ta-Dk
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:05 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2061e.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7003e71b-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:03 +0200 (CEST)
Received: from MW3PR05CA0004.namprd05.prod.outlook.com (2603:10b6:303:2b::9)
 by MW6PR12MB8734.namprd12.prod.outlook.com (2603:10b6:303:249::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Tue, 11 Apr
 2023 19:17:00 +0000
Received: from CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:2b:cafe::33) by MW3PR05CA0004.outlook.office365.com
 (2603:10b6:303:2b::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28 via Frontend
 Transport; Tue, 11 Apr 2023 19:16:59 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT088.mail.protection.outlook.com (10.13.175.131) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:16:59 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:16:54 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:16:54 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:16:53 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7003e71b-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZoaxB4bFLbZgS5WHHZcc7Q2JNIUjaW6J+JCxA2fmYTW3nBQ2j9coBWisKiCfCXFOcjrFMTstSF0xWUGWtuCyep3JhovZ1XV5inwtQLmZy8uO+FYvx5mDwiBdcL6PLqd7EtB0GLquF5Pl5Hc3LXu3CAC5oEyM2rZxbBCL/58jxvE9aoxo740ZUVX9Rc6bvUJF38XrQzk2NUYrPSrAhccYEYaOVr9n8MtwIG5hYV9tJIj4yvzsJtJrwbmEeS+fC6IHFmMm/bvOdHFwjnL2/nqmODQhBwPEuwI8zon1FpL9EZrM/5fHWNQdLAd2OCFdQc3h07QoCzui9AlPZ6uYGsIPRA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Z2clB4UixcJGvwOZJ+XN0wD+kcnhi2QU+8ofkZ5Co48=;
 b=RoZApb5D1WrOD7o9t5X9invczFFpM8bXTIFK1x6yv+3TUxNKFSmB3f68ySJc6moMrpmguITRrb8G1Z9Eg3iya5EzPRP+eX5XFexXisbGVyF8um0COhhoJ5Q60FEdFev7CbsZfo3Nt8rZxN01+IMhyockuz+HdWmLRaKZdG4XyuUd3nXoC0wL1cp5ZFD9k6A+Pj9kMr4J53LXAoFI4P4+WvAlcgIhu8ZwmNwQ1Jx2+AwSIRnLemwiCNYf5EGvOu9TFG08pVt9Vk2KQWMx6GV0rO1D1qq7YnJGeh4MRQV91Cifi3JAZig53CQDi60kteeemQDfNSWCOjT/oK5dr/GgKQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z2clB4UixcJGvwOZJ+XN0wD+kcnhi2QU+8ofkZ5Co48=;
 b=uG3yZ2q5CylqwilEN0w1lucY3u1hCuIw3gAflcKN2wFvTGJlsbJOVoocZsN62RVXs8md4ynK/i2EJD4Kd9n+L3YDixHpgt81Xdc+yDJdzoZagvx0OA/iP3eR6k6AmI+5qyCX8MgdbB80bE5ShP9n6Sk9LC4s8FzM/+ovwThwI8I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v5 01/17] xen/arm/device: Remove __init from function type
Date: Tue, 11 Apr 2023 12:16:02 -0700
Message-ID: <20230411191636.26926-2-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT088:EE_|MW6PR12MB8734:EE_
X-MS-Office365-Filtering-Correlation-Id: 6ccc9639-ecdb-4de5-16fc-08db3ac152be
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1nZdWlJL/siybVTnbTI1lAmr6TWeqSDLIc0m/tAf95v6A7UuW3OoeXSDRffUjaQ+5Y85O6S+9kI0qym75WjJBPFA7AizyTun88WjRujaWDHA08ihv7jXoXXTPfhDLoBmI1EgY09qrTQh7zlyBDJsJ2mnCieg7+TsGy668oLevSG4Hy2X76p+WPkuFVS0dJKrgQI3EGrduQCHwm160dyE/AUQmh60Ov4ohYORM3d6at/H+cokq0m+Lut5JFcFaeCMFqhv76PNthhld8VWHra1d3Wm9/WPWNdhH9HJklCuH7vg/nN6dzeua6f8bxzkardLfOQkV4xdccn9O7fop9mjDuIcnpKxZHls2hluVbFRGyv0c6xkcdni+fp5RlOq+mZzOS1YS5CfHNdfJHm63D1qY+qNCPo9OC3w+NI3qV+0ZRTPmF06wtM0tgBwmb3RN1D+jSL9QTlkGKZdgnSzntFKK0thEzDeObN3TdVFALa/KUNOdgEI2Ztmvhm1BGjBE93tWwGQk2Dy9C65WddhIJiWDP3SDLeQqrdt16CQQsvfp1flbBKBDKvx27Cs79UCsdWff/SMB7cUTwE6HvubGxEoi07+jeM8O+omQnrsvEzn6RuYUXjJOOIR4XvRy4wWhVMGGEM1r4O+H3y1MkVStWMMpexk67WObCUn+G3Zv84x+DcDNYbYuISH1tZWs8pZ1cPhATNDJY4+mD6sj31xuWqyuPbN6UsLH5H0ZsDIQh294sE=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(376002)(39860400002)(396003)(346002)(451199021)(40470700004)(36840700001)(46966006)(30864003)(8936002)(8676002)(41300700001)(47076005)(81166007)(2616005)(2906002)(5660300002)(44832011)(356005)(4326008)(83380400001)(70586007)(6916009)(336012)(36860700001)(40460700003)(426003)(70206006)(6666004)(478600001)(316002)(36756003)(86362001)(40480700001)(82310400005)(1076003)(54906003)(186003)(82740400003)(26005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:16:59.2939
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6ccc9639-ecdb-4de5-16fc-08db3ac152be
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW6PR12MB8734

Remove __init from the following functions so they can be called at runtime:
    1. map_irq_to_domain()
    2. handle_device_interrupts()
    3. map_range_to_domain()
    4. unflatten_dt_node()
    5. __unflatten_device_tree()

Move map_irq_to_domain() prototype from domain_build.h to setup.h.

To avoid breaking the build, the following changes are also made:
1. Move map_irq_to_domain(), handle_device_interrupts() and
    map_range_to_domain() to device.c. After removing __init, these functions
    are no longer specific to domain building, so move them out of
    domain_build.c into device.c.
2. Remove the static qualifier from handle_device_interrupts().

Overall, these changes are done to support dynamic programming of nodes, where
an overlay node is added to the fdt and the unflattened node is added to
dt_host. Furthermore, IRQ and MMIO mapping will be done for the added node.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/arch/arm/device.c                   | 145 ++++++++++++++++++++++++
 xen/arch/arm/domain_build.c             | 142 -----------------------
 xen/arch/arm/include/asm/domain_build.h |   2 -
 xen/arch/arm/include/asm/setup.h        |   6 +
 xen/common/device_tree.c                |  16 +--
 5 files changed, 159 insertions(+), 152 deletions(-)

diff --git a/xen/arch/arm/device.c b/xen/arch/arm/device.c
index ca8539dee5..fec6e29c42 100644
--- a/xen/arch/arm/device.c
+++ b/xen/arch/arm/device.c
@@ -12,6 +12,9 @@
 #include <xen/errno.h>
 #include <xen/init.h>
 #include <xen/lib.h>
+#include <xen/iocap.h>
+#include <asm/domain_build.h>
+#include <asm/setup.h>
 
 extern const struct device_desc _sdevice[], _edevice[];
 extern const struct acpi_device_desc _asdevice[], _aedevice[];
@@ -75,6 +78,148 @@ enum device_class device_get_class(const struct dt_device_node *dev)
     return DEVICE_UNKNOWN;
 }
 
+int map_irq_to_domain(struct domain *d, unsigned int irq,
+                      bool need_mapping, const char *devname)
+{
+    int res;
+
+    res = irq_permit_access(d, irq);
+    if ( res )
+    {
+        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
+               d->domain_id, irq);
+        return res;
+    }
+
+    if ( need_mapping )
+    {
+        /*
+         * Checking the return of vgic_reserve_virq is not
+         * necessary. It should not fail except when we try to map
+         * the IRQ twice. This can legitimately happen if the IRQ is shared
+         */
+        vgic_reserve_virq(d, irq);
+
+        res = route_irq_to_guest(d, irq, irq, devname);
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
+                   irq, d->domain_id);
+            return res;
+        }
+    }
+
+    dt_dprintk("  - IRQ: %u\n", irq);
+    return 0;
+}
+
+int map_range_to_domain(const struct dt_device_node *dev,
+                        u64 addr, u64 len, void *data)
+{
+    struct map_range_data *mr_data = data;
+    struct domain *d = mr_data->d;
+    int res;
+
+    /*
+     * reserved-memory regions are RAM carved out for a special purpose.
+     * They are not MMIO and therefore a domain should not be able to
+     * manage them via the IOMEM interface.
+     */
+    if ( strncasecmp(dt_node_full_name(dev), "/reserved-memory/",
+                     strlen("/reserved-memory/")) != 0 )
+    {
+        res = iomem_permit_access(d, paddr_to_pfn(addr),
+                paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
+        if ( res )
+        {
+            printk(XENLOG_ERR "Unable to permit to dom%d access to"
+                    " 0x%"PRIx64" - 0x%"PRIx64"\n",
+                    d->domain_id,
+                    addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
+            return res;
+        }
+    }
+
+    if ( !mr_data->skip_mapping )
+    {
+        res = map_regions_p2mt(d,
+                               gaddr_to_gfn(addr),
+                               PFN_UP(len),
+                               maddr_to_mfn(addr),
+                               mr_data->p2mt);
+
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to map 0x%"PRIx64
+                   " - 0x%"PRIx64" in domain %d\n",
+                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
+                   d->domain_id);
+            return res;
+        }
+    }
+
+    dt_dprintk("  - MMIO: %010"PRIx64" - %010"PRIx64" P2MType=%x\n",
+               addr, addr + len, mr_data->p2mt);
+
+    return 0;
+}
+
+/*
+ * handle_device_interrupts retrieves the interrupts configuration from
+ * a device tree node and maps those interrupts to the target domain.
+ *
+ * Returns:
+ *   < 0 error
+ *   0   success
+ */
+int handle_device_interrupts(struct domain *d,
+                             struct dt_device_node *dev,
+                             bool need_mapping)
+{
+    unsigned int i, nirq;
+    int res;
+    struct dt_raw_irq rirq;
+
+    nirq = dt_number_of_irq(dev);
+
+    /* Give permission and map IRQs */
+    for ( i = 0; i < nirq; i++ )
+    {
+        res = dt_device_get_raw_irq(dev, i, &rirq);
+        if ( res )
+        {
+            printk(XENLOG_ERR "Unable to retrieve irq %u for %s\n",
+                   i, dt_node_full_name(dev));
+            return res;
+        }
+
+        /*
+         * Don't map IRQ that have no physical meaning
+         * ie: IRQ whose controller is not the GIC
+         */
+        if ( rirq.controller != dt_interrupt_controller )
+        {
+            dt_dprintk("irq %u not connected to primary controller. Connected to %s\n",
+                      i, dt_node_full_name(rirq.controller));
+            continue;
+        }
+
+        res = platform_get_irq(dev, i);
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to get irq %u for %s\n",
+                   i, dt_node_full_name(dev));
+            return res;
+        }
+
+        res = map_irq_to_domain(d, res, need_mapping, dt_node_name(dev));
+        if ( res )
+            return res;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 4f9d4f9d88..6ab18c53ab 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2256,41 +2256,6 @@ int __init make_chosen_node(const struct kernel_info *kinfo)
     return res;
 }
 
-int __init map_irq_to_domain(struct domain *d, unsigned int irq,
-                             bool need_mapping, const char *devname)
-{
-    int res;
-
-    res = irq_permit_access(d, irq);
-    if ( res )
-    {
-        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
-               d->domain_id, irq);
-        return res;
-    }
-
-    if ( need_mapping )
-    {
-        /*
-         * Checking the return of vgic_reserve_virq is not
-         * necessary. It should not fail except when we try to map
-         * the IRQ twice. This can legitimately happen if the IRQ is shared
-         */
-        vgic_reserve_virq(d, irq);
-
-        res = route_irq_to_guest(d, irq, irq, devname);
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
-                   irq, d->domain_id);
-            return res;
-        }
-    }
-
-    dt_dprintk("  - IRQ: %u\n", irq);
-    return 0;
-}
-
 static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
                                        const struct dt_irq *dt_irq,
                                        void *data)
@@ -2322,57 +2287,6 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
     return 0;
 }
 
-int __init map_range_to_domain(const struct dt_device_node *dev,
-                               u64 addr, u64 len, void *data)
-{
-    struct map_range_data *mr_data = data;
-    struct domain *d = mr_data->d;
-    int res;
-
-    /*
-     * reserved-memory regions are RAM carved out for a special purpose.
-     * They are not MMIO and therefore a domain should not be able to
-     * manage them via the IOMEM interface.
-     */
-    if ( strncasecmp(dt_node_full_name(dev), "/reserved-memory/",
-                     strlen("/reserved-memory/")) != 0 )
-    {
-        res = iomem_permit_access(d, paddr_to_pfn(addr),
-                paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
-        if ( res )
-        {
-            printk(XENLOG_ERR "Unable to permit to dom%d access to"
-                    " 0x%"PRIx64" - 0x%"PRIx64"\n",
-                    d->domain_id,
-                    addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
-            return res;
-        }
-    }
-
-    if ( !mr_data->skip_mapping )
-    {
-        res = map_regions_p2mt(d,
-                               gaddr_to_gfn(addr),
-                               PFN_UP(len),
-                               maddr_to_mfn(addr),
-                               mr_data->p2mt);
-
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to map 0x%"PRIx64
-                   " - 0x%"PRIx64" in domain %d\n",
-                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
-                   d->domain_id);
-            return res;
-        }
-    }
-
-    dt_dprintk("  - MMIO: %010"PRIx64" - %010"PRIx64" P2MType=%x\n",
-               addr, addr + len, mr_data->p2mt);
-
-    return 0;
-}
-
 /*
  * For a node which describes a discoverable bus (such as a PCI bus)
  * then we may need to perform additional mappings in order to make
@@ -2400,62 +2314,6 @@ static int __init map_device_children(const struct dt_device_node *dev,
     return 0;
 }
 
-/*
- * handle_device_interrupts retrieves the interrupts configuration from
- * a device tree node and maps those interrupts to the target domain.
- *
- * Returns:
- *   < 0 error
- *   0   success
- */
-static int __init handle_device_interrupts(struct domain *d,
-                                           struct dt_device_node *dev,
-                                           bool need_mapping)
-{
-    unsigned int i, nirq;
-    int res;
-    struct dt_raw_irq rirq;
-
-    nirq = dt_number_of_irq(dev);
-
-    /* Give permission and map IRQs */
-    for ( i = 0; i < nirq; i++ )
-    {
-        res = dt_device_get_raw_irq(dev, i, &rirq);
-        if ( res )
-        {
-            printk(XENLOG_ERR "Unable to retrieve irq %u for %s\n",
-                   i, dt_node_full_name(dev));
-            return res;
-        }
-
-        /*
-         * Don't map IRQ that have no physical meaning
-         * ie: IRQ whose controller is not the GIC
-         */
-        if ( rirq.controller != dt_interrupt_controller )
-        {
-            dt_dprintk("irq %u not connected to primary controller. Connected to %s\n",
-                      i, dt_node_full_name(rirq.controller));
-            continue;
-        }
-
-        res = platform_get_irq(dev, i);
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to get irq %u for %s\n",
-                   i, dt_node_full_name(dev));
-            return res;
-        }
-
-        res = map_irq_to_domain(d, res, need_mapping, dt_node_name(dev));
-        if ( res )
-            return res;
-    }
-
-    return 0;
-}
-
 /*
  * For a given device node:
  *  - Give permission to the guest to manage IRQ and MMIO range
diff --git a/xen/arch/arm/include/asm/domain_build.h b/xen/arch/arm/include/asm/domain_build.h
index 34ceddc995..b9329c9ee0 100644
--- a/xen/arch/arm/include/asm/domain_build.h
+++ b/xen/arch/arm/include/asm/domain_build.h
@@ -4,8 +4,6 @@
 #include <xen/sched.h>
 #include <asm/kernel.h>
 
-int map_irq_to_domain(struct domain *d, unsigned int irq,
-                      bool need_mapping, const char *devname);
 int make_chosen_node(const struct kernel_info *kinfo);
 void evtchn_allocate(struct domain *d);
 
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index a926f30a2b..1d636e8a4a 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -163,9 +163,15 @@ void device_tree_get_reg(const __be32 **cell, u32 address_cells,
 u32 device_tree_get_u32(const void *fdt, int node,
                         const char *prop_name, u32 dflt);
 
+int handle_device_interrupts(struct domain *d, struct dt_device_node *dev,
+                             bool need_mapping);
+
 int map_range_to_domain(const struct dt_device_node *dev,
                         u64 addr, u64 len, void *data);
 
+int map_irq_to_domain(struct domain *d, unsigned int irq,
+                      bool need_mapping, const char *devname);
+
 extern const char __ro_after_init_start[], __ro_after_init_end[];
 
 struct init_info
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 6c9712ab7b..aed38ff63c 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1811,12 +1811,12 @@ int dt_count_phandle_with_args(const struct dt_device_node *np,
  * @allnextpp: pointer to ->allnext from last allocated device_node
  * @fpsize: Size of the node path up at the current depth.
  */
-static unsigned long __init unflatten_dt_node(const void *fdt,
-                                              unsigned long mem,
-                                              unsigned long *p,
-                                              struct dt_device_node *dad,
-                                              struct dt_device_node ***allnextpp,
-                                              unsigned long fpsize)
+static unsigned long unflatten_dt_node(const void *fdt,
+                                       unsigned long mem,
+                                       unsigned long *p,
+                                       struct dt_device_node *dad,
+                                       struct dt_device_node ***allnextpp,
+                                       unsigned long fpsize)
 {
     struct dt_device_node *np;
     struct dt_property *pp, **prev_pp = NULL;
@@ -2056,8 +2056,8 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
  * @fdt: The fdt to expand
  * @mynodes: The device_node tree created by the call
  */
-static void __init __unflatten_device_tree(const void *fdt,
-                                           struct dt_device_node **mynodes)
+static void __unflatten_device_tree(const void *fdt,
+                                    struct dt_device_node **mynodes)
 {
     unsigned long start, mem, size;
     struct dt_device_node **allnextp = mynodes;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519771.806754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUL-0005OG-0z; Tue, 11 Apr 2023 19:17:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519771.806754; Tue, 11 Apr 2023 19:17:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUK-0005MU-SE; Tue, 11 Apr 2023 19:17:08 +0000
Received: by outflank-mailman (input) for mailman id 519771;
 Tue, 11 Apr 2023 19:17:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUI-0004DR-7o
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:06 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2062b.outbound.protection.outlook.com
 [2a01:111:f400:fe59::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 70c37f42-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:04 +0200 (CEST)
Received: from MW3PR05CA0004.namprd05.prod.outlook.com (2603:10b6:303:2b::9)
 by SJ2PR12MB7962.namprd12.prod.outlook.com (2603:10b6:a03:4c2::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:01 +0000
Received: from CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:2b:cafe::87) by MW3PR05CA0004.outlook.office365.com
 (2603:10b6:303:2b::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:01 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT088.mail.protection.outlook.com (10.13.175.131) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:01 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:16:56 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:16:56 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:16:55 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70c37f42-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UxpCf3u8aQhuKKrQ0T4SThXXP1/7rOEjefrnQSrBSTFBH+2vet0/BkN9/96BSOh7BRc29yBAuGPfWBrzsmO+z5fnIfCwZv+/W19BY/NV8YANilCikYT1V6SYpNidRCML6foQhvXzNpSPdoWeuYsSgXeXBoRcw30A9hLweGC05laY3bfZd0nKDfZygI2pHhb+DVTBGxMKHmbwxn5ZVbZ7WJk6S5aqZRc1vjlpRsq2ZVXjC8+8eeuht7v/iiIdbRQyVVxOlmpJIwUjwPmg0cum0s/L0CtRf+4gtmJkTKc71Q13em4p8IVwFzZY2HoSwbOnofwG8X+xRU252Zq5wh0tew==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=y5jE5w9tUBXYNb4GkaW66AGdvE8156w4aHYPgYkE7PU=;
 b=J8chaI68GCgVd6yiVlsCc2D6W9f8W6rt3VGRxdijG5QkKl4XmOHnBv8yIOYI/Rgw6Klo5ZQhcfIdeK4P15dtOyfRXRvLNo/MftvzVHrbSeH/vaYcM9RFyEJ7NUhxXpfNqKRi+aR0qwKnJVPtlrB7QQh00GsQM9m0Rodu36ZTFZeAjjD47Mx2VrRjvv9dtW52pf3kcqkolch+iGosTYvrcT784CYrbxufSy0xOdAiTpXV1A0iS3ImLKKJp6281qCGzbhF9c9Xln6UIHd4NKB3CWweckTiIqaye45CAj0ZJkXf6BnGjxwtBMmifjilgQ5RjO+UazrA0bzcIwYk090FzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=y5jE5w9tUBXYNb4GkaW66AGdvE8156w4aHYPgYkE7PU=;
 b=wxWB/BWXY2CO32y5vRS7236kub8VwKV+uQFpF+1OL7L0gNQP0NFpjSVE+5iJrM4LMKSlT0XcPeLcswieKsQ8nTGed5rjuOLbVuJXYhTiPU4/xfe0O9OCoYPR1oCufxMZ35hDCStB3gc6PZlfTqyQ9vA0Q0yoWaW96zC54AlnEGs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v5 03/17] xen/arm: Add CONFIG_OVERLAY_DTB
Date: Tue, 11 Apr 2023 12:16:04 -0700
Message-ID: <20230411191636.26926-4-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT088:EE_|SJ2PR12MB7962:EE_
X-MS-Office365-Filtering-Correlation-Id: 93a1cd42-a56c-4eb5-c800-08db3ac153f8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	XWH3oXWHqcZD9sTbyewkrsWBFCMU89fDJmR4pzMiPZW5pze/IlzoIeNqgMO+onWgXaj7kkAxDBoDLNVOcSQsC8idNUm+ugK7ePoigLJoBTkuI4LPtr6TpF1Hu96MhwTrlzVFaMa8sY/LwPPV4Jv9pxEySrnK4eUpkPnpbZIFmvfQwFd2u2XDDj5UI2KTlnkYAVF8I4WaxTvn1RsWJ01pNp1scH91N4EH6kjSdx9cp9nn4MhQKQyquijx0FvB96sd8lhRvXqKnGN4ej9l87xkd6gaBZFF2QDaWi/bx8h82ONNlZbn/6Bdf9FCl4EgW9/S3afztXOAIZr0zPtDkxpYUmdBxecv9dJ0AWU5e18XbD9auKVMJm/1NlaTK8CnG5SPzNnfl840lBkZdlL1otUt/ycx6XbjnQ8Pg18BFGRMa54ksmJgSg7RBkpYZ0ebggriUDOCaJTYHTG1iDX7k35EY/ACLzaqqPtlL8QkBkYeLj32xs5mlb5IuZUXzwgAMxGmvvp6TOPOE+EMrXpxYCmby1nAHtZ6i/vJHIVuc8xH9wvDpCtCcA1gw/nFOXZ0eiJaWGH/JmKExlvCImja4S1yoLikS/kf61uDhiWCluec1ISqzh7nToVQZBqAlEykzumK+3IM04rzaalN6zkN/upDRJ+JUwM+HvF/xC7nLvPig/jILnqLMs5OG7+4ucnYho7ogyInqZSxFbkoEr5V6l8vHITm7LFWVa+3ZtQdrpW0ie8=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(376002)(346002)(39860400002)(451199021)(36840700001)(40470700004)(46966006)(36860700001)(70206006)(70586007)(8676002)(4326008)(6916009)(5660300002)(86362001)(186003)(426003)(336012)(40480700001)(41300700001)(6666004)(82310400005)(316002)(8936002)(478600001)(54906003)(2906002)(36756003)(40460700003)(26005)(44832011)(82740400003)(1076003)(2616005)(47076005)(356005)(81166007)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:01.3563
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 93a1cd42-a56c-4eb5-c800-08db3ac153f8
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB7962

Introduce a config option that lets the user enable support for adding/removing
device tree nodes using a device tree binary overlay (.dtbo).
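
Since the option is only offered `if UNSUPPORTED`, enabling it in a build would
look roughly like the following .config fragment (illustrative, not part of
this patch):

```
CONFIG_UNSUPPORTED=y
CONFIG_OVERLAY_DTB=y
```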

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 SUPPORT.md           | 6 ++++++
 xen/arch/arm/Kconfig | 5 +++++
 2 files changed, 11 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index aa1940e55f..0a31f40af4 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -822,6 +822,12 @@ No support for QEMU backends in a 16K or 64K domain.
 
     Status: Supported
 
+### Device Tree Overlays
+
+Add/Remove device tree nodes using a device tree overlay binary (.dtbo).
+
+    Status: Supported for ARM
+
 ### ARM: Guest ACPI support
 
     Status: Supported
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..1fe3d698a5 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -53,6 +53,11 @@ config HAS_ITS
         bool "GICv3 ITS MSI controller support (UNSUPPORTED)" if UNSUPPORTED
         depends on GICV3 && !NEW_VGIC && !ARM_32
 
+config OVERLAY_DTB
+	bool "DTB overlay support (UNSUPPORTED)" if UNSUPPORTED
+	help
+	  Dynamic addition/removal of Xen device tree nodes using a dtbo.
+
 config HVM
         def_bool y
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519772.806761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUL-0005Uc-Hg; Tue, 11 Apr 2023 19:17:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519772.806761; Tue, 11 Apr 2023 19:17:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUL-0005Si-A6; Tue, 11 Apr 2023 19:17:09 +0000
Received: by outflank-mailman (input) for mailman id 519772;
 Tue, 11 Apr 2023 19:17:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUJ-0004Ta-L8
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:07 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20613.outbound.protection.outlook.com
 [2a01:111:f400:7eab::613])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7205330f-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:06 +0200 (CEST)
Received: from MW3PR05CA0006.namprd05.prod.outlook.com (2603:10b6:303:2b::11)
 by DM4PR12MB5357.namprd12.prod.outlook.com (2603:10b6:5:39b::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:03 +0000
Received: from CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:2b:cafe::94) by MW3PR05CA0006.outlook.office365.com
 (2603:10b6:303:2b::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:02 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT088.mail.protection.outlook.com (10.13.175.131) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:02 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:16:58 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:16:58 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:16:58 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7205330f-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WSIe8IWtkogZDuJROEuyEa4L5qtz0r0ub7oLaSfO1oYNN6uZkXy/AYkEa3PeoOYonr9wYRQt+vHxHp+g/sYWlxHPL5FypP2L333pRjkvdOHBFru51nKqqPmtHjUq8o8GrG5y8yDBLzH8tnluod1ewfCDKM4wcoGOPGGStdCZjVWAwUNmZr5A9t2PuUIYrZjRaoSZUWNZTqyMP+OYSaGEH1hCgOgsJ3vJs2EyLcRAlUvGXsMRB0pozooPmglFCVbN34KWvZrvQUmwLSVu0nZ1ELhtlGW0nNHvKq6OCci9womzrMTovnYOKR2qZWd8RrqKtlTmvon+x8XhFJpGe+Fl9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IPRvr0PydrbevxDRuC5bBe8hAAj8BLWppbzz+G6AD20=;
 b=Q6QuNO/MJsxQ0Mfgmqv9Fc2TTQvDV0zliZG80oSqKGwzzpfT9J7UZ2FuPaiYfXU/qiOYHT7fHQP1t/QBxkf012+VIOerMfB3Dt6nDxwpI6myg11MXW+FBGeex1p/gIZzfUq33QTMvrNskEHqisXKNHnCX7jcmrseAw5RwEvaEafIOJ2EThr5w/RXlVXFLxiGBqOtb8NJBRHVXwuxBkFFg/2byl1vVO1ABs4TYzz0/3GqkGZZEFmVOoH2X8bA4bhzAOBAwZicx47pyOkjCng0OPhHTyoKK5D2zLgqyQj2e2lwqaorRkw+Gg9QI3UT3rfGtJroT25tHoWNkBIhp7696Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IPRvr0PydrbevxDRuC5bBe8hAAj8BLWppbzz+G6AD20=;
 b=KDu9AlO4fQb4RnMxIBoAkLNp+pI1dLaayOR4hzEGfRdnkwnBMysAjR/rLTC3LnlSxfeTfOoBcgV7J2cmTTfYIMihNqqMdDt0ekAxFiY2Rvo00w9cwKWP85ONIYg9S2Q4lQM+yFt67X9yvcEUyhncpYclWgTkKAv6Gp/j6YjnUoc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 06/17] xen/device-tree: Add device_tree_find_node_by_path() to find nodes in device tree
Date: Tue, 11 Apr 2023 12:16:07 -0700
Message-ID: <20230411191636.26926-7-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT088:EE_|DM4PR12MB5357:EE_
X-MS-Office365-Filtering-Correlation-Id: 92748e6e-4e36-4f2d-f5b0-08db3ac154cc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	POXz1WG1hmvtmv5RRF+iVjz+L8Z++hE0T36YgoND+tvNfXU/Rd6SPUicT57Mcps7aDrp6/TmvJJRrlH6FadTG1hn4VFTdykLIM4JnRdvcd1mf/i1lRmcPBSrHMP+Im8G8qLaN7BXyrxnrrH/yOOwAzOpRDrG8IOa12/TZkyjJadsajlTtO4WDHpR8O+nVF+NkYSTWfDzv7AqcPsvh9r3KCDIzHfBFurWQ6Kz3waGiXRtZtGZ8BDRDJ+Ehde4oNdx/Jhh5iuIEPauPu6juAmCzj6Jw07AJ4Bo6qvTPRfHt7AM6yIxi7U6+vVNDzyVZGYtuxhRP/MyQKl5wdkGY3Y/U/dGmoxFNMyNVRvKaL69tpU+m9J7iGTxkbbs+O/EIboBocgQrs2YE6q3MMpNR0pAw3av7s45/S2QnBRJgvfcYOxuvoT31hUUewjypoQ2OI+Ce+RbA6xziFmiOaE0XOWv5buUBnwRohqZZkALRJRtuUKYsDDDKs539xqAA86hV2ihzXiQzSAkL7GWAaMm5RBxEH96prsa3Ha9XE4D67wF6GK6vkDkzuLGzurXbz4rw7Q2wMTZdaihTT04kxLSPbswtunugHmQQB/eKBcU1LZr2dwByFx+JEfIIXiRF3SOmnEwaDeqK6nwrGqe0tmJoYnQ+rsI+rCoLcosJVnzaOchF7EnmLYwRMMP9ENiQhoAfopd2dzm2C63Ud1EGkJKAnjkzZXwSKT5coSqN2lJ962XuiXWU7p/NRVEOg94z+CrKbPrKEUI1p+5Ot9Zk6tZkaCHLg==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(396003)(346002)(136003)(451199021)(46966006)(40470700004)(36840700001)(40460700003)(4326008)(478600001)(6916009)(54906003)(8676002)(70206006)(41300700001)(70586007)(316002)(86362001)(36756003)(83380400001)(426003)(2616005)(336012)(26005)(1076003)(6666004)(2906002)(8936002)(44832011)(40480700001)(5660300002)(82310400005)(36860700001)(82740400003)(356005)(186003)(81166007)(47076005)(37363002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:02.7468
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 92748e6e-4e36-4f2d-f5b0-08db3ac154cc
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB5357

Add device_tree_find_node_by_path() to find a node matching a full path within
a given unflattened device tree.

Reason behind this function:
    Each time overlay nodes are added using a .dtbo, a new fdt (a memcpy of
    device_tree_flattened) is created and updated with the overlay nodes. This
    updated fdt is further unflattened to a dt_host_new. Next, we need to find
    the overlay nodes in dt_host_new, find each overlay node's parent in
    dt_host, and add the nodes as children under their parent in dt_host. Thus,
    we need this function to search for nodes in different unflattened device
    trees.
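
The search itself is a plain walk over the unflattened node list, comparing
each node's full path against the requested one. A minimal self-contained
sketch of that idea (the struct and function names here are illustrative
stand-ins, not Xen's actual types):

```c
#include <stddef.h>
#include <string.h>

/* Illustrative stand-in for Xen's struct dt_device_node: each node keeps
 * its full path and a pointer to the next node in unflattened order. */
struct demo_node {
    const char *full_name;
    struct demo_node *allnext;
};

/* Walk the given tree (passed as a parameter, rather than implicitly
 * using a global such as dt_host) and return the node whose full path
 * matches, or NULL if none does -- mirroring the shape of
 * device_tree_find_node_by_path() in this patch. */
struct demo_node *demo_find_node_by_path(struct demo_node *root,
                                         const char *path)
{
    struct demo_node *np;

    for ( np = root; np != NULL; np = np->allnext )
        if ( np->full_name && strcmp(np->full_name, path) == 0 )
            return np;

    return NULL;
}
```

Because the tree to search is a parameter, the same helper can be pointed at
either the original tree or a freshly unflattened overlay tree.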

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/device_tree.c      |  5 +++--
 xen/include/xen/device_tree.h | 17 +++++++++++++++--
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index bf847b2584..507b4ac5b6 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -358,11 +358,12 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
     return np;
 }
 
-struct dt_device_node *dt_find_node_by_path(const char *path)
+struct dt_device_node *device_tree_find_node_by_path(struct dt_device_node *dt,
+                                                     const char *path)
 {
     struct dt_device_node *np;
 
-    dt_for_each_device_node(dt_host, np)
+    dt_for_each_device_node(dt, np)
         if ( np->full_name && (dt_node_cmp(np->full_name, path) == 0) )
             break;
 
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 58ac12abe3..998f972ebc 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -534,13 +534,26 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
 struct dt_device_node *dt_find_node_by_alias(const char *alias);
 
 /**
- * dt_find_node_by_path - Find a node matching a full DT path
+ * device_tree_find_node_by_path - Generic function to find a node matching the
+ * full DT path in any given unflattened device tree
+ * @dt: The device tree to search
  * @path: The full path to match
  *
  * Returns a node pointer.
  */
-struct dt_device_node *dt_find_node_by_path(const char *path);
+struct dt_device_node *device_tree_find_node_by_path(struct dt_device_node *dt,
+                                                     const char *path);
 
+/**
+ * dt_find_node_by_path - Find a node matching a full DT path in dt_host
+ * @path: The full path to match
+ *
+ * Returns a node pointer.
+ */
+static inline struct dt_device_node *dt_find_node_by_path(const char *path)
+{
+    return device_tree_find_node_by_path(dt_host, path);
+}
 
 /**
  * dt_find_node_by_gpath - Same as dt_find_node_by_path but retrieve the
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519773.806769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUM-0005hl-A9; Tue, 11 Apr 2023 19:17:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519773.806769; Tue, 11 Apr 2023 19:17:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUM-0005gq-4P; Tue, 11 Apr 2023 19:17:10 +0000
Received: by outflank-mailman (input) for mailman id 519773;
 Tue, 11 Apr 2023 19:17:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUK-0004Ta-Pp
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:08 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20611.outbound.protection.outlook.com
 [2a01:111:f400:fe59::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 739189b0-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:08 +0200 (CEST)
Received: from MW3PR05CA0017.namprd05.prod.outlook.com (2603:10b6:303:2b::22)
 by BN9PR12MB5257.namprd12.prod.outlook.com (2603:10b6:408:11e::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35; Tue, 11 Apr
 2023 19:17:05 +0000
Received: from CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:2b:cafe::4a) by MW3PR05CA0017.outlook.office365.com
 (2603:10b6:303:2b::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.27 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:05 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT088.mail.protection.outlook.com (10.13.175.131) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:04 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:04 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:03 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:03 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 739189b0-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RnOWGfhR+alOXtb1kEbH7mY9GHTUEb4Fh/Fq5W+ejuEBGScgkN+8M1kOzS1mFU4cA3rfTYuCUVidDnhaZwcIb3yH4MUbng3sDhlaGnQNs2bLeLWU+QLALHyOh6CCJhJQ35Fv0tfwvwbfL/RVHk4qdplIJ6NK9iWXfIv59MjOsILUtV/+MaRpY9HuwnDQh1R2S+1WxwGgc2i6+DPVMRL9v6qFUU46v4NtKiUIhGPHgtmRxH0AoZsyJEY9Kjj3m1REYDdpCUA4+p4qwj21PDv3c+9WAoLdNmEiTWZ1c2zIcG7SesZD2zVy+jc545NHBmf4mtQzOAGaAOrGiljWyw30fw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zOQW4Omju1dTUno8ADGoDg+I28lOLIV01naa3ZWrW1o=;
 b=AfQz2DOlaIMvQKAfoj502N6k0Jzk3taYs6o6qwigXCHpwW9phuQL1ITtRk4IU4J3E9NW7amG3/ALbGXLT5qtSR/E78kmN0/SGnoIuH3ZklgOvT69Vgiste+lkcBBzOTpqZnmNqJHaI8nvqy9GkgQxNOezhqE18ZkBNOVddwf1TgIaJuwB4/MVqX7ZryKA4OXdowle9i35wq0g55QFrE2CaAXwmnFk7yQ5C72gzfecJg05Z8FViqa9a0TieJdOuPJzwaF3XJoql9riQPR7cpExcFwxk7tSvAArFJhJxJ0qDJhDi7kFBDo9XCjVSBw5u7OLEOkT5yS6dCv/wG+WRmLgg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zOQW4Omju1dTUno8ADGoDg+I28lOLIV01naa3ZWrW1o=;
 b=yxZPsIxPbBZxAQRHbKz6JQv92aqGSLAVhz9iyCR72zNrtSEZDD/8sdBXFUVIUlg8q8cq3Nz60IlR1Gp2e7TcV6+htFinDnkGslGbL1h/edwWvv6KV5dmwrIBDYGlYaMPQ0QtCijJ0IRHkCLQQn4cHAy94E8McblhaOHAsSIB5ng=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 12/17] common/device_tree: Add rwlock for dt_host
Date: Tue, 11 Apr 2023 12:16:13 -0700
Message-ID: <20230411191636.26926-13-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT088:EE_|BN9PR12MB5257:EE_
X-MS-Office365-Filtering-Correlation-Id: 9c5bf8b1-3ea5-4fb3-321a-08db3ac1561d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	QhfFyki35SJbRekBupI7TtH1V3WKYpDwoOfoHKrgY+p0KezPSeQt3siAWShc/QMBXA+1E2Alz5IVPwpto5fOu0pPF3dg1ZSOaysK/53rpx8PXrCDgOKlLGXLzMl8nR5wlJhLoHzmvG5bzi9shRnWClX6oAJRfKsSZqE0A27pLeOMJFcig7YwOd73TTCH37VmFh46V9uY66e3pWOZlm2pVv9ABTpMJaVcnRqtq4BgYqR9JGyCzuZB7H+vxmxoUyKdi97Z+bfAdJJsOVB7yamKBwyS9FLGOG0h8OnvnH8/3VudTqh0TaoRqhhZ/tSALeKw7cxetA1NkG8gKfCuzfpd1NtSfEbHB452W64YVPYrmtr7OrMaw4uga/aqEPeRRmt+eJ6TC1BXum2xBhu0eofYNTIYXg6SROb8t0CxYbPKCVGfqeu9h/XgTaLv5SGd9MTUR/3R8aENWmMrHhHESYCIj9aTFP0KjIV9WgU6hcZ2Ovhx3N6sVjA4W7XY3cwbsif9JTp08qjzIFanvRRZ011vcEKrCxhDKupQ9WbDjejzxoo35ggY0btzTNQDAYe83SgaIc0SH516ef74Kb1MN+zPK+Ozu3pZtoeYCOOPjp4tS+ZqjZjKTs/Pi3VJ7go13Q4q04+vWqTU1Jzarorqoku8E5LYWnFcWulNlnjo1UB6k8qMLAZqBSt3LAjT4QccAqgXz8W3GGqRfz8aLuVruJQvuzgBavv9/PZa4GCONwDeNHg=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(396003)(346002)(136003)(451199021)(40470700004)(36840700001)(46966006)(478600001)(40480700001)(47076005)(36756003)(83380400001)(86362001)(40460700003)(81166007)(356005)(82740400003)(426003)(2616005)(2906002)(316002)(1076003)(336012)(36860700001)(54906003)(44832011)(26005)(186003)(8676002)(8936002)(6666004)(6916009)(82310400005)(41300700001)(5660300002)(70586007)(4326008)(70206006)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:04.9504
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9c5bf8b1-3ea5-4fb3-321a-08db3ac1561d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5257

 Dynamic programming operations will modify dt_host, while other functions may
 be browsing dt_host at the same time. To avoid such race conditions, add a
 rwlock that is taken for reading while browsing dt_host at runtime.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/device_tree.c              |  3 +++
 xen/drivers/passthrough/device_tree.c | 39 +++++++++++++++++++++++++++
 xen/include/xen/device_tree.h         |  6 +++++
 3 files changed, 48 insertions(+)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 507b4ac5b6..b330e6a013 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -2098,6 +2098,9 @@ void unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
     *allnextp = NULL;
 
     dt_dprintk(" <- unflatten_device_tree()\n");
+
+    /* Init r/w lock for host device tree. */
+    rwlock_init(&dt_host->lock);
 }
 
 static void dt_alias_add(struct dt_alias_prop *ap,
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index a77a217f3d..74f994b9da 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -112,6 +112,8 @@ int iommu_release_dt_devices(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return 0;
 
+    read_lock(&dt_host->lock);
+
     list_for_each_entry_safe(dev, _dev, &hd->dt_devices, domain_list)
     {
         rc = iommu_deassign_dt_device(d, dev);
@@ -119,10 +121,14 @@ int iommu_release_dt_devices(struct domain *d)
         {
             dprintk(XENLOG_ERR, "Failed to deassign %s in domain %u\n",
                     dt_node_full_name(dev), d->domain_id);
+
+            read_unlock(&dt_host->lock);
             return rc;
         }
     }
 
+    read_unlock(&dt_host->lock);
+
     return 0;
 }
 
@@ -259,11 +265,15 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
 
         spin_lock(&dtdevs_lock);
 
+        read_lock(&dt_host->lock);
+
         ret = dt_find_node_by_gpath(domctl->u.assign_device.u.dt.path,
                                     domctl->u.assign_device.u.dt.size,
                                     &dev);
         if ( ret )
         {
+            read_unlock(&dt_host->lock);
+
             spin_unlock(&dtdevs_lock);
 
             break;
@@ -272,6 +282,8 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         ret = xsm_assign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
         if ( ret )
         {
+            read_unlock(&dt_host->lock);
+
             spin_unlock(&dtdevs_lock);
 
             break;
@@ -287,6 +299,8 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
                 ret = -EINVAL;
             }
 
+            read_unlock(&dt_host->lock);
+
             spin_unlock(&dtdevs_lock);
 
             break;
@@ -295,22 +309,31 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         spin_unlock(&dtdevs_lock);
 
         if ( d == dom_io )
+        {
+            read_unlock(&dt_host->lock);
             return -EINVAL;
+        }
 
         ret = iommu_add_dt_device(dev);
         if ( ret < 0 )
         {
             printk(XENLOG_G_ERR "Failed to add %s to the IOMMU\n",
                    dt_node_full_name(dev));
+
+            read_unlock(&dt_host->lock);
             break;
         }
 
         ret = iommu_assign_dt_device(d, dev);
 
         if ( ret )
+        {
             printk(XENLOG_G_ERR "XEN_DOMCTL_assign_dt_device: assign \"%s\""
                    " to dom%u failed (%d)\n",
                    dt_node_full_name(dev), d->domain_id, ret);
+        }
+
+        read_unlock(&dt_host->lock);
         break;
 
     case XEN_DOMCTL_deassign_device:
@@ -322,25 +345,41 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( domctl->u.assign_device.flags )
             break;
 
+        read_lock(&dt_host->lock);
+
         ret = dt_find_node_by_gpath(domctl->u.assign_device.u.dt.path,
                                     domctl->u.assign_device.u.dt.size,
                                     &dev);
         if ( ret )
+        {
+            read_unlock(&dt_host->lock);
             break;
+        }
 
         ret = xsm_deassign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
+
         if ( ret )
+        {
+            read_unlock(&dt_host->lock);
             break;
+        }
 
         if ( d == dom_io )
+        {
+            read_unlock(&dt_host->lock);
             return -EINVAL;
+        }
 
         ret = iommu_deassign_dt_device(d, dev);
 
         if ( ret )
+        {
             printk(XENLOG_G_ERR "XEN_DOMCTL_assign_dt_device: assign \"%s\""
                    " to dom%u failed (%d)\n",
                    dt_node_full_name(dev), d->domain_id, ret);
+        }
+
+        read_unlock(&dt_host->lock);
         break;
 
     default:
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 998f972ebc..b7272bbccc 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -18,6 +18,7 @@
 #include <xen/string.h>
 #include <xen/types.h>
 #include <xen/list.h>
+#include <xen/rwlock.h>
 
 #define DEVICE_TREE_MAX_DEPTH 16
 
@@ -106,6 +107,11 @@ struct dt_device_node {
     struct list_head domain_list;
 
     struct device dev;
+
+    /*
+     * Lock that protects r/w updates to unflattened device tree i.e. dt_host.
+     */
+    rwlock_t lock;
 };
 
 #define dt_to_dev(dt_node)  (&(dt_node)->dev)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519774.806777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUN-0005rE-2i; Tue, 11 Apr 2023 19:17:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519774.806777; Tue, 11 Apr 2023 19:17:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUM-0005nZ-Kb; Tue, 11 Apr 2023 19:17:10 +0000
Received: by outflank-mailman (input) for mailman id 519774;
 Tue, 11 Apr 2023 19:17:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUL-0004DR-4f
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:09 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20600.outbound.protection.outlook.com
 [2a01:111:f400:7e89::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 727aca3a-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:07 +0200 (CEST)
Received: from MW3PR05CA0015.namprd05.prod.outlook.com (2603:10b6:303:2b::20)
 by SA3PR12MB7975.namprd12.prod.outlook.com (2603:10b6:806:320::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35; Tue, 11 Apr
 2023 19:17:03 +0000
Received: from CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:2b:cafe::98) by MW3PR05CA0015.outlook.office365.com
 (2603:10b6:303:2b::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.27 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:03 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT088.mail.protection.outlook.com (10.13.175.131) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:03 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:16:59 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:16:59 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:16:59 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 727aca3a-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DDZel6WEBGb2aTYAnYAotrmzVbZ3s3kcfUhLpszVA23+8sB6m7eS38UT9bYqoRo6RP23ED4kQur/HOYEtasxuVNCH2OINnUCwHgjJBrrPz/EQEzneYHUaMhiwE6tSOH7ZTDIbyopUVlTXoj1irUeAIHzr0qJa79iKrprkvfH0bGG478A723b3fPn0GdS2JgFHW8dj8i9FwM19w51JAtejG+N24Sw9LsGNC5yJy9IV8OiftA4AyouNKMQhf9HDP23SNhsv7wJqyAC+07IxAtkNBrK0996Ut17FGuOYWMYtSLrVwrJbwiG7C3759csDuFlp7vdsfzDkoqFHwGNTVXo8g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SvjprfsFjT2q+G0BBsMISOBPht18qx5Ov6rcUISgUjE=;
 b=dw5LUMyKeF5kkahMH7yGjNK1hZL2dfKx4esYNnvJI8FmdkkhcX8LaD/tLlW+JYtNFKxonu4qD8NlSbJO9k7b6LEo5QMSpEznWwTFYdC8p/kK9JkcBBv0FPlcWd7M6PN6EdR1nuGk6dCABARmo33RFDXAM4OYiTXk16JNTyg6My8aoS8Im3Yae7e+z3VwdLecvoxpgiMWhUtLotEM8m3Nz7NxyCcrc031cS8qmdxC2SNOr6fGWaEv/VleGwn6A8tOwATNZKIvLo++UhVR8W5Az4qEAa+jS8kt6vu1yl8bi0nblYOPf1hMg3ZR2UXNdVip2ZhsluYEnJZb05efiLzOoQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SvjprfsFjT2q+G0BBsMISOBPht18qx5Ov6rcUISgUjE=;
 b=37Bcs5BNxuIQA4OVW40raq+uSYlN3KhcoUpYSnVGbWlCEJDzDZYDouCqG/QsSbIcef6GSJ6TPWTkiNkRQPAYIrAAWmRrEtsOY9jdJolx4wj10djxsKpGqzim6xKu8aUSgLSfZDP8Av4z2wQDegFm51ZAjcZrZE1XNNQewGfi+1k=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Rahul Singh <rahul.singh@arm.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v5 07/17] xen/smmu: Add remove_device callback for smmu_iommu ops
Date: Tue, 11 Apr 2023 12:16:08 -0700
Message-ID: <20230411191636.26926-8-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT088:EE_|SA3PR12MB7975:EE_
X-MS-Office365-Filtering-Correlation-Id: 0f9a7f2f-a193-4b53-18b5-08db3ac15512
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:03.1999
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0f9a7f2f-a193-4b53-18b5-08db3ac15512
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB7975

Add a remove_device callback for removing a device entry from the smmu-master
tree, using the following steps:
1. Find whether an SMMU master exists for the device node.
2. Remove the SMMU master.
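
The two steps can be sketched with a simplified, self-contained registry. To keep the sketch short, a singly linked list stands in for the smmu->masters rb-tree, and all names here (struct registry, find_master(), remove_master()) are hypothetical rather than the SMMU driver's own:

```c
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for the per-SMMU master registry (list, not rb-tree). */
struct master {
    const char *name;
    struct master *next;
};

struct registry {
    struct master *head;
};

static void add_master(struct registry *r, const char *name)
{
    struct master *m = malloc(sizeof(*m));

    m->name = name;
    m->next = r->head;
    r->head = m;
}

/* Step 1: find the master registered for a device node name, if any. */
static struct master *find_master(struct registry *r, const char *name)
{
    struct master *m;

    for ( m = r->head; m; m = m->next )
        if ( strcmp(m->name, name) == 0 )
            return m;

    return NULL;
}

/* Step 2: unlink and free it; fail like the patch when nothing is found. */
static int remove_master(struct registry *r, const char *name)
{
    struct master **pp;

    for ( pp = &r->head; *pp; pp = &(*pp)->next )
    {
        if ( strcmp((*pp)->name, name) == 0 )
        {
            struct master *victim = *pp;

            *pp = victim->next;
            free(victim);
            return 0;
        }
    }

    return -22; /* -EINVAL */
}
```

In the real driver the lookup and unlink go through find_smmu_master()/rb_erase(), and removal additionally clears the node's is_protected flag before freeing the master.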

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/drivers/passthrough/arm/smmu.c | 56 ++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0a514821b3..14e15f1bc6 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -816,6 +816,19 @@ static int insert_smmu_master(struct arm_smmu_device *smmu,
 	return 0;
 }
 
+static int remove_smmu_master(struct arm_smmu_device *smmu,
+			      struct arm_smmu_master *master)
+{
+	if (!smmu->masters.rb_node) {
+		ASSERT_UNREACHABLE();
+		return -ENOENT;
+	}
+
+	rb_erase(&master->node, &smmu->masters);
+
+	return 0;
+}
+
 static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 					 struct device *dev,
 					 struct iommu_fwspec *fwspec)
@@ -853,6 +866,32 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 	return insert_smmu_master(smmu, master);
 }
 
+static int arm_smmu_dt_remove_device_legacy(struct arm_smmu_device *smmu,
+					 struct device *dev)
+{
+	struct arm_smmu_master *master;
+	struct device_node *dev_node = dev_get_dev_node(dev);
+	int ret;
+
+	master = find_smmu_master(smmu, dev_node);
+	if (master == NULL) {
+		dev_err(dev,
+			"No registrations found for master device %s\n",
+			dev_node->name);
+		return -EINVAL;
+	}
+
+	ret = remove_smmu_master(smmu, master);
+
+	if (ret)
+		return ret;
+
+	dev_node->is_protected = false;
+
+	kfree(master);
+	return 0;
+}
+
 static int register_smmu_master(struct arm_smmu_device *smmu,
 				struct device *dev,
 				struct of_phandle_args *masterspec)
@@ -876,6 +915,22 @@ static int register_smmu_master(struct arm_smmu_device *smmu,
 					     fwspec);
 }
 
+static int arm_smmu_dt_remove_device_generic(u8 devfn, struct device *dev)
+{
+	struct arm_smmu_device *smmu;
+	struct iommu_fwspec *fwspec;
+
+	fwspec = dev_iommu_fwspec_get(dev);
+	if (fwspec == NULL)
+		return -ENXIO;
+
+	smmu = find_smmu(fwspec->iommu_dev);
+	if (smmu == NULL)
+		return -ENXIO;
+
+	return arm_smmu_dt_remove_device_legacy(smmu, dev);
+}
+
 static int arm_smmu_dt_add_device_generic(u8 devfn, struct device *dev)
 {
 	struct arm_smmu_device *smmu;
@@ -2858,6 +2913,7 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
     .init = arm_smmu_iommu_domain_init,
     .hwdom_init = arch_iommu_hwdom_init,
     .add_device = arm_smmu_dt_add_device_generic,
+    .remove_device = arm_smmu_dt_remove_device_generic,
     .teardown = arm_smmu_iommu_domain_teardown,
     .iotlb_flush = arm_smmu_iotlb_flush,
     .assign_device = arm_smmu_assign_dev,
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519775.806786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUN-0005z6-O5; Tue, 11 Apr 2023 19:17:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519775.806786; Tue, 11 Apr 2023 19:17:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUN-0005x3-4z; Tue, 11 Apr 2023 19:17:11 +0000
Received: by outflank-mailman (input) for mailman id 519775;
 Tue, 11 Apr 2023 19:17:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUL-0004DR-DB
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:09 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on20606.outbound.protection.outlook.com
 [2a01:111:f400:7e8d::606])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 72be6fba-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:07 +0200 (CEST)
Received: from MW3PR05CA0009.namprd05.prod.outlook.com (2603:10b6:303:2b::14)
 by DS7PR12MB6360.namprd12.prod.outlook.com (2603:10b6:8:93::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:04 +0000
Received: from CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:2b:cafe::93) by MW3PR05CA0009.outlook.office365.com
 (2603:10b6:303:2b::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.27 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:04 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT088.mail.protection.outlook.com (10.13.175.131) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:04 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:01 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:01 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:00 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72be6fba-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lDNOOUoUQrDMzJfKXlJfbBxfcLNE2ExFDTv865MP0/28SIEbvp9JnZjJDr5LiG2XgsPaddpgjmGypo4MdsgnlyvwAvcH5/tAVyON4tcV29yB5Ax5/rzMW6ngkXaB1UBKGYvq2j+aZVZmhrNwjVcuwB0KKYXqfz/ptE6F65s+mJEjq8G2VLHNZg4X4aI+pQhT+GVYB2esbpZP5Le+4vQaIRdvSESALv92/hOY2ohpeVzIb8GKRKbpFZlCAH/NKZEy8kHh6QD5+Iy7xS7PSL5SGScUzX7Q3pYlRuIDFqeQ8QY0Umpam1vCOps3ut4ieAid0xpteHShUAq767fEik0JzQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qd0QFdZvzHUGWrw0BfIrEu3cWkp8jWSV4vPT9FURuHE=;
 b=BLJ69mgmDOC4R6oelAOdYmiz7bsaXr3tCsp/n9jVQSfNwQkvbwB8mNzPAXedghQGVKl73r9eHZiXnJDEgi6/5PIi785YsAGhLn/NjQVkncwcrUtvIO3Ub6EdFenIHqnvqeDVN+21UtIh+Agwu3sYvQtHLFYcsiEX8fDmQ8SxvaMsHl2nIL8WKgr0dF1fefO9Zd3hLLl1qRtq0ONlThpsSekvWPCjEdCdGe18ABWfwzYnKYzMpFoX/bFTV5a9F1lWAvL7nMGpRo/DHrjB9bZ+Wq0GxePVPj7o0sU+FM9Jnt0wqPJ5dJ0WJKD0FQVylkqF6rg4pG5aaacy8nY5HFtxPQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qd0QFdZvzHUGWrw0BfIrEu3cWkp8jWSV4vPT9FURuHE=;
 b=ydjZWvY1P/SI3BwD6fN+ts4m/35w6s58norWJb8ajYjmOcxbz3+nrGWMPxgzBAof3asGEfFsqYqJX+24SJKCU0VWnbnkWYCLCKikt1G3NmUzCNGNO6RCVa+N34wH/DUa74ovVoxQDb1Q4ujmxsdoFPr8gxaJg5RL2qb538w68hc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 09/17] xen/iommu: protect iommu_add_dt_device() with dtdevs_lock
Date: Tue, 11 Apr 2023 12:16:10 -0700
Message-ID: <20230411191636.26926-10-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT088:EE_|DS7PR12MB6360:EE_
X-MS-Office365-Filtering-Correlation-Id: 9ab94d90-81ed-4d5a-7551-08db3ac1559c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:04.1067
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9ab94d90-81ed-4d5a-7551-08db3ac1559c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT088.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB6360

Protect iommu_add_dt_device() with dtdevs_lock to prevent concurrent additions.
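
The shape of the change, take the lock once on entry and route every failure through a single unlock at the `fail` label, can be sketched with a userspace analogue. A pthread mutex stands in for the spinlock, and add_device_sketch() with its flag arguments is a hypothetical reduction of iommu_add_dt_device(), not its actual logic:

```c
#include <pthread.h>

static pthread_mutex_t dtdevs_lock_s = PTHREAD_MUTEX_INITIALIZER;
static int nr_registered;

/* Mirror the patch: one lock on entry, one unlock at the fail label. */
static int add_device_sketch(int has_add_device, int has_dt_xlate)
{
    int rc = 0;

    pthread_mutex_lock(&dtdevs_lock_s);

    /* Stand-in for the ops->add_device / ops->dt_xlate capability check. */
    if ( !has_add_device || !has_dt_xlate )
    {
        rc = -22;   /* -EINVAL, as in the patch */
        goto fail;
    }

    nr_registered++;

 fail:
    pthread_mutex_unlock(&dtdevs_lock_s);
    return rc;
}
```

Converting the bare `return -EINVAL` into `rc = -EINVAL; goto fail;` is what keeps the lock balanced once the whole body runs under dtdevs_lock.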

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/drivers/passthrough/device_tree.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index bb4cf7784d..457df333a0 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -146,6 +146,8 @@ int iommu_add_dt_device(struct dt_device_node *np)
     if ( dev_iommu_fwspec_get(dev) )
         return 0;
 
+    spin_lock(&dtdevs_lock);
+
     /*
      * According to the Documentation/devicetree/bindings/iommu/iommu.txt
      * from Linux.
@@ -158,7 +160,10 @@ int iommu_add_dt_device(struct dt_device_node *np)
          * these callback implemented.
          */
         if ( !ops->add_device || !ops->dt_xlate )
-            return -EINVAL;
+        {
+            rc = -EINVAL;
+            goto fail;
+        }
 
         if ( !dt_device_is_available(iommu_spec.np) )
             break;
@@ -189,6 +194,8 @@ int iommu_add_dt_device(struct dt_device_node *np)
     if ( rc < 0 )
         iommu_fwspec_free(dev);
 
+fail:
+    spin_unlock(&dtdevs_lock);
     return rc;
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519776.806793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUO-0006Fq-JH; Tue, 11 Apr 2023 19:17:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519776.806793; Tue, 11 Apr 2023 19:17:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUO-0006DA-3H; Tue, 11 Apr 2023 19:17:12 +0000
Received: by outflank-mailman (input) for mailman id 519776;
 Tue, 11 Apr 2023 19:17:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUL-0004Ta-UB
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:09 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20611.outbound.protection.outlook.com
 [2a01:111:f400:fe59::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 73e6f7e9-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:09 +0200 (CEST)
Received: from BN9PR03CA0142.namprd03.prod.outlook.com (2603:10b6:408:fe::27)
 by SA0PR12MB4461.namprd12.prod.outlook.com (2603:10b6:806:9c::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:06 +0000
Received: from BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:fe:cafe::68) by BN9PR03CA0142.outlook.office365.com
 (2603:10b6:408:fe::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:06 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT107.mail.protection.outlook.com (10.13.176.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.22 via Frontend Transport; Tue, 11 Apr 2023 19:17:05 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:03 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:02 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:02 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73e6f7e9-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EE2Ctn9GNlCFsItgMw8F9HmYXg17oJOdiP9I9XI4gGGOLXsCtSbDBSqC61H+Dq+MLsM7lxptVaIo+tOpPZPzzMDUlCj/wE4F6cekg2BKBn6/Lo9J9ZDhfqwtpK/9IY1089A9Q867ER7/LITYmQibC71RbKQVmwQGce9+X5tjUYl3mShbhsOu8sRmoCVk/82SwIkHBA0S/pxssD5JCg+BWCV4HX/8NOUZm+GqDJPV/BsZhURrKd9nJXFRya41E9vv9DuOzNDjwYbJ5xo+9I8FY/BzeL3GPEYu4G6QlZSHWIXlZp5tfgNRPLpwRbKvetPtBqkQF/6ytR5INCV8dx6KlQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=x1PNAnqijXbZmLbaJTz/bnk2u6mRzrBSQaqCUQeSU5s=;
 b=jaL9+qOMzRad5tarJg4rEj4AKXrLNbMCDZlevdOeN1RNicT0Vj4aGC2TpobcwtvzX7YLo5mFaIpM/8ulm1opSE0QM2dTfcTf+xjX0o/MfWBBCLKrIfMG4b3D39A3YuA0fhCjqtpKPBGsMN/cjpUibFTkyhbO+SToa+IxsW4Vu2rS7dD6RbdMLXSMN3IF1G05npHhRVG9BabpwY7H+D6HQI0DUuznXWTDFXOS2Tx//5JnZtFgOpDe9Ck2R9xUkNMQmOz+Y2p0ORVooRxFOIzonSG/hvi6aJR7+oABL/WNh62YxFEGv7Frl7HiY+NUrkun81OMteRtRID/tKwmaCyugw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x1PNAnqijXbZmLbaJTz/bnk2u6mRzrBSQaqCUQeSU5s=;
 b=utGqqoXglk4yU5u11TW967K8k1NpxM1j7Bmkllm4Bn7wCTX1ogO1oSrTPEHf13ml2qCcv7EmB00zIOviy+PnFtwWVV26CjjThzvZrLhKh2848DLvyL841LgU937KVR6dXT4JeV+TpyfgP5FucG/R6tH+0XCreVFnWdWGjbUzCDM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v5 11/17] asm/smp.h: Fix circular dependency for device_tree.h and rwlock.h
Date: Tue, 11 Apr 2023 12:16:12 -0700
Message-ID: <20230411191636.26926-12-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT107:EE_|SA0PR12MB4461:EE_
X-MS-Office365-Filtering-Correlation-Id: 5c464d7b-9522-41af-e4d2-08db3ac156aa
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	T1rYiXDUh4QnttGRazlOfWqjXoFP74k7bGx8YZ1wWb56PmvEsA86NABuo8umFlX5ixEi3YYq+E2xjZP6eBbWnvYB8jxJWUKeG79Av4TW3jJB0aDKOibhxbA8op9ZEslTO2VsDBJsW4puJuLQF7PNsVPhHaCsz5gSEg86dt60nC2ol7cScOcmegiDeOXY+tXSJiIhUvybYfmsyq869lb3lyUcRji4lT2SPZkgGwlhtMALVu5CEvyb4OD12v3OB56KM6iv8RWFhILrbyKWRqnCZ8PzKzi/TvS9SiFw0ahr+L4ptt3IZFsM19tr9C3tZrfGcKox5W6ERt+1bkk5UtbxL53hbKEx4hQyEZ/9oV76ao9yEIiAK6As55jinHTIfSUmYt/NK1WegBul9N/cwFSkVrvI6svGFAWb6w9labchneUXBergLWfB3IMNsYi0urR6/e0DNYLNxyZrf0Oukje6aAbxrhkjmnEDG+zm2iyCshjIIYqYIoQI9aZ/nxZMUfWlQiKtFnaRJj0554loeuyY/TkoOT4sXNZeygfCwwjgY9UgFvt8oxTkAmwMW/Lizz5BqYQSUsCa4r6xDDcLKsHV/YDmBTYJ89hw78Le2RG7Ls5rR68UcuQHYCXuNRbn9NHVB0APiSi70L1uADioPmuqiUCXQ/o8LtHvjZWYSeM5OZ8GxB3fuIZB6BJujSECBgvXC/F8hNVXBAFgT6WhzWUjtt7RdHucICoh2uh3QgMCCbk=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(346002)(136003)(396003)(451199021)(40470700004)(36840700001)(46966006)(40460700003)(4326008)(6916009)(8676002)(54906003)(70586007)(70206006)(478600001)(41300700001)(316002)(36756003)(86362001)(336012)(47076005)(2616005)(426003)(6666004)(1076003)(26005)(83380400001)(44832011)(5660300002)(2906002)(36860700001)(8936002)(82310400005)(40480700001)(356005)(186003)(82740400003)(81166007)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:05.9727
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c464d7b-9522-41af-e4d2-08db3ac156aa
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB4461

Dynamic programming ops will modify dt_host, and other functions may be
browsing dt_host at the same time. To avoid race conditions, an rwlock is
added for browsing dt_host. However, including rwlock.h in device_tree.h
causes the following circular dependency:
    device_tree.h->rwlock.h->smp.h->asm/smp.h->device_tree.h

To fix this, remove the "#include <xen/device_tree.h>" from asm/smp.h and
forward declare "struct dt_device_node".

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/arch/arm/include/asm/smp.h | 3 ++-
 xen/arch/arm/smpboot.c         | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h
index 8133d5c295..afe6129276 100644
--- a/xen/arch/arm/include/asm/smp.h
+++ b/xen/arch/arm/include/asm/smp.h
@@ -3,13 +3,14 @@
 
 #ifndef __ASSEMBLY__
 #include <xen/cpumask.h>
-#include <xen/device_tree.h>
 #include <asm/current.h>
 #endif
 
 DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask);
 DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 
+struct dt_device_node;
+
 #define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))
 
 #define smp_processor_id() get_processor_id()
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 412ae22869..336a7d418b 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -11,6 +11,7 @@
 #include <xen/cpumask.h>
 #include <xen/delay.h>
 #include <xen/domain_page.h>
+#include <xen/device_tree.h>
 #include <xen/errno.h>
 #include <xen/init.h>
 #include <xen/mm.h>
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519777.806801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUP-0006Pl-CE; Tue, 11 Apr 2023 19:17:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519777.806801; Tue, 11 Apr 2023 19:17:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUO-0006LA-OY; Tue, 11 Apr 2023 19:17:12 +0000
Received: by outflank-mailman (input) for mailman id 519777;
 Tue, 11 Apr 2023 19:17:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUM-0004DR-4r
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:10 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20600.outbound.protection.outlook.com
 [2a01:111:f400:fe59::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7309f7e2-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:08 +0200 (CEST)
Received: from BN9PR03CA0125.namprd03.prod.outlook.com (2603:10b6:408:fe::10)
 by BN9PR12MB5084.namprd12.prod.outlook.com (2603:10b6:408:135::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:05 +0000
Received: from BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:fe:cafe::84) by BN9PR03CA0125.outlook.office365.com
 (2603:10b6:408:fe::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:05 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT107.mail.protection.outlook.com (10.13.176.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.22 via Frontend Transport; Tue, 11 Apr 2023 19:17:04 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:00 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:00 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:00 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7309f7e2-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f5hwxHDiJ6cPXtGVi0oDhgGm3os2tvfuJk/rvPeOs5IMC2wLTyxhSHPzGdvuuRI/bYz0Ev6Rre1K2rIqyc6Ri59vS16WiVSCQ1l2QfFetjgsYV5Yff3j66E4dGWelns5JVGmuHmyH1IJ/kbG8OD8pCblaMZ0E+AZIamdZgxFL6/rMSeIGdzBgffGT18CcjFfcjPhq0SrSrarKjXWWlLoGAmgksCM4eUYqtYTTyJn79+PXiCL8X26kfGxK59CnhJuMs3jKTv4FM3xgKbn0JR2wtFqYfeAhFWUzF9AQNhkj4t7gwiq4fg9M5y2vc9uwGkqFYIy5rGppra40oH/rnLn+Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hbn+9h9tF7QZlww7kr6YoDzwWCRUPyZKofdb3QDs5Ls=;
 b=QNK9jA3ONnSE1MrYZppf2BKDWAJikpDz/GOU3l1M5GQWpOa3zeWUIQHywMUMYHK0RC5LhxeEzDXtfOIAQM0QmgHUemK4J03w6+oy55pd+1fcryOg5UQwJPlFbpsShvUrxOvN3WuucMqSP7vpnPTswMhoDr+e5riI0lrri1kXnMMExr7tbXY2OIjsGQRMoYLXtR0TaNRsThkS/pCIwVm+3tEDFUWCdgtZB+TMJ0Br/XCmGe3kqYmcHtx8/disPE18Rg/weHlFOUxiy8yaxbv3rxBSTJ1V5a/C6+kS0j+q55PYoyljRW2We6QxUC4N36NIpQdhyaeMVVjcoS/mlftlmg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hbn+9h9tF7QZlww7kr6YoDzwWCRUPyZKofdb3QDs5Ls=;
 b=auVQJ0UMG/mylausafR6rHAq+f0clyeouWECzrLAe+0FRFpwvUeGm/ek4sst6QIP7jQR4I5kTJUXKwX0yStFdEbrBAO1c2cp3KYtzvbqVOIAn8vSEU5dQerSPjGyH1T2jf7RtpXR85IyJQbGr+k7taprljIZT/DgVkW0Mua+GrE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 08/17] xen/iommu: Move spin_lock from iommu_dt_device_is_assigned to caller
Date: Tue, 11 Apr 2023 12:16:09 -0700
Message-ID: <20230411191636.26926-9-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT107:EE_|BN9PR12MB5084:EE_
X-MS-Office365-Filtering-Correlation-Id: 7bc72af7-9755-4072-ca37-08db3ac155e9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	aesU9CuNguun7Fu2Zu/Im/OValFm4zDIlA3vKluqIupLhBGOIhKJzL3kOBw9pc+M1uTqxvlV8M2MTLGidHSoQ9QYcZBNx5MMH4dzfVw9fHu7qhFgBOaDdU8yRkB/fpClf7vqVCXPJ1Qxo4cxojToznqjQ/baIHSEtTZfQSWDudyCvjxHCPZFDwqNLaq+Ah60vqJm/EhkmqwcHG9R1XA9y0IpflLQMperCGb1aIxadYWQRdN5a4EH0mC1C8e+dYSNpSaKavZkUiA+V2EX9NmlJ12gA+grRZfQMeksLwPkOQ5FNG+v0wlI22li2TB2cRwLDpz1No5Ao1bKRC7zkV3QaTsndXFhkrek7dVSAIrr8hAAWVzEq9n0xupoLdEtcP9yRez/dnhChpBsKdBbThmlvePtmhEY94uxyHALWIqEehztR9Yg/vkejNFzOw6ZYDzEL+XYrOIS4550vkADN18wNBsXdqTyNbygFJLwD0Iv21Ur58i01+RybNF6VvAKeqE7L9W5hZ/WPTdbA40j54Ab4SO/tSnJEw7LtXld4XERgzT4obmEtWSRllwB7nDL1sHWdinxTBN9pzhsBq6l7qW5ow7vY+9M2nJo5yHXIC3C0zVcOX/O2eqxZCsYvMIV81j7AW3n92+55NFguWetyfdNr2sKQDUo1FM+r7ncCtaDSeoKVob3SxQQosMjKUzuebtDao5PsQKzn0dHOwcXMmUi5flVdoIVD7EJQkivGjwgH4g=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(376002)(396003)(39860400002)(451199021)(36840700001)(46966006)(40470700004)(36756003)(86362001)(478600001)(41300700001)(316002)(4326008)(6916009)(70586007)(70206006)(8676002)(54906003)(8936002)(40480700001)(82310400005)(44832011)(5660300002)(2906002)(36860700001)(356005)(186003)(82740400003)(81166007)(1076003)(26005)(2616005)(426003)(47076005)(336012)(83380400001)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:04.7072
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7bc72af7-9755-4072-ca37-08db3ac155e9
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5084

Rename iommu_dt_device_is_assigned() to iommu_dt_device_is_assigned_locked().

Move the spin_lock to the caller to prevent concurrent access to
iommu_dt_device_is_assigned() while doing add/remove/assign/deassign
operations.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/drivers/passthrough/device_tree.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 1c32d7b50c..bb4cf7784d 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -83,16 +83,15 @@ fail:
     return rc;
 }
 
-static bool_t iommu_dt_device_is_assigned(const struct dt_device_node *dev)
+static bool_t
+    iommu_dt_device_is_assigned_locked(const struct dt_device_node *dev)
 {
     bool_t assigned = 0;
 
     if ( !dt_device_is_protected(dev) )
         return 0;
 
-    spin_lock(&dtdevs_lock);
     assigned = !list_empty(&dev->domain_list);
-    spin_unlock(&dtdevs_lock);
 
     return assigned;
 }
@@ -213,27 +212,43 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( (d && d->is_dying) || domctl->u.assign_device.flags )
             break;
 
+        spin_lock(&dtdevs_lock);
+
         ret = dt_find_node_by_gpath(domctl->u.assign_device.u.dt.path,
                                     domctl->u.assign_device.u.dt.size,
                                     &dev);
         if ( ret )
+        {
+            spin_unlock(&dtdevs_lock);
+
             break;
+        }
 
         ret = xsm_assign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
         if ( ret )
+        {
+            spin_unlock(&dtdevs_lock);
+
             break;
+        }
 
         if ( domctl->cmd == XEN_DOMCTL_test_assign_device )
         {
-            if ( iommu_dt_device_is_assigned(dev) )
+
+            if ( iommu_dt_device_is_assigned_locked(dev) )
             {
                 printk(XENLOG_G_ERR "%s already assigned.\n",
                        dt_node_full_name(dev));
                 ret = -EINVAL;
             }
+
+            spin_unlock(&dtdevs_lock);
+
             break;
         }
 
+        spin_unlock(&dtdevs_lock);
+
         if ( d == dom_io )
             return -EINVAL;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519778.806807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUQ-0006fJ-99; Tue, 11 Apr 2023 19:17:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519778.806807; Tue, 11 Apr 2023 19:17:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUP-0006dJ-LC; Tue, 11 Apr 2023 19:17:13 +0000
Received: by outflank-mailman (input) for mailman id 519778;
 Tue, 11 Apr 2023 19:17:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUN-0004DR-58
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:11 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20625.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 73919cc7-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:09 +0200 (CEST)
Received: from BN9PR03CA0142.namprd03.prod.outlook.com (2603:10b6:408:fe::27)
 by DM4PR12MB7526.namprd12.prod.outlook.com (2603:10b6:8:112::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.34; Tue, 11 Apr
 2023 19:17:05 +0000
Received: from BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:fe:cafe::a2) by BN9PR03CA0142.outlook.office365.com
 (2603:10b6:408:fe::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:05 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT107.mail.protection.outlook.com (10.13.176.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.22 via Frontend Transport; Tue, 11 Apr 2023 19:17:05 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:02 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:02 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:01 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73919cc7-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nvRZFbWVxaHzD4dC4eg10s+JGeIh41RGjpq+bd3XY0t9hcciIFmAIf/xunvaYze63VcqaP8lDQIBoG0ArOGN3q0ezmZKBrhNS2VJPfRvLgWOtBxvXk3wP/Y5hb2gRua+HQqpFQyZBbhT3GPys5oZctzZiNmO07UsUtWReK6Cg2TnoZSShhAhjMNYMANyre5ctqTUhJ168r8N2bb4rLzUQpDH5ymPpE4a5xQlb6SGLgwEOJ48CAgd5flxzlaA7xH6uelUGpzk4lPdrhLlBwtJRfy529sag0IAv2JGcwAaMlK8rPanjPldyLK9auTZUvD4AY9kEDEfTcPAtisaOwrzdQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gLWShZkghSoju5aCTSrHjPTktEFI3gdZrQ2ShX60hx0=;
 b=KxsI1IgHe4kyjJTYgsFZFWdpdnVOjddPLs48Hx2uUh+5zs2jS8/T0oe6xPIASt4IeeHSuiDMdecDtOM2c7WT0UFmNL2oWFvv3Ifeptc6jTvcwgO7xzganDpzZ3AKixd0P+QFPSA8SQ5lvbGx0E5bM3eQ6c5kZZvKd1mhKIdHtOp1utdCrQjJDiLbtRVo/tw8/cfAQAm9l7W+Nj+aiee3nWIK4QxyTi0SOcGq5aXtKKn3rjqDqpIrtT2tZECCubVI5sByrivWKtBTUXFTBWcJFerV0eQF8yZGMvGwmBdH3hx6kc80uQLYkaQp1XyV12Inde5nQdIAuMlDKkvw42GQUQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gLWShZkghSoju5aCTSrHjPTktEFI3gdZrQ2ShX60hx0=;
 b=NWggMp+NbZSoS74+iEW/lnwNxS/0+U45dx/xyGra3jRZCZ/dfkfv8fv3DPfx3rxJ7FZc8iSwFmlouf7C98Zy+LTDNBg0U7IISXWuutersEy7dJTx5zPfo0BU9grO+Uy/1sa/8XNWxNOd/BoFskPRKMLnF063zd0vUd3Kc6RMoNs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Jan Beulich <jbeulich@suse.com>, Paul Durrant
	<paul@xen.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN][PATCH v5 10/17] xen/iommu: Introduce iommu_remove_dt_device()
Date: Tue, 11 Apr 2023 12:16:11 -0700
Message-ID: <20230411191636.26926-11-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT107:EE_|DM4PR12MB7526:EE_
X-MS-Office365-Filtering-Correlation-Id: e678d25e-8dd0-4fed-c468-08db3ac15646
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	4SMf3AeZxBcqMh+wCIuBWx4KpJmrSvC3Q6Fd/cdWCkn2sVjgyyZjynVpyRRCVplD4p7cX1bmJX8PzrHIYXpPfTZBmYKZFLRIbVNsdcPSiwHAsvtxVPdMUM3rYt0uBAwi3CbZQtanTiEDypOO3bHozVcvs9edN4GDqQayKQwrdlpZwqTUM7Dn3xGqLlQjth+k/NrdL3/fDAT75zeiPZn8Jrrw1aU29UNz+QImHUOTkkhEa2r0uyHUvWdT59Z1M/kyzQwOOJyqOGY4pbrxq6ZMYFAzX7k03In/TlaL+uXrLnDZnZ82Xxbw+g/mtEHlci2wm4JJNySUal8bPXdhEiLRZVDNqiA28f5E3FCAKaTbohv+nRaWhpKe8E3tTKFTC1KAFtngaI+KOaUHNdYMqtmeErZCaWnzMeWVTWhD81JubVi20aQviby1gBWMXZ/YVTAlSn2+ghFDMRKzziSt/jLbRI0sdUpQxNtpe6VHYWGTXv6FTuDmejueMdxSLoobwU01xwNiRaAgBSvoNF3kSyxmrWV3CX/sXLvkFqDPZ7g/uX4vjREiVIhb9XJb8XMnMNyWqNK+Zf4tCGCRxenOxYvbIlc/2jKxVc4sw/ojHVzauZE11kuFcpV1MVUZB7L8aigy5lElgLw7MIX696Bg9wmTK9ASSE9XBvuEaUsBOS8eGAgK1d6KhQJg2qzZqbpOVRPGpAsIA4UWS08+78bJnaCz4M6SNCLP5vfAGm5WdPlqAho=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(396003)(346002)(136003)(451199021)(40470700004)(36840700001)(46966006)(478600001)(40480700001)(47076005)(36756003)(83380400001)(86362001)(40460700003)(81166007)(356005)(82740400003)(426003)(2616005)(2906002)(316002)(1076003)(336012)(36860700001)(54906003)(44832011)(26005)(186003)(8676002)(8936002)(6666004)(6916009)(82310400005)(41300700001)(5660300002)(70586007)(4326008)(70206006)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:05.3322
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e678d25e-8dd0-4fed-c468-08db3ac15646
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB7526

Introduce iommu_remove_dt_device() to remove a master device from the
IOMMU. This will be helpful when removing overlay nodes via dynamic
programming at run time.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/drivers/passthrough/device_tree.c | 38 +++++++++++++++++++++++++++
 xen/include/xen/iommu.h               |  2 ++
 2 files changed, 40 insertions(+)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 457df333a0..a77a217f3d 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -126,6 +126,44 @@ int iommu_release_dt_devices(struct domain *d)
     return 0;
 }
 
+int iommu_remove_dt_device(struct dt_device_node *np)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    struct device *dev = dt_to_dev(np);
+    int rc;
+
+    if ( !ops )
+        return -EOPNOTSUPP;
+
+    spin_lock(&dtdevs_lock);
+
+    if ( iommu_dt_device_is_assigned_locked(np) )
+    {
+        rc = -EBUSY;
+        goto fail;
+    }
+
+    /*
+     * The driver which supports generic IOMMU DT bindings must have
+     * this callback implemented.
+     */
+    if ( !ops->remove_device )
+    {
+        rc = -EOPNOTSUPP;
+        goto fail;
+    }
+
+    /* Remove master device from the IOMMU if the latter is present and available. */
+    rc = ops->remove_device(0, dev);
+
+    if ( !rc )
+        iommu_fwspec_free(dev);
+
+fail:
+    spin_unlock(&dtdevs_lock);
+    return rc;
+}
+
 int iommu_add_dt_device(struct dt_device_node *np)
 {
     const struct iommu_ops *ops = iommu_get_ops();
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 405db59971..079c06321e 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -218,6 +218,8 @@ int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev);
 int iommu_dt_domain_init(struct domain *d);
 int iommu_release_dt_devices(struct domain *d);
 
+int iommu_remove_dt_device(struct dt_device_node *np);
+
 /*
  * Helper to add master device to the IOMMU using generic IOMMU DT bindings.
  *
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519779.806822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUS-0007Ds-DN; Tue, 11 Apr 2023 19:17:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519779.806822; Tue, 11 Apr 2023 19:17:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUR-0007Bc-SA; Tue, 11 Apr 2023 19:17:15 +0000
Received: by outflank-mailman (input) for mailman id 519779;
 Tue, 11 Apr 2023 19:17:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUO-0004Ta-VA
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:13 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on2060f.outbound.protection.outlook.com
 [2a01:111:f400:7e8d::60f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75176950-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:11 +0200 (CEST)
Received: from BN9PR03CA0140.namprd03.prod.outlook.com (2603:10b6:408:fe::25)
 by PH7PR12MB6717.namprd12.prod.outlook.com (2603:10b6:510:1b0::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.33; Tue, 11 Apr
 2023 19:17:06 +0000
Received: from BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:fe:cafe::d5) by BN9PR03CA0140.outlook.office365.com
 (2603:10b6:408:fe::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:06 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT107.mail.protection.outlook.com (10.13.176.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.22 via Frontend Transport; Tue, 11 Apr 2023 19:17:06 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:05 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:04 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:04 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75176950-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fcX85IqJccCGBe/QojBYX98xNDGdabmYF+Zf31yQxBGDff6mnyC5036RjUc+vNeCVVM9gONrC9zUraMNMVOWRsNICeIxhzmjMi/dltGM84fA19Tq+ErybnmMlSpZje5r2zPQg3lnZazpNhNAdM+5KVJko8GlumURs0aXbNPQtBIFMvc5llhGsEaw/qXzVis6NlV9143xMNg0CpJBVPElRmu+5InH9K9uWE5t3fT9rokREVUnuFNcgMTf6rhWQ1sZl+Wgl+IiZLHkhAGzRErEIbNamxllm2/jsIyFYUc9TstI8SCMkZdJEQaf9wMixXzJSg03SrL2uwj0docX5gDAEg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vXy30MDq5H9lVjpDCmPPgeX6WY9DS2JY3kC1QKAn9No=;
 b=EWdxnXZIHj9Q2jeSvYo16U1pFYbKoWoC2fSjscISmOZTKCjofhAH3otiYoq9EgsrwA7oblZnJxGTiwNVcX7lSLiGc3H5bUurk38eDYFOVRNcRk7s2yR/qbQl81SGxhUArMIPYIdeWuA31izTVpMFixl/s4VNKWmWLoN8XrlIId2tCMZ3YYXv66oiVEsbxVGOBOZGyNLdYFbP228nsgbfOrhFqjA1x1NSea8/ftLR42H7zjNHSoEhUa9th3DOONoNPMYD8eHNrxdwBEKR5PGU5RmRmyaxMxcg/OTlGzyesyf3oK3aSKRwHJB54fTKkTTtiXR5njZTFEy61ILRaFryRA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vXy30MDq5H9lVjpDCmPPgeX6WY9DS2JY3kC1QKAn9No=;
 b=TB5K3tBM/Q6w1B75wjMitVnL7DqvAIonQ11xUEnWZ5E17YXNhU7YqMMxdEGzS9TQHnpD1Vsb/aAYnooiZvtNCkXG0+oiVFCsZAdOExBypXdAzv17r9OuhyxrTvcE1HGe9oEfHchBAULwoNvRClZ0Z0aFDHo7aXHqDF6EFdPgq6c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: [XEN][PATCH v5 13/17] xen/arm: Implement device tree node removal functionalities
Date: Tue, 11 Apr 2023 12:16:14 -0700
Message-ID: <20230411191636.26926-14-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT107:EE_|PH7PR12MB6717:EE_
X-MS-Office365-Filtering-Correlation-Id: d0188860-8724-4ff4-1c1e-08db3ac156e4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ETK2kqVrnWW6vEg7dU542k+8z7yWzf4PQnFPqlH3d7oKxJwQjx69chGqA9JIEedbE0LNWVxNifNQELyLIY0R87SVx+5b+Oekmwp0ETMWwHohlVZVHARguE611aybKX+R+jSUUm94ojGqd4OjisXeErDBqWU9cjnWHMs8ZmqE04qTAYIwSct1FVKl7eAbAiD6KqKORABiFkLCE3JEYLXTmYu0oO/wqQwEawVhzp+ChRYki4iaCE5Q+hT+CfgMn9hi1UMmMD2o+vAETEXv4HFthHlQdZnAxJZ3oYbNLBys9BR5EdWL/O1pHfmKHkEjbTkTC1WnwGcoMFW47DKP14ZDFdl1UFeeQvIsZ8m1ClthAer+yEdf1Tzbh8aspHgjfCjXXztApP3NSNvl6dYhPWR6AV6lW27C7NojPCzisjvsjn2N4w2sjdSBA046RzSXi+2/27ifxkgUhNY1e0BpghcuKuWjbzJ6aWcmMrE8aZwaRgudGzYCk4AqgX7UJLZK+MckPdx6fhtFQUh/S4GpSwWu3IyPapi78woFVIV1YtKesRTEXsU4Vsne6hSVkeHxVjxzPFeAu3a4KoC/52Gfp3FX3rDQfvFHRCeypuyHDGuGz2vFL8kNzTGr0udzCArpmC5/f1I7nv4L6MATLHOP0vh6LoKz7jGorLGWUfg4xhGSs7/+V2keNPsO/X/Pm8OAeUeBDpwO1SFBuOnpkpbwD+kI5rCbxUgdQF0zX+F4ENiuFksxwEUAkCEgwhX8byXjH3Ne5bDJoK6qJZdmMjhZQbgnCw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(136003)(346002)(376002)(451199021)(36840700001)(46966006)(40470700004)(86362001)(30864003)(186003)(5660300002)(44832011)(426003)(336012)(82310400005)(47076005)(356005)(1076003)(26005)(83380400001)(82740400003)(36860700001)(81166007)(316002)(8936002)(8676002)(54906003)(6666004)(40480700001)(478600001)(40460700003)(2616005)(70586007)(6916009)(70206006)(36756003)(41300700001)(4326008)(2906002)(403724002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:06.3633
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d0188860-8724-4ff4-1c1e-08db3ac156e4
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB6717

Introduce the sysctl XEN_SYSCTL_dt_overlay to remove device tree nodes added
using a device tree overlay.

xl dt-overlay remove file.dtbo:
    Removes all the nodes in the given dtbo.
    First, it removes IRQ permissions and MMIO accesses. Next, it finds the
    nodes in dt_host and deletes the device node entries from dt_host.

    A node is removed only if it is not used by dom0 or domio.

Also, add an overlay_track struct to keep track of nodes added through device
tree overlays. overlay_track stores dt_host_new, the unflattened form of the
updated fdt, along with the names of the overlay nodes. When a node is removed,
the memory used by overlay_track for that particular overlay node is freed as
well.

Nested overlay removal is supported only in a sequential manner, i.e. if
overlay_child nests under overlay_parent, it is assumed that the user first
removes overlay_child and then removes overlay_parent.
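The two unlink cases handled on removal (node is the first child of its parent, or a later sibling) can be sketched with a simplified, self-contained structure. This is a hypothetical `struct node`, not Xen's real `struct dt_device_node`; the `allnext` chain fix-up and full-name comparison are omitted:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for a device tree node with a child/sibling list. */
struct node {
    const char *name;
    struct node *parent;
    struct node *child;   /* first child */
    struct node *sibling; /* next sibling */
};

/* Unlink 'victim' from its parent's child list; 0 on success, -1 otherwise. */
static int remove_node(struct node *victim)
{
    struct node *np, *parent = victim->parent;

    if ( parent == NULL || parent->child == NULL )
        return -1;

    /* Case 1: victim is the first (or only) child. */
    if ( parent->child == victim )
    {
        parent->child = victim->sibling;
        return 0;
    }

    /* Case 2: victim is some later sibling; walk until it is found. */
    for ( np = parent->child; np->sibling != NULL; np = np->sibling )
    {
        if ( np->sibling == victim )
        {
            np->sibling = victim->sibling;
            return 0;
        }
    }

    return -1; /* not in the list */
}
```

The real Xen code additionally repairs the `allnext` pointer chain (skipping past the removed node's last descendant) and matches nodes by `full_name` rather than by pointer.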

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/arch/arm/sysctl.c        |  16 +-
 xen/common/Makefile          |   1 +
 xen/common/dt_overlay.c      | 415 +++++++++++++++++++++++++++++++++++
 xen/include/public/sysctl.h  |  24 ++
 xen/include/xen/dt_overlay.h |  59 +++++
 5 files changed, 514 insertions(+), 1 deletion(-)
 create mode 100644 xen/common/dt_overlay.c
 create mode 100644 xen/include/xen/dt_overlay.h

diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index b0a78a8b10..672db61650 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -12,6 +12,7 @@
 #include <xen/errno.h>
 #include <xen/hypercall.h>
 #include <public/sysctl.h>
+#include <xen/dt_overlay.h>
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
 {
@@ -21,7 +22,20 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
 long arch_do_sysctl(struct xen_sysctl *sysctl,
                     XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
-    return -ENOSYS;
+    long ret = 0;
+
+    switch ( sysctl->cmd )
+    {
+    case XEN_SYSCTL_dt_overlay:
+        ret = dt_sysctl(&sysctl->u.dt_overlay);
+        break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    return ret;
 }
 
 /*
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 46049eac35..be78c9a8c2 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
 obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
+obj-$(CONFIG_OVERLAY_DTB) += dt_overlay.o
 obj-y += event_2l.o
 obj-y += event_channel.o
 obj-y += event_fifo.o
diff --git a/xen/common/dt_overlay.c b/xen/common/dt_overlay.c
new file mode 100644
index 0000000000..516e8010c5
--- /dev/null
+++ b/xen/common/dt_overlay.c
@@ -0,0 +1,415 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0
+ *
+ * xen/common/dt_overlay.c
+ *
+ * Device tree overlay support in Xen.
+ *
+ * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
+ * Written by Vikram Garhwal <vikram.garhwal@amd.com>
+ *
+ */
+#include <xen/iocap.h>
+#include <xen/xmalloc.h>
+#include <asm/domain_build.h>
+#include <xen/dt_overlay.h>
+#include <xen/guest_access.h>
+
+static LIST_HEAD(overlay_tracker);
+static DEFINE_SPINLOCK(overlay_lock);
+
+/* Find the last descendant of the device_node. */
+static struct dt_device_node *find_last_descendants_node(
+                                            struct dt_device_node *device_node)
+{
+    struct dt_device_node *child_node;
+
+    for ( child_node = device_node->child; child_node->sibling != NULL;
+          child_node = child_node->sibling )
+    {
+    }
+
+    /* If the last child_node also has children. */
+    if ( child_node->child )
+        child_node = find_last_descendants_node(child_node);
+
+    return child_node;
+}
+
+static int dt_overlay_remove_node(struct dt_device_node *device_node)
+{
+    struct dt_device_node *np;
+    struct dt_device_node *parent_node;
+    struct dt_device_node *device_node_last_descendant = device_node->child;
+
+    parent_node = device_node->parent;
+
+    if ( parent_node == NULL )
+    {
+        dt_dprintk("%s's parent node not found\n", device_node->name);
+        return -EFAULT;
+    }
+
+    np = parent_node->child;
+
+    if ( np == NULL )
+    {
+        dt_dprintk("parent node %s has no child nodes\n", parent_node->name);
+        return -EFAULT;
+    }
+
+    /* If node to be removed is only child node or first child. */
+    if ( !dt_node_cmp(np->full_name, device_node->full_name) )
+    {
+        parent_node->child = np->sibling;
+
+        /*
+         * Iterate over all child nodes of device_node. Given that we are
+         * removing the parent node, we need to remove all of its descendants too.
+         */
+        if ( device_node_last_descendant )
+        {
+            device_node_last_descendant =
+                                        find_last_descendants_node(device_node);
+            parent_node->allnext = device_node_last_descendant->allnext;
+        }
+        else
+            parent_node->allnext = np->allnext;
+
+        return 0;
+    }
+
+    for ( np = parent_node->child; np->sibling != NULL; np = np->sibling )
+    {
+        if ( !dt_node_cmp(np->sibling->full_name, device_node->full_name) )
+        {
+            /* Found the node. Now we remove it. */
+            np->sibling = np->sibling->sibling;
+
+            if ( np->child )
+                np = find_last_descendants_node(np);
+
+            /*
+             * Iterate over all child nodes of device_node. Given that we are
+             * removing the parent node, we need to remove all of its descendants too.
+             */
+            if ( device_node_last_descendant )
+                device_node_last_descendant =
+                                        find_last_descendants_node(device_node);
+
+            if ( device_node_last_descendant )
+                np->allnext = device_node_last_descendant->allnext;
+            else
+                np->allnext = np->allnext->allnext;
+
+            break;
+        }
+    }
+
+    return 0;
+}
+
+/* Basic sanity check for the dtbo provided by the toolstack to Xen. */
+static int check_overlay_fdt(const void *overlay_fdt, uint32_t overlay_fdt_size)
+{
+    if ( (fdt_totalsize(overlay_fdt) != overlay_fdt_size) ||
+          fdt_check_header(overlay_fdt) )
+    {
+        printk(XENLOG_ERR "The overlay FDT is not a valid Flat Device Tree\n");
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+/* Count the number of nodes up to one level below the __overlay__ tag. */
+static unsigned int overlay_node_count(void *fdto)
+{
+    unsigned int num_overlay_nodes = 0;
+    int fragment;
+
+    fdt_for_each_subnode(fragment, fdto, 0)
+    {
+        int subnode;
+        int overlay;
+
+        overlay = fdt_subnode_offset(fdto, fragment, "__overlay__");
+
+        /*
+         * overlay value can be < 0. But fdt_for_each_subnode() loop checks for
+         * overlay >= 0. So, there is no need for an explicit overlay >= 0 check here.
+         */
+        fdt_for_each_subnode(subnode, fdto, overlay)
+        {
+            num_overlay_nodes++;
+        }
+    }
+
+    return num_overlay_nodes;
+}
+
+static int handle_remove_irq_iommu(struct dt_device_node *device_node)
+{
+    int rc = 0;
+    struct domain *d = hardware_domain;
+    domid_t domid;
+    unsigned int naddr, len;
+    unsigned int i, nirq;
+    uint64_t addr, size;
+
+    domid = dt_device_used_by(device_node);
+
+    dt_dprintk("Checking if node %s is used by any domain\n",
+               device_node->full_name);
+
+    /* Remove the node iff it's assigned to domain 0 or domain io. */
+    if ( domid != 0 && domid != DOMID_IO )
+    {
+        printk(XENLOG_ERR "Device %s is being used by domain %d. Removing nodes failed\n",
+               device_node->full_name, domid);
+        return -EINVAL;
+    }
+
+    dt_dprintk("Removing node: %s\n", device_node->full_name);
+
+    nirq = dt_number_of_irq(device_node);
+
+    /* Remove IRQ permission */
+    for ( i = 0; i < nirq; i++ )
+    {
+        rc = platform_get_irq(device_node, i);
+
+        if ( irq_access_permitted(d, rc) == false )
+        {
+            printk(XENLOG_ERR "IRQ %d is not routed to domain %d\n", rc,
+                   domid);
+            return -EINVAL;
+        }
+        /*
+         * TODO: We don't handle shared IRQs for now. So, it is assumed that
+         * the IRQs are not shared with other devices.
+         */
+        rc = irq_deny_access(d, rc);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "unable to revoke access for irq %u for %s\n",
+                   i, device_node->full_name);
+            return rc;
+        }
+    }
+
+    /* Check if iommu property exists. */
+    if ( dt_get_property(device_node, "iommus", &len) )
+    {
+        rc = iommu_remove_dt_device(device_node);
+        if ( rc != 0 && rc != -ENXIO )
+            return rc;
+    }
+
+    naddr = dt_number_of_address(device_node);
+
+    /* Remove mmio access. */
+    for ( i = 0; i < naddr; i++ )
+    {
+        rc = dt_device_get_address(device_node, i, &addr, &size);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
+                   i, dt_node_full_name(device_node));
+            return rc;
+        }
+
+        rc = iomem_deny_access(d, paddr_to_pfn(addr),
+                               paddr_to_pfn(PAGE_ALIGN(addr + size - 1)));
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Unable to remove dom%d access to"
+                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
+                   d->domain_id,
+                   addr & PAGE_MASK, PAGE_ALIGN(addr + size) - 1);
+            return rc;
+        }
+
+    }
+
+    return rc;
+}
+
+/* Revoke IRQ and MMIO access for all descendants of the given node. */
+static int remove_all_descendant_nodes(struct dt_device_node *device_node)
+{
+    int rc = 0;
+    struct dt_device_node *child_node;
+
+    for ( child_node = device_node->child; child_node != NULL;
+         child_node = child_node->sibling )
+    {
+        if ( child_node->child )
+            remove_all_descendant_nodes(child_node);
+
+        rc = handle_remove_irq_iommu(child_node);
+        if ( rc )
+            return rc;
+    }
+
+    return rc;
+}
+
+/* Remove nodes from dt_host. */
+static int remove_nodes(const struct overlay_track *tracker)
+{
+    int rc = 0;
+    struct dt_device_node *overlay_node;
+    unsigned int j;
+
+    for ( j = 0; j < tracker->num_nodes; j++ )
+    {
+        overlay_node = (struct dt_device_node *)tracker->nodes_address[j];
+        if ( overlay_node == NULL )
+        {
+            printk(XENLOG_ERR "Tracked node %u is not present in the tree. Removing nodes failed\n",
+                   j);
+            return -EINVAL;
+        }
+
+        rc = remove_all_descendant_nodes(overlay_node);
+        if ( rc )
+            return rc;
+
+        /* All children nodes are unmapped. Now remove the node itself. */
+        rc = handle_remove_irq_iommu(overlay_node);
+        if ( rc )
+            return rc;
+
+        read_lock(&dt_host->lock);
+
+        rc = dt_overlay_remove_node(overlay_node);
+        if ( rc )
+        {
+            read_unlock(&dt_host->lock);
+
+            return rc;
+        }
+
+        read_unlock(&dt_host->lock);
+    }
+
+    return rc;
+}
+
+/*
+ * First find the device node to remove. Check whether the device is being
+ * used by any domain, and finally remove it from dt_host. The IOMMU is
+ * already taken care of while destroying the domain.
+ */
+static long handle_remove_overlay_nodes(void *overlay_fdt,
+                                        uint32_t overlay_fdt_size)
+{
+    int rc = 0;
+    struct overlay_track *entry, *temp, *track;
+    bool found_entry = false;
+
+    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
+    if ( rc )
+        return rc;
+
+    if ( overlay_node_count(overlay_fdt) == 0 )
+        return -EINVAL;
+
+    spin_lock(&overlay_lock);
+
+    /*
+     * First check that the dtbo is correct, i.e. it is one of the dtbos that
+     * was used when dynamically adding nodes.
+     * Limitation: Cases with the same node names but different properties are
+     * not supported currently. We rely on the user to provide the same dtbo
+     * as was used when adding the nodes.
+     */
+    list_for_each_entry_safe( entry, temp, &overlay_tracker, entry )
+    {
+        if ( memcmp(entry->overlay_fdt, overlay_fdt, overlay_fdt_size) == 0 )
+        {
+            track = entry;
+            found_entry = true;
+            break;
+        }
+    }
+
+    if ( found_entry == false )
+    {
+        rc = -EINVAL;
+
+        printk(XENLOG_ERR "Cannot find any matching tracker with input dtbo."
+               " Removing nodes is supported for only prior added dtbo. Please"
+               " provide a valid dtbo which was used to add the nodes.\n");
+        goto out;
+
+    }
+
+    rc = remove_nodes(track);
+
+    if ( rc )
+    {
+        printk(XENLOG_ERR "Removing node failed\n");
+        goto out;
+    }
+
+    list_del(&entry->entry);
+
+    xfree(entry->dt_host_new);
+    xfree(entry->fdt);
+    xfree(entry->overlay_fdt);
+
+    xfree(entry->nodes_address);
+
+    xfree(entry);
+
+out:
+    spin_unlock(&overlay_lock);
+    return rc;
+}
+
+long dt_sysctl(struct xen_sysctl_dt_overlay *op)
+{
+    long ret;
+    void *overlay_fdt;
+
+    if ( op->overlay_fdt_size == 0 || op->overlay_fdt_size > KB(500) )
+        return -EINVAL;
+
+    overlay_fdt = xmalloc_bytes(op->overlay_fdt_size);
+
+    if ( overlay_fdt == NULL )
+        return -ENOMEM;
+
+    ret = copy_from_guest(overlay_fdt, op->overlay_fdt, op->overlay_fdt_size);
+    if ( ret )
+    {
+        gprintk(XENLOG_ERR, "copy from guest failed\n");
+        xfree(overlay_fdt);
+
+        return -EFAULT;
+    }
+
+    switch ( op->overlay_op )
+    {
+    case XEN_SYSCTL_DT_OVERLAY_REMOVE:
+        ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
+        xfree(overlay_fdt);
+
+        break;
+
+    default:
+        xfree(overlay_fdt);
+        ret = -EOPNOTSUPP;
+        break;
+    }
+
+    return ret;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 2b24d6bfd0..1158c1efb3 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -1057,6 +1057,25 @@ typedef struct xen_sysctl_cpu_policy xen_sysctl_cpu_policy_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpu_policy_t);
 #endif
 
+#if defined(__arm__) || defined (__aarch64__)
+#define XEN_SYSCTL_DT_OVERLAY_ADD                   1
+#define XEN_SYSCTL_DT_OVERLAY_REMOVE                2
+
+/*
+ * XEN_SYSCTL_dt_overlay
+ * Performs addition/removal of device tree nodes under a parent node using a dtbo.
+ * This is done in three steps:
+ *  - Adds/Removes the nodes from dt_host.
+ *  - Adds/Removes IRQ permission for the nodes.
+ *  - Adds/Removes MMIO accesses.
+ */
+struct xen_sysctl_dt_overlay {
+    XEN_GUEST_HANDLE_64(void) overlay_fdt; /* IN: overlay fdt. */
+    uint32_t overlay_fdt_size;  /* IN: Overlay dtb size. */
+    uint8_t overlay_op; /* IN: Add or remove. */
+};
+#endif
+
 struct xen_sysctl {
     uint32_t cmd;
 #define XEN_SYSCTL_readconsole                    1
@@ -1087,6 +1106,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_livepatch_op                  27
 /* #define XEN_SYSCTL_set_parameter              28 */
 #define XEN_SYSCTL_get_cpu_policy                29
+#define XEN_SYSCTL_dt_overlay                    30
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -1117,6 +1137,10 @@ struct xen_sysctl {
 #if defined(__i386__) || defined(__x86_64__)
         struct xen_sysctl_cpu_policy        cpu_policy;
 #endif
+
+#if defined(__arm__) || defined (__aarch64__)
+        struct xen_sysctl_dt_overlay        dt_overlay;
+#endif
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/xen/dt_overlay.h b/xen/include/xen/dt_overlay.h
new file mode 100644
index 0000000000..2cd975a070
--- /dev/null
+++ b/xen/include/xen/dt_overlay.h
@@ -0,0 +1,59 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0
+ *
+ * xen/dt_overlay.h
+ *
+ * Device tree overlay support in Xen.
+ *
+ * Copyright (c) 2022 AMD Inc.
+ * Written by Vikram Garhwal <vikram.garhwal@amd.com>
+ *
+ */
+#ifndef __XEN_DT_OVERLAY_H__
+#define __XEN_DT_OVERLAY_H__
+
+#include <xen/list.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/device_tree.h>
+#include <xen/rangeset.h>
+
+/*
+ * overlay_track describes information about nodes added through dtbo.
+ * @entry: List pointer.
+ * @dt_host_new: Unflattened form of the updated fdt.
+ * @fdt: Stores the updated fdt.
+ * @overlay_fdt: Stores a copy of the overlay fdt (dtbo).
+ * @nodes_address: Stores the address of each overlay node added to dt_host.
+ * @num_nodes: Stores the total number of nodes in the overlay dtb.
+ */
+struct overlay_track {
+    struct list_head entry;
+    struct dt_device_node *dt_host_new;
+    void *fdt;
+    void *overlay_fdt;
+    unsigned long *nodes_address;
+    unsigned int num_nodes;
+};
+
+struct xen_sysctl_dt_overlay;
+
+#ifdef CONFIG_OVERLAY_DTB
+long dt_sysctl(struct xen_sysctl_dt_overlay *op);
+#else
+static inline long dt_sysctl(struct xen_sysctl_dt_overlay *op)
+{
+    return -ENOSYS;
+}
+#endif
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519780.806829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUT-0007Qi-98; Tue, 11 Apr 2023 19:17:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519780.806829; Tue, 11 Apr 2023 19:17:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUS-0007Lm-LY; Tue, 11 Apr 2023 19:17:16 +0000
Received: by outflank-mailman (input) for mailman id 519780;
 Tue, 11 Apr 2023 19:17:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUP-0004Ta-VN
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:14 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20616.outbound.protection.outlook.com
 [2a01:111:f400:7e89::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75148f5c-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:12 +0200 (CEST)
Received: from MW4PR04CA0047.namprd04.prod.outlook.com (2603:10b6:303:6a::22)
 by MN2PR12MB4502.namprd12.prod.outlook.com (2603:10b6:208:263::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:07 +0000
Received: from CO1NAM11FT085.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:6a:cafe::73) by MW4PR04CA0047.outlook.office365.com
 (2603:10b6:303:6a::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:07 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT085.mail.protection.outlook.com (10.13.174.137) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.29 via Frontend Transport; Tue, 11 Apr 2023 19:17:06 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:06 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:05 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:05 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75148f5c-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RZdxqLJLgKXDWMojKmWYYz9UnYELtxncFWW4wx0bV5HcVbaJrpZBh04U5pSoVFPsQkFcKZDIq1Cg8njnRsZo6At6d4z/W7Zb556MDP3/UZ4X1Wb5wU8M5Er5sE3aE8e8e1aOyEtOXb76xht+yVKRN7MYBlIlizy+KTpDxKe4zd1CmkCF4sn+uCQ0yjObeHk7/SE6yLnGUgnlu68frNEVqHXGYkKu5drI0ZpXtqvO3qVhNk/rpBs+02yeuqTefgfStDVLs8Nuc0LaUnIWCzCro0vRIkBdCAavv7c2RUSxDWRs+0RhhYHkJuRPKWQPVvsr5gpIkF3+fZ5oLOfEcn4RWw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3+cCBf3g6fBI4ES7DG61XJTRWpPAgPp5V0YvFzOd6Q8=;
 b=KJK2ygR2wbBhuVxFyoZFLe+hjdtRm5koMDZd9Oeof9o6Aryt6qAPXH4LSXA53BlJtoWTl7/QMpPB3rE1/jgDjcVx6BzJS2qm1j1Nzza7P1YhA10GSS0bJNqHNjNK7+g9BV0KMtvFYvDXV4I+yVf5/9R1o0iETEgk5CSteU9Iuq8c1w26r9xNQoxbwSH7Xf31WwyiYOFDY1Z8BWsfLWkNm4M35eIeZSkN2Px0Y9WJI6r1y6CaciDRh+ALhIi+01A2xtV+iMUUCCug8h1kLS0BerTPjTmiuO8KQzFu4jVy1oFWzmC8mSZ7Prz+LFeloL+VwtbV04/T7J/jbrAjQpbNMA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3+cCBf3g6fBI4ES7DG61XJTRWpPAgPp5V0YvFzOd6Q8=;
 b=brYbKaGU8Q72lJSgTQfQSW8LmLDO7PvXt2hCa4KPGUB8MerKituz4Z9o0vpiMd3zauNzeTF9tRf/OkKJfza8PRinChP0AN4UqPhRxE7AF4B7Ohpaal8qYF6xk+4HuIFn2jQ6lgAMkQS8ir7ykY4Ws1ZcEeHGCpeoyIoCKF37Z+I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>
Subject: [XEN][PATCH v5 14/17] xen/arm: Implement device tree node addition functionalities
Date: Tue, 11 Apr 2023 12:16:15 -0700
Message-ID: <20230411191636.26926-15-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT085:EE_|MN2PR12MB4502:EE_
X-MS-Office365-Filtering-Correlation-Id: c6a6d862-d820-4db3-7343-08db3ac15750
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MmAyEa5oVIVl3CtWWYgdlvLVK6ewQoub2983E5bimxekYFO5Peb8HJsCasTmDKy/FzyLzXzOVomWOv2EFfH7ERRVTuHUGIYfKq54yrOp0jb8L+OG6ab/c/7eSpMtPAQeyXF1dyw7OlLUZV7gMQCeGsE2wKS6BmaFmzlBR80+wO7Z/IMmuca/CSfj4rCyJBtwSBtlAjnDlaJ6+5ulPhbbNEeddu0NNyXLzXtnhQgQo/sPjTEdQIlHQVC5wrJt+Q2dHRPbcLz90zCboCsQl5U+OcqJlPg6sZ0RLrXEdmqnEN7Ox3+7UupKtwwqmjgb34j4AyBqYsr1ZO7FWUP84f44ybjswyvoIlbJkiYcx5pyYDjerN7W4GaXoctw5xFMzz7eNNlzlGuTtURm1KSrhOEqnnnm1s9NU6n53DJuPYQL3njtUgGoQhYtMqFiXhYMXTKWLyBiaV8k5ew7WeCfLAFAR6y+YAc0EwCQYAcCemvz7YJKA1XGh7ygvZ6b+qrUXSH3W1iyKgQLTBD3h5gd+NdUHWzsVWQR9HfOajVGzAivWRoRRYkj2GMll49MUooNSHSWL5C5TAi7sj6hYuXl324GMIYIdayWJn4emzVeGGRfVyRYQZsXtk4BhqaF9SselyMrRpSuMC62XFvr1Q6AcaoQc5IZhXJXLs3UU5cTaVUeSB/uJgnQHukVeTC9TL4ycINQumm2Kk0n0lrIEEZJb8+WMESo8HpBytE9R4oFUgdIvk8=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(136003)(376002)(346002)(451199021)(36840700001)(46966006)(40470700004)(316002)(82310400005)(54906003)(82740400003)(36860700001)(356005)(81166007)(70586007)(8676002)(70206006)(4326008)(86362001)(6916009)(478600001)(426003)(336012)(47076005)(186003)(2616005)(26005)(41300700001)(36756003)(1076003)(2906002)(83380400001)(30864003)(40460700003)(44832011)(40480700001)(8936002)(6666004)(5660300002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:06.9640
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c6a6d862-d820-4db3-7343-08db3ac15750
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT085.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4502

Update the XEN_SYSCTL_dt_overlay sysctl to support adding dtbo nodes via
device tree overlays.

xl dt-overlay add file.dtbo:
    Each time overlay nodes are added from a .dtbo, a new fdt (a memcpy of
    device_tree_flattened) is created and updated with the overlay nodes. This
    updated fdt is then unflattened into dt_host_new. Next, check whether any
    of the overlay nodes already exist in dt_host. If none exist, find each
    overlay node in dt_host_new, find the overlay node's parent in dt_host,
    and add the node as a child under that parent in dt_host. The node is
    attached as the last node under the target parent.

    Finally, add IRQs, add the device to the IOMMUs, set permissions, and map
    MMIO for the overlay node.

When a node is added using an overlay, a new entry is allocated in
overlay_track to keep track of the memory allocated for the overlay node.
This makes it possible to free that memory when the device tree node is
removed.

The main purpose of this is to address the first part of dynamic programming,
i.e. making Xen aware of the new device tree node, which means updating
dt_host with the overlay node information. Here we add/remove nodes from
dt_host and check/set IOMMU and IRQ permissions, but never map them to any
domain. Right now, mapping/unmapping happens only when a domU is
created/destroyed using "xl create".

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/dt_overlay.c | 482 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 482 insertions(+)

diff --git a/xen/common/dt_overlay.c b/xen/common/dt_overlay.c
index 516e8010c5..3344bad313 100644
--- a/xen/common/dt_overlay.c
+++ b/xen/common/dt_overlay.c
@@ -36,6 +36,25 @@ static struct dt_device_node *find_last_descendants_node(
     return child_node;
 }
 
+/*
+ * Returns the node following the input node. If the node has children,
+ * returns the node following its last descendant.
+ */
+static struct dt_device_node *dt_find_next_node(struct dt_device_node *dt,
+                                            const struct dt_device_node *node)
+{
+    struct dt_device_node *np;
+
+    dt_for_each_device_node(dt, np)
+        if ( np == node )
+            break;
+
+    if ( np->child )
+        np = find_last_descendants_node(np);
+
+    return np->allnext;
+}
+
 static int dt_overlay_remove_node(struct dt_device_node *device_node)
 {
     struct dt_device_node *np;
@@ -109,6 +128,72 @@ static int dt_overlay_remove_node(struct dt_device_node *device_node)
     return 0;
 }
 
+static int dt_overlay_add_node(struct dt_device_node *device_node,
+                               const char *parent_node_path)
+{
+    struct dt_device_node *parent_node;
+    struct dt_device_node *np, *np_last_descendant;
+    struct dt_device_node *next_node;
+    struct dt_device_node *device_node_last_descendant;
+
+    parent_node = dt_find_node_by_path(parent_node_path);
+
+    if ( parent_node == NULL )
+    {
+        dt_dprintk("Node not found. Overlay node will not be added\n");
+        return -EINVAL;
+    }
+
+    /* If parent has no child. */
+    if ( parent_node->child == NULL )
+    {
+        next_node = parent_node->allnext;
+        device_node->parent = parent_node;
+        parent_node->allnext = device_node;
+        parent_node->child = device_node;
+    }
+    else
+    {
+        /* The parent has at least one child node: iterate to the last
+         * child node of the parent.
+         */
+        for ( np = parent_node->child; np->sibling != NULL; np = np->sibling );
+
+        /* Iterate over all child nodes of np node. */
+        if ( np->child )
+        {
+            np_last_descendant = find_last_descendants_node(np);
+
+            next_node = np_last_descendant->allnext;
+            np_last_descendant->allnext = device_node;
+        }
+        else
+        {
+            next_node = np->allnext;
+            np->allnext = device_node;
+        }
+
+        device_node->parent = parent_node;
+        np->sibling = device_node;
+        np->sibling->sibling = NULL;
+    }
+
+    /* Iterate over all child nodes of device_node to add children too. */
+    if ( device_node->child )
+    {
+        device_node_last_descendant = find_last_descendants_node(device_node);
+        /* Plug next_node at the end of last children of device_node. */
+        device_node_last_descendant->allnext = next_node;
+    }
+    else
+    {
+        /* Now plug next_node at the end of device_node. */
+        device_node->allnext = next_node;
+    }
+
+    return 0;
+}
+
 /* Basic sanity check for the dtbo tool stack provided to Xen. */
 static int check_overlay_fdt(const void *overlay_fdt, uint32_t overlay_fdt_size)
 {
@@ -148,6 +233,79 @@ static unsigned int overlay_node_count(void *fdto)
     return num_overlay_nodes;
 }
 
+/*
+ * overlay_get_nodes_info will get full name with path for all the nodes which
+ * are in one level of __overlay__ tag. This is useful when checking node for
+ * duplication i.e. dtbo tries to add nodes which already exists in device tree.
+ */
+static int overlay_get_nodes_info(const void *fdto, char ***nodes_full_path,
+                                  unsigned int num_overlay_nodes)
+{
+    int fragment;
+    unsigned int node_num = 0;
+
+    *nodes_full_path = xzalloc_bytes(num_overlay_nodes * sizeof(char *));
+
+    if ( *nodes_full_path == NULL )
+        return -ENOMEM;
+
+    fdt_for_each_subnode(fragment, fdto, 0)
+    {
+        int target;
+        int overlay;
+        int subnode;
+        const char *target_path;
+
+        target = fdt_overlay_target_offset(device_tree_flattened, fdto,
+                                           fragment, &target_path);
+        if ( target < 0 )
+            return target;
+
+        overlay = fdt_subnode_offset(fdto, fragment, "__overlay__");
+
+        /*
+         * The overlay offset can be < 0, but fdt_for_each_subnode() only
+         * iterates while the offset is >= 0, so no check is needed here.
+         */
+        fdt_for_each_subnode(subnode, fdto, overlay)
+        {
+            const char *node_name = NULL;
+            int node_name_len;
+            unsigned int target_path_len = strlen(target_path);
+            unsigned int node_full_name_len;
+
+            node_name = fdt_get_name(fdto, subnode, &node_name_len);
+
+            if ( node_name == NULL )
+                return node_name_len;
+
+            /*
+             * The extra 2 bytes account for the '/' separator and the
+             * trailing NUL, keeping node_full_name in full node name format.
+             */
+            node_full_name_len = target_path_len + node_name_len + 2;
+
+            (*nodes_full_path)[node_num] = xmalloc_bytes(node_full_name_len);
+
+            if ( (*nodes_full_path)[node_num] == NULL )
+                return -ENOMEM;
+
+            memcpy((*nodes_full_path)[node_num], target_path, target_path_len);
+
+            (*nodes_full_path)[node_num][target_path_len] = '/';
+
+            memcpy((*nodes_full_path)[node_num] + target_path_len + 1,
+                    node_name, node_name_len);
+
+            (*nodes_full_path)[node_num][node_full_name_len - 1] = '\0';
+
+            node_num++;
+        }
+    }
+
+    return 0;
+}
+
 static int handle_remove_irq_iommu(struct dt_device_node *device_node)
 {
     int rc = 0;
@@ -367,6 +525,322 @@ out:
     return rc;
 }
 
+/*
+ * Handles IRQ and IOMMU mapping for the overlay_node and all of its
+ * descendants.
+ */
+static int handle_add_irq_iommu(struct domain *d,
+                                struct dt_device_node *overlay_node)
+{
+    int rc;
+    unsigned int naddr, i, len;
+    uint64_t addr, size;
+    struct dt_device_node *np;
+
+    /* First let's handle the interrupts. */
+    rc = handle_device_interrupts(d, overlay_node, false);
+    if ( rc < 0 )
+    {
+        printk(XENLOG_ERR "Interrupt failed\n");
+        return rc;
+    }
+
+    /* Check if iommu property exists. */
+    if ( dt_get_property(overlay_node, "iommus", &len) )
+    {
+
+        /* Add device to IOMMUs. */
+        rc = iommu_add_dt_device(overlay_node);
+        if ( rc < 0 )
+        {
+            printk(XENLOG_ERR "Failed to add %s to the IOMMU\n",
+                   dt_node_full_name(overlay_node));
+            return rc;
+        }
+    }
+
+    /* Set permissions. */
+    naddr = dt_number_of_address(overlay_node);
+
+    dt_dprintk("%s passthrough = %d naddr = %u\n",
+               dt_node_full_name(overlay_node), false, naddr);
+
+    /* Give permission for map MMIOs */
+    for ( i = 0; i < naddr; i++ )
+    {
+        /*
+         * For now we set skip_mapping, which means this only permits iomem
+         * access to the hardware domain via iomem_permit_access() but never
+         * maps the range, as map_range_p2mt() will not be called.
+         */
+        struct map_range_data mr_data = { .d = d,
+                                          .p2mt = p2m_mmio_direct_c,
+                                          .skip_mapping = true };
+
+        rc = dt_device_get_address(overlay_node, i, &addr, &size);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
+                   i, dt_node_full_name(overlay_node));
+            return rc;
+        }
+
+        rc = map_range_to_domain(overlay_node, addr, size, &mr_data);
+        if ( rc )
+            return rc;
+    }
+
+    /* Map IRQ and IOMMU for overlay_node's children. */
+    for ( np = overlay_node->child; np != NULL; np = np->sibling)
+    {
+        rc = handle_add_irq_iommu(d, np);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Adding IRQ and IOMMU failed\n");
+            return rc;
+        }
+    }
+
+    return rc;
+}
+
+/*
+ * Adds device tree nodes under the target node.
+ * We use tr->dt_host_new to unflatten the updated device_tree_flattened. This
+ * is done to avoid redoing the device tree generation and the iomem region
+ * mapping to the hardware domain already performed by handle_node().
+ */
+static long handle_add_overlay_nodes(void *overlay_fdt,
+                                     uint32_t overlay_fdt_size)
+{
+    int rc, j, i;
+    struct dt_device_node *overlay_node, *prev_node, *next_node;
+    struct domain *d = hardware_domain;
+    struct overlay_track *tr = NULL;
+    char **nodes_full_path = NULL;
+    unsigned int new_fdt_size;
+
+    tr = xzalloc(struct overlay_track);
+    if ( tr == NULL )
+        return -ENOMEM;
+
+    new_fdt_size = fdt_totalsize(device_tree_flattened) +
+                                 fdt_totalsize(overlay_fdt);
+
+    tr->fdt = xzalloc_bytes(new_fdt_size);
+    if ( tr->fdt == NULL )
+    {
+        xfree(tr);
+        return -ENOMEM;
+    }
+
+    tr->num_nodes = overlay_node_count(overlay_fdt);
+    if ( tr->num_nodes == 0 )
+    {
+        xfree(tr->fdt);
+        xfree(tr);
+        return -ENOMEM;
+    }
+
+    tr->nodes_address = xzalloc_bytes(tr->num_nodes * sizeof(unsigned long));
+    if ( tr->nodes_address == NULL )
+    {
+        xfree(tr->fdt);
+        xfree(tr);
+        return -ENOMEM;
+    }
+
+    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
+    if ( rc )
+    {
+        xfree(tr->nodes_address);
+        xfree(tr->fdt);
+        xfree(tr);
+        return rc;
+    }
+
+    /*
+     * Keep a copy of overlay_fdt, as fdt_overlay_apply() will change the
+     * input overlay's content (its magic value) when applying the overlay.
+     */
+    tr->overlay_fdt = xzalloc_bytes(overlay_fdt_size);
+    if ( tr->overlay_fdt == NULL )
+    {
+        xfree(tr->nodes_address);
+        xfree(tr->fdt);
+        xfree(tr);
+        return -ENOMEM;
+    }
+
+    memcpy(tr->overlay_fdt, overlay_fdt, overlay_fdt_size);
+
+    spin_lock(&overlay_lock);
+
+    memcpy(tr->fdt, device_tree_flattened,
+           fdt_totalsize(device_tree_flattened));
+
+    /* Open tr->fdt with more space to accommodate the overlay_fdt. */
+    fdt_open_into(tr->fdt, tr->fdt, new_fdt_size);
+
+    /*
+     * overlay_get_nodes_info is called to get the node information from dtbo.
+     * This is done before fdt_overlay_apply() because the overlay apply will
+     * erase the magic of overlay_fdt.
+     */
+    rc = overlay_get_nodes_info(overlay_fdt, &nodes_full_path,
+                                tr->num_nodes);
+    if ( rc )
+    {
+        printk(XENLOG_ERR "Getting nodes information failed with error %d\n",
+               rc);
+        goto err;
+    }
+
+    rc = fdt_overlay_apply(tr->fdt, overlay_fdt);
+    if ( rc )
+    {
+        printk(XENLOG_ERR "Adding overlay node failed with error %d\n", rc);
+        goto err;
+    }
+
+    /*
+     * Check if any of the nodes already exist in dt_host. If a node already
+     * exists, return here, as this overlay_fdt is not suitable for overlays.
+     */
+    for ( j = 0; j < tr->num_nodes; j++ )
+    {
+        overlay_node = dt_find_node_by_path(nodes_full_path[j]);
+        if ( overlay_node != NULL )
+        {
+            printk(XENLOG_ERR "node %s exists in device tree\n",
+                   nodes_full_path[j]);
+            rc = -EINVAL;
+            goto err;
+        }
+    }
+
+    /* Unflatten the tr->fdt into a new dt_host. */
+    unflatten_device_tree(tr->fdt, &tr->dt_host_new);
+
+    for ( j = 0; j < tr->num_nodes; j++ )
+    {
+        dt_dprintk("Adding node: %s\n", nodes_full_path[j]);
+
+        /* Find the newly added node in tr->dt_host_new by its full path. */
+        overlay_node = device_tree_find_node_by_path(tr->dt_host_new,
+                                                     nodes_full_path[j]);
+        if ( overlay_node == NULL )
+        {
+            dt_dprintk("%s node not found\n", nodes_full_path[j]);
+            rc = -EFAULT;
+            goto remove_node;
+        }
+
+        /*
+         * Find the nodes before and after overlay_node in dt_host_new. We
+         * need these nodes to fix up the dt_host_new links: when
+         * overlay_node is taken out of the dt_host_new tree and added to
+         * dt_host, the link between the previous node and next_node is
+         * broken. dt_host_new must be refreshed with correct linking before
+         * any other overlay node is extracted in the future.
+         */
+        dt_for_each_device_node(tr->dt_host_new, prev_node)
+            if ( prev_node->allnext == overlay_node )
+                break;
+
+        next_node = dt_find_next_node(tr->dt_host_new, overlay_node);
+
+        read_lock(&dt_host->lock);
+
+        /* Add the node to dt_host. */
+        rc = dt_overlay_add_node(overlay_node, overlay_node->parent->full_name);
+        if ( rc )
+        {
+            read_unlock(&dt_host->lock);
+
+            /* Node not added in dt_host. */
+            goto remove_node;
+        }
+
+        read_unlock(&dt_host->lock);
+
+        prev_node->allnext = next_node;
+
+        overlay_node = dt_find_node_by_path(overlay_node->full_name);
+        if ( overlay_node == NULL )
+        {
+            /* Sanity check; this path should be unreachable. */
+            ASSERT_UNREACHABLE();
+            goto remove_node;
+        }
+
+        rc = handle_add_irq_iommu(d, overlay_node);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Adding IRQ and IOMMU failed\n");
+            return rc;
+        }
+
+        /* Keep overlay_node address in tracker. */
+        tr->nodes_address[j] = (unsigned long)overlay_node;
+    }
+
+    INIT_LIST_HEAD(&tr->entry);
+    list_add_tail(&tr->entry, &overlay_tracker);
+
+    spin_unlock(&overlay_lock);
+
+    if ( nodes_full_path != NULL )
+    {
+        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
+              i++ )
+        {
+            xfree(nodes_full_path[i]);
+        }
+        xfree(nodes_full_path);
+    }
+
+    return rc;
+
+/*
+ * Failure path. We need to remove the nodes, free the tracker (tr) and
+ * tr->dt_host_new.
+ */
+remove_node:
+    tr->num_nodes = j;
+    rc = remove_nodes(tr);
+
+    if ( rc )
+    {
+        /* If removing node fails, this may cause memory leaks. */
+        printk(XENLOG_ERR "Removing node failed.\n");
+        spin_unlock(&overlay_lock);
+        return rc;
+    }
+
+err:
+    spin_unlock(&overlay_lock);
+
+    xfree(tr->dt_host_new);
+    xfree(tr->fdt);
+    xfree(tr->overlay_fdt);
+    xfree(tr->nodes_address);
+
+    if ( nodes_full_path != NULL )
+    {
+        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
+              i++ )
+        {
+            xfree(nodes_full_path[i]);
+        }
+        xfree(nodes_full_path);
+    }
+
+    xfree(tr);
+
+    return rc;
+}
+
 long dt_sysctl(struct xen_sysctl_dt_overlay *op)
 {
     long ret;
@@ -391,6 +865,14 @@ long dt_sysctl(struct xen_sysctl_dt_overlay *op)
 
     switch ( op->overlay_op )
     {
+    case XEN_SYSCTL_DT_OVERLAY_ADD:
+        ret = handle_add_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
+
+        if ( ret )
+            xfree(overlay_fdt);
+
+        break;
+
     case XEN_SYSCTL_DT_OVERLAY_REMOVE:
         ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
         xfree(overlay_fdt);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519782.806842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUV-0007pa-9l; Tue, 11 Apr 2023 19:17:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519782.806842; Tue, 11 Apr 2023 19:17:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUU-0007iE-Ba; Tue, 11 Apr 2023 19:17:18 +0000
Received: by outflank-mailman (input) for mailman id 519782;
 Tue, 11 Apr 2023 19:17:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUQ-0004Ta-Vc
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:14 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20600.outbound.protection.outlook.com
 [2a01:111:f400:7eae::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75d4f138-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:13 +0200 (CEST)
Received: from MW4PR04CA0033.namprd04.prod.outlook.com (2603:10b6:303:6a::8)
 by SA0PR12MB4559.namprd12.prod.outlook.com (2603:10b6:806:9e::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:09 +0000
Received: from CO1NAM11FT085.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:6a:cafe::c1) by MW4PR04CA0033.outlook.office365.com
 (2603:10b6:303:6a::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:09 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT085.mail.protection.outlook.com (10.13.174.137) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.29 via Frontend Transport; Tue, 11 Apr 2023 19:17:09 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:08 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:08 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:08 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75d4f138-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DTml8cnkj84ksqDvwmgLUZtR0xNNi8j8gmyZZ50NnJf6XmYk4HgUz7aoClfr4FK1jkfk4g7plQg+djTV75FFKMdPB0MHeFeu0W4d0eHgep/Ape8kzIMtibQyKSlQ2HcwCfkhZzQ61cG3MKquXcpIboKoRpvhINdRYEs+179ympYaKf2fbqF8cKSx9vE423zovjpRNHKf8rT7ooFcREz351MXGRmnGy6V3YM6zUOi5R2bLLuPmsJUV1ZOd6WLqzdMTidyptnQAZ1a+SAT06XcG4KIgsiHhXXdlILV4juyOxujiRVVO0R3sE+nX8flJHmSfRO67haMXeHYRNxVuNkp7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=W8ANf04rvVBwwTVoSmk1wMh7BnV4jbgpw3PgliDnkIY=;
 b=HnPfgKNaLFifYRiXm89CjxS4F6TXWN/XgNhsUBNeY+VHE1YVDqBxhjNCQQL9rrOaT2XRtjgRmxCcikoVKx8LaB9Nq4VyTfo4SF/u9kg79qz3XWzXcmnK15+nx4p9iJf2zR5U/9VawbiiPi4Z5SGyGvtbLAqpw0w0GhF0M6caGbWoLlFL5cc9AGmrKvApx63v4pC5aQMtmIDvV66olDZ6NDoOZSfW/jgb4Xt+R4ldKtWAwvLbNW+VuVEhubTNm4sw1xwK+i4JH+RcZC1LwUr+euvvnhE8w4TQmOoN4wA6g1Bup0r8nhtRTHJ4UaCYiJ8oJsInS9QTJaAyt/2IDafrXw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W8ANf04rvVBwwTVoSmk1wMh7BnV4jbgpw3PgliDnkIY=;
 b=rkrcfNVyCeJcZHMV3Mza5hd29yd4W4yc4A3VEwlix84TV47Qvdg/J/YD/UzPLoW9asNTunX/Xp1eeCtMc+rad/aRjHlgyONX5dtS7Tsk6n5H+vz4iSpN/C0apzzGMBWgTn55O00uFgxIx77ez6Pq5BTvsFNeOEPLg5z4cgq0ftM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: [XEN][PATCH v5 17/17] tools/xl: Add new xl command overlay for device tree overlay support
Date: Tue, 11 Apr 2023 12:16:18 -0700
Message-ID: <20230411191636.26926-18-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT085:EE_|SA0PR12MB4559:EE_
X-MS-Office365-Filtering-Correlation-Id: e028d683-9b8b-4468-33f4-08db3ac158bd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	acc5R9HxREvMwAXUxJDUDLP/yKj+S5WJ6zRQIH7J2xdWmf3Jqiu7x0lcP9LSJvCgCr61Qk9r2NXH5f2ToI2cyrpaUf67UPfRtH+cqDrbkgcuvmndpY8kp99iOxQzTV1MwgCXPdcXVFz8AqacI55kDNTUEpxempIAws7UXihOK5HmtDyLJfKyUMrlGm+W+kSv9q7RCCboOcVU2zwbraBLJ/0S+2kv0PIBQQZK8YFeNFVBOaJipQSAHQvdYRdtf5rXVjWEGhawFasc9dkrtYACET2+YgH6HTx3qyGDJU78m1Hu94wiaswrAQl1PdxBrodzHvqioimRUT8hz7VWmUOM9gA5iHF17CLEjxD9XDCeES0fvA4ZGBxWwI3uRoHYiCw0hniX07Rpwu7kM9Ce0cZyFL0a22B/DWq2394y70mdK5twYuNIKSb5TrZHA9bsEcOq6xrTzbrZPDSXEDTL33u6+yH5VG6Ut3qdKkOPbeNxkSY2S7bVPMPWJ0QsunegqlGFmg1GXz3QMTj+7WHEEJ4Ycwp96sfqQnX0a3b3xCSimkkQe3OwPzn5xpQtaFKyW8zg9UykK3EN/efvjcUGcsr6qxgLBnSjYk0k/XXVX8TqWHZAafajz+PAMfJc+aRFCDzRABFLsJdzfmHz+XhqKWdWA1DifLmYYxeIF5IRWUthWjHyOK6pCs9V561lQJMWq3rmDuoT29l5fO/6WKMwB/z54sksBswB4M1iehdjS67IFk8=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(346002)(376002)(396003)(451199021)(46966006)(40470700004)(36840700001)(478600001)(40460700003)(40480700001)(47076005)(83380400001)(36756003)(82740400003)(356005)(81166007)(426003)(86362001)(2616005)(336012)(36860700001)(6666004)(44832011)(2906002)(316002)(1076003)(186003)(5660300002)(26005)(54906003)(41300700001)(8936002)(6916009)(82310400005)(8676002)(70206006)(70586007)(4326008)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:09.3544
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e028d683-9b8b-4468-33f4-08db3ac158bd
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT085.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB4559

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 tools/xl/xl.h           |  1 +
 tools/xl/xl_cmdtable.c  |  6 +++++
 tools/xl/xl_vmcontrol.c | 52 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 59 insertions(+)

diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 72538d6a81..a923daccd3 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -138,6 +138,7 @@ int main_shutdown(int argc, char **argv);
 int main_reboot(int argc, char **argv);
 int main_list(int argc, char **argv);
 int main_vm_list(int argc, char **argv);
+int main_dt_overlay(int argc, char **argv);
 int main_create(int argc, char **argv);
 int main_config_update(int argc, char **argv);
 int main_button_press(int argc, char **argv);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index ccf4d83584..db0acff62a 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -630,6 +630,12 @@ const struct cmd_spec cmd_table[] = {
       "Issue a qemu monitor command to the device model of a domain",
       "<Domain> <Command>",
     },
+    { "dt-overlay",
+      &main_dt_overlay, 0, 1,
+      "Add/Remove a device tree overlay",
+      "add|remove <.dtbo>\n"
+      "-h  print this help\n"
+    },
 };
 
 const int cmdtable_len = ARRAY_SIZE(cmd_table);
diff --git a/tools/xl/xl_vmcontrol.c b/tools/xl/xl_vmcontrol.c
index 5518c78dc6..de56e00d8b 100644
--- a/tools/xl/xl_vmcontrol.c
+++ b/tools/xl/xl_vmcontrol.c
@@ -1265,6 +1265,58 @@ int main_create(int argc, char **argv)
     return 0;
 }
 
+int main_dt_overlay(int argc, char **argv)
+{
+    const char *overlay_ops = NULL;
+    const char *overlay_config_file = NULL;
+    void *overlay_dtb = NULL;
+    int rc;
+    uint8_t op;
+    int overlay_dtb_size = 0;
+    const int overlay_add_op = 1;
+    const int overlay_remove_op = 2;
+
+    if (argc < 2) {
+        help("dt-overlay");
+        return EXIT_FAILURE;
+    }
+
+    overlay_ops = argv[1];
+    overlay_config_file = argv[2];
+
+    if (strcmp(overlay_ops, "add") == 0)
+        op = overlay_add_op;
+    else if (strcmp(overlay_ops, "remove") == 0)
+        op = overlay_remove_op;
+    else {
+        fprintf(stderr, "Invalid dt overlay operation\n");
+        return EXIT_FAILURE;
+    }
+
+    if (overlay_config_file) {
+        rc = libxl_read_file_contents(ctx, overlay_config_file,
+                                      &overlay_dtb, &overlay_dtb_size);
+
+        if (rc) {
+            fprintf(stderr, "failed to read the overlay device tree file %s\n",
+                    overlay_config_file);
+            free(overlay_dtb);
+            return EXIT_FAILURE;
+        }
+    } else {
+        fprintf(stderr, "overlay dtbo file not provided\n");
+        return EXIT_FAILURE;
+    }
+
+    rc = libxl_dt_overlay(ctx, overlay_dtb, overlay_dtb_size, op);
+
+    free(overlay_dtb);
+
+    if (rc)
+        return EXIT_FAILURE;
+
+    return rc;
+}
 /*
  * Local variables:
  * mode: C
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519783.806848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUV-0007ww-RZ; Tue, 11 Apr 2023 19:17:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519783.806848; Tue, 11 Apr 2023 19:17:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUV-0007uF-01; Tue, 11 Apr 2023 19:17:19 +0000
Received: by outflank-mailman (input) for mailman id 519783;
 Tue, 11 Apr 2023 19:17:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUR-0004DR-Q0
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:15 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on20606.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::606])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 75bcde91-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:13 +0200 (CEST)
Received: from MW4PR04CA0041.namprd04.prod.outlook.com (2603:10b6:303:6a::16)
 by IA1PR12MB6529.namprd12.prod.outlook.com (2603:10b6:208:3a6::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.34; Tue, 11 Apr
 2023 19:17:08 +0000
Received: from CO1NAM11FT085.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:6a:cafe::d0) by MW4PR04CA0041.outlook.office365.com
 (2603:10b6:303:6a::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.40 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:08 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT085.mail.protection.outlook.com (10.13.174.137) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.29 via Frontend Transport; Tue, 11 Apr 2023 19:17:08 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:07 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:06 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:06 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75bcde91-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YpIDKdgdf7yz2eJTAnEtGnIAAnvHfpj3300VLO6iafBOlyg9uKGtWR8AlWLxZXP7jXq9z+MOemUrxotW3ALnTbYqRKLYnv4hTlBU7fOiISFawFTn2sfZGHNIb/LaE9ZO2I+2Qd/HNcuwn5+M8fnxdXlS6Bqmzr3N6eCnwFV/c9YZRm82uhULvb8h7wuS9Mip6Y5DUmuTbpTPaJGax/zVmWaporPWDtZGvPYZXRhakfjvF5Glz2cx6XgfjW78lAGD/6ddcqxVFNiG3vicbmGg0mYIVHVd7jMXYOiGlYnApFw9Vr1UlR226LBQIzbrDZb7A5rL6hhtY7hLB1EWpHlorQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nkwDgQcNvE9UkGiRuGgnKgAIS4K7kLAxiI7Z7Wg7YaY=;
 b=F5OZE8ajLmHZ/3smvtrG3B3WH5oV4QmrHteG/vHTRllwu508EHSaQPST7gJiFbgHoDWf/e2BCweqleNSWP9MHT+GLdqcgZK5AiBmzlg2os1366qXKVcQlnDk4JdYYrnROOe5Sy2X4NZVgweQrs5LGeNMPwrv8/9f4U0UI7nyceUS39HZ6pA7+XALE0/zSb90fv+KxfUmpme0YPQeuQrPeuVfI61ZaGeJM97fGAA/SZn+rC4m0cvg6C0lUtesgR/EQkCsDqHidnogKC/o+r6wpScV0HDwk2GrA1yn+zaL2Xc7m9py2DDfF9tBZfERXRPetcpYXqwXCxeV7y22lYa4UA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nkwDgQcNvE9UkGiRuGgnKgAIS4K7kLAxiI7Z7Wg7YaY=;
 b=34B69AbRgoSes/U8YltA7jp1SIRw84uuhNt7TRQZNULohAXDZZhsPhgLOiVTgnRk+J/lfOOIctNDhuCdHAaPGqQfAc3uPDMpNjcUypQ3tyfJV2gzFGg9cZP7f0PTD/hRhQXzyre6jKZVAHVEzs9NDOUJMKcbR6jeo+riXyEmy8o=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [XEN][PATCH v5 15/17] tools/libs/ctrl: Implement new xc interfaces for dt overlay
Date: Tue, 11 Apr 2023 12:16:16 -0700
Message-ID: <20230411191636.26926-16-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT085:EE_|IA1PR12MB6529:EE_
X-MS-Office365-Filtering-Correlation-Id: 79fb94b7-2e1f-4cd8-dde2-08db3ac15800
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	osc+4KnrjY/eXHRp/c21nvn18/PBqzgrfrXWovfYXG3QrplR0MgbJ72hALFCiEcxPhk3QCSjzTFO0dTXS6yY9bt/J8GQaZ3WYGZnBX2YGioCcALoBjUowaD5pS+r/sLI+L2YN/4s9DHwAJKZFS2+nLiYjL1EMqTc6AHTqNHVpeHyOcIaMNldU5M5EzDaASxUhO3oCEUMTi96Ba4RbWsjp3sMkNFeeKj7mdIpNg39NVoQtJJrexrczJ3CXJ4N5k3KSt8jbO0kW511tYq5p+Ahe7LLGnRU0SMi7baInhIHt0mb5n2gFKkRFvd0ht9LB96tkW4dWRFp3QKStHnhBh7nZG/l8PXpbqpgzp4viFw2WKasPQ5v0VkIlSn0xAafRSR/qj3De+tPh/fTVyxfhRntQdkTvcpxMSsXaIHjID7Nyxx24+li9JVuGCxeXyewaEMKOsrFzaQ4y+oj7CslTKxiL9D0SR+uWSi/EfyxI8TfNgnalPhnHrEZzPQOsBKxFrh0eaXeS/MW13vx6/MePibxA9oju6iZnUa+Hc80Zf2Dr5VbNvLe7Tl+P13UgrnjzYNyI9vQS+oO0EdP89r4GuGuwIdysUR9FW1yxvmTteTro8D60MVMkOZyS/XwStAp5xf8DPld7jEQcQpzt8I5BHTNI+rEq3SpHSmv7VEwN27rRCI+IUMoActsf1te3hUn69CJxWUpjJw0GHVSd/eNq7wywPqJvAYdGBddDOaD8uiSerCTHaw/TBAANLwczdGUStdu
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(39860400002)(396003)(346002)(451199021)(40470700004)(46966006)(36840700001)(40460700003)(70206006)(4326008)(478600001)(8676002)(54906003)(41300700001)(316002)(70586007)(6916009)(86362001)(36756003)(47076005)(2616005)(336012)(26005)(1076003)(426003)(6666004)(5660300002)(44832011)(2906002)(82310400005)(40480700001)(8936002)(356005)(82740400003)(186003)(81166007)(36860700001)(2004002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:08.1201
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 79fb94b7-2e1f-4cd8-dde2-08db3ac15800
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT085.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB6529

xc_dt_overlay() sends the device tree binary overlay (.dtbo), the size of the
.dtbo, and the overlay operation type (i.e. add or remove) to Xen.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 tools/include/xenctrl.h         |  5 ++++
 tools/libs/ctrl/Makefile.common |  1 +
 tools/libs/ctrl/xc_dt_overlay.c | 48 +++++++++++++++++++++++++++++++++
 3 files changed, 54 insertions(+)
 create mode 100644 tools/libs/ctrl/xc_dt_overlay.c

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 05967ecc92..b932589c8d 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -2637,6 +2637,11 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
 int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
                          xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
 
+#if defined(__arm__) || defined(__aarch64__)
+int xc_dt_overlay(xc_interface *xch, void *overlay_fdt,
+                  uint32_t overlay_fdt_size, uint8_t overlay_op);
+#endif
+
 /* Compat shims */
 #include "xenctrl_compat.h"
 
diff --git a/tools/libs/ctrl/Makefile.common b/tools/libs/ctrl/Makefile.common
index 0a09c28fd3..247afbe5f9 100644
--- a/tools/libs/ctrl/Makefile.common
+++ b/tools/libs/ctrl/Makefile.common
@@ -24,6 +24,7 @@ OBJS-y       += xc_hcall_buf.o
 OBJS-y       += xc_foreign_memory.o
 OBJS-y       += xc_kexec.o
 OBJS-y       += xc_resource.o
+OBJS-$(CONFIG_ARM)  += xc_dt_overlay.o
 OBJS-$(CONFIG_X86) += xc_psr.o
 OBJS-$(CONFIG_X86) += xc_pagetab.o
 OBJS-$(CONFIG_Linux) += xc_linux.o
diff --git a/tools/libs/ctrl/xc_dt_overlay.c b/tools/libs/ctrl/xc_dt_overlay.c
new file mode 100644
index 0000000000..202fc906f4
--- /dev/null
+++ b/tools/libs/ctrl/xc_dt_overlay.c
@@ -0,0 +1,48 @@
+/*
+ *
+ * Device Tree Overlay functions.
+ * Copyright (C) 2021 Xilinx Inc.
+ * Author Vikram Garhwal <fnu.vikram@xilinx.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "xc_private.h"
+
+int xc_dt_overlay(xc_interface *xch, void *overlay_fdt,
+                  uint32_t overlay_fdt_size, uint8_t overlay_op)
+{
+    int err;
+    DECLARE_SYSCTL;
+
+    DECLARE_HYPERCALL_BOUNCE(overlay_fdt, overlay_fdt_size,
+                             XC_HYPERCALL_BUFFER_BOUNCE_IN);
+
+    if ( (err = xc_hypercall_bounce_pre(xch, overlay_fdt)) )
+        goto err;
+
+    sysctl.cmd = XEN_SYSCTL_dt_overlay;
+    sysctl.u.dt_overlay.overlay_op = overlay_op;
+    sysctl.u.dt_overlay.overlay_fdt_size = overlay_fdt_size;
+
+    set_xen_guest_handle(sysctl.u.dt_overlay.overlay_fdt, overlay_fdt);
+
+    if ( (err = do_sysctl(xch, &sysctl)) != 0 )
+        PERROR("%s failed", __func__);
+
+err:
+    xc_hypercall_bounce_post(xch, overlay_fdt);
+
+    return err;
+}
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:17:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:17:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519784.806859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUX-0008Ei-R9; Tue, 11 Apr 2023 19:17:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519784.806859; Tue, 11 Apr 2023 19:17:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJUW-00089U-B1; Tue, 11 Apr 2023 19:17:20 +0000
Received: by outflank-mailman (input) for mailman id 519784;
 Tue, 11 Apr 2023 19:17:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUT-0004DR-4C
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:17 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on2062a.outbound.protection.outlook.com
 [2a01:111:f400:7eb2::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 771c4613-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:15 +0200 (CEST)
Received: from BN9P221CA0007.NAMP221.PROD.OUTLOOK.COM (2603:10b6:408:10a::26)
 by SA1PR12MB7104.namprd12.prod.outlook.com (2603:10b6:806:29e::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35; Tue, 11 Apr
 2023 19:17:11 +0000
Received: from BN8NAM11FT113.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10a:cafe::b5) by BN9P221CA0007.outlook.office365.com
 (2603:10b6:408:10a::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:11 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT113.mail.protection.outlook.com (10.13.176.163) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.29 via Frontend Transport; Tue, 11 Apr 2023 19:17:11 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:10 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:10 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 771c4613-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZCntpfDjlU7bvxwIp3bh4NQamhz8bpTDYo97gW0u+wMi4TyyzH24dA9vddIIPPPpGW+cmmXSixmLW4zjPrg/NPvfYyBrIuXJ4X+1d/Qu7HCDYvAGUWg0Hn6Hn/3/I3Du/SW4/Gz4Abm6XcoxgvuBA4qDNhEAAIgtyaKHkWMQ7niTiME8PeanhQvZKk4x9jZ5ogm7SQYyEWkpjWbxV/YAgRaIqkRVZ9+ZBgYbSayN33edDezxvJ7kHwWqGEu9WKYUcyf49eaF79C0Tr37lvbY/jlEjG5VlawA/YUe9MD0JNz5hu62XBbhyCHsfH/OJbglHeiO8jzx7fBaW78RJL2+Dw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Z2clB4UixcJGvwOZJ+XN0wD+kcnhi2QU+8ofkZ5Co48=;
 b=O9v0T+PyvauZzj/zb0BDPvK1AvsUjiy2Dz9Krp5DhRXqIBlwkRFYiyY8BcfRR+03rMFtVdmu56h3h56eiVquwNwVj6713IzFpnWcdkJ5Y+VlMgIKk4MIgwxGaqlQjcgb9fPDv9qkV1riewcBw+41QvJaCYI1aH+HYwTC0/ExJS+ehe5ywTsgY5fefTdnmb4me0WuD5tP3OyJwKf5Y42wKjvc/83dg1W5zj4KPf+LoBOloV1+F1xhwLOp0yhTyn8fOqCyerPsC0wmmDvh9jDCL3IWHUFtBWfp+7dZ0UH9iH5qR6hAmSc7gYXQULmKZE9V+qhyrfYhlrrjem2y0s642g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z2clB4UixcJGvwOZJ+XN0wD+kcnhi2QU+8ofkZ5Co48=;
 b=ThtM+MEUeVsRifpSzpg6l+6DHWGYRGYDH4jmJf5iAiBLQgXjj+inCmV1xKq7LBTQOGjbBMUITDSe6XVcGfge7MqfgphuprtLS6pCPNR9EZ5IKEnSBACZ16KGOCmexvC04LZr+imaKF5Rn9i4sEK+JFqzwa2XPLa92X9dUWZRe1s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v5 01/17] xen/arm/device: Remove __init from function type
Date: Tue, 11 Apr 2023 12:16:20 -0700
Message-ID: <20230411191636.26926-20-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT113:EE_|SA1PR12MB7104:EE_
X-MS-Office365-Filtering-Correlation-Id: 197fc178-d1c0-4c52-5cbb-08db3ac159dd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Q19Ab48E2BZeSHFpac25Yyrbd44cGvHmrkZcuJb1ggFsQuJyAmUF0Mfoz9A1OkKNuv3VLwy17kDisYJXBkp83av5cqnmHMCmGG0QLlueGm3YuCzXclIw5ncEXT6NRV0emnt3d/eqZ0E7qTokPVv7SzNahC1kFqx1gtEFsldwlVwpnGSTpkcmIwuxEE3M1TqMN7j9bCEl04I59de8LwUAp1bEjSz7nE9vm19mn9NyOuKJjLOxqiyowWp7jsCf2ukSVXUo+0cMkPgECL/Zkl0SpY2FYrH39zAxiO6nc7DJ1XZBfiVJIk/aG4qOoN57L7G78oKEsqeAHnWg1pBa4YsQDP2w1UWM9Hn9pAcVY63MnC/mzTcj2wgHhL5qRRPSlE/nFX7ISU0EeWrA6GUhSHE2NeXdnCMcXSPmqy9XypkVJPkFUR04KZD7x/SPgpNsY4M1RLMrrexW4HAoQPnT5gq6N3m4uH1xWRVwjLD8xKQycbw9zMuav0oQNxLKFf4OKJOjWKADUw/ELFvcd5WDXGWk+dCOE3EpDr1q4cgSASZEJGqMJh5z1FaSsONuCfd413w0DEX3/ExXhN579c4QoF9SH5jG1+e5NedsBz2abSlQoH7d44tYGOdJQOG5w3D2+reHs3TuS6unsPFXmSxkTzWnBgOgMmy1tyFfb+8/lpyy3VFYVd822KVxOBv3XJRHszY1tPuCpwZDJEEHcQ+4yfbz/vb6PGocrGp09G20EHEKjDM=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(346002)(376002)(39860400002)(451199021)(40470700004)(36840700001)(46966006)(4326008)(70206006)(70586007)(1076003)(86362001)(26005)(316002)(6916009)(186003)(40460700003)(36756003)(6666004)(54906003)(47076005)(40480700001)(478600001)(36860700001)(8676002)(8936002)(2616005)(41300700001)(356005)(81166007)(5660300002)(2906002)(30864003)(83380400001)(44832011)(82310400005)(426003)(336012)(82740400003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:11.3366
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 197fc178-d1c0-4c52-5cbb-08db3ac159dd
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT113.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB7104

Remove __init from the following functions so that they can be called at
runtime:
    1. map_irq_to_domain()
    2. handle_device_interrupts()
    3. map_range_to_domain()
    4. unflatten_dt_node()
    5. __unflatten_device_tree()

Move map_irq_to_domain() prototype from domain_build.h to setup.h.

To avoid breaking the build, the following changes are also made:
1. Move map_irq_to_domain(), handle_device_interrupts() and
    map_range_to_domain() to device.c. Once the __init attribute is removed,
    these functions are no longer specific to domain building, so move them
    out of domain_build.c into device.c.
2. Remove the static qualifier from handle_device_interrupts().

Overall, these changes are made to support dynamic programming of nodes, where
an overlay node is added to the fdt and the unflattened node is added to
dt_host. Furthermore, IRQ and MMIO mappings are set up for the added node.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/arch/arm/device.c                   | 145 ++++++++++++++++++++++++
 xen/arch/arm/domain_build.c             | 142 -----------------------
 xen/arch/arm/include/asm/domain_build.h |   2 -
 xen/arch/arm/include/asm/setup.h        |   6 +
 xen/common/device_tree.c                |  16 +--
 5 files changed, 159 insertions(+), 152 deletions(-)

diff --git a/xen/arch/arm/device.c b/xen/arch/arm/device.c
index ca8539dee5..fec6e29c42 100644
--- a/xen/arch/arm/device.c
+++ b/xen/arch/arm/device.c
@@ -12,6 +12,9 @@
 #include <xen/errno.h>
 #include <xen/init.h>
 #include <xen/lib.h>
+#include <xen/iocap.h>
+#include <asm/domain_build.h>
+#include <asm/setup.h>
 
 extern const struct device_desc _sdevice[], _edevice[];
 extern const struct acpi_device_desc _asdevice[], _aedevice[];
@@ -75,6 +78,148 @@ enum device_class device_get_class(const struct dt_device_node *dev)
     return DEVICE_UNKNOWN;
 }
 
+int map_irq_to_domain(struct domain *d, unsigned int irq,
+                      bool need_mapping, const char *devname)
+{
+    int res;
+
+    res = irq_permit_access(d, irq);
+    if ( res )
+    {
+        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
+               d->domain_id, irq);
+        return res;
+    }
+
+    if ( need_mapping )
+    {
+        /*
+         * Checking the return of vgic_reserve_virq is not
+         * necessary. It should not fail except when we try to map
+         * the IRQ twice. This can legitimately happen if the IRQ is shared
+         */
+        vgic_reserve_virq(d, irq);
+
+        res = route_irq_to_guest(d, irq, irq, devname);
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
+                   irq, d->domain_id);
+            return res;
+        }
+    }
+
+    dt_dprintk("  - IRQ: %u\n", irq);
+    return 0;
+}
+
+int map_range_to_domain(const struct dt_device_node *dev,
+                        u64 addr, u64 len, void *data)
+{
+    struct map_range_data *mr_data = data;
+    struct domain *d = mr_data->d;
+    int res;
+
+    /*
+     * reserved-memory regions are RAM carved out for a special purpose.
+     * They are not MMIO and therefore a domain should not be able to
+     * manage them via the IOMEM interface.
+     */
+    if ( strncasecmp(dt_node_full_name(dev), "/reserved-memory/",
+                     strlen("/reserved-memory/")) != 0 )
+    {
+        res = iomem_permit_access(d, paddr_to_pfn(addr),
+                paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
+        if ( res )
+        {
+            printk(XENLOG_ERR "Unable to permit to dom%d access to"
+                    " 0x%"PRIx64" - 0x%"PRIx64"\n",
+                    d->domain_id,
+                    addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
+            return res;
+        }
+    }
+
+    if ( !mr_data->skip_mapping )
+    {
+        res = map_regions_p2mt(d,
+                               gaddr_to_gfn(addr),
+                               PFN_UP(len),
+                               maddr_to_mfn(addr),
+                               mr_data->p2mt);
+
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to map 0x%"PRIx64
+                   " - 0x%"PRIx64" in domain %d\n",
+                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
+                   d->domain_id);
+            return res;
+        }
+    }
+
+    dt_dprintk("  - MMIO: %010"PRIx64" - %010"PRIx64" P2MType=%x\n",
+               addr, addr + len, mr_data->p2mt);
+
+    return 0;
+}
+
+/*
+ * handle_device_interrupts retrieves the interrupts configuration from
+ * a device tree node and maps those interrupts to the target domain.
+ *
+ * Returns:
+ *   < 0 error
+ *   0   success
+ */
+int handle_device_interrupts(struct domain *d,
+                             struct dt_device_node *dev,
+                             bool need_mapping)
+{
+    unsigned int i, nirq;
+    int res;
+    struct dt_raw_irq rirq;
+
+    nirq = dt_number_of_irq(dev);
+
+    /* Give permission and map IRQs */
+    for ( i = 0; i < nirq; i++ )
+    {
+        res = dt_device_get_raw_irq(dev, i, &rirq);
+        if ( res )
+        {
+            printk(XENLOG_ERR "Unable to retrieve irq %u for %s\n",
+                   i, dt_node_full_name(dev));
+            return res;
+        }
+
+        /*
+         * Don't map IRQ that have no physical meaning
+         * ie: IRQ whose controller is not the GIC
+         */
+        if ( rirq.controller != dt_interrupt_controller )
+        {
+            dt_dprintk("irq %u not connected to primary controller. Connected to %s\n",
+                      i, dt_node_full_name(rirq.controller));
+            continue;
+        }
+
+        res = platform_get_irq(dev, i);
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to get irq %u for %s\n",
+                   i, dt_node_full_name(dev));
+            return res;
+        }
+
+        res = map_irq_to_domain(d, res, need_mapping, dt_node_name(dev));
+        if ( res )
+            return res;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 4f9d4f9d88..6ab18c53ab 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2256,41 +2256,6 @@ int __init make_chosen_node(const struct kernel_info *kinfo)
     return res;
 }
 
-int __init map_irq_to_domain(struct domain *d, unsigned int irq,
-                             bool need_mapping, const char *devname)
-{
-    int res;
-
-    res = irq_permit_access(d, irq);
-    if ( res )
-    {
-        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
-               d->domain_id, irq);
-        return res;
-    }
-
-    if ( need_mapping )
-    {
-        /*
-         * Checking the return of vgic_reserve_virq is not
-         * necessary. It should not fail except when we try to map
-         * the IRQ twice. This can legitimately happen if the IRQ is shared
-         */
-        vgic_reserve_virq(d, irq);
-
-        res = route_irq_to_guest(d, irq, irq, devname);
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
-                   irq, d->domain_id);
-            return res;
-        }
-    }
-
-    dt_dprintk("  - IRQ: %u\n", irq);
-    return 0;
-}
-
 static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
                                        const struct dt_irq *dt_irq,
                                        void *data)
@@ -2322,57 +2287,6 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
     return 0;
 }
 
-int __init map_range_to_domain(const struct dt_device_node *dev,
-                               u64 addr, u64 len, void *data)
-{
-    struct map_range_data *mr_data = data;
-    struct domain *d = mr_data->d;
-    int res;
-
-    /*
-     * reserved-memory regions are RAM carved out for a special purpose.
-     * They are not MMIO and therefore a domain should not be able to
-     * manage them via the IOMEM interface.
-     */
-    if ( strncasecmp(dt_node_full_name(dev), "/reserved-memory/",
-                     strlen("/reserved-memory/")) != 0 )
-    {
-        res = iomem_permit_access(d, paddr_to_pfn(addr),
-                paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
-        if ( res )
-        {
-            printk(XENLOG_ERR "Unable to permit to dom%d access to"
-                    " 0x%"PRIx64" - 0x%"PRIx64"\n",
-                    d->domain_id,
-                    addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
-            return res;
-        }
-    }
-
-    if ( !mr_data->skip_mapping )
-    {
-        res = map_regions_p2mt(d,
-                               gaddr_to_gfn(addr),
-                               PFN_UP(len),
-                               maddr_to_mfn(addr),
-                               mr_data->p2mt);
-
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to map 0x%"PRIx64
-                   " - 0x%"PRIx64" in domain %d\n",
-                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
-                   d->domain_id);
-            return res;
-        }
-    }
-
-    dt_dprintk("  - MMIO: %010"PRIx64" - %010"PRIx64" P2MType=%x\n",
-               addr, addr + len, mr_data->p2mt);
-
-    return 0;
-}
-
 /*
  * For a node which describes a discoverable bus (such as a PCI bus)
  * then we may need to perform additional mappings in order to make
@@ -2400,62 +2314,6 @@ static int __init map_device_children(const struct dt_device_node *dev,
     return 0;
 }
 
-/*
- * handle_device_interrupts retrieves the interrupts configuration from
- * a device tree node and maps those interrupts to the target domain.
- *
- * Returns:
- *   < 0 error
- *   0   success
- */
-static int __init handle_device_interrupts(struct domain *d,
-                                           struct dt_device_node *dev,
-                                           bool need_mapping)
-{
-    unsigned int i, nirq;
-    int res;
-    struct dt_raw_irq rirq;
-
-    nirq = dt_number_of_irq(dev);
-
-    /* Give permission and map IRQs */
-    for ( i = 0; i < nirq; i++ )
-    {
-        res = dt_device_get_raw_irq(dev, i, &rirq);
-        if ( res )
-        {
-            printk(XENLOG_ERR "Unable to retrieve irq %u for %s\n",
-                   i, dt_node_full_name(dev));
-            return res;
-        }
-
-        /*
-         * Don't map IRQ that have no physical meaning
-         * ie: IRQ whose controller is not the GIC
-         */
-        if ( rirq.controller != dt_interrupt_controller )
-        {
-            dt_dprintk("irq %u not connected to primary controller. Connected to %s\n",
-                      i, dt_node_full_name(rirq.controller));
-            continue;
-        }
-
-        res = platform_get_irq(dev, i);
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to get irq %u for %s\n",
-                   i, dt_node_full_name(dev));
-            return res;
-        }
-
-        res = map_irq_to_domain(d, res, need_mapping, dt_node_name(dev));
-        if ( res )
-            return res;
-    }
-
-    return 0;
-}
-
 /*
  * For a given device node:
  *  - Give permission to the guest to manage IRQ and MMIO range
diff --git a/xen/arch/arm/include/asm/domain_build.h b/xen/arch/arm/include/asm/domain_build.h
index 34ceddc995..b9329c9ee0 100644
--- a/xen/arch/arm/include/asm/domain_build.h
+++ b/xen/arch/arm/include/asm/domain_build.h
@@ -4,8 +4,6 @@
 #include <xen/sched.h>
 #include <asm/kernel.h>
 
-int map_irq_to_domain(struct domain *d, unsigned int irq,
-                      bool need_mapping, const char *devname);
 int make_chosen_node(const struct kernel_info *kinfo);
 void evtchn_allocate(struct domain *d);
 
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index a926f30a2b..1d636e8a4a 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -163,9 +163,15 @@ void device_tree_get_reg(const __be32 **cell, u32 address_cells,
 u32 device_tree_get_u32(const void *fdt, int node,
                         const char *prop_name, u32 dflt);
 
+int handle_device_interrupts(struct domain *d, struct dt_device_node *dev,
+                             bool need_mapping);
+
 int map_range_to_domain(const struct dt_device_node *dev,
                         u64 addr, u64 len, void *data);
 
+int map_irq_to_domain(struct domain *d, unsigned int irq,
+                      bool need_mapping, const char *devname);
+
 extern const char __ro_after_init_start[], __ro_after_init_end[];
 
 struct init_info
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 6c9712ab7b..aed38ff63c 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1811,12 +1811,12 @@ int dt_count_phandle_with_args(const struct dt_device_node *np,
  * @allnextpp: pointer to ->allnext from last allocated device_node
  * @fpsize: Size of the node path up at the current depth.
  */
-static unsigned long __init unflatten_dt_node(const void *fdt,
-                                              unsigned long mem,
-                                              unsigned long *p,
-                                              struct dt_device_node *dad,
-                                              struct dt_device_node ***allnextpp,
-                                              unsigned long fpsize)
+static unsigned long unflatten_dt_node(const void *fdt,
+                                       unsigned long mem,
+                                       unsigned long *p,
+                                       struct dt_device_node *dad,
+                                       struct dt_device_node ***allnextpp,
+                                       unsigned long fpsize)
 {
     struct dt_device_node *np;
     struct dt_property *pp, **prev_pp = NULL;
@@ -2056,8 +2056,8 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
  * @fdt: The fdt to expand
  * @mynodes: The device_node tree created by the call
  */
-static void __init __unflatten_device_tree(const void *fdt,
-                                           struct dt_device_node **mynodes)
+static void __unflatten_device_tree(const void *fdt,
+                                    struct dt_device_node **mynodes)
 {
     unsigned long start, mem, size;
     struct dt_device_node **allnextp = mynodes;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519806.806890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWB-0005cQ-Pa; Tue, 11 Apr 2023 19:19:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519806.806890; Tue, 11 Apr 2023 19:19:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWB-0005cJ-Kw; Tue, 11 Apr 2023 19:19:03 +0000
Received: by outflank-mailman (input) for mailman id 519806;
 Tue, 11 Apr 2023 19:19:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUW-0004Ta-0r
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:20 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on20611.outbound.protection.outlook.com
 [2a01:111:f400:7e8b::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 78375fb9-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:16 +0200 (CEST)
Received: from BN9PR03CA0142.namprd03.prod.outlook.com (2603:10b6:408:fe::27)
 by MW4PR12MB7166.namprd12.prod.outlook.com (2603:10b6:303:224::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Tue, 11 Apr
 2023 19:17:13 +0000
Received: from BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:fe:cafe::68) by BN9PR03CA0142.outlook.office365.com
 (2603:10b6:408:fe::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:13 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT107.mail.protection.outlook.com (10.13.176.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.22 via Frontend Transport; Tue, 11 Apr 2023 19:17:13 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:12 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:11 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78375fb9-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=P2ZtciHNQZJ1CW8HGjWQBQaYU9j7azTGq6d429aQHdzUVfNY+HEgTexkd1sVJO95biDcrw3hgcE8H0Nhfj6TVvlcjMKaZcfDuC/c9XeHnS9V6r9rSmjMYVn3FEsNiW1LtemPqHGo5qL25K31njuz1We61WKQV2YKzT+gP6pozm0ir/rloUptHimg11gAPZEhFFsHc+nx+vEEXyNLoujb5SteJ374nbnXx8ini/09XWbtmfiystSKJZmvIykJr3Rs+uqbEChCRx4bFKC9flyn2KIoTFOSeDWvCzM2mVwkBLUcHfeusV4036wQatGP8EOvvN/VmDXJw9KT8Zwq5O5V2w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=y5jE5w9tUBXYNb4GkaW66AGdvE8156w4aHYPgYkE7PU=;
 b=YxFnbaNOWzAlnzIFiw5613yOUxX5R0jVPGMcq9tgF0BRy+RgiKllDzuhPiGZi4U/anDvOkAhTzSxfD32QfPw4G5qPtW0FMVYC6cLJXEYYBtqLhxaar+tvmo1OSPE11OsG1AC6NzIT/IZ8H72vJ/Dr3IQJ1BtAsnbw3q2jN5KnREaiSATqwdksWmYNidCxWdfIwd+2sry/2/Lw24edOu8VlR8AVk/1JZJbnXZIXfJLAvODKVdsolt14jiPzYuPgOjMmqdEonYpHHT+lkiovERI2trJrfi+11dpHvgsbzW7tWEcbXGzQoNmKnDllw7C86cqYxNHsQjlLPs0WPYuFrUew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=y5jE5w9tUBXYNb4GkaW66AGdvE8156w4aHYPgYkE7PU=;
 b=JQ/hU2P8vnyDzTVBSxNElxt8ZK5IT7TJQ8XEi+uUvWOa4YWebyhtS7WVkuVhxIIX8e3Dhu+yBATcGbf7wjzBLwILdKKGViPVecu7FoMITWx/tdHMCZz8NVhjbni0K1Nid5jHmBpNw7nTztLWeeA7mXUiK6oVmWGAoeWIeEJHmaM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v5 03/17] xen/arm: Add CONFIG_OVERLAY_DTB
Date: Tue, 11 Apr 2023 12:16:22 -0700
Message-ID: <20230411191636.26926-22-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT107:EE_|MW4PR12MB7166:EE_
X-MS-Office365-Filtering-Correlation-Id: 2d21aa73-be46-49f3-61ab-08db3ac15aec
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6Ye5PZBaC4XZ9/w0ud9o+0cmn0rwEgLvLYNNpTFIOK1wvfkPHxko6LLG21O64/yuJ4XkFLLeJfgt07VWzZQ6JKyRwMV7YuL3ktryLnBiE5FbYWIA7nwElkY1UInIzC79O/6u/lKjWh8ae0Wl/YfBxRZ21t7APPccxvQJtL3b/kmbb2ZZEaNKP4TqZWcAIBEr+SUmZoRwFsAJZlY0sBjtdXR6SyXy7+A/BOiLmp+Ns3yYGXCxFgZ7SSbDIHVWxBS+gADFTtSP6IER7T6efdjCTiEmp5n2I3tCyZHGEdVphRHLZOqHuxM8PSDxE7aCaIOdjHhU3J0SDgNHSYN1u2uTeJNYYIdAXidar1rYAm1afauzkt34f7P7JagqmOLb23Ja0TKBHTZAReO9KhQfeHojhCjSr/YMWcVW7KHcbSiP4Ow4sbvKdO1RNWgUxmBl4xHWZrq+YQkPHZG36AO5bDuRBU3RTIQZgw71y0j9xIXjZ+CYlEIvpopbvkwurva+eKQoDtyJDbCHXvLPrX9wtJhv0AcDJWXM8B6ckzwg5rfgL9tX2ZlkZUuYxPpkUCnDlz+sxYQNy+OznMRmAlNwAzmgCRHeI6hNoH4l8jygKzfZCXmRvd8L9MrxbyG0abRgdFOT1JnQGMwv495pWlcqa63M4JWnGw49JDeL5fSLeLIWIzaaWbcCC1qUzZO0lgbhfbcImPt6/qXunohb3VRLPLFamNJ3L0x8obJ4/8DUowVr41s=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(396003)(376002)(346002)(451199021)(36840700001)(40470700004)(46966006)(186003)(8936002)(1076003)(26005)(40460700003)(478600001)(82310400005)(36860700001)(316002)(70206006)(54906003)(40480700001)(4326008)(6916009)(81166007)(6666004)(36756003)(86362001)(356005)(8676002)(70586007)(82740400003)(41300700001)(44832011)(2616005)(2906002)(47076005)(426003)(336012)(5660300002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:13.1128
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2d21aa73-be46-49f3-61ab-08db3ac15aec
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB7166

Introduce a config option that lets the user enable support for adding and
removing device tree nodes using a device tree binary overlay (.dtbo).
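For context (not part of the patch): a device tree binary overlay is typically compiled from an overlay source file with the device tree compiler, e.g. `dtc -@ -I dts -O dtb -o overlay.dtbo overlay.dts`. The sketch below is purely illustrative; the target path, node name, and `compatible` string are hypothetical, not taken from this series.

```dts
/dts-v1/;
/plugin/;

/ {
    fragment@0 {
        /* illustrative target node in the base device tree */
        target-path = "/amba";
        __overlay__ {
            /* hypothetical device node to be added at runtime */
            example-uart@ff010000 {
                compatible = "example,uart";
                reg = <0x0 0xff010000 0x0 0x1000>;
            };
        };
    };
};
```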

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 SUPPORT.md           | 6 ++++++
 xen/arch/arm/Kconfig | 5 +++++
 2 files changed, 11 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index aa1940e55f..0a31f40af4 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -822,6 +822,12 @@ No support for QEMU backends in a 16K or 64K domain.
 
     Status: Supported
 
+### Device Tree Overlays
+
+Add/remove device tree nodes using a device tree overlay binary (.dtbo).
+
+    Status: Supported for ARM
+
 ### ARM: Guest ACPI support
 
     Status: Supported
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..1fe3d698a5 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -53,6 +53,11 @@ config HAS_ITS
         bool "GICv3 ITS MSI controller support (UNSUPPORTED)" if UNSUPPORTED
         depends on GICV3 && !NEW_VGIC && !ARM_32
 
+config OVERLAY_DTB
+	bool "DTB overlay support (UNSUPPORTED)" if UNSUPPORTED
+	help
+	  Dynamic addition/removal of Xen device tree nodes using a dtbo.
+
 config HVM
         def_bool y
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519812.806899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWI-0005v8-12; Tue, 11 Apr 2023 19:19:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519812.806899; Tue, 11 Apr 2023 19:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWH-0005v1-U5; Tue, 11 Apr 2023 19:19:09 +0000
Received: by outflank-mailman (input) for mailman id 519812;
 Tue, 11 Apr 2023 19:19:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUX-0004Ta-0y
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:21 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on20629.outbound.protection.outlook.com
 [2a01:111:f400:7e8d::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 799897a0-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:19 +0200 (CEST)
Received: from MW4PR03CA0203.namprd03.prod.outlook.com (2603:10b6:303:b8::28)
 by BY5PR12MB4209.namprd12.prod.outlook.com (2603:10b6:a03:20d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:15 +0000
Received: from CO1NAM11FT096.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:b8:cafe::c5) by MW4PR03CA0203.outlook.office365.com
 (2603:10b6:303:b8::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:15 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT096.mail.protection.outlook.com (10.13.175.84) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:14 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:13 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:13 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 799897a0-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CXpPdrD59NHcZced6N+ReY2rzQPGHtm40Xv7ICsxIt/xl5azaRgbg/FpXDbBbgSewjJItDYzLHIs+vPw639QRQa7qwZH5zrduXWD9kv8CFmN8FKuRSH2JI9fBoSNFQjipdW6hobv15IL6dNzc4fAi+/pkD2phxqPTUkk+cL1b07NMwx3LkrgEZzfVRB041phzXLdlD0aq0pTOYaLhL1/dxxJwFvsA6R0gI6oXRThSiq2OovUzCLI0T4rPXDjmWhztfRLeVKHHDIzTttnJ6E3nEZtplNw5FpKxiJwi1W0BA2eMXUoxhPTMuHwgqLEqS6vaF9YVZDLmzu2PCq8DcBiMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+aOH8wbfxJ3CDB9n8z4otIPVVDXaY3ZnHxlpPeTlbO8=;
 b=cksZoW0ks3716PsidDanbIitYQa+dR1UVj/3ZfkfzDYwBV4HM0p5HaJDaiJqjrwy0bWn7nx/Nj6IDIjqwMHEkw6PJVeyPnNpLs5kYAXMa4TxEVDe2C3mRj1IRSpGvprN3FJr26Kqy3IKvA51h3MLFZorE7J3CJ3DsOc+NOZ7GTklZciqfOUfZHF6RD40qsP7ijZncH4Z0dxPPiFf7rOaP7j/mis5HABhIf6WwiXpRFDWklw/SsdbeH2e1m0QVNH4y4mzzTG5a746gRPQKahAhxgNODv13BxfDsWfW+yG3EXtFKSwUN+DskgWMPoJLgDXrhriCkLT0fURLXudLW8psw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+aOH8wbfxJ3CDB9n8z4otIPVVDXaY3ZnHxlpPeTlbO8=;
 b=C8LnUlJK8/O5qvHp1MAOesiGzHauPP4chRPeau8EzNQHbWxjGd1E++r0vWFVvMOhDGikzO9d8m3aAWft9gjihUuvmmSU6xIL1/S78nTyco3qOZccbzYNHX2eoGMx9gyTdDz6XzcmGKhz+yyGfhMkjCaBQ735nslsas2Mn29N7io=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 04/17] libfdt: Keep fdt functions after init for CONFIG_OVERLAY_DTB.
Date: Tue, 11 Apr 2023 12:16:23 -0700
Message-ID: <20230411191636.26926-23-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT096:EE_|BY5PR12MB4209:EE_
X-MS-Office365-Filtering-Correlation-Id: 778a1185-cf2d-464a-ab52-08db3ac15ba7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3+4G1HcuhJ+RiYvsVU7dkqReSPNLwMFDbQ/azTNjZysqNxtVxfff/YnYzEUt2wAkHB2yvZUfBol6uUCA4r0dblCaMB7aZcUX6OXNqj8fJ23o/K4lN4fYhO6BIh1VqCx9vJKsEmiQb+qYais8rHA0bggCJUVGcLbT5DroM5n7GRQhzx5yy3b09oKhcxmMgz80BjWkgo/1CfMPyJLEhwRC06Ayko9Rewr+r9Sfn317zCyy9tW0ia+3Sz2FyfDm57eL/7Tp8cZ2F+Pm11bL+AaKmrRIFgGTwyQlFeOMQOMVe6KEptapev+8/nihgkdFIRsC1EA3F82tXmJMt+jx6HsZXjSe1H7Jjo53Bw7nIu4yjp73lNm2x/lAARgP2o84IkduqpNNbvAOYDTc776rZToKOlZ3bMBuBrh8Wyzrmo34dnLD6J4jPG/HlKyhlnepVkCa51bOJ0GyWMKgjKjwj3xvQC8eH8BylFZMRL7MJLdVDb/b1rjNS2PP0zOZx1gpqPTo9EWrfEmR6QsoOSxoP3lIJFXmtjFhgd3Cz89gCybOFi3x93wPXGWFdV9olZjnviDD6iyTgvvgVeIh8uh4X/y3QD5dl6VwhV2W1+l69rIbCfmvPxveBsK0yHK0V64w6goLA8KgbGF+Fk4H1TgymT2xbnDR7GeIWhIJMk5YU8ihtkIJVRkWv/xmXEyN9yKt3BUr2IjGzAGzG9S7/v2awZJhCoA4NvTOQFaIqbDvKGVB6rA=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(346002)(376002)(136003)(451199021)(36840700001)(46966006)(40470700004)(36756003)(86362001)(70586007)(41300700001)(70206006)(316002)(54906003)(478600001)(6916009)(6666004)(4326008)(40480700001)(83380400001)(82310400005)(4744005)(44832011)(8936002)(8676002)(2906002)(36860700001)(5660300002)(82740400003)(186003)(356005)(81166007)(1076003)(26005)(47076005)(336012)(2616005)(426003)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:14.2308
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 778a1185-cf2d-464a-ab52-08db3ac15ba7
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT096.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4209

This is done to retain access to fdt library functions which are required for
adding device tree overlay nodes dynamically at runtime.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/libfdt/Makefile | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/common/libfdt/Makefile b/xen/common/libfdt/Makefile
index 75aaefa2e3..d50487aa6e 100644
--- a/xen/common/libfdt/Makefile
+++ b/xen/common/libfdt/Makefile
@@ -1,7 +1,11 @@
 include $(src)/Makefile.libfdt
 
 SECTIONS := text data $(SPECIAL_DATA_SECTIONS)
+
+# For CONFIG_OVERLAY_DTB, libfdt functionality will be needed at runtime.
+ifneq ($(CONFIG_OVERLAY_DTB),y)
 OBJCOPYFLAGS := $(foreach s,$(SECTIONS),--rename-section .$(s)=.init.$(s))
+endif
 
 obj-y += libfdt.o
 nocov-y += libfdt.o
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519813.806904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWI-0005xm-AO; Tue, 11 Apr 2023 19:19:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519813.806904; Tue, 11 Apr 2023 19:19:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWI-0005x2-6M; Tue, 11 Apr 2023 19:19:10 +0000
Received: by outflank-mailman (input) for mailman id 519813;
 Tue, 11 Apr 2023 19:19:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUf-0004DR-BP
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:29 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20616.outbound.protection.outlook.com
 [2a01:111:f400:7eab::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7e5a69a4-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:26 +0200 (CEST)
Received: from BN8PR04CA0032.namprd04.prod.outlook.com (2603:10b6:408:70::45)
 by BY1PR12MB8446.namprd12.prod.outlook.com (2603:10b6:a03:52d::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:18 +0000
Received: from BN8NAM11FT010.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:70:cafe::84) by BN8PR04CA0032.outlook.office365.com
 (2603:10b6:408:70::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:18 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT010.mail.protection.outlook.com (10.13.177.53) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:18 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:17 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:17 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e5a69a4-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=F51bULKW9b+g2uxlrVXvYKaoRY/BibCkb2SPb5ObheoJoI53wTk2aDI9+YohJii69KmGma19PCO3tW4tMnTXwVRBluuokfqiZBvHbDix5pAxbJkCPdGIZks3izhxcCg/FRDhnDNJ3UXX/IOp/0QeQ3BXuKu/SZr5c/FTmVDulC6JFE3uw6eUf7SOgl7lQpI/HoxiwWGC6RHH2eBjYOLKf1bYGiBhb3Fh6EJvskGhjE56TCzyHpQFn6ti2rK4E/vW7JD/RPalmCsxNhe8kw6+k7G/itVKHq8vFCxCqUMXg8eOk7jPmHah208vRKPqp4AoWBNBSnT/oWo9SYFr8TIcJw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qd0QFdZvzHUGWrw0BfIrEu3cWkp8jWSV4vPT9FURuHE=;
 b=GVHkLjWTfBaR1e9+F+dbVMwK2uPs8b88ZM46uVIzIjudv3ZNg96WWLSa45UoP0/uNAJuubbd04wbbqWC6cwzCsQOBwsVxaMBD51h4I2wWjJAca+nC56si7I1vHpQCn2CLfRof8p9kzafAbAKIl2fh6trQu5fSolcjQmEgVIIVwVK8CfDMLkLaRDZE9ooGA/Nziwh5MSwBbJes2JFzfHHstxsIuTrLDLAh99xsCijJiRte6+1KsyBLwCx3tTzppZz7Ki7Q+PzcUF7dmzdCnBAWBXsO4FfKq+pdP2F2FoZh/KFFM7eemF3P6yTRltC+HwNwejPVGZSSpkdSRy0XDiYIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qd0QFdZvzHUGWrw0BfIrEu3cWkp8jWSV4vPT9FURuHE=;
 b=UeV9rIjPsBcg+PwgcsDBTnzu7kPN8nOMhtQyadhmsSoW+nUgdraiApOXwmBYNc5z+TyS1KVrqcIj6Cw4BABNtfalpWqlmSOPwib3dFZ/lfKYtoGlgu2/tHWkNtX4WKoNmaUoLhzGDoycrB0cG3HCUXiHeQoyuq4WuEQ7pAphK6s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 09/17] xen/iommu: protect iommu_add_dt_device() with dtdevs_lock
Date: Tue, 11 Apr 2023 12:16:28 -0700
Message-ID: <20230411191636.26926-28-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT010:EE_|BY1PR12MB8446:EE_
X-MS-Office365-Filtering-Correlation-Id: 9ccc8d77-ac1a-43f3-f466-08db3ac15dfb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	87wQgUV/Rs/rhMoHDyAXQevmpXRLXB2USDKIB2K94zIe7mzQF4qIHmhhubJj04rWW2nT2FatrLn3Kt8TGmgFjqK8f+qmX2eM0pKIu79MzkElNpmXfWbrUaoiCeOaVjOVtpMTxwVQ/TVBMRz9BCMSQNuQD7Yggd5Mo+h21Ax/tfIZ+YiZBBX+9fdtphasJMnj3JRIzmTExGeWivt1ezYxPXs4F4xWL0+X5Yj9dWPCGKw+h3sHRw7bAPByNK58TZLfzTgYWIgW8luPcTV/oAwPC9JiFuoC777ounPNB5yFl5DJCz9C+JKqlYkaoedAk88jGQSLo8CW3pBpxb5am0CHJA80bu2hS4PZqD8NLGcxlKMCaJ+N0a5ddFyndjGV7vP5bWWVMGBJU8Hv25YMfG6TfRLhBGE0QnJkQfch2iPUPwmhw6Sn/hbNxmxJTrmk+636iDCuR22Iut9OGQHXPnzwZrKS7E3zemncGSI9VQ9Xs2DefUg0nQOVAKEM1MeuZ8yjZCO6OFasSOW47C3+i8oRpXXPQgLNzA3CxXZy5WpPWMC5y/2+lo0O6MVsOvvW2cLK61qnya3zCk204VPco9hwJ4jdelqsO1Dd7FvswYqso3rvOkQBKMVXIBwwf0X5IR0xnCtzvo8Uc+9JvpjbyurI9BjfqmKp6TrYNniNUyM0JMLWW3MpsYSijbeV0MgLrq2QriGdg3+sbqYIospL1ZsJLywJOiUaDKK+1yovMUWyhRw=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(376002)(346002)(396003)(451199021)(46966006)(40470700004)(36840700001)(82310400005)(82740400003)(356005)(54906003)(36860700001)(81166007)(70206006)(70586007)(8676002)(86362001)(6916009)(478600001)(4326008)(336012)(47076005)(426003)(186003)(2616005)(26005)(316002)(41300700001)(36756003)(2906002)(1076003)(83380400001)(44832011)(40460700003)(40480700001)(8936002)(6666004)(5660300002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:18.2324
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9ccc8d77-ac1a-43f3-f466-08db3ac15dfb
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT010.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY1PR12MB8446

Protect iommu_add_dt_device() with dtdevs_lock to prevent concurrent device additions.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/drivers/passthrough/device_tree.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index bb4cf7784d..457df333a0 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -146,6 +146,8 @@ int iommu_add_dt_device(struct dt_device_node *np)
     if ( dev_iommu_fwspec_get(dev) )
         return 0;
 
+    spin_lock(&dtdevs_lock);
+
     /*
      * According to the Documentation/devicetree/bindings/iommu/iommu.txt
      * from Linux.
@@ -158,7 +160,10 @@ int iommu_add_dt_device(struct dt_device_node *np)
          * these callback implemented.
          */
         if ( !ops->add_device || !ops->dt_xlate )
-            return -EINVAL;
+        {
+            rc = -EINVAL;
+            goto fail;
+        }
 
         if ( !dt_device_is_available(iommu_spec.np) )
             break;
@@ -189,6 +194,8 @@ int iommu_add_dt_device(struct dt_device_node *np)
     if ( rc < 0 )
         iommu_fwspec_free(dev);
 
+fail:
+    spin_unlock(&dtdevs_lock);
     return rc;
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519817.806920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWL-0006Tw-Jp; Tue, 11 Apr 2023 19:19:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519817.806920; Tue, 11 Apr 2023 19:19:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWL-0006Td-FF; Tue, 11 Apr 2023 19:19:13 +0000
Received: by outflank-mailman (input) for mailman id 519817;
 Tue, 11 Apr 2023 19:19:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUe-0004DR-BM
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:28 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20601.outbound.protection.outlook.com
 [2a01:111:f400:7eae::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7d49f6a6-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:25 +0200 (CEST)
Received: from BN9PR03CA0582.namprd03.prod.outlook.com (2603:10b6:408:10d::17)
 by CH2PR12MB5020.namprd12.prod.outlook.com (2603:10b6:610:65::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28; Tue, 11 Apr
 2023 19:17:21 +0000
Received: from BN8NAM11FT020.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10d:cafe::82) by BN9PR03CA0582.outlook.office365.com
 (2603:10b6:408:10d::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:21 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT020.mail.protection.outlook.com (10.13.176.223) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.29 via Frontend Transport; Tue, 11 Apr 2023 19:17:21 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:21 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:21 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:20 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d49f6a6-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ibWlJlAzJCwMexqfvCpGmZ6XhEB8FJ4xNsD0SrVfgFoEotE23Ikc6A04vt89teQXMdxung1NIw2OB3J1E2TG0a3GkcgUlQwtq/vRkcVfrah6AK4wbnGxM60PLseV0s2c+yhXJgO5639NUb5Fsag58ZU2NVSnu7d1Fvh2Sc2cbQWt5+axFB/n+Tuux5Zchjd5UvLpYz9l+9azgN0KrgVEzk8qkKLxzq7Oav6ibp7NRr96ImmUxP9bkkSIPfbx2q7imNaFiXlWFfz0RbG20pT+5N4aNCeBTMth+tl7C+VljWZNO2y174I6iw9NtHTTQDER+B21Zz0/E/2BOQBkDhpPkQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vXy30MDq5H9lVjpDCmPPgeX6WY9DS2JY3kC1QKAn9No=;
 b=ZvkD6tuYNHXuUPhrT56khdRe6Ep7R8oju15YD7KJv4EX+T8pUzFXntKFluORwoLvkWGQ2plxbfo22vg3Pq6SFlGdREoKquhLhFtl9XSnfWREcnAnoO9voaEdEZP3eVsIMMvScgIUnqAwW6Oi5ct/pGE3AP/0F3fBB2C2yRlgyNr4bAr6vLE7oFgekcky2j7Er8Y3V8fYSZ1+1AjV+d0si7A5SQMrTvyPTcc45u2RdonItJj0ybJo8sl9tzlgzE9UQFdvQmZ2ORmomNUOZDxkb4wHj0pWGx9P6RALJOZwnwdrLc3bgkA8xJVQKFcg7Jd2OBCDqXlEVR6Y4ygc/lOzbg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vXy30MDq5H9lVjpDCmPPgeX6WY9DS2JY3kC1QKAn9No=;
 b=CvONxjfIyVRzv3f+Pmw+eh0tckqwwueTCTaLxtMjD31wLOLT4FyuVH5CVvxOyFx9QA2/AAlfmiJMzQKWBrePSawrUDnhNeOgvaWCo2UE1zMR2LTeKWVbhebCFPgiq4WiQ4VUcLiIGu67hq1yV3GJ52XKHwxpwG44HkKZPISip7s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: [XEN][PATCH v5 13/17] xen/arm: Implement device tree node removal functionalities
Date: Tue, 11 Apr 2023 12:16:32 -0700
Message-ID: <20230411191636.26926-32-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT020:EE_|CH2PR12MB5020:EE_
X-MS-Office365-Filtering-Correlation-Id: 6086e7bd-38a6-4ddc-2e3f-08db3ac16013
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5EE62jWduwEF8oWuL9B5C/JPyVk+v9e4sC11SOTrWWP4qxhUg/YVbv+PRzWrKGfCPvTY2ebYsCHKlRgYBxEeEgXxQILhwCj0OPERE8r52eSwUCLiTlPWXWeXqw4LyW7Sjnru5/hm2bY0ZW4DnUqbqXJndUY9VGXdwxgoJmtmXdfWnHX0d0vgoltkNFsRa+kC5qwwDkfpyINBZ6cXQ6qztXW/CuzreqH2RZTAk4UkecJ//4hzg9+jMxRWFVdQOfSLDlu1KjiNf7jCb4VPPHdKClS3GpPgCmyUFqB7QFdAHxNekab0fxJBjOw3tyIuTdT9//8GoYVW1wTPAlU+bwUcVQ79sAYukSB6vEqvbLspGDXSNODvvBbRFGoHa7+HLPTLowGoKB1lSCPNdeXwDhyMgedxMFHwqfTQb0b4QLwdingK5nbMCqBm6FyccgRFjTQ665yp7TJayFp9vQqE58lGLjWTttZKJFxaBWY//5moDHA1Bn0bUBem0TdLI7GknKTBJ5n1U3i0RWxPkCeSwO8qF1bm7/PUbECOuEqtWCMexCoYzszeCkEAkvsx8G1IpStAOiuUBlOA3Ml5JUJ2OnTdtaHBKniauYlb9RxURc0ndhHQ1VjXz1148hA+tRD6wTDnIC42ahHxEx4PGUSe1vkA5pBrNE+JNK7JQ12YTU04U0HcEtiFmCdauwN2rxRLlQcn7TPdXuhTFVIX1tdvi21Aiy/KVsnBgK30LF+YzTgI6BWGcQK5488luJvOtM8FPliibbSP4WQXKefS2HhseXVFXg==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(346002)(39860400002)(136003)(451199021)(40470700004)(46966006)(36840700001)(1076003)(356005)(82740400003)(8936002)(8676002)(4326008)(86362001)(6666004)(5660300002)(6916009)(41300700001)(44832011)(40460700003)(82310400005)(36756003)(316002)(2906002)(30864003)(478600001)(54906003)(70586007)(70206006)(40480700001)(47076005)(83380400001)(26005)(81166007)(186003)(36860700001)(2616005)(336012)(426003)(403724002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:21.7557
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6086e7bd-38a6-4ddc-2e3f-08db3ac16013
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT020.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB5020

Introduce the XEN_SYSCTL_dt_overlay sysctl to remove device tree nodes that were
added using a device tree overlay.

xl dt-overlay remove file.dtbo:
    Removes all the nodes in the given dtbo.
    First, it removes IRQ permissions and MMIO accesses. Next, it finds the
    nodes in dt_host and deletes the device node entries from dt_host.

    A node is removed only if it is not in use by any domain other than dom0
    or domIO.

Also, add an overlay_track struct to keep track of the nodes added through a
device tree overlay. overlay_track holds dt_host_new, the unflattened form of
the updated fdt, and the names of the overlay nodes. When a node is removed, the
memory used by overlay_track for that particular overlay node is freed as well.

Nested overlay removal is supported in a sequential manner only, i.e. if
overlay_child nests under overlay_parent, the user is expected to remove
overlay_child first and overlay_parent afterwards.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/arch/arm/sysctl.c        |  16 +-
 xen/common/Makefile          |   1 +
 xen/common/dt_overlay.c      | 415 +++++++++++++++++++++++++++++++++++
 xen/include/public/sysctl.h  |  24 ++
 xen/include/xen/dt_overlay.h |  59 +++++
 5 files changed, 514 insertions(+), 1 deletion(-)
 create mode 100644 xen/common/dt_overlay.c
 create mode 100644 xen/include/xen/dt_overlay.h

diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index b0a78a8b10..672db61650 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -12,6 +12,7 @@
 #include <xen/errno.h>
 #include <xen/hypercall.h>
 #include <public/sysctl.h>
+#include <xen/dt_overlay.h>
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
 {
@@ -21,7 +22,20 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
 long arch_do_sysctl(struct xen_sysctl *sysctl,
                     XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
-    return -ENOSYS;
+    long ret = 0;
+
+    switch ( sysctl->cmd )
+    {
+    case XEN_SYSCTL_dt_overlay:
+        ret = dt_sysctl(&sysctl->u.dt_overlay);
+        break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    return ret;
 }
 
 /*
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 46049eac35..be78c9a8c2 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
 obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
+obj-$(CONFIG_OVERLAY_DTB) += dt_overlay.o
 obj-y += event_2l.o
 obj-y += event_channel.o
 obj-y += event_fifo.o
diff --git a/xen/common/dt_overlay.c b/xen/common/dt_overlay.c
new file mode 100644
index 0000000000..516e8010c5
--- /dev/null
+++ b/xen/common/dt_overlay.c
@@ -0,0 +1,415 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0
+ *
+ * xen/common/dt_overlay.c
+ *
+ * Device tree overlay support in Xen.
+ *
+ * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
+ * Written by Vikram Garhwal <vikram.garhwal@amd.com>
+ *
+ */
+#include <xen/iocap.h>
+#include <xen/xmalloc.h>
+#include <asm/domain_build.h>
+#include <xen/dt_overlay.h>
+#include <xen/guest_access.h>
+
+static LIST_HEAD(overlay_tracker);
+static DEFINE_SPINLOCK(overlay_lock);
+
+/* Find the last descendant of the given device_node. */
+static struct dt_device_node *find_last_descendants_node(
+                                            struct dt_device_node *device_node)
+{
+    struct dt_device_node *child_node;
+
+    for ( child_node = device_node->child; child_node->sibling != NULL;
+          child_node = child_node->sibling )
+    {
+    }
+
+    /* If the last child_node also has children, recurse into it. */
+    if ( child_node->child )
+        child_node = find_last_descendants_node(child_node);
+
+    return child_node;
+}
+
+static int dt_overlay_remove_node(struct dt_device_node *device_node)
+{
+    struct dt_device_node *np;
+    struct dt_device_node *parent_node;
+    struct dt_device_node *device_node_last_descendant = device_node->child;
+
+    parent_node = device_node->parent;
+
+    if ( parent_node == NULL )
+    {
+        dt_dprintk("%s's parent node not found\n", device_node->name);
+        return -EFAULT;
+    }
+
+    np = parent_node->child;
+
+    if ( np == NULL )
+    {
+        dt_dprintk("parent node %s's child not found\n", parent_node->name);
+        return -EFAULT;
+    }
+
+    /* If the node to be removed is the only child or the first child. */
+    if ( !dt_node_cmp(np->full_name, device_node->full_name) )
+    {
+        parent_node->child = np->sibling;
+
+        /*
+         * Iterate over all child nodes of device_node. Given that we are
+         * removing the parent node, we need to remove all its descendants too.
+         */
+        if ( device_node_last_descendant )
+        {
+            device_node_last_descendant =
+                                        find_last_descendants_node(device_node);
+            parent_node->allnext = device_node_last_descendant->allnext;
+        }
+        else
+            parent_node->allnext = np->allnext;
+
+        return 0;
+    }
+
+    for ( np = parent_node->child; np->sibling != NULL; np = np->sibling )
+    {
+        if ( !dt_node_cmp(np->sibling->full_name, device_node->full_name) )
+        {
+            /* Found the node. Now we remove it. */
+            np->sibling = np->sibling->sibling;
+
+            if ( np->child )
+                np = find_last_descendants_node(np);
+
+            /*
+             * Iterate over all child nodes of device_node. Given that we are
+             * removing the parent node, we must remove its descendants too.
+             */
+            if ( device_node_last_descendant )
+                device_node_last_descendant =
+                                        find_last_descendants_node(device_node);
+
+            if ( device_node_last_descendant )
+                np->allnext = device_node_last_descendant->allnext;
+            else
+                np->allnext = np->allnext->allnext;
+
+            break;
+        }
+    }
+
+    return 0;
+}
+
+/* Basic sanity check on the dtbo provided by the toolstack to Xen. */
+static int check_overlay_fdt(const void *overlay_fdt, uint32_t overlay_fdt_size)
+{
+    if ( (fdt_totalsize(overlay_fdt) != overlay_fdt_size) ||
+          fdt_check_header(overlay_fdt) )
+    {
+        printk(XENLOG_ERR "The overlay FDT is not a valid Flat Device Tree\n");
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+/* Count the number of nodes one level below the __overlay__ tag. */
+static unsigned int overlay_node_count(void *fdto)
+{
+    unsigned int num_overlay_nodes = 0;
+    int fragment;
+
+    fdt_for_each_subnode(fragment, fdto, 0)
+    {
+        int subnode;
+        int overlay;
+
+        overlay = fdt_subnode_offset(fdto, fragment, "__overlay__");
+
+        /*
+         * The overlay value can be < 0, but the fdt_for_each_subnode() loop
+         * checks for overlay >= 0, so no extra overlay >= 0 check is needed here.
+         */
+        fdt_for_each_subnode(subnode, fdto, overlay)
+        {
+            num_overlay_nodes++;
+        }
+    }
+
+    return num_overlay_nodes;
+}
+
+static int handle_remove_irq_iommu(struct dt_device_node *device_node)
+{
+    int rc = 0;
+    struct domain *d = hardware_domain;
+    domid_t domid;
+    unsigned int naddr, len;
+    unsigned int i, nirq;
+    uint64_t addr, size;
+
+    domid = dt_device_used_by(device_node);
+
+    dt_dprintk("Checking if node %s is used by any domain\n",
+               device_node->full_name);
+
+    /* Remove the node iff it is assigned to dom0 or DOMID_IO. */
+    if ( domid != 0 && domid != DOMID_IO )
+    {
+        printk(XENLOG_ERR "Device %s is being used by domain %d. Removing nodes failed\n",
+               device_node->full_name, domid);
+        return -EINVAL;
+    }
+
+    dt_dprintk("Removing node: %s\n", device_node->full_name);
+
+    nirq = dt_number_of_irq(device_node);
+
+    /* Remove IRQ permission */
+    for ( i = 0; i < nirq; i++ )
+    {
+        rc = platform_get_irq(device_node, i);
+
+        if ( irq_access_permitted(d, rc) == false )
+        {
+            printk(XENLOG_ERR "IRQ %d is not routed to domain %d\n", rc,
+                   domid);
+            return -EINVAL;
+        }
+        /*
+         * TODO: We don't handle shared IRQs for now, so it is assumed that
+         * the IRQ is not shared with other devices.
+         */
+        rc = irq_deny_access(d, rc);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "unable to revoke access for irq %u for %s\n",
+                   i, device_node->full_name);
+            return rc;
+        }
+    }
+
+    /* Check if the "iommus" property exists. */
+    if ( dt_get_property(device_node, "iommus", &len) )
+    {
+        rc = iommu_remove_dt_device(device_node);
+        if ( rc != 0 && rc != -ENXIO )
+            return rc;
+    }
+
+    naddr = dt_number_of_address(device_node);
+
+    /* Remove mmio access. */
+    for ( i = 0; i < naddr; i++ )
+    {
+        rc = dt_device_get_address(device_node, i, &addr, &size);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
+                   i, dt_node_full_name(device_node));
+            return rc;
+        }
+
+        rc = iomem_deny_access(d, paddr_to_pfn(addr),
+                               paddr_to_pfn(PAGE_ALIGN(addr + size - 1)));
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Unable to remove dom%d access to"
+                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
+                   d->domain_id,
+                   addr & PAGE_MASK, PAGE_ALIGN(addr + size) - 1);
+            return rc;
+        }
+
+    }
+
+    return rc;
+}
+
+/* Revoke IRQ, IOMMU, and MMIO access for all descendants of the given node. */
+static int remove_all_descendant_nodes(struct dt_device_node *device_node)
+{
+    int rc = 0;
+    struct dt_device_node *child_node;
+
+    for ( child_node = device_node->child; child_node != NULL;
+         child_node = child_node->sibling )
+    {
+        if ( child_node->child )
+            remove_all_descendant_nodes(child_node);
+
+        rc = handle_remove_irq_iommu(child_node);
+        if ( rc )
+            return rc;
+    }
+
+    return rc;
+}
+
+/* Remove nodes from dt_host. */
+static int remove_nodes(const struct overlay_track *tracker)
+{
+    int rc = 0;
+    struct dt_device_node *overlay_node;
+    unsigned int j;
+
+    for ( j = 0; j < tracker->num_nodes; j++ )
+    {
+        overlay_node = (struct dt_device_node *)tracker->nodes_address[j];
+        if ( overlay_node == NULL )
+        {
+            printk(XENLOG_ERR "Tracked node %u is not present in the tree. Removing nodes failed\n",
+                   j);
+            return -EINVAL;
+        }
+
+        rc = remove_all_descendant_nodes(overlay_node);
+        /* All child nodes are unmapped. Now remove the node itself. */
+        if ( !rc )
+            rc = handle_remove_irq_iommu(overlay_node);
+        if ( rc )
+            return rc;
+
+        write_lock(&dt_host->lock);
+
+        rc = dt_overlay_remove_node(overlay_node);
+        if ( rc )
+        {
+            write_unlock(&dt_host->lock);
+
+            return rc;
+        }
+
+        write_unlock(&dt_host->lock);
+    }
+
+    return rc;
+}
+
+/*
+ * First find the device node to remove, check whether the device is being
+ * used by any domain, and finally remove it from dt_host. The IOMMU is
+ * already taken care of when the domain is destroyed.
+ */
+static long handle_remove_overlay_nodes(void *overlay_fdt,
+                                        uint32_t overlay_fdt_size)
+{
+    int rc = 0;
+    struct overlay_track *entry, *temp, *track;
+    bool found_entry = false;
+
+    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
+    if ( rc )
+        return rc;
+
+    if ( overlay_node_count(overlay_fdt) == 0 )
+        return -EINVAL;
+
+    spin_lock(&overlay_lock);
+
+    /*
+     * First check that the dtbo is correct, i.e. it must be one of the dtbos
+     * that was used when dynamically adding nodes.
+     * Limitation: Cases with the same node names but different properties are
+     * not supported currently. We rely on the user to provide the same dtbo
+     * that was used when adding the nodes.
+     */
+    list_for_each_entry_safe( entry, temp, &overlay_tracker, entry )
+    {
+        if ( memcmp(entry->overlay_fdt, overlay_fdt, overlay_fdt_size) == 0 )
+        {
+            track = entry;
+            found_entry = true;
+            break;
+        }
+    }
+
+    if ( found_entry == false )
+    {
+        rc = -EINVAL;
+
+        printk(XENLOG_ERR "Cannot find any matching tracker for the input dtbo."
+               " Removing nodes is only supported for previously added dtbos."
+               " Please provide the dtbo that was used to add the nodes.\n");
+        goto out;
+
+    }
+
+    rc = remove_nodes(track);
+
+    if ( rc )
+    {
+        printk(XENLOG_ERR "Removing node failed\n");
+        goto out;
+    }
+
+    list_del(&entry->entry);
+
+    xfree(entry->dt_host_new);
+    xfree(entry->fdt);
+    xfree(entry->overlay_fdt);
+
+    xfree(entry->nodes_address);
+
+    xfree(entry);
+
+out:
+    spin_unlock(&overlay_lock);
+    return rc;
+}
+
+long dt_sysctl(struct xen_sysctl_dt_overlay *op)
+{
+    long ret;
+    void *overlay_fdt;
+
+    if ( op->overlay_fdt_size == 0 || op->overlay_fdt_size > KB(500) )
+        return -EINVAL;
+
+    overlay_fdt = xmalloc_bytes(op->overlay_fdt_size);
+
+    if ( overlay_fdt == NULL )
+        return -ENOMEM;
+
+    ret = copy_from_guest(overlay_fdt, op->overlay_fdt, op->overlay_fdt_size);
+    if ( ret )
+    {
+        gprintk(XENLOG_ERR, "copy from guest failed\n");
+        xfree(overlay_fdt);
+
+        return -EFAULT;
+    }
+
+    switch ( op->overlay_op )
+    {
+    case XEN_SYSCTL_DT_OVERLAY_REMOVE:
+        ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
+        xfree(overlay_fdt);
+
+        break;
+
+    default:
+        xfree(overlay_fdt);
+        break;
+    }
+
+    return ret;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 2b24d6bfd0..1158c1efb3 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -1057,6 +1057,25 @@ typedef struct xen_sysctl_cpu_policy xen_sysctl_cpu_policy_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpu_policy_t);
 #endif
 
+#if defined(__arm__) || defined (__aarch64__)
+#define XEN_SYSCTL_DT_OVERLAY_ADD                   1
+#define XEN_SYSCTL_DT_OVERLAY_REMOVE                2
+
+/*
+ * XEN_SYSCTL_dt_overlay
+ * Performs addition/removal of device tree nodes under a parent node using a
+ * dtbo. This is done in three steps:
+ *  - Add/Remove the nodes from dt_host.
+ *  - Add/Remove IRQ permissions for the nodes.
+ *  - Add/Remove MMIO accesses.
+ */
+struct xen_sysctl_dt_overlay {
+    XEN_GUEST_HANDLE_64(void) overlay_fdt; /* IN: overlay fdt. */
+    uint32_t overlay_fdt_size;  /* IN: Overlay dtb size. */
+    uint8_t overlay_op; /* IN: Add or remove. */
+};
+#endif
+
 struct xen_sysctl {
     uint32_t cmd;
 #define XEN_SYSCTL_readconsole                    1
@@ -1087,6 +1106,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_livepatch_op                  27
 /* #define XEN_SYSCTL_set_parameter              28 */
 #define XEN_SYSCTL_get_cpu_policy                29
+#define XEN_SYSCTL_dt_overlay                    30
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -1117,6 +1137,10 @@ struct xen_sysctl {
 #if defined(__i386__) || defined(__x86_64__)
         struct xen_sysctl_cpu_policy        cpu_policy;
 #endif
+
+#if defined(__arm__) || defined (__aarch64__)
+        struct xen_sysctl_dt_overlay        dt_overlay;
+#endif
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/xen/dt_overlay.h b/xen/include/xen/dt_overlay.h
new file mode 100644
index 0000000000..2cd975a070
--- /dev/null
+++ b/xen/include/xen/dt_overlay.h
@@ -0,0 +1,59 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0
+ *
+ * xen/dt_overlay.h
+ *
+ * Device tree overlay support in Xen.
+ *
+ * Copyright (c) 2022 AMD Inc.
+ * Written by Vikram Garhwal <vikram.garhwal@amd.com>
+ *
+ */
+#ifndef __XEN_DT_OVERLAY_H__
+#define __XEN_DT_OVERLAY_H__
+
+#include <xen/list.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/device_tree.h>
+#include <xen/rangeset.h>
+
+/*
+ * struct overlay_track describes information about nodes that were added
+ * through a dtbo.
+ * @entry: List pointer.
+ * @dt_host_new: Unflattened form of the updated fdt.
+ * @fdt: Stores the updated fdt.
+ * @overlay_fdt: Stores the overlay fdt (dtbo) that was used to add the nodes.
+ * @nodes_address: Stores the address of each added node.
+ * @num_nodes: Stores the total number of nodes added by the overlay dtb.
+ */
+struct overlay_track {
+    struct list_head entry;
+    struct dt_device_node *dt_host_new;
+    void *fdt;
+    void *overlay_fdt;
+    unsigned long *nodes_address;
+    unsigned int num_nodes;
+};
+
+struct xen_sysctl_dt_overlay;
+
+#ifdef CONFIG_OVERLAY_DTB
+long dt_sysctl(struct xen_sysctl_dt_overlay *op);
+#else
+static inline long dt_sysctl(struct xen_sysctl_dt_overlay *op)
+{
+    return -ENOSYS;
+}
+#endif
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519832.806930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWv-0007o6-3O; Tue, 11 Apr 2023 19:19:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519832.806930; Tue, 11 Apr 2023 19:19:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWv-0007nz-0F; Tue, 11 Apr 2023 19:19:49 +0000
Received: by outflank-mailman (input) for mailman id 519832;
 Tue, 11 Apr 2023 19:19:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUX-0004DR-UL
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:21 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20605.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7a02f9d4-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:19 +0200 (CEST)
Received: from MW4PR03CA0193.namprd03.prod.outlook.com (2603:10b6:303:b8::18)
 by BY5PR12MB4082.namprd12.prod.outlook.com (2603:10b6:a03:212::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.36; Tue, 11 Apr
 2023 19:17:16 +0000
Received: from CO1NAM11FT096.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:b8:cafe::50) by MW4PR03CA0193.outlook.office365.com
 (2603:10b6:303:b8::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:16 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT096.mail.protection.outlook.com (10.13.175.84) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:16 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:15 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:15 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:14 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a02f9d4-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=P0exrgxOsL3/n+mSGDHDH6/A5pjNuEf1TPtfLeN1YNCdk/0fcuZKfeBQ+kAUCR1oHXMjAPvE4GWoEgdLcUa5IXQSPdX3maD/MO6bq+CXTHU5LaU0IqsbKChtU0AktrZN5df86NdF3mDTlVhNjIcY41LV/rIZwWJLWlSjQRhxQB0isasO/zfTdo4LRCsid8vbMYikhwEtk4/933RhXZmxSNbei+shE8xh3yKyNkG0jhYwCaTFBEFaptQ8zIjLdzCvxToy6C8bW4lxVLJV5YKiItoUp/mGckp+lYgoZKKJT8bDmgYke5UlqVKgdukX8f/G4+27OvNAJDhnOJ7u4Df+Vw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IPRvr0PydrbevxDRuC5bBe8hAAj8BLWppbzz+G6AD20=;
 b=lrxwKTPqIWTKUtX/8wpDACNNInoA3QgYgqgmcAw+exLo9MtAX5xaaT3gL7prU8bDv23L70gZ55n1nXx+zfkcxjOVA23gHjB3oceDq7HyhmCQgBkKPmA+d/3KKyK1HDmR0RkbE95MoYRQ4EX1rra2gYqQfSuAPzvbKCMRpy3K0TyVlckDUw5BJBbXjQi4LNY/776Fwk1Ht3PhRqhjP2vSys3UfL0iKFekLlyDzUSyClKrvMAa0zXHOuq523iRvuiWLOAByCmBUeaRCIxa8QuahtSZCEshumzaxrAbHWEVRXZbPq9++7ZKY7hYrABhEj4Zk2EjecLRsBHCsbn96lcVCg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IPRvr0PydrbevxDRuC5bBe8hAAj8BLWppbzz+G6AD20=;
 b=qH59dc/ukbVX2ZGRfg7805zmZ0sNdHiS/3/rE6u+NrZ0/AfFAQfuPd9pdmWOSQ45slWxwf4a6PXicvYGdbvsLrV0wkn3Tbmrblpj4tSOH/+MOzHoEx4K4IWLyxnY8IS4TQkrGX57VadXkvNt3dnpjGsjO533SpnFoTsgNemL2Pc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 06/17] xen/device-tree: Add device_tree_find_node_by_path() to find nodes in device tree
Date: Tue, 11 Apr 2023 12:16:25 -0700
Message-ID: <20230411191636.26926-25-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

Add device_tree_find_node_by_path() to find the node matching a given full
path within any unflattened device tree (i.e. starting from any given
dt_device_node root).

Reason behind this function:
    Each time overlay nodes are added using a .dtbo, a new fdt (a copy of
    device_tree_flattened) is created and updated with the overlay nodes. This
    updated fdt is then unflattened into dt_host_new. Next, we need to find
    the overlay nodes in dt_host_new, find each overlay node's parent in
    dt_host, and add the nodes as children under their parent in dt_host. Thus
    we need this function to search for a node in different unflattened device
    trees.
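
A minimal, self-contained sketch of the idea (the struct below is a
simplified stand-in for Xen's struct dt_device_node, not the real type): the
lookup is a linear walk over the given tree's allnext chain, comparing full
path names.

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for struct dt_device_node (illustrative only). */
struct tiny_dt_node {
    const char *full_name;        /* full path, e.g. "/soc/uart@ff000000" */
    struct tiny_dt_node *allnext; /* next node in the all-nodes list */
};

/*
 * Walk the given tree's allnext chain and return the first node whose
 * full path matches, or NULL. Taking the root as a parameter is what
 * lets the same walk run on dt_host or on a freshly unflattened tree.
 */
static struct tiny_dt_node *find_node_by_path(struct tiny_dt_node *dt,
                                              const char *path)
{
    struct tiny_dt_node *np;

    for ( np = dt; np != NULL; np = np->allnext )
        if ( np->full_name && strcmp(np->full_name, path) == 0 )
            return np;

    return NULL;
}
```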

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/device_tree.c      |  5 +++--
 xen/include/xen/device_tree.h | 17 +++++++++++++++--
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index bf847b2584..507b4ac5b6 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -358,11 +358,12 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
     return np;
 }
 
-struct dt_device_node *dt_find_node_by_path(const char *path)
+struct dt_device_node *device_tree_find_node_by_path(struct dt_device_node *dt,
+                                                     const char *path)
 {
     struct dt_device_node *np;
 
-    dt_for_each_device_node(dt_host, np)
+    dt_for_each_device_node(dt, np)
         if ( np->full_name && (dt_node_cmp(np->full_name, path) == 0) )
             break;
 
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 58ac12abe3..998f972ebc 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -534,13 +534,26 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
 struct dt_device_node *dt_find_node_by_alias(const char *alias);
 
 /**
- * dt_find_node_by_path - Find a node matching a full DT path
+ * device_tree_find_node_by_path - Generic function to find a node matching
+ * the full DT path in any given unflattened device tree
+ * @dt: The device tree to search
  * @path: The full path to match
  *
  * Returns a node pointer.
  */
-struct dt_device_node *dt_find_node_by_path(const char *path);
+struct dt_device_node *device_tree_find_node_by_path(struct dt_device_node *dt,
+                                                     const char *path);
 
+/**
+ * dt_find_node_by_path - Find a node matching a full DT path in dt_host
+ * @path: The full path to match
+ *
+ * Returns a node pointer.
+ */
+static inline struct dt_device_node *dt_find_node_by_path(const char *path)
+{
+    return device_tree_find_node_by_path(dt_host, path);
+}
 
 /**
  * dt_find_node_by_gpath - Same as dt_find_node_by_path but retrieve the
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:49 +0000
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>
Subject: [XEN][PATCH v5 14/17] xen/arm: Implement device tree node addition functionalities
Date: Tue, 11 Apr 2023 12:16:33 -0700
Message-ID: <20230411191636.26926-33-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

Update the XEN_SYSCTL_dt_overlay sysctl to support adding nodes to the device
tree using a device tree overlay (.dtbo).

xl dt-overlay add file.dtbo:
    Each time overlay nodes are added using a .dtbo, a new fdt (a copy of
    device_tree_flattened) is created and updated with the overlay nodes. This
    updated fdt is then unflattened into dt_host_new. Next, we check whether
    any of the overlay nodes already exist in dt_host. If none of them do, we
    find the overlay nodes in dt_host_new, find each overlay node's parent in
    dt_host, and add the nodes as children under their parent in dt_host. Each
    node is attached as the last node under its target parent.

    Finally, we add the IRQs, add the device to the IOMMUs, set permissions
    and map MMIO for the overlay node.

When a node is added using an overlay, a new entry is allocated in
overlay_track to keep track of the memory allocated for the added overlay
node. This is needed to free that memory when the device tree node is
removed.

The main purpose of this is to address the first part of dynamic programming,
i.e. making Xen aware of the new device tree node: updating dt_host with the
overlay node information. Here we are adding/removing nodes from dt_host and
checking/setting IOMMU and IRQ permissions, but never mapping them to any
domain. Right now, mapping/unmapping happens only when a domU is
created/destroyed using "xl create".
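
Attaching a node "as the last node under its target parent" is plain
sibling-list surgery. Below is a minimal, self-contained sketch of just that
step (simplified node type and hypothetical helper name; the real
dt_overlay_add_node() in this patch must additionally splice the node into
the flat allnext list so tree-wide iteration keeps working):

```c
#include <stddef.h>

/* Simplified stand-in for struct dt_device_node (illustrative only). */
struct tiny_dt_node {
    struct tiny_dt_node *parent;
    struct tiny_dt_node *child;   /* first child */
    struct tiny_dt_node *sibling; /* next sibling */
};

/* Append new_node as the last child of parent. */
static void attach_as_last_child(struct tiny_dt_node *parent,
                                 struct tiny_dt_node *new_node)
{
    struct tiny_dt_node *np;

    new_node->parent = parent;
    new_node->sibling = NULL;

    if ( parent->child == NULL )
    {
        parent->child = new_node;
        return;
    }

    /* Walk to the last existing child, then link the new node after it. */
    for ( np = parent->child; np->sibling != NULL; np = np->sibling )
        ;
    np->sibling = new_node;
}
```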

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/dt_overlay.c | 482 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 482 insertions(+)

diff --git a/xen/common/dt_overlay.c b/xen/common/dt_overlay.c
index 516e8010c5..3344bad313 100644
--- a/xen/common/dt_overlay.c
+++ b/xen/common/dt_overlay.c
@@ -36,6 +36,25 @@ static struct dt_device_node *find_last_descendants_node(
     return child_node;
 }
 
+/*
+ * Returns the node following the input node. If the node has children,
+ * return the node following its last descendant.
+ */
+static struct dt_device_node *dt_find_next_node(struct dt_device_node *dt,
+                                            const struct dt_device_node *node)
+{
+    struct dt_device_node *np;
+
+    dt_for_each_device_node(dt, np)
+        if ( np == node )
+            break;
+
+    if ( np && np->child )
+        np = find_last_descendants_node(np);
+
+    return np ? np->allnext : NULL;
+}
+
 static int dt_overlay_remove_node(struct dt_device_node *device_node)
 {
     struct dt_device_node *np;
@@ -109,6 +128,72 @@ static int dt_overlay_remove_node(struct dt_device_node *device_node)
     return 0;
 }
 
+static int dt_overlay_add_node(struct dt_device_node *device_node,
+                               const char *parent_node_path)
+{
+    struct dt_device_node *parent_node;
+    struct dt_device_node *np, *np_last_descendant;
+    struct dt_device_node *next_node;
+    struct dt_device_node *device_node_last_descendant;
+
+    parent_node = dt_find_node_by_path(parent_node_path);
+
+    if ( parent_node == NULL )
+    {
+        dt_dprintk("Node not found. Overlay node will not be added\n");
+        return -EINVAL;
+    }
+
+    /* If parent has no child. */
+    if ( parent_node->child == NULL )
+    {
+        next_node = parent_node->allnext;
+        device_node->parent = parent_node;
+        parent_node->allnext = device_node;
+        parent_node->child = device_node;
+    }
+    else
+    {
+        /*
+         * The parent has at least one child; iterate to its last child node.
+         */
+        for ( np = parent_node->child; np->sibling != NULL; np = np->sibling );
+
+        /* Iterate over all child nodes of np node. */
+        if ( np->child )
+        {
+            np_last_descendant = find_last_descendants_node(np);
+
+            next_node = np_last_descendant->allnext;
+            np_last_descendant->allnext = device_node;
+        }
+        else
+        {
+            next_node = np->allnext;
+            np->allnext = device_node;
+        }
+
+        device_node->parent = parent_node;
+        np->sibling = device_node;
+        np->sibling->sibling = NULL;
+    }
+
+    /* Iterate over all child nodes of device_node to add children too. */
+    if ( device_node->child )
+    {
+        device_node_last_descendant = find_last_descendants_node(device_node);
+        /* Plug next_node at the end of last children of device_node. */
+        device_node_last_descendant->allnext = next_node;
+    }
+    else
+    {
+        /* Now plug next_node at the end of device_node. */
+        device_node->allnext = next_node;
+    }
+
+    return 0;
+}
+
 /* Basic sanity check for the dtbo tool stack provided to Xen. */
 static int check_overlay_fdt(const void *overlay_fdt, uint32_t overlay_fdt_size)
 {
@@ -148,6 +233,79 @@ static unsigned int overlay_node_count(void *fdto)
     return num_overlay_nodes;
 }
 
+/*
+ * overlay_get_nodes_info() gets the full path names of all the nodes one
+ * level under each __overlay__ tag. This is useful for checking whether a
+ * dtbo tries to add nodes which already exist in the device tree.
+ */
+static int overlay_get_nodes_info(const void *fdto, char ***nodes_full_path,
+                                  unsigned int num_overlay_nodes)
+{
+    int fragment;
+    unsigned int node_num = 0;
+
+    *nodes_full_path = xzalloc_bytes(num_overlay_nodes * sizeof(char *));
+
+    if ( *nodes_full_path == NULL )
+        return -ENOMEM;
+
+    fdt_for_each_subnode(fragment, fdto, 0)
+    {
+        int target;
+        int overlay;
+        int subnode;
+        const char *target_path;
+
+        target = fdt_overlay_target_offset(device_tree_flattened, fdto,
+                                           fragment, &target_path);
+        if ( target < 0 )
+            return target;
+
+        overlay = fdt_subnode_offset(fdto, fragment, "__overlay__");
+
+        /*
+         * overlay can be < 0, but the fdt_for_each_subnode() loop below
+         * already checks for overlay >= 0, so no extra check is needed here.
+         */
+        fdt_for_each_subnode(subnode, fdto, overlay)
+        {
+            const char *node_name = NULL;
+            int node_name_len;
+            unsigned int target_path_len = strlen(target_path);
+            unsigned int node_full_name_len;
+
+            node_name = fdt_get_name(fdto, subnode, &node_name_len);
+
+            if ( node_name == NULL )
+                return node_name_len;
+
+            /*
+             * The extra 2 bytes hold the '/' separator and the NUL
+             * terminator of the full node name.
+             */
+            node_full_name_len = target_path_len + node_name_len + 2;
+
+            (*nodes_full_path)[node_num] = xmalloc_bytes(node_full_name_len);
+
+            if ( (*nodes_full_path)[node_num] == NULL )
+                return -ENOMEM;
+
+            memcpy((*nodes_full_path)[node_num], target_path, target_path_len);
+
+            (*nodes_full_path)[node_num][target_path_len] = '/';
+
+            memcpy((*nodes_full_path)[node_num] + target_path_len + 1,
+                    node_name, node_name_len);
+
+            (*nodes_full_path)[node_num][node_full_name_len - 1] = '\0';
+
+            node_num++;
+        }
+    }
+
+    return 0;
+}
+
 static int handle_remove_irq_iommu(struct dt_device_node *device_node)
 {
     int rc = 0;
@@ -367,6 +525,322 @@ out:
     return rc;
 }
 
+/*
+ * Handles IRQ and IOMMU mapping for the overlay_node and all descendants of the
+ * overlay_nodes.
+ */
+static int handle_add_irq_iommu(struct domain *d,
+                                struct dt_device_node *overlay_node)
+{
+    int rc;
+    unsigned int naddr, i, len;
+    uint64_t addr, size;
+    struct dt_device_node *np;
+
+    /* First let's handle the interrupts. */
+    rc = handle_device_interrupts(d, overlay_node, false);
+    if ( rc < 0 )
+    {
+        printk(XENLOG_ERR "Interrupt failed\n");
+        return rc;
+    }
+
+    /* Check if iommu property exists. */
+    if ( dt_get_property(overlay_node, "iommus", &len) )
+    {
+
+        /* Add device to IOMMUs. */
+        rc = iommu_add_dt_device(overlay_node);
+        if ( rc < 0 )
+        {
+            printk(XENLOG_ERR "Failed to add %s to the IOMMU\n",
+                   dt_node_full_name(overlay_node));
+            return rc;
+        }
+    }
+
+    /* Set permissions. */
+    naddr = dt_number_of_address(overlay_node);
+
+    dt_dprintk("%s passthrough = %d naddr = %u\n",
+               dt_node_full_name(overlay_node), false, naddr);
+
+    /* Give permission for map MMIOs */
+    for ( i = 0; i < naddr; i++ )
+    {
+        /*
+         * For now we set skip_mapping, which means iomem access is only
+         * permitted to hardware_domain via iomem_permit_access(), but never
+         * mapped, as map_range_p2mt() will not be called.
+         */
+        struct map_range_data mr_data = { .d = d,
+                                          .p2mt = p2m_mmio_direct_c,
+                                          .skip_mapping = true };
+
+        rc = dt_device_get_address(overlay_node, i, &addr, &size);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
+                   i, dt_node_full_name(overlay_node));
+            return rc;
+        }
+
+        rc = map_range_to_domain(overlay_node, addr, size, &mr_data);
+        if ( rc )
+            return rc;
+    }
+
+    /* Map IRQ and IOMMU for overlay_node's children. */
+    for ( np = overlay_node->child; np != NULL; np = np->sibling)
+    {
+        rc = handle_add_irq_iommu(d, np);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Adding IRQ and IOMMU failed\n");
+            return rc;
+        }
+    }
+
+    return rc;
+}
+
+/*
+ * Adds device tree nodes under the target node.
+ * The updated device_tree_flattened is unflattened into tr->dt_host_new so
+ * that the device tree generation and the iomem region mappings already done
+ * for the hardware domain by handle_node() are left untouched.
+ */
+static long handle_add_overlay_nodes(void *overlay_fdt,
+                                     uint32_t overlay_fdt_size)
+{
+    int rc, j, i;
+    struct dt_device_node *overlay_node, *prev_node, *next_node;
+    struct domain *d = hardware_domain;
+    struct overlay_track *tr = NULL;
+    char **nodes_full_path = NULL;
+    unsigned int new_fdt_size;
+
+    tr = xzalloc(struct overlay_track);
+    if ( tr == NULL )
+        return -ENOMEM;
+
+    new_fdt_size = fdt_totalsize(device_tree_flattened) +
+                   fdt_totalsize(overlay_fdt);
+
+    tr->fdt = xzalloc_bytes(new_fdt_size);
+    if ( tr->fdt == NULL )
+    {
+        xfree(tr);
+        return -ENOMEM;
+    }
+
+    tr->num_nodes = overlay_node_count(overlay_fdt);
+    if ( tr->num_nodes == 0 )
+    {
+        xfree(tr->fdt);
+        xfree(tr);
+        return -ENOMEM;
+    }
+
+    tr->nodes_address = xzalloc_bytes(tr->num_nodes * sizeof(unsigned long));
+    if ( tr->nodes_address == NULL )
+    {
+        xfree(tr->fdt);
+        xfree(tr);
+        return -ENOMEM;
+    }
+
+    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
+    if ( rc )
+    {
+        xfree(tr->nodes_address);
+        xfree(tr->fdt);
+        xfree(tr);
+        return rc;
+    }
+
+    /*
+     * Keep a copy of overlay_fdt, as fdt_overlay_apply() modifies the input
+     * overlay's content (magic) when applying the overlay.
+     */
+    tr->overlay_fdt = xzalloc_bytes(overlay_fdt_size);
+    if ( tr->overlay_fdt == NULL )
+    {
+        xfree(tr->nodes_address);
+        xfree(tr->fdt);
+        xfree(tr);
+        return -ENOMEM;
+    }
+
+    memcpy(tr->overlay_fdt, overlay_fdt, overlay_fdt_size);
+
+    spin_lock(&overlay_lock);
+
+    memcpy(tr->fdt, device_tree_flattened,
+           fdt_totalsize(device_tree_flattened));
+
+    /* Open tr->fdt with more space to accommodate the overlay_fdt. */
+    fdt_open_into(tr->fdt, tr->fdt, new_fdt_size);
+
+    /*
+     * overlay_get_nodes_info is called to get the node information from dtbo.
+     * This is done before fdt_overlay_apply() because the overlay apply will
+     * erase the magic of overlay_fdt.
+     */
+    rc = overlay_get_nodes_info(overlay_fdt, &nodes_full_path,
+                                tr->num_nodes);
+    if ( rc )
+    {
+        printk(XENLOG_ERR "Getting nodes information failed with error %d\n",
+               rc);
+        goto err;
+    }
+
+    rc = fdt_overlay_apply(tr->fdt, overlay_fdt);
+    if ( rc )
+    {
+        printk(XENLOG_ERR "Adding overlay node failed with error %d\n", rc);
+        goto err;
+    }
+
+    /*
+     * Check if any of the nodes already exist in dt_host. If a node already
+     * exists, return here, as this overlay_fdt is not suitable for overlays.
+     */
+    for ( j = 0; j < tr->num_nodes; j++ )
+    {
+        overlay_node = dt_find_node_by_path(nodes_full_path[j]);
+        if ( overlay_node != NULL )
+        {
+            printk(XENLOG_ERR "node %s exists in device tree\n",
+                   nodes_full_path[j]);
+            rc = -EINVAL;
+            goto err;
+        }
+    }
+
+    /* Unflatten the tr->fdt into a new dt_host. */
+    unflatten_device_tree(tr->fdt, &tr->dt_host_new);
+
+    for ( j = 0; j < tr->num_nodes; j++ )
+    {
+        dt_dprintk("Adding node: %s\n", nodes_full_path[j]);
+
+        /* Find the newly added node in tr->dt_host_new by its full path. */
+        overlay_node = device_tree_find_node_by_path(tr->dt_host_new,
+                                                     nodes_full_path[j]);
+        if ( overlay_node == NULL )
+        {
+            dt_dprintk("%s node not found\n", nodes_full_path[j]);
+            rc = -EFAULT;
+            goto remove_node;
+        }
+
+        /*
+         * Find the nodes before and after overlay_node in dt_host_new. We
+         * need these to fix up the dt_host_new links: when overlay_node is
+         * taken out of the dt_host_new tree and added to dt_host, the link
+         * between the previous node and next_node is broken. dt_host_new
+         * must be relinked correctly so that further overlay nodes can be
+         * extracted in the future.
+         */
+        dt_for_each_device_node(tr->dt_host_new, prev_node)
+            if ( prev_node->allnext == overlay_node )
+                break;
+
+        next_node = dt_find_next_node(tr->dt_host_new, overlay_node);
+
+        write_lock(&dt_host->lock);
+
+        /* Add the node to dt_host. */
+        rc = dt_overlay_add_node(overlay_node, overlay_node->parent->full_name);
+        if ( rc )
+        {
+            write_unlock(&dt_host->lock);
+
+            /* Node was not added to dt_host. */
+            goto remove_node;
+        }
+
+        write_unlock(&dt_host->lock);
+
+        prev_node->allnext = next_node;
+
+        overlay_node = dt_find_node_by_path(overlay_node->full_name);
+        if ( overlay_node == NULL )
+        {
+            /* Sanity check. But code will never come here. */
+            ASSERT_UNREACHABLE();
+            goto remove_node;
+        }
+
+        rc = handle_add_irq_iommu(d, overlay_node);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Adding IRQ and IOMMU failed\n");
+            goto remove_node;
+        }
+
+        /* Keep overlay_node address in tracker. */
+        tr->nodes_address[j] = (unsigned long)overlay_node;
+    }
+
+    INIT_LIST_HEAD(&tr->entry);
+    list_add_tail(&tr->entry, &overlay_tracker);
+
+    spin_unlock(&overlay_lock);
+
+    if ( nodes_full_path != NULL )
+    {
+        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
+              i++ )
+        {
+            xfree(nodes_full_path[i]);
+        }
+        xfree(nodes_full_path);
+    }
+
+    return rc;
+
+/*
+ * Failure case. We need to remove the nodes, free the tracker (if tr
+ * exists) and tr->dt_host_new.
+ */
+remove_node:
+    tr->num_nodes = j;
+
+    /* Keep rc (the original error) intact for the err path below. */
+    if ( remove_nodes(tr) )
+    {
+        /* If removing the nodes fails, this may cause memory leaks. */
+        printk(XENLOG_ERR "Removing the nodes failed\n");
+        spin_unlock(&overlay_lock);
+        return -EFAULT;
+    }
+
+err:
+    spin_unlock(&overlay_lock);
+
+    xfree(tr->dt_host_new);
+    xfree(tr->fdt);
+    xfree(tr->overlay_fdt);
+    xfree(tr->nodes_address);
+
+    if ( nodes_full_path != NULL )
+    {
+        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
+              i++ )
+        {
+            xfree(nodes_full_path[i]);
+        }
+        xfree(nodes_full_path);
+    }
+
+    xfree(tr);
+
+    return rc;
+}
+
 long dt_sysctl(struct xen_sysctl_dt_overlay *op)
 {
     long ret;
@@ -391,6 +865,14 @@ long dt_sysctl(struct xen_sysctl_dt_overlay *op)
 
     switch ( op->overlay_op )
     {
+    case XEN_SYSCTL_DT_OVERLAY_ADD:
+        ret = handle_add_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
+
+        /* handle_add_overlay_nodes() keeps its own copy of overlay_fdt. */
+        xfree(overlay_fdt);
+
+        break;
+
     case XEN_SYSCTL_DT_OVERLAY_REMOVE:
         ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
         xfree(overlay_fdt);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:51 +0000
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aXlXQRcv7U/T/rRXFeXTNVQL0ZVDEHTt5u8gFMIkdTA=;
 b=AWISUh6SBvJgSoIVRMU5xu4akTc/rzbLimZg/dOzu2AZQmrJyx/aPaQNUFlWQRm3lFXICp+ky2stwepnVeprzphrV52xhw71Tna0Mff4itrrZStzZyt4faOE08CRCKzPkYH6mOB3Fs3jzGw3qpm3+6CdtL4IZFKtCm8NVjQDxDs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Vikram Garhwal <fnu.vikram@xilinx.com>, David Gibson
	<david@gibson.dropbear.id.au>
Subject: [XEN][PATCH v5 05/17] libfdt: overlay: change overlay_get_target()
Date: Tue, 11 Apr 2023 12:16:24 -0700
Message-ID: <20230411191636.26926-24-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT107:EE_|BL1PR12MB5873:EE_
X-MS-Office365-Filtering-Correlation-Id: 9cf27a4a-d6d4-4ebc-9f12-08db3ac15bf2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UCHVU1GAWIXT5Vr+KAVVr/ta9BYbkCgoPqfm4WTdrU3RoPTLQlW5CXh242muGzIhtQJsutSR/AdTUTv3GXr9K6JpMHuCZpPgs666nQKZ+jyc2N657XVz/DG1JfeYyH+UcWG/zsLLdQIkR9TPf5QQxXO3KzR1KWAR9inkhIW9JVJhUmK7g+JsBobErTBk/ouoY2iADGrjzKIa89n5JoSdMp8yv7NSFZySqlc/CSKXAOrE1VgmQMTmNJqaeIhmt+UUnmPo7zeZweX7qfx977UfWyIXZUsLLcEPhXrEt9k8lE7sZFtW2nB8HFv6uqvC+9BvwEIHAHe7Z7xGvhZ6+ELaVxfIQuSP3JK0gHSS3V8Y59Yspvf3wWUjb/Y0B56ZRzLpTVXmfhlXFMa2BkSMfuxN2NV2Ro+nTR9LbuaBpgAamSXCmQcHDuUpz7CP4fK0zOvvqp/FHDM7cRViQi3HUNhjIssB/bUmhGW1BRrjzUDc/dUaTPibCgktd9ZIvqNZK6JWRogn5f1gE+KAf+2ZyFfozJ00rZ+4W7YBkJ7BUi4uzIl2qwfss4/iIU/tT2OunavEHSE4KsJW/6LdesyEcjRU03lv41GqBpdvagpa2C5kqBZQ5Qt7PGsPbiqQizXbrXcQPL0tIZkgJG/MxC4yoPPQxOGqcOQ9atH73lJKOpkeLX7PxcBtbNYX14FLELAG1txN4zzogOP1tuMTvoTYSpBCFl7eMflZIfm2eZXnK/bk8jk=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(376002)(396003)(136003)(451199021)(36840700001)(46966006)(40470700004)(36756003)(86362001)(41300700001)(316002)(54906003)(8676002)(4326008)(966005)(70206006)(478600001)(6916009)(70586007)(82310400005)(5660300002)(40480700001)(36860700001)(2906002)(8936002)(44832011)(356005)(186003)(81166007)(82740400003)(6666004)(26005)(1076003)(336012)(2616005)(83380400001)(426003)(47076005)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:14.8314
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9cf27a4a-d6d4-4ebc-9f12-08db3ac15bf2
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR12MB5873

Rename overlay_get_target() to fdt_overlay_target_offset() and drop its
static qualifier so the function is exported.

This exposes a way to look up the target path of overlay nodes, which is
useful in many cases. For example, the Xen hypervisor needs it when
applying overlays, because Xen does further processing of the overlay
nodes, e.g. mapping resources (IRQs and IOMMUs) to other VMs and creating
SMMU page tables.

Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-Id: <1637204036-382159-2-git-send-email-fnu.vikram@xilinx.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Origin: https://github.com/dgibson/dtc 45f3d1a095dd

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/libfdt/fdt_overlay.c | 29 +++++++----------------------
 xen/common/libfdt/version.lds   |  1 +
 xen/include/xen/libfdt/libfdt.h | 18 ++++++++++++++++++
 3 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/xen/common/libfdt/fdt_overlay.c b/xen/common/libfdt/fdt_overlay.c
index 7b95e2b639..acf0c4c2a6 100644
--- a/xen/common/libfdt/fdt_overlay.c
+++ b/xen/common/libfdt/fdt_overlay.c
@@ -41,37 +41,22 @@ static uint32_t overlay_get_target_phandle(const void *fdto, int fragment)
 	return fdt32_to_cpu(*val);
 }
 
-/**
- * overlay_get_target - retrieves the offset of a fragment's target
- * @fdt: Base device tree blob
- * @fdto: Device tree overlay blob
- * @fragment: node offset of the fragment in the overlay
- * @pathp: pointer which receives the path of the target (or NULL)
- *
- * overlay_get_target() retrieves the target offset in the base
- * device tree of a fragment, no matter how the actual targeting is
- * done (through a phandle or a path)
- *
- * returns:
- *      the targeted node offset in the base device tree
- *      Negative error code on error
- */
-static int overlay_get_target(const void *fdt, const void *fdto,
-			      int fragment, char const **pathp)
+int fdt_overlay_target_offset(const void *fdt, const void *fdto,
+			      int fragment_offset, char const **pathp)
 {
 	uint32_t phandle;
 	const char *path = NULL;
 	int path_len = 0, ret;
 
 	/* Try first to do a phandle based lookup */
-	phandle = overlay_get_target_phandle(fdto, fragment);
+	phandle = overlay_get_target_phandle(fdto, fragment_offset);
 	if (phandle == (uint32_t)-1)
 		return -FDT_ERR_BADPHANDLE;
 
 	/* no phandle, try path */
 	if (!phandle) {
 		/* And then a path based lookup */
-		path = fdt_getprop(fdto, fragment, "target-path", &path_len);
+		path = fdt_getprop(fdto, fragment_offset, "target-path", &path_len);
 		if (path)
 			ret = fdt_path_offset(fdt, path);
 		else
@@ -638,7 +623,7 @@ static int overlay_merge(void *fdt, void *fdto)
 		if (overlay < 0)
 			return overlay;
 
-		target = overlay_get_target(fdt, fdto, fragment, NULL);
+		target = fdt_overlay_target_offset(fdt, fdto, fragment, NULL);
 		if (target < 0)
 			return target;
 
@@ -781,7 +766,7 @@ static int overlay_symbol_update(void *fdt, void *fdto)
 			return -FDT_ERR_BADOVERLAY;
 
 		/* get the target of the fragment */
-		ret = overlay_get_target(fdt, fdto, fragment, &target_path);
+		ret = fdt_overlay_target_offset(fdt, fdto, fragment, &target_path);
 		if (ret < 0)
 			return ret;
 		target = ret;
@@ -803,7 +788,7 @@ static int overlay_symbol_update(void *fdt, void *fdto)
 
 		if (!target_path) {
 			/* again in case setprop_placeholder changed it */
-			ret = overlay_get_target(fdt, fdto, fragment, &target_path);
+			ret = fdt_overlay_target_offset(fdt, fdto, fragment, &target_path);
 			if (ret < 0)
 				return ret;
 			target = ret;
diff --git a/xen/common/libfdt/version.lds b/xen/common/libfdt/version.lds
index 7ab85f1d9d..cbce5d4a8b 100644
--- a/xen/common/libfdt/version.lds
+++ b/xen/common/libfdt/version.lds
@@ -77,6 +77,7 @@ LIBFDT_1.2 {
 		fdt_appendprop_addrrange;
 		fdt_setprop_inplace_namelen_partial;
 		fdt_create_with_flags;
+		fdt_overlay_target_offset;
 	local:
 		*;
 };
diff --git a/xen/include/xen/libfdt/libfdt.h b/xen/include/xen/libfdt/libfdt.h
index c71689e2be..fabddbee8c 100644
--- a/xen/include/xen/libfdt/libfdt.h
+++ b/xen/include/xen/libfdt/libfdt.h
@@ -2109,6 +2109,24 @@ int fdt_del_node(void *fdt, int nodeoffset);
  */
 int fdt_overlay_apply(void *fdt, void *fdto);
 
+/**
+ * fdt_overlay_target_offset - retrieves the offset of a fragment's target
+ * @fdt: Base device tree blob
+ * @fdto: Device tree overlay blob
+ * @fragment_offset: node offset of the fragment in the overlay
+ * @pathp: pointer which receives the path of the target (or NULL)
+ *
+ * fdt_overlay_target_offset() retrieves the target offset in the base
+ * device tree of a fragment, no matter how the actual targeting is
+ * done (through a phandle or a path)
+ *
+ * returns:
+ *      the targeted node offset in the base device tree
+ *      Negative error code on error
+ */
+int fdt_overlay_target_offset(const void *fdt, const void *fdto,
+			      int fragment_offset, char const **pathp);
+
 /**********************************************************************/
 /* Debugging / informational functions                                */
 /**********************************************************************/
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519846.806960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWz-0000Cy-4q; Tue, 11 Apr 2023 19:19:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519846.806960; Tue, 11 Apr 2023 19:19:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJWz-0000Cp-1c; Tue, 11 Apr 2023 19:19:53 +0000
Received: by outflank-mailman (input) for mailman id 519846;
 Tue, 11 Apr 2023 19:19:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUb-0004DR-Au
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:25 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2060f.outbound.protection.outlook.com
 [2a01:111:f400:7eab::60f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7c43909a-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:23 +0200 (CEST)
Received: from BN8PR04CA0065.namprd04.prod.outlook.com (2603:10b6:408:d4::39)
 by MW5PR12MB5598.namprd12.prod.outlook.com (2603:10b6:303:193::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:20 +0000
Received: from BN8NAM11FT069.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:d4:cafe::d) by BN8PR04CA0065.outlook.office365.com
 (2603:10b6:408:d4::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:20 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT069.mail.protection.outlook.com (10.13.176.152) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:19 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:19 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:19 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:18 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c43909a-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kTkfvHMw4fwZfUXsGYwMWqNK9TBfbtOjNei/7m39xnNYGph1NAu8DmaU7G6RySe+rPUyKwHKLe99naA3atslI4+AtkslwbNRuNPYKKLdJofW3MMmOnpzeJGo5XiutQUte1DlFCfhFhdj87Gq49q4IuDy+mjjnfUTiAwGt/o5zZ5/X7mij/jUiRVQ/ypK4xXSK93zZTk3/8P6pzHFzO8Mz8QipY2YO5nk4KY2Qz+AsD8VOFDR84tsR6FIQhSvNDNrtx8rgi8WPHsHHO+VA4ZP2zTUMxJMFUnRfG64qCT0S3ZwpNR+Qdc+W0XXbvCxXDG3/qaB0QWPh0Jl+kbHQyfRDw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=x1PNAnqijXbZmLbaJTz/bnk2u6mRzrBSQaqCUQeSU5s=;
 b=FFOrj0o6TWzAcl90jAx+ZHpRzHGwNIn49bwTOzhpxT9xczQKuC+bOv3XbPGefV2DAUNjs8LPXlSJl7eeFxcl3Js77kcXAsjSpBo/pZ4FoAXUZluAod0dSF+4iK/AuJ9jqq+FZ5F+2w0CfPVA7EZ4T4KfIah77GdUw1DS5OspoVVRjpfVj6GivPDwnhiMA3xRShQrR80etqXS8yQrlNOZTaQGIE8hgoTJY2lTkLjnOa5fqC2hGR0lXlxwWyZmiVfUaHtO1lh3TN7b+FkSX04YJucGOjqJ0o2GNVZbQy75ZPPjLf2LMYA+6320fVpyHymJTLTV9G0wLwN/smWITJA8og==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x1PNAnqijXbZmLbaJTz/bnk2u6mRzrBSQaqCUQeSU5s=;
 b=wuntfbt/6RBi8we1bEE2MT1iJltEJu3U0H+EIy9i2cz6kFGZok1kVKid9tZJ9YA4toycLlEjO+CWZBRQ5Ip6Z+OBgVQZJU7x0141SchJob5+X1IMGwlxoqatul6fHZdmGPXqUhYG7EPBT0MeLCz1tnHGX9EHSUVraItI7oKzduE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v5 11/17] asm/smp.h: Fix circular dependency for device_tree.h and rwlock.h
Date: Tue, 11 Apr 2023 12:16:30 -0700
Message-ID: <20230411191636.26926-30-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT069:EE_|MW5PR12MB5598:EE_
X-MS-Office365-Filtering-Correlation-Id: 65b4fc63-54fd-44d5-795c-08db3ac15eef
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6qps8HEQfzg6wg3LtuU3NpVWgiVYlUptzQkTg/BZ9Gx9WDs1l6HDyz8G0SoDMCc7LRHWlB2Y195o8IxiFIGfe00rQ/Y3jE/uWYyf5hxB27tMEFxn1Fdd6XND800/0MYjrtNWAW/Ufrmiu+8+pVwS4GhBT71kAKdVlGPP0YQpQDVJdpnZQecPzZY7fnz0onN5GjRImXxLG4hsGswv9MCwizwqJSUGDHRyT37u9jW3d+zqsh58sUrcN42JdNpKAqqehq9UeUx+7oEyEg3VF0kkwlLlJCSooQC/+ltMLVfVL4683p2wWl0i3tyVPCV8kwcWlNj6jUBwmN56BB0sZRFDgNRhER5ttELBJ4ZW2773SZmdfgy5USZsnKt8GKjAULUjv+j7lw4iCNNri/oMGPQkPKBRdRRZ5Rj0VzcaJJsWKtXhIzIGNXUXDO2AuoJtI8NLFbnEI/S48bTKGPUdapAW+VBlIm/h/G70ED2XUZKdmmGkDdyxYKDk3l8+05eGdoSl962L8Vn6Cb3L7fvK5ZuzulOuPd2l8cho+UpIXAZQ3fwTJ3Of0NqHKN19SEO3kOS6bENVhoHroexpz2sOI75rMGp8cu/JSihG39ZB7hrL/WfXleR3Gc/hGE2a7d6jqMSK6Q8WHI6/L/kSKmd2JiJ/N3XGO02+YbtfE3pOn2KgsnxTSMlHFHGilsOflgK3M9qWL5Q3XQCshZtJ5/VRIvbzztT0cU2B2oSekLyp1Hqb1Q0=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(136003)(346002)(39860400002)(451199021)(36840700001)(46966006)(40470700004)(36756003)(44832011)(82310400005)(40460700003)(2906002)(5660300002)(8936002)(86362001)(40480700001)(8676002)(26005)(6666004)(1076003)(36860700001)(54906003)(478600001)(2616005)(47076005)(336012)(426003)(83380400001)(186003)(70206006)(70586007)(81166007)(82740400003)(41300700001)(356005)(4326008)(6916009)(316002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:19.8440
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 65b4fc63-54fd-44d5-795c-08db3ac15eef
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT069.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW5PR12MB5598

Dynamic programming ops will modify dt_host, and other functions might be
browsing dt_host at the same time. To avoid race conditions, add an rwlock
for browsing dt_host. But adding the rwlock in device_tree.h causes the
following circular dependency:
    device_tree.h->rwlock.h->smp.h->asm/smp.h->device_tree.h

To fix this, remove the "#include <xen/device_tree.h>" and forward-declare
"struct dt_device_node".

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/arch/arm/include/asm/smp.h | 3 ++-
 xen/arch/arm/smpboot.c         | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h
index 8133d5c295..afe6129276 100644
--- a/xen/arch/arm/include/asm/smp.h
+++ b/xen/arch/arm/include/asm/smp.h
@@ -3,13 +3,14 @@
 
 #ifndef __ASSEMBLY__
 #include <xen/cpumask.h>
-#include <xen/device_tree.h>
 #include <asm/current.h>
 #endif
 
 DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask);
 DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 
+struct dt_device_node;
+
 #define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))
 
 #define smp_processor_id() get_processor_id()
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 412ae22869..336a7d418b 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -11,6 +11,7 @@
 #include <xen/cpumask.h>
 #include <xen/delay.h>
 #include <xen/domain_page.h>
+#include <xen/device_tree.h>
 #include <xen/errno.h>
 #include <xen/init.h>
 #include <xen/mm.h>
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519849.806970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJX1-0000ZH-HL; Tue, 11 Apr 2023 19:19:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519849.806970; Tue, 11 Apr 2023 19:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJX1-0000Yx-Am; Tue, 11 Apr 2023 19:19:55 +0000
Received: by outflank-mailman (input) for mailman id 519849;
 Tue, 11 Apr 2023 19:19:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUf-0004Ta-7y
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:29 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on20620.outbound.protection.outlook.com
 [2a01:111:f400:7eb2::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7f308f27-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:28 +0200 (CEST)
Received: from MW4PR03CA0028.namprd03.prod.outlook.com (2603:10b6:303:8f::33)
 by CH3PR12MB8996.namprd12.prod.outlook.com (2603:10b6:610:170::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:24 +0000
Received: from CO1NAM11FT048.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:8f:cafe::e) by MW4PR03CA0028.outlook.office365.com
 (2603:10b6:303:8f::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:23 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT048.mail.protection.outlook.com (10.13.175.148) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:23 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:20 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:20 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:19 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f308f27-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=THcWC/giHtyMgrprcuU0OA3pwHxXNMdUgtgLjl6l1LPlqw6WBVt+khXjjWFhmDAgzqUF7UiMdv4dI7KWtb5aoVLa/2a1Y+jNT54qVZqUfaryLTfy3bcXNi4WS7eBXNJQuMazFGshN/GyCG3iZjEDyRotCMcyl8OIon1WyMhHg3d3vSGUw7gPuRUi6ZhWQ8rrE8c/S0AnJjHAENIq/CR2R77j2thE+wXaZMe3J2JXbPoL7aTEG1RQInihHjKdcDSKDJ37nDlLMAnlAfv6ruLrQhgvrW/w4t+/8YX2QUEsp20mea+4gQ/4C9/BJmUWKVscO92M0C/s1VhjpNku0TiJ+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zOQW4Omju1dTUno8ADGoDg+I28lOLIV01naa3ZWrW1o=;
 b=Imta5BSur4SKHpXK1OWAg6ScqRx0I/4QXi5mKvt+7uUc4PMTlrEaF+jP8s2QdCGY3pC2xjOm9jxiv/buyrsEa6FtMTjHl5AAEsMQhlXhltPJXnRQgqUMILcl8g0Q+GKiRz0imGZl257O4n9t9aPcExhjDLkTorHq7p5aL1iMQdjfAnb5iXP8Eyoo4Tr9D7V1xSoxINEhabj8mGrEd2xiU+HEL+/AiZADEq7U5y63LFYzH1pA3M5wx5WcX+fGNnJFMng58wqLLP2z0AQ16WAaoUsa1xOveF6d9rojy4kmPJV9c5NwSpaPoiV4h391VR63Gwy/Mhd9G3UsDszCym8MjQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zOQW4Omju1dTUno8ADGoDg+I28lOLIV01naa3ZWrW1o=;
 b=bVqMaj/2byuHZk9KB58ch3tZUaCTDN6A8dmi21o0MZKHRB5YR1KnXT41/CI674SqlxxK0rfv06M811p4s2tWNoY1hIV3e0C1gugQoMlb6HPEbuMlMZlO0SSLbZL2uzlxhMaMLJrfz0I6wimh4GqV1YGKtSiADK/XtU1PLmOxWl8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 12/17] common/device_tree: Add rwlock for dt_host
Date: Tue, 11 Apr 2023 12:16:31 -0700
Message-ID: <20230411191636.26926-31-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT048:EE_|CH3PR12MB8996:EE_
X-MS-Office365-Filtering-Correlation-Id: a4eb61d8-5ff6-40fc-bc5e-08db3ac16151
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	HiPueW2wg3jaIMtd+UEEbXnwFPVrbmrZrGMl75Sfc/2DQnY7SCJkWLuxxLohuemFBARCMfBdCO3U3IKWJOCpKJWbUHFo+lZKF72rRQ7WmHKrCm1nrPV6jU+39VpsKIH783g/qXfZ+K3ys6k/BoC6AbXcd+5EAO5kuDLUKnEbxa5F1k9w4aYebEYTImOag5DUi50X8oVWYwtC0a9bzzVs3n+dhyRWokbof6EQ2XFh+BewN+4alARJcaE8xKzioSXJ9N9UZPKmA3E0SwyqRaZwKi+nl+aFHCaRQoHKW8VmEPfa4lTKVyjhWBQqHjOYmJ5QlKWqPwZQ0ISt1xX6o/1rkW14LZoZPq0kuLNJQ1T9yV4ORenDnYbpjb4/RjLm4gYCHYrdVBObuS/Wo1iGmXMnSQvDA8/Ou9BOAiNvanNo/YWa810hFcOc1vXjS8Zk0YcZFCWQvUHgKn0/NY2JU84vZa+7+HrnLw18HEXqYn3olKpfUyR7OBPzirk2NkSOj3DQLJEjaU8xPB8wyK99m+x2zdYg4rCEj04nDf8IR1hTzrrLXNJIaG4vRYDxhYiBWJWnTavnTYirvpBL0pXGIvTBjuRyGwOe+aezVRNvoS7B3azuwZUVecMvdphTkQbLzOHcF27V5DSQ3qsWVZW7pVi+ePwvhN/npoFLtFZs4P3gtG4TRrxykQl4smaZOcPB/KmFBWmxubxnDS6D2+hoJRabj0hE7dyjPEeZ73+6TLpsBp8=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(346002)(396003)(136003)(451199021)(40470700004)(36840700001)(46966006)(5660300002)(40460700003)(44832011)(8936002)(26005)(1076003)(40480700001)(6666004)(83380400001)(2906002)(47076005)(36756003)(186003)(426003)(2616005)(336012)(356005)(81166007)(82740400003)(86362001)(82310400005)(36860700001)(478600001)(41300700001)(6916009)(4326008)(70206006)(8676002)(70586007)(316002)(54906003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:23.7496
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a4eb61d8-5ff6-40fc-bc5e-08db3ac16151
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT048.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8996

Dynamic programming ops will modify dt_host, and other functions might be
browsing dt_host at the same time. To avoid race conditions, add an rwlock
for browsing dt_host during runtime.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/device_tree.c              |  3 +++
 xen/drivers/passthrough/device_tree.c | 39 +++++++++++++++++++++++++++
 xen/include/xen/device_tree.h         |  6 +++++
 3 files changed, 48 insertions(+)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 507b4ac5b6..b330e6a013 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -2098,6 +2098,9 @@ void unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
     *allnextp = NULL;
 
     dt_dprintk(" <- unflatten_device_tree()\n");
+
+    /* Init r/w lock for host device tree. */
+    rwlock_init(&dt_host->lock);
 }
 
 static void dt_alias_add(struct dt_alias_prop *ap,
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index a77a217f3d..74f994b9da 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -112,6 +112,8 @@ int iommu_release_dt_devices(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return 0;
 
+    read_lock(&dt_host->lock);
+
     list_for_each_entry_safe(dev, _dev, &hd->dt_devices, domain_list)
     {
         rc = iommu_deassign_dt_device(d, dev);
@@ -119,10 +121,14 @@ int iommu_release_dt_devices(struct domain *d)
         {
             dprintk(XENLOG_ERR, "Failed to deassign %s in domain %u\n",
                     dt_node_full_name(dev), d->domain_id);
+
+            read_unlock(&dt_host->lock);
             return rc;
         }
     }
 
+    read_unlock(&dt_host->lock);
+
     return 0;
 }
 
@@ -259,11 +265,15 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
 
         spin_lock(&dtdevs_lock);
 
+        read_lock(&dt_host->lock);
+
         ret = dt_find_node_by_gpath(domctl->u.assign_device.u.dt.path,
                                     domctl->u.assign_device.u.dt.size,
                                     &dev);
         if ( ret )
         {
+            read_unlock(&dt_host->lock);
+
             spin_unlock(&dtdevs_lock);
 
             break;
@@ -272,6 +282,8 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         ret = xsm_assign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
         if ( ret )
         {
+            read_unlock(&dt_host->lock);
+
             spin_unlock(&dtdevs_lock);
 
             break;
@@ -287,6 +299,8 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
                 ret = -EINVAL;
             }
 
+            read_unlock(&dt_host->lock);
+
             spin_unlock(&dtdevs_lock);
 
             break;
@@ -295,22 +309,31 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         spin_unlock(&dtdevs_lock);
 
         if ( d == dom_io )
+        {
+            read_unlock(&dt_host->lock);
             return -EINVAL;
+        }
 
         ret = iommu_add_dt_device(dev);
         if ( ret < 0 )
         {
             printk(XENLOG_G_ERR "Failed to add %s to the IOMMU\n",
                    dt_node_full_name(dev));
+
+            read_unlock(&dt_host->lock);
             break;
         }
 
         ret = iommu_assign_dt_device(d, dev);
 
         if ( ret )
+        {
             printk(XENLOG_G_ERR "XEN_DOMCTL_assign_dt_device: assign \"%s\""
                    " to dom%u failed (%d)\n",
                    dt_node_full_name(dev), d->domain_id, ret);
+        }
+
+        read_unlock(&dt_host->lock);
         break;
 
     case XEN_DOMCTL_deassign_device:
@@ -322,25 +345,41 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( domctl->u.assign_device.flags )
             break;
 
+        read_lock(&dt_host->lock);
+
         ret = dt_find_node_by_gpath(domctl->u.assign_device.u.dt.path,
                                     domctl->u.assign_device.u.dt.size,
                                     &dev);
         if ( ret )
+        {
+            read_unlock(&dt_host->lock);
             break;
+        }
 
         ret = xsm_deassign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
+
         if ( ret )
+        {
+            read_unlock(&dt_host->lock);
             break;
+        }
 
         if ( d == dom_io )
+        {
+            read_unlock(&dt_host->lock);
             return -EINVAL;
+        }
 
         ret = iommu_deassign_dt_device(d, dev);
 
         if ( ret )
+        {
             printk(XENLOG_G_ERR "XEN_DOMCTL_assign_dt_device: assign \"%s\""
                    " to dom%u failed (%d)\n",
                    dt_node_full_name(dev), d->domain_id, ret);
+        }
+
+        read_unlock(&dt_host->lock);
         break;
 
     default:
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 998f972ebc..b7272bbccc 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -18,6 +18,7 @@
 #include <xen/string.h>
 #include <xen/types.h>
 #include <xen/list.h>
+#include <xen/rwlock.h>
 
 #define DEVICE_TREE_MAX_DEPTH 16
 
@@ -106,6 +107,11 @@ struct dt_device_node {
     struct list_head domain_list;
 
     struct device dev;
+
+    /*
+     * Lock that protects r/w updates to unflattened device tree i.e. dt_host.
+     */
+    rwlock_t lock;
 };
 
 #define dt_to_dev(dt_node)  (&(dt_node)->dev)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519853.806975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJX2-0000fQ-6Z; Tue, 11 Apr 2023 19:19:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519853.806975; Tue, 11 Apr 2023 19:19:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJX1-0000da-S4; Tue, 11 Apr 2023 19:19:55 +0000
Received: by outflank-mailman (input) for mailman id 519853;
 Tue, 11 Apr 2023 19:19:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUa-0004Ta-1x
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:24 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20601.outbound.protection.outlook.com
 [2a01:111:f400:7eae::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7b2923f8-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:21 +0200 (CEST)
Received: from BN8PR04CA0036.namprd04.prod.outlook.com (2603:10b6:408:70::49)
 by BY5PR12MB4259.namprd12.prod.outlook.com (2603:10b6:a03:202::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.36; Tue, 11 Apr
 2023 19:17:17 +0000
Received: from BN8NAM11FT010.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:70:cafe::e1) by BN8PR04CA0036.outlook.office365.com
 (2603:10b6:408:70::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:17 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT010.mail.protection.outlook.com (10.13.177.53) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:17 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:16 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:16 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:16 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b2923f8-d89d-11ed-b21e-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 08/17] xen/iommu: Move spin_lock from iommu_dt_device_is_assigned to caller
Date: Tue, 11 Apr 2023 12:16:27 -0700
Message-ID: <20230411191636.26926-27-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT010:EE_|BY5PR12MB4259:EE_
X-MS-Office365-Filtering-Correlation-Id: 637928ec-8ec6-4807-adc7-08db3ac15d71
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:17.3574
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 637928ec-8ec6-4807-adc7-08db3ac15d71
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT010.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4259

Rename iommu_dt_device_is_assigned() to iommu_dt_device_is_assigned_locked().

The spin_lock is moved to the caller to prevent concurrent access to
iommu_dt_device_is_assigned() while doing add/remove/assign/deassign.
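
The refactoring pattern, a helper that assumes the lock is already held while
the caller takes it once around the whole sequence, can be sketched in plain C
(a pthread mutex stands in for dtdevs_lock; all names are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t dtdevs_lock_sim = PTHREAD_MUTEX_INITIALIZER;
static bool device_assigned;

/* The _locked suffix documents the precondition: caller holds dtdevs_lock. */
static bool device_is_assigned_locked(void)
{
    return device_assigned;
}

/* Caller takes the lock once, so lookup and check are atomic as a pair. */
static bool test_assign_device(void)
{
    bool assigned;

    pthread_mutex_lock(&dtdevs_lock_sim);
    /* ... dt_find_node_by_gpath() would run here under the same lock ... */
    assigned = device_is_assigned_locked();
    pthread_mutex_unlock(&dtdevs_lock_sim);

    return assigned;
}
```

With the lock in the helper itself, the node could be reassigned between the
lookup and the check; hoisting the lock to the caller closes that window.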

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/drivers/passthrough/device_tree.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 1c32d7b50c..bb4cf7784d 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -83,16 +83,15 @@ fail:
     return rc;
 }
 
-static bool_t iommu_dt_device_is_assigned(const struct dt_device_node *dev)
+static bool_t
+    iommu_dt_device_is_assigned_locked(const struct dt_device_node *dev)
 {
     bool_t assigned = 0;
 
     if ( !dt_device_is_protected(dev) )
         return 0;
 
-    spin_lock(&dtdevs_lock);
     assigned = !list_empty(&dev->domain_list);
-    spin_unlock(&dtdevs_lock);
 
     return assigned;
 }
@@ -213,27 +212,43 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( (d && d->is_dying) || domctl->u.assign_device.flags )
             break;
 
+        spin_lock(&dtdevs_lock);
+
         ret = dt_find_node_by_gpath(domctl->u.assign_device.u.dt.path,
                                     domctl->u.assign_device.u.dt.size,
                                     &dev);
         if ( ret )
+        {
+            spin_unlock(&dtdevs_lock);
+
             break;
+        }
 
         ret = xsm_assign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
         if ( ret )
+        {
+            spin_unlock(&dtdevs_lock);
+
             break;
+        }
 
         if ( domctl->cmd == XEN_DOMCTL_test_assign_device )
         {
-            if ( iommu_dt_device_is_assigned(dev) )
+
+            if ( iommu_dt_device_is_assigned_locked(dev) )
             {
                 printk(XENLOG_G_ERR "%s already assigned.\n",
                        dt_node_full_name(dev));
                 ret = -EINVAL;
             }
+
+            spin_unlock(&dtdevs_lock);
+
             break;
         }
 
+        spin_unlock(&dtdevs_lock);
+
         if ( d == dom_io )
             return -EINVAL;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519859.806988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJX3-00012Q-Ha; Tue, 11 Apr 2023 19:19:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519859.806988; Tue, 11 Apr 2023 19:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJX3-00011E-BM; Tue, 11 Apr 2023 19:19:57 +0000
Received: by outflank-mailman (input) for mailman id 519859;
 Tue, 11 Apr 2023 19:19:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUV-0004Ta-0W
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:19 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on20609.outbound.protection.outlook.com
 [2a01:111:f400:7e8b::609])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 77e9e886-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:16 +0200 (CEST)
Received: from MW4PR04CA0265.namprd04.prod.outlook.com (2603:10b6:303:88::30)
 by MW4PR12MB6826.namprd12.prod.outlook.com (2603:10b6:303:20c::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 11 Apr
 2023 19:17:11 +0000
Received: from CO1NAM11FT019.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:88:cafe::78) by MW4PR04CA0265.outlook.office365.com
 (2603:10b6:303:88::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:11 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT019.mail.protection.outlook.com (10.13.175.57) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:10 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:09 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:08 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77e9e886-d89d-11ed-b21e-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Wei Liu <wl@xen.org>, Rahul Singh
	<rahul.singh@arm.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>
Subject: [XEN][PATCH v5 00/17] dynamic node programming using overlay dtbo
Date: Tue, 11 Apr 2023 12:16:19 -0700
Message-ID: <20230411191636.26926-19-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT019:EE_|MW4PR12MB6826:EE_
X-MS-Office365-Filtering-Correlation-Id: b9d2ea11-1fd7-4bdc-b624-08db3ac1598d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:10.7209
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b9d2ea11-1fd7-4bdc-b624-08db3ac1598d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT019.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB6826

Hi,
The main purpose of this series is to address the first part of dynamic
programming, i.e. making Xen aware of new device tree nodes, which means
updating dt_host with the overlay node information. Here we add/remove nodes
from dt_host and check/set IOMMU and IRQ permissions, but never map them to
any domain. Right now, mapping/unmapping happens only when a new domU is
created/destroyed using "xl create".

For adding a node using dynamic programming:
    1. The flattened device tree overlay node is added to an fdt.
    2. The updated fdt is unflattened into a new dt_host_new.
    3. The newly added node information is extracted from dt_host_new.
    4. The new node is added under the correct parent in the original dt_host.

For removing a node:
    1. Find the node with the given path.
    2. Check whether the node is used by any domU; remove the node only when
        it is not used by any domain.
    3. Remove IRQ permissions and MMIO access.
    4. Find the node in dt_host and delete the device node entry from dt_host.
    5. Free the overlay_tracker entry, which also frees dt_host_new (created
        in the adding-node step).
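
The add/remove bookkeeping can be sketched as a tracker list (illustrative C
only: overlay_track, its field names, and the example paths are hypothetical
stand-ins for the real structures in dt_overlay.c):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* One entry per applied overlay: remembers what to undo on removal. */
struct overlay_track {
    char path[64];               /* node path used for lookup on removal */
    void *dt_host_new;           /* unflattened copy, freed with the entry */
    struct overlay_track *next;
};

static struct overlay_track *tracker_head;

/* Adding: record the new node and the dt_host_new it came from. */
static int overlay_add(const char *path)
{
    struct overlay_track *t = calloc(1, sizeof(*t));

    if ( !t )
        return -1;
    strncpy(t->path, path, sizeof(t->path) - 1);
    t->dt_host_new = malloc(1);  /* stand-in for the unflattened overlay */
    t->next = tracker_head;
    tracker_head = t;

    return 0;
}

/* Removing: find the entry by path, then free it together with dt_host_new. */
static int overlay_remove(const char *path)
{
    struct overlay_track **pp;

    for ( pp = &tracker_head; *pp; pp = &(*pp)->next )
    {
        if ( strcmp((*pp)->path, path) == 0 )
        {
            struct overlay_track *t = *pp;

            *pp = t->next;
            free(t->dt_host_new);
            free(t);

            return 0;
        }
    }

    return -1;                   /* node was never added via an overlay */
}
```

Removing an entry that was never tracked fails cleanly, which mirrors the
series' rule that only overlay-added nodes can be removed.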

To map IOREQ and IOMMU at runtime, another small series will follow this one;
it will do the actual IOMMU and IRQ mapping to a running domain and will call
unmap_mmio_regions() to remove the mapping.

Change Log:
 v4 -> v5:
    Split patch 01/16 to two patches. One with function type changes and another
        with changes inside unflatten_device_tree().
    Change dt_overlay xl command to dt-overlay.
    Protect overlay functionality with CONFIG(arm).
    Fix rwlock issues.
    Move include "device_tree.h" to c file where arch_cpu_init() is called and
        forward declare dt_device_node. This was done to avoid circular deps b/w
        device_tree.h and rwlock.h
    Address Michal's comment on coding style.

 v3 -> v4:
    Add support for adding node's children.
    Add rwlock to dt_host functions.
    Corrected fdt size issue when applying overlay into it.
    Add memory allocation fail handling for unflatten_device_tree().
    Changed xl overlay to xl dt_overlay.
    Correct commit messages.
    Addressed code issue from v3 review.

 v2 -> v3:
    Moved overlay functionalities to dt_overlay.c file.
    Renamed XEN_SYSCTL_overlay to XEN_SYSCTL_dt_overlay.
    Add dt_* prefix to overlay_add/remove_nodes.
    Added dtdevs_lock to protect iommu_add_dt_device().
    For iommu, moved spin_lock to caller.
    Address code issue from v2 review.

 v1 -> v2:
    Add support for multiple node addition/removal using dtbo.
    Replaced fpga-add and fpga-remove with one hypercall overlay_op.
    Moved common domain_build.c function to device.c
    Add OVERLAY_DTB configuration.
    Renamed overlay_get_target() to fdt_overlay_get_target().
    Split remove_device patch into two patches.
    Moved overlay_add/remove code to sysctl and changed it from domctl to sysctl.
    Added all overlay code under CONFIG_OVERLAY_DTB
    Renamed all tool domains fpga function to overlay
    Addressed code issues from v1 review.

Regards,
Vikram



Vikram Garhwal (17):
  xen/arm/device: Remove __init from function type
  common/device_tree: change __unflatten_device_tree()
  xen/arm: Add CONFIG_OVERLAY_DTB
  libfdt: Keep fdt functions after init for CONFIG_OVERLAY_DTB.
  libfdt: overlay: change overlay_get_target()
  xen/device-tree: Add device_tree_find_node_by_path() to find nodes in
    device tree
  xen/smmu: Add remove_device callback for smmu_iommu ops
  xen/iommu: Move spin_lock from iommu_dt_device_is_assigned to caller
  xen/iommu: protect iommu_add_dt_device() with dtdevs_lock
  xen/iommu: Introduce iommu_remove_dt_device()
  asm/smp.h: Fix circular dependency for device_tree.h and rwlock.h
  common/device_tree: Add rwlock for dt_host
  xen/arm: Implement device tree node removal functionalities
  xen/arm: Implement device tree node addition functionalities
  tools/libs/ctrl: Implement new xc interfaces for dt overlay
  tools/libs/light: Implement new libxl functions for device tree
    overlay ops
  tools/xl: Add new xl command overlay for device tree overlay support

 SUPPORT.md                              |   6 +
 tools/include/libxl.h                   |  11 +
 tools/include/xenctrl.h                 |   5 +
 tools/libs/ctrl/Makefile.common         |   1 +
 tools/libs/ctrl/xc_dt_overlay.c         |  48 ++
 tools/libs/light/Makefile               |   3 +
 tools/libs/light/libxl_dt_overlay.c     |  71 ++
 tools/xl/xl.h                           |   1 +
 tools/xl/xl_cmdtable.c                  |   6 +
 tools/xl/xl_vmcontrol.c                 |  52 ++
 xen/arch/arm/Kconfig                    |   5 +
 xen/arch/arm/device.c                   | 145 ++++
 xen/arch/arm/domain_build.c             | 142 ----
 xen/arch/arm/include/asm/domain_build.h |   2 -
 xen/arch/arm/include/asm/setup.h        |   6 +
 xen/arch/arm/include/asm/smp.h          |   3 +-
 xen/arch/arm/smpboot.c                  |   1 +
 xen/arch/arm/sysctl.c                   |  16 +-
 xen/common/Makefile                     |   1 +
 xen/common/device_tree.c                |  30 +-
 xen/common/dt_overlay.c                 | 897 ++++++++++++++++++++++++
 xen/common/libfdt/Makefile              |   4 +
 xen/common/libfdt/fdt_overlay.c         |  29 +-
 xen/common/libfdt/version.lds           |   1 +
 xen/drivers/passthrough/arm/smmu.c      |  56 ++
 xen/drivers/passthrough/device_tree.c   | 109 ++-
 xen/include/public/sysctl.h             |  24 +
 xen/include/xen/device_tree.h           |  28 +-
 xen/include/xen/dt_overlay.h            |  59 ++
 xen/include/xen/iommu.h                 |   2 +
 xen/include/xen/libfdt/libfdt.h         |  18 +
 31 files changed, 1595 insertions(+), 187 deletions(-)
 create mode 100644 tools/libs/ctrl/xc_dt_overlay.c
 create mode 100644 tools/libs/light/libxl_dt_overlay.c
 create mode 100644 xen/common/dt_overlay.c
 create mode 100644 xen/include/xen/dt_overlay.h

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:19:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:19:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519861.806993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJX4-00019e-2w; Tue, 11 Apr 2023 19:19:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519861.806993; Tue, 11 Apr 2023 19:19:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJX3-00017T-Q5; Tue, 11 Apr 2023 19:19:57 +0000
Received: by outflank-mailman (input) for mailman id 519861;
 Tue, 11 Apr 2023 19:19:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUR-0004Ta-Vq
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:15 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20615.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::615])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 76edfa86-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:14 +0200 (CEST)
Received: from BN9P221CA0020.NAMP221.PROD.OUTLOOK.COM (2603:10b6:408:10a::32)
 by BY5PR12MB4322.namprd12.prod.outlook.com (2603:10b6:a03:20a::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:11 +0000
Received: from BN8NAM11FT113.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10a:cafe::eb) by BN9P221CA0020.outlook.office365.com
 (2603:10b6:408:10a::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:10 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT113.mail.protection.outlook.com (10.13.176.163) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.29 via Frontend Transport; Tue, 11 Apr 2023 19:17:10 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:07 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:07 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:07 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76edfa86-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=E8b/gdsp9EhJpwqEqWnSuieF6eymTrNe7CZCQIhNlXPnI9rbzaQ9wvtKtsjf8kP5i3R8URY/xdnXA4z5xpo6gHBEMWaxZykiVSxkS5IX9GPzCVHxrsWjarIu4i2Gai1OZW1LhqjKfQ3613hbwJtVwcHRkvsqrjZQMhmB1zr+3fL7NYtkYjn/O/lE3EtnFmvqnRpqZjW2HhvCO8FLrJFTjXGgjdfpCxF+iPM1wo6kyd6vUyRKgOAiYIQGCBzoEf0Qa6jDfyC4ZMVRSiSGssuo3vgTR2fcrzTK+5aDDiUAsT2QnfwW7VVz06WpZOzG8MrZh1Rzisa1hHvAHTmyd6+Ukw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=91E2KEtRfiC4xks88DcPT08G46pT034ykx3UwRRlezk=;
 b=nxIqwyEEBoGjZ2Kuiet6/zxQ1rXzFTD9+Tb8gozYJ/K69DjIujomDliSsWjN6KFcbMnN2aGkwTczNRgYS6wjhAw1GVWvCHbB877FDdlw/Z7ryiq0f0E0TkFr6zwbr6c+9/31vEwBGNPKAdGpMRl/1MRnkUrrkz2ExOW5z/Z0zUwocZDZe/bCYvRT+gZkSJQEFXS9Co2Di9Fi9PdJNEPQh4Q9FWsxvw/v6dOeJyMpOK92Qi76t4Up2UXXyPumbKQtuUsjFySgj4hkfiGchNgMjtqJzNIDEQ6BlcddYezacJLktwxN2ZYOhv3OVqVhnjwCzVsVvyy5irVapGD5gnheKQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=91E2KEtRfiC4xks88DcPT08G46pT034ykx3UwRRlezk=;
 b=Mlr/+9OeakKs1WY2xZ18xA0mf8IJHjFeJSeRCQ20Q0zD8LVeRq1TGn30xKGU8oa4JxImKkmgn4JGZaMnjvON9eZpaV/sVxo6PY/uIkYeMQfwhCSOKw8PW/xN+uEztqYsTT2xIFp0708WEh0TgXAcyESUMJ/qgIrlxY3WqsNsi6o=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [XEN][PATCH v5 16/17] tools/libs/light: Implement new libxl functions for device tree overlay ops
Date: Tue, 11 Apr 2023 12:16:17 -0700
Message-ID: <20230411191636.26926-17-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT113:EE_|BY5PR12MB4322:EE_
X-MS-Office365-Filtering-Correlation-Id: f399bca7-e537-444f-ca75-08db3ac15925
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	K+t5V2+TAUOZSEpGSwWXK//7mVQ+/eyBcdLVEKdtEN4WY/OiGfcf1/gyhYIYcIO+xc2Z7SKl80+536Xeh5lp8rzwgmEG+EG1jTi+Dc0f7A5bykEp6HZNuGr4lWbKv/EtVCJdoFEo8wFwiEixU7By7P0fVXx77hwcPYNUCbnNTvhi4G3SB0PBW/wO19rtokte+XM11+gBEn3zZLSuFoIrdn97DHMFYsLdHHXCNCittTwO1WBOKmJPPZWZG6KA7wHnG8RYBi22flUwZQtwFON5Hv6xc2GS+LmOAlQsq3MSkv6Et51WAhgS6vECFMCqK+g7NSO5gvOm5emZJmtZTbipCt/XJ/VfOBh4Oa0djPeXy7z7gAgQqYxYVbz7N0Nuv3zzlGhZGDoQHxYkDAj4MNPHjlLD2z2CO2u/I8yj5k9ensKm4kjHEvBsvwLu2yiDG2oOYcvP7DfwpkT3fzXLKr1X6wKEXFe6tZv9rv5pz9WAzKXzusNAXLtYbtLK98MfTMptlzToh300Tn8FXSC3hHdECNcZfnPYVizzo/ogVgkdudl1bnIT1gI2jcjL7rKGZEXwY6i1IcLpLD37kdpYHv8szq26znZyba7r1Ca3/pdwXujCrw2C6froyU9xQXa4UdpI7Fh86Axdin70rTa6Hnd/Uv7KoVvjilC8Mzrj+v/lshuDw2ue+66wdjmcpghqxa2/+reQkMaVbByvXO+5tG6+1jiyC12STT/1S0UlEImC/76aPg8EpW7OH3jfEVSOg3NG
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(376002)(346002)(39860400002)(396003)(451199021)(36840700001)(46966006)(40470700004)(8936002)(5660300002)(44832011)(82310400005)(41300700001)(70206006)(70586007)(40460700003)(4326008)(8676002)(6916009)(316002)(2906002)(478600001)(54906003)(6666004)(36756003)(86362001)(40480700001)(1076003)(26005)(2616005)(336012)(426003)(186003)(356005)(81166007)(82740400003)(47076005)(36860700001)(2004002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:10.1336
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f399bca7-e537-444f-ca75-08db3ac15925
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT113.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4322

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 tools/include/libxl.h               | 11 +++++
 tools/libs/light/Makefile           |  3 ++
 tools/libs/light/libxl_dt_overlay.c | 71 +++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+)
 create mode 100644 tools/libs/light/libxl_dt_overlay.c

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index cfa1a19131..1c5e8abaae 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -250,6 +250,12 @@
  */
 #define LIBXL_HAVE_DEVICETREE_PASSTHROUGH 1
 
+#if defined(__arm__) || defined(__aarch64__)
+/**
+ * This means Device Tree Overlay is supported.
+ */
+#define LIBXL_HAVE_DT_OVERLAY 1
+#endif
 /*
  * libxl_domain_build_info has device_model_user to specify the user to
  * run the device model with. See docs/misc/qemu-deprivilege.txt.
@@ -2453,6 +2459,11 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid,
                                         int *num);
 void libxl_device_pci_list_free(libxl_device_pci* list, int num);
 
+#if defined(__arm__) || defined(__aarch64__)
+int libxl_dt_overlay(libxl_ctx *ctx, void *overlay,
+                     uint32_t overlay_size, uint8_t overlay_op);
+#endif
+
 /*
  * Turns the current process into a backend device service daemon
  * for a driver domain.
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 96daeabc47..563a1e8d0a 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -112,6 +112,9 @@ OBJS-y += _libxl_types.o
 OBJS-y += libxl_flask.o
 OBJS-y += _libxl_types_internal.o
 
+# Device tree overlay is enabled only for ARM architecture.
+OBJS-$(CONFIG_ARM) += libxl_dt_overlay.o
+
 ifeq ($(CONFIG_LIBNL),y)
 CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
 endif
diff --git a/tools/libs/light/libxl_dt_overlay.c b/tools/libs/light/libxl_dt_overlay.c
new file mode 100644
index 0000000000..a6c709a6dc
--- /dev/null
+++ b/tools/libs/light/libxl_dt_overlay.c
@@ -0,0 +1,71 @@
+/*
+ * Copyright (C) 2021 Xilinx Inc.
+ * Author Vikram Garhwal <fnu.vikram@xilinx.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only, with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include "libxl_internal.h"
+#include <libfdt.h>
+#include <xenctrl.h>
+
+static int check_overlay_fdt(libxl__gc *gc, void *fdt, size_t size)
+{
+    int r;
+
+    if (fdt_magic(fdt) != FDT_MAGIC) {
+        LOG(ERROR, "Overlay FDT is not a valid Flat Device Tree");
+        return ERROR_FAIL;
+    }
+
+    r = fdt_check_header(fdt);
+    if (r) {
+        LOG(ERROR, "Failed to check the overlay FDT (%d)", r);
+        return ERROR_FAIL;
+    }
+
+    if (fdt_totalsize(fdt) > size) {
+        LOG(ERROR, "Overlay FDT totalsize is too big");
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+int libxl_dt_overlay(libxl_ctx *ctx, void *overlay_dt, uint32_t overlay_dt_size,
+                     uint8_t overlay_op)
+{
+    int rc;
+    int r;
+    GC_INIT(ctx);
+
+    if (check_overlay_fdt(gc, overlay_dt, overlay_dt_size)) {
+        LOG(ERROR, "Overlay DTB check failed");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    LOG(DEBUG, "Overlay DTB check passed");
+    rc = 0;
+
+    r = xc_dt_overlay(ctx->xch, overlay_dt, overlay_dt_size, overlay_op);
+    if (r) {
+        LOG(ERROR, "%s: applying/removing overlay DTB failed (%d)",
+            __func__, r);
+        rc = ERROR_FAIL;
+    }
+
+out:
+    GC_FREE;
+    return rc;
+}
+
-- 
2.17.1




From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:20:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:20:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519874.807016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJX8-0001zM-Ch; Tue, 11 Apr 2023 19:20:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519874.807016; Tue, 11 Apr 2023 19:20:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJX8-0001y6-0c; Tue, 11 Apr 2023 19:20:02 +0000
Received: by outflank-mailman (input) for mailman id 519874;
 Tue, 11 Apr 2023 19:20:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUY-0004Ta-1E
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:22 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20613.outbound.protection.outlook.com
 [2a01:111:f400:7e89::613])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7a4ad9dd-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:20 +0200 (CEST)
Received: from BN8PR04CA0029.namprd04.prod.outlook.com (2603:10b6:408:70::42)
 by PH0PR12MB8029.namprd12.prod.outlook.com (2603:10b6:510:26c::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:17 +0000
Received: from BN8NAM11FT010.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:70:cafe::9a) by BN8PR04CA0029.outlook.office365.com
 (2603:10b6:408:70::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:16 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT010.mail.protection.outlook.com (10.13.177.53) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:16 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:15 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:15 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a4ad9dd-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BzNQr2SLdVAdJZYxI0PgizBtvdz3yZFN10NELXNrTWTkU1yT4d7zyT2qsaTk80El/bDeQAg3eLkaD6zK0YlyCxErvuB/hbDhLaivuG8jpOszKOszktekyI3kO8ueiyKjr7s5QNyI2rzQzJ+sXp6OSRorFiWNigWWaprbhjv3b/+vafq6IZpGAMksC4MW+YAg9Fmg7pJujgMRO7dKME+DIDnkp+Bnh95+Xlx/BShiu0fJ+tZBeSBo6n3ILgNw9CSpRaEAPUvaAtlMjZmoc8vFZ6MU0sGKeeJsfHT5Ho+zmWY6stgP0Bn6XxUhiF7gsBD+qrQwW71ya+Icczl+7tRoQA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SvjprfsFjT2q+G0BBsMISOBPht18qx5Ov6rcUISgUjE=;
 b=nkBUXiMU6lIQQwpdxAbfbWFCZ6TYiWvoM8erBKwAbx7S7ytEd1CkM3YMbbLQHGkM+/D2+jQEt1iaI/Wh9ulQRjrDj+9WNz7nUgYw1JRrnBfgMBsUKx2YcgxX+jxfzRkFsiKuvv1A/BcAg0emVYelnD2g0leMjx+sVgecT7fJwv+tuNkr36z3t5wE3KYcIHOPIcWigW2ZZ+yU1/0AmdWv6xPczWDvFG21vHld3xEciPzBvKFR7bYDvB17YUmic+s2doVAodUrVQwedABB5GeLL/o6UdHMU+vZ3d+8VdnZ4138SitgcnGQcrvrhyaUTBIoamUCqGLJ5iO6VWITMa/kEw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SvjprfsFjT2q+G0BBsMISOBPht18qx5Ov6rcUISgUjE=;
 b=Ly+lUQ0+M/IUp+B4OQS8qNOM8TYaB5uasdcrjoYspgz20WyS/UPD0htASwL6rO45lrVjKtv8zb6YxuNzTMM1uTmjU0flvEfsxmBavxHdB/ufDqqEZWLLSAFmnaLL+IjjgPzHVDv/wtlfuCX/w5b2upQNQONkaX8hn9CWp9zKrug=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Rahul Singh <rahul.singh@arm.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v5 07/17] xen/smmu: Add remove_device callback for smmu_iommu ops
Date: Tue, 11 Apr 2023 12:16:26 -0700
Message-ID: <20230411191636.26926-26-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT010:EE_|PH0PR12MB8029:EE_
X-MS-Office365-Filtering-Correlation-Id: 50790be0-6352-4be7-7b8f-08db3ac15cdb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	mAFf3vbwh8xgKfEkCpsPWufkJqzgszk8dRjmBqwJbNWlxJ+4MaLJCrh3ccDNlcLfayxl9wcH/xmIKVAon8eO33lB0xgU9U/mor0WczGDwNKR83kQdcfRIWBJoTm5tvf676K0Yg5hTKLKUUHJo69ZZ60lOLw2LeRT1zUJC/d3aXabjaQ47+PkrR5xRMsAY9mxb4XXREnJS2IKVeE0zjlkyJOSrktcywagpler50PR5tZCgeGKw+16N/Nf9ynCTBAUzF0/7UOm+Ey8dTNW0WDjnsrkoj+h44NAWDnlQ/6Giwl4Y+7PwmCvUUR9Z0ZlMsEfcy+5aoGNqM3WESMk+CSupsa1o2EPANS9moVe/epI1meG7z4vn3CcSAdraj1xoP2gaXUcMGV1avcP46Wsj9MBpxB+6QuUIw16OGnTafMmWuyVeigwyN0+plpQAiOBvz5I7QNEJuQyymxLF7yjcmpG9mbkf6mb9WxpW16MF5cu/0GP04gsRdDyXHaK9gkGhQ71n+sVvZT9utYjYXMUoIZB+3S3BdHO/WE0p9hEtyUDthLPRqx+skjdOhOVdgMus8lxMtzsPFH/mlauhsMLCyXE5t0Xet9Kz8iQDLKEUlVzHK3LnZePD5pWg5QRmmoDnbwMkiIuylXMQdCnYMRUTPJ6lI69oxCT6V2edfu5EHECe2G7J58mNFTUsGGZjjRdvMaLqkljziga/kiAm/tydIdQBCPQSpD6whYnqMsIZgNgYFM=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(39860400002)(136003)(396003)(376002)(451199021)(36840700001)(46966006)(40470700004)(82310400005)(2906002)(5660300002)(8936002)(41300700001)(47076005)(40460700003)(4326008)(8676002)(6916009)(70586007)(70206006)(6666004)(316002)(36756003)(44832011)(478600001)(54906003)(186003)(81166007)(86362001)(36860700001)(356005)(26005)(336012)(426003)(2616005)(1076003)(82740400003)(40480700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:16.3575
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 50790be0-6352-4be7-7b8f-08db3ac15cdb
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT010.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB8029

Add a remove_device callback for removing a device entry from the smmu
masters rb-tree, using the following steps:
1. Find whether an SMMU master exists for the device node.
2. Remove the SMMU master.
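The two steps above can be sketched stand-alone in user space. A singly linked list stands in for the rb-tree of struct arm_smmu_master entries; all names here (fake_smmu, fake_master, and the helpers) are hypothetical stand-ins, not Xen's real types.

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

struct fake_master {
    const char *of_node;         /* device-tree node name, the lookup key */
    struct fake_master *next;
};

struct fake_smmu {
    struct fake_master *masters; /* head of the registration list */
};

static int fake_insert_master(struct fake_smmu *smmu, const char *of_node)
{
    struct fake_master *m = calloc(1, sizeof(*m));

    if (!m)
        return -ENOMEM;

    m->of_node = of_node;
    m->next = smmu->masters;
    smmu->masters = m;

    return 0;
}

/* Step 1: find the registration for a device node, if any. */
static struct fake_master *fake_find_master(struct fake_smmu *smmu,
                                            const char *of_node)
{
    struct fake_master *m;

    for (m = smmu->masters; m; m = m->next)
        if (!strcmp(m->of_node, of_node))
            return m;

    return NULL;
}

/* Step 2: unlink and free the registration; fail if it was never added. */
static int fake_remove_master(struct fake_smmu *smmu, const char *of_node)
{
    struct fake_master **pp;

    for (pp = &smmu->masters; *pp; pp = &(*pp)->next) {
        if (!strcmp((*pp)->of_node, of_node)) {
            struct fake_master *victim = *pp;

            *pp = victim->next;
            free(victim);
            return 0;
        }
    }

    return -ENOENT;
}
```

As in the patch, removing a never-registered node is reported as an error rather than silently ignored.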

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/drivers/passthrough/arm/smmu.c | 56 ++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0a514821b3..14e15f1bc6 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -816,6 +816,19 @@ static int insert_smmu_master(struct arm_smmu_device *smmu,
 	return 0;
 }
 
+static int remove_smmu_master(struct arm_smmu_device *smmu,
+			      struct arm_smmu_master *master)
+{
+	if (!smmu->masters.rb_node) {
+		ASSERT_UNREACHABLE();
+		return -ENOENT;
+	}
+
+	rb_erase(&master->node, &smmu->masters);
+
+	return 0;
+}
+
 static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 					 struct device *dev,
 					 struct iommu_fwspec *fwspec)
@@ -853,6 +866,32 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 	return insert_smmu_master(smmu, master);
 }
 
+static int arm_smmu_dt_remove_device_legacy(struct arm_smmu_device *smmu,
+					 struct device *dev)
+{
+	struct arm_smmu_master *master;
+	struct device_node *dev_node = dev_get_dev_node(dev);
+	int ret;
+
+	master = find_smmu_master(smmu, dev_node);
+	if (master == NULL) {
+		dev_err(dev,
+			"No registrations found for master device %s\n",
+			dev_node->name);
+		return -EINVAL;
+	}
+
+	ret = remove_smmu_master(smmu, master);
+
+	if (ret)
+		return ret;
+
+	dev_node->is_protected = false;
+
+	kfree(master);
+	return 0;
+}
+
 static int register_smmu_master(struct arm_smmu_device *smmu,
 				struct device *dev,
 				struct of_phandle_args *masterspec)
@@ -876,6 +915,22 @@ static int register_smmu_master(struct arm_smmu_device *smmu,
 					     fwspec);
 }
 
+static int arm_smmu_dt_remove_device_generic(u8 devfn, struct device *dev)
+{
+	struct arm_smmu_device *smmu;
+	struct iommu_fwspec *fwspec;
+
+	fwspec = dev_iommu_fwspec_get(dev);
+	if (fwspec == NULL)
+		return -ENXIO;
+
+	smmu = find_smmu(fwspec->iommu_dev);
+	if (smmu == NULL)
+		return -ENXIO;
+
+	return arm_smmu_dt_remove_device_legacy(smmu, dev);
+}
+
 static int arm_smmu_dt_add_device_generic(u8 devfn, struct device *dev)
 {
 	struct arm_smmu_device *smmu;
@@ -2858,6 +2913,7 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
     .init = arm_smmu_iommu_domain_init,
     .hwdom_init = arch_iommu_hwdom_init,
     .add_device = arm_smmu_dt_add_device_generic,
+    .remove_device = arm_smmu_dt_remove_device_generic,
     .teardown = arm_smmu_iommu_domain_teardown,
     .iotlb_flush = arm_smmu_iotlb_flush,
     .assign_device = arm_smmu_assign_dev,
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:20:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:20:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519877.807028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJXA-0002g0-Tq; Tue, 11 Apr 2023 19:20:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519877.807028; Tue, 11 Apr 2023 19:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJXA-0002fC-Ko; Tue, 11 Apr 2023 19:20:04 +0000
Received: by outflank-mailman (input) for mailman id 519877;
 Tue, 11 Apr 2023 19:20:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUc-0004Ta-Ky
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:26 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on20627.outbound.protection.outlook.com
 [2a01:111:f400:7e83::627])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7dc924f3-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:26 +0200 (CEST)
Received: from MW4PR03CA0021.namprd03.prod.outlook.com (2603:10b6:303:8f::26)
 by PH0PR12MB5419.namprd12.prod.outlook.com (2603:10b6:510:e9::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.36; Tue, 11 Apr
 2023 19:17:23 +0000
Received: from CO1NAM11FT048.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:8f:cafe::8e) by MW4PR03CA0021.outlook.office365.com
 (2603:10b6:303:8f::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:23 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT048.mail.protection.outlook.com (10.13.175.148) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:22 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:18 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 12:17:18 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:17 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7dc924f3-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mc7+AWllmzix4mhii1U8rnfn6L8ASn96uvHv2soW2G7oS/3cPcG/WPHUuDLB+gHbOMNpDcX5jRXZqCgLhpusyEg1R0n+mz/yllTm5QaeY4RI32zVz4/gYMdFKTJKM1aSfofl01QwWJ60BIb5tzqUHYHmw2+LokFw44QIaOMTEccm+xh/MP4D6Mr2c0WftqUjLDtMZuFXUbW1dP573ft+kB+iOw8Uec9EU+SgqOkjEmleM400hkhMhM1xTVs1yzcTwh8cv/23Fibu5oHI/h5g6ToYlPm1PWlzi/cXlBbGWYHRZTJ4Et9UEnxpUwGWGtm91S/vxOibgh5Reu43WmfIUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gLWShZkghSoju5aCTSrHjPTktEFI3gdZrQ2ShX60hx0=;
 b=ZzvG+mK/iy4AgUZDTYFPIQYPZO+KAPqS1wjw2S3ySjkMCrDPW/p7P0Pb9tYbfao7N2wXlSB+GOhajUT1r93LBMrYMaOStDb2tWPzI1R0WOlD2uNAFRdBX/nd4IY8s/4ELcLxSCcTfOUBOWR8olwMUeKCZ12nCKQlN2Z/0tPfdDw05IlQHVJA5ojUFqjObsvOzDxxicj2mBS9V+asAN7T7vGq1+42BEvVUnrPd3H8jN/pln2atHXnmNF6xB57s6Oz0oW80KFunQ8oA34qGYuswx/4ZWg52NjAa/27D57vY+zFiyoq9ZOx5T/F1ZoYLY9KF5aDQg10Mv/Dn3cK+QCDVQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gLWShZkghSoju5aCTSrHjPTktEFI3gdZrQ2ShX60hx0=;
 b=KVktFdAFizbAL/V23oXfjwGQ1VwQmvuLOLNg9ydmkLqePyqfEfZtuz9SgQnQ3IGivyHAElCOHj9+oLQza55PUBKSr1IglMdUNBPbvmbJZ0zY59ftKxbgf49iWnuXKBY0p2zGrDCylHIKyzQ/gfZDtc+nt8cYRs7fhLIAcfwjGUk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>, Jan Beulich <jbeulich@suse.com>, Paul Durrant
	<paul@xen.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN][PATCH v5 10/17] xen/iommu: Introduce iommu_remove_dt_device()
Date: Tue, 11 Apr 2023 12:16:29 -0700
Message-ID: <20230411191636.26926-29-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT048:EE_|PH0PR12MB5419:EE_
X-MS-Office365-Filtering-Correlation-Id: 6efa7f57-ca98-45af-4f83-08db3ac160c0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9q1fBSp1v32ZaZSlxWE5iV198r5t8GU1JGK5iYoab7v6pemY0mVsLrBfn3g3VIFB+ZSQeujLUZ9GW0FrxRlTMlchWsvewHOZF8yrhIrEucnfpn4+KA9W2R5ZvXMlIfrd8QPZzs2Gck1ubm3WL0SbOe5FbHeVHi5DFRWO0LRfN8m1oKh6WguUgZF19ZQ8yjHqSmHEpt7WnWGL9OSUamL9m9t9z53G88xg96P6nnkUCkjLmhyBq/lA8gflXEfMAXNgI6BSWXo0817OFAar17z0qEkXP0siB+Jittc3yAtJPsLuihLnllrhjOjZzttP4/NaUCOShrajkPSXmVEMM2zaqnUWZ5EufNFS4NVEqqutDC7y+m8ILU4iLtPdq5CTFf51CrrXc+eaxZKyx1UghtgQOqucUCkY8lgDEv3Mjpy0zVCb2n5cgoWDtSUNYcEGdqzow+n6uVl38+8AEoWXjMdEsd0Z8kz4NJ6guUKHMgmEk4uOj1lG339ZbES9JzzTqfoJWqExFGTlNBk00PGOs15RDv2ArSC/ZjN61mGL56Bu77qclZqQqIwZN3c03bJCo0AjvIQl9OSx7UJPGoqclZEsRfVrpNz+5MYCSvEStuGF0HAgwqGcU6smopMuGzBx4N6dUNB1H0DMqDnDqH/CmkOhJSPcCkAbasO1aeOr9qiXSZOO5yTaz/aAV/1180qKkVw4BCdR74iZ4fR4pPPZn0njt3mA1CNd+NPtSbNRVLB8YOk=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(39860400002)(346002)(396003)(451199021)(40470700004)(46966006)(36840700001)(40460700003)(70206006)(478600001)(54906003)(70586007)(4326008)(6916009)(8676002)(41300700001)(316002)(86362001)(36756003)(83380400001)(426003)(2616005)(336012)(26005)(1076003)(6666004)(8936002)(2906002)(44832011)(40480700001)(5660300002)(82310400005)(36860700001)(82740400003)(356005)(186003)(81166007)(47076005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:22.7966
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6efa7f57-ca98-45af-4f83-08db3ac160c0
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT048.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB5419

Introduce iommu_remove_dt_device() to remove a master device from the IOMMU.
This will be helpful when removing overlay nodes dynamically at run time.
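The control flow of the new function can be sketched in user space as follows. The struct names, the stub callback, and the int flags are hypothetical stand-ins for Xen's types, and the dtdevs_lock locking is omitted for brevity; only the error values mirror the patch.

```c
#include <errno.h>
#include <stddef.h>

struct fake_dev {
    int assigned;        /* still assigned to a domain? */
    int fwspec_present;  /* per-device IOMMU firmware data present? */
};

struct fake_iommu_ops {
    int (*remove_device)(unsigned char devfn, struct fake_dev *dev);
};

static int stub_remove_device(unsigned char devfn, struct fake_dev *dev)
{
    (void)devfn;
    (void)dev;
    return 0;
}

static int fake_remove_dt_device(const struct fake_iommu_ops *ops,
                                 struct fake_dev *dev)
{
    int rc;

    if (!ops)
        return -EOPNOTSUPP;

    /* Refuse to remove a device that is still assigned to a domain. */
    if (dev->assigned)
        return -EBUSY;

    /* Drivers with generic IOMMU DT bindings must implement the callback. */
    if (!ops->remove_device)
        return -EOPNOTSUPP;

    rc = ops->remove_device(0, dev);

    /* Mirror iommu_fwspec_free(): drop the fwspec only on success. */
    if (!rc)
        dev->fwspec_present = 0;

    return rc;
}
```

The ordering matters: the assignment check happens before the driver callback is invoked, so an in-use device is never partially torn down.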

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/drivers/passthrough/device_tree.c | 38 +++++++++++++++++++++++++++
 xen/include/xen/iommu.h               |  2 ++
 2 files changed, 40 insertions(+)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 457df333a0..a77a217f3d 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -126,6 +126,44 @@ int iommu_release_dt_devices(struct domain *d)
     return 0;
 }
 
+int iommu_remove_dt_device(struct dt_device_node *np)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    struct device *dev = dt_to_dev(np);
+    int rc;
+
+    if ( !ops )
+        return -EOPNOTSUPP;
+
+    spin_lock(&dtdevs_lock);
+
+    if ( iommu_dt_device_is_assigned_locked(np) )
+    {
+        rc = -EBUSY;
+        goto out;
+    }
+
+    /*
+     * A driver which supports generic IOMMU DT bindings must have
+     * this callback implemented.
+     */
+    if ( !ops->remove_device )
+    {
+        rc = -EOPNOTSUPP;
+        goto out;
+    }
+
+    /* Remove the master device from the IOMMU if present and available. */
+    rc = ops->remove_device(0, dev);
+
+    if ( !rc )
+        iommu_fwspec_free(dev);
+
+out:
+    spin_unlock(&dtdevs_lock);
+    return rc;
+}
+
 int iommu_add_dt_device(struct dt_device_node *np)
 {
     const struct iommu_ops *ops = iommu_get_ops();
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 405db59971..079c06321e 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -218,6 +218,8 @@ int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev);
 int iommu_dt_domain_init(struct domain *d);
 int iommu_release_dt_devices(struct domain *d);
 
+int iommu_remove_dt_device(struct dt_device_node *np);
+
 /*
  * Helper to add master device to the IOMMU using generic IOMMU DT bindings.
  *
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:20:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519883.807039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJXD-0003M8-K3; Tue, 11 Apr 2023 19:20:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519883.807039; Tue, 11 Apr 2023 19:20:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJXD-0003LY-Ev; Tue, 11 Apr 2023 19:20:07 +0000
Received: by outflank-mailman (input) for mailman id 519883;
 Tue, 11 Apr 2023 19:20:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUk-0004DR-CY
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:34 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20620.outbound.protection.outlook.com
 [2a01:111:f400:fe59::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 80294892-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:30 +0200 (CEST)
Received: from BN7PR02CA0032.namprd02.prod.outlook.com (2603:10b6:408:20::45)
 by CH0PR12MB8505.namprd12.prod.outlook.com (2603:10b6:610:193::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35; Tue, 11 Apr
 2023 19:17:27 +0000
Received: from BN8NAM11FT012.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:20:cafe::fb) by BN7PR02CA0032.outlook.office365.com
 (2603:10b6:408:20::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.36 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:27 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT012.mail.protection.outlook.com (10.13.177.55) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:26 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:24 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:24 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80294892-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bXcp6be5EOxaRQCLBYG53j+05Hu7TinVPhvIYfE9gJxAavjYLIKc6/7NioHLDCrcurhEiJ8Z2L+kT8x8gZErqZHQveFz/ruJQC4QkbuQqpM+92ljIOdfuP1I2u/pM4nz1sHzr2x8WdmSAXTeG2eg/BC+DDTNeWk8lZjUOdCyn5diec1fDD9yvuLTOeQ5YgC0gn68dAKFW97Q16ccg8J0j/r0k1YHoqXrxJVxZogmu9dMnkw80x5/LFGjdetpX2PvM8gZoSG8Fdn4fE+/BJeN2UF7mygE1eUVZUG4NOffEyu08L0s9wDgjSKyuYNR4a6rehID9pHzKBo806PvJx6twA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=W8ANf04rvVBwwTVoSmk1wMh7BnV4jbgpw3PgliDnkIY=;
 b=UESNVU5X4U7Mb1sIUnUW6Y8bS06cx8WSLF1wrd4oXl7LaOBS0lr2K4Qe0+ehQ4ecstfVO8GS+uIHD58VyL/8yZaXE/DygxLfAybnwrm984eiL5SYI0oqjyWMNEyuQdrRqupBzXpjJIi19DjBXsAG0H0LpvXjM9vfuom1IOdVOVhKy9TyxbfNtu2BtzMXEUFOcZOUNiEBsqb/5jL951M0S0Qprc0Qm/rS23Ykn9k6VAfTYSm7Bei64FeQbd9n/gDX+HlnvFgBlEEZ7mgdlduem0kBgsuoRoMTv/hkzr2rYS+E1YtH83fyufPQzFdQCIK5LLUkEsBxy6Vf9O97nLxQQQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W8ANf04rvVBwwTVoSmk1wMh7BnV4jbgpw3PgliDnkIY=;
 b=1GiTmjkOyBemqL8wpiEBx/y65ZzIdEqTLmoD55ye5iseqtdm0w/3O0UOXVUz9BaJUICEmxgiIUKELqme1d8WUFiyqIehMkIZFtdJjfne7ZG3siohBJ3oBuULNFkBPUYiciJZmCtGOKSInpWejEyWQCnaqdfWD0LwyroHP+ufIqY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: [XEN][PATCH v5 17/17] tools/xl: Add new xl command overlay for device tree overlay support
Date: Tue, 11 Apr 2023 12:16:36 -0700
Message-ID: <20230411191636.26926-36-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT012:EE_|CH0PR12MB8505:EE_
X-MS-Office365-Filtering-Correlation-Id: 2d59fb95-371e-40ee-d911-08db3ac1632f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nW4r2U3H6TrJMWIKgWISxhpfKTRgrSgjfkPPjFEqwV+PV72ixWavNRL57lf2zNDQvBzpHxBd5fpfXOKDOoZuT0F9iXe5r2UGkLbHTo6FDWqwLxgsVqgBDxQYeUyVovX0KXicgBcTNmyPeNshlu+kRcDVyNvkn0Ryy5uBzfKG+LQctr271k1kF/ky7COwagWYTdwNcUctEsn+FhjKBK0+QXRmkZxEoZry2EUPvR3YK7IHS0SxTml6xucxAGGASNSmJpKQ0uiyfGDDjXnpZvfU2XL27qCF0OqLy9kXAtayBDdOisSra46hjYIDTtFTX39UaUk88yN1vPb0y6lYaDNMjkpp0U1qDpba40BDxhUgF7J0YL3bi31c5fX4jlBNiaLBxs3mElWEAgWdiXNCmVXwWFSfYwpjqq6L8CQgQZ/qUolUvE7N5+NJo1FJi0NXaiK4Oyv9lizIqEXHugZILPEbXo0NMfd1UTeSxPaMCK3iAi82zqUogxDD6kmvFO70tx/W32bf5hrPW08ShFLiMMB4cNea0g2f4SJ9AarBaBKiClsUNU+cnOX/XrGaWwzWVky1+pPgzMt0P0aZwIOHXWKQB682ONv95mLVneYfIc6DkQXBeXpGxsQtFtsGo62JFcw0lRgf7pUQwrPbHvo+63NR+VUp5MY9uhdzRvacRJq61i0hCX6nkI0QPvIdactc/McmqnLOvsBVAvi+O6tG0bbpOhvPh1fsTupE8G+oJmTh9RU=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(136003)(376002)(346002)(451199021)(40470700004)(36840700001)(46966006)(40460700003)(6916009)(54906003)(8676002)(70206006)(70586007)(478600001)(4326008)(41300700001)(316002)(36756003)(86362001)(83380400001)(336012)(2616005)(426003)(26005)(1076003)(6666004)(47076005)(8936002)(2906002)(44832011)(82310400005)(5660300002)(40480700001)(36860700001)(81166007)(186003)(82740400003)(356005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:26.9767
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2d59fb95-371e-40ee-d911-08db3ac1632f
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT012.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR12MB8505

Add a new xl command "dt-overlay" to add or remove a device tree overlay.
The command reads the overlay .dtbo file and passes its contents to
libxl_dt_overlay() together with the requested operation.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
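The verb parsing done by main_dt_overlay() in the diff below can be exercised stand-alone as follows. parse_overlay_op() is a hypothetical helper, not part of xl; only the add=1/remove=2 op codes follow the patch.

```c
#include <stdint.h>
#include <string.h>

enum {
    OVERLAY_INVALID = 0,
    OVERLAY_ADD = 1,     /* matches the patch's overlay_add_op */
    OVERLAY_REMOVE = 2,  /* matches the patch's overlay_remove_op */
};

/* Map the command-line verb to the op code handed to libxl. */
static uint8_t parse_overlay_op(const char *verb)
{
    if (verb && !strcmp(verb, "add"))
        return OVERLAY_ADD;
    if (verb && !strcmp(verb, "remove"))
        return OVERLAY_REMOVE;
    return OVERLAY_INVALID;
}
```

Rejecting unknown verbs (and a missing verb) up front keeps the libxl call site from ever seeing an undefined op code.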
---
 tools/xl/xl.h           |  1 +
 tools/xl/xl_cmdtable.c  |  6 +++++
 tools/xl/xl_vmcontrol.c | 52 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 59 insertions(+)

diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 72538d6a81..a923daccd3 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -138,6 +138,7 @@ int main_shutdown(int argc, char **argv);
 int main_reboot(int argc, char **argv);
 int main_list(int argc, char **argv);
 int main_vm_list(int argc, char **argv);
+int main_dt_overlay(int argc, char **argv);
 int main_create(int argc, char **argv);
 int main_config_update(int argc, char **argv);
 int main_button_press(int argc, char **argv);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index ccf4d83584..db0acff62a 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -630,6 +630,12 @@ const struct cmd_spec cmd_table[] = {
       "Issue a qemu monitor command to the device model of a domain",
       "<Domain> <Command>",
     },
+    { "dt-overlay",
+      &main_dt_overlay, 0, 1,
+      "Add/Remove a device tree overlay",
+      "add/remove <.dtbo>\n"
+      "-h print this help\n"
+    },
 };
 
 const int cmdtable_len = ARRAY_SIZE(cmd_table);
diff --git a/tools/xl/xl_vmcontrol.c b/tools/xl/xl_vmcontrol.c
index 5518c78dc6..de56e00d8b 100644
--- a/tools/xl/xl_vmcontrol.c
+++ b/tools/xl/xl_vmcontrol.c
@@ -1265,6 +1265,58 @@ int main_create(int argc, char **argv)
     return 0;
 }
 
+int main_dt_overlay(int argc, char **argv)
+{
+    const char *overlay_ops = NULL;
+    const char *overlay_config_file = NULL;
+    void *overlay_dtb = NULL;
+    int rc;
+    uint8_t op;
+    int overlay_dtb_size = 0;
+    const int overlay_add_op = 1;
+    const int overlay_remove_op = 2;
+
+    if (argc < 2) {
+        help("dt-overlay");
+        return EXIT_FAILURE;
+    }
+
+    overlay_ops = argv[1];
+    overlay_config_file = argv[2];
+
+    if (strcmp(overlay_ops, "add") == 0)
+        op = overlay_add_op;
+    else if (strcmp(overlay_ops, "remove") == 0)
+        op = overlay_remove_op;
+    else {
+        fprintf(stderr, "Invalid dt overlay operation\n");
+        return EXIT_FAILURE;
+    }
+
+    if (overlay_config_file) {
+        rc = libxl_read_file_contents(ctx, overlay_config_file,
+                                      &overlay_dtb, &overlay_dtb_size);
+
+        if (rc) {
+            fprintf(stderr, "failed to read the overlay device tree file %s\n",
+                    overlay_config_file);
+            free(overlay_dtb);
+            return EXIT_FAILURE;
+        }
+    } else {
+        fprintf(stderr, "overlay dtbo file not provided\n");
+        return EXIT_FAILURE;
+    }
+
+    rc = libxl_dt_overlay(ctx, overlay_dtb, overlay_dtb_size, op);
+
+    free(overlay_dtb);
+
+    if (rc)
+        return EXIT_FAILURE;
+
+    return rc;
+}
 /*
  * Local variables:
  * mode: C
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:20:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:20:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519884.807045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJXE-0003Sb-6Q; Tue, 11 Apr 2023 19:20:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519884.807045; Tue, 11 Apr 2023 19:20:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJXD-0003Qd-T5; Tue, 11 Apr 2023 19:20:07 +0000
Received: by outflank-mailman (input) for mailman id 519884;
 Tue, 11 Apr 2023 19:20:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUU-0004Ta-0J
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:18 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20612.outbound.protection.outlook.com
 [2a01:111:f400:7e88::612])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7825769c-d89d-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 21:17:15 +0200 (CEST)
Received: from BN9P221CA0020.NAMP221.PROD.OUTLOOK.COM (2603:10b6:408:10a::32)
 by DM4PR12MB6541.namprd12.prod.outlook.com (2603:10b6:8:88::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Tue, 11 Apr
 2023 19:17:12 +0000
Received: from BN8NAM11FT113.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10a:cafe::f6) by BN9P221CA0020.outlook.office365.com
 (2603:10b6:408:10a::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:12 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT113.mail.protection.outlook.com (10.13.176.163) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.29 via Frontend Transport; Tue, 11 Apr 2023 19:17:12 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:11 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:11 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7825769c-d89d-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Nd9RHJF2398b5isOKaYdNprBPl/JNuJF6/ge3zT77zkGtEPFSzds6zzf69EGMJAgklQy0X2u3+v9EzkoUL8og3kVHQixgbWQ5Njigjp66elYLenh+J7QZEtnku2fMxbv6PBqSq6XIUj6sva/sCfwpP1PsxcwsbfWdY496muh8Z0FNB87I9udOfbP5c8smQ5H8CGbfH5c8V32AP2JJfi9boLHlUPpiFHk0Qj5HgGnWlTpfQLHfHGfKZoeheoD6EWHaF6sTHYHVSB6UxxLD5olg3jxzi0vdP4lQP8I1jTuxUMXF086TLemBhklKP1rC8U3EvYFa6TzY3yYcxuxpEoqJw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0odwSwlZEbTJvdLLf27PaAsLK4pjjIm+Jsb7s8Lqv7g=;
 b=K3LK6ruFsTA4TPqFt/ammZSl/L8ggQ6E16nomQr5n3ta6K0P+pc/IVT5pkNN8uKuBfN33CHrxT49q9EbSeedM+cJ9eBr5YKtjSRzUXAS3ki2vEzEcU272lsMobTIFxbZ35sDe9X5OMw0QP19LbdEUNveyReAdEinT6VGWtcOt+sjJ9t7O+TOqVZif8yPg+IlNbIbZP8MPBIDycowyRdg/OKXXvmcDnzoQccPxD8YtsQm4XyAdJ4AkjRQYTqVnIMebgRyjh176UgkFlaschPgOCAyq6aJPbs4nOJl6xueJtw7xLSbE+NNKhSn8K67xZDLZt4Q8tdSLM3GSuEmGpXf+g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0odwSwlZEbTJvdLLf27PaAsLK4pjjIm+Jsb7s8Lqv7g=;
 b=udgPt9YgeXZs+lWeAZV5aVlBBUqHsJP0l5aFIU9sPSKpCRYAo6zY4oeKvcTzYBmDZyLZotUsv/O38LNzRY1kN4j00nm0f0Ez1qkF7ftHoqc3b5GBw/5Fn7K2kCVzTaM+aOmAX1jvjcawQ5Tu8Dbr5fDa5Ck2tJ50YmouDiOODDY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, "Julien
 Grall" <julien@xen.org>
Subject: [XEN][PATCH v5 02/17] common/device_tree: change __unflatten_device_tree()
Date: Tue, 11 Apr 2023 12:16:21 -0700
Message-ID: <20230411191636.26926-21-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT113:EE_|DM4PR12MB6541:EE_
X-MS-Office365-Filtering-Correlation-Id: 1db08cfe-55db-45a7-8ea6-08db3ac15a73
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1doF2RKmkOdJFbpJzZWghxFTAGAl514VARfqJ35cis52xR+3+oTUWV7ecTA9tKoXEKyGmv7DeTyTrn1VDEGDx/kRtGkMVfurLAzr3fmJ4pBmQdekjSGNIDH1N0cXBF6OUke7faWga+sa998uw4TOw3DoQ/ToBFwAdRYVw3ORW2bCw3CKqlDTkeWUTabLkD3nWScqZzDjja/4LTm64WuivS6yhyYd/Ii3uNNf5ro+GxpN6ZiodW8WCjVqwatrM/UVGv08GLDQdWwzL+F4hvaWnOcy/j3DB0hvzRM3jtIHUQuwsZgoWE+Rqh3A+urm8gt6jYpvjBOASs0e/IipHUuMt+O86Qj77dCQYN1slyKzUHekyTJeOlZVP81zPuFoeYm8SsS+mTyp2K7gjE8urSEac5HkMpLitJBbkq42mBPwi1T9xdWWPNEk2iCCQ3Ubw2GcoX91M+l+OLegeqjPAu5lfbcOYUV+qHO37OZgHoxDn9rrWJlrwi5QWNUyIHhImMeIW/LOAcTTPcioLXZhjat6OP1f3QkWaloNc8AqDPrW27sxOojetd31kuyEfyfDg83pkXVS/zg0wsA2MstlSabqI+yKf9vWvPk6pVX1MhUJPmwEsOpNJOnMZG6t8sLnWaFCWf3GlT7WDsP/jknupLlnu68GqnnzU3JPq+AEY4WHEzSCD/IlHew1+8I+3eiEp0NrYGUzGLLZIh3bJM3hbhbusjnEhQ1sqMouBCmmovcEbDM=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(39860400002)(396003)(376002)(451199021)(46966006)(36840700001)(40470700004)(40480700001)(36860700001)(82740400003)(47076005)(336012)(426003)(83380400001)(8676002)(316002)(54906003)(478600001)(36756003)(1076003)(26005)(2616005)(6666004)(186003)(44832011)(4326008)(81166007)(5660300002)(2906002)(6916009)(40460700003)(356005)(70586007)(86362001)(70206006)(8936002)(82310400005)(41300700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:12.3366
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1db08cfe-55db-45a7-8ea6-08db3ac15a73
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT113.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6541

The following changes are made to __unflatten_device_tree():
    1. Rename __unflatten_device_tree() to unflatten_device_tree().
    2. Remove the static qualifier so it can be called from other files.
    3. Add handling of memory allocation failure. This will be useful for
        dynamic node programming, where the device tree is unflattened at
        runtime and memory allocation can fail.
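In sketch form, the failure handling added by change 3 looks like the
following. This is a stand-alone illustration only: xmalloc-style allocation
and panic() are simulated stand-ins for Xen's _xmalloc() and panic() (which
never returns), just to show the control flow.

```c
#include <assert.h>
#include <stdlib.h>

/* Records that panic() was hit; in Xen proper, panic() halts the system. */
static int panicked;

static void panic(const char *msg)
{
    (void)msg;
    panicked = 1;
}

/* Simplified stand-in for the allocation step of unflatten_device_tree():
 * allocate the expanded-tree buffer and panic on failure instead of
 * dereferencing a NULL pointer later. */
static void *unflatten_alloc(size_t size, int simulate_failure)
{
    void *mem = simulate_failure ? NULL : malloc(size + 4);

    if ( !mem )
    {
        panic("Cannot allocate memory for unflattening the device tree\n");
        return NULL;
    }
    return mem;
}
```

Without the check, a failed allocation would previously have been written to
unconditionally (the 0xdeadbeef end marker is stored into the buffer right
after allocation).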

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/device_tree.c      | 10 ++++++----
 xen/include/xen/device_tree.h |  5 +++++
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index aed38ff63c..bf847b2584 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -2047,7 +2047,7 @@ static unsigned long unflatten_dt_node(const void *fdt,
 }
 
 /**
- * __unflatten_device_tree - create tree of device_nodes from flat blob
+ * unflatten_device_tree - create tree of device_nodes from flat blob
  *
  * unflattens a device-tree, creating the
  * tree of struct device_node. It also fills the "name" and "type"
@@ -2056,8 +2056,7 @@ static unsigned long unflatten_dt_node(const void *fdt,
  * @fdt: The fdt to expand
  * @mynodes: The device_node tree created by the call
  */
-static void __unflatten_device_tree(const void *fdt,
-                                    struct dt_device_node **mynodes)
+void unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
 {
     unsigned long start, mem, size;
     struct dt_device_node **allnextp = mynodes;
@@ -2079,6 +2078,9 @@ static void __unflatten_device_tree(const void *fdt,
     /* Allocate memory for the expanded device tree */
     mem = (unsigned long)_xmalloc (size + 4, __alignof__(struct dt_device_node));
 
+    if ( !mem )
+        panic("Cannot allocate memory for unflattening the device tree\n");
+
     ((__be32 *)mem)[size / 4] = cpu_to_be32(0xdeadbeef);
 
     dt_dprintk("  unflattening %lx...\n", mem);
@@ -2179,7 +2181,7 @@ dt_find_interrupt_controller(const struct dt_device_match *matches)
 
 void __init dt_unflatten_host_device_tree(void)
 {
-    __unflatten_device_tree(device_tree_flattened, &dt_host);
+    unflatten_device_tree(device_tree_flattened, &dt_host);
     dt_alias_scan();
 }
 
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 19a74909ce..58ac12abe3 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -178,6 +178,11 @@ int device_tree_for_each_node(const void *fdt, int node,
  */
 void dt_unflatten_host_device_tree(void);
 
+/**
+ * unflatten_device_tree - unflatten any device tree blob into device_nodes.
+ */
+void unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes);
+
 /**
  * IRQ translation callback
  * TODO: For the moment we assume that we only have ONE
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 19:20:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 19:20:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519885.807052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJXE-0003cy-Rs; Tue, 11 Apr 2023 19:20:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519885.807052; Tue, 11 Apr 2023 19:20:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmJXE-0003aE-GY; Tue, 11 Apr 2023 19:20:08 +0000
Received: by outflank-mailman (input) for mailman id 519885;
 Tue, 11 Apr 2023 19:20:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmJUg-0004DR-Bm
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 19:17:30 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20616.outbound.protection.outlook.com
 [2a01:111:f400:7e88::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7e6342f1-d89d-11ed-8611-37d641c3527e;
 Tue, 11 Apr 2023 21:17:27 +0200 (CEST)
Received: from BN7PR02CA0011.namprd02.prod.outlook.com (2603:10b6:408:20::24)
 by PH0PR12MB5607.namprd12.prod.outlook.com (2603:10b6:510:142::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 19:17:23 +0000
Received: from BN8NAM11FT012.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:20:cafe::21) by BN7PR02CA0011.outlook.office365.com
 (2603:10b6:408:20::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.36 via Frontend
 Transport; Tue, 11 Apr 2023 19:17:23 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT012.mail.protection.outlook.com (10.13.177.55) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.28 via Frontend Transport; Tue, 11 Apr 2023 19:17:23 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 14:17:23 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 14:17:22 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e6342f1-d89d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hYkT+9ybedTAThsa69OTF8pDiP0MapFqOpgACARH+6PwvEwplOikd1yK31DLgjqTQ+NwL2NhFM612qQn1Zk2T6REA9HurY0vIUfm6ASLw9cTfNiqAXo+zsYiW6NgkFbMtRVKKhWh4zBVeAkVUpov/tkhNshlfcm6pDULZyjl7BxJEuemGsbPWwSyfD6XE+czKIfu5bjD6FARRQyZ7BSQ/ScOW4mlS7kUXbzqxyUCsuuV4a9Jg+k+It3c1BmoiLLTyBBu747DBAhbcOA4ZtDg4TTTcsWT0QhuXgHC9nxog1Mu0qoXwfoBkK86oQvEyUwcSFZYZIitwdjyAhkTJ7lT7w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nkwDgQcNvE9UkGiRuGgnKgAIS4K7kLAxiI7Z7Wg7YaY=;
 b=bFkwv0hRmgWPKHlPE78WePjeyze5djTM4XidulgttiHnANW6RcfIX235Ic4iOQ4EGGu2nm2KmYzhwCKpyv1ofZ5ALGFn+RObauX7gT5t745XCjHJ9bqBeA38DW350O8BXLiMRHHvlBc89Am8iYZW67l5N96Bc5fMa8BJmAf/jAccbFHW1rtmsF3iJvhcB/dHEj6H9a2PDFep+nPqawQo3qh46GeX5UaAw2jnFBFPp4gNMcTKkYm3f/R/mFy21DQhOYhAr38KHm6H1ibY158dC/TZRwmcN2ICQPNWbW3TUv+gnOVdtKLqXE41I1KX8TopAf2UKqG4E9XVt/sAwJhIrA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nkwDgQcNvE9UkGiRuGgnKgAIS4K7kLAxiI7Z7Wg7YaY=;
 b=21/M39zzNkj6wIHkED3tJss5gecgEfRVwuaFUsyo+K3x1WdUj1VY+Ix1f+ggPHp698D1EViTsLUcnRtHqsqNJokVJHXZPoxBvyLa2VuGdBFmfVOkuwkwBiLs2e+dHETrO539r0RHzhgqZGOQUSD7Yh2xicGbe5d3/hZ3/KZT7Kg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Vikram Garhwal <vikram.garhwal@amd.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross
	<jgross@suse.com>
Subject: [XEN][PATCH v5 15/17] tools/libs/ctrl: Implement new xc interfaces for dt overlay
Date: Tue, 11 Apr 2023 12:16:34 -0700
Message-ID: <20230411191636.26926-34-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411191636.26926-1-vikram.garhwal@amd.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT012:EE_|PH0PR12MB5607:EE_
X-MS-Office365-Filtering-Correlation-Id: 7bb3dab3-66a2-4d94-0fae-08db3ac16115
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NjdL+dF7YtTYsdvEmz+gwZtMQejePwuqoUCxVtuOYQWysB0NMs9qEb+cfhvo6TJNzD/9vi69QT2fhp+OJDVPgfUkgVu1tn/qlZAYyWuVTf3N0cFFtJo0+Qx/bvE/w4ztWaOrlR/LK4EUmD9HZWW2Ty6fwF+EGqbSftprSFjA0H6RQ0+31wdLq+JOh4Lvz5Qk8XhjB7JOof5mawk2Vm9n5GJHTQmZUM0XZVb/FLLM3YKedjjhJ7lIt36Kr3VO8TDnbLJ7/aUA3ic4jmVm6ApUiUEZcumHQ/jubuNqi/Kb3NeQDu6KCmMYpYYV68o5Tti7/XfZ1i24vD2De9ayV9gEHlj7jKjNUFoHiifxaHbvGh6U6+T2zQE23AybgmDS/cbFmj4+XbtJauR6nf+e3wc231biL3TBFKbqW2zNtuEZpUYEan3mwwB5X4LJvnHctA5YPldL020mz4UjL0046VI2BorKesLR1Ssm9sqkz8EI5kZxExFETPQ3kOeNcrjqJiwHxl107F2ywBiqNv8UPmLLd3dmLWJmcgRO4nzQeWQ/1oxK6chL1Ec9vfiz7VemckQRV5zKIvroPRvvRjFCfQQOKrFcxUIP+ZcOI/7ZffJpIOW0jxZVakknuR3y4YWkzL4UhICB285ipDmmK59cQxfz7/GjtyTeuY3L1pK+xfCHtwUmmCkdNybWCEk8g6NdEFg/RQ2TRdGemRpmRCb4oJQ4qkxW7GWizOkLjuulhdFCLmhvzgZvWSIgztkxhXpne7E9
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(376002)(346002)(396003)(451199021)(46966006)(40470700004)(36840700001)(82310400005)(82740400003)(356005)(54906003)(36860700001)(81166007)(70206006)(70586007)(8676002)(86362001)(6916009)(478600001)(4326008)(336012)(47076005)(426003)(186003)(2616005)(26005)(316002)(41300700001)(36756003)(2906002)(1076003)(44832011)(40460700003)(40480700001)(8936002)(6666004)(5660300002)(2004002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 19:17:23.4457
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7bb3dab3-66a2-4d94-0fae-08db3ac16115
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT012.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB5607

xc_dt_overlay() sends the device tree binary overlay, the size of the .dtbo,
and the overlay operation type (i.e. add or remove) to Xen.
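A caller would typically read the .dtbo into a buffer and, optionally,
sanity-check it before invoking xc_dt_overlay(). The helper below is a
hypothetical illustration, not part of libxc: it only verifies the standard
FDT magic number 0xd00dfeed that every flattened device tree, overlays
included, begins with.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical pre-flight check before calling xc_dt_overlay():
 * a flattened device tree (including a .dtbo overlay) starts with
 * the big-endian magic 0xd00dfeed. */
static int fdt_magic_ok(const void *overlay_fdt, uint32_t overlay_fdt_size)
{
    static const uint8_t magic[4] = { 0xd0, 0x0d, 0xfe, 0xed };

    if ( overlay_fdt_size < sizeof(magic) )
        return 0;
    return memcmp(overlay_fdt, magic, sizeof(magic)) == 0;
}
```

With a valid buffer in hand, the actual call is then
xc_dt_overlay(xch, buf, size, op), where op selects add or remove.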

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 tools/include/xenctrl.h         |  5 ++++
 tools/libs/ctrl/Makefile.common |  1 +
 tools/libs/ctrl/xc_dt_overlay.c | 48 +++++++++++++++++++++++++++++++++
 3 files changed, 54 insertions(+)
 create mode 100644 tools/libs/ctrl/xc_dt_overlay.c

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 05967ecc92..b932589c8d 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -2637,6 +2637,11 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
 int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
                          xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
 
+#if defined(__arm__) || defined(__aarch64__)
+int xc_dt_overlay(xc_interface *xch, void *overlay_fdt,
+                  uint32_t overlay_fdt_size, uint8_t overlay_op);
+#endif
+
 /* Compat shims */
 #include "xenctrl_compat.h"
 
diff --git a/tools/libs/ctrl/Makefile.common b/tools/libs/ctrl/Makefile.common
index 0a09c28fd3..247afbe5f9 100644
--- a/tools/libs/ctrl/Makefile.common
+++ b/tools/libs/ctrl/Makefile.common
@@ -24,6 +24,7 @@ OBJS-y       += xc_hcall_buf.o
 OBJS-y       += xc_foreign_memory.o
 OBJS-y       += xc_kexec.o
 OBJS-y       += xc_resource.o
+OBJS-$(CONFIG_ARM)  += xc_dt_overlay.o
 OBJS-$(CONFIG_X86) += xc_psr.o
 OBJS-$(CONFIG_X86) += xc_pagetab.o
 OBJS-$(CONFIG_Linux) += xc_linux.o
diff --git a/tools/libs/ctrl/xc_dt_overlay.c b/tools/libs/ctrl/xc_dt_overlay.c
new file mode 100644
index 0000000000..202fc906f4
--- /dev/null
+++ b/tools/libs/ctrl/xc_dt_overlay.c
@@ -0,0 +1,48 @@
+/*
+ *
+ * Device Tree Overlay functions.
+ * Copyright (C) 2021 Xilinx Inc.
+ * Author Vikram Garhwal <fnu.vikram@xilinx.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "xc_private.h"
+
+int xc_dt_overlay(xc_interface *xch, void *overlay_fdt,
+                  uint32_t overlay_fdt_size, uint8_t overlay_op)
+{
+    int err;
+    DECLARE_SYSCTL;
+
+    DECLARE_HYPERCALL_BOUNCE(overlay_fdt, overlay_fdt_size,
+                             XC_HYPERCALL_BUFFER_BOUNCE_IN);
+
+    if ( (err = xc_hypercall_bounce_pre(xch, overlay_fdt)) )
+        goto err;
+
+    sysctl.cmd = XEN_SYSCTL_dt_overlay;
+    sysctl.u.dt_overlay.overlay_op = overlay_op;
+    sysctl.u.dt_overlay.overlay_fdt_size = overlay_fdt_size;
+
+    set_xen_guest_handle(sysctl.u.dt_overlay.overlay_fdt, overlay_fdt);
+
+    if ( (err = do_sysctl(xch, &sysctl)) != 0 )
+        PERROR("%s failed", __func__);
+
+err:
+    xc_hypercall_bounce_post(xch, overlay_fdt);
+
+    return err;
+}
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 21:05:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 21:05:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519947.807070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmLAz-00028n-DN; Tue, 11 Apr 2023 21:05:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519947.807070; Tue, 11 Apr 2023 21:05:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmLAz-00027q-9z; Tue, 11 Apr 2023 21:05:17 +0000
Received: by outflank-mailman (input) for mailman id 519947;
 Tue, 11 Apr 2023 21:05:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmLAx-00027i-VZ
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 21:05:16 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20609.outbound.protection.outlook.com
 [2a01:111:f400:fe59::609])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8c77a037-d8ac-11ed-b21e-6b7b168915f2;
 Tue, 11 Apr 2023 23:05:13 +0200 (CEST)
Received: from MN2PR01CA0042.prod.exchangelabs.com (2603:10b6:208:23f::11) by
 PH8PR12MB7424.namprd12.prod.outlook.com (2603:10b6:510:228::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.33; Tue, 11 Apr
 2023 21:05:10 +0000
Received: from BL02EPF000145B9.namprd05.prod.outlook.com
 (2603:10b6:208:23f:cafe::5a) by MN2PR01CA0042.outlook.office365.com
 (2603:10b6:208:23f::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 21:05:09 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BL02EPF000145B9.mail.protection.outlook.com (10.167.241.209) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Tue, 11 Apr 2023 21:05:09 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 16:05:08 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 16:05:08 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c77a037-d8ac-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jegRTeJSpZuE5Q3pTyaHRqHQvGxMY6P+irT7jYSVekWN5rAyXxR1So/cwGjL5Z/MnE0/jDfYXl1Aw2mIV+DLEgOhpsS4n66xuv8Rzht9Scr7ACfDjQVBlkMGgC97Bhf+QtA3vDbt/WPC5xcanZ280jYCW82zAxdI3wpqXFvdVO73ROJJbZiGvvWQhmEar6PpXkbsqNDP86EQ8lfgN0GAwa1Rg1MvGv+4Z96tFmELn2OnNVjv+I7lm2S7NyKuaHwKzQxc+nf+YMczgy37zVJhMiqA6gpYFUGO1jTvOW0lns0UDiUn1H+WIwKySbu6KBIFOnwqD2ZB6AxqNBUmzd3C1Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dyQCgywzSPQJgk3l7gPLyp/Wd2+MwFaNmcvT4lvITlY=;
 b=VTXEPXu8h5R2+ITRNrw9ekN6qqrpfoelBZDfFPTJ0hAAW6a/iI6RDagUQaNhSMdf9ses+zqkHjDdLrwnDIPpfMoLhb0JqlepZLPa9pXrnRKCN/VhmzNvCcSBUjewg7W0a/7m3+5yz00Dd5KphnaVof07SNeLsyXsbecRdKsGlgysZqe/3JV59pg5OHfEqxT7ROI2rP9VjGeTFkVLGCGKxymmB99gUzHBYGqChSNoKgmWw5NM7Nu3z+bYZmkhUmLpsWwPuulDT0zzgp7c/sqgfS2knxKidJhKNxM7bx/NKlBJiq+tbLtqQCGSXWgwp/Rd7zge/yknwNn33GDTIbs/IA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dyQCgywzSPQJgk3l7gPLyp/Wd2+MwFaNmcvT4lvITlY=;
 b=HIAOrlOoPf1lOufdYmhQtb0i+ul2zvBLxwuklWzGQSL6JdKDRU4mtCJcC4HOM6kZqEok146MDaD30EKcnw2MngI74Kq+jNzxTk51XcACqXWJ3Xz536mXij8irfBlM6W+/9Dzi67z18xq5lDIlMnGuC1o7xrUlpGklhpQBZb3Aj0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, =?UTF-8?q?Alex=20Benn=C3=A9e?=
	<alex.bennee@linaro.org>, =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?=
	<philmd@linaro.org>, Thomas Huth <thuth@redhat.com>, "Wainer dos Santos
 Moschetta" <wainersm@redhat.com>, Beraldo Leal <bleal@redhat.com>
Subject: [QEMU][PATCH] gitlab-ci.d/crossbuilds: Drop the '--disable-tcg' configuration for xen
Date: Tue, 11 Apr 2023 14:04:22 -0700
Message-ID: <20230411210422.24255-1-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000145B9:EE_|PH8PR12MB7424:EE_
X-MS-Office365-Filtering-Correlation-Id: 3c46b70e-6ee8-44ab-b23f-08db3ad06f05
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RN1w/NXEhX5pXZwc0HtFOl0/F+RYHwjclG4S+a/RqRG+m2nbAQvVjY0RovoCmHqlNh968194MTBhC8ECFgct7SZeAKkYmXNDYR6Ed+GdrjjpKwAyPqTbZxtufa5NfTsgwHuCuE5ATlLeYv5qrD153ip4g1bZ9u/Sfw+lekGksvzeE+zcQoRI7yGsrZ/zMEX1TkKTi6yF2e82W8s6vMF7v0zC6MEdK+qnam17685sXlJMJA7xNsyTXfiq7fMWmiR+Sja5J5DR+bhJCKcxr4OYpPH1Xke8pIwyrUQM5OIBYy5NXwUWGdikmY0iL852sUnToJNrtwXfopbV1suD7UQPxvYyFo0bO2FdPlB6qQIlGrjnOuZNhg2Z0/E//k0rfyV8WZ6bKqJ9+3lIlpcIkUOCPQaVX/un8OKG4M0I7YFDtZGdnVPAx9JX8pM2ghicaZllNtWLx0iIuiub3FGTdvVepkZMTZv8HuEcstlyxcXap3aIbEcC0isVJSNDc3w98DnExBh7J6Ojg1jcQPeXpfZOZ4xa5F2PIyBlGhbec2MTYNqjFib1b25Vn0vMuoOvro4FouEswSpVTs/t8qcZyDy9sCmzF1/gzD24iG1OcMKpQd6erYBY6CkxaULL5hsd38LltVg2MvYERXYDPe6W6G153+nrbpZZI+p2wXS9pNFt47zy+Iwb5fSjPmqJ+attx+twCbNj8YQM+J6uW6ZG3KoGX/1U/RMKdOCqXt38WG1azuA=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(396003)(346002)(136003)(451199021)(40470700004)(46966006)(36840700001)(86362001)(36756003)(40460700003)(40480700001)(26005)(186003)(83380400001)(1076003)(2906002)(966005)(336012)(47076005)(2616005)(6666004)(478600001)(426003)(8676002)(70206006)(4326008)(5660300002)(70586007)(54906003)(6916009)(44832011)(4744005)(8936002)(316002)(41300700001)(82310400005)(36860700001)(81166007)(356005)(82740400003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 21:05:09.2841
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3c46b70e-6ee8-44ab-b23f-08db3ad06f05
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000145B9.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB7424

Xen is supported on aarch64 via the xenpvh machine. The '--disable-tcg'
configure option fails the build for the aarch64 target, so drop it.

Link to the Xen on Arm patch series: https://mail.gnu.org/archive/html/qemu-devel/2023-02/msg03979.html

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 .gitlab-ci.d/crossbuilds.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
index 61b8ac86ee..6867839248 100644
--- a/.gitlab-ci.d/crossbuilds.yml
+++ b/.gitlab-ci.d/crossbuilds.yml
@@ -186,7 +186,7 @@ cross-amd64-xen-only:
   variables:
     IMAGE: debian-amd64-cross
     ACCEL: xen
-    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
+    EXTRA_CONFIGURE_OPTS: --disable-kvm
 
 cross-arm64-xen-only:
   extends: .cross_accel_build_job
@@ -195,4 +195,4 @@ cross-arm64-xen-only:
   variables:
     IMAGE: debian-arm64-cross
     ACCEL: xen
-    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
+    EXTRA_CONFIGURE_OPTS: --disable-kvm
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:28:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:28:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519957.807080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMTK-0001ob-5D; Tue, 11 Apr 2023 22:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519957.807080; Tue, 11 Apr 2023 22:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMTK-0001oU-2S; Tue, 11 Apr 2023 22:28:18 +0000
Received: by outflank-mailman (input) for mailman id 519957;
 Tue, 11 Apr 2023 22:28:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C0V3=AC=epam.com=prvs=8465a122a0=volodymyr_babchuk@srs-se1.protection.inumbo.net>)
 id 1pmMTJ-0001oO-0s
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:28:17 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2607fd85-d8b8-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 00:28:15 +0200 (CEST)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 33BH66r3003071;
 Tue, 11 Apr 2023 22:28:00 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2056.outbound.protection.outlook.com [104.47.12.56])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3pw8jehpd8-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 11 Apr 2023 22:28:00 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com (2603:10a6:803:31::18)
 by AS8PR03MB6870.eurprd03.prod.outlook.com (2603:10a6:20b:29f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 22:27:46 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3]) by VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3%3]) with mapi id 15.20.6277.034; Tue, 11 Apr 2023
 22:27:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2607fd85-d8b8-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KMfA1SKBtkQTIq0hIBTVNbba8WABg5EJ122/OmR8G+fKHrexC6TSLJWYRc2GZhwVt5pxy7M8BzzUhpc6eyyjl9/lxIjPhZj1zwpyoMtEQMQfugF4JSWUIgSP86g0vVGfNuY0KjKZDA4mgdFAiduAGsStgWunX6nMIwBtHIrjV5TTpEorSoHmwxvFwy1TWHzDUQU+Z2qr0b70a/L2RRRX1fSpBO3GT+bCWzfRFKy68I8Rgyz3X1qcKp+/Rgg7stvnAQGyZDL675ebCZECa8GwHdfZw2JxbYwNDkRrfEDcK1RdYDXHFbCvfyxC0PaalgAktbUIg8nZPJpL1dU502gIpw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pkiCktxs4VSNFb5+LI+rK6V+Cf5t5MxyAXN+OKTAnxI=;
 b=d9OrkvNA6ItuYIfSJOO4TQsKBtkzIdzTYIHbLGWpH0fCO77DNO7W8Teu8rcpl7/uiKtW0+bZg9wGNJz8Ya0aX01Re79UM74AqqqfBe8vlZOPVYucZwEPg5yki6Irx2RBVi1uTuEckze5Db1FChjA89Zf+BYTIGgI9/RFID+zPNoJ695CkgXS7RJkPKoPWYzeUPpChNBL+BNpZeQP+xannzAFCEq2Qhd5AP4vsewcpjs4gNd1ZLy9GJOqo1BxqRsY9ZXwt9j7AMPF6O+m6PIA1RYeihwzRaMrc5hwGUu0Gfmt9TEUL8ZI8nPIloE5c6EPWLasIE6h5pkLIusWFt+95w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pkiCktxs4VSNFb5+LI+rK6V+Cf5t5MxyAXN+OKTAnxI=;
 b=SD5sjgMZOvEPaEQL22m58WNiijf6em28tO0Bc6ZKnIqn1cWjlJ4p/34NSeWv/NoVMH9mSVIrqTgD++yktJr+ktwnT1Qn6C1FrHkfJC4BogfK/h8uaWpwDACm9MobVWr37Dg1TvYDeMDmo7r2GssgVnTXfjThpzmC0Q9ZHKGGGax/KnyalK2eOw0gKdAI9c65iijGz+Ml0FLnI9uWoGlRkvsgwD6PRuIrZnPURoAfai5XZTbg8uossrRarZIH9rZDFXBT7Kna2CIufHgy6c7M+C+RTddtp9zPpqlIZh8Gxszd5UXs11Q47YMuO7SN4mb95pOMrIVAsmPoFqHVAoV6RQ==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Andrew
 Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
        Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 1/6] xen: add reference counter support
Thread-Topic: [PATCH v3 1/6] xen: add reference counter support
Thread-Index: AQHZVrdyBx9pogLqRUq3RbhGdjZNSK79cEwAgClpiQA=
Date: Tue, 11 Apr 2023 22:27:45 +0000
Message-ID: <87h6tmxden.fsf@epam.com>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-2-volodymyr_babchuk@epam.com>
 <ZBMfpnzW4YdqEiA0@Air-de-Roger>
In-Reply-To: <ZBMfpnzW4YdqEiA0@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.8.9; emacs 28.2
Content-Type: text/plain; charset="utf-8"
Content-ID: <5E946897F8C693429C3A38C9D0704C15@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

Hello Roger,

Sorry for the late answer.

Roger Pau Monné <roger.pau@citrix.com> writes:

> On Tue, Mar 14, 2023 at 08:56:29PM +0000, Volodymyr Babchuk wrote:
>> We can use reference counter to ease up object lifetime management.
>> This patch adds very basic support for reference counters. refcnt
>> should be used in the following way:
>> 
>> 1. Protected structure should have refcnt_t field
>> 
>> 2. This field should be initialized with refcnt_init() during object
>> construction.
>> 
>> 3. If code holds a valid pointer to a structure/object it can increase
>> refcount with refcnt_get(). No additional locking is required.
>> 
>> 4. Code should call refcnt_put() before dropping pointer to a
>> protected structure. `destructor` is a call back function that should
>> destruct object and free all resources, including structure protected
>> itself. Destructor will be called if reference counter reaches zero.
>> 
>> 5. If code does not hold a valid pointer to a protected structure it
>> should use other locking mechanism to obtain a pointer. For example,
>> it should lock a list that hold protected objects.
>
> Sorry, I didn't look at the previous versions, but did we consider
> importing refcount_t and related logic from Linux?

Well, I considered this, but it is more complex. Linux has a separate
refcount module, which just counts references, plus the kref code, which
is capable of calling destructors. I am not sure Xen needs this
division. In any case, I tried to replicate the Linux behavior as
closely as possible. On the other hand, Jan suggests reworking the API,
so it will differ from the Linux one...

-- 
WBR, Volodymyr
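[Editorial note: the five-step usage pattern quoted above can be sketched as
follows. This is a minimal standalone sketch using C11 stdatomic rather than
Xen's atomic_t; the struct obj example type, obj_destructor(), and the exact
function bodies are illustrative assumptions, not the actual patch.]

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct { atomic_int refcnt; } refcnt_t;

/* 2. The counter is initialized during object construction. */
static void refcnt_init(refcnt_t *r) { atomic_store(&r->refcnt, 1); }

/* 3. Holders of a valid pointer may take an extra reference, no lock. */
static void refcnt_get(refcnt_t *r) { atomic_fetch_add(&r->refcnt, 1); }

/* 4. Drop a reference; the destructor runs when the count reaches zero. */
static void refcnt_put(refcnt_t *r, void (*destructor)(refcnt_t *))
{
    if ( atomic_fetch_sub(&r->refcnt, 1) == 1 )
        destructor(r);
}

/* 1. The protected structure embeds a refcnt_t field. */
struct obj {
    refcnt_t refcnt;
    int payload;
};

static bool obj_destroyed; /* observable side effect for the example */

static void obj_destructor(refcnt_t *r)
{
    /* Recover the containing object, then free everything, including
     * the protected structure itself. */
    struct obj *o = (struct obj *)((char *)r - offsetof(struct obj, refcnt));

    free(o);
    obj_destroyed = true;
}
```

With this sketch, an object created with one reference survives a paired
refcnt_get()/refcnt_put() and is freed only by the final refcnt_put().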


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:39:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:39:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519961.807090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMdu-0003Km-4l; Tue, 11 Apr 2023 22:39:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519961.807090; Tue, 11 Apr 2023 22:39:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMdu-0003Kf-1n; Tue, 11 Apr 2023 22:39:14 +0000
Received: by outflank-mailman (input) for mailman id 519961;
 Tue, 11 Apr 2023 22:39:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C0V3=AC=epam.com=prvs=8465a122a0=volodymyr_babchuk@srs-se1.protection.inumbo.net>)
 id 1pmMds-0003KZ-Mg
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:39:12 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ad00b5c5-d8b9-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 00:39:11 +0200 (CEST)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 33BEvnMg025232; Tue, 11 Apr 2023 22:39:01 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2053.outbound.protection.outlook.com [104.47.12.53])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3pvr2vmgca-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 11 Apr 2023 22:39:01 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com (2603:10a6:803:31::18)
 by GVXPR03MB8425.eurprd03.prod.outlook.com (2603:10a6:150:5::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 22:38:57 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3]) by VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3%3]) with mapi id 15.20.6277.034; Tue, 11 Apr 2023
 22:38:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad00b5c5-d8b9-11ed-b21e-6b7b168915f2
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap
	<george.dunlap@citrix.com>,
        Julien Grall <julien@xen.org>,
        Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 1/6] xen: add reference counter support
Thread-Topic: [PATCH v3 1/6] xen: add reference counter support
Thread-Index: AQHZVrdyBx9pogLqRUq3RbhGdjZNSK79pIaAgCk3xwA=
Date: Tue, 11 Apr 2023 22:38:56 +0000
Message-ID: <87a5zexcw0.fsf@epam.com>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-2-volodymyr_babchuk@epam.com>
 <57c7c2e4-ae68-19c4-2140-f5a41fc1a6d3@suse.com>
In-Reply-To: <57c7c2e4-ae68-19c4-2140-f5a41fc1a6d3@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.8.9; emacs 28.2
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0


Jan Beulich <jbeulich@suse.com> writes:

> On 14.03.2023 21:56, Volodymyr Babchuk wrote:
>> +static inline void refcnt_put(refcnt_t *refcnt, void (*destructor)(refcnt_t *refcnt))
>
> Hmm, this means all callers need to pass (and agree on) the supposedly
> single destructor function that needs calling. Wouldn't the destructor
> function better be stored elsewhere (and supplied to e.g. refcnt_init())?
>

I tried to replicate the Linux approach, where the destructor function
is provided on every call. On the other hand, kref_put() is often
called from a wrapper function (like pdev_put() in our case), so in
practice the destructor is provided in only one place.

>> +{
>> +    int ret = atomic_dec_return(&refcnt->refcnt);
>> +
>> +    if ( ret == 0 )
>> +        destructor(refcnt);
>> +
>> +    if ( unlikely(ret < 0))
>> +    {
>> +        atomic_set(&refcnt->refcnt, REFCNT_SATURATED);
>
> It's undefined whether *refcnt still exists once the destructor was
> called (which would have happened before we can make it here). While
> even the atomic_dec_return() above would already have acted in an
> unknown way in this case I don't think it's a good idea to access the
> object yet another time. (Same for the "negative" case in
> refcnt_get() then.)

Okay, then I'll remove saturation logic.
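
[Editorial note: Jan's concern is that once the destructor has run, *refcnt
may already be freed, so the later atomic_set() is a potential use-after-free.
A hedged sketch of a refcnt_put() that never touches the object after the
destructor might look like this, again using C11 stdatomic rather than Xen's
atomic_t; the names and the underflow bookkeeping are illustrative, not the
patch.]

```c
#include <stdatomic.h>

typedef struct { atomic_int refcnt; } refcnt_t;

static int underflows;   /* example bookkeeping instead of saturating */
static int destroy_calls;

static void example_destructor(refcnt_t *r) { (void)r; destroy_calls++; }

static void refcnt_put(refcnt_t *refcnt, void (*destructor)(refcnt_t *))
{
    /* Read the previous value once, before the object can go away. */
    int old = atomic_fetch_sub(&refcnt->refcnt, 1);

    if ( old <= 0 )
    {
        /* Counting bug: report it without writing to *refcnt, which
         * may belong to an object another CPU is already freeing. */
        underflows++;
        return;
    }

    if ( old == 1 )
        destructor(refcnt); /* last reference: *refcnt is dead after this */
}
```

The key ordering point is that all reads of *refcnt happen before the
destructor can run, and nothing is written back afterwards.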

-- 
WBR, Volodymyr


From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:48:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519967.807100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmO-0004tx-2t; Tue, 11 Apr 2023 22:48:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519967.807100; Tue, 11 Apr 2023 22:48:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmN-0004tq-W9; Tue, 11 Apr 2023 22:47:59 +0000
Received: by outflank-mailman (input) for mailman id 519967;
 Tue, 11 Apr 2023 22:47:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmMmM-0004tk-5y
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:47:58 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on2060d.outbound.protection.outlook.com
 [2a01:111:f400:7e8d::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e5186b5d-d8ba-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 00:47:55 +0200 (CEST)
Received: from DS7PR03CA0166.namprd03.prod.outlook.com (2603:10b6:5:3b2::21)
 by DM4PR12MB6349.namprd12.prod.outlook.com (2603:10b6:8:a4::8) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6277.35; Tue, 11 Apr 2023 22:47:51 +0000
Received: from DS1PEPF0000E650.namprd02.prod.outlook.com
 (2603:10b6:5:3b2:cafe::7d) by DS7PR03CA0166.outlook.office365.com
 (2603:10b6:5:3b2::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.40 via Frontend
 Transport; Tue, 11 Apr 2023 22:47:51 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DS1PEPF0000E650.mail.protection.outlook.com (10.167.18.6) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Tue, 11 Apr 2023 22:47:51 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:47:50 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 15:47:50 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 17:47:50 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5186b5d-d8ba-11ed-8611-37d641c3527e
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>
Subject: [QEMU][PATCH v6 00/10] Introduce xenpvh machine for arm architecture
Date: Tue, 11 Apr 2023 15:47:36 -0700
Message-ID: <20230411224746.16152-1-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hi,
Rebased and resending the series against the latest QEMU, as it has been
quite some time. There are no code changes.

Also, this series depends on the following gitlab-ci patch:
https://lists.gnu.org/archive/html/qemu-devel/2023-04/msg01641.html

This series adds the xenpvh machine for aarch64. The motivation behind
creating the xenpvh machine with IOREQ and TPM support was to enable each
guest on Xen aarch64 to have its own unique, emulated TPM.

This series does the following:
    1. Moves common Xen functionality from hw/i386/xen to hw/xen/ so it can
       be used for aarch64.
    2. Adds a minimal xenpvh arm machine which creates an IOREQ server and
       supports TPM.

Also, checkpatch.pl fails for patches 03/12 and 06/12. These failures are
due to moving old code, which did not follow the QEMU coding style, to a
new place. No new code was added.

Regards,
Vikram

ChangeLog:
    v5->v6:
    rebased series with latest branch. No code change.

    v4->v5:
        Fix 3 lines of code missing from xen_exit_notifier() due to rebase.
        Fix 07/10 patch subject.

    v3->v4:
        Removed the out of series 04/12 patch.

    v2->v3:
        1. Change machine name to xenpvh as per Jurgen's input.
        2. Add docs/system/xenpvh.rst documentation.
        3. Removed GUEST_TPM_BASE and added tpm_base_address as property.
        4. Correct CONFIG_TPM related issues.
        5. Added xen_register_backend() function call to xen_register_ioreq().
        6. Added Oleksandr's suggestion i.e. removed extra interface opening and
           used accel=xen option

    v1 -> v2
    Merged patch 05 and 06.
    04/12: xen-hvm-common.c:
        1. Moved xen_be_init() and xen_be_register_common() from
            xen_register_ioreq() to xen_register_backend().
        2. Changed g_malloc to g_new and perror -> error_setg_errno.
        3. Created a local subroutine function for Xen_IOREQ_register.
        4. Fixed build issues with inclusion of xenstore.h.
        5. Fixed minor errors.

Stefano Stabellini (5):
  hw/i386/xen/xen-hvm: move x86-specific fields out of XenIOState
  xen-hvm: reorganize xen-hvm and move common function to xen-hvm-common
  include/hw/xen/xen_common: return error from xen_create_ioreq_server
  hw/xen/xen-hvm-common: skip ioreq creation on ioreq registration
    failure
  meson.build: do not set have_xen_pci_passthrough for aarch64 targets

Vikram Garhwal (5):
  hw/i386/xen/: move xen-mapcache.c to hw/xen/
  hw/i386/xen: rearrange xen_hvm_init_pc
  hw/xen/xen-hvm-common: Use g_new and error_report
  hw/arm: introduce xenpvh machine
  meson.build: enable xenpv machine build for ARM

 docs/system/arm/xenpvh.rst       |   34 +
 docs/system/target-arm.rst       |    1 +
 hw/arm/meson.build               |    2 +
 hw/arm/xen_arm.c                 |  181 +++++
 hw/i386/meson.build              |    1 +
 hw/i386/xen/meson.build          |    1 -
 hw/i386/xen/trace-events         |   19 -
 hw/i386/xen/xen-hvm.c            | 1075 +++---------------------------
 hw/xen/meson.build               |    7 +
 hw/xen/trace-events              |   19 +
 hw/xen/xen-hvm-common.c          |  879 ++++++++++++++++++++++++
 hw/{i386 => }/xen/xen-mapcache.c |    0
 include/hw/arm/xen_arch_hvm.h    |    9 +
 include/hw/i386/xen_arch_hvm.h   |   11 +
 include/hw/xen/arch_hvm.h        |    5 +
 include/hw/xen/xen-hvm-common.h  |   99 +++
 include/hw/xen/xen_native.h      |   13 +-
 meson.build                      |    4 +-
 18 files changed, 1350 insertions(+), 1010 deletions(-)
 create mode 100644 docs/system/arm/xenpvh.rst
 create mode 100644 hw/arm/xen_arm.c
 create mode 100644 hw/xen/xen-hvm-common.c
 rename hw/{i386 => }/xen/xen-mapcache.c (100%)
 create mode 100644 include/hw/arm/xen_arch_hvm.h
 create mode 100644 include/hw/i386/xen_arch_hvm.h
 create mode 100644 include/hw/xen/arch_hvm.h
 create mode 100644 include/hw/xen/xen-hvm-common.h

-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:48:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519969.807120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmS-0005Ov-KU; Tue, 11 Apr 2023 22:48:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519969.807120; Tue, 11 Apr 2023 22:48:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmS-0005Om-G2; Tue, 11 Apr 2023 22:48:04 +0000
Received: by outflank-mailman (input) for mailman id 519969;
 Tue, 11 Apr 2023 22:48:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmMmQ-0004tk-QV
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:48:02 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2060c.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::60c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e857821c-d8ba-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 00:48:00 +0200 (CEST)
Received: from DM5PR07CA0070.namprd07.prod.outlook.com (2603:10b6:4:ad::35) by
 PH8PR12MB6817.namprd12.prod.outlook.com (2603:10b6:510:1c8::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6254.33; Tue, 11 Apr 2023 22:47:54 +0000
Received: from DS1PEPF0000E653.namprd02.prod.outlook.com
 (2603:10b6:4:ad:cafe::cc) by DM5PR07CA0070.outlook.office365.com
 (2603:10b6:4:ad::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.40 via Frontend
 Transport; Tue, 11 Apr 2023 22:47:54 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DS1PEPF0000E653.mail.protection.outlook.com (10.167.18.9) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Tue, 11 Apr 2023 22:47:54 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:47:53 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 15:47:53 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 17:47:52 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e857821c-d8ba-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Hz5163pAoWJ2ZQ+zoM+kkRyAWo7OO6avTpHz7Bu983vvbR86QwVdAw5Z5uKjbSOhB1swJskoQt71Bbt0x2q/ehy7SuSMcJEe8d+VxbnIbKjjx62Jwb1ajqIrA8ppf8Y7XLA6P+/rsiwj9DVG00R0k9QjArTGgE6PQUW2QY6Elu6Ssl/KvaNnGRXOu9f9Q0ylMT7K3PuiSn1lGrzwJZINPC7GLJdF1NxwJDBS7SRZ3g6i/94NrOuWvI3g6etbT7I3MK2N0uAA11OMxq88WfldGX6qXJP0F1DuBClMjtQuRLh9oFVlVdgBrxZqxRfIK7hEiE3YS7RS3LsmL0+aXGVNEA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=oDSuDO2maVMFN1l7RQU+96HCYXUpMutEwT5v2DbnewQ=;
 b=mEnJB6Ff/wZ04ogd0bSMarz5lOEj93I0ED1p0VnJZyxWXkNDmzOodZW0FqxnwupleHfYqgDTPZ/ybKOqhqbNNRheLQ6eR3TFK2oXZ3Lfabo1v9Dn2m2230ZnA3zMWO+YO2bRHa5yY1wEhPnIENhBoOWDJSTWtuAj4iHDxqY0NXtPcW3wWVBbVR6Qea/8yU23aF/Qm+11dGjFne3cjPcOPgRJDHTI50FeOhUNyoUZaK5OL+ktCo+16m6a9u/FUz1UkUScqCPLRq4irvdWtg27bapSxJ931PA9wPRcaBO8HLjFMA0tZSN4uZiETZGq/qZwLqOqQbrE9bexC9EC+legyw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oDSuDO2maVMFN1l7RQU+96HCYXUpMutEwT5v2DbnewQ=;
 b=biK3WJGL8H0acN6Y+3qlv00oO0m6n+AghO2mN4hHs4qSfV140GDaIwULltVJ+vsMpxA4NQW0Qkg3co9C8j2m/j+bZxYev6Hynvv1Zk+ON9MnfTsTmn+laYG7+p4itLX2wNir/f7H9549uurythBvJAjTjdZlva8aBCGIr6khxPE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, "Michael S. Tsirkin" <mst@redhat.com>, "Marcel
 Apfelbaum" <marcel.apfelbaum@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>, Stefano Stabellini <sstabellini@kernel.org>, "Anthony
 Perard" <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: [QEMU][PATCH v6 01/10] hw/i386/xen/: move xen-mapcache.c to hw/xen/
Date: Tue, 11 Apr 2023 15:47:37 -0700
Message-ID: <20230411224746.16152-2-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411224746.16152-1-vikram.garhwal@amd.com>
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E653:EE_|PH8PR12MB6817:EE_
X-MS-Office365-Filtering-Correlation-Id: ba59163f-ef3b-4fb4-f65e-08db3adec9ae
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	b+FTQ4VVXuGv7PY1pGM3+BrSHmgZQS0IbtfZO1oPmAXmuWvTUK7RdGQ57q+uhm2rBsU59uPQvwMF3fxfAyeC/9uq06+igiXDfgcOp+J03ZhgBpbOSk11LPPa+BVgTAF5riDvLGyAVb9gyqhoOWVKbcpmX4m+7EsPls7edeh24bNANKsdx31QXDFcuQ18o37Z+wE0zoGhOB38AVOm4VMBPt3Sb/GW5GDLeBN6rFCUrGp8yJdkJdODwCfEAjmPZY7MMIK3Qvjba3fVbBqdZbh2zoRGzot1STMhBnXbzQdKIPdRnJd5aUC84v70Qv4ACMoLKig2g2l44dWeUFT36T6v0CNi51j69aBXpu+rgsI9qmE0ZE+xH373G3wABZhKChUPwR9Bu5WxMl07e2n8iKQdKcXI8K8VZmH4Y96DuITJsqc/akeKCLhCBuLt3PB/gtHFJUhaQSrX3u++DZhWe53vDMRPZ2jXYcC84ltDKBy8D2PUTdD+VN0JOn+gtZiWtlEhD5HUPb3xRF7gOHp6L9CUTgCRXmpC9+bPJw1bi0T2QkpLzTWEzHnpxiFuEIywt4mMPI9yzIm6yzrwzeGkHaPQ9/RgvAXSVht/BZ9aYfM/Ec5L6190C6hcBAM6eq/2KxSndDnU75e+OzE7z5DDwy+H6InAW92q20Ou5zy4brMvwBPWUsLvWm+OPovxsQIXb8AM1N9HOJjCtIaBBLb07Au6MwHrRn3dCQnBZY7qg5Qv7lQ=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(136003)(396003)(346002)(451199021)(40470700004)(46966006)(36840700001)(2906002)(86362001)(7416002)(336012)(426003)(82740400003)(5660300002)(40480700001)(478600001)(6916009)(2616005)(41300700001)(8936002)(36756003)(6666004)(40460700003)(4326008)(316002)(44832011)(70206006)(26005)(8676002)(70586007)(54906003)(356005)(186003)(1076003)(81166007)(82310400005)(47076005)(83380400001)(36860700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 22:47:54.3112
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ba59163f-ef3b-4fb4-f65e-08db3adec9ae
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E653.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB6817

xen-mapcache.c contains common functions which can be used to enable Xen on
aarch64 with IOREQ handling. Move it out of hw/i386/xen into hw/xen to make it
accessible to both aarch64 and x86.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 hw/i386/meson.build              | 1 +
 hw/i386/xen/meson.build          | 1 -
 hw/i386/xen/trace-events         | 5 -----
 hw/xen/meson.build               | 4 ++++
 hw/xen/trace-events              | 5 +++++
 hw/{i386 => }/xen/xen-mapcache.c | 0
 6 files changed, 10 insertions(+), 6 deletions(-)
 rename hw/{i386 => }/xen/xen-mapcache.c (100%)

diff --git a/hw/i386/meson.build b/hw/i386/meson.build
index 213e2e82b3..cfdbfdcbcb 100644
--- a/hw/i386/meson.build
+++ b/hw/i386/meson.build
@@ -33,5 +33,6 @@ subdir('kvm')
 subdir('xen')
 
 i386_ss.add_all(xenpv_ss)
+i386_ss.add_all(xen_ss)
 
 hw_arch += {'i386': i386_ss}
diff --git a/hw/i386/xen/meson.build b/hw/i386/xen/meson.build
index 2e64a34e16..3dc4c4f106 100644
--- a/hw/i386/xen/meson.build
+++ b/hw/i386/xen/meson.build
@@ -1,6 +1,5 @@
 i386_ss.add(when: 'CONFIG_XEN', if_true: files(
   'xen-hvm.c',
-  'xen-mapcache.c',
   'xen_apic.c',
   'xen_pvdevice.c',
 ))
diff --git a/hw/i386/xen/trace-events b/hw/i386/xen/trace-events
index 5d6be61090..a0c89d91c4 100644
--- a/hw/i386/xen/trace-events
+++ b/hw/i386/xen/trace-events
@@ -21,8 +21,3 @@ xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
 cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
 cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
 
-# xen-mapcache.c
-xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
-xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
-xen_map_cache_return(void* ptr) "%p"
-
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index 19c6aabc7c..202752e557 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -26,3 +26,7 @@ else
 endif
 
 specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: xen_specific_ss)
+
+xen_ss = ss.source_set()
+
+xen_ss.add(when: 'CONFIG_XEN', if_true: files('xen-mapcache.c'))
diff --git a/hw/xen/trace-events b/hw/xen/trace-events
index 55c9e1df68..f977c7c8c6 100644
--- a/hw/xen/trace-events
+++ b/hw/xen/trace-events
@@ -41,3 +41,8 @@ xs_node_vprintf(char *path, char *value) "%s %s"
 xs_node_vscanf(char *path, char *value) "%s %s"
 xs_node_watch(char *path) "%s"
 xs_node_unwatch(char *path) "%s"
+
+# xen-mapcache.c
+xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
+xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
+xen_map_cache_return(void* ptr) "%p"
diff --git a/hw/i386/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
similarity index 100%
rename from hw/i386/xen/xen-mapcache.c
rename to hw/xen/xen-mapcache.c
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:48:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519968.807110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmR-00059C-BH; Tue, 11 Apr 2023 22:48:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519968.807110; Tue, 11 Apr 2023 22:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmR-000595-7b; Tue, 11 Apr 2023 22:48:03 +0000
Received: by outflank-mailman (input) for mailman id 519968;
 Tue, 11 Apr 2023 22:48:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmMmP-0004tk-E8
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:48:01 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20625.outbound.protection.outlook.com
 [2a01:111:f400:7e88::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e876fe83-d8ba-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 00:47:59 +0200 (CEST)
Received: from CYZPR12CA0006.namprd12.prod.outlook.com (2603:10b6:930:8b::22)
 by PH7PR12MB5830.namprd12.prod.outlook.com (2603:10b6:510:1d5::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35; Tue, 11 Apr
 2023 22:47:56 +0000
Received: from CY4PEPF0000C97D.namprd02.prod.outlook.com
 (2603:10b6:930:8b:cafe::d6) by CYZPR12CA0006.outlook.office365.com
 (2603:10b6:930:8b::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.36 via Frontend
 Transport; Tue, 11 Apr 2023 22:47:55 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CY4PEPF0000C97D.mail.protection.outlook.com (10.167.241.136) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.25 via Frontend Transport; Tue, 11 Apr 2023 22:47:55 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:47:55 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:47:54 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 17:47:53 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e876fe83-d8ba-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ml8IWApAM/Y+CpPbi26hXoz1YL4wM93WnZqBELl/H4zJX5i5gZPZt2DOponO8axbIp3noPREdbZVku680ZxyyfHNFQmPW6BtMkeTjLYfCUAt9kt4iMV8Y72rNrTy1LLP9o2PHlbmnNzpgniIcGFUDBzE2shlBGaFeWZHr54F4xJbgBEtCnkKNwBsRS/y1Q/550xuYNs/3hMAR5S5O0g3MMqTBKP7OQUPidlmczNMzZI4YO5/g+Wz3keXqq4aO9IxDBK0mrbVwKsGMODjCgRaKdejCxKBBZ20ZqHBMCddJrS1U/10ARozQt+cKYhtdnCf8GcPsqsszEoWzi2kBtzARQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gNOkxasZFjLHtu5kunUMzoJvRLjjI2EYoAqF9wsm5vg=;
 b=lW9OBEfC+geE5ZV8JuGUvImaLGKjHO5xconbE/vP4EhPW9GMouL43COlAOhNyLO69dLnXv2JsXbJfQT7cf9tTUfM+gcZcC4oVirwMyZ00TmtgLGaSmKHZ3lXJyQ3SAHpJK/x96SjNOJH90f8aIiMOOWtGpxNUhtxAX5YhxAeZkoBawFtukWDBs758C79vuO7DaT0hh6r2daxyD/vhn60Y9fB2AYuRUSMEOV9kzOEnRkkc7bXapO1QBNyiQyOlq0SVKIT2u/P6RR+UHopjm60BePnfJfcHCe88cd4d7ugwC5pBSZ4zC3ZqYicH73BhMfnZxezmuXbB5Cop3cicw39nA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gNOkxasZFjLHtu5kunUMzoJvRLjjI2EYoAqF9wsm5vg=;
 b=oKegRFjWBYeZxVDIZ+8JT+EP2sK3MNCEH7wS0/YCp3OjDYhRN2qzW38YpxBU7Oo9kXzAymrxv9i/6JmQfXC4TdX6cKfFNmzpAlZytU9k4vstKCym6SVx2WUNQXwSx5KUcsUgFwzKuFRMkT6ZZJJmREmvXZNDEEhnsPvXsm1Ytag=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>, Marcel Apfelbaum
	<marcel.apfelbaum@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>, "Richard
 Henderson" <richard.henderson@linaro.org>, Eduardo Habkost
	<eduardo@habkost.net>
Subject: [QEMU][PATCH v6 02/10] hw/i386/xen: rearrange xen_hvm_init_pc
Date: Tue, 11 Apr 2023 15:47:38 -0700
Message-ID: <20230411224746.16152-3-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411224746.16152-1-vikram.garhwal@amd.com>
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000C97D:EE_|PH7PR12MB5830:EE_
X-MS-Office365-Filtering-Correlation-Id: dbb9fae0-3dc9-4b6c-28fc-08db3adeca75
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1DUA5dfU5ZEonWalEs4FC78dDWo+c5tHirqd+i1O168JJlsmlPIuXgEyTi/NGrKTt7sdxa9Up+mh1cDVTXHlUn8OfHw2gtVSGpkFjMd5ZKrwUiWvOw/EY2/JFQPXfYFGQUn8Ko79yK3o35i/wB4NioiKWdoVwbuw/60BGvLcsoCjxpsB0rFwClgbaofON4E33YMdmyCEsg+4W8JEePvyyKvuAPMGf10tmWo3Zj4fOOiJHMzae2VthQJjN4gEemN+69Bhadw0QwNgnAHZ3zUMmDGTBDdqe5LBr5b/JVAqzUQ/LqFukieK1v8vLjr842DaQG+CzERxkb7+iwwkbvEwb5QW88DH6h68ZxDhQf5q6C3Jq2kQSgWXfdWYyejk4XAUps3g2y+IV+CKitw40RPzsHpsn2Uzu/oWXqQMP7LJJEp/fnpfN1skxwacp1AgvghmWuNAX6FvXb+3KDD6dnLd3zbBT04CKZ5fRmOq0MO9UlbNrDJsNzBCQQtXfIeG6JYsA3CpuHXMyiWnkcZ8bMQ+GZ770DQA5+qfc8GvyGjAgMELqP6X8UqEuI8UOhlx2HgJ+YM7KyXbB0ZmwI6vTQnk43fxBYGiRA9bg3oE3K231yOdbH5d+Bq8lVKaHdXctX00opiGD3TXOY+/h7mks6yeSSRAQMVR1OApryO7aia/m4fU/uFC7E98EEl/mybZHxrtuS9xS1OD3eYY+kum7PQ8iQvcmWGaUYVT+mqLs5qR3LM=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(136003)(346002)(376002)(451199021)(40470700004)(46966006)(36840700001)(2906002)(86362001)(36756003)(82310400005)(40480700001)(6666004)(2616005)(426003)(83380400001)(186003)(336012)(47076005)(36860700001)(26005)(1076003)(70586007)(478600001)(6916009)(4326008)(316002)(70206006)(40460700003)(7416002)(8936002)(356005)(82740400003)(41300700001)(8676002)(81166007)(5660300002)(44832011)(54906003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 22:47:55.5946
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dbb9fae0-3dc9-4b6c-28fc-08db3adeca75
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000C97D.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB5830

In preparation for moving most of the xen-hvm code to an arch-neutral location,
move the non-IOREQ references:
- xen_get_vmport_regs_pfn
- xen_suspend_notifier
- xen_wakeup_notifier
- xen_ram_init

towards the end of the xen_hvm_init_pc() function.

This keeps the common IOREQ functions in one place; they will be moved into a
new function in the next patch in order to make them common to both x86 and
aarch64 machines.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 hw/i386/xen/xen-hvm.c | 49 ++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 56641a550e..5403ac4b89 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -1419,12 +1419,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     state->exit.notify = xen_exit_notifier;
     qemu_add_exit_notifier(&state->exit);
 
-    state->suspend.notify = xen_suspend_notifier;
-    qemu_register_suspend_notifier(&state->suspend);
-
-    state->wakeup.notify = xen_wakeup_notifier;
-    qemu_register_wakeup_notifier(&state->wakeup);
-
     /*
      * Register wake-up support in QMP query-current-machine API
      */
@@ -1435,23 +1429,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
         goto err;
     }
 
-    rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
-    if (!rc) {
-        DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
-        state->shared_vmport_page =
-            xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
-                                 1, &ioreq_pfn, NULL);
-        if (state->shared_vmport_page == NULL) {
-            error_report("map shared vmport IO page returned error %d handle=%p",
-                         errno, xen_xc);
-            goto err;
-        }
-    } else if (rc != -ENOSYS) {
-        error_report("get vmport regs pfn returned error %d, rc=%d",
-                     errno, rc);
-        goto err;
-    }
-
     /* Note: cpus is empty at this point in init */
     state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
 
@@ -1490,7 +1467,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 #else
     xen_map_cache_init(NULL, state);
 #endif
-    xen_ram_init(pcms, ms->ram_size, ram_memory);
 
     qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
 
@@ -1511,6 +1487,31 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
 
+    state->suspend.notify = xen_suspend_notifier;
+    qemu_register_suspend_notifier(&state->suspend);
+
+    state->wakeup.notify = xen_wakeup_notifier;
+    qemu_register_wakeup_notifier(&state->wakeup);
+
+    rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
+    if (!rc) {
+        DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
+        state->shared_vmport_page =
+            xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
+                                 1, &ioreq_pfn, NULL);
+        if (state->shared_vmport_page == NULL) {
+            error_report("map shared vmport IO page returned error %d handle=%p",
+                         errno, xen_xc);
+            goto err;
+        }
+    } else if (rc != -ENOSYS) {
+        error_report("get vmport regs pfn returned error %d, rc=%d",
+                     errno, rc);
+        goto err;
+    }
+
+    xen_ram_init(pcms, ms->ram_size, ram_memory);
+
     /* Disable ACPI build because Xen handles it */
     pcms->acpi_build_enabled = false;
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:48:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:48:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519970.807129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmU-0005fW-3J; Tue, 11 Apr 2023 22:48:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519970.807129; Tue, 11 Apr 2023 22:48:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmU-0005fP-0R; Tue, 11 Apr 2023 22:48:06 +0000
Received: by outflank-mailman (input) for mailman id 519970;
 Tue, 11 Apr 2023 22:48:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmMmS-0004tk-VH
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:48:04 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7e89::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e882a2ec-d8ba-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 00:48:02 +0200 (CEST)
Received: from DM6PR13CA0038.namprd13.prod.outlook.com (2603:10b6:5:134::15)
 by PH7PR12MB8016.namprd12.prod.outlook.com (2603:10b6:510:26b::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6254.35; Tue, 11 Apr
 2023 22:47:57 +0000
Received: from DS1PEPF0000E651.namprd02.prod.outlook.com
 (2603:10b6:5:134:cafe::1e) by DM6PR13CA0038.outlook.office365.com
 (2603:10b6:5:134::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28 via Frontend
 Transport; Tue, 11 Apr 2023 22:47:56 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DS1PEPF0000E651.mail.protection.outlook.com (10.167.18.7) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Tue, 11 Apr 2023 22:47:56 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:47:56 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:47:55 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 17:47:55 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e882a2ec-d8ba-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PhmBbkW6M1d9ZSz4x2jESuuyePSazESG2fD0YFKdqvuflVga/72KwMHuXUEOSbwSVXLOSCvqfC0h2i/WAWphU9EskfbMR0JeJPsh6Wu2OJfqmaZnE3ctyj3HnfunPgYOOWR9Ixr4uXPyaHGnUDQgv5X+H33khPYkzsEuCbayhSVxNQEdfY6yy388ZS10C7bUM0So+dpIquYDfTKNWXPBeSr5nifpzJdAwdiejmOeDyPBr3L2Imdd21m+XAZ2fPP6JD0206PD5CoqstcLhoIG/6nzfYIKLcmZHrCwwOSqYpnglvVL+b0evvnOBLCuox/xAnld9VD4AVsVPKnITZbItg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=moDpzrNRsIQ+F6BoWEVODEFb14KOCFj6iOm1H5bs3as=;
 b=Wans1hYuBlTzN/3t7kfC+V/wP468ERZEBREgIAB21ts2qahc6HzrknCBr+xYBYiFgQRwIzDKNm7omL4PhKhTdzV/Af5Qebpado190R2cZxbc8EEK61yjkorypyFzjEiySKJoNBu0mkiDCBkVz3oKR+YW0vT1GfM6+WN98IJvrpFXSIJiwXaz/BWgYr1TI3oT4XMRC/l+rAsi+XpOKSAhm5Zwk964bva8WsoAIhWVFigbCahY37tNfIhTn5QdrJekLOoeXhYZ9l4xAA2dxRTkCYScnsHVMs8zq2YHizVbg7afxYb4rwbH50xcSg7BXrAoQUjU12RZTeD44gUZ7aqeAg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=moDpzrNRsIQ+F6BoWEVODEFb14KOCFj6iOm1H5bs3as=;
 b=gcw8KrKoSCZWiW+HFJBHzpP+cV2+8hLtl+UcIPqlnwFiACIG+QwQp4+Y+Q7PR8/v33hca1qiCdTA+YMctLeDgy2XbcspB0RI7iKmSO8koV7q2wxtWOZxXJ78qkIHLtqx1B+tCy/t8tm9a1A6amUbgY6E6OS0qV5EXnDWG7AnKWA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson
	<richard.henderson@linaro.org>, Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>, Marcel Apfelbaum
	<marcel.apfelbaum@gmail.com>
Subject: [QEMU][PATCH v6 03/10] hw/i386/xen/xen-hvm: move x86-specific fields out of XenIOState
Date: Tue, 11 Apr 2023 15:47:39 -0700
Message-ID: <20230411224746.16152-4-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411224746.16152-1-vikram.garhwal@amd.com>
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E651:EE_|PH7PR12MB8016:EE_
X-MS-Office365-Filtering-Correlation-Id: 30c9e9f2-43c7-4571-57be-08db3adecb1b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hEYdzjc5CRBT+IE/TU3KP1GB1I7xQjHd/SE6vP9u3A0EP85TmaOazB2J6nii2oIu0Q/BmZTCvAM/uhDv9rks8uj3/fWn5iCZ2X+wYLXae1uoJYDMxpiyg1V6CbO8suqQYSbvzlTBlQH98Tq0M6RfVODMiVx4ISWVd9EEHZcZecapnALI3GYr3gGGTc/9Wz4ZhZDE8Hk+LefvbicrIsbWQ4HvncGjwJ3gX2WbwySgxHxUgv7ZjGixPkq7u8JjnV6VXFRiIPFm5dZFZQgebLH4SnIwnLry7tEwP4LtPnhMJhLPJpqKqD1uF2OnW2XduQ+Ki4OW4kZpdCbR/iFGaOpdOM3/kmGHr1opUyghv3ID3pFcmrq2NrmFBK0ojSzOH5+ok6DrZj+fgQjZq4IVnRRb4Qvh8RFxPRmN9SJ2raBldidCVbRTT6t9eMXApZujiRCzz05z29q/88Zh3jtVnyUIatFYqdjH41/8Rv7TT+HRn08PBQfRGf5y0ZQ5U9r1Tkz/Hordd8CHctjm2hQxNFWzWoglnifxKgaes78UqFzMC3lAKjZXdBuLSCnY4ypOLL4SZ4Bqf9hDGBke72E4EmU9OxU9Jws9haN/TF/bKsL0t5sdeNNxjV0I9RXUlSh0rBaVgTD9VnOeM1UphzXNaAwnnP9XK0sCg9nvl+WB9LeSXzWP7S5qBG85scXwgk7JeOYavMb/GM+39/8oV+ETlHVeDw8ZmvRoC8nFWe5uddlN9rs=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(396003)(376002)(39860400002)(451199021)(46966006)(36840700001)(40470700004)(5660300002)(7416002)(2906002)(41300700001)(54906003)(316002)(186003)(8936002)(8676002)(44832011)(70206006)(70586007)(478600001)(4326008)(6916009)(6666004)(47076005)(1076003)(356005)(336012)(26005)(40460700003)(81166007)(83380400001)(426003)(36756003)(40480700001)(2616005)(66574015)(82740400003)(36860700001)(82310400005)(86362001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 22:47:56.6993
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 30c9e9f2-43c7-4571-57be-08db3adecb1b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E651.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB8016

From: Stefano Stabellini <stefano.stabellini@amd.com>

In preparation for moving most of the xen-hvm code to an arch-neutral location, move:
- shared_vmport_page
- log_for_dirtybit
- dirty_bitmap
- suspend
- wakeup

out of the XenIOState struct, as these are only used on x86 (especially the
ones related to dirty logging). The updated XenIOState can then be used for
both aarch64 and x86.

Also, remove free_phys_offset as it was unused.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 hw/i386/xen/xen-hvm.c | 58 ++++++++++++++++++++-----------------------
 1 file changed, 27 insertions(+), 31 deletions(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 5403ac4b89..6be5a250a8 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -74,6 +74,7 @@ struct shared_vmport_iopage {
 };
 typedef struct shared_vmport_iopage shared_vmport_iopage_t;
 #endif
+static shared_vmport_iopage_t *shared_vmport_page;
 
 static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
 {
@@ -96,6 +97,11 @@ typedef struct XenPhysmap {
 } XenPhysmap;
 
 static QLIST_HEAD(, XenPhysmap) xen_physmap;
+static const XenPhysmap *log_for_dirtybit;
+/* Buffer used by xen_sync_dirty_bitmap */
+static unsigned long *dirty_bitmap;
+static Notifier suspend;
+static Notifier wakeup;
 
 typedef struct XenPciDevice {
     PCIDevice *pci_dev;
@@ -106,7 +112,6 @@ typedef struct XenPciDevice {
 typedef struct XenIOState {
     ioservid_t ioservid;
     shared_iopage_t *shared_page;
-    shared_vmport_iopage_t *shared_vmport_page;
     buffered_iopage_t *buffered_io_page;
     xenforeignmemory_resource_handle *fres;
     QEMUTimer *buffered_io_timer;
@@ -126,14 +131,8 @@ typedef struct XenIOState {
     MemoryListener io_listener;
     QLIST_HEAD(, XenPciDevice) dev_list;
     DeviceListener device_listener;
-    hwaddr free_phys_offset;
-    const XenPhysmap *log_for_dirtybit;
-    /* Buffer used by xen_sync_dirty_bitmap */
-    unsigned long *dirty_bitmap;
 
     Notifier exit;
-    Notifier suspend;
-    Notifier wakeup;
 } XenIOState;
 
 /* Xen specific function for piix pci */
@@ -463,10 +462,10 @@ static int xen_remove_from_physmap(XenIOState *state,
     }
 
     QLIST_REMOVE(physmap, list);
-    if (state->log_for_dirtybit == physmap) {
-        state->log_for_dirtybit = NULL;
-        g_free(state->dirty_bitmap);
-        state->dirty_bitmap = NULL;
+    if (log_for_dirtybit == physmap) {
+        log_for_dirtybit = NULL;
+        g_free(dirty_bitmap);
+        dirty_bitmap = NULL;
     }
     g_free(physmap);
 
@@ -627,16 +626,16 @@ static void xen_sync_dirty_bitmap(XenIOState *state,
         return;
     }
 
-    if (state->log_for_dirtybit == NULL) {
-        state->log_for_dirtybit = physmap;
-        state->dirty_bitmap = g_new(unsigned long, bitmap_size);
-    } else if (state->log_for_dirtybit != physmap) {
+    if (log_for_dirtybit == NULL) {
+        log_for_dirtybit = physmap;
+        dirty_bitmap = g_new(unsigned long, bitmap_size);
+    } else if (log_for_dirtybit != physmap) {
         /* Only one range for dirty bitmap can be tracked. */
         return;
     }
 
     rc = xen_track_dirty_vram(xen_domid, start_addr >> TARGET_PAGE_BITS,
-                              npages, state->dirty_bitmap);
+                              npages, dirty_bitmap);
     if (rc < 0) {
 #ifndef ENODATA
 #define ENODATA  ENOENT
@@ -651,7 +650,7 @@ static void xen_sync_dirty_bitmap(XenIOState *state,
     }
 
     for (i = 0; i < bitmap_size; i++) {
-        unsigned long map = state->dirty_bitmap[i];
+        unsigned long map = dirty_bitmap[i];
         while (map != 0) {
             j = ctzl(map);
             map &= ~(1ul << j);
@@ -677,12 +676,10 @@ static void xen_log_start(MemoryListener *listener,
 static void xen_log_stop(MemoryListener *listener, MemoryRegionSection *section,
                          int old, int new)
 {
-    XenIOState *state = container_of(listener, XenIOState, memory_listener);
-
     if (old & ~new & (1 << DIRTY_MEMORY_VGA)) {
-        state->log_for_dirtybit = NULL;
-        g_free(state->dirty_bitmap);
-        state->dirty_bitmap = NULL;
+        log_for_dirtybit = NULL;
+        g_free(dirty_bitmap);
+        dirty_bitmap = NULL;
         /* Disable dirty bit tracking */
         xen_track_dirty_vram(xen_domid, 0, 0, NULL);
     }
@@ -1022,9 +1019,9 @@ static void handle_vmport_ioreq(XenIOState *state, ioreq_t *req)
 {
     vmware_regs_t *vmport_regs;
 
-    assert(state->shared_vmport_page);
+    assert(shared_vmport_page);
     vmport_regs =
-        &state->shared_vmport_page->vcpu_vmport_regs[state->send_vcpu];
+        &shared_vmport_page->vcpu_vmport_regs[state->send_vcpu];
     QEMU_BUILD_BUG_ON(sizeof(*req) < sizeof(*vmport_regs));
 
     current_cpu = state->cpu_by_vcpu_id[state->send_vcpu];
@@ -1472,7 +1469,6 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 
     state->memory_listener = xen_memory_listener;
     memory_listener_register(&state->memory_listener, &address_space_memory);
-    state->log_for_dirtybit = NULL;
 
     state->io_listener = xen_io_listener;
     memory_listener_register(&state->io_listener, &address_space_io);
@@ -1487,19 +1483,19 @@ void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
 
-    state->suspend.notify = xen_suspend_notifier;
-    qemu_register_suspend_notifier(&state->suspend);
+    suspend.notify = xen_suspend_notifier;
+    qemu_register_suspend_notifier(&suspend);
 
-    state->wakeup.notify = xen_wakeup_notifier;
-    qemu_register_wakeup_notifier(&state->wakeup);
+    wakeup.notify = xen_wakeup_notifier;
+    qemu_register_wakeup_notifier(&wakeup);
 
     rc = xen_get_vmport_regs_pfn(xen_xc, xen_domid, &ioreq_pfn);
     if (!rc) {
         DPRINTF("shared vmport page at pfn %lx\n", ioreq_pfn);
-        state->shared_vmport_page =
+        shared_vmport_page =
             xenforeignmemory_map(xen_fmem, xen_domid, PROT_READ|PROT_WRITE,
                                  1, &ioreq_pfn, NULL);
-        if (state->shared_vmport_page == NULL) {
+        if (shared_vmport_page == NULL) {
             error_report("map shared vmport IO page returned error %d handle=%p",
                          errno, xen_xc);
             goto err;
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:48:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:48:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519971.807134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmU-0005iG-Ft; Tue, 11 Apr 2023 22:48:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519971.807134; Tue, 11 Apr 2023 22:48:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmU-0005hJ-9Q; Tue, 11 Apr 2023 22:48:06 +0000
Received: by outflank-mailman (input) for mailman id 519971;
 Tue, 11 Apr 2023 22:48:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmMmT-0004tk-AL
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:48:05 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20628.outbound.protection.outlook.com
 [2a01:111:f400:7e89::628])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eaba3a4e-d8ba-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 00:48:03 +0200 (CEST)
Received: from DM6PR07CA0127.namprd07.prod.outlook.com (2603:10b6:5:330::28)
 by SJ2PR12MB8064.namprd12.prod.outlook.com (2603:10b6:a03:4cc::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 22:48:00 +0000
Received: from DS1PEPF0000E64E.namprd02.prod.outlook.com
 (2603:10b6:5:330:cafe::b3) by DM6PR07CA0127.outlook.office365.com
 (2603:10b6:5:330::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35 via Frontend
 Transport; Tue, 11 Apr 2023 22:47:59 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DS1PEPF0000E64E.mail.protection.outlook.com (10.167.18.4) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Tue, 11 Apr 2023 22:47:59 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:47:58 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 15:47:58 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 17:47:57 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eaba3a4e-d8ba-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YD2Sjvqw1aiCb/5kve5wM4Bz/reAjtkCRm1Bz1PzaJAg3ac3PjndSuNORPqR4weXADEWEJAK1GLDKJJEhNmAbCj3VAuq6azfHDLcnHvQ3bEHjRo1GPdUESOsak21BurEHu6eCJa/IXMjpDBep4prig6MZOYPlRJg7qg4PTGbLARsYYzTXUshn5yH8vLBvxr23vvGUIhHcQJ1n2XF8TpAVF52pR8e9PeucwlHOB55FumD7xaKzGpB3uQg40NriAGF3DD/BRnS+sD9PqNdWLlwczTxbQzB/T+CXw4lVHncO8UzTyqV/UoAa5mvMHtNkkL7lc/xmfVao06e/WMgPJCfVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FMxXRBFxRmkD0pWrYu+sj44qnNzbURauoioUBy1Lo+4=;
 b=U5KMC9g4P5MHtmS6GLiha1q6hKu1eSxJ7jtVF4QwVignsYxpiZg6aAsTkNP+RRylwFHJr3uw8dqov33uroCaTrU/tktbRSL5Q67b924H5me+1YK6EXHRyLBq99SQMXHmkWMxx82JqBJAOH8crsa4aw2j5bNtWQIbSCsoEBZt5dfVvmTZAbP4zLEwv1nwFVpXGYWALqaX4Rn5GSOmI5NIFrSpGz13F5eQcM75i4PCyo+HAw+m+hYwfAqHmrNcfDxqzsDDlAIfYUACMdT6Vh+l5GIK/vRfgB6/w60UFGJ3q+h9jX1okXLkmg8oW6jCiogC1yIxkI5aRYIbp2j3yqlCfQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FMxXRBFxRmkD0pWrYu+sj44qnNzbURauoioUBy1Lo+4=;
 b=yUBAoDuJqmaZVGyBnfL0Dq0J48SosZrNjzgJXHQW/TYeQ8TQO32JpJZzh6CsduofqeSGBWCdNwTaadZ2gRiSTRn0CSPm72YgZt4lTZ8K/eFqolMBLX7ifoVKmceSouuJ4ET5HynkrbMW1LtBUpTZ99Zz659z6+tIrIBw3yhMAuk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: [QEMU][PATCH v6 05/10] include/hw/xen/xen_common: return error from xen_create_ioreq_server
Date: Tue, 11 Apr 2023 15:47:41 -0700
Message-ID: <20230411224746.16152-6-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411224746.16152-1-vikram.garhwal@amd.com>
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E64E:EE_|SJ2PR12MB8064:EE_
X-MS-Office365-Filtering-Correlation-Id: afb312bc-de3f-4ebc-60d9-08db3adecc92
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	obenWLWhq6bdrYeodulUvswme8fWEcMGn7qxPqRobjL/QaGgmCOIsSUGuNOX02ssYXN/5TsJ59VzleK+FoDZlUpoSiiO1j0uaDOvyRk+Ts3eg3Oy5n7dC1Ip0QgPN5x1FrWc/9UempJCZQZILuDo4vKo+OZElGTVaYJ7Mz8UKW5ib720tn6PHf9qe6MXfojbwJbzYQ46MW5SyyU+xdhrcbMcq8o6IomAs+nRwI/FjqMp825ujie3MMSWrrLfv2YjDdAoljykKkzDxlTWWGWAiYgAuKOwBBqp0K2anabLPQmUr3IOIP6TRpoPIqEvDuLE+ryPKxLaVBNXAoiV90x3xiy5dEpn9qN9ZQcBJB3mqOMucgF1/enNNjPafIvixZP3FUgO3t9umkr2IevaAh3pJeajfPhjDUMDyZHJbQBIEnYgxKBxngZ5kVoNzGAGai+g91IYXsmufLrvpAt7h42FkBOQbvd9hRNam/zWRRCPYV5sp5XT+9B1pTRa8aR3nBcvOWyzHcjBSS4Mi8umgWv8eTZ4CMfdmZ3UAcSCXIhPrPrqlvI5BpMjcXPEAAq81uL4SKFbId7kqouT4uS/VH0ExKbElvX2mUy2Djw4Rh0JE/aXLPpelx6g3Dpj1Zny+gFQQ2uQfvxCJ5eGD7BHcucddBTm0zyUIbzOhq8l4Hv3lHMtXUCEOfMNy8UDJHOpxtqpRwD3b6tjcadMwykfSQhKvRIv+nawL5i9dsoocwqV6vM=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(136003)(376002)(346002)(451199021)(36840700001)(40470700004)(46966006)(36756003)(86362001)(41300700001)(316002)(478600001)(70206006)(4326008)(8676002)(6916009)(70586007)(54906003)(40480700001)(36860700001)(5660300002)(82310400005)(2906002)(44832011)(8936002)(81166007)(47076005)(82740400003)(356005)(186003)(6666004)(26005)(1076003)(2616005)(426003)(336012)(83380400001)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 22:47:59.1727
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: afb312bc-de3f-4ebc-60d9-08db3adecc92
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E64E.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB8064

From: Stefano Stabellini <stefano.stabellini@amd.com>

This is done to prepare for enabling xenpv support for the ARM architecture.
On ARM it is possible to have a functioning xenpv machine with only the
PV backends and no IOREQ server. If the IOREQ server creation fails,
continue to the PV backend initialization.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 include/hw/xen/xen_native.h | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/include/hw/xen/xen_native.h b/include/hw/xen/xen_native.h
index 6bcc83baf9..8b01b071e5 100644
--- a/include/hw/xen/xen_native.h
+++ b/include/hw/xen/xen_native.h
@@ -433,9 +433,10 @@ static inline void xen_unmap_pcidev(domid_t dom,
 {
 }
 
-static inline void xen_create_ioreq_server(domid_t dom,
-                                           ioservid_t *ioservid)
+static inline int xen_create_ioreq_server(domid_t dom,
+                                          ioservid_t *ioservid)
 {
+    return 0;
 }
 
 static inline void xen_destroy_ioreq_server(domid_t dom,
@@ -566,8 +567,8 @@ static inline void xen_unmap_pcidev(domid_t dom,
                                                   PCI_FUNC(pci_dev->devfn));
 }
 
-static inline void xen_create_ioreq_server(domid_t dom,
-                                           ioservid_t *ioservid)
+static inline int xen_create_ioreq_server(domid_t dom,
+                                          ioservid_t *ioservid)
 {
     int rc = xendevicemodel_create_ioreq_server(xen_dmod, dom,
                                                 HVM_IOREQSRV_BUFIOREQ_ATOMIC,
@@ -575,12 +576,14 @@ static inline void xen_create_ioreq_server(domid_t dom,
 
     if (rc == 0) {
         trace_xen_ioreq_server_create(*ioservid);
-        return;
+        return rc;
     }
 
     *ioservid = 0;
     use_default_ioreq_server = true;
     trace_xen_default_ioreq_server();
+
+    return rc;
 }
 
 static inline void xen_destroy_ioreq_server(domid_t dom,
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:48:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519972.807150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmW-0006Du-NI; Tue, 11 Apr 2023 22:48:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519972.807150; Tue, 11 Apr 2023 22:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmW-0006Dj-J5; Tue, 11 Apr 2023 22:48:08 +0000
Received: by outflank-mailman (input) for mailman id 519972;
 Tue, 11 Apr 2023 22:48:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmMmU-0004tk-ME
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:48:06 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20604.outbound.protection.outlook.com
 [2a01:111:f400:7e88::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eb089360-d8ba-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 00:48:04 +0200 (CEST)
Received: from DS7P222CA0028.NAMP222.PROD.OUTLOOK.COM (2603:10b6:8:2e::21) by
 SA1PR12MB7198.namprd12.prod.outlook.com (2603:10b6:806:2bf::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35; Tue, 11 Apr
 2023 22:48:02 +0000
Received: from DS1PEPF0000E64F.namprd02.prod.outlook.com
 (2603:10b6:8:2e:cafe::8e) by DS7P222CA0028.outlook.office365.com
 (2603:10b6:8:2e::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 22:48:01 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DS1PEPF0000E64F.mail.protection.outlook.com (10.167.18.5) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Tue, 11 Apr 2023 22:48:01 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:48:00 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 15:48:00 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 17:47:59 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb089360-d8ba-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GWFwXSh/XJsVrrBDH6oSynniQ/IglXdM8mpvw4/iGS4UwF19F99brSYRUXG1fELOy1vVYZ5mYDnRFWxCVSkUCglZA5ie/LzlLLzIvGHQ3rQbCxrOauKrasTnfoDK9MdY6ImbW0YUOC0sINYeHpQ+HnqcojhZvjNp8VztPZP0qhVkhaRrG5cF2+EWHVPKTcuci0o8+3up1yireVb6GXOZ0+T29B7RNYeD+1gm/bp2jU8GM11KYAB1QzNGgNp1RTEwPtu53S51/LbmhPLf14r9laj3E4+UgkJt6skQyZOY2pVof3N5/07YlsDrtQwHCkrBC10/KGlQgHZF9n/FllYqxA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ujYfJM4vaNtZk1zA3MwYT/E3tRvcfbmfzK5faMVAmdY=;
 b=brMGnhH/Tl8QzxLRkf8oULCS+fcHO/YO8Qpe/DajdjjrjudKQ5fXpSIULTstO6EUiim+HJ9NmpjHcGKxF4GxjlmIVRfwOt4M3LlE0j6aRj4gpD0FM/pUC9PxCowVJ91b/Vvn6pPSjhLvb8IGsgJOpXuyVKcJ5QmzgInXYapIijvko0KVk/PIjehTqZ2hiY63VBpN86IUKdyMACdVieqxvElDPJvVhCLebeSNNiO8UYX5JI0MAGIP/5xnjtdOIvZE48ULiiSbbuZorroT9VMWy9T97eVFKQESSYq+3ejJGzN+ai60AwXCy3VUiBVmfACoup42pZUtwSaTQO74C0py0w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ujYfJM4vaNtZk1zA3MwYT/E3tRvcfbmfzK5faMVAmdY=;
 b=fxiyOeyBcL/x4CyCKdICWV04zsQOnnz+VCXpWT+0t6/zf5OXkJB9Vc9GzQ/Pldt0f9cW7aEffwbsbHQEKmi4pzh7LwiqQrBh6IX5adcIbYqrIc0qZ7hJQUHEXL7f0ec8NuMUxPRzpQAItV0Vz8DzitbPE6A/ZlQbPqfcxmBbTak=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: [QEMU][PATCH v6 07/10] hw/xen/xen-hvm-common: Use g_new and error_report
Date: Tue, 11 Apr 2023 15:47:43 -0700
Message-ID: <20230411224746.16152-8-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411224746.16152-1-vikram.garhwal@amd.com>
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E64F:EE_|SA1PR12MB7198:EE_
X-MS-Office365-Filtering-Correlation-Id: 91e3f26d-b041-4293-93a8-08db3adece1c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Ry1nPJe5ps0zWI4ft2ZGrmnMusAWHy8AQjIAzB/D5qeuiawc4EjbBwTj/1Oj5ZtpLWmybZ0dTsf8hCfVU5qtXPMTvJXfLtsMXA4TgqDomrJkUYAfSuXQSv3iHWRojwAc7fSZ4qGKc4MdtFmLUV7DdpZgypQp8mJD7+hLOrhk1PVNUSFP1eWL7ZRC4clVm9ri6fP9vMQYIF84BHNwmdqmGA+r8vdp0+FEj90hgZVCys9A9rIkEIeb056jY/aRLuItVdu75FIcekCLr0BJud8CznnxkyZ70O+kWQQBlRB1lpGEa3E5LTnZJzgC3WKg8xb70Cl5wUTiYZrOK3iV027P18ZIjaX6Fc6US3M6PUh/fpiJOimUegt3ZA1SCfIDQFYsSmIS+jkALDsC+njyBVEVG0xmWKOavOQmtdGP0Sz0GHGmqsmZrRh7xH7oK5X1Rwklow49dcvSAMHGKSb+nmArJxbhByxYdTUKRdXtXEMS0Eni755tVIeAktZyjCpmPDAKAholE06sqC1SxIFctQRXX+Xzvv+2cS7soe9C3kRxfm5ctxi0zWq0EwyvywJ/xikZHX2OJ4um8OyK47t6KA/fiMdECqnvMndIKARmjbd9nB67pdQcxuKzWQbZ6Aj5mg5mvw73LJgRAPRGbDpqo18AlLTEX6W7ueOJJllLlfC1RYX3ZsUiSzf/kj97wuEPFkWXdy9CWcDaCGGYIa4PrhXZ+7SZdpBvpkqK/lIcsBO405k=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(376002)(39860400002)(396003)(451199021)(40470700004)(36840700001)(46966006)(86362001)(41300700001)(316002)(4326008)(478600001)(36756003)(54906003)(70586007)(70206006)(8676002)(6916009)(40480700001)(82310400005)(356005)(5660300002)(2906002)(8936002)(44832011)(36860700001)(81166007)(82740400003)(186003)(26005)(1076003)(6666004)(2616005)(336012)(426003)(47076005)(83380400001)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 22:48:01.7562
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 91e3f26d-b041-4293-93a8-08db3adece1c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E64F.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB7198

Replace g_malloc with g_new and perror with error_report.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 hw/xen/xen-hvm-common.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index cb82f4b83d..42339c96bd 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -33,7 +33,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
     trace_xen_ram_alloc(ram_addr, size);
 
     nr_pfn = size >> TARGET_PAGE_BITS;
-    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
+    pfn_list = g_new(xen_pfn_t, nr_pfn);
 
     for (i = 0; i < nr_pfn; i++) {
         pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
@@ -730,7 +730,7 @@ void destroy_hvm_domain(bool reboot)
             return;
         }
         if (errno != ENOTTY /* old Xen */) {
-            perror("xendevicemodel_shutdown failed");
+            error_report("xendevicemodel_shutdown failed with error %d", errno);
         }
         /* well, try the old thing then */
     }
@@ -784,7 +784,7 @@ static void xen_do_ioreq_register(XenIOState *state,
     }
 
     /* Note: cpus is empty at this point in init */
-    state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
+    state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
 
     rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
     if (rc < 0) {
@@ -793,7 +793,7 @@ static void xen_do_ioreq_register(XenIOState *state,
         goto err;
     }
 
-    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
+    state->ioreq_local_port = g_new0(evtchn_port_t, max_cpus);
 
     /* FIXME: how about if we overflow the page here? */
     for (i = 0; i < max_cpus; i++) {
@@ -850,13 +850,13 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
 
     state->xce_handle = qemu_xen_evtchn_open();
     if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
+        error_report("xen: event channel open failed with error %d", errno);
         goto err;
     }
 
     state->xenstore = xs_daemon_open();
     if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
+        error_report("xen: xenstore open failed with error %d", errno);
         goto err;
     }
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:48:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:48:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519973.807155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmX-0006Ht-31; Tue, 11 Apr 2023 22:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519973.807155; Tue, 11 Apr 2023 22:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmW-0006HK-Se; Tue, 11 Apr 2023 22:48:08 +0000
Received: by outflank-mailman (input) for mailman id 519973;
 Tue, 11 Apr 2023 22:48:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmMmV-00060o-E8
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:48:07 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20610.outbound.protection.outlook.com
 [2a01:111:f400:fe59::610])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb0ae277-d8ba-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 00:48:05 +0200 (CEST)
Received: from CYZPR12CA0016.namprd12.prod.outlook.com (2603:10b6:930:8b::29)
 by IA0PR12MB8696.namprd12.prod.outlook.com (2603:10b6:208:48f::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Tue, 11 Apr
 2023 22:48:01 +0000
Received: from CY4PEPF0000C97D.namprd02.prod.outlook.com
 (2603:10b6:930:8b:cafe::f) by CYZPR12CA0016.outlook.office365.com
 (2603:10b6:930:8b::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.36 via Frontend
 Transport; Tue, 11 Apr 2023 22:48:01 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CY4PEPF0000C97D.mail.protection.outlook.com (10.167.241.136) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.25 via Frontend Transport; Tue, 11 Apr 2023 22:48:01 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:47:59 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 15:47:59 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 17:47:58 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb0ae277-d8ba-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JRCZk4s7Ii4EdN1B74IqEOBLTVU1HXBuT1qc5KzCanwkL6+fzGV3JXjHWjuI6deXYHABYS0YYSRmi5IRb8CulQAPLl0YxFh1M55E4Sv+smIvWSycac1Ra0gCOrtNZgoR9SoF9f7JJjBcTs1lSXCxyWfhn4f/Guy6BI3oW7ZHKaZsJbadhQXdfvZ6VLd5R7xFCy8YAOBXgIlM5cpMIldwiOswb8S4si4hgg0prtLf5HfdQwysRRDN5H0RJ62C2bIpzrvqLrioOmxKmP18cGqUdd0f3Al2NL3ThsBSO5lxMc/0qOPmOVAN8fbPwtg+yndESNYOnPVMcUi3aMJVy3G4+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/uOpZiqStaRi7+Ygxu1TUCLhyLdSY+19OW/XA3O2IUs=;
 b=M/Hr1cvtNvi17dGN0v5w1aKj6s+7gnihp0iCHuBDtGtRnOPo+0JmcUXrDxAvm0hh/6/UHzrr7iD74GXu1fOxl4lNYXOGmWu8DsQ1U5CwNsGOsZHvzbQ6UILYFtqrFfPcZ80tzzdWTBkwNboEwLxSSvMQ4xQblzgSjZl6TwgAopYAhT3KqghmDNUKuj3DPqzj4jcgfUneumnoUcNE1sejiuLaSpjW+3hi8IAn1MWDCvbvm06eOdbqoPubnDweI3/rcC6v23CIXFNa4/WP9lPq5CmxwLOhseXeTuRQM8w4YWI16Yv5twybk/pa2CH7iP2NpTf30Pn8M4MobRGTKGnYkA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/uOpZiqStaRi7+Ygxu1TUCLhyLdSY+19OW/XA3O2IUs=;
 b=FQLqW+DEDlfSpoOhisp8UwcBcnJgYmAPdKvPrI1o19LHPtgG1FsWFUXsyrr3PWXnM0Gzv7wWPEase5Qj+J1zkdwaKhi68Q7JZiej9HrfN1eKW5RZ+CcP3ZucphlwuTR9B1hlWs4M9TVhTfLN3ywqZ36iLcL6GWMpc+TqPNvrpOg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: [QEMU][PATCH v6 06/10] hw/xen/xen-hvm-common: skip ioreq creation on ioreq registration failure
Date: Tue, 11 Apr 2023 15:47:42 -0700
Message-ID: <20230411224746.16152-7-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411224746.16152-1-vikram.garhwal@amd.com>
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000C97D:EE_|IA0PR12MB8696:EE_
X-MS-Office365-Filtering-Correlation-Id: 30eacd88-8e45-4048-b00d-08db3adecdca
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7bPuRk+3vt1vNfZklRi2A3XGXwTsjmGSTcl6GZ1Qq8+OVNH5kuDrflKxNZiU7bc9Bi8FObp9zgOoFNy/qY0g/tOYXhGPs1IGsMHY68Qhtbf5C6MMFflErcyX9sTYH/SrsrK2j9Uo7ByurFOR0nstFu6pMGdKLRtN2GG6jKvjFAM+lmqaGR6x05+GUSv62PlWwNQRXfO57Fj5JffZTc4FfGr92To6qkLKXu747pFegzr3YtAGtRkJo+IVI3AxnXYNjeRm/Q+WYOzYz8Q2e+wrHTJRJER5DxI5tqlyTB6LRloybU/TrUGGHgM8dnWyNt5FAfGwR2Hz7yZjx3d5fPw7Yfqxt43iWiCPNebZVWjTIuXlYElnA/Z1m7PXiEyeHMxAkOll6zMszqM6VQG2bwdKtE/KbCgSpv1HaG5CUL6F3RW0+yzeMv8+gL6DsaVgH/9U7meqVY/BPr1fOfnCzTk4LgVILrQwtitX/M8Eis1+4HKzm5EQep+jemW5WUmHbqtqKJtauL7R83fNzBc6GQnsvPq2o/ddL/UEWcyPD9hgI7hr1oY7C4VF3C0AqoPi0GJkphheouYVUVpt61kXMgsuDwtdsMDGaZjbM2IM+wgYNW7pnBZmz1IU1L+Zfpno8XGkUtnsx0SL9JbGL3mcxc8foH3hHUKVa1Y/E+i6YNLL47jSB53Ua/Gk/6GPs3qa3SY+dv5t2+qnnztzQp2Qymg7+NFOg2HOnr1G2TitAs63I8A=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(39860400002)(346002)(136003)(451199021)(36840700001)(46966006)(40470700004)(26005)(6916009)(70206006)(1076003)(4326008)(86362001)(70586007)(316002)(186003)(40460700003)(36756003)(6666004)(54906003)(36860700001)(40480700001)(478600001)(8936002)(8676002)(2616005)(5660300002)(356005)(41300700001)(81166007)(2906002)(44832011)(47076005)(82310400005)(426003)(336012)(83380400001)(82740400003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 22:48:01.1414
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 30eacd88-8e45-4048-b00d-08db3adecdca
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000C97D.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB8696

From: Stefano Stabellini <stefano.stabellini@amd.com>

On ARM it is possible to have a functioning xenpv machine with only the
PV backends and no IOREQ server. If the IOREQ server creation fails,
continue with the PV backend initialization.

Also, move the IOREQ registration and mapping code into a new function,
xen_do_ioreq_register().

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 hw/xen/xen-hvm-common.c | 57 +++++++++++++++++++++++++++--------------
 1 file changed, 38 insertions(+), 19 deletions(-)

diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
index a31b067404..cb82f4b83d 100644
--- a/hw/xen/xen-hvm-common.c
+++ b/hw/xen/xen-hvm-common.c
@@ -764,27 +764,12 @@ void xen_shutdown_fatal_error(const char *fmt, ...)
     qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
 }
 
-void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
-                        MemoryListener xen_memory_listener)
+static void xen_do_ioreq_register(XenIOState *state,
+                                           unsigned int max_cpus,
+                                           MemoryListener xen_memory_listener)
 {
     int i, rc;
 
-    setup_xen_backend_ops();
-
-    state->xce_handle = qemu_xen_evtchn_open();
-    if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
-        goto err;
-    }
-
-    state->xenstore = xs_daemon_open();
-    if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
-        goto err;
-    }
-
-    xen_create_ioreq_server(xen_domid, &state->ioservid);
-
     state->exit.notify = xen_exit_notifier;
     qemu_add_exit_notifier(&state->exit);
 
@@ -849,12 +834,46 @@ void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
     QLIST_INIT(&state->dev_list);
     device_listener_register(&state->device_listener);
 
+    return;
+
+err:
+    error_report("xen hardware virtual machine initialisation failed");
+    exit(1);
+}
+
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener)
+{
+    int rc;
+
+    setup_xen_backend_ops();
+
+    state->xce_handle = qemu_xen_evtchn_open();
+    if (state->xce_handle == NULL) {
+        perror("xen: event channel open");
+        goto err;
+    }
+
+    state->xenstore = xs_daemon_open();
+    if (state->xenstore == NULL) {
+        perror("xen: xenstore open");
+        goto err;
+    }
+
+    rc = xen_create_ioreq_server(xen_domid, &state->ioservid);
+    if (!rc) {
+        xen_do_ioreq_register(state, max_cpus, xen_memory_listener);
+    } else {
+        warn_report("xen: failed to create ioreq server");
+    }
+
     xen_bus_init();
 
     xen_be_init();
 
     return;
+
 err:
-    error_report("xen hardware virtual machine initialisation failed");
+    error_report("xen hardware virtual machine backend registration failed");
     exit(1);
 }
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:48:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:48:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519974.807160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmX-0006QJ-JG; Tue, 11 Apr 2023 22:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519974.807160; Tue, 11 Apr 2023 22:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmX-0006Ng-CB; Tue, 11 Apr 2023 22:48:09 +0000
Received: by outflank-mailman (input) for mailman id 519974;
 Tue, 11 Apr 2023 22:48:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmMmV-0004tk-ED
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:48:07 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20601.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eb4b3643-d8ba-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 00:48:05 +0200 (CEST)
Received: from DS7P222CA0028.NAMP222.PROD.OUTLOOK.COM (2603:10b6:8:2e::21) by
 BL0PR12MB4948.namprd12.prod.outlook.com (2603:10b6:208:1cc::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.33; Tue, 11 Apr
 2023 22:48:02 +0000
Received: from DS1PEPF0000E64F.namprd02.prod.outlook.com
 (2603:10b6:8:2e:cafe::ce) by DS7P222CA0028.outlook.office365.com
 (2603:10b6:8:2e::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.39 via Frontend
 Transport; Tue, 11 Apr 2023 22:48:02 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DS1PEPF0000E64F.mail.protection.outlook.com (10.167.18.5) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Tue, 11 Apr 2023 22:48:02 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:48:01 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:48:01 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 17:48:00 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb4b3643-d8ba-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z8LbizSGQNuImsKV2JiCgRPBuNK5PpUGtGUHwaXbJpk74fDJE761l4N6GgHgswxT/PD9p+9gwN0XuBKz5zwKZ6D1YxNMmldXsM/DCeLfnRPGdPcan4W+j6dqIULDqPGjs2x92Zg4B5iGsN7e5USuW449zzfD1xvS6Avt+9fXaXsT06hYaDLrRsYeVAAQ97tJEQTNf1TJKIBiHlAZDIo70pVG9Y+vQDedg3+9b9eplYaE2LdQGL3NUgn8mxBPMAejUL6RKTeUFtxjkQ8m9ExjEBR1AO8hf/5/25b68ELhwCXV9Qwhlc1eEo32d9kz01psUltYhrIMuIPQbHPglDUagQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=r4LfuuzS6bsMM7PpjlXU3TZGIWAHgT1E78TzwKVxfOw=;
 b=kX3WTLDJVpe3gWs9M1htHG5W2p1S4dg6ePhNSWkNg24vCxQ3+moozKUiUt1hSg2pa5zpaa6vqiKZtLWcTXzU0gPCueQjx7k/E8rQ4jj9yBuhfpTrd4Gl4fYVA7qfgDTWNah6637opWqYUk6SlOMX7NPoSdKTPrZqEBiFGISSId+RR7O2HyJqmZ/aO6ZAs+FTmRHA2y/6hC+hUHd22Cur3IhFj6+Ee3Grw1WiE4FdfgYECtdTjWq7Ta1Rq3K8hYOe7XIOTXe/LW+axLsINeDCvi2ThjFNT1RBJZk1OwC3jGGiiOEqSWb9JZ1sdIxIICcPXcitHPvQs/aDII7IOpBK9w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r4LfuuzS6bsMM7PpjlXU3TZGIWAHgT1E78TzwKVxfOw=;
 b=12o8kRtfU76dz/NFnJwmvdO7VN04cJ6r/o7cwWmGxggMqXQ535omGUktXEbDI4zyauW5LddOql6N1vdS0N8rcMym/ixpD9fdhtmDKu3tnobu/BaxQCA4OIlTHdcHRHQUme2asdwbDv7n68qe6QjUaNX/cIEUPgTqC+0/BPUY6uk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>, Thomas Huth
	<thuth@redhat.com>, =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?=
	<philmd@linaro.org>
Subject: [QEMU][PATCH v6 08/10] meson.build: do not set have_xen_pci_passthrough for aarch64 targets
Date: Tue, 11 Apr 2023 15:47:44 -0700
Message-ID: <20230411224746.16152-9-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411224746.16152-1-vikram.garhwal@amd.com>
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E64F:EE_|BL0PR12MB4948:EE_
X-MS-Office365-Filtering-Correlation-Id: 4700fb71-d47e-4c98-4174-08db3adece6b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	PB56rwWPkNs4DeI0XxyZs93+wwisuZf//D1UMVN3p39f/P051boLubv1fI8fxeSbigREmTcptsy128NCGwgAGlQmqxkQM7sl34x3AG8x+qWeVhdwDEycr47NwAlglLigxdh6L2dtdLn46yKH5uT3GK5nXoWNbkj2GtALWQn/Fq4RwEDmG7Ql9dLO362xi4GmCtlF4MrRvvS5IT3fmQYBpTeyOgSCxI6zINaNyJucjUDd58taX1Lj8Qi/u6Adok9BXQCraHoEKbF1AURmyQTzNTd6PM1NvUlk4Dt2xAh+y7WUGuk/MYUKa2RByqWhaS7aqE7ffg6jYCbeusXrYV25YyzJn7wzr/QOESONsFpDYFaO+6JfFrLcqC2qvQQptW37Du8SePik4EQoBFiadmd/UKoFDj5knyZQM0dQ/qUj//tMN3Li1NklUgvsGYTu2JIGN7qazPyXjZ1Ay2C2pnH+0/xbBxrb9SaY0g8bG9Lu8+XvuZOHRtCHfp0fd+7dAvW9aHn3uxO+UiKuLwXRpo/zhKO/aFYaJVBwgUChu2SocKWt/MlIuxMi4yDxPfCllbuRIY3smtdACazUdh9KRHGvee4YmyVTPY+0UClf0DbZw2LfEs/TVe34JBkG4te2UfLpEq97sg0WEAeesg64m69LNGZk9h15Jm5sM2cmyj45u6W/KIAM9/zQkzfA/lPmAPZYaULj9JEF+/gFhaTRf4uKaTPr+4ESdXpJut+Xx1aLKrs=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(346002)(39860400002)(376002)(451199021)(46966006)(36840700001)(40470700004)(36756003)(86362001)(70206006)(70586007)(316002)(4326008)(8676002)(41300700001)(478600001)(54906003)(6916009)(82310400005)(40480700001)(5660300002)(44832011)(8936002)(4744005)(356005)(81166007)(36860700001)(2906002)(82740400003)(186003)(26005)(1076003)(2616005)(426003)(83380400001)(336012)(47076005)(6666004)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 22:48:02.2562
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4700fb71-d47e-4c98-4174-08db3adece6b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E64F.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR12MB4948

From: Stefano Stabellini <stefano.stabellini@amd.com>

have_xen_pci_passthrough is only used for Xen x86 VMs.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 meson.build | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/meson.build b/meson.build
index 29f8644d6d..52c3995c9d 100644
--- a/meson.build
+++ b/meson.build
@@ -1467,6 +1467,8 @@ have_xen_pci_passthrough = get_option('xen_pci_passthrough') \
            error_message: 'Xen PCI passthrough requested but Xen not enabled') \
   .require(targetos == 'linux',
            error_message: 'Xen PCI passthrough not available on this platform') \
+  .require(cpu == 'x86'  or cpu == 'x86_64',
+           error_message: 'Xen PCI passthrough not available on this platform') \
   .allowed()
 
 
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:48:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519975.807169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmY-0006bK-GU; Tue, 11 Apr 2023 22:48:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519975.807169; Tue, 11 Apr 2023 22:48:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmY-0006XX-3i; Tue, 11 Apr 2023 22:48:10 +0000
Received: by outflank-mailman (input) for mailman id 519975;
 Tue, 11 Apr 2023 22:48:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmMmW-0004tk-80
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:48:08 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20631.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ec250365-d8ba-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 00:48:06 +0200 (CEST)
Received: from DM6PR03CA0101.namprd03.prod.outlook.com (2603:10b6:5:333::34)
 by DS7PR12MB8201.namprd12.prod.outlook.com (2603:10b6:8:e1::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35; Tue, 11 Apr
 2023 22:48:04 +0000
Received: from DS1PEPF0000E655.namprd02.prod.outlook.com
 (2603:10b6:5:333:cafe::e0) by DM6PR03CA0101.outlook.office365.com
 (2603:10b6:5:333::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.40 via Frontend
 Transport; Tue, 11 Apr 2023 22:48:04 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DS1PEPF0000E655.mail.protection.outlook.com (10.167.18.11) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Tue, 11 Apr 2023 22:48:03 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:48:03 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:48:03 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 17:48:02 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec250365-d8ba-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cpCzqrcKkc4cjgV6C9+XLDIAG9fTypdq9VK8dYRsbsL78HskN9YjVQ6n3HZVJqSuGmPe6Zt+UyDd2JNZNRpO+ihpGpMZmu7MeageoEr4fZusvitoK3i+FqAV8KRnnbTR6qQidlMK+l9kKUZd43Ls6stBh9VYoYxlusQKF5FzUFhOIXFn38upo+QDejjTG39oUyYpDp/gcQwugLvSBDtWfcIGpJlE9ikCUkBAsaAgOXarRDbnzgNe5J7joqPylc7LfHSQTTflBplH5qAlfgr4x8mkxJV3FleMi/E9z0sXI/UavnePw7ZRZaqkIZos6wdxMbv3+iFpsTl6ycPNXWOWtA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jmblM95zvi2ytozurVmKkWMS6LTJXUZA+FjuQuy4Brw=;
 b=OThx/cSOIooUslYkRKLX08FWpc/qto2zZXPR+mi3vftKINrm0NBz67KxfhGIaBQ9D7g+oAkXQRWqu1g8Rx4GGYlpNnZLcobT2Qtyk4VviPXNnqWesLRSAJEF3LFxjqfJH9nWWGIxFkGjS1w2GTyTbzHFbRcwsdZn2mtb9ntcVKBShMGaM93ecbx1pbTu5vvhu69Y1e7jfjDO3LCtaUeXQcmDh3WzRYlahZKlTj3h6QF6uoQesHw5jVjsUrlbzPNkN0paIWp2bVllflhO5naeNnzRp0RTk7n3gIYD8TQb3jmhLX+sL11pCTj7DDUQP4NLikh7vNhcmDTi8vgngukLtA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jmblM95zvi2ytozurVmKkWMS6LTJXUZA+FjuQuy4Brw=;
 b=1yH6lmG0NOkBJEwHNgmeTWShUeogFlBM02ud79eOoP9pWFtoPgVDjbhkGpv2JO9XL4CmdcJ7fjbgwKoq94rYwuYb8CmF849ZEp1Kqs+YE8CfZj3tl8MoPtyex9tJVxHh7Iv5jWvFS4G9HtbtMr9LC1uDx7GEA6/5uSGvVo/H3Js=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>, Thomas Huth
	<thuth@redhat.com>, =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?=
	<philmd@linaro.org>
Subject: [QEMU][PATCH v6 10/10] meson.build: enable xenpv machine build for ARM
Date: Tue, 11 Apr 2023 15:47:46 -0700
Message-ID: <20230411224746.16152-11-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411224746.16152-1-vikram.garhwal@amd.com>
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E655:EE_|DS7PR12MB8201:EE_
X-MS-Office365-Filtering-Correlation-Id: 673a436c-a4c9-4ad1-4827-08db3adecf71
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	B7FppZGjYAs0J1RgHfhjQV4DfpRWg/VXFUte8wSnpWXIiZEUfQyF5/nmmmi3R1hKpx9pZhDCRvxUv2gR2C6LnBHP68Y2QQkM8HNJvI0Vo1uxIqbw0vrQGDEs/G7uenme8+09A5NjS34Wmgg7mNL6CnEz3kvPu2IZsJQOoXHrooe3xuGcQ2nE4zSNSV/D+vw8oI5EBgnfp0diW0BcSEoGqT/B4BqlHMSLQK3wzYD3Rv7RRcYiZXx/CaHVGRGAEWELEkTXwX6IKEHKQMPGDhY/KQuL8/5Qgtz6s8Fy84R7+I4TbhWqoQIYbNDsQMhmWsJuLtnCmzdNpN7THl5U8dfbEAH0/M50nEEwYcCo9oO9pQM/PqHF9IwwAp6vmnXEVrMhC8n2gip4D3Rp2QPAowQ2fsLAclE+WiKRPVKUKHnBuLMemfATYT9iC5Vr+kFfv/ZJYVVVS6RKm8eKp0EGF9GbmTCHlncJtGsqcw7O3IHJtsPi8rVVHpQecxk80b4FROGXQ29BWSJgkbEIHZdH2/ISrfdkfBQelYNXAsZVtiH5a1rMC1PHw0yvBCoEGCmfUk/7knWqyqplIkaFTLgtnwujiEGLG41bOpgXDgMcWbIWkzGCy45Crm+YsGDG9UHhGz1x72+WWkrKrJ5yJRpeiyVDsfNncGEO2gALLq4QfmU7wyKDj/fa2yJXwqWGkxxh5vZ34GJY3I7XTdul2wXbNa+zLeRcKGXAhIDS212cNzsA2Us=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(396003)(136003)(346002)(451199021)(40470700004)(46966006)(36840700001)(6666004)(478600001)(26005)(316002)(186003)(54906003)(2906002)(336012)(4744005)(44832011)(5660300002)(70206006)(70586007)(8676002)(4326008)(41300700001)(6916009)(8936002)(82310400005)(356005)(82740400003)(81166007)(47076005)(40480700001)(66574015)(40460700003)(36756003)(83380400001)(86362001)(1076003)(36860700001)(2616005)(426003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 22:48:03.9578
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 673a436c-a4c9-4ad1-4827-08db3adecf71
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E655.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB8201

Add aarch64-softmmu to the CONFIG_XEN target list so the xenpv machine
can be built for ARM targets.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 meson.build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meson.build b/meson.build
index 52c3995c9d..eb5bb305ae 100644
--- a/meson.build
+++ b/meson.build
@@ -135,7 +135,7 @@ endif
 if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
   # i386 emulator provides xenpv machine type for multiple architectures
   accelerator_targets += {
-    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
+    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu', 'aarch64-softmmu'],
   }
 endif
 if cpu in ['x86', 'x86_64']
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:48:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 22:48:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.519976.807177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmZ-0006mZ-6y; Tue, 11 Apr 2023 22:48:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 519976.807177; Tue, 11 Apr 2023 22:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmMmY-0006ho-Os; Tue, 11 Apr 2023 22:48:10 +0000
Received: by outflank-mailman (input) for mailman id 519976;
 Tue, 11 Apr 2023 22:48:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOX4=AC=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmMmX-00060o-1q
 for xen-devel@lists.xenproject.org; Tue, 11 Apr 2023 22:48:09 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20618.outbound.protection.outlook.com
 [2a01:111:f400:7e88::618])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ebc2165b-d8ba-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 00:48:06 +0200 (CEST)
Received: from CYZPR12CA0001.namprd12.prod.outlook.com (2603:10b6:930:8b::8)
 by CH3PR12MB7762.namprd12.prod.outlook.com (2603:10b6:610:151::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28; Tue, 11 Apr
 2023 22:48:00 +0000
Received: from CY4PEPF0000C97D.namprd02.prod.outlook.com
 (2603:10b6:930:8b:cafe::5e) by CYZPR12CA0001.outlook.office365.com
 (2603:10b6:930:8b::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.36 via Frontend
 Transport; Tue, 11 Apr 2023 22:48:00 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CY4PEPF0000C97D.mail.protection.outlook.com (10.167.241.136) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.25 via Frontend Transport; Tue, 11 Apr 2023 22:48:00 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:47:57 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 11 Apr
 2023 17:47:57 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 11 Apr 2023 17:47:56 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebc2165b-d8ba-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=K9SzP0FBBf9SUYqgTF4o/Sz8oINLOIgPYVgUeJWbbaaGqOzi3WnOSOyuwGEKg/r1luk0nlXErdrayuU+FJa8WuF2kMvApFcdInQ6lD7j8GJYkEY5C8bsiiVbgrTMVQxPLoWVtOAOoIZElAGvHPoY6BeFhWsa/3oSomkIHcQGH9AR2TxpYyVI5BjC4F7CdctnVeS4x/uJPWjQGcUHHLoIvXl2qacFTrQy1Uh115+Vb1iRZZHJStPJzkIv2JNIlM6lZC1e8S/vJquHCR9u97CFUMAjTK6QP81hheymH+W1DRAQhK8B+XRul6MJV76aluVdppTcnxowMJYCgYcFwXVlUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wIZIFcEluMLarFziGuZ+tNtfbECwk02fJil82lSHuSo=;
 b=b18gyK/T0iXHVC6D3jTjFhoNDwXSq8X3TIDJWfMtuVX1Hj+fW28uDkMAFMxAVN6qKcy1ftYdyzl6bNSZJOXAtrNaiXoihQHfvg4qKT7eiqCdQxTActGyFLxHxRPjZ15kX1tqDpdM0rKzRbs8dR9T0gCrLC4xfkBFJbTEqMc9rJU7eJZ0qLuQ2mOyU/R5eXMGsFplWzttJgkmKRuqPitzekXmKwpLmCut4/DTGtzyiFCiYOfH+xYC8hbK2ly0ZnCw1M2K8lYPsNGhJ3hCeMjOlOllAltk7F46GPpXsYhTejNNeEgW6y/HQzdVO++XDmGbqT5kbCo9JGYMnPgGUQ3+Tw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=nongnu.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wIZIFcEluMLarFziGuZ+tNtfbECwk02fJil82lSHuSo=;
 b=b7MmAEPsP7TpVCRz250eu3R3LVxIY9UAUeqS1NgcqE/1Ei6qt7ov6XlK85tPDcRIdSzoFdcvmt/bMcTDJbNloPMxK3+BdBwVHEtfjaug32uirRKIKLvtD4I1V4fMHOErptbgd+m2FnerV+7eVLYQ8L9aAWNSrZgfeLeUh9JoO0c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
	Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson
	<richard.henderson@linaro.org>, Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>, Marcel Apfelbaum
	<marcel.apfelbaum@gmail.com>
Subject: [QEMU][PATCH v6 04/10] xen-hvm: reorganize xen-hvm and move common function to xen-hvm-common
Date: Tue, 11 Apr 2023 15:47:40 -0700
Message-ID: <20230411224746.16152-5-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411224746.16152-1-vikram.garhwal@amd.com>
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000C97D:EE_|CH3PR12MB7762:EE_
X-MS-Office365-Filtering-Correlation-Id: 9ff20461-d2a4-4736-0590-08db3adecd5d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CABI988Q0U8KWOTCEQfyL++X9Kte3o1wSlWAL5huGdFf8AgATwUC7elcL4Ck3UkOVNu8vyBI/p52cp6xrjL78eOhNqojLnbyociesjJ10zr9bom29SzQsvp45QwlHHPy+DW84El5FyIOCMvvrBjDe9igceV1SKzxYe5qKIXZk8CubINeDDvCbWU2VD2iIi3yaaT2amCWB1pf/9NOdK0abm+9qIOLencq3E8AQW+IRFePGqXCQHH2RNfhmfVZpHSXD3mZeA21t4HY+Bt8SYK/01ZSWgJE4gF5uwMH6jzw4xwDOKZskxKxprbdcacjpJzWR6Pn2MaBfebYRpkprsd8l4yBhaucuryOhJjwJQWMEPdsVoSao9XKkc/pUrjsie0EySNxnxEvP5AaDYrjst24VTsePoOxJHqMqOVFpRIQO7rY62DqLebRFZj8gd2tmKEUBEZ5O9qENf+RidMoyyhi8dRsoiKyHTURc/S3CZC8MyF5i9kqvRJ4PwqdTh3qBjP6UpxY7J/buTSSqcMdFUaKNLHNUwXikjvCREF4f9MLYG8qkZIh3Zy6rgsu9RoQeqj1OQ7u/rpPdoXwMbd8NirjXj3NAFBiNS+J6zorW34VuFDjF0b1dU3q1mFF08RTXUzCBCjNgqZdOruz5xDMAufXHtdFgE/TOxaqYQO7MwC9ST3KmQPuCxWFD3gJXswLc4SJnMIfadQwC9rdVmOmknE+hGUxNWuhcZ2eae/W766FmN4=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(136003)(396003)(376002)(451199021)(40470700004)(36840700001)(46966006)(30864003)(2906002)(40460700003)(356005)(41300700001)(81166007)(5660300002)(7416002)(8936002)(8676002)(44832011)(82310400005)(36756003)(86362001)(40480700001)(6666004)(1076003)(26005)(54906003)(478600001)(2616005)(36860700001)(83380400001)(336012)(426003)(47076005)(186003)(316002)(70206006)(70586007)(4326008)(6916009)(82740400003)(36900700001)(579004)(559001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Apr 2023 22:48:00.4539
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9ff20461-d2a4-4736-0590-08db3adecd5d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000C97D.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB7762

From: Stefano Stabellini <stefano.stabellini@amd.com>

This patch does the following:
1. Creates arch_handle_ioreq() and arch_xen_set_memory(), in preparation for
    moving most of the xen-hvm code to an arch-neutral location. The
    x86-specific portion of xen_set_memory moves to arch_xen_set_memory, and
    handle_vmport_ioreq moves to arch_handle_ioreq.

2. Pure code movement: moves common functions to hw/xen/xen-hvm-common.c.
    Common functionality is extracted from hw/i386/xen/xen-hvm.c and moved to
    hw/xen/xen-hvm-common.c; these common functions are useful for creating
    an IOREQ server.

    xen_hvm_init_pc() contains the architecture-independent code for creating
    and mapping an IOREQ server, connecting memory and IO listeners,
    initializing a Xen bus and registering backends. This common Xen code is
    moved to a new function, xen_register_ioreq(), which can be used by both
    x86 and ARM machines.

    The following functions are moved to hw/xen/xen-hvm-common.c:
        xen_vcpu_eport(), xen_vcpu_ioreq(), xen_ram_alloc(), xen_set_memory(),
        xen_region_add(), xen_region_del(), xen_io_add(), xen_io_del(),
        xen_device_realize(), xen_device_unrealize(),
        cpu_get_ioreq_from_shared_memory(), cpu_get_ioreq(), do_inp(),
        do_outp(), rw_phys_req_item(), read_phys_req_item(),
        write_phys_req_item(), cpu_ioreq_pio(), cpu_ioreq_move(),
        cpu_ioreq_config(), handle_ioreq(), handle_buffered_iopage(),
        handle_buffered_io(), cpu_handle_ioreq(), xen_main_loop_prepare(),
        xen_hvm_change_state_handler(), xen_exit_notifier(),
        xen_map_ioreq_server(), destroy_hvm_domain() and
        xen_shutdown_fatal_error()

3. Removes the static qualifier from the following functions:
    1. xen_region_add()
    2. xen_region_del()
    3. xen_io_add()
    4. xen_io_del()
    5. xen_device_realize()
    6. xen_device_unrealize()
    7. xen_hvm_change_state_handler()
    8. cpu_ioreq_pio()
    9. xen_exit_notifier()

4. Replaces TARGET_PAGE_SIZE with XC_PAGE_SIZE to match the page size used by Xen.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 hw/i386/xen/trace-events        |   14 -
 hw/i386/xen/xen-hvm.c           | 1016 ++-----------------------------
 hw/xen/meson.build              |    5 +-
 hw/xen/trace-events             |   14 +
 hw/xen/xen-hvm-common.c         |  860 ++++++++++++++++++++++++++
 include/hw/i386/xen_arch_hvm.h  |   11 +
 include/hw/xen/arch_hvm.h       |    3 +
 include/hw/xen/xen-hvm-common.h |   99 +++
 8 files changed, 1054 insertions(+), 968 deletions(-)
 create mode 100644 hw/xen/xen-hvm-common.c
 create mode 100644 include/hw/i386/xen_arch_hvm.h
 create mode 100644 include/hw/xen/arch_hvm.h
 create mode 100644 include/hw/xen/xen-hvm-common.h

diff --git a/hw/i386/xen/trace-events b/hw/i386/xen/trace-events
index a0c89d91c4..5d0a8d6dcf 100644
--- a/hw/i386/xen/trace-events
+++ b/hw/i386/xen/trace-events
@@ -7,17 +7,3 @@ xen_platform_log(char *s) "xen platform: %s"
 xen_pv_mmio_read(uint64_t addr) "WARNING: read from Xen PV Device MMIO space (address 0x%"PRIx64")"
 xen_pv_mmio_write(uint64_t addr) "WARNING: write to Xen PV Device MMIO space (address 0x%"PRIx64")"
 
-# xen-hvm.c
-xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: 0x%lx, size 0x%lx"
-xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "0x%"PRIx64" size 0x%lx, log_dirty %i"
-handle_ioreq(void *req, uint32_t type, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p type=%d dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-handle_ioreq_read(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p read type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-handle_ioreq_write(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p write type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-cpu_ioreq_pio(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p pio dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-cpu_ioreq_pio_read_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio read reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
-cpu_ioreq_pio_write_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio write reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
-cpu_ioreq_move(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p copy dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
-xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
-cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
-cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
-
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 6be5a250a8..a1ff5be4e1 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -10,43 +10,21 @@
 
 #include "qemu/osdep.h"
 #include "qemu/units.h"
+#include "qapi/error.h"
+#include "qapi/qapi-commands-migration.h"
+#include "trace.h"
 
-#include "cpu.h"
-#include "hw/pci/pci.h"
-#include "hw/pci/pci_host.h"
 #include "hw/i386/pc.h"
 #include "hw/irq.h"
-#include "hw/hw.h"
 #include "hw/i386/apic-msidef.h"
-#include "hw/xen/xen_native.h"
-#include "hw/xen/xen-legacy-backend.h"
-#include "hw/xen/xen-bus.h"
 #include "hw/xen/xen-x86.h"
-#include "qapi/error.h"
-#include "qapi/qapi-commands-migration.h"
-#include "qemu/error-report.h"
-#include "qemu/main-loop.h"
 #include "qemu/range.h"
-#include "sysemu/runstate.h"
-#include "sysemu/sysemu.h"
-#include "sysemu/xen.h"
-#include "sysemu/xen-mapcache.h"
-#include "trace.h"
 
-#include <xen/hvm/ioreq.h>
+#include "hw/xen/xen-hvm-common.h"
+#include "hw/xen/arch_hvm.h"
 #include <xen/hvm/e820.h>
 
-//#define DEBUG_XEN_HVM
-
-#ifdef DEBUG_XEN_HVM
-#define DPRINTF(fmt, ...) \
-    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
-#else
-#define DPRINTF(fmt, ...) \
-    do { } while (0)
-#endif
-
-static MemoryRegion ram_memory, ram_640k, ram_lo, ram_hi;
+static MemoryRegion ram_640k, ram_lo, ram_hi;
 static MemoryRegion *framebuffer;
 static bool xen_in_migration;
 
@@ -74,27 +52,8 @@ struct shared_vmport_iopage {
 };
 typedef struct shared_vmport_iopage shared_vmport_iopage_t;
 #endif
-static shared_vmport_iopage_t *shared_vmport_page;
 
-static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
-{
-    return shared_page->vcpu_ioreq[i].vp_eport;
-}
-static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
-{
-    return &shared_page->vcpu_ioreq[vcpu];
-}
-
-#define BUFFER_IO_MAX_DELAY  100
-
-typedef struct XenPhysmap {
-    hwaddr start_addr;
-    ram_addr_t size;
-    const char *name;
-    hwaddr phys_offset;
-
-    QLIST_ENTRY(XenPhysmap) list;
-} XenPhysmap;
+static shared_vmport_iopage_t *shared_vmport_page;
 
 static QLIST_HEAD(, XenPhysmap) xen_physmap;
 static const XenPhysmap *log_for_dirtybit;
@@ -103,38 +62,6 @@ static unsigned long *dirty_bitmap;
 static Notifier suspend;
 static Notifier wakeup;
 
-typedef struct XenPciDevice {
-    PCIDevice *pci_dev;
-    uint32_t sbdf;
-    QLIST_ENTRY(XenPciDevice) entry;
-} XenPciDevice;
-
-typedef struct XenIOState {
-    ioservid_t ioservid;
-    shared_iopage_t *shared_page;
-    buffered_iopage_t *buffered_io_page;
-    xenforeignmemory_resource_handle *fres;
-    QEMUTimer *buffered_io_timer;
-    CPUState **cpu_by_vcpu_id;
-    /* the evtchn port for polling the notification, */
-    evtchn_port_t *ioreq_local_port;
-    /* evtchn remote and local ports for buffered io */
-    evtchn_port_t bufioreq_remote_port;
-    evtchn_port_t bufioreq_local_port;
-    /* the evtchn fd for polling */
-    xenevtchn_handle *xce_handle;
-    /* which vcpu we are serving */
-    int send_vcpu;
-
-    struct xs_handle *xenstore;
-    MemoryListener memory_listener;
-    MemoryListener io_listener;
-    QLIST_HEAD(, XenPciDevice) dev_list;
-    DeviceListener device_listener;
-
-    Notifier exit;
-} XenIOState;
-
 /* Xen specific function for piix pci */
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
@@ -247,42 +174,6 @@ static void xen_ram_init(PCMachineState *pcms,
     }
 }
 
-void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
-                   Error **errp)
-{
-    unsigned long nr_pfn;
-    xen_pfn_t *pfn_list;
-    int i;
-
-    if (runstate_check(RUN_STATE_INMIGRATE)) {
-        /* RAM already populated in Xen */
-        fprintf(stderr, "%s: do not alloc "RAM_ADDR_FMT
-                " bytes of ram at "RAM_ADDR_FMT" when runstate is INMIGRATE\n",
-                __func__, size, ram_addr);
-        return;
-    }
-
-    if (mr == &ram_memory) {
-        return;
-    }
-
-    trace_xen_ram_alloc(ram_addr, size);
-
-    nr_pfn = size >> TARGET_PAGE_BITS;
-    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
-
-    for (i = 0; i < nr_pfn; i++) {
-        pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
-    }
-
-    if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn, 0, 0, pfn_list)) {
-        error_setg(errp, "xen: failed to populate ram at " RAM_ADDR_FMT,
-                   ram_addr);
-    }
-
-    g_free(pfn_list);
-}
-
 static XenPhysmap *get_physmapping(hwaddr start_addr, ram_addr_t size)
 {
     XenPhysmap *physmap = NULL;
@@ -472,144 +363,6 @@ static int xen_remove_from_physmap(XenIOState *state,
     return 0;
 }
 
-static void xen_set_memory(struct MemoryListener *listener,
-                           MemoryRegionSection *section,
-                           bool add)
-{
-    XenIOState *state = container_of(listener, XenIOState, memory_listener);
-    hwaddr start_addr = section->offset_within_address_space;
-    ram_addr_t size = int128_get64(section->size);
-    bool log_dirty = memory_region_is_logging(section->mr, DIRTY_MEMORY_VGA);
-    hvmmem_type_t mem_type;
-
-    if (section->mr == &ram_memory) {
-        return;
-    } else {
-        if (add) {
-            xen_map_memory_section(xen_domid, state->ioservid,
-                                   section);
-        } else {
-            xen_unmap_memory_section(xen_domid, state->ioservid,
-                                     section);
-        }
-    }
-
-    if (!memory_region_is_ram(section->mr)) {
-        return;
-    }
-
-    if (log_dirty != add) {
-        return;
-    }
-
-    trace_xen_client_set_memory(start_addr, size, log_dirty);
-
-    start_addr &= TARGET_PAGE_MASK;
-    size = TARGET_PAGE_ALIGN(size);
-
-    if (add) {
-        if (!memory_region_is_rom(section->mr)) {
-            xen_add_to_physmap(state, start_addr, size,
-                               section->mr, section->offset_within_region);
-        } else {
-            mem_type = HVMMEM_ram_ro;
-            if (xen_set_mem_type(xen_domid, mem_type,
-                                 start_addr >> TARGET_PAGE_BITS,
-                                 size >> TARGET_PAGE_BITS)) {
-                DPRINTF("xen_set_mem_type error, addr: "HWADDR_FMT_plx"\n",
-                        start_addr);
-            }
-        }
-    } else {
-        if (xen_remove_from_physmap(state, start_addr, size) < 0) {
-            DPRINTF("physmapping does not exist at "HWADDR_FMT_plx"\n", start_addr);
-        }
-    }
-}
-
-static void xen_region_add(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    memory_region_ref(section->mr);
-    xen_set_memory(listener, section, true);
-}
-
-static void xen_region_del(MemoryListener *listener,
-                           MemoryRegionSection *section)
-{
-    xen_set_memory(listener, section, false);
-    memory_region_unref(section->mr);
-}
-
-static void xen_io_add(MemoryListener *listener,
-                       MemoryRegionSection *section)
-{
-    XenIOState *state = container_of(listener, XenIOState, io_listener);
-    MemoryRegion *mr = section->mr;
-
-    if (mr->ops == &unassigned_io_ops) {
-        return;
-    }
-
-    memory_region_ref(mr);
-
-    xen_map_io_section(xen_domid, state->ioservid, section);
-}
-
-static void xen_io_del(MemoryListener *listener,
-                       MemoryRegionSection *section)
-{
-    XenIOState *state = container_of(listener, XenIOState, io_listener);
-    MemoryRegion *mr = section->mr;
-
-    if (mr->ops == &unassigned_io_ops) {
-        return;
-    }
-
-    xen_unmap_io_section(xen_domid, state->ioservid, section);
-
-    memory_region_unref(mr);
-}
-
-static void xen_device_realize(DeviceListener *listener,
-                               DeviceState *dev)
-{
-    XenIOState *state = container_of(listener, XenIOState, device_listener);
-
-    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
-        PCIDevice *pci_dev = PCI_DEVICE(dev);
-        XenPciDevice *xendev = g_new(XenPciDevice, 1);
-
-        xendev->pci_dev = pci_dev;
-        xendev->sbdf = PCI_BUILD_BDF(pci_dev_bus_num(pci_dev),
-                                     pci_dev->devfn);
-        QLIST_INSERT_HEAD(&state->dev_list, xendev, entry);
-
-        xen_map_pcidev(xen_domid, state->ioservid, pci_dev);
-    }
-}
-
-static void xen_device_unrealize(DeviceListener *listener,
-                                 DeviceState *dev)
-{
-    XenIOState *state = container_of(listener, XenIOState, device_listener);
-
-    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
-        PCIDevice *pci_dev = PCI_DEVICE(dev);
-        XenPciDevice *xendev, *next;
-
-        xen_unmap_pcidev(xen_domid, state->ioservid, pci_dev);
-
-        QLIST_FOREACH_SAFE(xendev, &state->dev_list, entry, next) {
-            if (xendev->pci_dev == pci_dev) {
-                QLIST_REMOVE(xendev, entry);
-                g_free(xendev);
-                break;
-            }
-        }
-    }
-}
-
 static void xen_sync_dirty_bitmap(XenIOState *state,
                                   hwaddr start_addr,
                                   ram_addr_t size)
@@ -717,277 +470,6 @@ static MemoryListener xen_memory_listener = {
     .priority = 10,
 };
 
-static MemoryListener xen_io_listener = {
-    .name = "xen-io",
-    .region_add = xen_io_add,
-    .region_del = xen_io_del,
-    .priority = 10,
-};
-
-static DeviceListener xen_device_listener = {
-    .realize = xen_device_realize,
-    .unrealize = xen_device_unrealize,
-};
-
-/* get the ioreq packets from share mem */
-static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
-{
-    ioreq_t *req = xen_vcpu_ioreq(state->shared_page, vcpu);
-
-    if (req->state != STATE_IOREQ_READY) {
-        DPRINTF("I/O request not ready: "
-                "%x, ptr: %x, port: %"PRIx64", "
-                "data: %"PRIx64", count: %u, size: %u\n",
-                req->state, req->data_is_ptr, req->addr,
-                req->data, req->count, req->size);
-        return NULL;
-    }
-
-    xen_rmb(); /* see IOREQ_READY /then/ read contents of ioreq */
-
-    req->state = STATE_IOREQ_INPROCESS;
-    return req;
-}
-
-/* use poll to get the port notification */
-/* ioreq_vec--out,the */
-/* retval--the number of ioreq packet */
-static ioreq_t *cpu_get_ioreq(XenIOState *state)
-{
-    MachineState *ms = MACHINE(qdev_get_machine());
-    unsigned int max_cpus = ms->smp.max_cpus;
-    int i;
-    evtchn_port_t port;
-
-    port = qemu_xen_evtchn_pending(state->xce_handle);
-    if (port == state->bufioreq_local_port) {
-        timer_mod(state->buffered_io_timer,
-                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
-        return NULL;
-    }
-
-    if (port != -1) {
-        for (i = 0; i < max_cpus; i++) {
-            if (state->ioreq_local_port[i] == port) {
-                break;
-            }
-        }
-
-        if (i == max_cpus) {
-            hw_error("Fatal error while trying to get io event!\n");
-        }
-
-        /* unmask the wanted port again */
-        qemu_xen_evtchn_unmask(state->xce_handle, port);
-
-        /* get the io packet from shared memory */
-        state->send_vcpu = i;
-        return cpu_get_ioreq_from_shared_memory(state, i);
-    }
-
-    /* read error or read nothing */
-    return NULL;
-}
-
-static uint32_t do_inp(uint32_t addr, unsigned long size)
-{
-    switch (size) {
-        case 1:
-            return cpu_inb(addr);
-        case 2:
-            return cpu_inw(addr);
-        case 4:
-            return cpu_inl(addr);
-        default:
-            hw_error("inp: bad size: %04x %lx", addr, size);
-    }
-}
-
-static void do_outp(uint32_t addr,
-        unsigned long size, uint32_t val)
-{
-    switch (size) {
-        case 1:
-            return cpu_outb(addr, val);
-        case 2:
-            return cpu_outw(addr, val);
-        case 4:
-            return cpu_outl(addr, val);
-        default:
-            hw_error("outp: bad size: %04x %lx", addr, size);
-    }
-}
-
-/*
- * Helper functions which read/write an object from/to physical guest
- * memory, as part of the implementation of an ioreq.
- *
- * Equivalent to
- *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
- *                          val, req->size, 0/1)
- * except without the integer overflow problems.
- */
-static void rw_phys_req_item(hwaddr addr,
-                             ioreq_t *req, uint32_t i, void *val, int rw)
-{
-    /* Do everything unsigned so overflow just results in a truncated result
-     * and accesses to undesired parts of guest memory, which is up
-     * to the guest */
-    hwaddr offset = (hwaddr)req->size * i;
-    if (req->df) {
-        addr -= offset;
-    } else {
-        addr += offset;
-    }
-    cpu_physical_memory_rw(addr, val, req->size, rw);
-}
-
-static inline void read_phys_req_item(hwaddr addr,
-                                      ioreq_t *req, uint32_t i, void *val)
-{
-    rw_phys_req_item(addr, req, i, val, 0);
-}
-static inline void write_phys_req_item(hwaddr addr,
-                                       ioreq_t *req, uint32_t i, void *val)
-{
-    rw_phys_req_item(addr, req, i, val, 1);
-}
-
-
-static void cpu_ioreq_pio(ioreq_t *req)
-{
-    uint32_t i;
-
-    trace_cpu_ioreq_pio(req, req->dir, req->df, req->data_is_ptr, req->addr,
-                         req->data, req->count, req->size);
-
-    if (req->size > sizeof(uint32_t)) {
-        hw_error("PIO: bad size (%u)", req->size);
-    }
-
-    if (req->dir == IOREQ_READ) {
-        if (!req->data_is_ptr) {
-            req->data = do_inp(req->addr, req->size);
-            trace_cpu_ioreq_pio_read_reg(req, req->data, req->addr,
-                                         req->size);
-        } else {
-            uint32_t tmp;
-
-            for (i = 0; i < req->count; i++) {
-                tmp = do_inp(req->addr, req->size);
-                write_phys_req_item(req->data, req, i, &tmp);
-            }
-        }
-    } else if (req->dir == IOREQ_WRITE) {
-        if (!req->data_is_ptr) {
-            trace_cpu_ioreq_pio_write_reg(req, req->data, req->addr,
-                                          req->size);
-            do_outp(req->addr, req->size, req->data);
-        } else {
-            for (i = 0; i < req->count; i++) {
-                uint32_t tmp = 0;
-
-                read_phys_req_item(req->data, req, i, &tmp);
-                do_outp(req->addr, req->size, tmp);
-            }
-        }
-    }
-}
-
-static void cpu_ioreq_move(ioreq_t *req)
-{
-    uint32_t i;
-
-    trace_cpu_ioreq_move(req, req->dir, req->df, req->data_is_ptr, req->addr,
-                         req->data, req->count, req->size);
-
-    if (req->size > sizeof(req->data)) {
-        hw_error("MMIO: bad size (%u)", req->size);
-    }
-
-    if (!req->data_is_ptr) {
-        if (req->dir == IOREQ_READ) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->addr, req, i, &req->data);
-            }
-        } else if (req->dir == IOREQ_WRITE) {
-            for (i = 0; i < req->count; i++) {
-                write_phys_req_item(req->addr, req, i, &req->data);
-            }
-        }
-    } else {
-        uint64_t tmp;
-
-        if (req->dir == IOREQ_READ) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->addr, req, i, &tmp);
-                write_phys_req_item(req->data, req, i, &tmp);
-            }
-        } else if (req->dir == IOREQ_WRITE) {
-            for (i = 0; i < req->count; i++) {
-                read_phys_req_item(req->data, req, i, &tmp);
-                write_phys_req_item(req->addr, req, i, &tmp);
-            }
-        }
-    }
-}
-
-static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
-{
-    uint32_t sbdf = req->addr >> 32;
-    uint32_t reg = req->addr;
-    XenPciDevice *xendev;
-
-    if (req->size != sizeof(uint8_t) && req->size != sizeof(uint16_t) &&
-        req->size != sizeof(uint32_t)) {
-        hw_error("PCI config access: bad size (%u)", req->size);
-    }
-
-    if (req->count != 1) {
-        hw_error("PCI config access: bad count (%u)", req->count);
-    }
-
-    QLIST_FOREACH(xendev, &state->dev_list, entry) {
-        if (xendev->sbdf != sbdf) {
-            continue;
-        }
-
-        if (!req->data_is_ptr) {
-            if (req->dir == IOREQ_READ) {
-                req->data = pci_host_config_read_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->size);
-                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
-                                            req->size, req->data);
-            } else if (req->dir == IOREQ_WRITE) {
-                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
-                                             req->size, req->data);
-                pci_host_config_write_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->data, req->size);
-            }
-        } else {
-            uint32_t tmp;
-
-            if (req->dir == IOREQ_READ) {
-                tmp = pci_host_config_read_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    req->size);
-                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
-                                            req->size, tmp);
-                write_phys_req_item(req->data, req, 0, &tmp);
-            } else if (req->dir == IOREQ_WRITE) {
-                read_phys_req_item(req->data, req, 0, &tmp);
-                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
-                                             req->size, tmp);
-                pci_host_config_write_common(
-                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
-                    tmp, req->size);
-            }
-        }
-    }
-}
-
 static void regs_to_cpu(vmware_regs_t *vmport_regs, ioreq_t *req)
 {
     X86CPU *cpu;
@@ -1031,226 +513,6 @@ static void handle_vmport_ioreq(XenIOState *state, ioreq_t *req)
     current_cpu = NULL;
 }
 
-static void handle_ioreq(XenIOState *state, ioreq_t *req)
-{
-    trace_handle_ioreq(req, req->type, req->dir, req->df, req->data_is_ptr,
-                       req->addr, req->data, req->count, req->size);
-
-    if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
-            (req->size < sizeof (target_ulong))) {
-        req->data &= ((target_ulong) 1 << (8 * req->size)) - 1;
-    }
-
-    if (req->dir == IOREQ_WRITE)
-        trace_handle_ioreq_write(req, req->type, req->df, req->data_is_ptr,
-                                 req->addr, req->data, req->count, req->size);
-
-    switch (req->type) {
-        case IOREQ_TYPE_PIO:
-            cpu_ioreq_pio(req);
-            break;
-        case IOREQ_TYPE_COPY:
-            cpu_ioreq_move(req);
-            break;
-        case IOREQ_TYPE_VMWARE_PORT:
-            handle_vmport_ioreq(state, req);
-            break;
-        case IOREQ_TYPE_TIMEOFFSET:
-            break;
-        case IOREQ_TYPE_INVALIDATE:
-            xen_invalidate_map_cache();
-            break;
-        case IOREQ_TYPE_PCI_CONFIG:
-            cpu_ioreq_config(state, req);
-            break;
-        default:
-            hw_error("Invalid ioreq type 0x%x\n", req->type);
-    }
-    if (req->dir == IOREQ_READ) {
-        trace_handle_ioreq_read(req, req->type, req->df, req->data_is_ptr,
-                                req->addr, req->data, req->count, req->size);
-    }
-}
-
-static bool handle_buffered_iopage(XenIOState *state)
-{
-    buffered_iopage_t *buf_page = state->buffered_io_page;
-    buf_ioreq_t *buf_req = NULL;
-    bool handled_ioreq = false;
-    ioreq_t req;
-    int qw;
-
-    if (!buf_page) {
-        return 0;
-    }
-
-    memset(&req, 0x00, sizeof(req));
-    req.state = STATE_IOREQ_READY;
-    req.count = 1;
-    req.dir = IOREQ_WRITE;
-
-    for (;;) {
-        uint32_t rdptr = buf_page->read_pointer, wrptr;
-
-        xen_rmb();
-        wrptr = buf_page->write_pointer;
-        xen_rmb();
-        if (rdptr != buf_page->read_pointer) {
-            continue;
-        }
-        if (rdptr == wrptr) {
-            break;
-        }
-        buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
-        req.size = 1U << buf_req->size;
-        req.addr = buf_req->addr;
-        req.data = buf_req->data;
-        req.type = buf_req->type;
-        xen_rmb();
-        qw = (req.size == 8);
-        if (qw) {
-            if (rdptr + 1 == wrptr) {
-                hw_error("Incomplete quad word buffered ioreq");
-            }
-            buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
-                                           IOREQ_BUFFER_SLOT_NUM];
-            req.data |= ((uint64_t)buf_req->data) << 32;
-            xen_rmb();
-        }
-
-        handle_ioreq(state, &req);
-
-        /* Only req.data may get updated by handle_ioreq(), albeit even that
-         * should not happen as such data would never make it to the guest (we
-         * can only usefully see writes here after all).
-         */
-        assert(req.state == STATE_IOREQ_READY);
-        assert(req.count == 1);
-        assert(req.dir == IOREQ_WRITE);
-        assert(!req.data_is_ptr);
-
-        qatomic_add(&buf_page->read_pointer, qw + 1);
-        handled_ioreq = true;
-    }
-
-    return handled_ioreq;
-}
-
-static void handle_buffered_io(void *opaque)
-{
-    XenIOState *state = opaque;
-
-    if (handle_buffered_iopage(state)) {
-        timer_mod(state->buffered_io_timer,
-                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
-    } else {
-        timer_del(state->buffered_io_timer);
-        qemu_xen_evtchn_unmask(state->xce_handle, state->bufioreq_local_port);
-    }
-}
-
-static void cpu_handle_ioreq(void *opaque)
-{
-    XenIOState *state = opaque;
-    ioreq_t *req = cpu_get_ioreq(state);
-
-    handle_buffered_iopage(state);
-    if (req) {
-        ioreq_t copy = *req;
-
-        xen_rmb();
-        handle_ioreq(state, &copy);
-        req->data = copy.data;
-
-        if (req->state != STATE_IOREQ_INPROCESS) {
-            fprintf(stderr, "Badness in I/O request ... not in service?!: "
-                    "%x, ptr: %x, port: %"PRIx64", "
-                    "data: %"PRIx64", count: %u, size: %u, type: %u\n",
-                    req->state, req->data_is_ptr, req->addr,
-                    req->data, req->count, req->size, req->type);
-            destroy_hvm_domain(false);
-            return;
-        }
-
-        xen_wmb(); /* Update ioreq contents /then/ update state. */
-
-        /*
-         * We do this before we send the response so that the tools
-         * have the opportunity to pick up on the reset before the
-         * guest resumes and does a hlt with interrupts disabled which
-         * causes Xen to powerdown the domain.
-         */
-        if (runstate_is_running()) {
-            ShutdownCause request;
-
-            if (qemu_shutdown_requested_get()) {
-                destroy_hvm_domain(false);
-            }
-            request = qemu_reset_requested_get();
-            if (request) {
-                qemu_system_reset(request);
-                destroy_hvm_domain(true);
-            }
-        }
-
-        req->state = STATE_IORESP_READY;
-        qemu_xen_evtchn_notify(state->xce_handle,
-                               state->ioreq_local_port[state->send_vcpu]);
-    }
-}
-
-static void xen_main_loop_prepare(XenIOState *state)
-{
-    int evtchn_fd = -1;
-
-    if (state->xce_handle != NULL) {
-        evtchn_fd = qemu_xen_evtchn_fd(state->xce_handle);
-    }
-
-    state->buffered_io_timer = timer_new_ms(QEMU_CLOCK_REALTIME, handle_buffered_io,
-                                                 state);
-
-    if (evtchn_fd != -1) {
-        CPUState *cpu_state;
-
-        DPRINTF("%s: Init cpu_by_vcpu_id\n", __func__);
-        CPU_FOREACH(cpu_state) {
-            DPRINTF("%s: cpu_by_vcpu_id[%d]=%p\n",
-                    __func__, cpu_state->cpu_index, cpu_state);
-            state->cpu_by_vcpu_id[cpu_state->cpu_index] = cpu_state;
-        }
-        qemu_set_fd_handler(evtchn_fd, cpu_handle_ioreq, NULL, state);
-    }
-}
-
-
-static void xen_hvm_change_state_handler(void *opaque, bool running,
-                                         RunState rstate)
-{
-    XenIOState *state = opaque;
-
-    if (running) {
-        xen_main_loop_prepare(state);
-    }
-
-    xen_set_ioreq_server_state(xen_domid,
-                               state->ioservid,
-                               (rstate == RUN_STATE_RUNNING));
-}
-
-static void xen_exit_notifier(Notifier *n, void *data)
-{
-    XenIOState *state = container_of(n, XenIOState, exit);
-
-    xen_destroy_ioreq_server(xen_domid, state->ioservid);
-    if (state->fres != NULL) {
-        xenforeignmemory_unmap_resource(xen_fmem, state->fres);
-    }
-
-    qemu_xen_evtchn_close(state->xce_handle);
-    xs_daemon_close(state->xenstore);
-}
-
 #ifdef XEN_COMPAT_PHYSMAP
 static void xen_read_physmap(XenIOState *state)
 {
@@ -1310,175 +572,17 @@ static void xen_wakeup_notifier(Notifier *notifier, void *data)
     xc_set_hvm_param(xen_xc, xen_domid, HVM_PARAM_ACPI_S_STATE, 0);
 }
 
-static int xen_map_ioreq_server(XenIOState *state)
-{
-    void *addr = NULL;
-    xen_pfn_t ioreq_pfn;
-    xen_pfn_t bufioreq_pfn;
-    evtchn_port_t bufioreq_evtchn;
-    int rc;
-
-    /*
-     * Attempt to map using the resource API and fall back to normal
-     * foreign mapping if this is not supported.
-     */
-    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_bufioreq != 0);
-    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_ioreq(0) != 1);
-    state->fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
-                                         XENMEM_resource_ioreq_server,
-                                         state->ioservid, 0, 2,
-                                         &addr,
-                                         PROT_READ | PROT_WRITE, 0);
-    if (state->fres != NULL) {
-        trace_xen_map_resource_ioreq(state->ioservid, addr);
-        state->buffered_io_page = addr;
-        state->shared_page = addr + TARGET_PAGE_SIZE;
-    } else if (errno != EOPNOTSUPP) {
-        error_report("failed to map ioreq server resources: error %d handle=%p",
-                     errno, xen_xc);
-        return -1;
-    }
-
-    rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
-                                   (state->shared_page == NULL) ?
-                                   &ioreq_pfn : NULL,
-                                   (state->buffered_io_page == NULL) ?
-                                   &bufioreq_pfn : NULL,
-                                   &bufioreq_evtchn);
-    if (rc < 0) {
-        error_report("failed to get ioreq server info: error %d handle=%p",
-                     errno, xen_xc);
-        return rc;
-    }
-
-    if (state->shared_page == NULL) {
-        DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
-
-        state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
-                                                  PROT_READ | PROT_WRITE,
-                                                  1, &ioreq_pfn, NULL);
-        if (state->shared_page == NULL) {
-            error_report("map shared IO page returned error %d handle=%p",
-                         errno, xen_xc);
-        }
-    }
-
-    if (state->buffered_io_page == NULL) {
-        DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
-
-        state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
-                                                       PROT_READ | PROT_WRITE,
-                                                       1, &bufioreq_pfn,
-                                                       NULL);
-        if (state->buffered_io_page == NULL) {
-            error_report("map buffered IO page returned error %d", errno);
-            return -1;
-        }
-    }
-
-    if (state->shared_page == NULL || state->buffered_io_page == NULL) {
-        return -1;
-    }
-
-    DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
-
-    state->bufioreq_remote_port = bufioreq_evtchn;
-
-    return 0;
-}
-
 void xen_hvm_init_pc(PCMachineState *pcms, MemoryRegion **ram_memory)
 {
     MachineState *ms = MACHINE(pcms);
     unsigned int max_cpus = ms->smp.max_cpus;
-    int i, rc;
+    int rc;
     xen_pfn_t ioreq_pfn;
     XenIOState *state;
 
-    setup_xen_backend_ops();
-
     state = g_new0(XenIOState, 1);
 
-    state->xce_handle = qemu_xen_evtchn_open();
-    if (state->xce_handle == NULL) {
-        perror("xen: event channel open");
-        goto err;
-    }
-
-    state->xenstore = xs_daemon_open();
-    if (state->xenstore == NULL) {
-        perror("xen: xenstore open");
-        goto err;
-    }
-
-    xen_create_ioreq_server(xen_domid, &state->ioservid);
-
-    state->exit.notify = xen_exit_notifier;
-    qemu_add_exit_notifier(&state->exit);
-
-    /*
-     * Register wake-up support in QMP query-current-machine API
-     */
-    qemu_register_wakeup_support();
-
-    rc = xen_map_ioreq_server(state);
-    if (rc < 0) {
-        goto err;
-    }
-
-    /* Note: cpus is empty at this point in init */
-    state->cpu_by_vcpu_id = g_new0(CPUState *, max_cpus);
-
-    rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
-    if (rc < 0) {
-        error_report("failed to enable ioreq server info: error %d handle=%p",
-                     errno, xen_xc);
-        goto err;
-    }
-
-    state->ioreq_local_port = g_new0(evtchn_port_t, max_cpus);
-
-    /* FIXME: how about if we overflow the page here? */
-    for (i = 0; i < max_cpus; i++) {
-        rc = qemu_xen_evtchn_bind_interdomain(state->xce_handle, xen_domid,
-                                              xen_vcpu_eport(state->shared_page,
-                                                             i));
-        if (rc == -1) {
-            error_report("shared evtchn %d bind error %d", i, errno);
-            goto err;
-        }
-        state->ioreq_local_port[i] = rc;
-    }
-
-    rc = qemu_xen_evtchn_bind_interdomain(state->xce_handle, xen_domid,
-                                          state->bufioreq_remote_port);
-    if (rc == -1) {
-        error_report("buffered evtchn bind error %d", errno);
-        goto err;
-    }
-    state->bufioreq_local_port = rc;
-
-    /* Init RAM management */
-#ifdef XEN_COMPAT_PHYSMAP
-    xen_map_cache_init(xen_phys_offset_to_gaddr, state);
-#else
-    xen_map_cache_init(NULL, state);
-#endif
-
-    qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
-
-    state->memory_listener = xen_memory_listener;
-    memory_listener_register(&state->memory_listener, &address_space_memory);
-
-    state->io_listener = xen_io_listener;
-    memory_listener_register(&state->io_listener, &address_space_io);
-
-    state->device_listener = xen_device_listener;
-    QLIST_INIT(&state->dev_list);
-    device_listener_register(&state->device_listener);
-
-    xen_bus_init();
-    xen_be_init();
+    xen_register_ioreq(state, max_cpus, xen_memory_listener);
 
     QLIST_INIT(&xen_physmap);
     xen_read_physmap(state);
@@ -1518,59 +622,11 @@ err:
     exit(1);
 }
 
-void destroy_hvm_domain(bool reboot)
-{
-    xc_interface *xc_handle;
-    int sts;
-    int rc;
-
-    unsigned int reason = reboot ? SHUTDOWN_reboot : SHUTDOWN_poweroff;
-
-    if (xen_dmod) {
-        rc = xendevicemodel_shutdown(xen_dmod, xen_domid, reason);
-        if (!rc) {
-            return;
-        }
-        if (errno != ENOTTY /* old Xen */) {
-            perror("xendevicemodel_shutdown failed");
-        }
-        /* well, try the old thing then */
-    }
-
-    xc_handle = xc_interface_open(0, 0, 0);
-    if (xc_handle == NULL) {
-        fprintf(stderr, "Cannot acquire xenctrl handle\n");
-    } else {
-        sts = xc_domain_shutdown(xc_handle, xen_domid, reason);
-        if (sts != 0) {
-            fprintf(stderr, "xc_domain_shutdown failed to issue %s, "
-                    "sts %d, %s\n", reboot ? "reboot" : "poweroff",
-                    sts, strerror(errno));
-        } else {
-            fprintf(stderr, "Issued domain %d %s\n", xen_domid,
-                    reboot ? "reboot" : "poweroff");
-        }
-        xc_interface_close(xc_handle);
-    }
-}
-
 void xen_register_framebuffer(MemoryRegion *mr)
 {
     framebuffer = mr;
 }
 
-void xen_shutdown_fatal_error(const char *fmt, ...)
-{
-    va_list ap;
-
-    va_start(ap, fmt);
-    vfprintf(stderr, fmt, ap);
-    va_end(ap);
-    fprintf(stderr, "Will destroy the domain.\n");
-    /* destroy the domain */
-    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
-}
-
 void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
 {
     if (unlikely(xen_in_migration)) {
@@ -1602,3 +658,57 @@ void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
         memory_global_dirty_log_stop(GLOBAL_DIRTY_MIGRATION);
     }
 }
+
+void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
+                                bool add)
+{
+    hwaddr start_addr = section->offset_within_address_space;
+    ram_addr_t size = int128_get64(section->size);
+    bool log_dirty = memory_region_is_logging(section->mr, DIRTY_MEMORY_VGA);
+    hvmmem_type_t mem_type;
+
+    if (!memory_region_is_ram(section->mr)) {
+        return;
+    }
+
+    if (log_dirty != add) {
+        return;
+    }
+
+    trace_xen_client_set_memory(start_addr, size, log_dirty);
+
+    start_addr &= TARGET_PAGE_MASK;
+    size = TARGET_PAGE_ALIGN(size);
+
+    if (add) {
+        if (!memory_region_is_rom(section->mr)) {
+            xen_add_to_physmap(state, start_addr, size,
+                               section->mr, section->offset_within_region);
+        } else {
+            mem_type = HVMMEM_ram_ro;
+            if (xen_set_mem_type(xen_domid, mem_type,
+                                 start_addr >> TARGET_PAGE_BITS,
+                                 size >> TARGET_PAGE_BITS)) {
+                DPRINTF("xen_set_mem_type error, addr: "HWADDR_FMT_plx"\n",
+                        start_addr);
+            }
+        }
+    } else {
+        if (xen_remove_from_physmap(state, start_addr, size) < 0) {
+            DPRINTF("physmapping does not exist at "HWADDR_FMT_plx"\n", start_addr);
+        }
+    }
+}
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    switch (req->type) {
+    case IOREQ_TYPE_VMWARE_PORT:
+        handle_vmport_ioreq(state, req);
+        break;
+    default:
+        hw_error("Invalid ioreq type 0x%x\n", req->type);
+    }
+}
diff --git a/hw/xen/meson.build b/hw/xen/meson.build
index 202752e557..afd20754a1 100644
--- a/hw/xen/meson.build
+++ b/hw/xen/meson.build
@@ -29,4 +29,7 @@ specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: xen_specific_ss)
 
 xen_ss = ss.source_set()
 
-xen_ss.add(when: 'CONFIG_XEN', if_true: files('xen-mapcache.c'))
+xen_ss.add(when: 'CONFIG_XEN', if_true: files(
+  'xen-mapcache.c',
+  'xen-hvm-common.c',
+))
diff --git a/hw/xen/trace-events b/hw/xen/trace-events
index f977c7c8c6..67a6c41926 100644
--- a/hw/xen/trace-events
+++ b/hw/xen/trace-events
@@ -42,6 +42,20 @@ xs_node_vscanf(char *path, char *value) "%s %s"
 xs_node_watch(char *path) "%s"
 xs_node_unwatch(char *path) "%s"
 
+# xen-hvm-common.c
+xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: 0x%lx, size 0x%lx"
+xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "0x%"PRIx64" size 0x%lx, log_dirty %i"
+handle_ioreq(void *req, uint32_t type, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p type=%d dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+handle_ioreq_read(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p read type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+handle_ioreq_write(void *req, uint32_t type, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p write type=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+cpu_ioreq_pio(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p pio dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+cpu_ioreq_pio_read_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio read reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
+cpu_ioreq_pio_write_reg(void *req, uint64_t data, uint64_t addr, uint32_t size) "I/O=%p pio write reg data=0x%"PRIx64" port=0x%"PRIx64" size=%d"
+cpu_ioreq_move(void *req, uint32_t dir, uint32_t df, uint32_t data_is_ptr, uint64_t addr, uint64_t data, uint32_t count, uint32_t size) "I/O=%p copy dir=%d df=%d ptr=%d port=0x%"PRIx64" data=0x%"PRIx64" count=%d size=%d"
+xen_map_resource_ioreq(uint32_t id, void *addr) "id: %u addr: %p"
+cpu_ioreq_config_read(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
+cpu_ioreq_config_write(void *req, uint32_t sbdf, uint32_t reg, uint32_t size, uint32_t data) "I/O=%p sbdf=0x%x reg=%u size=%u data=0x%x"
+
 # xen-mapcache.c
 xen_map_cache(uint64_t phys_addr) "want 0x%"PRIx64
 xen_remap_bucket(uint64_t index) "index 0x%"PRIx64
diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
new file mode 100644
index 0000000000..a31b067404
--- /dev/null
+++ b/hw/xen/xen-hvm-common.c
@@ -0,0 +1,860 @@
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "qapi/error.h"
+#include "trace.h"
+
+#include "hw/pci/pci_host.h"
+#include "hw/xen/xen-hvm-common.h"
+#include "hw/xen/xen-bus.h"
+#include "hw/boards.h"
+#include "hw/xen/arch_hvm.h"
+
+MemoryRegion ram_memory;
+
+void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
+                   Error **errp)
+{
+    unsigned long nr_pfn;
+    xen_pfn_t *pfn_list;
+    int i;
+
+    if (runstate_check(RUN_STATE_INMIGRATE)) {
+        /* RAM already populated in Xen */
+        fprintf(stderr, "%s: do not alloc "RAM_ADDR_FMT
+                " bytes of ram at "RAM_ADDR_FMT" when runstate is INMIGRATE\n",
+                __func__, size, ram_addr);
+        return;
+    }
+
+    if (mr == &ram_memory) {
+        return;
+    }
+
+    trace_xen_ram_alloc(ram_addr, size);
+
+    nr_pfn = size >> TARGET_PAGE_BITS;
+    pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
+
+    for (i = 0; i < nr_pfn; i++) {
+        pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
+    }
+
+    if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn, 0, 0, pfn_list)) {
+        error_setg(errp, "xen: failed to populate ram at " RAM_ADDR_FMT,
+                   ram_addr);
+    }
+
+    g_free(pfn_list);
+}
+
+static void xen_set_memory(struct MemoryListener *listener,
+                           MemoryRegionSection *section,
+                           bool add)
+{
+    XenIOState *state = container_of(listener, XenIOState, memory_listener);
+
+    if (section->mr == &ram_memory) {
+        return;
+    } else {
+        if (add) {
+            xen_map_memory_section(xen_domid, state->ioservid,
+                                   section);
+        } else {
+            xen_unmap_memory_section(xen_domid, state->ioservid,
+                                     section);
+        }
+    }
+
+    arch_xen_set_memory(state, section, add);
+}
+
+void xen_region_add(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    memory_region_ref(section->mr);
+    xen_set_memory(listener, section, true);
+}
+
+void xen_region_del(MemoryListener *listener,
+                           MemoryRegionSection *section)
+{
+    xen_set_memory(listener, section, false);
+    memory_region_unref(section->mr);
+}
+
+void xen_io_add(MemoryListener *listener,
+                       MemoryRegionSection *section)
+{
+    XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
+
+    if (mr->ops == &unassigned_io_ops) {
+        return;
+    }
+
+    memory_region_ref(mr);
+
+    xen_map_io_section(xen_domid, state->ioservid, section);
+}
+
+void xen_io_del(MemoryListener *listener,
+                       MemoryRegionSection *section)
+{
+    XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
+
+    if (mr->ops == &unassigned_io_ops) {
+        return;
+    }
+
+    xen_unmap_io_section(xen_domid, state->ioservid, section);
+
+    memory_region_unref(mr);
+}
+
+void xen_device_realize(DeviceListener *listener,
+                               DeviceState *dev)
+{
+    XenIOState *state = container_of(listener, XenIOState, device_listener);
+
+    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
+        PCIDevice *pci_dev = PCI_DEVICE(dev);
+        XenPciDevice *xendev = g_new(XenPciDevice, 1);
+
+        xendev->pci_dev = pci_dev;
+        xendev->sbdf = PCI_BUILD_BDF(pci_dev_bus_num(pci_dev),
+                                     pci_dev->devfn);
+        QLIST_INSERT_HEAD(&state->dev_list, xendev, entry);
+
+        xen_map_pcidev(xen_domid, state->ioservid, pci_dev);
+    }
+}
+
+void xen_device_unrealize(DeviceListener *listener,
+                                 DeviceState *dev)
+{
+    XenIOState *state = container_of(listener, XenIOState, device_listener);
+
+    if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
+        PCIDevice *pci_dev = PCI_DEVICE(dev);
+        XenPciDevice *xendev, *next;
+
+        xen_unmap_pcidev(xen_domid, state->ioservid, pci_dev);
+
+        QLIST_FOREACH_SAFE(xendev, &state->dev_list, entry, next) {
+            if (xendev->pci_dev == pci_dev) {
+                QLIST_REMOVE(xendev, entry);
+                g_free(xendev);
+                break;
+            }
+        }
+    }
+}
+
+MemoryListener xen_io_listener = {
+    .name = "xen-io",
+    .region_add = xen_io_add,
+    .region_del = xen_io_del,
+    .priority = 10,
+};
+
+DeviceListener xen_device_listener = {
+    .realize = xen_device_realize,
+    .unrealize = xen_device_unrealize,
+};
+
+/* get the ioreq packet from shared memory */
+static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
+{
+    ioreq_t *req = xen_vcpu_ioreq(state->shared_page, vcpu);
+
+    if (req->state != STATE_IOREQ_READY) {
+        DPRINTF("I/O request not ready: "
+                "%x, ptr: %x, port: %"PRIx64", "
+                "data: %"PRIx64", count: %u, size: %u\n",
+                req->state, req->data_is_ptr, req->addr,
+                req->data, req->count, req->size);
+        return NULL;
+    }
+
+    xen_rmb(); /* see IOREQ_READY /then/ read contents of ioreq */
+
+    req->state = STATE_IOREQ_INPROCESS;
+    return req;
+}
+
+/*
+ * Poll the event channel and return the pending ioreq, if any.
+ */
+static ioreq_t *cpu_get_ioreq(XenIOState *state)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+    unsigned int max_cpus = ms->smp.max_cpus;
+    int i;
+    evtchn_port_t port;
+
+    port = qemu_xen_evtchn_pending(state->xce_handle);
+    if (port == state->bufioreq_local_port) {
+        timer_mod(state->buffered_io_timer,
+                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
+        return NULL;
+    }
+
+    if (port != -1) {
+        for (i = 0; i < max_cpus; i++) {
+            if (state->ioreq_local_port[i] == port) {
+                break;
+            }
+        }
+
+        if (i == max_cpus) {
+            hw_error("Fatal error while trying to get io event!\n");
+        }
+
+        /* unmask the wanted port again */
+        qemu_xen_evtchn_unmask(state->xce_handle, port);
+
+        /* get the io packet from shared memory */
+        state->send_vcpu = i;
+        return cpu_get_ioreq_from_shared_memory(state, i);
+    }
+
+    /* read error or read nothing */
+    return NULL;
+}
+
+static uint32_t do_inp(uint32_t addr, unsigned long size)
+{
+    switch (size) {
+        case 1:
+            return cpu_inb(addr);
+        case 2:
+            return cpu_inw(addr);
+        case 4:
+            return cpu_inl(addr);
+        default:
+            hw_error("inp: bad size: %04x %lx", addr, size);
+    }
+}
+
+static void do_outp(uint32_t addr,
+        unsigned long size, uint32_t val)
+{
+    switch (size) {
+        case 1:
+            return cpu_outb(addr, val);
+        case 2:
+            return cpu_outw(addr, val);
+        case 4:
+            return cpu_outl(addr, val);
+        default:
+            hw_error("outp: bad size: %04x %lx", addr, size);
+    }
+}
+
+/*
+ * Helper functions which read/write an object from/to physical guest
+ * memory, as part of the implementation of an ioreq.
+ *
+ * Equivalent to
+ *   cpu_physical_memory_rw(addr + (req->df ? -1 : +1) * req->size * i,
+ *                          val, req->size, 0/1)
+ * except without the integer overflow problems.
+ */
+static void rw_phys_req_item(hwaddr addr,
+                             ioreq_t *req, uint32_t i, void *val, int rw)
+{
+    /* Do everything unsigned so overflow just results in a truncated result
+     * and accesses to undesired parts of guest memory, which is up
+     * to the guest */
+    hwaddr offset = (hwaddr)req->size * i;
+    if (req->df) {
+        addr -= offset;
+    } else {
+        addr += offset;
+    }
+    cpu_physical_memory_rw(addr, val, req->size, rw);
+}
+
+static inline void read_phys_req_item(hwaddr addr,
+                                      ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 0);
+}
+static inline void write_phys_req_item(hwaddr addr,
+                                       ioreq_t *req, uint32_t i, void *val)
+{
+    rw_phys_req_item(addr, req, i, val, 1);
+}
+
+
+void cpu_ioreq_pio(ioreq_t *req)
+{
+    uint32_t i;
+
+    trace_cpu_ioreq_pio(req, req->dir, req->df, req->data_is_ptr, req->addr,
+                         req->data, req->count, req->size);
+
+    if (req->size > sizeof(uint32_t)) {
+        hw_error("PIO: bad size (%u)", req->size);
+    }
+
+    if (req->dir == IOREQ_READ) {
+        if (!req->data_is_ptr) {
+            req->data = do_inp(req->addr, req->size);
+            trace_cpu_ioreq_pio_read_reg(req, req->data, req->addr,
+                                         req->size);
+        } else {
+            uint32_t tmp;
+
+            for (i = 0; i < req->count; i++) {
+                tmp = do_inp(req->addr, req->size);
+                write_phys_req_item(req->data, req, i, &tmp);
+            }
+        }
+    } else if (req->dir == IOREQ_WRITE) {
+        if (!req->data_is_ptr) {
+            trace_cpu_ioreq_pio_write_reg(req, req->data, req->addr,
+                                          req->size);
+            do_outp(req->addr, req->size, req->data);
+        } else {
+            for (i = 0; i < req->count; i++) {
+                uint32_t tmp = 0;
+
+                read_phys_req_item(req->data, req, i, &tmp);
+                do_outp(req->addr, req->size, tmp);
+            }
+        }
+    }
+}
+
+static void cpu_ioreq_move(ioreq_t *req)
+{
+    uint32_t i;
+
+    trace_cpu_ioreq_move(req, req->dir, req->df, req->data_is_ptr, req->addr,
+                         req->data, req->count, req->size);
+
+    if (req->size > sizeof(req->data)) {
+        hw_error("MMIO: bad size (%u)", req->size);
+    }
+
+    if (!req->data_is_ptr) {
+        if (req->dir == IOREQ_READ) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->addr, req, i, &req->data);
+            }
+        } else if (req->dir == IOREQ_WRITE) {
+            for (i = 0; i < req->count; i++) {
+                write_phys_req_item(req->addr, req, i, &req->data);
+            }
+        }
+    } else {
+        uint64_t tmp;
+
+        if (req->dir == IOREQ_READ) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->addr, req, i, &tmp);
+                write_phys_req_item(req->data, req, i, &tmp);
+            }
+        } else if (req->dir == IOREQ_WRITE) {
+            for (i = 0; i < req->count; i++) {
+                read_phys_req_item(req->data, req, i, &tmp);
+                write_phys_req_item(req->addr, req, i, &tmp);
+            }
+        }
+    }
+}
+
+static void cpu_ioreq_config(XenIOState *state, ioreq_t *req)
+{
+    uint32_t sbdf = req->addr >> 32;
+    uint32_t reg = req->addr;
+    XenPciDevice *xendev;
+
+    if (req->size != sizeof(uint8_t) && req->size != sizeof(uint16_t) &&
+        req->size != sizeof(uint32_t)) {
+        hw_error("PCI config access: bad size (%u)", req->size);
+    }
+
+    if (req->count != 1) {
+        hw_error("PCI config access: bad count (%u)", req->count);
+    }
+
+    QLIST_FOREACH(xendev, &state->dev_list, entry) {
+        if (xendev->sbdf != sbdf) {
+            continue;
+        }
+
+        if (!req->data_is_ptr) {
+            if (req->dir == IOREQ_READ) {
+                req->data = pci_host_config_read_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->size);
+                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
+                                            req->size, req->data);
+            } else if (req->dir == IOREQ_WRITE) {
+                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
+                                             req->size, req->data);
+                pci_host_config_write_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->data, req->size);
+            }
+        } else {
+            uint32_t tmp;
+
+            if (req->dir == IOREQ_READ) {
+                tmp = pci_host_config_read_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    req->size);
+                trace_cpu_ioreq_config_read(req, xendev->sbdf, reg,
+                                            req->size, tmp);
+                write_phys_req_item(req->data, req, 0, &tmp);
+            } else if (req->dir == IOREQ_WRITE) {
+                read_phys_req_item(req->data, req, 0, &tmp);
+                trace_cpu_ioreq_config_write(req, xendev->sbdf, reg,
+                                             req->size, tmp);
+                pci_host_config_write_common(
+                    xendev->pci_dev, reg, PCI_CONFIG_SPACE_SIZE,
+                    tmp, req->size);
+            }
+        }
+    }
+}
+
+static void handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    trace_handle_ioreq(req, req->type, req->dir, req->df, req->data_is_ptr,
+                       req->addr, req->data, req->count, req->size);
+
+    if (!req->data_is_ptr && (req->dir == IOREQ_WRITE) &&
+        (req->size < sizeof(target_ulong))) {
+        req->data &= ((target_ulong) 1 << (8 * req->size)) - 1;
+    }
+
+    if (req->dir == IOREQ_WRITE) {
+        trace_handle_ioreq_write(req, req->type, req->df, req->data_is_ptr,
+                                 req->addr, req->data, req->count, req->size);
+    }
+
+    switch (req->type) {
+        case IOREQ_TYPE_PIO:
+            cpu_ioreq_pio(req);
+            break;
+        case IOREQ_TYPE_COPY:
+            cpu_ioreq_move(req);
+            break;
+        case IOREQ_TYPE_TIMEOFFSET:
+            break;
+        case IOREQ_TYPE_INVALIDATE:
+            xen_invalidate_map_cache();
+            break;
+        case IOREQ_TYPE_PCI_CONFIG:
+            cpu_ioreq_config(state, req);
+            break;
+        default:
+            arch_handle_ioreq(state, req);
+    }
+    if (req->dir == IOREQ_READ) {
+        trace_handle_ioreq_read(req, req->type, req->df, req->data_is_ptr,
+                                req->addr, req->data, req->count, req->size);
+    }
+}
+
+static bool handle_buffered_iopage(XenIOState *state)
+{
+    buffered_iopage_t *buf_page = state->buffered_io_page;
+    buf_ioreq_t *buf_req = NULL;
+    bool handled_ioreq = false;
+    ioreq_t req;
+    int qw;
+
+    if (!buf_page) {
+        return false;
+    }
+
+    memset(&req, 0x00, sizeof(req));
+    req.state = STATE_IOREQ_READY;
+    req.count = 1;
+    req.dir = IOREQ_WRITE;
+
+    for (;;) {
+        uint32_t rdptr = buf_page->read_pointer, wrptr;
+
+        xen_rmb();
+        wrptr = buf_page->write_pointer;
+        xen_rmb();
+        if (rdptr != buf_page->read_pointer) {
+            continue;
+        }
+        if (rdptr == wrptr) {
+            break;
+        }
+        buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
+        req.size = 1U << buf_req->size;
+        req.addr = buf_req->addr;
+        req.data = buf_req->data;
+        req.type = buf_req->type;
+        xen_rmb();
+        qw = (req.size == 8);
+        if (qw) {
+            if (rdptr + 1 == wrptr) {
+                hw_error("Incomplete quad word buffered ioreq");
+            }
+            buf_req = &buf_page->buf_ioreq[(rdptr + 1) %
+                                           IOREQ_BUFFER_SLOT_NUM];
+            req.data |= ((uint64_t)buf_req->data) << 32;
+            xen_rmb();
+        }
+
+        handle_ioreq(state, &req);
+
+        /*
+         * Only req.data may get updated by handle_ioreq(), albeit even that
+         * should not happen as such data would never make it to the guest (we
+         * can only usefully see writes here after all).
+         */
+        assert(req.state == STATE_IOREQ_READY);
+        assert(req.count == 1);
+        assert(req.dir == IOREQ_WRITE);
+        assert(!req.data_is_ptr);
+
+        qatomic_add(&buf_page->read_pointer, qw + 1);
+        handled_ioreq = true;
+    }
+
+    return handled_ioreq;
+}
+
+static void handle_buffered_io(void *opaque)
+{
+    XenIOState *state = opaque;
+
+    if (handle_buffered_iopage(state)) {
+        timer_mod(state->buffered_io_timer,
+                BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
+    } else {
+        timer_del(state->buffered_io_timer);
+        qemu_xen_evtchn_unmask(state->xce_handle, state->bufioreq_local_port);
+    }
+}
+
+static void cpu_handle_ioreq(void *opaque)
+{
+    XenIOState *state = opaque;
+    ioreq_t *req = cpu_get_ioreq(state);
+
+    handle_buffered_iopage(state);
+    if (req) {
+        ioreq_t copy = *req;
+
+        xen_rmb();
+        handle_ioreq(state, &copy);
+        req->data = copy.data;
+
+        if (req->state != STATE_IOREQ_INPROCESS) {
+            fprintf(stderr, "Badness in I/O request ... not in service?!: "
+                    "%x, ptr: %x, port: %"PRIx64", "
+                    "data: %"PRIx64", count: %u, size: %u, type: %u\n",
+                    req->state, req->data_is_ptr, req->addr,
+                    req->data, req->count, req->size, req->type);
+            destroy_hvm_domain(false);
+            return;
+        }
+
+        xen_wmb(); /* Update ioreq contents /then/ update state. */
+
+        /*
+         * We do this before we send the response so that the tools
+         * have the opportunity to pick up on the reset before the
+         * guest resumes and does a hlt with interrupts disabled which
+         * causes Xen to powerdown the domain.
+         */
+        if (runstate_is_running()) {
+            ShutdownCause request;
+
+            if (qemu_shutdown_requested_get()) {
+                destroy_hvm_domain(false);
+            }
+            request = qemu_reset_requested_get();
+            if (request) {
+                qemu_system_reset(request);
+                destroy_hvm_domain(true);
+            }
+        }
+
+        req->state = STATE_IORESP_READY;
+        qemu_xen_evtchn_notify(state->xce_handle,
+                               state->ioreq_local_port[state->send_vcpu]);
+    }
+}
+
+static void xen_main_loop_prepare(XenIOState *state)
+{
+    int evtchn_fd = -1;
+
+    if (state->xce_handle != NULL) {
+        evtchn_fd = qemu_xen_evtchn_fd(state->xce_handle);
+    }
+
+    state->buffered_io_timer = timer_new_ms(QEMU_CLOCK_REALTIME,
+                                            handle_buffered_io, state);
+
+    if (evtchn_fd != -1) {
+        CPUState *cpu_state;
+
+        DPRINTF("%s: Init cpu_by_vcpu_id\n", __func__);
+        CPU_FOREACH(cpu_state) {
+            DPRINTF("%s: cpu_by_vcpu_id[%d]=%p\n",
+                    __func__, cpu_state->cpu_index, cpu_state);
+            state->cpu_by_vcpu_id[cpu_state->cpu_index] = cpu_state;
+        }
+        qemu_set_fd_handler(evtchn_fd, cpu_handle_ioreq, NULL, state);
+    }
+}
+
+
+void xen_hvm_change_state_handler(void *opaque, bool running,
+                                         RunState rstate)
+{
+    XenIOState *state = opaque;
+
+    if (running) {
+        xen_main_loop_prepare(state);
+    }
+
+    xen_set_ioreq_server_state(xen_domid,
+                               state->ioservid,
+                               (rstate == RUN_STATE_RUNNING));
+}
+
+void xen_exit_notifier(Notifier *n, void *data)
+{
+    XenIOState *state = container_of(n, XenIOState, exit);
+
+    xen_destroy_ioreq_server(xen_domid, state->ioservid);
+    if (state->fres != NULL) {
+        xenforeignmemory_unmap_resource(xen_fmem, state->fres);
+    }
+
+    qemu_xen_evtchn_close(state->xce_handle);
+    xs_daemon_close(state->xenstore);
+}
+
+static int xen_map_ioreq_server(XenIOState *state)
+{
+    void *addr = NULL;
+    xen_pfn_t ioreq_pfn;
+    xen_pfn_t bufioreq_pfn;
+    evtchn_port_t bufioreq_evtchn;
+    int rc;
+
+    /*
+     * Attempt to map using the resource API and fall back to normal
+     * foreign mapping if this is not supported.
+     */
+    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_bufioreq != 0);
+    QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_ioreq(0) != 1);
+    state->fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
+                                         XENMEM_resource_ioreq_server,
+                                         state->ioservid, 0, 2,
+                                         &addr,
+                                         PROT_READ | PROT_WRITE, 0);
+    if (state->fres != NULL) {
+        trace_xen_map_resource_ioreq(state->ioservid, addr);
+        state->buffered_io_page = addr;
+        state->shared_page = addr + XC_PAGE_SIZE;
+    } else if (errno != EOPNOTSUPP) {
+        error_report("failed to map ioreq server resources: error %d handle=%p",
+                     errno, xen_xc);
+        return -1;
+    }
+
+    rc = xen_get_ioreq_server_info(xen_domid, state->ioservid,
+                                   (state->shared_page == NULL) ?
+                                   &ioreq_pfn : NULL,
+                                   (state->buffered_io_page == NULL) ?
+                                   &bufioreq_pfn : NULL,
+                                   &bufioreq_evtchn);
+    if (rc < 0) {
+        error_report("failed to get ioreq server info: error %d handle=%p",
+                     errno, xen_xc);
+        return rc;
+    }
+
+    if (state->shared_page == NULL) {
+        DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
+
+        state->shared_page = xenforeignmemory_map(xen_fmem, xen_domid,
+                                                  PROT_READ | PROT_WRITE,
+                                                  1, &ioreq_pfn, NULL);
+        if (state->shared_page == NULL) {
+            error_report("map shared IO page returned error %d handle=%p",
+                         errno, xen_xc);
+        }
+    }
+
+    if (state->buffered_io_page == NULL) {
+        DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
+
+        state->buffered_io_page = xenforeignmemory_map(xen_fmem, xen_domid,
+                                                       PROT_READ | PROT_WRITE,
+                                                       1, &bufioreq_pfn,
+                                                       NULL);
+        if (state->buffered_io_page == NULL) {
+            error_report("map buffered IO page returned error %d", errno);
+            return -1;
+        }
+    }
+
+    if (state->shared_page == NULL || state->buffered_io_page == NULL) {
+        return -1;
+    }
+
+    DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
+
+    state->bufioreq_remote_port = bufioreq_evtchn;
+
+    return 0;
+}
+
+void destroy_hvm_domain(bool reboot)
+{
+    xc_interface *xc_handle;
+    int sts;
+    int rc;
+
+    unsigned int reason = reboot ? SHUTDOWN_reboot : SHUTDOWN_poweroff;
+
+    if (xen_dmod) {
+        rc = xendevicemodel_shutdown(xen_dmod, xen_domid, reason);
+        if (!rc) {
+            return;
+        }
+        if (errno != ENOTTY /* old Xen */) {
+            perror("xendevicemodel_shutdown failed");
+        }
+        /* well, try the old thing then */
+    }
+
+    xc_handle = xc_interface_open(0, 0, 0);
+    if (xc_handle == NULL) {
+        fprintf(stderr, "Cannot acquire xenctrl handle\n");
+    } else {
+        sts = xc_domain_shutdown(xc_handle, xen_domid, reason);
+        if (sts != 0) {
+            fprintf(stderr, "xc_domain_shutdown failed to issue %s, "
+                    "sts %d, %s\n", reboot ? "reboot" : "poweroff",
+                    sts, strerror(errno));
+        } else {
+            fprintf(stderr, "Issued domain %d %s\n", xen_domid,
+                    reboot ? "reboot" : "poweroff");
+        }
+        xc_interface_close(xc_handle);
+    }
+}
+
+void xen_shutdown_fatal_error(const char *fmt, ...)
+{
+    va_list ap;
+
+    va_start(ap, fmt);
+    vfprintf(stderr, fmt, ap);
+    va_end(ap);
+    fprintf(stderr, "Will destroy the domain.\n");
+    /* destroy the domain */
+    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_ERROR);
+}
+
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener)
+{
+    int i, rc;
+
+    setup_xen_backend_ops();
+
+    state->xce_handle = qemu_xen_evtchn_open();
+    if (state->xce_handle == NULL) {
+        perror("xen: event channel open");
+        goto err;
+    }
+
+    state->xenstore = xs_daemon_open();
+    if (state->xenstore == NULL) {
+        perror("xen: xenstore open");
+        goto err;
+    }
+
+    xen_create_ioreq_server(xen_domid, &state->ioservid);
+
+    state->exit.notify = xen_exit_notifier;
+    qemu_add_exit_notifier(&state->exit);
+
+    /*
+     * Register wake-up support in QMP query-current-machine API
+     */
+    qemu_register_wakeup_support();
+
+    rc = xen_map_ioreq_server(state);
+    if (rc < 0) {
+        goto err;
+    }
+
+    /* Note: cpus is empty at this point in init */
+    state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
+
+    rc = xen_set_ioreq_server_state(xen_domid, state->ioservid, true);
+    if (rc < 0) {
+        error_report("failed to enable ioreq server info: error %d handle=%p",
+                     errno, xen_xc);
+        goto err;
+    }
+
+    state->ioreq_local_port = g_malloc0(max_cpus * sizeof(evtchn_port_t));
+
+    /* FIXME: how about if we overflow the page here? */
+    for (i = 0; i < max_cpus; i++) {
+        rc = qemu_xen_evtchn_bind_interdomain(state->xce_handle, xen_domid,
+                                              xen_vcpu_eport(state->shared_page,
+                                                             i));
+        if (rc == -1) {
+            error_report("shared evtchn %d bind error %d", i, errno);
+            goto err;
+        }
+        state->ioreq_local_port[i] = rc;
+    }
+
+    rc = qemu_xen_evtchn_bind_interdomain(state->xce_handle, xen_domid,
+                                          state->bufioreq_remote_port);
+    if (rc == -1) {
+        error_report("buffered evtchn bind error %d", errno);
+        goto err;
+    }
+    state->bufioreq_local_port = rc;
+
+    /* Init RAM management */
+#ifdef XEN_COMPAT_PHYSMAP
+    xen_map_cache_init(xen_phys_offset_to_gaddr, state);
+#else
+    xen_map_cache_init(NULL, state);
+#endif
+
+    qemu_add_vm_change_state_handler(xen_hvm_change_state_handler, state);
+
+    state->memory_listener = xen_memory_listener;
+    memory_listener_register(&state->memory_listener, &address_space_memory);
+
+    state->io_listener = xen_io_listener;
+    memory_listener_register(&state->io_listener, &address_space_io);
+
+    state->device_listener = xen_device_listener;
+    QLIST_INIT(&state->dev_list);
+    device_listener_register(&state->device_listener);
+
+    xen_bus_init();
+
+    xen_be_init();
+
+    return;
+err:
+    error_report("xen hardware virtual machine initialisation failed");
+    exit(1);
+}
diff --git a/include/hw/i386/xen_arch_hvm.h b/include/hw/i386/xen_arch_hvm.h
new file mode 100644
index 0000000000..1000f8f543
--- /dev/null
+++ b/include/hw/i386/xen_arch_hvm.h
@@ -0,0 +1,11 @@
+#ifndef HW_XEN_ARCH_I386_HVM_H
+#define HW_XEN_ARCH_I386_HVM_H
+
+#include <xen/hvm/ioreq.h>
+#include "hw/xen/xen-hvm-common.h"
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req);
+void arch_xen_set_memory(XenIOState *state,
+                         MemoryRegionSection *section,
+                         bool add);
+#endif
diff --git a/include/hw/xen/arch_hvm.h b/include/hw/xen/arch_hvm.h
new file mode 100644
index 0000000000..26674648d8
--- /dev/null
+++ b/include/hw/xen/arch_hvm.h
@@ -0,0 +1,3 @@
+#if defined(TARGET_I386) || defined(TARGET_X86_64)
+#include "hw/i386/xen_arch_hvm.h"
+#endif
diff --git a/include/hw/xen/xen-hvm-common.h b/include/hw/xen/xen-hvm-common.h
new file mode 100644
index 0000000000..858ea07075
--- /dev/null
+++ b/include/hw/xen/xen-hvm-common.h
@@ -0,0 +1,99 @@
+#ifndef HW_XEN_HVM_COMMON_H
+#define HW_XEN_HVM_COMMON_H
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+
+#include "cpu.h"
+#include "hw/pci/pci.h"
+#include "hw/hw.h"
+#include "hw/xen/xen_native.h"
+#include "hw/xen/xen-legacy-backend.h"
+#include "sysemu/runstate.h"
+#include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
+#include "sysemu/xen-mapcache.h"
+
+#include <xen/hvm/ioreq.h>
+
+extern MemoryRegion ram_memory;
+extern MemoryListener xen_io_listener;
+extern DeviceListener xen_device_listener;
+
+//#define DEBUG_XEN_HVM
+
+#ifdef DEBUG_XEN_HVM
+#define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+    do { } while (0)
+#endif
+
+static inline uint32_t xen_vcpu_eport(shared_iopage_t *shared_page, int i)
+{
+    return shared_page->vcpu_ioreq[i].vp_eport;
+}
+static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
+{
+    return &shared_page->vcpu_ioreq[vcpu];
+}
+
+#define BUFFER_IO_MAX_DELAY  100
+
+typedef struct XenPhysmap {
+    hwaddr start_addr;
+    ram_addr_t size;
+    const char *name;
+    hwaddr phys_offset;
+
+    QLIST_ENTRY(XenPhysmap) list;
+} XenPhysmap;
+
+typedef struct XenPciDevice {
+    PCIDevice *pci_dev;
+    uint32_t sbdf;
+    QLIST_ENTRY(XenPciDevice) entry;
+} XenPciDevice;
+
+typedef struct XenIOState {
+    ioservid_t ioservid;
+    shared_iopage_t *shared_page;
+    buffered_iopage_t *buffered_io_page;
+    xenforeignmemory_resource_handle *fres;
+    QEMUTimer *buffered_io_timer;
+    CPUState **cpu_by_vcpu_id;
+    /* the evtchn port for polling the notification, */
+    evtchn_port_t *ioreq_local_port;
+    /* evtchn remote and local ports for buffered io */
+    evtchn_port_t bufioreq_remote_port;
+    evtchn_port_t bufioreq_local_port;
+    /* the evtchn fd for polling */
+    xenevtchn_handle *xce_handle;
+    /* which vcpu we are serving */
+    int send_vcpu;
+
+    struct xs_handle *xenstore;
+    MemoryListener memory_listener;
+    MemoryListener io_listener;
+    QLIST_HEAD(, XenPciDevice) dev_list;
+    DeviceListener device_listener;
+
+    Notifier exit;
+} XenIOState;
+
+void xen_exit_notifier(Notifier *n, void *data);
+
+void xen_region_add(MemoryListener *listener, MemoryRegionSection *section);
+void xen_region_del(MemoryListener *listener, MemoryRegionSection *section);
+void xen_io_add(MemoryListener *listener, MemoryRegionSection *section);
+void xen_io_del(MemoryListener *listener, MemoryRegionSection *section);
+void xen_device_realize(DeviceListener *listener, DeviceState *dev);
+void xen_device_unrealize(DeviceListener *listener, DeviceState *dev);
+
+void xen_hvm_change_state_handler(void *opaque, bool running, RunState rstate);
+void xen_register_ioreq(XenIOState *state, unsigned int max_cpus,
+                        MemoryListener xen_memory_listener);
+
+void cpu_ioreq_pio(ioreq_t *req);
+#endif /* HW_XEN_HVM_COMMON_H */
-- 
2.17.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 22:48:22 2023
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <qemu-devel@nongnu.org>
CC: <xen-devel@lists.xenproject.org>, <vikram.garhwal@amd.com>,
	<stefano.stabellini@amd.com>, Peter Maydell <peter.maydell@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard
	<anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, "open list:ARM TCG
 CPUs" <qemu-arm@nongnu.org>
Subject: [QEMU][PATCH v6 09/10] hw/arm: introduce xenpvh machine
Date: Tue, 11 Apr 2023 15:47:45 -0700
Message-ID: <20230411224746.16152-10-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411224746.16152-1-vikram.garhwal@amd.com>
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

Add a new machine, xenpvh, which creates an IOREQ server to register/connect
with the Xen hypervisor.

Optional: when CONFIG_TPM is enabled, the machine also creates a
tpm-tis-device, adds a TPM emulator, and connects to swtpm running on the host
machine via a chardev socket, providing TPM functionality for a guest domain.

Extra command-line options for aarch64 xenpvh QEMU to connect to swtpm:
    -chardev socket,id=chrtpm,path=/tmp/myvtpm2/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -machine tpm-base-addr=0x0c000000

swtpm implements a TPM software emulator (TPM 1.2 and TPM 2.0) built on libtpms
and provides access to TPM functionality over socket, chardev, and CUSE
interfaces.
GitHub repo: https://github.com/stefanberger/swtpm
Example of starting swtpm on the host machine:
    mkdir /tmp/vtpm2
    swtpm socket --tpmstate dir=/tmp/vtpm2 \
    --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 docs/system/arm/xenpvh.rst    |  34 +++++++
 docs/system/target-arm.rst    |   1 +
 hw/arm/meson.build            |   2 +
 hw/arm/xen_arm.c              | 181 ++++++++++++++++++++++++++++++++++
 include/hw/arm/xen_arch_hvm.h |   9 ++
 include/hw/xen/arch_hvm.h     |   2 +
 6 files changed, 229 insertions(+)
 create mode 100644 docs/system/arm/xenpvh.rst
 create mode 100644 hw/arm/xen_arm.c
 create mode 100644 include/hw/arm/xen_arch_hvm.h

diff --git a/docs/system/arm/xenpvh.rst b/docs/system/arm/xenpvh.rst
new file mode 100644
index 0000000000..e1655c7ab8
--- /dev/null
+++ b/docs/system/arm/xenpvh.rst
@@ -0,0 +1,34 @@
+XENPVH (``xenpvh``)
+=========================================
+This machine creates an IOREQ server to register/connect with the Xen
+hypervisor.
+
+When TPM is enabled, this machine also creates a tpm-tis-device at a
+user-provided TPM base address, adds a TPM emulator, and connects to a swtpm
+application running on the host machine via a chardev socket. This enables
+xenpvh to provide TPM functionality for a guest domain.
+
+More information about TPM use and installing the swtpm Linux application can
+be found in docs/specs/tpm.rst.
+
+Example of starting swtpm on the host machine:
+
+.. code-block:: console
+
+    mkdir /tmp/vtpm2
+    swtpm socket --tpmstate dir=/tmp/vtpm2 \
+    --ctrl type=unixio,path=/tmp/vtpm2/swtpm-sock &
+
+Sample QEMU xenpvh command for running and connecting with Xen:
+
+.. code-block:: console
+
+    qemu-system-aarch64 -xen-domid 1 \
+    -chardev socket,id=libxl-cmd,path=qmp-libxl-1,server=on,wait=off \
+    -mon chardev=libxl-cmd,mode=control \
+    -chardev socket,id=libxenstat-cmd,path=qmp-libxenstat-1,server=on,wait=off \
+    -mon chardev=libxenstat-cmd,mode=control \
+    -xen-attach -name guest0 -vnc none -display none -nographic \
+    -machine xenpvh -m 1301 \
+    -chardev socket,id=chrtpm,path=/tmp/vtpm2/swtpm-sock \
+    -tpmdev emulator,id=tpm0,chardev=chrtpm -machine tpm-base-addr=0x0C000000
+
+In the above QEMU command, the last two lines connect the xenpvh QEMU instance
+to swtpm via a chardev socket.
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
index 91ebc26c6d..af8d7c77d6 100644
--- a/docs/system/target-arm.rst
+++ b/docs/system/target-arm.rst
@@ -106,6 +106,7 @@ undocumented; you can get a complete list by running
    arm/stm32
    arm/virt
    arm/xlnx-versal-virt
+   arm/xenpvh
 
 Emulated CPU architecture support
 =================================
diff --git a/hw/arm/meson.build b/hw/arm/meson.build
index b545ba0e4f..1b2a01a005 100644
--- a/hw/arm/meson.build
+++ b/hw/arm/meson.build
@@ -62,6 +62,8 @@ arm_ss.add(when: 'CONFIG_FSL_IMX7', if_true: files('fsl-imx7.c', 'mcimx7d-sabre.
 arm_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmuv3.c'))
 arm_ss.add(when: 'CONFIG_FSL_IMX6UL', if_true: files('fsl-imx6ul.c', 'mcimx6ul-evk.c'))
 arm_ss.add(when: 'CONFIG_NRF51_SOC', if_true: files('nrf51_soc.c'))
+arm_ss.add(when: 'CONFIG_XEN', if_true: files('xen_arm.c'))
+arm_ss.add_all(xen_ss)
 
 softmmu_ss.add(when: 'CONFIG_ARM_SMMUV3', if_true: files('smmu-common.c'))
 softmmu_ss.add(when: 'CONFIG_EXYNOS4', if_true: files('exynos4_boards.c'))
diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
new file mode 100644
index 0000000000..19b1cb81ad
--- /dev/null
+++ b/hw/arm/xen_arm.c
@@ -0,0 +1,181 @@
+/*
+ * QEMU ARM Xen PVH Machine
+ *
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/error-report.h"
+#include "qapi/qapi-commands-migration.h"
+#include "qapi/visitor.h"
+#include "hw/boards.h"
+#include "hw/sysbus.h"
+#include "sysemu/block-backend.h"
+#include "sysemu/tpm_backend.h"
+#include "sysemu/sysemu.h"
+#include "hw/xen/xen-hvm-common.h"
+#include "sysemu/tpm.h"
+#include "hw/xen/arch_hvm.h"
+
+#define TYPE_XEN_ARM  MACHINE_TYPE_NAME("xenpvh")
+OBJECT_DECLARE_SIMPLE_TYPE(XenArmState, XEN_ARM)
+
+static MemoryListener xen_memory_listener = {
+    .region_add = xen_region_add,
+    .region_del = xen_region_del,
+    .log_start = NULL,
+    .log_stop = NULL,
+    .log_sync = NULL,
+    .log_global_start = NULL,
+    .log_global_stop = NULL,
+    .priority = 10,
+};
+
+struct XenArmState {
+    /*< private >*/
+    MachineState parent;
+
+    XenIOState *state;
+
+    struct {
+        uint64_t tpm_base_addr;
+    } cfg;
+};
+
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req)
+{
+    hw_error("Invalid ioreq type 0x%x\n", req->type);
+
+    return;
+}
+
+void arch_xen_set_memory(XenIOState *state, MemoryRegionSection *section,
+                         bool add)
+{
+}
+
+void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
+{
+}
+
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+}
+
+#ifdef CONFIG_TPM
+static void xen_enable_tpm(XenArmState *xam)
+{
+    Error *errp = NULL;
+    DeviceState *dev;
+    SysBusDevice *busdev;
+
+    TPMBackend *be = qemu_find_tpm_be("tpm0");
+    if (be == NULL) {
+        DPRINTF("Couldn't find the backend for tpm0\n");
+        return;
+    }
+    dev = qdev_new(TYPE_TPM_TIS_SYSBUS);
+    object_property_set_link(OBJECT(dev), "tpmdev", OBJECT(be), &errp);
+    object_property_set_str(OBJECT(dev), "tpmdev", be->id, &errp);
+    busdev = SYS_BUS_DEVICE(dev);
+    sysbus_realize_and_unref(busdev, &error_fatal);
+    sysbus_mmio_map(busdev, 0, xam->cfg.tpm_base_addr);
+
+    DPRINTF("Connected tpmdev at address 0x%lx\n", xam->cfg.tpm_base_addr);
+}
+#endif
+
+static void xen_arm_init(MachineState *machine)
+{
+    XenArmState *xam = XEN_ARM(machine);
+
+    xam->state = g_new0(XenIOState, 1);
+
+    xen_register_ioreq(xam->state, machine->smp.cpus, xen_memory_listener);
+
+#ifdef CONFIG_TPM
+    if (xam->cfg.tpm_base_addr) {
+        xen_enable_tpm(xam);
+    } else {
+        DPRINTF("tpm-base-addr is not provided. TPM will not be enabled\n");
+    }
+#endif
+}
+
+#ifdef CONFIG_TPM
+static void xen_arm_get_tpm_base_addr(Object *obj, Visitor *v,
+                                      const char *name, void *opaque,
+                                      Error **errp)
+{
+    XenArmState *xam = XEN_ARM(obj);
+    uint64_t value = xam->cfg.tpm_base_addr;
+
+    visit_type_uint64(v, name, &value, errp);
+}
+
+static void xen_arm_set_tpm_base_addr(Object *obj, Visitor *v,
+                                      const char *name, void *opaque,
+                                      Error **errp)
+{
+    XenArmState *xam = XEN_ARM(obj);
+    uint64_t value;
+
+    if (!visit_type_uint64(v, name, &value, errp)) {
+        return;
+    }
+
+    xam->cfg.tpm_base_addr = value;
+}
+#endif
+
+static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
+{
+
+    MachineClass *mc = MACHINE_CLASS(oc);
+    mc->desc = "Xen Para-virtualized PC";
+    mc->init = xen_arm_init;
+    mc->max_cpus = 1;
+    mc->default_machine_opts = "accel=xen";
+
+#ifdef CONFIG_TPM
+    object_class_property_add(oc, "tpm-base-addr", "uint64_t",
+                              xen_arm_get_tpm_base_addr,
+                              xen_arm_set_tpm_base_addr,
+                              NULL, NULL);
+    object_class_property_set_description(oc, "tpm-base-addr",
+                                          "Set Base address for TPM device.");
+
+    machine_class_allow_dynamic_sysbus_dev(mc, TYPE_TPM_TIS_SYSBUS);
+#endif
+}
+
+static const TypeInfo xen_arm_machine_type = {
+    .name = TYPE_XEN_ARM,
+    .parent = TYPE_MACHINE,
+    .class_init = xen_arm_machine_class_init,
+    .instance_size = sizeof(XenArmState),
+};
+
+static void xen_arm_machine_register_types(void)
+{
+    type_register_static(&xen_arm_machine_type);
+}
+
+type_init(xen_arm_machine_register_types)
diff --git a/include/hw/arm/xen_arch_hvm.h b/include/hw/arm/xen_arch_hvm.h
new file mode 100644
index 0000000000..8fd645e723
--- /dev/null
+++ b/include/hw/arm/xen_arch_hvm.h
@@ -0,0 +1,9 @@
+#ifndef HW_XEN_ARCH_ARM_HVM_H
+#define HW_XEN_ARCH_ARM_HVM_H
+
+#include <xen/hvm/ioreq.h>
+void arch_handle_ioreq(XenIOState *state, ioreq_t *req);
+void arch_xen_set_memory(XenIOState *state,
+                         MemoryRegionSection *section,
+                         bool add);
+#endif
diff --git a/include/hw/xen/arch_hvm.h b/include/hw/xen/arch_hvm.h
index 26674648d8..c7c515220d 100644
--- a/include/hw/xen/arch_hvm.h
+++ b/include/hw/xen/arch_hvm.h
@@ -1,3 +1,5 @@
 #if defined(TARGET_I386) || defined(TARGET_X86_64)
 #include "hw/i386/xen_arch_hvm.h"
+#elif defined(TARGET_ARM) || defined(TARGET_AARCH64)
+#include "hw/arm/xen_arch_hvm.h"
 #endif
-- 
2.17.0
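As a usage sketch of the TPM wiring documented in xenpvh.rst above, the shell fragment below derives both the swtpm and QEMU argument strings from a single TPM state directory, so the control socket path cannot drift between the two commands. The directory, socket name and base address are examples taken from the docs, not fixed values.

```shell
# Sketch only: build both command lines from one TPM state directory so the
# swtpm control socket and QEMU's chardev path stay consistent.
TPM_DIR=/tmp/vtpm2
TPM_SOCK="$TPM_DIR/swtpm-sock"
TPM_BASE=0x0C000000   # example guest address for the TPM-TIS MMIO window

SWTPM_CMD="swtpm socket --tpmstate dir=$TPM_DIR --ctrl type=unixio,path=$TPM_SOCK"
QEMU_TPM_ARGS="-chardev socket,id=chrtpm,path=$TPM_SOCK \
-tpmdev emulator,id=tpm0,chardev=chrtpm -machine tpm-base-addr=$TPM_BASE"

echo "$SWTPM_CMD"
echo "$QEMU_TPM_ARGS"
```

Running the swtpm command first and then passing the QEMU arguments to qemu-system-aarch64 reproduces the invocation shown in the documentation.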



From xen-devel-bounces@lists.xenproject.org Tue Apr 11 23:41:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Apr 2023 23:41:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Jan
 Beulich <jbeulich@suse.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>, Wei
 Liu <wl@xen.org>,
        George Dunlap <george.dunlap@citrix.com>,
        Julien Grall
	<julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Paul Durrant
	<paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Topic: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Index: AQHZVrdzsdckomMx4kauxHkZQ597Iq79mAMAgClI2AA=
Date: Tue, 11 Apr 2023 23:41:04 +0000
Message-ID: <873556xa0g.fsf@epam.com>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger>
In-Reply-To: <ZBNA9q5DXJYG3KVp@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
user-agent: mu4e 1.8.9; emacs 28.2
Content-Type: text/plain; charset="utf-8"
Content-ID: <D85597D42357784198C9CE8607193FA2@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

DQpIaSBSb2dlciwNCg0KUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+IHdy
aXRlczoNCg0KPiBPbiBUdWUsIE1hciAxNCwgMjAyMyBhdCAwODo1NjoyOVBNICswMDAwLCBWb2xv
ZHlteXIgQmFiY2h1ayB3cm90ZToNCj4+IFByaW9yIHRvIHRoaXMgY2hhbmdlLCBsaWZldGltZSBv
ZiBwY2lfZGV2IG9iamVjdHMgd2FzIHByb3RlY3RlZCBieSBnbG9iYWwNCj4+IHBjaWRldnNfbG9j
aygpLiBMb25nLXRlcm0gcGxhbiBpcyB0byByZW1vdmUgdGhpcyBsb2csIHNvIHdlIG5lZWQgc29t
ZQ0KPiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBe
IGxvY2sNCj4NCj4gSSB3b3VsZG4ndCBzYXkgcmVtb3ZlLCBhcyBvbmUgd2F5IG9yIGFub3RoZXIg
d2UgbmVlZCBhIGxvY2sgdG8gcHJvdGVjdA0KPiBjb25jdXJyZW50IGFjY2Vzc2VzLg0KPg0KDQpJ
J2xsIHdyaXRlICJyZXBsYWNlIHRoaXMgZ2xvYmFsIGxvY2sgd2l0aCBjb3VwbGUgb2YgbW9yZSBn
cmFudWxhcg0KbG9ja2luZyBkZXZpY2VzIg0KaWYgdGhpcyBpcyBva2F5IGZvciB5b3UuDQoNCj4+
IG90aGVyIG1lY2hhbmlzbSB0byBlbnN1cmUgdGhhdCB0aG9zZSBvYmplY3RzIHdpbGwgbm90IGRp
c2FwcGVhciB1bmRlcg0KPj4gZmVldCBvZiBjb2RlIHRoYXQgYWNjZXNzIHRoZW0uIFJlZmVyZW5j
ZSBjb3VudGluZyBpcyBhIGdvb2QgY2hvaWNlIGFzDQo+PiBpdCBwcm92aWRlcyBlYXN5IHRvIGNv
bXByZWhlbmQgd2F5IHRvIGNvbnRyb2wgb2JqZWN0IGxpZmV0aW1lLg0KPj4gDQo+PiBUaGlzIHBh
dGNoIGFkZHMgdHdvIG5ldyBoZWxwZXIgZnVuY3Rpb25zOiBwY2lkZXZfZ2V0KCkgYW5kDQo+PiBw
Y2lkZXZfcHV0KCkuIHBjaWRldl9nZXQoKSB3aWxsIGluY3JlYXNlIHJlZmVyZW5jZSBjb3VudGVy
LCB3aGlsZQ0KPj4gcGNpZGV2X3B1dCgpIHdpbGwgZGVjcmVhc2UgaXQsIGRlc3Ryb3lpbmcgb2Jq
ZWN0IHdoZW4gY291bnRlciByZWFjaGVzDQo+PiB6ZXJvLg0KPj4gDQo+PiBwY2lkZXZfZ2V0KCkg
c2hvdWxkIGJlIHVzZWQgb25seSB3aGVuIHlvdSBhbHJlYWR5IGhhdmUgYSB2YWxpZCBwb2ludGVy
DQo+PiB0byB0aGUgb2JqZWN0IG9yIHlvdSBhcmUgaG9sZGluZyBsb2NrIHRoYXQgcHJvdGVjdHMg
b25lIG9mIHRoZQ0KPj4gbGlzdHMgKGRvbWFpbiwgcHNlZyBvciBhdHMpIHRoYXQgc3RvcmUgcGNp
X2RldiBzdHJ1Y3RzLg0KPj4gDQo+PiBwY2lkZXZfZ2V0KCkgaXMgcmFyZWx5IHVzZWQgZGlyZWN0
bHksIGJlY2F1c2UgdGhlcmUgYWxyZWFkeSBhcmUNCj4+IGZ1bmN0aW9ucyB0aGF0IHdpbGwgcHJv
dmlkZSB2YWxpZCBwb2ludGVyIHRvIHBjaV9kZXYgc3RydWN0Og0KPj4gcGNpX2dldF9wZGV2KCks
IHBjaV9nZXRfcmVhbF9wZGV2KCkuIFRoZXkgd2lsbCBsb2NrIGFwcHJvcHJpYXRlIGxpc3QsDQo+
PiBmaW5kIG5lZWRlZCBvYmplY3QgYW5kIGluY3JlYXNlIGl0cyByZWZlcmVuY2UgY291bnRlciBi
ZWZvcmUgcmV0dXJuaW5nDQo+PiB0byB0aGUgY2FsbGVyLg0KPj4gDQo+PiBOYXR1cmFsbHksIHBj
aV9wdXQoKSBzaG91bGQgYmUgY2FsbGVkIGFmdGVyIGZpbmlzaGluZyB3b3JraW5nIHdpdGggYQ0K
Pj4gcmVjZWl2ZWQgb2JqZWN0LiBUaGlzIGlzIHRoZSByZWFzb24gd2h5IHRoaXMgcGF0Y2ggaGF2
ZSBzbyBtYW55DQo+PiBwY2lkZXZfcHV0KClzIGFuZCBzbyBsaXR0bGUgcGNpZGV2X2dldCgpczog
ZXhpc3RpbmcgY2FsbHMgdG8NCj4+IHBjaV9nZXRfKigpIGZ1bmN0aW9ucyBub3cgd2lsbCBpbmNy
ZWFzZSByZWZlcmVuY2UgY291bnRlcg0KPj4gYXV0b21hdGljYWxseSwgd2UganVzdCBuZWVkIHRv
IGRlY3JlYXNlIGl0IGJhY2sgd2hlbiB3ZSBmaW5pc2hlZC4NCj4NCj4gQWZ0ZXIgbG9va2luZyBh
IGJpdCBpbnRvIHRoaXMsIEkgd291bGQgbGlrZSB0byBhc2sgd2hldGhlciBpdCdzIGJlZW4NCj4g
Y29uc2lkZXJlZCB0aGUgbmVlZCB0byBpbmNyZWFzZSB0aGUgcmVmY291bnQgZm9yIGVhY2ggdXNl
IG9mIGEgcGRldi4NCj4NCg0KVGhpcyBpcyBob3cgTGludXggdXNlcyByZWZlcmVuY2UgbG9ja2lu
Zy4gSXQgZGVjcmVhc2VzIGNvZ25pdGl2ZSBsb2FkDQphbmQgY2hhbmNlIGZvciBhbiBlcnJvciwg
YXMgdGhlcmUgaXMgYSBzaW1wbGUgc2V0IG9mIHJ1bGVzLCB3aGljaCB5b3UNCmZvbGxvdy4NCg0K
PiBGb3IgZXhhbXBsZSBJIHdvdWxkIGNvbnNpZGVyIHRoZSBpbml0aWFsIGFsbG9jX3BkZXYoKSB0
byB0YWtlIGENCj4gcmVmY291bnQsIGFuZCB0aGVuIHBjaV9yZW1vdmVfZGV2aWNlKCkgX211c3Rf
IGJlIHRoZSBmdW5jdGlvbiB0aGF0DQo+IHJlbW92ZXMgdGhlIGxhc3QgcmVmY291bnQsIHNvIHRo
YXQgaXQgY2FuIHJldHVybiAtRUJVU1kgb3RoZXJ3aXNlIChzZWUNCj4gbXkgY29tbWVudCBiZWxv
dykuDQoNCkkgdGVuZCB0byBkaXNhZ3JlZSB0aGVyZSwgYXMgdGhpcyBydWlucyB0aGUgdmVyeSBp
ZGVhIG9mIHJlZmVyZW5jZQ0KY291bnRpbmcuIFdlIGNhbid0IGtub3cgd2hvIGVsc2UgaG9sZHMg
cmVmZXJlbmNlIHJpZ2h0IG5vdy4gT2theSwgd2UNCm1pZ2h0IGtub3csIGJ1dCB0aGlzIHJlcXVp
cmVzIGFkZGl0aW9uYWwgbG9jayB0byBzZXJpYWxpemUNCmFjY2Vzc2VzLiBXaGljaCwgaW4gdHVy
biwgbWFrZXMgcmVmY291bnQgdW4tbmVlZGVkLg0KDQo+DQo+IEkgd291bGQgYWxzbyB0aGluayB0
aGF0IGhhdmluZyB0aGUgZGV2aWNlIGFzc2lnbmVkIHRvIGEgZ3Vlc3Qgd2lsbCB0YWtlDQo+IGFu
b3RoZXIgcmVmY291bnQsIGFuZCB0aGVuIGFueSB1c2FnZSBmcm9tIGZ1cnRoZXIgY2FsbGVycyAo
aWU6IGxpa2UNCj4gdnBjaSkgd2lsbCBuZWVkIHNvbWUga2luZCBvZiBwcm90ZWN0aW9uIGZyb20g
cHJldmVudGluZyB0aGUgZGV2aWNlDQo+IGZyb20gYmVpbmcgZGVhc3NpZ25lZCBmcm9tIGEgZG9t
YWluIHdoaWxlIHZQQ0kgaGFuZGxlcnMgYXJlIHJ1bm5pbmcsDQo+IGFuZCB0aGUgY3VycmVudCBy
ZWZjb3VudCB3b24ndCBoZWxwIHdpdGggdGhhdC4NCg0KWWVzLCBpZGVhIG9mIHRoaXMgcmVmY291
bnRpbmcgaXMgdG8gZW5zdXJlIHRoYXQgYSBwZGV2IG9iamVjdCBleGlzdHMgYXMgYW4NCnZhbGlk
IG9iamVjdCBpbiBtZW1vcnkgaWYgd2UgYXJlIGhvbGRpbmcgYSBsb25nLXRlcm0gcG9pbnRlciB0
bw0KaXQuIEluZGVlZCwgdlBDSSBoYW5kbGVycyBzaG91bGQgdXNlIHNvbWUgb3RoZXIgbWVjaGFu
aXNtIHRvIGVuc3VyZSB0aGF0DQpwZGV2IGlzIG5vdCBiZWluZyByZS1hc3NpZ25lZCB3aGlsZSBo
YW5kbGVycyBhcmUgcnVubmluZy4gSSBiZWxpZXZlLA0KdGhpcyBpcyB0aGUgdGFzayBvZiB2cGNp
LT5sb2NrLiBTaG91bGQgd2UgY2FsbA0KdnBjaV9yZW1vdmVfZGV2aWNlL3ZwY2lfYWRkX2hhbmRs
ZXJzIGVhY2ggdGltZSB3ZSByZS1hc3NpZ24gYSBQQ0kgZGV2aWNlPw0KDQo+DQo+IFRoYXQgbWFr
ZXMgbWUgd29uZGVyIGlmIGZvciBleGFtcGxlIGNhbGxlcnMgb2YgcGNpX2dldF9wZGV2KGQsIHNi
ZGYpDQo+IGRvIG5lZWQgdG8gdGFrZSBhbiBleHRyYSByZWZjb3VudCwgYmVjYXVzZSBzdWNoIGFj
Y2VzcyBpcyBhbHJlYWR5DQo+IHByb3RlY3RlZCBmcm9tIHRoZSBwZGV2IGdvaW5nIGF3YXkgYnkg
dGhlIGZhY3QgdGhhdCB0aGUgZGV2aWNlIGlzDQo+IGFzc2lnbmVkIHRvIGEgZ3Vlc3QuICBCdXQg
bWF5YmUgaXQncyB0b28gbXVjaCB3b3JrIHRvIHNlcGFyYXRlIHVzZXJzDQo+IG9mIHBjaV9nZXRf
cGRldihkLCAuLi4pOyB2cyBwY2lfZ2V0X3BkZXYoTlVMTCwgLi4uKTsuDQo+DQo+IFRoZXJlJ3Mg
YWxzbyBhIHdpbmRvdyB3aGVuIHRoZSByZWZjb3VudCBpcyBkcm9wcGVkIHRvIDAsIGFuZCB0aGUN
Cj4gZGVzdHJ1Y3Rpb24gZnVuY3Rpb24gaXMgY2FsbGVkLCBidXQgYXQgdGhlIHNhbWUgdGltZSBh
IGNvbmN1cnJlbnQNCj4gdGhyZWFkIGNvdWxkIGF0dGVtcHQgdG8gdGFrZSBhIHJlZmVyZW5jZSB0
byB0aGUgcGRldiBzdGlsbD8NCg0KTGFzdCBwY2lkZXZfcHV0KCkgd291bGQgYmUgY2FsbGVkIGJ5
IHBjaV9yZW1vdmVfZGV2aWNlKCksIGFmdGVyIHJlbW92aW5nDQppdCBmcm9tIGFsbCBsaXN0cy4g
VGhpcyBzaG91bGQgcHJldmVudCBvdGhlciB0aHJlYWRzIGZyb20gb2J0YWluaW5nIGEgdmFsaWQN
CnJlZmVyZW5jZSB0byB0aGUgcGRldi4NCg0KPg0KPj4gDQo+PiBUaGlzIHBhdGNoIHJlbW92ZXMg
ImNvbnN0IiBxdWFsaWZpZXIgZnJvbSBzb21lIHBkZXYgcG9pbnRlcnMgYmVjYXVzZQ0KPj4gcGNp
ZGV2X3B1dCgpIHRlY2huaWNhbGx5IGFsdGVycyB0aGUgY29udGVudHMgb2YgcGNpX2RldiBzdHJ1
Y3R1cmUuDQo+PiANCj4+IFNpZ25lZC1vZmYtYnk6IFZvbG9keW15ciBCYWJjaHVrIDx2b2xvZHlt
eXJfYmFiY2h1a0BlcGFtLmNvbT4NCj4+IFN1Z2dlc3RlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPg0KPj4gDQo+PiAtLS0NCj4+IA0KPj4gdjM6DQo+PiAgLSBNb3ZlZCBpbiBm
cm9tIGFub3RoZXIgcGF0Y2ggc2VyaWVzDQo+PiAgLSBGaXhlZCBjb2RlIGZvcm1hdHRpbmcgKHRh
YnMgLT4gc3BhY2VzKQ0KPj4gIC0gUmVtb3ZlZCBlcnJvbmVvdXMgcGNpZGV2X3B1dCBpbiB2Z2Eu
Yw0KPj4gIC0gQWRkZWQgbWlzc2luZyBwY2lkZXZfcHV0IGluIGNvdXBsZSBvZiBwbGFjZXMNCj4+
ICAtIHJlbW92ZWQgbWVudGlvbiBvZiBwY2lfZ2V0X3BkZXZfYnlfZG9tYWluKCkNCj4+IC0tLQ0K
Pj4gIHhlbi9hcmNoL3g4Ni9odm0vdm1zaS5jICAgICAgICAgICAgICAgICAgfCAgIDIgKy0NCj4+
ICB4ZW4vYXJjaC94ODYvaXJxLmMgICAgICAgICAgICAgICAgICAgICAgIHwgICA0ICsNCj4+ICB4
ZW4vYXJjaC94ODYvbXNpLmMgICAgICAgICAgICAgICAgICAgICAgIHwgIDQ0ICsrKysrKystDQo+
PiAgeGVuL2FyY2gveDg2L3BjaS5jICAgICAgICAgICAgICAgICAgICAgICB8ICAgMyArDQo+PiAg
eGVuL2FyY2gveDg2L3BoeXNkZXYuYyAgICAgICAgICAgICAgICAgICB8ICAxNyArKy0NCj4+ICB4
ZW4vY29tbW9uL3N5c2N0bC5jICAgICAgICAgICAgICAgICAgICAgIHwgICA3ICstDQo+PiAgeGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11X2luaXQuYyB8ICAxMiArLQ0KPj4gIHhlbi9k
cml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21tdV9tYXAuYyAgfCAgIDYgKy0NCj4+ICB4ZW4vZHJp
dmVycy9wYXNzdGhyb3VnaC9wY2kuYyAgICAgICAgICAgIHwgMTM4ICsrKysrKysrKysrKysrKy0t
LS0tLS0tDQo+PiAgeGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3F1aXJrcy5jICAgICB8ICAg
MiArDQo+PiAgeGVuL2RyaXZlcnMvdmlkZW8vdmdhLmMgICAgICAgICAgICAgICAgICB8ICAgNyAr
LQ0KPj4gIHhlbi9kcml2ZXJzL3ZwY2kvdnBjaS5jICAgICAgICAgICAgICAgICAgfCAgMTYgKyst
DQo+PiAgeGVuL2luY2x1ZGUveGVuL3BjaS5oICAgICAgICAgICAgICAgICAgICB8ICAxOCArKysN
Cj4+ICAxMyBmaWxlcyBjaGFuZ2VkLCAyMTUgaW5zZXJ0aW9ucygrKSwgNjEgZGVsZXRpb25zKC0p
DQo+PiANCj4+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvaHZtL3Ztc2kuYyBiL3hlbi9hcmNo
L3g4Ni9odm0vdm1zaS5jDQo+PiBpbmRleCAzY2Q0OTIzMDYwLi44YzNkNjczODcyIDEwMDY0NA0K
Pj4gLS0tIGEveGVuL2FyY2gveDg2L2h2bS92bXNpLmMNCj4+ICsrKyBiL3hlbi9hcmNoL3g4Ni9o
dm0vdm1zaS5jDQo+PiBAQCAtOTE0LDcgKzkxNCw3IEBAIGludCB2cGNpX21zaXhfYXJjaF9wcmlu
dChjb25zdCBzdHJ1Y3QgdnBjaV9tc2l4ICptc2l4KQ0KPj4gIA0KPj4gICAgICAgICAgICAgIHNw
aW5fdW5sb2NrKCZtc2l4LT5wZGV2LT52cGNpLT5sb2NrKTsNCj4+ICAgICAgICAgICAgICBwcm9j
ZXNzX3BlbmRpbmdfc29mdGlycXMoKTsNCj4+IC0gICAgICAgICAgICAvKiBOQjogd2UgYXNzdW1l
IHRoYXQgcGRldiBjYW5ub3QgZ28gYXdheSBmb3IgYW4gYWxpdmUgZG9tYWluLiAqLw0KPj4gKw0K
Pj4gICAgICAgICAgICAgIGlmICggIXBkZXYtPnZwY2kgfHwgIXNwaW5fdHJ5bG9jaygmcGRldi0+
dnBjaS0+bG9jaykgKQ0KPj4gICAgICAgICAgICAgICAgICByZXR1cm4gLUVCVVNZOw0KPj4gICAg
ICAgICAgICAgIGlmICggcGRldi0+dnBjaS0+bXNpeCAhPSBtc2l4ICkNCj4+IGRpZmYgLS1naXQg
YS94ZW4vYXJjaC94ODYvaXJxLmMgYi94ZW4vYXJjaC94ODYvaXJxLmMNCj4+IGluZGV4IDIwMTUw
YjFjN2YuLjg3NDY0ZDgyYzggMTAwNjQ0DQo+PiAtLS0gYS94ZW4vYXJjaC94ODYvaXJxLmMNCj4+
ICsrKyBiL3hlbi9hcmNoL3g4Ni9pcnEuYw0KPj4gQEAgLTIxNzUsNiArMjE3NSw3IEBAIGludCBt
YXBfZG9tYWluX3BpcnEoDQo+PiAgICAgICAgICAgICAgICAgIG1zaS0+ZW50cnlfbnIgPSByZXQ7
DQo+PiAgICAgICAgICAgICAgICAgIHJldCA9IC1FTkZJTEU7DQo+PiAgICAgICAgICAgICAgfQ0K
Pj4gKyAgICAgICAgICAgIHBjaWRldl9wdXQocGRldik7DQo+PiAgICAgICAgICAgICAgZ290byBk
b25lOw0KPj4gICAgICAgICAgfQ0KPj4gIA0KPj4gQEAgLTIxODksNiArMjE5MCw3IEBAIGludCBt
YXBfZG9tYWluX3BpcnEoDQo+PiAgICAgICAgICAgICAgbXNpX2Rlc2MtPmlycSA9IC0xOw0KPj4g
ICAgICAgICAgICAgIG1zaV9mcmVlX2lycShtc2lfZGVzYyk7DQo+PiAgICAgICAgICAgICAgcmV0
ID0gLUVCVVNZOw0KPj4gKyAgICAgICAgICAgIHBjaWRldl9wdXQocGRldik7DQo+PiAgICAgICAg
ICAgICAgZ290byBkb25lOw0KPj4gICAgICAgICAgfQ0KPj4gIA0KPj4gQEAgLTIyNzMsMTAgKzIy
NzUsMTIgQEAgaW50IG1hcF9kb21haW5fcGlycSgNCj4+ICAgICAgICAgICAgICB9DQo+PiAgICAg
ICAgICAgICAgbXNpX2Rlc2MtPmlycSA9IC0xOw0KPj4gICAgICAgICAgICAgIG1zaV9mcmVlX2ly
cShtc2lfZGVzYyk7DQo+PiArICAgICAgICAgICAgcGNpZGV2X3B1dChwZGV2KTsNCj4+ICAgICAg
ICAgICAgICBnb3RvIGRvbmU7DQo+PiAgICAgICAgICB9DQo+PiAgDQo+PiAgICAgICAgICBzZXRf
ZG9tYWluX2lycV9waXJxKGQsIGlycSwgaW5mbyk7DQo+PiArICAgICAgICBwY2lkZXZfcHV0KHBk
ZXYpOw0KPj4gICAgICAgICAgc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmZGVzYy0+bG9jaywgZmxh
Z3MpOw0KPj4gICAgICB9DQo+PiAgICAgIGVsc2UNCj4+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC94
ODYvbXNpLmMgYi94ZW4vYXJjaC94ODYvbXNpLmMNCj4+IGluZGV4IGQwYmY2M2RmMWQuLjkxOTI2
ZmNlNTAgMTAwNjQ0DQo+PiAtLS0gYS94ZW4vYXJjaC94ODYvbXNpLmMNCj4+ICsrKyBiL3hlbi9h
cmNoL3g4Ni9tc2kuYw0KPj4gQEAgLTU3Miw2ICs1NzIsMTAgQEAgaW50IG1zaV9mcmVlX2lycShz
dHJ1Y3QgbXNpX2Rlc2MgKmVudHJ5KQ0KPj4gICAgICAgICAgICAgICAgICAgICAgICAgIHZpcnRf
dG9fZml4KCh1bnNpZ25lZCBsb25nKWVudHJ5LT5tYXNrX2Jhc2UpKTsNCj4+ICANCj4+ICAgICAg
bGlzdF9kZWwoJmVudHJ5LT5saXN0KTsNCj4+ICsNCj4+ICsgICAgLyogQ29ycmVzcG9uZHMgdG8g
cGNpZGV2X2dldCgpIGluIG1zaVt4XV9jYXBhYmlsaXR5X2luaXQoKSAgKi8NCj4+ICsgICAgcGNp
ZGV2X3B1dChlbnRyeS0+ZGV2KTsNCj4+ICsNCj4+ICAgICAgeGZyZWUoZW50cnkpOw0KPj4gICAg
ICByZXR1cm4gMDsNCj4+ICB9DQo+PiBAQCAtNjQ0LDYgKzY0OCw3IEBAIHN0YXRpYyBpbnQgbXNp
X2NhcGFiaWxpdHlfaW5pdChzdHJ1Y3QgcGNpX2RldiAqZGV2LA0KPj4gICAgICAgICAgICAgIGVu
dHJ5W2ldLm1zaS5tcG9zID0gbXBvczsNCj4+ICAgICAgICAgIGVudHJ5W2ldLm1zaS5udmVjID0g
0;
>>          entry[i].dev = dev;
>> +        pcidev_get(dev);
>>      }
>>      entry->msi.nvec = nvec;
>>      entry->irq = irq;
>> @@ -703,22 +708,36 @@ static u64 read_pci_mem_bar(u16 seg, u8 bus, u8 slot, u8 func, u8 bir, int vf)
>>               !num_vf || !offset || (num_vf > 1 && !stride) ||
>>               bir >= PCI_SRIOV_NUM_BARS ||
>>               !pdev->vf_rlen[bir] )
>> +        {
>> +            if ( pdev )
>> +                pcidev_put(pdev);
>>              return 0;
>> +        }
>>          base = pos + PCI_SRIOV_BAR;
>>          vf -= PCI_BDF(bus, slot, func) + offset;
>>          if ( vf < 0 )
>> +        {
>> +            pcidev_put(pdev);
>>              return 0;
>> +        }
>>          if ( stride )
>>          {
>>              if ( vf % stride )
>> +            {
>> +                pcidev_put(pdev);
>>                  return 0;
>> +            }
>>              vf /= stride;
>>          }
>>          if ( vf >= num_vf )
>> +        {
>> +            pcidev_put(pdev);
>>              return 0;
>> +        }
>>          BUILD_BUG_ON(ARRAY_SIZE(pdev->vf_rlen) != PCI_SRIOV_NUM_BARS);
>>          disp = vf * pdev->vf_rlen[bir];
>>          limit = PCI_SRIOV_NUM_BARS;
>> +        pcidev_put(pdev);
>>      }
>>      else switch ( pci_conf_read8(PCI_SBDF(seg, bus, slot, func),
>>                                   PCI_HEADER_TYPE) & 0x7f )
>> @@ -925,6 +944,8 @@ static int msix_capability_init(struct pci_dev *dev,
>>          entry->dev = dev;
>>          entry->mask_base = base;
>>  
>> +        pcidev_get(dev);
>> +
>>          list_add_tail(&entry->list, &dev->msi_list);
>>          *desc = entry;
>>      }
>> @@ -999,6 +1020,7 @@ static int __pci_enable_msi(struct msi_info *msi, struct msi_desc **desc)
>>  {
>>      struct pci_dev *pdev;
>>      struct msi_desc *old_desc;
>> +    int ret;
>>  
>>      ASSERT(pcidevs_locked());
>>      pdev = pci_get_pdev(NULL, msi->sbdf);
>> @@ -1010,6 +1032,7 @@ static int __pci_enable_msi(struct msi_info *msi, struct msi_desc **desc)
>>      {
>>          printk(XENLOG_ERR "irq %d already mapped to MSI on %pp\n",
>>                 msi->irq, &pdev->sbdf);
>> +        pcidev_put(pdev);
>>          return -EEXIST;
>>      }
>>  
>> @@ -1020,7 +1043,10 @@ static int __pci_enable_msi(struct msi_info *msi, struct msi_desc **desc)
>>          __pci_disable_msix(old_desc);
>>      }
>>  
>> -    return msi_capability_init(pdev, msi->irq, desc, msi->entry_nr);
>> +    ret = msi_capability_init(pdev, msi->irq, desc, msi->entry_nr);
>> +    pcidev_put(pdev);
>> +
>> +    return ret;
>>  }
>>  
>>  static void __pci_disable_msi(struct msi_desc *entry)
>> @@ -1054,20 +1080,29 @@ static int __pci_enable_msix(struct msi_info *msi, struct msi_desc **desc)
>>  {
>>      struct pci_dev *pdev;
>>      struct msi_desc *old_desc;
>> +    int ret;
>>  
>>      ASSERT(pcidevs_locked());
>>      pdev = pci_get_pdev(NULL, msi->sbdf);
>>      if ( !pdev || !pdev->msix )
>> +    {
>> +        if ( pdev )
>> +            pcidev_put(pdev);
>>          return -ENODEV;
>> +    }
>>  
>>      if ( msi->entry_nr >= pdev->msix->nr_entries )
>> +    {
>> +        pcidev_put(pdev);
>>          return -EINVAL;
>> +    }
>>  
>>      old_desc = find_msi_entry(pdev, msi->irq, PCI_CAP_ID_MSIX);
>>      if ( old_desc )
>>      {
>>          printk(XENLOG_ERR "irq %d already mapped to MSI-X on %pp\n",
>>                 msi->irq, &pdev->sbdf);
>> +        pcidev_put(pdev);
>>          return -EEXIST;
>>      }
>>  
>> @@ -1078,7 +1113,11 @@ static int __pci_enable_msix(struct msi_info *msi, struct msi_desc **desc)
>>          __pci_disable_msi(old_desc);
>>      }
>>  
>> -    return msix_capability_init(pdev, msi, desc);
>> +    ret = msix_capability_init(pdev, msi, desc);
>> +
>> +    pcidev_put(pdev);
>> +
>> +    return ret;
>>  }
>>  
>>  static void _pci_cleanup_msix(struct arch_msix *msix)
>> @@ -1159,6 +1198,7 @@ int pci_prepare_msix(u16 seg, u8 bus, u8 devfn, bool off)
>>      }
>>      else
>>          rc = msix_capability_init(pdev, NULL, NULL);
>> +    pcidev_put(pdev);
>>      pcidevs_unlock();
>>  
>>      return rc;
>> diff --git a/xen/arch/x86/pci.c b/xen/arch/x86/pci.c
>> index 97b792e578..c1fcdf08d6 100644
>> --- a/xen/arch/x86/pci.c
>> +++ b/xen/arch/x86/pci.c
>> @@ -92,7 +92,10 @@ int pci_conf_write_intercept(unsigned int seg, unsigned int bdf,
>>  
>>      pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bdf));
>>      if ( pdev )
>> +    {
>>          rc = pci_msi_conf_write_intercept(pdev, reg, size, data);
>> +        pcidev_put(pdev);
>> +    }
>>  
>>      pcidevs_unlock();
>>  
>> diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
>> index 2f1d955a96..96214a3d40 100644
>> --- a/xen/arch/x86/physdev.c
>> +++ b/xen/arch/x86/physdev.c
>> @@ -533,7 +533,14 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>          pcidevs_lock();
>>          pdev = pci_get_pdev(NULL,
>>                              PCI_SBDF(0, restore_msi.bus, restore_msi.devfn));
>> -        ret = pdev ? pci_restore_msi_state(pdev) : -ENODEV;
>> +        if ( pdev )
>> +        {
>> +            ret = pci_restore_msi_state(pdev);
>> +            pcidev_put(pdev);
>> +        }
>> +        else
>> +            ret = -ENODEV;
>> +
>>          pcidevs_unlock();
>>          break;
>>      }
>> @@ -548,7 +555,13 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>  
>>          pcidevs_lock();
>>          pdev = pci_get_pdev(NULL, PCI_SBDF(dev.seg, dev.bus, dev.devfn));
>> -        ret = pdev ? pci_restore_msi_state(pdev) : -ENODEV;
>> +        if ( pdev )
>> +        {
>> +            ret =  pci_restore_msi_state(pdev);
>> +            pcidev_put(pdev);
>> +        }
>> +        else
>> +            ret = -ENODEV;
>>          pcidevs_unlock();
>>          break;
>>      }
>> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
>> index 02505ab044..9af07fa92a 100644
>> --- a/xen/common/sysctl.c
>> +++ b/xen/common/sysctl.c
>> @@ -438,7 +438,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>>          {
>>              physdev_pci_device_t dev;
>>              uint32_t node;
>> -            const struct pci_dev *pdev;
>> +            struct pci_dev *pdev;
>>  
>>              if ( copy_from_guest_offset(&dev, ti->devs, i, 1) )
>>              {
>> @@ -454,8 +454,11 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>>                  node = XEN_INVALID_NODE_ID;
>>              else
>>                  node = pdev->node;
>> -            pcidevs_unlock();
>>  
>> +            if ( pdev )
>> +                pcidev_put(pdev);
>> +
>> +            pcidevs_unlock();
>>              if ( copy_to_guest_offset(ti->nodes, i, &node, 1) )
>>              {
>>                  ret = -EFAULT;
>> diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
>> index 9773ccfcb4..f90b1c1e58 100644
>> --- a/xen/drivers/passthrough/amd/iommu_init.c
>> +++ b/xen/drivers/passthrough/amd/iommu_init.c
>> @@ -646,6 +646,7 @@ static void cf_check parse_ppr_log_entry(struct amd_iommu *iommu, u32 entry[])
>>  
>>      if ( pdev )
>>          guest_iommu_add_ppr_log(pdev->domain, entry);
>> +    pcidev_put(pdev);
>>  }
>>  
>>  static void iommu_check_ppr_log(struct amd_iommu *iommu)
>> @@ -749,6 +750,11 @@ static bool_t __init set_iommu_interrupt_handler(struct amd_iommu *iommu)
>>      }
>>  
>>      pcidevs_lock();
>> +    /*
>> +     * XXX: it is unclear if this device can be removed. Right now
>> +     * there is no code that clears msi.dev, so no one will decrease
>> +     * refcount on it.
>> +     */
>>      iommu->msi.dev = pci_get_pdev(NULL, PCI_SBDF(iommu->seg, iommu->bdf));
>
> I don't think we can remove an IOMMU from the system, so this is
> fine as-is AFAICT.
>

Oh, thank you for the clarification. I'll remove the comment then.

>>      pcidevs_unlock();
>>      if ( !iommu->msi.dev )
>> @@ -1274,7 +1280,7 @@ static int __init cf_check amd_iommu_setup_device_table(
>>      {
>>          if ( ivrs_mappings[bdf].valid )
>>          {
>> -            const struct pci_dev *pdev = NULL;
>> +            struct pci_dev *pdev = NULL;
>>  
>>              /* add device table entry */
>>              iommu_dte_add_device_entry(&dt[bdf], &ivrs_mappings[bdf]);
>> @@ -1299,7 +1305,10 @@ static int __init cf_check amd_iommu_setup_device_table(
>>                          pdev->msix ? pdev->msix->nr_entries
>>                                     : pdev->msi_maxvec);
>>                  if ( !ivrs_mappings[bdf].intremap_table )
>> +                {
>> +                    pcidev_put(pdev);
>>                      return -ENOMEM;
>> +                }
>>  
>>                  if ( pdev->phantom_stride )
>>                  {
>> @@ -1317,6 +1326,7 @@ static int __init cf_check amd_iommu_setup_device_table(
>>                              ivrs_mappings[bdf].intremap_inuse;
>>                      }
>>                  }
>> +                pcidev_put(pdev);
>>              }
>>  
>>              amd_iommu_set_intremap_table(
>> diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
>> index 993bac6f88..9d621e3d36 100644
>> --- a/xen/drivers/passthrough/amd/iommu_map.c
>> +++ b/xen/drivers/passthrough/amd/iommu_map.c
>> @@ -724,14 +724,18 @@ int cf_check amd_iommu_get_reserved_device_memory(
>>          if ( !iommu )
>>          {
>>              /* May need to trigger the workaround in find_iommu_for_device(). */
>> -            const struct pci_dev *pdev;
>> +            struct pci_dev *pdev;
>>  
>>              pcidevs_lock();
>>              pdev = pci_get_pdev(NULL, sbdf);
>>              pcidevs_unlock();
>>  
>>              if ( pdev )
>> +            {
>>                  iommu = find_iommu_for_device(seg, bdf);
>> +                /* XXX: Should we hold pdev reference till end of the loop? */
>> +                pcidev_put(pdev);
>
> I don't think you need to hold a reference to the device until the end
> of the loop, the data fetched there is from the ACPI tables.  If the
> func() helper also needs a pdev instance is it's task to get one.
>

Thank you for the clarification. I'll remove the comment.

>> +            }
>>              if ( !iommu )
>>                  continue;
>>          }
>> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
>> index b42acb8d7c..b32382aca0 100644
>> --- a/xen/drivers/passthrough/pci.c
>> +++ b/xen/drivers/passthrough/pci.c
>> @@ -328,6 +328,7 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
>>      *((u8*) &pdev->bus) = bus;
>>      *((u8*) &pdev->devfn) = devfn;
>>      pdev->domain = NULL;
>> +    refcnt_init(&pdev->refcnt);
>>  
>>      arch_pci_init_pdev(pdev);
>>  
>> @@ -422,33 +423,6 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
>>      return pdev;
>>  }
>>  
>> -static void free_pdev(struct pci_seg *pseg, struct pci_dev *pdev)
>> -{
>> -    /* update bus2bridge */
>> -    switch ( pdev->type )
>> -    {
>> -        unsigned int sec_bus, sub_bus;
>> -
>> -        case DEV_TYPE_PCIe2PCI_BRIDGE:
>> -        case DEV_TYPE_LEGACY_PCI_BRIDGE:
>> -            sec_bus = pci_conf_read8(pdev->sbdf, PCI_SECONDARY_BUS);
>> -            sub_bus = pci_conf_read8(pdev->sbdf, PCI_SUBORDINATE_BUS);
>> -
>> -            spin_lock(&pseg->bus2bridge_lock);
>> -            for ( ; sec_bus <= sub_bus; sec_bus++ )
>> -                pseg->bus2bridge[sec_bus] = pseg->bus2bridge[pdev->bus];
>> -            spin_unlock(&pseg->bus2bridge_lock);
>> -            break;
>> -
>> -        default:
>> -            break;
>> -    }
>> -
>> -    list_del(&pdev->alldevs_list);
>> -    pdev_msi_deinit(pdev);
>> -    xfree(pdev);
>> -}
>> -
>>  static void __init _pci_hide_device(struct pci_dev *pdev)
>>  {
>>      if ( pdev->domain )
>> @@ -517,10 +491,14 @@ struct pci_dev *pci_get_real_pdev(pci_sbdf_t sbdf)
>>      {
>>          if ( !(sbdf.devfn & stride) )
>>              continue;
>> +
>
> Unrelated change?  There are some of those in the patch, should be
> removed.

Yes, sorry for this.

>
>>          sbdf.devfn &= ~stride;
>>          pdev = pci_get_pdev(NULL, sbdf);
>>          if ( pdev && stride != pdev->phantom_stride )
>> +        {
>> +            pcidev_put(pdev);
>>              pdev = NULL;
>> +        }
>>      }
>>  
>>      return pdev;
>> @@ -548,13 +526,18 @@ struct pci_dev *pci_get_pdev(const struct domain *d, pci_sbdf_t sbdf)
>>          list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
>>              if ( pdev->sbdf.bdf == sbdf.bdf &&
>>                   (!d || pdev->domain == d) )
>> +            {
>> +                pcidev_get(pdev);
>>                  return pdev;
>> +            }
>>      }
>>      else
>>          list_for_each_entry ( pdev, &d->pdev_list, domain_list )
>>              if ( pdev->sbdf.bdf == sbdf.bdf )
>> +            {
>> +                pcidev_get(pdev);
>>                  return pdev;
>> -
>> +            }
>>      return NULL;
>>  }
>>  
>> @@ -663,7 +646,10 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>>                              PCI_SBDF(seg, info->physfn.bus,
>>                                       info->physfn.devfn));
>>          if ( pdev )
>> +        {
>>              pf_is_extfn = pdev->info.is_extfn;
>> +            pcidev_put(pdev);
>> +        }
>>          pcidevs_unlock();
>>          if ( !pdev )
>>              pci_add_device(seg, info->physfn.bus, info->physfn.devfn,
>> @@ -818,7 +804,9 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>>              if ( pdev->domain )
>>                  list_del(&pdev->domain_list);
>>              printk(XENLOG_DEBUG "PCI remove device %pp\n", &pdev->sbdf);
>> -            free_pdev(pseg, pdev);
>> +            list_del(&pdev->alldevs_list);
>> +            pdev_msi_deinit(pdev);
>> +            pcidev_put(pdev);
>
> Hm, I think here we want to make sure that the device has been freed,
> or else you would have to return -EBUSY to the calls to notify that
> the device is still in use.

Why? As far as I can see, the pdev object may still potentially be
accessed by some other CPU right now, so it will only be freed after the
last reference is dropped. As it has already been removed from all the
lists, pcidev_get() will not find it anymore.

Actually, I can't see how this could happen in reality, as VPCI, MSI and
IOMMU are already deactivated for this device, so no one would touch it.

>
> I think we need an extra pcidev_put_final() or similar that can be
> used in pci_remove_device() to assert that the device has been
> actually removed.

Will something break if we don't do this? I can't see how this could
happen.

>
>>              break;
>>          }
>>  
>> @@ -848,7 +836,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>>      {
>>          ret = iommu_quarantine_dev_init(pci_to_dev(pdev));
>>          if ( ret )
>> -           return ret;
>> +            goto out;
>>  
>>          target = dom_io;
>>      }
>> @@ -878,6 +866,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>>      pdev->fault.count = 0;
>>  
>>   out:
>> +    pcidev_put(pdev);
>>      if ( ret )
>>          printk(XENLOG_G_ERR "%pd: deassign (%pp) failed (%d)\n",
>>                 d, &PCI_SBDF(seg, bus, devfn), ret);
>> @@ -1011,7 +1000,10 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
>>              pdev->fault.count >>= 1;
>>          pdev->fault.time = now;
>>          if ( ++pdev->fault.count < PT_FAULT_THRESHOLD )
>> +        {
>> +            pcidev_put(pdev);
>>              pdev = NULL;
>> +        }
>>      }
>>      pcidevs_unlock();
>>  
>> @@ -1022,6 +1014,8 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
>>       * control it for us. */
>>      cword = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
>>      pci_conf_write16(pdev->sbdf, PCI_COMMAND, cword & ~PCI_COMMAND_MASTER);
>> +
>> +    pcidev_put(pdev);
>>  }
>>  
>>  /*
>> @@ -1138,6 +1132,7 @@ static int __hwdom_init cf_check _setup_hwdom_pci_devices(
>>                  printk(XENLOG_WARNING "Dom%d owning %pp?\n",
>>                        pdev->domain->domain_id, &pdev->sbdf);
>>  
>> +            pcidev_put(pdev);
>>              if ( iommu_verbose )
>>              {
>>                  pcidevs_unlock();
>> @@ -1385,33 +1380,28 @@ static int iommu_remove_device(struct pci_dev *pdev)
>>      return iommu_call(hd->platform_ops, remove_device, devfn, pci_to_dev(pdev));
>>  }
>>  
>> -static int device_assigned(u16 seg, u8 bus, u8 devfn)
>> +static int device_assigned(struct pci_dev *pdev)
>>  {
>> -    struct pci_dev *pdev;
>>      int rc = 0;
>>  
>>      ASSERT(pcidevs_locked());
>> -    pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
>> -
>> -    if ( !pdev )
>> -        rc = -ENODEV;
>>      /*
>>       * If the device exists and it is not owned by either the hardware
>>       * domain or dom_io then it must be assigned to a guest, or be
>>       * hidden (owned by dom_xen).
>>       */
>> -    else if ( pdev->domain != hardware_domain &&
>> -              pdev->domain != dom_io )
>> +    if ( pdev->domain != hardware_domain &&
>> +         pdev->domain != dom_io )
>>          rc = -EBUSY;
>>  
>>      return rc;
>>  }
>>  
>>  /* Caller should hold the pcidevs_lock */
>
> I would assume the caller has taken an extra reference to the pdev, so
> holding the pcidevs_lock is no longer needed?

I assumed that the lock may still be required by the MSI-X or IOMMU
functions that are being called here. For example, I can see that
reassign_device() in pci_amd_iommu.c manipulates some lists; I believe
that should be protected with the lock.

>
>> -static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>> +static int assign_device(struct domain *d, struct pci_dev *pdev, u32 flag)
>>  {
>>      const struct domain_iommu *hd = dom_iommu(d);
>> -    struct pci_dev *pdev;
>> +    uint8_t devfn;
>>      int rc = 0;
>>  
>>      if ( !is_iommu_enabled(d) )
>> @@ -1422,10 +1412,11 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>>  
>>      /* device_assigned() should already have cleared the device for assignment */
>>      ASSERT(pcidevs_locked());
>> -    pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
>>      ASSERT(pdev && (pdev->domain == hardware_domain ||
>>                      pdev->domain == dom_io));
>>  
>> +    devfn = pdev->devfn;
>> +
>>      /* Do not allow broken devices to be assigned to guests. */
>>      rc = -EBADF;
>>      if ( pdev->broken && d != hardware_domain && d != dom_io )
>> @@ -1460,7 +1451,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>>   done:
>>      if ( rc )
>>          printk(XENLOG_G_WARNING "%pd: assign (%pp) failed (%d)\n",
>> -               d, &PCI_SBDF(seg, bus, devfn), rc);
>> +               d, &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
>>      /* The device is assigned to dom_io so mark it as quarantined */
>>      else if ( d == dom_io )
>>          pdev->quarantine = true;
>> @@ -1595,6 +1586,9 @@ int iommu_do_pci_domctl(
>>          ASSERT(d);
>>          /* fall through */
>>      case XEN_DOMCTL_test_assign_device:
>> +    {
>> +        struct pci_dev *pdev;
>> +
>>          /* Don't support self-assignment of devices. */
>>          if ( d == current->domain )
>>          {
>> @@ -1622,26 +1616,36 @@ int iommu_do_pci_domctl(
>>          seg = machine_sbdf >> 16;
>>          bus = PCI_BUS(machine_sbdf);
>>          devfn = PCI_DEVFN(machine_sbdf);
>> -
>>          pcidevs_lock();
>> -        ret = device_assigned(seg, bus, devfn);
>> +        pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
>> +        if ( !pdev )
>> +        {
>> +            printk(XENLOG_G_INFO "%pp non-existent\n",
>> +                   &PCI_SBDF(seg, bus, devfn));
>> +            ret = -EINVAL;
>> +            break;
>> +        }
>> +
>> +        ret = device_assigned(pdev);
>>          if ( domctl->cmd == XEN_DOMCTL_test_assign_device )
>>          {
>>              if ( ret )
>>              {
>> -                printk(XENLOG_G_INFO "%pp already assigned, or non-existent\n",
>> +                printk(XENLOG_G_INFO "%pp already assigned\n",
>>                         &PCI_SBDF(seg, bus, devfn));
>>                  ret = -EINVAL;
>>              }
>>          }
>>          else if ( !ret )
>> -            ret = assign_device(d, seg, bus, devfn, flags);
>> +            ret = assign_device(d, pdev, flags);
>> +
>> +        pcidev_put(pdev);
>
> I would think you need to keep the refcount here if ret == 0, so that
> the device cannot be removed while assigned to a domain?

It looks like we perceive the function of the refcnt in different ways.
For me, it is the mechanism that guarantees that if we have a valid
pointer to an object, the object will not disappear under our feet. This
is the main function of krefs in the Linux kernel: if your code holds a
reference to an object, you can be sure that the object still exists in
memory.

On the other hand, it seems that you are considering this refcnt as a
usage counter for the actual PCI device, not for the "struct pci_dev"
that represents it. Those are two related things, but not the same. So I
can see why you are suggesting taking an additional reference there. But
to me this looks unnecessary: the very first reference is obtained in
pci_add_device(), and the corresponding function pci_remove_device()
drops it. So, for me, if an admin wants to remove a PCI device which is
assigned to a domain, they can do so just as they were able to before
these patches.

The main value of introducing the refcnt is being able to access pdev
objects without holding the global pcidevs_lock(). This does not mean
that you don't need locking at all, but it allows you to use pdev->lock
(which does not exist in this series, but was introduced in an earlier
RFC), or vpci->lock, or any other subsystem->lock.

>
>>          pcidevs_unlock();
>>          if ( ret == -ERESTART )
>>              ret = hypercall_create_continuation(__HYPERVISOR_domctl,
>>                                                  "h", u_domctl);
>>          break;
>> -
>> +    }
>>      case XEN_DOMCTL_deassign_device:
>>          /* Don't support self-deassignment of devices. */
>>          if ( d == current->domain )
>> @@ -1681,6 +1685,46 @@ int iommu_do_pci_domctl(
>>      return ret;
>>  }
>>  
>> +static void release_pdev(refcnt_t *refcnt)
>> +{
>> +    struct pci_dev *pdev = container_of(refcnt, struct pci_dev, refcnt);
>> +    struct pci_seg *pseg = get_pseg(pdev->seg);
>> +
>> +    printk(XENLOG_DEBUG "PCI release device %pp\n", &pdev->sbdf);
>> +
>> +    /* update bus2bridge */
>> +    switch ( pdev->type )
>> +    {
>> +        unsigned int sec_bus, sub_bus;
>> +
>> +        case DEV_TYPE_PCIe2PCI_BRIDGE:
>> +        case DEV_TYPE_LEGACY_PCI_BRIDGE:
>> +            sec_bus = pci_conf_read8(pdev->sbdf, PCI_SECONDARY_BUS);
>> +            sub_bus = pci_conf_read8(pdev->sbdf, PCI_SUBORDINATE_BUS);
>> +
>> +            spin_lock(&pseg->bus2bridge_lock);
>> +            for ( ; sec_bus <= sub_bus; sec_bus++ )
>> +                pseg->bus2bridge[sec_bus] = pseg->bus2bridge[pdev->bus];
>> +            spin_unlock(&pseg->bus2bridge_lock);
>> +            break;
>> +
>> +        default:
>> +            break;
>> +    }
>> +
>> +    xfree(pdev);
>> +}
>> +
>> +void pcidev_get(struct pci_dev *pdev)
>> +{
>> +    refcnt_get(&pdev->refcnt);
>> +}
>> +
>> +void pcidev_put(struct pci_dev *pdev)
>> +{
>> +    refcnt_put(&pdev->refcnt, release_pdev);
>> +}
>> +
>>  /*
>>   * Local variables:
>>   * mode: C
>> diff --git a/xen/drivers/passthrough/vtd/quirks.c b/xen/drivers/passthrough/vtd/quirks.c
>> index fcc8f73e8b..d240da0416 100644
>> --- a/xen/drivers/passthrough/vtd/quirks.c
>> +++ b/xen/drivers/passthrough/vtd/quirks.c
>> @@ -429,6 +429,8 @@ static int __must_check map_me_phantom_function(struct domain *domain,
>>          rc = domain_context_unmap_one(domain, drhd->iommu, 0,
>>                                        PCI_DEVFN(dev, 7));
>>  
>> +    pcidev_put(pdev);
>> +
>>      return rc;
>>  }
>>  
>> diff --git a/xen/drivers/video/vga.c b/xen/drivers/video/vga.c
>> index 0a03508bee..1049d4da6d 100644
>> --- a/xen/drivers/video/vga.c
>> +++ b/xen/drivers/video/vga.c
>> @@ -114,7 +114,7 @@ void __init video_endboot(void)
>>          for ( bus = 0; bus < 256; ++bus )
>>              for ( devfn = 0; devfn < 256; ++devfn )
>>              {
>> -                const struct pci_dev *pdev;
>> +                struct pci_dev *pdev;
>>                  u8 b = bus, df = devfn, sb;
>>  
>>                  pcidevs_lock();
>> @@ -126,7 +126,11 @@ void __init video_endboot(void)
>>                                      PCI_CLASS_DEVICE) != 0x0300 ||
>>                      !(pci_conf_read16(PCI_SBDF(0, bus, devfn), PCI_COMMAND) &
>>                        (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) )
>> +                {
>> +                    if ( pdev )
>> +                        pcidev_put(pdev);
>>                      continue;
>> +                }
>>  
>>                  while ( b )
>>                  {
>> @@ -157,6 +161,7 @@ void __init video_endboot(void)
>>                             bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
>>                      pci_hide_device(0, bus, devfn);
>>                  }
>> +                pcidev_put(pdev);
>>              }
>>      }
>>  
>> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
>> index 6d48d496bb..5232f9605b 100644
>> --- a/xen/drivers/vpci/vpci.c
>> +++ b/xen/drivers/vpci/vpci.c
>> @@ -317,8 +317,8 @@ static uint32_t merge_result(uint32_t data, uint32_t new, unsigned int size,
>>  
>>  uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
>>  {
>> -    const struct domain *d = current->do
bWFpbjsNCj4+IC0gICAgY29uc3Qgc3RydWN0IHBjaV9kZXYgKnBkZXY7DQo+PiArICAgIHN0cnVj
dCBkb21haW4gKmQgPSBjdXJyZW50LT5kb21haW47DQo+DQo+IFdoeSBkbyB5b3UgbmVlZCB0byBk
cm9wIHRoZSBjb25zdCBvbiBkb21haW4gaGVyZT8NCj4NCg0KTG9va3MgbGlrZSBsZWZ0b3ZlciBm
cm9tIGEgcHJldmlvdXMgdmVyc2lvbi4gV2lsbCByZW1vdmUuDQoNCj4+ICsgICAgc3RydWN0IHBj
aV9kZXYgKnBkZXY7DQo+PiAgICAgIGNvbnN0IHN0cnVjdCB2cGNpX3JlZ2lzdGVyICpyOw0KPj4g
ICAgICB1bnNpZ25lZCBpbnQgZGF0YV9vZmZzZXQgPSAwOw0KPj4gICAgICB1aW50MzJfdCBkYXRh
ID0gfih1aW50MzJfdCkwOw0KPj4gQEAgLTMzMiw3ICszMzIsMTEgQEAgdWludDMyX3QgdnBjaV9y
ZWFkKHBjaV9zYmRmX3Qgc2JkZiwgdW5zaWduZWQgaW50IHJlZywgdW5zaWduZWQgaW50IHNpemUp
DQo+PiAgICAgIC8qIEZpbmQgdGhlIFBDSSBkZXYgbWF0Y2hpbmcgdGhlIGFkZHJlc3MuICovDQo+
PiAgICAgIHBkZXYgPSBwY2lfZ2V0X3BkZXYoZCwgc2JkZik7DQo+PiAgICAgIGlmICggIXBkZXYg
fHwgIXBkZXYtPnZwY2kgKQ0KPj4gKyAgICB7DQo+PiArICAgICAgICBpZiAoIHBkZXYgKQ0KPj4g
KyAgICAgICAgICAgIHBjaWRldl9wdXQocGRldik7DQo+PiAgICAgICAgICByZXR1cm4gdnBjaV9y
ZWFkX2h3KHNiZGYsIHJlZywgc2l6ZSk7DQo+PiArICAgIH0NCj4+ICANCj4+ICAgICAgc3Bpbl9s
b2NrKCZwZGV2LT52cGNpLT5sb2NrKTsNCj4+ICANCj4+IEBAIC0zNzgsNiArMzgyLDcgQEAgdWlu
dDMyX3QgdnBjaV9yZWFkKHBjaV9zYmRmX3Qgc2JkZiwgdW5zaWduZWQgaW50IHJlZywgdW5zaWdu
ZWQgaW50IHNpemUpDQo+PiAgICAgICAgICBBU1NFUlQoZGF0YV9vZmZzZXQgPCBzaXplKTsNCj4+
ICAgICAgfQ0KPj4gICAgICBzcGluX3VubG9jaygmcGRldi0+dnBjaS0+bG9jayk7DQo+PiArICAg
IHBjaWRldl9wdXQocGRldik7DQo+PiAgDQo+PiAgICAgIGlmICggZGF0YV9vZmZzZXQgPCBzaXpl
ICkNCj4+ICAgICAgew0KPj4gQEAgLTQyMCw4ICs0MjUsOCBAQCBzdGF0aWMgdm9pZCB2cGNpX3dy
aXRlX2hlbHBlcihjb25zdCBzdHJ1Y3QgcGNpX2RldiAqcGRldiwNCj4+ICB2b2lkIHZwY2lfd3Jp
dGUocGNpX3NiZGZfdCBzYmRmLCB1bnNpZ25lZCBpbnQgcmVnLCB1bnNpZ25lZCBpbnQgc2l6ZSwN
Cj4+ICAgICAgICAgICAgICAgICAgdWludDMyX3QgZGF0YSkNCj4+ICB7DQo+PiAtICAgIGNvbnN0
IHN0cnVjdCBkb21haW4gKmQgPSBjdXJyZW50LT5kb21haW47DQo+PiAtICAgIGNvbnN0IHN0cnVj
dCBwY2lfZGV2ICpwZGV2Ow0KPj4gKyAgICBzdHJ1Y3QgZG9tYWluICpkID0gY3VycmVudC0+ZG9t
YWluOw0KPj4gKyAgICBzdHJ1Y3QgcGNpX2RldiAqcGRldjsNCg0KLS0gDQpXQlIsIFZvbG9keW15
cg0K


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 02:51:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 02:51:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520025.807220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmQZk-00018l-7A; Wed, 12 Apr 2023 02:51:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520025.807220; Wed, 12 Apr 2023 02:51:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmQZk-00018d-1D; Wed, 12 Apr 2023 02:51:12 +0000
Received: by outflank-mailman (input) for mailman id 520025;
 Wed, 12 Apr 2023 02:51:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmQZi-00018T-PF; Wed, 12 Apr 2023 02:51:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmQZi-0002gq-I0; Wed, 12 Apr 2023 02:51:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmQZi-0007l4-1U; Wed, 12 Apr 2023 02:51:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmQZh-00036I-WD; Wed, 12 Apr 2023 02:51:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YjnMIFcgKr1sN1zkHM6fwNbRt2MIYv9HW2gISw40VZY=; b=AV2qCwnIA3DnEqaJTeJGg4q0Un
	EkME3w6DZFaV8C1P22kFe+Dx8nD+Yeskqm8LqzckjXBIpoX0gEcBlUEFJRM/5mEcNf9A/poSY+vXM
	yiB1ltX4bCrVWNc/QdNXRxvbAkCv7vJryARdQPM+CptSEJ6EaovQKYzbJKW57iwiA/Nk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180206-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180206: regressions - trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:xen-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=5ea03c570c8610d4359f8bbf5f093d215344ce3f
X-Osstest-Versions-That:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Apr 2023 02:51:10 +0000

flight 180206 xen-unstable real [real]
flight 180210 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180206/
http://logs.test-lab.xenproject.org/osstest/logs/180210/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 180200

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180187
 test-amd64-i386-xl-xsm        7 xen-install                  fail  like 180200
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180200
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180200
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180200
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180200
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180200
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180200
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180200
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180200
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180200
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  5ea03c570c8610d4359f8bbf5f093d215344ce3f
baseline version:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2

Last test of basis   180200  2023-04-11 01:53:29 Z    1 days
Testing same since   180206  2023-04-11 14:37:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5ea03c570c8610d4359f8bbf5f093d215344ce3f
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Mar 27 19:45:20 2023 +0100

    xen/x86: Replace GPL v2.0+ license boilerplate with an SPDX tag in *.h
    
    It is easier to understand the license of a file when using SPDX.
    
    This is replacing the below pattern with the SPDX tag GPL-2.0-or-later
    in xen/arch/x86/*.h:
    
     * This program is free software; you can redistribute it and/or modify
     * it under the terms of the GNU General Public License as published by
     * the Free Software Foundation; either version 2 of the License, or
     * (at your option) any later version.
     *
     * This program is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
     * GNU General Public License for more details.
     *
     * You should have received a copy of the GNU General Public License
     * along with this program; If not, see <http://www.gnu.org/licenses/>.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit f68674efb79745cbcb80f333f1f1bd3007ad9f1e
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Mar 27 19:45:19 2023 +0100

    xen/x86: Replace GPL v2.0+ license boilerplate with an SPDX tag in *.c
    
    It is easier to understand the license of a file when using SPDX.
    
    This is replacing the below pattern with the SPDX tag GPL-2.0-or-later
    in xen/arch/x86/*.c:
    
     * This program is free software; you can redistribute it and/or modify
     * it under the terms of the GNU General Public License as published by
     * the Free Software Foundation; either version 2 of the License, or
     * (at your option) any later version.
     *
     * This program is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
     * GNU General Public License for more details.
     *
     * You should have received a copy of the GNU General Public License
     * along with this program; If not, see <http://www.gnu.org/licenses/>.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 406f0f593ef33f67e64f96c3742fecf20b341b3e
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Mar 27 19:45:18 2023 +0100

    xen/x86: Replace GPL v2.0 license boilerplate with an SPDX tag in *.h (part 3)
    
    It is easier to understand the license of a file when using SPDX.
    
    This is replacing the below pattern with the SPDX tag GPL-2.0-only
    in xen/arch/x86/*.h:
    
     * This program is free software; you can redistribute it and/or
     * modify it under the terms and conditions of the GNU General Public
     * License, version 2, as published by the Free Software Foundation.
     *
     * This program is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
     * General Public License for more details.
     *
     * You should have received a copy of the GNU General Public
     * License along with this program; If not, see <http://www.gnu.org/licenses/>.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 0f10cd10f44b0868194071865f7ec3aac0aed2b2
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Mar 27 19:45:17 2023 +0100

    xen/x86: Replace GPL v2.0 license boilerplate with an SPDX tag in *.h
    
    It is easier to understand the license of a file when using SPDX.
    
    This is replacing the below pattern with the SPDX tag GPL-2.0-only
    in xen/arch/x86/*.h:
    
     * This program is free software; you can redistribute it and/or modify it
     * under the terms and conditions of the GNU General Public License,
     * version 2, as published by the Free Software Foundation.
     *
     * This program is distributed in the hope it will be useful, but WITHOUT
     * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
     * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
     * more details.
     *
     * You should have received a copy of the GNU General Public License along with
     * this program; If not, see <http://www.gnu.org/licenses/>.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 831a1c2da01f13bf66b2922896abd37f94aecddb
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Mar 27 19:45:16 2023 +0100

    xen/x86: Replace GPL v2.0 license boilerplate with an SPDX tag in *.c (part 3)
    
    It is easier to understand the license of a file when using SPDX.
    
    This is replacing the below pattern with the SPDX tag GPL-2.0-only
    in xen/arch/x86/*.c:
    
     * This program is free software; you can redistribute it and/or
     * modify it under the terms and conditions of the GNU General Public
     * License, version 2, as published by the Free Software Foundation.
     *
     * This program is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
     * General Public License for more details.
     *
     * You should have received a copy of the GNU General Public
     * License along with this program; If not, see <http://www.gnu.org/licenses/>.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit fc269f81fd6535392bbb8ac14360ce705d7d8e27
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Mar 27 19:45:15 2023 +0100

    xen/x86: Replace GPL v2.0 license boilerplate with an SPDX tag in *.c
    
    It is easier to understand the license of a file when using SPDX.
    
    This is replacing the below pattern with the SPDX tag GPL-2.0-only
    in xen/arch/x86/*.c:
    
     * This program is free software; you can redistribute it and/or modify it
     * under the terms and conditions of the GNU General Public License,
     * version 2, as published by the Free Software Foundation.
     *
     * This program is distributed in the hope it will be useful, but WITHOUT
     * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
     * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
     * more details.
     *
     * You should have received a copy of the GNU General Public License along with
     * this program; If not, see <http://www.gnu.org/licenses/>.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit a778dbdf443d81d30eed802018ecd498b8dd043f
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Mar 27 19:45:14 2023 +0100

    LICENSES: Clarify that the SPDX tag GPL-2.0 is deprecated
    
    From https://spdx.org/licenses/GPL-2.0.html, the SPDX tag GPL-2.0
    is deprecated. Instead, GPL-2.0-only should be used.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)
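
The mechanical shape of the replacements described in the commits above can be sketched as a small script: drop the quoted GPL v2 paragraph from a file's header comment and prepend the non-deprecated `GPL-2.0-only` SPDX tag instead. This is only an illustration of the pattern the commit messages describe, not the tooling actually used for the Xen changes:

```python
import re

# SPDX tag preferred by the series above; the bare "GPL-2.0" tag is deprecated.
SPDX_LINE = "/* SPDX-License-Identifier: GPL-2.0-only */\n"

# Match the boilerplate paragraph quoted in the commit messages, from
# "This program is free software" through the licenses URL.
BOILERPLATE = re.compile(
    r"^ \* This program is free software.*?"
    r"<http://www\.gnu\.org/licenses/>\.\n",
    re.DOTALL | re.MULTILINE,
)

def replace_boilerplate(source: str) -> str:
    """Return source with the license paragraph replaced by an SPDX tag."""
    stripped = BOILERPLATE.sub("", source, count=1)
    return SPDX_LINE + stripped
```

Applied to a header comment containing the quoted paragraph, this leaves the rest of the comment (copyright lines, file description) untouched.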


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 05:39:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 05:39:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520036.807239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmTCh-0000dt-Cl; Wed, 12 Apr 2023 05:39:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520036.807239; Wed, 12 Apr 2023 05:39:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmTCh-0000dm-9q; Wed, 12 Apr 2023 05:39:35 +0000
Received: by outflank-mailman (input) for mailman id 520036;
 Wed, 12 Apr 2023 05:39:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tEdO=AD=redhat.com=thuth@srs-se1.protection.inumbo.net>)
 id 1pmTCg-0000df-I0
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 05:39:34 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 65eef55e-d8f4-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 07:39:32 +0200 (CEST)
Received: from mail-ej1-f71.google.com (mail-ej1-f71.google.com
 [209.85.218.71]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-489-JQc_aQzRMJS6h635kSrpkA-1; Wed, 12 Apr 2023 01:39:29 -0400
Received: by mail-ej1-f71.google.com with SMTP id
 r12-20020a1709062ccc00b0092fce91a838so3594910ejr.20
 for <xen-devel@lists.xenproject.org>; Tue, 11 Apr 2023 22:39:29 -0700 (PDT)
Received: from [192.168.8.105] (tmo-096-44.customers.d1-online.com.
 [80.187.96.44]) by smtp.gmail.com with ESMTPSA id
 s21-20020a1709060c1500b0094a85f6074bsm2889674ejf.33.2023.04.11.22.39.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Apr 2023 22:39:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65eef55e-d8f4-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681277971;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VHCdvDKrd6SHmT5+4IxRxVZw2qNaGEXhrgpsq6j6904=;
	b=alIzvP4R9DRwmMLLw7sGq/zhvb+mNNzrNWSmktRTobBXyGMOYn/V+z5Rx/rwKxdm0iHuUF
	K5IOgv7R9DepU+b+9mdUkAd1J3SlFA7hxG6hHDjwVGM3w2TzX2KPwMRQOXI+HzGtspUnWW
	UireaiFfk84teqvuOMiHa6wEJzaMHPA=
X-MC-Unique: JQc_aQzRMJS6h635kSrpkA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1681277968; x=1683869968;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=VHCdvDKrd6SHmT5+4IxRxVZw2qNaGEXhrgpsq6j6904=;
        b=bLh1A+/knq2m7I4kEW5oKh73VQf6lkLeB37+6v2z5jxb/Znc3j1sCaB90VnAmiv3vo
         zX5MSn48Iu80WF1t44v77SEwItqS6D8HNkZmHLCKVhtPC80eawXrPgR9yhRBKDvIWTyg
         XyP2EMQ6WCscg95gc1uGCE79R4vsCgSMhnQ2F7n+NeT/Z0EbCAQyIbUNRp8paWuj4s5w
         HZdm1IwU6xWhZybfevlKz3rAXe1re1CJqpTG/3w0rbRikJZ9eG+DQ8Uc7OZfb+P4M02I
         5bvsL9TZnj5fexPk7sQm3+0a7n1RqocG41KZd+MHSJ1fZNIGQfbnw8kzCc2okcJw+RYo
         uuSA==
X-Gm-Message-State: AAQBX9fXNIaqp1Gzk9jYwkMPbTH7twl5/8I/uDpthZ/Pqi9Y9IIFIUtG
	8FUtxpsjU8iFPsmT6BYzMieZwcdRoQrCJVtX5Gma6TaAkntPfacn1QcJY9/vV/pjcmYZ98NCOsN
	UoK4hG1XtXaonJlYcsnUVztPHC/o=
X-Received: by 2002:a17:906:1401:b0:94e:5224:b21e with SMTP id p1-20020a170906140100b0094e5224b21emr1134981ejc.14.1681277968556;
        Tue, 11 Apr 2023 22:39:28 -0700 (PDT)
X-Google-Smtp-Source: AKy350aiVhpgQoLUs6/2V6EcX+z/iUJaQs6ANB2YdA1maYchDM3UA/NUeo7RWULN8e+iRtBMgK9+FQ==
X-Received: by 2002:a17:906:1401:b0:94e:5224:b21e with SMTP id p1-20020a170906140100b0094e5224b21emr1134974ejc.14.1681277968264;
        Tue, 11 Apr 2023 22:39:28 -0700 (PDT)
Message-ID: <895bcdd3-350d-38e7-1982-899948072b93@redhat.com>
Date: Wed, 12 Apr 2023 07:39:23 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101
 Thunderbird/91.13.0
Subject: Re: [QEMU][PATCH] gitlab-ci.d/crossbuilds: Drop the '--disable-tcg'
 configuration for xen
To: Vikram Garhwal <vikram.garhwal@amd.com>, qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Beraldo Leal <bleal@redhat.com>
References: <20230411210422.24255-1-vikram.garhwal@amd.com>
From: Thomas Huth <thuth@redhat.com>
In-Reply-To: <20230411210422.24255-1-vikram.garhwal@amd.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 11/04/2023 23.04, Vikram Garhwal wrote:
> Xen is supported on aarch64 via the xenpvh machine. The '--disable-tcg' option
> fails the build for the aarch64 target.
> 
> Link for xen on arm patch series: https://mail.gnu.org/archive/html/qemu-devel/2023-02/msg03979.html
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>   .gitlab-ci.d/crossbuilds.yml | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
> index 61b8ac86ee..6867839248 100644
> --- a/.gitlab-ci.d/crossbuilds.yml
> +++ b/.gitlab-ci.d/crossbuilds.yml
> @@ -186,7 +186,7 @@ cross-amd64-xen-only:
>     variables:
>       IMAGE: debian-amd64-cross
>       ACCEL: xen
> -    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
> +    EXTRA_CONFIGURE_OPTS: --disable-kvm
>   
>   cross-arm64-xen-only:
>     extends: .cross_accel_build_job
> @@ -195,4 +195,4 @@ cross-arm64-xen-only:
>     variables:
>       IMAGE: debian-arm64-cross
>       ACCEL: xen
> -    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
> +    EXTRA_CONFIGURE_OPTS: --disable-kvm

This patch looks wrong. I'm pretty sure we wanted to test the build without
TCG here; building with TCG enabled is already covered by other jobs. So
instead of removing "--disable-tcg" here, the question is rather: why does it
no longer build with this flag? Can those problems be fixed instead?

  Thomas



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 06:17:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 06:17:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520042.807248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmTnR-00055q-F2; Wed, 12 Apr 2023 06:17:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520042.807248; Wed, 12 Apr 2023 06:17:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmTnR-00055j-CP; Wed, 12 Apr 2023 06:17:33 +0000
Received: by outflank-mailman (input) for mailman id 520042;
 Wed, 12 Apr 2023 06:17:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmTnP-00055Z-QG; Wed, 12 Apr 2023 06:17:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmTnP-0007mi-Nu; Wed, 12 Apr 2023 06:17:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmTnP-0008RC-7Y; Wed, 12 Apr 2023 06:17:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmTnP-0002Hi-76; Wed, 12 Apr 2023 06:17:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YVJ9R5lxIqSYVV0OeV9+lAmJP3BxA+AKRho7KzIBlBw=; b=ixb7Wsw0ecvnjrgiYQFN8q6+Dk
	0D0Apt19hoXQ/vuMuSjd+eJwCo6EENDOzcJvQqQU6a5awbOEx6i0+bZZ33XhjYrN7SUxx5JKK/CGr
	F8D49nUC+7CZPjJMe+igomBeQy1y2S2xg/H12Kv7fVg0du04p0zG/7bboAXZWme5GR1Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180207-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180207: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=6c50845a9183610cfd4cfffd48dfc704cd340882
X-Osstest-Versions-That:
    qemuu=dda860b9c031d6a2768f75e5e622545d41d4b688
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Apr 2023 06:17:31 +0000

flight 180207 qemu-mainline real [real]
flight 180215 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180207/
http://logs.test-lab.xenproject.org/osstest/logs/180215/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail pass in 180215-retest
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 180215-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180204
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180204
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180204
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180204
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180204
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                6c50845a9183610cfd4cfffd48dfc704cd340882
baseline version:
 qemuu                dda860b9c031d6a2768f75e5e622545d41d4b688

Last test of basis   180204  2023-04-11 09:38:32 Z    0 days
Testing same since   180207  2023-04-11 18:40:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   dda860b9c0..6c50845a91  6c50845a9183610cfd4cfffd48dfc704cd340882 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 06:22:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 06:22:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520048.807259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmTrc-0006Vy-0p; Wed, 12 Apr 2023 06:21:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520048.807259; Wed, 12 Apr 2023 06:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmTrb-0006Vr-U6; Wed, 12 Apr 2023 06:21:51 +0000
Received: by outflank-mailman (input) for mailman id 520048;
 Wed, 12 Apr 2023 06:21:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QpzN=AD=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmTra-0006Vj-Vb
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 06:21:51 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0613.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::613])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4e0e0f21-d8fa-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 08:21:48 +0200 (CEST)
Received: from AM6PR04CA0068.eurprd04.prod.outlook.com (2603:10a6:20b:f0::45)
 by PAXPR08MB6544.eurprd08.prod.outlook.com (2603:10a6:102:157::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Wed, 12 Apr
 2023 06:21:46 +0000
Received: from AM7EUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:f0:cafe::64) by AM6PR04CA0068.outlook.office365.com
 (2603:10a6:20b:f0::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.40 via Frontend
 Transport; Wed, 12 Apr 2023 06:21:46 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT052.mail.protection.outlook.com (100.127.140.214) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.30 via Frontend Transport; Wed, 12 Apr 2023 06:21:45 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Wed, 12 Apr 2023 06:21:45 +0000
Received: from 3dd2b681c351.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 802509AA-3554-47F9-BF16-535B220D3E67.1; 
 Wed, 12 Apr 2023 06:21:36 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3dd2b681c351.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 12 Apr 2023 06:21:36 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by VI1PR08MB5520.eurprd08.prod.outlook.com (2603:10a6:803:135::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Wed, 12 Apr
 2023 06:21:31 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6277.034; Wed, 12 Apr 2023
 06:21:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e0e0f21-d8fa-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3pOZzwPMMV1DuJnSfPi/4WZieEoZhtGCWT5AFri2azU=;
 b=nm+QF3bsYc0vNf4C+gVWneVgJDy0Sq/KA+A3AIFM4n8go09/lJ4Rdn6CUN44fc1uPq42MyoEWkeNlgwzxxHY253mY/QjHnzQnhhKVUKXmA3JudMzxuox7czk8lO8kK82OalfxwJKCsFBiAoXVuOjhbYwh/3hTwBhdqR2qQkTCjU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=L5fdIfojZSLOEm8yVjlvrglenTFRKAnLe4InAyul0DxS5c7xj5p266IGRD+s8eB/t2Y6hev0ZQlbWNDWGBHPNxp4yfNzP8q4O86MzPSCmr2W6j9WeOGhOsWQhIc/KHtuw/JPCfMky4ClBd4RdZlod2wtGFb+N8b9IHJozXDxb+6g2DzgdxybbZSzKcSZWCqNDxYXyjz6jrADoKCkQcPA0UNgO0zQrJRgEp4cGOj2RR2IQc8uJiUYk0ItncegTEUpbTga6Pfwc3WGOJCaxLKEayAqS+gAV/6Xa1QtZVLFT9PgyMjt1EFlO9stQnZbIhr6IV81pPert3zR67VlPU6X+A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3pOZzwPMMV1DuJnSfPi/4WZieEoZhtGCWT5AFri2azU=;
 b=OJPgqgWY4566BH2p7ZGP2/XgrSKonYA0YqqG0utqf3bhN+cbHZ63bwnspB0HZdtzAewsXjWZBgCaZo9QxaHlrnClBBkmdvLoEsd66Odw0aAx7mHJBWdr0k6S03g/FT2QVd9o9mJcXjFgVL8CZKkSojb/0NLIFDoHe/vUBsbmCaYf9lRtwACanX/1QqyA4S02SiU4eqQSLb6hjJ5aYD5y5+4NCE5jjQCwnyCAjrk/2IgLVyf73+l90zqY7ovNdBKGyOv6ogdT2T4xF3HBFxwDBKs14sxPyGxJ4gwOBkMZ/TXU3CrRtguyYLVsvvMHlLmrK4cxk+TvHA/pAIxtascrDA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3pOZzwPMMV1DuJnSfPi/4WZieEoZhtGCWT5AFri2azU=;
 b=nm+QF3bsYc0vNf4C+gVWneVgJDy0Sq/KA+A3AIFM4n8go09/lJ4Rdn6CUN44fc1uPq42MyoEWkeNlgwzxxHY253mY/QjHnzQnhhKVUKXmA3JudMzxuox7czk8lO8kK82OalfxwJKCsFBiAoXVuOjhbYwh/3hTwBhdqR2qQkTCjU=
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 1/3] xen/arm: vpl011: Fix misleading comments
Thread-Topic: [PATCH 1/3] xen/arm: vpl011: Fix misleading comments
Thread-Index: AQHZZ7BiVWY3mibtKkuaZtbP8Fag3a8nPoCg
Date: Wed, 12 Apr 2023 06:21:29 +0000
Message-ID:
 <AS8PR08MB799186E653509C6F5DF026EA929B9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230405111750.12491-1-michal.orzel@amd.com>
 <20230405111750.12491-2-michal.orzel@amd.com>
In-Reply-To: <20230405111750.12491-2-michal.orzel@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: F07AB8ADDB37B84AB094A345987E9277.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|VI1PR08MB5520:EE_|AM7EUR03FT052:EE_|PAXPR08MB6544:EE_
X-MS-Office365-Filtering-Correlation-Id: 92ac399f-3749-43ba-a29d-08db3b1e30b0
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 MTM5EREg8xJPMRv05d1Io9TWxqNSPdPhZZ72t4Xey47EVl4SQKvKfMI6jfagdwHf4n83qKiZ4VOt4KbCi6tbtCeCtMlv7mfVBOiRrbBZWNNsRQvVRu3dMT12ChgJowoAFCcooaWxO5NKzhkv50pqIbN5KA97SnZ2Ls3oI200HkFxUdvH21MRKTpSPlnh2ONlWVHu0+EhTio+u3r6ZyyRAMtj7KJqiZJIpco4HTWgzZl8syyKBRQ4fTstEKejuvs+Efu0rek959RfYUGgNTy2koiK7v6Ixh3xAq2IQm3sRRLxtO5XfGqekaaBoDKe5YQItjXMREcr0o/r4Zplse2Le+sxCdwPsLd9H+irtMNmJAHJbJOPmckXqXPklVC/1Ab4WE0irW/6/nJhnRosH/nSP067sg4zkJmtHvsinrB8YE+VUIgLqs9+a6YhkWLtVC2sRv2ZFqw94uQJcTJU8PUvtmX0STYM0EYUghYNBFGqtDqaDd2M7zZ511QTbA9PL9C1YF08sBPEdGmR5zz1LUh/VmG0gCx7Ftw7qV6yWLexcxlHF2wU0NkBIurryA53RbJl/BX0o9gCCzE8lLwAn7/WGto14E2+VJqFwcQ/SgAg/A/p0X7m4whQu+TUmAUoXO0U
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(366004)(346002)(376002)(136003)(39860400002)(451199021)(71200400001)(478600001)(7696005)(26005)(316002)(110136005)(6506007)(186003)(5660300002)(54906003)(9686003)(4744005)(2906002)(4326008)(66446008)(66946007)(76116006)(8676002)(64756008)(66476007)(41300700001)(66556008)(52536014)(8936002)(122000001)(55016003)(86362001)(83380400001)(33656002)(38070700005)(38100700002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5520
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	edcebe21-1819-479b-3240-08db3b1e2733
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Ja222fkM5Pqy7lZJm79fuDaupKXC1qmcSMvtB5f0HYpgWfAINItbKeNhU+YvDoQlpysMS0K4LbLSkRwPZ6V6nRdA4TWVrw8bRRtHyxZhQ3iUEzd1JrIyEnH5Bn5thVqn79V4Sf0DsXMMyy6ypS5Q6bZYQ/24WY51SLrf125zDxN0KNobJByH1TtcqhXeq0X1V/YNDb7KdhK0h8VLx2ZSYLGzwzGVetX2R25FgE73CGnsn05t6VbPLB7GzS5yAm0vkUXxGElcsFYT9NsMFFZWxjjVPG/bCTLqtMwpgI562oOAqJ5FA5l1tEL34gKVWQnO1KGpyWwrUb0IzGEIFLmCT2OoVRZ53BHpm++3oxKyXjfIhRg3G1tzNeohm7EO0QfqEjogNe4sM8aErzgWFazVgNfNHpfthg+IqWhxt1ml+Az1NmxCiab9xemxLRELdxqQI52iw114+4KiTBYMqRFWae0egPvF47C99DBIRw0JJ+Yu5NQVUUdPhtzvvXFeM51rz/ezwUvZeMaX0Q41MZNUh6sME4yd9QEX+DNgiUGwzwiLlmdpMB4O8XkeAbPCTYslUU5ATIdYPRvsrZtpSfWE+tUjl8da7Lf2A0cE+Cn9Wpm4xHbhLJuSv2tZ5mxjajMCrS55qQBbmpXM4kbA09BGPGolYUVY8N5g+bTPA12R+SbguLsqlc+suQgJXSvJ+EfjfzJkG9oOGsqT5AT2J4HzaQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(39860400002)(346002)(376002)(451199021)(46966006)(36840700001)(40470700004)(82740400003)(83380400001)(478600001)(82310400005)(47076005)(316002)(110136005)(54906003)(356005)(81166007)(107886003)(36860700001)(186003)(40480700001)(9686003)(6506007)(26005)(55016003)(336012)(7696005)(33656002)(40460700003)(41300700001)(8676002)(8936002)(4326008)(70206006)(70586007)(4744005)(2906002)(86362001)(52536014)(5660300002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Apr 2023 06:21:45.4372
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 92ac399f-3749-43ba-a29d-08db3b1e30b0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6544

Hi Michal,

> -----Original Message-----
> Subject: [PATCH 1/3] xen/arm: vpl011: Fix misleading comments
>
> In both vpl011_read_data() and vpl011_read_data_xen(), there is a comment
> stating that the guest is expected to read the DR register only if the
> TXFE bit of FR register is not set. This is obviously logically wrong and
> it should be RXFE (i.e. RX FIFO empty bit set -> nothing to read).
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 06:56:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 06:56:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520054.807272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmUOf-0001WX-KQ; Wed, 12 Apr 2023 06:56:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520054.807272; Wed, 12 Apr 2023 06:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmUOf-0001WQ-Hb; Wed, 12 Apr 2023 06:56:01 +0000
Received: by outflank-mailman (input) for mailman id 520054;
 Wed, 12 Apr 2023 06:56:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QpzN=AD=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmUOe-0001WK-Iy
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 06:56:00 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2083.outbound.protection.outlook.com [40.107.7.83])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 12cf5f27-d8ff-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 08:55:56 +0200 (CEST)
Received: from AS4PR10CA0030.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:20b:5d8::11)
 by AS2PR08MB8381.eurprd08.prod.outlook.com (2603:10a6:20b:558::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Wed, 12 Apr
 2023 06:55:20 +0000
Received: from AM7EUR03FT057.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:5d8:cafe::62) by AS4PR10CA0030.outlook.office365.com
 (2603:10a6:20b:5d8::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.40 via Frontend
 Transport; Wed, 12 Apr 2023 06:55:19 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT057.mail.protection.outlook.com (100.127.140.117) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.30 via Frontend Transport; Wed, 12 Apr 2023 06:55:19 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Wed, 12 Apr 2023 06:55:19 +0000
Received: from 8ce5651f1bbc.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4343D40D-5391-4EEA-8F3E-466E5BA9F7B3.1; 
 Wed, 12 Apr 2023 06:55:13 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8ce5651f1bbc.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 12 Apr 2023 06:55:13 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PAVPR08MB9817.eurprd08.prod.outlook.com (2603:10a6:102:31d::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Wed, 12 Apr
 2023 06:55:11 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6277.034; Wed, 12 Apr 2023
 06:55:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12cf5f27-d8ff-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iVWSCWaPCaSF2F5CYJf465h0SbWQLJccfkSY+GBw/Bc=;
 b=bP7iWe/TLAJ6UpL96lDBW9EFwcNXHKigvl+DisnWf+2c4KvO787DgH3J3P+xRwgCTWxDme5fZPBmU35XS0/e17vyi9IwsgH/GylZuvYdcMAouiSHKtRfbnQza62Lzm5QqMlXSJ6kiYw916/t++5Ewx97dev85+cbM7GX/bkmHSQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oKHN3oBp9P3+6EHVHS5ZEXDL0F7wNLgdr3HWV0kq3Ip20LhJyu7D+dZLjoTTc13RJtUMqrewef5LzQquDnMt5vOpS1ZzTBW362EApfoDxBFrSGfV2TEYqxLs5q5hf7Lx7A80UiU7+XVLRIEaNVFJMpoRSPMUzUUH6hr6RffwiSOVB7O/MfqTKAJSPhLQtAOiJ6eWQBswKsiF3kcGiyTKUaZEsN70FxqqrNPtKcM+1xLlYPqxDLamsGdlrqDnfx9VCZyKTzPNQ0UkIVsUHXWDXRevirONE4DDsM3jfuWV51X6b4xOiEr8fYGBzObD9iYbIHG7IF8QdiWlbMKd75bImA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=iVWSCWaPCaSF2F5CYJf465h0SbWQLJccfkSY+GBw/Bc=;
 b=QvBCQV1mY19LAH0LB9i7KUq8Cfuc1PeT6Ck7Ex+8vhyzoKXL8CRzrSe7M6RFIOplwd4kT6cnEMMptegnaPfIUlkbIvfFljaAzRqRUEX+L5W443/u9AE/apGoM9PJsxWrfeySIwiCLSOPrZjQuMnozv3yJ6PpaPL5ulR0zju9K3pgFHrcpR211Ms0mjnXM4ndj+jNGK39JvN5ckFvYek4R9BAFoghZsxPLh+6kzXJwo9m8S7OyM4ZcDLSW0FL+no31JUOw9ykwj41PM3GlDmB/ZfpznJMDn7BuGDFi4fylvWv4MxifQBIEV1Wb5XpwcKJNuj1jMJPXW6UtT2coPIrUw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iVWSCWaPCaSF2F5CYJf465h0SbWQLJccfkSY+GBw/Bc=;
 b=bP7iWe/TLAJ6UpL96lDBW9EFwcNXHKigvl+DisnWf+2c4KvO787DgH3J3P+xRwgCTWxDme5fZPBmU35XS0/e17vyi9IwsgH/GylZuvYdcMAouiSHKtRfbnQza62Lzm5QqMlXSJ6kiYw916/t++5Ewx97dev85+cbM7GX/bkmHSQ=
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 3/3] xen/arm: vpl011: Do not try to handle TX FIFO status
 when backend in Xen
Thread-Topic: [PATCH 3/3] xen/arm: vpl011: Do not try to handle TX FIFO status
 when backend in Xen
Thread-Index: AQHZZ7BhM9d/TecBu020qzDpPrY4KK8nPD6g
Date: Wed, 12 Apr 2023 06:55:11 +0000
Message-ID:
 <AS8PR08MB7991B95CF8B3209FAADB1613929B9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230405111750.12491-1-michal.orzel@amd.com>
 <20230405111750.12491-4-michal.orzel@amd.com>
In-Reply-To: <20230405111750.12491-4-michal.orzel@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 49045ABDCBE97E4392E7DD52327A3008.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PAVPR08MB9817:EE_|AM7EUR03FT057:EE_|AS2PR08MB8381:EE_
X-MS-Office365-Filtering-Correlation-Id: 6f2fe3dd-53db-4226-d4a0-08db3b22e11e
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 TsNLowLfNpIpfZDhOUqRdxL8C5E+rw+oA3rmqR7Avw55gYVYgAkpkF1NKYe9eBmfVTw0DmIx40RHxwICLGEbS32zUWjEKcKVWBqJ3DscUYWXIRU36avbzsdOK4PQXltaN45ksU2C9hFzzrBJ8w4ggo4u9D4yjuccctdr2sajiaQ6gp8VdKR1pVWEL0B0uI0ZTiKwARW63yGcziMvEiX7O//DTbeLPtrl2yKZDxG9LPn+ckE3vnxSFaOGoM2NrVZC/bpk1hYMVBZxWzJpdNh8l5mNSGqTFdh9KD8PLrImhJztvtLM8yMWhPxoR4jZ4nglKtFvTu3lYl1B2Pa4/dT4BAg/WvB6SCngrM0ej6FaNYznml1WiteiR0kpChkvinfP2YRnM+M782dzmZ8asrKYnoVCDHv3wbKMIBXthW0DDHSFU6ndbTo2fUjhKMc+mw+6j3zDrzskogW4SgPQUddEQ7A+qrTNwJlvldmajmBZOoeEIHoH4aH4W2EPDudPIiwDJBWNRUjAaz+FwsvD3z8dKVsHlI+1psHMKwbMwTHf8El3CykMeu/kIhbn5M7qAV0SQz4ut0k3KYgkJ9YdAJmK59aGYSPDDwVCQdX6tcckUJtPvnoJO8XnyE+ir1lN8T1x
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(366004)(39860400002)(136003)(376002)(346002)(451199021)(66476007)(8676002)(66446008)(4326008)(7696005)(66556008)(64756008)(71200400001)(66946007)(76116006)(54906003)(478600001)(41300700001)(110136005)(316002)(33656002)(86362001)(83380400001)(9686003)(6506007)(26005)(8936002)(2906002)(5660300002)(55016003)(38070700005)(38100700002)(186003)(52536014)(122000001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9817
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	51587d48-102d-4b71-0f5c-08db3b22dc25
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/3jL306WTHXVGJvhmWF9fjkTKkn/X3n+LHl+nZDblk8Xvg0uMf8qoIuSWm/h65CerlOZPpdrCTVO7tYEIzVRIc0qdDhr8MA8Oysa0DKEY73mCxhl8J89xq+39Ew/HC2CpFmzdw4rj/5vVoGvGkTMq7vA7/xw/8bDr6E7jFgh8DvKA5KfIdCM1w2N/uzm7Lq/ol0DndTpVCpbaNt3wVCsfkDsxkA2PrU9/rERL1SHtGlohQwNrpjQR2s43KPToPkQvmU30gu8pa80k5kNxBRqRQNQuB//UijrF4mC6gq/qrxzIyrCQbuAbPMr3qVSqZf5BKBOnCcCqzHwQrP9GXVD5s/urORK+MlyUFCjG7PDEjPKTtWz/55l4k5fizIbm6aHBNqbD2/ZZvIjWecZPyTlISf7dkqTYXTNsQODZmpw5JzH2PC0Qsee5j8JqO2dmKWM0OhXUmWRzAwtaTLAHMSBIqpW5BCxubAzwBZ4tjl+ran5Wuix8mi9+3w+FRBiDCA7B1StOFcA/Ul980chqwbdXCkm4ucgpKqeFufLcu6cLsJb/c5ePYlwLVfAo6tSdzraO9eQYfViI+JByeUC3qEtnP/jseqzvr42E66EI43uaxr54tWhckypVzxR8FV0BaDZAG7KH6eDEGHxkj3siPpRa33E/EQKFxvwgeyeGY/ILwFmBWGX5WjcMT8WciG2n3QNCdhu7t8XfRKttw4/xktNlQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(376002)(346002)(396003)(451199021)(36840700001)(40470700004)(46966006)(478600001)(7696005)(40460700003)(86362001)(83380400001)(33656002)(55016003)(40480700001)(81166007)(356005)(82740400003)(47076005)(336012)(36860700001)(2906002)(26005)(316002)(110136005)(6506007)(54906003)(9686003)(186003)(5660300002)(82310400005)(107886003)(8936002)(52536014)(41300700001)(8676002)(4326008)(70206006)(70586007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Apr 2023 06:55:19.4253
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f2fe3dd-53db-4226-d4a0-08db3b22e11e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8381

Hi Michal,

> -----Original Message-----
> Subject: [PATCH 3/3] xen/arm: vpl011: Do not try to handle TX FIFO status
> when backend in Xen
>
> From vpl011_rx_char_xen(), we call vpl011_data_avail() that handles both
> RX and TX state. Because we are passing 0 as out_fifo_level and
> SBSA_UART_FIFO_SIZE as out_size, we end up calling a function
> vpl011_update_tx_fifo_status() which performs TXI bit handling
> depending on the FIFO trigger level. This does not make sense when backend
> is in Xen, as we maintain a single TX state where data can always be
> written and as such there is no TX FIFO handling. Furthermore, this
> function assumes that the backend is in domain by making use of struct
> xencons_interface unconditionally. Fix it by calling this function only
> when backend is in domain. Also add an assert for sanity.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

I've tested this patch by manually running XTP on FVP_Base, and it works
fine. So:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 07:18:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 07:18:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520059.807281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmUk7-000475-Dr; Wed, 12 Apr 2023 07:18:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520059.807281; Wed, 12 Apr 2023 07:18:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmUk7-00046y-B9; Wed, 12 Apr 2023 07:18:11 +0000
Received: by outflank-mailman (input) for mailman id 520059;
 Wed, 12 Apr 2023 07:18:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QpzN=AD=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmUk5-00046s-Io
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 07:18:09 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20610.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::610])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2c638a7e-d902-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 09:18:08 +0200 (CEST)
Received: from DB8PR06CA0018.eurprd06.prod.outlook.com (2603:10a6:10:100::31)
 by DBBPR08MB6315.eurprd08.prod.outlook.com (2603:10a6:10:209::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28; Wed, 12 Apr
 2023 07:18:00 +0000
Received: from DBAEUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:100:cafe::31) by DB8PR06CA0018.outlook.office365.com
 (2603:10a6:10:100::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.40 via Frontend
 Transport; Wed, 12 Apr 2023 07:18:00 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT012.mail.protection.outlook.com (100.127.142.126) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.29 via Frontend Transport; Wed, 12 Apr 2023 07:18:00 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Wed, 12 Apr 2023 07:18:00 +0000
Received: from 7d22e4c8651a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DEB2799E-E68B-4E4C-A210-5916B48206A0.1; 
 Wed, 12 Apr 2023 07:17:49 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7d22e4c8651a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 12 Apr 2023 07:17:49 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB9PR08MB9684.eurprd08.prod.outlook.com (2603:10a6:10:460::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35; Wed, 12 Apr
 2023 07:17:45 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6277.034; Wed, 12 Apr 2023
 07:17:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c638a7e-d902-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8Fnve0Ztn4Wjfr4f/V2rjsT2NHcKUk9HKR+bwZFLvtc=;
 b=CAqdnEz25wKZE4gSHr90pjg5KnXY3kQEG0gaVO0v3dJMYF1lNr3RZwVdf+fSmLts2iUbCNy5AbiXgilcNoq8OghVcVpzNKZvcCQM9IFJcITjV3J2yW+CJW6WFFCbzz5L6ZWCp31AQj9D9ntMwUf9Z10iaJ/XKdBVu2eko3fOnIM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=J1crUVwBDPTW1NvxRDV5eMksN6QeaD8uO3cmHDY0LWsYuCRlUflwn0Yu/jYpyqED2+9E/OQaqtA+cgHOVy7VE4gSTRKTkWjagZPzoeVqIhAgiMyy3VR4cyl4k69DWW0cblcsvbiF+mX2JrmYk+xgZv4AXMBx62DRt/HCRTTbPruC1C2iXHgu6g/LRWXKkSf5AzzSafkTSShH10hpziOU+7flaRW+1CcFbnMp6nPqD8y+W+7ne6cGD6EJyYc07AcUa3kALETaMXjAvnQOLdUAeVxSNgSvDSNEe5FNdj3XXYlqnDk0GjAjZeqa83bJNaK1A3VeO9I5B+T9eew16DVykQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8Fnve0Ztn4Wjfr4f/V2rjsT2NHcKUk9HKR+bwZFLvtc=;
 b=EfCwq65pgHkLjORH9b+qT3uYCSFr34Z5cjX/cHJMAKSxq8f2e+2vqUsPyP3Ol33vYrHepexNNOG6tKcaL7mJf9932RsBQbof6JGtlIfVso2/08ojBBCUlCb6oqOHMhy3FgMxUv3S5IYBeMPSlUsUO+c11d1iMDHS82mda5cWGHJ+RsJAQ7PNBiPxBKU2v4QgodAdQ178yXrvVkeYOo5j//8/Hxj33ikGv3ED7JBY+sSpk8wAP4oD7pjrFDANzVdoqaQWVDPPYRBlIxYR0+nkCcy99tGTB6bYHMhbnHjIEfEdDyZVgFhiTe0FCd6XTLD3oYsWb7q73x4Fucv99QzIYg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8Fnve0Ztn4Wjfr4f/V2rjsT2NHcKUk9HKR+bwZFLvtc=;
 b=CAqdnEz25wKZE4gSHr90pjg5KnXY3kQEG0gaVO0v3dJMYF1lNr3RZwVdf+fSmLts2iUbCNy5AbiXgilcNoq8OghVcVpzNKZvcCQM9IFJcITjV3J2yW+CJW6WFFCbzz5L6ZWCp31AQj9D9ntMwUf9Z10iaJ/XKdBVu2eko3fOnIM=
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 2/3] xen/arm: vpl011: Handle correctly TXFE when backend
 in Xen
Thread-Topic: [PATCH 2/3] xen/arm: vpl011: Handle correctly TXFE when backend
 in Xen
Thread-Index: AQHZZ7BjfTPA7ZSpsEidFkEBoEBCAa8nPfIg
Date: Wed, 12 Apr 2023 07:17:45 +0000
Message-ID:
 <AS8PR08MB79915E2D49B4F3564257A483929B9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230405111750.12491-1-michal.orzel@amd.com>
 <20230405111750.12491-3-michal.orzel@amd.com>
In-Reply-To: <20230405111750.12491-3-michal.orzel@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: D65879B074B6EA42AC8C56E519FC8DAF.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB9PR08MB9684:EE_|DBAEUR03FT012:EE_|DBBPR08MB6315:EE_
X-MS-Office365-Filtering-Correlation-Id: 337f0675-ee7f-4aec-d694-08db3b260c3b
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 oxrzO5H0Ik6PQI3RZJlN0xOz/FZxQISaBcqoCEGlcFtjFw2vllvb+0iIGYh4xqtQCBxF+5ND610YK2ykqtbBeT02NaEPSTg0T410fF9UvbiAYKsN0ptbSMPEN5SSiH0Or1+J7juSdl6OFF8MtL6prNUI8sA7u6vmABgVJlpEPis6mWIEYyUs66+dGyuldF3UGfQ+lf2JajawnZvI1dXyhZgieljcX0vDMqLcxVqHSfyRH3OTQvwBpkEh0g5ZGxtKIDM4CVMeIRmIzbpqVXaoF+Xq9nvK0QOyNlnD7r0U0YZQysl2cANLQyYRqI/jSbSxt3nFiwmWPl8ZUduHwwwK9jcfM8AZ2W97ayYlKbvJgfaOXfDuMiD20CMOPDolL8KJQ+jhlfvpPtRG/D9IHAicwsUFQT/bpHg16wCgfMvt4LPxq1XvuPZbmybm2/AE7jwyThhMxWQd8g1kUBTyispJSU0kpMzVNTHVi1IcvSvmAgAlDs5miV4DZqOslxCmxddj9ScAGcTrRuzGGVcHuRoeVfURIWaz4RMTytC20SOXDpkHe83qOnELhJkFEAVOIvQ3v8YptBS3g0UaG/K5/JlXH+v+DXKvTXn+0NkN2qeSRC8S70RHdqG2Y20orCKtSJUm
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(376002)(366004)(136003)(396003)(39860400002)(451199021)(33656002)(86362001)(76116006)(110136005)(316002)(64756008)(71200400001)(41300700001)(54906003)(7696005)(66946007)(8676002)(4326008)(478600001)(66476007)(66446008)(66556008)(5660300002)(55016003)(8936002)(2906002)(4744005)(122000001)(38070700005)(52536014)(38100700002)(186003)(83380400001)(26005)(6506007)(9686003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB9684
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3084d2e2-1f7e-4595-0ad1-08db3b260348
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	K/5TPRNLPKViX0Tp3VTdvqnZI04de9Dhj7YoQvjTXz3upqPXYoDiFdR/CjbYwmHlvt9vlTIaChCv4YWkqxWjKMkB+fHCaCBTBJdTrINI2/MT/YueMNKr/zY04a6U8/bwl0LyMzQnKXdtJE6pPyYoPbGyX3NV4NZSNIfK9OVINH2LOcCVe2ycg8XCxTTwcoLuuTBuPba89W6R6UGzRAt3hsBPdNZlg30URTg9sSWBoMZjVwRkFeefCqYshjzJN6ZQFfdqmJ4/iDc5Hx//nSNRtgTyFyCowckrK09EgczFnsJBDFOrbQXtHN7RFUEmOBHrEGyEtYfrPcKTZAd/IHNWRTt2F4a0vvv/w9v7jDW9YLLtWyozIN4NxKNNgyRW5SNzFgL6r5JdSiNQ8M9nSXlJ/3ACpTi1fil8l36OasvGPgn0f4YDSm6E0lzyEVJyB3tvJg8JQBAijFO5LuMthgVDE4D+C25+0ccOJGZgaI/tzyD1IPDZ2hWQLf4muppE9wTmLh6APHYGIaQWNT/xjaghHDxDt1KAlfi/9blctgiJIF4qpp6qJeAcuJ3tWfUZMMdg9kmokMC75sKfwMDb2j4hMD1v7ezm0+8yOBXmqeeXroqcOAx7lZtlHzoMhXazuNmiZ7kodfBoczPSkI7EoxgDGk7krD8QwnX11XFY7ZFzuW0w5/ZD9XLrbDMJQQLL0OBf4sfiKiKGx483+7NW7wBmrw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(396003)(346002)(376002)(451199021)(46966006)(40470700004)(36840700001)(2906002)(4744005)(356005)(40460700003)(41300700001)(81166007)(52536014)(5660300002)(8936002)(8676002)(82310400005)(33656002)(86362001)(40480700001)(7696005)(55016003)(9686003)(6506007)(26005)(107886003)(54906003)(36860700001)(83380400001)(478600001)(336012)(47076005)(186003)(316002)(70206006)(70586007)(4326008)(110136005)(82740400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Apr 2023 07:18:00.3101
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 337f0675-ee7f-4aec-d694-08db3b260c3b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6315

Hi Michal,

> -----Original Message-----
> Subject: [PATCH 2/3] xen/arm: vpl011: Handle correctly TXFE when backend in
> Xen
>
> When backend is in Xen, the handling of data written to DR register is a
> bit special because we want to tell guest that we are always ready for new
> data to be written (i.e. no real FIFO, TXFF/BUSY never set and TXI always
> set). This conflicts with the current handling of TXFE bit, which we
> always clear and never set on a write path (we happen to set it when we
> receive char from serial input due to use of vpl011_data_avail() but this
> might never be called). This can lead to issues if a guest driver makes
> use of TXFE bit to check for TX transmission completion (such guest could
> then wait endlessly). Fix it by keeping TXFE always set to match the
> current emulation logic.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>
Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
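For readers unfamiliar with the PL011 flag register, the behaviour the patch describes can be sketched as below. This is a hypothetical illustration, not the actual Xen vpl011 code: the structure, function, and macro names are made up for the example; only the bit positions follow the PL011 UARTFR layout.

```c
#include <stdint.h>

/* UARTFR bit positions per the PL011 register layout. */
#define FR_TXFE (1u << 7)   /* transmit FIFO empty */
#define FR_TXFF (1u << 5)   /* transmit FIFO full */
#define FR_BUSY (1u << 3)   /* UART busy transmitting */

/* Hypothetical emulated UART state for illustration only. */
struct vpl011_state {
    uint32_t fr;            /* emulated UARTFR register */
};

/*
 * Guest write to the data register when the backend is in Xen: there is
 * no real TX FIFO, the character is consumed immediately, so from the
 * guest's point of view transmission always completes at once.  TXFE
 * must therefore stay set, and TXFF/BUSY must never be set; otherwise a
 * guest driver polling TXFE for TX completion could wait forever.
 */
static inline void vpl011_write_dr(struct vpl011_state *s, uint8_t ch)
{
    (void)ch;                       /* backend consumes the byte at once */
    s->fr |= FR_TXFE;               /* FIFO is (conceptually) always empty */
    s->fr &= ~(FR_TXFF | FR_BUSY);  /* never full, never busy */
}
```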


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 08:44:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 08:44:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520065.807292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmW5B-00057j-Om; Wed, 12 Apr 2023 08:44:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520065.807292; Wed, 12 Apr 2023 08:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmW5B-00057c-LX; Wed, 12 Apr 2023 08:44:01 +0000
Received: by outflank-mailman (input) for mailman id 520065;
 Wed, 12 Apr 2023 08:44:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmW5A-00057O-57; Wed, 12 Apr 2023 08:44:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmW5A-00038b-2S; Wed, 12 Apr 2023 08:44:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmW59-0007wT-JJ; Wed, 12 Apr 2023 08:43:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmW59-00063j-Gb; Wed, 12 Apr 2023 08:43:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3JFpmcgnGXT87F7+LhsTc5a5sKrJMy9tEoW1Bm+evp0=; b=t+A7Z0t4fICYvsp5NKhkd6Czn1
	WdZya+E2L7M+dody6UrsJcY8Ep8cPE280zgElJDsx2VpoGJ2uADtk2PM/5mjW8nSPtNbCB2Kwn/j/
	/tmPmyRd+JjAK3p+K9xWckIum29OowtMHiZjGUdY1o9ESR0IKAGhnDtPEsfR3xFMBZdk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180208-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180208: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=e62252bc55b6d4eddc6c2bdbf95a448180d6a08d
X-Osstest-Versions-That:
    linux=0d3eb744aed40ffce820cded61d7eac515199165
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Apr 2023 08:43:59 +0000

flight 180208 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180208/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180199
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180199
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180199
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180199
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180199
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                e62252bc55b6d4eddc6c2bdbf95a448180d6a08d
baseline version:
 linux                0d3eb744aed40ffce820cded61d7eac515199165

Last test of basis   180199  2023-04-10 21:41:48 Z    1 days
Testing same since   180208  2023-04-11 19:10:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Arseniy Krasnov <AVKrasnov@sberdevices.ru>
  Bang Li <libang.linuxer@gmail.com>
  Basavaraj Natikar <Basavaraj.Natikar@amd.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Christoph Hellwig <hch@lst.de>
  Christophe Kerello <christophe.kerello@foss.st.com>
  Damien Le Moal <dlemoal@kernel.org>
  David Sterba <dsterba@suse.com>
  Fuad Tabba <tabba@google.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Oliver Upton <oliver.upton@linux.dev>
  Paolo Bonzini <pbonzini@redhat.com>
  Reiji Watanabe <reijiw@google.com>
  Reinette Chatre <reinette.chatre@intel.com>
  Richard Weinberger <richard@nod.at>
  Rob Herring <robh@kernel.org>
  Thomas Glanzmann <thomas@glanzmann.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   0d3eb744aed4..e62252bc55b6  e62252bc55b6d4eddc6c2bdbf95a448180d6a08d -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:08:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:08:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520071.807302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmWSa-0007cd-O9; Wed, 12 Apr 2023 09:08:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520071.807302; Wed, 12 Apr 2023 09:08:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmWSa-0007cW-LX; Wed, 12 Apr 2023 09:08:12 +0000
Received: by outflank-mailman (input) for mailman id 520071;
 Wed, 12 Apr 2023 09:01:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sOUS=AD=zen2.lab.linutronix.de=alex@srs-se1.protection.inumbo.net>)
 id 1pmWMC-0007XZ-F4
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:01:36 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f19566a-d910-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 11:01:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f19566a-d910-11ed-8611-37d641c3527e
From: Alexander Kanavin <alex@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681290092;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=ASDAJcN9ZDCx7m2Ft89Q46dO/QfgStSaLx/RmpoqlmE=;
	b=j9n5cvIz7zoE/PDOF9CPf+FJcl6xd+Q8njqcaKJTWCkpCrqxeijmMdF2w3qJEWtCHStLLW
	MRidshZoi5xM6kQN79R9ICW3OPGW/6YXfHTI9GluBPlwkNDaMRV/HCmrNUZL7rtKms4tBX
	7C+3WRWa00YnmxvhZP58E+SjWYDpDDKlak+XxBv3HcCKyRKDWLpiS1snhiz25YKP9FrkIA
	SuNumLFhyiEf3saSBjqX3H/+TAE7f3ktg3g8Zr4t3wJBgvK5KFHXXazQFoRPQcO6h2VV28
	YY5Ec68PwYuZEJKhJ4zzjHgN5tZL2XWA0Yp5OtRoRgZVnFl2x5rEhTaL80XljQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681290092;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=ASDAJcN9ZDCx7m2Ft89Q46dO/QfgStSaLx/RmpoqlmE=;
	b=Y2N2se/Zydec6sWuFFSISLM+R1HkD+nTrPu9DD9ulcZYJ+sX3ceUARERhH3Vs8eLjjndEA
	hoZs6wFhmyk91LDA==
To: xen-devel@lists.xenproject.org
Cc: Alexander Kanavin <alex@linutronix.de>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] tools/xenstore/xenstored_control.c: correctly print time_t
Date: Wed, 12 Apr 2023 11:01:04 +0200
Message-Id: <20230412090104.3794213-1-alex@linutronix.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On 32 bit systems with 64 bit time_t (hello, Y2038 problem),
the following error occurs otherwise:

| xenstored_control.c: In function 'lu_reject_reason':
| xenstored_control.c:646:70: error: format '%ld' expects argument of type 'long int', but argument 5 has type 'time_t' {aka 'long long int'} [-Werror=format=]

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
---
 tools/xenstore/xenstored_control.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index cbd62556c3..8683947d25 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -668,10 +668,10 @@ static const char *lu_reject_reason(const void *ctx)
 	list_for_each_entry(conn, &connections, list) {
 		if (conn->ta_start_time &&
 		    (now - conn->ta_start_time >= lu_status->timeout)) {
-			ret = talloc_asprintf(ctx, "%s\nDomain %u: %ld s",
+			ret = talloc_asprintf(ctx, "%s\nDomain %u: %jd s",
 					      ret ? : "Domains with long running transactions:",
 					      conn->id,
-					      now - conn->ta_start_time);
+					      (intmax_t)now - conn->ta_start_time);
 		}
 	}
 
-- 
2.30.2
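The portability pattern the patch applies is standard C: time_t may be 'long' on one ABI and 'long long' on another (e.g. 32-bit systems built with 64-bit time_t for Y2038 safety), so no single length modifier fits every platform. Casting to intmax_t and printing with "%jd" works everywhere. A minimal sketch of the idea, with an illustrative function name not taken from the xenstored code:

```c
#include <inttypes.h>   /* intmax_t and the %jd conversion */
#include <stdio.h>
#include <time.h>

/*
 * Format an elapsed time in seconds portably.  The subtraction is done
 * in time_t, then the result is widened to intmax_t so that "%jd"
 * matches the argument type on every ABI, whatever time_t's width.
 */
int format_elapsed(char *buf, size_t len, time_t start, time_t now)
{
    return snprintf(buf, len, "%jd s", (intmax_t)(now - start));
}
```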



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:14:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:14:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520078.807311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmWYM-0000gA-G6; Wed, 12 Apr 2023 09:14:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520078.807311; Wed, 12 Apr 2023 09:14:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmWYM-0000g3-D6; Wed, 12 Apr 2023 09:14:10 +0000
Received: by outflank-mailman (input) for mailman id 520078;
 Wed, 12 Apr 2023 09:14:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SpRk=AD=citrix.com=prvs=4590bba82=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pmWYK-0000fx-Q9
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:14:09 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5df577ba-d912-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 11:14:04 +0200 (CEST)
Received: from mail-mw2nam04lp2173.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 Apr 2023 05:13:58 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by BN8PR03MB4931.namprd03.prod.outlook.com (2603:10b6:408:d8::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Wed, 12 Apr
 2023 09:13:54 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee%3]) with mapi id 15.20.6277.038; Wed, 12 Apr 2023
 09:13:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5df577ba-d912-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681290845;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=2991Y1o0h26C9mYfga/O1VOH3rPvqbiCrW9xHOa9IfQ=;
  b=B34CwRx++heqcu99TsrhSr3RDYWq6a9bwOfaiMwVV1se8MzW6QBZuRmY
   VtEpvmPmA9Q+qaFSTGGWcQk7x7RkEWKzjPH41Kt0gW04jUbROgRVADFgX
   cwxCvdNW9hcmwFKl+sLHt/m6gUpFSWlYaquN7GKEZl7tf1tjfA8e+fy4j
   c=;
X-IronPort-RemoteIP: 104.47.73.173
X-IronPort-MID: 107631059
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:FLSQ7a2UyrwK8F7Z2/bD5V1wkn2cJEfYwER7XKvMYLTBsI5bp2MHn
 GQeWG2PMveKamSnfd4kYN+18B8Fu5CGzIcwTAdtpC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS+HuDgNyo4GlD5gBnP6gS1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfLHleq
 ewjAigxMEqK2M7p3bjjRsNwmZF2RCXrFNt3VnBI6xj8VapjZK+ZBqLA6JlfwSs6gd1IEbDGf
 c0FZDFzbRPGJRpSJlMQD5F4l+Ct7pX9W2QA9BTJ+uxouC6Kk1cZPLvFabI5fvSQQspYhACAr
 3/u9GXlGBAKcteYzFJp91r13rCRxnqjBdN6+LuQx+VJmQSh7D0oEAA0WUe+pqOrkG2BcocKQ
 6AT0m90xUQoz2SVSd36Uwy9sWSzlBcWUNpNEMU38AiIjKHT5m6xFmUCCzJMdtEinMs3XiAxk
 E+EmcvzAj5iu6HTTmiSnp+Wpz6vPSkeLUcZeDQJCwAC5rHLv4Ubnh/JCNF5H8adjMDxGDz26
 yCHqm45nbp7pdUQy6yx8FTDgjStjpvEVAg44kPQRG3NxhtweYqNd4Gur1/B4p5oL4uHT1/Ho
 HkNneCf6vwDCdeGkynlfQkWNLSg5vLANSKGh1dqR8Ul7270pCXlep1M6jZjIksvKtwDZTLif
 E7Uv0VW+YNXO3ypK6RwZupdFvgX8EQpLvy9Pti8UzaESsIZmNOvlM22WXOt4g==
IronPort-HdrOrdr: A9a23:eWUF+6Np4prdvsBcTvmjsMiBIKoaSvp037Eqv3oBKiC9E/b1qy
 nKpp9w6faaslsssQ4b6LS90cW7L080l6QFhLX5TI3SPjUO0VHARL2KhrGC/9SPIU3DH+dmpM
 BdWqJjEsD3CVRgreuS2maFL+o=
X-IronPort-AV: E=Sophos;i="5.98,338,1673931600"; 
   d="scan'208";a="107631059"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YRI/DNIdat9Yxq9em9NW7+zqMf/zi9mgU/w7GjDei02GUaDGCgVLceNJZRbX/4pwaw3CNsnkh4kogxvnCBjyUYZMd3tTOGX0QLrCFGqd92fB154AV9o8lxkQjrwwE/jlR5CujZcXsiWcXl/Ti79BxFXYsJPmAcaNqYXnahx2xc1KylYamfO61O+FeO2UfdE0eYZWsFcMJ8w7iwiHT47ywZsGWCLzcyKelEYu4AhYvrwHEeHSyC7moV20Za+Y469VgwhW7qV65I4DATCb9nRje9wxsg1AUOQXpBgmEru8pIrmjUnY3RxxXpUM3nodgCB1Mk+/njNSiiDL1SugOAZK3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HK/G2jZPuDIzdh5segEY+bnvxm5SocI73p3sLSt9VtI=;
 b=cy+2WOhKdXKipqA+evdEDMNeh9KwuUqDgaX/I+DTvrcgqG7U7sk3/CasnZvA/qqCnh59Vy/os7ffDgBGK8n3ZBoetVXA1TSGfl9li0gc8k1SAWmOLvSQTQQpoEJevj9XrVVFkkOMgKUIp4K3us+QPadzPqmsmmUzHEWFab6zVWgQrOvnTnX9J6CFEl7iNxTIAOO4mftan3a1srmdaX8c7LLLIjS1KZaWulGEokpYOzZB59i6WMijhMy7CWzhE+eGq/L+HoCDSxaArP5PraagH6tAln1AtBNBDUDp4YmGGYor96t1t8dz9xJatsibn2iw4VRGMQ72XOcIJrffoQP1nw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HK/G2jZPuDIzdh5segEY+bnvxm5SocI73p3sLSt9VtI=;
 b=wifTpD/GczYOpvGKSpQ5pRbinAQPtInC5ScuzjpVRYfPQlKMuH89ERMdMJllG0HYrZ9j87CvOdrXut1nM1lOqRg8j8DgNj3/i7AOTiDEgko4FSEVc4jp5OyosoxurimQo7GVEf2pMGCG15oIa/EGGCxDA0fOQCGA4Ql/rPaFx7Y=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 12 Apr 2023 11:13:47 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Message-ID: <ZDZ2S4OxP2e12oSX@Air-de-Roger>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger>
 <873556xa0g.fsf@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <873556xa0g.fsf@epam.com>
X-ClientProxiedBy: LO2P265CA0222.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:b::18) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c20940d5-8f31-4eb6-d1ae-08db3b363d20
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Apr 2023 09:13:54.5879
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vxJo11qcy+ow6KJNBMWlRPbRVzykqUFes2mJg25PfQFk5qGdqsU+1nLw91PlBg68tFf88nGMIC3PUDK6YnTIOw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB4931

On Tue, Apr 11, 2023 at 11:41:04PM +0000, Volodymyr Babchuk wrote:
> 
> Hi Roger,
> 
> Roger Pau Monné <roger.pau@citrix.com> writes:
> 
> > On Tue, Mar 14, 2023 at 08:56:29PM +0000, Volodymyr Babchuk wrote:
> >> Prior to this change, lifetime of pci_dev objects was protected by global
> >> pcidevs_lock(). Long-term plan is to remove this log, so we need some
> >                                                    ^ lock
> >
> > I wouldn't say remove, as one way or another we need a lock to protect
> > concurrent accesses.
> >
> 
> I'll write "replace this global lock with a couple of more granular
> locks"
> if this is okay with you.
> 
> >> other mechanism to ensure that those objects will not disappear under
> >> the feet of code that accesses them. Reference counting is a good choice,
> >> as it provides an easy-to-comprehend way to control object lifetime.
> >> 
> >> This patch adds two new helper functions: pcidev_get() and
> >> pcidev_put(). pcidev_get() will increase reference counter, while
> >> pcidev_put() will decrease it, destroying object when counter reaches
> >> zero.
> >> 
> >> pcidev_get() should be used only when you already have a valid pointer
> >> to the object or you are holding a lock that protects one of the
> >> lists (domain, pseg or ats) that store pci_dev structs.
> >> 
> >> pcidev_get() is rarely used directly, because there already are
> >> functions that will provide a valid pointer to a pci_dev struct:
> >> pci_get_pdev() and pci_get_real_pdev(). They will lock the appropriate
> >> list, find the needed object and increase its reference counter before
> >> returning it to the caller.
> >> 
> >> Naturally, pcidev_put() should be called after finishing working with a
> >> received object. This is the reason why this patch has so many
> >> pcidev_put()s and so few pcidev_get()s: existing calls to
> >> pci_get_*() functions now will increase the reference counter
> >> automatically; we just need to decrease it back when we are finished.
> >
> > After looking a bit into this, I would like to ask whether you have
> > considered the need to increase the refcount for each use of a pdev.
> >
> 
> This is how Linux uses reference counting. It decreases cognitive load
> and the chance of error, as there is a simple set of rules which you
> follow.
> 
> > For example I would consider the initial alloc_pdev() to take a
> > refcount, and then pci_remove_device() _must_ be the function that
> > removes the last refcount, so that it can return -EBUSY otherwise (see
> > my comment below).
> 
> I tend to disagree there, as this ruins the very idea of reference
> counting. We can't know who else holds a reference right now. Okay, we
> might know, but that requires an additional lock to serialize
> accesses, which, in turn, makes the refcount unneeded.

In principle pci_remove_device() must report whether the device is
ready to be physically removed from the system, so it must return
-EBUSY if there are still users accessing the device.

A user would use PHYSDEVOP_manage_pci_remove to signal Xen it's trying
to physically remove a PCI device from a system, so we must ensure
that when the hypervisor returns success the device is ready to be
physically removed.

Or at least that's my understanding of how this should work.

> >
> > I would also think that having the device assigned to a guest will take
> > another refcount, and then any usage from further callers (ie: like
> > vpci) will need some kind of protection from preventing the device
> > from being deassigned from a domain while vPCI handlers are running,
> > and the current refcount won't help with that.
> 
> Yes, the idea of this refcounting is to ensure that a pdev object remains
> a valid object in memory while we are holding a long-term pointer to
> it. Indeed, vPCI handlers should use some other mechanism to ensure that
> the pdev is not being re-assigned while handlers are running. I believe
> this is the task of vpci->lock. Should we call
> vpci_remove_device/vpci_add_handlers each time we re-assign a PCI device?

Yes, I think this was also part of a comment I've made on a different
patch.  The device state needs to be cleared when assigned to a
different guest (as the hardware domain will also perform a device
reset).

I think there are some points that need to be part of the commit
message so the code can be properly evaluated:

 - The reference counting is only used to ensure the object cannot be
   removed while in use.  Users of the pci device object should
   implement whatever protections required in order to get mutual
   exclusion between them and device state changes.
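For concreteness, the get/put semantics under discussion can be sketched with a plain C11 atomic counter. This is illustrative only: the helper names mirror the patch, but the struct layout and the use of `<stdatomic.h>` are assumptions of the sketch (Xen has its own atomics and a much larger struct pci_dev):

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Illustrative stand-in for struct pci_dev with an embedded refcount. */
struct pdev {
    atomic_uint refcnt;
    /* ... device state ... */
};

/* Allocation hands the caller the initial reference. */
static struct pdev *pdev_alloc(void)
{
    struct pdev *p = calloc(1, sizeof(*p));

    if ( p )
        atomic_store(&p->refcnt, 1);
    return p;
}

static void pcidev_get(struct pdev *p)
{
    atomic_fetch_add(&p->refcnt, 1);
}

/* Drop one reference; destroy the object when the last one goes away. */
static void pcidev_put(struct pdev *p)
{
    /* fetch_sub returns the previous value; 1 means this was the last ref */
    if ( atomic_fetch_sub(&p->refcnt, 1) == 1 )
        free(p);
}
```

In this model, lookup helpers such as pci_get_pdev() call pcidev_get() while holding the list lock and return the pointer already referenced, which is why the patch adds mostly pcidev_put() calls.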

> >
> > That makes me wonder if for example callers of pci_get_pdev(d, sbdf)
> > do need to take an extra refcount, because such access is already
> > protected from the pdev going away by the fact that the device is
> > assigned to a guest.  But maybe it's too much work to separate users
> > of pci_get_pdev(d, ...); vs pci_get_pdev(NULL, ...);.
> >
> > There's also a window when the refcount is dropped to 0, and the
> > destruction function is called, but at the same time a concurrent
> > thread could attempt to take a reference to the pdev still?
> 
> The last pcidev_put() would be called by pci_remove_device(), after removing
> the device from all lists. This should prevent other threads from obtaining
> a valid reference to the pdev.

What if a concurrent user has taken a reference to the object before
pci_remove_device() has removed the device from the lists, and still
holds it when pci_remove_device() performs the supposedly last
pcidev_put() call?
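One common way to close the window between "refcount hits zero" and "concurrent lookup takes a new reference" is a get-unless-zero primitive, as Linux's kref_get_unless_zero() provides: a lookup simply fails once destruction has begun. A sketch under the same illustrative assumptions as before (pcidev_get_unless_zero() is a hypothetical name, not part of the patch):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative stand-in for struct pci_dev with an embedded refcount. */
struct pdev {
    atomic_uint refcnt;
};

/*
 * Take a reference only if the count is still non-zero.  A list lookup
 * using this cannot resurrect an object whose final put has already run,
 * even if the lookup races with removal from the lists.
 */
static bool pcidev_get_unless_zero(struct pdev *p)
{
    unsigned int old = atomic_load(&p->refcnt);

    do {
        if ( old == 0 )
            return false;   /* destruction in progress: lookup fails */
    } while ( !atomic_compare_exchange_weak(&p->refcnt, &old, old + 1) );

    return true;
}
```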

> >
> >>          sbdf.devfn &= ~stride;
> >>          pdev = pci_get_pdev(NULL, sbdf);
> >>          if ( pdev && stride != pdev->phantom_stride )
> >> +        {
> >> +            pcidev_put(pdev);
> >>              pdev = NULL;
> >> +        }
> >>      }
> >>  
> >>      return pdev;
> >> @@ -548,13 +526,18 @@ struct pci_dev *pci_get_pdev(const struct domain *d, pci_sbdf_t sbdf)
> >>          list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
> >>              if ( pdev->sbdf.bdf == sbdf.bdf &&
> >>                   (!d || pdev->domain == d) )
> >> +            {
> >> +                pcidev_get(pdev);
> >>                  return pdev;
> >> +            }
> >>      }
> >>      else
> >>          list_for_each_entry ( pdev, &d->pdev_list, domain_list )
> >>              if ( pdev->sbdf.bdf == sbdf.bdf )
> >> +            {
> >> +                pcidev_get(pdev);
> >>                  return pdev;
> >> -
> >> +            }
> >>      return NULL;
> >>  }
> >>  
> >> @@ -663,7 +646,10 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
> >>                              PCI_SBDF(seg, info->physfn.bus,
> >>                                       info->physfn.devfn));
> >>          if ( pdev )
> >> +        {
> >>              pf_is_extfn = pdev->info.is_extfn;
> >> +            pcidev_put(pdev);
> >> +        }
> >>          pcidevs_unlock();
> >>          if ( !pdev )
> >>              pci_add_device(seg, info->physfn.bus, info->physfn.devfn,
> >> @@ -818,7 +804,9 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
> >>              if ( pdev->domain )
> >>                  list_del(&pdev->domain_list);
> >>              printk(XENLOG_DEBUG "PCI remove device %pp\n", &pdev->sbdf);
> >> -            free_pdev(pseg, pdev);
> >> +            list_del(&pdev->alldevs_list);
> >> +            pdev_msi_deinit(pdev);
> >> +            pcidev_put(pdev);
> >
> > Hm, I think here we want to make sure that the device has been freed,
> > or else you would have to return -EBUSY to the calls to notify that
> > the device is still in use.
> 
> Why? As far as I can see, the pdev object may still potentially be accessed
> by some other CPU right now, so it will be freed only after the last
> reference is dropped. As it has already been removed from all the lists,
> pci_dev_get() will not find it anymore.
> 
> Actually, I can't see how this can happen in reality, as vPCI, MSI and
> IOMMU are already deactivated for this device. So no one would touch it.

Wouldn't it be possible for a concurrent user to hold a reference from
before the device has been 'deactivated'?

> >
> > I think we need an extra pcidev_put_final() or similar that can be
> > used in pci_remove_device() to assert that the device has been
> > actually removed.
> 
> Will something break if we don't do this? I can't see how this can
> happen.

As mentioned above, once pci_remove_device() returns 0 the admin
should be capable of physically removing the device from the system.

> >
> >>              break;
> >>          }
> >>  
> >> @@ -848,7 +836,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
> >>      {
> >>          ret = iommu_quarantine_dev_init(pci_to_dev(pdev));
> >>          if ( ret )
> >> -           return ret;
> >> +            goto out;
> >>  
> >>          target = dom_io;
> >>      }
> >> @@ -878,6 +866,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
> >>      pdev->fault.count = 0;
> >>  
> >>   out:
> >> +    pcidev_put(pdev);
> >>      if ( ret )
> >>          printk(XENLOG_G_ERR "%pd: deassign (%pp) failed (%d)\n",
> >>                 d, &PCI_SBDF(seg, bus, devfn), ret);
> >> @@ -1011,7 +1000,10 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
> >>              pdev->fault.count >>= 1;
> >>          pdev->fault.time = now;
> >>          if ( ++pdev->fault.count < PT_FAULT_THRESHOLD )
> >> +        {
> >> +            pcidev_put(pdev);
> >>              pdev = NULL;
> >> +        }
> >>      }
> >>      pcidevs_unlock();
> >>  
> >> @@ -1022,6 +1014,8 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
> >>       * control it for us. */
> >>      cword = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
> >>      pci_conf_write16(pdev->sbdf, PCI_COMMAND, cword & ~PCI_COMMAND_MASTER);
> >> +
> >> +    pcidev_put(pdev);
> >>  }
> >>  
> >>  /*
> >> @@ -1138,6 +1132,7 @@ static int __hwdom_init cf_check _setup_hwdom_pci_devices(
> >>                  printk(XENLOG_WARNING "Dom%d owning %pp?\n",
> >>                         pdev->domain->domain_id, &pdev->sbdf);
> >>  
> >> +            pcidev_put(pdev);
> >>              if ( iommu_verbose )
> >>              {
> >>                  pcidevs_unlock();
> >> @@ -1385,33 +1380,28 @@ static int iommu_remove_device(struct pci_dev *pdev)
> >>      return iommu_call(hd->platform_ops, remove_device, devfn, pci_to_dev(pdev));
> >>  }
> >>  
> >> -static int device_assigned(u16 seg, u8 bus, u8 devfn)
> >> +static int device_assigned(struct pci_dev *pdev)
> >>  {
> >> -    struct pci_dev *pdev;
> >>      int rc = 0;
> >>  
> >>      ASSERT(pcidevs_locked());
> >> -    pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
> >> -
> >> -    if ( !pdev )
> >> -        rc = -ENODEV;
> >>      /*
> >>       * If the device exists and it is not owned by either the hardware
> >>       * domain or dom_io then it must be assigned to a guest, or be
> >>       * hidden (owned by dom_xen).
> >>       */
> >> -    else if ( pdev->domain != hardware_domain &&
> >> -              pdev->domain != dom_io )
> >> +    if ( pdev->domain != hardware_domain &&
> >> +         pdev->domain != dom_io )
> >>          rc = -EBUSY;
> >>  
> >>      return rc;
> >>  }
> >>  
> >>  /* Caller should hold the pcidevs_lock */
> >
> > I would assume the caller has taken an extra reference to the pdev, so
> > holding the pcidevs_lock is no longer needed?
> 
> I assume that the lock may be required by the MSI-X or IOMMU functions that
> are being called here. For example, I can see that reassign_device() in
> pci_amd_iommu.c manipulates some lists. I believe it should be
> protected by the lock.

OK, so that's pcidevs_lock being used to protect something else that's
not strictly a pci device, but a related structure.

> >
> >> -static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
> >> +static int assign_device(struct domain *d, struct pci_dev *pdev, u32 flag)
> >>  {
> >>      const struct domain_iommu *hd = dom_iommu(d);
> >> -    struct pci_dev *pdev;
> >> +    uint8_t devfn;
> >>      int rc = 0;
> >>  
> >>      if ( !is_iommu_enabled(d) )
> >> @@ -1422,10 +1412,11 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
> >>  
> >>      /* device_assigned() should already have cleared the device for assignment */
> >>      ASSERT(pcidevs_locked());
> >> -    pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
> >>      ASSERT(pdev && (pdev->domain == hardware_domain ||
> >>                      pdev->domain == dom_io));
> >>  
> >> +    devfn = pdev->devfn;
> >> +
> >>      /* Do not allow broken devices to be assigned to guests. */
> >>      rc = -EBADF;
> >>      if ( pdev->broken && d != hardware_domain && d != dom_io )
> >> @@ -1460,7 +1451,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
> >>   done:
> >>      if ( rc )
> >>          printk(XENLOG_G_WARNING "%pd: assign (%pp) failed (%d)\n",
> >> -               d, &PCI_SBDF(seg, bus, devfn), rc);
> >> +               d, &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
> >>      /* The device is assigned to dom_io so mark it as quarantined */
> >>      else if ( d == dom_io )
> >>          pdev->quarantine = true;
> >> @@ -1595,6 +1586,9 @@ int iommu_do_pci_domctl(
> >>          ASSERT(d);
> >>          /* fall through */
> >>      case XEN_DOMCTL_test_assign_device:
> >> +    {
> >> +        struct pci_dev *pdev;
> >> +
> >>          /* Don't support self-assignment of devices. */
> >>          if ( d == current->domain )
> >>          {
> >> @@ -1622,26 +1616,36 @@ int iommu_do_pci_domctl(
> >>          seg = machine_sbdf >> 16;
> >>          bus = PCI_BUS(machine_sbdf);
> >>          devfn = PCI_DEVFN(machine_sbdf);
> >> -
> >>          pcidevs_lock();
> >> -        ret = device_assigned(seg, bus, devfn);
> >> +        pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
> >> +        if ( !pdev )
> >> +        {
> >> +            printk(XENLOG_G_INFO "%pp non-existent\n",
> >> +                   &PCI_SBDF(seg, bus, devfn));
> >> +            ret = -EINVAL;
> >> +            break;
> >> +        }
> >> +
> >> +        ret = device_assigned(pdev);
> >>          if ( domctl->cmd == XEN_DOMCTL_test_assign_device )
> >>          {
> >>              if ( ret )
> >>              {
> >> -                printk(XENLOG_G_INFO "%pp already assigned, or non-existent\n",
> >> +                printk(XENLOG_G_INFO "%pp already assigned\n",
> >>                         &PCI_SBDF(seg, bus, devfn));
> >>                  ret = -EINVAL;
> >>              }
> >>          }
> >>          else if ( !ret )
> >> -            ret = assign_device(d, seg, bus, devfn, flags);
> >> +            ret = assign_device(d, pdev, flags);
> >> +
> >> +        pcidev_put(pdev);
> >
> > I would think you need to keep the refcount here if ret == 0, so that
> > the device cannot be removed while assigned to a domain?
> 
> Looks like we are perceiving the function of the refcnt in different
> ways. For me, this is the mechanism to guarantee that if we have a valid
> pointer to an object, this object will not disappear under our
> feet. This is the main function of krefs in the Linux kernel: if your
> code holds a reference to an object, you can be sure that this object
> exists in memory.
> 
> On the other hand, it seems that you are considering this refcnt as a usage
> counter for the actual PCI device, not the "struct pdev" that represents
> it. Those are two related things, but not the same. So I can see why
> you are suggesting taking an additional reference there. But to me this
> looks unnecessary: the very first refcount is obtained in
> pci_add_device() and there is the corresponding function
> pci_remove_device() that will drop this refcount. So, for me, if an admin
> wants to remove a PCI device which is assigned to a domain, they can do
> this just as they were able to prior to these patches.

This is all fine, but needs to be stated in the commit message.

> The main value of introducing refcnt is to be able to access pdev objects
> without holding the global pcidevs_lock(). This does not mean that you
> don't need locking at all. But this allows you to use pdev->lock (which
> does not exist in this series, but was introduced in an earlier RFC), or
> vpci->lock, or any other subsystem->lock.

I guess I was missing this other bit about introducing a
per-device lock; would it be possible to bundle all this together into
a single patch series?

It would be good to place this change together with any other locking
related change that you have pending.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:45:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:45:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520082.807322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX2r-00047G-Uo; Wed, 12 Apr 2023 09:45:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520082.807322; Wed, 12 Apr 2023 09:45:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX2r-000479-S3; Wed, 12 Apr 2023 09:45:41 +0000
Received: by outflank-mailman (input) for mailman id 520082;
 Wed, 12 Apr 2023 09:45:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmX2q-00046z-1q; Wed, 12 Apr 2023 09:45:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmX2p-0004hN-V2; Wed, 12 Apr 2023 09:45:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmX2p-0002bT-HS; Wed, 12 Apr 2023 09:45:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmX2p-000807-Gz; Wed, 12 Apr 2023 09:45:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VMKyUUjzjrTCk4213uS93aIez0iiAg7/17gbjT222pE=; b=uOqOAWwB534Zuf+hcrZ6BMf0kQ
	aALRJJ7pQuORy7hD72VdaLRYB2KO30XF7mJUKsxIUDF0Xv2+aTwqmbjH+qY9KvPuBcLD8HYjnJGwK
	u0QT1j+2QTh362yNDcDDoulTucEF032YSlboM84JbIq1EOJCjt08e8ETysdKPpJ8rHF8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180216-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180216: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b991aec0509f24ae7573d732ba337549ecee310c
X-Osstest-Versions-That:
    ovmf=51734dfc48466eddfb0f8acdb24518266c36c905
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Apr 2023 09:45:39 +0000

flight 180216 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180216/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b991aec0509f24ae7573d732ba337549ecee310c
baseline version:
 ovmf                 51734dfc48466eddfb0f8acdb24518266c36c905

Last test of basis   180203  2023-04-11 07:40:42 Z    1 days
Testing same since   180216  2023-04-12 07:13:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Yu Pu <yu.pu@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   51734dfc48..b991aec050  b991aec0509f24ae7573d732ba337549ecee310c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:49:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:49:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520090.807336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX6x-0004oJ-Vw; Wed, 12 Apr 2023 09:49:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520090.807336; Wed, 12 Apr 2023 09:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX6x-0004nq-QE; Wed, 12 Apr 2023 09:49:55 +0000
Received: by outflank-mailman (input) for mailman id 520090;
 Wed, 12 Apr 2023 09:49:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX6w-0004lU-6N
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:49:54 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 5f7e0b57-d917-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 11:49:53 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C899A168F;
 Wed, 12 Apr 2023 02:50:36 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1211B3F587;
 Wed, 12 Apr 2023 02:49:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f7e0b57-d917-11ed-b21e-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 02/12] xen/arm: add SVE vector length field to the domain
Date: Wed, 12 Apr 2023 10:49:28 +0100
Message-Id: <20230412094938.2693890-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add an sve_vl field to the arch_domain and xen_arch_domainconfig
structures, to give the domain information about the SVE feature
and the number of SVE register bits that are allowed for this
domain.

The sve_vl field is the vector length in bits divided by 128; this
allows the structures to use less space.

The field is also used to allow or forbid a domain to use SVE,
because a value equal to zero means the guest is not allowed to
use the feature.

Check that the requested vector length is lower than or equal to the
platform-supported vector length; otherwise fail on domain
creation.
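The encoding described above (storing VL/128 so it fits in a small integer field) can be sketched as follows. sve_decode_vl() matches the helper named in the changelog below; sve_encode_vl() is a hypothetical counterpart added here only for illustration:

```c
/* Illustrative sketch of the compact sve_vl encoding; the macro name
 * matches the one used in xen/arch/arm/arm64/sve.c. */
#define SVE_VL_MULTIPLE_VAL 128U

/* Hypothetical encoder: vector length in bits -> compact field (VL/128). */
static unsigned int sve_encode_vl(unsigned int vl_bits)
{
    return vl_bits / SVE_VL_MULTIPLE_VAL;
}

/* Decode the compact field back to a vector length in bits.  A stored
 * value of 0 decodes to 0, i.e. SVE is not allowed for the domain. */
static unsigned int sve_decode_vl(unsigned int sve_vl)
{
    return sve_vl * SVE_VL_MULTIPLE_VAL;
}
```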

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v4:
 - Return 0 in get_sys_vl_len() if SVE is not supported, code style fix,
   removed else if since the conditions can't fall through, removed the
   unneeded condition check for VL bits validity because it's already
   covered, so deleted the is_vl_valid() function. (Jan)
Changes from v3:
 - don't use fixed types when not needed, use encoded value also in
   arch_domain so rename sve_vl_bits in sve_vl. (Jan)
 - rename domainconfig_decode_vl to sve_decode_vl because it will now
   be used also to decode the arch_domain value
 - change sve_vl from uint16_t to uint8_t and move it after "type" field
   to optimize space.
Changes from v2:
 - rename the field in xen_arch_domainconfig from "sve_vl_bits" to
   "sve_vl" and use the implicit padding after gic_version to
   store it; this field now holds VL/128. (Jan)
 - Created domainconfig_decode_vl() function to decode the sve_vl
   field and use it as plain bits value inside arch_domain.
 - Changed commit message reflecting the changes
Changes from v1:
 - no changes
Changes from RFC:
 - restore zcr_el2 in sve_restore_state, which will be introduced
   later in this series, so remove zcr_el2 related code from this
   patch and move everything to the later patch (Julien)
 - add explicit padding into struct xen_arch_domainconfig (Julien)
 - Don't silently lower the vector length; just fail to create the
   domain. (Julien)
---
 xen/arch/arm/arm64/sve.c             | 12 ++++++++++++
 xen/arch/arm/domain.c                | 27 +++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/sve.h | 12 ++++++++++++
 xen/arch/arm/include/asm/domain.h    |  5 +++++
 xen/include/public/arch-arm.h        |  2 ++
 xen/include/public/domctl.h          |  2 +-
 6 files changed, 59 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index 6f3fb368c59b..78f7482619da 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -6,6 +6,7 @@
  */
 
 #include <xen/types.h>
+#include <asm/cpufeature.h>
 #include <asm/arm64/sve.h>
 #include <asm/arm64/sysregs.h>
 #include <asm/processor.h>
@@ -48,3 +49,14 @@ register_t vl_to_zcr(unsigned int vl)
     ASSERT(vl > 0);
     return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
 }
+
+/* Get the system sanitized value for VL in bits */
+unsigned int get_sys_vl_len(void)
+{
+    if ( !cpu_has_sve )
+        return 0;
+
+    /* ZCR_ELx len field is ((len+1) * 128) = vector bits length */
+    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
+            SVE_VL_MULTIPLE_VAL;
+}
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index adb6ace2e24d..769fae8fe25e 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -13,6 +13,7 @@
 #include <xen/wait.h>
 
 #include <asm/alternative.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpuerrata.h>
 #include <asm/cpufeature.h>
 #include <asm/current.h>
@@ -550,6 +551,8 @@ int arch_vcpu_create(struct vcpu *v)
     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
     v->arch.cptr_el2 = get_default_cptr_flags();
+    if ( is_sve_domain(v->domain) )
+        v->arch.cptr_el2 &= ~HCPTR_CP(8);
 
     v->arch.hcr_el2 = get_default_hcr_flags();
 
@@ -594,6 +597,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
     unsigned int max_vcpus;
     unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
     unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
+    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
 
     if ( (config->flags & ~flags_optional) != flags_required )
     {
@@ -602,6 +606,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }
 
+    /* Check feature flags */
+    if ( sve_vl_bits > 0 )
+    {
+        unsigned int zcr_max_bits = get_sys_vl_len();
+
+        if ( !zcr_max_bits )
+        {
+            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
+            return -EINVAL;
+        }
+
+        if ( sve_vl_bits > zcr_max_bits )
+        {
+            dprintk(XENLOG_INFO,
+                    "Requested SVE vector length (%u) > supported length (%u)\n",
+                    sve_vl_bits, zcr_max_bits);
+            return -EINVAL;
+        }
+    }
+
     /* The P2M table must always be shared between the CPU and the IOMMU */
     if ( config->iommu_opts & XEN_DOMCTL_IOMMU_no_sharept )
     {
@@ -744,6 +768,9 @@ int arch_domain_create(struct domain *d,
     if ( (rc = domain_vpci_init(d)) != 0 )
         goto fail;
 
+    /* Copy the encoded vector length sve_vl from the domain configuration */
+    d->arch.sve_vl = config->arch.sve_vl;
+
     return 0;
 
 fail:
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index 144d2b1cc485..a4c53e3e8e2e 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -13,10 +13,17 @@
 /* Vector length must be multiple of 128 */
 #define SVE_VL_MULTIPLE_VAL (128U)
 
+static inline unsigned int sve_decode_vl(unsigned int sve_vl)
+{
+    /* SVE vector length is stored as VL/128 in xen_arch_domainconfig */
+    return sve_vl * SVE_VL_MULTIPLE_VAL;
+}
+
 #ifdef CONFIG_ARM64_SVE
 
 register_t compute_max_zcr(void);
 register_t vl_to_zcr(unsigned int vl);
+unsigned int get_sys_vl_len(void);
 
 #else /* !CONFIG_ARM64_SVE */
 
@@ -30,6 +37,11 @@ static inline register_t vl_to_zcr(unsigned int vl)
     return 0;
 }
 
+static inline unsigned int get_sys_vl_len(void)
+{
+    return 0;
+}
+
 #endif /* CONFIG_ARM64_SVE */
 
 #endif /* _ARM_ARM64_SVE_H */
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index e776ee704b7d..78cc2da3d4e5 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -31,6 +31,8 @@ enum domain_type {
 
 #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
 
+#define is_sve_domain(d) ((d)->arch.sve_vl > 0)
+
 /*
  * Is the domain using the host memory layout?
  *
@@ -67,6 +69,9 @@ struct arch_domain
     enum domain_type type;
 #endif
 
+    /* max SVE encoded vector length */
+    uint8_t sve_vl;
+
     /* Virtual MMU */
     struct p2m_domain p2m;
 
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 1528ced5097a..38311f559581 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -300,6 +300,8 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
 struct xen_arch_domainconfig {
     /* IN/OUT */
     uint8_t gic_version;
+    /* IN - Contains SVE vector length divided by 128 */
+    uint8_t sve_vl;
     /* IN */
     uint16_t tee_type;
     /* IN */
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 529801c89ba3..e2e22cb534d6 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -21,7 +21,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000016
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:49:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:49:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520091.807340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX6y-0004st-78; Wed, 12 Apr 2023 09:49:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520091.807340; Wed, 12 Apr 2023 09:49:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX6y-0004qd-36; Wed, 12 Apr 2023 09:49:56 +0000
Received: by outflank-mailman (input) for mailman id 520091;
 Wed, 12 Apr 2023 09:49:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX6w-0004lO-Aw
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:49:54 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 5df35070-d917-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 11:49:50 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 452C01684;
 Wed, 12 Apr 2023 02:50:35 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D2B3B3F587;
 Wed, 12 Apr 2023 02:49:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5df35070-d917-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 01/12] xen/arm: enable SVE extension for Xen
Date: Wed, 12 Apr 2023 10:49:27 +0100
Message-Id: <20230412094938.2693890-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Enable Xen to handle the SVE extension: add code in the cpufeature
module to handle the ZCR SVE register, and disable trapping of the
SVE feature on system boot only while SVE resources are accessed.
While there, correct the coding style of the comment on coprocessor
trapping.

cptr_el2 is now part of the domain context and is restored on
context switch. This is a preparation for saving the SVE context,
which will be part of the VFP operations, so restore it before the
call that saves the VFP registers.
To save an additional isb barrier, restore cptr_el2 before an
existing isb barrier and move the call that saves the VFP context
after that barrier.

Change the Kconfig entry to make the ARM64_SVE symbol selectable;
by default it is not selected.

Create the sve module and sve-asm.S, which contains assembly
routines for the SVE feature. This code is inspired by Linux and
uses explicit instruction encodings to stay compatible with
assemblers that do not support SVE.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v4:
 - don't use fixed types in vl_to_zcr, forgot to address that in
   v3, by mistake I changed that in patch 2, fixing now (Jan)
Changes from v3:
 - no changes
Changes from v2:
 - renamed sve_asm.S to sve-asm.S; new file names should not contain
   underscores (Jan)
Changes from v1:
 - Add an assert to vl_to_zcr(); it is never called with vl == 0, but
   this makes sure it won't be in the future.
Changes from RFC:
 - Moved restoring of cptr before an existing barrier (Julien)
 - Marked the feature as unsupported for now (Julien)
 - Trap and un-trap only when using SVE resources in
   compute_max_zcr() (Julien)
---
 xen/arch/arm/Kconfig                     | 10 +++--
 xen/arch/arm/arm64/Makefile              |  1 +
 xen/arch/arm/arm64/cpufeature.c          |  7 ++--
 xen/arch/arm/arm64/sve-asm.S             | 48 +++++++++++++++++++++++
 xen/arch/arm/arm64/sve.c                 | 50 ++++++++++++++++++++++++
 xen/arch/arm/cpufeature.c                |  6 ++-
 xen/arch/arm/domain.c                    |  9 +++--
 xen/arch/arm/include/asm/arm64/sve.h     | 43 ++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/sysregs.h |  1 +
 xen/arch/arm/include/asm/cpufeature.h    | 14 +++++++
 xen/arch/arm/include/asm/domain.h        |  1 +
 xen/arch/arm/include/asm/processor.h     |  2 +
 xen/arch/arm/setup.c                     |  5 ++-
 xen/arch/arm/traps.c                     | 28 +++++++------
 14 files changed, 201 insertions(+), 24 deletions(-)
 create mode 100644 xen/arch/arm/arm64/sve-asm.S
 create mode 100644 xen/arch/arm/arm64/sve.c
 create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c7f..41f45d8d1203 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -112,11 +112,15 @@ config ARM64_PTR_AUTH
 	  This feature is not supported in Xen.
 
 config ARM64_SVE
-	def_bool n
+	bool "Enable Scalar Vector Extension support (UNSUPPORTED)" if UNSUPPORTED
 	depends on ARM_64
 	help
-	  Scalar Vector Extension support.
-	  This feature is not supported in Xen.
+	  Scalar Vector Extension (SVE/SVE2) support for guests.
+
+	  Please be aware that currently, enabling this feature adds latency to
+	  VM context switches involving SVE-enabled guests (between two SVE
+	  guests, or between an SVE guest and a non-SVE guest, in either
+	  direction), compared to switching between non-SVE guests.
 
 config ARM64_MTE
 	def_bool n
diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 6d507da0d44d..24e08fd42596 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -12,6 +12,7 @@ obj-y += insn.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += smc.o
 obj-y += smpboot.o
+obj-$(CONFIG_ARM64_SVE) += sve.o sve-asm.o
 obj-y += traps.o
 obj-y += vfp.o
 obj-y += vsysreg.o
diff --git a/xen/arch/arm/arm64/cpufeature.c b/xen/arch/arm/arm64/cpufeature.c
index d9039d37b2d1..b4656ff4d80f 100644
--- a/xen/arch/arm/arm64/cpufeature.c
+++ b/xen/arch/arm/arm64/cpufeature.c
@@ -455,15 +455,11 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
 	ARM64_FTR_END,
 };
 
-#if 0
-/* TODO: use this to sanitize SVE once we support it */
-
 static const struct arm64_ftr_bits ftr_zcr[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
 		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
 	ARM64_FTR_END,
 };
-#endif
 
 /*
  * Common ftr bits for a 32bit register with all hidden, strict
@@ -603,6 +599,9 @@ void update_system_features(const struct cpuinfo_arm *new)
 
 	SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
 
+	if ( cpu_has_sve )
+		SANITIZE_REG(zcr64, 0, zcr);
+
 	/*
 	 * Comment from Linux:
 	 * Userspace may perform DC ZVA instructions. Mismatched block sizes
diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
new file mode 100644
index 000000000000..4d1549344733
--- /dev/null
+++ b/xen/arch/arm/arm64/sve-asm.S
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Arm SVE assembly routines
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ *
+ * Some macros and instruction encoding in this file are taken from linux 6.1.1,
+ * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
+ * version.
+ */
+
+/* Sanity-check macros to help avoid encoding garbage instructions */
+
+.macro _check_general_reg nr
+    .if (\nr) < 0 || (\nr) > 30
+        .error "Bad register number \nr."
+    .endif
+.endm
+
+.macro _check_num n, min, max
+    .if (\n) < (\min) || (\n) > (\max)
+        .error "Number \n out of range [\min,\max]"
+    .endif
+.endm
+
+/* SVE instruction encodings for non-SVE-capable assemblers */
+/* (pre binutils 2.28, all kernel capable clang versions support SVE) */
+
+/* RDVL X\nx, #\imm */
+.macro _sve_rdvl nx, imm
+    _check_general_reg \nx
+    _check_num (\imm), -0x20, 0x1f
+    .inst 0x04bf5000                \
+        | (\nx)                     \
+        | (((\imm) & 0x3f) << 5)
+.endm
+
+/* Gets the current vector register size in bytes */
+GLOBAL(sve_get_hw_vl)
+    _sve_rdvl 0, 1
+    ret
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
new file mode 100644
index 000000000000..6f3fb368c59b
--- /dev/null
+++ b/xen/arch/arm/arm64/sve.c
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#include <xen/types.h>
+#include <asm/arm64/sve.h>
+#include <asm/arm64/sysregs.h>
+#include <asm/processor.h>
+#include <asm/system.h>
+
+extern unsigned int sve_get_hw_vl(void);
+
+register_t compute_max_zcr(void)
+{
+    register_t cptr_bits = get_default_cptr_flags();
+    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
+    unsigned int hw_vl;
+
+    /* Remove trap for SVE resources */
+    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
+    isb();
+
+    /*
+     * Set the maximum SVE vector length, doing that we will know the VL
+     * supported by the platform, calling sve_get_hw_vl()
+     */
+    WRITE_SYSREG(zcr, ZCR_EL2);
+
+    /*
+     * Read the maximum VL, which could be lower than what we imposed before,
+     * hw_vl contains VL in bytes, multiply it by 8 to use vl_to_zcr() later
+     */
+    hw_vl = sve_get_hw_vl() * 8U;
+
+    /* Restore CPTR_EL2 */
+    WRITE_SYSREG(cptr_bits, CPTR_EL2);
+    isb();
+
+    return vl_to_zcr(hw_vl);
+}
+
+/* Takes a vector length in bits and returns the ZCR_ELx encoding */
+register_t vl_to_zcr(unsigned int vl)
+{
+    ASSERT(vl > 0);
+    return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
+}
diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index c4ec38bb2554..83b84368f6d5 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -9,6 +9,7 @@
 #include <xen/init.h>
 #include <xen/smp.h>
 #include <xen/stop_machine.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 
 DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
@@ -143,6 +144,9 @@ void identify_cpu(struct cpuinfo_arm *c)
 
     c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
 
+    if ( cpu_has_sve )
+        c->zcr64.bits[0] = compute_max_zcr();
+
     c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
 
     c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
@@ -199,7 +203,7 @@ static int __init create_guest_cpuinfo(void)
     guest_cpuinfo.pfr64.mpam = 0;
     guest_cpuinfo.pfr64.mpam_frac = 0;
 
-    /* Hide SVE as Xen does not support it */
+    /* Hide SVE by default to the guests */
     guest_cpuinfo.pfr64.sve = 0;
     guest_cpuinfo.zfr64.bits[0] = 0;
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 99577adb6c69..adb6ace2e24d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -181,9 +181,6 @@ static void ctxt_switch_to(struct vcpu *n)
     /* VGIC */
     gic_restore_state(n);
 
-    /* VFP */
-    vfp_restore_state(n);
-
     /* XXX MPU */
 
     /* Fault Status */
@@ -234,6 +231,7 @@ static void ctxt_switch_to(struct vcpu *n)
     p2m_restore_state(n);
 
     /* Control Registers */
+    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
     WRITE_SYSREG(n->arch.cpacr, CPACR_EL1);
 
     /*
@@ -258,6 +256,9 @@ static void ctxt_switch_to(struct vcpu *n)
 #endif
     isb();
 
+    /* VFP */
+    vfp_restore_state(n);
+
     /* CP 15 */
     WRITE_SYSREG(n->arch.csselr, CSSELR_EL1);
 
@@ -548,6 +549,8 @@ int arch_vcpu_create(struct vcpu *v)
 
     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
+    v->arch.cptr_el2 = get_default_cptr_flags();
+
     v->arch.hcr_el2 = get_default_hcr_flags();
 
     v->arch.mdcr_el2 = HDCR_TDRA | HDCR_TDOSA | HDCR_TDA;
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
new file mode 100644
index 000000000000..144d2b1cc485
--- /dev/null
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#ifndef _ARM_ARM64_SVE_H
+#define _ARM_ARM64_SVE_H
+
+#define SVE_VL_MAX_BITS (2048U)
+
+/* Vector length must be multiple of 128 */
+#define SVE_VL_MULTIPLE_VAL (128U)
+
+#ifdef CONFIG_ARM64_SVE
+
+register_t compute_max_zcr(void);
+register_t vl_to_zcr(unsigned int vl);
+
+#else /* !CONFIG_ARM64_SVE */
+
+static inline register_t compute_max_zcr(void)
+{
+    return 0;
+}
+
+static inline register_t vl_to_zcr(unsigned int vl)
+{
+    return 0;
+}
+
+#endif /* CONFIG_ARM64_SVE */
+
+#endif /* _ARM_ARM64_SVE_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 463899951414..4cabb9eb4d5e 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -24,6 +24,7 @@
 #define ICH_EISR_EL2              S3_4_C12_C11_3
 #define ICH_ELSR_EL2              S3_4_C12_C11_5
 #define ICH_VMCR_EL2              S3_4_C12_C11_7
+#define ZCR_EL2                   S3_4_C1_C2_0
 
 #define __LR0_EL2(x)              S3_4_C12_C12_ ## x
 #define __LR8_EL2(x)              S3_4_C12_C13_ ## x
diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
index c62cf6293fd6..6d703e051906 100644
--- a/xen/arch/arm/include/asm/cpufeature.h
+++ b/xen/arch/arm/include/asm/cpufeature.h
@@ -32,6 +32,12 @@
 #define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
 #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
 
+#ifdef CONFIG_ARM64_SVE
+#define cpu_has_sve       (boot_cpu_feature64(sve) == 1)
+#else
+#define cpu_has_sve       (0)
+#endif
+
 #ifdef CONFIG_ARM_32
 #define cpu_has_gicv3     (boot_cpu_feature32(gic) >= 1)
 #define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
@@ -323,6 +329,14 @@ struct cpuinfo_arm {
         };
     } isa64;
 
+    union {
+        register_t bits[1];
+        struct {
+            unsigned long len:4;
+            unsigned long __res0:60;
+        };
+    } zcr64;
+
     struct {
         register_t bits[1];
     } zfr64;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 2a51f0ca688e..e776ee704b7d 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -190,6 +190,7 @@ struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+    register_t cptr_el2;
     register_t hcr_el2;
     register_t mdcr_el2;
 
diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index 54f253087718..bc683334125c 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -582,6 +582,8 @@ void do_trap_guest_serror(struct cpu_user_regs *regs);
 
 register_t get_default_hcr_flags(void);
 
+register_t get_default_cptr_flags(void);
+
 /*
  * Synchronize SError unless the feature is selected.
  * This is relying on the SErrors are currently unmasked.
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f26f67b90e3..5459cc4f5e62 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -135,10 +135,11 @@ static void __init processor_id(void)
            cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
            cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
            cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
-    printk("    Extensions:%s%s%s\n",
+    printk("    Extensions:%s%s%s%s\n",
            cpu_has_fp ? " FloatingPoint" : "",
            cpu_has_simd ? " AdvancedSIMD" : "",
-           cpu_has_gicv3 ? " GICv3-SysReg" : "");
+           cpu_has_gicv3 ? " GICv3-SysReg" : "",
+           cpu_has_sve ? " SVE" : "");
 
     /* Warn user if we find unknown floating-point features */
     if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 061c92acbd68..a78a99ddadd0 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -93,6 +93,21 @@ register_t get_default_hcr_flags(void)
              HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
 }
 
+register_t get_default_cptr_flags(void)
+{
+    /*
+     * Trap all coprocessor registers (0-13) except cp10 and
+     * cp11 for VFP.
+     *
+     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
+     *
+     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
+     * RES1, i.e. they would trap whether we did this write or not.
+     */
+    return  ((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
+             HCPTR_TTA | HCPTR_TAM);
+}
+
 static enum {
     SERRORS_DIVERSE,
     SERRORS_PANIC,
@@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
 
 void init_traps(void)
 {
+    register_t cptr_bits = get_default_cptr_flags();
     /*
      * Setup Hyp vector base. Note they might get updated with the
      * branch predictor hardening.
@@ -135,17 +151,7 @@ void init_traps(void)
     /* Trap CP15 c15 used for implementation defined registers */
     WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
 
-    /* Trap all coprocessor registers (0-13) except cp10 and
-     * cp11 for VFP.
-     *
-     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
-     *
-     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
-     * RES1, i.e. they would trap whether we did this write or not.
-     */
-    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
-                 HCPTR_TTA | HCPTR_TAM,
-                 CPTR_EL2);
+    WRITE_SYSREG(cptr_bits, CPTR_EL2);
 
     /*
      * Configure HCR_EL2 with the bare minimum to run Xen until a guest
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:49:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:49:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520089.807332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX6x-0004lm-Ln; Wed, 12 Apr 2023 09:49:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520089.807332; Wed, 12 Apr 2023 09:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX6x-0004lf-Iy; Wed, 12 Apr 2023 09:49:55 +0000
Received: by outflank-mailman (input) for mailman id 520089;
 Wed, 12 Apr 2023 09:49:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX6v-0004lO-L7
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:49:53 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 5d4f5d4b-d917-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 11:49:49 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 11FF5C14;
 Wed, 12 Apr 2023 02:50:34 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 53B1F3F587;
 Wed, 12 Apr 2023 02:49:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d4f5d4b-d917-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH v5 00/12] SVE feature for arm guests
Date: Wed, 12 Apr 2023 10:49:26 +0100
Message-Id: <20230412094938.2693890-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series introduces the possibility for Dom0 and DomU guests to use
SVE/SVE2 instructions.

The SVE feature introduces new instructions and registers to improve the
performance of floating-point operations.

The SVE feature is advertised through the SVE field of the ID_AA64PFR0_EL1
register and, when available, the ID_AA64ZFR0_EL1 register provides additional
information about the implemented version and other SVE features.

New registers added by the SVE feature are Z0-Z31, P0-P15, FFR, ZCR_ELx.

Z0-Z31 are scalable vector registers whose size is implementation defined,
ranging from 128 bits up to a maximum of 2048; the term vector length will be
used to refer to this quantity.
P0-P15 are predicate registers whose size is the vector length divided by 8;
the FFR (First Fault Register) has the same size.
ZCR_ELx is a register that can control and restrict the maximum vector length
used by the <x> exception level and all the lower exception levels, so for
example EL3 can restrict the vector length usable by EL3, EL2, EL1 and EL0.

The platform has a maximum implemented vector length, so whenever a value
above the implemented length is written to the ZCR register, the lower
(implemented) value is used instead. The RDVL instruction can be used to check
which vector length the hardware is actually using after setting ZCR.

For an SVE guest, the V0-V31 registers overlap the low 128 bits of Z0-Z31, so
there is no need to save them separately: saving Z0-Z31 implicitly saves
V0-V31 as well.

SVE usage can be trapped using a flag in CPTR_EL2, hence in this series the
register is added to the domain state, to be able to trap only the guests
that are not allowed to use SVE.

This series introduces a command line parameter to enable Dom0 to use SVE and
to set its maximum vector length, which by default is 0, meaning the guest is
not allowed to use SVE. Values from 128 to 2048 mean the guest can use SVE
with the selected value as the maximum allowed vector length (which could be
lower if the implemented one is lower).
For DomUs, an xl parameter with the same semantics is introduced and a
dom0less DTB binding is created.

The context switch is the most critical part because there can be large
registers to save; in this series a simple approach is used and the context
is saved/restored every time for the guests that are allowed to use SVE.

Luca Fancellu (12):
  xen/arm: enable SVE extension for Xen
  xen/arm: add SVE vector length field to the domain
  xen/arm: Expose SVE feature to the guest
  xen/arm: add SVE exception class handling
  arm/sve: save/restore SVE context switch
  xen/common: add dom0 xen command line argument for Arm
  xen: enable Dom0 to use SVE feature
  xen/physinfo: encode Arm SVE vector length in arch_capabilities
  tools: add physinfo arch_capabilities handling for Arm
  xen/tools: add sve parameter in XL configuration
  xen/arm: add sve property for dom0less domUs
  xen/changelog: Add SVE and "dom0" options to the changelog for Arm

 CHANGELOG.md                                  |   5 +
 docs/man/xl.cfg.5.pod.in                      |  15 ++
 docs/misc/arm/device-tree/booting.txt         |  11 +
 docs/misc/xen-command-line.pandoc             |  18 +-
 tools/golang/xenlight/helpers.gen.go          |   4 +
 tools/golang/xenlight/types.gen.go            |  24 +++
 tools/include/libxl.h                         |  11 +
 .../include/xen-tools/arm-arch-capabilities.h |  28 +++
 tools/include/xen-tools/common-macros.h       |   2 +
 tools/libs/light/libxl.c                      |   1 +
 tools/libs/light/libxl_arm.c                  |  28 +++
 tools/libs/light/libxl_internal.h             |   1 -
 tools/libs/light/libxl_types.idl              |  23 +++
 tools/ocaml/libs/xc/xenctrl.ml                |   4 +-
 tools/ocaml/libs/xc/xenctrl.mli               |   4 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c           |   8 +-
 tools/python/xen/lowlevel/xc/xc.c             |   8 +-
 tools/xl/xl_info.c                            |   8 +
 tools/xl/xl_parse.c                           |   8 +
 xen/arch/arm/Kconfig                          |  10 +-
 xen/arch/arm/arm64/Makefile                   |   1 +
 xen/arch/arm/arm64/cpufeature.c               |   7 +-
 xen/arch/arm/arm64/sve-asm.S                  | 189 ++++++++++++++++++
 xen/arch/arm/arm64/sve.c                      | 141 +++++++++++++
 xen/arch/arm/arm64/vfp.c                      |  79 ++++----
 xen/arch/arm/arm64/vsysreg.c                  |  39 +++-
 xen/arch/arm/cpufeature.c                     |   6 +-
 xen/arch/arm/domain.c                         |  43 +++-
 xen/arch/arm/domain_build.c                   |  56 ++++++
 xen/arch/arm/include/asm/arm64/sve.h          |  83 ++++++++
 xen/arch/arm/include/asm/arm64/sysregs.h      |   4 +
 xen/arch/arm/include/asm/arm64/vfp.h          |  10 +
 xen/arch/arm/include/asm/cpufeature.h         |  14 ++
 xen/arch/arm/include/asm/domain.h             |   8 +
 xen/arch/arm/include/asm/processor.h          |   3 +
 xen/arch/arm/setup.c                          |   5 +-
 xen/arch/arm/sysctl.c                         |   4 +
 xen/arch/arm/traps.c                          |  40 +++-
 xen/arch/x86/dom0_build.c                     |  48 ++---
 xen/common/domain.c                           |  23 +++
 xen/common/kernel.c                           |  25 +++
 xen/include/public/arch-arm.h                 |   2 +
 xen/include/public/domctl.h                   |   2 +-
 xen/include/public/sysctl.h                   |   4 +
 xen/include/xen/domain.h                      |   1 +
 xen/include/xen/lib.h                         |  10 +
 46 files changed, 963 insertions(+), 105 deletions(-)
 create mode 100644 tools/include/xen-tools/arm-arch-capabilities.h
 create mode 100644 xen/arch/arm/arm64/sve-asm.S
 create mode 100644 xen/arch/arm/arm64/sve.c
 create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:49:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:49:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520092.807360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX6z-0005UZ-Lx; Wed, 12 Apr 2023 09:49:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520092.807360; Wed, 12 Apr 2023 09:49:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX6z-0005TL-Hq; Wed, 12 Apr 2023 09:49:57 +0000
Received: by outflank-mailman (input) for mailman id 520092;
 Wed, 12 Apr 2023 09:49:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX6x-0004lO-NT
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:49:55 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 5f8292ed-d917-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 11:49:53 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E05321691;
 Wed, 12 Apr 2023 02:50:37 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 944A23F587;
 Wed, 12 Apr 2023 02:49:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f8292ed-d917-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 03/12] xen/arm: Expose SVE feature to the guest
Date: Wed, 12 Apr 2023 10:49:29 +0100
Message-Id: <20230412094938.2693890-4-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a guest is allowed to use SVE, expose the SVE features through
the identification registers.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v4:
 - no changes
Changes from v3:
 - no changes
Changes from v2:
 - no changes
Changes from v1:
 - No changes
Changes from RFC:
 - No changes
---
 xen/arch/arm/arm64/vsysreg.c | 39 ++++++++++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 758750983c11..10048bb4d221 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -18,6 +18,7 @@
 
 #include <xen/sched.h>
 
+#include <asm/arm64/cpufeature.h>
 #include <asm/current.h>
 #include <asm/regs.h>
 #include <asm/traps.h>
@@ -295,7 +296,28 @@ void do_sysreg(struct cpu_user_regs *regs,
     GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
     GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
     GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
-    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
+
+    case HSR_SYSREG_ID_AA64PFR0_EL1:
+    {
+        register_t guest_reg_value = guest_cpuinfo.pfr64.bits[0];
+
+        if ( is_sve_domain(v->domain) )
+        {
+            /* 4 is the SVE field width in id_aa64pfr0_el1 */
+            uint64_t mask = GENMASK(ID_AA64PFR0_SVE_SHIFT + 4 - 1,
+                                    ID_AA64PFR0_SVE_SHIFT);
+            /* sysval is the sve field on the system */
+            uint64_t sysval = cpuid_feature_extract_unsigned_field_width(
+                                system_cpuinfo.pfr64.bits[0],
+                                ID_AA64PFR0_SVE_SHIFT, 4);
+            guest_reg_value &= ~mask;
+            guest_reg_value |= (sysval << ID_AA64PFR0_SVE_SHIFT) & mask;
+        }
+
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
+                                  guest_reg_value);
+    }
+
     GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
     GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
     GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
@@ -306,7 +328,20 @@ void do_sysreg(struct cpu_user_regs *regs,
     GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
     GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
     GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
-    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
+
+    case HSR_SYSREG_ID_AA64ZFR0_EL1:
+    {
+        /*
+         * When the guest has the SVE feature enabled, the whole id_aa64zfr0_el1
+         * needs to be exposed.
+         */
+        register_t guest_reg_value = guest_cpuinfo.zfr64.bits[0];
+        if ( is_sve_domain(v->domain) )
+            guest_reg_value = system_cpuinfo.zfr64.bits[0];
+
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
+                                  guest_reg_value);
+    }
 
     /*
      * Those cases are catching all Reserved registers trapped by TID3 which
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:49:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:49:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520093.807367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX70-0005ZH-3l; Wed, 12 Apr 2023 09:49:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520093.807367; Wed, 12 Apr 2023 09:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX6z-0005YF-TV; Wed, 12 Apr 2023 09:49:57 +0000
Received: by outflank-mailman (input) for mailman id 520093;
 Wed, 12 Apr 2023 09:49:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX6x-0004lU-Mb
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:49:55 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 60c4e9c5-d917-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 11:49:55 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 03FA21684;
 Wed, 12 Apr 2023 02:50:39 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id AC3433F587;
 Wed, 12 Apr 2023 02:49:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60c4e9c5-d917-11ed-b21e-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 04/12] xen/arm: add SVE exception class handling
Date: Wed, 12 Apr 2023 10:49:30 +0100
Message-Id: <20230412094938.2693890-5-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SVE has a new exception class with code 0x19; introduce the new code
and handle the exception.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v4:
 - No changes
Changes from v3:
 - No changes
Changes from v2:
 - No changes
Changes from v1:
 - No changes
Changes from RFC:
 - No changes
---
 xen/arch/arm/include/asm/processor.h |  1 +
 xen/arch/arm/traps.c                 | 12 ++++++++++++
 2 files changed, 13 insertions(+)

diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index bc683334125c..7e42ff8811fc 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -426,6 +426,7 @@
 #define HSR_EC_HVC64                0x16
 #define HSR_EC_SMC64                0x17
 #define HSR_EC_SYSREG               0x18
+#define HSR_EC_SVE                  0x19
 #endif
 #define HSR_EC_INSTR_ABORT_LOWER_EL 0x20
 #define HSR_EC_INSTR_ABORT_CURR_EL  0x21
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index a78a99ddadd0..c2e30feafd5a 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2160,6 +2160,13 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
         perfc_incr(trap_sysreg);
         do_sysreg(regs, hsr);
         break;
+    case HSR_EC_SVE:
+        GUEST_BUG_ON(regs_mode_is_32bit(regs));
+        gprintk(XENLOG_WARNING,
+                "Domain id %d tried to use SVE while not allowed\n",
+                current->domain->domain_id);
+        inject_undef_exception(regs, hsr);
+        break;
 #endif
 
     case HSR_EC_INSTR_ABORT_LOWER_EL:
@@ -2189,6 +2196,11 @@ void do_trap_hyp_sync(struct cpu_user_regs *regs)
     case HSR_EC_BRK:
         do_trap_brk(regs, hsr);
         break;
+    case HSR_EC_SVE:
+        /* An SVE exception is a bug somewhere in hypervisor code */
+        printk("SVE trap at EL2.\n");
+        do_unexpected_trap("Hypervisor", regs);
+        break;
 #endif
     case HSR_EC_DATA_ABORT_CURR_EL:
     case HSR_EC_INSTR_ABORT_CURR_EL:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:49:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:49:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520094.807380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX71-00061A-Bf; Wed, 12 Apr 2023 09:49:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520094.807380; Wed, 12 Apr 2023 09:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX71-0005zh-7j; Wed, 12 Apr 2023 09:49:59 +0000
Received: by outflank-mailman (input) for mailman id 520094;
 Wed, 12 Apr 2023 09:49:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX6z-0004lU-Ic
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:49:57 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 6181944e-d917-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 11:49:56 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 379A31691;
 Wed, 12 Apr 2023 02:50:40 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C4ED53F587;
 Wed, 12 Apr 2023 02:49:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6181944e-d917-11ed-b21e-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Date: Wed, 12 Apr 2023 10:49:31 +0100
Message-Id: <20230412094938.2693890-6-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Implement save/restore context switching for SVE: allocate memory to hold
the Z0-Z31 registers, whose length is at most 2048 bits each, and FFR,
which can be at most 256 bits. The amount of memory allocated depends on
the domain's vector length and on how many bits are supported by the
platform.

Save P0-P15, whose length is at most 256 bits each, into the fpregs field
of struct vfp_state: since V0-V31 are part of Z0-Z31, that space would
otherwise be unused for an SVE domain.

Create zcr_el{1,2} fields in arch_vcpu; initialise zcr_el2 on vcpu
creation from the requested vector length and restore it on context
switch, and save/restore the ZCR_EL1 value as well.

Remove headers from sve.c that are already included via
xen/sched.h.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v4:
 - No changes
Changes from v3:
 - don't use fixed len types when not needed (Jan)
 - now VL is an encoded value, decode it before using.
Changes from v2:
 - No changes
Changes from v1:
 - No changes
Changes from RFC:
 - Moved zcr_el2 field introduction in this patch, restore its
   content inside sve_restore_state function. (Julien)
---
 xen/arch/arm/arm64/sve-asm.S             | 141 +++++++++++++++++++++++
 xen/arch/arm/arm64/sve.c                 |  68 ++++++++++-
 xen/arch/arm/arm64/vfp.c                 |  79 +++++++------
 xen/arch/arm/domain.c                    |   7 ++
 xen/arch/arm/include/asm/arm64/sve.h     |  13 +++
 xen/arch/arm/include/asm/arm64/sysregs.h |   3 +
 xen/arch/arm/include/asm/arm64/vfp.h     |  10 ++
 xen/arch/arm/include/asm/domain.h        |   2 +
 8 files changed, 284 insertions(+), 39 deletions(-)

diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
index 4d1549344733..8c37d7bc95d5 100644
--- a/xen/arch/arm/arm64/sve-asm.S
+++ b/xen/arch/arm/arm64/sve-asm.S
@@ -17,6 +17,18 @@
     .endif
 .endm
 
+.macro _sve_check_zreg znr
+    .if (\znr) < 0 || (\znr) > 31
+        .error "Bad Scalable Vector Extension vector register number \znr."
+    .endif
+.endm
+
+.macro _sve_check_preg pnr
+    .if (\pnr) < 0 || (\pnr) > 15
+        .error "Bad Scalable Vector Extension predicate register number \pnr."
+    .endif
+.endm
+
 .macro _check_num n, min, max
     .if (\n) < (\min) || (\n) > (\max)
         .error "Number \n out of range [\min,\max]"
@@ -26,6 +38,54 @@
 /* SVE instruction encodings for non-SVE-capable assemblers */
 /* (pre binutils 2.28, all kernel capable clang versions support SVE) */
 
+/* STR (vector): STR Z\nz, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_str_v nz, nxbase, offset=0
+    _sve_check_zreg \nz
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0xe5804000                \
+        | (\nz)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* LDR (vector): LDR Z\nz, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_ldr_v nz, nxbase, offset=0
+    _sve_check_zreg \nz
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0x85804000                \
+        | (\nz)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* STR (predicate): STR P\np, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_str_p np, nxbase, offset=0
+    _sve_check_preg \np
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0xe5800000                \
+        | (\np)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* LDR (predicate): LDR P\np, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_ldr_p np, nxbase, offset=0
+    _sve_check_preg \np
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0x85800000                \
+        | (\np)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
 /* RDVL X\nx, #\imm */
 .macro _sve_rdvl nx, imm
     _check_general_reg \nx
@@ -35,11 +95,92 @@
         | (((\imm) & 0x3f) << 5)
 .endm
 
+/* RDFFR (unpredicated): RDFFR P\np.B */
+.macro _sve_rdffr np
+    _sve_check_preg \np
+    .inst 0x2519f000                \
+        | (\np)
+.endm
+
+/* WRFFR P\np.B */
+.macro _sve_wrffr np
+    _sve_check_preg \np
+    .inst 0x25289000                \
+        | ((\np) << 5)
+.endm
+
+.macro __for from:req, to:req
+    .if (\from) == (\to)
+        _for__body %\from
+    .else
+        __for %\from, %((\from) + ((\to) - (\from)) / 2)
+        __for %((\from) + ((\to) - (\from)) / 2 + 1), %\to
+    .endif
+.endm
+
+.macro _for var:req, from:req, to:req, insn:vararg
+    .macro _for__body \var:req
+        .noaltmacro
+        \insn
+        .altmacro
+    .endm
+
+    .altmacro
+    __for \from, \to
+    .noaltmacro
+
+    .purgem _for__body
+.endm
+
+.macro sve_save nxzffrctx, nxpctx, save_ffr
+    _for n, 0, 31, _sve_str_v \n, \nxzffrctx, \n - 32
+    _for n, 0, 15, _sve_str_p \n, \nxpctx, \n
+        cbz \save_ffr, 1f
+        _sve_rdffr 0
+        _sve_str_p 0, \nxzffrctx
+        _sve_ldr_p 0, \nxpctx
+        b 2f
+1:
+        str xzr, [x\nxzffrctx]      // Zero out FFR
+2:
+.endm
+
+.macro sve_load nxzffrctx, nxpctx, restore_ffr
+    _for n, 0, 31, _sve_ldr_v \n, \nxzffrctx, \n - 32
+        cbz \restore_ffr, 1f
+        _sve_ldr_p 0, \nxzffrctx
+        _sve_wrffr 0
+1:
+    _for n, 0, 15, _sve_ldr_p \n, \nxpctx, \n
+.endm
+
 /* Gets the current vector register size in bytes */
 GLOBAL(sve_get_hw_vl)
     _sve_rdvl 0, 1
     ret
 
+/*
+ * Save the SVE context
+ *
+ * x0 - pointer to buffer for Z0-31 + FFR
+ * x1 - pointer to buffer for P0-15
+ * x2 - Save FFR if non-zero
+ */
+GLOBAL(sve_save_ctx)
+    sve_save 0, 1, x2
+    ret
+
+/*
+ * Load the SVE context
+ *
+ * x0 - pointer to buffer for Z0-31 + FFR
+ * x1 - pointer to buffer for P0-15
+ * x2 - Restore FFR if non-zero
+ */
+GLOBAL(sve_load_ctx)
+    sve_load 0, 1, x2
+    ret
+
 /*
  * Local variables:
  * mode: ASM
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index 78f7482619da..5485648850a0 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -5,14 +5,29 @@
  * Copyright (C) 2022 ARM Ltd.
  */
 
-#include <xen/types.h>
-#include <asm/cpufeature.h>
+#include <xen/sched.h>
+#include <xen/sizes.h>
 #include <asm/arm64/sve.h>
-#include <asm/arm64/sysregs.h>
-#include <asm/processor.h>
-#include <asm/system.h>
 
 extern unsigned int sve_get_hw_vl(void);
+extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
+extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
+                         int restore_ffr);
+
+static inline unsigned int sve_zreg_ctx_size(unsigned int vl)
+{
+    /*
+     * Z0-31 registers size in bytes is computed from VL that is in bits, so VL
+     * in bytes is VL/8.
+     */
+    return (vl / 8U) * 32U;
+}
+
+static inline unsigned int sve_ffrreg_ctx_size(unsigned int vl)
+{
+    /* FFR register size is VL/8, which is in bytes (VL/8)/8 */
+    return (vl / 64U);
+}
 
 register_t compute_max_zcr(void)
 {
@@ -60,3 +75,46 @@ unsigned int get_sys_vl_len(void)
     return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
             SVE_VL_MULTIPLE_VAL;
 }
+
+int sve_context_init(struct vcpu *v)
+{
+    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
+    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
+                             sve_ffrreg_ctx_size(sve_vl_bits),
+                             L1_CACHE_BYTES);
+
+    if ( !ctx )
+        return -ENOMEM;
+
+    v->arch.vfp.sve_context = ctx;
+
+    return 0;
+}
+
+void sve_context_free(struct vcpu *v)
+{
+    xfree(v->arch.vfp.sve_context);
+}
+
+void sve_save_state(struct vcpu *v)
+{
+    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
+    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
+            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
+
+    v->arch.zcr_el1 = READ_SYSREG(ZCR_EL1);
+
+    sve_save_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
+}
+
+void sve_restore_state(struct vcpu *v)
+{
+    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
+    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
+            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
+
+    WRITE_SYSREG(v->arch.zcr_el1, ZCR_EL1);
+    WRITE_SYSREG(v->arch.zcr_el2, ZCR_EL2);
+
+    sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
+}
diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
index 47885e76baae..2d0d7c2e6ddb 100644
--- a/xen/arch/arm/arm64/vfp.c
+++ b/xen/arch/arm/arm64/vfp.c
@@ -2,29 +2,35 @@
 #include <asm/processor.h>
 #include <asm/cpufeature.h>
 #include <asm/vfp.h>
+#include <asm/arm64/sve.h>
 
 void vfp_save_state(struct vcpu *v)
 {
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
-                 "stp q2, q3, [%1, #16 * 2]\n\t"
-                 "stp q4, q5, [%1, #16 * 4]\n\t"
-                 "stp q6, q7, [%1, #16 * 6]\n\t"
-                 "stp q8, q9, [%1, #16 * 8]\n\t"
-                 "stp q10, q11, [%1, #16 * 10]\n\t"
-                 "stp q12, q13, [%1, #16 * 12]\n\t"
-                 "stp q14, q15, [%1, #16 * 14]\n\t"
-                 "stp q16, q17, [%1, #16 * 16]\n\t"
-                 "stp q18, q19, [%1, #16 * 18]\n\t"
-                 "stp q20, q21, [%1, #16 * 20]\n\t"
-                 "stp q22, q23, [%1, #16 * 22]\n\t"
-                 "stp q24, q25, [%1, #16 * 24]\n\t"
-                 "stp q26, q27, [%1, #16 * 26]\n\t"
-                 "stp q28, q29, [%1, #16 * 28]\n\t"
-                 "stp q30, q31, [%1, #16 * 30]\n\t"
-                 : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
+    if ( is_sve_domain(v->domain) )
+        sve_save_state(v);
+    else
+    {
+        asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
+                     "stp q2, q3, [%1, #16 * 2]\n\t"
+                     "stp q4, q5, [%1, #16 * 4]\n\t"
+                     "stp q6, q7, [%1, #16 * 6]\n\t"
+                     "stp q8, q9, [%1, #16 * 8]\n\t"
+                     "stp q10, q11, [%1, #16 * 10]\n\t"
+                     "stp q12, q13, [%1, #16 * 12]\n\t"
+                     "stp q14, q15, [%1, #16 * 14]\n\t"
+                     "stp q16, q17, [%1, #16 * 16]\n\t"
+                     "stp q18, q19, [%1, #16 * 18]\n\t"
+                     "stp q20, q21, [%1, #16 * 20]\n\t"
+                     "stp q22, q23, [%1, #16 * 22]\n\t"
+                     "stp q24, q25, [%1, #16 * 24]\n\t"
+                     "stp q26, q27, [%1, #16 * 26]\n\t"
+                     "stp q28, q29, [%1, #16 * 28]\n\t"
+                     "stp q30, q31, [%1, #16 * 30]\n\t"
+                     : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
+    }
 
     v->arch.vfp.fpsr = READ_SYSREG(FPSR);
     v->arch.vfp.fpcr = READ_SYSREG(FPCR);
@@ -37,23 +43,28 @@ void vfp_restore_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
-                 "ldp q2, q3, [%1, #16 * 2]\n\t"
-                 "ldp q4, q5, [%1, #16 * 4]\n\t"
-                 "ldp q6, q7, [%1, #16 * 6]\n\t"
-                 "ldp q8, q9, [%1, #16 * 8]\n\t"
-                 "ldp q10, q11, [%1, #16 * 10]\n\t"
-                 "ldp q12, q13, [%1, #16 * 12]\n\t"
-                 "ldp q14, q15, [%1, #16 * 14]\n\t"
-                 "ldp q16, q17, [%1, #16 * 16]\n\t"
-                 "ldp q18, q19, [%1, #16 * 18]\n\t"
-                 "ldp q20, q21, [%1, #16 * 20]\n\t"
-                 "ldp q22, q23, [%1, #16 * 22]\n\t"
-                 "ldp q24, q25, [%1, #16 * 24]\n\t"
-                 "ldp q26, q27, [%1, #16 * 26]\n\t"
-                 "ldp q28, q29, [%1, #16 * 28]\n\t"
-                 "ldp q30, q31, [%1, #16 * 30]\n\t"
-                 : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
+    if ( is_sve_domain(v->domain) )
+        sve_restore_state(v);
+    else
+    {
+        asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
+                     "ldp q2, q3, [%1, #16 * 2]\n\t"
+                     "ldp q4, q5, [%1, #16 * 4]\n\t"
+                     "ldp q6, q7, [%1, #16 * 6]\n\t"
+                     "ldp q8, q9, [%1, #16 * 8]\n\t"
+                     "ldp q10, q11, [%1, #16 * 10]\n\t"
+                     "ldp q12, q13, [%1, #16 * 12]\n\t"
+                     "ldp q14, q15, [%1, #16 * 14]\n\t"
+                     "ldp q16, q17, [%1, #16 * 16]\n\t"
+                     "ldp q18, q19, [%1, #16 * 18]\n\t"
+                     "ldp q20, q21, [%1, #16 * 20]\n\t"
+                     "ldp q22, q23, [%1, #16 * 22]\n\t"
+                     "ldp q24, q25, [%1, #16 * 24]\n\t"
+                     "ldp q26, q27, [%1, #16 * 26]\n\t"
+                     "ldp q28, q29, [%1, #16 * 28]\n\t"
+                     "ldp q30, q31, [%1, #16 * 30]\n\t"
+                     : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
+    }
 
     WRITE_SYSREG(v->arch.vfp.fpsr, FPSR);
     WRITE_SYSREG(v->arch.vfp.fpcr, FPCR);
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 769fae8fe25e..060fc30bbb5d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -552,7 +552,12 @@ int arch_vcpu_create(struct vcpu *v)
 
     v->arch.cptr_el2 = get_default_cptr_flags();
     if ( is_sve_domain(v->domain) )
+    {
+        if ( (rc = sve_context_init(v)) != 0 )
+            goto fail;
         v->arch.cptr_el2 &= ~HCPTR_CP(8);
+        v->arch.zcr_el2 = vl_to_zcr(sve_decode_vl(v->domain->arch.sve_vl));
+    }
 
     v->arch.hcr_el2 = get_default_hcr_flags();
 
@@ -582,6 +587,8 @@ fail:
 
 void arch_vcpu_destroy(struct vcpu *v)
 {
+    if ( is_sve_domain(v->domain) )
+        sve_context_free(v);
     vcpu_timer_destroy(v);
     vcpu_vgic_free(v);
     free_xenheap_pages(v->arch.stack, STACK_ORDER);
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index a4c53e3e8e2e..fc162c9d2cf7 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -24,6 +24,10 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
 register_t compute_max_zcr(void);
 register_t vl_to_zcr(unsigned int vl);
 unsigned int get_sys_vl_len(void);
+int sve_context_init(struct vcpu *v);
+void sve_context_free(struct vcpu *v);
+void sve_save_state(struct vcpu *v);
+void sve_restore_state(struct vcpu *v);
 
 #else /* !CONFIG_ARM64_SVE */
 
@@ -42,6 +46,15 @@ static inline unsigned int get_sys_vl_len(void)
     return 0;
 }
 
+static inline int sve_context_init(struct vcpu *v)
+{
+    return 0;
+}
+
+static inline void sve_context_free(struct vcpu *v) {}
+static inline void sve_save_state(struct vcpu *v) {}
+static inline void sve_restore_state(struct vcpu *v) {}
+
 #endif /* CONFIG_ARM64_SVE */
 
 #endif /* _ARM_ARM64_SVE_H */
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 4cabb9eb4d5e..3fdeb9d8cdef 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -88,6 +88,9 @@
 #ifndef ID_AA64ISAR2_EL1
 #define ID_AA64ISAR2_EL1            S3_0_C0_C6_2
 #endif
+#ifndef ZCR_EL1
+#define ZCR_EL1                     S3_0_C1_C2_0
+#endif
 
 /* ID registers (imported from arm64/include/asm/sysreg.h in Linux) */
 
diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
index e6e8c363bc16..8af714cb8ecc 100644
--- a/xen/arch/arm/include/asm/arm64/vfp.h
+++ b/xen/arch/arm/include/asm/arm64/vfp.h
@@ -6,7 +6,17 @@
 
 struct vfp_state
 {
+    /*
+     * When SVE is enabled for the guest, fpregs memory will be used to
+     * save/restore P0-P15 registers, otherwise it will be used for the V0-V31
+     * registers.
+     */
     uint64_t fpregs[64] __vfp_aligned;
+    /*
+     * When SVE is enabled for the guest, sve_context contains memory to
+     * save/restore Z0-Z31 registers and FFR.
+     */
+    uint64_t *sve_context;
     register_t fpcr;
     register_t fpexc32_el2;
     register_t fpsr;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 78cc2da3d4e5..6b5ec3bd0680 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -195,6 +195,8 @@ struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+    register_t zcr_el1;
+    register_t zcr_el2;
     register_t cptr_el2;
     register_t hcr_el2;
     register_t mdcr_el2;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:50:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:50:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520095.807391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX73-0006Qu-OQ; Wed, 12 Apr 2023 09:50:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520095.807391; Wed, 12 Apr 2023 09:50:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX73-0006QJ-KH; Wed, 12 Apr 2023 09:50:01 +0000
Received: by outflank-mailman (input) for mailman id 520095;
 Wed, 12 Apr 2023 09:49:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX71-0004lO-R7
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:49:59 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 61d7f23c-d917-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 11:49:57 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D61281684;
 Wed, 12 Apr 2023 02:50:41 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 04F2F3F587;
 Wed, 12 Apr 2023 02:49:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61d7f23c-d917-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 06/12] xen/common: add dom0 xen command line argument for Arm
Date: Wed, 12 Apr 2023 10:49:32 +0100
Message-Id: <20230412094938.2693890-7-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently x86 defines a Xen command line argument dom0=<list> through
which dom0 controlling sub-options can be specified. To use it also
on Arm, move the code that loops through the list of arguments from
x86 to common code, and from there call architecture-specific
functions to handle the comma-separated sub-options.

No functional changes are intended.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes from v4:
 - return EINVAL in Arm implementation of parse_arch_dom0_param,
   shorten variable names in the function from str_begin, str_end to
   s, e. Removed variable rc from x86 parse_arch_dom0_param
   implementation. (Jan)
 - Add R-By Jan
Changes from v3:
 - new patch
---
 xen/arch/arm/domain_build.c |  5 ++++
 xen/arch/x86/dom0_build.c   | 48 ++++++++++++++-----------------------
 xen/common/domain.c         | 23 ++++++++++++++++++
 xen/include/xen/domain.h    |  1 +
 4 files changed, 47 insertions(+), 30 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 4f9d4f9d8867..eeb4662f0eee 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -59,6 +59,11 @@ static int __init parse_dom0_mem(const char *s)
 }
 custom_param("dom0_mem", parse_dom0_mem);
 
+int __init parse_arch_dom0_param(const char *s, const char *e)
+{
+    return -EINVAL;
+}
+
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 79234f18ff01..9f5300a3efbb 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -266,42 +266,30 @@ bool __initdata opt_dom0_pvh = !IS_ENABLED(CONFIG_PV);
 bool __initdata opt_dom0_verbose = IS_ENABLED(CONFIG_VERBOSE_DEBUG);
 bool __initdata opt_dom0_msr_relaxed;
 
-static int __init cf_check parse_dom0_param(const char *s)
+int __init parse_arch_dom0_param(const char *s, const char *e)
 {
-    const char *ss;
-    int rc = 0;
+    int val;
 
-    do {
-        int val;
-
-        ss = strchr(s, ',');
-        if ( !ss )
-            ss = strchr(s, '\0');
-
-        if ( IS_ENABLED(CONFIG_PV) && !cmdline_strcmp(s, "pv") )
-            opt_dom0_pvh = false;
-        else if ( IS_ENABLED(CONFIG_HVM) && !cmdline_strcmp(s, "pvh") )
-            opt_dom0_pvh = true;
+    if ( IS_ENABLED(CONFIG_PV) && !cmdline_strcmp(s, "pv") )
+        opt_dom0_pvh = false;
+    else if ( IS_ENABLED(CONFIG_HVM) && !cmdline_strcmp(s, "pvh") )
+        opt_dom0_pvh = true;
 #ifdef CONFIG_SHADOW_PAGING
-        else if ( (val = parse_boolean("shadow", s, ss)) >= 0 )
-            opt_dom0_shadow = val;
+    else if ( (val = parse_boolean("shadow", s, e)) >= 0 )
+        opt_dom0_shadow = val;
 #endif
-        else if ( (val = parse_boolean("verbose", s, ss)) >= 0 )
-            opt_dom0_verbose = val;
-        else if ( IS_ENABLED(CONFIG_PV) &&
-                  (val = parse_boolean("cpuid-faulting", s, ss)) >= 0 )
-            opt_dom0_cpuid_faulting = val;
-        else if ( (val = parse_boolean("msr-relaxed", s, ss)) >= 0 )
-            opt_dom0_msr_relaxed = val;
-        else
-            rc = -EINVAL;
-
-        s = ss + 1;
-    } while ( *ss );
+    else if ( (val = parse_boolean("verbose", s, e)) >= 0 )
+        opt_dom0_verbose = val;
+    else if ( IS_ENABLED(CONFIG_PV) &&
+              (val = parse_boolean("cpuid-faulting", s, e)) >= 0 )
+        opt_dom0_cpuid_faulting = val;
+    else if ( (val = parse_boolean("msr-relaxed", s, e)) >= 0 )
+        opt_dom0_msr_relaxed = val;
+    else
+        return -EINVAL;
 
-    return rc;
+    return 0;
 }
-custom_param("dom0", parse_dom0_param);
 
 static char __initdata opt_dom0_ioports_disable[200] = "";
 string_param("dom0_ioports_disable", opt_dom0_ioports_disable);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 626debbae095..7779ba088675 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -364,6 +364,29 @@ static int __init cf_check parse_extra_guest_irqs(const char *s)
 }
 custom_param("extra_guest_irqs", parse_extra_guest_irqs);
 
+static int __init cf_check parse_dom0_param(const char *s)
+{
+    const char *ss;
+    int rc = 0;
+
+    do {
+        int ret;
+
+        ss = strchr(s, ',');
+        if ( !ss )
+            ss = strchr(s, '\0');
+
+        ret = parse_arch_dom0_param(s, ss);
+        if ( ret && !rc )
+            rc = ret;
+
+        s = ss + 1;
+    } while ( *ss );
+
+    return rc;
+}
+custom_param("dom0", parse_dom0_param);
+
 /*
  * Release resources held by a domain.  There may or may not be live
  * references to the domain, and it may or may not be fully constructed.
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 26f9c4f6dd5b..1df8f933d076 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -16,6 +16,7 @@ typedef union {
 struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id);
 
 unsigned int dom0_max_vcpus(void);
+int parse_arch_dom0_param(const char *s, const char *e);
 struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
 
 int vcpu_reset(struct vcpu *);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:50:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:50:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520096.807397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX74-0006UM-AW; Wed, 12 Apr 2023 09:50:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520096.807397; Wed, 12 Apr 2023 09:50:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX74-0006TF-0D; Wed, 12 Apr 2023 09:50:02 +0000
Received: by outflank-mailman (input) for mailman id 520096;
 Wed, 12 Apr 2023 09:50:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX72-0004lU-CY
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:50:00 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 635adcb8-d917-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 11:49:59 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 64ED8C14;
 Wed, 12 Apr 2023 02:50:43 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A2CE13F587;
 Wed, 12 Apr 2023 02:49:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 635adcb8-d917-11ed-b21e-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Date: Wed, 12 Apr 2023 10:49:33 +0100
Message-Id: <20230412094938.2693890-8-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a command line parameter to allow Dom0 to use SVE resources:
the parameter sve=<integer>, a sub-argument of dom0=, controls the
feature for this domain and sets the maximum SVE vector length
for Dom0.

Add a new function, parse_signed_integer(), to parse an integer
command line argument.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v4:
 - Negative values as user param means max supported HW VL (Jan)
 - update documentation, make use of no_config_param(), rename
   parse_integer into parse_signed_integer and take long long *,
   also put a comment on the -2 return condition, update
   declaration comment to reflect the modifications (Jan)
Changes from v3:
 - Don't use fixed len types when not needed (Jan)
 - renamed domainconfig_encode_vl to sve_encode_vl
 - Use a sub argument of dom0= to enable the feature (Jan)
 - Add parse_integer() function
Changes from v2:
 - xen_domctl_createdomain field has changed into sve_vl and its
   value now is the VL / 128, create an helper function for that.
Changes from v1:
 - No changes
Changes from RFC:
 - Changed docs to explain that the domain won't be created if the
   requested vector length is above the supported one from the
   platform.
---
 docs/misc/xen-command-line.pandoc    | 18 ++++++++++++++++--
 xen/arch/arm/arm64/sve.c             | 21 +++++++++++++++++++++
 xen/arch/arm/domain_build.c          | 27 +++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/sve.h | 15 +++++++++++++++
 xen/common/kernel.c                  | 25 +++++++++++++++++++++++++
 xen/include/xen/lib.h                | 10 ++++++++++
 6 files changed, 114 insertions(+), 2 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e0b89b7d3319..9c0790ce6c7c 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -777,9 +777,9 @@ Specify the bit width of the DMA heap.
 
 ### dom0
     = List of [ pv | pvh, shadow=<bool>, verbose=<bool>,
-                cpuid-faulting=<bool>, msr-relaxed=<bool> ]
+                cpuid-faulting=<bool>, msr-relaxed=<bool> ] (x86)
 
-    Applicability: x86
+    = List of [ sve=<integer> ] (Arm)
 
 Controls for how dom0 is constructed on x86 systems.
 
@@ -838,6 +838,20 @@ Controls for how dom0 is constructed on x86 systems.
 
     If using this option is necessary to fix an issue, please report a bug.
 
+Enables features on dom0 on Arm systems.
+
+*   The `sve` integer parameter enables Arm SVE usage for the Dom0 domain and
+    sets the maximum SVE vector length.
+    A value equal to 0 disables the feature; this is the default value.
+    A value below 0 makes the feature use the maximum SVE vector length
+    supported by the hardware; please be aware that if the hardware doesn't
+    support SVE, the feature remains disabled.
+    A value above 0 explicitly sets the maximum SVE vector length for Dom0;
+    allowed values range from 128 to 2048 and must be multiples of 128.
+    Please note that if the value explicitly specified by the user is above
+    the maximum SVE vector length supported by the hardware, domain creation
+    will fail and the system will stop.
+
 ### dom0-cpuid
     = List of comma separated booleans
 
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index 5485648850a0..ad5db62e1805 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -9,6 +9,9 @@
 #include <xen/sizes.h>
 #include <asm/arm64/sve.h>
 
+/* opt_dom0_sve: allow Dom0 to use SVE and set maximum vector length. */
+int __initdata opt_dom0_sve;
+
 extern unsigned int sve_get_hw_vl(void);
 extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
 extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
@@ -118,3 +121,21 @@ void sve_restore_state(struct vcpu *v)
 
     sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
 }
+
+int __init sve_sanitize_vl_param(int val, unsigned int *out)
+{
+    /*
+     * A negative SVE parameter value means to use the maximum supported
+     * vector length; otherwise, if a positive value is provided, check
+     * that the vector length is a multiple of 128 and does not exceed
+     * the maximum value of 2048.
+     */
+    if ( val < 0 )
+        *out = get_sys_vl_len();
+    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
+        *out = val;
+    else
+        return -1;
+
+    return 0;
+}
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index eeb4662f0eee..3f30ef5c37b6 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -26,6 +26,7 @@
 #include <asm/platform.h>
 #include <asm/psci.h>
 #include <asm/setup.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 #include <asm/domain_build.h>
 #include <xen/event.h>
@@ -61,6 +62,21 @@ custom_param("dom0_mem", parse_dom0_mem);
 
 int __init parse_arch_dom0_param(const char *s, const char *e)
 {
+    long long val;
+
+    if ( !parse_signed_integer("sve", s, e, &val) )
+    {
+#ifdef CONFIG_ARM64_SVE
+        if ( (val >= INT_MIN) && (val <= INT_MAX) )
+            opt_dom0_sve = val;
+        else
+            printk(XENLOG_INFO "'sve=%lld' value out of range!\n", val);
+#else
+        no_config_param("ARM64_SVE", "sve", s, e);
+#endif
+        return 0;
+    }
+
     return -EINVAL;
 }
 
@@ -4109,6 +4125,17 @@ void __init create_dom0(void)
     if ( iommu_enabled )
         dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu;
 
+    if ( opt_dom0_sve )
+    {
+        unsigned int vl;
+
+        if ( !sve_sanitize_vl_param(opt_dom0_sve, &vl) )
+            dom0_cfg.arch.sve_vl = sve_encode_vl(vl);
+        else
+            printk(XENLOG_WARNING
+                   "SVE vector length error, disable feature for Dom0\n");
+    }
+
     dom0 = domain_create(0, &dom0_cfg, CDF_privileged | CDF_directmap);
     if ( IS_ERR(dom0) )
         panic("Error creating domain 0 (rc = %ld)\n", PTR_ERR(dom0));
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index fc162c9d2cf7..f1801876b5de 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -19,8 +19,15 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
     return sve_vl * SVE_VL_MULTIPLE_VAL;
 }
 
+static inline unsigned int sve_encode_vl(unsigned int sve_vl_bits)
+{
+    return sve_vl_bits / SVE_VL_MULTIPLE_VAL;
+}
+
 #ifdef CONFIG_ARM64_SVE
 
+extern int opt_dom0_sve;
+
 register_t compute_max_zcr(void);
 register_t vl_to_zcr(unsigned int vl);
 unsigned int get_sys_vl_len(void);
@@ -28,9 +35,12 @@ int sve_context_init(struct vcpu *v);
 void sve_context_free(struct vcpu *v);
 void sve_save_state(struct vcpu *v);
 void sve_restore_state(struct vcpu *v);
+int sve_sanitize_vl_param(int val, unsigned int *out);
 
 #else /* !CONFIG_ARM64_SVE */
 
+#define opt_dom0_sve (0)
+
 static inline register_t compute_max_zcr(void)
 {
     return 0;
@@ -55,6 +65,11 @@ static inline void sve_context_free(struct vcpu *v) {}
 static inline void sve_save_state(struct vcpu *v) {}
 static inline void sve_restore_state(struct vcpu *v) {}
 
+static inline int sve_sanitize_vl_param(int val, unsigned int *out)
+{
+    return -1;
+}
+
 #endif /* CONFIG_ARM64_SVE */
 
 #endif /* _ARM_ARM64_SVE_H */
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index f7b1f65f373c..29d05282c8bb 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -314,6 +314,31 @@ int parse_boolean(const char *name, const char *s, const char *e)
     return -1;
 }
 
+int __init parse_signed_integer(const char *name, const char *s, const char *e,
+                                long long *val)
+{
+    size_t slen, nlen;
+    const char *str;
+    long long pval;
+
+    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
+    nlen = strlen(name);
+
+    /* Does s start with name or contains only the name? */
+    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
+        return -1;
+
+    pval = simple_strtoll(&s[nlen + 1], &str, 0);
+
+    /* Number not recognised */
+    if ( str != e )
+        return -2;
+
+    *val = pval;
+
+    return 0;
+}
+
 int cmdline_strcmp(const char *frag, const char *name)
 {
     for ( ; ; frag++, name++ )
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index e914ccade095..5343ee7a944a 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -94,6 +94,16 @@ int parse_bool(const char *s, const char *e);
  */
 int parse_boolean(const char *name, const char *s, const char *e);
 
+/**
+ * Given a specific name, parses a string of the form:
+ *   $NAME=<integer number>
+ * returning 0 and a value in val, for a recognised integer.
+ * Returns -1 if the name is not found or on general errors, or -2 if the
+ * name is found but the number is not recognised.
+ */
+int parse_signed_integer(const char *name, const char *s, const char *e,
+                         long long *val);
+
 /**
  * Very similar to strcmp(), but will declare a match if the NUL in 'name'
  * lines up with comma, colon, semicolon or equals in 'frag'.  Designed for
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:50:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:50:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520097.807406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX75-0006jy-F4; Wed, 12 Apr 2023 09:50:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520097.807406; Wed, 12 Apr 2023 09:50:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX75-0006h2-1d; Wed, 12 Apr 2023 09:50:03 +0000
Received: by outflank-mailman (input) for mailman id 520097;
 Wed, 12 Apr 2023 09:50:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX73-0004lU-Jj
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:50:01 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 6448c5f3-d917-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 11:50:01 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E7FF41684;
 Wed, 12 Apr 2023 02:50:44 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 31C993F587;
 Wed, 12 Apr 2023 02:49:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6448c5f3-d917-11ed-b21e-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 08/12] xen/physinfo: encode Arm SVE vector length in arch_capabilities
Date: Wed, 12 Apr 2023 10:49:34 +0100
Message-Id: <20230412094938.2693890-9-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the Arm platform supports SVE, advertise the feature in the
arch_capabilities field of struct xen_sysctl_physinfo by encoding
the SVE vector length in it.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v4:
 - Write arch_capabilities from arch_do_physinfo instead of using
   stub functions (Jan)
Changes from v3:
 - domainconfig_encode_vl is now named sve_encode_vl
Changes from v2:
 - Remove XEN_SYSCTL_PHYSCAP_ARM_SVE_SHFT, use MASK_INSR and
   protect with ifdef XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK (Jan)
 - Use the helper function sve_arch_cap_physinfo to encode
   the VL into physinfo arch_capabilities field.
Changes from v1:
 - Use only arch_capabilities and some defines to encode SVE VL
   (Bertrand, Stefano, Jan)
Changes from RFC:
 - new patch
---
 xen/arch/arm/sysctl.c       | 4 ++++
 xen/include/public/sysctl.h | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index b0a78a8b10d0..e9a0661146e4 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -11,11 +11,15 @@
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <xen/hypercall.h>
+#include <asm/arm64/sve.h>
 #include <public/sysctl.h>
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
 {
     pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm | XEN_SYSCTL_PHYSCAP_hap;
+
+    pi->arch_capabilities |= MASK_INSR(sve_encode_vl(get_sys_vl_len()),
+                                       XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK);
 }
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 2b24d6bfd00e..9d06e92d0f6a 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -94,6 +94,10 @@ struct xen_sysctl_tbuf_op {
 /* Max XEN_SYSCTL_PHYSCAP_* constant.  Used for ABI checking. */
 #define XEN_SYSCTL_PHYSCAP_MAX XEN_SYSCTL_PHYSCAP_gnttab_v2
 
+#if defined(__arm__) || defined(__aarch64__)
+#define XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK  (0x1FU)
+#endif
+
 struct xen_sysctl_physinfo {
     uint32_t threads_per_core;
     uint32_t cores_per_socket;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:50:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:50:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520098.807421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX77-0007S3-M2; Wed, 12 Apr 2023 09:50:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520098.807421; Wed, 12 Apr 2023 09:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX77-0007RI-Gw; Wed, 12 Apr 2023 09:50:05 +0000
Received: by outflank-mailman (input) for mailman id 520098;
 Wed, 12 Apr 2023 09:50:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX75-0004lU-TV
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:50:03 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 6570b679-d917-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 11:50:03 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AE35D1684;
 Wed, 12 Apr 2023 02:50:46 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B57853F587;
 Wed, 12 Apr 2023 02:50:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6570b679-d917-11ed-b21e-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Christian Lindig <christian.lindig@cloud.com>
Subject: [PATCH v5 09/12] tools: add physinfo arch_capabilities handling for Arm
Date: Wed, 12 Apr 2023 10:49:35 +0100
Message-Id: <20230412094938.2693890-10-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On Arm, the SVE vector length is encoded in the arch_capabilities
field of struct xen_sysctl_physinfo; make use of this field in the
tools when building for Arm.

Create the header arm-arch-capabilities.h to handle the
arch_capabilities field of physinfo for Arm.

Remove the include of xen-tools/common-macros.h in
python/xen/lowlevel/xc/xc.c because it is already included by the
arm-arch-capabilities.h header.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>
Acked-by: Christian Lindig <christian.lindig@cloud.com>
---
Changes from v4:
 - Move arm-arch-capabilities.h into xen-tools/, add LIBXL_HAVE_,
   fixed python return type to I instead of i. (Anthony)
Changes from v3:
 - add Ack-by for the Golang bits (George)
 - add Ack-by for the OCaml tools (Christian)
 - now xen-tools/libs.h is named xen-tools/common-macros.h
 - changed commit message to explain why the header was modified
   in python/xen/lowlevel/xc/xc.c
Changes from v2:
 - rename arm_arch_capabilities.h to arm-arch-capabilities.h, use
   MASK_EXTR.
 - arm-arch-capabilities.h now needs the MASK_EXTR macro, but it is
   defined in libxl_internal.h; it doesn't feel right to include
   that header, so move MASK_EXTR into xen-tools/libs.h, which is also
   included in libxl_internal.h
Changes from v1:
 - now SVE VL is encoded in arch_capabilities on Arm
Changes from RFC:
 - new patch
---
 tools/golang/xenlight/helpers.gen.go          |  2 ++
 tools/golang/xenlight/types.gen.go            |  1 +
 tools/include/libxl.h                         |  6 ++++
 .../include/xen-tools/arm-arch-capabilities.h | 28 +++++++++++++++++++
 tools/include/xen-tools/common-macros.h       |  2 ++
 tools/libs/light/libxl.c                      |  1 +
 tools/libs/light/libxl_internal.h             |  1 -
 tools/libs/light/libxl_types.idl              |  1 +
 tools/ocaml/libs/xc/xenctrl.ml                |  4 +--
 tools/ocaml/libs/xc/xenctrl.mli               |  4 +--
 tools/ocaml/libs/xc/xenctrl_stubs.c           |  8 ++++--
 tools/python/xen/lowlevel/xc/xc.c             |  8 ++++--
 tools/xl/xl_info.c                            |  8 ++++++
 13 files changed, 62 insertions(+), 12 deletions(-)
 create mode 100644 tools/include/xen-tools/arm-arch-capabilities.h

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 0a203d22321f..35397be2f9e2 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -3506,6 +3506,7 @@ x.CapVmtrace = bool(xc.cap_vmtrace)
 x.CapVpmu = bool(xc.cap_vpmu)
 x.CapGnttabV1 = bool(xc.cap_gnttab_v1)
 x.CapGnttabV2 = bool(xc.cap_gnttab_v2)
+x.ArchCapabilities = uint32(xc.arch_capabilities)
 
  return nil}
 
@@ -3540,6 +3541,7 @@ xc.cap_vmtrace = C.bool(x.CapVmtrace)
 xc.cap_vpmu = C.bool(x.CapVpmu)
 xc.cap_gnttab_v1 = C.bool(x.CapGnttabV1)
 xc.cap_gnttab_v2 = C.bool(x.CapGnttabV2)
+xc.arch_capabilities = C.uint32_t(x.ArchCapabilities)
 
  return nil
  }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index a7c17699f80e..3d968a496744 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -1079,6 +1079,7 @@ CapVmtrace bool
 CapVpmu bool
 CapGnttabV1 bool
 CapGnttabV2 bool
+ArchCapabilities uint32
 }
 
 type Connectorinfo struct {
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index cfa1a191318c..4fa09ff7635a 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -525,6 +525,12 @@
  */
 #define LIBXL_HAVE_PHYSINFO_CAP_GNTTAB 1
 
+/*
+ * LIBXL_HAVE_PHYSINFO_ARCH_CAPABILITIES indicates that libxl_physinfo has an
+ * arch_capabilities field.
+ */
+#define LIBXL_HAVE_PHYSINFO_ARCH_CAPABILITIES 1
+
 /*
  * LIBXL_HAVE_MAX_GRANT_VERSION indicates libxl_domain_build_info has a
  * max_grant_version field for setting the max grant table version per
diff --git a/tools/include/xen-tools/arm-arch-capabilities.h b/tools/include/xen-tools/arm-arch-capabilities.h
new file mode 100644
index 000000000000..ac44c8b14344
--- /dev/null
+++ b/tools/include/xen-tools/arm-arch-capabilities.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+
+#ifndef ARM_ARCH_CAPABILITIES_H
+#define ARM_ARCH_CAPABILITIES_H
+
+#include <stdint.h>
+#include <xen/sysctl.h>
+
+#include <xen-tools/common-macros.h>
+
+static inline
+unsigned int arch_capabilities_arm_sve(unsigned int arch_capabilities)
+{
+#if defined(__aarch64__)
+    unsigned int sve_vl = MASK_EXTR(arch_capabilities,
+                                    XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK);
+
+    /* Vector length is divided by 128 before storing it in arch_capabilities */
+    return sve_vl * 128U;
+#else
+    return 0;
+#endif
+}
+
+#endif /* ARM_ARCH_CAPABILITIES_H */
diff --git a/tools/include/xen-tools/common-macros.h b/tools/include/xen-tools/common-macros.h
index 76b55bf62085..d53b88182560 100644
--- a/tools/include/xen-tools/common-macros.h
+++ b/tools/include/xen-tools/common-macros.h
@@ -72,6 +72,8 @@
 #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
 #endif
 
+#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
+
 #ifndef __must_check
 #define __must_check __attribute__((__warn_unused_result__))
 #endif
diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
index a0bf7d186f69..175d6dde0b80 100644
--- a/tools/libs/light/libxl.c
+++ b/tools/libs/light/libxl.c
@@ -409,6 +409,7 @@ int libxl_get_physinfo(libxl_ctx *ctx, libxl_physinfo *physinfo)
         !!(xcphysinfo.capabilities & XEN_SYSCTL_PHYSCAP_gnttab_v1);
     physinfo->cap_gnttab_v2 =
         !!(xcphysinfo.capabilities & XEN_SYSCTL_PHYSCAP_gnttab_v2);
+    physinfo->arch_capabilities = xcphysinfo.arch_capabilities;
 
     GC_FREE;
     return 0;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 5244fde6239a..8aba3e138909 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -132,7 +132,6 @@
 
 #define DIV_ROUNDUP(n, d) (((n) + (d) - 1) / (d))
 
-#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
 #define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))
 
 #define LIBXL__LOGGING_ENABLED
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index c10292e0d7e3..fd31dacf7d5a 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -1133,6 +1133,7 @@ libxl_physinfo = Struct("physinfo", [
     ("cap_vpmu", bool),
     ("cap_gnttab_v1", bool),
     ("cap_gnttab_v2", bool),
+    ("arch_capabilities", uint32),
     ], dir=DIR_OUT)
 
 libxl_connectorinfo = Struct("connectorinfo", [
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index e4096bf92c1d..bf23ca50bb15 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -128,12 +128,10 @@ type physinfo_cap_flag =
   | CAP_Gnttab_v1
   | CAP_Gnttab_v2
 
-type arm_physinfo_cap_flag
-
 type x86_physinfo_cap_flag
 
 type arch_physinfo_cap_flags =
-  | ARM of arm_physinfo_cap_flag list
+  | ARM of int
   | X86 of x86_physinfo_cap_flag list
 
 type physinfo =
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index ef2254537430..ed1e28ea30a0 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -113,12 +113,10 @@ type physinfo_cap_flag =
   | CAP_Gnttab_v1
   | CAP_Gnttab_v2
 
-type arm_physinfo_cap_flag
-
 type x86_physinfo_cap_flag
 
 type arch_physinfo_cap_flags =
-  | ARM of arm_physinfo_cap_flag list
+  | ARM of int
   | X86 of x86_physinfo_cap_flag list
 
 type physinfo = {
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 6ec9ed6d1e6f..526a3610fa42 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -853,13 +853,15 @@ CAMLprim value stub_xc_physinfo(value xch_val)
 	arch_cap_list = Tag_cons;
 
 	arch_cap_flags_tag = 1; /* tag x86 */
-#else
-	caml_failwith("Unhandled architecture");
-#endif
 
 	arch_cap_flags = caml_alloc_small(1, arch_cap_flags_tag);
 	Store_field(arch_cap_flags, 0, arch_cap_list);
 	Store_field(physinfo, 10, arch_cap_flags);
+#elif defined(__aarch64__)
+	Store_field(physinfo, 10, Val_int(c_physinfo.arch_capabilities));
+#else
+	caml_failwith("Unhandled architecture");
+#endif
 
 	CAMLreturn(physinfo);
 }
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 35901c2d63b6..94b0354cf52f 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -22,6 +22,7 @@
 #include <xen/hvm/hvm_info_table.h>
 #include <xen/hvm/params.h>
 
+#include <xen-tools/arm-arch-capabilities.h>
 #include <xen-tools/common-macros.h>
 
 /* Needed for Python versions earlier than 2.3. */
@@ -897,7 +898,7 @@ static PyObject *pyxc_physinfo(XcObject *self)
     if ( p != virt_caps )
       *(p-1) = '\0';
 
-    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s}",
+    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s,s:I}",
                             "nr_nodes",         pinfo.nr_nodes,
                             "threads_per_core", pinfo.threads_per_core,
                             "cores_per_socket", pinfo.cores_per_socket,
@@ -907,7 +908,10 @@ static PyObject *pyxc_physinfo(XcObject *self)
                             "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
                             "cpu_khz",          pinfo.cpu_khz,
                             "hw_caps",          cpu_cap,
-                            "virt_caps",        virt_caps);
+                            "virt_caps",        virt_caps,
+                            "arm_sve_vl",
+                              arch_capabilities_arm_sve(pinfo.arch_capabilities)
+                        );
 }
 
 static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)
diff --git a/tools/xl/xl_info.c b/tools/xl/xl_info.c
index 712b7638b013..ddc42f96b979 100644
--- a/tools/xl/xl_info.c
+++ b/tools/xl/xl_info.c
@@ -27,6 +27,7 @@
 #include <libxl_json.h>
 #include <libxl_utils.h>
 #include <libxlutil.h>
+#include <xen-tools/arm-arch-capabilities.h>
 
 #include "xl.h"
 #include "xl_utils.h"
@@ -224,6 +225,13 @@ static void output_physinfo(void)
          info.cap_gnttab_v2 ? " gnttab-v2" : ""
         );
 
+    /* Print arm SVE vector length only on ARM platforms */
+#if defined(__aarch64__)
+    maybe_printf("arm_sve_vector_length  : %u\n",
+         arch_capabilities_arm_sve(info.arch_capabilities)
+        );
+#endif
+
     vinfo = libxl_get_version_info(ctx);
     if (vinfo) {
         i = (1 << 20) / vinfo->pagesize;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:50:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:50:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520099.807429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX79-0007p0-CO; Wed, 12 Apr 2023 09:50:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520099.807429; Wed, 12 Apr 2023 09:50:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX79-0007nF-4Y; Wed, 12 Apr 2023 09:50:07 +0000
Received: by outflank-mailman (input) for mailman id 520099;
 Wed, 12 Apr 2023 09:50:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX77-0004lU-7E
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:50:05 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 662b20b6-d917-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 11:50:04 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 07E0CC14;
 Wed, 12 Apr 2023 02:50:48 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7AB0F3F587;
 Wed, 12 Apr 2023 02:50:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 662b20b6-d917-11ed-b21e-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v5 10/12] xen/tools: add sve parameter in XL configuration
Date: Wed, 12 Apr 2023 10:49:36 +0100
Message-Id: <20230412094938.2693890-11-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the sve parameter to the XL configuration to allow guests to use
the SVE feature.
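
As an illustration, a guest configuration using this parameter could look
like the following sketch (values taken from the allowed set documented in
xl.cfg):

```
# cap the guest's SVE vector length at 256 bits
sve = "256"
# or defer to the maximum vector length supported by the platform:
# sve = "hw"
```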

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v4:
 - Rename sve field to sve_vl (Anthony), changed type to
   libxl_sve_type
 - Sanity check of sve field in libxl instead of xl, update docs
   (Anthony)
 - drop Ack-by from George because of the changes in the Golang bits
Changes from v3:
 - no changes
Changes from v2:
 - domain configuration field name has changed to sve_vl,
   and its value is now VL/128.
 - Add Ack-by George for the Golang bits
Changes from v1:
 - updated to use arch_capabilities field for vector length
Changes from RFC:
 - changed libxl_types.idl sve field to uint16
 - now toolstack uses info from physinfo to check against the
   sve XL value
 - Changed documentation
---
 docs/man/xl.cfg.5.pod.in             | 15 +++++++++++++++
 tools/golang/xenlight/helpers.gen.go |  2 ++
 tools/golang/xenlight/types.gen.go   | 23 +++++++++++++++++++++++
 tools/include/libxl.h                |  5 +++++
 tools/libs/light/libxl_arm.c         | 28 ++++++++++++++++++++++++++++
 tools/libs/light/libxl_types.idl     | 22 ++++++++++++++++++++++
 tools/xl/xl_parse.c                  |  8 ++++++++
 7 files changed, 103 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 10f37990be57..e5f9c76737b3 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2952,6 +2952,21 @@ Currently, only the "sbsa_uart" model is supported for ARM.
 
 =back
 
+=item B<sve="vl">
+
+Enable the use of the Scalable Vector Extension (SVE) for the guest and set
+the maximum vector length it can use, or disable SVE by specifying "disabled",
+which is the default value.
+Allowed values are "disabled", "128", "256", "384", "512", "640", "768", "896",
+"1024", "1152", "1280", "1408", "1536", "1664", "1792", "1920", "2048", "hw".
+Specifying "hw" means that the maximum vector length supported by the platform
+will be used.
+Please be aware that if a specific vector length is requested and it is above
+the maximum vector length supported by the platform, an error will be raised
+at domain creation.
+
+=back
+
 =head3 x86
 
 =over 4
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 35397be2f9e2..cd1a16e32eac 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1149,6 +1149,7 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
 x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
+x.ArchArm.SveVl = SveType(xc.arch_arm.sve_vl)
 if err := x.ArchX86.MsrRelaxed.fromC(&xc.arch_x86.msr_relaxed);err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
@@ -1653,6 +1654,7 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion)
 xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
+xc.arch_arm.sve_vl = C.libxl_sve_type(x.ArchArm.SveVl)
 if err := x.ArchX86.MsrRelaxed.toC(&xc.arch_x86.msr_relaxed); err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index 3d968a496744..adb32bfa5bf0 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -490,6 +490,28 @@ TeeTypeNone TeeType = 0
 TeeTypeOptee TeeType = 1
 )
 
+type SveType int
+const(
+SveTypeDisabled SveType = 0
+SveType128 SveType = 128
+SveType256 SveType = 256
+SveType384 SveType = 384
+SveType512 SveType = 512
+SveType640 SveType = 640
+SveType768 SveType = 768
+SveType896 SveType = 896
+SveType1024 SveType = 1024
+SveType1152 SveType = 1152
+SveType1280 SveType = 1280
+SveType1408 SveType = 1408
+SveType1536 SveType = 1536
+SveType1664 SveType = 1664
+SveType1792 SveType = 1792
+SveType1920 SveType = 1920
+SveType2048 SveType = 2048
+SveTypeHw SveType = -1
+)
+
 type RdmReserve struct {
 Strategy RdmReserveStrategy
 Policy RdmReservePolicy
@@ -564,6 +586,7 @@ TypeUnion DomainBuildInfoTypeUnion
 ArchArm struct {
 GicVersion GicVersion
 Vuart VuartType
+SveVl SveType
 }
 ArchX86 struct {
 MsrRelaxed Defbool
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 4fa09ff7635a..cac641a7eba2 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -283,6 +283,11 @@
  */
 #define LIBXL_HAVE_BUILDINFO_ARCH_ARM_TEE 1
 
+/*
+ * libxl_domain_build_info has the arch_arm.sve_vl field.
+ */
+#define LIBXL_HAVE_BUILDINFO_ARCH_ARM_SVE_VL 1
+
 /*
  * LIBXL_HAVE_SOFT_RESET indicates that libxl supports performing
  * 'soft reset' for domains and there is 'soft_reset' shutdown reason
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index ddc7b2a15975..1e69dac2c4fa 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -3,6 +3,8 @@
 #include "libxl_libfdt_compat.h"
 #include "libxl_arm.h"
 
+#include <xen-tools/arm-arch-capabilities.h>
+
 #include <stdbool.h>
 #include <libfdt.h>
 #include <assert.h>
@@ -211,6 +213,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
+    /* Parameter is sanitised in libxl__arch_domain_build_info_setdefault */
+    if (d_config->b_info.arch_arm.sve_vl) {
+        /* Vector length is divided by 128 in struct xen_domctl_createdomain */
+        config->arch.sve_vl = d_config->b_info.arch_arm.sve_vl / 128U;
+    }
+
     return 0;
 }
 
@@ -1681,6 +1689,26 @@ int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
     /* ACPI is disabled by default */
     libxl_defbool_setdefault(&b_info->acpi, false);
 
+    /* Sanitise SVE parameter */
+    if (b_info->arch_arm.sve_vl) {
+        unsigned int max_sve_vl =
+            arch_capabilities_arm_sve(physinfo->arch_capabilities);
+
+        if (!max_sve_vl) {
+            LOG(ERROR, "SVE is unsupported on this machine.");
+            return ERROR_FAIL;
+        }
+
+        if (LIBXL_SVE_TYPE_HW == b_info->arch_arm.sve_vl) {
+            b_info->arch_arm.sve_vl = max_sve_vl;
+        } else if (b_info->arch_arm.sve_vl > max_sve_vl) {
+            LOG(ERROR,
+                "Invalid sve value: %d. Platform supports up to %u bits",
+                b_info->arch_arm.sve_vl, max_sve_vl);
+            return ERROR_FAIL;
+        }
+    }
+
     if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
         return 0;
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index fd31dacf7d5a..9e48bb772646 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -523,6 +523,27 @@ libxl_tee_type = Enumeration("tee_type", [
     (1, "optee")
     ], init_val = "LIBXL_TEE_TYPE_NONE")
 
+libxl_sve_type = Enumeration("sve_type", [
+    (-1, "hw"),
+    (0, "disabled"),
+    (128, "128"),
+    (256, "256"),
+    (384, "384"),
+    (512, "512"),
+    (640, "640"),
+    (768, "768"),
+    (896, "896"),
+    (1024, "1024"),
+    (1152, "1152"),
+    (1280, "1280"),
+    (1408, "1408"),
+    (1536, "1536"),
+    (1664, "1664"),
+    (1792, "1792"),
+    (1920, "1920"),
+    (2048, "2048")
+    ], init_val = "LIBXL_SVE_TYPE_DISABLED")
+
 libxl_rdm_reserve = Struct("rdm_reserve", [
     ("strategy",    libxl_rdm_reserve_strategy),
     ("policy",      libxl_rdm_reserve_policy),
@@ -690,6 +711,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
                                ("vuart", libxl_vuart_type),
+                               ("sve_vl", libxl_sve_type),
                               ])),
     ("arch_x86", Struct(None, [("msr_relaxed", libxl_defbool),
                               ])),
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 1f6f47daf4e1..f036e56fc239 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2887,6 +2887,14 @@ skip_usbdev:
         }
     }
 
+    if (!xlu_cfg_get_string (config, "sve", &buf, 1)) {
+        e = libxl_sve_type_from_string(buf, &b_info->arch_arm.sve_vl);
+        if (e) {
+            fprintf(stderr, "Unknown sve \"%s\" specified\n", buf);
+            exit(EXIT_FAILURE);
+        }
+    }
+
     parse_vkb_list(config, d_config);
 
     d_config->virtios = NULL;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:50:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:50:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520100.807435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX79-0007u6-TN; Wed, 12 Apr 2023 09:50:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520100.807435; Wed, 12 Apr 2023 09:50:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX79-0007sX-MM; Wed, 12 Apr 2023 09:50:07 +0000
Received: by outflank-mailman (input) for mailman id 520100;
 Wed, 12 Apr 2023 09:50:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX77-0004lU-OR
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:50:05 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 66aff3eb-d917-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 11:50:05 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 202D6168F;
 Wed, 12 Apr 2023 02:50:49 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C7E793F587;
 Wed, 12 Apr 2023 02:50:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66aff3eb-d917-11ed-b21e-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 11/12] xen/arm: add sve property for dom0less domUs
Date: Wed, 12 Apr 2023 10:49:37 +0100
Message-Id: <20230412094938.2693890-12-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a device tree property to the dom0less domU configuration
to enable the guest to use SVE.
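
As an illustration, a dom0less domU node using the new property might look
like the following sketch (the node name and the other properties are
placeholders; omitting the value selects the platform maximum):

```
domU1 {
    compatible = "xen,domain";
    memory = <0 0x20000>;
    cpus = <1>;
    /* enable SVE and cap the guest's vector length at 512 bits */
    sve = <512>;
};
```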

Update documentation.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v4:
 - Now it is possible to specify the property "sve" for a dom0less
   device tree node without any value, meaning that the platform's
   supported VL will be used.
Changes from v3:
 - Now domainconfig_encode_vl is named sve_encode_vl
Changes from v2:
 - the xen_domctl_createdomain field name has changed to sve_vl
   and its value is now VL/128; use domainconfig_encode_vl
   to encode a plain VL in bits.
Changes from v1:
 - No changes
Changes from RFC:
 - Changed documentation
---
 docs/misc/arm/device-tree/booting.txt | 11 +++++++++++
 xen/arch/arm/domain_build.c           | 24 ++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 3879340b5e0a..f9d2ecdda48a 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -193,6 +193,17 @@ with the following properties:
     Optional. Handle to a xen,cpupool device tree node that identifies the
     cpupool where the guest will be started at boot.
 
+- sve
+
+    Optional. A number that, when above 0, enables SVE for this guest and sets
+    its maximum SVE vector length. The default value is 0, which means this
+    guest is not allowed to use SVE. The maximum allowed value is 2048, and any
+    other value must be a multiple of 128.
+    Please note that if the platform supports a smaller maximum vector length
+    than the one requested, the domain creation will fail.
+    Specifying this property with no value means that the SVE vector length
+    will be set to the maximum vector length supported by the platform.
+
 - xen,enhanced
 
     A string property. Possible property values are:
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 3f30ef5c37b6..c1f0d1d78431 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -4004,6 +4004,30 @@ void __init create_domUs(void)
             d_cfg.max_maptrack_frames = val;
         }
 
+        if ( dt_get_property(node, "sve", &val) )
+        {
+            unsigned int sve_vl_bits;
+
+            if ( !val )
+            {
+                /* Property found with no value: use the max HW VL supported */
+                rc = sve_sanitize_vl_param(-1, &sve_vl_bits);
+            }
+            else
+            {
+                if ( dt_property_read_u32(node, "sve", &val) )
+                    rc = sve_sanitize_vl_param(val, &sve_vl_bits);
+                else
+                    panic("Error reading 'sve' property");
+            }
+
+            if ( !rc )
+                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
+            else
+                printk(XENLOG_WARNING
+                       "SVE vector length error, disable feature for Dom0less DomU\n");
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:50:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:50:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520102.807446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX7B-0008Nc-R3; Wed, 12 Apr 2023 09:50:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520102.807446; Wed, 12 Apr 2023 09:50:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmX7B-0008LB-HN; Wed, 12 Apr 2023 09:50:09 +0000
Received: by outflank-mailman (input) for mailman id 520102;
 Wed, 12 Apr 2023 09:50:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFOG=AD=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pmX79-0004lO-Sn
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:50:07 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 66c20f5b-d917-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 11:50:05 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1CC69C14;
 Wed, 12 Apr 2023 02:50:50 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DF61D3F587;
 Wed, 12 Apr 2023 02:50:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66c20f5b-d917-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH v5 12/12] xen/changelog: Add SVE and "dom0" options to the changelog for Arm
Date: Wed, 12 Apr 2023 10:49:38 +0100
Message-Id: <20230412094938.2693890-13-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Arm can now use the "dom0=" Xen command line option, and support for
guests running SVE instructions has been added; put entries in the
changelog.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v4:
 - No changes
Change from v3:
 - new patch
---
 CHANGELOG.md | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index c978cfd9b68f..a24951603359 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,10 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 
 ## [unstable UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=staging) - TBD
 
+### Changed
+- The "dom0" option is now supported on Arm and the "sve=" sub-option can be
+  used to enable dom0 to use SVE/SVE2 instructions.
+
 ### Added
  - On x86, support for features new in Intel Sapphire Rapids CPUs:
    - PKS (Protection Key Supervisor) available to HVM/PVH guests.
@@ -14,6 +18,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
      wide impact of a guest misusing atomic instructions.
  - xl/libxl can customize SMBIOS strings for HVM guests.
+ - On Arm, Xen supports guests running SVE/SVE2 instructions.
 
 ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 09:53:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 09:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520114.807462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmXAl-0004Ww-BO; Wed, 12 Apr 2023 09:53:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520114.807462; Wed, 12 Apr 2023 09:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmXAl-0004Wp-8V; Wed, 12 Apr 2023 09:53:51 +0000
Received: by outflank-mailman (input) for mailman id 520114;
 Wed, 12 Apr 2023 09:53:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QpzN=AD=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmXAj-0004Wj-Td
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 09:53:50 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20631.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ebf629ff-d917-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 11:53:48 +0200 (CEST)
Received: from AM6PR0502CA0072.eurprd05.prod.outlook.com
 (2603:10a6:20b:56::49) by AS2PR08MB9197.eurprd08.prod.outlook.com
 (2603:10a6:20b:57b::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Wed, 12 Apr
 2023 09:53:45 +0000
Received: from AM7EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:56:cafe::aa) by AM6PR0502CA0072.outlook.office365.com
 (2603:10a6:20b:56::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.40 via Frontend
 Transport; Wed, 12 Apr 2023 09:53:44 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT018.mail.protection.outlook.com (100.127.140.97) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.28 via Frontend Transport; Wed, 12 Apr 2023 09:53:44 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Wed, 12 Apr 2023 09:53:44 +0000
Received: from 67a50924a495.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 36914FB2-8A4A-4FA4-B5DB-DE8B80D3CC03.1; 
 Wed, 12 Apr 2023 09:53:37 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 67a50924a495.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 12 Apr 2023 09:53:37 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PAVPR08MB9259.eurprd08.prod.outlook.com (2603:10a6:102:307::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.34; Wed, 12 Apr
 2023 09:53:31 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6277.034; Wed, 12 Apr 2023
 09:53:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebf629ff-d917-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QO87h/B4tZJozq0ApWzW4uhvl9hVnbZ2lFC/3noO028=;
 b=C/L9risaHhZUG3Dmu3fQ/dmwj6LvlLersWBeU5ksbHuZic/ZT1VeXYvD+pB8c7rkwkfyJw6+HvBnG4bsemmzt+eT0gmGwKziqFrC04GQFgn25DbYoFZiLrgouhqwLFvNwFqZ2Eedae9BSp6BSZR98HLaIHBJRFmNDL53pbL6BYo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UoNwsXgjmy1EAIdyurOWW6n4leyLPEljdROlzYVGG1fEWlSBd16IbBvYxtQmaf6k65JDU4Dtod+dEkCM+V+Dpb/fIWEOKlEUnYO9uEv59v3CRFempIu+gzIUnMwGRlxID47ql732cMr3mFetdavvMZ2Cr4BjafZLu+abQZIcg1uFM9v/OF5fRdnJqIgq1M1JbzWDiFb66oghgmHxvEylT5DPwhyv7seWdAuyxGQMBuIzS8z4GGR0LAmwXNFBa90fG+ikf70KHCYcfHbOhk1eOgoMQ8VyEc326lnXYVEhbj0n1rYwhpvTfy8Cfmt9Y4c5JhtdaRMG6vxcPS14Jm6YZw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QO87h/B4tZJozq0ApWzW4uhvl9hVnbZ2lFC/3noO028=;
 b=Jf/TFL+SkzxTP/iP6jeW5IKycRdLzQPmN5A4Omd6FG4Y7m0GVp2xy5dcjcEKfjmEa0JiJI3e5XDoMjqWzP6Lz1Ddij3xnQGpL15mDM3fP8fGPo5ffGLvyzTyEGYebkFPxENGvr4wcx4HxW4xt/SRVHJUPtDFFr0Pzv5YIUBQlPbi8Lw+SMagOfNur2CwYObubQ7qjzM6JQT/Av+EnBLcDAUhdpEmOUSGk52qvEy6eFwFYcaM14JinT9xctbTMV1+UY4xQZ2jHLIt058i4GOczUhYYSojsamBW799f+GdLvl8obdwLxEZh10zgrB+jO0cGOrWvZEzgA9G2R6UGoGXvg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QO87h/B4tZJozq0ApWzW4uhvl9hVnbZ2lFC/3noO028=;
 b=C/L9risaHhZUG3Dmu3fQ/dmwj6LvlLersWBeU5ksbHuZic/ZT1VeXYvD+pB8c7rkwkfyJw6+HvBnG4bsemmzt+eT0gmGwKziqFrC04GQFgn25DbYoFZiLrgouhqwLFvNwFqZ2Eedae9BSp6BSZR98HLaIHBJRFmNDL53pbL6BYo=
From: Henry Wang <Henry.Wang@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: RE: [PATCH v5 12/12] xen/changelog: Add SVE and "dom0" options to the
 changelog for Arm
Thread-Topic: [PATCH v5 12/12] xen/changelog: Add SVE and "dom0" options to
 the changelog for Arm
Thread-Index: AQHZbSQvpEqMQcYfY0uxDJPB32sRQK8nbmLA
Date: Wed, 12 Apr 2023 09:53:30 +0000
Message-ID:
 <AS8PR08MB79917190E2A5730AD295026C929B9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-13-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-13-luca.fancellu@arm.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 5324AD9204C32D4C878C8D9CCB826009.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PAVPR08MB9259:EE_|AM7EUR03FT018:EE_|AS2PR08MB9197:EE_
X-MS-Office365-Filtering-Correlation-Id: 341cf477-8d53-49c5-99c7-08db3b3bcdef
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 xN8aDDBN3fuygun8SjjIegaQaie0tu5S5wPUnHvxN6plyoeBl8eHGVP3B/lwDCuvIkCOy8GBdBt/HfiJV3m6QXC9bIo8IuaShasVWlPypnNjhMQKdYnAIcqPRXBeXKvkDWz4w+E22qfbdcxE9ATzYQkpHbDGFNgT1M6vyAEbD3vbURKCe/cqRUPUQcujyighDxjhOKqhuelb3mBQd/GhW//oi4pPy4eBvBwwu22LhLMUcPXhkfSKPtxXa1YcXrFUGUVMB+pHFmSg8a4Xbh5rSco0N2p9dn2dvFEGA/9Jvnexfo/UbE0zE+xNIgEaPKZN81fbK18Psh51px+x9FcC7BXD8uG/TNQHfXxN/kQcJeyiKymljThFqS5z2SG3wrqY0+Xa2ZHQU0qOqhsJWFBEeJMGRLVsRwLYIjSeeP+HmmDiUaE4ofamlb/qfqgH6MLlWcdGOOgZP44k6+0J8gGiwLCX3pxKl9uRVKcOqWeuAHZnWinZ9kOGD1M/kzefGl711U7U18xJCmLI+7pkY9UxS+8Y91DyfYJyrZfyW+zT9dCt7GAJKLhqqKdcNYv+BDuItYQdC3dIlbbabdWLRk8nKb1z6aJ/ROPx2ocefBLXiZwDyJmkFbHfKkJlCFaXXzL4EUW3iA/YIt9AnmqKAkKfeiGPzvObC2KWetx21Eea6Wn9lk1cY0BLckbvlFYxdIaeiLj/XmVO5I60SEAx53M88A==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(366004)(396003)(136003)(376002)(346002)(451199021)(38070700005)(110136005)(38100700002)(54906003)(122000001)(76116006)(66446008)(66476007)(66946007)(316002)(8676002)(64756008)(86362001)(66556008)(83380400001)(33656002)(478600001)(4326008)(186003)(26005)(41300700001)(9686003)(6506007)(71200400001)(2906002)(7696005)(4744005)(8936002)(52536014)(55016003)(5660300002)(59356011)(207903002)(219803003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9259
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	78b7cde3-9b4f-4587-f4d6-08db3b3bc5be
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pn/9IaGY6XTNLgidpgRzDNPZ29+uVXLwCmAMP+DzzDurhaDEVDZNlHtSIjpQ+il9lTm2+0EwMZPmIgCiYr7nSogQuAd5zbOTl2UwSstk7qfBs5UwqHRuNi55nesZZgyOgSoeJSnVvktlvQTVQZQ1eQVJyGnWeQhiLVQxOtzo+v4bQxwlGxjagOdGNQEBmYeaDmAWIigqbNA5DwMoYjrsELMmRTLnpBZDSxO59aA0w6q4cewmyaNXHAQ1WiPeK2st2f6OSy+2hsJg0YLC+eIjwCeePglMC0eMf1/5dMqj4Ju3jjUdIofHpucF1dPuLGl4oX4ZGdOpVfW6/E6vpDJxM/slnxRBI4GvyjGWXmuCFkxduk4z0c494xVAa6Xv1nnN3dlFV8JacGinmy1vP12k1HjwRv8+ENKvPKtpPAa9zusejdSQGa4UYgr1Gmt6c/djRpMxfurKVNGawzCNeNUaalhmmEzfpiAfA4n0IkLUWo7UC9yDBpscNY40paprsCx1ocKVL25A0JfBGoI19S7rDDR2cke0Ay/HnW5ZJUl5y3ZvHt4vGIx0Vc7z4aqBM4d1DN5ZgfVS+YLLcEshYffyVD5G93p3MCZ+k4kkShzslDh2JUYwJU6OEN10VqXsS8DWTLJVcagAsePQecMBJV09gO/Giz8p5bnBNpsiphXhZebXDO96+shggM3hd7ifCSc1nEP8bEvPwc/2Kx0p+/H67QQGKAFU/ExRFmSMTx5UtaONJx0rxebjy+d+jSgcb7mJDk61wVnM2ZfDwcIAbTbUGWa04bGjUzdWDT8Xgqnrni4=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(346002)(376002)(39860400002)(396003)(136003)(451199021)(46966006)(40470700004)(36840700001)(36860700001)(336012)(47076005)(83380400001)(478600001)(7696005)(54906003)(26005)(186003)(6506007)(110136005)(9686003)(41300700001)(2906002)(52536014)(4744005)(33656002)(82740400003)(40460700003)(5660300002)(70206006)(4326008)(81166007)(356005)(82310400005)(70586007)(8936002)(316002)(55016003)(86362001)(8676002)(40480700001)(59356011)(207903002)(219803003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Apr 2023 09:53:44.6729
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 341cf477-8d53-49c5-99c7-08db3b3bcdef
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9197

Hi Luca,

> -----Original Message-----
> Subject: [PATCH v5 12/12] xen/changelog: Add SVE and "dom0" options to the
> changelog for Arm
>
> Arm can now use the "dom0=" Xen command line option, and support for
> guests running SVE instructions has been added, so add entries to the
> changelog.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

If the Arm maintainers don't have any other comments, you can keep my:
Acked-by: Henry Wang <Henry.Wang@arm.com>

If they do have comments about the wording, please address them, thanks!

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 10:06:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 10:06:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520156.807471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmXNA-0006FE-ED; Wed, 12 Apr 2023 10:06:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520156.807471; Wed, 12 Apr 2023 10:06:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmXNA-0006F7-BU; Wed, 12 Apr 2023 10:06:40 +0000
Received: by outflank-mailman (input) for mailman id 520156;
 Wed, 12 Apr 2023 10:06:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=In0L=AD=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1pmXN9-0006F1-1b
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 10:06:39 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b3fc7726-d919-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 12:06:33 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 he11-20020a05600c540b00b003ef6d684102so4018103wmb.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Apr 2023 03:06:34 -0700 (PDT)
Received: from [192.168.69.115] ([176.187.216.226])
 by smtp.gmail.com with ESMTPSA id
 m2-20020a05600c3b0200b003f0652084b8sm1865060wms.20.2023.04.12.03.06.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Apr 2023 03:06:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3fc7726-d919-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681293994; x=1683885994;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=nodu07HKidxKPxvOg7xbv/1Polhi/238wioQUMdq6YU=;
        b=Xk3y6DXBeCFFzJr+vithiTz5Beg0CXzofhffIWROJNMLkIZ7q26zdzFL4KJtA4YxyU
         Il4dRd58xoKO29XLYhdAiWdEM4huVsdPW7tera/P7GyqMzKKr5sQFUsTfalIJ40UoLt5
         RZNqZzfSgstPdGFJ4R0YX5dBPr8yhN791OWpFaBmqSRZApEh4jrE+bwym33YTPmGkOEI
         7gr68QxBAS8wdpi6KH/Bcx+0ZH++esjWqvsINZI9qV3ixkUf50FIyXp6LuCXItCKkmpu
         w95X1V7K9gFRCUDsJeIqJJ4IrgOgZWYJ5NN9Uve6hEwbGvxbD9cVhTlpHNiW7ruAt4Ob
         FUaw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1681293994; x=1683885994;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=nodu07HKidxKPxvOg7xbv/1Polhi/238wioQUMdq6YU=;
        b=iYUBTFRUGLg1Zku10Ubx8cR5LhqjvanBrad2vEJyOaDsdyLbEOntJqBEUipfxTEMjo
         00HOu36Jit3yE9XLGfKfhJeWCrd7VUPkMN/MNmzVaVFhOKa4YYu0NVsJ3bS67oBQYR8r
         koDm6R+50iHggqptaLn0oLMVaHHvgWd4wc3MHwiG0CxlIl+eBh7KL82YrFR5eLb1XpRl
         eKTLwAu4MpSW7CC8n409IuhwmGqcGqHCPddG/WJWDiysame2jt/A5MMn2Zv0tO96MmiX
         IHkrF49ph9IaruKLkIxwBPKPq5XYYF2+SpObGWXzOwo6muuLeMiapWHi0ddkeZGh86vz
         /0FA==
X-Gm-Message-State: AAQBX9fr+qnLy6o7XooFpAnhj5NGi+D5t9JaIRdDrEqfQdYLmfQXRfZO
	PtqvNNY3oTGH5ZangPdTG4T/gA==
X-Google-Smtp-Source: AKy350a38QzxrOG40N/FCLtmNOZJ6NWu2hOiFQSQbZsWgPAmb1mo1As6jLt15bkoB1132d1qV+yodA==
X-Received: by 2002:a05:600c:b56:b0:3ee:7e12:f50 with SMTP id k22-20020a05600c0b5600b003ee7e120f50mr1516825wmr.8.1681293994158;
        Wed, 12 Apr 2023 03:06:34 -0700 (PDT)
Message-ID: <72ee7e72-5ec1-d53b-94fa-e88f68880c2f@linaro.org>
Date: Wed, 12 Apr 2023 12:06:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [QEMU][PATCH] gitlab-ci.d/crossbuilds: Drop the '--disable-tcg'
 configuration for xen
Content-Language: en-US
To: Thomas Huth <thuth@redhat.com>, Vikram Garhwal <vikram.garhwal@amd.com>,
 qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Beraldo Leal <bleal@redhat.com>
References: <20230411210422.24255-1-vikram.garhwal@amd.com>
 <895bcdd3-350d-38e7-1982-899948072b93@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <895bcdd3-350d-38e7-1982-899948072b93@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 12/4/23 07:39, Thomas Huth wrote:
> On 11/04/2023 23.04, Vikram Garhwal wrote:
>> Xen is supported for aarch64 via the xenpvh machine. The --disable-tcg
>> option fails the build for the aarch64 target.
>>
>> Link for xen on arm patch series: 
>> https://mail.gnu.org/archive/html/qemu-devel/2023-02/msg03979.html
>>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> ---
>>   .gitlab-ci.d/crossbuilds.yml | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
>> index 61b8ac86ee..6867839248 100644
>> --- a/.gitlab-ci.d/crossbuilds.yml
>> +++ b/.gitlab-ci.d/crossbuilds.yml
>> @@ -186,7 +186,7 @@ cross-amd64-xen-only:
>>     variables:
>>       IMAGE: debian-amd64-cross
>>       ACCEL: xen
>> -    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
>> +    EXTRA_CONFIGURE_OPTS: --disable-kvm
>>   cross-arm64-xen-only:
>>     extends: .cross_accel_build_job
>> @@ -195,4 +195,4 @@ cross-arm64-xen-only:
>>     variables:
>>       IMAGE: debian-arm64-cross
>>       ACCEL: xen
>> -    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
>> +    EXTRA_CONFIGURE_OPTS: --disable-kvm
> 
> This patch looks wrong. I'm pretty sure we wanted to test the build 
> without TCG here. Building with TCG enabled is already done in other 
> jobs. So instead of removing "--disable-tcg" here the question is 
> rather: Why does it not build with this flag anymore? Can those problems 
> be fixed instead?

I concur, this used to work. Has this config bit-rotted?
The latest /master job succeeded:
https://gitlab.com/qemu-project/qemu/-/jobs/4094506462


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 10:10:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 10:10:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520163.807482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmXQS-0007PI-Uy; Wed, 12 Apr 2023 10:10:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520163.807482; Wed, 12 Apr 2023 10:10:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmXQS-0007OY-QR; Wed, 12 Apr 2023 10:10:04 +0000
Received: by outflank-mailman (input) for mailman id 520163;
 Wed, 12 Apr 2023 10:10:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R0Xd=AD=linaro.org=alex.bennee@srs-se1.protection.inumbo.net>)
 id 1pmXQR-00075J-9t
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 10:10:03 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2f337a2d-d91a-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 12:10:00 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id j1so14313778wrb.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Apr 2023 03:10:01 -0700 (PDT)
Received: from zen.linaroharston ([85.9.250.243])
 by smtp.gmail.com with ESMTPSA id
 z16-20020a5d4410000000b002f490a0cd1asm1080829wrq.92.2023.04.12.03.10.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Apr 2023 03:10:00 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 190F31FFB7;
 Wed, 12 Apr 2023 11:10:00 +0100 (BST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f337a2d-d91a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681294201; x=1683886201;
        h=content-transfer-encoding:mime-version:message-id:in-reply-to:date
         :subject:cc:to:from:user-agent:references:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bPJ2SFierbyb3Cclh63Wlefmv6JskXMO+atr4mXdl8I=;
        b=BScgF6Er6TmA9oO9n/EyYK3mPOlQAD+EaBbBo1rH+wjOVZNNffPA5u8mTVh3sa9rAd
         QHL+9y7eA4QlOM3dyrIeluXkvHuwijSBYc25rAQSnSY41YwmH6MRPHChCzVJ4vm8FZX4
         5ebp5hfJis2nUoxbankpexnCPOMMn9n+Fl8A8S52wn/8zWiNcd9wti3k0LgL9iYgsG2H
         LSxKzu/DesSxMfYHcYjnym0+FpfzyNZUD+IuLQ1UJfJHADUtRUkxAajjpL4xVZUAVupL
         6VU9hq/G3upGGHaf8TMUHF5mDs6JMlXa7eqGqeUUOqlnGB9+CYU1PclzWUb7I8YuWA2d
         v6Yw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1681294201; x=1683886201;
        h=content-transfer-encoding:mime-version:message-id:in-reply-to:date
         :subject:cc:to:from:user-agent:references:x-gm-message-state:from:to
         :cc:subject:date:message-id:reply-to;
        bh=bPJ2SFierbyb3Cclh63Wlefmv6JskXMO+atr4mXdl8I=;
        b=xAaZJSiwBZFMTUHL1K5kvkqjdt3C0n5ltyNRozMc+xW7lltSxr2CUjqbjZjXE7fna0
         7aBaKm0fRI98195Lqy+9qNEwrNDQhR9DIWlDYEwZ59yIr/FsyycdCL997xzWextktl1+
         +8cS9VQhvmoTB5yqlGjFMAnmCqQodIgWLPuRZnXKLUSNTPFEB+bjpgAGvynSaFJi5A3+
         KlgeKiw9y45NikHo37fzlwxKzsNL5r0FsIO/rEr0+4auL/fXS6fZpTlJ8IUBoEORpVsL
         HNIr6zDrBxWQQByReKdC7t8mejLSyi38SghnA9049lTuSyNKCj9SxDMKOv22fiox2GWU
         RDIg==
X-Gm-Message-State: AAQBX9dsfmHjfG/r/Ol4NLzkSOD5xtIybZl1aXkxEeo1+X5PvRVay0sK
	NLYOPcbpjxe2qh7oHMPw62qCPA==
X-Google-Smtp-Source: AKy350aBY3mwROC5VviG3+/nxUD4IH6ZvxN1KXZmddzemSoJH/nDLqqfeanFjDkNN3oVWK6yU/IRoQ==
X-Received: by 2002:adf:ee0f:0:b0:2d0:354e:dc77 with SMTP id y15-20020adfee0f000000b002d0354edc77mr9024079wrn.66.1681294200833;
        Wed, 12 Apr 2023 03:10:00 -0700 (PDT)
References: <20230411210422.24255-1-vikram.garhwal@amd.com>
User-agent: mu4e 1.10.0; emacs 29.0.90
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: Vikram Garhwal <vikram.garhwal@amd.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
 stefano.stabellini@amd.com, Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?=
 <philmd@linaro.org>,
 Thomas Huth <thuth@redhat.com>, Wainer dos Santos  Moschetta
 <wainersm@redhat.com>, Beraldo Leal <bleal@redhat.com>
Subject: Re: [QEMU][PATCH] gitlab-ci.d/crossbuilds: Drop the '--disable-tcg'
 configuration for xen
Date: Wed, 12 Apr 2023 11:08:50 +0100
In-reply-to: <20230411210422.24255-1-vikram.garhwal@amd.com>
Message-ID: <877cuhpg1z.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Vikram Garhwal <vikram.garhwal@amd.com> writes:

> Xen is supported for aarch64 via the xenpvh machine. The --disable-tcg
> option fails the build for the aarch64 target.
>
> Link for xen on arm patch series: https://mail.gnu.org/archive/html/qemu-devel/2023-02/msg03979.html
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  .gitlab-ci.d/crossbuilds.yml | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
> index 61b8ac86ee..6867839248 100644
> --- a/.gitlab-ci.d/crossbuilds.yml
> +++ b/.gitlab-ci.d/crossbuilds.yml
> @@ -186,7 +186,7 @@ cross-amd64-xen-only:
>    variables:
>      IMAGE: debian-amd64-cross
>      ACCEL: xen
> -    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
> +    EXTRA_CONFIGURE_OPTS: --disable-kvm

x86 should handle --disable-tcg fine.

>
>  cross-arm64-xen-only:
>    extends: .cross_accel_build_job
> @@ -195,4 +195,4 @@ cross-arm64-xen-only:
>    variables:
>      IMAGE: debian-arm64-cross
>      ACCEL: xen
> -    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
> +    EXTRA_CONFIGURE_OPTS: --disable-kvm

Currently this builds qemu-system-i386, but with your changes and the
work Fabiano is doing:

  Message-Id: <20230313151058.19645-1-farosas@suse.de>
  Date: Mon, 13 Mar 2023 12:10:48 -0300
  Subject: [PATCH v9 00/10] target/arm: Allow CONFIG_TCG=n builds
  From: Fabiano Rosas <farosas@suse.de>

We should be able to have a qemu-system-aarch64 supporting Xen without TCG.
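
For context, the cross job being discussed effectively reduces to a
configure invocation like the following — a minimal sketch, assuming the
.cross_accel_build_job template forwards EXTRA_CONFIGURE_OPTS to QEMU's
./configure (the cross prefix and target list here are illustrative
assumptions, not values taken from the CI config):

```shell
# Sketch of the configure line the cross-arm64-xen-only job would run
# once --disable-tcg is dropped, per the patch under discussion.
CROSS_PREFIX=aarch64-linux-gnu-        # illustrative toolchain prefix
TARGET_LIST=aarch64-softmmu            # illustrative target list
EXTRA_CONFIGURE_OPTS="--disable-kvm"   # was: --disable-tcg --disable-kvm

CONFIGURE_CMD="./configure --cross-prefix=${CROSS_PREFIX} \
--target-list=${TARGET_LIST} --enable-xen ${EXTRA_CONFIGURE_OPTS}"
echo "$CONFIGURE_CMD"
```

With --disable-tcg gone, the resulting binary keeps TCG as a fallback
accelerator alongside Xen, which is what Thomas's objection is about:
the TCG-enabled combination is already exercised by other jobs.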

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 10:12:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 10:12:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520168.807491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmXSk-0008J0-Fi; Wed, 12 Apr 2023 10:12:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520168.807491; Wed, 12 Apr 2023 10:12:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmXSk-0008It-Cz; Wed, 12 Apr 2023 10:12:26 +0000
Received: by outflank-mailman (input) for mailman id 520168;
 Wed, 12 Apr 2023 10:12:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SpRk=AD=citrix.com=prvs=4590bba82=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pmXSj-0008In-MT
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 10:12:25 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 84034ac4-d91a-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 12:12:24 +0200 (CEST)
Received: from mail-dm6nam11lp2174.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 Apr 2023 06:12:14 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM4PR03MB6110.namprd03.prod.outlook.com (2603:10b6:5:395::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Wed, 12 Apr
 2023 10:12:12 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee%3]) with mapi id 15.20.6277.038; Wed, 12 Apr 2023
 10:12:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84034ac4-d91a-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681294343;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=nkmtczLzb6GNy4XEaW8RkYxPle9XiClkPiW3KoeJLwE=;
  b=hg8JNBsgUfnpc0c0IOs0pAKiu2uZ5OQIjzlLDnTMZBFTum3pgtkajyvI
   xNKKGMhzNgnHBef7vCnZYTlghdCPzkdqrYAa6/Bfe+CvEHIetMHnW7aS2
   F2ztJyvGw0zLTFjtsBdvQiNTaUo2wSWQ9furZcJT4hZnYAnoTpbX0MJ1N
   Q=;
X-IronPort-RemoteIP: 104.47.57.174
X-IronPort-MID: 104558752
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:M0iZQ6D5b6woVxVW/8viw5YqxClBgxIJ4kV8jS/XYbTApDl01zJWn
 GZOCDyOPK3eY2b2c48gO4vi9ENV6J7Wz95hQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFu8pvlDs15K6p4G9B5ARkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw4MxSWzFl8
 84hOW4HQR/c3eSm8Zy5Y7w57igjBJGD0II3nFhFlGicIdN4BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTI+uxuvDS7IA9ZidABNPLPfdOHX4NNl1uwr
 WPa5WXpRBodMbRzzBLcqi7x27+QwHmTtIQ6CbOW+v9IsVyogS8XWC0WT2SYkKKchRvrMz5YA
 wlOksY0loAM80isQsj4TgePineOtR4BWPJdC+Q/rgqKz8L88wufQ2QJUDNFQNgnr9MtAywn0
 EeTmNHkDiApt6eaIVqf/LqJqTK5OQAOMHQPIyQDSGMt4cTnoYw1pgLCSJBkCqHdpsbuBTj6z
 jSOrS4/r7Yel8gG0+O851+vqy2ojojESEgy/Aq/Y46+xgZwZYrga4n271HetK9ENNzAEQXHu
 2UYkc+D6uxIFYuKiCGGXOQKGveu+uqBNzrfx1VoGvHN6giQxpJqRqgIiBkWGaujGpxslePBC
 KMLhT5s2Q==
X-IronPort-AV: E=Sophos;i="5.98,339,1673931600"; 
   d="scan'208";a="104558752"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kjaZUU1gW5/aoVSRY5GSDXFZrVphfdUWzFMmjXqe6lXybFJVyjYRKY0Fzl4gQ2amX58cj0KpGFSy5yllhKy5jYUp6B3UNiWcAcm7GGDGoksDW88pF3BJns32oNxTTXm/xhz+KJmGIBngWvvt8bkw8iZR7dk4SuCUBLyf2OyfodBPRAX3C6aLE/Mfkx9NvFLs/LwQ2IxIgjA28QiQx5ZXTUotlJDtSIacSsLKGCWFwRR4rVB1BNvkSQPRvgmv5Xm6ebuFNljhIMN/k508Mf//0MFFvCDM9na+V36oIrSSKKslDkDFGKYRYJHgPFnvEUl8psuXarY1WczamQfPMP0MwQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XGLAXe+N1thtgbkL0MNg4MOYeKvGBwUt6nAxSEZ7hAE=;
 b=bU2WHp39JQw32OAfFvhVPcjDP8StWjuHN8uc5H65YhPkPDFeajMnZFQpq7OhSooPbEqcrBT9KrAk048RMn0O5Gj29doBTviIcLTf7nBqXa448nmuM6C79LKHQ/LwM7o5bVNcSHLATOo49xYRDKwLR03Rg8815vFogbjOU7PqYu5OuDfAxrq1VzwduHWqT5wOgSAPrIBvopryCtvaP5h/+EcZzusvvFCQAul25PGpPomiOxzM6JO7EYbX+1llgfPm6brcv6tKdCJZ7+MMFtSp6YGrzpcAFim/ICYnrxSy9uyfQmjFDnqlDcSRxIahtK7ToxKIFssdcXp2JOR+JMaIEA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XGLAXe+N1thtgbkL0MNg4MOYeKvGBwUt6nAxSEZ7hAE=;
 b=aLkYrPn9Cj799pSHFIsrXpaPRbfhh5k/H/7bNvu0VA21Jl6zgcOZ9tyGrILM3M5cN5RIx0UZP+pJ20NXypeqPa77+6z45DXnNH4U/RucIWUj7J4EZdR+/cALduyZ0dPz2h7PaHCLYMzhJ6+od0JtvM6/zotqWyiceTIySnTFSn0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 12 Apr 2023 12:12:06 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 1/6] xen: add reference counter support
Message-ID: <ZDaD9k2HMUX2LbqU@Air-de-Roger>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-2-volodymyr_babchuk@epam.com>
 <ZBMfpnzW4YdqEiA0@Air-de-Roger>
 <87h6tmxden.fsf@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <87h6tmxden.fsf@epam.com>
X-ClientProxiedBy: LO4P123CA0434.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a9::7) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DM4PR03MB6110:EE_
X-MS-Office365-Filtering-Correlation-Id: 9fcf486c-475c-4d42-afdd-08db3b3e6202
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9fcf486c-475c-4d42-afdd-08db3b3e6202
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Apr 2023 10:12:12.4592
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: O5PbhQAJy3womqlFmx5yoEo9BfeHnffIylOxCkupRvH8C3TocLVKRLrabUdoIC6ZehFBOoR/LmsRm6iNBy5m3Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6110

On Tue, Apr 11, 2023 at 10:27:45PM +0000, Volodymyr Babchuk wrote:
> 
> Hello Roger,
> 
> Sorry for the late answer.
> 
> Roger Pau Monné <roger.pau@citrix.com> writes:
> 
> > On Tue, Mar 14, 2023 at 08:56:29PM +0000, Volodymyr Babchuk wrote:
> >> We can use reference counters to ease object lifetime management.
> >> This patch adds very basic support for reference counters. refcnt
> >> should be used in the following way:
> >> 
> >> 1. The protected structure should have a refcnt_t field.
> >> 
> >> 2. This field should be initialized with refcnt_init() during object
> >> construction.
> >> 
> >> 3. If code holds a valid pointer to a structure/object, it can
> >> increase the refcount with refcnt_get(). No additional locking is
> >> required.
> >> 
> >> 4. Code should call refcnt_put() before dropping a pointer to a
> >> protected structure. `destructor` is a callback function that destroys
> >> the object and frees all its resources, including the protected
> >> structure itself. The destructor is called when the reference counter
> >> reaches zero.
> >> 
> >> 5. If code does not hold a valid pointer to a protected structure, it
> >> should use some other locking mechanism to obtain one. For example,
> >> it can lock a list that holds the protected objects.
> >
> > Sorry, I didn't look at the previous versions, but did we consider
> > importing refcount_t and related logic from Linux?
> 
> Well, I considered this, but it is more complex. Linux has a separate
> refcount module, which just counts references, plus the kref code,
> which is capable of calling destructors. I am not sure whether Xen
> needs this division. In any case, I tried to replicate the Linux
> behavior as closely as possible. On the other hand, Jan suggests
> reworking the API, so it will differ from the Linux one...

OK, just asking because the interface is likely to grow if there are
more users of refcounting, and at some point we might need a set of
features similar to Linux's.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 11:26:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 11:26:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520175.807502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmYc8-0007F4-Vh; Wed, 12 Apr 2023 11:26:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520175.807502; Wed, 12 Apr 2023 11:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmYc8-0007Ex-Rp; Wed, 12 Apr 2023 11:26:12 +0000
Received: by outflank-mailman (input) for mailman id 520175;
 Wed, 12 Apr 2023 11:26:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SpRk=AD=citrix.com=prvs=4590bba82=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pmYc6-0007Er-Nf
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 11:26:11 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cd5e2407-d924-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 13:26:03 +0200 (CEST)
Received: from mail-bn8nam12lp2176.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 Apr 2023 07:25:59 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by CO6PR03MB6227.namprd03.prod.outlook.com (2603:10b6:5:358::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Wed, 12 Apr
 2023 11:25:56 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee%3]) with mapi id 15.20.6277.038; Wed, 12 Apr 2023
 11:25:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd5e2407-d924-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681298762;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=6J7iWm4iSPCoESaN81BwvrsTYthin0+NWUfWLWrh4YM=;
  b=P982LGsERJ00AjMchkN2Uq6LX8TIvdMwfNPLS/I8nu9Ej0nXeqB/KDxs
   6d1IcwCCNZ5+GlIwKxs6e6fRET+0vid/hlnW5NEOaTmnhJ/yt96tI8ASm
   Ms6HHOCGMEhsDeLAFdlbHGllb7gCDKfdkX3/RjUAw2DuaSznRWH7U86qN
   0=;
X-IronPort-RemoteIP: 104.47.55.176
X-IronPort-MID: 105626508
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,339,1673931600"; 
   d="scan'208";a="105626508"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Wtr7brGMUlrZFWmFb33NIYD/Ug0h0BNvu16CW/wpYvluKWJD/rS/aqosg0jpSRnHEe+0JdQ5TvcngWXcoDbirH+W0IdKZi/IY3zz24TfGMa/QbiF0AEmgA+r3KumI7rb9Hb6tnddwDSLQrNXobMoKFtZ1mmnuNR7Y+9oDt1LbuB/85mDnsVkgszZ4W59VZvwRqd+AyIKXC8glNz/Gua95ZpfgyzZmgsLAJGgbCKYys/7wtG/NBCEJCwEhqnGrktud4YIf8ZG5wz6L1+bx6D/p806yO1EhLLYryRPcrKO2udiSaSsS40dFUde6TO0u5JWJhR3S5C+llHkTYbA2+Eamg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wVv+u4ZD0bgtW12ZnZ97zJ8R98hI71abvFB8dGcBmG0=;
 b=OBTNBGglc6vRbc4GxozfP0vWIAqID0YhD2GBZSH38JyBeqDqJm54VoHbDL+I/pBm2vV7GX2AQu/RX0Ute2sT0kQalew1keGiJ4efyMOdtHuU13+EKjjoggMNPl6v3atBqKnL9XNNiZEEHkyH3uE0Nt9yRUxyryzp7JxsfxdI2s/IfKAx5MVnBxQkFIAJruZ4W9jl2Hppjm69Yuewra1pBGHU9tQxHsm6BBXc7MFDiE8x3XRgf0NBprpmNX4KBpWrGlB0o0vcF7qTZ/4FpH/vB0F8tMP/aQO+1b2Zp3EibV6qBI/Yd9JwHMU6uTpDUpr7Ib5SYp1Sjgh4FeHHxLBYuQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wVv+u4ZD0bgtW12ZnZ97zJ8R98hI71abvFB8dGcBmG0=;
 b=fQYRGMe0psAtl+md8iCXG4NpP/htdu+TpZAOek8xPy6pLDX49mpD8B31ga4YNpmQc+Chc8KrKJBjXyzeSikyM9rPPVZ4eOMwJVyB6kYCAclkOqnyvg9w3sWg7R8wj4XI7ZI2tITt4r843+tLi5xEOuOOXsvoYyBmR9QSXcFz5fM=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 12 Apr 2023 13:25:50 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Simon Gaiser <simon@invisiblethingslab.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: RFC: disable HPET legacy mode after timer check
Message-ID: <ZDaVPiJTt8q74nQw@Air-de-Roger>
References: <cb408368-077d-edb5-b4ad-f80086db48c1@invisiblethingslab.com>
 <0ac3fce6-dcd2-4521-6207-ede4d90e656b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0ac3fce6-dcd2-4521-6207-ede4d90e656b@citrix.com>
X-ClientProxiedBy: LO2P265CA0459.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::15) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|CO6PR03MB6227:EE_
X-MS-Office365-Filtering-Correlation-Id: 7ece3b53-98a6-4b6d-a4f3-08db3b48aedf
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7ece3b53-98a6-4b6d-a4f3-08db3b48aedf
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Apr 2023 11:25:56.3181
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +Nao5uXLAyfAWqCJ9JYGxEfVzTreyw5E5t2xR3v8PepK9sNB1Sx2O0afHTaGpmhkPSMqtevkxwyf47k/kMvyvQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO6PR03MB6227

On Tue, Apr 11, 2023 at 12:20:13PM +0100, Andrew Cooper wrote:
> On 11/04/2023 11:30 am, Simon Gaiser wrote:
> > Hi,
> >
> > I have been recently looking into getting S0ix working on Xen [1].
> >
> > Thanks to a tip from Andrew I found that the HPET legacy mode was
> > preventing my test system from reaching a package C-state lower than PC7
> > and thereby also preventing S0ix residency.
> >
> > For testing I simply modified check_timer() to disable it again after it
> > checked the timer irq:
> >
> > --- a/xen/arch/x86/io_apic.c
> > +++ b/xen/arch/x86/io_apic.c
> > @@ -1966,6 +1969,8 @@ static void __init check_timer(void)
> >  
> >              if ( timer_irq_works() )
> >              {
> > +                hpet_disable_legacy_replacement_mode();
> >                  local_irq_restore(flags);
> >                  return;
> >              }
> >
> >
> > With this [2] I'm able to reach S0ix residency for some time and for short
> > periods the systems power consumption goes down to the same level as with
> > native Linux!
> 
> Excellent progress!
> 
> > It reaches low power states only for a fraction of the suspend-to-idle
> > time, so something still makes the CPU/chipset think it should leave the
> > low power mode, but that's another topic.
> 
> Do you have any further info here?  There are a range of possibilities,
> from excess timers in Xen (e.g. PV guests default to a 100Hz timer even
> though no guests actually want it AFAICT), or the 1s TSC rendezvous
> (which isn't actually needed on modern systems), all the way to the
> platform devices not entering d3hot.
> 
> >
> > I tried to understand how all the timer code interacts with disabling
> > the legacy mode. I think it would only break cpuidle if X86_FEATURE_ARAT
> > is not available (which it is on my test system, and indeed I didn't
> > run into obvious breakage).
> >
> > Is this (disabled PIT && !ARAT) a configuration that exists (and needs
> > to be supported)?
> >
> > Did I miss something else? (Very much possible, given that this is way
> > above my existing experience with X86 and Xen internals.)
> 
> Xen's code is a mess and needs an overhaul.
> 
> Right now, we're using the timer as "a source of interrupts" to try and
> check that we've got things set up suitably.  But this doesn't need to
> be the PIT, or a timer at all - it just needs to be "an interrupt coming
> in from the platform".

I would even question whether that testing is useful overall.  We test
a single IO-APIC pin, which still leaves room for the rest of them to
be misconfigured, and Xen might not be using the PIT timer in the end
anyway.

IOW: I think it's fine to test that the timer is working, but forcing
that to be routed through the IO-APIC is wrong.  The HPET, for example,
can support FSB delivery, which skips the IO-APIC and injects the
vector directly into the local APIC.

We do support the APIC deadline timer, so I'm confused as to why we
also need to use the PIT or HPET.  I don't think I fully understand
the relationship between the uses of the PIT, HPET and APIC deadline
timers in Xen.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 12:17:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 12:17:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520197.807517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmZPV-0004Yo-84; Wed, 12 Apr 2023 12:17:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520197.807517; Wed, 12 Apr 2023 12:17:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmZPV-0004Yh-4L; Wed, 12 Apr 2023 12:17:13 +0000
Received: by outflank-mailman (input) for mailman id 520197;
 Wed, 12 Apr 2023 12:17:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmZPU-0004YX-Kh; Wed, 12 Apr 2023 12:17:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmZPU-0008Qb-Eg; Wed, 12 Apr 2023 12:17:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmZPT-0004w3-Vf; Wed, 12 Apr 2023 12:17:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmZPT-00042D-VC; Wed, 12 Apr 2023 12:17:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=01lRaplkKQyfQBYwOAh13Aq5EN5AiVg7fx436qDACUM=; b=RM9AeRNED6f/tsZckZZ9aTucv2
	hgFGqdE+fhJ36btBAh5aXzm+ufcFVM7HPrKM48c4a9u/2tG2ReE+8xNHvb4Zr455fJ0HcIb8YL6xR
	nblI7taRTO8+Z0T6/Odtm5vIurWP4KG3Sf21wuf+svoekyJnnomrLBbTuRny76lmDnX0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180209-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 180209: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-4.17-testing:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=e4a5fb9227889bec99ab212b839680f4d5b51e60
X-Osstest-Versions-That:
    xen=7758cd57e002c5096b2296ede67c59fca68724d7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Apr 2023 12:17:11 +0000

flight 180209 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180209/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180084
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180084
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180084
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180084
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180084
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180084
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180084
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180084
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180084
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  e4a5fb9227889bec99ab212b839680f4d5b51e60
baseline version:
 xen                  7758cd57e002c5096b2296ede67c59fca68724d7

Last test of basis   180084  2023-03-31 06:37:07 Z   12 days
Testing same since   180209  2023-04-11 22:08:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7758cd57e0..e4a5fb9227  e4a5fb9227889bec99ab212b839680f4d5b51e60 -> stable-4.17


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 12:57:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 12:57:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520216.807551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pma1s-0000pT-Jq; Wed, 12 Apr 2023 12:56:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520216.807551; Wed, 12 Apr 2023 12:56:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pma1s-0000pM-HJ; Wed, 12 Apr 2023 12:56:52 +0000
Received: by outflank-mailman (input) for mailman id 520216;
 Wed, 12 Apr 2023 12:51:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NJdj=AD=suse.de=farosas@srs-se1.protection.inumbo.net>)
 id 1pmZx7-0000k9-Gy
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 12:51:57 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cda8aab8-d930-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 14:51:55 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B5D2421977;
 Wed, 12 Apr 2023 12:51:54 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3AA23132C7;
 Wed, 12 Apr 2023 12:51:54 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id SKu3AGqpNmRpYwAAMHmgww
 (envelope-from <farosas@suse.de>); Wed, 12 Apr 2023 12:51:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cda8aab8-d930-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1681303914; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VoGmIlsqcWdORKcz9FoVFULPzoKHb5NjNQeKmVcafvU=;
	b=JR/bMA+GwKZEzsACqpuP4Wf8MuOlQK+oDSTIhVDIuiCMwiSRpOdRoj5Sx6DmOXsSFPshjm
	Ka2lfN1czFyI8vSpqpdVTYulvpJW3yc/foUuH+tP41J5DoFfxH+VEY3OXw9hI2YPcoYWCr
	veZj6X34Z9SdVYes0e5YJUUf57/hGfs=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1681303914;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VoGmIlsqcWdORKcz9FoVFULPzoKHb5NjNQeKmVcafvU=;
	b=BZMyoLWjCvmZ+9aDz5x6X+k9CqX5aypGuZvUHO2ffwsSxItR4TDrJ+aCY8NmZgL3m3mCyv
	7aO8HjHldCi4V/Dw==
From: Fabiano Rosas <farosas@suse.de>
To: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>, Vikram Garhwal
 <vikram.garhwal@amd.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
 stefano.stabellini@amd.com, Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?=
 <philmd@linaro.org>,
 Thomas Huth <thuth@redhat.com>, Wainer dos Santos  Moschetta
 <wainersm@redhat.com>, Beraldo Leal <bleal@redhat.com>
Subject: Re: [QEMU][PATCH] gitlab-ci.d/crossbuilds: Drop the '--disable-tcg'
 configuration for xen
In-Reply-To: <877cuhpg1z.fsf@linaro.org>
References: <20230411210422.24255-1-vikram.garhwal@amd.com>
 <877cuhpg1z.fsf@linaro.org>
Date: Wed, 12 Apr 2023 09:51:51 -0300
Message-ID: <87ile1clg8.fsf@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Alex Bennée <alex.bennee@linaro.org> writes:

> Vikram Garhwal <vikram.garhwal@amd.com> writes:
>
>> Xen is supported for aarch64 via xenpvh machine. disable-tcg option fails the
>> build for aarch64 target.
>>
>> Link for xen on arm patch series: https://mail.gnu.org/archive/html/qemu-devel/2023-02/msg03979.html
>>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> ---
>>  .gitlab-ci.d/crossbuilds.yml | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
>> index 61b8ac86ee..6867839248 100644
>> --- a/.gitlab-ci.d/crossbuilds.yml
>> +++ b/.gitlab-ci.d/crossbuilds.yml
>> @@ -186,7 +186,7 @@ cross-amd64-xen-only:
>>    variables:
>>      IMAGE: debian-amd64-cross
>>      ACCEL: xen
>> -    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
>> +    EXTRA_CONFIGURE_OPTS: --disable-kvm
>
> x86 should handle --disable-tcg fine.
>
>> 
>>  cross-arm64-xen-only:
>>    extends: .cross_accel_build_job
>> @@ -195,4 +195,4 @@ cross-arm64-xen-only:
>>    variables:
>>      IMAGE: debian-arm64-cross
>>      ACCEL: xen
>> -    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
>> +    EXTRA_CONFIGURE_OPTS: --disable-kvm
>
> Currently this builds qemu-system-i386, but with your changes and the
> work Fabiano is doing:
>
>   Message-Id: <20230313151058.19645-1-farosas@suse.de>
>   Date: Mon, 13 Mar 2023 12:10:48 -0300
>   Subject: [PATCH v9 00/10] target/arm: Allow CONFIG_TCG=n builds
>   From: Fabiano Rosas <farosas@suse.de>
>
> We should be able to have a qemu-system-aarch64 supporting Xen without TCG

The build should already be working on current master after Philippe
fixed the gdbstub issues. My remaining patches fix tests and general
runtime issues. I just sent v10 to the list.


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 13:19:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 13:19:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520221.807561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmaNx-0003J6-DV; Wed, 12 Apr 2023 13:19:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520221.807561; Wed, 12 Apr 2023 13:19:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmaNx-0003Iz-AR; Wed, 12 Apr 2023 13:19:41 +0000
Received: by outflank-mailman (input) for mailman id 520221;
 Wed, 12 Apr 2023 13:19:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NJdj=AD=suse.de=farosas@srs-se1.protection.inumbo.net>)
 id 1pmaNv-0003Is-H6
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 13:19:39 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id acce0f1a-d934-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 15:19:38 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 96821218FA;
 Wed, 12 Apr 2023 13:19:37 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 1E86F13498;
 Wed, 12 Apr 2023 13:19:36 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id PSm5NeivNmTLcgAAMHmgww
 (envelope-from <farosas@suse.de>); Wed, 12 Apr 2023 13:19:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acce0f1a-d934-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1681305577; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZnQgldug7U+NIeAhYftaDVy/YRzrSi0DIbP1KUpKmbw=;
	b=AKnMWaq1qCMQA1L1wez2jgyW8rQSLoHi/KNVsAUC1GC9EQcJwLUh86WjwPMLxoYYTDS6eO
	IJwY7kMeRUiNsHVwPkFPYBv2TUm6/X5wRN8x3Qm52/2FjcLCh0QQ8WROAqBQsjcnZoYQ4S
	l6PV/6JRHVzJ5Pt+59KiF1bFhVSveUA=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1681305577;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZnQgldug7U+NIeAhYftaDVy/YRzrSi0DIbP1KUpKmbw=;
	b=i4w3/kGmJytP7NS46yu7iHK4IZRrRljaV/DXUJFf4JyH0u4i4CyvXrD5aAAxRs1qxj8FU1
	QuSXRK+kKUIhN3Cw==
From: Fabiano Rosas <farosas@suse.de>
To: Vikram Garhwal <vikram.garhwal@amd.com>, qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org, vikram.garhwal@amd.com,
 stefano.stabellini@amd.com, Paolo Bonzini <pbonzini@redhat.com>,
 =?utf-8?Q?Marc-Andr=C3=A9?= Lureau <marcandre.lureau@redhat.com>,
 =?utf-8?Q?Daniel_P=2E_Berrang=C3=A9?=
 <berrange@redhat.com>, Thomas Huth <thuth@redhat.com>, Philippe
 =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>
Subject: Re: [QEMU][PATCH v6 10/10] meson.build: enable xenpv machine build
 for ARM
In-Reply-To: <20230411224746.16152-11-vikram.garhwal@amd.com>
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
 <20230411224746.16152-11-vikram.garhwal@amd.com>
Date: Wed, 12 Apr 2023 10:19:34 -0300
Message-ID: <87fs95ck61.fsf@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Vikram Garhwal <vikram.garhwal@amd.com> writes:

> Add CONFIG_XEN for aarch64 device to support build for ARM targets.
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
> ---
>  meson.build | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/meson.build b/meson.build
> index 52c3995c9d..eb5bb305ae 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -135,7 +135,7 @@ endif
>  if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
>    # i386 emulator provides xenpv machine type for multiple architectures
>  accelerator_targets += {
> -    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
> +    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu', 'aarch64-softmmu'],

I'm not familiar with Xen, so pardon my ignorance, but would it (ever)
make sense to do a 1:1 map of host architecture and qemu target? So we
don't have to deal with having a build on x86 pulling aarch64-softmmu
and vice-versa.

Do we expect both x86_64-softmmu and aarch64-softmmu binaries to be used
in the same host?

From xen-devel-bounces@lists.xenproject.org Wed Apr 12 13:36:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 13:36:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520226.807572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmadf-0005g7-O8; Wed, 12 Apr 2023 13:35:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520226.807572; Wed, 12 Apr 2023 13:35:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmadf-0005g0-LL; Wed, 12 Apr 2023 13:35:55 +0000
Received: by outflank-mailman (input) for mailman id 520226;
 Wed, 12 Apr 2023 13:35:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R0Xd=AD=linaro.org=alex.bennee@srs-se1.protection.inumbo.net>)
 id 1pmadf-0005fu-6Q
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 13:35:55 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f2058777-d936-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 15:35:53 +0200 (CEST)
Received: by mail-wm1-x329.google.com with SMTP id gw13so6549799wmb.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Apr 2023 06:35:53 -0700 (PDT)
Received: from zen.linaroharston ([85.9.250.243])
 by smtp.gmail.com with ESMTPSA id
 p4-20020a1c7404000000b003ee44b2effasm2413602wmc.12.2023.04.12.06.35.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Apr 2023 06:35:52 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id E78071FFB7;
 Wed, 12 Apr 2023 14:35:51 +0100 (BST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2058777-d936-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681306552; x=1683898552;
        h=content-transfer-encoding:mime-version:message-id:in-reply-to:date
         :subject:cc:to:from:user-agent:references:from:to:cc:subject:date
         :message-id:reply-to;
        bh=pp/vZNTH1CdYoakxWMjsbyXkrBnC6lsoL8fgM36jhHA=;
        b=UdnscHMEvQz45UkLik37L0hEIoYpItpzh0YlKZDl85OiVjf15e/Kusv7Q22Nt/B7Sg
         8U4bUJwS+rxvIUcFzhv6pw/xeOvQlhS/4KG0r+++aLXVNeR8BBGBM8ffToibmc7lxJPM
         vT0qxlkgOwxmtM0U68ttNyjdRl7SBaon2terdXX7UoK/9dJDyAHYxzcqjNVtP3HhnvRR
         0Zxw5ugxQcXH7EkWUIh3vkXLSiyyr7m1tYeC8hg2QKx1AMlF0AJ6xwx9E3D0r3f+oiO3
         Eb5vACUdhbCOiTSA+KErKHtsDc3JfFPxWP9+qCDfbkz7QMp1qcfNJE1TtE3BAOnAVms1
         MaYA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1681306552; x=1683898552;
        h=content-transfer-encoding:mime-version:message-id:in-reply-to:date
         :subject:cc:to:from:user-agent:references:x-gm-message-state:from:to
         :cc:subject:date:message-id:reply-to;
        bh=pp/vZNTH1CdYoakxWMjsbyXkrBnC6lsoL8fgM36jhHA=;
        b=Pt8fAX/trDB+yauhwf/a5kBhFrhJC18o/YiwtfK3V3QT8xw2+Jh4kZ7ijrFIwy7jen
         SQIBvk5i2PwSGW4Qoa1b/P3US9Im35b2U868r0qJesKTG+jZ2dy22IUJUAF0uqK7nU3k
         7LcXQISWM1QKvPMpIiH3IBwZSx3U+/ATmirEIJGYcLsWzl52RYGUFoxzE/9CFLDaeUoo
         UpQToQuoBacUvJ8u3FARxStsihs+8WqeGn1/TWh0w2HMZEKMj0tApNSSt+497HkKCc5Z
         SByab2v/d2uwCmEcwWN4keIZpddgI+uO6k/75mo+lQcK28qIbZo5QpjUqoywc0PS/kpq
         Vy6g==
X-Gm-Message-State: AAQBX9fn0iYgyIR3A1zLTmhC15N0g978nk2TWtdkMWPeBFHHFqqLaHyA
	rL99UPxG5YPCg8OIaLfRGKp1PA==
X-Google-Smtp-Source: AKy350ZMvSEee+J206l5ykzqOZYJksxor/SX5VB6FJI2pHRVAyrysQrLIuP9OKazZ7dvbgMZlvebZw==
X-Received: by 2002:a1c:ed18:0:b0:3ed:5eed:5581 with SMTP id l24-20020a1ced18000000b003ed5eed5581mr2256488wmh.2.1681306552665;
        Wed, 12 Apr 2023 06:35:52 -0700 (PDT)
References: <20230411224746.16152-1-vikram.garhwal@amd.com>
 <20230411224746.16152-11-vikram.garhwal@amd.com> <87fs95ck61.fsf@suse.de>
User-agent: mu4e 1.10.0; emacs 29.0.90
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: Fabiano Rosas <farosas@suse.de>
Cc: xen-devel@lists.xenproject.org, vikram.garhwal@amd.com,
 stefano.stabellini@amd.com, Paolo Bonzini <pbonzini@redhat.com>,
 =?utf-8?Q?Marc-Andr=C3=A9?= Lureau <marcandre.lureau@redhat.com>,
 =?utf-8?Q?Daniel_P=2E_Berrang=C3=A9?=
 <berrange@redhat.com>, Thomas Huth <thuth@redhat.com>, Philippe
  =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, qemu-devel@nongnu.org
Subject: Re: [QEMU][PATCH v6 10/10] meson.build: enable xenpv machine build
 for ARM
Date: Wed, 12 Apr 2023 14:32:03 +0100
In-reply-to: <87fs95ck61.fsf@suse.de>
Message-ID: <87pm89nryg.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Fabiano Rosas <farosas@suse.de> writes:

> Vikram Garhwal <vikram.garhwal@amd.com> writes:
>
>> Add CONFIG_XEN for aarch64 device to support build for ARM targets.
>>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
>> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
>> ---
>>  meson.build | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/meson.build b/meson.build
>> index 52c3995c9d..eb5bb305ae 100644
>> --- a/meson.build
>> +++ b/meson.build
>> @@ -135,7 +135,7 @@ endif
>>  if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
>>    # i386 emulator provides xenpv machine type for multiple architectures
>>    accelerator_targets += {
>> -    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
>> +    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu', 'aarch64-softmmu'],
>
> I'm not familiar with Xen, so pardon my ignorance, but would it (ever)
> make sense to do a 1:1 map of host architecture and qemu target? So we
> don't have to deal with having a build on x86 pulling aarch64-softmmu
> and vice-versa.
>
> Do we expect both x86_64-softmmu and aarch64-softmmu binaries to be used
> in the same host?

Xen is different from the other accelerators in that it isn't really
guest-CPU aware. It is merely an I/O device emulation backend, albeit
one that supports non-paravirtualised guests on x86. But you are right
that using qemu-system-i386 as a backend on aarch64 hosts does cause
some cognitive dissonance for users. For aarch64 hosts we would only
support VirtIO guests.

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 15:03:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 15:03:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520242.807617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmbzo-0006jq-Ke; Wed, 12 Apr 2023 15:02:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520242.807617; Wed, 12 Apr 2023 15:02:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmbzo-0006jj-Hm; Wed, 12 Apr 2023 15:02:52 +0000
Received: by outflank-mailman (input) for mailman id 520242;
 Wed, 12 Apr 2023 15:02:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmbzm-0006jY-V7; Wed, 12 Apr 2023 15:02:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmbzm-0003zH-TT; Wed, 12 Apr 2023 15:02:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmbzm-0005hl-CY; Wed, 12 Apr 2023 15:02:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmbzm-0003Ry-C4; Wed, 12 Apr 2023 15:02:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IXE4p6CRxNgxxkFLgwNbXuDtLKhZwUYTZdPeTqYO48M=; b=QTqdmbGNkSTAsgcVgnjG1pU7yh
	JA+UBsLtTIGe7rvVWzaY01fiqd9pnL6v8NzpYxH4bxjqByIza4HsWj22ZtOoVM8yV0YmJ0Kj5LkrI
	L/b3YhJcZzKaQC3VhhE08skR1cuS1Z5oK/dE2+XGCI/b+3r31WBp5fFwTht3UU3+/S58=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180217-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 180217: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-4.14-testing:build-armhf:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=622675cdbc5f249bddfe970054f43a867d3ebed0
X-Osstest-Versions-That:
    xen=e49571868d67944b9f4a546ade130e0b6e506b65
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Apr 2023 15:02:50 +0000

flight 180217 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180217/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 179840
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 179840
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 179840
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 179840
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 179840
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 179840
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 179840
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 179840
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 179840
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  622675cdbc5f249bddfe970054f43a867d3ebed0
baseline version:
 xen                  e49571868d67944b9f4a546ade130e0b6e506b65

Last test of basis   179840  2023-03-21 12:36:28 Z   22 days
Testing same since   180217  2023-04-12 08:06:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e49571868d..622675cdbc  622675cdbc5f249bddfe970054f43a867d3ebed0 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 15:14:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 15:14:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520248.807627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmcAX-0008Fe-LH; Wed, 12 Apr 2023 15:13:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520248.807627; Wed, 12 Apr 2023 15:13:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmcAX-0008FX-II; Wed, 12 Apr 2023 15:13:57 +0000
Received: by outflank-mailman (input) for mailman id 520248;
 Wed, 12 Apr 2023 15:13:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xrgk=AD=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pmcAV-0008FR-D3
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 15:13:55 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a1ecc93e-d944-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 17:13:53 +0200 (CEST)
Received: from MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
 by IA0PR12MB7603.namprd12.prod.outlook.com (2603:10b6:208:439::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.36; Wed, 12 Apr
 2023 15:13:48 +0000
Received: from MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea]) by MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea%6]) with mapi id 15.20.6298.028; Wed, 12 Apr 2023
 15:13:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1ecc93e-d944-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=M4CJLINdkooA43MQr8C58k69DehRV4CFeX4n8Jo1c2g05nrDNrSaIbqbccDOZ9OLXpE6j9S6ERS3zFYP7x6840neJWQn52JF4KqVXO9t5sHCorRkYpYkbUxcjlPd9wZyV0XU6MkdXfhJlgXA8vNn9A8mR5VcIAdeChoPQbTeGbpYGdGdMm3JpHzufuOWAP6Tlei5Qsk4vfKnwhUHTzonxvFmezj6l+/45tQcyxBmuh1/KEG+rkEhCGdJdC7SAyUD/2z4XOxqcRY6vm/is07wK3Ru/fjH/J4JHp4WO40MTvvZ3TdsHhuDO+pVRWrXHIF6SQH00M0p+gDZJXiktfjA5g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=UTQlqOpWjyxi5i6sQJNA7jdhRycZqCJb3WK+QddfC3I=;
 b=VMAL1Kkq6kZKfpEeKq3x+yrwNKqvNbg5G/y8cy2QSEqsJ2dpI95FNFP/5KyeHp8OJvyXYmfGTxTIbgQ59sDRYB20LZeLKODuWqLLWLvPtd2Nu9WL1igATrw4aKli6CScZUyVRQFdpJWyGnG6Vr8e0izcv5SplMdcX8pTX/gROhsIAOXtY9D7WsOaV/bUD5STHx2eEr7X9iP9ScBgNM6DBnhNjrmZ7PPm+Phi4UOqYojla1dVUwGHsv8+UPcWKtiwjVXimoyXFcJIUCPBEXCuaxsWsxaIKWVlZSbBuIaNvzAMB7ZeuDUsYpAWFsdnf/uUsN9E/zlFnmuVJW+zFTqNHQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UTQlqOpWjyxi5i6sQJNA7jdhRycZqCJb3WK+QddfC3I=;
 b=a5bo6F9I5l+hiKp3LVPTQFE81zcwCrIxmxGj7GG0hFXzz5duJ6a0PxQRB3iTiAijVuMiuDJ/vSJwnx6BIlaKwpoJ7wCpQO2WQm0e8aJXVvaqirC9BXkFgHMS+7nATarFRhbLl0ZWM5ujZLvLWeHmagqSaUMAyloBGVD9qpbhFK0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <41e4a106-c06b-ea29-4be4-ac46b561e17a@amd.com>
Date: Wed, 12 Apr 2023 08:12:03 -0700
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [QEMU][PATCH] gitlab-ci.d/crossbuilds: Drop the '--disable-tcg'
 configuration for xen
Content-Language: en-US
To: Fabiano Rosas <farosas@suse.de>, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
 stefano.stabellini@amd.com, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, Thomas Huth <thuth@redhat.com>,
 Wainer dos Santos Moschetta <wainersm@redhat.com>,
 Beraldo Leal <bleal@redhat.com>
References: <20230411210422.24255-1-vikram.garhwal@amd.com>
 <877cuhpg1z.fsf@linaro.org> <87ile1clg8.fsf@suse.de>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <87ile1clg8.fsf@suse.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: SJ0PR05CA0203.namprd05.prod.outlook.com
 (2603:10b6:a03:330::28) To MW3PR12MB4409.namprd12.prod.outlook.com
 (2603:10b6:303:2d::23)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: MW3PR12MB4409:EE_|IA0PR12MB7603:EE_
X-MS-Office365-Filtering-Correlation-Id: 2c649a81-fd7a-4467-ac5c-08db3b6883d4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2c649a81-fd7a-4467-ac5c-08db3b6883d4
X-MS-Exchange-CrossTenant-AuthSource: MW3PR12MB4409.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Apr 2023 15:13:48.0904
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yU0jEEMPNN6yexGevDFCMMIsBFrUeKxOvW0ntI0xa3+WzwJ7go/mSw66MdUof4ihDtP46dJhEWLrvhJkYmPiOQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB7603

Hi all,

Yes, the gdbstub build issue is only for the aarch64-softmmu target, and
that was the reason for this patch. x86 targets build fine with the
"--disable-tcg" option.

Thanks Fabiano & Philippe for sharing the existing patch series for this.

Regards,

Vikram



On 4/12/23 5:51 AM, Fabiano Rosas wrote:
> Alex Bennée <alex.bennee@linaro.org> writes:
>
>> Vikram Garhwal <vikram.garhwal@amd.com> writes:
>>
>>> Xen is supported for aarch64 via xenpvh machine. disable-tcg option fails the
>>> build for aarch64 target.
>>>
>>> Link for xen on arm patch series: https://mail.gnu.org/archive/html/qemu-devel/2023-02/msg03979.html
>>>
>>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>>> ---
>>>   .gitlab-ci.d/crossbuilds.yml | 4 ++--
>>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/.gitlab-ci.d/crossbuilds.yml b/.gitlab-ci.d/crossbuilds.yml
>>> index 61b8ac86ee..6867839248 100644
>>> --- a/.gitlab-ci.d/crossbuilds.yml
>>> +++ b/.gitlab-ci.d/crossbuilds.yml
>>> @@ -186,7 +186,7 @@ cross-amd64-xen-only:
>>>     variables:
>>>       IMAGE: debian-amd64-cross
>>>       ACCEL: xen
>>> -    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
>>> +    EXTRA_CONFIGURE_OPTS: --disable-kvm
>> x86 should handle --disable-tcg fine.
>>
>>>   
>>>   cross-arm64-xen-only:
>>>     extends: .cross_accel_build_job
>>> @@ -195,4 +195,4 @@ cross-arm64-xen-only:
>>>     variables:
>>>       IMAGE: debian-arm64-cross
>>>       ACCEL: xen
>>> -    EXTRA_CONFIGURE_OPTS: --disable-tcg --disable-kvm
>>> +    EXTRA_CONFIGURE_OPTS: --disable-kvm
>> Currently this builds qemu-system-i386, but with your changes and the
>> work Fabiano is doing:
>>
>>    Message-Id: <20230313151058.19645-1-farosas@suse.de>
>>    Date: Mon, 13 Mar 2023 12:10:48 -0300
>>    Subject: [PATCH v9 00/10] target/arm: Allow CONFIG_TCG=n builds
>>    From: Fabiano Rosas <farosas@suse.de>
>>
>> We should be able to have a qemu-system-aarch64 supporting Xen without TCG
> The build should already be working on current master after Philippe
> fixed the gdbstub issues. My remaining patches fix tests and general
> runtime issues. I just sent v10 to the list.



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 15:49:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 15:49:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520253.807637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmciL-0003Gt-6C; Wed, 12 Apr 2023 15:48:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520253.807637; Wed, 12 Apr 2023 15:48:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmciL-0003Gm-3J; Wed, 12 Apr 2023 15:48:53 +0000
Received: by outflank-mailman (input) for mailman id 520253;
 Wed, 12 Apr 2023 15:48:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YunU=AD=rabbit.lu=slack@srs-se1.protection.inumbo.net>)
 id 1pmciJ-0003Gf-GC
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 15:48:51 +0000
Received: from mail-wm1-x330.google.com (mail-wm1-x330.google.com
 [2a00:1450:4864:20::330])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 84108b4c-d949-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 17:48:49 +0200 (CEST)
Received: by mail-wm1-x330.google.com with SMTP id
 n19-20020a05600c501300b003f064936c3eso11523826wmr.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Apr 2023 08:48:49 -0700 (PDT)
Received: from [192.168.2.1] (82-64-138-184.subs.proxad.net. [82.64.138.184])
 by smtp.googlemail.com with ESMTPSA id
 p23-20020a1c7417000000b003f0824e8c92sm2785131wmc.7.2023.04.12.08.48.48
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Apr 2023 08:48:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84108b4c-d949-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=rabbit-lu.20210112.gappssmtp.com; s=20210112; t=1681314528; x=1683906528;
        h=content-transfer-encoding:content-language:to:subject:from
         :user-agent:mime-version:date:message-id:from:to:cc:subject:date
         :message-id:reply-to;
        bh=6LBfq31O2AAEWRG4ywnAOsuD4uMl32CAztINfYySm/o=;
        b=FomN/lr86BLNsr3Y6cWC+1+pMbxorYG9R719VX9WrNvHZJTeZ4y4gy7Hxc8h0IuPFv
         Obb5gp+sgDGmu6fI2h9Rz36/Vq/uhoDSvPzjHhTLKNDZlT0W1kTOVIHy3YeSIgD+907Y
         k2+cAIZRcpzgLey++BfSxQysBcaqm83FKMdY6AuRlgt6RfCdRjWdhw/vZBQwzgjSYUyD
         8a1KSEZ4kug8fGGNzrdEKpV5KssgDq7w6Zdd2Z8HeUmAz+wJnzlXZKE0auPiSbIM4uFX
         YmzCz31IcXt+TMYZolqT+Ca4fen/pcrZBDE4f6dJaKqkln/Mf4IWh6EhbJ9GV9D2q9Rx
         K4Fg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1681314528; x=1683906528;
        h=content-transfer-encoding:content-language:to:subject:from
         :user-agent:mime-version:date:message-id:x-gm-message-state:from:to
         :cc:subject:date:message-id:reply-to;
        bh=6LBfq31O2AAEWRG4ywnAOsuD4uMl32CAztINfYySm/o=;
        b=g03A9ZG63od65j7Q9eoLTZmAX2zSPIHay1V+5IStWVh+/B4FfEi16M4zJ0LZJKuNf1
         2qhzo92DFwQ/KrbxBRp69mv5JBDKUAlJDYn9+s780xDpXhjt3QHOZ5gEWTmoTEV0b2V5
         yxEQhN2b9VSjbOqKR45DkX5vagTurWZsxavzEDGVq+SJPYFWgxTg/s024UOIietztrg9
         dUCd7OcFXPAYg7n9nbdnzS1Je3UWQYFwOdqRe4Ntb2yqx0mHUupQFO7bjEWDZBfWhYa1
         CI/KU05eLbw90FhyWpZVzdmH1CskUGqw6bt9WMDEn3VgJNP+pz3y/LNRGrlxW1r8e05J
         3V5w==
X-Gm-Message-State: AAQBX9d1UZ3A2AMD2s79vM04McIr/tZc5aXNfHfqPpn8FgRxWDzVNe3t
	k1mL+GdFQiZPXhwvjggXV1UzIN/jDfmsxCQFxfI=
X-Google-Smtp-Source: AKy350ZulnTz82F7LR/eWwDJDfiN8TI/mkgCMOD78h2qvnKeIWRImZdwnFRQOMiUurBImH3ktPxG+g==
X-Received: by 2002:a7b:c4c8:0:b0:3f0:9b53:b997 with SMTP id g8-20020a7bc4c8000000b003f09b53b997mr2609182wmk.35.1681314528622;
        Wed, 12 Apr 2023 08:48:48 -0700 (PDT)
Message-ID: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu>
Date: Wed, 12 Apr 2023 17:48:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
From: zithro <slack@rabbit.lu>
Subject: xenstored: EACCESS error accessing control/feature-balloon 1
To: xen-devel@lists.xenproject.org
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi all,

this is what I have in "xenstored-access.log" in dom0 :

[20230411T23:48:27.917Z]  D5         write     control/feature-balloon 1
[20230411T23:48:27.917Z]  D5         error     EACCES
[20230411T23:48:27.923Z]  D5         write     data/updated Wed Apr 12 01:48:27 CEST 2023

It happens once each minute, on two different hosts, both amd64.
(both hosts using the same config, only the hardware differs).

I tried looking for a similar bug, but didn't find one.
I apologize in advance if this error is already known, or if this is not
the right place to report it!

-----------------------
Technical infos
-----------------------
dom0s:

Debian stable, kernel 5.10.0-21-amd64
Xen 4.14.5
xl.conf has : autoballoon="0"
GRUB_CMDLINE_XEN="dom0_mem=2048M,max:2048M dom0_max_vcpus=4 
dom0_vcpus_pin loglvl=all guest_loglvl=all ucode=scan iommu=verbose"
Running "xenstore-ls -f -p | grep balloon" returns no result
-----------------------
domUs (D5 in above logs):

HVM TrueNAS Core, based on FreeBSD 13.1-RELEASE-p7
(it also happened on previous FreeBSD releases, but I don't remember when
it started, as the logs have been filled and rotated).
In the cfg files, using either the same value for "memory" and "maxmem" or
only setting "memory" gives the same result.

What's strange is that I have xen* commands in FreeNAS :

xen-detect        xenstore-control  xenstore-ls       xenstore-watch
xenstore          xenstore-exists   xenstore-read     xenstore-write
xenstore-chmod    xenstore-list     xenstore-rm

root@truenas[~]# xenstore-ls
xenstore-ls: xs_directory (/): Permission denied

root@truenas[~]# ps aux
root   [...]     0:36.98 [xenwatch]
root   [...]     0:01.01 [xenstore_rcv]
root   [...]     0:00.00 [balloon]
root   [...]     0:01.74 /bin/sh /usr/local/sbin/xe-daemon -p 
/var/run/xe-daemon.pid
[...]

The xe-daemon also looks strange; I don't use XenServer/XCP-ng, only
"raw" Xen.
And this script which hand

I also use pfSense domUs (also based on FreeBSD), but they don't exhibit
this behaviour (i.e. no xenstore access errors in dom0, no xen*
commands in the domU).

So is this a problem with TrueNAS rather than with Xen?
If so, I apologize for wasting your time.

Thanks, have a nice day !
(and as it's my first post here: thx for Xen, it rocks)

zithro


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 15:59:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 15:59:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520258.807647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmcsm-0004pz-9k; Wed, 12 Apr 2023 15:59:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520258.807647; Wed, 12 Apr 2023 15:59:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmcsm-0004ps-6f; Wed, 12 Apr 2023 15:59:40 +0000
Received: by outflank-mailman (input) for mailman id 520258;
 Wed, 12 Apr 2023 15:59:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/N7h=AD=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pmcsl-0004pm-Dj
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 15:59:39 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 063aafaa-d94b-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 17:59:37 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 9D1F721852;
 Wed, 12 Apr 2023 15:59:36 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 8400613498;
 Wed, 12 Apr 2023 15:59:36 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id rN2CHmjVNmQVQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 12 Apr 2023 15:59:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 063aafaa-d94b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1681315176; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=r2sDQFE0958jm+r9RCHSMtJJSFOUbPg85D4fX12sQKU=;
	b=Wovnyqf/iXpjFBFaKebkCa3aM60LRrj7jXW15Fc0nEtDrvGELbhpIMGubaBU+St7eETdzA
	RBx4ecleQ98HMfetMrl2W8i1895WY40lUnEMlfD75jLXmhaRW2eSe/bxtQHgujcKq+JykS
	5d5Zd+zDhjBjncF4lMV5taqerszA54Q=
Message-ID: <4df0a9b9-db58-c25e-f166-878efd53e5d1@suse.com>
Date: Wed, 12 Apr 2023 17:59:36 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: xenstored: EACCESS error accessing control/feature-balloon 1
Content-Language: en-US
To: zithro <slack@rabbit.lu>, xen-devel@lists.xenproject.org
References: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------9yO0uIg1vua9CFsYKNPhS6qZ"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------9yO0uIg1vua9CFsYKNPhS6qZ
Content-Type: multipart/mixed; boundary="------------p3P0VdE0yxKc78Ow5j7oQ4Vz";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: zithro <slack@rabbit.lu>, xen-devel@lists.xenproject.org
Message-ID: <4df0a9b9-db58-c25e-f166-878efd53e5d1@suse.com>
Subject: Re: xenstored: EACCESS error accessing control/feature-balloon 1
References: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu>
In-Reply-To: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu>

--------------p3P0VdE0yxKc78Ow5j7oQ4Vz
Content-Type: multipart/mixed; boundary="------------yL0GrpwVDbBsGQJmvbglSETD"

--------------yL0GrpwVDbBsGQJmvbglSETD
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 12.04.23 17:48, zithro wrote:
> Hi all,
> 
> this is what I have in "xenstored-access.log" in dom0 :
> 
> [20230411T23:48:27.917Z]  D5         write     control/feature-balloon 1
> [20230411T23:48:27.917Z]  D5         error     EACCES
> [20230411T23:48:27.923Z]  D5         write     data/updated Wed Apr 12 01:48:27 CEST 2023
> 
> It happens once each minute, on two different hosts, both amd64.
> (both hosts using the same config, only the hardware differs).
> 
> I tried looking up for a similar bug, but didn't find one.
> I apologize in advance if this error is known, and if this is not the
> place to report this !

This is normal behavior. A guest isn't allowed to create random new nodes
below "control", and "feature-balloon" isn't created by the Xen tools, so
the guest can't write "control/feature-balloon".


Juergen
--------------yL0GrpwVDbBsGQJmvbglSETD
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------yL0GrpwVDbBsGQJmvbglSETD--

--------------p3P0VdE0yxKc78Ow5j7oQ4Vz--

--------------9yO0uIg1vua9CFsYKNPhS6qZ
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmQ21WgFAwAAAAAACgkQsN6d1ii/Ey8X
iAf9FWt+pJn+P3pI4pDGa5L4WrLIfxNPj1Wc24Oeo6PzGbsIh0fvb/dSNKP/RexYJqZi2huTIGrk
aNj2iKgZB7lf7PtiehIs2Uy8zw9FYsecgOT6E8bzWY0QBtSr5e1zCXTEv2AqqmtdattYis8OcyOK
OGLt+coIZQzSHBmW8inGIcUeonemsIChszRxUk9w0osBO/V1ZzyKtYsKosEHC/pOun/EOh4bF3tQ
D1lp8iLAKFsAiLL1jlo5PAg8MgPdcJrA+/4BuWv2HesUeDLGq/A0zR+1f/GlDwwpw+6AyjmZVVLZ
Eeic7OnZKtMuO1in2WKOq2FLFQfiVAioK+zxZUhkwQ==
=4QLz
-----END PGP SIGNATURE-----

--------------9yO0uIg1vua9CFsYKNPhS6qZ--


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 16:06:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 16:06:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520262.807657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmcz2-0006qo-Ud; Wed, 12 Apr 2023 16:06:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520262.807657; Wed, 12 Apr 2023 16:06:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmcz2-0006qh-Rq; Wed, 12 Apr 2023 16:06:08 +0000
Received: by outflank-mailman (input) for mailman id 520262;
 Wed, 12 Apr 2023 16:06:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KPvI=AD=citrix.com=prvs=459801679=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pmcz1-0006qb-Ui
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 16:06:08 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ebeb381b-d94b-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 18:06:04 +0200 (CEST)
Received: from mail-dm6nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.102])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 Apr 2023 12:06:00 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA1PR03MB6578.namprd03.prod.outlook.com (2603:10b6:806:1c9::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Wed, 12 Apr
 2023 16:05:57 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6277.036; Wed, 12 Apr 2023
 16:05:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebeb381b-d94b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681315564;
  h=message-id:date:subject:to:references:cc:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=3nmMro4PyJ8UWY3AplUrckhKfWxJt0OMYrLX6n8kRDo=;
  b=Tx6w8H4mMYs26HOioEDAqcJcCd3hl88QHxooP1h2k+b67LNJG+6Sn/CY
   PQNeo/ibKbl0BfR6SGV7ZDOPfSYU3rYSgvW78dGQ0J2nKQeIdfSXABvH/
   s3D16jseiTDm+6cG5AdKVcxn9VzK5NGNtdBROSCOg8RbBV9uv0oyqMilr
   M=;
X-IronPort-RemoteIP: 104.47.58.102
X-IronPort-MID: 105166298
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CQVBIj64aHWA7MEPIbA8jS4FvaFS+BznGRXbhDcku+/DwzRFvHMHkzpiOobnulYiKNc8tx6W5Q9m7VhFy/5ULZlNlANZAfVdjJlZ7DASsdRtc4PRdlniAiJeGB69HS2I/FTz2GzFtr2Vd9mA8OAdzWI/Gd8KBfEKB6Okfo4ZJtsHvxn3yfrSy2D4+3tRWNLho0WX4nQW6sMjT4WNCCXat79Z/VIJ9qU6AKKFpS1Dh4DvzPezwt7MyRWe7m4KBkqHXrfgpfrOWpznA0WeOJQZwopKB1cBDZJ+d4UDbRn21vZb+hjZZzGJ20zlL4fYCW571u1dEsg/anZbd1YdD5TSuw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3nmMro4PyJ8UWY3AplUrckhKfWxJt0OMYrLX6n8kRDo=;
 b=ATc1Au19UECefjVk+BDkfDN+pjalYbO5VMu+L0vibi0JItc3VHwNKLfvQJQjWE0O7v0m5AuHoUrjLtUPOkDNsdk0kwIDv7quW21q8JgiBTUz2YoQUNrH1PPyAsw7keQ4tHuQnhyhWX+BplcHuMvuAM2xxWpaWkmhT/Gb6NNbf/vemc/vfrpthqJOjpf+mFTG5AOzBzUWwkt8h0ziQqu5qb2Hk7lIUAfei7hDo8CEw2v4TsMMegoJqTEloZZFZi3GLkcUkE8nAeUbfNeUuvuBHLAVmR2D7Iqsj11WQBC61MkxJf1klJ8WN6Rhq8FwJMAcqLe1Ikng4qWWcrkKuKHbVA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3nmMro4PyJ8UWY3AplUrckhKfWxJt0OMYrLX6n8kRDo=;
 b=qTaXcxgyXbX3JMdS+juZaD+snZ58HHFoq3BnQOh8beBNP7yzq4wnPemARkWed+Nwj+V9N8t1kvIEYihhcqhswSzVA490FZLRfTIqiMytwpMC1+Mo5nsxrEIPehUxryeSVamww9peBjAjZ/xCwfCGiyN9N63bFAL2T9R7QRfURaw=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <474b531f-2067-a5d4-8b01-5b8ef1f7061d@citrix.com>
Date: Wed, 12 Apr 2023 17:05:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: xenstored: EACCESS error accessing control/feature-balloon 1
Content-Language: en-GB
To: zithro <slack@rabbit.lu>, xen-devel@lists.xenproject.org
References: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu>
Cc: Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>,
 =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edwin.torok@cloud.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO6P265CA0014.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:339::12) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SA1PR03MB6578:EE_
X-MS-Office365-Filtering-Correlation-Id: a70278b2-05f6-459b-83d4-08db3b6fcc82
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
 =?utf-8?B?bG5ZYkE1eUYrV3c3cm0vdDNZSG9ZakY4VDJlejFLRW0rQTlTenpia2tvSHpt?=
 =?utf-8?B?NTk3MWpxaEFzUlpjdmxobmFIaVpQdUtHbGYxajZqUWd4MVZrNklod1E2N1Nt?=
 =?utf-8?B?OVE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	RnBonBBgzMDqYMRnYvZ0WQIkI/TQKDNvUc7oYaNSWmaSZOBVjFaywTSkK+qunOHDqGpN0YDmTZn3sb9k/xytO1uBeZNREcBh7liJPsZyydLZAW3KRqR/nGLM1lrEjlmtiu4TT7lbuaCv4Ija1/JBeXqGNKSfXb9wXs1Wj2RtANovENPlJi46p/X8yNbaNYGb9Y26cokWKP2aHa3aNBihLZwDSV9zXtD5eMQaVWssRu0c7GoBEtzeOrvgPN80cZHE/VPjQPTjzIez1guzim6gg5sW/xYn66dUsJQCoOLi5gF1UTEfJ39NCli21KyQwJwJm17gpk+K1u4KJAzkwnnChTdtnVtgDZsai2WoMbmjvwbixITfwyPd+51cmwcRlaAiMx/x8mjOj7MNVrHFOjC332U2vGOqso3kMyX7yZ7xACOG/HrsBJNa4xUMJz3k3O9MKnWtZIksADpwpx0lPWEUQEdlT/mKKJ3XDytW/6aH0UEWDlzUb8WDmegM6DbFuvCCXZdcieRieD82twBFIQaQEStyh8YnjUrfzg2x6XO1QBQqihZ9WdNLzuW1XKTGFm17dQuyVHd4netlKK3L1n89ZB7TWFgA+K4nL32pUti4kppShhDxwD11KFXIun4olz2GGrSU7uTvEvjGfNPYlMEADjiifycl0A2LJqcBqhtQTUrWFSfZuCAA4xf1Fa96N0/C3SMqSJ1atVfL1TGJJ0hPGxjmEU4CV2+9m9l7t78toq7P78DyHpnkWZyiiqo8TYf+1OWppgWULQEPgswj/f8rZopW1JwrggIn1DABcKP6giFx0/+zmwjbpWpH1dPN1d75eyBYGDnf9Ff77tB9A4iQGaW3coFBSMtH8ajl5/6ZPLFwiGuCWgNCrKA+LHeFonGD
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a70278b2-05f6-459b-83d4-08db3b6fcc82
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 Apr 2023 16:05:56.9254
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MYRqNDyvry0ajFYRGGJkiNGd1fBfyDKInasa2Ty5eiEZOLKM4Vd/Cnm/RVmYmxS6GKyuK8VAZCsQAe0OidUV6lFrZBw9KTf7EyQ0pOxGL88=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6578

On 12/04/2023 4:48 pm, zithro wrote:
> Hi all,
>
> this is what I have in "xenstored-access.log" in dom0 :
>
> [20230411T23:48:27.917Z]  D5         write     control/feature-balloon 1
> [20230411T23:48:27.917Z]  D5         error     EACCES
> [20230411T23:48:27.923Z]  D5         write     data/updated Wed Apr 12
> 01:48:27 CEST 2023
>
> It happens once each minute, on two different hosts, both amd64.
> (both hosts using the same config, only the hardware differs).
>
> I tried looking for a similar bug, but didn't find one.
> I apologize in advance if this error is known, and if this is not the
> place to report this !
>
> -----------------------
> Technical infos
> -----------------------
> dom0s:
>
> Debian stable, kernel 5.10.0-21-amd64
> Xen 4.14.5
> xl.conf has : autoballoon="0"
> GRUB_CMDLINE_XEN="dom0_mem=2048M,max:2048M dom0_max_vcpus=4
> dom0_vcpus_pin loglvl=all guest_loglvl=all ucode=scan iommu=verbose"
> Running "xenstore-ls -f -p | grep balloon" returns no result
> -----------------------
> domUs (D5 in above logs):
>
> HVM TrueNAS Core, based on FreeBSD 13.1-RELEASE-p7
> (it happened also on previous FreeBSD releases, but I don't remember when
> it started, logs have been filled and rotated).
> In cfg files, using either the same value for "memory" and "maxmem" or
> only setting "memory" gives the same results.
>
> What's strange is that I have xen* commands in FreeNAS :
>
> xen-detect        xenstore-control  xenstore-ls       xenstore-watch
> xenstore          xenstore-exists   xenstore-read     xenstore-write
> xenstore-chmod    xenstore-list     xenstore-rm
>
> root@truenas[~]# xenstore-ls
> xenstore-ls: xs_directory (/): Permission denied
>
> root@truenas[~]# ps aux
> root   [...]     0:36.98 [xenwatch]
> root   [...]     0:01.01 [xenstore_rcv]
> root   [...]     0:00.00 [balloon]
> root   [...]     0:01.74 /bin/sh /usr/local/sbin/xe-daemon -p
> /var/run/xe-daemon.pid
> [...]
>
> The xe-daemon looks strange also, I don't use XenServer/XCP-ng, only
> "raw" Xen.
> And this script which hand
>
> I also use pfSense domUs (based on FreeBSD), but they don't exhibit
> this behaviour (i.e. no xenstore access error in dom0, no xen*
> commands in domU).
>
> So is this a problem with TrueNAS rather than with Xen ?
> If so I apologize for wasting your time.
>
> Thanks, have a nice day !
> (and as it's my first post here: thx for Xen, it rocks)

Hello,

(Leaving the full report intact so CC'd people can see it whole)

Yes, it is TrueNAS trying to re-write that file every minute.  It
appears that TrueNAS has inherited (from Debian?) a rather old version
of https://github.com/xenserver/xe-guest-utilities/

https://xenbits.xen.org/docs/unstable/misc/xenstore-paths.html doesn't
list feature-balloon as a permitted feature node.

But, I suspect that it used to be the case that guests could write
arbitrary feature nodes, and I suspect we tightened the permissions in a
security fix to reduce worst-case memory usage of xenstored.

I suspect the best (/least bad) thing to do here is to formally introduce
feature-balloon as a permitted node, and have the toolstack initialise it
to "" like we do with all other nodes, after which TrueNAS ought to be
able to set it successfully and not touch it a second time.
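For illustration, the failure and the proposed fix can be modelled with a
small Python sketch.  This is NOT the real xenstored code or its API, just
a toy permission model; the domid (5) and node paths are taken from the
report above, and the parent-permission rule for node creation is an
assumption made for the sketch.

```python
# Toy model of xenstore-like node permissions (illustrative only).

class Node:
    def __init__(self, owner, writers):
        self.owner = owner          # domid that owns the node
        self.writers = set(writers) # extra domids allowed to write
        self.value = ""

class Store:
    def __init__(self):
        self.nodes = {}

    def mkdir(self, path, owner, writers=()):
        self.nodes[path] = Node(owner, writers)

    def _may_write(self, domid, node):
        return domid == node.owner or domid in node.writers

    def write(self, domid, path, value):
        node = self.nodes.get(path)
        if node is None:
            # Creating a new node requires write access on the parent
            # (assumed rule for this sketch).
            parent = path.rsplit("/", 1)[0]
            pnode = self.nodes.get(parent)
            if pnode is None or not self._may_write(domid, pnode):
                raise PermissionError("EACCES")
            self.mkdir(path, owner=pnode.owner, writers=pnode.writers)
            node = self.nodes[path]
        elif not self._may_write(domid, node):
            raise PermissionError("EACCES")
        node.value = value

store = Store()
store.mkdir("/local/domain/5/control", owner=0)   # dom0-owned, guest read-only

# The guest (domid 5) tries to create control/feature-balloon itself,
# matching the EACCES entries in xenstored-access.log:
try:
    store.write(5, "/local/domain/5/control/feature-balloon", "1")
except PermissionError as e:
    print(e)  # EACCES

# Toolstack (dom0) pre-creates the node empty with guest-write permission;
# the guest's write then succeeds and need not be retried:
store.mkdir("/local/domain/5/control/feature-balloon", owner=0, writers=(5,))
store.write(5, "/local/domain/5/control/feature-balloon", "1")
print(store.nodes["/local/domain/5/control/feature-balloon"].value)  # 1
```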

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 16:41:58 2023
From: Yann Dirson <yann.dirson@vates.fr>
Subject: Re: xenstored: EACCESS error accessing control/feature-balloon 1
Message-Id: <678df1ff-df18-b063-eda3-2a1aed6d40f8@vates.fr>
To: xen-devel@lists.xenproject.org
References: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu> <474b531f-2067-a5d4-8b01-5b8ef1f7061d@citrix.com>
In-Reply-To: <474b531f-2067-a5d4-8b01-5b8ef1f7061d@citrix.com>
Date: Wed, 12 Apr 2023 16:41:44 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hi all,

On 4/12/23 18:05, Andrew Cooper wrote:
> On 12/04/2023 4:48 pm, zithro wrote:
>> Hi all,
>>
>> this is what I have in "xenstored-access.log" in dom0 :
>>
>> [20230411T23:48:27.917Z]  D5         write     control/feature-balloon 1
>> [20230411T23:48:27.917Z]  D5         error     EACCES
>> [20230411T23:48:27.923Z]  D5         write     data/updated Wed Apr 12
>> 01:48:27 CEST 2023
>>
>> It happens once each minute, on two different hosts, both amd64.
>> (both hosts using the same config, only the hardware differs).
>>
>> I tried looking for a similar bug, but didn't find one.
>> I apologize in advance if this error is known, and if this is not the
>> place to report this !
>>
>> -----------------------
>> Technical infos
>> -----------------------
>> dom0s:
>>
>> Debian stable, kernel 5.10.0-21-amd64
>> Xen 4.14.5
>> xl.conf has : autoballoon="0"
>> GRUB_CMDLINE_XEN="dom0_mem=2048M,max:2048M dom0_max_vcpus=4
>> dom0_vcpus_pin loglvl=all guest_loglvl=all ucode=scan iommu=verbose"
>> Running "xenstore-ls -f -p | grep balloon" returns no result
>> -----------------------
>> domUs (D5 in above logs):
>>
>> HVM TrueNAS Core, based on FreeBSD 13.1-RELEASE-p7
>> (it happened also on previous FreeBSD releases, but I don't remember when
>> it started, logs have been filled and rotated).
>> In cfg files, using either the same value for "memory" and "maxmem" or
>> only setting "memory" gives the same results.
>>
>> What's strange is that I have xen* commands in FreeNAS :
>>
>> xen-detect        xenstore-control  xenstore-ls       xenstore-watch
>> xenstore          xenstore-exists   xenstore-read     xenstore-write
>> xenstore-chmod    xenstore-list     xenstore-rm
>>
>> root@truenas[~]# xenstore-ls
>> xenstore-ls: xs_directory (/): Permission denied
>>
>> root@truenas[~]# ps aux
>> root   [...]     0:36.98 [xenwatch]
>> root   [...]     0:01.01 [xenstore_rcv]
>> root   [...]     0:00.00 [balloon]
>> root   [...]     0:01.74 /bin/sh /usr/local/sbin/xe-daemon -p
>> /var/run/xe-daemon.pid
>> [...]
>>
>> The xe-daemon looks strange also, I don't use XenServer/XCP-ng, only
>> "raw" Xen.
>> And this script which hand
>>
>> I also use pfSense domUs (based on FreeBSD), but they don't exhibit
>> this behaviour (i.e. no xenstore access error in dom0, no xen*
>> commands in domU).
>>
>> So is this a problem with TrueNAS rather than with Xen ?
>> If so I apologize for wasting your time.
>>
>> Thanks, have a nice day !
>> (and as it's my first post here: thx for Xen, it rocks)
> Hello,
>
> (Leaving the full report intact so CC'd people can see it whole)
>
> Yes, it is TrueNAS trying to re-write that file every minute.  It
> appears that TrueNAS has inherited (from Debian?) a rather old version
> of https://github.com/xenserver/xe-guest-utilities/

TrueNAS being FreeBSD-based probably inherits this from the
sysutils/xe-guest-utilities port, which installs a fork of the shell
version that predates this golang repository.


> https://xenbits.xen.org/docs/unstable/misc/xenstore-paths.html doesn't
> list feature-balloon as a permitted feature node.
>
> But, I suspect that it used to be the case that guests could write
> arbitrary feature nodes, and I suspect we tightened the permissions in a
> security fix to reduce worst-case memory usage of xenstored.
>
> I suspect the best (/least bad) thing to do here is to formally introduce
> feature-balloon as a permitted node, and have the toolstack initialise it
> to "" like we do with all other nodes, after which TrueNAS ought to be
> able to set it successfully and not touch it a second time.

Is there anything besides XAPI using this node, or the other data 
published by xe-daemon?

Maybe the original issue is just that there is no reason to have 
xe-guest-utilities installed in this setup?

Best regards,

-- 
Yann Dirson | Vates Platform Developer
XCP-ng & Xen Orchestra - Vates solutions
w: vates.tech | xcp-ng.org | xen-orchestra.com


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 17:09:01 2023
Message-ID: <193206bf-76a0-818d-8fa8-1886a15ad5e5@citrix.com>
Date: Wed, 12 Apr 2023 18:08:30 +0100
To: xen-devel <xen-devel@lists.xenproject.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Gitlab status on older branches (Inc some 4.18 blockers)
Cc: "committers@xenproject.org" <committers@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Michal Orzel <Michal.Orzel@arm.com>, Doug Goldstein <cardoe@cardoe.com>,
 Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
 Henry Wang <Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hello,

I've done various backports to staging-4.14 and later trying to improve
the state of Gitlab testing.

The good news is that 4.16 and 4.17 now pass.  The bad news is that
there are still bugs which need fixing, but let's start with the older
branches.

Also, I was forced to backport an update to SeaBIOS 1.16 to all branches
in order to fix compile failures in build environments we supported at
the time of each of these releases.  I honestly don't know what we were
failing to do, testing-wise, back then, but whatever we missed ought to
have been release blockers.


4.15:
https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/834460832

Individual failure instances:

1) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097232265

This is a -Werror=array-bounds failure in HVMLoader, but the same
job/container works on 4.16 and newer, and the underlying code is the
same.  There must be some change in the build environment, but I haven't
worked out what yet.
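For context, a minimal sketch (not the actual HVMLoader code, and the
address below is a made-up stand-in) of the class of pattern that newer
GCC's -Warray-bounds objects to: dereferencing a pointer formed from an
integer constant.  Firmware does this legitimately for fixed physical
locations, but GCC 11/12 treat such a pointer as referring to no known
object and may reject the access under -Werror=array-bounds.

```c
#include <assert.h>
#include <stdint.h>

#define ACPI_INFO_PADDR 0x9c000u  /* hypothetical fixed physical address */

/* Writing through a pointer created from a constant address.  Depending
 * on GCC version and optimisation level, this can trigger
 * "array subscript 0 is outside array bounds" under -Warray-bounds,
 * even though the access is intentional in firmware context. */
void touch_acpi_info(void)
{
    volatile uint8_t *p = (volatile uint8_t *)(uintptr_t)ACPI_INFO_PADDR;
    p[0] = 0;
}
```

(The function is only compiled here, not executed, since the address is
not real memory in a normal process.)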

2) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097232266

This is a -Werror=array-bounds failure in iPXE, which probably needs an
update like SeaBIOS did.

3) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097232290

This is a Qemu upstream failure which I do vaguely recall.  But it also
means that Xen 4.15 shipped a dead-on-arrival version of qemu, which
calls into question a number of our normal release activities.  Probably
the least-bad option is to backport the one fix relevant to this,
because changing the version of qemu in the security-only trees is far
riskier than changing one of the in-guest ROMs.

4) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097232334

I have no idea what's going on here.  If nothing else, we're failing to
collect all the relevant log files from a build and that probably wants
fixing and backporting.

5) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097232324

This isn't so much about the failure as about the fact that the OpenSUSE
Leap tests (which were replaced with Tumbleweed in newer versions of
Xen) probably want the same treatment, and to be marked non-blocking.
The failure itself is somewhere in the middle of qemu.

On top of all of the above, I discovered that the majority of our tests
run against debian/unstable, which, when we refreshed it to fix the
HTTPS issue, ended up retrofitting a newer-than-at-release-time build
environment onto the old trees.

This has come up previously and not been addressed, so I'm now declaring
it a blocker for 4.18.  Only tests against a fixed distro version can be
blocking; those against an unstable distro must be non-blocking, and
most of the currently-unstable jobs should be converted to their stable
alternative.  For backports, we want to retrofit what debian/unstable
was at the time of release, rather than what it currently is.

Furthermore, the fixed distros we currently test in staging are old,
bordering on obsolete, which is not a healthy position to be in as far
as the 4.18 release goes.


4.14:
https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/834461234

6) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097236330

This is the only 4.14 failure I can see which isn't a duplicate of a
4.15 failure, but it is an OpenSUSE Leap failure in qemu, so it is
perhaps related to #5.


As a general note, we still have too much testing (and/or insufficient
testing resources).  It's very painful waiting 2h for each branch to
complete.  I'm very tempted to trim things down further on staging and
backport the results.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 17:52:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 17:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520280.807686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmedG-0001z6-5j; Wed, 12 Apr 2023 17:51:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520280.807686; Wed, 12 Apr 2023 17:51:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmedG-0001yz-23; Wed, 12 Apr 2023 17:51:46 +0000
Received: by outflank-mailman (input) for mailman id 520280;
 Wed, 12 Apr 2023 17:51:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmedF-0001yp-0U; Wed, 12 Apr 2023 17:51:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmedE-0008CX-TR; Wed, 12 Apr 2023 17:51:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmedE-0000C4-Bt; Wed, 12 Apr 2023 17:51:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmedE-0003Cc-BR; Wed, 12 Apr 2023 17:51:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=syCZeO6YXALore4fuin+JnwNvBu5+zRRo2BqRduIp9Y=; b=pT84bADO1WxocjDVreMrLEapLb
	O++BScFPZhaxHDPBoGoZPlr1acXv/bk12HkcTIGGLjZnWSUAfD0QY7TcGn50pSNA7ErHC/cG04hGk
	GwXf9s4q4gnNDxbSxGJzY76uNYGV2AykEatAAoSjugm587ZpQXvqvTEtv/aUdKKgXxMs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180211-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180211: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:xen-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=5ea03c570c8610d4359f8bbf5f093d215344ce3f
X-Osstest-Versions-That:
    xen=ddaf7bb0cfd27369252de52e4b03410c4065bad2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Apr 2023 17:51:44 +0000

flight 180211 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180211/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-i386  7 xen-install    fail in 180206 pass in 180211
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail pass in 180206

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 180206 like 180187
 test-amd64-i386-xl-xsm        7 xen-install         fail in 180206 like 180200
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180200
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180200
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180200
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180200
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180200
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180200
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180200
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180200
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180200
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  5ea03c570c8610d4359f8bbf5f093d215344ce3f
baseline version:
 xen                  ddaf7bb0cfd27369252de52e4b03410c4065bad2

Last test of basis   180200  2023-04-11 01:53:29 Z    1 days
Testing same since   180206  2023-04-11 14:37:01 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ddaf7bb0cf..5ea03c570c  5ea03c570c8610d4359f8bbf5f093d215344ce3f -> master


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:23:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:23:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520294.807725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmf7Q-0005oO-7b; Wed, 12 Apr 2023 18:22:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520294.807725; Wed, 12 Apr 2023 18:22:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmf7Q-0005oH-4M; Wed, 12 Apr 2023 18:22:56 +0000
Received: by outflank-mailman (input) for mailman id 520294;
 Wed, 12 Apr 2023 18:22:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3LfV=AD=casper.srs.infradead.org=BATV+9719990f4703cc1bc73b+7171+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pmf7N-0005o9-DE
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:22:54 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0808e238-d95f-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 20:22:51 +0200 (CEST)
Received: from [2001:8b0:10b:5:d4a3:d89c:c03d:e45c]
 (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pmf7A-0076Pj-Ki; Wed, 12 Apr 2023 18:22:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0808e238-d95f-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=FMgErKlvTq/DZ1hsfRQ5x+/8tteMVJxJl1K2G+ztm2E=; b=r+b0LY0NHCggqPQ0RVv0bmzUhf
	lV8QwoxNrzHT0HCozeE8EeQOq3dxq1tgyS4TuuTsJB00qBw+TyaXuxLysyGRws2vMJ5Pml4SJF/9X
	7hnYzlza5TWWbYq5R3n7RM+tAJSpjbsYJgLLZV6XTkZTWvCAWInbdRwhQ3cmM6K9jZPqNVCspxaO9
	PG08IkKAVKGwmXuzNSLWQBbXTCxOx5Rr6URQHgE/3hWmIFKGw3vHTWxuF6ML5ysUaIvKtkouRJwGt
	JPx7NuT9VObawsogWqoFn77swrD7y0/nL1jkVhoq7zWiHIFXZWQltNQAUzhZxr1+OAAtgxkCwqfhV
	EC742khA==;
Message-ID: <92e10c45117dce9c07304a567fd412434ea0ddd3.camel@infradead.org>
Subject: Re: [PULL 22/27] hw/xen: Add emulated implementation of XenStore
 operations
From: David Woodhouse <dwmw2@infradead.org>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant
	 <paul@xen.org>, Joao Martins <joao.m.martins@oracle.com>, Ankur Arora
	 <ankur.a.arora@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	vikram.garhwal@amd.com, Anthony Perard <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org, Juan Quintela <quintela@redhat.com>, "Dr .
	David Alan Gilbert"
	 <dgilbert@redhat.com>
Date: Wed, 12 Apr 2023 19:22:38 +0100
In-Reply-To: <CAFEAcA-vCihVupZsLBdh6+-xjdNX2-K1Ceo+tgsjA=KCdWTjpg@mail.gmail.com>
References: <20230307182707.2298618-1-dwmw2@infradead.org>
	 <20230307182707.2298618-23-dwmw2@infradead.org>
	 <CAFEAcA-vCihVupZsLBdh6+-xjdNX2-K1Ceo+tgsjA=KCdWTjpg@mail.gmail.com>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-Xp0bkBS/xd5/DOVDOVjB"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-Xp0bkBS/xd5/DOVDOVjB
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2023-04-11 at 19:07 +0100, Peter Maydell wrote:
> 
> 
> > +static void xs_be_unwatch(struct qemu_xs_handle *h, struct
> > qemu_xs_watch *w)
> > +{
> > +    xs_impl_unwatch(h->impl, DOMID_QEMU, w->path, NULL,
> > xs_be_watch_cb, w);
> 
> Coverity points out that this is the only call to xs_impl_unwatch()
> where we don't check the return value. Is there some useful way
> we can report the error, or is it a "we're closing everything down
> anyway, no way to report anything" situation? (This particular
> Coverity heuristic is quite prone to false positives, so if that's
> the way it is I'll just mark it as a f-p in the coverity UI.)

This is because the Xen libxenstore API doesn't return an error, and
this is the ops function which emulates that same API. I suppose we
could explicitly cast to void with a comment to that effect, to avoid
having it linger in Coverity? I think that's sufficient to make
Coverity shut up, isn't it?
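A minimal sketch of the explicit-cast approach being proposed.  The
function names and counter below are stand-ins for illustration, not the
real QEMU/libxenstore symbols:

```c
#include <assert.h>

/* Counts invocations of the stand-in backend call, for demonstration. */
static int impl_unwatch_calls;

/* Hypothetical stand-in for xs_impl_unwatch(): it can fail, but the
 * libxenstore-style API this backend emulates has no way to propagate
 * an error to the caller. */
static int impl_unwatch(const char *path)
{
    (void)path;
    ++impl_unwatch_calls;
    return -1;            /* error that cannot be reported upward */
}

/* Mirrors the void-returning ops function: the return value is
 * discarded on purpose, and the explicit (void) cast plus a comment
 * records that intent for static analysers such as Coverity. */
static void be_unwatch(const char *path)
{
    /* Deliberately ignored: the emulated API cannot return an error. */
    (void)impl_unwatch(path);
}
```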

Htlyebp3pf3fSS9kzQ1FVtVIDrL6eqhTwJxe+pXSMMqFiN0whpBtXdyDjzBtQTaZJ7zTT/vlehc/
tDuqZwGHm/YJy883Ll+GP3NvOkgaRGWEuYWJJ6hFCkXYjyR9IzCCBhQwggT8oAMCAQICEQDGvhmW
Z0DEAx0oURL6O6l+MA0GCSqGSIb3DQEBCwUAMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3Jl
YXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0
ZWQxPjA8BgNVBAMTNVNlY3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJl
IEVtYWlsIENBMB4XDTIyMDEwNzAwMDAwMFoXDTI1MDEwNjIzNTk1OVowJDEiMCAGCSqGSIb3DQEJ
ARYTZHdtdzJAaW5mcmFkZWFkLm9yZzCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALQ3
GpC2bomUqk+91wLYBzDMcCj5C9m6oZaHwvmIdXftOgTbCJXADo6G9T7BBAebw2JV38EINgKpy/ZH
h7htyAkWYVoFsFPrwHounto8xTsySSePMiPlmIdQ10BcVSXMUJ3Juu16GlWOnAMJY2oYfEzmE7uT
9YgcBqKCo65pTFmOnR/VVbjJk4K2xE34GC2nAdUQkPFuyaFisicc6HRMOYXPuF0DuwITEKnjxgNj
P+qDrh0db7PAjO1D4d5ftfrsf+kdRR4gKVGSk8Tz2WwvtLAroJM4nXjNPIBJNT4w/FWWc/5qPHJy
2U+eITZ5LLE5s45mX2oPFknWqxBobQZ8a9dsZ3dSPZBvE9ZrmtFLrVrN4eo1jsXgAp1+p7bkfqd3
BgBEmfsYWlBXO8rVXfvPgLs32VdVNZxb/CDWPqBsiYv0Hv3HPsz07j5b+/cVoWqyHDKzkaVbxfq/
7auNVRmPB3v5SWEsH8xi4Bez2V9UKxfYCnqsjp8RaC2/khxKt0A552Eaxnz/4ly/2C7wkwTQnBmd
lFYhAflWKQ03Ufiu8t3iBE3VJbc25oMrglj7TRZrmKq3CkbFnX0fyulB+kHimrt6PIWn7kgyl9ae
lIl6vtbhMA+l0nfrsORMa4kobqQ5C5rveVgmcIad67EDa+UqEKy/GltUwlSh6xy+TrK1tzDvAgMB
AAGjggHMMIIByDAfBgNVHSMEGDAWgBQJwPL8C9qU21/+K9+omULPyeCtADAdBgNVHQ4EFgQUzMeD
Mcimo0oz8o1R1Nver3ZVpSkwDgYDVR0PAQH/BAQDAgWgMAwGA1UdEwEB/wQCMAAwHQYDVR0lBBYw
FAYIKwYBBQUHAwQGCCsGAQUFBwMCMEAGA1UdIAQ5MDcwNQYMKwYBBAGyMQECAQEBMCUwIwYIKwYB
BQUHAgEWF2h0dHBzOi8vc2VjdGlnby5jb20vQ1BTMFoGA1UdHwRTMFEwT6BNoEuGSWh0dHA6Ly9j
cmwuc2VjdGlnby5jb20vU2VjdGlnb1JTQUNsaWVudEF1dGhlbnRpY2F0aW9uYW5kU2VjdXJlRW1h
aWxDQS5jcmwwgYoGCCsGAQUFBwEBBH4wfDBVBggrBgEFBQcwAoZJaHR0cDovL2NydC5zZWN0aWdv
LmNvbS9TZWN0aWdvUlNBQ2xpZW50QXV0aGVudGljYXRpb25hbmRTZWN1cmVFbWFpbENBLmNydDAj
BggrBgEFBQcwAYYXaHR0cDovL29jc3Auc2VjdGlnby5jb20wHgYDVR0RBBcwFYETZHdtdzJAaW5m
cmFkZWFkLm9yZzANBgkqhkiG9w0BAQsFAAOCAQEAyW6MUir5dm495teKqAQjDJwuFCi35h4xgnQv
Q/fzPXmtR9t54rpmI2TfyvcKgOXpqa7BGXNFfh1JsqexVkIqZP9uWB2J+uVMD+XZEs/KYNNX2PvI
lSPrzIB4Z2wyIGQpaPLlYflrrVFKv9CjT2zdqvy2maK7HKOQRt3BiJbVG5lRiwbbygldcALEV9Ch
WFfgSXvrWDZspnU3Gjw/rMHrGnqlHtlyebp3pf3fSS9kzQ1FVtVIDrL6eqhTwJxe+pXSMMqFiN0w
hpBtXdyDjzBtQTaZJ7zTT/vlehc/tDuqZwGHm/YJy883Ll+GP3NvOkgaRGWEuYWJJ6hFCkXYjyR9
IzGCBMcwggTDAgEBMIGsMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVz
dGVyMRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMT
NVNlY3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEA
xr4ZlmdAxAMdKFES+jupfjANBglghkgBZQMEAgEFAKCCAeswGAYJKoZIhvcNAQkDMQsGCSqGSIb3
DQEHATAcBgkqhkiG9w0BCQUxDxcNMjMwNDEyMTgyMjM4WjAvBgkqhkiG9w0BCQQxIgQgJ0U7/4DY
IOLbvjX7/V1CfHz2x4x+x2tqnhDjPo1c5FEwgb0GCSsGAQQBgjcQBDGBrzCBrDCBljELMAkGA1UE
BhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEYMBYG
A1UEChMPU2VjdGlnbyBMaW1pdGVkMT4wPAYDVQQDEzVTZWN0aWdvIFJTQSBDbGllbnQgQXV0aGVu
dGljYXRpb24gYW5kIFNlY3VyZSBFbWFpbCBDQQIRAMa+GZZnQMQDHShREvo7qX4wgb8GCyqGSIb3
DQEJEAILMYGvoIGsMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVzdGVy
MRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMTNVNl
Y3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEAxr4Z
lmdAxAMdKFES+jupfjANBgkqhkiG9w0BAQEFAASCAgB/0bmG8i4XLIiZKhZfVQi7Riv5DfIbsBMd
RwsoOUxPloK9v7nylfD6IsvRGL9vRAs6/mA70i2Gg5Pnm+q4bdRE35V6kVML1jsfb6hP342HLM+H
k8Npmq6PYbfR8y8K1NmBbpNCO6l7vEwFwDjTrt7Er96Xbuct2UCFReNtDZcPldugsfIsbrUhsK1Q
s4nVtGGBcBuifXwAV1qIcu/V159h0g/5ardYKvzcnjbOLBU7pt9QXm8GLxhsE4L+SLda0O2wkul2
Apdj7pRhpMAz7deOWl2huW/HPOdJga2ssuFKFtaw2HYxxNZBS7F4MklC5bxS37D5pLjw7PW3Qf1x
hsdOjlntErh4C/peVYtRrJsh8FgosaOKgdwI/1PK1XTxy5o48e9fZL248/UGe7GtXjbhkXD8cOhL
jITcmME68Mev3lgJ3LnoklZFUbF07onGJhH2mPwjwJSTZs5Qu/xTWRl/c+nLHP1leB4FUIkjQ0z1
2rta7txQGxkNRokt+uFFVVeAoXhAQblm6oTf5tzVYQWvMrVE0MSnr6TqQGPsNXEaSkNT0Mvd1R1x
eXg4rYC3KzLGfDYJ8Kf6xANVd+dgs9OgIqfZy57x2wDB89m0O1vW2Tmi8bN+bc2G0RMKIOKe+IeT
6ukYeRro7teUDaVP+Fj9hWIcPPVcGfo52dSc7ochbQAAAAAAAA==


--=-Xp0bkBS/xd5/DOVDOVjB--


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:34:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:34:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520301.807735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfIL-0007K0-66; Wed, 12 Apr 2023 18:34:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520301.807735; Wed, 12 Apr 2023 18:34:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfIL-0007Jt-3T; Wed, 12 Apr 2023 18:34:13 +0000
Received: by outflank-mailman (input) for mailman id 520301;
 Wed, 12 Apr 2023 18:34:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KPvI=AD=citrix.com=prvs=459801679=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pmfII-0007Jn-Ot
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:34:11 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9ac37d79-d960-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 20:34:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ac37d79-d960-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681324447;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=E8aR5c3iwwAn+MHFxqQ3wmB9i/4CaPvwKAvF/3XXZ3g=;
  b=KOViJcXlDzXr9mI+CZmSqEg58U+I5q7FRZJWiGBzDTOASoKsU+H1iHy+
   mofvW8GTUkrtRNrJ0Pofr5ooc7h3h5yel5Uc0urenlYAiD4BBOabkn9Hc
   0lyvJeNQw+HDDL0IQBO5aK9pNYGyUUQayi+baFeAUHmU+V/AXCZp7MjoI
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 105184545
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:OXLSlK4Q+aV6onN0c7L+yQxRtCnHchMFZxGqfqrLsTDasY5as4F+v
 mAeDDqAPv6JN2WmfNAgYNm2pE0EvpaAnd9kGlRrqitmHi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+7JwehBtC5gZlPawS5geF/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m0
 6EWDBoqfi2/ge+V8eKeTPtg18EiFZy+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrlD5fydVtxS+oq0v7nKI5AdwzKLsIJzefdniqcB9xx7F/
 zKaojikav0cHNajmRyv2XKdvM3goj/wUtgZNrSRpsc/1TV/wURMUUZLBDNXu8KRkVWiUthSL
 0gV/CsGrqUo8kGvCN7nUHWQhX+PvhcYHf1KAeA+wAiXz+zf5APxLngJSHtNZcIrsOcyRCc2z
 RmZktXxHzttvbaJD3WH+d+pQSiaYHZPazVYPGldEFVDuoO4yG0usv7RZvA+Hoqfqc3IJTf94
 AHaiS4si+QWjPdegs1X4mv7byKQSonhF1Blv1+JAD30smuVd6b+OdX2tAGzAeJoad/AEwLf5
 CVsd922trhmMH2bqMCarAzh9pmN7u3NDjDTiEUH83IJp2X0oC7LkWy9DVhDyKZV3iUsI2WBj
 Lf741852XOqFCLCgVVLS4ywEd826qPrCM7oUPvZBvIXPMgoLlTYrXwzPRHMt4wIrKTKuftnU
 Xt8WZ/1ZUv29Iw9lGbmLwvj+eRDKt8CKZP7GsmgkkXPPUu2b3+JU7YVWGazghQCxPrc+m39q
 o8HX+PTkkU3bQELSnSOmWLlBQtRdiZT6FGfg5E/S9Nv1SI9STB/UqOAnux9E2Gn9owM/tr1E
 riGchcw4DLCabfvcG1mtlgLhGvTYKtC
IronPort-HdrOrdr: A9a23:QPwYt6sikskfLQfjgTOXzHRz7skDZ9V00zEX/kB9WHVpm62j+v
 xG+c5xvyMc5wxhO03I5urwWpVoLUmzyXcX2+Us1NWZPDUO0VHARL2KhrGM/9SPIUzDH+dmpM
 JdT5Q=
X-Talos-CUID: 9a23:N8yBcGOICzdayu5DfnRp0GQzCv0ZdHD+/Sf1JV20WGNocejA
X-Talos-MUID: =?us-ascii?q?9a23=3A0MhzRA+mez78Ib/XnjrSqACQf81Us4KEI0A8q7c?=
 =?us-ascii?q?phNWPP3Z7GT603A3iFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.98,339,1673931600"; 
   d="scan'208";a="105184545"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Michal Orzel <Michal.Orzel@arm.com>
Subject: [PATCH] CI: Update FreeBSD to 13.2
Date: Wed, 12 Apr 2023 19:33:56 +0100
Message-ID: <20230412183356.2986459-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Michal Orzel <Michal.Orzel@arm.com>

Successful run:
  https://cirrus-ci.com/task/6232358978846720
---
 .cirrus.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index b133afb74057..9bb6cce4ead3 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -23,7 +23,7 @@ task:
 task:
   name: 'FreeBSD 13'
   freebsd_instance:
-    image_family: freebsd-13-1
+    image_family: freebsd-13-2
   << : *FREEBSD_TEMPLATE
 
 task:
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:35:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:35:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520305.807745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfJY-0007si-Fv; Wed, 12 Apr 2023 18:35:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520305.807745; Wed, 12 Apr 2023 18:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfJY-0007sb-Ct; Wed, 12 Apr 2023 18:35:28 +0000
Received: by outflank-mailman (input) for mailman id 520305;
 Wed, 12 Apr 2023 18:35:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KPvI=AD=citrix.com=prvs=459801679=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pmfJX-0007sT-AS
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:35:27 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c9410ef3-d960-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 20:35:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9410ef3-d960-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681324524;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=rHGJGCPiZIDqX0+lu5FAg42KZJHmVLthhNzInWrh6ko=;
  b=JTnx8lsgQgYALHW74mPmonm5yV2xD4IUxdSMW4mmKSJ3GbMztdMxgWOR
   vk24XrM6IPh54tWc9gsKzU7k0g2I59NbACcZLORkmqCHO+np81Y6+T2DC
   ugGqVirZV7Fqz4FpxMJBEdX+GPAyIcoyLp6oWieYG1AtIvdxo3m2n6OTb
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 105184657
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:6GDp8qtmGQVdYTdyklliJzYym+fnVFZeMUV32f8akzHdYApBsoF/q
 tZmKWzXOPaNMGqjfN50YIrg/B5QuJfUydUwSgVo/yBhQXgX+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg3HVQ+IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4bKj6Fv0gnRkPaoQ5AOHzSFOZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwBBk0aiG+jruNzui3euxOu80Tc9LWM9ZK0p1g5Wmx4fcORJnCR+PB5MNC3Sd2jcdLdRrcT
 5NHM3w1Nk2GOkARfA5NU/rSn8/x7pX7WxRepEiYuuwc5G/LwRYq+LPsLMDUapqBQsA9ckOw/
 zqbpjSlXExFXDCZ4TmAzmq1uN7dp3LmUqsvP6ab/aZmskLGkwT/DzVJDADm8JFVkHWWRNZ3O
 0ESvC00osAa5EGtC9XwQRC8iHqFpQIHHcpdFfUg7wOAwbaS5ByWbkAmZDNcbN0ttOctWCcnk
 FSOmrvU6SdH6ePPDyjHr/HN8G30YHJORYMfWcMaZScs2t3SnYhqtRyVQuZmMpO8voLuJD6ll
 lhmsxMCa6UvYd8jjvvrpAqZ3W39+vAlXSZuuFyJAzvNAhdRIdf8Otf2sQWzAeNodt7xc7WXg
 JQTdyFyBsgqBIrFqiGCSf5l8FqBt6fca220bbKC8vAcG9WRF52LJ9o4DMlWfhsBDyr9UWaBj
 LXvkQ1Q/oRPG3ChcLV6ZYm8Y+xzk/i5T4+6B62JNoUSCnSUSONh1Hs2DaJ39zm0+HXAbIllY
 cvLGSpSJS1y5VtbIMqeGL5GjO5DKtEWzmLPX5HrpymaPU6lTCfNE98taQLWBshgtfPsnekg2
 4sGXyd8404EC7OWj+i+2dN7EG3m2lBgVMGo8pAGKbHfSuekcUl4Y8LsLXoaU9QNt8xoei3gp
 xlRhmcwJILDuED6
IronPort-HdrOrdr: A9a23:EhX1qqlgtBNjPhU/oGZfCDIwM2vpDfJP3DAbv31ZSRFFG/Fw9v
 rPoB1/73TJYVkqNU3I9errBEDiexLhHOBOjrX5VI3KNDUO01HFEGgN1+Xf/wE=
X-Talos-CUID: =?us-ascii?q?9a23=3A2CCosWkAxiHU4wQUanEnph4RncDXOVH09XPhHkb?=
 =?us-ascii?q?hMmgzRI2Qa2KZyPx/rsU7zg=3D=3D?=
X-Talos-MUID: 9a23:Ua085AZTnZX5FuBTtDrQnChcBdVSsrmkIX0mirYN48W2Knkl
X-IronPort-AV: E=Sophos;i="5.98,339,1673931600"; 
   d="scan'208";a="105184657"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/hvm: Disallow CR0.PG 1->0 transitions when CS.L=1
Date: Wed, 12 Apr 2023 19:35:19 +0100
Message-ID: <20230412183519.2996902-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The Long Mode consistency checks exist to "ensure that the processor does not
enter an undefined mode or state that results in unpredictable behavior".  APM
Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
preventing the OS from trying to exit Long mode while in 64bit mode.  This
could leave the CPU in Protected Mode with an %rip above the 4G boundary.

Experimentally, AMD CPUs really do permit this state transition.  An OS which
tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
to be going on behind the scenes ought to result in sane continued execution.

Furthermore, right from the very outset, the APM Vol2 14.7 "Leaving Long Mode"
section instructs people to switch to a compatibility mode segment first
before clearing CR0.PG, which does clear out the upper bits in %rip.  This is
further backed up by Vol2 Figure 1-6 "Operating Modes of the AMD64
Architecture".

Either way, this appears to have been a genuine oversight in the AMD64 spec.

Intel, on the other hand, rejects this state transition with #GP.

Between revision 71 (Nov 2019) and 72 (May 2020) of SDM Vol3, a footnote to
4.1.2 "Paging-Mode Enable" was altered from:

  If CR4.PCIDE= 1, an attempt to clear CR0.PG causes a general-protection
  exception (#GP); software should clear CR4.PCIDE before attempting to
  disable paging.

to

  If the logical processor is in 64-bit mode or if CR4.PCIDE= 1, an attempt to
  clear CR0.PG causes a general-protection exception (#GP). Software should
  transition to compatibility mode and clear CR4.PCIDE before attempting to
  disable paging.

which acknowledges this corner case, but there doesn't appear to be any other
discussion even in the relevant Long Mode sections.

So it appears that Intel spotted and addressed the corner case in IA-32e mode,
but were 15 years late to document it.

Xen was written to the AMD spec, and misses the check.  Follow the Intel
behaviour, because it is more sensible and avoids hitting a VMEntry failure.
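The refused transition can be modelled in isolation. The sketch below is a
hypothetical standalone model of the logic this patch adds, not Xen's actual
API: the function name and its boolean arguments are illustrative, and only
the X86EMUL_* return codes mirror real Xen identifiers.

```c
#include <stdbool.h>

/* Simplified stand-ins for Xen's x86_emulate return codes. */
#define X86EMUL_OKAY      0
#define X86EMUL_EXCEPTION 1

/*
 * Model of the CR0.PG 1->0 path in hvm_set_cr0() after this patch:
 * clearing CR0.PG is refused (#GP) while the vCPU is in 64-bit mode
 * (CS.L=1), matching Intel's documented behaviour, and also while
 * CR4.PCIDE is still set (the pre-existing check).
 */
static int check_clear_cr0_pg(bool cs_l, bool cr4_pcide)
{
    if ( cs_l )
        return X86EMUL_EXCEPTION;  /* new check: still in 64-bit mode */
    if ( cr4_pcide )
        return X86EMUL_EXCEPTION;  /* existing check: PCIDE must be clear */
    return X86EMUL_OKAY;
}
```

A guest following APM Vol2 14.7 "Leaving Long Mode" first reloads CS with a
compatibility-mode segment (CS.L=0) and clears CR4.PCIDE, after which the
transition is permitted.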

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/hvm/hvm.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1454c1732d95..3c8d28a2d8be 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2340,6 +2340,21 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
     }
     else if ( !(value & X86_CR0_PG) && (old_value & X86_CR0_PG) )
     {
+        struct segment_register cs;
+
+        hvm_get_segment_register(v, x86_seg_cs, &cs);
+
+        /*
+         * Intel documents a #GP fault in this case, and VMEntry checks reject
+         * it as a valid state.  AMD permits the state transition, and hits
+         * SHUTDOWN immediately thereafter.  Follow the Intel behaviour.
+         */
+        if ( cs.l )
+        {
+            HVM_DBG_LOG(DBG_LEVEL_1, "Guest attempts to clear CR0.PG while CS.L=1");
+            return X86EMUL_EXCEPTION;
+        }
+
         if ( hvm_pcid_enabled(v) )
         {
             HVM_DBG_LOG(DBG_LEVEL_1, "Guest attempts to clear CR0.PG "
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:51:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:51:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520311.807754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfYs-0001qg-R1; Wed, 12 Apr 2023 18:51:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520311.807754; Wed, 12 Apr 2023 18:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfYs-0001qZ-Nv; Wed, 12 Apr 2023 18:51:18 +0000
Received: by outflank-mailman (input) for mailman id 520311;
 Wed, 12 Apr 2023 18:51:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3LfV=AD=casper.srs.infradead.org=BATV+9719990f4703cc1bc73b+7171+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pmfYr-0001qS-9T
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:51:17 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 00c3d2ca-d963-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 20:51:16 +0200 (CEST)
Received: from [2001:8b0:10b:5::ebe] (helo=i7.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pmfYf-0077lB-4g; Wed, 12 Apr 2023 18:51:05 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.96 #2 (Red Hat
 Linux)) id 1pmfYe-001r1B-1f; Wed, 12 Apr 2023 19:51:04 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 00c3d2ca-d963-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Sender:Content-Transfer-Encoding:
	Content-Type:MIME-Version:Message-Id:Date:Subject:Cc:To:From:Reply-To:
	Content-ID:Content-Description:In-Reply-To:References;
	bh=FAopztqETbo2fPeQeCNjHsbWvQfvN+qB27Usma2Ee8w=; b=j2olmn1su4XXnixX5a0dl5WOQs
	tGYDDng0Lu+7HnGPetpa5EKtx3Lh/D8nFMhXglOuKJzqlsQwOBDoQXWNtMljZQblzaKOV87EDsLRZ
	pKAmiO3JtQ1YIlqJfjTzOZJaXMkA3Nl6WPyaJbSPv7KJzgTV/r5skMbPF4geX7HCxRpVQQGcyuob5
	qZVUs4cPDm0csrie7Ly5YZ8Zxr7tQbe/0n/V2YJDDMijpRdLW9YQCRiOI/yknWTPT3OPM1/KBR/r7
	4gA+a3Oju3jyc4FeBjWcLFL9ogJsiMqaX7dvj6CqYVHCf7rGfp9punMadb1R04bsjxcg2Z8OxJDKY
	NA37yTRg==;
From: David Woodhouse <dwmw2@infradead.org>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH for-8.0 0/5] Xen emulation build/Coverity fixes
Date: Wed, 12 Apr 2023 19:50:57 +0100
Message-Id: <20230412185102.441523-1-dwmw2@infradead.org>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Some Coverity fixes and minor cleanups. And most notably, dropping
support for Xen libraries older than 4.7.1.

I believe there are two issues that remain to be fixed. The x32 build
fails, and I've seen patches which attempt to detect x32 and disable
the Xen emulation. Along with assertions that we just shouldn't care.
I don't have a strong opinion either way but it seems to be in hand.

The other is the question of what Xen *actually* does if you try to
unmap an IRQ_MSI_EMU PIRQ. I don't think Linux guests try that, and
I'm fairly sure Windows doesn't even use MSI→PIRQ mappings in the
first place, and I doubt any other guests care either. I'd like to
establish the 'correct' behaviour and implement it, ideally before
the 8.0 release, but it's going to take me a few days more.

David Woodhouse (5):
      hw/xen: Simplify emulated Xen platform init
      hw/xen: Fix memory leak in libxenstore_open() for Xen
      xen: Drop support for Xen versions below 4.7.1
      hw/xen: Fix double-free in xen_console store_con_info()
      hw/xen: Fix broken check for invalid state in xs_be_open()

 hw/char/xen_console.c       |  13 ++----
 hw/i386/kvm/xen_evtchn.c    |  40 ++++++++---------
 hw/i386/kvm/xen_evtchn.h    |   3 +-
 hw/i386/kvm/xen_xenstore.c  |   2 +-
 hw/i386/pc.c                |  13 ++----
 hw/xen/xen-operations.c     |  59 +-----------------------
 include/hw/xen/xen_native.h | 107 +-------------------------------------------
 meson.build                 |   5 +--
 scripts/xen-detect.c        |  60 -------------------------
 9 files changed, 33 insertions(+), 269 deletions(-)





From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:51:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:51:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520314.807785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfYw-0002bs-MT; Wed, 12 Apr 2023 18:51:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520314.807785; Wed, 12 Apr 2023 18:51:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfYw-0002bj-Jl; Wed, 12 Apr 2023 18:51:22 +0000
Received: by outflank-mailman (input) for mailman id 520314;
 Wed, 12 Apr 2023 18:51:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3LfV=AD=casper.srs.infradead.org=BATV+9719990f4703cc1bc73b+7171+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pmfYu-0001qk-QE
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:51:20 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 00cacccc-d963-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 20:51:15 +0200 (CEST)
Received: from [2001:8b0:10b:5::ebe] (helo=i7.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pmfYf-0077lC-4l; Wed, 12 Apr 2023 18:51:05 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.96 #2 (Red Hat
 Linux)) id 1pmfYe-001r1E-1o; Wed, 12 Apr 2023 19:51:04 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 00cacccc-d963-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Sender:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	Reply-To:Content-Type:Content-ID:Content-Description;
	bh=tO8ebQO7hdBkDVSP4r+H8wDgeYM8ejzEVuKZ0cLSybE=; b=XNrj8BMABVv6sUMXdtjglSDnX8
	3vqc0BHSKUVoTm4KdYsPLXfvX4sg8NBLll2N3jWqzzy1XjtXhTso4UClv3zK+Wo/lXaWTIKROWAXk
	n8V6T9UBtmYBSg9NMdsP7hiQhPxZoolCMSUzWdMFVrb/wHfVc4To4//Se0e86u8oU9+t4hyPezTbH
	f5MJasfK2wQfTiJs7w5l/hYZYx2CDcsMh7Hwr5uho1w8CUdyi62f2QwEgOT7RJLjBRi8CwnVD9JNN
	4T2rynFSIcpVIvoF02xBSckWGDbUY0yH2mpFcVj7EgK7zOJFQpvAXN7f/sN77jA/1LGpvocufAjv5
	3J7SUvwQ==;
From: David Woodhouse <dwmw2@infradead.org>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH 1/5] hw/xen: Simplify emulated Xen platform init
Date: Wed, 12 Apr 2023 19:50:58 +0100
Message-Id: <20230412185102.441523-2-dwmw2@infradead.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230412185102.441523-1-dwmw2@infradead.org>
References: <20230412185102.441523-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

From: David Woodhouse <dwmw@amazon.co.uk>

I initially put the basic platform init (overlay pages, grant tables,
event channels) into mc->kvm_type because that was the earliest place
that could sensibly test for xen_mode==XEN_EMULATE.

The intent was to do this early enough that we could then initialise the
XenBus and other parts which would have depended on them, from a generic
location for both Xen and KVM/Xen in the PC-specific code, as seen in
https://lore.kernel.org/qemu-devel/20230116221919.1124201-16-dwmw2@infradead.org/

However, then the Xen on Arm patches came along, and *they* wanted to
do the XenBus init from a 'generic' Xen-specific location instead:
https://lore.kernel.org/qemu-devel/20230210222729.957168-4-sstabellini@kernel.org/

Since there's no generic location that covers all three, I conceded to
do it for XEN_EMULATE mode in pc_basic_devices_init().

And now there's absolutely no point in having some of the platform init
done from pc_machine_kvm_type(); we can move it all up to live in a
single place in pc_basic_devices_init(). This has the added benefit that
we can drop the separate xen_evtchn_connect_gsis() function completely,
and pass just the system GSIs in directly to xen_evtchn_create().

While I'm at it, it does no harm to explicitly pass in the *number* of
said GSIs, because it does make me twitch a bit to pass an array of
implicit size. During the lifetime of the KVM/Xen patchset, that had
already changed (albeit just cosmetically) from GSI_NUM_PINS to
IOAPIC_NUM_PINS.

And document a bit better that this is for the *output* GSI for raising
CPU0's events when the per-CPU vector isn't available. The fact that
we create a whole set of them and then only waggle the one we're told
to, instead of having a single output and only *connecting* it to the
GSI that it should be connected to, is still non-intuitive for me.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 hw/i386/kvm/xen_evtchn.c | 40 ++++++++++++++++++++--------------------
 hw/i386/kvm/xen_evtchn.h |  3 +--
 hw/i386/pc.c             | 13 ++++---------
 3 files changed, 25 insertions(+), 31 deletions(-)

diff --git a/hw/i386/kvm/xen_evtchn.c b/hw/i386/kvm/xen_evtchn.c
index 3048329474..3d810dbd59 100644
--- a/hw/i386/kvm/xen_evtchn.c
+++ b/hw/i386/kvm/xen_evtchn.c
@@ -147,7 +147,10 @@ struct XenEvtchnState {
     QemuMutex port_lock;
     uint32_t nr_ports;
     XenEvtchnPort port_table[EVTCHN_2L_NR_CHANNELS];
-    qemu_irq gsis[IOAPIC_NUM_PINS];
+
+    /* Connected to the system GSIs for raising callback as GSI / INTx */
+    unsigned int nr_callback_gsis;
+    qemu_irq *callback_gsis;
 
     struct xenevtchn_handle *be_handles[EVTCHN_2L_NR_CHANNELS];
 
@@ -299,7 +302,7 @@ static void gsi_assert_bh(void *opaque)
     }
 }
 
-void xen_evtchn_create(void)
+void xen_evtchn_create(unsigned int nr_gsis, qemu_irq *system_gsis)
 {
     XenEvtchnState *s = XEN_EVTCHN(sysbus_create_simple(TYPE_XEN_EVTCHN,
                                                         -1, NULL));
@@ -310,8 +313,19 @@ void xen_evtchn_create(void)
     qemu_mutex_init(&s->port_lock);
     s->gsi_bh = aio_bh_new(qemu_get_aio_context(), gsi_assert_bh, s);
 
-    for (i = 0; i < IOAPIC_NUM_PINS; i++) {
-        sysbus_init_irq(SYS_BUS_DEVICE(s), &s->gsis[i]);
+    /*
+     * These are the *output* GSI from event channel support, for
+     * signalling CPU0's events via GSI or PCI INTx instead of the
+     * per-CPU vector. We create a *set* of irqs and connect one to
+     * each of the system GSIs which were passed in from the platform
+     * code, and then just trigger the right one as appropriate from
+     * xen_evtchn_set_callback_level().
+     */
+    s->nr_callback_gsis = nr_gsis;
+    s->callback_gsis = g_new0(qemu_irq, nr_gsis);
+    for (i = 0; i < nr_gsis; i++) {
+        sysbus_init_irq(SYS_BUS_DEVICE(s), &s->callback_gsis[i]);
+        sysbus_connect_irq(SYS_BUS_DEVICE(s), i, system_gsis[i]);
     }
 
     /*
@@ -336,20 +350,6 @@ void xen_evtchn_create(void)
     xen_evtchn_ops = &emu_evtchn_backend_ops;
 }
 
-void xen_evtchn_connect_gsis(qemu_irq *system_gsis)
-{
-    XenEvtchnState *s = xen_evtchn_singleton;
-    int i;
-
-    if (!s) {
-        return;
-    }
-
-    for (i = 0; i < IOAPIC_NUM_PINS; i++) {
-        sysbus_connect_irq(SYS_BUS_DEVICE(s), i, system_gsis[i]);
-    }
-}
-
 static void xen_evtchn_register_types(void)
 {
     type_register_static(&xen_evtchn_info);
@@ -430,8 +430,8 @@ void xen_evtchn_set_callback_level(int level)
         return;
     }
 
-    if (s->callback_gsi && s->callback_gsi < IOAPIC_NUM_PINS) {
-        qemu_set_irq(s->gsis[s->callback_gsi], level);
+    if (s->callback_gsi && s->callback_gsi < s->nr_callback_gsis) {
+        qemu_set_irq(s->callback_gsis[s->callback_gsi], level);
         if (level) {
             /* Ensure the vCPU polls for deassertion */
             kvm_xen_set_callback_asserted();
diff --git a/hw/i386/kvm/xen_evtchn.h b/hw/i386/kvm/xen_evtchn.h
index bfb67ac2bc..b740acfc0d 100644
--- a/hw/i386/kvm/xen_evtchn.h
+++ b/hw/i386/kvm/xen_evtchn.h
@@ -16,10 +16,9 @@
 
 typedef uint32_t evtchn_port_t;
 
-void xen_evtchn_create(void);
+void xen_evtchn_create(unsigned int nr_gsis, qemu_irq *system_gsis);
 int xen_evtchn_soft_reset(void);
 int xen_evtchn_set_callback_param(uint64_t param);
-void xen_evtchn_connect_gsis(qemu_irq *system_gsis);
 void xen_evtchn_set_callback_level(int level);
 
 int xen_evtchn_set_port(uint16_t port);
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 1489abf010..25584cb8f3 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1319,7 +1319,10 @@ void pc_basic_device_init(struct PCMachineState *pcms,
 
 #ifdef CONFIG_XEN_EMU
     if (xen_mode == XEN_EMULATE) {
-        xen_evtchn_connect_gsis(gsi);
+        xen_overlay_create();
+        xen_evtchn_create(IOAPIC_NUM_PINS, gsi);
+        xen_gnttab_create();
+        xen_xenstore_create();
         if (pcms->bus) {
             pci_create_simple(pcms->bus, -1, "xen-platform");
         }
@@ -1868,14 +1871,6 @@ static void pc_machine_initfn(Object *obj)
 
 int pc_machine_kvm_type(MachineState *machine, const char *kvm_type)
 {
-#ifdef CONFIG_XEN_EMU
-    if (xen_mode == XEN_EMULATE) {
-        xen_overlay_create();
-        xen_evtchn_create();
-        xen_gnttab_create();
-        xen_xenstore_create();
-    }
-#endif
     return 0;
 }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:51:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:51:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520312.807765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfYu-00025d-2G; Wed, 12 Apr 2023 18:51:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520312.807765; Wed, 12 Apr 2023 18:51:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfYt-00025V-Ub; Wed, 12 Apr 2023 18:51:19 +0000
Received: by outflank-mailman (input) for mailman id 520312;
 Wed, 12 Apr 2023 18:51:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3LfV=AD=casper.srs.infradead.org=BATV+9719990f4703cc1bc73b+7171+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pmfYt-0001qk-4D
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:51:19 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 00c6275a-d963-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 20:51:15 +0200 (CEST)
Received: from [2001:8b0:10b:5::ebe] (helo=i7.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pmfYf-0077lE-7x; Wed, 12 Apr 2023 18:51:06 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.96 #2 (Red Hat
 Linux)) id 1pmfYe-001r1R-2I; Wed, 12 Apr 2023 19:51:04 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 00c6275a-d963-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Sender:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	Reply-To:Content-Type:Content-ID:Content-Description;
	bh=pvefxXf1YhjL3JSf+V8JRgGjqQjxWW3E4QNSUXU8Vyk=; b=TjPE0dW5/ekjTjDaeeWQk5fUJV
	nNUgr4r4gwcl33okRN07A5zJwF7Q0B4b01PnDXOtfu3v7VarXGfsZZNSPBiwmIfLc2EdIZ2EGBo4o
	QTqoNs4SeLgJml9Y/6rUfD8FuNIh2wI+nWzJS/7XYaEwpMo+wj7PQJ9f5m1MKj8o0U6AFB1l5km3X
	rElR8Zf42jdPGjQ1hHcqk1DSpdw5yW/z9RBJwQGSUYukk2JFnGU7rFkK9jtVNSx3ySS6kedTvELwC
	pvVDHGxtrOnUPo8v7MBvIwMGNL0HzMnLXSZ8RfjifLwUOnrHdF05EhNKDHsy1O0kxGie2f4iVThoa
	QyhSoxew==;
From: David Woodhouse <dwmw2@infradead.org>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH 4/5] hw/xen: Fix double-free in xen_console store_con_info()
Date: Wed, 12 Apr 2023 19:51:01 +0100
Message-Id: <20230412185102.441523-5-dwmw2@infradead.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230412185102.441523-1-dwmw2@infradead.org>
References: <20230412185102.441523-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

From: David Woodhouse <dwmw@amazon.co.uk>

Coverity spotted a double-free (CID 1508254); we g_string_free(path) and
then for some reason immediately call free(path) too.

We should just use g_autoptr() for it anyway, which simplifies the code
a bit.

Fixes: 7a8a749da7d3 ("hw/xen: Move xenstore_store_pv_console_info to xen_console.c")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 hw/char/xen_console.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/hw/char/xen_console.c b/hw/char/xen_console.c
index c7a19c0e7c..810dae3f44 100644
--- a/hw/char/xen_console.c
+++ b/hw/char/xen_console.c
@@ -178,8 +178,7 @@ static int store_con_info(struct XenConsole *con)
     Chardev *cs = qemu_chr_fe_get_driver(&con->chr);
     char *pts = NULL;
     char *dom_path;
-    GString *path;
-    int ret = -1;
+    g_autoptr(GString) path = NULL;
 
     /* Only continue if we're talking to a pty. */
     if (!CHARDEV_IS_PTY(cs)) {
@@ -204,15 +203,9 @@ static int store_con_info(struct XenConsole *con)
 
     if (xenstore_write_str(con->console, path->str, pts)) {
         fprintf(stderr, "xenstore_write_str for '%s' fail", path->str);
-        goto out;
+        return -1;
     }
-    ret = 0;
-
-out:
-    g_string_free(path, true);
-    free(path);
-
-    return ret;
+    return 0;
 }
 
 static int con_init(struct XenLegacyDevice *xendev)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:51:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:51:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520313.807775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfYv-0002Lv-E3; Wed, 12 Apr 2023 18:51:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520313.807775; Wed, 12 Apr 2023 18:51:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfYv-0002Lk-BD; Wed, 12 Apr 2023 18:51:21 +0000
Received: by outflank-mailman (input) for mailman id 520313;
 Wed, 12 Apr 2023 18:51:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3LfV=AD=casper.srs.infradead.org=BATV+9719990f4703cc1bc73b+7171+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pmfYt-0001qk-Q3
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:51:19 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 00ceabaf-d963-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 20:51:16 +0200 (CEST)
Received: from [2001:8b0:10b:5::ebe] (helo=i7.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pmfYf-0077lF-7n; Wed, 12 Apr 2023 18:51:05 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.96 #2 (Red Hat
 Linux)) id 1pmfYe-001r1V-2S; Wed, 12 Apr 2023 19:51:04 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 00ceabaf-d963-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Sender:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	Reply-To:Content-Type:Content-ID:Content-Description;
	bh=NIk/dSGE1S/XkSMWglE5vSkIy2Aotr95wrmjXgO3qTE=; b=jjZUxe9t2zm5dC09PCwBEOxg1z
	joZy6jDOxzr1nMq4pNbcp3WLngm4pYpiqE8AoYSdmmYEq9kGeF6bv9Z2a4WWxpA7YiY0Xtz7jkDNZ
	VMLSp2/tDON07Uv3Yd4BFcaGPn27lWka2nduaqL3piYJyOoFIM7uSZqKzax8Kh6VsHdHz0zYgSqqO
	ILzM43hX1ULSuqD1DrDrzjbRoUwQetje20sfAqhL/5s+GMRopH8X5yY3w5y/cRZoRoZGcwy+b3i9P
	WZYW1Tl6JlHkIlCYN93JjtiITKBL2dr6Oz/qojqVcvQj/3dU01ZHLF4uYLj6QxJIzAGsOfpqp4I7B
	a9J7iXrQ==;
From: David Woodhouse <dwmw2@infradead.org>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH 5/5] hw/xen: Fix broken check for invalid state in xs_be_open()
Date: Wed, 12 Apr 2023 19:51:02 +0100
Message-Id: <20230412185102.441523-6-dwmw2@infradead.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230412185102.441523-1-dwmw2@infradead.org>
References: <20230412185102.441523-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

From: David Woodhouse <dwmw@amazon.co.uk>

Coverity points out that if (!s && !s->impl) isn't really what we intended
to do here. CID 1508131.
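With `&&`, the second operand only runs when s *is* NULL, so a NULL handle
gets dereferenced, and a non-NULL handle with a NULL impl sails through the
check; `||` rejects both. A minimal sketch of the corrected guard (struct and
function names hypothetical):

```c
#include <assert.h>
#include <stddef.h>

struct xs_state {
    void *impl;               /* backend implementation, may be unset */
};

static int xs_usable(const struct xs_state *s)
{
    /* '||' short-circuits safely: s->impl is only read when s != NULL */
    if (!s || !s->impl) {
        return 0;             /* not initialised: caller should bail out */
    }
    return 1;
}

static struct xs_state no_impl;           /* impl == NULL */
static int dummy_impl;
static struct xs_state ready = { &dummy_impl };
```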

Fixes: 032475127225 ("hw/xen: Add emulated implementation of XenStore operations")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 hw/i386/kvm/xen_xenstore.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 900679af8a..65f91e87d7 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -1688,7 +1688,7 @@ static struct qemu_xs_handle *xs_be_open(void)
     XenXenstoreState *s = xen_xenstore_singleton;
     struct qemu_xs_handle *h;
 
-    if (!s && !s->impl) {
+    if (!s || !s->impl) {
         errno = -ENOSYS;
         return NULL;
     }
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:51:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:51:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520315.807794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfYy-0002tS-UA; Wed, 12 Apr 2023 18:51:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520315.807794; Wed, 12 Apr 2023 18:51:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfYy-0002tL-RD; Wed, 12 Apr 2023 18:51:24 +0000
Received: by outflank-mailman (input) for mailman id 520315;
 Wed, 12 Apr 2023 18:51:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ni7i=AD=desiato.srs.infradead.org=BATV+dd6f636cd48703d0a954+7171+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pmfYx-0001qS-Kg
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:51:24 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 045f501b-d963-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 20:51:22 +0200 (CEST)
Received: from [2001:8b0:10b:1::425] (helo=i7.infradead.org)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pmfYf-00E0VL-3D; Wed, 12 Apr 2023 18:51:06 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.96 #2 (Red Hat
 Linux)) id 1pmfYe-001r1H-1y; Wed, 12 Apr 2023 19:51:04 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 045f501b-d963-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	Reply-To:Content-Type:Content-ID:Content-Description;
	bh=Xa7ooeKNSf+k8dNFDJzvKocJu/52pawuc5iQqeYJl9c=; b=nxQF+PYllY75sHqovcNk2BH4Gi
	RpVSWDgHzGp6LR/J9XHEZ/8NcMhK+wvi2yYuhn1/gaMykPZ42EuBEZDwbpl+7j+9ffR82KtYbpKHp
	jDMpMfdHfLyMA6cWiFQ9a+98ywwxU3E4Ks45R78pIEPXpoIkT8fg91nyZVcDyzekRslNMu+N3+Dmi
	vWtqtkYz9VGIPOJ6qJHf/pF+xK8y0jdGcEwCDtzTR1tSBnHGNjIVmY6iljd5SJMmR7UXzcXXwRDsC
	ZEJtD3rZKBTKNS1GAyETt1GMxjt9mnIJoRAy5zmBRb9118HdiHPaxBwJA1NYFEIvolFPS+Z3L+hAX
	p3eq+52w==;
From: David Woodhouse <dwmw2@infradead.org>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH 2/5] hw/xen: Fix memory leak in libxenstore_open() for Xen
Date: Wed, 12 Apr 2023 19:50:59 +0100
Message-Id: <20230412185102.441523-3-dwmw2@infradead.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230412185102.441523-1-dwmw2@infradead.org>
References: <20230412185102.441523-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by desiato.infradead.org. See http://www.infradead.org/rpr.html

From: David Woodhouse <dwmw@amazon.co.uk>

There was a superfluous allocation of the XS handle, leading to it
being leaked on both the error path and the success path (where it gets
allocated again).
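The fix is the usual allocate-after-validation ordering: defer the wrapper
allocation until the underlying open has succeeded, so the error path has
nothing to leak. A sketch with fake stand-ins for the xenstore calls (names
are hypothetical, not the real QEMU/libxenstore API):

```c
#include <assert.h>
#include <stdlib.h>

static int allocs;                /* live wrapper allocations, for the test */

struct fake_xs_handle { int dummy; };

struct wrapper {
    struct fake_xs_handle *xsh;
};

/* Stand-in for xs_open(): fails or succeeds on demand */
static struct fake_xs_handle *fake_xs_open(int succeed)
{
    static struct fake_xs_handle xs;
    return succeed ? &xs : NULL;
}

static struct wrapper *wrapper_open(int succeed)
{
    struct fake_xs_handle *xsh = fake_xs_open(succeed);
    struct wrapper *h;

    if (!xsh) {
        return NULL;          /* nothing allocated yet, nothing to leak */
    }
    h = calloc(1, sizeof(*h)); /* allocate only after the open succeeded */
    if (h) {
        h->xsh = xsh;
        allocs++;
    }
    return h;
}
```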

Spotted by Coverity (CID 1508098).

Fixes: ba2a92db1ff6 ("hw/xen: Add xenstore operations to allow redirection to internal emulation")
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/xen/xen-operations.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/xen/xen-operations.c b/hw/xen/xen-operations.c
index 4b78fbf4bd..3d213d28df 100644
--- a/hw/xen/xen-operations.c
+++ b/hw/xen/xen-operations.c
@@ -287,7 +287,7 @@ static void watch_event(void *opaque)
 static struct qemu_xs_handle *libxenstore_open(void)
 {
     struct xs_handle *xsh = xs_open(0);
-    struct qemu_xs_handle *h = g_new0(struct qemu_xs_handle, 1);
+    struct qemu_xs_handle *h;
 
     if (!xsh) {
         return NULL;
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:51:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:51:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520316.807805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfZ0-0003AM-7z; Wed, 12 Apr 2023 18:51:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520316.807805; Wed, 12 Apr 2023 18:51:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfZ0-00039k-42; Wed, 12 Apr 2023 18:51:26 +0000
Received: by outflank-mailman (input) for mailman id 520316;
 Wed, 12 Apr 2023 18:51:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ni7i=AD=desiato.srs.infradead.org=BATV+dd6f636cd48703d0a954+7171+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pmfYy-0001qS-Ki
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:51:24 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 04a6bb6f-d963-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 20:51:22 +0200 (CEST)
Received: from [2001:8b0:10b:1::425] (helo=i7.infradead.org)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pmfYg-00E0VM-03; Wed, 12 Apr 2023 18:51:06 +0000
Received: from dwoodhou by i7.infradead.org with local (Exim 4.96 #2 (Red Hat
 Linux)) id 1pmfYe-001r1N-2D; Wed, 12 Apr 2023 19:51:04 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 04a6bb6f-d963-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	Reply-To:Content-Type:Content-ID:Content-Description;
	bh=r0KrVFy8wO5hkkGDmOWYariQi9uMlzauvQswU3PX06g=; b=ksJ5J0ZCaGPS+w9bWo3qQ38f/f
	5dx1E09HdQlfCQ7zAQv5t2wDpola+fKSQIlh1VljOSAovETc/RoDS2hP3KrOuLWG9duJ5tX9z9DaW
	taNp+hP4l9lhhhObZc8K4m8DgnbwxhK9vx6czSEsBsDqOXqeXUDOEh28woBd+9HCM1BEPGXEaYzoV
	l8v2T/LFJWOzVC1kPWOvBjP7dR8xE41+HAp7IxxgMcgQzxkCQXqYiF9Rx2XJqYn6KM1GSM3i8zmw9
	2DVadTK32LRFcm29UdnVR0tSfHTiHb1bXNHb8L4WHF7YnzgGmkL7bxff29Izie8M7mcFw9H83AOyG
	9x+LnoVg==;
From: David Woodhouse <dwmw2@infradead.org>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH 3/5] xen: Drop support for Xen versions below 4.7.1
Date: Wed, 12 Apr 2023 19:51:00 +0100
Message-Id: <20230412185102.441523-4-dwmw2@infradead.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230412185102.441523-1-dwmw2@infradead.org>
References: <20230412185102.441523-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: David Woodhouse <dwmw2@infradead.org>
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by desiato.infradead.org. See http://www.infradead.org/rpr.html

From: David Woodhouse <dwmw@amazon.co.uk>

In restructuring to allow for internal emulation of Xen functionality,
I broke compatibility for Xen 4.6 and earlier. Fix this by explicitly
removing support for anything older than 4.7.1, which is also ancient
but does still build, and for which the compatibility support is fairly
unobtrusive.

Fixes: 15e283c5b684 ("hw/xen: Add foreignmem operations to allow redirection to internal emulation")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 hw/xen/xen-operations.c     |  57 +------------------
 include/hw/xen/xen_native.h | 107 +-----------------------------------
 meson.build                 |   5 +-
 scripts/xen-detect.c        |  60 --------------------
 4 files changed, 3 insertions(+), 226 deletions(-)

diff --git a/hw/xen/xen-operations.c b/hw/xen/xen-operations.c
index 3d213d28df..e00983ec44 100644
--- a/hw/xen/xen-operations.c
+++ b/hw/xen/xen-operations.c
@@ -28,46 +28,13 @@
 #include <xenctrl.h>
 
 /*
- * We don't support Xen prior to 4.2.0.
+ * We don't support Xen prior to 4.7.1.
  */
 
-/* Xen 4.2 through 4.6 */
-#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 40701
-
-typedef xc_evtchn xenevtchn_handle;
-typedef evtchn_port_or_error_t xenevtchn_port_or_error_t;
-
-#define xenevtchn_open(l, f) xc_evtchn_open(l, f);
-#define xenevtchn_close(h) xc_evtchn_close(h)
-#define xenevtchn_fd(h) xc_evtchn_fd(h)
-#define xenevtchn_pending(h) xc_evtchn_pending(h)
-#define xenevtchn_notify(h, p) xc_evtchn_notify(h, p)
-#define xenevtchn_bind_interdomain(h, d, p) xc_evtchn_bind_interdomain(h, d, p)
-#define xenevtchn_unmask(h, p) xc_evtchn_unmask(h, p)
-#define xenevtchn_unbind(h, p) xc_evtchn_unbind(h, p)
-
-typedef xc_gnttab xengnttab_handle;
-
-#define xengnttab_open(l, f) xc_gnttab_open(l, f)
-#define xengnttab_close(h) xc_gnttab_close(h)
-#define xengnttab_set_max_grants(h, n) xc_gnttab_set_max_grants(h, n)
-#define xengnttab_map_grant_ref(h, d, r, p) xc_gnttab_map_grant_ref(h, d, r, p)
-#define xengnttab_unmap(h, a, n) xc_gnttab_munmap(h, a, n)
-#define xengnttab_map_grant_refs(h, c, d, r, p) \
-    xc_gnttab_map_grant_refs(h, c, d, r, p)
-#define xengnttab_map_domain_grant_refs(h, c, d, r, p) \
-    xc_gnttab_map_domain_grant_refs(h, c, d, r, p)
-
-typedef xc_interface xenforeignmemory_handle;
-
-#else /* CONFIG_XEN_CTRL_INTERFACE_VERSION >= 40701 */
-
 #include <xenevtchn.h>
 #include <xengnttab.h>
 #include <xenforeignmemory.h>
 
-#endif
-
 /* Xen before 4.8 */
 
 static int libxengnttab_fallback_grant_copy(xengnttab_handle *xgt,
@@ -223,26 +190,6 @@ static struct gnttab_backend_ops libxengnttab_backend_ops = {
     .unmap = libxengnttab_backend_unmap,
 };
 
-#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 40701
-
-static void *libxenforeignmem_backend_map(uint32_t dom, void *addr, int prot,
-                                          size_t pages, xfn_pfn_t *pfns,
-                                          int *errs)
-{
-    if (errs) {
-        return xc_map_foreign_bulk(xen_xc, dom, prot, pfns, errs, pages);
-    } else {
-        return xc_map_foreign_pages(xen_xc, dom, prot, pfns, pages);
-    }
-}
-
-static int libxenforeignmem_backend_unmap(void *addr, size_t pages)
-{
-    return munmap(addr, pages * XC_PAGE_SIZE);
-}
-
-#else /* CONFIG_XEN_CTRL_INTERFACE_VERSION >= 40701 */
-
 static void *libxenforeignmem_backend_map(uint32_t dom, void *addr, int prot,
                                           size_t pages, xen_pfn_t *pfns,
                                           int *errs)
@@ -256,8 +203,6 @@ static int libxenforeignmem_backend_unmap(void *addr, size_t pages)
     return xenforeignmemory_unmap(xen_fmem, addr, pages);
 }
 
-#endif
-
 struct foreignmem_backend_ops libxenforeignmem_backend_ops = {
     .map = libxenforeignmem_backend_map,
     .unmap = libxenforeignmem_backend_unmap,
diff --git a/include/hw/xen/xen_native.h b/include/hw/xen/xen_native.h
index 6bcc83baf9..f11eb423e3 100644
--- a/include/hw/xen/xen_native.h
+++ b/include/hw/xen/xen_native.h
@@ -24,23 +24,11 @@
 extern xc_interface *xen_xc;
 
 /*
- * We don't support Xen prior to 4.2.0.
+ * We don't support Xen prior to 4.7.1.
  */
 
-/* Xen 4.2 through 4.6 */
-#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 40701
-
-typedef xc_interface xenforeignmemory_handle;
-
-#define xenforeignmemory_open(l, f) xen_xc
-#define xenforeignmemory_close(h)
-
-#else /* CONFIG_XEN_CTRL_INTERFACE_VERSION >= 40701 */
-
 #include <xenforeignmemory.h>
 
-#endif
-
 extern xenforeignmemory_handle *xen_fmem;
 
 #if CONFIG_XEN_CTRL_INTERFACE_VERSION < 40900
@@ -148,8 +136,6 @@ static inline xendevicemodel_handle *xendevicemodel_open(
     return xen_xc;
 }
 
-#if CONFIG_XEN_CTRL_INTERFACE_VERSION >= 40500
-
 static inline int xendevicemodel_create_ioreq_server(
     xendevicemodel_handle *dmod, domid_t domid, int handle_bufioreq,
     ioservid_t *id)
@@ -211,8 +197,6 @@ static inline int xendevicemodel_set_ioreq_server_state(
     return xc_hvm_set_ioreq_server_state(dmod, domid, id, enabled);
 }
 
-#endif /* CONFIG_XEN_CTRL_INTERFACE_VERSION >= 40500 */
-
 static inline int xendevicemodel_set_pci_intx_level(
     xendevicemodel_handle *dmod, domid_t domid, uint16_t segment,
     uint8_t bus, uint8_t device, uint8_t intx, unsigned int level)
@@ -340,15 +324,6 @@ static inline int xen_get_vmport_regs_pfn(xc_interface *xc, domid_t dom,
 }
 #endif
 
-/* Xen before 4.6 */
-#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 40600
-
-#ifndef HVM_IOREQSRV_BUFIOREQ_ATOMIC
-#define HVM_IOREQSRV_BUFIOREQ_ATOMIC 2
-#endif
-
-#endif
-
 static inline int xen_get_default_ioreq_server_info(domid_t dom,
                                                     xen_pfn_t *ioreq_pfn,
                                                     xen_pfn_t *bufioreq_pfn,
@@ -386,84 +361,6 @@ static inline int xen_get_default_ioreq_server_info(domid_t dom,
     return 0;
 }
 
-/* Xen before 4.5 */
-#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 40500
-
-#ifndef HVM_PARAM_BUFIOREQ_EVTCHN
-#define HVM_PARAM_BUFIOREQ_EVTCHN 26
-#endif
-
-#define IOREQ_TYPE_PCI_CONFIG 2
-
-typedef uint16_t ioservid_t;
-
-static inline void xen_map_memory_section(domid_t dom,
-                                          ioservid_t ioservid,
-                                          MemoryRegionSection *section)
-{
-}
-
-static inline void xen_unmap_memory_section(domid_t dom,
-                                            ioservid_t ioservid,
-                                            MemoryRegionSection *section)
-{
-}
-
-static inline void xen_map_io_section(domid_t dom,
-                                      ioservid_t ioservid,
-                                      MemoryRegionSection *section)
-{
-}
-
-static inline void xen_unmap_io_section(domid_t dom,
-                                        ioservid_t ioservid,
-                                        MemoryRegionSection *section)
-{
-}
-
-static inline void xen_map_pcidev(domid_t dom,
-                                  ioservid_t ioservid,
-                                  PCIDevice *pci_dev)
-{
-}
-
-static inline void xen_unmap_pcidev(domid_t dom,
-                                    ioservid_t ioservid,
-                                    PCIDevice *pci_dev)
-{
-}
-
-static inline void xen_create_ioreq_server(domid_t dom,
-                                           ioservid_t *ioservid)
-{
-}
-
-static inline void xen_destroy_ioreq_server(domid_t dom,
-                                            ioservid_t ioservid)
-{
-}
-
-static inline int xen_get_ioreq_server_info(domid_t dom,
-                                            ioservid_t ioservid,
-                                            xen_pfn_t *ioreq_pfn,
-                                            xen_pfn_t *bufioreq_pfn,
-                                            evtchn_port_t *bufioreq_evtchn)
-{
-    return xen_get_default_ioreq_server_info(dom, ioreq_pfn,
-                                             bufioreq_pfn,
-                                             bufioreq_evtchn);
-}
-
-static inline int xen_set_ioreq_server_state(domid_t dom,
-                                             ioservid_t ioservid,
-                                             bool enable)
-{
-    return 0;
-}
-
-/* Xen 4.5 */
-#else
-
 static bool use_default_ioreq_server;
 
 static inline void xen_map_memory_section(domid_t dom,
@@ -624,6 +521,4 @@ static inline int xen_set_ioreq_server_state(domid_t dom,
                                                  enable);
 }
 
-#endif
-
 #endif /* QEMU_HW_XEN_NATIVE_H */
diff --git a/meson.build b/meson.build
index c44d05a13f..1f223ae7fc 100644
--- a/meson.build
+++ b/meson.build
@@ -1425,16 +1425,13 @@ if get_option('xen').enabled() or (get_option('xen').auto() and have_system)
     endif
   endif
   if not xen.found()
-    xen_tests = [ '4.11.0', '4.10.0', '4.9.0', '4.8.0', '4.7.1', '4.6.0', '4.5.0', '4.2.0' ]
+    xen_tests = [ '4.11.0', '4.10.0', '4.9.0', '4.8.0', '4.7.1' ]
     xen_libs = {
       '4.11.0': [ 'xenstore', 'xenctrl', 'xendevicemodel', 'xenforeignmemory', 'xengnttab', 'xenevtchn', 'xentoolcore' ],
       '4.10.0': [ 'xenstore', 'xenctrl', 'xendevicemodel', 'xenforeignmemory', 'xengnttab', 'xenevtchn', 'xentoolcore' ],
       '4.9.0': [ 'xenstore', 'xenctrl', 'xendevicemodel', 'xenforeignmemory', 'xengnttab', 'xenevtchn' ],
       '4.8.0': [ 'xenstore', 'xenctrl', 'xenforeignmemory', 'xengnttab', 'xenevtchn' ],
       '4.7.1': [ 'xenstore', 'xenctrl', 'xenforeignmemory', 'xengnttab', 'xenevtchn' ],
-      '4.6.0': [ 'xenstore', 'xenctrl' ],
-      '4.5.0': [ 'xenstore', 'xenctrl' ],
-      '4.2.0': [ 'xenstore', 'xenctrl' ],
     }
     xen_deps = {}
     foreach ver: xen_tests
diff --git a/scripts/xen-detect.c b/scripts/xen-detect.c
index 85e8206490..db049e605c 100644
--- a/scripts/xen-detect.c
+++ b/scripts/xen-detect.c
@@ -138,66 +138,6 @@
     return 0;
   }
 
-#elif CONFIG_XEN_CTRL_INTERFACE_VERSION == 40600
-  #include <xenctrl.h>
-  #include <xenstore.h>
-  #include <stdint.h>
-  #include <xen/hvm/hvm_info_table.h>
-  #if !defined(HVM_MAX_VCPUS)
-  # error HVM_MAX_VCPUS not defined
-  #endif
-  int main(void) {
-    xc_interface *xc;
-    xs_daemon_open();
-    xc = xc_interface_open(0, 0, 0);
-    xc_hvm_set_mem_type(0, 0, HVMMEM_ram_ro, 0, 0);
-    xc_gnttab_open(NULL, 0);
-    xc_domain_add_to_physmap(0, 0, XENMAPSPACE_gmfn, 0, 0);
-    xc_hvm_inject_msi(xc, 0, 0xf0000000, 0x00000000);
-    xc_hvm_create_ioreq_server(xc, 0, HVM_IOREQSRV_BUFIOREQ_ATOMIC, NULL);
-    xc_reserved_device_memory_map(xc, 0, 0, 0, 0, NULL, 0);
-    return 0;
-  }
-
-#elif CONFIG_XEN_CTRL_INTERFACE_VERSION == 40500
-  #include <xenctrl.h>
-  #include <xenstore.h>
-  #include <stdint.h>
-  #include <xen/hvm/hvm_info_table.h>
-  #if !defined(HVM_MAX_VCPUS)
-  # error HVM_MAX_VCPUS not defined
-  #endif
-  int main(void) {
-    xc_interface *xc;
-    xs_daemon_open();
-    xc = xc_interface_open(0, 0, 0);
-    xc_hvm_set_mem_type(0, 0, HVMMEM_ram_ro, 0, 0);
-    xc_gnttab_open(NULL, 0);
-    xc_domain_add_to_physmap(0, 0, XENMAPSPACE_gmfn, 0, 0);
-    xc_hvm_inject_msi(xc, 0, 0xf0000000, 0x00000000);
-    xc_hvm_create_ioreq_server(xc, 0, 0, NULL);
-    return 0;
-  }
-
-#elif CONFIG_XEN_CTRL_INTERFACE_VERSION == 40200
-  #include <xenctrl.h>
-  #include <xenstore.h>
-  #include <stdint.h>
-  #include <xen/hvm/hvm_info_table.h>
-  #if !defined(HVM_MAX_VCPUS)
-  # error HVM_MAX_VCPUS not defined
-  #endif
-  int main(void) {
-    xc_interface *xc;
-    xs_daemon_open();
-    xc = xc_interface_open(0, 0, 0);
-    xc_hvm_set_mem_type(0, 0, HVMMEM_ram_ro, 0, 0);
-    xc_gnttab_open(NULL, 0);
-    xc_domain_add_to_physmap(0, 0, XENMAPSPACE_gmfn, 0, 0);
-    xc_hvm_inject_msi(xc, 0, 0xf0000000, 0x00000000);
-    return 0;
-  }
-
 #else
 #error invalid CONFIG_XEN_CTRL_INTERFACE_VERSION
 #endif
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:54:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:54:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520329.807814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfbj-0005EX-Tw; Wed, 12 Apr 2023 18:54:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520329.807814; Wed, 12 Apr 2023 18:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfbj-0005EQ-RL; Wed, 12 Apr 2023 18:54:15 +0000
Received: by outflank-mailman (input) for mailman id 520329;
 Wed, 12 Apr 2023 18:54:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mKmc=AD=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pmfbi-0005EC-3F
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:54:14 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 68b94743-d963-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 20:54:10 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id
 4fb4d7f45d1cf-504718a2282so3778162a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Apr 2023 11:54:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68b94743-d963-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681325649; x=1683917649;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=OCBM1eYOuCRMs+HWvFuj5wOCzBihsmjaSCKE5mxqFgI=;
        b=VXQdX4vyMY3Ts0kkSN2W7HmWqpWcjPqeqmESwkSYw/q+25N8hvpY4U6FCHolp/aSvK
         MNKlEndf4lQRDiitq6UIN/+34xz3Q7//qS6jjPrgY6j8IHZHoZJ3fOwKmuVDRlkTQ0jA
         xE7LoPfyFE5cmWTELrHJ5NIQ/1Q5ioX4YcfBW3IbzyXXMmLqaMDaQjndFk7+sXFoSz+A
         Mbusyjbo6iE+llpLR9OFKLxabubeD369SAwLc2bQfq8kh8o3gk/AhTStNyTVDXZ0THrE
         4bGnZhRjZYE+yVpugd4VoPmQYiuTZGQlS6Gm1GePZxuvuniS954uRoeLQ0SOjiZpCHym
         HIrA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681325649; x=1683917649;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=OCBM1eYOuCRMs+HWvFuj5wOCzBihsmjaSCKE5mxqFgI=;
        b=jYg11YJFCJnni/WOkjUCBTQklHTkQHtQEF5yl3ITg4VTX9mFt5XJeg9215CwdVxi1Q
         vPKrzUPmR/D7MIMI9GZZaV5dqp0g5zySZDBfYuea9XWrI7kmDs6b0VK8Xd1OtgzBmOJh
         ygQ1pAaJ+6JTnYLnqhQAFwf54y9mAl3P5GrsHHeR9livBehiPqrnubmDcmLeG538n8sM
         ZwcHDs2ezBzYaRpcNRwbJcLgU9HqTYbF/bGMzhQx67QSKt+iokXJFiq/RZO96BruKXgu
         vPdT47vqm8BedrppLkc45BoH1obz3oEFB2le5NdABSUagrOmbt5iECq3xhy1NxYp+MSQ
         M8+g==
X-Gm-Message-State: AAQBX9cqtmyd/xfFZZ9X8ngJnrTJPwrY5lEI3HrYKDgvUc/qgsYOMvZ3
	q0K0os2eKEtcuCVI4ZBni+LCMe/EkR0Y9EAFJpTL+g==
X-Google-Smtp-Source: AKy350YpkBKagCiorTlciXGC1XIYvr/Cp6SiNr9+jL2am6FQOLrXgpQ/KR800WHaG6IAXmEJFIDhvRUdZb691xauqlI=
X-Received: by 2002:a50:9f65:0:b0:505:521:7880 with SMTP id
 b92-20020a509f65000000b0050505217880mr1382745edf.6.1681325649641; Wed, 12 Apr
 2023 11:54:09 -0700 (PDT)
MIME-Version: 1.0
References: <20230307182707.2298618-1-dwmw2@infradead.org> <20230307182707.2298618-23-dwmw2@infradead.org>
 <CAFEAcA-vCihVupZsLBdh6+-xjdNX2-K1Ceo+tgsjA=KCdWTjpg@mail.gmail.com> <92e10c45117dce9c07304a567fd412434ea0ddd3.camel@infradead.org>
In-Reply-To: <92e10c45117dce9c07304a567fd412434ea0ddd3.camel@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Wed, 12 Apr 2023 19:53:58 +0100
Message-ID: <CAFEAcA8W49n-O3jokxQjYD-u0ScC+giGdryai4bKy_Lk-nrseA@mail.gmail.com>
Subject: Re: [PULL 22/27] hw/xen: Add emulated implementation of XenStore operations
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, 
	Joao Martins <joao.m.martins@oracle.com>, Ankur Arora <ankur.a.arora@oracle.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, vikram.garhwal@amd.com, 
	Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org, 
	Juan Quintela <quintela@redhat.com>, "Dr . David Alan Gilbert" <dgilbert@redhat.com>
Content-Type: text/plain; charset="UTF-8"

On Wed, 12 Apr 2023 at 19:22, David Woodhouse <dwmw2@infradead.org> wrote:
>
> On Tue, 2023-04-11 at 19:07 +0100, Peter Maydell wrote:
> >
> >
> > > +static void xs_be_unwatch(struct qemu_xs_handle *h, struct
> > > qemu_xs_watch *w)
> > > +{
> > > +    xs_impl_unwatch(h->impl, DOMID_QEMU, w->path, NULL,
> > > xs_be_watch_cb, w);
> >
> > Coverity points out that this is the only call to xs_impl_unwatch()
> > where we don't check the return value. Is there some useful way
> > we can report the error, or is it a "we're closing everything down
> > anyway, no way to report anything" situation? (This particular
> > Coverity heuristic is quite prone to false positives, so if that's
> > the way it is I'll just mark it as a f-p in the coverity UI.)
>
> This is because the Xen libxenstore API doesn't return an error, and
> this is the ops function which emulates that same API. I suppose we
> could explicitly cast to void with a comment to that effect, to avoid
> having it linger in Coverity? I think that's sufficient to make
> Coverity shut up, isn't it?

I've just marked it a false-positive in the UI. Coverity's generally
good at not resurfacing old false-positives, so don't bother
changing the code unless you think it would improve clarity for
a human reader.

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:55:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:55:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520333.807825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfcs-0005rH-6q; Wed, 12 Apr 2023 18:55:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520333.807825; Wed, 12 Apr 2023 18:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfcs-0005rA-49; Wed, 12 Apr 2023 18:55:26 +0000
Received: by outflank-mailman (input) for mailman id 520333;
 Wed, 12 Apr 2023 18:55:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mKmc=AD=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pmfcr-0005r2-1b
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:55:25 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 94c16079-d963-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 20:55:24 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id
 a640c23a62f3a-94a342ef3beso236757466b.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Apr 2023 11:55:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94c16079-d963-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681325723; x=1683917723;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=V4pWJbzUQpY8EmvUozfn2vkGqj1/INMaAK1x13wucOs=;
        b=toMZkIj6jmJNXgvJM2iCJhd+tD4KlpE79RuzB12bmf2Vf6UYUICXsSC75ScnWgNUlp
         tNF09+6k1wHrTegGkF+f3yr7jQ460DjY/5MoH1XJXhPYVXFBWkltlvFLbpRHOS3TyzEy
         qNV0aH3PtcM/Ao9OvaQ6Lgo5RbfjMqNgHeaqznU2zrfidOotrI8elhxFU891tmc2KMWm
         IbeT3qpoGz+4emMfmcmeYbAj4t+ZGsD76hNBskqsfWYLEaDfiMkLgGmwEd5OsO+TXFF+
         CIhidmHbcloULT6o0kItJmZYi1RglGW+MqFbeYMRkHQkYwjGDoJKusElTRbKQhNnEAZr
         tcpw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681325723; x=1683917723;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=V4pWJbzUQpY8EmvUozfn2vkGqj1/INMaAK1x13wucOs=;
        b=GL6hqq4UzbbhwAvUdlXLKto0Wy/hAKVhUiV4lv4e7w1kPeuKk3mWsA3UY5JqquUFzD
         jBOkmJV50vgdGET050kfNRlvDBRSkApZBebh13U45dfGEkxubApiGHh+JPYQ1dVlI/Gd
         J/LVUtUhM5ktfofjivFW/ZvS841ZsW4aAcmpNAj/nga9qTFgOLRhoVgsvHBwTqMydckU
         R/lvrewev5d5FIH20ihT/LxOAlEnDkxAYl+FGRKW0KOgVtANWlpKKd41XshuEySp3pWJ
         IwgW6GN0Xh5rghxcWfRNmsZtELNp5ez/xJK3VZdGZF+aUtcpwbHRTiya/VMoyOYccxoM
         JaUA==
X-Gm-Message-State: AAQBX9dJ9oRx8hAkLHbNCkuDLJE+KgEm1oF0NvdFEqN2ar5B4vbyPyDZ
	GSeMqJairJj7tFYyb/+1yiCB+mmb7lFBDI7P/8/OXg==
X-Google-Smtp-Source: AKy350brHiGejy7/I5Am1Uiea2aUHJKtqkhZKfRHYEYkw1NA/RSGTtjpBQnE8UYQsaFjrtq6Yg3S62pCjzMkAKOp+BA=
X-Received: by 2002:a50:9f65:0:b0:505:521:7880 with SMTP id
 b92-20020a509f65000000b0050505217880mr1384308edf.6.1681325723523; Wed, 12 Apr
 2023 11:55:23 -0700 (PDT)
MIME-Version: 1.0
References: <20230412185102.441523-1-dwmw2@infradead.org>
In-Reply-To: <20230412185102.441523-1-dwmw2@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Wed, 12 Apr 2023 19:55:12 +0100
Message-ID: <CAFEAcA9G0KpkOivD8fBvEQwGcTsUQz53z5W53YcjcHmZGPHkmQ@mail.gmail.com>
Subject: Re: [PATCH for-8.0 0/5] Xen emulation build/Coverity fixes
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, 
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
	=?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>, 
	Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, =?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
	Thomas Huth <thuth@redhat.com>, =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@linaro.org>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 12 Apr 2023 at 19:52, David Woodhouse <dwmw2@infradead.org> wrote:
>
> Some Coverity fixes and minor cleanups. And most notably, dropping
> support for Xen libraries older than 4.7.1.
>
> I believe there are two issues that remain to be fixed. The x32 build
> fails, and I've seen patches which attempt to detect x32 and disable
> the Xen emulation. Along with assertions that we just shouldn't care.
> I don't have a strong opinion either way but it seems to be in hand.
>
> The other is the question of what Xen *actually* does if you try to
> unmap an IRQ_MSI_EMU PIRQ. I don't think Linux guests try that, and
> I'm fairly sure Windows doesn't even use MSI→PIRQ mappings in the
> first place, and I doubt any other guests care either. I'd like to
> establish the 'correct' behaviour and implement it, ideally before
> the 8.0 release, but it's going to take me a few days more.
>
> David Woodhouse (5):
>       hw/xen: Simplify emulated Xen platform init
>       hw/xen: Fix memory leak in libxenstore_open() for Xen
>       xen: Drop support for Xen versions below 4.7.1
>       hw/xen: Fix double-free in xen_console store_con_info()
>       hw/xen: Fix broken check for invalid state in xs_be_open()
>

This is highly unlikely to make 8.0 at this point, FYI.
If there's anything in this you think is super-critical we
might be able to sneak it in.

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:57:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:57:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520337.807835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmffD-0006RQ-Jp; Wed, 12 Apr 2023 18:57:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520337.807835; Wed, 12 Apr 2023 18:57:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmffD-0006RJ-GL; Wed, 12 Apr 2023 18:57:51 +0000
Received: by outflank-mailman (input) for mailman id 520337;
 Wed, 12 Apr 2023 18:57:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mKmc=AD=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pmffC-0006RB-6b
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:57:50 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eac0b8dd-d963-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 20:57:48 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id
 a640c23a62f3a-94a34a0fc1dso297428266b.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Apr 2023 11:57:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eac0b8dd-d963-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681325868; x=1683917868;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=mwuMRWTckpmHAxe9wcRRZwJ9PRGYotK6uPdDjaX47Hw=;
        b=UFvkjSNf07xqd6h81iyqZZVbdIKwVidmZpnv982GZosyT3wl3xhCoALO9/LDw45xnq
         0vzZ7KxG3EtKzQgOX3qdBV0FRh+Qoxi+WvK2hFMxIdDaAszhRqaKgcSYYnL50ZjlSM4n
         gQQkYf0xVtuA+P5lWQTrymgcX2CxMzrqm75TY/dEFCchTk8RsyB5djPO2M0olMCkXTtu
         NRu5ImExgJ6ZVZi0c5PAOCgjTSJxejwJHKI4ekw4imu8ReQYQfpmqNjfAGEzQNxewuyQ
         meemcXyNP1InerS5KRghmuEGZE0+/ABLAQA1gcBDnrDwER+q2cZ1eO0IlLQwb1yRLqHC
         PFHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681325868; x=1683917868;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=mwuMRWTckpmHAxe9wcRRZwJ9PRGYotK6uPdDjaX47Hw=;
        b=IDvjXLvadYmfSXRr8nFusoYhido4OKwQ5z3s6MSO3Wy2ouPoymhIkTYJ1wTDDHpLzP
         Mokc4Pszn9ep3AJb51bTXV0PZf8pjx+U6FEr6z8DyRiUMGjYQzun/4x5MmtUIkDO/lVi
         K4UUt0TesgqyAVZOzM28gnJ9SvI8kamtXAxMrjXKx2ZPoTVMsyLB1uBFGUa9j4bmtLGD
         Z3sshSeSDHfAsZhxnizAYlxrFIKr2Piu3OP3WbuhhkkRwLlUEvqVbSiyZ1y2TFlhmoHt
         BFY6Tqz/75CVWJsH6ivIZssxN2+wKhX5W7Dvf+Slfy5ZmoE+v9/bI0vZcKKT+nt4f6uu
         y4CQ==
X-Gm-Message-State: AAQBX9d4JWqonS2y10Cj6nAlaPP/0bvzryQNNUlZhpzrG58VSZeEeIG2
	Z76bkeaDCtnDxT4dspULVis7zxGO5MsTzgwAoGn8sw==
X-Google-Smtp-Source: AKy350b4Tq5Y77eGxUkuJXiU9YL7lKJFcM6uLvpeqBXHaxWzpKfedPmHYoMDes2bLhYp48y7W4tZfu4nqa5bO2ofXBQ=
X-Received: by 2002:a50:d71e:0:b0:504:9c1f:1c65 with SMTP id
 t30-20020a50d71e000000b005049c1f1c65mr6052761edi.6.1681325867781; Wed, 12 Apr
 2023 11:57:47 -0700 (PDT)
MIME-Version: 1.0
References: <20230412185102.441523-1-dwmw2@infradead.org> <20230412185102.441523-6-dwmw2@infradead.org>
In-Reply-To: <20230412185102.441523-6-dwmw2@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Wed, 12 Apr 2023 19:57:37 +0100
Message-ID: <CAFEAcA9staAPX2buXO0MWj2yzVU1n22xLx-PEvOa5Aa2xC4YWw@mail.gmail.com>
Subject: Re: [PATCH 5/5] hw/xen: Fix broken check for invalid state in xs_be_open()
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, 
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
	=?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>, 
	Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, =?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
	Thomas Huth <thuth@redhat.com>, =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@linaro.org>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Wed, 12 Apr 2023 at 19:52, David Woodhouse <dwmw2@infradead.org> wrote:
>
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> Coverity points out that if (!s && !s->impl) isn't really what we intended
> to do here. CID 1508131.
>
> Fixes: 032475127225 ("hw/xen: Add emulated implementation of XenStore operations")
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 18:59:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 18:59:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520350.807844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfgr-00071l-To; Wed, 12 Apr 2023 18:59:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520350.807844; Wed, 12 Apr 2023 18:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfgr-00071e-RJ; Wed, 12 Apr 2023 18:59:33 +0000
Received: by outflank-mailman (input) for mailman id 520350;
 Wed, 12 Apr 2023 18:59:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mKmc=AD=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pmfgr-00071Y-5T
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 18:59:33 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 27f1d415-d964-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 20:59:31 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-94a4a898649so223231666b.2
 for <xen-devel@lists.xenproject.org>; Wed, 12 Apr 2023 11:59:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27f1d415-d964-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681325970; x=1683917970;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=rfKtPIZLyI/9kyv0UlHMQW/E9H0QfT6xiEL4AwLN9MA=;
        b=mMglum5fqjfiCG7dFKCTVMDKeWjPt0TTq66CTLfgOcTIRq7JNlX+ickLCdUi669e1C
         GvykWxqeZRdVcPEBFuaLBzprxK8efJXDWK0ryJAYraZLFjAzQuTcNgihiu3y8Fx/bG4j
         0rzmVV9NZ3L9tRjU48S01OFPLBEvTA+LKl6YHEPLWriBH+1tPJgDqdYtTke4wSdh+r9C
         8DfQGztPQfD+wDrkiDMBUSf8MMat4W6chryhuxWCWg+c3wB2Gqg0GUzKQ33dI7R+UAw8
         hF6bbkEFmyJCS68UEPqY9ywQvXP/TlWXQnICcXyZxL51Y4Ci/px0t40bjvEqXu3wrtp9
         nPQw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681325970; x=1683917970;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=rfKtPIZLyI/9kyv0UlHMQW/E9H0QfT6xiEL4AwLN9MA=;
        b=V+TA7Pq+nl/Nv4HVoqPyZRsY4rkeiapBdL/KpFioCKejtBd04LoqZs72mmcLQRACcl
         m+9CDagf18GajogAfExcVFwXFUL85/0JagNjCZQlm9KCtST66+PrBKgtM+bbvz0OQeYL
         UoRMOqNrk9NgWEFBPx11o3bpBm3yYs1er5NU6A5Mnwmbs1NojUXuaNmAyoWJHwvuPYD1
         LBCqZ/z+n8/1TRef6w+jaJQhDpcq3k6u4yEOjWw72l0ShaZcBmc6oQinp9mhSh4qURRn
         SHu883ZC4dgiW1CTHKkL8BuSQHpqkreMyYfEQlBw2MvPgJp6ynaW8H1mn2HbfDuiBkIp
         zkVg==
X-Gm-Message-State: AAQBX9dfpe2p9Wa+UY3zkA/jpbrk7eO0IqO2RAhj1Q7PAGLMQlEqHtNO
	eXo7FX+3Glz3DZbnVJ16vAyfYRzq2n/vbmhWFm8MkQ==
X-Google-Smtp-Source: AKy350axA4G9jLT9eVbrgGwoJ8edxXEnerhD+zot/u3c030bUJZC7VGQJkI6O/FFhxjTF6mlsmSnzBPsK6fIIXKEV88=
X-Received: by 2002:a50:cd42:0:b0:506:6ca5:3128 with SMTP id
 d2-20020a50cd42000000b005066ca53128mr82975edj.6.1681325970501; Wed, 12 Apr
 2023 11:59:30 -0700 (PDT)
MIME-Version: 1.0
References: <20230412185102.441523-1-dwmw2@infradead.org> <20230412185102.441523-5-dwmw2@infradead.org>
In-Reply-To: <20230412185102.441523-5-dwmw2@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Wed, 12 Apr 2023 19:59:19 +0100
Message-ID: <CAFEAcA8KS1Hx+zwFPa=8em3RnQJFfDtg0S486U44Byb6+yujMA@mail.gmail.com>
Subject: Re: [PATCH 4/5] hw/xen: Fix double-free in xen_console store_con_info()
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, 
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
	=?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>, 
	Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, =?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
	Thomas Huth <thuth@redhat.com>, =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@linaro.org>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

On Wed, 12 Apr 2023 at 19:52, David Woodhouse <dwmw2@infradead.org> wrote:
>
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> Coverity spotted a double-free (CID 1508254); we g_string_free(path) and
> then for some reason immediately call free(path) too.
>
> We should just use g_autoptr() for it anyway, which simplifies the code
> a bit.
>
> Fixes: 7a8a749da7d3 ("hw/xen: Move xenstore_store_pv_console_info to xen_console.c")
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 19:01:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 19:01:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520354.807854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfj2-0008T6-9t; Wed, 12 Apr 2023 19:01:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520354.807854; Wed, 12 Apr 2023 19:01:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfj2-0008Sz-6p; Wed, 12 Apr 2023 19:01:48 +0000
Received: by outflank-mailman (input) for mailman id 520354;
 Wed, 12 Apr 2023 19:01:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3LfV=AD=casper.srs.infradead.org=BATV+9719990f4703cc1bc73b+7171+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pmfj0-0008St-Gs
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 19:01:46 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 76bf6d42-d964-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 21:01:43 +0200 (CEST)
Received: from [2001:8b0:10b:5:bf9d:c0ec:e079:876a]
 (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pmfik-00789I-P3; Wed, 12 Apr 2023 19:01:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76bf6d42-d964-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=IPYtVkA5Vq0tNDE8dtZeWC5ZqapWHQVzVZtmTBnL1ZQ=; b=XepvWdNzPbSLUlexNxvVOu2U4m
	Tm0WKDGqScwiYlWMsZeeuXfPlrUT0kBVASk6XfXH0As4vYiNvucpCT6JQvD62Y3IDXKGr8D1Wjulj
	itbYwWJDWr+3ehhmLqDgTndJ0caf8HZ55qry3MIl/2Z1DhVF7cSQEYud6H3KHsFAzkgK+LUcoq3M6
	91dfUjr3LGcsVyXZIU+RYCv+fW223GLJ6cqAWdmbq0WWVDWNgPS3ilW41Jt8vl+iAC/N3NmPAdEZo
	yWdhsQofAotmzmNnDa2Vv1YEEjYCjjJffc19BBZce/v6SMChVOlUO+Xzv9VJU/cQXs59WVjVOlpTJ
	XM22Jc9w==;
Message-ID: <ac9417c017a2f1bda399d831b100e9b009f8d4c2.camel@infradead.org>
Subject: Re: [PATCH for-8.0 0/5] Xen emulation build/Coverity fixes
From: David Woodhouse <dwmw2@infradead.org>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, Anthony
 Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
 =?ISO-8859-1?Q?Marc-Andr=E9?= Lureau <marcandre.lureau@redhat.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Richard Henderson
 <richard.henderson@linaro.org>, Eduardo Habkost <eduardo@habkost.net>,
 "Michael S. Tsirkin" <mst@redhat.com>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, "Daniel P." =?ISO-8859-1?Q?Berrang=E9?=
 <berrange@redhat.com>, Thomas Huth <thuth@redhat.com>, Philippe
 =?ISO-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
 xen-devel@lists.xenproject.org
Date: Wed, 12 Apr 2023 20:01:29 +0100
In-Reply-To: <CAFEAcA9G0KpkOivD8fBvEQwGcTsUQz53z5W53YcjcHmZGPHkmQ@mail.gmail.com>
References: <20230412185102.441523-1-dwmw2@infradead.org>
	 <CAFEAcA9G0KpkOivD8fBvEQwGcTsUQz53z5W53YcjcHmZGPHkmQ@mail.gmail.com>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-hAPWpF3HBMdZEXQCQF5D"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-hAPWpF3HBMdZEXQCQF5D
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Wed, 2023-04-12 at 19:55 +0100, Peter Maydell wrote:
> On Wed, 12 Apr 2023 at 19:52, David Woodhouse <dwmw2@infradead.org> wrote:
> > 
> > Some Coverity fixes and minor cleanups. And most notably, dropping
> > support for Xen libraries older than 4.7.1.
> > 
> > I believe there are two issues that remain to be fixed. The x32 build
> > fails, and I've seen patches which attempt to detect x32 and disable
> > the Xen emulation. Along with assertions that we just shouldn't care.
> > I don't have a strong opinion either way but it seems to be in hand.
> > 
> > The other is the question of what Xen *actually* does if you try to
> > unmap an IRQ_MSI_EMU PIRQ. I don't think Linux guests try that, and
> > I'm fairly sure Windows doesn't even use MSI→PIRQ mappings in the
> > first place, and I doubt any other guests care either. I'd like to
> > establish the 'correct' behaviour and implement it, ideally before
> > the 8.0 release, but it's going to take me a few days more.
> > 
> > David Woodhouse (5):
> >       hw/xen: Simplify emulated Xen platform init
> >       hw/xen: Fix memory leak in libxenstore_open() for Xen
> >       xen: Drop support for Xen versions below 4.7.1
> >       hw/xen: Fix double-free in xen_console store_con_info()
> >       hw/xen: Fix broken check for invalid state in xs_be_open()
> > 
> 
> This is highly unlikely to make 8.0 at this point, FYI.
> If there's anything in this you think is super-critical we
> might be able to sneak it in.

Nothing is super-critical except maybe the double-free in
store_con_info(). That could lead to a crash on startup if the QEMU Xen
console is being used.
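[Editor's note: the crash-on-startup risk from a double-free like the one described in store_con_info() can be sketched in plain C. The names below (make_con_path, use_and_release) are illustrative only, not QEMU's actual code; the point is the ownership rule that fixes the bug: free in exactly one place, and NULL the pointer so any stray second free() is a defined no-op.]

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the double-free pattern: a helper hands out
 * a heap buffer, and the fix is to have exactly one owner free it. */
static char *make_con_path(int idx)
{
    char *buf = malloc(32);
    if (buf)
        snprintf(buf, 32, "console/%d", idx);
    return buf;          /* ownership passes to the caller here */
}

static int use_and_release(void)
{
    char *path = make_con_path(3);
    if (!path)
        return -1;
    int ok = (strcmp(path, "console/3") == 0);
    free(path);
    path = NULL;         /* after this, a redundant free is harmless */
    free(path);          /* free(NULL) is defined as a no-op in C */
    return ok ? 0 : -1;
}
```

With the pointer left dangling instead of NULLed, that second free() is undefined behaviour and typically aborts the process, which matches the "crash on startup" failure mode described above.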



--=-hAPWpF3HBMdZEXQCQF5D--


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 19:08:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 19:08:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520360.807865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfpV-0000nT-3G; Wed, 12 Apr 2023 19:08:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520360.807865; Wed, 12 Apr 2023 19:08:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmfpV-0000nM-0R; Wed, 12 Apr 2023 19:08:29 +0000
Received: by outflank-mailman (input) for mailman id 520360;
 Wed, 12 Apr 2023 19:08:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3LfV=AD=casper.srs.infradead.org=BATV+9719990f4703cc1bc73b+7171+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pmfpT-0000nE-Fe
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 19:08:27 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 66e8884c-d965-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 21:08:26 +0200 (CEST)
Received: from [2001:8b0:10b:5:bf9d:c0ec:e079:876a]
 (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pmfpI-0078TD-37; Wed, 12 Apr 2023 19:08:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66e8884c-d965-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=mZoMi8L2CGkeUh70HowUzZy76uyvh1EHPMDqyJF+jSQ=; b=pt6tJNuxqio0xIzwCenSsPnkc0
	mc2gGVIY72qbHxUQiSAlE3IolAcJZuJmDTFc/dJ64o0iveGkZamZ0noeuC02vRVc2S2bMREAD/tHP
	Swy58UEct7DWbpa6630gteGOTcDZXFYC/XjTq854kj2xgEMRNY56hg3Ke800BfbwDqeQTRvrIpbMv
	UPawFscf2h+n70N8Aak+3nH5aGHpZT3Rd462ovkAXUrpBLvTGd2bklcFidTOleabp2Fw70gWMABOC
	GD+3rVwwGA4OArTl4xe7mLEi2qWVja4DX3OXoPpx/YPUHXitz0mzdt0IlrGaXPGx7OTK0kOPjgpco
	LjEtet3g==;
Message-ID: <43bd491a139a2dd8ade04927ba318b8988dda4e4.camel@infradead.org>
Subject: Re: [PATCH for-8.0 0/5] Xen emulation build/Coverity fixes
From: David Woodhouse <dwmw2@infradead.org>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, Anthony
 Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
 =?ISO-8859-1?Q?Marc-Andr=E9?= Lureau <marcandre.lureau@redhat.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Richard Henderson
 <richard.henderson@linaro.org>, Eduardo Habkost <eduardo@habkost.net>,
 "Michael S. Tsirkin" <mst@redhat.com>, Marcel Apfelbaum
 <marcel.apfelbaum@gmail.com>, "Daniel P." =?ISO-8859-1?Q?Berrang=E9?=
 <berrange@redhat.com>, Thomas Huth <thuth@redhat.com>, Philippe
 =?ISO-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
 xen-devel@lists.xenproject.org
Date: Wed, 12 Apr 2023 20:08:14 +0100
In-Reply-To: <ac9417c017a2f1bda399d831b100e9b009f8d4c2.camel@infradead.org>
References: <20230412185102.441523-1-dwmw2@infradead.org>
	 <CAFEAcA9G0KpkOivD8fBvEQwGcTsUQz53z5W53YcjcHmZGPHkmQ@mail.gmail.com>
	 <ac9417c017a2f1bda399d831b100e9b009f8d4c2.camel@infradead.org>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-xWkwo3ffcuylPKwCMhTB"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-xWkwo3ffcuylPKwCMhTB
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Wed, 2023-04-12 at 20:01 +0100, David Woodhouse wrote:
> On Wed, 2023-04-12 at 19:55 +0100, Peter Maydell wrote:
> > On Wed, 12 Apr 2023 at 19:52, David Woodhouse <dwmw2@infradead.org> wrote:
> > > 
> > > Some Coverity fixes and minor cleanups. And most notably, dropping
> > > support for Xen libraries older than 4.7.1.
> > > 
> > > I believe there are two issues that remain to be fixed. The x32 build
> > > fails, and I've seen patches which attempt to detect x32 and disable
> > > the Xen emulation. Along with assertions that we just shouldn't care.
> > > I don't have a strong opinion either way but it seems to be in hand.
> > > 
> > > The other is the question of what Xen *actually* does if you try to
> > > unmap an IRQ_MSI_EMU PIRQ. I don't think Linux guests try that, and
> > > I'm fairly sure Windows doesn't even use MSI→PIRQ mappings in the
> > > first place, and I doubt any other guests care either. I'd like to
> > > establish the 'correct' behaviour and implement it, ideally before
> > > the 8.0 release, but it's going to take me a few days more.
> > > 
> > > David Woodhouse (5):
> > >       hw/xen: Simplify emulated Xen platform init
> > >       hw/xen: Fix memory leak in libxenstore_open() for Xen
> > >       xen: Drop support for Xen versions below 4.7.1
> > >       hw/xen: Fix double-free in xen_console store_con_info()
> > >       hw/xen: Fix broken check for invalid state in xs_be_open()
> > > 
> > 
> > This is highly unlikely to make 8.0 at this point, FYI.
> > If there's anything in this you think is super-critical we
> > might be able to sneak it in.
> 
> Nothing is super-critical except maybe the double-free in
> store_con_info(). That could lead to a crash on startup if the QEMU Xen
> console is being used.

Although we could just do the one-liner that drops the extra 'free'
instead of converting to g_autoptr.



--=-xWkwo3ffcuylPKwCMhTB--


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 19:23:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 19:23:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520364.807875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmg3x-0003El-CM; Wed, 12 Apr 2023 19:23:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520364.807875; Wed, 12 Apr 2023 19:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmg3x-0003Ee-8V; Wed, 12 Apr 2023 19:23:25 +0000
Received: by outflank-mailman (input) for mailman id 520364;
 Wed, 12 Apr 2023 19:23:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmg3w-0003EU-Sc; Wed, 12 Apr 2023 19:23:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmg3w-0001wq-QG; Wed, 12 Apr 2023 19:23:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmg3w-0004Ln-Cp; Wed, 12 Apr 2023 19:23:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmg3w-0003My-CL; Wed, 12 Apr 2023 19:23:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uSmtsTWaov06wpgREGReyhxDchumTUqWKni7VZLnN/w=; b=gVajRp/esVobMnpGsWegXDRehT
	Be3KVIGkgKp5aCM0bHygbW3p8+lhfKWDJb5pzB7S+nqyatodzVrjJjLAJ/oz74Aps0KWA/b5UyEOA
	TRzjtnHiR8zgZhtCh/gHjJQaHyemDaAyBPXcTtgCr+uCicstIuoHaze0yV5PEfO/BZ3s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180221-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180221: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=89520115b89d755ae79f61850c356aae4be2d9ad
X-Osstest-Versions-That:
    ovmf=b991aec0509f24ae7573d732ba337549ecee310c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Apr 2023 19:23:24 +0000

flight 180221 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180221/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 89520115b89d755ae79f61850c356aae4be2d9ad
baseline version:
 ovmf                 b991aec0509f24ae7573d732ba337549ecee310c

Last test of basis   180216  2023-04-12 07:13:37 Z    0 days
Testing same since   180221  2023-04-12 12:11:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Benjamin Doron <benjamin.doron00@gmail.com>
  Sean Rhodes <sean@starlabs.systems>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b991aec050..89520115b8  89520115b89d755ae79f61850c356aae4be2d9ad -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 20:04:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 20:04:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520370.807885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmghM-0007fu-EQ; Wed, 12 Apr 2023 20:04:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520370.807885; Wed, 12 Apr 2023 20:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmghM-0007fn-Az; Wed, 12 Apr 2023 20:04:08 +0000
Received: by outflank-mailman (input) for mailman id 520370;
 Wed, 12 Apr 2023 20:04:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmghK-0007fh-GV
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 20:04:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmghJ-00039z-CT; Wed, 12 Apr 2023 20:04:05 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmghJ-0004cU-5q; Wed, 12 Apr 2023 20:04:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=rf55FNKq+3vzbc7Nnoy2CdPTLoMelHYpsXH6DzqDWIo=; b=vTNhmpSQZqXYkXD/IOY7ipUExn
	1W1u0knuQaen6LdHmYmQR2pQewwqSSdlgpZOA4iksIaodBBzL9dDvtr9FfX4zFIyyPutN95D3A12u
	/ctLeufxH1xQrmU6iee82+N4IvU58IGZUz3QGb4/+wnwD0PqOCcsc/saOF5gN99YRkcA=;
Message-ID: <4d4ecf63-dd07-439d-4fdf-85cebea05b29@xen.org>
Date: Wed, 12 Apr 2023 21:04:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, zithro <slack@rabbit.lu>,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, =?UTF-8?B?RWR3aW4gVMO2csO2aw==?=
 <edwin.torok@cloud.com>
References: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu>
 <474b531f-2067-a5d4-8b01-5b8ef1f7061d@citrix.com>
From: Julien Grall <julien@xen.org>
Subject: Re: xenstored: EACCESS error accessing control/feature-balloon 1
In-Reply-To: <474b531f-2067-a5d4-8b01-5b8ef1f7061d@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Andrew,

On 12/04/2023 17:05, Andrew Cooper wrote:
> On 12/04/2023 4:48 pm, zithro wrote:
>> Hi all,
>>
>> this is what I have in "xenstored-access.log" in dom0 :
>>
>> [20230411T23:48:27.917Z]  D5         write     control/feature-balloon 1
>> [20230411T23:48:27.917Z]  D5         error     EACCES
>> [20230411T23:48:27.923Z]  D5         write     data/updated Wed Apr 12 01:48:27 CEST 2023
>>
>> It happens once each minute, on two different hosts, both amd64.
>> (both hosts using the same config, only the hardware differs).
>>
>> I tried looking for a similar bug, but didn't find one.
>> I apologize in advance if this error is known, and if this is not the
>> place to report this!
>>
>> -----------------------
>> Technical infos
>> -----------------------
>> dom0s:
>>
>> Debian stable, kernel 5.10.0-21-amd64
>> Xen 4.14.5
>> xl.conf has : autoballoon="0"
>> GRUB_CMDLINE_XEN="dom0_mem=2048M,max:2048M dom0_max_vcpus=4
>> dom0_vcpus_pin loglvl=all guest_loglvl=all ucode=scan iommu=verbose"
>> Running "xenstore-ls -f -p | grep balloon" returns no result
>> -----------------------
>> domUs (D5 in above logs):
>>
>> HVM TrueNAS Core, based on FreeBSD 13.1-RELEASE-p7
>> (it happened also on previous FreeBSD releases, but I don't remember
>> when it started; logs have been filled and rotated).
>> In cfg files, using either the same value for "memory" and "maxmem" or
>> only setting "memory" gives the same results.
>>
>> What's strange is that I have xen* commands in FreeNAS :
>>
>> xen-detect        xenstore-control  xenstore-ls       xenstore-watch
>> xenstore          xenstore-exists   xenstore-read     xenstore-write
>> xenstore-chmod    xenstore-list     xenstore-rm
>>
>> root@truenas[~]# xenstore-ls
>> xenstore-ls: xs_directory (/): Permission denied
>>
>> root@truenas[~]# ps aux
>> root   [...]     0:36.98 [xenwatch]
>> root   [...]     0:01.01 [xenstore_rcv]
>> root   [...]     0:00.00 [balloon]
>> root   [...]     0:01.74 /bin/sh /usr/local/sbin/xe-daemon -p
>> /var/run/xe-daemon.pid
>> [...]
>>
>> The xe-daemon also looks strange; I don't use XenServer/XCP-ng, only
>> "raw" Xen.
>> And this script which hand
>>
>> I also use PFsense domUs (based on FreeBSD), but they don't exhibit
>> this behaviour (ie. no xenstore access error in dom0, no xen*
>> commands in domU).
>>
>> So is this a problem with TrueNAS rather than with Xen ?
>> If so I apologize for wasting your time.
>>
>> Thanks, have a nice day !
>> (and as it's my first post here: thx for Xen, it rocks)
> 
> Hello,
> 
> (Leaving the full report intact so CC'd people can see it whole)
> 
> Yes, it is TrueNAS trying to re-write that file every minute.  It
> appears that TrueNAS has inherited (from debian?) a rather old version
> of https://github.com/xenserver/xe-guest-utilities/
> 
> https://xenbits.xen.org/docs/unstable/misc/xenstore-paths.html doesn't
> list feature-balloon as a permitted feature node.
> 
> But, I suspect that it used to be the case that guests could write
> arbitrary feature nodes, and I suspect we tightened the permissions in a
> security fix to reduce worst-case memory usage of xenstored.

 From a brief look, this is very similar to the patch below that was 
sent 3 years ago. I bet no-one ever tested the driver against libxl.

commit 30a970906038
Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Date:   Tue Sep 4 13:39:29 2018 +0200

     libxl: create control/sysrq xenstore node

     'xl sysrq' command doesn't work with modern Linux guests with the
     following message in guest's log:

      xen:manage: sysrq_handler: Error -13 writing sysrq in control/sysrq

     xenstore trace confirms:

      IN 0x24bd9a0 20180904 04:36:32 WRITE (control/sysrq )
      OUT 0x24bd9a0 20180904 04:36:32 ERROR (EACCES )

     The problem seems to be in the fact that we don't pre-create
     control/sysrq xenstore node and libxl_send_sysrq() doing
     libxl__xs_printf() creates it as read-only. As we want to allow
     guests to clean 'control/sysrq' after the requested action is
     performed, we need to make this node writable.

     Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
     Acked-by: Wei Liu <wei.liu2@citrix.com>

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 60676304e9b5..dcfde7787e2c 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -695,6 +695,9 @@ retry_transaction:
                          GCSPRINTF("%s/control/feature-s4", dom_path),
                          rwperm, ARRAY_SIZE(rwperm));
      }
+    libxl__xs_mknod(gc, t,
+                    GCSPRINTF("%s/control/sysrq", dom_path),
+                    rwperm, ARRAY_SIZE(rwperm));
      libxl__xs_mknod(gc, t,
                      GCSPRINTF("%s/device/suspend/event-channel", dom_path),
                      rwperm, ARRAY_SIZE(rwperm));

> 
> I suspect the best (/least bad) thing to do here is formally introduce
> feature-ballon as a permitted node, and have the toolstack initialise it
> to "" like we do with all other nodes, after which TrueNAS ought to be
> able to set it successfully and not touch it a second time.

+1. This would match how libxl already deals with "feature-s3" & co.
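
To make that concrete, a purely illustrative sketch (not a posted patch; the exact hunk location and the reuse of the `rwperm` array are assumptions) mirroring the quoted sysrq change could look like:

```diff
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ retry_transaction: @@
      libxl__xs_mknod(gc, t,
                      GCSPRINTF("%s/control/sysrq", dom_path),
                      rwperm, ARRAY_SIZE(rwperm));
+    libxl__xs_mknod(gc, t,
+                    GCSPRINTF("%s/control/feature-balloon", dom_path),
+                    rwperm, ARRAY_SIZE(rwperm));
```

That would pre-create the node guest-writable, so an in-guest agent such as xe-guest-utilities could set it without hitting EACCES.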

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 20:09:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 20:09:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520374.807894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmgmV-0008Hw-0n; Wed, 12 Apr 2023 20:09:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520374.807894; Wed, 12 Apr 2023 20:09:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmgmU-0008Hp-UJ; Wed, 12 Apr 2023 20:09:26 +0000
Received: by outflank-mailman (input) for mailman id 520374;
 Wed, 12 Apr 2023 20:09:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M+hK=AD=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pmgmU-0008Hj-33
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 20:09:26 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb58dd50-d96d-11ed-b21e-6b7b168915f2;
 Wed, 12 Apr 2023 22:09:24 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A4EF662F85;
 Wed, 12 Apr 2023 20:09:23 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 46C3FC433EF;
 Wed, 12 Apr 2023 20:09:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb58dd50-d96d-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1681330163;
	bh=8B2OVPHQHUwiNOLLrscY/I401k0vtxih3jLGGUDSZUk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=isH2hHlC7MKcTEpi3cvYBuFwSXm7bfi+7UtiCiSxRhJvoJrz1bZTtaEX878g9+FO8
	 ZZuTFhx3o0CydAnW4JtkqWFL/IS1rIea/bPm/69Vs72f5GTfYgCkanvHSEojuUtCNH
	 rMrcizdGo05D0QTIVEWIKkSGHjIQv3z8oPVt4Lycz58HVAx5lRkqHPhYv4e/iemkm2
	 UlLe8bzNTJ4ztwDIZkYpmsVhIYNjFn8K47YZm3ZH4HZPnD7cai47soX5RIsO8wahf2
	 8KAX4BdpEqi/qDwiOJjWH7M9TZHYiwvHMtwpCOlCfl/WYt4Xo1QdQB0UTl5zmdDc2Y
	 hnLqFZYN3h/Hw==
Date: Wed, 12 Apr 2023 13:09:19 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: David Woodhouse <dwmw2@infradead.org>
cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    =?UTF-8?Q?Marc-Andr=C3=A9_Lureau?= <marcandre.lureau@redhat.com>, 
    Paolo Bonzini <pbonzini@redhat.com>, 
    Richard Henderson <richard.henderson@linaro.org>, 
    Eduardo Habkost <eduardo@habkost.net>, 
    "Michael S. Tsirkin" <mst@redhat.com>, 
    Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
    =?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
    Thomas Huth <thuth@redhat.com>, 
    =?UTF-8?Q?Philippe_Mathieu-Daud=C3=A9?= <philmd@linaro.org>, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH for-8.0 0/5] Xen emulation build/Coverity fixes
In-Reply-To: <20230412185102.441523-1-dwmw2@infradead.org>
Message-ID: <alpine.DEB.2.22.394.2304121309050.15580@ubuntu-linux-20-04-desktop>
References: <20230412185102.441523-1-dwmw2@infradead.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 11 Apr 2023, David Woodhouse wrote:
> Some Coverity fixes and minor cleanups. And most notably, dropping
> support for Xen libraries older than 4.7.1.

I just wanted to say that I am fine with this.


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 20:14:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 20:14:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520378.807904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmgrT-0001Hd-Jd; Wed, 12 Apr 2023 20:14:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520378.807904; Wed, 12 Apr 2023 20:14:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmgrT-0001HW-Gp; Wed, 12 Apr 2023 20:14:35 +0000
Received: by outflank-mailman (input) for mailman id 520378;
 Wed, 12 Apr 2023 20:14:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M+hK=AD=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pmgrS-0001HQ-7v
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 20:14:34 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a292907b-d96e-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 22:14:32 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 1D2B763232;
 Wed, 12 Apr 2023 20:14:31 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C4BA9C433EF;
 Wed, 12 Apr 2023 20:14:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a292907b-d96e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1681330470;
	bh=Nuluv8nBOkGlpny4WJ71/qwluqzH+ARqb7rxaDBOxIQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Tg6RQ7lPQa6yC5/X/KzV4OMvhJChlXUxBtjOpEH7+R2nXAzM2be3HC8AfF9OBjJqg
	 nl3pLN26fPrlyX80s8elIMSGSKON9MMigQydFBNh036wqXpGTo+3jfY8uQTVmZpOzL
	 tiduPIwDi5qH6BeP0f32VYwAr54/KLm2QAZrTbE08FTf+I/KSq+/fsqGYk6BqRRrL9
	 1UrhcDWssyQgYjLw3YGw3HttltKmzsSNjjohdeVGtve2Gfuk8fn3UIW16ecEXysPqt
	 3bsdExDzVRhNZPzKI298AAjeqF6XBloTF4iBeDQZ0+M5PygnfYdXYmjun6pQOODLYW
	 BbTetaWnMqlbg==
Date: Wed, 12 Apr 2023 13:14:28 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Michal Orzel <Michal.Orzel@arm.com>
Subject: Re: [PATCH] CI: Update FreeBSD to 13.2
In-Reply-To: <20230412183356.2986459-1-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2304121314210.15580@ubuntu-linux-20-04-desktop>
References: <20230412183356.2986459-1-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-2103527278-1681330470=:15580"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-2103527278-1681330470=:15580
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 12 Apr 2023, Andrew Cooper wrote:
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Michal Orzel <Michal.Orzel@arm.com>
> 
> Successful run:
>   https://cirrus-ci.com/task/6232358978846720
> ---
>  .cirrus.yml | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/.cirrus.yml b/.cirrus.yml
> index b133afb74057..9bb6cce4ead3 100644
> --- a/.cirrus.yml
> +++ b/.cirrus.yml
> @@ -23,7 +23,7 @@ task:
>  task:
>    name: 'FreeBSD 13'
>    freebsd_instance:
> -    image_family: freebsd-13-1
> +    image_family: freebsd-13-2
>    << : *FREEBSD_TEMPLATE
>  
>  task:
> -- 
> 2.30.2
> 
--8323329-2103527278-1681330470=:15580--


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 20:47:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 20:47:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520383.807915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmhNJ-0004kt-6H; Wed, 12 Apr 2023 20:47:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520383.807915; Wed, 12 Apr 2023 20:47:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmhNJ-0004km-32; Wed, 12 Apr 2023 20:47:29 +0000
Received: by outflank-mailman (input) for mailman id 520383;
 Wed, 12 Apr 2023 20:47:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmhNH-0004kc-Dr; Wed, 12 Apr 2023 20:47:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmhNH-0004IL-Au; Wed, 12 Apr 2023 20:47:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmhNG-00013E-M8; Wed, 12 Apr 2023 20:47:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmhNG-0004w6-Lk; Wed, 12 Apr 2023 20:47:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+b5x3DWwpcqjB/9g9IEackBZQAJwKEq6ChyW9mjFfoo=; b=WgEvcRKPncAt1u3N9fKDazGabs
	GDgaplnXJrDsIACdzm0+lroOz4J/UyxSDVH57ZjWsx3ueXEPyX68nmjEFs0LQkYJaKwEew3uKcOTQ
	8ytJMnwR0WDQ8CxdLvvukdbTpymIkky2CtvVOkHoZDxcw3cDtRVXNQpb1raxmL4YwBdQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180213-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180213: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=7e1b4cc19cefba11763d09799f2b742658a9a03a
X-Osstest-Versions-That:
    libvirt=7eead248c65f336afd9fd57ddcfe9e182cf30a6d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Apr 2023 20:47:26 +0000

flight 180213 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180213/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              7e1b4cc19cefba11763d09799f2b742658a9a03a
baseline version:
 libvirt              7eead248c65f336afd9fd57ddcfe9e182cf30a6d

Last test of basis   180176  2023-04-07 04:18:49 Z    5 days
Testing same since   180213  2023-04-12 04:20:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Krempa <pkrempa@redhat.com>
  Remus-Gabriel Chelu <remusgabriel.chelu@disroot.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   7eead248c6..7e1b4cc19c  7e1b4cc19cefba11763d09799f2b742658a9a03a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 21:36:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 21:36:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520389.807925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmi93-0001b3-QA; Wed, 12 Apr 2023 21:36:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520389.807925; Wed, 12 Apr 2023 21:36:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmi93-0001aw-N4; Wed, 12 Apr 2023 21:36:49 +0000
Received: by outflank-mailman (input) for mailman id 520389;
 Wed, 12 Apr 2023 21:36:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmi92-0001am-01; Wed, 12 Apr 2023 21:36:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmi91-0005ZX-TU; Wed, 12 Apr 2023 21:36:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmi91-00033J-CT; Wed, 12 Apr 2023 21:36:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmi91-0006wH-C5; Wed, 12 Apr 2023 21:36:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GZgkiwoKGHEopYJwhWrhHdliT63KV7XQXMNuoYtFrhk=; b=Dj0kqyuvqjpUhOmvAUndUC+TM5
	xsHtFx/6YtKFWR+PsFQhu6Ybyq2El0mWoRxfv33C+R/H4mPrCUDVNbcZ1qHTB/YfS9Mv0IKAac7Fc
	uloO/AwcaUwGBZ+z5OX1Uep7+etPbpzrIXDHhw8oHg/oaTqUEkvy+zTeTMokqTPRaH2I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180223-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180223: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=5430f7f60dee3747fff906b48718db8afb4589d9
X-Osstest-Versions-That:
    ovmf=89520115b89d755ae79f61850c356aae4be2d9ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Apr 2023 21:36:47 +0000

flight 180223 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180223/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 5430f7f60dee3747fff906b48718db8afb4589d9
baseline version:
 ovmf                 89520115b89d755ae79f61850c356aae4be2d9ad

Last test of basis   180221  2023-04-12 12:11:24 Z    0 days
Testing same since   180223  2023-04-12 19:43:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jeff Brasen <jbrasen@nvidia.com>
  Rebecca Cran <rebecca@quicinc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   89520115b8..5430f7f60d  5430f7f60dee3747fff906b48718db8afb4589d9 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 21:54:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 21:54:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520397.807935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmiQJ-0003z3-Be; Wed, 12 Apr 2023 21:54:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520397.807935; Wed, 12 Apr 2023 21:54:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmiQJ-0003yw-8c; Wed, 12 Apr 2023 21:54:39 +0000
Received: by outflank-mailman (input) for mailman id 520397;
 Wed, 12 Apr 2023 21:54:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gYIF=AD=epam.com=prvs=8466f4a093=volodymyr_babchuk@srs-se1.protection.inumbo.net>)
 id 1pmiQH-0003yq-Aw
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 21:54:37 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9ade299c-d97c-11ed-8611-37d641c3527e;
 Wed, 12 Apr 2023 23:54:32 +0200 (CEST)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 33CGB4kA030903; Wed, 12 Apr 2023 21:54:18 GMT
Received: from eur02-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur02lp2049.outbound.protection.outlook.com [104.47.11.49])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3pwrwhatfd-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 12 Apr 2023 21:54:17 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com (2603:10a6:803:31::18)
 by DB9PR03MB7273.eurprd03.prod.outlook.com (2603:10a6:10:22f::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Wed, 12 Apr
 2023 21:54:12 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3]) by VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3%3]) with mapi id 15.20.6298.030; Wed, 12 Apr 2023
 21:54:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ade299c-d97c-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=C6GLVl387ZrE80wvyItqIgZ+yYpRggbraWYA6bT3tPwZrRdZnLoC9xoOXXZMquiTCQWrnUvqgNi5kneD3VO1KTJVn1dKDiiqRTFYJZv50nROoZhCSbi/9BrhrV2I/PKg3ctUoe67dnMev7JN0zbietpzkfaTiskFp2oIJCV9pBnQuu+H9v8FtWyPLJTsfN5bZxa41bpEgsHlNMzZz/s2AwrSowwS7cHLLS1EllJ04ITAi+nxYf3LVBBV1Zpdtf5eZMRyILCvWnLe3TJj3R00gysxNcAJdhrw4xofQRs0qabvDJno5K4sTCxiCnSP/mHMApgL3jBfdYaMdomfCE6DhQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hlxlCw/RFtXuleUg42Tg0KKb8+9yk2ViQOEkqs2gl2A=;
 b=O66MBguRLmkJzvATzeV1d3RI6xykDI6Fr8HK702aPL1qPnWPmrnKOvE9wF++E3jk2pARgnq4/BG/Xf/eBuqqGTU5fYMvZQbydJWrC5L9cesa0yOWhHU3koicYiloBS/cdC7JHBteEwKbZQZLkTRhaQlqjmnHHa8U+nHvBo+F6+1yW31y7xs6E0DaNTAxLq6gLtpoOhRZ9y+HVlaOeKDXvuFUmbzTMStZwEey0Bx6NGvikSVefcuyMeKlXSDrW9WW9WTsgcoH3jrfrkbkrTp99lzssUZHti6SQBp0pnQEBamfgTxXmcXduj/IAScCVoNiX6n/qD3AvaNyDCNAk0/b5Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hlxlCw/RFtXuleUg42Tg0KKb8+9yk2ViQOEkqs2gl2A=;
 b=NQOIXr4UojOVy65+omauUceM4kKsWVuwb/9dQlOt3T0WcJXFl2Ge8xMlXsI7d0s35UqeXX+h3sgHpurce/wUkk17jzqPs3xZuh4oPYciqfgCoEf95xg+BHTcQiV6+1bx9+z4DhVmr5TeiTzJmw7c93F3BipTczLYiwiMKhpHHHHOQnAURiKJ0ZimVqXzFhOqT/iudF0a8/fwJHxqKLg3NUGVLWT6CmNH1eXA9WnfOJsUrqU1Rtolys0NFyNjOlEYt72735f22Ot9oKWQ92PzY2ZSHmL/L6Tgr45EgAmIyGM7EGOBPmfhRqu62UF5w+8kyhJrUaQX3j1HLN6pBuR0mQ==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Roger Pau Monné <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Topic: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Index: AQHZVrdzsdckomMx4kauxHkZQ597Iq79mAMAgClI2ACAAK/igIAAVGGA
Date: Wed, 12 Apr 2023 21:54:12 +0000
Message-ID: <87v8i0wyv0.fsf@epam.com>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger> <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger>
In-Reply-To: <ZDZ2S4OxP2e12oSX@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.8.9; emacs 28.2
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: VI1PR03MB3710:EE_|DB9PR03MB7273:EE_
x-ms-office365-filtering-correlation-id: 5b1d0827-00f8-46f8-d438-08db3ba073ca
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <7AA23B5AD2C5324CAAF5852778763AF9@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB3710.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5b1d0827-00f8-46f8-d438-08db3ba073ca
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Apr 2023 21:54:12.6334
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: d7XdYb4w4kHdoLPzVQS+5pvrEmkQVfp0eCvkkd5nzwPWX0ryIg66R6kldTZ2w4j+1du62jEBI41nkyNpCyXTem/B/cB3kTQ21K/RlsVFVRU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR03MB7273
X-Proofpoint-ORIG-GUID: o3vUuPCoyRd1aiAp6Zk_FFOMh1K8xJwC
X-Proofpoint-GUID: o3vUuPCoyRd1aiAp6Zk_FFOMh1K8xJwC
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22
 definitions=2023-04-12_12,2023-04-12_01,2023-02-09_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 spamscore=0
 clxscore=1011 bulkscore=0 mlxscore=0 suspectscore=0 priorityscore=1501
 impostorscore=0 adultscore=0 malwarescore=0 lowpriorityscore=0
 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2303200000 definitions=main-2304120186

Hi Roger,

First of all, I want to provide a link [1] to the RFC series where I tried
a total PCI locking rework. After discussing with Jan, it became clear to
me that the task is much harder than I anticipated. So, it was decided to
move in smaller steps. The first step is to make the vPCI code independent
of the global PCI lock. Actually, this is not the first try:
Oleksandr Andrushchenko tried to use an r/w lock for this [2]. But
Jan suggested using refcounting instead of r/w locks, and I liked the
idea. So, this is why you are seeing this patch series.

Roger Pau Monné <roger.pau@citrix.com> writes:

> On Tue, Apr 11, 2023 at 11:41:04PM +0000, Volodymyr Babchuk wrote:
>> 
>> Hi Roger,
>> 
>> Roger Pau Monné <roger.pau@citrix.com> writes:
>> 
>> > On Tue, Mar 14, 2023 at 08:56:29PM +0000, Volodymyr Babchuk wrote:
>> >> Prior to this change, lifetime of pci_dev objects was protected by global
>> >> pcidevs_lock(). Long-term plan is to remove this log, so we need some
>> >                                                    ^ lock
>> >
>> > I wouldn't say remove, as one way or another we need a lock to protect
>> > concurrent accesses.
>> >
>> 
>> I'll write "replace this global lock with couple of more granular
>> locking devices"
if this is okay for you.
>> 
>> >> other mechanism to ensure that those objects will not disappear under
>> >> feet of code that access them. Reference counting is a good choice as
>> >> it provides easy to comprehend way to control object lifetime.
>> >> 
>> >> This patch adds two new helper functions: pcidev_get() and
>> >> pcidev_put(). pcidev_get() will increase reference counter, while
>> >> pcidev_put() will decrease it, destroying object when counter reaches
>> >> zero.
>> >> 
>> >> pcidev_get() should be used only when you already have a valid pointer
>> >> to the object or you are holding lock that protects one of the
>> >> lists (domain, pseg or ats) that store pci_dev structs.
>> >> 
>> >> pcidev_get() is rarely used directly, because there already are
>> >> functions that will provide valid pointer to pci_dev struct:
>> >> pci_get_pdev(), pci_get_real_pdev(). They will lock appropriate list,
>> >> find needed object and increase its reference counter before returning
>> >> to the caller.
>> >> 
>> >> Naturally, pci_put() should be called after finishing working with a
>> >> received object. This is the reason why this patch have so many
>> >> pcidev_put()s and so little pcidev_get()s: existing calls to
>> >> pci_get_*() functions now will increase reference counter
>> >> automatically, we just need to decrease it back when we finished.
>> >
>> > After looking a bit into this, I would like to ask whether it's been
>> > considered the need to increase the refcount for each use of a pdev.
>> >
>> 
>> This is how Linux uses reference locking. It decreases cognitive load
>> and chance for an error, as there is a simple set of rules, which you
>> follow.
>> 
>> > For example I would consider the initial alloc_pdev() to take a
>> > refcount, and then pci_remove_device() _must_ be the function that
>> > removes the last refcount, so that it can return -EBUSY otherwise (see
>> > my comment below).
>> 
>> I tend to disagree there, as this ruins the very idea of reference
>> counting. We can't know who else holds reference right now. Okay, we
>> might know, but this requires additional lock to serialize
>> accesses. Which, in turn, makes refcount un-needed.
>
> In principle pci_remove_device() must report whether the device is
> ready to be physically removed from the system, so it must return
> -EBUSY if there are still users accessing the device.
>
> A user would use PHYSDEVOP_manage_pci_remove to signal Xen it's trying
> to physically remove a PCI device from a system, so we must ensure
> that when the hypervisor returns success the device is ready to be
> physically removed.
>
> Or at least that's my understanding of how this should work.
>

As I can see, this is not how it is implemented right now.
pci_remove_device() does not check whether the device is assigned to a
domain, and it does not check whether there are still users accessing the
device. It just relies on the global PCI lock to ensure that the device is
removed in an orderly manner.

My patch series has no intention to change this behavior. All I want to
achieve is to allow the vPCI code to access struct pdev objects without
holding the global PCI lock.

>> >
>> > I would also think that having the device assigned to a guest will take
>> > another refcount, and then any usage from further callers (ie: like
>> > vpci) will need some kind of protection from preventing the device
>> > from being deassigned from a domain while vPCI handlers are running,
>> > and the current refcount won't help with that.
>> 
>> Yes, idea of this refcounting is to ensure that a pdev object exists as an
>> valid object in memory if we are holding a long-term pointer to
>> it. Indeed, vPCI handlers should use some other mechanism to ensure that
>> pdev is not being re-assigned while handlers are running. I believe,
>> this is the task of vpci->lock. Should we call
>> vpci_remove_device/vpci_add_handlers each time we re-assign a PCI device?
>
> Yes, I think this was also part of a comment I've made on a different
> patch.  The device state needs to be cleared when assigned to a
> different guest (as the hardware domain will also perform a device
> reset).
>
> I think there are some points that needs to be part of the commit
> message so the code can be properly evaluated:
>
>  - The reference counting is only used to ensure the object cannot be
>    removed while in use.  Users of the pci device object should
>    implement whatever protections required in order to get mutual
>    exclusion between them and device state changes.
>

Sure, I will add this.

>> >
>> > That makes me wonder if for example callers of pci_get_pdev(d, sbdf)
>> > do need to take an extra refcount, because such access is already
>> > protected from the pdev going away by the fact that the device is
>> > assigned to a guest.  But maybe it's too much work to separate users
>> > of pci_get_pdev(d, ...); vs pci_get_pdev(NULL, ...);.
>> >
>> > There's also a window when the refcount is dropped to 0, and the
>> > destruction function is called, but at the same time a concurrent
>> > thread could attempt to take a reference to the pdev still?
>> 
>> Last pcidev_put() would be called by pci_remove_device(), after removing
>> it from all lists. This should prevent other threads from obtaining a valid
>> reference to the pdev.
>
> What if a concurrent user has taken a reference to the object before
> pci_remove_device() has removed the device from the lists, and still
> holds it when pci_remove_device() performs the supposedly last
> pcidev_put() call?

Well, let's consider the vPCI code as this concurrent user, for
example. First, it will try to take vpci->lock. Depending on where we
are in pci_remove_device(), there will be three cases:

1. The lock is taken before vpci_remove_device() takes it. In this case
the vPCI code works as always.

2. It tries to take the lock while vpci_remove_device() already holds
it. In this case we fall through to the next case:

3. The lock is taken after vpci_remove_device() has finished its work. In
this case the vPCI code sees that it was called for a device in an
invalid state and exits.

As you can see, there is no case where the vPCI code is running on a
device which was removed.

After the vPCI code drops the refcounter, the pdev object will be freed
once and for all. Please note that I am talking about the pdev object
there, not about the PCI device, because the PCI device (as a high-level
entity) was destroyed by pci_remove_device(). The refcount is needed just
for the last clean-up operations.

>
>> >
>> >>          sbdf.devfn &= ~stride;
>> >>          pdev = pci_get_pdev(NULL, sbdf);
>> >>          if ( pdev && stride != pdev->phantom_stride )
>> >> +        {
>> >> +            pcidev_put(pdev);
>> >>              pdev = NULL;
>> >> +        }
>> >>      }
>> >>  
>> >>      return pdev;
>> >> @@ -548,13 +526,18 @@ struct pci_dev *pci_get_pdev(const struct domain *d, pci_sbdf_t sbdf)
>> >>          list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
>> >>              if ( pdev->sbdf.bdf == sbdf.bdf &&
>> >>                   (!d || pdev->domain == d) )
>> >> +            {
>> >> +                pcidev_get(pdev);
>> >>                  return pdev;
>> >> +            }
>> >>      }
>> >>      else
>> >>          list_for_each_entry ( pdev, &d->pdev_list, domain_list )
>> >>              if ( pdev->sbdf.bdf == sbdf.bdf )
>> >> +            {
>> >> +                pcidev_get(pdev);
>> >>                  return pdev;
>> >> -
>> >> +            }
>> >>      return NULL;
>> >>  }
>> >>  
>> >> @@ -663,7 +646,10 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>> >>                              PCI_SBDF(seg, info->physfn.bus,
>> >>                                       info->physfn.devfn));
>> >>          if ( pdev )
>> >> +        {
>> >>              pf_is_extfn = pdev->info.is_extfn;
>> >> +            pcidev_put(pdev);
>> >> +        }
>> >>          pcidevs_unlock();
>> >>          if ( !pdev )
>> >>              pci_add_device(seg, info->physfn.bus, info->physfn.devfn,
>> >> @@ -818,7 +804,9 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>> >>              if ( pdev->domain )
>> >>                  list_del(&pdev->domain_list);
>> >>              printk(XENLOG_DEBUG "PCI remove device %pp\n", &pdev->sbdf);
>> >> -            free_pdev(pseg, pdev);
>> >> +            list_del(&pdev->alldevs_list);
>> >> +            pdev_msi_deinit(pdev);
>> >> +            pcidev_put(pdev);
>> >
>> > Hm, I think here we want to make sure that the device has been freed,
>> > or else you would have to return -EBUSY to the calls to notify that
>> > the device is still in use.
>> 
>> Why? As I can see, pdev object is still may potentially be accessed by
>> some other CPU right now. So pdev object will be freed after last
>> reference is dropped. As it is already removed from all the lists,
>> pci_dev_get() will not find it anymore.
>> 
>> Actually, I can't see how this can happen in reality, as VPCI, MSI and
>> IOMMU are already deactivated for this device. So, no one would touch it.
>
> Wouldn't it be possible for a concurrent user to hold a reference from
> befoe the device has been 'deactivated'?
>

Yes, it can hold a reference. This is why we need additional locking to
ensure that, say, pci_cleanup_msi() does not race with the rest of the
MSI code. Right now this is ensured by the global PCI lock.

>> >
>> > I think we need an extra pcidev_put_final() or similar that can be
>> > used in pci_remove_device() to assert that the device has been
>> > actually removed.
>> 
>> Will something break if we don't do this? I can't see how this can
>> happen.
>
> As mentioned above, once pci_remove_device() returns 0 the admin
> should be capable of physically removing the device from the system.
>

This patch series does not alter this requirement. The admin is still
capable of physically removing the device from the system after a
successful call to pci_remove_device().

>> >
>> >>              break;
>> >>          }
>> >>  
>> >> @@ -848,7 +836,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>> >>      {
>> >>          ret = iommu_quarantine_dev_init(pci_to_dev(pdev));
>> >>          if ( ret )
>> >> -           return ret;
>> >> +            goto out;
>> >>  
>> >>          target = dom_io;
>> >>      }
>> >> @@ -878,6 +866,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>> >>      pdev->fault.count = 0;
>> >>  
>> >>   out:
>> >> +    pcidev_put(pdev);
>> >>      if ( ret )
>> >>          printk(XENLOG_G_ERR "%pd: deassign (%pp) failed (%d)\n",
>> >>                 d, &PCI_SBDF(seg, bus, devfn), ret);
>> >> @@ -1011,7 +1000,10 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
>> >>              pdev->fault.count >>= 1;
>> >>          pdev->fault.time = now;
>> >>          if ( ++pdev->fault.count < PT_FAULT_THRESHOLD )
>> >> +        {
>> >> +            pcidev_put(pdev);
>> >>              pdev = NULL;
>> >> +        }
>> >>      }
>> >>      pcidevs_unlock();
>> >>  
>> >> @@ -1022,6 +1014,8 @@ void pci_check_disable_device(u16 seg, u8 bus, u8 devfn)
>> >>       * control it for us. */
>> >>      cword = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
>> >>      pci_conf_write16(pdev->sbdf, PCI_COMMAND, cword & ~PCI_COMMAND_MASTER);
>> >> +
>> >> +    pcidev_put(pdev);
>> >>  }
>> >>  
>> >>  /*
>> >> @@ -1138,6 +1132,7 @@ static int __hwdom_init cf_check _setup_hwdom_pci_devices(
>> >>                  printk(XENLOG_WARNING "Dom%d owning %pp?\n",
>> >>                         pdev->domain->domain_id, &pdev->sbdf);
>> >>  
>> >> +            pcidev_put(pdev);
>> >>              if ( iommu_verbose )
>> >>              {
>> >>                  pcidevs_unlock();
>> >> @@ -1385,33 +1380,28 @@ static int iommu_remove_device(struct pci_dev *pdev)
>> >>      return iommu_call(hd->platform_ops, remove_device, devfn, pci_to_dev(pdev));
>> >>  }
>> >>  
>> >> -static int device_assigned(u16 seg, u8 bus, u8 devfn)
>> >> +static int device_assigned(struct pci_dev *pdev)
>> >>  {
>> >> -    struct pci_dev *pdev;
>> >>      int rc = 0;
>> >>  
>> >>      ASSERT(pcidevs_locked());
>> >> -    pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
>> >> -
>> >> -    if ( !pdev )
>> >> -        rc = -ENODEV;
>> >>      /*
>> >>       * If the device exists and it is not owned by either the hardware
>> >>       * domain or dom_io then it must be assigned to a guest, or be
>> >>       * hidden (owned by dom_xen).
>> >>       */
>> >> -    else if ( pdev->domain != hardware_domain &&
>> >> -              pdev->domain != dom_io )
>> >> +    if ( pdev->domain != hardware_domain &&
>> >> +         pdev->domain != dom_io )
>> >>          rc = -EBUSY;
>> >>  
>> >>      return rc;
>> >>  }
>> >>  
>> >>  /* C
YWxsZXIgc2hvdWxkIGhvbGQgdGhlIHBjaWRldnNfbG9jayAqLw0KPj4gPg0KPj4gPiBJIHdvdWxk
IGFzc3VtZSB0aGUgY2FsbGVyIGhhcyB0YWtlbiBhbiBleHRyYSByZWZlcmVuY2UgdG8gdGhlIHBk
ZXYsIHNvDQo+PiA+IGhvbGRpbmcgdGhlIHBjaWRldnNfbG9jayBpcyBubyBsb25nZXIgbmVlZGVk
Pw0KPj4gDQo+PiBJIGFtIGFzc3VtZWQgdGhhdCBsb2NrIG1heSBiZSByZXF1aXJlZCBieSBNU0lY
IG9yIElPTU1VIGZ1bmN0aW9ucywgdGhhdA0KPj4gYXJlIGJlaW5nIGNhbGxlZCBoZXJlLiBGb3Ig
ZXhhbXBsZSwgSSBjYW4gc2VlIHRoYXQgcmVhc3NpZ25fZGV2aWNlKCkgaW4NCj4+IHBjaV9hbWRf
aW9tbXUuYyBtYW5pcHVsYXRlcyB3aXRoIHNvbWUgbGlzdHMuIEkgYmVsaWV2ZSwgaXQgc2hvdWxk
IGJlDQo+PiBwcm90ZWN0ZWQgd2l0aCB0aGUgbG9jay4NCj4NCj4gT0ssIHNvIHRoYXQncyBwY2lk
ZXZzX2xvY2sgYmVpbmcgdXNlZCB0byBwcm90ZWN0IHNvbWV0aGluZyBlbHNlIHRoYXQncw0KPiBu
b3Qgc3RyaWN0bHkgYSBwY2kgZGV2aWNlLCBidXQgYSByZWxhdGVkIHN0cnVjdHVyZS4NCj4NCg0K
WWVzLiBJIGhhdmUgZm91bmQgbXVsdGlwbGUgc3VjaCBwbGFjZXMsIHdoZW4gSSB0cmllZCB0b3Rh
bCBQQ0kgbG9ja2luZw0KcmV3b3JraW5nLg0KDQo+PiA+DQo+PiA+PiAtc3RhdGljIGludCBhc3Np
Z25fZGV2aWNlKHN0cnVjdCBkb21haW4gKmQsIHUxNiBzZWcsIHU4IGJ1cywgdTggZGV2Zm4sIHUz
MiBmbGFnKQ0KPj4gPj4gK3N0YXRpYyBpbnQgYXNzaWduX2RldmljZShzdHJ1Y3QgZG9tYWluICpk
LCBzdHJ1Y3QgcGNpX2RldiAqcGRldiwgdTMyIGZsYWcpDQo+PiA+PiAgew0KPj4gPj4gICAgICBj
b25zdCBzdHJ1Y3QgZG9tYWluX2lvbW11ICpoZCA9IGRvbV9pb21tdShkKTsNCj4+ID4+IC0gICAg
c3RydWN0IHBjaV9kZXYgKnBkZXY7DQo+PiA+PiArICAgIHVpbnQ4X3QgZGV2Zm47DQo+PiA+PiAg
ICAgIGludCByYyA9IDA7DQo+PiA+PiAgDQo+PiA+PiAgICAgIGlmICggIWlzX2lvbW11X2VuYWJs
ZWQoZCkgKQ0KPj4gPj4gQEAgLTE0MjIsMTAgKzE0MTIsMTEgQEAgc3RhdGljIGludCBhc3NpZ25f
ZGV2aWNlKHN0cnVjdCBkb21haW4gKmQsIHUxNiBzZWcsIHU4IGJ1cywgdTggZGV2Zm4sIHUzMiBm
bGFnKQ0KPj4gPj4gIA0KPj4gPj4gICAgICAvKiBkZXZpY2VfYXNzaWduZWQoKSBzaG91bGQgYWxy
ZWFkeSBoYXZlIGNsZWFyZWQgdGhlIGRldmljZSBmb3IgYXNzaWdubWVudCAqLw0KPj4gPj4gICAg
ICBBU1NFUlQocGNpZGV2c19sb2NrZWQoKSk7DQo+PiA+PiAtICAgIHBkZXYgPSBwY2lfZ2V0X3Bk
ZXYoTlVMTCwgUENJX1NCREYoc2VnLCBidXMsIGRldmZuKSk7DQo+PiA+PiAgICAgIEFTU0VSVChw
ZGV2ICYmIChwZGV2LT5kb21haW4gPT0gaGFyZHdhcmVfZG9tYWluIHx8DQo+PiA+PiAgICAgICAg
ICAgICAgICAgICAgICBwZGV2LT5kb21haW4gPT0gZG9tX2lvKSk7DQo+PiA+PiAgDQo+PiA+PiAr
ICAgIGRldmZuID0gcGRldi0+ZGV2Zm47DQo+PiA+PiArDQo+PiA+PiAgICAgIC8qIERvIG5vdCBh
bGxvdyBicm9rZW4gZGV2aWNlcyB0byBiZSBhc3NpZ25lZCB0byBndWVzdHMuICovDQo+PiA+PiAg
ICAgIHJjID0gLUVCQURGOw0KPj4gPj4gICAgICBpZiAoIHBkZXYtPmJyb2tlbiAmJiBkICE9IGhh
cmR3YXJlX2RvbWFpbiAmJiBkICE9IGRvbV9pbyApDQo+PiA+PiBAQCAtMTQ2MCw3ICsxNDUxLDcg
QEAgc3RhdGljIGludCBhc3NpZ25fZGV2aWNlKHN0cnVjdCBkb21haW4gKmQsIHUxNiBzZWcsIHU4
IGJ1cywgdTggZGV2Zm4sIHUzMiBmbGFnKQ0KPj4gPj4gICBkb25lOg0KPj4gPj4gICAgICBpZiAo
IHJjICkNCj4+ID4+ICAgICAgICAgIHByaW50ayhYRU5MT0dfR19XQVJOSU5HICIlcGQ6IGFzc2ln
biAoJXBwKSBmYWlsZWQgKCVkKVxuIiwNCj4+ID4+IC0gICAgICAgICAgICAgICBkLCAmUENJX1NC
REYoc2VnLCBidXMsIGRldmZuKSwgcmMpOw0KPj4gPj4gKyAgICAgICAgICAgICAgIGQsICZQQ0lf
U0JERihwZGV2LT5zZWcsIHBkZXYtPmJ1cywgZGV2Zm4pLCByYyk7DQo+PiA+PiAgICAgIC8qIFRo
ZSBkZXZpY2UgaXMgYXNzaWduZWQgdG8gZG9tX2lvIHNvIG1hcmsgaXQgYXMgcXVhcmFudGluZWQg
Ki8NCj4+ID4+ICAgICAgZWxzZSBpZiAoIGQgPT0gZG9tX2lvICkNCj4+ID4+ICAgICAgICAgIHBk
ZXYtPnF1YXJhbnRpbmUgPSB0cnVlOw0KPj4gPj4gQEAgLTE1OTUsNiArMTU4Niw5IEBAIGludCBp
b21tdV9kb19wY2lfZG9tY3RsKA0KPj4gPj4gICAgICAgICAgQVNTRVJUKGQpOw0KPj4gPj4gICAg
ICAgICAgLyogZmFsbCB0aHJvdWdoICovDQo+PiA+PiAgICAgIGNhc2UgWEVOX0RPTUNUTF90ZXN0
X2Fzc2lnbl9kZXZpY2U6DQo+PiA+PiArICAgIHsNCj4+ID4+ICsgICAgICAgIHN0cnVjdCBwY2lf
ZGV2ICpwZGV2Ow0KPj4gPj4gKw0KPj4gPj4gICAgICAgICAgLyogRG9uJ3Qgc3VwcG9ydCBzZWxm
LWFzc2lnbm1lbnQgb2YgZGV2aWNlcy4gKi8NCj4+ID4+ICAgICAgICAgIGlmICggZCA9PSBjdXJy
ZW50LT5kb21haW4gKQ0KPj4gPj4gICAgICAgICAgew0KPj4gPj4gQEAgLTE2MjIsMjYgKzE2MTYs
MzYgQEAgaW50IGlvbW11X2RvX3BjaV9kb21jdGwoDQo+PiA+PiAgICAgICAgICBzZWcgPSBtYWNo
aW5lX3NiZGYgPj4gMTY7DQo+PiA+PiAgICAgICAgICBidXMgPSBQQ0lfQlVTKG1hY2hpbmVfc2Jk
Zik7DQo+PiA+PiAgICAgICAgICBkZXZmbiA9IFBDSV9ERVZGTihtYWNoaW5lX3NiZGYpOw0KPj4g
Pj4gLQ0KPj4gPj4gICAgICAgICAgcGNpZGV2c19sb2NrKCk7DQo+PiA+PiAtICAgICAgICByZXQg
PSBkZXZpY2VfYXNzaWduZWQoc2VnLCBidXMsIGRldmZuKTsNCj4+ID4+ICsgICAgICAgIHBkZXYg
PSBwY2lfZ2V0X3BkZXYoTlVMTCwgUENJX1NCREYoc2VnLCBidXMsIGRldmZuKSk7DQo+PiA+PiAr
ICAgICAgICBpZiAoICFwZGV2ICkNCj4+ID4+ICsgICAgICAgIHsNCj4+ID4+ICsgICAgICAgICAg
ICBwcmludGsoWEVOTE9HX0dfSU5GTyAiJXBwIG5vbi1leGlzdGVudFxuIiwNCj4+ID4+ICsgICAg
ICAgICAgICAgICAgICAgJlBDSV9TQkRGKHNlZywgYnVzLCBkZXZmbikpOw0KPj4gPj4gKyAgICAg
ICAgICAgIHJldCA9IC1FSU5WQUw7DQo+PiA+PiArICAgICAgICAgICAgYnJlYWs7DQo+PiA+PiAr
ICAgICAgICB9DQo+PiA+PiArDQo+PiA+PiArICAgICAgICByZXQgPSBkZXZpY2VfYXNzaWduZWQo
cGRldik7DQo+PiA+PiAgICAgICAgICBpZiAoIGRvbWN0bC0+Y21kID09IFhFTl9ET01DVExfdGVz
dF9hc3NpZ25fZGV2aWNlICkNCj4+ID4+ICAgICAgICAgIHsNCj4+ID4+ICAgICAgICAgICAgICBp
ZiAoIHJldCApDQo+PiA+PiAgICAgICAgICAgICAgew0KPj4gPj4gLSAgICAgICAgICAgICAgICBw
cmludGsoWEVOTE9HX0dfSU5GTyAiJXBwIGFscmVhZHkgYXNzaWduZWQsIG9yIG5vbi1leGlzdGVu
dFxuIiwNCj4+ID4+ICsgICAgICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19HX0lORk8gIiVwcCBh
bHJlYWR5IGFzc2lnbmVkXG4iLA0KPj4gPj4gICAgICAgICAgICAgICAgICAgICAgICAgJlBDSV9T
QkRGKHNlZywgYnVzLCBkZXZmbikpOw0KPj4gPj4gICAgICAgICAgICAgICAgICByZXQgPSAtRUlO
VkFMOw0KPj4gPj4gICAgICAgICAgICAgIH0NCj4+ID4+ICAgICAgICAgIH0NCj4+ID4+ICAgICAg
ICAgIGVsc2UgaWYgKCAhcmV0ICkNCj4+ID4+IC0gICAgICAgICAgICByZXQgPSBhc3NpZ25fZGV2
aWNlKGQsIHNlZywgYnVzLCBkZXZmbiwgZmxhZ3MpOw0KPj4gPj4gKyAgICAgICAgICAgIHJldCA9
IGFzc2lnbl9kZXZpY2UoZCwgcGRldiwgZmxhZ3MpOw0KPj4gPj4gKw0KPj4gPj4gKyAgICAgICAg
cGNpZGV2X3B1dChwZGV2KTsNCj4+ID4NCj4+ID4gSSB3b3VsZCB0aGluayB5b3UgbmVlZCB0byBr
ZWVwIHRoZSByZWZjb3VudCBoZXJlIGlmIHJldCA9PSAwLCBzbyB0aGF0DQo+PiA+IHRoZSBkZXZp
Y2UgY2Fubm90IGJlIHJlbW92ZWQgd2hpbGUgYXNzaWduZWQgdG8gYSBkb21haW4/DQo+PiANCj4+
IExvb2tzIGxpa2Ugd2UgYXJlIHBlcmNlaXZpbmcgZnVuY3Rpb24gb2YgcmVmY250IGluIGEgZGlm
ZmVyZW50DQo+PiB3YXlzLiBGb3IgbWUsIHRoaXMgaXMgdGhlIG1lY2hhbmlzbSB0byBndWFyYW50
ZWUgdGhhdCBpZiB3ZSBoYXZlIGEgdmFsaWQNCj4+IHBvaW50ZXIgdG8gYW4gb2JqZWN0LCB0aGlz
IG9iamVjdCB3aWxsIG5vdCBkaXNhcHBlYXIgdW5kZXIgb3VyDQo+PiBmZWV0LiBUaGlzIGlzIHRo
ZSBtYWluIGZ1bmN0aW9uIG9mIGtyZWZzIGluIHRoZSBsaW51eCBrZXJuZWw6IGlmIHlvdXINCj4+
IGNvZGUgaG9sZHMgYSByZWZlcmVuY2UgdG8gYW4gb2JqZWN0LCB5b3UgY2FuIGJlIHN1cmUgdGhh
dCB0aGlzIG9iamVjdCBpcw0KPj4gZXhpc3RzIGluIG1lbW9yeS4NCj4+IA0KPj4gT24gb3RoZXIg
aGFuZCwgaXQgc2VlbXMgdGhhdCB5b3UgYXJlIGNvbnNpZGVyaW5nIHRoaXMgcmVmY250IGFzIGFu
IHVzYWdlDQo+PiBjb3VudGVyIGZvciBhbiBhY3R1YWwgUENJIGRldmljZSwgbm90ICJzdHJ1Y3Qg
cGRldiIgdGhhdCByZXByZXNlbnQNCj4+IGl0LiBUaG9zZSBhcmUgdHdvIHJlbGF0ZWQgdGhpbmdz
LCBidXQgbm90IHRoZSBzYW1lLiBTbywgSSBjYW4gc2VlIHdoeQ0KPj4geW91IGFyZSBzdWdnZXN0
aW5nIHRvIGdldCBhZGRpdGlvbmFsIHJlZmVyZW5jZSB0aGVyZS4gQnV0IGZvciBtZSwgdGhpcw0K
Pj4gbG9va3MgdW5uZWNlc3Nhcnk6IHRoZSB2ZXJ5IGZpcnN0IHJlZmNvdW50IGlzIG9idGFpbmVk
IGluDQo+PiBwY2lfYWRkX2RldmljZSgpIGFuZCB0aGVyZSBpcyB0aGUgY29ycmVzcG9uZGluZyBm
dW5jdGlvbg0KPj4gcGNpX3JlbW92ZV9kZXZpY2UoKSB0aGF0IHdpbGwgZHJvcCB0aGlzIHJlZmNv
dW50LiBTbywgZm9yIG1lLCBpZiBhZG1pbg0KPj4gd2FudHMgdG8gcmVtb3ZlIGEgUENJIGRldmlj
ZSB3aGljaCBpcyBhc3NpZ25lZCB0byBhIGRvbWFpbiwgdGhleSBjYW4gZG8NCj4+IHRoaXMgYXMg
dGhleSB3ZXJlIGFibGUgdG8gZG8gdGhpcyBwcmlvciB0aGlzIHBhdGNoZXMuDQo+DQo+IFRoaXMg
aXMgYWxsIGZpbmUsIGJ1dCBuZWVkcyB0byBiZSBzdGF0ZWQgaW4gdGhlIGNvbW1pdCBtZXNzYWdl
Lg0KPg0KDQpTdXJlLCBJIHdpbGwgYWRkIHRoaXMuDQoNCj4+IFRoZSBtYWluIHZhbHVlIG9mIGlu
dHJvZHVjaW5nIHJlZmNudCBpcyB0byBiZSBhYmxlIHRvIGFjY2VzcyBwZGV2IG9iamVjdHMNCj4+
IHdpdGhvdXQgaG9sZGluZyB0aGUgZ2xvYmFsIHBjaWRldnNfbG9jaygpLiBUaGlzIGRvZXMgbm90
IG1lYW4gdGhhdCB5b3UNCj4+IGRvbid0IG5lZWQgbG9ja2luZyBhdCBhbGwuIEJ1dCB0aGlzIGFs
bG93cyB5b3UgdG8gdXNlIHBkZXYtPmxvY2sgKHdoaWNoDQo+PiBkb2VzIG5vdCBleGlzdHMgaW4g
dGhpcyBzZXJpZXMsIGJ1dCB3YXMgaW50cm9kdWNlZCBpbiBhIFJGQyBlYXJsaWVyKSwgb3INCj4+
IHZwY2ktPmxvY2ssIG9yIGFueSBvdGhlciBzdWJzeXN0ZW0tPmxvY2suDQo+DQo+IEkgZ3Vlc3Mg
SSB3YXMgbWlzc2luZyB0aGlzIG90aGVyIGJpdCBhYm91dCBpbnRyb2R1Y2luZyBhDQo+IHBlci1k
ZXZpY2UgbG9jaywgd291bGQgaXQgYmUgcG9zc2libGUgdG8gYnVuZGxlIGFsbCB0aGlzIHRvZ2V0
aGVyIGludG8NCj4gYSBzaW5nbGUgcGF0Y2ggc2VyaWVzPw0KDQpBcyBJIHNhaWQgYXQgdGhlIHRv
cCBvZiB0aGlzIGVtYWlsLCBpdCB3YXMgdHJpZWQuIFlvdSBjYW4gY2hlY2sgUkZDIGF0IFsxXS4N
Cg0KPg0KPiBJdCB3b3VsZCBiZSBnb29kIHRvIHBsYWNlIHRoaXMgY2hhbmdlIHRvZ2V0aGVyIHdp
dGggYW55IG90aGVyIGxvY2tpbmcNCj4gcmVsYXRlZCBjaGFuZ2UgdGhhdCB5b3UgaGF2ZSBwZW5k
aW5nLg0KDQpIb25lc3RseSwgbXkgbWFpbiBnb2FsIGlzIHRvIGZpeCB0aGUgY3VycmVudCBpc3N1
ZXMgd2l0aCB2UENJLCBzbyBBUk0NCmNhbiBtb3ZlIGZvcndhcmQgb24gYWRkaW5nIFBDSSBzdXBw
b3J0IGZvciB0aGUgcGxhdGZvcm0uIFNvLCBJIGFtDQpmb2N1c2luZyBvbiB0aGlzIHJpZ2h0IG5v
dy4NCg0KWzFdDQpodHRwczovL3BhdGNod29yay5rZXJuZWwub3JnL3Byb2plY3QveGVuLWRldmVs
L2NvdmVyLzIwMjIwODMxMTQxMDQwLjEzMjMxLTEtdm9sb2R5bXlyX2JhYmNodWtAZXBhbS5jb20v
DQoNClsyXQ0KaHR0cHM6Ly9wYXRjaHdvcmsua2VybmVsLm9yZy9wcm9qZWN0L3hlbi1kZXZlbC9j
b3Zlci8yMDIyMDIxNjE1MTYyOC4xNjEwNzc3LTEtYW5kcjIwMDBAZ21haWwuY29tLw0KDQotLSAN
CldCUiwgVm9sb2R5bXly
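
The kref-style lifetime rule argued for above (the count pins the "struct
pci_dev" representation in memory; it is not a usage counter for the
physical device) can be sketched roughly as follows. Only the
pcidev_get()/pcidev_put() names come from the patch; the field layout,
pcidev_alloc() helper and use of C11 atomics here are illustrative
assumptions, not the series' actual implementation:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-in for Xen's struct pci_dev. */
struct pci_dev {
    atomic_uint refcnt;
    /* ... sbdf, domain, fault bookkeeping, ... */
};

/* Models pci_add_device() taking the initial reference. */
static struct pci_dev *pcidev_alloc(void)
{
    struct pci_dev *pdev = calloc(1, sizeof(*pdev));

    if ( pdev )
        atomic_init(&pdev->refcnt, 1);
    return pdev;
}

/* Take a reference: the caller's pointer stays valid until its put. */
static struct pci_dev *pcidev_get(struct pci_dev *pdev)
{
    atomic_fetch_add(&pdev->refcnt, 1);
    return pdev;
}

/* Drop a reference; the last put frees the object.  In the series,
 * pci_remove_device() drops the initial reference from pci_add_device(). */
static void pcidev_put(struct pci_dev *pdev)
{
    if ( atomic_fetch_sub(&pdev->refcnt, 1) == 1 )
        free(pdev);
}
```

Under this model an admin can still remove an assigned device: removal
drops the initial reference, and the struct merely outlives it until the
last concurrent user calls pcidev_put().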


From xen-devel-bounces@lists.xenproject.org Wed Apr 12 23:25:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Apr 2023 23:25:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520409.807945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmjq1-0004p9-Tc; Wed, 12 Apr 2023 23:25:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520409.807945; Wed, 12 Apr 2023 23:25:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmjq1-0004p2-Qk; Wed, 12 Apr 2023 23:25:17 +0000
Received: by outflank-mailman (input) for mailman id 520409;
 Wed, 12 Apr 2023 23:25:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M+hK=AD=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pmjq0-0004ow-DM
 for xen-devel@lists.xenproject.org; Wed, 12 Apr 2023 23:25:16 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 45f9ca5e-d989-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 01:25:13 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 20EEB62E1D;
 Wed, 12 Apr 2023 23:25:12 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E3A99C433EF;
 Wed, 12 Apr 2023 23:25:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45f9ca5e-d989-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1681341911;
	bh=vPbyqvQQY1g9sdApUh+HKPuhZ4O391Xu9iswHov8S3E=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=SPWtHMDfw22R1+xyrL3WXUPwDIm73YYfsCPu7ALXE7hzQf66/Hs+UGgPZ36t0uBy6
	 PGPLEUpXAhNDPhpFwvcFSmO4jXj5j0+keYBkpSjlaFAV6Yz7HsBQPRsIKsUKxaMD4B
	 0OU+sRtst683UepYfjcBjGHxDubT1O9r+N7bhsElqIzQxL0iXo8TQfVx/WRttLL6b8
	 FPU8aMiQzPOWh1YbQT8+peLo7IwNQU1XDEyx9o1zB84Vt0NY9ln03oGlhsDb3CJAC+
	 jb+39oSJTZOIMqmi2NTDMrW9YcfAz15tgrcuicNSfLwrvLHoeYtvrR1n5ZYDcoBzPK
	 bdX5Zw5dvQGqQ==
Date: Wed, 12 Apr 2023 16:25:08 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: xen-devel <xen-devel@lists.xenproject.org>, 
    "committers@xenproject.org" <committers@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
    Henry Wang <Henry.Wang@arm.com>, michal.orzel@amd.com, 
    anthony.perard@citrix.com, george.dunlap@cloud.com, julien@xen.org
Subject: Re: Gitlab status on older branches (Inc some 4.18 blockers)
In-Reply-To: <193206bf-76a0-818d-8fa8-1886a15ad5e5@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2304121522410.15580@ubuntu-linux-20-04-desktop>
References: <193206bf-76a0-818d-8fa8-1886a15ad5e5@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1692276256-1681338173=:15580"
Content-ID: <alpine.DEB.2.22.394.2304121522590.15580@ubuntu-linux-20-04-desktop>


--8323329-1692276256-1681338173=:15580
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2304121522591.15580@ubuntu-linux-20-04-desktop>

On Wed, 12 Apr 2023, Andrew Cooper wrote:
> Hello,
> 
> I've done various backports to staging-4.14 and later trying to improve
> the state of Gitlab testing.
> 
> The good news is that 4.16 and 4.17 now pass.  The bad news is that
> there are still bugs which need fixing, but let's start with the older
> branches.
> 
> Also, I was forced to backport an update to SeaBIOS 1.16 to all branches
> in order to fix compile failures in build environments we supported at
> the time of each of these releases.  I honestly don't know what we were
> failing to do, testing-wise, back then, but whatever we missed ought to
> have been release blockers.
> 
> 
> 4.15:
> https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/834460832
> 
> Individual failure instances:
> 
> 1) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097232265
> 
> This is a -Werror=array-bounds failure in HVMLoader but the same
> job/container works in 4.16 and newer, and the underlying code is the
> same.  There must be some change in the build environment, but I haven't
> worked out what yet.

> 2) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097232266
> 
> This is a -Werror=array-bounds in iPXE.  Probably needs an update like
> SeaBIOS too.
 

Also, seeing your valid comments below about using stable releases for
tests, I think it would be reasonable to disable the archlinux tests in
stable trees, or mark them allow_failure, given that archlinux is a
rolling distro.
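
A minimal sketch of what marking such a job non-blocking could look like
in the GitLab CI config (the job name, parent template and container tag
below are hypothetical, not the actual entries under automation/gitlab-ci/):

```yaml
# Hypothetical job entry: let rolling-distro churn fail without
# failing the whole stable-branch pipeline.
archlinux-current-gcc:
  extends: .gcc-x86-64-build
  variables:
    CONTAINER: archlinux:current
  allow_failure: true
```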

This would "solve" 1) and 2), and also 5) and probably 6) below.

In the sense that maybe it doesn't make sense to backport fixes to 4.15
just so that we can build 4.15 with versions of GCC released after 4.15
was out. I am open to all options, and also OK with fixing it if we prefer.

That leaves us with 3) and 4).


> 3) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097232290
> 
> This is a Qemu upstream failure which I do vaguely recall.  But it also
> means that Xen 4.15 had a dead-on-arrival version of qemu which calls
> into question a number of our normal release activities.  Probably the
> least-bad option is to backport the one fix relevant to this, because
> changing the version of qemu in the security-only trees is far riskier
> than changing one of the in-guest ROMs.

This seems to be a header inclusion problem: /usr/include/linux/swab.h
is picking up the BITS_PER_LONG implementation from QEMU
(include/qemu/bitops.h) instead of the one from somewhere else under
/usr/include.

/usr/include/linux/swab.h:137:5: error: token is not a valid binary operator in a preprocessor subexpression
#if BITS_PER_LONG == 64
    ^~~~~~~~~~~~~
/builds/xen-project/people/andyhhp/xen/tools/qemu-xen-dir/include/qemu/bitops.h:20:41: note: expanded from macro 'BITS_PER_LONG'
#define BITS_PER_LONG           (sizeof (unsigned long) * BITS_PER_BYTE)
                                 ~~~~~~ ^
  CC      block/commit.o
1 error generated.
/builds/xen-project/people/andyhhp/xen/tools/qemu-xen-dir/rules.mak:69: recipe for target 'block/file-posix.o' failed
make: *** [block/file-posix.o] Error 1

I don't know which specific QEMU commit would solve this, or whether we
can get away with removing or installing one or more Debian Stretch
packages to make it work.



There is one more, different failure here:
https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097232292

This one is:

qemu-xen-dir/include/qemu/bitops.h:20:34: error: "sizeof" is not defined [-Werror=undef]
 #define BITS_PER_LONG           (sizeof (unsigned long) * BITS_PER_BYTE)

And it seems to match this commit from qemu.git:

commit b5d621ff4a7d86e82a58104d5706bda2b4238626
Author: Thomas Huth <thuth@redhat.com>
Date:   Wed May 20 10:38:37 2020 +0200

    gitlab-ci: Do not use the standard container images from gitlab

which points to a problem with the base container image.


> 4) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097232334
> 
> I have no idea what's going on here.  If nothing else, we're failing to
> collect all the relevant log files from a build and that probably wants

https://lists.gnu.org/archive/html/qemu-devel/2017-03/msg01103.html

From the thread, the same error previously happened when the QEMU configure
invocation didn't have:

  -I$(XEN_ROOT)/tools/libs/devicemodel/include 
  -L$(XEN_ROOT)/tools/libs/devicemodel 

in --extra-cflags.

However, there are no changes in tools/Makefile between 4.15 and 4.16,
which makes me think this issue was fixed on the QEMU side instead.


> 5) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097232324
> 
> This isn't so much about the failure, as the fact that the OpenSUSE Leap
> tests (which were replaced with tumbleweed in newer versions of Xen)
> probably want the same doing to them.  And being marked as
> non-blocking.  This is a failure somewhere in the middle of qemu.
> 
> But, on top of all of ^, I discovered that we have a majority of tests
> being debian/unstable which, when we refreshed to fix the HTTPS issue,
> ended up retrofitting a newer-than-at-release-time build environment to
> the old trees.
> 
> This has come up previously, and not been addressed, so I'm now declaring
> it a blocker for 4.18.  Only tests against a fixed distro version can be
> blocking; those against an unstable distro must be non-blocking, and
> most of the currently unstable things should be transformed into their
> stable alternative.  For backports, we want to retrofit what
> debian/unstable was at the time of release, rather than what it
> currently is.

Yes this is a good idea, +1


> Furthermore, the fixed distros we currently test in staging are old
> bordering on obsolete.  Which is not a healthy position to be in as far
> as the 4.18 release goes.
> 
> 
> 4.14:
> https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/834461234
> 
> 6) https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4097236330
> 
> This is the only 4.14 failure I can see which isn't a duplicate of a
> 4.15 failure, but it is an OpenSUSE Leap failure in qemu, so perhaps
> related to #5.
> 
> 
> As a general note, we still have too much testing (and/or insufficient
> testing resource).  It's very painful waiting 2h for each branch to
> complete.  I'm very tempted to trim things down further on staging and
> backport the results.

On the ARM side we should still be OK.

On the x86 side, I was hoping to get more HW resources very soon. I
wonder if any of the people in this thread have any updates?

Otherwise, is there anyone who can temporarily contribute a
workstation/server to run a gitlab-runner?
--8323329-1692276256-1681338173=:15580--


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 01:48:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 01:48:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520430.807986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmm4k-00014a-BD; Thu, 13 Apr 2023 01:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520430.807986; Thu, 13 Apr 2023 01:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmm4k-00014T-7N; Thu, 13 Apr 2023 01:48:38 +0000
Received: by outflank-mailman (input) for mailman id 520430;
 Thu, 13 Apr 2023 01:48:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmm4j-00014J-I6; Thu, 13 Apr 2023 01:48:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmm4j-0001n3-GU; Thu, 13 Apr 2023 01:48:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmm4i-0001fQ-T0; Thu, 13 Apr 2023 01:48:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmm4i-0001Nx-SU; Thu, 13 Apr 2023 01:48:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Efr1PpMxeF4PwMSDVfXXEZlbxscJRHezAn5eQPmWoD0=; b=Y+LBEqdFRciHPggwSsBBLSRH/A
	TdG9XIcJT4SC5MYLVOj9WnRJG8fugwOfTXaHFNYv2pKjhtW9M/u8ZpSWGqsKIPYHMl8ZiAENankpz
	rpReNH3/aOvFdKQS92/laG4Knu46mWhvkGCSlcfOgNwYLZkh5LtcG+pnpXhXKLgMCpVs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180218-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 180218: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-4.15-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-4.15-testing:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=7963cdbf91d8a8d2f8338171adab3807b20f658a
X-Osstest-Versions-That:
    xen=11193e13e5359ba1896be46be3e9b468154c1295
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Apr 2023 01:48:36 +0000

flight 180218 xen-4.15-testing real [real]
flight 180224 xen-4.15-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180218/
http://logs.test-lab.xenproject.org/osstest/logs/180224/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180224-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 179841
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 179841
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 179841
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 179841
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 179841
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 179841
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 179841
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 179841
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 179841
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  7963cdbf91d8a8d2f8338171adab3807b20f658a
baseline version:
 xen                  11193e13e5359ba1896be46be3e9b468154c1295

Last test of basis   179841  2023-03-21 12:36:45 Z   22 days
Testing same since   180218  2023-04-12 08:06:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   11193e13e5..7963cdbf91  7963cdbf91d8a8d2f8338171adab3807b20f658a -> stable-4.15


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 04:12:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 04:12:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520437.807997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmoK4-0007us-7s; Thu, 13 Apr 2023 04:12:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520437.807997; Thu, 13 Apr 2023 04:12:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmoK4-0007ul-4H; Thu, 13 Apr 2023 04:12:36 +0000
Received: by outflank-mailman (input) for mailman id 520437;
 Thu, 13 Apr 2023 04:12:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmoK2-0007ub-0L; Thu, 13 Apr 2023 04:12:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmoK1-0005h4-Rk; Thu, 13 Apr 2023 04:12:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmoK1-0005SO-F4; Thu, 13 Apr 2023 04:12:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmoK1-0004wt-Ee; Thu, 13 Apr 2023 04:12:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3piTDhAffuACSWOgcOwBkOjX2G8AOqMRvKGGVuuub0o=; b=zV8ncQl4oDVzW2YDo9R6C0cZFz
	uJK5bMzJa/ZL68+KhGf/+O/nwImzaQnr+LcvYcl7SdQ0PJhU4AwwDWlPLo2o8HeJfE0vqFHfVLY0m
	vWusGXZJaX9YDiqyJeF1QxkvEW896hwuk0t5OHBWA7sCM40wz1IAXYji5N+DBJRyJs7I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180226-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180226: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=f872a624cbf92de9944483eea7674ef80ced1380
X-Osstest-Versions-That:
    xen=5ea03c570c8610d4359f8bbf5f093d215344ce3f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Apr 2023 04:12:33 +0000

flight 180226 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180226/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  f872a624cbf92de9944483eea7674ef80ced1380
baseline version:
 xen                  5ea03c570c8610d4359f8bbf5f093d215344ce3f

Last test of basis   180205  2023-04-11 13:03:27 Z    1 days
Testing same since   180226  2023-04-13 02:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5ea03c570c..f872a624cb  f872a624cbf92de9944483eea7674ef80ced1380 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 05:31:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 05:31:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520447.808006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmpYL-00087i-Uo; Thu, 13 Apr 2023 05:31:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520447.808006; Thu, 13 Apr 2023 05:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmpYL-00087b-Rs; Thu, 13 Apr 2023 05:31:25 +0000
Received: by outflank-mailman (input) for mailman id 520447;
 Thu, 13 Apr 2023 05:31:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmpYL-00087R-2J; Thu, 13 Apr 2023 05:31:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmpYK-0008B0-VI; Thu, 13 Apr 2023 05:31:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmpYK-0000p8-BP; Thu, 13 Apr 2023 05:31:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmpYK-0002Gd-Ao; Thu, 13 Apr 2023 05:31:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+DEAbIgF2M9Hv8ngGWvCsnS1wOMBT/vILP8Mvt38pR4=; b=xBj8oz84jowqbDBKzm+TR0hUQO
	x05glM2gbF7S2XtEKjPHu3mgwfWdZjl/BY1j50XPjnYLOEgf5iXZr88VlPNuMmxJocmsrKDEp0DBm
	fEkgpooxcTepDYCXlbDwlX2bgpjyzDCXhpQ1YbV5f5ryRLG6NJ//x1R3gXPw3Jaw80Rg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180219-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 180219: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-4.16-testing:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=31627a059c2e186f4ad12d171d964b09abe8a4a9
X-Osstest-Versions-That:
    xen=06264af090ac69a95cdadbc261cc82d964dcb568
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Apr 2023 05:31:24 +0000

flight 180219 xen-4.16-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180219/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180086
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180086
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180086
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180086
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180086
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180086
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180086
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180086
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180086
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  31627a059c2e186f4ad12d171d964b09abe8a4a9
baseline version:
 xen                  06264af090ac69a95cdadbc261cc82d964dcb568

Last test of basis   180086  2023-03-31 07:08:31 Z   12 days
Testing same since   180219  2023-04-12 08:06:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   06264af090..31627a059c  31627a059c2e186f4ad12d171d964b09abe8a4a9 -> stable-4.16


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:15:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520463.808071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBN-0002Q4-Kz; Thu, 13 Apr 2023 07:15:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520463.808071; Thu, 13 Apr 2023 07:15:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBN-0002Px-Gc; Thu, 13 Apr 2023 07:15:49 +0000
Received: by outflank-mailman (input) for mailman id 520463;
 Thu, 13 Apr 2023 07:15:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrBL-0001gr-Fq
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:15:47 +0000
Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com
 [2a00:1450:4864:20::135])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 02094e23-d9cb-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 09:15:45 +0200 (CEST)
Received: by mail-lf1-x135.google.com with SMTP id u12so153802lfu.5
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:45 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02094e23-d9cb-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370145; x=1683962145;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=T8pIVujHEbhV2cnHgvkx7ic1HrkG7oPR2753cV35Lic=;
        b=j/zN3l9fv0KQLVQaEiGtYmZHRVx12v/6iL5ap9JHNd+d76QxhLEM7GP+H7rIAd+TjL
         DMa4mn0kjlYjBOTlMNIU6WzhszIdGrH9wEBvum0FYuM9tPkC/2IW7fvsK6ehOD9bBYEf
         2pbG8Redd+mN7UYsohGRPBCPUqhc+6fqGkFA8WcXtnYF/HNJ1OT43UmhD11IvM7x17xs
         7w+x4zBBLPtipshfSJya2xR+pE7uT+5vDvfUTn3wnvm4n95QkDutQt6Bl2syeeBGzUk2
         20tatdWDtF8jLhYBzgwtZjuiImQfwc3UpJbjhPcDQ1JjZ71Ioo/5ge1+rA0oPatUzp9g
         B2TA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370145; x=1683962145;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=T8pIVujHEbhV2cnHgvkx7ic1HrkG7oPR2753cV35Lic=;
        b=XwBma4TmzI0scF73ODBGIHoFOmvlIAPc6finYzaf0PJwbY4FJt+mt6UfqlHqXTeuYH
         mrsZSyTvlfseiyE2bFe+S4TO/7uh/3yECFrDF9rNzUde7kMKcf+9WAxeZZHRwLRMfvdP
         RIBSU/zxBMv3RzjBxFHFKNno0Yo+j7s3hdaBLmk7/AXr9kv61OUlPyYChVasaUi9ZoCL
         URPAGlLN9xXVZnHjtNa1pBscJax+AcJYoGFf60jTw3olEACinHCaPYagSxjUFZNCVcaj
         nQZ5XatlQBRAOrERuQS+ppys9ImBIvUSuvJwyu8YykvkWp4g1E+lesbpTdePgFe0goyO
         BBNg==
X-Gm-Message-State: AAQBX9ezts9t1CYa30ocvo49PYGdqyuC6bdJU4z5CmbkOl1IVZJEvi0C
	+kdmb5NGbE/UVvQLycS0wYq9zR5yB8tfG1wKb/Y=
X-Google-Smtp-Source: AKy350bj4BD08C0CLbkgoYe6CCcHFDtvUrsW3hSspX4zO5IYWVp0t7iWRBIc5cwXOha6KG3DUvErSA==
X-Received: by 2002:ac2:5d4e:0:b0:4eb:c18:efae with SMTP id w14-20020ac25d4e000000b004eb0c18efaemr591180lfd.17.1681370144856;
        Thu, 13 Apr 2023 00:15:44 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN PATCH v8 02/22] xen/arm: tee: add a primitive FF-A mediator
Date: Thu, 13 Apr 2023 09:14:04 +0200
Message-Id: <20230413071424.3273490-3-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds an FF-A version 1.1 [1] mediator to communicate with a Secure
Partition in the secure world.

This commit brings in only the parts needed to negotiate the FF-A
version number with the guest and the SPMC.

[1] https://developer.arm.com/documentation/den0077/e
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/include/asm/psci.h    |   4 +
 xen/arch/arm/include/asm/tee/ffa.h |  35 +++++
 xen/arch/arm/tee/Kconfig           |  11 ++
 xen/arch/arm/tee/Makefile          |   1 +
 xen/arch/arm/tee/ffa.c             | 219 +++++++++++++++++++++++++++++
 xen/arch/arm/vsmc.c                |  17 ++-
 xen/include/public/arch-arm.h      |   1 +
 7 files changed, 285 insertions(+), 3 deletions(-)
 create mode 100644 xen/arch/arm/include/asm/tee/ffa.h
 create mode 100644 xen/arch/arm/tee/ffa.c

diff --git a/xen/arch/arm/include/asm/psci.h b/xen/arch/arm/include/asm/psci.h
index 832f77afff3a..4780972621bb 100644
--- a/xen/arch/arm/include/asm/psci.h
+++ b/xen/arch/arm/include/asm/psci.h
@@ -24,6 +24,10 @@ void call_psci_cpu_off(void);
 void call_psci_system_off(void);
 void call_psci_system_reset(void);
 
+/* Range of allocated PSCI function numbers */
+#define	PSCI_FNUM_MIN_VALUE                 _AC(0,U)
+#define	PSCI_FNUM_MAX_VALUE                 _AC(0x1f,U)
+
 /* PSCI v0.2 interface */
 #define PSCI_0_2_FN32(nr) ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,             \
                                              ARM_SMCCC_CONV_32,               \
diff --git a/xen/arch/arm/include/asm/tee/ffa.h b/xen/arch/arm/include/asm/tee/ffa.h
new file mode 100644
index 000000000000..44361a4e78e4
--- /dev/null
+++ b/xen/arch/arm/include/asm/tee/ffa.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xen/arch/arm/include/asm/tee/ffa.h
+ *
+ * Arm Firmware Framework for ARMv8-A (FF-A) mediator
+ *
+ * Copyright (C) 2023  Linaro Limited
+ */
+
+#ifndef __ASM_ARM_TEE_FFA_H__
+#define __ASM_ARM_TEE_FFA_H__
+
+#include <xen/const.h>
+#include <xen/kconfig.h>
+
+#include <asm/smccc.h>
+#include <asm/types.h>
+
+#define FFA_FNUM_MIN_VALUE              _AC(0x60,U)
+#define FFA_FNUM_MAX_VALUE              _AC(0x86,U)
+
+static inline bool is_ffa_fid(uint32_t fid)
+{
+    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
+
+    return fn >= FFA_FNUM_MIN_VALUE && fn <= FFA_FNUM_MAX_VALUE;
+}
+
+#ifdef CONFIG_FFA
+#define FFA_NR_FUNCS    12
+#else
+#define FFA_NR_FUNCS    0
+#endif
+
+#endif /*__ASM_ARM_TEE_FFA_H__*/
diff --git a/xen/arch/arm/tee/Kconfig b/xen/arch/arm/tee/Kconfig
index 392169b2559d..923f08ba8cb7 100644
--- a/xen/arch/arm/tee/Kconfig
+++ b/xen/arch/arm/tee/Kconfig
@@ -8,3 +8,14 @@ config OPTEE
 	  virtualization-enabled OP-TEE present. You can learn more
 	  about virtualization for OP-TEE at
 	  https://optee.readthedocs.io/architecture/virtualization.html
+
+config FFA
+	bool "Enable FF-A mediator support (UNSUPPORTED)" if UNSUPPORTED
+	default n
+	depends on ARM_64
+	help
+	  This option enables a minimal FF-A mediator. The mediator is
+	  generic as it follows the FF-A specification [1], but it only
+	  implements a small subset of the specification.
+
+	  [1] https://developer.arm.com/documentation/den0077/latest
diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
index 982c87968447..58a1015e40e0 100644
--- a/xen/arch/arm/tee/Makefile
+++ b/xen/arch/arm/tee/Makefile
@@ -1,2 +1,3 @@
+obj-$(CONFIG_FFA) += ffa.o
 obj-y += tee.o
 obj-$(CONFIG_OPTEE) += optee.o
diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
new file mode 100644
index 000000000000..aaf74c287aef
--- /dev/null
+++ b/xen/arch/arm/tee/ffa.c
@@ -0,0 +1,219 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xen/arch/arm/tee/ffa.c
+ *
+ * Arm Firmware Framework for ARMv8-A (FF-A) mediator
+ *
+ * Copyright (C) 2023  Linaro Limited
+ */
+
+#include <xen/bitops.h>
+#include <xen/domain_page.h>
+#include <xen/errno.h>
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+#include <xen/sizes.h>
+#include <xen/types.h>
+
+#include <asm/event.h>
+#include <asm/regs.h>
+#include <asm/smccc.h>
+#include <asm/tee/ffa.h>
+#include <asm/tee/tee.h>
+
+/* Error codes */
+#define FFA_RET_OK                      0
+#define FFA_RET_NOT_SUPPORTED           -1
+#define FFA_RET_INVALID_PARAMETERS      -2
+#define FFA_RET_NO_MEMORY               -3
+#define FFA_RET_BUSY                    -4
+#define FFA_RET_INTERRUPTED             -5
+#define FFA_RET_DENIED                  -6
+#define FFA_RET_RETRY                   -7
+#define FFA_RET_ABORTED                 -8
+
+/* FFA_VERSION helpers */
+#define FFA_VERSION_MAJOR_SHIFT         16U
+#define FFA_VERSION_MAJOR_MASK          0x7FFFU
+#define FFA_VERSION_MINOR_SHIFT         0U
+#define FFA_VERSION_MINOR_MASK          0xFFFFU
+#define MAKE_FFA_VERSION(major, minor)  \
+        ((((major) & FFA_VERSION_MAJOR_MASK) << FFA_VERSION_MAJOR_SHIFT) | \
+         ((minor) & FFA_VERSION_MINOR_MASK))
+
+#define FFA_VERSION_1_0         MAKE_FFA_VERSION(1, 0)
+#define FFA_VERSION_1_1         MAKE_FFA_VERSION(1, 1)
+/* The minimal FF-A version of the SPMC that can be supported */
+#define FFA_MIN_SPMC_VERSION    FFA_VERSION_1_1
+
+/*
+ * This is the version we want to use in communication with guests and SPs.
+ * During negotiation with a guest or a SP we may need to lower it for
+ * that particular guest or SP.
+ */
+#define FFA_MY_VERSION_MAJOR    1U
+#define FFA_MY_VERSION_MINOR    1U
+#define FFA_MY_VERSION          MAKE_FFA_VERSION(FFA_MY_VERSION_MAJOR, \
+                                                 FFA_MY_VERSION_MINOR)
+
+/* Function IDs */
+#define FFA_ERROR                       0x84000060U
+#define FFA_SUCCESS_32                  0x84000061U
+#define FFA_VERSION                     0x84000063U
+
+struct ffa_ctx {
+    /* FF-A version used by the guest */
+    uint32_t guest_vers;
+};
+
+/* Negotiated FF-A version to use with the SPMC */
+static uint32_t ffa_version __ro_after_init;
+
+static bool ffa_get_version(uint32_t *vers)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_VERSION,
+        .a1 = FFA_MY_VERSION,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+    if ( resp.a0 == FFA_RET_NOT_SUPPORTED )
+    {
+        gprintk(XENLOG_ERR, "ffa: FFA_VERSION returned not supported\n");
+        return false;
+    }
+
+    *vers = resp.a0;
+
+    return true;
+}
+
+static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1,
+                     register_t v2, register_t v3, register_t v4, register_t v5,
+                     register_t v6, register_t v7)
+{
+        set_user_reg(regs, 0, v0);
+        set_user_reg(regs, 1, v1);
+        set_user_reg(regs, 2, v2);
+        set_user_reg(regs, 3, v3);
+        set_user_reg(regs, 4, v4);
+        set_user_reg(regs, 5, v5);
+        set_user_reg(regs, 6, v6);
+        set_user_reg(regs, 7, v7);
+}
+
+static void handle_version(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+    uint32_t vers = get_user_reg(regs, 1);
+
+    if ( vers < FFA_VERSION_1_1 )
+        vers = FFA_VERSION_1_0;
+    else
+        vers = FFA_VERSION_1_1;
+
+    ctx->guest_vers = vers;
+    set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
+}
+
+static bool ffa_handle_call(struct cpu_user_regs *regs)
+{
+    uint32_t fid = get_user_reg(regs, 0);
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( !ctx )
+        return false;
+
+    switch ( fid )
+    {
+    case FFA_VERSION:
+        handle_version(regs);
+        return true;
+
+    default:
+        gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
+        return false;
+    }
+}
+
+static int ffa_domain_init(struct domain *d)
+{
+    struct ffa_ctx *ctx;
+
+    if ( !ffa_version )
+        return -ENODEV;
+
+    ctx = xzalloc(struct ffa_ctx);
+    if ( !ctx )
+        return -ENOMEM;
+
+    d->arch.tee = ctx;
+
+    return 0;
+}
+
+/* This function is supposed to undo what ffa_domain_init() has done */
+static int ffa_relinquish_resources(struct domain *d)
+{
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( !ctx )
+        return 0;
+
+    XFREE(d->arch.tee);
+
+    return 0;
+}
+
+static bool ffa_probe(void)
+{
+    uint32_t vers;
+    unsigned int major_vers;
+    unsigned int minor_vers;
+
+    /*
+     * psci_init_smccc() updates this value with what's reported by EL-3
+     * or secure world.
+     */
+    if ( smccc_ver < ARM_SMCCC_VERSION_1_2 )
+    {
+        printk(XENLOG_ERR
+               "ffa: unsupported SMCCC version %#x (need at least %#x)\n",
+               smccc_ver, ARM_SMCCC_VERSION_1_2);
+        return false;
+    }
+
+    if ( !ffa_get_version(&vers) )
+        return false;
+
+    if ( vers < FFA_MIN_SPMC_VERSION || vers > FFA_MY_VERSION )
+    {
+        printk(XENLOG_ERR "ffa: Incompatible version %#x found\n", vers);
+        return false;
+    }
+
+    major_vers = (vers >> FFA_VERSION_MAJOR_SHIFT) & FFA_VERSION_MAJOR_MASK;
+    minor_vers = vers & FFA_VERSION_MINOR_MASK;
+    printk(XENLOG_INFO "ARM FF-A Mediator version %u.%u\n",
+           FFA_MY_VERSION_MAJOR, FFA_MY_VERSION_MINOR);
+    printk(XENLOG_INFO "ARM FF-A Firmware version %u.%u\n",
+           major_vers, minor_vers);
+
+    ffa_version = vers;
+
+    return true;
+}
+
+static const struct tee_mediator_ops ffa_ops =
+{
+    .probe = ffa_probe,
+    .domain_init = ffa_domain_init,
+    .relinquish_resources = ffa_relinquish_resources,
+    .handle_call = ffa_handle_call,
+};
+
+REGISTER_TEE_MEDIATOR(ffa, "FF-A", XEN_DOMCTL_CONFIG_TEE_FFA, &ffa_ops);
diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
index cd68fa80e98a..7f2f5eb9ce3d 100644
--- a/xen/arch/arm/vsmc.c
+++ b/xen/arch/arm/vsmc.c
@@ -15,6 +15,7 @@
 #include <asm/monitor.h>
 #include <asm/regs.h>
 #include <asm/smccc.h>
+#include <asm/tee/ffa.h>
 #include <asm/tee/tee.h>
 #include <asm/traps.h>
 #include <asm/vpsci.h>
@@ -24,7 +25,7 @@
 #define XEN_SMCCC_FUNCTION_COUNT 3
 
 /* Number of functions currently supported by Standard Service Service Calls. */
-#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS)
+#define SSSC_SMCCC_FUNCTION_COUNT (3 + VPSCI_NR_FUNCS + FFA_NR_FUNCS)
 
 static bool fill_uid(struct cpu_user_regs *regs, xen_uuid_t uuid)
 {
@@ -188,13 +189,23 @@ static bool handle_existing_apis(struct cpu_user_regs *regs)
     return do_vpsci_0_1_call(regs, fid);
 }
 
+static bool is_psci_fid(uint32_t fid)
+{
+    uint32_t fn = fid & ARM_SMCCC_FUNC_MASK;
+
+    return fn >= PSCI_FNUM_MIN_VALUE && fn <= PSCI_FNUM_MAX_VALUE;
+}
+
 /* PSCI 0.2 interface and other Standard Secure Calls */
 static bool handle_sssc(struct cpu_user_regs *regs)
 {
     uint32_t fid = (uint32_t)get_user_reg(regs, 0);
 
-    if ( do_vpsci_0_2_call(regs, fid) )
-        return true;
+    if ( is_psci_fid(fid) )
+        return do_vpsci_0_2_call(regs, fid);
+
+    if ( is_ffa_fid(fid) )
+        return tee_handle_call(regs);
 
     switch ( fid )
     {
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 1528ced5097a..92aff923056a 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -296,6 +296,7 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
 
 #define XEN_DOMCTL_CONFIG_TEE_NONE      0
 #define XEN_DOMCTL_CONFIG_TEE_OPTEE     1
+#define XEN_DOMCTL_CONFIG_TEE_FFA       2
 
 struct xen_arch_domainconfig {
     /* IN/OUT */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:15:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520468.808119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBR-0003e2-Tt; Thu, 13 Apr 2023 07:15:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520468.808119; Thu, 13 Apr 2023 07:15:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBR-0003cB-NB; Thu, 13 Apr 2023 07:15:53 +0000
Received: by outflank-mailman (input) for mailman id 520468;
 Thu, 13 Apr 2023 07:15:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrBP-0001gq-Dc
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:15:51 +0000
Received: from mail-lf1-x134.google.com (mail-lf1-x134.google.com
 [2a00:1450:4864:20::134])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0550939e-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:15:50 +0200 (CEST)
Received: by mail-lf1-x134.google.com with SMTP id d7so28946612lfj.3
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:50 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0550939e-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370150; x=1683962150;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/vIqA0Q41NfcSAHoiP+QU7NDN221D0N/8z0QGKDCRsU=;
        b=NZKECmgRCwD3/Qt/bQB0GE/LjYiBu8Q2NiBGD0tYIoffAUzZUEIEg1ZhPAEM7wCIzB
         HVRzL93FmjUV+sQFKE63TqJy5SYL88iVwe0lhTKx8TzLqP5FsngVGv7NwsYdzj95bZkq
         74bNTFshVeGzooHtp9hPQFnB4uaRKEG/Lisi2zZPBdCp48nfvov6kx6pBaF3+rbPaAzr
         SmyHDDmGE4vy/RGBE2uPyLO0X9dMF9dNbI0JYHkyKS1Qzyc6KKfvWp4ukx1qDGyP8Tga
         56Od1+AyTWKqEYvWJ6u5nmVv+eavkqdnMP3m2gs7QCgLCxg0HySO9LooAAMSod4CTMFt
         OznA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370150; x=1683962150;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=/vIqA0Q41NfcSAHoiP+QU7NDN221D0N/8z0QGKDCRsU=;
        b=I4l2vRPuCadQetVMKL9mQ/0Y+xyxaa2XQIgG9VZDqJj1jAJJFB0hQ8xaKep9BlBHmI
         enCxi0yYmNVl91TWoJDiE4YkvBXSrPxRCtF0wwsk15tyCS/QsroA5P0Ke/d5k9FKIKe/
         yhGTT9u7l0l4WNnkn5bq7YeALK6qGOa726SYzwRzNmwTDbfa7JisUctKunQq2iRAXJH/
         8Oxpk5sFvqYucgpBQ5iZNGOlJjNExUcR4Uk8u46voMRUxeA1+7eIokKAR269rGylOBqJ
         45UxuDXjkolvQKG/y4XdFDEfNThsfUQi/ACoDPcNCooLYI2FXxpNOYMUm5/I7wALkmsS
         cScQ==
X-Gm-Message-State: AAQBX9dCU6fStQ8+Joxv3VV3JlEkgw/qi+bbF620h3GCyomyrb9HPQ9i
	G9uHZlidmK7FpGfl+cI7KAMnEL1N2gcxtNcdVQQ=
X-Google-Smtp-Source: AKy350Yeiq68Y6+HSv/qOyPfcMO+tL0U6l+2JdS8VPoPn1UifZK4LBUiSq/JMu8PwrkWsTEHHs2mfQ==
X-Received: by 2002:a19:f60e:0:b0:4e8:5576:98f4 with SMTP id x14-20020a19f60e000000b004e8557698f4mr563090lfe.45.1681370150463;
        Thu, 13 Apr 2023 00:15:50 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
Date: Thu, 13 Apr 2023 09:14:11 +0200
Message-Id: <20230413071424.3273490-10-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds support for sending an FF-A direct request. Checks that the SP also
supports handling a 32-bit direct request. 64-bit direct requests are
not used by the mediator itself, so there is no need to check for that.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 112 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 112 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index f129879c5b81..f2cce955d981 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -181,6 +181,56 @@ static bool ffa_get_version(uint32_t *vers)
     return true;
 }
 
+static int32_t get_ffa_ret_code(const struct arm_smccc_1_2_regs *resp)
+{
+    switch ( resp->a0 )
+    {
+    case FFA_ERROR:
+        if ( resp->a2 )
+            return resp->a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    case FFA_SUCCESS_32:
+    case FFA_SUCCESS_64:
+        return FFA_RET_OK;
+    default:
+        return FFA_RET_NOT_SUPPORTED;
+    }
+}
+
+static int32_t ffa_simple_call(uint32_t fid, register_t a1, register_t a2,
+                               register_t a3, register_t a4)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = fid,
+        .a1 = a1,
+        .a2 = a2,
+        .a3 = a3,
+        .a4 = a4,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    return get_ffa_ret_code(&resp);
+}
+
+static int32_t ffa_features(uint32_t id)
+{
+    return ffa_simple_call(FFA_FEATURES, id, 0, 0, 0);
+}
+
+static bool check_mandatory_feature(uint32_t id)
+{
+    int32_t ret = ffa_features(id);
+
+    if ( ret )
+        printk(XENLOG_ERR "ffa: mandatory feature id %#x missing: error %d\n",
+               id, ret);
+
+    return !ret;
+}
+
 static uint16_t get_vm_id(const struct domain *d)
 {
     /* +1 since 0 is reserved for the hypervisor in FF-A */
@@ -222,6 +272,57 @@ static void handle_version(struct cpu_user_regs *regs)
     set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
 }
 
+static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
+{
+    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
+    struct arm_smccc_1_2_regs resp = { };
+    struct domain *d = current->domain;
+    uint32_t src_dst;
+    uint64_t mask;
+
+    if ( smccc_is_conv_64(fid) )
+        mask = GENMASK_ULL(63, 0);
+    else
+        mask = GENMASK_ULL(31, 0);
+
+    src_dst = get_user_reg(regs, 1);
+    if ( (src_dst >> 16) != get_vm_id(d) )
+    {
+        resp.a0 = FFA_ERROR;
+        resp.a2 = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    arg.a1 = src_dst;
+    arg.a2 = get_user_reg(regs, 2) & mask;
+    arg.a3 = get_user_reg(regs, 3) & mask;
+    arg.a4 = get_user_reg(regs, 4) & mask;
+    arg.a5 = get_user_reg(regs, 5) & mask;
+    arg.a6 = get_user_reg(regs, 6) & mask;
+    arg.a7 = get_user_reg(regs, 7) & mask;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+    switch ( resp.a0 )
+    {
+    case FFA_ERROR:
+    case FFA_SUCCESS_32:
+    case FFA_SUCCESS_64:
+    case FFA_MSG_SEND_DIRECT_RESP_32:
+    case FFA_MSG_SEND_DIRECT_RESP_64:
+        break;
+    default:
+        /* Bad fid, report back. */
+        memset(&arg, 0, sizeof(arg));
+        arg.a0 = FFA_ERROR;
+        arg.a1 = src_dst;
+        arg.a2 = FFA_RET_ABORTED;
+    }
+
+out:
+    set_regs(regs, resp.a0, resp.a1 & mask, resp.a2 & mask, resp.a3 & mask,
+             resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, resp.a7 & mask);
+}
+
 static bool ffa_handle_call(struct cpu_user_regs *regs)
 {
     uint32_t fid = get_user_reg(regs, 0);
@@ -239,6 +340,10 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
     case FFA_ID_GET:
         set_regs_success(regs, get_vm_id(d), 0);
         return true;
+    case FFA_MSG_SEND_DIRECT_REQ_32:
+    case FFA_MSG_SEND_DIRECT_REQ_64:
+        handle_msg_send_direct_req(regs, fid);
+        return true;
 
     default:
         gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
@@ -326,6 +431,13 @@ static bool ffa_probe(void)
     printk(XENLOG_INFO "ARM FF-A Firmware version %u.%u\n",
            major_vers, minor_vers);
 
+    /*
+     * TODO save result of checked features and use that information to
+     * accept or reject requests from guests.
+     */
+    if ( !check_mandatory_feature(FFA_MSG_SEND_DIRECT_REQ_32) )
+        return false;
+
     ffa_version = vers;
 
     return true;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:15:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520460.808041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBL-0001hE-Jw; Thu, 13 Apr 2023 07:15:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520460.808041; Thu, 13 Apr 2023 07:15:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBL-0001h7-Gs; Thu, 13 Apr 2023 07:15:47 +0000
Received: by outflank-mailman (input) for mailman id 520460;
 Thu, 13 Apr 2023 07:15:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrBK-0001gr-5N
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:15:46 +0000
Received: from mail-lf1-x133.google.com (mail-lf1-x133.google.com
 [2a00:1450:4864:20::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0100e1cd-d9cb-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 09:15:43 +0200 (CEST)
Received: by mail-lf1-x133.google.com with SMTP id i26so18220895lfc.6
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:43 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0100e1cd-d9cb-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370143; x=1683962143;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=VMvjySMspSE015gZK69lV2WbUjdretJTAbkf7n0CxhU=;
        b=Lz+hZG3+qpCZKO5KqfGN+4tEOzNy3kN9j0ZEJl4JORDnXfr4lDA1vuEuexCKwVNBGb
         5LT+zFisK/EdFBTpnjecyWIN5QQWQP2XOvGyYaKP5pItVCvNJFaXaXekZ6vIWcQYqh4t
         f1XDAkPkVqVrtorHhOz2mKbPwS5S1CH7emTH0H7t1bBKQyFK9rU9kA2gfs0+1bI6KycG
         9k0Wh7lYUHpgCVAjntR7Kz6Bf4epqpw40EtYPZgTYGPUv0c+6EKzq29NLyKZVpRuB1qx
         ngEkneWug6IEOVJ5aeSZzT3XwDmYCH45UQ6zr9xoyFjk9u50Q2Jk6sPyz9Luu7CbU4hP
         qJxA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370143; x=1683962143;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=VMvjySMspSE015gZK69lV2WbUjdretJTAbkf7n0CxhU=;
        b=ZWQHm6pWqQ4UVrGLLRqB0Kda/BncgBxfgGJCVR8yzhF7lVZgk+ighQPeq/JUD5hxO1
         6tPKo2bSDYJyMK8TCy5VAx8b5QxcobsYYuVcfOID/dvE/wWwbvHX+S2qhxMl7W6NqbIx
         WYPXEQ4qydChpEHCOZINpIvrkSsLzZvRVSKpHAjpPCS8dy5ceDgO7S5KEFX8bxYmfloR
         SLf7TBpTMX3QnbJlp3eJ8a/XCShQ9ZgVBdguolWzhxt1YEhhz5uLj4N4qXGo4zJBq33g
         Gxlxf0x98Tox0uG6+4Ha/2l9zp2jJ6qsfryuLJDWbaR0TwRnd8Roz9O/29/0GcVu77xk
         rjeQ==
X-Gm-Message-State: AAQBX9ejH1NwoLK/UnF/QtxYAs064sKRh5zJwGLwZpzCW/1kGbJ8dHiY
	YlvnIv/vwfwT1FovULFJ99sn7DfLAU5acR49S+I=
X-Google-Smtp-Source: AKy350Z7Z6sTH2OTRU5VHM21XPoxz9a5+BQOI0gle7wcKkeA8xcOkzPoaWtTBGKGHARxcWkEYj4TTw==
X-Received: by 2002:a19:c215:0:b0:4b6:f51e:b8b6 with SMTP id l21-20020a19c215000000b004b6f51eb8b6mr486047lfc.56.1681370142959;
        Thu, 13 Apr 2023 00:15:42 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Luca Fancellu <luca.fancellu@arm.com>,
	Julien Grall <jgrall@amazon.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [XEN PATCH v8 00/22] Xen FF-A mediator
Date: Thu, 13 Apr 2023 09:14:02 +0200
Message-Id: <20230413071424.3273490-1-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi,

This patch set adds an FF-A [1] mediator to the TEE mediator framework
already present in Xen.  The FF-A mediator implements the subset of the
FF-A 1.1 specification needed to communicate with OP-TEE using FF-A as the
transport mechanism instead of SMC/HVC as with the OP-TEE mediator. It
allows a design in OP-TEE similar to that used with the OP-TEE mediator,
where OP-TEE presents one virtual partition of itself to each guest in Xen.

The FF-A mediator is generic in the sense that it has nothing OP-TEE
specific, except that only the subset needed for OP-TEE is implemented so
far. The hooks needed to inform OP-TEE that a guest is created or destroyed
are part of the FF-A specification.

It should be possible to extend the FF-A mediator to implement a larger
portion of the FF-A 1.1 specification without breaking the way OP-TEE is
communicated with here. So it should be possible to support any TEE or
Secure Partition that uses FF-A as transport with this mediator.

The patches are also available at https://github.com/jenswi-linaro/xen
branch "xen_ffa_v8".

With help from Bertrand I've integrated this in a test setup with OP-TEE.
Please check prerequisites at
https://optee.readthedocs.io/en/latest/building/prerequisites.html

My setup can be duplicated using:
repo init -u https://github.com/jenswi-linaro/manifest.git -m qemu_v8.xml \
        -b qemu_xen_ffa
repo sync -j8
cd build
make -j8 toolchains
make -j8 all
make run-only

Test in dom0 with, for instance:
xtest 1004

at the prompt.

To start up a domU and connect to it, do:
cd /mnt/host/build/qemu_v8/xen
xl create guest_ffa.cfg
xl console domu

Then test as usual with "xtest 1004".

The setup uses the branch "ffa" from https://github.com/jenswi-linaro/xen.
That's currently the same as the "xen_ffa_v8" branch, but the "ffa" branch
may change later as I update for a new version of the patch set.

[1] https://developer.arm.com/documentation/den0077/latest

Thanks,
Jens

v7->v8:
* Adding "xen/arm: ffa: list current limitations" as requested
* Adding tags to "xen/arm: smccc: add support for SMCCCv1.2 extended
  input/output registers"
* Patch "xen/arm: tee: add a primitive FF-A mediator":
  - Changing license for ffa.h and ffa.c to GPL-2.0-only
  - Avoiding IS_ENABLED() in the constant FFA_NR_FUNCS
  - Accepting a version 1.1 SPMC only to keep things simple
  - Removing 32-bit support and supporting only 64-bit to keep things simple
* Patch "tools: add Arm FF-A mediator"
  - Adding Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
  - Adding LIBXL_HAVE_BUILDINFO_ARCH_ARM_TEE_FFA for the "ffa" value 
    in arch_arm.tee
* Patch "docs: add Arm FF-A mediator"
  - Fixing a spelling error
  - Moving the patch last in the series
* Patch "xen/arm: ffa: add remaining SMC function IDs"
  - Adding Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
  - Renaming the define FFA_MSG_RUN to FFA_RUN to match the specification
* Patch "xen/arm: ffa: add flags for FFA_PARTITION_INFO_GET"
  - Updating the comment describing the flags for FFA_PARTITION_INFO_GET
* Patch "xen/arm: ffa: add defines for framework direct request/response
  messages"
  - Updating the comment describing the flags for MSG_SEND_DIRECT_REQ/RESP
* Patch "xen/arm: ffa: enforce dependency on 4k pages"
  - Updating title of patch
  - Adding Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
* Patch "xen/arm: ffa: add support for FFA_ID_GET"
  - In ffa_domain_init(), check that domain_id isn't greater than
    UINT16_MAX to avoid a future potential integer overflow in get_vm_id()
* Patch "xen/arm: ffa: add direct request support"
  - Move preemption (interrupted) parts to a separate patch "xen/arm: ffa:
    support preemption of SP during direct request"
  - Remove loop in handle_msg_send_direct_req() to return any errors
    back to the VM instead of the SP.
* Patch "xen/arm: ffa: map SPMC rx/tx buffers"
  - Adding a FFA_RXTX_PAGE_COUNT define instead of using 1 directly
* New patch "xen/arm: ffa: support preemption of SP during direct request"
* Patch "xen/arm: ffa: send guest events to Secure Partitions"
  - Replacing unsigned int with uint16_t for subscr_vm_created_count and
    subscr_vm_destroyed_count plus the needed range check to see that
    they don't overflow.
* Patch "xen/arm: ffa: support mapping guest RX/TX buffers"
  - Limit the number of pages in VM RX/TX buffers to 32 using a new
    FFA_MAX_RXTX_PAGE_COUNT define.
* Patch "xen/arm: ffa: support guest FFA_PARTITION_INFO_GET"
  - Renaming tx_is_mine to rx_is_free as requested
  - Simplified the FFA_PARTITION_INFO_GET_COUNT_FLAG check in
    handle_partition_info_get()
  - Adding a comment on ownership of the RX buffer
  - Adding the patch "xen/arm: ffa: improve lock granularity" to address
    parts of the locking concerns.
* Patch "xen/arm: move regpair_to_uint64() and uint64_to_regpair() to regs.h"
  - Adding Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
* Patch "xen/arm: ffa: add defines for sharing memory"
  - Fixing reference for FFA_NORMAL_MEM_REG_ATTR and FFA_MEM_ACC_RW
  - Updating description for FFA_MAX_SHM_PAGE_COUNT
* Patch "xen/arm: ffa: add ABI structs for sharing memory"
  - Changing name of the "global_handle" member in struct
    ffa_mem_transaction_* to "handle".
* Patch "xen/arm: ffa: support sharing memory"
  - Use FFA_MEM_SHARE_64 only since we changed to only supporting ARM_64.
  - Rename struct ffa_mem_transaction_x to struct ffa_mem_transaction_int
    as requested.
  - Adding a check that shm->page_count isn't 0 before calling share_shm()
  - Masking return value from FFA_MEM_FRAG_RX to avoid an implicit cast to
    the int32_t returned by ffa_mem_share().
* Patch "xen/arm: ffa: add support to reclaim shared memory"
  - Adding Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
* Patch "xen/arm: ffa: support sharing large memory ranges"
  - Adding comments for struct ffa_ctx
  - Cleaning up and removing the fragmentation state if handle_mem_frag_tx()
    detects an error.
* Adding "xen/arm: ffa: improve lock granularity" to address some of the
  locking concerns.

v6->v7:
* Split some of the larger patches into smaller patches for easier review.
  For instance, the v6 patch "xen/arm: add a primitive FF-A mediator" has
  been replaced with:
  - "xen/arm: add a primitive FF-A mediator"
  - "tools: add Arm FF-A mediator"
  - "docs: add Arm FF-A mediator"
  - "xen/arm: ffa: add remaining SMC function IDs"
* Some small fixes in the error path for handle_mem_share()
* Switched to SPDX for license in new files.
* Fixed comment style issues in
  "xen/arm: smccc: add support for SMCCCv1.2 extended input/output registers"
* Made FFA support UNSUPPORTED in "xen/arm: add a primitive FF-A mediator"
* Replaced ffa_get_call_count() with FFA_NR_FUNCS
* Updated FFA_MAX_SHM_PAGE_COUNT with a formula instead of a value.
* Replaced XEN_ARM_FLAGS_FFA with XEN_DOMCTL_CONFIG_TEE_FFA to minimize impact
  on struct xen_arch_domainconfig. This works because the FF-A mediator and
  the OP-TEE mediator will not be used at the same time by a guest.
* Replaced "ffa" boolean in the guest config with a new "ffa" value to the
  enumeration "tee_type".
* Integrated the FF-A mediator in the TEE mediator framework instead of
  being its own.
* Rebased on staging as of 2023-02-16

v5->v6:
* Updated "xen/arm: move regpair_to_uint64() and uint64_to_regpair() to regs.h"
  commit message and moved the patch right before the patch which needs it.
  Applied Michal Orzel's R-B tag.
* Renamed the guest configuration option "ffa_enabled" to "ffa" and
  updated the description.
* More tools updates in "xen/arm: add a primitive FF-A mediator" with the "ffa"
  option, including golang and ocaml.
* Update ffa_domain_init() to return an error if communication with
  the SPMC can't be established.
* Factored out a ffa_domain_destroy() from ffa_relinquish_resources().
* Added ffa_get_call_count() to give an accurate count of FF-A functions,
  updated in each patch as new FF-A functions are added.
* Added a flags field in struct xen_arch_domainconfig that replaces the
  ffa_enabled field.
* Made check_mandatory_feature() __init
* Replaced a few printk() calls with gprintk() where needed.
* Rebased on staging as of 2022-09-14

V4->v5:
* Added "xen/arm: move regpair_to_uint64() and uint64_to_regpair() to regs.h"
* Added documentation for the "ffa_enabled" guest config flag
* Changed to GPL license for xen/arch/arm/ffa.c
* Added __read_mostly and const where applicable
* Added more describing comments in the code
* Moved list of shared memory object ("ffa_mem_list") into the guest context
  as they are guest specific
* Simplified a few of the simple wrapper functions for SMC to SPMC
* Added a BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE) since the mediator
  currently depends on the page size being the same as FFA_PAGE_SIZE (4k).
* Added a max number of shared memory objects per guest and a max size
  for each shared memory object
* Added helper macros to calculate offsets of different FF-A data structures
  in the communication buffer instead of relying on pointer arithmetic
* Addressed style issues and other comments
* Broke the commit "xen/arm: add FF-A mediator" into multiple parts, trying
  to add a few features at a time as requested
* Added a missing call to rxtx_unmap() in ffa_relinquish_resources()
* Assignment of "ffa_enabled" is kept as is until I have something definitive
  on the type etc.
* Tested with CONFIG_DEBUG=y

v3->v4:
* Missed v3 and sent a v4 instead by mistake.

v2->v3:
* Generates offsets into struct arm_smccc_1_2_regs with asm-offsets.c in
  order to avoid hard-coded offsets in the assembly function
  arm_smccc_1_2_smc()
* Adds an entry in SUPPORT.md on the FF-A status
* Adds a configuration variable "ffa_enabled" to tell if FF-A should be
  enabled for a particular domU guest
* Moves the ffa_frag_list for fragmented memory share requests into
  struct ffa_ctx instead to keep it per guest in order to avoid mixups
  and simplify locking
* Adds a spinlock to struct ffa_ctx for per guest locking
* Addressing style issues and suggestions
* Uses FFA_FEATURES to check that all the needed features are available
  before initializing the mediator
* Rebased on staging as of 2022-06-20

v1->v2:
* Rebased on staging to resolve some merge conflicts as requested



Jens Wiklander (22):
  xen/arm: smccc: add support for SMCCCv1.2 extended input/output
    registers
  xen/arm: tee: add a primitive FF-A mediator
  tools: add Arm FF-A mediator
  xen/arm: ffa: add remaining SMC function IDs
  xen/arm: ffa: add flags for FFA_PARTITION_INFO_GET
  xen/arm: ffa: add defines for framework direct request/response
    messages
  xen/arm: ffa: enforce dependency on 4k pages
  xen/arm: ffa: add support for FFA_ID_GET
  xen/arm: ffa: add direct request support
  xen/arm: ffa: map SPMC rx/tx buffers
  xen/arm: ffa: send guest events to Secure Partitions
  xen/arm: ffa: support mapping guest RX/TX buffers
  xen/arm: ffa: support guest FFA_PARTITION_INFO_GET
  xen/arm: move regpair_to_uint64() and uint64_to_regpair() to regs.h
  xen/arm: ffa: add defines for sharing memory
  xen/arm: ffa: add ABI structs for sharing memory
  xen/arm: ffa: support sharing memory
  xen/arm: ffa: add support to reclaim shared memory
  xen/arm: ffa: support sharing large memory ranges
  xen/arm: ffa: improve lock granularity
  xen/arm: ffa: list current limitations
  docs: add Arm FF-A mediator

 SUPPORT.md                         |    8 +
 docs/man/xl.cfg.5.pod.in           |   15 +
 tools/include/libxl.h              |    5 +
 tools/libs/light/libxl_arm.c       |    3 +
 tools/libs/light/libxl_types.idl   |    3 +-
 xen/arch/arm/arm64/asm-offsets.c   |    9 +
 xen/arch/arm/arm64/smc.S           |   42 +
 xen/arch/arm/include/asm/psci.h    |    4 +
 xen/arch/arm/include/asm/regs.h    |   12 +
 xen/arch/arm/include/asm/smccc.h   |   40 +
 xen/arch/arm/include/asm/tee/ffa.h |   35 +
 xen/arch/arm/tee/Kconfig           |   11 +
 xen/arch/arm/tee/Makefile          |    1 +
 xen/arch/arm/tee/ffa.c             | 1950 ++++++++++++++++++++++++++++
 xen/arch/arm/tee/optee.c           |   11 -
 xen/arch/arm/vsmc.c                |   19 +-
 xen/include/public/arch-arm.h      |    1 +
 17 files changed, 2153 insertions(+), 16 deletions(-)
 create mode 100644 xen/arch/arm/include/asm/tee/ffa.h
 create mode 100644 xen/arch/arm/tee/ffa.c

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:15:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520465.808083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBO-0002cz-Ge; Thu, 13 Apr 2023 07:15:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520465.808083; Thu, 13 Apr 2023 07:15:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBO-0002bP-Bd; Thu, 13 Apr 2023 07:15:50 +0000
Received: by outflank-mailman (input) for mailman id 520465;
 Thu, 13 Apr 2023 07:15:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrBM-0001gq-PI
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:15:48 +0000
Received: from mail-lf1-x134.google.com (mail-lf1-x134.google.com
 [2a00:1450:4864:20::134])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 036225ef-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:15:47 +0200 (CEST)
Received: by mail-lf1-x134.google.com with SMTP id d7so28946432lfj.3
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:47 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 036225ef-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370147; x=1683962147;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=09FoNIBfx/AhxypRHIQzVNktgeSQpjk+3JcEVMVKSbQ=;
        b=mL3AGTAscDTr06MSOYuyOae/9mF5PB5pXeLkrsbHjr7olu0YfJ8Jy5+7ZIUDQG0x/e
         r+bvqpbM2txxDX1PdmXV6Rr8mr0rp8uqqqm+9VxnBFE3x1TPAjZcVMskqm3G5xnlZ4gW
         v4ak64XkvDUjMOkYgUY5KPCsFxiM3UlVYGmackYnC6HlqAoKnUtfMpv/khYJ1MMJbU66
         MD1R1MsSPgAJawf26IakdMQ1Lf8+EQDHRdEVo7NNBWl2ioMhR6D5k88Ru88KE+vJeraX
         RC2ZoqUjsyZoDJsQlj6lFscE96hlR5xbIZW7FDy0ZzDIDVtI+fLqpUBuQ+03HRjbQezk
         7aIQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370147; x=1683962147;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=09FoNIBfx/AhxypRHIQzVNktgeSQpjk+3JcEVMVKSbQ=;
        b=N8owYxrnxmRUUnnJAyx1008JKere+IsFKpEUa7DGvNYVVtrm6kWxsn5LcbRXnaRkFa
         JFmW6jde8GB5X5RkX0kU1Pw/wbcbntju8ACPjpkYckQNWW/UnYlOymCB2rQLhARde5nj
         RoDbEI370RCR9a340W5KJfQnrXb9NJa5rvuH9flhpzOWS29kMeY8QtnhchbgX72bcgX5
         YeQbxP3hHQRM1+WI15RHCXnWgVKUJF+1xIaDpXgkFujN8BOdGFkpvPrDgQGC8NXqX5YP
         qknOovM2OKPWc8KXsK3GN5ePeVuuJajiA7mxp+SXAqFGPeM56AAJ5kuat7q2ItM3MMEa
         oGJA==
X-Gm-Message-State: AAQBX9csACw1OB0PX2ZJxmTyybv3UHfiGLylYibClV86oOrsxqbfn+Us
	ta09cTsEdozi6m0ipf1lPGfZfLF6UOM9LUdhQ8Y=
X-Google-Smtp-Source: AKy350Zu4mlvVFjzT3PBZkyGciR4NH8yu2tbn2rzyYy9BZQSqAIwB2Q8kebTAuck9yLTVDRzxW+ibg==
X-Received: by 2002:ac2:4199:0:b0:4ec:a218:4f8f with SMTP id z25-20020ac24199000000b004eca2184f8fmr509860lfh.8.1681370147122;
        Thu, 13 Apr 2023 00:15:47 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 05/22] xen/arm: ffa: add flags for FFA_PARTITION_INFO_GET
Date: Thu, 13 Apr 2023 09:14:07 +0200
Message-Id: <20230413071424.3273490-6-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Defines flags used for the function FFA_PARTITION_INFO_GET.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index ba0942e76993..72e7d0575de5 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -57,6 +57,40 @@
 #define FFA_MY_VERSION          MAKE_FFA_VERSION(FFA_MY_VERSION_MAJOR, \
                                                  FFA_MY_VERSION_MINOR)
 
+/*
+ * Flags to determine partition properties in FFA_PARTITION_INFO_GET return
+ * message:
+ * BIT(0): Supports receipt of direct requests
+ * BIT(1): Can send direct requests
+ * BIT(2): Can send and receive indirect messages
+ * BIT(3): Supports receipt of notifications
+ * Bits[5:4]: Type of partition ID (PE endpoint ID, SEPID, or auxiliary ID)
+ * BIT(6): Partition must be informed about each VM that is created by
+ *         the Hypervisor
+ * BIT(7): Partition must be informed about each VM that is destroyed by
+ *         the Hypervisor
+ * BIT(8): Partition runs in the AArch64 execution state, else the
+ *         AArch32 execution state
+ */
+#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0, U)
+#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1, U)
+#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2, U)
+#define FFA_PART_PROP_RECV_NOTIF        BIT(3, U)
+#define FFA_PART_PROP_IS_MASK           (3U << 4)
+#define FFA_PART_PROP_IS_PE_ID          (0U << 4)
+#define FFA_PART_PROP_IS_SEPID_INDEP    (1U << 4)
+#define FFA_PART_PROP_IS_SEPID_DEP      (2U << 4)
+#define FFA_PART_PROP_IS_AUX_ID         (3U << 4)
+#define FFA_PART_PROP_NOTIF_CREATED     BIT(6, U)
+#define FFA_PART_PROP_NOTIF_DESTROYED   BIT(7, U)
+#define FFA_PART_PROP_AARCH64_STATE     BIT(8, U)
+
+/*
+ * Flag used as parameter to FFA_PARTITION_INFO_GET to return partition
+ * count only.
+ */
+#define FFA_PARTITION_INFO_GET_COUNT_FLAG BIT(0, U)
+
 /* Function IDs */
 #define FFA_ERROR                       0x84000060U
 #define FFA_SUCCESS_32                  0x84000061U
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:15:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520462.808052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBM-0001oM-9N; Thu, 13 Apr 2023 07:15:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520462.808052; Thu, 13 Apr 2023 07:15:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBM-0001mg-2c; Thu, 13 Apr 2023 07:15:48 +0000
Received: by outflank-mailman (input) for mailman id 520462;
 Thu, 13 Apr 2023 07:15:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrBL-0001gq-7R
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:15:47 +0000
Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com
 [2a00:1450:4864:20::12a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 027433b3-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:15:46 +0200 (CEST)
Received: by mail-lf1-x12a.google.com with SMTP id 26so1146103lfq.11
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:46 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 027433b3-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370145; x=1683962145;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jvHKHuXznPMzD/YrdwjpNkW0g0k7uCPv2OF4tumTIDg=;
        b=DxBHQVm8UihnJHYKIUCIRR4mhys8QHkTlgKUvBv8RJDmjDYQTMTPuDqqbViOSBsbn2
         TNVIM8CmIpsufjdLysMS5NUS4UnMM5V/uwkNyVXARSx0YudIWg47FsTbgoiUHhKGzk3i
         /j0A6GHghGov3HPZGsQSdoKqlqR3bH34EeaysF2xnw5mjSTa1CzzPYQ+LoQsutJ7NPkm
         6jQGzFuqFkX0M/O44D7zzJS3Y0vrkJB4s05OQBzsCMDuO+ThxyZlM1inrgAZwLbKPMF/
         iFZPHdsAekJLRjDwylEiL1Jswv+jzPIiPlylHHf0VQR5MJ9dXMtSfVePZ2PtzRhGlflx
         CJmg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370145; x=1683962145;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=jvHKHuXznPMzD/YrdwjpNkW0g0k7uCPv2OF4tumTIDg=;
        b=IEZXGBrwI4SCipDzuZ7WcbEU9GOYlKr77+YwzGTGYg4MvvTH/qlsJXieUzftyYSsbw
         jleNGuGrJNogSBvSfiKt8EEimCVscvj/Qgf6ov3V1PeIHkHHoeHzow4zpc+xJC0XbUDR
         cJQvSWxV9eNLqdrzaPYkwO2Rq+1Fz1XrEio+4GEDHJDflFmWXlpGQIb1kWrT+sPeA3gJ
         pkEA80J5qgRI3im59hQYXEan/hVPu+V3vaZxkGicbqvW/MtyVDQgYwd/LgpMO4E0cu+h
         jsTRfghPXh8P8npSR3zqyADurHW/sWx7/MR9TQz359X3RAjMG+Ri6reVqW7iscaRVN1h
         omDA==
X-Gm-Message-State: AAQBX9fdkFzylwaxLpvgcLSP3X6mwzU0fT4zQXS+taD9t8po/RoWn3QQ
	odBKi9OCwH7njn5J6svPr/x26vb+NNZkOf405tk=
X-Google-Smtp-Source: AKy350a8U/l3Qa0bSPOl21Sj+Fuz5NilmrG2kLfKY3kCeRxYyC5CV0tP0CCqVlw1bSHkvy9bxluygQ==
X-Received: by 2002:ac2:43bc:0:b0:4ea:f526:5bea with SMTP id t28-20020ac243bc000000b004eaf5265beamr386992lfl.27.1681370145684;
        Thu, 13 Apr 2023 00:15:45 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 03/22] tools: add Arm FF-A mediator
Date: Thu, 13 Apr 2023 09:14:05 +0200
Message-Id: <20230413071424.3273490-4-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds a new "ffa" value to the enumeration "tee_type" to indicate whether a
guest is trusted to use FF-A.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 tools/include/libxl.h            | 5 +++++
 tools/libs/light/libxl_arm.c     | 3 +++
 tools/libs/light/libxl_types.idl | 3 ++-
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index cfa1a191318c..7c48e8d8472e 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -283,6 +283,11 @@
  */
 #define LIBXL_HAVE_BUILDINFO_ARCH_ARM_TEE 1
 
+/*
+ * The arch_arm.tee field in libxl_domain_build_info has the "ffa" value.
+ */
+#define LIBXL_HAVE_BUILDINFO_ARCH_ARM_TEE_FFA 1
+
 /*
  * LIBXL_HAVE_SOFT_RESET indicates that libxl supports performing
  * 'soft reset' for domains and there is 'soft_reset' shutdown reason
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index ddc7b2a15975..601890dda1ce 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -205,6 +205,9 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
     case LIBXL_TEE_TYPE_OPTEE:
         config->arch.tee_type = XEN_DOMCTL_CONFIG_TEE_OPTEE;
         break;
+    case LIBXL_TEE_TYPE_FFA:
+        config->arch.tee_type = XEN_DOMCTL_CONFIG_TEE_FFA;
+        break;
     default:
         LOG(ERROR, "Unknown TEE type %d",
             d_config->b_info.tee);
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index c10292e0d7e3..1a680d0f8839 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -520,7 +520,8 @@ libxl_gic_version = Enumeration("gic_version", [
 
 libxl_tee_type = Enumeration("tee_type", [
     (0, "none"),
-    (1, "optee")
+    (1, "optee"),
+    (2, "ffa"),
     ], init_val = "LIBXL_TEE_TYPE_NONE")
 
 libxl_rdm_reserve = Struct("rdm_reserve", [
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:55 2023
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 08/22] xen/arm: ffa: add support for FFA_ID_GET
Date: Thu, 13 Apr 2023 09:14:10 +0200
Message-Id: <20230413071424.3273490-9-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add support for the FF-A function FFA_ID_GET, which returns the ID of the
calling client.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 90ed71cbfda3..f129879c5b81 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -181,6 +181,12 @@ static bool ffa_get_version(uint32_t *vers)
     return true;
 }
 
+static uint16_t get_vm_id(const struct domain *d)
+{
+    /* +1 since 0 is reserved for the hypervisor in FF-A */
+    return d->domain_id + 1;
+}
+
 static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1,
                      register_t v2, register_t v3, register_t v4, register_t v5,
                      register_t v6, register_t v7)
@@ -195,6 +201,12 @@ static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1,
         set_user_reg(regs, 7, v7);
 }
 
+static void set_regs_success(struct cpu_user_regs *regs, uint32_t w2,
+                             uint32_t w3)
+{
+    set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0);
+}
+
 static void handle_version(struct cpu_user_regs *regs)
 {
     struct domain *d = current->domain;
@@ -224,6 +236,9 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
     case FFA_VERSION:
         handle_version(regs);
         return true;
+    case FFA_ID_GET:
+        set_regs_success(regs, get_vm_id(d), 0);
+        return true;
 
     default:
         gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
@@ -237,6 +252,12 @@ static int ffa_domain_init(struct domain *d)
 
     if ( !ffa_version )
         return -ENODEV;
+    /*
+     * We can't use the last possible domain ID or get_vm_id() would cause
+     * an overflow.
+     */
+    if ( d->domain_id >= UINT16_MAX )
+        return -ERANGE;
 
     ctx = xzalloc(struct ffa_ctx);
     if ( !ctx )
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:55 2023
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 04/22] xen/arm: ffa: add remaining SMC function IDs
Date: Thu, 13 Apr 2023 09:14:06 +0200
Message-Id: <20230413071424.3273490-5-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the remaining SMC function IDs from the FF-A 1.1 specification.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/tee/ffa.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index aaf74c287aef..ba0942e76993 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -60,7 +60,41 @@
 /* Function IDs */
 #define FFA_ERROR                       0x84000060U
 #define FFA_SUCCESS_32                  0x84000061U
+#define FFA_SUCCESS_64                  0xC4000061U
+#define FFA_INTERRUPT                   0x84000062U
 #define FFA_VERSION                     0x84000063U
+#define FFA_FEATURES                    0x84000064U
+#define FFA_RX_ACQUIRE                  0x84000084U
+#define FFA_RX_RELEASE                  0x84000065U
+#define FFA_RXTX_MAP_32                 0x84000066U
+#define FFA_RXTX_MAP_64                 0xC4000066U
+#define FFA_RXTX_UNMAP                  0x84000067U
+#define FFA_PARTITION_INFO_GET          0x84000068U
+#define FFA_ID_GET                      0x84000069U
+#define FFA_SPM_ID_GET                  0x84000085U
+#define FFA_MSG_WAIT                    0x8400006BU
+#define FFA_MSG_YIELD                   0x8400006CU
+#define FFA_RUN                         0x8400006DU
+#define FFA_MSG_SEND2                   0x84000086U
+#define FFA_MSG_SEND_DIRECT_REQ_32      0x8400006FU
+#define FFA_MSG_SEND_DIRECT_REQ_64      0xC400006FU
+#define FFA_MSG_SEND_DIRECT_RESP_32     0x84000070U
+#define FFA_MSG_SEND_DIRECT_RESP_64     0xC4000070U
+#define FFA_MEM_DONATE_32               0x84000071U
+#define FFA_MEM_DONATE_64               0xC4000071U
+#define FFA_MEM_LEND_32                 0x84000072U
+#define FFA_MEM_LEND_64                 0xC4000072U
+#define FFA_MEM_SHARE_32                0x84000073U
+#define FFA_MEM_SHARE_64                0xC4000073U
+#define FFA_MEM_RETRIEVE_REQ_32         0x84000074U
+#define FFA_MEM_RETRIEVE_REQ_64         0xC4000074U
+#define FFA_MEM_RETRIEVE_RESP           0x84000075U
+#define FFA_MEM_RELINQUISH              0x84000076U
+#define FFA_MEM_RECLAIM                 0x84000077U
+#define FFA_MEM_FRAG_RX                 0x8400007AU
+#define FFA_MEM_FRAG_TX                 0x8400007BU
+#define FFA_MSG_SEND                    0x8400006EU
+#define FFA_MSG_POLL                    0x8400006AU
 
 struct ffa_ctx {
     /* FF-A version used by the guest */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:55 2023
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Luca Fancellu <luca.fancellu@arm.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [XEN PATCH v8 01/22] xen/arm: smccc: add support for SMCCCv1.2 extended input/output registers
Date: Thu, 13 Apr 2023 09:14:03 +0200
Message-Id: <20230413071424.3273490-2-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SMCCC v1.2 [1] allows AArch64 to use x0-x17 as both parameter and result
registers for the SMC and HVC instructions.

The Arm Firmware Framework for Armv8-A specification makes use of x0-x7
as parameter and result registers.

Add a new interface to support this extended set of input/output
registers.

This is based on 3fdc0cb59d97 ("arm64: smccc: Add support for SMCCCv1.2
extended input/output registers") by Sudeep Holla from the Linux kernel.

The SMCCC version reported to the VM is bumped to 1.2 in order to support
handling FF-A messages.

[1] https://developer.arm.com/documentation/den0028/c/?lang=en

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/arm64/asm-offsets.c |  9 +++++++
 xen/arch/arm/arm64/smc.S         | 42 ++++++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/smccc.h | 40 ++++++++++++++++++++++++++++++
 xen/arch/arm/vsmc.c              |  2 +-
 4 files changed, 92 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/arm64/asm-offsets.c b/xen/arch/arm/arm64/asm-offsets.c
index 7226cd9b2eb0..7adb67a1b81a 100644
--- a/xen/arch/arm/arm64/asm-offsets.c
+++ b/xen/arch/arm/arm64/asm-offsets.c
@@ -57,6 +57,15 @@ void __dummy__(void)
    BLANK();
    OFFSET(SMCCC_RES_a0, struct arm_smccc_res, a0);
    OFFSET(SMCCC_RES_a2, struct arm_smccc_res, a2);
+   OFFSET(ARM_SMCCC_1_2_REGS_X0_OFFS, struct arm_smccc_1_2_regs, a0);
+   OFFSET(ARM_SMCCC_1_2_REGS_X2_OFFS, struct arm_smccc_1_2_regs, a2);
+   OFFSET(ARM_SMCCC_1_2_REGS_X4_OFFS, struct arm_smccc_1_2_regs, a4);
+   OFFSET(ARM_SMCCC_1_2_REGS_X6_OFFS, struct arm_smccc_1_2_regs, a6);
+   OFFSET(ARM_SMCCC_1_2_REGS_X8_OFFS, struct arm_smccc_1_2_regs, a8);
+   OFFSET(ARM_SMCCC_1_2_REGS_X10_OFFS, struct arm_smccc_1_2_regs, a10);
+   OFFSET(ARM_SMCCC_1_2_REGS_X12_OFFS, struct arm_smccc_1_2_regs, a12);
+   OFFSET(ARM_SMCCC_1_2_REGS_X14_OFFS, struct arm_smccc_1_2_regs, a14);
+   OFFSET(ARM_SMCCC_1_2_REGS_X16_OFFS, struct arm_smccc_1_2_regs, a16);
 }
 
 /*
diff --git a/xen/arch/arm/arm64/smc.S b/xen/arch/arm/arm64/smc.S
index 91bae62dd4d2..fc6b676e2ee3 100644
--- a/xen/arch/arm/arm64/smc.S
+++ b/xen/arch/arm/arm64/smc.S
@@ -27,3 +27,45 @@ ENTRY(__arm_smccc_1_0_smc)
         stp     x2, x3, [x4, #SMCCC_RES_a2]
 1:
         ret
+
+/*
+ * void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
+ *                        struct arm_smccc_1_2_regs *res)
+ */
+ENTRY(arm_smccc_1_2_smc)
+    /* Save `res` and free a GPR that won't be clobbered by SMC call */
+    stp     x1, x19, [sp, #-16]!
+
+    /* Ensure `args` won't be clobbered while loading regs in next step */
+    mov	x19, x0
+
+    /* Load the registers x0 - x17 from the struct arm_smccc_1_2_regs */
+    ldp	x0, x1, [x19, #ARM_SMCCC_1_2_REGS_X0_OFFS]
+    ldp	x2, x3, [x19, #ARM_SMCCC_1_2_REGS_X2_OFFS]
+    ldp	x4, x5, [x19, #ARM_SMCCC_1_2_REGS_X4_OFFS]
+    ldp	x6, x7, [x19, #ARM_SMCCC_1_2_REGS_X6_OFFS]
+    ldp	x8, x9, [x19, #ARM_SMCCC_1_2_REGS_X8_OFFS]
+    ldp	x10, x11, [x19, #ARM_SMCCC_1_2_REGS_X10_OFFS]
+    ldp	x12, x13, [x19, #ARM_SMCCC_1_2_REGS_X12_OFFS]
+    ldp	x14, x15, [x19, #ARM_SMCCC_1_2_REGS_X14_OFFS]
+    ldp	x16, x17, [x19, #ARM_SMCCC_1_2_REGS_X16_OFFS]
+
+    smc #0
+
+    /* Load the `res` from the stack */
+    ldr	x19, [sp]
+
+    /* Store the registers x0 - x17 into the result structure */
+    stp	x0, x1, [x19, #ARM_SMCCC_1_2_REGS_X0_OFFS]
+    stp	x2, x3, [x19, #ARM_SMCCC_1_2_REGS_X2_OFFS]
+    stp	x4, x5, [x19, #ARM_SMCCC_1_2_REGS_X4_OFFS]
+    stp	x6, x7, [x19, #ARM_SMCCC_1_2_REGS_X6_OFFS]
+    stp	x8, x9, [x19, #ARM_SMCCC_1_2_REGS_X8_OFFS]
+    stp	x10, x11, [x19, #ARM_SMCCC_1_2_REGS_X10_OFFS]
+    stp	x12, x13, [x19, #ARM_SMCCC_1_2_REGS_X12_OFFS]
+    stp	x14, x15, [x19, #ARM_SMCCC_1_2_REGS_X14_OFFS]
+    stp	x16, x17, [x19, #ARM_SMCCC_1_2_REGS_X16_OFFS]
+
+    /* Restore original x19 */
+    ldp     xzr, x19, [sp], #16
+    ret
diff --git a/xen/arch/arm/include/asm/smccc.h b/xen/arch/arm/include/asm/smccc.h
index b3dbeecc90ad..1adcd37443c7 100644
--- a/xen/arch/arm/include/asm/smccc.h
+++ b/xen/arch/arm/include/asm/smccc.h
@@ -33,6 +33,7 @@
 
 #define ARM_SMCCC_VERSION_1_0   SMCCC_VERSION(1, 0)
 #define ARM_SMCCC_VERSION_1_1   SMCCC_VERSION(1, 1)
+#define ARM_SMCCC_VERSION_1_2   SMCCC_VERSION(1, 2)
 
 /*
  * This file provides common defines for ARM SMC Calling Convention as
@@ -265,6 +266,45 @@ void __arm_smccc_1_0_smc(register_t a0, register_t a1, register_t a2,
         else                                                    \
             arm_smccc_1_0_smc(__VA_ARGS__);                     \
     } while ( 0 )
+
+/*
+ * struct arm_smccc_1_2_regs - Arguments for or Results from SMC call
+ * @a0-a17 argument values from registers 0 to 17
+ */
+struct arm_smccc_1_2_regs {
+    unsigned long a0;
+    unsigned long a1;
+    unsigned long a2;
+    unsigned long a3;
+    unsigned long a4;
+    unsigned long a5;
+    unsigned long a6;
+    unsigned long a7;
+    unsigned long a8;
+    unsigned long a9;
+    unsigned long a10;
+    unsigned long a11;
+    unsigned long a12;
+    unsigned long a13;
+    unsigned long a14;
+    unsigned long a15;
+    unsigned long a16;
+    unsigned long a17;
+};
+
+/*
+ * arm_smccc_1_2_smc() - make SMC calls
+ * @args: arguments passed via struct arm_smccc_1_2_regs
+ * @res: result values via struct arm_smccc_1_2_regs
+ *
+ * This function is used to make SMC calls following SMC Calling Convention
+ * v1.2 or above. The contents of @args are copied from the structure to
+ * registers prior to the SMC instruction. On return from the SMC
+ * instruction, @res is updated with the contents of the registers.
+ */
+void arm_smccc_1_2_smc(const struct arm_smccc_1_2_regs *args,
+                       struct arm_smccc_1_2_regs *res);
 #endif /* CONFIG_ARM_64 */
 
 #endif /* __ASSEMBLY__ */
diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
index 7335276f3fa1..cd68fa80e98a 100644
--- a/xen/arch/arm/vsmc.c
+++ b/xen/arch/arm/vsmc.c
@@ -85,7 +85,7 @@ static bool handle_arch(struct cpu_user_regs *regs)
     switch ( fid )
     {
     case ARM_SMCCC_VERSION_FID:
-        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_1);
+        set_user_reg(regs, 0, ARM_SMCCC_VERSION_1_2);
         return true;
 
     case ARM_SMCCC_ARCH_FEATURES_FID:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:55 2023
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 07/22] xen/arm: ffa: enforce dependency on 4k pages
Date: Thu, 13 Apr 2023 09:14:09 +0200
Message-Id: <20230413071424.3273490-8-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds a BUILD_BUG_ON() to assert the dependency on 4K pages in the FF-A
mediator, since the current implementation only works if the Xen page
size is the same as the FF-A page size.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/tee/ffa.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index cf7b7545aa03..90ed71cbfda3 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -57,6 +57,16 @@
 #define FFA_MY_VERSION          MAKE_FFA_VERSION(FFA_MY_VERSION_MAJOR, \
                                                  FFA_MY_VERSION_MINOR)
 
+/*
+ * The FF-A specification explicitly works with 4K pages as a measure of
+ * memory size. For example, FFA_RXTX_MAP takes the parameter "RX/TX page
+ * count", which is the number of contiguous 4K pages allocated. Xen may
+ * use a different page size depending on the configuration, so to avoid
+ * confusion with PAGE_SIZE use a special define when it's a page size as
+ * in the FF-A specification.
+ */
+#define FFA_PAGE_SIZE                   SZ_4K
+
 /*
  * Flags and field values used for the MSG_SEND_DIRECT_REQ/RESP:
  * BIT(31): Framework or partition message
@@ -256,6 +266,17 @@ static bool ffa_probe(void)
     unsigned int major_vers;
     unsigned int minor_vers;
 
+    /*
+     * FF-A often works in units of 4K pages and currently it's assumed
+     * that we can map memory using that granularity. See also the comment
+     * above the FFA_PAGE_SIZE define.
+     *
+     * It is possible to support a PAGE_SIZE larger than 4K in Xen, but
+     * until that is fully handled in this code make sure that we only use
+     * 4K page sizes.
+     */
+    BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE);
+
     /*
      * psci_init_smccc() updates this value with what's reported by EL-3
      * or secure world.
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:15:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520466.808099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBP-00035F-S2; Thu, 13 Apr 2023 07:15:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520466.808099; Thu, 13 Apr 2023 07:15:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBP-000347-Kb; Thu, 13 Apr 2023 07:15:51 +0000
Received: by outflank-mailman (input) for mailman id 520466;
 Thu, 13 Apr 2023 07:15:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrBN-0001gq-PK
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:15:49 +0000
Received: from mail-lf1-x12e.google.com (mail-lf1-x12e.google.com
 [2a00:1450:4864:20::12e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 03ce63d4-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:15:48 +0200 (CEST)
Received: by mail-lf1-x12e.google.com with SMTP id a23so18385405lfk.4
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:48 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03ce63d4-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370148; x=1683962148;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=4h8RYyh5bzZRCknS4UVtSZ4pAXzS4T4Tgcv5Vo44wYc=;
        b=A82X0qygBUotlb7DfIfBGFrNRMxjB6Hi05AXYcE8ng1UX27OkAmm6Q/PFJxKqveD7i
         EB7UqFSYK0P5TZDMiPSPv/ixIlCi6gshIcsvr7Dutopp5imfc1DZBqnTQNSSjN/WYMTp
         BIUVVW8yS8fksYfPJ78t6mQ2SCX1iQYa+vYfY6LH/TKAVAuWC4wzBbR+2A+OmxLuRZz6
         mxdz/AmtFt+RH4qdWt053i/mFucNmnjypNZTxu8RScA4pLwdig6WHhpbJGhtv5phInhY
         tiIlH6/Y1/KbHDIyudkgxuxvjt01vEIN8BXPdWTWTInkwAJQ+KiirwVJC39pozeRQLbi
         S94g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370148; x=1683962148;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=4h8RYyh5bzZRCknS4UVtSZ4pAXzS4T4Tgcv5Vo44wYc=;
        b=df8AFcW20vdiZnBhTQqquMjtlsaonyiNq1TYPBe6ua/B+JiNx5EP51xcBPj0YqL7ZX
         DkNT8s8taohm1rIaxRwGWohLS2El/Z/1RHLYd6rX5+f7WjfD83ZNeOaxDkJ8SRB7Ufib
         HbPZf9tb0LCqzQ0ixU1mVKIsIWXdQ/pjNIQN9ssdL3kjU5seflZFV0rcwoB/i4xNYz5X
         XHiWNwrDKKm26bZMwlbQYvTL2fbpJD6KdGoQnkWiKK8lE18PJk+oo6xto0G5qE/jlMVN
         Azw5MdoOyzk4DsFxvP7CBLAk/fnVfhGnwlgpinPGdbzHCPb/YZN2JUNEfAIurTxLJ9dz
         Bydg==
X-Gm-Message-State: AAQBX9e30JWBemLWUZBobmldPHaJmWeZDED+Tan5ma3DoIxHYHwpXtyK
	W2YAyIZvKl/tn4DnAdyjeg+0XMt3vHcPED1BOyE=
X-Google-Smtp-Source: AKy350beH+O+9f6FocX15Q6eI3oQwCr9xsNRMnzQDFyL45vH8snBqVe9+m1d9CZ9/Az5vOcXP12/uA==
X-Received: by 2002:a05:6512:3902:b0:4e8:45d5:53bf with SMTP id a2-20020a056512390200b004e845d553bfmr548369lfu.40.1681370147940;
        Thu, 13 Apr 2023 00:15:47 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 06/22] xen/arm: ffa: add defines for framework direct request/response messages
Date: Thu, 13 Apr 2023 09:14:08 +0200
Message-Id: <20230413071424.3273490-7-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds defines for framework direct request/response messages.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 72e7d0575de5..cf7b7545aa03 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -57,6 +57,19 @@
 #define FFA_MY_VERSION          MAKE_FFA_VERSION(FFA_MY_VERSION_MAJOR, \
                                                  FFA_MY_VERSION_MINOR)
 
+/*
+ * Flags and field values used for the MSG_SEND_DIRECT_REQ/RESP:
+ * BIT(31): Framework or partition message
+ * BIT(7-0): Message type for framework messages
+ */
+#define FFA_MSG_FLAG_FRAMEWORK          BIT(31, U)
+#define FFA_MSG_TYPE_MASK               0xFFU
+#define FFA_MSG_PSCI                    0x0U
+#define FFA_MSG_SEND_VM_CREATED         0x4U
+#define FFA_MSG_RESP_VM_CREATED         0x5U
+#define FFA_MSG_SEND_VM_DESTROYED       0x6U
+#define FFA_MSG_RESP_VM_DESTROYED       0x7U
+
 /*
  * Flags to determine partition properties in FFA_PARTITION_INFO_GET return
  * message:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:15:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520470.808131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBT-0003qr-JK; Thu, 13 Apr 2023 07:15:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520470.808131; Thu, 13 Apr 2023 07:15:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBS-0003op-VB; Thu, 13 Apr 2023 07:15:54 +0000
Received: by outflank-mailman (input) for mailman id 520470;
 Thu, 13 Apr 2023 07:15:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrBQ-0001gq-6z
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:15:52 +0000
Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com
 [2a00:1450:4864:20::135])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 05beb46c-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:15:51 +0200 (CEST)
Received: by mail-lf1-x135.google.com with SMTP id t14so17730665lft.7
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:51 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05beb46c-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370151; x=1683962151;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=pRYh9rViyFOvZCJSQQWOLQPi8YGZ6hqxd9UhBA1/mVQ=;
        b=bGID0BM3IKCyBs9Fb+1qCm7grvNZP5aDga9QzxfN/qFX7WEy7SRlMd81xJeklLXa7J
         98uZFaWKBlZ3aFWumgJ/uZm5Fmhaip+2yGt/pdwc8BmshtqAsIBEiwkiBfwCLodxC5Oo
         X6VUUVdaDUNnLCB+ftfVskw2RB71YMwmk3xdtG+aU+C/2rZ3VRdEPfhxQzCRvT2nq885
         W4keD+/B6xfHT1Ttr9il2iuJThDE+wLRp29p9vmrnSem6prPLe9+ZZmFzi4zJhXyPuSa
         tFRVihRNqdtIPPcJBrgZYbvZmCvqC2AQ4P95iHs0304lx5mSn/eFR8VCw6FYjbFopYR7
         Z51A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370151; x=1683962151;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=pRYh9rViyFOvZCJSQQWOLQPi8YGZ6hqxd9UhBA1/mVQ=;
        b=KjyzWcSz12Sy8vDlUv5EmF55NXWUgaIGD1EPXix8u68GoNRnFW+uVxDGN8YV9Z87EQ
         Hb0PBURkTn6cGt+aZjNDGDm4VQLqxeDvHyv9T80Z675e24jg8gxVFiE0matYkj7kIkZz
         yaUAiURiMCLR4navEegL1JxAUoi0APf2/s5IoXtVVLOoWKhOOTxJHTPZ0veaNce7YWOV
         XPQiQwN6eTHGjQLiapTPxv43mHUb8liIwiSzkUCglxdTE4ohNf8MTAOhGx6c9qI8sX37
         gpWL8NwMxCYKJJs5uFmKG/ed9pelV36K9eM9r9LGrmnWpOsIZhGRqNCfG8PIYeA3u3FS
         c8xQ==
X-Gm-Message-State: AAQBX9eHO6RoM7zVIUtzpN4JlwRBa2nL0rQ2MurYajoGcywaCV6JgxPF
	5ZThvJwYXlNr2Vd7qYHYbsQkzntXGloDs+yTcow=
X-Google-Smtp-Source: AKy350Zl091Qic1jeEJAh4/nzkT6NnQXBmuUhjxdulYlyB0Q3Uok5N2ISXaclmkeNym5ht4Hh/RUpQ==
X-Received: by 2002:a19:f602:0:b0:4eb:3f80:3ca3 with SMTP id x2-20020a19f602000000b004eb3f803ca3mr471078lfe.48.1681370151154;
        Thu, 13 Apr 2023 00:15:51 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 10/22] xen/arm: ffa: map SPMC rx/tx buffers
Date: Thu, 13 Apr 2023 09:14:12 +0200
Message-Id: <20230413071424.3273490-11-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When initializing the FF-A mediator, map the RX and TX buffers shared
with the SPMC.

These buffers are later used to transmit data that cannot be passed in
registers only.

Adds a check that the SPMC supports the needed FF-A features
FFA_RXTX_MAP_64 and FFA_RXTX_UNMAP.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 50 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 49 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index f2cce955d981..054121fe4321 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -12,6 +12,7 @@
 #include <xen/errno.h>
 #include <xen/init.h>
 #include <xen/lib.h>
+#include <xen/mm.h>
 #include <xen/sched.h>
 #include <xen/sizes.h>
 #include <xen/types.h>
@@ -67,6 +68,12 @@
  */
 #define FFA_PAGE_SIZE                   SZ_4K
 
+/*
+ * The number of pages used for each of the RX and TX buffers shared with
+ * the SPMC.
+ */
+#define FFA_RXTX_PAGE_COUNT             1
+
 /*
  * Flags and field values used for the MSG_SEND_DIRECT_REQ/RESP:
  * BIT(31): Framework or partition message
@@ -161,6 +168,13 @@ struct ffa_ctx {
 /* Negotiated FF-A version to use with the SPMC */
 static uint32_t ffa_version __ro_after_init;
 
+/*
+ * Our rx/tx buffers shared with the SPMC. FFA_RXTX_PAGE_COUNT is the
+ * number of pages used in each of these buffers.
+ */
+static void *ffa_rx __read_mostly;
+static void *ffa_tx __read_mostly;
+
 static bool ffa_get_version(uint32_t *vers)
 {
     const struct arm_smccc_1_2_regs arg = {
@@ -231,6 +245,12 @@ static bool check_mandatory_feature(uint32_t id)
     return !ret;
 }
 
+static int32_t ffa_rxtx_map(paddr_t tx_addr, paddr_t rx_addr,
+                            uint32_t page_count)
+{
+    return ffa_simple_call(FFA_RXTX_MAP_64, tx_addr, rx_addr, page_count, 0);
+}
+
 static uint16_t get_vm_id(const struct domain *d)
 {
     /* +1 since 0 is reserved for the hypervisor in FF-A */
@@ -389,6 +409,7 @@ static int ffa_relinquish_resources(struct domain *d)
 static bool ffa_probe(void)
 {
     uint32_t vers;
+    int e;
     unsigned int major_vers;
     unsigned int minor_vers;
 
@@ -435,12 +456,39 @@ static bool ffa_probe(void)
      * TODO save result of checked features and use that information to
      * accept or reject requests from guests.
      */
-    if ( !check_mandatory_feature(FFA_MSG_SEND_DIRECT_REQ_32) )
+    if (
+         !check_mandatory_feature(FFA_RXTX_MAP_64) ||
+         !check_mandatory_feature(FFA_RXTX_UNMAP) ||
+         !check_mandatory_feature(FFA_MSG_SEND_DIRECT_REQ_32) )
+        return false;
+
+    ffa_rx = alloc_xenheap_pages(get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0);
+    if ( !ffa_rx )
         return false;
 
+    ffa_tx = alloc_xenheap_pages(get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0);
+    if ( !ffa_tx )
+        goto err_free_ffa_rx;
+
+    e = ffa_rxtx_map(__pa(ffa_tx), __pa(ffa_rx), FFA_RXTX_PAGE_COUNT);
+    if ( e )
+    {
+        printk(XENLOG_ERR "ffa: Failed to map rxtx: error %d\n", e);
+        goto err_free_ffa_tx;
+    }
     ffa_version = vers;
 
     return true;
+
+err_free_ffa_tx:
+    free_xenheap_pages(ffa_tx, 0);
+    ffa_tx = NULL;
+err_free_ffa_rx:
+    free_xenheap_pages(ffa_rx, 0);
+    ffa_rx = NULL;
+    ffa_version = 0;
+
+    return false;
 }
 
 static const struct tee_mediator_ops ffa_ops =
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:15:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520471.808140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBU-0004Gx-SM; Thu, 13 Apr 2023 07:15:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520471.808140; Thu, 13 Apr 2023 07:15:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBU-0004EK-Ir; Thu, 13 Apr 2023 07:15:56 +0000
Received: by outflank-mailman (input) for mailman id 520471;
 Thu, 13 Apr 2023 07:15:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrBR-0001gq-5d
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:15:53 +0000
Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com
 [2a00:1450:4864:20::135])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 063a7a26-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:15:52 +0200 (CEST)
Received: by mail-lf1-x135.google.com with SMTP id t14so17730699lft.7
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:52 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 063a7a26-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370152; x=1683962152;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LUHuc1SEEBwu3k5oaDgiHqlkqyD0euvjsOMhi6KaWmA=;
        b=rHpQDktneg32sQ9YVbFF0X51AsyYDiifK58Xl6+ydvoAcCLhMgdnDWl5hEGDcekivh
         zeU7sI6uJZ0O1dO9+Udg6oabVH15iMsF5vRxpKMc75/sBa9WFlfpQ1bCPxEa7xSkIM41
         qWEfgPaFjFmsHbHNbx/ePFpnjGMSfJQeae9lCw8W5ZBfN/Vv+CjzSngNOnlG0dM6D8rP
         N+02TE1AhUSXP4+QOnzMk0Y77sKVXrZPluGGccgF1z6hqOk42O8xPvqZUR0Z9zCXxaI3
         QWuNqNQsFMJPmayiVuuNOOBtt6U9NrzW1Thb4ojCGX5+VdIJnvPNfoWfATq+8NaW4//s
         8CBg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370152; x=1683962152;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=LUHuc1SEEBwu3k5oaDgiHqlkqyD0euvjsOMhi6KaWmA=;
        b=KvtpL24WaElCQw44odIJDkUwgbqnUSiRBMCBRvRYrCu755pRD8MU+jyuGO1fVMat8y
         FpAA+M6aFgLWGWikALCUDSnE5KKwPX5m0ijPhHQDEXFkW1zvs35B0N8AjTu+tdLgDPem
         ukjFKyTrZDOKrhFQtysDDp02ZGCcLAEknDCmBCaU0MqOd1wSYkhRSe1Qm4ICYTwfRQq7
         2KJ9eH5sSS5h0bWikZVJ8n/rPFK5nbsT4oRjD5pBX9ap+Y4hSS2UILbSFW7JQsRkf3Xt
         pAkN8dMu6ka1WhFrRERt2fA5kYkb2D5jSTLkWMXj4xJ7nQpxNZjMctLbPrvXYaVz5Bd0
         mlwA==
X-Gm-Message-State: AAQBX9d3JuggbsSJ+DXmhGXzTfEENEmUDp5/scjWmNeeyfT4HrVvm16l
	hsZgsrtYDX+e6AL4avVVT3bwAWMFzsKbz4Q1MqA=
X-Google-Smtp-Source: AKy350Y1XOnbdxrY/+pdsHCDPtT49ARaxmuTprdYzygYwrCh8xdQWpD3Nq34I2yYJHU8uH8uB6T+CA==
X-Received: by 2002:ac2:5d4e:0:b0:4eb:c18:efae with SMTP id w14-20020ac25d4e000000b004eb0c18efaemr591339lfd.17.1681370151879;
        Thu, 13 Apr 2023 00:15:51 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 11/22] xen/arm: ffa: send guest events to Secure Partitions
Date: Thu, 13 Apr 2023 09:14:13 +0200
Message-Id: <20230413071424.3273490-12-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The FF-A specification defines framework messages sent as direct
requests when certain events occur, for instance when a VM (guest) is
created or destroyed. Only SPs which have subscribed to these events
will receive them. An SP can subscribe to these messages in its
partition properties.

Adds a check that the SPMC supports the needed FF-A features
FFA_PARTITION_INFO_GET and FFA_RX_RELEASE.

The partition properties of each SP are retrieved with
FFA_PARTITION_INFO_GET, which returns the information in our RX buffer.
Using FFA_PARTITION_INFO_GET changes the owner of the RX buffer to the
caller (us), so once we're done with the buffer it must be released
using FFA_RX_RELEASE before another call can be made.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 200 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 199 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 054121fe4321..b4fea65ce31d 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -160,6 +160,14 @@
 #define FFA_MSG_SEND                    0x8400006EU
 #define FFA_MSG_POLL                    0x8400006AU
 
+/* Partition information descriptor */
+struct ffa_partition_info_1_1 {
+    uint16_t id;
+    uint16_t execution_context;
+    uint32_t partition_properties;
+    uint8_t uuid[16];
+};
+
 struct ffa_ctx {
     /* FF-A version used by the guest */
     uint32_t guest_vers;
@@ -168,6 +176,12 @@ struct ffa_ctx {
 /* Negotiated FF-A version to use with the SPMC */
 static uint32_t ffa_version __ro_after_init;
 
+/* SPs subscribing to VM_CREATE and VM_DESTROYED events */
+static uint16_t *subscr_vm_created __read_mostly;
+static uint16_t subscr_vm_created_count __read_mostly;
+static uint16_t *subscr_vm_destroyed __read_mostly;
+static uint16_t subscr_vm_destroyed_count __read_mostly;
+
 /*
  * Our rx/tx buffers shared with the SPMC. FFA_RXTX_PAGE_COUNT is the
  * number of pages used in each of these buffers.
@@ -251,6 +265,72 @@ static int32_t ffa_rxtx_map(paddr_t tx_addr, paddr_t rx_addr,
     return ffa_simple_call(FFA_RXTX_MAP_64, tx_addr, rx_addr, page_count, 0);
 }
 
+static int32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
+                                      uint32_t w4, uint32_t w5,
+                                      uint32_t *count)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_PARTITION_INFO_GET,
+        .a1 = w1,
+        .a2 = w2,
+        .a3 = w3,
+        .a4 = w4,
+        .a5 = w5,
+    };
+    struct arm_smccc_1_2_regs resp;
+    uint32_t ret;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    ret = get_ffa_ret_code(&resp);
+    if ( !ret )
+        *count = resp.a2;
+
+    return ret;
+}
+
+static int32_t ffa_rx_release(void)
+{
+    return ffa_simple_call(FFA_RX_RELEASE, 0, 0, 0, 0);
+}
+
+static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
+                                      uint8_t msg)
+{
+    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
+    int32_t res;
+
+    if ( msg == FFA_MSG_SEND_VM_CREATED )
+        exp_resp |= FFA_MSG_RESP_VM_CREATED;
+    else if ( msg == FFA_MSG_SEND_VM_DESTROYED )
+        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
+    else
+        return FFA_RET_INVALID_PARAMETERS;
+
+    do {
+        const struct arm_smccc_1_2_regs arg = {
+            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
+            .a1 = sp_id,
+            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
+            .a5 = vm_id,
+        };
+        struct arm_smccc_1_2_regs resp;
+
+        arm_smccc_1_2_smc(&arg, &resp);
+        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp )
+        {
+            /*
+             * This is an invalid response, likely due to some error in the
+             * implementation of the ABI.
+             */
+            return FFA_RET_INVALID_PARAMETERS;
+        }
+        res = resp.a3;
+    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
+
+    return res;
+}
+
 static uint16_t get_vm_id(const struct domain *d)
 {
     /* +1 since 0 is reserved for the hypervisor in FF-A */
@@ -374,6 +454,10 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
 static int ffa_domain_init(struct domain *d)
 {
     struct ffa_ctx *ctx;
+    unsigned int n;
+    unsigned int m;
+    unsigned int c_pos;
+    int32_t res;
 
     if ( !ffa_version )
         return -ENODEV;
@@ -388,24 +472,134 @@ static int ffa_domain_init(struct domain *d)
     if ( !ctx )
         return -ENOMEM;
 
+    for ( n = 0; n < subscr_vm_created_count; n++ )
+    {
+        res = ffa_direct_req_send_vm(subscr_vm_created[n], get_vm_id(d),
+                                     FFA_MSG_SEND_VM_CREATED);
+        if ( res )
+        {
+            printk(XENLOG_ERR "ffa: Failed to report creation of vm_id %u to %u: res %d\n",
+                   get_vm_id(d), subscr_vm_created[n], res);
+            c_pos = n;
+            goto err;
+        }
+    }
+
     d->arch.tee = ctx;
 
     return 0;
+
+err:
+    /* Undo any already sent vm created messages */
+    for ( n = 0; n < c_pos; n++ )
+        for ( m = 0; m < subscr_vm_destroyed_count; m++ )
+            if ( subscr_vm_destroyed[m] == subscr_vm_created[n] )
+                ffa_direct_req_send_vm(subscr_vm_destroyed[m], get_vm_id(d),
+                                       FFA_MSG_SEND_VM_DESTROYED);
+
+    return -EIO;
 }
 
 /* This function is supposed to undo what ffa_domain_init() has done */
 static int ffa_relinquish_resources(struct domain *d)
 {
     struct ffa_ctx *ctx = d->arch.tee;
+    unsigned int n;
+    int32_t res;
 
     if ( !ctx )
         return 0;
 
+    for ( n = 0; n < subscr_vm_destroyed_count; n++ )
+    {
+        res = ffa_direct_req_send_vm(subscr_vm_destroyed[n], get_vm_id(d),
+                                     FFA_MSG_SEND_VM_DESTROYED);
+
+        if ( res )
+            printk(XENLOG_ERR "ffa: Failed to report destruction of vm_id %u to %u: res %d\n",
+                   get_vm_id(d), subscr_vm_destroyed[n], res);
+    }
+
     XFREE(d->arch.tee);
 
     return 0;
 }
 
+static void uninit_subscribers(void)
+{
+    subscr_vm_created_count = 0;
+    subscr_vm_destroyed_count = 0;
+    XFREE(subscr_vm_created);
+    XFREE(subscr_vm_destroyed);
+}
+
+static bool init_subscribers(struct ffa_partition_info_1_1 *fpi, uint16_t count)
+{
+    uint16_t n;
+    uint16_t c_pos;
+    uint16_t d_pos;
+
+    subscr_vm_created_count = 0;
+    subscr_vm_destroyed_count = 0;
+    for ( n = 0; n < count; n++ )
+    {
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED )
+            subscr_vm_created_count++;
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED )
+            subscr_vm_destroyed_count++;
+    }
+
+    if ( subscr_vm_created_count )
+        subscr_vm_created = xzalloc_array(uint16_t, subscr_vm_created_count);
+    if ( subscr_vm_destroyed_count )
+        subscr_vm_destroyed = xzalloc_array(uint16_t,
+                                            subscr_vm_destroyed_count);
+    if ( (subscr_vm_created_count && !subscr_vm_created) ||
+         (subscr_vm_destroyed_count && !subscr_vm_destroyed) )
+    {
+        printk(XENLOG_ERR "ffa: Failed to allocate subscription lists\n");
+        uninit_subscribers();
+        return false;
+    }
+
+    for ( c_pos = 0, d_pos = 0, n = 0; n < count; n++ )
+    {
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_CREATED )
+            subscr_vm_created[c_pos++] = fpi[n].id;
+        if ( fpi[n].partition_properties & FFA_PART_PROP_NOTIF_DESTROYED )
+            subscr_vm_destroyed[d_pos++] = fpi[n].id;
+    }
+
+    return true;
+}
+
+static bool init_sps(void)
+{
+    bool ret = false;
+    uint32_t count;
+    int e;
+
+    e = ffa_partition_info_get(0, 0, 0, 0, 0, &count);
+    if ( e )
+    {
+        printk(XENLOG_ERR "ffa: Failed to get list of SPs: %d\n", e);
+        goto out;
+    }
+
+    if ( count >= UINT16_MAX )
+    {
+        printk(XENLOG_ERR "ffa: Impossible number of SPs: %u\n", count);
+        goto out;
+    }
+
+    ret = init_subscribers(ffa_rx, count);
+
+out:
+    ffa_rx_release();
+
+    return ret;
+}
+
 static bool ffa_probe(void)
 {
     uint32_t vers;
@@ -456,7 +650,8 @@ static bool ffa_probe(void)
      * TODO save result of checked features and use that information to
      * accept or reject requests from guests.
      */
-    if (
+    if ( !check_mandatory_feature(FFA_PARTITION_INFO_GET) ||
+         !check_mandatory_feature(FFA_RX_RELEASE) ||
          !check_mandatory_feature(FFA_RXTX_MAP_64) ||
          !check_mandatory_feature(FFA_RXTX_UNMAP) ||
          !check_mandatory_feature(FFA_MSG_SEND_DIRECT_REQ_32) )
@@ -478,6 +673,9 @@ static bool ffa_probe(void)
     }
     ffa_version = vers;
 
+    if ( !init_sps() )
+        goto err_free_ffa_tx;
+
     return true;
 
 err_free_ffa_tx:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:58 2023
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 12/22] xen/arm: ffa: support mapping guest RX/TX buffers
Date: Thu, 13 Apr 2023 09:14:14 +0200
Message-Id: <20230413071424.3273490-13-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds support in the mediator to map and unmap the RX and TX buffers
provided by the guest using the two FF-A functions FFA_RXTX_MAP and
FFA_RXTX_UNMAP.

These buffers are later used to transmit data that cannot be passed in
registers alone.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 137 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 137 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index b4fea65ce31d..127397d8e448 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -74,6 +74,12 @@
  */
 #define FFA_RXTX_PAGE_COUNT             1
 
+/*
+ * Limits the number of pages of RX/TX buffers a guest can map. This value
+ * has been chosen arbitrarily.
+ */
+#define FFA_MAX_RXTX_PAGE_COUNT         32
+
 /*
  * Flags and field values used for the MSG_SEND_DIRECT_REQ/RESP:
  * BIT(31): Framework or partition message
@@ -169,8 +175,15 @@ struct ffa_partition_info_1_1 {
 };
 
 struct ffa_ctx {
+    void *rx;
+    const void *tx;
+    struct page_info *rx_pg;
+    struct page_info *tx_pg;
+    /* Number of 4kB pages in each of rx/rx_pg and tx/tx_pg */
+    unsigned int page_count;
     /* FF-A version used by the guest */
     uint32_t guest_vers;
+    bool tx_is_free;
 };
 
 /* Negotiated FF-A version to use with the SPMC */
@@ -351,6 +364,11 @@ static void set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1,
         set_user_reg(regs, 7, v7);
 }
 
+static void set_regs_error(struct cpu_user_regs *regs, uint32_t error_code)
+{
+    set_regs(regs, FFA_ERROR, 0, error_code, 0, 0, 0, 0, 0);
+}
+
 static void set_regs_success(struct cpu_user_regs *regs, uint32_t w2,
                              uint32_t w3)
 {
@@ -372,6 +390,105 @@ static void handle_version(struct cpu_user_regs *regs)
     set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
 }
 
+static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr,
+                                register_t rx_addr, uint32_t page_count)
+{
+    uint32_t ret = FFA_RET_INVALID_PARAMETERS;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+    struct page_info *tx_pg;
+    struct page_info *rx_pg;
+    p2m_type_t t;
+    void *rx;
+    void *tx;
+
+    if ( !smccc_is_conv_64(fid) )
+    {
+        /*
+         * Calls using the 32-bit calling convention must ignore the upper
+         * 32 bits in the argument registers.
+         */
+        tx_addr &= UINT32_MAX;
+        rx_addr &= UINT32_MAX;
+    }
+
+    if ( page_count > FFA_MAX_RXTX_PAGE_COUNT )
+    {
+        printk(XENLOG_ERR "ffa: RXTX_MAP: error: %u pages requested (limit %u)\n",
+               page_count, FFA_MAX_RXTX_PAGE_COUNT);
+        return FFA_RET_NOT_SUPPORTED;
+    }
+
+    /* Already mapped */
+    if ( ctx->rx )
+        return FFA_RET_DENIED;
+
+    tx_pg = get_page_from_gfn(d, gfn_x(gaddr_to_gfn(tx_addr)), &t, P2M_ALLOC);
+    if ( !tx_pg )
+        return FFA_RET_INVALID_PARAMETERS;
+    /* Only normal RAM for now */
+    if ( !p2m_is_ram(t) )
+        goto err_put_tx_pg;
+
+    rx_pg = get_page_from_gfn(d, gfn_x(gaddr_to_gfn(rx_addr)), &t, P2M_ALLOC);
+    if ( !rx_pg )
+        goto err_put_tx_pg;
+    /* Only normal RAM for now */
+    if ( !p2m_is_ram(t) )
+        goto err_put_rx_pg;
+
+    tx = __map_domain_page_global(tx_pg);
+    if ( !tx )
+        goto err_put_rx_pg;
+
+    rx = __map_domain_page_global(rx_pg);
+    if ( !rx )
+        goto err_unmap_tx;
+
+    ctx->rx = rx;
+    ctx->tx = tx;
+    ctx->rx_pg = rx_pg;
+    ctx->tx_pg = tx_pg;
+    ctx->page_count = page_count;
+    ctx->tx_is_free = true;
+    return FFA_RET_OK;
+
+err_unmap_tx:
+    unmap_domain_page_global(tx);
+err_put_rx_pg:
+    put_page(rx_pg);
+err_put_tx_pg:
+    put_page(tx_pg);
+
+    return ret;
+}
+
+static void rxtx_unmap(struct ffa_ctx *ctx)
+{
+    unmap_domain_page_global(ctx->rx);
+    unmap_domain_page_global(ctx->tx);
+    put_page(ctx->rx_pg);
+    put_page(ctx->tx_pg);
+    ctx->rx = NULL;
+    ctx->tx = NULL;
+    ctx->rx_pg = NULL;
+    ctx->tx_pg = NULL;
+    ctx->page_count = 0;
+    ctx->tx_is_free = false;
+}
+
+static uint32_t handle_rxtx_unmap(void)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( !ctx->rx )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    rxtx_unmap(ctx);
+
+    return FFA_RET_OK;
+}
+
 static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
 {
     struct arm_smccc_1_2_regs arg = { .a0 = fid, };
@@ -428,6 +545,7 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
     uint32_t fid = get_user_reg(regs, 0);
     struct domain *d = current->domain;
     struct ffa_ctx *ctx = d->arch.tee;
+    int e;
 
     if ( !ctx )
         return false;
@@ -440,6 +558,22 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
     case FFA_ID_GET:
         set_regs_success(regs, get_vm_id(d), 0);
         return true;
+    case FFA_RXTX_MAP_32:
+    case FFA_RXTX_MAP_64:
+        e = handle_rxtx_map(fid, get_user_reg(regs, 1), get_user_reg(regs, 2),
+                            get_user_reg(regs, 3));
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
+    case FFA_RXTX_UNMAP:
+        e = handle_rxtx_unmap();
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
     case FFA_MSG_SEND_DIRECT_REQ_32:
     case FFA_MSG_SEND_DIRECT_REQ_64:
         handle_msg_send_direct_req(regs, fid);
@@ -520,6 +654,9 @@ static int ffa_relinquish_resources(struct domain *d)
                    get_vm_id(d), subscr_vm_destroyed[n], res);
     }
 
+    if ( ctx->rx )
+        rxtx_unmap(ctx);
+
     XFREE(d->arch.tee);
 
     return 0;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:15:59 2023
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 13/22] xen/arm: ffa: support guest FFA_PARTITION_INFO_GET
Date: Thu, 13 Apr 2023 09:14:15 +0200
Message-Id: <20230413071424.3273490-14-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds support in the mediator to handle FFA_PARTITION_INFO_GET requests
from a guest. The requests are forwarded to the SPMC and the response is
translated according to the FF-A version in use by the guest.

Using FFA_PARTITION_INFO_GET transfers ownership of the RX buffer to the
caller (the guest in this case), so once the guest is done with the
buffer it must release it using FFA_RX_RELEASE before another call can
be made.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 137 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 134 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 127397d8e448..74b8c517afb8 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -166,7 +166,18 @@
 #define FFA_MSG_SEND                    0x8400006EU
 #define FFA_MSG_POLL                    0x8400006AU
 
+/*
+ * Structs below ending with _1_0 are defined in FF-A-1.0-REL and
+ * structs ending with _1_1 are defined in FF-A-1.1-REL0.
+ */
+
 /* Partition information descriptor */
+struct ffa_partition_info_1_0 {
+    uint16_t id;
+    uint16_t execution_context;
+    uint32_t partition_properties;
+};
+
 struct ffa_partition_info_1_1 {
     uint16_t id;
     uint16_t execution_context;
@@ -183,7 +194,8 @@ struct ffa_ctx {
     unsigned int page_count;
     /* FF-A version used by the guest */
     uint32_t guest_vers;
-    bool tx_is_free;
+    bool rx_is_free;
+    spinlock_t lock;
 };
 
 /* Negotiated FF-A version to use with the SPMC */
@@ -198,9 +210,15 @@ static uint16_t subscr_vm_destroyed_count __read_mostly;
 /*
  * Our rx/tx buffers shared with the SPMC. FFA_RXTX_PAGE_COUNT is the
  * number of pages used in each of these buffers.
+ *
+ * The RX buffer is protected from concurrent usage with ffa_rx_buffer_lock.
+ * Note that the SPMC also tracks the ownership of our RX buffer, so for
+ * calls that use our RX buffer to deliver a result we must call
+ * ffa_rx_release() to let the SPMC know that we're done with the buffer.
  */
 static void *ffa_rx __read_mostly;
 static void *ffa_tx __read_mostly;
+static DEFINE_SPINLOCK(ffa_rx_buffer_lock);
 
 static bool ffa_get_version(uint32_t *vers)
 {
@@ -449,7 +467,7 @@ static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr,
     ctx->rx_pg = rx_pg;
     ctx->tx_pg = tx_pg;
     ctx->page_count = page_count;
-    ctx->tx_is_free = true;
+    ctx->rx_is_free = true;
     return FFA_RET_OK;
 
 err_unmap_tx:
@@ -473,7 +491,7 @@ static void rxtx_unmap(struct ffa_ctx *ctx)
     ctx->rx_pg = NULL;
     ctx->tx_pg = NULL;
     ctx->page_count = 0;
-    ctx->tx_is_free = false;
+    ctx->rx_is_free = false;
 }
 
 static uint32_t handle_rxtx_unmap(void)
@@ -489,6 +507,100 @@ static uint32_t handle_rxtx_unmap(void)
     return FFA_RET_OK;
 }
 
+static int32_t handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
+                                         uint32_t w4, uint32_t w5,
+                                         uint32_t *count)
+{
+    int32_t ret = FFA_RET_DENIED;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    /*
+     * FF-A v1.0 has w5 MBZ while v1.1 allows
+     * FFA_PARTITION_INFO_GET_COUNT_FLAG to be non-zero.
+     */
+    if ( w5 == FFA_PARTITION_INFO_GET_COUNT_FLAG &&
+         ctx->guest_vers == FFA_VERSION_1_1 )
+        return ffa_partition_info_get(w1, w2, w3, w4, w5, count);
+    if ( w5 )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    if ( !ffa_rx )
+        return FFA_RET_DENIED;
+
+    spin_lock(&ctx->lock);
+    if ( !ctx->page_count || !ctx->rx_is_free )
+        goto out;
+    spin_lock(&ffa_rx_buffer_lock);
+    ret = ffa_partition_info_get(w1, w2, w3, w4, w5, count);
+    if ( ret )
+        goto out_rx_buf_unlock;
+    /*
+     * ffa_partition_info_get() succeeded so we now own the RX buffer we
+     * share with the SPMC. We must give it back using ffa_rx_release()
+     * once we've copied the content.
+     */
+
+    if ( ctx->guest_vers == FFA_VERSION_1_0 )
+    {
+        size_t n;
+        struct ffa_partition_info_1_1 *src = ffa_rx;
+        struct ffa_partition_info_1_0 *dst = ctx->rx;
+
+        if ( ctx->page_count * FFA_PAGE_SIZE < *count * sizeof(*dst) )
+        {
+            ret = FFA_RET_NO_MEMORY;
+            goto out_rx_release;
+        }
+
+        for ( n = 0; n < *count; n++ )
+        {
+            dst[n].id = src[n].id;
+            dst[n].execution_context = src[n].execution_context;
+            dst[n].partition_properties = src[n].partition_properties;
+        }
+    }
+    else
+    {
+        size_t sz = *count * sizeof(struct ffa_partition_info_1_1);
+
+        if ( ctx->page_count * FFA_PAGE_SIZE < sz )
+        {
+            ret = FFA_RET_NO_MEMORY;
+            goto out_rx_release;
+        }
+
+        memcpy(ctx->rx, ffa_rx, sz);
+    }
+    ctx->rx_is_free = false;
+out_rx_release:
+    ffa_rx_release();
+out_rx_buf_unlock:
+    spin_unlock(&ffa_rx_buffer_lock);
+out:
+    spin_unlock(&ctx->lock);
+
+    return ret;
+}
+
+static int32_t handle_rx_release(void)
+{
+    int32_t ret = FFA_RET_DENIED;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    spin_lock(&ctx->lock);
+    if ( !ctx->page_count || ctx->rx_is_free )
+        goto out;
+    ret = FFA_RET_OK;
+    ctx->rx_is_free = true;
+out:
+    spin_unlock(&ctx->lock);
+
+    return ret;
+}
+
 static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
 {
     struct arm_smccc_1_2_regs arg = { .a0 = fid, };
@@ -545,6 +657,7 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
     uint32_t fid = get_user_reg(regs, 0);
     struct domain *d = current->domain;
     struct ffa_ctx *ctx = d->arch.tee;
+    uint32_t count;
     int e;
 
     if ( !ctx )
@@ -574,6 +687,24 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
         else
             set_regs_success(regs, 0, 0);
         return true;
+    case FFA_PARTITION_INFO_GET:
+        e = handle_partition_info_get(get_user_reg(regs, 1),
+                                      get_user_reg(regs, 2),
+                                      get_user_reg(regs, 3),
+                                      get_user_reg(regs, 4),
+                                      get_user_reg(regs, 5), &count);
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, count, 0);
+        return true;
+    case FFA_RX_RELEASE:
+        e = handle_rx_release();
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
     case FFA_MSG_SEND_DIRECT_REQ_32:
     case FFA_MSG_SEND_DIRECT_REQ_64:
         handle_msg_send_direct_req(regs, fid);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:16:00 2023
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 16/22] xen/arm: ffa: add ABI structs for sharing memory
Date: Thu, 13 Apr 2023 09:14:18 +0200
Message-Id: <20230413071424.3273490-17-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds the ABI structs used by FFA_MEM_SHARE and related functions for
sharing memory.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/tee/ffa.c | 69 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 58c581c8ffc7..f3e05911e16e 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -245,6 +245,75 @@ struct ffa_partition_info_1_1 {
     uint8_t uuid[16];
 };
 
+/* Constituent memory region descriptor */
+struct ffa_address_range {
+    uint64_t address;
+    uint32_t page_count;
+    uint32_t reserved;
+};
+
+/* Composite memory region descriptor */
+struct ffa_mem_region {
+    uint32_t total_page_count;
+    uint32_t address_range_count;
+    uint64_t reserved;
+    struct ffa_address_range address_range_array[];
+};
+
+/* Memory access permissions descriptor */
+struct ffa_mem_access_perm {
+    uint16_t endpoint_id;
+    uint8_t perm;
+    uint8_t flags;
+};
+
+/* Endpoint memory access descriptor */
+struct ffa_mem_access {
+    struct ffa_mem_access_perm access_perm;
+    uint32_t region_offs;
+    uint64_t reserved;
+};
+
+/* Lend, donate or share memory transaction descriptor */
+struct ffa_mem_transaction_1_0 {
+    uint16_t sender_id;
+    uint8_t mem_reg_attr;
+    uint8_t reserved0;
+    uint32_t flags;
+    uint64_t handle;
+    uint64_t tag;
+    uint32_t reserved1;
+    uint32_t mem_access_count;
+    struct ffa_mem_access mem_access_array[];
+};
+
+struct ffa_mem_transaction_1_1 {
+    uint16_t sender_id;
+    uint16_t mem_reg_attr;
+    uint32_t flags;
+    uint64_t handle;
+    uint64_t tag;
+    uint32_t mem_access_size;
+    uint32_t mem_access_count;
+    uint32_t mem_access_offs;
+    uint8_t reserved[12];
+};
+
+/* Endpoint RX/TX descriptor */
+struct ffa_endpoint_rxtx_descriptor_1_0 {
+    uint16_t sender_id;
+    uint16_t reserved;
+    uint32_t rx_range_count;
+    uint32_t tx_range_count;
+};
+
+struct ffa_endpoint_rxtx_descriptor_1_1 {
+    uint16_t sender_id;
+    uint16_t reserved;
+    uint32_t rx_region_offs;
+    uint32_t tx_region_offs;
+};
+
 struct ffa_ctx {
     void *rx;
     const void *tx;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:16:02 2023
         InQHdS7WYG2EzXm2hEKC85HhLY4v76kkRwyIX25dHIu6g+2/izE1ytPYoB7VCs4cqPI4
         kwqCT+fZGH35n7aYUMOY0kMWAW4dBBrDxLCPmvZTFCo1qjL8jZbSRYSHRwivx9RNj/NG
         j3YDoCrW1XiFQ4mPuNudFcqfqJTVyvTlaeKVIAFMm4esqV9G3+omjDbSfgrKPfUz6QDZ
         MORw==
X-Gm-Message-State: AAQBX9epkTs4zDOOCR/xaIcpcd8GDvqAo7wVHxM2Dzwis05+n3qmjwPd
	BINCkL2z68EInMXlrqJumlQjotPmX4b/b/KXXg8=
X-Google-Smtp-Source: AKy350bZfh7qZ51cjeaR08/Y3OCAVeKqhEC41a1zdISPOxbkbb1wGKx3EkXrHZZ5gvKNgbhX2ITudQ==
X-Received: by 2002:a19:f017:0:b0:4e8:4b58:bfbd with SMTP id p23-20020a19f017000000b004e84b58bfbdmr615879lfc.10.1681370157386;
        Thu, 13 Apr 2023 00:15:57 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 18/22] xen/arm: ffa: add support to reclaim shared memory
Date: Thu, 13 Apr 2023 09:14:20 +0200
Message-Id: <20230413071424.3273490-19-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds support to reclaim memory previously shared with FFA_MEM_SHARE.

A memory region that no longer needs to be shared can be reclaimed with
FFA_MEM_RECLAIM once the SP has stopped using it. That condition is
checked by the SPMC and is not under the mediator's control.

Adds a check that the SP supports the required FF-A feature
FFA_MEM_RECLAIM.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/tee/ffa.c | 53 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 438e0b21d1ea..47ff899eca32 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -535,6 +535,12 @@ static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
     }
 }
 
+static int32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
+                               uint32_t flags)
+{
+    return ffa_simple_call(FFA_MEM_RECLAIM, handle_lo, handle_hi, flags, 0);
+}
+
 static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
                                       uint8_t msg)
 {
@@ -1256,6 +1262,43 @@ out_set_ret:
             set_regs_error(regs, ret);
 }
 
+static int handle_mem_reclaim(uint64_t handle, uint32_t flags)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+    struct ffa_shm_mem *shm;
+    register_t handle_hi;
+    register_t handle_lo;
+    int ret;
+
+    spin_lock(&ctx->lock);
+    list_for_each_entry(shm, &ctx->shm_list, list)
+    {
+        if ( shm->handle == handle )
+            goto found_it;
+    }
+    shm = NULL;
+    ret = FFA_RET_INVALID_PARAMETERS;
+    goto out;
+found_it:
+
+    uint64_to_regpair(&handle_hi, &handle_lo, handle);
+    ret = ffa_mem_reclaim(handle_lo, handle_hi, flags);
+    if ( ret )
+    {
+        shm = NULL;
+        goto out;
+    }
+
+    list_del(&shm->list);
+
+out:
+    free_ffa_shm_mem(ctx, shm);
+    spin_unlock(&ctx->lock);
+
+    return ret;
+}
+
 static bool ffa_handle_call(struct cpu_user_regs *regs)
 {
     uint32_t fid = get_user_reg(regs, 0);
@@ -1317,6 +1360,15 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
     case FFA_MEM_SHARE_64:
         handle_mem_share(regs);
         return true;
+    case FFA_MEM_RECLAIM:
+        e = handle_mem_reclaim(regpair_to_uint64(get_user_reg(regs, 2),
+                                                 get_user_reg(regs, 1)),
+                               get_user_reg(regs, 3));
+        if ( e )
+            set_regs_error(regs, e);
+        else
+            set_regs_success(regs, 0, 0);
+        return true;
 
     default:
         gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
@@ -1534,6 +1586,7 @@ static bool ffa_probe(void)
          !check_mandatory_feature(FFA_MEM_SHARE_64) ||
          !check_mandatory_feature(FFA_RXTX_UNMAP) ||
          !check_mandatory_feature(FFA_MEM_SHARE_32) ||
+         !check_mandatory_feature(FFA_MEM_RECLAIM) ||
          !check_mandatory_feature(FFA_MSG_SEND_DIRECT_REQ_32) )
         return false;
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:16:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:16:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520476.808197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBe-0006Ia-Ih; Thu, 13 Apr 2023 07:16:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520476.808197; Thu, 13 Apr 2023 07:16:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrBd-0006FW-40; Thu, 13 Apr 2023 07:16:05 +0000
Received: by outflank-mailman (input) for mailman id 520476;
 Thu, 13 Apr 2023 07:16:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrBa-0001gr-LR
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:16:02 +0000
Received: from mail-lf1-x133.google.com (mail-lf1-x133.google.com
 [2a00:1450:4864:20::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0b48f696-d9cb-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 09:16:01 +0200 (CEST)
Received: by mail-lf1-x133.google.com with SMTP id o1so17896811lfc.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:16:01 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:16:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b48f696-d9cb-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370160; x=1683962160;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bHjUSAX1G+PSTsVlSk/rHD6dSlwHglC2zbE6BMp+VKo=;
        b=btNr9S7x7r1kSXq07/s61mffJI59SVCiAB7WMvILy3nvyhwmRc49W267xWGxNrQGe4
         uZ/bePskes7286wzyRplVA/AXc+q/d3GwJP+NjiHv41DvwIXAV7VZPSy5ZLDo7fRpIO/
         Afz6rk03wZDSD1+3HjPimoXN11zLN1uLy5v1KYLFn8EfGShlBXaRDjxypm/iA8aQMayx
         kDrOf4kD+R7f7sZZaC51CEahX7GYucH0U4DKajspeSyDHl5NXzH/3/W5f3ziz9k4Qwsv
         m+LWayOSWZRcy1Ax714BDl+tpd35jcyfBVfvbhYCtJ5HWEWKOfB9q/S4YpRavvntvdUP
         xf7g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370160; x=1683962160;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=bHjUSAX1G+PSTsVlSk/rHD6dSlwHglC2zbE6BMp+VKo=;
        b=WKpv1E+oFYrlaWebcqlK0/tkUaqScx2R8DqIqnGLIiYpuplQkgfBBrfcNZ2Kch4KoO
         ga8iVvII91j+ZuEli6PfoxBLVov+Q1NV5I4wK+2I1UAAbpg8TGiuB1o3Xn/hvC5hXAza
         tA9AF+q9nOGYqb5kYD0uhe5BC8r4tK/Sb6jJRQjfEDqMkv8/rmFL+4wkt4CFDWDied83
         1KNHoqkvBRkX/RuSbbRREMm3jurD+rVtTkROu/W9zFrpZkv0ycbU4gbIpoqxYvjL11GP
         4bMuOkKKPFABgcHHfZR2gyPZ5pJo7i9BcWNLKxN6Zpok173KPyeHfr2uXphdHOr5DIU9
         4q9A==
X-Gm-Message-State: AAQBX9ck5LpFes6S8H0HsXcgplmRtf36GAof49cMreza+gdsJx1mU/Y7
	UIMZ7SJxX/C8jNjF0okyDlAx7yp2i27ThxdoKBo=
X-Google-Smtp-Source: AKy350YWU9r6iN+MQQbR65FG8mulx+cLkSIkEYhbzgpG253YbNkvwHiggHR8kCHk4ZKcRQGM0YA/Hg==
X-Received: by 2002:ac2:454e:0:b0:4ec:89d3:a8ac with SMTP id j14-20020ac2454e000000b004ec89d3a8acmr459019lfm.30.1681370160490;
        Thu, 13 Apr 2023 00:16:00 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [XEN PATCH v8 22/22] docs: add Arm FF-A mediator
Date: Thu, 13 Apr 2023 09:14:24 +0200
Message-Id: <20230413071424.3273490-23-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Describes an FF-A version 1.1 [1] mediator used to communicate with a
Secure Partition in the secure world.

[1] https://developer.arm.com/documentation/den0077/latest
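
For illustration only, and assuming the B<ffa> item added below is enabled
the same way as the B<optee> item it follows in xl.cfg (the exact syntax is
defined by the patch, not by this sketch), a guest configuration might
contain:

```
# Hypothetical guest config fragment enabling the FF-A mediator
# (technology preview; default is off).
tee = "ffa"
```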
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 SUPPORT.md               |  8 ++++++++
 docs/man/xl.cfg.5.pod.in | 15 +++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index aa1940e55f09..1fd746f7f7f2 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -818,6 +818,14 @@ that covers the DMA of the device to be passed through.
 
 No support for QEMU backends in a 16K or 64K domain.
 
+### ARM: Firmware Framework for Arm A-profile (FF-A) Mediator
+
+    Status, Arm64: Tech Preview
+
+There are still some code paths where a vCPU may hog a pCPU longer than
+necessary. The FF-A mediator is not yet implemented for Arm32. Part of the
+FF-A specification is not supported.
+
 ### ARM: Guest Device Tree support
 
     Status: Supported
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 10f37990be57..bba99c576b48 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1645,6 +1645,21 @@ in OP-TEE.
 
 This feature is a B<technology preview>.
 
+=item B<ffa>
+
+B<Arm only.> Allow a guest to communicate via FF-A with Secure Partitions
+(SP), default false.
+
+Currently only a small subset of the FF-A specification is supported: just
+enough to communicate with OP-TEE, that is, direct messaging and sharing
+memory with a single SP. More advanced use cases, where memory might be
+shared or donated to multiple SPs, are not supported.
+
+See L<https://developer.arm.com/documentation/den0077/latest> for more
+information about FF-A.
+
+This feature is a B<technology preview>.
+
 =back
 
 =back
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:16:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:16:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520491.808211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrCG-0001yC-4R; Thu, 13 Apr 2023 07:16:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520491.808211; Thu, 13 Apr 2023 07:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrCG-0001y3-05; Thu, 13 Apr 2023 07:16:44 +0000
Received: by outflank-mailman (input) for mailman id 520491;
 Thu, 13 Apr 2023 07:16:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrCE-0001wd-Ga
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:16:42 +0000
Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com
 [2a00:1450:4864:20::236])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0905fb16-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:15:57 +0200 (CEST)
Received: by mail-lj1-x236.google.com with SMTP id x34so907330ljq.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:57 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0905fb16-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370156; x=1683962156;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=8ZCEdSFHwslEgaIydy46nytop0ECS1ocVnr/aHKCQRk=;
        b=SeDjGg7l463KhgfWyWGSRKKBJwYa1xPY3KJchJoSqOmVer4k516NpP7GKfvOctp81R
         H+8IWvi+Jl5z9xCsyo9luagc2VZEZbTFXwgvZG0YiJSctGIT3rLgXHZRTY8GJpOGwsHA
         yJKv+V+FsiINqiHkCMUGI3rXN9jcjB0Rn4OcyIZ7Lnj0dME7Qg9UzTCIpgpg72Zq0EZw
         v3CTmvW+lZNGzxjsjy08f3HRr6Y02KIbzyrth9J5Zd8zjwY6wKOOv0AovCPC5uvLN005
         2gVBYZpcW4dtlLw8gFj79f+ZwmeI4EKlnKoZFBWwr4y42IfGY/i0U31ZVCWff6vlr54z
         V5OQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370156; x=1683962156;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=8ZCEdSFHwslEgaIydy46nytop0ECS1ocVnr/aHKCQRk=;
        b=eurVkONe0JzwW0Nl1xj6oWsXOyUROI4IkQ0omhRShD4BOzjncT45e3gmRCO+vkobHQ
         ms2xOFvqe38PAn8u6jpY9HsvDSvFfcKQDTOFxZjvGj2Vcvg5P0husbrpJGVqq6W6RdLT
         dKTXONpP/bn7Bl97W1EcGehTenF0uNblw7PWDXdemkQRVgPe0d414u3yJHDEslDu1RLx
         +1naRlD/s7Dj/k/rKcbzLHH8Lyb6HnY2W85RifNvxwvEIdlH5m74up1a2UvR+YS4v391
         l9ImGyBRLJYcBNCJ78JLLbBmk8PgMHvjplw9z8/v7F3fmhcIXTN96Q32zD9+VXREoP0P
         GKrA==
X-Gm-Message-State: AAQBX9eiEXDlTYTzP+rAqBpCV+k0SwScbY2Pg+wz6IVfTb+/L+aA+xVw
	MYrgMt1//TvfC6XaOqBmZ2B/TELq05FmenfAJY8=
X-Google-Smtp-Source: AKy350YK+HbgR9hM+KGDovRsTyxsnfWzKnI/0keGOj9aRz2024n97uAT+dey9VRIssh4Bkirsd3r+Q==
X-Received: by 2002:a2e:97c5:0:b0:2a7:75aa:40c with SMTP id m5-20020a2e97c5000000b002a775aa040cmr454342ljj.10.1681370156514;
        Thu, 13 Apr 2023 00:15:56 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 17/22] xen/arm: ffa: support sharing memory
Date: Thu, 13 Apr 2023 09:14:19 +0200
Message-Id: <20230413071424.3273490-18-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds support for a guest to share memory with an SP using FFA_MEM_SHARE.
Only memory regions small enough to be shared with a single call to
FFA_MEM_SHARE are supported.

With this commit we have an FF-A version 1.1 [1] mediator able to
communicate with a Secure Partition in the secure world using shared memory.
The secure world must use FF-A version 1.1, but the guest is free to use
version 1.0 or version 1.1.

Adds a check that the SP supports the needed FF-A features
FFA_MEM_SHARE_64 or FFA_MEM_SHARE_32.

[1] https://developer.arm.com/documentation/den0077/latest
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 483 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 483 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index f3e05911e16e..438e0b21d1ea 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -299,6 +299,38 @@ struct ffa_mem_transaction_1_1 {
     uint8_t reserved[12];
 };
 
+/* Calculate offset of struct ffa_mem_access from start of buffer */
+#define MEM_ACCESS_OFFSET(access_idx) \
+    ( sizeof(struct ffa_mem_transaction_1_1) + \
+      ( access_idx ) * sizeof(struct ffa_mem_access) )
+
+/* Calculate offset of struct ffa_mem_region from start of buffer */
+#define REGION_OFFSET(access_count, region_idx) \
+    ( MEM_ACCESS_OFFSET(access_count) + \
+      ( region_idx ) * sizeof(struct ffa_mem_region) )
+
+/* Calculate offset of struct ffa_address_range from start of buffer */
+#define ADDR_RANGE_OFFSET(access_count, region_count, range_idx) \
+    ( REGION_OFFSET(access_count, region_count) + \
+      ( range_idx ) * sizeof(struct ffa_address_range) )
+
+/*
+ * The parts needed from struct ffa_mem_transaction_1_0 or struct
+ * ffa_mem_transaction_1_1, used to provide an abstraction of the differences
+ * in data structures between version 1.0 and 1.1. This is just an internal
+ * interface and can be changed without changing any ABI.
+ */
+struct ffa_mem_transaction_int {
+    uint16_t sender_id;
+    uint8_t mem_reg_attr;
+    uint8_t flags;
+    uint8_t mem_access_size;
+    uint8_t mem_access_count;
+    uint16_t mem_access_offs;
+    uint64_t handle;
+    uint64_t tag;
+};
+
 /* Endpoint RX/TX descriptor */
 struct ffa_endpoint_rxtx_descriptor_1_0 {
     uint16_t sender_id;
@@ -324,9 +356,22 @@ struct ffa_ctx {
     /* FF-A version used by the guest */
     uint32_t guest_vers;
     bool rx_is_free;
+    /* Used shared memory objects, struct ffa_shm_mem */
+    struct list_head shm_list;
+    /* Number of allocated shared memory objects */
+    unsigned int shm_count;
     spinlock_t lock;
 };
 
+struct ffa_shm_mem {
+    struct list_head list;
+    uint16_t sender_id;
+    uint16_t ep_id;     /* endpoint, the one lending */
+    uint64_t handle;    /* FFA_HANDLE_INVALID if not set yet */
+    unsigned int page_count;
+    struct page_info *pages[];
+};
+
 /* Negotiated FF-A version to use with the SPMC */
 static uint32_t ffa_version __ro_after_init;
 
@@ -348,6 +393,7 @@ static uint16_t subscr_vm_destroyed_count __read_mostly;
 static void *ffa_rx __read_mostly;
 static void *ffa_tx __read_mostly;
 static DEFINE_SPINLOCK(ffa_rx_buffer_lock);
+static DEFINE_SPINLOCK(ffa_tx_buffer_lock);
 
 static bool ffa_get_version(uint32_t *vers)
 {
@@ -454,6 +500,41 @@ static int32_t ffa_rx_release(void)
     return ffa_simple_call(FFA_RX_RELEASE, 0, 0, 0, 0);
 }
 
+static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
+                             register_t addr, uint32_t pg_count,
+                             uint64_t *handle)
+{
+    struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_MEM_SHARE_64,
+        .a1 = tot_len,
+        .a2 = frag_len,
+        .a3 = addr,
+        .a4 = pg_count,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    switch ( resp.a0 )
+    {
+    case FFA_ERROR:
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    case FFA_SUCCESS_32:
+        *handle = regpair_to_uint64(resp.a3, resp.a2);
+        return FFA_RET_OK;
+    case FFA_MEM_FRAG_RX:
+        *handle = regpair_to_uint64(resp.a2, resp.a1);
+        if ( resp.a3 > INT32_MAX ) /* Impossible value */
+            return FFA_RET_ABORTED;
+        return resp.a3 & INT32_MAX;
+    default:
+        return FFA_RET_NOT_SUPPORTED;
+    }
+}
+
 static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
                                       uint8_t msg)
 {
@@ -781,6 +862,400 @@ out:
              resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, resp.a7 & mask);
 }
 
+/*
+ * Gets all pages and assigns them to the supplied shared memory object. If
+ * this function fails then the caller is still expected to call
+ * put_shm_pages() as a cleanup.
+ */
+static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
+                         const struct ffa_address_range *range,
+                         uint32_t range_count, unsigned int start_page_idx,
+                         unsigned int *last_page_idx)
+{
+    unsigned int pg_idx = start_page_idx;
+    gfn_t gfn;
+    unsigned int n;
+    unsigned int m;
+    p2m_type_t t;
+    uint64_t addr;
+
+    for ( n = 0; n < range_count; n++ )
+    {
+        for ( m = 0; m < range[n].page_count; m++ )
+        {
+            if ( pg_idx >= shm->page_count )
+                return FFA_RET_INVALID_PARAMETERS;
+
+            addr = read_atomic(&range[n].address);
+            gfn = gaddr_to_gfn(addr + m * FFA_PAGE_SIZE);
+            shm->pages[pg_idx] = get_page_from_gfn(d, gfn_x(gfn), &t,
+                                                   P2M_ALLOC);
+            if ( !shm->pages[pg_idx] )
+                return FFA_RET_DENIED;
+            /* Only normal RAM for now */
+            if ( !p2m_is_ram(t) )
+                return FFA_RET_DENIED;
+            pg_idx++;
+        }
+    }
+
+    *last_page_idx = pg_idx;
+
+    return FFA_RET_OK;
+}
+
+static void put_shm_pages(struct ffa_shm_mem *shm)
+{
+    unsigned int n;
+
+    for ( n = 0; n < shm->page_count && shm->pages[n]; n++ )
+    {
+        put_page(shm->pages[n]);
+        shm->pages[n] = NULL;
+    }
+}
+
+static struct ffa_shm_mem *alloc_ffa_shm_mem(struct ffa_ctx *ctx,
+                                             unsigned int page_count)
+{
+    struct ffa_shm_mem *shm;
+
+    if ( page_count >= FFA_MAX_SHM_PAGE_COUNT ||
+         ctx->shm_count >= FFA_MAX_SHM_COUNT )
+        return NULL;
+
+    shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
+    if ( shm )
+    {
+        ctx->shm_count++;
+        shm->page_count = page_count;
+    }
+
+    return shm;
+}
+
+static void free_ffa_shm_mem(struct ffa_ctx *ctx, struct ffa_shm_mem *shm)
+{
+    if ( shm ) {
+        ASSERT(ctx->shm_count > 0);
+        ctx->shm_count--;
+        put_shm_pages(shm);
+        xfree(shm);
+    }
+}
+
+static void init_range(struct ffa_address_range *addr_range,
+                       paddr_t pa)
+{
+    memset(addr_range, 0, sizeof(*addr_range));
+    addr_range->address = pa;
+    addr_range->page_count = 1;
+}
+
+/*
+ * This function uses the ffa_tx buffer to transmit the memory transaction
+ * descriptor. The function depends on ffa_tx_buffer_lock being held to
+ * guard the buffer against concurrent use.
+ */
+static int share_shm(struct ffa_shm_mem *shm)
+{
+    const uint32_t max_frag_len = FFA_RXTX_PAGE_COUNT * FFA_PAGE_SIZE;
+    struct ffa_mem_access *mem_access_array;
+    struct ffa_mem_transaction_1_1 *descr;
+    struct ffa_address_range *addr_range;
+    struct ffa_mem_region *region_descr;
+    const unsigned int region_count = 1;
+    void *buf = ffa_tx;
+    uint32_t frag_len;
+    uint32_t tot_len;
+    paddr_t last_pa;
+    unsigned int n;
+    paddr_t pa;
+
+    ASSERT(spin_is_locked(&ffa_tx_buffer_lock));
+    ASSERT(shm->page_count);
+
+    descr = buf;
+    memset(descr, 0, sizeof(*descr));
+    descr->sender_id = shm->sender_id;
+    descr->handle = shm->handle;
+    descr->mem_reg_attr = FFA_NORMAL_MEM_REG_ATTR;
+    descr->mem_access_count = 1;
+    descr->mem_access_size = sizeof(*mem_access_array);
+    descr->mem_access_offs = MEM_ACCESS_OFFSET(0);
+
+    mem_access_array = buf + descr->mem_access_offs;
+    memset(mem_access_array, 0, sizeof(*mem_access_array));
+    mem_access_array[0].access_perm.endpoint_id = shm->ep_id;
+    mem_access_array[0].access_perm.perm = FFA_MEM_ACC_RW;
+    mem_access_array[0].region_offs = REGION_OFFSET(descr->mem_access_count, 0);
+
+    region_descr = buf + mem_access_array[0].region_offs;
+    memset(region_descr, 0, sizeof(*region_descr));
+    region_descr->total_page_count = shm->page_count;
+
+    region_descr->address_range_count = 1;
+    last_pa = page_to_maddr(shm->pages[0]);
+    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
+    {
+        pa = page_to_maddr(shm->pages[n]);
+        if ( last_pa + FFA_PAGE_SIZE == pa )
+            continue;
+        region_descr->address_range_count++;
+    }
+
+    tot_len = ADDR_RANGE_OFFSET(descr->mem_access_count, region_count,
+                                region_descr->address_range_count);
+    if ( tot_len > max_frag_len )
+        return FFA_RET_NOT_SUPPORTED;
+
+    addr_range = region_descr->address_range_array;
+    frag_len = ADDR_RANGE_OFFSET(descr->mem_access_count, region_count, 1);
+    last_pa = page_to_maddr(shm->pages[0]);
+    init_range(addr_range, last_pa);
+    for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
+    {
+        pa = page_to_maddr(shm->pages[n]);
+        if ( last_pa + FFA_PAGE_SIZE == pa )
+        {
+            addr_range->page_count++;
+            continue;
+        }
+
+        frag_len += sizeof(*addr_range);
+        addr_range++;
+        init_range(addr_range, pa);
+    }
+
+    return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+}
+
+static int read_mem_transaction(uint32_t ffa_vers, const void *buf, size_t blen,
+                                struct ffa_mem_transaction_int *trans)
+{
+    uint16_t mem_reg_attr;
+    uint32_t flags;
+    uint32_t count;
+    uint32_t offs;
+    uint32_t size;
+
+    if ( ffa_vers >= FFA_VERSION_1_1 )
+    {
+        const struct ffa_mem_transaction_1_1 *descr;
+
+        if ( blen < sizeof(*descr) )
+            return FFA_RET_INVALID_PARAMETERS;
+
+        descr = buf;
+        trans->sender_id = descr->sender_id;
+        mem_reg_attr = descr->mem_reg_attr;
+        flags = descr->flags;
+        trans->handle = descr->handle;
+        trans->tag = descr->tag;
+
+        count = descr->mem_access_count;
+        size = descr->mem_access_size;
+        offs = descr->mem_access_offs;
+    }
+    else
+    {
+        const struct ffa_mem_transaction_1_0 *descr;
+
+        if ( blen < sizeof(*descr) )
+            return FFA_RET_INVALID_PARAMETERS;
+
+        descr = buf;
+        trans->sender_id = descr->sender_id;
+        mem_reg_attr = descr->mem_reg_attr;
+        flags = descr->flags;
+        trans->handle = descr->handle;
+        trans->tag = descr->tag;
+
+        count = descr->mem_access_count;
+        size = sizeof(struct ffa_mem_access);
+        offs = offsetof(struct ffa_mem_transaction_1_0, mem_access_array);
+    }
+    /*
+     * Make sure that "descr" which is shared with the guest isn't accessed
+     * again after this point.
+     */
+    barrier();
+
+    /*
+     * We're doing a rough check to see that no information is lost when
+     * transferring the values into a struct ffa_mem_transaction_int below.
+     * The fields in struct ffa_mem_transaction_int are wide enough to hold
+     * any valid value so being out of range means that something is wrong.
+     */
+    if ( mem_reg_attr > UINT8_MAX || flags > UINT8_MAX || size > UINT8_MAX ||
+        count > UINT8_MAX || offs > UINT16_MAX )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /* Check that the endpoint memory access descriptor array fits */
+    if ( size * count + offs > blen )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    trans->mem_reg_attr = mem_reg_attr;
+    trans->flags = flags;
+    trans->mem_access_size = size;
+    trans->mem_access_count = count;
+    trans->mem_access_offs = offs;
+
+    return 0;
+}
+
+static void handle_mem_share(struct cpu_user_regs *regs)
+{
+    uint32_t tot_len = get_user_reg(regs, 1);
+    uint32_t frag_len = get_user_reg(regs, 2);
+    uint64_t addr = get_user_reg(regs, 3);
+    uint32_t page_count = get_user_reg(regs, 4);
+    const struct ffa_mem_region *region_descr;
+    const struct ffa_mem_access *mem_access;
+    struct ffa_mem_transaction_int trans;
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+    struct ffa_shm_mem *shm = NULL;
+    unsigned int last_page_idx = 0;
+    register_t handle_hi = 0;
+    register_t handle_lo = 0;
+    int ret = FFA_RET_DENIED;
+    uint32_t range_count;
+    uint32_t region_offs;
+
+    /*
+     * We're only accepting memory transaction descriptors via the rx/tx
+     * buffer.
+     */
+    if ( addr )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_set_ret;
+    }
+
+    /* Check that fragment length doesn't exceed total length */
+    if ( frag_len > tot_len )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_set_ret;
+    }
+
+    /* We currently only support a single fragment */
+    if ( frag_len != tot_len )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_set_ret;
+    }
+
+    spin_lock(&ctx->lock);
+
+    if ( frag_len > ctx->page_count * FFA_PAGE_SIZE )
+        goto out_unlock;
+
+    ret = read_mem_transaction(ctx->guest_vers, ctx->tx, frag_len, &trans);
+    if ( ret )
+        goto out_unlock;
+
+    if ( trans.mem_reg_attr != FFA_NORMAL_MEM_REG_ATTR )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    /* We only support sharing with one SP for now */
+    if ( trans.mem_access_count != 1 )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    if ( trans.sender_id != get_vm_id(d) )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    /* Check that it fits in the supplied data */
+    if ( trans.mem_access_offs + trans.mem_access_size > frag_len )
+        goto out_unlock;
+
+    mem_access = ctx->tx + trans.mem_access_offs;
+    if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    region_offs = read_atomic(&mem_access->region_offs);
+    if ( sizeof(*region_descr) + region_offs > frag_len )
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out_unlock;
+    }
+
+    region_descr = ctx->tx + region_offs;
+    range_count = read_atomic(&region_descr->address_range_count);
+    page_count = read_atomic(&region_descr->total_page_count);
+
+    if ( !page_count )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    shm = alloc_ffa_shm_mem(ctx, page_count);
+    if ( !shm )
+    {
+        ret = FFA_RET_NO_MEMORY;
+        goto out_unlock;
+    }
+    shm->sender_id = trans.sender_id;
+    shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id);
+
+    /*
+     * Check that the Composite memory region descriptor fits.
+     */
+    if ( sizeof(*region_descr) + region_offs +
+         range_count * sizeof(struct ffa_address_range) > frag_len )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count,
+                        0, &last_page_idx);
+    if ( ret )
+        goto out;
+    if ( last_page_idx != shm->page_count )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    /* Note that share_shm() uses our tx buffer */
+    spin_lock(&ffa_tx_buffer_lock);
+    ret = share_shm(shm);
+    spin_unlock(&ffa_tx_buffer_lock);
+    if ( ret )
+        goto out;
+
+    list_add_tail(&shm->list, &ctx->shm_list);
+
+    uint64_to_regpair(&handle_hi, &handle_lo, shm->handle);
+
+out:
+    if ( ret )
+        free_ffa_shm_mem(ctx, shm);
+out_unlock:
+    spin_unlock(&ctx->lock);
+
+out_set_ret:
+    if ( ret == 0)
+            set_regs_success(regs, handle_lo, handle_hi);
+    else
+            set_regs_error(regs, ret);
+}
+
 static bool ffa_handle_call(struct cpu_user_regs *regs)
 {
     uint32_t fid = get_user_reg(regs, 0);
@@ -838,6 +1313,10 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
     case FFA_MSG_SEND_DIRECT_REQ_64:
         handle_msg_send_direct_req(regs, fid);
         return true;
+    case FFA_MEM_SHARE_32:
+    case FFA_MEM_SHARE_64:
+        handle_mem_share(regs);
+        return true;
 
     default:
         gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
@@ -879,6 +1358,8 @@ static int ffa_domain_init(struct domain *d)
         }
     }
 
+    INIT_LIST_HEAD(&ctx->shm_list);
+
     d->arch.tee = ctx;
 
     return 0;
@@ -1050,7 +1531,9 @@ static bool ffa_probe(void)
     if ( !check_mandatory_feature(FFA_PARTITION_INFO_GET) ||
          !check_mandatory_feature(FFA_RX_RELEASE) ||
          !check_mandatory_feature(FFA_RXTX_MAP_64) ||
+         !check_mandatory_feature(FFA_MEM_SHARE_64) ||
          !check_mandatory_feature(FFA_RXTX_UNMAP) ||
+         !check_mandatory_feature(FFA_MEM_SHARE_32) ||
          !check_mandatory_feature(FFA_MSG_SEND_DIRECT_REQ_32) )
         return false;
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:16:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:16:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520493.808221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrCH-0002EW-D3; Thu, 13 Apr 2023 07:16:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520493.808221; Thu, 13 Apr 2023 07:16:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrCH-0002EB-7u; Thu, 13 Apr 2023 07:16:45 +0000
Received: by outflank-mailman (input) for mailman id 520493;
 Thu, 13 Apr 2023 07:16:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrCG-0001wd-6K
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:16:44 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0ac74948-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:16:00 +0200 (CEST)
Received: by mail-lf1-x12c.google.com with SMTP id d7so28947048lfj.3
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:16:00 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ac74948-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370159; x=1683962159;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=yodICrBL6WOIoQq7XouRka4JhH5GEeCWApa9x3I7zt0=;
        b=TQjlYR0eNL3PQqTubzL6FvZ4C8RV2/tVxlgPb2sFYtuNXlIpwOWCppcmvIe4zIe6VV
         h/id3+UsnHaa+wwPJh3Yk05O3O4PzFz2lywfhUvt3lH4hHwVNclytoMHsiGyTkYaTn1H
         JXhGyHGP/xcNp7ni5CCR8pVsHdhlTJcClMvRX8kCRpTaYs1/qgElR1ZG5z0TOXl39bUl
         k6p7C2DA/NAf4h/hJ/oKU4Ni18Y3e5VPe5J5M3rIzab0gFCACyYOuHsLvAzeEM6+RF25
         s/0kLV8cVrCWZcbYAZQO8Bzv8ZDC/7jG53JulrOjbCXDMjF32i15ivwz15XrQwF1uv/g
         uOHg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370159; x=1683962159;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=yodICrBL6WOIoQq7XouRka4JhH5GEeCWApa9x3I7zt0=;
        b=iS7TF9cZURWBlGkzHebP/LAchVAqlXqn/sbO9rJv7owncJ9y6JIAT561FDvcG2vYEt
         UALnF2JLPyYCJMcqTcBgwnfxDq4lcJh3aL9Wlm2wVUQvwDd57LbFWORu+CURc8OkC22o
         nsHezcBGt5itZ51FJqPGjsIzwkNSnZDHyjNPy5wCj00IU0JeMsKV2B5y8NE86WTQyXkX
         cVoQoBjJHWlw7O2FPfykYFMsrJw7z6eIx63sz3W7S9/9RK9aHvnb1DroBC/2CPSpbnae
         HKpSmwqTLfL7Xw58m/TzsVYPPgH4ILwLAYzuQxKlsrMdsp5nctko9O/B1VCbyP0X4EjR
         LLGw==
X-Gm-Message-State: AAQBX9dSJELDOQ4IQe6YDDM+aC8IUr/hdtPXMJVAk+s2SioqX5kPpmoI
	hI0fW/1Senkl3ESHzdaavf07eX4B5usxh4BXuRE=
X-Google-Smtp-Source: AKy350bdjDbsFWiQejACd34meF454MJsfUSzRAHwjvgKOixy0ULG2yl6IgQa9uz5wrrP2F/JvSeG9A==
X-Received: by 2002:ac2:5a1c:0:b0:4ca:98ec:7d9a with SMTP id q28-20020ac25a1c000000b004ca98ec7d9amr550426lfn.15.1681370159616;
        Thu, 13 Apr 2023 00:15:59 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 21/22] xen/arm: ffa: list current limitations
Date: Thu, 13 Apr 2023 09:14:23 +0200
Message-Id: <20230413071424.3273490-22-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds a comment with a list of unsupported FF-A interfaces and
limitations in the implemented FF-A interfaces.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 0948cc636871..6424c222c885 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -13,6 +13,38 @@
  *                https://developer.arm.com/documentation/den0077/e
  * TEEC-1.0C: TEE Client API Specification version 1.0c available at
  *            https://globalplatform.org/specs-library/tee-client-api-specification/
+ *
+ * Notes on the current implementation.
+ *
+ * Unsupported FF-A interfaces:
+ * o FFA_MSG_POLL and FFA_MSG_SEND - deprecated in FF-A-1.1-REL0
+ * o FFA_MEM_RETRIEVE_* - Used when sharing memory from an SP to a VM
+ * o FFA_MEM_DONATE_* and FFA_MEM_LEND_* - Used when transferring ownership
+ *   or access of a memory region
+ * o FFA_MSG_SEND2 and FFA_MSG_WAIT - Used for indirect messaging
+ * o FFA_YIELD
+ * o FFA_INTERRUPT - Used to report preemption
+ * o FFA_RUN
+ *
+ * Limitations in the implemented FF-A interfaces:
+ * o FFA_RXTX_MAP_*:
+ *   - Maps RX and TX buffers of at most 32 4k pages each
+ *   - RX/TX buffers must be normal RAM
+ *   - Doesn't support forwarding this call on behalf of an endpoint
+ * o FFA_MEM_SHARE_*: only supports sharing
+ *   - from a VM to an SP
+ *   - with one borrower
+ *   - with the memory transaction descriptor in the RX/TX buffer
+ *   - normal memory
+ *   - memory regions of at most 512 kB
+ *   - at most 32 shared memory regions per guest
+ * o FFA_MSG_SEND_DIRECT_REQ:
+ *   - only supported from a VM to an SP
+ *
+ * There are some large sections locked with ffa_tx_buffer_lock and
+ * ffa_rx_buffer_lock. In particular, the ffa_tx_buffer_lock spinlock held
+ * around share_shm() covers a very large critical section, which can let
+ * one VM affect another VM.
  */
 
 #include <xen/bitops.h>
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:16:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:16:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520494.808231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrCJ-0002XJ-NV; Thu, 13 Apr 2023 07:16:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520494.808231; Thu, 13 Apr 2023 07:16:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrCJ-0002Wq-KH; Thu, 13 Apr 2023 07:16:47 +0000
Received: by outflank-mailman (input) for mailman id 520494;
 Thu, 13 Apr 2023 07:16:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrCH-0001wd-6d
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:16:45 +0000
Received: from mail-lf1-x134.google.com (mail-lf1-x134.google.com
 [2a00:1450:4864:20::134])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 09e05e9c-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:15:58 +0200 (CEST)
Received: by mail-lf1-x134.google.com with SMTP id d7so28946978lfj.3
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:58 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09e05e9c-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370158; x=1683962158;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=++rL2jsxUxRm5qFt5hBWNof5SzDBga+okTGdWBhdlBo=;
        b=G5BbTYqiSbzMXxtdyqd2Z8QovoHQoRitdheZIo/ph2u7styw58M7tIX9aUo0jRbZFp
         zGvyFwY5WjAOQYSNp+zNKFDZslgxwH8Lrry6Ewjw2bQtihPWJ0LOzaeimXOkRzQN8XIn
         NhXJv0joRONF4I0cwS0T/K4akUweFtNe+1pdDxb4UoJvRmegZJ9YsskTQSWX8bkDcqC1
         zqlqgI6+Mg0sNyiiFX2fYEmDi92IeVwfn/xpM0wzGVRWyWDZ9KTdMu6YKN5C+RuhV/7u
         ikBjLgVB0pWQ53RJ60asijR6aOIpv2Dxw1agHIajNs5KmbXD/6UNc4qAD3KlpLlGulBc
         Enbw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370158; x=1683962158;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=++rL2jsxUxRm5qFt5hBWNof5SzDBga+okTGdWBhdlBo=;
        b=ht8X1T3lsuaOx8UN+mUAUJSyALWq/66Ms8iuS6SeDsqJ1EaFl6KVs+jKVjVnEFEFSF
         EMFEX51HvMn7UbKmdQm7NycfXE9Y1lbRyCx5BxP0fndnyMyjpG9Obo93xu9/puCBQCHy
         ro6Zxl3BbwHLal4tkh9FURp0KjMOXs5zM+LvP2owGduDuwqUb5KRr7dfjMDWPLj1pBkV
         fdcQ3OSRk/+gYAl8P/z0GgfEkxPiTbe/Toj+URA2FSRbXmWFuxI+1R/DI1nX3D2DZLBn
         CW/CFg8n4h9QfhyBeHLcOXpHNWsOCeN045DEmZJdjJYnLqj6jgW273q4/+TD9OqjTda1
         cCew==
X-Gm-Message-State: AAQBX9dfxtTWSW7ghvwT4Xv0CEkOq8e0uQGtJdyJU+1YObIIQoFKv/4I
	1Cs5WX34ZuzxgbWP2Ws9tr3nJFGhQHB2GShU0hA=
X-Google-Smtp-Source: AKy350bJBmtxrb0v4lr9dv4qiCmPEngcbPR6AmM3VYhlPBzTM5TLdZGlcVcxTV9g4xHfF2f26ISMCg==
X-Received: by 2002:ac2:4195:0:b0:4e1:8309:1db5 with SMTP id z21-20020ac24195000000b004e183091db5mr505672lfh.2.1681370158063;
        Thu, 13 Apr 2023 00:15:58 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 19/22] xen/arm: ffa: support sharing large memory ranges
Date: Thu, 13 Apr 2023 09:14:21 +0200
Message-Id: <20230413071424.3273490-20-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds support for sharing large memory ranges transmitted in fragments
using FFA_MEM_FRAG_TX.

The implementation is the bare minimum to be able to communicate with
OP-TEE running as an SPMC at S-EL1.

Adds a check that the SP supports the needed FF-A feature
FFA_MEM_FRAG_TX.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 253 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 240 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 47ff899eca32..888e3f9265c2 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -356,6 +356,8 @@ struct ffa_ctx {
     /* FF-A version used by the guest */
     uint32_t guest_vers;
     bool rx_is_free;
+    /* Currently used fragment states, struct mem_frag_state */
+    struct list_head frag_list;
     /* Used shared memory objects, struct ffa_shm_mem */
     struct list_head shm_list;
     /* Number of allocated shared memory object */
@@ -372,6 +374,18 @@ struct ffa_shm_mem {
     struct page_info *pages[];
 };
 
+struct mem_frag_state {
+    struct list_head list;
+    struct ffa_shm_mem *shm;
+    uint32_t range_count;
+    unsigned int current_page_idx;
+    unsigned int frag_offset;
+    unsigned int range_offset;
+    const uint8_t *buf;
+    unsigned int buf_size;
+    struct ffa_address_range range;
+};
+
 /* Negotiated FF-A version to use with the SPMC */
 static uint32_t ffa_version __ro_after_init;
 
@@ -535,6 +549,36 @@ static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
     }
 }
 
+static int32_t ffa_mem_frag_tx(uint64_t handle, uint32_t frag_len,
+                               uint16_t sender_id)
+{
+    struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_MEM_FRAG_TX,
+        .a1 = handle & UINT32_MAX,
+        .a2 = handle >> 32,
+        .a3 = frag_len,
+        .a4 = (uint32_t)sender_id << 16,
+    };
+    struct arm_smccc_1_2_regs resp;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+
+    switch ( resp.a0 )
+    {
+    case FFA_ERROR:
+        if ( resp.a2 )
+            return resp.a2;
+        else
+            return FFA_RET_NOT_SUPPORTED;
+    case FFA_SUCCESS_32:
+        return FFA_RET_OK;
+    case FFA_MEM_FRAG_RX:
+        return resp.a3;
+    default:
+        return FFA_RET_NOT_SUPPORTED;
+    }
+}
+
 static int32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
                                uint32_t flags)
 {
@@ -609,6 +653,14 @@ static void set_regs_success(struct cpu_user_regs *regs, uint32_t w2,
     set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, 0, 0, 0, 0);
 }
 
+static void set_regs_frag_rx(struct cpu_user_regs *regs, uint32_t handle_lo,
+                             uint32_t handle_hi, uint32_t frag_offset,
+                             uint16_t sender_id)
+{
+    set_regs(regs, FFA_MEM_FRAG_RX, handle_lo, handle_hi, frag_offset,
+             (uint32_t)sender_id << 16, 0, 0, 0);
+}
+
 static void handle_version(struct cpu_user_regs *regs)
 {
     struct domain *d = current->domain;
@@ -977,6 +1029,8 @@ static int share_shm(struct ffa_shm_mem *shm)
     paddr_t last_pa;
     unsigned int n;
     paddr_t pa;
+    bool first;
+    int ret;
 
     ASSERT(spin_is_locked(&ffa_tx_buffer_lock));
     ASSERT(shm->page_count);
@@ -1012,13 +1066,23 @@ static int share_shm(struct ffa_shm_mem *shm)
 
     tot_len = ADDR_RANGE_OFFSET(descr->mem_access_count, region_count,
                                 region_descr->address_range_count);
-    if ( tot_len > max_frag_len )
-        return FFA_RET_NOT_SUPPORTED;
 
+    /*
+     * Sharing memory with the secure world may have to be done with multiple
+     * calls depending on how many address ranges will be needed. If we're
+     * sharing physically contiguous memory we will only need one range but
+     * we will also need to deal with the worst case where all physical
+     * pages are non-contiguous. For the first batch of address ranges we
+     * call ffa_mem_share() and for all that follows ffa_mem_frag_tx().
+     *
+     * We use frag_len to keep track of how far into the transmit buffer we
+     * have gone.
+     */
     addr_range = region_descr->address_range_array;
     frag_len = ADDR_RANGE_OFFSET(descr->mem_access_count, region_count, 1);
     last_pa = page_to_maddr(shm->pages[0]);
     init_range(addr_range, last_pa);
+    first = true;
     for ( n = 1; n < shm->page_count; last_pa = pa, n++ )
     {
         pa = page_to_maddr(shm->pages[n]);
@@ -1028,12 +1092,34 @@ static int share_shm(struct ffa_shm_mem *shm)
             continue;
         }
 
-        frag_len += sizeof(*addr_range);
-        addr_range++;
+        if ( frag_len == max_frag_len )
+        {
+            if ( first )
+            {
+                ret = ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+                first = false;
+            }
+            else
+            {
+                ret = ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
+            }
+            if ( ret <= 0 )
+                return ret;
+            frag_len = sizeof(*addr_range);
+            addr_range = buf;
+        }
+        else
+        {
+            frag_len += sizeof(*addr_range);
+            addr_range++;
+        }
         init_range(addr_range, pa);
     }
 
-    return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+    if ( first )
+        return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+    else
+        return ffa_mem_frag_tx(shm->handle, frag_len, shm->sender_id);
 }
 
 static int read_mem_transaction(uint32_t ffa_vers, const void *buf, size_t blen,
@@ -1110,8 +1196,53 @@ static int read_mem_transaction(uint32_t ffa_vers, const void *buf, size_t blen,
     return 0;
 }
 
+static int add_mem_share_frag(struct mem_frag_state *s, unsigned int offs,
+                              unsigned int frag_len)
+{
+    struct domain *d = current->domain;
+    unsigned int o = offs;
+    unsigned int l;
+    int ret;
+
+    if ( frag_len < o )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /* Fill up the first struct ffa_address_range */
+    l = min_t(unsigned int, frag_len - o, sizeof(s->range) - s->range_offset);
+    memcpy((uint8_t *)&s->range + s->range_offset, s->buf + o, l);
+    s->range_offset += l;
+    o += l;
+    if ( s->range_offset != sizeof(s->range) )
+        goto out;
+    s->range_offset = 0;
+
+    while ( true )
+    {
+        ret = get_shm_pages(d, s->shm, &s->range, 1, s->current_page_idx,
+                            &s->current_page_idx);
+        if ( ret )
+            return ret;
+        if ( s->range_count == 1 )
+            return 0;
+        s->range_count--;
+        if ( frag_len - o < sizeof(s->range) )
+            break;
+        memcpy(&s->range, s->buf + o, sizeof(s->range));
+        o += sizeof(s->range);
+    }
+
+    /* Collect any remaining bytes for the next struct ffa_address_range */
+    s->range_offset = frag_len - o;
+    memcpy(&s->range, s->buf + o, frag_len - o);
+out:
+    s->frag_offset += frag_len;
+
+    return s->frag_offset;
+}
+
 static void handle_mem_share(struct cpu_user_regs *regs)
 {
+    static uint64_t next_handle = FFA_HANDLE_HYP_FLAG;
     uint32_t tot_len = get_user_reg(regs, 1);
     uint32_t frag_len = get_user_reg(regs, 2);
     uint64_t addr = get_user_reg(regs, 3);
@@ -1146,13 +1277,6 @@ static void handle_mem_share(struct cpu_user_regs *regs)
         goto out_set_ret;
     }
 
-    /* We currently only support a single fragment */
-    if ( frag_len != tot_len )
-    {
-        ret = FFA_RET_NOT_SUPPORTED;
-        goto out_set_ret;
-    }
-
     spin_lock(&ctx->lock);
 
     if ( frag_len > ctx->page_count * FFA_PAGE_SIZE )
@@ -1218,6 +1342,36 @@ static void handle_mem_share(struct cpu_user_regs *regs)
     shm->sender_id = trans.sender_id;
     shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id);
 
+    if ( frag_len != tot_len )
+    {
+        struct mem_frag_state *s = xzalloc(struct mem_frag_state);
+
+        if ( !s )
+        {
+            ret = FFA_RET_NO_MEMORY;
+            goto out;
+        }
+        s->shm = shm;
+        s->range_count = range_count;
+        s->buf = ctx->tx;
+        s->buf_size = FFA_RXTX_PAGE_COUNT * FFA_PAGE_SIZE;
+        ret = add_mem_share_frag(s, sizeof(*region_descr) + region_offs,
+                                 frag_len);
+        if ( ret <= 0 )
+        {
+            xfree(s);
+            if ( ret < 0 )
+                goto out;
+        }
+        else
+        {
+            shm->handle = next_handle++;
+            uint64_to_regpair(&handle_hi, &handle_lo, shm->handle);
+            list_add_tail(&s->list, &ctx->frag_list);
+        }
+        goto out_unlock;
+    }
+
     /*
      * Check that the Composite memory region descriptor fits.
      */
@@ -1256,7 +1410,75 @@ out_unlock:
     spin_unlock(&ctx->lock);
 
 out_set_ret:
-    if ( ret == 0)
+    if ( ret > 0 )
+            set_regs_frag_rx(regs, handle_lo, handle_hi, ret, trans.sender_id);
+    else if ( ret == 0)
+            set_regs_success(regs, handle_lo, handle_hi);
+    else
+            set_regs_error(regs, ret);
+}
+
+static struct mem_frag_state *find_frag_state(struct ffa_ctx *ctx,
+                                              uint64_t handle)
+{
+    struct mem_frag_state *s;
+
+    list_for_each_entry(s, &ctx->frag_list, list)
+        if ( s->shm->handle == handle )
+            return s;
+
+    return NULL;
+}
+
+static void handle_mem_frag_tx(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+    uint32_t frag_len = get_user_reg(regs, 3);
+    uint32_t handle_lo = get_user_reg(regs, 1);
+    uint32_t handle_hi = get_user_reg(regs, 2);
+    uint64_t handle = regpair_to_uint64(handle_hi, handle_lo);
+    struct mem_frag_state *s;
+    uint16_t sender_id = 0;
+    int ret;
+
+    spin_lock(&ctx->lock);
+    s = find_frag_state(ctx, handle);
+    if ( !s )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+    sender_id = s->shm->sender_id;
+
+    if ( frag_len > s->buf_size )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_free_s;
+    }
+
+    ret = add_mem_share_frag(s, 0, frag_len);
+    if ( ret < 0 )
+        goto out_free_s;
+
+    /* Note that share_shm() uses our tx buffer */
+    spin_lock(&ffa_tx_buffer_lock);
+    ret = share_shm(s->shm);
+    spin_unlock(&ffa_tx_buffer_lock);
+    if ( ret < 0 )
+        goto out_free_s;
+    list_add_tail(&s->shm->list, &ctx->shm_list);
+out_free_s:
+    if ( ret < 0 )
+        free_ffa_shm_mem(ctx, s->shm);
+    list_del(&s->list);
+    xfree(s);
+out:
+    spin_unlock(&ctx->lock);
+
+    if ( ret > 0 )
+            set_regs_frag_rx(regs, handle_lo, handle_hi, ret, sender_id);
+    else if ( ret == 0)
             set_regs_success(regs, handle_lo, handle_hi);
     else
             set_regs_error(regs, ret);
@@ -1369,6 +1591,9 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
         else
             set_regs_success(regs, 0, 0);
         return true;
+    case FFA_MEM_FRAG_TX:
+        handle_mem_frag_tx(regs);
+        return true;
 
     default:
         gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
@@ -1410,6 +1635,7 @@ static int ffa_domain_init(struct domain *d)
         }
     }
 
+    INIT_LIST_HEAD(&ctx->frag_list);
     INIT_LIST_HEAD(&ctx->shm_list);
 
     d->arch.tee = ctx;
@@ -1586,6 +1812,7 @@ static bool ffa_probe(void)
          !check_mandatory_feature(FFA_MEM_SHARE_64) ||
          !check_mandatory_feature(FFA_RXTX_UNMAP) ||
          !check_mandatory_feature(FFA_MEM_SHARE_32) ||
+         !check_mandatory_feature(FFA_MEM_FRAG_TX) ||
          !check_mandatory_feature(FFA_MEM_RECLAIM) ||
          !check_mandatory_feature(FFA_MSG_SEND_DIRECT_REQ_32) )
         return false;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:19:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:19:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520509.808241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrET-0004My-FX; Thu, 13 Apr 2023 07:19:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520509.808241; Thu, 13 Apr 2023 07:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrET-0004Mr-9g; Thu, 13 Apr 2023 07:19:01 +0000
Received: by outflank-mailman (input) for mailman id 520509;
 Thu, 13 Apr 2023 07:19:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrCP-0001wd-7V
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:16:53 +0000
Received: from mail-lj1-x229.google.com (mail-lj1-x229.google.com
 [2a00:1450:4864:20::229])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0a5cc3e4-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:15:59 +0200 (CEST)
Received: by mail-lj1-x229.google.com with SMTP id be27so14525387ljb.12
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:59 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a5cc3e4-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370159; x=1683962159;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=AWMfdA9HpSiI1qDObUeAHUi8FAHJApSAQ901NaR8YIk=;
        b=ilktQ26XzJ4n2sJx6Kp+98uQyeKojkT1UGZZHwt26/EHVvDL/hzyJC04WzfdBm51/T
         6fgUEBd84Hg2QaI536ykBWPMyBZfVt5TsBpAnT9Wv1U+AW0EUTRQS0LgD3rDPHlvFG/Y
         tzaqt0zLmnJh8bA2Z9LpITJo9QM4jABS4faREWjii9jhPmOThPvq3WayjcFNe+ojnS6a
         F8QfR8cpV1SZ98n3JXp7rZNlIAQM+gt6rI73g4rF8potof+/3zSt+2dN63aEKWn2R6la
         I6fl3OVBz2U1k9nVbtvB56KLRVTKKmcQeWH8UIuMYUmgizyS/EFCYnDEXM44ETXUwFn1
         zgKQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370159; x=1683962159;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=AWMfdA9HpSiI1qDObUeAHUi8FAHJApSAQ901NaR8YIk=;
        b=aQc23mOcNzsVfdD6dQeZQx14Zo40Xg2Cw+M3P3116UJ6bAApB27wF5zJ+j5InxQsvg
         oJIWM0qsSrDZFquTqRYUg+onYBXX1nWIjM+qyxsfRVKqIkv81N4mpzGZFY234tLs7ImG
         TEzKxK2lWlXgOh6YqGbDy7wBnzZBDeiLCKgoktBsj8lZ9azBKnenPnrtek3i/AfM10vm
         o30jVXCLGB8PGN4b7xXQc5k63OmxuHGuM0fXrUHWBVyz2GoHc7EwjwX+nnHFR6+v5mhO
         FIsqEV9aUwwn1XPcx3Z1Zx8vWZ1dPf3dHXkXJsLr+xqYAc0TKa6ElMHuAovc3bM7QHqj
         V7sg==
X-Gm-Message-State: AAQBX9cZ8F3SbssddMKOWQvl9lFz55C+aRdapomv/32EIGbm39yjHe0C
	oPRkMLNFqGfevmUVdAFssNkwycKfDX3tpTmzm0A=
X-Google-Smtp-Source: AKy350bGmfrrRvHIGr0xu/AMw2wt+/mtPTeA7AKRuu86fhugHg801zDTzTHURvTo0DZpwjgCsXPKsw==
X-Received: by 2002:a2e:9d4d:0:b0:2a6:1960:4367 with SMTP id y13-20020a2e9d4d000000b002a619604367mr481467ljj.8.1681370158883;
        Thu, 13 Apr 2023 00:15:58 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 20/22] xen/arm: ffa: improve lock granularity
Date: Thu, 13 Apr 2023 09:14:22 +0200
Message-Id: <20230413071424.3273490-21-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The single lock in struct ffa_ctx is complemented with rx_lock and tx_lock.

The old lock is now only used for small critical sections, such as
increasing shm_count or adding another shm to shm_list.

rx_lock and tx_lock are only acquired using spin_trylock(), which for
well-behaved guests should always succeed. Guests using the RX and TX
buffers are expected to serialize accesses before doing the FF-A
request.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 124 ++++++++++++++++++++++++++++++-----------
 1 file changed, 91 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 888e3f9265c2..0948cc636871 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -362,6 +362,13 @@ struct ffa_ctx {
     struct list_head shm_list;
     /* Number of allocated shared memory object */
     unsigned int shm_count;
+    /*
+     * tx_lock is used to serialize access to the TX buffer
+     * rx_lock is used to serialize access to the RX buffer
+     * lock protects the remaining fields in this struct
+     */
+    spinlock_t tx_lock;
+    spinlock_t rx_lock;
     spinlock_t lock;
 };
 
@@ -796,7 +803,9 @@ static int32_t handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
     if ( !ffa_rx )
         return FFA_RET_DENIED;
 
-    spin_lock(&ctx->lock);
+    if ( !spin_trylock(&ctx->rx_lock) )
+        return FFA_RET_BUSY;
+
     if ( !ctx->page_count || !ctx->rx_is_free )
         goto out;
     spin_lock(&ffa_rx_buffer_lock);
@@ -847,7 +856,7 @@ out_rx_release:
 out_rx_buf_unlock:
     spin_unlock(&ffa_rx_buffer_lock);
 out:
-    spin_unlock(&ctx->lock);
+    spin_unlock(&ctx->rx_lock);
 
     return ret;
 }
@@ -858,13 +867,15 @@ static int32_t handle_rx_release(void)
     struct domain *d = current->domain;
     struct ffa_ctx *ctx = d->arch.tee;
 
-    spin_lock(&ctx->lock);
+    if ( !spin_trylock(&ctx->rx_lock) )
+        return FFA_RET_BUSY;
+
     if ( !ctx->page_count || ctx->rx_is_free )
         goto out;
     ret = FFA_RET_OK;
     ctx->rx_is_free = true;
 out:
-    spin_unlock(&ctx->lock);
+    spin_unlock(&ctx->rx_lock);
 
     return ret;
 }
@@ -973,30 +984,52 @@ static void put_shm_pages(struct ffa_shm_mem *shm)
     }
 }
 
+static bool inc_ctx_shm_count(struct ffa_ctx *ctx)
+{
+    bool ret = true;
+
+    spin_lock(&ctx->lock);
+    if ( ctx->shm_count >= FFA_MAX_SHM_COUNT )
+        ret = false;
+    else
+        ctx->shm_count++;
+    spin_unlock(&ctx->lock);
+
+    return ret;
+}
+
+static void dec_ctx_shm_count(struct ffa_ctx *ctx)
+{
+    spin_lock(&ctx->lock);
+    ASSERT(ctx->shm_count > 0);
+    ctx->shm_count--;
+    spin_unlock(&ctx->lock);
+}
+
 static struct ffa_shm_mem *alloc_ffa_shm_mem(struct ffa_ctx *ctx,
                                              unsigned int page_count)
 {
     struct ffa_shm_mem *shm;
 
-    if ( page_count >= FFA_MAX_SHM_PAGE_COUNT ||
-         ctx->shm_count >= FFA_MAX_SHM_COUNT )
+    if ( page_count >= FFA_MAX_SHM_PAGE_COUNT )
+        return NULL;
+    if ( !inc_ctx_shm_count(ctx) )
         return NULL;
 
     shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
     if ( shm )
-    {
-        ctx->shm_count++;
         shm->page_count = page_count;
-    }
+    else
+        dec_ctx_shm_count(ctx);
 
     return shm;
 }
 
 static void free_ffa_shm_mem(struct ffa_ctx *ctx, struct ffa_shm_mem *shm)
 {
-    if ( shm ) {
-        ASSERT(ctx->shm_count > 0);
-        ctx->shm_count--;
+    if ( shm )
+    {
+        dec_ctx_shm_count(ctx);
         put_shm_pages(shm);
         xfree(shm);
     }
@@ -1277,7 +1310,11 @@ static void handle_mem_share(struct cpu_user_regs *regs)
         goto out_set_ret;
     }
 
-    spin_lock(&ctx->lock);
+    if ( !spin_trylock(&ctx->tx_lock) )
+    {
+        ret = FFA_RET_BUSY;
+        goto out_set_ret;
+    }
 
     if ( frag_len > ctx->page_count * FFA_PAGE_SIZE )
         goto out_unlock;
@@ -1399,7 +1436,9 @@ static void handle_mem_share(struct cpu_user_regs *regs)
     if ( ret )
         goto out;
 
+    spin_lock(&ctx->lock);
     list_add_tail(&shm->list, &ctx->shm_list);
+    spin_unlock(&ctx->lock);
 
     uint64_to_regpair(&handle_hi, &handle_lo, shm->handle);
 
@@ -1407,7 +1446,7 @@ out:
     if ( ret )
         free_ffa_shm_mem(ctx, shm);
 out_unlock:
-    spin_unlock(&ctx->lock);
+    spin_unlock(&ctx->tx_lock);
 
 out_set_ret:
     if ( ret > 0 )
@@ -1442,7 +1481,12 @@ static void handle_mem_frag_tx(struct cpu_user_regs *regs)
     uint16_t sender_id = 0;
     int ret;
 
-    spin_lock(&ctx->lock);
+    if ( !spin_trylock(&ctx->tx_lock) )
+    {
+        ret = FFA_RET_BUSY;
+        goto out_set_ret;
+    }
+
     s = find_frag_state(ctx, handle);
     if ( !s )
     {
@@ -1467,15 +1511,20 @@ static void handle_mem_frag_tx(struct cpu_user_regs *regs)
     spin_unlock(&ffa_tx_buffer_lock);
     if ( ret < 0 )
         goto out_free_s;
+
+    spin_lock(&ctx->lock);
     list_add_tail(&s->shm->list, &ctx->shm_list);
+    spin_unlock(&ctx->lock);
+
 out_free_s:
     if ( ret < 0 )
         free_ffa_shm_mem(ctx, s->shm);
     list_del(&s->list);
     xfree(s);
 out:
-    spin_unlock(&ctx->lock);
+    spin_unlock(&ctx->tx_lock);
 
+out_set_ret:
     if ( ret > 0 )
             set_regs_frag_rx(regs, handle_lo, handle_hi, ret, sender_id);
     else if ( ret == 0)
@@ -1484,6 +1533,18 @@ out:
             set_regs_error(regs, ret);
 }
 
+/* Must only be called with ctx->lock held */
+static struct ffa_shm_mem *find_shm_mem(struct ffa_ctx *ctx, uint64_t handle)
+{
+    struct ffa_shm_mem *shm;
+
+    list_for_each_entry(shm, &ctx->shm_list, list)
+        if ( shm->handle == handle )
+            return shm;
+
+    return NULL;
+}
+
 static int handle_mem_reclaim(uint64_t handle, uint32_t flags)
 {
     struct domain *d = current->domain;
@@ -1494,29 +1555,26 @@ static int handle_mem_reclaim(uint64_t handle, uint32_t flags)
     int ret;
 
     spin_lock(&ctx->lock);
-    list_for_each_entry(shm, &ctx->shm_list, list)
-    {
-        if ( shm->handle == handle )
-            goto found_it;
-    }
-    shm = NULL;
-    ret = FFA_RET_INVALID_PARAMETERS;
-    goto out;
-found_it:
+    shm = find_shm_mem(ctx, handle);
+    if ( shm )
+        list_del(&shm->list);
+    spin_unlock(&ctx->lock);
+    if ( !shm )
+        return FFA_RET_INVALID_PARAMETERS;
 
     uint64_to_regpair(&handle_hi, &handle_lo, handle);
     ret = ffa_mem_reclaim(handle_lo, handle_hi, flags);
+
     if ( ret )
     {
-        shm = NULL;
-        goto out;
+        spin_lock(&ctx->lock);
+        list_add_tail(&shm->list, &ctx->shm_list);
+        spin_unlock(&ctx->lock);
+    }
+    else
+    {
+        free_ffa_shm_mem(ctx, shm);
     }
-
-    list_del(&shm->list);
-
-out:
-    free_ffa_shm_mem(ctx, shm);
-    spin_unlock(&ctx->lock);
 
     return ret;
 }
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:19:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:19:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520513.808251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrEU-0004d6-Nr; Thu, 13 Apr 2023 07:19:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520513.808251; Thu, 13 Apr 2023 07:19:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrEU-0004cl-Io; Thu, 13 Apr 2023 07:19:02 +0000
Received: by outflank-mailman (input) for mailman id 520513;
 Thu, 13 Apr 2023 07:19:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrBT-0001gq-Qa
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:15:55 +0000
Received: from mail-lf1-x131.google.com (mail-lf1-x131.google.com
 [2a00:1450:4864:20::131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 07a336ea-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:15:54 +0200 (CEST)
Received: by mail-lf1-x131.google.com with SMTP id z8so19679568lfb.12
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:54 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07a336ea-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370154; x=1683962154;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=OTVH9FKVgFMYmzPdlzuBZ/KTps+v6zvJIPz4zAQrzVk=;
        b=qhns4ayM4yU9XmaqWG8lePkqfyjARvlK8qOXmAnt7exl47vJPlIG7GOkizIDKe85cr
         87B0Z17dFrb952cUnI1MkOTr/HHEqF3hVUvroNHzidG61Ks6lKtj7GDQlyTuvrTKkNQP
         e94BO3L+xfeNkOIHfEVkAGp9SUDdTSmPkLHxjHeXfbEzjy9iGpqbR4hIEXEv85aO0zva
         Yv2chyl12UQxsl7Po0tVJd6H1bWn5s8M0qdNZjN6bjNMhyLF3FvmtPn4cLkoogsZmKSD
         1Bdt0EZLdxHxx0SkwX1NMR3eRhr6mmmoLS6mby0jvNPUYCa3m35mzaBLDzmSPImhrATt
         cQ9g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370154; x=1683962154;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=OTVH9FKVgFMYmzPdlzuBZ/KTps+v6zvJIPz4zAQrzVk=;
        b=WLvMSN6h0a0jcAzxXgayxQWZdgcXLJdCSnTV7SYabHcqltDfmTgyMoaLDWTdfenSne
         nXPQ2sPpriK2ftciTOE9oCS9feEaATJzPYeCrWguFwiznjQ9Mv4Gl7hkx0hkq+f58JnJ
         5cn7skFunvEGzGlhuKJjJolLdRD8j3qoH+lxt4/AjddOyXlZh69D/ER4tugqQ1BkJaP2
         OYm2CSiWGZwT1Syvb3ksBCXxr0gV74iNdw7vtVnP3Sk0svZ+eWEQgxI8AX/f2E2v38uP
         FyDrKYZBDt+Nt27DXdRrcuztJWwmm8N7we1D656jr27H5PdmgDepa632BEwzpcYM0pDF
         P4/w==
X-Gm-Message-State: AAQBX9e69hFDvBwEi+xQR8nlwrKixceYRfzoLGKR+hxMeowWKJIHrrdx
	h19aV6mq3ZRSgrgcjVBAOEyPLSvdKI5uu12GT4k=
X-Google-Smtp-Source: AKy350az7dkINsL9Mv2dlGn6+WRY2SwA1BjsfL4s8/eqQ95rDG24jwsgOfVPQduKpoGhhcELdoJTTg==
X-Received: by 2002:ac2:592a:0:b0:4eb:3021:3a8f with SMTP id v10-20020ac2592a000000b004eb30213a8fmr580571lfi.61.1681370154377;
        Thu, 13 Apr 2023 00:15:54 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [XEN PATCH v8 14/22] xen/arm: move regpair_to_uint64() and uint64_to_regpair() to regs.h
Date: Thu, 13 Apr 2023 09:14:16 +0200
Message-Id: <20230413071424.3273490-15-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Moves the two helper functions regpair_to_uint64() and
uint64_to_regpair() from xen/arch/arm/tee/optee.c to the common
Arm-specific regs.h. This enables reuse of these functions in the FF-A
mediator in a subsequent patch.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/include/asm/regs.h | 12 ++++++++++++
 xen/arch/arm/tee/optee.c        | 11 -----------
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/include/asm/regs.h b/xen/arch/arm/include/asm/regs.h
index 0693a681315f..aa39e83ee5f4 100644
--- a/xen/arch/arm/include/asm/regs.h
+++ b/xen/arch/arm/include/asm/regs.h
@@ -60,6 +60,18 @@ static inline bool guest_mode(const struct cpu_user_regs *r)
 register_t get_user_reg(struct cpu_user_regs *regs, int reg);
 void set_user_reg(struct cpu_user_regs *regs, int reg, register_t val);
 
+static inline uint64_t regpair_to_uint64(register_t reg0, register_t reg1)
+{
+    return ((uint64_t)reg0 << 32) | (uint32_t)reg1;
+}
+
+static inline void uint64_to_regpair(register_t *reg0, register_t *reg1,
+                                     uint64_t val)
+{
+    *reg0 = val >> 32;
+    *reg1 = (uint32_t)val;
+}
+
 #endif
 
 #endif /* __ARM_REGS_H__ */
diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 9cb9f16d43cb..47027ecef47c 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -268,17 +268,6 @@ static int optee_domain_init(struct domain *d)
     return 0;
 }
 
-static uint64_t regpair_to_uint64(register_t reg0, register_t reg1)
-{
-    return ((uint64_t)reg0 << 32) | (uint32_t)reg1;
-}
-
-static void uint64_to_regpair(register_t *reg0, register_t *reg1, uint64_t val)
-{
-    *reg0 = val >> 32;
-    *reg1 = (uint32_t)val;
-}
-
 static struct page_info *get_domain_ram_page(gfn_t gfn)
 {
     struct page_info *page;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:19:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:19:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520534.808261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrEr-0005YW-W9; Thu, 13 Apr 2023 07:19:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520534.808261; Thu, 13 Apr 2023 07:19:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrEr-0005YN-Sd; Thu, 13 Apr 2023 07:19:25 +0000
Received: by outflank-mailman (input) for mailman id 520534;
 Thu, 13 Apr 2023 07:19:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmrBU-0001gq-Qo
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 07:15:56 +0000
Received: from mail-lf1-x129.google.com (mail-lf1-x129.google.com
 [2a00:1450:4864:20::129])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0823b4e8-d9cb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 09:15:55 +0200 (CEST)
Received: by mail-lf1-x129.google.com with SMTP id m4so17676305lfj.13
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 00:15:55 -0700 (PDT)
Received: from rayden.urgonet (h-46-59-78-111.A175.priv.bahnhof.se.
 [46.59.78.111]) by smtp.gmail.com with ESMTPSA id
 n12-20020ac2490c000000b004dc83d04840sm181354lfi.79.2023.04.13.00.15.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Apr 2023 00:15:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0823b4e8-d9cb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681370155; x=1683962155;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=8pVvJSmvEUbWoR8Cd3hbfPcIyCeeGauP1BW/rZN5S8A=;
        b=RNWnGeHLfTdsBTj5l1au/ZPEzs0NX4KpVVgZq3rrBHb2zkglV7qAy9hipsYphVje+K
         d8xDdthy83lot7BtgNvicD6dVPsyZFv7bg5MfTvH9A/EvuRxYRSPf5cnzSNaoyNd+Ylk
         iHTl8GiMQMGMGebCKZLMZ4qngMBiz4NsjkI8S4XopdD6xnPyig/tQbPOiOX/nnBapHx8
         OnIsHydBrAUiGBbHOQ8Zp/hRcjqOHrEUUcX+7iNSe2E6eXa62GOEW90XldApCFVbLNXg
         X4YRtzOG8vPcnbNCViNmn6nuFuPARWZ5OE7gCPtLS4rmMXXDPjzpigisTYTFJ37b7pLU
         LWBQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681370155; x=1683962155;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=8pVvJSmvEUbWoR8Cd3hbfPcIyCeeGauP1BW/rZN5S8A=;
        b=gQlRvEv77admM5hxR8ltLqdI9yn6hoHo7K9k8VH3ErrAtm0H4qHra68q4+zuBxL6dx
         MMxmaw3Ji90lXG85dpkNajKaDJgj5rMUd+g9f6wPjprZPzYq89rkZWoqDliHW8sM9tX/
         l5EH6LtXzD/C+ztYfcT74/GOO9Y9j0VrKIrFBZBlfC4riNRWu14+lxJALbMu+U8c/jy1
         WHwA9/LpQrhKvcPwi3YfBs91OaQNVLokQGkcEA9br1uQncv2lat6YBlwfOCqgUKLg9nI
         racTcAxa4+rm6x9wz6z7A9Nz2FChIYkI/aFjqSrA1HtLkFGIairCgAUZHJtzW6OKTGv4
         17qg==
X-Gm-Message-State: AAQBX9cDwrzhNsigzQJlDPZqhD02aL0o+5KR8flpVZ7Ot+AcqAug4+5f
	iAawriKTq+YrGyPySZNl4+n1+K8wS31LB5Zbmy8=
X-Google-Smtp-Source: AKy350aISGmyJtT6PnyRYtke2nmyaEfxDFKxQorAbQ5fkpsNjHgfVnvRPhyT/3byMsJ8NdgcOkumvA==
X-Received: by 2002:ac2:5296:0:b0:4eb:11c:f83 with SMTP id q22-20020ac25296000000b004eb011c0f83mr599448lfm.49.1681370155064;
        Thu, 13 Apr 2023 00:15:55 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com,
	Marc Bonnici <marc.bonnici@arm.com>,
	Achin Gupta <achin.gupta@arm.com>,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [XEN PATCH v8 15/22] xen/arm: ffa: add defines for sharing memory
Date: Thu, 13 Apr 2023 09:14:17 +0200
Message-Id: <20230413071424.3273490-16-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230413071424.3273490-1-jens.wiklander@linaro.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adds the defines needed for sharing memory using the function
FFA_MEM_SHARE and friends.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 xen/arch/arm/tee/ffa.c | 60 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 74b8c517afb8..58c581c8ffc7 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -5,6 +5,14 @@
  * Arm Firmware Framework for ARMv8-A (FF-A) mediator
  *
  * Copyright (C) 2023  Linaro Limited
+ *
+ * References:
+ * FF-A-1.0-REL: FF-A specification version 1.0 available at
+ *               https://developer.arm.com/documentation/den0077/a
+ * FF-A-1.1-REL0: FF-A specification version 1.1 available at
+ *                https://developer.arm.com/documentation/den0077/e
+ * TEEC-1.0C: TEE Client API Specification version 1.0c available at
+ *            https://globalplatform.org/specs-library/tee-client-api-specification/
  */
 
 #include <xen/bitops.h>
@@ -80,6 +88,58 @@
  */
 #define FFA_MAX_RXTX_PAGE_COUNT         32
 
+/*
+ * Limit for shared buffer size. Please note that this define limits the
+ * number of pages. Since a user buffer need not be aligned to a page
+ * boundary, the user might not be able to share exactly
+ * FFA_MAX_SHM_PAGE_COUNT * FFA_PAGE_SIZE bytes.
+ *
+ * FF-A doesn't have any direct requirements on GlobalPlatform or vice
+ * versa, but an implementation can very well use FF-A in order to
+ * provide a GlobalPlatform interface on top.
+ *
+ * The GlobalPlatform TEE Client API specification requires that any TEE
+ * implementation allows sharing of buffers with a size of at least
+ * 512KB, defined in TEEC-1.0C page 24, Table 4-1,
+ * TEEC_CONFIG_SHAREDMEM_MAX_SIZE.
+ * Due to the alignment issue mentioned above, we need to increase this
+ * value by one page.
+ */
+#define FFA_MAX_SHM_PAGE_COUNT          (SZ_512K / FFA_PAGE_SIZE + 1)
+
+/*
+ * Limits the number of shared buffers that a guest can have at once.
+ * This is to prevent the case where a guest tricks Xen into exhausting
+ * its own memory by allocating many small buffers. This value has been
+ * chosen arbitrarily.
+ */
+#define FFA_MAX_SHM_COUNT               32
+
+/* FF-A-1.1-REL0 section 10.9.2 Memory region handle, page 167 */
+#define FFA_HANDLE_HYP_FLAG             BIT(63, ULL)
+#define FFA_HANDLE_INVALID              0xffffffffffffffffULL
+
+/*
+ * Memory attributes: Normal memory, Write-Back cacheable, Inner shareable
+ * Defined in FF-A-1.1-REL0 Table 10.18 at page 175.
+ */
+#define FFA_NORMAL_MEM_REG_ATTR         0x2fU
+/*
+ * Memory access permissions: Read-write
+ * Defined in FF-A-1.1-REL0 Table 10.15 at page 168.
+ */
+#define FFA_MEM_ACC_RW                  0x2U
+
+/* FF-A-1.1-REL0 section 10.11.4 Flags usage, page 184-187 */
+/* Clear memory before mapping in receiver */
+#define FFA_MEMORY_REGION_FLAG_CLEAR            BIT(0, U)
+/* Relayer may time slice this operation */
+#define FFA_MEMORY_REGION_FLAG_TIME_SLICE       BIT(1, U)
+/* Clear memory after receiver relinquishes it */
+#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH BIT(2, U)
+/* Share memory transaction */
+#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (1U << 3)
+
 /*
  * Flags and field values used for the MSG_SEND_DIRECT_REQ/RESP:
  * BIT(31): Framework or partition message
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 07:29:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 07:29:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520547.808271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrOC-0007La-V5; Thu, 13 Apr 2023 07:29:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520547.808271; Thu, 13 Apr 2023 07:29:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrOC-0007LT-QI; Thu, 13 Apr 2023 07:29:04 +0000
Received: by outflank-mailman (input) for mailman id 520547;
 Thu, 13 Apr 2023 07:29:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmrOC-0007LJ-1N; Thu, 13 Apr 2023 07:29:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmrOB-0002So-Sf; Thu, 13 Apr 2023 07:29:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmrOB-0007Jg-AZ; Thu, 13 Apr 2023 07:29:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmrOB-0007Sy-A9; Thu, 13 Apr 2023 07:29:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Uv7ypzs0/6mNaD/KTbc1L7eBNl1ONLFO4trSnEXMadI=; b=zEtUQ9lbjVAGPPe5FwuZV17nXO
	vxDPVS3LWWI0bdJKHRUX5RqbT9r/OVhBtHMwzgzusbpqkrpGx2VIK0vDi+6J2MNKlKCT+gg9djluj
	zKvFn1Yx/8rY31z7np73Ithj4bhPrSPCXAgHyHFUepxKt1OKHSd6DrlH22TDu+mf7AZw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180222-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180222: regressions - trouble: fail/pass/starved
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):starved:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):starved:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):starved:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):starved:nonblocking
    linux-linus:build-arm64-pvops:hosts-allocate:starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=0bcc4025550403ae28d2984bddacafbca0a2f112
X-Osstest-Versions-That:
    linux=e62252bc55b6d4eddc6c2bdbf95a448180d6a08d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Apr 2023 07:29:03 +0000

flight 180222 linux-linus real [real]
flight 180228 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180222/
http://logs.test-lab.xenproject.org/osstest/logs/180228/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-vhd     21 guest-start/debian.repeat fail REGR. vs. 180208

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180228-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180208
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180208
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180208
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180208
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180208
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl           1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-arm64-arm64-examine      1 build-check(1)               starved  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               starved  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               starved  n/a
 build-arm64-pvops             2 hosts-allocate               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                0bcc4025550403ae28d2984bddacafbca0a2f112
baseline version:
 linux                e62252bc55b6d4eddc6c2bdbf95a448180d6a08d

Last test of basis   180208  2023-04-11 19:10:17 Z    1 days
Testing same since   180222  2023-04-12 16:43:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  David Howells <dhowells@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            starved 
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     starved 
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0bcc4025550403ae28d2984bddacafbca0a2f112
Author: David Howells <dhowells@redhat.com>
Date:   Wed Apr 12 13:18:57 2023 +0100

    netfs: Fix netfs_extract_iter_to_sg() for ITER_UBUF/IOVEC
    
    Fix netfs_extract_iter_to_sg() for ITER_UBUF and ITER_IOVEC to set the
    size of the page to the part of the page extracted, not the remaining
    amount of data in the extracted page array at that point.
    
    This doesn't yet affect anything as cifs, the only current user, only
    passes in non-user-backed iterators.
    
    Fixes: 018584697533 ("netfs: Add a function to extract an iterator into a scatterlist")
    Signed-off-by: David Howells <dhowells@redhat.com>
    Reviewed-by: Jeff Layton <jlayton@kernel.org>
    Cc: Steve French <sfrench@samba.org>
    Cc: Shyam Prasad N <nspmangalore@gmail.com>
    Cc: Rohith Surabattula <rohiths.msft@gmail.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 08:00:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 08:00:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520566.808281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrs3-0002XB-Dj; Thu, 13 Apr 2023 07:59:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520566.808281; Thu, 13 Apr 2023 07:59:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmrs3-0002X4-B3; Thu, 13 Apr 2023 07:59:55 +0000
Received: by outflank-mailman (input) for mailman id 520566;
 Thu, 13 Apr 2023 07:59:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmrs2-0002Wu-13; Thu, 13 Apr 2023 07:59:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmrs1-0003Dl-VC; Thu, 13 Apr 2023 07:59:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmrs1-0001Cd-H8; Thu, 13 Apr 2023 07:59:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmrs1-0000Ag-Gj; Thu, 13 Apr 2023 07:59:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zO1qOIfB3F3Pz5XhwVLrj13qfFYpX86DZpPKCSyHt1Q=; b=y8zAy49bG6j2Css7PBMka6uamt
	pYfn5EQ23IkvGEGgmqn6K5oPolbiX03ywSLwvtZV/aQulMg04OMwK+NVNATaHZ7qOXDj3ApIb6CRV
	Az7v8rayV7TGIugHSw74g/MdGDz7upq6sENZaT+tKq63qVKox1Zu8HoG+UcnGbLi7SRs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180229-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180229: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=42b0443599a69c703034079cf2bd389fa3a6bfde
X-Osstest-Versions-That:
    ovmf=5430f7f60dee3747fff906b48718db8afb4589d9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Apr 2023 07:59:53 +0000

flight 180229 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180229/

Perfect :-)
All tests in this flight passed as required

version targeted for testing:
 ovmf                 42b0443599a69c703034079cf2bd389fa3a6bfde
baseline version:
 ovmf                 5430f7f60dee3747fff906b48718db8afb4589d9

Last test of basis   180223  2023-04-12 19:43:57 Z    0 days
Testing same since   180229  2023-04-13 06:12:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sam Kaynor <Sam.Kaynor@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5430f7f60d..42b0443599  42b0443599a69c703034079cf2bd389fa3a6bfde -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 09:10:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 09:10:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520574.808291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmsyA-0002fQ-Vg; Thu, 13 Apr 2023 09:10:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520574.808291; Thu, 13 Apr 2023 09:10:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmsyA-0002fJ-RS; Thu, 13 Apr 2023 09:10:18 +0000
Received: by outflank-mailman (input) for mailman id 520574;
 Thu, 13 Apr 2023 09:10:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4Ryz=AE=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pmsy9-0002f7-Rl
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 09:10:17 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 01591ccf-d9db-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 11:10:16 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id xi5so35699109ejb.13
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 02:10:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01591ccf-d9db-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681377016; x=1683969016;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Do4v+4gSZeDj6LuHJ8qdjlM3TTHlOiF+w4OATSLeFHM=;
        b=wGEP9wcTBajPafFO4L3RzrwQHHWrL+ZU6umu2uZGYSrXLPRSkdnGU1d65GUly6zZRq
         eT5qK00lxgi99YdAnBIOcG/PH83oNEnVkA0xsIOQS+EswUmmr2F+lluejQirkr9aF6N4
         2VDk+Fsz2QFbCWmSmlXB0hYER+MXHNCtOZSolffaY2IJ9owFQPuWjt+cfjsDpAJ6MEmC
         lDlrwyZRtVlkmzOEey5kzjgRgOvUj2wQmWeMy6SyFISVS3qd3YcTRCKgDC79OMf7GZ1T
         S3btuB1ODhmxglf2hpZTVM0MzrYorI2jz7bT8HxaSgYjbXpV79K4SG50b3eyTs1JKthu
         TcQw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681377016; x=1683969016;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Do4v+4gSZeDj6LuHJ8qdjlM3TTHlOiF+w4OATSLeFHM=;
        b=FsdrJvOiKGc/B1buCEauEbEfrjjSgCHI+QtHc8xppLmzWZM+ZfuHRIMzMy0kW3IlvG
         YaGgp1RJ4fTdXKDxMhG/0pEhhR7MaHFwpHVrod2dzJKTOGp4+saFICUTCzpfb/Qa2uwf
         nsKRAPvIUhDASsAQMz1p8Xcsd36UzusoBw7JO/+t1fk13npN7rlLYHqcdFHdF6VIwRax
         4LmywohRk/2KUD2BOTKRM+Gb3ZPj8cgjAAR4D3wKzdbf+rDh43Vb1ARx7vqcuMNHlcrd
         ZiXUh5sHoRIJYN1oo1NPDZnuumpqNBkbpyBob7bevhH/dF8j914uUCQulFF5atX0Lgh5
         Cdtw==
X-Gm-Message-State: AAQBX9daLbb7JlEK9IAsS/8W5rbdCcaam7sPqRV+AWGo12Fa5Lq4vN3t
	WiG7g3YJXT41x29UYr0WYJDCBx8wZFJg5xGikboEbw==
X-Google-Smtp-Source: AKy350byjrmiwhEHWnHNrUEegIDr5idiF0lEE4xD/k/48btffTUnjrwf7orRfE/QIRyTnbVQIaHIDqnClr3J21bxCL0=
X-Received: by 2002:a17:906:3c52:b0:94e:5c27:dc5f with SMTP id
 i18-20020a1709063c5200b0094e5c27dc5fmr883617ejg.6.1681377015834; Thu, 13 Apr
 2023 02:10:15 -0700 (PDT)
MIME-Version: 1.0
References: <20230412185102.441523-1-dwmw2@infradead.org> <CAFEAcA9G0KpkOivD8fBvEQwGcTsUQz53z5W53YcjcHmZGPHkmQ@mail.gmail.com>
 <ac9417c017a2f1bda399d831b100e9b009f8d4c2.camel@infradead.org>
In-Reply-To: <ac9417c017a2f1bda399d831b100e9b009f8d4c2.camel@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Thu, 13 Apr 2023 10:10:04 +0100
Message-ID: <CAFEAcA_UoiM5vFqvyia3tU0Kb9xCMkFUoRiDPrcqX9te33Ot+A@mail.gmail.com>
Subject: Re: [PATCH for-8.0 0/5] Xen emulation build/Coverity fixes
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, 
	Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
	Marc-André Lureau <marcandre.lureau@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson <richard.henderson@linaro.org>, 
	Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Daniel P. Berrangé <berrange@redhat.com>, 
	Thomas Huth <thuth@redhat.com>, Philippe Mathieu-Daudé <philmd@linaro.org>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 12 Apr 2023 at 20:01, David Woodhouse <dwmw2@infradead.org> wrote:
>
> On Wed, 2023-04-12 at 19:55 +0100, Peter Maydell wrote:
> > On Wed, 12 Apr 2023 at 19:52, David Woodhouse <dwmw2@infradead.org> wrote:
> > >
> > > Some Coverity fixes and minor cleanups. And most notably, dropping
> > > support for Xen libraries older than 4.7.1.
> > >
> > > I believe there are two issues that remain to be fixed. The x32 build
> > > fails, and I've seen patches which attempt to detect x32 and disable
> > > the Xen emulation. Along with assertions that we just shouldn't care.
> > > I don't have a strong opinion either way but it seems to be in hand.
> > >
> > > The other is the question of what Xen *actually* does if you try to
> > > unmap an IRQ_MSI_EMU PIRQ. I don't think Linux guests try that, and
> > > I'm fairly sure Windows doesn't even use MSI→PIRQ mappings in the
> > > first place, and I doubt any other guests care either. I'd like to
> > > establish the 'correct' behaviour and implement it, ideally before
> > > the 8.0 release, but it's going to take me a few days more.
> > >
> > > David Woodhouse (5):
> > >       hw/xen: Simplify emulated Xen platform init
> > >       hw/xen: Fix memory leak in libxenstore_open() for Xen
> > >       xen: Drop support for Xen versions below 4.7.1
> > >       hw/xen: Fix double-free in xen_console store_con_info()
> > >       hw/xen: Fix broken check for invalid state in xs_be_open()
> > >
> >
> > This is highly unlikely to make 8.0 at this point, FYI.
> > If there's anything in this you think is super-critical we
> > might be able to sneak it in.
>
> Nothing is super-critical except maybe the double-free in
> store_con_info(). That could lead to a crash on startup if the QEMU Xen
> console is being used.

I've cherry-picked that double-free patch to apply for 8.0; thanks.

-- PMM


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 09:14:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 09:14:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520578.808301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmt1v-0003EB-FN; Thu, 13 Apr 2023 09:14:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520578.808301; Thu, 13 Apr 2023 09:14:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmt1v-0003E4-Bs; Thu, 13 Apr 2023 09:14:11 +0000
Received: by outflank-mailman (input) for mailman id 520578;
 Thu, 13 Apr 2023 09:14:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmt1u-0003Du-4Z; Thu, 13 Apr 2023 09:14:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmt1u-0005ka-1w; Thu, 13 Apr 2023 09:14:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmt1t-0004DN-P9; Thu, 13 Apr 2023 09:14:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmt1t-0008Gq-OY; Thu, 13 Apr 2023 09:14:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=03mdalFGTVRv2rQnLt0Qmf3ASWvgXlzu0GsEW9bNnSY=; b=mkkbLdJs6R1yqPDEsn5lImKlJ6
	5U1JVenjeMQsPj0SAGPgVtDgyQQG1LDqsxCwze1FTnf2QPsts6nbTXWHNJDnzpcx6P6nIjOj6KwyL
	nqbhPA3i1E9SYzbMpVlaJtmLFC+zl8BVP7cDUwKPq55OZCAcl6hKj9mdnRF6AK9iQjvo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180220-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180220: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=abb02ce0e76a8e00026699a863ab2d11d88f56d4
X-Osstest-Versions-That:
    qemuu=6c50845a9183610cfd4cfffd48dfc704cd340882
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Apr 2023 09:14:09 +0000

flight 180220 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180220/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180207
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180207
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180207
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180207
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180207
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180207
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                abb02ce0e76a8e00026699a863ab2d11d88f56d4
baseline version:
 qemuu                6c50845a9183610cfd4cfffd48dfc704cd340882

Last test of basis   180207  2023-04-11 18:40:13 Z    1 days
Testing same since   180220  2023-04-12 12:10:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kevin Wolf <kwolf@redhat.com>
  Lukas Tschoke <lukts330@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   6c50845a91..abb02ce0e7  abb02ce0e76a8e00026699a863ab2d11d88f56d4 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 09:19:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 09:19:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520584.808310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmt7G-0003sP-2k; Thu, 13 Apr 2023 09:19:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520584.808310; Thu, 13 Apr 2023 09:19:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmt7F-0003sI-WF; Thu, 13 Apr 2023 09:19:42 +0000
Received: by outflank-mailman (input) for mailman id 520584;
 Thu, 13 Apr 2023 09:19:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=czPd=AE=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pmt7E-0003sB-N6
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 09:19:41 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20601.outbound.protection.outlook.com
 [2a01:111:f400:7eab::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4fe6dfb0-d9dc-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 11:19:37 +0200 (CEST)
Received: from CY5PR22CA0002.namprd22.prod.outlook.com (2603:10b6:930:16::11)
 by PH0PR12MB8005.namprd12.prod.outlook.com (2603:10b6:510:26c::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28; Thu, 13 Apr
 2023 09:19:33 +0000
Received: from CY4PEPF0000C964.namprd02.prod.outlook.com
 (2603:10b6:930:16:cafe::77) by CY5PR22CA0002.outlook.office365.com
 (2603:10b6:930:16::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 09:19:33 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CY4PEPF0000C964.mail.protection.outlook.com (10.167.241.68) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Thu, 13 Apr 2023 09:19:33 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 04:19:32 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 13 Apr 2023 04:19:31 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fe6dfb0-d9dc-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j1uJH9teiH/Qn4AgQdU7+QbwJa6CGDjg0581Vn62iJTueGlk3B7l48xh2s2eiR0ppkb2pjNk5z+xA6u3bcIVFUsPzbCEmXczFdbOKAUl5dSz/5Gfw6J1lHGYyEWQ+o6cuopLYcmX7Ie7q9K/CEzKZ8TuzHFn0tYymdmTRJqwG01t+Kp6HdfFsnODlPKlXxK2cXi1wk7C+/CkCSciX5n9ZNST9r4xi9cXfzVaL2qSa3be3CHNjlCdUnIXVrDmicJ0Z9poWY+LZ0O8h2sqtER0+uPIEvFQ+WsVs8dz5o0jDxg+5vX/AsY5kvLGlE4B3R33R+AA/k4VEdUponkRP9Qt7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rhId+4L7TrpgnRGi0eVPN6ygrjDhOuIpBpqxIPPVTL4=;
 b=nd2K3SWXZpgtIy7biFLI9suKMhmBF9vTKi8w4XOiC5XcRLHCYpnPvId6Z09fmwmeNVj0N1AyLMY7ssVeMdk8t0sPsHqrVp+jH7FbcpXblOFAAttDBhnp74tZ4ioK1s2JeH1Ugh6Lh3H5oikAshS40Pnj9fKrsg2TmQuKgxrcLDx0As0ahkFeZ7Vu/+Z/cC7ScoxXR86tymB7qAuDHQxJj+rEJg0aDXBLurEjzIn79LnmApM3u9ebAp14lq2atBrRxX1ScxHSHkU7XIto2vkQRs3O0xRxC8f7UTeiFkcOsAJgDisMf9jDFpu/18/MluMQKvnhCtI+w+FAQGTgY/cGig==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rhId+4L7TrpgnRGi0eVPN6ygrjDhOuIpBpqxIPPVTL4=;
 b=SbAVmjuP69ZYNgwwKurCGP6UasO8sgQIosYJMwLRt+1oflz2teaVqrkMkKjmw6Q6joQ2aF7tf0hdI4IlnF6sVH3fBaX4L4eJp1h3c5o22DTAjUtW1shDRKS/+zMyO3wRbSZz+aEqn6B7QL4ZiWEQvkIvd9Ax1H5WkDseOVZDIjg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <836d2629-b097-9a8e-7aea-3f83fd13228f@amd.com>
Date: Thu, 13 Apr 2023 11:19:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [XEN][PATCH v5 01/17] xen/arm/device: Remove __init from function
 type
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-2-vikram.garhwal@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-2-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000C964:EE_|PH0PR12MB8005:EE_
X-MS-Office365-Filtering-Correlation-Id: d05e48d2-1a3e-4ec6-fce8-08db3c003182
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nkDZdVSaZ8WSAwZiSKeVyOVTy+EDpVyOZDFlNgyHq/NrlZVdHjxUHoo5nl7q8XmrHkcwgofoMSmDy5wB3nQD8khSjGr9FRqlGggHZM+8ca3Z9I/uOykYDbvwXBDv4vdet6ZtJwY/VYs7lceH08fCF6xLIeZJr4MccUG2wgvfg8uNV6agfUg1QUh6m5xdFDYzb/HGGqc2PCeSC3ip72Tp8jBccYMTlW3hGZciyw+Fo9NYAYE7WcEfiBb18ThFFn1zLpE0+7hwm5PIkQS9gdrkaP+Vx4goQtIDU+C0vWdp5z/aEw9IWJ3BRCglhJZNe9PHarL9FCqn70oeU9UDYy3QuoSOk4mt9pLMHlYs9DK/NMGhsa5frQS3n1qc1QrybqXU7+aBtdnXyVFTX11OGnKpHBGBmzlUBr56vDf+lRWSd6j3PwdV84qlPV3rZ4V6yP+VEPLvo0qla0ei61XDBhS/B+IzZC8671Fss3inL4J4+itKDXo2U8jB/Y+Ac0foLwvY45MAToxQ1SWxCZs//vpDtIxAX0yQQHNxBAjKHntyklOyX3CMjjzxdpqhY2kJQmbreotOh90tRqyzwQKnibcEMzXaKFxhtlqj4pAPtkn3Un8wqiiwv3ariQ5yS/JWp4SuTSuCjPO5rZ1gM25uv7bUQfXDNf8IlYZHuxctTJKa5pHOIA0DGNnym4w5LzIMg+cYVCPRdpEI3znGjvi6V/nTtQ02ko0qPIRVOiLFNkd3LsQvHckDS9z5vXlYNZWQGCYKvz1UZ22pRdjYH6SaWEPUUQ==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(39860400002)(396003)(346002)(451199021)(46966006)(40470700004)(36840700001)(31686004)(36860700001)(5660300002)(36756003)(2906002)(44832011)(31696002)(316002)(86362001)(8676002)(8936002)(40480700001)(82310400005)(356005)(70206006)(4326008)(70586007)(81166007)(41300700001)(40460700003)(2616005)(82740400003)(336012)(426003)(83380400001)(47076005)(54906003)(186003)(26005)(110136005)(53546011)(16576012)(478600001)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 09:19:33.0297
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d05e48d2-1a3e-4ec6-fce8-08db3c003182
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000C964.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB8005

Hi Vikram,

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Remove __init from following function to access during runtime:
>     1. map_irq_to_domain()
>     2. handle_device_interrupt()
s/interrupt/interrupts/ since there is no handle_device_interrupt() function.

>     3. map_range_to_domain()
>     4. unflatten_dt_node()
>     5. unflatten_device_tree()
> 
> Move map_irq_to_domain() prototype from domain_build.h to setup.h.
> 
> To avoid the breaking the build, following changes are also done:
'avoid breaking' instead of 'avoid the breaking'.

> 1. Move map_irq_to_domain(), handle_device_interrupt() and map_range_to_domain() to
s/interrupt/interrupts/

>     device.c. After removing __init type,  these functions are not specific to
>     domain building, so moving them out of domain_build.c to device.c.
> 2. Remove static type from handle_device_interrupt().
> 
> Overall, these changes are done to support the dynamic programming of a nodes
> where an overlay node will be added to fdt and unflattened node will be added to
> dt_host. Furthermore, IRQ and mmio mapping will be done for the added node.
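(As a side note for context, the effect of dropping __init can be sketched like this: in Xen, as in Linux, __init places a function in the .init.text section, which is reclaimed once boot completes, so such a function must never be called at runtime. The macro expansion and function names below are illustrative, not Xen's actual definitions.)

```c
#include <assert.h>

/* Illustrative model of the annotation: __init moves the function's code
 * into a boot-only section; removing it keeps the code resident so it can
 * be called at runtime (e.g. for dynamic node programming). */
#define __init __attribute__((__section__(".init.text")))

static int __init boot_only_helper(int irq)   /* discarded after boot */
{
    return irq >= 0 ? 0 : -1;
}

static int runtime_helper(int irq)            /* stays resident */
{
    return irq >= 0 ? 0 : -1;
}
```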
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/arch/arm/device.c                   | 145 ++++++++++++++++++++++++
>  xen/arch/arm/domain_build.c             | 142 -----------------------
>  xen/arch/arm/include/asm/domain_build.h |   2 -
>  xen/arch/arm/include/asm/setup.h        |   6 +
>  xen/common/device_tree.c                |  16 +--
>  5 files changed, 159 insertions(+), 152 deletions(-)
> 
> diff --git a/xen/arch/arm/device.c b/xen/arch/arm/device.c
> index ca8539dee5..fec6e29c42 100644
> --- a/xen/arch/arm/device.c
> +++ b/xen/arch/arm/device.c
> @@ -12,6 +12,9 @@
>  #include <xen/errno.h>
>  #include <xen/init.h>
>  #include <xen/lib.h>
> +#include <xen/iocap.h>
> +#include <asm/domain_build.h>
I can't see why we need to include this header.

> +#include <asm/setup.h>
You should keep the alphabetical order so:
- iocap goes after init.h
- setup.h goes after device.h
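With that ordering applied, the include block would look something like this (a sketch only; <asm/device.h> is assumed to already be among device.c's includes, and the full list may differ):

```c
#include <xen/errno.h>
#include <xen/init.h>
#include <xen/iocap.h>     /* moved: after init.h, alphabetical */
#include <xen/lib.h>

#include <asm/device.h>    /* assumed already present */
#include <asm/setup.h>     /* moved: after device.h, alphabetical */
```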

Apart from that, it looks ok so:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 09:46:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 09:46:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520589.808320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmtWv-0007P7-6H; Thu, 13 Apr 2023 09:46:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520589.808320; Thu, 13 Apr 2023 09:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmtWv-0007P0-2t; Thu, 13 Apr 2023 09:46:13 +0000
Received: by outflank-mailman (input) for mailman id 520589;
 Thu, 13 Apr 2023 09:46:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=czPd=AE=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pmtWt-0007Ou-T9
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 09:46:11 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20624.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 03443134-d9e0-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 11:46:08 +0200 (CEST)
Received: from MW4PR04CA0115.namprd04.prod.outlook.com (2603:10b6:303:83::30)
 by MW3PR12MB4588.namprd12.prod.outlook.com (2603:10b6:303:2e::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 09:46:04 +0000
Received: from CO1NAM11FT043.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:83:cafe::b8) by MW4PR04CA0115.outlook.office365.com
 (2603:10b6:303:83::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.40 via Frontend
 Transport; Thu, 13 Apr 2023 09:46:04 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT043.mail.protection.outlook.com (10.13.174.193) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.31 via Frontend Transport; Thu, 13 Apr 2023 09:46:03 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 04:46:02 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 02:45:57 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 13 Apr 2023 04:45:57 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03443134-d9e0-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lrAc0zm+IydeOP9bEwZoAMbbOGokSPSy+fMqeGJToExmQbpzxgzhPZKL8EJEtw3xqA3XzmIdUF8f9A+xnZSihZD7k2qE2EW+WSL/ea34IQDvRmNS81WgxomopV8ebOSQ/GWuZaGs7DHslBrsDKpBbX+jQT2QUNdWUbx2Ar8XcJrFSc6SLRk1Wd1fqReuqNXLKsHQLLz/r3n97GJ1fj+1te6hgu03Wz6nnMyhw8+WeUXUJHEt297g/7/66LKaOQCb8ht2mrzjE71m5DojCtCq6/v9dIemfH++/2Gzs9yT446OU2lbwsPnLy9uP9aWasfiTUvLaJPR6BTsHuGPpCSqPg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yBE32qRAmtmPbFr0kHVShrV/37fYMBBmHh6EfBSAWCA=;
 b=OdtewoJGY/wUSlJ7ePeAf+dEvl0c3iRYJUmcfDcVq9fNXTvEvANuKKDl1wAg4RWS+CdQql3G1DawhpSOpk+xhnbctSiBSo+LEqlq/hauf2JOuwwDgIsjmOg9cO+nWuSnHsaqgXL2oVrGOu8MvOrkOi7yX1fWgn9K/xWls2/MDKEs9XtsDmGRVR+dS1wkFjZXgRbgX4x2526F47NpGzbqpKTNwRCnynJNMqUVipTTfCa5jQ8ED63TGrTYGXNJpQMIzLBwu6lnJ+vxO8Go11B5Jq/DJq739J+8w9sb1xwpFdKsoSNmyONoGV0bF9CyUe+b6S1Aq22HadlYpZn9I3o0sw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yBE32qRAmtmPbFr0kHVShrV/37fYMBBmHh6EfBSAWCA=;
 b=Mm/3/EIfnpaSPE4ed0Bzpb4RMs/1X5SuPTbkRm0gK6EazRAZOoXkFdin7fnCjKlK/Cq8zS0Mfm/JNlWIupE3tZdBggdle57mm4Rv7pTMuO5hSe4G00py50nly68Q2KMDnLO7VWHSutIAphyB7XJMNPN5zfHdz1emgODbqKnFAh4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <3f98a8b0-dea6-4ae4-137a-312e89af879b@amd.com>
Date: Thu, 13 Apr 2023 11:45:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [XEN][PATCH v5 02/17] common/device_tree: change
 __unflatten_device_tree()
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-3-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-3-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT043:EE_|MW3PR12MB4588:EE_
X-MS-Office365-Filtering-Correlation-Id: 4e9e7ae2-0701-4a4c-2067-08db3c03e5c8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	bbeuh9WHfASCepKZohvmeTSMfS1TuhzHoregaTFlt1jKbqnLlqJ6v3nfaGEdrsbQEWcLFlv1HC5CwajTOuhlHNZNFozSAZ7X8VnuvJLxqLmQRjtBQb8EMXnXjjWq1x2i/sgYp6mq8b1uiWthTf3f6aPPtI0p0SEyXuebRb4ckhb9uHY86l/FHphL7sL9lpsqgYWsfhOh4kpzE/P6bAPCujv8BJRePCqeSfik6mMipj6aSLkakL3XTvxB5LFFlQX8eYylkLZl2NaWGmSEDJGLcJzX1soTSlQv+eK8FckCufOOginuDDa4YCuRz8gDWVHI9omuZzYvVMBeKr+MGOGSZwsXcz7yDZ0aj33+1L6ijN2MnY94grltjD/yNoy5Lfk/bHljFfN87DBbrEtkPqznqs1rKeM5iZR+YFid4cyb/BdSZTIC/3f/WJRyOpF9nx+FiLKil6gzoZLGlMBPzsX9T+d9wkvW6Z4pjgRW/UroTrQpWz7NZVpe6qQIpa1rTjk02sPlzjvprUJwP5yv5rxEPoYY0XqN4CQfiLLUgQhsabNpH2TID9A5czB/jWPXAtpfQD3UFItEaQf/nDl1lrr4UoSCP4CuP20rsJRXWdSV0xEyhDZquPjveg4vGCOjSm8H70jcsBwbUZQBGBptSf/RojvVTO6+G4tGWhbqrBI1xjrq2kPOlMu2JecDHppgr9fARU2mK+nUWYNsBA0UUt5eMFjX4r5YSCa2ogTDbATNS62iaGOsztNa+WABY3HRM0twFySxxL/eVBZCdFaIfEkhwg==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(39860400002)(136003)(396003)(451199021)(46966006)(36840700001)(40470700004)(31686004)(36860700001)(336012)(5660300002)(36756003)(4744005)(2906002)(44832011)(31696002)(316002)(40480700001)(8676002)(8936002)(86362001)(41300700001)(81166007)(356005)(40460700003)(82310400005)(4326008)(82740400003)(70206006)(426003)(70586007)(2616005)(47076005)(54906003)(16576012)(186003)(478600001)(110136005)(26005)(53546011)(6666004)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 09:46:03.9317
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4e9e7ae2-0701-4a4c-2067-08db3c03e5c8
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT043.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW3PR12MB4588

Hi Vikram,

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Following changes are done to __unflatten_device_tree():
>     1. __unflatten_device_tree() is renamed to unflatten_device_tree().
>     2. Remove static function type.
I think there was no need to touch this function in patch 1 if you are
modifying it again here in a separate patch.

>     3. Add handling of memory allocation. This will be useful in dynamic node
>         programming when we unflatten the dt during runtime memory allocation
>         can fail.
Didn't we say that checking if the memory allocation failed or not should be done
as a separate patch (i.e. a prerequisite to your series) as part of hardening?
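(For reference, the hardening pattern being asked for could be sketched as below: check the allocation and propagate -ENOMEM to the caller instead of assuming success. The helper name and shape are hypothetical, not Xen's real unflatten API.)

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: an unflatten-style helper that can fail cleanly
 * when called at runtime, rather than assuming boot-time allocations
 * always succeed. */
static int unflatten_example(size_t size, char **out)
{
    char *buf = malloc(size);

    if ( buf == NULL )
        return -ENOMEM;   /* runtime caller can now handle the failure */

    memset(buf, 0, size);
    *out = buf;

    return 0;
}
```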

In any case (depending on the maintainers vote), the change itself looks ok, so:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 09:54:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 09:54:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520594.808333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmteG-0000Tt-W0; Thu, 13 Apr 2023 09:53:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520594.808333; Thu, 13 Apr 2023 09:53:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmteG-0000Tm-TO; Thu, 13 Apr 2023 09:53:48 +0000
Received: by outflank-mailman (input) for mailman id 520594;
 Thu, 13 Apr 2023 09:53:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmteF-0000Tg-VR
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 09:53:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmteF-0006bU-So; Thu, 13 Apr 2023 09:53:47 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.20.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmteF-0006bg-Lb; Thu, 13 Apr 2023 09:53:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=3TMnKaazw7RkdLGoudNI4e9xwLmqht+031sHO8iWC7s=; b=CUnW6t+sGMVByDDCo4bpqGLBZN
	mj4xEuar3YKYVTV+Uks8TXvHqSrSe8/BWtTo1SQUnmhe3Zbf4Vx2APUUDNQR7GJOJFlgnhf4ob80F
	y61AH9urPjtA7/SLBn9+JkTQT+XDmWJVA7EW3XPHGHjjO2NUCGe8yw6S08MvI4c5h/CU=;
Message-ID: <e714186d-0ebb-ecf0-0800-c9fc0bdcaa6c@xen.org>
Date: Thu, 13 Apr 2023 10:53:45 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [XEN][PATCH v5 02/17] common/device_tree: change
 __unflatten_device_tree()
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>,
 Vikram Garhwal <vikram.garhwal@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-3-vikram.garhwal@amd.com>
 <3f98a8b0-dea6-4ae4-137a-312e89af879b@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <3f98a8b0-dea6-4ae4-137a-312e89af879b@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/04/2023 10:45, Michal Orzel wrote:
> On 11/04/2023 21:16, Vikram Garhwal wrote:
>>
>>
>> Following changes are done to __unflatten_device_tree():
>>      1. __unflatten_device_tree() is renamed to unflatten_device_tree().
>>      2. Remove static function type.
> I think there was no need to touch this function in patch 1 if you are modifying it here
> additionally in a separate patch.
> 
>>      3. Add handling of memory allocation. This will be useful in dynamic node
>>          programming when we unflatten the dt during runtime memory allocation
>>          can fail.
> Didn't we say that checking if the memory allocation failed or not should be done
> as a separate patch (i.e. a prerequisite to your series) as part of hardening?
> 
> In any case (depending on the maintainers vote), the change itself looks ok, so:

Yes, it should be separate because this is something we may want to 
consider backporting.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 09:58:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 09:58:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520598.808344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmtib-00018u-GR; Thu, 13 Apr 2023 09:58:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520598.808344; Thu, 13 Apr 2023 09:58:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmtib-00018n-Dj; Thu, 13 Apr 2023 09:58:17 +0000
Received: by outflank-mailman (input) for mailman id 520598;
 Thu, 13 Apr 2023 09:58:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=czPd=AE=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pmtia-00018h-NY
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 09:58:16 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2061a.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::61a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b367ec3a-d9e1-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 11:58:13 +0200 (CEST)
Received: from BN6PR17CA0038.namprd17.prod.outlook.com (2603:10b6:405:75::27)
 by MW4PR12MB7440.namprd12.prod.outlook.com (2603:10b6:303:223::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 09:58:10 +0000
Received: from BN8NAM11FT078.eop-nam11.prod.protection.outlook.com
 (2603:10b6:405:75:cafe::29) by BN6PR17CA0038.outlook.office365.com
 (2603:10b6:405:75::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.33 via Frontend
 Transport; Thu, 13 Apr 2023 09:58:09 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT078.mail.protection.outlook.com (10.13.176.251) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.31 via Frontend Transport; Thu, 13 Apr 2023 09:58:09 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 04:58:09 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 02:58:08 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 13 Apr 2023 04:58:07 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b367ec3a-d9e1-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AdmCoO3i0uOWdS5tE2iX6csiz103s+ES9/P7W8pL8UVqCVu5MA7+cZ0SeKxVtbguyHRQGByHw5ICRQSgmw4/Ph0RdsoouRs/G7n6cg1+8dgAEoevWzk+9N/ZwMDN+wob372C0XRtFaEYy6sDq6fjYNt7/s84pNQHS0KRwdydkCnW2d0PfGuo6lNqQb9XGEH8KvklEVdVSZO5vkVrkJf6JRX8iTITz/WG9Vcq8LYh3gGOd7SFgpt1EYtVLo5N6Djyalw0CDMDq391Vxaz+/v83+Y7fYCCUiNgsbCrZLtLfqd7gvary2augLNs3VtB1yazeLEczFDXdLI/IqfGCmh5yA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=s7hXF7uogdRLXdoz4J9okqV8ECVMCAycJUpqZlbPpR0=;
 b=frUgQpfv5ZZDlnfLjv1Wg4goY2RiR4OKPdFX4WZEim8PoZkKvuBXG9HDZyNV97IwNvpliw5f3mICS0CHp8bwdyMjgAonXjEIg8JhcF3y9hHQmb4MTu5i+k18Tc4Z5Jv8sevBcJ372zddHnFHaUa7Es9TvpDa/Mb8LQ+3we7BQXwephPmhnh7AHCAD23pJjn86TMG2zrnSZkB1RdAd3Q+Sbe4sNsJ2efXLRq3MFjdQxJqwPNPD38Mj6SZjF6XRAW027/aVow1jLnNB2oAuE7kikFmo6/Y9+G7CjoWrx3ayDoZKa3rJklq5uIsplB8Wavrund7rCokHt+xNCh8khjW7g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=s7hXF7uogdRLXdoz4J9okqV8ECVMCAycJUpqZlbPpR0=;
 b=pyQ+Gcn37YLFnaHc9oeV6toAmVZ9nOB7yCnRy7TqqyD+WZF47xf+JKZgoJuT2nERs0AZVLDuXQKwKj5xVQd//4zWRr6Q9bDbnt/5PM4DPkj5ebnJjJiSfYu54CrwJqFdgWM1QIEZlvLanUHjKnX36a0IPQI0q+zON08Lo9khQnE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <e40e323b-6eec-2cdc-62ac-d7c6eff59bd8@amd.com>
Date: Thu, 13 Apr 2023 11:58:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [XEN][PATCH v5 03/17] xen/arm: Add CONFIG_OVERLAY_DTB
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-4-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-4-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT078:EE_|MW4PR12MB7440:EE_
X-MS-Office365-Filtering-Correlation-Id: 8d35228e-0865-42e8-e1f5-08db3c059624
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	K6bFRhAp3Jj9muTMjbZRkaGPHRroE40YOyRvmA9QZbqq4sZ7/WBXV3p8mapAUIlRd9sk/eOJWbYclZ5CZkRaa1InDGz4Oxm9SxcD8uw4asILUIQJPyeOGEO+tzG2Rj86J1gbR82iH3DwBaHIUlL1i84Svu6sLk0lbXKIJT2ACHYjDHRI3hfEGzNNSjjJ3Uq+tNZgCszEJFiyj+3MyFRVlka7visOKikbfvjRHhb68deuDW64hmzS/O7Oc27NN8ZTnaAjOsYcHK5plwKzZukx3hw+ECoG8Ts3+JgSMnVp1Q94S8ExeW39QTklvueSuQaXXResbiS0ydPtvW4I+a3kTAuCghI74r5Agk62CqoLZE+l33SMAEpJAuV7E2ZK3Ii9S4OgaPd+VoQTdybT0/UPwTfBzf9kCW2qx8W5O96VC16xRCvKQMCha916XfjtntEMYunGIKloGxUTQPvj6g3HHIUd7uIwIlKJsCp2oaTeV7MtWSYGSgrPR7mmLFouyU/aLwCFBNsCxuwG+WStnqZPM8VVpEFcvvgKZRnStpu1lS4pdh/obmO1uihU2NslTH9nS+Z1XEkmHOqAWMgMkVEPS1iliuh/lXwXaA02MnvkixDTFOf44+XjQOMmBpilKT1XiiXi9fHaQ+2tE40MQVLMgbWg0QSmvMrz2PVPy97v2n6JPdVLoXvXGh05EEF/MV8P2mK27zT87ZbQhOvx5jHx7TWdJ9NTFUbL0n2I9ocgfXrgs36Fw/g8uEEbRWtw/yP47HPMEUUx7DkcL+5KgxtA5w==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(376002)(136003)(39860400002)(451199021)(36840700001)(46966006)(40470700004)(2906002)(31686004)(8676002)(70586007)(70206006)(8936002)(5660300002)(44832011)(478600001)(41300700001)(40460700003)(316002)(40480700001)(82740400003)(426003)(336012)(36756003)(16576012)(54906003)(110136005)(4326008)(186003)(86362001)(26005)(81166007)(53546011)(2616005)(36860700001)(82310400005)(356005)(47076005)(31696002)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 09:58:09.4091
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8d35228e-0865-42e8-e1f5-08db3c059624
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT078.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB7440

Hi Vikram,

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Introduce a config option where the user can enable support for adding/removing
> device tree nodes using a device tree binary overlay.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  SUPPORT.md           | 6 ++++++
>  xen/arch/arm/Kconfig | 5 +++++
>  2 files changed, 11 insertions(+)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index aa1940e55f..0a31f40af4 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -822,6 +822,12 @@ No support for QEMU backends in a 16K or 64K domain.
> 
>      Status: Supported
> 
> +### Device Tree Overlays
> +
> +Add/Remove device tree nodes using a device tree overlay binary(.dtbo).
> +
> +    Status: Supported for ARM
Hmm, so here you say supported, but in Kconfig - unsupported.
I think this should be:
Status, ARM: Tech Preview
or Experimental
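Concretely, the entry could look like this (a sketch only, following the existing SUPPORT.md conventions; the final wording is of course up to the maintainers):

```
### Device Tree Overlays

Add/Remove device tree nodes using a device tree overlay binary (.dtbo).

    Status, ARM: Tech Preview
```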

> +
>  ### ARM: Guest ACPI support
> 
>      Status: Supported
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 239d3aed3c..1fe3d698a5 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -53,6 +53,11 @@ config HAS_ITS
>          bool "GICv3 ITS MSI controller support (UNSUPPORTED)" if UNSUPPORTED
>          depends on GICV3 && !NEW_VGIC && !ARM_32
> 
> +config OVERLAY_DTB
> +       bool "DTB overlay support (UNSUPPORTED)" if UNSUPPORTED
> +       help
> +         Dynamic addition/removal of Xen device tree nodes using a dtbo.
> +
>  config HVM
>          def_bool y
> 
> --
> 2.17.1
> 
> 

~Michal


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 10:03:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 10:03:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520602.808354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmtnc-0002f6-2H; Thu, 13 Apr 2023 10:03:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520602.808354; Thu, 13 Apr 2023 10:03:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmtnb-0002ez-Va; Thu, 13 Apr 2023 10:03:27 +0000
Received: by outflank-mailman (input) for mailman id 520602;
 Thu, 13 Apr 2023 10:03:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmtna-0002et-VM
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 10:03:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmtna-00072s-RR; Thu, 13 Apr 2023 10:03:26 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.20.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmtna-00078e-Iv; Thu, 13 Apr 2023 10:03:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Eheui3wJSAP0Xc1BwIsLqFdWXjNT/C88bEtOr2Jl/Dk=; b=6sNyxDYML5/Rq8oIqxggdsoKjX
	2b6OWbV4Z+LpptbSUKOJ+p/sHt8hw2LcmzkOuiZg/GCVVpUOu+wzmxuew2IIbBkbjANHzFP8FBTTC
	jJlRwCR2Z5KA11kz/bEx9lnlIfhqEuozUyIhgObsaHC06pQfNyWj2RdMr/K8CwxAEQgY=;
Message-ID: <869d014a-d325-6592-d51e-e3638ba04076@xen.org>
Date: Thu, 13 Apr 2023 11:03:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [XEN][PATCH v5 02/17] common/device_tree: change
 __unflatten_device_tree()
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-3-vikram.garhwal@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230411191636.26926-3-vikram.garhwal@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 11/04/2023 20:16, Vikram Garhwal wrote:
> Following changes are done to __unflatten_device_tree():
>      1. __unflatten_device_tree() is renamed to unflatten_device_tree().
>      2. Remove static function type.
>      3. Add handling of memory allocation. This will be useful in dynamic node
>          programming when we unflatten the dt during runtime memory allocation
>          can fail.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>   xen/common/device_tree.c      | 10 ++++++----
>   xen/include/xen/device_tree.h |  5 +++++
>   2 files changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index aed38ff63c..bf847b2584 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -2047,7 +2047,7 @@ static unsigned long unflatten_dt_node(const void *fdt,
>   }
>   
>   /**
> - * __unflatten_device_tree - create tree of device_nodes from flat blob
> + * unflatten_device_tree - create tree of device_nodes from flat blob
>    *
>    * unflattens a device-tree, creating the
>    * tree of struct device_node. It also fills the "name" and "type"
> @@ -2056,8 +2056,7 @@ static unsigned long unflatten_dt_node(const void *fdt,
>    * @fdt: The fdt to expand
>    * @mynodes: The device_node tree created by the call
>    */
> -static void __unflatten_device_tree(const void *fdt,
> -                                    struct dt_device_node **mynodes)
> +void unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
>   {
>       unsigned long start, mem, size;
>       struct dt_device_node **allnextp = mynodes;
> @@ -2079,6 +2078,9 @@ static void __unflatten_device_tree(const void *fdt,
>       /* Allocate memory for the expanded device tree */
>       mem = (unsigned long)_xmalloc (size + 4, __alignof__(struct dt_device_node));
>   
> +    if ( !mem )
> +        panic("Cannot allocate memory for unflatten device tree\n");

After your series, unflatten_device_tree() will be called after boot, so 
we should not unconditionally call panic(). Instead, 
unflatten_device_tree() should return an error and let the caller decide 
what to do.

I suggest reading misc/xen-error-handling.txt to understand when to use 
panic()/BUG() & co. For...


> +
>       ((__be32 *)mem)[size / 4] = cpu_to_be32(0xdeadbeef);
>   
>       dt_dprintk("  unflattening %lx...\n", mem);
> @@ -2179,7 +2181,7 @@ dt_find_interrupt_controller(const struct dt_device_match *matches)
>   
>   void __init dt_unflatten_host_device_tree(void)
>   {
> -    __unflatten_device_tree(device_tree_flattened, &dt_host);
> +    unflatten_device_tree(device_tree_flattened, &dt_host);

... this caller, this should be a panic() (this is OK here because it is 
boot code).

But for your new caller, you should properly report the error back to the 
toolstack.

Also, unflatten_dt_node() (called by __unflatten_device_tree()) seems to 
have some failure cases. Can you explain why they are not properly 
propagated in your case? Are you trusting the device-tree to always be 
valid?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 10:19:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 10:19:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520606.808364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmu2O-0004Dn-DQ; Thu, 13 Apr 2023 10:18:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520606.808364; Thu, 13 Apr 2023 10:18:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmu2O-0004Dg-9s; Thu, 13 Apr 2023 10:18:44 +0000
Received: by outflank-mailman (input) for mailman id 520606;
 Thu, 13 Apr 2023 10:18:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmu2M-0004DW-OP
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 10:18:42 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20607.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::607])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8d589bed-d9e4-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 12:18:37 +0200 (CEST)
Received: from DU2P250CA0001.EURP250.PROD.OUTLOOK.COM (2603:10a6:10:231::6) by
 VE1PR08MB5807.eurprd08.prod.outlook.com (2603:10a6:800:1b2::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 10:18:30 +0000
Received: from DBAEUR03FT035.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:231:cafe::cb) by DU2P250CA0001.outlook.office365.com
 (2603:10a6:10:231::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.31 via Frontend
 Transport; Thu, 13 Apr 2023 10:18:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT035.mail.protection.outlook.com (100.127.142.136) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.30 via Frontend Transport; Thu, 13 Apr 2023 10:18:30 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Thu, 13 Apr 2023 10:18:29 +0000
Received: from 97e27bb58b8d.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E73DDC16-C863-42D5-AD3D-2BF3107F8604.1; 
 Thu, 13 Apr 2023 10:18:23 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 97e27bb58b8d.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 10:18:23 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by GV1PR08MB8668.eurprd08.prod.outlook.com (2603:10a6:150:86::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Thu, 13 Apr
 2023 10:18:21 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 10:18:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d589bed-d9e4-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l6DoXTvtCegDEgg3NGPXql67sOC/wpT/TaHEjZ1NZZk=;
 b=pqY0tFfxs971YjEHXADAgYNLDGpTCffSDI8E9JttbBt/+htE0Xo70eye3mXCZL2E9uQ33y076veCKi7rauioB8Ft2RSMHEVfOFtzusC2xNSbMNl39OzPPW/G8KTXrA2tr17iMlGHUm0hAOBKHc3E2oE9+TGPEnaVAblPKbV6Z4M=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 106133614934fda5
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gszU1sd8qfw8fsi6mkJtF99lxoBGvOEBdHBSbWipedks+ejGCF2GuKN4MrsSFArQNiAfO2VDniE73lKAxlDC8OCdeEUyJXvuf/W8o/tc+SDkcohhomY57ktYxrS78+8WHxiTcHrxZBvjFrRsfXBk+KLb+AYR/VoR/QR2BXwk1uK2HxcuHGfWjPmSWmryihJFhZSrGZdnrdO2CITNEAv3zoked8CJmdI9wMlAquYDwNKirtMiW3gntH33m+OZcJM4A/l/L3eHvLt+hXLCIePMWd8dfEa1hiF+2wVri1GvQAxNR+lB2gIAyll/3ci0UlXdp/O/LBCIxGP+2CbdcvlpSA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=l6DoXTvtCegDEgg3NGPXql67sOC/wpT/TaHEjZ1NZZk=;
 b=jMOHpDrlnKVwEQkiRH5nIFkRTJ/KZPvnbJO3FXDJRI+USqj1JGtlI3zqOs5pnSp4dn4fGaDfgcUsFTxx9hzorhv7g3goX4teYtgY8ivLeQitC2ovs8FnvDnRIlN5HAXfng+4H2KND6xvOJnMESoYzb+8LoKUiGmJ8NyRX2/+FOQ8SWER7x/METapMBwYPXE9cNxgnjHZT3ebzXtxtejEn1NOv7jq9BdmNvFh8oRAqpPT0OiD3RM5tcoQJAVKQsOQKWNjG9cAGJNg91mZgWusrkGO5V0ktWwnuYkWROevt5pZ48OitgdjG6B2rHXuHOpf3ET1en73nIfE9hXirJe64Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l6DoXTvtCegDEgg3NGPXql67sOC/wpT/TaHEjZ1NZZk=;
 b=pqY0tFfxs971YjEHXADAgYNLDGpTCffSDI8E9JttbBt/+htE0Xo70eye3mXCZL2E9uQ33y076veCKi7rauioB8Ft2RSMHEVfOFtzusC2xNSbMNl39OzPPW/G8KTXrA2tr17iMlGHUm0hAOBKHc3E2oE9+TGPEnaVAblPKbV6Z4M=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [XEN PATCH v8 04/22] xen/arm: ffa: add remaining SMC function IDs
Thread-Topic: [XEN PATCH v8 04/22] xen/arm: ffa: add remaining SMC function
 IDs
Thread-Index: AQHZbdfQvOZjy0avwkmKCeVOj5Eyzq8pBU1w
Date: Thu, 13 Apr 2023 10:18:21 +0000
Message-ID:
 <AS8PR08MB7991150DD65CAB61C276A21892989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-5-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-5-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 7F3587952335594DB45AB22285B60C93.0
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|GV1PR08MB8668:EE_|DBAEUR03FT035:EE_|VE1PR08MB5807:EE_
X-MS-Office365-Filtering-Correlation-Id: 7916772e-a785-44a5-b46c-08db3c086dad
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ZUXIWunWcQ/qHWTlj9yh7YSVsZ2Fqhu+iJg4YYpP/XM5B7JTNpFrS5tjEWYIAS6NQzid7DYqtGd3GSYWd8yDfPidDybKlKVhhu2njsQQYNzgNaqE6CbB/sNPtN//lh/xT3RJ3hjgck/rm+EUPYdH0PrZKt3vs1VW5Y7RvVfL6e5h4SCs4vB9Sy+EgI8IJsBxVcthIiVlRoTj5iCEVBhXkQQNO9IxBVtTaAa23J4j004KdK/WVaSf+y5uAKTpVTR65tYKiZGg9gC8h/u2SW33WhbCD+I2UXocphYND7s1p8fdN/B2cX5M0b2vS/iwYsId+C0h1QGeZGjfcS9aOu4RNfhm/UglGnf1Sie+8+Qdly0jYkxat8CxvPJ/gHQylYx+elgSOFPYl6r+AhOoQpZdaueIQ5/otAOncG3bbox1Kw+pPyaPclS1DnNBLhW5CE1eTinukF4kPLjjY1s4iD/qnQbe3VwyHT2jnDD8gWhvi/QlHE5tk1GNndyVUTw/tQRdwyDgQAUqPIHuuRFfGZ/ZlHcWdtmuiODKlCfl2ajkIH8Kvezaq9mdc5VrSCln98M/RGG8eVRYyNaHtOVfNqZUzinS48JXKWqQfbwd6EmAEGbwLIHvAFxt/WNrA1odjZuO
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(346002)(366004)(376002)(39860400002)(451199021)(71200400001)(478600001)(7696005)(83380400001)(122000001)(55016003)(86362001)(38100700002)(38070700005)(33656002)(2906002)(4744005)(6506007)(26005)(110136005)(316002)(9686003)(54906003)(5660300002)(76116006)(186003)(64756008)(52536014)(4326008)(8676002)(8936002)(41300700001)(66446008)(66476007)(66946007)(66556008);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8668
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0d26029d-1b20-4376-ee90-08db3c086884
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WlGwhmcYj7EN9Y+J4vh28p8HAqCmT1i8WO/9Au1qn0PI8W+ThYBXk/valbaEyVZ+SVLxSHU5GxHJLXIrIMEXkromi6ExIl661cvgMWKlb0yuvGkPCryqmk+IgFZSOAKzAx5YRXQ6Dywq+0wVR9SjyHdkPjovpijSc7avDmvaruryj7KSUHew14/85PnzM0wGX3/sxnI0ybYZH1vZ2wSbpI1hXnFSCoEqk5tIo59LN+dIPdJaCG93bkdnCYLyK24jOJNDRCKt+oQCtGypQtiGNrNZcv3b5JDa9wIj9SkcsfBjvB3BKLzDvpwTjsh1JTGQhwV7+au3qxvX2+14FOdDqbUVZke62/DGsm/qlObl7jKHswZpQHRy73Dy4lKixcWsFPigXzSgDdnWTSieEy9f1ZQNSjj9PXs2Od32fxYUSx8v57jhYWqk4POeiBIVcMVZ4Om/oHCZGCDFEOb18xMVxCSsvk+rEwnJ0IQlwQvPfNFEHQ+Qe3Nqzpvb/qhkV+5pmTsJLLZRMqodlsNj70SMaiAWpFr0D2989ubmQ22zghUOqYr62GeMuTdCY33wPbFd9YEhaM6w4z+wc4IgRvM5TfwT6HqQUv72IsdPjgi6uHXUADMdvHOBGjyWqZ6l1r8aotowtVGl5/09jwuXpWsR9QTxzmsg2maaCAEtNsPh1Es8LiIBx8naw0Ix0D2ObEUHYOYjl4Mf5T3UNqmbRyZnmQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(346002)(376002)(39860400002)(451199021)(46966006)(36840700001)(40470700004)(36860700001)(70586007)(47076005)(83380400001)(70206006)(6506007)(4326008)(26005)(54906003)(7696005)(478600001)(110136005)(186003)(9686003)(52536014)(2906002)(40460700003)(33656002)(5660300002)(4744005)(316002)(81166007)(356005)(82310400005)(336012)(82740400003)(41300700001)(55016003)(8936002)(86362001)(8676002)(40480700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 10:18:30.0562
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7916772e-a785-44a5-b46c-08db3c086dad
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5807

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 04/22] xen/arm: ffa: add remaining SMC function IDs
>
> Adds the remaining SMC function IDs from FF-A 1.1 specification.

Nit: I would suggest that in the commit message you mention the documentation
number you used. During my review of this patch I am using
DEN0077A version 1.1 REL0.

>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I also confirm that the macro values introduced by this patch are consistent
with the spec, hence:

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 10:28:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 10:28:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520611.808374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmuCD-0005m2-DT; Thu, 13 Apr 2023 10:28:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520611.808374; Thu, 13 Apr 2023 10:28:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmuCD-0005lv-AN; Thu, 13 Apr 2023 10:28:53 +0000
Received: by outflank-mailman (input) for mailman id 520611;
 Thu, 13 Apr 2023 10:28:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmuCC-0005lp-Br
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 10:28:52 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20600.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fa81314e-d9e5-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 12:28:49 +0200 (CEST)
Received: from AS9PR04CA0122.eurprd04.prod.outlook.com (2603:10a6:20b:531::24)
 by PR3PR08MB5707.eurprd08.prod.outlook.com (2603:10a6:102:8b::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28; Thu, 13 Apr
 2023 10:28:47 +0000
Received: from AM7EUR03FT009.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:531:cafe::5a) by AS9PR04CA0122.outlook.office365.com
 (2603:10a6:20b:531::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 10:28:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT009.mail.protection.outlook.com (100.127.140.130) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.28 via Frontend Transport; Thu, 13 Apr 2023 10:28:46 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Thu, 13 Apr 2023 10:28:46 +0000
Received: from bf118fd0be95.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 90881DDF-AD1E-45BE-A0AB-ACE3A38CCFDF.1; 
 Thu, 13 Apr 2023 10:28:35 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bf118fd0be95.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 10:28:35 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PAWPR08MB10044.eurprd08.prod.outlook.com (2603:10a6:102:362::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Thu, 13 Apr
 2023 10:28:32 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 10:28:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa81314e-d9e5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PTfLyCN+fkRfuN0F3YonH3ulHsbdCoNdxmseEkV5p6E=;
 b=bd5muEGq3UdxCmY/IWmF9TfpwTbEjqSmccp9NQoBt/Fqw86ThqIaZe/fUrTdejZ1dOC5kJt6B/IrCNFtbJ300aZU05A6J5MJFxSApvDQ3HXjv2wR7IEWmk4ZkYos6L/a6fMlPLZBlfINfPP994QOSnE54aFQIZrYplpEO45pgpY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 105883896882219a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aBja5EV1Q3CTT/efdWDogIBK6SaDsLAbdrCmamMZe2fgqUTkKMu0f9B0SK3TQGOK3Tz9LOnv1XlaeJx6nBRkL3msT7cl1k69JKbW/kp45zrTM/JOlflMuY7la8etghUF54RUM6JlMqwBSN652b6h9Rvq1C0q+srmnygOqyqrMnEI64kMpwfCGQ0tLqIVZ5nrHelkcpKKi4CVyakkkc2CTuW+8I2TtlFUpVPXNxzspz/rU6Pvs5bV4/cPFDOUw6KgdXIymwn7mp6o49Zk6ylMEL1++G6aeYNYcbJ3QNcPCEbutnhj1cIVUH0LvuSDqydoPAkG6bZpl7Ru/30zCrVV8g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PTfLyCN+fkRfuN0F3YonH3ulHsbdCoNdxmseEkV5p6E=;
 b=GPA/1yNFI1ELc89KaibtrPykCdwRGpk93P+yxemHrBRxKMptvEDWa8FVKXPDK25bGkTb9zQE23VgRzO5YojChvPPfvodIdQOMR5VD7qlQVTzUn4/mub1YLh63ZKZKn+yJmKfbFZ4X/8Kr7qJcJBkaFiCv9ctNkD43KAkLGFeQI8A1YuzD/8ALFNOfq/pHQ7nHNzVbjCQkl4QRwwGH2zjUJBwQ8268CSfgXiM2718P3aRMEwhI/3pRU0Pg1KZ5xPae6Ezp273pyRV1b/Y0zGNW/Rb5mY8rBJ/CnJTa93QJhiPZGg0iajk7hWcGNilgLrXhGdTzX80C7FgGm7Zo2Z7sw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PTfLyCN+fkRfuN0F3YonH3ulHsbdCoNdxmseEkV5p6E=;
 b=bd5muEGq3UdxCmY/IWmF9TfpwTbEjqSmccp9NQoBt/Fqw86ThqIaZe/fUrTdejZ1dOC5kJt6B/IrCNFtbJ300aZU05A6J5MJFxSApvDQ3HXjv2wR7IEWmk4ZkYos6L/a6fMlPLZBlfINfPP994QOSnE54aFQIZrYplpEO45pgpY=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [XEN PATCH v8 05/22] xen/arm: ffa: add flags for
 FFA_PARTITION_INFO_GET
Thread-Topic: [XEN PATCH v8 05/22] xen/arm: ffa: add flags for
 FFA_PARTITION_INFO_GET
Thread-Index: AQHZbdfQYdI+Ke10g0y9D6f6Y8dBZq8pCBXg
Date: Thu, 13 Apr 2023 10:28:32 +0000
Message-ID:
 <AS8PR08MB79913E8D281DB674FDC0D70192989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-6-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-6-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 4F5B102125D24D46A420CEEA92276860.0
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PAWPR08MB10044:EE_|AM7EUR03FT009:EE_|PR3PR08MB5707:EE_
X-MS-Office365-Filtering-Correlation-Id: dc55f43e-70ae-40f5-edd3-08db3c09dd49
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB10044
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	26ec4f1e-5931-4834-ead8-08db3c09d487
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 10:28:46.7414
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dc55f43e-70ae-40f5-edd3-08db3c09dd49
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5707

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 05/22] xen/arm: ffa: add flags for
> FFA_PARTITION_INFO_GET
>
> Defines flags used for the function FFA_PARTITION_INFO_GET.

Nit: Similar to my comment on patch #4, I would suggest mentioning in
the commit message the document number and the section that specifies
FFA_PARTITION_INFO_GET. Something like:
"Define flags used for the function FFA_PARTITION_INFO_GET, according
to DEN0077A version 1.1 REL0, section 13.8."

>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>  xen/arch/arm/tee/ffa.c | 34 ++++++++++++++++++++++++++++++++++
>  1 file changed, 34 insertions(+)
>
> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> index ba0942e76993..72e7d0575de5 100644
> --- a/xen/arch/arm/tee/ffa.c
> +++ b/xen/arch/arm/tee/ffa.c
> @@ -57,6 +57,40 @@
>  #define FFA_MY_VERSION          MAKE_FFA_VERSION(FFA_MY_VERSION_MAJOR, \
>                                                   FFA_MY_VERSION_MINOR)
>
> +/*
> + * Flags to determine partition properties in FFA_PARTITION_INFO_GET return
> + * message:
> + * BIT(0): Supports receipt of direct requests
> + * BIT(1): Can send direct requests
> + * BIT(2): Can send and receive indirect messages
> + * BIT(3): Supports receipt of notifications
> + * BIT(4-5): Partition ID is a PE endpoint ID
> + * BIT(6): Partition must be informed about each VM that is created by
> + *         the Hypervisor
> + * BIT(7): Partition must be informed about each VM that is destroyed by
> + *         the Hypervisor
> + * BIT(8): Partition runs in the AArch64 execution state else AArch32
> + *         execution state
> + */
> +#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0, U)
> +#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1, U)
> +#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2, U)
> +#define FFA_PART_PROP_RECV_NOTIF        BIT(3, U)
> +#define FFA_PART_PROP_IS_MASK           (3U << 4)

I am a bit confused here: (3U << 4) is "IS_MASK" but...

> +#define FFA_PART_PROP_IS_PE_ID          (0U << 4)
> +#define FFA_PART_PROP_IS_SEPID_INDEP    (1U << 4)
> +#define FFA_PART_PROP_IS_SEPID_DEP      (2U << 4)
> +#define FFA_PART_PROP_IS_AUX_ID         (3U << 4)

...here the same value is used for "IS_AUX_ID". According to
the spec I referred to, bits[5:4] have the following encoding:
b'11: Partition ID is an auxiliary ID. Hence I guess the above
"IS_MASK" should be removed?

I confirm the values of other fields are consistent with the spec.

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 10:37:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 10:37:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520615.808384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmuK7-0007GK-8X; Thu, 13 Apr 2023 10:37:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520615.808384; Thu, 13 Apr 2023 10:37:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmuK7-0007GD-3q; Thu, 13 Apr 2023 10:37:03 +0000
Received: by outflank-mailman (input) for mailman id 520615;
 Thu, 13 Apr 2023 10:37:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9nQ=AE=citrix.com=prvs=46097603d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pmuK5-0007G7-CR
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 10:37:01 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1d74c1fc-d9e7-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 12:36:59 +0200 (CEST)
Received: from mail-bn7nam10lp2105.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.105])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 13 Apr 2023 06:36:52 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW4PR03MB6379.namprd03.prod.outlook.com (2603:10b6:303:11e::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 10:36:49 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6277.043; Thu, 13 Apr 2023
 10:36:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d74c1fc-d9e7-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681382219;
  h=message-id:date:to:from:subject:
   content-transfer-encoding:mime-version;
  bh=7o/zoZvLpO8iuKHhAUnS6ufHTV8O/4SsFt5DFoyrqSQ=;
  b=cuZtoDiBPD/C6Mr/fg+RYjdBW5OCH4e/vH+wCtTwmcsCdtbdxUI3ZAmD
   50TFbsP+BhshWaQfYQdrbiQuQ7L9OzYQMcCKjGk11bGK4aIm352unvY3y
   nlDQzfIRax5Y4s7dNo8MHhAsxSfxhzTdznSYBD0j45auR41Jb2vkcQPJf
   A=;
X-IronPort-RemoteIP: 104.47.70.105
X-IronPort-MID: 107797070
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.98,341,1673931600"; 
   d="scan'208";a="107797070"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CTZg9dXvjL/Z6Tcrj4hKJP4JyLXxfahnJtEpspXDa2/VpCsAOrP1fRP8mAKbbBQ/o3Eye79UJyNgUeNjCPFj0f/71SbtEst6OAVKrV72IkbVWMv89PCieAtJAd0Klgy/1XsuD+g53T6n8HqoCYSpHpE4EHoz2ALwoYaiK9Q0ZPhbA9SonZoDGjGINmpOKeV48p81cFusB2zl77Ss07HLVgbBxCJzG+JB+AP4vrRZblYQO+jIcFhACNV6Tq7vzExFWBR0nxtKKzccVI17KVgZZW3IH6R/XL1jmkBW3K6kLA3wVdTdPWSIjHJIoohSYy9HwACa/E51evSfhTG27J4Qlw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7o/zoZvLpO8iuKHhAUnS6ufHTV8O/4SsFt5DFoyrqSQ=;
 b=PqmLiHYU84Mu/fSyO7zp0qiXoT71mTog7RWgIWhrH7G3iHCTkgRlGI6X8iO4wDhKRv0onzz3I5QGbvM6w45aYOXCdhimZ6NiXAzeTSBuzyCsHVuVyUnQSHhcMqj3k6wkePcqEklZ90qpXZpcbUtZVSzMLdbDabhsRkcuWApeYV8vH1Kw7AZQesON9nb7qj0g0HlFcCze3jpCOJdzDJObZlQyNXK2lPKQ1ka/22v+Kb4TVYSCT6AZdx0nLddSYGZ+tg8XB2k+T2/ZuFyT64eBqP8ON0JQb6/CQhtYImayXcl8ftFFzDeH1k6nBc2iJ8JPdKVgcb/GgTsalvq8/tLKww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7o/zoZvLpO8iuKHhAUnS6ufHTV8O/4SsFt5DFoyrqSQ=;
 b=nXNsAPQWnPjWQrBHxWAchQ/jX+mn6v8eRChjlWvLlsf5lcASpUWS0a805oSI8m5y5k8kvp/6PTcmecJ2DULP/8RAdyT+0gn7aB/nKncmQCBLjEI0R3q3vnnwcfy4T3brQuio6FYF/68TSP8dAVsxna3FsrFLBJwUAMibxHuFvS0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <19944c81-8dd4-6c89-0fa0-f4837648c7d5@citrix.com>
Date: Thu, 13 Apr 2023 11:36:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Content-Language: en-GB
To: xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Parallel build regression in the emulator harness
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P265CA0007.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ad::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MW4PR03MB6379:EE_
X-MS-Office365-Filtering-Correlation-Id: 5536b235-eee8-476d-a8ac-08db3c0afc55
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5536b235-eee8-476d-a8ac-08db3c0afc55
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 10:36:48.7503
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7JRZ3i7fmIcbKiDce3tV/VC6B5ApEBOYBDUrpxkmTZh7ZfRCNBIXMaJV6GqeJmKIdbKGYEBuq4Ljwc4WFx8itfmmpah24p48bjKzozbflkc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6379

Hi,

GitLab has started failing very occasionally with this:

https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4104532296

While the individual log lines appear to be in the right order, the
build is clearly failing because compiling decode.o is happening before
the x86-emulate.h symlink is properly in place.

This is likely a bug in the recent splitting, but I've not had time to
investigate further at this juncture.
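
For context, the classic cause of this class of intermittent parallel-build
failure is a compile rule that does not declare a dependency on a generated
file (here, the x86-emulate.h symlink), so under "make -jN" the compiler can
run before the symlink exists. A hypothetical sketch of the usual fix, with
made-up variable and path names rather than ones from the actual tree:

```make
# Hypothetical fragment: objects must depend on the symlinked header so
# that parallel builds create it before any compilation starts.
OBJS := decode.o test_x86_emulator.o

x86-emulate.h:
	ln -sf $(SRCDIR)/x86-emulate.h $@

# Without this dependency line, "make -jN" may race the symlink creation
# against the compile of decode.o, producing the failure seen in the log.
$(OBJS): x86-emulate.h
```

Whether the real harness Makefiles are missing exactly this dependency
would need checking against the tree.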

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 10:51:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 10:51:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520619.808395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmuXj-0001Br-HR; Thu, 13 Apr 2023 10:51:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520619.808395; Thu, 13 Apr 2023 10:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmuXj-0001Bk-Cv; Thu, 13 Apr 2023 10:51:07 +0000
Received: by outflank-mailman (input) for mailman id 520619;
 Thu, 13 Apr 2023 10:51:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmuXi-0001Be-4q
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 10:51:06 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2061c.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::61c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 156e8975-d9e9-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 12:51:03 +0200 (CEST)
Received: from AS4P190CA0036.EURP190.PROD.OUTLOOK.COM (2603:10a6:20b:5d1::7)
 by VI1PR08MB5358.eurprd08.prod.outlook.com (2603:10a6:803:13c::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 10:50:55 +0000
Received: from AM7EUR03FT036.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:5d1:cafe::c) by AS4P190CA0036.outlook.office365.com
 (2603:10a6:20b:5d1::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30 via Frontend
 Transport; Thu, 13 Apr 2023 10:50:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT036.mail.protection.outlook.com (100.127.140.93) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.29 via Frontend Transport; Thu, 13 Apr 2023 10:50:54 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Thu, 13 Apr 2023 10:50:54 +0000
Received: from f5188b42cb7d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 82771DA4-E170-425C-B69C-2443DD83CFF8.1; 
 Thu, 13 Apr 2023 10:50:53 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f5188b42cb7d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 10:50:53 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB9PR08MB6684.eurprd08.prod.outlook.com (2603:10a6:10:26d::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 10:50:46 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 10:50:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 156e8975-d9e9-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OnanJ/Xo/U/t2aXUs2oKoot5SwX5CsZYxI13BH5G38U=;
 b=RY0/SdiOhCAgGL5bYS3RbRcOx+ETNGk6cWGsPbCWOPQzIuvYq0Hwt5CqPhop31cU32JinjiQ+/vtyx7thVctF17w+psqnk0Wzm8D88iiSHYIi9kMSpJ92kbbKgpN0CzrccjDIEN35S+DSBft0roGWeASqxZnnFL4+l8RO9NWiP4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EpyXA0tFxl01egftUCULgvupUKuORwiN3XlYQSxR3tvhO4d5cBZ+6jY9epiMd2uXkgqgkn1LrFjH4SSF4n7GxXoUBBkiIpqC7KBYpXhuVwCL15gkxP+GXKIX6cQOMwQkrOSIBkfpWwQMfC9jkZlQ8QjzP2ibf3AhX4ldJqLuJc3M9lkhQJv0WWfTXaj/KiB7Jw4IaqxCVDTxkwbM3r4iHd9Qga/Aikdyq0DHI9PMqBAU+MVPY/OUeClWh2NT4BOjvZD5nwo28TlHPq8UbyrWgB+//YU1Qsyn+6Uq0B0AVJLsScSDQgg/7ua+D5RhRJk3fMN1UaJME4F7ys+ZgXRNqQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OnanJ/Xo/U/t2aXUs2oKoot5SwX5CsZYxI13BH5G38U=;
 b=ishhqhc4c6V/WDdk+F+/QgklHVBUQmc08CaQSoBhRyXtMJjsNGMDvo23aGCmSNA1hR5nvX+u8T0ljS/xZPTMCDk2MniArNtUe8c+CLnYH4BiMzUE46z1uxQEU1jN4hzcPzCl1uBEYhcGm8IhC92j0dchMOek3R1EPyGg05duqmgGHfgoHTHaG9DDtQWdEfxiHmGFxr5KjY06iXnlEvhypgxT5HlDi2OaTjD2RaAvHRgbFI8k/X6vF7n+ZZbucrclLaw7HDc41k6kNCXKVVnYCbGOqqtNwtq/fD9EOq3rPI5ShRza7oy7ytSw6ZFdYKMCcURLq+5h/TzSDDXOC+QbNw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OnanJ/Xo/U/t2aXUs2oKoot5SwX5CsZYxI13BH5G38U=;
 b=RY0/SdiOhCAgGL5bYS3RbRcOx+ETNGk6cWGsPbCWOPQzIuvYq0Hwt5CqPhop31cU32JinjiQ+/vtyx7thVctF17w+psqnk0Wzm8D88iiSHYIi9kMSpJ92kbbKgpN0CzrccjDIEN35S+DSBft0roGWeASqxZnnFL4+l8RO9NWiP4=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [XEN PATCH v8 06/22] xen/arm: ffa: add defines for framework
 direct request/response messages
Thread-Topic: [XEN PATCH v8 06/22] xen/arm: ffa: add defines for framework
 direct request/response messages
Thread-Index: AQHZbdfR/bDbMlylSUmzQmA1LCqNwK8pDjBA
Date: Thu, 13 Apr 2023 10:50:45 +0000
Message-ID:
 <AS8PR08MB79916CDCB5DB7CA86E8077A592989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-7-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-7-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 4B458A3ED24A7E44A1401B89678F16FD.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB9PR08MB6684:EE_|AM7EUR03FT036:EE_|VI1PR08MB5358:EE_
X-MS-Office365-Filtering-Correlation-Id: 4d253a80-8989-4ac3-7ba4-08db3c0cf4ba
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 6LchsenPIN+ocvX2timcV7g7PyjsyfY/29Cum5GMgVABkeKOkWjS5PvdeF62I22HBNnPRHLXoAYwUuBwLVdW9IZXls9rc6E3e6HEMLxdYx2TWWulrlYfsg6558K8rQxvz3mavfA2JE2arfwrQJ8zof51oJebzu1ZH3BZLnUvBy1l6d78Eb12AWsRPpR+l1neAClftTOIENpX3xAM1EZUkSv+ZWRlPZCurFRrZ39WZ3BPBRRgHO2YFIROD5y2bX96PN7Xn40bW734nL4hKokOEJGN1W8DLSr3OxFrqDXrbgywodc/mD9iStA9Zy4+evwvCTRBuzwWfJnlwnDCHGSaKcwSTZtduvS/sAmtZIKrJU5DgJ0sEfwnCBhHAUxGG7sPeuzqO9H3MaRjZLp4W40IZRa5To2cQVy0wZ/LC1+KqY41n7a9CEHzXZ3w9olTXqQ/OHnIyMWA+LPqocFlsolU/70lQNSJin0pMF3rt7gCBFBrdp+SegbvvyH4oAqbuSOO2JQIo8dlfLD4eexqW+hG90oEiO5P5OTK8j+S623BBjr9AtGyfx/Y9vn1mf2vEBqNG7yLSp3u5Q+K+/vtJwMFtf7oq9uMD0uC+RKAD35BC1zD4AXSiHgX29Imc+5F0POt
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(346002)(39860400002)(136003)(366004)(451199021)(33656002)(55016003)(478600001)(54906003)(110136005)(7696005)(71200400001)(122000001)(41300700001)(8676002)(8936002)(38100700002)(66556008)(66946007)(76116006)(66476007)(316002)(64756008)(66446008)(4326008)(83380400001)(6506007)(9686003)(26005)(186003)(86362001)(52536014)(15650500001)(4744005)(2906002)(38070700005)(5660300002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6684
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6ae4e6c8-d752-4162-e99e-08db3c0cef78
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	XNZFXGgvdJqJonSRI9w1IQf47RO1OWYcV4KM7TTPj4CIY8k4YxLXNqcWmdcF9rx1Vsvc6f4JBLsHwBG5ZQsTOcenirCPQbuiHEtTkzwalVM471NsRssRGwuQyCLz9D2Pbt95/Q/bTDIp7TOJo3TlC6XtIPYdT7AnhpMSgTiw73xCM9DydPsgl9GLXuMTLAgLuwoWNo/W9seaT7MHV/VO0Dpuj7s4UfN5CyMsVDzQPm3C35getl2uhXR6Melh1kpNkjJFscW16c99rjPHg+F4wdC2+F4PE8ss+5261SUNBjCI/vP7K+O2zBRf9m2qTadqpncW9MVYfGeFIREgISuw594s9SPjTNTmB05ChEuF/2FpKcKI1oBjmAh+7X8kJrYP7VOu9B9dzs94oiQSjNgfkODvnnRbmuZpLBOK3z4Smw1mMvSsUhz+nyMT+HzJ1rss4g0n2PmlNzpCBFwJHc37+dfJaOiaSbtATGrDv0Jz01SoBRi9wNO97BTqht0KWudMPPGWiYzA+fgTyqYGAmtFh4VCrkZQKadpuQwOpMg9MrBCNcBhzTD9cSUjKNY53ZXuTbVAQzH8S/J+ZtvXdRi3dCWeSnZP6954PoZmw7IwvalZUZnqQHx/vD+vn0pYyh5TC695kgHFhLyE1hmQ8hENzE6jAi+Tifa6nGBQHSJlRDzvrXOX6hYF3rw0q3NMEEn7QpbMW1mqkduwZJVJu77P+w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(376002)(136003)(39850400004)(451199021)(36840700001)(46966006)(40470700004)(2906002)(4744005)(8676002)(70586007)(70206006)(8936002)(5660300002)(52536014)(15650500001)(478600001)(41300700001)(33656002)(40460700003)(316002)(40480700001)(83380400001)(82740400003)(336012)(55016003)(54906003)(110136005)(4326008)(7696005)(186003)(86362001)(6506007)(26005)(9686003)(81166007)(36860700001)(82310400005)(356005)(47076005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 10:50:54.5568
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4d253a80-8989-4ac3-7ba4-08db3c0cf4ba
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5358

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 06/22] xen/arm: ffa: add defines for framework
> direct request/response messages
>
> Adds defines for framework direct request/response messages.

Same here: it took me a while to find the relevant chapters for this
patch, so I would suggest adding more detail about where the values
come from.

From the spec that I referred to (DEN0077A version 1.1 REL0), they are
in section 18.3, Tables 18.{21, 25, 26, 27, 28}.

>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

I confirm the values introduced by this patch are consistent with the
above spec, hence:

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 10:55:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 10:55:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520624.808404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmuc9-0001rJ-5m; Thu, 13 Apr 2023 10:55:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520624.808404; Thu, 13 Apr 2023 10:55:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmuc9-0001rC-2o; Thu, 13 Apr 2023 10:55:41 +0000
Received: by outflank-mailman (input) for mailman id 520624;
 Thu, 13 Apr 2023 10:55:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmuc7-0001r6-NP
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 10:55:39 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2060c.outbound.protection.outlook.com
 [2a01:111:f400:7d00::60c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b912dceb-d9e9-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 12:55:37 +0200 (CEST)
Received: from DUZPR01CA0277.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:4b9::21) by GVXPR08MB7728.eurprd08.prod.outlook.com
 (2603:10a6:150:6a::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Thu, 13 Apr
 2023 10:55:33 +0000
Received: from DBAEUR03FT039.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:4b9:cafe::f7) by DUZPR01CA0277.outlook.office365.com
 (2603:10a6:10:4b9::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 10:55:33 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT039.mail.protection.outlook.com (100.127.142.225) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.30 via Frontend Transport; Thu, 13 Apr 2023 10:55:32 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Thu, 13 Apr 2023 10:55:32 +0000
Received: from 4080c0bc5b40.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 448A4F4C-DD8C-4774-B5BD-5762DF691FF4.1; 
 Thu, 13 Apr 2023 10:55:27 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4080c0bc5b40.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 10:55:27 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB10099.eurprd08.prod.outlook.com (2603:10a6:20b:628::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 10:55:13 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 10:55:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b912dceb-d9e9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7an/gZhtDrWRNg2OCaW8n6WhijrkmtfSxkjWDk2/8wU=;
 b=ESitETdiT1Ie66Gv8DiGSlWKjNt4wtMQDCo1vah4/P+ef9+MpOPBLmg5GY7FbpF6wcoNISR0J8UTtnvuRLAR6U8sfQeIzuL4AqUm2uzKoBKjZqJF1FYcnltIhKuYvBM1Q7Okdriw+Ofdf9A6zWr73CG0hsFksLYHkdD7wOa79f0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Yh+lifeKc2lLyKL4NbEaZ775l2BpKDxCBtFI3AVfGZQylXpoP682RgB90BfPqxlS63JF3IBCjpV76TnaAHbnqJytVywVvk/RXeEfljAA1E16MRVexL/lXqFmn7uejZGQxgEtLqwjLO7bcARg5Tpkn5kWTdDPURNpToetHnlttaUDlSrPpjhHm1fRuChNxxTAM/L5N4gJ3eIF9o1gdOuz/uMR7ufbX1okvqrtTSs8HS4Ezz6fdez18PNtbADBQMD/IVHSVpAF9CD5nlXWPmpF1Io/xNPpoiC2IBFTE4gpDcVzRzoRZvY9B9kkEFHDWJdIGRzaw93MayllX9yFsURjww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7an/gZhtDrWRNg2OCaW8n6WhijrkmtfSxkjWDk2/8wU=;
 b=XCySfI+Xaa/Ay7S8qP+BQo0P8023gwh8AyYXwBQj37ejpdvNJ6pvzq+jBTQ3nMZtv2O/7HB1sIdKMbhXDqakTSsXFtf6hVVYs8wdxouEbzIT/n8Z1iNyj7E/rGQovKwvjcGRqwtJ6Ju8srK+C4pDef72iq1CZcnakMiVQLWXYRbpmn+ExEtzeu5dTn2OZsKLFkSTzT7noCPlM8KfGLaqtxd/x6OR3dn1a1Gc8jbK+Em5S8a/fM47SDsDA0FCCBeSbEdhcJmw4n2BGNz/K8qi2//vmrRIHtAoACDfg7Dmmglld2LUvdPBQeQZjcHFMlLItsuwMyeDl9gikHiT1GoXIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7an/gZhtDrWRNg2OCaW8n6WhijrkmtfSxkjWDk2/8wU=;
 b=ESitETdiT1Ie66Gv8DiGSlWKjNt4wtMQDCo1vah4/P+ef9+MpOPBLmg5GY7FbpF6wcoNISR0J8UTtnvuRLAR6U8sfQeIzuL4AqUm2uzKoBKjZqJF1FYcnltIhKuYvBM1Q7Okdriw+Ofdf9A6zWr73CG0hsFksLYHkdD7wOa79f0=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [XEN PATCH v8 07/22] xen/arm: ffa: enforce dependency on 4k pages
Thread-Topic: [XEN PATCH v8 07/22] xen/arm: ffa: enforce dependency on 4k
 pages
Thread-Index: AQHZbdfbHuhV0b3KPUiyzqcZuUyyg68pEPqQ
Date: Thu, 13 Apr 2023 10:55:13 +0000
Message-ID:
 <AS8PR08MB7991C16F543C4B1C4611572F92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-8-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-8-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 3767CAB25120244EB39A17645D99F2A6.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB10099:EE_|DBAEUR03FT039:EE_|GVXPR08MB7728:EE_
X-MS-Office365-Filtering-Correlation-Id: e0e2aefb-fd21-4ac1-7c55-08db3c0d9a74
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 k9+hC3buFul2nfqkP2IP/Md9zXujGPlfzzDy9IHyB7B2Ppn2Wn9FJmoa608IALFkWzPN4LVAhKmvMz0kFsUyBTgK5UgwIVjfvmtagn8T4QXIaYwvxpCn8ycqW8/0b8JGKl0jC8Kn3QkDxdRULwVtmxWUz7UzURu52P62c8BLXhllHFn62mMQLe+65JXgF5kPW6fgZVxpbKVMDyC5G+wKbdENry+4bZEZ9RJp3VUn3lrUNqZEF57s+2WVpYRPoV+4zxaWUJ1zPD1Q4rDiCJRYciWZLIXIh+Zlrdcgbe0leBf5ISpXq7JgHfc3JNxHjj1oduo1GCyzHOA/bIdh9tvMfff6h030vPhxhSm9QXi3seJC9NSeyEHZvCCVhy+BSfwLdOav4ylN3hw0IGEdL0/SRAMTJhHBErqmEGUO6YbouDV16yayJG5wbq8h/izyyKexxJpPO/Jf8XgU385iGYPApRjleq3v5W4ZfiXiYL/OCc31v2bsNbJ05ml1R+iScx5eZ3coqtkkk2f6sAwEohZC0+iMBW0S71uP9JxWf29hRopuO4p/0gc6TVHmZSreo195jzLA55m3Jb4DrpPyAzuaQwx2Y6ZvZbFO15g5qCpvJjI=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(366004)(396003)(346002)(39860400002)(376002)(451199021)(5660300002)(41300700001)(33656002)(122000001)(38100700002)(4744005)(2906002)(52536014)(55016003)(38070700005)(8676002)(86362001)(4326008)(66556008)(8936002)(66446008)(66946007)(64756008)(66476007)(316002)(83380400001)(76116006)(6506007)(54906003)(110136005)(26005)(9686003)(186003)(7696005)(478600001)(71200400001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB10099
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT039.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7e0ec7bc-7f98-4615-eace-08db3c0d8ec8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	eZNQY1OQQ2l+TvsJjOCfqYoR0G3hkzQBdmH1MWEtanK/MLNhCGWKYcS85z+gjcKsh7X9XQt7WQN0nOwXSN+czz1xvscBVU5cgzLhsF1Er2zAwiKwlv0OT4wu9aaLp+4xzOUUkrDHaXuCyu2R6z3HIUR2ecVpGpsvO5ML3Q/lp1Snp9V0ReFTPS0bAH4x/xodu2DKMXvyDu1dYlc491z8TOaeZXMpnsV/YPG3jvuGAFWhxnEclWoM/LE5iqwS8h+5SBftEpvo+2Jsv61Y8VwIegfSNeYBLvlVshIWyBG+ZBfiWQBXABBvVrxW3KS2UtlQFS9OkEruTCc/TO96LLN4bN+9bY3+2Ko2ejJfXphoRvRJ6z9RPgMkdfxzTE474kTj+a6fA+UThY+tSuDsW13UqdXTFFnYD5WR7JH/VqES5V1vcnwzJSA2iTZA4spwYWbdQx0XxQs7AbL6V3eHZ7sRs9eQ8aQ9sgoex5WnJf78+kgEhn5lcvUaTU/YyGm1O6R5J/nsBw1twXSaWjZI4R0X6KRtUjYR7oUV1h74IHhSjh97Esay6I5RxcURCo1vhm9+E1HNkSdeVB/JCsH4gxRGyW4YL3bTumyqU35ZzCPg71a0uC4/Sc/ZKvdHWMblOD0vrGKcYlFftw7xsFvpHP5G3iFr4Ywfw/Im9pXkdMq0Cdx3YKWaiO2yCmTPUXne17xP
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(136003)(346002)(376002)(451199021)(40470700004)(46966006)(36840700001)(40460700003)(7696005)(8676002)(70206006)(4326008)(54906003)(478600001)(70586007)(41300700001)(110136005)(316002)(33656002)(86362001)(336012)(9686003)(6506007)(26005)(47076005)(83380400001)(4744005)(40480700001)(82310400005)(5660300002)(36860700001)(55016003)(8936002)(186003)(52536014)(81166007)(2906002)(82740400003)(356005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 10:55:32.6623
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e0e2aefb-fd21-4ac1-7c55-08db3c0d9a74
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT039.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR08MB7728

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 07/22] xen/arm: ffa: enforce dependency on 4k pages
>
> Adds a BUILD_BUG_ON() to assert the dependency on 4k pages in the FF-A
> mediator, since the current implementation only works if the Xen page
> size is the same as the FF-A page size.
>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 11:07:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 11:07:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520628.808414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmunV-0003Nh-7M; Thu, 13 Apr 2023 11:07:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520628.808414; Thu, 13 Apr 2023 11:07:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmunV-0003Na-4W; Thu, 13 Apr 2023 11:07:25 +0000
Received: by outflank-mailman (input) for mailman id 520628;
 Thu, 13 Apr 2023 11:07:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmunU-0003NU-Ju
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 11:07:24 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0622.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5dd9f92b-d9eb-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 13:07:23 +0200 (CEST)
Received: from DU2PR04CA0303.eurprd04.prod.outlook.com (2603:10a6:10:2b5::8)
 by GV2PR08MB8170.eurprd08.prod.outlook.com (2603:10a6:150:74::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Thu, 13 Apr
 2023 11:07:13 +0000
Received: from DBAEUR03FT045.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b5:cafe::78) by DU2PR04CA0303.outlook.office365.com
 (2603:10a6:10:2b5::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.31 via Frontend
 Transport; Thu, 13 Apr 2023 11:07:13 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT045.mail.protection.outlook.com (100.127.142.142) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 11:07:13 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Thu, 13 Apr 2023 11:07:13 +0000
Received: from 583c30d9f038.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B5692465-EF07-44D4-8B09-58C394C9F5BA.1; 
 Thu, 13 Apr 2023 11:07:07 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 583c30d9f038.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 11:07:07 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB9575.eurprd08.prod.outlook.com (2603:10a6:20b:619::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Thu, 13 Apr
 2023 11:07:04 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 11:07:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5dd9f92b-d9eb-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fJWgnlrg0Wg4hkIVgIUCJl1zVVBS0VE1HQf7JQV7XmE=;
 b=0IvQBB02+ryCJFB31AN3SgAaFH2GruzdAEfFHELAX2o0zqn0FOqmKDA16GfbfiFsMWPZPt/TNhYcD7GXmAwrNmJmQycqc4JMdmWhUhw48/IfmJFs8L0IuN2wxYNBT5ct/u0BW0xULLgpuXZOCgqq7LMlbw+J0Du/C3bmQb2bURM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hYxpVoQiolbrAZwtaaG74oJOTiGlSQcF/CcrnGTGPeFdcfKgdJ5aoWa6ub2V0d3LHzbmyYjG+ODUfaywRcohSRBVF+TcWeJVGg4tb0QGhCHt8O22FbTQ+wkR2KcLZufEfx/TEngKVIFdK97UTh6r7Rw7d+k5s6e6ciwpVmMNe5BDgkmEHKIuYt+C0YRDxWRnWZcEda04XugUlZFVxniz80EmGT6w+c7OtIysGnFrtl64JcAxjYWQgQ6zePNRLxPQSKeNkUF2Aq0EYxPfTYtrjjhPpKdwDxfTL7oX5dcLTpCBnnVC2yFIELlWxOJKZiM1ujJUX7Nksqo1TM7mEJavXQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fJWgnlrg0Wg4hkIVgIUCJl1zVVBS0VE1HQf7JQV7XmE=;
 b=j6HlG/SHaxh/tVVfEbLbKIpk1FfzQ6YCqIiOiMTFNa0nT2nN2jzn5yvnaRfzFFeZrJZD1nxjKDZGlJnvGPRwrMTMCMIJtBgz9vlROUYYeiuUJy+R8NTLPSrb/MuQd2jXy/COLh1//0reAt7ejheRIh97FiROYgbj1KqlurTZL/3dOID0Gm8ma++mM7cllr5XvFM6O4t4nriEkKhM5WZSDdmT9SjFjgiWCDbUBuI+tpSZEQ6/6Te9mbWZj+wdsVnj7ixJD4lc7hWWQi8B1c2/1J3WinSW1LwJjwHF/aqzEJLsGikUO+pOmTQokDtZwOw0KWN/tt7zmeb7KdeZTB+ghw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fJWgnlrg0Wg4hkIVgIUCJl1zVVBS0VE1HQf7JQV7XmE=;
 b=0IvQBB02+ryCJFB31AN3SgAaFH2GruzdAEfFHELAX2o0zqn0FOqmKDA16GfbfiFsMWPZPt/TNhYcD7GXmAwrNmJmQycqc4JMdmWhUhw48/IfmJFs8L0IuN2wxYNBT5ct/u0BW0xULLgpuXZOCgqq7LMlbw+J0Du/C3bmQb2bURM=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [XEN PATCH v8 08/22] xen/arm: ffa: add support for FFA_ID_GET
Thread-Topic: [XEN PATCH v8 08/22] xen/arm: ffa: add support for FFA_ID_GET
Thread-Index: AQHZbdfdlkWasQb0b0OUu0VnJOxcoK8pEVxw
Date: Thu, 13 Apr 2023 11:07:04 +0000
Message-ID:
 <AS8PR08MB7991020B2BB9676DACD2494392989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-9-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-9-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: EDEA08A656D40C4DA2380F8808E3DB5A.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB9575:EE_|DBAEUR03FT045:EE_|GV2PR08MB8170:EE_
X-MS-Office365-Filtering-Correlation-Id: edcb7e6b-2cba-4591-5a78-08db3c0f3c03
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 d/YWIZg8OFsN8TObEvR+Sm89L9a2gb0kajVkrioXR1dtZThz2kPSHihZ12MNeSiZak2pWOisaZoWGjXo1u7f8sLG8spqjJ+RcDCQk4L9OuCNARl3LnqABdSb8r0/7wYb4PPxMyc0zypTzJsP24FRqHmEwEFQCKqHL15FAkw4AaLxzYwJutb3nWIsM85XNHnJoialUM3YVFfI8mWy8uBf/0ESNum6NEUGpMWtHfDV2Enrz9LzrZNkhSg/7zoBVRX1B9KhMU3lTS1iOyl9zyMwraAZPjgRJ801LGk+u4tHmwYQkNC1ziBiD1gN9mWeUvEs08cTYgaeuAEFQ93jNUmUgBziGyP4nLh8fwmAwfbsNaN0446g3EKfChPplBUxKXKLX9NYRHG75qEhH7t5FNmkpRxoQvjiKEdCsqsX1kVDWdbXa0AQkVJRnrWukQxLUWGCbFF8PYR9PgBwVWQ7KJQu0R50pPxE7h6Wk+16PU2BzzSYrnkPSxbyWlrTZ7Wcmcz8ZNHogWfVvSoKIDEDxgEXmVks8jKEg8JJ2RrmrkoL+B+EEZsSY4RMDTxMiqg78IMa2OqkTR33N4/e1UsL82FQXTSDYdO4RzS3o0p/b0SEMv0Ss9Z4GsMsaDdvMV8plBI+
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(366004)(136003)(346002)(39860400002)(451199021)(478600001)(71200400001)(7696005)(52536014)(33656002)(55016003)(83380400001)(122000001)(38100700002)(38070700005)(86362001)(2906002)(4744005)(316002)(26005)(5660300002)(9686003)(6506007)(110136005)(186003)(66446008)(8676002)(64756008)(66476007)(41300700001)(4326008)(8936002)(76116006)(54906003)(66556008)(66946007);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9575
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT045.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	45487b85-16a8-44dd-fe35-08db3c0f36e8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7kQ1jFSIFJyjlfcsiFgtjWEB1i7EErLPiov6yIGxzxk0W9bqtcaMlKx0XVCs6tLwerEqTiafck0pLQKh7OVj3aiLFSPSbenq9LgBIZ/4P7y8T5aWUg7UNiAnMNDLC2i/gwwrhxWSyYyA79E4Zj1ftZ10MVrNFTDXaDhT46pvldJQEK646gtXpWNaRh0cOOemAF2kqeyShiq3XLeKmNd1lHQRGd4Up2MlzPRqPsyOed65bKzSCdAw4AFLPnoqIOCjpNgFVHPBMANEIW934z4wmU1dmj2MihlTKdDDkj9JSyuWOLDeoPX1zK13ZeiRDGoFiLWlXI6fPuMUFbB+lWQahy4sPoGjEugknaMvxkozO2DxdOE7+vbECNED2baa+f3N9ETODvQe4TccS2MVGEutXxZx18KdlEepvI50+zD10LO4Vs/lj6EL/zuTDdCRjGOu0SKyQF7bvL2EdDKd+60GtqMBgktYUajZUATjiPUg56NToEQg4BSyERK6HRy3g+h+KsWW6J6Z3KUmmbPtdRc/64zegaJz0Uy5hzgra1aquiWc9UMterfr3/RI9oD95SjzqibTQM6HC8ttw6a9xGpxj6I25Ek6uZCwhPx47Kgpy4eSIZxEjHWT8At/auKWrt9O2+hbt0xem7Kl7fCZ491XD8bNRNTnvQ1r8IoNFSSvczyqsS5y95r7osJ6QWKbXj+IyDrkEWWFe2yyrI9eSuGzYQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(136003)(346002)(396003)(451199021)(36840700001)(40470700004)(46966006)(47076005)(40460700003)(7696005)(70206006)(41300700001)(54906003)(478600001)(8676002)(70586007)(4326008)(316002)(110136005)(81166007)(86362001)(33656002)(336012)(6506007)(9686003)(83380400001)(26005)(55016003)(4744005)(2906002)(5660300002)(40480700001)(36860700001)(82310400005)(8936002)(82740400003)(186003)(52536014)(356005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 11:07:13.2085
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: edcb7e6b-2cba-4591-5a78-08db3c0f3c03
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT045.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8170

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 08/22] xen/arm: ffa: add support for FFA_ID_GET
>
> Adds support for the FF-A function FFA_ID_GET to return the ID of the
> calling client.
>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>  xen/arch/arm/tee/ffa.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>
> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> index 90ed71cbfda3..f129879c5b81 100644
> --- a/xen/arch/arm/tee/ffa.c
> +++ b/xen/arch/arm/tee/ffa.c
> @@ -181,6 +181,12 @@ static bool ffa_get_version(uint32_t *vers)
>      return true;
>  }
>
> +static uint16_t get_vm_id(const struct domain *d)
> +{
> +    /* +1 since 0 is reserved for the hypervisor in FF-A */
> +    return d->domain_id + 1;

Since you want 0 to be reserved here, perhaps you could use
"d->arch.p2m.vmid" instead? According to the logic in p2m_alloc_vmid(),
"d->arch.p2m.vmid" is also a per-domain u16 value that starts
from 1.
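For what it's worth, the mapping in the quoted hunk can be exercised in isolation. The sketch below is plain C with a stand-in struct domain (only the one field used here), not Xen's real type:

```c
#include <stdint.h>

/* Stand-in for Xen's struct domain; only the one field used here. */
struct domain {
    uint16_t domain_id;
};

/*
 * FF-A reserves ID 0 for the hypervisor (here, Xen itself), so guest
 * IDs are offset by one: domain 0 maps to FF-A ID 1, and so on.
 */
static uint16_t get_vm_id(const struct domain *d)
{
    return d->domain_id + 1;
}
```

Whether domain_id + 1 or d->arch.p2m.vmid is the right source for the FF-A ID is exactly the question raised above; the sketch only shows the offset-by-one behaviour of the patch as posted.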

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 11:16:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 11:16:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520633.808427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmuwY-0004sL-4n; Thu, 13 Apr 2023 11:16:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520633.808427; Thu, 13 Apr 2023 11:16:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmuwY-0004sE-1z; Thu, 13 Apr 2023 11:16:46 +0000
Received: by outflank-mailman (input) for mailman id 520633;
 Thu, 13 Apr 2023 11:16:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmuwW-0004s8-Qp
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 11:16:44 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20610.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aaaf72df-d9ec-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 13:16:42 +0200 (CEST)
Received: from AS9PR01CA0035.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:542::14) by AS8PR08MB5942.eurprd08.prod.outlook.com
 (2603:10a6:20b:29f::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 11:16:38 +0000
Received: from AM7EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:542:cafe::7e) by AS9PR01CA0035.outlook.office365.com
 (2603:10a6:20b:542::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.31 via Frontend
 Transport; Thu, 13 Apr 2023 11:16:38 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT051.mail.protection.outlook.com (100.127.140.64) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.30 via Frontend Transport; Thu, 13 Apr 2023 11:16:36 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Thu, 13 Apr 2023 11:16:35 +0000
Received: from 0d84159f08d7.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 33B58FD5-443D-4119-98CF-B201F18E2EDA.1; 
 Thu, 13 Apr 2023 11:16:28 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0d84159f08d7.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 11:16:28 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AM9PR08MB6226.eurprd08.prod.outlook.com (2603:10a6:20b:2d8::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 11:16:22 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 11:16:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aaaf72df-d9ec-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8bA0eVWBnPdXLQXkC9X3VWUJ+ot9sWzORV2Kx2lSobI=;
 b=aelvx9QnNuU1cppr+I9HrbkmcZuCDMaAGbjjNW9XGWw7Em0qmoWYGo5WDGo+MFWdQB8UzuSFTh8N6clVGQIhNQQ9tbUcZycPqr6rOEp4pJi1rJRaOcnfLXZSH7Z4r29W7sExBN/vqTkPOcz+eAjNt8iddk0aWnz4c43h/d8nNC4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Tc6t2s1WPEAs37dKPda17MAHFqbgc00cS075TBex9EHlUtA0Rqm5zxHFkMuBqngwtwSNPdy0slE36sVd2vZRgaJGQ+POfGGFBL8oFRgx0FeEKsYfO+8/64CKiJNEMG+o4g2fnkXzmZQSVdA+Pn7KpmJsIHCIrDRNVp33EbZxR51dflDcVbuPywW24fvCXaobhZMqsbXYHOcWT/tVxMgWHzZWiHiksn520YVMpLETa6p8BkdbYPOeFhnd8ZKWVX1DAsJzUUX2Fnq0jszyb3MbrlsnZf8B7uDGnLeKH2ds7GY54X14ii2hPHO1WKM83yo/rnEfZdYzB0VP7shgR0j33g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8bA0eVWBnPdXLQXkC9X3VWUJ+ot9sWzORV2Kx2lSobI=;
 b=LUkKrRWCmt/cCsQBfQJXwWqarRnb8nUjNVhEcHSoIVsPrbOSHd8XFcCFoR2j3TRRhxDhiaM7pd1qCZrYsLFwfY1qg6JGa2gjt1krcC9uVFLzSQHHYbUV77lwL8FpzzS37zBxapRSnU7pSK7EMT2ZRJHTS+AF8fZkMr/5RqmDFUIE/XPvEjtpyCckVw41B9tGIb0mXNJQDTZdoGtJHvK9gf2g1JmSo8Sx5EE0uNtPoPaLqiUKPIKZoFlidEUoV4OTb3s+iodK9RzsDX+Cg+pVvRuOV9J8RQ2WWptb0NkQpS1hzsxblNrL8EkOKDB80ayOi5LWChomUexJ6swcb0xbcw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8bA0eVWBnPdXLQXkC9X3VWUJ+ot9sWzORV2Kx2lSobI=;
 b=aelvx9QnNuU1cppr+I9HrbkmcZuCDMaAGbjjNW9XGWw7Em0qmoWYGo5WDGo+MFWdQB8UzuSFTh8N6clVGQIhNQQ9tbUcZycPqr6rOEp4pJi1rJRaOcnfLXZSH7Z4r29W7sExBN/vqTkPOcz+eAjNt8iddk0aWnz4c43h/d8nNC4=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
Thread-Topic: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
Thread-Index: AQHZbdfc5IUuFaQpQE2OHTLbOE6lL68pFi8g
Date: Thu, 13 Apr 2023 11:16:22 +0000
Message-ID:
 <AS8PR08MB7991020558FDF641D9E89C7192989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-10-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-10-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 6AE5779DD03AEF4B9F88C8470DE294F9.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AM9PR08MB6226:EE_|AM7EUR03FT051:EE_|AS8PR08MB5942:EE_
X-MS-Office365-Filtering-Correlation-Id: 6482099a-1398-4ea6-1d27-08db3c108b8b
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 gzzaCJQn2mmxhJ9vQyyc88FFg8ZglnfhQ6kjctnF69vtP3Fmj+FAG5999uvNg9ch/Bp2FlJQX0KYGPpeTyLtg2pltZISegpbwNwlOvxRPzgW6WqM2OXHUscZW5PJG6Xf3bOZy0EUg3sXkugDG+p8VBF6v1Cg7mQY50/HTMdl7xZ0SAc7DnxeKvCzqZFzlUwSr9vOIuv/awGo0jtzCgvZu1glNdotuvO0DEwY2mvJbxEpVPk6m2g7yFiVGuQPyvsuq5E8wVIFJSOtaV7VZP43UsR40cvTjT36tDjB9Ktq7Jh+Aga9abadwBAMPALZlgJbnkUncwVTjyTEA4+aPBWdjEgEjq7jRIIsPG5+M9hGsOCnhFLRU+RMmnPbZ3MQ05nVHIcMijUq8mpPtNgyO1upbdHPD1QEQaZ8C7BHyjLFktkq7HlG7A26GJqkRpJu2pgu7wlfawIqdI63ahTsOFo1jS6CoriDJRT6x/RXzZr1b84ubD4jLJ8yIn/odV4LC7VJ362Yn3phpEUVhveJVfO/rC29wq+JrFi+GjuvDxpLqMCLBVdagC1g8NSnExvWfYHFUf1IDpWc1cAHvbFusUSIlknHwn/H2yv/Yu0pG/wdgWjjEGpoP5AHdlX/goJI9k8b
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(366004)(346002)(39860400002)(376002)(136003)(451199021)(26005)(8676002)(186003)(8936002)(38100700002)(6506007)(9686003)(478600001)(316002)(33656002)(54906003)(86362001)(76116006)(66946007)(110136005)(64756008)(4326008)(66556008)(7696005)(66476007)(66446008)(71200400001)(55016003)(41300700001)(122000001)(83380400001)(38070700005)(52536014)(5660300002)(2906002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6226
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	05d54294-d292-4bce-3b75-08db3c10835a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1Ik2TXdFCAdcQCi1+Jq7+BFke9bMsNpjHS25MYOrt1/GPM8dsGLIAHHiPGXiG3NNQFfhxrnZMWyeyeeD26DLl8PGJlBGkOGw68FWIDmRkuray0lVFTiz+Y5O9PQY0ZioohSViEhI6YHnWjI4zy6mFYK15lT/0tg+4+IeFECeLAy1aaYq3g5t6ZqIP9IIgUzJYcoRIPHmM/8etr90HUW0ff2dQLCBcd9cDuBs1OvCHzMBzAcaCUA5gjr7aI7Q2QljbvsLqnFXqVyJ8nDWFZvQBB9Ila4/qGhNIV2L6kfJCpXvk5tHqUfw7PEiSF/HNpNh0txIKc+vxj+6qipfZxNXDjecYWW0N8eeoGN/FOiOvSaigE3MwaIhLF7iPeHV+6OvXv3RPQQlFqvII0Y1W1mukPfv45zKxk8qEkd9S9QTk4M9C8iTmy+gShvPpn6HtvCxZsA+i3wpR9r/UcE/TlIKM/S6oiK3oXsYY0RWtDMIjRc0RRrYq4hbUSKpXI7/8Rvxwr9aU2upbURJOuFngOXHxFyCnDE/E8MOT+l310jE2r6lnSGV0JNOJ7A9T97bXT6BhaslIFAXAzVxQdS846VNDtJ8asYesW/s0CGd3x8JAYujBIqPrOZZkVh1oOMOpIi5ay0Mq8yOcRTqkrmXDvLbv8rhZL3BOXGU0Ay82tYE+Nl3SzN+8jNXCaX2l15K3XI5K/xT8lgv+NkVgu+0vYNLFA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(346002)(136003)(396003)(451199021)(36840700001)(40470700004)(46966006)(8676002)(33656002)(36860700001)(83380400001)(47076005)(336012)(7696005)(478600001)(40460700003)(26005)(6506007)(54906003)(110136005)(9686003)(186003)(2906002)(52536014)(5660300002)(81166007)(316002)(4326008)(356005)(82310400005)(70586007)(70206006)(41300700001)(55016003)(82740400003)(8936002)(40480700001)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 11:16:36.0725
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6482099a-1398-4ea6-1d27-08db3c108b8b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB5942

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
>
> Adds support for sending an FF-A direct request, and checks that the SP
> supports handling a 32-bit direct request. 64-bit direct requests are
> not used by the mediator itself, so there is no need to check for that.
>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>  xen/arch/arm/tee/ffa.c | 112 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 112 insertions(+)
>
> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> index f129879c5b81..f2cce955d981 100644
> --- a/xen/arch/arm/tee/ffa.c
> +++ b/xen/arch/arm/tee/ffa.c
> @@ -181,6 +181,56 @@ static bool ffa_get_version(uint32_t *vers)
>      return true;
>  }
>
> +static int32_t get_ffa_ret_code(const struct arm_smccc_1_2_regs *resp)
> +{
> +    switch ( resp->a0 )
> +    {
> +    case FFA_ERROR:
> +        if ( resp->a2 )
> +            return resp->a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    case FFA_SUCCESS_32:
> +    case FFA_SUCCESS_64:
> +        return FFA_RET_OK;
> +    default:
> +        return FFA_RET_NOT_SUPPORTED;
> +    }
> +}
> +
> +static int32_t ffa_simple_call(uint32_t fid, register_t a1, register_t a2,
> +                               register_t a3, register_t a4)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +        .a0 = fid,
> +        .a1 = a1,
> +        .a2 = a2,
> +        .a3 = a3,
> +        .a4 = a4,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    return get_ffa_ret_code(&resp);
> +}
> +
> +static int32_t ffa_features(uint32_t id)
> +{
> +    return ffa_simple_call(FFA_FEATURES, id, 0, 0, 0);
> +}
> +
> +static bool check_mandatory_feature(uint32_t id)
> +{
> +    int32_t ret = ffa_features(id);
> +
> +    if (ret)

Coding style nit: You need spaces before and after "ret", i.e.
if ( ret )
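As a self-contained illustration of the nit (and of the return-code handling in the quoted hunks), the sketch below mocks ffa_features() with a hypothetical "only feature ID 1 is implemented" rule instead of a real SMC, and uses made-up values for the two return codes:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the FF-A return codes in the patch. */
#define FFA_RET_OK             0
#define FFA_RET_NOT_SUPPORTED  -1

/* Mock of ffa_features(): pretend only feature ID 1 is implemented. */
static int32_t ffa_features(uint32_t id)
{
    return id == 1 ? FFA_RET_OK : FFA_RET_NOT_SUPPORTED;
}

/* The check from the patch, with Xen-style spacing: "if ( ret )". */
static bool check_mandatory_feature(uint32_t id)
{
    int32_t ret = ffa_features(id);

    if ( ret )
        return false;

    return true;
}
```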

With this fixed:

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 11:20:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 11:20:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520638.808436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmv07-0006Lg-OF; Thu, 13 Apr 2023 11:20:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520638.808436; Thu, 13 Apr 2023 11:20:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmv07-0006LZ-LY; Thu, 13 Apr 2023 11:20:27 +0000
Received: by outflank-mailman (input) for mailman id 520638;
 Thu, 13 Apr 2023 11:20:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmv05-0006LI-Pa
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 11:20:25 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2060d.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2c9b8484-d9ed-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 13:20:20 +0200 (CEST)
Received: from AM3PR07CA0090.eurprd07.prod.outlook.com (2603:10a6:207:6::24)
 by DB5PR08MB10164.eurprd08.prod.outlook.com (2603:10a6:10:4a1::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 11:20:17 +0000
Received: from AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:207:6:cafe::b7) by AM3PR07CA0090.outlook.office365.com
 (2603:10a6:207:6::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.6 via Frontend
 Transport; Thu, 13 Apr 2023 11:20:17 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT046.mail.protection.outlook.com (100.127.140.78) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.31 via Frontend Transport; Thu, 13 Apr 2023 11:20:17 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Thu, 13 Apr 2023 11:20:16 +0000
Received: from ea88f1c3a20e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 87783C65-DE60-4F96-BA21-B9EFE4E583E8.1; 
 Thu, 13 Apr 2023 11:20:16 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ea88f1c3a20e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 11:20:16 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DU0PR08MB8662.eurprd08.prod.outlook.com (2603:10a6:10:402::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Thu, 13 Apr
 2023 11:20:01 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 11:20:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c9b8484-d9ed-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V4I0Ly3VMIqge6eo/oxfppyc3rD8Kuvq1z0VJHkg0MQ=;
 b=jPZ8wBh1dbImg2GMAhyjeoNv+YVyND/OCDHY1OMn9ZvMiopM+sysg4fAk/0UHi1l4ywow+g+QeecLObGUWJZKjwReYqJKGGkHGpU9p4KvPc+Eg4rKrjfK4I1g7k5EQ84AyisJFCXO7NW3lJZtWtG+p0GWl0Mv+FmouVk8OyU+ds=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DNar9kSHTk9oPnDWajJWaMJ9TNcnUCxxWZb9Zw1Ln2ogobxBd0XwRMiTiHOaSK7ahBoGtWBIjHG3dkjtDzBaAQq/34V4cGp7wD9zYgxEH5QV0Xu2eHOQgx4rnQjKncr0H+WJ7SPl04Mdw98U1QfahfUHJo3Qcfb9atLPvrILW9ZRQBvFSC9LjDue6KLT08SvJEYr3rNiS9VYXjqDyTvPtocOgF0g7owfc3SakeAW/KvD/4adfsrt0q7H88hGbZU85I+1FPx2e4rqD0BS2Z7l+WHk4uxO4SJTnIFGZdzZSLPgFhF+DUJ6PK7Sk62ZJZXwOcY7RM+6ClDkVcT/41DgEw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=V4I0Ly3VMIqge6eo/oxfppyc3rD8Kuvq1z0VJHkg0MQ=;
 b=DvqLaj9v042aIvN55KGR457kqCR0qlRxbtJ+yfrgScbtnnNkP19FdF7bZbfCbKdQYd74naj4Vj9aZaHE025x/Z53fAKJ8aGxHph0LHZTdsX9v2aL2Iohe43zQk1c4FnJrm4mQHLWYgau4cEurq5EHZtulCdtnrEDEmldlGx3Pj24uKbQimiJU8v27v7JtgroM5m4Uqsen+gWyjPWTyDDBFsMmVMfXeDXP33o7wF5BDvunGHO81fSCqMuLqTCW8MCmvohz3+/e+2Ra3F0BAoJ9kAxgkOhqgqwDtMQUSkNz4FbAOpMVZulQHz5jek8Q7ONlGTERJYHiszFB/boro2B6A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V4I0Ly3VMIqge6eo/oxfppyc3rD8Kuvq1z0VJHkg0MQ=;
 b=jPZ8wBh1dbImg2GMAhyjeoNv+YVyND/OCDHY1OMn9ZvMiopM+sysg4fAk/0UHi1l4ywow+g+QeecLObGUWJZKjwReYqJKGGkHGpU9p4KvPc+Eg4rKrjfK4I1g7k5EQ84AyisJFCXO7NW3lJZtWtG+p0GWl0Mv+FmouVk8OyU+ds=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: RE: [XEN PATCH v8 22/22] docs: add Arm FF-A mediator
Thread-Topic: [XEN PATCH v8 22/22] docs: add Arm FF-A mediator
Thread-Index: AQHZbdfkK4+Lpc3ia0Cwa4AcAShE7K8pF0vQ
Date: Thu, 13 Apr 2023 11:20:01 +0000
Message-ID:
 <AS8PR08MB799163D2CB5E1184FC2F978B92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-23-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-23-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: D794DA9012850145B3787C7B4B253050.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DU0PR08MB8662:EE_|AM7EUR03FT046:EE_|DB5PR08MB10164:EE_
X-MS-Office365-Filtering-Correlation-Id: 9dffd47e-076c-45ea-687c-08db3c110f43
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 9W7k1OX8qJTJhnSyPFJFRSNOrZMjSFgz9WCCnVPIvLLs+0PKR8pHtc2gH3qeoEMoymGcJtxfVtNkhC7qyaDrQ5RxlzstF65iRF5LYG0OKN3abk1dcRXlzEe2dRdHpk8L1Ke5cS95XVoG/uAMuqP0j4B+EKlWIYKpIGecyAXv1AxwDeAR8c7OEiM4T8INkFqQ+38F60AE+zX8XhtfB1ZRQssXvgHgAlc/Ea4P4wpLS9Djvd5muJy3wZJeg+CD3QShGvIaP65K/joE8CfQWXmOvvw+HmkeC8/G8XWVWtryoC7RSsvE83JemoI8i30Drj+zrAbcEc4QA7ZvfZ6NAkNHLQ0YmnSWmL+vqnU+mPsNZCf54zCXe2LUIkunLSonWWu5J9y5Pw1bCoidJQcoo9UBQ8cjYXLRGLGf4Tsnmbh95iDSMNOuCbhm+YNXCwRpncano6Tbbgt4kISx6YX8Ena+R6dzBQEViLYxhcl9pteqdho3Bpa7k/gjLGQAKmqs1/Jq+ZY8LvpndBrFDVaPro6bn5N+Yhrcv8mmIM7+m5Y2silYZM6eaI+gUn1KmtHLqrebJk+juV1pN2amj7kEeM+wS+Bl6pmdQ9l7IZRYVRFzC8s=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(376002)(396003)(39860400002)(346002)(136003)(451199021)(4326008)(71200400001)(41300700001)(66446008)(66946007)(54906003)(64756008)(66556008)(478600001)(8676002)(966005)(66476007)(76116006)(110136005)(316002)(33656002)(86362001)(6506007)(9686003)(83380400001)(26005)(7696005)(122000001)(2906002)(55016003)(4744005)(5660300002)(38070700005)(8936002)(38100700002)(186003)(52536014);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB8662
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	51d9b2bd-d348-4319-58aa-08db3c1105fb
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2qYJAIeL4JAEMgucaI1evwoUBFcRg3dJddbjGacdhC2SBkHB4W+6c3Pn+AUCKF+DcTvM+gQgJDFtezf5ns+iFkUEH+OVwK3RQ26b7fjZwk9kY6NPlGLHGw75Gy3FzCAaXeKDyyWwv76LgfR15lRJq4mPtSSv3ldjrTwRUtimmFDc6oZelMkiKPp7l9M8PDIv978x1Ea2mM+f0ww1Jk0UFWhRVnC6aALHTGcgJMJyWnTkzVTx38mJbiSI/FWkNJjt3bsqcFoPePECeHu7IQrqnBOntLCBHXY6dPqqwdF92+grUEviqCAEM5oLWxuvruTvncXUC4y5oGEP78RYFkl3HgqlNwEgRk9Cw519nuEGejglUMN4lnxkfr/mIPEoo69CpQzyR+ag85ctP5uu4V3Tm+dW6Y1uAMHT7rbZFFxHUfGqmi7hc+6oFZdS7nUogmrsQmVorTVt5Ysj6MqcBAntPmq5opPjNi3vBY2QdSHD6Qs1WlBa+FcF5OdYkk/I6VxMoSnmNW6gpcmvf7pHU6LAcqk6WZdb3hQQKgXSuk/9QqtzAtBR8rxnZMAXi4BCbk0DIzyQ19qLw9JKdJEurhwEcy7C9TS5krwQrHTKNLPxF/gLequ6aoo4Y2WVJFL6Trz8FQ9foGx5eCwtulo7/mQWolVbC1eDCJSt3nwXtHfyo0rcbGfCroobVFcv/cSEcZK3
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(39850400004)(396003)(376002)(346002)(451199021)(46966006)(36840700001)(40470700004)(110136005)(54906003)(478600001)(83380400001)(336012)(36860700001)(33656002)(6506007)(186003)(86362001)(55016003)(26005)(9686003)(107886003)(47076005)(81166007)(82740400003)(40480700001)(82310400005)(356005)(40460700003)(7696005)(966005)(2906002)(41300700001)(70206006)(316002)(70586007)(4326008)(4744005)(8676002)(8936002)(52536014)(5660300002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 11:20:17.0618
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9dffd47e-076c-45ea-687c-08db3c110f43
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB5PR08MB10164

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 22/22] docs: add Arm FF-A mediator
>
> Describes an FF-A version 1.1 [1] mediator used to communicate with a Secure
> Partition in the secure world.
>
> [1] https://developer.arm.com/documentation/den0077/latest
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>  SUPPORT.md               |  8 ++++++++
>  docs/man/xl.cfg.5.pod.in | 15 +++++++++++++++
>  2 files changed, 23 insertions(+)
>
> +B<Arm only.> Allow a guest to communicate via FF-A with Secure Partitions
> +(SP), default false.
> +
> +Currently is only a small subset of the FF-A specification supported. Just

I am not a native English speaker, but I think this sentence would read better as:
"Currently only a small subset of the FF-A specification is supported."

Other parts look good to me, so:

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 11:25:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 11:25:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520642.808447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmv4M-0006vX-A0; Thu, 13 Apr 2023 11:24:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520642.808447; Thu, 13 Apr 2023 11:24:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmv4M-0006vQ-75; Thu, 13 Apr 2023 11:24:50 +0000
Received: by outflank-mailman (input) for mailman id 520642;
 Thu, 13 Apr 2023 11:24:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmv4L-0006vK-Nh
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 11:24:49 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20622.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::622])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cc1b33f7-d9ed-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 13:24:47 +0200 (CEST)
Received: from DUZPR01CA0032.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:468::11) by AM7PR08MB5480.eurprd08.prod.outlook.com
 (2603:10a6:20b:de::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 11:24:33 +0000
Received: from DBAEUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:468:cafe::1b) by DUZPR01CA0032.outlook.office365.com
 (2603:10a6:10:468::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30 via Frontend
 Transport; Thu, 13 Apr 2023 11:24:32 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT023.mail.protection.outlook.com (100.127.142.253) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.31 via Frontend Transport; Thu, 13 Apr 2023 11:24:32 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Thu, 13 Apr 2023 11:24:32 +0000
Received: from 80e7a1cdee1c.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F1F9C4C4-5C27-4C9C-8F01-058FA06429BE.1; 
 Thu, 13 Apr 2023 11:24:26 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 80e7a1cdee1c.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 11:24:26 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AM9PR08MB6641.eurprd08.prod.outlook.com (2603:10a6:20b:306::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 11:24:24 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 11:24:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc1b33f7-d9ed-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5k82IwOeVZsMCaZrdotkzys+/bc2SQkXVVA1MVPmjuM=;
 b=wGf5d89sfvHgJlnh3gZNsXPo2fQJ+L6az8CGWbawMjSqzPn5Lfh54W6R2Z+d1Rr6JgR4Iu05vR6LnXavHSxzrnDkJ8iSwNLGL2XpW/J4Fc2aUGwh2JGOiemEx/bCEQil0IDX/irFueicOne24tebl6D/7V56PXefidBx6W3h53k=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GK4LcPAhxphz0DN+iuE5iGG5iVxOj5D3RPaWqFDnhvqI3XJqwwDOYqi5bC7i5cejhaqYL8wXihIPteWaS8TH5uxXFzTp3tbg/iukAfsFBGFMsXoUrtoASD/UpAZZ1h48Pv/ZTAKv7mt0kvvlWsLbFeZ6JfVkUj5PD2ksbR81TJ6+dg6Lta1Rn/DuS0B4/m/G1N8pq1lVQRuiY8Jkg/Zbg8CLfqSFa3bORacjNtbyO6f0cxrlV46ZY4EP/F9ygmt4L+9H/Dfs5hbF6enULm9lJW5r6ETsIjdG8ju4PHwDs47/yaAjr/pQSoT/1TJDPCgPrh2mPL2cczo9U3hcjeMXXQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5k82IwOeVZsMCaZrdotkzys+/bc2SQkXVVA1MVPmjuM=;
 b=LqCC5OneCzWqr2UtZgfVQ72q8u/tVvgFoGrvp6TNJnNZkL8O1lofuTI66hclGu0wGUkOE/7n/QVkFbqEDXcFyZIS2SZ7RDwtAloufd/57kIle7y95e3ryCCeztM6OIjM3S+1Ps6lzyIn/0pxPWipMpb4LffqN48BrGQ9tDuI9FfIChzU1kNdAKWpfnTgzd9SWiNHUOvlmOiWaZpggofYDOfHsqs734fBE5XfT467ImGRaMdw+dow5HdnHQdHJotcmyHKypevt1k9yk/QS1AqT9U6ogDv8XKNZ2MhHNfePbjjt6GUEB90X7FPF5X9zjdU60+l8HLzT3MOfFAPdwSFkQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5k82IwOeVZsMCaZrdotkzys+/bc2SQkXVVA1MVPmjuM=;
 b=wGf5d89sfvHgJlnh3gZNsXPo2fQJ+L6az8CGWbawMjSqzPn5Lfh54W6R2Z+d1Rr6JgR4Iu05vR6LnXavHSxzrnDkJ8iSwNLGL2XpW/J4Fc2aUGwh2JGOiemEx/bCEQil0IDX/irFueicOne24tebl6D/7V56PXefidBx6W3h53k=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [XEN PATCH v8 21/22] xen/arm: ffa: list current limitations
Thread-Topic: [XEN PATCH v8 21/22] xen/arm: ffa: list current limitations
Thread-Index: AQHZbdf3MKIAKwiaG0CPrYU6IaiuNK8pGDOg
Date: Thu, 13 Apr 2023 11:24:24 +0000
Message-ID:
 <AS8PR08MB79914AB812D7BC35ACBC80E692989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-22-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-22-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 0D6EEA0095F4834E8EDFE70681FBFCAD.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AM9PR08MB6641:EE_|DBAEUR03FT023:EE_|AM7PR08MB5480:EE_
X-MS-Office365-Filtering-Correlation-Id: 4b3cb212-52bb-44cb-bf11-08db3c11a779
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 WpTbuMSBIIamfo14OOL35PhS5nph+9NHkUUZYp36iXGoNW5YjpdQQV6Qip/RISA6aHslthaL/DMp593zvneGTjEapHa64N0BZApA5E0ITIBzNxeYfB+wWt7XOc8MsX2SqeIgLzHAa2Bw6efPXWslAYEoFDoJKPGXf5jySh4lXJ1dtWdB6PQfX0dgcAG+w8iR69AcZcfSTllrSZDZ9RN+oYMBZjAaGKtglzsx/YlJp50wD+/txKOeO1Vj2FwXGDDJ0ayNIDCdI4sabCM784tBDcSUBhGXasm7uFWQvCLUV3qtg1UcVVR7/kWrUs93tn0UTX4WSnBCbcpXgb2ja9vbBobWVvVs0NY8/1cRY48UAlvcTgfcHltFYM48BBQRdEIyHhipdViAu5e8uA96dDlGfdMYaUGWeCajjch/i/7X3sJhE3mEAhIC0y+nEBGFiG9z+6pHbUdS6QG5VZDQd48RBbiTuC2EKiC83uGpRsLpgTKO8mJRnPN4Jluc8TTZ26iHGxz9hw/z0QDMMesgxDvtEF9ceZyPSur2/YRJFi6IwQhEgEaZQ0kfRABcQ2gM2IIydnp6K/GxMCfNvNp5qJh++MtzSejn8EbuXzH7yicicuk=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(346002)(376002)(366004)(396003)(451199021)(9686003)(26005)(6506007)(186003)(2906002)(83380400001)(64756008)(8676002)(66446008)(4326008)(66476007)(66556008)(76116006)(66946007)(71200400001)(41300700001)(54906003)(110136005)(316002)(478600001)(5660300002)(7696005)(8936002)(966005)(52536014)(55016003)(38100700002)(122000001)(38070700005)(86362001)(33656002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6641
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8ba5d430-0da1-4f6e-515f-08db3c11a293
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	idgqJOQ8S6/nay58TObFZaq/f7/kGeeJjmjg1FiXRRU11KVppHog3cNdbyySPkWQ8+x6vAdhiCcnBbzNbQ6m2A446qfRHddFhPI3xq55ijjBMLkrcggMQ/WXn3WeQzO0NB6zXp7iPRWZU1fTlkyOPZ+hvvJzkGiMXc4wbCsqCFBE1gbuaaQZ8YcNTSbZ2+W2oKiQxzsyqF14fi2C2Y1tbnQ8ASt4tI4IzHl9WoTBnu32U0bGVxSS6Mnnu04JO2dYMEpl6FfRUSdhLZuGZmjb6z24dOCaIRhVjHZ/ZER3sWWV8Q22EqIqF38M77wplr5yElQZWMz02FSlDP7IEZbT4x9gdex2J/NWprdrZUCENV8Ql4+BiJ4PXFsu7RjpzZngI/R1jwMoAqmREF4+8mA0GAAPNA6o2Y9yb+Mbfr0Hm84uFn0SP9NqN+HeHss3RKte7vu9LnUzh4VMxp5/5uv1Y6Q8ZS4LassPrF/5IuZpvnOqKoUk2Nm57LNHAlG2vrh1U/riUz2arr/UOIGYBpKuJ/ckYQjN/Yj6oLFh+isq1BTmkwo47m9wjGX5r/AkGlPNLwilFaxqRwL7qV09/9Ld6q3D/ltTSIZ0AmhawMmmhjHHim410eFJZopjlabanRGkeuteAklolsu+TbUF6VT8MUDLsvQM6RgnpyvSHLuHv747tifhq1IZop9ngrKrZMQF
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(39850400004)(346002)(376002)(451199021)(46966006)(36840700001)(40470700004)(36860700001)(70586007)(47076005)(83380400001)(70206006)(6506007)(4326008)(966005)(26005)(54906003)(7696005)(478600001)(110136005)(186003)(9686003)(52536014)(2906002)(40460700003)(33656002)(5660300002)(316002)(81166007)(356005)(82310400005)(336012)(82740400003)(41300700001)(55016003)(8936002)(86362001)(8676002)(40480700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 11:24:32.4785
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b3cb212-52bb-44cb-bf11-08db3c11a779
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5480

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 21/22] xen/arm: ffa: list current limitations
>
> Adds a comments with a list of unsupported FF-A interfaces and

Typo: s/a comments/comments/

> limitations in the implemented FF-A interfaces.
>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>  xen/arch/arm/tee/ffa.c | 32 ++++++++++++++++++++++++++++++++
>  1 file changed, 32 insertions(+)
>
> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> index 0948cc636871..6424c222c885 100644
> --- a/xen/arch/arm/tee/ffa.c
> +++ b/xen/arch/arm/tee/ffa.c
> @@ -13,6 +13,38 @@
>   *                https://developer.arm.com/documentation/den0077/e
>   * TEEC-1.0C: TEE Client API Specification version 1.0c available at
>   *            https://globalplatform.org/specs-library/tee-client-api-specification/
> + *
> + * Notes on the the current implementstion.

Typo: s/implementstion/implementation/

> + *
> + * Unsupported FF-A interfaces:
> + * o FFA_MSG_POLL and FFA_MSG_SEND - deprecated in FF-A-1.1-REL0
> + * o FFA_MEM_RETRIEVE_* - Used when sharing memory from an SP to a VM
> + * o FFA_MEM_DONATE_* and FFA_MEM_LEND_* - Used when tranferring ownership
> + *   or access of a memory readion

Typo: "readion"? Maybe I am wrong, but I cannot find this word in the spec.

With the above typos corrected:

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 11:31:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 11:31:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520646.808457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmvAu-0008Nm-1O; Thu, 13 Apr 2023 11:31:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520646.808457; Thu, 13 Apr 2023 11:31:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmvAt-0008Nf-Ud; Thu, 13 Apr 2023 11:31:35 +0000
Received: by outflank-mailman (input) for mailman id 520646;
 Thu, 13 Apr 2023 11:31:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmvAt-0008NZ-AU
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 11:31:35 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2061a.outbound.protection.outlook.com
 [2a01:111:f400:fe12::61a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bdaec907-d9ee-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 13:31:33 +0200 (CEST)
Received: from DB3PR08CA0036.eurprd08.prod.outlook.com (2603:10a6:8::49) by
 AS2PR08MB8310.eurprd08.prod.outlook.com (2603:10a6:20b:555::8) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6277.38; Thu, 13 Apr 2023 11:31:27 +0000
Received: from DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:8:0:cafe::8b) by DB3PR08CA0036.outlook.office365.com
 (2603:10a6:8::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.29 via Frontend
 Transport; Thu, 13 Apr 2023 11:31:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT052.mail.protection.outlook.com (100.127.142.144) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.30 via Frontend Transport; Thu, 13 Apr 2023 11:31:27 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Thu, 13 Apr 2023 11:31:27 +0000
Received: from f99560a82f49.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A8043CDE-EBAF-40DB-BC5E-C5BFE48B29EE.1; 
 Thu, 13 Apr 2023 11:31:16 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f99560a82f49.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 11:31:16 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by GV1PR08MB8331.eurprd08.prod.outlook.com (2603:10a6:150:86::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 11:31:11 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 11:31:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdaec907-d9ee-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=agRwrRlllO1RgHAX9L5poJQYfyokSC1xSJJLk9b08DY=;
 b=IHACNtRS/xQTn7IeqS7UF1Xv6ooMq1KxTGgkdoP9M4XR/iCKcGeubiAVJl9o3jYWmqIAVPEiLUhJGiz+4P6+3He9j7eb4eTPBmDTFDBgaHoORQpcXXqQGcYZ4LDHa3ldr9dsg3VZysmQpWRFCW3DATghLxrbb6Ot0N6q3D3i5mE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ao0EgnI9VQ7baSq1IquIPFnl3f2pvCy5EM8czhuyHb2pWyVWon2KiuNhrVL1gIFV+8hZeLvM1EYeGApMrqX2QzT6hBG+yaNvGyY5byRWR4GWm3UHMjXmq9I3O1/N4ZESeUCP9xRZYHC+LC+nWQFjChxlLYQt0nSFWXBFbL+bdI9tXc+aBefn/zBw0/BTQoOw/UJV9CuzIsTs6q67GwR5YkIfZdilfYsk1k1a5LnaEeweRsAHn8Ne2SHKbwbYn5Mou9NQyhKBuIWi9SG01oCADrMvEeOHmmLrz7Mx/XSByd2mYcLq7hBorDf9d7fjc6iYr3EDJDt41hvQ/ZUpE2Lfiw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=agRwrRlllO1RgHAX9L5poJQYfyokSC1xSJJLk9b08DY=;
 b=ibV+hGLq9XrESRf4wJ3WduIE61j8QrCe5pfN/A4each+J9zPaQheNiVp3atHAmOON+bht5qTZhlJx5YiuVbuuqffCcCwQn7T1nb7gwi1D3JqnSfLpdyimJFDNKCGtKZIEGvb9e2gfsJiOui2oOLHixB2050FM+xC5LpP+mAtQSyDzWXhXfuP/hP22ibCfDnfosTqrrkQHSq6MO41YjHRJ6Tu8rplIOztR1ZxHQnSf+InUgHmWSmQBIPM6Q/YqFQ9YJM2gfugQSuSRM8Ga+WVeXkRMJccIbHr4hUSFMyWUl19rNpRPGDi/l2JnApExNNeE57xZk2hqS/6I7hEZrL4OA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=agRwrRlllO1RgHAX9L5poJQYfyokSC1xSJJLk9b08DY=;
 b=IHACNtRS/xQTn7IeqS7UF1Xv6ooMq1KxTGgkdoP9M4XR/iCKcGeubiAVJl9o3jYWmqIAVPEiLUhJGiz+4P6+3He9j7eb4eTPBmDTFDBgaHoORQpcXXqQGcYZ4LDHa3ldr9dsg3VZysmQpWRFCW3DATghLxrbb6Ot0N6q3D3i5mE=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: RE: [XEN PATCH v8 14/22] xen/arm: move regpair_to_uint64() and
 uint64_to_regpair() to regs.h
Thread-Topic: [XEN PATCH v8 14/22] xen/arm: move regpair_to_uint64() and
 uint64_to_regpair() to regs.h
Thread-Index: AQHZbdhGMHWAhk4AakOJfz+iMxVrZa8pGvlA
Date: Thu, 13 Apr 2023 11:31:11 +0000
Message-ID:
 <AS8PR08MB799105F5A90ECF0085BA63A492989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-15-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-15-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 363F3F4C22C48D45998409FFDB132FB8.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|GV1PR08MB8331:EE_|DBAEUR03FT052:EE_|AS2PR08MB8310:EE_
X-MS-Office365-Filtering-Correlation-Id: df56719b-03a1-4bc7-4105-08db3c129ec2
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 jgzaqY0KqXRIWSnRBw0lfEFtMwmjrtpNYVEmhmUDeVQx4IRlC4tGJM12rYOOGQRBQvYdMXQgd2/y8FiZlTOc05/oBeGnmpXIKzujGP+5i6AThuZAsleo7Yf2CsnK9c0xvusCozRhg2ICdPNTAP/IMOntSqi6mZ9JL3JsWZOCaIpNf2CygnDrIVjuA5nNhZDtZFWjXPULN0o+bJmJrwcRut3074YbR6xfcq7k9ic436k3/qHvvD0tKsk8AFU1T12ROUFDwkep7YPT1qZOe+LNPgOunRzmIMBu6QeUoJ+Zu0Oz8qLx/39SeiXglmzMPB+MW9KzSJYpjoNrQgbXvyo5HajBjV/8BWHLGU0SPBRbxEhCZX+g5rf4MhkDyKDQ3MA/GvxBhIEPUFBWbbFdCrGAwFkIhTTG5JdtzpVqHYkjUcSGBDM4pbu4+RcK2kYyj/nJ1DQRsyv4Bmah17Y56UKbUE2hBHOlrAAlNyagtsggN+vyHFi2lgR+93usDoO+EYjGDHKz8isyMRLAyzTpbQtv6kCBuvEOkxElGm8jyEZ/uB9U8CXnr0KlsESZZDOPchJrWKdWNpSgHDfk7tIfwVCVdQtEXwTJkndQiVnuiVUQiHRJ35/Kl3u+PHbSwE/lz0LJ
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(366004)(346002)(39860400002)(376002)(451199021)(38100700002)(5660300002)(4744005)(2906002)(52536014)(38070700005)(55016003)(86362001)(316002)(8936002)(8676002)(4326008)(122000001)(64756008)(66556008)(66446008)(66946007)(66476007)(41300700001)(83380400001)(76116006)(26005)(186003)(110136005)(9686003)(6506007)(478600001)(7696005)(54906003)(71200400001)(33656002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8331
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5c8d7118-a72e-4d7e-38c6-08db3c129515
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	kvM8EmC/8UI2f4+YzVniItzuv4Zbn87fdUXrbZWC6IOqqB6ZBC1EheA9bXTXSBzaTDw5QHdCp0w0c5Y/cB1qD0yvmHwDgYFmwj6hpKraPRiJfshYscRZJBNAgoDPYCtHBrAS1dVbJwJ08CW4HLQU6bYbwl0J1lT9u0sj5m25vhaYTrC+6ZdBLVulsMpN3lJZ2rdmWpNUfQn9cLkp+kj+vPOSh4ymAa16ymNnrcH2Qxpxy+cWhYWjDMbi2JPUXIiIg7DVQev2XIPDygLey0VKCMPdLE0s0+jErW1D9yE8/SzFdSpfOrFgcXR/PFSBq34fsnRCoxu0ZwYpCrKFXP22nDsvBmV6nVRxOywJhGfJlTKLMxcrzLkNclxtgQwk+AFS0bAnuvjV8MzqmsFjWPFhZsck7XvG2cQ5cDJkuJX+704ZF8e/A6x8d2/oYwt76LYfmV3YsnsGfXThpJeW2yhAO/BJisfclE30U3m1eanrjbQrpWZo9hutows9sR0X1H1odJ3K8OoaEIv/zdsV707swljCXw3eoUY5lAIiPclbKCCc25YD+ji/cTXuZOxuTW8JSqxyMH0l53G8SCW46EQwLMCIEXsbxgaI1R4azEsN9EO8TByT5rPBK0B4a5FNhIfEUQEO2jGaffo24I2UvUxnQGETllGP7P+pzuAAfGa2P2uMl5VFIHmqEWNvxtq7jbd/odvnHF6aNW0HzmnMtcwoXw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(136003)(346002)(376002)(451199021)(40470700004)(46966006)(36840700001)(40460700003)(7696005)(8676002)(70206006)(4326008)(54906003)(478600001)(70586007)(41300700001)(110136005)(316002)(33656002)(86362001)(336012)(9686003)(6506007)(26005)(47076005)(83380400001)(4744005)(40480700001)(82310400005)(5660300002)(36860700001)(55016003)(8936002)(186003)(52536014)(81166007)(2906002)(82740400003)(356005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 11:31:27.3509
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: df56719b-03a1-4bc7-4105-08db3c129ec2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8310

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 14/22] xen/arm: move regpair_to_uint64() and
> uint64_to_regpair() to regs.h
>
> Moves the two helper functions regpair_to_uint64() and
> uint64_to_regpair() from xen/arch/arm/tee/optee.c to the common arm
> specific regs.h. This enables reuse of these functions in the FF-A
> mediator in a subsequent patch.
>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 11:47:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 11:47:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520651.808466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmvQB-0001aq-C3; Thu, 13 Apr 2023 11:47:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520651.808466; Thu, 13 Apr 2023 11:47:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmvQB-0001aj-9G; Thu, 13 Apr 2023 11:47:23 +0000
Received: by outflank-mailman (input) for mailman id 520651;
 Thu, 13 Apr 2023 11:47:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmvQ9-0001ad-RI
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 11:47:21 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f18e9345-d9f0-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 13:47:19 +0200 (CEST)
Received: from DU2PR04CA0309.eurprd04.prod.outlook.com (2603:10a6:10:2b5::14)
 by DB9PR08MB7472.eurprd08.prod.outlook.com (2603:10a6:10:36c::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 11:47:16 +0000
Received: from DBAEUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b5:cafe::a7) by DU2PR04CA0309.outlook.office365.com
 (2603:10a6:10:2b5::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.31 via Frontend
 Transport; Thu, 13 Apr 2023 11:47:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT055.mail.protection.outlook.com (100.127.142.171) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.30 via Frontend Transport; Thu, 13 Apr 2023 11:47:15 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Thu, 13 Apr 2023 11:47:15 +0000
Received: from b0881e927402.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4EF8F059-50DC-4055-81F9-3256CD8EDB95.1; 
 Thu, 13 Apr 2023 11:47:09 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b0881e927402.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 11:47:09 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS4PR08MB8047.eurprd08.prod.outlook.com (2603:10a6:20b:587::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 11:46:59 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 11:46:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f18e9345-d9f0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kldRuhPLful5lckP+1pa4Nka1yK3wWVImOAu6JXy/wo=;
 b=cFTqr8UH4rcUVKFLHT/+FuL2Bp8UvLA/daG8hLIHfcGM9MBEUhVSMHcy2Am/bZhYON/13upX9T0V5GqQNyTPwtEpzsuZ5mnCNAHKVZFk0cF+A3iIT+bbOhaDed2cmPFiFTf1hHIMRf37RpAgfnoQoxST17PHkKXA1Tk6JXnXfLg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FQb0TfzET0VtJ8ScGdEmGsWv+lDOmcRoQPy+dK68L3PXRLFNn+3cpYBHTxha4YrliRhSu3lhKaW+GsRJfaNj51+CJndWdzsV+jTjFUr8p9Fe0MmX8tkf1AUicgDAGy+Mcwu3mkM3GtvX9lcnjkAjrVahpgZtxpZTK2KWeFCqB9ARuCsbn4MZsf7uMDHK+Pp8j9n9IYHceYrsI9SmjMQYGBXP3VFIIFSvIoSfhzWpNKDIpmK6/HMXu4OeD4a7gEnfWJ6FKgeMGIkiI7qcJjMBpM+e6qk5cnvmfog5bpwTi9MeV0SE0JLJCc6FPK1pSgMitZVGGYUddVMuS/WqvAWpOA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kldRuhPLful5lckP+1pa4Nka1yK3wWVImOAu6JXy/wo=;
 b=DaNo9raLfPmNnsNDNC5oFJzNEx/QveKHZxq/UE64F6dBSjBkTB8NVb6cqfpIZntabrZNz8M6ZJ3C+F5P9FdUUtzBjHD8EoIx0203Y95WopmehztIAUUlBZ+k6h77w7eANGwVpMq161pt+H3gUB58ZTLGLDMTuX0LucEUoDADXrfNvhFK2AmmgLVI4cbvu6VAGnuQmSz7qN+O1SXoRj4Nql+fs5HUY9ffKC2AUHx5JJigjreveoW0nhnaNT+j0Z3TNISyJoXvwhrYppS2cvkrMMPtJOLLQI7aE+vpRGmMW+zBKioWT1VdGMVVNYpL/dF6ida/sR7xJhfmMGpceD5+kw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kldRuhPLful5lckP+1pa4Nka1yK3wWVImOAu6JXy/wo=;
 b=cFTqr8UH4rcUVKFLHT/+FuL2Bp8UvLA/daG8hLIHfcGM9MBEUhVSMHcy2Am/bZhYON/13upX9T0V5GqQNyTPwtEpzsuZ5mnCNAHKVZFk0cF+A3iIT+bbOhaDed2cmPFiFTf1hHIMRf37RpAgfnoQoxST17PHkKXA1Tk6JXnXfLg=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [XEN PATCH v8 15/22] xen/arm: ffa: add defines for sharing memory
Thread-Topic: [XEN PATCH v8 15/22] xen/arm: ffa: add defines for sharing
 memory
Thread-Index: AQHZbdhXhHRhSefTr0SCWhhFgt3t1K8pG5rg
Date: Thu, 13 Apr 2023 11:46:59 +0000
Message-ID:
 <AS8PR08MB799188B36D51EF5BB367590D92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-16-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-16-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: EC8D0DEB56B5594F85B83D6046A37D1A.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS4PR08MB8047:EE_|DBAEUR03FT055:EE_|DB9PR08MB7472:EE_
X-MS-Office365-Filtering-Correlation-Id: b934e56e-f009-4b3e-cdb0-08db3c14d3bf
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 FzrkwmufMJXB/sKU2IXKfCzBdVUpdqFXsLnwoLPwYWCy4LJoYnlI8jOJ0NrF+bjlGrxDfgPVt3dC6wTPeG78VXM1weQxTBpO3sOC7YfABYSoD7505YQ53/yMhXaw3++2sCgKHrUyAkoQuc5okcCMkDyOFONE7r10eKRpGKs3tLeJBZwgIxGj0JamYHTTcG/aLak6zqnvbjehGy+m0lyXUodzaKX7zdB3RQhV04RS/xJ2EUeKFnsfJtbC7WLwV8eEn9TvbZhRVXIz1YpOM7tNwWlNin3tEhY8cKbO4eZt1zSy672PSN0tsO/ROfEkUqh4pGi/2zBNDM3qUKRtv0F0gxETIZ76QS2PiVYnHf2D305Tw5rj5gQsq9ptjt/fsjk1itVa4F7A2kVaVTvXOVTGg+4NjUdaD4PRvTETbgbFvZuJwnEfMdWT7L8rrPWT73azltHNPb6YUIp7V7RsToNOfjQwdgnv8wGcoweTXIWBtrk0Cmc8KH0GVg7l0R25qvJX8QmVJ4oywwyenAa6XDm0NqAgrp441im9Zozc2vst+NMk/ESFHBF/+ocI2j7ube0XcD7+1baoZYgq21iudaZzzVV8Osl5A2UuV87HtF/bsi6Qpv6OMD7mVG35/yfF9/eI
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(376002)(39860400002)(396003)(366004)(136003)(451199021)(8676002)(8936002)(76116006)(41300700001)(5660300002)(52536014)(66446008)(66476007)(66556008)(66946007)(64756008)(122000001)(4326008)(316002)(2906002)(478600001)(54906003)(110136005)(86362001)(38070700005)(38100700002)(7696005)(186003)(9686003)(6506007)(55016003)(71200400001)(83380400001)(26005)(33656002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8047
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8570bbb7-4713-400c-6319-08db3c14ca6a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	mmvWpRSnCggVWBxggvOukhv/EL2tSWDlsCH4xvRit/Pqqkzvj35gphYWrkx4ndQYzcWtSfk8NVwHcFwBT5DmkDhD9QHFb+nLO7GgGdgprKdyYLr6uurYMHJE9KOERPdU/LSuxplWciH4PLhPt6DtmcUyU6OgqrQagTgn+PxIQwvx/CJ7Pci7hd46dp9p2++QaH6P+ylyDlx2f/C45KOn/Ou3dllZli2CgbeE2rNSsP9UgEtVJLFiewvy0Wc+jCt5c7/GU+EDB+XprDVJ6AD/K2MC8krshkZkjqtuSNNw+3fRiBc2idEsfurzf8gAmTRRTf0sX6uEbCxPSGSTRNPDGUZWV61RTF0p5hlgUxHy7FipWXaSBhU3ceacdoH3KJBfkuANZJObr2SEPBqCOd5Gyz9uBNfwLDNhsxuMfDrzoT6WcLartcicohu7UE1XVD+g2jMyyfdsgmVka0H8AIkQcKW4y2Hc5iOszquzfax8JbuAYTrn0l0C8rbu9xbBnIJ8EFxXk29ueyD3VplSQ+1PfLZU9c6Gkm1thwrSioLqah9+mfGVpO/Ozj1sHd2xt9+Tpp4sygv/1MZBMGErR+RxxKoEONg/zM1V7jHVhjajOxH52PwWnAjpedg1uLDNV6Jq7jPvR23QmeVXb0MdPjw+nbDUlw0U110XA3+O2r5kQfJLtS36iBzqfvYUs5YDkaqkg4NUqe5BO1DTOhDusCirMw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(346002)(39860400002)(376002)(451199021)(46966006)(40470700004)(36840700001)(36860700001)(5660300002)(41300700001)(33656002)(82740400003)(40460700003)(2906002)(52536014)(55016003)(8676002)(86362001)(40480700001)(356005)(4326008)(8936002)(70586007)(70206006)(81166007)(82310400005)(316002)(83380400001)(336012)(47076005)(6506007)(54906003)(110136005)(26005)(9686003)(186003)(7696005)(478600001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 11:47:15.2632
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b934e56e-f009-4b3e-cdb0-08db3c14d3bf
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7472

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 15/22] xen/arm: ffa: add defines for sharing memor=
y
>=20
> Adds defines needed for sharing using the function FFA_MEM_SHARE and
> friends.

Same as my comments on previous patches, I would suggest also mentioning
the references in the commit message.

>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>  xen/arch/arm/tee/ffa.c | 60 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 60 insertions(+)
>
> + * FF-A doesn't have any direct requirments on GlobalPlatform or vice

Typo: s/requirments/requirements/

> + * versa, but an implementation can very well use FF-A in order to provide
> + * a GlobalPlatform interface on top.
> + *
> + * Global Platform specification for TEE requires that any TEE
> + * implementation should allow to share buffers with size of at least
> + * 512KB, defined in TEEC-1.0C page 24, Table 4-1,
> + * TEEC_CONFIG_SHAREDMEM_MAX_SIZE.
> + * Due to align issue mentioned above, we need to increase this

s/align issue/alignment issue/ ?

> + * value with one.
> + */
> +#define FFA_MAX_SHM_PAGE_COUNT          (SZ_512K / FFA_PAGE_SIZE + 1)
> +
> +/*
> + * Limits the number of shared buffers that guest can have at once. This
> + * is to prevent case, when guests tricks XEN into exhausting its own

Typo: s/tricks/trick/

> + * memory by allocating many small buffers. This value has been chosen
> + * arbitrary.

Typo: s/arbitrary/arbitrarily/

> + */
> +#define FFA_MAX_SHM_COUNT               32
> +
> +/* FF-A-1.1-REL0 section 10.9.2 Memory region handle, page 167 */
> +#define FFA_HANDLE_HYP_FLAG             BIT(63, ULL)
> +#define FFA_HANDLE_INVALID              0xffffffffffffffffULL
> +
> +/*
> + * Memory attributes: Normal memory, Write-Back cacheable, Inner shareable
> + * Defined in FF-A-1.1-REL0 Table 10.18 at page 175.
> + */
> +#define FFA_NORMAL_MEM_REG_ATTR         0x2fU
> +/*
> + * Memory access permissions: Read-write
> + * Defined in FF-A-1.1-REL0 Table 10.15 at page 168.
> + */
> +#define FFA_MEM_ACC_RW                  0x2U
> +
> +/* FF-A-1.1-REL0 section 10.11.4 Flags usage, page 184-187 */
> +/* Clear memory before mapping in receiver */
> +#define FFA_MEMORY_REGION_FLAG_CLEAR            BIT(0, U)
> +/* Relayer may time slice this operation */
> +#define FFA_MEMORY_REGION_FLAG_TIME_SLICE       BIT(1, U)
> +/* Clear memory after receiver relinquishes it */
> +#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH BIT(2, U)
> +/* Share memory transaction */
> +#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (1U << 3)
> +

I confirm that the values introduced in this patch are consistent with the
in-code comments on top of them. Thanks for the pointer :)

With the typos corrected:
Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


>  /*
>   * Flags and field values used for the MSG_SEND_DIRECT_REQ/RESP:
>   * BIT(31): Framework or partition message
> --
> 2.34.1
>



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 11:50:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 11:50:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520655.808477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmvSn-0002sP-RH; Thu, 13 Apr 2023 11:50:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520655.808477; Thu, 13 Apr 2023 11:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmvSn-0002sI-Nf; Thu, 13 Apr 2023 11:50:05 +0000
Received: by outflank-mailman (input) for mailman id 520655;
 Thu, 13 Apr 2023 11:50:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmvSn-0002nV-41
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 11:50:05 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20612.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::612])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 536c908c-d9f1-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 13:50:03 +0200 (CEST)
Received: from DUZPR01CA0078.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:46a::20) by AS8PR08MB8733.eurprd08.prod.outlook.com
 (2603:10a6:20b:565::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 11:49:57 +0000
Received: from DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:46a:cafe::b9) by DUZPR01CA0078.outlook.office365.com
 (2603:10a6:10:46a::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30 via Frontend
 Transport; Thu, 13 Apr 2023 11:49:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT026.mail.protection.outlook.com (100.127.142.242) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.31 via Frontend Transport; Thu, 13 Apr 2023 11:49:57 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Thu, 13 Apr 2023 11:49:57 +0000
Received: from fe975fe53355.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 FA495623-D270-40D6-8A34-9D226E5F055C.1; 
 Thu, 13 Apr 2023 11:49:50 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id fe975fe53355.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 11:49:50 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB9387.eurprd08.prod.outlook.com (2603:10a6:20b:5aa::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.42; Thu, 13 Apr
 2023 11:49:44 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 11:49:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 536c908c-d9f1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KENF3SNCqPm8wKQyaXsksLT5Ho8kPPZq6WXd2priraM=;
 b=zVTsnV6uWLmr6km9bMLi3bO8BMY6XkYuP3s3M1EGTbhG9yHwkU4LipazsElv9lTYtP6fCJoZYo7SHDMIT1wD+2ye41txPc1qC8GCtM3y9LUR5wiQVWu2Cwnc0oPQ+ac7w5w7LpcQZiU/sT9U+b3Gk5wMpaRg72Cu17nYKrJBuAY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=edmPGFuwO9J+KypHiLoVG0aVPJkRodKolRfHjb2GanqC3b6MeZspAHe1ah17XLayVnjfJIQtNYPC4wPSt4vgXGMZc3Np+1/WXe0sws6AnELVzGw8GhTeqFYg/b5traqFo+vKzgb3UCKRwNv4B/9EemH2iyNUnnJizey057W5W6myX/GBmSlNVrNwEKISOMa+1GPE5MtqprHn3SRSf3kIkpnQ7s9yzXTY8O3KQ7AexfQPylx3xcwT1RYqcmjqXyQbDzbil+ayL4DNgWUTY3EYoJADT2ksIjCB9fyjeNltYLpSMyX/geG+1RqMauNPVjt/7XadpKBPRUHZsGqkEmnvRg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KENF3SNCqPm8wKQyaXsksLT5Ho8kPPZq6WXd2priraM=;
 b=DcSjNa1ULkTMOtL5Y7r2zqfcUpuwyAn6kQZYf95a9eye51MUSETnpazEBY5ckEXTIOr3YtsnuxxvcXgi1JLV8qZ+NGMTmLyP4Gf9aiswQ9nsIUJL7Dq499uhKhWpEMWrPCfaIhCagw/vEKLDjnl+nJMfuuCfu/wGRF29lGz8BNDpUU55HYo0ytf3aaEmJ7N2kGN/pIIdjNXkFsbQDL5fuk+A0QeRhNUfgzkqyeiCuGpC/KS2BxybI3LcX0Zed6EVsAnhszdJk2eJQDW1Rbp1B+GF6M6JG/44ECgt+B33D99jvhtD40NhViSzEEMRVrB3cuoBDaZdrrLuHuhXwGncjg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KENF3SNCqPm8wKQyaXsksLT5Ho8kPPZq6WXd2priraM=;
 b=zVTsnV6uWLmr6km9bMLi3bO8BMY6XkYuP3s3M1EGTbhG9yHwkU4LipazsElv9lTYtP6fCJoZYo7SHDMIT1wD+2ye41txPc1qC8GCtM3y9LUR5wiQVWu2Cwnc0oPQ+ac7w5w7LpcQZiU/sT9U+b3Gk5wMpaRg72Cu17nYKrJBuAY=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [XEN PATCH v8 16/22] xen/arm: ffa: add ABI structs for sharing
 memory
Thread-Topic: [XEN PATCH v8 16/22] xen/arm: ffa: add ABI structs for sharing
 memory
Thread-Index: AQHZbdfWFMmwO7dS/0GyxucHcZRHm68pIDaw
Date: Thu, 13 Apr 2023 11:49:44 +0000
Message-ID:
 <AS8PR08MB79910752526D506B3BA881EA92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-17-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-17-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: BC511DC5E2FDEE4F8677F399D504E0F2.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB9387:EE_|DBAEUR03FT026:EE_|AS8PR08MB8733:EE_
X-MS-Office365-Filtering-Correlation-Id: 3db85b56-7abb-412b-cfe3-08db3c153454
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 vUe7HQ0w6R8lutg1HQ4KUtOc2ZVBc33mMrjsmSFI8ZobYzMaQ2R9pj79zBRfhcfAV60Fe/ansOdbmcXJcjarssDVLy7O+1mxpwPk/B8cNHIKd7hYHUd+6NbQZFTgL7f1xTmYng0NrO4gwQRAufe8BrKUYKltmGQM3B0zF829b/VgzMD3tc6eNDyl0Wrnk1TiSIYtc4A00ZQIrE6ykuAS0Gurh6VaLKc412/4K17zFYr7ecszbbfSkXKDCzhf9cflysqvf8IjU47CQVGqxw9C64RLW76x9uBmDuWEhd6aKPKbqcMgzBctxptbIfHjuGSQ5KMpaWQDPtU07K1Rdwi83adpmUX9o2sCkvJPhdgW6EDStBBS3UNW93UoERs4mFvYMg4pi1l8NvgxMDbtiMpWYurEnVoRmT7vgnTjevUqgvhmNxRH4RHsxFvwoFrH4C9nbX1vfWr7R98YLJqabFKxR3wWjWezuhue4rsg87deYxHYo5wnpC4EvhOSBo4/h5I75kUGsgihjMXhk3VRBq76UDE+ZPzPwdd/SLFRyJqOpfAjwL70oMqyOuG4v4JtoYWsHeVCsyTqqy5QtvKx2+mIJP+QufnqqXsXK9OGtUJzAC3SyeLuK44NamHS/oojIhLD
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(39860400002)(376002)(396003)(346002)(136003)(451199021)(86362001)(186003)(33656002)(6506007)(9686003)(26005)(38070700005)(2906002)(4744005)(71200400001)(7696005)(83380400001)(5660300002)(8936002)(478600001)(4326008)(55016003)(54906003)(52536014)(316002)(110136005)(64756008)(66446008)(66476007)(66556008)(66946007)(8676002)(38100700002)(76116006)(122000001)(41300700001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9387
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	fa84e76b-c697-4f61-9724-08db3c152cb1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ZdkISEIfWy3B7DIL24Maa4gYurqAZAhu9ntM94M7FjkAb/F1O0M4DBlCrR2rWiJadjBnzJ1wZSAK8GKhAbL1+zzibpsoQKJQantcx/HzjTAZpcGwe9K/P/VauGAKGAs9jp9wTsr8mxF0mWyx4iaqUlGNhF49nvQSPxzqUyqqRj2fp26+cUx/Q412i3/89qmH0KGimj7jBztqNbDre5JGaznMffe4vLnTffrKf4brbRi1/4XSqvVutrizPtCPxpyFWMfAoKYx5wubro5g/wYR2/rYfgdKALq4pJEe+zCjeh/YuORDjOmFXbklDxGeZNeuYpplG9Ax5UZcA70zf41JdMQeXQ2o9VVUsZP/rGS1qv3EtNOvRb5KaxdoNepeshY1qelHKRApaAOubr3pk1wmDvnrcjZh7E3F64lD8nRPXCjEfk81PBgtanMxQNAdQJixHlA+p066zsQT3EIqR14eI/6WEYwnDYYMsJqLQr4u1yOKl+lQSgtk0JvnGRhXZ/Bc9IuFMQgChYWKf0Ew0eVREY3kcnaaR6gS4FAxQ+0WEMmwRLpAe1mgrTFQ74eqqAk7bvu97vRq9pRXjainT0bxVbTrmwHbl44GfS3cmg+yq0EYwSqPDgrX9Ck4gcgQ2hjhlaXMW4A8n/oSmtwokUA2YeCsSKDOAqgObNc0L5dLFk6zVOKfo1TPJ2Q+P/B9mEVhCVEkWh0UYENMKElnV8FLcw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(346002)(136003)(396003)(451199021)(40470700004)(46966006)(36840700001)(40480700001)(41300700001)(70206006)(70586007)(110136005)(478600001)(54906003)(7696005)(316002)(4326008)(2906002)(4744005)(40460700003)(36860700001)(83380400001)(82740400003)(55016003)(186003)(26005)(8676002)(8936002)(5660300002)(52536014)(47076005)(82310400005)(86362001)(33656002)(336012)(9686003)(6506007)(81166007)(356005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 11:49:57.2854
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3db85b56-7abb-412b-cfe3-08db3c153454
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8733

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 16/22] xen/arm: ffa: add ABI structs for sharing memory
>
> Adds the ABI structs used by function FFA_MEM_SHARE and friends for
> sharing memory.
>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 11:53:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 11:53:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520659.808487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmvW3-0003Zh-92; Thu, 13 Apr 2023 11:53:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520659.808487; Thu, 13 Apr 2023 11:53:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmvW3-0003Za-5l; Thu, 13 Apr 2023 11:53:27 +0000
Received: by outflank-mailman (input) for mailman id 520659;
 Thu, 13 Apr 2023 11:53:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmvW2-0003ZU-2F
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 11:53:26 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2061f.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::61f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb767ea8-d9f1-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 13:53:25 +0200 (CEST)
Received: from AM6P193CA0041.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:8e::18)
 by AM8PR08MB5796.eurprd08.prod.outlook.com (2603:10a6:20b:1d1::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 11:53:13 +0000
Received: from AM7EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8e:cafe::ec) by AM6P193CA0041.outlook.office365.com
 (2603:10a6:209:8e::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 11:53:13 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT017.mail.protection.outlook.com (100.127.140.184) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.28 via Frontend Transport; Thu, 13 Apr 2023 11:53:12 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Thu, 13 Apr 2023 11:53:12 +0000
Received: from 96011df6c354.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 04D84B93-A4DA-4053-807A-F56D9AD4BEA8.1; 
 Thu, 13 Apr 2023 11:53:06 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 96011df6c354.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 11:53:06 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by VI1PR08MB5453.eurprd08.prod.outlook.com (2603:10a6:803:132::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.40; Thu, 13 Apr
 2023 11:53:03 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 11:53:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb767ea8-d9f1-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AwTaoUcDceKDe8aMgaY2EUZfNFklPvhP/ajqp4DZwok=;
 b=HF/kp3u5ggsfjTwQ1syyCTuBduzU3dm+qwfWiV0BRnLIXZaUVeOWqahz4Bg5ZrMxaLGCB/K3qN8c5C4FbLDMEuUR6wOxpatUshVGf3dOh1khle9x0xyCRyWuXILayaouvnou4gIlNs79Q7AoBlaccV1C78ofphNNnwEnJStSio0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aU1WK6+EslAU1x5zyfCxZed8okzchAQ+dOt23ChPE6bgChAmv2buxJKzrZaCw3iv/ccBWNJavuc57sfDVKRt5olSuG7NP4XfSfEjlUPP4C0PYhV4dqXpCzFFCJe3Xpg7QnrrIWfF9tjMq6gvRYbAVQZTWrYU/HwP9qodMPpeCYmXh8vhm8TpsLvWiXMwR9V9+zumk30Wlc+xd+aIQH2LYvLRqu+zxuZhT++5itl2mw98jlR8KqBJLzFRx8gtNbhfbRf03sohbcl4NAZIwNvNi9ntwD8VgP89cyqwvhyudz/NMy6NDRsRYU9Gr+TwZDRHcFzfNN96oMg2Ou4g5RP8wg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AwTaoUcDceKDe8aMgaY2EUZfNFklPvhP/ajqp4DZwok=;
 b=XkBS6gQCCeqTpb5H7hauWC9QXsP7taFtmqPpGO+uVqpDkea7E5AWhDhP4uYFflSRKfJYQ3wLcI83BLfLht0upMtRcK+cdkJpoxloFdildS9eozRF8TqwLBj243BFjwTo2OHrQjdpmUqQd0XHFrGUNku9oqxBiOg1iZSM/ZWMbQSIf7F3trCW2azAP79pV66Ih9A9rbUa2ux4mjLIThJlsQWmXXpoE2j6HGkVA42Rz0tOTNeO6rBWhKJ5m5gsDAmxPeJYpKzf1h9XvcoSstNNVTDm+qSeAmI4Kl/rugxdUdY9sYfeuUiOKH4For9SACV6lNGhUixved4a08vPS4VsOw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AwTaoUcDceKDe8aMgaY2EUZfNFklPvhP/ajqp4DZwok=;
 b=HF/kp3u5ggsfjTwQ1syyCTuBduzU3dm+qwfWiV0BRnLIXZaUVeOWqahz4Bg5ZrMxaLGCB/K3qN8c5C4FbLDMEuUR6wOxpatUshVGf3dOh1khle9x0xyCRyWuXILayaouvnou4gIlNs79Q7AoBlaccV1C78ofphNNnwEnJStSio0=
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross
	<jgross@suse.com>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [XEN PATCH v8 03/22] tools: add Arm FF-A mediator
Thread-Topic: [XEN PATCH v8 03/22] tools: add Arm FF-A mediator
Thread-Index: AQHZbdfU80uw6OSkAUuNnMAzzeK71K8pIJ8Q
Date: Thu, 13 Apr 2023 11:53:03 +0000
Message-ID:
 <AS8PR08MB7991029EED281DD96108D4C292989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-4-jens.wiklander@linaro.org>
In-Reply-To: <20230413071424.3273490-4-jens.wiklander@linaro.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 25D1E2305DEFB24E8474D15D83D5F973.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|VI1PR08MB5453:EE_|AM7EUR03FT017:EE_|AM8PR08MB5796:EE_
X-MS-Office365-Filtering-Correlation-Id: 00cf5b37-18a9-436e-72bb-08db3c15a8ac
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:

Hi Jens,

> -----Original Message-----
> Subject: [XEN PATCH v8 03/22] tools: add Arm FF-A mediator
>
> Adds a new "ffa" value to the Enumeration "tee_type" to indicate if a
> guest is trusted to use FF-A.
>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 12:09:07 2023
Message-ID: <71d52d94-806a-7919-72f9-b8e42e34d146@citrix.com>
Date: Thu, 13 Apr 2023 13:08:34 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/hvm: Disallow CR0.PG 1->0 transitions when CS.L=1
Content-Language: en-GB
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20230412183519.2996902-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230412183519.2996902-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 12/04/2023 7:35 pm, Andrew Cooper wrote:
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 1454c1732d95..3c8d28a2d8be 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2340,6 +2340,21 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
>      }
>      else if ( !(value & X86_CR0_PG) && (old_value & X86_CR0_PG) )
>      {
> +        struct segment_register cs;
> +
> +        hvm_get_segment_register(v, x86_seg_cs, &cs);
> +
> +        /*
> +         * Intel documents a #GP fault in this case, and VMEntry checks reject
> +         * it as a valid state.  AMD permits the state transition, and hits
> +         * SHUTDOWN immediately thereafter.  Follow the Intel behaviour.
> +         */
> +        if ( cs.l )

It occurs to me that this needs to be qualified with LME first, because
cs.l is software-available outside of long mode.

~Andrew

> +        {
> +            HVM_DBG_LOG(DBG_LEVEL_1, "Guest attempts to clear CR0.PG while CS.L=1");
> +            return X86EMUL_EXCEPTION;
> +        }
> +
>          if ( hvm_pcid_enabled(v) )
>          {
>              HVM_DBG_LOG(DBG_LEVEL_1, "Guest attempts to clear CR0.PG "



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 12:24:17 2023
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] automation: switch ADL hw tests to debug build
Date: Thu, 13 Apr 2023 14:23:40 +0200
Message-Id: <20230413122340.47233-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This should give a lot more useful information in case of a failure, and
also enable some asserts for extra checks.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/gitlab-ci/test.yaml | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 0916b367ea90..d68c584269dd 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -131,21 +131,21 @@ xilinx-smoke-dom0less-arm64-gcc:
     - *arm64-test-needs
     - alpine-3.12-gcc-arm64
 
-adl-smoke-x86-64-gcc:
+adl-smoke-x86-64-gcc-debug:
   extends: .adl-x86-64
   script:
     - ./automation/scripts/qubes-x86-64.sh 2>&1 | tee ${LOGFILE}
   needs:
     - *x86-64-test-needs
-    - alpine-3.12-gcc
+    - alpine-3.12-gcc-debug
 
-adl-suspend-x86-64-gcc:
+adl-suspend-x86-64-gcc-debug:
   extends: .adl-x86-64
   script:
     - ./automation/scripts/qubes-x86-64.sh s3 2>&1 | tee ${LOGFILE}
   needs:
     - *x86-64-test-needs
-    - alpine-3.12-gcc
+    - alpine-3.12-gcc-debug
 
 qemu-smoke-dom0-arm64-gcc:
   extends: .qemu-arm64
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 12:25:41 2023
Message-ID: <50de28eb-f661-fc55-8d7b-7ba07b160c25@citrix.com>
Date: Thu, 13 Apr 2023 13:25:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] automation: switch ADL hw tests to debug build
Content-Language: en-GB
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230413122340.47233-1-marmarek@invisiblethingslab.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230413122340.47233-1-marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0324.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:197::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1e64e8d0-c4a3-402d-b634-08db3c1a2759
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 12:25:23.3774
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aDsLRiFLIM7jnEvaw/LsNDXdpzgz/Qc7Rmx5NQ8MFjEhj1Ay06zgPozyErIAlogEx4XkBHnteQxWgrQ03lomI4vuw37yeq5jixzPp3I5z4U=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB6198

On 13/04/2023 1:23 pm, Marek Marczykowski-Górecki wrote:
> This should give a lot more useful information in case of a failure, and
> also enable some asserts for extra checks.
>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

I'll merge this shortly.


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 12:26:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 12:26:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520682.808527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmw1t-0000Ju-P3; Thu, 13 Apr 2023 12:26:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520682.808527; Thu, 13 Apr 2023 12:26:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmw1t-0000Jn-Lt; Thu, 13 Apr 2023 12:26:21 +0000
Received: by outflank-mailman (input) for mailman id 520682;
 Thu, 13 Apr 2023 12:26:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmw1t-0000Jd-0J
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 12:26:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmw1s-0001jK-LU; Thu, 13 Apr 2023 12:26:20 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.20.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmw1s-0004af-DB; Thu, 13 Apr 2023 12:26:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=jOO+IcLy6m2G+FcfCMMlFmJIrteZRP85wg/5Z1eG/mc=; b=pfwaeJqPTxtNQ5xgPsx8+VM0QL
	ryrPsPK/Yv1uy+nqs4QiqlOxe7SH6EWx+fW/cVU+T+1FObsNGXiAtU1F8ug/nN3iFA5cPoPT6I5XP
	ihrEPdlo21ZQ5TVxv/Vrbwu+DUIDHaMVYrqkYh1k0tzP4tJJZXGqOR6hP8qICevh8WbM=;
Message-ID: <ad1d5ebd-38e5-bab9-24ac-6facc8ccb95c@xen.org>
Date: Thu, 13 Apr 2023 13:26:17 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [XEN PATCH v8 02/22] xen/arm: tee: add a primitive FF-A mediator
Content-Language: en-US
To: Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com, Marc Bonnici <marc.bonnici@arm.com>,
 Achin Gupta <achin.gupta@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-3-jens.wiklander@linaro.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230413071424.3273490-3-jens.wiklander@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jens,

I am mainly reviewing this patch from a Xen PoV. I will leave it to the
others to review it from a spec-compliance PoV.

On 13/04/2023 08:14, Jens Wiklander wrote:
> +struct ffa_ctx {
> +    /* FF-A version used by the guest */
> +    uint32_t guest_vers;
> +};
> +
> +/* Negotiated FF-A version to use with the SPMC */
> +static uint32_t ffa_version __ro_after_init;

Coding style: We tend to add attributes just after the type. E.g.:

static uint32_t __ro_after_init ffa_version;

I can deal with this one on commit if there is nothing else to change.

[...]

> +static int ffa_domain_init(struct domain *d)
> +{
> +    struct ffa_ctx *ctx;
> +
> +    if ( !ffa_version )
> +        return -ENODEV;
> +
> +    ctx = xzalloc(struct ffa_ctx);
> +    if ( !ctx )
> +        return -ENOMEM;
> +
> +    d->arch.tee = ctx;
> +
> +    return 0;
> +}
> +
> +/* This function is supposed to undo what ffa_domain_init() has done */

I think there is a problem in the TEE framework. The callback
.relinquish_resources() will not be called if domain_create() failed, so
this will result in a memory leak.

We also can't call .relinquish_resources() on early domain creation 
failure because relinquishing resources can take time and therefore 
needs to be preemptible.

So I think we need to introduce a new callback domain_free() that will
be called from arch_domain_destroy(). Is this something you can look at?

> +static int ffa_relinquish_resources(struct domain *d)
> +{
> +    struct ffa_ctx *ctx = d->arch.tee;
> +
> +    if ( !ctx )
> +        return 0;
> +
> +    XFREE(d->arch.tee);
> +
> +    return 0;
> +}
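To illustrate the lifecycle point above, here is a minimal, self-contained sketch (plain C with simplified stand-in types, not actual Xen code; ffa_domain_free() is a hypothetical name for the proposed callback) of how a non-preemptible domain_free() hook could undo ffa_domain_init() even on early domain-creation failure:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-ins for the Xen structures involved. */
struct ffa_ctx {
    uint32_t guest_vers;
};

struct domain {
    struct {
        struct ffa_ctx *tee;
    } arch;
};

/* Mirrors ffa_domain_init() from the patch: allocate the per-domain
 * context and hang it off the domain. */
static int ffa_domain_init(struct domain *d)
{
    struct ffa_ctx *ctx = calloc(1, sizeof(*ctx));

    if ( !ctx )
        return -1;              /* -ENOMEM in Xen proper */

    d->arch.tee = ctx;

    return 0;
}

/*
 * Hypothetical domain_free() callback: it only frees memory and does
 * not need to be preemptible, so (unlike .relinquish_resources()) it
 * could safely run from arch_domain_destroy() even when
 * domain_create() failed early.
 */
static void ffa_domain_free(struct domain *d)
{
    free(d->arch.tee);
    d->arch.tee = NULL;
}
```

The key design point is the split: the preemptible .relinquish_resources() path handles long-running teardown for fully created domains, while a memory-only domain_free() can run unconditionally on every destruction path.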

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 12:36:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 12:36:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520688.808540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwBg-0001tv-Or; Thu, 13 Apr 2023 12:36:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520688.808540; Thu, 13 Apr 2023 12:36:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwBg-0001to-L1; Thu, 13 Apr 2023 12:36:28 +0000
Received: by outflank-mailman (input) for mailman id 520688;
 Thu, 13 Apr 2023 12:36:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmwBf-0001tf-HJ
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 12:36:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwBf-00025L-3y; Thu, 13 Apr 2023 12:36:27 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.20.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwBe-00056E-TF; Thu, 13 Apr 2023 12:36:27 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=rFaQOQZqtnuZCv66vbPgUNGabOZCkbkvgGmoXQcw9Yw=; b=TXYAaVXq0oLG7bDbTNU6uUCqSE
	VqnqfX7vZp3irdfobVXvVSjoQ4XDv6WNcsbG5Nhmu+2szP99ainFVkbqMWU7RviO26XsKK3gD2wtF
	mIARx/mUsmOUwtdvG+IhyGDskc4SgnV1lx8CloV3O/Krcq6ktI/q0sqQk4B9ycIXC5W4=;
Message-ID: <137a1457-aacc-e763-b313-4266be3fa032@xen.org>
Date: Thu, 13 Apr 2023 13:36:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [XEN PATCH v8 08/22] xen/arm: ffa: add support for FFA_ID_GET
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>,
 Jens Wiklander <jens.wiklander@linaro.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Marc Bonnici <Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-9-jens.wiklander@linaro.org>
 <AS8PR08MB7991020B2BB9676DACD2494392989@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AS8PR08MB7991020B2BB9676DACD2494392989@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 13/04/2023 12:07, Henry Wang wrote:
> Hi Jens,
> 
>> -----Original Message-----
>> Subject: [XEN PATCH v8 08/22] xen/arm: ffa: add support for FFA_ID_GET
>>
>> Adds support for the FF-A function FFA_ID_GET to return the ID of the
>> calling client.
>>
>> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
>> ---
>>   xen/arch/arm/tee/ffa.c | 21 +++++++++++++++++++++
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
>> index 90ed71cbfda3..f129879c5b81 100644
>> --- a/xen/arch/arm/tee/ffa.c
>> +++ b/xen/arch/arm/tee/ffa.c
>> @@ -181,6 +181,12 @@ static bool ffa_get_version(uint32_t *vers)
>>       return true;
>>   }
>>
>> +static uint16_t get_vm_id(const struct domain *d)
>> +{
>> +    /* +1 since 0 is reserved for the hypervisor in FF-A */
>> +    return d->domain_id + 1;
> 
> Since here you want 0 to be reserved, I think maybe you can use
> the "d->arch.p2m.vmid"? According to the logic in p2m_alloc_vmid(),
> the "d->arch.p2m.vmid" is also a per-domain u16 value that starts
> from 1.

I would rather not do that for a few reasons:
  1) This assumes the P2M code is initialized first.
  2) We should not rely on the VMID being fixed. We may need to change
that if we want to run more VMs than the number of VMIDs (we may even
need multiple VMIDs per domain...).
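For context on the trade-off: the scheme in the patch derives the FF-A ID purely from domain_id, with no dependency on P2M/VMID state. A standalone sketch of that mapping (the helper names here are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

typedef uint16_t domid_t;       /* stand-in for Xen's domid_t */

/*
 * The mapping from the patch: FF-A reserves ID 0 for the hypervisor,
 * so a guest's FF-A ID is domain_id + 1. Nothing here depends on the
 * P2M code (or any VMID allocation) having run.
 */
static uint16_t vm_id_from_domid(domid_t domain_id)
{
    return domain_id + 1;
}

/* Illustrative inverse: recover the domain ID from a guest FF-A ID. */
static domid_t domid_from_vm_id(uint16_t vm_id)
{
    assert(vm_id != 0);         /* ID 0 is the hypervisor, never a guest */
    return vm_id - 1;
}
```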

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 12:40:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 12:40:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520692.808550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwFY-0003Jd-AE; Thu, 13 Apr 2023 12:40:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520692.808550; Thu, 13 Apr 2023 12:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwFY-0003JW-5e; Thu, 13 Apr 2023 12:40:28 +0000
Received: by outflank-mailman (input) for mailman id 520692;
 Thu, 13 Apr 2023 12:40:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vvgu=AE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pmwFW-0003JQ-Tt
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 12:40:26 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on060d.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5bca9401-d9f8-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 14:40:23 +0200 (CEST)
Received: from DB8PR04CA0017.eurprd04.prod.outlook.com (2603:10a6:10:110::27)
 by DB9PR08MB9681.eurprd08.prod.outlook.com (2603:10a6:10:45c::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Thu, 13 Apr
 2023 12:40:21 +0000
Received: from DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:110:cafe::89) by DB8PR04CA0017.outlook.office365.com
 (2603:10a6:10:110::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.31 via Frontend
 Transport; Thu, 13 Apr 2023 12:40:21 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT034.mail.protection.outlook.com (100.127.142.97) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.31 via Frontend Transport; Thu, 13 Apr 2023 12:40:21 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Thu, 13 Apr 2023 12:40:21 +0000
Received: from 4ec3420f14fc.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8C77334E-929B-45AD-9867-4B675EFF8411.1; 
 Thu, 13 Apr 2023 12:40:15 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4ec3420f14fc.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 12:40:15 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AM8PR08MB6500.eurprd08.prod.outlook.com (2603:10a6:20b:361::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 12:40:13 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 12:40:13 +0000
X-Inumbo-ID: 5bca9401-d9f8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5ka1TPzcvs3wzjFmSld0UHQItCpr3ZxdcmEzmyQ6RIU=;
 b=xgHEBrnEB8nQOsInbiEUtLqgUaBjPAZU6k06+3H4OqEPP0/2m840OIuExlekdmLk92YvUjHOX+zYfoA3thcrn91Xc28LxoDQajIUuwhraR9XhTuNfUVn7NQh9jPVYBIESM/ln41pYAkVnwas4/+lxANa9prksnhxmpqP3o/DDZ4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, Jens Wiklander <jens.wiklander@linaro.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: RE: [XEN PATCH v8 08/22] xen/arm: ffa: add support for FFA_ID_GET
Thread-Topic: [XEN PATCH v8 08/22] xen/arm: ffa: add support for FFA_ID_GET
Thread-Index: AQHZbdfdlkWasQb0b0OUu0VnJOxcoK8pEVxwgAAcEICAAABzkA==
Date: Thu, 13 Apr 2023 12:40:13 +0000
Message-ID:
 <AS8PR08MB7991D8EA176982B9D6780C3A92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-9-jens.wiklander@linaro.org>
 <AS8PR08MB7991020B2BB9676DACD2494392989@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <137a1457-aacc-e763-b313-4266be3fa032@xen.org>
In-Reply-To: <137a1457-aacc-e763-b313-4266be3fa032@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 1CA69413958B9B4CBF580C8ADA194074.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AM8PR08MB6500:EE_|DBAEUR03FT034:EE_|DB9PR08MB9681:EE_
X-MS-Office365-Filtering-Correlation-Id: 838fd2dc-af41-41a8-91cd-08db3c1c3ed9
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6500
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e0d4c870-0d90-4a93-f82b-08db3c1c3a4d
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 12:40:21.4097
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 838fd2dc-af41-41a8-91cd-08db3c1c3ed9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB9681

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Subject: Re: [XEN PATCH v8 08/22] xen/arm: ffa: add support for FFA_ID_GET
> >> +static uint16_t get_vm_id(const struct domain *d)
> >> +{
> >> +    /* +1 since 0 is reserved for the hypervisor in FF-A */
> >> +    return d->domain_id + 1;
> >
> > Since here you want 0 to be reserved, I think maybe you can use
> > the "d->arch.p2m.vmid"? According to the logic in p2m_alloc_vmid(),
> > the "d->arch.p2m.vmid" is also a per-domain u16 value that starts
> > from 1.
> 
> I would rather not do that for a few reasons:
>   1) This is assuming the P2M code is initialized first
>   2) We should not rely on the VMID to be fixed. We may need to change
> that if we want to run more VMs than the number of VMIDs (we may even
> need multiple VMIDs per domain...).

Yeah these arguments are reasonable. Forget about my comments then.

Kind regards,
Henry

> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 12:47:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 12:47:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520696.808559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwMI-0003wb-0E; Thu, 13 Apr 2023 12:47:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520696.808559; Thu, 13 Apr 2023 12:47:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwMH-0003wU-TF; Thu, 13 Apr 2023 12:47:25 +0000
Received: by outflank-mailman (input) for mailman id 520696;
 Thu, 13 Apr 2023 12:47:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=73wY=AE=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pmwMF-0003wO-Uy
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 12:47:24 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20625.outbound.protection.outlook.com
 [2a01:111:f400:fe12::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 55a5d9c5-d9f9-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 14:47:23 +0200 (CEST)
Received: from DB8PR09CA0012.eurprd09.prod.outlook.com (2603:10a6:10:a0::25)
 by AS8PR08MB8223.eurprd08.prod.outlook.com (2603:10a6:20b:52b::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 12:47:18 +0000
Received: from DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:a0::4) by DB8PR09CA0012.outlook.office365.com
 (2603:10a6:10:a0::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 12:47:18 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT032.mail.protection.outlook.com (100.127.142.185) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.31 via Frontend Transport; Thu, 13 Apr 2023 12:47:17 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Thu, 13 Apr 2023 12:47:17 +0000
Received: from dafbc1c48e49.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AA2553E5-1DDC-4E0D-BC9A-BBE9232C4840.1; 
 Thu, 13 Apr 2023 12:47:06 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id dafbc1c48e49.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 12:47:06 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by GV2PR08MB9303.eurprd08.prod.outlook.com (2603:10a6:150:d4::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 12:47:02 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 12:47:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55a5d9c5-d9f9-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zYSNyVTp8W/FB7GwWnYK3bCKTHKAmsNBpUFfPF0J3sk=;
 b=7hgFXUVMbiJ7Xo1HjWj+0ndJs+/vDeVB+cASc2g2X68Dd4ec3tnHCoMw34augXRJQdUUdN5Wmal+a8G+4txIDc6HVirJ7toDSUzY7u4mqrQNIulVicfoepoy0vWbOJObvMCIqMi1297XOxVvenisB9MHJNIGFjvcTEE1EV7JVVs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: b6977f6572de6333
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=F2HZ3hRY0bEFOZGJiL36bVNPr/qs9XhAt7zHaDR9+As87iiw5FdlyROY55peNAJKbprN82dImc9GLsmpdcY/jRiSpCBnflEtL3XgXKlqzhzzmt0UN8lrUCXvrJb87V0tG5Kj+i54RBYiY1ETg6IFheVFKU2EEF4gJ0eKWzq/cX64cV3Ncu1Rg1co0evVg5g0MstnNGzwL8HJDbt2+/Q4TEEI3QotnhLL7VrmRHlTbrSoVq+c+r4yWDoHC1sx7E80uLJEDuSAybrFJhkFlSvkH9ld1qtez5cpXTwZdxQHi80a7f4I0kysAMkPLRpI2WtLOxcRlxHo7LEfSC3YoqW9mQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zYSNyVTp8W/FB7GwWnYK3bCKTHKAmsNBpUFfPF0J3sk=;
 b=g1sqZJtVb/Th+AHabQL+OChYkNcI8JO9fkKU4Bzlc26CK+YtzoEKhWd6DWcP6zzw0ZmTYU1yjVga1GVcIBW1yMWZMjmNpzXO+Ymo+Ps8dg9loE8HZN4x6+Aeh8SF6SWrYckIwHvpQHL/F7ZiPgB9g78U+o/1Bk2INxn9HxIhKwwIHWyLSyIgbqNKJL6VSiqtRHTBQKZ1ODy4eB5bH33t42JrRNBeB7pdORDw1abFdsOTJx1bkkWZvZOvRT2SIRD/DzO5jGOovF8cYdnFNn8iZ2VgB6Tf3wWmnHqnYQW85SV8QaQVpyhDFR0wl7wObAOm8rh77s+oNGQvHbrhfD/CDA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zYSNyVTp8W/FB7GwWnYK3bCKTHKAmsNBpUFfPF0J3sk=;
 b=7hgFXUVMbiJ7Xo1HjWj+0ndJs+/vDeVB+cASc2g2X68Dd4ec3tnHCoMw34augXRJQdUUdN5Wmal+a8G+4txIDc6HVirJ7toDSUzY7u4mqrQNIulVicfoepoy0vWbOJObvMCIqMi1297XOxVvenisB9MHJNIGFjvcTEE1EV7JVVs=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 01/12] xen/arm: enable SVE extension for Xen
Thread-Topic: [PATCH v5 01/12] xen/arm: enable SVE extension for Xen
Thread-Index: AQHZbSQnZHhi12xjyUK0b5ypet9SdK8pMcIA
Date: Thu, 13 Apr 2023 12:47:02 +0000
Message-ID: <190AEC88-68BE-4DEA-84D5-9DDF0F63A365@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-2-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-2-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|GV2PR08MB9303:EE_|DBAEUR03FT032:EE_|AS8PR08MB8223:EE_
X-MS-Office365-Filtering-Correlation-Id: d8659d4c-50fd-412c-6170-08db3c1d3721
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 G1/kKWLWtfq1Ih6vfQZDmdHc5EfSKHQhLcoJEzCZZUmZ5MDaYi1zL2dTEglvoo0Ew3m8quMHIKIJY8UN9VxxOldsoJG8xPE6PyDT8jX6ac7zaE243PsiJYLr3Y5Km9q8O1+/yZhZmEhQc0pe1u2lVoRdc776DcxwHODBmTFo9iWgTrO3gEmqwHJoGBonJyIrvEqstJhJcwyzGbX21msjtCsf9LKjhHHa+ie9gkn53zu8+Aw+cD0UV2ZzRnaLMOEzTJXJCM2MUuZ9xxsNcVWY09SraiiMAjN+ZsOKn+VQLjTBG752GwHJb411BiQce67+YGbfvtvWdcGPhVTr0u05QRoK0YUOXLPFXRVp3uKgT70HdZ0DthMJ4xM36JxEOrpmIDJEcuWaKIZbb2x9boQ1yAmcXUM+N4NEDmsAT8jhjmnHHQet2SM77BYeH0i4RkBYxTmOu2Nj5k+Lhzt/Aact9tpdBMCe+hndUAX53x8cakX3+W55r9VuB9zWsQ6aTG6twMbGFvxNvr0MCrb+X1rP2PyEi2xcFeXAhTS0FnJIhcMeTUfnqa4jTC9yCRvXjKD0jCwmTqBJzM8TCFSJKI0PCuoefFp4hqo3naTotStrBVKlPpJ8ZeexsldQ9GZ7+J271E6+Du/2ldpMFg/evBTNIziEo8v0xvJg8s/uvBgYkzE=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(346002)(376002)(39860400002)(366004)(451199021)(71200400001)(6486002)(64756008)(66446008)(66476007)(66556008)(66946007)(4326008)(36756003)(76116006)(91956017)(37006003)(30864003)(2906002)(38070700005)(86362001)(122000001)(41300700001)(33656002)(5660300002)(8676002)(8936002)(6862004)(316002)(38100700002)(478600001)(54906003)(6636002)(53546011)(6512007)(6506007)(26005)(2616005)(186003)(83380400001)(21314003)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <2FAC3C652EDC8B4CA1BEA307BC42661D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB9303
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	86a3bd36-fcc8-47b9-51bc-08db3c1d2e07
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	XKriwFsla8SlnX7RH06F8Xj6vbc94kc7cpa+iPpRtpA/CXgsZSc81pYY/T/5Rhh7h/HhWHJXnENqGyyJ3G0I8A0BPsjpoz47xrTGONKsl/wK/HE311N2vja8oO689cNb+Z7FFaCiCPR0/tTfW6mNWrrBeUmFDvid5sd5cA3aFXvUv66Ubtz8DlaONMSFq2a666ElQOZjoxP5/hTzy0nmuFAnYZoBOBYHsiQyI9TNJMRSVpLTYIu+wbfrToquTVWo43o9AnC34FI7N3nBvcAOC24Uh7EproYy8lpWOyTb4Eg7WDw80FPfm3dSBOpbHYnddVG/b79GQZq6T/eIuFWo5ABA4BmlrCuldi/fVd6ku2WyVFyDT5csWRDbgdB+jTonHnSDovaQXAU3kUQ7xsPBXnYNWH9roZYCl+QqNloV5sofoZPJxR9I4qcfCsSuQPTLvrGr4I7oMndvgvY8r34J6RD+jQPnzBGqCU5ywC4g0xLwxI6j1HcA6KJR+5QA64cLrcUZbm6Fzg80dEn7WbgowA3BvL0ZoTJKoGBK7SNoaCZAfr7F7SYe//iFY6RTyXjll3DFvctumt76FSPGXKeqo9/kv/H5FOnoPm8aYbpXn9jI7CsyVfNPJksCjufT3zlKLaknN7sUYE7vE9eBUBVv4HkEntxYMwfWEVocSTnsoD/aqpa3ujOqdj1g6ZOKJbIYOg6f2Rg+mpQr6ZrtCnF6lnneYCGuP5zMqzESrXLc/Q8=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(346002)(39860400002)(136003)(376002)(396003)(451199021)(36840700001)(40470700004)(46966006)(4326008)(6636002)(316002)(54906003)(37006003)(70206006)(70586007)(6486002)(478600001)(41300700001)(40480700001)(8936002)(5660300002)(6862004)(8676002)(82310400005)(30864003)(2906002)(81166007)(356005)(36756003)(82740400003)(86362001)(36860700001)(33656002)(336012)(2616005)(53546011)(6512007)(107886003)(40460700003)(186003)(26005)(47076005)(83380400001)(6506007)(21314003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 12:47:17.9709
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d8659d4c-50fd-412c-6170-08db3c1d3721
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8223

Hi Luca,

> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> Enable Xen to handle the SVE extension: add code in the cpufeature module
> to handle the ZCR SVE register, and disable trapping of the SVE feature on
> system boot only while SVE resources are accessed.
> While there, correct the coding style of the comment on coprocessor
> trapping.
>
> cptr_el2 is now part of the domain context and will be restored on
> context switch. This is preparation for saving the SVE context, which
> will be part of the VFP operations, so restore it before the call to
> save the VFP registers.
> To save an additional isb barrier, restore cptr_el2 before an
> existing isb barrier and move the call for saving the VFP context after
> that barrier.
>
> Change the Kconfig entry to make the ARM64_SVE symbol selectable; by
> default it is not selected.
>
> Create the sve module and sve-asm.S, which contains assembly routines
> for the SVE feature. This code is inspired by Linux and uses instruction
> encodings to stay compatible with compilers that do not support SVE.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes from v4:
> - don't use fixed types in vl_to_zcr, forgot to address that in
>   v3, by mistake I changed that in patch 2, fixing now (Jan)
> Changes from v3:
> - no changes
> Changes from v2:
> - renamed sve_asm.S to sve-asm.S, new files should not contain
>   underscore in the name (Jan)
> Changes from v1:
> - Add assert to vl_to_zcr, it is never called with vl==0, but just
>   to be sure it won't in the future.
> Changes from RFC:
> - Moved restoring of cptr before an existing barrier (Julien)
> - Marked the feature as unsupported for now (Julien)
> - Trap and un-trap only when using SVE resources in
>   compute_max_zcr() (Julien)
> ---
> xen/arch/arm/Kconfig                     | 10 +++--
> xen/arch/arm/arm64/Makefile              |  1 +
> xen/arch/arm/arm64/cpufeature.c          |  7 ++--
> xen/arch/arm/arm64/sve-asm.S             | 48 +++++++++++++++++++++++
> xen/arch/arm/arm64/sve.c                 | 50 ++++++++++++++++++++++++
> xen/arch/arm/cpufeature.c                |  6 ++-
> xen/arch/arm/domain.c                    |  9 +++--
> xen/arch/arm/include/asm/arm64/sve.h     | 43 ++++++++++++++++++++
> xen/arch/arm/include/asm/arm64/sysregs.h |  1 +
> xen/arch/arm/include/asm/cpufeature.h    | 14 +++++++
> xen/arch/arm/include/asm/domain.h        |  1 +
> xen/arch/arm/include/asm/processor.h     |  2 +
> xen/arch/arm/setup.c                     |  5 ++-
> xen/arch/arm/traps.c                     | 28 +++++++------
> 14 files changed, 201 insertions(+), 24 deletions(-)
> create mode 100644 xen/arch/arm/arm64/sve-asm.S
> create mode 100644 xen/arch/arm/arm64/sve.c
> create mode 100644 xen/arch/arm/include/asm/arm64/sve.h
>
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 239d3aed3c7f..41f45d8d1203 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -112,11 +112,15 @@ config ARM64_PTR_AUTH
>  This feature is not supported in Xen.
>
> config ARM64_SVE
> - def_bool n
> + bool "Enable Scalar Vector Extension support (UNSUPPORTED)" if UNSUPPORTED
> depends on ARM_64
> help
> -  Scalar Vector Extension support.
> -  This feature is not supported in Xen.
> +  Scalar Vector Extension (SVE/SVE2) support for guests.

I would avoid mentioning SVE2 here unless both versions of SVE are supported
with this config.
Is that the case?

Cheers
Bertrand

> +
> +  Please be aware that currently, enabling this feature will add latency on
> +  VM context switch between SVE enabled guests, between not-enabled SVE
> +  guests and SVE enabled guests and vice versa, compared to the time
> +  required to switch between not-enabled SVE guests.
>
> config ARM64_MTE
> def_bool n
> diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
> index 6d507da0d44d..24e08fd42596 100644
> --- a/xen/arch/arm/arm64/Makefile
> +++ b/xen/arch/arm/arm64/Makefile
> @@ -12,6 +12,7 @@ obj-y += insn.o
> obj-$(CONFIG_LIVEPATCH) += livepatch.o
> obj-y += smc.o
> obj-y += smpboot.o
> +obj-$(CONFIG_ARM64_SVE) += sve.o sve-asm.o
> obj-y += traps.o
> obj-y += vfp.o
> obj-y += vsysreg.o
> diff --git a/xen/arch/arm/arm64/cpufeature.c b/xen/arch/arm/arm64/cpufeature.c
> index d9039d37b2d1..b4656ff4d80f 100644
> --- a/xen/arch/arm/arm64/cpufeature.c
> +++ b/xen/arch/arm/arm64/cpufeature.c
> @@ -455,15 +455,11 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
> ARM64_FTR_END,
> };
>
> -#if 0
> -/* TODO: use this to sanitize SVE once we support it */
> -
> static const struct arm64_ftr_bits ftr_zcr[] = {
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
> ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0), /* LEN */
> ARM64_FTR_END,
> };
> -#endif
>
> /*
>  * Common ftr bits for a 32bit register with all hidden, strict
> @@ -603,6 +599,9 @@ void update_system_features(const struct cpuinfo_arm *new)
>
> SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
>
> + if ( cpu_has_sve )
> + SANITIZE_REG(zcr64, 0, zcr);
> +
> /*
> * Comment from Linux:
> * Userspace may perform DC ZVA instructions. Mismatched block sizes
> diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
> new file mode 100644
> index 000000000000..4d1549344733
> --- /dev/null
> +++ b/xen/arch/arm/arm64/sve-asm.S
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Arm SVE assembly routines
> + *
> + * Copyright (C) 2022 ARM Ltd.
> + *
> + * Some macros and instruction encoding in this file are taken from linux 6.1.1,
> + * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
> + * version.
> + */
> +
> +/* Sanity-check macros to help avoid encoding garbage instructions */
> +
> +.macro _check_general_reg nr
> +    .if (\nr) < 0 || (\nr) > 30
> +        .error "Bad register number \nr."
> +    .endif
> +.endm
> +
> +.macro _check_num n, min, max
> +    .if (\n) < (\min) || (\n) > (\max)
> +        .error "Number \n out of range [\min,\max]"
> +    .endif
> +.endm
> +
> +/* SVE instruction encodings for non-SVE-capable assemblers */
> +/* (pre binutils 2.28, all kernel capable clang versions support SVE) */
> +
> +/* RDVL X\nx, #\imm */
> +.macro _sve_rdvl nx, imm
> +    _check_general_reg \nx
> +    _check_num (\imm), -0x20, 0x1f
> +    .inst 0x04bf5000                \
> +        | (\nx)                     \
> +        | (((\imm) & 0x3f) << 5)
> +.endm
> +
> +/* Gets the current vector register size in bytes */
> +GLOBAL(sve_get_hw_vl)
> +    _sve_rdvl 0, 1
> +    ret
> +
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> new file mode 100644
> index 000000000000..6f3fb368c59b
> --- /dev/null
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -0,0 +1,50 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Arm SVE feature code
> + *
> + * Copyright (C) 2022 ARM Ltd.
> + */
> +
> +#include <xen/types.h>
> +#include <asm/arm64/sve.h>
> +#include <asm/arm64/sysregs.h>
> +#include <asm/processor.h>
> +#include <asm/system.h>
> +
> +extern unsigned int sve_get_hw_vl(void);
> +
> +register_t compute_max_zcr(void)
> +{
> +    register_t cptr_bits = get_default_cptr_flags();
> +    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
> +    unsigned int hw_vl;
> +
> +    /* Remove trap for SVE resources */
> +    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
> +    isb();
> +
> +    /*
> +     * Set the maximum SVE vector length, doing that we will know the VL
> +     * supported by the platform, calling sve_get_hw_vl()
> +     */
> +    WRITE_SYSREG(zcr, ZCR_EL2);
> +
> +    /*
> +     * Read the maximum VL, which could be lower than what we imposed before,
> +     * hw_vl contains VL in bytes, multiply it by 8 to use vl_to_zcr() later
> +     */
> +    hw_vl = sve_get_hw_vl() * 8U;
> +
> +    /* Restore CPTR_EL2 */
> +    WRITE_SYSREG(cptr_bits, CPTR_EL2);
> +    isb();
> +
> +    return vl_to_zcr(hw_vl);
> +}
> +
> +/* Takes a vector length in bits and returns the ZCR_ELx encoding */
> +register_t vl_to_zcr(unsigned int vl)
> +{
> +    ASSERT(vl > 0);
> +    return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
> +}
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index c4ec38bb2554..83b84368f6d5 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -9,6 +9,7 @@
> #include <xen/init.h>
> #include <xen/smp.h>
> #include <xen/stop_machine.h>
> +#include <asm/arm64/sve.h>
> #include <asm/cpufeature.h>
>
> DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
> @@ -143,6 +144,9 @@ void identify_cpu(struct cpuinfo_arm *c)
>
>     c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
>
> +    if ( cpu_has_sve )
> +        c->zcr64.bits[0] = compute_max_zcr();
> +
>     c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
>
>     c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
> @@ -199,7 +203,7 @@ static int __init create_guest_cpuinfo(void)
>     guest_cpuinfo.pfr64.mpam = 0;
>     guest_cpuinfo.pfr64.mpam_frac = 0;
>
> -    /* Hide SVE as Xen does not support it */
> +    /* Hide SVE from the guests by default */
>     guest_cpuinfo.pfr64.sve = 0;
>     guest_cpuinfo.zfr64.bits[0] = 0;
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 99577adb6c69..adb6ace2e24d 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -181,9 +181,6 @@ static void ctxt_switch_to(struct vcpu *n)
>     /* VGIC */
>     gic_restore_state(n);
>
> -    /* VFP */
> -    vfp_restore_state(n);
> -
>     /* XXX MPU */
>
>     /* Fault Status */
> @@ -234,6 +231,7 @@ static void ctxt_switch_to(struct vcpu *n)
>     p2m_restore_state(n);
>
>     /* Control Registers */
> +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>     WRITE_SYSREG(n->arch.cpacr, CPACR_EL1);
>=20
>     /*
> @@ -258,6 +256,9 @@ static void ctxt_switch_to(struct vcpu *n)
> #endif
>     isb();
>
> +    /* VFP */
> +    vfp_restore_state(n);
> +
>     /* CP 15 */
>     WRITE_SYSREG(n->arch.csselr, CSSELR_EL1);
>=20
> @@ -548,6 +549,8 @@ int arch_vcpu_create(struct vcpu *v)
>
>     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>
> +    v->arch.cptr_el2 = get_default_cptr_flags();
> +
>     v->arch.hcr_el2 = get_default_hcr_flags();
>
>     v->arch.mdcr_el2 = HDCR_TDRA | HDCR_TDOSA | HDCR_TDA;
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> new file mode 100644
> index 000000000000..144d2b1cc485
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -0,0 +1,43 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Arm SVE feature code
> + *
> + * Copyright (C) 2022 ARM Ltd.
> + */
> +
> +#ifndef _ARM_ARM64_SVE_H
> +#define _ARM_ARM64_SVE_H
> +
> +#define SVE_VL_MAX_BITS (2048U)
> +
> +/* Vector length must be multiple of 128 */
> +#define SVE_VL_MULTIPLE_VAL (128U)
> +
> +#ifdef CONFIG_ARM64_SVE
> +
> +register_t compute_max_zcr(void);
> +register_t vl_to_zcr(unsigned int vl);
> +
> +#else /* !CONFIG_ARM64_SVE */
> +
> +static inline register_t compute_max_zcr(void)
> +{
> +    return 0;
> +}
> +
> +static inline register_t vl_to_zcr(unsigned int vl)
> +{
> +    return 0;
> +}
> +
> +#endif /* CONFIG_ARM64_SVE */
> +
> +#endif /* _ARM_ARM64_SVE_H */
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
> index 463899951414..4cabb9eb4d5e 100644
> --- a/xen/arch/arm/include/asm/arm64/sysregs.h
> +++ b/xen/arch/arm/include/asm/arm64/sysregs.h
> @@ -24,6 +24,7 @@
> #define ICH_EISR_EL2              S3_4_C12_C11_3
> #define ICH_ELSR_EL2              S3_4_C12_C11_5
> #define ICH_VMCR_EL2              S3_4_C12_C11_7
> +#define ZCR_EL2                   S3_4_C1_C2_0
>
> #define __LR0_EL2(x)              S3_4_C12_C12_ ## x
> #define __LR8_EL2(x)              S3_4_C12_C13_ ## x
> diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
> index c62cf6293fd6..6d703e051906 100644
> --- a/xen/arch/arm/include/asm/cpufeature.h
> +++ b/xen/arch/arm/include/asm/cpufeature.h
> @@ -32,6 +32,12 @@
> #define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
> #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
>
> +#ifdef CONFIG_ARM64_SVE
> +#define cpu_has_sve       (boot_cpu_feature64(sve) == 1)
> +#else
> +#define cpu_has_sve       (0)
> +#endif
> +
> #ifdef CONFIG_ARM_32
> #define cpu_has_gicv3     (boot_cpu_feature32(gic) >= 1)
> #define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
> @@ -323,6 +329,14 @@ struct cpuinfo_arm {
>         };
>     } isa64;
>
> +    union {
> +        register_t bits[1];
> +        struct {
> +            unsigned long len:4;
> +            unsigned long __res0:60;
> +        };
> +    } zcr64;
> +
>     struct {
>         register_t bits[1];
>     } zfr64;
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index 2a51f0ca688e..e776ee704b7d 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -190,6 +190,7 @@ struct arch_vcpu
>     register_t tpidrro_el0;
>
>     /* HYP configuration */
> +    register_t cptr_el2;
>     register_t hcr_el2;
>     register_t mdcr_el2;
>
> diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
> index 54f253087718..bc683334125c 100644
> --- a/xen/arch/arm/include/asm/processor.h
> +++ b/xen/arch/arm/include/asm/processor.h
> @@ -582,6 +582,8 @@ void do_trap_guest_serror(struct cpu_user_regs *regs);
>
> register_t get_default_hcr_flags(void);
>=20
> +register_t get_default_cptr_flags(void);
> +
> /*
>  * Synchronize SError unless the feature is selected.
>  * This is relying on the SErrors are currently unmasked.
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 1f26f67b90e3..5459cc4f5e62 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -135,10 +135,11 @@ static void __init processor_id(void)
>            cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
>            cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
>            cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
> -    printk("    Extensions:%s%s%s\n",
> +    printk("    Extensions:%s%s%s%s\n",
>            cpu_has_fp ? " FloatingPoint" : "",
>            cpu_has_simd ? " AdvancedSIMD" : "",
> -           cpu_has_gicv3 ? " GICv3-SysReg" : "");
> +           cpu_has_gicv3 ? " GICv3-SysReg" : "",
> +           cpu_has_sve ? " SVE" : "");
>
>     /* Warn user if we find unknown floating-point features */
>     if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 061c92acbd68..a78a99ddadd0 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -93,6 +93,21 @@ register_t get_default_hcr_flags(void)
>              HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
> }
>
> +register_t get_default_cptr_flags(void)
> +{
> +    /*
> +     * Trap all coprocessor registers (0-13) except cp10 and
> +     * cp11 for VFP.
> +     *
> +     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
> +     *
> +     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
> +     * RES1, i.e. they would trap whether we did this write or not.
> +     */
> +    return  ((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
> +             HCPTR_TTA | HCPTR_TAM);
> +}
> +
> static enum {
>     SERRORS_DIVERSE,
>     SERRORS_PANIC,
> @@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
>
> void init_traps(void)
> {
> +    register_t cptr_bits = get_default_cptr_flags();
>     /*
>      * Setup Hyp vector base. Note they might get updated with the
>      * branch predictor hardening.
> @@ -135,17 +151,7 @@ void init_traps(void)
>     /* Trap CP15 c15 used for implementation defined registers */
>     WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
>=20
> -    /* Trap all coprocessor registers (0-13) except cp10 and
> -     * cp11 for VFP.
> -     *
> -     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
> -     *
> -     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
> -     * RES1, i.e. they would trap whether we did this write or not.
> -     */
> -    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
> -                 HCPTR_TTA | HCPTR_TAM,
> -                 CPTR_EL2);
> +    WRITE_SYSREG(cptr_bits, CPTR_EL2);
>
>     /*
>      * Configure HCR_EL2 with the bare minimum to run Xen until a guest
> --
> 2.34.1
>



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 12:47:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 12:47:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520697.808570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwMJ-0004Br-DV; Thu, 13 Apr 2023 12:47:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520697.808570; Thu, 13 Apr 2023 12:47:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwMJ-0004Bi-9i; Thu, 13 Apr 2023 12:47:27 +0000
Received: by outflank-mailman (input) for mailman id 520697;
 Thu, 13 Apr 2023 12:47:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=73wY=AE=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pmwMI-0003wO-Gf
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 12:47:26 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20621.outbound.protection.outlook.com
 [2a01:111:f400:7d00::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 57aa59ef-d9f9-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 14:47:26 +0200 (CEST)
Received: from AS9PR06CA0062.eurprd06.prod.outlook.com (2603:10a6:20b:464::12)
 by PAWPR08MB10019.eurprd08.prod.outlook.com (2603:10a6:102:362::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Thu, 13 Apr
 2023 12:47:24 +0000
Received: from AM7EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:464:cafe::80) by AS9PR06CA0062.outlook.office365.com
 (2603:10a6:20b:464::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.31 via Frontend
 Transport; Thu, 13 Apr 2023 12:47:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT034.mail.protection.outlook.com (100.127.140.87) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 12:47:23 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Thu, 13 Apr 2023 12:47:23 +0000
Received: from ccdb36c774c0.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 66580BF8-7DA3-4829-8CD8-35E1FB063A95.1; 
 Thu, 13 Apr 2023 12:47:17 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ccdb36c774c0.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 12:47:17 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by GV2PR08MB9303.eurprd08.prod.outlook.com (2603:10a6:150:d4::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 12:47:14 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 12:47:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57aa59ef-d9f9-11ed-b21e-6b7b168915f2
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 03/12] xen/arm: Expose SVE feature to the guest
Thread-Topic: [PATCH v5 03/12] xen/arm: Expose SVE feature to the guest
Thread-Index: AQHZbSQuM7VmfjG4MUSvggInCqBRn68pMdAA
Date: Thu, 13 Apr 2023 12:47:14 +0000
Message-ID: <43CDAEBF-D406-4DD8-9C17-00971867B4B8@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-4-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-4-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Content-Type: text/plain; charset="us-ascii"
Content-ID: <929C3E3C7C006D4CA470CDFCA214CDFA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Luca,

> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> When a guest is allowed to use SVE, expose the SVE features through
> the identification registers.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> Changes from v4:
> - no changes
> Changes from v3:
> - no changes
> Changes from v2:
> - no changes
> Changes from v1:
> - No changes
> Changes from RFC:
> - No changes
> ---
> xen/arch/arm/arm64/vsysreg.c | 39 ++++++++++++++++++++++++++++++++++--
> 1 file changed, 37 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
> index 758750983c11..10048bb4d221 100644
> --- a/xen/arch/arm/arm64/vsysreg.c
> +++ b/xen/arch/arm/arm64/vsysreg.c
> @@ -18,6 +18,7 @@
>
> #include <xen/sched.h>
>
> +#include <asm/arm64/cpufeature.h>
> #include <asm/current.h>
> #include <asm/regs.h>
> #include <asm/traps.h>
> @@ -295,7 +296,28 @@ void do_sysreg(struct cpu_user_regs *regs,
>     GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
>     GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
>     GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
> -    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
> +
> +    case HSR_SYSREG_ID_AA64PFR0_EL1:
> +    {
> +        register_t guest_reg_value = guest_cpuinfo.pfr64.bits[0];
> +
> +        if ( is_sve_domain(v->domain) )
> +        {
> +            /* 4 is the SVE field width in id_aa64pfr0_el1 */
> +            uint64_t mask = GENMASK(ID_AA64PFR0_SVE_SHIFT + 4 - 1,
> +                                    ID_AA64PFR0_SVE_SHIFT);
> +            /* sysval is the sve field on the system */
> +            uint64_t sysval = cpuid_feature_extract_unsigned_field_width(
> +                                system_cpuinfo.pfr64.bits[0],
> +                                ID_AA64PFR0_SVE_SHIFT, 4);
> +            guest_reg_value &= ~mask;
> +            guest_reg_value |= (sysval << ID_AA64PFR0_SVE_SHIFT) & mask;
> +        }
> +
> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
> +                                  guest_reg_value);
> +    }
> +
>     GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
>     GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
>     GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
> @@ -306,7 +328,20 @@ void do_sysreg(struct cpu_user_regs *regs,
>     GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
>     GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
>     GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
> -    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
> +
> +    case HSR_SYSREG_ID_AA64ZFR0_EL1:
> +    {
> +        /*
> +         * When the guest has the SVE feature enabled, the whole id_aa64zfr0_el1
> +         * needs to be exposed.
> +         */
> +        register_t guest_reg_value = guest_cpuinfo.zfr64.bits[0];
> +        if ( is_sve_domain(v->domain) )
> +            guest_reg_value = system_cpuinfo.zfr64.bits[0];
> +
> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
> +                                  guest_reg_value);
> +    }
>
>     /*
>      * Those cases are catching all Reserved registers trapped by TID3 which
> --
> 2.34.1
>



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 12:47:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 12:47:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520698.808579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwMM-0004Si-Lp; Thu, 13 Apr 2023 12:47:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520698.808579; Thu, 13 Apr 2023 12:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwMM-0004SZ-IJ; Thu, 13 Apr 2023 12:47:30 +0000
Received: by outflank-mailman (input) for mailman id 520698;
 Thu, 13 Apr 2023 12:47:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=73wY=AE=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pmwMK-0003wO-TA
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 12:47:29 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2062d.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 58fe0859-d9f9-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 14:47:28 +0200 (CEST)
Received: from AM8P251CA0020.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:21b::25)
 by DU0PR08MB10365.eurprd08.prod.outlook.com (2603:10a6:10:40b::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 12:47:25 +0000
Received: from AM7EUR03FT045.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:21b:cafe::7b) by AM8P251CA0020.outlook.office365.com
 (2603:10a6:20b:21b::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.31 via Frontend
 Transport; Thu, 13 Apr 2023 12:47:25 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT045.mail.protection.outlook.com (100.127.140.150) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.31 via Frontend Transport; Thu, 13 Apr 2023 12:47:25 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Thu, 13 Apr 2023 12:47:25 +0000
Received: from c0ea31a0b597.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 BE3E619E-DE64-4BA1-ACF0-87A088E72251.1; 
 Thu, 13 Apr 2023 12:47:14 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c0ea31a0b597.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 12:47:14 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by GV2PR08MB9303.eurprd08.prod.outlook.com (2603:10a6:150:d4::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 12:47:11 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 12:47:11 +0000
X-Inumbo-ID: 58fe0859-d9f9-11ed-b21e-6b7b168915f2
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Thread-Topic: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Thread-Index: AQHZbSQrDABwD/+it0m8/HJNYINaK68pMcyA
Date: Thu, 13 Apr 2023 12:47:11 +0000
Message-ID: <9486E559-879E-49AF-9145-B929A8EE9301@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-3-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-3-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Content-Type: text/plain; charset="us-ascii"
Content-ID: <E5F57B3FF502044E8FC0B670546A37D0@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Luca,

> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> Add sve_vl field to arch_domain and xen_arch_domainconfig struct,
> to allow the domain to have an information about the SVE feature
> and the number of SVE register bits that are allowed for this
> domain.
>

Please mention in the commit message that you are bumping the
domctl interface version.

> sve_vl field is the vector length in bits divided by 128, this
> allows to use less space in the structures.
>
> The field is used also to allow or forbid a domain to use SVE,
> because a value equal to zero means the guest is not allowed to
> use the feature.
>
> Check that the requested vector length is lower or equal to the
> platform supported vector length, otherwise fail on domain
> creation.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

With the commit message fixed:
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> Changes from v4:
> - Return 0 in get_sys_vl_len() if sve is not supported, code style fix,
>   removed else if since the conditions can't fallthrough, removed not
>   needed condition checking for VL bits validity because it's already
>   covered, so delete is_vl_valid() function. (Jan)
> Changes from v3:
> - don't use fixed types when not needed, use encoded value also in
>   arch_domain so rename sve_vl_bits in sve_vl. (Jan)
> - rename domainconfig_decode_vl to sve_decode_vl because it will now
>   be used also to decode from arch_domain value
> - change sve_vl from uint16_t to uint8_t and move it after "type" field
>   to optimize space.
> Changes from v2:
> - rename field in xen_arch_domainconfig from "sve_vl_bits" to
>   "sve_vl" and use the implicit padding after gic_version to
>   store it, now this field is the VL/128. (Jan)
> - Created domainconfig_decode_vl() function to decode the sve_vl
>   field and use it as plain bits value inside arch_domain.
> - Changed commit message reflecting the changes
> Changes from v1:
> - no changes
> Changes from RFC:
> - restore zcr_el2 in sve_restore_state, that will be introduced
>   later in this series, so remove zcr_el2 related code from this
>   patch and move everything to the later patch (Julien)
> - add explicit padding into struct xen_arch_domainconfig (Julien)
> - Don't lower down the vector length, just fail to create the
>   domain. (Julien)
> ---
> xen/arch/arm/arm64/sve.c             | 12 ++++++++++++
> xen/arch/arm/domain.c                | 27 +++++++++++++++++++++++++++
> xen/arch/arm/include/asm/arm64/sve.h | 12 ++++++++++++
> xen/arch/arm/include/asm/domain.h    |  5 +++++
> xen/include/public/arch-arm.h        |  2 ++
> xen/include/public/domctl.h          |  2 +-
> 6 files changed, 59 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> index 6f3fb368c59b..78f7482619da 100644
> --- a/xen/arch/arm/arm64/sve.c
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -6,6 +6,7 @@
>  */
>
> #include <xen/types.h>
> +#include <asm/cpufeature.h>
> #include <asm/arm64/sve.h>
> #include <asm/arm64/sysregs.h>
> #include <asm/processor.h>
> @@ -48,3 +49,14 @@ register_t vl_to_zcr(unsigned int vl)
>     ASSERT(vl > 0);
>     return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
> }
> +
> +/* Get the system sanitized value for VL in bits */
> +unsigned int get_sys_vl_len(void)
> +{
> +    if ( !cpu_has_sve )
> +        return 0;
> +
> +    /* ZCR_ELx len field is ((len+1) * 128) = vector bits length */
> +    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
> +            SVE_VL_MULTIPLE_VAL;
> +}
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index adb6ace2e24d..769fae8fe25e 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -13,6 +13,7 @@
> #include <xen/wait.h>
>
> #include <asm/alternative.h>
> +#include <asm/arm64/sve.h>
> #include <asm/cpuerrata.h>
> #include <asm/cpufeature.h>
> #include <asm/current.h>
> @@ -550,6 +551,8 @@ int arch_vcpu_create(struct vcpu *v)
>     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>
>     v->arch.cptr_el2 = get_default_cptr_flags();
> +    if ( is_sve_domain(v->domain) )
> +        v->arch.cptr_el2 &= ~HCPTR_CP(8);
>
>     v->arch.hcr_el2 = get_default_hcr_flags();
>
> @@ -594,6 +597,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>     unsigned int max_vcpus;
>     unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>     unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
> +    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>
>     if ( (config->flags & ~flags_optional) != flags_required )
>     {
> @@ -602,6 +606,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>         return -EINVAL;
>     }
>
> +    /* Check feature flags */
> +    if ( sve_vl_bits > 0 )
> +    {
> +        unsigned int zcr_max_bits = get_sys_vl_len();
> +
> +        if ( !zcr_max_bits )
> +        {
> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
> +            return -EINVAL;
> +        }
> +
> +        if ( sve_vl_bits > zcr_max_bits )
> +        {
> +            dprintk(XENLOG_INFO,
> +                    "Requested SVE vector length (%u) > supported length (%u)\n",
> +                    sve_vl_bits, zcr_max_bits);
> +            return -EINVAL;
> +        }
> +    }
> +
>     /* The P2M table must always be shared between the CPU and the IOMMU */
>     if ( config->iommu_opts & XEN_DOMCTL_IOMMU_no_sharept )
>     {
> @@ -744,6 +768,9 @@ int arch_domain_create(struct domain *d,
>     if ( (rc = domain_vpci_init(d)) != 0 )
>         goto fail;
>
> +    /* Copy the encoded vector length sve_vl from the domain configuration */
> +    d->arch.sve_vl = config->arch.sve_vl;
> +
>     return 0;
>
> fail:
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> index 144d2b1cc485..a4c53e3e8e2e 100644
> --- a/xen/arch/arm/include/asm/arm64/sve.h
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -13,10 +13,17 @@
> /* Vector length must be multiple of 128 */
> #define SVE_VL_MULTIPLE_VAL (128U)
>
> +static inline unsigned int sve_decode_vl(unsigned int sve_vl)
> +{
> +    /* SVE vector length is stored as VL/128 in xen_arch_domainconfig */
> +    return sve_vl * SVE_VL_MULTIPLE_VAL;
> +}
> +
> #ifdef CONFIG_ARM64_SVE
>
> register_t compute_max_zcr(void);
> register_t vl_to_zcr(unsigned int vl);
> +unsigned int get_sys_vl_len(void);
>
> #else /* !CONFIG_ARM64_SVE */
>
> @@ -30,6 +37,11 @@ static inline register_t vl_to_zcr(unsigned int vl)
>     return 0;
> }
>
> +static inline unsigned int get_sys_vl_len(void)
> +{
> +    return 0;
> +}
> +
> #endif /* CONFIG_ARM64_SVE */
>
> #endif /* _ARM_ARM64_SVE_H */
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index e776ee704b7d..78cc2da3d4e5 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -31,6 +31,8 @@ enum domain_type {
>
> #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>
> +#define is_sve_domain(d) ((d)->arch.sve_vl > 0)
> +
> /*
>  * Is the domain using the host memory layout?
>  *
> @@ -67,6 +69,9 @@ struct arch_domain
>     enum domain_type type;
> #endif
>
> +    /* max SVE encoded vector length */
> +    uint8_t sve_vl;
> +
>     /* Virtual MMU */
>     struct p2m_domain p2m;
>
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 1528ced5097a..38311f559581 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -300,6 +300,8 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
> struct xen_arch_domainconfig {
>     /* IN/OUT */
>     uint8_t gic_version;
> +    /* IN - Contains SVE vector length divided by 128 */
> +    uint8_t sve_vl;
>     /* IN */
>     uint16_t tee_type;
>     /* IN */
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 529801c89ba3..e2e22cb534d6 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -21,7 +21,7 @@
> #include "hvm/save.h"
> #include "memory.h"
>
> -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
> +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000016
>
> /*
>  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
> --
> 2.34.1
>



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 12:57:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 12:57:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520710.808590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwVz-0006e2-PY; Thu, 13 Apr 2023 12:57:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520710.808590; Thu, 13 Apr 2023 12:57:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwVz-0006dv-MJ; Thu, 13 Apr 2023 12:57:27 +0000
Received: by outflank-mailman (input) for mailman id 520710;
 Thu, 13 Apr 2023 12:57:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmwVx-0006dp-NG
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 12:57:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwVx-0002Vl-7Q; Thu, 13 Apr 2023 12:57:25 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.20.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwVw-00061S-Vg; Thu, 13 Apr 2023 12:57:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=euzWDavnCtCzLiLiWNGNCKiDquQF7a7twnh2ruC0fqw=; b=2lqpvHyeicongDvjHdkFlly8za
	qhrlAPqaxNQla6oXuF1lT9/o1+z7Dne5zHlkounh7AcpyQuB4w9CpgR361OIh0g6WCr+8gy1Zgjb/
	hDevg7+ZKTCu9NJfiMy2o2MNOJmYl9n3p5l13u0FeaJCU3SwIT96GpICHw3CVkQhkdiY=;
Message-ID: <72f38b2b-a391-fb7c-f8c0-cf3561470875@xen.org>
Date: Thu, 13 Apr 2023 13:57:21 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-3-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230412094938.2693890-3-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 12/04/2023 10:49, Luca Fancellu wrote:
> Add an sve_vl field to arch_domain and to the xen_arch_domainconfig
> struct, to allow the domain to carry information about the SVE feature
> and the number of SVE register bits that are allowed for this domain.
> 
> The sve_vl field is the vector length in bits divided by 128; this
> allows the structures to use less space.
> 
> The field is also used to allow or forbid a domain to use SVE: a value
> equal to zero means the guest is not allowed to use the feature.
> 
> Check that the requested vector length is less than or equal to the
> platform-supported vector length; otherwise fail domain creation.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes from v4:
>   - Return 0 in get_sys_vl_len() if sve is not supported, code style fix,
>     removed else if since the conditions can't fallthrough, removed not
>     needed condition checking for VL bits validity because it's already
>     covered, so delete is_vl_valid() function. (Jan)
> Changes from v3:
>   - don't use fixed types when not needed, use encoded value also in
>     arch_domain so rename sve_vl_bits in sve_vl. (Jan)
>   - rename domainconfig_decode_vl to sve_decode_vl because it will now
>     be used also to decode from arch_domain value
>   - change sve_vl from uint16_t to uint8_t and move it after "type" field
>     to optimize space.
> Changes from v2:
>   - rename field in xen_arch_domainconfig from "sve_vl_bits" to
>     "sve_vl" and use the implicit padding after gic_version to
>     store it, now this field is the VL/128. (Jan)
>   - Created domainconfig_decode_vl() function to decode the sve_vl
>     field and use it as plain bits value inside arch_domain.
>   - Changed commit message reflecting the changes
> Changes from v1:
>   - no changes
> Changes from RFC:
>   - restore zcr_el2 in sve_restore_state, which will be introduced
>     later in this series, so remove the zcr_el2 related code from this
>     patch and move everything to the later patch (Julien)
>   - add explicit padding into struct xen_arch_domainconfig (Julien)
>   - Don't lower the vector length, just fail to create the
>     domain. (Julien)
> ---
>   xen/arch/arm/arm64/sve.c             | 12 ++++++++++++
>   xen/arch/arm/domain.c                | 27 +++++++++++++++++++++++++++
>   xen/arch/arm/include/asm/arm64/sve.h | 12 ++++++++++++
>   xen/arch/arm/include/asm/domain.h    |  5 +++++
>   xen/include/public/arch-arm.h        |  2 ++
>   xen/include/public/domctl.h          |  2 +-
>   6 files changed, 59 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> index 6f3fb368c59b..78f7482619da 100644
> --- a/xen/arch/arm/arm64/sve.c
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -6,6 +6,7 @@
>    */
>   
>   #include <xen/types.h>
> +#include <asm/cpufeature.h>
>   #include <asm/arm64/sve.h>
>   #include <asm/arm64/sysregs.h>
>   #include <asm/processor.h>
> @@ -48,3 +49,14 @@ register_t vl_to_zcr(unsigned int vl)
>       ASSERT(vl > 0);
>       return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
>   }
> +
> +/* Get the system sanitized value for VL in bits */
> +unsigned int get_sys_vl_len(void)
> +{
> +    if ( !cpu_has_sve )
> +        return 0;
> +
> +    /* ZCR_ELx len field is ((len+1) * 128) = vector bits length */
> +    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
> +            SVE_VL_MULTIPLE_VAL;
> +}
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index adb6ace2e24d..769fae8fe25e 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -13,6 +13,7 @@
>   #include <xen/wait.h>
>   
>   #include <asm/alternative.h>
> +#include <asm/arm64/sve.h>
>   #include <asm/cpuerrata.h>
>   #include <asm/cpufeature.h>
>   #include <asm/current.h>
> @@ -550,6 +551,8 @@ int arch_vcpu_create(struct vcpu *v)
>       v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>   
>       v->arch.cptr_el2 = get_default_cptr_flags();
> +    if ( is_sve_domain(v->domain) )
> +        v->arch.cptr_el2 &= ~HCPTR_CP(8);
>   
>       v->arch.hcr_el2 = get_default_hcr_flags();
>   
> @@ -594,6 +597,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>       unsigned int max_vcpus;
>       unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>       unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
> +    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>   
>       if ( (config->flags & ~flags_optional) != flags_required )
>       {
> @@ -602,6 +606,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>           return -EINVAL;
>       }
>   
> +    /* Check feature flags */
> +    if ( sve_vl_bits > 0 )
> +    {
> +        unsigned int zcr_max_bits = get_sys_vl_len();
> +
> +        if ( !zcr_max_bits )
> +        {
> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
> +            return -EINVAL;
> +        }
> +
> +        if ( sve_vl_bits > zcr_max_bits )
> +        {
> +            dprintk(XENLOG_INFO,
> +                    "Requested SVE vector length (%u) > supported length (%u)\n",
> +                    sve_vl_bits, zcr_max_bits);
> +            return -EINVAL;
> +        }

Is SVE supported for 32-bit guests? If not, then you should add a check 
here to prevent the creation of the domain if sve_vl_bits is set.

> +    }
> +
>       /* The P2M table must always be shared between the CPU and the IOMMU */
>       if ( config->iommu_opts & XEN_DOMCTL_IOMMU_no_sharept )
>       {
> @@ -744,6 +768,9 @@ int arch_domain_create(struct domain *d,
>       if ( (rc = domain_vpci_init(d)) != 0 )
>           goto fail;
>   
> +    /* Copy the encoded vector length sve_vl from the domain configuration */
> +    d->arch.sve_vl = config->arch.sve_vl;
> +
>       return 0;
>   
>   fail:
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> index 144d2b1cc485..a4c53e3e8e2e 100644
> --- a/xen/arch/arm/include/asm/arm64/sve.h
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -13,10 +13,17 @@
>   /* Vector length must be multiple of 128 */
>   #define SVE_VL_MULTIPLE_VAL (128U)
>   
> +static inline unsigned int sve_decode_vl(unsigned int sve_vl)
> +{
> +    /* SVE vector length is stored as VL/128 in xen_arch_domainconfig */
> +    return sve_vl * SVE_VL_MULTIPLE_VAL;
> +}
> +
>   #ifdef CONFIG_ARM64_SVE
>   
>   register_t compute_max_zcr(void);
>   register_t vl_to_zcr(unsigned int vl);
> +unsigned int get_sys_vl_len(void);
>   
>   #else /* !CONFIG_ARM64_SVE */
>   
> @@ -30,6 +37,11 @@ static inline register_t vl_to_zcr(unsigned int vl)
>       return 0;
>   }
>   
> +static inline unsigned int get_sys_vl_len(void)
> +{
> +    return 0;
> +}
> +
>   #endif /* CONFIG_ARM64_SVE */
>   
>   #endif /* _ARM_ARM64_SVE_H */
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index e776ee704b7d..78cc2da3d4e5 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -31,6 +31,8 @@ enum domain_type {
>   
>   #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>   
> +#define is_sve_domain(d) ((d)->arch.sve_vl > 0)
> +
>   /*
>    * Is the domain using the host memory layout?
>    *
> @@ -67,6 +69,9 @@ struct arch_domain
>       enum domain_type type;
>   #endif
>   
> +    /* max SVE encoded vector length */
> +    uint8_t sve_vl;
> +
Can we move this somewhere else to avoid adding extra padding? Also, 
shouldn't this be protected with #ifdef CONFIG_ARM_64 to make it clear 
that this is not supported on 32-bit Xen?

>       /* Virtual MMU */
>       struct p2m_domain p2m;
>   
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 1528ced5097a..38311f559581 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -300,6 +300,8 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
>   struct xen_arch_domainconfig {
>       /* IN/OUT */
>       uint8_t gic_version;
> +    /* IN - Contains SVE vector length divided by 128 */
> +    uint8_t sve_vl;
>       /* IN */
>       uint16_t tee_type;
>       /* IN */
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 529801c89ba3..e2e22cb534d6 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -21,7 +21,7 @@
>   #include "hvm/save.h"
>   #include "memory.h"
>   
> -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
> +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000016
>   
>   /*
>    * NB. xen_domctl.domain is an IN/OUT parameter for this operation.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:02:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 13:02:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520714.808600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwb0-00087N-BR; Thu, 13 Apr 2023 13:02:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520714.808600; Thu, 13 Apr 2023 13:02:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwb0-00087G-8h; Thu, 13 Apr 2023 13:02:38 +0000
Received: by outflank-mailman (input) for mailman id 520714;
 Thu, 13 Apr 2023 13:02:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmwaz-000879-Hy
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 13:02:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwaz-0002n9-5E; Thu, 13 Apr 2023 13:02:37 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.20.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmway-0006K5-Qg; Thu, 13 Apr 2023 13:02:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=lLBVMFdXzSBS66TcytgO+g8bcAII3N0qP+rdanIkDHQ=; b=tedVXxabetuTfyaivXg1gnnsAk
	ZbSl6eRRYKWwj+x8yqtpPAXHoya6hubplba9I5G690pY9495YDzbO5aMuITBA0mt/98si6ADua3UI
	lYpriE9aU+DQTBG2weHoXE5pdZ6loGyHivgfkbMezfe5rB8oDutS1g0W7IzoZwjYecF0=;
Message-ID: <f405fa12-d99b-c07f-0bdd-c49f64f3ffb4@xen.org>
Date: Thu, 13 Apr 2023 14:02:34 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [PATCH v5 04/12] xen/arm: add SVE exception class handling
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-5-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230412094938.2693890-5-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 12/04/2023 10:49, Luca Fancellu wrote:
> SVE has a new exception class with code 0x19; introduce the new code
> and handle the exception.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes from v4:
>   - No changes
> Changes from v3:
>   - No changes
> Changes from v2:
>   - No changes
> Changes from v1:
>   - No changes
> Changes from RFC:
>   - No changes
> ---
>   xen/arch/arm/include/asm/processor.h |  1 +
>   xen/arch/arm/traps.c                 | 12 ++++++++++++
>   2 files changed, 13 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
> index bc683334125c..7e42ff8811fc 100644
> --- a/xen/arch/arm/include/asm/processor.h
> +++ b/xen/arch/arm/include/asm/processor.h
> @@ -426,6 +426,7 @@
>   #define HSR_EC_HVC64                0x16
>   #define HSR_EC_SMC64                0x17
>   #define HSR_EC_SYSREG               0x18
> +#define HSR_EC_SVE                  0x19
>   #endif
>   #define HSR_EC_INSTR_ABORT_LOWER_EL 0x20
>   #define HSR_EC_INSTR_ABORT_CURR_EL  0x21
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index a78a99ddadd0..c2e30feafd5a 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2160,6 +2160,13 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>           perfc_incr(trap_sysreg);
>           do_sysreg(regs, hsr);
>           break;
> +    case HSR_EC_SVE:
> +        GUEST_BUG_ON(regs_mode_is_32bit(regs));
> +        gprintk(XENLOG_WARNING,
> +                "Domain id %d tried to use SVE while not allowed\n",
> +                current->domain->domain_id);

gprintk() will already print the domain/vCPU for you. Also, if you want 
to print a domain ID, then you should use ("%pd", d) rather than ("%d", 
d->domain_id).

> +        inject_undef_exception(regs, hsr);
> +        break;
>   #endif
>   
>       case HSR_EC_INSTR_ABORT_LOWER_EL:
> @@ -2189,6 +2196,11 @@ void do_trap_hyp_sync(struct cpu_user_regs *regs)
>       case HSR_EC_BRK:
>           do_trap_brk(regs, hsr);
>           break;
> +    case HSR_EC_SVE:
> +        /* An SVE exception is a bug somewhere in hypervisor code */
> +        printk("SVE trap at EL2.\n");
> +        do_unexpected_trap("Hypervisor", regs);

I think it would be better if you pass "SVE trap at EL2" as a string 
rather than adding your own printk above.

> +        break;
>   #endif
>       case HSR_EC_DATA_ABORT_CURR_EL:
>       case HSR_EC_INSTR_ABORT_CURR_EL:

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:12:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 13:12:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520720.808620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwjv-0001Q1-Fy; Thu, 13 Apr 2023 13:11:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520720.808620; Thu, 13 Apr 2023 13:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwjv-0001Pu-DE; Thu, 13 Apr 2023 13:11:51 +0000
Received: by outflank-mailman (input) for mailman id 520720;
 Thu, 13 Apr 2023 13:11:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=czPd=AE=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pmwjt-0001Je-VE
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 13:11:50 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20620.outbound.protection.outlook.com
 [2a01:111:f400:7e89::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be31e67d-d9fc-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 15:11:48 +0200 (CEST)
Received: from MW4PR04CA0316.namprd04.prod.outlook.com (2603:10b6:303:82::21)
 by CH0PR12MB5297.namprd12.prod.outlook.com (2603:10b6:610:d4::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 13:11:41 +0000
Received: from CO1NAM11FT098.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:82:cafe::8a) by MW4PR04CA0316.outlook.office365.com
 (2603:10b6:303:82::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30 via Frontend
 Transport; Thu, 13 Apr 2023 13:11:41 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT098.mail.protection.outlook.com (10.13.174.207) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 13:11:40 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 08:11:40 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 06:11:14 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 13 Apr 2023 08:11:13 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be31e67d-d9fc-11ed-b21e-6b7b168915f2
Message-ID: <601858e2-79b5-35da-df00-2d9061d8ff22@amd.com>
Date: Thu, 13 Apr 2023 15:11:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [XEN][PATCH v5 05/17] libfdt: overlay: change
 overlay_get_target()
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Vikram Garhwal
	<fnu.vikram@xilinx.com>, David Gibson <david@gibson.dropbear.id.au>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-6-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-6-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi Vikram,

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Rename overlay_get_target() to fdt_overlay_target_offset() and drop the
> static qualifier.
> 
> This is done to get the target path for the overlay nodes, which is very
> useful in many cases. For example, the Xen hypervisor needs it when applying
> overlays because Xen needs to do further processing of the overlay nodes,
> e.g. mapping of resources (IRQs and IOMMUs) to other VMs, creation of SMMU
> pagetables, etc.
> 
> Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
> Message-Id: <1637204036-382159-2-git-send-email-fnu.vikram@xilinx.com>
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Origin: https://github.com/dgibson/dtc 45f3d1a095dd
Wouldn't it be better to point to the main dtc repository under kernel.org rather than github?
Origin: git://git.kernel.org/pub/scm/utils/dtc/dtc.git 45f3d1a095dd

> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
In any case:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:12:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 13:12:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520719.808610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwjt-0001B2-7K; Thu, 13 Apr 2023 13:11:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520719.808610; Thu, 13 Apr 2023 13:11:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwjt-0001Av-4B; Thu, 13 Apr 2023 13:11:49 +0000
Received: by outflank-mailman (input) for mailman id 520719;
 Thu, 13 Apr 2023 13:11:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmwjr-0001Al-Jp
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 13:11:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwjr-0002wV-9t; Thu, 13 Apr 2023 13:11:47 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.20.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwjr-0006lZ-2G; Thu, 13 Apr 2023 13:11:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=cB0gGg5I8BEmRP1ArCMDNiP+A+rPj1v4EFP5w+9h0Ag=; b=NOCJdMWRKxQvB1M3A+YCfXQXle
	CdjRmnSswtmSpUGuO6gjgVFOieDVNsyHCp4I9L7QfifT5P85vtia+FvbJjM04vzyyNBhBVemU0LF2
	Jhwwvj5JFrv45+scamQYgya016BvJfvc+SU9itteOZdqXubltOTY9e6n/3K7ImXvVtfk=;
Message-ID: <b1c77bdf-6979-83b6-f5e4-ac5b3e751a3d@xen.org>
Date: Thu, 13 Apr 2023 14:11:44 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-6-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230412094938.2693890-6-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 12/04/2023 10:49, Luca Fancellu wrote:
> Save/restore the SVE context on context switch: allocate memory to hold
> the Z0-31 registers, whose length is at most 2048 bits each, and FFR,
> which can be at most 256 bits. The amount of memory allocated depends on
> the vector length for the domain and on how many bits are supported by
> the platform.
> 
> Save P0-15, whose length is at most 256 bits each; in this case the
> memory used is the fpregs field in struct vfp_state, because V0-31 are
> part of Z0-31 and this space would otherwise be unused for an SVE
> domain.
> 
> Create zcr_el{1,2} fields in arch_vcpu, initialise zcr_el2 on vcpu
> creation given the requested vector length, and restore it on context
> switch; save/restore the ZCR_EL1 value as well.
> 
> Remove headers from sve.c that are already included using
> xen/sched.h.
I dislike this because ...

> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> index 78f7482619da..5485648850a0 100644
> --- a/xen/arch/arm/arm64/sve.c
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -5,14 +5,29 @@
>    * Copyright (C) 2022 ARM Ltd.
>    */
>   
> -#include <xen/types.h>
> -#include <asm/cpufeature.h>

... it is not entirely obvious that sched.h will import 
asm/cpufeatures.h. This could easily change in the future, at which 
point we would need to re-add those includes.

> +#include <xen/sched.h>
> +#include <xen/sizes.h>
>   #include <asm/arm64/sve.h>
> -#include <asm/arm64/sysregs.h>
> -#include <asm/processor.h>
> -#include <asm/system.h>
>   
>   extern unsigned int sve_get_hw_vl(void);
> +extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
> +extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
> +                         int restore_ffr);
> +
> +static inline unsigned int sve_zreg_ctx_size(unsigned int vl)
> +{
> +    /*
> +     * Z0-31 registers size in bytes is computed from VL that is in bits, so VL
> +     * in bytes is VL/8.
> +     */
> +    return (vl / 8U) * 32U;
> +}
> +
> +static inline unsigned int sve_ffrreg_ctx_size(unsigned int vl)
> +{
> +    /* FFR register size is VL/8, which is in bytes (VL/8)/8 */
> +    return (vl / 64U);
> +}
>   
>   register_t compute_max_zcr(void)
>   {
> @@ -60,3 +75,46 @@ unsigned int get_sys_vl_len(void)
>       return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
>               SVE_VL_MULTIPLE_VAL;
>   }
> +
> +int sve_context_init(struct vcpu *v)
> +{
> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
> +    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
> +                             sve_ffrreg_ctx_size(sve_vl_bits),
> +                             L1_CACHE_BYTES);
> +
> +    if ( !ctx )
> +        return -ENOMEM;
> +
> +    v->arch.vfp.sve_context = ctx;
> +
> +    return 0;
> +}
> +
> +void sve_context_free(struct vcpu *v)
> +{
> +    xfree(v->arch.vfp.sve_context);
> +}

Please use XFREE().
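The difference matters because XFREE() also sets the pointer to NULL after freeing, so a stale use of the field fails loudly instead of dangling. A minimal user-space sketch of that free-and-NULL pattern, with standard free() standing in for Xen's xfree():

```c
#include <stdlib.h>

/*
 * Free-and-NULL pattern (what XFREE() provides on top of xfree()):
 * after freeing, the field no longer dangles, so an accidental second
 * free is a harmless free(NULL) and a use-after-free becomes an
 * obvious NULL dereference. Plain free() stands in for xfree() here.
 */
#define XFREE_SKETCH(p)  \
    do {                 \
        free(p);         \
        (p) = NULL;      \
    } while ( 0 )

struct sve_state {
    unsigned long *sve_context;
};

static void sve_context_free_sketch(struct sve_state *s)
{
    XFREE_SKETCH(s->sve_context);
}
```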

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:15:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 13:15:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520726.808630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwnD-0002Ne-33; Thu, 13 Apr 2023 13:15:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520726.808630; Thu, 13 Apr 2023 13:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwnC-0002NX-VD; Thu, 13 Apr 2023 13:15:14 +0000
Received: by outflank-mailman (input) for mailman id 520726;
 Thu, 13 Apr 2023 13:15:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmwnB-0002Me-GP
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 13:15:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwnB-000314-3S; Thu, 13 Apr 2023 13:15:13 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.20.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwnA-0006qO-TD; Thu, 13 Apr 2023 13:15:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=XyEECiqCsWYUcu8zF9/Wwizrl1PUl1S/MULOV7hgBww=; b=v3x4K+RG9NNucUjIEChgxj90Iz
	j8V9idAWeLZoCoiDPN6zVIjI3CMgyCfuBqqsBj8CPQ1Wb1gLsPyvCewb7HQueClSINQx4e9DZsS+6
	QqY93LWXfHbI9djQ+7GxVRlYIBlnzQMup7EW8aP9I3FifVz7Cq9BqsIrTWaQ3S9FtqpU=;
Message-ID: <2359695e-f8f8-cf51-27f9-5f0c776feca5@xen.org>
Date: Thu, 13 Apr 2023 14:15:10 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
Content-Language: en-US
To: Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com, Marc Bonnici <marc.bonnici@arm.com>,
 Achin Gupta <achin.gupta@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-10-jens.wiklander@linaro.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230413071424.3273490-10-jens.wiklander@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/04/2023 08:14, Jens Wiklander wrote:
> Adds support for sending an FF-A direct request. Checks that the SP also
> supports handling a 32-bit direct request. 64-bit direct requests are
> not used by the mediator itself, so there is no need to check for that.
> 
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>   xen/arch/arm/tee/ffa.c | 112 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 112 insertions(+)
> 
> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> index f129879c5b81..f2cce955d981 100644
> --- a/xen/arch/arm/tee/ffa.c
> +++ b/xen/arch/arm/tee/ffa.c
> @@ -181,6 +181,56 @@ static bool ffa_get_version(uint32_t *vers)
>       return true;
>   }
>   
> +static int32_t get_ffa_ret_code(const struct arm_smccc_1_2_regs *resp)
> +{
> +    switch ( resp->a0 )
> +    {
> +    case FFA_ERROR:
> +        if ( resp->a2 )
> +            return resp->a2;
> +        else
> +            return FFA_RET_NOT_SUPPORTED;
> +    case FFA_SUCCESS_32:
> +    case FFA_SUCCESS_64:
> +        return FFA_RET_OK;
> +    default:
> +        return FFA_RET_NOT_SUPPORTED;
> +    }
> +}
> +
> +static int32_t ffa_simple_call(uint32_t fid, register_t a1, register_t a2,
> +                               register_t a3, register_t a4)
> +{
> +    const struct arm_smccc_1_2_regs arg = {
> +        .a0 = fid,
> +        .a1 = a1,
> +        .a2 = a2,
> +        .a3 = a3,
> +        .a4 = a4,
> +    };
> +    struct arm_smccc_1_2_regs resp;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +
> +    return get_ffa_ret_code(&resp);
> +}
> +
> +static int32_t ffa_features(uint32_t id)
> +{
> +    return ffa_simple_call(FFA_FEATURES, id, 0, 0, 0);
> +}
> +
> +static bool check_mandatory_feature(uint32_t id)
> +{
> +    int32_t ret = ffa_features(id);
> +
> +    if (ret)
> +        printk(XENLOG_ERR "ffa: mandatory feature id %#x missing: error %d\n",
> +               id, ret);
> +
> +    return !ret;
> +}
> +
>   static uint16_t get_vm_id(const struct domain *d)
>   {
>       /* +1 since 0 is reserved for the hypervisor in FF-A */
> @@ -222,6 +272,57 @@ static void handle_version(struct cpu_user_regs *regs)
>       set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
>   }
>   
> +static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
> +{
> +    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
> +    struct arm_smccc_1_2_regs resp = { };
> +    struct domain *d = current->domain;
> +    uint32_t src_dst;
> +    uint64_t mask;
> +
> +    if ( smccc_is_conv_64(fid) )
> +        mask = GENMASK_ULL(63, 0);
> +    else
> +        mask = GENMASK_ULL(31, 0);
> +
> +    src_dst = get_user_reg(regs, 1);
> +    if ( (src_dst >> 16) != get_vm_id(d) )
> +    {
> +        resp.a0 = FFA_ERROR;
> +        resp.a2 = FFA_RET_INVALID_PARAMETERS;
> +        goto out;
> +    }
> +
> +    arg.a1 = src_dst;
> +    arg.a2 = get_user_reg(regs, 2) & mask;
> +    arg.a3 = get_user_reg(regs, 3) & mask;
> +    arg.a4 = get_user_reg(regs, 4) & mask;
> +    arg.a5 = get_user_reg(regs, 5) & mask;
> +    arg.a6 = get_user_reg(regs, 6) & mask;
> +    arg.a7 = get_user_reg(regs, 7) & mask;
> +
> +    arm_smccc_1_2_smc(&arg, &resp);
> +    switch ( resp.a0 )
> +    {
> +    case FFA_ERROR:
> +    case FFA_SUCCESS_32:
> +    case FFA_SUCCESS_64:
> +    case FFA_MSG_SEND_DIRECT_RESP_32:
> +    case FFA_MSG_SEND_DIRECT_RESP_64:
> +        break;
> +    default:
> +        /* Bad fid, report back. */
> +        memset(&arg, 0, sizeof(arg));
> +        arg.a0 = FFA_ERROR;
> +        arg.a1 = src_dst;
> +        arg.a2 = FFA_RET_ABORTED;
> +    }
> +
> +out:
> +    set_regs(regs, resp.a0, resp.a1 & mask, resp.a2 & mask, resp.a3 & mask,
> +             resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, resp.a7 & mask);
> +}
> +
>   static bool ffa_handle_call(struct cpu_user_regs *regs)
>   {
>       uint32_t fid = get_user_reg(regs, 0);
> @@ -239,6 +340,10 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
>       case FFA_ID_GET:
>           set_regs_success(regs, get_vm_id(d), 0);
>           return true;
> +    case FFA_MSG_SEND_DIRECT_REQ_32:
> +    case FFA_MSG_SEND_DIRECT_REQ_64:
> +        handle_msg_send_direct_req(regs, fid);
> +        return true;
>   
>       default:
>           gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
> @@ -326,6 +431,13 @@ static bool ffa_probe(void)
>       printk(XENLOG_INFO "ARM FF-A Firmware version %u.%u\n",
>              major_vers, minor_vers);
>   
> +    /*
> +     * TODO save result of checked features and use that information to
> +     * accept or reject requests from guests.
> +     */

I am not entirely sure I understand this TODO. Does it mean a guest can 
currently use a request that is not supported by FFA?

> +    if ( !check_mandatory_feature(FFA_MSG_SEND_DIRECT_REQ_32) )
> +        return false;
> +
>       ffa_version = vers;
>   
>       return true;

Cheers,

-- 
Julien Grall
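A side note on the masking in handle_msg_send_direct_req above: per the SMC Calling Convention, bit 30 of the function ID selects the 64-bit convention, and for 32-bit calls only the low 32 bits of each argument register are meaningful, hence the GENMASK_ULL choice. A small illustrative sketch (the helper names are mine, not Xen's):

```c
#include <stdint.h>

/*
 * Per SMCCC, bit 30 of the function ID distinguishes the SMC64 from
 * the SMC32 calling convention; 32-bit calls only carry meaningful
 * data in the low 32 bits of each register, so those are masked.
 * Helper names are illustrative, not Xen's actual ones.
 */
#define SMCCC_64BIT_CONV  (1U << 30)

static inline int is_conv_64(uint32_t fid)
{
    return fid & SMCCC_64BIT_CONV;
}

static uint64_t mask_for_fid(uint32_t fid)
{
    return is_conv_64(fid) ? UINT64_MAX : UINT32_MAX;
}
```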


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:21:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 13:21:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520730.808639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwsh-0003oE-M5; Thu, 13 Apr 2023 13:20:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520730.808639; Thu, 13 Apr 2023 13:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwsh-0003o7-JG; Thu, 13 Apr 2023 13:20:55 +0000
Received: by outflank-mailman (input) for mailman id 520730;
 Thu, 13 Apr 2023 13:20:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=73wY=AE=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pmwsg-0003o1-Op
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 13:20:54 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on062c.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0347dfc0-d9fe-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 15:20:52 +0200 (CEST)
Received: from DU2PR04CA0188.eurprd04.prod.outlook.com (2603:10a6:10:28d::13)
 by VI1PR08MB5343.eurprd08.prod.outlook.com (2603:10a6:803:12d::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 13:20:49 +0000
Received: from DBAEUR03FT010.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:28d:cafe::6a) by DU2PR04CA0188.outlook.office365.com
 (2603:10a6:10:28d::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 13:20:49 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT010.mail.protection.outlook.com (100.127.142.78) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.30 via Frontend Transport; Thu, 13 Apr 2023 13:20:48 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Thu, 13 Apr 2023 13:20:48 +0000
Received: from 64e2bc829647.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 404CC043-21C9-4D8E-B634-03A4DFB41E67.1; 
 Thu, 13 Apr 2023 13:20:37 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 64e2bc829647.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 13:20:37 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by PAWPR08MB10210.eurprd08.prod.outlook.com (2603:10a6:102:367::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 13:20:36 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 13:20:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0347dfc0-d9fe-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UkSsLVJWM0/MxcpvXmJ6fWZcYRv/ReE4568KPGnB7Lg=;
 b=BSUvBrfaXzcfVlZrwgky4C7LeceN3x7jwh3z68/hcGXhsRsFhVxgyrQaLTcmU/pt6WshuuwkkvUs8sAh/G8x40MwgJndc20dGbm6kWJP8SA8B2DsXLy6jPEIr1qJmHNrCc5VlGI3YAHsQIlmVbSJl4MUzLmByOXw9pOwkx1j2r0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 755511fa65ce1425
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Jens Wiklander <jens.wiklander@linaro.org>, Xen-devel
	<xen-devel@lists.xenproject.org>, Marc Bonnici <Marc.Bonnici@arm.com>, Achin
 Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
Thread-Topic: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
Thread-Index: AQHZbdfL0AxH2Xu3t0igFa2jZHjA9K8pOEIAgAABeIA=
Date: Thu, 13 Apr 2023 13:20:36 +0000
Message-ID: <916BB708-3028-4AAB-BD6A-BCABAFBD7C45@arm.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-10-jens.wiklander@linaro.org>
 <2359695e-f8f8-cf51-27f9-5f0c776feca5@xen.org>
In-Reply-To: <2359695e-f8f8-cf51-27f9-5f0c776feca5@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|PAWPR08MB10210:EE_|DBAEUR03FT010:EE_|VI1PR08MB5343:EE_
X-MS-Office365-Filtering-Correlation-Id: cfc6ec32-3680-42b3-f96b-08db3c21e5c6
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <1FB4E54CB59B9C47AD3A69B6CDAC874F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB10210
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ebe6749b-fefd-469e-2c46-08db3c21de22
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 13:20:48.9510
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cfc6ec32-3680-42b3-f96b-08db3c21e5c6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5343

Hi Julien,

> On 13 Apr 2023, at 15:15, Julien Grall <julien@xen.org> wrote:
> 
> Hi,
> 
> On 13/04/2023 08:14, Jens Wiklander wrote:
>> Adds support for sending an FF-A direct request. Checks that the SP also
>> supports handling a 32-bit direct request. 64-bit direct requests are
>> not used by the mediator itself, so there is no need to check for that.
>> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
>> ---
>>  xen/arch/arm/tee/ffa.c | 112 +++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 112 insertions(+)
>> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
>> index f129879c5b81..f2cce955d981 100644
>> --- a/xen/arch/arm/tee/ffa.c
>> +++ b/xen/arch/arm/tee/ffa.c
>> @@ -181,6 +181,56 @@ static bool ffa_get_version(uint32_t *vers)
>>      return true;
>>  }
>>  +static int32_t get_ffa_ret_code(const struct arm_smccc_1_2_regs *resp)
>> +{
>> +    switch ( resp->a0 )
>> +    {
>> +    case FFA_ERROR:
>> +        if ( resp->a2 )
>> +            return resp->a2;
>> +        else
>> +            return FFA_RET_NOT_SUPPORTED;
>> +    case FFA_SUCCESS_32:
>> +    case FFA_SUCCESS_64:
>> +        return FFA_RET_OK;
>> +    default:
>> +        return FFA_RET_NOT_SUPPORTED;
>> +    }
>> +}
>> +
>> +static int32_t ffa_simple_call(uint32_t fid, register_t a1, register_t a2,
>> +                               register_t a3, register_t a4)
>> +{
>> +    const struct arm_smccc_1_2_regs arg = {
>> +        .a0 = fid,
>> +        .a1 = a1,
>> +        .a2 = a2,
>> +        .a3 = a3,
>> +        .a4 = a4,
>> +    };
>> +    struct arm_smccc_1_2_regs resp;
>> +
>> +    arm_smccc_1_2_smc(&arg, &resp);
>> +
>> +    return get_ffa_ret_code(&resp);
>> +}
>> +
>> +static int32_t ffa_features(uint32_t id)
>> +{
>> +    return ffa_simple_call(FFA_FEATURES, id, 0, 0, 0);
>> +}
>> +
>> +static bool check_mandatory_feature(uint32_t id)
>> +{
>> +    int32_t ret = ffa_features(id);
>> +
>> +    if (ret)
>> +        printk(XENLOG_ERR "ffa: mandatory feature id %#x missing: error %d\n",
>> +               id, ret);
>> +
>> +    return !ret;
>> +}
>> +
>>  static uint16_t get_vm_id(const struct domain *d)
>>  {
>>      /* +1 since 0 is reserved for the hypervisor in FF-A */
>> @@ -222,6 +272,57 @@ static void handle_version(struct cpu_user_regs *regs)
>>      set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
>>  }
>>  +static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
>> +{
>> +    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
>> +    struct arm_smccc_1_2_regs resp = { };
>> +    struct domain *d = current->domain;
>> +    uint32_t src_dst;
>> +    uint64_t mask;
>> +
>> +    if ( smccc_is_conv_64(fid) )
>> +        mask = GENMASK_ULL(63, 0);
>> +    else
>> +        mask = GENMASK_ULL(31, 0);
>> +
>> +    src_dst = get_user_reg(regs, 1);
>> +    if ( (src_dst >> 16) != get_vm_id(d) )
>> +    {
>> +        resp.a0 = FFA_ERROR;
>> +        resp.a2 = FFA_RET_INVALID_PARAMETERS;
>> +        goto out;
>> +    }
>> +
>> +    arg.a1 = src_dst;
>> +    arg.a2 = get_user_reg(regs, 2) & mask;
>> +    arg.a3 = get_user_reg(regs, 3) & mask;
>> +    arg.a4 = get_user_reg(regs, 4) & mask;
>> +    arg.a5 = get_user_reg(regs, 5) & mask;
>> +    arg.a6 = get_user_reg(regs, 6) & mask;
>> +    arg.a7 = get_user_reg(regs, 7) & mask;
>> +
>> +    arm_smccc_1_2_smc(&arg, &resp);
>> +    switch ( resp.a0 )
>> +    {
>> +    case FFA_ERROR:
>> +    case FFA_SUCCESS_32:
>> +    case FFA_SUCCESS_64:
>> +    case FFA_MSG_SEND_DIRECT_RESP_32:
>> +    case FFA_MSG_SEND_DIRECT_RESP_64:
>> +        break;
>> +    default:
>> +        /* Bad fid, report back. */
>> +        memset(&arg, 0, sizeof(arg));
>> +        arg.a0 = FFA_ERROR;
>> +        arg.a1 = src_dst;
>> +        arg.a2 = FFA_RET_ABORTED;
>> +    }
>> +
>> +out:
>> +    set_regs(regs, resp.a0, resp.a1 & mask, resp.a2 & mask, resp.a3 & mask,
>> +             resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, resp.a7 & mask);
>> +}
>> +
>>  static bool ffa_handle_call(struct cpu_user_regs *regs)
>>  {
>>      uint32_t fid = get_user_reg(regs, 0);
>> @@ -239,6 +340,10 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
>>      case FFA_ID_GET:
>>          set_regs_success(regs, get_vm_id(d), 0);
>>          return true;
>> +    case FFA_MSG_SEND_DIRECT_REQ_32:
>> +    case FFA_MSG_SEND_DIRECT_REQ_64:
>> +        handle_msg_send_direct_req(regs, fid);
>> +        return true;
>>        default:
>>          gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
>> @@ -326,6 +431,13 @@ static bool ffa_probe(void)
>>      printk(XENLOG_INFO "ARM FF-A Firmware version %u.%u\n",
>>             major_vers, minor_vers);
>>  +    /*
>> +     * TODO save result of checked features and use that information to
>> +     * accept or reject requests from guests.
>> +     */
> 
> I am not entirely sure I understand this TODO. Does it mean a guest can currently use a request that is not supported by FFA?

In fact it is a bit the opposite: in the following patch we check that all the features we could need are supported, but if a guest only uses a subset we might not need all of them.
The idea of this TODO is to save which features are supported and refuse guest requests depending on the features they need.

Cheers
Bertrand

> 
>> +    if ( !check_mandatory_feature(FFA_MSG_SEND_DIRECT_REQ_32) )
>> +        return false;
>> +
>>      ffa_version = vers;
>>        return true;
> 
> Cheers,
> 
> -- 
> Julien Grall
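The approach Bertrand describes, probing each feature once at init and then gating guest requests on the result, could look roughly like this (a hypothetical sketch: the enum, bitmap, and probe stub are illustrative stand-ins, not Xen's actual mediator code):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the TODO above: probe each FF-A feature once at init,
 * record the result in a bitmap, then gate individual guest requests
 * on it instead of failing the whole probe outright.
 */
enum ffa_feat {
    FEAT_MSG_SEND_DIRECT_REQ_32,
    FEAT_MEM_SHARE_32,
    FEAT_COUNT,
};

static uint32_t ffa_feat_bitmap;

/* Stand-in for an FFA_FEATURES probe; pretend MEM_SHARE is missing. */
static bool probe_feature(enum ffa_feat f)
{
    return f != FEAT_MEM_SHARE_32;
}

static void ffa_probe_features(void)
{
    for ( unsigned int f = 0; f < FEAT_COUNT; f++ )
        if ( probe_feature((enum ffa_feat)f) )
            ffa_feat_bitmap |= 1U << f;
}

/* Per-request check: refuse a guest call whose feature is unsupported. */
static bool ffa_feature_supported(enum ffa_feat f)
{
    return ffa_feat_bitmap & (1U << f);
}
```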




From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:24:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 13:24:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520733.808650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwvj-0004Mj-55; Thu, 13 Apr 2023 13:24:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520733.808650; Thu, 13 Apr 2023 13:24:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwvj-0004Mc-1N; Thu, 13 Apr 2023 13:24:03 +0000
Received: by outflank-mailman (input) for mailman id 520733;
 Thu, 13 Apr 2023 13:24:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmwvi-0004MS-4w
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 13:24:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwvh-000391-PC; Thu, 13 Apr 2023 13:24:01 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.20.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwvh-00078L-IP; Thu, 13 Apr 2023 13:24:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=cY4vS72LVwklu7Jew54sP8KEU4l53pjfjrYouwzFacw=; b=Vu9jlyHvIFPSNC4qm50WYG4kE3
	DzQ1lMiftua6vIZGbvhiIldunTwfhrrVoZeSRnV/IFm4YbZfZ5ajLb4pY4x46W5CytBMpsqsYH/Uc
	363Qr6Bmd4WFgDF+j7WGvP8xR1GOHTRXAjUqnHPdO/nlZVbltvxD5o8IH09LpU5cQE9o=;
Message-ID: <d354fee8-4d02-fe5f-1ff1-15f96efeb13f@xen.org>
Date: Thu, 13 Apr 2023 14:23:59 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [XEN PATCH v8 11/22] xen/arm: ffa: send guest events to Secure
 Partitions
Content-Language: en-US
To: Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com, Marc Bonnici <marc.bonnici@arm.com>,
 Achin Gupta <achin.gupta@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-12-jens.wiklander@linaro.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230413071424.3273490-12-jens.wiklander@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/04/2023 08:14, Jens Wiklander wrote:
> +static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
> +                                      uint8_t msg)
> +{
> +    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
> +    int32_t res;
> +
> +    if ( msg == FFA_MSG_SEND_VM_CREATED )
> +        exp_resp |= FFA_MSG_RESP_VM_CREATED;
> +    else if ( msg == FFA_MSG_SEND_VM_DESTROYED )
> +        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
> +    else
> +        return FFA_RET_INVALID_PARAMETERS;
> +
> +    do {
> +        const struct arm_smccc_1_2_regs arg = {
> +            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
> +            .a1 = sp_id,
> +            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
> +            .a5 = vm_id,
> +        };
> +        struct arm_smccc_1_2_regs resp;
> +
> +        arm_smccc_1_2_smc(&arg, &resp);
> +        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp )
> +        {
> +            /*
> +             * This is an invalid response, likely due to some error in the
> +             * implementation of the ABI.
> +             */
> +            return FFA_RET_INVALID_PARAMETERS;
> +        }
> +        res = resp.a3;
> +    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );

This loop seems potentially unbounded to me. Can you add a comment 
explaining why this is fine?

> +
> +    return res;
> +}
> +

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:25:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 13:25:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520737.808660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwww-0004wT-Dx; Thu, 13 Apr 2023 13:25:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520737.808660; Thu, 13 Apr 2023 13:25:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwww-0004wM-B2; Thu, 13 Apr 2023 13:25:18 +0000
Received: by outflank-mailman (input) for mailman id 520737;
 Thu, 13 Apr 2023 13:25:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HU9H=AE=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pmwwv-0004vu-H0
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 13:25:17 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0601.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9eb57539-d9fe-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 15:25:12 +0200 (CEST)
Received: from DB6PR0202CA0041.eurprd02.prod.outlook.com (2603:10a6:4:a5::27)
 by AM9PR08MB6035.eurprd08.prod.outlook.com (2603:10a6:20b:2d9::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 13:25:10 +0000
Received: from DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a5:cafe::ca) by DB6PR0202CA0041.outlook.office365.com
 (2603:10a6:4:a5::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.31 via Frontend
 Transport; Thu, 13 Apr 2023 13:25:08 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT032.mail.protection.outlook.com (100.127.142.185) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.31 via Frontend Transport; Thu, 13 Apr 2023 13:25:08 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Thu, 13 Apr 2023 13:25:08 +0000
Received: from 29f2334368a9.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E3E3F3F3-9300-48CD-8F68-80C65B1A15F0.1; 
 Thu, 13 Apr 2023 13:24:55 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 29f2334368a9.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 13:24:55 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB7691.eurprd08.prod.outlook.com (2603:10a6:10:3a5::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.36; Thu, 13 Apr
 2023 13:24:51 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 13:24:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9eb57539-d9fe-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XDlP7wrlTDsn66ctfF6VxZhhr/8wAWkk0cmlLOo5TMI=;
 b=6gjrkpCSXMvTAdYCkFA+OmOiKPfGrorpb2AiKmu1UC5tCI40Llx1sElxD7tCpzyHCvfEIJL14A6QIJNxC1SLOt4b37VIaDPOI+Zsq6dT1vcMSoN15W4JcQuADcnMh0/04DSRewl2S7y43/3CGQyYEczhocpEi62hoKCQvI1s9Z4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 43011ed1a40fe462
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IrjZY6LJfv6PRH38ESwI4vnco6mFZp0yT6SYgYTI3luB0mizJxar5ARwViZKp+qe5Zu1ZXC0Fh3wEkoU4WJagq4JxbHeVaoCoJZWOKSwvhdszctlMHx9jZhEsoSxu6kFylXKioulH0gvOvMAd/tm9KD/xINN9jM5pIAC2WtqhRziWhcFYaIBTF/EaQNrCqh6hu5lBmdKXDrERTWUI4MVSsCIRg7y4hRpV73+CIIjP2cQ9rDwB98x1VLO90eG1oOag6DMyHNX0l3JngcUlnb48NSCPyUZdWuE6sXTl1BRj7E9+3xdFVoDFxCPHxH4kguGipZyUavG172FtL+EqmAFEA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XDlP7wrlTDsn66ctfF6VxZhhr/8wAWkk0cmlLOo5TMI=;
 b=PhE3l549RZ1MKWMzN0ReFqWcZNk5upO0uXOTVVvrJXwlpIe0pzWY9GJUWcosS+NxgLG1vHZcAtvIDKUeyEOzNzThfTyBSTIxnjsumrUDW0uNWkTmZjq7hQ4JoikD45/xY2HmTlg+R1Zs6IIaj7IW+LBtrkFP1MZXYNbuKt3pCiqdyXeRodypeMs4Eu0CpR4KjhADMZxZoTwU3FgfaKrPdZA27oAkBFIsis+olZco4jWzGzqJrF4fCU5R8/kIveJGLhuRvyxxQ1VW/rL40BKwMw9NGZvn/RZZyWKyIGyR/wj1XHU8zbiwtarU3OWGpNp+RsVKTcFxHXb6sn14SiaoFQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XDlP7wrlTDsn66ctfF6VxZhhr/8wAWkk0cmlLOo5TMI=;
 b=6gjrkpCSXMvTAdYCkFA+OmOiKPfGrorpb2AiKmu1UC5tCI40Llx1sElxD7tCpzyHCvfEIJL14A6QIJNxC1SLOt4b37VIaDPOI+Zsq6dT1vcMSoN15W4JcQuADcnMh0/04DSRewl2S7y43/3CGQyYEczhocpEi62hoKCQvI1s9Z4=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Thread-Topic: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Thread-Index: AQHZbSQ9GJqCFHqaX0uCiT+EC9U7s68pNK+AgAAHnIA=
Date: Thu, 13 Apr 2023 13:24:46 +0000
Message-ID: <B3A82639-6D61-4DA2-B918-A92A421C75D3@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-3-luca.fancellu@arm.com>
 <72f38b2b-a391-fb7c-f8c0-cf3561470875@xen.org>
In-Reply-To: <72f38b2b-a391-fb7c-f8c0-cf3561470875@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DU0PR08MB7691:EE_|DBAEUR03FT032:EE_|AM9PR08MB6035:EE_
X-MS-Office365-Filtering-Correlation-Id: e61db3f3-91da-47a1-5a8f-08db3c228061
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 QBbHbWAwViTAcr+WnJdHz4bWznUi8w8ZvcPYnJ+e8tjC6kKpQW00mqdvNnyopJlgssL5u7QXFXO8lex2an7iczrFSPru8lc184rgffOOcZCxmTQv3ZyLEtamUXFQ7SRX9gNUrbfKgRG0UQ9MScwVQ9/BnxdpbPrLy9Q8uCh98H8AJxLYC+BeUlWKfDJl0P0+dbWAHw1gkITCqIQYjFwxBftjuZSJWyCJLUDDEF8S4JFbyGO9UBnncpxlQhXtT97PfxuE90EJd6U554JJpqSiO052uujNWBv3gvXgSanw8xeuQUNm4hVT1e8jnGDIQBP/bdQSvq8i3iy1HZysYpsExC4afC+8t8V5FzzknH2nf3mZqW2SKu9JxkTFr2JBKytgDIVsi9z4TiXDiJpoKnGC9zh5GVxqszeLjdjG7KxUePXZ3PFKUj6fCQfnTx/FTi9nsRYebi3r1sAkhuIybfXKAaRMZ4Jhbf1l7bnCs/lBtSEruxJakhQYOWKXnnWa1C/otQLn7VQJ7dSowMm+VlUylmy8lSGUw2jAmViO+OzddWCc5r2W5wm9830aIDUNz5Hn3H47YOU01Rsn8dQHUneEE8gmBEI4z7UsJ3nGAgqqLmgSNBNPivuU4pEsZ7WQ1KQEI4sATu1bgildFScHpYuQ+NiRjU6u263VIx7qN/be1R957mVqLR9HPcB4hrkzq8+zKv8RsWv6C5zhVQ/nmahO2g==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(136003)(396003)(376002)(366004)(451199021)(71200400001)(6666004)(478600001)(6506007)(186003)(6512007)(26005)(64756008)(6486002)(66476007)(2906002)(91956017)(66946007)(66556008)(54906003)(76116006)(41300700001)(5660300002)(8936002)(66446008)(316002)(6916009)(8676002)(4326008)(122000001)(38100700002)(36756003)(83380400001)(38070700005)(33656002)(2616005)(86362001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <A3E62C8B6D6D41408E0431E1DFD8DBEE@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB7691
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8c0c4329-ba9a-437b-7502-08db3c2273ad
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MOw20nxea/NPqUvWozZbGcqpdj+OmrS9IhIiaKJf7RUocNNE538mfTtmDQv6nhfUPUDyFXVJR6uO0EGNwyldZcCFWUQT3IfbW4NJMTSdEseBM6gixwL/sf2rkFWMxgXawHz4NSP8GbxZvqsPUkLAKDeU06KbbD/LUxEq0wGZ4SI+U/wUTIx3jC68uH8IU7Yu5TzR/Q+3vtoMlEbSs14yLmixtR5iAfPNL88xXFFlfH1yRWNeziRYwD8akQMYfNEnzOW2YbKHAR3xsQ6ez14tHVN8LYtDzKq2hH2bvkdiTn4fpb1gs1hjbTa8E+4Ly3xO/mY+JZAqHMG7nHy5hKBn2lNvLrIWQCZbE0GAiEehi5vCphobzNEqaSZCJHqmkYuoeGsOETKFmzKesJ+eId25Sz0W3kwRMpCuDjaAdP63BaVMDTDHPBZ1l1VCZbFgOmb6+ZsWyagiRShsuRluWNBDXSaxVwe8GDSeK4pfjNWwyFpa61R3ggs+x+I5Ic2Jok+hKwbIv4gx11MuyZesRjGoFeNcgvZmM+tEdjW3qYS1ShaIzu/+SB2IQjaCjB9StrdXhyb+vS4nfvNVhkQV7r+8SClnrLEa1MB4nUSHAcjh2K/glHADEvhkt6owY2oAvAduL2YB5q5JctjFKGJTbhO3p/OaV0d5tn+IZdQtXshr+0GqFzUZ7Z4fhj0KxRtCyjm6fPKQcgZ76zpTg4XEXVj6YQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(39860400002)(376002)(346002)(451199021)(36840700001)(40470700004)(46966006)(54906003)(316002)(6512007)(6666004)(6506007)(26005)(36860700001)(40480700001)(478600001)(81166007)(41300700001)(356005)(82740400003)(83380400001)(47076005)(2616005)(6486002)(186003)(336012)(70206006)(70586007)(4326008)(5660300002)(6862004)(8936002)(8676002)(2906002)(40460700003)(33656002)(86362001)(82310400005)(36756003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 13:25:08.2225
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e61db3f3-91da-47a1-5a8f-08db3c228061
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6035

Hi Julien,


>>  @@ -594,6 +597,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>      unsigned int max_vcpus;
>>      unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>      unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>> +    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>>        if ( (config->flags & ~flags_optional) != flags_required )
>>      {
>> @@ -602,6 +606,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>          return -EINVAL;
>>      }
>>  +    /* Check feature flags */
>> +    if ( sve_vl_bits > 0 )
>> +    {
>> +        unsigned int zcr_max_bits = get_sys_vl_len();
>> +
>> +        if ( !zcr_max_bits )
>> +        {
>> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>> +            return -EINVAL;
>> +        }
>> +
>> +        if ( sve_vl_bits > zcr_max_bits )
>> +        {
>> +            dprintk(XENLOG_INFO,
>> +                    "Requested SVE vector length (%u) > supported length (%u)\n",
>> +                    sve_vl_bits, zcr_max_bits);
>> +            return -EINVAL;
>> +        }
> 
> Is SVE supported for 32-bit guests? If not, then you should have a check here to prevent the creation of the domain if sve_vl_bits is set.

No, SVE is not supported for 32-bit guests. Here I think we will get “SVE is unsupported on this machine” because get_sys_vl_len() will return 0.

I can however put everything inside #ifdef CONFIG_ARM64_SVE or CONFIG_ARM_64 if you prefer.

>> 
>> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
>> index e776ee704b7d..78cc2da3d4e5 100644
>> --- a/xen/arch/arm/include/asm/domain.h
>> +++ b/xen/arch/arm/include/asm/domain.h
>> @@ -31,6 +31,8 @@ enum domain_type {
>>    #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>>  +#define is_sve_domain(d) ((d)->arch.sve_vl > 0)
>> +
>>  /*
>>   * Is the domain using the host memory layout?
>>   *
>> @@ -67,6 +69,9 @@ struct arch_domain
>>      enum domain_type type;
>>  #endif
>>  +    /* max SVE encoded vector length */
>> +    uint8_t sve_vl;
>> +
> Can we move this somewhere else to avoid adding extra padding? Also shouldn't this be protected with #ifdef CONFIG_ARM_64 to make clear this is not supported on Xen 32-bit?

Yes, I'll move it and protect it with CONFIG_ARM_64. Is it OK for you if I move it after:

/* Monitor options */
struct {
    uint8_t privileged_call_enabled : 1;
} monitor;


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:27:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 13:27:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520743.808675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwyx-0005bq-1R; Thu, 13 Apr 2023 13:27:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520743.808675; Thu, 13 Apr 2023 13:27:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwyw-0005bj-V1; Thu, 13 Apr 2023 13:27:22 +0000
Received: by outflank-mailman (input) for mailman id 520743;
 Thu, 13 Apr 2023 13:27:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pmwyv-0005bd-In
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 13:27:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwyv-0003DS-7A; Thu, 13 Apr 2023 13:27:21 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.20.117]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pmwyu-0007CE-UR; Thu, 13 Apr 2023 13:27:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=9/n7hnNMcYoNgoCz7dxwE/VXQp/HTEyugpR8WeQ2158=; b=jHv18kU29a4bFg7THOxcDLqP/b
	ElGSFhdnhFShbiJBdq75Ik7vpT7BPkukQ5JbqeDOrbir/CmVQSnNEojI9rCX63i5iiDZeJu12YsTO
	7ZH2nwkSd4zObjnuqkXB/gqnIOJNCko/HBONXD0wasM7BUX0eXAodkep0qcB3KijcQls=;
Message-ID: <2dba6372-330d-a068-241f-59e19b837150@xen.org>
Date: Thu, 13 Apr 2023 14:27:18 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Jens Wiklander <jens.wiklander@linaro.org>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Marc Bonnici <Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-10-jens.wiklander@linaro.org>
 <2359695e-f8f8-cf51-27f9-5f0c776feca5@xen.org>
 <916BB708-3028-4AAB-BD6A-BCABAFBD7C45@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <916BB708-3028-4AAB-BD6A-BCABAFBD7C45@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 13/04/2023 14:20, Bertrand Marquis wrote:
> Hi Julien,
> 
>> On 13 Apr 2023, at 15:15, Julien Grall <julien@xen.org> wrote:
>>
>> Hi,
>>
>> On 13/04/2023 08:14, Jens Wiklander wrote:
>>> Adds support for sending a FF-A direct request. Checks that the SP also
>>> supports handling a 32-bit direct request. 64-bit direct requests are
>>> not used by the mediator itself, so there is no need to check for that.
>>> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
>>> ---
>>>   xen/arch/arm/tee/ffa.c | 112 +++++++++++++++++++++++++++++++++++++++++
>>>   1 file changed, 112 insertions(+)
>>> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
>>> index f129879c5b81..f2cce955d981 100644
>>> --- a/xen/arch/arm/tee/ffa.c
>>> +++ b/xen/arch/arm/tee/ffa.c
>>> @@ -181,6 +181,56 @@ static bool ffa_get_version(uint32_t *vers)
>>>       return true;
>>>   }
>>>   +static int32_t get_ffa_ret_code(const struct arm_smccc_1_2_regs *resp)
>>> +{
>>> +    switch ( resp->a0 )
>>> +    {
>>> +    case FFA_ERROR:
>>> +        if ( resp->a2 )
>>> +            return resp->a2;
>>> +        else
>>> +            return FFA_RET_NOT_SUPPORTED;
>>> +    case FFA_SUCCESS_32:
>>> +    case FFA_SUCCESS_64:
>>> +        return FFA_RET_OK;
>>> +    default:
>>> +        return FFA_RET_NOT_SUPPORTED;
>>> +    }
>>> +}
>>> +
>>> +static int32_t ffa_simple_call(uint32_t fid, register_t a1, register_t a2,
>>> +                               register_t a3, register_t a4)
>>> +{
>>> +    const struct arm_smccc_1_2_regs arg = {
>>> +        .a0 = fid,
>>> +        .a1 = a1,
>>> +        .a2 = a2,
>>> +        .a3 = a3,
>>> +        .a4 = a4,
>>> +    };
>>> +    struct arm_smccc_1_2_regs resp;
>>> +
>>> +    arm_smccc_1_2_smc(&arg, &resp);
>>> +
>>> +    return get_ffa_ret_code(&resp);
>>> +}
>>> +
>>> +static int32_t ffa_features(uint32_t id)
>>> +{
>>> +    return ffa_simple_call(FFA_FEATURES, id, 0, 0, 0);
>>> +}
>>> +
>>> +static bool check_mandatory_feature(uint32_t id)
>>> +{
>>> +    int32_t ret = ffa_features(id);
>>> +
>>> +    if (ret)
>>> +        printk(XENLOG_ERR "ffa: mandatory feature id %#x missing: error %d\n",
>>> +               id, ret);
>>> +
>>> +    return !ret;
>>> +}
>>> +
>>>   static uint16_t get_vm_id(const struct domain *d)
>>>   {
>>>       /* +1 since 0 is reserved for the hypervisor in FF-A */
>>> @@ -222,6 +272,57 @@ static void handle_version(struct cpu_user_regs *regs)
>>>       set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
>>>   }
>>>   +static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
>>> +{
>>> +    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
>>> +    struct arm_smccc_1_2_regs resp = { };
>>> +    struct domain *d = current->domain;
>>> +    uint32_t src_dst;
>>> +    uint64_t mask;
>>> +
>>> +    if ( smccc_is_conv_64(fid) )
>>> +        mask = GENMASK_ULL(63, 0);
>>> +    else
>>> +        mask = GENMASK_ULL(31, 0);
>>> +
>>> +    src_dst = get_user_reg(regs, 1);
>>> +    if ( (src_dst >> 16) != get_vm_id(d) )
>>> +    {
>>> +        resp.a0 = FFA_ERROR;
>>> +        resp.a2 = FFA_RET_INVALID_PARAMETERS;
>>> +        goto out;
>>> +    }
>>> +
>>> +    arg.a1 = src_dst;
>>> +    arg.a2 = get_user_reg(regs, 2) & mask;
>>> +    arg.a3 = get_user_reg(regs, 3) & mask;
>>> +    arg.a4 = get_user_reg(regs, 4) & mask;
>>> +    arg.a5 = get_user_reg(regs, 5) & mask;
>>> +    arg.a6 = get_user_reg(regs, 6) & mask;
>>> +    arg.a7 = get_user_reg(regs, 7) & mask;
>>> +
>>> +    arm_smccc_1_2_smc(&arg, &resp);
>>> +    switch ( resp.a0 )
>>> +    {
>>> +    case FFA_ERROR:
>>> +    case FFA_SUCCESS_32:
>>> +    case FFA_SUCCESS_64:
>>> +    case FFA_MSG_SEND_DIRECT_RESP_32:
>>> +    case FFA_MSG_SEND_DIRECT_RESP_64:
>>> +        break;
>>> +    default:
>>> +        /* Bad fid, report back. */
>>> +        memset(&arg, 0, sizeof(arg));
>>> +        arg.a0 = FFA_ERROR;
>>> +        arg.a1 = src_dst;
>>> +        arg.a2 = FFA_RET_ABORTED;
>>> +    }
>>> +
>>> +out:
>>> +    set_regs(regs, resp.a0, resp.a1 & mask, resp.a2 & mask, resp.a3 & mask,
>>> +             resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, resp.a7 & mask);
>>> +}
>>> +
>>>   static bool ffa_handle_call(struct cpu_user_regs *regs)
>>>   {
>>>       uint32_t fid = get_user_reg(regs, 0);
>>> @@ -239,6 +340,10 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
>>>       case FFA_ID_GET:
>>>           set_regs_success(regs, get_vm_id(d), 0);
>>>           return true;
>>> +    case FFA_MSG_SEND_DIRECT_REQ_32:
>>> +    case FFA_MSG_SEND_DIRECT_REQ_64:
>>> +        handle_msg_send_direct_req(regs, fid);
>>> +        return true;
>>>         default:
>>>           gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
>>> @@ -326,6 +431,13 @@ static bool ffa_probe(void)
>>>       printk(XENLOG_INFO "ARM FF-A Firmware version %u.%u\n",
>>>              major_vers, minor_vers);
>>>   +    /*
>>> +     * TODO save result of checked features and use that information to
>>> +     * accept or reject requests from guests.
>>> +     */
>>
>> I am not entirely sure I understand this TODO. Does it mean a guest can currently use a request that is not supported by FF-A?
> 
> In fact it is rather the opposite: in the following patch we check that all the features we could need are supported, but if a guest only uses a subset we might not need all of them.
> The idea of this TODO is to save which features are supported and refuse guest requests that depend on features that are missing.

Thanks. I would suggest the following comment:

/*
  * At the moment domains must support the same features used by Xen.
  * TODO: Rework the code to allow a domain to use a subset of the
  * supported features.
  */

Note that I am using "domains" rather than "guests" because the latter 
doesn't include dom0.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:27:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 13:27:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520744.808686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwz1-0005ro-9Y; Thu, 13 Apr 2023 13:27:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520744.808686; Thu, 13 Apr 2023 13:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmwz1-0005rh-62; Thu, 13 Apr 2023 13:27:27 +0000
Received: by outflank-mailman (input) for mailman id 520744;
 Thu, 13 Apr 2023 13:27:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9nQ=AE=citrix.com=prvs=46097603d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pmwyy-0005qb-Hb
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 13:27:25 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e97e78c4-d9fe-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 15:27:21 +0200 (CEST)
Received: from mail-mw2nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 13 Apr 2023 09:27:08 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5824.namprd03.prod.outlook.com (2603:10b6:a03:2d0::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 13:27:06 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6277.043; Thu, 13 Apr 2023
 13:27:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e97e78c4-d9fe-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681392441;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=r0MfYP8Wrn/gvQc3A35Iu17mJutW0KFbhur4W7xAmNM=;
  b=BprTPBdu8bg/Xyz87zIJqREbpdi0xpjC3ZyJM/wurshdfyHyT/s03OcO
   hKrhe7gp7IVVO0f82aIjljLiA1qlFY341w5SVoyJotAwNotgj8Ta25lRH
   2/l1ltlfqACm0NL4CewZo31Fs+OSIPhkt+O0KbnKynIG+JIn//cfcI++y
   g=;
X-IronPort-RemoteIP: 104.47.55.108
X-IronPort-MID: 104168221
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
Message-ID: <d7f18393-262b-f2b1-9af3-a371dae75994@citrix.com>
Date: Thu, 13 Apr 2023 14:26:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN PATCH v8 02/22] xen/arm: tee: add a primitive FF-A mediator
Content-Language: en-GB
To: Julien Grall <julien@xen.org>, Jens Wiklander
 <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com, Marc Bonnici <marc.bonnici@arm.com>,
 Achin Gupta <achin.gupta@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-3-jens.wiklander@linaro.org>
 <ad1d5ebd-38e5-bab9-24ac-6facc8ccb95c@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <ad1d5ebd-38e5-bab9-24ac-6facc8ccb95c@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 13/04/2023 1:26 pm, Julien Grall wrote:
>> +static int ffa_domain_init(struct domain *d)
>> +{
>> +    struct ffa_ctx *ctx;
>> +
>> +    if ( !ffa_version )
>> +        return -ENODEV;
>> +
>> +    ctx = xzalloc(struct ffa_ctx);
>> +    if ( !ctx )
>> +        return -ENOMEM;
>> +
>> +    d->arch.tee = ctx;
>> +
>> +    return 0;
>> +}
>> +
>> +/* This function is supposed to undo what ffa_domain_init() has done */
>
> I think there is a problem in the TEE framework. The callback
> .relinquish_resources() will not be called if domain_create() failed.
> So this will result in a memory leak.
>
> We also can't call .relinquish_resources() on early domain creation
> failure because relinquishing resources can take time and therefore
> needs to be preemptible.
>
> So I think we need to introduce a new callback domain_free() that will
> be called from arch_domain_destroy(). Is this something you can look at?


Cleanup of an early domain creation failure, however you do it, is at
most "the same amount of time again".  It cannot (absent development
errors) take the indefinite periods of time that a full
domain_destroy() can.

The error path in domain_create() explicitly does call domain_teardown(),
so we can (eventually) purge these duplicate cleanup paths.  Far too many
easy mistakes come from having split cleanup paths, and we have had to
issue XSAs in the past to address some of them.  (Hence the effort to
change things specifically so that such errors cannot be introduced in
the first place.)


Right now, it is specifically awkward to do this nicely because
domain_teardown() doesn't call into a suitable arch hook.

IMO the best option here is to extend domain_teardown() with an
arch_domain_teardown() state/hook, and wire the TEE cleanup path into
this too.

Anything else is explicitly adding to technical debt that I (or someone
else) will have to revert further down the line.

If you want, I am happy to prototype the arch_domain_teardown() bit of
the fix, but I will have to defer wiring in the TEE part to someone
capable of testing it.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:28:04 2023
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 04/12] xen/arm: add SVE exception class handling
Date: Thu, 13 Apr 2023 13:27:50 +0000
Message-ID: <E4470524-C5C5-4168-A9C2-A5A594D4C7CF@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-5-luca.fancellu@arm.com>
 <f405fa12-d99b-c07f-0bdd-c49f64f3ffb4@xen.org>
In-Reply-To: <f405fa12-d99b-c07f-0bdd-c49f64f3ffb4@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
x-mailer: Apple Mail (2.3731.500.231)
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -2160,6 +2160,13 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>>          perfc_incr(trap_sysreg);
>>          do_sysreg(regs, hsr);
>>          break;
>> +    case HSR_EC_SVE:
>> +        GUEST_BUG_ON(regs_mode_is_32bit(regs));
>> +        gprintk(XENLOG_WARNING,
>> +                "Domain id %d tried to use SVE while not allowed\n",
>> +                current->domain->domain_id);
> 
> gprintk() will already print the domain/vCPU for you. Also, if you want to print a domain ID, then you should use ("%pd", d) rather than ("%d", d->domain_id).

Ok I’ll change it to:

gprintk(XENLOG_WARNING, "Domain tried to use SVE while not allowed\n");

> 
>> +        inject_undef_exception(regs, hsr);
>> +        break;
>>  #endif
>>      case HSR_EC_INSTR_ABORT_LOWER_EL:
>> @@ -2189,6 +2196,11 @@ void do_trap_hyp_sync(struct cpu_user_regs *regs)
>>      case HSR_EC_BRK:
>>          do_trap_brk(regs, hsr);
>>          break;
>> +    case HSR_EC_SVE:
>> +        /* An SVE exception is a bug somewhere in hypervisor code */
>> +        printk("SVE trap at EL2.\n");
>> +        do_unexpected_trap("Hypervisor", regs);
> 
> I think it would be better if you pass "SVE trap at EL2" as a string rather than adding your own printk above.

Ok I’ll remove the printk and do just do_unexpected_trap("SVE trap at EL2", regs);

> 
>> +        break;
>>  #endif
>>      case HSR_EC_DATA_ABORT_CURR_EL:
>>      case HSR_EC_INSTR_ABORT_CURR_EL:
> 
> Cheers,
> 
> -- 
> Julien Grall
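To make the outcome of the review concrete, here is a self-contained sketch
of the shape the two handlers end up with.  Every definition below is an
illustrative stub, not Xen's real gprintk()/do_unexpected_trap(): the guest
case logs without a domain ID (gprintk() already prefixes the current
domain/vCPU) and injects #UNDEF, while the EL2 case passes its message
straight to the unexpected-trap path:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stubs only; none of these are Xen's definitions. */
static char log_buf[128];
static const char *last_trap_reason;

#define XENLOG_WARNING ""
/* Fake gprintk(): Xen's real one prefixes the current domain/vCPU,
 * modelled here as a fixed "d0v0 " prefix. */
#define gprintk(lvl, fmt) snprintf(log_buf, sizeof(log_buf), "d0v0 " fmt)

struct cpu_user_regs { unsigned long pc; };

static void inject_undef_exception(struct cpu_user_regs *regs,
                                   unsigned long hsr)
{
    (void)regs; (void)hsr;   /* would deliver #UNDEF to the guest */
}

static void do_unexpected_trap(const char *msg, struct cpu_user_regs *regs)
{
    (void)regs;
    last_trap_reason = msg;  /* would dump state and panic in Xen */
}

/* Guest trap: SVE used by a domain that was not given SVE. */
static void handle_guest_sve_trap(struct cpu_user_regs *regs,
                                  unsigned long hsr)
{
    gprintk(XENLOG_WARNING, "Domain tried to use SVE while not allowed\n");
    inject_undef_exception(regs, hsr);
}

/* Hypervisor trap: an SVE exception at EL2 is a Xen bug. */
static void handle_hyp_sve_trap(struct cpu_user_regs *regs)
{
    do_unexpected_trap("SVE trap at EL2", regs);
}
```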


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:30:21 2023
Message-ID: <e8075849-8bd5-7fd4-efaa-81e48c867635@xen.org>
Date: Thu, 13 Apr 2023 14:30:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.1
Subject: Re: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-3-luca.fancellu@arm.com>
 <72f38b2b-a391-fb7c-f8c0-cf3561470875@xen.org>
 <B3A82639-6D61-4DA2-B918-A92A421C75D3@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <B3A82639-6D61-4DA2-B918-A92A421C75D3@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 13/04/2023 14:24, Luca Fancellu wrote:
> Hi Julien,

Hi Luca,

>>>   @@ -594,6 +597,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>       unsigned int max_vcpus;
>>>       unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>>       unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>>> +    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>>>         if ( (config->flags & ~flags_optional) != flags_required )
>>>       {
>>> @@ -602,6 +606,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>           return -EINVAL;
>>>       }
>>>   +    /* Check feature flags */
>>> +    if ( sve_vl_bits > 0 )
>>> +    {
>>> +        unsigned int zcr_max_bits = get_sys_vl_len();
>>> +
>>> +        if ( !zcr_max_bits )
>>> +        {
>>> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>>> +            return -EINVAL;
>>> +        }
>>> +
>>> +        if ( sve_vl_bits > zcr_max_bits )
>>> +        {
>>> +            dprintk(XENLOG_INFO,
>>> +                    "Requested SVE vector length (%u) > supported length (%u)\n",
>>> +                    sve_vl_bits, zcr_max_bits);
>>> +            return -EINVAL;
>>> +        }
>>
>> Is SVE supported for 32-bit guests? If not, then you should add a check here to prevent the creation of the domain if sve_vl_bits is set.
> 
> No, SVE is not supported for 32-bit guests; here I think we will get “SVE is unsupported on this machine” because get_sys_vl_len() will return 0.

From my understanding, get_sys_vl_len() will return the length supported
by the host. So if you run a 32-bit guest on top of a 64-bit host, then I
believe get_sys_vl_len() will be non-zero.
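For reference, the validation quoted above can be sketched as follows.  The
VL/128 encoding matches the series' description of sve_vl, but the helper
bodies and the host maximum used here are illustrative assumptions, not the
real implementation:

```c
#include <assert.h>

/*
 * Hedged sketch of arch_sanitise_domain_config()'s SVE checks.
 * sve_vl encodes the vector length as VL/128 (SVE vector lengths
 * are multiples of 128 bits); 0 means SVE was not requested.
 */

static unsigned int sve_decode_vl(unsigned int sve_vl)
{
    return sve_vl * 128;
}

/* Stand-in for get_sys_vl_len(): the maximum vector length the
 * host supports in bits, or 0 when the host has no SVE. */
static unsigned int host_max_vl_bits = 256;

static int sanitise_sve_vl(unsigned int sve_vl)
{
    unsigned int sve_vl_bits = sve_decode_vl(sve_vl);

    if ( sve_vl_bits == 0 )
        return 0;               /* SVE not requested: nothing to check */

    if ( host_max_vl_bits == 0 )
        return -22;             /* -EINVAL: SVE unsupported on this host */

    if ( sve_vl_bits > host_max_vl_bits )
        return -22;             /* -EINVAL: longer than the host supports */

    return 0;
}
```

The point under discussion is that the host-support check alone does not
reject a 32-bit guest on a 64-bit host, so an explicit bitness check would
be needed as well.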

>> Can we move this somewhere else to avoid adding extra padding? Also shouldn't this be protected with #ifdef CONFIG_ARM_64 to make clear this is not supported on Xen 32-bit?
> 
> Yes, I’ll move it and protect it with CONFIG_ARM_64. Is it ok for you if I move it after:
> 
> /* Monitor options */
> struct {
>      uint8_t privileged_call_enabled : 1;
> } monitor;

Please check the padding with "pahole". If possible, it would be better
to re-use an existing hole.
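As an illustration of the padding concern (the structs below are invented
for the demonstration, not Xen's): on an LP64 target, appending a small
field after a pointer-sized one creates holes, whereas placing it into an
existing hole does not, which is exactly what pahole reports:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical structs showing how member placement changes padding. */

struct padded {          /* new u16 appended after an 8-byte field */
    uint8_t  small;      /* offset 0, then a 7-byte hole */
    uint64_t big;        /* offset 8 */
    uint16_t sve_vl;     /* offset 16, then 6 bytes of tail padding */
};                       /* sizeof == 24 on LP64 */

struct packed_better {   /* u16 placed next to the other small field */
    uint64_t big;        /* offset 0 */
    uint16_t sve_vl;     /* offset 8 */
    uint8_t  small;      /* offset 10, then 5 bytes of tail padding */
};                       /* sizeof == 16 on LP64 */
```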

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:33:09 2023
X-Inumbo-ID: b719632f-d9ff-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681392782; x=1683984782;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=4K1UoNLB5Wh1WQWKgQkyNekfW4Vh91Orm3gyoXK+OJI=;
        b=hTXsYSWz5Dn9kvDBeZ0XboBhjt7uqGu2jk483osiWXjOtVRqk/5dT+U1eS71bIcjR0
         UMlJQ8B1GlyFrqpDr4NUdHNiBV65MYiwnS9rO941CfebCd+NZDyWSM+WOg/nC41xynmW
         +fpE/uitj2GPKKMpUYW8Q1ZIOgHb9fTOK4XdZsYLHBzx5+DBwCOKx5ZxMQEbqE+b+rwU
         MFFBOjg38BdJ8a+XiTIg1xp9euR1wXMFUGc1hWsjNOGTiN7Xg7eIMe1lEkhLCW2sP25E
         E1AnGys5R7T/C1n5V+WXMNhe1zk5yd7KSezlWrEHGl+e1jV9MAqllKNIzTIyP1XF6EFa
         6Ajg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681392782; x=1683984782;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=4K1UoNLB5Wh1WQWKgQkyNekfW4Vh91Orm3gyoXK+OJI=;
        b=Hj40aOlhZ6I/qZSNDVjGYZrTdA694MwhaGH9zfUjwH82M8W+k69QJk5K2aNTBoeV8J
         RNAoeyv6zTWhEKNCVoHuJDrOHvWyq5je9DBuDygrjkuY0OSCKNRqxErhZ/cSmcCTppTn
         vWRYDKwhptCx9zp23I7bp0OdkZuf4Jh4/keZpazmq6BMAx1MqQBTGyig5Xfjuaq6Ynm/
         UI1KeTSIaxg7M1wOZABJgJ+u9uFoZ3Bkzt27pDlIOiDAa3AL4/1yIuGkyIkV3W6YtX6K
         sQhJ/3dIfVL2Wbha5hY4PVc8wtwZRnNQb3nYy2Hubjr1H0vmQ9Mndh6S6XgdizNUpIx9
         ps0Q==
X-Gm-Message-State: AAQBX9cuAXA2cLq40p5Q4hGFxxLD10RQDkeZcmfOyKkrnhV7zYyXgTTz
	W49xAzKk2tW/AkwahpRVaTjqj8KWDcvMlKpoqiUi0Q==
X-Google-Smtp-Source: AKy350Yj2W0pP0IaXaltscmIrZBLEU5xTRF8xqKcfSByIv/x+OFAODxCo8j1+sHOjWAx5c4VWIs382d/bMIGCD9Dijs=
X-Received: by 2002:a1c:f302:0:b0:3df:97de:8bab with SMTP id
 q2-20020a1cf302000000b003df97de8babmr574193wmq.4.1681392782610; Thu, 13 Apr
 2023 06:33:02 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-5-jens.wiklander@linaro.org> <AS8PR08MB7991150DD65CAB61C276A21892989@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB7991150DD65CAB61C276A21892989@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Thu, 13 Apr 2023 15:32:51 +0200
Message-ID: <CAHUa44HWND3BHE8X2_iYcEyXH6mOcv-WwCFXizxZijD8g3y62Q@mail.gmail.com>
Subject: Re: [XEN PATCH v8 04/22] xen/arm: ffa: add remaining SMC function IDs
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>

Hi Henry,

On Thu, Apr 13, 2023 at 12:18 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > Subject: [XEN PATCH v8 04/22] xen/arm: ffa: add remaining SMC function =
IDs
> >
> > Adds the remaining SMC function IDs from FF-A 1.1 specification.
>
> Nit: I would suggest that in the commit message you mention the documentation
> number you used. During my review of this patch I am using
> DEN0077A version 1.1 REL0.

OK, I'll add that.

>
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> I also confirm that the macro values introduced by this patch are consistent with
> the spec in the commit message, hence:
>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Thanks,
Jens

>
> Kind regards,
> Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:40:32 2023
Message-ID: <9211242e-102c-4468-c35b-c88f8e31b274@amd.com>
Date: Thu, 13 Apr 2023 15:40:06 +0200
Subject: Re: [XEN][PATCH v5 06/17] xen/device-tree: Add
 device_tree_find_node_by_path() to find nodes in device tree
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-7-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-7-vikram.garhwal@amd.com>

Hi Vikram,

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Add device_tree_find_node_by_path() to find a matching node with path for a
> dt_device_node.
> 
> Reason behind this function:
>     Each time overlay nodes are added using .dtbo, a new fdt(memcpy of
>     device_tree_flattened) is created and updated with overlay nodes. This
>     updated fdt is further unflattened to a dt_host_new. Next, we need to find
>     the overlay nodes in dt_host_new, find the overlay node's parent in dt_host
>     and add the nodes as child under their parent in the dt_host. Thus we need
>     this function to search for node in different unflattened device trees.
You do not mention making dt_find_node_by_path() static inline.

> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/common/device_tree.c      |  5 +++--
>  xen/include/xen/device_tree.h | 17 +++++++++++++++--
>  2 files changed, 18 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index bf847b2584..507b4ac5b6 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -358,11 +358,12 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
>      return np;
>  }
> 
> -struct dt_device_node *dt_find_node_by_path(const char *path)
> +struct dt_device_node *device_tree_find_node_by_path(struct dt_device_node *dt,
> +                                                     const char *path)
>  {
>      struct dt_device_node *np;
> 
> -    dt_for_each_device_node(dt_host, np)
> +    dt_for_each_device_node(dt, np)
>          if ( np->full_name && (dt_node_cmp(np->full_name, path) == 0) )
>              break;
> 
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index 58ac12abe3..998f972ebc 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -534,13 +534,26 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
>  struct dt_device_node *dt_find_node_by_alias(const char *alias);
> 
>  /**
> - * dt_find_node_by_path - Find a node matching a full DT path
> + * device_tree_find_node_by_path - Generic function to find a node matching the
> + * full DT path for any given unflatten device tree
> + * @dt_node: The device tree to search
This should be @dt to match the parameter. Also, shouldn't the description say:
"the node to start searching from"
or
"device tree root node"

FWICS, you expect to pass a root node as dt node. However, in device_tree_find_node_by_path()
you do not check if a provided node is a root node or not (e.g. no parent). Is this intended?

~Michal


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:44:16 2023
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Jens Wiklander <jens.wiklander@linaro.org>, Xen-devel
	<xen-devel@lists.xenproject.org>, Marc Bonnici <Marc.Bonnici@arm.com>, Achin
 Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
Thread-Topic: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
Thread-Index: AQHZbdfL0AxH2Xu3t0igFa2jZHjA9K8pOEIAgAABeICAAAHsAIAABI2A
Date: Thu, 13 Apr 2023 13:43:45 +0000
Message-ID: <0B0212E8-BAC7-4557-B21B-B49EB14F1D09@arm.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-10-jens.wiklander@linaro.org>
 <2359695e-f8f8-cf51-27f9-5f0c776feca5@xen.org>
 <916BB708-3028-4AAB-BD6A-BCABAFBD7C45@arm.com>
 <2dba6372-330d-a068-241f-59e19b837150@xen.org>
In-Reply-To: <2dba6372-330d-a068-241f-59e19b837150@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi,

> On 13 Apr 2023, at 15:27, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 13/04/2023 14:20, Bertrand Marquis wrote:
>> Hi Julien,
>>> On 13 Apr 2023, at 15:15, Julien Grall <julien@xen.org> wrote:
>>>
>>> Hi,
>>>
>>> On 13/04/2023 08:14, Jens Wiklander wrote:
>>>> Adds support for sending a FF-A direct request. Checks that the SP also
>>>> supports handling a 32-bit direct request. 64-bit direct requests are
>>>> not used by the mediator itself so there is no need to check for that.
>>>> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
>>>> ---
>>>>  xen/arch/arm/tee/ffa.c | 112 +++++++++++++++++++++++++++++++++++++++++
>>>>  1 file changed, 112 insertions(+)
>>>> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
>>>> index f129879c5b81..f2cce955d981 100644
>>>> --- a/xen/arch/arm/tee/ffa.c
>>>> +++ b/xen/arch/arm/tee/ffa.c
>>>> @@ -181,6 +181,56 @@ static bool ffa_get_version(uint32_t *vers)
>>>>      return true;
>>>>  }
>>>>  +static int32_t get_ffa_ret_code(const struct arm_smccc_1_2_regs *resp)
>>>> +{
>>>> +    switch ( resp->a0 )
>>>> +    {
>>>> +    case FFA_ERROR:
>>>> +        if ( resp->a2 )
>>>> +            return resp->a2;
>>>> +        else
>>>> +            return FFA_RET_NOT_SUPPORTED;
>>>> +    case FFA_SUCCESS_32:
>>>> +    case FFA_SUCCESS_64:
>>>> +        return FFA_RET_OK;
>>>> +    default:
>>>> +        return FFA_RET_NOT_SUPPORTED;
>>>> +    }
>>>> +}
>>>> +
>>>> +static int32_t ffa_simple_call(uint32_t fid, register_t a1, register_t a2,
>>>> +                               register_t a3, register_t a4)
>>>> +{
>>>> +    const struct arm_smccc_1_2_regs arg = {
>>>> +        .a0 = fid,
>>>> +        .a1 = a1,
>>>> +        .a2 = a2,
>>>> +        .a3 = a3,
>>>> +        .a4 = a4,
>>>> +    };
>>>> +    struct arm_smccc_1_2_regs resp;
>>>> +
>>>> +    arm_smccc_1_2_smc(&arg, &resp);
>>>> +
>>>> +    return get_ffa_ret_code(&resp);
>>>> +}
>>>> +
>>>> +static int32_t ffa_features(uint32_t id)
>>>> +{
>>>> +    return ffa_simple_call(FFA_FEATURES, id, 0, 0, 0);
>>>> +}
>>>> +
>>>> +static bool check_mandatory_feature(uint32_t id)
>>>> +{
>>>> +    int32_t ret = ffa_features(id);
>>>> +
>>>> +    if ( ret )
>>>> +        printk(XENLOG_ERR "ffa: mandatory feature id %#x missing: error %d\n",
>>>> +               id, ret);
>>>> +
>>>> +    return !ret;
>>>> +}
>>>> +
>>>>  static uint16_t get_vm_id(const struct domain *d)
>>>>  {
>>>>      /* +1 since 0 is reserved for the hypervisor in FF-A */
>>>> @@ -222,6 +272,57 @@ static void handle_version(struct cpu_user_regs *regs)
>>>>      set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
>>>>  }
>>>>  +static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
>>>> +{
>>>> +    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
>>>> +    struct arm_smccc_1_2_regs resp = { };
>>>> +    struct domain *d = current->domain;
>>>> +    uint32_t src_dst;
>>>> +    uint64_t mask;
>>>> +
>>>> +    if ( smccc_is_conv_64(fid) )
>>>> +        mask = GENMASK_ULL(63, 0);
>>>> +    else
>>>> +        mask = GENMASK_ULL(31, 0);
>>>> +
>>>> +    src_dst = get_user_reg(regs, 1);
>>>> +    if ( (src_dst >> 16) != get_vm_id(d) )
>>>> +    {
>>>> +        resp.a0 = FFA_ERROR;
>>>> +        resp.a2 = FFA_RET_INVALID_PARAMETERS;
>>>> +        goto out;
>>>> +    }
>>>> +
>>>> +    arg.a1 = src_dst;
>>>> +    arg.a2 = get_user_reg(regs, 2) & mask;
>>>> +    arg.a3 = get_user_reg(regs, 3) & mask;
>>>> +    arg.a4 = get_user_reg(regs, 4) & mask;
>>>> +    arg.a5 = get_user_reg(regs, 5) & mask;
>>>> +    arg.a6 = get_user_reg(regs, 6) & mask;
>>>> +    arg.a7 = get_user_reg(regs, 7) & mask;
>>>> +
>>>> +    arm_smccc_1_2_smc(&arg, &resp);
>>>> +    switch ( resp.a0 )
>>>> +    {
>>>> +    case FFA_ERROR:
>>>> +    case FFA_SUCCESS_32:
>>>> +    case FFA_SUCCESS_64:
>>>> +    case FFA_MSG_SEND_DIRECT_RESP_32:
>>>> +    case FFA_MSG_SEND_DIRECT_RESP_64:
>>>> +        break;
>>>> +    default:
>>>> +        /* Bad fid, report back. */
>>>> +        memset(&resp, 0, sizeof(resp));
>>>> +        resp.a0 = FFA_ERROR;
>>>> +        resp.a1 = src_dst;
>>>> +        resp.a2 = FFA_RET_ABORTED;
>>>> +    }
>>>> +
>>>> +out:
>>>> +    set_regs(regs, resp.a0, resp.a1 & mask, resp.a2 & mask, resp.a3 & mask,
>>>> +             resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, resp.a7 & mask);
>>>> +}
>>>> +
>>>>  static bool ffa_handle_call(struct cpu_user_regs *regs)
>>>>  {
>>>>      uint32_t fid = get_user_reg(regs, 0);
>>>> @@ -239,6 +340,10 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
>>>>      case FFA_ID_GET:
>>>>          set_regs_success(regs, get_vm_id(d), 0);
>>>>          return true;
>>>> +    case FFA_MSG_SEND_DIRECT_REQ_32:
>>>> +    case FFA_MSG_SEND_DIRECT_REQ_64:
>>>> +        handle_msg_send_direct_req(regs, fid);
>>>> +        return true;
>>>>        default:
>>>>          gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
>>>> @@ -326,6 +431,13 @@ static bool ffa_probe(void)
>>>>      printk(XENLOG_INFO "ARM FF-A Firmware version %u.%u\n",
>>>>             major_vers, minor_vers);
>>>>  +    /*
>>>> +     * TODO save result of checked features and use that information to
>>>> +     * accept or reject requests from guests.
>>>> +     */
>>>
>>> I am not entirely sure I understand this TODO. Does it mean a guest can
>>> currently use a request that is not supported by FFA?
>> In fact it is rather the opposite: in the following patch we check that
>> all the features we could need are supported, but a guest using only a
>> subset might not need all of them.
>> The idea of this TODO is to save the supported features and refuse guest
>> requests depending on the features they need.
>
> Thanks. I would suggest the following comment:
>
> /*
> * At the moment domains must support the same features used by Xen.
> * TODO: Rework the code to allow a domain to use a subset of the
> * features supported.
> */
>
> Note that I am using "domains" rather than "guests" because the latter
> doesn't include dom0.

Makes sense, and the new comment is nice.

Up to Jens to say if he is ok with it.

Cheers
Bertrand

>
> Cheers,
>
> --
> Julien Grall




From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:45:54 2023
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Thu, 13 Apr 2023 15:45:26 +0200
Message-ID: <CAHUa44FqgH3QRiTR=8v4WH+6XYbzwHYn8=Ht_KRC--jLWz9cog@mail.gmail.com>
Subject: Re: [XEN PATCH v8 05/22] xen/arm: ffa: add flags for FFA_PARTITION_INFO_GET
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>

On Thu, Apr 13, 2023 at 12:28 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > Subject: [XEN PATCH v8 05/22] xen/arm: ffa: add flags for
> > FFA_PARTITION_INFO_GET
> >
> > Defines flags used for the function FFA_PARTITION_INFO_GET.
>
> Nit: Similar to my comment for patch #4, I would suggest that the
> commit message mention the document number and the chapter for
> FFA_PARTITION_INFO_GET. Something like:
> "Define flags used for the function FFA_PARTITION_INFO_GET according
> to DEN0077A version 1.1 REL0, section 13.8."

I'll add that.

>
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > ---
> >  xen/arch/arm/tee/ffa.c | 34 ++++++++++++++++++++++++++++++++++
> >  1 file changed, 34 insertions(+)
> >
> > diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> > index ba0942e76993..72e7d0575de5 100644
> > --- a/xen/arch/arm/tee/ffa.c
> > +++ b/xen/arch/arm/tee/ffa.c
> > @@ -57,6 +57,40 @@
> >  #define FFA_MY_VERSION          MAKE_FFA_VERSION(FFA_MY_VERSION_MAJOR, \
> >                                                   FFA_MY_VERSION_MINOR)
> >
> > +/*
> > + * Flags to determine partition properties in FFA_PARTITION_INFO_GET
> > + * return message:
> > + * BIT(0): Supports receipt of direct requests
> > + * BIT(1): Can send direct requests
> > + * BIT(2): Can send and receive indirect messages
> > + * BIT(3): Supports receipt of notifications
> > + * BIT(4-5): Partition ID is a PE endpoint ID
> > + * BIT(6): Partition must be informed about each VM that is created by
> > + *         the Hypervisor
> > + * BIT(7): Partition must be informed about each VM that is destroyed by
> > + *         the Hypervisor
> > + * BIT(8): Partition runs in the AArch64 execution state, else the
> > + *         AArch32 execution state
> > + */
> > +#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0, U)
> > +#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1, U)
> > +#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2, U)
> > +#define FFA_PART_PROP_RECV_NOTIF        BIT(3, U)
> > +#define FFA_PART_PROP_IS_MASK           (3U << 4)
>
> I am a bit confused here, here (3U<<4) is "IS_MASK" but...
>
> > +#define FFA_PART_PROP_IS_PE_ID          (0U << 4)
> > +#define FFA_PART_PROP_IS_SEPID_INDEP    (1U << 4)
> > +#define FFA_PART_PROP_IS_SEPID_DEP      (2U << 4)
> > +#define FFA_PART_PROP_IS_AUX_ID         (3U << 4)
>
> ...here the same value is used for "IS_AUX_ID". According to
> the spec that I referred to, bit[5:4] has the following encoding:
> b'11: Partition ID is an auxiliary ID. Hence I guess the above
> "IS_MASK" should be removed?

FFA_PART_PROP_IS_MASK is supposed to be used when extracting the bits
to compare with one of the other FFA_PART_PROP_IS_* defines. For
example:

if ((props & FFA_PART_PROP_IS_MASK) == FFA_PART_PROP_IS_PE_ID)

Using
if ((props & FFA_PART_PROP_IS_AUX_ID) == FFA_PART_PROP_IS_PE_ID)

doesn't seem right.

>
> I confirm the values of other fields are consistent with the spec.

Thanks,
Jens

>
> Kind regards,
> Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:52:09 2023
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Thu, 13 Apr 2023 15:51:45 +0200
Message-ID: <CAHUa44GUWvXneub-p=iw6ufbpUtv=zwEx0Oy-Ov8CTTVgX=7fw@mail.gmail.com>
Subject: Re: [XEN PATCH v8 06/22] xen/arm: ffa: add defines for framework
 direct request/response messages
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>

Hi Henry,

On Thu, Apr 13, 2023 at 12:51 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > Subject: [XEN PATCH v8 06/22] xen/arm: ffa: add defines for framework
> > direct request/response messages
> >
> > Adds defines for framework direct request/response messages.
>
> Same here, it actually took me a while to find the chapters related
> to this patch, so I would suggest adding more details about where
> the values come from.
>
> From the spec that I referred to (DEN0077A version 1.1 REL0), they are
> in section 18.3, Tables 18.{21, 25, 26, 27, 28}.

OK, I'll add it.

>
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
>
> I confirm the values introduced by this patch are consistent with the
> above spec, hence:
>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Thanks,
Jens

>
> Kind regards,
> Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:53:40 2023
From: Henry Wang <Henry.Wang@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici
	<Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: RE: [XEN PATCH v8 05/22] xen/arm: ffa: add flags for
 FFA_PARTITION_INFO_GET
Date: Thu, 13 Apr 2023 13:53:11 +0000
Message-ID:
 <AS8PR08MB799116F19A91F29F889D57A892989@AS8PR08MB7991.eurprd08.prod.outlook.com>

Hi Jens,

> -----Original Message-----
> From: Jens Wiklander <jens.wiklander@linaro.org>
> Subject: Re: [XEN PATCH v8 05/22] xen/arm: ffa: add flags for
> FFA_PARTITION_INFO_GET
> > > +#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0, U)
> > > +#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1, U)
> > > +#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2, U)
> > > +#define FFA_PART_PROP_RECV_NOTIF        BIT(3, U)
> > > +#define FFA_PART_PROP_IS_MASK           (3U << 4)
> >
> > I am a bit confused here, here (3U<<4) is "IS_MASK" but...
> >
> > > +#define FFA_PART_PROP_IS_PE_ID          (0U << 4)
> > > +#define FFA_PART_PROP_IS_SEPID_INDEP    (1U << 4)
> > > +#define FFA_PART_PROP_IS_SEPID_DEP      (2U << 4)
> > > +#define FFA_PART_PROP_IS_AUX_ID         (3U << 4)
> >
> > ...here the same value is used for "IS_AUX_ID". According to
> > the spec that I referred to, bit[5:4] has the following encoding:
> > b'11: Partition ID is an auxiliary ID. Hence I guess the above
> > "IS_MASK" should be removed?
>
> FFA_PART_PROP_IS_MASK is supposed to be used when extracting the bits
> to compare with one of the other FFA_PART_PROP_IS_* defines. For
> example:
> if ((props & FFA_PART_PROP_IS_MASK) == FFA_PART_PROP_IS_PE_ID)

Ohh I now understand, the naming does not mean it "is a mask" but actually
means "this is a mask for FFA_PART_PROP_IS_". That makes a lot of sense.

To avoid this kind of ambiguity, do you think changing the name to something
like "FFA_PART_PROP_IS_TYPE_MASK" makes sense here? Note that this
is just my suggestion, you can decide to change or not. I am asking just
because I downloaded the whole series and found that currently
FFA_PART_PROP_IS_MASK is not used anywhere, so before it is used everywhere
in the code, it might be good to use a clearer name.

>
> using
> if ((props & FFA_PART_PROP_IS_AUX_ID) == FFA_PART_PROP_IS_PE_ID)
>
> doesn't seem right.

Indeed. Please see my above reply.

Personally, after the above clarification, I am good with the patch, so:

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry

>
> >
> > I confirm the values of other fields are consistent with the spec.
>
> Thanks,
> Jens
>
> >
> > Kind regards,
> > Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:53:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 13:53:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520778.808776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxOQ-0005Je-Gs; Thu, 13 Apr 2023 13:53:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520778.808776; Thu, 13 Apr 2023 13:53:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxOQ-0005JV-E7; Thu, 13 Apr 2023 13:53:42 +0000
Received: by outflank-mailman (input) for mailman id 520778;
 Thu, 13 Apr 2023 13:53:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmxOP-0005J3-Q7
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 13:53:41 +0000
Received: from mail-wm1-x331.google.com (mail-wm1-x331.google.com
 [2a00:1450:4864:20::331])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 98db509d-da02-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 15:53:40 +0200 (CEST)
Received: by mail-wm1-x331.google.com with SMTP id
 n9-20020a05600c4f8900b003f05f617f3cso14822555wmq.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 06:53:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98db509d-da02-11ed-b21e-6b7b168915f2
X-Received: by 2002:a1c:7901:0:b0:3ee:41a8:729a with SMTP id
 l1-20020a1c7901000000b003ee41a8729amr618534wme.4.1681394020360; Thu, 13 Apr
 2023 06:53:40 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-8-jens.wiklander@linaro.org> <AS8PR08MB7991C16F543C4B1C4611572F92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB7991C16F543C4B1C4611572F92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Thu, 13 Apr 2023 15:53:29 +0200
Message-ID: <CAHUa44F-D20fPnFVbdQgO4+UqLqyiHkzOFqhvTZZ2_Kp0t7jFw@mail.gmail.com>
Subject: Re: [XEN PATCH v8 07/22] xen/arm: ffa: enforce dependency on 4k pages
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Henry,

On Thu, Apr 13, 2023 at 12:55 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > Subject: [XEN PATCH v8 07/22] xen/arm: ffa: enforce dependency on 4k pages
> >
> > Adds a BUILD_BUG_ON() to assert the dependency on 4k pages in the FF-A
> > mediator since the current implementation only works if Xen page size is
> > the same as the FFA page size.
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

I'll add that.

Thanks,
Jens

>
> Kind regards,
> Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 13:56:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 13:56:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520785.808786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxRB-0006FO-4v; Thu, 13 Apr 2023 13:56:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520785.808786; Thu, 13 Apr 2023 13:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxRB-0006FH-27; Thu, 13 Apr 2023 13:56:33 +0000
Received: by outflank-mailman (input) for mailman id 520785;
 Thu, 13 Apr 2023 13:56:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmxR9-0006FB-CX
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 13:56:31 +0000
Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com
 [2a00:1450:4864:20::334])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fde3aef7-da02-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 15:56:30 +0200 (CEST)
Received: by mail-wm1-x334.google.com with SMTP id
 eo6-20020a05600c82c600b003ee5157346cso10083289wmb.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 06:56:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fde3aef7-da02-11ed-b21e-6b7b168915f2
X-Received: by 2002:a05:6000:1e02:b0:2f4:2e72:5bf7 with SMTP id
 bj2-20020a0560001e0200b002f42e725bf7mr2017521wrb.0.1681394189922; Thu, 13 Apr
 2023 06:56:29 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-23-jens.wiklander@linaro.org> <AS8PR08MB799163D2CB5E1184FC2F978B92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB799163D2CB5E1184FC2F978B92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Thu, 13 Apr 2023 15:56:19 +0200
Message-ID: <CAHUa44HT0zP3od1Qx-ZKS8t0gc7vQ-8JPkshO32bWs0JY+gZnA@mail.gmail.com>
Subject: Re: [XEN PATCH v8 22/22] docs: add Arm FF-A mediator
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	Anthony PERARD <anthony.perard@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Henry,

On Thu, Apr 13, 2023 at 1:20 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > Subject: [XEN PATCH v8 22/22] docs: add Arm FF-A mediator
> >
> > Describes a FF-A version 1.1 [1] mediator to communicate with a Secure
> > Partition in secure world.
> >
> > [1] https://developer.arm.com/documentation/den0077/latest
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > ---
> >  SUPPORT.md               |  8 ++++++++
> >  docs/man/xl.cfg.5.pod.in | 15 +++++++++++++++
> >  2 files changed, 23 insertions(+)
> >
> > +B<Arm only.> Allow a guest to communicate via FF-A with Secure Partitions
> > +(SP), default false.
> > +
> > +Currently is only a small subset of the FF-A specification supported. Just
>
> I am not a native English speaker but I think this sentence would better be:
> "Currently only a small subset of the FF-A specification is supported."

I'll update it.

>
> Other parts look good to me, so:
>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Thanks,
Jens

>
> Kind regards,
> Henry
>


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 14:01:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 14:01:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520790.808796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxWB-0007ks-NR; Thu, 13 Apr 2023 14:01:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520790.808796; Thu, 13 Apr 2023 14:01:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxWB-0007kl-KI; Thu, 13 Apr 2023 14:01:43 +0000
Received: by outflank-mailman (input) for mailman id 520790;
 Thu, 13 Apr 2023 14:01:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmxW9-0007kf-Uq
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 14:01:41 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b71d7150-da03-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 16:01:41 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 v14-20020a05600c470e00b003f06520825fso13663414wmo.0
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 07:01:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b71d7150-da03-11ed-b21e-6b7b168915f2
X-Received: by 2002:adf:df83:0:b0:2f4:1214:d5b4 with SMTP id
 z3-20020adfdf83000000b002f41214d5b4mr402098wrl.3.1681394500484; Thu, 13 Apr
 2023 07:01:40 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-22-jens.wiklander@linaro.org> <AS8PR08MB79914AB812D7BC35ACBC80E692989@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB79914AB812D7BC35ACBC80E692989@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Thu, 13 Apr 2023 16:01:29 +0200
Message-ID: <CAHUa44FbgSOcUnZwtrBczoNCup15fpRxPsQ0JBU_ssAUHt_2BQ@mail.gmail.com>
Subject: Re: [XEN PATCH v8 21/22] xen/arm: ffa: list current limitations
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Henry,

On Thu, Apr 13, 2023 at 1:24 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > Subject: [XEN PATCH v8 21/22] xen/arm: ffa: list current limitations
> >
> > Adds a comments with a list of unsupported FF-A interfaces and
>
> Typo: s/a comments/comments/

OK

>
> > limitations in the implemented FF-A interfaces.
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > ---
> >  xen/arch/arm/tee/ffa.c | 32 ++++++++++++++++++++++++++++++++
> >  1 file changed, 32 insertions(+)
> >
> > diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> > index 0948cc636871..6424c222c885 100644
> > --- a/xen/arch/arm/tee/ffa.c
> > +++ b/xen/arch/arm/tee/ffa.c
> > @@ -13,6 +13,38 @@
> >   *                https://developer.arm.com/documentation/den0077/e
> >   * TEEC-1.0C: TEE Client API Specification version 1.0c available at
> >   *            https://globalplatform.org/specs-library/tee-client-api-specification/
> > + *
> > + * Notes on the the current implementstion.
>
> Typo: s/implementstion/implementation/

OK

>
> > + *
> > + * Unsupported FF-A interfaces:
> > + * o FFA_MSG_POLL and FFA_MSG_SEND - deprecated in FF-A-1.1-REL0
> > + * o FFA_MEM_RETRIEVE_* - Used when sharing memory from an SP to a
> > VM
> > + * o FFA_MEM_DONATE_* and FFA_MEM_LEND_* - Used when tranferring
> > ownership
> > + *   or access of a memory readion
>
> Typo "readion"? Maybe I am wrong but I cannot find this word in the spec.

I'll fix it.

>
> With above typos corrected:
>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Thanks,
Jens

>
> Kind regards,
> Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 14:04:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 14:04:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520794.808806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxYy-0008JQ-4o; Thu, 13 Apr 2023 14:04:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520794.808806; Thu, 13 Apr 2023 14:04:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxYy-0008JJ-1E; Thu, 13 Apr 2023 14:04:36 +0000
Received: by outflank-mailman (input) for mailman id 520794;
 Thu, 13 Apr 2023 14:04:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmxYw-0008JD-FR
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 14:04:34 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1d31a31b-da04-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 16:04:32 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 he11-20020a05600c540b00b003ef6d684102so6459930wmb.3
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 07:04:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d31a31b-da04-11ed-8611-37d641c3527e
X-Received: by 2002:adf:df83:0:b0:2f4:1214:d5b4 with SMTP id
 z3-20020adfdf83000000b002f41214d5b4mr404765wrl.3.1681394671929; Thu, 13 Apr
 2023 07:04:31 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-15-jens.wiklander@linaro.org> <AS8PR08MB799105F5A90ECF0085BA63A492989@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB799105F5A90ECF0085BA63A492989@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Thu, 13 Apr 2023 16:04:21 +0200
Message-ID: <CAHUa44F3a7_XPqK0CnJUHX=hMXoHXtuXg_v6Rf8ZmCuQpyrX3A@mail.gmail.com>
Subject: Re: [XEN PATCH v8 14/22] xen/arm: move regpair_to_uint64() and
 uint64_to_regpair() to regs.h
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Julien Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Michal Orzel <michal.orzel@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Henry,

On Thu, Apr 13, 2023 at 1:31 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > Subject: [XEN PATCH v8 14/22] xen/arm: move regpair_to_uint64() and
> > uint64_to_regpair() to regs.h
> >
> > Moves the two helper functions regpair_to_uint64() and
> > uint64_to_regpair() from xen/arch/arm/tee/optee.c to the common arm
> > specific regs.h. This enables reuse of these functions in the FF-A
> > mediator in a subsequent patch.
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > Reviewed-by: Michal Orzel <michal.orzel@amd.com>
> > Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

I'll add it.

Thanks,
Jens

>
> Kind regards,
> Henry


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 14:06:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 14:06:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520797.808816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxaH-0000S6-Fz; Thu, 13 Apr 2023 14:05:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520797.808816; Thu, 13 Apr 2023 14:05:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxaH-0000Rz-Cz; Thu, 13 Apr 2023 14:05:57 +0000
Received: by outflank-mailman (input) for mailman id 520797;
 Thu, 13 Apr 2023 14:05:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HU9H=AE=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pmxaG-0000Rt-Jg
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 14:05:56 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on062b.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4e115510-da04-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 16:05:55 +0200 (CEST)
Received: from DUZP191CA0006.EURP191.PROD.OUTLOOK.COM (2603:10a6:10:4f9::16)
 by GV2PR08MB7932.eurprd08.prod.outlook.com (2603:10a6:150:7f::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 14:05:52 +0000
Received: from DBAEUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:4f9:cafe::ef) by DUZP191CA0006.outlook.office365.com
 (2603:10a6:10:4f9::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 14:05:52 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT018.mail.protection.outlook.com (100.127.142.74) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.29 via Frontend Transport; Thu, 13 Apr 2023 14:05:52 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Thu, 13 Apr 2023 14:05:52 +0000
Received: from 5bcdf09bd711.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 26D03264-6603-41F1-A6C6-DE02BF857EC2.1; 
 Thu, 13 Apr 2023 14:05:45 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5bcdf09bd711.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 14:05:45 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AM9PR08MB6065.eurprd08.prod.outlook.com (2603:10a6:20b:2dd::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 14:05:34 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 14:05:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e115510-da04-11ed-b21e-6b7b168915f2
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: b0a1d0c3ab2b01fb
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gaf5h/m0A+LxqIE9Q5PBFYjuEuacUNRQUXg2p0HjoBptYKD9sAvIKNEZM1h3l9HLtvdlrka4hhIv0/P3hWG2PEygQWR/WVvKV/7mm95Od78JxCd5IbQe73tO7Xe3xjdcMbefYAjk5KAmQUNFmjRxn263iEvebsm16QENA8f+0V9N/XxagsHzS6jE8OBW9IDxAXyuOoTmpKKQAdiqrgksqVae7rPEqkwaYNpMK43SszCjgqTAaGkTEZOXvfyogpJq27D55yzx/NppI/mIhLFpZ2qrI4ljRQrzPu/6TMHc6n5uXVRVb9NrI3Cop9Lo7HH7IZdZ0OyOYmLRN4pPzoBQ7w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4ntKU0qB25tzCe84Gkqv7uUIcg8aNldac8nLZ9xZICc=;
 b=RgT/ACmN4aEFk0yCX/ydcatyZoyjjiJ/+EEx07dX/8/BCwyVFYjF3Xc59bgohYgjKZ69XTItp/oX1XvuaY+nkDT8xj2Ar7EZAKn/JD1ujBdjxMUTG6neg3euohZJP9sS4JGQNYRkgkEcoKz5r+vNGIO2IJeb0C8hq0d+PVBs3vDUwBLHBsTX3V22YLdlDr1/FDrnE2VNVDQaua4hHvspvyoPCAv5+Bx6mBfrs+FowghDIL+nGO/wxlN9IftiKdJcZBQJ60vFAzMZijncYHJtkIDoTbTneFAFP8kRlV2f6Z7IGAziLbor0Dk9fCb7k1P3ZOCeyqOtkI5pDmpSvNvbJg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4ntKU0qB25tzCe84Gkqv7uUIcg8aNldac8nLZ9xZICc=;
 b=hkaD3G9Oykzo1gRrBqfG2zy+phaZeCD5vO1exVMLeLSEgqjS+XQlDjIjDHvKvYUs0sF+os76DGWcnTSDPFkx9nKl6jtrhLKUcUzrkD3EoJ8TndmavVSxTg8YyynJ91MIJsJ18MeATSoqHF6AAZemjaTVXBBO5Jy36sj0V+a2x0Y=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Thread-Topic: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Thread-Index: AQHZbSQ9GJqCFHqaX0uCiT+EC9U7s68pNK+AgAAHnICAAAGIAIAACdqA
Date: Thu, 13 Apr 2023 14:05:30 +0000
Message-ID: <4F5DC5EC-F538-42CE-A93F-2B5E3FAC13BB@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-3-luca.fancellu@arm.com>
 <72f38b2b-a391-fb7c-f8c0-cf3561470875@xen.org>
 <B3A82639-6D61-4DA2-B918-A92A421C75D3@arm.com>
 <e8075849-8bd5-7fd4-efaa-81e48c867635@xen.org>
In-Reply-To: <e8075849-8bd5-7fd4-efaa-81e48c867635@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AM9PR08MB6065:EE_|DBAEUR03FT018:EE_|GV2PR08MB7932:EE_
X-MS-Office365-Filtering-Correlation-Id: 2b15dd7f-6a45-4d9f-ef28-08db3c283124
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 RgSznKWfxl8m3Ahu1CbR/Hfg6/lvGHsldj3WMY7SsCvKIe5969I2Gj7jfsRrE56RtPof6+DuMXHzB4CCCZhAtQYQx5iL74BkeRNWNZJklmMedis0xB2E1CZ9o4fwWXbpISfcQzkmvpgAnEskXViozI4EPBKyTWe9Hde2TYX337QBputjqbbGbjthmvidf8XqwoA+67Sog92t3oAZgmHceUEb5DZCJfK4buNKuHc0CBEpWqeByc8MRD1PyuoaLOYsQChdik2tMZXQoTThhJpgYNL8rqaVv88jFKk6vx+i+utOuY8Nr6UVaRh6kZXrObAf4g2VAp4Hew5a+8IOE7AlOvdJuVETqH6PPhiaUjyohBf5ZEgCPmecYI5F3FTaCd6+8t4v24KpFLEIynXL3z1FvLPWcifWDnK9CluQsvv0usCPjvWWtv2eR9HeVq8+Pue8VU9Ut7Znh3SOQ6nscYxcCwmDQt4QiWWgfeBn3u8uhYZ1X3DWVba6AONLeXbx9wze6kZV19qYTr4WXQj9EPWQocWyUKAVPIiwCa28hzp8LijlLsy1Rvb3h5nrIj4SkHbXori8LxdEp9HUfNojTGfFbSfXh8IaBqx8kjZVT3p5Uf50XJE1dzh2SqCXxKYf08YLrP6jyaaztDceIpHcKSX0uw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(366004)(396003)(346002)(39860400002)(376002)(451199021)(5660300002)(41300700001)(33656002)(122000001)(38100700002)(36756003)(2906002)(38070700005)(8676002)(86362001)(4326008)(6916009)(66556008)(8936002)(66446008)(66946007)(64756008)(66476007)(316002)(91956017)(2616005)(83380400001)(76116006)(6506007)(54906003)(6512007)(26005)(53546011)(186003)(478600001)(6486002)(71200400001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <8C2C74ED4696DC4795D6E4B68AB82C1F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6065
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3d14a642-65c9-4dd4-0ac0-08db3c2823df
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	U4H9aoFTTh8dL3+X2PYAz7MwzphRRA4OkVBNNru9NFDjvKl7vgaYlOzRD67HafqYX28w+q7wQoY1jG/TJeTAt9tiwGj159q7djXl6Up6xV3xWg9RzG2djIoMvGWkqGPIGr2D8FaIlky2OdEibWqSxpYhg9pVZunrbIjxZpbSWdskbVsGONsA7DEz7rCKGmUjv7JCPJwYg9WvuJ9phuPMuxnlD722lQPKdNdpuo3oK3u+2jBUxVm16t0SzAgeepHSuXGYenpTmZaK/fGF53HhEAYPO/9+z5pKJcZVbllaYbsxlNXl+66lFqwp71yEm5TEf0V6ODZlBdYyauYzR7GTRuy/QVfZSA5QHcAinDXsVA9juPh+HQ0Fc/GYD0xm4rQLvgpH+6hx5Cz8/UyYfSfv8OoE1v2gbu90LC7O60zwIijl95l0nnRJI8FEqH3M+Ju2HHbwMukkTwbkE+rC4dGcWWKPW+n0cs1O3pCkVtDf7mDtoYtgMFf/aUG8NGLmtPhzUuLx1cMh//fS8W0qhMuuInsvb38w6EW4wtNxTBetU3V6/Pw8/S2Pma3a+ifBCxrgu5GkaV2+YZc2pf+JFPvO3DlQ/oPObroYySuhwHtg2Ags5G/ZlUlEnicOsZbJKtu5CcfU1QioOKPmNyHh8ZyRR/ysDs/L6IZWAoOWRl7ymxq9SgCjVN2RzPXsqvuwp7BvxM026VjzPhHC19Iep4u1bA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(396003)(39860400002)(376002)(451199021)(40470700004)(36840700001)(46966006)(86362001)(478600001)(54906003)(6486002)(36860700001)(47076005)(36756003)(2616005)(33656002)(336012)(83380400001)(26005)(40480700001)(186003)(6512007)(6506007)(53546011)(40460700003)(82740400003)(82310400005)(81166007)(316002)(356005)(4326008)(2906002)(70206006)(70586007)(8676002)(8936002)(5660300002)(6862004)(41300700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 14:05:52.3763
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2b15dd7f-6a45-4d9f-ef28-08db3c283124
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB7932



> On 13 Apr 2023, at 14:30, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 13/04/2023 14:24, Luca Fancellu wrote:
>> Hi Julien,
> 
> Hi Luca,
> 
>>>>  @@ -594,6 +597,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>      unsigned int max_vcpus;
>>>>      unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>>>      unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>>>> +    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>>>>        if ( (config->flags & ~flags_optional) != flags_required )
>>>>      {
>>>> @@ -602,6 +606,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>          return -EINVAL;
>>>>      }
>>>>  +    /* Check feature flags */
>>>> +    if ( sve_vl_bits > 0 )
>>>> +    {
>>>> +        unsigned int zcr_max_bits = get_sys_vl_len();
>>>> +
>>>> +        if ( !zcr_max_bits )
>>>> +        {
>>>> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>>>> +            return -EINVAL;
>>>> +        }
>>>> +
>>>> +        if ( sve_vl_bits > zcr_max_bits )
>>>> +        {
>>>> +            dprintk(XENLOG_INFO,
>>>> +                    "Requested SVE vector length (%u) > supported length (%u)\n",
>>>> +                    sve_vl_bits, zcr_max_bits);
>>>> +            return -EINVAL;
>>>> +        }
>>> 
>>> Is SVE supported for 32-bit guest? If not, then you should had a check here to prevent the creation of the domain if sve_vl_bits is set.
>> No SVE is not supported for 32 bit guests, here I think we will get “SVE is unsupported on this machine” because get_sys_vl_len() will return 0.
> 
> From my understanding, get_sys_vl_len() will return the len supported by the hosts. So if you run a 32-bit guest on top of a 64-bit hosts, then I believe get_sys_vl_len() will be non-zero.

Yes you are right, I realise that I need the domain type information and I can’t have it in arch_sanitise_domain_config, however they might have sense there, and I can do a check
like this afterwards:

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c1f0d1d78431..ce1235c25769 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -3694,6 +3694,12 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
         return -EINVAL;
     }
 
+    if ( d->arch.sve_vl && (kinfo->type == DOMAIN_32BIT) )
+    {
+        printk("SVE is not available for 32-bit domain\n");
+        return -EINVAL;
+    }
+
     if ( is_64bit_domain(d) )
         vcpu_switch_to_aarch64_mode(v);

Would it be ok for you?


> 
>>> Can we move this somewhere else to avoid adding extra padding? Also shouldn't this be protected with #ifdef CONFIG_ARM_64 to make clear this is not supported on Xen 32-bit?
>> Yes, I’ll move it and protect with CONFIG_ARM_64, is it ok for you if I move it after:
>> /* Monitor options */
>> struct {
>>     uint8_t privileged_call_enabled : 1;
>> } monitor;
> 
> Please check the padding with "pahole". If possible, it would be better to re-use an existing one.

Ok I’ll try to use the tool

> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 14:19:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 14:19:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520807.808827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxn5-00024B-QG; Thu, 13 Apr 2023 14:19:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520807.808827; Thu, 13 Apr 2023 14:19:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxn5-000244-LJ; Thu, 13 Apr 2023 14:19:11 +0000
Received: by outflank-mailman (input) for mailman id 520807;
 Thu, 13 Apr 2023 14:19:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UVxt=AE=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pmxn4-00023y-PM
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 14:19:10 +0000
Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com
 [2a00:1450:4864:20::42c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 27657f9d-da06-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 16:19:08 +0200 (CEST)
Received: by mail-wr1-x42c.google.com with SMTP id e7so4259531wrc.12
 for <xen-devel@lists.xenproject.org>; Thu, 13 Apr 2023 07:19:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27657f9d-da06-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681395548; x=1683987548;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jnxv1d+DPOzXFsYGT8fPh7cpyORkn3S4h2c0JMmojyY=;
        b=i2RrQtWy/A7bUtrdoyr5sTSUySe67dRrSsliXVPVJc/P2sLIqZjAucxTPd1GrskY7t
         AKOya4TNVorwCxcjKxsY3bDh5J3cZPlS7GdSC8V0LlnAP5AqLO2SHv+kvdEZ4X55Jdlp
         wcRhIYABuH1Q9Ug1rftjC8JUWTAm9sDAHI62WrU/T7SlBySVbacE/zxmlz7qpT5H6LM0
         eAQ3dVtXMC509lNLWU7FZoyUAvluUs2A9YB1yNWXJzww3Ca0e10nPos1ztyOrYxnR3cO
         OFC7oNXJHLaJvvEgTVlKZ0rQ0tzu7wAWAFG5VNZSXQ4mneGgrt7A0oDxJFJNG/EocKGJ
         wZng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681395548; x=1683987548;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=jnxv1d+DPOzXFsYGT8fPh7cpyORkn3S4h2c0JMmojyY=;
        b=jmPFvFi/Z8GmHbaXrMiqltcuIiHmqtgV0sTItBS7bz4YPKS72AcuuQ2jB/zrqQj0/Y
         F+EGei1ouZEpoq6jw7mP/gPrmCVCTi6PtS4L/At5HdAsRXfgd2ZD8GJ0Iz9IkaVhkkij
         lXQENRtrcMIZeUSOx5Z0BVR6eQu1BifNE25VyGrgSsIHlLULh5vSUyTVY5FsB2pEP1X0
         nMI2I5vzp1qq3XepfBDNRTQNwtObIt9+P97bAHl/mOeCkkfBmPbAvoHju61oVCGCPorS
         KenqRxhvQNI6vr7S9SIzOeR8VhhP04G8vh9lKWuAxSSXyyKLjE1R0mNKc6HcTKRSt5gX
         IhKQ==
X-Gm-Message-State: AAQBX9fSvJIrHc2gBqQo/+nuMHgP3Z9UuYR2hB/ojMRKTAP/RulE5V3t
	wntVGlHhSDjdoitxrmiueUdMmUzUtRPhr06FQbjpUg==
X-Google-Smtp-Source: AKy350adar8VeKRFQ5PBoxwjPmgqVh84AAZAOPLUj5qBijC4tUca+gUdapTJ2ICBQ76nii1wCwNo/0k9Tk968+K4MJA=
X-Received: by 2002:a5d:6881:0:b0:2f5:6a2f:660f with SMTP id
 h1-20020a5d6881000000b002f56a2f660fmr466484wru.3.1681395547982; Thu, 13 Apr
 2023 07:19:07 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-16-jens.wiklander@linaro.org> <AS8PR08MB799188B36D51EF5BB367590D92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB799188B36D51EF5BB367590D92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Thu, 13 Apr 2023 16:18:56 +0200
Message-ID: <CAHUa44FbU2qjckQ2DL+U7yuZ893nnbcUgaZdYJ-8YVi5r6P-_g@mail.gmail.com>
Subject: Re: [XEN PATCH v8 15/22] xen/arm: ffa: add defines for sharing memory
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Henry,

On Thu, Apr 13, 2023 at 1:47 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > Subject: [XEN PATCH v8 15/22] xen/arm: ffa: add defines for sharing memory
> >
> > Adds defines needed for sharing using the function FFA_MEM_SHARE and
> > friends.
>
> Same as my comments in previous patches, I would suggest to also mention
> the references in commit message.

OK, I'll add something.

>
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > ---
> >  xen/arch/arm/tee/ffa.c | 60
> > ++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 60 insertions(+)
> >
> > + * FF-A doesn't have any direct requirments on GlobalPlatform or vice
>
> Typo: s/requirments/requirements/

OK

>
> > + * versa, but an implementation can very well use FF-A in order to provide
> > + * a GlobalPlatform interface on top.
> > + *
> > + * Global Platform specification for TEE requires that any TEE
> > + * implementation should allow to share buffers with size of at least
> > + * 512KB, defined in TEEC-1.0C page 24, Table 4-1,
> > + * TEEC_CONFIG_SHAREDMEM_MAX_SIZE.
> > + * Due to align issue mentioned above, we need to increase this
>
> s/align issue/alignment issue/ ?

Yes, I'll make it "the alignment issue".

>
> > + * value with one.
> > + */
> > +#define FFA_MAX_SHM_PAGE_COUNT          (SZ_512K / FFA_PAGE_SIZE + 1)
> > +
> > +/*
> > + * Limits the number of shared buffers that guest can have at once. This
> > + * is to prevent case, when guests tricks XEN into exhausting its own
>
> Typo: s/tricks/trick/

OK

>
> > + * memory by allocating many small buffers. This value has been chosen
> > + * arbitrary.
>
> Typo: s/ arbitrary/arbitrarily/

OK

>
> > + */
> > +#define FFA_MAX_SHM_COUNT               32
> > +
> > +/* FF-A-1.1-REL0 section 10.9.2 Memory region handle, page 167 */
> > +#define FFA_HANDLE_HYP_FLAG             BIT(63, ULL)
> > +#define FFA_HANDLE_INVALID              0xffffffffffffffffULL
> > +
> > +/*
> > + * Memory attributes: Normal memory, Write-Back cacheable, Inner
> > shareable
> > + * Defined in FF-A-1.1-REL0 Table 10.18 at page 175.
> > + */
> > +#define FFA_NORMAL_MEM_REG_ATTR         0x2fU
> > +/*
> > + * Memory access permissions: Read-write
> > + * Defined in FF-A-1.1-REL0 Table 10.15 at page 168.
> > + */
> > +#define FFA_MEM_ACC_RW                  0x2U
> > +
> > +/* FF-A-1.1-REL0 section 10.11.4 Flags usage, page 184-187 */
> > +/* Clear memory before mapping in receiver */
> > +#define FFA_MEMORY_REGION_FLAG_CLEAR            BIT(0, U)
> > +/* Relayer may time slice this operation */
> > +#define FFA_MEMORY_REGION_FLAG_TIME_SLICE       BIT(1, U)
> > +/* Clear memory after receiver relinquishes it */
> > +#define FFA_MEMORY_REGION_FLAG_CLEAR_RELINQUISH BIT(2, U)
> > +/* Share memory transaction */
> > +#define FFA_MEMORY_REGION_TRANSACTION_TYPE_SHARE (1U << 3)
> > +
>
> I confirm the values introduced in this patch are consistent with in-code
> comments on top of them. Thanks for the pointer :)

:-)

>
> With the typos corrected:
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Great.

Thanks,
Jens

>
> Kind regards,
> Henry
>
>
> >  /*
> >   * Flags and field values used for the MSG_SEND_DIRECT_REQ/RESP:
> >   * BIT(31): Framework or partition message
> > --
> > 2.34.1
> >
>


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 14:27:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 14:27:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520813.808836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxuZ-0003X0-G2; Thu, 13 Apr 2023 14:26:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520813.808836; Thu, 13 Apr 2023 14:26:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmxuZ-0003Wt-DN; Thu, 13 Apr 2023 14:26:55 +0000
Received: by outflank-mailman (input) for mailman id 520813;
 Thu, 13 Apr 2023 14:26:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmxuY-0003Wj-FR; Thu, 13 Apr 2023 14:26:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmxuY-0004eN-8y; Thu, 13 Apr 2023 14:26:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmxuX-0002Vs-QR; Thu, 13 Apr 2023 14:26:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmxuX-0000zi-Pv; Thu, 13 Apr 2023 14:26:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=00Ml6wI2+YZbNfEYdh0VrbG/4F2r7tGSkmGiUEQt0+E=; b=ukvKb7kWvVFzhlm7aYCSnwSo28
	weLkqLOTKdmJ8Yt5s/r6MRp9d2Ij/quRMVurfk4rMdLtdLxy0RayyBCFG2FoJI2HUkOuBusdTNRqZ
	YakWDaerTDqBl/Hll/1QiG1uS4tUeSZJ3MrwReA5HXHJi3aISDjtXJIniHnJE70SD8uw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180225-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180225: tolerable trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=5ea03c570c8610d4359f8bbf5f093d215344ce3f
X-Osstest-Versions-That:
    xen=5ea03c570c8610d4359f8bbf5f093d215344ce3f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Apr 2023 14:26:53 +0000

flight 180225 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180225/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail in 180211 pass in 180225
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat  fail pass in 180211
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install    fail pass in 180211

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 180211 like 180206
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180211
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180211
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180211
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180211
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180211
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180211
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180211
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180211
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  5ea03c570c8610d4359f8bbf5f093d215344ce3f
baseline version:
 xen                  5ea03c570c8610d4359f8bbf5f093d215344ce3f

Last test of basis   180225  2023-04-13 01:53:23 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 14:36:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 14:36:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520821.808846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmy3W-00050j-CT; Thu, 13 Apr 2023 14:36:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520821.808846; Thu, 13 Apr 2023 14:36:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmy3W-00050c-9m; Thu, 13 Apr 2023 14:36:10 +0000
Received: by outflank-mailman (input) for mailman id 520821;
 Thu, 13 Apr 2023 14:36:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HU9H=AE=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pmy3V-00050R-5D
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 14:36:09 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20617.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::617])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 839e0870-da08-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 16:36:02 +0200 (CEST)
Received: from AS9PR0301CA0012.eurprd03.prod.outlook.com
 (2603:10a6:20b:468::25) by VE1PR08MB5743.eurprd08.prod.outlook.com
 (2603:10a6:800:1b2::5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 14:35:57 +0000
Received: from AM7EUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:468:cafe::a) by AS9PR0301CA0012.outlook.office365.com
 (2603:10a6:20b:468::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.31 via Frontend
 Transport; Thu, 13 Apr 2023 14:35:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT016.mail.protection.outlook.com (100.127.140.106) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 14:35:57 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Thu, 13 Apr 2023 14:35:57 +0000
Received: from 9618f815e1ca.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 84741A9D-6DB9-48FF-9220-4C008AABF99B.1; 
 Thu, 13 Apr 2023 14:35:56 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9618f815e1ca.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 14:35:56 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by GV1PR08MB8237.eurprd08.prod.outlook.com (2603:10a6:150:5d::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Thu, 13 Apr
 2023 14:35:46 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 14:35:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 839e0870-da08-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lXArFAPv3OLfJYe5yu5owMpv8nCKDOluVtSJurcjWrs=;
 b=y3qhKWMnPspyoGW07gH6EpH1dn5b7QxSL0MQTXJy3S3GXuFYRsXfAbRUJ/tS/3wFqJVs2Sf/taRq25/h/97ipV7CACX9lHu9OV9diIYCPvLp66tb1YQufu4z8uW724B6snhO7gY0G5hm4ZGSKhJu7C5MQuRujtAKyv9Rf6G+t2A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: d4a1fb7c697ffd9b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FNX86LFJq/Y116OYwhQ4i3eeZCPtd1ERpd12kq3u7VTytPexCK7mlfPabjTxFwCB3sO10EQT/FJKoGWQ9+xK7GDcMzh6KUiXZ6OXnV+kvdB3H0a5PP0C1+9YtswwqgDA3A1oDN0sZBj2Su9pnursseFm19Mvf5GAO4uFxbpy9TyMYQblEJOubcJHt51AGE/DyRUHPKO56YAC0OuuzSybds09YssTI8yXvytKBjpMLYhL65rjyOr8U6ZPL1Z8WHgcjxGHRDXjXrbYn/NW0lDj1LruWWX7uBG//BIa9Q1+maodmFtT4z4XXWVymk6DB5ny/dgtOrUMya3wDBauDaDHrg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lXArFAPv3OLfJYe5yu5owMpv8nCKDOluVtSJurcjWrs=;
 b=IPgl0oMCmOrux1K8jv6or0ejTQ3v9mAs7o1aymsxttX487SEvHrzfM0Q4NzlQ24n8OTwsCc/b48S0hYvvM3HZVlsfc42usYfdZtRBlYnzCw1gQ6eCirca4iUIVN/MK6oykv+d0GXlszuA9NQ2XeWih4IyuddrqlceN3g/EaohU7wVz0X9zeNjHS8OIIly9IhjDx4v6MuV3ESafFS8A4yUYt0PebPBjOkxKILX74TyAysMzbkEItwRLAmO7Acj6Q4zd3K7rId2I/D4pnsLcYcQAhe+LreQLJ0teZsn5lenq9Uywa9kC/3R6ItyKJ5o65SmjKDVxhI9flR0XK7KxqLsQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lXArFAPv3OLfJYe5yu5owMpv8nCKDOluVtSJurcjWrs=;
 b=y3qhKWMnPspyoGW07gH6EpH1dn5b7QxSL0MQTXJy3S3GXuFYRsXfAbRUJ/tS/3wFqJVs2Sf/taRq25/h/97ipV7CACX9lHu9OV9diIYCPvLp66tb1YQufu4z8uW724B6snhO7gY0G5hm4ZGSKhJu7C5MQuRujtAKyv9Rf6G+t2A=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZbSQ26+FC1T02dEGoq+zimN/6Ra8pOLQAgAAXa4A=
Date: Thu, 13 Apr 2023 14:35:44 +0000
Message-ID: <6DDCEF6B-F07B-44EA-83D0-33BED5EAC506@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-6-luca.fancellu@arm.com>
 <b1c77bdf-6979-83b6-f5e4-ac5b3e751a3d@xen.org>
In-Reply-To: <b1c77bdf-6979-83b6-f5e4-ac5b3e751a3d@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|GV1PR08MB8237:EE_|AM7EUR03FT016:EE_|VE1PR08MB5743:EE_
X-MS-Office365-Filtering-Correlation-Id: c4bf5996-285a-43e7-dff4-08db3c2c6515
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 fJyx/6sD+R+yHyMCsV1Hqojs0hfZyFtmoVLVyWnEOeytS9OdHCwPye1rbWvcj1b+JFEQldrdqcKmQlFXv+1wWxNhZXlcoyRCJ7w8byCdi6DBbQgByZO9OokHIDURkcvf4QOeDvrWnVzsG/XV+FXA4pCHiQRzEjNnKP5OYhNr4sp2FpO0jbcsEUeqsuQUmSfDuWiWpC/7Z1wyf2eluo6Im4ysLs54Xed3psKlc5kY1DcNmBQ2DZCPzdJq16NDsXLJX06OBATlxy9IQ5b0G8p/JnFHYS2Kpppv/zmAQH5EdIPUUGS7njXSxmx7e9rks1NsTK98wtr3wTeXZ6wWL0EZpTGXsdMbUrR/PVdx+Yj1RfPph4vUfUpmU96R/CeKCye0+EyZGV9gkF8gcKGZqCHlOyWo2dGUtOJJALyDEwEfvTCNXiklHmUt+GgnIN7tMifcrzlPIWDzxG15T3bb7/Lj/a2Itlu3a9M3LTyeJ7nCQM6kDzgG39F0LUgqCJK8iz0YLOXNK5y/yLw3TCl3j0wsSO52TT3QNzbYLPWZ1ZX4pnKBn2At57Uv4nXu4hYdu0+vi0l3lCZrIPWzQ8eWmFiPd28LstlqDJ3+mE7DvIzW1cV+NB03qRVwkt8i/F2KNlXtsbh7vUB4YDAoV5rdyMl6uQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(366004)(346002)(376002)(39860400002)(451199021)(316002)(8676002)(66446008)(4326008)(66556008)(66476007)(66946007)(71200400001)(54906003)(64756008)(6916009)(478600001)(76116006)(41300700001)(91956017)(33656002)(36756003)(86362001)(2616005)(6512007)(6506007)(26005)(6486002)(53546011)(83380400001)(2906002)(5660300002)(8936002)(38070700005)(38100700002)(122000001)(186003)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <D531D9FBF1F8D246BEF791C4ECBF3269@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8237
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9dc1d2e4-e8c7-4075-7d0b-08db3c2c5d4a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	wkVTt8HInfGx0dQQH5b6DlYJ9pPxtwD9wboMeCFLs5T9vSQeBFrz4iyGf5kh4pIt7jXgt5xrn7g0WWL35SDu1LZesZI88Zv84kmJMq/Tj7W4jx/kmqCrAndsoMCbo7INvbbiyQSOr9vQaGeX8nn1wFfpeB/Im0H7rdYxTCMfyzB/DX2HQZ2VGdvEc83pqAN5YZu0ttU6hCm/bPjqc46iMiXoaQxBzw0Ye4HLxj0OpWPOqki+NQkkebCc2D9UVPjLHzSNuK3LgrGGNw6QWJTK0dFx8SkwWQKX1vOYsI8b/w3S3eiLy9JR9lapb8mTya/lxNt2Agxil+/qdJxngkSf9vBA5f92SpQRoe97RrsAwIFgs2tvPsoGrt4sfFiPPGdX0rl0CTKlyUyl92Z3oe6dGKeVnEj29eAbODOnP7IGQrXeWlYBIIcPGOXDOZZFPNdISvuXlY5GkPXGU7m/gP/JpHTw/30NKZIRDOMnI0d8t19gqyz5vffgDqBeD8IldRyTikqH5F8nJo5RDkEfUfx35C/e7Dd/wHXMjtsAQmsls8HcMAi9UNT+ZWLGyrMmPkzazcIkw/aXGdK/o6ggSB+qDmMx5KFF/moL4AUI18OIh5qPA8jfOFZ9L+iZkzMkgZ2MC5anFATLJvPpWBo9sR9lcfJpni30Dg0SfBvUlGSuCXoIe3dRDiDt7WtB87ewZ9tReRbVg36IGIIipxch5g2kEg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(376002)(136003)(39850400004)(451199021)(36840700001)(46966006)(40470700004)(2906002)(6862004)(8676002)(70586007)(70206006)(8936002)(5660300002)(478600001)(41300700001)(33656002)(40460700003)(316002)(40480700001)(83380400001)(82740400003)(336012)(36756003)(54906003)(4326008)(186003)(86362001)(6486002)(6506007)(26005)(81166007)(53546011)(6512007)(2616005)(36860700001)(82310400005)(356005)(107886003)(47076005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 14:35:57.4435
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c4bf5996-285a-43e7-dff4-08db3c2c6515
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5743


> On 13 Apr 2023, at 14:11, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 12/04/2023 10:49, Luca Fancellu wrote:
>> Save/restore context switch for SVE, allocate memory to contain
>> the Z0-31 registers whose length is maximum 2048 bits each and
>> FFR who can be maximum 256 bits, the allocated memory depends on
>> how many bits is the vector length for the domain and how many bits
>> are supported by the platform.
>> Save P0-15 whose length is maximum 256 bits each, in this case the
>> memory used is from the fpregs field in struct vfp_state,
>> because V0-31 are part of Z0-31 and this space would have been
>> unused for SVE domain otherwise.
>> Create zcr_el{1,2} fields in arch_vcpu, initialise zcr_el2 on vcpu
>> creation given the requested vector length and restore it on
>> context switch, save/restore ZCR_EL1 value as well.
>> Remove headers from sve.c that are already included using
>> xen/sched.h.
> I dislike this because ...
> 
>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>> index 78f7482619da..5485648850a0 100644
>> --- a/xen/arch/arm/arm64/sve.c
>> +++ b/xen/arch/arm/arm64/sve.c
>> @@ -5,14 +5,29 @@
>>   * Copyright (C) 2022 ARM Ltd.
>>   */
>> -#include <xen/types.h>
>> -#include <asm/cpufeature.h>
> 
> ... it is not entirely obvious that sched.h will import asm/cpufeatures.h. This could easily change in the future and would only require us to re-add those includes.

Ok I will reintroduce #include <asm/cpufeature.h>, do I understand correctly that is the only header you would like me to retain?

> 
>> +#include <xen/sched.h>
>> +#include <xen/sizes.h>
>>   #include <asm/arm64/sve.h>
>> -#include <asm/arm64/sysregs.h>
>> -#include <asm/processor.h>
>> -#include <asm/system.h>
>>  extern unsigned int sve_get_hw_vl(void);
>> +extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
>> +extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
>> +                         int restore_ffr);
>> +
>> +static inline unsigned int sve_zreg_ctx_size(unsigned int vl)
>> +{
>> +    /*
>> +     * Z0-31 registers size in bytes is computed from VL that is in bits, so VL
>> +     * in bytes is VL/8.
>> +     */
>> +    return (vl / 8U) * 32U;
>> +}
>> +
>> +static inline unsigned int sve_ffrreg_ctx_size(unsigned int vl)
>> +{
>> +    /* FFR register size is VL/8, which is in bytes (VL/8)/8 */
>> +    return (vl / 64U);
>> +}
>>  register_t compute_max_zcr(void)
>>  {
>> @@ -60,3 +75,46 @@ unsigned int get_sys_vl_len(void)
>>      return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
>>             SVE_VL_MULTIPLE_VAL;
>>  }
>> +
>> +int sve_context_init(struct vcpu *v)
>> +{
>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>> +    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
>> +                             sve_ffrreg_ctx_size(sve_vl_bits),
>> +                             L1_CACHE_BYTES);
>> +
>> +    if ( !ctx )
>> +        return -ENOMEM;
>> +
>> +    v->arch.vfp.sve_context = ctx;
>> +
>> +    return 0;
>> +}
>> +
>> +void sve_context_free(struct vcpu *v)
>> +{
>> +    xfree(v->arch.vfp.sve_context);
>> +}
> 
> Please use XFREE().

Sure I'll do

> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 15:00:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 15:00:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520826.808856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmyQx-0008NR-D9; Thu, 13 Apr 2023 15:00:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520826.808856; Thu, 13 Apr 2023 15:00:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmyQx-0008NK-AU; Thu, 13 Apr 2023 15:00:23 +0000
Received: by outflank-mailman (input) for mailman id 520826;
 Thu, 13 Apr 2023 15:00:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9nQ=AE=citrix.com=prvs=46097603d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pmyQw-0008NE-6T
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 15:00:22 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e76090a4-da0b-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 17:00:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e76090a4-da0b-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681398019;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=YHcnmoXK+tJ5/GNdoQIGjtqmj4LyJsQsX2kB3uia3M0=;
  b=WL5vqUB2MMHn0jwqaKNt2JLkvJXrN3LtoiDKAT9XqgTT5qUI/rQVSdvd
   hlxFaqTVdr90sxnnDiyryQzLU24q41MNuIyFnOkB7LXn8MT6SzkS1Jlbb
   xbIrYGO460y2UWsx4wTV8Nzl6h1PPi7Pq7zXfuYcndAHM2t0T3ZOsaT50
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 104748591
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:a0Tiqaz/316ncZtG0qp6t+dNxirEfRIJ4+MujC+fZmUNrF6WrkUBm
 zYZD22DPKuJNGH8Kdl3OYTj8xgO6JOBx9M3SFA5+SAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UIHUMja4mtC5QRiPKsT5zcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KU0Vr
 fxDdW8TVxeKhMWN7Zjhbfhnm8t2eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+BgHXlfiIeg1WSvactuEDYzRBr0airO93QEjCPbZwNwhnE9
 j+XpgwVBDkRHYOn6iOJwE6yoazJhgbiXropOOOBo6sCbFq7mTVIVUx+uUGAiem0jAuyVsxSL
 2QQ+zEytu4i+UqzVN7/Uhak5nmesXY0efBdDuk74wGl0bfP7kCSAW1sZiFFQMwrsokxXzNC6
 7OSt4q3X3o16uTTEC/DsO7O9lteJBT5M0cuPncEFlZa/eDkqYIUtT/lFPFyG7O624id9S7L/
 9yakMQvr+xN3ZdRifvkrAyvbyGE/caQEFNsjunDdif8t14iOtb4D2C9wQKDhcusOrp1WbVoU
 JIsv8GFpN4DApiW/MBmaLVcRer5jxpp3dC1vLKOI3XC3273k5JbVdoMiAyS3W8wWir+RRfnY
 VXIpSRa74JJMX2hYMdfOtzhUp9wkfW5TYu7CJg4i+aihbAoLWe6ENxGPxbMjwgBbmB3+U3AB
 XtrWZn1VitLYUiW5DG3W/0cwdcW+8zK/kuKHcqT503+idK2PSfFIYrpxXPTN4jVGovf+16Lm
 zueXuPXoyhivBrWOHOPrdVPdAhUdhDWx/ne8qRqSwJKGSI+cElJNhMb6ehJl1BN90iNqtr1w
 w==
IronPort-HdrOrdr: A9a23:ObCwBaGfzMtUP9j1pLqE0MeALOsnbusQ8zAXP0AYc3Jom6uj5r
 mTdZUgpHnJYVkqOE3I9ertBEDEewK4yXcX2/h3AV7BZniEhILAFugLhuGO/9SjIVybygc079
 YZT0EUMrzN5DZB4voSmDPIceod/A==
X-Talos-CUID: =?us-ascii?q?9a23=3APkA5vWuRpWH8pP6eb+LplTm76IsJKSXdlyqPPXa?=
 =?us-ascii?q?/AGVOc5C/E3iB875rxp8=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3AEK7gQw8cWNudRHMjvmL3i+iQf8xBvJyNEUw/rYs?=
 =?us-ascii?q?h+M+fEDV2Awyx0g3iFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,194,1677560400"; 
   d="scan'208";a="104748591"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2] x86/hvm: Disallow CR0.PG 1->0 transitions when CS.L=1
Date: Thu, 13 Apr 2023 16:00:09 +0100
Message-ID: <20230413150009.3145462-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230412183519.2996902-1-andrew.cooper3@citrix.com>
References: <20230412183519.2996902-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The Long Mode consistency checks exist to "ensure that the processor does not
enter an undefined mode or state that results in unpredictable behavior".  APM
Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
preventing the OS from trying to exit Long mode while in 64bit mode.  This
could leave the CPU in Protected Mode with an %rip above the 4G boundary.

Experimentally, AMD CPUs really do permit this state transition.  An OS which
tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
to be going on behind the scenes ought to result in sane continued execution.

Furthermore, right from the very outset, the APM Vol2 14.7 "Leaving Long Mode"
section instructs people to switch to a compatibility mode segment before
clearing CR0.PG, which does clear out the upper bits in %rip.  This is
further backed up by Vol2 Figure 1-6 "Operating Modes of the AMD64
Architecture".

Either way, this appears to have been a genuine oversight in the AMD64 spec.

Intel, on the other hand, rejects this state transition with #GP.

Between revision 71 (Nov 2019) and 72 (May 2020) of SDM Vol3, a footnote to
4.1.2 "Paging-Mode Enable" was altered from:

  If CR4.PCIDE= 1, an attempt to clear CR0.PG causes a general-protection
  exception (#GP); software should clear CR4.PCIDE before attempting to
  disable paging.

to

  If the logical processor is in 64-bit mode or if CR4.PCIDE= 1, an attempt to
  clear CR0.PG causes a general-protection exception (#GP). Software should
  transition to compatibility mode and clear CR4.PCIDE before attempting to
  disable paging.

which acknowledges this corner case, but there doesn't appear to be any other
discussion even in the relevant Long Mode sections.

So it appears that Intel spotted and addressed the corner case in IA-32e mode,
but were 15 years late to document it.

Xen was written to the AMD spec, and misses the check.  Follow the Intel
behaviour, because it is more sensible and avoids hitting a VMEntry failure.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Restrict to when Long Mode is enabled.
---
 xen/arch/x86/hvm/hvm.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1454c1732d95..c63c7d4d6bcf 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2340,6 +2340,21 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
     }
     else if ( !(value & X86_CR0_PG) && (old_value & X86_CR0_PG) )
     {
+        struct segment_register cs;
+
+        hvm_get_segment_register(v, x86_seg_cs, &cs);
+
+        /*
+         * Intel documents a #GP fault in this case, and the VMEntry checks
+         * reject it as an invalid state.  AMD permits the transition, and
+         * hits SHUTDOWN immediately thereafter.  Follow the Intel behaviour.
+         */
+        if ( (v->arch.hvm.guest_efer & EFER_LME) && cs.l )
+        {
+            HVM_DBG_LOG(DBG_LEVEL_1, "Guest attempts to clear CR0.PG while CS.L=1");
+            return X86EMUL_EXCEPTION;
+        }
+
         if ( hvm_pcid_enabled(v) )
         {
             HVM_DBG_LOG(DBG_LEVEL_1, "Guest attempts to clear CR0.PG "
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 15:00:52 2023
Date: Thu, 13 Apr 2023 17:00:33 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Message-ID: <ZDgZEZIG89oW6rEw@Air-de-Roger>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger>
 <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger>
 <87v8i0wyv0.fsf@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <87v8i0wyv0.fsf@epam.com>
MIME-Version: 1.0

On Wed, Apr 12, 2023 at 09:54:12PM +0000, Volodymyr Babchuk wrote:
> 
> Hi Roger,
> 
> First of all, I want to provide a link [1] to the RFC series where I
> tried a total PCI locking rework. After discussing with Jan, it became
> clear to me that the task is much harder than I anticipated. So, it was
> decided to move in smaller steps. The first step is to make vPCI code
> independent of the global PCI lock. Actually, this is not the first
> try: Oleksandr Andrushchenko tried to use an r/w lock for this [2], but
> Jan suggested using refcounting instead of r/w locks, and I liked the
> idea. So, this is why you are seeing this patch series.

Thanks, I've been on leave for long periods recently and I've missed
some of the series.

> 
> 
> Roger Pau Monné <roger.pau@citrix.com> writes:
> 
> > On Tue, Apr 11, 2023 at 11:41:04PM +0000, Volodymyr Babchuk wrote:
> >> 
> >> Hi Roger,
> >> 
> >> Roger Pau Monné <roger.pau@citrix.com> writes:
> >> 
> >> > On Tue, Mar 14, 2023 at 08:56:29PM +0000, Volodymyr Babchuk wrote:
> >> >> Prior to this change, lifetime of pci_dev objects was protected by global
> >> >> pcidevs_lock(). Long-term plan is to remove this log, so we need some
> >> >                                                    ^ lock
> >> >
> >> > I wouldn't say remove, as one way or another we need a lock to protect
> >> > concurrent accesses.
> >> >
> >> 
> >> I'll write "replace this global lock with a couple of more granular
> >> locks", if this is okay with you.
> >> 
> >> >> other mechanism to ensure that those objects will not disappear under
> >> >> the feet of code that accesses them. Reference counting is a good
> >> >> choice as it provides an easy-to-comprehend way to control object
> >> >> lifetime.
> >> >> 
> >> >> This patch adds two new helper functions: pcidev_get() and
> >> >> pcidev_put(). pcidev_get() will increase reference counter, while
> >> >> pcidev_put() will decrease it, destroying object when counter reaches
> >> >> zero.
> >> >> 
> >> >> pcidev_get() should be used only when you already have a valid pointer
> >> >> to the object or you are holding a lock that protects one of the
> >> >> lists (domain, pseg or ats) that store pci_dev structs.
> >> >> 
> >> >> pcidev_get() is rarely used directly, because there already are
> >> >> functions that will provide a valid pointer to a pci_dev struct:
> >> >> pci_get_pdev() and pci_get_real_pdev(). They will lock the appropriate
> >> >> list, find the needed object and increase its reference counter before
> >> >> returning to the caller.
> >> >> 
> >> >> Naturally, pcidev_put() should be called after finishing working with
> >> >> a received object. This is the reason why this patch has so many
> >> >> pcidev_put()s and so few pcidev_get()s: existing calls to the
> >> >> pci_get_*() functions now increase the reference counter
> >> >> automatically, we just need to decrease it back when we are finished.
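A minimal sketch of the get/put discipline described in the quoted commit message. The plain int refcount and `freed` flag are illustrative simplifications, not the series' actual implementation: real code would use an atomic type and actually release the memory and list entries.

```c
#include <stdbool.h>

/* Simplified stand-in for struct pci_dev: the refcount is a plain int
 * here, whereas real code would use an atomic type. */
struct pci_dev {
    int refcount;
    bool freed;   /* illustration only; real code frees the memory */
};

/* Stand-in for the real free path (list removal, xfree(), ...). */
static void free_pdev(struct pci_dev *pdev)
{
    pdev->freed = true;
}

static void pcidev_get(struct pci_dev *pdev)
{
    pdev->refcount++;
}

/* Drop one reference; the last put frees the object. */
static void pcidev_put(struct pci_dev *pdev)
{
    if ( --pdev->refcount == 0 )
        free_pdev(pdev);
}
```

With this discipline, every pointer returned by a pci_get_*() lookup is balanced by exactly one pcidev_put() when the caller is done with the object.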
> >> >
> >> > After looking a bit into this, I would like to ask whether the need to
> >> > increase the refcount for each use of a pdev has been considered.
> >> >
> >> 
> >> This is how Linux uses reference counting. It decreases cognitive load
> >> and the chance of error, as there is a simple set of rules which you
> >> follow.
> >> 
> >> > For example I would consider the initial alloc_pdev() to take a
> >> > refcount, and then pci_remove_device() _must_ be the function that
> >> > removes the last refcount, so that it can return -EBUSY otherwise (see
> >> > my comment below).
> >> 
> >> I tend to disagree there, as this ruins the very idea of reference
> >> counting. We can't know who else holds a reference right now. Okay, we
> >> might know, but this requires an additional lock to serialize
> >> accesses, which, in turn, makes the refcount unneeded.
> >
> > In principle pci_remove_device() must report whether the device is
> > ready to be physically removed from the system, so it must return
> > -EBUSY if there are still users accessing the device.
> >
> > A user would use PHYSDEVOP_manage_pci_remove to signal Xen it's trying
> > to physically remove a PCI device from a system, so we must ensure
> > that when the hypervisor returns success the device is ready to be
> > physically removed.
> >
> > Or at least that's my understanding of how this should work.
> >
> 
> As far as I can see, this is not how it is implemented right
> now. pci_remove_device() does not check whether the device is assigned
> to a domain. It does not check if there are still users accessing the
> device. It just relies on the global PCI lock to ensure that the device
> is removed in an orderly manner.

Right, the expectation is that any path inside of the hypervisor using
the device will hold the pcidevs lock, and thus by holding it while
removing we assert that no users (inside the hypervisor) are left.

I don't think we have been very consistent about the usage of the
pcidevs lock, and hence most of this is likely broken.  Hopefully
removing a PCI device from a system is a very uncommon operation.

> My patch series has no intention to change this behavior. All I want
> to achieve is to allow vPCI code to access struct pdev objects
> without holding the global PCI lock.

That's all fine, but we need to make sure it doesn't make things worse
than they currently are, and ideally it should make things easier.

That's why I would like to understand exactly what's the purpose of
the refcount, and how it should be used.  The usage of the refcount
should be compatible with the intended behaviour of
pci_remove_device(), regardless of whether the current implementation
is not correct.  We don't want to be piling up more broken stuff on
top of an already broken implementation.

> >> >
> >> > That makes me wonder if for example callers of pci_get_pdev(d, sbdf)
> >> > do need to take an extra refcount, because such access is already
> >> > protected from the pdev going away by the fact that the device is
> >> > assigned to a guest.  But maybe it's too much work to separate users
> >> > of pci_get_pdev(d, ...); vs pci_get_pdev(NULL, ...);.
> >> >
> >> > There's also a window where the refcount has dropped to 0 and the
> >> > destruction function is called, but a concurrent thread could still
> >> > attempt to take a reference to the pdev?
> >> 
> >> The last pcidev_put() would be called by pci_remove_device(), after
> >> removing it from all lists. This should prevent other threads from
> >> obtaining a valid reference to the pdev.
> >
> > What if a concurrent user has taken a reference to the object before
> > pci_remove_device() has removed the device from the lists, and still
> > holds it when pci_remove_device() performs the supposedly last
> > pcidev_put() call?
> 
> Well, let's consider vPCI code as this concurrent user, for
> example. First, it will try to take vpci->lock. Depending on where
> pci_remove_device() is at that point, there will be three cases:
> 
> 1. The lock is taken before vpci_remove_device() takes the lock. In this
> case vPCI code works as always.
> 
> 2. It tries to take the lock when vpci_remove_device() has already taken
> it. In this case we fall through to the next case:
> 
> 3. The lock is taken after vpci_remove_device() has finished its work. In
> this case vPCI code sees that it was called for a device in an invalid
> state and exits.

For 2) and 3) you will hit a use-after-free, as the lock (vpci->lock)
would have been freed by vpci_remove_device() while a concurrent user
is waiting on vpci_remove_device() to release the lock.

I'm not sure how the user sees that the device is in an invalid state,
because it was waiting on a lock (vpci->lock) that has been removed
under its feet.

This is an existing issue not made worse by the refcounting, but it's
not a great example.
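One way to close that hole, sketched here purely as an illustration (none of these names are the series' actual API, and the bool "lock" is a single-threaded stand-in for a real spinlock), is to refcount the object that embeds the lock: the remove path only marks the object dead under the lock, and whoever performs the final put does the free, so a waiter that acquires the lock late observes a flag instead of touching freed memory.

```c
#include <stdbool.h>

struct vpci_stub {
    int refcount;
    bool locked;    /* stand-in for vpci->lock */
    bool removed;   /* set by the remove path while holding the lock */
    bool freed;
};

/* Drop one reference; the last put (whichever path performs it) frees. */
static void stub_put(struct vpci_stub *v)
{
    if ( --v->refcount == 0 )
        v->freed = true;   /* safe: nobody else holds a reference */
}

/* Remove path: mark the object dead under the lock, then drop its
 * reference.  The struct (and the lock inside it) stays allocated
 * until the last reference goes away. */
static void stub_remove(struct vpci_stub *v)
{
    v->locked = true;
    v->removed = true;
    v->locked = false;
    stub_put(v);
}

/* User path: already holds a reference, so even after losing the race
 * with stub_remove() it dereferences live memory and merely observes
 * removed == true. */
static bool stub_access(struct vpci_stub *v)
{
    bool ok;

    v->locked = true;
    ok = !v->removed;
    v->locked = false;
    stub_put(v);
    return ok;
}
```

The key design point is that the lifetime of the lock is tied to the refcount, not to the remove operation, which is exactly what keeps case 2) and 3) above from dereferencing freed memory.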

> 
> As you can see, there is no case where vPCI code is running on a device
> which was removed.
> 
> After vPCI code drops the refcount, the pdev object will be freed once
> and for all. Please note that I am talking about the pdev object here,
> not about the PCI device, because the PCI device (as a high-level
> entity) was destroyed by pci_remove_device(). The refcount is needed
> just for the last clean-up operations.

Right, but pci_remove_device() will return success even when there are
some users holding a refcount to the device, which is IMO undesirable.

As I understand it the purpose of pci_remove_device() is that once it
returns success the device can be physically removed from the system.
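The stricter semantics suggested here could be sketched as a hypothetical helper (`pcidev_put_final` does not exist in the series; a real implementation would need an atomic compare-and-swap from 1 to 0 to close the race with concurrent pcidev_get() callers):

```c
#include <errno.h>
#include <stdbool.h>

struct pdev_stub {
    int refcount;
    bool freed;
};

/*
 * Drop the caller's reference only if it is the last one, so that a
 * successful return means no other user holds the device and it can be
 * physically removed.  A real implementation would use an atomic
 * compare-and-swap from 1 to 0 instead of this non-atomic check.
 */
static int pcidev_put_final(struct pdev_stub *pdev)
{
    if ( pdev->refcount != 1 )
        return -EBUSY;   /* some other user still holds a reference */

    pdev->refcount = 0;
    pdev->freed = true;
    return 0;
}
```

pci_remove_device() could then propagate the helper's result, so that returning 0 genuinely means no hypervisor path still holds the device.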

> >
> >> >
> >> >>          sbdf.devfn &= ~stride;
> >> >>          pdev = pci_get_pdev(NULL, sbdf);
> >> >>          if ( pdev && stride != pdev->phantom_stride )
> >> >> +        {
> >> >> +            pcidev_put(pdev);
> >> >>              pdev = NULL;
> >> >> +        }
> >> >>      }
> >> >>  
> >> >>      return pdev;
> >> >> @@ -548,13 +526,18 @@ struct pci_dev *pci_get_pdev(const struct domain *d, pci_sbdf_t sbdf)
> >> >>          list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
> >> >>              if ( pdev->sbdf.bdf == sbdf.bdf &&
> >> >>                   (!d || pdev->domain == d) )
> >> >> +            {
> >> >> +                pcidev_get(pdev);
> >> >>                  return pdev;
> >> >> +            }
> >> >>      }
> >> >>      else
> >> >>          list_for_each_entry ( pdev, &d->pdev_list, domain_list )
> >> >>              if ( pdev->sbdf.bdf == sbdf.bdf )
> >> >> +            {
> >> >> +                pcidev_get(pdev);
> >> >>                  return pdev;
> >> >> -
> >> >> +            }
> >> >>      return NULL;
> >> >>  }
> >> >>  
> >> >> @@ -663,7 +646,10 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
> >> >>                              PCI_SBDF(seg, info->physfn.bus,
> >> >>                                       info->physfn.devfn));
> >> >>          if ( pdev )
> >> >> +        {
> >> >>              pf_is_extfn = pdev->info.is_extfn;
> >> >> +            pcidev_put(pdev);
> >> >> +        }
> >> >>          pcidevs_unlock();
> >> >>          if ( !pdev )
> >> >>              pci_add_device(seg, info->physfn.bus, info->physfn.devfn,
> >> >> @@ -818,7 +804,9 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
> >> >>              if ( pdev->domain )
> >> >>                  list_del(&pdev->domain_list);
> >> >>              printk(XENLOG_DEBUG "PCI remove device %pp\n", &pdev->sbdf);
> >> >> -            free_pdev(pseg, pdev);
> >> >> +            list_del(&pdev->alldevs_list);
> >> >> +            pdev_msi_deinit(pdev);
> >> >> +            pcidev_put(pdev);
> >> >
> >> > Hm, I think here we want to make sure that the device has been freed,
> >> > or else you would have to return -EBUSY to the calls to notify that
> >> > the device is still in use.
> >> 
> >> Why? As far as I can see, the pdev object may still potentially be
> >> accessed by some other CPU right now. So the pdev object will be freed
> >> after the last reference is dropped. As it is already removed from all
> >> the lists, pci_dev_get() will not find it anymore.
> >> 
> >> Actually, I can't see how this can happen in reality, as VPCI, MSI and
> >> IOMMU are already deactivated for this device. So, no one would touch it.
> >
> > Wouldn't it be possible for a concurrent user to hold a reference from
> > before the device has been 'deactivated'?
> >
> 
> Yes, it can hold a reference. This is why we need additional locking to
> ensure that, say, pci_cleanup_msi() does not race with the rest of the
> MSI code. Right now this is ensured by the global PCI lock.
> 
> >> >
> >> > I think we need an extra pcidev_put_final() or similar that can be
> >> > used in pci_remove_device() to assert that the device has been
> >> > actually removed.
> >> 
> >> Will something break if we don't do this? I can't see how this can
> >> happen.
> >
> > As mentioned above, once pci_remove_device() returns 0 the admin
> > should be capable of physically removing the device from the system.
> >
> 
> This patch series does not alter this requirement. The admin is still
> capable of physically removing the device from the system after a
> successful call to pci_remove_device().

Indeed, but there might be users in the hypervisor still holding a
reference to the pdev.

> >> >> -static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
> >> >> +static int assign_device(struct domain *d, struct pci_dev *pdev, u32 flag)
> >> >>  {
> >> >>      const struct domain_iommu *hd = dom_iommu(d);
> >> >> -    struct pci_dev *pdev;
> >> >> +    uint8_t devfn;
> >> >>      int rc = 0;
> >> >>  
> >> >>      if ( !is_iommu_enabled(d) )
> >> >> @@ -1422,10 +1412,11 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
> >> >>  
> >> >>      /* device_assigned() should already have cleared the device for assignment */
> >> >>      ASSERT(pcidevs_locked());
> >> >> -    pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
> >> >>      ASSERT(pdev && (pdev->domain == hardware_domain ||
> >> >>                      pdev->domain == dom_io));
> >> >>  
> >> >> +    devfn = pdev->devfn;
> >> >> +
> >> >>      /* Do not allow broken devices to be assigned to guests. */
> >> >>      rc = -EBADF;
> >> >>      if ( pdev->broken && d != hardware_domain && d != dom_io )
> >> >> @@ -1460,7 +1451,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
> >> >>   done:
> >> >>      if ( rc )
> >> >>          printk(XENLOG_G_WARNING "%pd: assign (%pp) failed (%d)\n",
> >> >> -               d, &PCI_SBDF(seg, bus, devfn), rc);
> >> >> +               d, &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
> >> >>      /* The device is assigned to dom_io so mark it as quarantined */
> >> >>      else if ( d == dom_io )
> >> >>          pdev->quarantine = true;
> >> >> @@ -1595,6 +1586,9 @@ int iommu_do_pci_domctl(
> >> >>          ASSERT(d);
> >> >>          /* fall through */
> >> >>      case XEN_DOMCTL_test_assign_device:
> >> >> +    {
> >> >> +        struct pci_dev *pdev;
> >> >> +
> >> >>          /* Don't support self-assignment of devices. */
> >> >>          if ( d == current->domain )
> >> >>          {
> >> >> @@ -1622,26 +1616,36 @@ int iommu_do_pci_domctl(
> >> >>          seg = machine_sbdf >> 16;
> >> >>          bus = PCI_BUS(machine_sbdf);
> >> >>          devfn = PCI_DEVFN(machine_sbdf);
> >> >> -
> >> >>          pcidevs_lock();
> >> >> -        ret = device_assigned(seg, bus, devfn);
> >> >> +        pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
> >> >> +        if ( !pdev )
> >> >> +        {
> >> >> +            printk(XENLOG_G_INFO "%pp non-existent\n",
> >> >> +                   &PCI_SBDF(seg, bus, devfn));
> >> >> +            ret = -EINVAL;
> >> >> +            break;
> >> >> +        }
> >> >> +
> >> >> +        ret = device_assigned(pdev);
> >> >>          if ( domctl->cmd == XEN_DOMCTL_test_assign_device )
> >> >>          {
> >> >>              if ( ret )
> >> >>              {
> >> >> -                printk(XENLOG_G_INFO "%pp already assigned, or non-existent\n",
> >> >> +                printk(XENLOG_G_INFO "%pp already assigned\n",
> >> >>                         &PCI_SBDF(seg, bus, devfn));
> >> >>                  ret = -EINVAL;
> >> >>              }
> >> >>          }
> >> >>          else if ( !ret )
> >> >> -            ret = assign_device(d, seg, bus, devfn, flags);
> >> >> +            ret = assign_device(d, pdev, flags);
> >> >> +
> >> >> +        pcidev_put(pdev);
> >> >
> >> > I would think you need to keep the refcount here if ret == 0, so that
> >> > the device cannot be removed while assigned to a domain?
> >> 
> >> It looks like we perceive the function of the refcount in different
> >> ways. For me, it is a mechanism to guarantee that if we have a valid
> >> pointer to an object, this object will not disappear under our
> >> feet. This is the main function of krefs in the Linux kernel: if your
> >> code holds a reference to an object, you can be sure that this object
> >> still exists in memory.
> >> 
> >> On the other hand, it seems that you are considering this refcount as
> >> a usage counter for the actual PCI device, not the "struct pdev" that
> >> represents it. Those are two related things, but not the same. So, I
> >> can see why you are suggesting taking an additional reference there.
> >> But to me this looks unnecessary: the very first refcount is obtained
> >> in pci_add_device() and there is the corresponding function
> >> pci_remove_device() that will drop this refcount. So, for me, if an
> >> admin wants to remove a PCI device which is assigned to a domain, they
> >> can do this just as they were able to do before these patches.
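The lifecycle described above — first reference taken in pci_add_device(), transient references around lookups, initial reference dropped in pci_remove_device() — can be sketched as a small userspace model. Everything below is invented for illustration (the names only mirror Xen's); it is not hypervisor code:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for struct pci_dev: just a refcount and a "freed" flag
 * marking the point where the real object's memory would be released. */
struct pdev_model {
    unsigned int refcnt;
    bool freed;
};

/* Lookup helpers take a transient reference... */
static void model_get(struct pdev_model *pdev)
{
    pdev->refcnt++;
}

/* ...and drop it when done; the last put releases the object. */
static void model_put(struct pdev_model *pdev)
{
    if ( --pdev->refcnt == 0 )
        pdev->freed = true;
}

/* pci_add_device() obtains the very first reference... */
static void model_add(struct pdev_model *pdev)
{
    pdev->refcnt = 1;
    pdev->freed = false;
}

/* ...and pci_remove_device() drops that initial reference. */
static void model_remove(struct pdev_model *pdev)
{
    model_put(pdev);
}

/* Returns 0 when the object survives an admin-initiated removal for as
 * long as a lookup still holds a reference, and is freed afterwards. */
int lifecycle_demo(void)
{
    struct pdev_model dev;

    model_add(&dev);        /* refcnt == 1 */
    model_get(&dev);        /* concurrent lookup: refcnt == 2 */
    model_remove(&dev);     /* admin removes device: refcnt == 1 */
    if ( dev.freed )        /* must stay alive for the lookup holder */
        return -1;
    model_put(&dev);        /* lookup done: refcnt == 0, now freed */
    return dev.freed ? 0 : -1;
}
```

Note that in this model removal succeeds immediately; only the memory release is deferred until the last holder drops its reference.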
> >
> > This is all fine, but needs to be stated in the commit message.
> >
> 
> Sure, I will add this.
> 
> >> The main value of introducing the refcount is being able to access
> >> pdev objects without holding the global pcidevs_lock(). This does not
> >> mean that you don't need locking at all. But it allows you to use
> >> pdev->lock (which does not exist in this series, but was introduced in
> >> an RFC earlier), or vpci->lock, or any other subsystem->lock.
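Under that scheme, a caller would hold the global lock only long enough to find the device and take a reference, then synchronise on the per-device lock. A hypothetical userspace sketch of the pattern (pdev->lock does not exist in this series, and all names below are made up; locks are modelled as plain flags just to show the ordering):

```c
#include <assert.h>
#include <stdbool.h>

/* Locks modelled as flags purely to show the ordering of operations;
 * real code would use actual spinlocks. */
struct lock_model { bool held; };

static void lock(struct lock_model *l)   { l->held = true; }
static void unlock(struct lock_model *l) { l->held = false; }

static struct lock_model pcidevs_lock_model;   /* the global lock */

struct pdev_model {
    struct lock_model lock;  /* the hypothetical pdev->lock */
    unsigned int refcnt;
    unsigned int uses;       /* work performed under pdev->lock */
};

static struct pdev_model the_dev = { .refcnt = 1 };

/* Lookup: hold the global lock only long enough to take a reference. */
static struct pdev_model *model_get_pdev(void)
{
    struct pdev_model *pdev;

    lock(&pcidevs_lock_model);
    pdev = &the_dev;
    pdev->refcnt++;
    unlock(&pcidevs_lock_model);

    return pdev;
}

/* Use: the global lock is no longer held.  The reference guarantees the
 * object stays in memory, so pdev->lock alone protects its state. */
static int model_use_pdev(struct pdev_model *pdev)
{
    int rc;

    lock(&pdev->lock);
    pdev->uses++;
    rc = pcidevs_lock_model.held ? -1 : 0;  /* global lock NOT needed */
    unlock(&pdev->lock);

    return rc;
}

static void model_put_pdev(struct pdev_model *pdev)
{
    lock(&pcidevs_lock_model);
    pdev->refcnt--;
    unlock(&pcidevs_lock_model);
}

/* Returns 0 when the get/use/put sequence works without the global lock
 * held during the "use" phase and restores the initial refcount. */
int locking_demo(void)
{
    struct pdev_model *pdev = model_get_pdev();
    int rc = model_use_pdev(pdev);

    model_put_pdev(pdev);

    return (rc == 0 && pdev->refcnt == 1 && pdev->uses == 1) ? 0 : -1;
}
```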
> >
> > I guess I was missing this other bit about introducing a
> > per-device lock; would it be possible to bundle all this together into
> > a single patch series?
> 
> As I said at the top of this email, it was tried. You can check the RFC at [1].
> 
> >
> > It would be good to place this change together with any other locking
> > related change that you have pending.
> 
> Honestly, my main goal is to fix the current issues with vPCI, so ARM
> can move forward on adding PCI support for the platform. So, I am
> focusing on this right now.

Thanks, we need to be careful however not to accumulate more
band-aids on top just to work around the fact that the locking we have
around the PCI devices is not suitable.

I think it's important to keep all the usages of the pci_dev struct in
mind when designing a solution.

Overall it seems like this might help vPCI on Arm. I think the only
major request I have is the one related to pci_remove_device() only
returning success when there are no references left.
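That request could be modelled roughly like this; a hypothetical sketch (not the actual Xen function) of pci_remove_device() refusing to complete while anything beyond the initial reference is outstanding:

```c
#include <assert.h>
#include <errno.h>

struct pdev_model {
    unsigned int refcnt;   /* 1 == only the initial add-time reference */
};

/* Hypothetical variant of pci_remove_device(): fail with -EBUSY while
 * any caller beyond the initial reference still holds the device. */
static int model_pci_remove_device(struct pdev_model *pdev)
{
    if ( pdev->refcnt > 1 )
        return -EBUSY;

    pdev->refcnt = 0;      /* drop the initial reference; object goes */
    return 0;
}

/* Returns 0 when removal is refused while an extra reference exists and
 * succeeds once it has been dropped. */
int remove_demo(void)
{
    struct pdev_model dev = { .refcnt = 2 };  /* e.g. assigned device */

    if ( model_pci_remove_device(&dev) != -EBUSY )
        return -1;

    dev.refcnt = 1;        /* extra holder released its reference */
    return model_pci_remove_device(&dev);
}
```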

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 15:29:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 15:29:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520836.808876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmytG-000348-38; Thu, 13 Apr 2023 15:29:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520836.808876; Thu, 13 Apr 2023 15:29:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmytG-000341-0S; Thu, 13 Apr 2023 15:29:38 +0000
Received: by outflank-mailman (input) for mailman id 520836;
 Thu, 13 Apr 2023 15:29:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pvsx=AE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pmytE-00033v-Sn
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 15:29:36 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fe6d4212-da0f-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 17:29:35 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 395DE6182A;
 Thu, 13 Apr 2023 15:29:34 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EA1E8C433D2;
 Thu, 13 Apr 2023 15:29:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe6d4212-da0f-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1681399773;
	bh=6YdWOQ0NUZ+UngXtsJhd6s1Sck8wLHWwB86rNg5G0y8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=hS+1wQQHpoNr9n7nx0QyQ5GAInVAhzMZqqUwwJMuj4fGjVQ4hNdmYqCN2ZJaU8KX0
	 3CnI9sk0LPGCMmk6gAMCziTgTtInvEKwlo5/1rBEAqZ0yiszxhH+w6K/52X08j2VjA
	 zFFr6RcweeN3DZKUVtiMd4pAl5GfBwT5wySn7gDinKL2AMstEgAJOe0dOM+78Kg3kr
	 qQvQnZluph4QqS58g0DSeJPAUnYSu13XAG5iNh8pN2lY+S9i+GSAt2RDaP/KoEvG/L
	 mzLvxKfUTzlX+lcIcg7Az7msuw3ISlo7NdMe0fqkNlQ9N1tEOesDTlTjoBPt1Besxy
	 zQjan80FY5sLw==
Date: Thu, 13 Apr 2023 08:29:31 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH] automation: switch ADL hw tests to debug build
In-Reply-To: <20230413122340.47233-1-marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2304130829250.15580@ubuntu-linux-20-04-desktop>
References: <20230413122340.47233-1-marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-656519186-1681399773=:15580"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-656519186-1681399773=:15580
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 13 Apr 2023, Marek Marczykowski-Górecki wrote:
> This should give a lot more useful information in case of a failure, and
> also enable some asserts for extra checks.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/gitlab-ci/test.yaml | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index 0916b367ea90..d68c584269dd 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -131,21 +131,21 @@ xilinx-smoke-dom0less-arm64-gcc:
>      - *arm64-test-needs
>      - alpine-3.12-gcc-arm64
>  
> -adl-smoke-x86-64-gcc:
> +adl-smoke-x86-64-gcc-debug:
>    extends: .adl-x86-64
>    script:
>      - ./automation/scripts/qubes-x86-64.sh 2>&1 | tee ${LOGFILE}
>    needs:
>      - *x86-64-test-needs
> -    - alpine-3.12-gcc
> +    - alpine-3.12-gcc-debug
>  
> -adl-suspend-x86-64-gcc:
> +adl-suspend-x86-64-gcc-debug:
>    extends: .adl-x86-64
>    script:
>      - ./automation/scripts/qubes-x86-64.sh s3 2>&1 | tee ${LOGFILE}
>    needs:
>      - *x86-64-test-needs
> -    - alpine-3.12-gcc
> +    - alpine-3.12-gcc-debug
>  
>  qemu-smoke-dom0-arm64-gcc:
>    extends: .qemu-arm64
> -- 
> 2.39.2
> 
--8323329-656519186-1681399773=:15580--


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 16:02:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 16:02:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520842.808885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmzOR-0007mx-J3; Thu, 13 Apr 2023 16:01:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520842.808885; Thu, 13 Apr 2023 16:01:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmzOR-0007mq-GK; Thu, 13 Apr 2023 16:01:51 +0000
Received: by outflank-mailman (input) for mailman id 520842;
 Thu, 13 Apr 2023 16:01:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmzOQ-0007mg-6z; Thu, 13 Apr 2023 16:01:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmzOQ-0007P1-0Q; Thu, 13 Apr 2023 16:01:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pmzOO-0005mS-Sj; Thu, 13 Apr 2023 16:01:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pmzOO-0006Gq-RS; Thu, 13 Apr 2023 16:01:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Go2j0v20RRlaFZNAEg84mxHo/tFaYNVz1AsvL69XM/0=; b=rwjua5hnzt3dWDV0V7hxUM2qaj
	OvBNZUnqG+qO9qlTwLFgBlnQjJV3cPvsmvGBkoJBBOoD857yAIWcCRH/MySZhLbg0vwcJnT2wxdR2
	MkkYMSV2C+hbPD9RSSpn2YhQ/iw0hTLN4faJ3DLIMyN/eieF2IGwdRGxOsUpDzEJj/h8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180237-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180237: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable-smoke:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=8363b1f62e561cfb73073b4b094516fcbbd7020e
X-Osstest-Versions-That:
    xen=f872a624cbf92de9944483eea7674ef80ced1380
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Apr 2023 16:01:48 +0000

flight 180237 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180237/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  8363b1f62e561cfb73073b4b094516fcbbd7020e
baseline version:
 xen                  f872a624cbf92de9944483eea7674ef80ced1380

Last test of basis   180226  2023-04-13 02:00:25 Z    0 days
Testing same since   180237  2023-04-13 14:01:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  starved 
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f872a624cb..8363b1f62e  8363b1f62e561cfb73073b4b094516fcbbd7020e -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 16:10:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 16:10:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520848.808896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmzWt-0000qq-FX; Thu, 13 Apr 2023 16:10:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520848.808896; Thu, 13 Apr 2023 16:10:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pmzWt-0000qj-Co; Thu, 13 Apr 2023 16:10:35 +0000
Received: by outflank-mailman (input) for mailman id 520848;
 Thu, 13 Apr 2023 16:10:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HU9H=AE=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pmzWr-0000qd-Cp
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 16:10:33 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0621.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::621])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b6230a02-da15-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 18:10:30 +0200 (CEST)
Received: from AM6P191CA0074.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:8a::15)
 by AM9PR08MB6114.eurprd08.prod.outlook.com (2603:10a6:20b:287::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 16:10:28 +0000
Received: from AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8a:cafe::dd) by AM6P191CA0074.outlook.office365.com
 (2603:10a6:209:8a::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 16:10:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT032.mail.protection.outlook.com (100.127.140.65) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 16:10:27 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Thu, 13 Apr 2023 16:10:26 +0000
Received: from 2ce8db9ec33a.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A89ADFEA-9121-4F97-A769-EACAB67D62D0.1; 
 Thu, 13 Apr 2023 16:10:16 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2ce8db9ec33a.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 13 Apr 2023 16:10:16 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB9867.eurprd08.prod.outlook.com (2603:10a6:20b:5ab::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 16:10:13 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 16:10:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6230a02-da15-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IO2qus3IghDRLaMTgWF8VVy8MG262oH2O/6uT+AqoAo=;
 b=U30/iUJTfAElTiLhpXEDFlK9Wx8H6poIg/tiHoblMLAE+zSN0auDTDjWEqi7CnXsmU+pVjDw1ijxmBz3r2cEkUd8jCmnVVcr+76rwMVQtkM3FIEP1oT4BgnSJxbXnbLB9oW/bVL9bJaQf48qrBw0Mb6TDnR2NGOW1EzPYfShw0M=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7b36245694e2b994
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=behBL2xBPSwaFw1jUmKfiDCzr9fcq3nL+2jrsPeosm44WoBazoX8jdYs065GaDW8YRTHxewqVDyOB741ygbLzGbB9XB+jxyewvxbrY1mKE3RyoYdqTpe6H72i0ELNCzAWe+xc0FTlwAGtjIi+EfWyO8aIrs1MVN56Ue9q+r0f8IYNA0VJ5EQqPi+JHAHIMIx6yjLwS5etCwoxm51ixMQRgjMsPzv06TiThf2pC1KLI3KseBTKMF8z4RX3swUFZMI7mxJb5XtZLs6jFtYL7K0gfEzR+Ufhh3nGNK6NyRI9ahvTQ82a+AgJhnXwEtO3+mQZaEy2ov0nqvg5YB0S9Ax/Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IO2qus3IghDRLaMTgWF8VVy8MG262oH2O/6uT+AqoAo=;
 b=KdSLeSXLM/ClIh10YjREFY13CZCjdoj6Tzl9GSac4blFqwgCBjq4JEL0bdsxENvNwsSvf1qidS5GAqqIhjXenNKMtk3LoSIsdjEsjQOhoVCODfedR9rxFlGUE7xh0om993Q2vlijfyShYTBspYWlNV4qlNSHviqsVTm6BuafPhwsFCXUovvHoFFjMQ8nB6FYnVhkNv6mcRl+G+C/XXK/NWsuG+UxRKWOVazEBiZsRWHjtaeydE0HUYBmv/jXVZqdKvn4nweXPjNFqXQKZS5tY6Zti8LjLseB84VxELZWVJvqhKGoZqsXW2abX//1XFvv+xUpFpxJItMchuglIrjw0g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IO2qus3IghDRLaMTgWF8VVy8MG262oH2O/6uT+AqoAo=;
 b=U30/iUJTfAElTiLhpXEDFlK9Wx8H6poIg/tiHoblMLAE+zSN0auDTDjWEqi7CnXsmU+pVjDw1ijxmBz3r2cEkUd8jCmnVVcr+76rwMVQtkM3FIEP1oT4BgnSJxbXnbLB9oW/bVL9bJaQf48qrBw0Mb6TDnR2NGOW1EzPYfShw0M=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Thread-Topic: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Thread-Index: AQHZbSQ9GJqCFHqaX0uCiT+EC9U7s68pNK+AgAAHnICAAAGIAIAACdqAgAAi2YA=
Date: Thu, 13 Apr 2023 16:10:13 +0000
Message-ID: <92DA4B4F-7BB9-4CAC-9276-0B6A10550164@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-3-luca.fancellu@arm.com>
 <72f38b2b-a391-fb7c-f8c0-cf3561470875@xen.org>
 <B3A82639-6D61-4DA2-B918-A92A421C75D3@arm.com>
 <e8075849-8bd5-7fd4-efaa-81e48c867635@xen.org>
 <4F5DC5EC-F538-42CE-A93F-2B5E3FAC13BB@arm.com>
In-Reply-To: <4F5DC5EC-F538-42CE-A93F-2B5E3FAC13BB@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB9867:EE_|AM7EUR03FT032:EE_|AM9PR08MB6114:EE_
X-MS-Office365-Filtering-Correlation-Id: 98059bcb-7759-4077-3c6f-08db3c39988b
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 fl9+qnvjAK/eQHm4GFaYdZG4fUNJ+jE1/qa0kF6RVNjYKihmW2xHSB29F3qr2kWL/lKcjKcJnO1T1n+2gHyEp0n0yNrtTDLbMqZg8+sUwDHLQmRPmA6AAxHXJ2E2KWMeNOM1WTQ0y9NO/tqoOkrcHLBrtZzbQF7U120sac4PsilAHd4fGQBL8lavzv/khH/uUyrBCs57+fwEROY0g4QZhLYkFA/HEY3uWaslz42u5nmQjScGe/Pp67lWHXHvxh8X7IbrcK6Kj6MutC88Nv3TkYNxXEafHls1wVHdcMmJu4wLq24hXxLkDX+JSYNF6j3xSPTN68KdJ3ogZ5KBuRJ6y7CmnPYWjnIcc05H/KbwZdqxyOVaI1wnmSoxwdCO16L2Mv9UzjRaH/ZuPthpqzwWOVE+Mp40l/YxGg+etmwMp8kXIAu6RrSleZxlZXu5+hQDO3V5CzO/w/gnGcq/RfqRxBORfE3As7Yb3R5GrrubQRjK+Mq3tWvY2Pf52qicwRKn4k+viMjuXHr1x2g4nVTyyIgL4rNfIYtvJqYpaNK3pUo9QkGFuhUg+YznZ4aq7BTbQ52TczXauC/THGQEyQYfRgLqcvzWpDv6DklU9bEHK2czTUN30w92FHwAvLsGmC+9lLsZ8kMkfk1Gk4c8VUMeF61gG57Kkb4nWRRPQkStivc=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(39860400002)(366004)(136003)(396003)(451199021)(71200400001)(33656002)(38100700002)(122000001)(5660300002)(36756003)(2906002)(316002)(38070700005)(8676002)(8936002)(86362001)(66446008)(66556008)(64756008)(41300700001)(66946007)(6916009)(66476007)(4326008)(91956017)(76116006)(2616005)(54906003)(6506007)(186003)(478600001)(26005)(6512007)(6486002)(21314003)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <74803F1FFFE1674E9D8A4431348410A7@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9867
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	005ea367-2196-4d15-6a70-08db3c399059
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Tmt2Vkk13zbk2XzpzPDvL2CZYxbRF2nxw8jJEvIQJW4SkBNzZv4+k80YUy5NCoTrffLUYaYtOj6eG0n6kSm1MSohb2F+U5HLTELAHJZxsYznCL07WdugD/r0m73v061liiSGlx5ObtCxE3EG2aHU+GdkbZM70Blst5hnZki+64ZKokVnDYQoj3BqK3uKWtByhoGtrWZClfY5zSvujxVciKX5AvQklOBjk3oAbGcu+KZwjonV9E8PnZS8k0nVJLiWOQdZH8W7hD4FAQ5gA3lenzRWxprmA4ybfVF0ZCqaehOQsAyt1kdazlj/tNDNDTQCkUOe3Et0DL0JTTRjPOQymTshxEDL5au6WP4by1Q4Zp4Z9lGHiVk7zDxjV12d0GaxIelKgfHz2O3E9XHYdg7VrsKnuECx5fz92tOS6sI+4f0K21QQZA24rvfGIPVr5ojQOFGB6o+NH9q9x2cT9KFo1bI4pcZ0NDYYObfQXwXRKzW5JpUUlZ6KTJMhOrO/em4qDTjAKSNLXpd2dea6W9Yy/4ZCXW+U2QMCIrPDbdgFRYwupj+6b/mUGMMVL8+eCwlAM/Fi9VsVW81Md3RhM/dG9FF38p4rE9Ij7OwjggKyTYXPX6uUBXLxRNiNhyS1G9uwKO/e6fD6cdJr1zcrp6S6ewrqaYkCEktO18MpvafnM/I/bxe33p33rAvVBbGxAeTRIvd5goB5rUH5v5SIKKXxdjh/fw+nNQ5uRm87qAZm3gI=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(346002)(376002)(39860400002)(451199021)(40470700004)(36840700001)(46966006)(6486002)(40460700003)(70206006)(70586007)(4326008)(36756003)(2906002)(86362001)(81166007)(82740400003)(356005)(41300700001)(33656002)(5660300002)(82310400005)(8676002)(8936002)(6862004)(316002)(478600001)(40480700001)(54906003)(6512007)(6506007)(26005)(336012)(36860700001)(2616005)(186003)(47076005)(21314003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 16:10:27.2496
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 98059bcb-7759-4077-3c6f-08db3c39988b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6114

> 
> 
>> 
>>>> Can we move this somewhere else to avoid adding extra padding? Also shouldn't this be protected with #ifdef CONFIG_ARM_64 to make clear this is not supported on Xen 32-bit?
>>> Yes, I’ll move it and protect with CONFIG_ARM_64, is it ok for you if I move it after:
>>> /* Monitor options */
>>> struct {
>>>    uint8_t privileged_call_enabled : 1;
>>> } monitor;
>> 
>> Please check the padding with "pahole". If possible, it would be better to re-use an existing one.
> 
> Ok I’ll try to use the tool

I’ve managed to use the tool, the field seems already in a good spot:

struct arch_domain {
	enum domain_type           type;                 /*     0     4 */
	uint8_t                    sve_vl;               /*     4     1 */

	/* XXX 3 bytes hole, try to pack */

	struct p2m_domain          p2m;                  /*     8   328 */
	/* --- cacheline 5 boundary (320 bytes) was 16 bytes ago --- */
	struct hvm_domain          hvm;                  /*   336   312 */
	/* --- cacheline 10 boundary (640 bytes) was 8 bytes ago --- */
	struct paging_domain       paging;               /*   648    32 */
	struct vmmio               vmmio;                /*   680    32 */
	/* --- cacheline 11 boundary (704 bytes) was 8 bytes ago --- */
	unsigned int               rel_priv;             /*   712     4 */

	/* XXX 4 bytes hole, try to pack */

	struct {
		uint64_t           offset;               /*   720     8 */
		s_time_t           nanoseconds;          /*   728     8 */
	} virt_timer_base;                               /*   720    16 */
	struct vgic_dist           vgic;                 /*   736   200 */

	/* XXX last struct has 2 bytes of padding */

	/* --- cacheline 14 boundary (896 bytes) was 40 bytes ago --- */
	struct vuart               vuart;                /*   936    32 */
	/* --- cacheline 15 boundary (960 bytes) was 8 bytes ago --- */
	unsigned int               evtchn_irq;           /*   968     4 */
	struct {
		uint8_t            privileged_call_enabled:1; /*   972: 0  1 */
	} monitor;                                       /*   972     1 */

	/* XXX 3 bytes hole, try to pack */

	struct vpl011              vpl011;               /*   976    72 */

	/* size: 1152, cachelines: 18, members: 13 */
	/* sum members: 1038, holes: 3, sum holes: 10 */
	/* padding: 104 */
	/* paddings: 1, sum paddings: 2 */
} __attribute__((__aligned__(128)));

> 
>> 
>> Cheers,
>> 
>> -- 
>> Julien Grall
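The holes pahole reports in this thread come from alignment: a 1-byte member followed by a word-aligned one leaves a gap that another sub-word member could reuse. A generic illustration with invented field names (not the real arch_domain layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A uint8_t followed by a 4-byte-aligned member leaves a 3-byte hole,
 * the kind of gap pahole flags with "XXX 3 bytes hole, try to pack". */
struct holey_model {
    uint32_t type;     /* offset 0, size 4 */
    uint8_t  small;    /* offset 4, size 1 */
    /* 3 bytes of padding here */
    uint32_t next;     /* offset 8, size 4 */
};

/* Moving another sub-word field next to `small` reuses the hole rather
 * than growing the structure. */
struct packed_model {
    uint32_t type;
    uint8_t  small;
    uint8_t  flags;    /* lands in the former hole */
    uint32_t next;
};

/* Returns 0 when both layouts are 12 bytes on the usual ABIs, i.e. the
 * extra field came for free. */
int padding_demo(void)
{
    return (sizeof(struct holey_model) == 12 &&
            sizeof(struct packed_model) == 12 &&
            offsetof(struct holey_model, next) == 8) ? 0 : -1;
}
```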


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 17:38:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 17:38:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520853.808916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0tW-0000sY-95; Thu, 13 Apr 2023 17:38:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520853.808916; Thu, 13 Apr 2023 17:38:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0tW-0000sP-4w; Thu, 13 Apr 2023 17:38:02 +0000
Received: by outflank-mailman (input) for mailman id 520853;
 Thu, 13 Apr 2023 17:38:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2GAK=AE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pn0tV-0000a4-4r
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 17:38:01 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on20626.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::626])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ee2535ea-da21-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 19:37:59 +0200 (CEST)
Received: from BN9PR03CA0953.namprd03.prod.outlook.com (2603:10b6:408:108::28)
 by MN2PR12MB4343.namprd12.prod.outlook.com (2603:10b6:208:26f::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 17:37:55 +0000
Received: from BN8NAM11FT012.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:108:cafe::38) by BN9PR03CA0953.outlook.office365.com
 (2603:10b6:408:108::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.31 via Frontend
 Transport; Thu, 13 Apr 2023 17:37:55 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT012.mail.protection.outlook.com (10.13.177.55) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.33 via Frontend Transport; Thu, 13 Apr 2023 17:37:55 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:37:54 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 13 Apr 2023 12:37:53 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee2535ea-da21-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Tf+hTJOTwnRQItM+0d9OL6rf+uqcqFfFY8B3hFtgght/vCOqFTiUl4X0OTy0K3oe7sADIjbffvBBBStswj6hogWzeiTjaap1mNC6F7sR/rUWASzit3f+XyW+dIuopljfr7Vq6TLjeJ0NvPNoOUgiw3ZO1EbSzc8a42plAL/igTHQNS6C/jmj+hoV+HCJEqefOz983DZwLGcYs+dY/5mcf7yEFP5VemhgwOR/b5ng5wTMEBk/pZL1/xy0XWyVifD7p/QjfpX8v1X8FvR2N1+jdtZS6BODYSlRbHvuovO2gYPTK28sySBVpp8HR3PbyBtyc/1EK2IDfOJfUBIUt7g7ig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QaWVuZvjIQT6qz1jrl/Hs9hItflDrISzlZiBLoKD23Y=;
 b=NEUCe6ByQVb5PrMDbGElKhQrITSDKEHbRwtX8bjZo/RI/h7njFhybD8KBGJawUCSiQvEKx4byUSRTM6Tl+DLwG6FeuJksTVFQT6SpRUIUTrC6RPS7SmNV3FGl5vDLHo6uCEeT0ObrPMpE3Q3a1kHjFKJyFxu6InndeFEFIYNpSnWX3KGp58oEJcUxGKNdDigEg8EB69Sy3SFS9kqGgSubcCmUIA61glU+JFuD5eu10DbMabt9dW/ZpEqPdGwXsdHfRDbuy/MsLgyomILs41NikUIyuF9SHmhhXdPz0w7BOjPbjCIJjMyp5jsEzANTrrLQv3tTYLHWhWH3MOrXfjD3w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QaWVuZvjIQT6qz1jrl/Hs9hItflDrISzlZiBLoKD23Y=;
 b=kS9duPNvN38Fih7IQ9M6X2HKpaUg0SbzuqXORINOQhk6stZ6mQpEeYYvuFhGQjXu2KOebS3W1colCLvVGX4qsgEgn7UFBmP9FEEPt6LzAufDSGLZqQlUll6NdKL0nLy4LJuUK8FnR2FU90nqL8Jo4mKRoBV98COjxOXUzFG2byU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
Date: Thu, 13 Apr 2023 18:37:27 +0100
Message-ID: <20230413173735.48387-3-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT012:EE_|MN2PR12MB4343:EE_
X-MS-Office365-Filtering-Correlation-Id: 56f16cf2-22ed-4bf8-c163-08db3c45d0a0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	l/ZiAlgGXaOYtpDMdgJKswLeprohb+3vZgBiSE0Wf2KqO2NbXmOJ0Y45107MH0fM/dphPxMdzpmT9tqvWCISGClSxuBCsn6Hgl6ZtJe/1ciiV3/RsFtIyMJJ0gsmSbTD1u8/RalZfXiyKDCVTKVBy+kqwXFgBlD38vEBNZ89eTTWd2ezb18UBXJqJIpJYTIuRH/gJoCiQY+Upmf/G7A7n1heG5KkHtZOEEFeIDV95fkOIuyo2VQ1PNdIW5USzs1OROQTxBB5kUAVnq0UKrNtMKJYV/uTACyyOG5XZeIWgjIJE5kU6VaAxwRnFljlOYR2ivmYe/BryGzbrXjGTEZ9SqL/LAb6xIyfqULWhBpeaSA4XnK3vKYUMMAJUR8QXPjABAdz08SqtkVSvKxu+Etp0s2deaZdG1lzJd0POBZU9cSZR30s9GCWR4ir4FhQcXKSooGa6YcU6uyjOtF1v/3ehnp//oBFyIB2HLAg3ObzzjCqndfuBWHsQW2i9F8gfuYoACGWzYrR4HLNHay0JI0kgAAPVWvDMcLkCH0fsABCaO2no+v/zPxoI64vbJ6wiKyyHJj/ljBIpTFV8MWLQP3GQ3h9Sn8xKRE4QzZE1xkjktfCTAQreuvT/lLdfe57skoJ8emEeTjXJ/oojFYH5esZlZYOUtBarDzvLMQqvQwEHgAkSJFAGdbPzoYvDhKUBFm++YsUKUSj25rJOvYygbVeFoF2M5nBKhOqkMTLpGh3j1c=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(376002)(39860400002)(346002)(451199021)(36840700001)(40470700004)(46966006)(316002)(6916009)(4326008)(82740400003)(70206006)(70586007)(2616005)(47076005)(426003)(336012)(5660300002)(41300700001)(82310400005)(6666004)(36756003)(86362001)(40460700003)(54906003)(40480700001)(26005)(186003)(1076003)(103116003)(2906002)(83380400001)(30864003)(7416002)(8676002)(8936002)(36860700001)(478600001)(356005)(81166007)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 17:37:55.3350
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 56f16cf2-22ed-4bf8-c163-08db3c45d0a0
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT012.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4343

The DT functions (dt_read_number(), device_tree_get_reg(), fdt_get_mem_rsv())
currently accept or return 64-bit values.

In the future, when we support 32-bit physical addresses, these DT functions
are expected to accept/return 32-bit or 64-bit values (depending on the width
of the physical address). We also wish to detect whether any truncation has
occurred (i.e. while parsing 32-bit physical addresses from 64-bit values read
from the DT).

device_tree_get_reg() should now be able to return paddr_t. It is invoked by
various callers to get the DT address and size.

For fdt_get_mem_rsv(), we have introduced a wrapper named
fdt_get_mem_rsv_paddr() which invokes fdt_get_mem_rsv() and translates
uint64_t to paddr_t. We cannot modify fdt_get_mem_rsv() itself as it has been
imported from an external source.

For dt_read_number(), we have also introduced a wrapper named dt_read_paddr()
to read physical addresses. We chose not to modify the original function as it
is used in places that specifically need to read 64-bit values from the DT
(e.g. dt_property_read_u64()).

Xen prints a warning when it detects truncation in cases where it is not able
to return an error.
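As an aside, the round-trip comparison used for the truncation detection can
be illustrated in isolation. The sketch below assumes a hypothetical 32-bit
paddr_t (as on an Arm_32 build without LPAE); the typedef and helper name are
illustrative stand-ins, not the actual Xen definitions:

```c
#include <stdint.h>

/* Stand-in for Xen's paddr_t on a 32-bit physical address build;
 * this typedef is an assumption for illustration only. */
typedef uint32_t paddr_t;

/*
 * Returns 1 when narrowing the 64-bit DT value to paddr_t would lose
 * high bits. This mirrors the "(paddr_t)val != val" idiom applied in
 * device_tree_get_reg(), dt_read_paddr() and fdt_get_mem_rsv_paddr().
 */
static int dt_value_truncates(uint64_t val)
{
    return val != (paddr_t)val;
}
```

For example, 0x100000000 narrows to 0 and is flagged, while 0xFFFFFFFF fits
and is not.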

Also, replaced u32/u64 with uint32_t/uint64_t in the functions touched
by the code changes.

Also, initialized variables to fix "-Werror=maybe-uninitialized" warnings.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from

v1 - 1. Dropped "[XEN v1 2/9] xen/arm: Define translate_dt_address_size() for the translation between u64 and paddr_t" and
"[XEN v1 4/9] xen/arm: Use translate_dt_address_size() to translate between device tree addr/size and paddr_t";
this approach achieves the same purpose instead.

2. No need to check for truncation while converting values from u64 to paddr_t.

v2 - 1. Use "( (dt_start >> (PADDR_SHIFT - 1)) > 1 )" to detect truncation.
2. Introduced libfdt_xen.h to implement fdt_get_mem_rsv_paddr
3. Logged error messages in case truncation is detected.

v3 - 1. Renamed libfdt_xen.h to libfdt-xen.h.
2. Replaced u32/u64 with uint32_t/uint64_t
3. Use "(paddr_t)val != val" to check for truncation.
4. Removed the alias "#define PADDR_SHIFT PADDR_BITS". 

v4 - 1. Added a WARN() when truncation is detected.
2. Always check the return value of fdt_get_mem_rsv().

 xen/arch/arm/bootfdt.c              | 48 +++++++++++++++++++------
 xen/arch/arm/domain_build.c         |  2 +-
 xen/arch/arm/include/asm/setup.h    |  4 +--
 xen/arch/arm/setup.c                | 18 +++++-----
 xen/arch/arm/smpboot.c              |  2 +-
 xen/include/xen/device_tree.h       | 23 ++++++++++++
 xen/include/xen/libfdt/libfdt-xen.h | 55 +++++++++++++++++++++++++++++
 7 files changed, 129 insertions(+), 23 deletions(-)
 create mode 100644 xen/include/xen/libfdt/libfdt-xen.h

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 0085c28d74..ac8148da55 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -11,7 +11,7 @@
 #include <xen/efi.h>
 #include <xen/device_tree.h>
 #include <xen/lib.h>
-#include <xen/libfdt/libfdt.h>
+#include <xen/libfdt/libfdt-xen.h>
 #include <xen/sort.h>
 #include <xsm/xsm.h>
 #include <asm/setup.h>
@@ -52,11 +52,37 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
     return false;
 }
 
-void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
-                                u32 size_cells, u64 *start, u64 *size)
+void __init device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
+                                uint32_t size_cells, paddr_t *start,
+                                paddr_t *size)
 {
-    *start = dt_next_cell(address_cells, cell);
-    *size = dt_next_cell(size_cells, cell);
+    uint64_t dt_start, dt_size;
+
+    /*
+     * dt_next_cell will return uint64_t whereas paddr_t may not be 64-bit.
+     * Thus, there is an implicit cast from uint64_t to paddr_t.
+     */
+    dt_start = dt_next_cell(address_cells, cell);
+    dt_size = dt_next_cell(size_cells, cell);
+
+    if ( dt_start != (paddr_t)dt_start )
+    {
+        printk("Error: Physical address greater than max width supported\n");
+        WARN();
+    }
+
+    if ( dt_size != (paddr_t)dt_size )
+    {
+        printk("Error: Physical size greater than max width supported\n");
+        WARN();
+    }
+
+    /*
+     * Xen will truncate the address/size if it is greater than the maximum
+     * supported width and it will give an appropriate warning.
+     */
+    *start = dt_start;
+    *size = dt_size;
 }
 
 static int __init device_tree_get_meminfo(const void *fdt, int node,
@@ -326,7 +352,7 @@ static int __init process_chosen_node(const void *fdt, int node,
         printk("linux,initrd-start property has invalid length %d\n", len);
         return -EINVAL;
     }
-    start = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
+    start = dt_read_paddr((void *)&prop->data, dt_size_to_cells(len));
 
     prop = fdt_get_property(fdt, node, "linux,initrd-end", &len);
     if ( !prop )
@@ -339,7 +365,7 @@ static int __init process_chosen_node(const void *fdt, int node,
         printk("linux,initrd-end property has invalid length %d\n", len);
         return -EINVAL;
     }
-    end = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
+    end = dt_read_paddr((void *)&prop->data, dt_size_to_cells(len));
 
     if ( start >= end )
     {
@@ -593,10 +619,12 @@ static void __init early_print_info(void)
     nr_rsvd = fdt_num_mem_rsv(device_tree_flattened);
     for ( i = 0; i < nr_rsvd; i++ )
     {
-        paddr_t s, e;
-        if ( fdt_get_mem_rsv(device_tree_flattened, i, &s, &e) < 0 )
+        paddr_t s = 0, e = 0;
+
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &s, &e) < 0 )
             continue;
-        /* fdt_get_mem_rsv returns length */
+
+        /* fdt_get_mem_rsv_paddr returns length */
         e += s;
         printk(" RESVD[%u]: %"PRIpaddr" - %"PRIpaddr"\n", i, s, e);
     }
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c8f08d8ee2..15c8bdd9e4 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -949,7 +949,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         BUG_ON(!prop);
         cells = (const __be32 *)prop->value;
         device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
-        psize = dt_read_number(cells, size_cells);
+        psize = dt_read_paddr(cells, size_cells);
         if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
         {
             printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index a926f30a2b..7b697d879e 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -157,8 +157,8 @@ const char *boot_module_kind_as_string(bootmodule_kind kind);
 extern uint32_t hyp_traps_vector[];
 void init_traps(void);
 
-void device_tree_get_reg(const __be32 **cell, u32 address_cells,
-                         u32 size_cells, u64 *start, u64 *size);
+void device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
+                         uint32_t size_cells, paddr_t *start, paddr_t *size);
 
 u32 device_tree_get_u32(const void *fdt, int node,
                         const char *prop_name, u32 dflt);
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f26f67b90..d2a3d8c340 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -29,7 +29,7 @@
 #include <xen/virtual_region.h>
 #include <xen/vmap.h>
 #include <xen/trace.h>
-#include <xen/libfdt/libfdt.h>
+#include <xen/libfdt/libfdt-xen.h>
 #include <xen/acpi.h>
 #include <xen/warning.h>
 #include <asm/alternative.h>
@@ -220,13 +220,13 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
 
     for ( i = first; i < nr ; i++ )
     {
-        paddr_t r_s, r_e;
+        paddr_t r_s = 0, r_e = 0;
 
-        if ( fdt_get_mem_rsv(device_tree_flattened, i, &r_s, &r_e ) < 0 )
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &r_s, &r_e ) < 0 )
             /* If we can't read it, pretend it doesn't exist... */
             continue;
 
-        r_e += r_s; /* fdt_get_mem_rsv returns length */
+        r_e += r_s; /* fdt_get_mem_rsv_paddr returns length */
 
         if ( s < r_e && r_s < e )
         {
@@ -500,15 +500,15 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
 
     for ( ; i < mi->nr_mods + nr; i++ )
     {
-        paddr_t mod_s, mod_e;
+        paddr_t mod_s = 0, mod_e = 0;
 
-        if ( fdt_get_mem_rsv(device_tree_flattened,
-                             i - mi->nr_mods,
-                             &mod_s, &mod_e ) < 0 )
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened,
+                                   i - mi->nr_mods,
+                                   &mod_s, &mod_e ) < 0 )
             /* If we can't read it, pretend it doesn't exist... */
             continue;
 
-        /* fdt_get_mem_rsv returns length */
+        /* fdt_get_mem_rsv_paddr returns length */
         mod_e += mod_s;
 
         if ( s < mod_e && mod_s < e )
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 412ae22869..c15c177487 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -159,7 +159,7 @@ static void __init dt_smp_init_cpus(void)
             continue;
         }
 
-        addr = dt_read_number(prop, dt_n_addr_cells(cpu));
+        addr = dt_read_paddr(prop, dt_n_addr_cells(cpu));
 
         hwid = addr;
         if ( hwid != addr )
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 19a74909ce..11bda2fd3d 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -241,6 +241,29 @@ static inline u64 dt_read_number(const __be32 *cell, int size)
     return r;
 }
 
+/* Wrapper for dt_read_number() to return paddr_t (instead of uint64_t) */
+static inline paddr_t dt_read_paddr(const __be32 *cell, int size)
+{
+    uint64_t dt_r = 0;
+    paddr_t r;
+
+    dt_r = dt_read_number(cell, size);
+
+    if ( dt_r != (paddr_t)dt_r )
+    {
+        printk("Error: Physical address greater than max width supported\n");
+        WARN();
+    }
+
+    /*
+     * Xen will truncate the address/size if it is greater than the maximum
+     * supported width and it will give an appropriate warning.
+     */
+    r = dt_r;
+
+    return r;
+}
+
 /* Helper to convert a number of cells to bytes */
 static inline int dt_cells_to_size(int size)
 {
diff --git a/xen/include/xen/libfdt/libfdt-xen.h b/xen/include/xen/libfdt/libfdt-xen.h
new file mode 100644
index 0000000000..3296a368a6
--- /dev/null
+++ b/xen/include/xen/libfdt/libfdt-xen.h
@@ -0,0 +1,55 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-only
+ *
+ * xen/include/xen/libfdt/libfdt-xen.h
+ *
+ * Wrapper functions for device tree. This helps to convert dt values
+ * between uint64_t and paddr_t.
+ *
+ * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
+ */
+
+#ifndef LIBFDT_XEN_H
+#define LIBFDT_XEN_H
+
+#include <xen/libfdt/libfdt.h>
+
+static inline int fdt_get_mem_rsv_paddr(const void *fdt, int n,
+                                        paddr_t *address,
+                                        paddr_t *size)
+{
+    uint64_t dt_addr;
+    uint64_t dt_size;
+    int ret = 0;
+
+    ret = fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size);
+    if ( ret )
+        return ret;
+
+    if ( dt_addr != (paddr_t)dt_addr )
+    {
+        printk("Error: Physical address greater than max width supported\n");
+        return -FDT_ERR_MAX;
+    }
+
+    if ( dt_size != (paddr_t)dt_size )
+    {
+        printk("Error: Physical size greater than max width supported\n");
+        return -FDT_ERR_MAX;
+    }
+
+    *address = dt_addr;
+    *size = dt_size;
+
+    return ret;
+}
+
+#endif /* LIBFDT_XEN_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 17:38:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 17:38:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520852.808905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0tI-0000aI-Sa; Thu, 13 Apr 2023 17:37:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520852.808905; Thu, 13 Apr 2023 17:37:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0tI-0000aB-Pr; Thu, 13 Apr 2023 17:37:48 +0000
Received: by outflank-mailman (input) for mailman id 520852;
 Thu, 13 Apr 2023 17:37:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2GAK=AE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pn0tH-0000a4-5P
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 17:37:47 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20617.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::617])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e56df47a-da21-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 19:37:44 +0200 (CEST)
Received: from BN1PR12CA0015.namprd12.prod.outlook.com (2603:10b6:408:e1::20)
 by DM4PR12MB5793.namprd12.prod.outlook.com (2603:10b6:8:60::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 17:37:41 +0000
Received: from BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e1:cafe::40) by BN1PR12CA0015.outlook.office365.com
 (2603:10b6:408:e1::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 17:37:41 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT107.mail.protection.outlook.com (10.13.176.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 17:37:41 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:37:40 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 10:37:40 -0700
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 13 Apr 2023 12:37:38 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e56df47a-da21-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SQWmIdc2SKkD4LjHcrpjH3iWug6JZo3LuEg+yHwHO8Dp4K37seXaYYoAmuo3AW+bjrIs43Ocr0g0I96tU/xVBAaazLInjwJYIM4Hdq1lVnJAVBNMbXghR5qzEEZ3ZQNVxMimsEZ1NghNw90hkW+THc6sQWXMyURCSuuK95RhNCZ2eY1+3dw8FZeZeV5JZUIb7IuFsMpcR5VCdGspOe0oAmNiLvZE6CP4nTngtW8/8CWrLL8zCB35oLqhwOVWzTIHYb4qq+PetKEfoM493x7vh71SjE11pmo2ort2YUXGQUzzgYZNNyEVGu34Q3/NIHRR5a8QdhNadXO3NYbsfjGVkg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=s1VKaqIv1xUd9qTOea23vuzeYmLFJZeuxTrWrpWWn/U=;
 b=EP3VsEOyWh5wVLDKMkbwbK/DunA/HG6Rh8LVdojSuxAMqU2B7Nl/BtGA7IKn+kJvYyoCl+gQtHpB3PdE1DdA5RbozB3SppDSAgtukixX/miW+ZletGyN0/j4HpVhJ/Zp+Q/VwUPAZ8el+y8c52Kh+3EaIWTN4Li73hue+HW+W9+XCq53xHFNC8MslPHN4e4zFqPZJChPyZIq1PzXqMfPUEOGt37ets9SjBObg0lkinm+cgZaOpOIgydNAGQMXXI7S5c5X2vWqtVp6Oi8QMTptShoc8wAMl4NymG//Q7pvmIwetF06ekJasTaYYSiDutbnLtBxzUNE7Z47KzZfSuiQw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=s1VKaqIv1xUd9qTOea23vuzeYmLFJZeuxTrWrpWWn/U=;
 b=f4pCpH/PX3J4GDai+2cAEWLUB3lVvyiU/BPUkmJcwv+/rVN0V4zpEX5bFI7K8JCctRpVyQ/JbDiQt9SxCDgQ/d60RZhPyoh/xf5O0Xve9YL1jyskim1p23vcQgyAmiHl8yWQev3EgkVMvz0OrrrqxKqG3cxb/20N5cg7L1PmkrM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5 00/10] Add support for 32-bit physical address
Date: Thu, 13 Apr 2023 18:37:25 +0100
Message-ID: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT107:EE_|DM4PR12MB5793:EE_
X-MS-Office365-Filtering-Correlation-Id: 6c3edd7f-2df5-48e5-c423-08db3c45c81d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0tmH1vaMOZyUTJXOaX3BB4fCLIfJpCgCq4CHZxGbq6xpmI6th4eCFP4sRqwkDmDY4pSFRkrnuSTofX09LHm3CjcLY/sjsWyvjtjKxjquXa54iwWWWm3382Lq5pjlJWwWk3MOJLOiXZy8UDQdlac7xPJ51f+xHGkI1Pe4FYIf9f4L2fudMqv147iy20IIf7mv+/IThLDdUauof4y6ftRDKVm8i+o1bab/c5N00ondUi74QV/vbVZSEb78jmLKhh5ueQE5TJHnde4RwIBnp/pq0EO+DzdIeH5L1oGGngBtK+yCfENC4PwXeVTZ9rNbftjxt6jtC/Ic7Ggbt8DjfVd9poIegISERIQB7BV9ADQFPble7WbHbGbBCbbiym6S/grYKHTZkGfzsetX6FDn21nV4diVXuK+7nNT42sZosEbDYIP3x8OPHKVFcVRZSleWIGKGV3XYE3gB41KRQd2DM1/WSHTYJXONdcCXDw3FUIeqq7FaCLM9C7zPPssndAdH6gaYyPpFx+k8NVepaf8B7oqbrYVQL7U9l9cQxkbc/AZzBpUqy1ibMD1VajafiSjSVwZZ9VTseul1ykNfn4kNLfx4Loq/PTjv6qVbwlF7YKsewt6LoioMG62pBtunocJPX1iYWJhEDF0QtocI84e+86s93McYR+GLnXt02YACKzB7A5y6IghCHJZQjiWMxjCI1wEcfDfiqIMZ7lqxJxNAgvDLQEDPzNO/Adl6bZkb2jSlB7LU2+ytiATotBBfMglUmsuZay/WdgnupXU0oL4tNP8GQ==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(136003)(346002)(396003)(451199021)(40470700004)(36840700001)(46966006)(40460700003)(316002)(41300700001)(81166007)(1076003)(26005)(966005)(186003)(86362001)(6666004)(47076005)(356005)(36860700001)(2616005)(82310400005)(426003)(336012)(83380400001)(82740400003)(6916009)(4326008)(54906003)(36756003)(40480700001)(70206006)(70586007)(8676002)(8936002)(5660300002)(2906002)(7416002)(103116003)(478600001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 17:37:41.0401
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6c3edd7f-2df5-48e5-c423-08db3c45c81d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB5793

Hi All,

Please have a look at https://lists.xenproject.org/archives/html/xen-devel/2022-11/msg01465.html
for the context.

The benefits of using 32-bit physical addresses are as follows :-

1. It helps to use Xen on platforms (e.g. R52) which support 32-bit physical
addresses and have no support for the large physical address extension.
On 32-bit MPU systems which support flat-mapping (e.g. R52), it helps
to translate a 32-bit VA into a 32-bit PA.

2. It also helps in code optimization when the underlying platform does not
use the large physical address extension.

The following points are to be noted :-
1. Device tree always uses uint64_t for addresses and sizes. The caller needs
to translate between uint64_t and uint32_t (when 32-bit physical addressing is
used).
2. Currently, we have enabled this option only for Arm_32, as the MMU for
Arm_64 uses 48-bit physical addressing.
3. https://lists.xenproject.org/archives/html/xen-devel/2022-12/msg00117.html
has been added to this series.
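On point 1 above: a DT "reg" value spanning one or two 32-bit cells is
assembled into a uint64_t before any narrowing to paddr_t happens. A minimal
sketch of that assembly follows; read_cells() is a simplified stand-in, not
the real dt_read_number(), and it assumes the cells have already been
converted to host endianness (the real helper does be32_to_cpu() per cell):

```c
#include <stdint.h>

/*
 * Fold `n` 32-bit cells into one 64-bit value, most significant cell
 * first. Real DT cells are big-endian on the wire; here they are
 * assumed to be host-endian already.
 */
static uint64_t read_cells(const uint32_t *cells, int n)
{
    uint64_t r = 0;

    while ( n-- > 0 )
        r = (r << 32) | *cells++;

    return r;
}
```

With #address-cells = 2, the cell pair { 0x1, 0x80000000 } yields
0x180000000, which overflows a 32-bit paddr_t; hence the truncation checks
introduced by this series.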

Changes from :

v1 - 1. Reordered the patches such that the first three patches fix issues in
the existing codebase. These can be applied independently of the remaining
patches in this series.

2. Dropped translate_dt_address_size() for the address/size translation
between paddr_t and u64 (as parsed from the device tree). Also, dropped the
check for truncation (while converting u64 to paddr_t).
Instead, we have now modified device_tree_get_reg() and typecast the return of
dt_read_number() to obtain paddr_t. Also, introduced wrappers for
fdt_get_mem_rsv() and dt_device_get_address() for the same purpose. These can
be found in patch 4/11 and patch 6/11.

3. Split "Other adaptations required to support 32bit paddr" into the following
individual patches for each adaptation :
  xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to
    SMMU_CBn_TTBR0
  xen/arm: guest_walk: LPAE specific bits should be enclosed within
    "ifndef CONFIG_ARM_PA_32"

4. Introduced "xen/arm: p2m: Enable support for 32bit IPA".

v2 - 1. Dropped patches 1/11, 2/11 and 3/11 from v2 as they have already been
committed (except 2/11 - "[XEN v5] xen/arm: Use the correct format specifier",
which is waiting to be committed).

2. Introduced a new patch "xen/drivers: ns16550: Use paddr_t for io_base/io_size".

v3 - 1. Combined the patches from https://lists.xenproject.org/archives/html/xen-devel/2023-02/msg00656.html in this series.

v4 - 1. Dropped "xen/drivers: ns16550: Use paddr_t for io_base/io_size" from the patch series.

As Jan (jbeulich@suse.com) pointed out in https://lore.kernel.org/xen-devel/20230321140357.24094-5-ayan.kumar.halder@amd.com/,
the ns16550 driver requires some prior cleanup. Also, ns16550 can be ignored
for now for the 32-bit paddr support (which mainly targets Arm). I will send
out separate patches to fix this once the current series is committed (or in a
ready-to-commit state). I hope that is fine with Jan?

2. Introduced "xen/arm: domain_build: Check if the address fits the range of physical address".

3. "xen/arm: Use the correct format specifier" has been committed in v4.

Ayan Kumar Halder (10):
  xen/arm: domain_build: Track unallocated pages using the frame number
  xen/arm: Typecast the DT values into paddr_t
  xen/arm: Introduce a wrapper for dt_device_get_address() to handle
    paddr_t
  xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to
    SMMU_CBn_TTBR0
  xen/arm: Introduce choice to enable 64/32 bit physical addressing
  xen/arm: guest_walk: LPAE specific bits should be enclosed within
    "ifndef CONFIG_PHYS_ADDR_T_32"
  xen/arm: Restrict zeroeth_table_offset for ARM_64
  xen/arm: domain_build: Check if the address fits the range of physical
    address
  xen/arm: p2m: Use the pa_range_info table to support ARM_32 and ARM_64
  xen/arm: p2m: Enable support for 32bit IPA for ARM_32

 xen/arch/Kconfig                           |  3 +
 xen/arch/arm/Kconfig                       | 37 +++++++++++-
 xen/arch/arm/bootfdt.c                     | 48 ++++++++++++----
 xen/arch/arm/domain_build.c                | 67 +++++++++++++++-------
 xen/arch/arm/gic-v2.c                      | 10 ++--
 xen/arch/arm/gic-v3-its.c                  |  4 +-
 xen/arch/arm/gic-v3.c                      | 10 ++--
 xen/arch/arm/guest_walk.c                  |  2 +
 xen/arch/arm/include/asm/lpae.h            |  4 ++
 xen/arch/arm/include/asm/p2m.h             |  8 +--
 xen/arch/arm/include/asm/page-bits.h       |  6 +-
 xen/arch/arm/include/asm/setup.h           |  4 +-
 xen/arch/arm/include/asm/types.h           |  6 ++
 xen/arch/arm/mm.c                          | 12 ++--
 xen/arch/arm/p2m.c                         | 35 +++++------
 xen/arch/arm/pci/pci-host-common.c         |  6 +-
 xen/arch/arm/platforms/brcm-raspberry-pi.c |  2 +-
 xen/arch/arm/platforms/brcm.c              |  6 +-
 xen/arch/arm/platforms/exynos5.c           | 32 +++++------
 xen/arch/arm/platforms/sunxi.c             |  2 +-
 xen/arch/arm/platforms/xgene-storm.c       |  2 +-
 xen/arch/arm/setup.c                       | 18 +++---
 xen/arch/arm/smpboot.c                     |  2 +-
 xen/common/device_tree.c                   | 35 +++++++++++
 xen/drivers/char/cadence-uart.c            |  4 +-
 xen/drivers/char/exynos4210-uart.c         |  4 +-
 xen/drivers/char/imx-lpuart.c              |  4 +-
 xen/drivers/char/meson-uart.c              |  4 +-
 xen/drivers/char/mvebu-uart.c              |  4 +-
 xen/drivers/char/omap-uart.c               |  4 +-
 xen/drivers/char/pl011.c                   |  6 +-
 xen/drivers/char/scif-uart.c               |  4 +-
 xen/drivers/passthrough/arm/ipmmu-vmsa.c   |  8 +--
 xen/drivers/passthrough/arm/smmu-v3.c      |  2 +-
 xen/drivers/passthrough/arm/smmu.c         | 23 ++++----
 xen/include/xen/device_tree.h              | 36 ++++++++++++
 xen/include/xen/libfdt/libfdt-xen.h        | 55 ++++++++++++++++++
 37 files changed, 369 insertions(+), 150 deletions(-)
 create mode 100644 xen/include/xen/libfdt/libfdt-xen.h

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 17:38:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 17:38:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520854.808926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0tX-00018O-IR; Thu, 13 Apr 2023 17:38:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520854.808926; Thu, 13 Apr 2023 17:38:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0tX-00018H-FR; Thu, 13 Apr 2023 17:38:03 +0000
Received: by outflank-mailman (input) for mailman id 520854;
 Thu, 13 Apr 2023 17:38:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2GAK=AE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pn0tW-0000rf-GV
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 17:38:02 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2062c.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ecbbc965-da21-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 19:37:57 +0200 (CEST)
Received: from BN9PR03CA0958.namprd03.prod.outlook.com (2603:10b6:408:108::33)
 by DM6PR12MB4202.namprd12.prod.outlook.com (2603:10b6:5:219::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Thu, 13 Apr
 2023 17:37:51 +0000
Received: from BN8NAM11FT044.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:108:cafe::4d) by BN9PR03CA0958.outlook.office365.com
 (2603:10b6:408:108::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 17:37:50 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT044.mail.protection.outlook.com (10.13.177.219) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 17:37:50 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:37:50 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 13 Apr 2023 12:37:48 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ecbbc965-da21-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YIIaANzpNZ7sR6SWB3wWxHB3yCmB4NanzJ67NrzqA3YpabBLdiWjjRb39fjG59mT43FtC/SgKuNkCyj5nQ/Nq6ffEu8A3PXth3GyHFS2rDiE5Ahnsj2rbm/lZ4+iSQMlw0gzWKCwklGAUeb3fhWKL7zokfmxMDXPMqzHEcHBJrsTJZnfG3DCrKhLqQ6MCF6+PImincO+yO+UoKXtjEUOLj3Jwl5Ka8iW/9v+z9EvRTkRGjPqkzlS6ei7aXSHq/ed7NFzFeGx6eKDSIpvq2FEqHaA3kB0ya3/KjLubx97pABg9LMoD/QB9wsnTE5Y6tVJPg8pXy2PY3ytAfhuZWKx1g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=h0wU8LetK/R9TLjWfaj0Ei8tmEBa20JjFHMWBU/8feQ=;
 b=S0I/hG9O6rQL6tFs0vSYCXMHCzvOTq5PMJxcvocIbENWnsg8of6C9BCo+2j5tquc0qXrkzRDbzbuS4yd6qcBDluuy+VdT9uLmgi8/kkzr/ri4FJ38m/SpsceGdSjUpf0Ho2NFlCp1x3WLbqwwuJsywRPQGeTKohQU1V6zpnLeRYrt6fJ8tgFztQDZaiN/vk53ak0NKKolQpKR7F9O90BtfKoO0jGPQMPL8LE9j+bXkxdCgDBbqDSfhxZkzwdNBV7/pD0Dgp4FUDUFq1Ub87qVJ2fEzdB0JF1RqvJlUzdSsvd5BS40/EVVhAKb3CznqP1yPRtInC9GqfU7eHHjXhxiQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h0wU8LetK/R9TLjWfaj0Ei8tmEBa20JjFHMWBU/8feQ=;
 b=gVSj9zJMEzq3VGqgrDqDbbLWPo9wLAFioxbxhxUezt8zdRd9vIWrfvzDxAHN5Axll8uTiAefe8xlwLF3kAA7+CViIjyPftPrjI1g0FoKXmFM6GUzssnRC1k5WcDx4V5FGdapFLqs2GX2Lu25fbaKYEXgTRFxrJSFYCO3XB0N3Ho=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5 01/10] xen/arm: domain_build: Track unallocated pages using the frame number
Date: Thu, 13 Apr 2023 18:37:26 +0100
Message-ID: <20230413173735.48387-2-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT044:EE_|DM6PR12MB4202:EE_
X-MS-Office365-Filtering-Correlation-Id: 8945d385-c3ad-4b0c-a44b-08db3c45cdc4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WxOROidkwfJIIDo5gTWo8XScyTi5Uc7QriwwU1I31OMjNkI2xaLlDG1/ldBxFh5CKjf6h/D7EuA0yRIVAj3p4cu0aBu022IlFYTInym+ef3kp4b9nF6hWZ4uJB71cYPyB0IwO+49mBpYuoP9uXaFFelETJnuH8VNl1cPnhUBX7t94JGxCX3kvF6PbxMbAPWjJcH4ARh4JqAxmeD91Ve7H8U5PPS15qfDgwl/TC0W6+7G01pD+xWH/M0BeqdMBpOfjCXqzzdPFGo1u1/gWZOgLLbeCunO2PtMx2CF5XQ4HHgjKhWpf8i1W3rB/lcnUgHVZ/htd8y8ViKEoFc713TxQoRB42Um3aGq7naV2oZW7Z7RvZMDlyaRemIdHOJjan1yvoOiXA/CRqo1/myldWMU+kCWL4Ir9E26iXYIFY7pJMonc1fYFmYJp5z1HA3FHNeWCmKx0owupbzh2hcRlod14AEakPPyBqazziLlHyzYmQGGGh/dbG3uD79BLzaXv6Cmn6+vsgXvB+KbxoYoZu9Yk51OHR9z6fKf+W7BipUkQsGWd+9+9NcnoK6EKqoviyFRVOJXPGPvfY+o+QJ0oCZtd0Z7SsuPpLADREI8NwxWtzvrkdlrNKkdZNo0FxM/T+nZIweZUIlsxeSV11zjfpJIzWsmkBhxRbmpBosskyLlUhFx0Ui2v5qCJvwb8cdYja5hs4lvPfQ7Fv35JJz3pBWdoNgTQ0sS4Fm1hXt8QdvNgO1GK2wVH3cQHKryscQwEHos
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(136003)(396003)(376002)(451199021)(40470700004)(46966006)(36840700001)(82310400005)(5660300002)(7416002)(8936002)(41300700001)(103116003)(40460700003)(83380400001)(6916009)(4326008)(8676002)(70206006)(2906002)(70586007)(6666004)(36756003)(316002)(966005)(47076005)(478600001)(54906003)(356005)(86362001)(81166007)(36860700001)(26005)(186003)(426003)(336012)(1076003)(40480700001)(82740400003)(2616005)(21314003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 17:37:50.5214
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8945d385-c3ad-4b0c-a44b-08db3c45cdc4
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT044.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4202

The rangeset_{xxx}_range() functions are invoked with 'start' and 'size' as
arguments which are either 'uint64_t' or 'paddr_t'. However, these functions
accept 'unsigned long' for 'start' and 'size'. 'unsigned long' is 32 bits on
Arm32. Thus, there is an implicit downcast from 'uint64_t'/'paddr_t' to
'unsigned long' when invoking rangeset_{xxx}_range().

So it may seem that there is a possibility of data loss due to truncation.

In reality, 'start' and 'size' are always page aligned, and Arm32 currently
supports 40 bits as the width of a physical address.
As the addresses are page aligned, the last 12 bits contain zeroes.
Thus, we can instead pass the page frame number, which needs only 28 bits
(40 - 12 on Arm32) and can therefore be represented using 'unsigned long'.

On Arm64, this change does not induce any adverse side effect, as the width of
a physical address is 48 bits. Thus, the width of a 'gfn' (i.e. 48 - 12 = 36
bits) can be represented using 'unsigned long' (which is 64 bits wide).
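
The arithmetic above can be sketched in C. This is only an illustration:
PAGE_SHIFT and paddr_t here are stand-ins for Xen's definitions, and uint32_t
emulates Arm32's 32-bit 'unsigned long' (the host's own 'unsigned long' may be
wider):

```c
#include <stdint.h>

/* Illustrative stand-ins for Xen's PAGE_SHIFT / paddr_t. */
#define PAGE_SHIFT 12
typedef uint64_t paddr_t;

/* Emulate Arm32's 32-bit 'unsigned long'. */
typedef uint32_t arm32_ulong;

static paddr_t roundtrip_as_frame(paddr_t pa)
{
    /* A page-aligned 40-bit address yields a frame number of at most
     * 28 bits, so it fits in the 32-bit type without loss. */
    arm32_ulong pfn = (arm32_ulong)(pa >> PAGE_SHIFT);
    return (paddr_t)pfn << PAGE_SHIFT;
}

static paddr_t roundtrip_as_address(paddr_t pa)
{
    /* Passing the raw address through the 32-bit type drops the top bits. */
    arm32_ulong truncated = (arm32_ulong)pa;
    return (paddr_t)truncated;
}
```

A page-aligned address above 4 GiB survives the frame-number round trip but
not the raw-address one, which is exactly why the patch switches the rangeset
calls to frame numbers.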

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v3 - 1. Extracted the patch from https://lists.xenproject.org/archives/html/xen-devel/2023-02/msg00657.html
and added it to this series.
2. Modified add_ext_regions() so that it accepts frame numbers instead of
physical addresses.

v4 - 1. Reworded the commit message to use Arm32/Arm64
(32-bit/64-bit Arm architecture).
2. Replaced pfn with gfn to denote guest frame number in add_ext_regions().
3. Use pfn_to_paddr() to return a physical address from the guest frame number.

 xen/arch/arm/domain_build.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 4f9d4f9d88..c8f08d8ee2 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1500,10 +1500,13 @@ static int __init make_resv_memory_node(const struct domain *d,
     return res;
 }
 
-static int __init add_ext_regions(unsigned long s, unsigned long e, void *data)
+static int __init add_ext_regions(unsigned long s_gfn, unsigned long e_gfn,
+                                  void *data)
 {
     struct meminfo *ext_regions = data;
     paddr_t start, size;
+    paddr_t s = pfn_to_paddr(s_gfn);
+    paddr_t e = pfn_to_paddr(e_gfn);
 
     if ( ext_regions->nr_banks >= ARRAY_SIZE(ext_regions->bank) )
         return 0;
@@ -1566,7 +1569,8 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
     {
         start = bootinfo.mem.bank[i].start;
         end = bootinfo.mem.bank[i].start + bootinfo.mem.bank[i].size;
-        res = rangeset_add_range(unalloc_mem, start, end - 1);
+        res = rangeset_add_range(unalloc_mem, PFN_DOWN(start),
+                                 PFN_DOWN(end - 1));
         if ( res )
         {
             printk(XENLOG_ERR "Failed to add: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1580,7 +1584,8 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
     {
         start = assign_mem->bank[i].start;
         end = assign_mem->bank[i].start + assign_mem->bank[i].size;
-        res = rangeset_remove_range(unalloc_mem, start, end - 1);
+        res = rangeset_remove_range(unalloc_mem, PFN_DOWN(start),
+                                    PFN_DOWN(end - 1));
         if ( res )
         {
             printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1595,7 +1600,8 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
         start = bootinfo.reserved_mem.bank[i].start;
         end = bootinfo.reserved_mem.bank[i].start +
             bootinfo.reserved_mem.bank[i].size;
-        res = rangeset_remove_range(unalloc_mem, start, end - 1);
+        res = rangeset_remove_range(unalloc_mem, PFN_DOWN(start),
+                                    PFN_DOWN(end - 1));
         if ( res )
         {
             printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1607,7 +1613,7 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
     /* Remove grant table region */
     start = kinfo->gnttab_start;
     end = kinfo->gnttab_start + kinfo->gnttab_size;
-    res = rangeset_remove_range(unalloc_mem, start, end - 1);
+    res = rangeset_remove_range(unalloc_mem, PFN_DOWN(start), PFN_DOWN(end - 1));
     if ( res )
     {
         printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1617,7 +1623,7 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
 
     start = 0;
     end = (1ULL << p2m_ipa_bits) - 1;
-    res = rangeset_report_ranges(unalloc_mem, start, end,
+    res = rangeset_report_ranges(unalloc_mem, PFN_DOWN(start), PFN_DOWN(end),
                                  add_ext_regions, ext_regions);
     if ( res )
         ext_regions->nr_banks = 0;
@@ -1639,7 +1645,7 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
 
     start = addr & PAGE_MASK;
     end = PAGE_ALIGN(addr + len);
-    res = rangeset_remove_range(mem_holes, start, end - 1);
+    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end - 1));
     if ( res )
     {
         printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1677,7 +1683,7 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
     /* Start with maximum possible addressable physical memory range */
     start = 0;
     end = (1ULL << p2m_ipa_bits) - 1;
-    res = rangeset_add_range(mem_holes, start, end);
+    res = rangeset_add_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end));
     if ( res )
     {
         printk(XENLOG_ERR "Failed to add: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1708,7 +1714,8 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
 
             start = addr & PAGE_MASK;
             end = PAGE_ALIGN(addr + size);
-            res = rangeset_remove_range(mem_holes, start, end - 1);
+            res = rangeset_remove_range(mem_holes, PFN_DOWN(start),
+                                        PFN_DOWN(end - 1));
             if ( res )
             {
                 printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1735,7 +1742,7 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
 
     start = 0;
     end = (1ULL << p2m_ipa_bits) - 1;
-    res = rangeset_report_ranges(mem_holes, start, end,
+    res = rangeset_report_ranges(mem_holes, PFN_DOWN(start), PFN_DOWN(end),
                                  add_ext_regions,  ext_regions);
     if ( res )
         ext_regions->nr_banks = 0;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 17:38:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 17:38:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520855.808936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0tj-0001bG-1M; Thu, 13 Apr 2023 17:38:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520855.808936; Thu, 13 Apr 2023 17:38:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0ti-0001b9-TM; Thu, 13 Apr 2023 17:38:14 +0000
Received: by outflank-mailman (input) for mailman id 520855;
 Thu, 13 Apr 2023 17:38:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2GAK=AE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pn0tg-0000a4-MG
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 17:38:12 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2061b.outbound.protection.outlook.com
 [2a01:111:f400:7eab::61b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f4dd232e-da21-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 19:38:11 +0200 (CEST)
Received: from BN9P221CA0015.NAMP221.PROD.OUTLOOK.COM (2603:10b6:408:10a::20)
 by IA1PR12MB6162.namprd12.prod.outlook.com (2603:10b6:208:3ea::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 17:38:06 +0000
Received: from BN8NAM11FT038.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10a:cafe::83) by BN9P221CA0015.outlook.office365.com
 (2603:10b6:408:10a::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 17:38:06 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT038.mail.protection.outlook.com (10.13.176.246) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 17:38:06 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:38:06 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 10:38:05 -0700
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 13 Apr 2023 12:38:04 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4dd232e-da21-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FjnpaNQgeB09VBLKGlQ0sJYvmiqhGvzsDcXao1Us+0osoovi2UxchVODf3IE99rXyTLOvALePYHeEZZntRIGG3OrFo89SzaCgTWlB8+Hf5yabx++zGe8dEtSef9yKxdm+OyX/b/0snNmMX7XH/7ik846ZhwI+PK/R/3WHYvN5OPz9l3xhckc+3bIkbv2wWsnJ/9Qd/2qQ54uY7EXqbHkBkGCAyKClcrHm/rPdQHC1pXijH0of69Amg2iaA4+ry1liM9fpZYkKkhHofEJy9ZKVokUxYVz0NeU2tYKX5meUIMbiE7JNmjoLughKZo+lCKYFLhFn+/VvcjM601epKRoJQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=C+5czMOChY06NbD7GBIlDCpf1fZosY+JcIjRqcYf2ok=;
 b=lEBn6g3F4vf0MU4cSvu8Fm6UhrsVHmjAxej8vUnPldlccjfyqAnpn/RjHx8S7A4tsE5uFJmEuMjKFYBtynHyYZrHwrN1Mx5RMBBxNgXsEyh2VeeoCJ4LOU0TQaqe3Lu041cHZR7VzLPBZtat/7TSz0LMX9KVvLvSn/1oVDM9wDWtvhs/79atYga1GMeRFLNxhsh1CLKGiNiXAwV73lcKFZO8pobnaiDLuqQKLMbEcvKg14yj1DZu0LxONi2zkDqyp57O+nt8Kv1ypnQtHOdslDr1hvCq14/4a9ZOTLVHYrd+7QYZMYseI+0oIGXLSs+D+caXVbUgXTRICKOv3oJRLQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C+5czMOChY06NbD7GBIlDCpf1fZosY+JcIjRqcYf2ok=;
 b=KmPEj3yWExJKc6dOITUkp/OzXtHs04BLGAm/5lTaR7QVHTAJB8HZIgptGH+67Kh3KxCQBHd1osPIWwC1frpMDTpRgBcMmm9NTgnBPVxNYfFz6UR1JkFyBclxdGFcMBv4sOQ2D91y7/kSb/eXZK8euh/cu/Ww8tdE1uMnpWyWSv8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5 03/10] xen/arm: Introduce a wrapper for dt_device_get_address() to handle paddr_t
Date: Thu, 13 Apr 2023 18:37:28 +0100
Message-ID: <20230413173735.48387-4-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT038:EE_|IA1PR12MB6162:EE_
X-MS-Office365-Filtering-Correlation-Id: 6e35c369-2398-4e22-8367-08db3c45d750
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NjT3a55Y92ln30oEP7rU+udXS1MzAdqohmmEoWQ9wdPdsgWd8KzMxVcX/9j2s3up40EuE+cCyXC8J8ZQV7As2IeNTc/tC+T6hrRMLG8NgUbp5v1Gkx//fhEUM2+n/Poo70/+JJ8LIDZzcvrRNNZuVEbT5QtxoU/JUHbO6qAphZGuTOxTBy7KlqWdYSVkzP5GvJZlT2eYKwG1Wc3UMSaEYZ7uNtfHpsmTGZnAH2Ngr6kJiRuPsCn2psjo3DEgf2HXxfsjoqy02NKfEhQWLymJ3/Zq9KBcuVvuJ1tX8yHfpnmubEw1CrgYJgeOBa7D22n4ZeVl+asw/zERj32nupB9/Lja206auWuazefQWE4ClKKLS2LF+GxdM8vluBWXnxfgcd50SzeW49BhALIXgTFscAmchOAMz7c7bqE2W8Lb6YxjPby6PzAKotnVGGc04TPbGd/cEdlW/JnBSguO+xp69OzX23hidzHdVe4bOczKhZTbbqAGB33Um4ksgZ1xRNVd3iIgIp3ksVZJYwP/qtPJdDZevGU0rUOXrino8mg0S34TCGlGrflA7Tdm0crLCUzlvpjshpUI/ESv7YAvUMMfrPUwK63P+bPwvxcsNlzRbu+HDoZmq+TMIVO+x3/Aly85I8AxJft2c+nYjl8pnOMlrnOwrPm+Lvo+0rnvcCLdDJNy3j+mr+lRkk7ssACzYJFt2IchgBhgkqdC1lmNfPu/6klbwplSuiXMF2RmXnw4MPE=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(396003)(346002)(136003)(451199021)(46966006)(36840700001)(40470700004)(7416002)(47076005)(36860700001)(82740400003)(2616005)(5660300002)(83380400001)(30864003)(2906002)(336012)(426003)(40460700003)(26005)(1076003)(186003)(8936002)(8676002)(6666004)(54906003)(36756003)(6916009)(4326008)(86362001)(70586007)(70206006)(316002)(81166007)(356005)(40480700001)(41300700001)(966005)(103116003)(82310400005)(478600001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 17:38:06.5376
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6e35c369-2398-4e22-8367-08db3c45d750
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT038.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB6162

dt_device_get_address() accepts only uint64_t for the address and size.
However, the address/size denote physical addresses. Thus, they should
be represented by 'paddr_t'.
Consequently, we introduce a wrapper for dt_device_get_address(), i.e.
dt_device_get_paddr(), which accepts the address/size as paddr_t and in
turn invokes dt_device_get_address() after converting the address/size to
uint64_t.

The reason for introducing this is that in the future 'paddr_t' may not
always be 64 bits wide. Thus, we need an explicit wrapper to do the type
conversion and return an error in case of truncation.

With this, callers can now invoke dt_device_get_paddr().
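
The wrapper pattern can be sketched as follows. This is a hypothetical,
self-contained illustration: 'paddr32_t' emulates paddr_t under a 32-bit
physical-address configuration, and get_paddr_checked() takes the uint64_t
values directly, whereas the real dt_device_get_paddr() obtains them from
dt_device_get_address():

```c
#include <stdint.h>
#include <errno.h>

/* Emulates paddr_t when physical addresses are configured as 32-bit. */
typedef uint32_t paddr32_t;

static int get_paddr_checked(uint64_t dt_addr, uint64_t dt_size,
                             paddr32_t *addr, paddr32_t *size)
{
    /* Truncation check in the style of "dt_addr != (paddr_t)dt_addr":
     * narrowing and widening again must reproduce the original value. */
    if ( dt_addr != (uint64_t)(paddr32_t)dt_addr ||
         dt_size != (uint64_t)(paddr32_t)dt_size )
        return -ERANGE;

    *addr = (paddr32_t)dt_addr;
    *size = (paddr32_t)dt_size;
    return 0;
}
```

An address that fits in 32 bits is narrowed and returned; one above 4 GiB
fails the check and reports an error instead of silently truncating.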

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v1 - 1. New patch.

v2 - 1. Extracted part of "[XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size"
into this patch.

2. dt_device_get_address() callers now invoke dt_device_get_paddr() instead.

3. Logged error in case of truncation.

v3 - 1. Modified the truncation checks as "dt_addr != (paddr_t)dt_addr".
2. Some sanity fixes.

v4 - 1. Some sanity fixes.
2. Preserved the declaration of dt_device_get_address() in
xen/include/xen/device_tree.h. The reason being it is currently used by
ns16550.c. This driver requires some more changes, as pointed out by Jan in
https://lore.kernel.org/xen-devel/6196e90f-752e-e61a-45ce-37e46c22b812@suse.com/,
which is to be addressed as a separate series.

 xen/arch/arm/domain_build.c                | 10 +++----
 xen/arch/arm/gic-v2.c                      | 10 +++----
 xen/arch/arm/gic-v3-its.c                  |  4 +--
 xen/arch/arm/gic-v3.c                      | 10 +++----
 xen/arch/arm/pci/pci-host-common.c         |  6 ++--
 xen/arch/arm/platforms/brcm-raspberry-pi.c |  2 +-
 xen/arch/arm/platforms/brcm.c              |  6 ++--
 xen/arch/arm/platforms/exynos5.c           | 32 ++++++++++----------
 xen/arch/arm/platforms/sunxi.c             |  2 +-
 xen/arch/arm/platforms/xgene-storm.c       |  2 +-
 xen/common/device_tree.c                   | 35 ++++++++++++++++++++++
 xen/drivers/char/cadence-uart.c            |  4 +--
 xen/drivers/char/exynos4210-uart.c         |  4 +--
 xen/drivers/char/imx-lpuart.c              |  4 +--
 xen/drivers/char/meson-uart.c              |  4 +--
 xen/drivers/char/mvebu-uart.c              |  4 +--
 xen/drivers/char/omap-uart.c               |  4 +--
 xen/drivers/char/pl011.c                   |  6 ++--
 xen/drivers/char/scif-uart.c               |  4 +--
 xen/drivers/passthrough/arm/ipmmu-vmsa.c   |  8 ++---
 xen/drivers/passthrough/arm/smmu-v3.c      |  2 +-
 xen/drivers/passthrough/arm/smmu.c         |  8 ++---
 xen/include/xen/device_tree.h              | 13 ++++++++
 23 files changed, 116 insertions(+), 68 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 15c8bdd9e4..7d28b75517 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1698,13 +1698,13 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
     dt_for_each_device_node( dt_host, np )
     {
         unsigned int naddr;
-        u64 addr, size;
+        paddr_t addr, size;
 
         naddr = dt_number_of_address(np);
 
         for ( i = 0; i < naddr; i++ )
         {
-            res = dt_device_get_address(np, i, &addr, &size);
+            res = dt_device_get_paddr(np, i, &addr, &size);
             if ( res )
             {
                 printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
@@ -2478,7 +2478,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
     unsigned int naddr;
     unsigned int i;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
     bool own_device = !dt_device_for_passthrough(dev);
     /*
      * We want to avoid mapping the MMIO in dom0 for the following cases:
@@ -2533,7 +2533,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
     /* Give permission and map MMIOs */
     for ( i = 0; i < naddr; i++ )
     {
-        res = dt_device_get_address(dev, i, &addr, &size);
+        res = dt_device_get_paddr(dev, i, &addr, &size);
         if ( res )
         {
             printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
@@ -2964,7 +2964,7 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
         if ( res )
         {
             printk(XENLOG_ERR "Unable to permit to dom%d access to"
-                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
+                   " 0x%"PRIpaddr" - 0x%"PRIpaddr"\n",
                    kinfo->d->domain_id,
                    mstart & PAGE_MASK, PAGE_ALIGN(mstart + size) - 1);
             return res;
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 5d4d298b86..6476ff4230 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -993,7 +993,7 @@ static void gicv2_extension_dt_init(const struct dt_device_node *node)
             continue;
 
         /* Get register frame resource from DT. */
-        if ( dt_device_get_address(v2m, 0, &addr, &size) )
+        if ( dt_device_get_paddr(v2m, 0, &addr, &size) )
             panic("GICv2: Cannot find a valid v2m frame address\n");
 
         /*
@@ -1018,19 +1018,19 @@ static void __init gicv2_dt_init(void)
     paddr_t vsize;
     const struct dt_device_node *node = gicv2_info.node;
 
-    res = dt_device_get_address(node, 0, &dbase, NULL);
+    res = dt_device_get_paddr(node, 0, &dbase, NULL);
     if ( res )
         panic("GICv2: Cannot find a valid address for the distributor\n");
 
-    res = dt_device_get_address(node, 1, &cbase, &csize);
+    res = dt_device_get_paddr(node, 1, &cbase, &csize);
     if ( res )
         panic("GICv2: Cannot find a valid address for the CPU\n");
 
-    res = dt_device_get_address(node, 2, &hbase, NULL);
+    res = dt_device_get_paddr(node, 2, &hbase, NULL);
     if ( res )
         panic("GICv2: Cannot find a valid address for the hypervisor\n");
 
-    res = dt_device_get_address(node, 3, &vbase, &vsize);
+    res = dt_device_get_paddr(node, 3, &vbase, &vsize);
     if ( res )
         panic("GICv2: Cannot find a valid address for the virtual CPU\n");
 
diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index 1ec9934191..3aa4edda10 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -1004,12 +1004,12 @@ static void gicv3_its_dt_init(const struct dt_device_node *node)
      */
     dt_for_each_child_node(node, its)
     {
-        uint64_t addr, size;
+        paddr_t addr, size;
 
         if ( !dt_device_is_compatible(its, "arm,gic-v3-its") )
             continue;
 
-        if ( dt_device_get_address(its, 0, &addr, &size) )
+        if ( dt_device_get_paddr(its, 0, &addr, &size) )
             panic("GICv3: Cannot find a valid ITS frame address\n");
 
         add_to_host_its_list(addr, size, its);
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index bb59ea94cd..4e6c98bada 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1377,7 +1377,7 @@ static void __init gicv3_dt_init(void)
     int res, i;
     const struct dt_device_node *node = gicv3_info.node;
 
-    res = dt_device_get_address(node, 0, &dbase, NULL);
+    res = dt_device_get_paddr(node, 0, &dbase, NULL);
     if ( res )
         panic("GICv3: Cannot find a valid distributor address\n");
 
@@ -1393,9 +1393,9 @@ static void __init gicv3_dt_init(void)
 
     for ( i = 0; i < gicv3.rdist_count; i++ )
     {
-        uint64_t rdist_base, rdist_size;
+        paddr_t rdist_base, rdist_size;
 
-        res = dt_device_get_address(node, 1 + i, &rdist_base, &rdist_size);
+        res = dt_device_get_paddr(node, 1 + i, &rdist_base, &rdist_size);
         if ( res )
             panic("GICv3: No rdist base found for region %d\n", i);
 
@@ -1417,10 +1417,10 @@ static void __init gicv3_dt_init(void)
      * For GICv3 supporting GICv2, GICC and GICV base address will be
      * provided.
      */
-    res = dt_device_get_address(node, 1 + gicv3.rdist_count,
+    res = dt_device_get_paddr(node, 1 + gicv3.rdist_count,
                                 &cbase, &csize);
     if ( !res )
-        dt_device_get_address(node, 1 + gicv3.rdist_count + 2,
+        dt_device_get_paddr(node, 1 + gicv3.rdist_count + 2,
                               &vbase, &vsize);
 }
 
diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
index a8ece94303..5550f9478d 100644
--- a/xen/arch/arm/pci/pci-host-common.c
+++ b/xen/arch/arm/pci/pci-host-common.c
@@ -93,7 +93,7 @@ gen_pci_init(struct dt_device_node *dev, const struct pci_ecam_ops *ops)
         cfg_reg_idx = 0;
 
     /* Parse our PCI ecam register address */
-    err = dt_device_get_address(dev, cfg_reg_idx, &addr, &size);
+    err = dt_device_get_paddr(dev, cfg_reg_idx, &addr, &size);
     if ( err )
         goto err_exit;
 
@@ -349,10 +349,10 @@ int __init pci_host_bridge_mappings(struct domain *d)
 
         for ( i = 0; i < dt_number_of_address(dev); i++ )
         {
-            uint64_t addr, size;
+            paddr_t addr, size;
             int err;
 
-            err = dt_device_get_address(dev, i, &addr, &size);
+            err = dt_device_get_paddr(dev, i, &addr, &size);
             if ( err )
             {
                 printk(XENLOG_ERR
diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
index 811b40b1a6..407ec07f63 100644
--- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
+++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
@@ -64,7 +64,7 @@ static void __iomem *rpi4_map_watchdog(void)
     if ( !node )
         return NULL;
 
-    ret = dt_device_get_address(node, 0, &start, &len);
+    ret = dt_device_get_paddr(node, 0, &start, &len);
     if ( ret )
     {
         printk("Cannot read watchdog register address\n");
diff --git a/xen/arch/arm/platforms/brcm.c b/xen/arch/arm/platforms/brcm.c
index d481b2c60f..951e4d6cc3 100644
--- a/xen/arch/arm/platforms/brcm.c
+++ b/xen/arch/arm/platforms/brcm.c
@@ -40,7 +40,7 @@ static __init int brcm_get_dt_node(char *compat_str,
                                    u32 *reg_base)
 {
     const struct dt_device_node *node;
-    u64 reg_base_64;
+    paddr_t reg_base_paddr;
     int rc;
 
     node = dt_find_compatible_node(NULL, NULL, compat_str);
@@ -50,7 +50,7 @@ static __init int brcm_get_dt_node(char *compat_str,
         return -ENOENT;
     }
 
-    rc = dt_device_get_address(node, 0, &reg_base_64, NULL);
+    rc = dt_device_get_paddr(node, 0, &reg_base_paddr, NULL);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "%s: missing \"reg\" prop\n", __func__);
@@ -61,7 +61,7 @@ static __init int brcm_get_dt_node(char *compat_str,
         *dn = node;
 
     if ( reg_base )
-        *reg_base = reg_base_64;
+        *reg_base = reg_base_paddr;
 
     return 0;
 }
diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
index 6560507092..c48093cd4f 100644
--- a/xen/arch/arm/platforms/exynos5.c
+++ b/xen/arch/arm/platforms/exynos5.c
@@ -42,8 +42,8 @@ static int exynos5_init_time(void)
     void __iomem *mct;
     int rc;
     struct dt_device_node *node;
-    u64 mct_base_addr;
-    u64 size;
+    paddr_t mct_base_addr;
+    paddr_t size;
 
     node = dt_find_compatible_node(NULL, NULL, "samsung,exynos4210-mct");
     if ( !node )
@@ -52,14 +52,14 @@ static int exynos5_init_time(void)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, &mct_base_addr, &size);
+    rc = dt_device_get_paddr(node, 0, &mct_base_addr, &size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in \"samsung,exynos4210-mct\"\n");
         return -ENXIO;
     }
 
-    dprintk(XENLOG_INFO, "mct_base_addr: %016llx size: %016llx\n",
+    dprintk(XENLOG_INFO, "mct_base_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr"\n",
             mct_base_addr, size);
 
     mct = ioremap_nocache(mct_base_addr, size);
@@ -97,9 +97,9 @@ static int __init exynos5_smp_init(void)
     struct dt_device_node *node;
     void __iomem *sysram;
     char *compatible;
-    u64 sysram_addr;
-    u64 size;
-    u64 sysram_offset;
+    paddr_t sysram_addr;
+    paddr_t size;
+    paddr_t sysram_offset;
     int rc;
 
     node = dt_find_compatible_node(NULL, NULL, "samsung,secure-firmware");
@@ -125,13 +125,13 @@ static int __init exynos5_smp_init(void)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, &sysram_addr, &size);
+    rc = dt_device_get_paddr(node, 0, &sysram_addr, &size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in %s\n", compatible);
         return -ENXIO;
     }
-    dprintk(XENLOG_INFO, "sysram_addr: %016llx size: %016llx offset: %016llx\n",
+    dprintk(XENLOG_INFO, "sysram_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr" offset: 0x%"PRIpaddr"\n",
             sysram_addr, size, sysram_offset);
 
     sysram = ioremap_nocache(sysram_addr, size);
@@ -189,7 +189,7 @@ static int exynos5_cpu_power_up(void __iomem *power, int cpu)
     return 0;
 }
 
-static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
+static int exynos5_get_pmu_baseandsize(paddr_t *power_base_addr, paddr_t *size)
 {
     struct dt_device_node *node;
     int rc;
@@ -208,14 +208,14 @@ static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, power_base_addr, size);
+    rc = dt_device_get_paddr(node, 0, power_base_addr, size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in \"samsung,exynos5XXX-pmu\"\n");
         return -ENXIO;
     }
 
-    dprintk(XENLOG_DEBUG, "power_base_addr: %016llx size: %016llx\n",
+    dprintk(XENLOG_DEBUG, "power_base_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr"\n",
             *power_base_addr, *size);
 
     return 0;
@@ -223,8 +223,8 @@ static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
 
 static int exynos5_cpu_up(int cpu)
 {
-    u64 power_base_addr;
-    u64 size;
+    paddr_t power_base_addr;
+    paddr_t size;
     void __iomem *power;
     int rc;
 
@@ -256,8 +256,8 @@ static int exynos5_cpu_up(int cpu)
 
 static void exynos5_reset(void)
 {
-    u64 power_base_addr;
-    u64 size;
+    paddr_t power_base_addr;
+    paddr_t size;
     void __iomem *pmu;
     int rc;
 
diff --git a/xen/arch/arm/platforms/sunxi.c b/xen/arch/arm/platforms/sunxi.c
index e8e4d88bef..2b2c215f20 100644
--- a/xen/arch/arm/platforms/sunxi.c
+++ b/xen/arch/arm/platforms/sunxi.c
@@ -50,7 +50,7 @@ static void __iomem *sunxi_map_watchdog(bool *new_wdt)
         return NULL;
     }
 
-    ret = dt_device_get_address(node, 0, &wdt_start, &wdt_len);
+    ret = dt_device_get_paddr(node, 0, &wdt_start, &wdt_len);
     if ( ret )
     {
         dprintk(XENLOG_ERR, "Cannot read watchdog register address\n");
diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index befd0c3c2d..6fc2f9679e 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -50,7 +50,7 @@ static void __init xgene_check_pirq_eoi(void)
     if ( !node )
         panic("%s: Can not find interrupt controller node\n", __func__);
 
-    res = dt_device_get_address(node, 0, &dbase, NULL);
+    res = dt_device_get_paddr(node, 0, &dbase, NULL);
     if ( res )
         panic("%s: Cannot find a valid address for the distributor\n", __func__);
 
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 6c9712ab7b..fdef74e7ff 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -955,6 +955,41 @@ int dt_device_get_address(const struct dt_device_node *dev, unsigned int index,
     return 0;
 }
 
+int dt_device_get_paddr(const struct dt_device_node *dev, unsigned int index,
+                        paddr_t *addr, paddr_t *size)
+{
+    uint64_t dt_addr = 0, dt_size = 0;
+    int ret;
+
+    ret = dt_device_get_address(dev, index, &dt_addr, &dt_size);
+    if ( ret )
+        return ret;
+
+    if ( addr )
+    {
+        if ( dt_addr != (paddr_t)dt_addr )
+        {
+            printk("Error: Physical address 0x%"PRIx64" for node=%s is greater than max width (%zu bytes) supported\n",
+                   dt_addr, dev->name, sizeof(paddr_t));
+            return -ERANGE;
+        }
+
+        *addr = dt_addr;
+    }
+
+    if ( size )
+    {
+        if ( dt_size != (paddr_t)dt_size )
+        {
+            printk("Error: Physical size 0x%"PRIx64" for node=%s is greater than max width (%zu bytes) supported\n",
+                   dt_size, dev->name, sizeof(paddr_t));
+            return -ERANGE;
+        }
+        *size = dt_size;
+    }
+
+    return ret;
+}
 
 int dt_for_each_range(const struct dt_device_node *dev,
                       int (*cb)(const struct dt_device_node *,
diff --git a/xen/drivers/char/cadence-uart.c b/xen/drivers/char/cadence-uart.c
index 22905ba66c..c38d7ed143 100644
--- a/xen/drivers/char/cadence-uart.c
+++ b/xen/drivers/char/cadence-uart.c
@@ -158,14 +158,14 @@ static int __init cuart_init(struct dt_device_node *dev, const void *data)
     const char *config = data;
     struct cuart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &cuart_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("cadence: Unable to retrieve the base"
diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 43aaf02e18..2503392ccd 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -303,7 +303,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     const char *config = data;
     struct exynos4210_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
@@ -316,7 +316,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     uart->parity    = PARITY_NONE;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("exynos4210: Unable to retrieve the base"
diff --git a/xen/drivers/char/imx-lpuart.c b/xen/drivers/char/imx-lpuart.c
index 9c1f3b71a3..77f70c2719 100644
--- a/xen/drivers/char/imx-lpuart.c
+++ b/xen/drivers/char/imx-lpuart.c
@@ -204,7 +204,7 @@ static int __init imx_lpuart_init(struct dt_device_node *dev,
     const char *config = data;
     struct imx_lpuart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
@@ -216,7 +216,7 @@ static int __init imx_lpuart_init(struct dt_device_node *dev,
     uart->parity = 0;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("imx8-lpuart: Unable to retrieve the base"
diff --git a/xen/drivers/char/meson-uart.c b/xen/drivers/char/meson-uart.c
index b1e25e0468..c627328122 100644
--- a/xen/drivers/char/meson-uart.c
+++ b/xen/drivers/char/meson-uart.c
@@ -209,14 +209,14 @@ static int __init meson_uart_init(struct dt_device_node *dev, const void *data)
     const char *config = data;
     struct meson_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &meson_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("meson: Unable to retrieve the base address of the UART\n");
diff --git a/xen/drivers/char/mvebu-uart.c b/xen/drivers/char/mvebu-uart.c
index a00618b96f..cc55173513 100644
--- a/xen/drivers/char/mvebu-uart.c
+++ b/xen/drivers/char/mvebu-uart.c
@@ -231,14 +231,14 @@ static int __init mvebu_uart_init(struct dt_device_node *dev, const void *data)
     const char *config = data;
     struct mvebu3700_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &mvebu3700_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("mvebu3700: Unable to retrieve the base address of the UART\n");
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index d6a5d59aa2..8e643cb039 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -324,7 +324,7 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     struct omap_uart *uart;
     u32 clkspec;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
@@ -344,7 +344,7 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     uart->parity = UART_PARITY_NONE;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("omap-uart: Unable to retrieve the base"
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index be67242bc0..052a651251 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -222,7 +222,7 @@ static struct uart_driver __read_mostly pl011_driver = {
     .vuart_info   = pl011_vuart,
 };
 
-static int __init pl011_uart_init(int irq, u64 addr, u64 size, bool sbsa)
+static int __init pl011_uart_init(int irq, paddr_t addr, paddr_t size, bool sbsa)
 {
     struct pl011 *uart;
 
@@ -258,14 +258,14 @@ static int __init pl011_dt_uart_init(struct dt_device_node *dev,
 {
     const char *config = data;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
     {
         printk("WARNING: UART configuration is not supported\n");
     }
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("pl011: Unable to retrieve the base"
diff --git a/xen/drivers/char/scif-uart.c b/xen/drivers/char/scif-uart.c
index 2fccafe340..1b28ba90e9 100644
--- a/xen/drivers/char/scif-uart.c
+++ b/xen/drivers/char/scif-uart.c
@@ -311,14 +311,14 @@ static int __init scif_uart_init(struct dt_device_node *dev,
     const char *config = data;
     struct scif_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &scif_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("scif-uart: Unable to retrieve the base"
diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
index 091f09b217..611d9eeba5 100644
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -794,7 +794,7 @@ static void ipmmu_device_reset(struct ipmmu_vmsa_device *mmu)
 static __init bool ipmmu_stage2_supported(void)
 {
     struct dt_device_node *np;
-    uint64_t addr, size;
+    paddr_t addr, size;
     void __iomem *base;
     uint32_t product, cut;
     bool stage2_supported = false;
@@ -806,7 +806,7 @@ static __init bool ipmmu_stage2_supported(void)
         return false;
     }
 
-    if ( dt_device_get_address(np, 0, &addr, &size) )
+    if ( dt_device_get_paddr(np, 0, &addr, &size) )
     {
         printk(XENLOG_ERR "ipmmu: Failed to get PRR MMIO\n");
         return false;
@@ -884,7 +884,7 @@ static int ipmmu_probe(struct dt_device_node *node)
 {
     const struct dt_device_match *match;
     struct ipmmu_vmsa_device *mmu;
-    uint64_t addr, size;
+    paddr_t addr, size;
     uint32_t reg;
     int irq, ret;
 
@@ -905,7 +905,7 @@ static int ipmmu_probe(struct dt_device_node *node)
     bitmap_zero(mmu->ctx, IPMMU_CTX_MAX);
 
     /* Map I/O memory and request IRQ. */
-    ret = dt_device_get_address(node, 0, &addr, &size);
+    ret = dt_device_get_paddr(node, 0, &addr, &size);
     if ( ret )
     {
         dev_err(&node->dev, "Failed to get MMIO\n");
diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index bfdb62b395..b7fa2e90f7 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -2428,7 +2428,7 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 	}
 
 	/* Base address */
-	ret = dt_device_get_address(np, 0, &ioaddr, &iosize);
+	ret = dt_device_get_paddr(np, 0, &ioaddr, &iosize);
 	if (ret)
 		goto out_free_smmu;
 
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0a514821b3..79281075ba 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -73,8 +73,8 @@
 /* Xen: Helpers to get device MMIO and IRQs */
 struct resource
 {
-	u64 addr;
-	u64 size;
+	paddr_t addr;
+	paddr_t size;
 	unsigned int type;
 };
 
@@ -101,7 +101,7 @@ static struct resource *platform_get_resource(struct platform_device *pdev,
 
 	switch (type) {
 	case IORESOURCE_MEM:
-		ret = dt_device_get_address(pdev, num, &res.addr, &res.size);
+		ret = dt_device_get_paddr(pdev, num, &res.addr, &res.size);
 
 		return ((ret) ? NULL : &res);
 
@@ -169,7 +169,7 @@ static void __iomem *devm_ioremap_resource(struct device *dev,
 	ptr = ioremap_nocache(res->addr, res->size);
 	if (!ptr) {
 		dev_err(dev,
-			"ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n",
+			"ioremap failed (addr 0x%"PRIpaddr" size 0x%"PRIpaddr")\n",
 			res->addr, res->size);
 		return ERR_PTR(-ENOMEM);
 	}
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 11bda2fd3d..c1dc5400e1 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -581,6 +581,19 @@ int dt_find_node_by_gpath(XEN_GUEST_HANDLE(char) u_path, uint32_t u_plen,
  */
 const struct dt_device_node *dt_get_parent(const struct dt_device_node *node);
 
+/**
+ * dt_device_get_paddr - Resolve an address for a device
+ * @device: the device whose address is to be resolved
+ * @index: index of the address to resolve
+ * @addr: address filled by this function
+ * @size: size filled by this function
+ *
+ * This function resolves an address, walking the tree, for a given
+ * device-tree node. It returns 0 on success.
+ */
+int dt_device_get_paddr(const struct dt_device_node *dev, unsigned int index,
+                        paddr_t *addr, paddr_t *size);
+
 /**
  * dt_device_get_address - Resolve an address for a device
  * @device: the device whose address is to be resolved
-- 
2.17.1
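[Editorial aside, not part of the series: the truncation check added in
dt_device_get_paddr() follows a common narrowing pattern — cast the wide
value to the narrower type and compare it against the original. A minimal
sketch, assuming a 32-bit paddr_t (e.g. Arm32 without LPAE); the typedef
and helper name here are hypothetical stand-ins:]

```c
#include <stdint.h>
#include <errno.h>

/* Illustration only: assume a 32-bit physical address type. */
typedef uint32_t paddr_t;

/*
 * Mirror of the check in dt_device_get_paddr(): reject a 64-bit DT
 * address that does not survive a round-trip through paddr_t.
 */
int check_paddr(uint64_t dt_addr, paddr_t *out)
{
    if (dt_addr != (paddr_t)dt_addr)
        return -ERANGE; /* value is wider than paddr_t can hold */

    *out = dt_addr;
    return 0;
}
```

[With a 64-bit paddr_t the cast is a no-op and the check always passes, so
the helper costs nothing on configurations that do not need it.]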



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 17:38:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 17:38:15 +0000
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5 04/10] xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to SMMU_CBn_TTBR0
Date: Thu, 13 Apr 2023 18:37:29 +0100
Message-ID: <20230413173735.48387-5-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

Refer to ARM IHI 0062D.c ID070116 (SMMU v2.0 spec), section 17.3.9,
page 17-360: SMMU_CBn_TTBR0 is a 64-bit register. Thus, one can use
writeq_relaxed_non_atomic() to write to it, instead of invoking
writel_relaxed() twice for the lower and upper halves of the register.

This also helps us because p2maddr is 'paddr_t' (which may become u32 in
the future). Thus, one can assign p2maddr to a 64-bit variable and do the
bit manipulations on it to generate the value for SMMU_CBn_TTBR0.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes from -

v1 - 1. Extracted the patch from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
Use writeq_relaxed_non_atomic() to write the u64 register in a
non-atomic fashion.

v2 - 1. Added R-b.

v3 - 1. No changes.

v4 - 1. Reordered the R-b. No further changes.
(This patch can be committed independent of the series).

 xen/drivers/passthrough/arm/smmu.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 79281075ba..c8ef2a925f 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -499,8 +499,7 @@ enum arm_smmu_s2cr_privcfg {
 #define ARM_SMMU_CB_SCTLR		0x0
 #define ARM_SMMU_CB_RESUME		0x8
 #define ARM_SMMU_CB_TTBCR2		0x10
-#define ARM_SMMU_CB_TTBR0_LO		0x20
-#define ARM_SMMU_CB_TTBR0_HI		0x24
+#define ARM_SMMU_CB_TTBR0		0x20
 #define ARM_SMMU_CB_TTBCR		0x30
 #define ARM_SMMU_CB_S1_MAIR0		0x38
 #define ARM_SMMU_CB_FSR			0x58
@@ -1083,6 +1082,7 @@ static void arm_smmu_flush_pgtable(struct arm_smmu_device *smmu, void *addr,
 static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
 {
 	u32 reg;
+	u64 reg64;
 	bool stage1;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
@@ -1177,12 +1177,13 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
 	dev_notice(smmu->dev, "d%u: p2maddr 0x%"PRIpaddr"\n",
 		   smmu_domain->cfg.domain->domain_id, p2maddr);
 
-	reg = (p2maddr & ((1ULL << 32) - 1));
-	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_LO);
-	reg = (p2maddr >> 32);
+	reg64 = p2maddr;
+
 	if (stage1)
-		reg |= ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT;
-	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_HI);
+		reg64 |= (((uint64_t) (ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT))
+		         << 32);
+
+	writeq_relaxed_non_atomic(reg64, cb_base + ARM_SMMU_CB_TTBR0);
 
 	/*
 	 * TTBCR
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 17:38:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 17:38:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520858.808956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0tl-0002H4-Tg; Thu, 13 Apr 2023 17:38:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520858.808956; Thu, 13 Apr 2023 17:38:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0tl-0002Go-PC; Thu, 13 Apr 2023 17:38:17 +0000
Received: by outflank-mailman (input) for mailman id 520858;
 Thu, 13 Apr 2023 17:38:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2GAK=AE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pn0tk-0000a4-Qi
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 17:38:16 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20601.outbound.protection.outlook.com
 [2a01:111:f400:7eae::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f8380ab3-da21-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 19:38:15 +0200 (CEST)
Received: from BN8PR03CA0034.namprd03.prod.outlook.com (2603:10b6:408:94::47)
 by MN0PR12MB5931.namprd12.prod.outlook.com (2603:10b6:208:37e::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 17:38:13 +0000
Received: from BN8NAM11FT022.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:94:cafe::ef) by BN8PR03CA0034.outlook.office365.com
 (2603:10b6:408:94::47) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 17:38:13 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT022.mail.protection.outlook.com (10.13.176.112) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 17:38:12 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:38:12 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 13 Apr 2023 12:38:10 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8380ab3-da21-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=O/W/qQZ/U0+/znam5EFu1zzL1byXAU25t3wksNhuQg0gB+oRRGivzRQQl25QNBuzq7iN71tRHcsAK2djH3WpOaPbJj3Yjj+iGotVEyHPWPDnZs1Bjsa53RlkNEmRWfSjDMDmAk0sxZoQMlD6mHb+HEVSG0U6QE8Jzsc/vVXyA4+83GkmfY+HzWcvpgeZcar2sD3+YiisCZSqVwDw7o2O53ccZwzjUPfZnget8yHox1UnVoTGe2FRl5QE9ejNimXfbvvSLCA8pBBhp9NxV/F6iKG/XsCrhfjftOoZaYsud1QZwiIX9Ufvi93W381qc0BerYkeQND3zh4HCWriVpNENw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rTjpNh+BHfToII30u7IVf5AGDUbfYB8YKJohyCLK1pQ=;
 b=oFSfRGim8riEmY+EBg1Tp3kn3Etl8U0qo/ClbnYhrHzORJ7Mx8CLvv3qxRy1Em6fY8oJI78zRX+yj8Pt237B+Ssib/7Wj1beW+xf4Q3ImIRmaRowZ260wtUTZQyr6585qthxULDUO8y8k1CYzpRr3M3t3drPc+An6I6bdUsPAZVtkPdLIgLhQCA7jXgG8bUdnwLspFJlgq58xVkt/WoIykbIf4zrr4AlqaITtI7GKRNsvzkAJQOEQXbb+ZlcMZcNhpldtRC7GuQn0QuOUPtCHdxUgdHXvExzOfOCFyKBExw8fD3anvAys05p7l3uCSEv5pMDvBBk72BndOqPLESLvg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rTjpNh+BHfToII30u7IVf5AGDUbfYB8YKJohyCLK1pQ=;
 b=tEhm6AW4eQRfV6ho0FbQogVzJddsBYsVAdmf7jk04n65gm1+Dv9KoEUfmRPVHLBc5w2GCibGKHR1RPLtukblXNq6vQ2vSNg13Qq0EAu1j/fOGYuLINUdVaWbFLUZvZhHJNgkQYIdauvVEWWyue2c+4ecCqu4k6QFoTEqlT8QPDU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5 05/10] xen/arm: Introduce choice to enable 64/32 bit physical addressing
Date: Thu, 13 Apr 2023 18:37:30 +0100
Message-ID: <20230413173735.48387-6-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT022:EE_|MN0PR12MB5931:EE_
X-MS-Office365-Filtering-Correlation-Id: cbdf1f5b-395a-4866-7162-08db3c45dafd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	PerdVabi5ZzwYIBT1Jnn0FQ5u42GWVrc8Sxcf2ZQZg9nZQbMOjaTtfKnMxgMK1afKkXvK/JvTcNloFclamJYxy7R9H2NSgjdEJp87EqLvttz/wzPyLnI48cjaYdKPDYgs1XsWVhhpcVLOuezbP38elN5L8sewsO+1MKjrGrK4idAXNNzUWjKPjMEjCfbzwfiI2/uhPHSdqVVxJT0MJoZgCNsSxgU1ED1FyPvhj9vucxKIDUd39gLyVR0BKFRkBiq0KC1jvj8sWyurLPO2xSS/dzZmZRPYRXnGCJIwdyXntDkA/KEbBKSMSff+SlZC+FdJ9ug1+vMzmPw9r4gjMAkYdlI2NSSjc4qdEYWRXWkODKpBMj8jOc6MKXR9kMWK4lNifSs04rGLm3B6QblM6T5OeMwnKiNsMXrMCiZK5buF7rYIomJ463Rs1dEQgemIZ//YsxIUQklFhIuGzR4UZP4mVPtoSNp0bkW5AY3n9g5hVFYdvYBTZAD86qML0xaxXea2TtNYyqFm7UGzTFB3MA8fOXuXcMFOYP5pnIdWbFIUCSvcG8ZncRAh7fjx3k7wDd/XsM4fpy5yHN6mjgMGqDe9/11WwMKvTdzvWlZpI6GJK5UVKUwjWubmuFHweA+UCO5jRVkZMqOA6N3fNrH6Q8fEo4EAIZ7ZZSCMZTEMKlKdkYw9oPXwhV2zVAgAO0TLTRK4CIqKQxBBxxPSjrTVfodGRiUY6as5bWp+U16BgtNcVQ=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(376002)(136003)(39860400002)(451199021)(36840700001)(46966006)(40470700004)(2906002)(7416002)(8676002)(70586007)(70206006)(8936002)(5660300002)(103116003)(478600001)(41300700001)(40460700003)(316002)(40480700001)(83380400001)(82740400003)(426003)(336012)(36756003)(54906003)(6916009)(4326008)(186003)(86362001)(6666004)(26005)(81166007)(1076003)(2616005)(36860700001)(82310400005)(356005)(47076005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 17:38:12.7041
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cbdf1f5b-395a-4866-7162-08db3c45dafd
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT022.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB5931

Some Arm-based hardware platforms which do not support LPAE
(e.g. Cortex-R52) use 32-bit physical addresses.
Also, users may choose to use 32 bits to represent physical addresses
for optimization.

To support the above use cases, we have introduced an arch-independent
config to choose whether a physical address is represented using
32 bits (PHYS_ADDR_T_32) or 64 bits (!PHYS_ADDR_T_32).
For now, only ARM_32 provides support for enabling 32-bit physical
addressing.

When PHYS_ADDR_T_32 is defined, PADDR_BITS is set to 32.
When PHYS_ADDR_T_32 is not defined for ARM_32, PADDR_BITS is set to 40.
When PHYS_ADDR_T_32 is not defined for ARM_64, PADDR_BITS is set to 48.
The last two match the configurations Xen uses today.

PADDR_BITS is also set to 48 whenever ARM_64 is defined, because the
choice between ARM_PA_BITS_32/ARM_PA_BITS_40/ARM_PA_BITS_48 is
currently only offered when ARM_32 is defined.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -
v1 - 1. Extracted from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".

v2 - 1. Introduced Kconfig choice. ARM_64 can select PHYS_ADDR_64 only whereas
ARM_32 can select PHYS_ADDR_32 or PHYS_ADDR_64.
2. For CONFIG_ARM_PA_32, paddr_t is defined as 'unsigned long'. 

v3 - 1. Allow user to define PADDR_BITS by selecting different config options
ARM_PA_BITS_32, ARM_PA_BITS_40 and ARM_PA_BITS_48.
2. Add the choice under "Architecture Features".

v4 - 1. Removed PHYS_ADDR_T_64 as !PHYS_ADDR_T_32 implies 64-bit physical addressing.

 xen/arch/Kconfig                     |  3 +++
 xen/arch/arm/Kconfig                 | 37 ++++++++++++++++++++++++++--
 xen/arch/arm/include/asm/page-bits.h |  6 +----
 xen/arch/arm/include/asm/types.h     |  6 +++++
 xen/arch/arm/mm.c                    |  5 ++++
 5 files changed, 50 insertions(+), 7 deletions(-)

diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
index 7028f7b74f..67ba38f32f 100644
--- a/xen/arch/Kconfig
+++ b/xen/arch/Kconfig
@@ -1,6 +1,9 @@
 config 64BIT
 	bool
 
+config PHYS_ADDR_T_32
+	bool
+
 config NR_CPUS
 	int "Maximum number of CPUs"
 	range 1 4095
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..3f6e13e475 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -19,13 +19,46 @@ config ARM
 	select HAS_PMAP
 	select IOMMU_FORCE_PT_SHARE
 
+menu "Architecture Features"
+
+choice
+	prompt "Physical address space size" if ARM_32
+	default ARM_PA_BITS_48 if ARM_64
+	default ARM_PA_BITS_40 if ARM_32
+	help
+	  Choose the width used to represent a physical address. Choosing a
+	  smaller width can sometimes help in reducing the size of the
+	  resulting image.
+
+config ARM_PA_BITS_32
+	bool "32-bit"
+	help
+	  Choose this option on platforms where every physical address can be
+	  represented within 32 bits. This helps reduce the size of the
+	  binary.
+	select PHYS_ADDR_T_32
+	depends on ARM_32
+
+config ARM_PA_BITS_40
+	bool "40-bit"
+	depends on ARM_32
+
+config ARM_PA_BITS_48
+	bool "48-bit"
+	depends on ARM_64
+endchoice
+
+config PADDR_BITS
+	int
+	default 32 if ARM_PA_BITS_32
+	default 40 if ARM_PA_BITS_40
+	default 48 if ARM_PA_BITS_48 || ARM_64
+
 config ARCH_DEFCONFIG
 	string
 	default "arch/arm/configs/arm32_defconfig" if ARM_32
 	default "arch/arm/configs/arm64_defconfig" if ARM_64
 
-menu "Architecture Features"
-
 source "arch/Kconfig"
 
 config ACPI
diff --git a/xen/arch/arm/include/asm/page-bits.h b/xen/arch/arm/include/asm/page-bits.h
index 5d6477e599..deb381ceeb 100644
--- a/xen/arch/arm/include/asm/page-bits.h
+++ b/xen/arch/arm/include/asm/page-bits.h
@@ -3,10 +3,6 @@
 
 #define PAGE_SHIFT              12
 
-#ifdef CONFIG_ARM_64
-#define PADDR_BITS              48
-#else
-#define PADDR_BITS              40
-#endif
+#define PADDR_BITS              CONFIG_PADDR_BITS
 
 #endif /* __ARM_PAGE_SHIFT_H__ */
diff --git a/xen/arch/arm/include/asm/types.h b/xen/arch/arm/include/asm/types.h
index e218ed77bd..e3cfbbb060 100644
--- a/xen/arch/arm/include/asm/types.h
+++ b/xen/arch/arm/include/asm/types.h
@@ -34,9 +34,15 @@ typedef signed long long s64;
 typedef unsigned long long u64;
 typedef u32 vaddr_t;
 #define PRIvaddr PRIx32
+#if defined(CONFIG_PHYS_ADDR_T_32)
+typedef unsigned long paddr_t;
+#define INVALID_PADDR (~0UL)
+#define PRIpaddr "08lx"
+#else
 typedef u64 paddr_t;
 #define INVALID_PADDR (~0ULL)
 #define PRIpaddr "016llx"
+#endif
 typedef u32 register_t;
 #define PRIregister "08x"
 #elif defined (CONFIG_ARM_64)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index b99806af99..6dc37be97e 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -690,6 +690,11 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
     int rc;
 
+    /*
+     * The size of paddr_t should be sufficient for the complete range of
+     * physical addresses.
+     */
+    BUILD_BUG_ON((sizeof(paddr_t) * 8) < PADDR_BITS);
     BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
 
     if ( frametable_size > FRAMETABLE_SIZE )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 17:38:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 17:38:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520867.808966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0u0-0003OQ-7l; Thu, 13 Apr 2023 17:38:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520867.808966; Thu, 13 Apr 2023 17:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0u0-0003OH-2d; Thu, 13 Apr 2023 17:38:32 +0000
Received: by outflank-mailman (input) for mailman id 520867;
 Thu, 13 Apr 2023 17:38:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2GAK=AE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pn0tx-0000rf-Vv
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 17:38:29 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20624.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ff2291d1-da21-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 19:38:28 +0200 (CEST)
Received: from BN9P223CA0023.NAMP223.PROD.OUTLOOK.COM (2603:10b6:408:10b::28)
 by SJ1PR12MB6099.namprd12.prod.outlook.com (2603:10b6:a03:45e::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 17:38:22 +0000
Received: from BN8NAM11FT010.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10b:cafe::f6) by BN9P223CA0023.outlook.office365.com
 (2603:10b6:408:10b::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 17:38:21 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT010.mail.protection.outlook.com (10.13.177.53) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.33 via Frontend Transport; Thu, 13 Apr 2023 17:38:21 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:38:21 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 10:38:20 -0700
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 13 Apr 2023 12:38:19 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff2291d1-da21-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dTAkMA4qWNy5q7AncRiVsRwX/Y3Qu1od1qromFsPaqgUv5cMHBc/oHTNDRERP0biw+4lHXWPY+7AiVKVkQA73Xo7U3Omyi3Uwl7uhTw7xhJnjPB2vxSoZjoF0YFHFeOhRu4comZXNnVqjB/MbMV0dN+qd6rwz6TPPfl5/Zja/v86vfKiP8tfx1Dgyq7FfGB6AG/mbRKZ2BZlZGLSeplLhB6rU/tVx8MDE9yB0uMpNfSGLlmmA5fMjo0fpdnAY7uEsHu5TlR4hGVIlESCPPDgvq4mKdQFEhRkHeFOSjwEGxIUh3PGCmwpzLKTwTfgY97BTItfnRu++mAM6BLuHtFQug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eTQJ7lPK89ZtnWjeWUiqYAA5GuUNieXG2h1mgdQDvYw=;
 b=QdMKqOmqqWuZ2g7ks9ZCSbK/joHEmJncSmxizfVXTyzdpihHkosinviFCrHKgdR/1fRYDum+U/PJEyS5bjhYxa0tmET0UEC0lpNLbIqOStHCh3x7q4nMezJi8GEbRBKO2KJpnNZ2SGY4MIYSeFGo7YlRS2QCzOkH3Cdvy/S8jd27MqfD4JB3mj2oVhX9Pj7WFAesd0KUcXTngISzxLgPX6PqU0WQm2b8ufMttTneawb3e1PSfcSeDpw0/lr8cvzvW1OfDoymZni4ORPOlaC4eihVkCqVUXS0TxTTRv0hp83VO6NmWCKWeopsztvoAjRFNup2x3M5bQZZV+DZO1ACtA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eTQJ7lPK89ZtnWjeWUiqYAA5GuUNieXG2h1mgdQDvYw=;
 b=vqHOMzgyq886Y1jjUWpFN3KHM7/VP2C/5TVEbTt7HIDsj1blMl6FsHRWmcY7uwKJgyQxybq33DTw6wQHhf0jWbKMg4F7RNFcCW4VA3DAjBpweHUm3LU0EXwWQQB8rVt6c4wamMgwyePtSc3AcUfHNfxCPU4Mk5EFoaHlXmHnkpA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5 06/10] xen/arm: guest_walk: LPAE specific bits should be enclosed within "ifndef CONFIG_PHYS_ADDR_T_32"
Date: Thu, 13 Apr 2023 18:37:31 +0100
Message-ID: <20230413173735.48387-7-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT010:EE_|SJ1PR12MB6099:EE_
X-MS-Office365-Filtering-Correlation-Id: edeceec4-7f4a-4f4c-54cd-08db3c45e046
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	sXZ/mepfmmEommLgduBj8X7Rq+imLHEjXk+9t4ECL+FTnjEuN+QMWM6tIw/KJSdVxiVaH9iz9H8QLzLNJ1hc5Ew0BHnEdMt02fgRbMtXlc3FHjbfEztZI3tO3W0IHAer+0/FYY/pBSTqpHwnoWgGt/hC8oedIFI4uwajR4lXnbdfyrQC7bxbqCRH1VYj5reynX1eq1M8FiLn9WVhgzWM3PxO6YUpR0HL4TAZWeMZqnj216Wcx8V057HKAAyGkZToye+aM8unSp7cXXPUciCjNsXa017GJkerITYGEabvzfR7FWx2jnS/MCCzXwgmSaXOIGjTOKC5c301zP9zyxbdlaArNC+nANVkhwuH3RfXDCTgA0QtT3+P7zT0P6wHVFV6QI2r+HndtUG59NZ6pIxCe9H3cPBoso5kg1lcmqFxx+BX/+5g4CM3JYi6G6PypipuFt8n6Zm1Pl8oXxfKoAo8lRkHi3DTHD9P5TfmRkSBn2Lui1VO755SbPYbXLsLF1hd1Yo6l4VrpfQ4ky81Rqnfn9xlDEDUWgo/GFuBkLUBXmIQXZlmNkas9GNAZuXPBBaVuaT5DX6EBD6+xeiFf/8+q8CfIjYNyUlqw3O4U/1813Jvwy7BeLhjoGMD7Vszts2w2D9bfNz+abQWfCisudshRLDCW4ck7DL+AoAldQ23W16DOMTzCJaRh+ZWYLaugFSU5aY7JXSmxVONbaNpnFl8iAgUXy/LClDLNQplxxQz7wQ=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(376002)(39860400002)(346002)(451199021)(36840700001)(40470700004)(46966006)(316002)(6916009)(4326008)(82740400003)(70206006)(70586007)(2616005)(47076005)(426003)(336012)(5660300002)(41300700001)(82310400005)(6666004)(36756003)(86362001)(40460700003)(54906003)(40480700001)(26005)(186003)(1076003)(103116003)(2906002)(83380400001)(7416002)(8676002)(8936002)(36860700001)(478600001)(356005)(81166007)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 17:38:21.5704
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: edeceec4-7f4a-4f4c-54cd-08db3c45e046
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT010.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ1PR12MB6099

As the previous patch introduced CONFIG_PHYS_ADDR_T_32 to support 32-bit
physical addresses, the code specific to the "Large Physical Address
Extension" (i.e. LPAE) should be enclosed within "#ifndef
CONFIG_PHYS_ADDR_T_32".

Refer to "short_desc_l1_supersec_t" in xen/arch/arm/include/asm/short-desc.h:
unsigned int extbase1:4;    /* Extended base address, PA[35:32] */
unsigned int extbase2:4;    /* Extended base address, PA[39:36] */

Thus, extbase1 and extbase2 are not valid when only 32-bit physical
addresses are supported.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes from -
v1 - 1. Extracted from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".

v2 - 1. Reordered this patch so that it appears after CONFIG_ARM_PA_32 is
introduced (in 6/9).

v3 - 1. Updated the commit message.
2. Added Ack.

v4 - 1. No changes.

 xen/arch/arm/guest_walk.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index 43d3215304..c80a0ce55b 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -154,8 +154,10 @@ static bool guest_walk_sd(const struct vcpu *v,
             mask = (1ULL << L1DESC_SUPERSECTION_SHIFT) - 1;
             *ipa = gva & mask;
             *ipa |= (paddr_t)(pte.supersec.base) << L1DESC_SUPERSECTION_SHIFT;
+#ifndef CONFIG_PHYS_ADDR_T_32
             *ipa |= (paddr_t)(pte.supersec.extbase1) << L1DESC_SUPERSECTION_EXT_BASE1_SHIFT;
             *ipa |= (paddr_t)(pte.supersec.extbase2) << L1DESC_SUPERSECTION_EXT_BASE2_SHIFT;
+#endif /* !CONFIG_PHYS_ADDR_T_32 */
         }
 
         /* Set permissions so that the caller can check the flags by herself. */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 17:38:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 17:38:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520869.808973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0u0-0003Rd-Jm; Thu, 13 Apr 2023 17:38:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520869.808973; Thu, 13 Apr 2023 17:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0u0-0003QK-Bd; Thu, 13 Apr 2023 17:38:32 +0000
Received: by outflank-mailman (input) for mailman id 520869;
 Thu, 13 Apr 2023 17:38:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2GAK=AE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pn0ty-0000rf-Jp
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 17:38:30 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on2060e.outbound.protection.outlook.com
 [2a01:111:f400:7e83::60e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ff6d9a91-da21-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 19:38:28 +0200 (CEST)
Received: from BN9PR03CA0804.namprd03.prod.outlook.com (2603:10b6:408:13f::29)
 by CO6PR12MB5459.namprd12.prod.outlook.com (2603:10b6:303:13b::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 17:38:25 +0000
Received: from BN8NAM11FT103.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:13f:cafe::d0) by BN9PR03CA0804.outlook.office365.com
 (2603:10b6:408:13f::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 17:38:25 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT103.mail.protection.outlook.com (10.13.176.181) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 17:38:24 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:38:24 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:38:23 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 13 Apr 2023 12:38:22 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff6d9a91-da21-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iQn2M8pWbl2jB7tc/Ag8Zx07cgVO49EN7iGXkxIbIW+FZUV+jqbOcXIq1sENatxoLbtEhB0Zyj/OL7W5CEOFbBYmQu32o+hGsRAnx1UeHuvmmUZwFEZHPjxmpp9/3FERydTxtBUiDLOJB8JMzOqFVvDuZo9g/WdIkTvm7Fane1K+u+k1wOBFG6eyq9u4YbAevnJYmstQWZfv5u+uUpWk5sbFCOeOdKQtSq9LtehYDkxKnEItGGOf0WgiM2Ty0LsEVINg4ltwHbu2bLnmCdQuXjIGkahE/gLBx2yjnF0AoixxIOpbEpidOp7PVFfAxaO5tESklzqokz4Rjk6ZbMvFcQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mYDtZ/9ITc1VO1Lc7mTKB3Dr1Wuk5Ug6oJSWDmGegSc=;
 b=A43q0DJB7kb2klo9rLvydenhxMyVW3/danjiK3WNmgAoPqPwKVDGOhGpMC1zvbtJGHHlOkkmYvlWmFh1hRMg/L/zc4m4Wj4jDeH0V5YjVmoTJWxiQHPDeFFoX7cSIA/tKR1Id5Ypnrh3bZLmNvAx3YRiuQlfjROVB/ikwGRWwS4IJfcZW0pRL/qlOGO5Bk2icKeYtyYMP3sjik41nE0jOd7JOG1dHZB3mb1/Zi8rm0aJjrcC8PmfPYvtKZcaGkj+J1CvvuG2N3gEgGxKJ+hZdJm/oO8pJIy+zWQ7rArmtDcEdJCPXeMF6Qyws0fou9r2d/iqOA7SwYWApgZHdR3UMg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mYDtZ/9ITc1VO1Lc7mTKB3Dr1Wuk5Ug6oJSWDmGegSc=;
 b=B74EyGQmT0A80kK5WDbx9XYlYeIgKjTcAZ/wBrClnIDoB/UYIgKbYMWrUjSySxnRKbYu9ILLBW/4Fktpy0M1Yd8GaLXswY34OL3ZRb/5JZ7p1dDdEan8l0enr+dKRHE9P+eglWo3p2eI52K1puT0GVlW+rOT/8OSTW1/EdFLVwM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5 07/10] xen/arm: Restrict zeroeth_table_offset for ARM_64
Date: Thu, 13 Apr 2023 18:37:32 +0100
Message-ID: <20230413173735.48387-8-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT103:EE_|CO6PR12MB5459:EE_
X-MS-Office365-Filtering-Correlation-Id: a09b65e1-a62c-4f92-61d3-08db3c45e1f2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	aMbQQyI0lhjB3Ol7ogO4+excqyipn5GqqqDSeN/UqMAq9vzOnoFksbyhlkHEk2P0n86ZZeMpenwIQ+cogYthlkd5q7xYOFsRpzmCGDsVfAj6+nqRFgpqKpW+F2pxUAbutXyBqdUstnApCphwDvhcpdaHLAjfpKQPzTR4iSy1guPC1woiN2bploUbzHWSgOQ9Gd3O56hACfjUALgjuwbsDayZ7hKzdTpjNbae3mvHPZyFTf57PMzV6RqOZHMBB7gocl5TB18SxkpPuHW9Sw7uKG2/gbCtSPOlkCCMccMsnBnVwAl6x6XpFpN9JgPJ7b/Ci3bfADvA3dvChmnZQO7ed069ao89BHm0caX3h3KGx0TcZ/74nzbwKKe4rS3hIzWt/GgDclhqZ8evNXXDfp/NZOVF0fLqOw//3RhgQCOcaxZWjtgrLa+v7MqktRdpYeYHrE9cjBtvoYPt30mfByFuFMhbXWuDRdzuIocaSqKmtuVLf+AoBsf8mKxZwnUyyL4rrjxbZ6bMP/skcA0F5vKddUgVyJShphqw96MKTwjApHojV5WLSNEyMMbsXuWvkRzbdpkkMrTs+gg6yN3srKAUaleITMpoc5W3Qt9kq/mE3hvbuZc1myOJuVtKX8mKbbpgY5CSg8eqi8uoPHKf4CoGJqc6syA6qjMOZlmeeg9Wx7H29e0H7hoYKNd46RHO7umEfrFGBSl5TeKA172PFskc/WJ6oulYJPEqkmbvg433Fvg=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(136003)(346002)(396003)(451199021)(40470700004)(36840700001)(46966006)(40460700003)(316002)(41300700001)(81166007)(1076003)(26005)(186003)(86362001)(6666004)(47076005)(356005)(36860700001)(2616005)(82310400005)(426003)(336012)(83380400001)(82740400003)(6916009)(4326008)(54906003)(36756003)(40480700001)(70206006)(70586007)(8676002)(8936002)(5660300002)(2906002)(7416002)(103116003)(478600001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 17:38:24.3756
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a09b65e1-a62c-4f92-61d3-08db3c45e1f2
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT103.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO6PR12MB5459

When 32-bit physical addresses are used (i.e. PHYS_ADDR_T_32=y),
"va >> ZEROETH_SHIFT" causes an overflow.
Also, there is no zeroeth-level page table on Arm32.

Also take the opportunity to clean up dump_pt_walk(): use the
DECLARE_OFFSETS() macro instead of declaring an array of page-table
offsets by hand.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v1 - Removed the duplicate declaration for DECLARE_OFFSETS.

v2 - 1. Reworded the commit message. 
2. Use CONFIG_ARM_PA_32 to restrict zeroeth_table_offset.

v3 - 1. Added R-b and Ack.

v4 - 1. Removed R-b and Ack as we use CONFIG_PHYS_ADDR_T_32
instead of CONFIG_ARM_PA_BITS_32. This is to be in parity with our earlier
patches where we use CONFIG_PHYS_ADDR_T_32 to denote 32-bit physical addr
support.

 xen/arch/arm/include/asm/lpae.h | 4 ++++
 xen/arch/arm/mm.c               | 7 +------
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
index 3fdd5d0de2..7d2f6fd1bd 100644
--- a/xen/arch/arm/include/asm/lpae.h
+++ b/xen/arch/arm/include/asm/lpae.h
@@ -259,7 +259,11 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
 #define first_table_offset(va)  TABLE_OFFSET(first_linear_offset(va))
 #define second_table_offset(va) TABLE_OFFSET(second_linear_offset(va))
 #define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
+#ifdef CONFIG_PHYS_ADDR_T_32
+#define zeroeth_table_offset(va)  0
+#else
 #define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
+#endif
 
 /*
  * Macros to define page-tables:
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 6dc37be97e..247510ac57 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -221,12 +221,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
 {
     static const char *level_strs[4] = { "0TH", "1ST", "2ND", "3RD" };
     const mfn_t root_mfn = maddr_to_mfn(ttbr);
-    const unsigned int offsets[4] = {
-        zeroeth_table_offset(addr),
-        first_table_offset(addr),
-        second_table_offset(addr),
-        third_table_offset(addr)
-    };
+    DECLARE_OFFSETS(offsets, addr);
     lpae_t pte, *mapping;
     unsigned int level, root_table;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 17:38:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 17:38:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520870.808986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0u2-0003xL-15; Thu, 13 Apr 2023 17:38:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520870.808986; Thu, 13 Apr 2023 17:38:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0u1-0003x8-Sx; Thu, 13 Apr 2023 17:38:33 +0000
Received: by outflank-mailman (input) for mailman id 520870;
 Thu, 13 Apr 2023 17:38:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2GAK=AE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pn0u0-0000rf-43
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 17:38:32 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20616.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 00b15c15-da22-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 19:38:30 +0200 (CEST)
Received: from BN9PR03CA0464.namprd03.prod.outlook.com (2603:10b6:408:139::19)
 by CH0PR12MB5106.namprd12.prod.outlook.com (2603:10b6:610:bd::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 17:38:27 +0000
Received: from BN8NAM11FT104.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:139:cafe::75) by BN9PR03CA0464.outlook.office365.com
 (2603:10b6:408:139::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 17:38:27 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT104.mail.protection.outlook.com (10.13.177.160) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 17:38:27 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:38:26 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 10:38:26 -0700
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 13 Apr 2023 12:38:25 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00b15c15-da22-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aH+Zr849Fo64fg+pXGsVYf5EHX5ceYm9v4BSa/xIJEgJySUpm/TpwUvcclB6UglPqEbI2T9x28+ki+kzBGxtG+hLW7e0oshtjWcOVlNt2XPxpBAgrFaM8s+wQlVNwQeU67sX13QgtJz1xqEB01dY+FKyhkZKOOiXuHBPye3I0Z77y1dA6za2TvBbqmvwRQgwuB/SNaW97xKacVXXiVQ1/Xk/4OBqkivCV3EgZSNPE2hEDu+0XUM9xFnNZopUefD8hghPBDpsXVBgtUZHCacJMEVoDsuaP9Jr7BVlojoPGtpWMLoqIiNYVRMCDK6JjqFDapz00N7PI0jX5i3aXm8yjw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8DOUnV1l7YFE+Fgge1cDptrXAMS0Z5E0PBq8a/I6OGs=;
 b=JdS3NgRtL+UpNBoW+sWqzDwtV9iViFx/ZYPAmtqpfFF4SokOHjj4wbTt+7YuGlJ7SdE7b/Vo5m6x7Ws2YsqpPdof+ekcrbb1p9i+4+xx3TlhvH8EYrF/2Wlx6eokrx4fu1SrNP46EXn5aXwHwAjet21IrEbLyX5VkoFy8kUmddVJ3RAg/oJEgUObHN5L3zbL1dnALhHQ8aBj5rpQasIjRDgTL1b0F+7Ie3rIcmF6+UTThd3PZweliXboJBZednVP5eZGQhanyXaTnst6PdIzIDyFxnD1eCcoFCnoqy89Pzy25pBXiI5KSzKzHt7qmAfAH4tliDQAGlL9GzVd9hTHRg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8DOUnV1l7YFE+Fgge1cDptrXAMS0Z5E0PBq8a/I6OGs=;
 b=MsdUC9FOjQo0KMvXQ8Yj50daQlpRrKzCmyMX4QbJhGKuw3uc/STM+JVy1JKdTKzvhO/L0770htzBZkOaReYSg956Gah8r8mgIgidudSw0Wsb+fVVY7dUkipXilMQtjoAx0rFHR6MuKyBkYH6HkseYSu3z3vyHWp6J4dDIH+UV/E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5 08/10] xen/arm: domain_build: Check if the address fits the range of physical address
Date: Thu, 13 Apr 2023 18:37:33 +0100
Message-ID: <20230413173735.48387-9-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT104:EE_|CH0PR12MB5106:EE_
X-MS-Office365-Filtering-Correlation-Id: 94e40bd9-7988-4fb9-219f-08db3c45e389
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 17:38:27.0413
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 94e40bd9-7988-4fb9-219f-08db3c45e389
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT104.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR12MB5106

handle_pci_range() and map_range_to_domain() take addr and len as uint64_t
parameters. Then frame numbers are obtained from addr and len by right shifting
with PAGE_SHIFT. The page frame numbers are saved using unsigned long.

Right-shifting a 64-bit value by PAGE_SHIFT leaves up to 52 significant
bits. On a 32-bit system, 'unsigned long' is 32 bits wide, so storing the
result in an 'unsigned long' can silently truncate it.

To mitigate this, check whether the start and end addresses fit within the
range of physical addresses supported on the system; if not, return an
appropriate error.

Also compute the end address once and reuse it where required, and replace
u64 with uint64_t.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from :-
v1...v4 - NA. New patch introduced in v5.

 xen/arch/arm/domain_build.c | 30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7d28b75517..b98ee506a8 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1637,15 +1637,23 @@ out:
 }
 
 static int __init handle_pci_range(const struct dt_device_node *dev,
-                                   u64 addr, u64 len, void *data)
+                                   uint64_t addr, uint64_t len, void *data)
 {
     struct rangeset *mem_holes = data;
     paddr_t start, end;
     int res;
+    uint64_t end_addr = addr + len - 1;
+
+    if ( addr != (paddr_t)addr || end_addr != (paddr_t)end_addr )
+    {
+        printk(XENLOG_ERR "addr (0x%"PRIx64") or end_addr (0x%"PRIx64") exceeds the maximum allowed width (%d bits) for physical address\n",
+               addr, end_addr, CONFIG_PADDR_BITS);
+        return -ERANGE;
+    }
 
     start = addr & PAGE_MASK;
-    end = PAGE_ALIGN(addr + len);
-    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end - 1));
+    end = PAGE_ALIGN(end_addr);
+    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end));
     if ( res )
     {
         printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -2330,11 +2338,19 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
 }
 
 int __init map_range_to_domain(const struct dt_device_node *dev,
-                               u64 addr, u64 len, void *data)
+                               uint64_t addr, uint64_t len, void *data)
 {
     struct map_range_data *mr_data = data;
     struct domain *d = mr_data->d;
     int res;
+    uint64_t end_addr = addr + len - 1;
+
+    if ( addr != (paddr_t)addr || end_addr != (paddr_t)end_addr )
+    {
+        printk(XENLOG_ERR "addr (0x%"PRIx64") or end_addr (0x%"PRIx64") exceeds the maximum allowed width (%d bits) for physical address\n",
+               addr, end_addr, CONFIG_PADDR_BITS);
+        return -ERANGE;
+    }
 
     /*
      * reserved-memory regions are RAM carved out for a special purpose.
@@ -2345,13 +2361,13 @@ int __init map_range_to_domain(const struct dt_device_node *dev,
                      strlen("/reserved-memory/")) != 0 )
     {
         res = iomem_permit_access(d, paddr_to_pfn(addr),
-                paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
+                paddr_to_pfn(PAGE_ALIGN(end_addr)));
         if ( res )
         {
             printk(XENLOG_ERR "Unable to permit to dom%d access to"
                     " 0x%"PRIx64" - 0x%"PRIx64"\n",
                     d->domain_id,
-                    addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
+                    addr & PAGE_MASK, PAGE_ALIGN(end_addr) - 1);
             return res;
         }
     }
@@ -2368,7 +2384,7 @@ int __init map_range_to_domain(const struct dt_device_node *dev,
         {
             printk(XENLOG_ERR "Unable to map 0x%"PRIx64
                    " - 0x%"PRIx64" in domain %d\n",
-                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
+                   addr & PAGE_MASK, PAGE_ALIGN(end_addr) - 1,
                    d->domain_id);
             return res;
         }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 17:39:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 17:39:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520877.808996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0uR-0005I9-Dv; Thu, 13 Apr 2023 17:38:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520877.808996; Thu, 13 Apr 2023 17:38:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0uR-0005Hs-Ay; Thu, 13 Apr 2023 17:38:59 +0000
Received: by outflank-mailman (input) for mailman id 520877;
 Thu, 13 Apr 2023 17:38:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2GAK=AE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pn0uD-0000rf-Tc
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 17:38:45 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20629.outbound.protection.outlook.com
 [2a01:111:f400:fe59::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 08bf1265-da22-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 19:38:44 +0200 (CEST)
Received: from BN1PR12CA0029.namprd12.prod.outlook.com (2603:10b6:408:e1::34)
 by CH3PR12MB7618.namprd12.prod.outlook.com (2603:10b6:610:14c::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.35; Thu, 13 Apr
 2023 17:38:40 +0000
Received: from BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e1:cafe::b3) by BN1PR12CA0029.outlook.office365.com
 (2603:10b6:408:e1::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 17:38:40 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT107.mail.protection.outlook.com (10.13.176.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 17:38:40 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:38:40 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:38:40 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 13 Apr 2023 12:38:38 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08bf1265-da22-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IpfOCpxxw3i55eJofDliLG79xwMHrN9zM1RrU9iUWchqAFkd6S/h3eyDrs3pGiFo8Ifb2DXpAn+x2VmhjkyiXWLSiAnAwG2gQJaL6WUWOWng9NUVDoeGxYe4OoUJ3BVDVTbodyN+Vha6/VlC8pjDQQncwSnwL6klmbPdU3rKVy8jP67Uh92iAckIHaXT2WrUOIdmKkJ8c48LuLOy8LWE8KqpU4Mx30VQgpEnyNKVYQG1KV31akTAx4yS3JFrwilK7Hbx/cMRxuR7ZnXPgU2+mdqbRBmaOz3F8lyQj9A1nI7mUrvdWFaamXv5E4OL/Tdq91vsyo/39MYltQ5YlCBYTQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=s0Jv/eP8TblDgr/FC5ImnfRMvlRSI+Z7UaQME8KEVKw=;
 b=Rmi2FH3pDqxm9aw9ZkTnkAI0xvbQhNlUE/3FyAh+2KS5A+DmYzuuDUmXgDI6z3zvHt8myxHR8d+ss4c9SowWBFVo9gY8APVpx7aSScrNieyjpABWs2XTKq1RzXezD6jOrC/116OG/iNxPeOYQdZwe1kQHMae5SfU/ZuK4dBBIPOKBFg9yxcK42PLvrNv2Yu13zF34PPP6dJiYf9pZDffQWMkW3YCIQHJTjXpesoBTqaaoJtoOGqIX+GwQ4Gxe0ze9os+98I0SHfx+Ok7HMnBonjJ0hI17DF7xpnhg6RMXOTkTEMwB9QgzljkJRz2tt4UKeQ20obghAphboK9FreBLg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=s0Jv/eP8TblDgr/FC5ImnfRMvlRSI+Z7UaQME8KEVKw=;
 b=gflHkHsPF8gff8+/kbhJx8Li+Yw2ZffhstP+ft8TB6nJm7TJ9G19nBru/g04I+kQvjPI+pehGelE+h5r4MO1X+Q/ee7QPPsMI71W3/NnaaElDKGnht72tlLlOIWDE8HUJQ70/hoYjWVy9VWWZ88mXRzhpOulFg/NSPUzT1JOeac=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5 09/10] xen/arm: p2m: Use the pa_range_info table to support ARM_32 and ARM_64
Date: Thu, 13 Apr 2023 18:37:34 +0100
Message-ID: <20230413173735.48387-10-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT107:EE_|CH3PR12MB7618:EE_
X-MS-Office365-Filtering-Correlation-Id: 21a193d6-7f89-415a-42af-08db3c45ebbd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 17:38:40.8052
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 21a193d6-7f89-415a-42af-08db3c45ebbd
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT107.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB7618

Restructure the code so that the pa_range_info[] table can be used for both
ARM_32 and ARM_64.

Also, remove the hardcoded definitions of P2M_ROOT_ORDER and P2M_ROOT_LEVEL,
as p2m_root_order can be obtained from pa_range_info[].root_order and
p2m_root_level from pa_range_info[].sl0.

Refer to ARM DDI 0406C.d ID040418, B3-1345,
"Use of concatenated first-level translation tables

...However, a 40-bit input address range with a translation granularity of 4KB
requires a total of 28 bits of address resolution. Therefore, a stage 2
translation that supports a 40-bit input address range requires two concatenated
first-level translation tables,..."

Thus, root-order is 1 for 40-bit IPA on ARM_32.

Refer ARM DDI 0406C.d ID040418, B3-1348,

"Determining the required first lookup level for stage 2 translations

For a stage 2 translation, the output address range from the stage 1
translations determines the required input address range for the stage 2
translation. The permitted values of VTCR.SL0 are:

0b00 Stage 2 translation lookup must start at the second level.
0b01 Stage 2 translation lookup must start at the first level.

VTCR.T0SZ must indicate the required input address range. The size of the input
address region is 2^(32-T0SZ) bytes."

Thus VTCR.SL0 = 1 (its maximum value) and VTCR.T0SZ = -8 when the size of
the input address region is 2^40 bytes.

Thus, pa_range_info[].t0sz = (1 << 4) | 8, i.e. the VTCR.S sign bit set with
a T0SZ magnitude of 8 (two's-complement -8), giving 0b11000, which is 24.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from -

v3 - 1. New patch introduced in v4.
2. Restructure the code such that pa_range_info[] is used both by ARM_32 as
well as ARM_64.

v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and P2M_ROOT_LEVEL.
The reason being root_order will not be always 1 (See the next patch).
2. Updated the commit message to explain t0sz, sl0 and root_order values for
32-bit IPA on Arm32.
3. Some sanity fixes.

 xen/arch/arm/include/asm/p2m.h |  8 +-------
 xen/arch/arm/p2m.c             | 34 ++++++++++++++++++----------------
 2 files changed, 19 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index 91df922e1c..28c68428d3 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -14,16 +14,10 @@
 /* Holds the bit size of IPAs in p2m tables.  */
 extern unsigned int p2m_ipa_bits;
 
-#ifdef CONFIG_ARM_64
 extern unsigned int p2m_root_order;
 extern unsigned int p2m_root_level;
-#define P2M_ROOT_ORDER    p2m_root_order
+#define P2M_ROOT_ORDER p2m_root_order
 #define P2M_ROOT_LEVEL p2m_root_level
-#else
-/* First level P2M is always 2 consecutive pages */
-#define P2M_ROOT_ORDER    1
-#define P2M_ROOT_LEVEL 1
-#endif
 
 struct domain;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 948f199d84..4583658f92 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -19,9 +19,9 @@
 
 #define INVALID_VMID 0 /* VMID 0 is reserved */
 
-#ifdef CONFIG_ARM_64
 unsigned int __read_mostly p2m_root_order;
 unsigned int __read_mostly p2m_root_level;
+#ifdef CONFIG_ARM_64
 static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
 /* VMID is by default 8 bit width on AArch64 */
 #define MAX_VMID       max_vmid
@@ -2265,16 +2265,6 @@ void __init setup_virt_paging(void)
     /* Setup Stage 2 address translation */
     register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
 
-#ifdef CONFIG_ARM_32
-    if ( p2m_ipa_bits < 40 )
-        panic("P2M: Not able to support %u-bit IPA at the moment\n",
-              p2m_ipa_bits);
-
-    printk("P2M: 40-bit IPA\n");
-    p2m_ipa_bits = 40;
-    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
-    val |= VTCR_SL0(0x1); /* P2M starts at first level */
-#else /* CONFIG_ARM_64 */
     static const struct {
         unsigned int pabits; /* Physical Address Size */
         unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
@@ -2283,19 +2273,24 @@ void __init setup_virt_paging(void)
     } pa_range_info[] __initconst = {
         /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */
         /*      PA size, t0sz(min), root-order, sl0(max) */
-        [0] = { 32,      32/*32*/,  0,          1 },
-        [1] = { 36,      28/*28*/,  0,          1 },
-        [2] = { 40,      24/*24*/,  1,          1 },
+        [0] = { 40,      24/*24*/,  1,          1 },
+#ifdef CONFIG_ARM_64
+        [1] = { 32,      32/*32*/,  0,          1 },
+        [2] = { 36,      28/*28*/,  0,          1 },
         [3] = { 42,      22/*22*/,  3,          1 },
         [4] = { 44,      20/*20*/,  0,          2 },
         [5] = { 48,      16/*16*/,  0,          2 },
         [6] = { 52,      12/*12*/,  4,          2 },
         [7] = { 0 }  /* Invalid */
+#else
+        [1] = { 0 }  /* Invalid */
+#endif
     };
 
     unsigned int i;
     unsigned int pa_range = 0x10; /* Larger than any possible value */
 
+#ifdef CONFIG_ARM_64
     /*
      * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
      * with IPA bits == PA bits, compare against "pabits".
@@ -2309,6 +2304,9 @@ void __init setup_virt_paging(void)
      */
     if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
         max_vmid = MAX_VMID_16_BIT;
+#else
+    p2m_ipa_bits = PADDR_BITS;
+#endif
 
     /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits". */
     for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
@@ -2324,24 +2322,28 @@ void __init setup_virt_paging(void)
     if ( pa_range >= ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_range].pabits )
         panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", pa_range);
 
+#ifdef CONFIG_ARM_64
     val |= VTCR_PS(pa_range);
     val |= VTCR_TG0_4K;
 
     /* Set the VS bit only if 16 bit VMID is supported. */
     if ( MAX_VMID == MAX_VMID_16_BIT )
         val |= VTCR_VS;
+
+    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
+#endif
+
     val |= VTCR_SL0(pa_range_info[pa_range].sl0);
     val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
 
     p2m_root_order = pa_range_info[pa_range].root_order;
     p2m_root_level = 2 - pa_range_info[pa_range].sl0;
-    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
 
     printk("P2M: %d-bit IPA with %d-bit PA and %d-bit VMID\n",
            p2m_ipa_bits,
            pa_range_info[pa_range].pabits,
            ( MAX_VMID == MAX_VMID_16_BIT ) ? 16 : 8);
-#endif
+
     printk("P2M: %d levels with order-%d root, VTCR 0x%"PRIregister"\n",
            4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 17:39:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 17:39:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520878.809000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0uR-0005Ld-Mf; Thu, 13 Apr 2023 17:38:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520878.809000; Thu, 13 Apr 2023 17:38:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn0uR-0005Kg-JE; Thu, 13 Apr 2023 17:38:59 +0000
Received: by outflank-mailman (input) for mailman id 520878;
 Thu, 13 Apr 2023 17:38:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2GAK=AE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pn0uI-0000rf-70
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 17:38:50 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20601.outbound.protection.outlook.com
 [2a01:111:f400:fe59::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0b131b5c-da22-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 19:38:48 +0200 (CEST)
Received: from DM6PR14CA0070.namprd14.prod.outlook.com (2603:10b6:5:18f::47)
 by SA1PR12MB5613.namprd12.prod.outlook.com (2603:10b6:806:23e::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.43; Thu, 13 Apr
 2023 17:38:44 +0000
Received: from DM6NAM11FT089.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:18f:cafe::db) by DM6PR14CA0070.outlook.office365.com
 (2603:10b6:5:18f::47) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 17:38:44 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT089.mail.protection.outlook.com (10.13.173.82) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.33 via Frontend Transport; Thu, 13 Apr 2023 17:38:44 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:38:43 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 13 Apr 2023 12:38:41 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b131b5c-da22-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JyCC0Wrq25jNWdJuqRlARvbyioVPtBe3BnWn08ydgVjAOrcKsJhgPhjLajeNFNuKsJ0z95iHkQZKTH+ZXYxYscokU2RugHLNVqyEITxUR+5Z8XdGO5BS2CjOAXMiIJz4anzYpft3WdVPLT7xgB93xTPdPTcUZRc4uGba7NnRsYXp9E3OfiU2pHY3YYKRZIwAf2l1n7W5quG2fUbIRRQKFfy9w28aejTtvSiTGXxN2Q9dBOimqop4mSs1Af7JSLKUqXe+w6uHgATKMmd5DU8cruewg4ApvcM1AhdUWoMWnpRLO5zzRK6Yhoa0mzN0OBjpSuzQsrPnykaLmvs63BxyBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Vf1B+8bx5vatlGHL4Tv5xbYFLO09J4Ep2zXFRUGljAc=;
 b=f2+O95wcvF1I+OM851PIc9wfxLgzNrR1vWl+uewcfRKOnEh9se+hQ0JvvIejtOPrgJ6Hljemigq3fIiX1SCj5XBkhp+WZLk+6Q8D5dBgH1fN6vWBmACcJzeqUdYTHjLOxFDQSCEQDHEYFexdzOzzi39+CIfBffxjEknI1MT7/+K5fIpLeMjUCWUlQ4QFyJVGOiBixlVoE13/MYbBra4lc/b/0UFddZhHLVTUNDbHU+cJUck462bepEDpxmJBeJmkEpeo6zcURo5e+T+giKlorU5vYB+Bgk1Xye8sAmecUlvAX3qc2qMebGN9dYEq8QCa4CSkjt67yHeiIwcTzfBAhw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Vf1B+8bx5vatlGHL4Tv5xbYFLO09J4Ep2zXFRUGljAc=;
 b=5C7jYlvjlYl3IJJ9QSCqkQZXNAJFN3WP974TtsnanagY2sHy9yTJynyz6WIJcbG5Shu+TzJovEPy2qjSlAiOzyMP0HDGcVDSI48gCWQpIrwkV5OAxk+RRNVsz5Ejj6f81tkwg+RwyTxWyvta0txWSEw4l9gblUUmXCynWsUmgfE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v5 10/10] xen/arm: p2m: Enable support for 32bit IPA for ARM_32
Date: Thu, 13 Apr 2023 18:37:35 +0100
Message-ID: <20230413173735.48387-11-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT089:EE_|SA1PR12MB5613:EE_
X-MS-Office365-Filtering-Correlation-Id: 963d6794-6a95-4b08-355b-08db3c45edca
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 17:38:44.2105
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 963d6794-6a95-4b08-355b-08db3c45edca
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT089.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB5613

Refer to Arm DDI 0406C.d ID040418, B3-1345:

"A stage 2 translation with an input address range of 31-34 bits can
start the translation either:

- With a first-level lookup, accessing a first-level translation
  table with 2-16 entries.

- With a second-level lookup, accessing a set of concatenated
  second-level translation tables"

Thus, for a 32-bit IPA there are no concatenated root-level tables, so the
root order is 0.

Also, refer to Arm DDI 0406C.d ID040418, B3-1348:
"Determining the required first lookup level for stage 2 translations

For a stage 2 translation, the output address range from the stage 1
translations determines the required input address range for the stage 2
translation. The permitted values of VTCR.SL0 are:
0b00 Stage 2 translation lookup must start at the second level.
0b01 Stage 2 translation lookup must start at the first level.

VTCR.T0SZ must indicate the required input address range. The size of
the input address region is 2^(32-T0SZ) bytes."

Thus VTCR.SL0 = 1 (its maximum value) and VTCR.T0SZ = 0 when the size of the
input address region is 2^32 bytes.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from -

v1 - New patch.

v2 - 1. Added Ack.

v3 - 1. Dropped Ack.
     2. Rebased the patch based on the previous change.

v4 - 1. t0sz is 0 for 32-bit IPA on Arm32.
     2. Updated the commit message to explain t0sz, sl0 and root_order.

 xen/arch/arm/p2m.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4583658f92..746b6553e5 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2283,7 +2283,8 @@ void __init setup_virt_paging(void)
         [6] = { 52,      12/*12*/,  4,          2 },
         [7] = { 0 }  /* Invalid */
 #else
-        [1] = { 0 }  /* Invalid */
+        [1] = { 32,      0/*0*/,    0,          1 },
+        [2] = { 0 }  /* Invalid */
 #endif
     };
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 18:52:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 18:52:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520901.809015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn23A-00064g-5e; Thu, 13 Apr 2023 18:52:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520901.809015; Thu, 13 Apr 2023 18:52:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn23A-00064Z-2q; Thu, 13 Apr 2023 18:52:04 +0000
Received: by outflank-mailman (input) for mailman id 520901;
 Thu, 13 Apr 2023 18:52:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn238-00064P-7W; Thu, 13 Apr 2023 18:52:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn238-0002iL-3D; Thu, 13 Apr 2023 18:52:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn237-0006R9-Nz; Thu, 13 Apr 2023 18:52:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pn237-0004oD-NV; Thu, 13 Apr 2023 18:52:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TjXxm45zBNVOIash0eUhuMIhvkDb3dpZsGiqpfo5Z6I=; b=VKBy2rcwvrHJAkZZbukxmexqb1
	Z0f/Ek7dZ+p2Xp52IHTYMhTW3IqolXvqd3h8rYgMC9XzByFMTgDqou2LMQEY9vQVWAaDgMCbJxH3F
	MPm1hYiGU2eJ7a48gBNR+x3yCKBnoOFgv0eTqROdnquVnWIepXOp+wpoHGy/yVrXqbbs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180227-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180227: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:build-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    libvirt:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=152770333449cd3b78b4f5a9f1148fc1f482d842
X-Osstest-Versions-That:
    libvirt=7e1b4cc19cefba11763d09799f2b742658a9a03a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Apr 2023 18:52:01 +0000

flight 180227 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180227/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              152770333449cd3b78b4f5a9f1148fc1f482d842
baseline version:
 libvirt              7e1b4cc19cefba11763d09799f2b742658a9a03a

Last test of basis   180213  2023-04-12 04:20:27 Z    1 days
Testing same since   180227  2023-04-13 04:20:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  K Shiva <shiva_kr@riseup.net>
  K Shiva Kiran <shiva_kr@riseup.net>
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Borecki <pavel.borecki@gmail.com>
  Tamara Schmitz <tamara.schmitz@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   7e1b4cc19c..1527703334  152770333449cd3b78b4f5a9f1148fc1f482d842 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 19:22:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 19:22:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520907.809026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn2WP-00012C-It; Thu, 13 Apr 2023 19:22:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520907.809026; Thu, 13 Apr 2023 19:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn2WP-000125-Ev; Thu, 13 Apr 2023 19:22:17 +0000
Received: by outflank-mailman (input) for mailman id 520907;
 Thu, 13 Apr 2023 19:22:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9nQ=AE=citrix.com=prvs=46097603d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pn2WN-00011z-G3
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 19:22:15 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7ca232e4-da30-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 21:22:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ca232e4-da30-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681413732;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=krpY0wtoNjWcMka31rRBRO3yp3RjVetI0sS//BE8xhw=;
  b=KAzTi0p3iFpFj1hL1O/KOpNMnir565ZlNUb/NvKUj2oBbsZr+YBWkBT6
   pTab9fmbds9y3+T3hLIgm4w8j4fuaFxn+9G0VBxA+d7oT3lgrAUvuPMku
   NrzFoPXJhp8YsmG3JZlKi0LLjv8jmgs5OTSoYZTwCW7ssuyQhhBFP8pyj
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 104783135
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,194,1677560400"; 
   d="scan'208";a="104783135"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] xen: Fold exit paths in find_text_region()
Date: Thu, 13 Apr 2023 20:22:01 +0100
Message-ID: <20230413192201.3255984-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Despite rcu_read_unlock() being fully inlineable, the optimiser cannot fold
these exit paths, because of the various compiler barriers providing RCU
safety.  Help the compiler out.

This compiles to marginally better code in all cases.  No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/common/virtual_region.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/common/virtual_region.c b/xen/common/virtual_region.c
index 30b0b4ab9c85..5ecdba9c08ed 100644
--- a/xen/common/virtual_region.c
+++ b/xen/common/virtual_region.c
@@ -40,20 +40,20 @@ static DEFINE_RCU_READ_LOCK(rcu_virtual_region_lock);
 
 const struct virtual_region *find_text_region(unsigned long addr)
 {
-    const struct virtual_region *region;
+    const struct virtual_region *iter, *region = NULL;
 
     rcu_read_lock(&rcu_virtual_region_lock);
-    list_for_each_entry_rcu( region, &virtual_region_list, list )
+    list_for_each_entry_rcu ( iter, &virtual_region_list, list )
     {
-        if ( (void *)addr >= region->start && (void *)addr < region->end )
+        if ( (void *)addr >= iter->start && (void *)addr < iter->end )
         {
-            rcu_read_unlock(&rcu_virtual_region_lock);
-            return region;
+            region = iter;
+            break;
         }
     }
     rcu_read_unlock(&rcu_virtual_region_lock);
 
-    return NULL;
+    return region;
 }
 
 void register_virtual_region(struct virtual_region *r)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 19:28:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 19:28:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520911.809035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn2bl-0001eR-5l; Thu, 13 Apr 2023 19:27:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520911.809035; Thu, 13 Apr 2023 19:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn2bl-0001eK-30; Thu, 13 Apr 2023 19:27:49 +0000
Received: by outflank-mailman (input) for mailman id 520911;
 Thu, 13 Apr 2023 19:27:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m4jA=AE=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pn2bj-0001eD-TS
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 19:27:48 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2062c.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 43a66ac0-da31-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 21:27:45 +0200 (CEST)
Received: from MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
 by IA1PR12MB6019.namprd12.prod.outlook.com (2603:10b6:208:3d5::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.34; Thu, 13 Apr
 2023 19:27:41 +0000
Received: from MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea]) by MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea%6]) with mapi id 15.20.6298.030; Thu, 13 Apr 2023
 19:27:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43a66ac0-da31-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dbMjtnn6XXHNi3qbL/Ikf97TDDREslIOhyTqgaN690dknApJVZ3EznKbCckGaSidZE5GSHMEIjxEF75EKCs+L4k0GozkkWRPA2slhDR8Rvj04XCtyeZYft/kd+vvD/wzuzRARlZppiztdMx69WE35nOmndg52BBt+mqX/RHp2QAhSg+LzV/afMArxAv9xYt2XJ3YaNRa4Na7kcfd8GdYL6fXgENoRqlyaKcB0fUDE3AmYYiQRMmD49UZSFLVBvnMtFf1kqXPPPeFJ0RvbvGBt6KuhC4j27nmvYzwHLzhzeNbt4DjUGC5TPLfoIVJuudC53pZbLcewzkvQc15SP3HdQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PL8Oe2EKSeHiiX45CknT0nJ4Lkp6F7sg+e7i9HnZpWU=;
 b=PqyGtT/B9uS46so+lSK3vsSqJ079mvNcJwCGDZgdv9/kQ6dxXS/f6DmVqDsdWgJ869GvUaUR3FDsOsBkjoRf9Q6zGrkb0jWx/CtukU+vL5e14kh1Sgq4GNEItp2n9WWhgR7dgZ7AeIc1LNQUHXZzuC1VSub2TYRifW8zE+oDWaRtHkrZGTCsoLcBQZi9RdTaA7R1B88Mg6ub/7E2SSCSR+HSq0PdOTFZtDJigX7ur7bD5Dcp1ynBZKDTQlsV3lZ7GC0ezkO3TGDv2Cn2i9CwiPjV6Owdi4U26PzcAKyXCcj4wl3CLFEXi04vFhIbRAHmL0VdsAIM+cV2OtgVuOgjSg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PL8Oe2EKSeHiiX45CknT0nJ4Lkp6F7sg+e7i9HnZpWU=;
 b=YJ/LH0m8wwbb8QPXXgiK1QuAdl8ZJnwLJMunRcci6+1xj0pQtH32b27+QlSqe0jPl7o9F6UCN0RcGlNqwl1b2M1elqebMnDT6Evrl8cjCN4SKEYKqOM9DjgKfEhvqMKj3sxflq/1BdKeiXJDNXS2W8Cno+LyrxTB4ubUyQuZIg8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <7ce2f5e3-4f22-46e2-2e51-197aba861671@amd.com>
Date: Thu, 13 Apr 2023 12:27:19 -0700
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 03/17] xen/arm: Add CONFIG_OVERLAY_DTB
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-4-vikram.garhwal@amd.com>
 <e40e323b-6eec-2cdc-62ac-d7c6eff59bd8@amd.com>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <e40e323b-6eec-2cdc-62ac-d7c6eff59bd8@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: SJ0PR05CA0110.namprd05.prod.outlook.com
 (2603:10b6:a03:334::25) To MW3PR12MB4409.namprd12.prod.outlook.com
 (2603:10b6:303:2d::23)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: MW3PR12MB4409:EE_|IA1PR12MB6019:EE_
X-MS-Office365-Filtering-Correlation-Id: d3d7d03c-dbb7-47dd-a556-08db3c552620
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gDmkD2xtf4RrQB3t3mQz5aloJo1G7rh0jt6/qTdhLsX2YZwlPYJqCfAANhCLqLLSob0LYhvGxuNuPAK1uyNrxtewRmEJ21ftovZpWgrY68F/oYJfDcmHlmb/JR48sJVRganQroFlcieaJi3lxPbk/tAmxnYeF9jDh1tOpQHRNlTmnFfHPy6EZN7QwgabUuNxMpQebLn15oVabAgUPxMTXuUdyYRKctWNQUxst7UkT2jZcpyMJCUuTqaOmIpY8EOm0yX0dT7mTXS1azWOe8gv5Vc6N2LUu97t1ut2Qt97Ji8DANU8tgyFVoPtNUKjfZ9ECvZs6jxdPr0XLppr+fGJiXiGxfsxOZPgT5YvKdjk905VB1ixbKIPNszMhyuaLu0Y9XkewUAYBOvjmwhB31ZTvEedVdsRm7hs//6+sZJn5GJZQqjEwdNJR3zdsJu0oL71G5NSd710jIVYSaolNOqrg+lsHuqr/pm0VmiN/2pkd8Kk2qqWA5/qJBsASG3eskC9sQJSRr9DQuXHyHMm+KF5Fs1lI/aHem60xGRBQcmFPTMD2xUrE7I6Uj1+ULCKhi0qDE70YQybhkpwMZDAiqHfhycbekXHhOSzQ78HyoRYGHIRh2qWe1dGS28wAEtflUT6ImjlLjuKh9l9zrbZXooENg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MW3PR12MB4409.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(396003)(366004)(346002)(39860400002)(451199021)(86362001)(31686004)(38100700002)(31696002)(36756003)(41300700001)(8936002)(6512007)(6666004)(6486002)(44832011)(478600001)(5660300002)(4326008)(316002)(186003)(2616005)(53546011)(54906003)(2906002)(26005)(8676002)(6506007)(66556008)(66476007)(66946007)(83380400001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VG05UzF6V3FZM09LVXZkSlE4RlZ5VmZtUHNES1Z2SUVQdGdldDRsRnN5T0ZN?=
 =?utf-8?B?b2h4OUd1R29CL1pSLy9PZlprVWhxa3MzVDZBTUJoajNONnpGaXh3cWFwYnBY?=
 =?utf-8?B?RmpKcjBLUW9ncTRzSDhlcE04SHN2MXNRZFJkRjRTeEU2ZE43bkMwY05XUmU2?=
 =?utf-8?B?WDNjbXlvMmpjMWpJekpCVWwwV21RYVJYSnczVEpVaE9UTk5VeTJFbFhxeGE5?=
 =?utf-8?B?VFp0eFhHSDB0YzB5cXUvaEcrTENjTC9CaURjWlB2SkxTbEhFZ2FnNDBIc3lI?=
 =?utf-8?B?dWhUU2xzZHhBcDg5K2I3ZUdQUGQ1NFNKQnYwRUMrNFpSWHZHVWRpUVVXbENr?=
 =?utf-8?B?WTR0MmdWeGc4WGJwRUFORW5abkJvNGdMdDF4TWhMT2tiZWFoSUVJTHJPL1BV?=
 =?utf-8?B?ZW9NY2ZkNzB2TWtmU2xsWlM2cFRuaXJTZjJGK3B6MzZSWHozc3BnYWNNZ01s?=
 =?utf-8?B?dmpON0wrak0yd0s3Uzlra1krb0ZKeE9SZGlXZWpqMlRsWjRTVVZHTTdUUG1x?=
 =?utf-8?B?ZTI1RVlaeFd2Yzh0UzBsZFhLWk0wNVQ4RU0yV0MwclYvR1NpT1FQSGtaWW9F?=
 =?utf-8?B?UTNXajlsY2tjOW1tVTRDc1l5eVRRWVRXUUpRRGNKT1RGcytxSmpiVVgyZ1FO?=
 =?utf-8?B?RGlqQ0o4NUoxV1J3elJPWlZtU2V3NnptRDJWK3hXUHoycmliY0hWbXVJeG1o?=
 =?utf-8?B?MlJ3SFdLRVRUL0h0SXhOaG4yalU0dVdwWHFmb3dadkFBbVk2UDhFdVRaaXdO?=
 =?utf-8?B?K1QweUJZbHRYTGRob2Z2QXpQbS8zVFlxTUpjeGxPSVVHczV2Q2FCeUJJM3l1?=
 =?utf-8?B?ZFlNR3hSYUpRbFIzdTBXOWtUMkhhU29kd2RobjdXZ2FBRzFOZ29tWVVnNjR4?=
 =?utf-8?B?V25RTWN2MTVaR2l6ZTE3aktXelFoVHd4MUVUZUxNUjdDUGJic3g3YmxQOElv?=
 =?utf-8?B?cDlYZVVTa01GUmxXZm9NRmYrTEZDRnVENENyQVdUdEhNTWcybG5xc2F6Smg5?=
 =?utf-8?B?ZmV0d2dVdkNZemx6VVpsbU16TlAweTcxY3RadUU0Q0paMnJRUzZOeVNXR2Nk?=
 =?utf-8?B?QjVENUMrRExaUlQweHoxVWNKWWd2RXNvc2VBQTFDWEJOallQcEpBdXFqK294?=
 =?utf-8?B?eHlnZHRZR3Nhd0dqQ213NDZuQkVxeEZwS05MWFZnTkZETTZiTy9HS2JoRVBn?=
 =?utf-8?B?YmZKYUNucWlLQnl2cUVlMGJqQ3U1djNoaUQ3RTBEMENuMWQ1eHFRSUR5TWxK?=
 =?utf-8?B?SDlyaC8yWFRPSmU1SlhXeThJOXpUMXppVTZ4V3hZNDFqbE8rb3daZEUycktt?=
 =?utf-8?B?Y0VCT0oyUVJXeVZVTnk5dmx4dEczRm1LNDh6OXIyMWJRSzJBYTJGdFNRYWIv?=
 =?utf-8?B?NVd3c3dQMGw3dXpFT3pwVGhJUktJbHJMenUwT0lSK0dkUzR1L0FzcDNjOXpO?=
 =?utf-8?B?QlI0MGNTSEZaYS9aK0p5RDZuc2pWZjlFWXZzOW01SXlrelliTWxtdHpER1p5?=
 =?utf-8?B?OFp6WHZNV1RHczhwVU9UdTNZWkozNURiUEZlOGpWZ3V3TWFqcWVza3h6cFZP?=
 =?utf-8?B?WjczZy9VUGRhb0xLaktNSk9pSHBndUN6aXhaWlRVbTJXODVaQzh5bkJNVFJo?=
 =?utf-8?B?TjFrRmtudnBpSHZ4U0ZaY0c1cUYvU0lwMjcrNk1TSjdmZ0VPb3lUdXQydFh0?=
 =?utf-8?B?akhlTVkrWUdSZGhsMTlaaWRKWk51ZitXV0tjWmxQL3RrWmJBR0xuZm1zZm5m?=
 =?utf-8?B?OHF1Tk02R1RhaWRybms3THFPN1IrZE93aGVmVkpsckM4VDNON3FtOFhlMXlQ?=
 =?utf-8?B?aHgzU3ZvN2NFWnVUbDJMamxUb2tyeDQ2T1JiTjNtYUhjYnNuUTdySENaMnRQ?=
 =?utf-8?B?UE42eHFubnBJOTZabjdyVHdva085dXNsOGtrZjlDRmw5b3JDMVVkYXRXUVFZ?=
 =?utf-8?B?SnRpbFZNcjAxM2xRa3ZyRGJ6NStIbElwUEtmS3ZSRXJnbEhMTUpjWFJnbktv?=
 =?utf-8?B?WGVydzkrbnpVNllHL040RUh2VWhSSmN2RUw5aGpoc0poTTZxMG5wTU83bDlV?=
 =?utf-8?B?a0JJcTV2WnROTldob214NUJCcnhiYVNrbVNwVEJQcFpaaTFsRU1UYU95alBZ?=
 =?utf-8?Q?bm2sOC8UWTO15zriBFeqPiOf6?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d3d7d03c-dbb7-47dd-a556-08db3c552620
X-MS-Exchange-CrossTenant-AuthSource: MW3PR12MB4409.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 19:27:41.4652
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Anuavt911F1VhPcn77OI1VJE/xNiqwIfTOzStT5Aw0EORqbVu4yXVbNVH2INyGTAW5FQwsq1vQfVoXRRvTxtBw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB6019

Hi Michal,

On 4/13/23 2:58 AM, Michal Orzel wrote:
> Hi Vikram,
>
> On 11/04/2023 21:16, Vikram Garhwal wrote:
>>
>> Introduce a config option where the user can enable support for adding/removing
>> device tree nodes using a device tree binary overlay.
>>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> ---
>>   SUPPORT.md           | 6 ++++++
>>   xen/arch/arm/Kconfig | 5 +++++
>>   2 files changed, 11 insertions(+)
>>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index aa1940e55f..0a31f40af4 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -822,6 +822,12 @@ No support for QEMU backends in a 16K or 64K domain.
>>
>>       Status: Supported
>>
>> +### Device Tree Overlays
>> +
>> +Add/Remove device tree nodes using a device tree overlay binary (.dtbo).
>> +
>> +    Status: Supported for ARM
> Hmm, so here you say supported but in Kconfig - unsupported.
> I think this should be:
> Status, ARM: Tech Preview
> or Experimental
Experimental sounds better to me.
Will update it.
>> +
>>   ### ARM: Guest ACPI support
>>
>>       Status: Supported
>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>> index 239d3aed3c..1fe3d698a5 100644
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -53,6 +53,11 @@ config HAS_ITS
>>           bool "GICv3 ITS MSI controller support (UNSUPPORTED)" if UNSUPPORTED
>>           depends on GICV3 && !NEW_VGIC && !ARM_32
>>
>> +config OVERLAY_DTB
>> +       bool "DTB overlay support (UNSUPPORTED)" if UNSUPPORTED
>> +       help
>> +         Dynamic addition/removal of Xen device tree nodes using a dtbo.
>> +
>>   config HVM
>>           def_bool y
>>
>> --
>> 2.17.1
>>
>>
> ~Michal
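
For context, a device tree overlay of the kind this option enables looks 
like the following; a minimal illustrative .dts (the node and compatible 
names are made up, not part of the patch):

```dts
/dts-v1/;
/plugin/;

/ {
    fragment@0 {
        /* Attach the new node at the root of the base tree. */
        target-path = "/";
        __overlay__ {
            my-new-node {
                compatible = "vendor,example-device";
            };
        };
    };
};
```

Such a source would typically be compiled into the .dtbo with 
`dtc -I dts -O dtb -o my-overlay.dtbo my-overlay.dts` before being handed 
to the hypervisor.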



From xen-devel-bounces@lists.xenproject.org Thu Apr 13 19:40:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 19:40:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520916.809045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn2nb-0003YI-CR; Thu, 13 Apr 2023 19:40:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520916.809045; Thu, 13 Apr 2023 19:40:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn2nb-0003Xq-9S; Thu, 13 Apr 2023 19:40:03 +0000
Received: by outflank-mailman (input) for mailman id 520916;
 Thu, 13 Apr 2023 19:40:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn2na-0003NY-8k; Thu, 13 Apr 2023 19:40:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn2na-0003rE-6T; Thu, 13 Apr 2023 19:40:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn2nZ-0007jj-Ik; Thu, 13 Apr 2023 19:40:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pn2nZ-0002Lq-IH; Thu, 13 Apr 2023 19:40:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=70YuiFdaBmGYRLXWsf/sjI9TxOH8QbwNtHlLuyqweXg=; b=H6r8TCNAvn9kpkeOdPzX2Xl+tb
	lGcax7he2LMTMW0JPde6nDp+uS3RZz3iQPHitcBnwdrY4LoSed3aHJNo4cn2+iUcP3vF6HBOpbswX
	4wZ0AhCoswnmYEwAymz4mA3UrYKDMrGJiVRdKnm6HgqMH9ukE7++QhO3Q98djh+8gKKw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180248-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180248: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d795fb571b9b2c2ee67ceaef372d5cc461767859
X-Osstest-Versions-That:
    ovmf=42b0443599a69c703034079cf2bd389fa3a6bfde
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Apr 2023 19:40:01 +0000

flight 180248 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180248/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d795fb571b9b2c2ee67ceaef372d5cc461767859
baseline version:
 ovmf                 42b0443599a69c703034079cf2bd389fa3a6bfde

Last test of basis   180229  2023-04-13 06:12:19 Z    0 days
Testing same since   180248  2023-04-13 16:40:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Corvin Köhne <corvink@FreeBSD.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   42b0443599..d795fb571b  d795fb571b9b2c2ee67ceaef372d5cc461767859 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 19:52:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 19:52:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520922.809056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn2zl-0005ak-Hx; Thu, 13 Apr 2023 19:52:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520922.809056; Thu, 13 Apr 2023 19:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn2zl-0005ad-DK; Thu, 13 Apr 2023 19:52:37 +0000
Received: by outflank-mailman (input) for mailman id 520922;
 Thu, 13 Apr 2023 19:52:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pn2zk-0005aX-As
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 19:52:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn2zj-00042K-On; Thu, 13 Apr 2023 19:52:35 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn2zj-0003lk-IF; Thu, 13 Apr 2023 19:52:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=j1KrRqqmhEktj1V6/+wIzgGRKMBaGQ84ANZtwb9CGZU=; b=r0oHIxKERQxW0Ur9kAIO1RSUtG
	nEmiKIUzLW6vRdnvnP6ScdWgMxkfRFYDezwYt1rk9IB+PJ6YeMOsiHB/j1A3bizvW0Rlg5c88OKpo
	ZIytpeKSZkzv8a3YzgD0TQ2qVBP+UIuClUgn+Iqldszbf+wcwx3GYbgnBhkjS3jQYQww=;
Message-ID: <03cc0c98-c5ef-16f1-ed24-6a39320b08e5@xen.org>
Date: Thu, 13 Apr 2023 20:52:33 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-3-luca.fancellu@arm.com>
 <72f38b2b-a391-fb7c-f8c0-cf3561470875@xen.org>
 <B3A82639-6D61-4DA2-B918-A92A421C75D3@arm.com>
 <e8075849-8bd5-7fd4-efaa-81e48c867635@xen.org>
 <4F5DC5EC-F538-42CE-A93F-2B5E3FAC13BB@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <4F5DC5EC-F538-42CE-A93F-2B5E3FAC13BB@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

On 13/04/2023 15:05, Luca Fancellu wrote:
> 
> 
>> On 13 Apr 2023, at 14:30, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 13/04/2023 14:24, Luca Fancellu wrote:
>>> Hi Julien,
>>
>> Hi Luca,
>>
>>>>>   @@ -594,6 +597,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>>       unsigned int max_vcpus;
>>>>>       unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>>>>       unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>>>>> +    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>>>>>         if ( (config->flags & ~flags_optional) != flags_required )
>>>>>       {
>>>>> @@ -602,6 +606,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>>           return -EINVAL;
>>>>>       }
>>>>>   +    /* Check feature flags */
>>>>> +    if ( sve_vl_bits > 0 )
>>>>> +    {
>>>>> +        unsigned int zcr_max_bits = get_sys_vl_len();
>>>>> +
>>>>> +        if ( !zcr_max_bits )
>>>>> +        {
>>>>> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>>>>> +            return -EINVAL;
>>>>> +        }
>>>>> +
>>>>> +        if ( sve_vl_bits > zcr_max_bits )
>>>>> +        {
>>>>> +            dprintk(XENLOG_INFO,
>>>>> +                    "Requested SVE vector length (%u) > supported length (%u)\n",
>>>>> +                    sve_vl_bits, zcr_max_bits);
>>>>> +            return -EINVAL;
>>>>> +        }
>>>>
>>>> Is SVE supported for 32-bit guest? If not, then you should had a check here to prevent the creation of the domain if sve_vl_bits is set.
>>> No, SVE is not supported for 32-bit guests; here I think we will get “SVE is unsupported on this machine” because get_sys_vl_len() will return 0.
>>
>>  From my understanding, get_sys_vl_len() will return the length supported by the host. So if you run a 32-bit guest on top of a 64-bit host, then I believe get_sys_vl_len() will be non-zero.
> 
> Yes, you are right. I realise that I need the domain type information and I can’t have it in arch_sanitise_domain_config; instead, I can do a check
> like this afterwards:
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index c1f0d1d78431..ce1235c25769 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -3694,6 +3694,12 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
>           return -EINVAL;
>       }
>   
> +    if ( d->arch.sve_vl && (kinfo->type == DOMAIN_32BIT) )
> +    {
> +        printk("SVE is not available for 32-bit domain\n");
> +        return -EINVAL;
> +    }
> +
>       if ( is_64bit_domain(d) )
>           vcpu_switch_to_aarch64_mode(v);
> 
> Would it be ok for you?

construct_domain() is only going to be used for domains created by Xen. 
You would need the same check for the ones created by the toolstack.

Do you need to know the SVE length when the domain is created? If not, 
then I would suggest creating a new domctl that would be called after 
we switch the domain to 32-bit.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 19:53:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 19:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520926.809065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn30r-00066c-Q1; Thu, 13 Apr 2023 19:53:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520926.809065; Thu, 13 Apr 2023 19:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn30r-00066V-NQ; Thu, 13 Apr 2023 19:53:45 +0000
Received: by outflank-mailman (input) for mailman id 520926;
 Thu, 13 Apr 2023 19:53:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pn30q-00066J-4M
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 19:53:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn30p-00043a-M0; Thu, 13 Apr 2023 19:53:43 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn30p-0003nc-FM; Thu, 13 Apr 2023 19:53:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=iMMxluHVcsuTLcFu23H9V+dhod04vKsQB3GqixWtHiY=; b=yf5wySg29qO11NYv26lmNzoDcI
	oLd7br1mPxMQeM3oO5kz9A1pFCCTlzG2Vf1+5dhNue9z+0EY7gIbkHvGlB7orr1Imu1Y0Ww7sdndX
	ueMYD91/zb/x9enHBjkSZVnyHtf+o3W4EaWudSYNbTitQ4CBDShtmZinIySBDKirGMZQ=;
Message-ID: <abb8f97e-02f3-774f-43e3-5fb1ccae806a@xen.org>
Date: Thu, 13 Apr 2023 20:53:41 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-3-luca.fancellu@arm.com>
 <72f38b2b-a391-fb7c-f8c0-cf3561470875@xen.org>
 <B3A82639-6D61-4DA2-B918-A92A421C75D3@arm.com>
 <e8075849-8bd5-7fd4-efaa-81e48c867635@xen.org>
 <4F5DC5EC-F538-42CE-A93F-2B5E3FAC13BB@arm.com>
 <92DA4B4F-7BB9-4CAC-9276-0B6A10550164@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <92DA4B4F-7BB9-4CAC-9276-0B6A10550164@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

On 13/04/2023 17:10, Luca Fancellu wrote:
>>
>>
>>>
>>>>> Can we move this somewhere else to avoid adding extra padding? Also shouldn't this be protected with #ifdef CONFIG_ARM_64 to make clear this is not supported on Xen 32-bit?
>>>> Yes, I’ll move it and protect it with CONFIG_ARM_64; is it ok for you if I move it after:
>>>> /* Monitor options */
>>>> struct {
>>>>     uint8_t privileged_call_enabled : 1;
>>>> } monitor;
>>>
>>> Please check the padding with "pahole". If possible, it would be better to re-use an existing one.
>>
>> Ok I’ll try to use the tool
> 
> I’ve managed to use the tool; the field seems to be in a good spot already:

ok. Thanks for checking.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 19:55:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 19:55:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520930.809076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn31w-0006g5-5L; Thu, 13 Apr 2023 19:54:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520930.809076; Thu, 13 Apr 2023 19:54:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn31w-0006fy-1B; Thu, 13 Apr 2023 19:54:52 +0000
Received: by outflank-mailman (input) for mailman id 520930;
 Thu, 13 Apr 2023 19:54:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pn31v-0006fs-8t
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 19:54:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn31v-00044y-1i; Thu, 13 Apr 2023 19:54:51 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn31u-0003oO-So; Thu, 13 Apr 2023 19:54:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=tg3OWKBqkAnwBlRFVwHPX9YwMY8VMLa79Z8mOEtHzXA=; b=bVIsmCOH4eI4lEz7gBMNopYWm7
	WqsDJ31IYPQ12izK0xGIYH+WzWjf9ENAO0ijE0hbSH7HKZ+ucWtVuQ0rC6pa3txhJyZQXwPcPuFHE
	COu7K0aBqSQJ6xeRBZTHtXLOik7/LEhzHK7R+UObsGUJ7Caqe6Jx/QzJmM34uKVXf4Uo=;
Message-ID: <e369554d-946e-8419-0d94-808162183e03@xen.org>
Date: Thu, 13 Apr 2023 20:54:49 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-6-luca.fancellu@arm.com>
 <b1c77bdf-6979-83b6-f5e4-ac5b3e751a3d@xen.org>
 <6DDCEF6B-F07B-44EA-83D0-33BED5EAC506@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <6DDCEF6B-F07B-44EA-83D0-33BED5EAC506@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

On 13/04/2023 15:35, Luca Fancellu wrote:
> 
> 
>> On 13 Apr 2023, at 14:11, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 12/04/2023 10:49, Luca Fancellu wrote:
>>> Save/restore the SVE context on context switch: allocate memory to
>>> hold the Z0-Z31 registers, whose length is at most 2048 bits each,
>>> and FFR, which can be at most 256 bits; the amount allocated depends
>>> on the vector length of the domain and on how many bits the
>>> platform supports.
>>> Save P0-P15, whose length is at most 256 bits each; here the memory
>>> used is the fpregs field in struct vfp_state, because V0-V31 are
>>> part of Z0-Z31 and that space would otherwise be unused for an SVE
>>> domain.
>>> Create zcr_el{1,2} fields in arch_vcpu, initialise zcr_el2 on vcpu
>>> creation from the requested vector length, and restore it on
>>> context switch; save/restore the ZCR_EL1 value as well.
>>> Remove headers from sve.c that are already included using
>>> xen/sched.h.
>> I dislike this because ...
>>
>>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>>> index 78f7482619da..5485648850a0 100644
>>> --- a/xen/arch/arm/arm64/sve.c
>>> +++ b/xen/arch/arm/arm64/sve.c
>>> @@ -5,14 +5,29 @@
>>>    * Copyright (C) 2022 ARM Ltd.
>>>    */
>>>   -#include <xen/types.h>
>>> -#include <asm/cpufeature.h>
>>
>> ... it is not entirely obvious that sched.h will import asm/cpufeature.h. This could easily change in the future, and we would then have to re-add those includes.
> 
> Ok, I will reintroduce #include <asm/cpufeature.h>. Do I understand correctly that this is the only header you would like me to retain?

My remark was for all the headers you removed. It is not obvious that 
any of them will be included by sched.h.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 19:55:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 19:55:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520934.809086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn32S-0007DI-DH; Thu, 13 Apr 2023 19:55:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520934.809086; Thu, 13 Apr 2023 19:55:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn32S-0007CJ-9f; Thu, 13 Apr 2023 19:55:24 +0000
Received: by outflank-mailman (input) for mailman id 520934;
 Thu, 13 Apr 2023 19:55:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XbVr=AE=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pn32R-0007C4-3y
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 19:55:23 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on20601.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1e1c0413-da35-11ed-8611-37d641c3527e;
 Thu, 13 Apr 2023 21:55:20 +0200 (CEST)
Received: from DM6PR06CA0083.namprd06.prod.outlook.com (2603:10b6:5:336::16)
 by DM6PR12MB4909.namprd12.prod.outlook.com (2603:10b6:5:1ba::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 19:55:17 +0000
Received: from DM6NAM11FT054.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:336:cafe::85) by DM6PR06CA0083.outlook.office365.com
 (2603:10b6:5:336::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.31 via Frontend
 Transport; Thu, 13 Apr 2023 19:55:17 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT054.mail.protection.outlook.com (10.13.173.95) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 19:55:16 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 14:55:16 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:55:16 -0700
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 13 Apr 2023 14:55:15 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e1c0413-da35-11ed-8611-37d641c3527e
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <19cf0a82-3b79-8548-44ba-1e4b2c6dbfe8@amd.com>
Date: Thu, 13 Apr 2023 15:55:14 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2 2/8] xen/arm: re-define a set of data structures for
 static shared memory region
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>, <xen-devel@lists.xenproject.org>
CC: <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230223054105.2357217-1-Penny.Zheng@arm.com>
 <20230223054105.2357217-3-Penny.Zheng@arm.com>
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <20230223054105.2357217-3-Penny.Zheng@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT054:EE_|DM6PR12MB4909:EE_
X-MS-Office365-Filtering-Correlation-Id: f0c8cad1-84e6-49bd-9e3a-08db3c59010a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 19:55:16.9504
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f0c8cad1-84e6-49bd-9e3a-08db3c59010a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT054.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4909

Hi Penny,

On 2/23/23 00:40, Penny Zheng wrote:
> This commit introduces a set of separate data structures to deal with
> static shared memory at different stages.
> 
> During boot-time host device tree parsing, we introduce a new structure
> "struct shm_node" and a new field "shm_info" in bootinfo to describe and
> store the parsed shm info.
> Only the SHMID and "nr_borrowers", which describes the number of borrower
> domains, are recorded here per shm node.
> We also introduce new file-scope data "shm_data" in bootfdt.c, in which each
> reserved memory bank is recorded together with its shm node, to assist with
> shm node verification.
> 
> To apply the above changes in acquire_nr_borrower_domain, we now use the
> SHMID to iterate over "shminfo" to find the requested shm node, and then read
> its "nr_borrowers".
> 
> Finally, a new anonymous structure "shminfo", an array of compound
> structures each containing an SHMID and a "struct membank membank" describing
> a shared memory region in the guest address space, is created in "kinfo" when
> dealing with domain information.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> v1 -> v2:
> - As the original "struct shm_membank" made reserving memory more complex,
> and the memory information can still be obtained from the host Device Tree
> when dealing with domain construction, we introduce a new simple structure
> "struct shm_node" in bootinfo to store only the SHMID and "nr_borrowers"
> - Further restrict the scope of the local variable
> "struct meminfo *mem = &bootinfo.reserved_mem"
> - Introduce new file-scope data "shm_data" in bootfdt.c, in which each
> reserved memory bank is recorded together with its shm node, to assist with
> shm node verification.
> - Define a set of local variables that point to
> "shm_data.shm_nodes[i].membank->start", etc, to make the code more readable.
> - Use the SHMID to iterate over "shminfo" to find the requested shm node, as
> we no longer store host memory bank info in the shm node.
> - A new anonymous structure, an array of compound structures each containing
> an SHMID and a "struct membank membank" describing a shared memory region in
> the guest, is introduced in "kinfo".

This patch no longer applies cleanly to master since commit 64c21916167e ("xen/arm: Use the correct format specifier").


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 19:55:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 19:55:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520937.809096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn32s-0007jV-MX; Thu, 13 Apr 2023 19:55:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520937.809096; Thu, 13 Apr 2023 19:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn32s-0007jO-JA; Thu, 13 Apr 2023 19:55:50 +0000
Received: by outflank-mailman (input) for mailman id 520937;
 Thu, 13 Apr 2023 19:55:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XbVr=AE=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pn32r-00078B-1J
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 19:55:49 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20616.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2e1cf23e-da35-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 21:55:48 +0200 (CEST)
Received: from DM6PR06CA0074.namprd06.prod.outlook.com (2603:10b6:5:336::7) by
 BN9PR12MB5228.namprd12.prod.outlook.com (2603:10b6:408:101::7) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.30; Thu, 13 Apr 2023 19:55:43 +0000
Received: from DM6NAM11FT054.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:336:cafe::86) by DM6PR06CA0074.outlook.office365.com
 (2603:10b6:5:336::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 19:55:43 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT054.mail.protection.outlook.com (10.13.173.95) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 19:55:43 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 14:55:43 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 12:55:42 -0700
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 13 Apr 2023 14:55:41 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e1cf23e-da35-11ed-b21e-6b7b168915f2
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <abbd3be3-be30-58a9-74e8-17aa5b2a8e84@amd.com>
Date: Thu, 13 Apr 2023 15:55:41 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2 5/8] xen/arm: support static shared memory when host
 address not provided
To: Penny Zheng <Penny.Zheng@arm.com>, <xen-devel@lists.xenproject.org>
CC: <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230223054105.2357217-1-Penny.Zheng@arm.com>
 <20230223054105.2357217-6-Penny.Zheng@arm.com>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <20230223054105.2357217-6-Penny.Zheng@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT054:EE_|BN9PR12MB5228:EE_
X-MS-Office365-Filtering-Correlation-Id: 2891d8cd-9fc8-44e0-7c45-08db3c5910fe
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 19:55:43.7301
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2891d8cd-9fc8-44e0-7c45-08db3c5910fe
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT054.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5228

Hi Penny,

On 2/23/23 00:41, Penny Zheng wrote:
> In order to support static shared memory when the host address is not
> provided, we make the following modifications:
> - We let Xen allocate memory from the heap for static shared memory at the
> first domain, no matter whether it is the owner or a borrower.
> - In acquire_shared_memory_bank, as the static shared memory has already
> been allocated from the heap, we assign it to the owner domain using
> "assign_pages".
> - Function get_shm_pages_reference is created to add as many additional
> references as there are borrowers.
> - We implement a new helper "add_foreign_mapping_for_borrower" to set up the
> foreign memory mapping for a borrower.
> 
> Instead of using multiple function parameters to deliver various shm-related
> info, like the host physical address, SHMID, etc., and with the introduction
> of the new struct "shm_memnode" holding the banked host memory info, we
> switch to using "shm_memnode" as the function parameter to replace them all,
> making the code clearer and tidier.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> v1 -> v2:
> - combine commits 4 - 6 in Serie 1
> - Adapt to changes of introducing "struct shm_memnode"
> ---
>  xen/arch/arm/domain_build.c | 222 +++++++++++++++++++++++++-----------
>  1 file changed, 155 insertions(+), 67 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 91feb8f37c..9b4aabaf22 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -869,6 +869,11 @@ static void __init assign_static_memory_11(struct domain *d,
>  }
> 
>  #ifdef CONFIG_STATIC_SHM
> +static bool __init is_shm_allocated_from_heap(struct shm_memnode *node)
> +{
> +    return (node->meminfo.nr_banks != 0);
> +}
> +
>  static int __init acquire_nr_borrower_domain(const char *shm_id,
>                                               unsigned long *nr_borrowers)
>  {
> @@ -912,12 +917,12 @@ static struct shm_memnode * __init find_shm_memnode(const char *shm_id)
>   * This function checks whether the static shared memory region is
>   * already allocated to dom_io.
>   */
> -static bool __init is_shm_allocated_to_domio(paddr_t pbase)
> +static bool __init is_shm_allocated_to_domio(struct shm_memnode *node)
>  {
>      struct page_info *page;
>      struct domain *d;
> 
> -    page = maddr_to_page(pbase);
> +    page = maddr_to_page(node->meminfo.bank[0].start);
>      d = page_get_owner_and_reference(page);
>      if ( d == NULL )
>          return false;
> @@ -935,67 +940,129 @@ static bool __init is_shm_allocated_to_domio(paddr_t pbase)
>  }
> 
>  static mfn_t __init acquire_shared_memory_bank(struct domain *d,
> -                                               paddr_t pbase, paddr_t psize)
> +                                               struct shm_meminfo *meminfo,
> +                                               bool paddr_assigned)
>  {
> -    mfn_t smfn;
> -    unsigned long nr_pfns;
>      int res;
> +    unsigned int i = 0;
> 
> -    /*
> -     * Pages of statically shared memory shall be included
> -     * into domain_tot_pages().
> -     */
> -    nr_pfns = PFN_DOWN(psize);
> -    if ( (UINT_MAX - d->max_pages) < nr_pfns )
> +    for ( ; i < meminfo->nr_banks; i++ )
>      {
> -        printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n",
> -               d, nr_pfns);
> +        paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size;
> +        unsigned long nr_pfns;
> +
> +        /*
> +         * Pages of statically shared memory shall be included
> +         * into domain_tot_pages().
> +         */
> +        nr_pfns = PFN_DOWN(psize);
> +        if ( (UINT_MAX - d->max_pages) < nr_pfns )
> +        {
> +            printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n",
> +                   d, nr_pfns);
> +            return INVALID_MFN;
> +        }
> +        d->max_pages += nr_pfns;
> +
> +        if ( paddr_assigned )
> +        {
> +            res = acquire_domstatic_pages(d, maddr_to_mfn(pbase), nr_pfns, 0);
> +            if ( res )
> +            {
> +                printk(XENLOG_ERR
> +                       "%pd: failed to acquire static memory: %d.\n", d, res);
> +                goto fail;
> +            }
> +        }
> +        else
> +            /*
> +             * When host address is not provided, static shared memory is
> +             * allocated from heap and shall be assigned to owner domain.
> +             */
> +            if ( assign_pages(maddr_to_page(pbase), nr_pfns, d, 0) )
> +                goto fail;
> +    }
> +
> +    return maddr_to_mfn(meminfo->bank[0].start);
> +
> + fail:
> +        while( --i >= 0 )

This is an infinite loop: i is unsigned, so `--i >= 0` can never be false. When building with EXTRA_CFLAGS_XEN_CORE="-Wtype-limits -Wno-error=type-limits", we see:

arch/arm/domain_build.c: In function ‘acquire_shared_memory_bank’:
arch/arm/domain_build.c:989:20: warning: comparison of unsigned expression in ‘>= 0’ is always true [-Wtype-limits]
  989 |         while( --i >= 0 )
      |                    ^~


Also, the indentation seems off here.

> +            d->max_pages -= PFN_DOWN(meminfo->bank[i].size);
>          return INVALID_MFN;
> +}
> +
> +static int __init get_shm_pages_reference(struct domain *d,
> +                                          struct shm_meminfo *meminfo,
> +                                          unsigned long count)
> +{
> +    struct page_info *page;
> +    unsigned int i = 0, j;
> +
> +    for ( ; i < meminfo->nr_banks; i++ )
> +    {
> +        paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size;
> +        unsigned long nr_pages = PFN_DOWN(psize);
> +
> +        page = maddr_to_page(pbase);
> +        for ( j = 0; j < nr_pages; j++ )
> +        {
> +            if ( !get_page_nr(page + j, d, count) )
> +            {
> +                printk(XENLOG_ERR
> +                       "Failed to add %lu references to page %"PRI_mfn".\n",
> +                       count, mfn_x(page_to_mfn(page + j)));
> +                goto fail;
> +            }
> +        }
>      }
> -    d->max_pages += nr_pfns;
> 
> -    smfn = maddr_to_mfn(pbase);
> -    res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
> -    if ( res )
> +    return 0;
> +
> + fail:
> +    while ( --j >= 0 )

Infinite loop [-Wtype-limits]

> +        put_page_nr(page + j, count);
> +    while ( --i >= 0 )

Infinite loop [-Wtype-limits]

>      {
> -        printk(XENLOG_ERR
> -               "%pd: failed to acquire static memory: %d.\n", d, res);
> -        d->max_pages -= nr_pfns;
> -        return INVALID_MFN;
> +        page = maddr_to_page(meminfo->bank[i].start);
> +        j = PFN_DOWN(meminfo->bank[i].size);
> +        while ( --j >= 0 )

Infinite loop [-Wtype-limits]

> +            put_page_nr(page + j, count);
>      }
> +    return -EINVAL;
> 
> -    return smfn;
>  }
> -
>  static int __init assign_shared_memory(struct domain *d,
> -                                       paddr_t pbase, paddr_t psize,
> -                                       paddr_t gbase, const char *shm_id)
> +                                       struct shm_memnode *node, paddr_t gbase,
> +                                       bool paddr_assigned)
>  {
>      mfn_t smfn;
> -    int ret = 0;
> -    unsigned long nr_pages, nr_borrowers, i;
> -    struct page_info *page;
> -
> -    printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
> -           d, pbase, pbase + psize);
> +    int ret;
> +    unsigned long nr_borrowers, i;
> +    struct shm_meminfo *meminfo = &node->meminfo;
> 
> -    smfn = acquire_shared_memory_bank(d, pbase, psize);
> +    smfn = acquire_shared_memory_bank(d, meminfo, paddr_assigned);
>      if ( mfn_eq(smfn, INVALID_MFN) )
>          return -EINVAL;
> 
> -    /*
> -     * DOMID_IO is not auto-translated (i.e. it sees RAM 1:1). So we do not need
> -     * to create mapping in the P2M.
> -     */
> -    nr_pages = PFN_DOWN(psize);
> -    if ( d != dom_io )
> +    for ( i = 0; i < meminfo->nr_banks; i++ )
>      {
> -        ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), smfn,
> -                                      PFN_DOWN(psize));
> -        if ( ret )
> +        paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size;
> +
> +        /*
> +         * DOMID_IO is not auto-translated (i.e. it sees RAM 1:1). So we do
> +         * not need to create mapping in the P2M.
> +         */
> +        if ( d != dom_io )
>          {
> -            printk(XENLOG_ERR "Failed to map shared memory to %pd.\n", d);
> -            return ret;
> +            ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase),
> +                                          maddr_to_mfn(pbase),
> +                                          PFN_DOWN(psize));
> +            if ( ret )
> +            {
> +                printk(XENLOG_ERR "Failed to map shared memory to %pd.\n", d);
> +                return ret;
> +            }
> +            gbase += psize;
>          }
>      }
> 
> @@ -1003,7 +1070,7 @@ static int __init assign_shared_memory(struct domain *d,
>       * Get the right amount of references per page, which is the number of
>       * borrower domains.
>       */
> -    ret = acquire_nr_borrower_domain(shm_id, &nr_borrowers);
> +    ret = acquire_nr_borrower_domain(node->shm_id, &nr_borrowers);
>      if ( ret )
>          return ret;
> 
> @@ -1015,24 +1082,30 @@ static int __init assign_shared_memory(struct domain *d,
>       * So if the borrower is created first, it will cause adding pages
>       * in the P2M without reference.
>       */
> -    page = mfn_to_page(smfn);
> -    for ( i = 0; i < nr_pages; i++ )
> +    return get_shm_pages_reference(d, meminfo, nr_borrowers);
> +}
> +
> +static int __init add_foreign_mapping_for_borrower(struct domain *d,
> +                                                   struct shm_memnode *node,
> +                                                   paddr_t gbase)
> +{
> +    unsigned int i = 0;
> +    struct shm_meminfo *meminfo = &node->meminfo;
> +
> +    for ( ; i < meminfo->nr_banks; i++ )
>      {
> -        if ( !get_page_nr(page + i, d, nr_borrowers) )
> -        {
> -            printk(XENLOG_ERR
> -                   "Failed to add %lu references to page %"PRI_mfn".\n",
> -                   nr_borrowers, mfn_x(smfn) + i);
> -            goto fail;
> -        }
> +        paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size;
> +        int ret;
> +
> +        /* Set up P2M foreign mapping for borrower domain. */
> +        ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize),
> +                               _mfn(PFN_UP(pbase)), p2m_map_foreign_rw);
> +        if ( ret )
> +            return ret;
> +        gbase += psize;
>      }
> 
>      return 0;
> -
> - fail:
> -    while ( --i >= 0 )
> -        put_page_nr(page + i, nr_borrowers);
> -    return ret;
>  }
> 
>  static int __init append_shm_bank_to_domain(struct kernel_info *kinfo,
> @@ -1156,7 +1229,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
> 
>      dt_for_each_child_node(node, shm_node)
>      {
> -        paddr_t gbase, pbase, psize;
> +        paddr_t gbase;
>          int ret = 0;
>          const char *role_str;
>          const char *shm_id;
> @@ -1185,15 +1258,30 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>                                           shm_id);
>          if ( !shm_memnode )
>              return -EINVAL;
> -        pbase = shm_memnode->meminfo.bank[0].start;
> -        psize = shm_memnode->meminfo.bank[0].size;
> +
> +        /*
> +         * When the host address is not provided in "xen,shared-mem",
> +         * we let Xen allocate memory from the heap when the first
> +         * domain sharing the region is constructed.
> +         */
> +        if ( !paddr_assigned && !is_shm_allocated_from_heap(shm_memnode) )
> +        {
> +            if ( !allocate_domheap_memory(NULL, shm_memnode->meminfo.tot_size,
> +                                          (void *)&shm_memnode->meminfo,
> +                                          SHM_MEMINFO) )
> +            {
> +                printk(XENLOG_ERR
> +                       "Failed to allocate %"PRIpaddr"MB of static shared memory from the heap\n",
> +                       shm_memnode->meminfo.tot_size >> 20);
> +                return -EINVAL;
> +            }
> +        }
> 
>          /*
>           * DOMID_IO is a fake domain and is not described in the Device-Tree.
>           * Therefore when the owner of the shared region is DOMID_IO, we will
>           * only find the borrowers.
>           */
> -        if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) ||
> +        if ( (owner_dom_io && !is_shm_allocated_to_domio(shm_memnode)) ||
>               (!owner_dom_io && strcmp(role_str, "owner") == 0) )
>          {
>              /*
> @@ -1201,16 +1289,14 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>               * specified, so they should be assigned to dom_io.
>               */
>              ret = assign_shared_memory(owner_dom_io ? dom_io : d,
> -                                       pbase, psize, gbase, shm_id);
> +                                       shm_memnode, gbase, paddr_assigned);
>              if ( ret )
>                  return ret;
>          }
> 
>          if ( owner_dom_io || (strcmp(role_str, "borrower") == 0) )
>          {
> -            /* Set up P2M foreign mapping for borrower domain. */
> -            ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize),
> -                                   _mfn(PFN_UP(pbase)), p2m_map_foreign_rw);
> +            ret = add_foreign_mapping_for_borrower(d, shm_memnode, gbase);
>              if ( ret )
>                  return ret;
>          }
> @@ -1219,7 +1305,9 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>           * Record static shared memory region info for later setting
>           * up shm-node in guest device tree.
>           */
> -        ret = append_shm_bank_to_domain(kinfo, gbase, psize, shm_id);
> +        ret = append_shm_bank_to_domain(kinfo, gbase,
> +                                        shm_memnode->meminfo.tot_size,
> +                                        shm_memnode->shm_id);
>          if ( ret )
>              return ret;
>      }
> --
> 2.25.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 19:56:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 19:56:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520943.809106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn33I-0008LM-5A; Thu, 13 Apr 2023 19:56:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520943.809106; Thu, 13 Apr 2023 19:56:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn33I-0008LF-2G; Thu, 13 Apr 2023 19:56:16 +0000
Received: by outflank-mailman (input) for mailman id 520943;
 Thu, 13 Apr 2023 19:56:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XbVr=AE=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pn33G-00078B-AH
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 19:56:14 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20631.outbound.protection.outlook.com
 [2a01:111:f400:7e88::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d549394-da35-11ed-b21e-6b7b168915f2;
 Thu, 13 Apr 2023 21:56:12 +0200 (CEST)
Received: from BN0PR03CA0048.namprd03.prod.outlook.com (2603:10b6:408:e7::23)
 by BL1PR12MB5046.namprd12.prod.outlook.com (2603:10b6:208:313::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 19:56:09 +0000
Received: from BN8NAM11FT017.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e7:cafe::79) by BN0PR03CA0048.outlook.office365.com
 (2603:10b6:408:e7::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.32 via Frontend
 Transport; Thu, 13 Apr 2023 19:56:09 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT017.mail.protection.outlook.com (10.13.177.93) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.32 via Frontend Transport; Thu, 13 Apr 2023 19:56:09 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 14:56:09 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 13 Apr
 2023 14:56:08 -0500
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 13 Apr 2023 14:56:07 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d549394-da35-11ed-b21e-6b7b168915f2
Message-ID: <8acf5625-a60e-0357-6ba0-96af35930c17@amd.com>
Date: Thu, 13 Apr 2023 15:56:07 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2 6/8] xen/arm: remove shm holes for extended regions
Content-Language: en-US
To: Penny Zheng <Penny.Zheng@arm.com>, <xen-devel@lists.xenproject.org>
CC: <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230223054105.2357217-1-Penny.Zheng@arm.com>
 <20230223054105.2357217-7-Penny.Zheng@arm.com>
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <20230223054105.2357217-7-Penny.Zheng@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hi Penny,

On 2/23/23 00:41, Penny Zheng wrote:
> Static shared memory acts as reserved memory in the guest, so it shall be
> excluded from the extended regions.
> 
> Extended regions are taken care of under three different scenarios:
> normal DomU, direct-map domain with iommu on, and direct-map domain
> with iommu off.
> 
> For a normal DomU, we create a new function "remove_shm_holes_for_domU",
> which first converts the original output into "struct rangeset" form, then
> uses "remove_shm_from_rangeset" to remove the static shm regions from it.
> 
> For a direct-map domain with the IOMMU on, after we get the guest shm info
> from "kinfo", we use "remove_shm_from_rangeset" to remove the static shm.
> 
> For a direct-map domain with the IOMMU off, static shm has already been
> taken care of through the reserved memory banks, so we do nothing.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
> v1 -> v2:
> - new commit
> ---
>  xen/arch/arm/domain_build.c | 94 ++++++++++++++++++++++++++++++++++++-
>  1 file changed, 93 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 9b4aabaf22..4cd1e3d433 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1914,6 +1914,32 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
>      return 0;
>  }
> 
> +static int __init remove_shm_from_rangeset(const struct kernel_info *kinfo,
> +                                           struct rangeset *rangeset)
> +{
> +    unsigned int i;
> +
> +    /* Remove static shared memory regions */
> +    for ( i = 0; i < kinfo->shminfo.nr_banks; i++ )
> +    {
> +        struct membank membank = kinfo->shminfo.bank[i].membank;
> +        paddr_t start, end;
> +        int res;
> +
> +        start = membank.start;
> +        end = membank.start + membank.size - 1;
> +        res = rangeset_remove_range(rangeset, start, end);
> +        if ( res )
> +        {
> +            printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
> +                   start, end);
> +            return -EINVAL;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
>  /*
>   * Find the holes in the Host DT which can be exposed to Dom0 as extended
>   * regions for the special memory mappings. In order to calculate regions
> @@ -1922,6 +1948,8 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
>   * - MMIO
>   * - Host RAM
>   * - PCI aperture
> + * - Static shared memory regions, which are described by special property
> + *   "xen,static-shm"
>   */
>  static int __init find_memory_holes(const struct kernel_info *kinfo,
>                                      struct meminfo *ext_regions)
> @@ -1997,6 +2025,14 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
>          }
>      }
> 
> +    /* Remove static shared memory regions */
> +    if ( kinfo->shminfo.nr_banks != 0 )
> +    {
> +        res = remove_shm_from_rangeset(kinfo, mem_holes);
> +        if ( res )
> +            goto out;
> +    }
> +
>      start = 0;
>      end = (1ULL << p2m_ipa_bits) - 1;
>      res = rangeset_report_ranges(mem_holes, start, end,
> @@ -2012,6 +2048,62 @@ out:
>      return res;
>  }
> 
> +static int __init remove_shm_holes_for_domU(const struct kernel_info *kinfo,
> +                                            struct meminfo *orig_ext)
> +{
> +    struct rangeset *guest_holes;
> +    unsigned int i = 0, tail;
> +    int res;
> +    paddr_t start, end;
> +
> +    /* No static shared memory region. */
> +    if ( kinfo->shminfo.nr_banks == 0 )
> +        return 0;
> +
> +    dt_dprintk("Remove static shared memory holes for extended regions of DomU\n");
> +
> +    guest_holes = rangeset_new(NULL, NULL, 0);
> +    if ( !guest_holes )
> +        return -ENOMEM;
> +
> +    for ( ; i < orig_ext->nr_banks; i++ )
> +    {
> +        start = orig_ext->bank[i].start;
> +        end = start + orig_ext->bank[i].size - 1;
> +
> +        res = rangeset_add_range(guest_holes, start, end);
> +        if ( res )
> +        {
> +            printk(XENLOG_ERR "Failed to add: %#"PRIpaddr"->%#"PRIpaddr"\n",
> +                   start, end);
> +            goto out;
> +        }
> +    }
> +
> +    /* Remove static shared memory regions */
> +    res = remove_shm_from_rangeset(kinfo, guest_holes);
> +    if ( res )
> +        goto out;
> +
> +    tail = orig_ext->nr_banks - 1;
> +    start = orig_ext->bank[0].start;
> +    end = orig_ext->bank[tail].start + orig_ext->bank[tail].size - 1;
> +
> +    /* Reset original extended regions to hold new value */
> +    orig_ext->nr_banks = 0;
> +    res = rangeset_report_ranges(guest_holes, start, end,
> +                                 add_ext_regions, orig_ext);
> +    if ( res )
> +        orig_ext->nr_banks = 0;
> +    else if ( !orig_ext->nr_banks )
> +        res = -ENOENT;
> +
> +out:
> +    rangeset_destroy(guest_holes);
> +
> +    return res;
> +}
> +
>  static int __init find_domU_holes(const struct kernel_info *kinfo,
>                                    struct meminfo *ext_regions)
>  {
> @@ -2039,7 +2131,7 @@ static int __init find_domU_holes(const struct kernel_info *kinfo,
>          res = 0;
>      }
> 
> -    return res;
> +    return remove_shm_holes_for_domU(kinfo, ext_regions);

We are no longer using "res" anywhere in this function, so the variable may be removed.

arch/arm/domain_build.c: In function ‘find_domU_holes’:
arch/arm/domain_build.c:2114:9: warning: variable ‘res’ set but not used [-Wunused-but-set-variable]
 2114 |     int res = -ENOENT;
      |         ^~~
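
For illustration, the tail of the function could then look something like this (untested sketch only; the elided part of the body is not visible in this hunk, so this shows the shape of the cleanup rather than a concrete diff):

```c
static int __init find_domU_holes(const struct kernel_info *kinfo,
                                  struct meminfo *ext_regions)
{
    /* ... bank iteration as before, minus the local "res" variable ... */

    /*
     * The function now unconditionally returns the result of the shm
     * hole removal, so the intermediate "res" bookkeeping can go away.
     */
    return remove_shm_holes_for_domU(kinfo, ext_regions);
}
```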

>  }
> 
>  static int __init make_hypervisor_node(struct domain *d,
> --
> 2.25.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 20:05:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 20:05:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520947.809116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn3Bl-0001aA-3P; Thu, 13 Apr 2023 20:05:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520947.809116; Thu, 13 Apr 2023 20:05:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn3Bk-0001a3-Uo; Thu, 13 Apr 2023 20:05:00 +0000
Received: by outflank-mailman (input) for mailman id 520947;
 Thu, 13 Apr 2023 20:05:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn3Bj-0001Zt-W8; Thu, 13 Apr 2023 20:05:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn3Bj-0004aq-T2; Thu, 13 Apr 2023 20:04:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn3Bj-0008LR-9d; Thu, 13 Apr 2023 20:04:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pn3Bj-00043W-9D; Thu, 13 Apr 2023 20:04:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180231-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180231: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=9d177b7f87d96d1ed8fd16e222a37bd1ac8a0cd8
X-Osstest-Versions-That:
    qemuu=abb02ce0e76a8e00026699a863ab2d11d88f56d4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Apr 2023 20:04:59 +0000

flight 180231 qemu-mainline real [real]
flight 180249 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180231/
http://logs.test-lab.xenproject.org/osstest/logs/180249/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 180249-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180220
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180220
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180220
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180220
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180220
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                9d177b7f87d96d1ed8fd16e222a37bd1ac8a0cd8
baseline version:
 qemuu                abb02ce0e76a8e00026699a863ab2d11d88f56d4

Last test of basis   180220  2023-04-12 12:10:32 Z    1 days
Testing same since   180231  2023-04-13 09:18:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Klaus Jensen <k.jensen@samsung.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   abb02ce0e7..9d177b7f87  9d177b7f87d96d1ed8fd16e222a37bd1ac8a0cd8 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 20:13:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 20:13:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520953.809125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn3K3-00036D-Su; Thu, 13 Apr 2023 20:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520953.809125; Thu, 13 Apr 2023 20:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn3K3-000366-QK; Thu, 13 Apr 2023 20:13:35 +0000
Received: by outflank-mailman (input) for mailman id 520953;
 Thu, 13 Apr 2023 20:13:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pn3K3-000360-9B
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 20:13:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn3K3-0004i0-10; Thu, 13 Apr 2023 20:13:35 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn3K2-0004pH-S4; Thu, 13 Apr 2023 20:13:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=oSq02aCdIW2vhjwtNEB7aKRGk3ADSa7uIvd++Ul2Ca4=; b=UBkUOsckgh80kXjV3FIZLezLfT
	ak+xyeWNfR9lE0f/zSF8E2nJiNl2vGZEs7d60E5jzvlQ/LzyYoK/OlEriQ6M/3gqR5JkGI78z3Byo
	y7AN2zVNbeGkGJAFnKb8LjgYu6dkPHP7lCK8GOFEksrRHjM3Slbn1WGmpZI0Ar46nVfo=;
Message-ID: <bd8f0ed2-586f-02f6-1f16-dc3b3b9c82a8@xen.org>
Date: Thu, 13 Apr 2023 21:13:32 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230413192201.3255984-1-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH] xen: Fold exit paths in find_text_region()
In-Reply-To: <20230413192201.3255984-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Andrew,

You may want to update your runes for finding the maintainers: you are 
CCing the x86 folks but not THE REST (the patch modifies common/ after all).

On 13/04/2023 20:22, Andrew Cooper wrote:
> Despite rcu_read_unlock() being fully inlineable, the optimiser cannot fold
> these exit paths, because of the various compiler barriers providing RCU
> safety.  Help the compiler out.

Please mention which compiler(s) (including version) you used.

> 
> This compiles to marginally better code in all cases.
The code itself is fine with me, but it raises a few questions: if the 
improvement is marginal, then why are you doing it? What is your end goal?

Lastly, what do you mean by "all cases"? All architectures? All compilers?

Anyway, if this pattern is important (TBD why), then I think we should 
update the CODING_STYLE with some guidance. Otherwise, we may introduce 
similar patterns (we already have some).
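
For reference, here is the pattern being discussed, reduced to a stub 
lock so it is self-contained (illustration only, none of this is Xen 
code): the folded form has a single unlock and a single return.

```c
#include <assert.h>
#include <stddef.h>

/* Trivial stand-in for the RCU read lock, for illustration only. */
static int lock_depth;
static void fake_lock(void)   { lock_depth++; }
static void fake_unlock(void) { lock_depth--; }

struct region { unsigned long start, end; };

/* Folded form: one unlock, one return, mirroring the shape of the patch. */
static const struct region *find_region(const struct region *tbl, size_t n,
                                        unsigned long addr)
{
    const struct region *found = NULL;
    size_t i;

    fake_lock();
    for ( i = 0; i < n; i++ )
    {
        if ( addr >= tbl[i].start && addr < tbl[i].end )
        {
            found = &tbl[i];
            break;
        }
    }
    fake_unlock();

    return found;
}
```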

> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> ---
>   xen/common/virtual_region.c | 12 ++++++------
>   1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/common/virtual_region.c b/xen/common/virtual_region.c
> index 30b0b4ab9c85..5ecdba9c08ed 100644
> --- a/xen/common/virtual_region.c
> +++ b/xen/common/virtual_region.c
> @@ -40,20 +40,20 @@ static DEFINE_RCU_READ_LOCK(rcu_virtual_region_lock);
>   
>   const struct virtual_region *find_text_region(unsigned long addr)
>   {
> -    const struct virtual_region *region;
> +    const struct virtual_region *iter, *region = NULL;
>   
>       rcu_read_lock(&rcu_virtual_region_lock);
> -    list_for_each_entry_rcu( region, &virtual_region_list, list )
> +    list_for_each_entry_rcu ( iter, &virtual_region_list, list )
>       {
> -        if ( (void *)addr >= region->start && (void *)addr < region->end )
> +        if ( (void *)addr >= iter->start && (void *)addr < iter->end )
>           {
> -            rcu_read_unlock(&rcu_virtual_region_lock);
> -            return region;
> +            region = iter;
> +            break;
>           }
>       }
>       rcu_read_unlock(&rcu_virtual_region_lock);
>   
> -    return NULL;
> +    return region;
>   }
>   
>   void register_virtual_region(struct virtual_region *r)

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 20:32:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 20:32:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520959.809136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn3bi-0005Yi-G0; Thu, 13 Apr 2023 20:31:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520959.809136; Thu, 13 Apr 2023 20:31:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn3bi-0005Yb-DE; Thu, 13 Apr 2023 20:31:50 +0000
Received: by outflank-mailman (input) for mailman id 520959;
 Thu, 13 Apr 2023 20:31:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pn3bh-0005YV-Ih
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 20:31:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn3bh-0005Bo-6d; Thu, 13 Apr 2023 20:31:49 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn3bg-0005Be-Vj; Thu, 13 Apr 2023 20:31:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=TyxIEsaM6b1l/hR5s5xSX0h5w0Zjr2PVNfKqKVoiay0=; b=UU0yZ6CQNOv687RN68enSt5ATZ
	34w0CDqDJlt9904+eDqMsPKJYLsHkUURnywVbBvmZB3jrxO4aeXh0Ym1QwEnFjI+j3vmfMgzXgADN
	qTNjV1hO89jZa87e6ExihV4fy9drZbAEPa+tzMRBrpva2gPjejdoDdZdAR3YD7R1lHgs=;
Message-ID: <6d1b8904-374c-4392-6945-2746f97c31f6@xen.org>
Date: Thu, 13 Apr 2023 21:31:47 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
To: Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com, Marc Bonnici <marc.bonnici@arm.com>,
 Achin Gupta <achin.gupta@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-13-jens.wiklander@linaro.org>
From: Julien Grall <julien@xen.org>
Subject: Re: [XEN PATCH v8 12/22] xen/arm: ffa: support mapping guest RX/TX
 buffers
In-Reply-To: <20230413071424.3273490-13-jens.wiklander@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jens,

On 13/04/2023 08:14, Jens Wiklander wrote:
> +static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr,
> +                                register_t rx_addr, uint32_t page_count)
> +{
> +    uint32_t ret = FFA_RET_INVALID_PARAMETERS;
> +    struct domain *d = current->domain;
> +    struct ffa_ctx *ctx = d->arch.tee;
> +    struct page_info *tx_pg;
> +    struct page_info *rx_pg;
> +    p2m_type_t t;
> +    void *rx;
> +    void *tx;
> +
> +    if ( !smccc_is_conv_64(fid) )
> +    {
> +        /*
> +         * Calls using the 32-bit calling convention must ignore the upper
> +         * 32 bits in the argument registers.
> +         */
> +        tx_addr &= UINT32_MAX;
> +        rx_addr &= UINT32_MAX;
> +    }
> +
> +    if ( page_count > FFA_MAX_RXTX_PAGE_COUNT ) {

Coding style:

if ( ... )
{

> +        printk(XENLOG_ERR "ffa: RXTX_MAP: error: %u pages requested (limit %u)\n",
> +               page_count, FFA_MAX_RXTX_PAGE_COUNT);
> +        return FFA_RET_NOT_SUPPORTED;
> +    }
> +
> +    /* Already mapped */
> +    if ( ctx->rx )
> +        return FFA_RET_DENIED;
> +
> +    tx_pg = get_page_from_gfn(d, gfn_x(gaddr_to_gfn(tx_addr)), &t, P2M_ALLOC);

I might be missing something. Here you only take a reference on one 
page, yet, per the value of FFA_MAX_RXTX_PAGE_COUNT, the buffer can be 
up to 32 pages.

Can you clarify?
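
To make the question concrete, here is a sketch of taking one reference 
per page of the buffer, with rollback on failure. The helpers and types 
are stubs I made up for illustration; the real code would use 
get_page_from_gfn()/put_page():

```c
#include <assert.h>
#include <stddef.h>

#define MAX_RXTX_PAGE_COUNT 32

/* Hypothetical stand-ins for Xen's page tracking, illustration only. */
struct page_info { int refcount; };
static struct page_info page_pool[MAX_RXTX_PAGE_COUNT];

/* Stub for get_page_from_gfn(d, gfn, &t, P2M_ALLOC). */
static struct page_info *stub_get_page(unsigned long gfn)
{
    if ( gfn >= MAX_RXTX_PAGE_COUNT )
        return NULL;
    page_pool[gfn].refcount++;
    return &page_pool[gfn];
}

/* Stub for put_page(). */
static void stub_put_page(struct page_info *pg)
{
    pg->refcount--;
}

/* Take a reference on every page of a count-page buffer starting at
 * base_gfn, dropping the references already taken if any page fails. */
static int get_buffer_pages(unsigned long base_gfn, unsigned int count,
                            struct page_info **out)
{
    unsigned int i;

    for ( i = 0; i < count; i++ )
    {
        out[i] = stub_get_page(base_gfn + i);
        if ( !out[i] )
        {
            while ( i-- )
                stub_put_page(out[i]);
            return -1;
        }
    }

    return 0;
}
```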

> +    if ( !tx_pg )
> +        return FFA_RET_INVALID_PARAMETERS;
> +    /* Only normal RAM for now */
> +    if ( !p2m_is_ram(t) )

p2m_is_ram() would also allow a RAM page marked read-only in stage-2. 
Is that intended?

If not, then I think you want to check for t != p2m_ram_rw instead.

> +        goto err_put_tx_pg;
> +
> +    rx_pg = get_page_from_gfn(d, gfn_x(gaddr_to_gfn(rx_addr)), &t, P2M_ALLOC);
> +    if ( !rx_pg )
> +        goto err_put_tx_pg;
> +    /* Only normal RAM for now */
> +    if ( !p2m_is_ram(t) )

Same here.

> +        goto err_put_rx_pg;
> +
> +    tx = __map_domain_page_global(tx_pg);
> +    if ( !tx )
> +        goto err_put_rx_pg;
> +
> +    rx = __map_domain_page_global(rx_pg);
> +    if ( !rx )
> +        goto err_unmap_tx;
> +
> +    ctx->rx = rx;
> +    ctx->tx = tx;
> +    ctx->rx_pg = rx_pg;
> +    ctx->tx_pg = tx_pg;
> +    ctx->page_count = page_count;
> +    ctx->tx_is_free = true;
> +    return FFA_RET_OK;

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 20:34:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 20:34:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520963.809146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn3dn-00066w-S9; Thu, 13 Apr 2023 20:33:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520963.809146; Thu, 13 Apr 2023 20:33:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn3dn-00066p-P2; Thu, 13 Apr 2023 20:33:59 +0000
Received: by outflank-mailman (input) for mailman id 520963;
 Thu, 13 Apr 2023 20:33:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pn3dm-00066h-S5
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 20:33:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn3dm-0005Df-HR; Thu, 13 Apr 2023 20:33:58 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn3dm-0005NF-CG; Thu, 13 Apr 2023 20:33:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=gB/jvbSDlfA0EMEcuLtzZFuh9OajfqJtkl3CYchv2yE=; b=S93GEsCUK1g3QR+3J7My1mZNw8
	Hqnj7mxAfragLCTnGmsuFuneY6hMKSWkdK5HPPPKgDTjg2f8zuG4JtK/ZvNMpOXNgd2eeFa+Tt7Nm
	rEjRX3Ud/7LYW/JEsn456EYUsFosilEldQoAzdQnQM3NY4OxNbFQmrs1GPJMxuO+4Jdg=;
Message-ID: <86b582a9-861c-7d8f-beeb-3469ea7948be@xen.org>
Date: Thu, 13 Apr 2023 21:33:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [XEN PATCH v8 13/22] xen/arm: ffa: support guest
 FFA_PARTITION_INFO_GET
To: Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com, Marc Bonnici <marc.bonnici@arm.com>,
 Achin Gupta <achin.gupta@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-14-jens.wiklander@linaro.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230413071424.3273490-14-jens.wiklander@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jens,

On 13/04/2023 08:14, Jens Wiklander wrote:
> Adds support in the mediator to handle FFA_PARTITION_INFO_GET requests
> from a guest. The requests are forwarded to the SPMC and the response is
> translated according to the FF-A version in use by the guest.
> 
> Using FFA_PARTITION_INFO_GET changes the owner of the RX buffer to the
> caller (the guest in this case), so once it is done with the buffer it
> must be released using FFA_RX_RELEASE before another call can be made.
> 
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>   xen/arch/arm/tee/ffa.c | 137 ++++++++++++++++++++++++++++++++++++++++-
>   1 file changed, 134 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> index 127397d8e448..74b8c517afb8 100644
> --- a/xen/arch/arm/tee/ffa.c
> +++ b/xen/arch/arm/tee/ffa.c
> @@ -166,7 +166,18 @@
>   #define FFA_MSG_SEND                    0x8400006EU
>   #define FFA_MSG_POLL                    0x8400006AU
>   
> +/*
> + * Structs below ending with _1_0 are defined in FF-A-1.0-REL and
> + * struct ending with _1_1 are defined in FF-A-1.1-REL0.
> + */
> +
>   /* Partition information descriptor */
> +struct ffa_partition_info_1_0 {
> +    uint16_t id;
> +    uint16_t execution_context;
> +    uint32_t partition_properties;
> +};
> +
>   struct ffa_partition_info_1_1 {
>       uint16_t id;
>       uint16_t execution_context;
> @@ -183,7 +194,8 @@ struct ffa_ctx {
>       unsigned int page_count;
>       /* FF-A version used by the guest */
>       uint32_t guest_vers;
> -    bool tx_is_free;
> +    bool rx_is_free;

I am a bit confused as to why this field is renamed. Was tx_is_free 
introduced by mistake? If not, can you give the field its correct name 
in the patch where it is first introduced?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 20:53:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 20:53:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520967.809155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn3wP-000057-DI; Thu, 13 Apr 2023 20:53:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520967.809155; Thu, 13 Apr 2023 20:53:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn3wP-00004w-Aj; Thu, 13 Apr 2023 20:53:13 +0000
Received: by outflank-mailman (input) for mailman id 520967;
 Thu, 13 Apr 2023 20:53:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pn3wN-0008WT-Qq
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 20:53:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn3wN-0005Wj-GH; Thu, 13 Apr 2023 20:53:11 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn3wN-0005xj-B0; Thu, 13 Apr 2023 20:53:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=mtaoGJepRxHbMpQyow6E7Am+G+hQC599jwvUs8hGIwE=; b=XVNDIs8ymNEXYcWjlN+4ySOdRC
	ZJ5vb/yUAZJaIAS6PDc/8r/UWd6WyeBwt2FtUDAvaKb0OlCZPD2GiDwSce2OUJkDtwlBlhZ6wtL0k
	mg+bcTsGZO+cYeHmXwaC/+QU5KgGT0zit35tp7T88R1hdHkh2ItJ0iMQOKxpPOE7t+5o=;
Message-ID: <176f5384-6e35-83bd-5f77-8b31412e8048@xen.org>
Date: Thu, 13 Apr 2023 21:53:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
To: Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com, Marc Bonnici <marc.bonnici@arm.com>,
 Achin Gupta <achin.gupta@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-18-jens.wiklander@linaro.org>
From: Julien Grall <julien@xen.org>
Subject: Re: [XEN PATCH v8 17/22] xen/arm: ffa: support sharing memory
In-Reply-To: <20230413071424.3273490-18-jens.wiklander@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jens,

On 13/04/2023 08:14, Jens Wiklander wrote:
>   static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
>                                         uint8_t msg)
>   {
> @@ -781,6 +862,400 @@ out:
>                resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, resp.a7 & mask);
>   }
>   
> +/*
> + * Gets all pages and assigns them to the supplied shared memory object. If
> + * this function fails then the caller is still expected to call
> + * put_shm_pages() as a cleanup.
> + */
> +static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
> +                         const struct ffa_address_range *range,
> +                         uint32_t range_count, unsigned int start_page_idx,
> +                         unsigned int *last_page_idx)
> +{
> +    unsigned int pg_idx = start_page_idx;
> +    gfn_t gfn;
> +    unsigned int n;
> +    unsigned int m;
> +    p2m_type_t t;
> +    uint64_t addr;
> +
> +    for ( n = 0; n < range_count; n++ )
> +    {
> +        for ( m = 0; m < range[n].page_count; m++ )
> +        {
> +            if ( pg_idx >= shm->page_count )
> +                return FFA_RET_INVALID_PARAMETERS;
> +
> +            addr = read_atomic(&range[n].address);

I am confused by the use of read_atomic() here. Is this part of guest 
memory? If so, why is page_count not also read atomically?

Also, it looks like you will read the same address atomically on every 
iteration of the inner loop. Shouldn't the read be moved just before 
that loop?
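
Something like this (a sketch with a simplified read_atomic() stand-in 
and made-up types, just to show the shape): the per-range address is 
read once, before the inner per-page loop.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for Xen's read_atomic() macro, illustration only. */
#define read_atomic(p) (*(volatile __typeof__(*(p)) *)(p))

#define FFA_PAGE_SIZE 4096u

struct address_range {
    uint64_t address;
    uint32_t page_count;
};

/* Visit every page of every range; the guest-provided address is read
 * atomically once per range, outside the inner loop. */
static uint64_t sum_page_addrs(const struct address_range *range,
                               uint32_t range_count)
{
    uint64_t total = 0;
    uint32_t n, m;

    for ( n = 0; n < range_count; n++ )
    {
        uint64_t addr = read_atomic(&range[n].address); /* hoisted */

        for ( m = 0; m < range[n].page_count; m++ )
            total += addr + m * FFA_PAGE_SIZE;
    }

    return total;
}
```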

> +            gfn = gaddr_to_gfn(addr + m * FFA_PAGE_SIZE);
> +            shm->pages[pg_idx] = get_page_from_gfn(d, gfn_x(gfn), &t,
> +						   P2M_ALLOC);
> +            if ( !shm->pages[pg_idx] )
> +                return FFA_RET_DENIED;
> +            /* Only normal RAM for now */
> +            if ( !p2m_is_ram(t) )
> +                return FFA_RET_DENIED;
> +            pg_idx++;
> +        }
> +    }
> +
> +    *last_page_idx = pg_idx;
> +
> +    return FFA_RET_OK;
> +}
> +
> +static void put_shm_pages(struct ffa_shm_mem *shm)
> +{
> +    unsigned int n;
> +
> +    for ( n = 0; n < shm->page_count && shm->pages[n]; n++ )
> +    {
> +        put_page(shm->pages[n]);
> +        shm->pages[n] = NULL;
> +    }
> +}
> +
> +static struct ffa_shm_mem *alloc_ffa_shm_mem(struct ffa_ctx *ctx,
> +                                             unsigned int page_count)
> +{
> +    struct ffa_shm_mem *shm;
> +
> +    if ( page_count >= FFA_MAX_SHM_PAGE_COUNT ||
> +         ctx->shm_count >= FFA_MAX_SHM_COUNT )
> +        return NULL;
> +
> +    shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
> +    if ( shm )
> +    {
> +        ctx->shm_count++;
> +        shm->page_count = page_count;
> +    }
> +
> +    return shm;
> +}
> +
> +static void free_ffa_shm_mem(struct ffa_ctx *ctx, struct ffa_shm_mem *shm)
> +{
> +    if ( shm ) {

Coding style:

if ( ... )
{

but I would prefer if we remove one level of indentation and use:

if ( !shm )
   return;
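
As a self-contained illustration of that guard-clause shape (trivial 
made-up types, not the real ffa_ctx/ffa_shm_mem):

```c
#include <assert.h>
#include <stdlib.h>

struct shm_ctx { unsigned int shm_count; };
struct shm_mem { unsigned int page_count; };

/* Guard clause: bail out on NULL up front instead of indenting the body. */
static void free_shm(struct shm_ctx *ctx, struct shm_mem *shm)
{
    if ( !shm )
        return;

    assert(ctx->shm_count > 0);   /* ASSERT() in the real code */
    ctx->shm_count--;
    /* put_shm_pages(shm) would go here in the real code. */
    free(shm);
}
```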

> +        ASSERT(ctx->shm_count > 0);
> +        ctx->shm_count--;
> +        put_shm_pages(shm);
> +        xfree(shm);
> +    }
> +}

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 20:57:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 20:57:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520971.809166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn40B-0000iY-Tp; Thu, 13 Apr 2023 20:57:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520971.809166; Thu, 13 Apr 2023 20:57:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn40B-0000iR-Qk; Thu, 13 Apr 2023 20:57:07 +0000
Received: by outflank-mailman (input) for mailman id 520971;
 Thu, 13 Apr 2023 20:57:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pn409-0000iH-Vf
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 20:57:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn409-0005c3-Jv; Thu, 13 Apr 2023 20:57:05 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn409-0006AB-E1; Thu, 13 Apr 2023 20:57:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=/JVyhFgWOemJ/AtWKESvxq6p/h54T4vD7UzVanZl1Qs=; b=aM0jXWcfgUMfqUhSB3ANVS/cvF
	Ilm+D9VQ4Q7hgcbZGEjNwLiQe9CSncvP1bk2FwW/bDvJ7LuV5gymFowHSn1EXZsckZWMwgMkZoffR
	rXXHMy4S85y/zrajzXv235WlSaKMOoml+2q3pIsk+Vd8Mnegz0XWwKzSul73m3keJlTM=;
Message-ID: <c2ef841e-fe3a-a283-2c83-225e02d588d2@xen.org>
Date: Thu, 13 Apr 2023 21:57:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [XEN PATCH v8 21/22] xen/arm: ffa: list current limitations
To: Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com, Marc Bonnici <marc.bonnici@arm.com>,
 Achin Gupta <achin.gupta@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-22-jens.wiklander@linaro.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230413071424.3273490-22-jens.wiklander@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jens,

On 13/04/2023 08:14, Jens Wiklander wrote:
> Adds a comment with a list of unsupported FF-A interfaces and
> limitations in the implemented FF-A interfaces.
> 
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>   xen/arch/arm/tee/ffa.c | 32 ++++++++++++++++++++++++++++++++
>   1 file changed, 32 insertions(+)
> 
> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> index 0948cc636871..6424c222c885 100644
> --- a/xen/arch/arm/tee/ffa.c
> +++ b/xen/arch/arm/tee/ffa.c
> @@ -13,6 +13,38 @@
>    *                https://developer.arm.com/documentation/den0077/e
>    * TEEC-1.0C: TEE Client API Specification version 1.0c available at
>    *            https://globalplatform.org/specs-library/tee-client-api-specification/
> + *
> + * Notes on the current implementation.
> + *
> + * Unsupported FF-A interfaces:
> + * o FFA_MSG_POLL and FFA_MSG_SEND - deprecated in FF-A-1.1-REL0
> + * o FFA_MEM_RETRIEVE_* - Used when sharing memory from an SP to a VM
> + * o FFA_MEM_DONATE_* and FFA_MEM_LEND_* - Used when transferring
> + *   ownership of, or access to, a memory region
> + * o FFA_MSG_SEND2 and FFA_MSG_WAIT - Used for indirect messaging
> + * o FFA_MSG_YIELD
> + * o FFA_INTERRUPT - Used to report preemption
> + * o FFA_RUN
> + *
> + * Limitations in the implemented FF-A interfaces:
> + * o FFA_RXTX_MAP_*:
> + *   - Maps at most 32 4k pages large RX and TX buffers
> + *   - RX/TX buffers must be normal RAM

Can you explain why this is a problem?

> + *   - Doesn't support forwarding this call on behalf of an endpoint
> + * o FFA_MEM_SHARE_*: only supports sharing
> + *   - from a VM to an SP
> + *   - with one borrower
> + *   - with the memory transaction descriptor in the RX/TX buffer
> + *   - normal memory
> + *   - at most 512 kB large memory regions
> + *   - at most 32 shared memory regions per guest
> + * o FFA_MSG_SEND_DIRECT_REQ:
> + *   - only supported from a VM to an SP
> + *
> + * There are some large locked sections with ffa_tx_buffer_lock and
> + * ffa_rx_buffer_lock. Especially the ffa_tx_buffer_lock spinlock used
> + * around share_shm() is a very large locked section which can let one VM
> + * affect another VM.
>    */
>   
>   #include <xen/bitops.h>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 21:00:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 21:00:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520975.809175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn43P-0002AD-AC; Thu, 13 Apr 2023 21:00:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520975.809175; Thu, 13 Apr 2023 21:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn43P-0002A6-7M; Thu, 13 Apr 2023 21:00:27 +0000
Received: by outflank-mailman (input) for mailman id 520975;
 Thu, 13 Apr 2023 21:00:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pn43O-0002A0-KT
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 21:00:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn43N-0005iQ-NU; Thu, 13 Apr 2023 21:00:25 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pn43N-0006ML-G8; Thu, 13 Apr 2023 21:00:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <7a29bc06-61b5-51e6-4625-bf19e530b975@xen.org>
Date: Thu, 13 Apr 2023 22:00:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [XEN PATCH v8 22/22] docs: add Arm FF-A mediator
To: Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org
Cc: Bertrand.Marquis@arm.com, Marc Bonnici <marc.bonnici@arm.com>,
 Achin Gupta <achin.gupta@arm.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-23-jens.wiklander@linaro.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230413071424.3273490-23-jens.wiklander@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jens,

On 13/04/2023 08:14, Jens Wiklander wrote:
> Describes an FF-A version 1.1 [1] mediator to communicate with a Secure
> Partition in secure world.
> 
> [1] https://developer.arm.com/documentation/den0077/latest
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> ---
>   SUPPORT.md               |  8 ++++++++
>   docs/man/xl.cfg.5.pod.in | 15 +++++++++++++++
>   2 files changed, 23 insertions(+)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index aa1940e55f09..1fd746f7f7f2 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -818,6 +818,14 @@ that covers the DMA of the device to be passed through.
>   
>   No support for QEMU backends in a 16K or 64K domain.
>   
> +### ARM: Firmware Framework for Arm A-profile (FF-A) Mediator
> +
> +    Status, Arm64: Tech Preview
> +
> +There are still some code paths where a vCPU may hog a pCPU longer than
> +necessary. The FF-A mediator is not yet implemented for Arm32. Part of the
> +FF-A specification is not supported.

NIT: I would suggest adding: "(See the top comment in ...)". So one
can easily find the limitations.

> +
>   ### ARM: Guest Device Tree support
>   
>       Status: Supported
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index 10f37990be57..bba99c576b48 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -1645,6 +1645,21 @@ in OP-TEE.
>   
>   This feature is a B<technology preview>.
>   
> +=item B<ffa>
> +
> +B<Arm only.> Allow a guest to communicate via FF-A with Secure Partitions
> +(SP), default false.
> +
> +Currently is only a small subset of the FF-A specification supported. Just
> +enough to communicate with OP-TEE. In general only direct messaging and
> +sharing memory with one SP. More advanced use cases where memory might be
> +shared or donated to multple SPs are not supported.

Typo: s/multple/multiple/

> +
> +See L<https://developer.arm.com/documentation/den0077/latest> for more
> +informantion about FF-A.

Typo: s/informantion/information/

> +
> +This feature is a B<technology preview>.
> +
>   =back
>   
>   =back
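For illustration, a guest configuration enabling the mediator might then contain something like the following. This assumes the option is selected through the Arm tee= setting as in current Xen; the exact spelling should be checked against the merged xl.cfg(5) documentation:

```
# Hypothetical xl guest config fragment (Arm only, technology preview):
# enable the FF-A mediator for this guest.
tee = "ffa"
```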

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 22:10:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 22:10:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520979.809185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn58o-0000qk-DC; Thu, 13 Apr 2023 22:10:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520979.809185; Thu, 13 Apr 2023 22:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn58o-0000qO-AL; Thu, 13 Apr 2023 22:10:06 +0000
Received: by outflank-mailman (input) for mailman id 520979;
 Thu, 13 Apr 2023 22:10:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9nQ=AE=citrix.com=prvs=46097603d=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pn58n-0000df-CE
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 22:10:05 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ef133321-da47-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 00:10:02 +0200 (CEST)
Received: from mail-dm6nam10lp2105.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.105])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 13 Apr 2023 18:09:59 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5182.namprd03.prod.outlook.com (2603:10b6:208:1e7::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 13 Apr
 2023 22:09:57 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6277.043; Thu, 13 Apr 2023
 22:09:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef133321-da47-11ed-b21e-6b7b168915f2
X-IronPort-RemoteIP: 104.47.58.105
X-IronPort-MID: 105349865
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
Message-ID: <dcdc1e90-77f0-16d2-e83a-dc9c12158975@citrix.com>
Date: Thu, 13 Apr 2023 23:09:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] xen: Fold exit paths in find_text_region()
Content-Language: en-GB
To: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230413192201.3255984-1-andrew.cooper3@citrix.com>
 <bd8f0ed2-586f-02f6-1f16-dc3b3b9c82a8@xen.org>
In-Reply-To: <bd8f0ed2-586f-02f6-1f16-dc3b3b9c82a8@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0106.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:191::21) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MN2PR03MB5182:EE_
X-MS-Office365-Filtering-Correlation-Id: f74696c4-a8d8-4939-e935-08db3c6bd0cd
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f74696c4-a8d8-4939-e935-08db3c6bd0cd
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Apr 2023 22:09:56.8903
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: o8I/HqXrYqgIeRd2QsYrN0nDJiA3jFpv1dKj+AfAMDtveP8vjGfEpLKeneOZenOayGy3scNGW4e6Re1yWUrXdqbhKDzAf2bhR8ubYdLRaKA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5182

On 13/04/2023 9:13 pm, Julien Grall wrote:
>
> On 13/04/2023 20:22, Andrew Cooper wrote:
>> Despite rcu_read_unlock() being fully inlineable, the optimiser
>> cannot fold
>> these exit paths, because of the various compiler barriers providing RCU
>> safety.  Help the compiler out.
>
> Please mention which compiler(s) (including version) you used.
>
>>
>> This compiles to marginally better code in all cases.
> So the code itself is fine with me. But this raises a few questions.
> If this is marginal, then why are you doing it? What's your end goal?

I happened to be working in the area while fixing a bug.  But I am not
justifying "because I judged it to be worth doing" further; it is
entirely self-evident from the fact that I sent the patch.

Whether you meant it to be or not, the request comes across as
insulting, and is not something which should be made of any submitter.

But as this kind of thing has come up before, any further debate on the
matter can be taken to the code of conduct board.

A better phrasing might have been "I'm sorry, I don't understand.  Why
is this an improvement?"  But I'm only guessing as to what this issue is.


But moving to the technical aspects, in an attempt to help this along.

Do you understand what folding the exit paths means?  It's a term which
is used frequently enough on list that it ought to be taken for granted,
and is what in my opinion makes the patch entirely self-evident.

> Lastly what do you mean by "all cases"? Is it all arch? All compilers?

Yes.

>
> Anyway, if this pattern is important (TBD why), then I think we should
> update the CODING_STYLE with some guidance. Otherwise, we may
> introduce similar patterns (we already have some).

Perhaps, but I don't have the time, energy, or willingness to dive into
the viper pit that is trying to make any changes to that document at
all.  Especially when there's a laundry list of more important topics
that ought to take priority...

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 13 23:01:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Apr 2023 23:01:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520985.809196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn5wd-0006G8-AB; Thu, 13 Apr 2023 23:01:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520985.809196; Thu, 13 Apr 2023 23:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn5wd-0006G1-7J; Thu, 13 Apr 2023 23:01:35 +0000
Received: by outflank-mailman (input) for mailman id 520985;
 Thu, 13 Apr 2023 23:01:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pvsx=AE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pn5wb-0006Fv-W4
 for xen-devel@lists.xenproject.org; Thu, 13 Apr 2023 23:01:34 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 208ff823-da4f-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 01:01:31 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id C5C5863341;
 Thu, 13 Apr 2023 23:01:29 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E13AFC433EF;
 Thu, 13 Apr 2023 23:01:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 208ff823-da4f-11ed-8611-37d641c3527e
Date: Thu, 13 Apr 2023 16:01:26 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>, 
    Jan Beulich <JBeulich@suse.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH] xen: Fold exit paths in find_text_region()
In-Reply-To: <dcdc1e90-77f0-16d2-e83a-dc9c12158975@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2304131514430.15580@ubuntu-linux-20-04-desktop>
References: <20230413192201.3255984-1-andrew.cooper3@citrix.com> <bd8f0ed2-586f-02f6-1f16-dc3b3b9c82a8@xen.org> <dcdc1e90-77f0-16d2-e83a-dc9c12158975@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0

On Thu, 13 Apr 2023, Andrew Cooper wrote:
> On 13/04/2023 9:13 pm, Julien Grall wrote:
> >
> > On 13/04/2023 20:22, Andrew Cooper wrote:
> >> Despite rcu_read_unlock() being fully inlineable, the optimiser
> >> cannot fold
> >> these exit paths, because of the various compiler barriers providing RCU
> >> safety.  Help the compiler out.
> >
> > Please mention which compiler(s) (including version) you used.
> >
> >>
> >> This compiles to marginally better code in all cases.
> > So the code itself is fine with me. But this raises a few questions.
> > If this is marginal, then why are you doing it? What's your end goal?
> 
> I happened to be working in the area while fixing a bug.  But I am not
> justifying "because I judged it to be worth doing" further; it is
> entirely self-evident from the fact that I sent the patch.
> 
> Whether you meant it to be or not, the request comes across as
> insulting, and is not something which should be made of any submitter.
> 
> But as this kind of thing has come up before, any further debate on the
> matter can be taken to the code of conduct board.
> 
> A better phrasing might have been "I'm sorry, I don't understand.  Why
> is this an improvement?"  But I'm only guessing as to what this issue is.

Hi Andrew,

I don't think Julien's comment was insulting. You probably thought this
was a two-step process:
step1: Why?
step2: Gotcha! NACK!

But Julien's question should be taken at face value. Julien even wrote
that he thinks the code is OK. "Why is this an improvement?" is a nicer
way to phrase it but both are OK in my view.


When we make code improvements (not bug fixes or new features) and the
improvement is marginal, I think it is an OK question to ask why you
thought it was worth doing.

For instance, could it be that there are other additional benefits that
you didn't write down in the commit message? Such as, is the code more
readable, easier to maintain, more resilient to attacks?

It is also OK if it is only a marginal improvement, but in any case "why
are you doing it" should not be, or be seen as, a challenge.


> But moving to the technical aspects, in an attempt to help this along.
> 
> Do you understand what folding the exit paths means?  It's a term which
> is used frequently enough on list that it ought to be taken for granted,
> and is what in my opinion makes the patch entirely self-evident.
> 
> > Lastly what do you mean by "all cases"? Is it all arch? All compilers?
> 
> Yes.
> 
> >
> > Anyway, if this pattern is important (TBD why), then I think we should
> > update the CODING_STYLE with some guidance. Otherwise, we may
> > introduce similar patterns (we already have some).
> 
> Perhaps, but I don't have the time, energy, or willingness to dive into
> the viper pit that is trying to make any changes to that document at
> all.  Especially when there's a laundry list of more important topics
> that ought to take priority...

Also here, I don't think Julien meant to put one more
potentially-blocking obstacle in your path to upstreaming the
improvement.

If folding the exit paths is an important pattern it is a good idea to
make it a recommended guideline project-wide under CODING_STYLE. It
makes your idea more generally applicable.

Otherwise we end up with, let's say, xen/arch/x86 with good exit paths
and xen/arch/arm with bad exit paths.
 
It should not be taboo to update CODING_STYLE. In the past it was
difficult, but now we have a process in place. Specifically, I am happy
to add it to the agenda for the next MISRA C meeting, where we can make
a snap decision on this guideline in a few minutes.

Once it is in CODING_STYLE, you won't have to discuss on the mailing
list why you are doing this sort of thing anymore, and you can follow
up with dozens of patches improving exit paths everywhere.

I don't think you need to wait for the next MISRA C meeting (or other
calls) before following up on this one patch, but I think suggesting an
update of CODING_STYLE is positive.


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 00:18:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 00:18:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520992.809206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn78b-0005ci-8y; Fri, 14 Apr 2023 00:18:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520992.809206; Fri, 14 Apr 2023 00:18:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn78b-0005cb-5B; Fri, 14 Apr 2023 00:18:01 +0000
Received: by outflank-mailman (input) for mailman id 520992;
 Fri, 14 Apr 2023 00:17:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn78Z-0005cR-0T; Fri, 14 Apr 2023 00:17:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn78Y-0002Jx-Ue; Fri, 14 Apr 2023 00:17:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn78Y-0002zg-Gk; Fri, 14 Apr 2023 00:17:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pn78Y-0007Ap-GI; Fri, 14 Apr 2023 00:17:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tDfZ4Xi51k8E9IqFOo+R+TbzvOuZhpbbDwcG9ScHaC4=; b=Zm8uB6UFLjH1/uLBJvFW00+x4S
	wrbV5Jd4dXE+0jXrvWlRtJnBWhXsKvE3mPjXei32DvWNKax6xVfSsX0vQjxLuH/ZrL/BN2t11q7v4
	GEKWv5FwW+8vrsm0VyPHNsxgL3MI9OiEpmGCYLjdy+3q5rfW4TfP+wOeCkMWwe/Xbh1A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180250-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180250: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=55b67b6950e648338adfe8ec54aeb26ed89d2c97
X-Osstest-Versions-That:
    ovmf=d795fb571b9b2c2ee67ceaef372d5cc461767859
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Apr 2023 00:17:58 +0000

flight 180250 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180250/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 55b67b6950e648338adfe8ec54aeb26ed89d2c97
baseline version:
 ovmf                 d795fb571b9b2c2ee67ceaef372d5cc461767859

Last test of basis   180248  2023-04-13 16:40:45 Z    0 days
Testing same since   180250  2023-04-13 19:43:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  devel@edk2.groups.io <devel@edk2.groups.io>
  Nickle Wang <nicklew@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d795fb571b..55b67b6950  55b67b6950e648338adfe8ec54aeb26ed89d2c97 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 01:00:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 01:00:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.520998.809216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn7o3-0004FP-Ck; Fri, 14 Apr 2023 01:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 520998.809216; Fri, 14 Apr 2023 01:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn7o3-0004EX-A0; Fri, 14 Apr 2023 01:00:51 +0000
Received: by outflank-mailman (input) for mailman id 520998;
 Fri, 14 Apr 2023 01:00:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn7o1-0003PZ-FG; Fri, 14 Apr 2023 01:00:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn7o1-0005jd-Af; Fri, 14 Apr 2023 01:00:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pn7o0-0004ao-QR; Fri, 14 Apr 2023 01:00:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pn7o0-00038e-Pv; Fri, 14 Apr 2023 01:00:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9o8hufXQ8Ift/7JSVqG5AnvXGlph2HNHNYbob3wY8fs=; b=4fgASHtdc6Qa5Z7AivCqpxuccA
	z/XtJueRqpKKs7ErgBt0tTGzU/S5gW3ibdkus/HZtB3vTbTFgtJ8F07PWG3sbqf68HLOy7ydriaJU
	B6NesW28o9bbh0BIu/U79ajTlhRu8OBBklhHdEHdsutCKmkzHUDT19ymQ27CMixnNmfw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180230-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180230: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-freebsd12-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=de4664485abbc0529b1eec44d0061bbfe58a28fb
X-Osstest-Versions-That:
    linux=e62252bc55b6d4eddc6c2bdbf95a448180d6a08d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Apr 2023 01:00:48 +0000

flight 180230 linux-linus real [real]
flight 180252 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180230/
http://logs.test-lab.xenproject.org/osstest/logs/180252/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-freebsd12-amd64 21 guest-start/freebsd.repeat fail pass in 180252-retest
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180252-retest
 test-arm64-arm64-xl-vhd 17 guest-start/debian.repeat fail pass in 180252-retest
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180252-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180208
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180208
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180208
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180208
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180208
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                de4664485abbc0529b1eec44d0061bbfe58a28fb
baseline version:
 linux                e62252bc55b6d4eddc6c2bdbf95a448180d6a08d

Last test of basis   180208  2023-04-11 19:10:17 Z    2 days
Failing since        180222  2023-04-12 16:43:41 Z    1 days    2 attempts
Testing same since   180230  2023-04-13 07:36:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alessandro Manca <crizan.git@gmail.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  David Howells <dhowells@redhat.com>
  Jiri Kosina <jkosina@suse.cz>
  Linus Torvalds <torvalds@linux-foundation.org>
  Martin Povišer <povik+lin@cutebit.org>
  Peter Korsgaard <peter@korsgaard.com>
  Philipp Jungkamp <p.jungkamp@gmx.net>
  Philippe Troin <phil@fifi.org>
  Shaunak Saha <shaunak.saha@intel.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Tanu Malhotra <tanu.malhotra@intel.com>
  Todd Brandt <todd.e.brandt@intel.com>
  Vinod Koul <vkoul@kernel.org>
  Yang Li <yang.lee@linux.alibaba.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   e62252bc55b6..de4664485abb  de4664485abbc0529b1eec44d0061bbfe58a28fb -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 01:32:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 01:32:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521004.809226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn8I6-0004AR-Pt; Fri, 14 Apr 2023 01:31:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521004.809226; Fri, 14 Apr 2023 01:31:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pn8I6-0004AK-KK; Fri, 14 Apr 2023 01:31:54 +0000
Received: by outflank-mailman (input) for mailman id 521004;
 Fri, 14 Apr 2023 01:31:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W/aT=AF=epam.com=prvs=8468442823=volodymyr_babchuk@srs-se1.protection.inumbo.net>)
 id 1pn8I4-0004AD-II
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 01:31:52 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ec51e7f-da64-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 03:31:47 +0200 (CEST)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 33DFVh3I012539; Fri, 14 Apr 2023 01:30:44 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2110.outbound.protection.outlook.com [104.47.17.110])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3pxgh7tt83-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 14 Apr 2023 01:30:44 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com (2603:10a6:803:31::18)
 by DU0PR03MB9633.eurprd03.prod.outlook.com (2603:10a6:10:42f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 01:30:40 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3]) by VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3%3]) with mapi id 15.20.6298.030; Fri, 14 Apr 2023
 01:30:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ec51e7f-da64-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d/XYGwsbX3tlkEnTql0JyT6h3G7rqs41W4omX6rQz0T/MYWICpcFKfwgMjr+CZK4ZI/rxJLilaSfkrqMPckU4BTnuzNX1dui0rcL5vVKLsymzwqVJZb2dEtTYSweHa8dszzCjOeW7NkRlp0o8t7VpFLlFnMn8QSxdyb7lKJZXA8QtREM/PvIYN65UlW7pC5N4/hL5ugffRie+OrTrcsdgLDzyhSBdPk4QiDDiEu0H/93rd5AabYrRJavzLesDJpL15QQ0MNawZVYnlRiQFSYOaU4XnDj8jiGBSuvDGD60hB5UnN3yX6FE58cqmzMg93slUUa8CQe6hPYaeUmXytENg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=a8a2Eq9xah/4GejivNZd2LJafoVPQEkFOxgCLXtX3kM=;
 b=HvvRRQ4cDiFBS95Zes2s4CZA6aJca28+ihCmTeo4a+KGkcShzH0h46sRq6WtHdEdbONRLkxR20SvsdcpG/QywCl6K2h2cMawpq/MgWvfctmeV5QlcHO4u/01he2e0R9Yj4Tw/KTCfE2ZHe/D75yBrN+th8B0y7p9e2KpCiEHAP6ja0JHUUQw4spq0wyQQAVmjGXH7rECkmfOgSd+CIKnilenaJk6cUbtyeywsk7CiNuyqMxDEYy3ctvPNlByeRl5yC5lrAfJuSRXsAZOio+lkNIyFzXM6twN9tH7Jzi3J6+djZFYaJo2qjP0EJ8K3VcOXbL9fpnJqgnMaOt9XUL/zg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a8a2Eq9xah/4GejivNZd2LJafoVPQEkFOxgCLXtX3kM=;
 b=gfLmVZvRp+hNCzT9TzJse23kUoVuIldOinZQ2PMKf8l0ywfsH6seT7/fDqYDs6kEJsVf+JCaQQlLyK3IA5dOAgVQPq/h2gQuSgbh1FeT/38Mr3B5Id9GjUJhtJSzDKmi8xbdbpAm54DX6yjHxXmN6AkuNtyb+FGk4cK8Epoezqw2NUel6CUnKExa8kepqdvwgRUh+z40wBzP4Icsp2geVySPR/1CeJiGaiYk1h21aFguGrOD4r3Akl1cQcsjI80NhQybm4fA23zK7ZBNMqNaQfiOwHL5Ez0bANOk4zNIgKZJ+A1Cs7f+VaVUIaGJ1BBubvXUe+1hscH1VplUY1TJeA==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Jan
 Beulich <jbeulich@suse.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>, Wei
 Liu <wl@xen.org>,
        George Dunlap <george.dunlap@citrix.com>,
        Julien Grall
	<julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Paul Durrant
	<paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Topic: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Index: 
 AQHZVrdzsdckomMx4kauxHkZQ597Iq79mAMAgClI2ACAAK/igIAAVGGAgAGe14CAAJvOAA==
Date: Fri, 14 Apr 2023 01:30:39 +0000
Message-ID: <87leivw8qp.fsf@epam.com>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger> <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger> <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger>
In-Reply-To: <ZDgZEZIG89oW6rEw@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.8.9; emacs 28.2
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: VI1PR03MB3710:EE_|DU0PR03MB9633:EE_
x-ms-office365-filtering-correlation-id: 5999ec03-55fa-45f9-6896-08db3c87daf7
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <290C356A125B8C4E8892EA12C30F89E2@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-originalarrivaltime: 14 Apr 2023 01:30:39.4783
 (UTC)

Hi Roger,

Roger Pau Monné <roger.pau@citrix.com> writes:

> On Wed, Apr 12, 2023 at 09:54:12PM +0000, Volodymyr Babchuk wrote:
>>
>> Hi Roger,
>>
>> First of all, I want to provide link [1] to the RFC series where I tried
>> total PCI locking rework. After discussing with Jan, it became clear for
>> me, that task is much harder, than I anticipated. So, it was decided to
>> move with a smaller steps. First step is to make vPCI code independed
>> from the global PCI lock. Actually, this is not the first try.
>> Oleksandr Andrushchenko tried to use r/w lock for this: [2]. But,
>> Jan suggested to use refcounting instead of r/w locks, and I liked the
>> idea. So, this is why you are seeing this patch series.
>
> Thanks, I've been on leave for long periods recently and I've missed
> some of the series.
>

Did you check this RFC series? I am not asking you to review it, I am
just curious about your opinion on the selected approach.

>>
>>
>> Roger Pau Monné <roger.pau@citrix.com> writes:
>>
>> > On Tue, Apr 11, 2023 at 11:41:04PM +0000, Volodymyr Babchuk wrote:
>> >>
>> >> Hi Roger,
>> >>
>> >> Roger Pau Monné <roger.pau@citrix.com> writes:
>> >>
>> >> > On Tue, Mar 14, 2023 at 08:56:29PM +0000, Volodymyr Babchuk wrote:
>> >> >> Prior to this change, lifetime of pci_dev objects was protected by global
>> >> >> pcidevs_lock(). Long-term plan is to remove this log, so we need some
>> >> >                                                     ^ lock
>> >> >
>> >> > I wouldn't say remove, as one way or another we need a lock to protect
>> >> > concurrent accesses.
>> >> >
>> >>
>> >> I'll write "replace this global lock with couple of more granular
>> >> locking devices"
>> >> if this is okay for you.
>> >>
>> >> >> other mechanism to ensure that those objects will not disappear under
>> >> >> feet of code that access them. Reference counting is a good choice as
>> >> >> it provides easy to comprehend way to control object lifetime.
>> >> >>
>> >> >> This patch adds two new helper functions: pcidev_get() and
>> >> >> pcidev_put(). pcidev_get() will increase reference counter, while
>> >> >> pcidev_put() will decrease it, destroying object when counter reaches
>> >> >> zero.
>> >> >>
>> >> >> pcidev_get() should be used only when you already have a valid pointer
>> >> >> to the object or you are holding lock that protects one of the
>> >> >> lists (domain, pseg or ats) that store pci_dev structs.
>> >> >>
>> >> >> pcidev_get() is rarely used directly, because there already are
>> >> >> functions that will provide valid pointer to pci_dev struct:
>> >> >> pci_get_pdev(), pci_get_real_pdev(). They will lock appropriate list,
>> >> >> find needed object and increase its reference counter before returning
>> >> >> to the caller.
>> >> >>
>> >> >> Naturally, pci_put() should be called after finishing working with a
>> >> >> received object. This is the reason why this patch have so many
>> >> >> pcidev_put()s and so little pcidev_get()s: existing calls to
>> >> >> pci_get_*() functions now will increase reference counter
>> >> >> automatically, we just need to decrease it back when we finished.
>> >> >
>> >> > After looking a bit into this, I would like to ask whether it's been
>> >> > considered the need to increase the refcount for each use of a pdev.
>> >> >
>> >>
>> >> This is how Linux uses reference locking. It decreases cognitive load
>> >> and chance for an error, as there is a simple set of rules, which you
>> >> follow.
>> >>
>> >> > For example I would consider the initial alloc_pdev() to take a
>> >> > refcount, and then pci_remove_device() _must_ be the function that
>> >> > removes the last refcount, so that it can return -EBUSY otherwise (see
>> >> > my comment below).
>> >>
>> >> I tend to disagree there, as this ruins the very idea of reference
>> >> counting. We can't know who else holds reference right now. Okay, we
>> >> might know, but this requires additional lock to serialize
>> >> accesses. Which, in turn, makes refcount un-needed.
>> >
>> > In principle pci_remove_device() must report whether the device is
>> > ready to be physically removed from the system, so it must return
>> > -EBUSY if there are still users accessing the device.
>> >
>> > A user would use PHYSDEVOP_manage_pci_remove to signal Xen it's trying
>> > to physically remove a PCI device from a system, so we must ensure
>> > that when the hypervisor returns success the device is ready to be
>> > physically removed.
>> >
>> > Or at least that's my understanding of how this should work.
>> >
>>
>> As I can see, this is not how it is implemented right
>> now. pci_remove_device() is not checking if device is not assigned to a
>> domain. Id does not check if there are still users accessing the
>> device. It just relies on a the global PCI lock to ensure that device is
>> removed in an orderly manner.
>
> Right, the expectation is that any path inside of the hypervisor using
> the device will hold the pcidevs lock, and thus bny holding it while
> removing we assert that no users (inside the hypervisor) are left.
>

May I propose a slightly relaxed assertion? "We assert that no users that
access the device are left". What I am trying to say there is that no one
will try to access, say, the device's config space. Because the device
already may be physically removed and any access to the device itself
will cause a fault. But there may be users that can access struct pdev
that corresponds to this device.

> I don't think we have been very consistent about the usage of the
> pcidevs lock, and hence most of this is likely broken.  Hopefully
> removing a PCI device from a system is a very uncommon operation.
>
>> My patch series has no intention to change this behavior. All what I
>> want to achieve - is to allow vpci code access struct pdev objects
>> without holding the global PCI lock.
>
> That's all fine, but we need to make sure it doesn't make things worse
> and what they currently are, and ideally it should make things easier.
>
> That's why I would like to understand exactly what's the purpose of
> the refcount, and how it should be used.  The usage of the refcount
> should be compatible with the intended behaviour of
> pci_remove_device(), regardless of whether the current implementation
> is not correct.  We don't want to be piling up more broken stuff on
> top of an already broken implementation.
>

I agree with you. I'll fix the issue with vPCI that you mentioned below
and prepare a more comprehensive commit description in the next version.

>> >> >
>> >> > That makes me wonder if for example callers of pci_get_pdev(d, sbdf)
>> >> > do need to take an extra refcount, because such access is already
>> >> > protected from the pdev going away by the fact that the device is
>> >> > assigned to a guest.  But maybe it's too much work to separate users
>> >> > of pci_get_pdev(d, ...); vs pci_get_pdev(NULL, ...);.
>> >> >
>> >> > There's also a window when the refcount is dropped to 0, and the
>> >> > destruction function is called, but at the same time a concurrent
>> >> > thread could attempt to take a reference to the pdev still?
>> >>
>> >> Last pcidev_put() would be called by pci_remove_device(), after removing
>> >> it from all lists. This should prevent other threads from obtaining a valid
>> >> reference to the pdev.
>> >
>> > What if a concurrent user has taken a reference to the object before
>> > pci_remove_device() has removed the device from the lists, and still
>> > holds it when pci_remove_device() performs the supposedly last
>> > pcidev_put() call?
>>
>> Well, let's consider VPCI code as this concurrent user, for
>> example. First, it will try to take vpci->lock. Depending on where in
>> pci_remov_device() there will be three cases:
>>
>> 1. Lock is taken before vpci_remove_device() takes the lock. In this
>> case vpci code works as always
>>
>> 2. It tries to take the lock when vpci_remove_device() is already locked
>> this. In this case we are falling to the next case:
>>
>> 3. Lock is taken after vpci_remove_device() had finished it's work. In this
>> case vPCI code sees that it was called for a device in an invalid state
>> and exits.
>
> For 2) and 3) you will hit a dereference, as the lock (vpci->lock)
> would have been freed by vpci_remove_device() while a concurrent user
> is waiting on pci_remov_device() to release the lock.
>
> I'm not sure how the user sees the device is in an invalid state,
> because it was waiting on a lock (vpci->lock) that has been removed
> under it's feet.
>
> This is an existing issue not made worse by the refcounting, but it's
> not a great example.
>

Yes, agree. I am going to move vpci->lock to the upper level
(pdev->vpci_lock) and rework vPCI code so it will gracefully handle
pdev->vpci == NULL.

>>
>> As you can see, there is no case where vPCI code is running on an device
>> which was removed.
>>
>> After vPCI code drops refcounter, pdev object will be freed once and for
>> all. Please node, that I am talking about pdev object there, not about
>> PCI device, because PCI device (as a high-level entity) was destroyed by
>> pci_remove_device(). refcount is needed just for the last clean-up
>> operations.
>
> Right, but pci_remove_device() will return success even when there are
> some users holding a refcount to the device, which is IMO undesirable.
>
> As I understand it the purpose of pci_remove_device() is that once it
> returns success the device can be physically removed from the system.
>

Yes, I totally agree with you. By saying "the device can be physically
removed from the system" we are asserting that no one will try to access
this device via the PCI bus. But this is not the same as "no one shall
access struct pdev fields as it should be freed immediately".

>> >
>> >> >
>> >> >>          sbdf.devfn &= ~stride;
>> >> >>          pdev = pci_get_pdev(NULL, sbdf);
>> >> >>          if ( pdev && stride != pdev->phantom_stride )
>> >> >> +        {
>> >> >> +            pcidev_put(pdev);
>> >> >>              pdev = NULL;
>> >> >> +        }
>> >> >>      }
>> >> >>
>> >> >>      return pdev;
>> >> >> @@ -548,13 +526,18 @@ struct pci_dev *pci_get_pdev(const struct domain *d, pci_sbdf_t sbdf)
>> >> >>          list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
>> >> >>              if ( pdev->sbdf.bdf == sbdf.bdf &&
>> >> >>                   (!d || pdev->domain == d) )
>> >> >> +            {
>> >> >> +                pcidev_get(pdev);
>> >> >>                  return pdev;
>> >> >> +            }
>> >> >>      }
>> >> >>      else
>> >> >>          list_for_each_entry ( pdev, &d->pdev_list, domain_list )
>> >> >>              if ( pdev->sbdf.bdf == sbdf.bdf )
>> >> >> +            {
>> >> >> +                pcidev_get(pdev);
>> >> >>                  return pdev;
>> >> >> -
>> >> >> +            }
>> >> >>      return NULL;
>> >> >>  }
>> >> >>
>> >> >> @@ -663,7 +646,10 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>> >> >>                              PCI_SBDF(seg, info->physfn.bus,
>> >> >>                                       info->physfn.devfn));
>> >> >>          if ( pdev )
>> >> >> +        {
>> >> >>              pf_is_extfn = pdev->info.is_extfn;
>> >> >> +            pcidev_put(pdev);
>> >> >> +        }
>> >> >>          pcidevs_unlock();
>> >> >>          if ( !pdev )
>> >> >>              pci_add_device(seg, info->physfn.bus, info->physfn.devfn,
>> >> >> @@ -818,7 +804,9 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>> >> >>              if ( pdev->domain )
>> >> >>                  list_del(&pdev->domain_list);
>> >> >>              printk(XENLOG_DEBUG "PCI remove device %pp\n", &pdev->sbdf);
>> >> >> -            free_pdev(pseg, pdev);
>> >> >> +            list_del(&pdev->alldevs_list);
>> >> >> +            pdev_msi_deinit(pdev);
>> >> >> +            pcidev_put(pdev);
>> >> >
>> >> > Hm, I think here we want to make sure that the device has been freed,
>> >> > or else you would have to return -EBUSY to the calls to notify that
>> >> > the device is still in use.
>> >>
>> >> Why? As I can see, pdev object is still may potentially be accessed by
>> >> some other CPU right now. So pdev object will be freed after last
>> >> reference is dropped. As it is already removed from all the lists,
>> >> pci_dev_get() will not find it anymore.
>> >>
>> >> Actually, I can't see how this can happen in reality, as VPCI, MSI and
>> >> IOMMU are already deactivated for this device. So, no one would touch it.
>> >
>> > Wouldn't it be possible for a concurrent user to hold a reference from
>> > befoe the device has been 'deactivated'?
>> >
>>
>> Yes, it can hold a reference. This is why we need additional locking to
>> ensure that, say, pci_cleanup_msi() does not races with rest of the MSI
>> code. Right now this is ensured by then global PCI lock.
>>
>> >> >
>> >> > I think we need an extra pcidev_put_final() or similar that can be
>> >> > used in pci_remove_device() to assert that the device has been
>> >> > actually removed.
>> >>
>> >> Will something break if we don't do this? I can't see how this can
>> >> happen.
>> >
>> > As mentioned above, once pci_remove_device() returns 0 the admin
>> > should be capable of physically removing the device from the system.
>> >
>>
>> This patch series does not alter this requirement. Admin is still
>> capable of physically removing the device from the system. After
>> successful call to the pci_remove_device()
>
> Indeed, but there might be users in the hypervisor still holding a
> reference to the pdev.
>

Reference counting alone can't protect you from this
situation. Additional locking is required in this case. And right now we
have the global PCI lock that protects us. Actually, almost all the code
takes and drops references while holding the global PCI lock. The only
exception, as far as I know, is the vPCI code, which I am going to fix
in the next version.

Also, I'll double-check that only vPCI code obtains references while not
holding the global lock. My reasoning is the following:

1. Right now (i.e. on the staging branch) all accesses to pdevs are
in a consistent state. This basically means that all code that accesses
pdevs does so while holding an appropriate lock - the global PCI lock,
in most cases. This means the following: a pdev can't disappear under our
feet, no one is racing with us while accessing the pdev, and no new pdev
can be created while we are holding the global PCI lock.

2. Adding reference counting alone changes nothing in this
regard. Actually, PCI code will needlessly increase/decrease an atomic
while holding the global lock.

3. As all work with PCI devices is done while holding the lock, we can
assert that the reference count at the beginning of a critical section
will be equal to the reference count at the end of a critical section,
because my patch adds a _put for every _get all across the hypervisor,
with a few notable exceptions:

3.1. pci_add_device() will initialize a device and set reference count
to 1

3.2. pci_remove_device() will de-initialize a device and decrease
reference count by 1. I can assert that if p.1 is true and I didn't
mess up balancing _gets/_puts in other parts of the code, then
pci_remove_device() will always remove the last reference. This may (and
will) change in the future.

3.3. MSI code holds long-term pointers to pdev, so
msi[x]_capability_init() does an additional _get() and then
`msi_free_irq()` does the corresponding _put(). Luckily for us,
pci_remove_device() calls pci_cleanup_msi() so we can be sure that it
does not break the assertion in p.3.2

4. Now, we want vPCI code to be able to access PCI devices without
holding the global PCI lock the whole time. This is where we can
leverage reference counting. Here are the assertions:

4.1. vPCI code gets a pdev pointer only via the pci_get_pdev() function,
which reads from a list while holding the global PCI lock. That means that
pci_get_pdev() will return NULL after pci_remove_device() deletes the
device from all lists. Also, that means that vPCI code can't get a pdev
while pci_remove_device() is running, because pci_remove_device() is
holding the global PCI lock.

4.2. vPCI code will always acquire pdev->vpci_lock before accessing
pdev->vpci

4.3. pci_remove_device() will de-init vpci state while holding
pdev->vpci_lock

4.4. vPCI code will not try to access the PCI device if pdev->vpci == NULL

4.5. vPCI code will access only vpci-related fields in struct pdev

4.6. vPCI does not depend on and does not alter non-vPCI-related state of
a PCI device. This is the most tricky part, because most of the remaining
state is protected by the global PCI lock, which we are not holding. That
means that we need to disable vPCI while re-assigning the PCI device to
another domain. As I can see, this is the only place where vPCI depends
on broader PCI device state.

This approach will not interfere with pci_remove_device() obligations,
because we can be sure that right now vPCI is the only user that can
hold the reference counter past the pci_remove_device() call and that
vPCI code will not attempt to access the PCI device after that point,
thus allowing the admin to physically remove the device.

In the future, we can gradually remove other parts of the PCI code from
under the global PCI lock, provided we can give the same guarantees as
in p. 4.1-4.6

>> >> >> -static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>> >> >> +static int assign_device(struct domain *d, struct pci_dev *pdev, u32 flag)
>> >> >>  {
>> >> >>      const struct domain_iommu *hd = dom_iommu(d);
>> >> >> -    struct pci_dev *pdev;
>> >> >> +    uint8_t devfn;
>> >> >>      int rc = 0;
>> >> >>
>> >> >>      if ( !is_iommu_enabled(d) )
>> >> >> @@ -1422,10 +1412,11 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>> >> >>
>> >> >>      /* device_assigned() should already have cleared the device for assignment */
>> >> >>      ASSERT(pcidevs_locked());
>> >> >> -    pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
>> >> >>      ASSERT(pdev && (pdev->domain == hardware_domain ||
>> >> >>                      pdev->domain == dom_io));
>> >> >>
>> >> >> +    devfn = pdev->devfn;
>> >> >> +
>> >> >>      /* Do not allow broken devices to be assigned to guests. */
>> >> >>      rc = -EBADF;
>> >> >>      if ( pdev->broken && d != hardware_domain && d != dom_io )
>> >> >> @@ -1460,7 +1451,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>> >> >>   done:
>> >> >>      if ( rc )
>> >> >>          printk(XENLOG_G_WARNING "%pd: assign (%pp) failed (%d)\n",
>> >> >> -               d, &PCI_SBDF(seg, bus, devfn), rc);
>> >> >> +               d, &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
>> >> >>      /* The device is assigned to dom_io so mark it as quarantined */
>> >> >>      else if ( d == dom_io )
>> >> >>          pdev->quarantine = true;
>> >> >> @@ -1595,6 +1586,9 @@ int iommu_do_pci_domctl(
>> >> >>          ASSERT(d);
>> >> >>          /* fall through */
>> >> >>      case XEN_DOMCTL_test_assign_device:
>> >> >> +    {
>> >> >> +        struct pci_dev *pdev;
>> >> >> +
>> >> >>          /* Don't support self-assignment of devices. */
>> >> >>          if ( d == current->domain )
>> >> >>          {
>> >> >> @@ -1622,26 +1616,36 @@ int iommu_do_pci_domctl(
>> >> >>          seg = machine_sbdf >> 16;
>> >> >>          bus = PCI_BUS(machine_sbdf);
>> >> >>          devfn = PCI_DEVFN(machine_sbdf);
>> >> >> -
>> >> >>          pcidevs_lock();
>> >> >> -        ret = device_assigned(seg, bus, devfn);
>> >> >> +        pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
>> >> >> +        if ( !pdev )
>> >> >> +        {
>> >> >> +            printk(XENLOG_G_INFO "%pp non-existent\n",
>> >> >> +                   &PCI_SBDF(seg, bus, devfn));
>> >> >> +            ret = -EINVAL;
>> >> >> +            break;
>> >> >> +        }
>> >> >> +
>> >> >> +        ret = device_assigned(pdev);
>> >> >>          if ( domctl->cmd == XEN_DOMCTL_test_assign_device )
>> >> >>          {
>> >> >>              if ( ret )
>> >> >>              {
>> >> >> -                printk(XENLOG_G_INFO "%pp already assigned, or non-existent\n",
>> >> >> +                printk(XENLOG_G_INFO "%pp already assigned\n",
>> >> >>                         &PCI_SBDF(seg, bus, devfn));
>> >> >>                  ret = -EINVAL;
>> >> >>              }
>> >> >>          }
>> >> >>          else if ( !ret )
>> >> >> -            ret = assign_device(d, seg, bus, devfn, flags);
>> >> >> +            ret = assign_device(d, pdev, flags);
>> >> >> +
>> >> >> +        pcidev_put(pdev);
>> >> >
>> >> > I would think you need to keep the refcount here if ret == 0, so that
>> >> > the device cannot be removed while assigned to a domain?
>> >>
>> >> Looks like we are perceiving function of refcnt in a different
>> >> ways. For me, this is the mechanism to guarantee that if we have a valid
>> >> pointer to an object, this object will not disappear under our
>> >> feet. This is the main function of krefs in the linux kernel: if your
>> >> code holds a reference to an object, you can be sure that this object is
>> >> exists in memory.
>> >>
>> >> On other hand, it seems that you are considering this refcnt as an usage
>> >> counter for an actual PCI device, not "struct pdev" that represent
>> >> it. Those are two related things, but not the same. So, I can see why
>> >> you are suggesting to get additional reference there. But for me, this
>> >> looks unnecessary: the very first refcount is obtained in
>> >> pci_add_device() and there is the corresponding function
>> >> pci_remove_device() that will drop this refcount. So, for me, if admin
>> >> wants to remove a PCI device which is assigned to a domain, they can do
>> >> this as they were able to do this prior this patches.
>> >
>> > This is all fine, but needs to be stated in the commit message.
>> >
>>
>> Sure, I will add this.
>>
>> >> The main value of introducing refcnt is to be able to access pdev objects
>> >> without holding the global pcidevs_lock(). This does not mean that you
>> >> don't need locking at all. But this allows you to use pdev->lock (which
>> >> does not exists in this series, but was introduced in a RFC earlier), or
>> >> vpci->lock, or any other subsystem->lock.
>> >
>> > I guess I was missing this other bit about introducing a
>> > per
LWRldmljZSBsb2NrLCB3b3VsZCBpdCBiZSBwb3NzaWJsZSB0byBidW5kbGUgYWxsIHRoaXMgdG9n
ZXRoZXIgaW50bw0KPj4gPiBhIHNpbmdsZSBwYXRjaCBzZXJpZXM/DQo+PiANCj4+IEFzIEkgc2Fp
ZCBhdCB0aGUgdG9wIG9mIHRoaXMgZW1haWwsIGl0IHdhcyB0cmllZC4gWW91IGNhbiBjaGVjayBS
RkMgYXQgWzFdLg0KPj4gDQo+PiA+DQo+PiA+IEl0IHdvdWxkIGJlIGdvb2QgdG8gcGxhY2UgdGhp
cyBjaGFuZ2UgdG9nZXRoZXIgd2l0aCBhbnkgb3RoZXIgbG9ja2luZw0KPj4gPiByZWxhdGVkIGNo
YW5nZSB0aGF0IHlvdSBoYXZlIHBlbmRpbmcuDQo+PiANCj4+IEhvbmVzdGx5LCBteSBtYWluIGdv
YWwgaXMgdG8gZml4IHRoZSBjdXJyZW50IGlzc3VlcyB3aXRoIHZQQ0ksIHNvIEFSTQ0KPj4gY2Fu
IG1vdmUgZm9yd2FyZCBvbiBhZGRpbmcgUENJIHN1cHBvcnQgZm9yIHRoZSBwbGF0Zm9ybS4gU28s
IEkgYW0NCj4+IGZvY3VzaW5nIG9uIHRoaXMgcmlnaHQgbm93Lg0KPg0KPiBUaGFua3MsIHdlIG5l
ZWQgdG8gYmUgY2FyZWZ1bCBob3dldmVyIGFzIHRvIG5vdCBhY2N1bXVsYXRlIG1vcmUNCj4gYmFu
ZGFpZHMgb24gdG9wIGp1c3QgdG8gd29ya2Fyb3VuZCB0aGUgZmFjdCB0aGF0IHRoZSBsb2NraW5n
IHdlIGhhdmUNCj4gcmVnYXJkaW5nIHRoZSBwY2kgZGV2aWNlcyBpcyBub3Qgc3VpdGFibGUuDQo+
DQo+IEkgdGhpbmsgaXQncyBpbXBvcnRhbnQgdG8ga2VlcCBhbGwgdGhlIHVzYWdlcyBvZiB0aGUg
cGNpX2RldiBzdHJ1Y3QgaW4NCj4gbWluZCB3aGVuIGRlc2lnbmluZyBhIHNvbHV0aW9uLg0KPg0K
PiBPdmVyYWxsIGl0IHNlZW1zIGxpa2UgbWlnaHQgaGVscCB2UENJIG9uIEFybSwgSSB0aGluayB0
aGUgb25seSBtYWpvcg0KPiByZXF1ZXN0IEkgaGF2ZSBpcyB0aGUgb25lIHJlbGF0ZWQgdG8gcGNp
X3JlbW92ZV9kZXZpY2UoKSBvbmx5DQo+IHJldHVybmluZyBzdWNjZXNzIHdoZW4gdGhlcmUgYXJl
IG5vdCByZWZjb3VudHMgbGVmdC4NCg0KQWJvdmUgSSBoYXZlIHByb3Bvc2VkIGFub3RoZXIgdmll
dyBvbiB0aGlzLiBJIGhvcGUsIGl0IHdpbGwgd29yayBmb3INCnlvdS4gSnVzdCB0byByZWl0ZXJh
dGUsIGlkZWEgaXMgdG8gYWxsb3cgImhhcm1sZXNzIiByZWZjb3VudHMgdG8gYmUgbGVmdA0KYWZ0
ZXIgcmV0dXJuaW5nIGZyb20gcGNpX3JlbW92ZV9kZXZpY2UoKS4gQnkgImhhcm1sZXNzIiBJIG1l
YW4gdGhhdA0Kb3duZXJzIG9mIHRob3NlIHJlZmNvdW50cyB3aWxsIG5vdCB0cnkgdG8gYWNjZXNz
IHRoZSBwaHlzaWNhbCBQQ0kNCmRldmljZSBpZiBwY2lfcmVtb3ZlX2RldmljZSgpIGlzIGFscmVh
ZHkgZmluaXNoZWQuDQoNCi0tIA0KV0JSLCBWb2xvZHlteXINCg==


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 01:51:29 2023
From: Henry Wang <Henry.Wang@arm.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [XEN][PATCH v5 11/17] asm/smp.h: Fix circular dependency for
 device_tree.h and rwlock.h
Date: Fri, 14 Apr 2023 01:50:45 +0000
Message-ID:
 <AS8PR08MB79918B9918132341F547A22092999@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-12-vikram.garhwal@amd.com>
In-Reply-To: <20230411191636.26926-12-vikram.garhwal@amd.com>

Hi Vikram,

> -----Original Message-----
> Subject: [XEN][PATCH v5 11/17] asm/smp.h: Fix circular dependency for
> device_tree.h and rwlock.h
>
> Dynamic programming ops will modify the dt_host, and there might be other
> functions browsing the dt_host at the same time. To avoid the race
> conditions, add an rwlock for browsing the dt_host. But adding rwlock.h to
> device_tree.h causes the following circular dependency:
>     device_tree.h->rwlock.h->smp.h->asm/smp.h->device_tree.h
>
> To fix this, remove the "#include <xen/device_tree.h>" and forward declare
> "struct dt_device_node".
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 02:09:53 2023
From: Henry Wang <Henry.Wang@arm.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>
Subject: RE: [XEN][PATCH v5 12/17] common/device_tree: Add rwlock for dt_host
Date: Fri, 14 Apr 2023 02:09:30 +0000
Message-ID:
 <AS8PR08MB7991D4C1352B785D505AE63892999@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-13-vikram.garhwal@amd.com>
In-Reply-To: <20230411191636.26926-13-vikram.garhwal@amd.com>

Hi Vikram,

> -----Original Message-----
> Subject: [XEN][PATCH v5 12/17] common/device_tree: Add rwlock for dt_host
>
>  Dynamic programming ops will modify the dt_host, and there might be other
>  functions browsing the dt_host at the same time. To avoid the race
>  conditions, add an rwlock for browsing the dt_host during runtime.

For clarity, could you please add a little bit more detail to explain why
you chose an rwlock instead of a normal spinlock?

>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/common/device_tree.c              |  3 +++
>  xen/drivers/passthrough/device_tree.c | 39 +++++++++++++++++++++++++++
>  xen/include/xen/device_tree.h         |  6 +++++
>  3 files changed, 48 insertions(+)
>
>          if ( ret )
> +        {
>              printk(XENLOG_G_ERR "XEN_DOMCTL_assign_dt_device: assign \"%s\""
>                     " to dom%u failed (%d)\n",
>                     dt_node_full_name(dev), d->domain_id, ret);
> +        }

I am not sure if it is necessary to add "{" and "}" here.

> +
> +        read_unlock(&dt_host->lock);
>          break;
>
>      case XEN_DOMCTL_deassign_device:
> @@ -322,25 +345,41 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
>          if ( domctl->u.assign_device.flags )
>              break;
>
> +        read_lock(&dt_host->lock);
> +
>          ret = dt_find_node_by_gpath(domctl->u.assign_device.u.dt.path,
>                                      domctl->u.assign_device.u.dt.size,
>                                      &dev);
>          if ( ret )
> +        {
> +            read_unlock(&dt_host->lock);

I think instead of adding "read_unlock" in every break and return path,
you can...

>              break;
> +        }
>
>          ret = xsm_deassign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
> +
>          if ( ret )
> +        {
> +            read_unlock(&dt_host->lock);
>              break;
> +        }
>
>          if ( d == dom_io )
> +        {
> +            read_unlock(&dt_host->lock);
>              return -EINVAL;

...do something like:

    ret = -EINVAL;
    break;

here, and then add one single "read_unlock" before the "return ret;"
at the bottom of the function?

> +        }
>
>          ret = iommu_deassign_dt_device(d, dev);
>
>          if ( ret )
> +        {
>              printk(XENLOG_G_ERR "XEN_DOMCTL_assign_dt_device: assign \"%s\""
>                     " to dom%u failed (%d)\n",
>                     dt_node_full_name(dev), d->domain_id, ret);
> +        }

Same here. I am not sure if it is necessary to add "{" and "}".

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 05:32:04 2023
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bAHrD2MEs+7v7xaF2/2Gh2LF4X4fcRiqsRMrtFx5RcE=; b=Il6OyxLuL4ZsRTdjiGViKuDp/E
	NZSTQmfge0GYuV6HhEfcCZDNLmWfigvpyC4AJ44iouosjPpFjwQSTv0DkKBUvdweYVMCX9yztoiFW
	dlkm4IoYX/3IFgKGO98ZY7emXnW2gntZ07nDsFLEOAqjYE8BHpUqcrg5ITDiOceVF3xM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180238-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180238: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:build-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    xen-unstable:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=f872a624cbf92de9944483eea7674ef80ced1380
X-Osstest-Versions-That:
    xen=5ea03c570c8610d4359f8bbf5f093d215344ce3f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Apr 2023 05:31:48 +0000

flight 180238 xen-unstable real [real]
flight 180254 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180238/
http://logs.test-lab.xenproject.org/osstest/logs/180254/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180254-retest
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail pass in 180254-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180211
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180225
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180225
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180225
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180225
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180225
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180225
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180225
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180225
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180225
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  f872a624cbf92de9944483eea7674ef80ced1380
baseline version:
 xen                  5ea03c570c8610d4359f8bbf5f093d215344ce3f

Last test of basis   180225  2023-04-13 01:53:23 Z    1 days
Testing same since   180238  2023-04-13 14:38:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5ea03c570c..f872a624cb  f872a624cbf92de9944483eea7674ef80ced1380 -> master


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 07:02:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 07:02:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521027.809270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnDRv-00057d-8O; Fri, 14 Apr 2023 07:02:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521027.809270; Fri, 14 Apr 2023 07:02:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnDRv-00057W-5a; Fri, 14 Apr 2023 07:02:23 +0000
Received: by outflank-mailman (input) for mailman id 521027;
 Fri, 14 Apr 2023 07:02:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZM4c=AF=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pnDRt-00057O-Rz
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 07:02:22 +0000
Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com
 [2a00:1450:4864:20::42c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4ac5cdcd-da92-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 09:02:17 +0200 (CEST)
Received: by mail-wr1-x42c.google.com with SMTP id v6so16443537wrv.8
 for <xen-devel@lists.xenproject.org>; Fri, 14 Apr 2023 00:02:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ac5cdcd-da92-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681455737; x=1684047737;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=d7E/ybxLrLBiZp41oOMePR1HXiUvoIwxiKTUvhbUPiY=;
        b=S6kxYek8CginYMAQIc8kiF5qiwsWyuU6g8bQbgntZfAVbGipZMYTJTCOe3lqR9sNJU
         PCAyVzLmD6yUGzlPDRa7hiUiEklE6aKUfvEWV2g9pxjY/2ynO4kvqOW00odhEA/gezf+
         uj6iZb5UOXruIEiGj0+Ojkrezxq4yvU09ioQUa2sjWfRunYPrrAsbNmaYFOXdez2n8n5
         zKwILnL1Grllfkqi9Qa9sEGd42MZ8f6uCQiTWPXYyTMeEtHSpucQvxaprz88ei6E9C2u
         as7E/LoSZkbL9IGbe8RZt1YFcaUSG7T/B880tJZ7IpOBf+4vNsT9G8FijnjVQMXTboP6
         72pA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681455737; x=1684047737;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=d7E/ybxLrLBiZp41oOMePR1HXiUvoIwxiKTUvhbUPiY=;
        b=EVkqX7qwTQgdMeHBsgYsXK7i4p64h85+rR5/8ylZhxZWIIvIivzaByia2FXYGK1ySD
         N8XyygAJYdwFF0zzlj2Xrado/O9EfhgZO4XeudL8+seOBhE14Snw14WXRlnU/1eXyzNt
         kQRAAcFciakv0SRuKYTiyRj8NZEfm/QlNumeRlZ/TLgExcofuBar7m1hbvDNNMqWNgAc
         hAueqyYthyeMNQaxqHaymt0BR5Eu0+NOIWiU9kvceuw24xz84gjZtHsWUaCTQhic947Q
         f43oSMeKHRzglJ3WJF/XGn7g17hbYG9DT6PCH48BuE7z2r7g22y/TqloSyECp4jSUnUn
         u0mg==
X-Gm-Message-State: AAQBX9chwQxJ6aT6at8Pn0SVWGmhC8zcvL+KvezBnuAMi8LQ3e89PdP6
	xrGsLYGoVBQS2s6igmc90H41e50+IU20QPJ8atyNzA==
X-Google-Smtp-Source: AKy350b/2rLk8o88p4VUbyle6JhVGe9hR9d819uyBdgV73sk/uZgX0GFbxXyCEIMzWz7Cw3/rrBl4ZxfoRtYYcDxmUE=
X-Received: by 2002:adf:cf05:0:b0:2f6:ddbf:5bce with SMTP id
 o5-20020adfcf05000000b002f6ddbf5bcemr331156wrj.3.1681455736843; Fri, 14 Apr
 2023 00:02:16 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-17-jens.wiklander@linaro.org> <AS8PR08MB79910752526D506B3BA881EA92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB79910752526D506B3BA881EA92989@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Fri, 14 Apr 2023 09:02:06 +0200
Message-ID: <CAHUa44FsuN0rcbwD2ei=8MochN-9zq=i1+qiLukZNCbwpV6o-g@mail.gmail.com>
Subject: Re: [XEN PATCH v8 16/22] xen/arm: ffa: add ABI structs for sharing memory
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Henry,

On Thu, Apr 13, 2023 at 1:49 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > Subject: [XEN PATCH v8 16/22] xen/arm: ffa: add ABI structs for sharing
> > memory
> >
> > Adds the ABI structs used by function FFA_MEM_SHARE and friends for
> > sharing memory.
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

I'll add that.

Thanks,
Jens

>
> Kind regards,
> Henry


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 07:03:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 07:03:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521031.809280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnDTH-0005dV-J9; Fri, 14 Apr 2023 07:03:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521031.809280; Fri, 14 Apr 2023 07:03:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnDTH-0005dO-G8; Fri, 14 Apr 2023 07:03:47 +0000
Received: by outflank-mailman (input) for mailman id 521031;
 Fri, 14 Apr 2023 07:03:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZM4c=AF=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pnDTG-0005dI-GM
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 07:03:46 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7eb28b0a-da92-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 09:03:44 +0200 (CEST)
Received: by mail-wm1-x32d.google.com with SMTP id v10so3166817wmn.5
 for <xen-devel@lists.xenproject.org>; Fri, 14 Apr 2023 00:03:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7eb28b0a-da92-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681455824; x=1684047824;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tGMx0rmfx67kzWt9O29eEw74QbvsMz8TesWbQZyGcC8=;
        b=A2xCGLMkFPbx/h9fqDpYraPYo3ESuXtY3oggNU2cCMvTPFP3avZx17oydoAS3SfmWI
         NkE3aYXm/bAqF68ftQbw/UBSrGtsVs5A4ZsjHi+XJlxtxLYusguKrtKTBgIM21cro8UX
         OR0ehdHAbCiUjcB2KAWEwQedPBEgfFNyAL3a3ZupkEVQHXUjvgHqNTgvP3PnnqM3osoP
         2uSIJY8c4oBbt4zphAFlVJV733jeMiEnn1txsiPoxHmg5JHl9foXC8w7LeYN9XTcDZKX
         U4MLFRyS9+nvoImS6IwXXvDsDoFPGvAmNV5aZQOgLhF3lg+Rn3Be9HTEPaNOTpwY1TyM
         w1XA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681455824; x=1684047824;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=tGMx0rmfx67kzWt9O29eEw74QbvsMz8TesWbQZyGcC8=;
        b=NBixKFAZa/0UHPuMxZklXXLZgQIwy+guMfQN6PNHS1BiGeCoqi2A7/0pHSfVJGDIfH
         LCsUsAQburG8G/0FWxNfwbMPCyZCrIz6T3ZTkht2EGboBZHP+wVRnwXAECjnqC0WevDk
         nEOxgF6NqD9+TSdsklOkI5TDO4aoc14U3zWxHFNsPe2anKO2UgCxsND96WnKUR89OyWW
         sBbqIZ03UZlzDs370hNu6zwvKTubJycQUxrOGcmhel86tyWbyhbD0aQPxH7d6ZXeCCQY
         Zl19M8bYzsy0MHHf0zCAasuEazEhPA0eA5c0xA0PXH++kLgJqNX6TTN3Pdp6Q0Y0FGFX
         exBA==
X-Gm-Message-State: AAQBX9flPbJxBmBWJTtYZs5lWR6Ha8fLZi05Sw5yswhlAWit14WJBuud
	ArsL769enVHv+2rJBb3QiGh0n80HQvmFwlujN0RPEg==
X-Google-Smtp-Source: AKy350ZoyKfVIgsqyF/4nTESSIivT20YSwuFJPYTkxQN1FJIwTSEP+bFEv/+UpBMZ9k2LpwG6DrAG7GbzAc3spqlToE=
X-Received: by 2002:a7b:c4d0:0:b0:3f0:b15c:e6b8 with SMTP id
 g16-20020a7bc4d0000000b003f0b15ce6b8mr237961wmk.4.1681455823956; Fri, 14 Apr
 2023 00:03:43 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-4-jens.wiklander@linaro.org> <AS8PR08MB7991029EED281DD96108D4C292989@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB7991029EED281DD96108D4C292989@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Fri, 14 Apr 2023 09:03:33 +0200
Message-ID: <CAHUa44Hc054rLkFrvonK6X+AReonh2B_TaWwOJDK2TR5G2Jgcg@mail.gmail.com>
Subject: Re: [XEN PATCH v8 03/22] tools: add Arm FF-A mediator
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Wei Liu <wl@xen.org>, 
	Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Henry,

On Thu, Apr 13, 2023 at 1:53 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > Subject: [XEN PATCH v8 03/22] tools: add Arm FF-A mediator
> >
> > Adds a new "ffa" value to the Enumeration "tee_type" to indicate if a
> > guest is trusted to use FF-A.
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

I'll add that.

Thanks,
Jens

>
> Kind regards,
> Henry


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 08:20:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 08:20:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521037.809290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnEf4-00057g-CT; Fri, 14 Apr 2023 08:20:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521037.809290; Fri, 14 Apr 2023 08:20:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnEf4-00056v-8o; Fri, 14 Apr 2023 08:20:02 +0000
Received: by outflank-mailman (input) for mailman id 521037;
 Fri, 14 Apr 2023 08:20:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZM4c=AF=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pnEf3-0004z5-6Q
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 08:20:01 +0000
Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com
 [2a00:1450:4864:20::431])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2540f0c4-da9d-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 10:19:59 +0200 (CEST)
Received: by mail-wr1-x431.google.com with SMTP id l18so16577613wrb.9
 for <xen-devel@lists.xenproject.org>; Fri, 14 Apr 2023 01:19:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2540f0c4-da9d-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681460399; x=1684052399;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=PFpxARKtWgnRpMlcvrYFS8qiTmLjgx1JA4OmnpxuvH0=;
        b=i1mrDdZ2WYg3txNAEHG2ymhXtYk+Lkrdbgr29/1uLP+RlIPXAVE1f5+6f4pVZSeoPv
         QqLV0bq5fqSLfOWGHFiPV2aDFsNtM7X1aDAKVdN/9Zys0DM3JUEz20mKkDw1efDqGIMy
         7r0zYt35/5c5uh5T9RsNtPSsdKA/3bVmHEuxGVdp12QnqqrLJ4HtyA3V5rIcSI6Uw80k
         MfVI5C8HzqZTxoBM69TJREJ5VHbqU1rdmUhr6tuBf1sZK2rPnf8fvXcB5nRqmyPmW3ts
         F85p9gDc+v7OuzJPtqCDXlMZuD4vFjzWzGpR3jUPhHKny/XPZM4xPZFOfoZf4p70nhhs
         Uo1A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681460399; x=1684052399;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=PFpxARKtWgnRpMlcvrYFS8qiTmLjgx1JA4OmnpxuvH0=;
        b=j5ilrHbStK23TPQJA5/4Rfqi5fJjfTLIDmTFDLjSJGzHaOTxG1uGUM/SSE2wp/NoRF
         YrH5fO0VkKDyYnN2JbvoFRoqdzIRU842xiJTHR6rN9vj20txnLCeRjE5kG9lRnVvpSdu
         6XtI8HMq+6V/IZ6lzSXjc+I18cQAXqY3gF8BrutkMePYuB6SB7WS3i7ZGIKwRZXuLMtn
         bw6tdMuLXJywbIJbXTtSEuVfS9eH4FPX/1l4OA0hIgzV1/sYZ+a0iMWy5fqlesKXgaUs
         lVcXoOFXcp2Z0eI7yOu+DqdX4l3MelhccAI4S+5XOdcJue0CusVyAindDDdx3cDBxaXb
         R4dQ==
X-Gm-Message-State: AAQBX9cUSQA5WeIZPX5vas9Kl5OWHiDQx/1HXi/jLMTAaLZ5yzLhlPv5
	kGqHWKitxZZbNnQFvUdq7I6tMd/0OZ71rl21NnUmMw==
X-Google-Smtp-Source: AKy350bilVUh/+x6GI8LJs/DLGS/LheJiFQC0zsKEuCnmKW1U2zt3b1JRIHKo8NndQZkyyldYZbAsSVDi49Jjok7GmY=
X-Received: by 2002:adf:e48f:0:b0:2f5:a50c:167d with SMTP id
 i15-20020adfe48f000000b002f5a50c167dmr867559wrm.3.1681460398710; Fri, 14 Apr
 2023 01:19:58 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-12-jens.wiklander@linaro.org> <d354fee8-4d02-fe5f-1ff1-15f96efeb13f@xen.org>
In-Reply-To: <d354fee8-4d02-fe5f-1ff1-15f96efeb13f@xen.org>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Fri, 14 Apr 2023 10:19:47 +0200
Message-ID: <CAHUa44FM5yQ+e=ruPhTxFttGTE1HQvruX-7XAiqVnW4b-mQgcw@mail.gmail.com>
Subject: Re: [XEN PATCH v8 11/22] xen/arm: ffa: send guest events to Secure Partitions
To: Julien Grall <julien@xen.org>, bertrand.marquis@arm.com, 
	Marc Bonnici <marc.bonnici@arm.com>, Achin Gupta <achin.gupta@arm.com>
Cc: xen-devel@lists.xenproject.org, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi,

On Thu, Apr 13, 2023 at 3:24 PM Julien Grall <julien@xen.org> wrote:
>
> Hi,
>
> On 13/04/2023 08:14, Jens Wiklander wrote:
> > +static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
> > +                                      uint8_t msg)
> > +{
> > +    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
> > +    int32_t res;
> > +
> > +    if ( msg == FFA_MSG_SEND_VM_CREATED )
> > +        exp_resp |= FFA_MSG_RESP_VM_CREATED;
> > +    else if ( msg == FFA_MSG_SEND_VM_DESTROYED )
> > +        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
> > +    else
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +
> > +    do {
> > +        const struct arm_smccc_1_2_regs arg = {
> > +            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
> > +            .a1 = sp_id,
> > +            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
> > +            .a5 = vm_id,
> > +        };
> > +        struct arm_smccc_1_2_regs resp;
> > +
> > +        arm_smccc_1_2_smc(&arg, &resp);
> > +        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp )
> > +        {
> > +            /*
> > +             * This is an invalid response, likely due to some error in the
> > +             * implementation of the ABI.
> > +             */
> > +            return FFA_RET_INVALID_PARAMETERS;
> > +        }
> > +        res = resp.a3;
> > +    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
>
> This loop seems potentially unbounded to me. Can you add a comment
> explaining why this is fine?

In the FF-A 1.1 specification
(https://developer.arm.com/documentation/den0077/e/?lang=en) Table
18.26 at page 330 it says that FFA_RET_INTERRUPTED and FFA_RET_RETRY
should be handled in this way. When looking at this from the
hypervisor's point of view it is troublesome since there isn't any
guarantee that we're progressing.

We should be able to rule out FFA_RET_INTERRUPTED since non-secure
interrupts should be masked at this point. I'm not sure if
FFA_RET_RETRY can be avoided entirely, but we should be able to put a
limit on how many times we're prepared to retry.

How about setting a limit of max 10 retries and treating
FFA_RET_INTERRUPTED as an error? Or is it better to not loop at all
and treat all but FFA_RET_OK as errors? What do others think?

Thanks,
Jens

>
> > +
> > +    return res;
> > +}
> > +
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 08:29:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 08:29:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521041.809300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnEoE-0006T0-7q; Fri, 14 Apr 2023 08:29:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521041.809300; Fri, 14 Apr 2023 08:29:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnEoE-0006St-56; Fri, 14 Apr 2023 08:29:30 +0000
Received: by outflank-mailman (input) for mailman id 521041;
 Fri, 14 Apr 2023 08:29:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tlWx=AF=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pnEoC-0006Sn-RU
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 08:29:29 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on060d.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::60d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 779515b0-da9e-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 10:29:26 +0200 (CEST)
Received: from AS8P251CA0021.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:2f2::18)
 by DB9PR08MB6747.eurprd08.prod.outlook.com (2603:10a6:10:26e::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 08:29:24 +0000
Received: from AM7EUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2f2:cafe::d5) by AS8P251CA0021.outlook.office365.com
 (2603:10a6:20b:2f2::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.33 via Frontend
 Transport; Fri, 14 Apr 2023 08:29:24 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT063.mail.protection.outlook.com (100.127.140.221) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.10 via Frontend Transport; Fri, 14 Apr 2023 08:29:24 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Fri, 14 Apr 2023 08:29:23 +0000
Received: from 637f994ace9a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 FB9D403E-7790-49E2-BBC3-657557A7E860.1; 
 Fri, 14 Apr 2023 08:29:16 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 637f994ace9a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 14 Apr 2023 08:29:16 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DU0PR08MB10366.eurprd08.prod.outlook.com (2603:10a6:10:40a::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 08:29:09 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.030; Fri, 14 Apr 2023
 08:29:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 779515b0-da9e-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u5KpCr7vPoauzdSPAbufVUBQVHs8SNd1pw2IL59R4QU=;
 b=FAozWIco6QGXzWntBUnO4WKiBWOgwJwA4MWYhHb4W8P71Abxh8rbKTL6YT4LSMbx3OB92VFjsiXGW9sUkSk0ZPKiWdkcgvGFn/ju9J4/FLxXjzEeAvI3GdBvu2FPbOF+/g0eimHipQE/Pw2lcZ2gqikSvn8kcC1Wz2YQ5SQVvz8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: af4b43025101a2e4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HRt15dowCb6DVE7BlkGaMCGJ06VCHQFi7Lx5ta2xrUGT15Km38+6qcB0USQbKgGz2kqr7CQuce3jB/dyokAr2I/9VyasI3TPgB+yj/6qxBFFRiDHSy+kALTLiDSt7a/cp4zxZIeN3Q//aTJZRo9iXxD3SSRgSZXDqGxhCY/VNMzkPiJyt9cfr0vvgDzAfd3X+JXtYzy2USUFdxKtu7jDh++wWDQ5VMP2NPWY2RaGfLxXtXqhKPWaPpyz15nXrY2sQ7YQbzmoO+4LnHnALs8AMwo/6rusEpN4mmv9dW3n7v6t/0BC9KA/Vx5eGYFCgpp+V17xnJEr+cSXiPMBrNy8qw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=u5KpCr7vPoauzdSPAbufVUBQVHs8SNd1pw2IL59R4QU=;
 b=CWxEd6l6Ys9G2yseBI4C8Q7zPyLGs8DR5bHcXVB9quU1aMr/F+H+c18Cr5YZWRDiHfW6YSi8SRpL1s1ljbSC/NPXmxIzOrta00iB6uuoggemsqXPmx5NINqeCBtNf8iGXur2JtnvxjtQ+wPN7CeXLF93rO2nFM3c+WJ8XWR/ldVPGyfzKI47P693eOC3rJAYjqjEBxZgG5eBarZMyN9Yiq2wB0LHydyf6ZTzp8gxKZ8BtzboNTnmrmyb3znQ7EeIW+D5UJi7/MR6ZeadUz8OQ0CsovcICtpaF4eNPY6kkBvGvsmv/Z598BGpxoyVTny91PrOOwAZ/cMHc4vRw536OQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u5KpCr7vPoauzdSPAbufVUBQVHs8SNd1pw2IL59R4QU=;
 b=FAozWIco6QGXzWntBUnO4WKiBWOgwJwA4MWYhHb4W8P71Abxh8rbKTL6YT4LSMbx3OB92VFjsiXGW9sUkSk0ZPKiWdkcgvGFn/ju9J4/FLxXjzEeAvI3GdBvu2FPbOF+/g0eimHipQE/Pw2lcZ2gqikSvn8kcC1Wz2YQ5SQVvz8=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>
CC: Julien Grall <julien@xen.org>, Marc Bonnici <Marc.Bonnici@arm.com>, Achin
 Gupta <Achin.Gupta@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [XEN PATCH v8 11/22] xen/arm: ffa: send guest events to Secure
 Partitions
Thread-Topic: [XEN PATCH v8 11/22] xen/arm: ffa: send guest events to Secure
 Partitions
Thread-Index: AQHZbdfNbAtCLjIaYEG04chauCctrq8pOrmAgAE9V4CAAAKNgA==
Date: Fri, 14 Apr 2023 08:29:06 +0000
Message-ID: <9732C6EB-3346-408D-B267-9CCCE081B661@arm.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-12-jens.wiklander@linaro.org>
 <d354fee8-4d02-fe5f-1ff1-15f96efeb13f@xen.org>
 <CAHUa44FM5yQ+e=ruPhTxFttGTE1HQvruX-7XAiqVnW4b-mQgcw@mail.gmail.com>
In-Reply-To:
 <CAHUa44FM5yQ+e=ruPhTxFttGTE1HQvruX-7XAiqVnW4b-mQgcw@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|DU0PR08MB10366:EE_|AM7EUR03FT063:EE_|DB9PR08MB6747:EE_
X-MS-Office365-Filtering-Correlation-Id: 79df069b-04d3-4e42-6f06-08db3cc25a88
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 FcRLf50Y/01/r/9qeW7r4ntaZrDOFFoDFw1jYznHkQ93cblxDcbEYuGn2XYhgwmW+MySIWzSZt2ruPE89Z8qEyVgpU7GYXvHdIDR6lug3WCD6ZU2Ga7eOhXXKnUuDnuIIPIMliqgNiSrORYYf+OY+UkSNo9SqsCagsS+SlGyHhcQWTEyGvXd1MYN9EgjUphw8llEEHpQbvR1OrQtn3YVUdT1sbThZzzUJ7snVwHvIsMkT1enx4zAkUk5aT1xNBfRnOslL20aJ+HmEtWNGZh1gRzKu1/AQ1ZKm1KEeFnyZ9YFzC4rBx9A+Nv0u9u33eXGHOWnqwJoWWrmYeejGKrFWWVFZuasoO95xosxQS+t/s0XSDycK/LJ6QAup9bI0LbPA1C4ig5dbxY1MDUw94unDbWmhVhdS4TCufA0K5hFvDmwHywwus98WnYahFHfybg5+auAUKGW/hTsJHahS9lDKl4Eu90N/C2IiYtCRUHqzzQd1LH/8N3f+VE8Fpta7fFKetW+jy0biflciCenOL7LEOmWw0Q1T0q+mideaOjvokwD3kwdPEjTzOpx9OcRewdxOpuaNXabsus99vWJ0HYmqgIs86xcnQmYS5t9ucewALq00D1X917asKj0Sp1kDi7nnarDmIHziBaLcB+8YmnTKg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(366004)(376002)(346002)(39860400002)(451199021)(4326008)(316002)(6486002)(6916009)(2906002)(86362001)(2616005)(36756003)(8676002)(8936002)(33656002)(186003)(5660300002)(6506007)(6512007)(53546011)(38100700002)(122000001)(38070700005)(83380400001)(71200400001)(6666004)(41300700001)(54906003)(478600001)(91956017)(76116006)(66446008)(66476007)(66946007)(66556008)(64756008)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <640B96E36F413347AF23CB1C6598A7E6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB10366
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7d190ecf-9413-43f0-8930-08db3cc2502e
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hc4sryTN767UroML+SB9gIefwmynRbGt7CYaq0F1pyF9qXDqt10n1Y5sMfBioo0VmXnHL6LkFG94FV5aATPLOyhZs+Zw+ZpYm/IZ726XKDngLX4WUUorZm8aV+1ergI1B3JhICJZtN5Qt65byck+fkZFpVj3GEl6iS5jMMd5rc9Ehcr7e+YQ3KwDPE9j+ZR851XWzHBjAHp+haz6cfpaJlFt89KHQPdsPhJwtrgx7L+RjFFLcnmnPNTGcP0ohmLP5Bm1EfuHTuq+9rkA88kM1rKblISapQG4AsujPPmyHiGuoKPp+Bzj56H2xNCs1e2HLtj7+0xqw1/Nl1bmdIdRvL6bSwhZIkx4/CgxbFrIItmOWOl31qJ7J9V+6sb0hT4genCj1yAZqwNalqbKRpL/oDZC1jdMyeAb8lwxb6hwYbV8lYUiVUPrt3REyuiMqq4H9FmnyyCiX36sCKyrem/vPf+qUk5TFPHAAFNH8RbxegTiHbaDDFg/1H1xmX3M/GBPYlamogPYVHGhjcUJlt918Vw89/5sR1LjUlYXdmIUq9/8OIXo1ATNgJ9q9hMGXF7qtPQvDFiihW437mTfbn6z3Ia/Ve0AuvHOhRNr0Dh4GWuYJkZAdkEvE8RryGlsCs7g1YTDcJTZYt3O8C5cwhkIPGiw9eJpCkhKNT+6xk54ytiIfqYcG7EjGdTHxclQ5VFi
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(136003)(346002)(396003)(451199021)(40470700004)(46966006)(36840700001)(107886003)(40460700003)(6512007)(26005)(186003)(6506007)(53546011)(54906003)(40480700001)(33656002)(478600001)(81166007)(356005)(2906002)(8676002)(8936002)(6862004)(36860700001)(83380400001)(41300700001)(336012)(5660300002)(47076005)(2616005)(82310400005)(316002)(4326008)(70206006)(70586007)(82740400003)(86362001)(6486002)(6666004)(36756003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 08:29:24.2488
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 79df069b-04d3-4e42-6f06-08db3cc25a88
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6747

Hi Jens,

> On 14 Apr 2023, at 10:19, Jens Wiklander <jens.wiklander@linaro.org> wrote:
> 
> Hi,
> 
> On Thu, Apr 13, 2023 at 3:24 PM Julien Grall <julien@xen.org> wrote:
>> 
>> Hi,
>> 
>> On 13/04/2023 08:14, Jens Wiklander wrote:
>>> +static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
>>> +                                      uint8_t msg)
>>> +{
>>> +    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
>>> +    int32_t res;
>>> +
>>> +    if ( msg == FFA_MSG_SEND_VM_CREATED )
>>> +        exp_resp |= FFA_MSG_RESP_VM_CREATED;
>>> +    else if ( msg == FFA_MSG_SEND_VM_DESTROYED )
>>> +        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
>>> +    else
>>> +        return FFA_RET_INVALID_PARAMETERS;
>>> +
>>> +    do {
>>> +        const struct arm_smccc_1_2_regs arg = {
>>> +            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
>>> +            .a1 = sp_id,
>>> +            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
>>> +            .a5 = vm_id,
>>> +        };
>>> +        struct arm_smccc_1_2_regs resp;
>>> +
>>> +        arm_smccc_1_2_smc(&arg, &resp);
>>> +        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp )
>>> +        {
>>> +            /*
>>> +             * This is an invalid response, likely due to some error in the
>>> +             * implementation of the ABI.
>>> +             */
>>> +            return FFA_RET_INVALID_PARAMETERS;
>>> +        }
>>> +        res = resp.a3;
>>> +    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
>> 
>> This loop seems potentially unbounded to me. Can you add a comment
>> explaining why this is fine?
> 
> In the FF-A 1.1 specification
> (https://developer.arm.com/documentation/den0077/e/?lang=en) Table
> 18.26 at page 330 it says that FFA_RET_INTERRUPTED and FFA_RET_RETRY
> should be handled in this way. When looking at this from the
> hypervisor's point of view it is troublesome since there isn't any
> guarantee that we're progressing.
> 
> We should be able to rule out FFA_RET_INTERRUPTED since non-secure
> interrupts should be masked at this point. I'm not sure if
> FFA_RET_RETRY can be avoided entirely, but we should be able to put a
> limit on how many times we're prepared to retry.

The fact that interrupts are masked in Xen does not mean they will be
masked in the secure world. In fact, what we should do when INTERRUPTED
is received is something we have to clear up, but I think we should
unmask interrupts to process them in Xen before retrying.

> 
> How about setting a limit of max 10 retries and treating
> FFA_RET_INTERRUPTED as an error? Or is it better to not loop at all
> and treat all but FFA_RET_OK as errors? What do others think?

I would suggest doing a max retry for both cases and adding a TODO in
the code. We will need to define a generic way to handle those cases,
but at this stage INTERRUPTED should be considered TODO. RETRY will
probably stay with a limit here; in the case of a guest message, both
of those possibilities could just be returned to the guest.

Do you agree?

Cheers
Bertrand

> 
> Thanks,
> Jens
> 
>> 
>>> +
>>> +    return res;
>>> +}
>>> +
>> 
>> Cheers,
>> 
>> --
>> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 08:41:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 08:41:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521046.809310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnEzV-0000Pn-DG; Fri, 14 Apr 2023 08:41:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521046.809310; Fri, 14 Apr 2023 08:41:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnEzV-0000Pg-AM; Fri, 14 Apr 2023 08:41:09 +0000
Received: by outflank-mailman (input) for mailman id 521046;
 Fri, 14 Apr 2023 08:41:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tlWx=AF=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pnEzU-0000Pa-5B
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 08:41:08 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20630.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 18e770d6-daa0-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 10:41:07 +0200 (CEST)
Received: from AS9PR06CA0159.eurprd06.prod.outlook.com (2603:10a6:20b:45c::26)
 by AS8PR08MB9744.eurprd08.prod.outlook.com (2603:10a6:20b:614::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 08:41:03 +0000
Received: from AM7EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:45c:cafe::2) by AS9PR06CA0159.outlook.office365.com
 (2603:10a6:20b:45c::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.33 via Frontend
 Transport; Fri, 14 Apr 2023 08:41:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT007.mail.protection.outlook.com (100.127.140.242) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.33 via Frontend Transport; Fri, 14 Apr 2023 08:41:02 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Fri, 14 Apr 2023 08:41:02 +0000
Received: from 9d7f2be32ea3.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F29C5BC1-AE1A-480C-A63D-C2C66DE582EA.1; 
 Fri, 14 Apr 2023 08:40:54 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9d7f2be32ea3.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 14 Apr 2023 08:40:54 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS8PR08MB6680.eurprd08.prod.outlook.com (2603:10a6:20b:397::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 08:40:52 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.030; Fri, 14 Apr 2023
 08:40:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18e770d6-daa0-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Zcc1odVblejsig4qsy4m9Anh6zU8rGuBdxTtNo7iqtY=;
 b=rjM82foaOvVrJeDUNzBYaqDBUkQ7q/N6kQmBKIWPbGQ64nxm7N/XghyuXsT2F9jFm2fqF9/X/1iM0Xh0e0mci/vNEGlgHfeC6PROoMlzWo/WNuhKwMcAm5LT9MzRxvG7I4L3UfBlf7hFWFCsVlBO8PzGkIWF3kLtjIN0YrE+lPE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6e812a435e6285eb
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gGaWOVDDZhIFh6UqPXMqVrOuDnGiNONs1nQr8gs1Fjc2xy/KUjF9O2VigdgqG+vEKxrlBxJzw/SZzjOy16ES0VUfWpoVEP1w/+RemFwfj1PtWYKdi/9205hmsG7Dkg8aiEFRsm661qSa58g1HOlPN18sWkaxGLIA/sbasKhXcNq6whyO/uRArI6FmEbLPBEwPC3xnOdT5H5ft6Xz3H23yHKo7Yxtzb7OPl6qTiFz5rXLZ/Gd+xtS/JsnldsoFP6llkNRMQidEuOkZW82tdZylUW1/512QQrdlfqXR0M6ji+UqvBjnLch5xBOxWoxxf97lg8IY/UPnMmAVltEZk+2bw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Zcc1odVblejsig4qsy4m9Anh6zU8rGuBdxTtNo7iqtY=;
 b=iRVPrKcXXz/mkMJ036ZPu/stLqYRmqdqxzC+tnTPID8hHuo+/S6BE69CmSFgNXSFhYqr+0NiA0iwVlwrYmDpgHKObretvQbtwEd4RGz5132iJ4JDWgsLF9jFC3VqKlgOiXaYlsF1hom196XG+UhdW85KzLEUPEdg6a2bsE8cH58ECZmZp/IBTEqxoVI049HZUVDjYQ+82K3ekkSiDKbxEugXb6omFQMXdVgPt82+INbyGXxXxSzrbujfpC9jAyDdcLGbPPYqNxcbwI7V+QJa1Bt5cIIq8Q/mGl910WSSdjm4aUe01v7dqGD84EP6sEjXG4bZ1h2T71ihcktNgLj2Hw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Zcc1odVblejsig4qsy4m9Anh6zU8rGuBdxTtNo7iqtY=;
 b=rjM82foaOvVrJeDUNzBYaqDBUkQ7q/N6kQmBKIWPbGQ64nxm7N/XghyuXsT2F9jFm2fqF9/X/1iM0Xh0e0mci/vNEGlgHfeC6PROoMlzWo/WNuhKwMcAm5LT9MzRxvG7I4L3UfBlf7hFWFCsVlBO8PzGkIWF3kLtjIN0YrE+lPE=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 04/12] xen/arm: add SVE exception class handling
Thread-Topic: [PATCH v5 04/12] xen/arm: add SVE exception class handling
Thread-Index: AQHZbSQpoCrJkjjTIkyQmrtaVCm5Lq8qf08A
Date: Fri, 14 Apr 2023 08:40:52 +0000
Message-ID: <92137CCE-1E08-4C95-9BC1-A4B83EEEC91E@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-5-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-5-luca.fancellu@arm.com>

Hi Luca,

> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> SVE has a new exception class with code 0x19; introduce the new code
> and handle the exception.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

With the comments from Julien handled, you can add my:
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand


> ---
> Changes from v4:
> - No changes
> Changes from v3:
> - No changes
> Changes from v2:
> - No changes
> Changes from v1:
> - No changes
> Changes from RFC:
> - No changes
> ---
> xen/arch/arm/include/asm/processor.h |  1 +
> xen/arch/arm/traps.c                 | 12 ++++++++++++
> 2 files changed, 13 insertions(+)
>
> diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
> index bc683334125c..7e42ff8811fc 100644
> --- a/xen/arch/arm/include/asm/processor.h
> +++ b/xen/arch/arm/include/asm/processor.h
> @@ -426,6 +426,7 @@
> #define HSR_EC_HVC64                0x16
> #define HSR_EC_SMC64                0x17
> #define HSR_EC_SYSREG               0x18
> +#define HSR_EC_SVE                  0x19
> #endif
> #define HSR_EC_INSTR_ABORT_LOWER_EL 0x20
> #define HSR_EC_INSTR_ABORT_CURR_EL  0x21
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index a78a99ddadd0..c2e30feafd5a 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2160,6 +2160,13 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>         perfc_incr(trap_sysreg);
>         do_sysreg(regs, hsr);
>         break;
> +    case HSR_EC_SVE:
> +        GUEST_BUG_ON(regs_mode_is_32bit(regs));
> +        gprintk(XENLOG_WARNING,
> +                "Domain id %d tried to use SVE while not allowed\n",
> +                current->domain->domain_id);
> +        inject_undef_exception(regs, hsr);
> +        break;
> #endif
>
>     case HSR_EC_INSTR_ABORT_LOWER_EL:
> @@ -2189,6 +2196,11 @@ void do_trap_hyp_sync(struct cpu_user_regs *regs)
>     case HSR_EC_BRK:
>         do_trap_brk(regs, hsr);
>         break;
> +    case HSR_EC_SVE:
> +        /* An SVE exception is a bug somewhere in hypervisor code */
> +        printk("SVE trap at EL2.\n");
> +        do_unexpected_trap("Hypervisor", regs);
> +        break;
> #endif
>     case HSR_EC_DATA_ABORT_CURR_EL:
>     case HSR_EC_INSTR_ABORT_CURR_EL:
> --
> 2.34.1
>



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 08:46:35 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180251-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180251: regressions - trouble: fail/pass/starved
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Apr 2023 08:46:20 +0000

flight 180251 qemu-mainline real [real]
flight 180257 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180251/
http://logs.test-lab.xenproject.org/osstest/logs/180257/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 180231

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180231
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180231
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180231
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180231
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180231
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                7dbd6f8a27e30fe14adb3d5869097cddf24038d6
baseline version:
 qemuu                9d177b7f87d96d1ed8fd16e222a37bd1ac8a0cd8

Last test of basis   180231  2023-04-13 09:18:05 Z    0 days
Testing same since   180251  2023-04-13 20:08:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Berrangé <berrange@redhat.com>
  David Woodhouse <dwmw@amazon.co.uk>
  Juan Quintela <quintela@redhat.com>
  Lukas Straub <lukasstraub2@web.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7dbd6f8a27e30fe14adb3d5869097cddf24038d6
Author: Peter Maydell <peter.maydell@linaro.org>
Date:   Thu Apr 13 16:40:22 2023 +0100

    Update version for v8.0.0-rc4 release
    
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

commit c38b2ca7387b7857a614d1a6b8be5371949156d4
Merge: 69d4e746b3 28ef5339c3
Author: Peter Maydell <peter.maydell@linaro.org>
Date:   Thu Apr 13 10:09:42 2023 +0100

    Merge tag 'migration-20230412-pull-request' of https://gitlab.com/juan.quintela/qemu into staging
    
    Migration Pull request for 8.0
    
    Last patches found:
    - peter xu preempt channel fixes.
      needed for backward compatibility with old machine types.
    - lukas fix to get compress working again.
    
    - fix ram on s390x.  Get back to the old code, even when it shouldn't
      be needed, but as it fails on s390x, just revert.
    
    Later, Juan.
    
    # -----BEGIN PGP SIGNATURE-----
    #
    # iQIzBAABCAAdFiEEGJn/jt6/WMzuA0uC9IfvGFhy1yMFAmQ3HgQACgkQ9IfvGFhy
    # 1yPXGQ/+Pf6HepNUlIr7naYOcpRriXPQF+q1zqo74F9fy2vrGcwJOI6qmRTjsX4E
    # 9KgXipOz7+b5wSemF7PDKcnBiwyt6UHCH+XXe0h4TpyuORbtABKRgtOhA1/sa84D
    # HnKp0TwImpAO26tzPa7u49aau/EEVBKAzFVcyn4w56S9qiDWicOpd5kG0CJBIsMJ
    # Mnvy5fXaqQRewnKiwFoJGWfyhzEToDO6Z/SkT5xYON94P+eiM2xMwXOC5WcGfmY7
    # wFGDB+SuyEP8TTn7mV0mmnlFjYe4G07hVARHSDFX3ho4b6q5F+WzfW095G6QKiu9
    # n3Pzr7IBGX3sgetPtYwOwGsE9JrfHMFzBRxQZZwq5GSmjk7+agkbXmV7RyV82EYs
    # KYOhuNF91ca0qvCrGA/eGbbJqVrd7SR5FhS4SQ7oKd5n2au/ZHoKwAgm5lBdcvES
    # 2TB0MBN1s0JPh6KMV8tPB2miZyqPRa++oA8qIX7Asoe1X4xVT1FwiDaFL8TO8i2A
    # 7uBis3KLZqOHC6dAiXlCDtaADAWgQxjcdoS1l8jTF6MgBSe+zQhXG+pcIDuSiV9N
    # WfDiUPY97iqPTvpzdz3Is+LbBax2uY5ZR05KSdmCBpIgfvSWMqXtwRydclt6G5h7
    # ZiOcTwrgMpXdbhdsFZTqVWAJG2sTkj4TA+IezVpXzPeQNLZ+T8k=
    # =kW3P
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Wed 12 Apr 2023 22:09:24 BST
    # gpg:                using RSA key 1899FF8EDEBF58CCEE034B82F487EF185872D723
    # gpg: Good signature from "Juan Quintela <quintela@redhat.com>" [full]
    # gpg:                 aka "Juan Quintela <quintela@trasno.org>" [full]
    # Primary key fingerprint: 1899 FF8E DEBF 58CC EE03  4B82 F487 EF18 5872 D723
    
    * tag 'migration-20230412-pull-request' of https://gitlab.com/juan.quintela/qemu:
      migration: fix ram_state_pending_exact()
      migration/ram.c: Fix migration with compress enabled
      migration: Recover behavior of preempt channel creation for pre-7.2
      migration: Fix potential race on postcopy_qemufile_src
      io: tls: Inherit QIO_CHANNEL_FEATURE_SHUTDOWN on server side
    
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

commit 69d4e746b3a899b90d2cbf422a3ce764cf51cfbe
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Wed Apr 12 19:51:01 2023 +0100

    hw/xen: Fix double-free in xen_console store_con_info()
    
    Coverity spotted a double-free (CID 1508254); we g_string_free(path) and
    then for some reason immediately call free(path) too.
    
    We should just use g_autoptr() for it anyway, which simplifies the code
    a bit.
    
    Fixes: 7a8a749da7d3 ("hw/xen: Move xenstore_store_pv_console_info to xen_console.c")
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

commit 28ef5339c37f1f78c2fa4df2295bc0cd73a0abfd
Author: Juan Quintela <quintela@redhat.com>
Date:   Wed Apr 12 22:30:20 2023 +0200

    migration: fix ram_state_pending_exact()
    
    I removed that bit on commit:
    
    commit c8df4a7aeffcb46020f610526eea621fa5b0cd47
    Author: Juan Quintela <quintela@redhat.com>
    Date:   Mon Oct 3 02:00:03 2022 +0200
    
        migration: Split save_live_pending() into state_pending_*
    
    Fixes: c8df4a7aeffcb46020f610526eea621fa5b0cd47
    Suggested-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
    Signed-off-by: Juan Quintela <quintela@redhat.com>

commit 37502df32c4b02403fe92452c4ed1d96da3df01c
Author: Lukas Straub <lukasstraub2@web.de>
Date:   Sun Apr 2 17:06:32 2023 +0000

    migration/ram.c: Fix migration with compress enabled
    
    Since ec6f3ab9, migration with compress enabled was broken, because
    the compress threads use a dummy QEMUFile which just acts as a
    buffer and that commit accidentally changed it to use the outgoing
    migration channel instead.
    
    Fix this by using the dummy file again in the compress threads.
    
    Signed-off-by: Lukas Straub <lukasstraub2@web.de>
    Reviewed-by: Juan Quintela <quintela@redhat.com>
    Signed-off-by: Juan Quintela <quintela@redhat.com>

commit 06064a671573580326b1f23a2afa2702c48d8e05
Author: Peter Xu <peterx@redhat.com>
Date:   Sun Mar 26 13:25:40 2023 -0400

    migration: Recover behavior of preempt channel creation for pre-7.2
    
    In 8.0 devel window we reworked preempt channel creation, so that there'll
    be no race condition when the migration channel and preempt channel got
    established in the wrong order in commit 5655aab079.
    
    However, no one noticed that the change is also not compatible with
    older QEMU versions, mainly 7.1/7.2, where preempt mode started to be
    supported.
    
    Leverage the same pre-7.2 flag introduced in the previous patch to recover
    the behavior hopefully before 8.0 releases, so we don't break migration
    when we migrate from 8.0 to older qemu binaries.
    
    Fixes: 5655aab079 ("migration: Postpone postcopy preempt channel to be after main")
    Signed-off-by: Peter Xu <peterx@redhat.com>
    Reviewed-by: Juan Quintela <quintela@redhat.com>
    Signed-off-by: Juan Quintela <quintela@redhat.com>

commit 6621883f9398bc3f255968f0b4919e883bafb06c
Author: Peter Xu <peterx@redhat.com>
Date:   Sun Mar 26 13:25:39 2023 -0400

    migration: Fix potential race on postcopy_qemufile_src
    
    postcopy_qemufile_src object should be owned by one thread, either the main
    thread (e.g. when at the beginning, or at the end of migration), or by the
    return path thread (when during a preempt enabled postcopy migration).  If
    that's not the case the access to the object might be racy.
    
    postcopy_preempt_shutdown_file() can potentially be racy, because it
    is called in the end phase of migration on the main thread, at a point
    where the return path thread has not yet been recycled; the recycling
    happens later, in await_return_path_close_on_source().
    
    This means it is logically possible for the main thread and the return
    path thread to operate on the same qemufile at the same time, and
    qemufile is not thread safe at all.
    
    postcopy_preempt_shutdown_file() used to be needed because that's
    where we send EOS to dest so that dest can safely shut down the
    preempt thread.
    
    To avoid the possible race, remove the only place where such a race
    can happen.  Instead, figure out another way to safely close the
    preempt thread on dest.
    
    The core idea during postcopy for deciding "when to stop" is that dest
    sends a postcopy SHUT message to src, telling src that all data has
    arrived.  Hence it may be better to shut down the dest preempt thread
    directly on the dest node.
    
    This patch proposes such a way: change postcopy_prio_thread_created
    into PreemptThreadStatus, so that we kick the preempt thread on dest
    QEMU with the sequence:
    
      mis->preempt_thread_status = PREEMPT_THREAD_QUIT;
      qemu_file_shutdown(mis->postcopy_qemufile_dst);
    
    Here shutdown() is probably the easiest way so far to kick the preempt
    thread out of a blocked qemu_get_be64().  The thread then reads
    preempt_thread_status to make sure the shutdown signals a willingness
    to quit rather than a network failure.
    
    We could have avoided the extra status and relied on the migration
    status alone, but postcopy_ram_incoming_cleanup() is called early
    enough that we are still in POSTCOPY_ACTIVE no matter what.  So keep
    it simple and introduce the status.
    
    One flag x-preempt-pre-7-2 is added to keep old pre-7.2 behaviors of
    postcopy preempt.
    
    Fixes: 9358982744 ("migration: Send requested page directly in rp-return thread")
    Signed-off-by: Peter Xu <peterx@redhat.com>
    Reviewed-by: Juan Quintela <quintela@redhat.com>
    Signed-off-by: Juan Quintela <quintela@redhat.com>
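    [Editor's note] The dest-side quit sequence described above can be
    sketched outside QEMU as a standalone program.  All names below
    (preempt_thread, chan, clean_exit) are illustrative, not QEMU's; only
    the two-step order -- set the status flag first, then shutdown() the
    fd to kick a blocked read -- mirrors the commit:
    
    ```c
    #include <assert.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>
    
    enum { PREEMPT_THREAD_CREATED, PREEMPT_THREAD_QUIT };
    
    static int chan[2];                       /* stands in for postcopy_qemufile_dst */
    static volatile int status = PREEMPT_THREAD_CREATED;
    static int clean_exit;
    
    static void *preempt_thread(void *arg)
    {
        char buf[8];
        /* Blocks like qemu_get_be64() until data arrives or the fd is shut down. */
        ssize_t n = read(chan[1], buf, sizeof(buf));
        /* Distinguish "asked to quit" from a real network failure. */
        if (n <= 0 && status == PREEMPT_THREAD_QUIT)
            clean_exit = 1;
        return NULL;
    }
    
    int main(void)
    {
        pthread_t th;
        assert(socketpair(AF_UNIX, SOCK_STREAM, 0, chan) == 0);
        assert(pthread_create(&th, NULL, preempt_thread, NULL) == 0);
        usleep(100 * 1000);                   /* let the thread block in read() */
        status = PREEMPT_THREAD_QUIT;         /* 1) flag the intent first */
        shutdown(chan[1], SHUT_RDWR);         /* 2) then kick the blocked read */
        pthread_join(th, NULL);
        printf("clean_exit=%d\n", clean_exit);
        return 0;
    }
    ```
    
    Setting the flag before the shutdown() is what lets the woken thread
    tell a deliberate quit apart from an error.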

commit 86d063fa83901bc8150343ff8b03979fbea392c9
Author: Peter Xu <peterx@redhat.com>
Date:   Sun Mar 26 13:25:38 2023 -0400

    io: tls: Inherit QIO_CHANNEL_FEATURE_SHUTDOWN on server side
    
    The TLS iochannel inherits io_shutdown() from the master ioc; however,
    we missed doing that on the server side.
    
    This will, e.g., allow qemu_file_shutdown() to also work on the dest
    QEMU for migration.
    
    Acked-by: Daniel P. Berrangé <berrange@redhat.com>
    Signed-off-by: Peter Xu <peterx@redhat.com>
    Reviewed-by: Juan Quintela <quintela@redhat.com>
    Signed-off-by: Juan Quintela <quintela@redhat.com>


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 08:47:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 08:47:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521058.809330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnF5c-0001dM-H3; Fri, 14 Apr 2023 08:47:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521058.809330; Fri, 14 Apr 2023 08:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnF5c-0001dF-E1; Fri, 14 Apr 2023 08:47:28 +0000
Received: by outflank-mailman (input) for mailman id 521058;
 Fri, 14 Apr 2023 08:47:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tlWx=AF=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pnF5b-0001ct-66
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 08:47:27 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20624.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f9dd0596-daa0-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 10:47:24 +0200 (CEST)
Received: from AS9PR06CA0659.eurprd06.prod.outlook.com (2603:10a6:20b:46f::35)
 by GV1PR08MB9916.eurprd08.prod.outlook.com (2603:10a6:150:a6::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 08:47:17 +0000
Received: from AM7EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:46f:cafe::52) by AS9PR06CA0659.outlook.office365.com
 (2603:10a6:20b:46f::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.36 via Frontend
 Transport; Fri, 14 Apr 2023 08:47:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT037.mail.protection.outlook.com (100.127.140.225) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.29 via Frontend Transport; Fri, 14 Apr 2023 08:47:16 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Fri, 14 Apr 2023 08:47:16 +0000
Received: from c47710ec9854.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C7F73050-5D25-4657-B869-090F9DA7FD06.1; 
 Fri, 14 Apr 2023 08:47:05 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c47710ec9854.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 14 Apr 2023 08:47:05 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DU0PR08MB8930.eurprd08.prod.outlook.com (2603:10a6:10:465::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 08:47:02 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.030; Fri, 14 Apr 2023
 08:47:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9dd0596-daa0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b1d++NwiHA03Rdgvx4zl7hDaFSLq9idxZRGzKkiYsaQ=;
 b=KSOND7mmVoUPCfp1ACxeWlZEZgIgX/b0u+VRNzQepnoyGBq2Zn0+A9aWHC9O5E+kQux0Sb2MyWbaApz+//eiAOoLW0i2j02xYmHuOnRyBN5YwLSWNXrE5FHM5SJ1pdgVcCH28awOWhj38KD1/u0ZWfh5QHKbZeEK1g5I8k/u2OY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0bf19fa0587feaaf
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DGoZ3ncKsr4whjcAEE/gXRLPPjcd/3NyEwDsZoVWYl/5YuFhs7btof7s3Woa2m0UYAsLM9JBRQJtmgaKI816cBUMf5lyKEWuFgbq/MpX+8FEVfjh6Jo5Nj6KtVOIP4PoIviaXbFx9jUyJBjX4JrkVEmEO68341WiT9/2jItW2NJ6QtF+5bhPZVwfi0ryqRFeKJjbMD7K9BXzZvjxJoZbQLSX8mXGeQhSQqDHzVPh0Vkdd1rls3yupaW8TTIaSSUQAS410KI8KUAMTMkTtqcA6K/c3IMcFQr9V9L3hOJNsd8PCAC0kowsRFG6inNbIaZ13s4QJ8NC2qCfsiG6myOh/w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=b1d++NwiHA03Rdgvx4zl7hDaFSLq9idxZRGzKkiYsaQ=;
 b=hhJ+7lQrZFbdqxTZbtW6/NkxwiJQtNAR9woyT2ibogfPkIC+p539Qt1ub63ZJWVdvWWlnbc+Wed04/Mib/mtFucPfBFHNuCoC7fwZHX9gW4LUc/0UTZpYc5oh7Cub3i83JpkW6ctPjH19uVerBaDKe6J1I/wMnRozK3dZTHgy/fo2cISowHt88KTMMb7klq/s+9kWfeBG3p5L2PlWrjS1gq5mnF/g6M5JPqLSczs+r9Z4iKGAqlRS37fG0lvCueWILhBHU9PDdqLp7+X5IfIKASB1RWKiaHtR3T+/RbtLGN+VUhhOjXLShhXsdOm5mODbW8CFFu51nc3VkeEHq7daQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [PATCH v5 06/12] xen/common: add dom0 xen command line argument
 for Arm
Thread-Topic: [PATCH v5 06/12] xen/common: add dom0 xen command line argument
 for Arm
Thread-Index: AQHZbSQvU2J9PUHvFUGvULiLhkcjJK8qgQWA
Date: Fri, 14 Apr 2023 08:47:00 +0000
Message-ID: <8796C502-0822-4559-8167-105CDA3B1163@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-7-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-7-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|DU0PR08MB8930:EE_|AM7EUR03FT037:EE_|GV1PR08MB9916:EE_
X-MS-Office365-Filtering-Correlation-Id: faa98a15-a307-408f-9197-08db3cc4d993
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <0A5982B861DE5D4CB675C1279DDF056C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB8930
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9d2e52de-d392-4c29-8885-08db3cc4cffd
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 08:47:16.3703
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: faa98a15-a307-408f-9197-08db3cc4d993
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB9916

Hi Luca,

> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> Currently x86 defines a Xen command line argument dom0=<list> in which
> dom0 controlling sub-options can be specified.  To use it also on Arm,
> move the code that loops through the list of arguments from x86 to the
> common code and, from there, call architecture specific functions to
> handle the comma separated sub-options.
> 
> No functional changes are intended.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand


> ---
> Changes from v4:
> - return EINVAL in Arm implementation of parse_arch_dom0_param,
>   shorten variable name in the function from str_begin, str_end to
>   s, e. Removed variable rc from x86 parse_arch_dom0_param
>   implementation. (Jan)
> - Add R-By Jan
> Changes from v3:
> - new patch
> ---
> xen/arch/arm/domain_build.c |  5 ++++
> xen/arch/x86/dom0_build.c   | 48 ++++++++++++++-----------------------
> xen/common/domain.c         | 23 ++++++++++++++++++
> xen/include/xen/domain.h    |  1 +
> 4 files changed, 47 insertions(+), 30 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 4f9d4f9d8867..eeb4662f0eee 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -59,6 +59,11 @@ static int __init parse_dom0_mem(const char *s)
> }
> custom_param("dom0_mem", parse_dom0_mem);
> 
> +int __init parse_arch_dom0_param(const char *s, const char *e)
> +{
> +    return -EINVAL;
> +}
> +
> /* Override macros from asm/page.h to make them work with mfn_t */
> #undef virt_to_mfn
> #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
> index 79234f18ff01..9f5300a3efbb 100644
> --- a/xen/arch/x86/dom0_build.c
> +++ b/xen/arch/x86/dom0_build.c
> @@ -266,42 +266,30 @@ bool __initdata opt_dom0_pvh = !IS_ENABLED(CONFIG_PV);
> bool __initdata opt_dom0_verbose = IS_ENABLED(CONFIG_VERBOSE_DEBUG);
> bool __initdata opt_dom0_msr_relaxed;
> 
> -static int __init cf_check parse_dom0_param(const char *s)
> +int __init parse_arch_dom0_param(const char *s, const char *e)
> {
> -    const char *ss;
> -    int rc = 0;
> +    int val;
> 
> -    do {
> -        int val;
> -
> -        ss = strchr(s, ',');
> -        if ( !ss )
> -            ss = strchr(s, '\0');
> -
> -        if ( IS_ENABLED(CONFIG_PV) && !cmdline_strcmp(s, "pv") )
> -            opt_dom0_pvh = false;
> -        else if ( IS_ENABLED(CONFIG_HVM) && !cmdline_strcmp(s, "pvh") )
> -            opt_dom0_pvh = true;
> +    if ( IS_ENABLED(CONFIG_PV) && !cmdline_strcmp(s, "pv") )
> +        opt_dom0_pvh = false;
> +    else if ( IS_ENABLED(CONFIG_HVM) && !cmdline_strcmp(s, "pvh") )
> +        opt_dom0_pvh = true;
> #ifdef CONFIG_SHADOW_PAGING
> -        else if ( (val = parse_boolean("shadow", s, ss)) >= 0 )
> -            opt_dom0_shadow = val;
> +    else if ( (val = parse_boolean("shadow", s, e)) >= 0 )
> +        opt_dom0_shadow = val;
> #endif
> -        else if ( (val = parse_boolean("verbose", s, ss)) >= 0 )
> -            opt_dom0_verbose = val;
> -        else if ( IS_ENABLED(CONFIG_PV) &&
> -                  (val = parse_boolean("cpuid-faulting", s, ss)) >= 0 )
> -            opt_dom0_cpuid_faulting = val;
> -        else if ( (val = parse_boolean("msr-relaxed", s, ss)) >= 0 )
> -            opt_dom0_msr_relaxed = val;
> -        else
> -            rc = -EINVAL;
> -
> -        s = ss + 1;
> -    } while ( *ss );
> +    else if ( (val = parse_boolean("verbose", s, e)) >= 0 )
> +        opt_dom0_verbose = val;
> +    else if ( IS_ENABLED(CONFIG_PV) &&
> +              (val = parse_boolean("cpuid-faulting", s, e)) >= 0 )
> +        opt_dom0_cpuid_faulting = val;
> +    else if ( (val = parse_boolean("msr-relaxed", s, e)) >= 0 )
> +        opt_dom0_msr_relaxed = val;
> +    else
> +        return -EINVAL;
> 
> -    return rc;
> +    return 0;
> }
> -custom_param("dom0", parse_dom0_param);
> 
> static char __initdata opt_dom0_ioports_disable[200] = "";
> string_param("dom0_ioports_disable", opt_dom0_ioports_disable);
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 626debbae095..7779ba088675 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -364,6 +364,29 @@ static int __init cf_check parse_extra_guest_irqs(const char *s)
> }
> custom_param("extra_guest_irqs", parse_extra_guest_irqs);
> 
> +static int __init cf_check parse_dom0_param(const char *s)
> +{
> +    const char *ss;
> +    int rc = 0;
> +
> +    do {
> +        int ret;
> +
> +        ss = strchr(s, ',');
> +        if ( !ss )
> +            ss = strchr(s, '\0');
> +
> +        ret = parse_arch_dom0_param(s, ss);
> +        if ( ret && !rc )
> +            rc = ret;
> +
> +        s = ss + 1;
> +    } while ( *ss );
> +
> +    return rc;
> +}
> +custom_param("dom0", parse_dom0_param);
> +
> /*
>  * Release resources held by a domain.  There may or may not be live
>  * references to the domain, and it may or may not be fully constructed.
> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
> index 26f9c4f6dd5b..1df8f933d076 100644
> --- a/xen/include/xen/domain.h
> +++ b/xen/include/xen/domain.h
> @@ -16,6 +16,7 @@ typedef union {
> struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id);
> 
> unsigned int dom0_max_vcpus(void);
> +int parse_arch_dom0_param(const char *s, const char *e);
> struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
> 
> int vcpu_reset(struct vcpu *);
> -- 
> 2.34.1
> 



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 08:59:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 08:59:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521065.809339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnFGc-0003Dh-Kw; Fri, 14 Apr 2023 08:58:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521065.809339; Fri, 14 Apr 2023 08:58:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnFGc-0003Da-Hu; Fri, 14 Apr 2023 08:58:50 +0000
Received: by outflank-mailman (input) for mailman id 521065;
 Fri, 14 Apr 2023 08:58:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZM4c=AF=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pnFGb-0003DU-NM
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 08:58:49 +0000
Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com
 [2a00:1450:4864:20::431])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 91bb34ff-daa2-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 10:58:48 +0200 (CEST)
Received: by mail-wr1-x431.google.com with SMTP id i3so7436687wrc.4
 for <xen-devel@lists.xenproject.org>; Fri, 14 Apr 2023 01:58:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91bb34ff-daa2-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681462728; x=1684054728;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=2bYJqD/lritzgdgYVeCcBDk4S3KIqAcSAF9n6O03bTM=;
        b=F1QhtALQROLvvngu6FysJm43XIyPpeA8KPqJw74kx67WBcWqoy2fBwKLYCHrAAjUUN
         dRNJ/5oHnNNnu3TZNpZKidLGEzRi0OZmQ/rHhSa+RCaXu3STu3q6iXCXsI2rhT5fGQ56
         oRv1bD5DqQ1WpXS7Rzuz9xMan6wZDobvBQ2klB92MXXJw7pptmDq7SeRJX4qfINxBX+K
         /boPNqkmd6yVFbPHk86AAF/hENHBGEEqsX3A9k1V417u7vH4srBHohiGf55BAHRSAl5S
         O38T2eOgmr5OfdtChbDsANzNT+K6QcV3JfP9LIhuGl8BlaDl1x4AZhQnjEvRqzOiXrb5
         kJ7g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681462728; x=1684054728;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=2bYJqD/lritzgdgYVeCcBDk4S3KIqAcSAF9n6O03bTM=;
        b=jZbwAWYMm6SgMF/Y9f6jrJJECUFWbjHq/gmiCQ9RBXfniTwpWOKD0LupbIVps/oC0W
         ZClOqt6K3R1DGq9UVIdLFwnD3Pyexq5LKli3iO/Sgc9sguGHBNhJjU2THBSc4UQGXcMf
         nNEKn0gBGmia2BmIkkb6h5RajBuX4u2OOkJk0WZUMkmAZMs2gNhJeFGOim6sNdqhyV8z
         oqTlBJrTS9fK74WgDygC6EMw1+R1zq2Lsdyoz/jbcaPGKuQW4eDs3yXRqY0PNdbpPY7l
         ejA1DBWxxART5i2VeHFPLJyKTWhsFwn3vz3dqbF6QlHvmPUaKq8ttui6OCemA7tm6nEr
         doRQ==
X-Gm-Message-State: AAQBX9cEYRdqXBbkVo7r0Imii4JJQlJQx4fiQA6uCrTNbF0c+3gPI+Uc
	vknRq00Kj3WlyFiX5y0S3APiBeMtbi6PPrdeMF+PTQ==
X-Google-Smtp-Source: AKy350atbqUjXHkQy5yQKv9rcJiaNVpf2SH4v1dIDy4x+Y7UPgU3NueaIUJzrzgaOcfU5KwWDuMlFsOxw1rUoBLG1TY=
X-Received: by 2002:a05:6000:1e02:b0:2f4:2e72:5bf7 with SMTP id
 bj2-20020a0560001e0200b002f42e725bf7mr2813000wrb.0.1681462727910; Fri, 14 Apr
 2023 01:58:47 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-3-jens.wiklander@linaro.org> <ad1d5ebd-38e5-bab9-24ac-6facc8ccb95c@xen.org>
 <d7f18393-262b-f2b1-9af3-a371dae75994@citrix.com>
In-Reply-To: <d7f18393-262b-f2b1-9af3-a371dae75994@citrix.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Fri, 14 Apr 2023 10:58:37 +0200
Message-ID: <CAHUa44FYGeA-knf2HMR6t4B_q3JZ_WuEq9fpTmD2_sJLMwPoQw@mail.gmail.com>
Subject: Re: [XEN PATCH v8 02/22] xen/arm: tee: add a primitive FF-A mediator
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org, 
	Bertrand.Marquis@arm.com, Marc Bonnici <marc.bonnici@arm.com>, 
	Achin Gupta <achin.gupta@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi,

On Thu, Apr 13, 2023 at 3:27 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> On 13/04/2023 1:26 pm, Julien Grall wrote:
> >> +static int ffa_domain_init(struct domain *d)
> >> +{
> >> +    struct ffa_ctx *ctx;
> >> +
> >> +    if ( !ffa_version )
> >> +        return -ENODEV;
> >> +
> >> +    ctx = xzalloc(struct ffa_ctx);
> >> +    if ( !ctx )
> >> +        return -ENOMEM;
> >> +
> >> +    d->arch.tee = ctx;
> >> +
> >> +    return 0;
> >> +}
> >> +
> >> +/* This function is supposed to undo what ffa_domain_init() has done */
> >
> > I think there is a problem in the TEE framework. The callback
> > .relinquish_resources() will not be called if domain_create() failed.
> > So this will result in a memory leak.
> >
> > We also can't call .relinquish_resources() on early domain creation
> > failure because relinquishing resources can take time and therefore
> > needs to be preemptible.
> >
> > So I think we need to introduce a new callback domain_free() that will
> > be called from arch_domain_destroy(). Is this something you can look at?
>
>
> Cleanup of an early domain creation failure, however you do it, is at
> most "the same amount of time again".  It cannot (absent development
> errors) take the indefinite periods of time that a full
> domain_destroy() can.
>
> The error path in domain_create() explicitly does call domain_teardown()
> so we can (eventually) purge these duplicate cleanup paths.  There are
> far too many easy errors to be made which occur from having split
> cleanup, and we have had to issue XSAs in the past to address some of
> them.  (Hence the effort to try and specifically change things, and
> remove the ability to introduce the errors in the first place.)
>
>
> Right now, it is specifically awkward to do this nicely because
> domain_teardown() doesn't call into a suitable arch hook.
>
> IMO the best option here is extend domain_teardown() with an
> arch_domain_teardown() state/hook, and wire in the TEE cleanup path into
> this too.
>
> Anything else is explicitly adding to technical debt that I (or someone
> else) is going to have to revert further down the line.
>
> If you want, I am happy to prototype the arch_domain_teardown() bit of
> the fix, but I will have to defer wiring in the TEE part to someone
> capable of testing it.

You're more than welcome to prototype the fix, I can test it and add
it to the next version of the patch set when we're happy with the
result.

Thanks,
Jens

>
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 09:02:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 09:02:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521070.809352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnFK3-0004gj-4Q; Fri, 14 Apr 2023 09:02:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521070.809352; Fri, 14 Apr 2023 09:02:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnFK3-0004gc-13; Fri, 14 Apr 2023 09:02:23 +0000
Received: by outflank-mailman (input) for mailman id 521070;
 Fri, 14 Apr 2023 09:02:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZM4c=AF=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pnFK1-0004fJ-J3
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 09:02:21 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 107cb59b-daa3-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 11:02:21 +0200 (CEST)
Received: by mail-wr1-x433.google.com with SMTP id v27so7545821wra.13
 for <xen-devel@lists.xenproject.org>; Fri, 14 Apr 2023 02:02:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 107cb59b-daa3-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681462940; x=1684054940;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Ocke09pCVF1SRzVO7CZh1DeCa3giZ7bYqQybME6Jk/o=;
        b=d9Dkol2SQp6LyvvfSh6sz8SQIy/Y4jdNGzPpL0jrhGB6vdxmXMUeT1UZNeb3ECE1cY
         Tsoqr1/DQqEYrAyRhyrDYBIqri1XhKt8UJba7yenr5zOG2YXeCA/VuSfHEBWEddWuOA3
         qt3+ShuoUlZUjtxmF0eYqqy9ywf0vxDm3ZORxPeJckC6FGa0QJ+GY17b2pnQW8DthuVa
         zkL8V79vHmBfMJfruLg3d15zm47BA/WLxOE9VwkSMakiUnt6DWSbZTwTXzAZtnOAGMvp
         wq9dxDElmA/t+quRPtxrBd3ManFUGYqmNRVDaGGyth+KfYMd8LMX+QHMks4hoiAI8XnP
         wrxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681462940; x=1684054940;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Ocke09pCVF1SRzVO7CZh1DeCa3giZ7bYqQybME6Jk/o=;
        b=VGtOYtXCJp1f4IMKWJe5jBzWDesIEhjHB731YAb6Ebl7J1lCKoqwXNMRu+wU9o3nkz
         zfKenGWTGD1mtY+uRll1fckuAMvQPhAHlpCjFJuwxIthq6wjKL0A3NxXvwIj8RVh9XIl
         1QZspTV2cPKDaLiFFhBwD6hniiPnENDVtmiE+D/STzBYScYhEwvRYRgJKJZYI2Leyt9J
         UwPMNRntjhlSV0kygmpx39KHVFZbYNPzL6j9yQ0RMH405mx7HatXpn98rDcBN+WvxGGh
         CKRgahdYhpR9pwq05aFAH/oPnaxOkFk0DKAgWVFixOIxY8NWBIQ/xI8m+ZV1mPmISXMJ
         07IQ==
X-Gm-Message-State: AAQBX9cG8wpbhRgwrM+3eYcF4byhELOrpuVhT2uB2sMHQemfQpLRgaTA
	cR1Q45dnI+dBib9BtnKOP6emeCzHlzF/JX1hrAAkHFip1Q67FuOF
X-Google-Smtp-Source: AKy350a/YvgC3+sORWjYf1mdBFgj5RF2hg9YrRLfVC/y9qxUJm/ZJaYPS2L1UEt1PO5Ce8As5NBhCe8mSVaHCaHQgHY=
X-Received: by 2002:a05:6000:1e02:b0:2f4:2e72:5bf7 with SMTP id
 bj2-20020a0560001e0200b002f42e725bf7mr2815992wrb.0.1681462940494; Fri, 14 Apr
 2023 02:02:20 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-10-jens.wiklander@linaro.org> <AS8PR08MB7991020558FDF641D9E89C7192989@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB7991020558FDF641D9E89C7192989@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Fri, 14 Apr 2023 11:02:09 +0200
Message-ID: <CAHUa44H8tNz-18m=rR-V+Sn1tH+yNqpnVver0OZ11PuXedwjow@mail.gmail.com>
Subject: Re: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Henry,

On Thu, Apr 13, 2023 at 1:16 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > Subject: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
> >
> > Adds support for sending an FF-A direct request. Checks that the SP also
> > supports handling a 32-bit direct request. 64-bit direct requests are
> > not used by the mediator itself, so there is no need to check for that.
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > ---
> >  xen/arch/arm/tee/ffa.c | 112 +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 112 insertions(+)
> >
> > diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> > index f129879c5b81..f2cce955d981 100644
> > --- a/xen/arch/arm/tee/ffa.c
> > +++ b/xen/arch/arm/tee/ffa.c
> > @@ -181,6 +181,56 @@ static bool ffa_get_version(uint32_t *vers)
> >      return true;
> >  }
> >
> > +static int32_t get_ffa_ret_code(const struct arm_smccc_1_2_regs *resp)
> > +{
> > +    switch ( resp->a0 )
> > +    {
> > +    case FFA_ERROR:
> > +        if ( resp->a2 )
> > +            return resp->a2;
> > +        else
> > +            return FFA_RET_NOT_SUPPORTED;
> > +    case FFA_SUCCESS_32:
> > +    case FFA_SUCCESS_64:
> > +        return FFA_RET_OK;
> > +    default:
> > +        return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +}
> > +
> > +static int32_t ffa_simple_call(uint32_t fid, register_t a1, register_t a2,
> > +                               register_t a3, register_t a4)
> > +{
> > +    const struct arm_smccc_1_2_regs arg = {
> > +        .a0 = fid,
> > +        .a1 = a1,
> > +        .a2 = a2,
> > +        .a3 = a3,
> > +        .a4 = a4,
> > +    };
> > +    struct arm_smccc_1_2_regs resp;
> > +
> > +    arm_smccc_1_2_smc(&arg, &resp);
> > +
> > +    return get_ffa_ret_code(&resp);
> > +}
> > +
> > +static int32_t ffa_features(uint32_t id)
> > +{
> > +    return ffa_simple_call(FFA_FEATURES, id, 0, 0, 0);
> > +}
> > +
> > +static bool check_mandatory_feature(uint32_t id)
> > +{
> > +    int32_t ret = ffa_features(id);
> > +
> > +    if (ret)
>
> Coding style nit: You need spaces before and after "ret", i.e.
> if ( ret )
>
> With this fixed:
>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

I'll fix it.

Thanks,
Jens

>
> Kind regards,
> Henry


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 09:05:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 09:05:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521076.809362 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnFMr-0005LP-MB; Fri, 14 Apr 2023 09:05:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521076.809362; Fri, 14 Apr 2023 09:05:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnFMr-0005LI-JY; Fri, 14 Apr 2023 09:05:17 +0000
Received: by outflank-mailman (input) for mailman id 521076;
 Fri, 14 Apr 2023 09:05:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZM4c=AF=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pnFMp-0005LC-Mg
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 09:05:15 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 772846a3-daa3-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 11:05:13 +0200 (CEST)
Received: by mail-wr1-x429.google.com with SMTP id w24so7165558wra.10
 for <xen-devel@lists.xenproject.org>; Fri, 14 Apr 2023 02:05:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 772846a3-daa3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681463113; x=1684055113;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=d1NtzKB2dYRlHM4v+m7fmPEZevPOYyQOjqzKBr2cZNc=;
        b=MWjWLVNsR9nF6e/Jinh5Pe6+jUSYad4kMBt0Er533MN9PQ48cIp67Qty5vYXGIYZBR
         9KU8dG6d8NXtYhefvQxkAl5vcORqDk6+Tx60D/Ws//6u+UHJG1lDo/tpHrueT/2MmA1v
         CSorithRHr8LKOxwB/jSloLIK6uwbxdKRSoNZYCj3xxtP8eDjGIbEZQKpabiMyKNu4zy
         TLWXgwQv3Ohy6f495dWvAF+uVInDO6NXGaUY6DmzGivOynK2YFsmXR/Ks5ZEDtTxTfyK
         0tQ88jLk0mQc4OdI8jvuN/OY+um+7gUOOtxEXQ/YXWVx/4fSLbFDwo4DBJioWJ1kzFrl
         l6PA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681463113; x=1684055113;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=d1NtzKB2dYRlHM4v+m7fmPEZevPOYyQOjqzKBr2cZNc=;
        b=F2Gs6oYr8r5BL8VQcmbDQ3qph7XJ+XSdUJwPhvNbkDIg9Dx0Flbl57RnbpMxrgfuRS
         89/R2bQ0B3Ka8JRs9Oui8IR2qOlEKQIw/D4q7dbSX3Wj4OaRXthx5EiIZ2WSFbMpftBd
         JFqIXvPVLykwitjyirUWtFXAOsSYVWHAkV0lvt17Yn2A74hJTnZCXXcSW3jmX7Y5m9p2
         9rxoKa+7a2FoLr+VUPSDhDLNv57Ul7a05nBVFHiNlf2UsM+BklOId47shNIXvbwH1lkE
         3yuWo94ZifCKlOuRxOjVT/voDYuaneCB1pYV77vGn3t73fyA+T4iiA7i/rga6sZiyTJN
         SB/A==
X-Gm-Message-State: AAQBX9dIImUIVMr9k//2YyQ3pHtJQ9e9iwaidFOcuVA6i1xb2Pjk4fPB
	nFWCVhh3I9BLlNG3WErmB/0c1i9w+TQsv4McDHVnrd9Qh7/G1YPY
X-Google-Smtp-Source: AKy350YoO09EabljTMkZJPCnVZy+a6Up4B8ihxRxUzUZg5OQ94doL5DgUB4CMxbdHjlgyhbwDz45pLGCfvQGglWn1OA=
X-Received: by 2002:a5d:4441:0:b0:2c7:17b8:5759 with SMTP id
 x1-20020a5d4441000000b002c717b85759mr902302wrr.3.1681463112669; Fri, 14 Apr
 2023 02:05:12 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-10-jens.wiklander@linaro.org> <2359695e-f8f8-cf51-27f9-5f0c776feca5@xen.org>
 <916BB708-3028-4AAB-BD6A-BCABAFBD7C45@arm.com> <2dba6372-330d-a068-241f-59e19b837150@xen.org>
 <0B0212E8-BAC7-4557-B21B-B49EB14F1D09@arm.com>
In-Reply-To: <0B0212E8-BAC7-4557-B21B-B49EB14F1D09@arm.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Fri, 14 Apr 2023 11:05:01 +0200
Message-ID: <CAHUa44H4+83WqT8PTqWfUFv7bj1ZX5DHmTc=ZLTB47dBBzkD3Q@mail.gmail.com>
Subject: Re: [XEN PATCH v8 09/22] xen/arm: ffa: add direct request support
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>, 
	Marc Bonnici <Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Apr 13, 2023 at 3:44 PM Bertrand Marquis
<Bertrand.Marquis@arm.com> wrote:
>
> Hi,
>
> > On 13 Apr 2023, at 15:27, Julien Grall <julien@xen.org> wrote:
> >
> >
> >
> > On 13/04/2023 14:20, Bertrand Marquis wrote:
> >> Hi Julien,
> >>> On 13 Apr 2023, at 15:15, Julien Grall <julien@xen.org> wrote:
> >>>
> >>> Hi,
> >>>
> >>> On 13/04/2023 08:14, Jens Wiklander wrote:
> >>>> Adds support for sending an FF-A direct request. Checks that the SP also
> >>>> supports handling a 32-bit direct request. 64-bit direct requests are
> >>>> not used by the mediator itself, so there is no need to check for that.
> >>>> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> >>>> ---
> >>>>  xen/arch/arm/tee/ffa.c | 112 +++++++++++++++++++++++++++++++++++++++++
> >>>>  1 file changed, 112 insertions(+)
> >>>> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> >>>> index f129879c5b81..f2cce955d981 100644
> >>>> --- a/xen/arch/arm/tee/ffa.c
> >>>> +++ b/xen/arch/arm/tee/ffa.c
> >>>> @@ -181,6 +181,56 @@ static bool ffa_get_version(uint32_t *vers)
> >>>>      return true;
> >>>>  }
> >>>>  +static int32_t get_ffa_ret_code(const struct arm_smccc_1_2_regs *resp)
> >>>> +{
> >>>> +    switch ( resp->a0 )
> >>>> +    {
> >>>> +    case FFA_ERROR:
> >>>> +        if ( resp->a2 )
> >>>> +            return resp->a2;
> >>>> +        else
> >>>> +            return FFA_RET_NOT_SUPPORTED;
> >>>> +    case FFA_SUCCESS_32:
> >>>> +    case FFA_SUCCESS_64:
> >>>> +        return FFA_RET_OK;
> >>>> +    default:
> >>>> +        return FFA_RET_NOT_SUPPORTED;
> >>>> +    }
> >>>> +}
> >>>> +
> >>>> +static int32_t ffa_simple_call(uint32_t fid, register_t a1, register_t a2,
> >>>> +                               register_t a3, register_t a4)
> >>>> +{
> >>>> +    const struct arm_smccc_1_2_regs arg = {
> >>>> +        .a0 = fid,
> >>>> +        .a1 = a1,
> >>>> +        .a2 = a2,
> >>>> +        .a3 = a3,
> >>>> +        .a4 = a4,
> >>>> +    };
> >>>> +    struct arm_smccc_1_2_regs resp;
> >>>> +
> >>>> +    arm_smccc_1_2_smc(&arg, &resp);
> >>>> +
> >>>> +    return get_ffa_ret_code(&resp);
> >>>> +}
> >>>> +
> >>>> +static int32_t ffa_features(uint32_t id)
> >>>> +{
> >>>> +    return ffa_simple_call(FFA_FEATURES, id, 0, 0, 0);
> >>>> +}
> >>>> +
> >>>> +static bool check_mandatory_feature(uint32_t id)
> >>>> +{
> >>>> +    int32_t ret = ffa_features(id);
> >>>> +
> >>>> +    if (ret)
> >>>> +        printk(XENLOG_ERR "ffa: mandatory feature id %#x missing: error %d\n",
> >>>> +               id, ret);
> >>>> +
> >>>> +    return !ret;
> >>>> +}
> >>>> +
> >>>>  static uint16_t get_vm_id(const struct domain *d)
> >>>>  {
> >>>>      /* +1 since 0 is reserved for the hypervisor in FF-A */
> >>>> @@ -222,6 +272,57 @@ static void handle_version(struct cpu_user_regs *regs)
> >>>>      set_regs(regs, vers, 0, 0, 0, 0, 0, 0, 0);
> >>>>  }
> >>>>  +static void handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid)
> >>>> +{
> >>>> +    struct arm_smccc_1_2_regs arg = { .a0 = fid, };
> >>>> +    struct arm_smccc_1_2_regs resp = { };
> >>>> +    struct domain *d = current->domain;
> >>>> +    uint32_t src_dst;
> >>>> +    uint64_t mask;
> >>>> +
> >>>> +    if ( smccc_is_conv_64(fid) )
> >>>> +        mask = GENMASK_ULL(63, 0);
> >>>> +    else
> >>>> +        mask = GENMASK_ULL(31, 0);
> >>>> +
> >>>> +    src_dst = get_user_reg(regs, 1);
> >>>> +    if ( (src_dst >> 16) != get_vm_id(d) )
> >>>> +    {
> >>>> +        resp.a0 = FFA_ERROR;
> >>>> +        resp.a2 = FFA_RET_INVALID_PARAMETERS;
> >>>> +        goto out;
> >>>> +    }
> >>>> +
> >>>> +    arg.a1 = src_dst;
> >>>> +    arg.a2 = get_user_reg(regs, 2) & mask;
> >>>> +    arg.a3 = get_user_reg(regs, 3) & mask;
> >>>> +    arg.a4 = get_user_reg(regs, 4) & mask;
> >>>> +    arg.a5 = get_user_reg(regs, 5) & mask;
> >>>> +    arg.a6 = get_user_reg(regs, 6) & mask;
> >>>> +    arg.a7 = get_user_reg(regs, 7) & mask;
> >>>> +
> >>>> +    arm_smccc_1_2_smc(&arg, &resp);
> >>>> +    switch ( resp.a0 )
> >>>> +    {
> >>>> +    case FFA_ERROR:
> >>>> +    case FFA_SUCCESS_32:
> >>>> +    case FFA_SUCCESS_64:
> >>>> +    case FFA_MSG_SEND_DIRECT_RESP_32:
> >>>> +    case FFA_MSG_SEND_DIRECT_RESP_64:
> >>>> +        break;
> >>>> +    default:
> >>>> +        /* Bad fid, report back. */
> >>>> +        memset(&arg, 0, sizeof(arg));
> >>>> +        arg.a0 = FFA_ERROR;
> >>>> +        arg.a1 = src_dst;
> >>>> +        arg.a2 = FFA_RET_ABORTED;
> >>>> +    }
> >>>> +
> >>>> +out:
> >>>> +    set_regs(regs, resp.a0, resp.a1 & mask, resp.a2 & mask, resp.a3 & mask,
> >>>> +             resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, resp.a7 & mask);
> >>>> +}
> >>>> +
> >>>>  static bool ffa_handle_call(struct cpu_user_regs *regs)
> >>>>  {
> >>>>      uint32_t fid = get_user_reg(regs, 0);
> >>>> @@ -239,6 +340,10 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
> >>>>      case FFA_ID_GET:
> >>>>          set_regs_success(regs, get_vm_id(d), 0);
> >>>>          return true;
> >>>> +    case FFA_MSG_SEND_DIRECT_REQ_32:
> >>>> +    case FFA_MSG_SEND_DIRECT_REQ_64:
> >>>> +        handle_msg_send_direct_req(regs, fid);
> >>>> +        return true;
> >>>>        default:
> >>>>          gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
> >>>> @@ -326,6 +431,13 @@ static bool ffa_probe(void)
> >>>>      printk(XENLOG_INFO "ARM FF-A Firmware version %u.%u\n",
> >>>>             major_vers, minor_vers);
> >>>>  +    /*
> >>>> +     * TODO save result of checked features and use that information to
> >>>> +     * accept or reject requests from guests.
> >>>> +     */
> >>>
> >>> I am not entirely sure I understand this TODO. Does it mean a guest can currently use a request that is not supported by FFA?
> >> In fact this is a bit the opposite: in the following patch we check that all features we could need are supported, but if a guest is only using a subset we might not need to have all of them.
> >> The idea of this TODO would be to save the features supported and refuse guest requests depending on the features needed for them.
> >
> > Thanks. I would suggest the following comment:
> >
> > /*
> > * At the moment domains must support the same features used by Xen.
> > * TODO: Rework the code to allow domain to use a subset of the features
> > * supported.
> > */
> >
> > Note that I am using "domains" rather than "guests" because the latter doesn't include dom0.
>
> Makes sense and new comment is nice.

That's better, I'll update it.

Thanks,
Jens

>
> Up to Jens to say if he is ok with it.
>
> Cheers
> Bertrand
>
> >
> > Cheers,
> >
> > --
> > Julien Grall
>
>


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 09:25:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 09:25:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521081.809372 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnFg5-0007kU-AA; Fri, 14 Apr 2023 09:25:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521081.809372; Fri, 14 Apr 2023 09:25:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnFg5-0007kN-7Q; Fri, 14 Apr 2023 09:25:09 +0000
Received: by outflank-mailman (input) for mailman id 521081;
 Fri, 14 Apr 2023 09:25:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZM4c=AF=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pnFg4-0007kH-Mz
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 09:25:08 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3e490ea5-daa6-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 11:25:06 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 n19-20020a05600c501300b003f064936c3eso14546482wmr.0
 for <xen-devel@lists.xenproject.org>; Fri, 14 Apr 2023 02:25:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e490ea5-daa6-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681464306; x=1684056306;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Pe0+F4NozuXBYn50mLdDfoIjRN8vhOuduj2ngocD7Sw=;
        b=ypycx2m+3sLbqN7qTe/OMw3H1gPasVHi0uRo4EUo4dZPm6wOICpQx9x9rMYxTJ0aUd
         N/QLJh5NupB+k5AdSO7PLbPDMNtrYFvlX/3byQ1rmeKf1xTQH0NDeBj4Hj1AwTbLJK2+
         v3k2gH2oGsfTM6lusSjyQXt7f5gnvHCQN5dQqooTMfiqdRYhgiNOXzoIzt42JJRCaEZr
         clh5Kg+Y+cEhS782BNamgTYYq4nV4xgLpSykuMS1M405NGPFo/aiMrAHo0zT5Zn0h/M/
         dXz0BlOo1XmQc0kGdS51ZR0Oa/7uwJL0YugulgMLC/nmC8Wp9UmWVs+IASvOnOiN2g/O
         xApg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681464306; x=1684056306;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Pe0+F4NozuXBYn50mLdDfoIjRN8vhOuduj2ngocD7Sw=;
        b=gyTEoXRzCaa2O4qsCJQsmMrMHYgTcmauGm3FerrMDd1r3w2z6lMUgGGUBTZb914Zyi
         kYF8VRwV0+MRZvAp9Uj1rEE4/80S7WMZgEUGHN5hOqFVF4Z4mqh3UHpw1JaG0Yt2MYlQ
         ba9JN5b5t+4cbS0Q/7/IWz5TwECu0gcXbvEm3XMJQ3IrJthiJ7PWF2APRK4sED+Sqvm3
         hj54azjzAiS+BR6xsR7+4G/dJ7NFGXVksFxbEYbSeDn4RePaTEaFfndOFrNEBLS6CJkg
         uX+cnulKCifJQhCt0XyAvfpc3pK3xSvnkjExsbEPsQbfRqlqT8nTL06occxqWneRYaim
         PENQ==
X-Gm-Message-State: AAQBX9cSu57o4Ui0A7PkIjbFfR4X7FNWr2PA5KgcxubiWiXxWMCaIhNZ
	rPxB4rfuxawall60Yzy6tVavJlw74ubW2zgPPsDCPQ==
X-Google-Smtp-Source: AKy350aPZQFYIgxccfqLSGBHoPPFt6UbuzdwW6MFbQz340GnRRYB4WmiwIWjrlkRdji++vMuWKc3YQQuV4KkfzTfKnM=
X-Received: by 2002:a1c:f302:0:b0:3df:97de:8bab with SMTP id
 q2-20020a1cf302000000b003df97de8babmr1215506wmq.4.1681464305829; Fri, 14 Apr
 2023 02:25:05 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-6-jens.wiklander@linaro.org> <AS8PR08MB79913E8D281DB674FDC0D70192989@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <CAHUa44FqgH3QRiTR=8v4WH+6XYbzwHYn8=Ht_KRC--jLWz9cog@mail.gmail.com> <AS8PR08MB799116F19A91F29F889D57A892989@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB799116F19A91F29F889D57A892989@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Fri, 14 Apr 2023 11:24:55 +0200
Message-ID: <CAHUa44HHSY7E6D4puMVMgnUb=5_tf8uOEdpQt1OtZeTogL-NYQ@mail.gmail.com>
Subject: Re: [XEN PATCH v8 05/22] xen/arm: ffa: add flags for FFA_PARTITION_INFO_GET
To: Henry Wang <Henry.Wang@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Henry,

On Thu, Apr 13, 2023 at 3:53 PM Henry Wang <Henry.Wang@arm.com> wrote:
>
> Hi Jens,
>
> > -----Original Message-----
> > From: Jens Wiklander <jens.wiklander@linaro.org>
> > Subject: Re: [XEN PATCH v8 05/22] xen/arm: ffa: add flags for
> > FFA_PARTITION_INFO_GET
> > > > +#define FFA_PART_PROP_DIRECT_REQ_RECV   BIT(0, U)
> > > > +#define FFA_PART_PROP_DIRECT_REQ_SEND   BIT(1, U)
> > > > +#define FFA_PART_PROP_INDIRECT_MSGS     BIT(2, U)
> > > > +#define FFA_PART_PROP_RECV_NOTIF        BIT(3, U)
> > > > +#define FFA_PART_PROP_IS_MASK           (3U << 4)
> > >
> > > I am a bit confused here, here (3U<<4) is "IS_MASK" but...
> > >
> > > > +#define FFA_PART_PROP_IS_PE_ID          (0U << 4)
> > > > +#define FFA_PART_PROP_IS_SEPID_INDEP    (1U << 4)
> > > > +#define FFA_PART_PROP_IS_SEPID_DEP      (2U << 4)
> > > > +#define FFA_PART_PROP_IS_AUX_ID         (3U << 4)
> > >
> > > ...here the same value is used for "IS_AUX_ID". According to
> > > the spec that I referred to, bit[5:4] has the following encoding:
> > > b'11: Partition ID is an auxiliary ID. Hence I guess the above
> > > "IS_MASK" should be removed?
> >
> > FFA_PART_PROP_IS_MASK is supposed to be used when extracting the bits
> > to compare with one of the other  FFA_PART_PROP_IS_* defines. For
> > example:
> > if ((props & FFA_PART_PROP_IS_MASK) == FFA_PART_PROP_IS_PE_ID)
>
> Ohh, I now understand, the naming does not mean it "is a mask" but actually
> means "this is a mask for FFA_PART_PROP_IS_". That makes a lot of sense.
>
> To avoid this kind of ambiguity, do you think changing the name to something
> like "FFA_PART_PROP_IS_TYPE_MASK" makes sense here? Note that this
> is just my suggestion, you can decide to change or not, I am asking just
> because I downloaded the whole series and found that currently
> FFA_PART_PROP_IS_MASK is not used anywhere, so before it is used everywhere
> in the code, it might be good to use a more clear name.
>
> >
> > using
> > if ((props & FFA_PART_PROP_IS_AUX_ID) == FFA_PART_PROP_IS_PE_ID)
> >
> > doesn't seem right.
>
> Indeed. Please see my above reply.
>
> Personally after the above clarification, I am good with the patch, so:
>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>

I'll update it as you suggest.

Thanks,
Jens

>
> Kind regards,
> Henry
>
> >
> > >
> > > I confirm the values of other fields are consistent with the spec.
> >
> > Thanks,
> > Jens
> >
> > >
> > > Kind regards,
> > > Henry


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 10:12:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 10:12:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521088.809383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnGPh-0004bf-SY; Fri, 14 Apr 2023 10:12:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521088.809383; Fri, 14 Apr 2023 10:12:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnGPh-0004bY-PY; Fri, 14 Apr 2023 10:12:17 +0000
Received: by outflank-mailman (input) for mailman id 521088;
 Fri, 14 Apr 2023 10:12:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ClDU=AF=citrix.com=prvs=4618bebc7=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pnGPg-0004bS-1I
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 10:12:16 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d17f5c1b-daac-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 12:12:13 +0200 (CEST)
Received: from mail-dm6nam04lp2043.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 14 Apr 2023 06:11:16 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM4PR03MB6016.namprd03.prod.outlook.com (2603:10b6:5:38b::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 10:11:14 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee%3]) with mapi id 15.20.6277.038; Fri, 14 Apr 2023
 10:11:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d17f5c1b-daac-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681467133;
  h=date:from:to:cc:subject:message-id:mime-version;
  bh=PFzL6PE3tO1+kG4mMgi3nA4szToEfZC1w+ZwpnUL/M8=;
  b=M+GyGvr4b/lahC4qGVh7+NhPmfNxsdCdS4X3HMw5rlnOK1Lkw4NdO6xu
   UGOVqRkunntDd8bSSBRUOd5X1fe+k4XstXqKuHkpQ7OUgP/ScpvWqOatS
   8Npz0jX8oKvan/5qCx8W7M9T6m2iEgTDvhpzER1f0wJ3LP6qOU2l7yt4P
   Q=;
X-IronPort-RemoteIP: 104.47.73.43
X-IronPort-MID: 105910295
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:G8vb4aMGxV6OetjvrR2klsFynXyQoLVcMsEvi/4bfWQNrUp01zcBz
 TYdXWjXMviKa2uheNkgOoi08x9Su8fQnNZnHQto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tE5wBmPJingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0vZRHiZWq
 vE1FD4ITxDTpOK/3LPja+Y506zPLOGzVG8ekldJ6GmFSNoDH9XESaiM4sJE1jAtgMwIBezZe
 8cSdTtoalLHfgFLPVAUTpk5mY9EhFGmK2Ee9A3T+vFxvzO7IA9ZidABNPLPfdOHX4NNl1uwr
 WPa5WXpRBodMbRzzBLcqi/937eVzH6TtIQ6T5OWytFI0H6v1mEvFTYqdVW+sN6DsxvrMz5YA
 wlOksY0loAi+UruQtTjUhmQpH+fogVaS9dWC/c96gyG1uzT+QnxO4QfZjtIadhjuMpoQzUvj
 gONh4mxWWcpt6CJQ3WA8LvStSm1JSUeMW4FY2kDUBcB5N7g5oo0i3ojU+peLUJ8tfWtcRmY/
 txAhHlu71nPpabnD5mGwG0=
IronPort-HdrOrdr: A9a23:cMdDRai5a5e7VIBUSy6ubG1Bv3BQX2V13DAbv31ZSRFFG/FwyP
 rPoB1L737JYWgqNk3IwerwQpVoMkmsiKKdgLNhd4tKMzOWwFdAQLsSiLcKzgeKJ8SczJ8R6U
 4DSdkENDSYNzET56uXj3jaYrQdKbG8geeVbIzlvhBQpHRRGthdBnBCe2Cm+yNNNW17LKt8Ob
 +kovBMrz2mdXl/VLXjOlA1G8z44/HbnpPvZhALQzQ97hOVsD+u4LnmVzCFwxY3SVp0sPsf2F
 mAtza8yrSosvm9xBOZ/XTU9Y5qlNzozcYGLNCQi/ISNi7nhm+TFctcsvy5zX4ISdOUmRYXee
 r30lQd1gNImjHsl1SO0FrQMs/boXMTAjHZuBulaDDY0LDErXoBerV8bMRiA13kAgMbza9B+b
 MO0GSDu5VNCxTc2Cz7+tjTThlv0lG5uHw4jIco/jViuKYlGchsRLYkjTVoOYZFGDi/5JEsEe
 FoAs2Z7PFKcUmCZ3ScumV02tSjUnk6Ax/DGyE5y4eo+ikTmGo8w1oTxcQZkHtF/JUhS4Nc7+
 CBNqhzjrlBQsIfcKo4DuYcRsm8DHDLXHv3QSqvCEWiELtCN2PGqpbx7rlw7Oa2eIYQxJ93g5
 jFWEMwjx9HR6qjYvf+rqGjMiq9NVlVcQ6duf22vaIJy4EUbICbQRG+dA==
X-Talos-CUID: =?us-ascii?q?9a23=3AWoxKjmhg+iAhTCpKrGp/B7TGOTJuYlvCi2+PLmK?=
 =?us-ascii?q?EB2NISO22SVaM+ftFnJ87?=
X-Talos-MUID: 9a23:RrXPbgvXWFIIAH+yNM2n3x9MCOhKs4eSGE0xjL46m8eBJAtKAmLI
X-IronPort-AV: E=Sophos;i="5.99,195,1677560400"; 
   d="scan'208";a="105910295"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CYBc2yf3qLsbeRFSjIXzZL5drvCdX5zf5eCMoBx/ZmcVFwi/ZwR2sWU7UzzgOdPRzYWanJQDU98FUMEtYpPd2zW5eTUd1ruYGmRVDxc88CxVhhy+/0PEkOnJIpf+I2v03E8KfubsLfJ7FMPgpdTpF/TYIcz3N6Zuorlll6fDbJD8NvOfW0EZ7KsAXuVFXKwvjQ72oSORY7YRSB6mZ7Bks4/8I9ToRRy0u+YdBeln2exOdxVqjrdmTvbBBGzdScdN3PZVRNv+kkXLmnkmRrDNQsIT4puZWQ/PqNuSAWJzJJdhU+r1ybCAVNnV3zJu/hQ3QvkOeJc4PNOLEOabjL++1A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PFzL6PE3tO1+kG4mMgi3nA4szToEfZC1w+ZwpnUL/M8=;
 b=Rvpv1aWSs4louvQAmCXKE6gAZfauPHvo7r9dW/kYjjmWWI3ep4m9ZfAFxuaSH/qSgEcq2/3ma5Cvi61FxodSHgbmbzRfWpJL+dogGCSKMX2ZPxdYNZlyo+zg2Ead8wGt4fUemGmvEoFib/aXsrjEG2SdPcos6DWwjQAhIuYoU1vP4AiDV4j0/DUrvffVNmxBqL7gbrWRwsSwxLem7cOztGnjyibU4soCHRpYcELH6eXrZTjNoTW4ntOfzsCGYQpoj2G+CkPmaon3mzCRlFk8MW/cblaw5CKAkW2Kq8gY7DHaLUsOqv/BCuUpCzfDuhWU8mLlxqqwkKM7SMqO/+NEew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PFzL6PE3tO1+kG4mMgi3nA4szToEfZC1w+ZwpnUL/M8=;
 b=gVnzKdamybghucgIsk/kzjMJ3gtl5m5XVubf+lW1Bb2wdVWj6+2Z3KgTYxA2EVES50qxxODEd72sjrL0DWTORgVQZqdDz72uBzvv3Uupluq4HDYNHVQEIEgCaQqTBa2wtGlcizqEChYdaIU+01IqufpsSSZOiMtNGPl4gPZjygc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Fri, 14 Apr 2023 12:11:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Anthony PERARD <anthony.perard@citrix.com>
Subject: HEADS UP: re-adding the armhf boxes to osstest
Message-ID: <ZDkmu0mgy23ypaL7@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
X-ClientProxiedBy: LO4P265CA0078.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2bd::7) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DM4PR03MB6016:EE_
X-MS-Office365-Filtering-Correlation-Id: cef0a9ef-0637-41f6-3ac2-08db3cd093a2
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cef0a9ef-0637-41f6-3ac2-08db3cd093a2
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 10:11:13.2042
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: n1yFHiG6UAsedNJmCX8IBSYQJi4nS6cee7EPaY8oNJxz/FCPYsKkmkjrQALXk8gjBYMigCu3B9cVbYw2CQ2AhA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6016

Hello,

We finally had the broken PDU replaced in the osstest colo, and the
armhf boxes are operational again (those are the arndales and the
cubietrucks).

I've run some ad-hoc tests on them and they look fine. I plan to bless
them before the end of the day.

As usual, keep an eye on any failures that could be caused by the
newly added boxes.

Regards, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 10:32:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 10:32:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521093.809393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnGiw-00078w-L7; Fri, 14 Apr 2023 10:32:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521093.809393; Fri, 14 Apr 2023 10:32:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnGiw-00078p-IE; Fri, 14 Apr 2023 10:32:10 +0000
Received: by outflank-mailman (input) for mailman id 521093;
 Fri, 14 Apr 2023 10:32:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ClDU=AF=citrix.com=prvs=4618bebc7=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pnGiu-00078j-QM
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 10:32:08 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9947ba08-daaf-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 12:32:05 +0200 (CEST)
Received: from mail-bn7nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 14 Apr 2023 06:32:03 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM6PR03MB5018.namprd03.prod.outlook.com (2603:10b6:5:1ea::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 10:31:59 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee%3]) with mapi id 15.20.6277.038; Fri, 14 Apr 2023
 10:31:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9947ba08-daaf-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681468325;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=MGhRUNn4CpD1xmpI3DHftaQwUI7WuO46fEFASjxcWZk=;
  b=AzgWtX3u1mrMIv/nFRwOiHvWjEABMxTBi809BFhg8h5+am9u5BEjwLjR
   huie9gjtJyHeqSKfHS4YFjhtzqieoyh0ZXf3bEIadYTXXQU6jkhYsfIWJ
   3hpxycoLF8JApU1TQQ0AamUrvz1N8pN2Kl7WWeKW0MxjHzWxRx2F9CYyB
   8=;
X-IronPort-RemoteIP: 104.47.70.108
X-IronPort-MID: 105912536
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,195,1677560400"; 
   d="scan'208";a="105912536"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=98vGd4J/+VAnQ78DBVa6BYMBArs4WMIxEMx+E+g17Cw=;
 b=e/wT+9qWF4A6IkLh+1uGqn7K1XVnuji4EfWTf8MPteU5LnEI96c6PBgBasG3uPNuvmwSBbIIBzAs35oPwi7X0Q8k8mBMxXkwkjhGPjCfpnc2Fe8ChuizoF5gWKMbH3pmGSwWRTk2MB8DwnaaF+ieT5HPvDLvBtg8TWuJF9KGbIY=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Fri, 14 Apr 2023 12:31:53 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] x86/hvm: Disallow CR0.PG 1->0 transitions when CS.L=1
Message-ID: <ZDkrmbn84X26X0qt@Air-de-Roger>
References: <20230412183519.2996902-1-andrew.cooper3@citrix.com>
 <20230413150009.3145462-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230413150009.3145462-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: LO4P265CA0252.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:350::18) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DM6PR03MB5018:EE_
X-MS-Office365-Filtering-Correlation-Id: bacdc4b9-6734-4b26-278d-08db3cd37a6f
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bacdc4b9-6734-4b26-278d-08db3cd37a6f
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 10:31:59.5574
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: B1tDuMUN5yoiphQhHDAYGoL5E+rhAgcpiPN39iJx6uJ4rtLB1+tWMF04bL09JhOzW0uQxhOEfarf+cyJ4s1CJg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5018

On Thu, Apr 13, 2023 at 04:00:09PM +0100, Andrew Cooper wrote:
> The Long Mode consistency checks exist to "ensure that the processor does not
> enter an undefined mode or state that results in unpredictable behavior".  APM
> Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
> preventing the OS from trying to exit Long mode while in 64bit mode.  This
> could leave the CPU in Protected Mode with an %rip above the 4G boundary.
> 
> Experimentally, AMD CPUs really do permit this state transition.  An OS which
> tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
> to be going on behind the scenes ought to result in sane continued execution.
> 
> Furthermore, right from the very outset, the APM Vol2 14.7 "Leaving Long Mode"
> section instructs people to switch to a compatibility-mode segment first
> before clearing CR0.PG, which does clear out the upper bits in %rip.  This is
> further backed up by Vol2 Figure 1-6 "Operating Modes of the AMD64
> Architecture".
> 
> Either way, this appears to have been a genuine oversight in the AMD64 spec.
> 
> Intel, on the other hand, rejects this state transition with #GP.
> 
> Between revision 71 (Nov 2019) and 72 (May 2020) of SDM Vol3, a footnote to
> 4.1.2 "Paging-Mode Enable" was altered from:
> 
>   If CR4.PCIDE= 1, an attempt to clear CR0.PG causes a general-protection
>   exception (#GP); software should clear CR4.PCIDE before attempting to
>   disable paging.
> 
> to
> 
>   If the logical processor is in 64-bit mode or if CR4.PCIDE= 1, an attempt to
>   clear CR0.PG causes a general-protection exception (#GP). Software should
>   transition to compatibility mode and clear CR4.PCIDE before attempting to
>   disable paging.
> 
> which acknowledges this corner case, but there doesn't appear to be any other
> discussion even in the relevant Long Mode sections.
> 
> So it appears that Intel spotted and addressed the corner case in IA-32e mode,
> but were 15 years late to document it.
> 
> Xen was written to the AMD spec, and misses the check.  Follow the Intel
> behaviour, because it is more sensible and avoids hitting a VMEntry failure.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> v2:
>  * Restrict to when Long Mode is enabled.

Maybe the subject also needs to be slightly edited to mention CS.L=1
and Long Mode enabled?  Or just mention long mode instead of the code
segment status?

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 10:38:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 10:38:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521098.809403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnGoS-0007ly-AJ; Fri, 14 Apr 2023 10:37:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521098.809403; Fri, 14 Apr 2023 10:37:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnGoS-0007lr-7C; Fri, 14 Apr 2023 10:37:52 +0000
Received: by outflank-mailman (input) for mailman id 521098;
 Fri, 14 Apr 2023 10:37:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rR8y=AF=citrix.com=prvs=4614ad092=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pnGoQ-0007ll-EI
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 10:37:50 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 655b5324-dab0-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 12:37:48 +0200 (CEST)
Received: from mail-mw2nam12lp2045.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 14 Apr 2023 06:37:40 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5951.namprd03.prod.outlook.com (2603:10b6:a03:2de::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 10:37:36 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6277.043; Fri, 14 Apr 2023
 10:37:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 655b5324-dab0-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681468668;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=rfFV0GyBle3d5FzWYjOjeStUQK3dx7Ex94LOto3jdzo=;
  b=cRKHI4VABc1xhVwVrY8qmjeSeuO5CnvCApHXdzgqQ6NHHiKFAIMq+T5f
   x4kcgSb/iK+ZMaD4tbk42MO+Ay1DgthfoCriGhP7VsiMOlFHqViX8i7LO
   bTNRGpDdcJPSB4Gvwb+zf9r+Tl0DaiQSjX+E0aiclp78FFWct+gAuIkKf
   M=;
X-IronPort-RemoteIP: 104.47.66.45
X-IronPort-MID: 104290161
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,195,1677560400"; 
   d="scan'208";a="104290161"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GYyf/Bzhwwa+5YfSF6UGO9nYAQLVjLy3kf5xyX+9pYo=;
 b=dHlrberag5bPoE/FAFLx3rvJK5SBM8Blg1a5bJMSIlqmUlXRaStsZhiSRlGP1aIBn3zxqDtN8sfuGEEWQ1mmjfzc4rMJ+Ze6ugvciF1yy3CZe5BuJX4Bi58SeQCVOAf5ZBWWqQG8H35doR0l4fSe3Gi1tntE25mk9J6N3S/E4Sc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <c0fff5af-6a11-34f9-9e0e-01c2ba586591@citrix.com>
Date: Fri, 14 Apr 2023 11:37:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] x86/hvm: Disallow CR0.PG 1->0 transitions when CS.L=1
Content-Language: en-GB
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20230412183519.2996902-1-andrew.cooper3@citrix.com>
 <20230413150009.3145462-1-andrew.cooper3@citrix.com>
 <ZDkrmbn84X26X0qt@Air-de-Roger>
In-Reply-To: <ZDkrmbn84X26X0qt@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0540.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:319::6) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB5951:EE_
X-MS-Office365-Filtering-Correlation-Id: f3834d00-6fd6-4f91-94ac-08db3cd442da
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f3834d00-6fd6-4f91-94ac-08db3cd442da
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 10:37:36.0278
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 67yGCgatj3tdcP4PdSEGI417aBtyLs3V4NFaTQQC2J+4eGG4CpFmDEQ3LAoPozSZZjKEGkaHHi9G6pk9rOyGmOfqw/Xv/LihMqS8dAwNeng=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5951

On 14/04/2023 11:31 am, Roger Pau Monné wrote:
> On Thu, Apr 13, 2023 at 04:00:09PM +0100, Andrew Cooper wrote:
>> The Long Mode consistency checks exist to "ensure that the processor does not
>> enter an undefined mode or state that results in unpredictable behavior".  APM
>> Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
>> preventing the OS from trying to exit Long mode while in 64bit mode.  This
>> could leave the CPU in Protected Mode with an %rip above the 4G boundary.
>>
>> Experimentally, AMD CPUs really do permit this state transition.  An OS which
>> tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
>> to be going on behind the scenes ought to result in sane continued execution.
>>
>> Furthermore, right from the very outset, the APM Vol2 14.7 "Leaving Long Mode"
>> section instructs people to switch to a compatibility mode segment first
>> before clearing CR0.PG, which does clear out the upper bits in %rip.  This is
>> further backed up by Vol2 Figure 1-6 "Operating Modes of the AMD64
>> Architecture".
>>
>> Either way, this appears to have been a genuine oversight in the AMD64 spec.
>>
>> Intel, on the other hand, rejects this state transition with #GP.
>>
>> Between revision 71 (Nov 2019) and 72 (May 2020) of SDM Vol3, a footnote to
>> 4.1.2 "Paging-Mode Enable" was altered from:
>>
>>   If CR4.PCIDE= 1, an attempt to clear CR0.PG causes a general-protection
>>   exception (#GP); software should clear CR4.PCIDE before attempting to
>>   disable paging.
>>
>> to
>>
>>   If the logical processor is in 64-bit mode or if CR4.PCIDE= 1, an attempt to
>>   clear CR0.PG causes a general-protection exception (#GP). Software should
>>   transition to compatibility mode and clear CR4.PCIDE before attempting to
>>   disable paging.
>>
>> which acknowledges this corner case, but there doesn't appear to be any other
>> discussion even in the relevant Long Mode sections.
>>
>> So it appears that Intel spotted and addressed the corner case in IA-32e mode,
>> but were 15 years late to document it.
>>
>> Xen was written to the AMD spec, and misses the check.  Follow the Intel
>> behaviour, because it is more sensible and avoids hitting a VMEntry failure.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> v2:
>>  * Restrict to when Long Mode is enabled.
> Maybe the subject also needs to be slightly edited to mention CS.L=1
> and Long Mode enabled?  Or just mention long mode instead of the code
> segment status?
>
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.  I think "Disallow CR0.PG 1->0 transitions in 64bit mode" is the
most concise way of tweaking the subject.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 11:08:16 2023
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Date: Fri, 14 Apr 2023 11:07:18 +0000
Message-ID: <D32A74F6-8BBB-4965-A720-B3133ECC77BA@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-3-luca.fancellu@arm.com>
 <72f38b2b-a391-fb7c-f8c0-cf3561470875@xen.org>
 <B3A82639-6D61-4DA2-B918-A92A421C75D3@arm.com>
 <e8075849-8bd5-7fd4-efaa-81e48c867635@xen.org>
 <4F5DC5EC-F538-42CE-A93F-2B5E3FAC13BB@arm.com>
 <03cc0c98-c5ef-16f1-ed24-6a39320b08e5@xen.org>
In-Reply-To: <03cc0c98-c5ef-16f1-ed24-6a39320b08e5@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

> On 13 Apr 2023, at 20:52, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 13/04/2023 15:05, Luca Fancellu wrote:
>>> On 13 Apr 2023, at 14:30, Julien Grall <julien@xen.org> wrote:
>>> 
>>> 
>>> 
>>> On 13/04/2023 14:24, Luca Fancellu wrote:
>>>> Hi Julien,
>>> 
>>> Hi Luca,
>>> 
>>>>>>  @@ -594,6 +597,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>>>      unsigned int max_vcpus;
>>>>>>      unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>>>>>      unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>>>>>> +    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>>>>>>        if ( (config->flags & ~flags_optional) != flags_required )
>>>>>>      {
>>>>>> @@ -602,6 +606,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>>>          return -EINVAL;
>>>>>>      }
>>>>>>  +    /* Check feature flags */
>>>>>> +    if ( sve_vl_bits > 0 )
>>>>>> +    {
>>>>>> +        unsigned int zcr_max_bits = get_sys_vl_len();
>>>>>> +
>>>>>> +        if ( !zcr_max_bits )
>>>>>> +        {
>>>>>> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>>>>>> +            return -EINVAL;
>>>>>> +        }
>>>>>> +
>>>>>> +        if ( sve_vl_bits > zcr_max_bits )
>>>>>> +        {
>>>>>> +            dprintk(XENLOG_INFO,
>>>>>> +                    "Requested SVE vector length (%u) > supported length (%u)\n",
>>>>>> +                    sve_vl_bits, zcr_max_bits);
>>>>>> +            return -EINVAL;
>>>>>> +        }
>>>>> 
>>>>> Is SVE supported for 32-bit guest? If not, then you should had a check here to prevent the creation of the domain if sve_vl_bits is set.
>>>> No SVE is not supported for 32 bit guests, here I think we will get “SVE is unsupported on this machine” because get_sys_vl_len() will return 0.
>>> 
>>> From my understanding, get_sys_vl_len() will return the len supported by the hosts. So if you run a 32-bit guest on top of a 64-bit hosts, then I believe get_sys_vl_len() will be non-zero.
>> Yes you are right, I realise that I need the domain type information and I can’t have it in arch_sanitise_domain_config, however they might have sense there, and I can do a check
>> like this afterwards:
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index c1f0d1d78431..ce1235c25769 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -3694,6 +3694,12 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
>>          return -EINVAL;
>>      }
>>  +    if ( d->arch.sve_vl && (kinfo->type == DOMAIN_32BIT) )
>> +    {
>> +        printk("SVE is not available for 32-bit domain\n");
>> +        return -EINVAL;
>> +    }
>> +
>>      if ( is_64bit_domain(d) )
>>          vcpu_switch_to_aarch64_mode(v);
>> Would it be ok for you?
> 
> construct_domain() is only going to be used for domains created by Xen. You would need the same check for the ones created by the toolstack.
> 
> Do you need to know the SVE length when the domain is created? If not, then I would suggest to create a new domctl that would be called after we switch the domain to 32-bit.

Hi Julien,

Yes that’s true, we would like to prevent who is using hyper calls to have guests with SVE but 32 bits, I think that with this check it’s possible to avoid them:

diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
index 0de89b42c448..b7189e8dbbb5 100644
--- a/xen/arch/arm/arm64/domctl.c
+++ b/xen/arch/arm/arm64/domctl.c
@@ -43,6 +43,9 @@ long subarch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         case 32:
             if ( !cpu_has_el1_32 )
                 return -EINVAL;
+            /* SVE is not supported for 32 bit domain */
+            if ( is_sve_domain(d) )
+                return -EOPNOTSUPP;
             return switch_mode(d, DOMAIN_32BIT);
         case 64:
            return switch_mode(d, DOMAIN_64BIT);

It’s a bit late in the guest creation, but we don’t have the domain type information before, this check together with the check above in construct_domain would
be enough.

What do you think?

> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 11:52:19 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180253-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180253: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Versions-This:
    linux=44149752e9987a9eac5ad78e6d3a20934b5e018d
X-Osstest-Versions-That:
    linux=de4664485abbc0529b1eec44d0061bbfe58a28fb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Apr 2023 11:51:59 +0000

flight 180253 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180253/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180230
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180230
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180230
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180230
 test-amd64-amd64-xl-vhd      21 guest-start/debian.repeat    fail  like 180230
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180230
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                44149752e9987a9eac5ad78e6d3a20934b5e018d
baseline version:
 linux                de4664485abbc0529b1eec44d0061bbfe58a28fb

Last test of basis   180230  2023-04-13 07:36:51 Z    1 days
Testing same since   180253  2023-04-14 01:13:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Conole <aconole@redhat.com>
  Ahmed Zaki <ahmed.zaki@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Alexei Starovoitov <ast@kernel.org>
  Asahi Lina <lina@asahilina.net>
  Christoph Paasch <cpaasch@apple.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Claudia Draghicescu <claudia.rosu@nxp.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Vetter <daniel.vetter@intel.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  David S. Miller <davem@davemloft.net>
  Denis Plotnikov <den-plotnikov@yandex-team.ru>
  Douglas Anderson <dianders@chromium.org>
  Eric Dumazet <edumazet@google.com>
  Evan Quan <evan.quan@amd.com>
  Felix Huettner <felix.huettner@mail.schwarz>
  Florent Revest <revest@chromium.org>
  Geert Uytterhoeven <geert+renesas@glider.be>
  George Guo <guodongtai@kylinos.cn>
  Guillaume Nault <gnault@redhat.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Hayes Wang <hayeswang@realtek.com>
  Helge Deller <deller@gmx.de>
  Horatio Zhang <Hongkun.Zhang@amd.com>
  Ivan Bornyakov <i.bornyakov@metrotek.ru>
  Jakub Kicinski <kuba@kernel.org>
  Jani Nikula <jani.nikula@intel.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jonathan Toppins <jtoppins@redhat.com>
  Josh Don <joshdon@google.com>
  Karol Herbst <kherbst@redhat.com>
  Kornel Dulęba <korneld@chromium.org>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Lars-Peter Clausen <lars@metafoo.de>
  Liang Chen <liangchen.linux@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Peibao <liupeibao@loongson.cn>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Luben Tuikov <luben.tuikov@amd.com>
  Luca Czesla <luca.czesla@mail.schwarz>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Mark Brown <broonie@kernel.org>
  Martin KaFai Lau <martin.lau@kernel.org>
  Martin Willi <martin@strongswan.org>
  Matthieu Baerts <matthieu.baerts@tessares.net>
  Min Li <lm0963hack@gmail.com>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Paolo Abeni <pabeni@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Radu Pirea (OSS) <radu-nicolae.pirea@oss.nxp.com>
  Rafal Romanowski <rafal.romanowski@intel.com>
  Rob Herring <robh@kernel.org>
  Roman Gushchin <roman.gushchin@linux.dev>
  Saravana Kannan <saravanak@google.com>
  Sasha Finkelstein <fnkl.kernel@gmail.com>
  Shawn Guo <shawnguo@kernel.org>
  Stanislav Fomichev <sdf@google.com>
  Stefan Wahren <stefan.wahren@i2se.com>
  Stephen Boyd <sboyd@kernel.org>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Zimmermann <tzimmermann@suse.de>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Waiman Long <longman@redhat.com>
  WANG Xuerui <git@xen0n.name>
  Wayne Lin <Wayne.Lin@amd.com>
  Wolfram Sang <wsa@kernel.org> # for I2C
  Xin Long <lucien.xin@gmail.com>
  Xingyuan Mo <hdthky0@gmail.com>
  Xu Kuohai <xukuohai@huawei.com>
  YueHaibing <yuehaibing@huawei.com>
  Zheng Wang <zyytlz.wz@163.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   de4664485abb..44149752e998  44149752e9987a9eac5ad78e6d3a20934b5e018d -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 12:18:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 12:18:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521118.809439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnIO7-0002Ql-8n; Fri, 14 Apr 2023 12:18:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521118.809439; Fri, 14 Apr 2023 12:18:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnIO7-0002Qe-60; Fri, 14 Apr 2023 12:18:47 +0000
Received: by outflank-mailman (input) for mailman id 521118;
 Fri, 14 Apr 2023 12:18:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZM4c=AF=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pnIO5-0002QU-Uv
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 12:18:45 +0000
Received: from mail-wm1-x330.google.com (mail-wm1-x330.google.com
 [2a00:1450:4864:20::330])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7fdad166-dabe-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 14:18:44 +0200 (CEST)
Received: by mail-wm1-x330.google.com with SMTP id
 n42-20020a05600c3baa00b003f0b12814aaso916025wms.0
 for <xen-devel@lists.xenproject.org>; Fri, 14 Apr 2023 05:18:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-12-jens.wiklander@linaro.org> <d354fee8-4d02-fe5f-1ff1-15f96efeb13f@xen.org>
 <CAHUa44FM5yQ+e=ruPhTxFttGTE1HQvruX-7XAiqVnW4b-mQgcw@mail.gmail.com> <9732C6EB-3346-408D-B267-9CCCE081B661@arm.com>
In-Reply-To: <9732C6EB-3346-408D-B267-9CCCE081B661@arm.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Fri, 14 Apr 2023 14:18:32 +0200
Message-ID: <CAHUa44HDv++FfxcrqH9sFUgK1UyB_HQN2xcRknMRVX5gvvnUcA@mail.gmail.com>
Subject: Re: [XEN PATCH v8 11/22] xen/arm: ffa: send guest events to Secure Partitions
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Julien Grall <julien@xen.org>, Marc Bonnici <Marc.Bonnici@arm.com>, 
	Achin Gupta <Achin.Gupta@arm.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Bertrand,

On Fri, Apr 14, 2023 at 10:29 AM Bertrand Marquis
<Bertrand.Marquis@arm.com> wrote:
>
> Hi Jens,
>
> > On 14 Apr 2023, at 10:19, Jens Wiklander <jens.wiklander@linaro.org> wrote:
> >
> > Hi,
> >
> > On Thu, Apr 13, 2023 at 3:24 PM Julien Grall <julien@xen.org> wrote:
> >>
> >> Hi,
> >>
> >> On 13/04/2023 08:14, Jens Wiklander wrote:
> >>> +static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
> >>> +                                      uint8_t msg)
> >>> +{
> >>> +    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
> >>> +    int32_t res;
> >>> +
> >>> +    if ( msg == FFA_MSG_SEND_VM_CREATED )
> >>> +        exp_resp |= FFA_MSG_RESP_VM_CREATED;
> >>> +    else if ( msg == FFA_MSG_SEND_VM_DESTROYED )
> >>> +        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
> >>> +    else
> >>> +        return FFA_RET_INVALID_PARAMETERS;
> >>> +
> >>> +    do {
> >>> +        const struct arm_smccc_1_2_regs arg = {
> >>> +            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
> >>> +            .a1 = sp_id,
> >>> +            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
> >>> +            .a5 = vm_id,
> >>> +        };
> >>> +        struct arm_smccc_1_2_regs resp;
> >>> +
> >>> +        arm_smccc_1_2_smc(&arg, &resp);
> >>> +        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp )
> >>> +        {
> >>> +            /*
> >>> +             * This is an invalid response, likely due to some error in the
> >>> +             * implementation of the ABI.
> >>> +             */
> >>> +            return FFA_RET_INVALID_PARAMETERS;
> >>> +        }
> >>> +        res = resp.a3;
> >>> +    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
> >>
> >> This loop seems potentially unbounded to me. Can you add a comment
> >> explaining why this is fine?
> >
> > In the FF-A 1.1 specification
> > (https://developer.arm.com/documentation/den0077/e/?lang=en) Table
> > 18.26 at page 330 it says that FFA_RET_INTERRUPTED and FFA_RET_RETRY
> > should be handled in this way. When looking at this from the
> > hypervisor's point of view it is troublesome since there isn't any
> > guarantee that we're progressing.
> >
> > We should be able to rule out FFA_RET_INTERRUPTED since non-secure
> > interrupts should be masked at this point. I'm not sure if
> > FFA_RET_RETRY can be avoided entirely, but we should be able to put a
> > limit on how many times we're prepared to retry.
>
> The fact that interrupts are masked in Xen does not mean they will be masked
> in the secure world. In fact, what we should do when INTERRUPTED is received
> is something we have to clear up, but I think we should unmask interrupts to
> process them in Xen before retrying.
>
> >
> > How about setting a limit of max 10 retries and treating
> > FFA_RET_INTERRUPTED as an error? Or is it better to not loop at all
> > and treat all but FFA_RET_OK as errors? What do others think?
>
> I would suggest using a maximum retry count for both cases and adding a TODO
> in the code. We will need to define a generic way to handle those cases, but
> at this stage INTERRUPTED should be considered a TODO.
> RETRY will probably keep a limit here; in the case of a guest message, both
> of those possibilities could simply be returned to the guest.
>
>
> Do you agree ?

Sounds good to me, I'll do that.

Thanks,
Jens

>
> Cheers
> Bertrand
>
> >
> > Thanks,
> > Jens
> >
> >>
> >>> +
> >>> +    return res;
> >>> +}
> >>> +
> >>
> >> Cheers,
> >>
> >> --
> >> Julien Grall
>
>


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 12:27:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 12:27:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521122.809449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnIWm-0003ui-3s; Fri, 14 Apr 2023 12:27:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521122.809449; Fri, 14 Apr 2023 12:27:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnIWm-0003ub-1F; Fri, 14 Apr 2023 12:27:44 +0000
Received: by outflank-mailman (input) for mailman id 521122;
 Fri, 14 Apr 2023 12:27:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZM4c=AF=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pnIWk-0003uV-MT
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 12:27:42 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c027a1bd-dabf-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 14:27:41 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 bd13-20020a05600c1f0d00b003f14c42cc99so489112wmb.2
 for <xen-devel@lists.xenproject.org>; Fri, 14 Apr 2023 05:27:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-23-jens.wiklander@linaro.org> <7a29bc06-61b5-51e6-4625-bf19e530b975@xen.org>
In-Reply-To: <7a29bc06-61b5-51e6-4625-bf19e530b975@xen.org>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Fri, 14 Apr 2023 14:27:28 +0200
Message-ID: <CAHUa44FVQamkBWiwh8pwNmUP3KaVhbg9Vn_z4P6miYt+sBMByA@mail.gmail.com>
Subject: Re: [XEN PATCH v8 22/22] docs: add Arm FF-A mediator
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
	Marc Bonnici <marc.bonnici@arm.com>, Achin Gupta <achin.gupta@arm.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Jan Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	Anthony PERARD <anthony.perard@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Julien,

On Thu, Apr 13, 2023 at 11:00 PM Julien Grall <julien@xen.org> wrote:
>
> Hi Jens,
>
> On 13/04/2023 08:14, Jens Wiklander wrote:
> > Describes a FF-A version 1.1 [1] mediator to communicate with a Secure
> > Partition in secure world.
> >
> > [1] https://developer.arm.com/documentation/den0077/latest
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > ---
> >   SUPPORT.md               |  8 ++++++++
> >   docs/man/xl.cfg.5.pod.in | 15 +++++++++++++++
> >   2 files changed, 23 insertions(+)
> >
> > diff --git a/SUPPORT.md b/SUPPORT.md
> > index aa1940e55f09..1fd746f7f7f2 100644
> > --- a/SUPPORT.md
> > +++ b/SUPPORT.md
> > @@ -818,6 +818,14 @@ that covers the DMA of the device to be passed through.
> >
> >   No support for QEMU backends in a 16K or 64K domain.
> >
> > +### ARM: Firmware Framework for Arm A-profile (FF-A) Mediator
> > +
> > +    Status, Arm64: Tech Preview
> > +
> > +There are still some code paths where a vCPU may hog a pCPU longer than
> > +necessary. The FF-A mediator is not yet implemented for Arm32. Part of the
> > +FF-A specification is not supported.
>
> NIT: I would suggest adding: "(See the top comment in ...)". So one
> can easily find the limitation.

Good point, I'll fix that.

>
> > +
> >   ### ARM: Guest Device Tree support
> >
> >       Status: Supported
> > diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> > index 10f37990be57..bba99c576b48 100644
> > --- a/docs/man/xl.cfg.5.pod.in
> > +++ b/docs/man/xl.cfg.5.pod.in
> > @@ -1645,6 +1645,21 @@ in OP-TEE.
> >
> >   This feature is a B<technology preview>.
> >
> > +=item B<ffa>
> > +
> > +B<Arm only.> Allow a guest to communicate via FF-A with Secure Partitions
> > +(SP), default false.
> > +
> > +Currently is only a small subset of the FF-A specification supported. Just
> > +enough to communicate with OP-TEE. In general only direct messaging and
> > +sharing memory with one SP. More advanced use cases where memory might be
> > +shared or donated to multple SPs are not supported.
>
> Typo: s/multple/multiple/
>
> > +
> > +See L<https://developer.arm.com/documentation/den0077/latest> for more
> > +informantion about FF-A.
>
> Typo: s/informantion/information/

I'll fix the typos.
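
For readers following along, enabling the option in a guest configuration
would then be a one-liner. This is an illustrative fragment based on the
documentation above (per the patch, the option is boolean and defaults to
false):

```
# xl.cfg fragment: let this guest communicate with Secure Partitions
# via the FF-A mediator (Arm only, technology preview).
ffa = 1
```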

Thanks,
Jens

>
> > +
> > +This feature is a B<technology preview>.
> > +
> >   =back
> >
> >   =back
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 12:54:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 12:54:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521126.809458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnIwW-0007JD-9p; Fri, 14 Apr 2023 12:54:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521126.809458; Fri, 14 Apr 2023 12:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnIwW-0007J6-74; Fri, 14 Apr 2023 12:54:20 +0000
Received: by outflank-mailman (input) for mailman id 521126;
 Fri, 14 Apr 2023 12:54:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZM4c=AF=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1pnIwU-0007J0-9w
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 12:54:18 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 77265b7a-dac3-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 14:54:17 +0200 (CEST)
Received: by mail-wr1-x429.google.com with SMTP id j12so1143964wrd.2
 for <xen-devel@lists.xenproject.org>; Fri, 14 Apr 2023 05:54:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-22-jens.wiklander@linaro.org> <c2ef841e-fe3a-a283-2c83-225e02d588d2@xen.org>
In-Reply-To: <c2ef841e-fe3a-a283-2c83-225e02d588d2@xen.org>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Fri, 14 Apr 2023 14:54:05 +0200
Message-ID: <CAHUa44EFqNW_SJcWb=Lbjz1g41TA4Y2f+h8a+xdVG-5yCLpusg@mail.gmail.com>
Subject: Re: [XEN PATCH v8 21/22] xen/arm: ffa: list current limitations
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
	Marc Bonnici <marc.bonnici@arm.com>, Achin Gupta <achin.gupta@arm.com>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Julien,

On Thu, Apr 13, 2023 at 10:57 PM Julien Grall <julien@xen.org> wrote:
>
> Hi Jens,
>
> On 13/04/2023 08:14, Jens Wiklander wrote:
> > Adds a comment listing the unsupported FF-A interfaces and the
> > limitations in the implemented FF-A interfaces.
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > ---
> >   xen/arch/arm/tee/ffa.c | 32 ++++++++++++++++++++++++++++++++
> >   1 file changed, 32 insertions(+)
> >
> > diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> > index 0948cc636871..6424c222c885 100644
> > --- a/xen/arch/arm/tee/ffa.c
> > +++ b/xen/arch/arm/tee/ffa.c
> > @@ -13,6 +13,38 @@
> >    *                https://developer.arm.com/documentation/den0077/e
> >    * TEEC-1.0C: TEE Client API Specification version 1.0c available at
> >    *            https://globalplatform.org/specs-library/tee-client-api-specification/
> > + *
> > + * Notes on the current implementation.
> > + *
> > + * Unsupported FF-A interfaces:
> > + * o FFA_MSG_POLL and FFA_MSG_SEND - deprecated in FF-A-1.1-REL0
> > + * o FFA_MEM_RETRIEVE_* - Used when sharing memory from an SP to a VM
> > + * o FFA_MEM_DONATE_* and FFA_MEM_LEND_* - Used when transferring ownership
> > + *   or access of a memory region
> > + * o FFA_MSG_SEND2 and FFA_MSG_WAIT - Used for indirect messaging
> > + * o FFA_MSG_YIELD
> > + * o FFA_INTERRUPT - Used to report preemption
> > + * o FFA_RUN
> > + *
> > + * Limitations in the implemented FF-A interfaces:
> > + * o FFA_RXTX_MAP_*:
> > + *   - Maps RX and TX buffers of at most 32 4k pages each
> > + *   - RX/TX buffers must be normal RAM
>
> Can you explain why this is a problem?

Good catch, I can't. I must have added it by mistake. I'll remove it.

Thanks,
Jens

>
> > + *   - Doesn't support forwarding this call on behalf of an endpoint
> > + * o FFA_MEM_SHARE_*: only supports sharing
> > + *   - from a VM to an SP
> > + *   - with one borrower
> > + *   - with the memory transaction descriptor in the RX/TX buffer
> > + *   - normal memory
> > + *   - memory regions of at most 512 kB
> > + *   - at most 32 shared memory regions per guest
> > + * o FFA_MSG_SEND_DIRECT_REQ:
> > + *   - only supported from a VM to an SP
> > + *
> > + * There are some large locked sections with ffa_tx_buffer_lock and
> > + * ffa_rx_buffer_lock. Especially the ffa_tx_buffer_lock spinlock used
> > + * around share_shm() is a very large locked section which can let one
> > + * VM affect another VM.
> >    */
> >
> >   #include <xen/bitops.h>
>
> Cheers,
>
> --
> Julien Grall
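The limitation list in the quoted comment reads naturally as a dispatch table: calls outside the implemented subset are answered with NOT_SUPPORTED. Below is a minimal sketch of that shape, not the actual handler in xen/arch/arm/tee/ffa.c; the function-ID values and the ffa_handle_call name are illustrative placeholders for this sketch.

```c
#include <stdint.h>

/* Simplified return convention; the FF-A spec defines NOT_SUPPORTED as -1. */
#define FFA_RET_OK              0
#define FFA_RET_NOT_SUPPORTED  (-1)

/* Function IDs used by this sketch; treat the numeric values as
 * illustrative placeholders, not authoritative spec encodings. */
#define FFA_RXTX_MAP_32            0x84000066U
#define FFA_MEM_SHARE_32           0x84000073U
#define FFA_MSG_SEND_DIRECT_REQ_32 0x8400006FU
#define FFA_MSG_POLL               0x8400006AU /* deprecated, unimplemented */
#define FFA_RUN                    0x8400006DU /* unimplemented */

/* Answer only the implemented subset; everything else (FFA_MSG_POLL,
 * FFA_MEM_RETRIEVE_*, FFA_RUN, ...) is reported as not supported. */
static int ffa_handle_call(uint32_t fid)
{
    switch ( fid )
    {
    case FFA_RXTX_MAP_32:
    case FFA_MEM_SHARE_32:
    case FFA_MSG_SEND_DIRECT_REQ_32:
        return FFA_RET_OK;
    default:
        return FFA_RET_NOT_SUPPORTED;
    }
}
```

A guest probing a deprecated interface such as FFA_MSG_POLL would simply see FFA_RET_NOT_SUPPORTED, which is how the mediator keeps its implemented surface small.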


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 13:29:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 13:29:06 +0000
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 01/12] xen/arm: enable SVE extension for Xen
Thread-Topic: [PATCH v5 01/12] xen/arm: enable SVE extension for Xen
Thread-Index: AQHZbSQxNKEijLDBi06LhQ6zHwegOK8pMc0AgAGd7gA=
Date: Fri, 14 Apr 2023 13:28:42 +0000
Message-ID: <8C042AAE-8256-488C-92C6-E4C0B5DEB3C1@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-2-luca.fancellu@arm.com>
 <190AEC88-68BE-4DEA-84D5-9DDF0F63A365@arm.com>
In-Reply-To: <190AEC88-68BE-4DEA-84D5-9DDF0F63A365@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <C630C58CB4E1CE438F3168FBDF313329@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0



> On 13 Apr 2023, at 13:47, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>
> Hi Luca,
>
>> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>
>> Enable Xen to handle the SVE extension: add code in the cpufeature module
>> to handle the ZCR SVE register, and disable trapping of the SVE feature on
>> system boot only when SVE resources are accessed.
>> While there, correct coding style for the comment on coprocessor
>> trapping.
>>
>> Now cptr_el2 is part of the domain context and will be restored on
>> context switch. This is preparation for saving the SVE context, which
>> will be part of the VFP operations, so restore it before the call
>> to save VFP registers.
>> To save an additional isb barrier, restore cptr_el2 before an
>> existing isb barrier and move the call for saving VFP context after
>> that barrier.
>>
>> Change the KConfig entry to make ARM64_SVE symbol selectable, by
>> default it will not be selected.
>>
>> Create an sve module and sve_asm.S that contains assembly routines for
>> the SVE feature. This code is inspired by Linux and uses
>> instruction encoding to be compatible with compilers that do not
>> support SVE.
>>
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> Changes from v4:
>> - don't use fixed types in vl_to_zcr, forgot to address that in
>>  v3, by mistake I changed that in patch 2, fixing now (Jan)
>> Changes from v3:
>> - no changes
>> Changes from v2:
>> - renamed sve_asm.S in sve-asm.S, new files should not contain
>>  underscore in the name (Jan)
>> Changes from v1:
>> - Add assert to vl_to_zcr, it is never called with vl==0, but just
>>  to be sure it won't in the future.
>> Changes from RFC:
>> - Moved restoring of cptr before an existing barrier (Julien)
>> - Marked the feature as unsupported for now (Julien)
>> - Trap and un-trap only when using SVE resources in
>>  compute_max_zcr() (Julien)
>> ---
>> xen/arch/arm/Kconfig                     | 10 +++--
>> xen/arch/arm/arm64/Makefile              |  1 +
>> xen/arch/arm/arm64/cpufeature.c          |  7 ++--
>> xen/arch/arm/arm64/sve-asm.S             | 48 +++++++++++++++++++++++
>> xen/arch/arm/arm64/sve.c                 | 50 ++++++++++++++++++++++++
>> xen/arch/arm/cpufeature.c                |  6 ++-
>> xen/arch/arm/domain.c                    |  9 +++--
>> xen/arch/arm/include/asm/arm64/sve.h     | 43 ++++++++++++++++++++
>> xen/arch/arm/include/asm/arm64/sysregs.h |  1 +
>> xen/arch/arm/include/asm/cpufeature.h    | 14 +++++++
>> xen/arch/arm/include/asm/domain.h        |  1 +
>> xen/arch/arm/include/asm/processor.h     |  2 +
>> xen/arch/arm/setup.c                     |  5 ++-
>> xen/arch/arm/traps.c                     | 28 +++++++------
>> 14 files changed, 201 insertions(+), 24 deletions(-)
>> create mode 100644 xen/arch/arm/arm64/sve-asm.S
>> create mode 100644 xen/arch/arm/arm64/sve.c
>> create mode 100644 xen/arch/arm/include/asm/arm64/sve.h
>>
>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>> index 239d3aed3c7f..41f45d8d1203 100644
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -112,11 +112,15 @@ config ARM64_PTR_AUTH
>> This feature is not supported in Xen.
>>
>> config ARM64_SVE
>> - def_bool n
>> + bool "Enable Scalar Vector Extension support (UNSUPPORTED)" if UNSUPPORTED
>> depends on ARM_64
>> help
>> -  Scalar Vector Extension support.
>> -  This feature is not supported in Xen.
>> +  Scalar Vector Extension (SVE/SVE2) support for guests.
>
> I would avoid mentioning SVE2 here unless both versions of SVE are
> supported with this config.
> Is that the case?

Hi Bertrand,

Yes, both versions of SVE are supported with this config. SVE2 is a
superset of SVE that adds new instructions, but the work done in this
series for register settings and context switch applies to both
versions.

>
> Cheers
> Bertrand
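For the vl_to_zcr helper mentioned in the changelog above, the architectural mapping is simple: the ZCR_ELx.LEN field holds the vector length in 128-bit quadwords, minus one. A hedged sketch of that conversion follows; the function name matches the changelog, but the exact prototype in the series may differ, and taking the vector length in bytes is an assumption of this sketch.

```c
#include <assert.h>
#include <stdint.h>

/* ZCR_ELx.LEN is a 4-bit field in the architecture; a 5-bit mask
 * leaves headroom and matches common practice. */
#define ZCR_ELx_LEN_MASK 0x1fUL

/*
 * Convert a vector length in bytes to a ZCR_ELx.LEN field value.
 * ZCR_ELx.LEN encodes (vl / 16) - 1, i.e. the number of 128-bit
 * quadwords minus one. Per the changelog, the helper is never
 * called with vl == 0, but assert it to be sure.
 */
static uint64_t vl_to_zcr(unsigned int vl)
{
    assert(vl != 0);
    return ((vl / 16U) - 1U) & ZCR_ELx_LEN_MASK;
}
```

For example, the architectural minimum vector length of 128 bits (16 bytes) encodes as LEN = 0, and a 2048-bit (256-byte) vector length encodes as LEN = 15.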



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 13:38:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 13:38:46 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 01/12] xen/arm: enable SVE extension for Xen
Thread-Topic: [PATCH v5 01/12] xen/arm: enable SVE extension for Xen
Thread-Index: AQHZbSQnZHhi12xjyUK0b5ypet9SdK8pMcIAgAGeBQCAAAKagA==
Date: Fri, 14 Apr 2023 13:38:11 +0000
Message-ID: <6F8AEB0D-5DB4-44EB-BC64-F8488D4DE109@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-2-luca.fancellu@arm.com>
 <190AEC88-68BE-4DEA-84D5-9DDF0F63A365@arm.com>
 <8C042AAE-8256-488C-92C6-E4C0B5DEB3C1@arm.com>
In-Reply-To: <8C042AAE-8256-488C-92C6-E4C0B5DEB3C1@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <5048B202B74CC746961C6F6C754AFA56@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Luca,

> On 14 Apr 2023, at 15:28, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
>
>
>> On 13 Apr 2023, at 13:47, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>>
>> Hi Luca,
>>
>>> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>>
>>> Enable Xen to handle the SVE extension: add code in the cpufeature
>>> module to handle the ZCR SVE register, and disable trapping of the SVE
>>> feature on system boot only when SVE resources are accessed.
>>> While there, correct the coding style of the comment on coprocessor
>>> trapping.
>>>
>>> cptr_el2 is now part of the domain context and will be restored on
>>> context switch; this prepares for saving the SVE context, which will
>>> be part of the VFP operations, so restore it before the call that
>>> saves the VFP registers.
>>> To avoid an additional isb barrier, restore cptr_el2 before an
>>> existing isb barrier and move the call that saves the VFP context
>>> after that barrier.
>>>
>>> Change the Kconfig entry to make the ARM64_SVE symbol selectable;
>>> by default it is not selected.
>>>
>>> Create the sve module and sve-asm.S, which contains assembly routines
>>> for the SVE feature; this code is inspired by Linux and uses
>>> instruction encodings to stay compatible with compilers that do not
>>> support SVE.
>>>
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> ---
>>> Changes from v4:
>>> - don't use fixed types in vl_to_zcr; I forgot to address that in
>>> v3 and by mistake changed it in patch 2, fixing it now (Jan)
>>> Changes from v3:
>>> - no changes
>>> Changes from v2:
>>> - renamed sve_asm.S to sve-asm.S; new files should not contain
>>> underscores in the name (Jan)
>>> Changes from v1:
>>> - Add an assert to vl_to_zcr; it is never called with vl==0, but just
>>> to be sure it won't be in the future.
>>> Changes from RFC:
>>> - Moved restoring of cptr before an existing barrier (Julien)
>>> - Marked the feature as unsupported for now (Julien)
>>> - Trap and un-trap only when using SVE resources in
>>> compute_max_zcr() (Julien)
>>> ---
>>> xen/arch/arm/Kconfig                     | 10 +++--
>>> xen/arch/arm/arm64/Makefile              |  1 +
>>> xen/arch/arm/arm64/cpufeature.c          |  7 ++--
>>> xen/arch/arm/arm64/sve-asm.S             | 48 +++++++++++++++++++++++
>>> xen/arch/arm/arm64/sve.c                 | 50 ++++++++++++++++++++++++
>>> xen/arch/arm/cpufeature.c                |  6 ++-
>>> xen/arch/arm/domain.c                    |  9 +++--
>>> xen/arch/arm/include/asm/arm64/sve.h     | 43 ++++++++++++++++++++
>>> xen/arch/arm/include/asm/arm64/sysregs.h |  1 +
>>> xen/arch/arm/include/asm/cpufeature.h    | 14 +++++++
>>> xen/arch/arm/include/asm/domain.h        |  1 +
>>> xen/arch/arm/include/asm/processor.h     |  2 +
>>> xen/arch/arm/setup.c                     |  5 ++-
>>> xen/arch/arm/traps.c                     | 28 +++++++------
>>> 14 files changed, 201 insertions(+), 24 deletions(-)
>>> create mode 100644 xen/arch/arm/arm64/sve-asm.S
>>> create mode 100644 xen/arch/arm/arm64/sve.c
>>> create mode 100644 xen/arch/arm/include/asm/arm64/sve.h
>>>
>>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>>> index 239d3aed3c7f..41f45d8d1203 100644
>>> --- a/xen/arch/arm/Kconfig
>>> +++ b/xen/arch/arm/Kconfig
>>> @@ -112,11 +112,15 @@ config ARM64_PTR_AUTH
>>> This feature is not supported in Xen.
>>>
>>> config ARM64_SVE
>>> - def_bool n
>>> + bool "Enable Scalar Vector Extension support (UNSUPPORTED)" if UNSUPPORTED
>>> depends on ARM_64
>>> help
>>> -  Scalar Vector Extension support.
>>> -  This feature is not supported in Xen.
>>> +  Scalar Vector Extension (SVE/SVE2) support for guests.
>>
>> I would avoid mentioning SVE2 here unless both versions of SVE are
>> supported with this config.
>> Is that the case?
>
> Hi Bertrand,
>
> Yes, both versions of SVE are supported with this config. SVE2 is a
> superset of SVE that includes new instructions, but the work done in
> this series for register settings and context switch will apply to
> both versions.

Good, so this is OK then.

You can add my:
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand


>
>>
>> Cheers
>> Bertrand




From xen-devel-bounces@lists.xenproject.org Fri Apr 14 15:20:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 15:20:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521147.809492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnLDd-0006aD-BM; Fri, 14 Apr 2023 15:20:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521147.809492; Fri, 14 Apr 2023 15:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnLDd-0006a6-83; Fri, 14 Apr 2023 15:20:09 +0000
Received: by outflank-mailman (input) for mailman id 521147;
 Fri, 14 Apr 2023 15:20:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ClDU=AF=citrix.com=prvs=4618bebc7=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pnLDb-0006Zp-Bf
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 15:20:07 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d3691e90-dad7-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 17:20:03 +0200 (CEST)
Received: from mail-bn8nam04lp2043.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 14 Apr 2023 11:19:59 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH7PR03MB7223.namprd03.prod.outlook.com (2603:10b6:510:243::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.34; Fri, 14 Apr
 2023 15:19:54 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::bb8d:7344:7172:6ee%3]) with mapi id 15.20.6277.038; Fri, 14 Apr 2023
 15:19:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3691e90-dad7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681485603;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=KZASkSgfwqRxMeZvPhT1iszvELyMTNOOjVdR7aVlPNk=;
  b=gx/grlOM4nFEmctYR7eZkuK3/21fHsDwRkA7b/SnppcspBLA8Xnc6hGB
   tNcwAiDVOQJyfgDIed49uX1WNTle/LZVrxkdrI4UsgLLg+GOnKH5TNAuS
   B3EN5qtTzbc0V+VuPK0Fcszz0PyHUWGyf8V+8n/iARSRY+GrBCasWjVIU
   A=;
X-IronPort-RemoteIP: 104.47.74.43
X-IronPort-MID: 105442746
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:PMUg7aD6EOmkpxVW/x3iw5YqxClBgxIJ4kV8jS/XYbTApD0h1zRUz
 2tKWWqAPP+LN2ajedt2a97g9UpS6MTdndYwQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G9B4gRkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw+ttLX2Yf6
 sMjEwtWQDCyutu4zqCSRbw57igjBJGD0II3nFhFlW2cJ9B2BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTK+exruAA/zyQouFTpGMDSddGQA91cg26Tp
 37c/nS/CRYfXDCa4WPdry7w3LCTw0sXXqooEo/o0a86vWau7UkIUEMdDkGqpqm23xvWt9V3b
 hZ8FjAVhbg/8gmnQ8fwWzW8oWWYpVgMVtxICeo45QqRjK3O7G6xBHADTztLb9EOrsI6RTU2k
 FSOmrvBBjtpqrSZD22c8rS8qim7MiwYa2QFYEc5oRAt5tDipMQ5iELJR9M7TKqt1IWpQnf33
 iyAqzU4i/MLl8kX2q6n/FfBxTWxupzOSQ1z7QLSNo640j5EiEeeT9TAwTDmATxodd7xooWp1
 JTcp/Wj0Q==
IronPort-HdrOrdr: A9a23:MRFC2aDKzDKQcg7lHejHsseALOsnbusQ8zAXPh9KJCC9I/bzqy
 nxpp8mPH/P5wr5lktQ++xoX5PwOU80lKQFmLX5WI3PYOCIghrNEGgP1+vfKl7balDDH5BmpM
 BdmsFFYbWfbGSS5fyKmjVQeOxQpeVvnprY5ts3mBxWPHpXguxbnnBE4kHxKDwGeCB2Qb4CUL
 aM7MtOoDStPVwRc8SAH3EAG8TTutHRk5riQBgeQzoq8hOHgz+E4KPzV0Hw5GZUbxp/hZMZtU
 TVmQ3w4auu99m91x/nzmfWq7hGhdf7zdNHJcqUzuwYMC/lhAqEbJloH5eCoDc2iuey70tCqq
 iFnz4Qe+BIr1/BdGC8phXgnyHmzTYV8nfnjXuVm2Hqr8DVTC8zT5Mpv/MRTjLpr24b+P1s2q
 NC2GyU87JREBP7hSz4o/zFTQtjmEaYqWcr1cQTk3tce40Db6I5l/1owGplVLM7WA7q4oEuF+
 djSOna+fZtaFufK0vUu2F+qebcLEgbL1OjeAwvq8aV2z9ZkDRS1E0D3vESmX8G6dYUV4REz/
 6sCNUmqJh+CustKY5tDuYIRsW6TkbXRwjXDW6UKVP7UIkaJnP2rYLt6rld3pDnRHUx9upypH
 39aiIZiYZrEHieSvFmnac7vywleV/NEwgEkaplltpEUr6VfsuZDcTMciFqryKamYRgPiTqYY
 fOBHtoOY6dEYKXI/cu4+TfYegmFZBMarxghv8LH3Szn+nsFqrG8sTmTde7HsucLd9jYBK0Pk
 c+
X-Talos-CUID: 9a23:dKHvXG7k280+QGZHVNss9HMoRMd0bULm90yLDW+JNF9nRq+8RgrF
X-Talos-MUID: 9a23:1smJzAnYnR5Fggo7No9HdnpYMvU4xPX+NXswurAd45ePbjZhPGeS2WE=
X-IronPort-AV: E=Sophos;i="5.99,197,1677560400"; 
   d="scan'208";a="105442746"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dYN+lxuu97RJHO2KlJdpI6kgwB6sLFE0RXADMxGi2OwqKy7f9forBs0JSw0nViXUe0r1ufJY9yFz7tbVIpZMIzGy+RNRdVz8LiaY8L+WuHkE6lZIITNwj9dDNg3CJ5VBf4++IzyFk4nXOMIVSC8OAqtIq9DyAYH8HbQDh/fi4s8nh9n5qbglLVfoisW13SmKZbt20+K9uYuLw2t9eGyQqqZAAG8jCdsVRaJR9Y9V11z2YI5NbKw0k4M77vgEBmNujuOeAzStj6wd4dXO3mHm9+RSmU2hw5MzFMRpOGoB5/DXhdeGPxqMUGqAE7y9OMJ6xyk7d13qIsCk033bd931EA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=1p+jVcCxqzh6neMgNDhH9OkgSf5UEdomAAqkuunpqpE=;
 b=ZGEm+0IDixL5x2Rii4qzaksWrl1r3eH1HH+dMiYCvJgYOSypVIzYxXzcums3XA7tVufYQmklpUKm8HbviKAD1RzT/k+IyHZNekyzEYrS2bYukFt653pnkzt9Rj/Q6Ofy3qGNJqEzv6tem7wM9cW5h+wReOFn7JLtCHZbnNxJO8cuII74xqm/sSIhV4kPTiaSDZKwlvEUtOEkfl4dOpO3O2NOJ+knqX205wgaq+iG5iOmdudatw+PxT6lf+zTo4ud1hzf6XRxb5SoxGEA1LApd+rklGWzvPi28/4SvZyiHmzR0uJX3tVIlLGI3LEsYyhKWCqR/uOk151YYt+9pBNxzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1p+jVcCxqzh6neMgNDhH9OkgSf5UEdomAAqkuunpqpE=;
 b=c0nl3uUaW82dStoUESvSzuPfx6dzPwB9RIcoXGkWCBX6rMrLAs7gj+8uyu9Dq6fY01Q58oovQcMdjMhBWooEzRGHoJYOrKGnX/59YOE/1gUEcmicV6iIjjGlkR/t9hwYZvDEGuxGYvD3j+VsG04FNj6dVTVjmRKFj55CDhYylAg=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Josh Poimboeuf <jpoimboe@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: [PATCH] create-diff-object: handle missing padding at end of special section
Date: Fri, 14 Apr 2023 17:19:33 +0200
Message-Id: <20230414151933.53851-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0204.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:318::14) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH7PR03MB7223:EE_
X-MS-Office365-Filtering-Correlation-Id: d35c8331-6aaa-47cd-d3a5-08db3cfbb2eb
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GPnMOhTNJGjqYRWEdqdljf5rBh2dDiFcQzR9osoM4QwKEUBCqxGCnOXgRQaOXFd7nazPTO+2YYKvn83gnSgLDEOyVc9+Bdpyo3g6bRvYC99sUOmut63FkX+5j89Bd9JPeUnanId3kaGS221nTH9UYmDSsILK2tSHt3h5pqC/lBhhBlxAKci4csHqppoWDwzpKmPTLhFss+r7zz3whzJKZ+QinJi05IeK2wEAzdQIpZ2V7kGtmo6Z+9nM3xLyE2E61xIM/bTdBFLQL8GS5n5E/qoo8bASi+aSGEBTY/kwTomrTVR4DkxADd+yKyAUDpAyl07dCuFblu9fQ4VOyIgtoQxRBVUp5PslHa21Qp06e/g1zWtvUYJoclhWD3rTHWOiABcW2lULHMW5fs15oagUi7wnyLjH7vFWc/0R4IlCx0Bmw6UJ7thOuDuBqfK73lqmAPTJfpyYDoxn6gEkSBmI3En1gPMg+Leq9tR9zvsg28JvwCLtdd4/N8xOqk6jrwc727O6E4ZYy6RPTOQZSgLLgfDaBr5GjvdpXlrUTaZUjF4Kn3GQBWerVPSuaFcCPVLg
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(346002)(376002)(39860400002)(396003)(136003)(451199021)(36756003)(6486002)(4326008)(41300700001)(6916009)(66946007)(54906003)(66556008)(478600001)(8676002)(66476007)(316002)(86362001)(2616005)(6506007)(1076003)(83380400001)(6666004)(6512007)(107886003)(26005)(2906002)(5660300002)(82960400001)(8936002)(38100700002)(186003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cW1GVHlkVGtkRVhkZHN4YjJnTjdsS3NXd29EMW5iNkw1UTFQOEpic01UZTla?=
 =?utf-8?B?a1FNc3h5ZzNtM0MvK0FGSGZFa0NwMDJKcThHbk5LdGxqRG1IRm1wY2UvZitY?=
 =?utf-8?B?WktwV3g3MmNiamtFQmxKSlRpblBIVEthS0swUzMrL0E4MWkvcnAwU0pVZE1J?=
 =?utf-8?B?WWdQK3pKK1FIamFPc0tjU3o0YzA4Y2hzMkJRaVM5MmFZRlVUTnZMYS84ajRY?=
 =?utf-8?B?QzNoZHJnaXdrbzdOYlc2R0k0bzYxQm4wR1VVdlgrYS9TVkYyeFhXTk42SDFr?=
 =?utf-8?B?Qnlldm9yN2l2SlJUQWxZNDlnTWxOWHExaU5xSnZlZXgyWUhxZG5ncXBkRXJw?=
 =?utf-8?B?UDRiN1ZPWVl4b0toOFZLbmYyWmtESitZYnowNy9Ec2Jxd2VhS3FrZGF3dUZK?=
 =?utf-8?B?NXdLelN6TmZXT2o5bDdqRXB2NGF1Y1ZxZ21pR3hCQ0hDZEdDMVhBaEFaOC96?=
 =?utf-8?B?MDUybmswYU1FQVhnTkUxQjhzVHlRQ0Z5b1hJNSszSW8zcDQ5VVZPYkpGRkhH?=
 =?utf-8?B?QjM1Y05hSmF5bHBobWYyaTJDMXdDOElzcE9TN2ppVUlma1VMVDVyVVN5NkFm?=
 =?utf-8?B?dE1sS1JrWnlxOFp2UU1yYWU5djNpMTc2MXgxWldoOFVFY0tMSzlqOHhEd1Vw?=
 =?utf-8?B?a2JYMm9LRjFaMHprR3FHSTFnaHJIOUxkekRXMTN6N3VSYUpzUGx1YWloblJR?=
 =?utf-8?B?QjRWNFFvWTViQjQwWkUwSkZuN2lVWHYrMlhNdm1GZmVrbk9JNXIyejZmejJB?=
 =?utf-8?B?SEZCUFA3b0MxUS9oSHkvbWwrSFN0Q1VsM1hJb1BQK21zempaMmNwSjcwSzVu?=
 =?utf-8?B?QnpiMzhTZUZEWmN5dXNCc2F0MlRwSUt6NkxxOEV6R2hFYURKZTBIUmVaU2pE?=
 =?utf-8?B?Z2hiUDhXbzJzbmYzNWhUdDlJWTZDb0IxaHNYbFpMbmFaSGE3VXpLdGJOSURm?=
 =?utf-8?B?ZzROV1hoVlVxb3hYNnB0RUtiTGF0RkJZQkorRU93cnVRQ0JHV0hXZ3lTS0pJ?=
 =?utf-8?B?Q0Jabk9PQzRZRTVTRnBVNm4xSm5oM2JLdGNCdHB3RmgrYzBXYTA0VXB0bXBk?=
 =?utf-8?B?WkxYMzhNWW8wbXJwdkwrZmVqK09mWVFnTzVqTHdiemxGTC8vcVoxcDNzUENR?=
 =?utf-8?B?dHl5ZHJqVkV0ck9Vd0RZZ0ZQcHh1YjBvQ0NNeXRSMjFVNkhuYkJlSGoxdm1k?=
 =?utf-8?B?RjVGM0tOT0pRZkozSFhpclI4SEhaamNUb2FsTmFYQ1NYVnFlSFZzZE54MUF0?=
 =?utf-8?B?cnNsZkh1alRST3BQTXl5NEhvOXlTZVQwQUZlbkNkVlZ2cTY3RzZTcnZwdzM0?=
 =?utf-8?B?U1VvY3ZZYTVmbzQvVkxzbnpvOTBvbDNKcWJuaTFacHhZdGprbmVseEE1VDNo?=
 =?utf-8?B?TWVlYW45SmFQSGcyWUhnekhJSjJ6WGwrYyswZ080WnVGbi9ZODBSellkbklq?=
 =?utf-8?B?cTd5YkVRY1J2VE1uTk1BVFNPVjcyZDdNbFBSWmxudVdIQ21XTDJhSmRsNUFH?=
 =?utf-8?B?VWlLMmROSWtraXUzZGNlL1I1L2RZdWZRUmREaXdFbzY5Z0RMNFEyK2lFOGNp?=
 =?utf-8?B?Mnk3V2tHK3ZXemdtbnFvejJ2TEZEWEpFV094dU1aTG9VZVBtd1lEK3A4dDhL?=
 =?utf-8?B?enZNTHlYNDJQY2NHTWtOa3JkbEZKV3gzYk9TREFUUER2dlB0MUJGREtEU2N4?=
 =?utf-8?B?TzlEUXdqRWozWHlVN3NmU2M4eWd0SHRIUThWU2Y3d0RyWlVVazhHbUVoejZW?=
 =?utf-8?B?bDJuemtXdWdVWGFNZ1dFc1c4RVlvUWkrVkxSOVgrTjU1TTVtQzhtUlhtMzN3?=
 =?utf-8?B?UWpLSCtwc2Z2NTdCWXkzMGtrZ3lzaWMwWjlRd2h2VC9wbzNRWWFmMy9xbXAx?=
 =?utf-8?B?WmFZTDUvSUxaS29WVmxJcEZ3SUhXQXVwa2FhSE15YXUzaE14M2RLMGsrSFJN?=
 =?utf-8?B?bFI5R21JckVERVJzdDR2TitEQXV4aXZJWXBmMHhTSTVBemk5WG1lMURxNEp5?=
 =?utf-8?B?MFFiUVlPcnRvVDNqcFVCenZpYi9NekRZSzNpSzBPa1FTc0ErREFLOWZZam5G?=
 =?utf-8?B?anBmcTF0ZTNLQURYZnQvZWZrYUc3TURkdExHaGJ1eTlFUURRb3VqN0VBVm8z?=
 =?utf-8?Q?GQhBrLD50crx13IvCAF7yRSU6?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	Q4bpxfPap184UEiEOiz2EhoU/tjLAvsd7BdFjsvJylMQ8yzmErntINRQe4nUANE/HX6KHMMwAqNrtJ+ncw50kWya+n8MDoBAlCsGbyNfUxlOLx8r2r72XPaKPapjkyM2fKb+BgsheehDv3fGQfE0r6gpWMilbzTHBQi8S4I4flsSR712cJBs2eywjIEiTtNB7PbXkqjlAJm20ORNkoMFai236xTy9Jwua0m9uPwo92yWsc/B3QvoB9gGKNBhPnOBx/VUvNj2HQ8G30UGeT3zlLYm1U9wFWlG49YwMz9qaobiEjOt2uG+yzHBpi6zPBxoGVjyi+WA56TqPDkfbnfpPqdXVGDLtHvgQ4rZMb2qIKaVNlmX2bWg4tiPVZLDEX3zIQltdG148aPdeVuX7cTHyLxnaXE6gB46rcqqGNf47yUKbxdZB6Tt0sZMCHuLaWdLAvtP1tGWnufAlfHO64fb/AIQk5QD6FkCvOsh7xD3AEAyNgGe3JLs0cj83bUFzHyl53FS45sai3mJuIs8ESX8Gb1aK31fFSxSHZMlvbYOXLN2g4yL/CuGt21HI1c2QOLgUHhhphMbG2ZOH61HwLKuY/X9T/x7pugbSSnuav/Kzlv/+pVsQaVk6dpYBWBG1y+zDfLvCmHmOWSLo8ghLrNz0i3zw8spnk9r1D2YwFD93d+Q7g39ZNmoLEkx1j9m5rDhxDJxo5Zq9siFTxlHFkenG3/VAS62u8xb1nWVvrS1+79OXIJKKwobZ4gYghcltj0dVWu44yKTKfpdclbNcs0q+gbaZp5RS+pfXOCOjq17K8CGsRpMGDxvVVXN7bKXvfQ+Jqxr0zVQFlpGXCjJg6zBng==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d35c8331-6aaa-47cd-d3a5-08db3cfbb2eb
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 15:19:54.4118
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9GfrlnQDQuZLeQeIenbIie23AMmGLK34uptR1JDTZGp/D0uBbkwheuqr+s6wNI4FlqC58LH0LYC5bcEM0YU3KQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR03MB7223

From: Josh Poimboeuf <jpoimboe@redhat.com>

The paravirt_patch_site struct has 12 bytes of data and 4 bytes of
padding, for a total of 16 bytes.  However, when laying out the structs
in the .parainstructions section, the vmlinux script only aligns before
each struct's data, not after.  So the last entry doesn't have the
4-byte padding, which breaks kpatch_regenerate_special_section()'s
assumption of a 16-byte struct, resulting in a memcpy past the end of
the section.

Fixes #747.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>

This is commit:

c2dc3836e862 create-diff-object: handle missing padding at end of special section

in the kpatch repository.

I've seen the .fixup section get an alignment of 16 but a size of 81,
which triggers the error removed by this patch.  Overall I'm not sure
why the original alignment check was done against the size of the
section; the alignment applies to the address of the section, not to
its size.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>
---
 create-diff-object.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/create-diff-object.c b/create-diff-object.c
index d8a003216096..67784642bcd7 100644
--- a/create-diff-object.c
+++ b/create-diff-object.c
@@ -1204,7 +1204,7 @@ static void kpatch_regenerate_special_section(struct kpatch_elf *kelf,
 {
 	struct rela *rela, *safe;
 	char *src, *dest;
-	int group_size, src_offset, dest_offset, include, align, aligned_size;
+	int group_size, src_offset, dest_offset, include;
 
 	LIST_HEAD(newrelas);
 
@@ -1234,6 +1234,18 @@ static void kpatch_regenerate_special_section(struct kpatch_elf *kelf,
 	for ( ; src_offset < sec->base->sh.sh_size; src_offset += group_size) {
 
 		group_size = special->group_size(kelf, src_offset);
+
+		/*
+		 * In some cases the struct has padding at the end to ensure
+		 * that all structs after it are properly aligned.  But the
+		 * last struct in the section may not be padded.  In that case,
+		 * shrink the group_size such that it still (hopefully)
+		 * contains the data but doesn't go past the end of the
+		 * section.
+		 */
+		if (src_offset + group_size > sec->base->sh.sh_size)
+			group_size = sec->base->sh.sh_size - src_offset;
+
 		include = should_keep_rela_group(sec, src_offset, group_size);
 
 		if (!include)
@@ -1269,12 +1281,6 @@ static void kpatch_regenerate_special_section(struct kpatch_elf *kelf,
 		dest_offset += group_size;
 	}
 
-	/* verify that group_size is a divisor of aligned section size */
-	align = sec->base->sh.sh_addralign;
-	aligned_size = ((sec->base->sh.sh_size + align - 1) / align) * align;
-	if (src_offset != aligned_size)
-		ERROR("group size mismatch for section %s\n", sec->base->name);
-
 	if (!dest_offset) {
 		/* no changed or global functions referenced */
 		sec->status = sec->base->status = SAME;
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 15:58:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 15:58:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521152.809502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnLoI-0001iJ-A9; Fri, 14 Apr 2023 15:58:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521152.809502; Fri, 14 Apr 2023 15:58:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnLoI-0001iC-5y; Fri, 14 Apr 2023 15:58:02 +0000
Received: by outflank-mailman (input) for mailman id 521152;
 Fri, 14 Apr 2023 15:58:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnLoH-0001i2-2k; Fri, 14 Apr 2023 15:58:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnLoH-0006hd-1f; Fri, 14 Apr 2023 15:58:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnLoG-0006zk-MQ; Fri, 14 Apr 2023 15:58:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pnLoG-0007Cs-Lv; Fri, 14 Apr 2023 15:58:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KBuBFfQZqELlxjZoDe5vg5MXsq3VLmY7J8hnmSwdY/U=; b=MvTEoUI707dgw0asv8bo8cGx2r
	9OAyQMPxU1hhSSnvS4QiG2k6XLlA213YMNsujq6srIXb//WY61ZoSQ0O8z6n+gMxgtN278Ksl2juL
	EjMvkLUwwD8qpHYUQU2cGclOyLSfM0VCtO7ky0H99sHZMpQOB4AX9FxBTkBJtauec300=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180261-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180261: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c9fb11f92f52e06bcb1279b467a3b2667757be44
X-Osstest-Versions-That:
    ovmf=55b67b6950e648338adfe8ec54aeb26ed89d2c97
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Apr 2023 15:58:00 +0000

flight 180261 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180261/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c9fb11f92f52e06bcb1279b467a3b2667757be44
baseline version:
 ovmf                 55b67b6950e648338adfe8ec54aeb26ed89d2c97

Last test of basis   180250  2023-04-13 19:43:30 Z    0 days
Testing same since   180261  2023-04-14 13:43:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   55b67b6950..c9fb11f92f  c9fb11f92f52e06bcb1279b467a3b2667757be44 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 16:18:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 16:18:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521158.809512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnM7Z-0004fr-U0; Fri, 14 Apr 2023 16:17:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521158.809512; Fri, 14 Apr 2023 16:17:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnM7Z-0004fk-R2; Fri, 14 Apr 2023 16:17:57 +0000
Received: by outflank-mailman (input) for mailman id 521158;
 Fri, 14 Apr 2023 16:17:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rR8y=AF=citrix.com=prvs=4614ad092=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pnM7Y-0004fe-E2
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 16:17:56 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e7fce995-dadf-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 18:17:53 +0200 (CEST)
Received: from mail-dm6nam12lp2168.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 14 Apr 2023 12:17:51 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CO6PR03MB6226.namprd03.prod.outlook.com (2603:10b6:5:354::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 16:17:49 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6277.043; Fri, 14 Apr 2023
 16:17:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7fce995-dadf-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
Message-ID: <1377fcd2-672a-687d-468d-ddc6d5b4be70@citrix.com>
Date: Fri, 14 Apr 2023 17:17:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] create-diff-object: handle missing padding at end of
 special section
Content-Language: en-GB
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Josh Poimboeuf <jpoimboe@redhat.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>
References: <20230414151933.53851-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230414151933.53851-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 14/04/2023 4:19 pm, Roger Pau Monne wrote:
> From: Josh Poimboeuf <jpoimboe@redhat.com>
>
> The paravirt_patch_site struct has 12 bytes of data and 4 bytes of
> padding, for a total of 16 bytes.  However, when laying out the structs
> in the .parainstructions section, the vmlinux script only aligns before
> each struct's data, not after.  So the last entry doesn't have the
> 4-byte padding, which breaks kpatch_regenerate_special_section()'s
> assumption of a 16-byte struct, resulting in a memcpy past the end of
> the section.
>
> Fixes #747.
>
> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
>
> This is commit:
>
> c2dc3836e862 create-diff-object: handle missing padding at end of special section
>
> In kpatch repository.
>
> I've seen the .fixup section get an alignment of 16 but a size of 81,
> which makes the error removed in this patch trigger.  Overall I'm not
> sure why the original alignment check was done against the size of the
> section; the alignment applies to the address of the section, not its
> size.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Seems like a clean backport, so FWIW

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

However, surely we want a correction to Xen's linker script too, to stop
it emitting a badly aligned section?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 16:18:52 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180255-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180255: tolerable trouble: pass/starved - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Apr 2023 16:18:40 +0000

flight 180255 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180255/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              ebd004a03dbddc52dd1b47bd6bc4607f553d5e70
baseline version:
 libvirt              152770333449cd3b78b4f5a9f1148fc1f482d842

Last test of basis   180227  2023-04-13 04:20:20 Z    1 days
Testing same since   180255  2023-04-14 04:18:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Eric Farman <farman@linux.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   1527703334..ebd004a03d  ebd004a03dbddc52dd1b47bd6bc4607f553d5e70 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 17:17:11 2023
Message-ID: <7fba60ed-d159-f1bd-7edd-6e5a0c60340f@xen.org>
Date: Fri, 14 Apr 2023 18:16:49 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-3-luca.fancellu@arm.com>
 <72f38b2b-a391-fb7c-f8c0-cf3561470875@xen.org>
 <B3A82639-6D61-4DA2-B918-A92A421C75D3@arm.com>
 <e8075849-8bd5-7fd4-efaa-81e48c867635@xen.org>
 <4F5DC5EC-F538-42CE-A93F-2B5E3FAC13BB@arm.com>
 <03cc0c98-c5ef-16f1-ed24-6a39320b08e5@xen.org>
 <D32A74F6-8BBB-4965-A720-B3133ECC77BA@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <D32A74F6-8BBB-4965-A720-B3133ECC77BA@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

On 14/04/2023 12:07, Luca Fancellu wrote:
> 
> 
>> On 13 Apr 2023, at 20:52, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Luca,
>>
>> On 13/04/2023 15:05, Luca Fancellu wrote:
>>>> On 13 Apr 2023, at 14:30, Julien Grall <julien@xen.org> wrote:
>>>>
>>>>
>>>>
>>>> On 13/04/2023 14:24, Luca Fancellu wrote:
>>>>> Hi Julien,
>>>>
>>>> Hi Luca,
>>>>
>>>>>>>   @@ -594,6 +597,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>>>>       unsigned int max_vcpus;
>>>>>>>       unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>>>>>>       unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>>>>>>> +    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>>>>>>>         if ( (config->flags & ~flags_optional) != flags_required )
>>>>>>>       {
>>>>>>> @@ -602,6 +606,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>>>>           return -EINVAL;
>>>>>>>       }
>>>>>>>   +    /* Check feature flags */
>>>>>>> +    if ( sve_vl_bits > 0 )
>>>>>>> +    {
>>>>>>> +        unsigned int zcr_max_bits = get_sys_vl_len();
>>>>>>> +
>>>>>>> +        if ( !zcr_max_bits )
>>>>>>> +        {
>>>>>>> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>>>>>>> +            return -EINVAL;
>>>>>>> +        }
>>>>>>> +
>>>>>>> +        if ( sve_vl_bits > zcr_max_bits )
>>>>>>> +        {
>>>>>>> +            dprintk(XENLOG_INFO,
>>>>>>> +                    "Requested SVE vector length (%u) > supported length (%u)\n",
>>>>>>> +                    sve_vl_bits, zcr_max_bits);
>>>>>>> +            return -EINVAL;
>>>>>>> +        }
>>>>>>
>>>>>> Is SVE supported for 32-bit guests? If not, then you should add a check here to prevent the creation of the domain if sve_vl_bits is set.
>>>>> No, SVE is not supported for 32-bit guests. Here I think we will get “SVE is unsupported on this machine” because get_sys_vl_len() will return 0.
>>>>
>>>>  From my understanding, get_sys_vl_len() will return the length supported by the host. So if you run a 32-bit guest on top of a 64-bit host, then I believe get_sys_vl_len() will be non-zero.
>>> Yes, you are right. I realise that I need the domain type information, which I can’t have in arch_sanitise_domain_config; the checks there still make sense, though, and I can add a check
>>> like this afterwards:
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index c1f0d1d78431..ce1235c25769 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -3694,6 +3694,12 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
>>>           return -EINVAL;
>>>       }
>>>   +    if ( d->arch.sve_vl && (kinfo->type == DOMAIN_32BIT) )
>>> +    {
>>> +        printk("SVE is not available for 32-bit domain\n");
>>> +        return -EINVAL;
>>> +    }
>>> +
>>>       if ( is_64bit_domain(d) )
>>>           vcpu_switch_to_aarch64_mode(v);
>>> Would it be ok for you?
>>
>> construct_domain() is only going to be used for domains created by Xen. You would need the same check for the ones created by the toolstack.
>>
>> Do you need to know the SVE length when the domain is created? If not, then I would suggest creating a new domctl that would be called after we switch the domain to 32-bit.
> 
> Hi Julien,
> 
> Yes, that’s true; we would like to prevent anyone using hypercalls from creating 32-bit guests with SVE. I think that with this check it’s possible to avoid that:
> 
> diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
> index 0de89b42c448..b7189e8dbbb5 100644
> --- a/xen/arch/arm/arm64/domctl.c
> +++ b/xen/arch/arm/arm64/domctl.c
> @@ -43,6 +43,9 @@ long subarch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>           case 32:
>               if ( !cpu_has_el1_32 )
>                   return -EINVAL;
> +            /* SVE is not supported for 32 bit domain */
> +            if ( is_sve_domain(d) )
> +                return -EOPNOTSUPP;
>               return switch_mode(d, DOMAIN_32BIT);
>           case 64:
>               return switch_mode(d, DOMAIN_64BIT);
> 
> It’s a bit late in guest creation, but we don’t have the domain type information earlier; this check, together with the check above in construct_domain, would
> be enough.
> 
> What do you think?

I would be OK with this approach for now. In the longer term, we 
probably want to consider setting the mode when the domain is created, 
because it can't change at runtime.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 17:23:36 2023
Message-ID: <a93baa6c-2b9d-315d-304a-956804b2731b@amd.com>
Date: Fri, 14 Apr 2023 10:23:22 -0700
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 12/17] common/device_tree: Add rwlock for dt_host
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "sstabellini@kernel.org" <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-13-vikram.garhwal@amd.com>
 <AS8PR08MB7991D4C1352B785D505AE63892999@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <AS8PR08MB7991D4C1352B785D505AE63892999@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: SJ0PR05CA0201.namprd05.prod.outlook.com
 (2603:10b6:a03:330::26) To MW3PR12MB4409.namprd12.prod.outlook.com
 (2603:10b6:303:2d::23)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: MW3PR12MB4409:EE_|CO6PR12MB5442:EE_
X-MS-Office365-Filtering-Correlation-Id: 630d318b-25ee-4ed6-13d9-08db3d0cf448
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 630d318b-25ee-4ed6-13d9-08db3d0cf448
X-MS-Exchange-CrossTenant-AuthSource: MW3PR12MB4409.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 17:23:25.2225
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kAArDci8ntS20m5aQNh+rArKUdN3/I2MPysXEX76yzbpILQ6jXPIHTvTcokDwy1t7YQ8w2OV/0V5xoQv7mlwRQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO6PR12MB5442

Hi Henry,

On 4/13/23 7:09 PM, Henry Wang wrote:
> Hi Vikram,
>
>> -----Original Message-----
>> Subject: [XEN][PATCH v5 12/17] common/device_tree: Add rwlock for dt_host
>>
>>   Dynamic programming ops will modify dt_host, and there might be other
>>   functions browsing dt_host at the same time. To avoid race conditions,
>>   add a rwlock for browsing dt_host during runtime.
> For clarity, could you please add a bit more detail to explain why you chose
> a rwlock instead of a normal spinlock?
The rwlock is there to protect anyone reading dt_host while dynamic 
programming is modifying it.
The initial suggestion to add a rwlock here came from Julien.
For now, dynamic programming is the only dt_host writer in Xen during 
runtime; all the other IOMMU passthrough functions are just readers at 
runtime. So a r/w lock is the better fit here, as a spinlock cannot 
distinguish between read and write access.

For the next version, I will add an explanation to the commit message.
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> ---
>>   xen/common/device_tree.c              |  3 +++
>>   xen/drivers/passthrough/device_tree.c | 39 +++++++++++++++++++++++++++
>>   xen/include/xen/device_tree.h         |  6 +++++
>>   3 files changed, 48 insertions(+)
>>
>>           if ( ret )
>> +        {
>>               printk(XENLOG_G_ERR "XEN_DOMCTL_assign_dt_device: assign
>> \"%s\""
>>                      " to dom%u failed (%d)\n",
>>                      dt_node_full_name(dev), d->domain_id, ret);
>> +        }
> I am not sure if it is necessary to add "{" and "}" here.
You are right. I will remove it in the next version.
>
>> +
>> +        read_unlock(&dt_host->lock);
>>           break;
>>
>>       case XEN_DOMCTL_deassign_device:
>> @@ -322,25 +345,41 @@ int iommu_do_dt_domctl(struct xen_domctl
>> *domctl, struct domain *d,
>>           if ( domctl->u.assign_device.flags )
>>               break;
>>
>> +        read_lock(&dt_host->lock);
>> +
>>           ret = dt_find_node_by_gpath(domctl->u.assign_device.u.dt.path,
>>                                       domctl->u.assign_device.u.dt.size,
>>                                       &dev);
>>           if ( ret )
>> +        {
>> +            read_unlock(&dt_host->lock);
> I think instead of adding "read_unlock" in every break and return path,
> you can...
>
>>               break;
>> +        }
>>
>>           ret = xsm_deassign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
>> +
>>           if ( ret )
>> +        {
>> +            read_unlock(&dt_host->lock);
>>               break;
>> +        }
>>
>>           if ( d == dom_io )
>> +        {
>> +            read_unlock(&dt_host->lock);
>>               return -EINVAL;
> ...do something like:
>
> ret = -EINVAL;
> break;
>
> here, and then add one single "read_unlock" before the "return ret;"
> in the bottom of the function?
Will do.
>
>> +        }
>>
>>           ret = iommu_deassign_dt_device(d, dev);
>>
>>           if ( ret )
>> +        {
>>               printk(XENLOG_G_ERR "XEN_DOMCTL_assign_dt_device: assign
>> \"%s\""
>>                      " to dom%u failed (%d)\n",
>>                      dt_node_full_name(dev), d->domain_id, ret);
>> +        }
> Same here. I am not sure if it is necessary to add "{" and "}".
> Kind regards,
> Henry



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 17:28:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 17:28:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521177.809552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnNDW-00056b-N2; Fri, 14 Apr 2023 17:28:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521177.809552; Fri, 14 Apr 2023 17:28:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnNDW-00056U-KL; Fri, 14 Apr 2023 17:28:10 +0000
Received: by outflank-mailman (input) for mailman id 521177;
 Fri, 14 Apr 2023 17:28:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YiN8=AF=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pnNDV-00056O-2w
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 17:28:09 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20619.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::619])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b7fba173-dae9-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 19:28:07 +0200 (CEST)
Received: from MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
 by PH7PR12MB7428.namprd12.prod.outlook.com (2603:10b6:510:203::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.33; Fri, 14 Apr
 2023 17:28:03 +0000
Received: from MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea]) by MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea%6]) with mapi id 15.20.6298.030; Fri, 14 Apr 2023
 17:28:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7fba173-dae9-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Vh+BYD92ONCseAd15jP65tOXddlfxL6EBES9X0czPozkCzcIlbm/BR6kIvZlnPtX90eLbRL8KEyl+3LKKJaAfWzH0BorPS4WPOXSu9Hp78k6izHsDvD7SKAB2jLVJXMi5ilHifiPmOOgrSLdsiDVHvIQBUVboWBabKHFeK+q72gjJ18DTMSuT6aq/xvlhbIt4yE+V9+uOBodvO3nM60RFCC16UstAY9gL9LktdZSqo79bwr4cm14/LlRybOp3JLhv0gRvU8pdjiUHqgWmhXG7Kqa2uBTZfhX1Yiq1/R/2vmZ5Vz2671d85ELyJbgBA9NttE3N/EG25S8yKhz2PXS9A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=x+qGWBdzemzbWug6XwXX7zGjoSLRS1maroLJGXzCB7I=;
 b=RYjHC7wR7FtvlSgmJLENuIUBOLPYQWOJJrBqpFtFiqR+wPJuHEIruJgyABzmgvELnEQyR7J5kbU1aNkMEx5bT8Uws5I9CjpY/A4LU/ifAW/MKRaPghs5DFVCnKHgtrHFfbqu7/EEYV2axjKPC9g0x8BB3Jm82yVZSsb2iwB04/Guuo8Fu+UUZp/Y5Zv/ge26+VO38Hp5TrgaJNkfQoB+BUeNLbtNJNRId3itYveXrGL9IS+8Pq1qEu7LL8sV6s/VUv3SXML+c1WQCQ/NHAp5PIW9Pu9L0NVHAyBYOs0v6GD7mBOncs6xyXO66Lizc1F+3yOjXBcQSrwszRKDxXV+Vw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x+qGWBdzemzbWug6XwXX7zGjoSLRS1maroLJGXzCB7I=;
 b=GyMroG+PWGqLtMIr5nQZstGlk0lm2WTO966wP7bV8jhMAFh5rk+1GTDe6aTgB2f3f0S1W/DQN1GYf2LRUds3liBrqRZzWJf96yapPw1FgarqN2GQ0EYWfrQzvKH2Fu31WVXEN/7JWXvpWxiLCRcyQmyBVHd7YlNsAkemnXIP1aU=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <5871c848-5d44-1040-9679-7dff25161714@amd.com>
Date: Fri, 14 Apr 2023 10:28:01 -0700
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 01/17] xen/arm/device: Remove __init from function
 type
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-2-vikram.garhwal@amd.com>
 <836d2629-b097-9a8e-7aea-3f83fd13228f@amd.com>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <836d2629-b097-9a8e-7aea-3f83fd13228f@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: BY5PR17CA0067.namprd17.prod.outlook.com
 (2603:10b6:a03:167::44) To MW3PR12MB4409.namprd12.prod.outlook.com
 (2603:10b6:303:2d::23)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: MW3PR12MB4409:EE_|PH7PR12MB7428:EE_
X-MS-Office365-Filtering-Correlation-Id: 0b53a4c2-4231-41cc-31c7-08db3d0d9a30
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0b53a4c2-4231-41cc-31c7-08db3d0d9a30
X-MS-Exchange-CrossTenant-AuthSource: MW3PR12MB4409.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 17:28:03.5230
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Hx/GCMooZyJKJTm17l0XjrZ85wnujiKalKqSkUap+ncOuEKU2c1VcIpyHOE4c985m/VzNbEe6TQUR5ZXoTLuOQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB7428

Hi Michal,

On 4/13/23 2:19 AM, Michal Orzel wrote:
> Hi Vikram,
>
> On 11/04/2023 21:16, Vikram Garhwal wrote:
>>
>> Remove __init from the following functions so they can be accessed during runtime:
>>      1. map_irq_to_domain()
>>      2. handle_device_interrupt()
> s/interrupt/interrupts/ since there is no handle_device_interrupt() function.
>
>>      3. map_range_to_domain()
>>      4. unflatten_dt_node()
>>      5. unflatten_device_tree()
>>
>> Move map_irq_to_domain() prototype from domain_build.h to setup.h.
>>
>> To avoid the breaking the build, following changes are also done:
> 'avoid breaking' instead of 'avoid the breaking'.
>
>> 1. Move map_irq_to_domain(), handle_device_interrupt() and map_range_to_domain() to
> s/interrupt/interrupts/
>
>>      device.c. After removing the __init attribute, these functions are no longer
>>      specific to domain building, so they are moved out of domain_build.c to device.c.
>> 2. Remove static type from handle_device_interrupt().
>>
>> Overall, these changes are done to support dynamic programming of nodes,
>> where an overlay node will be added to the fdt and the unflattened node will be
>> added to dt_host. Furthermore, IRQ and mmio mappings will be done for the added node.
>>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> ---
>>   xen/arch/arm/device.c                   | 145 ++++++++++++++++++++++++
>>   xen/arch/arm/domain_build.c             | 142 -----------------------
>>   xen/arch/arm/include/asm/domain_build.h |   2 -
>>   xen/arch/arm/include/asm/setup.h        |   6 +
>>   xen/common/device_tree.c                |  16 +--
>>   5 files changed, 159 insertions(+), 152 deletions(-)
>>
>> diff --git a/xen/arch/arm/device.c b/xen/arch/arm/device.c
>> index ca8539dee5..fec6e29c42 100644
>> --- a/xen/arch/arm/device.c
>> +++ b/xen/arch/arm/device.c
>> @@ -12,6 +12,9 @@
>>   #include <xen/errno.h>
>>   #include <xen/init.h>
>>   #include <xen/lib.h>
>> +#include <xen/iocap.h>
>> +#include <asm/domain_build.h>
> I can't see why we need to include this header.
I will recheck if this is really needed.
>
>> +#include <asm/setup.h>
> You should keep the alphabetical order so:
> - iocap goes after init.h
> - setup.h goes after device.h
Will change it.
> Apart from that, it looks ok so:
Thanks!
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
>
> ~Michal
>



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 17:51:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 17:51:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521181.809562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnNZy-0008PX-IT; Fri, 14 Apr 2023 17:51:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521181.809562; Fri, 14 Apr 2023 17:51:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnNZy-0008PQ-Ff; Fri, 14 Apr 2023 17:51:22 +0000
Received: by outflank-mailman (input) for mailman id 521181;
 Fri, 14 Apr 2023 17:51:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YiN8=AF=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pnNZw-0008PK-TZ
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 17:51:20 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20627.outbound.protection.outlook.com
 [2a01:111:f400:fe59::627])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f5459c84-daec-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 19:51:19 +0200 (CEST)
Received: from MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
 by CY5PR12MB6551.namprd12.prod.outlook.com (2603:10b6:930:41::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 17:51:05 +0000
Received: from MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea]) by MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea%6]) with mapi id 15.20.6298.030; Fri, 14 Apr 2023
 17:51:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5459c84-daec-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ABqetQ6EY57DXnmcLwCkl4K0btJxstAwaJ0qNF+Bc1F3qZxraUKtkcgD/ptDpJeO+XX0vZQmaICMF0jDfiyjqHYwH0lTf4Ks4qW6nd9iM1Ge7Mw2MN5NgD25z0joOaQP5d0HYK2QQup/7rI4E2k5vfeDYOg45KREeEoBVUiLol8EcM6sWVPKKs7AJx1IxLCBTHqb2ZLscXP9RPgNlIqE8u6Cp/UvCKepyXcLdRIoDc3nLdhvZyY2DrdN/dco/4lNGVGYPt3kVWrc91Bnav+/r8N8qX6bb4XCNaJ8iwrr7N2PdlN1kN2Nr/1WRfGMSDbbioMAvhTYRaX9DGAINm6w3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KqWe/9nIBPcNRaEVaF8DD9CU4ey90w+PK361ATYjsSo=;
 b=AJhlCgC1N4a2aeMCAm/v1oXxuFsCjsttoPYusXD9PIuJaQwDCxG3bqPLMhAIbacHHbuHqWmuvm/B1M/9WXNzKWWlLcFeUBadtSuD4ET6RNSohDNXaaGeP+gCXn8qZPDBkVjMqu8WxQpWwr7ceU4mrWl+9NB7eaP1n1FrfEvsfj4ZeVSwnoecNxaprJ4nc31IdrwS4qQR7YVI+FfKNT9QRgQYKRLlG69IwPOQXLADhCbwRaNc/cEpqsxOobeyUYjQOcG0NRPvqKaRJeme9bo1o8eD1ISMyuPMQn6vbUMot4U1jhPq55u28RVVV4vcUbaVPgEN36xOAOieMqxwCAWewQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KqWe/9nIBPcNRaEVaF8DD9CU4ey90w+PK361ATYjsSo=;
 b=LhnejioaQtRRk7BQvGLII60UG46A2JzzIzQiRC6JANmPElPl3tOC65Djl2lvnFBad1l+gDnKupbcBpexVApwViHG2N6Wg6p4mLvbes7rwMXkdV1QwTum4OtpzbOsETB66rQxMVyvecuOpjx8zeAyQ4BcUZgkQ3st4brONxo69D4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <e8405b8d-40a9-3df4-90e7-89ec7195449c@amd.com>
Date: Fri, 14 Apr 2023 10:51:02 -0700
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 02/17] common/device_tree: change
 __unflatten_device_tree()
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-3-vikram.garhwal@amd.com>
 <869d014a-d325-6592-d51e-e3638ba04076@xen.org>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <869d014a-d325-6592-d51e-e3638ba04076@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: SJ0PR13CA0140.namprd13.prod.outlook.com
 (2603:10b6:a03:2c6::25) To MW3PR12MB4409.namprd12.prod.outlook.com
 (2603:10b6:303:2d::23)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: MW3PR12MB4409:EE_|CY5PR12MB6551:EE_
X-MS-Office365-Filtering-Correlation-Id: 1aaf64ef-2b85-4f80-48b3-08db3d10d184
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1aaf64ef-2b85-4f80-48b3-08db3d10d184
X-MS-Exchange-CrossTenant-AuthSource: MW3PR12MB4409.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 17:51:04.8250
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9tgAUSelpVTgVzOdHzNY08chY0c/yUzlhCUJohAVRsik0Zxx8b5bsN0OvBfotx1ATLwPfsVlLVus4yKWWtStEQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY5PR12MB6551

Hi,
Julien & Michal, thanks for the comments.

On 4/13/23 3:03 AM, Julien Grall wrote:
> Hi,
>
> On 11/04/2023 20:16, Vikram Garhwal wrote:
>> Following changes are done to __unflatten_device_tree():
>>      1. __unflatten_device_tree() is renamed to unflatten_device_tree().
>>      2. Remove static function type.
>>      3. Add handling of memory allocation. This will be useful in 
>> dynamic node
>>          programming: when we unflatten the dt during runtime, 
>> memory allocation
>>          can fail.
>>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> ---
>>   xen/common/device_tree.c      | 10 ++++++----
>>   xen/include/xen/device_tree.h |  5 +++++
>>   2 files changed, 11 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
>> index aed38ff63c..bf847b2584 100644
>> --- a/xen/common/device_tree.c
>> +++ b/xen/common/device_tree.c
>> @@ -2047,7 +2047,7 @@ static unsigned long unflatten_dt_node(const 
>> void *fdt,
>>   }
>>     /**
>> - * __unflatten_device_tree - create tree of device_nodes from flat blob
>> + * unflatten_device_tree - create tree of device_nodes from flat blob
>>    *
>>    * unflattens a device-tree, creating the
>>    * tree of struct device_node. It also fills the "name" and "type"
>> @@ -2056,8 +2056,7 @@ static unsigned long unflatten_dt_node(const 
>> void *fdt,
>>    * @fdt: The fdt to expand
>>    * @mynodes: The device_node tree created by the call
>>    */
>> -static void __unflatten_device_tree(const void *fdt,
>> -                                    struct dt_device_node **mynodes)
>> +void unflatten_device_tree(const void *fdt, struct dt_device_node 
>> **mynodes)
>>   {
>>       unsigned long start, mem, size;
>>       struct dt_device_node **allnextp = mynodes;
>> @@ -2079,6 +2078,9 @@ static void __unflatten_device_tree(const void 
>> *fdt,
>>       /* Allocate memory for the expanded device tree */
>>       mem = (unsigned long)_xmalloc (size + 4, __alignof__(struct 
>> dt_device_node));
>>   +    if ( !mem )
>> +        panic("Cannot allocate memory for unflatten device tree\n");
>
> After your series, unflatten_device_tree() will be called after boot, 
> so we should not unconditionally call panic(). Instead, 
> unflatten_device_tree() should return an error and let the caller 
> decide what to do.
Looks like I misunderstood the v4 comments. I will change this to 
propagate an error on failure to the handle_add_overlay_nodes() caller, 
which will then forward the error to the toolstack.
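To make the intended change concrete, here is a minimal, self-contained sketch (not the actual Xen code: the size computation and unflattening are stubbed out, and the function bodies are hypothetical) of the error-propagation pattern being asked for. unflatten_device_tree() returns an errno-style code instead of panicking, and the runtime caller handle_add_overlay_nodes() forwards the failure so it can reach the toolstack:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

struct dt_device_node;   /* opaque for this sketch */

/* Hypothetical reworked unflatten: report allocation failure instead of
 * calling panic(), since this now runs after boot as well. */
static int unflatten_device_tree(const void *fdt,
                                 struct dt_device_node **mynodes)
{
    void *mem;

    if ( !fdt )
        return -EINVAL;

    /* Size computation elided; the key change is that a failed
     * allocation is reported, not fatal. */
    mem = malloc(64);
    if ( !mem )
        return -ENOMEM;

    /* ... unflattening elided ... */
    *mynodes = mem;
    return 0;
}

/* Runtime caller: propagate the error upward rather than panicking,
 * so the toolstack can see the failure. */
static int handle_add_overlay_nodes(const void *overlay_fdt,
                                    struct dt_device_node **out)
{
    int rc = unflatten_device_tree(overlay_fdt, out);

    if ( rc )
        return rc;   /* forwarded toward the toolstack */
    return 0;
}
```

The boot-time caller, by contrast, can still panic() on a nonzero return, which matches the distinction drawn below.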
>
> I suggest reading misc/xen-error-handling.txt to understand when to 
> use panic()/BUG() & co. For...
>
>
>> +
>>       ((__be32 *)mem)[size / 4] = cpu_to_be32(0xdeadbeef);
>>         dt_dprintk("  unflattening %lx...\n", mem);
>> @@ -2179,7 +2181,7 @@ dt_find_interrupt_controller(const struct 
>> dt_device_match *matches)
>>     void __init dt_unflatten_host_device_tree(void)
>>   {
>> -    __unflatten_device_tree(device_tree_flattened, &dt_host);
>> +    unflatten_device_tree(device_tree_flattened, &dt_host);
>
> ... this caller should be a panic() (this is OK here because it 
> is boot code).
>
> But for your new caller, you should properly report the error back to 
> the toolstack.
Understood, I will change it in the next version.
>
> Also, unflatten_dt_node() (called by __unflatten_device_tree()) seems 
> to have some failure cases. Can you explain why they are not properly 
> propagated in your case? Are you trusting the device-tree to always be 
> valid?
For dynamic programming, while adding a node (see patch: [XEN][PATCH v5 
14/17] xen/arm: Implement device tree node addition functionalities), 
fdt_overlay_apply() is called before unflatten_device_tree() is called. 
fdt_overlay_apply() will catch the invalid device-tree overlay nodes.
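As a rough illustration of that ordering (with libfdt's fdt_overlay_apply() replaced by a stub, since this sketch does not link against libfdt, and add_overlay() being a hypothetical helper), the point is that overlay validation happens before any unflattening:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stand-in for libfdt validation: rejects a "bad" overlay blob. */
static int fdt_overlay_apply_stub(void *fdt, void *fdto)
{
    (void)fdt;
    return fdto ? 0 : -EINVAL;
}

/* The overlay is applied to a copy of the host FDT first, so malformed
 * overlays are rejected before unflatten_device_tree() ever runs. */
static int add_overlay(void *fdt_copy, void *fdto, int *unflattened)
{
    int rc = fdt_overlay_apply_stub(fdt_copy, fdto);

    if ( rc )
        return rc;     /* invalid overlay caught here... */

    *unflattened = 1;  /* ...so unflattening only sees a valid, merged FDT */
    return 0;
}
```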

Regards,
Vikram
>
> Cheers,
>



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 17:54:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 17:54:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521186.809571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnNd1-0000bs-3k; Fri, 14 Apr 2023 17:54:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521186.809571; Fri, 14 Apr 2023 17:54:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnNd1-0000bl-17; Fri, 14 Apr 2023 17:54:31 +0000
Received: by outflank-mailman (input) for mailman id 521186;
 Fri, 14 Apr 2023 17:54:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YiN8=AF=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pnNd0-0000bd-8o
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 17:54:30 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 65118504-daed-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 19:54:27 +0200 (CEST)
Received: from MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
 by PH0PR12MB5434.namprd12.prod.outlook.com (2603:10b6:510:d5::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 17:54:23 +0000
Received: from MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea]) by MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea%6]) with mapi id 15.20.6298.030; Fri, 14 Apr 2023
 17:54:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65118504-daed-11ed-8611-37d641c3527e
Message-ID: <7dcd0b9a-f986-e6b0-46c6-95936799f39d@amd.com>
Date: Fri, 14 Apr 2023 10:54:21 -0700
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 05/17] libfdt: overlay: change
 overlay_get_target()
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, Julien Grall <julien@xen.org>,
 Vikram Garhwal <fnu.vikram@xilinx.com>,
 David Gibson <david@gibson.dropbear.id.au>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-6-vikram.garhwal@amd.com>
 <601858e2-79b5-35da-df00-2d9061d8ff22@amd.com>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <601858e2-79b5-35da-df00-2d9061d8ff22@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: SJ0PR13CA0014.namprd13.prod.outlook.com
 (2603:10b6:a03:2c0::19) To MW3PR12MB4409.namprd12.prod.outlook.com
 (2603:10b6:303:2d::23)
MIME-Version: 1.0

Hi Michal,

On 4/13/23 6:11 AM, Michal Orzel wrote:
> Hi Vikram,
>
> On 11/04/2023 21:16, Vikram Garhwal wrote:
>>
>> Rename overlay_get_target() to fdt_overlay_target_offset() and remove static
>> function type.
>>
>> This is done to get the target path for the overlay nodes which is very useful
>> in many cases. For example, Xen hypervisor needs it when applying overlays
>> because Xen needs to do further processing of the overlay nodes, e.g. mapping of
>> resources (IRQs and IOMMUs) to other VMs, creation of SMMU pagetables, etc.
>>
>> Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
>> Message-Id: <1637204036-382159-2-git-send-email-fnu.vikram@xilinx.com>
>> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
>> Origin: https://github.com/dgibson/dtc 45f3d1a095dd
> Wouldn't it be better to point to the main dtc repository under kernel.org rather than github?
> Origin: git://git.kernel.org/pub/scm/utils/dtc/dtc.git 45f3d1a095dd
Okay, I will change it in the next version. Are there any 
guidelines/notes on this preference? That would be helpful to me in 
future series.
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> In any case:
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Thank you!
>
> ~Michal



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 18:04:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 18:04:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521190.809582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnNms-0002CR-30; Fri, 14 Apr 2023 18:04:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521190.809582; Fri, 14 Apr 2023 18:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnNmr-0002CJ-Vz; Fri, 14 Apr 2023 18:04:41 +0000
Received: by outflank-mailman (input) for mailman id 521190;
 Fri, 14 Apr 2023 18:04:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YiN8=AF=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pnNmq-0002CC-MX
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 18:04:40 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20605.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d11371bf-daee-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 20:04:38 +0200 (CEST)
Received: from MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
 by IA0PR12MB8862.namprd12.prod.outlook.com (2603:10b6:208:48e::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Fri, 14 Apr
 2023 18:04:34 +0000
Received: from MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea]) by MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea%6]) with mapi id 15.20.6298.030; Fri, 14 Apr 2023
 18:04:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d11371bf-daee-11ed-8611-37d641c3527e
Message-ID: <e7a2790b-31eb-a933-af6e-e01e0d4dd5ac@amd.com>
Date: Fri, 14 Apr 2023 11:04:31 -0700
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 06/17] xen/device-tree: Add
 device_tree_find_node_by_path() to find nodes in device tree
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, Julien Grall <julien@xen.org>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-7-vikram.garhwal@amd.com>
 <9211242e-102c-4468-c35b-c88f8e31b274@amd.com>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <9211242e-102c-4468-c35b-c88f8e31b274@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: BYAPR01CA0052.prod.exchangelabs.com (2603:10b6:a03:94::29)
 To MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
MIME-Version: 1.0

Hi Michal,

On 4/13/23 6:40 AM, Michal Orzel wrote:
> Hi Vikram,
>
> On 11/04/2023 21:16, Vikram Garhwal wrote:
>>
>> Add device_tree_find_node_by_path() to find a matching node with path for a
>> dt_device_node.
>>
>> Reason behind this function:
>>      Each time overlay nodes are added using .dtbo, a new fdt (memcpy of
>>      device_tree_flattened) is created and updated with overlay nodes. This
>>      updated fdt is further unflattened to a dt_host_new. Next, we need to find
>>      the overlay nodes in dt_host_new, find the overlay node's parent in dt_host
>>      and add the nodes as child under their parent in the dt_host. Thus we need
>>      this function to search for node in different unflattened device trees.
> You do not mention making dt_find_node_by_path() static inline.
I will add a comment about it in v6.
>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> ---
>>   xen/common/device_tree.c      |  5 +++--
>>   xen/include/xen/device_tree.h | 17 +++++++++++++++--
>>   2 files changed, 18 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
>> index bf847b2584..507b4ac5b6 100644
>> --- a/xen/common/device_tree.c
>> +++ b/xen/common/device_tree.c
>> @@ -358,11 +358,12 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
>>       return np;
>>   }
>>
>> -struct dt_device_node *dt_find_node_by_path(const char *path)
>> +struct dt_device_node *device_tree_find_node_by_path(struct dt_device_node *dt,
>> +                                                     const char *path)
>>   {
>>       struct dt_device_node *np;
>>
>> -    dt_for_each_device_node(dt_host, np)
>> +    dt_for_each_device_node(dt, np)
>>           if ( np->full_name && (dt_node_cmp(np->full_name, path) == 0) )
>>               break;
>>
>> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
>> index 58ac12abe3..998f972ebc 100644
>> --- a/xen/include/xen/device_tree.h
>> +++ b/xen/include/xen/device_tree.h
>> @@ -534,13 +534,26 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
>>   struct dt_device_node *dt_find_node_by_alias(const char *alias);
>>
>>   /**
>> - * dt_find_node_by_path - Find a node matching a full DT path
>> + * device_tree_find_node_by_path - Generic function to find a node matching the
>> + * full DT path for any given unflatten device tree
>> + * @dt_node: The device tree to search
> This should be @dt to match the parameter. Also, shouldn't the description say:
> "the node to start searching from"
> or
> "device tree root node"
>
> FWICS, you expect to pass a root node as dt node. However, in device_tree_find_node_by_path()
> you do not check if a provided node is a root node or not (e.g. no parent). Is this intended?
Yeah, the intent was to write a generic function that can search from 
the middle of a device tree, as long as we have the start node to 
search from.

But so far, for dynamic programming, it has been called for root 
nodes only.
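A minimal sketch of that generic search (with struct dt_device_node reduced to just the two fields the walk needs; Xen's dt_for_each_device_node iterates the same allnext chain): because iteration simply starts from whichever node is passed in, the same function works on dt_host, dt_host_new, or a mid-tree start node.

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for Xen's struct dt_device_node. */
struct dt_device_node {
    const char *full_name;
    struct dt_device_node *allnext;   /* next node in depth-first order */
};

/* Walk the allnext chain from @dt, returning the first node whose
 * full path matches @path, or NULL. Nodes before @dt in the chain
 * are never visited, which is what makes a mid-tree start possible. */
static struct dt_device_node *
device_tree_find_node_by_path(struct dt_device_node *dt, const char *path)
{
    struct dt_device_node *np;

    for ( np = dt; np != NULL; np = np->allnext )
        if ( np->full_name && strcmp(np->full_name, path) == 0 )
            return np;

    return NULL;
}
```

One consequence of not checking for a root node: searching from a non-root start silently excludes everything earlier in the chain, which is why callers should be clear about which node they pass.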
>
> ~Michal



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 18:09:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 18:09:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521193.809591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnNrC-0002oO-Jk; Fri, 14 Apr 2023 18:09:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521193.809591; Fri, 14 Apr 2023 18:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnNrC-0002oH-H9; Fri, 14 Apr 2023 18:09:10 +0000
Received: by outflank-mailman (input) for mailman id 521193;
 Fri, 14 Apr 2023 18:09:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pnNrB-0002oB-DM
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 18:09:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pnNrB-0001vX-9J; Fri, 14 Apr 2023 18:09:09 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227] helo=[10.95.152.63])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pnNrB-0004G0-31; Fri, 14 Apr 2023 18:09:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=tnkLueb8MbcDuwKISZhvDbBJkG7Eu680dhtJ0tO48i0=; b=Oe/1RZpheYpNyVOx19OjkZq/3S
	XMEJheDEB68R21iC5Fuejj0JXlfs7/Xst0HxwfO16+3II4O3BqkYCksnqRmTd4wuf9hTlR33yDH9J
	B9Gjsm489cgVOy9oA085++/CG0jPKNepSyRiW6yiTnHYLHMgDYL+lXQkb6deQqSHVoxQ=;
Message-ID: <1198aebe-caa7-fefe-8c09-db7a14ec7c34@xen.org>
Date: Fri, 14 Apr 2023 19:09:07 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 02/17] common/device_tree: change
 __unflatten_device_tree()
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-3-vikram.garhwal@amd.com>
 <869d014a-d325-6592-d51e-e3638ba04076@xen.org>
 <e8405b8d-40a9-3df4-90e7-89ec7195449c@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <e8405b8d-40a9-3df4-90e7-89ec7195449c@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 14/04/2023 18:51, Vikram Garhwal wrote:
> On 4/13/23 3:03 AM, Julien Grall wrote:
>> Hi,
>>
>> On 11/04/2023 20:16, Vikram Garhwal wrote:
>>> The following changes are made to __unflatten_device_tree():
>>>      1. __unflatten_device_tree() is renamed to unflatten_device_tree().
>>>      2. Remove static function type.
>>>      3. Add handling of memory allocation failure. This will be useful
>>>         in dynamic node programming: when we unflatten the dt at
>>>         runtime, memory allocation can fail.
>>>
>>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>>> ---
>>>   xen/common/device_tree.c      | 10 ++++++----
>>>   xen/include/xen/device_tree.h |  5 +++++
>>>   2 files changed, 11 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
>>> index aed38ff63c..bf847b2584 100644
>>> --- a/xen/common/device_tree.c
>>> +++ b/xen/common/device_tree.c
>>> @@ -2047,7 +2047,7 @@ static unsigned long unflatten_dt_node(const 
>>> void *fdt,
>>>   }
>>>     /**
>>> - * __unflatten_device_tree - create tree of device_nodes from flat blob
>>> + * unflatten_device_tree - create tree of device_nodes from flat blob
>>>    *
>>>    * unflattens a device-tree, creating the
>>>    * tree of struct device_node. It also fills the "name" and "type"
>>> @@ -2056,8 +2056,7 @@ static unsigned long unflatten_dt_node(const 
>>> void *fdt,
>>>    * @fdt: The fdt to expand
>>>    * @mynodes: The device_node tree created by the call
>>>    */
>>> -static void __unflatten_device_tree(const void *fdt,
>>> -                                    struct dt_device_node **mynodes)
>>> +void unflatten_device_tree(const void *fdt, struct dt_device_node 
>>> **mynodes)
>>>   {
>>>       unsigned long start, mem, size;
>>>       struct dt_device_node **allnextp = mynodes;
>>> @@ -2079,6 +2078,9 @@ static void __unflatten_device_tree(const void 
>>> *fdt,
>>>       /* Allocate memory for the expanded device tree */
>>>       mem = (unsigned long)_xmalloc (size + 4, __alignof__(struct 
>>> dt_device_node));
>>>   +    if ( !mem )
>>> +        panic("Cannot allocate memory for unflatten device tree\n");
>>
>> After your series, unflatten_device_tree() will be called after boot, 
>> so we should not unconditionally called panic(). Instead, 
>> unflatten_device_tree() should return an error and let the caller 
>> decide what to do.
> Looks like I misunderstood the v4 comments. Will change it to propagate 
> an error in case of failure here to the handle_add_overlay_nodes() 
> caller, which will then forward the error to the toolstack.
>>
>> I suggest reading misc/xen-error-handling.txt to understand when to 
>> use panic()/BUG() & co. For...
>>
>>
>>> +
>>>       ((__be32 *)mem)[size / 4] = cpu_to_be32(0xdeadbeef);
>>>         dt_dprintk("  unflattening %lx...\n", mem);
>>> @@ -2179,7 +2181,7 @@ dt_find_interrupt_controller(const struct 
>>> dt_device_match *matches)
>>>     void __init dt_unflatten_host_device_tree(void)
>>>   {
>>> -    __unflatten_device_tree(device_tree_flattened, &dt_host);
>>> +    unflatten_device_tree(device_tree_flattened, &dt_host);
>>
>> ... for this caller it should be a panic() (this is OK here because it 
>> is boot code).
>>
>> But for your new caller, you should properly report the error back to 
>> the toolstack.
> Understood, will change it in next version.
>>
>> Also, unflatten_dt_node() (called by __unflatten_device_tree()) seems 
>> to have some failure cases. Can you explain why they are not properly 
>> propagated in your case? Are you trusting the device-tree to always be 
>> valid?
> For dynamic programming, while adding a node (see patch: [XEN][PATCH v5 
> 14/17] xen/arm: Implement device tree node addition functionalities), 
> fdt_overlay_apply() is called before unflatten_device_tree(). 
> fdt_overlay_apply() will catch invalid device-tree overlay nodes.

I agree that in theory fdt_overlay_apply() will catch an invalid 
device-tree. However, neither of the two functions is exempt from bugs, 
and there is no code shared between them (they do not even come from 
the same project).

So we could end up in a situation where fdt_overlay_apply() works but 
unflatten_device_tree() does not. Therefore, I would prefer that the 
latter function properly handle any errors.

Note that unflatten_dt_node() already checks the validity of the DT and 
will return errors. We just need to make sure they are treated as 
errors rather than being ignored.
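
The split being asked for — return an error from the unflattening path and let each caller choose between panic() at boot and reporting to the toolstack at runtime — could look roughly like this. Everything below is a simplified, hypothetical sketch: the names, the ENOMEM stand-in, and the injectable allocator are illustration only, not the real Xen interfaces (the real code calls _xmalloc() directly):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define ENOMEM 12   /* stand-in for Xen's errno value */

struct dt_device_node { int dummy; };

/* Allocator hook so the failure path can be exercised in this sketch;
 * the real code would use _xmalloc() directly. */
typedef void *(*alloc_fn)(size_t size);

/* Always-failing allocator, used to demonstrate the error path. */
void *failing_alloc(size_t size)
{
    (void)size;
    return NULL;
}

/* Sketch: return 0 on success and -ENOMEM on allocation failure
 * instead of panicking, so a boot-time caller can still panic() while
 * a runtime caller forwards the error to the toolstack.  A full
 * version would also propagate unflatten_dt_node()'s own failures. */
int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes,
                          alloc_fn alloc)
{
    void *mem = alloc(4096);   /* placeholder for the computed size */

    if ( !mem )
        return -ENOMEM;        /* caller decides what to do */

    /* ... unflattening work would go here ... */
    free(mem);
    (void)fdt;
    *mynodes = NULL;
    return 0;
}
```

A boot-time wrapper such as dt_unflatten_host_device_tree() would then panic() on a non-zero return, while the runtime overlay path would hand the code back to its caller.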

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 18:28:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 18:28:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521199.809601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnO9p-0005KS-7d; Fri, 14 Apr 2023 18:28:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521199.809601; Fri, 14 Apr 2023 18:28:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnO9p-0005KL-51; Fri, 14 Apr 2023 18:28:25 +0000
Received: by outflank-mailman (input) for mailman id 521199;
 Fri, 14 Apr 2023 18:28:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnO9n-0005K9-J5; Fri, 14 Apr 2023 18:28:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnO9n-0002H2-Fl; Fri, 14 Apr 2023 18:28:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnO9n-00027a-1h; Fri, 14 Apr 2023 18:28:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pnO9n-0000QL-1C; Fri, 14 Apr 2023 18:28:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QrJn6+JnF4EuEdXZYQHgGWBWO3b665cEZalgOdhtzsU=; b=B7X1OKN8iO7DGwGg4DBOfL6+Nz
	5T6cOE6kmyFRKsuzNArTdwiExSqMRxGKNYQgsa6yf1BaJbyOzVRwMyzNI5X8wuaoz3/lWHOhs1M2h
	Lyp0BLcg/kL9Z30gDQPYhKx0ubfRgnM1+DU6mrrK5/KUbxn6WP9elzroKCSxUQBao50U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180262-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180262: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=797f526ae2a83811b0ccbde0138c65a9f137eba5
X-Osstest-Versions-That:
    ovmf=c9fb11f92f52e06bcb1279b467a3b2667757be44
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Apr 2023 18:28:23 +0000

flight 180262 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180262/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 797f526ae2a83811b0ccbde0138c65a9f137eba5
baseline version:
 ovmf                 c9fb11f92f52e06bcb1279b467a3b2667757be44

Last test of basis   180261  2023-04-14 13:43:27 Z    0 days
Testing same since   180262  2023-04-14 16:12:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c9fb11f92f..797f526ae2  797f526ae2a83811b0ccbde0138c65a9f137eba5 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 18:28:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 18:28:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521202.809612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnO9y-0005bD-FF; Fri, 14 Apr 2023 18:28:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521202.809612; Fri, 14 Apr 2023 18:28:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnO9y-0005b6-CO; Fri, 14 Apr 2023 18:28:34 +0000
Received: by outflank-mailman (input) for mailman id 521202;
 Fri, 14 Apr 2023 18:28:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YiN8=AF=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1pnO9w-0005aB-T4
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 18:28:33 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20600.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 26b7787b-daf2-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 20:28:29 +0200 (CEST)
Received: from MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
 by IA1PR12MB7517.namprd12.prod.outlook.com (2603:10b6:208:41a::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 18:28:26 +0000
Received: from MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea]) by MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::f6c3:91ae:9bd8:edea%6]) with mapi id 15.20.6298.030; Fri, 14 Apr 2023
 18:28:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26b7787b-daf2-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=a4Xy9O8E6hhy2D0uaBVXt7J8cZ2Bb/9Nq5h2iHodmEFzwrtekUEBJOAOzedJwSK/8Kg+lW3a8TlaFEFp12mfnJ2oxnmtri7BJ74DD+fgffj5nmy4c2FLwaiNgbKztcxsW+JVY+4+/9LIQxv8cipTeMuNiiP46DlMwNl26RVMy6vGQ4PaTDxaVd8ygOlLe87ONwwzWxVmUiaRjBLtRZVFiAdzM5Utun9A2ujVBgP2egB35lRao+k56krcx6w53HDRw49qjKSJGbtuaHkI3I1C2PXh4LsdPH5GrIMADFXO4VdZCpmQZhsAaRxHAklAWWQ5VFje+yUOJhwO+0gdSFuU3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=te2Eu0q5BWGrhHL/LWGAvbwYJB5zeA7tVLv/8x4nflg=;
 b=kJQIXVgbQPhykXGOB0LzIupgoFF3NAuPyRMTKk28cXwXrHwnXQM1Rc0G3OpMko6J4JoI3zJ29V7k1j9BIY8FSxM/irAL2/IiBGG/68nLgolXMWcsc0dQfNAaye15xWiy+/bSQpLS3SWuabNnDxTNOwDDKeAmEaSGmxXtPxdAsBpJSsfbhQggkCTWiOBK4dDfpGa3CbvtDZZ93d0Rtn6F6quOKU9GLEJkchJAdCOcGsG49sv8+Yy6fTIw7OcP3RC6tB4DD1wMtkZjp3Sg0dD4fRaVKR2vsTxmF+dbJWxZP56ZmqsSiZxsOlRAU2ZzSgVXVN2UNh0PgCuGYXGsMarxAg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=te2Eu0q5BWGrhHL/LWGAvbwYJB5zeA7tVLv/8x4nflg=;
 b=OlIeY2VtgxXr27kZTih79qRSxWKapLumebisiWaTq56UwFnuwNcUal30/c32B/FcFw1kD7buGtpDm6IUL08KyHDzWZXlrdxicD7AjMgpzTdOiyfWAvKp5yRrAGCKlwPlR58uuESzzK/HlNifCvK35PBreFIW+g2MqfZ6Qtpm92o=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <81b7b4ab-1765-67dc-0d0e-8fcf8a8f41d1@amd.com>
Date: Fri, 14 Apr 2023 11:28:23 -0700
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 02/17] common/device_tree: change
 __unflatten_device_tree()
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-3-vikram.garhwal@amd.com>
 <869d014a-d325-6592-d51e-e3638ba04076@xen.org>
 <e8405b8d-40a9-3df4-90e7-89ec7195449c@amd.com>
 <1198aebe-caa7-fefe-8c09-db7a14ec7c34@xen.org>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <1198aebe-caa7-fefe-8c09-db7a14ec7c34@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: BYAPR06CA0063.namprd06.prod.outlook.com
 (2603:10b6:a03:14b::40) To MW3PR12MB4409.namprd12.prod.outlook.com
 (2603:10b6:303:2d::23)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: MW3PR12MB4409:EE_|IA1PR12MB7517:EE_
X-MS-Office365-Filtering-Correlation-Id: be032754-80f5-49ee-529e-08db3d16093f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	niYEMMWwMNa4YMkGG49TFKV0lyedNo771oL1UoFmVm9YtYITpV9IZl/5YzVr1x+0TaKx4H0AFFDVIhscSmaZ5mMqBC4s+2hVvaMrY1k2zRfWVARrxItks9KfPJVqMBbkWFhdT4Nv3FcUgGfCch53L9csgioHT68LCgZ1h02d1yOO1AnxZPdgVzsOXbK0r+u72mcrDZcDF2mWeVOxWDKvSkQPSZslsGHC0wM6ZDHWVEZ5D9Not6RJU15HQSPtff+0rHFBb1DgLDbNEgsnJ6OX+gLKIsmRNaVvjX7sFANNRMKBZaZttMa1K3muGf4d8OzG7MDlDlLr38H50CllNFB7ZxNn0v91IZq3oUekoiwNVQ9AIaT9neJ1qDMpYAY5MF8OZI6tSDQQwpqWwX1L88iHxBdsaUQ51aue08rXnerrPoXzEdelfov2ntpyAIWqZZn923DbovpP94GpXcQlqACVOBBY5+88jHJzB8Y/issUuMwS9N6mOzMDZiE2vPaM1fRAUEBQgnYzcABrzJZLySCtJSLDnsWrjAsVMkjY2oU1EpcAvC5O5B+zToYyJvjQJF+NrwYzKM4Xz5hZaS/fbT2Virk1LY6DzhLrXcn0yFzzb0OcVabmnJmMIPFb3ezg7v6cEOsOg9xpsnhjzMTHSTkJRQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MW3PR12MB4409.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(136003)(376002)(366004)(39860400002)(451199021)(6666004)(5660300002)(6486002)(66556008)(2906002)(66476007)(36756003)(4326008)(66946007)(44832011)(86362001)(38100700002)(41300700001)(8676002)(316002)(8936002)(31696002)(478600001)(53546011)(26005)(6512007)(6506007)(31686004)(2616005)(186003)(83380400001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cmIwVnVFbjdxTXJISmNOYlJXVld6Tnd0Z1VOaTNjV0hkOFJNRHdyOTZUcUJC?=
 =?utf-8?B?RGQ4dW5UUXdwMWVhSUFCTmkxNGRTOGJjTnNaMkhGNDM3dHJDOFlPMk9YbVIx?=
 =?utf-8?B?Qm1ldVQwTFJKeEplbVl3bUFpcnN5b0M5Z2NsMXBCeU9aT3VBcVdEVVJhaWRV?=
 =?utf-8?B?N1g2QTJnNmx0d1dRcnhHay8yMVF6V1dLUXk4bjcyQkZvSWozR1UwMlNWb0xC?=
 =?utf-8?B?RnltR1V0c0E1WXNvVmVOcUpoS2JzQUVyVHUxa2hqNG9PejNhZmwzNXRyNjdD?=
 =?utf-8?B?NFlyY3djOWY3eDUwQTVtVUUyNkoyajFaRUJTcUowT1F6T1BCZ3dWRkJMQk41?=
 =?utf-8?B?Um40YW5LeldWbHd0ZkVRZHdCSVg5SGZhRDU5TWx0YjRrd1VBT1NYZjNFZTVy?=
 =?utf-8?B?SERWMTdselp0UnErNGJ5cDdXNzNUeXAwNjlZbmQ4L0hMUWtkUGZpOTIxVmMx?=
 =?utf-8?B?VGIvcDhzaXZHd2ZURUljM1AwYnh6MmlwNUlqVnZMYXpQcGNHcDdUT3psRy92?=
 =?utf-8?B?RkJjVFUrZnlpMnJMOHY4Ni9aR1ZXQUMyZStINzh1UmsxRFRLVW1PNlNRam9n?=
 =?utf-8?B?dXB6N0RTRlNHZnQ0eWRuZzBackxpMjgwdU5DWHgySzhEc2pNT21qNVByV2dO?=
 =?utf-8?B?ZnlyZUdINlpHLzRGNlhsUHRmUG1FTWhKS05VaVBKUkFBblBCNEU2MmJZSmNn?=
 =?utf-8?B?ZEcrOEw3aE16YVQxLy9XTVNlYXpqMHRydHF2ZGNtdFJWeVRiQUZZUXFuSk5p?=
 =?utf-8?B?ZUh5UWxWNDE3YTViMGVhSXF3UUs5VUNTY210R1cvQmxvTFY1eDFOeTFMVGY4?=
 =?utf-8?B?VTZRVTBMVThRTXJqWVVXeUk3YjFYbWxxWFlkdllwTTVwV3FEMnJHajFBNTZO?=
 =?utf-8?B?NW1pNWdtcVRzSWliZ2g3RWJGZW1lV044UHlQOFFTQVVjREt4eEozUEhaQzVp?=
 =?utf-8?B?Rk5IZmRhS1ZPbkt5YlNKR1hIZnBSbTNjYzB4UEh6VUozUkRMRW5ZRlJ5emY0?=
 =?utf-8?B?U3BVYzhIcnNsRlZoUW1LVEtuNlJIaTBuK2QzdVR0TnBkSGVPM3g0NHZiN0ZP?=
 =?utf-8?B?U1hiRWNpNVIzaXp4MW5jM01DM0s2Ymdudk1lN0JiNlYzbzZKMVF1U0RiKy9w?=
 =?utf-8?B?L0xtMmozOTVubEwzZUdKcDU5RWsybTRHck1hSU5odnhCRWd2UjdneURiNzhp?=
 =?utf-8?B?bnBBS3FFUVd0aDVNT0hqdUVFeG5aTDN4VkdjYmNyb0YycUhWSktYaUdHRkpW?=
 =?utf-8?B?S2s4OG52aWxQZTZpVlZHeTJlaWRlSjVrTExhaTVTUDVLTVVnZktmTlkxcXow?=
 =?utf-8?B?cEpwaHhiY0ZPb0VLblZCbUxETndwM2Z6VS9CZmR5TEtCYlVhV01kNi9IZUda?=
 =?utf-8?B?NXd0cUhoVHp5bUdZYXRBbE55T3Z0U2pscjJlSmFJWlVaVE90Z2lkY0gyczRY?=
 =?utf-8?B?aERiWncyaFRtQ2laWVFORTY2M2ErOTRLclBHMjY1VHdWWFB0eTJOTTB1WEVB?=
 =?utf-8?B?R01RelpPSG1zeW8zQnpvSDhTRXIwdHdVejl6dFhZaVJzcXlLYmZWWm10NG9V?=
 =?utf-8?B?YU42aW1KYUhmQjFYU283SnlyRDA2Q3hRL2NGb0xyWlBVNnhGaWVvd1hCSTQ3?=
 =?utf-8?B?N3NPR3gwOHB6cnV6R2l3K21aaFpCNU55NlhFSkhkNksrd2ZTcnhzNTVmejNO?=
 =?utf-8?B?Y1BicEo1OGUrWi9pTk1tTzVXZ3dNQzBxd2JXbHVidXdNZnMyRE5oQlFwV3FG?=
 =?utf-8?B?QXZROFk2ei96MXQwQnloKzBTTUdsZFNmMTd5VkdIa0VpbnkvQWZXQktJQmkw?=
 =?utf-8?B?UnVsMkdaS1lOaTU2Tm5rOTRJV1BodWJUZnNIUkptNm1vdXdvY3RWbEdCVllw?=
 =?utf-8?B?NTlCZy8vZ0IxeWgya0J3Mmc1UnRaSVhRaGw5bnQrMVBpNjhqV0lFUkYwcjhT?=
 =?utf-8?B?d3RMVmJYbzBLVElxZlN4ZjFhL0FOOGRraWxkd0IvQ1FxODNpQW9SQjMwZGVC?=
 =?utf-8?B?Ri9hN0ZCazA4aWhMeCtLMjAzOGhJRkVpSHA3OTFubk9maE4rWnBJVy85S0p1?=
 =?utf-8?B?Tm5IQ2ZjMnBIVGFrRXlkSlZHS1JNbFdWbzBVcGY5VnZrYmtrNWQvSi9iTnNw?=
 =?utf-8?Q?Omc0PZEMWvIFVrghesSeiEW2V?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: be032754-80f5-49ee-529e-08db3d16093f
X-MS-Exchange-CrossTenant-AuthSource: MW3PR12MB4409.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 18:28:25.8283
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 60ZsMNeywH+AHE4n8fRUH/paPNxwMIN4g1j9f1vk7d7U1G/t01MmGIaARGMynNBcGWJeaBrA1vroMME/xySyYw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB7517

Hi,

On 4/14/23 11:09 AM, Julien Grall wrote:
> Hi,
>
> On 14/04/2023 18:51, Vikram Garhwal wrote:
>> On 4/13/23 3:03 AM, Julien Grall wrote:
>>> Hi,
>>>
>>> On 11/04/2023 20:16, Vikram Garhwal wrote:
>>>> The following changes are made to 
>>>> __unflatten_device_tree():
>>>>      2. Remove static function type.
>>>>      3. Add handling of memory allocation failure. This will be
>>>>         useful in dynamic node programming: when we unflatten the dt
>>>>         at runtime, memory allocation can fail.
>>>>
>>>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>>>> ---
>>>>   xen/common/device_tree.c      | 10 ++++++----
>>>>   xen/include/xen/device_tree.h |  5 +++++
>>>>   2 files changed, 11 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
>>>> index aed38ff63c..bf847b2584 100644
>>>> --- a/xen/common/device_tree.c
>>>> +++ b/xen/common/device_tree.c
>>>> @@ -2047,7 +2047,7 @@ static unsigned long unflatten_dt_node(const 
>>>> void *fdt,
>>>>   }
>>>>     /**
>>>> - * __unflatten_device_tree - create tree of device_nodes from flat 
>>>> blob
>>>> + * unflatten_device_tree - create tree of device_nodes from flat blob
>>>>    *
>>>>    * unflattens a device-tree, creating the
>>>>    * tree of struct device_node. It also fills the "name" and "type"
>>>> @@ -2056,8 +2056,7 @@ static unsigned long unflatten_dt_node(const 
>>>> void *fdt,
>>>>    * @fdt: The fdt to expand
>>>>    * @mynodes: The device_node tree created by the call
>>>>    */
>>>> -static void __unflatten_device_tree(const void *fdt,
>>>> -                                    struct dt_device_node **mynodes)
>>>> +void unflatten_device_tree(const void *fdt, struct dt_device_node 
>>>> **mynodes)
>>>>   {
>>>>       unsigned long start, mem, size;
>>>>       struct dt_device_node **allnextp = mynodes;
>>>> @@ -2079,6 +2078,9 @@ static void __unflatten_device_tree(const 
>>>> void *fdt,
>>>>       /* Allocate memory for the expanded device tree */
>>>>       mem = (unsigned long)_xmalloc (size + 4, __alignof__(struct 
>>>> dt_device_node));
>>>>   +    if ( !mem )
>>>> +        panic("Cannot allocate memory for unflatten device tree\n");
>>>
>>> After your series, unflatten_device_tree() will be called after 
>>> boot, so we should not unconditionally called panic(). Instead, 
>>> unflatten_device_tree() should return an error and let the caller 
>>> decide what to do.
>> Looks like I misunderstood the v4 comments. Will change it to propagate 
>> an error in case of failure here to the handle_add_overlay_nodes() 
>> caller, which will then forward the error to the toolstack.
>>>
>>> I suggest reading misc/xen-error-handling.txt to understand when to 
>>> use panic()/BUG() & co. For...
>>>
>>>
>>>> +
>>>>       ((__be32 *)mem)[size / 4] = cpu_to_be32(0xdeadbeef);
>>>>         dt_dprintk("  unflattening %lx...\n", mem);
>>>> @@ -2179,7 +2181,7 @@ dt_find_interrupt_controller(const struct 
>>>> dt_device_match *matches)
>>>>     void __init dt_unflatten_host_device_tree(void)
>>>>   {
>>>> -    __unflatten_device_tree(device_tree_flattened, &dt_host);
>>>> +    unflatten_device_tree(device_tree_flattened, &dt_host);
>>>
>>> ... for this caller it should be a panic() (this is OK here because it 
>>> is boot code).
>>>
>>> But for your new caller, you should properly report the error back 
>>> to the toolstack.
>> Understood, will change it in next version.
>>>
>>> Also, unflatten_dt_node() (called by __unflatten_device_tree()) 
>>> seems to have some failure cases. Can you explain why they are not 
>>> properly propagated in your case? Are you trusting the device-tree 
>>> to always be valid?
>> For dynamic programming, while adding a node (see patch: [XEN][PATCH 
>> v5 14/17] xen/arm: Implement device tree node addition 
>> functionalities), fdt_overlay_apply() is called before 
>> unflatten_device_tree(). fdt_overlay_apply() will catch invalid 
>> device-tree overlay nodes.
>
> I agree that in theory fdt_overlay_apply() will catch an invalid 
> device-tree. However, neither of the two functions is exempt from bugs, 
> and there is no code shared between them (they do not even come from 
> the same project).
>
> So we could end up in a situation where fdt_overlay_apply() works but 
> unflatten_device_tree() does not. Therefore, I would prefer that the 
> latter function properly handle any errors.
>
> Note that unflatten_dt_node() already checks the validity of the DT 
> and will return errors. We just need to make sure they are treated as 
> errors rather than being ignored.
Thanks for the explanation. Will add error handling for 
unflatten_dt_node() and unflatten_device_tree() failures.
>
> Cheers,
>



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 18:33:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 18:33:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521209.809622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOEz-0007KT-0o; Fri, 14 Apr 2023 18:33:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521209.809622; Fri, 14 Apr 2023 18:33:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOEy-0007KM-U9; Fri, 14 Apr 2023 18:33:44 +0000
Received: by outflank-mailman (input) for mailman id 521209;
 Fri, 14 Apr 2023 18:33:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hfD7=AF=gmail.com=this.is.a0lson@srs-se1.protection.inumbo.net>)
 id 1pnOEx-0007KE-Ir
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 18:33:43 +0000
Received: from mail-ot1-x330.google.com (mail-ot1-x330.google.com
 [2607:f8b0:4864:20::330])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e1898835-daf2-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 20:33:42 +0200 (CEST)
Received: by mail-ot1-x330.google.com with SMTP id
 39-20020a9d04aa000000b006a1370e214aso7460524otm.11
 for <xen-devel@lists.xenproject.org>; Fri, 14 Apr 2023 11:33:42 -0700 (PDT)
Received: from [192.168.104.105] (c-73-32-128-233.hsd1.tx.comcast.net.
 [73.32.128.233]) by smtp.gmail.com with ESMTPSA id
 v16-20020a05683011d000b0069457b86060sm2001827otq.47.2023.04.14.11.33.40
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 Apr 2023 11:33:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1898835-daf2-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681497221; x=1684089221;
        h=mime-version:user-agent:content-transfer-encoding:date:to:from
         :subject:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=l8ln56Hzz0BqSpD/wYa3XGMyMNJG6KUwksNpEl82aL4=;
        b=GZ09WK/rKhSZjCLyzpEbp0gMKunpkh56PSXJmInmZIoWGCQzd7Y3XeUN4mMTHNRK6h
         YIs802CWOA+zKs8XC/TXtS2a7wgMGS1gdabSJNhwOLVDFl9Mc7Ei6QnOUv2/l7+F+4b8
         Rnt8H5CBOAri3nIbSMigDK3BrcBAdxypPRxnjJvprvgRPVcUXWP8U53idKYEir0tOrBD
         044e3Botely9gnqj5Jr4NLDdTVHnFHBHL+BizwOsjRsPVSeIlOuCmbOF+Ahl3xu7SH6V
         BrbTB0y4nYYxcDGEEF3NY5yVFHBJQq8FbW4IOaavJdFNkNpW5Pizj/1YsuPFV0lF1OMU
         ow/w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681497221; x=1684089221;
        h=mime-version:user-agent:content-transfer-encoding:date:to:from
         :subject:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=l8ln56Hzz0BqSpD/wYa3XGMyMNJG6KUwksNpEl82aL4=;
        b=FRgTI/vasMx2Y7UFd9LLvj9dxY5WES/kKI+yXlAn5GkqlqLg99UrxEdFreObcaLrBn
         6M+gUgYDDk1Zce70CPVVGmb80mSq7iLBPD2LektMNeJJ411HfHwVGLBma3ogAikS/LH5
         rogTQ7o8yhLDLW+NvO7b+ORopo8gfqGCAqcJcPQ7dIE2rpM2lUjZhrbYMQhwtQ8widII
         R3LuOLdrYF5zCz+0k9ztTydajQHQ9kCzJ3COI3gwZuQI6yLKuKhbtW9wgm87iQX0MFiP
         9BsQTfpgVSv28qDkLFN5cEYnRj9CP7q6hwrnstVDWPOWgb7VG8t+hBwk8gQJX/zzco0k
         N41g==
X-Gm-Message-State: AAQBX9dU+shAnFQf1AbISr6O0j1aFYZQ/C6RU1B8HvGW7VCW/0K7I9P6
	U7e2o+/H8uf50DLFe0X93GDzGSqp1jc=
X-Google-Smtp-Source: AKy350ajpP8J7z63r+4DlL5zbJcOQK7g7+FL7nR31UiQ7mgsXV8Dj7fqnM8zV2muaCO0Mjzgh54eHw==
X-Received: by 2002:a9d:6508:0:b0:68b:d61c:168f with SMTP id i8-20020a9d6508000000b0068bd61c168fmr2740829otl.11.1681497221315;
        Fri, 14 Apr 2023 11:33:41 -0700 (PDT)
Message-ID: <56d02462173267603d7af1503d2e67cb88f2ad5a.camel@gmail.com>
Subject: x86 instruction emulation backstory?
From: Alex Olson <this.is.a0lson@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 14 Apr 2023 13:33:39 -0500
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0

I've been digging into VMX internals and I see why MMIO emulation pretty much
requires x86 instruction emulation.  Even the Linux KVM code borrowed Xen's
emulation...

Thus, I'm trying to understand Xen's x86 emulation implementation...

How was it developed? (x86 instruction handling is incredibly complex!)

Was it originally part of a general-purpose x86 emulator?

It looks like it implements more instructions than just the ones that can
access memory, such as "AAM". Why is this?

Thanks

-Alex




From xen-devel-bounces@lists.xenproject.org Fri Apr 14 18:58:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 18:58:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521214.809642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOcF-0001eI-A2; Fri, 14 Apr 2023 18:57:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521214.809642; Fri, 14 Apr 2023 18:57:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOcF-0001eB-6u; Fri, 14 Apr 2023 18:57:47 +0000
Received: by outflank-mailman (input) for mailman id 521214;
 Fri, 14 Apr 2023 18:57:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OWbr=AF=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pnOcD-0001O1-SR
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 18:57:45 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20601.outbound.protection.outlook.com
 [2a01:111:f400:fe59::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d0f641f-daf6-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 20:57:45 +0200 (CEST)
Received: from CYZPR17CA0005.namprd17.prod.outlook.com (2603:10b6:930:8c::21)
 by SA1PR12MB8599.namprd12.prod.outlook.com (2603:10b6:806:254::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28; Fri, 14 Apr
 2023 18:57:42 +0000
Received: from CY4PEPF0000C977.namprd02.prod.outlook.com
 (2603:10b6:930:8c:cafe::5) by CYZPR17CA0005.outlook.office365.com
 (2603:10b6:930:8c::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.36 via Frontend
 Transport; Fri, 14 Apr 2023 18:57:42 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CY4PEPF0000C977.mail.protection.outlook.com (10.167.241.133) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Fri, 14 Apr 2023 18:57:41 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 14 Apr
 2023 13:57:40 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 14 Apr 2023 13:57:39 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d0f641f-daf6-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZVHNIhQawx2IMLPEDwhhvyQ0AskgqeAeqibmTkNk0c73CcLlidKXboAqFzuqLznrR4S6rD0D11mXho7iarj6vuSLsQhq9QAfwW0PjzJD6RO3Y3P9MQIRtAY3R6CwmXXtS0ybmAdUilByhfQOsDebq6YxwiCCObFXAqBH5A7hpVLsq2HFZEIUqvs04DvAwmiyHszIh/CyAnjcElHtdf8swOmdfaFKBM1I+V83Lgz/p22QY5GRhHKf6GvdXpzNDkiJpVb/iEb4CLoeb+fi08MPvhigVDe4NhXt9thD+DI7CmLaOQLiLzfK4NWBZDUzfLA6tqwo2yPG0wPJqT4Jz6l4eA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z33OHiDZlWIHfUqu2WZ+dteJoaqpdcUN2xGgnP65iNE=;
 b=A3mE3zhgQQMTKZGlbzpiW3sBgba8XcUINhrq7m92V9Fvm+vTk0YrqswIrPi46ARpnEccJP8jIf2FGj/Qe2CEuVb92hyYD25GykGld5t1IREpXCS+AF7Yz86/tRmU89+LeltefTYNlU4E/JaPK7WRL3dFkDTvPbw4kQ5HICgydxOmSwGPV2Duhbffk8uybx5H1HIuNzK6EpI4SLgE1DXYtg2+wHisFd2cmCspWiAn/6qpJwj0XwgrWd8NYD3osLpWOpzDyDttMsbHMzTAeIFUksUISCc0vLmdXGO9WAXy0rANna9CRYDK3MJBwITEU2Jt7wOIQ8FJKTTXqMLZb0EX0A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z33OHiDZlWIHfUqu2WZ+dteJoaqpdcUN2xGgnP65iNE=;
 b=kJD5WWKbFY8zyBHg9v8QvIhNtbyngzYa/AZOTwccVLfv3rBFrloYesqyh8e/99Oe7YPAU3Bld8/t3jsX/bLT8ZlJWI92xZuYfN/eIx4L6DJBqBWQBuwV73tcgYNQZ2BKKIg+P5/3KUO08Dahkz/qe2OqgH//Dgbm+upD0URVCiM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH 1/3] xen/arm: mark __guest_cmpxchg always_inline
Date: Fri, 14 Apr 2023 14:57:12 -0400
Message-ID: <20230414185714.292881-2-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230414185714.292881-1-stewart.hildebrand@amd.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000C977:EE_|SA1PR12MB8599:EE_
X-MS-Office365-Filtering-Correlation-Id: 8598bab1-0dbd-4004-299e-08db3d1a1fca
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	wA+Jg7uRF9g5PnFKZUnpgcCiILrMoIUHeTMMQCkSlKSHyW9lhOwsiLwAtOekh4nzTNKKCqGEqvoHfXv65CdFXG6yx14/rmuI7QPvpEIqSgh2+My53Z2MogVu+49VHJx05bxf2faVlevcUbqOgCz1Xzgu2937PWfDWisOtKVeWCabiCrwKgSFrTyn/Ja4gpc6QFODy8qiEBFVhL2EF5khFrLvv2cBjRbSxgLOqhEosM/ekNhhNpP8UAoQ9HowE4AXZ1fo08n9y2/dFYtCfK3+E59zDcT2hDZ1pzAfjDCjHqUUJUHdTXQK6B92Ue5YnDJyNFQOu+jZRO/NSmIAJEEWgKRSCWnHNxKNwMMvDednyfZDr8v2VgjUD+qWd1mtNQ1gTthiRWpzHKE2BfG9u6BrfypnSaUGXflxnyntOOKvBmjBmJB2jVQ5YO1O39tB5h7MDQ/HV1J1s+/T1ge6cMxRLy2MRaN/OtkuWLerdyZkod0dzoWlh+MulvKCuA8R7D86UkTZbA/rj7gUrXZEIhMJUiXeMbUzoKcjDFun6z7me6Ea7adaHNXNcNqwj/p0cxNh0da7RzDQ7MAb289Fu4aKPXLQaoNCaKluudpWY1RujglJHthcByiJLry/b/hNmy7Kl0RqvLtuqC93ZLkq2+htPqbvTgnNdhofWFmwQDkGut+8sfQqhushKpfSOhcXR0a5ZRIJgK/wFBgEmGioMOKCZgqd4DmgBkQU+sLZOKyyddg=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(376002)(136003)(39860400002)(396003)(451199021)(46966006)(40470700004)(36840700001)(2906002)(70206006)(70586007)(426003)(336012)(82310400005)(5660300002)(44832011)(36756003)(8676002)(40460700003)(8936002)(41300700001)(316002)(6916009)(4326008)(40480700001)(478600001)(54906003)(6666004)(26005)(81166007)(82740400003)(2616005)(1076003)(83380400001)(356005)(47076005)(36860700001)(86362001)(186003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 18:57:41.3749
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8598bab1-0dbd-4004-299e-08db3d1a1fca
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000C977.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB8599

When building the hypervisor with -Og, we run into a __bad_cmpxchg link error:

aarch64-none-linux-gnu-ld: prelink.o: in function `__int_cmpxchg':
.../xen/./arch/arm/include/asm/arm64/cmpxchg.h:117: undefined reference to `__bad_cmpxchg'
aarch64-none-linux-gnu-ld: .../xen/./arch/arm/include/asm/arm64/cmpxchg.h:117: undefined reference to `__bad_cmpxchg'
aarch64-none-linux-gnu-ld: ./.xen-syms.0: hidden symbol `__bad_cmpxchg' isn't defined
aarch64-none-linux-gnu-ld: final link failed: bad value

This is due to the function __guest_cmpxchg not being inlined in the -Og build
with gcc 12. Fix this by marking __guest_cmpxchg always_inline.
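For readers unfamiliar with the __bad_cmpxchg idiom: the symbol is deliberately left undefined in Xen, so a call to it only links cleanly if the compiler constant-folds the size switch away, which in turn requires every caller to be inlined. A minimal sketch of the pattern (hypothetical names, not Xen's actual implementation; the fallback is defined here only so the sketch links at any optimization level, whereas Xen leaves it undefined on purpose):

```c
#include <assert.h>

/* In Xen, this symbol is left undefined so that an unsupported operand
 * size fails at link time. It is defined below only to keep this sketch
 * linkable regardless of optimization level. */
extern unsigned long __bad_cmpxchg_sketch(volatile void *ptr,
                                          unsigned int size);

#define always_inline_sketch inline __attribute__((__always_inline__))

static always_inline_sketch unsigned long
cmpxchg_sized(volatile void *ptr, unsigned long old, unsigned long new,
              unsigned int size)
{
    /* Once this function is inlined into a caller passing a constant
     * 'size', dead-branch elimination removes the __bad_cmpxchg_sketch
     * call. At -Og, gcc 12 may decline a plain 'inline' hint, leaving
     * the undefined reference behind -- hence always_inline in the
     * patch above. */
    switch ( size )
    {
    case 4:
    {
        unsigned int o = old;
        __atomic_compare_exchange_n((volatile unsigned int *)ptr, &o,
                                    (unsigned int)new, 0,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        return o;
    }
    case 8:
        __atomic_compare_exchange_n((volatile unsigned long *)ptr, &old,
                                    new, 0,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        return old;
    default:
        return __bad_cmpxchg_sketch(ptr, size);
    }
}

/* Sketch-only fallback; real Xen has no such definition. */
unsigned long __bad_cmpxchg_sketch(volatile void *ptr, unsigned int size)
{
    (void)ptr;
    (void)size;
    assert(!"unsupported cmpxchg size");
    return 0;
}
```

With the always_inline attribute, the dispatch is resolved per call site and the default branch never survives into the object file.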

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
I considered also marking "guest_cmpxchg64", just below in the same file, as
always_inline, but decided against it: that function does not take a size
parameter, so it is not at risk of leaving an unresolved __bad_cmpxchg
reference behind.
---
 xen/arch/arm/include/asm/guest_atomics.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/include/asm/guest_atomics.h b/xen/arch/arm/include/asm/guest_atomics.h
index 9e2e96d4ff72..a1745f8613f6 100644
--- a/xen/arch/arm/include/asm/guest_atomics.h
+++ b/xen/arch/arm/include/asm/guest_atomics.h
@@ -86,11 +86,11 @@ static inline void guest_clear_mask16(struct domain *d, uint16_t mask,
     domain_unpause(d);
 }
 
-static inline unsigned long __guest_cmpxchg(struct domain *d,
-                                            volatile void *ptr,
-                                            unsigned long old,
-                                            unsigned long new,
-                                            unsigned int size)
+static always_inline unsigned long __guest_cmpxchg(struct domain *d,
+                                                   volatile void *ptr,
+                                                   unsigned long old,
+                                                   unsigned long new,
+                                                   unsigned int size)
 {
     unsigned long oldval = old;
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 18:58:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 18:58:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521213.809631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOc9-0001OE-Ty; Fri, 14 Apr 2023 18:57:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521213.809631; Fri, 14 Apr 2023 18:57:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOc9-0001O7-RI; Fri, 14 Apr 2023 18:57:41 +0000
Received: by outflank-mailman (input) for mailman id 521213;
 Fri, 14 Apr 2023 18:57:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OWbr=AF=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pnOc8-0001O1-Rs
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 18:57:40 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on20601.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3968bfd1-daf6-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 20:57:39 +0200 (CEST)
Received: from CY8PR19CA0012.namprd19.prod.outlook.com (2603:10b6:930:44::29)
 by BN9PR12MB5178.namprd12.prod.outlook.com (2603:10b6:408:11b::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 18:57:35 +0000
Received: from CY4PEPF0000C976.namprd02.prod.outlook.com
 (2603:10b6:930:44:cafe::23) by CY8PR19CA0012.outlook.office365.com
 (2603:10b6:930:44::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.36 via Frontend
 Transport; Fri, 14 Apr 2023 18:57:35 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CY4PEPF0000C976.mail.protection.outlook.com (10.167.241.132) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Fri, 14 Apr 2023 18:57:34 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 14 Apr
 2023 13:57:34 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 14 Apr 2023 13:57:32 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3968bfd1-daf6-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Jo4FIzpxPgNZ1XanipsY1GEBnOSLUWDAEhh97vvlXlmbf1j5WK7XmyMtqzEmueEid1dPUG00no7Pa7uOt9ABqrxZqvwVsoIUYdAxuH1oURdX8DlVDgVlBCZzUHm857cBef7Rx3ePy2b/QbhidfdcPFJvJ1VeFEimmoM9QAhYugMN0X7Gx//UlP0cFYIWgqAkDTSX570UpDvNBx+5zqPThsVMG+ZsoKX/mjq5otMkG2UUCkDVny0TMvDFlqLtGyHeOwo4AyJK0Nwp5WdSSQ7W72WoyP7i7Oqs7poznMhcy7lC49ThUsXH6NXEbiD/aomVDhKjN30UEjMQO7v5CuvdHQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9aUhBbT3JP8LmVuEIaDIRl2xT4DCvnOTPAPCN1AMiXg=;
 b=NfXLuFZlxzDgTjaMQPeu7y1iSi3ARY9jKvZgKwW/p6JzGKpYg9Al0oSGOsvqOtX9GNqgLCY2X+/rNalBTTyh+/bvd3APEHVbtlXzcoSWc10QrW3cZgwfgthExHMsjc2LFdsTcCD3p176tVqUUKr0g2yxWgZQ/qOiuWf2J2tB52d+MM6FZ6PwR7smgec8brM0R/WG2z616euTuCSW8y+aGA8Gz8GJK966RbUk7dsP4sofS/kOqepyw6uq+HZTzqI/WGYg35a5Wt1y3Sp6f/C5yrLOOiEPTeFTCNqvgngFFey3niUoulb4gRNXrYd7TEWHO6xzCgEmgZTOsRpZ30k/Fw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9aUhBbT3JP8LmVuEIaDIRl2xT4DCvnOTPAPCN1AMiXg=;
 b=FGtNZlN6MHo5VXCjo1vawLFCpM6C39KNjdC96AjHYGuqX5T7Dz0wyBru1QKRe3FnlPGmTOZfrRuJ2RPGtCsZL0YPyo1KF5J5Jn4b6e8gaM7je1mjQMYYaRbPnkbZKVneA7kTsYaAD+7CvXpOyxxzYYRLwM3hr83KLlm0GGcVCTM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH 0/3] xen/arm: fix build errors with -Og
Date: Fri, 14 Apr 2023 14:57:11 -0400
Message-ID: <20230414185714.292881-1-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000C976:EE_|BN9PR12MB5178:EE_
X-MS-Office365-Filtering-Correlation-Id: e570a07e-02ef-42c7-deef-08db3d1a1bec
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	o+OxKV0EcSK6N1GOgBSsp2B2Rk0UtQA+Gnsaw2RC32GktQXVo5gt3kYXcFiZIC+vQJzPI1EPUNpCemnlVwQYA+HGygb9tPqGy8a06pT+jP3vvzvdovD1IS5Zz4EYsOG4aJ7lWXidnRrJ0N9Su8mw8foFmLg5yxb8VWCpSQDt1Nhs231FZJgsmPRi4yoZ5a5BeugAAm7EShsvGbh7OE7ef/XbNtNfnjRJ8FrkBDxJ2UR2pu2qoCnQUrQpuc3BG7nAAyeHBo6VOAOprzr+ahK9C/sWmAjHBDfpGW+MjWBNlWhto61mCTt8AEMKkAIizUq6zt19SbK+2dcr7tMUMS/eEcLZVtJtQm2HEeKjbw3nbuJgL6Ph2ix3GNRXyM7mXypa03R8snW1AwFVX9pfJIJzhbo8kPl9gmV579WrfzaboqggG95mDuW2hiRxIu0JEIPq8WcpMiXU8CDf36GF4Wm1TVmKCvTZx5MxrkDh99hHKHYIfB2nH52CPjvpwJWrqNTgcsQHbOyIRZMb3C02zm09vBKdFG9pbeuP4BPvWMkV9Q8y3rF8fbvY9wN9tI8Vo3OT5lDpiOFExNp/Xy2tm0PrhLUYL2gbI1ntAD6ZDKLRgANKk3VoAPLdcFVyvAPMwj8xOEvAAcw3+cp0MySy4mb+cl7Q6woKFaeJZ9wR+AdA/2XGta/QBpz86EI7JLYVCZ8tzs1lmXWPqUd+Y+7V8xYvAA146ljG9uh2j6tUsLIFEvY=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(136003)(376002)(39860400002)(451199021)(36840700001)(46966006)(40470700004)(6666004)(5660300002)(40460700003)(4744005)(6916009)(2906002)(36756003)(4326008)(70586007)(70206006)(44832011)(86362001)(81166007)(41300700001)(82310400005)(356005)(8676002)(316002)(8936002)(82740400003)(478600001)(40480700001)(54906003)(26005)(1076003)(336012)(426003)(36860700001)(2616005)(47076005)(186003)(83380400001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 18:57:34.8915
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e570a07e-02ef-42c7-deef-08db3d1a1bec
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000C976.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5178

This is a collection of fixes needed to build the hypervisor with -Og for arm64.

I build-tested this with the following command:

make -j $(nproc) \
    EXTRA_CFLAGS_XEN_CORE="-Og" \
    XEN_TARGET_ARCH=arm64 \
    CROSS_COMPILE=aarch64-none-linux-gnu- \
    dist-xen


Stewart Hildebrand (3):
  xen/arm: mark __guest_cmpxchg always_inline
  xen/efi: fix uninitialized use warning
  xen/arm: fix uninitialized use warning

 xen/arch/arm/domain_build.c              |  2 +-
 xen/arch/arm/include/asm/guest_atomics.h | 10 +++++-----
 xen/common/efi/boot.c                    |  9 +++++++++
 3 files changed, 15 insertions(+), 6 deletions(-)

-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 18:58:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 18:58:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521215.809652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOcP-00020B-IA; Fri, 14 Apr 2023 18:57:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521215.809652; Fri, 14 Apr 2023 18:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOcP-000204-F2; Fri, 14 Apr 2023 18:57:57 +0000
Received: by outflank-mailman (input) for mailman id 521215;
 Fri, 14 Apr 2023 18:57:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OWbr=AF=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pnOcO-0001yf-7e
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 18:57:56 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20627.outbound.protection.outlook.com
 [2a01:111:f400:7e88::627])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4297d594-daf6-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 20:57:54 +0200 (CEST)
Received: from MW4PR04CA0259.namprd04.prod.outlook.com (2603:10b6:303:88::24)
 by CH3PR12MB8911.namprd12.prod.outlook.com (2603:10b6:610:169::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Fri, 14 Apr
 2023 18:57:46 +0000
Received: from CO1NAM11FT004.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:88:cafe::b3) by MW4PR04CA0259.outlook.office365.com
 (2603:10b6:303:88::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.36 via Frontend
 Transport; Fri, 14 Apr 2023 18:57:45 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT004.mail.protection.outlook.com (10.13.175.89) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.33 via Frontend Transport; Fri, 14 Apr 2023 18:57:44 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 14 Apr
 2023 13:57:44 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 14 Apr
 2023 13:57:43 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 14 Apr 2023 13:57:42 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4297d594-daf6-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aPo3Zkau8v5cZvvTDRT5ZDoJxjWnp46Q/ftxQaGfnfgX64fe8jZa27ISThGxZfFgj2URL0unFqPXgPe7IYajX6mvs0y9CnmofDz+hTxT2aMt+8tvYfRIkoMfX84Qaa9mSH0bLdg94thJl1E1KijBSopNwkjP+oEzi2G4MEuHaXvigdpZ3c9lgiB1uTlUJWxoIIB19nxXcMZQ75a4ZXQVNfm3hB09UuYDdjm812tQjxbPqSPVyRNStF6noawt1cdIWCThivdjFAZ0Vqp8dgQc+DnxVlOiEyq/4Vt/mzCb3WGkcWqYsTOqmBkLvfcs9w7bNuhI4RzzWJpYIIvPka0d8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Op4gLrZXr+32HM/wsQS2QIeYqXOraK+iVo5JPJFoRrg=;
 b=PF0tw8puEGB7V1u4jMXk65LW/+c6OHWy94UvCCX/kPCviDxl6+UFeEWStJniZa2cg8W3tp1zKG7V3uWAoXQK4FZB4G0y7GpFHVfO8PAeiv2epeGIzj61IjIBjsC+07sF7YUkgoE44KPdH11MKPAGqVEYhzrrqAvAUxEcFCMRw+JYqfLYkHxAOitSo5xv5y+NFijKm04vtBHvRtmVMdTt4WZ7344MGK4pFBbQTPimQqDbabxDYqXnwqi8RKQG0HvhpbrZzoTSt7EFu8r+2sDHUREZhXqLQiDnNBttNMaphFyKPO2PXDdkRBa8R044Vh1iOz1RF2YXf/fjTh4z5iwYkQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Op4gLrZXr+32HM/wsQS2QIeYqXOraK+iVo5JPJFoRrg=;
 b=gOhpO5lNnnXIY4VlUUeOUNUG5dcNiTSFrhvVLUPammGnNuJIaK/2xGJJ2jhcc9MlgdxmN0vESeUqlujwSKjQnSB+HT6Mte9fHiY92/XAxhWsxYhy2LkaWBGYZZi/wxdeW95ZQ5h+jgXqangf3IZ/NkRMH2KKThjRYvv8al/fekQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Jan Beulich
	<jbeulich@suse.com>
Subject: [PATCH 2/3] xen/efi: fix uninitialized use warning
Date: Fri, 14 Apr 2023 14:57:13 -0400
Message-ID: <20230414185714.292881-3-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230414185714.292881-1-stewart.hildebrand@amd.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT004:EE_|CH3PR12MB8911:EE_
X-MS-Office365-Filtering-Correlation-Id: 03c6a0c3-bf5c-4b69-daaf-08db3d1a21e6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	IsgYxjrVVnc8ryVLdw6JHAKd7S4Bus9ldxYr+FiRT9sABsv4fL3jP/JAs7y607JaaF8RAC+OMQInsyCjc3yxawg5JIdVOvx68ziUFBtBgnZEcU2zdEnFEfN3S/EtiCG1T4HB9c9giYrP8OI85HGky6UaIB/jQyM/k7HwyrNeh8DqTxGPgS1syA+Gbd/VnVNTckD7EU3lFK9Syl6dR/GRrmLfrTyTyWIwfM4Zvc9p0tjjQIAIOb0jdjFUe9Ng1iFrHVwHbAf3WrJUISpcFnjWjqKZafZt26WedtrdIz9bo47N8ECM+QHzdmUMWdxLO7/uCF+kEeP3Z2HIxH4Cvwg3391ybFJKYxunJd2XQKEZbYuw/j8KZdL/lgtDkCMF33FNY6RTgMVzjfSwFGLYeGjn8TD9DWo9NWY06Ro/GHPcGNoF/zI+eM1hpx/n6S7OSTCjNbjqU06o1sGqDCwOxH4KFwIWMKWq8Z/+g3//YifkWXMHNu5x6oMH1bcwhX2w/6mGL7dygT+EDgN9hWXrdFdidVWcIQeLePBMQ9limsZuMvPyVslGEi36UlDxGu8M3rVMTzT556B7maMx4POovK+jPhy1X1hIVFmPBKUex48/8OkxgK9018IH6OzN+bniYifijBSLrEZxI003ZZPifRXgcXezB3GSd6ca1mw3ec9WRaPHSxMbrMQjZ/1fA849E0+EvxylOqbKuV81MHLET3fDe3xGGS4fE5c9FjJ4vLjUmRM=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(396003)(376002)(39860400002)(451199021)(46966006)(40470700004)(36840700001)(54906003)(2906002)(36756003)(26005)(40460700003)(478600001)(8936002)(356005)(2616005)(966005)(81166007)(47076005)(1076003)(82740400003)(44832011)(83380400001)(186003)(5660300002)(426003)(336012)(86362001)(40480700001)(36860700001)(70206006)(70586007)(4326008)(6916009)(8676002)(82310400005)(316002)(41300700001)(6666004)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 18:57:44.8872
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 03c6a0c3-bf5c-4b69-daaf-08db3d1a21e6
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT004.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8911

When building the hypervisor for arm64 with -Og, we encounter a (false)
uninitialized use warning:

arch/arm/efi/boot.c: In function ‘efi_start’:
arch/arm/efi/boot.c:1468:9: error: ‘argc’ may be used uninitialized [-Werror=maybe-uninitialized]
 1468 |         efi_arch_handle_cmdline(argc ? *argv : NULL, options, name.s);
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/arm/efi/boot.c:1263:21: note: ‘argc’ was declared here
 1263 |     unsigned int i, argc;
      |                     ^~~~
cc1: all warnings being treated as errors

Fix this by initializing argc. As a precaution, also initialize argv.
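The warning class being silenced here can be reproduced outside EFI code. Below is a minimal sketch (hypothetical names, not the actual efi_start() logic) of a variable assigned under one guard and read under another that is equivalent in practice but that the compiler cannot correlate at -Og, together with the unconditional initialization that quiets the warning:

```c
#include <stddef.h>

/* 'argc'/'argv' mimic the locals in efi_start(): they are only assigned
 * when 'have_cmdline' is true, and the later read is protected by a
 * condition that holds exactly when the assignment ran -- but at -Og,
 * gcc 12 cannot prove the correlation and warns that argc "may be used
 * uninitialized". Initializing both unconditionally, as the patch does,
 * removes the warning without changing behavior. */
static const char *first_arg(int have_cmdline)
{
    unsigned int argc = 0;       /* the fix: start from a known value */
    const char **argv = NULL;    /* precautionary, as in the patch */

    if ( have_cmdline )
    {
        static const char *fake_argv[] = { "xen", "console=dtuart" };

        argc = 2;
        argv = fake_argv;
    }

    /* Mirrors: efi_arch_handle_cmdline(argc ? *argv : NULL, ...); */
    return argc ? argv[0] : NULL;
}
```

Without the `= 0` initializer, building this sketch with `gcc-12 -Og -Werror=maybe-uninitialized` reproduces the same diagnostic shape as the boot.c error above.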

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
See previous discussion here
https://lists.xenproject.org/archives/html/xen-devel/2022-10/msg00805.html
---
 xen/common/efi/boot.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index b69c83e354ee..c5850c26af9f 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1344,6 +1344,15 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
         if ( !base_video )
             efi_console_set_mode();
     }
+    else
+    {
+        /*
+         * Some compilers may emit a false "uninitialized use" warning for argc,
+         * so initialize argc/argv here to avoid the warning.
+         */
+        argc = 0;
+        argv = NULL;
+    }
 
     PrintStr(L"Xen " XEN_VERSION_STRING XEN_EXTRAVERSION
 	     " (c/s " XEN_CHANGESET ") EFI loader\r\n");
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 18:58:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 18:58:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521216.809662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOcQ-0002JA-Qz; Fri, 14 Apr 2023 18:57:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521216.809662; Fri, 14 Apr 2023 18:57:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOcQ-0002Iz-NZ; Fri, 14 Apr 2023 18:57:58 +0000
Received: by outflank-mailman (input) for mailman id 521216;
 Fri, 14 Apr 2023 18:57:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OWbr=AF=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pnOcO-0001yf-UQ
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 18:57:56 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2061b.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 430e94da-daf6-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 20:57:55 +0200 (CEST)
Received: from CY8PR19CA0009.namprd19.prod.outlook.com (2603:10b6:930:44::14)
 by SA3PR12MB7997.namprd12.prod.outlook.com (2603:10b6:806:307::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.36; Fri, 14 Apr
 2023 18:57:52 +0000
Received: from CY4PEPF0000C976.namprd02.prod.outlook.com
 (2603:10b6:930:44:cafe::b6) by CY8PR19CA0009.outlook.office365.com
 (2603:10b6:930:44::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.36 via Frontend
 Transport; Fri, 14 Apr 2023 18:57:52 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CY4PEPF0000C976.mail.protection.outlook.com (10.167.241.132) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.27 via Frontend Transport; Fri, 14 Apr 2023 18:57:51 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 14 Apr
 2023 13:57:51 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 14 Apr
 2023 13:57:51 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 14 Apr 2023 13:57:49 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 430e94da-daf6-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A0H+L5MXch2APCSZxtNrT9Hh69YGQLyk+gOXnGelPqesd81rc4Z2e47jp+NOUEMMI2MTpPWPVdo08r2ZkO6tLYMNkV1YPQvhfvmeUYlGTCRUryXzK5n3N1pXFhyuh2BUVHS6n/UKpqFmvuj75sqoZl0Z9WR3KCu9rTITjmjtV/SPbHA2UDR6Y6qaliZJIoQopcotNipfEqdgxq8aT4vBn49Vc0o0OmNYBRDH0sCC1bmrcH0Lil78Qs5d5guAsSN9wpq6/xheCZMq+dzGD52N4PhbIV3Os4v1tLw7RL5/R4lhe/kmXrlNGhjhWjLLxuPz1LQGint5Tb/i6Cxz1TM2lA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Al+YVBo/rFHAhzjUVn24nt7jKtL9yJFrezw5/WN9ImY=;
 b=IQ2pGTKD88+zuMPdA4wet/rig4/wySpAUmtkJ7Q5wxyP6pjpmfiBYb++Pb0DR/D0U5d16BdI8fRwCQhCkSNJt4dY2iiCeV/0Mt7fMfj2ZSWzLGoovbWsJU3RBaJ1ffF1tzwxfWO1bhwG4NUF2+Ob0z22AhZi/kSlYS4PJUlyGHZTGa72zIe0hKWp/j6ub013MP5RlPGiONui/59ZqQKpfwjjtZlCOU/zM7fYBQ7vTzexQ/B215AapaPW2C/eCmlTC0Pq0Q5uy1FujaiZhwoEQV88c2h3Z13tyAT+2Cv1+7XU1RPvzb8PX5vVDIIkVM7iy//mXgvDSkpyuKePXdpkJA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Al+YVBo/rFHAhzjUVn24nt7jKtL9yJFrezw5/WN9ImY=;
 b=yYbMAe8et+HIY/oo+xWHW5dJ6PnoJQ7ZB485hAvYwdM1zVkZyAc46Cp9II7PgMKmtxByG/dlivHaZntZNbjbmRytIDtym/t9hxmikLzm86xLRsaAdR4DxsTEn0sGsGFQAWlhD720yDj8boZSj1+hPnZDhgvITRNbgR+doCvxpx8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 3/3] xen/arm: fix uninitialized use warning
Date: Fri, 14 Apr 2023 14:57:14 -0400
Message-ID: <20230414185714.292881-4-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230414185714.292881-1-stewart.hildebrand@amd.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000C976:EE_|SA3PR12MB7997:EE_
X-MS-Office365-Filtering-Correlation-Id: 98d4bc59-f055-4c3c-33af-08db3d1a261d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	HwO0CmzPGZaR7SXgWQBDb+ReAr9FYZhx95Uu3HjoATsfs5OwjYwfvxgBL/t8rcYE2GHPFGbT7DXYU0Wv+MAuNbbEGzpU7qjUzkwB/FQn0q5KWBW+g0Nf/t1gpTOvQNaRGAGo6fp85+l3iEaimAG3EzCWGtA9B5mLwN6vM2UoBIvZKHI64YdZhZiSzMRKZXkB4RqpTAo5x/r04w8ZHhU+vPRFts7PxxdA9V1zC1t4u4tHSZHiEI7RICbPTk0w43uiD1kkxjwPyxt5jUQzKtftu1zik3XQZlqsoJ9ToX0nhL14b5/aA5ctZEe0Cyz2qSoBefdXqVj+MTVNTpiNOKEU0+caFZLx3ctCH4vRrXcW5m9UMMSrXoGUO1pdlscHvw8I7SZMRb3m0p1URYM+nTYdMlFqHnCojf1ccy1fvZy5DR9HqquApX5+AQya/OGuQbDvtQuOB/lj0U6ZSLmaxd37KE9Vbk1+19IouDtT95zZZDVWoTa5VyarWXXQktdgaB/XlyBMA4cBJnbx4OvNOeUfBVvYFwvOnGLoThBB7MmiAVAryBhR461x3IdPd+dgT7DcpkCwrxOW/6PD5AgGvqI0HW5B+Mp+/jrYC87C9r6HFC8MnyGv0TEcTL+KVLz38tjXorUPsCH2RMEaKS12Fmkf+dvzFIS61gkLSghn1SUq4HGYpT9EAi+McxMKgq6Q+151ZOk9j3egKE5bInDQKxFNt580w46XB3ln1ARSPf4Ym28=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(346002)(396003)(376002)(451199021)(36840700001)(46966006)(40470700004)(316002)(82310400005)(81166007)(54906003)(82740400003)(36860700001)(356005)(70206006)(70586007)(8676002)(4326008)(6916009)(86362001)(478600001)(336012)(47076005)(426003)(2616005)(186003)(26005)(41300700001)(36756003)(1076003)(2906002)(83380400001)(40460700003)(44832011)(40480700001)(8936002)(6666004)(5660300002)(966005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 18:57:51.9856
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 98d4bc59-f055-4c3c-33af-08db3d1a261d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000C976.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB7997

When building the hypervisor with -Og, we encounter the following error:

arch/arm/domain_build.c: In function ‘make_cpus_node’:
arch/arm/domain_build.c:2040:12: error: ‘clock_valid’ may be used uninitialized [-Werror=maybe-uninitialized]
 2040 |         if ( clock_valid )
      |            ^
arch/arm/domain_build.c:1947:10: note: ‘clock_valid’ was declared here
 1947 |     bool clock_valid;
      |          ^~~~~~~~~~~
cc1: all warnings being treated as errors

Fix it by initializing the variable.

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
See previous discussion here
https://lists.xenproject.org/archives/html/xen-devel/2022-10/msg00741.html
---
 xen/arch/arm/domain_build.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 4f9d4f9d8867..18b350734a8e 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1944,7 +1944,7 @@ static int __init make_cpus_node(const struct domain *d, void *fdt)
     /* Placeholder for cpu@ + a 32-bit hexadecimal number + \0 */
     char buf[13];
     u32 clock_frequency;
-    bool clock_valid;
+    bool clock_valid = false;
     uint64_t mpidr_aff;
 
     dt_dprintk("Create cpus node\n");
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 19:16:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 19:16:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521230.809672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOtw-0005ZI-Fa; Fri, 14 Apr 2023 19:16:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521230.809672; Fri, 14 Apr 2023 19:16:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnOtw-0005ZB-Bv; Fri, 14 Apr 2023 19:16:04 +0000
Received: by outflank-mailman (input) for mailman id 521230;
 Fri, 14 Apr 2023 19:16:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnOtv-0005Z1-LK; Fri, 14 Apr 2023 19:16:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnOtv-0003R8-HU; Fri, 14 Apr 2023 19:16:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnOtv-0003eq-0X; Fri, 14 Apr 2023 19:16:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pnOtv-0006C6-04; Fri, 14 Apr 2023 19:16:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4SSj41JO/CP7MSjwVROwr+kuXAcYaCKL9xqydvMxrPA=; b=1KfOOnaRU+xAbSp0nd1e9uaMpg
	+OSF2Q3VdQu5cD19TmJMXNQVf/dXQhf5X840UYdz4ZgF37TbwMuQw0+QpTmGfhcKX7QSzpL6cMJl1
	X0vLziPWbhpSsvUwRgpEVqnUatzQ/dgPkJ7RQJYOc38+EHE5tG5M9AU2K3wKduFv9ICY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180258-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180258: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=7dbd6f8a27e30fe14adb3d5869097cddf24038d6
X-Osstest-Versions-That:
    qemuu=9d177b7f87d96d1ed8fd16e222a37bd1ac8a0cd8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Apr 2023 19:16:03 +0000

flight 180258 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180258/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180231
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180231
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180231
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180231
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180231
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180231
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                7dbd6f8a27e30fe14adb3d5869097cddf24038d6
baseline version:
 qemuu                9d177b7f87d96d1ed8fd16e222a37bd1ac8a0cd8

Last test of basis   180231  2023-04-13 09:18:05 Z    1 days
Testing same since   180251  2023-04-13 20:08:41 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Berrangé <berrange@redhat.com>
  David Woodhouse <dwmw@amazon.co.uk>
  Juan Quintela <quintela@redhat.com>
  Lukas Straub <lukasstraub2@web.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      starved 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   9d177b7f87..7dbd6f8a27  7dbd6f8a27e30fe14adb3d5869097cddf24038d6 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 19:55:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 19:55:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521237.809681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnPVh-0001v9-KO; Fri, 14 Apr 2023 19:55:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521237.809681; Fri, 14 Apr 2023 19:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnPVh-0001v2-H3; Fri, 14 Apr 2023 19:55:05 +0000
Received: by outflank-mailman (input) for mailman id 521237;
 Fri, 14 Apr 2023 19:55:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rR8y=AF=citrix.com=prvs=4614ad092=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pnPVg-0001uw-3j
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 19:55:04 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3cb03373-dafe-11ed-b21e-6b7b168915f2;
 Fri, 14 Apr 2023 21:55:01 +0200 (CEST)
Received: from mail-bn7nam10lp2101.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.101])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 14 Apr 2023 15:54:49 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB6975.namprd03.prod.outlook.com (2603:10b6:8:42::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 19:54:47 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6277.043; Fri, 14 Apr 2023
 19:54:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cb03373-dafe-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681502101;
  h=message-id:date:subject:to:references:from:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=HU4y5Ha3VG9lUue6H00gsQGFZ8wM4KASEAKmFuchASc=;
  b=d9OvXNJjbdg9Yhka0x5yVFTLCBkoQBET8fIQekjZBmUWoC6J+CU3Hrma
   Y4ymdEVEamStvtcITYhVTmUe8gHxszm+OjU4mjMQ4fwXZrNjtncO5sA5A
   lA+X+DzzyCRQfbQ/UIbJTWsLNF5AoRY16lKdKD+DHZ8FlI4j6AXOvGJJp
   k=;
X-IronPort-RemoteIP: 104.47.70.101
X-IronPort-MID: 104916228
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:TqSPj6IQUKjQfAtZFE+R95QlxSXFcZb7ZxGr2PjKsXjdYENSgzcCn
 GBKWm6AOK6CMzD3c4x/ati18RtUvp7RzoQ1TQFlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPSwP9TlK6q4mhA4gViPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5lWT131
 MEWLgxSMEGxm7KrkeqSFeVj05FLwMnDZOvzu1lG5BSAVbMMZ8+GRK/Ho9hFwD03m8ZCW+7EY
 NYUYiZuaxKGZABTPlAQC9Q1m+LAanvXKmUE7g7K4/VvpTGLlWSd05C0WDbRUvWMSd9YgQCzo
 WXe8n6iKhobKMae2XyO9XfEaurnxHunCdtMTufonhJsqFi6z2ZUMgAqbxz4+KTggHebdYxnF
 GVBr0LCqoB3riRHVOLVQx25uziFpVgVA95LFOsS5wSEy66S6AGcbkAUQzgEZNE4ucseQT0xy
 kTPj97vHSZosrCeVTSa7Lj8kN+pES0cLGtHbylbSwIAuoHnuNtq1kuJSct/GqmoiNGzASv33
 z2BsCk5gfMUkNIP0KK4u1vAhlpAu6T0c+L83S2PNkrN0++zTNTNi1CAgbQD0ct9EQ==
IronPort-HdrOrdr: A9a23:/HMDQqFpW808v/ZMpLqE+ceALOsnbusQ8zAXPiFKJCC9F/by/f
 xG885rtiMc9wxhOk3I9ervBEDiex/hHPxOgbX5VI3KNDUO01HGEGgN1+rfKjTbakjDytI=
X-Talos-CUID: =?us-ascii?q?9a23=3AwNmbZGlnvjqL/87RG9nqmtHgFWDXOW+FkE7xHkG?=
 =?us-ascii?q?+NTk3GY+JGULBoIo7n8U7zg=3D=3D?=
X-Talos-MUID: 9a23:nu7dFgi6LBPVdiNToAigP8MpMe5h6eOUFXw31tZf5pCODAleZQ+ipWHi
X-IronPort-AV: E=Sophos;i="5.99,197,1677560400"; 
   d="scan'208";a="104916228"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JIGLXyDojC4rX/JGQNaeOeLzli6WyYyF3FhiTeAWcOh5pyAp12FD2+bGOurDyp1fnZ56QR5dLCXQbC9i3mtJItA2gKHBpx+1iznwxtGO26MjnxzNcUeqiDd7wUSoySOX9Oir2CDuSM5/CV8emJVbDvXb4lrQxxf4FXS1Qx5z3C5jOHU0gH/dTrTQfjLbXvqISaH8hy5BP8Uqh5muj7yYPVPv7Xgnk1dPwcqD6WOhsko0NpkPVuBZArTfwqRVnd8ouuuplLBr999lj2jY00Bi8GDswKRIOA3AP1mOYFRDLo9OZBv98Pho9B2WrwbDI6lcJRfROK/yHH0uyqJEEuVODw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Quo075gM2E9iTDF5hDm7gU4UCvuyM8dFgPdI8ca8564=;
 b=k2VQSq5wjJrpykZVlYmBYJSxAK4B6BGlyDqF/wSK+Ep96Y0yeTo1aLMY51Vo5ye5KxSDiLVz2bbEE0YkdfjDmfvyIn7vAPLtFwf+QdcqgwMeg+vAZPzl7SPzlxwmRQWGbMPElu0eoam4I7izcG0ojBTr5TNHjj+BFaRN06Tr6jemvcUHePK6k2YU80Q7iKEOArrBdfiB0NPZMjNYxlhYAAm7SiUA2CuASAqxyoJKmJiHi5bEo4enUj3xAPSApfUB4SjDWSKoq8pzM4KOHFX2y6PFShJF4nkz0F9dHWqnDnOktBZrHu/qDgmktpRpDW0iVc6x9YxRKNYZSLYmi1nbTw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Quo075gM2E9iTDF5hDm7gU4UCvuyM8dFgPdI8ca8564=;
 b=K2/T2v5J8GbWD5iAmReZPawYO827K8DTza2Wc7PKnhpuPYS7t5stIGVeOGw/dntTkwd4vjVjaKaRnNv9yVVBnaayI35ma60xXp4Ag3CnlQxBAX9f3iiAGTFkfId0sXSPz1HRswD2DsH/pQ0X+7b7T0AnWSb28yFG4KFv+TyT22s=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <850fc508-356b-3d12-f6c4-ac0a8dc9de8b@citrix.com>
Date: Fri, 14 Apr 2023 20:54:41 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: x86 instruction emulation backstory?
Content-Language: en-GB
To: Alex Olson <this.is.a0lson@gmail.com>, xen-devel@lists.xenproject.org
References: <56d02462173267603d7af1503d2e67cb88f2ad5a.camel@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <56d02462173267603d7af1503d2e67cb88f2ad5a.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO6P123CA0011.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:338::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DM4PR03MB6975:EE_
X-MS-Office365-Filtering-Correlation-Id: cb196cd2-775c-474a-43f0-08db3d22190b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cb196cd2-775c-474a-43f0-08db3d22190b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 19:54:46.9844
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +KfWLcpY+hLCffmSZFIjBa5ZJf1pMrwpc9TsuYbCsPFjWYC1YtPQnNgzVzveSC6z8Zl6uJGpeMhznhroRQAxBw1WNUfwVKxWvco0wrMroRY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6975

On 14/04/2023 7:33 pm, Alex Olson wrote:
> I've been digging into VMX internals and I see why MMIO emulation pretty much
> requires x86 instruction emulation.  Even the Linux KVM code borrowed Xen's
> emulation...
>
> Thus, I'm trying to understand Xen's x86 emulation implementation...
>
> How was it developed? (x86 instruction handling is incredibly complex!) 
>
> Was it originally part of a general purpose x86 emulator?

Xen's emulator (in this form at least) is 18 years old - March 2005

https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=4c5eeec983495e347c6ab3d40a4a70cdbdfce9af

and it was written from scratch, but you can even see in the context for
x86/traps.c that emulate_privileged_op() predates that.  (We decided to
consolidate down to a single instruction decoder/emulator at the point
that we were maintaining 4 different ad-hoc ones.)

As for development, it's all there in git log if you want to go looking :).

> It looks like it implements more instructions than just ones that can access
> memory, such as "AAM"?  (Why is this)?

All instructions have an implicit memory operand at %rip.  The CPU has
to fetch the opcode bytes from somewhere...  (See Introspection, later)


You've found MMIO, but emulating from a #GP fault was also an important
use case even back then.  PV guest kernels execute in Ring 1 (32bit) or
Ring 3 (64bit), and therefore cannot use CPL0 instructions.

While PV guests ought to use hypercalls for privileged operations,
converting an existing codebase to do so completely is very expensive
when you're trying to port it to Xen.  Therefore, Xen will emulate in a
few faulting conditions, so the guest can e.g. execute RDMSR and have it
function correctly (albeit painfully slowly).


More recently, Hypervisor Introspection as a technology opens up a whole
load of interesting cases which want emulation.  A lot of introspection
boils down to removing permissions behind the scenes (e.g. making code
no-execute, or making data read-only) so violations cause an exit to the
hypervisor, and an introspection agent can make a judgement call.  99%
of cases are fine, and should proceed.

But, how do you do this?  You could lift the permissions, but then
malware on other vCPUs has a window of time in which it is free to
make modifications.  So instead you could pause the VM, lift the perms,
singlestep the trapping vCPU, restore the perms, and unpause it.  But
this has terrible performance to start with, and is an O(N^2) perf hit
with the number of vCPUs the VM has.

In practice, it is *far* cheaper to have Xen emulate the instruction,
than it is to play with pausing, perms and singlestepping.

But consider the 1% other case where continuing isn't fine.  One of the
supported options is to "emulate / discard" to try and skip the
instruction without making a real state modification.  This cannot be
done with singlestepping, and has to be done by software somewhere.  As
Xen already has an emulator, it's very easy to use a set of
write_discard() hooks in place of the real ones.


As to the complexity: yes, and in truth, Xen's emulator isn't fully an
emulator.

We pretty much emulate all the integer instructions, because most of
them are very simple, but we do not for the vector instructions.  What
we do for vector instructions is better described as decode and replay,
where we reconstruct a modified form of the instruction to operate on
local state, so we can piece together the overall reads and writes
without needing to implement the vector logic itself.

It's also worth saying that for any locked/atomic operations, we have to
issue a real instruction too, because that's the only way to get the
cache coherency behaviour correct.

It is also worth noting that Xen's emulator isn't complete.  Notably,
no one has implemented IRET for protected mode yet, or inter-privilege
far transfers, and we've got known corner cases (e.g. interrupt shadow
with MOV SS) in need of some work.


We went through a spate of problems where Windows in particular kept
coming up with more and more inventive instructions to use to write into
the emulated VGA framebuffer, and we decided that the emulator should be
as complete as we can reasonably make it.  A consequence of this is that
we have some very interesting and powerful advanced security features.

I hope this helps, or was at least interesting.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 20:30:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 20:30:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521244.809694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnQ3L-0005VH-Ar; Fri, 14 Apr 2023 20:29:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521244.809694; Fri, 14 Apr 2023 20:29:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnQ3L-0005VA-8D; Fri, 14 Apr 2023 20:29:51 +0000
Received: by outflank-mailman (input) for mailman id 521244;
 Fri, 14 Apr 2023 20:29:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OWbr=AF=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pnQ3J-0005V4-DI
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 20:29:49 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2061d.outbound.protection.outlook.com
 [2a01:111:f400:7e88::61d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 17de211a-db03-11ed-8611-37d641c3527e;
 Fri, 14 Apr 2023 22:29:46 +0200 (CEST)
Received: from DM6PR07CA0116.namprd07.prod.outlook.com (2603:10b6:5:330::13)
 by BY5PR12MB4872.namprd12.prod.outlook.com (2603:10b6:a03:1c4::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Fri, 14 Apr
 2023 20:29:42 +0000
Received: from DM6NAM11FT076.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:330:cafe::5f) by DM6PR07CA0116.outlook.office365.com
 (2603:10b6:5:330::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.36 via Frontend
 Transport; Fri, 14 Apr 2023 20:29:42 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT076.mail.protection.outlook.com (10.13.173.204) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.36 via Frontend Transport; Fri, 14 Apr 2023 20:29:42 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 14 Apr
 2023 15:29:41 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 14 Apr
 2023 13:29:40 -0700
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 14 Apr 2023 15:29:39 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17de211a-db03-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oXhrPKkMhMSjz5keQA/NtMClm2k2hhg1wfPIwEO1NR+jZoYaCvG7G31xOB75KTGCrasNtXdSvSHQu1yYCmthwSmlNITtf40zgF+9CczKQx/Ujb6WZCLrMOkXz2hSpki3MgUR2kUqQ8iH4JxvVJqyG9FrdhSdK5X9MJhzlXuZrDUzaRQsbxRzV9L+ZRKZVQXHycvLiELKlb/3ZPGuKaij9H7weeycZyWHIdQFSgRnN2OPok48aKuZqEI01bqXurLNpImW6lggACG5jYOzfon9ivaIQL0dEgze08TvFFPxWnwoRnJbturdTuJbKWVfnvBngGDAvaifKoTE22mFDatDDA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=S0EKJQP7OWmfg5VAzpren4WzVtEYFxSHbvtw0gR2VGo=;
 b=HsHMi9KuqLZdofWyOgOBaNJ4IrtavDQOf2vnIXeYzOiz+xwVf6dD2gU7m69LSXp0AGw7vJZYiR5MC61UkTLfYMBnQWggv9rIRVrGUId6WKEwN4HCkg3nP/iusRj+6oAyFE5BVJyybnH+Al4rPWyXKrHSbma9GSjf152573rDBKMypdZheeuNdwhwVtNqaoCtV7NGY6lbBdkhaDoDsyMbbQi4xFVp7OZhJPkdOE5ThbkYqi9hYUToQ+wVcpHYy0F8mmMtGDd5NaBAgW+yl3rZk8opY5278tncFotNDqaJcma6Wv+ls2/nP9og0dCVu9/npCbwW/mdZgfHVO9tuSKOLg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=S0EKJQP7OWmfg5VAzpren4WzVtEYFxSHbvtw0gR2VGo=;
 b=01WKDi/kq5QAueNfmo2zGUNTAh2jl0ldPYlzH8TLj++pmg7k7mKZZBZnngM6uN3YegI9mVmcLZQMMzUNZH63hK3nS+FiHgDZpGgZi2FW85aA+Ou5BTURsKAKp++m6/GeGw97S2w3Gms8uJs7+8KrG+bBqjGdJXshE86vcz2x3yg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH] xen/vpci: initialize msix->next
Date: Fri, 14 Apr 2023 16:29:32 -0400
Message-ID: <20230414202932.293688-1-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT076:EE_|BY5PR12MB4872:EE_
X-MS-Office365-Filtering-Correlation-Id: d09ac933-e66a-406b-242e-08db3d26fa56
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Apr 2023 20:29:42.0433
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d09ac933-e66a-406b-242e-08db3d26fa56
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT076.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4872

The msix->next list head was not being initialized, which could result
in a crash in vpci_remove_device if no list items had been added.

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
 xen/drivers/vpci/msix.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 25bde77586a4..1b98c3c10a64 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -678,6 +678,8 @@ static int cf_check init_msix(struct pci_dev *pdev)
     if ( !msix )
         return -ENOMEM;
 
+    INIT_LIST_HEAD(&msix->next);
+
     rc = vpci_add_register(pdev->vpci, control_read, control_write,
                            msix_control_reg(msix_offset), 2, msix);
     if ( rc )
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 21:04:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 21:04:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521248.809705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnQax-0001QF-Vc; Fri, 14 Apr 2023 21:04:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521248.809705; Fri, 14 Apr 2023 21:04:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnQax-0001Q8-Sn; Fri, 14 Apr 2023 21:04:35 +0000
Received: by outflank-mailman (input) for mailman id 521248;
 Fri, 14 Apr 2023 21:04:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnQaw-0001Pp-0U; Fri, 14 Apr 2023 21:04:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnQav-0006FF-Sz; Fri, 14 Apr 2023 21:04:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnQav-00016e-Gf; Fri, 14 Apr 2023 21:04:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pnQav-0003B2-GH; Fri, 14 Apr 2023 21:04:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+ucC5+v6hpLgdNc5EfO3taEtjnB8mpV3KnQbTyDUdxA=; b=FsLva4Jqm6BFAH+lsfPVGNbYqE
	sbsJtviH9Qj49A4rzAe7v0C99QVidJdar85tGyKkmEdOBSOKSCQgz4vBtAtNm+ZeIwFVyyN21cGVq
	jtuiute92jBwFg+yy70GfBUpWrtmw6IJ6NKvrmQcjix19S8I2MKFBdr89hZmZMoh/KQU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180263-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180263: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=18c128ba66e6308744850aca96dbffd18f91c29b
X-Osstest-Versions-That:
    xen=8363b1f62e561cfb73073b4b094516fcbbd7020e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Apr 2023 21:04:33 +0000

flight 180263 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180263/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  18c128ba66e6308744850aca96dbffd18f91c29b
baseline version:
 xen                  8363b1f62e561cfb73073b4b094516fcbbd7020e

Last test of basis   180237  2023-04-13 14:01:55 Z    1 days
Testing same since   180263  2023-04-14 18:03:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8363b1f62e..18c128ba66  18c128ba66e6308744850aca96dbffd18f91c29b -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 22:02:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 22:02:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521255.809715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnRUH-0007hN-9C; Fri, 14 Apr 2023 22:01:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521255.809715; Fri, 14 Apr 2023 22:01:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnRUH-0007hG-5j; Fri, 14 Apr 2023 22:01:45 +0000
Received: by outflank-mailman (input) for mailman id 521255;
 Fri, 14 Apr 2023 22:01:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnRUG-0007h6-33; Fri, 14 Apr 2023 22:01:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnRUF-0007Ul-SX; Fri, 14 Apr 2023 22:01:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnRUF-0002UP-F8; Fri, 14 Apr 2023 22:01:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pnRUF-00038K-Eg; Fri, 14 Apr 2023 22:01:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VHXDYLTcSQDE8A6Lh+XdZld3+QNz2rUMTZV+8oBkVOI=; b=wIybqqAPJ2jY6Qvj0CL5p2GFKp
	3786VHmuyTG1fCrdreDihvT0wBoCAz8byF2P2z42AvUDjBvOcu0bWwFFwlMhCtJu+F/3MjYskhWWf
	S5D+yApenDkajuw0FqyzYIQYtkFqPKAX4QJYc5/y1XBop+ncYwOcEb51cEgDazob0xjs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180256-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180256: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    xen-unstable:test-armhf-armhf-xl-multivcpu:host-install(5):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:host-install(5):broken:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:host-install(5):broken:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8363b1f62e561cfb73073b4b094516fcbbd7020e
X-Osstest-Versions-That:
    xen=f872a624cbf92de9944483eea7674ef80ced1380
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Apr 2023 22:01:43 +0000

flight 180256 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180256/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt         <job status>                broken
 test-armhf-armhf-xl-multivcpu    <job status>                broken
 test-armhf-armhf-xl-rtds         <job status>                broken
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 180238
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 180238

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu  5 host-install(5)      broken starved in 180238
 test-armhf-armhf-xl-rtds      5 host-install(5)       broken starved in 180238
 test-armhf-armhf-libvirt      5 host-install(5)       broken starved in 180238
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180238
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180238
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180238

version targeted for testing:
 xen                  8363b1f62e561cfb73073b4b094516fcbbd7020e
baseline version:
 xen                  f872a624cbf92de9944483eea7674ef80ced1380

Last test of basis   180238  2023-04-13 14:38:34 Z    1 days
Testing same since   180256  2023-04-14 05:34:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     broken  
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-xl-rtds host-install(5)
broken-step test-armhf-armhf-libvirt host-install(5)

Not pushing.

------------------------------------------------------------
commit 8363b1f62e561cfb73073b4b094516fcbbd7020e
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Apr 13 14:23:40 2023 +0200

    automation: switch ADL hw tests to debug build
    
    This should give a lot more useful information in case of a failure, and
    also enable some asserts for extra checks.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521265.809760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5b-0001mz-LO; Fri, 14 Apr 2023 23:44:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521265.809760; Fri, 14 Apr 2023 23:44:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5b-0001m5-FZ; Fri, 14 Apr 2023 23:44:23 +0000
Received: by outflank-mailman (input) for mailman id 521265;
 Fri, 14 Apr 2023 23:44:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5Z-0001Th-Vz
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:21 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 46f44033-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:44:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46f44033-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232309.448345494@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515860;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=SRk3RrJ8+RSaiZTRwxMk0fG1NuTGaGpFKdBZXu8mJKE=;
	b=TenZelvKdY6Tx+OsImAHk36b4tbQIR+z0M1WnFpjy8y+51j9W5eVFc7Of6Sf8s0X5uq+xc
	6SLC+/lL+CpUg+kTVvDAN1gjqSax0V+tVQNmmxi4coTX5o01NqKtoGfLSsdE0lb320KW1r
	QEHnIVRUnqLfhZgK+7fQTY+n7WOGwL6ll+eunwu5W5z5A2qnT1aRW5PuasQXgjsBUf/jBU
	7xOQ39cpNhzE0/XOurXN0gDhM1Ox280g/R6YyoZtAiatQnNAao3G3M3vKjSts9mlj9G6qX
	S7eGW9eMYE6lWwqIwp+CRBO4mI9eSSdxCdQI74WXa5fo3DgjCo/bZ5Uwga1FYg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515860;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=SRk3RrJ8+RSaiZTRwxMk0fG1NuTGaGpFKdBZXu8mJKE=;
	b=GpdL477IqUbVLYXts40m+nkntEs4l/SFSgjl8WTxVs9fG2Y4NeqfbJbNj3fY+ekDTnUTf2
	CVV1j8YeN8LpbeAA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 04/37] x86/smpboot: Rename start_cpu0() to soft_restart_cpu()
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:19 +0200 (CEST)

start_cpu0() is used in the SEV play_dead() implementation to bring CPUs
back online, but that has nothing to do with CPU0.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/cpu.h   |    2 +-
 arch/x86/kernel/callthunks.c |    2 +-
 arch/x86/kernel/head_32.S    |   10 +++++-----
 arch/x86/kernel/head_64.S    |   10 +++++-----
 arch/x86/kernel/sev.c        |    2 +-
 5 files changed, 13 insertions(+), 13 deletions(-)

--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -30,7 +30,7 @@ struct x86_cpu {
 #ifdef CONFIG_HOTPLUG_CPU
 extern int arch_register_cpu(int num);
 extern void arch_unregister_cpu(int);
-extern void start_cpu0(void);
+extern void soft_restart_cpu(void);
 #ifdef CONFIG_DEBUG_HOTPLUG_CPU0
 extern int _debug_hotplug_cpu(int cpu, int action);
 #endif
--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -134,7 +134,7 @@ static bool skip_addr(void *dest)
 	if (dest == ret_from_fork)
 		return true;
 #ifdef CONFIG_HOTPLUG_CPU
-	if (dest == start_cpu0)
+	if (dest == soft_restart_cpu)
 		return true;
 #endif
 #ifdef CONFIG_FUNCTION_TRACER
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -140,16 +140,16 @@ SYM_CODE_END(startup_32)
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
- * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
- * up already except stack. We just set up stack here. Then call
- * start_secondary().
+ * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
+ * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
+ * unplug. Everything is set up already except the stack.
  */
-SYM_FUNC_START(start_cpu0)
+SYM_FUNC_START(soft_restart_cpu)
 	movl initial_stack, %ecx
 	movl %ecx, %esp
 	call *(initial_code)
 1:	jmp 1b
-SYM_FUNC_END(start_cpu0)
+SYM_FUNC_END(soft_restart_cpu)
 #endif
 
 /*
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -377,11 +377,11 @@ SYM_CODE_END(secondary_startup_64)
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
- * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
- * up already except stack. We just set up stack here. Then call
- * start_secondary() via .Ljump_to_C_code.
+ * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
+ * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
+ * unplug. Everything is set up already except the stack.
  */
-SYM_CODE_START(start_cpu0)
+SYM_CODE_START(soft_restart_cpu)
 	ANNOTATE_NOENDBR
 	UNWIND_HINT_EMPTY
 
@@ -390,7 +390,7 @@ SYM_CODE_START(start_cpu0)
 	movq	TASK_threadsp(%rcx), %rsp
 
 	jmp	.Ljump_to_C_code
-SYM_CODE_END(start_cpu0)
+SYM_CODE_END(soft_restart_cpu)
 #endif
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -1326,7 +1326,7 @@ static void sev_es_play_dead(void)
 	 * If we get here, the VCPU was woken up again. Jump to CPU
 	 * startup code to get it back online.
 	 */
-	start_cpu0();
+	soft_restart_cpu();
 }
 #else  /* CONFIG_HOTPLUG_CPU */
 #define sev_es_play_dead	native_play_dead



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521261.809725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5X-0000zx-KT; Fri, 14 Apr 2023 23:44:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521261.809725; Fri, 14 Apr 2023 23:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5X-0000zq-GR; Fri, 14 Apr 2023 23:44:19 +0000
Received: by outflank-mailman (input) for mailman id 521261;
 Fri, 14 Apr 2023 23:44:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5W-0000zb-I8
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:18 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 440ecfed-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 440ecfed-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232309.261127575@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515855;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=IlHV62TWgKh1cdgK7stl2AzD5yScRMLzrcA90zGNlOQ=;
	b=ZSJ2jYDTvrOcSfB+q9NW+WGp94J7gVM5nWU/8DuwnoZDMu+XBZCbSUvTxL+DR4BvKxavFN
	jhuRONKx7L6r94CMMXtiIONzHNo1K+Igs8Z0LLNxD99J31QKLzvq4MDfPqPJUdJy0who+d
	mKrkRcfXkOBt5Idh86wWBXAvBPESGtEj994l0Sk8+em2iAJUZAhObLDXX+j3oXrngnpCuw
	zyRT6xozBWS7ZDWw8BWKkQqW6Qlj1eCTnFv9dCNIQazSMGDnTo3mcHwD/wv9ccqLmI3o2G
	GJGLBHoc9N7XmTNB1GAm0gwQkaNsttkeRSqEESmeoHEEgOdQdWWA8b6V/dbDuQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515855;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=IlHV62TWgKh1cdgK7stl2AzD5yScRMLzrcA90zGNlOQ=;
	b=5y7iOO6UN0/F9Na7fC7zsih/Bs+dHvWN4EhYECYw/BugXKR0lZggZtmRr+wdRcc8V09+nR
	evNcawtvhTVqcwDg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject:
 [patch 01/37] x86/smpboot: Cleanup topology_phys_to_logical_pkg()/die()
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:14 +0200 (CEST)

Make topology_phys_to_logical_die() static as it's only used in
smpboot.c and fix up the kernel-doc warnings for both functions.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/topology.h |    3 ---
 arch/x86/kernel/smpboot.c       |   10 ++++++----
 2 files changed, 6 insertions(+), 7 deletions(-)

--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -139,7 +139,6 @@ static inline int topology_max_smt_threa
 int topology_update_package_map(unsigned int apicid, unsigned int cpu);
 int topology_update_die_map(unsigned int dieid, unsigned int cpu);
 int topology_phys_to_logical_pkg(unsigned int pkg);
-int topology_phys_to_logical_die(unsigned int die, unsigned int cpu);
 bool topology_is_primary_thread(unsigned int cpu);
 bool topology_smt_supported(void);
 #else
@@ -149,8 +148,6 @@ topology_update_package_map(unsigned int
 static inline int
 topology_update_die_map(unsigned int dieid, unsigned int cpu) { return 0; }
 static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
-static inline int topology_phys_to_logical_die(unsigned int die,
-		unsigned int cpu) { return 0; }
 static inline int topology_max_die_per_package(void) { return 1; }
 static inline int topology_max_smt_threads(void) { return 1; }
 static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -288,6 +288,7 @@ bool topology_smt_supported(void)
 
 /**
  * topology_phys_to_logical_pkg - Map a physical package id to a logical
+ * @phys_pkg:	The physical package id to map
  *
  * Returns logical package id or -1 if not found
  */
@@ -304,15 +305,17 @@ int topology_phys_to_logical_pkg(unsigne
 	return -1;
 }
 EXPORT_SYMBOL(topology_phys_to_logical_pkg);
+
 /**
  * topology_phys_to_logical_die - Map a physical die id to logical
+ * @die_id:	The physical die id to map
+ * @cur_cpu:	The CPU for which the mapping is done
  *
  * Returns logical die id or -1 if not found
  */
-int topology_phys_to_logical_die(unsigned int die_id, unsigned int cur_cpu)
+static int topology_phys_to_logical_die(unsigned int die_id, unsigned int cur_cpu)
 {
-	int cpu;
-	int proc_id = cpu_data(cur_cpu).phys_proc_id;
+	int cpu, proc_id = cpu_data(cur_cpu).phys_proc_id;
 
 	for_each_possible_cpu(cpu) {
 		struct cpuinfo_x86 *c = &cpu_data(cpu);
@@ -323,7 +326,6 @@ int topology_phys_to_logical_die(unsigne
 	}
 	return -1;
 }
-EXPORT_SYMBOL(topology_phys_to_logical_die);
 
 /**
  * topology_update_package_map - Update the physical to logical package map



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521267.809778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5d-0002It-9h; Fri, 14 Apr 2023 23:44:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521267.809778; Fri, 14 Apr 2023 23:44:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5d-0002Ht-2x; Fri, 14 Apr 2023 23:44:25 +0000
Received: by outflank-mailman (input) for mailman id 521267;
 Fri, 14 Apr 2023 23:44:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5c-0001Th-G1
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:24 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 48e4ba5e-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:44:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48e4ba5e-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232309.573146108@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515863;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=YH9enl+MJMA7M+jHkGDtbicNr+Ixljj01H/xHz4Qh68=;
	b=lpyjAm63G6/b32sEy2uepr/mwtvLxNaVZIUgygQvLzALETE8fY9zKOsUJn47ur21f6W14A
	V2tnHnJYBZ1CJN22W1lTtZ/rc6uajRww1SwLf8shpZdCOhX0eOSE//5WpUk6MbTpvqnR5X
	qZU37iLHk5J6vEkA5V5T/+OF3dsRu/6COL0XsoQ/S7QyrnGbtOZ10i5838omOG817CfU4Q
	wnB32letUr83o/0uqFoR5Pi8jZWinNlL6rig7eZUjtVjm5/rd8LWR2a5RVvkxhhNsMfni8
	tZJwVT1nbrbuO6Pt7yiZlgUrGKoZmXqt+t1/7rlqYhGz6DIacVWHGb0RmWJU3A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515863;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=YH9enl+MJMA7M+jHkGDtbicNr+Ixljj01H/xHz4Qh68=;
	b=aRdJRfhum/Hx0Do87bftDWbAr15dWCqnPq1ID3UStY4wRkpCpotZQGfhlALDv9i+4/sicD
	Li5ESueuDU/IBvAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 06/37] x86/smpboot: Remove the CPU0 hotplug kludge
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:22 +0200 (CEST)

This was introduced with commit e1c467e69040 ("x86, hotplug: Wake up CPU0
via NMI instead of INIT, SIPI, SIPI") to eventually support physical
hotplug of CPU0:

 "We'll change this code in the future to wake up hard offlined CPU0 if
  real platform and request are available."

Eleven years later this has not happened, and physical hotplug of CPU0 is
still not officially supported. Remove the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/apic.h   |    1 
 arch/x86/include/asm/smp.h    |    1 
 arch/x86/kernel/smpboot.c     |  170 +++---------------------------------------
 drivers/acpi/processor_idle.c |    4 
 4 files changed, 14 insertions(+), 162 deletions(-)

--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -377,7 +377,6 @@ extern struct apic *__apicdrivers[], *__
  * APIC functionality to boot other CPUs - only used on SMP:
  */
 #ifdef CONFIG_SMP
-extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
 extern int lapic_can_unplug_cpu(void);
 #endif
 
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -129,7 +129,6 @@ void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
 int wbinvd_on_all_cpus(void);
-void cond_wakeup_cpu0(void);
 
 void native_smp_send_reschedule(int cpu);
 void native_send_call_func_ipi(const struct cpumask *mask);
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -216,9 +216,6 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
-static int cpu0_logical_apicid;
-static int enable_start_cpu0;
-
 /*
  * Activate a secondary processor.
  */
@@ -241,8 +238,6 @@ static void notrace start_secondary(void
 	x86_cpuinit.early_percpu_clock_init();
 	smp_callin();
 
-	enable_start_cpu0 = 0;
-
 	/* otherwise gcc will move up smp_processor_id before the cpu_init */
 	barrier();
 	/* Check TSC synchronization with the control CPU: */
@@ -410,7 +405,7 @@ void smp_store_cpu_info(int id)
 	c->cpu_index = id;
 	/*
 	 * During boot time, CPU0 has this setup already. Save the info when
-	 * bringing up AP or offlined CPU0.
+	 * bringing up an AP.
 	 */
 	identify_secondary_cpu(c);
 	c->initialized = true;
@@ -807,51 +802,14 @@ static void __init smp_quirk_init_udelay
 }
 
 /*
- * Poke the other CPU in the eye via NMI to wake it up. Remember that the normal
- * INIT, INIT, STARTUP sequence will reset the chip hard for us, and this
- * won't ... remember to clear down the APIC, etc later.
- */
-int
-wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip)
-{
-	u32 dm = apic->dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
-	unsigned long send_status, accept_status = 0;
-	int maxlvt;
-
-	/* Target chip */
-	/* Boot on the stack */
-	/* Kick the second */
-	apic_icr_write(APIC_DM_NMI | dm, apicid);
-
-	pr_debug("Waiting for send to finish...\n");
-	send_status = safe_apic_wait_icr_idle();
-
-	/*
-	 * Give the other CPU some time to accept the IPI.
-	 */
-	udelay(200);
-	if (APIC_INTEGRATED(boot_cpu_apic_version)) {
-		maxlvt = lapic_get_maxlvt();
-		if (maxlvt > 3)			/* Due to the Pentium erratum 3AP.  */
-			apic_write(APIC_ESR, 0);
-		accept_status = (apic_read(APIC_ESR) & 0xEF);
-	}
-	pr_debug("NMI sent\n");
-
-	if (send_status)
-		pr_err("APIC never delivered???\n");
-	if (accept_status)
-		pr_err("APIC delivery error (%lx)\n", accept_status);
-
-	return (send_status | accept_status);
-}
-
-static int
-wakeup_secondary_cpu_via_init(int phys_apicid, unsigned long start_eip)
+ * Wake up AP by INIT, INIT, STARTUP sequence.
+ */
+static int wakeup_secondary_cpu_via_init(int phys_apicid, unsigned long start_eip)
 {
 	unsigned long send_status = 0, accept_status = 0;
 	int maxlvt, num_starts, j;
 
+	preempt_disable();
 	maxlvt = lapic_get_maxlvt();
 
 	/*
@@ -957,6 +915,7 @@ wakeup_secondary_cpu_via_init(int phys_a
 	if (accept_status)
 		pr_err("APIC delivery error (%lx)\n", accept_status);
 
+	preempt_enable();
 	return (send_status | accept_status);
 }
 
@@ -997,67 +956,6 @@ static void announce_cpu(int cpu, int ap
 			node, cpu, apicid);
 }
 
-static int wakeup_cpu0_nmi(unsigned int cmd, struct pt_regs *regs)
-{
-	int cpu;
-
-	cpu = smp_processor_id();
-	if (cpu == 0 && !cpu_online(cpu) && enable_start_cpu0)
-		return NMI_HANDLED;
-
-	return NMI_DONE;
-}
-
-/*
- * Wake up AP by INIT, INIT, STARTUP sequence.
- *
- * Instead of waiting for STARTUP after INITs, BSP will execute the BIOS
- * boot-strap code which is not a desired behavior for waking up BSP. To
- * void the boot-strap code, wake up CPU0 by NMI instead.
- *
- * This works to wake up soft offlined CPU0 only. If CPU0 is hard offlined
- * (i.e. physically hot removed and then hot added), NMI won't wake it up.
- * We'll change this code in the future to wake up hard offlined CPU0 if
- * real platform and request are available.
- */
-static int
-wakeup_cpu_via_init_nmi(int cpu, unsigned long start_ip, int apicid,
-	       int *cpu0_nmi_registered)
-{
-	int id;
-	int boot_error;
-
-	preempt_disable();
-
-	/*
-	 * Wake up AP by INIT, INIT, STARTUP sequence.
-	 */
-	if (cpu) {
-		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
-		goto out;
-	}
-
-	/*
-	 * Wake up BSP by nmi.
-	 *
-	 * Register a NMI handler to help wake up CPU0.
-	 */
-	boot_error = register_nmi_handler(NMI_LOCAL,
-					  wakeup_cpu0_nmi, 0, "wake_cpu0");
-
-	if (!boot_error) {
-		enable_start_cpu0 = 1;
-		*cpu0_nmi_registered = 1;
-		id = apic->dest_mode_logical ? cpu0_logical_apicid : apicid;
-		boot_error = wakeup_secondary_cpu_via_nmi(id, start_ip);
-	}
-
-out:
-	preempt_enable();
-
-	return boot_error;
-}
-
 int common_cpu_up(unsigned int cpu, struct task_struct *idle)
 {
 	int ret;
@@ -1086,8 +984,7 @@ int common_cpu_up(unsigned int cpu, stru
  * Returns zero if CPU booted OK, else error code from
  * ->wakeup_secondary_cpu.
  */
-static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
-		       int *cpu0_nmi_registered)
+static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
 	/* start_ip had better be page-aligned! */
 	unsigned long start_ip = real_mode_header->trampoline_start;
@@ -1120,7 +1017,6 @@ static int do_boot_cpu(int apicid, int c
 	 * This grunge runs the startup process for
 	 * the targeted processor.
 	 */
-
 	if (x86_platform.legacy.warm_reset) {
 
 		pr_debug("Setting warm reset code and vector.\n");
@@ -1149,15 +1045,14 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use a method from the APIC driver if one defined, with wakeup
 	 *   straight to 64-bit mode preferred over wakeup to RM.
 	 * Otherwise,
-	 * - Use an INIT boot APIC message for APs or NMI for BSP.
+	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
 		boot_error = apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
 		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
 	else
-		boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
-						     cpu0_nmi_registered);
+		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
 
 	if (!boot_error) {
 		/*
@@ -1206,9 +1101,8 @@ static int do_boot_cpu(int apicid, int c
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
-	int cpu0_nmi_registered = 0;
 	unsigned long flags;
-	int err, ret = 0;
+	int err;
 
 	lockdep_assert_irqs_enabled();
 
@@ -1247,11 +1141,10 @@ int native_cpu_up(unsigned int cpu, stru
 	if (err)
 		return err;
 
-	err = do_boot_cpu(apicid, cpu, tidle, &cpu0_nmi_registered);
+	err = do_boot_cpu(apicid, cpu, tidle);
 	if (err) {
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
-		ret = -EIO;
-		goto unreg_nmi;
+		return err;
 	}
 
 	/*
@@ -1267,15 +1160,7 @@ int native_cpu_up(unsigned int cpu, stru
 		touch_nmi_watchdog();
 	}
 
-unreg_nmi:
-	/*
-	 * Clean up the nmi handler. Do this after the callin and callout sync
-	 * to avoid impact of possible long unregister time.
-	 */
-	if (cpu0_nmi_registered)
-		unregister_nmi_handler(NMI_LOCAL, "wake_cpu0");
-
-	return ret;
+	return 0;
 }
 
 /**
@@ -1373,14 +1258,6 @@ static void __init smp_cpu_index_default
 	}
 }
 
-static void __init smp_get_logical_apicid(void)
-{
-	if (x2apic_mode)
-		cpu0_logical_apicid = apic_read(APIC_LDR);
-	else
-		cpu0_logical_apicid = GET_APIC_LOGICAL_ID(apic_read(APIC_LDR));
-}
-
 void __init smp_prepare_cpus_common(void)
 {
 	unsigned int i;
@@ -1443,8 +1320,6 @@ void __init native_smp_prepare_cpus(unsi
 	/* Setup local timer */
 	x86_init.timers.setup_percpu_clockev();
 
-	smp_get_logical_apicid();
-
 	pr_info("CPU0: ");
 	print_cpu_info(&cpu_data(0));
 
@@ -1752,18 +1627,6 @@ void play_dead_common(void)
 	local_irq_disable();
 }
 
-/**
- * cond_wakeup_cpu0 - Wake up CPU0 if needed.
- *
- * If NMI wants to wake up CPU0, start CPU0.
- */
-void cond_wakeup_cpu0(void)
-{
-	if (smp_processor_id() == 0 && enable_start_cpu0)
-		start_cpu0();
-}
-EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);
-
 /*
  * We need to flush the caches before going to sleep, lest we have
  * dirty data in our caches when we come back up.
@@ -1831,8 +1694,6 @@ static inline void mwait_play_dead(void)
 		__monitor(mwait_ptr, 0, 0);
 		mb();
 		__mwait(eax, 0);
-
-		cond_wakeup_cpu0();
 	}
 }
 
@@ -1841,11 +1702,8 @@ void hlt_play_dead(void)
 	if (__this_cpu_read(cpu_info.x86) >= 4)
 		wbinvd();
 
-	while (1) {
+	while (1)
 		native_halt();
-
-		cond_wakeup_cpu0();
-	}
 }
 
 void native_play_dead(void)
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -597,10 +597,6 @@ static int acpi_idle_play_dead(struct cp
 			io_idle(cx->address);
 		} else
 			return -ENODEV;
-
-#if defined(CONFIG_X86) && defined(CONFIG_HOTPLUG_CPU)
-		cond_wakeup_cpu0();
-#endif
 	}
 
 	/* Never reached */



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521266.809774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5c-0002Dz-SX; Fri, 14 Apr 2023 23:44:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521266.809774; Fri, 14 Apr 2023 23:44:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5c-0002C8-Oh; Fri, 14 Apr 2023 23:44:24 +0000
Received: by outflank-mailman (input) for mailman id 521266;
 Fri, 14 Apr 2023 23:44:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5c-0000zb-4c
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:24 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 47f825ef-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47f825ef-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232309.510911744@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515862;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=d2aa8PgTtRDs84GbzPYsg9/c/UFZj39RiJ/AAhETCGQ=;
	b=g4jq/m8sUm3dCRnIPA0dwXXquH1L4TZDkHABiQoYlnjqLC+rJPpJqxy/IaoXl2Ast0od+p
	CiKL7+fD0P/yOnSTmRwtqCgs50Gux2IhjOKxTWrY/yQZpbQgUKprRZgoR48vA7MalN2Ein
	wjm/JhwASW4BdRf1odYjAAQ//Da46eXuhGP3fTvTUqPqi1SEsWGcT8cUSKBsEaxJpxMwri
	t4TDP+VrP0GoyskBjuHSwyAZxkBjwuKAJeiqvUmsCCKQWGebsQbQ/dYW0PwkMnMEHx/+UQ
	Dk3BAh9FNSoDZhz5JHGwg35NvV4HQZFZp3Q2oCe3rsQbsck8yVtweC9RSpalJg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515862;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=d2aa8PgTtRDs84GbzPYsg9/c/UFZj39RiJ/AAhETCGQ=;
	b=6YjZFgK/aGr78TaRzK8UpHBzLtU3o1jB0TrBSGUihSUOr+Hyfc039ben0Zk8EZVKRloViH
	We4cnTEgROLKpkAw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 05/37] x86/topology: Remove CPU0 hotplug option
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:21 +0200 (CEST)

This was introduced together with commit e1c467e69040 ("x86, hotplug: Wake
up CPU0 via NMI instead of INIT, SIPI, SIPI") to eventually support
physical hotplug of CPU0:

 "We'll change this code in the future to wake up hard offlined CPU0 if
  real platform and request are available."

Eleven years later this has not happened, and physical hotplug of CPU0 is
still not officially supported. Remove the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/admin-guide/kernel-parameters.txt |   14 ---
 Documentation/core-api/cpu_hotplug.rst          |   13 ---
 arch/x86/Kconfig                                |   43 ----------
 arch/x86/include/asm/cpu.h                      |    3 
 arch/x86/kernel/topology.c                      |   98 ------------------------
 arch/x86/power/cpu.c                            |   37 ---------
 6 files changed, 6 insertions(+), 202 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -795,20 +795,6 @@
 			Format:
 			<first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]
 
-	cpu0_hotplug	[X86] Turn on CPU0 hotplug feature when
-			CONFIG_BOOTPARAM_HOTPLUG_CPU0 is off.
-			Some features depend on CPU0. Known dependencies are:
-			1. Resume from suspend/hibernate depends on CPU0.
-			Suspend/hibernate will fail if CPU0 is offline and you
-			need to online CPU0 before suspend/hibernate.
-			2. PIC interrupts also depend on CPU0. CPU0 can't be
-			removed if a PIC interrupt is detected.
-			It's said poweroff/reboot may depend on CPU0 on some
-			machines although I haven't seen such issues so far
-			after CPU0 is offline on a few tested machines.
-			If the dependencies are under your control, you can
-			turn on cpu0_hotplug.
-
 	cpuidle.off=1	[CPU_IDLE]
 			disable the cpuidle sub-system
 
--- a/Documentation/core-api/cpu_hotplug.rst
+++ b/Documentation/core-api/cpu_hotplug.rst
@@ -127,17 +127,8 @@ Once the CPU is shutdown, it will be rem
  $ echo 1 > /sys/devices/system/cpu/cpu4/online
  smpboot: Booting Node 0 Processor 4 APIC 0x1
 
-The CPU is usable again. This should work on all CPUs. CPU0 is often special
-and excluded from CPU hotplug. On X86 the kernel option
-*CONFIG_BOOTPARAM_HOTPLUG_CPU0* has to be enabled in order to be able to
-shutdown CPU0. Alternatively the kernel command option *cpu0_hotplug* can be
-used. Some known dependencies of CPU0:
-
-* Resume from hibernate/suspend. Hibernate/suspend will fail if CPU0 is offline.
-* PIC interrupts. CPU0 can't be removed if a PIC interrupt is detected.
-
-Please let Fenghua Yu <fenghua.yu@intel.com> know if you find any dependencies
-on CPU0.
+The CPU is usable again. This should work on all CPUs, but CPU0 is often special
+and excluded from CPU hotplug.
 
 The CPU hotplug coordination
 ============================
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2294,49 +2294,6 @@ config HOTPLUG_CPU
 	def_bool y
 	depends on SMP
 
-config BOOTPARAM_HOTPLUG_CPU0
-	bool "Set default setting of cpu0_hotpluggable"
-	depends on HOTPLUG_CPU
-	help
-	  Set whether default state of cpu0_hotpluggable is on or off.
-
-	  Say Y here to enable CPU0 hotplug by default. If this switch
-	  is turned on, there is no need to give cpu0_hotplug kernel
-	  parameter and the CPU0 hotplug feature is enabled by default.
-
-	  Please note: there are two known CPU0 dependencies if you want
-	  to enable the CPU0 hotplug feature either by this switch or by
-	  cpu0_hotplug kernel parameter.
-
-	  First, resume from hibernate or suspend always starts from CPU0.
-	  So hibernate and suspend are prevented if CPU0 is offline.
-
-	  Second dependency is PIC interrupts always go to CPU0. CPU0 can not
-	  offline if any interrupt can not migrate out of CPU0. There may
-	  be other CPU0 dependencies.
-
-	  Please make sure the dependencies are under your control before
-	  you enable this feature.
-
-	  Say N if you don't want to enable CPU0 hotplug feature by default.
-	  You still can enable the CPU0 hotplug feature at boot by kernel
-	  parameter cpu0_hotplug.
-
-config DEBUG_HOTPLUG_CPU0
-	def_bool n
-	prompt "Debug CPU0 hotplug"
-	depends on HOTPLUG_CPU
-	help
-	  Enabling this option offlines CPU0 (if CPU0 can be offlined) as
-	  soon as possible and boots up userspace with CPU0 offlined. User
-	  can online CPU0 back after boot time.
-
-	  To debug CPU0 hotplug, you need to enable CPU0 offline/online
-	  feature by either turning on CONFIG_BOOTPARAM_HOTPLUG_CPU0 during
-	  compilation or giving cpu0_hotplug kernel parameter at boot.
-
-	  If unsure, say N.
-
 config COMPAT_VDSO
 	def_bool n
 	prompt "Disable the 32-bit vDSO (needed for glibc 2.3.3)"
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -31,9 +31,6 @@ struct x86_cpu {
 extern int arch_register_cpu(int num);
 extern void arch_unregister_cpu(int);
 extern void soft_restart_cpu(void);
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-extern int _debug_hotplug_cpu(int cpu, int action);
-#endif
 #endif
 
 extern void ap_init_aperfmperf(void);
--- a/arch/x86/kernel/topology.c
+++ b/arch/x86/kernel/topology.c
@@ -38,102 +38,12 @@
 static DEFINE_PER_CPU(struct x86_cpu, cpu_devices);
 
 #ifdef CONFIG_HOTPLUG_CPU
-
-#ifdef CONFIG_BOOTPARAM_HOTPLUG_CPU0
-static int cpu0_hotpluggable = 1;
-#else
-static int cpu0_hotpluggable;
-static int __init enable_cpu0_hotplug(char *str)
-{
-	cpu0_hotpluggable = 1;
-	return 1;
-}
-
-__setup("cpu0_hotplug", enable_cpu0_hotplug);
-#endif
-
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-/*
- * This function offlines a CPU as early as possible and allows userspace to
- * boot up without the CPU. The CPU can be onlined back by user after boot.
- *
- * This is only called for debugging CPU offline/online feature.
- */
-int _debug_hotplug_cpu(int cpu, int action)
-{
-	int ret;
-
-	if (!cpu_is_hotpluggable(cpu))
-		return -EINVAL;
-
-	switch (action) {
-	case 0:
-		ret = remove_cpu(cpu);
-		if (!ret)
-			pr_info("DEBUG_HOTPLUG_CPU0: CPU %u is now offline\n", cpu);
-		else
-			pr_debug("Can't offline CPU%d.\n", cpu);
-		break;
-	case 1:
-		ret = add_cpu(cpu);
-		if (ret)
-			pr_debug("Can't online CPU%d.\n", cpu);
-
-		break;
-	default:
-		ret = -EINVAL;
-	}
-
-	return ret;
-}
-
-static int __init debug_hotplug_cpu(void)
+int arch_register_cpu(int cpu)
 {
-	_debug_hotplug_cpu(0, 0);
-	return 0;
-}
-
-late_initcall_sync(debug_hotplug_cpu);
-#endif /* CONFIG_DEBUG_HOTPLUG_CPU0 */
-
-int arch_register_cpu(int num)
-{
-	struct cpuinfo_x86 *c = &cpu_data(num);
-
-	/*
-	 * Currently CPU0 is only hotpluggable on Intel platforms. Other
-	 * vendors can add hotplug support later.
-	 * Xen PV guests don't support CPU0 hotplug at all.
-	 */
-	if (c->x86_vendor != X86_VENDOR_INTEL ||
-	    cpu_feature_enabled(X86_FEATURE_XENPV))
-		cpu0_hotpluggable = 0;
-
-	/*
-	 * Two known BSP/CPU0 dependencies: Resume from suspend/hibernate
-	 * depends on BSP. PIC interrupts depend on BSP.
-	 *
-	 * If the BSP dependencies are under control, one can tell kernel to
-	 * enable BSP hotplug. This basically adds a control file and
-	 * one can attempt to offline BSP.
-	 */
-	if (num == 0 && cpu0_hotpluggable) {
-		unsigned int irq;
-		/*
-		 * We won't take down the boot processor on i386 if some
-		 * interrupts only are able to be serviced by the BSP in PIC.
-		 */
-		for_each_active_irq(irq) {
-			if (!IO_APIC_IRQ(irq) && irq_has_action(irq)) {
-				cpu0_hotpluggable = 0;
-				break;
-			}
-		}
-	}
-	if (num || cpu0_hotpluggable)
-		per_cpu(cpu_devices, num).cpu.hotpluggable = 1;
+	struct x86_cpu *xc = per_cpu_ptr(&cpu_devices, cpu);
 
-	return register_cpu(&per_cpu(cpu_devices, num).cpu, num);
+	xc->cpu.hotpluggable = cpu > 0;
+	return register_cpu(&xc->cpu, cpu);
 }
 EXPORT_SYMBOL(arch_register_cpu);
 
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -351,43 +351,6 @@ static int bsp_pm_callback(struct notifi
 	case PM_HIBERNATION_PREPARE:
 		ret = bsp_check();
 		break;
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-	case PM_RESTORE_PREPARE:
-		/*
-		 * When system resumes from hibernation, online CPU0 because
-		 * 1. it's required for resume and
-		 * 2. the CPU was online before hibernation
-		 */
-		if (!cpu_online(0))
-			_debug_hotplug_cpu(0, 1);
-		break;
-	case PM_POST_RESTORE:
-		/*
-		 * When a resume really happens, this code won't be called.
-		 *
-		 * This code is called only when user space hibernation software
-		 * prepares for snapshot device during boot time. So we just
-		 * call _debug_hotplug_cpu() to restore to CPU0's state prior to
-		 * preparing the snapshot device.
-		 *
-		 * This works for normal boot case in our CPU0 hotplug debug
-		 * mode, i.e. CPU0 is offline and user mode hibernation
-		 * software initializes during boot time.
-		 *
-		 * If CPU0 is online and user application accesses snapshot
-		 * device after boot time, this will offline CPU0 and user may
-		 * see different CPU0 state before and after accessing
-		 * the snapshot device. But hopefully this is not a case when
-		 * user debugging CPU0 hotplug. Even if users hit this case,
-		 * they can easily online CPU0 back.
-		 *
-		 * To simplify this debug code, we only consider normal boot
-		 * case. Otherwise we need to remember CPU0's state and restore
-		 * to that state and resolve racy conditions etc.
-		 */
-		_debug_hotplug_cpu(0, 0);
-		break;
-#endif
 	default:
 		break;
 	}



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521264.809755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5b-0001jo-Bd; Fri, 14 Apr 2023 23:44:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521264.809755; Fri, 14 Apr 2023 23:44:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5b-0001jh-8F; Fri, 14 Apr 2023 23:44:23 +0000
Received: by outflank-mailman (input) for mailman id 521264;
 Fri, 14 Apr 2023 23:44:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5Z-0001Th-Gt
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:21 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 45f0e738-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:44:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45f0e738-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232309.385574446@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515858;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=IHNUiYT/aa+9U/QrFpO4u8KQro95rDNTw6iEEK5ze9o=;
	b=1GGWgmKl+Z16YESLBwfkaJ8pYvl0ibRrRLZ//qbrV6Rn4ejq8Bvi6vo9cC/d06/jSPEXR4
	hvZPrEqwgB1nXc44jqzGpA3g8UhwvIHA09SMd8L91BmwlYgaP2m77BjOpifPA2Jp8FR9F+
	c0G0jy+fXRMuNBDQtZn5Uv5VCIUDjwdIxXIzMKcFSmEiflIS9qbg0CEGaA9Azgv3qTSjMh
	2Q6Nmfsfu+SPkBhJwKselmJZVd9FCtpTg9fyXFZ6j2n8kUoisHbXRgCmcH48m/s4qcThpJ
	o2FriQZJckk/vdgmu5g5vmRYvkI+K74QUFnLD6JN3zpP58uh1nfGpXn5o7XkuA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515858;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=IHNUiYT/aa+9U/QrFpO4u8KQro95rDNTw6iEEK5ze9o=;
	b=cHgI8T+6dl0c5QvmY+jPTGI3Fml5ooLABkADGz2XWt8RyEwXHfElCQYbWkrIndZHTvvRK5
	Onzbq0bjdDoTP8Dg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 03/37] x86/smpboot: Avoid pointless delay calibration if TSC
 is synchronized
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:18 +0200 (CEST)

When the TSC is synchronized across sockets, there is no reason to
calibrate the delay for the first CPU which comes up on a socket.

Just reuse the existing calibration value.

This removes 100ms of pointlessly wasted time from CPU hotplug.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/smpboot.c |   38 ++++++++++++++++++++++++--------------
 arch/x86/kernel/tsc.c     |   20 ++++++++++++++++----
 2 files changed, 40 insertions(+), 18 deletions(-)

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -178,10 +178,7 @@ static void smp_callin(void)
 	 */
 	apic_ap_setup();
 
-	/*
-	 * Save our processor parameters. Note: this information
-	 * is needed for clock calibration.
-	 */
+	/* Save our processor parameters. */
 	smp_store_cpu_info(cpuid);
 
 	/*
@@ -192,14 +189,6 @@ static void smp_callin(void)
 
 	ap_init_aperfmperf();
 
-	/*
-	 * Get our bogomips.
-	 * Update loops_per_jiffy in cpu_data. Previous call to
-	 * smp_store_cpu_info() stored a value that is close but not as
-	 * accurate as the value just calculated.
-	 */
-	calibrate_delay();
-	cpu_data(cpuid).loops_per_jiffy = loops_per_jiffy;
 	pr_debug("Stack at about %p\n", &cpuid);
 
 	wmb();
@@ -212,8 +201,24 @@ static void smp_callin(void)
 	cpumask_set_cpu(cpuid, cpu_callin_mask);
 }
 
+static void ap_calibrate_delay(void)
+{
+	/*
+	 * Calibrate the delay loop and update loops_per_jiffy in cpu_data.
+	 * smp_store_cpu_info() stored a value that is close but not as
+	 * accurate as the value just calculated.
+	 *
+	 * As this is invoked after the TSC synchronization check,
+	 * calibrate_delay_is_known() will skip the calibration routine
+	 * when TSC is synchronized across sockets.
+	 */
+	calibrate_delay();
+	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
+}
+
 static int cpu0_logical_apicid;
 static int enable_start_cpu0;
+
 /*
  * Activate a secondary processor.
  */
@@ -240,10 +245,15 @@ static void notrace start_secondary(void
 
 	/* otherwise gcc will move up smp_processor_id before the cpu_init */
 	barrier();
+	/* Check TSC synchronization with the control CPU: */
+	check_tsc_sync_target();
+
 	/*
-	 * Check TSC synchronization with the boot CPU:
+	 * Calibrate the delay loop after the TSC synchronization check.
+	 * This allows to skip the calibration when TSC is synchronized
+	 * across sockets.
 	 */
-	check_tsc_sync_target();
+	ap_calibrate_delay();
 
 	speculative_store_bypass_ht_init();
 
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -1598,10 +1598,7 @@ void __init tsc_init(void)
 
 #ifdef CONFIG_SMP
 /*
- * If we have a constant TSC and are using the TSC for the delay loop,
- * we can skip clock calibration if another cpu in the same socket has already
- * been calibrated. This assumes that CONSTANT_TSC applies to all
- * cpus in the socket - this should be a safe assumption.
+ * Check whether existing calibration data can be reused.
  */
 unsigned long calibrate_delay_is_known(void)
 {
@@ -1609,6 +1606,21 @@ unsigned long calibrate_delay_is_known(v
 	int constant_tsc = cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC);
 	const struct cpumask *mask = topology_core_cpumask(cpu);
 
+	/*
+	 * If TSC has constant frequency and TSC is synchronized across
+	 * sockets then reuse CPU0 calibration.
+	 */
+	if (constant_tsc && !tsc_unstable)
+		return cpu_data(0).loops_per_jiffy;
+
+	/*
+	 * If TSC has constant frequency and TSC is not synchronized across
+	 * sockets and this is not the first CPU in the socket, then reuse
+	 * the calibration value of an already online CPU on that socket.
+	 *
+	 * This assumes that CONSTANT_TSC is consistent for all CPUs in a
+	 * socket.
+	 */
 	if (!constant_tsc || !mask)
 		return 0;
 



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521263.809745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5Z-0001Te-0n; Fri, 14 Apr 2023 23:44:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521263.809745; Fri, 14 Apr 2023 23:44:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5Y-0001TX-U7; Fri, 14 Apr 2023 23:44:20 +0000
Received: by outflank-mailman (input) for mailman id 521263;
 Fri, 14 Apr 2023 23:44:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5X-0000zb-EL
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:19 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 44f6ddd0-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44f6ddd0-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232309.323016954@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515856;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=sDTci1v+H+1hOBnO3MWKNQ4KSM/+mxYEdJGIWDEKpYw=;
	b=y8O+iHOImdrEQNmtF7SapUaot3V0yy8b8Ub9SUl+OgGsTaIJZhbZjtiR4PN4V5WCNemOca
	Ob8ttIQs2lYzofVI5dy3T9RFKAux9iKX/Rkv+VPjfOqV4T6XWGHXuY9IFhwqlXBTVc5Nsl
	KK4n8ce7YmA5TsutXdcfdM52VSR98OW8mCNQWFGg6cY3Yq6KC99p69q2+p9erXW8Qg/qAb
	ceQeWzd0QJGBxdpwT1wRPfHvVdxLxyyhgH/r2Dl9U2nA2Jo/APevn/jIIjbNa3EFkc7Pxb
	zX0dOALraGxj/DlefvHJr1NZVmMqpk/+HdfNk32Kc/stZp2q/WmPtDMIe6OaIQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515856;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=sDTci1v+H+1hOBnO3MWKNQ4KSM/+mxYEdJGIWDEKpYw=;
	b=HMWjVuC1heTl6ACr7VnRkYnxnf9HNU4lYZlkNOeO1jQ9uLiC6+ocCzTSyzyWZjYVjiCgZf
	cRj6X0w+T3BbGTBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 02/37] cpu/hotplug: Mark arch_disable_smp_support() and
 bringup_nonboot_cpus() __init
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:16 +0200 (CEST)

No point in keeping them around.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/smpboot.c |    4 ++--
 kernel/cpu.c              |    2 +-
 kernel/smp.c              |    2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1269,9 +1269,9 @@ int native_cpu_up(unsigned int cpu, stru
 }
 
 /**
- * arch_disable_smp_support() - disables SMP support for x86 at runtime
+ * arch_disable_smp_support() - Disables SMP support for x86 at boottime
  */
-void arch_disable_smp_support(void)
+void __init arch_disable_smp_support(void)
 {
 	disable_ioapic_support();
 }
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1502,7 +1502,7 @@ int bringup_hibernate_cpu(unsigned int s
 	return 0;
 }
 
-void bringup_nonboot_cpus(unsigned int setup_max_cpus)
+void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
 {
 	unsigned int cpu;
 
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -1051,7 +1051,7 @@ EXPORT_SYMBOL(setup_max_cpus);
  * SMP mode to <NUM>.
  */
 
-void __weak arch_disable_smp_support(void) { }
+void __weak __init arch_disable_smp_support(void) { }
 
 static int __init nosmp(char *str)
 {



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521262.809728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5X-00013M-Qc; Fri, 14 Apr 2023 23:44:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521262.809728; Fri, 14 Apr 2023 23:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5X-00012H-NU; Fri, 14 Apr 2023 23:44:19 +0000
Received: by outflank-mailman (input) for mailman id 521262;
 Fri, 14 Apr 2023 23:44:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5W-0000zb-RX
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:18 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 436b1c71-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 436b1c71-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414225551.858160935@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515853;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc; bh=2FBNXDCOjdNXMioUNtpnUEdjyh1DRJi8Y1Kqgx20DOM=;
	b=i+svBH7O/oaj7aunuMsHcDS2onKFuoOeQ13uBn0i2EZBlR0v4LMikX50FeDA9uuKtpCCb4
	z/ZWJt0/vTUZz6mY9zE5iv3BQkst/GrxSi6Ihj+kSlASau5HRJFfQmM6o+Ezs2LFefuihB
	OfOybKBgg03vWHVVy+lwwdcxJzNP2Mj4ptIvtRem3TShasnYo0WGCg3BGy7G1RTXjq6H0Q
	McFUN7aC8YHh6CBsi49Ue6HNDjvXBu1uwc2ekdjKi/zkZtzpv+eFnW/uvjlxZJOJf9q5K0
	HXiWxG4BHUsSjcGVIe0d8f1X+Bi3gXR6qL+GYAhvxOO7XxPoj1jAoHFb0nd3bw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515853;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc; bh=2FBNXDCOjdNXMioUNtpnUEdjyh1DRJi8Y1Kqgx20DOM=;
	b=CjkzjdOXpz0mr1oMcda6Lg3ZiGC4HBFA8/eiJ3o7o6axQMA6SR74jLA8yo7Mg0AL4WrPGY
	YXxTfH/3ZF/ENPBg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Date: Sat, 15 Apr 2023 01:44:13 +0200 (CEST)

Hi!

This is a complete rework of the parallel bringup patch series (V17)

    https://lore.kernel.org/lkml/20230328195758.1049469-1-usama.arif@bytedance.com

to address the issues which were discovered in review:

 1) The X86 microcode loader serialization requirement

    https://lore.kernel.org/lkml/87v8iirxun.ffs@tglx

    Microcode loading on HT enabled X86 CPUs requires that the microcode is
    loaded on the primary thread. The sibling thread(s) must be in
    quiescent state; either looping in a place which is aware of potential
    changes by the microcode update (see late loading) or in fully quiescent
    state, i.e. waiting for INIT/SIPI.

    This is required by hardware/firmware on Intel. Aside from that it's a
    vendor independent software correctness issue. Assume the following
    sequence:

    CPU1.0	  	      CPU1.1
    			      CPUID($A)
    Load microcode.
    Changes CPUID($A, $B)
    			      CPUID($B)

    CPU1.1 makes a decision on $A and $B which might be inconsistent due
    to the microcode update.

    The solution for this is to bring up the primary threads first and after
    that the siblings. Loading microcode on the siblings is a NOOP on Intel
    and on AMD it is guaranteed to only modify thread local state.

    This ensures that the APs can load microcode before reaching the alive
    synchronization point w/o doing any further x86 specific
    synchronization between the core siblings.

 2) The general design issues discussed in V16

    https://lore.kernel.org/lkml/87pm8y6yme.ffs@tglx

    The previous parallel bringup patches just glued this mechanism into
    the existing code without a deeper analysis of the synchronization
    mechanisms and without generalizing it so that the control logic is
    mostly in the core code and not made an architecture specific tinker
    space.

    Much of that had been pointed out 2 years ago in the discussions about
    the early versions of parallel bringup already.


The series is based on:

  git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip x86/apic

and also available from git:

  git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git hotplug


Background
----------

The reason why people are interested in parallel bringup is to shorten
the (kexec) reboot time of cloud servers to reduce the downtime of the
VM tenants. There are obviously other interesting use cases for this
like VM startup time, embedded devices...

The current fully serialized bringup does the following per AP:

    1) Prepare callbacks (allocate, initialize, create threads)
    2) Kick the AP alive (e.g. INIT/SIPI on x86)
    3) Wait for the AP to report alive state
    4) Let the AP continue through the atomic bringup
    5) Let the AP run the threaded bringup to full online state

There are two significant delays:

    #3 The time for an AP to report alive state in start_secondary() on x86
       has been measured in the range between 350us and 3.5ms depending on
       vendor and CPU type, BIOS microcode size etc.

    #4 The atomic bringup does the microcode update. This has been measured
       to take up to ~8ms on the primary threads depending on the microcode
       patch size to apply.

On a two socket SKL server with 56 cores (112 threads) the boot CPU spends
on current mainline about 800ms busy waiting for the APs to come up and
apply microcode. That's more than 80% of the actual onlining procedure.

By splitting the actual bringup mechanism into two parts, this can be
reduced to waiting for the first AP to report alive; if the system is
large enough, the first AP is already waiting when the boot CPU has
finished the wake-up of the last AP.


The actual solution comes in several parts
------------------------------------------

 1) [P 1-2] General cleanups (init annotations, kernel doc...)

 2) [P 3] The obvious

    Avoid pointless delay calibration when TSC is synchronized across
    sockets. That removes a whopping 100ms delay for the first CPU of a
    socket. This is an improvement independent of parallel bringup and had
    been discussed two years ago already.

 3) [P 4-6] Removal of the CPU0 hotplug hack.

    This was added 11 years ago with the promise to make this a real
    hardware mechanism, but that never materialized. As physical CPU
    hotplug is not really supported and the physical unplugging of CPU0
    never happened, there is no reason to keep this cruft around. It's
    just maintenance ballast for no value and the removal makes
    implementing the parallel bringup feature way simpler.

 4) [P 7-16] Cleanup of the existing bringup mechanism:

     a) Code reorganisation so that the general hotplug specific code is
        in smpboot.c and not sprinkled all over the place

     b) Decouple MTRR/PAT initialization from smp_callout_mask to prepare
        for replacing that mask with a hotplug core code synchronization
        mechanism.

     c) Make TSC synchronization function call based so that the control CPU
        does not have to busy wait for nothing if synchronization is not
        required.

     d) Remove the smp_callin_mask synchronization point as it is no longer
        required due to #4c.

     e) Rework the sparse_irq_lock held region in the core code so that the
        next polling synchronization point in the x86 code can be removed
        too.

     f) Due to #4e it is no longer required to spin wait for the AP to set
        its online bit. Remove wait_cpu_online() and the XENPV
        counterpart. So the control CPU can directly wait for the online
        idle completion by the AP, which frees the control CPU up for
        other work.

     This reduces the synchronization points in the x86 code to one, which
     is the AP alive one. This synchronization will be moved to core
     infrastructure in the next section.

 5) [P 17-27] Replace the disconnected CPU state tracking

    The extra CPU state tracking which is used by a few architectures is
    completely separate from the CPU hotplug core code.

    Replacing it with a variant integrated into the core hotplug machinery
    allows architecture specific code to be reduced and provides a generic
    synchronization mechanism for (parallel) CPU bringup/teardown.

    - Convert x86 over and replace the AP alive synchronization on x86 with
      the core variant which removes the remaining x86 hotplug
      synchronization masks.

    - Convert the other architectures' usage and remove the old interface
      and code.

 6) [P 28-30] Split the bringup into two steps

    The first step invokes the wakeup function on the BP, e.g. SIPI/STARTUP
    on x86. The second one waits on the BP for the AP to report alive and
    releases it for the complete onlining.

    As the hotplug state machine allows partial bringup, this makes it
    possible to later kick all APs alive in a first iteration and then
    bring them up completely one by one afterwards.

 7) [P 31] Switch the primary thread detection to a cpumask

    This makes the parallel bringup a simple cpumask based mechanism
    without tons of conditionals and checks for primary threads.

 8) [P 32] Implement the parallel bringup core code

    The parallel bringup looks like this:

      1) Bring up the primary SMT threads to the CPUHP_KICK_AP_ALIVE step
         one by one

      2) Bring up the primary SMT threads to the CPUHP_ONLINE step one by
         one

      3) Bring up the secondary SMT threads to the CPUHP_KICK_AP_ALIVE
         step one by one

      4) Bring up the secondary SMT threads to the CPUHP_ONLINE
         step one by one

    In case SMT is not supported this is obviously reduced to steps #1
    and #2.

 9) [P 33-37] Prepare X86 for parallel bringup and enable it


Caveats
-------

The non-X86 changes have all been compile tested. Boot and runtime
testing has only been done on a few real hardware platforms and on qemu
as available. That definitely needs some help from the people who have
these systems at their fingertips.


Results and analysis
--------------------

Here are numbers for a dual socket SKL 56 cores/ 112 threads machine.  All
numbers in milliseconds. The time measured is the time which the cpu_up()
call takes for each CPU and phase. It's not exact as the system is already
scheduling, handling interrupts and soft interrupts, which is obviously
skewing the picture slightly.

Baseline tip tree x86/apic branch.

		total      avg/CPU          min          max
total  :      912.081        8.217        3.720      113.271

The max of ~113ms is mostly due to the silly delay calibration for the
second socket, which takes 100ms and was eliminated first. The other
initial cleanups and improvements also take some time away.

So the real baseline becomes:

		total      avg/CPU          min          max
total  :      785.960        7.081        3.752       36.098

The max here is on the first CPU of the second socket. 20ms of that is due
to TSC synchronization and an extra 2ms to react on the SIPI.

With parallel bootup enabled this becomes:

		total      avg/CPU          min          max
prepare:       39.108        0.352        0.238        0.883
online :       45.166        0.407        0.170       20.357
total  :       84.274        0.759        0.408       21.240

That's a factor ~9.3 reduction on average.

Looking at the 27 primary threads of socket 0, this becomes even more
interesting:

		total      avg/CPU          min          max
total  :      325.764       12.065       11.981       14.125

versus:
		total      avg/CPU          min          max
prepare:        8.945        0.331        0.238        0.834
online :        4.830        0.179        0.170        0.212
total  :       13.775        0.510        0.408        1.046

So the reduction factor is ~23.5 here. That's mostly because the 20ms TSC
sync is not skewing the picture.

For all 55 primaries, i.e. with the 20ms TSC sync extra for socket 1, this
becomes:

                total      avg/CPU          min          max
total  :      685.489       12.463       11.975       36.098

versus:

                total      avg/CPU          min          max
prepare:       19.080        0.353        0.238        0.883
online :       30.283        0.561        0.170       20.357
total  :       49.363        0.914        0.408       21.240

The TSC sync reduces the win to a factor of ~13.8

With 'tsc=reliable' on the command line the socket sync is disabled which
brings it back to the socket 0 numbers:

                total      avg/CPU          min          max
prepare:       18.970        0.351        0.231        0.874
online :       10.328        0.191        0.169        0.358
total  :       29.298        0.543        0.400        1.232

Now looking at the secondary threads only:

                total      avg/CPU          min          max
total  :      100.471        1.794        0.375        4.745

versus:
                total      avg/CPU          min          max
prepare:       19.753        0.353        0.257        0.512
online :       14.671        0.262        0.179        3.461
total  :       34.424        0.615        0.436        3.973

Still a factor of ~3.

The average on the secondaries for the serialized bringup is significantly
lower than for the primaries because the SIPI response time is shorter and
the microcode update takes no time.

This varies wildly with the system, whether microcode in BIOS is already up
to date, how big the microcode patch is and how long the INIT/SIPI response
time is. On an AMD Zen3 machine INIT/SIPI response time is amazingly fast
(350us), but then it lacks TSC_ADJUST and does a two millisecond TSC sync
test for _every_ AP. All of this sucks...


Possible further enhancements
-----------------------------

It's definitely worthwhile to look into reducing the cross socket TSC sync
test time. It's probably safe enough to use 5ms or even 2ms instead of 20ms
on systems with TSC_ADJUST and a few other 'TSC is sane' indicators. Moving
it out of the hotplug path is eventually possible, but that needs some deep
thoughts.

Let's take the TSC sync out of the picture by adding 'tsc=reliable' to the
kernel command line. With that, the bringup of 111 APs takes:

                total      avg/CPU          min          max
prepare:       38.936        0.351        0.231        0.874
online :       25.231        0.227        0.169        3.465
total  :       64.167        0.578        0.400        4.339

Some of the outliers are not necessarily in the state callbacks as the
system is already scheduling and handles interrupts and soft
interrupts. Haven't analyzed that yet in detail.

In the prepare stage which runs on the control CPU the larger steps are:

  smpcfd:prepare           16us  avg/CPU
  threads:prepare          98us  avg/CPU
  workqueue:prepare        43us  avg/CPU
  trace/RB:prepare        135us  avg/CPU

The trace ringbuffer initialization allocates 354 pages and 354 control
structures one by one. That probably should allocate a large page and an
array of control structures and work from there. I'm sure that would reduce
this significantly. Steven?

smpcfd just does a per-CPU allocation. No idea why that takes that long.

As for threads and workqueues: David thought about spreading out the
preparation work and doing it truly in parallel. That's a nice idea, but the
threads and workqueue prepare steps are self-serializing. The workqueue one
has a global mutex, and aside from that both steps create kernel threads,
which implicitly serialize on kthreadd. alloc_percpu(), which is used by
smpcfd:prepare, is also globally serialized.

The rest of the prepare steps are pretty much in the single digit
microseconds range.

On the AP side it should be possible to move some of the initialization
steps before the alive synchronization point, but that really needs a lot
of analysis of whether the functions are safe to invoke that early and
outside of the cpu_hotplug_lock held region for the case of two stage
parallel bringup; see below.

The largest part is:

    identify_secondary_cpu()	99us avg/CPU
   
    Inside of identify_secondary_cpu() the largest offender:

      mcheck_init()		73us avg/CPU

    This part is definitely worth looking at to see whether it can be at
    least partially moved to the early startup code before the alive
    synchronization point. There's a lot of deep analysis required, and
    ideally we just rewrite the whole CPUID evaluation trainwreck
    completely.

The rest of the AP side is low single digit microseconds, except for:

    perf/x86:starting		14us avg/CPU

    smpboot/threads:online	13us avg/CPU
    workqueue:online		17us avg/CPU
    mm/vmstat:online		17us avg/CPU
    sched:active		30us avg/CPU

sched:active is special. Onlining the first secondary HT thread on the
second socket creates a 3.2ms outlier which skews the whole picture. That's
caused by enabling the static key sched_smt_present, which patches the
world and some more. For all other APs this is really in the 1us range. This
could definitely be postponed during bootup, just like the scheduler domain
rebuild is already done after the bringup. But that's still fully serialized
and single threaded, and obviously could be done later in the context of
async parallel init. It's unclear why this is different with the fully
serialized bringup, where it takes significantly less time, but that's
something which needs to be investigated.


Is truly parallel bringup feasible?
-----------------------------------

In theory yes, realistically no. Why?

   1) The preparation phase

      Allocating memory and creating threads for the to-be-brought-up CPU
      must obviously happen on an already online CPU.

      While it would be possible to bring up a subset of CPUs first and let
      them do the preparation steps for groups of still offline CPUs
      concurrently, the actual benefit of doing so is dubious.

      The prime example is kernel thread creation, which is implicitly
      serialized on kthreadd.

      A simple experiment shows that 4 concurrent workers on 4 different
      CPUs, each creating 14 * 5 = 70 kernel threads, are 5% slower
      than a single worker creating 4 * 14 * 5 = 280 threads.

      So we'd need multiple kthreadd instances to handle that, which
      would then serialize on tasklist_lock and other things.

      That aside, the preparation phase is also affected by the problem
      below.

   2) Assumptions about hotplug serialization

      a) There are quite a few assumptions about CPU bringup being fully
         serialized across state transitions. A lot of state callbacks rely
         on that and would require local locking.

	 Adding that local locking is surely possible, but that has several
	 downsides:

          - It adds complexity and makes it harder for developers to get
            this correct. The subtle bugs resulting from that are going
            to be interesting.

          - Fine grained locking has a charm, but only if the time spent
	    for the actual work is larger than the time required for
	    serialization and synchronization.

	    Serializing a callback which takes less than a microsecond and
	    then having a large number of CPUs contending on the lock will
	    not make it any faster at all. That's a well known issue of
	    parallelizing and neither made up nor kernel specific.

      b) Some operations definitely need to be protected by the
         cpu_hotplug_lock, especially those which affect cpumasks, as the
         masks are guaranteed to be stable in a cpus_read_lock()'ed region.

         As this lock cannot be taken in atomic contexts, the control CPU
         is required to hold the lock write-locked across these state
         transitions. And no, we are not making this a spinlock just for
         that, and we can't anyway.

         Just slapping a lock into the x86 specific part of the cpumask
         update function does not solve anything. The relevant patch in V17
         is completely useless as it only serializes the actual cpumask/map
         modifications; all read side users are hosed if the update
         were moved before the alive synchronization point, i.e. into a
         region not protected by the hotplug lock.

         Even if the hotplug lock were held across the whole parallel
         bringup operation, this would still expose all usage of these
         masks and maps in the actual hotplug state callbacks to concurrent
         modifications.

       	 And no, we are not going to expose an architecture specific raw
       	 spinlock to the hotplug state callbacks, especially not to those
       	 in generic code.

      c) Some cpus_read_lock()'ed regions also expect that there is no CPU
         state transition happening which would modify their local
         state. This would again require local serialization.
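
      The cost argument in 2a) can be made concrete with a
      back-of-the-envelope model (all numbers purely illustrative): once
      the callbacks are shorter than the lock handling, a contended global
      lock makes the "parallel" version strictly slower than the plain
      serialized bringup.

```shell
# Toy model: n CPUs each run one state callback of t_work microseconds.
# Serialized bringup: callbacks simply run one after another.
# "Parallel" bringup behind a hot global lock: the lock serializes the
# callbacks anyway, and each one additionally pays t_lock for the lock
# hand-off and cache line bouncing.
awk 'BEGIN {
    n = 100; t_work = 1; t_lock = 0.5     # illustrative values only
    serialized = n * t_work
    contended  = n * (t_work + t_lock)
    printf "serialized: %dus  lock-contended: %dus\n", serialized, contended
}'
```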

    3) The amount of work and churn:

       - Analyze the per architecture low level startup functions plus their
         descendant functions and make them ready for concurrency if
         necessary.

       - Analyze ~300 hotplug state callbacks and their descendant functions
         and make them ready for concurrency if necessary.

       - Analyze all cpus_read_lock()'ed regions and address their
         requirements.
      
       - Rewrite the core code to handle the cpu_hotplug_lock requirements
         only in distinct phases of the state machine.

       - Rewrite the core code to handle state callback failure and the
         related rollback in the context of the new rules.

       - ...

   Even if some people are dedicated enough to do that, it's very
   questionable whether the resulting complexity is justified.

   We've spent a serious amount of time sanitizing hotplug and bringing it
   into a state where it is correct. That also made it reasonably simple
   for developers to implement hotplug state callbacks without having to
   become hotplug experts.

   Breaking this completely up will result in a flood of hard to diagnose
   subtle issues for sure. Who is going to deal with them?

   The experience with this series so far does not make me comfortable
   about that thought in any way.


Summary
-------

The obvious, low-hanging fruit has to be dealt with first:

  - The CPUID evaluation and related setup mechanisms

  - The trace/ringbuffer oddity

  - The sched:active oddity for the first sibling on the second socket
  
  - Some other expensive things which I'm not seeing in my test setup due
    to lack of hardware or configuration.

Anything else is pretty much wishful thinking in my opinion.

  To be clear: I'm not standing in the way if there is a proper solution,
  but that requires respecting the basic engineering rules:

    1) Correctness first
    2) Keep it maintainable
    3) Keep it simple

  So far this stuff failed already at #1.

I completely understand why this is important for cloud people, but the
real question to ask here is what the actual requirements are.

  As far as I understand the main goal is to make a (kexec) reboot
  almost invisible to VM tenants.

  Now let's look at how this works:

     A) Freeze VMs and persist state
     B) kexec into the new kernel
     C) Restore VMs from persistent memory
     D) Thaw VMs

  So the key problem is how long it takes to get from #B to #C and finally
  to #D.

  As far as I understand #C takes a serious amount of time and cannot be
  parallelized for whatever reasons.

  At the same time the number of online CPUs required to restore the VMs
  state is less than the number of online CPUs required to actually
  operate them in #D.

  That means it would be good enough to return to userspace with a
  limited number of online CPUs as fast as possible. A certain number of
  CPUs are going to be busy restoring the VMs' state, i.e. one CPU
  per VM. Some remaining non-busy CPU can bring up the rest of the system
  and the APs in order to be functional for #D, i.e. the resumption of VM
  operation.

  Trying to optimize this purely in kernel space by adding complexity of
  dubious value is simply bogus in my opinion.

  It's already possible today to limit the number of CPUs which are
  initially onlined and online the rest later from user space.
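
  For illustration, that could look like the sketch below; the sysfs paths
  are the standard CPU hotplug interface, and the maxcpus value is just an
  example:

```shell
# Boot with e.g. maxcpus=8 on the kernel command line, then later,
# once the latency-critical restore work is done, online the remaining
# present CPUs one by one from user space:
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    [ -e "$cpu/online" ] || continue      # cpu0 may lack the online knob
    if [ "$(cat "$cpu/online")" = "0" ]; then
        echo 1 > "$cpu/online"
    fi
done
```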

  There are two issues there:

    a) The death by MCE broadcast problem

       Quite a few (contemporary) x86 CPU generations are affected by
       this:

         - An MCE can be broadcast to all CPUs instead of being delivered
           only to the CPU which triggered it.

         - Any CPU which has CR4.MCE == 0, even if it sits in a
           wait-for-INIT/SIPI state, will cause an immediate shutdown of
           the machine if a broadcast MCE is delivered.

    b) Do the parallel bringup via sysfs control knob

       The per-CPU target state interface allows doing that today one CPU
       at a time, but it's awkward and has quite some overhead.

       A knob to online all of the remaining present CPUs with the
       benefit of the parallel bringup mechanism is missing.

    #a) That's a risk for the operator to take.

        Even the regular serialized bringup does not protect against this
     	issue up to the point where all present CPUs have at least
     	initialized CR4.

	Limiting the number of APs to online early via the kernel command
	line widens that window and increases the risk further by
	executing user space before all APs have CR4 initialized.

	But the same applies to a deferred online mechanism implemented in
	the kernel where some worker brings up the not yet online APs while
	the early online CPUs are already executing user space code.

    #b) Is a no-brainer to implement on top of this.


Conclusion
----------

Adding the basic parallel bringup mechanism as provided by this series
makes a lot of sense. Improving particular issues as pointed out in the
analysis makes sense too.

But trying to solve an application specific problem fully in the kernel
with tons of complexity, without exploring straightforward and simple
approaches first, does not make any sense at all.

Thanks,

	tglx

---
 Documentation/admin-guide/kernel-parameters.txt |   20 
 Documentation/core-api/cpu_hotplug.rst          |   13 
 arch/Kconfig                                    |   23 +
 arch/arm/Kconfig                                |    1 
 arch/arm/include/asm/smp.h                      |    2 
 arch/arm/kernel/smp.c                           |   18 
 arch/arm64/Kconfig                              |    1 
 arch/arm64/include/asm/smp.h                    |    2 
 arch/arm64/kernel/smp.c                         |   14 
 arch/csky/Kconfig                               |    1 
 arch/csky/include/asm/smp.h                     |    2 
 arch/csky/kernel/smp.c                          |    8 
 arch/mips/Kconfig                               |    1 
 arch/mips/cavium-octeon/smp.c                   |    1 
 arch/mips/include/asm/smp-ops.h                 |    1 
 arch/mips/kernel/smp-bmips.c                    |    1 
 arch/mips/kernel/smp-cps.c                      |   14 
 arch/mips/kernel/smp.c                          |    8 
 arch/mips/loongson64/smp.c                      |    1 
 arch/parisc/Kconfig                             |    1 
 arch/parisc/kernel/process.c                    |    4 
 arch/parisc/kernel/smp.c                        |    7 
 arch/riscv/Kconfig                              |    1 
 arch/riscv/include/asm/smp.h                    |    2 
 arch/riscv/kernel/cpu-hotplug.c                 |   14 
 arch/x86/Kconfig                                |   45 --
 arch/x86/include/asm/apic.h                     |    5 
 arch/x86/include/asm/cpu.h                      |    5 
 arch/x86/include/asm/cpumask.h                  |    5 
 arch/x86/include/asm/processor.h                |    1 
 arch/x86/include/asm/realmode.h                 |    3 
 arch/x86/include/asm/sev-common.h               |    3 
 arch/x86/include/asm/smp.h                      |   26 -
 arch/x86/include/asm/topology.h                 |   23 -
 arch/x86/include/asm/tsc.h                      |    2 
 arch/x86/kernel/acpi/sleep.c                    |    9 
 arch/x86/kernel/apic/apic.c                     |   22 -
 arch/x86/kernel/callthunks.c                    |    4 
 arch/x86/kernel/cpu/amd.c                       |    2 
 arch/x86/kernel/cpu/cacheinfo.c                 |   21 
 arch/x86/kernel/cpu/common.c                    |   50 --
 arch/x86/kernel/cpu/topology.c                  |    3 
 arch/x86/kernel/head_32.S                       |   14 
 arch/x86/kernel/head_64.S                       |  121 +++++
 arch/x86/kernel/sev.c                           |    2 
 arch/x86/kernel/smp.c                           |    3 
 arch/x86/kernel/smpboot.c                       |  508 ++++++++----------------
 arch/x86/kernel/topology.c                      |   98 ----
 arch/x86/kernel/tsc.c                           |   20 
 arch/x86/kernel/tsc_sync.c                      |   36 -
 arch/x86/power/cpu.c                            |   37 -
 arch/x86/realmode/init.c                        |    3 
 arch/x86/realmode/rm/trampoline_64.S            |   27 +
 arch/x86/xen/enlighten_hvm.c                    |   11 
 arch/x86/xen/smp_hvm.c                          |   16 
 arch/x86/xen/smp_pv.c                           |   56 +-
 drivers/acpi/processor_idle.c                   |    4 
 include/linux/cpu.h                             |    4 
 include/linux/cpuhotplug.h                      |   17 
 kernel/cpu.c                                    |  397 +++++++++++++++++-
 kernel/smp.c                                    |    2 
 kernel/smpboot.c                                |  163 -------
 62 files changed, 953 insertions(+), 976 deletions(-)




From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:29 2023
Message-ID: <20230414232309.634953473@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 07/37] x86/smpboot: Restrict soft_restart_cpu() to SEV
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:24 +0200 (CEST)

Now that the CPU0 hotplug cruft is gone, the only user is AMD SEV.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/kernel/callthunks.c |    2 +-
 arch/x86/kernel/head_32.S    |   14 --------------
 arch/x86/kernel/head_64.S    |    2 +-
 3 files changed, 2 insertions(+), 16 deletions(-)

--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -133,7 +133,7 @@ static bool skip_addr(void *dest)
 	/* Accounts directly */
 	if (dest == ret_from_fork)
 		return true;
-#ifdef CONFIG_HOTPLUG_CPU
+#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_AMD_MEM_ENCRYPT)
 	if (dest == soft_restart_cpu)
 		return true;
 #endif
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -138,20 +138,6 @@ SYM_CODE_START(startup_32)
 	jmp .Ldefault_entry
 SYM_CODE_END(startup_32)
 
-#ifdef CONFIG_HOTPLUG_CPU
-/*
- * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
- * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
- * unplug. Everything is set up already except the stack.
- */
-SYM_FUNC_START(soft_restart_cpu)
-	movl initial_stack, %ecx
-	movl %ecx, %esp
-	call *(initial_code)
-1:	jmp 1b
-SYM_FUNC_END(soft_restart_cpu)
-#endif
-
 /*
  * Non-boot CPU entry point; entered from trampoline.S
  * We can't lgdt here, because lgdt itself uses a data segment, but
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -375,7 +375,7 @@ SYM_CODE_END(secondary_startup_64)
 #include "verify_cpu.S"
 #include "sev_verify_cbit.S"
 
-#ifdef CONFIG_HOTPLUG_CPU
+#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_AMD_MEM_ENCRYPT)
 /*
  * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
  * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:30 2023
Message-ID: <20230414232309.697634212@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 08/37] x86/smpboot: Split up native_cpu_up() into separate
 phases and document them
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:26 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

There are four logical parts to what native_cpu_up() does on the BSP (or
on the controlling CPU for a later hotplug):

 1) Wake the AP by sending the INIT/SIPI/SIPI sequence.

 2) Wait for the AP to make it as far as wait_for_master_cpu() which
    sets that CPU's bit in cpu_initialized_mask, then sets the bit in
    cpu_callout_mask to let the AP proceed through cpu_init().

 3) Wait for the AP to finish cpu_init() and get as far as the
    smp_callin() call, which sets that CPU's bit in cpu_callin_mask.

 4) Perform the TSC synchronization and wait for the AP to actually
    mark itself online in cpu_online_mask.

In preparation to allow these phases to operate in parallel on multiple
APs, split them out into separate functions and document the interactions
a little more clearly in both the BP and AP code paths.

No functional change intended.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Usama Arif <usama.arif@bytedance.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/smpboot.c |  187 +++++++++++++++++++++++++++++-----------------
 1 file changed, 121 insertions(+), 66 deletions(-)

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -193,6 +193,10 @@ static void smp_callin(void)
 
 	wmb();
 
+	/*
+	 * This runs the AP through all the cpuhp states to its target
+	 * state (CPUHP_ONLINE in the case of serial bringup).
+	 */
 	notify_cpu_starting(cpuid);
 
 	/*
@@ -233,14 +237,31 @@ static void notrace start_secondary(void
 	load_cr3(swapper_pg_dir);
 	__flush_tlb_all();
 #endif
+	/*
+	 * Sync point with wait_cpu_initialized(). Before proceeding through
+	 * cpu_init(), the AP will call wait_for_master_cpu() which sets its
+	 * own bit in cpu_initialized_mask and then waits for the BSP to set
+	 * its bit in cpu_callout_mask to release it.
+	 */
 	cpu_init_secondary();
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
+
+	/*
+	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
+	 * but just sets the bit to let the controlling CPU (BSP) know that
+	 * it's got this far.
+	 */
 	smp_callin();
 
-	/* otherwise gcc will move up smp_processor_id before the cpu_init */
+	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
 	barrier();
-	/* Check TSC synchronization with the control CPU: */
+
+	/*
+	 * Check TSC synchronization with the control CPU, which will do
+	 * its part of this from wait_cpu_online(), making it an implicit
+	 * synchronization point.
+	 */
 	check_tsc_sync_target();
 
 	/*
@@ -259,6 +280,7 @@ static void notrace start_secondary(void
 	 * half valid vector space.
 	 */
 	lock_vector_lock();
+	/* Sync point with do_wait_cpu_online() */
 	set_cpu_online(smp_processor_id(), true);
 	lapic_online();
 	unlock_vector_lock();
@@ -981,17 +1003,13 @@ int common_cpu_up(unsigned int cpu, stru
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
- * Returns zero if CPU booted OK, else error code from
+ * Returns zero if startup was successfully sent, else error code from
  * ->wakeup_secondary_cpu.
  */
 static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
-	/* start_ip had better be page-aligned! */
 	unsigned long start_ip = real_mode_header->trampoline_start;
 
-	unsigned long boot_error = 0;
-	unsigned long timeout;
-
 #ifdef CONFIG_X86_64
 	/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
 	if (apic->wakeup_secondary_cpu_64)
@@ -1048,60 +1066,89 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
-		boot_error = apic->wakeup_secondary_cpu_64(apicid, start_ip);
+		return apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
-		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
-	else
-		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
+		return apic->wakeup_secondary_cpu(apicid, start_ip);
 
-	if (!boot_error) {
-		/*
-		 * Wait 10s total for first sign of life from AP
-		 */
-		boot_error = -1;
-		timeout = jiffies + 10*HZ;
-		while (time_before(jiffies, timeout)) {
-			if (cpumask_test_cpu(cpu, cpu_initialized_mask)) {
-				/*
-				 * Tell AP to proceed with initialization
-				 */
-				cpumask_set_cpu(cpu, cpu_callout_mask);
-				boot_error = 0;
-				break;
-			}
-			schedule();
-		}
-	}
+	return wakeup_secondary_cpu_via_init(apicid, start_ip);
+}
 
-	if (!boot_error) {
-		/*
-		 * Wait till AP completes initial initialization
-		 */
-		while (!cpumask_test_cpu(cpu, cpu_callin_mask)) {
-			/*
-			 * Allow other tasks to run while we wait for the
-			 * AP to come online. This also gives a chance
-			 * for the MTRR work(triggered by the AP coming online)
-			 * to be completed in the stop machine context.
-			 */
-			schedule();
-		}
-	}
+static int wait_cpu_cpumask(unsigned int cpu, const struct cpumask *mask)
+{
+	unsigned long timeout;
 
-	if (x86_platform.legacy.warm_reset) {
-		/*
-		 * Cleanup possible dangling ends...
-		 */
-		smpboot_restore_warm_reset_vector();
+	/*
+	 * Wait up to 10s for the CPU to report in.
+	 */
+	timeout = jiffies + 10*HZ;
+	while (time_before(jiffies, timeout)) {
+		if (cpumask_test_cpu(cpu, mask))
+			return 0;
+
+		schedule();
 	}
+	return -1;
+}
 
-	return boot_error;
+/*
+ * Bringup step two: Wait for the target AP to reach cpu_init_secondary()
+ * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
+ * to proceed.  The AP will then proceed past setting its 'callin' bit
+ * and end up waiting in check_tsc_sync_target() until we reach
+ * do_wait_cpu_online() to tend to it.
+ */
+static int wait_cpu_initialized(unsigned int cpu)
+{
+	/*
+	 * Wait for first sign of life from AP.
+	 */
+	if (wait_cpu_cpumask(cpu, cpu_initialized_mask))
+		return -1;
+
+	cpumask_set_cpu(cpu, cpu_callout_mask);
+	return 0;
 }
 
-int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+/*
+ * Bringup step three: Wait for the target AP to reach smp_callin().
+ * The AP is not waiting for us here so we don't need to parallelise
+ * this step. Not entirely clear why we care about this, since we just
+ * proceed directly to TSC synchronization which is the next sync
+ * point with the AP anyway.
+ */
+static void wait_cpu_callin(unsigned int cpu)
+{
+	while (!cpumask_test_cpu(cpu, cpu_callin_mask))
+		schedule();
+}
+
+/*
+ * Bringup step four: Synchronize the TSC and wait for the target AP
+ * to reach set_cpu_online() in start_secondary().
+ */
+static void wait_cpu_online(unsigned int cpu)
 {
-	int apicid = apic->cpu_present_to_apicid(cpu);
 	unsigned long flags;
+
+	/*
+	 * Check TSC synchronization with the AP (keep irqs disabled
+	 * while doing so):
+	 */
+	local_irq_save(flags);
+	check_tsc_sync_source(cpu);
+	local_irq_restore(flags);
+
+	/*
+	 * Wait for the AP to mark itself online, so the core caller
+	 * can drop sparse_irq_lock.
+	 */
+	while (!cpu_online(cpu))
+		schedule();
+}
+
+static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
+{
+	int apicid = apic->cpu_present_to_apicid(cpu);
 	int err;
 
 	lockdep_assert_irqs_enabled();
@@ -1142,25 +1189,33 @@ int native_cpu_up(unsigned int cpu, stru
 		return err;
 
 	err = do_boot_cpu(apicid, cpu, tidle);
-	if (err) {
+	if (err)
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
-		return err;
-	}
 
-	/*
-	 * Check TSC synchronization with the AP (keep irqs disabled
-	 * while doing so):
-	 */
-	local_irq_save(flags);
-	check_tsc_sync_source(cpu);
-	local_irq_restore(flags);
+	return err;
+}
 
-	while (!cpu_online(cpu)) {
-		cpu_relax();
-		touch_nmi_watchdog();
-	}
+int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	int ret;
 
-	return 0;
+	ret = native_kick_ap(cpu, tidle);
+	if (ret)
+		goto out;
+
+	ret = wait_cpu_initialized(cpu);
+	if (ret)
+		goto out;
+
+	wait_cpu_callin(cpu);
+	wait_cpu_online(cpu);
+
+out:
+	/* Cleanup possible dangling ends... */
+	if (x86_platform.legacy.warm_reset)
+		smpboot_restore_warm_reset_vector();
+
+	return ret;
 }
 
 /**



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521270.809815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5j-0003W4-Sl; Fri, 14 Apr 2023 23:44:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521270.809815; Fri, 14 Apr 2023 23:44:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5j-0003VW-LV; Fri, 14 Apr 2023 23:44:31 +0000
Received: by outflank-mailman (input) for mailman id 521270;
 Fri, 14 Apr 2023 23:44:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5i-0000zb-Kd
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:30 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4c06e6cc-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c06e6cc-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232309.760351424@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515868;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=38vdbwHqBLIqPwRw5MqT+R7Ll28v39Ia/30h9IRZRBU=;
	b=yar8EeigYdFMdQYEGUNE3USYwkYHZzKxUot+Z73dH900LQv7+iVPCq5P8CJivgiAUZW2OK
	5b+Jgl5VWXA9zRjgQQpg9e2anhCtGANLrAGNoR9f6XmdTQzBRqsWpTTXFoZzvs8Un4b+Cd
	xQVtcABhcUTk8fKAr801BiQCOwMb7/BograiZPjfOPVUBCanspJXyCAKU5TMrWkY70q3Vf
	aRByfCqKTLqDcO3ENb28nyzgihvUsSGw62p4CXRy+qfa3By2TlFWN34HGdXMCTwJbS74gF
	n5JJnrI5zbLFVOG1tHdxNahBhTnqNGAQsLPfDS+q8FOpyMYAyy0JyNMiAMEw8w==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515868;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=38vdbwHqBLIqPwRw5MqT+R7Ll28v39Ia/30h9IRZRBU=;
	b=MftkEADdginftZmZ+/cCkpBXX8KUjRl1mx3aoGgqqGTOVFHczgdVkY20eVN/Z+ePOHG5TU
	YFmw4iY8ytDodjDg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 09/37] x86/smpboot: Get rid of cpu_init_secondary()
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:28 +0200 (CEST)

The synchronization of the AP with the control CPU is an SMP boot problem
and has nothing to do with cpu_init().

Open code cpu_init_secondary() in start_secondary() and move
wait_for_master_cpu() into the SMP boot code.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/processor.h |    1 -
 arch/x86/kernel/cpu/common.c     |   27 ---------------------------
 arch/x86/kernel/smpboot.c        |   24 +++++++++++++++++++-----
 3 files changed, 19 insertions(+), 33 deletions(-)

--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -551,7 +551,6 @@ extern void switch_gdt_and_percpu_base(i
 extern void load_direct_gdt(int);
 extern void load_fixmap_gdt(int);
 extern void cpu_init(void);
-extern void cpu_init_secondary(void);
 extern void cpu_init_exception_handling(void);
 extern void cr4_init(void);
 
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2122,19 +2122,6 @@ static void dbg_restore_debug_regs(void)
 #define dbg_restore_debug_regs()
 #endif /* ! CONFIG_KGDB */
 
-static void wait_for_master_cpu(int cpu)
-{
-#ifdef CONFIG_SMP
-	/*
-	 * wait for ACK from master CPU before continuing
-	 * with AP initialization
-	 */
-	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
-	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
-		cpu_relax();
-#endif
-}
-
 static inline void setup_getcpu(int cpu)
 {
 	unsigned long cpudata = vdso_encode_cpunode(cpu, early_cpu_to_node(cpu));
@@ -2238,8 +2225,6 @@ void cpu_init(void)
 	struct task_struct *cur = current;
 	int cpu = raw_smp_processor_id();
 
-	wait_for_master_cpu(cpu);
-
 	ucode_cpu_init(cpu);
 
 #ifdef CONFIG_NUMA
@@ -2292,18 +2277,6 @@ void cpu_init(void)
 	load_fixmap_gdt(cpu);
 }
 
-#ifdef CONFIG_SMP
-void cpu_init_secondary(void)
-{
-	/*
-	 * Relies on the BP having set-up the IDT tables, which are loaded
-	 * on this CPU in cpu_init_exception_handling().
-	 */
-	cpu_init_exception_handling();
-	cpu_init();
-}
-#endif
-
 #ifdef CONFIG_MICROCODE_LATE_LOADING
 /**
  * store_cpu_caps() - Store a snapshot of CPU capabilities
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -220,6 +220,17 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
+static void wait_for_master_cpu(int cpu)
+{
+	/*
+	 * Wait for release by control CPU before continuing with AP
+	 * initialization.
+	 */
+	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
+	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
+		cpu_relax();
+}
+
 /*
  * Activate a secondary processor.
  */
@@ -237,13 +248,16 @@ static void notrace start_secondary(void
 	load_cr3(swapper_pg_dir);
 	__flush_tlb_all();
 #endif
+	cpu_init_exception_handling();
+
 	/*
-	 * Sync point with wait_cpu_initialized(). Before proceeding through
-	 * cpu_init(), the AP will call wait_for_master_cpu() which sets its
-	 * own bit in cpu_initialized_mask and then waits for the BSP to set
-	 * its bit in cpu_callout_mask to release it.
+	 * Sync point with wait_cpu_initialized(). Sets AP in
+	 * cpu_initialized_mask and then waits for the control CPU
+	 * to release it.
 	 */
-	cpu_init_secondary();
+	wait_for_master_cpu(raw_smp_processor_id());
+
+	cpu_init();
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
 



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521271.809824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5l-0003pD-H5; Fri, 14 Apr 2023 23:44:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521271.809824; Fri, 14 Apr 2023 23:44:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5l-0003od-81; Fri, 14 Apr 2023 23:44:33 +0000
Received: by outflank-mailman (input) for mailman id 521271;
 Fri, 14 Apr 2023 23:44:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5k-0000zb-40
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:32 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4cea9c1a-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4cea9c1a-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232309.823800249@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515870;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=u2pIwytrv1vZgLx+4s6Zqq6fs8EHbK9BS70kDc1o//Q=;
	b=tj9xEvgya1srEI62cuunab7v24wBazw6wpcxMtpv79720yev1kGcsEOxNFRId8qJ8lqNgr
	q1f11vM+Nc4aB4NgnBxR6tIQfQU2iMcMUeRzBxRZmT8RWJKM3p19eh3Ukgff4IpPXHAAmQ
	OLzNAKEJS2vpJEbsoOLlJHaRdWtjNT4f1mD1+bDAcp/oan26jOAxfnbm6Tpuvt1St80R8P
	dPrGNEb4Zg6w49Q/eELNuFnnuSnEIQhX8h7OuWa4y9Uk6de034h6Q46GBDo6gmOK33DmEP
	YQXfDKPvV28Xy5XOo7nZYu9OBqWv20e7LFSwhwcY173bD1coDcsMG/2Q9JSTZg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515870;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=u2pIwytrv1vZgLx+4s6Zqq6fs8EHbK9BS70kDc1o//Q=;
	b=YNu0rancrP7gWuufcUKxrEH7bo8bWOmpZPxm7N3EMgjD1anaBUmHq23Qv/x7FzdBfUPs2b
	nJi2pgQCB1pCr2DA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 10/37] x86/cpu/cacheinfo: Remove cpu_callout_mask dependency
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:29 +0200 (CEST)

cpu_callout_mask is used for the stop_machine() based MTRR/PAT init.

In preparation for moving the BP/AP synchronization to the core hotplug
code, use a private CPU mask for cacheinfo and manage it in the
starting/dying hotplug state.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/cacheinfo.c |   21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -39,6 +39,8 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t
 /* Shared L2 cache maps */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
 
+static cpumask_var_t cpu_cacheinfo_mask;
+
 /* Kernel controls MTRR and/or PAT MSRs. */
 unsigned int memory_caching_control __ro_after_init;
 
@@ -1172,8 +1174,10 @@ void cache_bp_restore(void)
 		cache_cpu_init();
 }
 
-static int cache_ap_init(unsigned int cpu)
+static int cache_ap_online(unsigned int cpu)
 {
+	cpumask_set_cpu(cpu, cpu_cacheinfo_mask);
+
 	if (!memory_caching_control || get_cache_aps_delayed_init())
 		return 0;
 
@@ -1191,11 +1195,17 @@ static int cache_ap_init(unsigned int cp
 	 *      lock to prevent MTRR entry changes
 	 */
 	stop_machine_from_inactive_cpu(cache_rendezvous_handler, NULL,
-				       cpu_callout_mask);
+				       cpu_cacheinfo_mask);
 
 	return 0;
 }
 
+static int cache_ap_offline(unsigned int cpu)
+{
+	cpumask_clear_cpu(cpu, cpu_cacheinfo_mask);
+	return 0;
+}
+
 /*
  * Delayed cache initialization for all AP's
  */
@@ -1210,9 +1220,12 @@ void cache_aps_init(void)
 
 static int __init cache_ap_register(void)
 {
+	zalloc_cpumask_var(&cpu_cacheinfo_mask, GFP_KERNEL);
+	cpumask_set_cpu(smp_processor_id(), cpu_cacheinfo_mask);
+
 	cpuhp_setup_state_nocalls(CPUHP_AP_CACHECTRL_STARTING,
 				  "x86/cachectrl:starting",
-				  cache_ap_init, NULL);
+				  cache_ap_online, cache_ap_offline);
 	return 0;
 }
-core_initcall(cache_ap_register);
+early_initcall(cache_ap_register);



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521272.809834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5n-0004CB-4H; Fri, 14 Apr 2023 23:44:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521272.809834; Fri, 14 Apr 2023 23:44:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5m-0004AO-T8; Fri, 14 Apr 2023 23:44:34 +0000
Received: by outflank-mailman (input) for mailman id 521272;
 Fri, 14 Apr 2023 23:44:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5k-0001Th-Ss
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:32 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4dfd662d-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:44:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4dfd662d-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232309.886802016@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515871;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=/ZK0+aYTDprGh+TLZnlXwtj4Vfn8D4d3BT+xMJ2O4BY=;
	b=vjkoNmrHuvgLnadmWDBQsaj4074jsURhSMcG/y+SAv5U7Hjl873Sfi0Y38vZ0CiBW30lkc
	YSRug6WOf+XkPtW/Ok7DfLfUey146jL84NdowhhogdHAP/UKxbH2eo4JQjN009SbEVnvNW
	1zv9NUb4vEW1lio3uk0/iDb4xih4TRxA+1g65l9y4CxPj9/vIKfhtqspZBp17LHEOMwTUs
	uqFNHzjdehaY7LPLRMdLZxqYkTGv4zrW/lRcOci3KT4V49tXaioV61gsRoQXocuqUqCvtS
	KqU2xIzYJQYvtQyq2IG1EFa3XvWD9GUEZsifVyZD2YZ/wz91ynh5mZJDMN6fTQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515871;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=/ZK0+aYTDprGh+TLZnlXwtj4Vfn8D4d3BT+xMJ2O4BY=;
	b=5l86SHxSNU5XjxfrY2KYmiaNiv7XnFBAOMn0HfeVzwcXghypKkHnPtf+Boxw8mjzqZQmUJ
	Sn14SMUAF/D/3pAQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject:
 [patch 11/37] x86/smpboot: Move synchronization masks to SMP boot code
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:31 +0200 (CEST)

The usage is in smpboot.c and not in the CPU initialization code.

The XEN_PV usage of cpu_callout_mask is obsolete as cpu_init() no longer
waits and cacheinfo has its own CPU mask now, so cpu_callout_mask can be
made static too.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/cpumask.h |    5 -----
 arch/x86/kernel/cpu/common.c   |   17 -----------------
 arch/x86/kernel/smpboot.c      |   16 ++++++++++++++++
 arch/x86/xen/smp_pv.c          |    3 ---
 4 files changed, 16 insertions(+), 25 deletions(-)

--- a/arch/x86/include/asm/cpumask.h
+++ b/arch/x86/include/asm/cpumask.h
@@ -4,11 +4,6 @@
 #ifndef __ASSEMBLY__
 #include <linux/cpumask.h>
 
-extern cpumask_var_t cpu_callin_mask;
-extern cpumask_var_t cpu_callout_mask;
-extern cpumask_var_t cpu_initialized_mask;
-extern cpumask_var_t cpu_sibling_setup_mask;
-
 extern void setup_cpu_local_masks(void);
 
 /*
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -67,14 +67,6 @@
 
 u32 elf_hwcap2 __read_mostly;
 
-/* all of these masks are initialized in setup_cpu_local_masks() */
-cpumask_var_t cpu_initialized_mask;
-cpumask_var_t cpu_callout_mask;
-cpumask_var_t cpu_callin_mask;
-
-/* representing cpus for which sibling maps can be computed */
-cpumask_var_t cpu_sibling_setup_mask;
-
 /* Number of siblings per CPU package */
 int smp_num_siblings = 1;
 EXPORT_SYMBOL(smp_num_siblings);
@@ -168,15 +160,6 @@ static void ppin_init(struct cpuinfo_x86
 	clear_cpu_cap(c, info->feature);
 }
 
-/* correctly size the local cpu masks */
-void __init setup_cpu_local_masks(void)
-{
-	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callin_mask);
-	alloc_bootmem_cpumask_var(&cpu_callout_mask);
-	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
-}
-
 static void default_init(struct cpuinfo_x86 *c)
 {
 #ifdef CONFIG_X86_64
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -101,6 +101,13 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
+/* All of these masks are initialized in setup_cpu_local_masks() */
+static cpumask_var_t cpu_initialized_mask;
+static cpumask_var_t cpu_callout_mask;
+static cpumask_var_t cpu_callin_mask;
+/* Representing CPUs for which sibling maps can be computed */
+static cpumask_var_t cpu_sibling_setup_mask;
+
 /* Logical package management. We might want to allocate that dynamically */
 unsigned int __max_logical_packages __read_mostly;
 EXPORT_SYMBOL(__max_logical_packages);
@@ -1548,6 +1555,15 @@ early_param("possible_cpus", _setup_poss
 		set_cpu_possible(i, true);
 }
 
+/* correctly size the local cpu masks */
+void __init setup_cpu_local_masks(void)
+{
+	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
+	alloc_bootmem_cpumask_var(&cpu_callin_mask);
+	alloc_bootmem_cpumask_var(&cpu_callout_mask);
+	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
+}
+
 #ifdef CONFIG_HOTPLUG_CPU
 
 /* Recompute SMT state for all CPUs on offline */
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -254,15 +254,12 @@ cpu_initialize_context(unsigned int cpu,
 	struct desc_struct *gdt;
 	unsigned long gdt_mfn;
 
-	/* used to tell cpu_init() that it can proceed with initialization */
-	cpumask_set_cpu(cpu, cpu_callout_mask);
 	if (cpumask_test_and_set_cpu(cpu, xen_cpu_initialized_map))
 		return 0;
 
 	ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
 	if (ctxt == NULL) {
 		cpumask_clear_cpu(cpu, xen_cpu_initialized_map);
-		cpumask_clear_cpu(cpu, cpu_callout_mask);
 		return -ENOMEM;
 	}
 



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521273.809840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5n-0004KA-R7; Fri, 14 Apr 2023 23:44:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521273.809840; Fri, 14 Apr 2023 23:44:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5n-0004HW-GB; Fri, 14 Apr 2023 23:44:35 +0000
Received: by outflank-mailman (input) for mailman id 521273;
 Fri, 14 Apr 2023 23:44:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5m-0001Th-CM
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:34 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4edf5eb0-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:44:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4edf5eb0-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232309.948211096@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515873;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=vYiQOFtP2lP/XNgK/P8qTVjY7wF3a2V1zLIb771O4ZM=;
	b=ozP/whFRJCNtzWtZf47EwOD3ctF5Yl7nBhhPFmfGOp3vNuB6ZYseZErnaJuxpU20wGHlVt
	aTZzyJkMmm5NdQRvyG+n9diEY1Th1N+/hNKcN5FC+5wvr+HXD2qTAouaqdTP1xuZf9HF0k
	M58CetJX/We25l+wDbpp+iFsAladFY6A+8V0DeXEQPBkTYzbCTvagMj54yMvSeCmqTBY8B
	NuJdxqnKhTdSvfyUvbfD7TzIVkuVpe2zLY9+oBadLQdJiEqE/pGfpKmP3r5+H23H6uRJJs
	iN3tSKES7XyjbX1WXywuTqJ797O7g2NsRouRI92YsdfZkjPlG9hwc3S8b4FTCw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515873;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=vYiQOFtP2lP/XNgK/P8qTVjY7wF3a2V1zLIb771O4ZM=;
	b=OXd3Ubv1MGKAClArDXPNKshLMJqlxJq7sB82Sa9cgrlhWYCSXIxBkFLh2ALnT54B63KCKg
	PsqQBV4ALo0t5yBQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject:
 [patch 12/37] x86/smpboot: Make TSC synchronization function call based
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:32 +0200 (CEST)

Spin-waiting on the control CPU until the AP reaches the TSC
synchronization code is a waste, especially when no synchronization is
required at all.

As the synchronization has to run with interrupts disabled, the control CPU
part can be done from an SMP function call. The upcoming AP issues that
call asynchronously, and only when synchronization is required.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/tsc.h |    2 --
 arch/x86/kernel/smpboot.c  |   20 +++-----------------
 arch/x86/kernel/tsc_sync.c |   36 +++++++++++-------------------------
 3 files changed, 14 insertions(+), 44 deletions(-)

--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -55,12 +55,10 @@ extern bool tsc_async_resets;
 #ifdef CONFIG_X86_TSC
 extern bool tsc_store_and_check_tsc_adjust(bool bootcpu);
 extern void tsc_verify_tsc_adjust(bool resume);
-extern void check_tsc_sync_source(int cpu);
 extern void check_tsc_sync_target(void);
 #else
 static inline bool tsc_store_and_check_tsc_adjust(bool bootcpu) { return false; }
 static inline void tsc_verify_tsc_adjust(bool resume) { }
-static inline void check_tsc_sync_source(int cpu) { }
 static inline void check_tsc_sync_target(void) { }
 #endif
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -278,11 +278,7 @@ static void notrace start_secondary(void
 	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
 	barrier();
 
-	/*
-	 * Check TSC synchronization with the control CPU, which will do
-	 * its part of this from wait_cpu_online(), making it an implicit
-	 * synchronization point.
-	 */
+	/* Check TSC synchronization with the control CPU. */
 	check_tsc_sync_target();
 
 	/*
@@ -1144,21 +1140,11 @@ static void wait_cpu_callin(unsigned int
 }
 
 /*
- * Bringup step four: Synchronize the TSC and wait for the target AP
- * to reach set_cpu_online() in start_secondary().
+ * Bringup step four: Wait for the target AP to reach set_cpu_online() in
+ * start_secondary().
  */
 static void wait_cpu_online(unsigned int cpu)
 {
-	unsigned long flags;
-
-	/*
-	 * Check TSC synchronization with the AP (keep irqs disabled
-	 * while doing so):
-	 */
-	local_irq_save(flags);
-	check_tsc_sync_source(cpu);
-	local_irq_restore(flags);
-
 	/*
 	 * Wait for the AP to mark itself online, so the core caller
 	 * can drop sparse_irq_lock.
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -245,7 +245,6 @@ bool tsc_store_and_check_tsc_adjust(bool
  */
 static atomic_t start_count;
 static atomic_t stop_count;
-static atomic_t skip_test;
 static atomic_t test_runs;
 
 /*
@@ -344,21 +343,14 @@ static inline unsigned int loop_timeout(
 }
 
 /*
- * Source CPU calls into this - it waits for the freshly booted
- * target CPU to arrive and then starts the measurement:
+ * The freshly booted CPU initiates this via an async SMP function call.
  */
-void check_tsc_sync_source(int cpu)
+static void check_tsc_sync_source(void *__cpu)
 {
+	unsigned int cpu = (unsigned long)__cpu;
 	int cpus = 2;
 
 	/*
-	 * No need to check if we already know that the TSC is not
-	 * synchronized or if we have no TSC.
-	 */
-	if (unsynchronized_tsc())
-		return;
-
-	/*
 	 * Set the maximum number of test runs to
 	 *  1 if the CPU does not provide the TSC_ADJUST MSR
 	 *  3 if the MSR is available, so the target can try to adjust
@@ -368,16 +360,9 @@ void check_tsc_sync_source(int cpu)
 	else
 		atomic_set(&test_runs, 3);
 retry:
-	/*
-	 * Wait for the target to start or to skip the test:
-	 */
-	while (atomic_read(&start_count) != cpus - 1) {
-		if (atomic_read(&skip_test) > 0) {
-			atomic_set(&skip_test, 0);
-			return;
-		}
+	/* Wait for the target to start. */
+	while (atomic_read(&start_count) != cpus - 1)
 		cpu_relax();
-	}
 
 	/*
 	 * Trigger the target to continue into the measurement too:
@@ -397,14 +382,14 @@ void check_tsc_sync_source(int cpu)
 	if (!nr_warps) {
 		atomic_set(&test_runs, 0);
 
-		pr_debug("TSC synchronization [CPU#%d -> CPU#%d]: passed\n",
+		pr_debug("TSC synchronization [CPU#%d -> CPU#%u]: passed\n",
 			smp_processor_id(), cpu);
 
 	} else if (atomic_dec_and_test(&test_runs) || random_warps) {
 		/* Force it to 0 if random warps brought us here */
 		atomic_set(&test_runs, 0);
 
-		pr_warn("TSC synchronization [CPU#%d -> CPU#%d]:\n",
+		pr_warn("TSC synchronization [CPU#%d -> CPU#%u]:\n",
 			smp_processor_id(), cpu);
 		pr_warn("Measured %Ld cycles TSC warp between CPUs, "
 			"turning off TSC clock.\n", max_warp);
@@ -457,11 +442,12 @@ void check_tsc_sync_target(void)
 	 * SoCs the TSC is frequency synchronized, but still the TSC ADJUST
 	 * register might have been wreckaged by the BIOS..
 	 */
-	if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable) {
-		atomic_inc(&skip_test);
+	if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable)
 		return;
-	}
 
+	/* Kick the control CPU into the TSC synchronization function */
+	smp_call_function_single(cpumask_first(cpu_online_mask), check_tsc_sync_source,
+				 (unsigned long *)(unsigned long)cpu, 0);
 retry:
 	/*
 	 * Register this CPU's participation and wait for the



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521274.809852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5r-00058F-H2; Fri, 14 Apr 2023 23:44:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521274.809852; Fri, 14 Apr 2023 23:44:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5q-00051G-KI; Fri, 14 Apr 2023 23:44:38 +0000
Received: by outflank-mailman (input) for mailman id 521274;
 Fri, 14 Apr 2023 23:44:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5o-0001Th-1b
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:36 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4fdcf5be-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:44:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fdcf5be-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232310.010585365@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515875;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=o+IIZolWdFzFoKPnCYeSD1bdkqJKKR73ExqiLPCERDI=;
	b=YtXMog5VLahh+j7fmWogAvr6ObdCeMs2/pQmIXREKblleGoCtvJL4rCNiqdvWzbyjDcTRQ
	uiN2/aQVEAqhEyWPSPIkKx0AVkXmtnjjoHfjwT3/yh44khK5MSa6L7h2bJz7b22ZDlJ6sh
	00pAwnPCUJWCsn2Ejc57QMUUvq4xsRjx3LwBphTuReLzL6PmYvBpWBQh7nPpyk6UmRKfhO
	2Eee0WLf6JgST16c76h3RPniWFIj95CGMwloYJMAwCCsO+gfqOq1rnGnjyKNvGqHT1gYGv
	P1F3HkYh07cMCdG7gc3D1VPgXDTVfI3a8otMhGht4nwU8AdufkquSsBkf+sOZA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515875;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=o+IIZolWdFzFoKPnCYeSD1bdkqJKKR73ExqiLPCERDI=;
	b=/GA3IGAqyXeZ0ie5H89dtUrqgSL/9oSjEmZXWUtsSX1UpgcGxjRjqmWWhXVPw2ws/g8YlI
	DVurnNfTpyXvJvDg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 13/37] x86/smpboot: Remove cpu_callin_mask
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:34 +0200 (CEST)

Now that TSC synchronization is SMP function call based, there is no reason
to wait for the AP to set itself in cpu_callin_mask. The control CPU waits
for the AP to set itself in the online mask anyway.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/smpboot.c |   61 +++++++---------------------------------------
 1 file changed, 10 insertions(+), 51 deletions(-)

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -104,7 +104,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_info);
 /* All of these masks are initialized in setup_cpu_local_masks() */
 static cpumask_var_t cpu_initialized_mask;
 static cpumask_var_t cpu_callout_mask;
-static cpumask_var_t cpu_callin_mask;
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -167,21 +166,16 @@ static inline void smpboot_restore_warm_
  */
 static void smp_callin(void)
 {
-	int cpuid;
+	int cpuid = smp_processor_id();
 
 	/*
 	 * If waken up by an INIT in an 82489DX configuration
-	 * cpu_callout_mask guarantees we don't get here before
-	 * an INIT_deassert IPI reaches our local APIC, so it is
-	 * now safe to touch our local APIC.
-	 */
-	cpuid = smp_processor_id();
-
-	/*
-	 * the boot CPU has finished the init stage and is spinning
-	 * on callin_map until we finish. We are free to set up this
-	 * CPU, first the APIC. (this is probably redundant on most
-	 * boards)
+	 * cpu_callout_mask guarantees we don't get here before an
+	 * INIT_deassert IPI reaches our local APIC, so it is now safe to
+	 * touch our local APIC.
+	 *
+	 * Set up this CPU, first the APIC, which is probably redundant on
+	 * most boards.
 	 */
 	apic_ap_setup();
 
@@ -192,7 +186,7 @@ static void smp_callin(void)
 	 * The topology information must be up to date before
 	 * calibrate_delay() and notify_cpu_starting().
 	 */
-	set_cpu_sibling_map(raw_smp_processor_id());
+	set_cpu_sibling_map(cpuid);
 
 	ap_init_aperfmperf();
 
@@ -205,11 +199,6 @@ static void smp_callin(void)
 	 * state (CPUHP_ONLINE in the case of serial bringup).
 	 */
 	notify_cpu_starting(cpuid);
-
-	/*
-	 * Allow the master to continue.
-	 */
-	cpumask_set_cpu(cpuid, cpu_callin_mask);
 }
 
 static void ap_calibrate_delay(void)
@@ -268,11 +257,6 @@ static void notrace start_secondary(void
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
 
-	/*
-	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
-	 * but just sets the bit to let the controlling CPU (BSP) know that
-	 * it's got this far.
-	 */
 	smp_callin();
 
 	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
@@ -1112,7 +1096,7 @@ static int wait_cpu_cpumask(unsigned int
  * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
  * to proceed.  The AP will then proceed past setting its 'callin' bit
  * and end up waiting in check_tsc_sync_target() until we reach
- * do_wait_cpu_online() to tend to it.
+ * wait_cpu_online() to tend to it.
  */
 static int wait_cpu_initialized(unsigned int cpu)
 {
@@ -1127,20 +1111,7 @@ static int wait_cpu_initialized(unsigned
 }
 
 /*
- * Bringup step three: Wait for the target AP to reach smp_callin().
- * The AP is not waiting for us here so we don't need to parallelise
- * this step. Not entirely clear why we care about this, since we just
- * proceed directly to TSC synchronization which is the next sync
- * point with the AP anyway.
- */
-static void wait_cpu_callin(unsigned int cpu)
-{
-	while (!cpumask_test_cpu(cpu, cpu_callin_mask))
-		schedule();
-}
-
-/*
- * Bringup step four: Wait for the target AP to reach set_cpu_online() in
+ * Bringup step three: Wait for the target AP to reach set_cpu_online() in
  * start_secondary().
  */
 static void wait_cpu_online(unsigned int cpu)
@@ -1170,14 +1141,6 @@ static int native_kick_ap(unsigned int c
 	}
 
 	/*
-	 * Already booted CPU?
-	 */
-	if (cpumask_test_cpu(cpu, cpu_callin_mask)) {
-		pr_debug("do_boot_cpu %d Already started\n", cpu);
-		return -ENOSYS;
-	}
-
-	/*
 	 * Save current MTRR state in case it was changed since early boot
 	 * (e.g. by the ACPI SMI) to initialize new CPUs with MTRRs in sync:
 	 */
@@ -1214,7 +1177,6 @@ int native_cpu_up(unsigned int cpu, stru
 	if (ret)
 		goto out;
 
-	wait_cpu_callin(cpu);
 	wait_cpu_online(cpu);
 
 out:
@@ -1330,7 +1292,6 @@ void __init smp_prepare_cpus_common(void
 	 * Setup boot CPU information
 	 */
 	smp_store_boot_cpu_info(); /* Final full version of the data */
-	cpumask_copy(cpu_callin_mask, cpumask_of(0));
 	mb();
 
 	for_each_possible_cpu(i) {
@@ -1545,7 +1506,6 @@ early_param("possible_cpus", _setup_poss
 void __init setup_cpu_local_masks(void)
 {
 	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callin_mask);
 	alloc_bootmem_cpumask_var(&cpu_callout_mask);
 	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
 }
@@ -1609,7 +1569,6 @@ static void remove_cpu_from_maps(int cpu
 {
 	set_cpu_online(cpu, false);
 	cpumask_clear_cpu(cpu, cpu_callout_mask);
-	cpumask_clear_cpu(cpu, cpu_callin_mask);
 	/* was set by cpu_init() */
 	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	numa_remove_cpu(cpu);



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521276.809859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5t-0005Vp-5w; Fri, 14 Apr 2023 23:44:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521276.809859; Fri, 14 Apr 2023 23:44:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5s-0005Q2-CP; Fri, 14 Apr 2023 23:44:40 +0000
Received: by outflank-mailman (input) for mailman id 521276;
 Fri, 14 Apr 2023 23:44:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5p-0001Th-Fd
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:37 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 50c924e4-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:44:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50c924e4-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232310.073038650@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515876;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=GyywxqWp8oXO9/e7syapChEN4EQiJFDwV+0SUoqcKJk=;
	b=JUqSST6sUfcIWAgxhbacXjpEmmObS6IQmZ8ClrGYKz6doTL5ISmvBQwKyUHTzWuCMn0jQ2
	kxURDWFhZo9th4otAi+IQxUNu4L5D+QqiOZWbyNzW73qnBtuAg4kKtwBnKA13ZEyQuUA3X
	24NTR55n5XqMxhu41NV27bpjHdQ+JlbVY0oF+P/tIz60xJTXSar+rkM76hsA2KCV9DFE63
	6moUS8v6t62b1FyFg672g0WX5bUvfVeqnGzGFuSCL/S0/rzXi15l7IZ4fLWeFV2skFwkBv
	DN49AN17nBshRWNeN2Y+3WWvPRAiTBVV4LsFXW/N5eBDDmUXpiGL65/32uCbZA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515876;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=GyywxqWp8oXO9/e7syapChEN4EQiJFDwV+0SUoqcKJk=;
	b=snqZUI4TE1G36IaRc7SXm3rJmNJDzjCApFmrXGG7j+LkkvHa7rApiOe1UgrCEuxcVgy8d4
	n0QkZCVyncqid6Dw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 14/37] cpu/hotplug: Rework sparse_irq locking in bringup_cpu()
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:36 +0200 (CEST)

There is no harm in holding the sparse_irq lock until the upcoming CPU
completes in cpuhp_online_idle(). This allows the cpu_online()
synchronization to be removed from architecture code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/cpu.c |   28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -558,7 +558,7 @@ static int cpuhp_kick_ap(int cpu, struct
 	return ret;
 }
 
-static int bringup_wait_for_ap(unsigned int cpu)
+static int bringup_wait_for_ap_online(unsigned int cpu)
 {
 	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
 
@@ -579,15 +579,12 @@ static int bringup_wait_for_ap(unsigned
 	 */
 	if (!cpu_smt_allowed(cpu))
 		return -ECANCELED;
-
-	if (st->target <= CPUHP_AP_ONLINE_IDLE)
-		return 0;
-
-	return cpuhp_kick_ap(cpu, st, st->target);
+	return 0;
 }
 
 static int bringup_cpu(unsigned int cpu)
 {
+	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
 	struct task_struct *idle = idle_thread_get(cpu);
 	int ret;
 
@@ -606,10 +603,23 @@ static int bringup_cpu(unsigned int cpu)
 
 	/* Arch-specific enabling code. */
 	ret = __cpu_up(cpu, idle);
-	irq_unlock_sparse();
 	if (ret)
-		return ret;
-	return bringup_wait_for_ap(cpu);
+		goto out_unlock;
+
+	ret = bringup_wait_for_ap_online(cpu);
+	if (ret)
+		goto out_unlock;
+
+	irq_unlock_sparse();
+
+	if (st->target <= CPUHP_AP_ONLINE_IDLE)
+		return 0;
+
+	return cpuhp_kick_ap(cpu, st, st->target);
+
+out_unlock:
+	irq_unlock_sparse();
+	return ret;
 }
 
 static int finish_cpu(unsigned int cpu)



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521277.809862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5t-0005hZ-Lh; Fri, 14 Apr 2023 23:44:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521277.809862; Fri, 14 Apr 2023 23:44:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5t-0005cc-8h; Fri, 14 Apr 2023 23:44:41 +0000
Received: by outflank-mailman (input) for mailman id 521277;
 Fri, 14 Apr 2023 23:44:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5r-0001Th-1T
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:39 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51b6e0db-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:44:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51b6e0db-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232310.133063984@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515878;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=IXf0ZnAk7Rjoer1aZheyf4W/cuBQpAE5Qx39Wnoq+gw=;
	b=tCHY6jxHe44QdTi6vocnhUhZC/Sr8ssG31Dn8gwv5SQPDPmECHh8SvAHIYmNtqHfMlkDjb
	5z/JlKFS3jn11K/mH/g/+Ow73N4W+vlAAEOR14IBfVSLYSUt09dUYIaE4ilEgx1gRiYou/
	NJW3slP7Xaephbtnct1+NjMkcY39oSmvHRscafuLnWIYw0ZXn5+nds4CQ9+gIajjenAhxm
	MpSUlozCRJ6mr8ZNsIclfZsK1CWaX+//qVVkJTb566EfkmD5lkZR5KlXMt2cJLL6tWRV4l
	KlYtfdAt7sssM4QTlY+1m/BpDXqcWMvO6p1Ofmu+gKrgTD9Ra18H8OqOESasOQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515878;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=IXf0ZnAk7Rjoer1aZheyf4W/cuBQpAE5Qx39Wnoq+gw=;
	b=ZRExhqz5gVYg/2ZXmu04kZ6TddbP4xfwFZiD+kT/UslGyVveAnCubA4h2r17G4cgTnxl1O
	+W+htaNbUUPcKRAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 15/37] x86/smpboot: Remove wait for cpu_online()
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:37 +0200 (CEST)

Now that the core code drops the sparse_irq lock only after the idle thread
has synchronized, it's pointless to wait for the AP to mark itself online.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/smpboot.c |   25 ++-----------------------
 1 file changed, 2 insertions(+), 23 deletions(-)

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1110,20 +1110,6 @@ static int wait_cpu_initialized(unsigned
 	return 0;
 }
 
-/*
- * Bringup step three: Wait for the target AP to reach set_cpu_online() in
- * start_secondary().
- */
-static void wait_cpu_online(unsigned int cpu)
-{
-	/*
-	 * Wait for the AP to mark itself online, so the core caller
-	 * can drop sparse_irq_lock.
-	 */
-	while (!cpu_online(cpu))
-		schedule();
-}
-
 static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
@@ -1170,16 +1156,9 @@ int native_cpu_up(unsigned int cpu, stru
 	int ret;
 
 	ret = native_kick_ap(cpu, tidle);
-	if (ret)
-		goto out;
-
-	ret = wait_cpu_initialized(cpu);
-	if (ret)
-		goto out;
-
-	wait_cpu_online(cpu);
+	if (!ret)
+		ret = wait_cpu_initialized(cpu);
 
-out:
 	/* Cleanup possible dangling ends... */
 	if (x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521282.809879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5w-0006bQ-NE; Fri, 14 Apr 2023 23:44:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521282.809879; Fri, 14 Apr 2023 23:44:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5w-0006Zl-Ap; Fri, 14 Apr 2023 23:44:44 +0000
Received: by outflank-mailman (input) for mailman id 521282;
 Fri, 14 Apr 2023 23:44:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5t-0000zb-Ls
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:41 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 52adff75-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52adff75-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232310.194293270@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515880;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=U86WQCD4/7f+Xqtkcp8tab/f3Qs5fwP9mDOYJzY6GfM=;
	b=jVIvfWfMQ/Mv7TLWicCCZIBuoCNHTQbWP9agDzTG7YCSq254hUFkclQNrmhW3hMxwO6BPK
	vpCyMRIt4O1v+XPm4hgw1E/lo8bQ+TLMoWfNV6id2wkbLsusQC6+QgLxHkmM7jqI5M1i8u
	WMWTwhPBiMRMlROawAHv6lnZ2lJwgt69Ha2d7Hel7C6kpu8UPNf7d9bEsHq2W4HMj8o6m6
	dmaviMmKfYNuICvE6NIhMG4fkMK+1VZcm9tvKGIXKcaFCzlTUjXnuorDCoZhFXx/Hy5zi8
	WucVMkf4GaCQOYOvuAmNqRgA5o2nLyY6ILSN9/5KO2/baOF9ZKuCj7IpPwPGyQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515880;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=U86WQCD4/7f+Xqtkcp8tab/f3Qs5fwP9mDOYJzY6GfM=;
	b=bwxgSYDBNDOQrQ9qYJjd5x62eInfveDh14CbqLzga+9pzOoPfQod/ugWmc+ORstEOs2az4
	q8XHnTNdR1jM1HBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 16/37] x86/xen/smp_pv: Remove wait for CPU online
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:39 +0200 (CEST)

Now that the core code drops sparse_irq_lock after the idle thread has
synchronized, it's pointless to wait for the AP to mark itself online.

Whether the control CPU runs in a wait loop or sleeps in the core code
waiting for the online operation to complete makes no difference.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: xen-devel@lists.xenproject.org
---
 arch/x86/xen/smp_pv.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -340,11 +340,11 @@ static int xen_pv_cpu_up(unsigned int cp
 
 	xen_pmu_init(cpu);
 
-	rc = HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL);
-	BUG_ON(rc);
-
-	while (cpu_report_state(cpu) != CPU_ONLINE)
-		HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
+	/*
+	 * Why is this a BUG? If the hypercall fails then everything can be
+	 * rolled back, no?
+	 */
+	BUG_ON(HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL));
 
 	return 0;
 }



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:44:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:44:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521285.809885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5x-0006kf-Gk; Fri, 14 Apr 2023 23:44:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521285.809885; Fri, 14 Apr 2023 23:44:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnT5w-0006jJ-Sf; Fri, 14 Apr 2023 23:44:44 +0000
Received: by outflank-mailman (input) for mailman id 521285;
 Fri, 14 Apr 2023 23:44:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5v-0000zb-Cg
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:43 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 53a62147-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53a62147-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232310.256412375@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515881;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=e1FMo66EmKhD6c9vNMbCMEr10b7nb30z1HTBJhSwsws=;
	b=gZ0HEhv3acCl5QBYyuBxsWswUfiT2pTSEiPx8KAuYf607AwMRXvsLBlYW5dljp8oWs/ntm
	Rl62blc06B+wdYag6mxiaHF4d6UazbaaiGafHMeetJ/A7Hs5RVhUzMngs/dEe59idC0O2F
	aA4jOVKszqOh4s2mn+PKplYpnPle5vYAOnMKw27lKXhJvIh/kbhLy4lRj7cXXA/G581Bej
	PsarGBYF6AEmBjKJMVj8L9q7SGw5rle2qMtTuVh6V6kezaC/bwaovVLX9d1Zsm+GX8ANmu
	PhMCi/IYxY/Oqm30JTDcZDBgs/Y7EGdHwcz8miDR/Arr0Wrr//IGKwmjd5YAMQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515881;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=e1FMo66EmKhD6c9vNMbCMEr10b7nb30z1HTBJhSwsws=;
	b=4fNwIQeLJaYUXs5/LQAEGEGLbWkObI//eYOHt1swXrTx799wwKeZFlTzry5MJCiAr70s7q
	ypFOiCNeSLElyzBg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 17/37] x86/xen/hvm: Get rid of DEAD_FROZEN handling
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:41 +0200 (CEST)

No point in this conditional voodoo. Un-initializing the lock mechanism is
safe to call unconditionally, even if it was already invoked when the CPU
died.

Remove the invocation of xen_smp_intr_free() as that has been already
cleaned up in xen_cpu_dead_hvm().
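
The pattern relied on here is that the teardown is idempotent. A minimal
user-space sketch (all names hypothetical, not the actual Xen code) of why an
unconditional call is safe:

```c
#include <stdbool.h>

/* Hypothetical per-CPU lock state; models the idea that a teardown
 * routine like xen_uninit_lock_cpu() may run even when nothing is
 * currently initialized. */
struct lock_state {
	bool initialized;
	int irq;
};

/* Idempotent teardown: a second (or spurious) call is a no-op, so the
 * caller needs no conditional guard around it. */
static void uninit_lock_cpu(struct lock_state *s)
{
	if (!s->initialized)
		return;		/* already torn down: nothing to do */
	s->irq = -1;
	s->initialized = false;
}
```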

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: xen-devel@lists.xenproject.org
---
 arch/x86/xen/enlighten_hvm.c |   11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -161,13 +161,12 @@ static int xen_cpu_up_prepare_hvm(unsign
 	int rc = 0;
 
 	/*
-	 * This can happen if CPU was offlined earlier and
-	 * offlining timed out in common_cpu_die().
+	 * If a CPU was offlined earlier and offlining timed out then the
+	 * lock mechanism is still initialized. Uninit it unconditionally
+	 * as it's safe to call even if already uninited. Interrupts and
+	 * timer have already been handled in xen_cpu_dead_hvm().
 	 */
-	if (cpu_report_state(cpu) == CPU_DEAD_FROZEN) {
-		xen_smp_intr_free(cpu);
-		xen_uninit_lock_cpu(cpu);
-	}
+	xen_uninit_lock_cpu(cpu);
 
 	if (cpu_acpi_id(cpu) != U32_MAX)
 		per_cpu(xen_vcpu_id, cpu) = cpu_acpi_id(cpu);



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:48:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:48:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521288.809905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTA1-0002kx-NL; Fri, 14 Apr 2023 23:48:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521288.809905; Fri, 14 Apr 2023 23:48:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTA1-0002kq-KZ; Fri, 14 Apr 2023 23:48:57 +0000
Received: by outflank-mailman (input) for mailman id 521288;
 Fri, 14 Apr 2023 23:48:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6Q-0000zb-Oh
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:45:14 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6639473e-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:45:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6639473e-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232311.441918776@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515912;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=6cGHgynMBCaHRwYTrISGgGUY8IfBttwd4gSW6/8ncwo=;
	b=Y6MvVoIRD+b308pDJWowuLkJP8xTk8BgVllbAchtET5SCLxH3pPaTbpOYvBSvJgt29AjQn
	ezcDEAwpvw4go5HJCn0jDdZCGPT9MLW24qpZKb8kz2KJNNnr6JcyXOPyaUGlM/ANv9N+fq
	tuEJRuRxzNty3GrkhGsgHYpRhP/0DX3AEiMTFdAHD/hkrve/Z/OGirQHyoZehUi+ZLj2of
	nh0+zW/XB9E47Zv4MWfj7UgDu31M48dyxxMexXPIhf0Fh2Xfz56PypK/Z4RNZ1QuLpDfkl
	MzM4X40h1quxb59BK38sroOGy3J6fRyyYS1Nn/XWNH7x+1QZ0XFumeRWVGSB2A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515912;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=6cGHgynMBCaHRwYTrISGgGUY8IfBttwd4gSW6/8ncwo=;
	b=62z6QxHlYDjrTs4WKqbuHCfVc8G+x+AUIAn0i55mZeGFgkDCFd3an/Sr2oboSn6ZUcOcEh
	Lf1/KZWBw97ffRDQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 36/37] x86/smpboot/64: Implement
 arch_cpuhp_init_parallel_bringup() and enable it
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:45:12 +0200 (CEST)

Implement the validation function which tells the core code whether
parallel bringup is possible:

  1) Valid CPUID leaf for APIC ID retrieval. For non-x2APIC systems leaf
     0x1 is sufficient, otherwise leaf 0xb or 0x1f must be available.

  2) Prevent parallel bringup on encrypted guests as this requires a
     different handling of the CPUID leaf retrieval via a call into the
     trusted firmware module. This is what the #VC trap handler does later
     on, which is not available during the very early startup.
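
The leaf-selection decision from the two points above can be modeled as a
small user-space sketch. The enum names and function are hypothetical
stand-ins for the kernel's STARTUP_APICID_CPUID_* controls, not the actual
implementation:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the STARTUP_APICID_CPUID_* control values. */
enum startup_ctrl {
	CTRL_NONE = 0,		/* parallel bringup disabled */
	CTRL_CPUID_01,
	CTRL_CPUID_0B,
	CTRL_CPUID_1F,
};

/* Mirrors the switch in arch_cpuhp_init_parallel_bringup(): pick the
 * CPUID leaf the AP will use to retrieve its APIC ID, or refuse
 * parallel bringup when no suitable leaf exists. */
static enum startup_ctrl select_startup_ctrl(unsigned int topo_leaf, bool x2apic)
{
	switch (topo_leaf) {
	case 0x0b:
		return CTRL_CPUID_0B;
	case 0x1f:
		return CTRL_CPUID_1F;
	case 0x00:
		/* Without x2APIC, the 8-bit APIC ID from leaf 0x01 suffices. */
		if (!x2apic)
			return CTRL_CPUID_01;
		/* fall through */
	default:
		return CTRL_NONE;
	}
}
```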

Originally-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/Kconfig             |    3 +-
 arch/x86/kernel/cpu/common.c |    6 -----
 arch/x86/kernel/smpboot.c    |   49 +++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 50 insertions(+), 8 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -272,8 +272,9 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_PARALLEL			if SMP && X86_64
 	select HOTPLUG_SMT			if SMP
-	select HOTPLUG_SPLIT_STARTUP		if SMP
+	select HOTPLUG_SPLIT_STARTUP		if SMP && X86_32
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2127,11 +2127,7 @@ static inline void setup_getcpu(int cpu)
 }
 
 #ifdef CONFIG_X86_64
-static inline void ucode_cpu_init(int cpu)
-{
-	if (cpu)
-		load_ucode_ap();
-}
+static inline void ucode_cpu_init(int cpu) { }
 
 static inline void tss_setup_ist(struct tss_struct *tss)
 {
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -58,6 +58,7 @@
 #include <linux/overflow.h>
 #include <linux/stackprotector.h>
 #include <linux/cpuhotplug.h>
+#include <linux/mc146818rtc.h>
 
 #include <asm/acpi.h>
 #include <asm/cacheinfo.h>
@@ -75,7 +76,7 @@
 #include <asm/fpu/api.h>
 #include <asm/setup.h>
 #include <asm/uv/uv.h>
-#include <linux/mc146818rtc.h>
+#include <asm/microcode.h>
 #include <asm/i8259.h>
 #include <asm/misc.h>
 #include <asm/qspinlock.h>
@@ -128,7 +129,6 @@ int arch_update_cpu_topology(void)
 	return retval;
 }
 
-
 static unsigned int smpboot_warm_reset_vector_count;
 
 static inline void smpboot_setup_warm_reset_vector(unsigned long start_eip)
@@ -247,6 +247,8 @@ static void notrace start_secondary(void
 #endif
 	cpu_init_exception_handling();
 
+	load_ucode_ap();
+
 	/*
 	 * Sync point with the hotplug core. Sets the sync state to ALIVE
 	 * and waits for the control CPU to release it.
@@ -1251,6 +1253,49 @@ void __init smp_prepare_cpus_common(void
 	set_cpu_sibling_map(0);
 }
 
+#ifdef CONFIG_X86_64
+/* Establish whether parallel bringup can be supported. */
+bool __init arch_cpuhp_init_parallel_bringup(void)
+{
+	unsigned int ctrl;
+
+	if (boot_cpu_data.cpuid_level < 0x01) {
+		pr_info("Parallel CPU startup disabled due to lack of CPUID\n");
+		return false;
+	}
+
+	/* Encrypted guests require special CPUID handling. */
+	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
+		pr_info("Parallel CPU startup disabled due to guest state encryption\n");
+		return false;
+	}
+
+	switch (topology_extended_leaf) {
+	case 0x0b:
+		ctrl = STARTUP_APICID_CPUID_0B;
+		break;
+	case 0x1f:
+		ctrl = STARTUP_APICID_CPUID_1F;
+		break;
+	case 0x00:
+		/* For !x2APIC mode 8 bits from leaf 0x01 are sufficient. */
+		if (!x2apic_mode) {
+			ctrl = STARTUP_APICID_CPUID_01;
+			break;
+		}
+		fallthrough;
+	default:
+		pr_info("Parallel CPU startup disabled. Unsupported topology leaf %u\n",
+			topology_extended_leaf);
+		return false;
+	}
+
+	pr_debug("Parallel CPU startup enabled: 0x%08x\n", ctrl);
+	smpboot_control = ctrl;
+	return true;
+}
+#endif
+
 /*
  * Prepare for SMP bootup.
  * @max_cpus: configured maximum number of CPUs, It is a legacy parameter



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521294.809915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTA7-00033e-V2; Fri, 14 Apr 2023 23:49:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521294.809915; Fri, 14 Apr 2023 23:49:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTA7-00033X-Rb; Fri, 14 Apr 2023 23:49:03 +0000
Received: by outflank-mailman (input) for mailman id 521294;
 Fri, 14 Apr 2023 23:49:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT68-0000zb-7w
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:56 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 57995914-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:49 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57995914-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232310.506665258@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515888;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=BWuCF2CiZKIQbbVy6vgpgYODjXTXdK2iLGuwNDfmq40=;
	b=f+z8dnbF0VfVZ2drfvyKG7CFHH0pFiyt/pWm7ehfcpt37yeWoaaqrXqvZgx9e/X+XvrZce
	gt4gtyv6t++LGb/9kB1kGkXRTvx7bPDMmlE2YCfC4bhE+s0E3cJidnKxtsU1uFW28KmFrm
	Eh1JSahdrCC66cB1rpXpEnDQUWYAaCehtBdvR4JcyEbKg1e1N2/wIL2n3XDGfkKj7+6Kc2
	i/dgOOZylb3sazc82Iv1WIdzwGBFxWc2QRD+sOwDm8QUeivjB4vTaM4N2GaxbzQoD4goZa
	9mik80mRNDqag2mcRqOovhsRU7cr3S59unOU29PAZedSzmpwu0Vl+6WwRIUstA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515888;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=BWuCF2CiZKIQbbVy6vgpgYODjXTXdK2iLGuwNDfmq40=;
	b=HrYkhb/PC8lVyjiszH07tO75RvI9fVkc6wkdtUyyzhBuNuOW308LHInmD5FALuyFMFj8ZV
	PF23VUQhymsOP5DA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 21/37] ARM: smp: Switch to hotplug core state synchronization
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:47 +0200 (CEST)

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm/Kconfig           |    1 +
 arch/arm/include/asm/smp.h |    2 +-
 arch/arm/kernel/smp.c      |   18 +++++++-----------
 3 files changed, 9 insertions(+), 12 deletions(-)

--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -124,6 +124,7 @@ config ARM
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UID16
 	select HAVE_VIRT_CPU_ACCOUNTING_GEN
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_FORCED_THREADING
 	select MODULES_USE_ELF_REL
 	select NEED_DMA_MAP_STATE
--- a/arch/arm/include/asm/smp.h
+++ b/arch/arm/include/asm/smp.h
@@ -64,7 +64,7 @@ extern void secondary_startup_arm(void);
 
 extern int __cpu_disable(void);
 
-extern void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 
 extern void arch_send_call_function_single_ipi(int cpu);
 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -289,15 +289,11 @@ int __cpu_disable(void)
 }
 
 /*
- * called on the thread which is asking for a CPU to be shutdown -
- * waits until shutdown has completed, or it is timed out.
- * called on the thread which asked for a CPU to be shut down, after the
+ * shutdown has completed.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
 	pr_debug("CPU%u: shutdown\n", cpu);
 
 	clear_tasks_mm_cpumask(cpu);
@@ -337,11 +333,11 @@ void arch_cpu_idle_dead(void)
 	flush_cache_louis();
 
 	/*
-	 * Tell __cpu_die() that this CPU is now safe to dispose of.  Once
-	 * this returns, power and/or clocks can be removed at any point
-	 * from this CPU and its cache by platform_cpu_kill().
+	 * Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose
+	 * of. Once this returns, power and/or clocks can be removed at
+	 * any point from this CPU and its cache by platform_cpu_kill().
 	 */
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	/*
 	 * Ensure that the cache lines associated with that completion are



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521296.809925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAB-0003Mf-AN; Fri, 14 Apr 2023 23:49:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521296.809925; Fri, 14 Apr 2023 23:49:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAB-0003MI-7X; Fri, 14 Apr 2023 23:49:07 +0000
Received: by outflank-mailman (input) for mailman id 521296;
 Fri, 14 Apr 2023 23:49:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6B-0000zb-8I
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:59 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5a850ad4-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a850ad4-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232310.693105830@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515893;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=mLdwoL7i0j/gvs95W7juPrPE5guQlLpBTCOH6DWIfHg=;
	b=y1vQpfJDA9JbXkT4n2QDDyergrLrjhNoIcOXwKPLBgXGWR69MsWEbhct6B2Q0Beq9IabGw
	8AdipYQcGyGv10JjhcWmTMA6nb+uKbsSx/MN8HdunT5YcA8SzmppaMBbV7cQ+4LIlc+rZg
	M4S3YtNPEpFxh9BpQS/uuKImmvSvY9SOjP7kEGHRSjocKuW8Hkd+wsQRNAhaJSu1q3Ji84
	SfmMJL8bUIWPuJwGfQ1vQpsLmkVhRAcjnaf3HglDZNmuBZv36ILD3UsCeX4NSzkNQbUFEr
	jFiHQ+uYjIIWiDdw9m0lRurk39p0R7S/zFpzTGvsKaBrVSf8BTbzIWNSs/wA3w==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515893;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=mLdwoL7i0j/gvs95W7juPrPE5guQlLpBTCOH6DWIfHg=;
	b=M7xQ1hL+35efJyLWe/r6UMXIs4XiyYXK60NpoJQ4tJZIRf0mH7XTaz2kcmwMNEzpQl+EWC
	m32CcBw/MSN+BxAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject:
 [patch 24/37] MIPS: SMP_CPS: Switch to hotplug core state synchronization
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:52 +0200 (CEST)

Switch to the CPU hotplug core state tracking and synchronization
mechanism. This unfortunately requires adding dead reporting to the non-CPS
platforms as CPS is the only user, but it allows an overall consolidation
of this functionality.

No functional change intended.
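
The consolidation works because the new cleanup hook in struct plat_smp_ops
is optional: only CPS installs it, and the generic code checks for NULL
before calling. A minimal sketch of that dispatch pattern (names
hypothetical, miniaturized from the patch below):

```c
#include <stddef.h>

/* Hypothetical miniature of struct plat_smp_ops: the cleanup hook is
 * optional, so generic code must tolerate a NULL pointer. */
struct smp_ops {
	void (*cleanup_dead_cpu)(unsigned int cpu);
};

static unsigned int cleaned;

/* Platform-specific hook, installed only where needed (e.g. CPS). */
static void cps_cleanup(unsigned int cpu)
{
	cleaned = cpu;
}

/* Generic caller: platforms without the hook simply skip the cleanup. */
static void arch_cleanup_dead_cpu(const struct smp_ops *ops, unsigned int cpu)
{
	if (ops->cleanup_dead_cpu)
		ops->cleanup_dead_cpu(cpu);
}
```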

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: linux-mips@vger.kernel.org
---
 arch/mips/Kconfig               |    1 +
 arch/mips/cavium-octeon/smp.c   |    1 +
 arch/mips/include/asm/smp-ops.h |    1 +
 arch/mips/kernel/smp-bmips.c    |    1 +
 arch/mips/kernel/smp-cps.c      |   14 +++++---------
 arch/mips/kernel/smp.c          |    8 ++++++++
 arch/mips/loongson64/smp.c      |    1 +
 7 files changed, 18 insertions(+), 9 deletions(-)

--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2356,6 +2356,7 @@ config MIPS_CPS
 	select MIPS_CM
 	select MIPS_CPS_PM if HOTPLUG_CPU
 	select SMP
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select SYNC_R4K if (CEVT_R4K || CSRC_R4K)
 	select SYS_SUPPORTS_HOTPLUG_CPU
 	select SYS_SUPPORTS_SCHED_SMT if CPU_MIPSR6
--- a/arch/mips/cavium-octeon/smp.c
+++ b/arch/mips/cavium-octeon/smp.c
@@ -344,6 +344,7 @@ void play_dead(void)
 	int cpu = cpu_number_map(cvmx_get_core_num());
 
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 	octeon_processor_boot = 0xff;
 	per_cpu(cpu_state, cpu) = CPU_DEAD;
 
--- a/arch/mips/include/asm/smp-ops.h
+++ b/arch/mips/include/asm/smp-ops.h
@@ -33,6 +33,7 @@ struct plat_smp_ops {
 #ifdef CONFIG_HOTPLUG_CPU
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
+	void (*cleanup_dead_cpu)(unsigned cpu);
 #endif
 #ifdef CONFIG_KEXEC
 	void (*kexec_nonboot_cpu)(void);
--- a/arch/mips/kernel/smp-bmips.c
+++ b/arch/mips/kernel/smp-bmips.c
@@ -390,6 +390,7 @@ static void bmips_cpu_die(unsigned int c
 void __ref play_dead(void)
 {
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 
 	/* flush data cache */
 	_dma_cache_wback_inv(0, ~0);
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -497,8 +497,7 @@ void play_dead(void)
 		}
 	}
 
-	/* This CPU has chosen its way out */
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	cps_shutdown_this_cpu(cpu_death);
 
@@ -521,7 +520,9 @@ static void wait_for_sibling_halt(void *
 	} while (!(halted & TCHALT_H));
 }
 
-static void cps_cpu_die(unsigned int cpu)
+static void cps_cpu_die(unsigned int cpu) { }
+
+static void cps_cleanup_dead_cpu(unsigned cpu)
 {
 	unsigned core = cpu_core(&cpu_data[cpu]);
 	unsigned int vpe_id = cpu_vpe_id(&cpu_data[cpu]);
@@ -529,12 +530,6 @@ static void cps_cpu_die(unsigned int cpu
 	unsigned stat;
 	int err;
 
-	/* Wait for the cpu to choose its way out */
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU%u: didn't offline\n", cpu);
-		return;
-	}
-
 	/*
 	 * Now wait for the CPU to actually offline. Without doing this that
 	 * offlining may race with one or more of:
@@ -618,6 +613,7 @@ static const struct plat_smp_ops cps_smp
 #ifdef CONFIG_HOTPLUG_CPU
 	.cpu_disable		= cps_cpu_disable,
 	.cpu_die		= cps_cpu_die,
+	.cleanup_dead_cpu	= cps_cleanup_dead_cpu,
 #endif
 #ifdef CONFIG_KEXEC
 	.kexec_nonboot_cpu	= cps_kexec_nonboot_cpu,
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -690,6 +690,14 @@ void flush_tlb_one(unsigned long vaddr)
 EXPORT_SYMBOL(flush_tlb_page);
 EXPORT_SYMBOL(flush_tlb_one);
 
+#ifdef CONFIG_HOTPLUG_CPU
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
+	if (mp_ops->cleanup_dead_cpu)
+		mp_ops->cleanup_dead_cpu(cpu);
+}
+#endif
+
 #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 
 static void tick_broadcast_callee(void *info)
--- a/arch/mips/loongson64/smp.c
+++ b/arch/mips/loongson64/smp.c
@@ -788,6 +788,7 @@ void play_dead(void)
 	void (*play_dead_at_ckseg1)(int *);
 
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 
 	prid_imp = read_c0_prid() & PRID_IMP_MASK;
 	prid_rev = read_c0_prid() & PRID_REV_MASK;



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521301.809935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAF-0003jr-Jr; Fri, 14 Apr 2023 23:49:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521301.809935; Fri, 14 Apr 2023 23:49:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAF-0003je-GJ; Fri, 14 Apr 2023 23:49:11 +0000
Received: by outflank-mailman (input) for mailman id 521301;
 Fri, 14 Apr 2023 23:49:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT69-0000zb-8C
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:57 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 588ebefc-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:50 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 588ebefc-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232310.569498144@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515889;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=dAqk1yNFRcZF7qilzpcfVR/70NZ8+F/SNrVuS7Hd0dk=;
	b=AU6aWn3llfgN87vGRPuEWQIrjtTFwK3yn2f2vw94yybrTzt5H4LRbc91tm1Xw3BwSnq5hP
	rQmL01eHYwGuD/tJ0f+L2+Oe406bYjAoY3D+heXxkuTKO7KuUfo+Vs6B1IlkwX6ZMSgfKv
	MeC7l/ITcGbKIYw5eRyObkCJegLxrMXZzPk/rNGf+zutst2sEf6AX47fGrPZphUN9CnuIP
	6Hqq0/37hmlZ5Bbfm3XWsFI3yH4n+lXK8uvrCVgYW4m/9hcfQmz9XLNpsQ2ZNbshSo8dWB
	0ORFcD1Xi54QY+d+vfpgXIdhI1e6dB7Jux3a2oBfTKMsXcUuUQI04NVVQTd0zA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515889;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=dAqk1yNFRcZF7qilzpcfVR/70NZ8+F/SNrVuS7Hd0dk=;
	b=9EnFpcJNs8EwCrNKZip93QB/LFNFkvbQArfrS6NvwMyIg/IEdfp5AE2HOIUr5Sp+QKyDnB
	/vmf1BkRatgvPdDA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 linux-arm-kernel@lists.infradead.org,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject:
 [patch 22/37] arm64: smp: Switch to hotplug core state synchronization
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:49 +0200 (CEST)

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/Kconfig           |    1 +
 arch/arm64/include/asm/smp.h |    2 +-
 arch/arm64/kernel/smp.c      |   14 +++++---------
 3 files changed, 7 insertions(+), 10 deletions(-)

--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -216,6 +216,7 @@ config ARM64
 	select HAVE_KPROBES
 	select HAVE_KRETPROBES
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select KASAN_VMALLOC if KASAN
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -99,7 +99,7 @@ static inline void arch_send_wakeup_ipi_
 
 extern int __cpu_disable(void);
 
-extern void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 extern void cpu_die(void);
 extern void cpu_die_early(void);
 
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -333,17 +333,13 @@ static int op_cpu_kill(unsigned int cpu)
 }
 
 /*
- * called on the thread which is asking for a CPU to be shutdown -
- * waits until shutdown has completed, or it is timed out.
+ * Called on the thread which is asking for a CPU to be shut down, after
+ * the shutdown has completed.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
 	int err;
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
 	pr_debug("CPU%u: shutdown\n", cpu);
 
 	/*
@@ -370,8 +366,8 @@ void cpu_die(void)
 
 	local_daif_mask();
 
-	/* Tell __cpu_die() that this CPU is now safe to dispose of */
-	(void)cpu_report_death();
+	/* Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose of */
+	cpuhp_ap_report_dead();
 
 	/*
 	 * Actually shutdown the CPU. This must never fail. The specific hotplug



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521302.809941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAG-0003p1-4y; Fri, 14 Apr 2023 23:49:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521302.809941; Fri, 14 Apr 2023 23:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAF-0003oL-QV; Fri, 14 Apr 2023 23:49:11 +0000
Received: by outflank-mailman (input) for mailman id 521302;
 Fri, 14 Apr 2023 23:49:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6D-0000zb-8X
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:45:01 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5b94cad7-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b94cad7-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232310.754812729@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515894;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=R05iW7fNVcum8Q+mjf2E3yOsCRNZdrRM4MkZfsXKqKc=;
	b=aUiik8SryxpjaeRHuGDH1CVCPJrHdz4ToboDiBvyrgybp5D/pPkGbz7paiwVK8VODAE0fl
	SgTT+QXtuBbN0fpZxUWKztpQGU0bjPZSUW3fxyRe9IQVkvcbXWq/0x43vpjbVvOarJH7HV
	uIjYrrAOYGCrtSVeXL3Klmw/1hn994FxUZywCAJZ5ETy96ETN4uYpOv1uP93+BscjKqka+
	yexDk6oAjmOIdMrSOugyHKVHXN1255/Us+8JjFFS+8Qfs8u4ygXpDFKFDNp8KDDwtuiQU3
	N8RPw7aSRn8YI2NswkTZsr81vwh3F5+iNvqvFAUv07ki8YW+7xmZPHzCBoIC9w==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515894;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=R05iW7fNVcum8Q+mjf2E3yOsCRNZdrRM4MkZfsXKqKc=;
	b=caenbNdiUoGPpEuAuhFVb0nh5RPBSUL+B8V0qrD3LKpWvQp6oNtGy6pLTh0H4xLk3s7H+h
	2RFNt+MohEqWQ1CQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 25/37] parisc: Switch to hotplug core state synchronization
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:54 +0200 (CEST)

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: linux-parisc@vger.kernel.org
---
 arch/parisc/Kconfig          |    1 +
 arch/parisc/kernel/process.c |    4 ++--
 arch/parisc/kernel/smp.c     |    7 +++----
 3 files changed, 6 insertions(+), 6 deletions(-)

--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -56,6 +56,7 @@ config PARISC
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select GENERIC_SCHED_CLOCK
 	select GENERIC_IRQ_MIGRATION if SMP
 	select HAVE_UNSTABLE_SCHED_CLOCK if SMP
--- a/arch/parisc/kernel/process.c
+++ b/arch/parisc/kernel/process.c
@@ -166,8 +166,8 @@ void arch_cpu_idle_dead(void)
 
 	local_irq_disable();
 
-	/* Tell __cpu_die() that this CPU is now safe to dispose of. */
-	(void)cpu_report_death();
+	/* Tell the core that this CPU is now safe to dispose of. */
+	cpuhp_ap_report_dead();
 
 	/* Ensure that the cache lines are written out. */
 	flush_cache_all_local();
--- a/arch/parisc/kernel/smp.c
+++ b/arch/parisc/kernel/smp.c
@@ -500,11 +500,10 @@ int __cpu_disable(void)
 void __cpu_die(unsigned int cpu)
 {
 	pdc_cpu_rendezvous_lock();
+}
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
 	pr_info("CPU%u: is shutting down\n", cpu);
 
 	/* set task's state to interruptible sleep */



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521305.809955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAJ-0004Uw-7K; Fri, 14 Apr 2023 23:49:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521305.809955; Fri, 14 Apr 2023 23:49:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAJ-0004UY-1w; Fri, 14 Apr 2023 23:49:15 +0000
Received: by outflank-mailman (input) for mailman id 521305;
 Fri, 14 Apr 2023 23:49:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6E-0000zb-8g
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:45:02 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5c76f70e-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c76f70e-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232310.817955867@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515896;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=ayzVJgXuvflj3xjg1mGXZolHpFmSekkZ0AyhKY70H0Q=;
	b=GY2K7k5oePF4smJsnVlCWc197Gnvm2xpV+H4Y4U8e+jaRt3VB8iE9Bg1axBb2bpO+/I7XE
	wTG8GN8vNucA/g3rxDLFU6mc1SvUhxpbP0wveuvbhDGNZd++q35ihkBqOmSICTjhquJjYJ
	gQ/8Bkd6FpRUM+MX/SRhgi0pwhuLuSCrUZwmZ1fPuiU+CEObkKTMFIVqx8V0B6LfZdF2Zy
	vTSM8z3vPt6+fF/Bmw0aE9UGx+RqDICSvY+n8F5aACJ8IZ8FOG+IfKxylVEweXpqRrNp+K
	c5k4vI/XDc1rGZAhq8CumldbXaaUQJlgNjb18yycd/3lTwMlhsF9uxtK+pKsOQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515896;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=ayzVJgXuvflj3xjg1mGXZolHpFmSekkZ0AyhKY70H0Q=;
	b=K/mg+lwQlwHWac81k54xHqleQCPzjnE1varYnMwenALS/XLp9IX7NBh95fkmdnzkG0JM9N
	NgO+APSKFCAt65CA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 26/37] riscv: Switch to hotplug core state synchronization
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:55 +0200 (CEST)

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: linux-riscv@lists.infradead.org
---
 arch/riscv/Kconfig              |    1 +
 arch/riscv/include/asm/smp.h    |    2 +-
 arch/riscv/kernel/cpu-hotplug.c |   14 +++++++-------
 3 files changed, 9 insertions(+), 8 deletions(-)

--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -116,6 +116,7 @@ config RISCV
 	select HAVE_RSEQ
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select MODULES_USE_ELF_RELA if MODULES
--- a/arch/riscv/include/asm/smp.h
+++ b/arch/riscv/include/asm/smp.h
@@ -64,7 +64,7 @@ asmlinkage void smp_callin(void);
 
 #if defined CONFIG_HOTPLUG_CPU
 int __cpu_disable(void);
-void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 #endif /* CONFIG_HOTPLUG_CPU */
 
 #else
--- a/arch/riscv/kernel/cpu-hotplug.c
+++ b/arch/riscv/kernel/cpu-hotplug.c
@@ -8,6 +8,7 @@
 #include <linux/sched.h>
 #include <linux/err.h>
 #include <linux/irq.h>
+#include <linux/cpuhotplug.h>
 #include <linux/cpu.h>
 #include <linux/sched/hotplug.h>
 #include <asm/irq.h>
@@ -48,17 +49,15 @@ int __cpu_disable(void)
 	return ret;
 }
 
+#ifdef CONFIG_HOTPLUG_CPU
 /*
- * Called on the thread which is asking for a CPU to be shutdown.
+ * Called on the thread which is asking for a CPU to be shut down, once the
+ * CPU has reported dead to the hotplug core.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
 	int ret = 0;
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU %u: didn't die\n", cpu);
-		return;
-	}
 	pr_notice("CPU%u: off\n", cpu);
 
 	/* Verify from the firmware if the cpu is really stopped*/
@@ -75,9 +74,10 @@ void arch_cpu_idle_dead(void)
 {
 	idle_task_exit();
 
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	cpu_ops[smp_processor_id()]->cpu_stop();
 	/* It should never reach here */
 	BUG();
 }
+#endif



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521307.809959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAJ-0004Yj-N2; Fri, 14 Apr 2023 23:49:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521307.809959; Fri, 14 Apr 2023 23:49:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAJ-0004XT-FQ; Fri, 14 Apr 2023 23:49:15 +0000
Received: by outflank-mailman (input) for mailman id 521307;
 Fri, 14 Apr 2023 23:49:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6K-0000zb-Bm
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:45:08 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6268d8ce-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:45:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6268d8ce-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232311.192114505@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515906;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=OOycMYPFVi5YhixhBoVMbIx7KiHrtAXAV72ZpNnWZL8=;
	b=hJ/cJ/LsDaASOUqASLPPkSC6Ps6RxfdsA7gQYRBMpPaZEuge6FBpr2f6vVc+8HZFkB/rmb
	RJy5f49D9C3GZ5iAOC7Zko7KcGkJJr2aVQq4d0I8MHxwSrgTL3A0ePu6a/KfQsXuVA4Xbu
	ajazkAeZDlEyf5WTyO4+9yNz86aTWvAhKkL+wFnm59Pu29bb/grFar2EnhIA3IYkZTtEDm
	0zbDi0Vg6WqjBsnDoMhcNfUK9CtbuR2h5QYljCFrrrsyKUW96SL8J44WHxTF6P75LRzQZI
	y5u0fp80f4IGejFZO38GlIfCPV4fIBaOEn1NXHQkG/2n87JafGV7laJAQBArhQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515906;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=OOycMYPFVi5YhixhBoVMbIx7KiHrtAXAV72ZpNnWZL8=;
	b=3elqLNqpb58Nl0QV7Y5mpEmw+yXYy7Jx8aSLhWQ4/jwrJTvN020KSkN4jR6yL3KGS/kbNN
	oIimmB7TZWnYdBDQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 32/37] cpu/hotplug: Allow "parallel" bringup up to
 CPUHP_BP_KICK_AP_STATE
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:45:05 +0200 (CEST)

There is often significant latency in the early stages of CPU bringup, and
time is wasted by waking each CPU (e.g. with INIT/SIPI/SIPI on x86) and
then waiting for it to respond before moving on to the next.

Allow a platform to enable parallel setup which brings all to-be-onlined
CPUs up to the CPUHP_BP_KICK_AP state. While this state advancement on the
control CPU (BP) is single-threaded, the important part is the last state,
CPUHP_BP_KICK_AP, which wakes the to-be-onlined CPUs up.

This allows the CPUs to run up to the first synchronization point
cpuhp_ap_sync_alive(), where they wait for the control CPU to release them
one by one for the full onlining procedure.

This parallelism depends on the CPU hotplug core sync mechanism, which
ensures that the CPUs brought up in parallel wait for release before
touching any state which would make the CPU visible to anything outside
the hotplug control mechanism.

To handle the SMT constraints of x86 correctly, the bringup happens in two
iterations when CONFIG_HOTPLUG_SMT is enabled. The control CPU brings up
the primary SMT threads of each core first, which can load the microcode
without the need to rendezvous with the thread siblings. Once that's
completed it brings up the secondary SMT threads.

Co-developed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/admin-guide/kernel-parameters.txt |    6 +
 arch/Kconfig                                    |    4 
 include/linux/cpuhotplug.h                      |    1 
 kernel/cpu.c                                    |  103 ++++++++++++++++++++++--
 4 files changed, 109 insertions(+), 5 deletions(-)
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -815,6 +815,12 @@
 			on every CPU online, such as boot, and resume from suspend.
 			Default: 10000
 
+	cpuhp.parallel=
+			[SMP] Enable/disable parallel bringup of secondary CPUs
+			Format: <bool>
+			Default is enabled if CONFIG_HOTPLUG_PARALLEL=y. Otherwise
+			the parameter has no effect.
+
 	crash_kexec_post_notifiers
 			Run kdump after running panic-notifiers and dumping
 			kmsg. This only for the users who doubt kdump always
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -53,6 +53,10 @@ config HOTPLUG_SPLIT_STARTUP
 	bool
 	select HOTPLUG_CORE_SYNC_FULL
 
+config HOTPLUG_PARALLEL
+	bool
+	select HOTPLUG_SPLIT_STARTUP
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -526,6 +526,7 @@ void cpuhp_ap_sync_alive(void);
 void arch_cpuhp_sync_state_poll(void);
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
 int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle);
+bool arch_cpuhp_init_parallel_bringup(void);
 
 #ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
 void cpuhp_ap_report_dead(void);
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -649,8 +649,23 @@ bool cpu_smt_possible(void)
 		cpu_smt_control != CPU_SMT_NOT_SUPPORTED;
 }
 EXPORT_SYMBOL_GPL(cpu_smt_possible);
+
+static inline bool cpuhp_smt_aware(void)
+{
+	return topology_smt_supported();
+}
+
+static inline const struct cpumask *cpuhp_get_primary_thread_mask(void)
+{
+	return cpu_primary_thread_mask;
+}
 #else
 static inline bool cpu_smt_allowed(unsigned int cpu) { return true; }
+static inline bool cpuhp_smt_aware(void) { return false; }
+static inline const struct cpumask *cpuhp_get_primary_thread_mask(void)
+{
+	return cpu_present_mask;
+}
 #endif
 
 static inline enum cpuhp_state
@@ -1743,16 +1758,94 @@ int bringup_hibernate_cpu(unsigned int s
 	return 0;
 }
 
-void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
+static void __init cpuhp_bringup_mask(const struct cpumask *mask, unsigned int ncpus,
+				      enum cpuhp_state target)
 {
 	unsigned int cpu;
 
-	for_each_present_cpu(cpu) {
-		if (num_online_cpus() >= setup_max_cpus)
+	for_each_cpu(cpu, mask) {
+		struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+
+		if (!--ncpus)
 			break;
-		if (!cpu_online(cpu))
-			cpu_up(cpu, CPUHP_ONLINE);
+
+		if (cpu_up(cpu, target) && can_rollback_cpu(st)) {
+			/*
+			 * If this failed then cpu_up() might have only
+			 * rolled back to CPUHP_BP_KICK_AP for the final
+			 * online. Clean it up. NOOP if already rolled back.
+			 */
+			WARN_ON(cpuhp_invoke_callback_range(false, cpu, st, CPUHP_OFFLINE));
+		}
+	}
+}
+
+#ifdef CONFIG_HOTPLUG_PARALLEL
+static bool __cpuhp_parallel_bringup __ro_after_init = true;
+
+static int __init parallel_bringup_parse_param(char *arg)
+{
+	return kstrtobool(arg, &__cpuhp_parallel_bringup);
+}
+early_param("cpuhp.parallel", parallel_bringup_parse_param);
+
+/*
+ * On architectures which have enabled parallel bringup this invokes all BP
+ * prepare states for each of the to be onlined APs first. The last state
+ * sends the startup IPI to the APs. The APs proceed through the low level
+ * bringup code in parallel and then wait for the control CPU to release
+ * them one by one for the final onlining procedure.
+ *
+ * This avoids waiting for each AP to respond to the startup IPI in
+ * CPUHP_BRINGUP_CPU.
+ */
+static bool __init cpuhp_bringup_cpus_parallel(unsigned int ncpus)
+{
+	const struct cpumask *mask = cpu_present_mask;
+
+	if (__cpuhp_parallel_bringup)
+		__cpuhp_parallel_bringup = arch_cpuhp_init_parallel_bringup();
+	if (!__cpuhp_parallel_bringup)
+		return false;
+
+	if (cpuhp_smt_aware()) {
+		const struct cpumask *pmask = cpuhp_get_primary_thread_mask();
+		static struct cpumask tmp_mask __initdata;
+
+		/*
+		 * On x86 the SMT siblings must not be brought up while the
+		 * primary thread does a microcode update, for various
+		 * reasons. Bring the primary threads up first.
+		 */
+		cpumask_and(&tmp_mask, mask, pmask);
+		cpuhp_bringup_mask(&tmp_mask, ncpus, CPUHP_BP_KICK_AP);
+		cpuhp_bringup_mask(&tmp_mask, ncpus, CPUHP_ONLINE);
+		/* Account for the online CPUs */
+		ncpus -= num_online_cpus();
+		if (!ncpus)
+			return true;
+		/* Create the mask for secondary CPUs */
+		cpumask_andnot(&tmp_mask, mask, pmask);
+		mask = &tmp_mask;
 	}
+
+	/* Bring the not-yet started CPUs up */
+	cpuhp_bringup_mask(mask, ncpus, CPUHP_BP_KICK_AP);
+	cpuhp_bringup_mask(mask, ncpus, CPUHP_ONLINE);
+	return true;
+}
+#else
+static inline bool cpuhp_bringup_cpus_parallel(unsigned int ncpus) { return false; }
+#endif /* CONFIG_HOTPLUG_PARALLEL */
+
+void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
+{
+	/* Try parallel bringup optimization if enabled */
+	if (cpuhp_bringup_cpus_parallel(setup_max_cpus))
+		return;
+
+	/* Full per CPU serialized bringup */
+	cpuhp_bringup_mask(cpu_present_mask, setup_max_cpus, CPUHP_ONLINE);
 }
 
 #ifdef CONFIG_PM_SLEEP_SMP



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521309.809966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAK-0004m6-Ko; Fri, 14 Apr 2023 23:49:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521309.809966; Fri, 14 Apr 2023 23:49:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAK-0004js-9U; Fri, 14 Apr 2023 23:49:16 +0000
Received: by outflank-mailman (input) for mailman id 521309;
 Fri, 14 Apr 2023 23:49:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6H-0000zb-9F
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:45:05 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5f612769-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:45:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f612769-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232311.004104404@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515901;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=ocDa9/b5b649kz9ODCiS7Obhl2xff0PGySrVJUnu/1Q=;
	b=Md1KUi0t4p5246hEM+Qbsg81XiYR2jMjuw8Cw1D96u97R/Z2vnRzvzyGsvg5/thdzlhagw
	VxVAyRxzHuR0yoObilNCNxEii10xKHOuzvMmMpympXYYlB2U8gUH+sjsIvQSBudSOTSmdM
	UnCqcdLP+SdXgts22XiNkil1Ao3IAY8flV5MA9J+wt6a2jzpqf6qFRYX/asyt9IKYGJESJ
	xr1bFKQEJqtc2y44yr2+c326EAG2BPB0FHdGHcRgBqiJBXhDU2UyMAE2uzcyA4utOsub3q
	p754cUuO5uG6++I5nPeZhand8mstexdu0PM3TYbIMB2kMernSkCdSX7RIPkZfA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515901;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=ocDa9/b5b649kz9ODCiS7Obhl2xff0PGySrVJUnu/1Q=;
	b=BXd39doKKTpBx8WPmaV0PtWPunRukkpmlrtLhowy8wImwT3Zn16BG6OqXrJtidsxX/8Ej+
	driBsAHHd5CWSWBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 29/37] cpu/hotplug: Provide a split up CPUHP_BRINGUP mechanism
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:45:00 +0200 (CEST)

The bring-up logic of a to-be-onlined CPU consists of several parts, which
are currently treated as a single hotplug state:

  1) Control CPU issues the wake-up

  2) The to-be-onlined CPU starts up, does the minimal initialization,
     reports to be alive and waits for release into the complete bring-up.

  3) Control CPU waits for the alive report and releases the upcoming CPU
     for the complete bring-up.

Allow splitting this into two states:

  1) Control CPU issues the wake-up

     After that the to-be-onlined CPU starts up, does the minimal
     initialization, reports to be alive and waits for release into the
     full bring-up. As this can run after the control CPU has dropped the
     hotplug locks, the code which is executed on the AP before it reports
     alive has to be carefully audited to ensure that it does not violate
     any of the hotplug constraints, especially that it does not modify
     any of the various cpumasks.

     This is really only meant to avoid waiting for the AP to react to the
     wake-up. Of course an architecture can carefully move strictly
     CPU-related setup functionality, e.g. microcode loading, before the
     synchronization point to save further pointless waiting time.

  2) Control CPU waits for the alive report and releases the upcoming CPU
     for the complete bring-up.

This allows running all to-be-onlined CPUs up to state #1 on the control
CPU and then, at a later point, running state #2. This spares some of the
latency of the fully serialized per-CPU bringup by avoiding the per-CPU
wakeup/wait serialization. The assumption is that the first AP is already
waiting when the last AP has been woken up. This obviously depends on the
hardware latencies, and depending on the timings this might still not
completely eliminate all wait scenarios.

This split is just a preparatory step for enabling the parallel bringup
later. The boot time bringup is still fully serialized. It has a separate
config switch so that architectures which want to support parallel bringup
can test the split of the CPUHP_BRINGUP step separately.

To enable this, the architecture must support the CPU hotplug core sync
mechanism and must be audited to ensure that there are no implicit hotplug
state dependencies which require a fully serialized bringup.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/Kconfig               |    4 ++
 include/linux/cpuhotplug.h |    4 ++
 kernel/cpu.c               |   70 +++++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 76 insertions(+), 2 deletions(-)

--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -49,6 +49,10 @@ config HOTPLUG_CORE_SYNC_FULL
 	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select HOTPLUG_CORE_SYNC
 
+config HOTPLUG_SPLIT_STARTUP
+	bool
+	select HOTPLUG_CORE_SYNC_FULL
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -133,6 +133,7 @@ enum cpuhp_state {
 	CPUHP_MIPS_SOC_PREPARE,
 	CPUHP_BP_PREPARE_DYN,
 	CPUHP_BP_PREPARE_DYN_END		= CPUHP_BP_PREPARE_DYN + 20,
+	CPUHP_BP_KICK_AP,
 	CPUHP_BRINGUP_CPU,
 
 	/*
@@ -519,9 +520,12 @@ void cpuhp_online_idle(enum cpuhp_state
 static inline void cpuhp_online_idle(enum cpuhp_state state) { }
 #endif
 
+struct task_struct;
+
 void cpuhp_ap_sync_alive(void);
 void arch_cpuhp_sync_state_poll(void);
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
+int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle);
 
 #ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
 void cpuhp_ap_report_dead(void);
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -759,6 +759,47 @@ static int bringup_wait_for_ap_online(un
 	return 0;
 }
 
+#ifdef CONFIG_HOTPLUG_SPLIT_STARTUP
+static int cpuhp_kick_ap_alive(unsigned int cpu)
+{
+	if (!cpuhp_can_boot_ap(cpu))
+		return -EAGAIN;
+
+	return arch_cpuhp_kick_ap_alive(cpu, idle_thread_get(cpu));
+}
+
+static int cpuhp_bringup_ap(unsigned int cpu)
+{
+	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+	int ret;
+
+	/*
+	 * Some architectures have to walk the irq descriptors to
+	 * setup the vector space for the cpu which comes online.
+	 * Prevent irq alloc/free across the bringup.
+	 */
+	irq_lock_sparse();
+
+	ret = cpuhp_bp_sync_alive(cpu);
+	if (ret)
+		goto out_unlock;
+
+	ret = bringup_wait_for_ap_online(cpu);
+	if (ret)
+		goto out_unlock;
+
+	irq_unlock_sparse();
+
+	if (st->target <= CPUHP_AP_ONLINE_IDLE)
+		return 0;
+
+	return cpuhp_kick_ap(cpu, st, st->target);
+
+out_unlock:
+	irq_unlock_sparse();
+	return ret;
+}
+#else
 static int bringup_cpu(unsigned int cpu)
 {
 	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
@@ -775,7 +816,6 @@ static int bringup_cpu(unsigned int cpu)
 	 */
 	irq_lock_sparse();
 
-	/* Arch-specific enabling code. */
 	ret = __cpu_up(cpu, idle);
 	if (ret)
 		goto out_unlock;
@@ -799,6 +839,7 @@ static int bringup_cpu(unsigned int cpu)
 	irq_unlock_sparse();
 	return ret;
 }
+#endif
 
 static int finish_cpu(unsigned int cpu)
 {
@@ -1938,13 +1979,38 @@ static struct cpuhp_step cpuhp_hp_states
 		.startup.single		= timers_prepare_cpu,
 		.teardown.single	= timers_dead_cpu,
 	},
-	/* Kicks the plugged cpu into life */
+
+#ifdef CONFIG_HOTPLUG_SPLIT_STARTUP
+	/*
+	 * Kicks the AP alive. AP will wait in cpuhp_ap_sync_alive() until
+	 * the next step will release it.
+	 */
+	[CPUHP_BP_KICK_AP] = {
+		.name			= "cpu:kick_ap",
+		.startup.single		= cpuhp_kick_ap_alive,
+	},
+
+	/*
+	 * Waits for the AP to reach cpuhp_ap_sync_alive() and then
+	 * releases it for the complete bringup.
+	 */
+	[CPUHP_BRINGUP_CPU] = {
+		.name			= "cpu:bringup",
+		.startup.single		= cpuhp_bringup_ap,
+		.teardown.single	= finish_cpu,
+		.cant_stop		= true,
+	},
+#else
+	/*
+	 * All-in-one CPU bringup state which includes the kick alive.
+	 */
 	[CPUHP_BRINGUP_CPU] = {
 		.name			= "cpu:bringup",
 		.startup.single		= bringup_cpu,
 		.teardown.single	= finish_cpu,
 		.cant_stop		= true,
 	},
+#endif
 	/* Final state before CPU kills itself */
 	[CPUHP_AP_IDLE_DEAD] = {
 		.name			= "idle:dead",



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:18 2023
Message-ID: <20230414232311.379210081@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Subject: [patch 35/37] x86/smpboot: Support parallel startup of secondary CPUs
References: <20230414225551.858160935@linutronix.de>
Date: Sat, 15 Apr 2023 01:45:10 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

Rework the real-mode startup code to allow for APs to be brought up in
parallel. This is in two parts:

 1. Introduce a bit-spinlock to prevent them from all using the real
    mode stack at the same time.

 2. Avoid needing to use the global smpboot_control variable to pass
    each AP its CPU number.

To achieve the latter, export the cpuid_to_apicid[] array so that each
AP can find its own CPU number by searching therein based on its APIC ID.

Introduce flags in the top bits of smpboot_control which indicate the
method by which an AP should find its CPU number. For a serialized
bringup, the CPU number is explicitly passed in the low bits of
smpboot_control as before. For parallel mode there are flags directing the
AP to find its APIC ID in CPUID leaf 0x0b or 0x1f (for X2APIC mode), or in
CPUID leaf 0x01 where 8 bits are sufficient, and then perform the
cpuid_to_apicid[] lookup with that.

Aside from the fact that APs will now look up their CPU number via the
newly-exported cpuid_to_apicid[] table, there is no behavioural change
intended, since the parallel bootup has not yet been enabled.

[ tglx: Initial proof of concept patch with bitlock and APIC ID lookup ]
[ dwmw2: Rework and testing, commit message, CPUID 0x1 and CPU0 support ]
[ seanc: Fix stray override of initial_gs in common_cpu_up() ]
[ Oleksandr Natalenko: reported suspend/resume issue fixed in
  x86_acpi_suspend_lowlevel ]

Co-developed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Co-developed-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Usama Arif <usama.arif@bytedance.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/apic.h          |    2 
 arch/x86/include/asm/realmode.h      |    3 +
 arch/x86/include/asm/smp.h           |    8 +++
 arch/x86/kernel/acpi/sleep.c         |    9 +++
 arch/x86/kernel/apic/apic.c          |    2 
 arch/x86/kernel/head_64.S            |   79 ++++++++++++++++++++++++++++++++++-
 arch/x86/kernel/smpboot.c            |    5 --
 arch/x86/realmode/init.c             |    3 +
 arch/x86/realmode/rm/trampoline_64.S |   27 +++++++++--
 9 files changed, 125 insertions(+), 13 deletions(-)

--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -55,6 +55,8 @@ extern int local_apic_timer_c2_ok;
 extern int disable_apic;
 extern unsigned int lapic_timer_period;
 
+extern int cpuid_to_apicid[];
+
 extern enum apic_intr_mode_id apic_intr_mode;
 enum apic_intr_mode_id {
 	APIC_PIC,
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -52,6 +52,7 @@ struct trampoline_header {
 	u64 efer;
 	u32 cr4;
 	u32 flags;
+	u32 lock;
 #endif
 };
 
@@ -64,6 +65,8 @@ extern unsigned long initial_stack;
 extern unsigned long initial_vc_handler;
 #endif
 
+extern u32 *trampoline_lock;
+
 extern unsigned char real_mode_blob[];
 extern unsigned char real_mode_relocs[];
 
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -198,4 +198,12 @@ extern unsigned int smpboot_control;
 
 #endif /* !__ASSEMBLY__ */
 
+/* Control bits for startup_64 */
+#define STARTUP_APICID_CPUID_1F 0x80000000
+#define STARTUP_APICID_CPUID_0B 0x40000000
+#define STARTUP_APICID_CPUID_01 0x20000000
+
+/* Top 8 bits are reserved for control */
+#define STARTUP_PARALLEL_MASK	0xFF000000
+
 #endif /* _ASM_X86_SMP_H */
--- a/arch/x86/kernel/acpi/sleep.c
+++ b/arch/x86/kernel/acpi/sleep.c
@@ -16,6 +16,7 @@
 #include <asm/cacheflush.h>
 #include <asm/realmode.h>
 #include <asm/hypervisor.h>
+#include <asm/smp.h>
 
 #include <linux/ftrace.h>
 #include "../../realmode/rm/wakeup.h"
@@ -127,7 +128,13 @@ int x86_acpi_suspend_lowlevel(void)
 	 * value is in the actual %rsp register.
 	 */
 	current->thread.sp = (unsigned long)temp_stack + sizeof(temp_stack);
-	smpboot_control = smp_processor_id();
+	/*
+	 * Ensure the CPU knows which one it is when it comes back, if
+	 * it isn't in parallel mode and expected to work that out for
+	 * itself.
+	 */
+	if (!(smpboot_control & STARTUP_PARALLEL_MASK))
+		smpboot_control = smp_processor_id();
 #endif
 	initial_code = (unsigned long)wakeup_long64;
 	saved_magic = 0x123456789abcdef0L;
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2377,7 +2377,7 @@ static int nr_logical_cpuids = 1;
 /*
  * Used to store mapping between logical CPU IDs and APIC IDs.
  */
-static int cpuid_to_apicid[] = {
+int cpuid_to_apicid[] = {
 	[0 ... NR_CPUS - 1] = -1,
 };
 
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -25,6 +25,7 @@
 #include <asm/export.h>
 #include <asm/nospec-branch.h>
 #include <asm/fixmap.h>
+#include <asm/smp.h>
 
 /*
  * We are not able to switch in one step to the final KERNEL ADDRESS SPACE
@@ -234,8 +235,70 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	ANNOTATE_NOENDBR // above
 
 #ifdef CONFIG_SMP
+	/*
+	 * For parallel boot, the APIC ID is retrieved from CPUID, and then
+	 * used to look up the CPU number.  For booting a single CPU, the
+	 * CPU number is encoded in smpboot_control.
+	 *
+	 * Bit 31	STARTUP_APICID_CPUID_1F flag (use CPUID 0x1f)
+	 * Bit 30	STARTUP_APICID_CPUID_0B flag (use CPUID 0x0b)
+	 * Bit 29	STARTUP_APICID_CPUID_01 flag (use CPUID 0x01)
+	 * Bit 0-23	CPU# if STARTUP_APICID_CPUID_xx flags are not set
+	 */
 	movl	smpboot_control(%rip), %ecx
+	testl	$STARTUP_APICID_CPUID_1F, %ecx
+	jnz	.Luse_cpuid_1f
+	testl	$STARTUP_APICID_CPUID_0B, %ecx
+	jnz	.Luse_cpuid_0b
+	testl	$STARTUP_APICID_CPUID_01, %ecx
+	jnz	.Luse_cpuid_01
+	andl	$(~STARTUP_PARALLEL_MASK), %ecx
+	jmp	.Lsetup_cpu
+
+.Luse_cpuid_01:
+	mov	$0x01, %eax
+	cpuid
+	mov	%ebx, %edx
+	shr	$24, %edx
+	jmp	.Lsetup_AP
+
+.Luse_cpuid_0b:
+	mov	$0x0B, %eax
+	xorl	%ecx, %ecx
+	cpuid
+	jmp	.Lsetup_AP
+
+.Luse_cpuid_1f:
+	mov	$0x1f, %eax
+	xorl	%ecx, %ecx
+	cpuid
 
+.Lsetup_AP:
+	/* EDX contains the APIC ID of the current CPU */
+	xorq	%rcx, %rcx
+	leaq	cpuid_to_apicid(%rip), %rbx
+
+.Lfind_cpunr:
+	cmpl	(%rbx,%rcx,4), %edx
+	jz	.Lsetup_cpu
+	inc	%ecx
+#ifdef CONFIG_FORCE_NR_CPUS
+	cmpl	$NR_CPUS, %ecx
+#else
+	cmpl	nr_cpu_ids(%rip), %ecx
+#endif
+	jb	.Lfind_cpunr
+
+	/*  APIC ID not found in the table. Drop the trampoline lock and bail. */
+	movq	trampoline_lock(%rip), %rax
+	lock
+	btrl	$0, (%rax)
+
+1:	cli
+	hlt
+	jmp	1b
+
+.Lsetup_cpu:
 	/* Get the per cpu offset for the given CPU# which is in ECX */
 	movq	__per_cpu_offset(,%rcx,8), %rdx
 #else
@@ -248,10 +311,20 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	 *
 	 * RDX contains the per-cpu offset
 	 */
-	movq	pcpu_hot + X86_current_task(%rdx), %rax
-	movq	TASK_threadsp(%rax), %rsp
+	movq	pcpu_hot + X86_top_of_stack(%rdx), %rsp
 
 	/*
+	 * Now that this CPU is running on its own stack, drop the realmode
+	 * protection. For the boot CPU the pointer is NULL!
+	 */
+	movq	trampoline_lock(%rip), %rax
+	testq	%rax, %rax
+	jz	.Lsetup_gdt
+	lock
+	btrl	$0, (%rax)
+
+.Lsetup_gdt:
+	/*
 	 * We must switch to a new descriptor in kernel space for the GDT
 	 * because soon the kernel won't have access anymore to the userspace
 	 * addresses where we're currently running on. We have to do that here
@@ -435,6 +508,8 @@ SYM_DATA(initial_code,	.quad x86_64_star
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 SYM_DATA(initial_vc_handler,	.quad handle_vc_boot_ghcb)
 #endif
+
+SYM_DATA(trampoline_lock, .quad 0);
 	__FINITDATA
 
 	__INIT
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -985,10 +985,7 @@ int common_cpu_up(unsigned int cpu, stru
 	if (ret)
 		return ret;
 
-#ifdef CONFIG_X86_32
-	/* Stack for startup_32 can be just as for start_secondary onwards */
 	per_cpu(pcpu_hot.top_of_stack, cpu) = task_top_of_stack(idle);
-#endif
 	return 0;
 }
 
@@ -1014,7 +1011,7 @@ static int do_boot_cpu(int apicid, int c
 	if (IS_ENABLED(CONFIG_X86_32)) {
 		early_gdt_descr.address = (unsigned long)get_cpu_gdt_rw(cpu);
 		initial_stack  = idle->thread.sp;
-	} else {
+	} else if (!(smpboot_control & STARTUP_PARALLEL_MASK)) {
 		smpboot_control = cpu;
 	}
 
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -154,6 +154,9 @@ static void __init setup_real_mode(void)
 
 	trampoline_header->flags = 0;
 
+	trampoline_lock = &trampoline_header->lock;
+	*trampoline_lock = 0;
+
 	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
 
 	/* Map the real mode stub as virtual == physical */
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -37,6 +37,24 @@
 	.text
 	.code16
 
+.macro LOAD_REALMODE_ESP
+	/*
+	 * Make sure only one CPU fiddles with the realmode stack
+	 */
+.Llock_rm\@:
+	btl	$0, tr_lock
+	jnc	2f
+	pause
+	jmp	.Llock_rm\@
+2:
+	lock
+	btsl	$0, tr_lock
+	jc	.Llock_rm\@
+
+	# Setup stack
+	movl	$rm_stack_end, %esp
+.endm
+
 	.balign	PAGE_SIZE
 SYM_CODE_START(trampoline_start)
 	cli			# We should be safe anyway
@@ -49,8 +67,7 @@ SYM_CODE_START(trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	# Setup stack
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 
 	call	verify_cpu		# Verify the cpu supports long mode
 	testl   %eax, %eax		# Check for return code
@@ -93,8 +110,7 @@ SYM_CODE_START(sev_es_trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	# Setup stack
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 
 	jmp	.Lswitch_to_protected
 SYM_CODE_END(sev_es_trampoline_start)
@@ -177,7 +193,7 @@ SYM_CODE_START(pa_trampoline_compat)
 	 * In compatibility mode.  Prep ESP and DX for startup_32, then disable
 	 * paging and complete the switch to legacy 32-bit mode.
 	 */
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 	movw	$__KERNEL_DS, %dx
 
 	movl	$(CR0_STATE & ~X86_CR0_PG), %eax
@@ -241,6 +257,7 @@ SYM_DATA_START(trampoline_header)
 	SYM_DATA(tr_efer,		.space 8)
 	SYM_DATA(tr_cr4,		.space 4)
 	SYM_DATA(tr_flags,		.space 4)
+	SYM_DATA(tr_lock,		.space 4)
 SYM_DATA_END(trampoline_header)
 
 #include "trampoline_common.S"



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:22 2023
Message-ID: <20230414232310.880220709@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Subject: [patch 27/37] cpu/hotplug: Remove unused state functions
References: <20230414225551.858160935@linutronix.de>
Date: Sat, 15 Apr 2023 01:44:57 +0200 (CEST)

All users converted to the hotplug core mechanism.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/cpu.h |    2 -
 kernel/smpboot.c    |   75 ----------------------------------------------------
 2 files changed, 77 deletions(-)

--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -193,8 +193,6 @@ static inline void play_idle(unsigned lo
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
-bool cpu_wait_death(unsigned int cpu, int seconds);
-bool cpu_report_death(void);
 void cpuhp_report_idle_dead(void);
 #else
 static inline void cpuhp_report_idle_dead(void) { }
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -325,78 +325,3 @@ void smpboot_unregister_percpu_thread(st
 	cpus_read_unlock();
 }
 EXPORT_SYMBOL_GPL(smpboot_unregister_percpu_thread);
-
-#ifndef CONFIG_HOTPLUG_CORE_SYNC
-static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
-
-#ifdef CONFIG_HOTPLUG_CPU
-/*
- * Wait for the specified CPU to exit the idle loop and die.
- */
-bool cpu_wait_death(unsigned int cpu, int seconds)
-{
-	int jf_left = seconds * HZ;
-	int oldstate;
-	bool ret = true;
-	int sleep_jf = 1;
-
-	might_sleep();
-
-	/* The outgoing CPU will normally get done quite quickly. */
-	if (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) == CPU_DEAD)
-		goto update_state_early;
-	udelay(5);
-
-	/* But if the outgoing CPU dawdles, wait increasingly long times. */
-	while (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) != CPU_DEAD) {
-		schedule_timeout_uninterruptible(sleep_jf);
-		jf_left -= sleep_jf;
-		if (jf_left <= 0)
-			break;
-		sleep_jf = DIV_ROUND_UP(sleep_jf * 11, 10);
-	}
-update_state_early:
-	oldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-update_state:
-	if (oldstate == CPU_DEAD) {
-		/* Outgoing CPU died normally, update state. */
-		smp_mb(); /* atomic_read() before update. */
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_POST_DEAD);
-	} else {
-		/* Outgoing CPU still hasn't died, set state accordingly. */
-		if (!atomic_try_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),
-					&oldstate, CPU_BROKEN))
-			goto update_state;
-		ret = false;
-	}
-	return ret;
-}
-
-/*
- * Called by the outgoing CPU to report its successful death.  Return
- * false if this report follows the surviving CPU's timing out.
- *
- * A separate "CPU_DEAD_FROZEN" is used when the surviving CPU
- * timed out.  This approach allows architectures to omit calls to
- * cpu_check_up_prepare() and cpu_set_state_online() without defeating
- * the next cpu_wait_death()'s polling loop.
- */
-bool cpu_report_death(void)
-{
-	int oldstate;
-	int newstate;
-	int cpu = smp_processor_id();
-
-	oldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-	do {
-		if (oldstate != CPU_BROKEN)
-			newstate = CPU_DEAD;
-		else
-			newstate = CPU_DEAD_FROZEN;
-	} while (!atomic_try_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),
-				     &oldstate, newstate));
-	return newstate == CPU_DEAD;
-}
-
-#endif /* #ifdef CONFIG_HOTPLUG_CPU */
-#endif /* !CONFIG_HOTPLUG_CORE_SYNC */
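
For reference, the back-off behaviour of the removed cpu_wait_death()
polling loop can be modelled in plain C. This is a userspace sketch, not
kernel code: next_sleep_jf() and rounds_within() are hypothetical helpers
that mirror the loop structure above, and DIV_ROUND_UP is reproduced from
the kernel macro.

```c
/* Userspace sketch of the retry-interval growth used by the removed
 * cpu_wait_death(): the sleep interval grows by roughly 10% per round
 * via DIV_ROUND_UP(sleep_jf * 11, 10). */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Next sleep interval, in jiffies. */
static int next_sleep_jf(int sleep_jf)
{
	return DIV_ROUND_UP(sleep_jf * 11, 10);
}

/* Number of polling rounds that fit into budget_jf jiffies, starting
 * from a 1-jiffy sleep, matching the removed loop's structure. */
static int rounds_within(int budget_jf)
{
	int sleep_jf = 1, rounds = 0;

	while (budget_jf > 0) {
		budget_jf -= sleep_jf;
		rounds++;
		sleep_jf = next_sleep_jf(sleep_jf);
	}
	return rounds;
}
```

The slow growth factor keeps the total wait close to the requested
timeout while avoiding a tight polling loop on a dawdling CPU.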



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521332.810009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAT-0006gO-4R; Fri, 14 Apr 2023 23:49:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521332.810009; Fri, 14 Apr 2023 23:49:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAS-0006eU-TR; Fri, 14 Apr 2023 23:49:24 +0000
Received: by outflank-mailman (input) for mailman id 521332;
 Fri, 14 Apr 2023 23:49:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6M-0000zb-9S
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:45:10 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 635bc938-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:45:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 635bc938-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232311.254849089@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515907;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=qlj4H3ejL+vBUDG3NY1lMMcGFIiTzrSNA2aqLKlihDE=;
	b=1GQ0vgasLWqRcJWgIXPbXDmKUvXohkJnXHYow4+Gk18ZVkMCsGTyrDjPv1CmK6RbyQU3xy
	sZOnCBi1gUZHMhyRrfBhTMtqyNFDPJ6vIKz6V9GUUqs40cNvVEv7vD+CBHvRT0wd1gUl1d
	6b2tSiphU+RGH9LMrzC1SBAlXe4ncrHHHmvCSojPNfdfYrCZOS3cqdN5dvg1mrwkF+zHKS
	ZggDxsStwqH0PgeAnVkNUSmrn0NxE/MFHr8xTSF8rH/8cE10sWA0KKqw6/zjLrvWcuBEms
	WRc4LjINHUoNtUJjSQE1O0a6vHatkwkifR+Lal+9Qf7z5SXlNpUbZu7h4WPnRg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515907;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=qlj4H3ejL+vBUDG3NY1lMMcGFIiTzrSNA2aqLKlihDE=;
	b=QX/aw2ZRTXXvpaXqpLyn2DL7d4KHZGqZGPQ3AUK084YoY4i+oS2d15EjnpvV8yHLp9Q5Sc
	902bNwcd/YuBSyBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 33/37] x86/topology: Store extended topology leaf information
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:45:07 +0200 (CEST)

Save the extended topology leaf number, if it exists and is valid, in
preparation for parallel CPU bringup.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/topology.h |    1 +
 arch/x86/kernel/cpu/topology.c  |    3 +++
 2 files changed, 4 insertions(+)

--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -121,6 +121,7 @@ extern unsigned int __max_die_per_packag
 #define topology_core_cpumask(cpu)		(per_cpu(cpu_core_map, cpu))
 #define topology_sibling_cpumask(cpu)		(per_cpu(cpu_sibling_map, cpu))
 
+extern unsigned int topology_extended_leaf;
 extern unsigned int __max_logical_packages;
 #define topology_max_packages()			(__max_logical_packages)
 
--- a/arch/x86/kernel/cpu/topology.c
+++ b/arch/x86/kernel/cpu/topology.c
@@ -29,6 +29,8 @@ unsigned int __max_die_per_package __rea
 EXPORT_SYMBOL(__max_die_per_package);
 
 #ifdef CONFIG_SMP
+unsigned int topology_extended_leaf __read_mostly;
+
 /*
  * Check if given CPUID extended topology "leaf" is implemented
  */
@@ -72,6 +74,7 @@ int detect_extended_topology_early(struc
 	if (leaf < 0)
 		return -1;
 
+	topology_extended_leaf = leaf;
 	set_cpu_cap(c, X86_FEATURE_XTOPOLOGY);
 
 	cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
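
For reference, the validity check that gates storing the leaf can be
sketched in plain C. This is a userspace stand-in, not the kernel
function: ext_topology_leaf_valid() is a hypothetical helper applying
the documented CPUID leaf 0xB/0x1F conventions (EBX[15:0] is the
logical-processor count at the sub-level, ECX[15:8] is the level type,
1 == SMT).

```c
/* Hypothetical userspace helper: judge whether the SMT sub-level of a
 * CPUID extended topology leaf (0xB or 0x1F) is implemented. */
static int ext_topology_leaf_valid(unsigned int ebx, unsigned int ecx)
{
	unsigned int level_type = (ecx >> 8) & 0xff;	/* ECX[15:8] */

	/* Valid iff the logical-processor count is nonzero and the
	 * sub-level reports the SMT level type. */
	return (ebx & 0xffff) != 0 && level_type == 1 /* SMT */;
}
```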



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521328.810005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAS-0006XD-Fv; Fri, 14 Apr 2023 23:49:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521328.810005; Fri, 14 Apr 2023 23:49:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAS-0006Wc-9f; Fri, 14 Apr 2023 23:49:24 +0000
Received: by outflank-mailman (input) for mailman id 521328;
 Fri, 14 Apr 2023 23:49:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6J-0000zb-9G
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:45:07 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6158ef05-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:45:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6158ef05-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232311.128590508@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515904;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=1po2rJ4fVgJP8Lz7zlVuql3iIT5xp2VpR+B+sueZaiQ=;
	b=j2f464LqemKF5EyymLaTszAtW5FGztj8vmPcDPj1wK7YU7fOHNTdZk2/K8gAJFZq9pfDGY
	5swh08e6d90aHqmMGF6WkIyj8axPVjUxuJuAViwZJ1cdvU6xcoF9zb87nqcct5XNmsoX0o
	Y0cHSVhAJwX30WEOO0lshPZ8xsXVjRD0fZRGdhHxX3DVdfFiIehL5+rfIQAY4cxNaAwRps
	53guQBH31H4F+J1+vPDF/UwhFysy5n0UHaveg6UXzsIamQzkfs0Z0QqIZPRxbWQld5Mev3
	7k6omsH4VccEFnKEdLPC9J1LIDlQw0AxGEp7j6OfTQ3DxOTM+UbH+I/cU+I8Bw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515904;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=1po2rJ4fVgJP8Lz7zlVuql3iIT5xp2VpR+B+sueZaiQ=;
	b=H0vP+GAKFQrHzIZje+HAambScu8P4RD3X0xBakqALypG7Ph1OaLTUgdSgEHGbKo8mVzf0A
	CNqZe/l3Ol72rcBw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 31/37] x86/apic: Provide cpu_primary_thread mask
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:45:03 +0200 (CEST)

Make the primary thread tracking CPU-mask based, in preparation for
simpler handling of parallel bootup.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/apic.h     |    2 --
 arch/x86/include/asm/topology.h |   19 +++++++++++++++----
 arch/x86/kernel/apic/apic.c     |   20 +++++++++-----------
 arch/x86/kernel/smpboot.c       |   12 +++---------
 4 files changed, 27 insertions(+), 26 deletions(-)

--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -506,10 +506,8 @@ extern int default_check_phys_apicid_pre
 #endif /* CONFIG_X86_LOCAL_APIC */
 
 #ifdef CONFIG_SMP
-bool apic_id_is_primary_thread(unsigned int id);
 void apic_smt_update(void);
 #else
-static inline bool apic_id_is_primary_thread(unsigned int id) { return false; }
 static inline void apic_smt_update(void) { }
 #endif
 
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -31,9 +31,9 @@
  * CONFIG_NUMA.
  */
 #include <linux/numa.h>
+#include <linux/cpumask.h>
 
 #ifdef CONFIG_NUMA
-#include <linux/cpumask.h>
 
 #include <asm/mpspec.h>
 #include <asm/percpu.h>
@@ -139,9 +139,20 @@ static inline int topology_max_smt_threa
 int topology_update_package_map(unsigned int apicid, unsigned int cpu);
 int topology_update_die_map(unsigned int dieid, unsigned int cpu);
 int topology_phys_to_logical_pkg(unsigned int pkg);
-bool topology_is_primary_thread(unsigned int cpu);
 bool topology_smt_supported(void);
-#else
+
+extern struct cpumask __cpu_primary_thread_mask;
+#define cpu_primary_thread_mask ((const struct cpumask *)&__cpu_primary_thread_mask)
+
+/**
+ * topology_is_primary_thread - Check whether CPU is the primary SMT thread
+ * @cpu:	CPU to check
+ */
+static inline bool topology_is_primary_thread(unsigned int cpu)
+{
+	return cpumask_test_cpu(cpu, cpu_primary_thread_mask);
+}
+#else /* CONFIG_SMP */
 #define topology_max_packages()			(1)
 static inline int
 topology_update_package_map(unsigned int apicid, unsigned int cpu) { return 0; }
@@ -152,7 +163,7 @@ static inline int topology_max_die_per_p
 static inline int topology_max_smt_threads(void) { return 1; }
 static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
 static inline bool topology_smt_supported(void) { return false; }
-#endif
+#endif /* !CONFIG_SMP */
 
 static inline void arch_fix_phys_package_id(int num, u32 slot)
 {
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2387,20 +2387,16 @@ bool arch_match_cpu_phys_id(int cpu, u64
 }
 
 #ifdef CONFIG_SMP
-/**
- * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
- * @apicid: APIC ID to check
- */
-bool apic_id_is_primary_thread(unsigned int apicid)
+static void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid)
 {
-	u32 mask;
-
-	if (smp_num_siblings == 1)
-		return true;
 	/* Isolate the SMT bit(s) in the APICID and check for 0 */
-	mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
-	return !(apicid & mask);
+	u32 mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
+
+	if (smp_num_siblings == 1 || !(apicid & mask))
+		cpumask_set_cpu(cpu, &__cpu_primary_thread_mask);
 }
+#else
+static inline void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid) { }
 #endif
 
 /*
@@ -2545,6 +2541,8 @@ int generic_processor_info(int apicid, i
 	set_cpu_present(cpu, true);
 	num_processors++;
 
+	cpu_mark_primary_thread(cpu, apicid);
+
 	return cpu;
 }
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -102,6 +102,9 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
+/* CPUs which are the primary SMT threads */
+struct cpumask __cpu_primary_thread_mask __read_mostly;
+
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -294,15 +297,6 @@ static void notrace start_secondary(void
 }
 
 /**
- * topology_is_primary_thread - Check whether CPU is the primary SMT thread
- * @cpu:	CPU to check
- */
-bool topology_is_primary_thread(unsigned int cpu)
-{
-	return apic_id_is_primary_thread(per_cpu(x86_cpu_to_apicid, cpu));
-}
-
-/**
  * topology_smt_supported - Check whether SMT is supported by the CPUs
  */
 bool topology_smt_supported(void)
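
For reference, the masking logic in cpu_mark_primary_thread() above can
be exercised in plain C. This is a userspace sketch: fls_() is a
portable stand-in for the kernel's fls(), and is_primary_thread() mirrors
the test in the patch rather than being a kernel function.

```c
/* Portable stand-in for the kernel's fls(): position of the most
 * significant set bit, 1-based; 0 for x == 0. */
static int fls_(unsigned int x)
{
	int r = 0;

	while (x) {
		r++;
		x >>= 1;
	}
	return r;
}

/* Mirror of the patch's test: the SMT bits are the low APIC ID bits
 * covering the sibling count; a primary thread has all of them clear. */
static int is_primary_thread(unsigned int apicid, unsigned int smp_num_siblings)
{
	unsigned int mask = (1U << (fls_(smp_num_siblings) - 1)) - 1;

	return smp_num_siblings == 1 || !(apicid & mask);
}
```

With two siblings the mask is 0x1, so even APIC IDs are primary threads;
with a single sibling the mask is 0 and every CPU is primary.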



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521346.810025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAd-0008M0-Nf; Fri, 14 Apr 2023 23:49:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521346.810025; Fri, 14 Apr 2023 23:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAd-0008Lr-Gf; Fri, 14 Apr 2023 23:49:35 +0000
Received: by outflank-mailman (input) for mailman id 521346;
 Fri, 14 Apr 2023 23:49:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT64-0001Th-In
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:52 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 599e35f4-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:44:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 599e35f4-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232310.631170657@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515891;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=y6SldvuVXq/pw8ptav6ZzZhZk50vNE0+ALorGdATBvE=;
	b=j7ir4M38uyCqNd0WpGFhlMbHqqP22sIUkP6lTXCI4ZoMelPAhB0ZW3jUKq/qNg7dgtIPyr
	wAgHvssp340UT3vn/QZsaFN0BbxJz/50Bk2j/ViCY7pmbdjJltpv6UUhTwyesp5uV/hH5R
	rn30yImLh2MbCenzGtlPdqYodmJQsWXgthmyh3VDgg8IsWfSP8TgZqODKQUJwz+InU8in3
	FyxZW7Fp+CUcayGzDXYKM02cvf34xnDKNWTxsXl+QFRwELU5xV0mCQa2KjOMor4Q501Rct
	0HVatbGD4qEO2xAAV9X3ZovODF8RAMi5ZzvUu9hOjJPsLeNkmpDExwTMMJ6FBg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515891;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=y6SldvuVXq/pw8ptav6ZzZhZk50vNE0+ALorGdATBvE=;
	b=Qg+tSuukciCk/31JBTMbNipFvroQgicaNOn31ON18qthuCRxjFkOmjqqsu/6PGW7nvHreJ
	GQLRpem8ebEejbCw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 23/37] csky/smp: Switch to hotplug core state synchronization
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:50 +0200 (CEST)

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Guo Ren <guoren@kernel.org>
Cc: linux-csky@vger.kernel.org
---
 arch/csky/Kconfig           |    1 +
 arch/csky/include/asm/smp.h |    2 +-
 arch/csky/kernel/smp.c      |    8 ++------
 3 files changed, 4 insertions(+), 7 deletions(-)

--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -96,6 +96,7 @@ config CSKY
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select MAY_HAVE_SPARSE_IRQ
 	select MODULES_USE_ELF_RELA if MODULES
 	select OF
--- a/arch/csky/include/asm/smp.h
+++ b/arch/csky/include/asm/smp.h
@@ -23,7 +23,7 @@ void __init set_send_ipi(void (*func)(co
 
 int __cpu_disable(void);
 
-void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 
 #endif /* CONFIG_SMP */
 
--- a/arch/csky/kernel/smp.c
+++ b/arch/csky/kernel/smp.c
@@ -291,12 +291,8 @@ int __cpu_disable(void)
 	return 0;
 }
 
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: shutdown failed\n", cpu);
-		return;
-	}
 	pr_notice("CPU%u: shutdown\n", cpu);
 }
 
@@ -304,7 +300,7 @@ void arch_cpu_idle_dead(void)
 {
 	idle_task_exit();
 
-	cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	while (!secondary_stack)
 		arch_cpu_idle();



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521351.810036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAe-0008Uy-Gi; Fri, 14 Apr 2023 23:49:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521351.810036; Fri, 14 Apr 2023 23:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAe-0008T1-5G; Fri, 14 Apr 2023 23:49:36 +0000
Received: by outflank-mailman (input) for mailman id 521351;
 Fri, 14 Apr 2023 23:49:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6C-0001Th-NI
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:45:00 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5e954127-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:45:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e954127-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232310.941680232@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515899;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=PwaT1nVbkmNxpYn2jntunVyX300rI6JOSPoxKblUFxc=;
	b=J4d36wqGXDsa4D15fuC6bgECzfAAInQki6KA3SUyC98aD0yQW6gcNLFjQOCHcXv2J+orZG
	5Qmhd0e6dbHL514sP/COcIuWz15soNEsWHzEh1yBv2+hd5B8wqzYOidLR2BE5cayvWqW8Z
	HTkU8Nrl6+H70MJnTBAZDmHlfDfKJpBLY0P7kmzee0usDqsCKruEINfZUsjihRGa5dTOub
	nsf1V7hsU0pvXa2Ve2ehvBeS9w4NMv0lT4Qar9j26QhzX8zbcfe2cv28EW7JmGjaOXAvGg
	fSx2t8XhhVFut65VN9kClCBXCqftJpibwk/DAZDt6leoEUA5kSkPTRcDM8b5qA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515899;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=PwaT1nVbkmNxpYn2jntunVyX300rI6JOSPoxKblUFxc=;
	b=GT9ybsNnlf8a2CqrpgJaThMxA4ODF78eIoN2mtLu9TxUz4hUltbB6uZ6jFavUSEgOGT604
	TxVe7hxr4awf6HAw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Mark Rutland <mark.rutland@arm.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 28/37] cpu/hotplug: Reset task stack state in _cpu_up()
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:59 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

Commit dce1ca0525bf ("sched/scs: Reset task stack state in bringup_cpu()")
ensured that the shadow call stack and KASAN poisoning were removed from
a CPU's stack each time that CPU is brought up, not just once.

This is not incorrect. However, with parallel bringup the idle thread setup
will happen at a different step. As a consequence the cleanup in
bringup_cpu() would be too late.

Move the SCS/KASAN cleanup to the generic _cpu_up() function instead,
which already ensures that the new CPU's stack is available, purely to
allow for early failure. This occurs when the CPU to be brought up is in
the CPUHP_OFFLINE state, so the cleanup correctly runs any time the CPU
has been taken down far enough to require it.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
---
 kernel/cpu.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -769,12 +769,6 @@ static int bringup_cpu(unsigned int cpu)
 		return -EAGAIN;
 
 	/*
-	 * Reset stale stack state from the last time this CPU was online.
-	 */
-	scs_task_reset(idle);
-	kasan_unpoison_task_stack(idle);
-
-	/*
 	 * Some architectures have to walk the irq descriptors to
 	 * setup the vector space for the cpu which comes online.
 	 * Prevent irq alloc/free across the bringup.
@@ -1581,6 +1575,12 @@ static int _cpu_up(unsigned int cpu, int
 			ret = PTR_ERR(idle);
 			goto out;
 		}
+
+		/*
+		 * Reset stale stack state from the last time this CPU was online.
+		 */
+		scs_task_reset(idle);
+		kasan_unpoison_task_stack(idle);
 	}
 
 	cpuhp_tasks_frozen = tasks_frozen;
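
The ordering constraint can be illustrated with a toy model. This is
purely illustrative: none of these helpers are kernel functions, and
cpu_up_model() only records the call order the patch establishes, with
the stack-state reset immediately after the idle thread is fetched and
before any bringup step runs.

```c
static const char *order[8];
static int n;

static void record(const char *what) { order[n++] = what; }

static void fetch_idle_thread(void)  { record("fetch_idle"); }
static void reset_stack_state(void)  { record("scs_kasan_reset"); }
/* Under parallel bringup, per-step work may run before bringup_cpu(). */
static void early_bringup_step(void) { record("early_step"); }
static void bringup_cpu_step(void)   { record("bringup_cpu"); }

/* Model of _cpu_up() after the patch: the SCS/KASAN reset directly
 * follows idle-thread setup, so every later step sees a clean stack. */
static void cpu_up_model(void)
{
	fetch_idle_thread();
	reset_stack_state();
	early_bringup_step();
	bringup_cpu_step();
}
```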



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521347.810030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAe-0008RI-3H; Fri, 14 Apr 2023 23:49:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521347.810030; Fri, 14 Apr 2023 23:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAd-0008Qb-So; Fri, 14 Apr 2023 23:49:35 +0000
Received: by outflank-mailman (input) for mailman id 521347;
 Fri, 14 Apr 2023 23:49:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6N-0000zb-9T
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:45:11 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6448a52f-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:45:09 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6448a52f-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232311.316352541@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515909;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=2PTnDdSuwes/9NjD8Pm7hXlACOxFTN9vJCNjWOfgGXM=;
	b=Jh6ZM0Us9GGh285BMnb5fEm77rslh++J6j/QuRkHBnu0IJ3u2WPmnGe/3gdgkhs+Itn6YC
	EYm2JHM+HCoiKXud+mnLR3pzE3sn62xS6OTtxoSRYTybybUo7HLhqrcnKqV1RwxPAi5wc5
	OBp150tG8AHyoMw6+86zMyjFChURpnILiUkhTuUlpHCRtOIbCa3VMT62a8kcqkB8R+gKQ4
	U9tfXpB5IVu1MhpvFCVN68arP6yea+vniJ2RVD3n3fNQwQKszlDZIcpnpbeZf4M7nBLa8W
	ZFeGomntJTFI5LAn96fL2tniH8jRiNy2rM7ZxO27HO46gpoe/qq365pH/j/4pA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515909;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=2PTnDdSuwes/9NjD8Pm7hXlACOxFTN9vJCNjWOfgGXM=;
	b=JDEmTRQRnlWI2RxVIaPHzb6Y3x+BSeHE+/1j/fasD5aAeXT8ZfKeZ96xHuB6wXG5qxyGWZ
	eqmeypWBVFC7DyAA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 34/37] x86/cpu/amd: Invoke detect_extended_topology_early() on
 boot CPU
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:45:08 +0200 (CEST)

The early detection stores the extended topology leaf number which is
required for parallel hotplug.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/amd.c |    2 ++
 1 file changed, 2 insertions(+)

--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -692,6 +692,8 @@ static void early_init_amd(struct cpuinf
 		}
 	}
 
+	detect_extended_topology_early(c);
+
 	if (cpu_has(c, X86_FEATURE_TOPOEXT))
 		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
 }



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521356.810055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAh-0000yR-6M; Fri, 14 Apr 2023 23:49:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521356.810055; Fri, 14 Apr 2023 23:49:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAg-0000xI-Kl; Fri, 14 Apr 2023 23:49:38 +0000
Received: by outflank-mailman (input) for mailman id 521356;
 Fri, 14 Apr 2023 23:49:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5w-0001Th-NI
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:44 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 54bdfe16-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:44:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54bdfe16-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232310.319386819@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515883;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=7hAm6I9MiGtPmIXfa24B9PQ6b2IikPrY0L+IvEnJvPI=;
	b=jK/GS8nzEcoNMYUK0AKSW91HahqfcKmgmQ+JtmtDnur9HSlYdew/SITlGW5QELStSGh6Q/
	V57WOhEWSVSxWZWhG3SaBczDcMaraK+itB6zsiGdpLQtzoC7yxxSloqDZW6B4Njz8MWDbR
	miMPGsiCOdTRydFFdFmiS70GYeAIK8oBwYAUlay9JYTfVbSXJ63AXiImeMdnSL+M470Et5
	VfCBSEigMVHivkoKuqa+6Tu/41+RxN+9jOUZtcMtyT4RoQND7Pp1xwtWviW33NcuD2Qa/G
	AC+ABgPHvS+/XtBUfn+u5LoukNQ3JpVGWaLmycJWPrCSc7LATrT2CKreEiN1ng==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515883;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=7hAm6I9MiGtPmIXfa24B9PQ6b2IikPrY0L+IvEnJvPI=;
	b=IpKwz96GwSWUZ2KDbHlOo0nFprwPP3s9jahoNB/qpcC8qkXn9/Q9MaoBQo6UsIzMi7+Hea
	LODiZVcbnARULPAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 18/37] cpu/hotplug: Add CPU state tracking and synchronization
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:42 +0200 (CEST)

The CPU state tracking and synchronization mechanism in smpboot.c is
completely independent of the hotplug code and all logic around it is
implemented in architecture specific code.

Except for the state reporting of the AP there is absolutely nothing
architecture specific and the synchronization and decision functions can be
moved into the generic hotplug core code.

Provide an integrated variant and add the core synchronization and decision
points. This comes in two flavours:

  1) DEAD state synchronization

     Updated by the architecture code once the AP reaches the point where
     it is ready to be torn down by the control CPU, e.g. by removing power
     or clocks or tear down via the hypervisor.

     The control CPU waits for this state to be reached with a timeout. If
     the state is reached an architecture specific cleanup function is
     invoked.

  2) Full state synchronization

     This extends #1 with AP alive synchronization. This is new
     functionality, which allows architecture specific wait mechanisms,
     e.g. cpumasks, to be replaced completely.

     It also prevents an AP which is in a limbo state from being brought
     up again. This can happen when an AP failed to report dead state
     during a previous off-line operation.

The dead synchronization is what most architectures use. Only x86 makes a
bringup decision based on that state at the moment.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/Kconfig               |   15 +++
 include/linux/cpuhotplug.h |   12 ++
 kernel/cpu.c               |  193 ++++++++++++++++++++++++++++++++++++++++++++-
 kernel/smpboot.c           |    2 
 4 files changed, 221 insertions(+), 1 deletion(-)

--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -34,6 +34,21 @@ config ARCH_HAS_SUBPAGE_FAULTS
 config HOTPLUG_SMT
 	bool
 
+# Selected by HOTPLUG_CORE_SYNC_DEAD or HOTPLUG_CORE_SYNC_FULL
+config HOTPLUG_CORE_SYNC
+	bool
+
+# Basic CPU dead synchronization selected by architecture
+config HOTPLUG_CORE_SYNC_DEAD
+	bool
+	select HOTPLUG_CORE_SYNC
+
+# Full CPU synchronization with alive state selected by architecture
+config HOTPLUG_CORE_SYNC_FULL
+	bool
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
+	select HOTPLUG_CORE_SYNC
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -519,4 +519,16 @@ void cpuhp_online_idle(enum cpuhp_state
 static inline void cpuhp_online_idle(enum cpuhp_state state) { }
 #endif
 
+void cpuhp_ap_sync_alive(void);
+void arch_cpuhp_sync_state_poll(void);
+void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
+void cpuhp_ap_report_dead(void);
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu);
+#else
+static inline void cpuhp_ap_report_dead(void) { }
+static inline void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu) { }
+#endif
+
 #endif
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -17,6 +17,7 @@
 #include <linux/cpu.h>
 #include <linux/oom.h>
 #include <linux/rcupdate.h>
+#include <linux/delay.h>
 #include <linux/export.h>
 #include <linux/bug.h>
 #include <linux/kthread.h>
@@ -59,6 +60,7 @@
  * @last:	For multi-instance rollback, remember how far we got
  * @cb_state:	The state for a single callback (install/uninstall)
  * @result:	Result of the operation
+ * @ap_sync_state:	State for AP synchronization
  * @done_up:	Signal completion to the issuer of the task for cpu-up
  * @done_down:	Signal completion to the issuer of the task for cpu-down
  */
@@ -76,6 +78,7 @@ struct cpuhp_cpu_state {
 	struct hlist_node	*last;
 	enum cpuhp_state	cb_state;
 	int			result;
+	atomic_t		ap_sync_state;
 	struct completion	done_up;
 	struct completion	done_down;
 #endif
@@ -276,6 +279,182 @@ static bool cpuhp_is_atomic_state(enum c
 	return CPUHP_AP_IDLE_DEAD <= state && state < CPUHP_AP_ONLINE;
 }
 
+/* Synchronization state management */
+enum cpuhp_sync_state {
+	SYNC_STATE_DEAD,
+	SYNC_STATE_KICKED,
+	SYNC_STATE_SHOULD_DIE,
+	SYNC_STATE_ALIVE,
+	SYNC_STATE_SHOULD_ONLINE,
+	SYNC_STATE_ONLINE,
+};
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC
+/**
+ * cpuhp_ap_update_sync_state - Update synchronization state during bringup/teardown
+ * @state:	The synchronization state to set
+ *
+ * No synchronization point. Just update of the synchronization state.
+ */
+static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state)
+{
+	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
+	int sync = atomic_read(st);
+
+	while (!atomic_try_cmpxchg(st, &sync, state));
+}
+
+void __weak arch_cpuhp_sync_state_poll(void) { cpu_relax(); }
+
+static bool cpuhp_wait_for_sync_state(unsigned int cpu, enum cpuhp_sync_state state,
+				      enum cpuhp_sync_state next_state)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	ktime_t now, end, start = ktime_get();
+	int sync;
+
+	end = start + 10ULL * NSEC_PER_SEC;
+
+	sync = atomic_read(st);
+	while (1) {
+		if (sync == state) {
+			if (!atomic_try_cmpxchg(st, &sync, next_state))
+				continue;
+			return true;
+		}
+
+		now = ktime_get();
+		if (now > end) {
+			/* Timeout. Leave the state unchanged */
+			return false;
+		} else if (now - start < NSEC_PER_MSEC) {
+			/* Poll for one millisecond */
+			arch_cpuhp_sync_state_poll();
+		} else {
+			usleep_range_state(USEC_PER_MSEC, 2 * USEC_PER_MSEC, TASK_UNINTERRUPTIBLE);
+		}
+		sync = atomic_read(st);
+	}
+	return true;
+}
+#else  /* CONFIG_HOTPLUG_CORE_SYNC */
+static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state) { }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC */
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
+/**
+ * cpuhp_ap_report_dead - Update synchronization state to DEAD
+ *
+ * No synchronization point. Just update of the synchronization state.
+ */
+void cpuhp_ap_report_dead(void)
+{
+	cpuhp_ap_update_sync_state(SYNC_STATE_DEAD);
+}
+
+void __weak arch_cpuhp_cleanup_dead_cpu(unsigned int cpu) { }
+
+/*
+ * Late CPU shutdown synchronization point. Cannot use cpuhp_state::done_down
+ * because the AP cannot issue complete() at this stage.
+ */
+static void cpuhp_bp_sync_dead(unsigned int cpu)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	int sync = atomic_read(st);
+
+	do {
+		/* CPU can have reported dead already. Don't overwrite that! */
+		if (sync == SYNC_STATE_DEAD)
+			break;
+	} while (!atomic_try_cmpxchg(st, &sync, SYNC_STATE_SHOULD_DIE));
+
+	if (cpuhp_wait_for_sync_state(cpu, SYNC_STATE_DEAD, SYNC_STATE_DEAD)) {
+		/* CPU reached dead state. Invoke the cleanup function */
+		arch_cpuhp_cleanup_dead_cpu(cpu);
+		return;
+	}
+
+	/* No further action possible. Emit message and give up. */
+	pr_err("CPU%u failed to report dead state\n", cpu);
+}
+#else /* CONFIG_HOTPLUG_CORE_SYNC_DEAD */
+static inline void cpuhp_bp_sync_dead(unsigned int cpu) { }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC_DEAD */
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_FULL
+/**
+ * cpuhp_ap_sync_alive - Synchronize AP with the control CPU once it is alive
+ *
+ * Updates the AP synchronization state to SYNC_STATE_ALIVE and waits
+ * for the BP to release it.
+ */
+void cpuhp_ap_sync_alive(void)
+{
+	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
+
+	cpuhp_ap_update_sync_state(SYNC_STATE_ALIVE);
+
+	/* Wait for the control CPU to release it. */
+	while (atomic_read(st) != SYNC_STATE_SHOULD_ONLINE)
+		cpu_relax();
+}
+
+static bool cpuhp_can_boot_ap(unsigned int cpu)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	int sync = atomic_read(st);
+
+again:
+	switch (sync) {
+	case SYNC_STATE_DEAD:
+		/* CPU is properly dead */
+		break;
+	case SYNC_STATE_KICKED:
+		/* CPU did not come up in previous attempt */
+		break;
+	case SYNC_STATE_ALIVE:
+		/* CPU is stuck in cpuhp_ap_sync_alive(). */
+		break;
+	default:
+		/* CPU failed to report online or dead and is in limbo state. */
+		return false;
+	}
+
+	/* Prepare for booting */
+	if (!atomic_try_cmpxchg(st, &sync, SYNC_STATE_KICKED))
+		goto again;
+
+	return true;
+}
+
+void __weak arch_cpuhp_cleanup_kick_cpu(unsigned int cpu) { }
+
+/*
+ * Early CPU bringup synchronization point. Cannot use cpuhp_state::done_up
+ * because the AP cannot issue complete() so early in the bringup.
+ */
+static int cpuhp_bp_sync_alive(unsigned int cpu)
+{
+	int ret = 0;
+
+	if (!IS_ENABLED(CONFIG_HOTPLUG_CORE_SYNC_FULL))
+		return 0;
+
+	if (!cpuhp_wait_for_sync_state(cpu, SYNC_STATE_ALIVE, SYNC_STATE_SHOULD_ONLINE)) {
+		pr_err("CPU%u failed to report alive state\n", cpu);
+		ret = -EIO;
+	}
+
+	/* Let the architecture cleanup the kick alive mechanics. */
+	arch_cpuhp_cleanup_kick_cpu(cpu);
+	return ret;
+}
+#else /* CONFIG_HOTPLUG_CORE_SYNC_FULL */
+static inline int cpuhp_bp_sync_alive(unsigned int cpu) { return 0; }
+static inline bool cpuhp_can_boot_ap(unsigned int cpu) { return true; }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC_FULL */
+
 /* Serializes the updates to cpu_online_mask, cpu_present_mask */
 static DEFINE_MUTEX(cpu_add_remove_lock);
 bool cpuhp_tasks_frozen;
@@ -588,6 +767,9 @@ static int bringup_cpu(unsigned int cpu)
 	struct task_struct *idle = idle_thread_get(cpu);
 	int ret;
 
+	if (!cpuhp_can_boot_ap(cpu))
+		return -EAGAIN;
+
 	/*
 	 * Reset stale stack state from the last time this CPU was online.
 	 */
@@ -606,6 +788,10 @@ static int bringup_cpu(unsigned int cpu)
 	if (ret)
 		goto out_unlock;
 
+	ret = cpuhp_bp_sync_alive(cpu);
+	if (ret)
+		goto out_unlock;
+
 	ret = bringup_wait_for_ap_online(cpu);
 	if (ret)
 		goto out_unlock;
@@ -1109,6 +1295,8 @@ static int takedown_cpu(unsigned int cpu
 	/* This actually kills the CPU. */
 	__cpu_die(cpu);
 
+	cpuhp_bp_sync_dead(cpu);
+
 	tick_cleanup_dead_cpu(cpu);
 	rcutree_migrate_callbacks(cpu);
 	return 0;
@@ -1355,8 +1543,10 @@ void cpuhp_online_idle(enum cpuhp_state
 	if (state != CPUHP_AP_ONLINE_IDLE)
 		return;
 
+	cpuhp_ap_update_sync_state(SYNC_STATE_ONLINE);
+
 	/*
-	 * Unpart the stopper thread before we start the idle loop (and start
+	 * Unpark the stopper thread before we start the idle loop (and start
 	 * scheduling); this ensures the stopper task is always available.
 	 */
 	stop_machine_unpark(smp_processor_id());
@@ -2722,6 +2912,7 @@ void __init boot_cpu_hotplug_init(void)
 {
 #ifdef CONFIG_SMP
 	cpumask_set_cpu(smp_processor_id(), &cpus_booted_once_mask);
+	atomic_set(this_cpu_ptr(&cpuhp_state.ap_sync_state), SYNC_STATE_ONLINE);
 #endif
 	this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
 	this_cpu_write(cpuhp_state.target, CPUHP_ONLINE);
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -326,6 +326,7 @@ void smpboot_unregister_percpu_thread(st
 }
 EXPORT_SYMBOL_GPL(smpboot_unregister_percpu_thread);
 
+#ifndef CONFIG_HOTPLUG_CORE_SYNC
 static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
 
 /*
@@ -488,3 +489,4 @@ bool cpu_report_death(void)
 }
 
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC */



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521358.810060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAi-0001FV-2g; Fri, 14 Apr 2023 23:49:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521358.810060; Fri, 14 Apr 2023 23:49:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAh-0001BO-SR; Fri, 14 Apr 2023 23:49:39 +0000
Received: by outflank-mailman (input) for mailman id 521358;
 Fri, 14 Apr 2023 23:49:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT64-0000zb-7Z
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:52 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56a62801-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:44:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56a62801-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232310.444204883@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515886;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=OJmMK1DNsYZsN6fUmGwfeq4ERjaNgScwoVZRQqH8yUw=;
	b=GcFPtrg7k9VQxG3rMlumRMw5nuKtsJmdDMoJ6TSceXuK1zdHd+N9joDWuAZdo46YFNxZMP
	Bsg8W/+6nBYJRVuCztzAyHJQvPJSb23Tablj/SGjhsX/WUeCrmSA9nsuhw8G2JGX2NZ97t
	THjsQ+3MT5axiGFvQ/sUAHek0hFvHwgQKNG5VvKo8oZKr/4skgr04uS53dj+ol0BYWxhal
	5oIGYZdmtf3qGjw5V88nnPJqYy7bxEgNOIlxEv9+tW18PGEfux4IdshBI1Y1RLHv+TPNLz
	Ddqx3nRleBLOQcei7th1ojLHd0sg0KKVmzdvEaltZ8B7BxlHVYGVjXaqhkZyww==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515886;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=OJmMK1DNsYZsN6fUmGwfeq4ERjaNgScwoVZRQqH8yUw=;
	b=l2/PUI4LfABzfYrIQxeqJjxovlQLjpZzH9GEF/bgX07OlI0hCc/156VAzkEe/0KLuss+2I
	g5EEfqF03SuN2wDg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject:
 [patch 20/37] cpu/hotplug: Remove cpu_report_state() and related unused cruft
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:46 +0200 (CEST)

No more users.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/cpu.h |    2 -
 kernel/smpboot.c    |   90 ----------------------------------------------------
 2 files changed, 92 deletions(-)

--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -184,8 +184,6 @@ void arch_cpu_idle_enter(void);
 void arch_cpu_idle_exit(void);
 void arch_cpu_idle_dead(void);
 
-int cpu_report_state(int cpu);
-int cpu_check_up_prepare(int cpu);
 void cpu_set_state_online(int cpu);
 void play_idle_precise(u64 duration_ns, u64 latency_ns);
 
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -329,97 +329,7 @@ EXPORT_SYMBOL_GPL(smpboot_unregister_per
 #ifndef CONFIG_HOTPLUG_CORE_SYNC
 static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
 
-/*
- * Called to poll specified CPU's state, for example, when waiting for
- * a CPU to come online.
- */
-int cpu_report_state(int cpu)
-{
-	return atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-}
-
-/*
- * If CPU has died properly, set its state to CPU_UP_PREPARE and
- * return success.  Otherwise, return -EBUSY if the CPU died after
- * cpu_wait_death() timed out.  And yet otherwise again, return -EAGAIN
- * if cpu_wait_death() timed out and the CPU still hasn't gotten around
- * to dying.  In the latter two cases, the CPU might not be set up
- * properly, but it is up to the arch-specific code to decide.
- * Finally, -EIO indicates an unanticipated problem.
- *
- * Note that it is permissible to omit this call entirely, as is
- * done in architectures that do no CPU-hotplug error checking.
- */
-int cpu_check_up_prepare(int cpu)
-{
-	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);
-		return 0;
-	}
-
-	switch (atomic_read(&per_cpu(cpu_hotplug_state, cpu))) {
-
-	case CPU_POST_DEAD:
-
-		/* The CPU died properly, so just start it up again. */
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);
-		return 0;
-
-	case CPU_DEAD_FROZEN:
-
-		/*
-		 * Timeout during CPU death, so let caller know.
-		 * The outgoing CPU completed its processing, but after
-		 * cpu_wait_death() timed out and reported the error. The
-		 * caller is free to proceed, in which case the state
-		 * will be reset properly by cpu_set_state_online().
-		 * Proceeding despite this -EBUSY return makes sense
-		 * for systems where the outgoing CPUs take themselves
-		 * offline, with no post-death manipulation required from
-		 * a surviving CPU.
-		 */
-		return -EBUSY;
-
-	case CPU_BROKEN:
-
-		/*
-		 * The most likely reason we got here is that there was
-		 * a timeout during CPU death, and the outgoing CPU never
-		 * did complete its processing.  This could happen on
-		 * a virtualized system if the outgoing VCPU gets preempted
-		 * for more than five seconds, and the user attempts to
-		 * immediately online that same CPU.  Trying again later
-		 * might return -EBUSY above, hence -EAGAIN.
-		 */
-		return -EAGAIN;
-
-	case CPU_UP_PREPARE:
-		/*
-		 * Timeout while waiting for the CPU to show up. Allow to try
-		 * again later.
-		 */
-		return 0;
-
-	default:
-
-		/* Should not happen.  Famous last words. */
-		return -EIO;
-	}
-}
-
-/*
- * Mark the specified CPU online.
- *
- * Note that it is permissible to omit this call entirely, as is
- * done in architectures that do no CPU-hotplug error checking.
- */
-void cpu_set_state_online(int cpu)
-{
-	(void)atomic_xchg(&per_cpu(cpu_hotplug_state, cpu), CPU_ONLINE);
-}
-
 #ifdef CONFIG_HOTPLUG_CPU
-
 /*
  * Wait for the specified CPU to exit the idle loop and die.
  */



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521366.810075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAk-0001vY-Uk; Fri, 14 Apr 2023 23:49:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521366.810075; Fri, 14 Apr 2023 23:49:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAk-0001vH-KN; Fri, 14 Apr 2023 23:49:42 +0000
Received: by outflank-mailman (input) for mailman id 521366;
 Fri, 14 Apr 2023 23:49:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6I-0000zb-9F
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:45:06 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 605f8973-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:45:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 605f8973-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232311.066246849@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515902;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=cY/Vb0k8l53HgnPO915OdGCDzaj2v7X/LK/0bhlR+ZM=;
	b=Rl28BYLRw1xoqBY2XVna4334ensDR9FZzUXIXCLNtZ0ZnRftdKxlp0PMWMq1HaFMs/Nvoh
	XqGn6TtnTp761vrtUa5+ABOqgv49/+dDRkb1k96bmalthN4bjNP4h3EacgIX7TIRoFoQ1L
	UV+FOsaKzNfqWlzPpun3ecs+zhYGqo6qHGZzsNZ57SQvznqryxMS+L3SqK9W/dGy+VDKv9
	YxTV8knnunEjK8aSQH5isteVb77I0eCsLEqmlSmZcWsTyG9EdV1bj0iMexF7biZ5gNOCfj
	ROw3zMy677VniDryW+1xRSofiRytmHDazA6lDPsXJmr7uXM+XbDSfwEveQWQlQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515902;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=cY/Vb0k8l53HgnPO915OdGCDzaj2v7X/LK/0bhlR+ZM=;
	b=D1VjUKIFe77+cdNW6ebx9l5dhZNCMlm+6Ti8d+rxHqlUBIfv17CftGyUs1Xi5cbqbTIHp+
	wg0OL2V4Mj/Lh/Bw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: [patch 30/37] x86/smpboot: Enable split CPU startup
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:45:02 +0200 (CEST)

The x86 CPU bringup currently performs the AP wake-up, waits for the AP to
respond, and then releases it for full bringup, all as a single state.

This can safely be split into a wake-up state and a separate wait+release
state.

Provide the required functions and enable the split CPU bringup. This
prepares for parallel bringup, where the bringup of the non-boot CPUs takes
two iterations: one to prepare and wake all APs, and a second to wait for
them and release them. Depending on timing, this can eliminate the wait
time completely.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/Kconfig           |    2 +-
 arch/x86/include/asm/smp.h |    9 ++-------
 arch/x86/kernel/smp.c      |    2 +-
 arch/x86/kernel/smpboot.c  |    8 ++++----
 arch/x86/xen/smp_pv.c      |    4 ++--
 5 files changed, 10 insertions(+), 15 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -272,8 +272,8 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
-	select HOTPLUG_CORE_SYNC_FULL		if SMP
 	select HOTPLUG_SMT			if SMP
+	select HOTPLUG_SPLIT_STARTUP		if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -40,7 +40,7 @@ struct smp_ops {
 
 	void (*cleanup_dead_cpu)(unsigned cpu);
 	void (*poll_sync_state)(void);
-	int (*cpu_up)(unsigned cpu, struct task_struct *tidle);
+	int (*kick_ap_alive)(unsigned cpu, struct task_struct *tidle);
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
 	void (*play_dead)(void);
@@ -80,11 +80,6 @@ static inline void smp_cpus_done(unsigne
 	smp_ops.smp_cpus_done(max_cpus);
 }
 
-static inline int __cpu_up(unsigned int cpu, struct task_struct *tidle)
-{
-	return smp_ops.cpu_up(cpu, tidle);
-}
-
 static inline int __cpu_disable(void)
 {
 	return smp_ops.cpu_disable();
@@ -123,7 +118,7 @@ void native_smp_prepare_cpus(unsigned in
 void calculate_max_logical_packages(void);
 void native_smp_cpus_done(unsigned int max_cpus);
 int common_cpu_up(unsigned int cpunum, struct task_struct *tidle);
-int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
+int native_kick_ap(unsigned int cpu, struct task_struct *tidle);
 int native_cpu_disable(void);
 void hlt_play_dead(void);
 void native_play_dead(void);
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -268,7 +268,7 @@ struct smp_ops smp_ops = {
 #endif
 	.smp_send_reschedule	= native_smp_send_reschedule,
 
-	.cpu_up			= native_cpu_up,
+	.kick_ap_alive		= native_kick_ap,
 	.cpu_disable		= native_cpu_disable,
 	.play_dead		= native_play_dead,
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1070,7 +1070,7 @@ static int do_boot_cpu(int apicid, int c
 	return ret;
 }
 
-static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
+int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
 	int err;
@@ -1106,15 +1106,15 @@ static int native_kick_ap(unsigned int c
 	return err;
 }
 
-int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle)
 {
-	return native_kick_ap(cpu, tidle);
+	return smp_ops.kick_ap_alive(cpu, tidle);
 }
 
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu)
 {
 	/* Cleanup possible dangling ends... */
-	if (smp_ops.cpu_up == native_cpu_up && x86_platform.legacy.warm_reset)
+	if (smp_ops.kick_ap_alive == native_kick_ap && x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();
 }
 
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -314,7 +314,7 @@ cpu_initialize_context(unsigned int cpu,
 	return 0;
 }
 
-static int xen_pv_cpu_up(unsigned int cpu, struct task_struct *idle)
+static int xen_pv_kick_ap(unsigned int cpu, struct task_struct *idle)
 {
 	int rc;
 
@@ -438,7 +438,7 @@ static const struct smp_ops xen_smp_ops
 	.smp_prepare_cpus = xen_pv_smp_prepare_cpus,
 	.smp_cpus_done = xen_smp_cpus_done,
 
-	.cpu_up = xen_pv_cpu_up,
+	.kick_ap_alive = xen_pv_kick_ap,
 	.cpu_die = xen_pv_cpu_die,
 	.cleanup_dead_cpu = xen_pv_cleanup_dead_cpu,
 	.poll_sync_state = xen_pv_poll_sync_state,



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521367.810080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAl-00020F-Fv; Fri, 14 Apr 2023 23:49:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521367.810080; Fri, 14 Apr 2023 23:49:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAl-0001yy-3K; Fri, 14 Apr 2023 23:49:43 +0000
Received: by outflank-mailman (input) for mailman id 521367;
 Fri, 14 Apr 2023 23:49:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT5z-0001Th-Nr
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:44:47 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 55b151aa-db1e-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 01:44:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55b151aa-db1e-11ed-b21e-6b7b168915f2
Message-ID: <20230414232310.382005483@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515885;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=3+qlITNV53BaRfy3keVD52aWouI/gE/YQxgQoDSFHFQ=;
	b=d/JnboK/jkYT0GCssjVJ90Y/WenuKe1jZIeYgv20OJ8XmoGVYLF1ZTVBI388KIvGG81BdO
	s78WecNm45vveuEvwyWBqpgE+vTtwDzERMCqW1Wa+4ETTvB/cX6F944F3VGLD5QdOboktX
	oYzSxcLkz1ApMjeWZS/WTENuQtTLSkNhnQRy9RzQFKh/XLNn3yGYZts1NNhtJC5HjIGI5o
	GW7Dy1zD4MwOm3khrT+nIHASKJLyysL3C4C/JHrrPJSnMuLIm7wmYPRipyZTKCP/Pk5TvQ
	t4mSIMmTLfqg4c1OUMFcIfZVIVmu4xkWF38NJIgv/C8Mj3XL31D/6X50h3AvSA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515885;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=3+qlITNV53BaRfy3keVD52aWouI/gE/YQxgQoDSFHFQ=;
	b=yCWdMiSVNqst3Q7S9gg95ghiR9su0xx9UWqm5RPQMRYbxRMm2BYagQWZ6uYhvAgJKN0wb7
	iDVKrki61JxGxVBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject:
 [patch 19/37] x86/smpboot: Switch to hotplug core state synchronization
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:44:44 +0200 (CEST)

The new AP state tracking and synchronization mechanism in the CPU hotplug
core code allows the removal of quite a bit of x86-specific code:

  1) The AP alive synchronization based on cpumasks

  2) The decision whether an AP can be brought up again

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: xen-devel@lists.xenproject.org
---
 arch/x86/Kconfig           |    1 
 arch/x86/include/asm/smp.h |    7 +
 arch/x86/kernel/smp.c      |    1 
 arch/x86/kernel/smpboot.c  |  159 ++++++++++-----------------------------------
 arch/x86/xen/smp_hvm.c     |   16 +---
 arch/x86/xen/smp_pv.c      |   39 ++++++-----
 6 files changed, 72 insertions(+), 151 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -272,6 +272,7 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_CORE_SYNC_FULL		if SMP
 	select HOTPLUG_SMT			if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -38,6 +38,8 @@ struct smp_ops {
 	void (*crash_stop_other_cpus)(void);
 	void (*smp_send_reschedule)(int cpu);
 
+	void (*cleanup_dead_cpu)(unsigned cpu);
+	void (*poll_sync_state)(void);
 	int (*cpu_up)(unsigned cpu, struct task_struct *tidle);
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
@@ -90,7 +92,8 @@ static inline int __cpu_disable(void)
 
 static inline void __cpu_die(unsigned int cpu)
 {
-	smp_ops.cpu_die(cpu);
+	if (smp_ops.cpu_die)
+		smp_ops.cpu_die(cpu);
 }
 
 static inline void play_dead(void)
@@ -122,8 +125,6 @@ void native_smp_cpus_done(unsigned int m
 int common_cpu_up(unsigned int cpunum, struct task_struct *tidle);
 int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
 int native_cpu_disable(void);
-int common_cpu_die(unsigned int cpu);
-void native_cpu_die(unsigned int cpu);
 void hlt_play_dead(void);
 void native_play_dead(void);
 void play_dead_common(void);
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -269,7 +269,6 @@ struct smp_ops smp_ops = {
 	.smp_send_reschedule	= native_smp_send_reschedule,
 
 	.cpu_up			= native_cpu_up,
-	.cpu_die		= native_cpu_die,
 	.cpu_disable		= native_cpu_disable,
 	.play_dead		= native_play_dead,
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -57,6 +57,7 @@
 #include <linux/pgtable.h>
 #include <linux/overflow.h>
 #include <linux/stackprotector.h>
+#include <linux/cpuhotplug.h>
 
 #include <asm/acpi.h>
 #include <asm/cacheinfo.h>
@@ -101,9 +102,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
-/* All of these masks are initialized in setup_cpu_local_masks() */
-static cpumask_var_t cpu_initialized_mask;
-static cpumask_var_t cpu_callout_mask;
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -169,8 +167,8 @@ static void smp_callin(void)
 	int cpuid = smp_processor_id();
 
 	/*
-	 * If waken up by an INIT in an 82489DX configuration
-	 * cpu_callout_mask guarantees we don't get here before an
+	 * If waken up by an INIT in an 82489DX configuration the alive
+	 * synchronization guarantees we don't get here before an
 	 * INIT_deassert IPI reaches our local APIC, so it is now safe to
 	 * touch our local APIC.
 	 *
@@ -216,17 +214,6 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
-static void wait_for_master_cpu(int cpu)
-{
-	/*
-	 * Wait for release by control CPU before continuing with AP
-	 * initialization.
-	 */
-	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
-	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
-		cpu_relax();
-}
-
 /*
  * Activate a secondary processor.
  */
@@ -247,11 +234,10 @@ static void notrace start_secondary(void
 	cpu_init_exception_handling();
 
 	/*
-	 * Sync point with wait_cpu_initialized(). Sets AP in
-	 * cpu_initialized_mask and then waits for the control CPU
-	 * to release it.
+	 * Sync point with the hotplug core. Sets the sync state to ALIVE
+	 * and waits for the control CPU to release it.
 	 */
-	wait_for_master_cpu(raw_smp_processor_id());
+	cpuhp_ap_sync_alive();
 
 	cpu_init();
 	rcu_cpu_starting(raw_smp_processor_id());
@@ -285,7 +271,6 @@ static void notrace start_secondary(void
 	set_cpu_online(smp_processor_id(), true);
 	lapic_online();
 	unlock_vector_lock();
-	cpu_set_state_online(smp_processor_id());
 	x86_platform.nmi_init();
 
 	/* enable local interrupts */
@@ -736,9 +721,10 @@ static void impress_friends(void)
 	 * Allow the user to impress friends.
 	 */
 	pr_debug("Before bogomips\n");
-	for_each_possible_cpu(cpu)
-		if (cpumask_test_cpu(cpu, cpu_callout_mask))
+	for_each_possible_cpu(cpu) {
+		if (cpumask_test_cpu(cpu, cpu_online_mask))
 			bogosum += cpu_data(cpu).loops_per_jiffy;
+	}
 	pr_info("Total of %d processors activated (%lu.%02lu BogoMIPS)\n",
 		num_online_cpus(),
 		bogosum/(500000/HZ),
@@ -1010,6 +996,7 @@ int common_cpu_up(unsigned int cpu, stru
 static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
 	unsigned long start_ip = real_mode_header->trampoline_start;
+	int ret;
 
 #ifdef CONFIG_X86_64
 	/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
@@ -1050,13 +1037,6 @@ static int do_boot_cpu(int apicid, int c
 		}
 	}
 
-	/*
-	 * AP might wait on cpu_callout_mask in cpu_init() with
-	 * cpu_initialized_mask set if previous attempt to online
-	 * it timed-out. Clear cpu_initialized_mask so that after
-	 * INIT/SIPI it could start with a clean state.
-	 */
-	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	smp_mb();
 
 	/*
@@ -1067,47 +1047,16 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
-		return apic->wakeup_secondary_cpu_64(apicid, start_ip);
+		ret = apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
-		return apic->wakeup_secondary_cpu(apicid, start_ip);
-
-	return wakeup_secondary_cpu_via_init(apicid, start_ip);
-}
-
-static int wait_cpu_cpumask(unsigned int cpu, const struct cpumask *mask)
-{
-	unsigned long timeout;
-
-	/*
-	 * Wait up to 10s for the CPU to report in.
-	 */
-	timeout = jiffies + 10*HZ;
-	while (time_before(jiffies, timeout)) {
-		if (cpumask_test_cpu(cpu, mask))
-			return 0;
-
-		schedule();
-	}
-	return -1;
-}
-
-/*
- * Bringup step two: Wait for the target AP to reach cpu_init_secondary()
- * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
- * to proceed.  The AP will then proceed past setting its 'callin' bit
- * and end up waiting in check_tsc_sync_target() until we reach
- * wait_cpu_online() to tend to it.
- */
-static int wait_cpu_initialized(unsigned int cpu)
-{
-	/*
-	 * Wait for first sign of life from AP.
-	 */
-	if (wait_cpu_cpumask(cpu, cpu_initialized_mask))
-		return -1;
+		ret = apic->wakeup_secondary_cpu(apicid, start_ip);
+	else
+		ret = wakeup_secondary_cpu_via_init(apicid, start_ip);
 
-	cpumask_set_cpu(cpu, cpu_callout_mask);
-	return 0;
+	/* If the wakeup mechanism failed, cleanup the warm reset vector */
+	if (ret)
+		arch_cpuhp_cleanup_kick_cpu(cpu);
+	return ret;
 }
 
 static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
@@ -1132,11 +1081,6 @@ static int native_kick_ap(unsigned int c
 	 */
 	mtrr_save_state();
 
-	/* x86 CPUs take themselves offline, so delayed offline is OK. */
-	err = cpu_check_up_prepare(cpu);
-	if (err && err != -EBUSY)
-		return err;
-
 	/* the FPU context is blank, nobody can own it */
 	per_cpu(fpu_fpregs_owner_ctx, cpu) = NULL;
 
@@ -1153,17 +1097,29 @@ static int native_kick_ap(unsigned int c
 
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
-	int ret;
-
-	ret = native_kick_ap(cpu, tidle);
-	if (!ret)
-		ret = wait_cpu_initialized(cpu);
+	return native_kick_ap(cpu, tidle);
+}
 
+void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu)
+{
 	/* Cleanup possible dangling ends... */
-	if (x86_platform.legacy.warm_reset)
+	if (smp_ops.cpu_up == native_cpu_up && x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();
+}
 
-	return ret;
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
+	if (smp_ops.cleanup_dead_cpu)
+		smp_ops.cleanup_dead_cpu(cpu);
+
+	if (system_state == SYSTEM_RUNNING)
+		pr_info("CPU %u is now offline\n", cpu);
+}
+
+void arch_cpuhp_sync_state_poll(void)
+{
+	if (smp_ops.poll_sync_state)
+		smp_ops.poll_sync_state();
 }
 
 /**
@@ -1355,9 +1311,6 @@ void __init native_smp_prepare_boot_cpu(
 	if (!IS_ENABLED(CONFIG_SMP))
 		switch_gdt_and_percpu_base(me);
 
-	/* already set me in cpu_online_mask in boot_cpu_init() */
-	cpumask_set_cpu(me, cpu_callout_mask);
-	cpu_set_state_online(me);
 	native_pv_lock_init();
 }
 
@@ -1484,8 +1437,6 @@ early_param("possible_cpus", _setup_poss
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {
-	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callout_mask);
 	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
 }
 
@@ -1547,9 +1498,6 @@ static void remove_siblinginfo(int cpu)
 static void remove_cpu_from_maps(int cpu)
 {
 	set_cpu_online(cpu, false);
-	cpumask_clear_cpu(cpu, cpu_callout_mask);
-	/* was set by cpu_init() */
-	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	numa_remove_cpu(cpu);
 }
 
@@ -1600,36 +1548,11 @@ int native_cpu_disable(void)
 	return 0;
 }
 
-int common_cpu_die(unsigned int cpu)
-{
-	int ret = 0;
-
-	/* We don't do anything here: idle task is faking death itself. */
-
-	/* They ack this in play_dead() by setting CPU_DEAD */
-	if (cpu_wait_death(cpu, 5)) {
-		if (system_state == SYSTEM_RUNNING)
-			pr_info("CPU %u is now offline\n", cpu);
-	} else {
-		pr_err("CPU %u didn't die...\n", cpu);
-		ret = -1;
-	}
-
-	return ret;
-}
-
-void native_cpu_die(unsigned int cpu)
-{
-	common_cpu_die(cpu);
-}
-
 void play_dead_common(void)
 {
 	idle_task_exit();
 
-	/* Ack it */
-	(void)cpu_report_death();
-
+	cpuhp_ap_report_dead();
 	/*
 	 * With physical CPU hotplug, we should halt the cpu
 	 */
@@ -1731,12 +1654,6 @@ int native_cpu_disable(void)
 	return -ENOSYS;
 }
 
-void native_cpu_die(unsigned int cpu)
-{
-	/* We said "no" in __cpu_disable */
-	BUG();
-}
-
 void native_play_dead(void)
 {
 	BUG();
--- a/arch/x86/xen/smp_hvm.c
+++ b/arch/x86/xen/smp_hvm.c
@@ -55,18 +55,16 @@ static void __init xen_hvm_smp_prepare_c
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
-static void xen_hvm_cpu_die(unsigned int cpu)
+static void xen_hvm_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (common_cpu_die(cpu) == 0) {
-		if (xen_have_vector_callback) {
-			xen_smp_intr_free(cpu);
-			xen_uninit_lock_cpu(cpu);
-			xen_teardown_timer(cpu);
-		}
+	if (xen_have_vector_callback) {
+		xen_smp_intr_free(cpu);
+		xen_uninit_lock_cpu(cpu);
+		xen_teardown_timer(cpu);
 	}
 }
 #else
-static void xen_hvm_cpu_die(unsigned int cpu)
+static void xen_hvm_cleanup_dead_cpu(unsigned int cpu)
 {
 	BUG();
 }
@@ -77,7 +75,7 @@ void __init xen_hvm_smp_init(void)
 	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
 	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
 	smp_ops.smp_cpus_done = xen_smp_cpus_done;
-	smp_ops.cpu_die = xen_hvm_cpu_die;
+	smp_ops.cleanup_dead_cpu = xen_hvm_cleanup_dead_cpu;
 
 	if (!xen_have_vector_callback) {
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -62,6 +62,7 @@ static void cpu_bringup(void)
 	int cpu;
 
 	cr4_init();
+	cpuhp_ap_sync_alive();
 	cpu_init();
 	touch_softlockup_watchdog();
 
@@ -83,7 +84,7 @@ static void cpu_bringup(void)
 
 	set_cpu_online(cpu, true);
 
-	cpu_set_state_online(cpu);  /* Implies full memory barrier. */
+	smp_mb();
 
 	/* We can take interrupts now: we're officially "up". */
 	local_irq_enable();
@@ -323,14 +324,6 @@ static int xen_pv_cpu_up(unsigned int cp
 
 	xen_setup_runstate_info(cpu);
 
-	/*
-	 * PV VCPUs are always successfully taken down (see 'while' loop
-	 * in xen_cpu_die()), so -EBUSY is an error.
-	 */
-	rc = cpu_check_up_prepare(cpu);
-	if (rc)
-		return rc;
-
 	/* make sure interrupts start blocked */
 	per_cpu(xen_vcpu, cpu)->evtchn_upcall_mask = 1;
 
@@ -349,6 +342,11 @@ static int xen_pv_cpu_up(unsigned int cp
 	return 0;
 }
 
+static void xen_pv_poll_sync_state(void)
+{
+	HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
+}
+
 #ifdef CONFIG_HOTPLUG_CPU
 static int xen_pv_cpu_disable(void)
 {
@@ -364,18 +362,18 @@ static int xen_pv_cpu_disable(void)
 
 static void xen_pv_cpu_die(unsigned int cpu)
 {
-	while (HYPERVISOR_vcpu_op(VCPUOP_is_up,
-				  xen_vcpu_nr(cpu), NULL)) {
+	while (HYPERVISOR_vcpu_op(VCPUOP_is_up, xen_vcpu_nr(cpu), NULL)) {
 		__set_current_state(TASK_UNINTERRUPTIBLE);
 		schedule_timeout(HZ/10);
 	}
+}
 
-	if (common_cpu_die(cpu) == 0) {
-		xen_smp_intr_free(cpu);
-		xen_uninit_lock_cpu(cpu);
-		xen_teardown_timer(cpu);
-		xen_pmu_finish(cpu);
-	}
+static void xen_pv_cleanup_dead_cpu(unsigned int cpu)
+{
+	xen_smp_intr_free(cpu);
+	xen_uninit_lock_cpu(cpu);
+	xen_teardown_timer(cpu);
+	xen_pmu_finish(cpu);
 }
 
 static void __noreturn xen_pv_play_dead(void) /* used only with HOTPLUG_CPU */
@@ -397,6 +395,11 @@ static void xen_pv_cpu_die(unsigned int
 	BUG();
 }
 
+static void xen_pv_cleanup_dead_cpu(unsigned int cpu)
+{
+	BUG();
+}
+
 static void __noreturn xen_pv_play_dead(void)
 {
 	BUG();
@@ -437,6 +440,8 @@ static const struct smp_ops xen_smp_ops
 
 	.cpu_up = xen_pv_cpu_up,
 	.cpu_die = xen_pv_cpu_die,
+	.cleanup_dead_cpu = xen_pv_cleanup_dead_cpu,
+	.poll_sync_state = xen_pv_poll_sync_state,
 	.cpu_disable = xen_pv_cpu_disable,
 	.play_dead = xen_pv_play_dead,
 



From xen-devel-bounces@lists.xenproject.org Fri Apr 14 23:49:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Apr 2023 23:49:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521371.810089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAn-0002SG-73; Fri, 14 Apr 2023 23:49:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521371.810089; Fri, 14 Apr 2023 23:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnTAn-0002QR-0G; Fri, 14 Apr 2023 23:49:45 +0000
Received: by outflank-mailman (input) for mailman id 521371;
 Fri, 14 Apr 2023 23:49:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZCOw=AF=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnT6S-0000zb-CK
 for xen-devel@lists.xenproject.org; Fri, 14 Apr 2023 23:45:16 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 673e1bc9-db1e-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 01:45:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 673e1bc9-db1e-11ed-8611-37d641c3527e
Message-ID: <20230414232311.505152290@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681515914;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=nUZSh5064m2aKoxr+KQn5JPMu6mMKtsaGAC9NSZVNms=;
	b=tXdZ+JNXwmoFAj/D4wiLzLK3B9Tdjf372IqdJt5Tv46gv42xMEL/Nl0g5kdyiZHT7x9AOz
	mlFylzsIhnrXORGzmbDeoOd0v4PWWxn8r5JfaFvKB5y4Myd0UBCMyCyRl33Zi088uSB2Ir
	SvONUInWoGIX09mANT0wqHnMa8plhV/+A02oVVT5ueBVnLW3psXBgZVX1wGuoSRuwDa/iV
	WCxZeGiReQ8KfEt87/havCjKqbuogL4jZdqFqdulIebLR5I3xNrYIQ3tyIgS/ypjvwLm3u
	ZY0rsKukGMk7diouGwNhACQ22cLpXT7hOtDVbk+HLvlFHbrtTgmHuHYua/2ZDA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681515914;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=nUZSh5064m2aKoxr+KQn5JPMu6mMKtsaGAC9NSZVNms=;
	b=pkIQhDhWjPpGLy6aH5XoCk5G1NoyA94urqDixfl4xi4JgoGh52mUmdlWbRdLMez3lvqnCR
	cNwc4JXhAqWj08Aw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 "Arjan van de Veen" <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>
Subject: [patch 37/37] x86/smpboot: Allow parallel bringup for SEV-ES
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Sat, 15 Apr 2023 01:45:13 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

Enable parallel bringup for SEV-ES guests. The APs can't actually execute
the CPUID instruction directly during early startup, but they can make the
GHCB call directly instead, just as the #VC trap handler would do.

Thanks to Sabin for talking me through the way this works.

Suggested-by: Sabin Rapan <sabrapan@amazon.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Usama Arif <usama.arif@bytedance.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>

---
 arch/x86/include/asm/sev-common.h |    3 +++
 arch/x86/include/asm/smp.h        |    1 +
 arch/x86/kernel/head_64.S         |   30 ++++++++++++++++++++++++++++++
 arch/x86/kernel/smpboot.c         |   14 ++++++++++++--
 4 files changed, 46 insertions(+), 2 deletions(-)

--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -70,6 +70,7 @@
 	/* GHCBData[63:12] */				\
 	(((u64)(v) & GENMASK_ULL(63, 12)) >> 12)
 
+#ifndef __ASSEMBLY__
 /*
  * SNP Page State Change Operation
  *
@@ -161,6 +162,8 @@ struct snp_psc_desc {
 
 #define GHCB_RESP_CODE(v)		((v) & GHCB_MSR_INFO_MASK)
 
+#endif /* __ASSEMBLY__ */
+
 /*
  * Error codes related to GHCB input that can be communicated back to the guest
  * by setting the lower 32-bits of the GHCB SW_EXITINFO1 field to 2.
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -202,6 +202,7 @@ extern unsigned int smpboot_control;
 #define STARTUP_APICID_CPUID_1F 0x80000000
 #define STARTUP_APICID_CPUID_0B 0x40000000
 #define STARTUP_APICID_CPUID_01 0x20000000
+#define STARTUP_APICID_SEV_ES	0x10000000
 
 /* Top 8 bits are reserved for control */
 #define STARTUP_PARALLEL_MASK	0xFF000000
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -26,6 +26,7 @@
 #include <asm/nospec-branch.h>
 #include <asm/fixmap.h>
 #include <asm/smp.h>
+#include <asm/sev-common.h>
 
 /*
  * We are not able to switch in one step to the final KERNEL ADDRESS SPACE
@@ -243,9 +244,14 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	 * Bit 31	STARTUP_APICID_CPUID_1F flag (use CPUID 0x1f)
 	 * Bit 30	STARTUP_APICID_CPUID_0B flag (use CPUID 0x0b)
 	 * Bit 29	STARTUP_APICID_CPUID_01 flag (use CPUID 0x01)
+	 * Bit 28	STARTUP_APICID_SEV_ES flag (CPUID 0x0b via GHCB MSR)
 	 * Bit 0-23	CPU# if STARTUP_APICID_CPUID_xx flags are not set
 	 */
 	movl	smpboot_control(%rip), %ecx
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	testl	$STARTUP_APICID_SEV_ES, %ecx
+	jnz	.Luse_sev_cpuid_0b
+#endif
 	testl	$STARTUP_APICID_CPUID_1F, %ecx
 	jnz	.Luse_cpuid_1f
 	testl	$STARTUP_APICID_CPUID_0B, %ecx
@@ -262,6 +268,30 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	shr	$24, %edx
 	jmp	.Lsetup_AP
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+.Luse_sev_cpuid_0b:
+	/* Set the GHCB MSR to request CPUID 0x0B_EDX */
+	movl	$MSR_AMD64_SEV_ES_GHCB, %ecx
+	movl	$(GHCB_CPUID_REQ_EDX << 30) | GHCB_MSR_CPUID_REQ, %eax
+	movl	$0x0b, %edx
+	wrmsr
+
+	/* Perform GHCB MSR protocol */
+	rep; vmmcall		/* vmgexit */
+
+	/*
+	 * Get the result. After the RDMSR:
+	 *   EAX should be 0xc0000005 (the CPUID response code)
+	 *   EDX should hold the requested CPUID register value; since
+	 *   EDX is already the target register, no move is needed.
+	 */
+	rdmsr
+	andl	$GHCB_MSR_INFO_MASK, %eax
+	cmpl	$GHCB_MSR_CPUID_RESP, %eax
+	jne	1f
+	jmp	.Lsetup_AP
+#endif
+
 .Luse_cpuid_0b:
 	mov	$0x0B, %eax
 	xorl	%ecx, %ecx
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -86,6 +86,7 @@
 #include <asm/hw_irq.h>
 #include <asm/stackprotector.h>
 #include <asm/sev.h>
+#include <asm/coco.h>
 
 /* representing HT siblings of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map);
@@ -1266,8 +1267,16 @@ bool __init arch_cpuhp_init_parallel_bri
 
 	/* Encrypted guests require special CPUID handling. */
 	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
-		pr_info("Parallel CPU startup disabled due to guest state encryption\n");
-		return false;
+		switch (cc_get_vendor()) {
+		case CC_VENDOR_AMD:
+			ctrl = STARTUP_APICID_SEV_ES;
+			if (topology_extended_leaf == 0x0b)
+				goto setup;
+			fallthrough;
+		default:
+			pr_info("Parallel CPU startup disabled due to guest state encryption\n");
+			return false;
+		}
 	}
 
 	switch (topology_extended_leaf) {
@@ -1290,6 +1299,7 @@ bool __init arch_cpuhp_init_parallel_bri
 		return false;
 	}
 
+setup:
 	pr_debug("Parallel CPU startup enabled: 0x%08x\n", ctrl);
 	smpboot_control = ctrl;
 	return true;



From xen-devel-bounces@lists.xenproject.org Sat Apr 15 02:23:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Apr 2023 02:23:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521423.810105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnVZ5-000401-Dd; Sat, 15 Apr 2023 02:22:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521423.810105; Sat, 15 Apr 2023 02:22:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnVZ5-0003zt-7D; Sat, 15 Apr 2023 02:22:59 +0000
Received: by outflank-mailman (input) for mailman id 521423;
 Sat, 15 Apr 2023 02:22:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hXtF=AG=citrix.com=prvs=462c9d09f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pnVZ2-0003zn-Ny
 for xen-devel@lists.xenproject.org; Sat, 15 Apr 2023 02:22:57 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6b832a43-db34-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 04:22:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b832a43-db34-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681525372;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=QJ+3HwfGXzkynQSqiJEwjODBvnbypPPunqE44a4eZqg=;
  b=CxPzn36aSeSYE3Jhy7rnlQYNu535953G4exGqRZal1aOPbd41w1Cj9zM
   Q6GZSb/dAlON+mZP0j7zYN+uu7gCS0ooX4fh5MGYJQZcd4sD2d/xz8ioT
   ZfX8/bqvN6FnV/s5F1yRKnBSXb0ab+AmPx6ceBQLDPMYipIL8hQ6x2yfa
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 105999310
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,198,1677560400"; 
   d="scan'208";a="105999310"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: [PATCH] xen/livepatch: Fix .altinstructions safety checks
Date: Sat, 15 Apr 2023 03:22:29 +0100
Message-ID: <20230415022229.3475033-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The prior check has && vs || mixups, making it tautologically false and thus
providing no safety at all.  There are boundary errors too.

First start with a comment describing how the .altinstructions and
.altinstr_replacement sections interact, and perform suitable cross-checking.

Second, rewrite the alt_instr loop entirely from scratch.  Origin sites have
non-zero size, and must be fully contained within .text.  Any non-zero sized
replacements must be fully contained within .altinstr_replacement.

Fixes: f8a10174e8b1 ("xsplice: Add support for alternatives")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Ross Lagerwall <ross.lagerwall@citrix.com>

As a further observation, .altinstr_replacement shouldn't survive beyond its
use in apply_alternatives(), but the disp32 relative references (for x86 at
least) in alt_instr force .altinstr_replacement to be close to the payload
while being applied.
---
 xen/common/livepatch.c | 66 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 60 insertions(+), 6 deletions(-)

diff --git a/xen/common/livepatch.c b/xen/common/livepatch.c
index 784fbd92e913..020a9648d5ba 100644
--- a/xen/common/livepatch.c
+++ b/xen/common/livepatch.c
@@ -803,28 +803,82 @@ static int prepare_payload(struct payload *payload,
     if ( sec )
     {
 #ifdef CONFIG_HAS_ALTERNATIVE
+        /*
+         * As of April 2023, alternatives are formed of:
+         * - An .altinstructions section with an array of struct alt_instr's.
+         * - An .altinstr_replacement section containing instruction bytes.
+         *
+         * An individual alt_instr contains:
+         * - An orig reference, pointing into .text with a nonzero length
+         * - A repl reference, pointing into .altinstr_replacement
+         *
+         * It is legal to have zero-length replacements, meaning it is legal
+         * for the .altinstr_replacement section to be empty too.  An
+         * implementation detail means that a zero-length replacement's repl
+         * reference will be the start of the .altinstr_replacement section.
+         */
+        const struct livepatch_elf_sec *repl_sec;
         struct alt_instr *a, *start, *end;
 
         if ( !section_ok(elf, sec, sizeof(*a)) )
             return -EINVAL;
 
+        /* Tolerate an empty .altinstructions section... */
+        if ( sec->sec->sh_size == 0 )
+            goto alt_done;
+
+        /* ... but otherwise, there needs to be something to alter... */
+        if ( payload->text_size == 0 )
+        {
+            printk(XENLOG_ERR LIVEPATCH "%s Alternatives provided, but no .text\n",
+                   elf->name);
+            return -EINVAL;
+        }
+
+        /* ... and something to be altered to. */
+        repl_sec = livepatch_elf_sec_by_name(elf, ".altinstr_replacement");
+        if ( !repl_sec )
+        {
+            printk(XENLOG_ERR LIVEPATCH "%s .altinstructions provided, but no .altinstr_replacement\n",
+                   elf->name);
+            return -EINVAL;
+        }
+
         start = sec->load_addr;
         end = sec->load_addr + sec->sec->sh_size;
 
         for ( a = start; a < end; a++ )
         {
-            const void *instr = ALT_ORIG_PTR(a);
-            const void *replacement = ALT_REPL_PTR(a);
+            const void *orig = ALT_ORIG_PTR(a);
+            const void *repl = ALT_REPL_PTR(a);
+
+            /* orig must be fully within .text. */
+            if ( orig               < payload->text_addr ||
+                 a->orig_len        > payload->text_size ||
+                 orig + a->orig_len > payload->text_addr + payload->text_size )
+            {
+                printk(XENLOG_ERR LIVEPATCH
+                       "%s Alternative orig %p+%#x outside payload text %p+%#lx\n",
+                       elf->name, orig, a->orig_len, payload->text_addr, payload->text_size);
+                return -EINVAL;
+            }
 
-            if ( (instr < region->start && instr >= region->end) ||
-                 (replacement < region->start && replacement >= region->end) )
+            /*
+             * repl must be fully within .altinstr_replacement, even if they
+             * happen to both have zero length.
+             */
+            if ( repl               < repl_sec->load_addr ||
+                 a->repl_len        > repl_sec->sec->sh_size ||
+                 repl + a->repl_len > repl_sec->load_addr + repl_sec->sec->sh_size )
             {
-                printk(XENLOG_ERR LIVEPATCH "%s Alt patching outside payload: %p\n",
-                       elf->name, instr);
+                printk(XENLOG_ERR LIVEPATCH
+                       "%s Alternative repl %p+%#x outside .altinstr_replacement %p+%#lx\n",
+                       elf->name, repl, a->repl_len, repl_sec->load_addr, repl_sec->sec->sh_size);
                 return -EINVAL;
             }
         }
         apply_alternatives(start, end);
+    alt_done:;
 #else
         printk(XENLOG_ERR LIVEPATCH "%s: We don't support alternative patching\n",
                elf->name);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Sat Apr 15 02:55:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Apr 2023 02:55:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521427.810115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnW4G-0007RU-S1; Sat, 15 Apr 2023 02:55:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521427.810115; Sat, 15 Apr 2023 02:55:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnW4G-0007RN-MQ; Sat, 15 Apr 2023 02:55:12 +0000
Received: by outflank-mailman (input) for mailman id 521427;
 Sat, 15 Apr 2023 02:55:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnW4F-0007RA-Q4; Sat, 15 Apr 2023 02:55:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnW4F-00054c-Kf; Sat, 15 Apr 2023 02:55:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnW4F-0007IN-Cd; Sat, 15 Apr 2023 02:55:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pnW4F-00059q-C9; Sat, 15 Apr 2023 02:55:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FK8cCmDfmh1l3LhnUQjNXIJDF19GHLBnx4FDez1MPe8=; b=tVnYcpBiLwcIUNpwE+C6kSWeZo
	lPpKIWQfilBg2z2IN0zEL2ZeSMqvZQ6rLHbNDGjYsC/R1aKyMoCgOIAoYuHIK97sS8yFPMyemFSaa
	2b4qClgPSwYo3Ck6VOA34zsCBX9HjOrfKBhjuEKcQFLi7PiSedqObAgYt5FrwTjKzvf8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180264-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180264: trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-credit1:host-install(5):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7a934f4bd7d6f9da84c8812da3ba42ee10f5778e
X-Osstest-Versions-That:
    linux=44149752e9987a9eac5ad78e6d3a20934b5e018d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Apr 2023 02:55:11 +0000

flight 180264 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180264/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1     <job status>                 broken

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1   5 host-install(5)       broken starved in 180253
 test-armhf-armhf-xl-arndale   8 xen-boot                fail baseline untested
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180253
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-examine      8 reboot                  fail starved in 180253
 test-armhf-armhf-xl-multivcpu  8 xen-boot               fail starved in 180253
 test-armhf-armhf-xl-vhd       8 xen-boot                fail starved in 180253
 test-armhf-armhf-libvirt      8 xen-boot                fail starved in 180253
 test-armhf-armhf-xl-credit2   8 xen-boot                fail starved in 180253
 test-armhf-armhf-libvirt-qcow2  8 xen-boot              fail starved in 180253
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180253

version targeted for testing:
 linux                7a934f4bd7d6f9da84c8812da3ba42ee10f5778e
baseline version:
 linux                44149752e9987a9eac5ad78e6d3a20934b5e018d

Last test of basis   180253  2023-04-14 01:13:19 Z    1 days
Testing same since   180264  2023-04-14 18:10:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Cheng Xu <chengyou@linux.alibaba.com>
  Conor Dooley <conor.dooley@microchip.com>
  Erik Brakkee <erik@brakkee.org>
  Huang Rui <ray.huang@amd.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Leon Romanovsky <leon@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Maher Sanalla <msanalla@nvidia.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Zhang <markzhang@nvidia.com>
  Mathis Salmen <mathis.salmen@matsal.de>
  Mustafa Ismail <mustafa.ismail@intel.com>
  Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paul Menzel <pmenzel@molgen.mpg.de>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rui Salvaterra <rsalvaterra@gmail.com>
  Saravanan Vajravel <saravanan.vajravel@broadcom.com>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stefan Binding <sbinding@opensource.cirrus.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tatyana Nikolova <tatyana.e.nikolova@intel.com>
  Wyes Karny <wyes.karny@amd.com>
  Xu Biang <xubiang@hust.edu.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-credit1 broken
broken-step test-armhf-armhf-xl-credit1 host-install(5)

Not pushing.

(No revision log; it would be 768 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Apr 15 04:59:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Apr 2023 04:59:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521434.810124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnY0C-0002mk-Uv; Sat, 15 Apr 2023 04:59:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521434.810124; Sat, 15 Apr 2023 04:59:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnY0C-0002md-SA; Sat, 15 Apr 2023 04:59:08 +0000
Received: by outflank-mailman (input) for mailman id 521434;
 Sat, 15 Apr 2023 04:59:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rrIm=AG=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pnY0B-0002mX-Ib
 for xen-devel@lists.xenproject.org; Sat, 15 Apr 2023 04:59:07 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2073.outbound.protection.outlook.com [40.107.7.73])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3e3724dc-db4a-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 06:59:03 +0200 (CEST)
Received: from AS8P250CA0029.EURP250.PROD.OUTLOOK.COM (2603:10a6:20b:330::34)
 by PAVPR08MB9013.eurprd08.prod.outlook.com (2603:10a6:102:320::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Sat, 15 Apr
 2023 04:58:25 +0000
Received: from AM7EUR03FT041.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:330:cafe::af) by AS8P250CA0029.outlook.office365.com
 (2603:10a6:20b:330::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.36 via Frontend
 Transport; Sat, 15 Apr 2023 04:58:24 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT041.mail.protection.outlook.com (100.127.140.233) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.29 via Frontend Transport; Sat, 15 Apr 2023 04:58:24 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Sat, 15 Apr 2023 04:58:23 +0000
Received: from dc3e42969725.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C967FC9F-5404-4FA0-9D2B-05D2ECF1C896.1; 
 Sat, 15 Apr 2023 04:58:13 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id dc3e42969725.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 15 Apr 2023 04:58:13 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DU0PR08MB8561.eurprd08.prod.outlook.com (2603:10a6:10:406::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.28; Sat, 15 Apr
 2023 04:58:10 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Sat, 15 Apr 2023
 04:58:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e3724dc-db4a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KrA8IWKKeeFG1W6ZwODIRDwRgiC2u6YdYMqxGQUzm+o=;
 b=WvOAY2AT389Io0Y6Q6dPdDw9PO3iTpM1ToxUBgWFRZUu1kyHN6lnyeKmr7cNGF4LDHXiy2RUXLZ4AU0gEjefE8sJBMpM3cmyA8jsSBaFeWjXummJ2GreWLB2Yx9PFExsnEj3PvQLAGeHxHKVP7cmSqj/GPvOvpdBQlZq2VTiYZw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=g4a3+jcFMIgHiG6inAn23eq7VZsLQrTlxxRrVXbGo4ggRBdMCpEQodGQozjdfiSn9DsDQ7WOhg/eTaM6jyk1V1CjE74JinKwpIgvipIY2cEm6IxJUwrZl6HpHjs1+7gLZQlwy8Oc9hJZ5fLILLMzrsUks7ofhUXUttA1GU0j9eVMUn1fcsTABV3wRx9NowgVnFcDbfyF55WGh7akwNJsGwjgSYS8iVndodf1tH3e0blp8OqpmUEWkQnWMs7y6iCke4GQVWmG2WTtRnHO/Uw8eeNn52SDp+4XR+1+EIXfk/iRSJ+CJFHHn9QWjl8jNRa8DMPGgFYAO0YdiLtVHE0KCQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KrA8IWKKeeFG1W6ZwODIRDwRgiC2u6YdYMqxGQUzm+o=;
 b=PoMstpCu/sgGcSt0zb2x8VJED3l9I3I9zt6BcfPZUI7peaHsAN7EYG6JzptDQwZctBHT2Ul3DWn9qQple7549qfWvv/op1as+fUJWBnbeFraEjTV9g6hcHQbVVxHR8SG6pCrkjzx1KIakrViIC4DN8HBMdgpE2YTZWdsAR8YiWPl1iPSqDIPNN1Oz4JTK4zcaPTEzo06mYQm3SjDFVIJ5MfGfTxGvqYTZV85v32OC9ZajMojSlclkw4YiGIag/lbIv+M8TsRUOJOz4oSMDTS3CX42Bh5ojHCvV8MejhnrPmu5ccb3pifru9Aa/xe+8k3/jes3SCSY2F4kWRg8EH5zw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KrA8IWKKeeFG1W6ZwODIRDwRgiC2u6YdYMqxGQUzm+o=;
 b=WvOAY2AT389Io0Y6Q6dPdDw9PO3iTpM1ToxUBgWFRZUu1kyHN6lnyeKmr7cNGF4LDHXiy2RUXLZ4AU0gEjefE8sJBMpM3cmyA8jsSBaFeWjXummJ2GreWLB2Yx9PFExsnEj3PvQLAGeHxHKVP7cmSqj/GPvOvpdBQlZq2VTiYZw=
From: Henry Wang <Henry.Wang@arm.com>
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>
Subject: RE: [PATCH 1/3] xen/arm: mark __guest_cmpxchg always_inline
Thread-Topic: [PATCH 1/3] xen/arm: mark __guest_cmpxchg always_inline
Thread-Index: AQHZbwMWeR0W8wxF2kqEL5lhRIkFbK8rm8pA
Date: Sat, 15 Apr 2023 04:58:10 +0000
Message-ID:
 <AS8PR08MB7991C281B7B33DB58BB1C130929E9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
 <20230414185714.292881-2-stewart.hildebrand@amd.com>
In-Reply-To: <20230414185714.292881-2-stewart.hildebrand@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 68CADEA0DBE95F40B02926A97DCB8709.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DU0PR08MB8561:EE_|AM7EUR03FT041:EE_|PAVPR08MB9013:EE_
X-MS-Office365-Filtering-Correlation-Id: 0a172917-0d1d-4062-7c39-08db3d6e0adc
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Bf77KFEU5xIfyKkQXnB75sRY9w0y466ImvIDqLcbCu/Bv++ZbLIKoiuGFvfBg3yArxXI4KYZOi+IGPF/I/uxQsU1NZpDtjwZkrg3X58SfnSFiXfrxIEsh5zyuha5jFacGvSephY23H7egJKgVkqzaKEFNzPq7UkRscaGF9qBXgzrQnTAuiQJHlUOWrq8ynTBnmmTwNcGI0pChB0zN0puw9dQyChPXYlohMivcG7P3pb2vBCN+1s3j8sjGAUQbxAZBjbSVDIDdNXL6xJk0L48isSfFJJb0ft0jrrJBKpzJFTWxw1ztMh93CMm5j2UkhSWwwqCYc6ALlZxEKJcSNEs1V8LXTztghgz6e0XPKMIGz9XWFlsaqeNhDTm20gcuKmP50WaTEVYxkNR+hgGEVLwiBQsSJz4kQCYZxXmW+V+n8mCL93DoV9vMMYZ8oMkMnEhyD+ziiU6EHZwTB7rNT3WU5f2kXYbhCTAAizBhcmF9Ukns8RJ283igFaCEl9LLZbm6DHhgxKHPpZdrziWIOHHqmyGKHcPd5EW6kArnx5s65HdfMZSxhI5jYiy6BDjhrzd3lIyyDIfXxyoycvS4eeGT7NlfEUuQlPM0b3gec4U2M4=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(39860400002)(136003)(366004)(346002)(451199021)(54906003)(478600001)(55016003)(83380400001)(7696005)(71200400001)(6506007)(9686003)(26005)(66946007)(66446008)(66556008)(76116006)(66476007)(316002)(64756008)(4326008)(110136005)(186003)(5660300002)(52536014)(8676002)(8936002)(38100700002)(2906002)(122000001)(4744005)(38070700005)(41300700001)(86362001)(33656002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB8561
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1899a41a-b261-4853-8d4c-08db3d6e029a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uHNtbOtWbnOrdkFRUL/AmeF+1tf1lotI/4DvYK5QV8tOroZDETIBIxqVI1sLhupSLo8ihoiqUvTCacRLQPlXhwLpOfNf7haJf9i50awwKV2UA5kuBO/NWhJ4npnoARgo+XvdgMfvEu5qI5woHIcbDaqbt3l+DTluWOGKbPAX1MkYdVJRQM8SA1tByD8nrdG68rRvBCCaLNKpPc1AMhJgIkaURhcq4gmiwymFgm5nCjy8ZLqAxbnwt1vmK3X2x7gHoKkamSO8HtnAFeGe+rts0wZgsg1wlT61qANAJ45/Eh/+SWbFqofIVi/kd67IFwfw4RasoIi8LjibgUxoEwBGiZIb2EXLZ19UxxkkcXHFmDa+eB0arDCBwLP+GUmxCL3EkzP2NHkoGQSXEGLfSHh1aPg2cowh2V1Nq/dmkoABwxmsZ8TnNi0jCuT631R+Q4e0gQhIU8wVsSu7es40yLShXzhFLnrUWCdlb0we1WtQrkLZhp6Htq9Nualywcixl0Tg3H2Yt88oWSasepSOj164Q0yBbS9nacYENrM9+9EslVm+VbqOHqxf4eyvyTut1JFKZimW8yPI3FvlDjwOhAyod9HdLhMn1sabeERTo/SbE5//TmnIWIEIpP8qvOcqa0wIrhlQg1tU78L1UG+B6iXzO3Q62mY+FPtRrplakhdNrCSLG4BAys0os7p1SfpDXsQWv7ubCNGCSBo7VBdmWbY+wA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(396003)(376002)(39860400002)(451199021)(46966006)(40470700004)(36840700001)(7696005)(41300700001)(478600001)(107886003)(81166007)(26005)(82740400003)(110136005)(54906003)(33656002)(356005)(36860700001)(6506007)(40460700003)(9686003)(82310400005)(8936002)(4326008)(336012)(52536014)(83380400001)(8676002)(47076005)(55016003)(40480700001)(70586007)(2906002)(70206006)(5660300002)(86362001)(186003)(316002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Apr 2023 04:58:24.0507
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0a172917-0d1d-4062-7c39-08db3d6e0adc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9013

Hi Stewart,

> -----Original Message-----
> Subject: [PATCH 1/3] xen/arm: mark __guest_cmpxchg always_inline
>
> When building the hypervisor with -Og, we run into a __bad_cmpxchg link
> error:
>
> aarch64-none-linux-gnu-ld: prelink.o: in function `__int_cmpxchg':
> .../xen/./arch/arm/include/asm/arm64/cmpxchg.h:117: undefined reference
> to `__bad_cmpxchg'
> aarch64-none-linux-gnu-
> ld: .../xen/./arch/arm/include/asm/arm64/cmpxchg.h:117: undefined
> reference to `__bad_cmpxchg'
> aarch64-none-linux-gnu-ld: ./.xen-syms.0: hidden symbol `__bad_cmpxchg'
> isn't defined
> aarch64-none-linux-gnu-ld: final link failed: bad value
>
> This is due to the function __guest_cmpxchg not being inlined in the -Og build
> with gcc 12. Fix this by marking __guest_cmpxchg always_inline.
>
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com

Hmmm, I think you missed the ">" at the end of your sign-off... But anyway, the
patch looks good to me, so:

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
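For readers unfamiliar with the idiom being discussed: in Xen, __bad_cmpxchg is declared but deliberately never defined, so any call to it that the optimizer fails to delete becomes a link-time error. That is exactly what happened here: at -Og, gcc 12 did not inline __guest_cmpxchg, so the size switch was not folded against a constant and the dead call survived into the final link. A minimal sketch of the pattern, with illustrative names (demo_cmpxchg, demo_bad_cmpxchg) that are not Xen's actual code; unlike Xen, this sketch defines a stub for the "bad" function so it links even without optimization:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* In the real pattern this symbol has NO definition, so any surviving
 * reference fails at link time. The stub below exists only so this
 * sketch links at -O0. */
static unsigned long demo_bad_cmpxchg(volatile void *ptr, int size)
{
    (void)ptr;
    fprintf(stderr, "unsupported cmpxchg size %d\n", size);
    return 0;
}

/* always_inline ensures the switch is folded against a constant `size`,
 * so the unsupported-size branch (and its call to demo_bad_cmpxchg)
 * disappears. If the function is NOT inlined, the call survives, and in
 * Xen's version that surviving call is a link error. */
static inline __attribute__((always_inline))
unsigned long demo_cmpxchg(volatile void *ptr, unsigned long old,
                           unsigned long new, int size)
{
    switch (size) {
    case 4: {
        uint32_t o = (uint32_t)old;
        __atomic_compare_exchange_n((volatile uint32_t *)ptr, &o,
                                    (uint32_t)new, false,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        return o; /* old value: unchanged on success, current on failure */
    }
    default:
        return demo_bad_cmpxchg(ptr, size);
    }
}
```

The design point is that the error is caught at build time rather than at run time: an unsupported operand size can never ship, because the binary simply does not link.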


From xen-devel-bounces@lists.xenproject.org Sat Apr 15 05:00:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Apr 2023 05:00:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521438.810135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnY1o-0004T3-9h; Sat, 15 Apr 2023 05:00:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521438.810135; Sat, 15 Apr 2023 05:00:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnY1o-0004Sw-6t; Sat, 15 Apr 2023 05:00:48 +0000
Received: by outflank-mailman (input) for mailman id 521438;
 Sat, 15 Apr 2023 05:00:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rrIm=AG=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pnY1n-0004Sl-96
 for xen-devel@lists.xenproject.org; Sat, 15 Apr 2023 05:00:47 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2062e.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7b0ec4dd-db4a-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 07:00:46 +0200 (CEST)
Received: from AM5P194CA0012.EURP194.PROD.OUTLOOK.COM (2603:10a6:203:8f::22)
 by GV1PR08MB8618.eurprd08.prod.outlook.com (2603:10a6:150:82::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Sat, 15 Apr
 2023 05:00:33 +0000
Received: from AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:8f:cafe::49) by AM5P194CA0012.outlook.office365.com
 (2603:10a6:203:8f::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.36 via Frontend
 Transport; Sat, 15 Apr 2023 05:00:33 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT025.mail.protection.outlook.com (100.127.140.199) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.31 via Frontend Transport; Sat, 15 Apr 2023 05:00:33 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Sat, 15 Apr 2023 05:00:32 +0000
Received: from 9506945cc5da.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 399F7453-B466-471A-A2F8-A96EABF796A7.1; 
 Sat, 15 Apr 2023 05:00:31 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9506945cc5da.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 15 Apr 2023 05:00:31 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by GV1PR08MB7707.eurprd08.prod.outlook.com (2603:10a6:150:52::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Sat, 15 Apr
 2023 05:00:21 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Sat, 15 Apr 2023
 05:00:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b0ec4dd-db4a-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rsxox0ELCpNEOYk4he/tfF4VeGmY1BM8iiwIZeOxqKE=;
 b=IGwo/iU9dQT0eZG8M0Wz0XWoCbSKWIPx9vNPc5NSb2DMxfF+65/VMX7Cmk9srGMYtqksYRwVvdGJBwPBV1l9PnE3PY2Qa7iE0m4dvjxZ8lHTUM3sqo8p29fr8REZTC6EHxIFi0CySnKbP/mH6XOtzE107shizdKcJObOrkaC9KE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QTMJFwAqGbE6eyzWDb+wr+ULXFPqc73gXwGKttLdgpdsx35shMLAAoPjB4pVAodfiibL8HF5uV3LWd88LREPZXaSFbHcdjlXeFKNIq0CpB2IB4Ie3tgQJ1IoCWcr0PcI90ERa6haQ2a7OFWDzupAvkQtDJ145zzew2tuaqQkj/8xg8ZprlSbJ5rPBF9i2+I/TPqEHG6Fqe/GJlAmFvALDR4XH6+5o8/QwR1z+uVbmdjfU5f2+DPFT9VfhZb1AhzfYApXzh2H7tsLUNqjkpuSntHd96d4988ckKU5vkVr8ALeUizhXXLxKqaOIoJ4nqA2O46KkEpDRdLOvj2OEwnn9Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Rsxox0ELCpNEOYk4he/tfF4VeGmY1BM8iiwIZeOxqKE=;
 b=byu/r7szMOOZUXqgMnneGBUiK2vQ9aio0E7h4pB851A/kYkqSiQxoKbc7HyMyfGZ8BIgBlPTgxBUMAp+Lx63kT+BcrJ4VBiVvemFCUZsbIx1d4i9kFpNFFclcTIwTVwi7z2pWGzVaChsUL65iLUDeLsQE1H3+vQOLmOJlPXGqKjAw9QZRodckhNg4CViStfRAWo9JkFphP145NP5SL2tv+MGye5Yc+DmwU79YEYCHwaiVQZJ3vLdB/6Ysb9pJV4dBD6QTw/oEptD/Ab9BuK6FXNHo9TEBCnsBtlk7moWeOVzvV9C5HxGK7k9/bbu5XbRAyKarBOXsVQ90XAS5kC6Zg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rsxox0ELCpNEOYk4he/tfF4VeGmY1BM8iiwIZeOxqKE=;
 b=IGwo/iU9dQT0eZG8M0Wz0XWoCbSKWIPx9vNPc5NSb2DMxfF+65/VMX7Cmk9srGMYtqksYRwVvdGJBwPBV1l9PnE3PY2Qa7iE0m4dvjxZ8lHTUM3sqo8p29fr8REZTC6EHxIFi0CySnKbP/mH6XOtzE107shizdKcJObOrkaC9KE=
From: Henry Wang <Henry.Wang@arm.com>
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 3/3] xen/arm: fix unitialized use warning
Thread-Topic: [PATCH 3/3] xen/arm: fix unitialized use warning
Thread-Index: AQHZbwMRTii87254YUmjLql73V41Gq8rnEuA
Date: Sat, 15 Apr 2023 05:00:21 +0000
Message-ID:
 <AS8PR08MB7991CCB0440E0B7F6D074C93929E9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
 <20230414185714.292881-4-stewart.hildebrand@amd.com>
In-Reply-To: <20230414185714.292881-4-stewart.hildebrand@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: E52D9A63ED09EC4A94CA29B101DE4A2E.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|GV1PR08MB7707:EE_|AM7EUR03FT025:EE_|GV1PR08MB8618:EE_
X-MS-Office365-Filtering-Correlation-Id: d9eee0af-6d71-4e1e-45b3-08db3d6e57be
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 dukRCfPNJGFxQd7AgqzS3+YrgUDeaJ165m396hj4i9IRynuWbgqdpUg/PTT7SmNQzEzKbMFwCUFGawcTUxbXWnbbuKzAmdLx69lFWgH0GrtIxAYp2nD82zVAzXsLEYrWKR15euWc9V8B0D+sIYV3xMH0SjL9l0lYE3Y4i8mlWzPXOMvnJQ8RmgRqJF50NKo88QUoQiPGxXG/1Y2llvrU6T4w8i81PA36SVcATr1UxFCVmcQhoTwzOAh0PEnlydx05WS3kyP/nPtJPa0oSx962LCfOqQ5tzfXgKxp/TNtMIcGHF6q55wS8D5XkP1N+yJpIqg2YViO+cc33fpm6rf6i2sDDddfwnymG6swTkPBXDu9dhZ3uPfIPnYwPP21NwalyYdbizqrnI0j3iA1NU+hKTND+l2IO0I9IKiQI7L/AwSuNJWPSGrvRZjhSaNB+yEYXuo96Y/Mfg1vvlWd1BJDb5qJx0Lx/gC8PK4p5I6V4FOz3VsY635vXu0JIaMnHlttMHaDNIAzB71RphNB1DcQxTf83a9AVIE34Kz3UWiv1YwnR98g4xLjZsADQA35W5jQd5Xtu7WwTKHbnr2TpH/1wvrFfJb8d6gLbTgDjDmQJPNBxKzeqGUD28IV2LrqvgnU
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(39860400002)(396003)(346002)(376002)(136003)(451199021)(478600001)(110136005)(316002)(54906003)(66556008)(66476007)(66946007)(8676002)(64756008)(66446008)(76116006)(55016003)(83380400001)(33656002)(186003)(9686003)(38070700005)(2906002)(122000001)(4744005)(7696005)(4326008)(71200400001)(41300700001)(86362001)(26005)(5660300002)(8936002)(38100700002)(52536014)(6506007);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7707
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a1fd1b13-0c8e-425a-59b8-08db3d6e510c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	T3lvOOqLEt04U9wuBqfRFmnr2ELd2g/JrNjAkkrhh1/rux/KoOlLA/Nkql2IbmBKZ74/9N1tfpGwjM8+4ThvJhYHTCKe8QgD4BoDAcoh28d3WRu/Lt6wQuLJEH1MpjtNF58WSeLrTwMhV9cKxyPGQXduA04z3KxQvvZuhkIfU73CwpRACKsgbBz/epcU3pLxOQC/AbA5rlcLG55o+ROTKsBDTu4FLOCsO4gWN0o9XsEoCbDbHE2bzKO6HTvVwHW2jBJMSTxVjbqwsvLSCISium2S8Ca20nvLxnsWHFpPJ1GqlxC2ZZpYx1JkUq8jIo/PvOlsaMYm2PKUIhpSMNsS04WvVqlIo2kUrcvjA91dnUxsTO5vKyIbGe8C/w2ru2HS/If0r/ZPowHTpouC+o2+qbV/IUXnbQPzgvZK1J6LDzJ8EQYtNvAzaiOh693lxZVyvWbbEi8fpqTH0JSwEdcYTQ/1usHFDoQ9I5Dbuc9REZuY2WNG3ADFyPcR3aCXuqf+ZeQ1gb0D2fwi5A4j8B9cJwudaww39HZO4OUSLqGYX6uzHLNI1ZkYAyF+dWradhVTWOWQ/GqOtj5h4nUJtlvwz/npqydPs+lMbH3mLmOreW7sh1S0U/H/FOzLc1Y8RjmIAKrgG8i9AfVnkMmjvfK2Um5oImL90Sk1VHVfHqnDdtoZbNPtGvYZwcch01K3vnT8wPPaujRfU3/R8cH7H1q0jQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(136003)(376002)(39860400002)(451199021)(36840700001)(40470700004)(46966006)(5660300002)(4744005)(7696005)(40460700003)(2906002)(70586007)(4326008)(70206006)(86362001)(81166007)(82310400005)(41300700001)(33656002)(478600001)(356005)(52536014)(316002)(8936002)(82740400003)(8676002)(55016003)(40480700001)(54906003)(9686003)(110136005)(6506007)(26005)(336012)(36860700001)(107886003)(186003)(47076005)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Apr 2023 05:00:33.0216
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d9eee0af-6d71-4e1e-45b3-08db3d6e57be
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8618

Hi Stewart,

> -----Original Message-----
> Subject: [PATCH 3/3] xen/arm: fix unitialized use warning
>
> When building the hypervisor with -Og, we encounter the following error:
>
> arch/arm/domain_build.c: In function ‘make_cpus_node’:
> arch/arm/domain_build.c:2040:12: error: ‘clock_valid’ may be used
> uninitialized [-Werror=maybe-uninitialized]
>  2040 |         if ( clock_valid )
>       |            ^
> arch/arm/domain_build.c:1947:10: note: ‘clock_valid’ was declared here
>  1947 |     bool clock_valid;
>       |          ^~~~~~~~~~~
> cc1: all warnings being treated as errors
>
> Fix it by initializing the variable.
>
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
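The warning under review arises because, at -Og, gcc keeps less data-flow information than at higher optimization levels and so cannot prove that clock_valid is assigned on every path before its use in make_cpus_node. A hypothetical reduction of the pattern and of the fix (the function and parameter names below are illustrative, not Xen's actual code):

```c
#include <assert.h>
#include <stdbool.h>

/* Reduced illustration of the -Wmaybe-uninitialized report: clock_valid
 * is assigned only on some paths before the later `if ( clock_valid )`.
 * Initializing the variable at its declaration, as the patch does,
 * gives it a definite value on every path and silences the warning. */
static int make_cpus_node_demo(bool have_clock_prop)
{
    bool clock_valid = false; /* the fix: initialize at declaration */

    if (have_clock_prop)
        clock_valid = true;

    /* the use that gcc flagged */
    if (clock_valid)
        return 1; /* would emit the clock property */
    return 0;
}
```

Initializing to false is behavior-preserving here because the paths that never assign the variable are also the paths that must not take the clock_valid branch.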


From xen-devel-bounces@lists.xenproject.org Sat Apr 15 06:48:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Apr 2023 06:48:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521442.810145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnZiD-0006Mu-4W; Sat, 15 Apr 2023 06:48:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521442.810145; Sat, 15 Apr 2023 06:48:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnZiD-0006Mn-1F; Sat, 15 Apr 2023 06:48:41 +0000
Received: by outflank-mailman (input) for mailman id 521442;
 Sat, 15 Apr 2023 06:48:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnZiB-0006Md-Pd; Sat, 15 Apr 2023 06:48:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnZiB-0002ab-Hw; Sat, 15 Apr 2023 06:48:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnZiB-0000sd-3c; Sat, 15 Apr 2023 06:48:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pnZiB-0004we-3B; Sat, 15 Apr 2023 06:48:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Mptt7fM3u5d2ar9YIhIuY5JB5AR4uh4SuS/Sl3eYI2Q=; b=ioFZvXhSIew9Rgd/NsceljlITJ
	74aLddsFPb61+/YfBiJfo2ARKys0xDpJsV+2XbmDlUZhVCy6wiRa+y2/PXBT1K3PGcmkY2drcRbVr
	XMfR2zwy1UER53HU3oXWHLtiE05YmQMzumDXnkX/Ldbkv2cwXYoM3leOFSzS8BKJh8hg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180265-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180265: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-xl-credit2:host-install(5):broken:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-examine:examine-serial/bootloader:fail:nonblocking
    xen-unstable:test-armhf-armhf-examine:examine-serial/kernel:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=18c128ba66e6308744850aca96dbffd18f91c29b
X-Osstest-Versions-That:
    xen=f872a624cbf92de9944483eea7674ef80ced1380
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Apr 2023 06:48:39 +0000

flight 180265 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180265/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2     <job status>                 broken
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 180238

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2   5 host-install(5)       broken starved in 180238
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180238
 test-armhf-armhf-examine   11 examine-serial/bootloader fail starved in 180238
 test-armhf-armhf-examine     12 examine-serial/kernel   fail starved in 180238
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180238
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180238

version targeted for testing:
 xen                  18c128ba66e6308744850aca96dbffd18f91c29b
baseline version:
 xen                  f872a624cbf92de9944483eea7674ef80ced1380

Last test of basis   180238  2023-04-13 14:38:34 Z    1 days
Failing since        180256  2023-04-14 05:34:08 Z    1 days    2 attempts
Testing same since   180265  2023-04-14 22:10:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  broken  
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-credit2 broken
broken-step test-armhf-armhf-xl-credit2 host-install(5)

Not pushing.

------------------------------------------------------------
commit 18c128ba66e6308744850aca96dbffd18f91c29b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 26 14:57:45 2023 +0000

    x86/hvm: Disallow disabling paging in 64bit mode
    
    The Long Mode consistency checks exist to "ensure that the processor does not
    enter an undefined mode or state that results in unpredictable behavior".  APM
    Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
    preventing the OS from trying to exit Long mode while in 64bit mode.  This
    could leave the CPU in Protected Mode with an %rip above the 4G boundary.
    
    Experimentally, AMD CPUs really do permit this state transition.  An OS which
    tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
    to be going on behind the scenes ought to result in sane continued execution.
    
    Furthermore, right from the very outset, the APM Vol2 14.7 "Leaving Long Mode"
    section instructs people to switch to a compatibility mode segment first
    before clearing CR0.PG, which does clear out the upper bits in %rip.  This is
    further backed up by Vol2 Figure 1-6 "Operating Modes of the AMD64
    Architecture".
    
    Either way, this appears to have been a genuine oversight in the AMD64 spec.
    
    Intel, on the other hand, rejects this state transition with #GP.
    
    Between revision 71 (Nov 2019) and 72 (May 2020) of SDM Vol3, a footnote to
    4.1.2 "Paging-Mode Enable" was altered from
    
      If CR4.PCIDE= 1, an attempt to clear CR0.PG causes a general-protection
      exception (#GP); software should clear CR4.PCIDE before attempting to
      disable paging.
    
    to
    
      If the logical processor is in 64-bit mode or if CR4.PCIDE= 1, an attempt to
      clear CR0.PG causes a general-protection exception (#GP). Software should
      transition to compatibility mode and clear CR4.PCIDE before attempting to
      disable paging.
    
    which acknowledges this corner case, but there doesn't appear to be any other
    discussion even in the relevant Long Mode sections.
    
    So it appears that Intel spotted and addressed the corner case in IA-32e mode,
    but were 15 years late to document it.
    
    Xen was written to the AMD spec, and misses the check.  Follow the Intel
    behaviour, because it is more sensible and avoids hitting a VMEntry failure.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
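    The check described above can be sketched as follows. This is an
    illustrative model only, not Xen's actual code: the names (vcpu_state,
    cr0_write_allowed, the X86_CR0_PG macro) are hypothetical stand-ins, and
    real CR0 write emulation involves many more checks.

    ```c
    #include <stdbool.h>

    /* Hypothetical, simplified model of the consistency check: clearing
     * CR0.PG while the vCPU is in 64-bit mode should fault, following the
     * Intel behaviour described in the commit message. */

    #define X86_CR0_PG (1UL << 31)

    struct vcpu_state {
        unsigned long cr0;
        bool efer_lma;   /* EFER.LMA: Long Mode Active */
        bool cs_l;       /* CS.L: 1 => 64-bit mode, 0 => compatibility mode */
    };

    /* Returns true if the CR0 write is permitted, false if it should raise
     * #GP instead of letting the CPU enter an undefined state. */
    static bool cr0_write_allowed(const struct vcpu_state *v,
                                  unsigned long new_cr0)
    {
        bool clearing_pg = (v->cr0 & X86_CR0_PG) && !(new_cr0 & X86_CR0_PG);
        bool in_64bit_mode = v->efer_lma && v->cs_l;

        if (clearing_pg && in_64bit_mode)
            return false;   /* #GP: cannot leave paging from 64-bit mode */
        return true;
    }
    ```

    From a compatibility-mode segment (CS.L = 0) the same write is allowed,
    which matches the exit sequence APM Vol2 14.7 prescribes: switch to a
    compatibility-mode segment first, then clear CR0.PG.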

commit 8363b1f62e561cfb73073b4b094516fcbbd7020e
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Apr 13 14:23:40 2023 +0200

    automation: switch ADL hw tests to debug build
    
    This should give a lot more useful information in case of a failure, and
    also enable some asserts for extra checks.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Apr 15 12:59:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Apr 2023 12:59:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521453.810155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnfUM-0000mj-Gb; Sat, 15 Apr 2023 12:58:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521453.810155; Sat, 15 Apr 2023 12:58:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnfUM-0000mc-Df; Sat, 15 Apr 2023 12:58:46 +0000
Received: by outflank-mailman (input) for mailman id 521453;
 Sat, 15 Apr 2023 12:58:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PlBa=AG=gmail.com=brgerst@srs-se1.protection.inumbo.net>)
 id 1pnfUL-0000mW-De
 for xen-devel@lists.xenproject.org; Sat, 15 Apr 2023 12:58:45 +0000
Received: from mail-yw1-x112d.google.com (mail-yw1-x112d.google.com
 [2607:f8b0:4864:20::112d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3f4959f2-db8d-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 14:58:42 +0200 (CEST)
Received: by mail-yw1-x112d.google.com with SMTP id
 00721157ae682-54c12009c30so485649017b3.9
 for <xen-devel@lists.xenproject.org>; Sat, 15 Apr 2023 05:58:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f4959f2-db8d-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681563521; x=1684155521;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ccV6fJmLsZscVpXweMcEv4sovH+IDrkvejv50k6gs/U=;
        b=BSOaYfaXyd5Wb+epLfvjrJB5ddMbSfuRSPgkDv/0OOsO8ij2hFGojnuviqXm0GCrkP
         ID2hBr77xwAaGcKnrXJiZAUdBojhzD8cglmd8qXgd51iUqdS7A9NAWUQz0MUF5ZcciE8
         AWz1SY3aKamn3My6fXKs/8E5eqVabut4SfZKc0c5sbIlrZyG34ztRaxzO3zCWgWJeeks
         vL5yfHXHE0t7DNIuTqx162KoYvLaNpuXLEhcNNNv20NmCKzlwGF6mcKsLN10ua6WOBit
         5o9zCg+9j1R+WkmMZNRoFuK0k+WfhwAjMsbRMXWBbGHiHnx9KcILf1NV5DEtKeNZyNxV
         c60w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681563521; x=1684155521;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ccV6fJmLsZscVpXweMcEv4sovH+IDrkvejv50k6gs/U=;
        b=lDGLnOAXcncqxY8ebXApQU5/yMSxnvxXf6MmbHA7amZ2k8YbMl/HcgDkpNESKv5K4g
         sUmcPqeUj1L/ZkQQwgw3MAVi4bKRRuYKkAHQe7novFJKW33ZLXOmFWE2xxARfwj3xcrL
         XrxRRI7gp7CLY7Do47ia3PEi1YDvr0DAgOj0idHGUpmcHsF7KQI2flDFSRpHBoo8tFkR
         9nzkB8JcmMJeU2PNRSsDqIdanuhPKOrph9SNgeJoQ6FZDL22127f8eqY/zT5TboaEg79
         GnkM8ZoL5v32VJZEKx1KfrrgbwwyY8e3u+Da32qPI197FSKEtVRBDLO9JPHOw3L+7m2q
         WZjQ==
X-Gm-Message-State: AAQBX9dlN3fcijVBrFxZ3WneJHabvgsbcnJSliECeO/ZKG3h/CZgyS1n
	7Ihx1H8ri6PssO1OEYyGLCwIzmYhcHfUDOlvzw==
X-Google-Smtp-Source: AKy350bkbiaOTV48G/m6wg5f7oTu79ZcJL2xRJuRL2it4F9Xsb2JgTwgmruSWgxfXHK3zLR53InjScARyCku/VQAIuk=
X-Received: by 2002:a81:b245:0:b0:54f:b721:5325 with SMTP id
 q66-20020a81b245000000b0054fb7215325mr5587058ywh.1.1681563521186; Sat, 15 Apr
 2023 05:58:41 -0700 (PDT)
MIME-Version: 1.0
References: <20230414225551.858160935@linutronix.de> <20230414232310.382005483@linutronix.de>
In-Reply-To: <20230414232310.382005483@linutronix.de>
From: Brian Gerst <brgerst@gmail.com>
Date: Sat, 15 Apr 2023 08:58:30 -0400
Message-ID: <CAMzpN2j4NbGGR=jfxpVVQwYCZ=hHOUKm3oBpw1WKGiTUJ73EXA@mail.gmail.com>
Subject: Re: [patch 19/37] x86/smpboot: Switch to hotplug core state synchronization
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, 
	David Woodhouse <dwmw@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Arjan van de Veen <arjan@linux.intel.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	Paul McKenney <paulmck@kernel.org>, Tom Lendacky <thomas.lendacky@amd.com>, 
	Sean Christopherson <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, 
	Paul Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli" <gpiccoli@igalia.com>, 
	Piotr Gorski <lucjan.lucjanov@gmail.com>, Juergen Gross <jgross@suse.com>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org, 
	David Woodhouse <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>, 
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>, 
	linux-arm-kernel@lists.infradead.org, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>, 
	linux-csky@vger.kernel.org, Thomas Bogendoerfer <tsbogend@alpha.franken.de>, 
	linux-mips@vger.kernel.org, 
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	linux-parisc@vger.kernel.org, Paul Walmsley <paul.walmsley@sifive.com>, 
	Palmer Dabbelt <palmer@dabbelt.com>, linux-riscv@lists.infradead.org, 
	Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, Apr 14, 2023 at 7:44 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> The new AP state tracking and synchronization mechanism in the CPU hotplug
> core code allows to remove quite some x86 specific code:
>
>   1) The AP alive synchronization based on cpumasks
>
>   2) The decision whether an AP can be brought up again
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: xen-devel@lists.xenproject.org
> ---
>  arch/x86/Kconfig           |    1
>  arch/x86/include/asm/smp.h |    7 +
>  arch/x86/kernel/smp.c      |    1
>  arch/x86/kernel/smpboot.c  |  159 ++++++++++-----------------------------------
>  arch/x86/xen/smp_hvm.c     |   16 +---
>  arch/x86/xen/smp_pv.c      |   39 ++++++-----
>  6 files changed, 72 insertions(+), 151 deletions(-)
>
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -272,6 +272,7 @@ config X86
>         select HAVE_UNSTABLE_SCHED_CLOCK
>         select HAVE_USER_RETURN_NOTIFIER
>         select HAVE_GENERIC_VDSO
> +       select HOTPLUG_CORE_SYNC_FULL           if SMP
>         select HOTPLUG_SMT                      if SMP
>         select IRQ_FORCED_THREADING
>         select NEED_PER_CPU_EMBED_FIRST_CHUNK
> --- a/arch/x86/include/asm/smp.h
> +++ b/arch/x86/include/asm/smp.h
> @@ -38,6 +38,8 @@ struct smp_ops {
>         void (*crash_stop_other_cpus)(void);
>         void (*smp_send_reschedule)(int cpu);
>
> +       void (*cleanup_dead_cpu)(unsigned cpu);
> +       void (*poll_sync_state)(void);
>         int (*cpu_up)(unsigned cpu, struct task_struct *tidle);
>         int (*cpu_disable)(void);
>         void (*cpu_die)(unsigned int cpu);
> @@ -90,7 +92,8 @@ static inline int __cpu_disable(void)
>
>  static inline void __cpu_die(unsigned int cpu)
>  {
> -       smp_ops.cpu_die(cpu);
> +       if (smp_ops.cpu_die)
> +               smp_ops.cpu_die(cpu);
>  }
>
>  static inline void play_dead(void)
> @@ -122,8 +125,6 @@ void native_smp_cpus_done(unsigned int m
>  int common_cpu_up(unsigned int cpunum, struct task_struct *tidle);
>  int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
>  int native_cpu_disable(void);
> -int common_cpu_die(unsigned int cpu);
> -void native_cpu_die(unsigned int cpu);
>  void hlt_play_dead(void);
>  void native_play_dead(void);
>  void play_dead_common(void);
> --- a/arch/x86/kernel/smp.c
> +++ b/arch/x86/kernel/smp.c
> @@ -269,7 +269,6 @@ struct smp_ops smp_ops = {
>         .smp_send_reschedule    = native_smp_send_reschedule,
>
>         .cpu_up                 = native_cpu_up,
> -       .cpu_die                = native_cpu_die,
>         .cpu_disable            = native_cpu_disable,
>         .play_dead              = native_play_dead,
>
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -57,6 +57,7 @@
>  #include <linux/pgtable.h>
>  #include <linux/overflow.h>
>  #include <linux/stackprotector.h>
> +#include <linux/cpuhotplug.h>
>
>  #include <asm/acpi.h>
>  #include <asm/cacheinfo.h>
> @@ -101,9 +102,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
>  DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
>  EXPORT_PER_CPU_SYMBOL(cpu_info);
>
> -/* All of these masks are initialized in setup_cpu_local_masks() */
> -static cpumask_var_t cpu_initialized_mask;
> -static cpumask_var_t cpu_callout_mask;
>  /* Representing CPUs for which sibling maps can be computed */
>  static cpumask_var_t cpu_sibling_setup_mask;
>
> @@ -169,8 +167,8 @@ static void smp_callin(void)
>         int cpuid = smp_processor_id();
>
>         /*
> -        * If waken up by an INIT in an 82489DX configuration
> -        * cpu_callout_mask guarantees we don't get here before an
> +        * If waken up by an INIT in an 82489DX configuration the alive
> +        * synchronization guarantees we don't get here before an
>          * INIT_deassert IPI reaches our local APIC, so it is now safe to
>          * touch our local APIC.
>          *
> @@ -216,17 +214,6 @@ static void ap_calibrate_delay(void)
>         cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
>  }
>
> -static void wait_for_master_cpu(int cpu)
> -{
> -       /*
> -        * Wait for release by control CPU before continuing with AP
> -        * initialization.
> -        */
> -       WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
> -       while (!cpumask_test_cpu(cpu, cpu_callout_mask))
> -               cpu_relax();
> -}
> -
>  /*
>   * Activate a secondary processor.
>   */
> @@ -247,11 +234,10 @@ static void notrace start_secondary(void
>         cpu_init_exception_handling();
>
>         /*
> -        * Sync point with wait_cpu_initialized(). Sets AP in
> -        * cpu_initialized_mask and then waits for the control CPU
> -        * to release it.
> +        * Sync point with the hotplug core. Sets the sync state to ALIVE
> +        * and waits for the control CPU to release it.
>          */
> -       wait_for_master_cpu(raw_smp_processor_id());
> +       cpuhp_ap_sync_alive();
>
>         cpu_init();
>         rcu_cpu_starting(raw_smp_processor_id());
> @@ -285,7 +271,6 @@ static void notrace start_secondary(void
>         set_cpu_online(smp_processor_id(), true);
>         lapic_online();
>         unlock_vector_lock();
> -       cpu_set_state_online(smp_processor_id());
>         x86_platform.nmi_init();
>
>         /* enable local interrupts */
> @@ -736,9 +721,10 @@ static void impress_friends(void)
>          * Allow the user to impress friends.
>          */
>         pr_debug("Before bogomips\n");
> -       for_each_possible_cpu(cpu)
> -               if (cpumask_test_cpu(cpu, cpu_callout_mask))
> +       for_each_possible_cpu(cpu) {
> +               if (cpumask_test_cpu(cpu, cpu_online_mask))
>                         bogosum += cpu_data(cpu).loops_per_jiffy;

This should be the same as for_each_online_cpu().
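The equivalence can be illustrated with a toy model, assuming a cpumask is a
plain bitmask (this is a sketch for illustration, not the kernel's actual
cpumask API; NR_CPUS and the helpers below are stand-ins):

```c
#include <stdint.h>

#define NR_CPUS 8

static int mask_test(uint64_t mask, int cpu) { return (mask >> cpu) & 1; }

/* The patch's loop: walk every possible CPU, skipping offline ones. */
static unsigned long sum_filtered(uint64_t online, const unsigned long *lpj)
{
    unsigned long sum = 0;
    for (int cpu = 0; cpu < NR_CPUS; cpu++)   /* for_each_possible_cpu */
        if (mask_test(online, cpu))           /* cpumask_test_cpu(cpu, cpu_online_mask) */
            sum += lpj[cpu];
    return sum;
}

/* for_each_online_cpu walks only the set bits of the online mask, so it
 * visits exactly the same CPUs. */
static unsigned long sum_online(uint64_t online, const unsigned long *lpj)
{
    unsigned long sum = 0;
    while (online) {
        int cpu = __builtin_ctzll(online);    /* next set bit, like cpumask_next() */
        sum += lpj[cpu];
        online &= online - 1;                 /* clear lowest set bit */
    }
    return sum;
}
```

Both loops accumulate over the same set of online CPUs; the second simply
skips the explicit test by iterating the mask's set bits directly.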

--
Brian Gerst


From xen-devel-bounces@lists.xenproject.org Sat Apr 15 13:22:56 2023
MIME-Version: 1.0
References: <20230414225551.858160935@linutronix.de> <20230414232311.379210081@linutronix.de>
In-Reply-To: <20230414232311.379210081@linutronix.de>
From: Brian Gerst <brgerst@gmail.com>
Date: Sat, 15 Apr 2023 09:22:34 -0400
Message-ID: <CAMzpN2hUbYpYrqDL1ViXUWGKGa7mDEG6iHtWEZg9GvrAoRgvKQ@mail.gmail.com>
Subject: Re: [patch 35/37] x86/smpboot: Support parallel startup of secondary CPUs
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, 
	David Woodhouse <dwmw@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Arjan van de Veen <arjan@linux.intel.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	Paul McKenney <paulmck@kernel.org>, Tom Lendacky <thomas.lendacky@amd.com>, 
	Sean Christopherson <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, 
	Paul Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli" <gpiccoli@igalia.com>, 
	Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, 
	Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org, 
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>, 
	linux-arm-kernel@lists.infradead.org, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>, 
	linux-csky@vger.kernel.org, Thomas Bogendoerfer <tsbogend@alpha.franken.de>, 
	linux-mips@vger.kernel.org, 
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	linux-parisc@vger.kernel.org, Paul Walmsley <paul.walmsley@sifive.com>, 
	Palmer Dabbelt <palmer@dabbelt.com>, linux-riscv@lists.infradead.org, 
	Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, Apr 14, 2023 at 7:45 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> Rework the real-mode startup code to allow for APs to be brought up in
> parallel. This is in two parts:
>
>  1. Introduce a bit-spinlock to prevent them from all using the real
>     mode stack at the same time.
>
>  2. Avoid needing to use the global smpboot_control variable to pass
>     each AP its CPU number.
>
> To achieve the latter, export the cpuid_to_apicid[] array so that each
> AP can find its own CPU number by searching therein based on its APIC ID.
>
> Introduce flags in the top bits of smpboot_control which indicate methods
> by which an AP should find its CPU number. For a serialized bringup, the
> CPU number is explicitly passed in the low bits of smpboot_control as
> before. For parallel mode there are flags directing the AP to find its APIC
> ID in CPUID leaf 0x0b or 0x1f (for X2APIC mode) or CPUID leaf 0x01 where 8
> bits are sufficient, then perform the cpuid_to_apicid[] lookup with that.
>
> Aside from the fact that APs will now look up their CPU number via the
> newly-exported cpuid_to_apicid[] table, there is no behavioural change
> intended, since the parallel bootup has not yet been enabled.
>
> [ tglx: Initial proof of concept patch with bitlock and APIC ID lookup ]
> [ dwmw2: Rework and testing, commit message, CPUID 0x1 and CPU0 support ]
> [ seanc: Fix stray override of initial_gs in common_cpu_up() ]
> [ Oleksandr Natalenko: reported suspend/resume issue fixed in
>   x86_acpi_suspend_lowlevel ]
>
> Co-developed-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Co-developed-by: Brian Gerst <brgerst@gmail.com>
> Signed-off-by: Brian Gerst <brgerst@gmail.com>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Signed-off-by: Usama Arif <usama.arif@bytedance.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/apic.h          |    2
>  arch/x86/include/asm/realmode.h      |    3 +
>  arch/x86/include/asm/smp.h           |    8 +++
>  arch/x86/kernel/acpi/sleep.c         |    9 +++
>  arch/x86/kernel/apic/apic.c          |    2
>  arch/x86/kernel/head_64.S            |   79 ++++++++++++++++++++++++++++++++++-
>  arch/x86/kernel/smpboot.c            |    5 --
>  arch/x86/realmode/init.c             |    3 +
>  arch/x86/realmode/rm/trampoline_64.S |   27 +++++++++--
>  9 files changed, 125 insertions(+), 13 deletions(-)
>
> --- a/arch/x86/include/asm/apic.h
> +++ b/arch/x86/include/asm/apic.h
> @@ -55,6 +55,8 @@ extern int local_apic_timer_c2_ok;
>  extern int disable_apic;
>  extern unsigned int lapic_timer_period;
>
> +extern int cpuid_to_apicid[];
> +
>  extern enum apic_intr_mode_id apic_intr_mode;
>  enum apic_intr_mode_id {
>         APIC_PIC,
> --- a/arch/x86/include/asm/realmode.h
> +++ b/arch/x86/include/asm/realmode.h
> @@ -52,6 +52,7 @@ struct trampoline_header {
>         u64 efer;
>         u32 cr4;
>         u32 flags;
> +       u32 lock;
>  #endif
>  };
>
> @@ -64,6 +65,8 @@ extern unsigned long initial_stack;
>  extern unsigned long initial_vc_handler;
>  #endif
>
> +extern u32 *trampoline_lock;
> +
>  extern unsigned char real_mode_blob[];
>  extern unsigned char real_mode_relocs[];
>
> --- a/arch/x86/include/asm/smp.h
> +++ b/arch/x86/include/asm/smp.h
> @@ -198,4 +198,12 @@ extern unsigned int smpboot_control;
>
>  #endif /* !__ASSEMBLY__ */
>
> +/* Control bits for startup_64 */
> +#define STARTUP_APICID_CPUID_1F 0x80000000
> +#define STARTUP_APICID_CPUID_0B 0x40000000
> +#define STARTUP_APICID_CPUID_01 0x20000000
> +
> +/* Top 8 bits are reserved for control */
> +#define STARTUP_PARALLEL_MASK  0xFF000000
> +
>  #endif /* _ASM_X86_SMP_H */
> --- a/arch/x86/kernel/acpi/sleep.c
> +++ b/arch/x86/kernel/acpi/sleep.c
> @@ -16,6 +16,7 @@
>  #include <asm/cacheflush.h>
>  #include <asm/realmode.h>
>  #include <asm/hypervisor.h>
> +#include <asm/smp.h>
>
>  #include <linux/ftrace.h>
>  #include "../../realmode/rm/wakeup.h"
> @@ -127,7 +128,13 @@ int x86_acpi_suspend_lowlevel(void)
>          * value is in the actual %rsp register.
>          */
>         current->thread.sp =3D (unsigned long)temp_stack + sizeof(temp_st=
ack);
> -       smpboot_control =3D smp_processor_id();
> +       /*
> +        * Ensure the CPU knows which one it is when it comes back, if
> +        * it isn't in parallel mode and expected to work that out for
> +        * itself.
> +        */
> +       if (!(smpboot_control & STARTUP_PARALLEL_MASK))
> +               smpboot_control =3D smp_processor_id();
>  #endif
>         initial_code =3D (unsigned long)wakeup_long64;
>         saved_magic =3D 0x123456789abcdef0L;
> --- a/arch/x86/kernel/apic/apic.c
> +++ b/arch/x86/kernel/apic/apic.c
> @@ -2377,7 +2377,7 @@ static int nr_logical_cpuids = 1;
>  /*
>   * Used to store mapping between logical CPU IDs and APIC IDs.
>   */
> -static int cpuid_to_apicid[] = {
> +int cpuid_to_apicid[] = {
>         [0 ... NR_CPUS - 1] = -1,
>  };
>
> --- a/arch/x86/kernel/head_64.S
> +++ b/arch/x86/kernel/head_64.S
> @@ -25,6 +25,7 @@
>  #include <asm/export.h>
>  #include <asm/nospec-branch.h>
>  #include <asm/fixmap.h>
> +#include <asm/smp.h>
>
>  /*
>   * We are not able to switch in one step to the final KERNEL ADDRESS SPACE
> @@ -234,8 +235,70 @@ SYM_INNER_LABEL(secondary_startup_64_no_
>         ANNOTATE_NOENDBR // above
>
>  #ifdef CONFIG_SMP
> +       /*
> +        * For parallel boot, the APIC ID is retrieved from CPUID, and then
> +        * used to look up the CPU number.  For booting a single CPU, the
> +        * CPU number is encoded in smpboot_control.
> +        *
> +        * Bit 31       STARTUP_APICID_CPUID_1F flag (use CPUID 0x1f)
> +        * Bit 30       STARTUP_APICID_CPUID_0B flag (use CPUID 0x0b)
> +        * Bit 29       STARTUP_APICID_CPUID_01 flag (use CPUID 0x01)
> +        * Bit 0-23     CPU# if STARTUP_APICID_CPUID_xx flags are not set
> +        */
>         movl    smpboot_control(%rip), %ecx
> +       testl   $STARTUP_APICID_CPUID_1F, %ecx
> +       jnz     .Luse_cpuid_1f
> +       testl   $STARTUP_APICID_CPUID_0B, %ecx
> +       jnz     .Luse_cpuid_0b
> +       testl   $STARTUP_APICID_CPUID_01, %ecx
> +       jnz     .Luse_cpuid_01
> +       andl    $(~STARTUP_PARALLEL_MASK), %ecx
> +       jmp     .Lsetup_cpu
> +
> +.Luse_cpuid_01:
> +       mov     $0x01, %eax
> +       cpuid
> +       mov     %ebx, %edx
> +       shr     $24, %edx
> +       jmp     .Lsetup_AP
> +
> +.Luse_cpuid_0b:
> +       mov     $0x0B, %eax
> +       xorl    %ecx, %ecx
> +       cpuid
> +       jmp     .Lsetup_AP
> +
> +.Luse_cpuid_1f:
> +       mov     $0x1f, %eax
> +       xorl    %ecx, %ecx
> +       cpuid
>
> +.Lsetup_AP:
> +       /* EDX contains the APIC ID of the current CPU */
> +       xorq    %rcx, %rcx
> +       leaq    cpuid_to_apicid(%rip), %rbx
> +
> +.Lfind_cpunr:
> +       cmpl    (%rbx,%rcx,4), %edx
> +       jz      .Lsetup_cpu
> +       inc     %ecx
> +#ifdef CONFIG_FORCE_NR_CPUS
> +       cmpl    $NR_CPUS, %ecx
> +#else
> +       cmpl    nr_cpu_ids(%rip), %ecx
> +#endif
> +       jb      .Lfind_cpunr
> +
> +       /*  APIC ID not found in the table. Drop the trampoline lock and bail. */
> +       movq    trampoline_lock(%rip), %rax
> +       lock
> +       btrl    $0, (%rax)
> +
> +1:     cli
> +       hlt
> +       jmp     1b
> +
> +.Lsetup_cpu:
>         /* Get the per cpu offset for the given CPU# which is in ECX */
>         movq    __per_cpu_offset(,%rcx,8), %rdx
>  #else
> @@ -248,10 +311,20 @@ SYM_INNER_LABEL(secondary_startup_64_no_
>          *
>          * RDX contains the per-cpu offset
>          */
> -       movq    pcpu_hot + X86_current_task(%rdx), %rax
> -       movq    TASK_threadsp(%rax), %rsp
> +       movq    pcpu_hot + X86_top_of_stack(%rdx), %rsp

Switching to pcpu_hot.top_of_stack is ok, but it's not completely
equivalent.  top_of_stack points to the end of the pt_regs structure,
while the kernel stack starts below pt_regs even for kernel threads.
So you need to subtract PTREGS_SIZE from the stack pointer after this.

This change should also be a separate patch.

--
Brian Gerst


From xen-devel-bounces@lists.xenproject.org Sat Apr 15 14:05:15 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180266-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180266: trouble: broken/fail/pass
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Apr 2023 14:04:53 +0000

flight 180266 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180266/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1     <job status>                 broken
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu  5 host-install(5)      broken starved in 180253
 test-armhf-armhf-xl-rtds      5 host-install(5)       broken starved in 180253
 test-armhf-armhf-xl-credit1   5 host-install(5)       broken starved in 180253
 test-armhf-armhf-xl-arndale   8 xen-boot                fail baseline untested
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180253
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-examine      8 reboot                  fail starved in 180253
 test-armhf-armhf-xl-vhd       8 xen-boot                fail starved in 180253
 test-armhf-armhf-libvirt      8 xen-boot                fail starved in 180253
 test-armhf-armhf-xl-credit2   8 xen-boot                fail starved in 180253
 test-armhf-armhf-libvirt-qcow2  8 xen-boot              fail starved in 180253
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180253

version targeted for testing:
 linux                7a934f4bd7d6f9da84c8812da3ba42ee10f5778e
baseline version:
 linux                44149752e9987a9eac5ad78e6d3a20934b5e018d

Last test of basis   180253  2023-04-14 01:13:19 Z    1 days
Testing same since   180264  2023-04-14 18:10:06 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Cheng Xu <chengyou@linux.alibaba.com>
  Conor Dooley <conor.dooley@microchip.com>
  Erik Brakkee <erik@brakkee.org>
  Huang Rui <ray.huang@amd.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Leon Romanovsky <leon@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Maher Sanalla <msanalla@nvidia.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Zhang <markzhang@nvidia.com>
  Mathis Salmen <mathis.salmen@matsal.de>
  Mustafa Ismail <mustafa.ismail@intel.com>
  Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paul Menzel <pmenzel@molgen.mpg.de>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rui Salvaterra <rsalvaterra@gmail.com>
  Saravanan Vajravel <saravanan.vajravel@broadcom.com>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stefan Binding <sbinding@opensource.cirrus.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tatyana Nikolova <tatyana.e.nikolova@intel.com>
  Wyes Karny <wyes.karny@amd.com>
  Xu Biang <xubiang@hust.edu.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-xl-rtds host-install(5)
broken-step test-armhf-armhf-xl-credit1 host-install(5)

Not pushing.

(No revision log; it would be 768 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Apr 15 15:11:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Apr 2023 15:11:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521472.810184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnhYJ-0007QA-M9; Sat, 15 Apr 2023 15:10:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521472.810184; Sat, 15 Apr 2023 15:10:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnhYJ-0007Pc-Ja; Sat, 15 Apr 2023 15:10:59 +0000
Received: by outflank-mailman (input) for mailman id 521472;
 Sat, 15 Apr 2023 15:10:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnhYH-0007PQ-R1; Sat, 15 Apr 2023 15:10:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnhYH-0005vs-MM; Sat, 15 Apr 2023 15:10:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnhYH-0005hX-4h; Sat, 15 Apr 2023 15:10:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pnhYH-0007jZ-49; Sat, 15 Apr 2023 15:10:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=k/DsF+cJyXIpRr5gpUwirITGrgt8HdHdifHnYioC/Qo=; b=ZifWHNAZS24B/iIXzCwC89qEmb
	mf7hEvJ7eb0608qU0vGWUaVWRdKLqkcTEq1pOhpdMjQ+GqhYGJQkH3x9fqaaYb6yzxk5Q1TTk2pC+
	4rUBY8r8Pbg0uG2k2W6s6s2KwOWsYSjnAaPLPx2gqC9blhqEgEHrA6vYu5uEU3fSsEhI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180267-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180267: trouble: broken/pass
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    libvirt:test-armhf-armhf-libvirt-raw:host-install(5):broken:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=7cbbd45af115c24dba1b1be9631a32d6215ff0cc
X-Osstest-Versions-That:
    libvirt=ebd004a03dbddc52dd1b47bd6bc4607f553d5e70
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Apr 2023 15:10:57 +0000

flight 180267 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180267/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-raw    <job status>                 broken

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw  5 host-install(5)       broken starved in 180255
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180255
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180255

version targeted for testing:
 libvirt              7cbbd45af115c24dba1b1be9631a32d6215ff0cc
baseline version:
 libvirt              ebd004a03dbddc52dd1b47bd6bc4607f553d5e70

Last test of basis   180255  2023-04-14 04:18:49 Z    1 days
Testing same since   180267  2023-04-15 04:20:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt-raw broken
broken-step test-armhf-armhf-libvirt-raw host-install(5)

Not pushing.

------------------------------------------------------------
commit 7cbbd45af115c24dba1b1be9631a32d6215ff0cc
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Thu Apr 13 08:39:22 2023 +0200

    virsh-domain-event: Make 'virshEventIOError(Reason)Print' translation friendly
    
    Signed-off-by: Peter Krempa <pkrempa@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 24b56900f8eb1e508a7aa83ca5a47ce8404c6bbc
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Thu Apr 13 08:39:22 2023 +0200

    virsh-domain-event: Make 'virshEventWatchdogPrint' translation friendly
    
    Signed-off-by: Peter Krempa <pkrempa@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 4c531e0130f5e4ba82b8df893740dba2367f4586
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Thu Apr 13 08:39:22 2023 +0200

    virsh-domain-event: Make 'virshEventTrayChangePrint' translation friendly
    
    Remove construction of the event string from sub-strings marked as
    translatable. Without context it's impossible to translate it correctly.
    
    This slightly increases verbosity of the code but actually makes it more
    readable as everything is inline.
    
    Signed-off-by: Peter Krempa <pkrempa@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 9dc2a41f1e2df7c16ffad8b8a33e44f05ad4fa5a
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Thu Apr 13 09:31:34 2023 +0200

    virsh: event: Introduce virshEventPrintf
    
    Extract internals of virshEventPrint into a function that can take the
    format string. The function will be used in upcoming patches which make
    the event formatting translatable.
    
    Signed-off-by: Peter Krempa <pkrempa@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 13af21fb7490e4aae1514ee1318c7096eea07fcc
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Thu Apr 13 09:19:27 2023 +0200

    vshPrint: Add version using 'va_list'
    
    Add a version for functions which may already need to take a printf
    format string.
    
    Signed-off-by: Peter Krempa <pkrempa@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 620d9427490144a5234001c0ea80bf74dab48a03
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Thu Apr 13 09:44:05 2023 +0200

    virshGraphicsAddressToString: Remove pointless translation
    
    There's no point in marking the protocol name as translatable.
    
    Signed-off-by: Peter Krempa <pkrempa@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 2fbb8e9a7b955ae4544fafe6cf48af0cb1147d0d
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Thu Apr 13 08:30:21 2023 +0200

    Don't translate strings used with VIR_DEBUG
    
    Signed-off-by: Peter Krempa <pkrempa@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit b108a73a7bcedb65a324eed692712478101848ef
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Wed Apr 12 14:46:04 2023 +0200

    virCgroupV1GetBlkioIo(Device)Serviced: Refactor extraction of cgroup data
    
    Rewrite the code to improve maintainability and also re-do construction
    of error messages which are assembled from non-translatable parts.
    
    Closes: https://gitlab.com/libvirt/libvirt/-/issues/455
    Signed-off-by: Peter Krempa <pkrempa@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 194cfb44e77ce25d99240f24321559f569251e68
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Fri Apr 14 10:37:10 2023 +0200

    qemu: Fix incorrect command name in error messages
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>


From xen-devel-bounces@lists.xenproject.org Sat Apr 15 19:00:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Apr 2023 19:00:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521482.810195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnl8G-0004zG-RV; Sat, 15 Apr 2023 19:00:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521482.810195; Sat, 15 Apr 2023 19:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnl8G-0004z9-NI; Sat, 15 Apr 2023 19:00:20 +0000
Received: by outflank-mailman (input) for mailman id 521482;
 Sat, 15 Apr 2023 19:00:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnl8E-0004ys-I3; Sat, 15 Apr 2023 19:00:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnl8E-00039I-Ck; Sat, 15 Apr 2023 19:00:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnl8D-0006Iv-U2; Sat, 15 Apr 2023 19:00:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pnl8D-0001ir-TY; Sat, 15 Apr 2023 19:00:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Nb06gDW/EWxx5aIsf46XKdOKtx7sFnbzbCzrXXvM+R4=; b=Limn6tOb7LETlAaLcGYRHMRu+E
	Qg/1VysAlSRrfNXZbIJKcOidg6/IA4PUAjbJnhN6uZqWOZwcHALMcezSEekLeEX2Gr9U9RnClCmBF
	RuUa1Vek5xbjd70rVlHcuxLgjvVClsU00sH6/kwU8afxHBnAHt8+oS8WVowUVLdLDxJc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180268-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180268: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-rtds:host-install(5):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-examine:examine-serial/bootloader:fail:nonblocking
    xen-unstable:test-armhf-armhf-examine:examine-serial/kernel:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=18c128ba66e6308744850aca96dbffd18f91c29b
X-Osstest-Versions-That:
    xen=f872a624cbf92de9944483eea7674ef80ced1380
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Apr 2023 19:00:17 +0000

flight 180268 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180268/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-rtds        <job status>                 broken

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds      5 host-install(5)       broken starved in 180238
 test-armhf-armhf-xl-arndale   8 xen-boot                fail baseline untested
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180238
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-examine   11 examine-serial/bootloader fail starved in 180238
 test-armhf-armhf-examine     12 examine-serial/kernel   fail starved in 180238
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180238
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180238
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180238

version targeted for testing:
 xen                  18c128ba66e6308744850aca96dbffd18f91c29b
baseline version:
 xen                  f872a624cbf92de9944483eea7674ef80ced1380

Last test of basis   180238  2023-04-13 14:38:34 Z    2 days
Failing since        180256  2023-04-14 05:34:08 Z    1 days    3 attempts
Testing same since   180265  2023-04-14 22:10:49 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-rtds broken
broken-step test-armhf-armhf-xl-rtds host-install(5)

Not pushing.

------------------------------------------------------------
commit 18c128ba66e6308744850aca96dbffd18f91c29b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 26 14:57:45 2023 +0000

    x86/hvm: Disallow disabling paging in 64bit mode
    
    The Long Mode consistency checks exist to "ensure that the processor does not
    enter an undefined mode or state that results in unpredictable behavior".  APM
    Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
    preventing the OS from trying to exit Long mode while in 64bit mode.  This
    could leave the CPU in Protected Mode with an %rip above the 4G boundary.
    
    Experimentally, AMD CPUs really do permit this state transition.  An OS which
    tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
    to be going on behind the scenes ought to result in sane continued execution.
    
    Furthermore, right from the very outset, the APM Vol2 14.7 "Leaving Long Mode"
    section instructs people to switch to a compatibility mode segment first
    before clearing CR0.PG, which does clear out the upper bits in %rip.  This is
    further backed up by Vol2 Figure 1-6 "Operating Modes of the AMD64
    Architecture".
    
    Either way, this appears to have been a genuine oversight in the AMD64 spec.
    
    Intel, on the other hand, rejects this state transition with #GP.
    
    Between revision 71 (Nov 2019) and 72 (May 2020) of SDM Vol3, a footnote to
    4.1.2 "Paging-Mode Enable" was altered from
    
      If CR4.PCIDE= 1, an attempt to clear CR0.PG causes a general-protection
      exception (#GP); software should clear CR4.PCIDE before attempting to
      disable paging.
    
    to
    
      If the logical processor is in 64-bit mode or if CR4.PCIDE= 1, an attempt to
      clear CR0.PG causes a general-protection exception (#GP). Software should
      transition to compatibility mode and clear CR4.PCIDE before attempting to
      disable paging.
    
    which acknowledges this corner case, but there doesn't appear to be any other
    discussion even in the relevant Long Mode sections.
    
    So it appears that Intel spotted and addressed the corner case in IA-32e mode,
    but were 15 years late to document it.
    
    Xen was written to the AMD spec, and misses the check.  Follow the Intel
    behaviour, because it is more sensible and avoids hitting a VMEntry failure.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
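
    As a rough illustration of the check this commit describes, here is a
    standalone toy model (not Xen's actual CR0-write handler; the function
    name and the bool return are stand-ins for the real #GP injection path):

    ```c
    #include <stdbool.h>

    #define X86_CR0_PG (1u << 31)

    /*
     * Toy model of the consistency check: following Intel's behaviour, an
     * attempt to clear CR0.PG while the vCPU is in 64-bit mode is rejected
     * (the real handler would inject #GP rather than return false).
     */
    static bool cr0_pg_clear_permitted(unsigned int old_cr0,
                                       unsigned int new_cr0,
                                       bool in_64bit_mode)
    {
        if ( in_64bit_mode &&
             (old_cr0 & X86_CR0_PG) && !(new_cr0 & X86_CR0_PG) )
            return false;   /* would raise #GP */

        return true;
    }
    ```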

commit 8363b1f62e561cfb73073b4b094516fcbbd7020e
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Apr 13 14:23:40 2023 +0200

    automation: switch ADL hw tests to debug build
    
    This should give a lot more useful information in case of a failure, and
    also enable some asserts for extra checks.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Apr 15 19:58:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Apr 2023 19:58:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521491.810204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnm2V-0001vG-BK; Sat, 15 Apr 2023 19:58:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521491.810204; Sat, 15 Apr 2023 19:58:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnm2V-0001v9-8k; Sat, 15 Apr 2023 19:58:27 +0000
Received: by outflank-mailman (input) for mailman id 521491;
 Sat, 15 Apr 2023 19:58:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hXtF=AG=citrix.com=prvs=462c9d09f=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pnm2U-0001v3-Bo
 for xen-devel@lists.xenproject.org; Sat, 15 Apr 2023 19:58:26 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id df1cc446-dbc7-11ed-b21e-6b7b168915f2;
 Sat, 15 Apr 2023 21:58:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df1cc446-dbc7-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681588704;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=aGoV2weU6U132U2moXp+8TbQk1yUDNrx4rdfkwTi2Og=;
  b=LxWTeLSksl9YTUrdoQmdezUMKliOGb8Bi1Lg3Q49PjOpmj/ZUH8M7I1T
   N4lDVbTjfWUgDead3ziQQI24KO8Jyba3gmG5TXfN0OPGxLCojRWPA6jab
   +9rnUBTcouwhqIWbiOxPo4nJbob0NzvifyYjjNRQJVMIjHMIQ2tg1l5eE
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106054553
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:0mLkD6maCYrDuNsKj2SozUro5gyWJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xJNCm/UMq3eZWb0Kooja4q/8RkO7cDdydJrHgQ5rSo9QiMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icfHgqH2eIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7aWaVA8w5ARkPqgX5QKGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 eFCDS4SYjmHvd2VmL+UW+Mv290jLfC+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglHWdTFCpU3Tjq0w+2XJlyR60aT3McqTcduPLSlQth/A+
 DqbozmkWXn2MvSj533U+WyhhdPDxyP7RrpPE4+y+dxD1Qj7Kms7V0RNCArTTeOCol6zXZdTJ
 lIZ/gIqrLMu7wq7Q9/lRRq6rXWY+BkGVLJ4Mcc39QWMwar8+BuCCy4PSTspQMMinN87Q3otz
 FDht9DuAyZmvPuKSHae3rCOpDi2NG4eKmpqWMMfZVJbuZ+5+th110+RCI85S8ZZk+EZBxntm
 RWUsyQXg48srpYG+LSxvg3egzOV882hohEO2unHYo60xlonNNf5PN31uASzAeVod9jAEATY1
 JQQs43Htb1VU8nQ/MCYaL9VdIxF8cppJ9E1bbRHO5A6vwqg9He4FWy7yGEvfRw5WirolNKAX
 aMyhe+yzMUJVJdSRfUrC79d8uxzpUQaKfzrV+rPcv1FaYVreQmM8UlGPBDAhj29yxZxy/lla
 f93lPpA6l5DUMxaIMeeHb9BgdfHOAhlrY8seXwL50v+iufPDJJkYbwELEGPfogE0U9wmy2Mq
 4w3H5LTm31ivBjWPnG/HXg7cQpbchDWxPne96RqSwJ0ClM5RzlxVqGJn+5Jlk4Mt/09q9okN
 0qVAidwoGcTT1WeQelWQhiPsI/SYKs=
IronPort-HdrOrdr: A9a23:L7u626AZq+86ki7lHemg55DYdb4zR+YMi2TC1yhKJyC9Ffbo8P
 xG/c5rsSMc5wxwZJhNo7y90cq7MBbhHPxOkOos1N6ZNWGM0gaVxelZnO3fKlbbehEWmNQz6U
 4ZSdkdNOHN
X-Talos-CUID: 9a23:tPFag2/TRHXsuW3tUPKVv1NOP946flTZ9kjze0KXGGwqFaSVSGbFrQ==
X-Talos-MUID: 9a23:/VIeiQW2VR1AmpLq/APCw2BpKspr36j0CEATqcgBicOLFwUlbg==
X-IronPort-AV: E=Sophos;i="5.99,200,1677560400"; 
   d="scan'208";a="106054553"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: [PATCH] x86/livepatch: Fix livepatch application when CET is active
Date: Sat, 15 Apr 2023 20:58:16 +0100
Message-ID: <20230415195816.3717648-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Right now, trying to apply a livepatch on any system with CET shstk (AMD Zen3
or later, Intel Tiger Lake or Sapphire Rapids and later) fails as follows:

  (XEN) livepatch: lp: Verifying enabled expectations for all functions
  (XEN) common/livepatch.c:1591: livepatch: lp: timeout is 30000000ns
  (XEN) common/livepatch.c:1703: livepatch: lp: CPU28 - IPIing the other 127 CPUs
  (XEN) livepatch: lp: Applying 1 functions
  (XEN) hi_func: Hi! (called 1 times)
  (XEN) Hook executing.
  (XEN) Assertion 'local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu))' failed at arch/x86/smp.c:265
  (XEN) *** DOUBLE FAULT ***
  <many double faults>

The assertion failure is from a global (system wide) TLB flush initiated by
modify_xen_mappings().  I'm not entirely sure when this broke, and I'm not
sure exactly what causes the #DF's, but it doesn't really matter either
because they highlight a latent bug that I'd overlooked in the CET-SS vs
patching work in the first place.

While we're careful to arrange for the patching CPU to avoid encountering
non-shstk memory with transient shstk perms, other CPUs can pick these
mappings up too if they need to re-walk for uarch reasons.

Another bug is that for livepatching, we only disable CET if shadow stacks are
in use.  Running on Intel CET systems when Xen is only using CET-IBT will
crash in arch_livepatch_quiesce() when trying to clear CR0.WP with CR4.CET
still active.

Also, we never went and cleared the dirty bits on .rodata.  This would
matter (for the same reason it matters on .text - it becomes a valid target
for WRSS), but we never actually patch .rodata anyway.

Therefore rework how we do patching for both alternatives and livepatches.

Introduce modify_xen_mappings_lite() with a purpose similar to
modify_xen_mappings(), but stripped down to the bare minimum as it's used in
weird contexts.  Leave all complexity to the caller to handle.

Instead of patching by clearing CR0.WP (and having to jump through some
fragile hoops to disable CET in order to do this), just transiently relax the
permissions on .text via l2_identmap[].

The perms are relaxed globally, but this is safe enough.  Alternatives run
we boot APs, and Livepatching runs in a quiesced state where the other CPUs
are not doing anything interesting.

This approach is far more robust.
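
The resulting sequence can be modelled in miniature as follows (a toy
user-space harness only; relax()/tighten() stand in for
relax_virtual_region_perms()/tighten_virtual_region_perms(), and the TLB
flushes are represented by comments):

```c
#include <assert.h>

/* Toy model of the new patching flow: RX -> RWX -> patch -> RX. */
enum text_perms { PERM_RX, PERM_RWX };
static enum text_perms text = PERM_RX;

static void relax(void)   { text = PERM_RWX; /* + flush_local(FLUSH_TLB_GLOBAL) */ }
static void tighten(void) { text = PERM_RX;  /* also clears dirty bits under shstk */ }

static int apply_livepatch(void)
{
    relax();                  /* arch_livepatch_quiesce() */
    assert(text == PERM_RWX); /* writes to .text now succeed */
    /* ... patch the functions ... */
    tighten();                /* arch_livepatch_revive() */
    /* global TLB flush deferred to arch_livepatch_post_action() on all CPUs */
    return text == PERM_RX;
}
```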

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Ross Lagerwall <ross.lagerwall@citrix.com>

Pulling put_pte_flags() out of the loops in modify_xen_mappings_lite() halves
the size of the function.  The code generation of the typesafe pagetable
helpers is terrible, both because of flags needing a 32->64 expand, and
because of _PAGE_NX using cpu_has_nx behind the scene.  We really should
improve how all of this works.
---
 xen/arch/x86/alternative.c       | 45 ++++++++------------
 xen/arch/x86/livepatch.c         | 56 ++++++++++---------------
 xen/arch/x86/mm.c                | 71 ++++++++++++++++++++++++++++++++
 xen/common/virtual_region.c      | 22 +++++++---
 xen/include/xen/mm.h             |  1 +
 xen/include/xen/virtual_region.h |  4 +-
 6 files changed, 132 insertions(+), 67 deletions(-)

diff --git a/xen/arch/x86/alternative.c b/xen/arch/x86/alternative.c
index 2383fa66294c..fc815bc7d627 100644
--- a/xen/arch/x86/alternative.c
+++ b/xen/arch/x86/alternative.c
@@ -382,24 +382,28 @@ static int __init cf_check nmi_apply_alternatives(
      */
     if ( !(alt_done & alt_todo) )
     {
-        unsigned long cr0, cr4;
-
-        cr0 = read_cr0();
-        cr4 = read_cr4();
-
-        if ( cr4 & X86_CR4_CET )
-            write_cr4(cr4 & ~X86_CR4_CET);
-
-        /* Disable WP to allow patching read-only pages. */
-        write_cr0(cr0 & ~X86_CR0_WP);
+        /*
+         * Relax perms on .text to be RWX, so we can modify them.
+         *
+         * This relaxes perms globally, but we run ahead of bringing APs
+         * online, so only have our own TLB to worry about.
+         */
+        modify_xen_mappings_lite(XEN_VIRT_START + MB(2),
+                                 (unsigned long)&__2M_text_end,
+                                 PAGE_HYPERVISOR_RWX);
+        flush_local(FLUSH_TLB_GLOBAL);
 
         _apply_alternatives(__alt_instructions, __alt_instructions_end,
                             alt_done);
 
-        write_cr0(cr0);
-
-        if ( cr4 & X86_CR4_CET )
-            write_cr4(cr4);
+        /*
+         * Reinstate perms on .text to be RX.  This also cleans out the dirty
+         * bits, which matters when CET Shstk is active.
+         */
+        modify_xen_mappings_lite(XEN_VIRT_START + MB(2),
+                                 (unsigned long)&__2M_text_end,
+                                 PAGE_HYPERVISOR_RX);
+        flush_local(FLUSH_TLB_GLOBAL);
 
         alt_done |= alt_todo;
     }
@@ -454,19 +458,6 @@ static void __init _alternative_instructions(bool force)
         panic("Timed out waiting for alternatives self-NMI to hit\n");
 
     set_nmi_callback(saved_nmi_callback);
-
-    /*
-     * When Xen is using shadow stacks, the alternatives clearing CR0.WP and
-     * writing into the mappings set dirty bits, turning the mappings into
-     * shadow stack mappings.
-     *
-     * While we can execute from them, this would also permit them to be the
-     * target of WRSS instructions, so reset the dirty after patching.
-     */
-    if ( cpu_has_xen_shstk )
-        modify_xen_mappings(XEN_VIRT_START + MB(2),
-                            (unsigned long)&__2M_text_end,
-                            PAGE_HYPERVISOR_RX);
 }
 
 void __init alternative_instructions(void)
diff --git a/xen/arch/x86/livepatch.c b/xen/arch/x86/livepatch.c
index f2d783fdc567..a54d991c5f0f 100644
--- a/xen/arch/x86/livepatch.c
+++ b/xen/arch/x86/livepatch.c
@@ -61,46 +61,32 @@ int arch_livepatch_safety_check(void)
 
 int noinline arch_livepatch_quiesce(void)
 {
-    /* If Shadow Stacks are in use, disable CR4.CET so we can modify CR0.WP. */
-    if ( cpu_has_xen_shstk )
-        write_cr4(read_cr4() & ~X86_CR4_CET);
-
-    /* Disable WP to allow changes to read-only pages. */
-    write_cr0(read_cr0() & ~X86_CR0_WP);
+    /*
+     * Relax perms on .text to be RWX, so we can modify them.
+     *
+     * This relaxes perms globally, but all other CPUs are waiting on us.
+     */
+    relax_virtual_region_perms();
+    flush_local(FLUSH_TLB_GLOBAL);
 
     return 0;
 }
 
 void noinline arch_livepatch_revive(void)
 {
-    /* Reinstate WP. */
-    write_cr0(read_cr0() | X86_CR0_WP);
-
-    /* Clobber dirty bits and reinstate CET, if applicable. */
-    if ( IS_ENABLED(CONFIG_XEN_SHSTK) && cpu_has_xen_shstk )
-    {
-        unsigned long tmp;
-
-        reset_virtual_region_perms();
-
-        write_cr4(read_cr4() | X86_CR4_CET);
-
-        /*
-         * Fix up the return address on the shadow stack, which currently
-         * points at arch_livepatch_quiesce()'s caller.
-         *
-         * Note: this is somewhat fragile, and depends on both
-         * arch_livepatch_{quiesce,revive}() being called from the same
-         * function, which is currently the case.
-         *
-         * Any error will result in Xen dying with #CP, and its too late to
-         * recover in any way.
-         */
-        asm volatile ("rdsspq %[ssp];"
-                      "wrssq %[addr], (%[ssp]);"
-                      : [ssp] "=&r" (tmp)
-                      : [addr] "r" (__builtin_return_address(0)));
-    }
+    /*
+     * Reinstate perms on .text to be RX.  This also cleans out the dirty
+     * bits, which matters when CET Shstk is active.
+     *
+     * The other CPUs waiting for us could in principle have re-walked while
+     * we were patching and cached the reduced perms in their TLB.  Therefore,
+     * we need to do a global TLB flush.
+     *
+     * However, we can't use Xen's normal global TLB flush infrastructure, so
+     * delay the TLB flush to arch_livepatch_post_action(), which is called on
+     * all CPUs (including us) on the way out of patching.
+     */
+    tighten_virtual_region_perms();
 }
 
 int arch_livepatch_verify_func(const struct livepatch_func *func)
@@ -197,6 +183,8 @@ void noinline arch_livepatch_revert(const struct livepatch_func *func)
  */
 void noinline arch_livepatch_post_action(void)
 {
+    /* See arch_livepatch_revive() */
+    flush_local(FLUSH_TLB_GLOBAL);
 }
 
 static nmi_callback_t *saved_nmi_callback;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 36a07ef77eae..1707bcd2d15c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5879,6 +5879,77 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
     return modify_xen_mappings(s, e, _PAGE_NONE);
 }
 
+#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_HAS_ALTERNATIVE)
+/*
+ * Similar to modify_xen_mappings(), but used by the alternatives and
+ * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
+ * responsibility of the caller, and *MUST* not be introduced here.
+ *
+ * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
+ * Must be called with preset flags, and over present mappings.
+ * Must be called on leaf page boundaries.
+ */
+void modify_xen_mappings_lite(unsigned long s, unsigned long e, unsigned int _nf)
+{
+    unsigned long v = s, fm, nf;
+
+    /* Set of valid PTE bits which may be altered. */
+#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
+    _nf &= FLAGS_MASK;
+
+    fm = put_pte_flags(FLAGS_MASK);
+    nf = put_pte_flags(_nf);
+
+    ASSERT(nf & _PAGE_PRESENT);
+    ASSERT(IS_ALIGNED(s, PAGE_SIZE) && s >= XEN_VIRT_START);
+    ASSERT(IS_ALIGNED(e, PAGE_SIZE) && e <= XEN_VIRT_END);
+
+    while ( v < e )
+    {
+        l2_pgentry_t *pl2e = &l2_xenmap[l2_table_offset(v)];
+        l2_pgentry_t l2e = l2e_read_atomic(pl2e);
+        unsigned int l2f = l2e_get_flags(l2e);
+
+        ASSERT(l2f & _PAGE_PRESENT);
+
+        if ( l2e_get_flags(l2e) & _PAGE_PSE )
+        {
+            ASSERT(l1_table_offset(v) == 0);
+
+            l2e_write_atomic(pl2e, l2e_from_intpte((l2e.l2 & ~fm) | nf));
+
+            v += 1UL << L2_PAGETABLE_SHIFT;
+            continue;
+        }
+
+        /* else descend to l1 */
+        {
+            l1_pgentry_t *pl1t = map_l1t_from_l2e(l2e);
+
+            while ( v < e )
+            {
+                l1_pgentry_t *pl1e = &pl1t[l1_table_offset(v)];
+                l1_pgentry_t l1e = l1e_read_atomic(pl1e);
+                unsigned int l1f = l1e_get_flags(l1e);
+
+                ASSERT(l1f & _PAGE_PRESENT);
+
+                l1e_write_atomic(pl1e, l1e_from_intpte((l1e.l1 & ~fm) | nf));
+
+                v += 1UL << L1_PAGETABLE_SHIFT;
+
+                if ( l2_table_offset(v) == 0 )
+                    break;
+            }
+
+            unmap_domain_page(pl1t);
+        }
+    }
+
+#undef FLAGS_MASK
+}
+#endif /* LIVEPATCH || ALTERNATIVE */
+
 void __set_fixmap(
     enum fixed_addresses idx, unsigned long mfn, unsigned long flags)
 {
diff --git a/xen/common/virtual_region.c b/xen/common/virtual_region.c
index 5ecdba9c08ed..ddac5c9147e5 100644
--- a/xen/common/virtual_region.c
+++ b/xen/common/virtual_region.c
@@ -92,16 +92,28 @@ void unregister_virtual_region(struct virtual_region *r)
     remove_virtual_region(r);
 }
 
-#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_XEN_SHSTK)
-void reset_virtual_region_perms(void)
+#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_X86)
+void relax_virtual_region_perms(void)
 {
     const struct virtual_region *region;
 
     rcu_read_lock(&rcu_virtual_region_lock);
     list_for_each_entry_rcu( region, &virtual_region_list, list )
-        modify_xen_mappings((unsigned long)region->start,
-                            ROUNDUP((unsigned long)region->end, PAGE_SIZE),
-                            PAGE_HYPERVISOR_RX);
+        modify_xen_mappings_lite((unsigned long)region->start,
+                                 ROUNDUP((unsigned long)region->end, PAGE_SIZE),
+                                 PAGE_HYPERVISOR_RWX);
+    rcu_read_unlock(&rcu_virtual_region_lock);
+}
+
+void tighten_virtual_region_perms(void)
+{
+    const struct virtual_region *region;
+
+    rcu_read_lock(&rcu_virtual_region_lock);
+    list_for_each_entry_rcu( region, &virtual_region_list, list )
+        modify_xen_mappings_lite((unsigned long)region->start,
+                                 ROUNDUP((unsigned long)region->end, PAGE_SIZE),
+                                 PAGE_HYPERVISOR_RX);
     rcu_read_unlock(&rcu_virtual_region_lock);
 }
 #endif
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 9d14aed74baa..b0dc3ba9c98d 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -100,6 +100,7 @@ int map_pages_to_xen(
     unsigned int flags);
 /* Alter the permissions of a range of Xen virtual address space. */
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags);
+void modify_xen_mappings_lite(unsigned long s, unsigned long e, unsigned int flags);
 int destroy_xen_mappings(unsigned long s, unsigned long e);
 /* Retrieve the MFN mapped by VA in Xen virtual address space. */
 mfn_t xen_map_to_mfn(unsigned long va);
diff --git a/xen/include/xen/virtual_region.h b/xen/include/xen/virtual_region.h
index ba408eb87a1a..d05362071135 100644
--- a/xen/include/xen/virtual_region.h
+++ b/xen/include/xen/virtual_region.h
@@ -33,7 +33,9 @@ void setup_virtual_regions(const struct exception_table_entry *start,
 void unregister_init_virtual_region(void);
 void register_virtual_region(struct virtual_region *r);
 void unregister_virtual_region(struct virtual_region *r);
-void reset_virtual_region_perms(void);
+
+void relax_virtual_region_perms(void);
+void tighten_virtual_region_perms(void);
 
 #endif /* __XEN_VIRTUAL_REGION_H__ */
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Sat Apr 15 21:04:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Apr 2023 21:04:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521497.810215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnn4I-0000si-VN; Sat, 15 Apr 2023 21:04:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521497.810215; Sat, 15 Apr 2023 21:04:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnn4I-0000sb-SW; Sat, 15 Apr 2023 21:04:22 +0000
Received: by outflank-mailman (input) for mailman id 521497;
 Sat, 15 Apr 2023 21:04:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ScAW=AG=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnn4H-0000sV-Jo
 for xen-devel@lists.xenproject.org; Sat, 15 Apr 2023 21:04:21 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 16223bdf-dbd1-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 23:04:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16223bdf-dbd1-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681592657;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QvzaDLok1TQ4UelYWBZG7isf5NmoEpMzY70xaWHq6tY=;
	b=fIS1RUsnFKdb9xABGYfFiZCU2J2QWZeug/pOhRzYO2SSFiN67M2BraQ6i1WFD37roLQsyb
	0Xo1pTWRYZ/WwMogkjjzj3I+bbwhVwR7hTC3ROy5wKT1EIYulJpr/cXNAdOvjkRz7uB3BI
	klyFr/vToH0lZCb+hb3VXX9pfB0N3D76XKagNMZKWitt63oRwb7aPaIvEhJSApLEK7ILRG
	Zkcr6A4WMIslSyduOboeP87v5gUiVRgVXC8mEjpP6oV0s0oxKgYk+mkzY1TMBc6xKWwYqj
	lHlMmqQccgQg7IW71Y5KizHByK/0ncSB7/Q2HObdyAE/q+s6MPivfs7vv6Bz9w==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681592657;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QvzaDLok1TQ4UelYWBZG7isf5NmoEpMzY70xaWHq6tY=;
	b=uGGpswhVFZB8lospVRZK/FCDeguM0Ticu2FJ7GXNrivH09zniqOSmq2BEICy2eBqNRVNGH
	0xJtAf6AVJyuuYBQ==
To: Brian Gerst <brgerst@gmail.com>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Arjan van
 de Veen <arjan@linux.intel.com>, Paolo Bonzini <pbonzini@redhat.com>, Paul
 McKenney <paulmck@kernel.org>, Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>, Oleksandr Natalenko
 <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 David Woodhouse <dwmw@amazon.co.uk>, Usama Arif
 <usama.arif@bytedance.com>, Russell King <linux@armlinux.org.uk>, Arnd
 Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org, Catalin
 Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 19/37] x86/smpboot: Switch to hotplug core state
 synchronization
In-Reply-To: <CAMzpN2j4NbGGR=jfxpVVQwYCZ=hHOUKm3oBpw1WKGiTUJ73EXA@mail.gmail.com>
References: <20230414225551.858160935@linutronix.de>
 <20230414232310.382005483@linutronix.de>
 <CAMzpN2j4NbGGR=jfxpVVQwYCZ=hHOUKm3oBpw1WKGiTUJ73EXA@mail.gmail.com>
Date: Sat, 15 Apr 2023 23:04:16 +0200
Message-ID: <87pm84yi0f.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Sat, Apr 15 2023 at 08:58, Brian Gerst wrote:
> On Fri, Apr 14, 2023 at 7:44 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>>         pr_debug("Before bogomips\n");
>> -       for_each_possible_cpu(cpu)
>> -               if (cpumask_test_cpu(cpu, cpu_callout_mask))
>> +       for_each_possible_cpu(cpu) {
>> +               if (cpumask_test_cpu(cpu, cpu_online_mask))
>>                         bogosum += cpu_data(cpu).loops_per_jiffy;
>
> This should be the same as for_each_online_cpu().

Duh, yes. Obviously...


From xen-devel-bounces@lists.xenproject.org Sat Apr 15 21:06:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Apr 2023 21:06:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521501.810226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnn6a-0001SO-E1; Sat, 15 Apr 2023 21:06:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521501.810226; Sat, 15 Apr 2023 21:06:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnn6a-0001SH-9G; Sat, 15 Apr 2023 21:06:44 +0000
Received: by outflank-mailman (input) for mailman id 521501;
 Sat, 15 Apr 2023 21:06:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ScAW=AG=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pnn6Y-0001SB-LA
 for xen-devel@lists.xenproject.org; Sat, 15 Apr 2023 21:06:42 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6a27f27c-dbd1-11ed-8611-37d641c3527e;
 Sat, 15 Apr 2023 23:06:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a27f27c-dbd1-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681592799;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3DiOB1jbfBoIoBcKhJ+tlgldd4KD5aJr4Sg6j9ebPUI=;
	b=P2JRe5U4lJMHWRXIblQx0pIuiWd2AghPV4KuEWvmDuReSyOVhQCCEFeElO4eRip+BZst8j
	FEt9t2BOSLC4o1kel1IC1nngSPriEuvEd6hiCJGBQiNkbnbNfFcto1Vl3sKZG2ESMonhFs
	An5RJlKu7/tSGpzgHR6Z7KG43eMP0SiKp3vTl4fTzoc+BUJC8yEtxBJufLRq9lbmeibFXN
	sl7kTAsogFv4BEj0maXzziyeRxITPDLItnz8esS3e+KABEeGtBs5ADBH+SUHP2X3sjvsCF
	sA5xfsq3ODgQ/B4I43wW92xDuUJXBh2S1RtOiUBnGhQ7c+K3ylqiGKlbM1+Z/A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681592799;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3DiOB1jbfBoIoBcKhJ+tlgldd4KD5aJr4Sg6j9ebPUI=;
	b=QEW/hhbgWQeeaCKMhDdxi3o+5E2lnyqCY4azQRMF5wvL6dQ0zBCYuANaf/0HtZpTDBh4rO
	TIZpqZPJS0yZKOCg==
To: Brian Gerst <brgerst@gmail.com>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Arjan van
 de Veen <arjan@linux.intel.com>, Paolo Bonzini <pbonzini@redhat.com>, Paul
 McKenney <paulmck@kernel.org>, Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>, Oleksandr Natalenko
 <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 35/37] x86/smpboot: Support parallel startup of
 secondary CPUs
In-Reply-To: <CAMzpN2hUbYpYrqDL1ViXUWGKGa7mDEG6iHtWEZg9GvrAoRgvKQ@mail.gmail.com>
References: <20230414225551.858160935@linutronix.de>
 <20230414232311.379210081@linutronix.de>
 <CAMzpN2hUbYpYrqDL1ViXUWGKGa7mDEG6iHtWEZg9GvrAoRgvKQ@mail.gmail.com>
Date: Sat, 15 Apr 2023 23:06:38 +0200
Message-ID: <87mt38yhwh.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Sat, Apr 15 2023 at 09:22, Brian Gerst wrote:
> On Fri, Apr 14, 2023 at 7:45 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>> @@ -248,10 +311,20 @@ SYM_INNER_LABEL(secondary_startup_64_no_
>>          *
>>          * RDX contains the per-cpu offset
>>          */
>> -       movq    pcpu_hot + X86_current_task(%rdx), %rax
>> -       movq    TASK_threadsp(%rax), %rsp
>> +       movq    pcpu_hot + X86_top_of_stack(%rdx), %rsp
>
> Switching to using pcpu_hot.top_of_stack is ok, but it's not
> completely equivalent.  top_of_stack points to the end of the pt_regs
> structure, while the kernel stack starts below pt_regs even for kernel
> threads.  So you need to subtract PTREGS_SIZE from the stack pointer
> after this.
>
> This change should also be a separate patch.

You're right on both counts.


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 00:32:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 00:32:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521508.810234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnqJ4-0005ae-QV; Sun, 16 Apr 2023 00:31:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521508.810234; Sun, 16 Apr 2023 00:31:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnqJ4-0005aX-N1; Sun, 16 Apr 2023 00:31:50 +0000
Received: by outflank-mailman (input) for mailman id 521508;
 Sun, 16 Apr 2023 00:31:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnqJ3-0005aN-3I; Sun, 16 Apr 2023 00:31:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnqJ3-00033q-01; Sun, 16 Apr 2023 00:31:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnqJ2-0004bg-CK; Sun, 16 Apr 2023 00:31:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pnqJ2-0006OY-Bq; Sun, 16 Apr 2023 00:31:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=q5PmMIjcN5ILZryl32+UOw6oCPOaz6UL48viKRqgDq8=; b=yeePAaHcTLiB1NmaYN6hdBaKK1
	dfH0uoA752md0c6MvrBvNfiJZ8Y7D803QZ4feD0qd0p9y5dBrfOYqrWJ7V8ffREqkGT1jCHpYBHSQ
	b8OOxyhU4Jba7ktydi/ok1ky7jkn4XuO5LtGJdENtwaq/4hE/3FMkzdFjSIBYh52FZ+Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180269-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180269: trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-credit1:host-install(5):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:host-install(5):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7a934f4bd7d6f9da84c8812da3ba42ee10f5778e
X-Osstest-Versions-That:
    linux=44149752e9987a9eac5ad78e6d3a20934b5e018d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Apr 2023 00:31:48 +0000

flight 180269 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180269/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1     <job status>                 broken
 test-armhf-armhf-xl-multivcpu   <job status>                 broken

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1   5 host-install(5)       broken starved in 180253
 test-armhf-armhf-xl-multivcpu  5 host-install(5)      broken starved in 180253
 test-armhf-armhf-xl-arndale   8 xen-boot                fail baseline untested
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180253
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-examine      8 reboot                  fail starved in 180253
 test-armhf-armhf-libvirt      8 xen-boot                fail starved in 180253
 test-armhf-armhf-xl-vhd       8 xen-boot                fail starved in 180253
 test-armhf-armhf-xl-credit2   8 xen-boot                fail starved in 180253
 test-armhf-armhf-libvirt-qcow2  8 xen-boot              fail starved in 180253
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180253

version targeted for testing:
 linux                7a934f4bd7d6f9da84c8812da3ba42ee10f5778e
baseline version:
 linux                44149752e9987a9eac5ad78e6d3a20934b5e018d

Last test of basis   180253  2023-04-14 01:13:19 Z    1 days
Testing same since   180264  2023-04-14 18:10:06 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Cheng Xu <chengyou@linux.alibaba.com>
  Conor Dooley <conor.dooley@microchip.com>
  Erik Brakkee <erik@brakkee.org>
  Huang Rui <ray.huang@amd.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Leon Romanovsky <leon@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Maher Sanalla <msanalla@nvidia.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Zhang <markzhang@nvidia.com>
  Mathis Salmen <mathis.salmen@matsal.de>
  Mustafa Ismail <mustafa.ismail@intel.com>
  Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paul Menzel <pmenzel@molgen.mpg.de>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rui Salvaterra <rsalvaterra@gmail.com>
  Saravanan Vajravel <saravanan.vajravel@broadcom.com>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stefan Binding <sbinding@opensource.cirrus.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tatyana Nikolova <tatyana.e.nikolova@intel.com>
  Wyes Karny <wyes.karny@amd.com>
  Xu Biang <xubiang@hust.edu.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-step test-armhf-armhf-xl-credit1 host-install(5)
broken-step test-armhf-armhf-xl-multivcpu host-install(5)

Not pushing.

(No revision log; it would be 768 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 04:32:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 04:32:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521515.810245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnu3j-0002EN-0U; Sun, 16 Apr 2023 04:32:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521515.810245; Sun, 16 Apr 2023 04:32:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pnu3i-0002EF-Q6; Sun, 16 Apr 2023 04:32:14 +0000
Received: by outflank-mailman (input) for mailman id 521515;
 Sun, 16 Apr 2023 04:32:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnu3h-0002E5-Qy; Sun, 16 Apr 2023 04:32:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnu3h-0007Th-E2; Sun, 16 Apr 2023 04:32:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pnu3g-00065f-Op; Sun, 16 Apr 2023 04:32:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pnu3g-0005PL-OB; Sun, 16 Apr 2023 04:32:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GJ9RrBNn6QZtTZL7DNMlit2mwhE/1BBfCBZU7ztS2Ak=; b=xlQvRivDANIGer1Bv6lqq1XkMD
	KuuKyhPR9W52JHSGP3i7oZE04iHGx8foCgN5llCfl4CSvf2YBpzREhkGBVNV9Z3H5fgRu07no3F7Z
	yZNXmVxQcR2VjKMNAwkdb/Oe7tTqDoVVQ8U9P4ycL8yUbU2dBuMYtGCJRihqcyhfuzUI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180270-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180270: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt-raw:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    xen-unstable:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-arndale:host-install(5):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:host-install(5):broken:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:host-install(5):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-examine:examine-serial/bootloader:fail:nonblocking
    xen-unstable:test-armhf-armhf-examine:examine-serial/kernel:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=18c128ba66e6308744850aca96dbffd18f91c29b
X-Osstest-Versions-That:
    xen=f872a624cbf92de9944483eea7674ef80ced1380
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Apr 2023 04:32:12 +0000

flight 180270 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180270/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-raw    <job status>                 broken
 test-armhf-armhf-xl-arndale     <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken  in 180268

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-raw   7 xen-install                fail pass in 180268
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 180268

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   5 host-install(5)       broken baseline untested
 test-armhf-armhf-xl-rtds  5 host-install(5) broken in 180268 starved in 180238
 test-armhf-armhf-libvirt-raw  5 host-install(5)       broken starved in 180238
 test-armhf-armhf-xl-arndale   8 xen-boot      fail in 180268 baseline untested
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 180268 never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 180268 never pass
 test-armhf-armhf-examine 11 examine-serial/bootloader fail in 180268 starved in 180238
 test-armhf-armhf-examine 12 examine-serial/kernel fail in 180268 starved in 180238
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 180268 starved in 180238
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180238
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180238
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180238

version targeted for testing:
 xen                  18c128ba66e6308744850aca96dbffd18f91c29b
baseline version:
 xen                  f872a624cbf92de9944483eea7674ef80ced1380

Last test of basis   180238  2023-04-13 14:38:34 Z    2 days
Failing since        180256  2023-04-14 05:34:08 Z    1 days    4 attempts
Testing same since   180265  2023-04-14 22:10:49 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 broken  
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt-raw broken
broken-job test-armhf-armhf-xl-arndale broken
broken-step test-armhf-armhf-libvirt-raw host-install(5)
broken-step test-armhf-armhf-xl-arndale host-install(5)
broken-job test-armhf-armhf-xl-rtds broken

Not pushing.

------------------------------------------------------------
commit 18c128ba66e6308744850aca96dbffd18f91c29b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 26 14:57:45 2023 +0000

    x86/hvm: Disallow disabling paging in 64bit mode
    
    The Long Mode consistency checks exist to "ensure that the processor does not
    enter an undefined mode or state that results in unpredictable behavior".  APM
    Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
    preventing the OS from trying to exit Long mode while in 64bit mode.  This
    could leave the CPU in Protected Mode with an %rip above the 4G boundary.
    
    Experimentally, AMD CPUs really do permit this state transition.  An OS which
    tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
    to be going on behind the scenes ought to result in sane continued execution.
    
    Furthermore, right from the very outset, the APM Vol2 14.7 "Leaving Long Mode"
    section instructs people to switch to a compatibility mode segment first
    before clearing CR0.PG, which does clear out the upper bits in %rip.  This is
    further backed up by Vol2 Figure 1-6 "Operating Modes of the AMD64
    Architecture".
    
    Either way, this appears to have been a genuine oversight in the AMD64 spec.
    
    Intel, on the other hand, rejects this state transition with #GP.
    
    Between revision 71 (Nov 2019) and 72 (May 2020) of SDM Vol3, a footnote to
    4.1.2 "Paging-Mode Enable" was altered from
    
      If CR4.PCIDE= 1, an attempt to clear CR0.PG causes a general-protection
      exception (#GP); software should clear CR4.PCIDE before attempting to
      disable paging.
    
    to
    
      If the logical processor is in 64-bit mode or if CR4.PCIDE= 1, an attempt to
      clear CR0.PG causes a general-protection exception (#GP). Software should
      transition to compatibility mode and clear CR4.PCIDE before attempting to
      disable paging.
    
    which acknowledges this corner case, but there doesn't appear to be any other
    discussion even in the relevant Long Mode sections.
    
    So it appears that Intel spotted and addressed the corner case in IA-32e mode,
    but were 15 years late to document it.
    
    Xen was written to the AMD spec, and misses the check.  Follow the Intel
    behaviour, because it is more sensible and avoids hitting a VMEntry failure.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
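    The check described above can be sketched as follows.  This is an
    illustrative model only, not Xen's actual code: the function name,
    parameters, and structure are hypothetical, but the two #GP conditions
    mirror the Intel behaviour the commit adopts (reject clearing CR0.PG
    while in 64-bit mode, or while CR4.PCIDE is set).

    ```c
    #include <stdio.h>
    #include <stdbool.h>

    /* Architectural bit positions for the relevant control-register flags. */
    #define X86_CR0_PG    (1UL << 31)   /* Paging enable */
    #define X86_CR4_PCIDE (1UL << 17)   /* Process-context identifiers */

    /*
     * Hypothetical sketch of the consistency check: returns false (meaning
     * the write should raise #GP) if the guest tries to clear CR0.PG while
     * still in 64-bit mode, or while CR4.PCIDE is set.
     */
    bool cr0_write_permitted(unsigned long old_cr0, unsigned long new_cr0,
                             bool in_64bit_mode, unsigned long cr4)
    {
        bool clearing_pg = (old_cr0 & X86_CR0_PG) && !(new_cr0 & X86_CR0_PG);

        if (clearing_pg && in_64bit_mode)
            return false;  /* #GP: must drop to compatibility mode first */
        if (clearing_pg && (cr4 & X86_CR4_PCIDE))
            return false;  /* #GP: must clear CR4.PCIDE first */
        return true;
    }

    int main(void)
    {
        /* Clearing PG in 64-bit mode is rejected... */
        printf("%d\n", cr0_write_permitted(X86_CR0_PG, 0, true, 0));
        /* ...but permitted from compatibility mode with PCIDE clear. */
        printf("%d\n", cr0_write_permitted(X86_CR0_PG, 0, false, 0));
        return 0;
    }
    ```

    Note the AMD-documented escape hatch corresponds to the second call:
    switch to a compatibility mode segment (clearing the upper %rip bits)
    before clearing CR0.PG.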

commit 8363b1f62e561cfb73073b4b094516fcbbd7020e
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Apr 13 14:23:40 2023 +0200

    automation: switch ADL hw tests to debug build
    
    This should give a lot more useful information in case of a failure, and
    also enable some asserts for extra checks.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 10:54:52 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180271-mainreport@xen.org>
Subject: [linux-linus test] 180271: trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:host-install(5):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:host-install(5):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
X-Osstest-Versions-This:
    linux=a7a55e27ad72fb0dc9281d6211cffeebef8dde65
X-Osstest-Versions-That:
    linux=44149752e9987a9eac5ad78e6d3a20934b5e018d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Apr 2023 10:54:32 +0000

flight 180271 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180271/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu  5 host-install(5)      broken starved in 180253
 test-armhf-armhf-xl-rtds      5 host-install(5)       broken starved in 180253
 test-armhf-armhf-xl-arndale   8 xen-boot                fail baseline untested
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180253
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      8 reboot                  fail starved in 180253
 test-armhf-armhf-libvirt      8 xen-boot                fail starved in 180253
 test-armhf-armhf-xl-vhd       8 xen-boot                fail starved in 180253
 test-armhf-armhf-xl-credit2   8 xen-boot                fail starved in 180253
 test-armhf-armhf-libvirt-qcow2  8 xen-boot              fail starved in 180253
 test-armhf-armhf-xl           8 xen-boot                fail starved in 180253
 test-armhf-armhf-libvirt-raw  8 xen-boot                fail starved in 180253

version targeted for testing:
 linux                a7a55e27ad72fb0dc9281d6211cffeebef8dde65
baseline version:
 linux                44149752e9987a9eac5ad78e6d3a20934b5e018d

Last test of basis   180253  2023-04-14 01:13:19 Z    2 days
Failing since        180264  2023-04-14 18:10:06 Z    1 days    4 attempts
Testing same since   180271  2023-04-16 00:43:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Cheng Xu <chengyou@linux.alibaba.com>
  Christoph Hellwig <hch@lst.de>
  Conor Dooley <conor.dooley@microchip.com>
  Duy Truong <dory@dory.moe>
  Erik Brakkee <erik@brakkee.org>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Huang Rui <ray.huang@amd.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jens Axboe <axboe@kernel.dk>
  Jiri Kosina <jkosina@suse.cz>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Leon Romanovsky <leon@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Maher Sanalla <msanalla@nvidia.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Zhang <markzhang@nvidia.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mathis Salmen <mathis.salmen@matsal.de>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Michal Kolar <mich.k@seznam.cz>
  Ming Lei <ming.lei@redhat.com>
  Mustafa Ismail <mustafa.ismail@intel.com>
  Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paul Menzel <pmenzel@molgen.mpg.de>
  Peter Korsgaard <peter@korsgaard.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rui Salvaterra <rsalvaterra@gmail.com>
  Saravanan Vajravel <saravanan.vajravel@broadcom.com>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stefan Binding <sbinding@opensource.cirrus.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tatyana Nikolova <tatyana.e.nikolova@intel.com>
  Tharun Kumar P <tharunkumar.pasumarthi@microchip.com>
  Wolfram Sang <wsa@kernel.org>
  Wyes Karny <wyes.karny@amd.com>
  Xu Biang <xubiang@hust.edu.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-xl-rtds host-install(5)

Not pushing.

(No revision log; it would be 996 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 11:25:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 11:25:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521536.810265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po0V8-0000vc-Ri; Sun, 16 Apr 2023 11:24:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521536.810265; Sun, 16 Apr 2023 11:24:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po0V8-0000vV-Om; Sun, 16 Apr 2023 11:24:58 +0000
Received: by outflank-mailman (input) for mailman id 521536;
 Sun, 16 Apr 2023 11:24:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po0V7-0000vJ-Jx; Sun, 16 Apr 2023 11:24:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po0V7-0000iC-HA; Sun, 16 Apr 2023 11:24:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po0V6-0006pN-VK; Sun, 16 Apr 2023 11:24:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1po0V6-00056G-Uz; Sun, 16 Apr 2023 11:24:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6uWGC8a5Dk3oXB9Gbeq6qzDOFY+KoZ8L8solsVLZyYs=; b=O4bSj+almSv1vciFi/+Sbo1+3M
	qQCDTbzw7CukL0LnhtX6CpfkubB4f7M0geVwTehimyU/3l6a5vk1aj/7TdBmONZKesMHkgVwhF3c1
	Ruwi6t5VCTvysM3rbVOTt1vUiBgYPYIP2BBimrI+kjfLGQCg7FbpsDLXB1SyzGBykEZg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180273-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180273: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:kernel-build:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:host-install(5):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:host-install(5):broken:nonblocking
    xen-unstable:test-armhf-armhf-examine:host-install:broken:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-examine:examine-serial/bootloader:fail:nonblocking
    xen-unstable:test-armhf-armhf-examine:examine-serial/kernel:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=18c128ba66e6308744850aca96dbffd18f91c29b
X-Osstest-Versions-That:
    xen=f872a624cbf92de9944483eea7674ef80ced1380
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Apr 2023 11:24:56 +0000

flight 180273 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180273/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale     <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken  in 180268
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180238

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 180268

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   5 host-install(5)       broken baseline untested
 test-armhf-armhf-xl-rtds  5 host-install(5) broken in 180268 starved in 180238
 test-armhf-armhf-examine      5 host-install          broken starved in 180238
 test-armhf-armhf-xl-arndale   8 xen-boot      fail in 180268 baseline untested
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 180268 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 180268 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 180268 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 180268 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 180268 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 180268 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 180268 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 180268 never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 180268 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 180268 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 180268 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 180268 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 180268 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 180268 never pass
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 180268 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 180268 never pass
 test-armhf-armhf-examine 11 examine-serial/bootloader fail in 180268 starved in 180238
 test-armhf-armhf-examine 12 examine-serial/kernel fail in 180268 starved in 180238
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180238
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180238
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180238
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180238

version targeted for testing:
 xen                  18c128ba66e6308744850aca96dbffd18f91c29b
baseline version:
 xen                  f872a624cbf92de9944483eea7674ef80ced1380

Last test of basis   180238  2023-04-13 14:38:34 Z    2 days
Failing since        180256  2023-04-14 05:34:08 Z    2 days    5 attempts
Testing same since   180265  2023-04-14 22:10:49 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-arndale broken
broken-step test-armhf-armhf-examine host-install
broken-step test-armhf-armhf-xl-arndale host-install(5)
broken-job test-armhf-armhf-xl-rtds broken

Not pushing.

------------------------------------------------------------
commit 18c128ba66e6308744850aca96dbffd18f91c29b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 26 14:57:45 2023 +0000

    x86/hvm: Disallow disabling paging in 64bit mode
    
    The Long Mode consistency checks exist to "ensure that the processor does not
    enter an undefined mode or state that results in unpredictable behavior".  APM
    Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
    preventing the OS from trying to exit Long mode while in 64bit mode.  This
    could leave the CPU in Protected Mode with an %rip above the 4G boundary.
    
    Experimentally, AMD CPUs really do permit this state transition.  An OS which
    tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
    to be going on behind the scenes ought to result in sane continued execution.
    
    Furthermore, right from the very outset, the APM Vol2 14.7 "Leaving Long Mode"
    section instructs people to switch to a compatibility mode segment first
    before clearing CR0.PG, which does clear out the upper bits in %rip.  This is
    further backed up by Vol2 Figure 1-6 "Operating Modes of the AMD64
    Architecture".
    
    Either way, this appears to have been a genuine oversight in the AMD64 spec.
    
    Intel, on the other hand, rejects this state transition with #GP.
    
    Between revision 71 (Nov 2019) and 72 (May 2020) of SDM Vol3, a footnote to
    4.1.2 "Paging-Mode Enable" was altered from
    
      If CR4.PCIDE= 1, an attempt to clear CR0.PG causes a general-protection
      exception (#GP); software should clear CR4.PCIDE before attempting to
      disable paging.
    
    to
    
      If the logical processor is in 64-bit mode or if CR4.PCIDE= 1, an attempt to
      clear CR0.PG causes a general-protection exception (#GP). Software should
      transition to compatibility mode and clear CR4.PCIDE before attempting to
      disable paging.
    
    which acknowledges this corner case, but there doesn't appear to be any other
    discussion even in the relevant Long Mode sections.
    
    So it appears that Intel spotted and addressed the corner case in IA-32e mode,
    but were 15 years late to document it.
    
    Xen was written to the AMD spec, and misses the check.  Follow the Intel
    behaviour, because it is more sensible and avoids hitting a VMEntry failure.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8363b1f62e561cfb73073b4b094516fcbbd7020e
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Apr 13 14:23:40 2023 +0200

    automation: switch ADL hw tests to debug build
    
    This should give a lot more useful information in case of a failure, and
    also enable some asserts for extra checks.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 12:50:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 12:50:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521544.810275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po1q3-0001Ys-AR; Sun, 16 Apr 2023 12:50:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521544.810275; Sun, 16 Apr 2023 12:50:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po1q3-0001Yl-7T; Sun, 16 Apr 2023 12:50:39 +0000
Received: by outflank-mailman (input) for mailman id 521544;
 Sun, 16 Apr 2023 12:50:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po1q1-0001Yf-Kl
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 12:50:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po1q1-0002X8-5o; Sun, 16 Apr 2023 12:50:37 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po1q0-0004UO-Tp; Sun, 16 Apr 2023 12:50:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=d6s+DZMw/J0/1XlNrVNF3o4YPzAck+dr1lmeDx3RelA=; b=YDM0YLM8ma8e69yFNZC3pa6VMI
	EN80ZkYuBbVBbTjth5YcNXFZOQlGNQWTJrauFLeZf4pJVhzBlQsnGZqUWsO6W8COcmT60KD1fJToB
	PoroCuJcKjHJ7QMmW2epbPaJUfjqGuSoxExmtROvyaWMwIxIiW+4v9bWNo9ovxKcERG0=;
Message-ID: <bed2fc02-6b6a-aecc-e279-e7ea3ddffe7e@xen.org>
Date: Sun, 16 Apr 2023 13:50:35 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [PATCH 1/3] xen/arm: mark __guest_cmpxchg always_inline
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
 <20230414185714.292881-2-stewart.hildebrand@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230414185714.292881-2-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stewart,

On 14/04/2023 19:57, Stewart Hildebrand wrote:
> When building the hypervisor with -Og, we run into a __bad_cmpxchg link error:
> 
> aarch64-none-linux-gnu-ld: prelink.o: in function `__int_cmpxchg':
> .../xen/./arch/arm/include/asm/arm64/cmpxchg.h:117: undefined reference to `__bad_cmpxchg'
> aarch64-none-linux-gnu-ld: .../xen/./arch/arm/include/asm/arm64/cmpxchg.h:117: undefined reference to `__bad_cmpxchg'
> aarch64-none-linux-gnu-ld: ./.xen-syms.0: hidden symbol `__bad_cmpxchg' isn't defined
> aarch64-none-linux-gnu-ld: final link failed: bad value
> 
> This is due to the function __guest_cmpxchg not being inlined in the -Og build
> with gcc 12. Fix this by marking __guest_cmpxchg always_inline.
> 
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com
> ---
> I considered also changing "guest_cmpxchg64" just below in the same file to
> always_inline, but I decided not to because this function does not take a size
> parameter.

Makes sense. I will fix the signed-off-by line issue reported by Henry 
while committing:

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 12:53:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 12:53:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521548.810285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po1sd-00027p-O2; Sun, 16 Apr 2023 12:53:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521548.810285; Sun, 16 Apr 2023 12:53:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po1sd-00027i-L1; Sun, 16 Apr 2023 12:53:19 +0000
Received: by outflank-mailman (input) for mailman id 521548;
 Sun, 16 Apr 2023 12:53:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po1sc-00027c-F6
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 12:53:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po1sc-0002ZA-5w; Sun, 16 Apr 2023 12:53:18 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po1sc-0004Xq-1A; Sun, 16 Apr 2023 12:53:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=GVZZ0J9TiOyDXbYJOrOwpCrGuXjgqWxUp/y97wdmlfY=; b=S4fbaSXbv2NLoqEod64MC0OO4h
	BssaHvLgdRzT7CZsK97ladi6bheqH5kOiolQhKlLKEksxmh+mYsDMzrDBkxIxQg/J/oPLhU2oNq4T
	348WwdvEh6rWS8bhXSppI6PsBnOYYqQ9aHbVppU5vO/xZO7U7WQGdpdWJmG47lLELV7w=;
Message-ID: <5fb567c5-1e82-a048-1cfe-f6f69e0b5ebc@xen.org>
Date: Sun, 16 Apr 2023 13:53:16 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [PATCH 3/3] xen/arm: fix uninitialized use warning
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
 <20230414185714.292881-4-stewart.hildebrand@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230414185714.292881-4-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Stewart,

On 14/04/2023 19:57, Stewart Hildebrand wrote:
> When building the hypervisor with -Og, we encounter the following error:

Is this with GCC 12 as well?

> arch/arm/domain_build.c: In function ‘make_cpus_node’:
> arch/arm/domain_build.c:2040:12: error: ‘clock_valid’ may be used uninitialized [-Werror=maybe-uninitialized]
>   2040 |         if ( clock_valid )
>        |            ^
> arch/arm/domain_build.c:1947:10: note: ‘clock_valid’ was declared here
>   1947 |     bool clock_valid;
>        |          ^~~~~~~~~~~
> cc1: all warnings being treated as errors
> 
> Fix it by initializing the variable.
> 
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
> ---
> See previous discussion here
> https://lists.xenproject.org/archives/html/xen-devel/2022-10/msg00741.html
> ---
>   xen/arch/arm/domain_build.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 4f9d4f9d8867..18b350734a8e 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1944,7 +1944,7 @@ static int __init make_cpus_node(const struct domain *d, void *fdt)
>       /* Placeholder for cpu@ + a 32-bit hexadecimal number + \0 */
>       char buf[13];
>       u32 clock_frequency;
> -    bool clock_valid;
> +    bool clock_valid = false;

NIT: I would add "Keep the compiler happy with -Og"

I am happy to add it while committing if you agree.

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 12:58:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 12:58:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521553.810295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po1xI-0002kq-8p; Sun, 16 Apr 2023 12:58:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521553.810295; Sun, 16 Apr 2023 12:58:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po1xI-0002kj-6E; Sun, 16 Apr 2023 12:58:08 +0000
Received: by outflank-mailman (input) for mailman id 521553;
 Sun, 16 Apr 2023 12:58:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po1xG-0002kZ-TX; Sun, 16 Apr 2023 12:58:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po1xG-0002eJ-Qf; Sun, 16 Apr 2023 12:58:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po1xG-00020T-Cu; Sun, 16 Apr 2023 12:58:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1po1xG-0002Ty-CM; Sun, 16 Apr 2023 12:58:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Rp9UZqRU4Yn8+j1dNfSgu/F7/75/bVc0HaWE4qynx2Q=; b=jIW6WbHMqsVH5zQ27gjbVoRFcp
	rC0Khvsz1XX3e+ThAEFJLEywMy7f2Sqs0dqKXOYzyCvMqkFMuruZj/GrC0VP6glLpAjnuO5ED0ys7
	6E9ImwHyAet1ihtUuXITIm8Tpn28mUXvu4rj/qTxOXte1fhKgst8DMXOJmWsLlXV+MMU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180272-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180272: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=7cbbd45af115c24dba1b1be9631a32d6215ff0cc
X-Osstest-Versions-That:
    libvirt=ebd004a03dbddc52dd1b47bd6bc4607f553d5e70
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Apr 2023 12:58:06 +0000

flight 180272 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180272/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180255
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180255
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180255

version targeted for testing:
 libvirt              7cbbd45af115c24dba1b1be9631a32d6215ff0cc
baseline version:
 libvirt              ebd004a03dbddc52dd1b47bd6bc4607f553d5e70

Last test of basis   180255  2023-04-14 04:18:49 Z    2 days
Testing same since   180267  2023-04-15 04:20:21 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   ebd004a03d..7cbbd45af1  7cbbd45af115c24dba1b1be9631a32d6215ff0cc -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 14:32:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 14:32:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521563.810324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po3Qf-0004qJ-K2; Sun, 16 Apr 2023 14:32:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521563.810324; Sun, 16 Apr 2023 14:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po3Qf-0004qC-HN; Sun, 16 Apr 2023 14:32:33 +0000
Received: by outflank-mailman (input) for mailman id 521563;
 Sun, 16 Apr 2023 14:32:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po3Qd-0004kg-QI
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 14:32:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po3Qd-0004yt-EG; Sun, 16 Apr 2023 14:32:31 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po3Qd-0008OW-5c; Sun, 16 Apr 2023 14:32:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=PrRA+a+hArk7xUtxnyazbLpJIuTfGr9tkFt/D2rBWsA=; b=pI6H98/Eb5rWMeWAc7MEAK1S2+
	yOC08oZ3byjikv3015NjSxPj3tSTwhTZBVS9Ayh2iDREcWeEzXdTifJnBrx9U4y1xT5jlW5LQYBt5
	5Fm3aWUtF7NnBJOMi9EMr7M2RM4Y/gS1xjo8coNnPHRFCXCvOQ2PY6RJpxFgooMwc1Yk=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v7 2/5] xen/arm64: Rework the memory layout
Date: Sun, 16 Apr 2023 15:32:08 +0100
Message-Id: <20230416143211.72227-3-julien@xen.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230416143211.72227-1-julien@xen.org>
References: <20230416143211.72227-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Xen is currently not fully compliant with the Arm Arm because it will
switch the TTBR with the MMU on.

In order to be compliant, we need to disable the MMU before
switching the TTBR. The implication is the page-tables should
contain an identity mapping of the code switching the TTBR.

In most cases we expect Xen to be loaded in low memory. I am aware
of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
To give us some slack, consider that Xen may be loaded in the first 2TB
of the physical address space.

The memory layout is reshuffled to keep the first four slots of the zeroeth
level free. All the regions currently in L0 slot 0 will now be part of
slot 4 (2TB). This requires a slight tweak of the boot code because
XEN_VIRT_START (2TB + 2MB) cannot be used as an immediate.

This reshuffle will make it trivial to create a 1:1 mapping when Xen is
loaded below 2TB.

Lastly, take the opportunity to check at compile time whether any of the
regions may overlap with the area reserved for the identity mapping.

Signed-off-by: Julien Grall <jgrall@amazon.com>

----
    Changes in v7:
        - Remove all tags
        - Add BUILD_BUG_ON()s
        - Don't forget to update FRAMETABLE_VIRT_START and
          VMAP_VIRT_START

    Changes in v6:
        - Correct the BUILD_BUG_ON(), Xen virtual address should be
          above 2TB (i.e. slot0 > 4).
        - Add Bertrand's reviewed-by

    Changes in v5:
        - We are reserving 4 slots rather than 2.
        - Fix the addresses in the layout comment.
        - Fix the size of the region in the layout comment
        - Add Luca's tested-by (the reviewed-by was not added
          because of the changes requested by Michal)
        - Add Michal's reviewed-by

    Changes in v4:
        - Correct the documentation
        - The start address is 2TB, so slot0 is 4 not 2.

    Changes in v2:
        - Reword the commit message
        - Load Xen at 2TB + 2MB
        - Update the documentation to reflect the new layout
---
 xen/arch/arm/arm64/head.S         |  3 ++-
 xen/arch/arm/include/asm/config.h | 38 +++++++++++++++++++++----------
 xen/arch/arm/mm.c                 | 23 +++++++++++++++----
 3 files changed, 46 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 4a3f87117c83..663f5813b12e 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -607,7 +607,8 @@ create_page_tables:
          * need an additional 1:1 mapping, the virtual mapping will
          * suffice.
          */
-        cmp   x19, #XEN_VIRT_START
+        ldr   x0, =XEN_VIRT_START
+        cmp   x19, x0
         bne   1f
         ret
 1:
diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 5df0e4c4959b..2cfe5e480256 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -72,16 +72,13 @@
 #include <xen/page-size.h>
 
 /*
- * Common ARM32 and ARM64 layout:
+ * ARM32 layout:
  *   0  -   2M   Unmapped
  *   2M -   4M   Xen text, data, bss
  *   4M -   6M   Fixmap: special-purpose 4K mapping slots
  *   6M -  10M   Early boot mapping of FDT
  *   10M - 12M   Livepatch vmap (if compiled in)
  *
- * ARM32 layout:
- *   0  -  12M   <COMMON>
- *
  *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
  * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
  *                    space
@@ -90,14 +87,23 @@
  *   2G -   4G   Domheap: on-demand-mapped
  *
  * ARM64 layout:
- * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
- *   0  -  12M   <COMMON>
+ * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
+ *
+ *  Reserved to identity map Xen
+ *
+ * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4])
+ *  (Relative offsets)
+ *   0  -   2M   Unmapped
+ *   2M -   4M   Xen text, data, bss
+ *   4M -   6M   Fixmap: special-purpose 4K mapping slots
+ *   6M -  10M   Early boot mapping of FDT
+ *  10M -  12M   Livepatch vmap (if compiled in)
  *
  *   1G -   2G   VMAP: ioremap and early_ioremap
  *
  *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
  *
- * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
+ * 0x0000028000000000 - 0x00007fffffffffff (125TB, L0 slots [5..255])
  *  Unused
  *
  * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
@@ -107,7 +113,17 @@
  *  Unused
  */
 
+#ifdef CONFIG_ARM_32
 #define XEN_VIRT_START          _AT(vaddr_t, MB(2))
+#else
+
+#define SLOT0_ENTRY_BITS  39
+#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
+#define SLOT0_ENTRY_SIZE  SLOT0(1)
+
+#define XEN_VIRT_START          (SLOT0(4) + _AT(vaddr_t, MB(2)))
+#endif
+
 #define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))
 
 #define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
@@ -163,14 +179,12 @@
 
 #else /* ARM_64 */
 
-#define SLOT0_ENTRY_BITS  39
-#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
-#define SLOT0_ENTRY_SIZE  SLOT0(1)
+#define IDENTITY_MAPPING_AREA_NR_L0  4
 
-#define VMAP_VIRT_START  GB(1)
+#define VMAP_VIRT_START  (SLOT0(4) + GB(1))
 #define VMAP_VIRT_SIZE   GB(1)
 
-#define FRAMETABLE_VIRT_START  GB(32)
+#define FRAMETABLE_VIRT_START  (SLOT0(4) + GB(32))
 #define FRAMETABLE_SIZE        GB(32)
 #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index b99806af996c..1d09d61dd922 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -153,7 +153,19 @@ static void __init __maybe_unused build_assertions(void)
 #endif
     /* Page table structure constraints */
 #ifdef CONFIG_ARM_64
-    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
+    /*
+     * The first few slots of the L0 table are reserved for the identity
+     * mapping. Check that none of the other regions are overlapping
+     * with it.
+     */
+#define CHECK_OVERLAP_WITH_IDMAP(virt) \
+    BUILD_BUG_ON(zeroeth_table_offset(virt) < IDENTITY_MAPPING_AREA_NR_L0)
+
+    CHECK_OVERLAP_WITH_IDMAP(XEN_VIRT_START);
+    CHECK_OVERLAP_WITH_IDMAP(VMAP_VIRT_START);
+    CHECK_OVERLAP_WITH_IDMAP(FRAMETABLE_VIRT_START);
+    CHECK_OVERLAP_WITH_IDMAP(DIRECTMAP_VIRT_START);
+#undef CHECK_OVERLAP_WITH_IDMAP
 #endif
     BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
 #ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
@@ -496,10 +508,11 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
     phys_offset = boot_phys_offset;
 
 #ifdef CONFIG_ARM_64
-    p = (void *) xen_pgtable;
-    p[0] = pte_of_xenaddr((uintptr_t)xen_first);
-    p[0].pt.table = 1;
-    p[0].pt.xn = 0;
+    pte = pte_of_xenaddr((uintptr_t)xen_first);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+    xen_pgtable[zeroeth_table_offset(XEN_VIRT_START)] = pte;
+
     p = (void *) xen_first;
 #else
     p = (void *) cpu0_pgtable;
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Sun Apr 16 14:32:40 2023
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Luca Fancellu <luca.fancellu@arm.com>
Subject: [PATCH v7 5/5] xen/arm64: smpboot: Directly switch to the runtime page-tables
Date: Sun, 16 Apr 2023 15:32:11 +0100
Message-Id: <20230416143211.72227-6-julien@xen.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230416143211.72227-1-julien@xen.org>
References: <20230416143211.72227-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Switching TTBR while the MMU is on is not safe. Now that the identity
mapping will not clash with the rest of the memory layout, we can avoid
creating temporary page-tables every time a CPU is brought up.

The arm32 code will use a different approach. So this issue is for now
only resolved on arm64.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

----
    Changes in v7:
        - Remove the tested-by tag because of the layout reshuffle in
          the previous patches.

    Changes in v6:
        - Add Bertrand's reviewed-by

    Changes in v5:
        - Add Luca's reviewed-by and tested-by tags.

    Changes in v4:
        - Somehow I forgot to send it in v3. So re-include it.

    Changes in v2:
        - Remove arm32 code
---
 xen/arch/arm/arm32/smpboot.c   |  4 ++++
 xen/arch/arm/arm64/head.S      | 29 +++++++++--------------------
 xen/arch/arm/arm64/smpboot.c   | 15 ++++++++++++++-
 xen/arch/arm/include/asm/smp.h |  1 +
 xen/arch/arm/smpboot.c         |  1 +
 5 files changed, 29 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/arm32/smpboot.c b/xen/arch/arm/arm32/smpboot.c
index e7368665d50d..518e9f9c7e70 100644
--- a/xen/arch/arm/arm32/smpboot.c
+++ b/xen/arch/arm/arm32/smpboot.c
@@ -21,6 +21,10 @@ int arch_cpu_up(int cpu)
     return platform_cpu_up(cpu);
 }
 
+void arch_cpu_up_finish(void)
+{
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 5efd442b24af..a61b4d3c2738 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -308,6 +308,7 @@ real_start_efi:
         bl    check_cpu_mode
         bl    cpu_init
         bl    create_page_tables
+        load_paddr x0, boot_pgtable
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
@@ -365,29 +366,14 @@ GLOBAL(init_secondary)
 #endif
         bl    check_cpu_mode
         bl    cpu_init
-        bl    create_page_tables
+        load_paddr x0, init_ttbr
+        ldr   x0, [x0]
         bl    enable_mmu
 
         /* We are still in the 1:1 mapping. Jump to the runtime Virtual Address. */
         ldr   x0, =secondary_switched
         br    x0
 secondary_switched:
-        /*
-         * Non-boot CPUs need to move on to the proper pagetables, which were
-         * setup in init_secondary_pagetables.
-         *
-         * XXX: This is not compliant with the Arm Arm.
-         */
-        ldr   x4, =init_ttbr         /* VA of TTBR0_EL2 stashed by CPU 0 */
-        ldr   x4, [x4]               /* Actual value */
-        dsb   sy
-        msr   TTBR0_EL2, x4
-        dsb   sy
-        isb
-        tlbi  alle2
-        dsb   sy                     /* Ensure completion of TLB flush */
-        isb
-
 #ifdef CONFIG_EARLY_PRINTK
         /* Use a virtual address to access the UART. */
         ldr   x23, =EARLY_UART_VIRTUAL_ADDRESS
@@ -672,9 +658,13 @@ ENDPROC(create_page_tables)
  * mapping. In other word, the caller is responsible to switch to the runtime
  * mapping.
  *
- * Clobbers x0 - x3
+ * Inputs:
+ *   x0 : Physical address of the page tables.
+ *
+ * Clobbers x0 - x4
  */
 enable_mmu:
+        mov   x4, x0
         PRINT("- Turning on paging -\r\n")
 
         /*
@@ -685,8 +675,7 @@ enable_mmu:
         dsb   nsh
 
         /* Write Xen's PT's paddr into TTBR0_EL2 */
-        load_paddr x0, boot_pgtable
-        msr   TTBR0_EL2, x0
+        msr   TTBR0_EL2, x4
         isb
 
         mrs   x0, SCTLR_EL2
diff --git a/xen/arch/arm/arm64/smpboot.c b/xen/arch/arm/arm64/smpboot.c
index 694fbf67e62a..9637f424699e 100644
--- a/xen/arch/arm/arm64/smpboot.c
+++ b/xen/arch/arm/arm64/smpboot.c
@@ -106,10 +106,23 @@ int __init arch_cpu_init(int cpu, struct dt_device_node *dn)
 
 int arch_cpu_up(int cpu)
 {
+    int rc;
+
     if ( !smp_enable_ops[cpu].prepare_cpu )
         return -ENODEV;
 
-    return smp_enable_ops[cpu].prepare_cpu(cpu);
+    update_identity_mapping(true);
+
+    rc = smp_enable_ops[cpu].prepare_cpu(cpu);
+    if ( rc )
+        update_identity_mapping(false);
+
+    return rc;
+}
+
+void arch_cpu_up_finish(void)
+{
+    update_identity_mapping(false);
 }
 
 /*
diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h
index 8133d5c29572..a37ca55bff2c 100644
--- a/xen/arch/arm/include/asm/smp.h
+++ b/xen/arch/arm/include/asm/smp.h
@@ -25,6 +25,7 @@ extern void noreturn stop_cpu(void);
 extern int arch_smp_init(void);
 extern int arch_cpu_init(int cpu, struct dt_device_node *dn);
 extern int arch_cpu_up(int cpu);
+extern void arch_cpu_up_finish(void);
 
 int cpu_up_send_sgi(int cpu);
 
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 412ae2286906..4a89b3a8345b 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -500,6 +500,7 @@ int __cpu_up(unsigned int cpu)
     init_data.cpuid = ~0;
     smp_up_cpu = MPIDR_INVALID;
     clean_dcache(smp_up_cpu);
+    arch_cpu_up_finish();
 
     if ( !cpu_online(cpu) )
     {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Sun Apr 16 14:32:40 2023
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v7 1/5] xen/arm32: head: Widen the use of the temporary mapping
Date: Sun, 16 Apr 2023 15:32:07 +0100
Message-Id: <20230416143211.72227-2-julien@xen.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230416143211.72227-1-julien@xen.org>
References: <20230416143211.72227-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, the temporary mapping is only used when the virtual
runtime region of Xen is clashing with the physical region.

In follow-up patches, we will rework how secondary CPU bring-up works
and it will be convenient to use the fixmap area for accessing
the root page-table (it is per-cpu).

Rework the code to use the temporary mapping whenever the physical
address of Xen does not overlap with the temporary mapping.

This also has the advantage of simplifying the logic to identity
map Xen.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Henry Wang <Henry.Wang@arm.com>
Tested-by: Henry Wang <Henry.Wang@arm.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

----

Even though this patch rewrites part of the previous patch, I decided
to keep them separate to help the review.

The "follow-up patches" are still in draft at the moment. I still haven't
found a way to split them nicely without requiring too much more work
on the coloring side.

I have provided some medium-term goal in the cover letter.

    Changes in v6:
        - Add Henry's reviewed-by and tested-by tag
        - Add Michal's reviewed-by
        - Add newline in remove_identity_mapping for clarity

    Changes in v5:
        - Fix typo in a comment
        - No need to link boot_{second, third}_id again if we need to
          create a temporary area.

    Changes in v3:
        - Resolve conflicts after switching from "ldr rX, <label>" to
          "mov_w rX, <label>" in a previous patch

    Changes in v2:
        - Patch added
---
 xen/arch/arm/arm32/head.S | 86 ++++++++-------------------------------
 1 file changed, 16 insertions(+), 70 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index df51550baa8a..9befffd85079 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -459,7 +459,6 @@ ENDPROC(cpu_init)
 create_page_tables:
         /* Prepare the page-tables for mapping Xen */
         mov_w r0, XEN_VIRT_START
-        create_table_entry boot_pgtable, boot_second, r0, 1
         create_table_entry boot_second, boot_third, r0, 2
 
         /* Setup boot_third: */
@@ -479,70 +478,37 @@ create_page_tables:
         cmp   r1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512*8-byte entries per page */
         blo   1b
 
-        /*
-         * If Xen is loaded at exactly XEN_VIRT_START then we don't
-         * need an additional 1:1 mapping, the virtual mapping will
-         * suffice.
-         */
-        cmp   r9, #XEN_VIRT_START
-        moveq pc, lr
-
         /*
          * Setup the 1:1 mapping so we can turn the MMU on. Note that
          * only the first page of Xen will be part of the 1:1 mapping.
-         *
-         * In all the cases, we will link boot_third_id. So create the
-         * mapping in advance.
          */
+        create_table_entry boot_pgtable, boot_second_id, r9, 1
+        create_table_entry boot_second_id, boot_third_id, r9, 2
         create_mapping_entry boot_third_id, r9, r9
 
         /*
-         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
-         * then the 1:1 mapping will use its own set of page-tables from
-         * the second level.
+         * Find the first slot used. If the slot is not the same
+         * as TEMPORARY_AREA_FIRST_SLOT, then we will want to switch
+         * to the temporary mapping before jumping to the runtime
+         * virtual mapping.
          */
         get_table_slot r1, r9, 1     /* r1 := first slot */
-        cmp   r1, #XEN_FIRST_SLOT
-        beq   1f
-        create_table_entry boot_pgtable, boot_second_id, r9, 1
-        b     link_from_second_id
-
-1:
-        /*
-         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
-         * 1:1 mapping will use its own set of page-tables from the
-         * third level.
-         */
-        get_table_slot r1, r9, 2     /* r1 := second slot */
-        cmp   r1, #XEN_SECOND_SLOT
-        beq   virtphys_clash
-        create_table_entry boot_second, boot_third_id, r9, 2
-        b     link_from_third_id
+        cmp   r1, #TEMPORARY_AREA_FIRST_SLOT
+        bne   use_temporary_mapping
 
-link_from_second_id:
-        create_table_entry boot_second_id, boot_third_id, r9, 2
-link_from_third_id:
-        /* Good news, we are not clashing with Xen virtual mapping */
+        mov_w r0, XEN_VIRT_START
+        create_table_entry boot_pgtable, boot_second, r0, 1
         mov   r12, #0                /* r12 := temporary mapping not created */
         mov   pc, lr
 
-virtphys_clash:
+use_temporary_mapping:
         /*
-         * The identity map clashes with boot_third. Link boot_first_id and
-         * map Xen to a temporary mapping. See switch_to_runtime_mapping
-         * for more details.
+         * The identity mapping is not using the first slot
+         * TEMPORARY_AREA_FIRST_SLOT. Create a temporary mapping.
+         * See switch_to_runtime_mapping for more details.
          */
-        PRINT("- Virt and Phys addresses clash  -\r\n")
         PRINT("- Create temporary mapping -\r\n")
 
-        /*
-         * This will override the link to boot_second in XEN_FIRST_SLOT.
-         * The page-tables are not live yet. So no need to use
-         * break-before-make.
-         */
-        create_table_entry boot_pgtable, boot_second_id, r9, 1
-        create_table_entry boot_second_id, boot_third_id, r9, 2
-
         /* Map boot_second (cover Xen mappings) to the temporary 1st slot */
         mov_w r0, TEMPORARY_XEN_VIRT_START
         create_table_entry boot_pgtable, boot_second, r0, 1
@@ -675,33 +641,13 @@ remove_identity_mapping:
         /* r2:r3 := invalid page-table entry */
         mov   r2, #0x0
         mov   r3, #0x0
-        /*
-         * Find the first slot used. Remove the entry for the first
-         * table if the slot is not XEN_FIRST_SLOT.
-         */
+
+        /* Find the first slot used and remove it */
         get_table_slot r1, r9, 1     /* r1 := first slot */
-        cmp   r1, #XEN_FIRST_SLOT
-        beq   1f
-        /* It is not in slot 0, remove the entry */
         mov_w r0, boot_pgtable       /* r0 := root table */
         lsl   r1, r1, #3             /* r1 := Slot offset */
         strd  r2, r3, [r0, r1]
-        b     identity_mapping_removed
-
-1:
-        /*
-         * Find the second slot used. Remove the entry for the first
-         * table if the slot is not XEN_SECOND_SLOT.
-         */
-        get_table_slot r1, r9, 2     /* r1 := second slot */
-        cmp   r1, #XEN_SECOND_SLOT
-        beq   identity_mapping_removed
-        /* It is not in slot 1, remove the entry */
-        mov_w r0, boot_second        /* r0 := second table */
-        lsl   r1, r1, #3             /* r1 := Slot offset */
-        strd  r2, r3, [r0, r1]
 
-identity_mapping_removed:
         flush_xen_tlb_local r0
         mov   pc, lr
 ENDPROC(remove_identity_mapping)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Sun Apr 16 14:32:40 2023
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Luca Fancellu <luca.fancellu@arm.com>
Subject: [PATCH v7 4/5] xen/arm64: mm: Rework switch_ttbr()
Date: Sun, 16 Apr 2023 15:32:10 +0100
Message-Id: <20230416143211.72227-5-julien@xen.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230416143211.72227-1-julien@xen.org>
References: <20230416143211.72227-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

At the moment, switch_ttbr() is switching the TTBR whilst the MMU is
still on.

Switching TTBR is like replacing existing mappings with new ones. So
we need to follow the break-before-make sequence.

In this case, it means the MMU needs to be switched off while the
TTBR is updated. In order to disable the MMU, we need to first
jump to an identity mapping.

Rename switch_ttbr() to switch_ttbr_id() and create a helper on
top to temporarily map the identity mapping and call switch_ttbr_id()
via the identity address.

switch_ttbr_id() is now reworked to temporarily turn off the MMU
before updating the TTBR.

We also need to make sure the helper switch_ttbr_id() is part of the
identity mapping. So move _end_boot past it.

The arm32 code will use a different approach. So this issue is for now
only resolved on arm64.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

----
    Changes in v7:
        - Removed the tested-by tag because of the layout reshuffle in
          the previous patches.

    Changes in v6:
        - Add Michal's reviewed-by tag
        - Add Bertrand's reviewed-by tag

    Changes in v5:
        - Add a newline in switch_ttbr()
        - Add Luca's reviewed-by and tested-by

    Changes in v4:
        - Don't modify setup_pagetables() as we don't handle arm32.
        - Move the clearing of the boot page tables in an earlier patch
        - Fix the numbering

    Changes in v2:
        - Remove the arm32 changes. This will be addressed differently
        - Re-instate the instruction cache flush. This is not strictly
          necessary but is kept for safety.
        - Use "dsb ish"  rather than "dsb sy".


    TODO:
        * Handle the case where the runtime Xen is loaded at a different
          position for cache coloring. This will be dealt with separately.
---
 xen/arch/arm/arm64/head.S     | 50 +++++++++++++++++++++++------------
 xen/arch/arm/arm64/mm.c       | 31 ++++++++++++++++++++++
 xen/arch/arm/include/asm/mm.h |  2 ++
 xen/arch/arm/mm.c             |  2 --
 4 files changed, 66 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 663f5813b12e..5efd442b24af 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -816,30 +816,46 @@ ENDPROC(fail)
  * Switch TTBR
  *
  * x0    ttbr
- *
- * TODO: This code does not comply with break-before-make.
  */
-ENTRY(switch_ttbr)
-        dsb   sy                     /* Ensure the flushes happen before
-                                      * continuing */
-        isb                          /* Ensure synchronization with previous
-                                      * changes to text */
-        tlbi   alle2                 /* Flush hypervisor TLB */
-        ic     iallu                 /* Flush I-cache */
-        dsb    sy                    /* Ensure completion of TLB flush */
+ENTRY(switch_ttbr_id)
+        /* 1) Ensure any previous read/write have completed */
+        dsb    ish
+        isb
+
+        /* 2) Turn off MMU */
+        mrs    x1, SCTLR_EL2
+        bic    x1, x1, #SCTLR_Axx_ELx_M
+        msr    SCTLR_EL2, x1
+        isb
+
+        /*
+         * 3) Flush the TLBs.
+         * See asm/arm64/flushtlb.h for the explanation of the sequence.
+         */
+        dsb   nshst
+        tlbi  alle2
+        dsb   nsh
+        isb
+
+        /* 4) Update the TTBR */
+        msr   TTBR0_EL2, x0
         isb
 
-        msr    TTBR0_EL2, x0
+        /*
+         * 5) Flush I-cache
+         * This should not be necessary but it is kept for safety.
+         */
+        ic     iallu
+        isb
 
-        isb                          /* Ensure synchronization with previous
-                                      * changes to text */
-        tlbi   alle2                 /* Flush hypervisor TLB */
-        ic     iallu                 /* Flush I-cache */
-        dsb    sy                    /* Ensure completion of TLB flush */
+        /* 6) Turn on the MMU */
+        mrs   x1, SCTLR_EL2
+        orr   x1, x1, #SCTLR_Axx_ELx_M  /* Enable MMU */
+        msr   SCTLR_EL2, x1
         isb
 
         ret
-ENDPROC(switch_ttbr)
+ENDPROC(switch_ttbr_id)
 
 #ifdef CONFIG_EARLY_PRINTK
 /*
diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
index 56b9e9b8d3ef..78b7c7eb004f 100644
--- a/xen/arch/arm/arm64/mm.c
+++ b/xen/arch/arm/arm64/mm.c
@@ -120,6 +120,37 @@ void update_identity_mapping(bool enable)
     BUG_ON(rc);
 }
 
+extern void switch_ttbr_id(uint64_t ttbr);
+
+typedef void (switch_ttbr_fn)(uint64_t ttbr);
+
+void __init switch_ttbr(uint64_t ttbr)
+{
+    vaddr_t id_addr = virt_to_maddr(switch_ttbr_id);
+    switch_ttbr_fn *fn = (switch_ttbr_fn *)id_addr;
+    lpae_t pte;
+
+    /* Enable the identity mapping in the boot page tables */
+    update_identity_mapping(true);
+
+    /* Enable the identity mapping in the runtime page tables */
+    pte = pte_of_xenaddr((vaddr_t)switch_ttbr_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+    pte.pt.ro = 1;
+    write_pte(&xen_third_id[third_table_offset(id_addr)], pte);
+
+    /* Switch TTBR */
+    fn(ttbr);
+
+    /*
+     * Disable the identity mapping in the runtime page tables.
+     * Note it is not necessary to disable it in the boot page tables
+     * because they are not going to be used by this CPU anymore.
+     */
+    update_identity_mapping(false);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 23dec574eb31..4262165ce25e 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -207,6 +207,8 @@ extern unsigned long total_pages;
 extern void setup_pagetables(unsigned long boot_phys_offset);
 /* Map FDT in boot pagetable */
 extern void *early_fdt_map(paddr_t fdt_paddr);
+/* Switch to new root page-tables */
+extern void switch_ttbr(uint64_t ttbr);
 /* Remove early mappings */
 extern void remove_early_mappings(void);
 /* Allocate and initialise pagetables for a secondary CPU. Sets init_ttbr to the
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index b7104d8d33ba..74f6ff2c6f78 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -488,8 +488,6 @@ static void xen_pt_enforce_wnx(void)
     flush_xen_tlb_local();
 }
 
-extern void switch_ttbr(uint64_t ttbr);
-
 /* Clear a translation table and clean & invalidate the cache */
 static void clear_table(void *table)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Sun Apr 16 14:32:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 14:32:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521564.810335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po3Qg-00056M-SY; Sun, 16 Apr 2023 14:32:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521564.810335; Sun, 16 Apr 2023 14:32:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po3Qg-000569-PC; Sun, 16 Apr 2023 14:32:34 +0000
Received: by outflank-mailman (input) for mailman id 521564;
 Sun, 16 Apr 2023 14:32:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po3Qf-0004py-4o
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 14:32:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po3Qe-0004zA-Qn; Sun, 16 Apr 2023 14:32:32 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po3Qe-0008OW-Eu; Sun, 16 Apr 2023 14:32:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=tlYimgkZ12PXkrEbryHvJ50cYG3oiNsgKm1fSTsy9SQ=; b=uZ6jzygfXPApkdJ7+RqIABXBrR
	YsxGDpb25ulKAuJtQCnf84JTn5gj+b4g/t+8va7BAv0nuBBNrBlHCs946+6PLtQds3S1sOqi2RWIr
	kYEBUG9gnCk2/miHCU8bInUwsUOv+fMGXeKCt+vGf1kLdsrFWiZhqRpBtgrwG15VNAKM=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v7 3/5] xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping
Date: Sun, 16 Apr 2023 15:32:09 +0100
Message-Id: <20230416143211.72227-4-julien@xen.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230416143211.72227-1-julien@xen.org>
References: <20230416143211.72227-1-julien@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

In follow-up patches we will need to have part of Xen identity mapped in
order to safely switch the TTBR.

On some platforms, the identity mapping may have to start at address 0.
If we always kept the identity region mapped, a NULL pointer dereference
would end up accessing a valid mapping.

It would be possible to relocate Xen to avoid clashing with address 0.
However, the identity mapping is only meant to be used in a few limited
places. Therefore it is better to keep the identity region invalid most
of the time.

Two new external helpers are introduced:
    - arch_setup_page_tables() will set up the page tables so that the
      mapping can easily be created afterwards.
    - update_identity_mapping() will create/remove the identity mapping

Signed-off-by: Julien Grall <jgrall@amazon.com>

----
    Changes in v7:
        - The definition of IDENTITY_MAPPING_AREA_NR_L0 was moved
          to the previous patch.

    Changes in v6:
        - Correctly check the placement of the identity mapping (take
          2).
        - Fix typos

    Changes in v5:
        - The reserved area for the identity mapping is 2TB (so 4 slots)
          rather than 512GB.

    Changes in v4:
        - Fix typo in a comment
        - Clarify which page-tables are updated

    Changes in v2:
        - Remove the arm32 part
        - Use a different logic for the boot page tables and the
          runtime ones because Xen may be running in a different place.
---
 xen/arch/arm/arm64/Makefile         |   1 +
 xen/arch/arm/arm64/mm.c             | 130 ++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm32/mm.h |   4 +
 xen/arch/arm/include/asm/arm64/mm.h |  13 +++
 xen/arch/arm/include/asm/setup.h    |  11 +++
 xen/arch/arm/mm.c                   |   6 +-
 6 files changed, 163 insertions(+), 2 deletions(-)
 create mode 100644 xen/arch/arm/arm64/mm.c

diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 6d507da0d44d..28481393e98f 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -10,6 +10,7 @@ obj-y += entry.o
 obj-y += head.o
 obj-y += insn.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
+obj-y += mm.o
 obj-y += smc.o
 obj-y += smpboot.o
 obj-y += traps.o
diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
new file mode 100644
index 000000000000..56b9e9b8d3ef
--- /dev/null
+++ b/xen/arch/arm/arm64/mm.c
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <xen/init.h>
+#include <xen/mm.h>
+
+#include <asm/setup.h>
+
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+
+static DEFINE_PAGE_TABLE(xen_first_id);
+static DEFINE_PAGE_TABLE(xen_second_id);
+static DEFINE_PAGE_TABLE(xen_third_id);
+
+/*
+ * The identity mapping may start at physical address 0. So we don't want
+ * to keep it mapped longer than necessary.
+ *
+ * When this is called, we are still using the boot_pgtable.
+ *
+ * We need to prepare the identity mapping for both the boot page tables
+ * and runtime page tables.
+ *
+ * The logic to create the entry is slightly different because Xen may
+ * be running at a different location at runtime.
+ */
+static void __init prepare_boot_identity_mapping(void)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    lpae_t pte;
+    DECLARE_OFFSETS(id_offsets, id_addr);
+
+    /*
+     * We will be re-using the boot ID tables. They may not have been
+     * zeroed but they should be unlinked. So it is fine to use
+     * clear_page().
+     */
+    clear_page(boot_first_id);
+    clear_page(boot_second_id);
+    clear_page(boot_third_id);
+
+    if ( id_offsets[0] >= IDENTITY_MAPPING_AREA_NR_L0 )
+        panic("Cannot handle ID mapping above 2TB\n");
+
+    /* Link first ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_first_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_pgtable[id_offsets[0]], pte);
+
+    /* Link second ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_second_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_first_id[id_offsets[1]], pte);
+
+    /* Link third ID table */
+    pte = mfn_to_xen_entry(virt_to_mfn(boot_third_id), MT_NORMAL);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&boot_second_id[id_offsets[2]], pte);
+
+    /* The mapping in the third table will be created at a later stage */
+}
+
+static void __init prepare_runtime_identity_mapping(void)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    lpae_t pte;
+    DECLARE_OFFSETS(id_offsets, id_addr);
+
+    if ( id_offsets[0] >= IDENTITY_MAPPING_AREA_NR_L0 )
+        panic("Cannot handle ID mapping above 2TB\n");
+
+    /* Link first ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_first_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_pgtable[id_offsets[0]], pte);
+
+    /* Link second ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_second_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_first_id[id_offsets[1]], pte);
+
+    /* Link third ID table */
+    pte = pte_of_xenaddr((vaddr_t)xen_third_id);
+    pte.pt.table = 1;
+    pte.pt.xn = 0;
+
+    write_pte(&xen_second_id[id_offsets[2]], pte);
+
+    /* The mapping in the third table will be created at a later stage */
+}
+
+void __init arch_setup_page_tables(void)
+{
+    prepare_boot_identity_mapping();
+    prepare_runtime_identity_mapping();
+}
+
+void update_identity_mapping(bool enable)
+{
+    paddr_t id_addr = virt_to_maddr(_start);
+    int rc;
+
+    if ( enable )
+        rc = map_pages_to_xen(id_addr, maddr_to_mfn(id_addr), 1,
+                              PAGE_HYPERVISOR_RX);
+    else
+        rc = destroy_xen_mappings(id_addr, id_addr + PAGE_SIZE);
+
+    BUG_ON(rc);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
index 8bfc906e7178..856f2dbec4ad 100644
--- a/xen/arch/arm/include/asm/arm32/mm.h
+++ b/xen/arch/arm/include/asm/arm32/mm.h
@@ -18,6 +18,10 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
 
 bool init_domheap_mappings(unsigned int cpu);
 
+static inline void arch_setup_page_tables(void)
+{
+}
+
 #endif /* __ARM_ARM32_MM_H__ */
 
 /*
diff --git a/xen/arch/arm/include/asm/arm64/mm.h b/xen/arch/arm/include/asm/arm64/mm.h
index aa2adac63189..e0bd23a6ed0c 100644
--- a/xen/arch/arm/include/asm/arm64/mm.h
+++ b/xen/arch/arm/include/asm/arm64/mm.h
@@ -1,6 +1,8 @@
 #ifndef __ARM_ARM64_MM_H__
 #define __ARM_ARM64_MM_H__
 
+extern DEFINE_PAGE_TABLE(xen_pgtable);
+
 /*
  * On ARM64, all the RAM is currently direct mapped in Xen.
  * Hence return always true.
@@ -10,6 +12,17 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
     return true;
 }
 
+void arch_setup_page_tables(void);
+
+/*
+ * Enable/disable the identity mapping in the live page-tables (i.e.
+ * the one pointed by TTBR_EL2).
+ *
+ * Note that nested call (e.g. enable=true, enable=true) is not
+ * supported.
+ */
+void update_identity_mapping(bool enable);
+
 #endif /* __ARM_ARM64_MM_H__ */
 
 /*
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index a926f30a2be4..66b27f2b57c1 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -166,6 +166,17 @@ u32 device_tree_get_u32(const void *fdt, int node,
 int map_range_to_domain(const struct dt_device_node *dev,
                         u64 addr, u64 len, void *data);
 
+extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
+
+#ifdef CONFIG_ARM_64
+extern DEFINE_BOOT_PAGE_TABLE(boot_first_id);
+#endif
+extern DEFINE_BOOT_PAGE_TABLE(boot_second_id);
+extern DEFINE_BOOT_PAGE_TABLE(boot_third_id);
+
+/* Find where Xen will be residing at runtime and return a PT entry */
+lpae_t pte_of_xenaddr(vaddr_t);
+
 extern const char __ro_after_init_start[], __ro_after_init_end[];
 
 struct init_info
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 1d09d61dd922..b7104d8d33ba 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -93,7 +93,7 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
 
 #ifdef CONFIG_ARM_64
 #define HYP_PT_ROOT_LEVEL 0
-static DEFINE_PAGE_TABLE(xen_pgtable);
+DEFINE_PAGE_TABLE(xen_pgtable);
 static DEFINE_PAGE_TABLE(xen_first);
 #define THIS_CPU_PGTABLE xen_pgtable
 #else
@@ -400,7 +400,7 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
         invalidate_icache();
 }
 
-static inline lpae_t pte_of_xenaddr(vaddr_t va)
+lpae_t pte_of_xenaddr(vaddr_t va)
 {
     paddr_t ma = va + phys_offset;
 
@@ -507,6 +507,8 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
 
     phys_offset = boot_phys_offset;
 
+    arch_setup_page_tables();
+
 #ifdef CONFIG_ARM_64
     pte = pte_of_xenaddr((uintptr_t)xen_first);
     pte.pt.table = 1;
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Sun Apr 16 14:32:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 14:32:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521561.810304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po3Qc-0004Lu-1f; Sun, 16 Apr 2023 14:32:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521561.810304; Sun, 16 Apr 2023 14:32:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po3Qb-0004Ln-V6; Sun, 16 Apr 2023 14:32:29 +0000
Received: by outflank-mailman (input) for mailman id 521561;
 Sun, 16 Apr 2023 14:32:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po3Qb-0004Lh-7k
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 14:32:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po3Qa-0004yZ-Py; Sun, 16 Apr 2023 14:32:28 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po3Qa-0008OW-Fp; Sun, 16 Apr 2023 14:32:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=Xh8SqKA/Ig7OOw4PT3lH4WC8bUFxXubU2RCGdjI0Ccs=; b=jv8BIS
	YjhK35I9yR1g5N0UDjfUJ82rDGh69vVUkHNHQLFKxCopOFC+SNgsvu1SMkgfAXwyyoab36GeYEO8T
	fVR8Y5Zjv6N49quh36JYImS80lAY9yYGOzGzEdt+Koufo8l5PsWK9h95VghSc3GwDcf1pPRIjuyLw
	LN66MkjPbIg=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com,
	michal.orzel@amd.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v7 0/5] xen/arm: Don't switch TTBR while the MMU is on
Date: Sun, 16 Apr 2023 15:32:06 +0100
Message-Id: <20230416143211.72227-1-julien@xen.org>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Hi all,

Currently, Xen on Arm will switch TTBR whilst the MMU is on. This is
similar to replacing existing mappings with new ones. So we need to
follow a break-before-make sequence.

When switching the TTBR, we need to temporarily disable the MMU
before updating the TTBR. This means the page-tables must contain an
identity mapping.

The current memory layout is not very flexible and has a higher chance
of clashing with the identity mapping.

On Arm64, we have plenty of unused virtual address space. Therefore, we
can simply reshuffle the layout to leave the first part of the virtual
address space empty.

On Arm32, the virtual address space is already quite full. Even if we
found space, a dynamic layout would be necessary. So a different
approach is needed. The chosen one is to have a temporary mapping that
is used to jump from the ID mapping to the runtime mapping (or vice
versa). The temporary mapping overlaps the domheap area, as the latter
should not be in use while switching the MMU on/off.

The Arm32 part is not yet addressed and will be handled in a follow-up
series.

After this series, most of Xen page-table code should be compliant
with the Arm Arm. The last two issues I am aware of are:
 - domheap: Mappings are replaced without using the Break-Before-Make
   approach.
 - The cache is not cleaned/invalidated when updating the page-tables
   with Data cache off (like during early boot).

The long-term plan is to get rid of the boot_* page tables and then
directly use the runtime pages. This means that, for coloring, we will
need to build the pages in the relocated Xen rather than the current
Xen.

For convenience, I pushed a branch with everything applied:

https://xenbits.xen.org/git-http/people/julieng/xen-unstable.git
branch boot-pt-rework-v7

Cheers,

Julien Grall (5):
  xen/arm32: head: Widen the use of the temporary mapping
  xen/arm64: Rework the memory layout
  xen/arm64: mm: Introduce helpers to prepare/enable/disable the
    identity mapping
  xen/arm64: mm: Rework switch_ttbr()
  xen/arm64: smpboot: Directly switch to the runtime page-tables

 xen/arch/arm/arm32/head.S           |  86 +++------------
 xen/arch/arm/arm32/smpboot.c        |   4 +
 xen/arch/arm/arm64/Makefile         |   1 +
 xen/arch/arm/arm64/head.S           |  82 +++++++-------
 xen/arch/arm/arm64/mm.c             | 161 ++++++++++++++++++++++++++++
 xen/arch/arm/arm64/smpboot.c        |  15 ++-
 xen/arch/arm/include/asm/arm32/mm.h |   4 +
 xen/arch/arm/include/asm/arm64/mm.h |  13 +++
 xen/arch/arm/include/asm/config.h   |  38 ++++---
 xen/arch/arm/include/asm/mm.h       |   2 +
 xen/arch/arm/include/asm/setup.h    |  11 ++
 xen/arch/arm/include/asm/smp.h      |   1 +
 xen/arch/arm/mm.c                   |  31 ++++--
 xen/arch/arm/smpboot.c              |   1 +
 14 files changed, 320 insertions(+), 130 deletions(-)
 create mode 100644 xen/arch/arm/arm64/mm.c

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Sun Apr 16 14:38:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 14:38:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521588.810365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po3Vy-0007qY-9L; Sun, 16 Apr 2023 14:38:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521588.810365; Sun, 16 Apr 2023 14:38:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po3Vy-0007qR-6b; Sun, 16 Apr 2023 14:38:02 +0000
Received: by outflank-mailman (input) for mailman id 521588;
 Sun, 16 Apr 2023 14:38:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po3Vw-0007qL-Fh
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 14:38:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po3Vw-00056L-7B; Sun, 16 Apr 2023 14:38:00 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.23.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po3Vw-0000N0-1B; Sun, 16 Apr 2023 14:38:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=0u4RjbN6qXFZNpjJVhhdtHZg4RuQj4KR2WWQfd6YgLU=; b=IxCmwu++Kds5D9kODzrQLGdv6X
	mtEAzCuRR1zHF07Re0hZF2rbBstlZxyvDTTnah7VjSrusiQECIS0vnFYm9E6Z+BNEAU46KRzuS5n2
	M5Bz+qL14Ys3I7RL7i6dJxe9AdRdoaZ3M+43zIOsj6aQnIJttGxUPWiWp1vhuJ5KasL8=;
Message-ID: <5faadbe5-d019-3fe0-eedc-4e07f86b38ce@xen.org>
Date: Sun, 16 Apr 2023 15:37:58 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v3 3/4] xen/arm: Defer GICv2 CPU interface mapping until
 the first access
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <wei.chen@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230328071334.2098429-1-Henry.Wang@arm.com>
 <20230328071334.2098429-4-Henry.Wang@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230328071334.2098429-4-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Henry,

On 28/03/2023 08:13, Henry Wang wrote:
> Currently, the mapping of the GICv2 CPU interface is created in
> arch_domain_create(). This causes some troubles in populating and
> freeing of the domain P2M pages pool. For example, a default 16
> P2M pages are required in p2m_init() to cope with the P2M mapping
> of 8KB GICv2 CPU interface area, and these 16 P2M pages would cause
> the complexity of P2M destroy in the failure path of
> arch_domain_create().
> 
> As per discussion in [1], similarly as the MMIO access for ACPI, this
> patch defers the GICv2 CPU interface mapping until the first MMIO
> access. This is achieved by moving the GICv2 CPU interface mapping
> code from vgic_v2_domain_init()/vgic_v2_map_resources() to the
> stage-2 data abort trap handling code. The original CPU interface
> size and virtual CPU interface base address is now saved in
> `struct vgic_dist` instead of the local variable of
> vgic_v2_domain_init()/vgic_v2_map_resources().
> 
> Take the opportunity to unify the way of data access using the
> existing pointer to struct vgic_dist in vgic_v2_map_resources() for
> new GICv2.
> 
> Since gicv2_map_hwdom_extra_mappings() happens after domain_create(),
> there is no need to map the extra mappings on-demand; therefore the
> hwdom extra mappings are kept untouched.
> 
> [1] https://lore.kernel.org/xen-devel/e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org/
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 14:40:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 14:40:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521592.810375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po3YK-0000pW-Mf; Sun, 16 Apr 2023 14:40:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521592.810375; Sun, 16 Apr 2023 14:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po3YK-0000pP-Jf; Sun, 16 Apr 2023 14:40:28 +0000
Received: by outflank-mailman (input) for mailman id 521592;
 Sun, 16 Apr 2023 14:40:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po3YJ-0000pJ-En
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 14:40:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po3YJ-0005AC-1o; Sun, 16 Apr 2023 14:40:27 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.23.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po3YI-0000TV-RZ; Sun, 16 Apr 2023 14:40:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=OOaJ+8vNffk7s48mUmpmVVVIbeY7Bkju8QIE2ghOtF8=; b=Qic22POWKxGt425lF3wz3lFBM3
	sWO78K0JY7yXsVqa9jGO/zHEaHiuNJqQD5dO01js5bkWlqGNTXzbWLSviar+yDnQZh/asfoTvxoHd
	/B3iuRDLaQ24BdQCvvsUWKIX1M4ETKK+r8fkonsa/AXjEFps+GAQep+8NLG75/WzZc0Y=;
Message-ID: <092e0ca3-f10f-63bd-2b0b-62ac556f7233@xen.org>
Date: Sun, 16 Apr 2023 15:40:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v3 4/4] xen/arm: Clean-up in p2m_init() and
 p2m_final_teardown()
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <wei.chen@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Michal Orzel <michal.orzel@amd.com>
References: <20230328071334.2098429-1-Henry.Wang@arm.com>
 <20230328071334.2098429-5-Henry.Wang@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230328071334.2098429-5-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Henry,

On 28/03/2023 08:13, Henry Wang wrote:
> With the change in the previous patch, the initial 16 pages in the P2M
> pool are not necessary anymore. Drop them for code simplification.
> 
> Also the call to p2m_teardown() from arch_domain_destroy() is not
> necessary anymore since the movement of the P2M allocation out of
> arch_domain_create(). Drop the code and the above in-code comment
> mentioning it. Take the opportunity to fix a typo in the original
> in-code comment.
> 
> With above clean-up, the second parameter of p2m_teardown() is
> also not needed anymore. Drop this parameter and the logic related
> to this parameter.
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 15:10:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 15:10:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521596.810384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po41B-0004EZ-09; Sun, 16 Apr 2023 15:10:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521596.810384; Sun, 16 Apr 2023 15:10:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po41A-0004ES-Tr; Sun, 16 Apr 2023 15:10:16 +0000
Received: by outflank-mailman (input) for mailman id 521596;
 Sun, 16 Apr 2023 15:10:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po41A-0004EM-C1
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 15:10:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po41A-0005pQ-4D; Sun, 16 Apr 2023 15:10:16 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.23.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po419-0001k7-S3; Sun, 16 Apr 2023 15:10:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=4PHEj3HFRC40EsIa6IRecoXudbxCFnXZgHSqRkk5cEI=; b=If+rB0+W8safLbTHDt4r0yNLpv
	gPEdI+7l2Dh7snxf2S8bkxjga3YY7hh9fVCa4U639p47190Gb7kt2mdhTZrPJr5laQeWDyA/Jjuab
	AhTb9jE7SqyWTYPLRXlzOH5KlFtd6tyoMcpJq/BltW7Ves0IwPUQ9cbeWA7nnhvHVHh8=;
Message-ID: <7595ba77-899f-6aa2-a65c-64a0b34553ac@xen.org>
Date: Sun, 16 Apr 2023 16:10:13 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v3 0/4] P2M improvements for Arm
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <wei.chen@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230328071334.2098429-1-Henry.Wang@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230328071334.2098429-1-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Henry,

On 28/03/2023 08:13, Henry Wang wrote:
> There is some clean-up/improvement work that can be done in the
> Arm P2M code, triggered by [1] and [2]. These issues were found during
> the 4.17 code freeze, so they were not fixed at that time. Therefore,
> do the follow-ups here.
> 
> Patch#1 addresses one comment in [1]. It was sent earlier and reviewed
> once; the updated version, i.e. "[PATCH v2] xen/arm: Reduce redundant
> clear root pages when teardown p2m", is picked into this series.
> 
> Patch#2 is a new patch based on v1 comments; it is a prerequisite for
> patch#3, where the deferring of the GICv2 CPU interface mapping should
> also be applied to the new vGIC.
> 
> Patches #3 and #4 address the comment in [2], following the discussion
> of two possible options.
> 
> [1] https://lore.kernel.org/xen-devel/a947e0b4-8f76-cea6-893f-abf30ff95e0d@xen.org/
> [2] https://lore.kernel.org/xen-devel/e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org/
> 
> v2 -> v3:
> 1. Add Julien's acked-by tag for patch #2.
> 2. Reword the reason why hwdom extra mappings are not touched by this
>     patch in the commit message of patch #3.
> 3. Rework the address check in the stage-2 data abort trap so that
>     larger CPU interface sizes work correctly.
> 4. Correct a typo in the original in-code comment, and slightly modify
>     the wording to avoid assuming preemptive/non-preemptive
>     p2m_teardown() calls.
> 5. Drop the (now) unnecessary second parameter of p2m_teardown().
> 
> v1 -> v2:
> 1. Move in-code comment for p2m_force_tlb_flush_sync() on top of
>     p2m_clear_root_pages().
> 2. Add a new patch as patch #2.
> 3. Correct style in in-code comment in patch #3.
> 4. Avoid open-coding gfn_eq() and gaddr_to_gfn(d->arch.vgic.cbase).
> 5. Apply same changes for the new vGICv2 implementation, update the
>     commit message accordingly.
> 6. Add in-code comment in old GICv2's vgic_v2_domain_init() and
>     new GICv2's vgic_v2_map_resources() to mention the mapping of the
>     virtual CPU interface is deferred until first access.
> 7. Add reviewed-by and acked-by tags accordingly.
> 
> Henry Wang (4):
>    xen/arm: Reduce redundant clear root pages when teardown p2m
>    xen/arm: Rename vgic_cpu_base and vgic_dist_base for new vGIC
>    xen/arm: Defer GICv2 CPU interface mapping until the first access
>    xen/arm: Clean-up in p2m_init() and p2m_final_teardown()
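For readers following along: the deferred mapping described in patch #3 amounts to a range check in the stage-2 data abort path — if the faulting guest-physical address lands inside the not-yet-mapped virtual GICv2 CPU interface region, map it then and resume the guest. A minimal sketch of that check, with hypothetical names and types rather than Xen's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t; /* guest-physical address, as in Xen */

/*
 * Hypothetical helper: does a faulting guest-physical address fall
 * inside the virtual GICv2 CPU interface region [cbase, cbase+csize)?
 * Comparing against the region size, rather than a fixed constant,
 * is what lets larger CPU interface sizes work (v3 changelog item 3).
 */
static bool faults_in_vgic_cpu_region(paddr_t gpa, paddr_t cbase,
                                      paddr_t csize)
{
    /* The subtraction cannot underflow thanks to the first test. */
    return gpa >= cbase && (gpa - cbase) < csize;
}
```

On a hit, the handler would establish the p2m mapping and retry the access; on a miss, the fault is handled as before.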

I have committed the series.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 15:19:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 15:19:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521600.810395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po49y-0004sx-Sz; Sun, 16 Apr 2023 15:19:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521600.810395; Sun, 16 Apr 2023 15:19:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po49y-0004sp-PS; Sun, 16 Apr 2023 15:19:22 +0000
Received: by outflank-mailman (input) for mailman id 521600;
 Sun, 16 Apr 2023 15:19:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9kJX=AH=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1po49x-0004sj-59
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 15:19:21 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2073.outbound.protection.outlook.com [40.107.20.73])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0e013fae-dc6a-11ed-8611-37d641c3527e;
 Sun, 16 Apr 2023 17:19:18 +0200 (CEST)
Received: from AS9PR05CA0150.eurprd05.prod.outlook.com (2603:10a6:20b:497::32)
 by VE1PR08MB5615.eurprd08.prod.outlook.com (2603:10a6:800:1b3::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Sun, 16 Apr
 2023 15:18:37 +0000
Received: from AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:497:cafe::be) by AS9PR05CA0150.outlook.office365.com
 (2603:10a6:20b:497::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.44 via Frontend
 Transport; Sun, 16 Apr 2023 15:18:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT032.mail.protection.outlook.com (100.127.140.65) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.32 via Frontend Transport; Sun, 16 Apr 2023 15:18:37 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Sun, 16 Apr 2023 15:18:37 +0000
Received: from f4fda34f82d2.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9DFCA22E-3E6D-4BD1-9D7E-F537EFE1ED5E.1; 
 Sun, 16 Apr 2023 15:18:31 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f4fda34f82d2.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sun, 16 Apr 2023 15:18:31 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB5PR08MB10047.eurprd08.prod.outlook.com (2603:10a6:10:4a0::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.49; Sun, 16 Apr
 2023 15:18:22 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Sun, 16 Apr 2023
 15:18:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e013fae-dc6a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pWziZRY5p+euNPnr/zI7SOY687bH/aXy+6j954Y+z9U=;
 b=S9Vw3TNcYc9HWL4V2ZgDhAzK6PMl1IVC6znzJyfKJU7YTwekHJ+abpNSI2i5Ku8kd+9CSJh2eXShO+oCyVDDxLyUN1Dik5XaKx4tEgM98ETHNrV0+nLbwhJi3hpnGIC4GMJovFK7fqMmKDFxnbOqPlJ2cGvd4WbSCaOjsIkUPm4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YIj3fWxm+kTJFp4EUqXJHOpOGwO6vdauWaOr4GL5iy2dNUu52q1t0fLDX85zRhBrThHRRkNq20gbYAhRN8zEBm0TfRyIjLqtIN1Z4M1rfhGoa5rARlLvwCNAgosagyIj/vYGn2uzTMKuCJeqH8MvcnUfOpGmIXXXEoCrqUQDfW3InonGyGE1hp3hjvKsJuFY8ze9FamXldYgg3eJEe2Ghf0UNo+pSCkPQzVaSGvb8MVNix4lj3nMzat4MZg8JHmK6NY8K7Nd/+dHuQl94C+RsS3ZRA4+NaL9YSxLAyPIb8JmKQdL3ZPWsKSRDzDGhF4i3gEHLFLi2MHwR81RklRKhg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pWziZRY5p+euNPnr/zI7SOY687bH/aXy+6j954Y+z9U=;
 b=Z1zEGNEF9pDKNKjU1QnDm2ad6gSRQTKTwJzMeSRb3GXFI1e8nfQuZ6vZW5sgyVsz2349h6lme+q0+JMLS2zcoZU03PXzQZlsneS3VallG00wTHjmaxyJZNhLxp9iA8WIlM4SbTbpV+yMgLKKAr8PXQIXnWwOgem1vCgageBkHkZgyKsFyf/5rSYfVGtnT6rp6jFwYe0MfZVUHRYiT9iWWgC1LL0K0r1kl3o5oijOqw5XAp5IgUSsrB999uOsHIVFzTTVgxIiF9E9Hm5HNK4u9MZ9WluCYEjn//0mfHX/Q5AJZnS5QUCAPtCOL1h58Gh6TGFpnwvG03UOomciKUMlnA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pWziZRY5p+euNPnr/zI7SOY687bH/aXy+6j954Y+z9U=;
 b=S9Vw3TNcYc9HWL4V2ZgDhAzK6PMl1IVC6znzJyfKJU7YTwekHJ+abpNSI2i5Ku8kd+9CSJh2eXShO+oCyVDDxLyUN1Dik5XaKx4tEgM98ETHNrV0+nLbwhJi3hpnGIC4GMJovFK7fqMmKDFxnbOqPlJ2cGvd4WbSCaOjsIkUPm4=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v3 0/4] P2M improvements for Arm
Thread-Topic: [PATCH v3 0/4] P2M improvements for Arm
Thread-Index: AQHZYUTg6MW26c27J0WSUxzcC1g70a8uKIyAgAABGqA=
Date: Sun, 16 Apr 2023 15:18:21 +0000
Message-ID:
 <AS8PR08MB7991E697DBC2E12129FAA97C929F9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230328071334.2098429-1-Henry.Wang@arm.com>
 <7595ba77-899f-6aa2-a65c-64a0b34553ac@xen.org>
In-Reply-To: <7595ba77-899f-6aa2-a65c-64a0b34553ac@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 48E74DBE5D959B42900B25A3AA2AA4A3.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB5PR08MB10047:EE_|AM7EUR03FT032:EE_|VE1PR08MB5615:EE_
X-MS-Office365-Filtering-Correlation-Id: 376ffb82-502e-4611-c7f1-08db3e8dda12
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 wqWTLhs0e8kjHxiB+FWA7RV3oHiHAck1mDwZI3iefHSfOH1JXDNyjblIOlvYgeRyKSWSwuxRIEq5L16L1pIvkFxkCwhg89Ypfjb60rciJyWZrG8yQakXA6N7pKiA7DeUYb5RJTGS43cQP73WimMPEsC7Di18vxo5+vs11POyX/R0y2j/pDt2Dy0hCvmWY+kKRE7qOj0pN1HH6gxQX1xQOfj98gAMiKNW4r8TfQ/BxV8x5Xjx2ylr2cyFCp2/LUfKBcUL/IV/Cd4up8ARoM0dTpMnjMR2yGG2bFaB+QUx2ysvTNd4+/rmeOo3deA3t6YAfRAcK4OmDXbveAvZtq36PxsS4/yDQrLlSjRQZ7oApJUN2BS2NFbJ4wtl88eSNaLa/3h+unV94Y947qKtKmBc0M+/4P6ZOlhKiSR7BqKxLrdzmHjUXEwXu/v/DDVtyb+yZsifKt/epgOyJfhFsKNi2DBFQqw6+22+/7ZMktE5e6Kv08h7kjNskxBWznm9WCq2cyBBtmclmai0UhsZwVNTFd8/6koh4fegFBr7tQLcnK8ZrPScm+dV6rkBH5Ns2QpSWPAADg3oZqF/XDnc183U1Z1S+1ITSoyKzKiV4/vHMI0=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(39850400004)(366004)(396003)(346002)(136003)(451199021)(41300700001)(38100700002)(122000001)(8676002)(4326008)(66446008)(64756008)(66556008)(5660300002)(86362001)(33656002)(55016003)(8936002)(52536014)(66946007)(66476007)(38070700005)(76116006)(4744005)(186003)(110136005)(7696005)(83380400001)(316002)(966005)(478600001)(2906002)(54906003)(6506007)(26005)(9686003)(71200400001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB5PR08MB10047
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	987187f9-a26a-4e63-1bc9-08db3e8dd09f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JGUgkmXmhJ3KzRKz0H1kNOjgcHIgcTlSsM3s8KdIHjnLxwdasR07Q4ITrIkKh7khrQKhcXlb6X2NDaIVwcSh+IRMXfBOsUaLrwBlrPFIkeT8DrE6rlvJScwRuTDqcX2AxozBgQhEUnJYR7htsCxmGwW7es0E5E1eLGVP8KHGHGXYUAj/tLxpaiJnpND/RDA0+RMwwctgjybaMtSpFC7nY7Cu3yNX81rJ5R8zm8RE3fI0OMy0QANDciLqng1qS2ydLl7VYmPUQG9cIy9/gd+H54IqDvE8ykSW3N0tWwHePPva/Ip+4rQa+t5wBoCuOKWZLwtKBL1bFW8IJOCST9neJ3MTqhas5EeaWkukzi6q5QLYdFBnNpbSPolVcMlDQueGnVZ89GFt3QyB1CIL55NiSKH1499ySy4eCYQvLwHbHnPGM0HIYaUqedNApCdtLWYlAi9icapWBATxJBrkiqT8fZHmUVib/z0V2fM4++4X5EI4zq012zhRzXgLF3nX3N3YVzNJgPuq+/LlXmdhvhUuLc6usYrdhzLWC9xYrS8oBzGRtYKcCFnz6Zb5m3mxNheMYAF0eqVLZi43K8ByIkWyrDyIKxUdhTphSxr2VhdrrxpEtxqxrFFzkfdbhKihI08+1ZjFLBMHEAr5jV7m6LxlF/EFW0GLv8RRDJA92NZ6IO0=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39850400004)(396003)(136003)(346002)(376002)(451199021)(46966006)(36840700001)(966005)(7696005)(478600001)(86362001)(110136005)(55016003)(36860700001)(47076005)(336012)(83380400001)(33656002)(26005)(107886003)(40480700001)(186003)(6506007)(9686003)(82740400003)(82310400005)(81166007)(356005)(316002)(4326008)(70206006)(70586007)(4744005)(2906002)(8676002)(8936002)(5660300002)(52536014)(41300700001)(54906003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Apr 2023 15:18:37.2406
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 376ffb82-502e-4611-c7f1-08db3e8dda12
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5615

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Subject: Re: [PATCH v3 0/4] P2M improvements for Arm
>
> Hi Henry,
>
> > Henry Wang (4):
> >    xen/arm: Reduce redundant clear root pages when teardown p2m
> >    xen/arm: Rename vgic_cpu_base and vgic_dist_base for new vGIC
> >    xen/arm: Defer GICv2 CPU interface mapping until the first access
> >    xen/arm: Clean-up in p2m_init() and p2m_final_teardown()
>
> I have committed the series.

Many thanks for that! Also thanks for your review of the series.

While you are at it, would you mind also taking a look at [1]? I believe
I have addressed your comments, which correctly pointed out the format
of the "printk info" in v3 of that series.

If you have more comments, please don't hesitate to raise them and I
will take care of them tomorrow :))

Thanks!

[1] https://lore.kernel.org/xen-devel/20230201021513.336837-1-Henry.Wang@arm.com/

Kind regards,
Henry

>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 16:16:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 16:16:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521604.810405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po53S-0003GJ-36; Sun, 16 Apr 2023 16:16:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521604.810405; Sun, 16 Apr 2023 16:16:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po53S-0003GC-0D; Sun, 16 Apr 2023 16:16:42 +0000
Received: by outflank-mailman (input) for mailman id 521604;
 Sun, 16 Apr 2023 16:16:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po53R-0003G1-0g
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 16:16:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po53Q-0007hs-Nz; Sun, 16 Apr 2023 16:16:40 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po53Q-0004fi-EN; Sun, 16 Apr 2023 16:16:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=lQOD+WhzpSlsqvtfD8RwiIWMGCfren1uyVBmyLyLngs=; b=Gm7wyK2W5EgL/uo1AXp63Yz6Yp
	vJvrNtaps9RlX4x/iBImmZ2x1j4CkPA4Dd1Ooehotl9QPcXVichuOJJPuxIqkLMgTtK+GVe3PWURg
	QCcF5WI7bM3a+L2trKAwff9EbK5xFu+Mqg8iJDn/Y4olqediBZFa95p5/GR0xJSqlaK8=;
Message-ID: <c412e3a5-9fce-6b25-640b-561b3c563cb3@xen.org>
Date: Sun, 16 Apr 2023 17:16:38 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [PATCH v3 0/4] P2M improvements for Arm
To: Henry Wang <Henry.Wang@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230328071334.2098429-1-Henry.Wang@arm.com>
 <7595ba77-899f-6aa2-a65c-64a0b34553ac@xen.org>
 <AS8PR08MB7991E697DBC2E12129FAA97C929F9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AS8PR08MB7991E697DBC2E12129FAA97C929F9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 16/04/2023 16:18, Henry Wang wrote:
> Hi Julien,

Hi Henry,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Subject: Re: [PATCH v3 0/4] P2M improvements for Arm
>>
>> Hi Henry,
>>
>>> Henry Wang (4):
>>>     xen/arm: Reduce redundant clear root pages when teardown p2m
>>>     xen/arm: Rename vgic_cpu_base and vgic_dist_base for new vGIC
>>>     xen/arm: Defer GICv2 CPU interface mapping until the first access
>>>     xen/arm: Clean-up in p2m_init() and p2m_final_teardown()
>>
>> I have committed the series.
> 
> Many thanks for that! Also thanks for your review of the series.
>
> While you are at it, would you mind also taking a look at [1]? I believe
> I have addressed your comments, which correctly pointed out the format
> of the "printk info" in v3 of that series.

Sorry this fell through the cracks. I will have a look at it now.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 16:20:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 16:20:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521608.810416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po56r-0004fQ-L4; Sun, 16 Apr 2023 16:20:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521608.810416; Sun, 16 Apr 2023 16:20:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po56r-0004fJ-FM; Sun, 16 Apr 2023 16:20:13 +0000
Received: by outflank-mailman (input) for mailman id 521608;
 Sun, 16 Apr 2023 16:20:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po56q-0004eQ-9M
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 16:20:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po56q-0007lo-29; Sun, 16 Apr 2023 16:20:12 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po56p-0004lY-Qz; Sun, 16 Apr 2023 16:20:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=XSFsE8Hx/H0JzDaGHFB89rfXOYgq2FdVCM6+zj35iNc=; b=fgbb5B5QC8oKcI1VMOqfAclb6G
	1ZAjKahbaQpkvawYY/E2Vz8BsdH1uIbnCFDM3Y170JwOLyZMHhhvfh5slByszHlkRWCao4AH3IdIr
	fZi21OVdml4usCq0uDGxnulDSV8+sFYCHWD2+inQW0gtBZqRZp4bMXh/tKtJB8syhsVo=;
Message-ID: <b9374174-37b8-9bd1-2e2d-55efd22f8a14@xen.org>
Date: Sun, 16 Apr 2023 17:20:10 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [PATCH v4 0/3] Memory region overlap check in device tree
To: Henry Wang <Henry.Wang@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Chen <wei.chen@arm.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230201021513.336837-1-Henry.Wang@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230201021513.336837-1-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Henry,

On 01/02/2023 02:15, Henry Wang wrote:
> As more and more types of memory region are defined in the device
> tree, it is necessary to add an overlap check for these memory regions
> in Xen, because such a check helps users identify device tree
> misconfigurations early at boot time.
>
> The first patch introduces the basic memory overlap check mechanism
> and performs the check for the memory regions in bootinfo.reserved_mem.
> The following patches extend the overlap check to cover bootmodules
> and EfiACPIReclaimMemory.
> 
> v3 -> v4:
> 1. Correct printk error message, end should be exclusive.
> 2. Add comment explaining the unhandled case where '*_end' could be 0
>     if the module/region is at the end of the physical address space.
> 3. Add Stefano's reviewed-by tag.
> 
> v2 -> v3:
> 1. Use "[start, end]" format in printk error message.
> 2. Change the return type of helper functions to bool.
> 3. Use 'start' and 'size' in helper functions to describe a region.
> 
> v1 -> v2:
> - Split original `overlap_check()` to `meminfo_overlap_check()` and
>    `bootmodules_overlap_check()`.
> - Rework commit message.
> 
> Henry Wang (3):
>    xen/arm: Add memory overlap check for bootinfo.reserved_mem
>    xen/arm: Extend the memory overlap check to include bootmodules
>    xen/arm: Extend the memory overlap check to include
>      EfiACPIReclaimMemory
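For illustration, the per-region helper the changelog describes — taking a start address and a size per region and returning bool — can be sketched as below. The names are hypothetical, not Xen's actual functions; regions are treated as half-open, [start, start + size):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/*
 * Hypothetical sketch of the overlap test: two half-open regions
 * [s1, s1+size1) and [s2, s2+size2) overlap iff each one starts
 * before the other ends.
 *
 * Per the v4 changelog, one corner case is deliberately not handled
 * here: a region at the very end of the physical address space makes
 * 'start + size' wrap to 0.
 */
static bool regions_overlap(paddr_t s1, paddr_t size1,
                            paddr_t s2, paddr_t size2)
{
    paddr_t e1 = s1 + size1; /* exclusive end */
    paddr_t e2 = s2 + size2;

    return s1 < e2 && s2 < e1;
}
```

An all-pairs walk over bootinfo's regions would call this for every pair and report the offending pair on a hit.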

I have committed the series. Sorry for the delay.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 17:11:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 17:11:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521613.810424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po5uB-0001Xi-FG; Sun, 16 Apr 2023 17:11:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521613.810424; Sun, 16 Apr 2023 17:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po5uB-0001Xb-Cf; Sun, 16 Apr 2023 17:11:11 +0000
Received: by outflank-mailman (input) for mailman id 521613;
 Sun, 16 Apr 2023 17:11:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po5u9-0001XV-TM
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 17:11:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po5u7-0000XK-Hz; Sun, 16 Apr 2023 17:11:07 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po5u7-0003ro-CE; Sun, 16 Apr 2023 17:11:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:Subject:
	From:References:Cc:To:MIME-Version:Date:Message-ID;
	bh=8AONajZKn+lv7eWrYnz+VY0D8AXJ+oMC76DLB71utPU=; b=LPbY8LEbCPX+nKFd8m+8jYugve
	usPCRCCJ4HlCmP0a+EQjKn0DaZVaFdljL82hz9wFs2WnujCuS1XgKzwFGcCZC9RODiQyV0+6P0tu+
	1WNgK9qtg16pREhXaoeAik62xA5Bl57KE/nR99mEp0VwtGs7D48dmOp4cCd68qRKEgGM=;
Message-ID: <ca8cf5b9-ac8b-f48b-50ed-0b078f29c930@xen.org>
Date: Sun, 16 Apr 2023 18:11:05 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Oleksii <oleksii.kurochko@gmail.com>
References: <20230315185121.665635-1-andrew.cooper3@citrix.com>
 <4b50bbf0-96f7-c0ad-2bd4-cb204e8daef1@suse.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH] xen/riscv: Fix build with GCC 10
In-Reply-To: <4b50bbf0-96f7-c0ad-2bd4-cb204e8daef1@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 16/03/2023 07:21, Jan Beulich wrote:
> On 15.03.2023 19:51, Andrew Cooper wrote:
>>    riscv64-linux-gnu-gcc -MMD -MP -MF arch/riscv/.early_printk.o.d -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -O1 -fno-omit-frame-pointer -nostdinc -fno-builtin -fno-common -Werror -Wredundant-decls -Wno-pointer-arith -Wvla -pipe -D__XEN__ -include ./include/xen/config.h -Wa,--strip-local-absolute -g -mabi=lp64  -I./include -I./arch/riscv/include -march=rv64gc -mstrict-align -mcmodel=medany   -c arch/riscv/early_printk.c -o arch/riscv/early_printk.o
>>    arch/riscv/early_printk.c:18:2: error: #error "early_*() can be called from head.S with MMU-off"
>>       18 | #error "early_*() can be called from head.S with MMU-off"
>>          |  ^~~~~
>>
>>    $ riscv64-linux-gnu-gcc --version
>>    riscv64-linux-gnu-gcc (Debian 10.2.1-6) 10.2.1 20210110
>>
>> The binary is otherwise correct, so remove the incorrect check.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> I'm with Julien here - this needs further explaining: The compiler (even 8.2)
> clearly provides this definition with the given set of command line options,
> as supported by trying it out om godbolt. So there must be more to this -
> could be a bad patch in Debian's build, could be some odd interaction of
> command line options which for whatever reason only triggers with certain
> builds, or about anything else.

I tried to build the RISC-V port on my Debian Bullseye machine today.
The build failed for the same reason.

The Linux kernel (which has the exact same check) could be built, so I
decided to dig into why this happens.

The code below only compiles when both -mcmodel=medany and -fno-pie are
passed on the GCC command line:

#ifndef __riscv_cmodel_medany
#error "medany not enabled"
#endif

I am guessing that's because GCC on Debian has PIE enabled by default.
Xen is meant to be built with -fno-pie, but this is not on the command
line. Neither are any of the flags from EMBEDDED_EXTRA_CFLAGS.

Skimming through xen-devel, there is already a patch from Oleksii to
add those cflags (see [1]). With that in place, this patch becomes
unnecessary to build Xen for RISC-V on Debian.

Cheers,

[1] 
https://lore.kernel.org/all/2785518800dce64fafb3096480a5ae4c4e026bcb.1678970065.git.oleksii.kurochko@gmail.com/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 17:14:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 17:14:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521617.810435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po5xA-00026c-T5; Sun, 16 Apr 2023 17:14:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521617.810435; Sun, 16 Apr 2023 17:14:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po5xA-00026V-QK; Sun, 16 Apr 2023 17:14:16 +0000
Received: by outflank-mailman (input) for mailman id 521617;
 Sun, 16 Apr 2023 17:14:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1po5xA-00026P-BG
 for xen-devel@lists.xenproject.org; Sun, 16 Apr 2023 17:14:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po5x9-0000aB-EN; Sun, 16 Apr 2023 17:14:15 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1po5x9-0003x8-9E; Sun, 16 Apr 2023 17:14:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=N1+ZRfJS+TAIGVKceDPviNXUeXjeeAYAbDV2RmP++cI=; b=SWDBBb2FZIXdxrkWTfIMV8fhdh
	l6qd4+OTIYeZWhR6Jw2z2Y6AijQ+Jd+uWqdoJD0vNRURPxcgoBUUdtygpgA6D3ZZr2MH1tOBmU+6d
	LVIX3vWSL6NotCUmVRdScxyT7+xD66cu39qyiiYjswLT/j1XcmKeQMrQY/rxIV3yFHbU=;
Message-ID: <bee7fc62-d1e0-1dff-c1ad-289ef12ae8da@xen.org>
Date: Sun, 16 Apr 2023 18:14:13 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
Subject: Re: [PATCH] ARM+RISC-V: BSS handling improvements
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20230324222451.3295023-1-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230324222451.3295023-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 24/03/2023 22:24, Andrew Cooper wrote:
>   * Correct comments in arm{32,64}/head.S
>   * Provide Linker assertions to check the safety of the zeroing loops
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

This is now fully acked, so I have committed the patch.

Cheers,

> ---
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> CC: Bertrand Marquis <bertrand.marquis@arm.com>
> CC: Bob Eshleman <bobbyeshleman@gmail.com>
> CC: Alistair Francis <alistair.francis@wdc.com>
> CC: Connor Davis <connojdavis@gmail.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> Pulled out of the very start of my work to try and unify the handling of
> xen_phys_addr across architectures.
> ---
>   xen/arch/arm/arm32/head.S | 2 +-
>   xen/arch/arm/arm64/head.S | 2 +-
>   xen/arch/arm/xen.lds.S    | 2 ++
>   xen/arch/riscv/xen.lds.S  | 4 ++++
>   4 files changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index df51550baa8a..f9f7be9588b1 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -301,7 +301,7 @@ ENDPROC(check_cpu_mode)
>   zero_bss:
>           PRINT("- Zero BSS -\r\n")
>           mov_w r0, __bss_start        /* r0 := vaddr(__bss_start) */
> -        mov_w r1, __bss_end          /* r1 := vaddr(__bss_start) */
> +        mov_w r1, __bss_end          /* r1 := vaddr(__bss_end)   */
>   
>           mov   r2, #0
>   1:      str   r2, [r0], #4
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 4a3f87117c83..8a4dd64c99ad 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -437,7 +437,7 @@ zero_bss:
>   
>           PRINT("- Zero BSS -\r\n")
>           ldr   x0, =__bss_start       /* x0 := vaddr(__bss_start) */
> -        ldr   x1, =__bss_end         /* x1 := vaddr(__bss_start) */
> +        ldr   x1, =__bss_end         /* x1 := vaddr(__bss_end)   */
>   
>   1:      str   xzr, [x0], #8
>           cmp   x0, x1
> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
> index 1b392345bc3b..6ca3caefe607 100644
> --- a/xen/arch/arm/xen.lds.S
> +++ b/xen/arch/arm/xen.lds.S
> @@ -240,3 +240,5 @@ ASSERT(_idmap_end - _idmap_start <= PAGE_SIZE, "Identity mapped code is larger t
>    */
>   ASSERT(IS_ALIGNED(__init_begin,     4), "__init_begin is misaligned")
>   ASSERT(IS_ALIGNED(__init_end,       4), "__init_end is misaligned")
> +ASSERT(IS_ALIGNED(__bss_start,      POINTER_ALIGN), "__bss_start is misaligned")
> +ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
> diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
> index ca57cce75cba..2ed70eccc62a 100644
> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -1,3 +1,4 @@
> +#include <xen/lib.h>
>   #include <xen/xen.lds.h>
>   
>   #undef ENTRY
> @@ -156,3 +157,6 @@ SECTIONS
>   
>       ELF_DETAILS_SECTIONS
>   }
> +
> +ASSERT(IS_ALIGNED(__bss_start,      POINTER_ALIGN), "__bss_start is misaligned")
> +ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 19:19:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 19:19:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521624.810445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po7tr-0005mS-JP; Sun, 16 Apr 2023 19:18:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521624.810445; Sun, 16 Apr 2023 19:18:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po7tr-0005mL-GH; Sun, 16 Apr 2023 19:18:59 +0000
Received: by outflank-mailman (input) for mailman id 521624;
 Sun, 16 Apr 2023 19:18:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po7tp-0005m8-CM; Sun, 16 Apr 2023 19:18:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po7tp-0003Tr-3f; Sun, 16 Apr 2023 19:18:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po7to-0002wp-Kj; Sun, 16 Apr 2023 19:18:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1po7to-00020b-KD; Sun, 16 Apr 2023 19:18:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tlHDmTMMqfGEaWj3C9rwITNE69pha72RMRJKjLJIZHo=; b=6q2TaeE6XdpBu2idW6vKfTcxHp
	823XE+VLGgKjNelsrntmWO7ylOVNNi9l6/aIEHcDtKM7XwJuDSzxCukATgHw5q4kLfPpP9EAOT/SW
	NzjUKFTKLBTCxPWXLhGecVnWecZ/tKcPxs1MM/m4YJNJNmjG10paqASHfUu5kkyyW4QA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180274-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180274: trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-multivcpu:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:host-install(5):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:host-install(5):broken:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
X-Osstest-Versions-This:
    linux=3e7bb4f2461710b70887704af7f175383251088e
X-Osstest-Versions-That:
    linux=44149752e9987a9eac5ad78e6d3a20934b5e018d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Apr 2023 19:18:56 +0000

flight 180274 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180274/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-rtds        <job status>                 broken

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu  5 host-install(5)      broken starved in 180253
 test-armhf-armhf-xl-rtds      5 host-install(5)       broken starved in 180253
 test-armhf-armhf-xl-arndale   8 xen-boot                fail baseline untested
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180253
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      8 reboot                  fail starved in 180253
 test-armhf-armhf-libvirt      8 xen-boot                fail starved in 180253
 test-armhf-armhf-xl-vhd       8 xen-boot                fail starved in 180253
 test-armhf-armhf-libvirt-qcow2  8 xen-boot              fail starved in 180253
 test-armhf-armhf-xl           8 xen-boot                fail starved in 180253
 test-armhf-armhf-xl-credit2   8 xen-boot                fail starved in 180253
 test-armhf-armhf-libvirt-raw  8 xen-boot                fail starved in 180253

version targeted for testing:
 linux                3e7bb4f2461710b70887704af7f175383251088e
baseline version:
 linux                44149752e9987a9eac5ad78e6d3a20934b5e018d

Last test of basis   180253  2023-04-14 01:13:19 Z    2 days
Failing since        180264  2023-04-14 18:10:06 Z    2 days    5 attempts
Testing same since   180274  2023-04-16 10:58:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Cheng Xu <chengyou@linux.alibaba.com>
  Christoph Hellwig <hch@lst.de>
  Conor Dooley <conor.dooley@microchip.com>
  David Disseldorp <ddiss@suse.de>
  Duy Truong <dory@dory.moe>
  Erik Brakkee <erik@brakkee.org>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Huang Rui <ray.huang@amd.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jens Axboe <axboe@kernel.dk>
  Jiri Kosina <jkosina@suse.cz>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Leon Romanovsky <leon@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Maher Sanalla <msanalla@nvidia.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Zhang <markzhang@nvidia.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mathis Salmen <mathis.salmen@matsal.de>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Michal Kolar <mich.k@seznam.cz>
  Ming Lei <ming.lei@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com> # v5.10, v4.19
  Mustafa Ismail <mustafa.ismail@intel.com>
  Nicolas Schichan <nschichan@freebox.fr>
  Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paul Menzel <pmenzel@molgen.mpg.de>
  Peter Korsgaard <peter@korsgaard.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Richard Weinberger <richard@nod.at>
  Rui Salvaterra <rsalvaterra@gmail.com>
  Saravanan Vajravel <saravanan.vajravel@broadcom.com>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stefan Binding <sbinding@opensource.cirrus.com>
  Steve French <stfrench@microsoft.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tatyana Nikolova <tatyana.e.nikolova@intel.com>
  Tharun Kumar P <tharunkumar.pasumarthi@microchip.com>
  Wolfram Sang <wsa@kernel.org>
  Wyes Karny <wyes.karny@amd.com>
  Xu Biang <xubiang@hust.edu.cn>
  ZhaoLong Wang <wangzhaolong1@huawei.com>
  Zhihao Cheng <chengzhihao1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-rtds broken
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-xl-rtds host-install(5)

Not pushing.

(No revision log; it would be 1107 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 19:43:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 19:43:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521664.810455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po8HG-0000hw-CW; Sun, 16 Apr 2023 19:43:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521664.810455; Sun, 16 Apr 2023 19:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po8HG-0000hp-9o; Sun, 16 Apr 2023 19:43:10 +0000
Received: by outflank-mailman (input) for mailman id 521664;
 Sun, 16 Apr 2023 19:43:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po8HF-0000hf-4c; Sun, 16 Apr 2023 19:43:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po8HE-00041J-Sn; Sun, 16 Apr 2023 19:43:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po8HE-0003dh-Dn; Sun, 16 Apr 2023 19:43:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1po8HE-0002ae-DM; Sun, 16 Apr 2023 19:43:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+9sVUa063IJyB/BVvdjkaU2l9cu2wlA+sH3s1blW/XQ=; b=suLKG4hplhthrOhZxlhJk6IsJ7
	/bORhGrXidlB0v3VyENCIheEpVzzzRn8jvpJI/l3YQmhoOFIb1KtmP50ybir7fDm78WOEN3OIRtba
	MHAj/DhL27qwRgGVkZE8IIjb+Gb7EPklJJ38NVA6h0ewll6fsvjhrKRXVTCRgrkkYPDY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180276-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180276: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9c962e07fbf3d7e2f8f92d8834a60fe7d5600637
X-Osstest-Versions-That:
    xen=18c128ba66e6308744850aca96dbffd18f91c29b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Apr 2023 19:43:08 +0000

flight 180276 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180276/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9c962e07fbf3d7e2f8f92d8834a60fe7d5600637
baseline version:
 xen                  18c128ba66e6308744850aca96dbffd18f91c29b

Last test of basis   180263  2023-04-14 18:03:12 Z    2 days
Testing same since   180276  2023-04-16 16:03:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Henry Wang <Henry.Wang@arm.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   18c128ba66..9c962e07fb  9c962e07fbf3d7e2f8f92d8834a60fe7d5600637 -> smoke


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 20:27:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 20:27:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521671.810464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po8yE-0005Fk-OT; Sun, 16 Apr 2023 20:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521671.810464; Sun, 16 Apr 2023 20:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1po8yE-0005Fd-Ls; Sun, 16 Apr 2023 20:27:34 +0000
Received: by outflank-mailman (input) for mailman id 521671;
 Sun, 16 Apr 2023 20:27:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po8yD-0005FT-5W; Sun, 16 Apr 2023 20:27:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po8yC-000577-Un; Sun, 16 Apr 2023 20:27:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1po8yC-0005LA-F3; Sun, 16 Apr 2023 20:27:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1po8yC-0000cB-EY; Sun, 16 Apr 2023 20:27:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jdx1PLPkrfeJjN9pEnneMxJlu1Hfj1VOagqD8QJthec=; b=aQO/jN4+y2atOpbn/syR+3yP/2
	TylVh8iCp1Ef9FBUrlHuWaopPxQafhAML+jmNMnF67+LY+evxqV6ad7lY943JrDPNnrJWWHpwsqfL
	np0+u1P5xNDacBeiE5V7pIKaQGsnhcOs0NGh4PM8EMHtA/7h3Hmbtc0HH1OAQxF34NkY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180277-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180277: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6ded9f50c3aa123fe581c42ff6c03789b9b593c1
X-Osstest-Versions-That:
    ovmf=797f526ae2a83811b0ccbde0138c65a9f137eba5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Apr 2023 20:27:32 +0000

flight 180277 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180277/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6ded9f50c3aa123fe581c42ff6c03789b9b593c1
baseline version:
 ovmf                 797f526ae2a83811b0ccbde0138c65a9f137eba5

Last test of basis   180262  2023-04-14 16:12:08 Z    2 days
Testing same since   180277  2023-04-16 18:10:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pedro Falcato <pedro.falcato@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   797f526ae2..6ded9f50c3  6ded9f50c3aa123fe581c42ff6c03789b9b593c1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 23:41:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 23:41:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521690.810475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poBzP-0007rj-Fc; Sun, 16 Apr 2023 23:40:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521690.810475; Sun, 16 Apr 2023 23:40:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poBzP-0007rc-CD; Sun, 16 Apr 2023 23:40:59 +0000
Received: by outflank-mailman (input) for mailman id 521690;
 Sun, 16 Apr 2023 23:40:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poBzN-0007rR-Im; Sun, 16 Apr 2023 23:40:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poBzN-00011p-B3; Sun, 16 Apr 2023 23:40:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poBzM-0004Wd-Su; Sun, 16 Apr 2023 23:40:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1poBzM-0002Ht-SU; Sun, 16 Apr 2023 23:40:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6CaXq7816UIKlxsKKNV/lgTlkXnHCvx9tO34VEm0Abw=; b=SdjVNb6PLD8ilUHuIoCSDbDrai
	sfLZTfc+b6rpdGb6LgNJXUyl21Y14Zy2EuMduommT2dkgH3RSWTlGz3jsAY1TxO+YTUwY0CrwIo9j
	h7cnbj/JsYvlE9xMV0wMwdmoxFY4hP8fptEE3vc0vbyhZHOKrtMuFiwBtGykBYGr1hew=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180279-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180279: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=44843cee3d2b8daa09e5860fc4574219b57acde8
X-Osstest-Versions-That:
    xen=9c962e07fbf3d7e2f8f92d8834a60fe7d5600637
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Apr 2023 23:40:56 +0000

flight 180279 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180279/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  44843cee3d2b8daa09e5860fc4574219b57acde8
baseline version:
 xen                  9c962e07fbf3d7e2f8f92d8834a60fe7d5600637

Last test of basis   180276  2023-04-16 16:03:31 Z    0 days
Testing same since   180279  2023-04-16 20:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9c962e07fb..44843cee3d  44843cee3d2b8daa09e5860fc4574219b57acde8 -> smoke


From xen-devel-bounces@lists.xenproject.org Sun Apr 16 23:54:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Apr 2023 23:54:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521696.810485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poCCR-0000xz-Ms; Sun, 16 Apr 2023 23:54:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521696.810485; Sun, 16 Apr 2023 23:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poCCR-0000xs-JR; Sun, 16 Apr 2023 23:54:27 +0000
Received: by outflank-mailman (input) for mailman id 521696;
 Sun, 16 Apr 2023 23:54:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poCCP-0000xi-F6; Sun, 16 Apr 2023 23:54:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poCCP-0001Db-95; Sun, 16 Apr 2023 23:54:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poCCO-0004tz-TZ; Sun, 16 Apr 2023 23:54:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1poCCO-0006OF-T4; Sun, 16 Apr 2023 23:54:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QPP00Kgb1VoSVbUxPdYtAfWIppnhEtj9akDdIzCmKR0=; b=hjYAGmbBLyBxjDum6VUABC34+e
	Ki8dBRxJ25BTse/X065M7yuSFctUJm0QHUH+Lk81WN7bx3v6cU8yKSDevX9fyOo20WBBVevSB9rsn
	HXlTiUlS93VvO9NlRp2tno8jLQmrNayzobUTCPw4hvjBjacke1ZCCy1QnJDGI6qFmHJU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180275-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180275: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt-qcow2:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-arndale:host-install(5):broken:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:host-install(5):broken:nonblocking
    xen-unstable:test-armhf-armhf-xl:host-install(5):broken:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=18c128ba66e6308744850aca96dbffd18f91c29b
X-Osstest-Versions-That:
    xen=f872a624cbf92de9944483eea7674ef80ced1380
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Apr 2023 23:54:24 +0000

flight 180275 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180275/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-qcow2  <job status>                 broken
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl-arndale     <job status>                 broken

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   5 host-install(5)       broken baseline untested
 test-armhf-armhf-libvirt-qcow2  5 host-install(5)     broken starved in 180238
 test-armhf-armhf-xl           5 host-install(5)       broken starved in 180238
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180238
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 180238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180238
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180238

version targeted for testing:
 xen                  18c128ba66e6308744850aca96dbffd18f91c29b
baseline version:
 xen                  f872a624cbf92de9944483eea7674ef80ced1380

Last test of basis   180238  2023-04-13 14:38:34 Z    3 days
Failing since        180256  2023-04-14 05:34:08 Z    2 days    6 attempts
Testing same since   180265  2023-04-14 22:10:49 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               broken  
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-arndale broken
broken-step test-armhf-armhf-libvirt-qcow2 host-install(5)
broken-step test-armhf-armhf-xl host-install(5)
broken-step test-armhf-armhf-xl-arndale host-install(5)

Not pushing.

------------------------------------------------------------
commit 18c128ba66e6308744850aca96dbffd18f91c29b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 26 14:57:45 2023 +0000

    x86/hvm: Disallow disabling paging in 64bit mode
    
    The Long Mode consistency checks exist to "ensure that the processor does not
    enter an undefined mode or state that results in unpredictable behavior".  APM
    Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
    preventing the OS from trying to exit Long mode while in 64bit mode.  This
    could leave the CPU in Protected Mode with an %rip above the 4G boundary.
    
    Experimentally, AMD CPUs really do permit this state transition.  An OS which
    tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
    to be going on behind the scenes ought to result in sane continued execution.
    
    Furthermore, right from the very outset, the APM Vol2 14.7 "Leaving Long Mode"
    section instructs people to switch to a compatibility mode segment first
    before clearing CR0.PG, which does clear out the upper bits in %rip.  This is
    further backed up by Vol2 Figure 1-6 "Operating Modes of the AMD64
    Architecture".
    
    Either way, this appears to have been a genuine oversight in the AMD64 spec.
    
    Intel, on the other hand, rejects this state transition with #GP.
    
    Between revision 71 (Nov 2019) and 72 (May 2020) of SDM Vol3, a footnote to
    4.1.2 "Paging-Mode Enable" was altered from
    
      If CR4.PCIDE= 1, an attempt to clear CR0.PG causes a general-protection
      exception (#GP); software should clear CR4.PCIDE before attempting to
      disable paging.
    
    to
    
      If the logical processor is in 64-bit mode or if CR4.PCIDE= 1, an attempt to
      clear CR0.PG causes a general-protection exception (#GP). Software should
      transition to compatibility mode and clear CR4.PCIDE before attempting to
      disable paging.
    
    which acknowledges this corner case, but there doesn't appear to be any other
    discussion even in the relevant Long Mode sections.
    
    So it appears that Intel spotted and addressed the corner case in IA-32e mode,
    but were 15 years late to document it.
    
    Xen was written to the AMD spec, and misses the check.  Follow the Intel
    behaviour, because it is more sensible and avoids hitting a VMEntry failure.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
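
    The check described above can be sketched as follows. This is a hedged,
    illustrative fragment, not Xen's actual code: the constant names and the
    cr0_pg_clear_ok() helper are assumptions made for the example. It rejects
    an attempt to clear CR0.PG while the vCPU is in 64-bit mode (EFER.LMA set
    and CS.L set), while still permitting the documented legal sequence of
    clearing PG from compatibility mode (CS.L clear).

    ```c
    /* Illustrative sketch only, not Xen's code.  Names are assumed. */
    #include <stdbool.h>
    #include <stdint.h>

    #define X86_CR0_PG  (1UL << 31)
    #define EFER_LMA    (1UL << 10)  /* Long Mode Active */

    /* Returns false when the CR0 write should be rejected with #GP. */
    static bool cr0_pg_clear_ok(uint64_t old_cr0, uint64_t new_cr0,
                                uint64_t efer, bool cs_l)
    {
        bool clearing_pg   = (old_cr0 & X86_CR0_PG) && !(new_cr0 & X86_CR0_PG);
        bool in_64bit_mode = (efer & EFER_LMA) && cs_l;

        /* Clearing CR0.PG from 64-bit mode is the disallowed transition;
         * doing so from compatibility mode (CS.L=0) remains legal. */
        return !(clearing_pg && in_64bit_mode);
    }
    ```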

commit 8363b1f62e561cfb73073b4b094516fcbbd7020e
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Apr 13 14:23:40 2023 +0200

    automation: switch ADL hw tests to debug build
    
    This should give a lot more useful information in case of a failure, and
    also enable some asserts for extra checks.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 01:56:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 01:56:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521706.810495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poE6R-0003RE-Ku; Mon, 17 Apr 2023 01:56:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521706.810495; Mon, 17 Apr 2023 01:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poE6R-0003R6-G3; Mon, 17 Apr 2023 01:56:23 +0000
Received: by outflank-mailman (input) for mailman id 521706;
 Mon, 17 Apr 2023 01:56:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nDUx=AI=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1poE6P-0003R0-Ed
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 01:56:21 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20624.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::624])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 09258ca3-dcc3-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 03:56:18 +0200 (CEST)
Received: from DM6PR02CA0129.namprd02.prod.outlook.com (2603:10b6:5:1b4::31)
 by CH2PR12MB5001.namprd12.prod.outlook.com (2603:10b6:610:61::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 01:56:12 +0000
Received: from DM6NAM11FT061.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:1b4:cafe::57) by DM6PR02CA0129.outlook.office365.com
 (2603:10b6:5:1b4::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.46 via Frontend
 Transport; Mon, 17 Apr 2023 01:56:12 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT061.mail.protection.outlook.com (10.13.173.138) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.19 via Frontend Transport; Mon, 17 Apr 2023 01:56:11 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Sun, 16 Apr
 2023 20:56:11 -0500
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Sun, 16 Apr 2023 20:56:09 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09258ca3-dcc3-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d6h8uEcyrXaSqE7NLvJr78YzvVRWmVlDte/7ccNjxlmZIi4zOJSsYW05cTG1rwhlyGqFKbRpa/laPwOxR3J2iT5Elo/oQfq6qfX7rGdi/Xgdlg0eD39hn68Dw2jbioI+o/Mi/g2dv3aL0dMM3ptUJREOZLSsQyn9QxTwh4uch8nc9zaJTNPi9Iu+Lbz6asar9NIGPSwUMD2stGujzledz0+vgddT4SugiT0OFE379H8ZWnXfmv9NMaCBbyq4Bue3sYkZXKJcGkovyczB1l1VoQDOld4jclSNJXrfkVty9cDxD6/mgMxwIeXji6Yc5YjhPp5NbUCLnNHSgNKxewwhbg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=CWdT4+fYncLA3IivelBA4rg96u9zi5g/rldo/VCvoS8=;
 b=CtAdh902PevljIBMiwW1+G6V9zcYzG/7//Whp8/tnUexP8TgUj8I4We9MR+cpRNqHSIQULzC50NVaUU/wpvsFpqaMnKe4g/PPIBrmzGVUztxwTZt6XafLzQZB8hj7//Jas1Lx676tXv5niWmmILtizB1121NSf3Lv+qbPVww9T6ITwt305ra4YEuQRDdJaa9D4yG83MQX1A3fyzwlsGCv7Eh+4n7V1G2Qh3lyaNcPwS+i83usbjlPbQJ21Lq9Rdd1qIQXB0yQEr84JW0pW2QSlyzFKtQEHc0/OrDGQjOK4vqfg0KGIzqYdV98EPI40RSuEzJ+svP4Sucg3NHRoNFfw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CWdT4+fYncLA3IivelBA4rg96u9zi5g/rldo/VCvoS8=;
 b=1RZ7TW1r1dwONAwI1GDBpvCmtBaOt7MJSHhwOzEiZzbdscdkxpX4a8crjMOmXzcev0vBOB4rUaLxcQpa6/MCMXoIzIB1zdH7qLzumZsXIZygPjqzFim7ciVI7F/J2sAsbbbio+wxkA+njDcwSNxDlwwSPFumJNhHTd6v26dRCks=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <cb28fbd8-8fbc-4f71-f1f6-6d5c2a5f2c46@amd.com>
Date: Sun, 16 Apr 2023 21:56:09 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 1/3] xen/arm: mark __guest_cmpxchg always_inline
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
 <20230414185714.292881-2-stewart.hildebrand@amd.com>
 <bed2fc02-6b6a-aecc-e279-e7ea3ddffe7e@xen.org>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <bed2fc02-6b6a-aecc-e279-e7ea3ddffe7e@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT061:EE_|CH2PR12MB5001:EE_
X-MS-Office365-Filtering-Correlation-Id: 2c0ad564-9830-4d47-4094-08db3ee6eb96
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6exLxPMaOSLYgnbX9/0LAHL/6pBRon9n+8ay1b9iqfsfss2Gw8RZAfx8xvOZPpazO9pvmCSzCMrMKeNhlGMmxy49Dqe3XT9cN/1Zu8IocfgW29SLVqHnt4e4wDtdnENSIZgj7+jjQ9Hv18O8pPw49wZIoKLCmIo24EaT/WxFGwWMokGhZEg4idjbZgOkwSqcOmTV8gG2wgNhAcWu8ZsBC9kE5zNu1wMOo9yUWKKKoX/4l2VPc500Ax/8ksHc25HGvFDJjKwm/BMQn58FsC0ilY91yhgEzGM5DtUDm8LK8463sOgw4QeGcpPecmuSA4Jk5YuX0w8bq4Gce0x1EpZqqujurauAjQYyOtk87qcBzKmxGw66ODCUSqJRu6NypIVVVpevWe8OOm4gSGUB6kInKA7n2lZF4d6Kc64QtV5C5uI+UnmWmkturnxrSK9HLl0vjVUY2woLe/Oick+OCCv+V10lPhXrSBkeYCg1bd+5Y0Quf6A2psNGDg9blJaN9Fdu7fgfN6BmIsbpqfIlfKz1RY/4g3i1oaSRE60FaDWBv7dAgfWxyaL0v/wjIQ5Uof3SRZmRYZFEyLoIKigVTCYDnJJBAaapgfkWa+9ZSd7Q1gG4F5zFS9+NjahQ4flA0Iap03SadEG2dQwybYu0XW2OCrsMmNy5csJHa8IFtdjl+jEs+huAdOcnyR5XE9AalN5mUV3pCGA1Ea9txhEMB2FjRbAuISY5pr5wi2JRBnv5kLEVp3cZB7gPL2gjs10B+s9bcOzc3nhHoX4Z4B7gg0PaJA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(396003)(346002)(376002)(451199021)(36840700001)(46966006)(40470700004)(36756003)(44832011)(40460700003)(82310400005)(2906002)(5660300002)(8936002)(8676002)(356005)(41300700001)(81166007)(40480700001)(86362001)(31696002)(31686004)(478600001)(54906003)(2616005)(110136005)(16576012)(36860700001)(26005)(53546011)(186003)(336012)(4326008)(426003)(70206006)(70586007)(82740400003)(316002)(47076005)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 01:56:11.8168
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2c0ad564-9830-4d47-4094-08db3ee6eb96
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT061.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB5001

On 4/16/23 08:50, Julien Grall wrote:
> Hi Stewart,
> 
> On 14/04/2023 19:57, Stewart Hildebrand wrote:
>> When building the hypervisor with -Og, we run into a __bad_cmpxchg link error:
>>
>> aarch64-none-linux-gnu-ld: prelink.o: in function `__int_cmpxchg':
>> .../xen/./arch/arm/include/asm/arm64/cmpxchg.h:117: undefined reference to `__bad_cmpxchg'
>> aarch64-none-linux-gnu-ld: .../xen/./arch/arm/include/asm/arm64/cmpxchg.h:117: undefined reference to `__bad_cmpxchg'
>> aarch64-none-linux-gnu-ld: ./.xen-syms.0: hidden symbol `__bad_cmpxchg' isn't defined
>> aarch64-none-linux-gnu-ld: final link failed: bad value
>>
>> This is due to the function __guest_cmpxchg not being inlined in the -Og build
>> with gcc 12. Fix this by marking __guest_cmpxchg always_inline.
>>
>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com
>> ---
>> I considered also changing "guest_cmpxchg64" just below in the same file to
>> always_inline, but I decided not to because this function does not take a size
>> parameter.
> 
> Makes sense. I will fix the signed-off-by line issue reported by Henry
> while committing:
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Thank you!
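
The link error above comes from a common kernel idiom: a sentinel function
that is declared but deliberately never defined, so that any call reaching
the linker (i.e. not eliminated as dead code after inlining and constant
folding) fails the build. The sketch below shows the general pattern under
assumed names (`bad_size`, `checked_mask`); it is not Xen's
`__guest_cmpxchg`, and unlike Xen's `__bad_cmpxchg` the sentinel here is
defined as `abort()` purely so the sketch links and runs.

```c
/* Sketch of the link-time size-check idiom (assumed names, not Xen code). */
#include <stdlib.h>

/* Xen's real sentinel (__bad_cmpxchg) has no definition anywhere, turning
 * any surviving call into a link error.  Defined here only to be runnable. */
void bad_size(void) { abort(); }

static inline __attribute__((always_inline))
unsigned long checked_mask(unsigned long val, int size)
{
    switch ( size )
    {
    case 1: return val & 0xffUL;
    case 2: return val & 0xffffUL;
    case 4: return val & 0xffffffffUL;
    default:
        /* Dead code once 'size' is a compile-time constant and the call
         * is inlined; if the compiler declines to inline (as at -Og),
         * the reference survives and the link fails. */
        bad_size();
        return 0;
    }
}
```

This is why `always_inline` fixes the build: it forces inlining even at
`-Og`, letting constant propagation prove the default branch dead.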


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 02:03:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 02:03:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521710.810505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poEDJ-0005FW-Ao; Mon, 17 Apr 2023 02:03:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521710.810505; Mon, 17 Apr 2023 02:03:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poEDJ-0005FP-7k; Mon, 17 Apr 2023 02:03:29 +0000
Received: by outflank-mailman (input) for mailman id 521710;
 Mon, 17 Apr 2023 02:03:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nDUx=AI=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1poEDI-0005FJ-Fb
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 02:03:28 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on20610.outbound.protection.outlook.com
 [2a01:111:f400:7e8b::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0950f951-dcc4-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 04:03:26 +0200 (CEST)
Received: from DM6PR03CA0069.namprd03.prod.outlook.com (2603:10b6:5:100::46)
 by SJ0PR12MB6990.namprd12.prod.outlook.com (2603:10b6:a03:449::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.34; Mon, 17 Apr
 2023 02:03:22 +0000
Received: from DM6NAM11FT108.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:100:cafe::23) by DM6PR03CA0069.outlook.office365.com
 (2603:10b6:5:100::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.46 via Frontend
 Transport; Mon, 17 Apr 2023 02:03:21 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT108.mail.protection.outlook.com (10.13.172.95) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.19 via Frontend Transport; Mon, 17 Apr 2023 02:03:20 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Sun, 16 Apr
 2023 21:03:20 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Sun, 16 Apr
 2023 19:03:20 -0700
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Sun, 16 Apr 2023 21:03:19 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0950f951-dcc4-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=On2SCcY6aju3Jng6z40Qs4oriGZ1xQNrxQlyD+CstrQcq8DhmCAqiGl1/aeG61PcMWpvSm6Vj1/dpmdXilZIepQ6O97hnkDZdk4WeuSzj8oijPWA0PdwjoIU6ndCRvUo0SGVQZdfa0YV3HAb9+zr+YLSauZPElkWvAPKt+64ecmiYywwpvHuMoORfcj1N/YD6FhqD4r+UbWCEpX0aNsKyMpydDBidqgAR30ShzSgCASY4VQuyns5hDyJHrD+b15ynYPGLySeJ1FIEWWtJgFjkPFEcEE4Lx5Ypr4P+UUJMWblAN9EaIO5j1dDHKR+Iq3/xWWM/U0oLMjeAD4qSelGsg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ZB7rQPXzigjksR8cneF7KMo407F4iVd+/nvEuzD/dHE=;
 b=jflfrYobF/4cG6Y5nVmwbWVyXeAyLs1Gip5Eg/AL+WsC0ifPDE07zsjX8MH6rGOfYvYBa0s0VVCub9snfd4dnVhjsWdCqbE6H24D0h8BlzZLOLCPRqRd4NYRe8gWmjn5vacwXqVoSm/9rl6EQZQY9GE6YmV3cd0c1EFeLjiUMpxrfoJT7q5X6YY1gyXs5PLu0bUQhMkZizGwVZTMNhBzVRoQDsSSysQZJ0sXP2M5hnRNg9lRZsEYZpDxhztCqKGdt4Z3bsI0BL0z8vOm5QAv2OhVgxDsH49eWNNs7sRyH8b9FHlGKlrgS3ZuBBa8ErJoSydVnQFEn0ooI6R+6oI2JQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZB7rQPXzigjksR8cneF7KMo407F4iVd+/nvEuzD/dHE=;
 b=xDY7sP8KXF1IWuEKwJsnHFPfiloxNuMxOxEytLkSy+ueG+plgE9hLkIu1PdSwd1r36K6pzf4EpdPV2YAd6ctKN7K1+pz6XTon6m7wobPaTuiWFa1XncaExDjque+MV9q8MVR24EmXfSrhtVqioQ3BhVDHK21OwW1jtZfDCbDTus=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <3833c906-8d88-d35d-b9dd-b70d5f7a9fa7@amd.com>
Date: Sun, 16 Apr 2023 22:03:18 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 3/3] xen/arm: fix uninitialized use warning
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
 <20230414185714.292881-4-stewart.hildebrand@amd.com>
 <5fb567c5-1e82-a048-1cfe-f6f69e0b5ebc@xen.org>
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <5fb567c5-1e82-a048-1cfe-f6f69e0b5ebc@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT108:EE_|SJ0PR12MB6990:EE_
X-MS-Office365-Filtering-Correlation-Id: 652a7c9e-3a95-4db0-17ed-08db3ee7eb64
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/hvSZwuGv1r4fGOEgT2unE07Wn/HLbH+qmH/4gfOk5BzqFdkoOnyhjKQ44Z3UBlk8/JK0nP0+evw2HkJVWekfv9tUyC5sVLPHa9P1Y43lpUcXLjSmt44tsmh8ZED/ek5SGh6cKNorS7JtMj5VhtrHCXczBQbcauKzvZOet/0YXsHysUUmsJ+u6NR9Y3EYO7IJYr2Jo/qqFeoU6biajXhpPATeQ7d2jh8P2cWi1EQW87uQxvYO2P4rrQLHbYU+fpSxO9w5WSLAvIOATSfSrkS97C32oI192hoSnDBit7vV6tgQWcEq1pVoazdz4AH9gVHAU56M+Z/ENgfau3ahv3bVJxz0IxQW7/9M6yXe24m7OfT7IZGIOcHfjLF46B+9/skHXULXegGnzMGw2hPuxA21IW0L7Y9J2V/MRLfJKXphQmG4M46VHbLsDvRzau/Ty5NMfuuwP7QGAF9jOMMS2eQcTrYs1kjQxQ6UAkUK1aRhdIJJcVdQ+IjU4ImefXjDdSAVPdDeH1oajEyAqtOaDiDXola3NyZ7SB8+IMaddlUepFFZc9qRaQwxPrFSvh55NL/OcCJYMCx12Lw4O0RgKB5OGd5DxNWWp6ku1IuTHfWTpdDmHODek/iXsE3LvDu3k2UdKs0puXwnyUnAMtI3WdpiaNc01WzfUkuxkIorOcsY6TbhJHdMimusmYxCtYclEJeFuSGsxyWY7UPZb0np+sVVh6WkiLFniF6yqcpEpyfb4QyXKeQVbG4yDOxpUuDzfvI
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(396003)(376002)(346002)(451199021)(46966006)(36840700001)(40470700004)(47076005)(40460700003)(31686004)(36756003)(41300700001)(4326008)(54906003)(8676002)(70206006)(966005)(16576012)(110136005)(70586007)(316002)(81166007)(478600001)(86362001)(31696002)(336012)(2616005)(83380400001)(26005)(53546011)(426003)(5660300002)(2906002)(82310400005)(40480700001)(36860700001)(8936002)(44832011)(82740400003)(186003)(356005)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 02:03:20.9848
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 652a7c9e-3a95-4db0-17ed-08db3ee7eb64
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT108.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR12MB6990

On 4/16/23 08:53, Julien Grall wrote:
> Hi Stewart,

Hi Julien,

> On 14/04/2023 19:57, Stewart Hildebrand wrote:
>> When building the hypervisor with -Og, we encounter the following error:
> 
> Is this with GCC 12 as well?

Yes. If my memory serves me correctly this particular one occurs with both GCC 11 and 12.

>> arch/arm/domain_build.c: In function ‘make_cpus_node’:
>> arch/arm/domain_build.c:2040:12: error: ‘clock_valid’ may be used uninitialized [-Werror=maybe-uninitialized]
>>   2040 |         if ( clock_valid )
>>        |            ^
>> arch/arm/domain_build.c:1947:10: note: ‘clock_valid’ was declared here
>>   1947 |     bool clock_valid;
>>        |          ^~~~~~~~~~~
>> cc1: all warnings being treated as errors
>>
>> Fix it by initializing the variable.
>>
>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
>> ---
>> See previous discussion here
>> https://lists.xenproject.org/archives/html/xen-devel/2022-10/msg00741.html
>> ---
>>   xen/arch/arm/domain_build.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 4f9d4f9d8867..18b350734a8e 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -1944,7 +1944,7 @@ static int __init make_cpus_node(const struct domain *d, void *fdt)
>>       /* Placeholder for cpu@ + a 32-bit hexadecimal number + \0 */
>>       char buf[13];
>>       u32 clock_frequency;
>> -    bool clock_valid;
>> +    bool clock_valid = false;
> 
> NIT: I would add "Keep the compiler happy with -Og"
> 
> I am happy to add it while committing if you agree.

Yes, please do. Thanks.

> Reviewed-by: Julien Grall <jgrall@amazon.com>
> 
> Cheers,
> 
> -- 
> Julien Grall
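
The warning fixed in the patch above can be reproduced with a minimal
stand-alone fragment. This is an illustration under assumed names
(`node_frequency`, `make_node_freq`), not the Xen code: at `-Og` the
compiler cannot prove the flag is assigned on every path before use, so
the fix is to initialize it at its declaration, exactly as the patch does
for `clock_valid`.

```c
/* Minimal illustration of the -Wmaybe-uninitialized pattern (assumed names). */
#include <stdbool.h>

static int node_frequency(int node)      /* hypothetical helper */
{
    return node >= 0 ? 24 : -1;
}

static int make_node_freq(int node)
{
    bool clock_valid = false;  /* initialized: keep the compiler happy with -Og */
    int freq = node_frequency(node);

    if ( freq > 0 )
        clock_valid = true;    /* only some paths assign the flag */

    return clock_valid ? freq : 0;
}
```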


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 02:09:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 02:09:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521714.810515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poEIq-0005sJ-Uy; Mon, 17 Apr 2023 02:09:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521714.810515; Mon, 17 Apr 2023 02:09:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poEIq-0005sC-SD; Mon, 17 Apr 2023 02:09:12 +0000
Received: by outflank-mailman (input) for mailman id 521714;
 Mon, 17 Apr 2023 02:09:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nDUx=AI=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1poEIp-0005s6-OF
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 02:09:11 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on20605.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d5dc9e91-dcc4-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 04:09:09 +0200 (CEST)
Received: from DM6PR21CA0023.namprd21.prod.outlook.com (2603:10b6:5:174::33)
 by CH3PR12MB8994.namprd12.prod.outlook.com (2603:10b6:610:171::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 02:09:04 +0000
Received: from DM6NAM11FT004.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:174:cafe::7d) by DM6PR21CA0023.outlook.office365.com
 (2603:10b6:5:174::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.1 via Frontend
 Transport; Mon, 17 Apr 2023 02:09:04 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT004.mail.protection.outlook.com (10.13.172.217) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.17 via Frontend Transport; Mon, 17 Apr 2023 02:09:03 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Sun, 16 Apr
 2023 21:09:03 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Sun, 16 Apr
 2023 21:09:03 -0500
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Sun, 16 Apr 2023 21:09:01 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5dc9e91-dcc4-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CcsjSlD7LX4uFfsrzEa5N+uB3KTNt3QpX+FAiI6i9VRRS7xEwD7XC8iA7tbhNXD4bGh9WRTV+VMSsbsBAOxlR6veUym71oUYFtMPoG8Qdc7NTeTu14FeqzX+sah3qe55uh+cgUKod3E0JhMsZq9h1zzz4d8EjmechdtORHiYSWtYjl8+EVI+ro3zLdHBNYhz3XZ6v+MNjJj5Sq1JZDPITygo6gBVjeMkIitDTU5ie5QwZg9rZEnp08xv7ajO+y7Iub/OBCYo+bEtU8935AyXCT5r+ApLRBXDJuriAGCjGeCq4Zh6d3h0CXhT2QdEHrgxHLJkE4/eGjHY3N7CIXOqCg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Mt+pcXhEBYCYbjFfwCuuM/w86R2K52bxIB0y1/nzKsI=;
 b=UyGjzJWszHnwpngIrdzeVQXMfv3PrmxIyZp8XCJSAI11UNZ3gC829Obj067werdtTzxPVKCB6AUovMRnOIHRUVGnl3vGG4Y1un4LTpGWU893QDzCWbEdzGUxYc4Aj7plazF93wog+FokF75RpfzYLW4VnVFjHiEgBTbGyfVFQFdDgJqGhbLAb/VbDCYbvHqjmAwVU9NqdffaAfvUhyuU+znn8ZjNSBLtqfGOScDfb+pFEsh3R0bFb6c0wbTBzrmQM5tcywJfk2x9kE8kz+rz4dfdNS/Wfpib/xW3HvtVLPHlrY5lmE3WPmutiq5Epk9Gl5MsWKa3Qa1OnfCtGpBWeQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Mt+pcXhEBYCYbjFfwCuuM/w86R2K52bxIB0y1/nzKsI=;
 b=vu66MG++HG7z1p0RQVzPEo+Lh93N9h01NpTBwNvSkiuzDMWaNKIcSZl3KSMSMRAq4zcd5j0YjDf2G0WeHyADnwAALQezUfBBAxE2yBVp4ll02EU6TxiGnJ5+3OsqaT2eW2zDM1sGSSZ1wQ+4L5n3La1TXNW9dBuhTAZATRMjHFU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <b779a5cf-1421-086b-f7f3-188fcb9af3db@amd.com>
Date: Sun, 16 Apr 2023 22:08:56 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 3/3] xen/arm: fix unitialized use warning
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
 <20230414185714.292881-4-stewart.hildebrand@amd.com>
 <5fb567c5-1e82-a048-1cfe-f6f69e0b5ebc@xen.org>
 <3833c906-8d88-d35d-b9dd-b70d5f7a9fa7@amd.com>
In-Reply-To: <3833c906-8d88-d35d-b9dd-b70d5f7a9fa7@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT004:EE_|CH3PR12MB8994:EE_
X-MS-Office365-Filtering-Correlation-Id: ae44a865-8572-42b0-b33a-08db3ee8b7cc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	up8bxXglL6G1Q10kjkRndJQwrUoRzEzs+eGAXBMQQjfg46TPagPnksRLaKXEefR/YCy3+FI+mcVrT9Cg0bRaLYOrYu7f6X4c008VXHCmjCNNhB7XSJ5w8o9UWPP6xF/+S24iP0ClX+8WqnAEVWp0ckoLf/Y7UIxPcAR5PU6DnTwu37/q9aBkABBg4X+ldZYWjBoHWOM5tBy/MC20OmaiR5d8ExRF5dWB5IOThO4XB8MXxDGI+3wxrH2oacef1PoBP9MwthoAX41idDIBe9abJdYK5oDFZBWFrfkbOEDQnX5FbFKniqtjxZf5roSEJY0G294CsjJG7w8zcjKTxbaP2YCJdTCqZnAnKjHyzTFyn0OEC29ChgfH7LQ6XnO7Vz+g5wvayBimG3S6cyIQoopIV1OgbxMDfzf78LSE+qrcQVl4LYuQjM/vgFYaJFla5wLEBkZSq/4l3tBAD83I90ocYDyiKnvDjilkRMomTB1uEKAwpoyBqL6Vrd6kXI3iL6h10qFoYN1MIY3OX7gBnXYD1BEOrphXWWMckpvGCWx5b87ifgCD7JJufDDhTShly92nHmLFuDAmRiHm1pC2Uf9zCqgJ7t3lmt39kbgTtawI3tvl+RvM/bpA/X3ooN6UyKL+MCUGfGSHTNAJkGIcfyanuUPN3X8mGt+WE877Ps0ijpgRJGWaAVOsnd0AqTaNKhod61ZtU/9xYOK/zggCQTQ4z0L3dItnu5QL9OAPIbUCdyqdvM7ecjw6b4azr/5DVZ/u
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(346002)(376002)(39860400002)(451199021)(36840700001)(40470700004)(46966006)(86362001)(31696002)(426003)(336012)(83380400001)(36756003)(53546011)(26005)(47076005)(82310400005)(186003)(40460700003)(2906002)(31686004)(44832011)(2616005)(36860700001)(966005)(16576012)(5660300002)(6666004)(8936002)(8676002)(478600001)(54906003)(81166007)(356005)(110136005)(41300700001)(82740400003)(316002)(70206006)(40480700001)(70586007)(4326008)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 02:09:03.9256
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ae44a865-8572-42b0-b33a-08db3ee8b7cc
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT004.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8994

On 4/16/23 22:03, Stewart Hildebrand wrote:
> On 4/16/23 08:53, Julien Grall wrote:
>> Hi Stewart,
> 
> Hi Julien,
> 
>> On 14/04/2023 19:57, Stewart Hildebrand wrote:
>>> When building the hypervisor with -Og, we encounter the following error:
>>
>> Is this with GCC 12 as well?
> 
> Yes. If my memory serves me correctly this particular one occurs with both GCC 11 and 12.
> 
>>> arch/arm/domain_build.c: In function ‘make_cpus_node’:
>>> arch/arm/domain_build.c:2040:12: error: ‘clock_valid’ may be used uninitialized [-Werror=maybe-uninitialized]
>>>   2040 |         if ( clock_valid )
>>>        |            ^
>>> arch/arm/domain_build.c:1947:10: note: ‘clock_valid’ was declared here
>>>   1947 |     bool clock_valid;
>>>        |          ^~~~~~~~~~~
>>> cc1: all warnings being treated as errors
>>>
>>> Fix it by initializing the variable.
>>>
>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
>>> ---
>>> See previous discussion here
>>> https://lists.xenproject.org/archives/html/xen-devel/2022-10/msg00741.html
>>> ---
>>>   xen/arch/arm/domain_build.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index 4f9d4f9d8867..18b350734a8e 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -1944,7 +1944,7 @@ static int __init make_cpus_node(const struct domain *d, void *fdt)
>>>       /* Placeholder for cpu@ + a 32-bit hexadecimal number + \0 */
>>>       char buf[13];
>>>       u32 clock_frequency;
>>> -    bool clock_valid;
>>> +    bool clock_valid = false;
>>
>> NIT: I would add "Keep the compiler happy with -Og"
>>
>> I am happy to add it while committing if you agree.
> 
> Yes, please do. Thanks.

One more thing: there is a typo in the subject; if you are willing, please correct it while committing: s/unitialized/uninitialized/

>> Reviewed-by: Julien Grall <jgrall@amazon.com>
>>
>> Cheers,
>>
>> --
>> Julien Grall
> 
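For readers outside the thread, the warning under discussion can be reduced to a small standalone sketch (the function and parameter names here are invented for illustration; only the `clock_valid` variable and its `= false` initializer come from the actual patch). GCC's -Wmaybe-uninitialized fires at -Og because one control-flow path can reach the read without any assignment; initializing at the point of declaration makes every path defined:

```c
#include <stdbool.h>

/* Reduced model of the make_cpus_node() pattern: without the
 * "= false" initializer, the path where the clock property is
 * absent would read an indeterminate value, which is what GCC
 * flags at -Og with -Wmaybe-uninitialized. */
static bool clock_is_valid(bool have_freq_prop, unsigned int freq)
{
    bool clock_valid = false;   /* the fix: initialize at declaration */

    if ( have_freq_prop && freq > 0 )
        clock_valid = true;
    /* no else branch: this is the path the compiler warned about */

    return clock_valid;
}
```

At -O2 GCC can usually prove the read is guarded in the real code, which is why the warning only shows up at the less aggressive -Og optimization level.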


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 04:59:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 04:59:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521719.810524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poGxb-0006NC-0I; Mon, 17 Apr 2023 04:59:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521719.810524; Mon, 17 Apr 2023 04:59:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poGxa-0006N5-U0; Mon, 17 Apr 2023 04:59:26 +0000
Received: by outflank-mailman (input) for mailman id 521719;
 Mon, 17 Apr 2023 04:59:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c4/u=AI=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1poGxZ-0006Mz-Hl
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 04:59:25 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0617.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::617])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9d1e6db4-dcdc-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 06:59:21 +0200 (CEST)
Received: from DUZP191CA0036.EURP191.PROD.OUTLOOK.COM (2603:10a6:10:4f8::28)
 by AS8PR08MB9765.eurprd08.prod.outlook.com (2603:10a6:20b:616::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 04:59:18 +0000
Received: from DBAEUR03FT049.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:4f8:cafe::32) by DUZP191CA0036.outlook.office365.com
 (2603:10a6:10:4f8::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.46 via Frontend
 Transport; Mon, 17 Apr 2023 04:59:18 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT049.mail.protection.outlook.com (100.127.142.192) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.19 via Frontend Transport; Mon, 17 Apr 2023 04:59:18 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Mon, 17 Apr 2023 04:59:18 +0000
Received: from b1c5637fe53e.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A2C2F4C3-F214-403F-AB01-4AAAF5BA9C0D.1; 
 Mon, 17 Apr 2023 04:59:08 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b1c5637fe53e.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 17 Apr 2023 04:59:08 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PAWPR08MB9472.eurprd08.prod.outlook.com (2603:10a6:102:2e3::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 04:59:05 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Mon, 17 Apr 2023
 04:59:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d1e6db4-dcdc-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kvyM8Kg5KuVKat3xlQd2xvzzWkgmOdbcC+vMlFnhZE0=;
 b=5V1RGV8mkjvvcpVOnVCkt4GcapyyILoPRBlBgx2kZgUTzpw5eGgrH2IBfBJJqiDjrusOFKNShiMZ+OPWDf+rOy97E3Zk5Q+OZRI4Ezu9A5+idNNI/ksiyphY2AnJrbykWmz3pDtKzdBPRTvh6cJfn2NAukjY7zAe+Oz9ggdVsQc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=g/I75oCxtchvJrylNTQXoPIw8fPRqFKGnZU19Mf5FfKk2POf/LT+4iTO+SVYug5rB4rKCQptptMtpZ+zAVrBZ8acs9vbnPxyU8GIZBdFtPLe2Q7ap9oAgObfiB9AbWfZpXqAfkbumf6VUm/c2OOHaNJEhqhaTchhaCRnoqBYJWVKAIBz3+ieXy07Dtd0cx34lZvhEkLCLvWrcCGGNzBhGUaFnVatSiF7hqEFQ/7zr9A9pJNHyBI7iYBg4TNhc6d8YEFmyq5itWCI1lSFzWxjSGUcdWKho9hHga4vlj3RgXdPgz9ogG3FIP4wrdxQj5AkdS2EiyhTl8nZDlFkxBBBjQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kvyM8Kg5KuVKat3xlQd2xvzzWkgmOdbcC+vMlFnhZE0=;
 b=hN1qQWwpKffbOqWPPFCsE/kE3eduHoZQArVlRG8Qq6uMftAW1++PKzNa4tDvA07Rl42WJJsIJzy3USrubFtxwvHtgXpcS1/In3Xxa9TbFvkQuZm49a6+B+LjSEcWNAVwO4oKxzpawTXENoMR/uTjXm/LHaqC//Bv9H38SMpwj7tfyTRGROHL8euSNQcZkPKlZWNY7OiROuwUb5vC5PhdNFB5PaMVGFmkwXTg6lFElVYxPRkoKweVf4U8XugOSqxqxB4wkcqjmtTmciEla1q2at9jz6VbgzuqWjmOubeI0zwB2Gq4MsA8LRY9DZKONZtHuJjfdPGBPOAEKj7GV23YaA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kvyM8Kg5KuVKat3xlQd2xvzzWkgmOdbcC+vMlFnhZE0=;
 b=5V1RGV8mkjvvcpVOnVCkt4GcapyyILoPRBlBgx2kZgUTzpw5eGgrH2IBfBJJqiDjrusOFKNShiMZ+OPWDf+rOy97E3Zk5Q+OZRI4Ezu9A5+idNNI/ksiyphY2AnJrbykWmz3pDtKzdBPRTvh6cJfn2NAukjY7zAe+Oz9ggdVsQc=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v7 3/5] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Thread-Topic: [PATCH v7 3/5] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Thread-Index: AQHZcHBWic1Wc8S7jkeT5KQNtFjvNK8uzxTw
Date: Mon, 17 Apr 2023 04:59:05 +0000
Message-ID:
 <AS8PR08MB799136F71E1FF28CED194666929C9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-4-julien@xen.org>
In-Reply-To: <20230416143211.72227-4-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 803C644098294941A43ADD0D9D79C68A.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PAWPR08MB9472:EE_|DBAEUR03FT049:EE_|AS8PR08MB9765:EE_
X-MS-Office365-Filtering-Correlation-Id: 477c7a75-9025-4ced-6286-08db3f008017
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 YcDA3LP3RZ1SpjgB7dTPKjs6kzvzn88iICD4l/qaodYHlu1yBaMWyW7W189GS992AHg/G8Ni9P8t9PZAzT+GswTBIEOx4aad4Ghs3xcRI4JZNR4cFhBWJU4STCzhpWQ/hBTYHcko+8uo1sXJ1LPu9o24pvxuLXEA12WHxYa9NdLTVoN6rFJMk8qAi9Y0qtA3IxNc8Xp7yDWT2/gczrZx/zU1QF1Z1CiElxoIPnoHgJJCXsc/W5dyqdkPkZ+sEEb0NFx8j+Wl8RbY2auQCSunhhKlhPTBKsABqEySnLT0bDfrzUaRd35qcomSmMwoCjRMGinF6C85UOMfv5Ak+t6cbFEuR+A+KezZlkNxf90P/kWU/h3xS1IsEbdih/fDijxKhuftJ03UvskWE7DdyL13y9vp/FBPPDbVHNdL22gOmt06BUS42moR8c62yEy7gmF4+OOtvcNtUds/SJvWgZ8S0H4F76Yt3cdBjS4rrA0GVAqJsj8hvAqS5dVjs2FBdCFY1jhh4gpaeQky8AJkl/VTGeGy2ERT07p3iYeN5zHsoLpqwCQYQsLa6brW8ELuKxv444bzAmmZm5+ya3B3/vKHWTHa6gcrQVYpsPgYc/0nrhv4VVNeaAvLZnqZuKgelnSI
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(396003)(376002)(136003)(39860400002)(346002)(451199021)(54906003)(110136005)(9686003)(26005)(6506007)(478600001)(76116006)(66946007)(66476007)(66556008)(64756008)(66446008)(83380400001)(316002)(7696005)(71200400001)(186003)(4326008)(8676002)(41300700001)(8936002)(5660300002)(2906002)(122000001)(52536014)(38070700005)(33656002)(38100700002)(55016003)(86362001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9472
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT049.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9a18ea7f-f909-42fc-eb58-08db3f007886
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	QYdQ5FCBe6+STQjb1o0U0zdhHNgF+Z7Lo+Gogw8NBZ1fwuq6ldHnWADAnK5a9a3IJPkG9NFi3duDob2PBKaUKB9ZE0uHjXVPwreQnaSgi6vFaxVEdF1BCgU+UIP8g+qthkGxbVZujM5jKLd+L13knnqkDeiGq1hxFcYHB1y4yXi1/m2bp6AQApOkvtgzmN056IY9qLvfAeGdPdOp3L+VglFwgUBK4jI+aEjs0O9KF6s+WWiZuxCByj16O+J0/3eyUVWu6y76QYpenbeRtNumnAMTTrbiTwVs0KGXsqEQHFahRJHSIL2/WyayGJlGdi/EJ9qURdMirfPdeUQUa8F46nZeNb6fZpQcNSj2cemkYOLzMpvVUXJj+KvWbJKKBWWKTx626uqP2TjYM5fQQ56kojIGuf3l2QO6q8tSHV+/V9qbWdMDHxSHqs/Rt+RnbUimVZGuQmHYCr6ht5TlI9fDQGRH2j7u82fWrR0Fd9a+M/+8fvOMMFhpcI5+60SEc9Bjnu+dk3SaUnWwCr0/QVj9JWAHXbie+BEaBYeJwd+TNE0Bj6BGb8EISukHi9npxcp3/Rytclz0duhdA3vUMAPD9bqWaIGL+G/+0VIP0l2RbbSuyXPzGkT7gO0fUcusa/2tOyStnxjloFZyZVuIl2KlTwVALgHsVz7j4AUUzE5NvKsMMIBIh8Krhb54f6uJJiX9kRjfk+SbFVQtNSVIklWlRQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(346002)(396003)(376002)(451199021)(36840700001)(46966006)(40470700004)(4326008)(316002)(110136005)(54906003)(70586007)(70206006)(33656002)(186003)(9686003)(6506007)(26005)(107886003)(40460700003)(47076005)(36860700001)(81166007)(356005)(336012)(83380400001)(40480700001)(55016003)(5660300002)(41300700001)(82310400005)(8936002)(8676002)(478600001)(7696005)(86362001)(82740400003)(2906002)(52536014);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 04:59:18.4466
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 477c7a75-9025-4ced-6286-08db3f008017
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT049.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9765

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v7 3/5] xen/arm64: mm: Introduce helpers to
> prepare/enable/disable the identity mapping
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> In follow-up patches we will need to have part of Xen identity mapped in
> order to safely switch the TTBR.
> 
> On some platform, the identity mapping may have to start at 0. If we always
> keep the identity region mapped, NULL pointer dereference would lead to
> access to valid mapping.
> 
> It would be possible to relocate Xen to avoid clashing with address 0.
> However the identity mapping is only meant to be used in very limited
> places. Therefore it would be better to keep the identity region invalid
> for most of the time.
> 
> Two new external helpers are introduced:
>     - arch_setup_page_tables() will setup the page-tables so it is
>       easy to create the mapping afterwards.
>     - update_identity_mapping() will create/remove the identity mapping
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

I used the test method described in my notes from patch#2, and this
patch passed the test, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
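As an illustration of the split the commit message describes (this is a toy model only, not Xen's actual arm64 implementation; every data structure and constant below is invented), the prepare step computes where the identity entry would go and what it would contain, while the region itself stays invalid until a caller explicitly enables it:

```c
#include <stdbool.h>
#include <stddef.h>

#define ROOT_ENTRIES 512                 /* hypothetical root-table size */

static unsigned long root_table[ROOT_ENTRIES]; /* 0 == invalid entry */
static size_t id_slot;                   /* slot covering Xen's physical address */
static unsigned long id_entry;           /* pre-computed identity-mapping entry */

/* Prepare the page-tables: work out the slot and the entry once, but
 * do NOT install the mapping, so the identity region (possibly
 * including address 0) stays invalid during normal operation. */
static void arch_setup_page_tables(unsigned long xen_paddr)
{
    id_slot  = (xen_paddr >> 30) % ROOT_ENTRIES;      /* 1GB-granule slot, say */
    id_entry = (xen_paddr & ~((1UL << 30) - 1)) | 1;  /* bit 0 == valid */
}

/* Create or remove the identity mapping around the short window
 * (e.g. a TTBR switch) that actually needs it. */
static void update_identity_mapping(bool enable)
{
    root_table[id_slot] = enable ? id_entry : 0;
}
```

With this split, a NULL pointer dereference faults as expected except during the brief window where the mapping is enabled, matching the rationale quoted above.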


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 04:59:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 04:59:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521720.810535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poGxi-0006dg-Bj; Mon, 17 Apr 2023 04:59:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521720.810535; Mon, 17 Apr 2023 04:59:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poGxi-0006dY-8V; Mon, 17 Apr 2023 04:59:34 +0000
Received: by outflank-mailman (input) for mailman id 521720;
 Mon, 17 Apr 2023 04:59:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c4/u=AI=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1poGxg-0006Mz-EC
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 04:59:32 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur03on2060d.outbound.protection.outlook.com
 [2a01:111:f400:fe1b::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a2e14452-dcdc-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 06:59:30 +0200 (CEST)
Received: from DB3PR06CA0009.eurprd06.prod.outlook.com (2603:10a6:8:1::22) by
 GV2PR08MB8052.eurprd08.prod.outlook.com (2603:10a6:150:75::11) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6277.35; Mon, 17 Apr 2023 04:59:28 +0000
Received: from DBAEUR03FT065.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:8:1:cafe::2d) by DB3PR06CA0009.outlook.office365.com
 (2603:10a6:8:1::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.44 via Frontend
 Transport; Mon, 17 Apr 2023 04:59:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT065.mail.protection.outlook.com (100.127.142.147) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.17 via Frontend Transport; Mon, 17 Apr 2023 04:59:27 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Mon, 17 Apr 2023 04:59:27 +0000
Received: from fb4923cd3050.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DEC31E97-3CDF-4CA7-BD35-4F1D0CBF120B.1; 
 Mon, 17 Apr 2023 04:59:18 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id fb4923cd3050.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 17 Apr 2023 04:59:18 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB9PR08MB7447.eurprd08.prod.outlook.com (2603:10a6:10:36d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 04:59:15 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Mon, 17 Apr 2023
 04:59:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2e14452-dcdc-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MJTq2bjk9yaNV4ZBoXSHfSpASV673BeyC4EA45/r4/k=;
 b=o+8OrWy0A3ycGNMplzaLbher1bjQLTlVgTK2USIsxOMswub6hIe7EBkmO7Ks4SHhyDQlCXel0s4e0gs1RqaiIFAFms6qDVKIE+6nYN7/TXPLnhVTiLVczl/WcGwhqNbUyglYR9tt3Xf0lHLgoFHYPjhjK11W3XtTzpFDZZqa9bg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kz6t3nE/3sgqnKFM1GhNQtbI9jEgsEX1UtOteSJ8cdpofJ74yvHBYbRe029fIDn4aqAzAnYjSJASXe1+JMw6VaTTO9fUABcx6A51LiLfKHF9GxycRjTvr+KSgf0MNAehLgUPjvDTJedrq0v1eGxuxVG4pWIcqDnmnWtpL7E5Lo3AU0CQtSFLJGl4mnWLhxnv9gqUmrnRdqQr+LoUxaQrMuvfs/i2jVBQydy3OPbmUFHQItd8XiVknhYL8KiwtGPGdVkw9xMHv0ija9EGDiqWmbVVk/33iLmPF66YheET0JwQpe4rw2G7fJK3V5JIy/KiekhCHFRN5To4OqbHUr4brw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MJTq2bjk9yaNV4ZBoXSHfSpASV673BeyC4EA45/r4/k=;
 b=EpDXNJ4QykTaInItHOeDG+p3Xxuar3yHBOh2R5NXQGdCQjNrarQLn/1yjHayMc9bBeEpp2tZlETAEXKFjyDXDUJUdn1vbCDRvQlxvURAyVPc6ymZ53W0wk9w9S2h4AJnEOOv23YYO8l68WCnFPJYoDdELm0ZXyp3X9twzZheW8UoVbEigDz7wZlCK96MQQfrHtkrhMc8yfFadUCRsf8U4VM8rrAbpsH+fQvAYNU77DbUAiT9125URsaQ+YEwdsSQjPsKEVf3i6oNBW8E45JWyVaiA7uc5txFqHb37JiXT3WlGBIuwXJlKNmCV+f9eRKQ+uVr9jJ1UHbZeyrxJAOFrQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MJTq2bjk9yaNV4ZBoXSHfSpASV673BeyC4EA45/r4/k=;
 b=o+8OrWy0A3ycGNMplzaLbher1bjQLTlVgTK2USIsxOMswub6hIe7EBkmO7Ks4SHhyDQlCXel0s4e0gs1RqaiIFAFms6qDVKIE+6nYN7/TXPLnhVTiLVczl/WcGwhqNbUyglYR9tt3Xf0lHLgoFHYPjhjK11W3XtTzpFDZZqa9bg=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Luca Fancellu
	<Luca.Fancellu@arm.com>
Subject: RE: [PATCH v7 5/5] xen/arm64: smpboot: Directly switch to the runtime
 page-tables
Thread-Topic: [PATCH v7 5/5] xen/arm64: smpboot: Directly switch to the
 runtime page-tables
Thread-Index: AQHZcHBUxvhM1r7suU+gX9Lg9ukFqq8uz8Yw
Date: Mon, 17 Apr 2023 04:59:15 +0000
Message-ID:
 <AS8PR08MB79912C8D87F209915E3199AF929C9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-6-julien@xen.org>
In-Reply-To: <20230416143211.72227-6-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 5136D2BED5367D4590AE1AFCA84CF671.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB9PR08MB7447:EE_|DBAEUR03FT065:EE_|GV2PR08MB8052:EE_
X-MS-Office365-Filtering-Correlation-Id: 0b1bf294-0dcf-4292-e04e-08db3f008597
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Qy3pBkKe6pSnzWyMuRYjvTL+ygb1gz1tfUeE0w14CZfK960z34WZdHadnIcPFRGM1LdaiYtteXhq9dc8Q8/YaVE9iVKXWJ2s/Yn8NzHHQGVsCtPvaUZC7IjBCbUrrYWFiWZvyUNMA0uGjEVH1LL1scXHLh+OenC6SGO/YoU+PYlmD9l5a4gZJlV13OgEcNkwTYbLEQKb++SPvAIK14bkm9kkJ4fC37ZayHD29TDnv+SdSwpTOnhPCxeuX9nOr6LCHhVcyNcpbu6+gvt7vOUBffh77hVIzcO0XY8tKqLwEO/7QgYg3S22grSwmY2RFNWdEZOiXkGKLSfp2Y/0bqDNvxrWeVybrnUnyVBZjGWZLMqSPivqXY+NHdrDGNlC8GMi5LSCh6LAxk+owQ1lElB2b/r4VcYbltu4NTvwFpctfMCkRDyPn3ZkANYFpZAJgS0fD6zaiwHMrMjcNfMG1r0/mfCc9XjMYEr+huYqqD8kNTsWXtFEEcEgnzsQ9/8ogU+6TsWZPnnM/douOzqohXkKDFvmHh0lC7qtkuRyRRR5sHc+yw6rxZbTlSSmfI3xrC9uARYz22Ylis3yYdVC8btcPot3rY/AMhgqsumXS8O53UcI9Sa8HJ9AzIrYR4ec9kUw
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(136003)(396003)(346002)(376002)(39860400002)(451199021)(38100700002)(66556008)(4326008)(64756008)(66946007)(66476007)(76116006)(66446008)(5660300002)(41300700001)(52536014)(7696005)(71200400001)(316002)(86362001)(9686003)(54906003)(6506007)(186003)(26005)(38070700005)(83380400001)(8936002)(2906002)(8676002)(4744005)(55016003)(478600001)(33656002)(122000001)(110136005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7447
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT065.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2c855f94-a090-49d2-d4f5-08db3f007e43
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ArAv/Ko4qyJh1GkeOUTNFx0364ciMhGb8Ky2tkkM3pDBJZmr9mljXdP4k7o2jxVS0hbfEeIMZuFLDn56y1mvHAmtyWG73iZIbIst5P3A8VKyy0R5aEoryhVV6mK88gPA8rsCuPAS4Ncki56jWYXHzBaY+7WdnXIxom1VwprokGjfVfGIpiBNXQDLugtDi6J6j/2+0ohYNf44nt/9abcZTpbBdmsanjZA1gbcosD023QKQTYIqbAc6JT6k0OybZaFrhDfT8DfdcMUy0TPGZWiILKmsy3XuML2BlYwJkV9aXt+hhZej9L4fuHVFAeBt3G4b70jAUFmuJYDncQcfFQjkiHGSWMGCKGVrAzFP6x5dL6pJFIS3qHkVR5bRwdtdBFmDdWbYfJQrU3zxyFO3EclQlm49dHs/aIeHg4lHR79jbVBUyDhMzVjsiTawzrv8aOw2bgw3sd7OmD2VZUIRK6n4oLG14Cm+JugiNRtwt1/15dDWjfshiFLfx2KyQX2eEnl8UAmHGUb1C9rcmaHXAP6xnszcT1rbBNdVbhLn6fXxcaGqTD44OlXkWKYGkS1A0CMkpIym5ef37ifQ/5xjBhobeoQdLKaij+kFfXtZHZBfa+Gi6m5glZ8QFl/fvfZS/Lw05T9I0g5EOPxxosK+hLfIfnx83FYKzsmdMR49Rypzz6XMd0SgYVh7aSjf7L2ZsUTodwxOSuqhstboMrKOiK8Xw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(39860400002)(376002)(136003)(451199021)(40470700004)(46966006)(36840700001)(110136005)(47076005)(70586007)(83380400001)(478600001)(7696005)(8676002)(70206006)(356005)(4326008)(316002)(82740400003)(40460700003)(41300700001)(81166007)(336012)(86362001)(54906003)(33656002)(6506007)(82310400005)(26005)(9686003)(52536014)(8936002)(40480700001)(5660300002)(4744005)(55016003)(186003)(36860700001)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 04:59:27.6930
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0b1bf294-0dcf-4292-e04e-08db3f008597
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT065.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8052

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v7 5/5] xen/arm64: smpboot: Directly switch to the runtime
> page-tables
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Switching TTBR while the MMU is on is not safe. Now that the identity
> mapping will not clash with the rest of the memory layout, we can avoid
> creating temporary page-tables every time a CPU is brought up.
> 
> The arm32 code will use a different approach. So this issue is for now
> only resolved on arm64.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I used the test method described in my notes from patch#2, and this
patch passed the test, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 04:59:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 04:59:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521722.810544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poGxq-0006xi-Ji; Mon, 17 Apr 2023 04:59:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521722.810544; Mon, 17 Apr 2023 04:59:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poGxq-0006xb-H9; Mon, 17 Apr 2023 04:59:42 +0000
Received: by outflank-mailman (input) for mailman id 521722;
 Mon, 17 Apr 2023 04:59:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c4/u=AI=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1poGxp-0006wd-RX
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 04:59:41 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2077.outbound.protection.outlook.com [40.107.7.77])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a8ce5176-dcdc-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 06:59:40 +0200 (CEST)
Received: from AS9PR06CA0578.eurprd06.prod.outlook.com (2603:10a6:20b:486::11)
 by PAWPR08MB9007.eurprd08.prod.outlook.com (2603:10a6:102:340::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 04:59:03 +0000
Received: from AM7EUR03FT057.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:486:cafe::7d) by AS9PR06CA0578.outlook.office365.com
 (2603:10a6:20b:486::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.46 via Frontend
 Transport; Mon, 17 Apr 2023 04:59:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT057.mail.protection.outlook.com (100.127.140.117) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.17 via Frontend Transport; Mon, 17 Apr 2023 04:59:02 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Mon, 17 Apr 2023 04:59:02 +0000
Received: from 8c13dcad87d6.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 32089347-C54A-4591-8306-B81DDEE7F8B9.1; 
 Mon, 17 Apr 2023 04:58:56 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8c13dcad87d6.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 17 Apr 2023 04:58:56 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PAWPR08MB9472.eurprd08.prod.outlook.com (2603:10a6:102:2e3::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 04:58:53 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Mon, 17 Apr 2023
 04:58:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8ce5176-dcdc-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BPH0HizTlrsxooKXXMwgC2ytuuQzv08uUCOKt2VaMqY=;
 b=e3VapdMCAglgMnJC7nAS/+oofEljWOWSglyM6LA5t3iEdDC4WL/thkwDgIe97qgVrkvpihInVW4z/S5LDi8E9WbGhDhZ7QsSpKGRu9L4zIxvWl9X/qy49f03+iQv+r7/renK2N9E2cAWGmlwJEUcS5jPQ0sMoUpDouwmgcOBqGU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v7 2/5] xen/arm64: Rework the memory layout
Thread-Topic: [PATCH v7 2/5] xen/arm64: Rework the memory layout
Thread-Index: AQHZcHBVTtqn5UnQqEuWlkR+HSSWja8uwBIw
Date: Mon, 17 Apr 2023 04:58:51 +0000
Message-ID:
 <AS8PR08MB79915D100B2E1FB81292E195929C9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-3-julien@xen.org>
In-Reply-To: <20230416143211.72227-3-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 9BA70659DD6C1640BE51CFACAB071D32.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PAWPR08MB9472:EE_|AM7EUR03FT057:EE_|PAWPR08MB9007:EE_
X-MS-Office365-Filtering-Correlation-Id: 5de19443-a1a9-4180-12c4-08db3f007699
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9472
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	62e69a6e-d52d-47ea-3ffc-08db3f006ff5
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 04:59:02.4725
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5de19443-a1a9-4180-12c4-08db3f007699
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9007

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v7 2/5] xen/arm64: Rework the memory layout
>
> From: Julien Grall <jgrall@amazon.com>
>
> Xen is currently not fully compliant with the Arm Arm because it will
> switch the TTBR with the MMU on.
>
> In order to be compliant, we need to disable the MMU before
> switching the TTBR. The implication is that the page-tables should
> contain an identity mapping of the code switching the TTBR.
>
> In most cases we expect Xen to be loaded in low memory. I am aware
> of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
> To give us some slack, consider that Xen may be loaded in the first 2TB
> of the physical address space.
>
> The memory layout is reshuffled to keep the first four slots of the
> zeroeth level free. All the regions currently in L0 slot 0 will now be
> part of slot 4 (2TB). This requires a slight tweak of the boot code
> because XEN_VIRT_START (2TB + 2MB) cannot be used as an immediate.
>
> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
> loaded below 2TB.
>
> Lastly, take the opportunity to check at compile time whether any of the
> regions may overlap with the area reserved for the identity mapping.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

This time I used our CI to test this series patch by patch on top of staging
today (Apr 17), so that we can see if the qemu issue reported by Bertrand
in v6 still persists.

I can confirm all boards including the qemu-arm64 passed this time, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 05:00:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 05:00:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521729.810555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poGyE-0000Sm-VY; Mon, 17 Apr 2023 05:00:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521729.810555; Mon, 17 Apr 2023 05:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poGyE-0000Sf-RR; Mon, 17 Apr 2023 05:00:06 +0000
Received: by outflank-mailman (input) for mailman id 521729;
 Mon, 17 Apr 2023 05:00:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c4/u=AI=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1poGyE-0006Mz-09
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 05:00:06 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2050.outbound.protection.outlook.com [40.107.13.50])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b708d321-dcdc-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 07:00:04 +0200 (CEST)
Received: from AS8P251CA0009.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:2f2::34)
 by AS8PR08MB6103.eurprd08.prod.outlook.com (2603:10a6:20b:296::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 04:59:26 +0000
Received: from AM7EUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2f2:cafe::df) by AS8P251CA0009.outlook.office365.com
 (2603:10a6:20b:2f2::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.46 via Frontend
 Transport; Mon, 17 Apr 2023 04:59:26 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT060.mail.protection.outlook.com (100.127.140.216) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.12 via Frontend Transport; Mon, 17 Apr 2023 04:59:25 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Mon, 17 Apr 2023 04:59:25 +0000
Received: from fb1457913faf.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 714F9BCD-AE91-4BC5-B208-27BB3A025618.1; 
 Mon, 17 Apr 2023 04:59:14 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id fb1457913faf.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 17 Apr 2023 04:59:14 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PAWPR08MB9472.eurprd08.prod.outlook.com (2603:10a6:102:2e3::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 04:59:12 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6298.030; Mon, 17 Apr 2023
 04:59:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b708d321-dcdc-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=D3VEkZxhBKtjTkaSXciAD7QPBjlZijqEraymwNMCqo8=;
 b=KA51XTi1uen0a+yDR2dVe/EcfiiWcJyJ0+ZTrBu5TcDMV4J44iMuo35JDpeCMn1++nwUV/aWIfqW13Azq2gyKHFPlqlUm2VSxdXh9XefaRgox8NtBB/7alPz7wHntIxQVqvQKrU/tcFWCGkicsbzEoECgjsOl74Vi51v6fTSxxw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Luca Fancellu
	<Luca.Fancellu@arm.com>
Subject: RE: [PATCH v7 4/5] xen/arm64: mm: Rework switch_ttbr()
Thread-Topic: [PATCH v7 4/5] xen/arm64: mm: Rework switch_ttbr()
Thread-Index: AQHZcHBY3cupzbzE6k+kIJf+YHXT8q8uz6sQ
Date: Mon, 17 Apr 2023 04:59:12 +0000
Message-ID:
 <AS8PR08MB7991B3A1AF8D77C1169B433C929C9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-5-julien@xen.org>
In-Reply-To: <20230416143211.72227-5-julien@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 74C32E6F5E472240A9DD5D7443625D3D.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PAWPR08MB9472:EE_|AM7EUR03FT060:EE_|AS8PR08MB6103:EE_
X-MS-Office365-Filtering-Correlation-Id: e55c7283-f220-44d4-1532-08db3f00845f
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9472
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	218a45ef-00d2-48ae-d54b-08db3f007c96
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 04:59:25.5805
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e55c7283-f220-44d4-1532-08db3f00845f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6103

Hi Julien,

> -----Original Message-----
> Subject: [PATCH v7 4/5] xen/arm64: mm: Rework switch_ttbr()
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, switch_ttbr() is switching the TTBR whilst the MMU is
> still on.
>
> Switching TTBR is like replacing existing mappings with new ones. So
> we need to follow the break-before-make sequence.
>
> In this case, it means the MMU needs to be switched off while the
> TTBR is updated. In order to disable the MMU, we need to first
> jump to an identity mapping.
>
> Rename switch_ttbr() to switch_ttbr_id() and create a helper on
> top to temporarily map the identity mapping and call switch_ttbr_id()
> via the identity address.
>
> switch_ttbr_id() is now reworked to temporarily turn off the MMU
> before updating the TTBR.
>
> We also need to make sure switch_ttbr_id() is part of the
> identity mapping. So move _end_boot past it.
>
> The arm32 code will use a different approach. So this issue is for now
> only resolved on arm64.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

I used the test method described in my notes on patch #2, and this
patch passed the test, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 06:20:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 06:20:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521742.810565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poIDr-0000gB-Er; Mon, 17 Apr 2023 06:20:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521742.810565; Mon, 17 Apr 2023 06:20:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poIDr-0000g4-Br; Mon, 17 Apr 2023 06:20:19 +0000
Received: by outflank-mailman (input) for mailman id 521742;
 Mon, 17 Apr 2023 06:20:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poIDq-0000fo-II; Mon, 17 Apr 2023 06:20:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poIDq-0005GD-3h; Mon, 17 Apr 2023 06:20:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poIDp-0005BN-Im; Mon, 17 Apr 2023 06:20:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1poIDp-0003yC-I1; Mon, 17 Apr 2023 06:20:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180278-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180278: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
X-Osstest-Versions-This:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
X-Osstest-Versions-That:
    linux=44149752e9987a9eac5ad78e6d3a20934b5e018d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Apr 2023 06:20:17 +0000

flight 180278 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180278/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   8 xen-boot                fail baseline untested
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180253
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180253
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      8 reboot                  fail starved in 180253
 test-armhf-armhf-libvirt      8 xen-boot                fail starved in 180253
 test-armhf-armhf-xl-vhd       8 xen-boot                fail starved in 180253
 test-armhf-armhf-libvirt-qcow2  8 xen-boot              fail starved in 180253
 test-armhf-armhf-xl           8 xen-boot                fail starved in 180253
 test-armhf-armhf-xl-rtds      8 xen-boot                fail starved in 180253
 test-armhf-armhf-xl-multivcpu  8 xen-boot               fail starved in 180253
 test-armhf-armhf-xl-credit2   8 xen-boot                fail starved in 180253
 test-armhf-armhf-libvirt-raw  8 xen-boot                fail starved in 180253

version targeted for testing:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d
baseline version:
 linux                44149752e9987a9eac5ad78e6d3a20934b5e018d

Last test of basis   180253  2023-04-14 01:13:19 Z    3 days
Failing since        180264  2023-04-14 18:10:06 Z    2 days    6 attempts
Testing same since   180278  2023-04-16 19:41:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Alyssa Ross <hi@alyssa.is>
  Andrew Donnellan <ajd@linux.ibm.com>
  Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
  Benjamin Gray <bgray@linux.ibm.com>
  Cheng Xu <chengyou@linux.alibaba.com>
  Christoph Hellwig <hch@lst.de>
  Conor Dooley <conor.dooley@microchip.com>
  David Disseldorp <ddiss@suse.de>
  Duy Truong <dory@dory.moe>
  Erik Brakkee <erik@brakkee.org>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Huang Rui <ray.huang@amd.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jens Axboe <axboe@kernel.dk>
  Jiri Kosina <jkosina@suse.cz>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Leon Romanovsky <leon@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Maher Sanalla <msanalla@nvidia.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Zhang <markzhang@nvidia.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Mathis Salmen <mathis.salmen@matsal.de>
  Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michal Kolar <mich.k@seznam.cz>
  Ming Lei <ming.lei@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com> # v5.10, v4.19
  Mustafa Ismail <mustafa.ismail@intel.com>
  Namjae Jeon <linkinjeon@kernel.org>
  Nathan Chancellor <nathan@kernel.org>
  Nick Desaulniers <ndesaulniers@google.com>
  Nicolas Schichan <nschichan@freebox.fr>
  Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paul Menzel <pmenzel@molgen.mpg.de>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Richard Weinberger <richard@nod.at>
  Rui Salvaterra <rsalvaterra@gmail.com>
  Saravanan Vajravel <saravanan.vajravel@broadcom.com>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stefan Binding <sbinding@opensource.cirrus.com>
  Steve French <stfrench@microsoft.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tatyana Nikolova <tatyana.e.nikolova@intel.com>
  Tharun Kumar P <tharunkumar.pasumarthi@microchip.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tingjia Cao <tjcao980311@gmail.com>
  Vincent Guittot <vincent.guittot@linaro.org>
  Wolfram Sang <wsa@kernel.org>
  Wyes Karny <wyes.karny@amd.com>
  Xu Biang <xubiang@hust.edu.cn>
  ZhaoLong Wang <wangzhaolong1@huawei.com>
  Zhihao Cheng <chengzhihao1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   44149752e998..6c538e1adbfc  6c538e1adbfc696ac4747fb10d63e704344f763d -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 06:28:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 06:28:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521748.810575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poILb-0001Ky-9V; Mon, 17 Apr 2023 06:28:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521748.810575; Mon, 17 Apr 2023 06:28:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poILb-0001Kr-6q; Mon, 17 Apr 2023 06:28:19 +0000
Received: by outflank-mailman (input) for mailman id 521748;
 Mon, 17 Apr 2023 06:28:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5U/i=AI=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1poILa-0001Ki-7z
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 06:28:18 +0000
Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com
 [2a00:1450:4864:20::42b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 09caec83-dce9-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 08:28:17 +0200 (CEST)
Received: by mail-wr1-x42b.google.com with SMTP id
 ffacd0b85a97d-2f9b9aa9d75so380484f8f.0
 for <xen-devel@lists.xenproject.org>; Sun, 16 Apr 2023 23:28:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09caec83-dce9-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681712896; x=1684304896;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=6tQQU4zJUs1IaMa5esJ+nKNalf4Avzhr/SufnUx+NCw=;
        b=cqzTcONMfhf0RcHZqKcTVBQJN0AItWw4f94hORDEuT6ZleTKHHrHdyU16QVoI/gDvx
         xyKkWtmAbhOXwsXAdl5BcqH9b162apqmFWtWFgB+MP1+WbWCmLW8v2QSzs71kKc8LFPl
         pR4sX7AuKzZQA8Hntd6tN7sgEi0T+NuTKur0MJyTIDlFvA2CG58cU8h7mnGbRuFIVfbg
         BX2zNprTWnEpYoWy2WsAAmyPN25OKHimkYoKYwfiJl8GKPvcWNa7hQ2PGAvKc0tzK0L0
         uzePbQlIu4aWfmVO36U7NQ8Wt1XRm18cATrRSTfz2KNhoTLr4Y4bCooLxIMuCcBLxnqT
         gLlg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681712896; x=1684304896;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=6tQQU4zJUs1IaMa5esJ+nKNalf4Avzhr/SufnUx+NCw=;
        b=DrGklLCcnk/vGx4OGel+zS5Qcd98ODOsvoeaQeMk6gtjJW9m1fCuNa4HHnWPQ7NYZW
         kOinBe9sr7BUmQFQsQUNp53dukAIWnhnWhjy3/t+jubMHCZCHmDl/u2CZ8OlNChPNiP+
         J0o/gF61gpUQmO6TGs2iAB5XnCPshcFv0Vic9xqQLs2853oHdtvjPbQn6OgDz1Zk2k+4
         o1j+2iA+w1A4jl3XQRZNQ0XvApFySwBV2wUx/EPIkWwXxi0lIWI9tRhCi/QTX8Iy0TFC
         iJFlt7NvLDGZvxDKXYocxgjEqOQDCalgSZmFzhMiDKGrQuHVOtDgzb16LGsqKoybfR8Z
         usCw==
X-Gm-Message-State: AAQBX9fOzZWBbQ2ZaltuU3S333JabhkkAouN4DFuIRP/J46Ifkvw9Vuz
	Zoj4uy2BWAYZ1j66+kiWUBvpIudPyndZifqu7AOwPg==
X-Google-Smtp-Source: AKy350ajAuC1xjTVg5v5AturCDM83O5kaI0/c/FQjeKQVMfKROSCEqOeYV+8pG7SLj+9HqFNthaYCLGzTLNWkz3K5t0=
X-Received: by 2002:a05:6000:1049:b0:2f5:6a2f:660f with SMTP id
 c9-20020a056000104900b002f56a2f660fmr1059730wrx.3.1681712896384; Sun, 16 Apr
 2023 23:28:16 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-13-jens.wiklander@linaro.org> <6d1b8904-374c-4392-6945-2746f97c31f6@xen.org>
In-Reply-To: <6d1b8904-374c-4392-6945-2746f97c31f6@xen.org>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Mon, 17 Apr 2023 08:28:05 +0200
Message-ID: <CAHUa44HHHycgzsYhArkgVncbFP+GE1qEU7mOcUcp7TYgon8tWA@mail.gmail.com>
Subject: Re: [XEN PATCH v8 12/22] xen/arm: ffa: support mapping guest RX/TX buffers
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
	Marc Bonnici <marc.bonnici@arm.com>, Achin Gupta <achin.gupta@arm.com>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Julien,

On Thu, Apr 13, 2023 at 10:31 PM Julien Grall <julien@xen.org> wrote:
>
> Hi Jens,
>
> On 13/04/2023 08:14, Jens Wiklander wrote:
> > +static uint32_t handle_rxtx_map(uint32_t fid, register_t tx_addr,
> > +                                register_t rx_addr, uint32_t page_count)
> > +{
> > +    uint32_t ret = FFA_RET_INVALID_PARAMETERS;
> > +    struct domain *d = current->domain;
> > +    struct ffa_ctx *ctx = d->arch.tee;
> > +    struct page_info *tx_pg;
> > +    struct page_info *rx_pg;
> > +    p2m_type_t t;
> > +    void *rx;
> > +    void *tx;
> > +
> > +    if ( !smccc_is_conv_64(fid) )
> > +    {
> > +        /*
> > +         * Calls using the 32-bit calling convention must ignore the upper
> > +         * 32 bits in the argument registers.
> > +         */
> > +        tx_addr &= UINT32_MAX;
> > +        rx_addr &= UINT32_MAX;
> > +    }
> > +
> > +    if ( page_count > FFA_MAX_RXTX_PAGE_COUNT ) {
>
> Coding style:

OK

>
> if ( ... )
> {
>
> > +        printk(XENLOG_ERR "ffa: RXTX_MAP: error: %u pages requested (limit %u)\n",
> > +               page_count, FFA_MAX_RXTX_PAGE_COUNT);
> > +        return FFA_RET_NOT_SUPPORTED;
> > +    }
> > +
> > +    /* Already mapped */
> > +    if ( ctx->rx )
> > +        return FFA_RET_DENIED;
> > +
> > +    tx_pg = get_page_from_gfn(d, gfn_x(gaddr_to_gfn(tx_addr)), &t, P2M_ALLOC);
>
> I might be missing something. Here you only get the reference on one
> page. Per the value of FFA_MAX_RXTX_PAGE_COUNT, it looks like the buffer
> can be up to 32 pages.
>
> Can you clarify?

Good catch. I'll reduce FFA_MAX_RXTX_PAGE_COUNT to 1 since that's what
I've been testing with. I'll add a TODO for supporting a larger
number.

>
> > +    if ( !tx_pg )
> > +        return FFA_RET_INVALID_PARAMETERS;
> > +    /* Only normal RAM for now */
> > +    if ( !p2m_is_ram(t) )
>
> p2m_is_ram() would allow RAM page marked read-only in stage-2. Is it
> intended?
>
> If not, then I think you want to use t != p2m_ram_rw.

Thanks, I'll update it.

>
> > +        goto err_put_tx_pg;
> > +
> > +    rx_pg = get_page_from_gfn(d, gfn_x(gaddr_to_gfn(rx_addr)), &t, P2M_ALLOC);
> > +    if ( !tx_pg )
> > +        goto err_put_tx_pg;
> > +    /* Only normal RAM for now */
> > +    if ( !p2m_is_ram(t) )
>
> Same here.

OK

Thanks,
Jens

>
> > +        goto err_put_rx_pg;
> > +
> > +    tx = __map_domain_page_global(tx_pg);
> > +    if ( !tx )
> > +        goto err_put_rx_pg;
> > +
> > +    rx = __map_domain_page_global(rx_pg);
> > +    if ( !rx )
> > +        goto err_unmap_tx;
> > +
> > +    ctx->rx = rx;
> > +    ctx->tx = tx;
> > +    ctx->rx_pg = rx_pg;
> > +    ctx->tx_pg = tx_pg;
> > +    ctx->page_count = page_count;
> > +    ctx->tx_is_free = true;
> > +    return FFA_RET_OK;
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 06:35:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 06:35:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521752.810585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poISf-0002qI-W5; Mon, 17 Apr 2023 06:35:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521752.810585; Mon, 17 Apr 2023 06:35:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poISf-0002qB-TM; Mon, 17 Apr 2023 06:35:37 +0000
Received: by outflank-mailman (input) for mailman id 521752;
 Mon, 17 Apr 2023 06:35:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yHEP=AI=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1poISe-0002q5-6L
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 06:35:36 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0e315093-dcea-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 08:35:34 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 702BB1F38D;
 Mon, 17 Apr 2023 06:35:33 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 441E013319;
 Mon, 17 Apr 2023 06:35:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id WB4nD7XoPGTJBwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 17 Apr 2023 06:35:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e315093-dcea-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1681713333; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=gfi+IABHGpYlqP7Ea7r8GTDv04KE6VV7UiF+gCL1X3o=;
	b=sUT2V1Rm0ZGnZp38E8Y9ll4NYNO8hseIlBYh4TB+QAylwWq2UahTFkkEKGgFEcfr1a7wPW
	JXDSGtZdjD8S85TH85ZrCKD/V4qk0WJun7ZJENt5NzA3NVei0P/0BHjUHGuGuhLptBtSnd
	zKAR/L9fkXTfKzgTXVpS2oaiKts8MAQ=
Message-ID: <fda641f1-e87e-3dc0-85a5-acf91d6f39ff@suse.com>
Date: Mon, 17 Apr 2023 08:35:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: Alexander Kanavin <alex@linutronix.de>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230412090104.3794213-1-alex@linutronix.de>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] tools/xenstore/xenstored_control.c: correctly print
 time_t
In-Reply-To: <20230412090104.3794213-1-alex@linutronix.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------e8RkAns0TqfJFeVgiQRV8lNg"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------e8RkAns0TqfJFeVgiQRV8lNg
Content-Type: multipart/mixed; boundary="------------2PINZutKOFs6uaGlQkIzQ4ZC";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Alexander Kanavin <alex@linutronix.de>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <fda641f1-e87e-3dc0-85a5-acf91d6f39ff@suse.com>
Subject: Re: [PATCH] tools/xenstore/xenstored_control.c: correctly print
 time_t
References: <20230412090104.3794213-1-alex@linutronix.de>
In-Reply-To: <20230412090104.3794213-1-alex@linutronix.de>

--------------2PINZutKOFs6uaGlQkIzQ4ZC
Content-Type: multipart/mixed; boundary="------------xPXyoIDaEOCo32SxzBFdH5zG"

--------------xPXyoIDaEOCo32SxzBFdH5zG
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMTIuMDQuMjMgMTE6MDEsIEFsZXhhbmRlciBLYW5hdmluIHdyb3RlOg0KPiBPbiAzMiBi
aXQgc3lzdGVtcyB3aXRoIDY0IGJpdCB0aW1lX3QgKGhlbGxvLCBZMjAzOCBwcm9ibGVtKSwN
Cj4gdGhlIGZvbGxvd2luZyBlcnJvciBvY2N1cnMgb3RoZXJ3aXNlOg0KPiANCj4gfCB4ZW5z
dG9yZWRfY29udHJvbC5jOiBJbiBmdW5jdGlvbiAnbHVfcmVqZWN0X3JlYXNvbic6DQo+IHwg
eGVuc3RvcmVkX2NvbnRyb2wuYzo2NDY6NzA6IGVycm9yOiBmb3JtYXQgJyVsZCcgZXhwZWN0
cyBhcmd1bWVudCBvZiB0eXBlICdsb25nIGludCcsIGJ1dCBhcmd1bWVudCA1IGhhcyB0eXBl
ICd0aW1lX3QnIHtha2EgJ2xvbmcgbG9uZyBpbnQnfSBbLVdlcnJvcj1mb3JtYXQ9XQ0KPiAN
Cj4gU2lnbmVkLW9mZi1ieTogQWxleGFuZGVyIEthbmF2aW4gPGFsZXhAbGludXRyb25peC5k
ZT4NCj4gLS0tDQo+ICAgdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvbnRyb2wuYyB8IDQg
KystLQ0KPiAgIDEgZmlsZSBjaGFuZ2VkLCAyIGluc2VydGlvbnMoKyksIDIgZGVsZXRpb25z
KC0pDQo+IA0KPiBkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvbnRy
b2wuYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb250cm9sLmMNCj4gaW5kZXggY2Jk
NjI1NTZjMy4uODY4Mzk0N2QyNSAxMDA2NDQNCj4gLS0tIGEvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX2NvbnRyb2wuYw0KPiArKysgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29u
dHJvbC5jDQo+IEBAIC02NjgsMTAgKzY2OCwxMCBAQCBzdGF0aWMgY29uc3QgY2hhciAqbHVf
cmVqZWN0X3JlYXNvbihjb25zdCB2b2lkICpjdHgpDQo+ICAgCWxpc3RfZm9yX2VhY2hfZW50
cnkoY29ubiwgJmNvbm5lY3Rpb25zLCBsaXN0KSB7DQo+ICAgCQlpZiAoY29ubi0+dGFfc3Rh
cnRfdGltZSAmJg0KPiAgIAkJICAgIChub3cgLSBjb25uLT50YV9zdGFydF90aW1lID49IGx1
X3N0YXR1cy0+dGltZW91dCkpIHsNCj4gLQkJCXJldCA9IHRhbGxvY19hc3ByaW50ZihjdHgs
ICIlc1xuRG9tYWluICV1OiAlbGQgcyIsDQo+ICsJCQlyZXQgPSB0YWxsb2NfYXNwcmludGYo
Y3R4LCAiJXNcbkRvbWFpbiAldTogJWpkIHMiLA0KPiAgIAkJCQkJICAgICAgcmV0ID8gOiAi
RG9tYWlucyB3aXRoIGxvbmcgcnVubmluZyB0cmFuc2FjdGlvbnM6IiwNCj4gICAJCQkJCSAg
ICAgIGNvbm4tPmlkLA0KPiAtCQkJCQkgICAgICBub3cgLSBjb25uLT50YV9zdGFydF90aW1l
KTsNCj4gKwkJCQkJICAgICAgKGludG1heF90KW5vdyAtIGNvbm4tPnRhX3N0YXJ0X3RpbWUp
Ow0KPiAgIAkJfQ0KPiAgIAl9DQo+ICAgDQoNCkknZCByYXRoZXIgaGF2ZSBzb21ldGhpbmcg
bGlrZToNCg0KZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb250cm9s
LmMgYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29udHJvbC5jDQppbmRleCBjYmQ2MjU1
NmMzLi5mOTQ1MmQ2M2I0IDEwMDY0NA0KLS0tIGEvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVk
X2NvbnRyb2wuYw0KKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvbnRyb2wuYw0K
QEAgLTY2NiwxMiArNjY2LDEyIEBAIHN0YXRpYyBjb25zdCBjaGFyICpsdV9yZWplY3RfcmVh
c29uKGNvbnN0IHZvaWQgKmN0eCkNCiAgICAgICAgIHRpbWVfdCBub3cgPSB0aW1lKE5VTEwp
Ow0KDQogICAgICAgICBsaXN0X2Zvcl9lYWNoX2VudHJ5KGNvbm4sICZjb25uZWN0aW9ucywg
bGlzdCkgew0KLSAgICAgICAgICAgICAgIGlmIChjb25uLT50YV9zdGFydF90aW1lICYmDQot
ICAgICAgICAgICAgICAgICAgIChub3cgLSBjb25uLT50YV9zdGFydF90aW1lID49IGx1X3N0
YXR1cy0+dGltZW91dCkpIHsNCisgICAgICAgICAgICAgICB1bnNpZ25lZCBsb25nIHRkaWZm
ID0gbm93IC0gY29ubi0+dGFfc3RhcnRfdGltZTsNCisNCisgICAgICAgICAgICAgICBpZiAo
Y29ubi0+dGFfc3RhcnRfdGltZSAmJiB0ZGlmZiA+PSBsdV9zdGF0dXMtPnRpbWVvdXQpIHsN
CiAgICAgICAgICAgICAgICAgICAgICAgICByZXQgPSB0YWxsb2NfYXNwcmludGYoY3R4LCAi
JXNcbkRvbWFpbiAldTogJWxkIHMiLA0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICByZXQgPyA6ICJEb21haW5zIHdpdGggbG9uZyANCnJ1bm5pbmcg
dHJhbnNhY3Rpb25zOiIsDQotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgY29ubi0+aWQsDQotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgbm93IC0gY29ubi0+dGFfc3RhcnRfdGltZSk7DQorICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29ubi0+aWQsIHRkaWZmKTsNCiAg
ICAgICAgICAgICAgICAgfQ0KICAgICAgICAgfQ0KDQoNCkp1ZXJnZW4NCg==
--------------xPXyoIDaEOCo32SxzBFdH5zG
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------xPXyoIDaEOCo32SxzBFdH5zG--

--------------2PINZutKOFs6uaGlQkIzQ4ZC--

--------------e8RkAns0TqfJFeVgiQRV8lNg
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmQ86LQFAwAAAAAACgkQsN6d1ii/Ey9o
/wf/WaxV1xrZRmgp+3aTIMV9RguxozXwIBaLeHzkAH9AxdPCwmJsg3qzyE20Sf5oyf7mzyZ5aqqH
XZhD09BybqFiQlu+1HIhBGMdqh9avOIwBeU2jahIiWXzGw/Wni7777ib1eSgWWHPgp64gbnxLymd
V2aFAIYU9TZnRhoYiXca00NjMwYChnZN/joHfbxhM6Cvq8QSgAcpqoIi7nA5ZGH8eloMB+u/NAI3
Gglc/toyFctFhFPPse4d0R36gc1jhSq7z6XP07tzWEPObJiXcoqqFIhDntWSJ277xxGYQlvFYh9K
5JbPMJTUhYR/IFMO32ABKYZQcY2eGQSA0C4ltFooYw==
=TJ0w
-----END PGP SIGNATURE-----

--------------e8RkAns0TqfJFeVgiQRV8lNg--


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 06:47:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 06:47:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521759.810595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poIe0-0004PF-5t; Mon, 17 Apr 2023 06:47:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521759.810595; Mon, 17 Apr 2023 06:47:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poIe0-0004P8-1r; Mon, 17 Apr 2023 06:47:20 +0000
Received: by outflank-mailman (input) for mailman id 521759;
 Mon, 17 Apr 2023 06:47:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poIdy-0004P2-0H
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 06:47:18 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20618.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::618])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b120e13c-dceb-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 08:47:16 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9776.eurprd04.prod.outlook.com (2603:10a6:10:4f0::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 06:47:13 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 06:47:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b120e13c-dceb-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CqpyRRbk21QSISyV2TR+VZv6X9ueppvEXNWDLkWNjO3T5++bFzF9fOHmGO+egj1fZV63GZxwBr7g9eLcMdf0tIBzWDrmv/k+fzFV7WWU8Hh49y0n/CvB7EzWZEa+WnAYUTJEmm5ZXgJ9TJQfDQOpXGz2W6FHwomLhn1wiEQBYbN1TZ8tWW0f1B+IaKuADpebNZPhCDudXaTS0EiZecTj/loI1QECXU7FuGaNC58MiZdAZIaboFgGGqKvh9zGpofdgUo2lOtzn6zeu7naU8xSILa6PwcalhfwhTuDtxhFSTbS7qWDGW4zW5ntjH+Du/S9YMNiRL5bfiPn/Tyg1BNtCA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LNN16guSb2UEKvY1cXgGIaAE7s/NGut5AB7BbY3dch4=;
 b=j31cs1grh0L33WcxbPnYhRwrWjpMHpySg2VBMnfaXmr0BE0DEKcSB6oegT43gYrBkpyZ0bTyjLS6eXbui4rH8rlI0xOqaphQ8byTthiubuqmbg/FM+99tpbIbKdTDq5Vn9hooAikqkFNt4vXvIDT0qOaDRaBZ52I5RgMKbAu8oHE8R3DvamYGMvHRX8vKY7gyo0/xpsRBCiG6NszRgUTWlzU9VHyLIj4oWm05rWXHGaejjaOZHNK0UFJqm/lzKunUuzbjnyylPIm+/zSe5DpE2pVIYw7YYyle0CyBajfgyyOMUehtcESpIN+DvpShfRltcIowsYu90D4oFc3Yqc2xA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LNN16guSb2UEKvY1cXgGIaAE7s/NGut5AB7BbY3dch4=;
 b=FTfeZ98b+1zeiTUFjqptjhc1tao5M5b1L+xakGsmvUBptBzg4NbH2f6FKtSk/NDE99+TVYHk0V+rEnz6iwumUW395QLkBvsWOSerABebKPrfce11/RZRokVdYdjjSbVf4zb4peQ+tcU4HSJB0ji5O+gJAHfCZQ55hv5oNTvmoCuEf82NHmy6BSYg5fwh/iYBQcxbRMHOgd/TI9lvqWJTaNXJLi5VYiAVWr5hwdaVMTPkoi9+tpT9UqmBTEgBOEBvaiy85/03f5v49wUW9oEoxuQz4VhOOHtRVnBUyUdDrEYp+B+7zk4EDTVw5X8rKjiAZXbhtsnpjDe340BiCJnYRQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <910bd8aa-8289-25e2-bc8e-0b01d25992d9@suse.com>
Date: Mon, 17 Apr 2023 08:47:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v3 1/6] xen: add reference counter support
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-2-volodymyr_babchuk@epam.com>
 <57c7c2e4-ae68-19c4-2140-f5a41fc1a6d3@suse.com> <87a5zexcw0.fsf@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <87a5zexcw0.fsf@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0173.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b4::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB9776:EE_
X-MS-Office365-Filtering-Correlation-Id: d1b8bbf4-a0c4-4205-7514-08db3f0f9305
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KsebxhSm78uark9E6C3Ci0GoQcUcCCrrREWOimHSlA6u19EUxLqorcLs6DXvv/ZqVWi6hC51BlTwo+YoDzEvxPzDTiVDIwpXQTW08iKWKqd+wV57nLIMJMKSGs+eTgqQZWiLX0SDXcpQH3oYQAWzbZ6VJh+WPEPIAbPL9xKIaWEwPxkFRkJuFdRIclV8iyLMDOH75VbtD120gQJDDOaA+9bmozgLk7C91EsKikEeDEeVOoUmROVqmjgNW79hjoHLZy9Vrz49d77JGeuw8UPBN8nnCUNNSihVAtfAQxserZv17YKXHl1q7p8yup9ZsrsviAYtMHZD3Oo5Q8UwIorPPJJZ/jtMAydl8hMmGoqC/z0D5nz/3MSjcGKEroeeW4uncAsttvMNaNC28YT86v3CZQpdusHZ17l3F+uw6d+qZoqF8RQM/7LE0GbQFjZpw943aAQDHSFstVWVNVg0l8OpXcdmxj7xOZIu53jVDR+vDjLhy/WWzIKtffDVkuZbZEWP9fwi95/fkck0/4u8cZ3d7yS0O+i9uBVuSg2uI3XEtxTn7XS9G1LwxIELC0RUX8wq23I/U/lke6r6mspFheV0o8KYIdG+61b0+5v2MZlpcDA5egPICqzFeAT+SHr0agfvuumClQOxYNEOXCwq3ty3IA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(376002)(39860400002)(396003)(136003)(366004)(451199021)(31696002)(2616005)(83380400001)(54906003)(6486002)(6666004)(478600001)(6512007)(186003)(26005)(6506007)(38100700002)(36756003)(5660300002)(4326008)(66556008)(53546011)(66476007)(66946007)(41300700001)(8676002)(2906002)(6916009)(8936002)(86362001)(316002)(31686004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RVFtdE9yaDFldmhkbkVPYXptUGt0RllKRVJIL2loKzRERTZsZDNMUEFnQ1ls?=
 =?utf-8?B?NDljZ1UwWWYxNUVIa0ZGTEZObEpEN2ZPeVRFeFdPY0t2OHBlMlE4K0d4bTg4?=
 =?utf-8?B?MXdPeUozckFKRkU1RkwxQ0RxU3R3ODc0YTBoVy8yZVlUa1RRTWxCUFJUVGlk?=
 =?utf-8?B?dnM0RTB3NkprRS9VZ3RhK0lVVU4xcGorSS8rU0RGVnpIM0FROTZ5RzdFZE8v?=
 =?utf-8?B?R0F1QXN0cU4rQkJMMjNLMEtoNmJFOHZROEVmRHVyUG9IWHgwYWxvNWY5cklt?=
 =?utf-8?B?dlorSFY1dWdMTTU4dzduRXRNcGVkOXdQL0VyVkh4SFRvZVBKTEVRaEd0REFa?=
 =?utf-8?B?QUt2aXByM3VnOENZZkgzeFpKK3R2Z0JPbzBUaXFOS0xZZkQ5VGF1OVVnMlh1?=
 =?utf-8?B?Wkx6R29XRWJ5aHh5STQzRHBLbEhIeVFUNVhaanRDTVJibGZDSXBGOHZ5VTFK?=
 =?utf-8?B?NHdBaVFBVWpnSnB6WDhGTEwrQTRCV2s0R0VqczFpSG5nc0RYY1MxRjhDTy9W?=
 =?utf-8?B?Ym81dktqR0VGUVlwNFpoQUJibEdCeDdiMU8yWHlRYVEreTcwTmQrRytlSUE3?=
 =?utf-8?B?U1BtdXNELzhKY0hCMTBmZmFkV05FWkZhTG8rR0I2ZDRQblB0NUFaY0wwU1Ez?=
 =?utf-8?B?NzJKbG9CQzNvcDBGSzU4YmxaMWd6RWZsdjVBV2JLVnpxUkFONzFZSEpnd3or?=
 =?utf-8?B?ODU4cW1uMTV6V3c5SGxWU3BRcTFKTXZGQkpFeDBNaTRscmI0NEtUL2xxQ09G?=
 =?utf-8?B?bDRXZlBmK2EraWIydFEyYnB2WDM2NTlXWVBaaUFMMUxDQSsyY3ZNdGU2dCt3?=
 =?utf-8?B?OWwvdkhScFBUTThzRzN0bzhNUTY3d3lpYWMyS0piTUNyU0NtTVVNVHVmYlpW?=
 =?utf-8?B?QjgrdzBHejF6U1p4TWlMZmpBbFpldmZ0enhSUk01ZzM3eUNGMW9jQ29nTmV2?=
 =?utf-8?B?U1Fya0dJNlU2dnoyQmxQMUIrUDZNaGtvV056Vi96RXEvRzlva0VpWGRwQVpz?=
 =?utf-8?B?WjNsNjJnWnBoT1lkZjhiMGNqSzN2QnhDanRmTVoyTjRXMkFNbVhXZlBzT1Rv?=
 =?utf-8?B?V2w5eTdvTFpPZXFJd25tU0NUcEVIa3VOU2hoQ2hMZ2lNMVV2WTd1VTJ5MnRI?=
 =?utf-8?B?TDVBNk5KZzJmYTZpSkJFOUcwdEhIdzE3NE9BMmd3RSt3QUVCU2ZOZlV3MnF5?=
 =?utf-8?B?WHUxKzFXZ3F3OTBFR2U0cVN5NmZaamRPU0RySCtFemRCTWlhNE5rSUUzaHZB?=
 =?utf-8?B?bkRRWUJILzNyazdMS0VGMXQ2ZDlTOThHeTlwN1NJdUlWbEl0cEpKOTVCTCs3?=
 =?utf-8?B?akJ6WGwwak50cFNRR2dFTnNmNmVVVXNUQVBwT2VUdlNjS0hMMnNRd0x5Q1BP?=
 =?utf-8?B?dnZOdmNCd090K0dEUmtpUzh3MGJ2TWt2ejFxOVR0TE9XZ3l5MXJUd2lvNVlF?=
 =?utf-8?B?b3I5N3ZIVVVKcVBkb1JPSld0bTEzc0dyYnZZYmZNNS95UUVZK0pxZjVOZFVq?=
 =?utf-8?B?QVFrUkpxVXJnUU5LaXIwQUZBTDhGUE92S0NyOHNUODdrQmlTbFFOSnlQbE4z?=
 =?utf-8?B?azE4UzZ0ZktIYVRnVE90aUZzTjRYTWpwYTV6NUZaZTVyVWpTdDBqWlVVQ3hI?=
 =?utf-8?B?R0Z4OXpaNjlUaUdyM0dFTmdYbmRwemI3akF6ZGE4R0hmK3VybStRWndQTEtR?=
 =?utf-8?B?dy8xMXlKZ1ZTMUM4L1VMTTRERE5rWUJ6US9YQlYzeFdLVldpQnpJei9sZy8y?=
 =?utf-8?B?Q25GcUNEa0cvRUQxUkdFSUlRMm5rYWhibWNmRmh1MDgrU1h5dlhIL2lmWDMv?=
 =?utf-8?B?RDJ4YUVWWXRmZWovYnhGTW9HQitraGlDdlptTVhQbWRlOVVSWFFiZlVOeUVR?=
 =?utf-8?B?R0doZzFCVVp6aEVNY29pY2RzSWMvMDc3UFJvQUMwdUdaSnpTM21SaitNSmtn?=
 =?utf-8?B?eEw5OUhOL212M3Q0TFhYNUluNzlaN1E1OVNlRWpjcmU3M3lOd3VvMkN0S3Rv?=
 =?utf-8?B?UHBUOVY1TzRES1JRaGNVeHUrKy90R09lZGMwUjdsamxRMjRpMGp4TFVsK2pB?=
 =?utf-8?B?U2NKVU9rUmtYRlNUd1ZVMU9GZXljczFDZXFQQk1uams0dUltSGFLOUIyekJr?=
 =?utf-8?Q?axvaIjqPQRQvfwHq1zmU3w+ho?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d1b8bbf4-a0c4-4205-7514-08db3f0f9305
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 06:47:12.9564
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tZMg/WSCCO7TZzNYhVQyPSeAzI3oTzNC7JO8YuGsUae6tIxp2q9PBGFMIedU9soOvS3fI2IbOCWKG/P2rtTN6A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB9776

On 12.04.2023 00:38, Volodymyr Babchuk wrote:
> Jan Beulich <jbeulich@suse.com> writes:
>> On 14.03.2023 21:56, Volodymyr Babchuk wrote:
>>> +static inline void refcnt_put(refcnt_t *refcnt, void (*destructor)(refcnt_t *refcnt))
>>
>> Hmm, this means all callers need to pass (and agree on) the supposedly
>> single destructor function that needs calling. Wouldn't the destructor
>> function better be stored elsewhere (and supplied to e.g. refcnt_init())?
>>
> 
> I tried to replicate Linux approach. They provide destructor function
> every time. On other hand, kref_put() is often called from a wrapper
> function (like pdev_put() in our case), so destructor in fact, is
> provided only once.

If provided via wrappers, that'll be fine of course.

>>> +{
>>> +    int ret = atomic_dec_return(&refcnt->refcnt);
>>> +
>>> +    if ( ret == 0 )
>>> +        destructor(refcnt);
>>> +
>>> +    if ( unlikely(ret < 0))
>>> +    {
>>> +        atomic_set(&refcnt->refcnt, REFCNT_SATURATED);
>>
>> It's undefined whether *refcnt still exists once the destructor was
>> called (which would have happened before we can make it here). While
>> even the atomic_dec_return() above would already have acted in an
>> unknown way in this case I don't think it's a good idea to access the
>> object yet another time. (Same for the "negative" case in
>> refcnt_get() then.)
> 
> Okay, then I'll remove saturation logic.

Wait. Saturating on overflow might still be a reasonable concept. But
here you convert an underflow to the "saturated" value.
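To illustrate the distinction being drawn here, a minimal sketch of the two behaviors in standard C11 atomics (names like `REFCNT_SATURATED` and the threshold value are assumptions for this sketch, not Xen's actual definitions): saturation is applied on the *increment* path, where an implausibly large count is pinned so it can never reach zero again, while the *decrement* path never touches the object after a potential destruction.

```c
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical saturation sentinel: a large negative value, so that an
 * overflowing increment (wrapping into negative territory) lands in a
 * range that can never decrement back to zero. */
#define REFCNT_SATURATED (INT_MIN / 2)

typedef struct { atomic_int refcnt; } refcnt_t;

/* Saturate on overflow at get time: if the previous value was negative,
 * the counter has wrapped, so pin it. Leaking the object is preferable
 * to a later use-after-free. */
static void refcnt_get(refcnt_t *r)
{
    int old = atomic_fetch_add(&r->refcnt, 1);

    if ( old < 0 )
        atomic_store(&r->refcnt, REFCNT_SATURATED);
}

/* Call the destructor only on the exact 1 -> 0 transition. On underflow
 * (new value negative) the object may already have been destroyed, so
 * do NOT write *r again -- just report the bug to the caller. */
static bool refcnt_put(refcnt_t *r, void (*destructor)(refcnt_t *))
{
    int ret = atomic_fetch_sub(&r->refcnt, 1) - 1;

    if ( ret == 0 )
    {
        destructor(r);
        return true;
    }

    return false;
}
```

A saturated counter stays far below zero under further puts, so the destructor never fires for it; only a clean 1 -> 0 transition frees the object.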

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 07:11:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 07:11:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521763.810605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJ1g-0007kJ-5a; Mon, 17 Apr 2023 07:11:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521763.810605; Mon, 17 Apr 2023 07:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJ1g-0007kC-1V; Mon, 17 Apr 2023 07:11:48 +0000
Received: by outflank-mailman (input) for mailman id 521763;
 Mon, 17 Apr 2023 07:11:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poJ1e-0007k6-NK
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 07:11:46 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20604.outbound.protection.outlook.com
 [2a01:111:f400:7d00::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1bdc8ff9-dcef-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 09:11:44 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8566.eurprd04.prod.outlook.com (2603:10a6:10:2d5::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 07:11:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 07:11:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1bdc8ff9-dcef-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XACSUd95gNSk4+/bamMfM6KEPbXqKyyt2RknVxl+GwTYu7roRoyMHWCdefzJufDPXh8/Vfm0m4XCSF+IP6BfDnC4d9jwyg/2VwDM8l4QBQL6Sc/bW0hFDLueSgvp/yDQVREBSweAYJe8aO6tJw+vXQFRz9Bwz4VA3brkhWv6jlxDr+ObIm8SaC5ZVOND6zQ/P0hzhVZaxVrmhKh0iRld8CB0ZsUAFzlEmuCYsFU2weeRLaWYXbxvKn4nJaawbhbbX5+y/64akSCBgkgVAJdq7UDux6Q8eUpd/G83M0viFe4opFKQQymP8G4XHAjEPbBvwPgzKRRF86WAr2+WUESBPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ynkEhOnbNy9kAOy+T/AB+jEXHtc582wleJWzK0Q62+A=;
 b=ZA4XBwUN14eNeHowibIw6kJZCdcFQfU/tiKp2AVbDlzxtC316t8eyHvXW//Q0hO458j4RaFFVEX0LKrHupxqngzs4JljWyQiiRioKt2RGOnXYUbxbYnLksXf5WrkHNq1vhZacD8Usn+oumuavNFf9K4IPmUWgivH7WslM1AKxcYRWQE2enSUnwdIXXsF4ZqvVh+C4XhDgVlPU0ImPOi4OPFSsz9Ly+fewQcBFzkqyhKRZYId+fAGxnLzziRi0GmpfFJtFs4W1rxz86dMp+ezZe+8REv47MDzbrtvaUbMBs0+jQxDocit94Cwf2jmaWmg4bDsKDEaV2bRvlJ6DTMz/w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ynkEhOnbNy9kAOy+T/AB+jEXHtc582wleJWzK0Q62+A=;
 b=lBecPhAITbUVEdqy7h+w6atr1l2ssPNiYEaN1S+gUA0jOqpK/fd9JpWkS2XtgJdIGuvtsYHGTKdU/55EnbxCliwIck6Z09BjMzS/NDy670AtLxrnZKT3yCFe72ERfcozr7H8BFTnAqjX5geXBdDJgP6cK9gGouVLBDDscUE6KRTi+Mn2fIhQAH+Z84Ybk/lQL6r3C3Sg0kMYDnYnuXs5ADROfHFiQOznXl2Tw9YPgRY9uJkiMKo6dUUwSkJEnddiyX22rFJ+e6qyS1B+xOkDUlf1EylNVoNLxdgIU1hxqGBgxn6abaCe21fZ8+ARi1ykmqpIzKwI1Qn1hM4ag3653Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <00e284d2-c4e5-5f2d-ef4b-6f0b5d7cc8ee@suse.com>
Date: Mon, 17 Apr 2023 09:11:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: Parallel build regression in the emulator harness
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <19944c81-8dd4-6c89-0fa0-f4837648c7d5@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Anthony PERARD <anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <19944c81-8dd4-6c89-0fa0-f4837648c7d5@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0146.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8566:EE_
X-MS-Office365-Filtering-Correlation-Id: 3ee71aa3-821a-48bd-3eea-08db3f12fdea
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	kIt/2bwd2Hr1AOsAZm8Yof/RDdm4/Ov32UpsADBRuIY7nD3yR5f7xoBwcMKdvVQLgRoXkA6lIGOMsCdwG3L9ATLzx3hClNNKQlW9/YRjGS0rDslHcrqhYcLXcfYNS7MW02xCZ9vzTD/b18TMQF4ucq7l/jBV1QszU89xzNbRrZP/z25JGClFKT6EGh/ChOsoGRmVZjWwkhhVIE7bsJq+y8rjDcTMIxhuhPhR5UyKRA77xPA3yFB88gSrkYmy1OE8krpgZHtl6cSmmjQ7JR4J7PtNNVTvoILo2BoQmCJUe32ivquNW8ERaAEy8IdCqCCppdSlzDndyUI0ioi3jyR0yfHarvm+KPcxeoljrGv+R1ZMLS6L36K4Qtt61WRZq/fh41XbmegyP08iZheaW5IrDJjSt06OFUf38HyCIPXN0fugLNMM1bf52ubfUXsYbMmmYO+Mak/gbniWJOhInENTFx0Q8Xjy00Buo+rJKbh9AYzxCyNEY4z4uydZM+/Ml7WsRSHH/+x2QS7lSI85oNcmbkLoyZOmeJReGPgJAGwnQ2HJkFvbWT4rgicIxGu4eB9Z/ZL49DEeK42EpIo5dmcn+fSTeDEGe3/4aW1GRnhpx/fRqwB/uyDTbG3pTHNwr+LKIpv7ikWrVlOdsOJvBRP5sA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(366004)(396003)(39860400002)(136003)(346002)(451199021)(54906003)(31686004)(6486002)(478600001)(41300700001)(316002)(83380400001)(6916009)(4326008)(66556008)(966005)(2616005)(186003)(53546011)(26005)(6506007)(6512007)(66946007)(66476007)(5660300002)(4744005)(2906002)(8936002)(8676002)(38100700002)(86362001)(31696002)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TFJwa2MrNUp1dEJtOWVjcDlIZkZ0MnJETitWQ0FSa1Y5NmE3czdDajBWdjB3?=
 =?utf-8?B?YjlJdDVUZGU3dytjV2tqVW13WnRmdVRSK0FZeXRUNERSRXg1UEU3ODh3T2tX?=
 =?utf-8?B?VWFTbG94bW40TS9mZUdpbzRuTnk4UGZKdlY0Zi9vZUNMQUU3cmpFQ05Hd25Q?=
 =?utf-8?B?ajR2VWEzcEtrVEJSY3N4TlkrRnowVUh2WitLL01NQVBkeDdrSEVRRUNySyt2?=
 =?utf-8?B?SUROQkVyK2s1TXV4SW1xWFA2blBhU25qWEh4TDRhWTNnV21iQTVRMXUxbndw?=
 =?utf-8?B?NUhYVzhKcDF2MkwrMnl6Uis3cFJPd3dCWlpBOW1YVXhQOWwzRmYvS1ZsNXNZ?=
 =?utf-8?B?b202aDM0WGFWbUJYaGxFRXgrNmZYanZmN0pnWEFpRnlOUFZnWmJvUVJnODVs?=
 =?utf-8?B?RkNEa2NpOGh3bEFVVGdWdlVxUG1NTWorU05qeHE1aVpvYlZpcGtCMElINkhO?=
 =?utf-8?B?aWFJQXdZYVJaaXFnbDhyL1NEZGF5bTNBL1RBMHJBeXh5TzQwWStRQzhBRFl6?=
 =?utf-8?B?bFNsM1ZuYWVHOE1oQzlJY1ByTncxd3YxemJVVHRndGhzQ3YySnBraGJNMmVS?=
 =?utf-8?B?R0l4SFNFR3RlcnN3eEZyOXVJN3F3b2g5aW1qS0RqQ3htb1NVeEVZbThiQzNa?=
 =?utf-8?B?WDQ5TFVKaWFjVWRadnFOMEVjNzcwVk5tVGVTU0ZZelVqeVAxVTVOUTZxaFF2?=
 =?utf-8?B?RUN0VFVKcFFBRGFYTDJWc1VtdkpZdGdZNS8rQTNUZHRzU0ZETWFDdDZTM1JN?=
 =?utf-8?B?VUhLeTJ0azg0TGIrQUR1YUYxeHFkY0JzN0wvODc1TzZudjZXdUhCT1hzOUZs?=
 =?utf-8?B?VEgzSWZIc3Rza1ZINW84YnFiK2tEZnJUajlGblpZQ1g3V2RKWnc5aWU0UDFs?=
 =?utf-8?B?M0RUaklaYm5mRVlUVForWmdDQzNBQmtCZTR2YU1pSXJLWFpLU3d6dDg2WE9i?=
 =?utf-8?B?cVhUZnVjS1o5cVVSM1k0UHFDcC9NOHFHNDRLQjlNQkpCVE0zakhhTnFlQlhM?=
 =?utf-8?B?VmY5WDBZMjlxbHAraVZZQVcxZk80aVZKcGp2aWsyVzAyNFVHUk1CSnA0MGsw?=
 =?utf-8?B?QlBZbnV4WlRLYk9WdFVWWnNLQWNZS2krRmxTSlY0SmZXS2F4NStiZzFPbUlS?=
 =?utf-8?B?aGt1T05VdG1FeithOUZFQ2s4c01NS21jTm1xenZuOTVVV0g2eU9Cb0VucjZO?=
 =?utf-8?B?K3d4dmF2L1Jsc2IySzEzbEJIZCs3QllpMXMvem1UTVJGSTc0N0M1QnBKY3Nh?=
 =?utf-8?B?akhleWdGZlI1V0M3MGUxWXd6bzZ3Zzhhd0VsVFhENlV3U2JkcktPb2F6UEdH?=
 =?utf-8?B?bnhSdGN5clRNelJZT09FUGttTkhpbmZuVVpsL0JXWWNGTEVYTGU0UTNVWDVy?=
 =?utf-8?B?QVVWRWNud0NZTXdoUXZoSFRTUWNQNy8wUW9GbjFHL0FGU2xSamh2Q0ZQSlIz?=
 =?utf-8?B?K0pQSGFreGkrb0ExaHBucDRwazM3RUZpSmdpMHRIYXIvVnhKNmdTQUhxWUFS?=
 =?utf-8?B?MXFNc25yYkV3TmVKOC9LNGtFS0hLRHkwZTRMdnhsR1NmOUhZeXB2VkxKWnBD?=
 =?utf-8?B?a1pCMXBOU1F0clJuTTZlbnk0eGpEeVNydnYyeHBJRm1uYTR3YVkrNllBeVh3?=
 =?utf-8?B?Yk5GY01KSDlVdzJmRW5pbWp4RjU4OEt1M1lZNzBpWGVXOWw2c3pKYjNoK1VM?=
 =?utf-8?B?VDhnUXJrR2wwRWJVRG5LNmU5T1V3NmkrT0lSWnZkTjFDaDlINk1KUmJVTDFM?=
 =?utf-8?B?S09qRktydVRRMUVIYisvZWZnUGFKdDdnbXBYa0I4WWNzZ1U2MSttTHFHams5?=
 =?utf-8?B?QlpOSU1VRlBQQzJETWR0NjJSSzV6RlRIMkt4ZTJlS1ByM1ErK0phSDVNNkd4?=
 =?utf-8?B?Y1FtU1M1MVdQRjRIVm5DRTNUMzBVYVZSMFhqQ3FRdG9HYXFhWSt1WE9sNkM5?=
 =?utf-8?B?VUlaNzBPWkFZNVU5WjJBZE1VdnFPekhUK3lpYVhFUlQvd1BoMy9CdTgwMmx6?=
 =?utf-8?B?cldOdzhsVll6YnI1NUMvRXE0emhsdmhjaEdySE1RM1J2cGpQYXlZTUtWc1ZG?=
 =?utf-8?B?ZzhCYUhHNDgxengwSnBJMVZpb1EwQ3dlZnpxTG1BaTZac2thOHFzOWgxTkNq?=
 =?utf-8?Q?d5aheuF+wgRAnKF/6qjHT6FvR?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ee71aa3-821a-48bd-3eea-08db3f12fdea
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 07:11:40.7139
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2Lj/K6wF+xnqP6XRD5CZE74YsGAX3AMEu5xNcQiu7Cfw1xWv2/Gppjnu3umdWKoRfRSErBY5HalNWPnIakLW+Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8566

On 13.04.2023 12:36, Andrew Cooper wrote:
> Gitlab has started very occasionally failing with this:
> 
> https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4104532296
> 
> While the individual log lines appear to be in the right order, the
> build is clearly failing because compiling decode.o is happening before
> the x86-emulate.h symlink is properly in place.
> 
> This will be an error in the recent splitting, but I've not had time to
> investigate further at this juncture.

Well - I've introduced uses of $(x86_emulate.h) in the fuzzer Makefile
(mirroring what we have in the test harness's) without realizing that
there is no such variable there (yet). I'll make a patch ...

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 07:14:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 07:14:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521768.810615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJ3z-0008Ik-Gm; Mon, 17 Apr 2023 07:14:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521768.810615; Mon, 17 Apr 2023 07:14:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJ3z-0008Id-Dv; Mon, 17 Apr 2023 07:14:11 +0000
Received: by outflank-mailman (input) for mailman id 521768;
 Mon, 17 Apr 2023 07:14:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5U/i=AI=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1poJ3y-0008IX-6X
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 07:14:10 +0000
Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com
 [2a00:1450:4864:20::334])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 71dc1703-dcef-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 09:14:09 +0200 (CEST)
Received: by mail-wm1-x334.google.com with SMTP id
 v20-20020a05600c471400b003ed8826253aso7036711wmo.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 00:14:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71dc1703-dcef-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681715648; x=1684307648;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=i33sP5KxJQ3ug0hodb7+NbP/Rdc8Yf+/xfz8DRpbBns=;
        b=rWTNf2rB95O6/JkAY8AYRlsAqMEYrWZdt03p4g6aae9P/EZuvTTGnR4HTOlBJiBeSM
         +XmNm9/5Ct/Px+y8QGd6s21d0K3ziNhTlrO6oAo8WsIuR/S5GMddWieJG8qXU2dnP9pi
         9JDHdd1gBx+DcYNvlW3tlJqkC4/YicGvY4n6q+/yyppGf24B78uwxHnOH3mJDk9b3fg9
         eGSwQyL+YrIdvrwyAccS4Yk93eIIJUSQFayBkRDhTDdbW7EVo0SUscm+Bc/1gdA2b2mm
         ihieOyKpLe9HhAlVsiUW1xtqDdFMzJULVsTLc04rP1PtAsGmmUrzdM+4bU3BoEg5/MLx
         L/Mg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681715648; x=1684307648;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=i33sP5KxJQ3ug0hodb7+NbP/Rdc8Yf+/xfz8DRpbBns=;
        b=d6ZHdKQyY8hbIzLJb2THHBGDonKmn6s/mXzL5/M3zzZ0BYBLX9lwJLbhuG9GCbS8NA
         h3WWkrhPxZ4L39bnKJCNT7z65sACENxbfe1l7lN3mm/aCcznaUQRjgdIwJLnpErJvSST
         aAuovtRz8GJbfRe7GdxnykFR5OTBIoXYyptBqm6OOTgm3ic1vKFWZ2l5fE7y6czaRiEW
         7TG/S9+VQcpAY/tIoL9wGDLLkJNkIk0OvAsJHqmx5vnG5Va1nBVt1yIi/PWbnCFr+L5B
         Be65XuWpPLhotrJBRfiUt2apdXWgEepJaDw2SyE0jGtFwA5uC+WRC5vaZAQDI7h5t0x3
         piHQ==
X-Gm-Message-State: AAQBX9cc7D8CsMhgMyF05tPvRdxLFSmLxv5LzPyJLy6ZJTx9IzDCOCL8
	HurM12ijWfLxVOg9tsDo07mei24FudHDJLWlKYrYYQ==
X-Google-Smtp-Source: AKy350ZoRtbAsG2NmVq9FheLuaqlFQaisu1AeojwWM+a2iRBmyUcR24EyrzgL25BAVg2XM9OlFHrztb5j2zWfJBW98o=
X-Received: by 2002:a1c:7206:0:b0:3f1:6980:2d30 with SMTP id
 n6-20020a1c7206000000b003f169802d30mr1079499wmc.4.1681715647900; Mon, 17 Apr
 2023 00:14:07 -0700 (PDT)
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-14-jens.wiklander@linaro.org> <86b582a9-861c-7d8f-beeb-3469ea7948be@xen.org>
In-Reply-To: <86b582a9-861c-7d8f-beeb-3469ea7948be@xen.org>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Mon, 17 Apr 2023 09:13:57 +0200
Message-ID: <CAHUa44HuHhD7i3MH5h8dE+K7Mk=6NoZ8ij4K38gy6LnkX=Zacw@mail.gmail.com>
Subject: Re: [XEN PATCH v8 13/22] xen/arm: ffa: support guest FFA_PARTITION_INFO_GET
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
	Marc Bonnici <marc.bonnici@arm.com>, Achin Gupta <achin.gupta@arm.com>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Julien,

On Thu, Apr 13, 2023 at 10:33 PM Julien Grall <julien@xen.org> wrote:
>
> Hi Jens,
>
> On 13/04/2023 08:14, Jens Wiklander wrote:
> > Adds support in the mediator to handle FFA_PARTITION_INFO_GET requests
> > from a guest. The requests are forwarded to the SPMC and the response is
> > translated according to the FF-A version in use by the guest.
> >
> > Using FFA_PARTITION_INFO_GET changes the owner of the RX buffer to the
> > caller (the guest in this case), so once it is done with the buffer it
> > must be released using FFA_RX_RELEASE before another call can be made.
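The version translation described in the quoted commit message can be sketched in plain C. The struct layouts below mirror the quoted `ffa_partition_info_1_0`/`_1_1` definitions (the v1.1 fields past `execution_context` are assumed from the FF-A v1.1 spec, since the quoted hunk is truncated), and the helper itself is a hypothetical illustration, not Xen's actual mediator code:

```c
#include <stdint.h>
#include <stddef.h>

/* Mirrors of the partition information descriptors in the quoted patch. */
struct ffa_partition_info_1_0 {
    uint16_t id;
    uint16_t execution_context;
    uint32_t partition_properties;
};

struct ffa_partition_info_1_1 {
    uint16_t id;
    uint16_t execution_context;
    uint32_t partition_properties;
    uint8_t uuid[16];            /* Added in FF-A v1.1 */
};

/*
 * Translate an SPMC v1.1 reply into the v1.0 layout a v1.0 guest
 * expects: copy the common fields and drop the trailing UUID.
 * Returns the number of bytes written to dst.
 */
static size_t ffa_pinfo_1_1_to_1_0(const struct ffa_partition_info_1_1 *src,
                                   size_t count,
                                   struct ffa_partition_info_1_0 *dst)
{
    for ( size_t n = 0; n < count; n++ )
    {
        dst[n].id = src[n].id;
        dst[n].execution_context = src[n].execution_context;
        dst[n].partition_properties = src[n].partition_properties;
    }
    return count * sizeof(*dst);
}
```

In the real flow the source array lives in the RX buffer owned by the caller after FFA_PARTITION_INFO_GET, which is why the guest must issue FFA_RX_RELEASE before the next call.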
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > ---
> >   xen/arch/arm/tee/ffa.c | 137 +++++++++++++++++++++++++++++++++++++++++-
> >   1 file changed, 134 insertions(+), 3 deletions(-)
> >
> > diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> > index 127397d8e448..74b8c517afb8 100644
> > --- a/xen/arch/arm/tee/ffa.c
> > +++ b/xen/arch/arm/tee/ffa.c
> > @@ -166,7 +166,18 @@
> >   #define FFA_MSG_SEND                    0x8400006EU
> >   #define FFA_MSG_POLL                    0x8400006AU
> >
> > +/*
> > + * Structs below ending with _1_0 are defined in FF-A-1.0-REL and
> > + * structs ending with _1_1 are defined in FF-A-1.1-REL0.
> > + */
> > +
> >   /* Partition information descriptor */
> > +struct ffa_partition_info_1_0 {
> > +    uint16_t id;
> > +    uint16_t execution_context;
> > +    uint32_t partition_properties;
> > +};
> > +
> >   struct ffa_partition_info_1_1 {
> >       uint16_t id;
> >       uint16_t execution_context;
> > @@ -183,7 +194,8 @@ struct ffa_ctx {
> >       unsigned int page_count;
> >       /* FF-A version used by the guest */
> >       uint32_t guest_vers;
> > -    bool tx_is_free;
> > +    bool rx_is_free;
>
> I am a bit confused why this is renamed. Did you introduce tx_is_free by
> mistake? If not, can you name the field correctly from when it is
> introduced?

I'll fix it.

Thanks,
Jens

>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 07:22:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 07:22:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521772.810624 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJBp-0001NC-A1; Mon, 17 Apr 2023 07:22:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521772.810624; Mon, 17 Apr 2023 07:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJBp-0001N5-77; Mon, 17 Apr 2023 07:22:17 +0000
Received: by outflank-mailman (input) for mailman id 521772;
 Mon, 17 Apr 2023 07:22:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8RDw=AI=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1poJBn-0001Mj-AD
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 07:22:15 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2075.outbound.protection.outlook.com [40.107.6.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 92c4ff40-dcf0-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 09:22:13 +0200 (CEST)
Received: from DB6PR0601CA0047.eurprd06.prod.outlook.com (2603:10a6:4:17::33)
 by VI1PR08MB5328.eurprd08.prod.outlook.com (2603:10a6:803:13a::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 07:21:43 +0000
Received: from DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:17:cafe::b8) by DB6PR0601CA0047.outlook.office365.com
 (2603:10a6:4:17::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.44 via Frontend
 Transport; Mon, 17 Apr 2023 07:21:42 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT007.mail.protection.outlook.com (100.127.142.161) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.12 via Frontend Transport; Mon, 17 Apr 2023 07:21:42 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Mon, 17 Apr 2023 07:21:42 +0000
Received: from 0646f5cb71cd.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3FB95646-3472-4297-BD4E-C6AB4EBE175F.1; 
 Mon, 17 Apr 2023 07:21:31 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0646f5cb71cd.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 17 Apr 2023 07:21:31 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB5894.eurprd08.prod.outlook.com (2603:10a6:20b:23d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 07:21:28 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6298.030; Mon, 17 Apr 2023
 07:21:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92c4ff40-dcf0-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cavBQbLHMJ0mRw8PT5sgIG/4b6X42jfw8a0tN06yiXA=;
 b=j3Es5bQlHZaim3r5oqNiafcjFcytZ/uyJb2pRfITPfzbxpiSMajxYoY0uQ2S/KzgS2oGHfM1sykE+hR6gC6lGmYCiMyQpKL2oUBGcqekvS3jAFbEpxjIqkSEE2vZHK5wnINvXDUHO1xAAxvapr0LiodSRqXOXorMFfhodEPowBw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: f230175f34a0fe6f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=InrLjsVg9nZWtrygfBYthw88FAAWAeq23K4DjieBdo8N62NElMCoC5/Nv1R1eq4UX4INwz77BX2yYrWmiydvFFwNzIeXjb369Q+raq0cAzuP+vM1GrGBHB4zut9gdRM6NqTDg0mia6G4aNmqJ1YRgi1v62n5Yx369qIt1z/fDM/oQGEzSSAbz6lrXG41UeS0k4dXdIe2j53TZZC9RTqhuVFEDjogl+TfGgTKgqCyYkMyAw2wBbqy9zVNjPz9/Qnyxu2Ze75TSB0rpHTIJr9UU+d75C9pNJWRLGG3aapex+GtPt05oLdeipAFIKN/bQfYWyR4q61H99Dmzp7R8LVpag==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cavBQbLHMJ0mRw8PT5sgIG/4b6X42jfw8a0tN06yiXA=;
 b=JRSbcjxiw9ZXJE94U1s9fa2CohyCpX6Fh6AyNUh6xJmOyxnhOI/DVeVzDoJzwqGIu6a9ODVtxe2NKBKueYjm9uuvbBccU+VLT0fH39jE4jCdD867H8fHbHPlOaDqGzQl75wXP343lvQJdJXgE8TJfRCSXmposBhncHWcmOEiJ9jnBAtcnlOC/B7s39yf8IlHg4DxEDf1wiUMRrsCSfH3BzSILJMG/0pZTbd1lGeqb4iC65kYBAzmbLxYgxr3i1tIDDOGCwKvTwN/j1Pwigl3j2GaSFEUlkBo7EiZPcMmBPXUmSwiCzCqI4tUz4x2jjAt0kIjfH3487b368iX7XVsRA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cavBQbLHMJ0mRw8PT5sgIG/4b6X42jfw8a0tN06yiXA=;
 b=j3Es5bQlHZaim3r5oqNiafcjFcytZ/uyJb2pRfITPfzbxpiSMajxYoY0uQ2S/KzgS2oGHfM1sykE+hR6gC6lGmYCiMyQpKL2oUBGcqekvS3jAFbEpxjIqkSEE2vZHK5wnINvXDUHO1xAAxvapr0LiodSRqXOXorMFfhodEPowBw=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 2/5] xen/arm64: Rework the memory layout
Thread-Topic: [PATCH v7 2/5] xen/arm64: Rework the memory layout
Thread-Index: AQHZcHBTTmBJ360YB0mwtleUAbK62a8vGYQA
Date: Mon, 17 Apr 2023 07:21:27 +0000
Message-ID: <3D6BC31F-05F0-4627-9311-E77867AD32C0@arm.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-3-julien@xen.org>
In-Reply-To: <20230416143211.72227-3-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB5894:EE_|DBAEUR03FT007:EE_|VI1PR08MB5328:EE_
X-MS-Office365-Filtering-Correlation-Id: 35a1f825-7cb3-4fb7-0d0c-08db3f1464c9
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <79984573A1AEBB4EBF2F93FC9677BA49@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB5894
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6e1625eb-0185-4185-2e8b-08db3f145be9
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 07:21:42.5381
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 35a1f825-7cb3-4fb7-0d0c-08db3f1464c9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5328



> On 16 Apr 2023, at 15:32, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> Xen is currently not fully compliant with the Arm Arm because it will
> switch the TTBR with the MMU on.
>
> In order to be compliant, we need to disable the MMU before
> switching the TTBR. The implication is the page-tables should
> contain an identity mapping of the code switching the TTBR.
>
> In most of the case we expect Xen to be loaded in low memory. I am aware
> of one platform (i.e AMD Seattle) where the memory start above 512GB.
> To give us some slack, consider that Xen may be loaded in the first 2TB
> of the physical address space.
>
> The memory layout is reshuffled to keep the first four slots of the zeroeth
> level free. All the regions currently in L0 slot 0 will not be part of
> slot 4 (2TB). This requires a slight tweak of the boot code because
> XEN_VIRT_START (2TB + 2MB) cannot be used as an immediate.
>
> This reshuffle will make trivial to create a 1:1 mapping when Xen is
> loaded below 2TB.
>
> Lastly, take the opportunity to check a compile time if any of the
> regions may overlap with the reserved area for identity mapping.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>

Hi Julien,

It looks fine to me.

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
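As a side note on the quoted layout: with a 4 KiB granule and four levels of translation, each zeroeth-level slot spans 512 GiB (bits [47:39] of the virtual address select the slot), so the 2 TB boundary the patch reserves falls exactly at slot 4. A minimal sketch of that arithmetic, for illustration only:

```c
#include <stdint.h>

/*
 * Bits [47:39] of a stage-1 virtual address select the L0 slot when
 * using a 4 KiB granule with four levels of translation: each slot
 * therefore covers 2^39 bytes = 512 GiB.
 */
static unsigned int l0_slot(uint64_t va)
{
    return (unsigned int)((va >> 39) & 0x1ff);
}
```

This is why the commit message says the Xen virtual address must sit in slot 4 or above: XEN_VIRT_START at 2 TB + 2 MB lands in slot 4, leaving slots 0-3 free for the identity mapping.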




From xen-devel-bounces@lists.xenproject.org Mon Apr 17 07:29:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 07:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521777.810635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJIQ-00025A-3p; Mon, 17 Apr 2023 07:29:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521777.810635; Mon, 17 Apr 2023 07:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJIQ-000253-1E; Mon, 17 Apr 2023 07:29:06 +0000
Received: by outflank-mailman (input) for mailman id 521777;
 Mon, 17 Apr 2023 07:29:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ncOi=AI=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1poJIO-00024x-JD
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 07:29:04 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20624.outbound.protection.outlook.com
 [2a01:111:f400:7eab::624])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 866a7e2c-dcf1-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 09:29:03 +0200 (CEST)
Received: from CY5PR17CA0059.namprd17.prod.outlook.com (2603:10b6:930:12::23)
 by PH7PR12MB5594.namprd12.prod.outlook.com (2603:10b6:510:134::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 07:28:59 +0000
Received: from CY4PEPF0000C97F.namprd02.prod.outlook.com
 (2603:10b6:930:12:cafe::b9) by CY5PR17CA0059.outlook.office365.com
 (2603:10b6:930:12::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.46 via Frontend
 Transport; Mon, 17 Apr 2023 07:28:59 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CY4PEPF0000C97F.mail.protection.outlook.com (10.167.241.197) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.16 via Frontend Transport; Mon, 17 Apr 2023 07:28:58 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 17 Apr
 2023 02:28:57 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 17 Apr 2023 02:28:56 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 866a7e2c-dcf1-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=be1aAbCF89tmwmqqpyXGz38JXhvhwjDI3Oas/23PfCWSIuJUkbDrZHy3NHSP1tDIL3pMmIh7c04ofRuGTdPamRl4r8diysP4lPUV821kZnm6auGkLfM2+/GhNvlPQsGxg+8/tx3mze/8PkYqrDqbha2qs+iWy8dlVp8BZtTM2RGTNgsW/PnizG2o3IxF5K7Gxbd4lkLgte1iQVvbAmLCfvQ72AKfqEqq6uznAOetuuN5VXFSXrhU9RAOivGASpO3JjdVY6Vqrmo4r8UyAIny3IjepRtDAwxkzkI3/32oR0Sl5qYhX2UQlsDv/ZTBDs0y4rQC7Fy3FhoeuEiqBynDtg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=x/tanqHOuQI4YVyvSOUiTRNovyenuslY3plKudh7H40=;
 b=BbBKZfzgS+wgT/AFDObB19z6RqYkAjSQfYCsQFBL2sTRSJTp1Zoak01DB9JLpc4QrgzUhndfJp/a5AB2wTOKA9PQ+veVQ1Bmctr2yBkC7RdJDkIQqTX6UsqpGOMBoxUj7KU1U3L/oVRHXrkuRcYDV+gVQWGxMPFkS5KOy5ZhKeqKuxRHb84IDOLOkRGOjLFcju41hGw5ZLqwksaUpp5Bm4ZUtmTm8S3d5YGgf1E5TwW8HfyBiJ6fL2giwrdMoHrvwqQCXX+u9QkCuStj0Y2bBll7d74NMpbBjwKQIaqVbVsnlDXurA5Rs6WuWN+Ji45reL66PWZDkd56YOGuFtRk4w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x/tanqHOuQI4YVyvSOUiTRNovyenuslY3plKudh7H40=;
 b=IFsyYpm684BnWGBLZn1+gmWXzcs+O7ePfeqD1q0BKp/tTMHwLvaExwvPVa26+fuy1yrGYnyyUpCu0s1Db+mDUEB8tdTLTDDCy2b2A2q+k6pAC5DrEAaM/TKoQm+lkBDOm3jYI9HwNX5yEyZ9mmvmflT1TsrCk8Gr9HJeY8MOEw4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <b088d287-3809-0a37-1a41-992d6ed9d631@amd.com>
Date: Mon, 17 Apr 2023 09:28:55 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v7 2/5] xen/arm64: Rework the memory layout
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-3-julien@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230416143211.72227-3-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000C97F:EE_|PH7PR12MB5594:EE_
X-MS-Office365-Filtering-Correlation-Id: c9e5c3dc-db05-4af1-616f-08db3f156893
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 07:28:58.3315
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c9e5c3dc-db05-4af1-616f-08db3f156893
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000C97F.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB5594

Hi,

On 16/04/2023 16:32, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Xen is currently not fully compliant with the Arm Arm because it will
> switch the TTBR with the MMU on.
> 
> In order to be compliant, we need to disable the MMU before
> switching the TTBR. The implication is the page-tables should
> contain an identity mapping of the code switching the TTBR.
> 
> In most of the case we expect Xen to be loaded in low memory. I am aware
> of one platform (i.e AMD Seattle) where the memory start above 512GB.
> To give us some slack, consider that Xen may be loaded in the first 2TB
> of the physical address space.
> 
> The memory layout is reshuffled to keep the first four slots of the zeroeth
> level free. All the regions currently in L0 slot 0 will not be part of
> slot 4 (2TB). This requires a slight tweak of the boot code because
> XEN_VIRT_START (2TB + 2MB) cannot be used as an immediate.
> 
> This reshuffle will make trivial to create a 1:1 mapping when Xen is
> loaded below 2TB.
> 
> Lastly, take the opportunity to check a compile time if any of the
s/a/at/ compile time

> regions may overlap with the reserved area for identity mapping.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ----
>     Changes in v7:
>         - Remove all tags
>         - Add BUILD_BUG_ON()s
>         - Don't forget to update FRAMETABLE_VIRT_START and
>           VMAP_VIRT_START
> 
>     Changes in v6:
>         - Correct the BUILD_BUG_ON(), Xen virtual address should be
>           above 2TB (i.e. slot0 > 4).
>         - Add Bertrand's reviewed-by
> 
>     Changes in v5:
>         - We are reserving 4 slots rather than 2.
>         - Fix the addresses in the layout comment.
>         - Fix the size of the region in the layout comment
>         - Add Luca's tested-by (the reviewed-by was not added
>           because of the changes requested by Michal)
>         - Add Michal's reviewed-by
> 
>     Changes in v4:
>         - Correct the documentation
>         - The start address is 2TB, so slot0 is 4 not 2.
> 
>     Changes in v2:
>         - Reword the commit message
>         - Load Xen at 2TB + 2MB
>         - Update the documentation to reflect the new layout
> ---
>  xen/arch/arm/arm64/head.S         |  3 ++-
>  xen/arch/arm/include/asm/config.h | 38 +++++++++++++++++++++----------
>  xen/arch/arm/mm.c                 | 23 +++++++++++++++----
>  3 files changed, 46 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 4a3f87117c83..663f5813b12e 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -607,7 +607,8 @@ create_page_tables:
>           * need an additional 1:1 mapping, the virtual mapping will
>           * suffice.
>           */
> -        cmp   x19, #XEN_VIRT_START
> +        ldr   x0, =XEN_VIRT_START
> +        cmp   x19, x0
>          bne   1f
>          ret
>  1:
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 5df0e4c4959b..2cfe5e480256 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -72,16 +72,13 @@
>  #include <xen/page-size.h>
> 
>  /*
> - * Common ARM32 and ARM64 layout:
> + * ARM32 layout:
>   *   0  -   2M   Unmapped
>   *   2M -   4M   Xen text, data, bss
>   *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>   *   6M -  10M   Early boot mapping of FDT
>   *   10M - 12M   Livepatch vmap (if compiled in)
>   *
> - * ARM32 layout:
> - *   0  -  12M   <COMMON>
> - *
>   *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
>   * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>   *                    space
> @@ -90,14 +87,23 @@
>   *   2G -   4G   Domheap: on-demand-mapped
>   *
>   * ARM64 layout:
> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
> - *   0  -  12M   <COMMON>
> + * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
> + *
> + *  Reserved to identity map Xen
> + *
> + * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4]
missing closing parenthesis at the end of line

This can be done on commit, so:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
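For context on the head.S hunk in the quoted patch: AArch64 compare/add/subtract instructions encode a 12-bit immediate, optionally shifted left by 12 bits, which is why `cmp x19, #XEN_VIRT_START` worked when XEN_VIRT_START was 2 MB but cannot encode 2 TB + 2 MB, forcing the switch to `ldr x0, =XEN_VIRT_START`. A rough encodability check, as an illustration rather than the architectural decoder:

```c
#include <stdint.h>

/*
 * AArch64 ADD/SUB/CMP (immediate) accept a 12-bit unsigned value,
 * optionally shifted left by 12 bits.
 */
static int is_arith_imm(uint64_t v)
{
    return (v & ~0xfffULL) == 0 ||          /* imm12 */
           (v & ~(0xfffULL << 12)) == 0;    /* imm12, LSL #12 */
}
```

2 MB is 0x200000, i.e. 0x200 shifted left by 12, so it fits; 2 TB + 2 MB sets bit 41 and does not, so the constant must be loaded from a literal pool with `ldr` before the compare.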


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 07:31:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 07:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521781.810644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJKb-0003UW-HG; Mon, 17 Apr 2023 07:31:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521781.810644; Mon, 17 Apr 2023 07:31:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJKb-0003UP-EX; Mon, 17 Apr 2023 07:31:21 +0000
Received: by outflank-mailman (input) for mailman id 521781;
 Mon, 17 Apr 2023 07:31:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLV6=AI=citrix.com=prvs=464b9e9d0=roger.pau@srs-se1.protection.inumbo.net>)
 id 1poJKa-0003UH-FI
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 07:31:20 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d6305203-dcf1-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 09:31:17 +0200 (CEST)
Received: from mail-bn7nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Apr 2023 03:31:13 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB6544.namprd03.prod.outlook.com (2603:10b6:510:b9::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 07:31:10 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 07:31:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6305203-dcf1-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681716677;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=wDnaXqysAqoxC0OvRMTcf6nCobQtNHcHueqP+gPJmng=;
  b=f3LTLz6VBFy81yGrouBXfzz2rE+nzCKurhIUHmkL5IQ8so3zmjimprNK
   iOf1EVCuv4HkVZdn9L8RjW4xop/RKczClSwpxFODHFxABGCrdugwMgAoX
   ujAYPje5L2hkQpFGMxBEEW4eSAb5lpLMrccZwp7A6jwY2PEH1P68nq+TB
   o=;
X-IronPort-RemoteIP: 104.47.70.100
X-IronPort-MID: 105669632
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:v6JZgWBPRNFQf7X6Ew5e31EuGc4uS3nM8X3PCXK9FWlZWZTAHA==
X-Talos-MUID: 9a23:xultngR4LvgAbBUkRXTi2R06Lu5C/Z/3VmZXnYkDuveDEWtvbmI=
X-IronPort-AV: E=Sophos;i="5.99,203,1677560400"; 
   d="scan'208";a="105669632"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MetBibDaQqTTTQRQ9IwmomuVDXwbtMBH6hKFhNP/R8QEdQNstMLHBnhhOZJm/uL3pbPYM1nI3OU6qJZniI9fYVFekeMGzbOnxvavx2eHqBr6/IaLkEP0WRmwHcXN7ZPXGcC4FRr9d/ItCV2XEB5RHIk5a22vlqsATtllQ5VOuGC0ZotMFrhNdcc9vYvVxsOe9lfuBSiDScf3F4aB0A3ugIWWc97yBSUr5hEURsSmFsVS8n/70eshFbWVJG50egDRp1fbonwhOwtrGQ2VL74FB2MN1xahWTsaSNlbJsvh2OudcWN95Xao5+EfMxdjY9XGe0NAQT7oSqaU7VDmzI06TQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NZn4kzEE6aP9fyCf+9qanmJfpDIZMAhoGwQkQL7qdVc=;
 b=hQUTW0Gv9BifnN4mBMGCkxlQgcKH96nJ5MgHnnV/NybieHuubXbAwkw5Z1Xe77kMbr9NdKz9uLf6mUZ1r7/oN5ewNNii7FmxznsYyWPrtr8P3Ktvi1NJ27Qf51QGNSPzey4BX9Fnks/jog26Db3m+ZmTFk+tyX7oCr7GhFFKA1q8g3nueFxdta0n2fohYtoJfgw5abQ4Vqfocc46hNB8S23MENDcAEPzSPZJyQYSuEX00460gR2AZNL5wrK5tbkzizHykYF6Kfdj0Gvt9nXKfAzMyO6clku6mPVpRH7xfO7j4ZQTy7w3I0B4iKe42ADOusmGUcAGq1j8XeuJRxOxtA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NZn4kzEE6aP9fyCf+9qanmJfpDIZMAhoGwQkQL7qdVc=;
 b=RIabbAp1HytjWWRSJFF+zk+RFDg2+jdJLDkJyTmjOATtIdJKucecGr8Vu4UShqywWV6QwkzdTbKfrmEKpRvKo3D1rzDEyDAeHdZHnf4tTDzFDTHrsmjGGA8qyEKpaXkqowx8GFLqSmDeMCWL5v+woHg570DkLl0lydTTXsPoZA0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 17 Apr 2023 09:31:03 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Josh Poimboeuf <jpoimboe@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: Re: [PATCH] create-diff-object: handle missing padding at end of
 special section
Message-ID: <ZDz1tw+3BiWAmEGF@Air-de-Roger>
References: <20230414151933.53851-1-roger.pau@citrix.com>
 <1377fcd2-672a-687d-468d-ddc6d5b4be70@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1377fcd2-672a-687d-468d-ddc6d5b4be70@citrix.com>
MIME-Version: 1.0

On Fri, Apr 14, 2023 at 05:17:42PM +0100, Andrew Cooper wrote:
> On 14/04/2023 4:19 pm, Roger Pau Monne wrote:
> > From: Josh Poimboeuf <jpoimboe@redhat.com>
> >
> > The paravirt_patch_site struct has 12 bytes of data and 4 bytes of
> > padding, for a total of 16 bytes.  However, when laying out the structs
> > in the .parainstructions section, the vmlinux script only aligns before
> > each struct's data, not after.  So the last entry doesn't have the
> > 4-byte padding, which breaks kpatch_regenerate_special_section()'s
> > assumption of a 16-byte struct, resulting in a memcpy past the end of
> > the section.
> >
> > Fixes #747.
> >
> > Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
> >
> > This is commit:
> >
> > c2dc3836e862 create-diff-object: handle missing padding at end of special section
> >
> > In kpatch repository.
> >
> > I've seen the .fixup section get an alignment of 16 but a size of 81,
> > which triggers the error removed by this patch.  Overall I'm not
> > sure why the original alignment check was done against the size of the
> > section; the alignment applies to the address of the section, not its
> > size.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Seems like a clean backport, so FWIW
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> However, surely we want a correction to Xen's linker file too, to stop
> putting out a badly aligned section?

AFAICT that alignment comes from the per-function-section object files,
so that's before the linker has assembled the xen image.  And the
address of the section is indeed aligned to that value, so it's all
correct.
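
The missing-padding situation described in the commit message can be
sketched as follows (a minimal Python illustration, not kpatch's actual
code; the sizes match the paravirt_patch_site example above):

```python
def split_special_section(section_size, group_size, data_size):
    """Split a special section into fixed-size groups, tolerating a
    final entry emitted without its trailing alignment padding."""
    groups = []
    off = 0
    while off < section_size:
        this = min(group_size, section_size - off)
        if this < data_size:
            raise ValueError(f"truncated entry at offset {off}")
        groups.append((off, this))
        off += group_size
    return groups

# 12 bytes of data padded to 16, but the layout only aligns *before*
# each entry, so the final one is 12 bytes: 16 + 16 + 12 = 44.
print(split_special_section(section_size=44, group_size=16, data_size=12))
```

The old assumption (that section_size is a multiple of group_size) would
reject this layout even though every entry's data is intact.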

Even then, it's my understanding that the alignment in sh_addralign
applies to the address of the section, not its size, so I'm confused as
to why create-diff-object was expecting section sizes to be aligned.
IMO it would make sense to pad the start address so it's aligned to the
section requirements, but not the section size.
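
The sh_addralign point can be illustrated with a small sketch of how a
linker might place a section (a hypothetical helper assuming a simple
bump allocator, for illustration only):

```python
def place_section(cursor, sh_addralign, sh_size):
    """sh_addralign constrains where the section *starts*; nothing
    requires sh_size to be a multiple of the alignment."""
    start = (cursor + sh_addralign - 1) // sh_addralign * sh_addralign
    return start, sh_size

# A 16-byte-aligned .fixup of size 81, as observed above, is perfectly
# valid: only the start address is rounded up.
print(place_section(cursor=7, sh_addralign=16, sh_size=81))
```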

Regardless, it's indeed a clean backport of the upstream change, so we
should take it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 07:32:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 07:32:04 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180280-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180280: regressions - trouble: blocked/broken/fail/pass
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Apr 2023 07:32:01 +0000

flight 180280 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180280/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl-credit1     <job status>                 broken
 test-armhf-armhf-xl-multivcpu    <job status>                 broken
 test-armhf-armhf-xl-vhd         <job status>                 broken
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180238

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           5 host-install(5)       broken starved in 180238
 test-armhf-armhf-xl-vhd       5 host-install(5)       broken starved in 180238
 test-armhf-armhf-xl-multivcpu  5 host-install(5)      broken starved in 180238
 test-armhf-armhf-xl-credit1   5 host-install(5)       broken starved in 180238
 test-armhf-armhf-xl-arndale   8 xen-boot                fail baseline untested
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180238
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180238
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180238

version targeted for testing:
 xen                  44843cee3d2b8daa09e5860fc4574219b57acde8
baseline version:
 xen                  f872a624cbf92de9944483eea7674ef80ced1380

Last test of basis   180238  2023-04-13 14:38:34 Z    3 days
Failing since        180256  2023-04-14 05:34:08 Z    3 days    7 attempts
Testing same since   180280  2023-04-17 00:10:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  broken  
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      broken  
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl broken
broken-job test-armhf-armhf-xl-credit1 broken
broken-job test-armhf-armhf-xl-multivcpu broken
broken-job test-armhf-armhf-xl-vhd broken
broken-step test-armhf-armhf-xl host-install(5)
broken-step test-armhf-armhf-xl-vhd host-install(5)
broken-step test-armhf-armhf-xl-multivcpu host-install(5)
broken-step test-armhf-armhf-xl-credit1 host-install(5)

Not pushing.

------------------------------------------------------------
commit 44843cee3d2b8daa09e5860fc4574219b57acde8
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Mar 24 22:24:51 2023 +0000

    ARM+RISC-V: BSS handling improvements
    
     * Correct comments in arm{32,64}/head.S
     * Provide Linker assertions to check the safety of the zeroing loops
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 3e25767ea50a95b0bd08c5922e33e601228f7485
Author: Henry Wang <Henry.Wang@arm.com>
Date:   Wed Feb 1 10:15:13 2023 +0800

    xen/arm: Extend the memory overlap check to include EfiACPIReclaimMemory
    
    As with the static regions and boot modules, memory regions of
    EfiACPIReclaimMemory type (defined in bootinfo.acpi if CONFIG_ACPI is
    enabled) must also not overlap with memory regions in
    bootinfo.reserved_mem and bootinfo.modules.
    
    Therefore, this commit reuses `meminfo_overlap_check()` to further
    extend the check in function `check_reserved_regions_overlap()` so
    that memory regions in bootinfo.acpi are included. If any error
    occurs in the extended `check_reserved_regions_overlap()`, the
    `meminfo_add_bank()` defined in `efi-boot.h` will return early.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 4f6a29158273642928db3f73604ac76d6da1d6af
Author: Henry Wang <Henry.Wang@arm.com>
Date:   Wed Feb 1 10:15:12 2023 +0800

    xen/arm: Extend the memory overlap check to include bootmodules
    
    As with the static regions defined in bootinfo.reserved_mem,
    the bootmodule regions defined in bootinfo.modules must also not
    overlap with memory regions in either bootinfo.reserved_mem
    or bootinfo.modules.
    
    Therefore, this commit introduces a helper `bootmodules_overlap_check()`
    and uses this helper to extend the check in function
    `check_reserved_regions_overlap()` so that memory regions in
    bootinfo.modules are included. Use `check_reserved_regions_overlap()`
    in `add_boot_module()` to return early if any error occurs.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 6f7d56ccd39940a51282730e121e1c5c64fbf8c0
Author: Henry Wang <Henry.Wang@arm.com>
Date:   Wed Feb 1 10:15:11 2023 +0800

    xen/arm: Add memory overlap check for bootinfo.reserved_mem
    
    As we gain more and more types of static region, all of which are
    defined in bootinfo.reserved_mem, it is necessary to add an overlap
    check for reserved memory regions in Xen, because such a check helps
    users identify device tree misconfiguration at an early stage of
    boot.
    
    Currently we have 3 types of static region, namely
    (1) static memory
    (2) static heap
    (3) static shared memory
    
    (1) and (2) are parsed by the function `device_tree_get_meminfo()` and
    (3) is parsed using its own logic. All of the parsed information for
    these types is stored in `struct meminfo`.
    
    Therefore, to unify the overlap checking logic for all of these types,
    this commit firstly introduces a helper `meminfo_overlap_check()` and
    a function `check_reserved_regions_overlap()` to check if an input
    physical address range is overlapping with the existing memory regions
    defined in bootinfo. After that, use `check_reserved_regions_overlap()`
    in `device_tree_get_meminfo()` to do the overlap check of (1) and (2)
    and replace the original overlap check of (3) with
    `check_reserved_regions_overlap()`.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 9c962e07fbf3d7e2f8f92d8834a60fe7d5600637
Author: Henry Wang <Henry.Wang@arm.com>
Date:   Tue Mar 28 15:13:34 2023 +0800

    xen/arm: Clean-up in p2m_init() and p2m_final_teardown()
    
    With the change in the previous patch, the initial 16 pages in the
    P2M pool are not necessary anymore. Drop them for code simplification.
    
    Also, the call to p2m_teardown() from arch_domain_destroy() is no
    longer necessary since the P2M allocation was moved out of
    arch_domain_create(). Drop the code and the above in-code comment
    mentioning it. Take the opportunity to fix a typo in the original
    in-code comment.
    
    With the above clean-up, the second parameter of p2m_teardown() is
    not needed anymore either. Drop this parameter and the logic related
    to it.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Michal Orzel <michal.orzel@amd.com>

commit 4dbcb0653621f2362e04e43be66197a03de24432
Author: Henry Wang <Henry.Wang@arm.com>
Date:   Tue Mar 28 15:13:33 2023 +0800

    xen/arm: Defer GICv2 CPU interface mapping until the first access
    
    Currently, the mapping of the GICv2 CPU interface is created in
    arch_domain_create(). This causes some trouble in populating and
    freeing the domain's P2M pages pool. For example, 16 P2M pages are
    required by default in p2m_init() to cope with the P2M mapping of the
    8KB GICv2 CPU interface area, and these 16 P2M pages complicate the
    P2M destruction in the failure path of arch_domain_create().
    
    As per the discussion in [1], and similarly to the MMIO access for
    ACPI, this patch defers the GICv2 CPU interface mapping until the
    first MMIO access. This is achieved by moving the GICv2 CPU interface
    mapping code from vgic_v2_domain_init()/vgic_v2_map_resources() to
    the stage-2 data abort trap handling code. The original CPU interface
    size and virtual CPU interface base address are now saved in
    `struct vgic_dist` instead of in local variables of
    vgic_v2_domain_init()/vgic_v2_map_resources().
    
    Take the opportunity to unify the way data is accessed, using the
    existing pointer to struct vgic_dist in vgic_v2_map_resources() for
    the new vGIC.

    Since gicv2_map_hwdom_extra_mappings() happens after domain_create(),
    there is no need to map the extra mappings on demand, so the hwdom
    extra mappings are left untouched.
    
    [1] https://lore.kernel.org/xen-devel/e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org/
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 5ee30222c626fd3224f0b6d59e9856ba77bb89d4
Author: Henry Wang <Henry.Wang@arm.com>
Date:   Tue Mar 28 15:13:32 2023 +0800

    xen/arm: Rename vgic_cpu_base and vgic_dist_base for new vGIC
    
    In a follow-up patch in this series, the GICv2 CPU interface
    mapping will be deferred until the first access in the stage-2
    data abort trap handling code. Since the data abort trap handling
    code is common to the current and the new vGIC implementations,
    it is necessary to unify the variable names in struct vgic_dist
    for these two implementations.

    Therefore, this commit renames vgic_cpu_base and vgic_dist_base
    in the new vGIC to cbase and dbase, so the same data abort trap
    handling code can be used for both vGIC implementations.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 603956258ceb30bbb621e1acdff1efcfd136212a
Author: Henry Wang <Henry.Wang@arm.com>
Date:   Tue Mar 28 15:13:31 2023 +0800

    xen/arm: Reduce redundant clear root pages when teardown p2m
    
    Currently, the p2m for a domain is torn down via two paths:
    (1) The normal path when a domain is destroyed.
    (2) The arch_domain_destroy() in the failure path of domain creation.

    When tearing down the p2m via (1), clearing and cleaning the root
    needs to be done only once rather than on every call of
    p2m_teardown(). If the p2m teardown comes via (2), clearing and
    cleaning the root is unnecessary because the domain was never
    scheduled.

    Therefore, this patch introduces a helper `p2m_clear_root_pages()` to
    do the clearing and cleaning of the root, and moves this logic
    outside of p2m_teardown(). With this movement, the
    `page_list_empty(&p2m->pages)` check can be dropped.
    
    Signed-off-by: Henry Wang <Henry.Wang@arm.com>
    Reviewed-by: Michal Orzel <michal.orzel@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 18c128ba66e6308744850aca96dbffd18f91c29b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 26 14:57:45 2023 +0000

    x86/hvm: Disallow disabling paging in 64bit mode
    
    The Long Mode consistency checks exist to "ensure that the processor does not
    enter an undefined mode or state that results in unpredictable behavior".  APM
    Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
    preventing the OS from trying to exit Long mode while in 64bit mode.  This
    could leave the CPU in Protected Mode with an %rip above the 4G boundary.
    
    Experimentally, AMD CPUs really do permit this state transition.  An OS which
    tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
    to be going on behind the scenes ought to result in sane continued execution.
    
    Furthermore, right from the very outset, the APM Vol2 14.7 "Leaving Long Mode"
    section instructs people to switch to a compatibility mode segment first
    before clearing CR0.PG, which does clear out the upper bits in %rip.  This is
    further backed up by Vol2 Figure 1-6 "Operating Modes of the AMD64
    Architecture".
    
    Either way, this appears to have been a genuine oversight in the AMD64 spec.
    
    Intel, on the other hand, rejects this state transition with #GP.
    
    Between revision 71 (Nov 2019) and 72 (May 2020) of SDM Vol3, a footnote to
    4.1.2 "Paging-Mode Enable" was altered from
    
      If CR4.PCIDE= 1, an attempt to clear CR0.PG causes a general-protection
      exception (#GP); software should clear CR4.PCIDE before attempting to
      disable paging.
    
    to
    
      If the logical processor is in 64-bit mode or if CR4.PCIDE= 1, an attempt to
      clear CR0.PG causes a general-protection exception (#GP). Software should
      transition to compatibility mode and clear CR4.PCIDE before attempting to
      disable paging.
    
    which acknowledges this corner case, but there doesn't appear to be any other
    discussion even in the relevant Long Mode sections.
    
    So it appears that Intel spotted and addressed the corner case in IA-32e mode,
    but were 15 years late to document it.
    
    Xen was written to the AMD spec, and misses the check.  Follow the Intel
    behaviour, because it is more sensible and avoids hitting a VMEntry failure.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8363b1f62e561cfb73073b4b094516fcbbd7020e
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Apr 13 14:23:40 2023 +0200

    automation: switch ADL hw tests to debug build
    
    This should give a lot more useful information in case of a failure, and
    also enable some asserts for extra checks.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 07:36:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 07:36:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521792.810665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJPQ-0004jd-Ix; Mon, 17 Apr 2023 07:36:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521792.810665; Mon, 17 Apr 2023 07:36:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJPQ-0004jW-E7; Mon, 17 Apr 2023 07:36:20 +0000
Received: by outflank-mailman (input) for mailman id 521792;
 Mon, 17 Apr 2023 07:36:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5U/i=AI=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1poJPP-0004jQ-Hu
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 07:36:19 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 899b8447-dcf2-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 09:36:16 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 5b1f17b1804b1-3f16ecaade1so6653225e9.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 00:36:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 899b8447-dcf2-11ed-8611-37d641c3527e
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-18-jens.wiklander@linaro.org> <176f5384-6e35-83bd-5f77-8b31412e8048@xen.org>
In-Reply-To: <176f5384-6e35-83bd-5f77-8b31412e8048@xen.org>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Mon, 17 Apr 2023 09:36:06 +0200
Message-ID: <CAHUa44Gf1aC8D8TBYGk3r=D3Xvs+N-HRpWWmfWk7VgWFKQHKEA@mail.gmail.com>
Subject: Re: [XEN PATCH v8 17/22] xen/arm: ffa: support sharing memory
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
	Marc Bonnici <marc.bonnici@arm.com>, Achin Gupta <achin.gupta@arm.com>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Julien,

On Thu, Apr 13, 2023 at 10:53 PM Julien Grall <julien@xen.org> wrote:
>
> Hi Jens,
>
> On 13/04/2023 08:14, Jens Wiklander wrote:
> >   static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
> >                                         uint8_t msg)
> >   {
> > @@ -781,6 +862,400 @@ out:
> >                resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, resp.a7 & mask);
> >   }
> >
> > +/*
> > + * Gets all page and assigns them to the supplied shared memory object. If
> > + * this function fails then the caller is still expected to call
> > + * put_shm_pages() as a cleanup.
> > + */
> > +static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
> > +                         const struct ffa_address_range *range,
> > +                         uint32_t range_count, unsigned int start_page_idx,
> > +                         unsigned int *last_page_idx)
> > +{
> > +    unsigned int pg_idx = start_page_idx;
> > +    gfn_t gfn;
> > +    unsigned int n;
> > +    unsigned int m;
> > +    p2m_type_t t;
> > +    uint64_t addr;
> > +
> > +    for ( n = 0; n < range_count; n++ )
> > +    {
> > +        for ( m = 0; m < range[n].page_count; m++ )
> > +        {
> > +            if ( pg_idx >= shm->page_count )
> > +                return FFA_RET_INVALID_PARAMETERS;
> > +
> > +            addr = read_atomic(&range[n].address);
>
> I am confused by the use of read_atomic() here. Is this part of the
> guest memory? If so, why isn't page_count also read atomically?
>
> Also, it looks like you will read the same address atomically on every
> iteration. Shouldn't the read be moved just before the loop?

You're right, it is from guest memory and we should use read_atomic()
only once. I'll fix it.

>
> > +            gfn = gaddr_to_gfn(addr + m * FFA_PAGE_SIZE);
> > +            shm->pages[pg_idx] = get_page_from_gfn(d, gfn_x(gfn), &t,
> > +                                                P2M_ALLOC);
> > +            if ( !shm->pages[pg_idx] )
> > +                return FFA_RET_DENIED;
> > +            /* Only normal RAM for now */
> > +            if ( !p2m_is_ram(t) )
> > +                return FFA_RET_DENIED;
> > +            pg_idx++;
> > +        }
> > +    }
> > +
> > +    *last_page_idx = pg_idx;
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static void put_shm_pages(struct ffa_shm_mem *shm)
> > +{
> > +    unsigned int n;
> > +
> > +    for ( n = 0; n < shm->page_count && shm->pages[n]; n++ )
> > +    {
> > +        put_page(shm->pages[n]);
> > +        shm->pages[n] = NULL;
> > +    }
> > +}
> > +
> > +static struct ffa_shm_mem *alloc_ffa_shm_mem(struct ffa_ctx *ctx,
> > +                                             unsigned int page_count)
> > +{
> > +    struct ffa_shm_mem *shm;
> > +
> > +    if ( page_count >= FFA_MAX_SHM_PAGE_COUNT ||
> > +         ctx->shm_count >= FFA_MAX_SHM_COUNT )
> > +        return NULL;
> > +
> > +    shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
> > +    if ( shm )
> > +    {
> > +        ctx->shm_count++;
> > +        shm->page_count = page_count;
> > +    }
> > +
> > +    return shm;
> > +}
> > +
> > +static void free_ffa_shm_mem(struct ffa_ctx *ctx, struct ffa_shm_mem *shm)
> > +{
> > +    if ( shm ) {
>
> Coding style:
>
> if ( ... )
> {
>
> but I would prefer if we remove one level of indentation and use:
>
> if ( !shm )
>    return;

OK, I'll change it.

Thanks,
Jens

>
> > +        ASSERT(ctx->shm_count > 0);
> > +        ctx->shm_count--;
> > +        put_shm_pages(shm);
> > +        xfree(shm);
> > +    }
> > +}
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 07:47:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 07:47:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521797.810675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJa0-0006Ft-Fz; Mon, 17 Apr 2023 07:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521797.810675; Mon, 17 Apr 2023 07:47:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJa0-0006Fm-DA; Mon, 17 Apr 2023 07:47:16 +0000
Received: by outflank-mailman (input) for mailman id 521797;
 Mon, 17 Apr 2023 07:47:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8RDw=AI=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1poJZy-0006FN-Ug
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 07:47:14 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2055.outbound.protection.outlook.com [40.107.13.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 10e2b9c8-dcf4-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 09:47:13 +0200 (CEST)
Received: from DB3PR06CA0015.eurprd06.prod.outlook.com (2603:10a6:8:1::28) by
 PA4PR08MB6207.eurprd08.prod.outlook.com (2603:10a6:102:f2::13) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.45; Mon, 17 Apr 2023 07:46:36 +0000
Received: from DBAEUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:8:1:cafe::7b) by DB3PR06CA0015.outlook.office365.com
 (2603:10a6:8:1::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.46 via Frontend
 Transport; Mon, 17 Apr 2023 07:46:36 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT017.mail.protection.outlook.com (100.127.142.243) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.17 via Frontend Transport; Mon, 17 Apr 2023 07:46:36 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Mon, 17 Apr 2023 07:46:36 +0000
Received: from 5900bcb80821.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B35F3962-BA04-4C96-9487-C730389847F2.1; 
 Mon, 17 Apr 2023 07:46:25 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5900bcb80821.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 17 Apr 2023 07:46:25 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB8883.eurprd08.prod.outlook.com (2603:10a6:10:47e::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 07:46:21 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6298.030; Mon, 17 Apr 2023
 07:46:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10e2b9c8-dcf4-11ed-b21e-6b7b168915f2
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 3/5] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Thread-Topic: [PATCH v7 3/5] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Thread-Index: AQHZcHBP636VZKQlJkWnTIQrYG+QeK8vIIIA
Date: Mon, 17 Apr 2023 07:46:21 +0000
Message-ID: <72C49710-B8D2-444F-A547-4D574D70E2B0@arm.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-4-julien@xen.org>
In-Reply-To: <20230416143211.72227-4-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <0434824A92B8C04E9068805BE43C8FDD@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB8883
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7d87341d-cefe-4994-2732-08db3f17d633
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 07:46:36.2769
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 44667e34-dc54-4fcf-b954-08db3f17df1a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6207



> On 16 Apr 2023, at 15:32, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> In follow-up patches we will need to have part of Xen identity mapped in
> order to safely switch the TTBR.
>
> On some platform, the identity mapping may have to start at 0. If we always
> keep the identity region mapped, NULL pointer dereference would lead to
> access to valid mapping.
>
> It would be possible to relocate Xen to avoid clashing with address 0.
> However the identity mapping is only meant to be used in very limited
> places. Therefore it would be better to keep the identity region invalid
> for most of the time.
>
> Two new external helpers are introduced:
>    - arch_setup_page_tables() will setup the page-tables so it is
>      easy to create the mapping afterwards.
>    - update_identity_mapping() will create/remove the identity mapping
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>

Hi Julien,

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 07:54:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 07:54:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521801.810685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJhL-0007gw-AH; Mon, 17 Apr 2023 07:54:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521801.810685; Mon, 17 Apr 2023 07:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJhL-0007gp-6L; Mon, 17 Apr 2023 07:54:51 +0000
Received: by outflank-mailman (input) for mailman id 521801;
 Mon, 17 Apr 2023 07:54:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poJhJ-0007gj-R6
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 07:54:49 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2063.outbound.protection.outlook.com [40.107.104.63])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1f15b9e3-dcf5-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 09:54:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8268.eurprd04.prod.outlook.com (2603:10a6:10:240::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 07:54:18 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 07:54:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f15b9e3-dcf5-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <658f5267-9943-7c5a-2ae7-f7e40a15301d@suse.com>
Date: Mon, 17 Apr 2023 09:54:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul/fuzz: correct header (symlink) dependencies
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0154.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8268:EE_
X-MS-Office365-Filtering-Correlation-Id: 763a66cf-d671-4fc2-05db-08db3f18f24f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 763a66cf-d671-4fc2-05db-08db3f18f24f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 07:54:18.1948
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8268

A use of $(x86_emulate.h) was introduced (mirroring what the test harness
has) without realizing that no such variable exists here. (Re)name the
variable (to) "private.h", which better expresses what is included which
way.

Note that, because of automatic dependency tracking, no $(x86.h) variable
is needed here, unlike in the test harness: explicit dependencies are
needed only for files which need symlinks created.

Fixes: 9ace97ab9b87 ("x86emul: split off opcode 0f01 handling")
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/fuzz/x86_instruction_emulator/Makefile
+++ b/tools/fuzz/x86_instruction_emulator/Makefile
@@ -35,14 +35,16 @@ OBJS := fuzz-emul.o x86-emulate.o
 OBJS += x86_emulate/0f01.o x86_emulate/0fae.o x86_emulate/0fc7.o
 OBJS += x86_emulate/decode.o x86_emulate/fpu.o
 
+private.h := x86-emulate.h x86_emulate/x86_emulate.h x86_emulate/private.h
+
 x86-emulate.h: x86_emulate/x86_emulate.h
-x86-emulate.o x86-emulate-cov.o: x86-emulate.h x86_emulate/x86_emulate.c x86_emulate/private.h
+x86-emulate.o x86-emulate-cov.o: x86_emulate/x86_emulate.c $(private.h)
 fuzz-emul.o fuzz-emul-cov.o wrappers.o: x86-emulate.h
 
-$(filter x86_emulate/%.o,$(OBJS)): x86_emulate/%.o: x86_emulate/%.c x86_emulate/private.h $(x86_emulate.h)
+$(filter x86_emulate/%.o,$(OBJS)): x86_emulate/%.o: x86_emulate/%.c $(private.h)
 	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) -c -o $@ $< $(APPEND_CFLAGS)
 
-$(patsubst %.o,%-cov.o,$(filter x86_emulate/%.o,$(OBJS))): x86_emulate/%-cov.o: x86_emulate/%.c x86_emulate/private.h $(x86_emulate.h)
+$(patsubst %.o,%-cov.o,$(filter x86_emulate/%.o,$(OBJS))): x86_emulate/%-cov.o: x86_emulate/%.c $(private.h)
 	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) $(GCOV_FLAGS) -c -o $@ $< $(APPEND_CFLAGS)
 
 x86-insn-fuzzer.a: $(OBJS) cpuid.o
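
The pattern the patch adopts, gathering the shared headers into a single Make variable and listing that variable once per rule, can be sketched outside the Xen tree as follows. This is a minimal illustration with hypothetical file names (a.h, b.h, foo.c), not the actual fuzzer sources; GNU make 3.82+ is assumed for .RECIPEPREFIX:

```shell
# Minimal sketch of the fix's pattern: shared headers gathered in one Make
# variable that each compile rule lists once. File names (a.h, b.h, foo.c)
# are hypothetical, not from the Xen tree; GNU make is assumed.
rm -rf /tmp/privh_demo && mkdir -p /tmp/privh_demo && cd /tmp/privh_demo
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
private.h := a.h b.h

%.o: %.c $(private.h)
> @echo building $@
> @touch $@
EOF
touch a.h b.h foo.c
make foo.o   # builds foo.o
make foo.o   # up to date: nothing the rule lists has changed
touch a.h    # any header in $(private.h) going stale...
make foo.o   # ...makes foo.o stale too, so it is rebuilt
```

Touching any header named in the variable makes every object depending on $(private.h) stale, which is exactly why a missing variable (the old, nonexistent $(x86_emulate.h)) silently dropped those dependencies.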


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 07:56:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 07:56:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521806.810694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJiX-0008It-M1; Mon, 17 Apr 2023 07:56:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521806.810694; Mon, 17 Apr 2023 07:56:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJiX-0008Im-JO; Mon, 17 Apr 2023 07:56:05 +0000
Received: by outflank-mailman (input) for mailman id 521806;
 Mon, 17 Apr 2023 07:56:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ncOi=AI=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1poJiX-0008Ie-1F
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 07:56:05 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20620.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4b1e0ba3-dcf5-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 09:56:01 +0200 (CEST)
Received: from MW4PR03CA0009.namprd03.prod.outlook.com (2603:10b6:303:8f::14)
 by SN7PR12MB6690.namprd12.prod.outlook.com (2603:10b6:806:272::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 07:55:58 +0000
Received: from CO1NAM11FT038.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:8f:cafe::14) by MW4PR03CA0009.outlook.office365.com
 (2603:10b6:303:8f::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.46 via Frontend
 Transport; Mon, 17 Apr 2023 07:55:58 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT038.mail.protection.outlook.com (10.13.174.231) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.19 via Frontend Transport; Mon, 17 Apr 2023 07:55:57 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 17 Apr
 2023 02:55:56 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 17 Apr
 2023 02:55:56 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 17 Apr 2023 02:55:55 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b1e0ba3-dcf5-11ed-b21e-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <3a011582-f6e4-9c28-509c-1d552e7ff903@amd.com>
Date: Mon, 17 Apr 2023 09:55:54 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v7 3/5] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: <Luca.Fancellu@arm.com>, Julien Grall <jgrall@amazon.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-4-julien@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230416143211.72227-4-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT038:EE_|SN7PR12MB6690:EE_
X-MS-Office365-Filtering-Correlation-Id: 56076d87-c395-4144-7c6b-08db3f192dcb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 07:55:57.6537
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 56076d87-c395-4144-7c6b-08db3f192dcb
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT038.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB6690


On 16/04/2023 16:32, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> In follow-up patches we will need to have part of Xen identity mapped in
> order to safely switch the TTBR.
> 
> On some platform, the identity mapping may have to start at 0. If we always
> keep the identity region mapped, NULL pointer dereference would lead to
> access to valid mapping.
> 
> It would be possible to relocate Xen to avoid clashing with address 0.
> However the identity mapping is only meant to be used in very limited
> places. Therefore it would be better to keep the identity region invalid
> for most of the time.
> 
> Two new external helpers are introduced:
>     - arch_setup_page_tables() will setup the page-tables so it is
>       easy to create the mapping afterwards.
>     - update_identity_mapping() will create/remove the identity mapping
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 07:57:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 07:57:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521810.810704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJjR-0000PG-Vi; Mon, 17 Apr 2023 07:57:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521810.810704; Mon, 17 Apr 2023 07:57:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poJjR-0000P9-Sr; Mon, 17 Apr 2023 07:57:01 +0000
Received: by outflank-mailman (input) for mailman id 521810;
 Mon, 17 Apr 2023 07:57:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poJjQ-0000OA-H4
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 07:57:00 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20600.outbound.protection.outlook.com
 [2a01:111:f400:7d00::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6d16db00-dcf5-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 09:56:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9302.eurprd04.prod.outlook.com (2603:10a6:102:2b8::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 07:56:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 07:56:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d16db00-dcf5-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b7f3c088-2b80-b965-f307-4a31d72eb89c@suse.com>
Date: Mon, 17 Apr 2023 09:56:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: switch split-off files to SPDX
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0065.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

I should have remembered to adjust the splitting patches accordingly,
but I forgot. While making the adjustment, also correct fpu.c's first
comment line.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_emulate/0f01.c
+++ b/xen/arch/x86/x86_emulate/0f01.c
@@ -1,3 +1,4 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
 /******************************************************************************
  * 0f01.c - helper for x86_emulate.c
  *
@@ -5,19 +6,6 @@
  *
  * Copyright (c) 2005-2007 Keir Fraser
  * Copyright (c) 2005-2007 XenSource Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
 #include "private.h"
--- a/xen/arch/x86/x86_emulate/0fae.c
+++ b/xen/arch/x86/x86_emulate/0fae.c
@@ -1,20 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
 /******************************************************************************
  * 0fae.c - helper for x86_emulate.c
  *
  * Generic x86 (32-bit and 64-bit) instruction decoder and emulator.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
 #include "private.h"
--- a/xen/arch/x86/x86_emulate/0fc7.c
+++ b/xen/arch/x86/x86_emulate/0fc7.c
@@ -1,3 +1,4 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
 /******************************************************************************
  * 0fc7.c - helper for x86_emulate.c
  *
@@ -5,19 +6,6 @@
  *
  * Copyright (c) 2005-2007 Keir Fraser
  * Copyright (c) 2005-2007 XenSource Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
 #include "private.h"
--- a/xen/arch/x86/x86_emulate/blk.c
+++ b/xen/arch/x86/x86_emulate/blk.c
@@ -1,20 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
 /******************************************************************************
  * blk.c - helper for x86_emulate.c
  *
  * Generic x86 (32-bit and 64-bit) instruction decoder and emulator.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
 #include "private.h"
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -1,3 +1,4 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
 /******************************************************************************
  * decode.c - helper for x86_emulate.c
  *
@@ -5,19 +6,6 @@
  *
  * Copyright (c) 2005-2007 Keir Fraser
  * Copyright (c) 2005-2007 XenSource Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
 #include "private.h"
--- a/xen/arch/x86/x86_emulate/fpu.c
+++ b/xen/arch/x86/x86_emulate/fpu.c
@@ -1,23 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
 /******************************************************************************
- * x86_emulate.c
+ * fpu.c
  *
  * Generic x86 (32-bit and 64-bit) instruction decoder and emulator.
  *
  * Copyright (c) 2005-2007 Keir Fraser
  * Copyright (c) 2005-2007 XenSource Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
 #include "private.h"
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -1,21 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
 /******************************************************************************
  * private.h - interface between x86_emulate.c and its helpers
  *
  * Copyright (c) 2005-2007 Keir Fraser
  * Copyright (c) 2005-2007 XenSource Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
 #ifdef __XEN__
--- a/xen/arch/x86/x86_emulate/util-xen.c
+++ b/xen/arch/x86/x86_emulate/util-xen.c
@@ -1,21 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
 /******************************************************************************
  * util-xen.c
  *
  * Generic x86 (32-bit and 64-bit) instruction decoder and emulator hypervisor-
  * only utility functions.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
 #include "private.h"
--- a/xen/arch/x86/x86_emulate/util.c
+++ b/xen/arch/x86/x86_emulate/util.c
@@ -1,21 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
 /******************************************************************************
  * util.c
  *
  * Generic x86 (32-bit and 64-bit) instruction decoder and emulator utility
  * functions.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
 #include "private.h"
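
A conversion this mechanical lends itself to a quick automated check. The sketch below is illustrative only (not part of the patch): the identifier string matches the hunks above, while the temp-directory setup merely stands in for a real source tree. It flags any C file whose first line lacks an SPDX tag:

```shell
# Demonstration tree: one file with the tag from the hunks above, one without.
dir=$(mktemp -d)
printf '/* SPDX-License-Identifier: GPL-2.0-or-later */\n/* body */\n' > "$dir/ok.c"
printf '/* no tag */\n' > "$dir/bad.c"

# SPDX tags belong on the first line, so only line 1 of each file is inspected.
missing=""
for f in "$dir"/*.c; do
    head -n1 "$f" | grep -q 'SPDX-License-Identifier:' || missing="$missing $f"
done
echo "missing:$missing"
```

Pointed at xen/arch/x86/x86_emulate/ in a checked-out tree instead of the demonstration directory, the loop would report any split-off file the conversion missed.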


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 08:06:42 2023
Message-ID: <3a554475-9e74-39be-e03a-aaca2c22b857@suse.com>
Date: Mon, 17 Apr 2023 10:05:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2] x86/hvm: Disallow CR0.PG 1->0 transitions when CS.L=1
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230412183519.2996902-1-andrew.cooper3@citrix.com>
 <20230413150009.3145462-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230413150009.3145462-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 13.04.2023 17:00, Andrew Cooper wrote:
> The Long Mode consistency checks exist to "ensure that the processor does not
> enter an undefined mode or state that results in unpredictable behavior".  APM
> Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
> preventing the OS from trying to exit Long mode while in 64bit mode.  This
> could leave the CPU in Protected Mode with an %rip above the 4G boundary.
> 
> Experimentally, AMD CPUs really do permit this state transition.  An OS which
> tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
> to be going on behind the scenes ought to result in sane continued execution.

For my own understanding, which truncation are you referring to here?
As you're in 1:1 mapped code, %rip can't really be meant. Clearly IDT
and GDT would need to be (re)loaded to point to 32-bit-style tables, so
the only thing left would seem to be %rsp. It's not clear to me whether
after such an illegal mode switch its upper bits would be cleared or
ignored ...

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 08:06:57 2023
Message-ID: <08aea229-4b84-df51-715d-5ce6a0151023@citrix.com>
Date: Mon, 17 Apr 2023 09:06:32 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86emul: switch split-off files to SPDX
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <b7f3c088-2b80-b965-f307-4a31d72eb89c@suse.com>
In-Reply-To: <b7f3c088-2b80-b965-f307-4a31d72eb89c@suse.com>

On 17/04/2023 8:56 am, Jan Beulich wrote:
> I should have remembered to adjust the splitting patches accordingly,
> but I forgot. While making the adjustment also correct fpu.c's first
> comment line.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 08:09:01 2023
Message-ID: <50fb5231-26c4-e580-0499-944402e6a9ab@suse.com>
Date: Mon, 17 Apr 2023 10:08:28 +0200
Subject: Re: [XEN][PATCH v5 13/17] xen/arm: Implement device tree node removal
 functionalities
To: Vikram Garhwal <vikram.garhwal@amd.com>
Cc: sstabellini@kernel.org, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-14-vikram.garhwal@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230411191636.26926-14-vikram.garhwal@amd.com>

On 11.04.2023 21:16, Vikram Garhwal wrote:
> Introduce sysctl XEN_SYSCTL_dt_overlay to remove device-tree nodes added using
> device tree overlay.
> 
> xl dt-overlay remove file.dtbo:
>     Removes all the nodes in a given dtbo.
>     First, it removes IRQ permissions and MMIO accesses. Next, it finds the
>     nodes in dt_host and deletes the device node entries from dt_host.
> 
>     A node is removed only if it is not used by dom0 or by any domio.
> 
> Also, add an overlay_track struct to keep track of nodes added through device
> tree overlay. overlay_track holds dt_host_new, the unflattened form of the
> updated fdt, and the names of the overlay nodes. When a node is removed, we
> also free the memory used by overlay_track for that particular overlay node.
> 
> Nested overlay removal is supported in a sequential manner only, i.e. if
> overlay_child nests under overlay_parent, it is assumed that the user first
> removes overlay_child and then removes overlay_parent.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/arch/arm/sysctl.c        |  16 +-
>  xen/common/Makefile          |   1 +
>  xen/common/dt_overlay.c      | 415 +++++++++++++++++++++++++++++++++++
>  xen/include/public/sysctl.h  |  24 ++
>  xen/include/xen/dt_overlay.h |  59 +++++
>  5 files changed, 514 insertions(+), 1 deletion(-)
>  create mode 100644 xen/common/dt_overlay.c
>  create mode 100644 xen/include/xen/dt_overlay.h

Can new files please use dashes in preference to underscores in their names?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 08:09:27 2023
Message-ID: <cc0b5260-d551-b623-3eaa-50de22c2d124@citrix.com>
Date: Mon, 17 Apr 2023 09:09:10 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86emul/fuzz: correct header (symlink) dependencies
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <658f5267-9943-7c5a-2ae7-f7e40a15301d@suse.com>
In-Reply-To: <658f5267-9943-7c5a-2ae7-f7e40a15301d@suse.com>

On 17/04/2023 8:54 am, Jan Beulich wrote:
> A use of $(x86_emulate.h) was introduced (mirroring what the testharness
> has) without realizing that no such variable exists here. (Re)name the
> variable (to) "private.h", which better expresses what is included which
> way.
>
> Note that because of automatic dependencies tracking, unlike in the test
> harness no $(x86.h) variable is needed here - we solely need explicit
> dependencies for files which need symlinks created.
>
> Fixes: 9ace97ab9b87 ("x86emul: split off opcode 0f01 handling")
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 08:20:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 08:20:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521834.810754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poK66-0006eA-97; Mon, 17 Apr 2023 08:20:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521834.810754; Mon, 17 Apr 2023 08:20:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poK66-0006e3-6S; Mon, 17 Apr 2023 08:20:26 +0000
Received: by outflank-mailman (input) for mailman id 521834;
 Mon, 17 Apr 2023 08:20:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poK64-0006dx-4r
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 08:20:24 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on20626.outbound.protection.outlook.com
 [2a01:111:f400:fe13::626])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b2153085-dcf8-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 10:20:22 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GVXPR04MB9926.eurprd04.prod.outlook.com (2603:10a6:150:11a::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 08:20:20 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 08:20:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2153085-dcf8-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=H5HTmH62AVKaxFpCOnt9O6LDp3/PAzmeF7r0Is6OvS9m7Nxg1W5i1wzGyzmAMEfW/DPZ7HDiaCAcTXWwz2SSSx/4v8hNea2oxv1ktt9ETuAmdsSaUPy5Vvqayg3cCjYgAKNXtpBY3ileJ3gA2koCLeQdPjfnmfedLl/AD/nTBEtSrLWiW7zT/bbzdBOhp4I0RUtnQi/nSknXubRDcEKlGY9eSN/i2Y5nlqQyefcjgy1AguNpTIA54bTqCW2CebSN4bvApFOcDaOsp5/9dxjWtBL+6ZOisvyCzMKUlpPkteqLB0wqhAt8tf/mEYzafFa1Fb4caTLrtZxtlQOpw6YQTw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zlbgpBd1vmkvi6OkFSdxXkSGwA24cB9Fe9vKuAhhh8Q=;
 b=jFuU7qRIttJKI/JvgWoWSe7YNLyxqbgnmsBf2ZTxAPVyVnpjul9+lDsWhXDGXe5YiPJZdieXR/9MKfBJKiqCOu0QsMFmViW8J2wsnOSKjn61vA1QaQZBw1wA6ap7Ezq4Sp/qGwxU6vsv9cwaVmzKvcpYitx3Yi1ASx/TQm2gcj0j1A/BGuhBi4kBr02A1sTPlqMD/hMDe5YIJ5MGy36iT9oDmH94gHbjbGDR15Yh+okTleQz5xLrqyOfsT29GAIA/4y96DiWj0rmeXlasbKZx82BdEaGlBT+ylwx60Dv6OgPHw6nvx790EanFV14Cli6laoeF0yb/yebN8Z+iwQWNw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zlbgpBd1vmkvi6OkFSdxXkSGwA24cB9Fe9vKuAhhh8Q=;
 b=cU1c+R8WunX5Fg7XWyFGISGy1cj8p8BnN28J6cEQpPZxbhUVwJ6TKcEIR5icA542v7bQKre2nhtYE3nZX7V95itxEk+O2KpYReiyIb+Jz/sW0Oug3kfhZH2y3Uq5Wi++tGUoMZzRfDVpAbIfKQuZu6KzZ9JcIpbIlcaVuA64tSJ3cTVnoMMqaflM2Hp6gNJh9fvb8VVYJQRAYTuqBt2dHkpkacCoepeWsi+bm04JL67K+HGtDij2bsdgqCCEyGqqLOsOfGzkWvYiQMLCV8Yh5bvqvEJ7675sHfZg1jPK8surltyaW3cxqOI3cSM98gasm2s6BX4IZ8hUbV+wjMCJHg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9491746d-2216-06e2-f0c2-e7031267b901@suse.com>
Date: Mon, 17 Apr 2023 10:20:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: Gitlab status on older branches (Inc some 4.18 blockers)
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "committers@xenproject.org" <committers@xenproject.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Michal Orzel <Michal.Orzel@arm.com>, Doug Goldstein <cardoe@cardoe.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <193206bf-76a0-818d-8fa8-1886a15ad5e5@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <193206bf-76a0-818d-8fa8-1886a15ad5e5@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0047.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GVXPR04MB9926:EE_
X-MS-Office365-Filtering-Correlation-Id: 43a743b3-6d92-499f-9abd-08db3f1c9524
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 43a743b3-6d92-499f-9abd-08db3f1c9524
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 08:20:19.8327
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: u2JXEtuObe7f/oRD+HfUyyN9kMwiUJ3Xg9xWA+3Szd66ejN3LMe6kCPJXrem/9ma7VY6JwCXfJJRhSCGP3n+SQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR04MB9926

On 12.04.2023 19:08, Andrew Cooper wrote:
> Hello,
> 
> I've done various backports to staging-4.14 and later trying to improve
> the state of Gitlab testing.
> 
> The good news is that 4.16 and 4.17 now pass.  The bad news is that
> there are still bugs which need fixing, but let's start with the older
> branches.
> 
> Also, I was forced to backport an update to SeaBIOS 1.16 to all branches
> in order to fix compile failures in build environments we supported at
> the time of each of these releases.  I honestly don't know what we were
> failing to do testing-wise back then, but whatever we missed ought to
> have been release blockers.

I find this odd. Even 1.14.0 post-dates 4.14.0, so I don't see what we
could have updated to at the time. Furthermore I believe we never
updated SeaBIOS after the initial major release, so to be honest I
consider the above insufficient justification (i.e. lack of detail).

That said, SeaBIOS changes over the last couple of years look to have
been pretty minor, so hopefully we indeed won't run into issues from
doing this backport. We may even want to reconsider what we've done so
far, towards updating for dot-releases as well.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 08:24:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 08:24:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521838.810765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poK9c-0007Ef-O2; Mon, 17 Apr 2023 08:24:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521838.810765; Mon, 17 Apr 2023 08:24:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poK9c-0007EY-LH; Mon, 17 Apr 2023 08:24:04 +0000
Received: by outflank-mailman (input) for mailman id 521838;
 Mon, 17 Apr 2023 08:24:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poK9b-0007ES-UM
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 08:24:03 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on061e.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 35ac0ff7-dcf9-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 10:24:02 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6966.eurprd04.prod.outlook.com (2603:10a6:20b:109::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 08:24:01 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 08:24:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35ac0ff7-dcf9-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Dxo8CePUE0FiE1pO6ZD7el4ktm2fYiIy6jVrp3h1t+MhnIpQO0G9XdItx6pj0J0hKMmmfdMEefcZICeQsUdkrdNgk4jHmrK53ZWOLHg3gRNNKem0ydtucVaWFZVf09d0xgNN3WvmGSC+eDilhzYdODsAPMiVP4o2SN92RYcZUDP1rTKmfFtwbX8geXz7Y/eb1duAxbsbn6gHmkTbYuH1FClSSb5nNyLSvzsw5Gyq/b3kCZrGKf/VMDzAml/z5pag6UINPekFQNw91+/urqfb8U/8Br7bn/CGwdIa/kET/yP0U8K6MKK9C6NCKnGjv+Gua//L7cZ0llCQGHERHzGYzQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=tAfTZEBWkEqLVMBcODrgy7v/NgE7ZshQhgeO1XRx2Z4=;
 b=HFkRE1uaCAYcOcidCBBK/WED0X7XEZI1rWcUrRbBnCMyDhSCsUriCU3sPB+DFSi7nHATYKkQCFLxhNBlZSVmAd/pkaOvOeDoma5bkAUWWoVDay4CdZSZ2xck4i5eIHv/MqV++2ycLXlFyzhNu1pLidl+dkSJMCccDfHeDBgWb+BhDv4arSxlikV87+jgxVplABnsNKMTL6sH6Pazwd1Ys7w10rYOOlMJWEN0V3nUBda6O8TSGU8bwnNgrx7uEB3RAsVcgsJWOOG5Q6v8MFMIWTNE194rbXgSTe9sBD5JstrrajYGMrm2EVoY4qfqHAclAHmszwiR6OGxBTAqQIhRpw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tAfTZEBWkEqLVMBcODrgy7v/NgE7ZshQhgeO1XRx2Z4=;
 b=xduZ5NmFVzKg3SyN5DpjFgXY/BKxyRUD7JL9fG4esFFE/W2DUG/W9JoVoPKNhRi7nrH4F7IV4jHuNbEKdNTrUuuxlvkJkX/BMiMLzyQx4/vSdcRc3S/pcSaRnD/hJJzTvfseBwZMnO0egEtWaGwLNIk67adGvfEJkCft+cpnzAWosGuBtXhm6WdtM9ha+xuhPgd6LkgyBZ/2LI0jCp018ilnNbQjEMTymvXbl1CjG8I4jgcXUaVdDf9DEsU4jV7NmibkTsV2jp0cKUWEG7Vna2FCUx4ZmjtC9z8BUhi+2P9HQMOVxh5CYpB0wIgBRWF1XyTpmie3XAuwiOwPgm7EtQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <92e6ea3e-a381-a77e-f909-bf65d009647f@suse.com>
Date: Mon, 17 Apr 2023 10:23:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: HEADS UP: re-adding the armhf boxes to osstest
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <ZDkmu0mgy23ypaL7@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZDkmu0mgy23ypaL7@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0147.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB6966:EE_
X-MS-Office365-Filtering-Correlation-Id: 6bc46b4d-7840-4084-55e7-08db3f1d18fb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6bc46b4d-7840-4084-55e7-08db3f1d18fb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 08:24:01.0784
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vU7KkwKeb7/rTT6SIKU12dmVSE44dsMONqzmNhMWdjKpmuaLzPfl4VAJd2kcV/u2XbrfPXzTLq80pH9iVkgXHw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB6966

On 14.04.2023 12:11, Roger Pau Monné wrote:
> We finally had the broken PDU replaced in the osstest colo, and the
> armhf boxes are operational again (those are the arndales and the
> cubietrucks).
> 
> I've run some ad-hoc tests on them and they look fine. I plan to bless
> them before the end of the day.
> 
> As usual, keep an eye on any failures that could be caused by the
> newly added boxes.

Sadly recent flights look to be reporting them as broken again.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 08:35:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 08:35:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521842.810775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poKKy-0000Kp-SH; Mon, 17 Apr 2023 08:35:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521842.810775; Mon, 17 Apr 2023 08:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poKKy-0000Ki-Oc; Mon, 17 Apr 2023 08:35:48 +0000
Received: by outflank-mailman (input) for mailman id 521842;
 Mon, 17 Apr 2023 08:35:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yHEP=AI=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1poKKx-0000Kc-Px
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 08:35:48 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d80f82e5-dcfa-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 10:35:44 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id AF6871F381;
 Mon, 17 Apr 2023 08:35:43 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9B2E61390E;
 Mon, 17 Apr 2023 08:35:42 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id twdyJN4EPWS6RgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 17 Apr 2023 08:35:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d80f82e5-dcfa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1681720543; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=wX8FGfd18x+vkI4jWdlkxA72fLHxzquGU9rbJT7Ctpg=;
	b=e+W3tgVC6O0FR1xGnAz1x90UnNMNBbbuE73nNbfwddytLa9SBziuMyIzdXSM8oUCZblj6d
	2eXD0mn0b+WQJuQxB5FCvnGbqN+zDk0eXHoR6TrfUAHTCJK4TaR1G4xbdvYq8rUm/01H7Y
	nxaA2SEZbGwqFFFBUgXIWBdgvKUp5/4=
Message-ID: <cda9bf38-0c40-4658-65aa-fbca1b3577e8@suse.com>
Date: Mon, 17 Apr 2023 10:35:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <20230414225551.858160935@linutronix.de>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230414225551.858160935@linutronix.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------mDBsVbb05nWuDwkk0s0aQs0o"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------mDBsVbb05nWuDwkk0s0aQs0o
Content-Type: multipart/mixed; boundary="------------IHia8Et000DpNvtEgqYnfJT6";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
Message-ID: <cda9bf38-0c40-4658-65aa-fbca1b3577e8@suse.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
References: <20230414225551.858160935@linutronix.de>
In-Reply-To: <20230414225551.858160935@linutronix.de>

--------------IHia8Et000DpNvtEgqYnfJT6
Content-Type: multipart/mixed; boundary="------------IY5B8XYicrkm5h03mzx6NQ1O"

--------------IY5B8XYicrkm5h03mzx6NQ1O
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.04.23 01:44, Thomas Gleixner wrote:
> Hi!
> 
> This is a complete rework of the parallel bringup patch series (V17)
> 
>      https://lore.kernel.org/lkml/20230328195758.1049469-1-usama.arif@bytedance.com
> 
> to address the issues which were discovered in review:
> 
>   1) The X86 microcode loader serialization requirement
> 
>      https://lore.kernel.org/lkml/87v8iirxun.ffs@tglx
> 
>      Microcode loading on HT enabled X86 CPUs requires that the microcode is
>      loaded on the primary thread. The sibling thread(s) must be in a
>      quiescent state: either looping in a place which is aware of potential
>      changes by the microcode update (see late loading) or in a fully
>      quiescent state, i.e. waiting for INIT/SIPI.
> 
>      This is required by hardware/firmware on Intel. Aside of that it's a
>      vendor independent software correctness issue. Assume the following
>      sequence:
> 
>      CPU1.0                  CPU1.1
>                              CPUID($A)
>      Load microcode.
>      Changes CPUID($A, $B)
>                              CPUID($B)
> 
>      CPU1.1 makes a decision on $A and $B which might be inconsistent due
>      to the microcode update.
> 
>      The solution for this is to bring up the primary threads first and
>      after that the siblings. Loading microcode on the siblings is a NOOP
>      on Intel and on AMD it is guaranteed to only modify thread local state.
> 
>      This ensures that the APs can load microcode before reaching the alive
>      synchronization point w/o doing any further x86 specific
>      synchronization between the core siblings.
> 
>   2) The general design issues discussed in V16
> 
>      https://lore.kernel.org/lkml/87pm8y6yme.ffs@tglx
> 
>      The previous parallel bringup patches just glued this mechanism into
>      the existing code without a deeper analysis of the synchronization
>      mechanisms and without generalizing it so that the control logic is
>      mostly in the core code and not made an architecture specific tinker
>      space.
> 
>      Much of that had been pointed out two years ago already in the
>      discussions about the early versions of parallel bringup.
> 
> 
> The series is based on:
> 
>    git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip x86/apic
> 
> and also available from git:
> 
>    git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git hotplug
> 
> 
> Background
> ----------
> 
> The reason why people are interested in parallel bringup is to shorten
> the (kexec) reboot time of cloud servers to reduce the downtime of the
> VM tenants. There are obviously other interesting use cases for this,
> like VM startup time, embedded devices...
> 
> The current fully serialized bringup does the following per AP:
> 
>      1) Prepare callbacks (allocate, initialize, create threads)
>      2) Kick the AP alive (e.g. INIT/SIPI on x86)
>      3) Wait for the AP to report alive state
>      4) Let the AP continue through the atomic bringup
>      5) Let the AP run the threaded bringup to full online state
> 
> There are two significant delays:
> 
>      #3 The time for an AP to report alive state in start_secondary() on x86
>         has been measured in the range between 350us and 3.5ms depending on
>         vendor and CPU type, BIOS microcode size etc.
> 
>      #4 The atomic bringup does the microcode update. This has been measured
>         to take up to ~8ms on the primary threads depending on the microcode
>         patch size to apply.
> 
> On a two socket SKL server with 56 cores (112 threads) the boot CPU spends
> on current mainline about 800ms busy waiting for the APs to come up and
> apply microcode. That's more than 80% of the actual onlining procedure.
> 
> By splitting the actual bringup mechanism into two parts this can be
> reduced to waiting for the first AP to report alive, or, if the system is
> large enough, the first AP is already waiting when the boot CPU finished
> the wake-up of the last AP.
> 
> 
> The actual solution comes in several parts
> ------------------------------------------
> 
>   1) [P 1-2] General cleanups (init annotations, kernel doc...)
> 
>   2) [P 3] The obvious
> 
>      Avoid pointless delay calibration when the TSC is synchronized across
>      sockets. That removes a whopping 100ms delay for the first CPU of a
>      socket. This is an improvement independent of parallel bringup and had
>      been discussed two years ago already.
> 
>   2) [P 3-6] Removal of the CPU0 hotplug hack.
> 
>      This was added 11 years ago with the promise to make this a real
>      hardware mechanism, but that never materialized. As physical CPU
>      hotplug is not really supported and the physical unplugging of CPU0
>      never materialized, there is no reason to keep this cruft around. It's
>      just maintenance ballast for no value and the removal makes
>      implementing the parallel bringup feature way simpler.
> 
>   3) [P 7-16] Cleanup of the existing bringup mechanism:
> 
>       a) Code reorganisation so that the general hotplug specific code is
>          in smpboot.c and not sprinkled all over the place
> 
>       b) Decouple MTRR/PAT initialization from smp_callout_mask to prepare
>          for replacing that mask with a hotplug core code synchronization
>          mechanism.
> 
>       c) Make TSC synchronization function call based so that the control
>          CPU does not have to busy wait for nothing if synchronization is
>          not required.
> 
>       d) Remove the smp_callin_mask synchronization point as it's no longer
>          required due to #3c.
> 
>       e) Rework the sparse_irq_lock held region in the core code so that the
>          next polling synchronization point in the x86 code can be removed
>          too.
> 
>       f) Due to #3e it's no longer required to spin wait for the AP to set
>          its online bit. Remove wait_cpu_online() and the XENPV
>          counterpart. So the control CPU can directly wait for the online
>          idle completion by the AP, which frees the control CPU up for
>          other work.
> 
>       This reduces the synchronization points in the x86 code to one, which
>       is the AP alive one. This synchronization will be moved to core
>       infrastructure in the next section.
> 
>   4) [P 17-27] Replace the disconnected CPU state tracking
> 
>      The extra CPU state tracking which is used by a few architectures is
>      completely separate from the CPU hotplug core code.
> 
>      Replacing it by a variant integrated in the core hotplug machinery
>      allows to reduce architecture specific code and provides a generic
>      synchronization mechanism for (parallel) CPU bringup/teardown.
> 
>      - Convert x86 over and replace the AP alive synchronization on x86
>        with the core variant, which removes the remaining x86 hotplug
>        synchronization masks.
> 
>      - Convert the other architectures' usage and remove the old interface
>        and code.
> 
>   5) [P 28-30] Split the bringup into two steps
> 
>      The first step invokes the wakeup function on the BP, e.g. SIPI/STARTUP
>      on x86. The second one waits on the BP for the AP to report alive and
>      releases it for the complete onlining.
> 
>      As the hotplug state machine allows partial bringup, this allows later
>      to kick all APs alive in a first iteration and then bring them up
>      completely one by one afterwards.
> 
>   6) [P 31] Switch the primary thread detection to a cpumask
> 
>      This makes the parallel bringup a simple cpumask based mechanism
>      without tons of conditionals and checks for primary threads.
> 
>   7) [P 32] Implement the parallel bringup core code
> 
>      The parallel bringup looks like this:
> 
>        1) Bring up the primary SMT threads to the CPUHP_KICK_AP_ALIVE step
>           one by one
> 
>        2) Bring up the primary SMT threads to the CPUHP_ONLINE step one by
>           one
> 
>        3) Bring up the secondary SMT threads to the CPUHP_KICK_AP_ALIVE
>           step one by one
> 
>        4) Bring up the secondary SMT threads to the CPUHP_ONLINE
>           step one by one
> 
>      In case SMT is not supported this is obviously reduced to steps #1
>      and #2.
> 
>   8) [P 33-37] Prepare X86 for parallel bringup and enable it
> 
> 
> Caveats
> -------
> 
> The non X86 changes have all been compile tested. Boot and runtime
> testing has only been done on a few real hardware platforms and qemu as
> available. That definitely needs some help from the people who have
> these systems at their fingertips.
> 
> 
> Results and analysis
> --------------------
> 
> Here are numbers for a dual socket SKL 56 cores / 112 threads machine. All
> numbers are in milliseconds. The time measured is the time which the
> cpu_up() call takes for each CPU and phase. It's not exact as the system is
> already scheduling and handling interrupts and soft interrupts, which is
> obviously skewing the picture slightly.
> 
> Baseline tip tree x86/apic branch:
> 
>                 total      avg/CPU          min          max
> total  :      912.081        8.217        3.720      113.271
> 
> The max of 100ms is due to the silly delay calibration for the second
> socket which takes 100ms and was eliminated first. Also the other initial
> cleanups and improvements take some time away.
> 
> So the real baseline becomes:
> 
>                 total      avg/CPU          min          max
> total  :      785.960        7.081        3.752       36.098
> 
> The max here is on the first CPU of the second socket. 20ms of that is due
> to TSC synchronization and an extra 2ms to react on the SIPI.
> 
> With parallel bootup enabled this becomes:
> 
>                 total      avg/CPU          min          max
> prepare:       39.108        0.352        0.238        0.883
> online :       45.166        0.407        0.170       20.357
> total  :       84.274        0.759        0.408       21.240
> 
> That's a factor ~9.3 reduction on average.
> 
> Looking at the 27 primary threads of socket 0, this becomes even more
> interesting:
> 
>                 total      avg/CPU          min          max
> total  :      325.764       12.065       11.981       14.125
> 
> versus:
> 
>                 total      avg/CPU          min          max
> prepare:        8.945        0.331        0.238        0.834
> online :        4.830        0.179        0.170        0.212
> total  :       13.775        0.510        0.408        1.046
> 
> So the reduction factor is ~23.5 here. That's mostly because the 20ms TSC
> sync is not skewing the picture.
> 
> For all 55 primaries, i.e. with the 20ms TSC sync extra for socket 1, this
> becomes:
> 
>                 total      avg/CPU          min          max
> total  :      685.489       12.463       11.975       36.098
> 
> versus:
> 
>                 total      avg/CPU          min          max
> prepare:       19.080        0.353        0.238        0.883
> online :       30.283        0.561        0.170       20.357
> total  :       49.363        0.914        0.408       21.240
> 
> The TSC sync reduces the win to a factor of ~13.8.
> 
> With 'tsc=reliable' on the command line the socket sync is disabled, which
> brings it back to the socket 0 numbers:
> 
>                 total      avg/CPU          min          max
> prepare:       18.970        0.351        0.231        0.874
> online :       10.328        0.191        0.169        0.358
> total  :       29.298        0.543        0.400        1.232
> 
> Now looking at the secondary threads only:
> 
>                 total      avg/CPU          min          max
> total  :      100.471        1.794        0.375        4.745
> 
> versus:
> 
>                 total      avg/CPU          min          max
> prepare:       19.753        0.353        0.257        0.512
> online :       14.671        0.262        0.179        3.461
> total  :       34.424        0.615        0.436        3.973
> 
> Still a factor of ~3.
> 
> The average on the secondaries for the serialized bringup is significantly
> lower than for the primaries because the SIPI response time is shorter and
> the microcode update takes no time.
> 
> This varies wildly with the system: whether the microcode in the BIOS is
> already up to date, how big the microcode patch is and how long the
> INIT/SIPI response time is. On an AMD Zen3 machine the INIT/SIPI response
> time is amazingly fast (350us), but then it lacks TSC_ADJUST and does a two
> millisecond TSC sync test for _every_ AP. All of this sucks...
> 
> 
> Possible further enhancements
> -----------------------------
> 
> It's definitely worthwhile to look into reducing the cross socket TSC sync
> test time. It's probably safe enough to use 5ms or even 2ms instead of 20ms
> on systems with TSC_ADJUST and a few other 'TSC is sane' indicators. Moving
> it out of the hotplug path is eventually possible, but that needs some deep
> thoughts.
> 
> Let's take the TSC sync out of the picture by adding 'tsc=reliable' to the
> kernel command line. So the bringup of 111 APs takes:
> 
>                 total      avg/CPU          min          max
> prepare:       38.936        0.351        0.231        0.874
> online :       25.231        0.227        0.169        3.465
> total  :       64.167        0.578        0.400        4.339
> 
> Some of the outliers are not necessarily in the state callbacks as the
> system is already scheduling and handles interrupts and soft
> interrupts. Haven't analyzed that yet in detail.
> 
> In the prepare stage, which runs on the control CPU, the larger steps are:
> 
>    smpcfd:prepare           16us  avg/CPU
>    threads:prepare          98us  avg/CPU
>    workqueue:prepare        43us  avg/CPU
>    trace/RB:prepare        135us  avg/CPU
> 
> The trace ringbuffer initialization allocates 354 pages and 354 control
> structures one by one. That probably should allocate a large page and an
> array of control structures and work from there. I'm sure that would reduce
> this significantly. Steven?
> 
> smpcfd does just a percpu allocation. No idea why that takes that long.
> 
> Vs. threads and workqueues: David thought about spreading out the
> preparation work and doing it really in parallel. That's a nice idea, but
> the threads and workqueue prepare steps are self serializing. The workqueue
> one has a global mutex, and aside of that both steps create kernel threads
> which implicitly serialize on kthreadd. alloc_percpu(), which is used by
> smpcfd:prepare, is also globally serialized.
> 
> The rest of the prepare steps are pretty much in the single digit
> microseconds range.
> 
> On the AP side it should be possible to move some of the initialization
> steps before the alive synchronization point, but that really needs a lot
> of analysis whether the functions are safe to invoke that early and outside
> of the cpu_hotplug_lock held region for the case of two stage parallel
> bringup; see below.
> 
> The largest part is:
> 
>      identify_secondary_cpu()          99us avg/CPU
> 
>      Inside of identify_secondary_cpu() the largest offender is:
> 
>        mcheck_init()                   73us avg/CPU
> 
>      This part is definitely worth looking at, to see whether it can be at
>      least partially moved to the early startup code before the alive
>      synchronization point. There's a lot of deep analysis required and
>      ideally we just rewrite the whole CPUID evaluation trainwreck
>      completely.
> 
> The rest of the AP side is low single digit microseconds, except for:
> 
>      perf/x86:starting                 14us avg/CPU
> 
>      smpboot/threads:online            13us avg/CPU
>      workqueue:online                  17us avg/CPU
>      mm/vmstat:online                  17us avg/CPU
>      sched:active                      30us avg/CPU
> 
> sched:active is special. Onlining the first secondary HT thread on the
> second socket creates a 3.2ms outlier which skews the whole picture. That's
> caused by enabling the static key sched_smt_present, which patches the
> world and some more. For all other APs this is really in the 1us range.
> This definitely could be postponed during bootup, like the scheduler domain
> rebuild is done after the bringup. But that's still fully serialized and
> single threaded and obviously could be done later in the context of async
> parallel init. It's unclear why this is different with the fully serialized
> bringup, where it takes significantly less time, but that's something which
> needs to be investigated.
> 
> 
> Is truly parallel bringup feasible?
> -----------------------------------
> 
> In theory yes, realistically no. Why?
> 
>     1) The preparation phase
> 
>        Allocating memory and creating threads for the to be brought up CPU
>        must obviously happen on an already online CPU.
> 
>        While it would be possible to bring up a subset of CPUs first and let
>        them do the preparation steps for groups of still offline CPUs
>        concurrently, the actual benefit of doing so is dubious.
> 
>        The prime example is kernel thread creation, which is implicitly
>        serialized on kthreadd.
> 
>        A simple experiment shows that 4 concurrent workers on 4 different
>        CPUs, where each is creating 14 * 5 = 70 kernel threads, are 5%
>        slower than a single worker creating 4 * 14 * 5 = 280 threads.
> 
>        So we'd need to have multiple kthreadd instances to handle that,
>        which would then serialize on the tasklist lock and other things.
> 
>        That aside, the preparation phase is also affected by the problem
>        below.
> 
>     2) Assumptions about hotplug serialization
> 
>        a) There are quite some assumptions about CPU bringup being fully
>           serialized across state transitions. A lot of state callbacks
>           rely on that and would require local locking.
> 
>           Adding that local locking is surely possible, but that has
>           several downsides:
> 
>            - It adds complexity and makes it harder for developers to get
>              this correct. The subtle bugs resulting out of that are going
>              to be interesting.
> 
>            - Fine grained locking has a charm, but only if the time spent
>              on the actual work is larger than the time required for
>              serialization and synchronization.
> 
>              Serializing a callback which takes less than a microsecond and
>              then having a large number of CPUs contending on the lock will
>              not make it any faster at all. That's a well known issue of
>              parallelizing, and neither made up nor kernel specific.
> 
>        b) Some operations definitely require to be protected by the
>           cpu_hotplug_lock, especially those which affect cpumasks, as the
>           masks are guaranteed to be stable in a cpus_read_lock()'ed region.
> 
>           As this lock cannot be taken in atomic contexts, it's required
>           that the control CPU holds the lock write locked across these
>           state transitions. And no, we are not making this a spinlock just
>           for that, and we even can't.
> 
>           Just slapping a lock into the x86 specific part of the cpumask
>           update function does not solve anything. The relevant patch in
>           V17 is completely useless as it only serializes the actual
>           cpumask/map modifications, but all read side users are hosed if
>           the update would be moved before the alive synchronization point,
>           i.e. into a non hotplug lock protected region.
> 
>           Even if the hotplug lock would be held across the whole parallel
>           bringup operation, then this would still expose all usage of
>           these masks and maps in the actual hotplug state callbacks to
>           concurrent modifications.
> 
>           And no, we are not going to expose an architecture specific raw
>           spinlock to the hotplug state callbacks, especially not to those
>           in generic code.
> 
>        c) Some cpus_read_lock()'ed regions also expect that there is no CPU
>           state transition happening which would modify their local
>           state. This would again require local serialization.
> 
>      3) The amount of work and churn:
> 
>         - Analyze the per architecture low level startup functions plus
>           their descendant functions and make them ready for concurrency if
>           necessary.
> 
>         - Analyze ~300 hotplug state callbacks and their descendant
>           functions and make them ready for concurrency if necessary.
> 
>         - Analyze all cpus_read_lock()'ed regions and address their
>           requirements.
> 
>         - Rewrite the core code to handle the cpu_hotplug_lock requirements
>           only in distinct phases of the state machine.
> 
>         - Rewrite the core code to handle state callback failure and the
>           related rollback in the context of the new rules.
> 
>         - ...
> 
>      Even if some people are dedicated enough to do that, it's very
>      questionable whether the resulting complexity is justified.
> 
>      We've spent a serious amount of time sanitizing hotplug and bringing
>      it into a state where it is correct. This also made it reasonably
>      simple for developers to implement hotplug state callbacks without
>      having to become hotplug experts.
> 
>      Breaking this completely up will result in a flood of hard to diagnose
>      subtle issues for sure. Who is going to deal with them?
> 
>      The experience with this series so far does not make me comfortable
>      about that thought in any way.
> 
> 
> Summary
> -------
> 
> The obvious and low hanging fruits have to be solved first:
> 
>    - The CPUID evaluation and related setup mechanisms
> 
>    - The trace/ringbuffer oddity
> 
>    - The sched:active oddity for the first sibling on the second socket
> 
>    - Some other expensive things which I'm not seeing in my test setup due
>      to lack of hardware or configuration.
> 
> Anything else is pretty much wishful thinking in my opinion.
> 
>    To be clear: I'm not standing in the way if there is a proper solution,
>    but that requires respecting the basic engineering rules:
> 
>      1) Correctness first
>      2) Keep it maintainable
>      3) Keep it simple
> 
>    So far this stuff failed already at #1.
> 
> I completely understand why this is important for cloud people, but
> the real question to ask here is what the actual requirements are.
> 
>    As far as I understand, the main goal is to make a (kexec) reboot
>    almost invisible to VM tenants.
> 
>    Now let's look at how this works:
> 
>       A) Freeze VMs and persist state
>       B) kexec into the new kernel
>       C) Restore VMs from persistent memory
>       D) Thaw VMs
> 
>    So the key problem is how long it takes to get from #B to #C and finally
>    to #D.
> 
>    As far as I understand, #C takes a serious amount of time and cannot be
>    parallelized for whatever reasons.
> 
>    At the same time the number of online CPUs required to restore the VMs'
>    state is less than the number of online CPUs required to actually
>    operate them in #D.
> 
>    That means it would be good enough to return to userspace with a
>    limited number of online CPUs as fast as possible. A certain amount of
>    CPUs are going to be busy with restoring the VMs' state, i.e. one CPU
>    per VM. Some remaining non-busy CPU can bring up the rest of the system
>    and the APs in order to be functional for #D, i.e. the restore of VM
>    operation.
> 
>    Trying to optimize this purely in kernel space by adding complexity of
>    dubious value is simply bogus in my opinion.
> 
>    It's already possible today to limit the number of CPUs which are
>    initially onlined and to online the rest later from user space.
> 
>    There are two issues there:
> 
>      a) The death by MCE broadcast problem
> 
>         Quite some (contemporary) x86 CPU generations are affected by
>         this:
> 
>           - MCE can be broadcasted to all CPUs and not only issued locally
>             to the CPU which triggered it.
> 
>           - Any CPU which has CR4.MCE == 0, even if it sits in a wait
>             for INIT/SIPI state, will cause an immediate shutdown of the
>             machine if a broadcasted MCE is delivered.
> 
>      b) Do the parallel bringup via a sysfs control knob
> 
>         The per CPU target state interface allows to do that today one
>         by one, but it's awkward and has quite some overhead.
> 
>         A knob to online the rest of the not yet onlined present CPUs
>         with the benefit of the parallel bringup mechanism is
>         missing.
> 
>      #a) That's a risk to take by the operator.
> 
>          Even the regular serialized bringup does not protect against this
>          issue up to the point where all present CPUs have at least
>          initialized CR4.
> 
>          Limiting the number of APs to online early via the kernel command
>          line widens that window and increases the risk further by
>          executing user space before all APs have CR4 initialized.
> 
>          But the
c2FtZSBhcHBsaWVzIHRvIGEgZGVmZXJyZWQgb25saW5lIG1lY2hhbmlzbSBpbXBsZW1lbnRl
ZCBpbg0KPiAJdGhlIGtlcm5lbCB3aGVyZSBzb21lIHdvcmtlciBicmluZ3MgdXAgdGhlIG5v
dCB5ZXQgb25saW5lIEFQcyB3aGlsZQ0KPiAJdGhlIGVhcmx5IG9ubGluZSBDUFVzIGFyZSBh
bHJlYWR5IGV4ZWN1dGluZyB1c2VyIHNwYWNlIGNvZGUuDQo+IA0KPiAgICAgICNiKSBJcyBh
IG5vIGJyYWluZXIgdG8gaW1wbGVtZW50IG9uIHRvcCBvZiB0aGlzLg0KPiANCj4gDQo+IENv
bmNsdXNpb24NCj4gLS0tLS0tLS0tLQ0KPiANCj4gQWRkaW5nIHRoZSBiYXNpYyBwYXJhbGxl
bCBicmluZ3VwIG1lY2hhbmlzbSBhcyBwcm92aWRlZCBieSB0aGlzIHNlcmllcw0KPiBtYWtl
cyBhIGxvdCBvZiBzZW5zZS4gSW1wcm92aW5nIHBhcnRpY3VsYXIgaXNzdWVzIGFzIHBvaW50
ZWQgb3V0IGluIHRoZQ0KPiBhbmFseXNpcyBtYWtlcyBzZW5zZSB0b28uDQo+IA0KPiBCdXQg
dHJ5aW5nIHRvIHNvbHZlIGFuIGFwcGxpY2F0aW9uIHNwZWNpZmljIHByb2JsZW0gZnVsbHkg
aW4gdGhlIGtlcm5lbA0KPiB3aXRoIHRvbnMgb2YgY29tcGxleGl0eSwgd2l0aG91dCBleHBs
b3Jpbmcgc3RyYWlnaHQgZm9yd2FyZCBhbmQgc2ltcGxlDQo+IGFwcHJvYWNoZXMgZmlyc3Qs
IGRvZXMgbm90IG1ha2UgYW55IHNlbnNlIGF0IGFsbC4NCj4gDQo+IFRoYW5rcywNCj4gDQo+
IAl0Z2x4DQo+IA0KPiAtLS0NCj4gICBEb2N1bWVudGF0aW9uL2FkbWluLWd1aWRlL2tlcm5l
bC1wYXJhbWV0ZXJzLnR4dCB8ICAgMjANCj4gICBEb2N1bWVudGF0aW9uL2NvcmUtYXBpL2Nw
dV9ob3RwbHVnLnJzdCAgICAgICAgICB8ICAgMTMNCj4gICBhcmNoL0tjb25maWcgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8ICAgMjMgKw0KPiAgIGFyY2gvYXJtL0tj
b25maWcgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAgMQ0KPiAgIGFyY2gv
YXJtL2luY2x1ZGUvYXNtL3NtcC5oICAgICAgICAgICAgICAgICAgICAgIHwgICAgMg0KPiAg
IGFyY2gvYXJtL2tlcm5lbC9zbXAuYyAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAx
OA0KPiAgIGFyY2gvYXJtNjQvS2NvbmZpZyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IHwgICAgMQ0KPiAgIGFyY2gvYXJtNjQvaW5jbHVkZS9hc20vc21wLmggICAgICAgICAgICAg
ICAgICAgIHwgICAgMg0KPiAgIGFyY2gvYXJtNjQva2VybmVsL3NtcC5jICAgICAgICAgICAg
ICAgICAgICAgICAgIHwgICAxNA0KPiAgIGFyY2gvY3NreS9LY29uZmlnICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIHwgICAgMQ0KPiAgIGFyY2gvY3NreS9pbmNsdWRlL2FzbS9z
bXAuaCAgICAgICAgICAgICAgICAgICAgIHwgICAgMg0KPiAgIGFyY2gvY3NreS9rZXJuZWwv
c21wLmMgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAgOA0KPiAgIGFyY2gvbWlwcy9L
Y29uZmlnICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAgMQ0KPiAgIGFyY2gv
bWlwcy9jYXZpdW0tb2N0ZW9uL3NtcC5jICAgICAgICAgICAgICAgICAgIHwgICAgMQ0KPiAg
IGFyY2gvbWlwcy9pbmNsdWRlL2FzbS9zbXAtb3BzLmggICAgICAgICAgICAgICAgIHwgICAg
MQ0KPiAgIGFyY2gvbWlwcy9rZXJuZWwvc21wLWJtaXBzLmMgICAgICAgICAgICAgICAgICAg
IHwgICAgMQ0KPiAgIGFyY2gvbWlwcy9rZXJuZWwvc21wLWNwcy5jICAgICAgICAgICAgICAg
ICAgICAgIHwgICAxNA0KPiAgIGFyY2gvbWlwcy9rZXJuZWwvc21wLmMgICAgICAgICAgICAg
ICAgICAgICAgICAgIHwgICAgOA0KPiAgIGFyY2gvbWlwcy9sb29uZ3NvbjY0L3NtcC5jICAg
ICAgICAgICAgICAgICAgICAgIHwgICAgMQ0KPiAgIGFyY2gvcGFyaXNjL0tjb25maWcgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAgMQ0KPiAgIGFyY2gvcGFyaXNjL2tlcm5l
bC9wcm9jZXNzLmMgICAgICAgICAgICAgICAgICAgIHwgICAgNA0KPiAgIGFyY2gvcGFyaXNj
L2tlcm5lbC9zbXAuYyAgICAgICAgICAgICAgICAgICAgICAgIHwgICAgNw0KPiAgIGFyY2gv
cmlzY3YvS2NvbmZpZyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAgMQ0KPiAg
IGFyY2gvcmlzY3YvaW5jbHVkZS9hc20vc21wLmggICAgICAgICAgICAgICAgICAgIHwgICAg
Mg0KPiAgIGFyY2gvcmlzY3Yva2VybmVsL2NwdS1ob3RwbHVnLmMgICAgICAgICAgICAgICAg
IHwgICAxNA0KPiAgIGFyY2gveDg2L0tjb25maWcgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIHwgICA0NSAtLQ0KPiAgIGFyY2gveDg2L2luY2x1ZGUvYXNtL2FwaWMuaCAgICAg
ICAgICAgICAgICAgICAgIHwgICAgNQ0KPiAgIGFyY2gveDg2L2luY2x1ZGUvYXNtL2NwdS5o
ICAgICAgICAgICAgICAgICAgICAgIHwgICAgNQ0KPiAgIGFyY2gveDg2L2luY2x1ZGUvYXNt
L2NwdW1hc2suaCAgICAgICAgICAgICAgICAgIHwgICAgNQ0KPiAgIGFyY2gveDg2L2luY2x1
ZGUvYXNtL3Byb2Nlc3Nvci5oICAgICAgICAgICAgICAgIHwgICAgMQ0KPiAgIGFyY2gveDg2
L2luY2x1ZGUvYXNtL3JlYWxtb2RlLmggICAgICAgICAgICAgICAgIHwgICAgMw0KPiAgIGFy
Y2gveDg2L2luY2x1ZGUvYXNtL3Nldi1jb21tb24uaCAgICAgICAgICAgICAgIHwgICAgMw0K
PiAgIGFyY2gveDg2L2luY2x1ZGUvYXNtL3NtcC5oICAgICAgICAgICAgICAgICAgICAgIHwg
ICAyNiAtDQo+ICAgYXJjaC94ODYvaW5jbHVkZS9hc20vdG9wb2xvZ3kuaCAgICAgICAgICAg
ICAgICAgfCAgIDIzIC0NCj4gICBhcmNoL3g4Ni9pbmNsdWRlL2FzbS90c2MuaCAgICAgICAg
ICAgICAgICAgICAgICB8ICAgIDINCj4gICBhcmNoL3g4Ni9rZXJuZWwvYWNwaS9zbGVlcC5j
ICAgICAgICAgICAgICAgICAgICB8ICAgIDkNCj4gICBhcmNoL3g4Ni9rZXJuZWwvYXBpYy9h
cGljLmMgICAgICAgICAgICAgICAgICAgICB8ICAgMjIgLQ0KPiAgIGFyY2gveDg2L2tlcm5l
bC9jYWxsdGh1bmtzLmMgICAgICAgICAgICAgICAgICAgIHwgICAgNA0KPiAgIGFyY2gveDg2
L2tlcm5lbC9jcHUvYW1kLmMgICAgICAgICAgICAgICAgICAgICAgIHwgICAgMg0KPiAgIGFy
Y2gveDg2L2tlcm5lbC9jcHUvY2FjaGVpbmZvLmMgICAgICAgICAgICAgICAgIHwgICAyMQ0K
PiAgIGFyY2gveDg2L2tlcm5lbC9jcHUvY29tbW9uLmMgICAgICAgICAgICAgICAgICAgIHwg
ICA1MCAtLQ0KPiAgIGFyY2gveDg2L2tlcm5lbC9jcHUvdG9wb2xvZ3kuYyAgICAgICAgICAg
ICAgICAgIHwgICAgMw0KPiAgIGFyY2gveDg2L2tlcm5lbC9oZWFkXzMyLlMgICAgICAgICAg
ICAgICAgICAgICAgIHwgICAxNA0KPiAgIGFyY2gveDg2L2tlcm5lbC9oZWFkXzY0LlMgICAg
ICAgICAgICAgICAgICAgICAgIHwgIDEyMSArKysrKw0KPiAgIGFyY2gveDg2L2tlcm5lbC9z
ZXYuYyAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAgMg0KPiAgIGFyY2gveDg2L2tl
cm5lbC9zbXAuYyAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAgMw0KPiAgIGFyY2gv
eDg2L2tlcm5lbC9zbXBib290LmMgICAgICAgICAgICAgICAgICAgICAgIHwgIDUwOCArKysr
KysrKy0tLS0tLS0tLS0tLS0tLS0NCj4gICBhcmNoL3g4Ni9rZXJuZWwvdG9wb2xvZ3kuYyAg
ICAgICAgICAgICAgICAgICAgICB8ICAgOTggLS0tLQ0KPiAgIGFyY2gveDg2L2tlcm5lbC90
c2MuYyAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAyMA0KPiAgIGFyY2gveDg2L2tl
cm5lbC90c2Nfc3luYy5jICAgICAgICAgICAgICAgICAgICAgIHwgICAzNiAtDQo+ICAgYXJj
aC94ODYvcG93ZXIvY3B1LmMgICAgICAgICAgICAgICAgICAgICAgICAgICAgfCAgIDM3IC0N
Cj4gICBhcmNoL3g4Ni9yZWFsbW9kZS9pbml0LmMgICAgICAgICAgICAgICAgICAgICAgICB8
ICAgIDMNCj4gICBhcmNoL3g4Ni9yZWFsbW9kZS9ybS90cmFtcG9saW5lXzY0LlMgICAgICAg
ICAgICB8ICAgMjcgKw0KPiAgIGFyY2gveDg2L3hlbi9lbmxpZ2h0ZW5faHZtLmMgICAgICAg
ICAgICAgICAgICAgIHwgICAxMQ0KPiAgIGFyY2gveDg2L3hlbi9zbXBfaHZtLmMgICAgICAg
ICAgICAgICAgICAgICAgICAgIHwgICAxNg0KPiAgIGFyY2gveDg2L3hlbi9zbXBfcHYuYyAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHwgICA1NiArLQ0KPiAgIGRyaXZlcnMvYWNwaS9w
cm9jZXNzb3JfaWRsZS5jICAgICAgICAgICAgICAgICAgIHwgICAgNA0KPiAgIGluY2x1ZGUv
bGludXgvY3B1LmggICAgICAgICAgICAgICAgICAgICAgICAgICAgIHwgICAgNA0KPiAgIGlu
Y2x1ZGUvbGludXgvY3B1aG90cGx1Zy5oICAgICAgICAgICAgICAgICAgICAgIHwgICAxNw0K
PiAgIGtlcm5lbC9jcHUuYyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHwg
IDM5NyArKysrKysrKysrKysrKysrKy0NCj4gICBrZXJuZWwvc21wLmMgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICB8ICAgIDINCj4gICBrZXJuZWwvc21wYm9vdC5jICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICB8ICAxNjMgLS0tLS0tLQ0KPiAgIDYyIGZp
bGVzIGNoYW5nZWQsIDk1MyBpbnNlcnRpb25zKCspLCA5NzYgZGVsZXRpb25zKC0pDQo+IA0K
PiANCg0KVGVzdGVkIHdpdGggYSBYZW4gUFYgZG9tMCBvbiBhbiA4IGNwdSBzeXN0ZW0sIG5v
IGlzc3VlcyBmb3VuZC4NCg0KVGVzdGVkLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3Vz
ZS5jb20+DQoNCg0KSnVlcmdlbg0K
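
As a concrete illustration of the user-space approach described above (boot with a
limited CPU count, e.g. `maxcpus=4` on the kernel command line, then online the
rest later), a minimal sketch using the standard sysfs hotplug interface follows.
The loop is illustrative only and not part of the series; `SYSFS_CPU` is an
override hook added here purely so the loop can be exercised against a fake tree.

```shell
# Online all present-but-offline CPUs one by one through sysfs.
# SYSFS_CPU defaults to the real sysfs location; it can be pointed at a
# scratch directory for testing.
SYSFS_CPU="${SYSFS_CPU:-/sys/devices/system/cpu}"

for cpu in "$SYSFS_CPU"/cpu[0-9]*; do
    [ -f "$cpu/online" ] || continue      # the boot CPU usually has no 'online' file
    if [ "$(cat "$cpu/online")" = "0" ]; then
        # Each write triggers one full serialized hotplug operation --
        # this is the awkward one-by-one path referred to above.
        echo 1 > "$cpu/online" 2>/dev/null || true
    fi
done
```

The missing piece the mail points out is a single knob that onlines all remaining
present CPUs at once with the benefit of the parallel bringup mechanism, instead
of this per-CPU loop.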


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 08:41:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 08:41:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521847.810785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poKQQ-0001oo-K9; Mon, 17 Apr 2023 08:41:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521847.810785; Mon, 17 Apr 2023 08:41:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poKQQ-0001oh-Gy; Mon, 17 Apr 2023 08:41:26 +0000
Received: by outflank-mailman (input) for mailman id 521847;
 Mon, 17 Apr 2023 08:41:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poKQP-0001ob-Co
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 08:41:25 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on2080.outbound.protection.outlook.com [40.107.6.80])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a1d29484-dcfb-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 10:41:23 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GV1PR04MB9087.eurprd04.prod.outlook.com (2603:10a6:150:22::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 08:40:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 08:40:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1d29484-dcfb-11ed-8611-37d641c3527e
Message-ID: <0b0ea9cb-717e-8bcd-f08f-fd45d8993d95@suse.com>
Date: Mon, 17 Apr 2023 10:40:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH 2/3] xen/efi: fix unitialized use warning
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
 <20230414185714.292881-3-stewart.hildebrand@amd.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230414185714.292881-3-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0194.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ab::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 14.04.2023 20:57, Stewart Hildebrand wrote:
> When building the hypervisor for arm64 with -Og, we encounter a (false)
> uninitialized use warning:
> 
> arch/arm/efi/boot.c: In function ‘efi_start’:
> arch/arm/efi/boot.c:1468:9: error: ‘argc’ may be used uninitialized [-Werror=maybe-uninitialized]
>  1468 |         efi_arch_handle_cmdline(argc ? *argv : NULL, options, name.s);
>       |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> arch/arm/efi/boot.c:1263:21: note: ‘argc’ was declared here
>  1263 |     unsigned int i, argc;
>       |                     ^~~~
> cc1: all warnings being treated as errors
> 
> Fix this by initializing argc. As a precaution, also initialize argv.

I'm not happy about this kind of change, and I also wonder whether we
wouldn't better use initializers for both variables if we already have
to work around compiler shortcomings like this one. Nevertheless I can
see the need, so ...

> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 08:50:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 08:50:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521851.810794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poKYu-0003Gt-E7; Mon, 17 Apr 2023 08:50:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521851.810794; Mon, 17 Apr 2023 08:50:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poKYu-0003Gm-BZ; Mon, 17 Apr 2023 08:50:12 +0000
Received: by outflank-mailman (input) for mailman id 521851;
 Mon, 17 Apr 2023 08:50:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poKYs-0003Gd-Sl
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 08:50:10 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2050.outbound.protection.outlook.com [40.107.20.50])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dafe2924-dcfc-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 10:50:08 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7439.eurprd04.prod.outlook.com (2603:10a6:800:1ab::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 08:49:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 08:49:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dafe2924-dcfc-11ed-8611-37d641c3527e
Message-ID: <2094ea22-a58e-fde0-8a77-f13675161a4a@suse.com>
Date: Mon, 17 Apr 2023 10:49:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: RFC: disable HPET legacy mode after timer check
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Simon Gaiser <simon@invisiblethingslab.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <cb408368-077d-edb5-b4ad-f80086db48c1@invisiblethingslab.com>
 <0ac3fce6-dcd2-4521-6207-ede4d90e656b@citrix.com>
 <ZDaVPiJTt8q74nQw@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZDaVPiJTt8q74nQw@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0163.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 08:49:39.9269
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FRRpqNK/v0eUu3V6hgSE42v875kNBdeILf5NhrahgYrL8YfFpJNNkSI3x1rJbK2r3LUhru306/7kbKovO6QvTA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7439

On 12.04.2023 13:25, Roger Pau Monné wrote:
> On Tue, Apr 11, 2023 at 12:20:13PM +0100, Andrew Cooper wrote:
>> On 11/04/2023 11:30 am, Simon Gaiser wrote:
>>> Hi,
>>>
>>> I have been recently looking into getting S0ix working on Xen [1].
>>>
>>> Thanks to a tip from Andrew I found that the HPET legacy mode was
>>> preventing my test system from reaching a package C-state lower than PC7
>>> and thereby also preventing S0ix residency.
>>>
>>> For testing I simply modified check_timer() to disable it again after it
>>> checked the timer irq:
>>>
>>> --- a/xen/arch/x86/io_apic.c
>>> +++ b/xen/arch/x86/io_apic.c
>>> @@ -1966,6 +1969,8 @@ static void __init check_timer(void)
>>>  
>>>              if ( timer_irq_works() )
>>>              {
>>> +                hpet_disable_legacy_replacement_mode();
>>>                  local_irq_restore(flags);
>>>                  return;
>>>              }
>>>
>>>
>>> With this [2] I'm able to reach S0ix residency for some time and for short
>>> periods the system's power consumption goes down to the same level as with
>>> native Linux!
>>
>> Excellent progress!
>>
>>> It reaches low power states only for a fraction of the suspend to idle
>>> time, so something still makes the CPU/chipset think it should leave the
>>> low power mode, but that's another topic.
>>
>> Do you have any further info here?  There are a range of possibilities,
>> from excess timers in Xen (e.g. PV guests default to a 100Hz timer even
>> though no guests actually want it AFAICT), or the 1s TSC rendezvous
>> (which isn't actually needed on modern systems), all the way to the
>> platform devices not entering d3hot.
>>
>>>
>>> I tried to understand how all the timer code interacts with disabling
>>> the legacy mode. I think it would only break cpuidle if X86_FEATURE_ARAT
>>> is not available (ARAT is available on my test system, and indeed I
>>> didn't run into obvious breakage).
>>>
>>> Is this (disabled PIT && !ARAT) a configuration that exists (and needs
>>> to be supported)?
>>>
>>> Did I miss something else? (Very much possible, given that this is way
>>> above my existing experience with x86 and Xen internals.)
>>
>> Xen's code is a mess and needs an overhaul.
>>
>> Right now, we're using the timer as "a source of interrupts" to try and
>> check that we've got things set up suitably.  But this doesn't need to
>> be the PIT, or a timer at all - it just needs to be "an interrupt coming
>> in from the platform".
> 
> I would even question whether that testing is useful overall.  We test
> a single IO-APIC pin, which still leaves room for the rest of them to
> not be properly configured, and Xen might not be using the PIT timer
> in the end.

Testing one pin is sufficient for the intended purpose (proving that
the delivery route platform -> IO-APIC -> LAPIC works), leaving aside
firmware possibly configuring multiple IO-APICs inconsistently. Yet
if there are multiple IO-APICs, I'm afraid we have no way of knowing
how to trigger any of the pins of the secondary ones. Even if we went
and figured out what devices are connected to them, we'd still have no
(even rudimentary) device drivers knowing how to interact with those
devices.
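
Incidentally, for readers following the thread: the
hpet_disable_legacy_replacement_mode() call in Simon's hack above boils
down to clearing LEG_RT_CNF (bit 1) in the HPET general configuration
register at offset 0x10, per the HPET specification. A minimal sketch
against a simulated register block (the helper names and the fake MMIO
window here are illustrative, not Xen's actual code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Simulated HPET register block; in real code this would be an MMIO
 * mapping of the HPET. Offsets/bits follow the HPET specification:
 * general configuration register at offset 0x10, bit 0 = ENABLE_CNF,
 * bit 1 = LEG_RT_CNF (legacy replacement route).
 */
#define HPET_CFG          0x10
#define HPET_CFG_ENABLE   0x1
#define HPET_CFG_LEGACY   0x2

static uint64_t fake_hpet[0x100 / 8];   /* stand-in for the MMIO window */

static uint64_t hpet_read(unsigned int off)
{
    return fake_hpet[off / 8];
}

static void hpet_write(unsigned int off, uint64_t val)
{
    fake_hpet[off / 8] = val;
}

/* Illustrative equivalent of hpet_disable_legacy_replacement_mode(). */
static void disable_legacy_replacement(void)
{
    uint64_t cfg = hpet_read(HPET_CFG);

    /* Clear only the legacy replacement route; leave the counter running. */
    if ( cfg & HPET_CFG_LEGACY )
        hpet_write(HPET_CFG, cfg & ~(uint64_t)HPET_CFG_LEGACY);
}
```

In Xen proper the write of course goes to the real MMIO mapping, and the
comparator/interrupt state needs handling as well; the sketch only shows
the spec-level bit operation.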

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 09:01:36 2023
Message-ID: <dfc9f77c-ab6e-a4cb-7fcf-4fa1d8371902@suse.com>
Date: Mon, 17 Apr 2023 11:00:41 +0200
Subject: Re: [PATCH] xen: Fold exit paths in find_text_region()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230413192201.3255984-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230413192201.3255984-1-andrew.cooper3@citrix.com>

On 13.04.2023 21:22, Andrew Cooper wrote:
> Despite rcu_read_unlock() being fully inlineable, the optimiser cannot fold
> these exit paths, because of the various compiler barriers providing RCU
> safety.  Help the compiler out.

Mind me asking about "cannot"? IIRC the compiler is permitted to fold
even volatile asm()s, as long as certain guarantees aren't violated.
With "... often won't ..." or similar I'd be happy to give my ack,
although with another maintainer's questions still pending (and imo
wrongly taken by you as possibly insulting) it's never really clear to
me whether doing so is appropriate. I would specifically argue that ...

> This compiles to marginally better code in all cases.  No functional change.

... even minor improvements are worth it.

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Preferably (unless proven wrong) with said adjustment and on the basis
that Julien signals his question as addressed (once it really is, of
course):
Acked-by: Jan Beulich <jbeulich@suse.com>
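
For reference, the kind of fold under discussion can be sketched as
below; the lock helpers with compiler barriers stand in for
rcu_read_lock()/rcu_read_unlock(), and both functions are illustrative
toys, not the actual find_text_region():

```c
#include <assert.h>
#include <stddef.h>

/*
 * Dummy RCU-style lock helpers; the memory-clobbering asm()s model the
 * compiler barriers which may keep the optimiser from merging code
 * across them.
 */
static inline void read_lock(void)   { __asm__ volatile ("" ::: "memory"); }
static inline void read_unlock(void) { __asm__ volatile ("" ::: "memory"); }

struct region { int start, end; };
static struct region table[] = { { 0, 10 }, { 20, 30 } };

/* Before: two separate unlock-and-return exit paths. */
static const struct region *find_before(int addr)
{
    read_lock();
    for ( size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++ )
        if ( addr >= table[i].start && addr < table[i].end )
        {
            read_unlock();
            return &table[i];
        }
    read_unlock();
    return NULL;
}

/* After: a single exit path the compiler need not try to merge. */
static const struct region *find_after(int addr)
{
    const struct region *r = NULL;

    read_lock();
    for ( size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++ )
        if ( addr >= table[i].start && addr < table[i].end )
        {
            r = &table[i];
            break;
        }
    read_unlock();

    return r;
}
```

Whether the compiler merges the two unlock-and-return paths in the first
form depends on how it treats the barrier-containing asm()s; the second
form simply doesn't ask it to.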

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 09:10:00 2023
Message-ID: <110951c5-4dc6-5fa5-1722-a86ba28f1789@suse.com>
Date: Mon, 17 Apr 2023 11:09:18 +0200
Subject: Re: [PATCH] xen/vpci: initialize msix->next
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230414202932.293688-1-stewart.hildebrand@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230414202932.293688-1-stewart.hildebrand@amd.com>

On 14.04.2023 22:29, Stewart Hildebrand wrote:
> The list was not being initialized, which could result in a crash in
> vpci_remove_device if no list items were added.

Can you please point out the code path which may lead to such a crash?

> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -678,6 +678,8 @@ static int cf_check init_msix(struct pci_dev *pdev)
>      if ( !msix )
>          return -ENOMEM;
>  
> +    INIT_LIST_HEAD(&msix->next);
> +
>      rc = vpci_add_register(pdev->vpci, control_read, control_write,
>                             msix_control_reg(msix_offset), 2, msix);
>      if ( rc )

The error path below here frees msix again, so that can't be a problem.
The only other return path from the function is after a suitable list_add().

"... if no list items were added" is misleading too - this isn't a list
head, but a list element. The list head is d->arch.hvm.msix_tables.
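
To illustrate the head-vs-element distinction: initializing a
kernel-style list element makes its pointers self-referential, so
deleting a never-inserted element is harmless, whereas deleting an
uninitialized one dereferences garbage. A self-contained sketch of
minimal, illustrative list primitives (mirroring the usual intrusive
list API, not Xen's actual headers):

```c
#include <assert.h>

/* Minimal kernel-style intrusive doubly-linked list. */
struct list_head { struct list_head *next, *prev; };

/* Used both for an empty list head and for a detached element. */
static inline void INIT_LIST_HEAD(struct list_head *h)
{
    h->next = h;
    h->prev = h;
}

static inline void list_add(struct list_head *n, struct list_head *head)
{
    n->next = head->next;
    n->prev = head;
    head->next->prev = n;
    head->next = n;
}

/*
 * Unlinks a node by rewriting its neighbours. On an initialized but
 * never-inserted element both neighbours are the element itself, so
 * the operation is a harmless self-write; on an uninitialized element
 * it would chase uninitialized pointers.
 */
static inline void list_del(struct list_head *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}
```

This is why initializing a list *element* can matter when a teardown
path may run before the element was ever added to its list's head.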

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 09:12:34 2023
Message-ID: <ecfe786e-0623-f965-bdda-57ae46ef5646@suse.com>
Date: Mon, 17 Apr 2023 11:12:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [XEN v5 00/10] Add support for 32-bit physical address
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com, julien@xen.org,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, wl@xen.org,
 rahul.singh@arm.com, xen-devel@lists.xenproject.org
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0061.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB9577:EE_
X-MS-Office365-Filtering-Correlation-Id: ca6548d4-4ff3-4d2d-028d-08db3f23d9e6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Kl6T+LU0q+L1bXxyOF2FL85hxdo12VXjy3sbpaRvdwBvDpB1P8UkoWElT1E2cHTuWXRYN2X7FqpVAR/OgrcIE75Wv5YFfBHhbM9VgI4PYEZPq3cEhKSsdyJf/5fU+Uu9HxyQb57MaJi8vB2XdPIahsCxCzWr/z0sN/0UTzFuToP9CyewHY9LY0bsaEBPPiTG1HrdClsTphBLxtS6mqFoan2QfJmzP08SIX2K1WrQhbanZ1n4CQqVbL2Vxq7rbcVtdq123O2TTQMyh0pIl7L4+omEhojr2W4lPy1LkLWmQDqReU/rnQA6Wkq5GcR6XK7P4UxPFUeKoMdti2hkJ3NsjLoG4oF/DVU6dS9LkkOEGhQNfvArrz/+/eZEPLvT78iP5n75k68g7JvJBquqPkPen39mN6Tox5uTKs5XR0PnDp1CrqYqX4lbi7HbSyzKkb0D6aJM8yioh3jU/Z7s0/Tg94aDs1fEIlKwf5sSH6QdSs7SswkEK/cAqnPSGrFZJ4TUs+y6W5i4u29w53c10uLulq7MnVRqjtJtXMlA9+UA9jhDGgc94/35wo52AJxuhIX0ROcZ7k94e396OJ9bb4XmlMhMFMdCThjRf35CpVh/V1zjrzFyaSAv/zzogDC1pco4itUzjFVHiYsdiyZ79oHsLw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(366004)(396003)(346002)(376002)(136003)(451199021)(6916009)(4326008)(316002)(66946007)(66476007)(966005)(66556008)(478600001)(6486002)(8936002)(41300700001)(5660300002)(8676002)(4744005)(7416002)(2906002)(31696002)(86362001)(36756003)(38100700002)(2616005)(26005)(6512007)(186003)(53546011)(6506007)(31686004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?eEM5V0xBTDRMUEdGa0drMm1BZDdDWGhTMmdRSVJFLzFaVEhUQUk1YkYxakhi?=
 =?utf-8?B?TjZtQXRXKzEzelBrOFVPZlZPY2ZmU044QWdQWUtJdVloMVhFeTB1UjVFaWhw?=
 =?utf-8?B?QS9zMTlVbHM5L28wdmNMZHpUTXE0K0VFeFhoUmd5ZmwrNThaY1IzL0JsWk5E?=
 =?utf-8?B?Si82TjR5eUMzZmlHN0dER2dqUm1KS0RnVDVCR2VMVzdwcXhSMVdqR0pvcUE1?=
 =?utf-8?B?RVYwdTFHaDJHam90TnRqUW5sNHRhQWxBNDFRRnRkcWNWUDYwdnpqd2Zqb0hs?=
 =?utf-8?B?b0FWY1U2Kzl5eExhTFhVcVZVL1pJZTl6RVkrMnJ4ZEFac1hGb1czQ1UySVN5?=
 =?utf-8?B?K0VJbE0rVWpWcmtjSHpVeko4MlU5Z0VDNy80T2xjcW5NbU5lUVhpYUt6U0wr?=
 =?utf-8?B?dXVJQTVkTElHV0dOQzZHL3gwVi9MNWpPaW5IcXhJcW12RWc4UEZUZzJSaHJU?=
 =?utf-8?B?a1NxQjB3WVZXakxTZ1lrbEppeXV4b3ZEVC9DS0ZnZE9PWmhqdWlhZUVjRldG?=
 =?utf-8?B?NG9YYTcwaWc0TEdIdVJ2eUZIMDdNRHUwaUxMR1YzSGNrL2Jock1JZVJMWjlY?=
 =?utf-8?B?TkVpbGtDOW1PeExFUTdZZ0UwRnJKTjQ2bEhxbXBjdSt4bDFVcjhSb3ZCaFk0?=
 =?utf-8?B?UWdzSldoRGhrdEducmpBcFNUVGl3NFZETjFGdU5OS2hxVjdyOGRWa0RKQ0Va?=
 =?utf-8?B?ZVpob2x2OXFRdGJUY3hGVkZKZHgrQmIyRzBGN3hldmNPRWZpdk4yQ2p3OUR5?=
 =?utf-8?B?b0c5aG5xZWpyMW5Bb0R2NmZvSytSRldOR1N5elFjUVZ5RUJDaUNHb24rM2hq?=
 =?utf-8?B?c2dsNU9JenNIbEZxSUJXRmttaDhWZ0lETERIWmhnMm1RUVJmd3FEWDl6U1dJ?=
 =?utf-8?B?TU55aWhsa2tPSkwrVzVwb254SzcraFdlSVNtRDAxc1RuRjBHMTRBR0dQNzVa?=
 =?utf-8?B?NnlVRERDcnNqM3YwNll3U1k2UjE3d1JCWnhrVE9Cczh2RXlIY3NSczRVUy9S?=
 =?utf-8?B?SzlQUUVHc3NhaVpiSWRFaGlJeFlLK2R6ajBiYlVZci83UVpHeXlvckw2SzVO?=
 =?utf-8?B?M2YzQ3F1K3M1YnA2b0RMZ3R3aVk3cmpVVDllY0JveGNsaXV2Vk5XdzQwK3c0?=
 =?utf-8?B?M1E3SzF3QU8yK1BzSW50T2JsOTU4S2hCdSt3Mys1NjFQTm9uNllKMVFFcHhn?=
 =?utf-8?B?WHlaSlRCVlBmUSs3V0FzcndZT1ExQ3N4OGFtLy9MWVhDaUI1Qi9PZ1NiMS9Q?=
 =?utf-8?B?cmVDaVZGMmJPRGkzVEJyQXJIN3hHcGU4K3dyakN5TWJIeEdxbzBzMEhIdTFk?=
 =?utf-8?B?MHJtL25PMW54MXVtNFhFY0pHNDZOOGxMckR5N1VacVVFNGcweWh1ZEtERVFp?=
 =?utf-8?B?KzVNNXRReHpxd2p5Si9IUmM2aCtsS05sUlBjREpQQW0rcGlTSnE1RUNNNnVB?=
 =?utf-8?B?QlVNSjYzbWJRQ2d5VFI2WjYxSVJQeGhqd1lJRG0yc2ZiSmpZS3c3L3VsTGc2?=
 =?utf-8?B?UVhMY2VPVStOSXZ1Qk82L3dzK2VCalRyOENFRW5DZkpWL2ZQeXV4U2o1Zk1n?=
 =?utf-8?B?Sm9pY0YzTDRBMkwzVUlvbTNud3RzSm9POVFJanhGVzhJUi83Uld6TURxVEVY?=
 =?utf-8?B?N3R1UlI3ZCt4dk9aaHNTNHhyeUhaWXF2aUtXRWpTTHNjalNZcGo5QUswUUtB?=
 =?utf-8?B?c3I4M2VZU3c5VURzN1RIYjBYdUloelVDSERHaWdaUU1ScXo1dkhkN0FHVmRX?=
 =?utf-8?B?M0JFb241Vi9BMzIzTEVoTWVKd2xjeURDcUV3RnJLM0VBUHhVRko4TStpNkIx?=
 =?utf-8?B?eFBaWmR3aml5bnVlZ2lFTTVwWERuUkFzMW1JNEgzdzV1R3ZhbU1nVk9Selpz?=
 =?utf-8?B?emQwbnM0VjYrc2hWY0RYRlAzNmV6Z25PUFpQQUtHQ296OUt0cGUxMkUzSGR2?=
 =?utf-8?B?SHFqdklpY0FmWUdOOVFYUjFocEdXN2YvUUxPOWprL1JZc2FPcHZ6YWdZTG4x?=
 =?utf-8?B?Y2ZQb0l4SnkwTGxCdm1lNVNYVUVqWWx1RUFhbUJnTzRoUmJNL1NYeFRteHRQ?=
 =?utf-8?B?aERnSms5dnFRWWZueVByUFhYUEJPL0piYUF6SzdiaTl1TWFmL21ma0FYUk1E?=
 =?utf-8?Q?F6P55sdcXlp5rWWqSzpytnMit?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ca6548d4-4ff3-4d2d-028d-08db3f23d9e6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 09:12:21.6618
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZKbRSOWOsEq7F/Soot2nDsgaqCAODMg5iljZRJIbkHeKmxwqsVX+l4VWsk4t78t9tpeH2j5saYXD0HyQU01TYw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB9577

On 13.04.2023 19:37, Ayan Kumar Halder wrote:
> v4 - 1. Dropped "xen/drivers: ns16550: Use paddr_t for io_base/io_size" from the patch series.
> 
> As Jan (jbeulich@suse.com) had pointed out in https://lore.kernel.org/xen-devel/20230321140357.24094-5-ayan.kumar.halder@amd.com/,
> ns16550 driver requires some prior cleanup. Also, ns16550 can be ignored for now
> for the 32-bit paddr support (which is mainly targeted for Arm). I will send out
> separate patches to fix this once the current series is committed (or in a
> ready-to-commit state). I hope that is fine with Jan?

Sure - if you don't really need the change right here, I'm happy to see
it come separately later on.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 09:15:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 09:15:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521869.810835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poKwr-0007Tx-2w; Mon, 17 Apr 2023 09:14:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521869.810835; Mon, 17 Apr 2023 09:14:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poKwq-0007Tq-Vw; Mon, 17 Apr 2023 09:14:56 +0000
Received: by outflank-mailman (input) for mailman id 521869;
 Mon, 17 Apr 2023 09:14:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ncOi=AI=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1poKwp-0007Tk-Dw
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 09:14:55 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20622.outbound.protection.outlook.com
 [2a01:111:f400:7eab::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4fa2a60f-dd00-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 11:14:53 +0200 (CEST)
Received: from MW3PR05CA0007.namprd05.prod.outlook.com (2603:10b6:303:2b::12)
 by SA0PR12MB4429.namprd12.prod.outlook.com (2603:10b6:806:73::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 09:14:50 +0000
Received: from CO1NAM11FT035.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:2b:cafe::7f) by MW3PR05CA0007.outlook.office365.com
 (2603:10b6:303:2b::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.19 via Frontend
 Transport; Mon, 17 Apr 2023 09:14:49 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT035.mail.protection.outlook.com (10.13.175.36) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.19 via Frontend Transport; Mon, 17 Apr 2023 09:14:49 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 17 Apr
 2023 04:14:48 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 17 Apr
 2023 04:14:48 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 17 Apr 2023 04:14:46 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fa2a60f-dd00-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fIzLYJWBGan3eQCeTnX+dk6lz0ch4c0hDh+aourpwyDZNtasE0BctTrwDLfv9/WSvvO76YM0DBv94kP0p+/5nIMmSd5dVeAKII1035kzLPJVqTZLsOLdX4zu463DZf71XjzQCvY2PkGJ+7OK9SQiDwN6mb0uaQllt/mpdEtDJszYhmA48IeSlU7Ol/q9Jqt5hcZB9TitFx4aRGYB6O58qqMM95Ox7qCv7LfVI9q6hGxkt5INsOUZXsaTZRTgU9ReEWs3xY+2RgxmHPDhE3sBOEiE39Sj8EzJu+cyGaVP9sVR9xxr2qusxNXj/q7akquKQkYTZB70kIihAt9q0lYTVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FZrJ9heFh4JrBPCgTPG9XVKHRb96aOeDIXYtPn+itCg=;
 b=ndkicFCyq0sB3pm6wpYp7aqoqJTPPpLe3n/p6a7WHri+uHQBVF28Dmyjccllqzsj6sGGJYU5RmvmJKuRFho+Skdk6ZXWjsJUlJvgJQytQsribxjH16t106YAqCMXoEmtrs0A2TVGZ2a+PvwbxhTcAC4OK1GFblaZRiqZEqG/e4FLVZIo+momlHv2T0M/sifpJ2l0RXXM1Ci/y/I67wVyDF6fLT/nnyvWFemST3EACT1yxd/F8mmKyTaB6V4Cvr331bw5sPshOiYfmJIAWTf3nUlggOxdVbZD4VtCDTSiVmAqyj+ppEubKhRbHU/1w7lCzLa2SQGynLspElXxAsyiFw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FZrJ9heFh4JrBPCgTPG9XVKHRb96aOeDIXYtPn+itCg=;
 b=UK/CuwIH0R81QDgg0j8nDCaOOyfUKBvEWVYRnaiprKltM/CxjXOso20hoZCz0fXiFUViUvdsuFzrSzHMfo+SEH9muH2h/ckzhVD0BfDZh8vSAJkCKrzhuBq8edxSD7fIHEWpM3Jc//N2nB7uuNLOVN9YpHXnUiL7NvCKhWRMfbU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <b03f23e0-e66f-520a-20eb-7038e19d1649@amd.com>
Date: Mon, 17 Apr 2023 11:14:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 08/17] xen/iommu: Move spin_lock from
 iommu_dt_device_is_assigned to caller
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-9-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-9-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT035:EE_|SA0PR12MB4429:EE_
X-MS-Office365-Filtering-Correlation-Id: 5e37a0ef-b664-4ab9-5b24-08db3f24320c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	sVgdy0L91du4CMgiBqHk9uQx6N5l9m+rdeUzd/V7Yg6P9WeKOJswYevVADLMTJmT1ALyKKPMmdQADY3oUAKINar6fyIiJsaNoZvUMDL57JJQ3DiRwEXb5hldoU0R81wT2nBPQIOk6sHjNamQq6w4T0QmRhaS6du78EJNqoZ+7q0joxRMKbo+bHLe/EQ4rvyGp3K7HJzroOkrN+mdqr9E3Rg+AzBf+5Rm5v9cWBVtHDwS4cRB1WYMl78fq2CwCCyfLdzNYzpwMMXL3SeccP177DFR2Hn5xcYOx//FFrlFCinQCjA6netXJUoE3wx0hu6BENct8brdRYsNfgi6he+M6ZSrJuUHDhmcFrg5VteDgM0JFH1t/Yuq8x20s4St07gtvA05oQpvkMHQF8+hIsvDBi3G+hdUCoXX9NuQ5HOJTQUFyp+R8r/SrXiTAuzlCmii3ATcTbShKreiWZQBMS5lzanKY0Gv84WYSeGcuE7YuSB7nc7mzay+4NPVrA9I9YN2qy8Aha6T5zl7uOxaWMNDXcNGpCVVqbSSxScmk/vuCoUVXWJUEXOJbgpVqMMXOdj6cntzkT9JTPKDyZyMobFiQ7/yTn5sLL4TAbpLzK3yrOtmWEmAX5xkbKqCuR11DGejNSYSBAE7BZrAjrxZRZ8zB1MGEnSkDCjS0iJlw7/LLiNEDXQNAi/SNHivRIzWsAMg6g6VUp2nH74MIkcrOLMWeOrQc0reZQCY7HloCl03LYOfQtlBAtreGDspRJTOEBivSFOJPRQ1jo11f/JWcpJ14Q==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(346002)(136003)(39860400002)(451199021)(36840700001)(46966006)(40470700004)(5660300002)(40460700003)(2906002)(4326008)(70586007)(36756003)(70206006)(16576012)(44832011)(86362001)(8936002)(81166007)(82310400005)(41300700001)(478600001)(356005)(8676002)(316002)(82740400003)(31696002)(40480700001)(54906003)(110136005)(26005)(336012)(31686004)(426003)(53546011)(36860700001)(2616005)(47076005)(186003)(83380400001)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 09:14:49.2381
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5e37a0ef-b664-4ab9-5b24-08db3f24320c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT035.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB4429

Hi Vikram,

It looks like you didn't take into account any of my remarks made for v4.
I will repeat them here.

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Rename iommu_dt_device_is_assigned() to iommu_dt_device_is_assigned_lock().
s/lock/locked

> 
> Moving the spin_lock to the caller prevents concurrent access to
> iommu_dt_device_is_assigned while doing add/remove/assign/deassign.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  xen/drivers/passthrough/device_tree.c | 23 +++++++++++++++++++----
>  1 file changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index 1c32d7b50c..bb4cf7784d 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -83,16 +83,15 @@ fail:
>      return rc;
>  }
> 
> -static bool_t iommu_dt_device_is_assigned(const struct dt_device_node *dev)
> +static bool_t
> +    iommu_dt_device_is_assigned_locked(const struct dt_device_node *dev)
This should not be indented.

>  {
>      bool_t assigned = 0;
> 
>      if ( !dt_device_is_protected(dev) )
>          return 0;
> 
> -    spin_lock(&dtdevs_lock);
>      assigned = !list_empty(&dev->domain_list);
> -    spin_unlock(&dtdevs_lock);
> 
>      return assigned;
>  }
> @@ -213,27 +212,43 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
>          if ( (d && d->is_dying) || domctl->u.assign_device.flags )
>              break;
> 
> +        spin_lock(&dtdevs_lock);
> +
>          ret = dt_find_node_by_gpath(domctl->u.assign_device.u.dt.path,
>                                      domctl->u.assign_device.u.dt.size,
>                                      &dev);
>          if ( ret )
> +        {
> +            spin_unlock(&dtdevs_lock);
> +
Please do not add a blank line between spin_unlock and break. It does not improve readability.

>              break;
> +        }
> 
>          ret = xsm_assign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
>          if ( ret )
> +        {
> +            spin_unlock(&dtdevs_lock);
> +
same here.
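For reference, the locking change under review can be sketched outside Xen with hypothetical names and POSIX mutexes standing in for Xen's spinlocks: the helper no longer takes the lock itself, and the caller holds it across the lookup, the check, and any follow-up update.

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t devs_lock = PTHREAD_MUTEX_INITIALIZER;
static bool device_assigned; /* stands in for !list_empty(&dev->domain_list) */

/* Caller must hold devs_lock (hence the "_locked" naming convention). */
static bool device_is_assigned_locked(void)
{
    return device_assigned;
}

/*
 * With the lock in the caller, the assignment check and the state change
 * form one critical section, so a concurrent assign/deassign cannot slip
 * in between them.
 */
static int assign_device(void)
{
    int ret = 0;

    pthread_mutex_lock(&devs_lock);

    if ( device_is_assigned_locked() )
        ret = -1; /* already assigned */
    else
        device_assigned = true;

    pthread_mutex_unlock(&devs_lock);

    return ret;
}

static void deassign_device(void)
{
    pthread_mutex_lock(&devs_lock);
    device_assigned = false;
    pthread_mutex_unlock(&devs_lock);
}
```

In the pre-patch shape, the helper's own lock/unlock pair covered only the list check, leaving a window before the caller acted on the result.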

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 09:15:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 09:15:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521873.810845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poKxN-00083G-F9; Mon, 17 Apr 2023 09:15:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521873.810845; Mon, 17 Apr 2023 09:15:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poKxN-000839-CA; Mon, 17 Apr 2023 09:15:29 +0000
Received: by outflank-mailman (input) for mailman id 521873;
 Mon, 17 Apr 2023 09:15:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poKxM-0007Tk-HF
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 09:15:28 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2060f.outbound.protection.outlook.com
 [2a01:111:f400:7d00::60f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6486f62f-dd00-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 11:15:28 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7627.eurprd04.prod.outlook.com (2603:10a6:10:208::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 09:15:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 09:15:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6486f62f-dd00-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gnjJmB3R4AAMYUiiu7B3CnLfvFtdogY01D0Qy2yXdlJ4FeIrftT9PH2Z0K/Q1k1eQcuSGKXiA8HItqJuI8OQOrV8al3RlkUixo89j+Ylaze0iJwDx4lzGbQnjQP7rm5QktuGoMIv15yxp/hOZwA8HbZLhw9Qw9eAjf9lhWM8Kx/dxQMIS8Jo7HnNd2iuJjIZ6PwTYJGe1A9KQQjjWw1FyRggVWClxT2FHilPzZmxbmBYjazCVQXPBPKtTixX3g1wbfuSPrGV9ItFgpt2UfqeBIdtQGnzDublHML3rBP+YRQ6lJmto89HHQ6poCSmxLW7nbxsihiOcXuTOgaK5AOtUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=p4de9NDZRmdkzw9W0oswOUd/6ehBsF9MdD78XRX5yJg=;
 b=eOu4YyOe46cJgj2yOjRfqOeIA0iWZktQW77Sviuan8yfH7zagSdpWpmgv5LMNMx8daKi6K8h9NEFB1P4Xi3zQItKCbys/Glh/1xoAIv8v7Q9oqiRNqeozIDYgKIBjTkWQnKT5y/D72wet+VJYvJcCym26pK4z7LtWOyKUA4SGpjnuirp+0y5zCS9D6knpea9vswo+0vFgmnKFtPZFLrmyw4RZWH6PTm3AnFQJ7wV1EP78auXBrQROfIKjqbTHhV4zyJBxEtVOblu2hX+FLPVWT4olP8sra5bNvaax4bwnEfwMAP6rgjasz6lDbSHiVPySnyQS8sh7Yp9xC2fGxnAIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p4de9NDZRmdkzw9W0oswOUd/6ehBsF9MdD78XRX5yJg=;
 b=T2x///tO09NUBOu2t+VlSL0DnSG6axvj2nnz3sOt8M0dfNqWCbk3K7kKtxyRcClkBqsqmpaZyjLgFEpx/Xbe5k6wA5xgQiiDTPq9aBce5BAwNP+NSX1WyToaobf3QnWVXRJlm9fIbeg9K4FO94MwVFJyqjk7FSQ2Uus3MJRTV/WGi0knGvcviqQz/mrOl3prMkzP/XkuLY9Ed7XVhqDKMReu/rS3Q7EUgBb6xIrFwFm7ygTg9nZtLGQgY6Ph8TRMa9x+lLSJhuOpYGqafi/MmQC3li9E4hd09VooSPSPhMXT51pdM1en5m9UXQmyS1QAgtXbH56KaadhCJL/JUCKCw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b39f263b-1bc5-89fb-6377-f2a45722fde9@suse.com>
Date: Mon, 17 Apr 2023 11:15:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [XEN v5 05/10] xen/arm: Introduce choice to enable 64/32 bit
 physical addressing
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com, julien@xen.org,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, wl@xen.org,
 rahul.singh@arm.com, xen-devel@lists.xenproject.org
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-6-ayan.kumar.halder@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230413173735.48387-6-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0012.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7627:EE_
X-MS-Office365-Filtering-Correlation-Id: 411a765f-1ad1-49a2-eb8b-08db3f24473c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	DmOPhrtV11Lzl6WZXlAYsaBqV9TrlvasLq06oTq5Du7Ps4FqTJDiDoRIcQYWH3wPtglb9F9N05aXueMRGLCgjwl5RrUHXWJNAXpRl0vLDx2LniQxiMEy1osrVAWZf9nGk2KypMEYNKGeOavufAIzDWPCQcOAyaNjcmMuoLbYIHEGdoiTh/2AP1QDFbwUrxg1gpqd/9FVakz0G+Dv2B3h+oENGac3LpBzrRLBKuxRCiesiQ2mY4YTqxlaIHWm6hDYm0Be+4yEHEUYFpa+uGMRIVi6g0zum66/wews07H7TdRblUR+Bzm3oQDYZopq9lRKERdS0Xm1KJ28623zjiHhTaeLHAKcDL72cFD+bHn+eIwpMA8hpWUL0FyS6D6/oxOcLwXzgvch2DyTAY7g7UUwQl/FthMOYqyTvUhHNyQFL7cPUd4gNw461eJqaKvUr/EBEmi4vBoT47GA06m6IyHdWnAPsPAAZJyxEePQ5TUXHHA5Vo4qWlbi4R/Wc+xCY8dU/XdskZbo+q7dW5m1Lvitf/PLc1SVUq5JRpcvXBPYfRX6OV0gC6+wvE+mqJD8cs1jjqIncQJPFjQt83ThAGfTwDnIP81xVNdRFhA8CTTiyFKxpmAyqxnypjt91EkhQjW3uzPsmoXqg7YiGbY4Mre3Hg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(136003)(346002)(376002)(39860400002)(366004)(451199021)(7416002)(5660300002)(2616005)(86362001)(31696002)(53546011)(186003)(26005)(6506007)(6512007)(38100700002)(8676002)(8936002)(478600001)(6486002)(316002)(41300700001)(36756003)(66476007)(66556008)(4326008)(6916009)(66946007)(31686004)(2906002)(4744005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZWo1dzYzZjcxaXc4NmdrV3cxbWh6ckNqQjN0T1cwN3NWNGlOd3pNVXhlQ1ZH?=
 =?utf-8?B?TlRFb3RkZGpSd2EwanNwMTdLU3I1eWQxTU0zNDFpV2kwTDRYREUvczZydCtO?=
 =?utf-8?B?MkxjQmFtNWN4d3NHMWV5NEhLME13VUdJM2xZVzhXZmdWc0pSU01senQ0U1c3?=
 =?utf-8?B?MGNpV0ZXemNiTHlXUlpORXVmdTh2Uk92ZzUySXAvYktTREp0WXRSQlFmZy9N?=
 =?utf-8?B?M2tETzA4VEQ3UXliQUttMEJhR04wRlQxdDRsRDRFWXkvYUFuN1lnWk5sR2tG?=
 =?utf-8?B?Y081UTN1aURRL2hZQTc3T2MxZ2Q2M2RDaUlNZVFQSUJwZzE3QitaYUdzK1Bt?=
 =?utf-8?B?TjdKQTliMFp4QXl4ajZ6VVU4MnFlbURKSkhOMzEwR1Ivdys2TGR0eklWZEhj?=
 =?utf-8?B?Mkg4a2I5TnhPSWxTTVA5c0ZwdVJJbEdaUGJneWd2YlpJZUd1V2pmNXdvYndy?=
 =?utf-8?B?RmpWZGlITTJZQzFxRUN4OCtMSWVWalR5VFFrYXJGVk5VSDRhaXZWR010WEZF?=
 =?utf-8?B?Wk9Ma1FrQ0xXRVZ6azJTR2dwdW55d3kydlA3S3czZVllb0plek5YbkEvbEdQ?=
 =?utf-8?B?QVorSDNlR0lYYjlCUTdSNWpCbVNUN1lxdXhUditBVW5NcmpXRTg4MWt1V3dq?=
 =?utf-8?B?VlJ4RU03MitySHAwUlBWVW5aKy91aGNiZ2lIQjk4VkhyUHlnTndMc0tPUDVv?=
 =?utf-8?B?MG1wN1pucUVOMDZPUnR0T2lldDVyUWViV3F5UGVOVDFWSFVwamtmb2ZUalY3?=
 =?utf-8?B?ejV0cEVLRXZ1ZGt5bmhjcjdWbW10Wmg4VmdVWTkyaGk5RjZSTFRqL1dSUUxL?=
 =?utf-8?B?dkxpbEpXdzJjSkxzdUFoa0o2c1NuUk50UE1hYjREZEE5cm1OV29sUTFqaEEv?=
 =?utf-8?B?eWZzMEFDSVJnZjhwT2VsUURuaFU2bkFwQVRNWmVoUFVxVjFNYm5GUzIwZTB1?=
 =?utf-8?B?UHJKay9aeU5tOEF0QUxqOEVnei9aM1c5cStnekhWQ2ZhS0dDeTBtZjB3UThN?=
 =?utf-8?B?ZDlqdUZYUk0xRlVHVnhWUmpnYzhTcXdFMzA3cExjYUtTQUM4QmxBamFnRDVh?=
 =?utf-8?B?Ymt3Uk9CS01zaGtFRzV1ZFFhQVRDbWMrZTA3TWJsRmxROUpKNU94Ym9HSmxk?=
 =?utf-8?B?ekJwbGhLaWE1eDhJLzhjdWZqVk9OKzhVOUo2d2FOZnJnaWFUS1p2bmRKRWVC?=
 =?utf-8?B?VEsrUHVac0pKOUErdTlvd2t6REErSEcvV3EvRXFXWUZjbS9Lc2xOK2VFK0d4?=
 =?utf-8?B?TCtjU09DbVIxZFpKV2ZVT0ViMW5ibGhJRWJzNWRpblhOVHM5dDkrUWpUTlB1?=
 =?utf-8?B?L3pJN3h2K2pua1RrbWV2RDA0QmNNSVArZU01VFhUTm1XQkNtTlNzWWhJaERO?=
 =?utf-8?B?cXY3MVJsU0M4a1VYeDNtNVVxQUJMV1l6eWFIcjlaWnNzNUF3bGZZWTdjUUVa?=
 =?utf-8?B?dC9sUlRLVzUvZUFtdk03THFsS2hiUnFTRG83QVZpdWtnVzNHZWtqcEJiNlR5?=
 =?utf-8?B?SkxISXJFeFlZL28xVTk4NlBvVTFhdVFNL2FLM0RSWlBRUStRcWNEU3hyUHFU?=
 =?utf-8?B?dXFwYUZxQ1dmRENjTVE3aDlaMEozVXhIWHpJN1hvSzdFWDJCNHVHT3pTNVdk?=
 =?utf-8?B?eVRHYjdjK3Y0Q0lsTVZ5S01TQTgzWkw2Y2lQN05INzhzdmNrUFQyVUxRVVBR?=
 =?utf-8?B?SlpsYVdaMmtvQUpKYm4yN3RyR1ZPZU5IRTJuOVo1aElrNmwvUVB0dTZFMTVD?=
 =?utf-8?B?QTdGUG51aDZDdjhXU3hQS3pCZWRCclcvQzhDSHFPSEJXdWk1U3UzMENVL3dZ?=
 =?utf-8?B?UHhaVkxoUGNFaXoyVUJnNE16QitRNGhlWnNxeHlvOFl6UStkR3NGZlNPdW9M?=
 =?utf-8?B?Nzd0VmtaUEhOcml4VmZabThOQjltL0UvbitGR1RBL3ZtVGkwcXVvdzdna29t?=
 =?utf-8?B?SlpSb0pHUUpVTVkxOG9CYUhTQmhNbDlTS2FJMkROaklLTU5Sc1Bsa0NZTHNG?=
 =?utf-8?B?Wm9mNDNleDIzdlRwTXNwZm5wdmRaSzdwRVRCVE4raW5IdWgySHNMWVNzYm1O?=
 =?utf-8?B?dm1RaE5tS2NCcUhKSE1TeEZPZmRDYWhXemQ4TFdhMFh1NUwyamFXV3F5UCtI?=
 =?utf-8?Q?WL6VP3dgpZMUt2iNbuh/YxMVt?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 411a765f-1ad1-49a2-eb8b-08db3f24473c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 09:15:25.1286
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SUoRQKB2imTyclaf8A05YH0fIgtV3JpU5yu2FieMpmLzgWA5ustSGal2mgsQlyGwOorcroJFbzq+6yox3XmhIQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7627

On 13.04.2023 19:37, Ayan Kumar Halder wrote:
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -19,13 +19,46 @@ config ARM
>  	select HAS_PMAP
>  	select IOMMU_FORCE_PT_SHARE
>  
> +menu "Architecture Features"
> +
> +choice
> +	prompt "Physical address space size" if ARM_32
> +	default ARM_PA_BITS_48 if ARM_64
> +	default ARM_PA_BITS_40 if ARM_32
> +	help
> +	  Choose the width used to represent physical addresses. Selecting a
> +	  smaller width can sometimes help reduce the size of the image.
> +
> +config ARM_PA_BITS_32
> +	bool "32-bit"
> +	help
> +	  On platforms where any physical address can be represented within
> +	  32 bits, the user should choose this option. This helps reduce the
> +	  size of the binary.
> +	select PHYS_ADDR_T_32
> +	depends on ARM_32

May I suggest that "help" come generally last, and preferably "depends on"
before "select"?
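Concretely, the quoted option rearranged that way (same symbols, for illustration only) would read:

```kconfig
config ARM_PA_BITS_32
	bool "32-bit"
	depends on ARM_32
	select PHYS_ADDR_T_32
	help
	  On platforms where any physical address can be represented within
	  32 bits, the user should choose this option. This helps reduce the
	  size of the binary.
```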

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 09:21:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 09:21:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521878.810855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poL2p-00017j-3F; Mon, 17 Apr 2023 09:21:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521878.810855; Mon, 17 Apr 2023 09:21:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poL2o-00017c-Vr; Mon, 17 Apr 2023 09:21:06 +0000
Received: by outflank-mailman (input) for mailman id 521878;
 Mon, 17 Apr 2023 09:21:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poL2n-00017W-Oy
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 09:21:05 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2081.outbound.protection.outlook.com [40.107.104.81])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2bff6cee-dd01-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 11:21:02 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7627.eurprd04.prod.outlook.com (2603:10a6:10:208::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 09:20:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 09:20:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2bff6cee-dd01-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kloHCBLxqUfpZrJinXySg04CZ+JEhlk7/NNmo+wAJT0mQwf+RLmWKXPGNy99b/x/JEaFV8eMplZi5BwvhyWl5rEx4AZ2ue/Jbcs9reWM5i3q4rPJ9mfujTVJ4QQJhoh03yy5ajrURZWUodJ+E3QzchvCkeIkJxLmSopPSp2Jvxxrs+8Il2CP6QOGzygaVycuFrPBjk4bx6Hh78LiLlazKkFvfczMdFn9X5oV7K4Q9vkQzlXkFdiCik6MIMw0VkNhAnWSVrFgtYy4JxwJAvC2eY3afbWDo+d1O0ume4UGVIqqgk7Uyae+eRfl26+LGXiltzD2kTj799vJVrHS37AldA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Q18hF30gZ1xNEQqzIPg1e63xr7UG9az3QdjMvpHEdU8=;
 b=HHo2aCAwUAa+WaRzRQAqGgqhMx93zxpmd7+nXP4F/ClzVv35Q3pQijCdqhTwceK14me88XxCvKkTKsz89Y/wSkcUXlRf6MiiXgwcSn7TBZE3AmZepVF77wphKVpgQkT2gLgZJjcR/GcuTb8xMsuxiMtMzux8oA9gHoZzsGJkXAb35BeVMWiOq2b76rubjMDYneoMMFxcbQBQIacRAgeU9lVxzDu6d8xwdLhNijjSd8nIfCERaLyVtVn4ibYjjyRc7UWk5+6ZWOSVHo6kURIocdS6wGgnEVGHSsUBWw4SG6gt8JmisI2Z6SuqgtOVWEOjH2mwi1Gv5koJOjRVHngDWw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q18hF30gZ1xNEQqzIPg1e63xr7UG9az3QdjMvpHEdU8=;
 b=HEvzlo84CzhWUxlFpdsLjguF5vcWPgg8Iq5FFPdWki/YXyQX+eJuhT3WNlDKxIJoJV5r8gXxFirMIdGatf2BP4vthsvWx5oRvzu87MGo3eDuVbU9SJOuw0b3PoU5v3gCo9paJMtGi+dObnsBAhe7pzM5V2mSni48M3QAMLPNSYdIGITRqzpki80g7FO+5+VVZ3/h1D6Sebk+tjc5aaQb20Wwvxnw/fqy3yYusKNqMd+3AH380Re6jTPRZ5YQ8laD6CAMATLYfVezPmcXqnT+10oHmCs0veKOxOpisqnjgT5/u5jEenI938BEZFO5lgFWzHlFdG3TNgTLl+Dkd9nqhA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <56efad28-cef9-ad68-d246-5221f99a6074@suse.com>
Date: Mon, 17 Apr 2023 11:20:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v5 02/12] xen/arm: add SVE vector length field to the
 domain
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-3-luca.fancellu@arm.com>
 <9486E559-879E-49AF-9145-B929A8EE9301@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <9486E559-879E-49AF-9145-B929A8EE9301@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0038.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7627:EE_
X-MS-Office365-Filtering-Correlation-Id: 5c3a5b86-0150-4cd2-9e55-08db3f24ff1d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	E164o7ViXs67HxDKnDT4fGmT7xfxXcX6IGxh884iaEgsOVp5VQU9cPkZUvboJ+CmL4AQ9PBthcU31AYuDUIBDpOd8XANw/RLKwezUr3AP9P+tns4jWDFGmaUEdxZwbRWwde1XAzFQE0KiaUGNTi/iluwoNZyA/qfvVJIBkBKjEjHOFpTbxRXFwE0TxjlijPYpuEj2ThiYEZz/QAcTC7a8QRi63TvCOxzwJnNLudJsyeQPTG6othHlChU5AvePTKpIkh7y7qdAnmTTlgGFfmL/mGfOEHUqeu76AcP4+sC8h7EmT0h4iEP1ih+zpZbKWjRGr3Nzkn9ltgpGY8gDL2JWaWq3Gtyg2cPtLJctKTpzHxJjs48cPeDbmRrinkaxYe/f29TKr1zwMm5V43vui4S463+a9X7AlwTvyLBgZ9U8bNAieZun18VLkRI/kOz0nZaBSeApQLGA+x42YjiHXD1TIkUm2tu0LuPvcG0rnlttOYMUvsQqK/ZZuY12NMjVH7zF+HKwMG3B9WgmWK8i/LIj/p8XbkQdiiKCyfD8DeeqWHP5QwHh7vSSNVeeQlvZrE9SMfVvUNtG6MyA8mK3HpiATU3uG/mTRYKghjucNvwqpbt6t1Up9m9uuwroj37ehu0XpzsCfNZL1PmNNF0nOw7FQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(136003)(346002)(376002)(39860400002)(366004)(451199021)(7416002)(5660300002)(2616005)(86362001)(31696002)(83380400001)(53546011)(186003)(26005)(6506007)(6512007)(38100700002)(8676002)(8936002)(54906003)(110136005)(478600001)(6486002)(316002)(41300700001)(36756003)(66476007)(66556008)(4326008)(66946007)(31686004)(2906002)(4744005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?d1hERUp6QjhDNTVjQittZ1pQcDcyTyszOUZZb3lSU0h2am5paXhTcS9PTkRx?=
 =?utf-8?B?NmlaTXpNdEJMbjc0YzFDYXhyQ3NwTU9ndUF1MlpRbzA3eU03bWd1RkQ1TUQ3?=
 =?utf-8?B?cHhmNkJDZGJycWNZUTB3cDlEaDFTLzZGUlRIY2R3dUIxQS9STDNCQkhPdnZs?=
 =?utf-8?B?VERoZWpkYmJDNWNBZDZzUE1iWUpJSDhKZTFPcGl0QTJ2VzVFcU9oOFNmSUhl?=
 =?utf-8?B?OXQ3akY2WVR3N2o5ZUpkZ0dLVlFNcXZOelBuN3VCYm5Td1krUW9RemVKVVZm?=
 =?utf-8?B?aEw3Vk1GYnprYVhCOFJUbEs4d3Q4VUxNcDcwenVzNjdQN1ZWaFZnTlFLdklF?=
 =?utf-8?B?QUd5L1JzVTdvNnUrdjlJaEZhT1QyWEZJeWkzUEQxZDBmVnBlVGtCS0d3SFZI?=
 =?utf-8?B?ajFWWmxpcUttck5IYWdUVDFmcEZsbXZWSXduSFI0RWJ4UlExNkNmVWhhUWFI?=
 =?utf-8?B?L1FjNUpBMXVzdHdyUk9yTFl6TlJSRGNJUTF2R2U5ZHlpay9xeXV2SWVGM2l3?=
 =?utf-8?B?eG51RGxKYjE4bEJVb3hMT0tUS3V1VTFBVUZlRElBL3E2b2RiL29QMmpoaWYw?=
 =?utf-8?B?YkJVeVBNUjBVaEZjU29rUUozNlNDVGJLWGdrSUxQODI4VmhydHBsd3NLNk5J?=
 =?utf-8?B?ZCsvc3hTTGo2Q0RBbWdaM1R0L3RiS1R1ZEorKzNVVHBBc0Vxd3M1Z0R1U1Yy?=
 =?utf-8?B?blR4ZHA5T1RDMVZXd3BURTJPb2Fmd1N6clR5dWJaa00zVkFEUmZUeEFVWk1i?=
 =?utf-8?B?elpPdURRQWJxMm1YRVRRZ2FaVkd4ZUJBcnJpQzYyR2gxamNxZC9hOGdreFAr?=
 =?utf-8?B?UmovUGdOeXlabTl3MUR4dkNaKy9Lb3FtOGZYZThVdG5xQk1FV2tzamRaQW51?=
 =?utf-8?B?SE1yWkZ5bjNteHltb1lZYXVOMldMdGxQMlg0b2NQNUxIMVJWTDN6bk45NmF5?=
 =?utf-8?B?U3ZOZ3BMU1pMTnpCT0xiSEVqMWFtL0pFZTBDVThrZCtQZk1sN2VmK3JQMEpE?=
 =?utf-8?B?S2VaYlZtK2RkSmw4dDhYWEE0OEJldmliSG95ek9XRXJ3WkV4WFc0cG02cVVy?=
 =?utf-8?B?S0hTQ1FMa2FwT1QwbnFnZkw5WnQ1a2o4aG5xL3BFVjRDdjVhREFwZGJuQWR2?=
 =?utf-8?B?Wjlla08wTHEwY2JmcE85d25VcWJJQm1CTE9CN3FNMmtGQ1lsM3YzNjJnRDVD?=
 =?utf-8?B?bEVVUVd0cFFhQ3h6ajUvNG1ja01YNmZYM1ZhWVJHejlPa2xlRSt2N24yd2JK?=
 =?utf-8?B?SHNzcWdIWm9rSUZRZGFSUVEyaTc0WTROT2lsTFZqN2ZKWGpIa05Gdm5UK1dl?=
 =?utf-8?B?ZXhwYVR2a2R0elNneDdTc0E0cU01UDYvc3o3Ui9KVmJwNjlFMlhEVmpjQ2pZ?=
 =?utf-8?B?aE1EQ2xBWDlQRW9TUjhVVEdjM2gyZVhpc1NSSkMxQVBGb0J1UEhpdlR1VEtx?=
 =?utf-8?B?ajVxdklGOFZON3pjU3hRS0ZJUFRVYWZITDhVUkZDcE1zVjdoQXhYZ1BSVHgw?=
 =?utf-8?B?SFgvVksyTTBFc2JkbFdKbXl4clk4Vm4zN0NOc2JvRzZ0U0krREtZeVVvWTdo?=
 =?utf-8?B?ajMrcDRHN1p1M0dSWkFpeHVOK3AxYmlrMmprTVFhSEljRXpMVStkQTVOR0xZ?=
 =?utf-8?B?V1drTEsvaVFKSVZQTlBYOGpFRmtkcEQyZmVycGZGamVra2RuWHR4WGhhZUpo?=
 =?utf-8?B?bXZJTUtTMjVoZ1M1MzlyeEExM2RLdW01YUZWUW9CSVE4cjNkczF6TFlwdFNp?=
 =?utf-8?B?dzVyUHBXYmFwLzBtcUVQYys4YWNWbWdlQ1htNU5LektmUmVtSFB5VlordE1X?=
 =?utf-8?B?R043dFlDdFVCb1haVTlqWWJrWFc5VlhFVC95RjVxTjNaZVJ6YzNlN0FIVE40?=
 =?utf-8?B?Qlo5ajYrRDVEaVY2OTgxeTNMRnVxWGowam9uQXNkNk1ybkc0eEQ2NzU2NWw3?=
 =?utf-8?B?blpNb2FaQjRmK1lSRmZJWlNjdE8xUUF4MnNuSnZIc2FGQU5QUzJad0lvejdH?=
 =?utf-8?B?cEZ2VzNkd3VpYVRxTkZ3QjdmY0VnQ1Q5MWtOUVJnaTd6NHlJVW02dU41dUhL?=
 =?utf-8?B?Z2h5em5iZGZDaUNtbEIwYTgvS00yUDcrQyt3dFFNRDB4OGRRenY1bmdzZHpn?=
 =?utf-8?Q?6gkZJmfYSQNkwFR91rv1oiF3N?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c3a5b86-0150-4cd2-9e55-08db3f24ff1d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 09:20:33.5881
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: znEU1PpO+nXgDU4JyCGkhw6xBD61iBiGKPqvcE9RjGROAMrORCaHIFqaKaVy/QSUQkaOTKH0ERDLsF/yT/1TPQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7627

On 13.04.2023 14:47, Bertrand Marquis wrote:
>> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>
>> Add sve_vl field to arch_domain and xen_arch_domainconfig struct,
>> to give the domain information about the SVE feature and the
>> number of SVE register bits that are allowed for this domain.
>>
> 
> Please mention in the commit message that you are bumping the
> domctl interface version.

In which case please not just "that", but also why.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 09:35:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 09:35:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521885.810865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poLGH-0002fw-Ai; Mon, 17 Apr 2023 09:35:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521885.810865; Mon, 17 Apr 2023 09:35:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poLGH-0002fp-7f; Mon, 17 Apr 2023 09:35:01 +0000
Received: by outflank-mailman (input) for mailman id 521885;
 Mon, 17 Apr 2023 09:34:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poLGF-0002fg-GC
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 09:34:59 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1ced5294-dd03-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 11:34:57 +0200 (CEST)
Received: from mail-dm6nam04lp2048.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Apr 2023 05:34:49 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB7159.namprd03.prod.outlook.com (2603:10b6:510:29d::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 09:34:47 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 09:34:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ced5294-dd03-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681724097;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=CjiwO2LbqKOgJFR5GfspgFmo2PSkJjR8xbepR1QYH08=;
  b=XnIz4nGRg63n3NsYIFwREbZrS+pbRyBGcG5TfUCFqZzcnPIBKQ3XGBDP
   VujoETSbZrCWG0cP/imzkvOl0x8WuAeou1jLwBydZEPLvc2Fbuz6VV0XZ
   t9DTKOWcUASHXDTmPWfCvIblD2fFOZvxfAE4Ds/MoM5ONrsaV7f33Vs3h
   M=;
X-IronPort-RemoteIP: 104.47.73.48
X-IronPort-MID: 104562965
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:yO70BaDF1lBPlxVW/xPiw5YqxClBgxIJ4kV8jS/XYbTApGh31WNRz
 GIeDGmDPvmKYWP9KNB2aYXl8k0Hv5TSyd8wQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G9B4ARkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIwpMJWK1Nt6
 KEhCTU9TU6unuG/5oykRbw57igjBJGD0II3nFhFlW2cIdN4BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTK/uxrswA/zyQouFTpGPPTdsaHWoN+mUGAq
 3id12/4HgsbJJqUzj/tHneE37eTwH2hCdxPfFG+3vFL33DCmkEwMSExahi5nvr+2nK6f90Kf
 iT4/QJr98De7neDXtT7GhG1vnOAlhodQMZLVf037hmXzajZ6BrfAXILJhZjQtE7sM49RRQxy
 0SE2djuAFRHr7m9WX+bsLCOoluaOi8TN2YOIzAFSQgt5MPqq4U+yBnIS75e/LWdi9T0HXT8x
 m6MpS1m27EL15ZXh+O84EzNhC+qqt7RVAkp6w7LX2WjqARkeIqiYI/u4l/ehRpdELukopC6l
 CBss6CjAComVPlhSATlrD0xIYyU
IronPort-HdrOrdr: A9a23:pEyRXa0f1SfiqsBcD8kioAqjBI0kLtp133Aq2lEZdPWaSL3gqy
 nOpoVi6faQslwssR4b6LW90cW7MBHhHNtOkOos1NSZPTUO2lHYSL2KhLGKq1bd8m/FltK1vp
 0QFJSWZueAa2SSTvyX3OB7KbsdKRW8n5xATN2x80tQ
X-Talos-CUID: 9a23:EmoKXmOSU28pDe5DBg1G3nw3B50cU3jt7yn9OkyxL1h2R+jA
X-Talos-MUID: 9a23:/bUa5Qk37R7FcJWeU5u2dnptFZhFsviDK3kxtqpFufaHcnxMPxOS2WE=
X-IronPort-AV: E=Sophos;i="5.99,203,1677560400"; 
   d="scan'208";a="104562965"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JlP52K8tBWrd0CRfEEdtDO/fGUtV/jqS35HN2m/7Ywv+B00I7/2MMn9FYrlhsoxr2WPF9tTEMtHD7VLya5izjacnhB0kLAoaLIwaE+2RAtR4LxqjWL4tanZQabn0v4YT3Sx6AgcSmu/46xdbKnudgpkR5Ux8TFCNmBGrd32Z0tqrwF6b0E9VpulllfT54pZxbQ5GA4vsNoyHT5/Bp9VTN1Ot1nvqHiiuQ5YGTi+dIdD197nk5Os/t2f3karWZztP8JlG1cPV+QXmFJMymAkUe/+FCKiFIASgBim12bDX7lNUX7z6q2/IyP9ou3v+gEcDPzurJGHFZe5XYPlPovkCDw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nGDmXLLZfnd90PTseAF0OPUHPoRFvm7qxvzmE5AZKkY=;
 b=n1gCg2RpKv072Vyrah2iLPjRpnWUEAmf+IYcFVvYFyPw7yKr0NAY8h9D0X9J+F6vwB9BPTWX+YtXhNnhs9a/HBEZce9TCpNmwxp0FDYQWa7hG6Wjrle/hMsGJpQ/GGZmnoaAnk+5i7W6++YvVdwSytqJVxuztRHvJzX1dEHyjLXGCVdWvZJP1Uu+22w9soJpeu0QrDXLjDfcOAwVBHg8P0CBvxbqIvKOx6Q3v7BF6e+E/9z837eM8iTC5fpHvzDFhm9nu2pLn7U9gkz4RDmWK+/VL5igJMkKENLlBVIM9qeJ7+bvICm4Hpr88YGt8Eqt8QGwrVfkI7UohCeyPh4fbw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nGDmXLLZfnd90PTseAF0OPUHPoRFvm7qxvzmE5AZKkY=;
 b=aly1j86dJu/M1oBxXcjGWdPUCRnDh/SFbn80dy7z2z3u5W0D72Cb5GWdTM6J7zaQ1DjMTXoJ0ya57yy4McBQrdzdCE861I+0TgRLtMU7+xaUWl2gada3WpvfkDy8TKSOF0KduVblhj6KeGfgueiMkwzIGdDBBsG8EAO7N5yj/+8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <a8f841c9-9a09-75ba-d8b6-c8924074a210@citrix.com>
Date: Mon, 17 Apr 2023 10:34:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-GB
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>
References: <20230415195816.3717648-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230415195816.3717648-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO6P123CA0029.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:313::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH0PR03MB7159:EE_
X-MS-Office365-Filtering-Correlation-Id: a7ca0a94-68a1-4f45-b7db-08db3f26fb8c
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	wKq0C0DZAq0f2+oGVISahTh8VsqdvG2shZqv84OJRonPk3jtdeWJXyLu5tBDGd1gfXkIrSXfRcW+yNGGd1pjTDyPinmTyzvQjubIntowgIaaiA5jnpYeEVZ63O6xP0ldlZ9puIGOVQOua8rUrSQtZ1+3SzNfWLLH3+gxhFfUnJ9WiiBSDN5536E/5vAVKY0v3aezrvpa8Vw6WVqrJ21bQmljUSy59L6tPFsdswRzcseiSd+eIKAQ1BooWr3OQhwIPPYvYse1mYtONri9Vw2p5ZwtNn4k4GhXTCmdeUbXevlGHBuVSMOEcfcYHErtdQ7OTRPIXP/0dGV2i8F18z5mSCjnZWfFkVSJThaiNJ7l3JOF9al57+ibRFrZeZY1Ac/CdBcasRVOWWwLl54TZ4UVlXUxFoeDBp2KWziJqj96yqZoAPN+u4TY1vPfWRfIUqbfhlevxvP0YmHNKqMN63ZI42pQ+MPAW8eCrM3tNroy900JVHoKHONnYaCBA22ISo8xRYR8QoeiL9t0yOFf5DXpaeKZ0u83plEwmOcxNPH4LXAa0ifhP2QYuI04QqvfcG5RFrcJpoTKSY7bhKy1E6ZrxtzwECLglWOogg/U+w05ReLwwMgaaT4186oJcZAGTDIkVfwnW0kqqT+thGD8SG7ZwA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(39860400002)(396003)(366004)(376002)(451199021)(38100700002)(8676002)(8936002)(5660300002)(2906002)(4744005)(36756003)(86362001)(31696002)(478600001)(6486002)(6666004)(54906003)(31686004)(186003)(107886003)(2616005)(26005)(6506007)(6512007)(53546011)(66946007)(66476007)(316002)(41300700001)(82960400001)(4326008)(6916009)(66556008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?N3VMRnR6NHJkSjJ2NFE0dC9NTzNmRmFOVzF5T1hyWXk1d3BiakNIVXJkZGVV?=
 =?utf-8?B?ZFQ0L0Erd3R2NjY1em12L0ZUNmFjMGtiR1BoSXUwM0k2a2RSUGQ0NU8ycFFx?=
 =?utf-8?B?TVY2Z3VoZC9VM2FRQmpxYmNVcTBSTE1uWHc1dkRmQkora1Nydmcrd2VmTjIr?=
 =?utf-8?B?ajNCZ1JBK0dobnNhZzVqNG1NNGRIUTkvUUt3aWYra2llWFg5ZXlSQnJyWWl4?=
 =?utf-8?B?NHdHQ2JoSGcydHMzclNHaGhrb0haemRSUWdJbVo2OUNQZGpsYnd5QWRmS3BG?=
 =?utf-8?B?eS8wWTk4SlZkTE02TnJMMjhJZzcyZGdKUEZqNllNN01jeFBQQ1FzMFdIZldZ?=
 =?utf-8?B?T1BhTUJiM0pQYzREcUs4ZUcrbXUrUzRZWkFBSmJVbDZLUTlnZkpHQithejVR?=
 =?utf-8?B?N1paRnFIdEFPbWZYTGl4RXFxSFdtcDdDVjBGUldtT21wN010em9RTm5jcS9X?=
 =?utf-8?B?YUFIWFhSNnUvbXd0d1ByYUQyTUkrOHo1OENHSkdDN3Nid1BYTU5ya2lnaVZR?=
 =?utf-8?B?RU1jOTZvbEpFM2NINEd3SmZkRXd0UmpYTFY5cWxwc0R6SXYzZDdVdncraWVl?=
 =?utf-8?B?WlVld255YktwTXU3Z3dZbjQzVGt4VEwyWm8xc3JUZ0tuOGkwZFpYL1NlQTFC?=
 =?utf-8?B?dzhNU3pmcUhmZG1zSjZqenU2UW1xanpUSGEvV25XS3ptYVFPenUvQVh3dGlj?=
 =?utf-8?B?ZU5vRGFqLzZPSXRLQVdycThwbkx2cXcyQkFuNHEyeUVYN1NCaGZwaS9mS2Jq?=
 =?utf-8?B?TnA2cFdJRkdkenVIN21oMmFjaGpwL0o1ZkpldVpJOUppbC95eEdhemdoRFdq?=
 =?utf-8?B?ZzVyWGlWYjg4Q05iY25odzVFb09kd2JiYngvMlZTZUVFOG4zK084Q3Zkc0pj?=
 =?utf-8?B?aEd6cDBYcUdYaGxuczFUVWdNZHZxMG1kck5xU2xnK0thTUErOWZkWHpnSXhz?=
 =?utf-8?B?NEkxcThRT09GeFJNM1pJRmNVZEFRL0pDbXdudmRSUkxYQXRRdGNjWXFVa0da?=
 =?utf-8?B?S1dxWm5iQ0RLUVczNU9VdG15VkdpTnczc0xPQ1g4VHoyb1RSSDdYNTYyN3pZ?=
 =?utf-8?B?cmhwcy9CdVlkWTFnUU1wRVFFaitKbklBUUNONCt3ZVc4YVBhRTJBOHhMSFlu?=
 =?utf-8?B?NHNmTEVMLzliengxNXQ2cmprK0Y1enR6Yzd2c1V5a09vSmZTQmRKV1JTYUxH?=
 =?utf-8?B?NTE4Y1pCMzFGdGZldVJ4VEs5dXl5RndoYVhhZ1lLTzhkWVF2RFZEek1sYlNp?=
 =?utf-8?B?NkRFMWQrWjZiVFpFbWVMb2ZzOGM3ekRBODFwT2xhcGxTMjJJbG9sNnhWbmlT?=
 =?utf-8?B?UFFmOG9IK00rRHkreGpaUUpkV0hEZDZ5UHgyWEVNQkMzMUd2T1BBN09TVEJP?=
 =?utf-8?B?SkdyU2tOY1ZjeG9mZmZRYjRmVllMcVZtMGYyK1JVUE1MUzhSY1BTaGM1VHE2?=
 =?utf-8?B?bjFzYWpPTUU5aTRLK0UvU05RQXExZEZ6UVlIYzcvZE9LS05MTnJ4ajhzbVRw?=
 =?utf-8?B?eFNBRUpyRVhPaGhOUDhmanRYWElWZzNNM2pjMEYzazZDcHRuWUpSTEJFZHB0?=
 =?utf-8?B?a28xMTlJUk9kVDB1QUd3NGExRTJrWDBLeVZsSTlEU2ZycXVGWmdPbmlldFMx?=
 =?utf-8?B?cHdJbHJHbStPR3JxOTJLY0hUV0l2VHNvTmh0YlBEK1BLTzQ1SUZvd3R5dndu?=
 =?utf-8?B?WkJhUVBRSHUvZG1nM1RoSVE4K3J3cnFPcnQ1ZWNrZGFibHltVm16WithWnhE?=
 =?utf-8?B?NjRTVmUwNjJiVVRiUFJraGhvRXNQZUptRkljK0tOZE04UG1RZms0cEExRzZz?=
 =?utf-8?B?SytVUGljSVppMlFVczEzY1dOUEZDbXVjZ1h0cG1YcFM2NExvSTBpR2UxbjNs?=
 =?utf-8?B?aFVEV0V2ZU93SktWYVgxd0lyS3ozVzliOGtEbW5pQ1lCNklwWk9OblNUemZQ?=
 =?utf-8?B?UXJ3NFhCN0RMQmU5T3VsYWluNUlJM20xMTVaMEFvbWJSaUVRdld4S1lKamQ1?=
 =?utf-8?B?dUlEOFVlWU94NzNseW1IdlZzbVlnbkFsTUFPL1J5bkdjZFUveFdnbXd0cUhC?=
 =?utf-8?B?cE1SL1ZXbzJROFBsV3FOZWcrSVlXemU1Q2ZjNm5yM256dmJDMTFlQWtsRVdX?=
 =?utf-8?B?UVBpNW1JSXNNQmxiSmhpeUtYZzVXZGltSC91cVhvaWlQT3NPRmtUNXZ2VVA3?=
 =?utf-8?B?UWc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	nLGpOphqxtYHah4rmmPfbIQsEWJvLSnkiYeHUiqBGqtMgCJsFViIIO79FJIX4FCJ9Hp5iDARgbUJST5MAB817fHroW817sodd2O3irhKRWQMeODyCeXKr+/+hohvM2ViEoaHOCvq+ksOfKulgHpTxEYbE2V4pl8BkzV9cumd6zIdG4Y67weqTfuP1ZVTxc2I5L2BqYWfQHkvIf6DYjR4uOFWnMxGOEZG6wlD1iTi9nd3lTDcZYirk1RXZCFCDfcMjwx43ysUXm+EuuZBQJT6HFGksQfJ0FxAAbcnQmAzpsJL+ValQrDkGvpZFSXkv/ra4NlhOTFiJKoJF+gKm4pQ20+A6E7sQ4RVCP3QylDUxgZKSU/Nb38S8qUD6C1gZf8tSaKCW3i0KhgjCRhhLxr1CJfJCgiHpmzVBpZQjtywA5spvx0pHoTIToGkeNJsYiqtZlrmfYGGHHciExNmOLPA8teHgfX7rCkxa9M04Jka3eI9veEEfKP30cqrk5L4mcce/hFZ+KcFqx+F7ikvjvcHo5qLXPjte0bQ7SYlPUqvP9PA9JC9IRQfi9D41CPnJMEq+dhlNWC1mpBIsrVnjTe3uOvqOfU8hdGSWXbe4Phq6T5BmUher3+DxTtDPhnNHlKcrwlCknyvPHeN3PPaaP5+RWqpY1UgDqNaRwP7XrmDI1CdTQmXWj0L7QFwucQhwdpj9Zl9YLpSEBRzSLC5QUWVZgLVGp80E10F8+KcKQLXxpo3xwP/Gtds2L8aaAAkyRYa4anyFgDtzaJKqdOrybg3pv/mMumxZmiP3z/hgpPQn50Tft8xpIWPHB8vrT0zGHq5JOTJ+mTy0p4vvEh49cos9Z//BWXP5yge/w8PMZN8CHU=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a7ca0a94-68a1-4f45-b7db-08db3f26fb8c
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 09:34:46.8791
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3crwXTqn/xbDSvy7vqYRrlz6rKlOe5UqhDfskDY7l+02yBivsEvZFbtOVzmplad2uQ/de13Nuxrp8ejhgMiXmKeHXalgZrbQn+lvZRRqNw8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB7159

On 15/04/2023 8:58 pm, Andrew Cooper wrote:
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 36a07ef77eae..1707bcd2d15c 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5879,6 +5879,77 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>      return modify_xen_mappings(s, e, _PAGE_NONE);
>  }
>  
> +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_HAS_ALTERNATIVE)

Gitlab testing highlighted that this should be || not &&.

I've fixed up locally, but won't repost just for this change.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 09:41:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 09:41:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521890.810875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poLMi-0004Ce-3y; Mon, 17 Apr 2023 09:41:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521890.810875; Mon, 17 Apr 2023 09:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poLMi-0004CX-1O; Mon, 17 Apr 2023 09:41:40 +0000
Received: by outflank-mailman (input) for mailman id 521890;
 Mon, 17 Apr 2023 09:41:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poLMg-0004CR-J6
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 09:41:38 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on2074.outbound.protection.outlook.com [40.107.15.74])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0c0b1633-dd04-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 11:41:37 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8720.eurprd04.prod.outlook.com (2603:10a6:102:21f::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 09:41:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 09:41:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c0b1633-dd04-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Bur95Hb1q/fQwpes+AuHsehsBkDgR7yFszEILMZtlODqtl02SBIV38q51zoy8oo9eqMJPCtNdmWOfSZ6SHGxdZN/aNj5x5M/p+8pDQPsiuKEH4DF8oa3YpiISwggPxkbf8Kc9g9LBSqFq8E6qL0PgFknDOBVxYtzJky7rzDUZpiPE89JoDbVvUj1MclvjTtbTfnStV/yZ7ksUi6/ajGIS3u4pD4xUR3OYdmBiOBg6YK/99EZK9iLuNH4iC8Ah63JpJyQpMu62oQ077EK0Ha1hRpCH9LKAx/iJgyAo+XGIdINwo5UnsqKZrdgC1b5xMRzjUtY46z4xIUX67pshzNAGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DeJ9IpgG3hpzefEguhPJJZdIxX2C1mp+5Y717kuIISQ=;
 b=g3nG4sbb4Zx4ncGwIJX67w2TmJocsEraZpIHZ+BpDVPP6pF1UvH7ITsgRgkkxLUG+4dqt584ZhFUXjjYz46ZvlJ4zLsWYVD1m/AXCYxbod7LQlDHWa5FnmlJJTYzU7cr9S5bEeBT4pVA1ZkrP9fg+HlHhJYx483ne+VHbl446eehuN6kL+l9by0GSllghowrQTL8erMENir9CKzgkvN4jVp3ACPQJchh2IcPiWc4vclSW+9DV+uM+MxrVoGsC+IpfHhM/wXg/Jw23h0GZqlHyiK5CoGS2HJl/hXt+BPBul8tOS3NPS84IOYqqgl8O+HeXZ+ojtiWNCTJEbG2VlsPZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DeJ9IpgG3hpzefEguhPJJZdIxX2C1mp+5Y717kuIISQ=;
 b=JsV5ybDJaisauEUvqU6hRMJ1cMGdvUOK+2Ek6IGZr4L6XitP4wzVcFxWLTubMOvEjK9zJZg0jGnHYGh1gCOFJHDgS5hQc6+SD6K50aaLg3Iq6JGCmMKuNg7ay4xYIJe+G9DzDE3KIUL1WJnZ2NhS6e2SY19vKUAR69nN4hy6Yg8BHEs/uU7kJaLo5e4xuefZkFRUmT2/72dmPzjhOiCOEt/tI6vQmT4yBAkbEXBGLUWzEgD0rwCT7VhezGgr6xGa6KBKmJsUhQ+1G3uCFtuawQHyqnl/7Lg+INGo7sYqo9O47dKMMWOiND31aqF9Xh76mUGcC7wvMP5u2FVkRkvSMw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2978b495-c222-a3f2-16e1-ff577c7b699c@suse.com>
Date: Mon, 17 Apr 2023 11:41:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230412094938.2693890-8-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0058.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:93::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8720:EE_
X-MS-Office365-Filtering-Correlation-Id: 9d5b409d-4b92-4062-45e6-08db3f27df53
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9d5b409d-4b92-4062-45e6-08db3f27df53
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 09:41:08.7519
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MrxM/NmDEuZYc9f8V3i8olEhxDop2vyuT5Fre1zhLS/WnStljuzFHLfS0rglhQYXkY7b4DEWjZfoNIfwdgKlzg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8720

On 12.04.2023 11:49, Luca Fancellu wrote:
> @@ -118,3 +121,21 @@ void sve_restore_state(struct vcpu *v)
>  
>      sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
>  }
> +
> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
> +{
> +    /*
> +     * Negative SVE parameter value means to use the maximum supported
> +     * vector length, otherwise if a positive value is provided, check if the
> +     * vector length is a multiple of 128 and not bigger than the maximum value
> +     * 2048
> +     */
> +    if ( val < 0 )
> +        *out = get_sys_vl_len();
> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
> +        *out = val;
> +    else
> +        return -1;
> +
> +    return 0;
> +}

I think such a function wants to either return boolean, or -E... in the
error case. Boolean would ...

> @@ -4109,6 +4125,17 @@ void __init create_dom0(void)
>      if ( iommu_enabled )
>          dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu;
>  
> +    if ( opt_dom0_sve )
> +    {
> +        unsigned int vl;
> +
> +        if ( !sve_sanitize_vl_param(opt_dom0_sve, &vl) )

... yield a slightly better call site here, imo.

> +            dom0_cfg.arch.sve_vl = sve_encode_vl(vl);
> +        else
> +            printk(XENLOG_WARNING
> +                   "SVE vector length error, disable feature for Dom0\n");

I appreciate the now better behavior here, but I think the respective part
of the doc is now stale?

> @@ -28,9 +35,12 @@ int sve_context_init(struct vcpu *v);
>  void sve_context_free(struct vcpu *v);
>  void sve_save_state(struct vcpu *v);
>  void sve_restore_state(struct vcpu *v);
> +int sve_sanitize_vl_param(int val, unsigned int *out);
>  
>  #else /* !CONFIG_ARM64_SVE */
>  
> +#define opt_dom0_sve (0)

With this I don't think you need ...

> @@ -55,6 +65,11 @@ static inline void sve_context_free(struct vcpu *v) {}
>  static inline void sve_save_state(struct vcpu *v) {}
>  static inline void sve_restore_state(struct vcpu *v) {}
>  
> +static inline int sve_sanitize_vl_param(int val, unsigned int *out)
> +{
> +    return -1;
> +}

... such a stub function; having the declaration visible should be
enough for things to build (thanks to DCE, which we use for similar
purposes in many other places).

> --- a/xen/common/kernel.c
> +++ b/xen/common/kernel.c
> @@ -314,6 +314,31 @@ int parse_boolean(const char *name, const char *s, const char *e)
>      return -1;
>  }
>  
> +int __init parse_signed_integer(const char *name, const char *s, const char *e,
> +                                long long *val)
> +{
> +    size_t slen, nlen;
> +    const char *str;
> +    long long pval;

What use is this extra variable?

> +    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
> +    nlen = strlen(name);
> +
> +    /* Does s start with name or contains only the name? */
> +    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
> +        return -1;

The comment imo wants wording consistently positive or consistently
negative. IOW either you say what you're looking for, or you say
what you're meaning to reject.

> +    pval = simple_strtoll(&s[nlen + 1], &str, 0);

I wonder whether, when negative numbers are potentially expected,
accepting anything other than decimal numbers here is useful.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 09:42:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 09:42:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521893.810884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poLNG-0004gr-Bf; Mon, 17 Apr 2023 09:42:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521893.810884; Mon, 17 Apr 2023 09:42:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poLNG-0004gk-91; Mon, 17 Apr 2023 09:42:14 +0000
Received: by outflank-mailman (input) for mailman id 521893;
 Mon, 17 Apr 2023 09:42:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ncOi=AI=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1poLNE-0004gc-Hn
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 09:42:12 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20623.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::623])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1fc76888-dd04-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 11:42:11 +0200 (CEST)
Received: from MW4PR04CA0050.namprd04.prod.outlook.com (2603:10b6:303:6a::25)
 by IA0PR12MB7628.namprd12.prod.outlook.com (2603:10b6:208:436::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.38; Mon, 17 Apr
 2023 09:42:07 +0000
Received: from CO1NAM11FT035.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:6a:cafe::de) by MW4PR04CA0050.outlook.office365.com
 (2603:10b6:303:6a::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.46 via Frontend
 Transport; Mon, 17 Apr 2023 09:42:07 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT035.mail.protection.outlook.com (10.13.175.36) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.19 via Frontend Transport; Mon, 17 Apr 2023 09:42:07 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 17 Apr
 2023 04:42:06 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 17 Apr
 2023 04:42:06 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 17 Apr 2023 04:42:04 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1fc76888-dd04-11ed-b21e-6b7b168915f2
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <70a79d55-3f16-d371-23e9-e3650e47b00d@amd.com>
Date: Mon, 17 Apr 2023 11:41:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 11/17] asm/smp.h: Fix circular dependency for
 device_tree.h and rwlock.h
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-12-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-12-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT035:EE_|IA0PR12MB7628:EE_
X-MS-Office365-Filtering-Correlation-Id: 9034ea9f-76d7-4a54-dc55-08db3f280247
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 09:42:07.0844
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9034ea9f-76d7-4a54-dc55-08db3f280247
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT035.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB7628

Hi Vikram,

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Dynamic programming ops will modify the dt_host, and there might be other
> functions browsing the dt_host at the same time. To avoid race
> conditions, add an rwlock for browsing the dt_host. But adding rwlock in
> device_tree.h causes the following circular dependency:
>     device_tree.h->rwlock.h->smp.h->asm/smp.h->device_tree.h
> 
> To fix this, remove the "#include <xen/device_tree.h>" and forward declare
> "struct dt_device_node".
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/arch/arm/include/asm/smp.h | 3 ++-
>  xen/arch/arm/smpboot.c         | 1 +
>  2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h
> index 8133d5c295..afe6129276 100644
> --- a/xen/arch/arm/include/asm/smp.h
> +++ b/xen/arch/arm/include/asm/smp.h
> @@ -3,13 +3,14 @@
> 
>  #ifndef __ASSEMBLY__
>  #include <xen/cpumask.h>
> -#include <xen/device_tree.h>
>  #include <asm/current.h>
>  #endif
> 
>  DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask);
>  DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
> 
> +struct dt_device_node;
> +
>  #define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))
> 
>  #define smp_processor_id() get_processor_id()
> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> index 412ae22869..336a7d418b 100644
> --- a/xen/arch/arm/smpboot.c
> +++ b/xen/arch/arm/smpboot.c
> @@ -11,6 +11,7 @@
>  #include <xen/cpumask.h>
>  #include <xen/delay.h>
>  #include <xen/domain_page.h>
> +#include <xen/device_tree.h>
Headers should be listed in alphabetical order, so device_tree.h goes before domain_page.h.

Other than that:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 09:58:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 09:58:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521898.810894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poLcv-0006Lg-Kp; Mon, 17 Apr 2023 09:58:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521898.810894; Mon, 17 Apr 2023 09:58:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poLcv-0006LZ-Hr; Mon, 17 Apr 2023 09:58:25 +0000
Received: by outflank-mailman (input) for mailman id 521898;
 Mon, 17 Apr 2023 09:58:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poLcu-0006LT-IT
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 09:58:24 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 623d5702-dd06-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 11:58:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 623d5702-dd06-11ed-b21e-6b7b168915f2
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104565129
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: [PATCH] xen/livepatch: Fix secure_payload() in non-debug builds
Date: Mon, 17 Apr 2023 10:58:15 +0100
Message-ID: <20230417095815.3734434-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The ro_pages + rw_pages + text_pages != payload->pages check is not something
which is reasonable to skip at runtime.  Rewrite it to not be an ASSERT().

As the code is being shuffled anyway, rework the logic calling
arch_livepatch_secure() to reduce its verbosity.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Ross Lagerwall <ross.lagerwall@citrix.com>
---
 xen/common/livepatch.c | 37 ++++++++++++++++---------------------
 1 file changed, 16 insertions(+), 21 deletions(-)

diff --git a/xen/common/livepatch.c b/xen/common/livepatch.c
index d385f882c65c..c10ab1f374e0 100644
--- a/xen/common/livepatch.c
+++ b/xen/common/livepatch.c
@@ -405,32 +405,27 @@ static int move_payload(struct payload *payload, struct livepatch_elf *elf)
 
 static int secure_payload(struct payload *payload, struct livepatch_elf *elf)
 {
-    int rc = 0;
-    unsigned int text_pages, rw_pages, ro_pages;
+    unsigned int text_pages = PFN_UP(payload->text_size);
+    unsigned int rw_pages   = PFN_UP(payload->rw_size);
+    unsigned int ro_pages   = PFN_UP(payload->ro_size);
+    int rc;
 
-    text_pages = PFN_UP(payload->text_size);
+    if ( ro_pages + rw_pages + text_pages != payload->pages )
+        return -EINVAL;
 
-    if ( text_pages )
-    {
-        rc = arch_livepatch_secure(payload->text_addr, text_pages, LIVEPATCH_VA_RX);
-        if ( rc )
-            return rc;
-    }
-    rw_pages = PFN_UP(payload->rw_size);
-    if ( rw_pages )
-    {
-        rc = arch_livepatch_secure(payload->rw_addr, rw_pages, LIVEPATCH_VA_RW);
-        if ( rc )
-            return rc;
-    }
+    if ( text_pages &&
+         (rc = arch_livepatch_secure(payload->text_addr, text_pages, LIVEPATCH_VA_RX)) )
+        return rc;
 
-    ro_pages = PFN_UP(payload->ro_size);
-    if ( ro_pages )
-        rc = arch_livepatch_secure(payload->ro_addr, ro_pages, LIVEPATCH_VA_RO);
+    if ( rw_pages &&
+         (rc = arch_livepatch_secure(payload->rw_addr, rw_pages, LIVEPATCH_VA_RW)) )
+        return rc;
 
-    ASSERT(ro_pages + rw_pages + text_pages == payload->pages);
+    if ( ro_pages &&
+         (rc = arch_livepatch_secure(payload->ro_addr, ro_pages, LIVEPATCH_VA_RO)) )
+        return rc;
 
-    return rc;
+    return 0;
 }
 
 static bool section_ok(const struct livepatch_elf *elf,
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:01:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 10:01:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521902.810905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poLgE-0007rY-4V; Mon, 17 Apr 2023 10:01:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521902.810905; Mon, 17 Apr 2023 10:01:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poLgE-0007rR-0H; Mon, 17 Apr 2023 10:01:50 +0000
Received: by outflank-mailman (input) for mailman id 521902;
 Mon, 17 Apr 2023 10:01:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poLgD-0007rL-C7
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 10:01:49 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20612.outbound.protection.outlook.com
 [2a01:111:f400:fe16::612])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ddafdee3-dd06-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 12:01:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8776.eurprd04.prod.outlook.com (2603:10a6:10:2e3::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 10:01:43 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 10:01:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddafdee3-dd06-11ed-b21e-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <dd6615fb-dc2d-9885-e3a3-9cf0954f57d3@suse.com>
Date: Mon, 17 Apr 2023 12:01:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] xen/livepatch: Fix .altinstructions safety checks
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230415022229.3475033-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230415022229.3475033-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0221.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ac::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8776:EE_
X-MS-Office365-Filtering-Correlation-Id: d57ba8c5-2931-4804-35cd-08db3f2abf08
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d57ba8c5-2931-4804-35cd-08db3f2abf08
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 10:01:43.0928
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: esZ1S3NfsUomP+DpDdbKobGxMhDAGjrPZ/DwNW6gr98hGauj3PVNoAXAX2qhf7r6sPfY7ApSCWjyteRNmHabig==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8776

On 15.04.2023 04:22, Andrew Cooper wrote:
> The prior check has && vs || mixups, making it tautologically false and thus
> providing no safety at all.  There are boundary errors too.
> 
> First start with a comment describing how the .altinstructions and
> .altinstr_replacement sections interact, and perform suitable cross-checking.
> 
> Second, rewrite the alt_instr loop entirely from scratch.  Origin sites have
> non-zero size, and must be fully contained within .text.

Or .init.text, which may be worth making explicit (perhaps also in the
respective code comment). Or am I misremembering and livepatch blobs,
unlike e.g. Linux modules, don't support the concept of .init.* sections?

>  Any non-zero sized
> replacements must be fully contained within .altinstr_replacement.

Yes, but if they're all zero-sized, in principle there need not be any
.altinstr_replacement section at all. Not sure though whether that's worth
supporting as a special case.

Furthermore, ...

> Fixes: f8a10174e8b1 ("xsplice: Add support for alternatives")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> CC: Ross Lagerwall <ross.lagerwall@citrix.com>
> 
> As a further observation, .altinstr_replacement shouldn't survive beyond its
> use in apply_alternatives(), but the disp32 relative references (for x86 at
> least) in alt_instr force .altinstr_replacement to be close to the payload
> while being applied.

... if .altinstr_replacement is retained right now anyway, isn't it legal
to fold it with another section (e.g. .text) while linking?

> --- a/xen/common/livepatch.c
> +++ b/xen/common/livepatch.c
> @@ -803,28 +803,82 @@ static int prepare_payload(struct payload *payload,
>      if ( sec )
>      {
>  #ifdef CONFIG_HAS_ALTERNATIVE
> +        /*
> +         * (As of April 2023), Alternatives are formed of:
> +         * - An .altinstructions section with an array of struct alt_instr's.
> +         * - An .altinstr_replacement section containing instructions bytes.

Since this is generic code, perhaps drop "bytes"? (Or else use "instruction
bytes"?)

> +         * An individual alt_instr contains:
> +         * - An orig reference, pointing into .text with a nonzero length
> +         * - A repl reference, pointing into .altinstr_replacement
> +         *
> +         * It is legal to have zero-length replacements, meaning it is legal
> +         * for the .altinstr_replacement section to be empty too.  An
> +         * implementation detail means that a zero-length replacement's repl
> +         * reference will be the start of the .altinstr_replacement section.

"will" or "may"? And especially if indeed "will", is it really worth mentioning
this here in this way, posing a fair risk of the comment going stale entirely
unnoticed?

> +         */
> +        const struct livepatch_elf_sec *repl_sec;
>          struct alt_instr *a, *start, *end;
>  
>          if ( !section_ok(elf, sec, sizeof(*a)) )
>              return -EINVAL;
>  
> +        /* Tolerate an empty .altinstructions section... */
> +        if ( sec->sec->sh_size == 0 )
> +            goto alt_done;
> +
> +        /* ... but otherwise, there needs to be something to alter... */
> +        if ( payload->text_size == 0 )
> +        {
> +            printk(XENLOG_ERR LIVEPATCH "%s Alternatives provided, but no .text\n",
> +                   elf->name);
> +            return -EINVAL;
> +        }
> +
> +        /* ... and something to be altered to. */
> +        repl_sec = livepatch_elf_sec_by_name(elf, ".altinstr_replacement");
> +        if ( !repl_sec )
> +        {
> +            printk(XENLOG_ERR LIVEPATCH "%s .altinstructions provided, but no .altinstr_replacement\n",
> +                   elf->name);
> +            return -EINVAL;
> +        }
> +
>          start = sec->load_addr;
>          end = sec->load_addr + sec->sec->sh_size;
>  
>          for ( a = start; a < end; a++ )
>          {
> -            const void *instr = ALT_ORIG_PTR(a);
> -            const void *replacement = ALT_REPL_PTR(a);
> +            const void *orig = ALT_ORIG_PTR(a);
> +            const void *repl = ALT_REPL_PTR(a);
> +
> +            /* orig must be fully within .text. */
> +            if ( orig               < payload->text_addr ||
> +                 a->orig_len        > payload->text_size ||
> +                 orig + a->orig_len > payload->text_addr + payload->text_size )
> +            {
> +                printk(XENLOG_ERR LIVEPATCH
> +                       "%s Alternative orig %p+%#x outside payload text %p+%#lx\n",
> +                       elf->name, orig, a->orig_len, payload->text_addr, payload->text_size);
> +                return -EINVAL;
> +            }
>  
> -            if ( (instr < region->start && instr >= region->end) ||
> -                 (replacement < region->start && replacement >= region->end) )
> +            /*
> +             * repl must be fully within .altinstr_replacement, even if they
> +             * happen to both have zero length.

Who is "they ... both" here? Surely it doesn't matter here whether "orig_len"
is zero.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:18:31 2023
Date: Mon, 17 Apr 2023 12:17:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Message-ID: <ZD0cyXLt1knXyUzA@Air-de-Roger>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger>
 <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger>
 <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger>
 <87leivw8qp.fsf@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <87leivw8qp.fsf@epam.com>

On Fri, Apr 14, 2023 at 01:30:39AM +0000, Volodymyr Babchuk wrote:
> 
> Hi Roger,
> 
> Roger Pau Monné <roger.pau@citrix.com> writes:
> 
> > On Wed, Apr 12, 2023 at 09:54:12PM +0000, Volodymyr Babchuk wrote:
> >> 
> >> Hi Roger,
> >> 
> >> First of all, I want to provide a link [1] to the RFC series where I
> >> tried a total PCI locking rework. After discussing it with Jan, it became
> >> clear to me that the task is much harder than I anticipated. So it was
> >> decided to move in smaller steps. The first step is to make the vPCI code
> >> independent of the global PCI lock. Actually, this is not the first try:
> >> Oleksandr Andrushchenko tried to use an r/w lock for this [2]. But
> >> Jan suggested using refcounting instead of r/w locks, and I liked the
> >> idea. So this is why you are seeing this patch series.
> >
> > Thanks, I've been on leave for long periods recently and I've missed
> > some of the series.
> >
> 
> Did you check this RFC series? I am not asking you to review it; I am
> just curious about your opinion on the selected approach

I've just taken a look, and it seems sensible (locking is complicated).
Splitting a big lock like the pci_devs one can lead to all kinds of
unexpected races, because it was applying serialization to a lot of
operations which would no longer be serialized on the global lock.

Overall it's time we kill the pci_devs lock, though inevitably this will
likely result in some fallout.  It's important however that we give some
thought to what mode we switch to, so as to try to avoid finding
ourselves in a similar situation to where we are now.

> >> 
> >> 
> >> Roger Pau Monné <roger.pau@citrix.com> writes:
> >> 
> >> > On Tue, Apr 11, 2023 at 11:41:04PM +0000, Volodymyr Babchuk wrote:
> >> >> 
> >> >> Hi Roger,
> >> >> 
> >> >> Roger Pau Monné <roger.pau@citrix.com> writes:
> >> >> 
> >> >> > On Tue, Mar 14, 2023 at 08:56:29PM +0000, Volodymyr Babchuk wrote:
> >> >> >> Prior to this change, lifetime of pci_dev objects was protected by global
> >> >> >> pcidevs_lock(). Long-term plan is to remove this log, so we need some
> >> >> >                                                    ^ lock
> >> >> >
> >> >> > I wouldn't say remove, as one way or another we need a lock to protect
> >> >> > concurrent accesses.
> >> >> >
> >> >> 
> >> >> I'll write "replace this global lock with couple of more granular
> >> >> locking devices"
> >> >> if this is okay for you.
> >> >> 
> >> >> >> other mechanism to ensure that those objects will not disappear under
> >> >> >> the feet of code that accesses them. Reference counting is a good
> >> >> >> choice as it provides an easy-to-comprehend way to control object
> >> >> >> lifetime.
> >> >> >> 
> >> >> >> This patch adds two new helper functions: pcidev_get() and
> >> >> >> pcidev_put(). pcidev_get() will increase the reference counter, while
> >> >> >> pcidev_put() will decrease it, destroying the object when the counter
> >> >> >> reaches zero.
> >> >> >> 
> >> >> >> pcidev_get() should be used only when you already have a valid pointer
> >> >> >> to the object or you are holding a lock that protects one of the
> >> >> >> lists (domain, pseg or ats) that store pci_dev structs.
> >> >> >> 
> >> >> >> pcidev_get() is rarely used directly, because there already are
> >> >> >> functions that will provide a valid pointer to a pci_dev struct:
> >> >> >> pci_get_pdev() and pci_get_real_pdev(). They will lock the appropriate
> >> >> >> list, find the needed object and increase its reference counter before
> >> >> >> returning it to the caller.
> >> >> >> 
> >> >> >> Naturally, pcidev_put() should be called after finishing work with a
> >> >> >> received object. This is the reason why this patch has so many
> >> >> >> pcidev_put()s and so few pcidev_get()s: existing calls to
> >> >> >> pci_get_*() functions will now increase the reference counter
> >> >> >> automatically; we just need to decrease it back when we are finished.
> >> >> >
> >> >> > After looking a bit into this, I would like to ask whether the need
> >> >> > to increase the refcount for each use of a pdev has been considered.
> >> >> >
> >> >> 
> >> >> This is how Linux uses reference counting. It decreases cognitive load
> >> >> and the chance of an error, as there is a simple set of rules which you
> >> >> follow.
> >> >> 
> >> >> > For example I would consider the initial alloc_pdev() to take a
> >> >> > refcount, and then pci_remove_device() _must_ be the function that
> >> >> > removes the last refcount, so that it can return -EBUSY otherwise (see
> >> >> > my comment below).
> >> >> 
> >> >> I tend to disagree there, as this ruins the very idea of reference
> >> >> counting. We can't know who else holds a reference right now. Okay, we
> >> >> might know, but that requires an additional lock to serialize
> >> >> accesses, which, in turn, makes the refcount unneeded.
> >> >
> >> > In principle pci_remove_device() must report whether the device is
> >> > ready to be physically removed from the system, so it must return
> >> > -EBUSY if there are still users accessing the device.
> >> >
> >> > A user would use PHYSDEVOP_manage_pci_remove to signal Xen it's trying
> >> > to physically remove a PCI device from a system, so we must ensure
> >> > that when the hypervisor returns success the device is ready to be
> >> > physically removed.
> >> >
> >> > Or at least that's my understanding of how this should work.
> >> >
> >> 
> >> As far as I can see, this is not how it is implemented right
> >> now. pci_remove_device() does not check whether the device is assigned
> >> to a domain. It does not check whether there are still users accessing
> >> the device. It just relies on the global PCI lock to ensure that the
> >> device is removed in an orderly manner.
> >
> > Right, the expectation is that any path inside of the hypervisor using
> > the device will hold the pcidevs lock, and thus by holding it while
> > removing we assert that no users (inside the hypervisor) are left.
> >
> 
> May I propose a slightly relaxed assertion? "We assert that no users
> that access the device are left." What I am trying to say there is that
> no one will try to access, say, the device's config space, because the
> device may already have been physically removed and any access to the
> device itself would cause a fault. But there may still be users that
> access the struct pdev that corresponds to this device.

Isn't holding a reference to the pdev a sign that its PCI config
space might be accessed?

> > I don't think we have been very consistent about the usage of the
> > pcidevs lock, and hence most of this is likely broken.  Hopefully
> > removing a PCI device from a system is a very uncommon operation.
> >
> >> My patch series has no intention to change this behavior. All I want
> >> to achieve is to allow the vPCI code to access struct pdev objects
> >> without holding the global PCI lock.
> >
> > That's all fine, but we need to make sure it doesn't make things worse
> > than they currently are, and ideally it should make things easier.
> >
> > That's why I would like to understand exactly what's the purpose of
> > the refcount, and how it should be used.  The usage of the refcount
> > should be compatible with the intended behaviour of
> > pci_remove_device(), regardless of whether the current implementation
> > is not correct.  We don't want to be piling up more broken stuff on
> > top of an already broken implementation.
> >
> 
> I agree with you. I'll fix the issue with vPCI, that you mentioned below
> and prepare more comprehensive commit description in the next version.
> 
> >> >> >
> >> >> > That makes me wonder if for example callers of pci_get_pdev(d, sbdf)
> >> >> > do need to take an extra refcount, because such access is already
> >> >> > protected from the pdev going away by the fact that the device is
> >> >> > assigned to a guest.  But maybe it's too much work to separate users
> >> >> > of pci_get_pdev(d, ...); vs pci_get_pdev(NULL, ...);.
> >> >> >
> >> >> > There's also a window when the refcount is dropped to 0, and the
> >> >> > destruction function is called, but at the same time a concurrent
> >> >> > thread could attempt to take a reference to the pdev still?
> >> >> 
> >> >> Last pcidev_put() would be called by pci_remove_device(), after removing
> >> >> it from all lists. This should prevent other threads from obtaining a valid
> >> >> reference to the pdev.
> >> >
> >> > What if a concurrent user has taken a reference to the object before
> >> > pci_remove_device() has removed the device from the lists, and still
> >> > holds it when pci_remove_device() performs the supposedly last
> >> > pcidev_put() call?
> >> 
> >> Well, let's consider the vPCI code as this concurrent user, for
> >> example. First, it will try to take vpci->lock. Depending on where in
> >> pci_remove_device() we are, there will be three cases:
> >> 
> >> 1. The lock is taken before vpci_remove_device() takes it. In this
> >> case the vPCI code works as always.
> >> 
> >> 2. It tries to take the lock while vpci_remove_device() is already
> >> holding it. In this case we fall through to the next case:
> >> 
> >> 3. The lock is taken after vpci_remove_device() has finished its work. In
> >> this case the vPCI code sees that it was called for a device in an
> >> invalid state and exits.
> >
> > For 2) and 3) you will hit a dereference, as the lock (vpci->lock)
> > would have been freed by vpci_remove_device() while a concurrent user
> > is waiting on pci_remove_device() to release the lock.
> >
> > I'm not sure how the user sees that the device is in an invalid state,
> > because it was waiting on a lock (vpci->lock) that has been removed
> > under its feet.
> >
> > This is an existing issue not made worse by the refcounting, but it's
> > not a great example.
> >
> 
> Yes, agree. I am going to move vpci->lock to the upper level
> (pdev->vpci_lock) and rework vPCI code so it will gracefully handle
>  pdev->vpci == NULL.

We likely need to do something along these lines.

> >> 
> >> As you can see, there is no case where the vPCI code is running on a
> >> device which was removed.
> >>
> >> After the vPCI code drops the refcount, the pdev object will be freed
> >> once and for all. Please note that I am talking about the pdev object
> >> here, not about the PCI device, because the PCI device (as a high-level
> >> entity) was destroyed by pci_remove_device(). The refcount is needed
> >> just for the last clean-up operations.
> >
> > Right, but pci_remove_device() will return success even when there are
> > some users holding a refcount to the device, which is IMO undesirable.
> >
> > As I understand it the purpose of pci_remove_device() is that once it
> > returns success the device can be physically removed from the system.
> >
> 
> Yes, I totally agree with you. By saying "the device can be physically
> removed from the system" we are asserting that no one will try to access
> this device via the PCI bus. But this is not the same as "no one shall
> access struct pdev fields, as it should be freed immediately".

I kind of view those two as linked together: a user holding a ref to a
pdev might access its PCI config space.  It's still possible for some
callers to access the PCI config space of a device as long as the SBDF
is known, but it still feels wrong to return from pci_remove_device()
while the pdev hasn't been fully purged from the hypervisor.

The more complex we make this handling, the more likely we are to
introduce errors in the long term.  IMO it's easier to reason about
device state if we make pci_remove_device() authoritative wrt any uses
of the related pdev inside the hypervisor.

> >> >
> >> >> >
> >> >> >>          sbdf.devfn &= ~stride;
> >> >> >>          pdev = pci_get_pdev(NULL, sbdf);
> >> >> >>          if ( pdev && stride != pdev->phantom_stride )
> >> >> >> +        {
> >> >> >> +            pcidev_put(pdev);
> >> >> >>              pdev = NULL;
> >> >> >> +        }
> >> >> >>      }
> >> >> >>  
> >> >> >>      return pdev;
> >> >> >> @@ -548,13 +526,18 @@ struct pci_dev *pci_get_pdev(const struct domain *d, pci_sbdf_t sbdf)
> >> >> >>          list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
> >> >> >>              if ( pdev->sbdf.bdf == sbdf.bdf &&
> >> >> >>                   (!d || pdev->domain == d) )
> >> >> >> +            {
> >> >> >> +                pcidev_get(pdev);
> >> >> >>                  return pdev;
> >> >> >> +            }
> >> >> >>      }
> >> >> >>      else
> >> >> >>          list_for_each_entry ( pdev, &d->pdev_list, domain_list )
> >> >> >>              if ( pdev->sbdf.bdf == sbdf.bdf )
> >> >> >> +            {
> >> >> >> +                pcidev_get(pdev);
> >> >> >>                  return pdev;
> >> >> >> -
> >> >> >> +            }
> >> >> >>      return NULL;
> >> >> >>  }
> >> >> >>  
> >> >> >> @@ -663,7 +646,10 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
> >> >> >>                              PCI_SBDF(seg, info->physfn.bus,
> >> >> >>                                       info->physfn.devfn));
> >> >> >>          if ( pdev )
> >> >> >> +        {
> >> >> >>              pf_is_extfn = pdev->info.is_extfn;
> >> >> >> +            pcidev_put(pdev);
> >> >> >> +        }
> >> >> >>          pcidevs_unlock();
> >> >> >>          if ( !pdev )
> >> >> >>              pci_add_device(seg, info->physfn.bus, info->physfn.devfn,
> >> >> >> @@ -818,7 +804,9 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
> >> >> >>              if ( pdev->domain )
> >> >> >>                  list_del(&pdev->domain_list);
> >> >> >>              printk(XENLOG_DEBUG "PCI remove device %pp\n", &pdev->sbdf);
> >> >> >> -            free_pdev(pseg, pdev);
> >> >> >> +            list_del(&pdev->alldevs_list);
> >> >> >> +            pdev_msi_deinit(pdev);
> >> >> >> +            pcidev_put(pdev);
> >> >> >
> >> >> > Hm, I think here we want to make sure that the device has been freed,
> >> >> > or else you would have to return -EBUSY to the caller to notify that
> >> >> > the device is still in use.
> >> >> 
> >> >> Why? As I see it, the pdev object may still potentially be accessed by
> >> >> some other CPU right now. So the pdev object will be freed after the
> >> >> last reference is dropped. As it is already removed from all the lists,
> >> >> pcidev_get() will not find it anymore.
> >> >>
> >> >> Actually, I can't see how this can happen in reality, as vPCI, MSI and
> >> >> IOMMU are already deactivated for this device. So no one would touch it.
> >> >
> >> > Wouldn't it be possible for a concurrent user to hold a reference from
> >> > before the device has been 'deactivated'?
> >> >
> >> 
> >> Yes, it can hold a reference. This is why we need additional locking to
> >> ensure that, say, pci_cleanup_msi() does not race with the rest of the
> >> MSI code. Right now this is ensured by the global PCI lock.
> >> 
> >> >> >
> >> >> > I think we need an extra pcidev_put_final() or similar that can be
> >> >> > used in pci_remove_device() to assert that the device has been
> >> >> > actually removed.
> >> >> 
> >> >> Will something break if we don't do this? I can't see how this can
> >> >> happen.
> >> >
> >> > As mentioned above, once pci_remove_device() returns 0 the admin
> >> > should be capable of physically removing the device from the system.
> >> >
> >> 
> >> This patch series does not alter this requirement. The admin is still
> >> capable of physically removing the device from the system after a
> >> successful call to pci_remove_device().
> >
> > Indeed, but there might be users in the hypervisor still holding a
> > reference to the pdev.
> >
> 
> Reference counting alone can't protect you from this situation.
> Additional locking is required in this case. And right now we have the
> global PCI lock that protects us. Actually, almost all the code takes
> and drops references while holding the global PCI lock. The only
> exception, as far as I know, is the vPCI code, which I am going to fix
> in the next version.

But it would IMO be fine to just return -EBUSY if pci_remove_device()
doesn't drop the last reference (and thus the pdev is not yet
removed).

I'm not saying that pci_remove_device() must unconditionally remove
the pdev, but that when not doing so it should return -EBUSY and the
caller will have to retry.

I'm not the maintainer, so maybe Jan has other opinions about this; I
will let him comment, as I don't want to enforce something without
having agreement.

> Also, I'll double check that only vPCI code obtains references while not
> holding the global lock. My reasoning is the following:
> 
> 1. Right now (i.e. on the staging branch) all accesses to pdevs are in a
> consistent state. This basically means that all code that accesses pdevs
> does so while holding an appropriate lock - the global PCI lock, in most
> cases.

The only appropriate lock should be the pci_devs lock if we want to
prevent device removal.

> This means the following: a pdev can't disappear under our feet, no one
> is racing with us while accessing the pdev, and no new pdev can be
> created while we are holding the global PCI lock.
> 
> 2. Adding reference counting alone changes nothing in this regard.
> Actually, the PCI code will needlessly increment/decrement an atomic
> counter while holding the global lock.
> 
> 3. As all work with PCI devices is done while holding the lock, we can
> assert that the reference count at the beginning of a critical section
> will be equal to the reference count at the end of it, because my patch
> adds a _put for every _get all across the hypervisor, with a few notable
> exceptions:
> 
> 3.1. pci_add_device() will initialize a device and set reference count
> to 1
> 
> 3.2. pci_remove_device() will de-initialize a device and decrease the
> reference count by 1. I can assert that, if p.1 is true and I didn't
> mess up balancing _gets/_puts in other parts of the code, then
> pci_remove_device() will always remove the last reference. This may (and
> will) change in the future.
> 
> 3.3. The MSI code holds long-term pointers to a pdev, so
> msi[x]_capability_init() does an additional _get() and then
> `msi_free_irq()` does the corresponding _put(). Luckily for us,
> pci_remove_device() calls pci_cleanup_msi(), so we can be sure this does
> not break the assertion in p.3.2.

By the same logic, shouldn't assigning a device to a domain also take an
extra reference, because it's adding the pdev to a domain-private list?
(ie: much like MSI storing the pointer to the pdev).

And then for MSI-X it feels like we should be taking a reference for
each msi_desc entry in use, since each one contains a pointer to the
pdev.

> 4. Now, we want vPCI code to be able to access PCI devices without
> holding the global PCI lock the whole time. This is where we can
> leverage reference counting. Here are the assertions:
> 
> 4.1. The vPCI code gets a pdev pointer only via the pci_get_pdev()
> function, which reads from a list while holding the global PCI lock.
> That means that pci_get_pdev() will return NULL after
> pci_remove_device() deletes the device from all lists. It also means
> that the vPCI code can't get a pdev while pci_remove_device() is
> running, because pci_remove_device() is holding the global PCI lock.

What if it gets the pointer just before pci_remove_device() runs?

It can't get the pointer while pci_remove_device() is running, but
could get it just before.

> 4.2. vPCI code will always acquire pdev->vpci_lock before accessing
> pdev->vpci
> 
> 4.3. pci_remove_device() will de-init vpci state while holding
> pdev->vpci_lock
> 
> 4.4. vPCI code will not try to access PCI device if pdev->vpci == NULL
> 
> 4.5. vPCI code will access only vpci-related fields in struct pdev

That's not currently true: vPCI does make extensive use of pdev->sbdf
for example.  It does also cause changes in the MSI(-X) state, albeit
not directly but through helpers.

> 
> 4.6. vPCI does not depend on and does not alter non-vPCI-related state
> of a PCI device. This is the trickiest part, because most of the
> remaining state is protected by the global PCI lock, which we are not
> holding. That means that we need to disable vPCI while re-assigning the
> PCI device to another domain. As far as I can see, this is the only
> place where vPCI depends on broader PCI device state.

Device re-assignment should cause all previous vPCI state to be torn
down and re-created when assigned to a different domain.  For one, we
need to do this in order to clear any internal state, but we also must
do so because the initial setup (as done by init_bars for example) can
be different depending on whether the owner domain is a domU or dom0.

> This approach will not interfere with pci_remove_device()'s obligations,
> because we can be sure that right now vPCI is the only user that can
> hold a reference past the pci_remove_device() call, and that the vPCI
> code will not attempt to access the PCI device after pci_remove_device()
> has finished, thus allowing the admin to physically remove the device.
> 
> In the future, we can gradually move other parts of the PCI code out
> from under the global PCI lock, provided we can give the same guarantees
> as in p. 4.1-4.6.

I appreciate you doing all this analysis and reasoning. IMO, having to
write a page-long justification should really get us worried that the
locking scheme we are using is far too complex and difficult to
follow.  Again, not your fault; it's just how things currently are.

> >> >> >> -static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
> >> >> >> +static int assign_device(struct domain *d, struct pci_dev *pdev, u32 flag)
> >> >> >>  {
> >> >> >>      const struct domain_iommu *hd = dom_iommu(d);
> >> >> >> -    struct pci_dev *pdev;
> >> >> >> +    uint8_t devfn;
> >> >> >>      int rc = 0;
> >> >> >>  
> >> >> >>      if ( !is_iommu_enabled(d) )
> >> >> >> @@ -1422,10 +1412,11 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
> >> >> >>  
> >> >> >>      /* device_assigned() should already have cleared the device for assignment */
> >> >> >>      ASSERT(pcidevs_locked());
> >> >> >> -    pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
> >> >> >>      ASSERT(pdev && (pdev->domain == hardware_domain ||
> >> >> >>                      pdev->domain == dom_io));
> >> >> >>  
> >> >> >> +    devfn = pdev->devfn;
> >> >> >> +
> >> >> >>      /* Do not allow broken devices to be assigned to guests. */
> >> >> >>      rc = -EBADF;
> >> >> >>      if ( pdev->broken && d != hardware_domain && d != dom_io )
> >> >> >> @@ -1460,7 +1451,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
> >> >> >>   done:
> >> >> >>      if ( rc )
> >> >> >>          printk(XENLOG_G_WARNING "%pd: assign (%pp) failed (%d)\n",
> >> >> >> -               d, &PCI_SBDF(seg, bus, devfn), rc);
> >> >> >> +               d, &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
> >> >> >>      /* The device is assigned to dom_io so mark it as quarantined */
> >> >> >>      else if ( d == dom_io )
> >> >> >>          pdev->quarantine = true;
> >> >> >> @@ -1595,6 +1586,9 @@ int iommu_do_pci_domctl(
> >> >> >>          ASSERT(d);
> >> >> >>          /* fall through */
> >> >> >>      case XEN_DOMCTL_test_assign_device:
> >> >> >> +    {
> >> >> >> +        struct pci_dev *pdev;
> >> >> >> +
> >> >> >>          /* Don't support self-assignment of devices. */
> >> >> >>          if ( d == current->domain )
> >> >> >>          {
> >> >> >> @@ -1622,26 +1616,36 @@ int iommu_do_pci_domctl(
> >> >> >>          seg = machine_sbdf >> 16;
> >> >> >>          bus = PCI_BUS(machine_sbdf);
> >> >> >>          devfn = PCI_DEVFN(machine_sbdf);
> >> >> >> -
> >> >> >>          pcidevs_lock();
> >> >> >> -        ret = device_assigned(seg, bus, devfn);
> >> >> >> +        pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
> >> >> >> +        if ( !pdev )
> >> >> >> +        {
> >> >> >> +            printk(XENLOG_G_INFO "%pp non-existent\n",
> >> >> >> +                   &PCI_SBDF(seg, bus, devfn));
> >> >> >> +            ret = -EINVAL;
> >> >> >> +            break;
> >> >> >> +        }
> >> >> >> +
> >> >> >> +        ret = device_assigned(pdev);
> >> >> >>          if ( domctl->cmd == XEN_DOMCTL_test_assign_device )
> >> >> >>          {
> >> >> >>              if ( ret )
> >> >> >>              {
> >> >> >> -                printk(XENLOG_G_INFO "%pp already assigned, or non-existent\n",
> >> >> >> +                printk(XENLOG_G_INFO "%pp already assigned\n",
> >> >> >>                         &PCI_SBDF(seg, bus, devfn));
> >> >> >>                  ret = -EINVAL;
> >> >> >>              }
> >> >> >>          }
> >> >> >>          else if ( !ret )
> >> >> >> -            ret = assign_device(d, seg, bus, devfn, flags);
> >> >> >> +            ret = assign_device(d, pdev, flags);
> >> >> >> +
> >> >> >> +        pcidev_put(pdev);
> >> >> >
> >> >> > I would think you need to keep the refcount here if ret == 0, so that
> >> >> > the device cannot be removed while assigned to a domain?
> >> >> 
> >> >> Looks like we are perceiving the function of the refcnt in different
> >> >> ways. For me, this is the mechanism that guarantees that if we have a
> >> >> valid pointer to an object, this object will not disappear under our
> >> >> feet. This is the main function of krefs in the Linux kernel: if your
> >> >> code holds a reference to an object, you can be sure that this object
> >> >> exists in memory.
> >> >> 
> >> >> On the other hand, it seems that you are considering this refcnt as a
> >> >> usage counter for the actual PCI device, not the "struct pdev" that
> >> >> represents it. Those are two related things, but not the same. So I
> >> >> can see why you are suggesting taking an additional reference there.
> >> >> But to me this looks unnecessary: the very first reference is obtained
> >> >> in pci_add_device() and the corresponding function
> >> >> pci_remove_device() will drop it. So, for me, if the admin wants to
> >> >> remove a PCI device which is assigned to a domain, they can do this
> >> >> just as they were able to do prior to these patches.
> >> >
> >> > This is all fine, but needs to be stated in the commit message.
> >> >
> >> 
> >> Sure, I will add this.
> >> 
> >> >> The main value of introducing the refcnt is to be able to access pdev
> >> >> objects without holding the global pcidevs_lock(). This does not mean
> >> >> that you don't need locking at all. But this allows you to use
> >> >> pdev->lock (which does not exist in this series, but was introduced in
> >> >> an RFC earlier), or vpci->lock, or any other subsystem->lock.
> >> >
> >> > I guess I was missing this other bit about introducing a
> >> > per-device lock; would it be possible to bundle all this together into
> >> > a single patch series?
> >> 
> >> As I said at the top of this email, it was tried. You can check RFC at [1].
> >> 
> >> >
> >> > It would be good to place this change together with any other locking
> >> > related change that you have pending.
> >> 
> >> Honestly, my main goal is to fix the current issues with vPCI, so ARM
> >> can move forward on adding PCI support for the platform. So, I am
> >> focusing on this right now.
> >
> > Thanks, we need to be careful however not to accumulate more band-aids
> > on top just to work around the fact that the locking we have regarding
> > the PCI devices is not suitable.
> >
> > I think it's important to keep all the usages of the pci_dev struct in
> > mind when designing a solution.
> >
> > Overall it seems like this might help vPCI on Arm. I think the only
> > major request I have is the one related to pci_remove_device() only
> > returning success when there are no refcounts left.
> 
> Above I have proposed another view on this. I hope it will work for
> you. Just to reiterate, the idea is to allow "harmless" refcounts to be
> left after returning from pci_remove_device(). By "harmless" I mean that
> the owners of those refcounts will not try to access the physical PCI
> device once pci_remove_device() has finished.

I'm not strictly a maintainer of this piece of code, albeit I have an
opinion.  I would also like to hear Jan's opinion, since he is the
maintainer.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:25:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 10:25:23 +0000
Message-ID: <9382acbd-3c98-3407-ba1f-f02e1e6751a1@amd.com>
Date: Mon, 17 Apr 2023 12:24:54 +0200
Subject: Re: [XEN][PATCH v5 07/17] xen/smmu: Add remove_device callback for
 smmu_iommu ops
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Rahul Singh
	<rahul.singh@arm.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
	"Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-8-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-8-vikram.garhwal@amd.com>

Hi Vikram,

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Add remove_device callback for removing the device entry from smmu-master using
> following steps:
> 1. Find if SMMU master exists for the device node.
> 2. Remove the SMMU master
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  xen/drivers/passthrough/arm/smmu.c | 56 ++++++++++++++++++++++++++++++
>  1 file changed, 56 insertions(+)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 0a514821b3..14e15f1bc6 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -816,6 +816,19 @@ static int insert_smmu_master(struct arm_smmu_device *smmu,
>         return 0;
>  }
> 
> +static int remove_smmu_master(struct arm_smmu_device *smmu,
> +                             struct arm_smmu_master *master)
> +{
> +       if (!smmu->masters.rb_node) {
> +               ASSERT_UNREACHABLE();
> +               return -ENOENT;
> +       }
> +
> +       rb_erase(&master->node, &smmu->masters);
> +
> +       return 0;
> +}
> +
>  static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
>                                          struct device *dev,
>                                          struct iommu_fwspec *fwspec)
> @@ -853,6 +866,32 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
>         return insert_smmu_master(smmu, master);
>  }
> 
> +static int arm_smmu_dt_remove_device_legacy(struct arm_smmu_device *smmu,
> +                                        struct device *dev)
> +{
> +       struct arm_smmu_master *master;
> +       struct device_node *dev_node = dev_get_dev_node(dev);
> +       int ret;
> +
> +       master = find_smmu_master(smmu, dev_node);
> +       if (master == NULL) {
> +               dev_err(dev,
> +                       "No registrations found for master device %s\n",
> +                       dev_node->name);
> +               return -EINVAL;
> +       }
> +
> +       ret = remove_smmu_master(smmu, master);
This patch looks good, although I remember seeing Julien advising you that it would be beneficial
for the SMMU driver itself to check that the device is not currently in use before you remove it
(even though you have this check in iommu_remove_dt_device()). I could not find your answer to this.

NIT: No need for a blank line here if the next instruction is checking the ret value.

With the above things clarified:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:26:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 10:26:50 +0000
 laD2djuAFRHoLCTDH6Q6LqQhTezIjQOa38PYzceSgkI6MWlp5s85jrKR8xmGbS4jfX0Hy/x2
 DGAqCUih7QVgtUP3q/99lfC6w9AvbDMRw8xownSAGSs61slYJb/P9D1r1/G8fxHMYCVCEGbu
 2QJkNSf6+ZICoyRkCuKQ6MGG7TBC+u5DQAwSGVHR/EJnwlBMVb5FWyMyFmS/HtUD/s=
IronPort-HdrOrdr: A9a23:zKCQD6z/ku69dH5wRI1NKrPxS+gkLtp133Aq2lEZdPULSKGlfp
 GV9sjziyWetN9wYh4dcB67Scy9qFfnhOZICO4qTMyftWjdyRKVxeRZgbcKrAeBJ8STzJ8/6U
 4kSdkFNDSSNykEsS+Z2njeLz9I+rDunsGVbKXlvhFQpGlRGt1dBmxCe2Km+yNNNWt77c1TLu
 vg2iMLnUvoRZxRBf7LdUUtbqzmnZnmhZjmaRkJC1oO7xSPtyqh7PrXAgWVxRAXVhJI2PMH/X
 LemwL0y62/u7XjoyWsmlP73tBzop/M29FDDMuDhow8LSjtsB+hYMBMSqCPpzc8pcCo8RIPnM
 PXqxktEsxv4zf6f32zozHqxw78uQxeoUPK+Bu9uz/OsMb5TDU1B45ogp9YSALQ7w4FsMtn2K
 xG8mqFv94PZCmw1xjV1pztbVVHh0C0qX0tnao6iGFea5IXbPt0oZYE9E1YPZ8cFGbR6ZwhEs
 NpEMbAjcwmOW+yXjT8hC1C0dasVnM8ElOvRVUDgNWc13xskHVw3yIjtbgit0ZF0Kh4Z4hP5u
 zCPKgtvqpJVNUqYaV0A/pEaderC0TWKCi8cV66EBDCLuUqKnjNo5n47PEe/+exYqEFy5M0hd
 DoTE5Yj2gvYEjjYPf+kqGjyiq9A1lVYA6diP23v/NCy/jBrfvQQGK+oWkV4oudS651OLyeZx
 6xUKgmdsMLY1GeXrqh5DeOK6W6GUNuLvH9hexLKm5mgvi7XbEC5darBsr7Ff7KLQsOfF/ZLz
 8qYAXTTf8wnHxDHEWIzCTsZw==
X-Talos-CUID: =?us-ascii?q?9a23=3AyQiXHmuCvJc9vnwLy6/BLoKL6IslcUDj5XfbJnO?=
 =?us-ascii?q?SAEtnQee6WWaq5ZFdxp8=3D?=
X-Talos-MUID: 9a23:QzsK2wVH16/YQdjq/B78oB5YKcFm342vIR8okY8euvCLLzMlbg==
X-IronPort-AV: E=Sophos;i="5.99,203,1677560400"; 
   d="scan'208";a="108227600"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cQTNBFMT+pwNk1VrQfUkdiFILR5VoR/r0NSSJO8FH+ORYS/yV43PmAe/GabtSPkwn6U8uAbfND0+uvii5WwFJpvN6ZYdFjBi6d8gfmxt5KthfTpyPy2SGSLE167vIs63PH7Kh0cnqiT5IFU1YwQTbXsnSh+5x7E7KBPFpG/mSxpL9ZZ9FjeXz7G4gIcq3crCbKBi18EKaKnFsNDy4eLjp6bhhoGB8Ah1eKebzj0YrjxdDrbKtIuVm+f9TK7PpuGamMf7FS2sc8wTTXrgV85B8MrpPpv/93PdJ48UvLmxLLen8MGUOXvWMCfliXnaFGFXoSP8yaB5OfSIeqmgZokgrQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Zgd2FIFviWWFFsHRnPV6ruCFrjWqWs9TWr9RN26trZI=;
 b=OzWvrLqcaUIKseOWSNoiBhFVD7QdPe5UCHNtmiBZOoFaWIUz/SiRo8M7nVLQnS9ZaRvCR7zsdGev9FAyLCP9PaEc1Q+vQNsAWFk2nTrW4100BD9RcYlL49NSkjh0zKepTL7k4VWYkUJI2ME2DoBv9CavIs+YY7CbB9vs/+5huRwx97tmKVminH3BteJdZFCFI5bGkPKT/TJ/NGI6D56aH7UyVpk6mYDEpm9x2rtHYZ7ndKjVFHdCE9PYLJfnUN8fnpg2T03oShYmmtXQSqYjwuYbKgFpa4Vo/vSzGW98XR129cajRa0ZTWO3AVqgr3+jNHSJi6zK28l/sclHFMik2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Zgd2FIFviWWFFsHRnPV6ruCFrjWqWs9TWr9RN26trZI=;
 b=iJ94UtsN1fIXQnJQOiiX05zM2gKKkucLfLM6639FbujkasmDyzAfM1bOg5/9UDhJpSezO45+8jENdkOXphYv08q2T3lSn2bnzFQGISBZE5/MC7Qsm8dc/t3VA6wpbXGRwh+RqFtXN3Rb/YRRKJO0TUV9NINtCijkmU7m2wTYhdI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 17 Apr 2023 12:26:21 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Simon Gaiser <simon@invisiblethingslab.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: RFC: disable HPET legacy mode after timer check
Message-ID: <ZD0ezeo6AIUYjGe4@Air-de-Roger>
References: <cb408368-077d-edb5-b4ad-f80086db48c1@invisiblethingslab.com>
 <0ac3fce6-dcd2-4521-6207-ede4d90e656b@citrix.com>
 <ZDaVPiJTt8q74nQw@Air-de-Roger>
 <2094ea22-a58e-fde0-8a77-f13675161a4a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2094ea22-a58e-fde0-8a77-f13675161a4a@suse.com>
X-ClientProxiedBy: LO4P123CA0416.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18b::7) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH0PR03MB7065:EE_
X-MS-Office365-Filtering-Correlation-Id: 0ae2a5fa-df37-4f56-dcec-08db3f2e3336
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	B1nT8vm21pCET1YcIe79swcJeXctXeTRvTMWIckZmNFqp0HQB85eFZN3Lr9riAOCc0ww5A+xShSDH3PAedlNqmEjvvvTs2BE0kM8kCc6mebTATjKbyUg6M5GSBoTut8iEdFPVKrgVNrz/LaNI0fiJBgMm8+P9DmnCO3g3xbzBNI3Ylb0y8OAwwG2s04UBfWho3Be7OO0sk8njgbB78KTnGW46oG1E5wFblmXssgOkc9CLhxBthcIwOnTdWIe0tKzMsng2DQ7dMmDe7j5yJFpfGel+RMyEuY+W3osnSe2RlMSqa/rGDRZCZJOGiNedilasqaEjo3u5aTKRZuMI3Ai5OxUu9MYaP1k1/WQoV4bRpgRWqmh76p35+ORc0kliMWVlO19OSkS0h3VMsXIY5mCZVo8k1W3qwR4jgPysBi1jZrPwCabU6xsSdqTiDVV2oLeL6kC54nzg11vAubZ9RiS7+NAewad6OKtHK2u45tTFE+2h2ByyueBZA5XoToaPIhmiG9oR3MRMz+bkn33L0vIlWspu4lf+bJn0SBGaEAoypabZV4rrPepdGyL31DhrRvY
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(7916004)(136003)(376002)(39860400002)(396003)(346002)(366004)(451199021)(316002)(53546011)(6512007)(186003)(9686003)(6666004)(6486002)(86362001)(38100700002)(26005)(107886003)(33716001)(6506007)(82960400001)(66476007)(66556008)(83380400001)(66946007)(6916009)(4326008)(54906003)(41300700001)(85182001)(8936002)(8676002)(5660300002)(2906002)(478600001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NnNpOXhrcnBUdDE4YWxRd1FjcEoyRFJHSTlPRkIwaENZeGRnL1lvZUpEU0c5?=
 =?utf-8?B?ZnhzQ3ZoaVJRZnpic3pwK0t2VWhlVThzVkx2S0NzUDVTTzFxUEhLM2ZZS1dt?=
 =?utf-8?B?TDRUTWg2Z05vMTB1WjBvS0E5elNNdU9tcmU3RzFwM3V5U3RUMzg4MzE1ZlZN?=
 =?utf-8?B?NUt2YkphZ3cvcmszRnhXaytUT3dkaXRxb1dBZy9IOWpJTHRuUHg1YkJEZjIx?=
 =?utf-8?B?d3hBQlFkS1c2bXZGbGcvTVBHS21Zd3JEQlN0MDdnT0w2a1ljK1VyUmtBWW1i?=
 =?utf-8?B?UENXakl6SnV3ditmS1VVTWxWV2R3NWg2cjdLNUhTR21nYjJIOVhYZkNJR3Nr?=
 =?utf-8?B?SG5wZGtFOGRsNjE3V0VuZWJPRGZyWThjNE51eFR5WUN4b3R4VnlLcEpzYTJR?=
 =?utf-8?B?YUN1dHlFTWhXckc5ZHcwMktJM0tla0VpQ3lmMWlzQVZNWGhQWWxZVlpYZmxE?=
 =?utf-8?B?Ni92U2xDc21td1F1WkhsYTJEU0R1QzNudkN6OXhFMkJObURFRmFwZXU3aDFo?=
 =?utf-8?B?T0NoNytpODRrR1B1MHhYaklKU0R3ak8yNWNNVXQ1cDJyMTJMNzNLR2RzZVBw?=
 =?utf-8?B?WGVDMWQwemMySnluTWFSdkI2Q3ZKNDdQWWJ4UFlydUNjMkZDa3FWRmQ0MlNC?=
 =?utf-8?B?dmZBd0VxS1Q5ZE5jVlE5b2N3cUNyOEpzN1crNWQxdnNEQ1lXSlc1V3Zha2FJ?=
 =?utf-8?B?Q210TmV4VXdJVmZ3WWRaancwZHpxL3NPR0tHTHcrWTFjUGVQRFdIYWRZK2xu?=
 =?utf-8?B?dDRyZ0NTUnVzeVp1ajErOC9zaW9jTlZrK08rVjlmOWRaakMvbVZSV0VCclIx?=
 =?utf-8?B?TEdTS3o4UnNIbHJVd1dWaTd2UkJOS2FyL2J3OGZRa2oyNWlBaTIxa20zeEF5?=
 =?utf-8?B?NXpmQUtGM3dxRTR5WnVhUFVscXlxQUZmcXBwM3lyTFF3QktUd3k5MmJjOTZq?=
 =?utf-8?B?b3c5SjFCdnZUU0lqcjNaRXRRMk5VMGRObGwyM0QxN0c4SUF6RFhIOFJBYzgy?=
 =?utf-8?B?UzJxMm96QWd3cjBTR3dscUhzb2UybnNZQVAwcFlIMU1UOTFLUE1vUzlGQm05?=
 =?utf-8?B?NDRHUkhtZEpTaVlFUG9wMzRYYU5SV0VuZm11dzd4aDRLTVhWRy9mRTIyazRq?=
 =?utf-8?B?TFEwdklRZ2JmSmhxaSswcE1zVDRRcE45Q0xyclB6czhqdGkzdnoyV3pxRUlw?=
 =?utf-8?B?L3NONkxqZHdqSXNQa1JHcDNWRk9TelJDK0NPTnArVHBrZko0Uno3QnNwSzZG?=
 =?utf-8?B?N0FMVW9MT2hWT294NzBycFVCOHJnczNqY0VPU2tIb1JIOHFSbTllbzgrZlkw?=
 =?utf-8?B?SXRmdzZ1MU4wNCtDNTJLQUxJWmNOK2x6NFRJWW82SmNpSGdqdGJnTGwrRDZ5?=
 =?utf-8?B?QTJnU2Z4a0FFT2htWFQvMHFDTENsdXFVNHllQnNDUG5XTWpmeFB5YUtlMnRo?=
 =?utf-8?B?ZTNtWjNzYXRwdzY0RVpQSHdUZUhxdFhyWGxLelZ3ZEMrZmhCN0NVS0RuOXVQ?=
 =?utf-8?B?OWJocDBVSzNzSjQxTlc5WEY3dForSVlZYkZoVk85QlZMbG1QL0tjN0ZreEo2?=
 =?utf-8?B?dGRZYVI1R20xZGI5aGgrbmJac25WSlg0UVN5VmYvZmxBNUdjMktDVzFkeFBE?=
 =?utf-8?B?eG5uKzczRUtMQ3VqTnJzMXJaL0x6RU1MYXhZQ0d4V1VrZTY3eVBVU05BdGFi?=
 =?utf-8?B?UU1TSUpUQkhMTWRVK3htMEJJbnpYaTBBeGx3ZkVPaDUvQmcvWjJrNVZQaUUw?=
 =?utf-8?B?NjhRNGpWU3d4UnBOVDUzNFFvdHJUK09taXgwMjJ0NUlUL0lZT0dvNldwaFo1?=
 =?utf-8?B?K3NTM1VaMXVueVRqQSt0MlNTaTlGbGpROTRMSmhqTmhUczlLZWV1cjgyS1Q3?=
 =?utf-8?B?QldRZmFqYnRsaitKSUxFT2ZwU2FhQkhtZjY4bWlQTFA2MFdyQkc3ZXFkVGNM?=
 =?utf-8?B?TTJMT0MvUkU5YUY0RVpzWXl0RFA0Q1JuOXFMdFJuUU53M2l3WUNRYlAxazRt?=
 =?utf-8?B?dTJMay9mSkMvcXlyYWVlYTVqSDd4UXoyNjB5bW9GUlMzcHN0SnFFZGoyNEdG?=
 =?utf-8?B?S2NiazE5alBFRGRxblFObytHQVJsclZ5SEdKVkRBc3lMZ3p1UmE3NHhoTThk?=
 =?utf-8?Q?lOZzpjiITqp9vk4lmXShXehp7?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	ref431zevfkxIi1R006L0aG4amod2Wy+qnz2P7wD1PxtUkxTFsc5s+QPLibVFF7fWPNFxtBrczxfrbH+jxUnBHEotgXlycEnPbmn6UB+8GhLeCN8xe0+J5IxXDXpw3sleHnioGNDgbZRW5UvALWfzCsKFrSDz8l4RA4wEGtB7hBws8ZaBKHdHukfb8c6lcthhmB2QbsLJm2jw5mQPRBlBedZtL0IhbtXU9vlzgl2qLHMObZ0/Xwgp6ohNLkQ9NUO4eKhpRDJCzwvEpE/mkE83Bcm3S/y6UobHWnHhDuUPFpvQOpBCnlxUXbPTcTSlvpabx3bY8q46LxzFWOIBs9B4kYKyugkwABMjcgtVn7scL8R6VLDhF0pxthJJD8VLwdFmHjbVEpgTit9M7JjdWer++XQFjjfo/ZVB7QzLMcXzg7ohxVw3Xj5Khemn9HqgT6pLpxMz5hc97lsSQ+u/OMFcMDt8DHn+iF7dD/QXpeydQ14MqHGFivIchurIVSFiWrBVZFgXVp2mbkhFFRIOe6aEYQ6S4PiPZunrOquLS1jeK4iK8EaQr1Z80c5vpllygM/V5wBvBh8qYUkGTz3Q9CjmXSvbSDtdpmuv/MntgXNhbuRBDqjZjqeMKgPseqKkP1wVoHWd4FO3AIZ5DeEGhZFV0/8Gy/bw4RPxAwd2MDtHPerrBsKyMnVl3PW1di6ahDYlkj0nP9QpPYbsf3V9T1rok3Xfh1Jsu2TUEkBlIf7PqpEIy7yhcV28SlpuegIVq0Q4OQTnjaO3QYIL8pppKX7NSc8nchgkEp2308gNvhzG2NuAmnP+wKZbwTCufTSNbmc4++o7J6ZWri9xdiFI6g4VbIlvYA+Ri6OiriAx/NKl4A=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0ae2a5fa-df37-4f56-dcec-08db3f2e3336
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 10:26:26.6082
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dP0/oAnqHyDzkNVODi0dQFitg/+sObqY1ZbCLOQVKL6jU8EfHfroMCGJjHH4RX/Och4SZjnMeXqzDsmAeq1QEw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB7065

On Mon, Apr 17, 2023 at 10:49:38AM +0200, Jan Beulich wrote:
> On 12.04.2023 13:25, Roger Pau Monné wrote:
> > On Tue, Apr 11, 2023 at 12:20:13PM +0100, Andrew Cooper wrote:
> >> On 11/04/2023 11:30 am, Simon Gaiser wrote:
> >>> Hi,
> >>>
> >>> I have been recently looking into getting S0ix working on Xen [1].
> >>>
> >>> Thanks to a tip from Andrew I found that the HPET legacy mode was
> >>> preventing my test system from reaching a package C-state lower than PC7
> >>> and thereby also preventing S0ix residency.
> >>>
> >>> For testing I simply modified check_timer() to disable it again after it
> >>> checked the timer irq:
> >>>
> >>> --- a/xen/arch/x86/io_apic.c
> >>> +++ b/xen/arch/x86/io_apic.c
> >>> @@ -1966,6 +1969,8 @@ static void __init check_timer(void)
> >>>  
> >>>              if ( timer_irq_works() )
> >>>              {
> >>> +                hpet_disable_legacy_replacement_mode();
> >>>                  local_irq_restore(flags);
> >>>                  return;
> >>>              }
> >>>
> >>>
> >>> With this [2] I'm able to reach S0ix residency for some time and for short
> >>> periods the systems power consumption goes down to the same level as with
> >>> native Linux!
> >>
> >> Excellent progress!
> >>
> >>> It reaches low power states only for a fraction of the suspend to idle
> >>> time, so something still makes the CPU/chipset think it should leave the
> >>> low power mode, but that's another topic.
> >>
> >> Do you have any further info here?  There are a range of possibilities,
> >> from excess timers in Xen (e.g. PV guests default to a 100Hz timer even
> >> though no guests actually want it AFAICT), or the 1s TSC rendezvous
> >> (which isn't actually needed on modern systems), all the way to the
> >> platform devices not entering d3hot.
> >>
> >>>
> >>> I tried to understand how all the timer code interacts with disabling
> >>> the legacy mode. I think it only would break cpuidle if X86_FEATURE_ARAT
> >>> is not available (Which is available on my test system and indeed I
> >>> didn't run into obvious breakage). 
> >>>
> >>> Is this (disabled PIT && !ARAT) a configuration that exists (and needs
> >>> to be supported)?
> >>>
> >>> Did I miss something else? (Very much possible, given that this is way
> >>> above my existing experience with X86 and Xen internals.)
> >>
> >> Xen's code is a mess and needs an overhaul.
> >>
> >> Right now, we're using the timer as "a source of interrupts" to try and
> >> check that we've got things set up suitably.  But this doesn't need to
> >> be the PIT, or a timer at all - it just needs to be "an interrupt coming
> >> in from the platform".
> > 
> > I would even question whether that testing is useful overall.  We test
> > a single IO-APIC pin, which still leaves room for the rest of them to
> > not be properly configured, and Xen might not be using the PIT timer at
> > the end.
> 
> Testing one pin is sufficient for the intended purpose (proving that
> the delivery route platform -> IO-APIC -> LAPIC works), leaving aside
> firmware possibly configuring multiple IO-APICs inconsistently. Yet
> if there are multiple IO-APICs, I'm afraid we have no way of knowing
> how to trigger any of the pins of secondary ones. Even if we went to
> figure out what devices are connected to it, we'd then still have no
> (rudimentary) device drivers knowing how to interact with the devices.

That's why I think the test is not very useful.  Also, the delivery
route not being properly configured will be quite obvious when dom0
attempts to use any device, as it would get timeouts.

I think it's fine to test that the timer interrupt in use by Xen is
working, but forcing this kind of interrupt delivery test doesn't seem
especially useful to me, the more so given that we keep accumulating
workarounds to make it work on newer platforms.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:29:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 10:29:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521937.810945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poM6W-0003mK-7a; Mon, 17 Apr 2023 10:29:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521937.810945; Mon, 17 Apr 2023 10:29:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poM6W-0003mD-4Q; Mon, 17 Apr 2023 10:29:00 +0000
Received: by outflank-mailman (input) for mailman id 521937;
 Mon, 17 Apr 2023 10:28:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poM6V-0003m7-8B
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 10:28:59 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2047.outbound.protection.outlook.com [40.107.7.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a7e3efdd-dd0a-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 12:28:55 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7926.eurprd04.prod.outlook.com (2603:10a6:20b:2ab::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 10:28:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 10:28:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7e3efdd-dd0a-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=D3FHKDYtRITZV+EZmZve6AtnIoxd+ukU0gEAR+UDJgQD6edrspHFJw994OtA3Hq+ys3sMljuAgQMnMhBN1DHUe+kejmHua81DainJWEOTth+Oo4z4ny2yt50C68+Y4Pi3ohNB7vJt0gcGys1oX3IQZax+vTneysrWCF9t1gjeUQLpCbzuRiN/ayzXhDgiYffJ4hRUS6Q6+saxWauhwlASr6HPkV4H565P8aQVdp3/VkpeGSCHDRbqe4XvjcTz9LXrVgikx+gujaDTo+wZ8ElYUhq4ThiwCjDMbc6jQLsrrFUXA14vxfBCDTcasymSNRx4/65c7ufOAKMwGtmc3Fixw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yQHT1oZP17QpYfE5QRddGAismzOzCVueQ4OA1oVKvXk=;
 b=Gw6zGCRqsymWAIwo9O2QtEle2zLdH+swiFopMfpxT/ypU10E/32CeBHZRNYj1xKEQZnFgXEUFWHts991WhLfg8kWG553QcvS9ogOV0MkEzwVQVHrxrejzpr1Nd3YqEjb/xsQlkI3GJYBWXhLWsZRLfYy1nG2+ixtZHEnYu2XBmyRMfDSPpUE/QNhu7kxAGHajb6gKRGVNNVC9HLbFp5+lQbgjsa5oqKVgDlguqxqgSgvbGOnzgrvQT2vuK9SDar5W8mYlkNzRWphW09ucfBTJ55j3ubtPs97GoNtJvMPr1cF05Dme3p1nrQT5MBnr/x3hiim4YqvSOcuXPH3ZVvFNw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yQHT1oZP17QpYfE5QRddGAismzOzCVueQ4OA1oVKvXk=;
 b=LoeQzoxTZ0mANFbxeASw+agRbIc7dNMEJ2bC/UutfDxts9esKklUpG7PJAQ9PN/MfKWFPkn85nu/ueF62E/JT8VxmPRHlX2KCvzO27r0og88UOoTFkEm4p449rla6h7yh9XkQjwdjptf4/+JWSI9G3oC37vYcxIFV19n71XrFHekpurZM63iA+04X/QMyAz07mOR8rarLU9mqyJPFZ1Z9Ca3/rq1AkehMAsqow3WT4Wr5BqiW3jFhf07E5qOoS7CHyaQHcfuaInBQ+or9TND7/93yIle0W4DyoJdtbYsAeSnf9nTFSx6QLxAaSZ4h+V8qY1j03oP8nK9bPXvbkKjbw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1cde4c30-bc48-7170-d465-11ed8617449f@suse.com>
Date: Mon, 17 Apr 2023 12:28:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230415195816.3717648-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230415195816.3717648-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0133.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7926:EE_
X-MS-Office365-Filtering-Correlation-Id: 2640094d-75dc-44d1-ec02-08db3f2e7aa0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	yby0j/TMz5f2S+qpsIpSGJncRiTe6K+FnWLAQ21WNLIGVoSoHsDlNzy668+A2lyuhKN7Xvhq4wUW8y1tkBMyG/EEn6hc+7y0AwVjsYvD6F0dFxJhgfADAu/hudcfozr3AcGnyOXHX+9y0GMDAANqw/OKb77wMT5hOPNgpQ+HMdm1mwFR+nIHwbbon0Q49jWq3pNmfODoiG3Ikukql97NMxZE6FrL+N0Q8gVg1vwKQNcgbDNq4z/BvKLeIzdkqm5JqDfZDmLmEa43h9qJDh+PxsedoeBYX7u3Xva+lDt7JLEpAVhF0skgIpWtugtRUw1PQm7cQBzyDGzIa+NeCXriSZaZ7TFxTmQIwUji10dZQ+kdgc7OZbp7S8uIzVww1JpxzgczDme3FLWRi+UhamWKpJXoxlfdlHSp+Ng+4dQRSomZR6+MvfZvC7YZm5ewRRhooh+lYjjh2nGjVESII6xzXvJJlTIWtxVMbfc4RVJ0YW9l5g4qOnJWu5dYiMTyIVZ/hSh0sjCFZoGmDynJm9iJDm42nfW2ElO0VasjB6M3hRDcYaDihsBYU/X3Odc884zSckrV163exlxuKdegRM1J88LxkUIr/owAlOTna1DF6QtUkFk6vI86Ez+DavrxSlb9BuOiZrLBelwNd9et/IZtBw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(136003)(39860400002)(396003)(366004)(376002)(451199021)(38100700002)(66899021)(8676002)(8936002)(5660300002)(2906002)(36756003)(86362001)(31696002)(478600001)(6486002)(54906003)(31686004)(186003)(2616005)(26005)(6506007)(6512007)(53546011)(66946007)(66476007)(83380400001)(316002)(41300700001)(4326008)(6916009)(66556008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?STg1alZDb3NOaDVjYzUwMW5ub2Vxekp6Z3phMjhhbTZiZkJXNFpIdDU5by9v?=
 =?utf-8?B?d1NYalJDdytxQW9nTE1vdjRSS1E3UElIamNqUlVOVmZQRlEzUmRiVUhBbDAx?=
 =?utf-8?B?WVNjY0dKV2VXTHM5S082RzNVc1hNeGxIallzV1dMaTZlR1ZEMHdOV0Y1akY1?=
 =?utf-8?B?RXdCTzdldXZOVE1tVUJxQjMwZnR3QnY3dmNGaTY3SCtRRnBaRjNNR0dZWlJy?=
 =?utf-8?B?ZkVFL3BBOG1GZUdtQ2NETDdqZWRPQUpoQVg1TG95aURJNTlUdkM0VDllQzE3?=
 =?utf-8?B?K09idTVScmduU1ZIUEpBRlpYemhQc2dvV0ZLa3ZtazRnWEs0dG1LRlhxemNr?=
 =?utf-8?B?c1JMckZBZEFEWWpFMjROdk1uV0ZPZk1qdGpRaWhxdjlVLzF1T1J1aHZmajN1?=
 =?utf-8?B?dlNxRzRLVDdNZWEya2hwUk5JU3gzdk9UKzNVVUlzTVIweUZ3WFZNamtMeDAr?=
 =?utf-8?B?YjBiQ0dSNzRJYlhRMWgwUjRjMWFCbTVmYzhIU1BtZUhZM1BKM0dBUU5LdllV?=
 =?utf-8?B?YVdVSjl3SzN3UFFMMXhGQk00MHhEOG9WZWovVnMvRVlDR0FscXFQby9aVTR4?=
 =?utf-8?B?U21WTnVjOVFxdzBLU2xNYnB2L0o3T2p3MVQ1ME4zOEpZcEVyeFlNbTNPbGxR?=
 =?utf-8?B?bnhpZHNFWFkrNCtOVnRybEc0Z0dFV3JGQ1MxUHlWNXJOSklueXpLYjBsQXhi?=
 =?utf-8?B?QWthemF2WG1CQWxKbGJHVWc0c0duSEM0N2FLZUhrc20xN2g2TERKck1DdHJU?=
 =?utf-8?B?VFpkM09oMm44L01IRi9CdUdLeEFLenl0Rmx1OGFsRWMrS1JzL3huTXBLdGlP?=
 =?utf-8?B?U2pPQXlKOE1PS0dxZ0ppZ00vdVcxRXNKQUhZWGtJQUI5SFgxenNZWFk4QjBr?=
 =?utf-8?B?ZjBpRE9tZVM5WVJDV3Zpd0VQOE9wSjBpRHVDZ29IMWRyTU5pRHFSdGhaRmpX?=
 =?utf-8?B?Z3VaZ1VhQlEwZkN1enRuaE41Nm5JMUNaM0MzMEVPQ0VxcmlnQ3FDeVJ5bHpx?=
 =?utf-8?B?NXFBdGxQdE9MdWNrT0hveElnWWdYdFZNQ2JscXB1NncrcUJRc2NVVFRwTEsw?=
 =?utf-8?B?SkRudnFxU09jc1F1elk1Q0Z0ZzVyQ2xYQkhNTXRQR1RUTXZnSDQvbmN4Nm1l?=
 =?utf-8?B?VW9SQnlXY3JnQjBGeHg4ckIwTkxLOWFpQjVYb0xXNHh4YWN2RHRpWHZPS0VF?=
 =?utf-8?B?RFdiWFYzbnRsNlNTck9DK3V4eTBxYkRGQXhVV0dUTzFWTDY1b2VydU9haGpO?=
 =?utf-8?B?MUVBdmVaQlYwdGExQWVSL0VuYmtvT2twblMwTEZ5T2VZV1ZaMnpFRkc3SVBo?=
 =?utf-8?B?RWFDWmVPRmNUQWpzYzY2dHcxbm1zRE10cXRHMFdMeHp4Szl3WnliNkd1YTF4?=
 =?utf-8?B?UWZpZzViZk5qUVhSSjdlc1N2Zm1ITzFEc1NkMFZoWlA4YjNZZDB3bkZwbzNH?=
 =?utf-8?B?a2hUV3RkSFYyQXFDVG5CUE1lcGVkKzl2MmswVjBSbGFyTXo5UTQrL0xldmpI?=
 =?utf-8?B?bWlvWFplOXZUMDczZ01Dc2JMaUppOFFncnFCSnh1aDhIM2xuRUYyVkpHanFV?=
 =?utf-8?B?eXlhODA0Ni9OVHNCWmVqWEE3SmRNTFBmaEhVT1dDajlDbUp1WnpJajYyM3VB?=
 =?utf-8?B?azF4Uk5QZnNKTklHQkU5Y3d0MlZGZ2dMejdGcSszWm1NNEZ3aUwvMndvVlBR?=
 =?utf-8?B?cTl2SS94am8rNWh5cjNvaXN6R2NJQzlNeUVIN2hvU01RRUJuRzFEakNUYTIx?=
 =?utf-8?B?RXM0OU8ybjIxRDAzQ09VL0kvN3RCcjcwOGdOK25VMTlwQnlaMkpCZTZmQk55?=
 =?utf-8?B?bzQwdGw0c2NDTFRBVm0xNnA5UENTR0RmSTBQWlRiUlJBVHVJWHp6cThqUndL?=
 =?utf-8?B?dExqSUFRSkExUlhTb0oyeWJmdGU1VHV4OC9LNFFCbXB4dnBPUlVPZDhQbmEx?=
 =?utf-8?B?M0p3ZTFNcUx6WTVBQXcvMnlmUkpUWG5veEFnRDRkN3UzL0Jsek05QThjMGZk?=
 =?utf-8?B?ZDBmLytZTXIxcC9QYUEyeE91L3NBemQyUytiS05iZ0tYaVRwWGMyWlVFNTBk?=
 =?utf-8?B?NGdxR2IxOHZpYnlCbU5UY3Nna2JwVXJuZWNzQThScHFzQ21NQmlLdlNFU1px?=
 =?utf-8?Q?mjlKNDDwWP8meun7XD7963mAM?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2640094d-75dc-44d1-ec02-08db3f2e7aa0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 10:28:26.3320
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: TkhlIJn8Z8NTlSSlvU3j2OPcD/v8oT3izyunj9p1Erl2MIxvo36p4N/i0duSZJSX3fFHJTvkrpt3+S9LMPv04Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7926

On 15.04.2023 21:58, Andrew Cooper wrote:
> Right now, trying to apply a livepatch on any system with CET shstk (AMD Zen3
> or later, Intel Tiger Lake or Sapphire Rapids and later) fails as follows:
> 
>   (XEN) livepatch: lp: Verifying enabled expectations for all functions
>   (XEN) common/livepatch.c:1591: livepatch: lp: timeout is 30000000ns
>   (XEN) common/livepatch.c:1703: livepatch: lp: CPU28 - IPIing the other 127 CPUs
>   (XEN) livepatch: lp: Applying 1 functions
>   (XEN) hi_func: Hi! (called 1 times)
>   (XEN) Hook executing.
>   (XEN) Assertion 'local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu))' failed at arch/x86/smp.c:265
>   (XEN) *** DOUBLE FAULT ***
>   <many double faults>
> 
> The assertion failure is from a global (system wide) TLB flush initiated by
> modify_xen_mappings().  I'm not entirely sure when this broke, and I'm not
> sure exactly what causes the #DF's, but it doesn't really matter either
> because they highlight a latent bug that I'd overlooked with the CET-SS vs
> patching work in the first place.

Which perhaps warrants a Fixes: tag at least for that latter change you
mention?

> While we're careful to arrange for the patching CPU to avoid encountering
> non-shstk memory with transient shstk perms, other CPUs can pick these
> mappings up too if they need to re-walk for uarch reasons.
> 
> Another bug is that for livepatching, we only disable CET if shadow stacks are
> in use.  Running on Intel CET systems when Xen is only using CET-IBT will
> crash in arch_livepatch_quiesce() when trying to clear CR0.WP with CR4.CET
> still active.
> 
> Also, we never went and cleared the dirty bits on .rodata.  This would
> matter (for the same reason it matters on .text - it becomes a valid target
> for WRSS), but we never actually patch .rodata anyway.

Maybe worth making explicit that this (the clearing of D bits for .rodata)
also isn't changed here? Otherwise this reads as if you meant to deal with
this as well.

> --- a/xen/arch/x86/alternative.c
> +++ b/xen/arch/x86/alternative.c
> @@ -382,24 +382,28 @@ static int __init cf_check nmi_apply_alternatives(
>       */
>      if ( !(alt_done & alt_todo) )
>      {
> -        unsigned long cr0, cr4;
> -
> -        cr0 = read_cr0();
> -        cr4 = read_cr4();
> -
> -        if ( cr4 & X86_CR4_CET )
> -            write_cr4(cr4 & ~X86_CR4_CET);
> -
> -        /* Disable WP to allow patching read-only pages. */
> -        write_cr0(cr0 & ~X86_CR0_WP);
> +        /*
> +         * Relax perms on .text to be RWX, so we can modify them.
> +         *
> +         * This relaxes perms globally, but we run ahead of bringing APs
> +         * online, so only have our own TLB to worry about.
> +         */
> +        modify_xen_mappings_lite(XEN_VIRT_START + MB(2),
> +                                 (unsigned long)&__2M_text_end,
> +                                 PAGE_HYPERVISOR_RWX);
> +        flush_local(FLUSH_TLB_GLOBAL);
>  
>          _apply_alternatives(__alt_instructions, __alt_instructions_end,
>                              alt_done);
>  
> -        write_cr0(cr0);
> -
> -        if ( cr4 & X86_CR4_CET )
> -            write_cr4(cr4);
> +        /*
> +         * Reinstate perms on .text to be RW.  This also cleans out the dirty

I suppose you mean RX here, matching ...

> +         * bits, which matters when CET Shstk is active.
> +         */
> +        modify_xen_mappings_lite(XEN_VIRT_START + MB(2),
> +                                 (unsigned long)&__2M_text_end,
> +                                 PAGE_HYPERVISOR_RX);

... the code.

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5879,6 +5879,77 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>      return modify_xen_mappings(s, e, _PAGE_NONE);
>  }
>  
> +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_HAS_ALTERNATIVE)

In line with your observation that this wants to be ||, ...

> +/*
> + * Similar to modify_xen_mappings(), but used by the alternatives and
> + * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
> + * responsibility of the caller, and *MUST* not be introduced here.
> + *
> + * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
> + * Must be called with preset flags, and over present mappings.

(s/preset/present/ ?)

> + * Must be called on leaf page boundaries.
> + */
> +void modify_xen_mappings_lite(unsigned long s, unsigned long e, unsigned int _nf)

... perhaps use init_or_livepatch here? At which point the #if may want
to go away, as in the !LIVEPATCH case the code then will be discarded
post-init anyway? The more that HAS_ALTERNATIVE is always true on x86
anyway.

> +{
> +    unsigned long v = s, fm, nf;
> +
> +    /* Set of valid PTE bits which may be altered. */
> +#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
> +    _nf &= FLAGS_MASK;
> +
> +    fm = put_pte_flags(FLAGS_MASK);
> +    nf = put_pte_flags(_nf);
> +
> +    ASSERT(nf & _PAGE_PRESENT);
> +    ASSERT(IS_ALIGNED(s, PAGE_SIZE) && s >= XEN_VIRT_START);
> +    ASSERT(IS_ALIGNED(e, PAGE_SIZE) && e <= XEN_VIRT_END);

I can see why you want s page-aligned, but does e really need to be?

> +    while ( v < e )
> +    {
> +        l2_pgentry_t *pl2e = &l2_xenmap[l2_table_offset(v)];
> +        l2_pgentry_t l2e = l2e_read_atomic(pl2e);
> +        unsigned int l2f = l2e_get_flags(l2e);
> +
> +        ASSERT(l2f & _PAGE_PRESENT);
> +
> +        if ( l2e_get_flags(l2e) & _PAGE_PSE )
> +        {
> +            ASSERT(l1_table_offset(v) == 0);
> +
> +            l2e_write_atomic(pl2e, l2e_from_intpte((l2e.l2 & ~fm) | nf));
> +
> +            v += 1UL << L2_PAGETABLE_SHIFT;
> +            continue;
> +        }
> +
> +        /* else decend to l1 */

Nit: "descend"?

Jan
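
For illustration, the masked update in the quoted hunk boils down to a plain
read-modify-write on the flag bits. A minimal stand-alone sketch, using a
hypothetical mask value rather than Xen's real intpte encoding:

```c
#include <stdint.h>

/* Demo only: DEMO_FLAGS_MASK is a hypothetical stand-in for
 * put_pte_flags(FLAGS_MASK); Xen's real intpte layout differs
 * (e.g. NX sits in bit 63 after put_pte_flags()). */
#define DEMO_FLAGS_MASK 0x63ULL /* stand-in for PRESENT|RW|ACCESSED|DIRTY */

/* Mirrors the quoted l2e_from_intpte((l2e.l2 & ~fm) | nf): only bits
 * inside the mask may change; the frame address and all other bits are
 * preserved.  nf is masked here for safety, matching the patch's
 * "_nf &= FLAGS_MASK" done before conversion. */
static uint64_t demo_apply_flags(uint64_t pte, uint64_t fm, uint64_t nf)
{
    return (pte & ~fm) | (nf & fm);
}
```

Note that because the frame bits fall outside the mask, the mapping itself can
never be redirected by this helper, only its permissions changed.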


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:29:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 10:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521938.810955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poM6k-00048S-Hu; Mon, 17 Apr 2023 10:29:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521938.810955; Mon, 17 Apr 2023 10:29:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poM6k-00048L-F6; Mon, 17 Apr 2023 10:29:14 +0000
Received: by outflank-mailman (input) for mailman id 521938;
 Mon, 17 Apr 2023 10:29:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLV6=AI=citrix.com=prvs=464b9e9d0=roger.pau@srs-se1.protection.inumbo.net>)
 id 1poM6i-0003m7-Ty
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 10:29:12 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b00e9972-dd0a-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 12:29:10 +0200 (CEST)
Received: from mail-bn7nam10lp2103.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.103])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Apr 2023 06:29:08 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB7065.namprd03.prod.outlook.com (2603:10b6:510:29b::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 10:29:06 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 10:29:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Mon, 17 Apr 2023 12:29:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: HEADS UP: re-adding the armhf boxes to osstest
Message-ID: <ZD0fbNMqT7tMZVAq@Air-de-Roger>
References: <ZDkmu0mgy23ypaL7@Air-de-Roger>
 <92e6ea3e-a381-a77e-f909-bf65d009647f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <92e6ea3e-a381-a77e-f909-bf65d009647f@suse.com>
MIME-Version: 1.0

On Mon, Apr 17, 2023 at 10:23:59AM +0200, Jan Beulich wrote:
> On 14.04.2023 12:11, Roger Pau Monné wrote:
> > We finally had the broken PDU replaced in the osstest colo, and the
> > armhf boxes are operational again (those are the arndales and the
> > cubietrucks).
> > 
> > I've run some ad-hoc tests on them and they look fine. I plan to bless
> > them before the end of the day.
> > 
> > As usual, keep an eye on any failures that could be caused by the
> > newly added boxes.
> 
> Sadly recent flights look to be reporting them as broken again.

I've unblessed arndale-bluewater, which doesn't seem to reboot properly
when the reboot is initiated by the OS:

The system is going down NOW!
Sent SIGKILL to all processes
Apr 17 06:59:53.337634 
Requesting system reboot
Apr 17 06:59:53.337668 [ 1109.675149] reboot: Reû

The other boxes seem to be fine.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:32:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 10:32:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521946.810965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poM9k-0005oZ-WD; Mon, 17 Apr 2023 10:32:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521946.810965; Mon, 17 Apr 2023 10:32:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poM9k-0005oS-Tf; Mon, 17 Apr 2023 10:32:20 +0000
Received: by outflank-mailman (input) for mailman id 521946;
 Mon, 17 Apr 2023 10:32:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6fLN=AI=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1poM9j-0005oK-1J
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 10:32:19 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1f98c3ae-dd0b-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 12:32:17 +0200 (CEST)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1poM8N-00BF8h-AH; Mon, 17 Apr 2023 10:30:55 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id E90B63002A3;
 Mon, 17 Apr 2023 12:30:50 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id B6C8B24209FD2; Mon, 17 Apr 2023 12:30:50 +0200 (CEST)
Date: Mon, 17 Apr 2023 12:30:50 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Message-ID: <20230417103050.GF83892@hirez.programming.kicks-ass.net>
References: <20230414225551.858160935@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230414225551.858160935@linutronix.de>

On Sat, Apr 15, 2023 at 01:44:13AM +0200, Thomas Gleixner wrote:

> Background
> ----------
> 
> The reason why people are interested in parallel bringup is to shorten
> the (kexec) reboot time of cloud servers to reduce the downtime of the
> VM tenants. There are obviously other interesting use cases for this
> like VM startup time, embedded devices...

...

>   There are two issue there:
> 
>     a) The death by MCE broadcast problem
> 
>        Quite some (contemporary) x86 CPU generations are affected by
>        this:
> 
>          - MCE can be broadcasted to all CPUs and not only issued locally
>            to the CPU which triggered it.
> 
>          - Any CPU which has CR4.MCE == 0, even if it sits in a wait
>            for INIT/SIPI state, will cause an immediate shutdown of the
>            machine if a broadcasted MCE is delivered.

When doing kexec, CR4.MCE should already have been set to 1 by the prior
kernel, no?


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:34:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 10:34:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521952.810975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poMBz-0006NB-CQ; Mon, 17 Apr 2023 10:34:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521952.810975; Mon, 17 Apr 2023 10:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poMBz-0006N4-9L; Mon, 17 Apr 2023 10:34:39 +0000
Received: by outflank-mailman (input) for mailman id 521952;
 Mon, 17 Apr 2023 10:34:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poMBx-0006Mu-UB
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 10:34:37 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20604.outbound.protection.outlook.com
 [2a01:111:f400:fe16::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 726c8fb4-dd0b-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 12:34:35 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7926.eurprd04.prod.outlook.com (2603:10a6:20b:2ab::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 10:34:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 10:34:34 +0000
Message-ID: <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>
Date: Mon, 17 Apr 2023 12:34:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Kevin Tian <kevin.tian@intel.com>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger> <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger> <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger> <87leivw8qp.fsf@epam.com>
 <ZD0cyXLt1knXyUzA@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD0cyXLt1knXyUzA@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 17.04.2023 12:17, Roger Pau Monné wrote:
> On Fri, Apr 14, 2023 at 01:30:39AM +0000, Volodymyr Babchuk wrote:
>> Above I have proposed another view on this. I hope it will work for
>> you. Just to reiterate, the idea is to allow "harmless" refcounts to be
>> left after returning from pci_remove_device(). By "harmless" I mean
>> that owners of those refcounts will not try to access the physical PCI
>> device once pci_remove_device() has finished.
> 
> I'm not strictly a maintainer of this piece of code, albeit I have an
> opinion.  I would also like to hear Jan's opinion, since he is the
> maintainer.

I'm afraid I can't really appreciate the term "harmless refcounts". Whoever
holds a ref is entitled to access the device. As stated before, I see only
two ways of getting things consistent: Either pci_remove_device() is
invoked upon dropping of the last ref, or it checks that it is dropping the
last one. The former looks architecturally cleaner to me, but I can accept
that moving there might be more of a change, so I wouldn't object to going
the latter route.
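
The two consistency schemes described above can be sketched as follows. This
is a toy model in C; the struct and function names are illustrative stand-ins,
not actual Xen internals:

```c
/* Toy model of the two schemes: (1) removal is triggered implicitly by
 * dropping the last reference; (2) pci_remove_device() itself verifies
 * that it is dropping the last reference and fails otherwise. */
#include <assert.h>
#include <stdbool.h>

struct pci_dev_stub {
    int refcount;
    bool removed;
};

/* Scheme 1: the device is removed when the last ref is dropped. */
static void put_ref_and_maybe_remove(struct pci_dev_stub *d)
{
    if ( --d->refcount == 0 )
        d->removed = true;   /* removal logic runs here, and only here */
}

/* Scheme 2: removal checks that no other ref is outstanding. */
static int remove_device_checked(struct pci_dev_stub *d)
{
    if ( d->refcount != 1 )
        return -1;           /* someone still holds a ref: refuse */
    d->refcount = 0;
    d->removed = true;
    return 0;
}
```

Either way, no holder of a live reference can observe a removed device, which
is the consistency property being asked for.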

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:36:38 2023
Message-ID: <8667c4cf-e974-6107-8a98-4a14a89f9266@amd.com>
Date: Mon, 17 Apr 2023 12:35:50 +0200
Subject: Re: [XEN][PATCH v5 09/17] xen/iommu: protect iommu_add_dt_device()
 with dtdevs_lock
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-10-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-10-vikram.garhwal@amd.com>

Hi Vikram,

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Protect iommu_add_dt_device() with dtdevs_lock to prevent concurrent access.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
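
The locking pattern the patch under review applies can be sketched as below.
This is a single-threaded toy model: the names mirror the patch
(dtdevs_lock), but the lock here is only an invariant check standing in for
Xen's spinlock, and the device list is reduced to a counter:

```c
/* Minimal sketch: every addition to the DT-device list goes through one
 * lock, so the duplicate check and the insertion are atomic with respect
 * to other callers. The "lock" below merely asserts that the critical
 * section is never entered twice; real code takes a spinlock. */
#include <assert.h>

static int dtdevs_lock_held;      /* stand-in for Xen's dtdevs_lock */
static unsigned int dt_dev_count; /* stand-in for the device list */

static void dtdevs_acquire(void)
{
    assert(!dtdevs_lock_held);    /* no nesting, no concurrent entry */
    dtdevs_lock_held = 1;
}

static void dtdevs_release(void)
{
    dtdevs_lock_held = 0;
}

/* Illustrative stand-in for iommu_add_dt_device(): both the lookup for
 * an already-added device and the insertion happen under the lock. */
static int add_dt_device_sketch(void)
{
    dtdevs_acquire();
    dt_dev_count++;               /* insertion into the device list */
    dtdevs_release();
    return 0;
}
```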



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:42:18 2023
Message-ID: <2b8c7abe-9bc5-cd8f-b650-1de0205c4ee9@suse.com>
Date: Mon, 17 Apr 2023 12:41:38 +0200
Subject: Re: [PATCH] xen/livepatch: Fix secure_payload() in non-debug builds
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230417095815.3734434-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230417095815.3734434-1-andrew.cooper3@citrix.com>

On 17.04.2023 11:58, Andrew Cooper wrote:
> The ro_pages + rw_pages + text_pages != payload->pages check is not something
> which is reasonable to skip at runtime.  Rewrite it to not be an ASSERT().

Isn't this merely a sanity check? IOW, isn't returning -EINVAL in this case
misleading, as it labels as "invalid input" what really is an internal error
in Xen? But anyway, I guess I'll leave this to the maintainers.

> As the code is being shuffled anyway, rework the logic calling
> arch_livepatch_secure() to reduce its verbosity.

By "verbosity", do you mean lines of code?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:44:38 2023
Message-ID: <5f5c9395-f9e0-cb9c-4929-cc0134f9b895@citrix.com>
Date: Mon, 17 Apr 2023 11:44:06 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Content-Language: en-GB
To: Peter Zijlstra <peterz@infradead.org>,
 Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <20230414225551.858160935@linutronix.de>
 <20230417103050.GF83892@hirez.programming.kicks-ass.net>
In-Reply-To: <20230417103050.GF83892@hirez.programming.kicks-ass.net>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO3P265CA0030.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:387::12) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 10:44:17.1596
 (UTC)

On 17/04/2023 11:30 am, Peter Zijlstra wrote:
> On Sat, Apr 15, 2023 at 01:44:13AM +0200, Thomas Gleixner wrote:
>
>> Background
>> ----------
>>
>> The reason why people are interested in parallel bringup is to shorten
>> the (kexec) reboot time of cloud servers to reduce the downtime of the
>> VM tenants. There are obviously other interesting use cases for this
>> like VM startup time, embedded devices...
> ...
>
>>   There are two issues there:
>>
>>     a) The death by MCE broadcast problem
>>
>>        Quite some (contemporary) x86 CPU generations are affected by
>>        this:
>>
>>          - MCE can be broadcast to all CPUs and not only issued locally
>>            to the CPU which triggered it.
>>
>>          - Any CPU which has CR4.MCE == 0, even if it sits in a wait
>>            for INIT/SIPI state, will cause an immediate shutdown of the
>>            machine if a broadcasted MCE is delivered.
> When doing kexec, CR4.MCE should already have been set to 1 by the prior
> kernel, no?

No(ish).  Purgatory can't take #MC, or NMIs for that matter.

It's cleaner to explicitly disable CR4.MCE and let the system reset
(with all the MC banks properly preserved), than it is to take #MC while
the IDT isn't in sync with the handlers, and wander off into the weeds.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:46:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 10:46:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521969.811015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poMNB-0001Cp-84; Mon, 17 Apr 2023 10:46:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521969.811015; Mon, 17 Apr 2023 10:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poMNB-0001Ci-4M; Mon, 17 Apr 2023 10:46:13 +0000
Received: by outflank-mailman (input) for mailman id 521969;
 Mon, 17 Apr 2023 10:46:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poMN9-0001Cc-3V
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 10:46:11 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0ea0a116-dd0d-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 12:46:09 +0200 (CEST)
Received: from mail-bn7nam10lp2105.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.105])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Apr 2023 06:46:05 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW4PR03MB6965.namprd03.prod.outlook.com (2603:10b6:303:1a4::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 10:46:02 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 10:46:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ea0a116-dd0d-11ed-b21e-6b7b168915f2
Message-ID: <b9fb5819-14c4-98a9-dea3-bf20748a0fda@citrix.com>
Date: Mon, 17 Apr 2023 11:45:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] xen/livepatch: Fix .altinstructions safety checks
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230415022229.3475033-1-andrew.cooper3@citrix.com>
 <dd6615fb-dc2d-9885-e3a3-9cf0954f57d3@suse.com>
In-Reply-To: <dd6615fb-dc2d-9885-e3a3-9cf0954f57d3@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 17/04/2023 11:01 am, Jan Beulich wrote:
> On 15.04.2023 04:22, Andrew Cooper wrote:
>> The prior check has && vs || mixups, making it tautologically false and thus
>> providing no safety at all.  There are boundary errors too.
>>
>> First start with a comment describing how the .altinstructions and
>> .altinstr_replacement sections interact, and perform suitable cross-checking.
>>
>> Second, rewrite the alt_instr loop entirely from scratch.  Origin sites have
>> non-zero size, and must be fully contained within .text.
> Or .init.text, which may be worth making explicit (perhaps also in the
> respective code comment). Or am I misremembering and livepatch blobs,
> unlike e.g. Linux modules, don't support the concept of .init.* sections?

Here, we're talking strictly about the .alt* and .text of the livepatch
itself.

I suppose it would be nice if the safety check / other one-time hooks
could live in a local .init.text for the livepatch itself, but we don't
have this concept at the moment.

>
>>  Any non-zero sized
>> replacements must be fully contained within .altinstr_replacement.
> Yes, but if they're all zero-sized, in principle no .altinstr_replacement
> section could be there. Not sure though whether that's worth supporting
> as a special case.

This is discussed in the source code comment.

Right now, all zero-length replacements reference the
.altinstr_replacement section, so the section is present even if it
contains no data.

If this changes in the future, we can accommodate.

> Furthermore, ...
>
>> Fixes: f8a10174e8b1 ("xsplice: Add support for alternatives")
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> CC: Ross Lagerwall <ross.lagerwall@citrix.com>
>>
>> As a further observation, .altinstr_replacement shouldn't survive beyond its
>> use in apply_alternatives(), but the disp32 relative references (for x86 at
>> least) in alt_instr force .altinstr_replacement to be close to the payload
>> while being applied.
> ... if .altinstr_replacement is retained right now anyway, isn't it legal
> to fold it with another section (e.g. .text) while linking?

Why would it be?

We're auditing what the current tools actually produce, not what an
arbitrary theoretical one might.

>
>> --- a/xen/common/livepatch.c
>> +++ b/xen/common/livepatch.c
>> @@ -803,28 +803,82 @@ static int prepare_payload(struct payload *payload,
>>      if ( sec )
>>      {
>>  #ifdef CONFIG_HAS_ALTERNATIVE
>> +        /*
>> +         * (As of April 2023), Alternatives are formed of:
>> +         * - An .altinstructions section with an array of struct alt_instr's.
>> +         * - An .altinstr_replacement section containing instructions bytes.
> Since this is generic code, perhaps drop "bytes"? (Or else use "instruction
> bytes"?)

Technically bytes is ok here, but yeah - it will be slightly better without.

>> +         * An individual alt_instr contains:
>> +         * - An orig reference, pointing into .text with a nonzero length
>> +         * - A repl reference, pointing into .altinstr_replacement
>> +         *
>> +         * It is legal to have zero-length replacements, meaning it is legal
>> +         * for the .altinstr_replacement section to be empty too.  An
>> +         * implementation detail means that a zero-length replacement's repl
>> +         * reference will be the start of the .altinstr_replacement section.
> "will" or "may"? And especially if indeed "will", is it really worth mentioning
> this here in this way, posing a fair risk of the comment going stale entirely
> unnoticed?

Hmm.  Thinking about it, I expect that it's not actually always the start.

The code uses pushsection/ref/{repl bytes}/popsection, so for an empty
replacement it will probably reference the end of the previous
replacement.

I should tweak the comment, but the logic is fine.  All I check is that
[repl, repl + size) is entirely within .altinstr_replacement, with no
special case at 0.

>
>> +         */
>> +        const struct livepatch_elf_sec *repl_sec;
>>          struct alt_instr *a, *start, *end;
>>  
>>          if ( !section_ok(elf, sec, sizeof(*a)) )
>>              return -EINVAL;
>>  
>> +        /* Tolerate an empty .altinstructions section... */
>> +        if ( sec->sec->sh_size == 0 )
>> +            goto alt_done;
>> +
>> +        /* ... but otherwise, there needs to be something to alter... */
>> +        if ( payload->text_size == 0 )
>> +        {
>> +            printk(XENLOG_ERR LIVEPATCH "%s Alternatives provided, but no .text\n",
>> +                   elf->name);
>> +            return -EINVAL;
>> +        }
>> +
>> +        /* ... and something to be altered to. */
>> +        repl_sec = livepatch_elf_sec_by_name(elf, ".altinstr_replacement");
>> +        if ( !repl_sec )
>> +        {
>> +            printk(XENLOG_ERR LIVEPATCH "%s .altinstructions provided, but no .altinstr_replacement\n",
>> +                   elf->name);
>> +            return -EINVAL;
>> +        }
>> +
>>          start = sec->load_addr;
>>          end = sec->load_addr + sec->sec->sh_size;
>>  
>>          for ( a = start; a < end; a++ )
>>          {
>> -            const void *instr = ALT_ORIG_PTR(a);
>> -            const void *replacement = ALT_REPL_PTR(a);
>> +            const void *orig = ALT_ORIG_PTR(a);
>> +            const void *repl = ALT_REPL_PTR(a);
>> +
>> +            /* orig must be fully within .text. */
>> +            if ( orig               < payload->text_addr ||
>> +                 a->orig_len        > payload->text_size ||
>> +                 orig + a->orig_len > payload->text_addr + payload->text_size )
>> +            {
>> +                printk(XENLOG_ERR LIVEPATCH
>> +                       "%s Alternative orig %p+%#x outside payload text %p+%#lx\n",
>> +                       elf->name, orig, a->orig_len, payload->text_addr, payload->text_size);
>> +                return -EINVAL;
>> +            }
>>  
>> -            if ( (instr < region->start && instr >= region->end) ||
>> -                 (replacement < region->start && replacement >= region->end) )
>> +            /*
>> +             * repl must be fully within .altinstr_replacement, even if they
>> +             * happen to both have zero length.
> Who is "they ... both" here? Surely it doesn't matter here whether "orig_len"
> is zero.

I haven't explicitly rejected it, but an orig_len of 0 is an error.

"they" is repl_len and altinstr_replacement.


And FYI, I need to repost this as part of a 3-patch series in order to
not break the ARM build.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:49:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Message-ID: <dee8bad5-6f5f-7fc7-dab7-78811ee7c2d2@amd.com>
Date: Mon, 17 Apr 2023 12:49:40 +0200
Subject: Re: [XEN][PATCH v5 10/17] xen/iommu: Introduce
 iommu_remove_dt_device()
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-11-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-11-vikram.garhwal@amd.com>

Hi Vikram,

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Remove master device from the IOMMU. This will be helpful when removing the
> overlay nodes using dynamic programming during run time.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/drivers/passthrough/device_tree.c | 38 +++++++++++++++++++++++++++
>  xen/include/xen/iommu.h               |  2 ++
>  2 files changed, 40 insertions(+)
> 
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index 457df333a0..a77a217f3d 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -126,6 +126,44 @@ int iommu_release_dt_devices(struct domain *d)
>      return 0;
>  }
> 
> +int iommu_remove_dt_device(struct dt_device_node *np)
> +{
> +    const struct iommu_ops *ops = iommu_get_ops();
> +    struct device *dev = dt_to_dev(np);
> +    int rc;
> +
> +    if ( !ops )
> +        return -EOPNOTSUPP;
> +
> +    spin_lock(&dtdevs_lock);
> +
> +    if ( iommu_dt_device_is_assigned_locked(np) )
> +    {
> +        rc = -EBUSY;
> +        goto fail;
> +    }
> +
> +    /*
> +     * The driver which supports generic IOMMU DT bindings must have
> +     * these callback implemented.
s/these/this/, since you are checking for a single callback.

> +     */
> +    if ( !ops->remove_device )
> +    {
> +        rc = -EOPNOTSUPP;
> +        goto fail;
> +    }
> +
> +    /* Remove master device from the IOMMU if latter is present and available. */
80 char line length exceeded -> please fix.
Additionally, similarly to the comment inside iommu_add_dt_device(), I would also
write that the driver is responsible for unsetting the is_protected flag.

Apart from that:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:51:53 2023
Date: Mon, 17 Apr 2023 12:51:26 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Message-ID: <ZD0krtCOrEwiKMFP@Air-de-Roger>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger>
 <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger>
 <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger>
 <87leivw8qp.fsf@epam.com>
 <ZD0cyXLt1knXyUzA@Air-de-Roger>
 <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>
In-Reply-To: <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>

On Mon, Apr 17, 2023 at 12:34:31PM +0200, Jan Beulich wrote:
> On 17.04.2023 12:17, Roger Pau Monné wrote:
> > On Fri, Apr 14, 2023 at 01:30:39AM +0000, Volodymyr Babchuk wrote:
> >> Above I have proposed another view on this. I hope, it will work for
> >> you. Just to reiterate, idea is to allow "harmless" refcounts to be left
> >> after returning from pci_remove_device(). By "harmless" I mean that
> >> owners of those refcounts will not try to access the physical PCI
> >> device if pci_remove_device() is already finished.
> > 
> > I'm not strictly a maintainer of this piece of code, albeit I have an
> > opinion.  I would also like to hear Jan's opinion, since he is the
> > maintainer.
> 
> I'm afraid I can't really appreciate the term "harmless refcounts". Whoever
> holds a ref is entitled to access the device. As stated before, I see only
> two ways of getting things consistent: Either pci_remove_device() is
> invoked upon dropping of the last ref,

With this approach, what would be the implementation of
PHYSDEVOP_manage_pci_remove?  Would it just check whether the pdev
exists and either return 0 or -EBUSY?

> or it checks that it is dropping the
> last one. The former looks architecturally cleaner to me, but I can accept
> that moving there might be more of a change, so wouldn't object to going
> the latter route.

One of my concerns is what is expected of PHYSDEVOP_manage_pci_remove:
I don't think it's expected to return 0 while there are users inside
the hypervisor still holding a reference to the pdev.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 10:57:34 2023
Message-ID: <56a62f69-8340-9d29-8fe3-fec84e084af0@suse.com>
Date: Mon, 17 Apr 2023 12:56:50 +0200
Subject: Re: [PATCH 4/9] x86emul: support CMPccXADD
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <7fdf882f-0667-e0f1-8183-2dc1a344f4fb@suse.com>
 <c973ddcf-506b-8318-07cc-bb177541619a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <c973ddcf-506b-8318-07cc-bb177541619a@citrix.com>
 =?utf-8?B?c01yK2RYQlZYa2J0WGNidk92TzA0QTVBZFAyck9EUkNOZ3ZNcGxkRVlMOFdX?=
 =?utf-8?B?b1dUcmFxdHZSbzhjZUt5VDUybGZmdEdNdjFWVEh1eTFPb2lHcGl1RHZnN0tq?=
 =?utf-8?B?Q0U2TEdpd2tOT05NMzF3bVl1a0w5OEttSmFteDI3bFZ2RmZhWjJtWGtwaFRq?=
 =?utf-8?B?clpXT2lwemorZ2RHZVg5RWcvc3RZWTRIL3ozNFFJSzFmQkhsQzF3S3pwL1d6?=
 =?utf-8?Q?KQag6QVX0ZX47FZJwuMr0BCNG?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 438e1e33-10cc-41bc-d28a-08db3f327374
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 10:56:52.3013
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /C+5ygCdiH+zsmsqHhgEKuCPg6k9Azd0cKn8uCAk1MrJaWlYWgxzqL6uvQ75vFQlqS/soJU5oytwTsg5FY7DHg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7658

On 06.04.2023 21:28, Andrew Cooper wrote:
> On 04/04/2023 3:52 pm, Jan Beulich wrote:
>> Unconditionally wire this through the ->rmw() hook. Since x86_emul_rmw()
>> now wants to construct and invoke a stub, make stub_exn available to it
>> via a new field in the emulator state structure.
> 
> IMO, patch 5 should be re-ordered with this, because it removes one
> incidental change that's not really related to CMPccXADD.

Yeah, I can probably do that. The order here really is simply the order
things were written; I did notice the potential for the subsequent patch
only when already done with the one here.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> # SDE: -grr or -srf
> 
> The ISE makes a point of noting that CMPccXADD is implicitly locked,
> like XCHG.  (Unlike XCHG, there isn't a valid reg/reg encoding.)
> 
> Right now, the xchg emulation overrides lock_prefix, but I have a
> feeling that's stale now with the rmw() hook in place.  But it is
> dubious that we let xchg fall back to a non-atomic exchange if the rmw()
> hook is missing.
> Either way, I think it would be nice to clean that up so we don't have
> differences in the handling of instructions which the ISE at least
> claims are similar.

We can certainly revisit this (independently).

> Tangentially, what about the RAO instructions?

With the infrastructure we have right now, I don't see how we could get
their exception behavior (non-WB -> #GP) correct. Hence while I have
these on my todo list, I don't have immediate plans to deal with them.

>> --- a/tools/tests/x86_emulator/x86-emulate.h
>> +++ b/tools/tests/x86_emulator/x86-emulate.h
>> @@ -934,6 +935,8 @@ decode_0f38(struct x86_emulate_state *s,
>>              ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
>>          break;
>>  
>> +    case X86EMUL_OPC_VEX_66(0, 0xe0)
>> +     ... X86EMUL_OPC_VEX_66(0, 0xef): /* cmp<cc>xadd */
> 
> I know the style is a little mixed in the emulator, but
> 
> +    case X86EMUL_OPC_VEX_66(0, 0xe0) ...
> +         X86EMUL_OPC_VEX_66(0, 0xef): /* cmp<cc>xadd */
> 
> is more consistent with Xen style (because it's somewhat of a binary
> operator), and more readable IMO.

I don't mind; done.

>> --- a/xen/include/public/arch-x86/cpufeatureset.h
>> +++ b/xen/include/public/arch-x86/cpufeatureset.h
>> @@ -278,6 +278,7 @@ XEN_CPUFEATURE(SSBD,          9*32+31) /
>>  /* Intel-defined CPU features, CPUID level 0x00000007:1.eax, word 10 */
>>  XEN_CPUFEATURE(AVX_VNNI,     10*32+ 4) /*A  AVX-VNNI Instructions */
>>  XEN_CPUFEATURE(AVX512_BF16,  10*32+ 5) /*A  AVX512 BFloat16 Instructions */
>> +XEN_CPUFEATURE(CMPCCXADD,    10*32+ 7) /*A  CMPccXADD Instructions */
> 
> Given the non-triviality of this instruction, I'd prefer to keep this
> "a" until we've tried it on real hardware.

Hmm, yes, probably better.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 11:02:50 2023
Message-ID: <3bf70c3f-d360-30b7-3ea1-e076fdf94709@suse.com>
Date: Mon, 17 Apr 2023 13:02:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Kevin Tian <kevin.tian@intel.com>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger> <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger> <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger> <87leivw8qp.fsf@epam.com>
 <ZD0cyXLt1knXyUzA@Air-de-Roger>
 <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>
 <ZD0krtCOrEwiKMFP@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD0krtCOrEwiKMFP@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17.04.2023 12:51, Roger Pau Monné wrote:
> On Mon, Apr 17, 2023 at 12:34:31PM +0200, Jan Beulich wrote:
>> On 17.04.2023 12:17, Roger Pau Monné wrote:
>>> On Fri, Apr 14, 2023 at 01:30:39AM +0000, Volodymyr Babchuk wrote:
>>>> Above I have proposed another view on this. I hope it will work for
>>>> you. Just to reiterate, the idea is to allow "harmless" refcounts to
>>>> be left after returning from pci_remove_device(). By "harmless" I mean
>>>> that owners of those refcounts will not try to access the physical PCI
>>>> device once pci_remove_device() has finished.
>>>
>>> I'm not strictly a maintainer of this piece of code, albeit I have an
>>> opinion.  I would also like to hear Jan's opinion, since he is the
>>> maintainer.
>>
>> I'm afraid I can't really appreciate the term "harmless refcounts". Whoever
>> holds a ref is entitled to access the device. As stated before, I see only
>> two ways of getting things consistent: Either pci_remove_device() is
>> invoked upon dropping of the last ref,
> 
> With this approach, what would be the implementation of
> PHYSDEVOP_manage_pci_remove?  Would it just check whether the pdev
> exist and either return 0 or -EBUSY?

If the device doesn't (physically) exist, it would return e.g. -ENODEV.
If it still exists and the pdev also does, it would return e.g. -EBUSY,
yes.

Jan

>> or it checks that it is dropping the
>> last one. The former looks architecturally cleaner to me, but I can accept
>> that moving there might be more of a change, so wouldn't object to going
>> the latter route.
> 
> One of my concerns is what is expected of PHYSDEVOP_manage_pci_remove:
> I don't think it's expected to return 0 while there are users inside
> the hypervisor still holding a reference to the pdev.
> 
> Thanks, Roger.



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 11:15:58 2023
Message-ID: <e7456897-cde6-9bf7-2444-16de2aab6cce@citrix.com>
Date: Mon, 17 Apr 2023 12:15:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] x86/hvm: Disallow CR0.PG 1->0 transitions when CS.L=1
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230412183519.2996902-1-andrew.cooper3@citrix.com>
 <20230413150009.3145462-1-andrew.cooper3@citrix.com>
 <3a554475-9e74-39be-e03a-aaca2c22b857@suse.com>
In-Reply-To: <3a554475-9e74-39be-e03a-aaca2c22b857@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17/04/2023 9:05 am, Jan Beulich wrote:
> On 13.04.2023 17:00, Andrew Cooper wrote:
>> The Long Mode consistency checks exist to "ensure that the processor does not
>> enter an undefined mode or state that results in unpredictable behavior".  APM
>> Vol2 Table 14-5 "Long-Mode Consistency Checks" lists them, but there is no row
>> preventing the OS from trying to exit Long mode while in 64bit mode.  This
>> could leave the CPU in Protected Mode with an %rip above the 4G boundary.
>>
>> Experimentally, AMD CPUs really do permit this state transition.  An OS which
>> tries it hits an instant SHUTDOWN, even in cases where the truncation I expect
>> to be going on behind the scenes ought to result in sane continued execution.
> For my own understanding, which truncation are you referring to here?
> As you're in 1:1 mapped code, %rip can't really be meant. Clearly IDT
> and GDT would need to be (re)loaded to point to 32-bit-style tables, so
> the only thing left would seem to be %rsp. It's not clear to me whether
> after such an illegal mode switch its upper bits would be cleared or
> ignored ...

Outside of 64bit mode, all address generation is truncated to 32 bits.

So when %rip happens to be above 2^32, the fetch of the next instruction
ought to be from a truncated %eip, but my attempts to set up such an
experiment still crashed.
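
As a purely arithmetic illustration of the truncation being discussed (a sketch of the address masking only; whether real CPUs actually behave this way is exactly what the experiment above was probing):

```python
# Hypothetical sketch: outside 64-bit mode, address generation is
# truncated to 32 bits, so an %rip above the 4 GiB boundary "ought"
# to fetch from its low 32 bits. Names are illustrative, not a CPU model.
MASK_32 = 0xFFFF_FFFF

def truncate_to_32bit(address: int) -> int:
    """Model the 32-bit truncation applied to addresses outside 64-bit mode."""
    return address & MASK_32

rip = 0x1_0000_2000           # instruction pointer above 2^32
eip = truncate_to_32bit(rip)  # the address a 32-bit fetch would use
print(hex(eip))               # 0x2000
```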

I didn't spend too long investigating.  I've got too many other things
to do.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 11:20:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 11:20:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.521996.811075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poMtn-0007XR-79; Mon, 17 Apr 2023 11:19:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 521996.811075; Mon, 17 Apr 2023 11:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poMtn-0007XK-4C; Mon, 17 Apr 2023 11:19:55 +0000
Received: by outflank-mailman (input) for mailman id 521996;
 Mon, 17 Apr 2023 11:19:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=owEo=AI=molgen.mpg.de=pmenzel@srs-se1.protection.inumbo.net>)
 id 1poMtm-0007XE-2S
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 11:19:54 +0000
Received: from mx3.molgen.mpg.de (mx3.molgen.mpg.de [141.14.17.11])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c4205282-dd11-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 13:19:49 +0200 (CEST)
Received: from [141.14.220.45] (g45.guest.molgen.mpg.de [141.14.220.45])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested) (Authenticated sender: pmenzel)
 by mx.molgen.mpg.de (Postfix) with ESMTPSA id 1369861E4052B;
 Mon, 17 Apr 2023 13:19:48 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4205282-dd11-11ed-8611-37d641c3527e
Content-Type: multipart/mixed; boundary="------------sZY8260RBWtSJLYF8RPb2fpa"
Message-ID: <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de>
Date: Mon, 17 Apr 2023 13:19:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Content-Language: en-US
To: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E. J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <20230414225551.858160935@linutronix.de>
From: Paul Menzel <pmenzel@molgen.mpg.de>
In-Reply-To: <20230414225551.858160935@linutronix.de>

This is a multi-part message in MIME format.
--------------sZY8260RBWtSJLYF8RPb2fpa
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Dear Thomas,


Am 15.04.23 um 01:44 schrieb Thomas Gleixner:

> This is a complete rework of the parallel bringup patch series (V17)
> 
>      https://lore.kernel.org/lkml/20230328195758.1049469-1-usama.arif@bytedance.com
> 
> to address the issues which were discovered in review:

[…]

Thank you very much for your rework.

I tested this on the ASUS F2A85-M PRO, and got a delay of ten seconds.

```
[…]
[    0.258193] smpboot: CPU0: AMD A6-6400K APU with Radeon(tm) HD Graphics (family: 0x15, model: 0x13, stepping: 0x1)
[…]
[    0.259329] smp: Bringing up secondary CPUs ...
[    0.259527] x86: Booting SMP configuration:
[    0.259528] .... node  #0, CPUs:      #1
[    0.261007] After schedule_preempt_disabled
[   10.260990] CPU1 failed to report alive state
[   10.261070] smp: Brought up 1 node, 1 CPU
[   10.261073] smpboot: Max logical packages: 2
[   10.261074] smpboot: Total of 1 processors activated (7800.54 BogoMIPS)
[   10.261601] devtmpfs: initialized
[   10.261697] x86/mm: Memory block size: 128MB
```
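
The ten-second gap between "After schedule_preempt_disabled" and "CPU1 failed to report alive state" looks like a fixed bringup timeout expiring. A rough sketch of that wait-loop shape (hypothetical; the function name and 10 s constant mirror the log above, not the kernel's actual implementation):

```python
import time

def wait_for_alive(is_alive, timeout_s=10.0, poll_s=0.01):
    """Poll is_alive() until it returns True or timeout_s expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_alive():
            return True
        time.sleep(poll_s)
    return False

# A secondary CPU that never reports alive hits the timeout path:
if not wait_for_alive(lambda: False, timeout_s=0.05):
    print("CPU1 failed to report alive state")
```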

This delay was already present with v6.3-rc6-46-gde4664485abbc plus some 
custom (printk) patches on top, with dwmw2/parallel-6.2-rc3-v16 merged 
into it; that is the only combination I tested. I believe 
dwmw2/parallel-6.2-v17 failed to build for me when I tried to merge it 
into Linus’ master at the time. I didn’t get around to reporting that, 
and since you have now posted your rework, I am replying here.

I am going to try your branch directly in the next few days, but wanted 
to report back already.


Kind regards,

Paul
--------------sZY8260RBWtSJLYF8RPb2fpa
Content-Type: text/plain; charset=UTF-8;
 name="kodi-linux-6.3-rc6-smp-tglx.txt"
Content-Disposition: attachment; filename="kodi-linux-6.3-rc6-smp-tglx.txt"
Content-Transfer-Encoding: base64

WyAgICAwLjAwMDAwMF0gTGludXggdmVyc2lvbiA2LjMuMC1yYzYtMDAzMTEtZ2RlODIyNDk2
OWY2NiAocm9vdEBiZjE2ZjM2NDZhODQpIChnY2MgKERlYmlhbiAxMS4yLjAtMTIpIDExLjIu
MCwgR05VIGxkIChHTlUgQmludXRpbHMgZm9yIERlYmlhbikgMi40MCkgIzQ0NiBTTVAgUFJF
RU1QVF9EWU5BTUlDIFNhdCBBcHIgMTUgMTQ6MTI6MjkgVVRDIDIwMjMKWyAgICAwLjAwMDAw
MF0gQ29tbWFuZCBsaW5lOiBCT09UX0lNQUdFPS9ib290L3ZtbGludXotNi4zLjAtcmM2LTAw
MzExLWdkZTgyMjQ5NjlmNjYgcm9vdD0vZGV2L3NkYTMgcncgcXVpZXQgbm9pc2FwbnAgY3J5
cHRvbWdyLm5vdGVzdHMgaXB2Ni5kaXNhYmxlX2lwdjY9MSBzZWxpbnV4PTAKWyAgICAwLjAw
MDAwMF0geDg2L2ZwdTogU3VwcG9ydGluZyBYU0FWRSBmZWF0dXJlIDB4MDAxOiAneDg3IGZs
b2F0aW5nIHBvaW50IHJlZ2lzdGVycycKWyAgICAwLjAwMDAwMF0geDg2L2ZwdTogU3VwcG9y
dGluZyBYU0FWRSBmZWF0dXJlIDB4MDAyOiAnU1NFIHJlZ2lzdGVycycKWyAgICAwLjAwMDAw
MF0geDg2L2ZwdTogU3VwcG9ydGluZyBYU0FWRSBmZWF0dXJlIDB4MDA0OiAnQVZYIHJlZ2lz
dGVycycKWyAgICAwLjAwMDAwMF0geDg2L2ZwdTogeHN0YXRlX29mZnNldFsyXTogIDU3Niwg
eHN0YXRlX3NpemVzWzJdOiAgMjU2ClsgICAgMC4wMDAwMDBdIHg4Ni9mcHU6IEVuYWJsZWQg
eHN0YXRlIGZlYXR1cmVzIDB4NywgY29udGV4dCBzaXplIGlzIDgzMiBieXRlcywgdXNpbmcg
J3N0YW5kYXJkJyBmb3JtYXQuClsgICAgMC4wMDAwMDBdIHNpZ25hbDogbWF4IHNpZ2ZyYW1l
IHNpemU6IDE3NzYKWyAgICAwLjAwMDAwMF0gQklPUy1wcm92aWRlZCBwaHlzaWNhbCBSQU0g
bWFwOgpbICAgIDAuMDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAwMDAwMDAwMDAw
LTB4MDAwMDAwMDAwMDA5ZmJmZl0gdXNhYmxlClsgICAgMC4wMDAwMDBdIEJJT1MtZTgyMDog
W21lbSAweDAwMDAwMDAwMDAwOWZjMDAtMHgwMDAwMDAwMDAwMDlmZmZmXSByZXNlcnZlZApb
ICAgIDAuMDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAwMDAwMGYwMDAwLTB4MDAw
MDAwMDAwMDBmZmZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gQklPUy1lODIwOiBbbWVt
IDB4MDAwMDAwMDAwMDEwMDAwMC0weDAwMDAwMDAwNWZlNGNmZmZdIHVzYWJsZQpbICAgIDAu
MDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAwMDVmZTRkMDAwLTB4MDAwMDAwMDA3
ZmZmZmZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gQklPUy1lODIwOiBbbWVtIDB4MDAw
MDAwMDBmODAwMDAwMC0weDAwMDAwMDAwZmJmZmZmZmZdIHJlc2VydmVkClsgICAgMC4wMDAw
MDBdIEJJT1MtZTgyMDogW21lbSAweDAwMDAwMDAwZmVjMTAwMDAtMHgwMDAwMDAwMGZlYzEw
ZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAw
MTAwMDAwMDAwLTB4MDAwMDAwMDE3ZWZmZmZmZl0gdXNhYmxlClsgICAgMC4wMDAwMDBdIE5Y
IChFeGVjdXRlIERpc2FibGUpIHByb3RlY3Rpb246IGFjdGl2ZQpbICAgIDAuMDAwMDAwXSBT
TUJJT1MgMy4wLjAgcHJlc2VudC4KWyAgICAwLjAwMDAwMF0gRE1JOiBBU1VTIEYyQTg1LU1f
UFJPL0YyQTg1LU1fUFJPLCBCSU9TIDQuMTgtOS1nOTkxN2QyZDkxNSAwNC8xNy8yMDIzClsg
ICAgMC4wMDAwMDBdIHRzYzogRmFzdCBUU0MgY2FsaWJyYXRpb24gdXNpbmcgUElUClsgICAg
MC4wMDAwMDBdIHRzYzogSW5pdGlhbCB1c2VjIHRpbWVyIDYwMzU2MTUKWyAgICAwLjAwMDAw
MF0gdHNjOiBEZXRlY3RlZCAzOTAwLjI3MyBNSHogcHJvY2Vzc29yClsgICAgMC4wMDA3NTZd
IGU4MjA6IHVwZGF0ZSBbbWVtIDB4MDAwMDAwMDAtMHgwMDAwMGZmZl0gdXNhYmxlID09PiBy
ZXNlcnZlZApbICAgIDAuMDAwNzU5XSBlODIwOiByZW1vdmUgW21lbSAweDAwMGEwMDAwLTB4
MDAwZmZmZmZdIHVzYWJsZQpbICAgIDAuMDAwNzYzXSBsYXN0X3BmbiA9IDB4MTdmMDAwIG1h
eF9hcmNoX3BmbiA9IDB4NDAwMDAwMDAwClsgICAgMC4wMDA3NjhdIHg4Ni9QQVQ6IENvbmZp
Z3VyYXRpb24gWzAtN106IFdCICBXQyAgVUMtIFVDICBXQiAgV1AgIFVDLSBXVCAgClsgICAg
MC4wMDA5NDJdIGxhc3RfcGZuID0gMHg1ZmU0ZCBtYXhfYXJjaF9wZm4gPSAweDQwMDAwMDAw
MApbICAgIDAuMDA0MDAwXSBVc2luZyBHQiBwYWdlcyBmb3IgZGlyZWN0IG1hcHBpbmcKWyAg
ICAwLjAwNDAwMF0gQUNQSTogRWFybHkgdGFibGUgY2hlY2tzdW0gdmVyaWZpY2F0aW9uIGRp
c2FibGVkClsgICAgMC4wMDQwMDBdIEFDUEk6IFJTRFAgMHgwMDAwMDAwMDAwMEY2ODMwIDAw
MDAyNCAodjAyIENPUkV2NCkKWyAgICAwLjAwNDAwMF0gQUNQSTogWFNEVCAweDAwMDAwMDAw
NUZFNUEwRTAgMDAwMDc0ICh2MDEgQ09SRXY0IENPUkVCT09UIDAwMDAwMDAwIENPUkUgMjAy
MDA5MjUpClsgICAgMC4wMDQwMDBdIEFDUEk6IEZBQ1AgMHgwMDAwMDAwMDVGRTVCQkMwIDAw
MDExNCAodjA2IENPUkV2NCBDT1JFQk9PVCAwMDAwMDAwMCBDT1JFIDIwMjAwOTI1KQpbICAg
IDAuMDA0MDAwXSBBQ1BJOiBEU0RUIDB4MDAwMDAwMDA1RkU1QTI4MCAwMDE5M0EgKHYwMiBD
T1JFdjQgQ09SRUJPT1QgMDAwMTAwMDEgSU5UTCAyMDIwMDkyNSkKWyAgICAwLjAwNDAwMF0g
QUNQSTogRkFDUyAweDAwMDAwMDAwNUZFNUEyNDAgMDAwMDQwClsgICAgMC4wMDQwMDBdIEFD
UEk6IEZBQ1MgMHgwMDAwMDAwMDVGRTVBMjQwIDAwMDA0MApbICAgIDAuMDA0MDAwXSBBQ1BJ
OiBTU0RUIDB4MDAwMDAwMDA1RkU1QkNFMCAwMDAwOEEgKHYwMiBDT1JFdjQgQ09SRUJPT1Qg
MDAwMDAwMkEgQ09SRSAyMDIwMDkyNSkKWyAgICAwLjAwNDAwMF0gQUNQSTogTUNGRyAweDAw
MDAwMDAwNUZFNUJENzAgMDAwMDNDICh2MDEgQ09SRXY0IENPUkVCT09UIDAwMDAwMDAwIENP
UkUgMjAyMDA5MjUpClsgICAgMC4wMDQwMDBdIEFDUEk6IEFQSUMgMHgwMDAwMDAwMDVGRTVC
REIwIDAwMDA2MiAodjAzIENPUkV2NCBDT1JFQk9PVCAwMDAwMDAwMCBDT1JFIDIwMjAwOTI1
KQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBIUEVUIDB4MDAwMDAwMDA1RkU1QkUyMCAwMDAwMzgg
KHYwMSBDT1JFdjQgQ09SRUJPT1QgMDAwMDAwMDAgQ09SRSAyMDIwMDkyNSkKWyAgICAwLjAw
NDAwMF0gQUNQSTogSEVTVCAweDAwMDAwMDAwNUZFNUJFNjAgMDAwMUQwICh2MDEgQ09SRXY0
IENPUkVCT09UIDAwMDAwMDAwIENPUkUgMjAyMDA5MjUpClsgICAgMC4wMDQwMDBdIEFDUEk6
IElWUlMgMHgwMDAwMDAwMDVGRTVDMDMwIDAwMDA3MCAodjAyIEFNRCAgICBBTURJT01NVSAw
MDAwMDAwMSBBTUQgIDAwMDAwMDAwKQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBTU0RUIDB4MDAw
MDAwMDA1RkU1QzBBMCAwMDA1MUYgKHYwMiBBTUQgICAgQUxJQiAgICAgMDAwMDAwMDEgTVNG
VCAwNDAwMDAwMCkKWyAgICAwLjAwNDAwMF0gQUNQSTogU1NEVCAweDAwMDAwMDAwNUZFNUM1
QzAgMDAwNkIyICh2MDEgQU1EICAgIFBPV0VSTk9XIDAwMDAwMDAxIEFNRCAgMDAwMDAwMDEp
ClsgICAgMC4wMDQwMDBdIEFDUEk6IFZGQ1QgMHgwMDAwMDAwMDVGRTVDQzgwIDAwRjI2OSAo
djAxIENPUkV2NCBDT1JFQk9PVCAwMDAwMDAwMCBDT1JFIDIwMjAwOTI1KQpbICAgIDAuMDA0
MDAwXSBBQ1BJOiBSZXNlcnZpbmcgRkFDUCB0YWJsZSBtZW1vcnkgYXQgW21lbSAweDVmZTVi
YmMwLTB4NWZlNWJjZDNdClsgICAgMC4wMDQwMDBdIEFDUEk6IFJlc2VydmluZyBEU0RUIHRh
YmxlIG1lbW9yeSBhdCBbbWVtIDB4NWZlNWEyODAtMHg1ZmU1YmJiOV0KWyAgICAwLjAwNDAw
MF0gQUNQSTogUmVzZXJ2aW5nIEZBQ1MgdGFibGUgbWVtb3J5IGF0IFttZW0gMHg1ZmU1YTI0
MC0weDVmZTVhMjdmXQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBSZXNlcnZpbmcgRkFDUyB0YWJs
ZSBtZW1vcnkgYXQgW21lbSAweDVmZTVhMjQwLTB4NWZlNWEyN2ZdClsgICAgMC4wMDQwMDBd
IEFDUEk6IFJlc2VydmluZyBTU0RUIHRhYmxlIG1lbW9yeSBhdCBbbWVtIDB4NWZlNWJjZTAt
MHg1ZmU1YmQ2OV0KWyAgICAwLjAwNDAwMF0gQUNQSTogUmVzZXJ2aW5nIE1DRkcgdGFibGUg
bWVtb3J5IGF0IFttZW0gMHg1ZmU1YmQ3MC0weDVmZTViZGFiXQpbICAgIDAuMDA0MDAwXSBB
Q1BJOiBSZXNlcnZpbmcgQVBJQyB0YWJsZSBtZW1vcnkgYXQgW21lbSAweDVmZTViZGIwLTB4
NWZlNWJlMTFdClsgICAgMC4wMDQwMDBdIEFDUEk6IFJlc2VydmluZyBIUEVUIHRhYmxlIG1l
bW9yeSBhdCBbbWVtIDB4NWZlNWJlMjAtMHg1ZmU1YmU1N10KWyAgICAwLjAwNDAwMF0gQUNQ
STogUmVzZXJ2aW5nIEhFU1QgdGFibGUgbWVtb3J5IGF0IFttZW0gMHg1ZmU1YmU2MC0weDVm
ZTVjMDJmXQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBSZXNlcnZpbmcgSVZSUyB0YWJsZSBtZW1v
cnkgYXQgW21lbSAweDVmZTVjMDMwLTB4NWZlNWMwOWZdClsgICAgMC4wMDQwMDBdIEFDUEk6
IFJlc2VydmluZyBTU0RUIHRhYmxlIG1lbW9yeSBhdCBbbWVtIDB4NWZlNWMwYTAtMHg1ZmU1
YzViZV0KWyAgICAwLjAwNDAwMF0gQUNQSTogUmVzZXJ2aW5nIFNTRFQgdGFibGUgbWVtb3J5
IGF0IFttZW0gMHg1ZmU1YzVjMC0weDVmZTVjYzcxXQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBS
ZXNlcnZpbmcgVkZDVCB0YWJsZSBtZW1vcnkgYXQgW21lbSAweDVmZTVjYzgwLTB4NWZlNmJl
ZThdClsgICAgMC4wMDQwMDBdIE5vIE5VTUEgY29uZmlndXJhdGlvbiBmb3VuZApbICAgIDAu
MDA0MDAwXSBGYWtpbmcgYSBub2RlIGF0IFttZW0gMHgwMDAwMDAwMDAwMDAwMDAwLTB4MDAw
MDAwMDE3ZWZmZmZmZl0KWyAgICAwLjAwNDAwMF0gTk9ERV9EQVRBKDApIGFsbG9jYXRlZCBb
bWVtIDB4MTdlZmU3MDAwLTB4MTdlZmZkZmZmXQpbICAgIDAuMDA0MDAwXSBab25lIHJhbmdl
czoKWyAgICAwLjAwNDAwMF0gICBETUEgICAgICBbbWVtIDB4MDAwMDAwMDAwMDAwMTAwMC0w
eDAwMDAwMDAwMDBmZmZmZmZdClsgICAgMC4wMDQwMDBdICAgRE1BMzIgICAgW21lbSAweDAw
MDAwMDAwMDEwMDAwMDAtMHgwMDAwMDAwMGZmZmZmZmZmXQpbICAgIDAuMDA0MDAwXSAgIE5v
cm1hbCAgIFttZW0gMHgwMDAwMDAwMTAwMDAwMDAwLTB4MDAwMDAwMDE3ZWZmZmZmZl0KWyAg
ICAwLjAwNDAwMF0gICBEZXZpY2UgICBlbXB0eQpbICAgIDAuMDA0MDAwXSBNb3ZhYmxlIHpv
bmUgc3RhcnQgZm9yIGVhY2ggbm9kZQpbICAgIDAuMDA0MDAwXSBFYXJseSBtZW1vcnkgbm9k
ZSByYW5nZXMKWyAgICAwLjAwNDAwMF0gICBub2RlICAgMDogW21lbSAweDAwMDAwMDAwMDAw
MDEwMDAtMHgwMDAwMDAwMDAwMDllZmZmXQpbICAgIDAuMDA0MDAwXSAgIG5vZGUgICAwOiBb
bWVtIDB4MDAwMDAwMDAwMDEwMDAwMC0weDAwMDAwMDAwNWZlNGNmZmZdClsgICAgMC4wMDQw
MDBdICAgbm9kZSAgIDA6IFttZW0gMHgwMDAwMDAwMTAwMDAwMDAwLTB4MDAwMDAwMDE3ZWZm
ZmZmZl0KWyAgICAwLjAwNDAwMF0gSW5pdG1lbSBzZXR1cCBub2RlIDAgW21lbSAweDAwMDAw
MDAwMDAwMDEwMDAtMHgwMDAwMDAwMTdlZmZmZmZmXQpbICAgIDAuMDA0MDAwXSBPbiBub2Rl
IDAsIHpvbmUgRE1BOiAxIHBhZ2VzIGluIHVuYXZhaWxhYmxlIHJhbmdlcwpbICAgIDAuMDA0
MDAwXSBPbiBub2RlIDAsIHpvbmUgRE1BOiA5NyBwYWdlcyBpbiB1bmF2YWlsYWJsZSByYW5n
ZXMKWyAgICAwLjAwNDAwMF0gT24gbm9kZSAwLCB6b25lIE5vcm1hbDogNDM1IHBhZ2VzIGlu
IHVuYXZhaWxhYmxlIHJhbmdlcwpbICAgIDAuMDA0MDAwXSBPbiBub2RlIDAsIHpvbmUgTm9y
bWFsOiA0MDk2IHBhZ2VzIGluIHVuYXZhaWxhYmxlIHJhbmdlcwpbICAgIDAuMDA0MDAwXSBB
Q1BJOiBQTS1UaW1lciBJTyBQb3J0OiAweDgxOApbICAgIDAuMDA0MDAwXSBBQ1BJOiBMQVBJ
Q19OTUkgKGFjcGlfaWRbMHhmZl0gaGlnaCBlZGdlIGxpbnRbMHgxXSkKWyAgICAwLjAwNDAw
MF0gSU9BUElDWzBdOiBhcGljX2lkIDQsIHZlcnNpb24gMzMsIGFkZHJlc3MgMHhmZWMwMDAw
MCwgR1NJIDAtMjMKWyAgICAwLjAwNDAwMF0gQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1
c19pcnEgMCBnbG9iYWxfaXJxIDIgZGZsIGRmbCkKWyAgICAwLjAwNDAwMF0gQUNQSTogSU5U
X1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEgOSBnbG9iYWxfaXJxIDkgbG93IGxldmVsKQpbICAg
IDAuMDA0MDAwXSBBQ1BJOiBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRp
b24gaW5mb3JtYXRpb24KWyAgICAwLjAwNDAwMF0gQUNQSTogSFBFVCBpZDogMHgxMDIyODIx
MCBiYXNlOiAweGZlZDAwMDAwClsgICAgMC4wMDQwMDBdIHNtcGJvb3Q6IEFsbG93aW5nIDIg
Q1BVcywgMCBob3RwbHVnIENQVXMKWyAgICAwLjAwNDAwMF0gc21wYm9vdDogc21wYm9vdDog
WFhYIGVuZCBvZiBwcmVmaWxsX3Bvc3NpYmxlX21hcApbICAgIDAuMDA0MDAwXSBBZnRlciBw
cmVmaWxsX3Bvc3NpYmxlX21hcApbICAgIDAuMDA0MDAwXSBBZnRlciBpbml0X2NwdV90b19u
b2RlClsgICAgMC4wMDQwMDBdIEFmdGVyIGluaXRfZ2lfbm9kZXMKWyAgICAwLjAwNDAwMF0g
QWZ0ZXIgaW9fYXBpY19pbml0X21hcHBpbmdzClsgICAgMC4wMDQwMDBdIEFmdGVyIHg4Nl9p
bml0Lmh5cGVyLmd1ZXN0X2xhdGVfaW5pdApbICAgIDAuMDA0MDAwXSBbbWVtIDB4ODAwMDAw
MDAtMHhmN2ZmZmZmZl0gYXZhaWxhYmxlIGZvciBQQ0kgZGV2aWNlcwpbICAgIDAuMDA0MDAw
XSBBZnRlciBlODIwClsgICAgMC4wMDQwMDBdIGNsb2Nrc291cmNlOiByZWZpbmVkLWppZmZp
ZXM6IG1hc2s6IDB4ZmZmZmZmZmYgbWF4X2N5Y2xlczogMHhmZmZmZmZmZiwgbWF4X2lkbGVf
bnM6IDc2NDU1MTk2MDAyMTE1NjggbnMKWyAgICAwLjAwNDAwMF0gQWZ0ZXIgdW53aW5kX2lu
aXQKWyAgICAwLjAwNDAwMF0gQWZ0ZXIgc2V0dXBfYXJjaApbICAgIDAuMDA0MDAwXSBBZnRl
ciBzZXR1cF9jb21tYW5kX2xpbmUKWyAgICAwLjAwNDAwMF0gQWZ0ZXIgc2V0dXBfbnJfY3B1
X2lkcwpbICAgIDAuMDA0MDAwXSBzZXR1cF9wZXJjcHU6IE5SX0NQVVM6NjQgbnJfY3B1bWFz
a19iaXRzOjIgbnJfY3B1X2lkczoyIG5yX25vZGVfaWRzOjEKWyAgICAwLjAwNDAwMF0gcGVy
Y3B1OiBFbWJlZGRlZCA1NSBwYWdlcy9jcHUgczE4ODMyOCByODE5MiBkMjg3NjAgdTEwNDg1
NzYKWyAgICAwLjAwNDAwMF0gcGNwdS1hbGxvYzogczE4ODMyOCByODE5MiBkMjg3NjAgdTEw
NDg1NzYgYWxsb2M9MSoyMDk3MTUyClsgICAgMC4wMDQwMDBdIHBjcHUtYWxsb2M6IFswXSAw
IDEgClsgICAgMC4wMDQwMDBdIEFmdGVyIHNldHVwX3Blcl9jcHVfYXJlYXMKWyAgICAwLjAw
NDAwMF0gQWZ0ZXIgc21wX3BlcnBhcmVfYm9vdF9jcHUKWyAgICAwLjAwNDAwMF0gQWZ0ZXIg
Ym9vdF9jcHVfaG90cGx1Z19pbml0ClsgICAgMC4wMDQwMDBdIEZhbGxiYWNrIG9yZGVyIGZv
ciBOb2RlIDA6IDAgClsgICAgMC4wMDQwMDBdIEJ1aWx0IDEgem9uZWxpc3RzLCBtb2JpbGl0
eSBncm91cGluZyBvbi4gIFRvdGFsIHBhZ2VzOiA4OTg0NTEKWyAgICAwLjAwNDAwMF0gUG9s
aWN5IHpvbmU6IE5vcm1hbApbICAgIDAuMDA0MDAwXSBLZXJuZWwgY29tbWFuZCBsaW5lOiBC
T09UX0lNQUdFPS9ib290L3ZtbGludXotNi4zLjAtcmM2LTAwMzExLWdkZTgyMjQ5NjlmNjYg
cm9vdD0vZGV2L3NkYTMgcncgcXVpZXQgbm9pc2FwbnAgY3J5cHRvbWdyLm5vdGVzdHMgaXB2
Ni5kaXNhYmxlX2lwdjY9MSBzZWxpbnV4PTAKWyAgICAwLjAwNDAwMF0gVW5rbm93biBrZXJu
ZWwgY29tbWFuZCBsaW5lIHBhcmFtZXRlcnMgIm5vaXNhcG5wIEJPT1RfSU1BR0U9L2Jvb3Qv
dm1saW51ei02LjMuMC1yYzYtMDAzMTEtZ2RlODIyNDk2OWY2NiIsIHdpbGwgYmUgcGFzc2Vk
IHRvIHVzZXIgc3BhY2UuClsgICAgMC4wMDQwMDBdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxl
IGVudHJpZXM6IDUyNDI4OCAob3JkZXI6IDEwLCA0MTk0MzA0IGJ5dGVzLCBsaW5lYXIpClsg
ICAgMC4wMDQwMDBdIElub2RlLWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogMjYyMTQ0IChv
cmRlcjogOSwgMjA5NzE1MiBieXRlcywgbGluZWFyKQpbICAgIDAuMDA0MDAwXSBtZW0gYXV0
by1pbml0OiBzdGFjazpvZmYsIGhlYXAgYWxsb2M6b2ZmLCBoZWFwIGZyZWU6b2ZmClsgICAg
MC4wMDQwMDBdIHN0YWNrZGVwb3Q6IGFsbG9jYXRpbmcgaGFzaCB0YWJsZSB2aWEgYWxsb2Nf
bGFyZ2Vfc3lzdGVtX2hhc2gKWyAgICAwLjAwNDAwMF0gc3RhY2tkZXBvdCBoYXNoIHRhYmxl
IGVudHJpZXM6IDI2MjE0NCAob3JkZXI6IDksIDIwOTcxNTIgYnl0ZXMsIGxpbmVhcikKWyAg
ICAwLjAwNDAwMF0gc29mdHdhcmUgSU8gVExCOiBhcmVhIG51bSAyLgpbICAgIDAuMDA0MDAw
XSBNZW1vcnk6IDM0NzcxNjBLLzM2NTE1MDBLIGF2YWlsYWJsZSAoMTQzMzZLIGtlcm5lbCBj
b2RlLCAyMzQwSyByd2RhdGEsIDUzMDhLIHJvZGF0YSwgMjkwOEsgaW5pdCwgMTEwNjBLIGJz
cywgMTc0MDgwSyByZXNlcnZlZCwgMEsgY21hLXJlc2VydmVkKQpbICAgIDAuMDA0MDAwXSBT
TFVCOiBIV2FsaWduPTY0LCBPcmRlcj0wLTMsIE1pbk9iamVjdHM9MCwgQ1BVcz0yLCBOb2Rl
cz0xClsgICAgMC4wMDQwMDBdIEFmdGVyIG1tX2luaXQKWyAgICAwLjAwNDAwMF0gQWZ0ZXIg
cG9raW5nX2luaXQKWyAgICAwLjAwNDAwMF0gZnRyYWNlOiBhbGxvY2F0aW5nIDM4NjY0IGVu
dHJpZXMgaW4gMTUyIHBhZ2VzClsgICAgMC4wMDQwMDBdIGZ0cmFjZTogYWxsb2NhdGVkIDE1
MiBwYWdlcyB3aXRoIDMgZ3JvdXBzClsgICAgMC4wMDQwMDBdIER5bmFtaWMgUHJlZW1wdDog
ZnVsbApbICAgIDAuMDA0MDAwXSBBZnRlciBzY2hlZF9pbml0ClsgICAgMC4wMDQwMDBdIHJj
dTogUHJlZW1wdGlibGUgaGllcmFyY2hpY2FsIFJDVSBpbXBsZW1lbnRhdGlvbi4KWyAgICAw
LjAwNDAwMF0gcmN1OiAJUkNVIHJlc3RyaWN0aW5nIENQVXMgZnJvbSBOUl9DUFVTPTY0IHRv
IG5yX2NwdV9pZHM9Mi4KWyAgICAwLjAwNDAwMF0gCVRyYW1wb2xpbmUgdmFyaWFudCBvZiBU
YXNrcyBSQ1UgZW5hYmxlZC4KWyAgICAwLjAwNDAwMF0gCVJ1ZGUgdmFyaWFudCBvZiBUYXNr
cyBSQ1UgZW5hYmxlZC4KWyAgICAwLjAwNDAwMF0gCVRyYWNpbmcgdmFyaWFudCBvZiBUYXNr
cyBSQ1UgZW5hYmxlZC4KWyAgICAwLjAwNDAwMF0gcmN1OiBSQ1UgY2FsY3VsYXRlZCB2YWx1
ZSBvZiBzY2hlZHVsZXItZW5saXN0bWVudCBkZWxheSBpcyAyNSBqaWZmaWVzLgpbICAgIDAu
MDA0MDAwXSByY3U6IEFkanVzdGluZyBnZW9tZXRyeSBmb3IgcmN1X2Zhbm91dF9sZWFmPTE2
LCBucl9jcHVfaWRzPTIKWyAgICAwLjAwNDAwMF0gQWZ0ZXIgcmN1X2luaXQKWyAgICAwLjAw
NDAwMF0gTlJfSVJRUzogNDM1MiwgbnJfaXJxczogNDQwLCBwcmVhbGxvY2F0ZWQgaXJxczog
MTYKWyAgICAwLjAwNDAwMF0gcmN1OiBzcmN1X2luaXQ6IFNldHRpbmcgc3JjdV9zdHJ1Y3Qg
c2l6ZXMgYmFzZWQgb24gY29udGVudGlvbi4KWyAgICAwLjAwNDAwMF0gQWZ0ZXIgcmFuZG9t
X2luaXQoKQpbICAgIDAuMDA0MDAwXSBBZnRlciBib290X2luaXRfc3RhY2tfY2FuYXJ5Clsg
ICAgMC4wMDQwMDBdIHNwdXJpb3VzIDgyNTlBIGludGVycnVwdDogSVJRNy4KWyAgICAwLjAw
NDAwMF0gQ29uc29sZTogY29sb3VyIFZHQSsgODB4MjUKWyAgICAwLjAwNDAwMF0gcHJpbnRr
OiBjb25zb2xlIFt0dHkwXSBlbmFibGVkClsgICAgMC4wMDQwMDBdIEFDUEk6IENvcmUgcmV2
aXNpb24gMjAyMjEwMjAKWyAgICAwLjAwNDAwMF0gY2xvY2tzb3VyY2U6IGhwZXQ6IG1hc2s6
IDB4ZmZmZmZmZmYgbWF4X2N5Y2xlczogMHhmZmZmZmZmZiwgbWF4X2lkbGVfbnM6IDEzMzQ4
NDg3MzUwNCBucwpbICAgIDAuMDA0MDAwXSBBUElDOiBTd2l0Y2ggdG8gc3ltbWV0cmljIEkv
TyBtb2RlIHNldHVwClsgICAgMC4wMDQwMDBdIEFNRC1WaTogVXNpbmcgZ2xvYmFsIElWSEQg
RUZSOjB4MCwgRUZSMjoweDAKWyAgICAwLjAwNDAwMF0gQVBJQzogRG9uZQpbICAgIDAuMDA0
MDAwXSBCZWZvcmUgYXBpY19ic2Jfc2V0dXAKWyAgICAwLjAwNDAwMF0gY2hlY2tfdGltZXIg
YmVnaW4KWyAgICAwLjAwNDAwMF0gY2hlY2tfdGltZXIgYWZ0ZXIgbG9jYWxfaXJxX2Rpc2Fi
bGUKWyAgICAwLjAwNDAwMF0gLi5USU1FUjogdmVjdG9yPTB4MzAgYXBpYzE9MCBwaW4xPTIg
YXBpYzI9LTEgcGluMj0tMQpbICAgIDAuMDA0MDAwXSBjbG9ja3NvdXJjZTogdHNjLWVhcmx5
OiBtYXNrOiAweGZmZmZmZmZmZmZmZmZmZmYgbWF4X2N5Y2xlczogMHg3MDcwYjZmZDZmYywg
bWF4X2lkbGVfbnM6IDg4MTU5MDYxMTcxMCBucwpbICAgIDAuMTQ0OTgyXSBDYWxpYnJhdGlu
ZyBkZWxheSBsb29wIChza2lwcGVkKSwgdmFsdWUgY2FsY3VsYXRlZCB1c2luZyB0aW1lciBm
cmVxdWVuY3kuLiA3ODAwLjU0IEJvZ29NSVBTIChscGo9MTU2MDEwOTIpClsgICAgMC4xNDQ5
ODZdIHBpZF9tYXg6IGRlZmF1bHQ6IDMyNzY4IG1pbmltdW06IDMwMQpbICAgIDAuMTQ1MDc5
XSBMU006IGluaXRpYWxpemluZyBsc209Y2FwYWJpbGl0eQpbICAgIDAuMTQ1MTczXSBNb3Vu
dC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDgxOTIgKG9yZGVyOiA0LCA2NTUzNiBieXRl
cywgbGluZWFyKQpbICAgIDAuMTQ1MTg5XSBNb3VudHBvaW50LWNhY2hlIGhhc2ggdGFibGUg
ZW50cmllczogODE5MiAob3JkZXI6IDQsIDY1NTM2IGJ5dGVzLCBsaW5lYXIpClsgICAgMC4x
NDU1NjRdIEJpdCAzMCBpbiBDUFVJRCBFQ1ggbm90IHNldC4KWyAgICAwLjE0NTU4N10gTGFz
dCBsZXZlbCBpVExCIGVudHJpZXM6IDRLQiA1MTIsIDJNQiAxMDI0LCA0TUIgNTEyClsgICAg
MC4xNDU1ODldIExhc3QgbGV2ZWwgZFRMQiBlbnRyaWVzOiA0S0IgMTAyNCwgMk1CIDEwMjQs
IDRNQiA1MTIsIDFHQiAwClsgICAgMC4xNDU1OTRdIFNwZWN0cmUgVjEgOiBNaXRpZ2F0aW9u
OiB1c2VyY29weS9zd2FwZ3MgYmFycmllcnMgYW5kIF9fdXNlciBwb2ludGVyIHNhbml0aXph
dGlvbgpbICAgIDAuMTQ1NTk3XSBTcGVjdHJlIFYyIDogTWl0aWdhdGlvbjogUmV0cG9saW5l
cwpbICAgIDAuMTQ1NTk4XSBTcGVjdHJlIFYyIDogU3BlY3RyZSB2MiAvIFNwZWN0cmVSU0Ig
bWl0aWdhdGlvbjogRmlsbGluZyBSU0Igb24gY29udGV4dCBzd2l0Y2gKWyAgICAwLjE0NTU5
OV0gU3BlY3RyZSBWMiA6IFNwZWN0cmUgdjIgLyBTcGVjdHJlUlNCIDogRmlsbGluZyBSU0Ig
b24gVk1FWElUClsgICAgMC4xNDU2MDBdIFNwZWN0cmUgVjIgOiBFbmFibGluZyBTcGVjdWxh
dGlvbiBCYXJyaWVyIGZvciBmaXJtd2FyZSBjYWxscwpbICAgIDAuMTQ1NjAwXSBSRVRCbGVl
ZDogTWl0aWdhdGlvbjogdW50cmFpbmVkIHJldHVybiB0aHVuawpbICAgIDAuMTQ1NjAyXSBT
cGVjdHJlIFYyIDogbWl0aWdhdGlvbjogRW5hYmxpbmcgY29uZGl0aW9uYWwgSW5kaXJlY3Qg
QnJhbmNoIFByZWRpY3Rpb24gQmFycmllcgpbICAgIDAuMTQ1NjA0XSBTcGVjdWxhdGl2ZSBT
dG9yZSBCeXBhc3M6IE1pdGlnYXRpb246IFNwZWN1bGF0aXZlIFN0b3JlIEJ5cGFzcyBkaXNh
YmxlZCB2aWEgcHJjdGwKWyAgICAwLjE1MDA0Nl0gRnJlZWluZyBTTVAgYWx0ZXJuYXRpdmVz
IG1lbW9yeTogMzJLClsgICAgMC4xNTAwNTFdIEFmdGVyIGNoZWNrX2J1Z3MKWyAgICAwLjE1
MDA1Ml0gQWZ0ZXIgYWNwaV9zdWJzeXN0ZW1faW5pdApbICAgIDAuMTUwMDUzXSBBZnRlciBh
cmNoX3Bvc3RfYWNwaV9zdWJzeXNfaW5pdApbICAgIDAuMTUwMDU0XSBBZnRlciByY3Vfc2No
ZWR1bGVyX3N0YXJ0aW5nClsgICAgMC4xNTAxMjZdIEFmdGVyIGZpbmRfdGFza19ieV9waWRf
bnMgYW5kIFBGX05PX1NFVEFGRklOSVRZClsgICAgMC4xNTAxMzFdIEFmdGVyIG51bWFfZGVm
YXVsdF9wb2xpY3kKWyAgICAwLjE1MDE1MV0gQWZ0ZXIgcmN1X3JlYWRfbG9jawpbICAgIDAu
MTUwMTUyXSBBZnRlciByY3VfcmVhZF91bmxvY2sKWyAgICAwLjE1MDE1M10gQWZ0ZXIga3Ro
cmVhZGRfZG9uZQpbICAgIDAuMTUwMTY1XSBzbXBib290OiBTdGFydCBvZiBzbXBfcHJlcGFy
ZV9jcHVzX2NvbW1vbgpbICAgIDAuMTUwMTY3XSBzbXBib290OiBzbXBib290OiB6YWxsb2Mg
MApbICAgIDAuMTUwMTY4XSBzbXBib290OiBzbXBib290OiB6YWxsb2MgMQpbICAgIDAuMTUw
MTY5XSBzbXBib290OiBzbXBib290OiBBZnRlciBzZXRfc2NoZWRfdG9wb2xvZ3koKQpbICAg
IDAuMTUwMTcxXSBzbXBib290OiBzbXBib290OiBBZnRlciBzbXBfc2FuaXR5X2NoZWNrKCkK
WyAgICAwLjE1MDE3Ml0gc21wYm9vdDogc21wYm9vdDogQmVmb3JlIHg4Nl9pbml0LnRpbWVy
cy5zZXR1cF9wZXJjcHVfY2xvY2tldigpClsgICAgMC4yNTgxOTJdIHNtcGJvb3Q6IHNtcGJv
b3Q6IEFmdGVyIHg4Nl9pbml0LnRpbWVycy5zZXR1cF9wZXJjcHVfY2xvY2tldigpClsgICAg
MC4yNTgxOTNdIHNtcGJvb3Q6IENQVTA6IEFNRCBBNi02NDAwSyBBUFUgd2l0aCBSYWRlb24o
dG0pIEhEIEdyYXBoaWNzIChmYW1pbHk6IDB4MTUsIG1vZGVsOiAweDEzLCBzdGVwcGluZzog
MHgxKQpbICAgIDAuMjU4NDI1XSBjYmxpc3RfaW5pdF9nZW5lcmljOiBTZXR0aW5nIGFkanVz
dGFibGUgbnVtYmVyIG9mIGNhbGxiYWNrIHF1ZXVlcy4KWyAgICAwLjI1ODQyN10gY2JsaXN0
X2luaXRfZ2VuZXJpYzogU2V0dGluZyBzaGlmdCB0byAxIGFuZCBsaW0gdG8gMS4KWyAgICAw
LjI1ODQ1OF0gY2JsaXN0X2luaXRfZ2VuZXJpYzogU2V0dGluZyBzaGlmdCB0byAxIGFuZCBs
aW0gdG8gMS4KWyAgICAwLjI1ODQ4NF0gY2JsaXN0X2luaXRfZ2VuZXJpYzogU2V0dGluZyBz
aGlmdCB0byAxIGFuZCBsaW0gdG8gMS4KWyAgICAwLjI1ODUxM10gUGVyZm9ybWFuY2UgRXZl
bnRzOiBGYW0xNWggY29yZSBwZXJmY3RyLCBBTUQgUE1VIGRyaXZlci4KWyAgICAwLjI1ODUz
Nl0gLi4uIHZlcnNpb246ICAgICAgICAgICAgICAgIDAKWyAgICAwLjI1ODUzN10gLi4uIGJp
dCB3aWR0aDogICAgICAgICAgICAgIDQ4ClsgICAgMC4yNTg1MzhdIC4uLiBnZW5lcmljIHJl
Z2lzdGVyczogICAgICA2ClsgICAgMC4yNTg1MzldIC4uLiB2YWx1ZSBtYXNrOiAgICAgICAg
ICAgICAwMDAwZmZmZmZmZmZmZmZmClsgICAgMC4yNTg1NDBdIC4uLiBtYXggcGVyaW9kOiAg
ICAgICAgICAgICAwMDAwN2ZmZmZmZmZmZmZmClsgICAgMC4yNTg1NDFdIC4uLiBmaXhlZC1w
dXJwb3NlIGV2ZW50czogICAwClsgICAgMC4yNTg1NDJdIC4uLiBldmVudCBtYXNrOiAgICAg
ICAgICAgICAwMDAwMDAwMDAwMDAwMDNmClsgICAgMC4yNTg2NjJdIHJjdTogSGllcmFyY2hp
Y2FsIFNSQ1UgaW1wbGVtZW50YXRpb24uClsgICAgMC4yNTg2NjNdIHJjdTogCU1heCBwaGFz
ZSBuby1kZWxheSBpbnN0YW5jZXMgaXMgMTAwMC4KWyAgICAwLjI1OTI1N10gTk1JIHdhdGNo
ZG9nOiBFbmFibGVkLiBQZXJtYW5lbnRseSBjb25zdW1lcyBvbmUgaHctUE1VIGNvdW50ZXIu
ClsgICAgMC4yNTkzMjldIHNtcDogQnJpbmdpbmcgdXAgc2Vjb25kYXJ5IENQVXMgLi4uClsg
ICAgMC4yNTk1MjddIHg4NjogQm9vdGluZyBTTVAgY29uZmlndXJhdGlvbjoKWyAgICAwLjI1
OTUyOF0gLi4uLiBub2RlICAjMCwgQ1BVczogICAgICAjMQpbICAgIDAuMjYxMDA3XSBBZnRl
ciBzY2hlZHVsZV9wcmVlbXB0X2Rpc2FibGVkClsgICAxMC4yNjA5OTBdIENQVTEgZmFpbGVk
IHRvIHJlcG9ydCBhbGl2ZSBzdGF0ZQpbICAgMTAuMjYxMDcwXSBzbXA6IEJyb3VnaHQgdXAg
MSBub2RlLCAxIENQVQpbICAgMTAuMjYxMDczXSBzbXBib290OiBNYXggbG9naWNhbCBwYWNr
YWdlczogMgpbICAgMTAuMjYxMDc0XSBzbXBib290OiBUb3RhbCBvZiAxIHByb2Nlc3NvcnMg
YWN0aXZhdGVkICg3ODAwLjU0IEJvZ29NSVBTKQpbICAgMTAuMjYxNjAxXSBkZXZ0bXBmczog
aW5pdGlhbGl6ZWQKWyAgIDEwLjI2MTY5N10geDg2L21tOiBNZW1vcnkgYmxvY2sgc2l6ZTog
MTI4TUIKWyAgIDEwLjI2Mjc4OF0gY2xvY2tzb3VyY2U6IGppZmZpZXM6IG1hc2s6IDB4ZmZm
ZmZmZmYgbWF4X2N5Y2xlczogMHhmZmZmZmZmZiwgbWF4X2lkbGVfbnM6IDc2NDUwNDE3ODUx
MDAwMDAgbnMKWyAgIDEwLjI2Mjc5NV0gZnV0ZXggaGFzaCB0YWJsZSBlbnRyaWVzOiA1MTIg
KG9yZGVyOiAzLCAzMjc2OCBieXRlcywgbGluZWFyKQpbICAgMTAuMjYyODgzXSBwaW5jdHJs
IGNvcmU6IGluaXRpYWxpemVkIHBpbmN0cmwgc3Vic3lzdGVtClsgICAxMC4yNjI5NTNdIFBN
OiBSVEMgdGltZTogMDc6Mzk6NTAsIGRhdGU6IDIwMjMtMDQtMTcKWyAgIDEwLjI2MzcwOV0g
TkVUOiBSZWdpc3RlcmVkIFBGX05FVExJTksvUEZfUk9VVEUgcHJvdG9jb2wgZmFtaWx5Clsg
ICAxMC4yNjM5MzddIGF1ZGl0OiBpbml0aWFsaXppbmcgbmV0bGluayBzdWJzeXMgKGRpc2Fi
bGVkKQpbICAgMTAuMjY0MTkyXSB0aGVybWFsX3N5czogUmVnaXN0ZXJlZCB0aGVybWFsIGdv
dmVybm9yICdmYWlyX3NoYXJlJwpbICAgMTAuMjY0MTkzXSB0aGVybWFsX3N5czogUmVnaXN0
ZXJlZCB0aGVybWFsIGdvdmVybm9yICdiYW5nX2JhbmcnClsgICAxMC4yNjQxOTRdIHRoZXJt
YWxfc3lzOiBSZWdpc3RlcmVkIHRoZXJtYWwgZ292ZXJub3IgJ3N0ZXBfd2lzZScKWyAgIDEw
LjI2NDE5NV0gdGhlcm1hbF9zeXM6IFJlZ2lzdGVyZWQgdGhlcm1hbCBnb3Zlcm5vciAndXNl
cl9zcGFjZScKWyAgIDEwLjI2NDIxNV0gY3B1aWRsZTogdXNpbmcgZ292ZXJub3IgbGFkZGVy
ClsgICAxMC4yNjQyMjBdIGNwdWlkbGU6IHVzaW5nIGdvdmVybm9yIG1lbnUKWyAgIDEwLjI2
NDQyN10gUENJOiBNTUNPTkZJRyBmb3IgZG9tYWluIDAwMDAgW2J1cyAwMC0zZl0gYXQgW21l
bSAweGY4MDAwMDAwLTB4ZmJmZmZmZmZdIChiYXNlIDB4ZjgwMDAwMDApClsgICAxMC4yNjQ0
MzJdIFBDSTogTU1DT05GSUcgYXQgW21lbSAweGY4MDAwMDAwLTB4ZmJmZmZmZmZdIHJlc2Vy
dmVkIGFzIEU4MjAgZW50cnkKWyAgIDEwLjI2NDQ0NF0gUENJOiBVc2luZyBjb25maWd1cmF0
aW9uIHR5cGUgMSBmb3IgYmFzZSBhY2Nlc3MKWyAgIDEwLjI2NDY0Nl0ga3Byb2Jlczoga3By
b2JlIGp1bXAtb3B0aW1pemF0aW9uIGlzIGVuYWJsZWQuIEFsbCBrcHJvYmVzIGFyZSBvcHRp
bWl6ZWQgaWYgcG9zc2libGUuClsgICAxMC4yNjkwNjRdIGF1ZGl0OiB0eXBlPTIwMDAgYXVk
aXQoMTY4MTcxNzE5MC4xNDA6MSk6IHN0YXRlPWluaXRpYWxpemVkIGF1ZGl0X2VuYWJsZWQ9
MCByZXM9MQpbICAgMTAuMjgxMDgwXSBIdWdlVExCOiByZWdpc3RlcmVkIDEuMDAgR2lCIHBh
Z2Ugc2l6ZSwgcHJlLWFsbG9jYXRlZCAwIHBhZ2VzClsgICAxMC4yODEwODRdIEh1Z2VUTEI6
IDE2MzgwIEtpQiB2bWVtbWFwIGNhbiBiZSBmcmVlZCBmb3IgYSAxLjAwIEdpQiBwYWdlClsg
ICAxMC4yODEwODVdIEh1Z2VUTEI6IHJlZ2lzdGVyZWQgMi4wMCBNaUIgcGFnZSBzaXplLCBw
cmUtYWxsb2NhdGVkIDAgcGFnZXMKWyAgIDEwLjI4MTA4Nl0gSHVnZVRMQjogMjggS2lCIHZt
ZW1tYXAgY2FuIGJlIGZyZWVkIGZvciBhIDIuMDAgTWlCIHBhZ2UKWyAgIDEwLjI4NjE2Nl0g
Y3J5cHRkOiBtYXhfY3B1X3FsZW4gc2V0IHRvIDEwMDAKWyAgIDEwLjI4OTYxNV0gQUNQSTog
QWRkZWQgX09TSShNb2R1bGUgRGV2aWNlKQpbICAgMTAuMjg5NjE3XSBBQ1BJOiBBZGRlZCBf
T1NJKFByb2Nlc3NvciBEZXZpY2UpClsgICAxMC4yODk2MThdIEFDUEk6IEFkZGVkIF9PU0ko
My4wIF9TQ1AgRXh0ZW5zaW9ucykKWyAgIDEwLjI4OTYyMF0gQUNQSTogQWRkZWQgX09TSShQ
cm9jZXNzb3IgQWdncmVnYXRvciBEZXZpY2UpClsgICAxMC4yOTM5NDJdIEFDUEk6IERTRFQg
c3VjY2Vzc2Z1bGx5IGFjcXVpcmVkIGFuZCBsb2FkZWQKClsgICAxMC4yOTU2MjZdIEFDUEk6
IDQgQUNQSSBBTUwgdGFibGVzIHN1Y2Nlc3NmdWxseSBhY3F1aXJlZCBhbmQgbG9hZGVkClsg
ICAxMC4yOTcxNjFdIEFDUEk6IEludGVycHJldGVyIGVuYWJsZWQKWyAgIDEwLjI5NzE4NF0g
QUNQSTogUE06IChzdXBwb3J0cyBTMCBTMSBTMyBTNSkKWyAgIDEwLjI5NzE4Nl0gQUNQSTog
VXNpbmcgSU9BUElDIGZvciBpbnRlcnJ1cHQgcm91dGluZwpbICAgMTAuMjk3MjM2XSBIRVNU
OiBUYWJsZSBwYXJzaW5nIGhhcyBiZWVuIGluaXRpYWxpemVkLgpbICAgMTAuMjk3MjU4XSBH
SEVTOiBGYWlsZWQgdG8gZW5hYmxlIEFQRUkgZmlybXdhcmUgZmlyc3QgbW9kZS4KWyAgIDEw
LjI5NzI2MV0gUENJOiBVc2luZyBob3N0IGJyaWRnZSB3aW5kb3dzIGZyb20gQUNQSTsgaWYg
bmVjZXNzYXJ5LCB1c2UgInBjaT1ub2NycyIgYW5kIHJlcG9ydCBhIGJ1ZwpbICAgMTAuMjk3
MjYyXSBQQ0k6IElnbm9yaW5nIEU4MjAgcmVzZXJ2YXRpb25zIGZvciBob3N0IGJyaWRnZSB3
aW5kb3dzClsgICAxMC4yOTc1MTBdIEFDUEk6IEVuYWJsZWQgOCBHUEVzIGluIGJsb2NrIDAw
IHRvIDFGClsgICAxMC4zMDMyMzBdIEFDUEk6IFBDSSBSb290IEJyaWRnZSBbUENJMF0gKGRv
bWFpbiAwMDAwIFtidXMgMDAtZmZdKQpbICAgMTAuMzAzMjQxXSBhY3BpIFBOUDBBMDM6MDA6
IF9PU0M6IE9TIHN1cHBvcnRzIFtFeHRlbmRlZENvbmZpZyBBU1BNIENsb2NrUE0gU2VnbWVu
dHMgTVNJIEhQWC1UeXBlM10KWyAgIDEwLjMwMzMyMF0gYWNwaSBQTlAwQTAzOjAwOiBfT1ND
OiBPUyBub3cgY29udHJvbHMgW1BNRSBBRVIgUENJZUNhcGFiaWxpdHkgTFRSXQpbICAgMTAu
MzAzMzM1XSBhY3BpIFBOUDBBMDM6MDA6IFtGaXJtd2FyZSBJbmZvXTogTU1DT05GSUcgZm9y
IGRvbWFpbiAwMDAwIFtidXMgMDAtM2ZdIG9ubHkgcGFydGlhbGx5IGNvdmVycyB0aGlzIGJy
aWRnZQpbICAgMTAuMzAzNDE0XSBhY3BpIFBOUDBBMDM6MDA6IGhvc3QgYnJpZGdlIHdpbmRv
dyBleHBhbmRlZCB0byBbaW8gIDB4MDAwMC0weDBjZjcgd2luZG93XTsgW2lvICAweDAzYjAt
MHgwM2RmIHdpbmRvd10gaWdub3JlZApbICAgMTAuMzAzNjUxXSBQQ0kgaG9zdCBicmlkZ2Ug
dG8gYnVzIDAwMDA6MDAKWyAgIDEwLjMwMzY1M10gcGNpX2J1cyAwMDAwOjAwOiByb290IGJ1
cyByZXNvdXJjZSBbaW8gIDB4MDAwMC0weDBjZjcgd2luZG93XQpbICAgMTAuMzAzNjU1XSBw
Y2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFtpbyAgMHgwZDAwLTB4ZmZmZiB3
aW5kb3ddClsgICAxMC4zMDM2NThdIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVzb3Vy
Y2UgW21lbSAweDAwMGEwMDAwLTB4MDAwZGZmZmZdClsgICAxMC4zMDM2NjBdIHBjaV9idXMg
MDAwMDowMDogcm9vdCBidXMgcmVzb3VyY2UgW21lbSAweDgwMDAwMDAwLTB4ZmZmZmZmZmZd
ClsgICAxMC4zMDM2NjJdIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVzb3VyY2UgW2J1
cyAwMC1mZl0KWyAgIDEwLjMwMzY4Nl0gcGNpIDAwMDA6MDA6MDAuMDogWzEwMjI6MTQxMF0g
dHlwZSAwMCBjbGFzcyAweDA2MDAwMApbICAgMTAuMzAzODMwXSBwY2kgMDAwMDowMDowMC4y
OiBbMTAyMjoxNDE5XSB0eXBlIDAwIGNsYXNzIDB4MDgwNjAwClsgICAxMC4zMDM5MjFdIHBj
aSAwMDAwOjAwOjAxLjA6IFsxMDAyOjk5OTZdIHR5cGUgMDAgY2xhc3MgMHgwMzAwMDAKWyAg
IDEwLjMwMzkyOV0gcGNpIDAwMDA6MDA6MDEuMDogcmVnIDB4MTA6IFttZW0gMHhlMDAwMDAw
MC0weGVmZmZmZmZmIHByZWZdClsgICAxMC4zMDM5MzRdIHBjaSAwMDAwOjAwOjAxLjA6IHJl
ZyAweDE0OiBbaW8gIDB4MTAwMC0weDEwZmZdClsgICAxMC4zMDM5MzldIHBjaSAwMDAwOjAw
OjAxLjA6IHJlZyAweDE4OiBbbWVtIDB4ZjAxODAwMDAtMHhmMDFiZmZmZl0KWyAgIDEwLjMw
Mzk1NV0gcGNpIDAwMDA6MDA6MDEuMDogZW5hYmxpbmcgRXh0ZW5kZWQgVGFncwpbICAgMTAu
MzAzOTY1XSBwY2kgMDAwMDowMDowMS4wOiBWaWRlbyBkZXZpY2Ugd2l0aCBzaGFkb3dlZCBS
T00gYXQgW21lbSAweDAwMGMwMDAwLTB4MDAwZGZmZmZdClsgICAxMC4zMDM5ODNdIHBjaSAw
MDAwOjAwOjAxLjA6IHN1cHBvcnRzIEQxIEQyClsgICAxMC4zMDQwNDhdIHBjaSAwMDAwOjAw
OjAxLjE6IFsxMDAyOjk5MDJdIHR5cGUgMDAgY2xhc3MgMHgwNDAzMDAKWyAgIDEwLjMwNDA1
Nl0gcGNpIDAwMDA6MDA6MDEuMTogcmVnIDB4MTA6IFttZW0gMHhmMDFjMDAwMC0weGYwMWMz
ZmZmXQpbICAgMTAuMzA0MDc3XSBwY2kgMDAwMDowMDowMS4xOiBlbmFibGluZyBFeHRlbmRl
ZCBUYWdzClsgICAxMC4zMDQxMDFdIHBjaSAwMDAwOjAwOjAxLjE6IHN1cHBvcnRzIEQxIEQy
ClsgICAxMC4zMDQxODVdIHBjaSAwMDAwOjAwOjExLjA6IFsxMDIyOjc4MDFdIHR5cGUgMDAg
Y2xhc3MgMHgwMTA2MDEKWyAgIDEwLjMwNDE5OF0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4
MTA6IFtpbyAgMHgxNDEwLTB4MTQxN10KWyAgIDEwLjMwNDIwNl0gcGNpIDAwMDA6MDA6MTEu
MDogcmVnIDB4MTQ6IFtpbyAgMHgxNDIwLTB4MTQyM10KWyAgIDEwLjMwNDIxM10gcGNpIDAw
MDA6MDA6MTEuMDogcmVnIDB4MTg6IFtpbyAgMHgxNDE4LTB4MTQxZl0KWyAgIDEwLjMwNDIy
MV0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MWM6IFtpbyAgMHgxNDI0LTB4MTQyN10KWyAg
IDEwLjMwNDIyOF0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MjA6IFtpbyAgMHgxNDAwLTB4
MTQwZl0KWyAgIDEwLjMwNDIzNV0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MjQ6IFttZW0g
MHhmMDFjYzAwMC0weGYwMWNjN2ZmXQpbICAgMTAuMzA0MzkyXSBwY2kgMDAwMDowMDoxMi4w
OiBbMTAyMjo3ODA3XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAxMC4zMDQ0MDZdIHBj
aSAwMDAwOjAwOjEyLjA6IHJlZyAweDEwOiBbbWVtIDB4ZjAxYzgwMDAtMHhmMDFjOGZmZl0K
WyAgIDEwLjMwNDU4N10gcGNpIDAwMDA6MDA6MTIuMjogWzEwMjI6NzgwOF0gdHlwZSAwMCBj
bGFzcyAweDBjMDMyMApbICAgMTAuMzA0NjAxXSBwY2kgMDAwMDowMDoxMi4yOiByZWcgMHgx
MDogW21lbSAweGYwMWNkMDAwLTB4ZjAxY2QwZmZdClsgICAxMC4zMDQ2NjZdIHBjaSAwMDAw
OjAwOjEyLjI6IHN1cHBvcnRzIEQxIEQyClsgICAxMC4zMDQ2NjddIHBjaSAwMDAwOjAwOjEy
LjI6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDEgRDIgRDNob3QKWyAgIDEwLjMwNDY2OF0g
cGNpIDAwMDA6MDA6MTIuMjogcG1lX3BvbGwgPSB0cnVlClsgICAxMC4zMDQ2NjldIHBjaSAw
MDAwOjAwOjEyLjI6IGFmdGVyIGRldmljZV9zZXRfd2FrZXVwX2NhcGFibGUoKQpbICAgMTAu
MzA0NjczXSBwY2kgMDAwMDowMDoxMi4yOiBhZnRlciBwY2lfcG1lX2FjdGl2ZSgpClsgICAx
MC4zMDQ4MTRdIHBjaSAwMDAwOjAwOjEzLjA6IFsxMDIyOjc4MDddIHR5cGUgMDAgY2xhc3Mg
MHgwYzAzMTAKWyAgIDEwLjMwNDgyN10gcGNpIDAwMDA6MDA6MTMuMDogcmVnIDB4MTA6IFtt
ZW0gMHhmMDFjOTAwMC0weGYwMWM5ZmZmXQpbICAgMTAuMzA1NTY2XSBwY2kgMDAwMDowMDox
My4yOiBbMTAyMjo3ODA4XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzIwClsgICAxMC4zMDU1ODNd
IHBjaSAwMDAwOjAwOjEzLjI6IHJlZyAweDEwOiBbbWVtIDB4ZjAxY2UwMDAtMHhmMDFjZTBm
Zl0KWyAgIDEwLjMwNTY1NF0gcGNpIDAwMDA6MDA6MTMuMjogc3VwcG9ydHMgRDEgRDIKWyAg
IDEwLjMwNTY1Nl0gcGNpIDAwMDA6MDA6MTMuMjogUE1FIyBzdXBwb3J0ZWQgZnJvbSBEMCBE
MSBEMiBEM2hvdApbICAgMTAuMzA1NjU3XSBwY2kgMDAwMDowMDoxMy4yOiBwbWVfcG9sbCA9
IHRydWUKWyAgIDEwLjMwNTY1OV0gcGNpIDAwMDA6MDA6MTMuMjogYWZ0ZXIgZGV2aWNlX3Nl
dF93YWtldXBfY2FwYWJsZSgpClsgICAxMC4zMDU2NjJdIHBjaSAwMDAwOjAwOjEzLjI6IGFm
dGVyIHBjaV9wbWVfYWN0aXZlKCkKWyAgIDEwLjMwNTgwM10gcGNpIDAwMDA6MDA6MTQuMDog
WzEwMjI6NzgwYl0gdHlwZSAwMCBjbGFzcyAweDBjMDUwMApbICAgMTAuMzA1OTc1XSBwY2kg
MDAwMDowMDoxNC4yOiBbMTAyMjo3ODBkXSB0eXBlIDAwIGNsYXNzIDB4MDQwMzAwClsgICAx
MC4zMDU5OTJdIHBjaSAwMDAwOjAwOjE0LjI6IHJlZyAweDEwOiBbbWVtIDB4ZjAxYzQwMDAt
MHhmMDFjN2ZmZiA2NGJpdF0KWyAgIDEwLjMwNjA0N10gcGNpIDAwMDA6MDA6MTQuMjogUE1F
IyBzdXBwb3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgIDEwLjMwNjA0OV0gcGNpIDAw
MDA6MDA6MTQuMjogcG1lX3BvbGwgPSB0cnVlClsgICAxMC4zMDYwNTBdIHBjaSAwMDAwOjAw
OjE0LjI6IGFmdGVyIGRldmljZV9zZXRfd2FrZXVwX2NhcGFibGUoKQpbICAgMTAuMzA2MDUy
XSBwY2kgMDAwMDowMDoxNC4yOiBhZnRlciBwY2lfcG1lX2FjdGl2ZSgpClsgICAxMC4zMDYx
ODVdIHBjaSAwMDAwOjAwOjE0LjM6IFsxMDIyOjc4MGVdIHR5cGUgMDAgY2xhc3MgMHgwNjAx
MDAKWyAgIDEwLjMwNjM2MF0gcGNpIDAwMDA6MDA6MTQuNDogWzEwMjI6NzgwZl0gdHlwZSAw
MSBjbGFzcyAweDA2MDQwMQpbICAgMTAuMzA2NTAyXSBwY2kgMDAwMDowMDoxNC41OiBbMTAy
Mjo3ODA5XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAxMC4zMDY1MTVdIHBjaSAwMDAw
OjAwOjE0LjU6IHJlZyAweDEwOiBbbWVtIDB4ZjAxY2EwMDAtMHhmMDFjYWZmZl0KWyAgIDEw
LjMwNjY4Ml0gcGNpIDAwMDA6MDA6MTUuMDogWzEwMjI6NDNhMF0gdHlwZSAwMSBjbGFzcyAw
eDA2MDQwMApbICAgMTAuMzA2NzExXSBwY2kgMDAwMDowMDoxNS4wOiBlbmFibGluZyBFeHRl
bmRlZCBUYWdzClsgICAxMC4zMDY3NTFdIHBjaSAwMDAwOjAwOjE1LjA6IHN1cHBvcnRzIEQx
IEQyClsgICAxMC4zMDY5MTFdIHBjaSAwMDAwOjAwOjE1LjE6IFsxMDIyOjQzYTFdIHR5cGUg
MDEgY2xhc3MgMHgwNjA0MDAKWyAgIDEwLjMwNjk0Ml0gcGNpIDAwMDA6MDA6MTUuMTogZW5h
YmxpbmcgRXh0ZW5kZWQgVGFncwpbICAgMTAuMzA2OTgxXSBwY2kgMDAwMDowMDoxNS4xOiBz
dXBwb3J0cyBEMSBEMgpbICAgMTAuMzA3MTM3XSBwY2kgMDAwMDowMDoxNS4yOiBbMTAyMjo0
M2EyXSB0eXBlIDAxIGNsYXNzIDB4MDYwNDAwClsgICAxMC4zMDcxNjZdIHBjaSAwMDAwOjAw
OjE1LjI6IGVuYWJsaW5nIEV4dGVuZGVkIFRhZ3MKWyAgIDEwLjMwNzIwNV0gcGNpIDAwMDA6
MDA6MTUuMjogc3VwcG9ydHMgRDEgRDIKWyAgIDEwLjMwNzI4MF0gcGNpIDAwMDA6MDA6MTYu
MDogWzEwMjI6NzgwN10gdHlwZSAwMCBjbGFzcyAweDBjMDMxMApbICAgMTAuMzA3MjkzXSBw
Y2kgMDAwMDowMDoxNi4wOiByZWcgMHgxMDogW21lbSAweGYwMWNiMDAwLTB4ZjAxY2JmZmZd
ClsgICAxMC4zMDc0NjddIHBjaSAwMDAwOjAwOjE2LjI6IFsxMDIyOjc4MDhdIHR5cGUgMDAg
Y2xhc3MgMHgwYzAzMjAKWyAgIDEwLjMwNzQ4MV0gcGNpIDAwMDA6MDA6MTYuMjogcmVnIDB4
MTA6IFttZW0gMHhmMDFjZjAwMC0weGYwMWNmMGZmXQpbICAgMTAuMzA3NTQ2XSBwY2kgMDAw
MDowMDoxNi4yOiBzdXBwb3J0cyBEMSBEMgpbICAgMTAuMzA3NTQ3XSBwY2kgMDAwMDowMDox
Ni4yOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQxIEQyIEQzaG90ClsgICAxMC4zMDc1NDhd
IHBjaSAwMDAwOjAwOjE2LjI6IHBtZV9wb2xsID0gdHJ1ZQpbICAgMTAuMzA3NTQ5XSBwY2kg
MDAwMDowMDoxNi4yOiBhZnRlciBkZXZpY2Vfc2V0X3dha2V1cF9jYXBhYmxlKCkKWyAgIDEw
LjMwNzU1Ml0gcGNpIDAwMDA6MDA6MTYuMjogYWZ0ZXIgcGNpX3BtZV9hY3RpdmUoKQpbICAg
MTAuMzA3NjkxXSBwY2kgMDAwMDowMDoxOC4wOiBbMTAyMjoxNDAwXSB0eXBlIDAwIGNsYXNz
IDB4MDYwMDAwClsgICAxMC4zMDc3NThdIHBjaSAwMDAwOjAwOjE4LjE6IFsxMDIyOjE0MDFd
IHR5cGUgMDAgY2xhc3MgMHgwNjAwMDAKWyAgIDEwLjMwNzgxOV0gcGNpIDAwMDA6MDA6MTgu
MjogWzEwMjI6MTQwMl0gdHlwZSAwMCBjbGFzcyAweDA2MDAwMApbICAgMTAuMzA3ODgwXSBw
Y2kgMDAwMDowMDoxOC4zOiBbMTAyMjoxNDAzXSB0eXBlIDAwIGNsYXNzIDB4MDYwMDAwClsg
ICAxMC4zMDgwMTZdIHBjaSAwMDAwOjAwOjE4LjQ6IFsxMDIyOjE0MDRdIHR5cGUgMDAgY2xh
c3MgMHgwNjAwMDAKWyAgIDEwLjMwODA3OV0gcGNpIDAwMDA6MDA6MTguNTogWzEwMjI6MTQw
NV0gdHlwZSAwMCBjbGFzcyAweDA2MDAwMApbICAgMTAuMzA4MTUzXSBwY2lfYnVzIDAwMDA6
MDE6IGV4dGVuZGVkIGNvbmZpZyBzcGFjZSBub3QgYWNjZXNzaWJsZQpbICAgMTAuMzA4MjE4
XSBwY2kgMDAwMDowMDoxNC40OiBQQ0kgYnJpZGdlIHRvIFtidXMgMDFdIChzdWJ0cmFjdGl2
ZSBkZWNvZGUpClsgICAxMC4zMDgyMjddIHBjaSAwMDAwOjAwOjE0LjQ6ICAgYnJpZGdlIHdp
bmRvdyBbaW8gIDB4MDAwMC0weDBjZjcgd2luZG93XSAoc3VidHJhY3RpdmUgZGVjb2RlKQpb
ICAgMTAuMzA4MjMwXSBwY2kgMDAwMDowMDoxNC40OiAgIGJyaWRnZSB3aW5kb3cgW2lvICAw
eDBkMDAtMHhmZmZmIHdpbmRvd10gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgIDEwLjMwODIz
Ml0gcGNpIDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFttZW0gMHgwMDBhMDAwMC0w
eDAwMGRmZmZmXSAoc3VidHJhY3RpdmUgZGVjb2RlKQpbICAgMTAuMzA4MjM1XSBwY2kgMDAw
MDowMDoxNC40OiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweDgwMDAwMDAwLTB4ZmZmZmZmZmZd
IChzdWJ0cmFjdGl2ZSBkZWNvZGUpClsgICAxMC4zMDgyODNdIHBjaSAwMDAwOjAwOjE1LjA6
IFBDSSBicmlkZ2UgdG8gW2J1cyAwMl0KWyAgIDEwLjMwODM2OF0gcGNpIDAwMDA6MDM6MDAu
MDogWzFiMjE6MTA0Ml0gdHlwZSAwMCBjbGFzcyAweDBjMDMzMApbICAgMTAuMzA4NDA0XSBw
Y2kgMDAwMDowMzowMC4wOiByZWcgMHgxMDogW21lbSAweGYwMDAwMDAwLTB4ZjAwMDdmZmYg
NjRiaXRdClsgICAxMC4zMDg1NzldIHBjaSAwMDAwOjAzOjAwLjA6IFBNRSMgc3VwcG9ydGVk
IGZyb20gRDNob3QgRDNjb2xkClsgICAxMC4zMDg1ODFdIHBjaSAwMDAwOjAzOjAwLjA6IHBt
ZV9wb2xsID0gdHJ1ZQpbICAgMTAuMzA4NTgyXSBwY2kgMDAwMDowMzowMC4wOiBhZnRlciBk
ZXZpY2Vfc2V0X3dha2V1cF9jYXBhYmxlKCkKWyAgIDEwLjMwODU4N10gcGNpIDAwMDA6MDM6
MDAuMDogYWZ0ZXIgcGNpX3BtZV9hY3RpdmUoKQpbICAgMTAuMzA4NjI1XSBwY2kgMDAwMDow
MzowMC4wOiAyLjAwMCBHYi9zIGF2YWlsYWJsZSBQQ0llIGJhbmR3aWR0aCwgbGltaXRlZCBi
eSAyLjUgR1QvcyBQQ0llIHgxIGxpbmsgYXQgMDAwMDowMDoxNS4xIChjYXBhYmxlIG9mIDQu
MDAwIEdiL3Mgd2l0aCA1LjAgR1QvcyBQQ0llIHgxIGxpbmspClsgICAxMC4zMTcxMzNdIHBj
aSAwMDAwOjAwOjE1LjE6IFBDSSBicmlkZ2UgdG8gW2J1cyAwM10KWyAgIDEwLjMxNzE0NV0g
cGNpIDAwMDA6MDA6MTUuMTogICBicmlkZ2Ugd2luZG93IFttZW0gMHhmMDAwMDAwMC0weGYw
MGZmZmZmXQpbICAgMTAuMzE3MTU0XSBwY2kgMDAwMDowMDoxNS4yOiBicmlkZ2UgY29uZmln
dXJhdGlvbiBpbnZhbGlkIChbYnVzIDAwLTAwXSksIHJlY29uZmlndXJpbmcKWyAgIDEwLjMx
NzI3OF0gcGNpIDAwMDA6MDQ6MDAuMDogWzEwZWM6ODE2OF0gdHlwZSAwMCBjbGFzcyAweDAy
MDAwMApbICAgMTAuMzE3Mjk2XSBwY2kgMDAwMDowNDowMC4wOiByZWcgMHgxMDogW2lvICAw
eDAwMDAtMHgwMGZmXQpbICAgMTAuMzE3MzE4XSBwY2kgMDAwMDowNDowMC4wOiByZWcgMHgx
ODogW21lbSAweDAwMDAwMDAwLTB4MDAwMDBmZmYgNjRiaXQgcHJlZl0KWyAgIDEwLjMxNzMz
Ml0gcGNpIDAwMDA6MDQ6MDAuMDogcmVnIDB4MjA6IFttZW0gMHgwMDAwMDAwMC0weDAwMDAz
ZmZmIDY0Yml0IHByZWZdClsgICAxMC4zMTc0NDBdIHBjaSAwMDAwOjA0OjAwLjA6IHN1cHBv
cnRzIEQxIEQyClsgICAxMC4zMTc0NDJdIHBjaSAwMDAwOjA0OjAwLjA6IFBNRSMgc3VwcG9y
dGVkIGZyb20gRDAgRDEgRDIgRDNob3QgRDNjb2xkClsgICAxMC4zMTc0NDRdIHBjaSAwMDAw
OjA0OjAwLjA6IHBtZV9wb2xsID0gdHJ1ZQpbICAgMTAuMzE3NDQ1XSBwY2kgMDAwMDowNDow
MC4wOiBhZnRlciBkZXZpY2Vfc2V0X3dha2V1cF9jYXBhYmxlKCkKWyAgIDEwLjMxNzQ0OV0g
cGNpIDAwMDA6MDQ6MDAuMDogYWZ0ZXIgcGNpX3BtZV9hY3RpdmUoKQpbICAgMTAuMzI5MDQy
XSBwY2kgMDAwMDowMDoxNS4yOiBQQ0kgYnJpZGdlIHRvIFtidXMgMDQtZmZdClsgICAxMC4z
MjkwNTNdIHBjaSAwMDAwOjAwOjE1LjI6ICAgYnJpZGdlIHdpbmRvdyBbaW8gIDB4MDAwMC0w
eDBmZmZdClsgICAxMC4zMjkwNTddIHBjaSAwMDAwOjAwOjE1LjI6ICAgYnJpZGdlIHdpbmRv
dyBbbWVtIDB4MDAwMDAwMDAtMHgwMDBmZmZmZl0KWyAgIDEwLjMyOTA2Ml0gcGNpIDAwMDA6
MDA6MTUuMjogICBicmlkZ2Ugd2luZG93IFttZW0gMHgwMDAwMDAwMC0weDAwMGZmZmZmIDY0
Yml0IHByZWZdClsgICAxMC4zMjkwNjVdIHBjaV9idXMgMDAwMDowNDogYnVzbl9yZXM6IFti
dXMgMDQtZmZdIGVuZCBpcyB1cGRhdGVkIHRvIDA0ClsgICAxMC4zMjk1NzVdIEFDUEk6IFBD
STogSW50ZXJydXB0IGxpbmsgSU5UQSBjb25maWd1cmVkIGZvciBJUlEgMApbICAgMTAuMzI5
NjY5XSBBQ1BJOiBQQ0k6IEludGVycnVwdCBsaW5rIElOVEIgY29uZmlndXJlZCBmb3IgSVJR
IDAKWyAgIDEwLjMyOTc1OV0gQUNQSTogUENJOiBJbnRlcnJ1cHQgbGluayBJTlRDIGNvbmZp
Z3VyZWQgZm9yIElSUSAwClsgICAxMC4zMjk4NDldIEFDUEk6IFBDSTogSW50ZXJydXB0IGxp
bmsgSU5URCBjb25maWd1cmVkIGZvciBJUlEgMApbICAgMTAuMzI5OTQwXSBBQ1BJOiBQQ0k6
IEludGVycnVwdCBsaW5rIElOVEUgY29uZmlndXJlZCBmb3IgSVJRIDAKWyAgIDEwLjMzMDAy
OV0gQUNQSTogUENJOiBJbnRlcnJ1cHQgbGluayBJTlRGIGNvbmZpZ3VyZWQgZm9yIElSUSAw
ClsgICAxMC4zMzAxMTldIEFDUEk6IFBDSTogSW50ZXJydXB0IGxpbmsgSU5URyBjb25maWd1
cmVkIGZvciBJUlEgMApbICAgMTAuMzMwMjEwXSBBQ1BJOiBQQ0k6IEludGVycnVwdCBsaW5r
IElOVEggY29uZmlndXJlZCBmb3IgSVJRIDAKWyAgIDEwLjMzMDQ0NF0gaW9tbXU6IERlZmF1
bHQgZG9tYWluIHR5cGU6IFRyYW5zbGF0ZWQgClsgICAxMC4zMzA0NDZdIGlvbW11OiBETUEg
ZG9tYWluIFRMQiBpbnZhbGlkYXRpb24gcG9saWN5OiBsYXp5IG1vZGUgClsgICAxMC4zMzA2
MjRdIFNDU0kgc3Vic3lzdGVtIGluaXRpYWxpemVkClsgICAxMC4zMzA3MThdIGxpYmF0YSB2
ZXJzaW9uIDMuMDAgbG9hZGVkLgpbICAgMTAuMzMwNzUyXSBBQ1BJOiBidXMgdHlwZSBVU0Ig
cmVnaXN0ZXJlZApbICAgMTAuMzMwNzc1XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRl
cmZhY2UgZHJpdmVyIHVzYmZzClsgICAxMC4zMzA3ODZdIHVzYmNvcmU6IHJlZ2lzdGVyZWQg
bmV3IGludGVyZmFjZSBkcml2ZXIgaHViClsgICAxMC4zMzA3OThdIHVzYmNvcmU6IHJlZ2lz
dGVyZWQgbmV3IGRldmljZSBkcml2ZXIgdXNiClsgICAxMC4zMzExMjVdIFBDSTogVXNpbmcg
QUNQSSBmb3IgSVJRIHJvdXRpbmcKWyAgIDEwLjMzMjY5NF0gUENJOiBwY2lfY2FjaGVfbGlu
ZV9zaXplIHNldCB0byA2NCBieXRlcwpbICAgMTAuMzMyNzQ1XSBlODIwOiByZXNlcnZlIFJB
TSBidWZmZXIgW21lbSAweDAwMDlmYzAwLTB4MDAwOWZmZmZdClsgICAxMC4zMzI3NDhdIGU4
MjA6IHJlc2VydmUgUkFNIGJ1ZmZlciBbbWVtIDB4NWZlNGQwMDAtMHg1ZmZmZmZmZl0KWyAg
IDEwLjMzMjc1MF0gZTgyMDogcmVzZXJ2ZSBSQU0gYnVmZmVyIFttZW0gMHgxN2YwMDAwMDAt
MHgxN2ZmZmZmZmZdClsgICAxMC4zMzI3OTRdIGhwZXQwOiBhdCBNTUlPIDB4ZmVkMDAwMDAs
IElSUXMgMiwgOCwgMApbICAgMTAuMzMyNzk5XSBocGV0MDogMyBjb21wYXJhdG9ycywgMzIt
Yml0IDE0LjMxODE4MCBNSHogY291bnRlcgpbICAgMTAuMzM0MDY3XSBjbG9ja3NvdXJjZTog
U3dpdGNoZWQgdG8gY2xvY2tzb3VyY2UgdHNjLWVhcmx5ClsgICAxMC4zNTA3MDVdIFZGUzog
RGlzayBxdW90YXMgZHF1b3RfNi42LjAKWyAgIDEwLjM1MDczNF0gVkZTOiBEcXVvdC1jYWNo
ZSBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXIgMCwgNDA5NiBieXRlcykKWyAgIDEw
LjM1MDg0N10gcG5wOiBQblAgQUNQSSBpbml0ClsgICAxMC4zNTExMzddIHN5c3RlbSAwMDow
MDogW21lbSAweGZlYzEwMDAyLTB4ZmVjMTEwMDFdIGNvdWxkIG5vdCBiZSByZXNlcnZlZApb
ICAgMTAuMzUxNDQ2XSBwbnA6IFBuUCBBQ1BJOiBmb3VuZCAyIGRldmljZXMKWyAgIDEwLjM1
ODQwOF0gY2xvY2tzb3VyY2U6IGFjcGlfcG06IG1hc2s6IDB4ZmZmZmZmIG1heF9jeWNsZXM6
IDB4ZmZmZmZmLCBtYXhfaWRsZV9uczogMjA4NTcwMTAyNCBucwpbICAgMTAuMzU4NTQ0XSBO
RVQ6IFJlZ2lzdGVyZWQgUEZfSU5FVCBwcm90b2NvbCBmYW1pbHkKWyAgIDEwLjM1ODY4Nl0g
SVAgaWRlbnRzIGhhc2ggdGFibGUgZW50cmllczogNjU1MzYgKG9yZGVyOiA3LCA1MjQyODgg
Ynl0ZXMsIGxpbmVhcikKWyAgIDEwLjM2MDI1Nl0gdGNwX2xpc3Rlbl9wb3J0YWRkcl9oYXNo
IGhhc2ggdGFibGUgZW50cmllczogMjA0OCAob3JkZXI6IDMsIDMyNzY4IGJ5dGVzLCBsaW5l
YXIpClsgICAxMC4zNjAyNzFdIFRhYmxlLXBlcnR1cmIgaGFzaCB0YWJsZSBlbnRyaWVzOiA2
NTUzNiAob3JkZXI6IDYsIDI2MjE0NCBieXRlcywgbGluZWFyKQpbICAgMTAuMzYwMjc3XSBU
Q1AgZXN0YWJsaXNoZWQgaGFzaCB0YWJsZSBlbnRyaWVzOiAzMjc2OCAob3JkZXI6IDYsIDI2
MjE0NCBieXRlcywgbGluZWFyKQpbICAgMTAuMzYwMzQ0XSBUQ1AgYmluZCBoYXNoIHRhYmxl
IGVudHJpZXM6IDMyNzY4IChvcmRlcjogOCwgMTA0ODU3NiBieXRlcywgbGluZWFyKQpbICAg
MTAuMzYwNjcwXSBUQ1A6IEhhc2ggdGFibGVzIGNvbmZpZ3VyZWQgKGVzdGFibGlzaGVkIDMy
NzY4IGJpbmQgMzI3NjgpClsgICAxMC4zNjA3MzldIFVEUCBoYXNoIHRhYmxlIGVudHJpZXM6
IDIwNDggKG9yZGVyOiA0LCA2NTUzNiBieXRlcywgbGluZWFyKQpbICAgMTAuMzYwNzYzXSBV
RFAtTGl0ZSBoYXNoIHRhYmxlIGVudHJpZXM6IDIwNDggKG9yZGVyOiA0LCA2NTUzNiBieXRl
cywgbGluZWFyKQpbICAgMTAuMzYwODcwXSBORVQ6IFJlZ2lzdGVyZWQgUEZfVU5JWC9QRl9M
T0NBTCBwcm90b2NvbCBmYW1pbHkKWyAgIDEwLjM2MDkwNl0gcGNpIDAwMDA6MDA6MTUuMjog
QkFSIDE1OiBhc3NpZ25lZCBbbWVtIDB4ODAwMDAwMDAtMHg4MDBmZmZmZiA2NGJpdCBwcmVm
XQpbICAgMTAuMzYwOTEyXSBwY2kgMDAwMDowMDoxNS4yOiBCQVIgMTM6IGFzc2lnbmVkIFtp
byAgMHgyMDAwLTB4MmZmZl0KWyAgIDEwLjM2MDkxN10gcGNpIDAwMDA6MDA6MTQuNDogUENJ
IGJyaWRnZSB0byBbYnVzIDAxXQpbICAgMTAuMzYwOTI4XSBwY2kgMDAwMDowMDoxNS4wOiBQ
Q0kgYnJpZGdlIHRvIFtidXMgMDJdClsgICAxMC4zNjA5MzZdIHBjaSAwMDAwOjAwOjE1LjE6
IFBDSSBicmlkZ2UgdG8gW2J1cyAwM10KWyAgIDEwLjM2MDkzOV0gcGNpIDAwMDA6MDA6MTUu
MTogICBicmlkZ2Ugd2luZG93IFttZW0gMHhmMDAwMDAwMC0weGYwMGZmZmZmXQpbICAgMTAu
MzYwOTQ3XSBwY2kgMDAwMDowNDowMC4wOiBCQVIgNDogYXNzaWduZWQgW21lbSAweDgwMDAw
MDAwLTB4ODAwMDNmZmYgNjRiaXQgcHJlZl0KWyAgIDEwLjM2MDk1OV0gcGNpIDAwMDA6MDQ6
MDAuMDogQkFSIDI6IGFzc2lnbmVkIFttZW0gMHg4MDAwNDAwMC0weDgwMDA0ZmZmIDY0Yml0
IHByZWZdClsgICAxMC4zNjA5NzFdIHBjaSAwMDAwOjA0OjAwLjA6IEJBUiAwOiBhc3NpZ25l
ZCBbaW8gIDB4MjAwMC0weDIwZmZdClsgICAxMC4zNjA5NzZdIHBjaSAwMDAwOjAwOjE1LjI6
IFBDSSBicmlkZ2UgdG8gW2J1cyAwNF0KWyAgIDEwLjM2MTExNl0gcGNpIDAwMDA6MDA6MTUu
MjogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgyMDAwLTB4MmZmZl0KWyAgIDEwLjM2MTEyMl0g
cGNpIDAwMDA6MDA6MTUuMjogICBicmlkZ2Ugd2luZG93IFttZW0gMHg4MDAwMDAwMC0weDgw
MGZmZmZmIDY0Yml0IHByZWZdClsgICAxMC4zNjExMjhdIHBjaV9idXMgMDAwMDowMDogcmVz
b3VyY2UgNCBbaW8gIDB4MDAwMC0weDBjZjcgd2luZG93XQpbICAgMTAuMzYxMTMwXSBwY2lf
YnVzIDAwMDA6MDA6IHJlc291cmNlIDUgW2lvICAweDBkMDAtMHhmZmZmIHdpbmRvd10KWyAg
IDEwLjM2MTEzMl0gcGNpX2J1cyAwMDAwOjAwOiByZXNvdXJjZSA2IFttZW0gMHgwMDBhMDAw
MC0weDAwMGRmZmZmXQpbICAgMTAuMzYxMTMzXSBwY2lfYnVzIDAwMDA6MDA6IHJlc291cmNl
IDcgW21lbSAweDgwMDAwMDAwLTB4ZmZmZmZmZmZdClsgICAxMC4zNjExMzVdIHBjaV9idXMg
MDAwMDowMTogcmVzb3VyY2UgNCBbaW8gIDB4MDAwMC0weDBjZjcgd2luZG93XQpbICAgMTAu
MzYxMTM3XSBwY2lfYnVzIDAwMDA6MDE6IHJlc291cmNlIDUgW2lvICAweDBkMDAtMHhmZmZm
IHdpbmRvd10KWyAgIDEwLjM2MTEzOF0gcGNpX2J1cyAwMDAwOjAxOiByZXNvdXJjZSA2IFtt
ZW0gMHgwMDBhMDAwMC0weDAwMGRmZmZmXQpbICAgMTAuMzYxMTQwXSBwY2lfYnVzIDAwMDA6
MDE6IHJlc291cmNlIDcgW21lbSAweDgwMDAwMDAwLTB4ZmZmZmZmZmZdClsgICAxMC4zNjEx
NDFdIHBjaV9idXMgMDAwMDowMzogcmVzb3VyY2UgMSBbbWVtIDB4ZjAwMDAwMDAtMHhmMDBm
ZmZmZl0KWyAgIDEwLjM2MTE0M10gcGNpX2J1cyAwMDAwOjA0OiByZXNvdXJjZSAwIFtpbyAg
MHgyMDAwLTB4MmZmZl0KWyAgIDEwLjM2MTE0NF0gcGNpX2J1cyAwMDAwOjA0OiByZXNvdXJj
ZSAyIFttZW0gMHg4MDAwMDAwMC0weDgwMGZmZmZmIDY0Yml0IHByZWZdClsgICAxMC4zNjEy
MzldIHBjaSAwMDAwOjAwOjAxLjE6IEQwIHBvd2VyIHN0YXRlIGRlcGVuZHMgb24gMDAwMDow
MDowMS4wClsgICAxMC4zNjE0NTZdIHBjaSAwMDAwOjAwOjEyLjA6IEFNRCBVU0IgZGV2aWNl
ClsgICAxMC4zNjE0NzhdIHBjaSAwMDAwOjAwOjEyLjA6IEFNRCBVU0Igb2hjaSBoYW5kb2Zm
ClsgICAxMC4zNjE3NjFdIHBjaSAwMDAwOjAwOjEyLjI6IEFNRCBVU0IgZGV2aWNlClsgICAx
MC4zNjE3NzVdIHBjaSAwMDAwOjAwOjEyLjI6IEFNRCBVU0IgZWhjaSBoYW5kb2ZmClsgICAx
MC4zNjE5MDNdIHBjaSAwMDAwOjAwOjEyLjI6IFBNRSMgZG9lcyBub3Qgd29yayB1bmRlciBE
MywgZGlzYWJsaW5nIGl0ClsgICAxMC4zNjIwNDldIHBjaSAwMDAwOjAwOjEzLjA6IEFNRCBV
U0IgZGV2aWNlClsgICAxMC4zNjIwNjJdIHBjaSAwMDAwOjAwOjEzLjA6IEFNRCBVU0Igb2hj
aSBoYW5kb2ZmClsgICAxMC4zNjIzMjhdIHBjaSAwMDAwOjAwOjEzLjI6IEFNRCBVU0IgZGV2
aWNlClsgICAxMC4zNjIzMzldIHBjaSAwMDAwOjAwOjEzLjI6IEFNRCBVU0IgZWhjaSBoYW5k
b2ZmClsgICAxMC4zNjI0NjldIHBjaSAwMDAwOjAwOjEzLjI6IFBNRSMgZG9lcyBub3Qgd29y
ayB1bmRlciBEMywgZGlzYWJsaW5nIGl0ClsgICAxMC4zNjI2MjJdIHBjaSAwMDAwOjAwOjE0
LjU6IEFNRCBVU0IgZGV2aWNlClsgICAxMC4zNjI2MzRdIHBjaSAwMDAwOjAwOjE0LjU6IEFN
RCBVU0Igb2hjaSBoYW5kb2ZmClsgICAxMC4zNjI5MTVdIHBjaSAwMDAwOjAwOjE2LjA6IEFN
RCBVU0IgZGV2aWNlClsgICAxMC4zNjI5MjhdIHBjaSAwMDAwOjAwOjE2LjA6IEFNRCBVU0Ig
b2hjaSBoYW5kb2ZmClsgICAxMC4zNjMxOTVdIHBjaSAwMDAwOjAwOjE2LjI6IEFNRCBVU0Ig
ZGV2aWNlClsgICAxMC4zNjMyMDZdIHBjaSAwMDAwOjAwOjE2LjI6IEFNRCBVU0IgZWhjaSBo
YW5kb2ZmClsgICAxMC4zNjMzMzRdIHBjaSAwMDAwOjAwOjE2LjI6IFBNRSMgZG9lcyBub3Qg
d29yayB1bmRlciBEMywgZGlzYWJsaW5nIGl0ClsgICAxMC4zNjM1NjFdIHBjaSAwMDAwOjAz
OjAwLjA6IEFNRCBVU0IgeGhjaSBoYW5kb2ZmClsgICAxMC4zNjM2MTBdIFBDSTogQ0xTIDY0
IGJ5dGVzLCBkZWZhdWx0IDY0ClsgICAxMC4zNjM3MjNdIHBjaSAwMDAwOjAwOjAwLjI6IEFN
RC1WaTogQXBwbHlpbmcgZXJyYXR1bSA3NDYgd29ya2Fyb3VuZApbICAgMTAuMzYzODEwXSBw
Y2kgMDAwMDowMDowMS4wOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgMApbICAgMTAuMzYzODI2
XSBwY2kgMDAwMDowMDowMS4xOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgMApbICAgMTAuMzYz
ODUxXSBwY2kgMDAwMDowMDoxMS4wOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgMQpbICAgMTAu
MzYzODg2XSBwY2kgMDAwMDowMDoxMi4wOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgMgpbICAg
MTAuMzYzOTAzXSBwY2kgMDAwMDowMDoxMi4yOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgMgpb
ICAgMTAuMzYzOTM3XSBwY2kgMDAwMDowMDoxMy4wOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAg
MwpbICAgMTAuMzYzOTUzXSBwY2kgMDAwMDowMDoxMy4yOiBBZGRpbmcgdG8gaW9tbXUgZ3Jv
dXAgMwpbICAgMTAuMzYzOTkxXSBwY2kgMDAwMDowMDoxNC4wOiBBZGRpbmcgdG8gaW9tbXUg
Z3JvdXAgNApbICAgMTAuMzY0MDA5XSBwY2kgMDAwMDowMDoxNC4yOiBBZGRpbmcgdG8gaW9t
bXUgZ3JvdXAgNApbICAgMTAuMzY0MDI1XSBwY2kgMDAwMDowMDoxNC4zOiBBZGRpbmcgdG8g
aW9tbXUgZ3JvdXAgNApbICAgMTAuMzY0MDQ4XSBwY2kgMDAwMDowMDoxNC40OiBBZGRpbmcg
dG8gaW9tbXUgZ3JvdXAgNQpbICAgMTAuMzY0MDcwXSBwY2kgMDAwMDowMDoxNC41OiBBZGRp
bmcgdG8gaW9tbXUgZ3JvdXAgNgpbICAgMTAuMzY0MTA0XSBwY2kgMDAwMDowMDoxNS4wOiBB
ZGRpbmcgdG8gaW9tbXUgZ3JvdXAgNwpbICAgMTAuMzY0MTIzXSBwY2kgMDAwMDowMDoxNS4x
OiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgNwpbICAgMTAuMzY0MTM5XSBwY2kgMDAwMDowMDox
NS4yOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgNwpbICAgMTAuMzY0MTc4XSBwY2kgMDAwMDow
MDoxNi4wOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgOApbICAgMTAuMzY0MTk0XSBwY2kgMDAw
MDowMDoxNi4yOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgOApbICAgMTAuMzY0MjQ2XSBwY2kg
MDAwMDowMDoxOC4wOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgOQpbICAgMTAuMzY0MjY2XSBw
Y2kgMDAwMDowMDoxOC4xOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgOQpbICAgMTAuMzY0Mjgz
XSBwY2kgMDAwMDowMDoxOC4yOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgOQpbICAgMTAuMzY0
MzAzXSBwY2kgMDAwMDowMDoxOC4zOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgOQpbICAgMTAu
MzY0MzIxXSBwY2kgMDAwMDowMDoxOC40OiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgOQpbICAg
MTAuMzY0MzQwXSBwY2kgMDAwMDowMDoxOC41OiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAgOQpb
ICAgMTAuMzY0MzUyXSBwY2kgMDAwMDowMzowMC4wOiBBZGRpbmcgdG8gaW9tbXUgZ3JvdXAg
NwpbICAgMTAuMzY0MzYwXSBwY2kgMDAwMDowNDowMC4wOiBBZGRpbmcgdG8gaW9tbXUgZ3Jv
dXAgNwpbICAgMTAuMzY2NTAwXSBwY2kgMDAwMDowMDowMC4yOiBBTUQtVmk6IEZvdW5kIElP
TU1VIGNhcCAweDQwClsgICAxMC4zNjY1MDVdIEFNRC1WaTogRXh0ZW5kZWQgZmVhdHVyZXMg
KDB4ODAwMDAwODUzLCAweDApOiBQcmVGIFBQUiBHVCBJQQpbICAgMTAuMzY2NTEwXSBBTUQt
Vmk6IEludGVycnVwdCByZW1hcHBpbmcgZW5hYmxlZApbICAgMTAuMzY2NjkxXSBQQ0ktRE1B
OiBVc2luZyBzb2Z0d2FyZSBib3VuY2UgYnVmZmVyaW5nIGZvciBJTyAoU1dJT1RMQikKWyAg
IDEwLjM2NjY5M10gc29mdHdhcmUgSU8gVExCOiBtYXBwZWQgW21lbSAweDAwMDAwMDAwNWJl
NGQwMDAtMHgwMDAwMDAwMDVmZTRkMDAwXSAoNjRNQikKWyAgIDEwLjM2Njc0Ml0gTFZUIG9m
ZnNldCAwIGFzc2lnbmVkIGZvciB2ZWN0b3IgMHg0MDAKWyAgIDEwLjM2Njc2M10gcGVyZjog
QU1EIElCUyBkZXRlY3RlZCAoMHgwMDAwMDBmZikKWyAgIDEwLjM2Njc3MV0gYW1kX3VuY29y
ZTogNCAgYW1kX25iIGNvdW50ZXJzIGRldGVjdGVkClsgICAxMC4zNjc1NzRdIHdvcmtpbmdz
ZXQ6IHRpbWVzdGFtcF9iaXRzPTM3IG1heF9vcmRlcj0yMCBidWNrZXRfb3JkZXI9MApbICAg
MTAuMzY3NjAzXSB6YnVkOiBsb2FkZWQKWyAgIDEwLjM2ODA2Nl0gTkVUOiBSZWdpc3RlcmVk
IFBGX0FMRyBwcm90b2NvbCBmYW1pbHkKWyAgIDEwLjM2ODA3MV0gS2V5IHR5cGUgYXN5bW1l
dHJpYyByZWdpc3RlcmVkClsgICAxMC4zNjgwNzNdIEFzeW1tZXRyaWMga2V5IHBhcnNlciAn
eDUwOScgcmVnaXN0ZXJlZApbICAgMTAuMzY4MzQ2XSBhbGc6IHNlbGYtdGVzdHMgZGlzYWJs
ZWQKWyAgIDEwLjM2ODQzOV0gQmxvY2sgbGF5ZXIgU0NTSSBnZW5lcmljIChic2cpIGRyaXZl
ciB2ZXJzaW9uIDAuNCBsb2FkZWQgKG1ham9yIDI1MSkKWyAgIDEwLjM2ODQ3N10gaW8gc2No
ZWR1bGVyIG1xLWRlYWRsaW5lIHJlZ2lzdGVyZWQKWyAgIDEwLjM2ODQ3OV0gaW8gc2NoZWR1
bGVyIGt5YmVyIHJlZ2lzdGVyZWQKWyAgIDEwLjM3MDAxMV0gcGNpZXBvcnQgMDAwMDowMDox
NS4wOiBQTUU6IFNpZ25hbGluZyB3aXRoIElSUSAyNQpbICAgMTAuMzcwMTc0XSBwY2llcG9y
dCAwMDAwOjAwOjE1LjE6IFBNRTogU2lnbmFsaW5nIHdpdGggSVJRIDI2ClsgICAxMC4zNzAy
NDddIHBjaWVwb3J0IDAwMDA6MDA6MTUuMjogZW5hYmxpbmcgZGV2aWNlICgwMDAwIC0+IDAw
MDMpClsgICAxMC4zNzA0NDldIHBjaWVwb3J0IDAwMDA6MDA6MTUuMjogUE1FOiBTaWduYWxp
bmcgd2l0aCBJUlEgMjcKWyAgIDEwLjM3MDcwNl0gaW5wdXQ6IFBvd2VyIEJ1dHRvbiBhcyAv
ZGV2aWNlcy9MTlhTWVNUTTowMC9MTlhQV1JCTjowMC9pbnB1dC9pbnB1dDAKWyAgIDEwLjM3
MDc2N10gQUNQSTogYnV0dG9uOiBQb3dlciBCdXR0b24gW1BXUkZdClsgICAxMC4zNzA4MjNd
IEFDUEk6IFxfU0JfLlAwMDA6IEZvdW5kIDIgaWRsZSBzdGF0ZXMKWyAgIDEwLjM3MDkzN10g
QUNQSTogXF9TQl8uUDAwMTogRm91bmQgMiBpZGxlIHN0YXRlcwpbICAgMTAuMzcxODI3XSB0
aGVybWFsIExOWFRIRVJNOjAwOiByZWdpc3RlcmVkIGFzIHRoZXJtYWxfem9uZTAKWyAgIDEw
LjM3MTgzMF0gQUNQSTogdGhlcm1hbDogVGhlcm1hbCBab25lIFtUWjAwXSAoMCBDKQpbICAg
MTAuMzcyMTQ2XSBOb24tdm9sYXRpbGUgbWVtb3J5IGRyaXZlciB2MS4zClsgICAxMC4zNzIy
MThdIEFNRC1WaTogQU1EIElPTU1VdjIgbG9hZGVkIGFuZCBpbml0aWFsaXplZApbICAgMTAu
MzcyMzM4XSBhaGNpIDAwMDA6MDA6MTEuMDogdmVyc2lvbiAzLjAKWyAgIDEwLjM3MjYxMl0g
YWhjaSAwMDAwOjAwOjExLjA6IEFIQ0kgMDAwMS4wMzAwIDMyIHNsb3RzIDggcG9ydHMgNiBH
YnBzIDB4NDAgaW1wbCBTQVRBIG1vZGUKWyAgIDEwLjM3MjYxNl0gYWhjaSAwMDAwOjAwOjEx
LjA6IGZsYWdzOiA2NGJpdCBuY3Egc250ZiBpbGNrIGxlZCBjbG8gcGlvIApbICAgMTAuMzc0
MTMzXSBzY3NpIGhvc3QwOiBhaGNpClsgICAxMC4zNzQzMzNdIHNjc2kgaG9zdDE6IGFoY2kK
WyAgIDEwLjM3NDUxMF0gc2NzaSBob3N0MjogYWhjaQpbICAgMTAuMzc0NzEwXSBzY3NpIGhv
c3QzOiBhaGNpClsgICAxMC4zNzQ4NzldIHNjc2kgaG9zdDQ6IGFoY2kKWyAgIDEwLjM3NTA1
N10gc2NzaSBob3N0NTogYWhjaQpbICAgMTAuMzc1MjQxXSBzY3NpIGhvc3Q2OiBhaGNpClsg
ICAxMC4zNzU0MjFdIHNjc2kgaG9zdDc6IGFoY2kKWyAgIDEwLjM3NTUwOF0gYXRhIHBvcnQx
OiBEVU1NWQpbICAgMTAuMzc1NTEwXSBhdGEgcG9ydDI6IERVTU1ZClsgICAxMC4zNzU1MTFd
IGF0YSBwb3J0MzogRFVNTVkKWyAgIDEwLjM3NTUxMl0gYXRhIHBvcnQ0OiBEVU1NWQpbICAg
MTAuMzc1NTE0XSBhdGEgcG9ydDU6IERVTU1ZClsgICAxMC4zNzU1MTVdIGF0YSBwb3J0Njog
RFVNTVkKWyAgIDEwLjM3NTUxN10gYXRhIHBvcnQ3OiBTQVRBIG1heCBVRE1BLzEzMyBhYmFy
IG0yMDQ4QDB4ZjAxY2MwMDAgcG9ydCAweGYwMWNjNDAwIGlycSAxOQpbICAgMTAuMzc1NTE5
XSBhdGEgcG9ydDg6IERVTU1ZClsgICAxMC4zNzU1OTddIEFDUEk6IGJ1cyB0eXBlIGRybV9j
b25uZWN0b3IgcmVnaXN0ZXJlZApbICAgMTAuMzc1ODIzXSBpODA0MjogUE5QOiBObyBQUy8y
IGNvbnRyb2xsZXIgZm91bmQuClsgICAxMC4zNzU4MjRdIGk4MDQyOiBQcm9iaW5nIHBvcnRz
IGRpcmVjdGx5LgpbICAgMTAuMzc4Njc1XSBzZXJpbzogaTgwNDIgS0JEIHBvcnQgYXQgMHg2
MCwweDY0IGlycSAxClsgICAxMC4zNzg3NTFdIHNlcmlvOiBpODA0MiBBVVggcG9ydCBhdCAw
eDYwLDB4NjQgaXJxIDEyClsgICAxMC4zNzg4NzRdIG1vdXNlZGV2OiBQUy8yIG1vdXNlIGRl
dmljZSBjb21tb24gZm9yIGFsbCBtaWNlClsgICAxMC4zNzg5MjddIHJ0Y19jbW9zIDAwOjAx
OiBSVEMgY2FuIHdha2UgZnJvbSBTNApbICAgMTAuMzc5MTczXSBydGNfY21vcyAwMDowMTog
cmVnaXN0ZXJlZCBhcyBydGMwClsgICAxMC4zNzkxOTddIHJ0Y19jbW9zIDAwOjAxOiBzZXR0
aW5nIHN5c3RlbSBjbG9jayB0byAyMDIzLTA0LTE3VDA3OjM5OjUwIFVUQyAoMTY4MTcxNzE5
MCkKWyAgIDEwLjM3OTIzNF0gcnRjX2Ntb3MgMDA6MDE6IGFsYXJtcyB1cCB0byBvbmUgZGF5
LCB5M2ssIDExNCBieXRlcyBudnJhbSwgaHBldCBpcnFzClsgICAxMC4zNzkyNjhdIGRldmlj
ZS1tYXBwZXI6IHVldmVudDogdmVyc2lvbiAxLjAuMwpbICAgMTAuMzc5MzM3XSBkZXZpY2Ut
bWFwcGVyOiBpb2N0bDogNC40Ny4wLWlvY3RsICgyMDIyLTA3LTI4KSBpbml0aWFsaXNlZDog
ZG0tZGV2ZWxAcmVkaGF0LmNvbQpbICAgMTAuMzc5NDk2XSBoaWQ6IHJhdyBISUQgZXZlbnRz
IGRyaXZlciAoQykgSmlyaSBLb3NpbmEKWyAgIDEwLjM3OTUzMV0gdXNiY29yZTogcmVnaXN0
ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2JoaWQKWyAgIDEwLjM3OTUzMl0gdXNiaGlk
OiBVU0IgSElEIGNvcmUgZHJpdmVyClsgICAxMC4zNzk2MzhdIEluaXRpYWxpemluZyBYRlJN
IG5ldGxpbmsgc29ja2V0ClsgICAxMC4zNzk2NDhdIE5FVDogUmVnaXN0ZXJlZCBQRl9QQUNL
RVQgcHJvdG9jb2wgZmFtaWx5ClsgICAxMC4zNzk2NTBdIHg4Ni9wbTogZmFtaWx5IDB4MTUg
Y3B1IGRldGVjdGVkLCBNU1Igc2F2aW5nIGlzIG5lZWRlZCBkdXJpbmcgc3VzcGVuZGluZy4K
WyAgIDEwLjM3OTgxN10gbWljcm9jb2RlOiBDUFUwOiBwYXRjaF9sZXZlbD0weDA2MDAxMTFm
ClsgICAxMC4zNzk4MjddIG1pY3JvY29kZTogTWljcm9jb2RlIFVwZGF0ZSBEcml2ZXI6IHYy
LjIuClsgICAxMC4zNzk4MzFdIElQSSBzaG9ydGhhbmQgYnJvYWRjYXN0OiBlbmFibGVkClsg
ICAxMC4zNzk4MzhdIEFWWCB2ZXJzaW9uIG9mIGdjbV9lbmMvZGVjIGVuZ2FnZWQuClsgICAx
MC4zNzk4NTRdIEFFUyBDVFIgbW9kZSBieTggb3B0aW1pemF0aW9uIGVuYWJsZWQKWyAgIDEw
LjM4MDY5NV0gYXRhIGxpbms3OiBTQVRBIGxpbmsgdXAgNi4wIEdicHMgKFNTdGF0dXMgMTMz
IFNDb250cm9sIDMwMCkKWyAgIDEwLjM4MDk2OV0gYXRhIGRldjcuMDogQVRBLTk6IFNhbkRp
c2sgU0RTU0RQMDY0RywgMi4wLjAsIG1heCBVRE1BLzEzMwpbICAgMTAuMzgwOTcyXSBhdGEg
ZGV2Ny4wOiAxMjUwNDU0MjQgc2VjdG9ycywgbXVsdGkgMTogTEJBNDggTkNRIChkZXB0aCAz
MikKWyAgIDEwLjM4MTE3OF0gYXRhIGRldjcuMDogY29uZmlndXJlZCBmb3IgVURNQS8xMzMK
WyAgIDEwLjM4MTI4OV0gc2NzaSA2OjA6MDowOiBEaXJlY3QtQWNjZXNzICAgICBBVEEgICAg
ICBTYW5EaXNrIFNEU1NEUDA2IDAgICAgUFE6IDAgQU5TSTogNQpbICAgMTAuMzgxNzE4XSBz
ZCA2OjA6MDowOiBbc2RhXSAxMjUwNDU0MjQgNTEyLWJ5dGUgbG9naWNhbCBibG9ja3M6ICg2
NC4wIEdCLzU5LjYgR2lCKQpbICAgMTAuMzgxNzMwXSBzZCA2OjA6MDowOiBbc2RhXSBXcml0
ZSBQcm90ZWN0IGlzIG9mZgpbICAgMTAuMzgxNzMzXSBzZCA2OjA6MDowOiBbc2RhXSBNb2Rl
IFNlbnNlOiAwMCAzYSAwMCAwMApbICAgMTAuMzgxNzQ5XSBzZCA2OjA6MDowOiBbc2RhXSBX
cml0ZSBjYWNoZTogZW5hYmxlZCwgcmVhZCBjYWNoZTogZW5hYmxlZCwgZG9lc24ndCBzdXBw
b3J0IERQTyBvciBGVUEKWyAgIDEwLjM4MTc3NV0gc2QgNjowOjA6MDogW3NkYV0gUHJlZmVy
cmVkIG1pbmltdW0gSS9PIHNpemUgNTEyIGJ5dGVzClsgICAxMC4zODI1NDddICBzZGE6IHNk
YTEgc2RhMiBzZGEzClsgICAxMC4zODI3MzZdIHNkIDY6MDowOjA6IFtzZGFdIEF0dGFjaGVk
IFNDU0kgZGlzawpbICAgMTAuMzg1MzQ0XSBzY2hlZF9jbG9jazogTWFya2luZyBzdGFibGUg
KDEwMjY4MDA3OTc0LCAxMTY5Nzc4NjYpLT4oMTAzODc0MjMxNTYsIC0yNDM3MzE2KQpbICAg
MTAuMzg1NTE5XSByZWdpc3RlcmVkIHRhc2tzdGF0cyB2ZXJzaW9uIDEKWyAgIDEwLjM4NTcz
NF0genN3YXA6IGxvYWRlZCB1c2luZyBwb29sIGx6by96YnVkClsgICAxMC4zOTAwMjhdIGtt
ZW1sZWFrOiBLZXJuZWwgbWVtb3J5IGxlYWsgZGV0ZWN0b3IgaW5pdGlhbGl6ZWQgKG1lbSBw
b29sIGF2YWlsYWJsZTogMTU2NzcpClsgICAxMC4zOTAwMzNdIGRlYnVnX3ZtX3BndGFibGU6
IFtkZWJ1Z192bV9wZ3RhYmxlICAgICAgICAgXTogVmFsaWRhdGluZyBhcmNoaXRlY3R1cmUg
cGFnZSB0YWJsZSBoZWxwZXJzClsgICAxMC4zOTM4OTRdIGttZW1sZWFrOiBBdXRvbWF0aWMg
bWVtb3J5IHNjYW5uaW5nIHRocmVhZCBzdGFydGVkClsgICAxMC4zOTQ0ODhdIEtleSB0eXBl
IGVuY3J5cHRlZCByZWdpc3RlcmVkClsgICAxMC4zOTc0NTNdIFBNOiAgIE1hZ2ljIG51bWJl
cjogMzoxMzk6NjczClsgICAxMC4zOTc0NjZdIHdvcmtxdWV1ZSBzY3NpX3RtZl8zOiBoYXNo
IG1hdGNoZXMKWyAgIDEwLjQxMDMzNF0gRVhUNC1mcyAoc2RhMyk6IG1vdW50ZWQgZmlsZXN5
c3RlbSBmZTI5ZTBkYy02MzAzLTQ0MDEtOTg3Yy04NDcyYmMxYjk1MTYgd2l0aCBvcmRlcmVk
IGRhdGEgbW9kZS4gUXVvdGEgbW9kZTogbm9uZS4KWyAgIDEwLjQxMDM3OF0gVkZTOiBNb3Vu
dGVkIHJvb3QgKGV4dDQgZmlsZXN5c3RlbSkgb24gZGV2aWNlIDg6My4KWyAgIDEwLjQxMjI2
OF0gZGV2dG1wZnM6IG1vdW50ZWQKWyAgIDEwLjQxMjI4Nl0gQWZ0ZXIga2VybmVsX2luaXRf
ZnJlZWFibGUKWyAgIDEwLjQxNjc5M10gRnJlZWluZyB1bnVzZWQga2VybmVsIGltYWdlIChp
bml0bWVtKSBtZW1vcnk6IDI5MDhLClsgICAxMC40MjEyOTFdIFdyaXRlIHByb3RlY3Rpbmcg
dGhlIGtlcm5lbCByZWFkLW9ubHkgZGF0YTogMjA0ODBrClsgICAxMC40MjE1NTldIEZyZWVp
bmcgdW51c2VkIGtlcm5lbCBpbWFnZSAocm9kYXRhL2RhdGEgZ2FwKSBtZW1vcnk6IDgzNksK
WyAgIDEwLjQ1ODYxNF0geDg2L21tOiBDaGVja2VkIFcrWCBtYXBwaW5nczogcGFzc2VkLCBu
byBXK1ggcGFnZXMgZm91bmQuClsgICAxMC40NTg2MjBdIHJvZGF0YV90ZXN0OiBhbGwgdGVz
dHMgd2VyZSBzdWNjZXNzZnVsClsgICAxMC40NTg2MjFdIEFmdGVyIG1hcmtfcmVhZG9ubHkK
WyAgIDEwLjQ1ODYyMV0gQWZ0ZXIgcHRpX2ZpbmFsaXplClsgICAxMC40NTg2MzddIHJjdV9l
bmRfaW5rZXJuZWxfYm9vdApbICAgMTAuNDU4NjQ0XSBSdW4gL3NiaW4vaW5pdCBhcyBpbml0
IHByb2Nlc3MKWyAgIDEwLjQ1ODY0Nl0gICB3aXRoIGFyZ3VtZW50czoKWyAgIDEwLjQ1ODY0
OF0gICAgIC9zYmluL2luaXQKWyAgIDEwLjQ1ODY0OV0gICAgIG5vaXNhcG5wClsgICAxMC40
NTg2NTBdICAgd2l0aCBlbnZpcm9ubWVudDoKWyAgIDEwLjQ1ODY1MF0gICAgIEhPTUU9Lwpb
ICAgMTAuNDU4NjUxXSAgICAgVEVSTT1saW51eApbICAgMTAuNDU4NjUyXSAgICAgQk9PVF9J
TUFHRT0vYm9vdC92bWxpbnV6LTYuMy4wLXJjNi0wMDMxMS1nZGU4MjI0OTY5ZjY2ClsgICAx
MC42MzcxOTBdIHN5c3RlbWRbMV06IEluc2VydGVkIG1vZHVsZSAnYXV0b2ZzNCcKWyAgIDEw
LjY2MzYyNV0gTkVUOiBSZWdpc3RlcmVkIFBGX0lORVQ2IHByb3RvY29sIGZhbWlseQpbICAg
MTAuNjY0NDczXSBTZWdtZW50IFJvdXRpbmcgd2l0aCBJUHY2ClsgICAxMC42NjQ1MDBdIElu
LXNpdHUgT0FNIChJT0FNKSB3aXRoIElQdjYKWyAgIDEwLjY5MTM3Nl0gc3lzdGVtZFsxXTog
c3lzdGVtZCAyNTIuNi0xIHJ1bm5pbmcgaW4gc3lzdGVtIG1vZGUgKCtQQU0gK0FVRElUICtT
RUxJTlVYICtBUFBBUk1PUiArSU1BICtTTUFDSyArU0VDQ09NUCArR0NSWVBUIC1HTlVUTFMg
K09QRU5TU0wgK0FDTCArQkxLSUQgK0NVUkwgK0VMRlVUSUxTICtGSURPMiArSUROMiAtSURO
ICtJUFRDICtLTU9EICtMSUJDUllQVFNFVFVQICtMSUJGRElTSyArUENSRTIgLVBXUVVBTElU
WSArUDExS0lUICtRUkVOQ09ERSArVFBNMiArQlpJUDIgK0xaNCArWFogK1pMSUIgK1pTVEQg
LUJQRl9GUkFNRVdPUksgLVhLQkNPTU1PTiArVVRNUCArU1lTVklOSVQgZGVmYXVsdC1oaWVy
YXJjaHk9dW5pZmllZCkKWyAgIDEwLjY5MTM4N10gc3lzdGVtZFsxXTogRGV0ZWN0ZWQgYXJj
aGl0ZWN0dXJlIHg4Ni02NC4KWyAgIDEwLjY5NjQ2OF0gc3lzdGVtZFsxXTogSG9zdG5hbWUg
c2V0IHRvIDxrb2RpPi4KWyAgIDExLjAwMDcwMF0gc3lzdGVtZFsxXTogUXVldWVkIHN0YXJ0
IGpvYiBmb3IgZGVmYXVsdCB0YXJnZXQgZ3JhcGhpY2FsLnRhcmdldC4KWyAgIDExLjAxMTY1
MV0gc3lzdGVtZFsxXTogQ3JlYXRlZCBzbGljZSBzeXN0ZW0tZ2V0dHkuc2xpY2UgLSBTbGlj
ZSAvc3lzdGVtL2dldHR5LgpbICAgMTEuMDEyNzQ3XSBzeXN0ZW1kWzFdOiBDcmVhdGVkIHNs
aWNlIHN5c3RlbS1tb2Rwcm9iZS5zbGljZSAtIFNsaWNlIC9zeXN0ZW0vbW9kcHJvYmUuClsg
ICAxMS4wMTM2MDBdIHN5c3RlbWRbMV06IENyZWF0ZWQgc2xpY2UgdXNlci5zbGljZSAtIFVz
ZXIgYW5kIFNlc3Npb24gU2xpY2UuClsgICAxMS4wMTM3ODRdIHN5c3RlbWRbMV06IFN0YXJ0
ZWQgc3lzdGVtZC1hc2stcGFzc3dvcmQtY29uc29sZS5wYXRoIC0gRGlzcGF0Y2ggUGFzc3dv
cmQgUmVxdWVzdHMgdG8gQ29uc29sZSBEaXJlY3RvcnkgV2F0Y2guClsgICAxMS4wMTM5MDJd
IHN5c3RlbWRbMV06IFN0YXJ0ZWQgc3lzdGVtZC1hc2stcGFzc3dvcmQtd2FsbC5wYXRoIC0g
Rm9yd2FyZCBQYXNzd29yZCBSZXF1ZXN0cyB0byBXYWxsIERpcmVjdG9yeSBXYXRjaC4KWyAg
IDExLjAxNDMyNF0gc3lzdGVtZFsxXTogU2V0IHVwIGF1dG9tb3VudCBwcm9jLXN5cy1mcy1i
aW5mbXRfbWlzYy5hdXRvbW91bnQgLSBBcmJpdHJhcnkgRXhlY3V0YWJsZSBGaWxlIEZvcm1h
dHMgRmlsZSBTeXN0ZW0gQXV0b21vdW50IFBvaW50LgpbICAgMTEuMDE0MzY2XSBzeXN0ZW1k
WzFdOiBSZWFjaGVkIHRhcmdldCBjcnlwdHNldHVwLnRhcmdldCAtIExvY2FsIEVuY3J5cHRl
ZCBWb2x1bWVzLgpbICAgMTEuMDE0NDE2XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBp
bnRlZ3JpdHlzZXR1cC50YXJnZXQgLSBMb2NhbCBJbnRlZ3JpdHkgUHJvdGVjdGVkIFZvbHVt
ZXMuClsgICAxMS4wMTQ0NTVdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IHBhdGhzLnRh
cmdldCAtIFBhdGggVW5pdHMuClsgICAxMS4wMTQ0ODVdIHN5c3RlbWRbMV06IFJlYWNoZWQg
dGFyZ2V0IHJlbW90ZS1mcy50YXJnZXQgLSBSZW1vdGUgRmlsZSBTeXN0ZW1zLgpbICAgMTEu
MDE0NTE1XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBzbGljZXMudGFyZ2V0IC0gU2xp
Y2UgVW5pdHMuClsgICAxMS4wMTQ1NTRdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IHN3
YXAudGFyZ2V0IC0gU3dhcHMuClsgICAxMS4wMTQ1OTJdIHN5c3RlbWRbMV06IFJlYWNoZWQg
dGFyZ2V0IHZlcml0eXNldHVwLnRhcmdldCAtIExvY2FsIFZlcml0eSBQcm90ZWN0ZWQgVm9s
dW1lcy4KWyAgIDExLjAxNzA2NV0gc3lzdGVtZFsxXTogTGlzdGVuaW5nIG9uIHN5c3RlbWQt
Y29yZWR1bXAuc29ja2V0IC0gUHJvY2VzcyBDb3JlIER1bXAgU29ja2V0LgpbICAgMTEuMDE3
MzE3XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gc3lzdGVtZC1mc2NrZC5zb2NrZXQgLSBm
c2NrIHRvIGZzY2tkIGNvbW11bmljYXRpb24gU29ja2V0LgpbICAgMTEuMDE3NDc5XSBzeXN0
ZW1kWzFdOiBMaXN0ZW5pbmcgb24gc3lzdGVtZC1pbml0Y3RsLnNvY2tldCAtIGluaXRjdGwg
Q29tcGF0aWJpbGl0eSBOYW1lZCBQaXBlLgpbICAgMTEuMDE3Nzg2XSBzeXN0ZW1kWzFdOiBM
aXN0ZW5pbmcgb24gc3lzdGVtZC1qb3VybmFsZC1hdWRpdC5zb2NrZXQgLSBKb3VybmFsIEF1
ZGl0IFNvY2tldC4KWyAgIDExLjAxODA1OV0gc3lzdGVtZFsxXTogTGlzdGVuaW5nIG9uIHN5
c3RlbWQtam91cm5hbGQtZGV2LWxvZy5zb2NrZXQgLSBKb3VybmFsIFNvY2tldCAoL2Rldi9s
b2cpLgpbICAgMTEuMDE4MzM3XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gc3lzdGVtZC1q
b3VybmFsZC5zb2NrZXQgLSBKb3VybmFsIFNvY2tldC4KWyAgIDExLjAxODU5NF0gc3lzdGVt
ZFsxXTogTGlzdGVuaW5nIG9uIHN5c3RlbWQtbmV0d29ya2Quc29ja2V0IC0gTmV0d29yayBT
ZXJ2aWNlIE5ldGxpbmsgU29ja2V0LgpbICAgMTEuMDE5NDIwXSBzeXN0ZW1kWzFdOiBMaXN0
ZW5pbmcgb24gc3lzdGVtZC11ZGV2ZC1jb250cm9sLnNvY2tldCAtIHVkZXYgQ29udHJvbCBT
b2NrZXQuClsgICAxMS4wMTk2OTNdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBzeXN0ZW1k
LXVkZXZkLWtlcm5lbC5zb2NrZXQgLSB1ZGV2IEtlcm5lbCBTb2NrZXQuClsgICAxMS4wMjI1
MDRdIHN5c3RlbWRbMV06IE1vdW50aW5nIGRldi1odWdlcGFnZXMubW91bnQgLSBIdWdlIFBh
Z2VzIEZpbGUgU3lzdGVtLi4uClsgICAxMS4wMjUxODldIHN5c3RlbWRbMV06IE1vdW50aW5n
IGRldi1tcXVldWUubW91bnQgLSBQT1NJWCBNZXNzYWdlIFF1ZXVlIEZpbGUgU3lzdGVtLi4u
ClsgICAxMS4wMjk1OTBdIHN5c3RlbWRbMV06IE1vdW50aW5nIHN5cy1rZXJuZWwtZGVidWcu
bW91bnQgLSBLZXJuZWwgRGVidWcgRmlsZSBTeXN0ZW0uLi4KWyAgIDExLjA0Njg0OV0gc3lz
dGVtZFsxXTogTW91bnRpbmcgc3lzLWtlcm5lbC10cmFjaW5nLm1vdW50IC0gS2VybmVsIFRy
YWNlIEZpbGUgU3lzdGVtLi4uClsgICAxMS4wNTM3MzBdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IGttb2Qtc3RhdGljLW5vZGVzLnNlcnZpY2UgLSBDcmVhdGUgTGlzdCBvZiBTdGF0aWMgRGV2
aWNlIE5vZGVzLi4uClsgICAxMS4wNjYwMTVdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIG1vZHBy
b2JlQGNvbmZpZ2ZzLnNlcnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1bGUgY29uZmlnZnMuLi4K
WyAgIDExLjA2OTQxNF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgbW9kcHJvYmVAZG1fbW9kLnNl
cnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1bGUgZG1fbW9kLi4uClsgICAxMS4wNzY1NDddIHN5
c3RlbWRbMV06IFN0YXJ0aW5nIG1vZHByb2JlQGRybS5zZXJ2aWNlIC0gTG9hZCBLZXJuZWwg
TW9kdWxlIGRybS4uLgpbICAgMTEuMDkwODUyXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBtb2Rw
cm9iZUBlZmlfcHN0b3JlLnNlcnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1bGUgZWZpX3BzdG9y
ZS4uLgpbICAgMTEuMDk3ODE1XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBtb2Rwcm9iZUBmdXNl
LnNlcnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1bGUgZnVzZS4uLgpbICAgMTEuMTA5Mzg4XSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBtb2Rwcm9iZUBsb29wLnNlcnZpY2UgLSBMb2FkIEtlcm5l
bCBNb2R1bGUgbG9vcC4uLgpbICAgMTEuMTA5NDU2XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWZp
cnN0Ym9vdC5zZXJ2aWNlIC0gRmlyc3QgQm9vdCBXaXphcmQgd2FzIHNraXBwZWQgYmVjYXVz
ZSBvZiBhbiB1bm1ldCBjb25kaXRpb24gY2hlY2sgKENvbmRpdGlvbkZpcnN0Qm9vdD15ZXMp
LgpbICAgMTEuMTA5NTI1XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWZzY2stcm9vdC5zZXJ2aWNl
IC0gRmlsZSBTeXN0ZW0gQ2hlY2sgb24gUm9vdCBEZXZpY2Ugd2FzIHNraXBwZWQgYmVjYXVz
ZSBvZiBhbiB1bm1ldCBjb25kaXRpb24gY2hlY2sgKENvbmRpdGlvblBhdGhJc1JlYWRXcml0
ZT0hLykuClsgICAxMS4xMDk1NjJdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IGxvY2Fs
LWZzLnRhcmdldCAtIExvY2FsIEZpbGUgU3lzdGVtcy4KWyAgIDExLjEwOTYyOV0gc3lzdGVt
ZFsxXTogYXBwYXJtb3Iuc2VydmljZSAtIExvYWQgQXBwQXJtb3IgcHJvZmlsZXMgd2FzIHNr
aXBwZWQgYmVjYXVzZSBvZiBhbiB1bm1ldCBjb25kaXRpb24gY2hlY2sgKENvbmRpdGlvblNl
Y3VyaXR5PWFwcGFybW9yKS4KWyAgIDExLjEyMDgxMF0gbG9vcDogbW9kdWxlIGxvYWRlZApb
ICAgMTEuMTIyMDAzXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBzeXN0ZW1kLWJpbmZtdC5zZXJ2
aWNlIC0gU2V0IFVwIEFkZGl0aW9uYWwgQmluYXJ5IEZvcm1hdHMuLi4KWyAgIDExLjEyOTU5
OF0gZnVzZTogaW5pdCAoQVBJIHZlcnNpb24gNy4zOCkKWyAgIDExLjEzNzQyNl0gc3lzdGVt
ZFsxXTogU3RhcnRpbmcgc3lzdGVtZC1qb3VybmFsZC5zZXJ2aWNlIC0gSm91cm5hbCBTZXJ2
aWNlLi4uClsgICAxMS4xNDU3NTJdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHN5c3RlbWQtcmFu
ZG9tLXNlZWQuc2VydmljZSAtIExvYWQvU2F2ZSBSYW5kb20gU2VlZC4uLgpbICAgMTEuMTU3
NDk0XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBzeXN0ZW1kLXN5c2N0bC5zZXJ2aWNlIC0gQXBw
bHkgS2VybmVsIFZhcmlhYmxlcy4uLgpbICAgMTEuMTcyNjM0XSBzeXN0ZW1kWzFdOiBTdGFy
dGluZyBzeXN0ZW1kLXN5c3VzZXJzLnNlcnZpY2UgLSBDcmVhdGUgU3lzdGVtIFVzZXJzLi4u
ClsgICAxMS4xOTc5MzldIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHN5c3RlbWQtdWRldi10cmln
Z2VyLnNlcnZpY2UgLSBDb2xkcGx1ZyBBbGwgdWRldiBEZXZpY2VzLi4uClsgICAxMS4yMTYy
NDddIHN5c3RlbWRbMV06IE1vdW50ZWQgZGV2LWh1Z2VwYWdlcy5tb3VudCAtIEh1Z2UgUGFn
ZXMgRmlsZSBTeXN0ZW0uClsgICAxMS4yMTY0NTFdIHN5c3RlbWRbMV06IE1vdW50ZWQgZGV2
LW1xdWV1ZS5tb3VudCAtIFBPU0lYIE1lc3NhZ2UgUXVldWUgRmlsZSBTeXN0ZW0uClsgICAx
MS4yMTY2MTFdIHN5c3RlbWRbMV06IE1vdW50ZWQgc3lzLWtlcm5lbC1kZWJ1Zy5tb3VudCAt
IEtlcm5lbCBEZWJ1ZyBGaWxlIFN5c3RlbS4KWyAgIDExLjIxNjc3Ml0gc3lzdGVtZFsxXTog
TW91bnRlZCBzeXMta2VybmVsLXRyYWNpbmcubW91bnQgLSBLZXJuZWwgVHJhY2UgRmlsZSBT
eXN0ZW0uClsgICAxMS4yMzM0ODFdIHN5c3RlbWRbMV06IEZpbmlzaGVkIGttb2Qtc3RhdGlj
LW5vZGVzLnNlcnZpY2UgLSBDcmVhdGUgTGlzdCBvZiBTdGF0aWMgRGV2aWNlIE5vZGVzLgpb
ICAgMTEuMjM0Mjc5XSBzeXN0ZW1kWzFdOiBtb2Rwcm9iZUBjb25maWdmcy5zZXJ2aWNlOiBE
ZWFjdGl2YXRlZCBzdWNjZXNzZnVsbHkuClsgICAxMS4yMzg1MTldIHN5c3RlbWRbMV06IEZp
bmlzaGVkIG1vZHByb2JlQGNvbmZpZ2ZzLnNlcnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1bGUg
Y29uZmlnZnMuClsgICAxMS4yMzkxNzddIHN5c3RlbWRbMV06IG1vZHByb2JlQGRtX21vZC5z
ZXJ2aWNlOiBEZWFjdGl2YXRlZCBzdWNjZXNzZnVsbHkuClsgICAxMS4yNDcyMjddIHN5c3Rl
bWRbMV06IEZpbmlzaGVkIG1vZHByb2JlQGRtX21vZC5zZXJ2aWNlIC0gTG9hZCBLZXJuZWwg
TW9kdWxlIGRtX21vZC4KWyAgIDExLjI0Nzk4MF0gc3lzdGVtZFsxXTogbW9kcHJvYmVAZHJt
LnNlcnZpY2U6IERlYWN0aXZhdGVkIHN1Y2Nlc3NmdWxseS4KWyAgIDExLjI1MzUyM10gc3lz
dGVtZFsxXTogRmluaXNoZWQgbW9kcHJvYmVAZHJtLnNlcnZpY2UgLSBMb2FkIEtlcm5lbCBN
b2R1bGUgZHJtLgpbICAgMTEuMjU0MjMzXSBzeXN0ZW1kWzFdOiBtb2Rwcm9iZUBlZmlfcHN0
b3JlLnNlcnZpY2U6IERlYWN0aXZhdGVkIHN1Y2Nlc3NmdWxseS4KWyAgIDExLjI1NTU0MF0g
c3lzdGVtZFsxXTogRmluaXNoZWQgbW9kcHJvYmVAZWZpX3BzdG9yZS5zZXJ2aWNlIC0gTG9h
ZCBLZXJuZWwgTW9kdWxlIGVmaV9wc3RvcmUuClsgICAxMS4yNTYxMDldIHN5c3RlbWRbMV06
IG1vZHByb2JlQGZ1c2Uuc2VydmljZTogRGVhY3RpdmF0ZWQgc3VjY2Vzc2Z1bGx5LgpbICAg
MTEuMjYxNzQ3XSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBtb2Rwcm9iZUBmdXNlLnNlcnZpY2Ug
LSBMb2FkIEtlcm5lbCBNb2R1bGUgZnVzZS4KWyAgIDExLjI2MjM3NV0gc3lzdGVtZFsxXTog
bW9kcHJvYmVAbG9vcC5zZXJ2aWNlOiBEZWFjdGl2YXRlZCBzdWNjZXNzZnVsbHkuClsgICAx
MS4yNjU5OTFdIHN5c3RlbWRbMV06IEZpbmlzaGVkIG1vZHByb2JlQGxvb3Auc2VydmljZSAt
IExvYWQgS2VybmVsIE1vZHVsZSBsb29wLgpbICAgMTEuMjY3MjEzXSBzeXN0ZW1kWzFdOiBG
aW5pc2hlZCBzeXN0ZW1kLXN5c2N0bC5zZXJ2aWNlIC0gQXBwbHkgS2VybmVsIFZhcmlhYmxl
cy4KWyAgIDExLjI2ODE4Nl0gc3lzdGVtZFsxXTogRmluaXNoZWQgc3lzdGVtZC1zeXN1c2Vy
cy5zZXJ2aWNlIC0gQ3JlYXRlIFN5c3RlbSBVc2Vycy4KWyAgIDExLjI2ODY5MF0gc3lzdGVt
ZFsxXTogcHJvYy1zeXMtZnMtYmluZm10X21pc2MuYXV0b21vdW50OiBHb3QgYXV0b21vdW50
IHJlcXVlc3QgZm9yIC9wcm9jL3N5cy9mcy9iaW5mbXRfbWlzYywgdHJpZ2dlcmVkIGJ5IDEz
MyAoc3lzdGVtZC1iaW5mbXQpClsgICAxMS4yODI3OTldIHN5c3RlbWRbMV06IE1vdW50aW5n
IHByb2Mtc3lzLWZzLWJpbmZtdF9taXNjLm1vdW50IC0gQXJiaXRyYXJ5IEV4ZWN1dGFibGUg
RmlsZSBGb3JtYXRzIEZpbGUgU3lzdGVtLi4uClsgICAxMS4zMDk0NDFdIHN5c3RlbWRbMV06
IE1vdW50aW5nIHN5cy1mcy1mdXNlLWNvbm5lY3Rpb25zLm1vdW50IC0gRlVTRSBDb250cm9s
IEZpbGUgU3lzdGVtLi4uClsgICAxMS4zNTg1ODNdIHN5c3RlbWRbMV06IE1vdW50aW5nIHN5
cy1rZXJuZWwtY29uZmlnLm1vdW50IC0gS2VybmVsIENvbmZpZ3VyYXRpb24gRmlsZSBTeXN0
ZW0uLi4KWyAgIDExLjM1ODY4N10gc3lzdGVtZFsxXTogc3lzdGVtZC1wc3RvcmUuc2Vydmlj
ZSAtIFBsYXRmb3JtIFBlcnNpc3RlbnQgU3RvcmFnZSBBcmNoaXZhbCB3YXMgc2tpcHBlZCBi
ZWNhdXNlIG9mIGFuIHVubWV0IGNvbmRpdGlvbiBjaGVjayAoQ29uZGl0aW9uRGlyZWN0b3J5
Tm90RW1wdHk9L3N5cy9mcy9wc3RvcmUpLgpbICAgMTEuMzU4ODE2XSBzeXN0ZW1kWzFdOiBz
eXN0ZW1kLXJlcGFydC5zZXJ2aWNlIC0gUmVwYXJ0aXRpb24gUm9vdCBEaXNrIHdhcyBza2lw
cGVkIGJlY2F1c2Ugbm8gdHJpZ2dlciBjb25kaXRpb24gY2hlY2tzIHdlcmUgbWV0LgpbICAg
MTEuMzc3NDM4XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBzeXN0ZW1kLXRtcGZpbGVzLXNldHVw
LWRldi5zZXJ2aWNlIC0gQ3JlYXRlIFN0YXRpYyBEZXZpY2UgTm9kZXMgaW4gL2Rldi4uLgpb
ICAgMTEuMzc4ODIwXSBzeXN0ZW1kWzFdOiBNb3VudGVkIHByb2Mtc3lzLWZzLWJpbmZtdF9t
aXNjLm1vdW50IC0gQXJiaXRyYXJ5IEV4ZWN1dGFibGUgRmlsZSBGb3JtYXRzIEZpbGUgU3lz
dGVtLgpbICAgMTEuMzg1MDE5XSB0c2M6IFJlZmluZWQgVFNDIGNsb2Nrc291cmNlIGNhbGli
cmF0aW9uOiAzOTAwLjIyMyBNSHoKWyAgIDExLjM4NTAyNl0gY2xvY2tzb3VyY2U6IHRzYzog
bWFzazogMHhmZmZmZmZmZmZmZmZmZmZmIG1heF9jeWNsZXM6IDB4NzA3MDVhNjQ3MmMsIG1h
eF9pZGxlX25zOiA4ODE1OTA1ODY4MTIgbnMKWyAgIDExLjM4NTAzOF0gY2xvY2tzb3VyY2U6
IFN3aXRjaGVkIHRvIGNsb2Nrc291cmNlIHRzYwpbICAgMTEuMzk3NDY1XSBzeXN0ZW1kWzFd
OiBGaW5pc2hlZCBzeXN0ZW1kLWJpbmZtdC5zZXJ2aWNlIC0gU2V0IFVwIEFkZGl0aW9uYWwg
QmluYXJ5IEZvcm1hdHMuClsgICAxMS4zOTc4MjJdIHN5c3RlbWRbMV06IE1vdW50ZWQgc3lz
LWZzLWZ1c2UtY29ubmVjdGlvbnMubW91bnQgLSBGVVNFIENvbnRyb2wgRmlsZSBTeXN0ZW0u
ClsgICAxMS4zOTc5ODVdIHN5c3RlbWRbMV06IE1vdW50ZWQgc3lzLWtlcm5lbC1jb25maWcu
bW91bnQgLSBLZXJuZWwgQ29uZmlndXJhdGlvbiBGaWxlIFN5c3RlbS4KWyAgIDExLjQ0ODcx
MF0gc3lzdGVtZFsxXTogRmluaXNoZWQgc3lzdGVtZC10bXBmaWxlcy1zZXR1cC1kZXYuc2Vy
dmljZSAtIENyZWF0ZSBTdGF0aWMgRGV2aWNlIE5vZGVzIGluIC9kZXYuClsgICAxMS40NTcy
NTRdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHN5c3RlbWQtdWRldmQuc2VydmljZSAtIFJ1bGUt
YmFzZWQgTWFuYWdlciBmb3IgRGV2aWNlIEV2ZW50cyBhbmQgRmlsZXMuLi4KWyAgIDExLjUw
OTA4Ml0gc3lzdGVtZFsxXTogU3RhcnRlZCBzeXN0ZW1kLWpvdXJuYWxkLnNlcnZpY2UgLSBK
b3VybmFsIFNlcnZpY2UuClsgICAxMS41Njk4NjNdIHN5c3RlbWQtam91cm5hbGRbMTM0XTog
UmVjZWl2ZWQgY2xpZW50IHJlcXVlc3QgdG8gZmx1c2ggcnVudGltZSBqb3VybmFsLgpbICAg
MTIuMDUzMjIxXSBzZCA2OjA6MDowOiBBdHRhY2hlZCBzY3NpIGdlbmVyaWMgc2cwIHR5cGUg
MApbICAgMTIuMTQwOTk3XSByYW5kb206IGNybmcgaW5pdCBkb25lClsgICAxMi4zNjc5NTRd
IGFjcGlfY3B1ZnJlcTogb3ZlcnJpZGluZyBCSU9TIHByb3ZpZGVkIF9QU0QgZGF0YQpbICAg
MTIuNTEwNzQ3XSBRVUlSSzogRW5hYmxlIEFNRCBQTEwgZml4ClsgICAxMi41MTA4MDRdIGVo
Y2ktcGNpIDAwMDA6MDA6MTIuMjogRUhDSSBIb3N0IENvbnRyb2xsZXIKWyAgIDEyLjUxMDgz
M10gZWhjaS1wY2kgMDAwMDowMDoxMi4yOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3Np
Z25lZCBidXMgbnVtYmVyIDEKWyAgIDEyLjUxMDg0NF0gZWhjaS1wY2kgMDAwMDowMDoxMi4y
OiBhcHBseWluZyBBTUQgU0I3MDAvU0I4MDAvSHVkc29uLTIvMyBFSENJIGR1bW15IHFoIHdv
cmthcm91bmQKWyAgIDEyLjUxMDg1M10gZWhjaS1wY2kgMDAwMDowMDoxMi4yOiBkZWJ1ZyBw
b3J0IDEKWyAgIDEyLjUxMTAyM10gZWhjaS1wY2kgMDAwMDowMDoxMi4yOiBpcnEgMTcsIGlv
IG1lbSAweGYwMWNkMDAwClsgICAxMi41MTU3NjddIHBpaXg0X3NtYnVzIDAwMDA6MDA6MTQu
MDogU01CdXMgSG9zdCBDb250cm9sbGVyIGF0IDB4YjAwLCByZXZpc2lvbiAwClsgICAxMi41
MTU3NzNdIHBpaXg0X3NtYnVzIDAwMDA6MDA6MTQuMDogVXNpbmcgcmVnaXN0ZXIgMHgyZSBm
b3IgU01CdXMgcG9ydCBzZWxlY3Rpb24KWyAgIDEyLjUxNjI0MV0gcGlpeDRfc21idXMgMDAw
MDowMDoxNC4wOiBBdXhpbGlhcnkgU01CdXMgSG9zdCBDb250cm9sbGVyIGF0IDB4YjIwClsg
ICAxMi41MjUwMTRdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogVVNCIDIuMCBzdGFydGVkLCBF
SENJIDEuMDAKWyAgIDEyLjUyNTM5N10gdXNiIHVzYjE6IE5ldyBVU0IgZGV2aWNlIGZvdW5k
LCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMiwgYmNkRGV2aWNlPSA2LjAzClsgICAx
Mi41MjU0MDBdIHVzYiB1c2IxOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJv
ZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAgMTIuNTI1NDAyXSB1c2IgdXNiMTogUHJvZHVj
dDogRUhDSSBIb3N0IENvbnRyb2xsZXIKWyAgIDEyLjUyNTQwNF0gdXNiIHVzYjE6IE1hbnVm
YWN0dXJlcjogTGludXggNi4zLjAtcmM2LTAwMzExLWdkZTgyMjQ5NjlmNjYgZWhjaV9oY2QK
WyAgIDEyLjUyNTQwNV0gdXNiIHVzYjE6IFNlcmlhbE51bWJlcjogMDAwMDowMDoxMi4yClsg
ICAxMi41MjU4NjJdIGh1YiAxLTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAxMi41MjU4ODld
IGh1YiAxLTA6MS4wOiA1IHBvcnRzIGRldGVjdGVkClsgICAxMi41MjY1OTJdIGVoY2ktcGNp
IDAwMDA6MDA6MTMuMjogRUhDSSBIb3N0IENvbnRyb2xsZXIKWyAgIDEyLjUyNjYxMl0gZWhj
aS1wY2kgMDAwMDowMDoxMy4yOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBi
dXMgbnVtYmVyIDIKWyAgIDEyLjUyNjYyM10gZWhjaS1wY2kgMDAwMDowMDoxMy4yOiBhcHBs
eWluZyBBTUQgU0I3MDAvU0I4MDAvSHVkc29uLTIvMyBFSENJIGR1bW15IHFoIHdvcmthcm91
bmQKWyAgIDEyLjUyNjYzMl0gZWhjaS1wY2kgMDAwMDowMDoxMy4yOiBkZWJ1ZyBwb3J0IDEK
WyAgIDEyLjUyNjc2N10gZWhjaS1wY2kgMDAwMDowMDoxMy4yOiBpcnEgMTcsIGlvIG1lbSAw
eGYwMWNlMDAwClsgICAxMi41NDEwMTRdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogVVNCIDIu
MCBzdGFydGVkLCBFSENJIDEuMDAKWyAgIDEyLjU0MTI2M10gdXNiIHVzYjI6IE5ldyBVU0Ig
ZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMiwgYmNkRGV2aWNl
PSA2LjAzClsgICAxMi41NDEyNjZdIHVzYiB1c2IyOiBOZXcgVVNCIGRldmljZSBzdHJpbmdz
OiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAgMTIuNTQxMjY4XSB1c2Ig
dXNiMjogUHJvZHVjdDogRUhDSSBIb3N0IENvbnRyb2xsZXIKWyAgIDEyLjU0MTI3MF0gdXNi
IHVzYjI6IE1hbnVmYWN0dXJlcjogTGludXggNi4zLjAtcmM2LTAwMzExLWdkZTgyMjQ5Njlm
NjYgZWhjaV9oY2QKWyAgIDEyLjU0MTI3MV0gdXNiIHVzYjI6IFNlcmlhbE51bWJlcjogMDAw
MDowMDoxMy4yClsgICAxMi41NDE3MThdIGh1YiAyLTA6MS4wOiBVU0IgaHViIGZvdW5kClsg
ICAxMi41NDE3NDZdIGh1YiAyLTA6MS4wOiA1IHBvcnRzIGRldGVjdGVkClsgICAxMi41NDI0
MThdIGVoY2ktcGNpIDAwMDA6MDA6MTYuMjogRUhDSSBIb3N0IENvbnRyb2xsZXIKWyAgIDEy
LjU0MjQzNl0gZWhjaS1wY2kgMDAwMDowMDoxNi4yOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVk
LCBhc3NpZ25lZCBidXMgbnVtYmVyIDMKWyAgIDEyLjU0MjQ0N10gZWhjaS1wY2kgMDAwMDow
MDoxNi4yOiBhcHBseWluZyBBTUQgU0I3MDAvU0I4MDAvSHVkc29uLTIvMyBFSENJIGR1bW15
IHFoIHdvcmthcm91bmQKWyAgIDEyLjU0MjQ1Nl0gZWhjaS1wY2kgMDAwMDowMDoxNi4yOiBk
ZWJ1ZyBwb3J0IDEKWyAgIDEyLjU0MjU4NF0gZWhjaS1wY2kgMDAwMDowMDoxNi4yOiBpcnEg
MTcsIGlvIG1lbSAweGYwMWNmMDAwClsgICAxMi41NTcwMjNdIGVoY2ktcGNpIDAwMDA6MDA6
MTYuMjogVVNCIDIuMCBzdGFydGVkLCBFSENJIDEuMDAKWyAgIDEyLjU1NzQwNV0gdXNiIHVz
YjM6IE5ldyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAw
MiwgYmNkRGV2aWNlPSA2LjAzClsgICAxMi41NTc0MDhdIHVzYiB1c2IzOiBOZXcgVVNCIGRl
dmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAgMTIu
NTU3NDEwXSB1c2IgdXNiMzogUHJvZHVjdDogRUhDSSBIb3N0IENvbnRyb2xsZXIKWyAgIDEy
LjU1NzQxMV0gdXNiIHVzYjM6IE1hbnVmYWN0dXJlcjogTGludXggNi4zLjAtcmM2LTAwMzEx
LWdkZTgyMjQ5NjlmNjYgZWhjaV9oY2QKWyAgIDEyLjU1NzQxM10gdXNiIHVzYjM6IFNlcmlh
bE51bWJlcjogMDAwMDowMDoxNi4yClsgICAxMi41NTc4NThdIGh1YiAzLTA6MS4wOiBVU0Ig
aHViIGZvdW5kClsgICAxMi41NTc4ODRdIGh1YiAzLTA6MS4wOiA0IHBvcnRzIGRldGVjdGVk
ClsgICAxMi41NTg0NDddIG9oY2ktcGNpIDAwMDA6MDA6MTIuMDogT0hDSSBQQ0kgaG9zdCBj
b250cm9sbGVyClsgICAxMi41NTg0NjZdIG9oY2ktcGNpIDAwMDA6MDA6MTIuMDogbmV3IFVT
QiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciA0ClsgICAxMi41NTg2MDhd
IG9oY2ktcGNpIDAwMDA6MDA6MTIuMDogaXJxIDE4LCBpbyBtZW0gMHhmMDFjODAwMApbICAg
MTIuNTU4NjE4XSBvaGNpLXBjaSAwMDAwOjAwOjEzLjA6IE9IQ0kgUENJIGhvc3QgY29udHJv
bGxlcgpbICAgMTIuNTU4NjM1XSBvaGNpLXBjaSAwMDAwOjAwOjEzLjA6IG5ldyBVU0IgYnVz
IHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1iZXIgNQpbICAgMTIuNTU4NzAzXSBvaGNp
LXBjaSAwMDAwOjAwOjEzLjA6IGlycSAxOCwgaW8gbWVtIDB4ZjAxYzkwMDAKWyAgIDEyLjU1
ODcxMF0gb2hjaS1wY2kgMDAwMDowMDoxNC41OiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXIK
WyAgIDEyLjU1ODcyNV0gb2hjaS1wY2kgMDAwMDowMDoxNC41OiBuZXcgVVNCIGJ1cyByZWdp
c3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDYKWyAgIDEyLjU1ODc5M10gb2hjaS1wY2kg
MDAwMDowMDoxNC41OiBpcnEgMTgsIGlvIG1lbSAweGYwMWNhMDAwClsgICAxMi41NTg4MDBd
IG9oY2ktcGNpIDAwMDA6MDA6MTYuMDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyClsgICAx
Mi41NTg4MTRdIG9oY2ktcGNpIDAwMDA6MDA6MTYuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJl
ZCwgYXNzaWduZWQgYnVzIG51bWJlciA3ClsgICAxMi41NTg4ODhdIG9oY2ktcGNpIDAwMDA6
MDA6MTYuMDogaXJxIDE4LCBpbyBtZW0gMHhmMDFjYjAwMApbICAgMTIuNjM1NzA5XSB1c2Ig
dXNiNzogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0w
MDAxLCBiY2REZXZpY2U9IDYuMDMKWyAgIDEyLjYzNTcxNl0gdXNiIHVzYjc6IE5ldyBVU0Ig
ZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAx
Mi42MzU3MThdIHVzYiB1c2I3OiBQcm9kdWN0OiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXIK
WyAgIDEyLjYzNTcyMF0gdXNiIHVzYjc6IE1hbnVmYWN0dXJlcjogTGludXggNi4zLjAtcmM2
LTAwMzExLWdkZTgyMjQ5NjlmNjYgb2hjaV9oY2QKWyAgIDEyLjYzNTcyMV0gdXNiIHVzYjc6
IFNlcmlhbE51bWJlcjogMDAwMDowMDoxNi4wClsgICAxMi42MzYxNDVdIGh1YiA3LTA6MS4w
OiBVU0IgaHViIGZvdW5kClsgICAxMi42MzYxNzJdIGh1YiA3LTA6MS4wOiA0IHBvcnRzIGRl
dGVjdGVkClsgICAxMi42MzY4NDldIHVzYiB1c2I1OiBOZXcgVVNCIGRldmljZSBmb3VuZCwg
aWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDEsIGJjZERldmljZT0gNi4wMwpbICAgMTIu
NjM2ODUxXSB1c2IgdXNiNTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1
Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAgIDEyLjYzNjg1M10gdXNiIHVzYjU6IFByb2R1Y3Q6
IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcgpbICAgMTIuNjM2ODU0XSB1c2IgdXNiNTogTWFu
dWZhY3R1cmVyOiBMaW51eCA2LjMuMC1yYzYtMDAzMTEtZ2RlODIyNDk2OWY2NiBvaGNpX2hj
ZApbICAgMTIuNjM2ODU2XSB1c2IgdXNiNTogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjEzLjAK
WyAgIDEyLjYzOTU0OV0gaHViIDUtMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgIDEyLjYzOTU3
OV0gaHViIDUtMDoxLjA6IDUgcG9ydHMgZGV0ZWN0ZWQKWyAgIDEyLjY0MDQ2NV0gdXNiIHVz
YjQ6IE5ldyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAw
MSwgYmNkRGV2aWNlPSA2LjAzClsgICAxMi42NDA0NjhdIHVzYiB1c2I0OiBOZXcgVVNCIGRl
dmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAgMTIu
NjQwNDcwXSB1c2IgdXNiNDogUHJvZHVjdDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyClsg
ICAxMi42NDA0NzFdIHVzYiB1c2I0OiBNYW51ZmFjdHVyZXI6IExpbnV4IDYuMy4wLXJjNi0w
MDMxMS1nZGU4MjI0OTY5ZjY2IG9oY2lfaGNkClsgICAxMi42NDA0NzNdIHVzYiB1c2I0OiBT
ZXJpYWxOdW1iZXI6IDAwMDA6MDA6MTIuMApbICAgMTIuNjQwODQ4XSBodWIgNC0wOjEuMDog
VVNCIGh1YiBmb3VuZApbICAgMTIuNjQwODc0XSBodWIgNC0wOjEuMDogNSBwb3J0cyBkZXRl
Y3RlZApbICAgMTIuNjQ0MDc5XSB1c2IgdXNiNjogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlk
VmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxLCBiY2REZXZpY2U9IDYuMDMKWyAgIDEyLjY0
NDA4NF0gdXNiIHVzYjY6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0
PTIsIFNlcmlhbE51bWJlcj0xClsgICAxMi42NDQwODZdIHVzYiB1c2I2OiBQcm9kdWN0OiBP
SENJIFBDSSBob3N0IGNvbnRyb2xsZXIKWyAgIDEyLjY0NDA4OF0gdXNiIHVzYjY6IE1hbnVm
YWN0dXJlcjogTGludXggNi4zLjAtcmM2LTAwMzExLWdkZTgyMjQ5NjlmNjYgb2hjaV9oY2QK
WyAgIDEyLjY0NDA4OV0gdXNiIHVzYjY6IFNlcmlhbE51bWJlcjogMDAwMDowMDoxNC41Clsg
ICAxMi42NDQ0ODRdIGh1YiA2LTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAxMi42NDQ1MDld
IGh1YiA2LTA6MS4wOiAyIHBvcnRzIGRldGVjdGVkClsgICAxMi42NTg3MzhdIHI4MTY5IDAw
MDA6MDQ6MDAuMDogZW5hYmxpbmcgZGV2aWNlICgwMDAwIC0+IDAwMDMpClsgICAxMi42OTc1
NDBdIHhoY2lfaGNkIDAwMDA6MDM6MDAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXIKWyAgIDEy
LjY5NzU2Nl0geGhjaV9oY2QgMDAwMDowMzowMC4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVk
LCBhc3NpZ25lZCBidXMgbnVtYmVyIDgKWyAgIDEyLjcwOTg3OF0gcjgxNjkgMDAwMDowNDow
MC4wIGV0aDA6IFJUTDgxNjhmLzgxMTFmLCAwODo2MDo2ZTo3NDo3YTo1MSwgWElEIDQ4MCwg
SVJRIDI4ClsgICAxMi43MDk4ODZdIHI4MTY5IDAwMDA6MDQ6MDAuMCBldGgwOiBqdW1ibyBm
ZWF0dXJlcyBbZnJhbWVzOiA5MTk0IGJ5dGVzLCB0eCBjaGVja3N1bW1pbmc6IGtvXQpbICAg
MTIuNzM1NzU2XSAxClsgICAxMi43MzU4MDZdIDIKWyAgIDEyLjczNjI3Nl0gc25kX2hkYV9p
bnRlbCAwMDAwOjAwOjAxLjE6IEZvcmNlIHRvIG5vbi1zbm9vcCBtb2RlClsgICAxMi43MzYy
ODJdIDMKWyAgIDEyLjczNjI4M10gNApbICAgMTIuNzM2Mjg0XSA1ClsgICAxMi43MzYyODRd
IDcKWyAgIDEyLjczNjI4OF0gOApbICAgMTIuNzM2Mjg4XSA5ClsgICAxMi43NDA3OTddIDEK
WyAgIDEyLjc0MDg0Nl0gMgpbICAgMTIuNzQxMzc2XSAzClsgICAxMi43NDEzNzddIDQKWyAg
IDEyLjc0MTM3OF0gNQpbICAgMTIuNzQxMzc5XSA3ClsgICAxMi43NDEzODJdIDgKWyAgIDEy
Ljc0MTM4M10gOQpbICAgMTIuNzY5MjYwXSBpbnB1dDogSERBIEFUSSBIRE1JIEhETUkvRFAs
cGNtPTMgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjAxLjEvc291bmQvY2FyZDAv
aW5wdXQxClsgICAxMi43Njk1NTVdIGlucHV0OiBIREEgQVRJIEhETUkgSERNSS9EUCxwY209
NyBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MDEuMS9zb3VuZC9jYXJkMC9pbnB1
dDIKWyAgIDEyLjc4NDQzOV0gc25kX2hkYV9jb2RlY19yZWFsdGVrIGhkYXVkaW9DMUQwOiBB
TEM4OTI6IFNLVSBub3QgcmVhZHkgMHgwMDAwMDEwMApbICAgMTIuNzg1NjgyXSBzbmRfaGRh
X2NvZGVjX3JlYWx0ZWsgaGRhdWRpb0MxRDA6IGF1dG9jb25maWcgZm9yIEFMQzg5MjogbGlu
ZV9vdXRzPTQgKDB4MTQvMHgxNi8weDE1LzB4MTcvMHgwKSB0eXBlOmxpbmUKWyAgIDEyLjc4
NTY4OF0gc25kX2hkYV9jb2RlY19yZWFsdGVrIGhkYXVkaW9DMUQwOiAgICBzcGVha2VyX291
dHM9MCAoMHgwLzB4MC8weDAvMHgwLzB4MCkKWyAgIDEyLjc4NTY5MV0gc25kX2hkYV9jb2Rl
Y19yZWFsdGVrIGhkYXVkaW9DMUQwOiAgICBocF9vdXRzPTEgKDB4MWIvMHgwLzB4MC8weDAv
MHgwKQpbICAgMTIuNzg1NjkzXSBzbmRfaGRhX2NvZGVjX3JlYWx0ZWsgaGRhdWRpb0MxRDA6
ICAgIG1vbm86IG1vbm9fb3V0PTB4MApbICAgMTIuNzg1Njk0XSBzbmRfaGRhX2NvZGVjX3Jl
YWx0ZWsgaGRhdWRpb0MxRDA6ICAgIGRpZy1vdXQ9MHgxZS8weDAKWyAgIDEyLjc4NTY5Nl0g
c25kX2hkYV9jb2RlY19yZWFsdGVrIGhkYXVkaW9DMUQwOiAgICBpbnB1dHM6ClsgICAxMi43
ODU2OTddIHNuZF9oZGFfY29kZWNfcmVhbHRlayBoZGF1ZGlvQzFEMDogICAgICBSZWFyIE1p
Yz0weDE4ClsgICAxMi43ODU2OTldIHNuZF9oZGFfY29kZWNfcmVhbHRlayBoZGF1ZGlvQzFE
MDogICAgICBGcm9udCBNaWM9MHgxOQpbICAgMTIuNzg1NzAwXSBzbmRfaGRhX2NvZGVjX3Jl
YWx0ZWsgaGRhdWRpb0MxRDA6ICAgICAgTGluZT0weDFhClsgICAxMi43ODU3MDFdIHNuZF9o
ZGFfY29kZWNfcmVhbHRlayBoZGF1ZGlvQzFEMDogICAgICBDRD0weDFjClsgICAxMi44MDc1
MjhdIHhoY2lfaGNkIDAwMDA6MDM6MDAuMDogaGNjIHBhcmFtcyAweDAyMDBmMTgwIGhjaSB2
ZXJzaW9uIDB4OTYgcXVpcmtzIDB4MDAwMDAwMDAwMDA4MDAxMApbICAgMTIuODEzMTc4XSBp
bnB1dDogSEQtQXVkaW8gR2VuZXJpYyBSZWFyIE1pYyBhcyAvZGV2aWNlcy9wY2kwMDAwOjAw
LzAwMDA6MDA6MTQuMi9zb3VuZC9jYXJkMS9pbnB1dDMKWyAgIDEyLjgxMzUxNF0gaW5wdXQ6
IEhELUF1ZGlvIEdlbmVyaWMgRnJvbnQgTWljIGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAw
MDowMDoxNC4yL3NvdW5kL2NhcmQxL2lucHV0NApbICAgMTIuODEzNzgyXSBpbnB1dDogSEQt
QXVkaW8gR2VuZXJpYyBMaW5lIGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAwMDowMDoxNC4y
L3NvdW5kL2NhcmQxL2lucHV0NQpbICAgMTIuODE0MDQ4XSBpbnB1dDogSEQtQXVkaW8gR2Vu
ZXJpYyBMaW5lIE91dCBGcm9udCBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MTQu
Mi9zb3VuZC9jYXJkMS9pbnB1dDYKWyAgIDEyLjgxNDMxN10gaW5wdXQ6IEhELUF1ZGlvIEdl
bmVyaWMgTGluZSBPdXQgU3Vycm91bmQgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAw
OjE0LjIvc291bmQvY2FyZDEvaW5wdXQ3ClsgICAxMi44MTQ1NzVdIGlucHV0OiBIRC1BdWRp
byBHZW5lcmljIExpbmUgT3V0IENMRkUgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAw
OjE0LjIvc291bmQvY2FyZDEvaW5wdXQ4ClsgICAxMi44MTQ4MzNdIGlucHV0OiBIRC1BdWRp
byBHZW5lcmljIExpbmUgT3V0IFNpZGUgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAw
OjE0LjIvc291bmQvY2FyZDEvaW5wdXQ5ClsgICAxMi44MTUwODddIGlucHV0OiBIRC1BdWRp
byBHZW5lcmljIEZyb250IEhlYWRwaG9uZSBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6
MDA6MTQuMi9zb3VuZC9jYXJkMS9pbnB1dDEwClsgICAxMi44MTYyMzNdIHhoY2lfaGNkIDAw
MDA6MDM6MDAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXIKWyAgIDEyLjgxNjI0OV0geGhjaV9o
Y2QgMDAwMDowMzowMC4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMg
bnVtYmVyIDkKWyAgIDEyLjgxNjI2MF0geGhjaV9oY2QgMDAwMDowMzowMC4wOiBIb3N0IHN1
cHBvcnRzIFVTQiAzLjAgU3VwZXJTcGVlZApbICAgMTIuODMwOTg1XSB1c2IgdXNiODogTmV3
IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyLCBiY2RE
ZXZpY2U9IDYuMDMKWyAgIDEyLjgzMDk5M10gdXNiIHVzYjg6IE5ldyBVU0IgZGV2aWNlIHN0
cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAxMi44MzA5OTVd
IHVzYiB1c2I4OiBQcm9kdWN0OiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAgMTIuODMwOTk2
XSB1c2IgdXNiODogTWFudWZhY3R1cmVyOiBMaW51eCA2LjMuMC1yYzYtMDAzMTEtZ2RlODIy
NDk2OWY2NiB4aGNpLWhjZApbICAgMTIuODMwOTk4XSB1c2IgdXNiODogU2VyaWFsTnVtYmVy
OiAwMDAwOjAzOjAwLjAKWyAgIDEyLjgzNTU0M10gaHViIDgtMDoxLjA6IFVTQiBodWIgZm91
bmQKWyAgIDEyLjg0MjI5Nl0gaHViIDgtMDoxLjA6IDIgcG9ydHMgZGV0ZWN0ZWQKWyAgIDEy
Ljg1MDg0Ml0gdXNiIHVzYjk6IFdlIGRvbid0IGtub3cgdGhlIGFsZ29yaXRobXMgZm9yIExQ
TSBmb3IgdGhpcyBob3N0LCBkaXNhYmxpbmcgTFBNLgpbICAgMTIuODUwOTgzXSB1c2IgdXNi
OTogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAz
LCBiY2REZXZpY2U9IDYuMDMKWyAgIDEyLjg1MDk4Nl0gdXNiIHVzYjk6IE5ldyBVU0IgZGV2
aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAxMi44
NTA5ODhdIHVzYiB1c2I5OiBQcm9kdWN0OiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAgMTIu
ODUwOTg5XSB1c2IgdXNiOTogTWFudWZhY3R1cmVyOiBMaW51eCA2LjMuMC1yYzYtMDAzMTEt
Z2RlODIyNDk2OWY2NiB4aGNpLWhjZApbICAgMTIuODUwOTkxXSB1c2IgdXNiOTogU2VyaWFs
TnVtYmVyOiAwMDAwOjAzOjAwLjAKWyAgIDEyLjg1NjI3OF0gcjgxNjkgMDAwMDowNDowMC4w
IGVucDRzMDogcmVuYW1lZCBmcm9tIGV0aDAKWyAgIDEyLjg2Mjc3MV0gaHViIDktMDoxLjA6
IFVTQiBodWIgZm91bmQKWyAgIDEyLjg3MzYwMV0gaHViIDktMDoxLjA6IDIgcG9ydHMgZGV0
ZWN0ZWQKWyAgIDEzLjAyNTAyNl0gdXNiIDQtMTogbmV3IGxvdy1zcGVlZCBVU0IgZGV2aWNl
IG51bWJlciAyIHVzaW5nIG9oY2ktcGNpClsgICAxMy4xMDMzNzddIHI4MTY5IDAwMDA6MDQ6
MDAuMDogRGlyZWN0IGZpcm13YXJlIGxvYWQgZm9yIHJ0bF9uaWMvcnRsODE2OGYtMS5mdyBm
YWlsZWQgd2l0aCBlcnJvciAtMgpbICAgMTMuMTAzMzg5XSByODE2OSAwMDAwOjA0OjAwLjA6
IFVuYWJsZSB0byBsb2FkIGZpcm13YXJlIHJ0bF9uaWMvcnRsODE2OGYtMS5mdyAoLTIpClsg
ICAxMy4xMDM5MDddIFJUTDgyMTFFIEdpZ2FiaXQgRXRoZXJuZXQgcjgxNjktMC00MDA6MDA6
IGF0dGFjaGVkIFBIWSBkcml2ZXIgKG1paV9idXM6cGh5X2FkZHI9cjgxNjktMC00MDA6MDAs
IGlycT1NQUMpClsgICAxMy4xNzU1MDVdIHI4MTY5IDAwMDA6MDQ6MDAuMCBlbnA0czA6IExp
bmsgaXMgRG93bgpbICAgMTMuMTc5ODcwXSBbZHJtXSByYWRlb24ga2VybmVsIG1vZGVzZXR0
aW5nIGVuYWJsZWQuClsgICAxMy4xODE1ODZdIFtkcm1dIGluaXRpYWxpemluZyBrZXJuZWwg
bW9kZXNldHRpbmcgKEFSVUJBIDB4MTAwMjoweDk5OTYgMHgxMDAyOjB4OTk5NiAweDAwKS4K
WyAgIDEzLjE4MTY1Ml0gQVRPTSBCSU9TOiAxMTMKWyAgIDEzLjE4MTc1N10gcmFkZW9uIDAw
MDA6MDA6MDEuMDogVlJBTTogNTEyTSAweDAwMDAwMDAwMDAwMDAwMDAgLSAweDAwMDAwMDAw
MUZGRkZGRkYgKDUxMk0gdXNlZCkKWyAgIDEzLjE4MTc2MV0gcmFkZW9uIDAwMDA6MDA6MDEu
MDogR1RUOiAxMDI0TSAweDAwMDAwMDAwMjAwMDAwMDAgLSAweDAwMDAwMDAwNUZGRkZGRkYK
WyAgIDEzLjE4MTc2OV0gW2RybV0gRGV0ZWN0ZWQgVlJBTSBSQU09NTEyTSwgQkFSPTI1Nk0K
WyAgIDEzLjE4MTc3MF0gW2RybV0gUkFNIHdpZHRoIDY0Yml0cyBERFIKWyAgIDEzLjE4MTk0
N10gW2RybV0gcmFkZW9uOiA1MTJNIG9mIFZSQU0gbWVtb3J5IHJlYWR5ClsgICAxMy4xODE5
NTJdIFtkcm1dIHJhZGVvbjogMTAyNE0gb2YgR1RUIG1lbW9yeSByZWFkeS4KWyAgIDEzLjE4
MTk5NF0gW2RybV0gTG9hZGluZyBBUlVCQSBNaWNyb2NvZGUKWyAgIDEzLjE5MDM1MF0gW2Ry
bV0gSW50ZXJuYWwgdGhlcm1hbCBjb250cm9sbGVyIHdpdGhvdXQgZmFuIGNvbnRyb2wKWyAg
IDEzLjE5MTEwMF0gW2RybV0gcmFkZW9uOiBkcG0gaW5pdGlhbGl6ZWQKWyAgIDEzLjE5NjAx
NV0gW2RybV0gRm91bmQgVkNFIGZpcm13YXJlL2ZlZWRiYWNrIHZlcnNpb24gNTAuMC4xIC8g
MTchClsgICAxMy4xOTYwNzNdIFtkcm1dIEdBUlQ6IG51bSBjcHUgcGFnZXMgMjYyMTQ0LCBu
dW0gZ3B1IHBhZ2VzIDI2MjE0NApbICAgMTMuMjM0MzM3XSBbZHJtXSBHQVJUOiBSZXN0b3Jl
IGVudHJpZXM6IG51bSBjcHUgcGFnZXMgMjYyMTQ0LCBudW0gZ3B1IHBhZ2VzIDI2MjE0NApb
ICAgMTMuMjM3OTM0XSBbZHJtXSBHQVJUOiBEb25lIHJlc3RvcmluZyBlbnRyaWVzClsgICAx
My4yMzc5MzhdIFtkcm1dIFBDSUUgR0FSVCBvZiAxMDI0TSBlbmFibGVkICh0YWJsZSBhdCAw
eDAwMDAwMDAwMDAxRDYwMDApLgpbICAgMTMuMjM4MTc3XSByYWRlb24gMDAwMDowMDowMS4w
OiBXQiBlbmFibGVkClsgICAxMy4yMzgxODBdIHJhZGVvbiAwMDAwOjAwOjAxLjA6IGZlbmNl
IGRyaXZlciBvbiByaW5nIDAgdXNlIGdwdSBhZGRyIDB4MDAwMDAwMDAyMDAwMGMwMApbICAg
MTMuMjM4NTU4XSByYWRlb24gMDAwMDowMDowMS4wOiBmZW5jZSBkcml2ZXIgb24gcmluZyA1
IHVzZSBncHUgYWRkciAweDAwMDAwMDAwMDAwNzVhMTgKWyAgIDEzLjI0NjA0NV0gdXNiIDQt
MTogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTQxM2MsIGlkUHJvZHVjdD0yMTA2
LCBiY2REZXZpY2U9IDEuMDEKWyAgIDEzLjI0NjA1MF0gdXNiIDQtMTogTmV3IFVTQiBkZXZp
Y2Ugc3RyaW5nczogTWZyPTEsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTAKWyAgIDEzLjI0
NjA1Ml0gdXNiIDQtMTogUHJvZHVjdDogRGVsbCBRdWlldEtleSBLZXlib2FyZApbICAgMTMu
MjQ2MDU0XSB1c2IgNC0xOiBNYW51ZmFjdHVyZXI6IERFTEwKWyAgIDEzLjI1MzgxOV0gaW5w
dXQ6IERFTEwgRGVsbCBRdWlldEtleSBLZXlib2FyZCBhcyAvZGV2aWNlcy9wY2kwMDAwOjAw
LzAwMDA6MDA6MTIuMC91c2I0LzQtMS80LTE6MS4wLzAwMDM6NDEzQzoyMTA2LjAwMDEvaW5w
dXQvaW5wdXQxMQpbICAgMTMuMjYwOTM2XSByYWRlb24gMDAwMDowMDowMS4wOiBmZW5jZSBk
cml2ZXIgb24gcmluZyA2IHVzZSBncHUgYWRkciAweDAwMDAwMDAwMjAwMDBjMTgKWyAgIDEz
LjI2MDk0MF0gcmFkZW9uIDAwMDA6MDA6MDEuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgNyB1
c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDIwMDAwYzFjClsgICAxMy4yNjA5NDJdIHJhZGVvbiAw
MDAwOjAwOjAxLjA6IGZlbmNlIGRyaXZlciBvbiByaW5nIDEgdXNlIGdwdSBhZGRyIDB4MDAw
MDAwMDAyMDAwMGMwNApbICAgMTMuMjYwOTQ0XSByYWRlb24gMDAwMDowMDowMS4wOiBmZW5j
ZSBkcml2ZXIgb24gcmluZyAyIHVzZSBncHUgYWRkciAweDAwMDAwMDAwMjAwMDBjMDgKWyAg
IDEzLjI2MDk0NV0gcmFkZW9uIDAwMDA6MDA6MDEuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcg
MyB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDIwMDAwYzBjClsgICAxMy4yNjA5NDddIHJhZGVv
biAwMDAwOjAwOjAxLjA6IGZlbmNlIGRyaXZlciBvbiByaW5nIDQgdXNlIGdwdSBhZGRyIDB4
MDAwMDAwMDAyMDAwMGMxMApbICAgMTMuMjYyOTY0XSByYWRlb24gMDAwMDowMDowMS4wOiBy
YWRlb246IE1TSSBsaW1pdGVkIHRvIDMyLWJpdApbICAgMTMuMjYzMTUxXSByYWRlb24gMDAw
MDowMDowMS4wOiByYWRlb246IHVzaW5nIE1TSS4KWyAgIDEzLjI2MzIyMl0gW2RybV0gcmFk
ZW9uOiBpcnEgaW5pdGlhbGl6ZWQuClsgICAxMy4yODE2NDhdIFtkcm1dIHJpbmcgdGVzdCBv
biAwIHN1Y2NlZWRlZCBpbiAzIHVzZWNzClsgICAxMy4yODE2NThdIFtkcm1dIHJpbmcgdGVz
dCBvbiAzIHN1Y2NlZWRlZCBpbiA0IHVzZWNzClsgICAxMy4yODE2NjVdIFtkcm1dIHJpbmcg
dGVzdCBvbiA0IHN1Y2NlZWRlZCBpbiA0IHVzZWNzClsgICAxMy4yOTU2NTldIFtkcm1dIHJp
bmcgdGVzdCBvbiA1IHN1Y2NlZWRlZCBpbiAyIHVzZWNzClsgICAxMy4yOTc2NTZdIFtkcm1d
IFVWRCBpbml0aWFsaXplZCBzdWNjZXNzZnVsbHkuClsgICAxMy4zMTM2NjNdIGhpZC1nZW5l
cmljIDAwMDM6NDEzQzoyMTA2LjAwMDE6IGlucHV0LGhpZHJhdzA6IFVTQiBISUQgdjEuMTAg
S2V5Ym9hcmQgW0RFTEwgRGVsbCBRdWlldEtleSBLZXlib2FyZF0gb24gdXNiLTAwMDA6MDA6
MTIuMC0xL2lucHV0MApbICAgMTMuNDQ2OTUzXSBbZHJtXSByaW5nIHRlc3Qgb24gNiBzdWNj
ZWVkZWQgaW4gMTggdXNlY3MKWyAgIDEzLjQ0Njk2N10gW2RybV0gcmluZyB0ZXN0IG9uIDcg
c3VjY2VlZGVkIGluIDMgdXNlY3MKWyAgIDEzLjQ0Njk2OF0gW2RybV0gVkNFIGluaXRpYWxp
emVkIHN1Y2Nlc3NmdWxseS4KWyAgIDEzLjQ0NzEyMl0gc25kX2hkYV9pbnRlbCAwMDAwOjAw
OjAxLjE6IGJvdW5kIDAwMDA6MDA6MDEuMCAob3BzIHJhZGVvbl9hdWRpb19jb21wb25lbnRf
YmluZF9vcHMgW3JhZGVvbl0pClsgICAxMy40NDcyOTNdIFtkcm1dIGliIHRlc3Qgb24gcmlu
ZyAwIHN1Y2NlZWRlZCBpbiAwIHVzZWNzClsgICAxMy40NDczNDZdIFtkcm1dIGliIHRlc3Qg
b24gcmluZyAzIHN1Y2NlZWRlZCBpbiAwIHVzZWNzClsgICAxMy40NDczOTZdIFtkcm1dIGli
IHRlc3Qgb24gcmluZyA0IHN1Y2NlZWRlZCBpbiAwIHVzZWNzClsgICAxMy40NjUwOTldIFtk
cm1dIGliIHRlc3Qgb24gcmluZyA1IHN1Y2NlZWRlZApbICAgMTMuNDgxMTMyXSBbZHJtXSBp
YiB0ZXN0IG9uIHJpbmcgNiBzdWNjZWVkZWQgaW4gMSB1c2VjcwpbICAgMTMuNDk3MDg1XSBb
ZHJtXSBpYiB0ZXN0IG9uIHJpbmcgNyBzdWNjZWVkZWQgaW4gMSB1c2VjcwpbICAgMTMuNTAw
MDU2XSBbZHJtXSBSYWRlb24gRGlzcGxheSBDb25uZWN0b3JzClsgICAxMy41MDAwNjBdIFtk
cm1dIENvbm5lY3RvciAwOgpbICAgMTMuNTAwMDYxXSBbZHJtXSAgIERQLTEKWyAgIDEzLjUw
MDA2Ml0gW2RybV0gICBIUEQxClsgICAxMy41MDAwNjJdIFtkcm1dICAgRERDOiAweDY1MzAg
MHg2NTMwIDB4NjUzNCAweDY1MzQgMHg2NTM4IDB4NjUzOCAweDY1M2MgMHg2NTNjClsgICAx
My41MDAwNjVdIFtkcm1dICAgRW5jb2RlcnM6ClsgICAxMy41MDAwNjVdIFtkcm1dICAgICBE
RlAxOiBJTlRFUk5BTF9VTklQSFkyClsgICAxMy41MDAwNjZdIFtkcm1dIENvbm5lY3RvciAx
OgpbICAgMTMuNTAwMDY3XSBbZHJtXSAgIFZHQS0xClsgICAxMy41MDAwNjhdIFtkcm1dICAg
SFBEMgpbICAgMTMuNTAwMDY5XSBbZHJtXSAgIEREQzogMHg2NTQwIDB4NjU0MCAweDY1NDQg
MHg2NTQ0IDB4NjU0OCAweDY1NDggMHg2NTRjIDB4NjU0YwpbICAgMTMuNTAwMDcwXSBbZHJt
XSAgIEVuY29kZXJzOgpbICAgMTMuNTAwMDcxXSBbZHJtXSAgICAgQ1JUMTogSU5URVJOQUxf
VU5JUEhZMgpbICAgMTMuNTAwMDcyXSBbZHJtXSAgICAgQ1JUMTogTlVUTUVHClsgICAxMy41
MDAwNzJdIFtkcm1dIENvbm5lY3RvciAyOgpbICAgMTMuNTAwMDczXSBbZHJtXSAgIEhETUkt
QS0xClsgICAxMy41MDAwNzRdIFtkcm1dICAgSFBEMwpbICAgMTMuNTAwMDc1XSBbZHJtXSAg
IEREQzogMHg2NTUwIDB4NjU1MCAweDY1NTQgMHg2NTU0IDB4NjU1OCAweDY1NTggMHg2NTVj
IDB4NjU1YwpbICAgMTMuNTAwMDc2XSBbZHJtXSAgIEVuY29kZXJzOgpbICAgMTMuNTAwMDc3
XSBbZHJtXSAgICAgREZQMjogSU5URVJOQUxfVU5JUEhZClsgICAxMy43NDEwODFdIHVzYiA0
LTI6IG5ldyBsb3ctc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMyB1c2luZyBvaGNpLXBjaQpb
ICAgMTMuNzc5ODE4XSBbZHJtXSBmYiBtYXBwYWJsZSBhdCAweEUwM0U5MDAwClsgICAxMy43
Nzk4MjZdIFtkcm1dIHZyYW0gYXBwZXIgYXQgMHhFMDAwMDAwMApbICAgMTMuNzc5ODI4XSBb
ZHJtXSBzaXplIDUyNDI4ODAKWyAgIDEzLjc3OTgzMF0gW2RybV0gZmIgZGVwdGggaXMgMjQK
WyAgIDEzLjc3OTgzMl0gW2RybV0gICAgcGl0Y2ggaXMgNTEyMApbICAgMTMuNzgwMzk4XSBm
YmNvbjogcmFkZW9uZHJtZmIgKGZiMCkgaXMgcHJpbWFyeSBkZXZpY2UKWyAgIDEzLjkzNjE1
OF0gdXNiIDQtMjogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTA0NmQsIGlkUHJv
ZHVjdD1jMDE2LCBiY2REZXZpY2U9IDMuNDAKWyAgIDEzLjkzNjE2N10gdXNiIDQtMjogTmV3
IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTEsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTAK
WyAgIDEzLjkzNjE3MF0gdXNiIDQtMjogUHJvZHVjdDogT3B0aWNhbCBVU0IgTW91c2UKWyAg
IDEzLjkzNjE3M10gdXNiIDQtMjogTWFudWZhY3R1cmVyOiBMb2dpdGVjaApbICAgMTMuOTQ1
MzQ1XSBpbnB1dDogTG9naXRlY2ggT3B0aWNhbCBVU0IgTW91c2UgYXMgL2RldmljZXMvcGNp
MDAwMDowMC8wMDAwOjAwOjEyLjAvdXNiNC80LTIvNC0yOjEuMC8wMDAzOjA0NkQ6QzAxNi4w
MDAyL2lucHV0L2lucHV0MTIKWyAgIDEzLjk0NjE3OF0gaGlkLWdlbmVyaWMgMDAwMzowNDZE
OkMwMTYuMDAwMjogaW5wdXQsaGlkcmF3MTogVVNCIEhJRCB2MS4xMCBNb3VzZSBbTG9naXRl
Y2ggT3B0aWNhbCBVU0IgTW91c2VdIG9uIHVzYi0wMDAwOjAwOjEyLjAtMi9pbnB1dDAKWyAg
IDEzLjk2ODE5MV0gQ29uc29sZTogc3dpdGNoaW5nIHRvIGNvbG91ciBmcmFtZSBidWZmZXIg
ZGV2aWNlIDE2MHg2NApbICAgMTMuOTczNDYwXSByYWRlb24gMDAwMDowMDowMS4wOiBbZHJt
XSBmYjA6IHJhZGVvbmRybWZiIGZyYW1lIGJ1ZmZlciBkZXZpY2UKWyAgIDEzLjk4MTU3NF0g
W2RybV0gSW5pdGlhbGl6ZWQgcmFkZW9uIDIuNTAuMCAyMDA4MDUyOCBmb3IgMDAwMDowMDow
MS4wIG9uIG1pbm9yIDAKWyAgIDE1Ljg3NjYxM10gcjgxNjkgMDAwMDowNDowMC4wIGVucDRz
MDogTGluayBpcyBVcCAtIDFHYnBzL0Z1bGwgLSBmbG93IGNvbnRyb2wgcngvdHgKWyAgIDE1
Ljg3NjYyOF0gSVB2NjogQUREUkNPTkYoTkVUREVWX0NIQU5HRSk6IGVucDRzMDogbGluayBi
ZWNvbWVzIHJlYWR5ClsgICAxNi43MTMyNjZdIFtkcm1dIGFtZGdwdSBrZXJuZWwgbW9kZXNl
dHRpbmcgZW5hYmxlZC4KWyAgIDE3LjExMDAwOF0gbWVtZmRfY3JlYXRlKCkgd2l0aG91dCBN
RkRfRVhFQyBub3IgTUZEX05PRVhFQ19TRUFMLCBwaWQ9MjQ5ICdzeXN0ZW1kJwo=

--------------sZY8260RBWtSJLYF8RPb2fpa--


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 11:24:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 11:24:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522001.811085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poMyO-0000aJ-0h; Mon, 17 Apr 2023 11:24:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522001.811085; Mon, 17 Apr 2023 11:24:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poMyN-0000aC-TI; Mon, 17 Apr 2023 11:24:39 +0000
Received: by outflank-mailman (input) for mailman id 522001;
 Mon, 17 Apr 2023 11:24:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=owEo=AI=molgen.mpg.de=pmenzel@srs-se1.protection.inumbo.net>)
 id 1poMyM-0000a6-CA
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 11:24:38 +0000
Received: from mx3.molgen.mpg.de (mx3.molgen.mpg.de [141.14.17.11])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6fb18bd7-dd12-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 13:24:37 +0200 (CEST)
Received: from [141.14.220.45] (g45.guest.molgen.mpg.de [141.14.220.45])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested) (Authenticated sender: pmenzel)
 by mx.molgen.mpg.de (Postfix) with ESMTPSA id C7E5061CC40F9;
 Mon, 17 Apr 2023 13:24:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fb18bd7-dd12-11ed-b21e-6b7b168915f2
Message-ID: <a736cae5-8c05-ddda-a1b0-37c8afdbd6ea@molgen.mpg.de>
Date: Mon, 17 Apr 2023 13:24:36 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Content-Language: en-US
From: Paul Menzel <pmenzel@molgen.mpg.de>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E. J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de>
In-Reply-To: <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

[Correct David’s address]

Am 17.04.23 um 13:19 schrieb Paul Menzel:
> Dear Thomas,
> 
> 
> Am 15.04.23 um 01:44 schrieb Thomas Gleixner:
> 
>> This is a complete rework of the parallel bringup patch series (V17)
>>
>>      
>> https://lore.kernel.org/lkml/20230328195758.1049469-1-usama.arif@bytedance.com
>>
>> to address the issues which were discovered in review:
> 
> […]
> 
> Thank you very much for your rework.
> 
> I tested this on the ASUS F2A85-M PRO, and got a delay of ten seconds.
> 
> ```
> […]
> [    0.258193] smpboot: CPU0: AMD A6-6400K APU with Radeon(tm) HD 
> Graphics (family: 0x15, model: 0x13, stepping: 0x1)
> […]
> [    0.259329] smp: Bringing up secondary CPUs ...
> [    0.259527] x86: Booting SMP configuration:
> [    0.259528] .... node  #0, CPUs:      #1
> [    0.261007] After schedule_preempt_disabled
> [   10.260990] CPU1 failed to report alive state
> [   10.261070] smp: Brought up 1 node, 1 CPU
> [   10.261073] smpboot: Max logical packages: 2
> [   10.261074] smpboot: Total of 1 processors activated (7800.54 BogoMIPS)
> [   10.261601] devtmpfs: initialized
> [   10.261697] x86/mm: Memory block size: 128MB
> ```
> 
> This delay has been there with v6.3-rc6-46-gde4664485abbc and some 
> custom (printk) patches on top, merging dwmw2/parallel-6.2-rc3-v16 
> into it. That is the only setup I tested. I think dwmw2/parallel-6.2-v17 
> failed to build for me when I tried to merge it into Linus’ master at 
> the time. I didn’t get around to reporting it, and then you posted your 
> rework, so I am replying here.
> 
> I am going to try your branch directly in the next few days, but wanted 
> to report back already.
> 
> 
> Kind regards,
> 
> Paul


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 11:26:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 11:26:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522005.811094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poN02-0001AV-AL; Mon, 17 Apr 2023 11:26:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522005.811094; Mon, 17 Apr 2023 11:26:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poN02-0001AO-7c; Mon, 17 Apr 2023 11:26:22 +0000
Received: by outflank-mailman (input) for mailman id 522005;
 Mon, 17 Apr 2023 11:26:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poN00-0001AE-LK
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 11:26:20 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2062d.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ac367e3c-dd12-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 13:26:18 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9264.eurprd04.prod.outlook.com (2603:10a6:20b:4c4::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 11:26:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 11:26:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac367e3c-dd12-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CikCacH62Gazd2DTsqVfGIT1TJc7y5w4nq3qaF4ztU6HNGUtt7rkVV4+u8kKuGIjyq57b53sPt3NLfEtR7cbvbvzxqEdbkJOZY+74YN+Ly2GYzruuhJPdIPZb/HByiIjum7hpzCsUR3ggPVQHa+LjG5MoZawulydm4iyCoQSLLH+7lWALZbS5ZFymflAHYTGib0WArORUvykMACO8HA6sVtqzmezq8MPGaEptH8ze+aJAvUpvPhjkHpHNcoaxu5KOBXIJ2+Nd8m64INhsakRfvB5dtblF1VhrDfVRxGjTUZtvOXuThdZZbBbXY0bGU7J9QvBB+hMc+/PaOmCZR/FnQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pSp8JAdbD6alqZx8DC6iHdRefLG+btm9pOyUQkSTXUI=;
 b=hpPZP+DWVsJGXjLfaZSFWxqn6KhnpboytYM7L+C7iALzszRZ8o0qzmfg2KTKWhBLRiUQgIOm8SlvMJWbSrON3121AfhSuZRtUgqhfEfgV6hsXJt3tynhsCMHOAP6dIsgCOCeIvhF8V3q8j4euzXel500zxRPhE0fX7LEo0pmQc66P7c7tCajXrjXK4GpUdibx86+NDLoQLEou5nlbLUDiTf2Od7tWVxOxIXqJC/JTQ1nm6tqLprjl9GA1Xdx4JIsZXjb5wb29u6NoWnFWZDGHk7JM+o5yCEYLHFVTNqzO65o2SN3x6Fn4ZHVoUTyH0Y/RG1PWI9nTIZsSrfPLdAlRw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pSp8JAdbD6alqZx8DC6iHdRefLG+btm9pOyUQkSTXUI=;
 b=b/8v+TZlVuWptQNz1ZNOB8wHs4l6DnJlk2FNWh8kg12u3GoY9ad+wMmHWx94arAoYez9OMD+4yueSdqy5GvwQWyYdSGxnFVrIzp3TWGxKKzj6UC1p5cIfMUwoEFT7CnfGx2/M6H+OlQsa0m54ZBQk59i+jD8LuoP9+0e0nZCJ6MJZIKmL2w1NLzy4E8ZoJERn8JapI3WhWE4HwSQjrlistsKhLuLnqLsRQ0D/wB7lnFHWW7rro48z5KlpsM8EwfWm96sfjEW79E5vvhuCkDG0fFIKw5nd1CiffZGzuEeCPZB7cn9z/lFAWVmtNGV8rtCeaU8J8z2WwNwtNGaDl8xNQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0b164d8d-e09e-b1b7-9d5d-7c7fe2fd0702@suse.com>
Date: Mon, 17 Apr 2023 13:26:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH 5/9] x86emul: re-use new stub_exn field in state structure
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <a9c212a8-8c63-91e0-eb07-8c927b62c1ca@suse.com>
 <197c2d8a-0a41-da10-771a-87843c9a007d@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <197c2d8a-0a41-da10-771a-87843c9a007d@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0125.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9264:EE_
X-MS-Office365-Filtering-Correlation-Id: d0b6b284-ebc9-4de5-377f-08db3f368f05
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2ZYVAjExWfBSkgJguTVECm46PJQYR1Z56E8R7pffuyXfb7st0N8urz96wr0Vs0DJAUveD5sPShn/mCe9QiZqaV/OTMOLAew2cc3FMqRYCZ5T8H5U3FxOWu85U5xwk1RE9wVHykv9QjD0EwUMPF5JeQBTaBMonR+iIP82egxNSgyGtyQwidI+wXtmnlC+6GqnH/3QpBa0KvTjn/nypjPJK9ocMhkjk1Ns81CQfMBXHf2qf5Vpz5ywHEua6r6777W2lVzuDKpDYWJzUBKHT5jdfVvHSRizVuxLGICF4Hp1HpzVNH/nQjud6t5bL5sdQqJHP9XR3U4hU/dwvA6CtVnw3qPD1/Z8dqlwUaGcuCavui6n11JNUNT7SMY0aaUlDPRoi4lD/NmyJ+76NIomF0Fl8gsK21NMVwuE4EU9iCkbSTusEvB0BCHMh55cs5KCZhsUgSZ0db66B0/bxBfYaGA/lrNMcU4vq5n3UQ4/avJ9YrPfXBYg+YyngV2Xa+uxcXEKPkTuFMWYgaSo+slJZCAumJAsSJcSEhdmgUHOiJl/Pw3E6AIla1wfxTfvARrtMhCnmeUH13+xzAKnJ8b1B4CQgbPDKPHOkEUslN+1iDuYROzEDaLXHzj4TLsktj4VP1/vbtGeS4qVsGsKh8ZNbNbSTw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(136003)(366004)(396003)(39860400002)(346002)(451199021)(54906003)(2616005)(53546011)(6512007)(26005)(6506007)(31686004)(478600001)(66946007)(66476007)(66556008)(83380400001)(316002)(6486002)(186003)(6916009)(4326008)(8676002)(8936002)(41300700001)(5660300002)(2906002)(36756003)(31696002)(38100700002)(86362001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bnNraVNTd0FTVDJ4cURnREg5NmFpTDFtUnlteFM0UnZoKytCYjNyZHk3MFZL?=
 =?utf-8?B?RW4vWjF6cCtNT1Z1NGxEQzUzaE9ud0dLY1g1Z2pualJPQlFyaGpEMjd4OG1S?=
 =?utf-8?B?dTIvSUREOUtTZUpla2wxZkJLOU1vT04xOVpQZ3F3YmVJR2k4TUhuUjFsNWVa?=
 =?utf-8?B?UUkzckI1bnRnUWlVQmRMU2VjdmZqSGM0SElvL09DQkdCdWRDcTdXT3RaTEty?=
 =?utf-8?B?SU02ZTNRL2g1KzhVNFdOSWNqSURTd2lMT2NpNWUyT092WUlTWHJ5R3N5d2dz?=
 =?utf-8?B?SndWcnJiTDFGSkJSZnhUWnBDVFBBRVFCQ0sxZ0MzZ2tuUzFSNStBRjJ4QzQ3?=
 =?utf-8?B?OFJSTDlEbUNEbG1KaHF2aHNZM3ZxOUloZXNoNzlnRkxaN2JqSkhKNHRleGRv?=
 =?utf-8?B?VFJPakg1Nm5uNzdkdHdKMlNlOC94SjdMSkJXOG9XUis2clFuZmhXdnI1T0hT?=
 =?utf-8?B?Y2hmZERDbzdlSjhXVjdDVENtSVRuMlVkT1FVV3JTVDNXc1JXV2JrVElySG14?=
 =?utf-8?B?Y1NtQ2hHd3l2VVoyWGdTNzFkOGd2R2lyRnJrVUdUWFlGWmYwQ3pYVStGeFh6?=
 =?utf-8?B?VTl4eFJmTW1sTTc0cmpreGxEWnVKQkhIS0JDSitMbmllNUZ2MEZTUHYrZ0FT?=
 =?utf-8?B?SkN5aTIwNzUzaUptUy9Rc01vQXY1RjlxNDdQcWc3UE5aa2NWREYyQ2pEMWtr?=
 =?utf-8?B?b0dxM3V0akFaWkZtS3NmRjlNN0NqMkthN01EWHRGYnQ0ek9LNHN6ZTJXMGpk?=
 =?utf-8?B?STRDYnlkcWN6MEFPMzBscytRK0c3dVNsNEtXRURCbDVPWU9hdTRZS1VWY0tL?=
 =?utf-8?B?VEZ4ZzloWnpFR2ZSMnBBUTg1RFVnbVFiNjlRK0JEYkkwekFMcE1hSkhvcWV3?=
 =?utf-8?B?bzJpQ01aY1gwaVZHVFFtelY4ZC9MWUp0R2pQbmZPQ1lqNyt2SUpVMWE2ai8y?=
 =?utf-8?B?R0RRM1I5Vit6RWZVMURyaE8zR2xTYkZ4Uk9wSWxpOG1VbFJsWTJ4VjdYSGo2?=
 =?utf-8?B?Z0hVajY4R05SYXRWelpKNmNtb3lybSt2MXRzakJMMmRFOWVwTXluZGVjNTNl?=
 =?utf-8?B?akdRdFd5Yk5qWHhtUDh6eEtIdmZYTm9odHpnWGhKMkc3VXR5NXRmWHJwRExl?=
 =?utf-8?B?WGZOWEZ1bExYVlhmR012M1N2aUgyN0NOb3dmRFY5R3FoV3hidlRkbjdWQXlp?=
 =?utf-8?B?Uk4rWk5WU0ZMMXJETzFweFlGeldHNEVjR0NNSlFOSGxzOW1Bb3IwcWdZTCtV?=
 =?utf-8?B?cWI4R2c5VWViamV2VXREQ3ZkZWtlbDY5M3dEU0ZHaFNMNS94UXJKa2hTZ0Nx?=
 =?utf-8?B?ckFHcVFJMUU0eDR6dlNTYnBPSFhpUDMvdHJVckZneDFLL0kybDJEQ0JVVHFr?=
 =?utf-8?B?Y05tL0lURDlKSldKVjNUSkNlajAvRVBRbi80eWg4MDl0YWdwUVJuallGZ3JE?=
 =?utf-8?B?ZlNhR3ZZU2Z4UWRncHFEK1g0MzBVQUNVTEJDWS9GWk4wZTZYZXEwK21xYkxu?=
 =?utf-8?B?N2NUZmJOMHcvK0xiSkwvZFB2MXVyMW1sSXFqNFdlM3pSNWNqbHpva0dzM2sw?=
 =?utf-8?B?SUgxelpGU0hJOGRUMDZFN1QyQUJjaDBOL015V1hBVW4yWC95NTZXb0NWMVhW?=
 =?utf-8?B?Tm5iMkpUeVo5b1YySlh4S0ZSYmdFUmVNU0dZZEdET3lHYWhhR1BBWTh2TC9M?=
 =?utf-8?B?dG1XZjI2SXBMSzJRY2hKdStweGZEbzNTWmdpazZURlVXN1FnVmMrR3RieTFS?=
 =?utf-8?B?ejJJODJMU2FTVlVFc3RGSGtuSXZoMEdzRkVTOXB2Yms2a211a2xJNWUySjEw?=
 =?utf-8?B?SWZOaFQ0dXRpaDNIR053eTBES3FHYUIxMGJzTGEwbXlPd2pFTVA4ZHlOdXN2?=
 =?utf-8?B?U1J1cWZ3bVJIdnFLRUx0bmNSbjJucFJVMmI2anJ6UU8rOGNvUXlST2JkdWZy?=
 =?utf-8?B?UGxhVFdZYWFTbzdSdEEyc0lib29UZnhGaHVJY3UyNjlCNXlLVmJuRnNSK0t4?=
 =?utf-8?B?enEvUlpIamFXamtCOStPbWxYaDV1VG5CYmgwZEFvbS9CbC9WQTVMMlhKN2Fa?=
 =?utf-8?B?MkxoTEpYM2FxMVo0WFZnYjRaV3VGQWNvVFlCdCtMYk9kQU8wZG1pSzBZcVBL?=
 =?utf-8?Q?/5R3YXrZL/9hkR3cmc2w8tNOp?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d0b6b284-ebc9-4de5-377f-08db3f368f05
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 11:26:16.4803
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /EDSadHpbFJAwdHOD9YwfqkN6hsECHCRAjXygc00HePJ2/PaNYreolrRPXbYYWIbKgOFgpz3yDCIJNCoSdB3/A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9264

On 06.04.2023 21:48, Andrew Cooper wrote:
> On 04/04/2023 3:53 pm, Jan Beulich wrote:
>> This can now also be used to reduce the number of parameters
>> x86emul_fpu() needs to take.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> As said in the previous patch, I think this patch wants reordering
> forwards and picking up the addition into state.
> 
> "Because we're going to need it in another hook, and it simplifies an
> existing one" is a perfectly fine justification in isolation.

As said there, I'll do that.

> With that done, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>,

Thanks.

> although...
> 
>> ---
>> We could of course set the struct field once early in x86_emulate(), but
>> for now I think we're better off leaving it as NULL where not actually
>> needed.
> 
> Do we gain anything useful from not doing it once?  It's certainly more
> to remember, and more code overall, to assign when needed.

Right, that's why I added the remark. In the harness and fuzzer we'll be
able to catch stray uses of the field if we keep it NULL. Actually, if
we were to always set the pointer, it perhaps wouldn't make sense to
have a pointer in struct x86_emulate_state in the first place; we'd be
better off putting the struct itself there.

>> --- a/xen/arch/x86/x86_emulate/fpu.c
>> +++ b/xen/arch/x86/x86_emulate/fpu.c
>> @@ -90,9 +90,8 @@ int x86emul_fpu(struct x86_emulate_state
>>                  unsigned int *insn_bytes,
>>                  enum x86_emulate_fpu_type *fpu_type,
>>  #define fpu_type (*fpu_type) /* for get_fpu() */
>> -                struct stub_exn *stub_exn,
>> -#define stub_exn (*stub_exn) /* for invoke_stub() */
>>                  mmval_t *mmvalp)
>> +#define stub_exn (*s->stub_exn) /* for invoke_stub() */
> 
> ... honestly, I'd really like to see these macros purged.
> 
> I know the general style was done like this to avoid code churn, but
> hiding indirection is a very rude thing to do, and none of these are
> usefully shortening the expressions they replace.

Right, getting rid of those is certainly on my radar. But it would mean
further code-churn-like changes in an area where history tells me
things often take a long time to get in (and hence there may be
measurable re-basing effort in the meantime).

> Also, putting stub_exn in the K&R type space is still weird to read.

What's K&R-ish there?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 11:27:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 11:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522009.811105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poN18-0001i4-Lc; Mon, 17 Apr 2023 11:27:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522009.811105; Mon, 17 Apr 2023 11:27:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poN18-0001hx-Hp; Mon, 17 Apr 2023 11:27:30 +0000
Received: by outflank-mailman (input) for mailman id 522009;
 Mon, 17 Apr 2023 11:27:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poN17-0001ho-Hv
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 11:27:29 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d471f044-dd12-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 13:27:27 +0200 (CEST)
Received: from mail-dm6nam11lp2176.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Apr 2023 07:27:25 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5277.namprd03.prod.outlook.com (2603:10b6:208:1e8::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 11:27:19 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 11:27:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d471f044-dd12-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681730847;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=6UTiTsPO8GVwqlj7t3BxjpF+JtSoOgTXRCqJhoII7og=;
  b=cNiO+ZcIZN+PI51XspBLz+uX5hmMoLuchvRCvRkqscZbuGn5R+beg/c7
   OxB037y7CFoOY86YkUprwK1BFYZwqxgZgTqttkQsfabVbcIG5bMuxoMC5
   iU5fRD6WOOrBXV41+2m0igg7CSI3l8w0mvGl+iQMxVPC0BVn5YmkE/w0F
   8=;
X-IronPort-RemoteIP: 104.47.57.176
X-IronPort-MID: 105696815
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:+KDNBagqV1+nU5GeRoLcvlAzX161pREKZh0ujC45NGQN5FlHY01je
 htvWjyPPPfeZmLyedBxPNzj8h8E75Tdm9YyTQZr/39jHikb9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWj0N8klgZmP6sT4AaBzyB94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tRfCRkxfzO6g96qxeu1dsJLvtYJIs/SadZ3VnFIlVk1DN4AaLWaGuDmwIEd2z09wMdTAfzZe
 swVLyJ1awjNaAFOPVFRD48imOCvhT/0dDgwRFC9/PJrpTSMilEuluGybrI5efTTLSlRtm+eq
 njL4CLSBRYCOcbE4TGE7mitlqnEmiaTtIc6TeXlqK800ATOroAVICYpDWGErPyJtladctF6A
 XweygcRgadnoSRHSfG4BXVUukWsvBQRRt5RGO0S8xyWx+zf5APxLncAZi5MbpohrsBebT8t0
 EWAk5X2BDhsmLqPQHmZ+/GfqjbaETgYKyoOaDEJSSMB4sL/u8cjgxTXVNFhHaWpyNrvFlnNL
 yuiqSE/g/AfiJAN3qDipFTf2Wvz+N7OUxI/4RjRUiS99ARlaYW5Zouur1/G8fJHK4XfRV6E1
 JQZp/WjACk1JcnlvESwrC8lRdlFO97t3OXgvGNS
IronPort-HdrOrdr: A9a23:D32Pe65VAlYJ/eJh8gPXwBTXdLJyesId70hD6qkRc20vTiX8ra
 uTdZsguyMc5Ax9ZJhio6HlBED4exLhHMdOgbX5Xo3SPjUO2lHYVL2KhLGKq1fd8kvFh4tgPM
 xbHJSWZuedMbE0t7ec3OAUKadH/PCXtIqTraP1yXN1SAFjbKttqz1+Fh2QHiRNNWp77N4CZe
 Oh2vY=
X-Talos-CUID: =?us-ascii?q?9a23=3At1S5BWgUUj8ei/5M2pnGUACwszJuNVz08GbzGku?=
 =?us-ascii?q?EFWMwRoG+bmeC0q1iqp87?=
X-Talos-MUID: 9a23:HqDYUAaUQCOhN+BTjAC0vg5vMfdU+4eHC2UnoZ82lvaoOnkl
X-IronPort-AV: E=Sophos;i="5.99,204,1677560400"; 
   d="scan'208";a="105696815"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=INs7hvEIOylB5wzbQFUze+In8sQwC8Y0PwEkSlzolPJu5vMI+0cwqr1JkrgLe0l2l+nsiAb8n5YcdfxO1yrvYqIWl4hwQoE+yaPLU+W1w4vDREnfv7xBkIhHhkQlSvzlQtQgdRNJp+zjDJJZKEQxWvSPiyCtU5aRinkX2BOclaORpC4//FX9/rKIJMbDyIJvnFLTKTaHpqOxDMLr8EYJIz++PUeWPebvHUKSq13gOOuyFlddWX0FtJ1d1RiyihhkluywNcph4Fqw5bEbpw+OLimkrqZrXJiV7wabIxbOp4CZ3p2g5MJb+GEO1Zh7suHDANkISPBdEeawxckd6nsWww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=F4VUQnZyuaznrhxdXf/7Atx4YpgITQfM7rd0F0CiHEw=;
 b=QmQQ4khmPTy1cf9T6n9eFW4td5p2LCt2kjojGfoA5kokleY/njqrpslCm+qD/0NTBsIUEe03L+Gs+ZuPh4sBUhyKpHsV0c0fdzPu7uS+CiuvXNNBfBlum39aV7YO5DGEI6iV7h3W4URdtmLZ5alGZqxVE8vMSK/Hrs1NZIqMeXrJvCX3GCnXBQFsLWQWXMfLH0M+Ii3oNkH29R6GRnQuTPAVKdaKb53XPmMI4eYBfKiKdlzGQFGVC/baKJo6vdH4m7lH0ReO/faJUSwIQMm8/38LUV6GIOoJ6WZvGZcEsCgrZT9Wl6a/SjPep1iXiA8FRP8tyRM3SQ1f4c1JhPDCpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F4VUQnZyuaznrhxdXf/7Atx4YpgITQfM7rd0F0CiHEw=;
 b=clE+rQnEz1qPme5PEemZ2gFinOoWaIwLlIDvMeaRpriLinid+bc5ZS/aP0uMntPa/br87VPf/MsMBNcWZRPWw8ObG3rHQrFj837TeUzkOMsq/k4QEZCXW0rfROHVnlpGlHQZzjPxpr9faAVLykLO4JaZfx4nMYxv7pWxLgGAG3E=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <f1ffbf21-e6e2-b3cc-5c02-2382c052d20e@citrix.com>
Date: Mon, 17 Apr 2023 12:27:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230415195816.3717648-1-andrew.cooper3@citrix.com>
 <1cde4c30-bc48-7170-d465-11ed8617449f@suse.com>
In-Reply-To: <1cde4c30-bc48-7170-d465-11ed8617449f@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO6P265CA0023.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ff::9) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MN2PR03MB5277:EE_
X-MS-Office365-Filtering-Correlation-Id: 746ca7fc-7a9b-4b69-21bb-08db3f36b43d
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	qSr8eT843mA9f8vuuUp7VNnVTGmcKER7g7TUD458OTRj9D+ekTR3urihEEVuyAH7hIWFhRqSOoEjNLaX7FgE/f304YBX8FZL5k4Io0+KLESSi7T5UE+xCezrbedh1YdPgyuZFUd+2EHG8HyFI989OjLiKAZF/ufOTJomFP6PimWFz8+lavSy/rmRIX7r7HpnTtfzJpi3IDrbuN6PbLFsbAcS2FgkTjRkxiZSate+9e3F++mkELJR30i4n2+EBULskt2JbQK+808rYMXMjcDM1OomTApa5nOdm+A74wJX0SqC1oYEl4qniqIsR3w6Noiptb0wPbzKz9geVSAu4tl2f4vBTqjhPKKzlpg6Y9uFVnAr4/0dJ5UvJWnZ3jLrxBCplb5av/5gGIXL5U67oVtBMv4gcGYEfCxQjY/QHXcvQ+mESpAY+1x4FiuAicUfnAK0FPk3wD3lEKsYlbnkw8OgUCVfanIGAGzfdlOx4LIefGHKVdmWCAlG4/kUTQSZ68pSjfJvS1LHdGFRJ7fN/Yv+Wb/CqvHGQd3BQeJiB3iZ2F87iCBQAJP9TlLLgCpJ7PGye7Kd+ZGwjVddU/mTI0Ww8LDdgsqVoW2DtgEFjYtENqpz2PHsEhVoApMPTHaVnEN95xEPb0nZYV7BkN5NmUAkJw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(396003)(366004)(136003)(376002)(451199021)(36756003)(2906002)(5660300002)(66899021)(8936002)(8676002)(41300700001)(86362001)(31696002)(38100700002)(31686004)(478600001)(54906003)(2616005)(26005)(6506007)(53546011)(6512007)(186003)(6666004)(6486002)(6916009)(4326008)(66476007)(66556008)(66946007)(316002)(83380400001)(82960400001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?S2ZoOFR3bjF5SUYzakJaMk1LSXNBQVRyUFBiMFpNY3RnNmswd1N2WFB1WUln?=
 =?utf-8?B?OVBiQmVUbWxpQVlvSGMycHRuSHVaTzkyQ0h0THE4ank2VEczcmhjR2daMmlQ?=
 =?utf-8?B?MnFHdGRiNDZwbTJMN3d1Rkl6STU0MDlTNFVFdUJuTHNSOWlSaC82Y3hleGpQ?=
 =?utf-8?B?b01VbnpmSFRURHRTZUk0aEdQSTI0VkFWZXZpQlpRTlNSSE9zVzJ4c0RlbnFj?=
 =?utf-8?B?VGMvKzV3eU5NdEVBVWU5aUNYd1YzOUVBK2Z4dXpIQzdycTBudy8zS2hpNzBl?=
 =?utf-8?B?TlVtMFVrcDhLQVl1aDF6QUJIM3JML1RJNkVQU0tKSzB2MXpYSnQrdWxZSXBm?=
 =?utf-8?B?MnQvN2d3RmJ0YVFVamVqRVRHU0hjcW9UTi9nZXUwYVpvejNYWmFjRUZ6Q1FB?=
 =?utf-8?B?Vkh2T3RoaFlHNmxRVXRZUEhRbXF0RXBCbVNkVHRDUFY0WlRUaHk3amVFNlFa?=
 =?utf-8?B?QzRkZWtOYTAycENjbzk4K2xuT3YwUXdrTVVXK24rNWpxSFdKV1hUV1FYVVpq?=
 =?utf-8?B?WlpNZ2M3RGtLQ3NWbVR0a0paZ0xHZzJnSTRud1NOc1NXN0tyUlE0WUk2TktX?=
 =?utf-8?B?RlpjcHFZV1VPZDdTUmZZK0EzS0tjR0VrYWZxdzB1eWxLMDNmaWpGTEVTTThp?=
 =?utf-8?B?OGpBMU5hZThHa0U5RytoSS9UYnhyZzkzaXNrWmJNS3hYVmZtUmN0d0hneXp5?=
 =?utf-8?B?ejdRSjhFSDJxR3BDUVNETFhEYVVxQ3Y2WkVqNWVWVEpQYU8vOFpMc0gvWXRy?=
 =?utf-8?B?UHNrVDYrOThFRitCdHVSOEt2MHE4b1c0anMvT1Q0SmpielJVb3BRejJuSVh1?=
 =?utf-8?B?Wkg0cnloV29qbFY0NzdCVzduZGxCdXd6dU1HdzNjenM0NExUdGxiWHlPM25P?=
 =?utf-8?B?OUc2V2dqUnpKdXFSNkMrRjF3L3M0RTBtVU91MkkwT2lNTXV2UGVjT3J4SDFO?=
 =?utf-8?B?SXM1MUdVaXl6d1VPZC9qVGlqdkxwVUY2cGJRemtvOHFaZ25pSk9ienBEaGE2?=
 =?utf-8?B?TXpNVW5yekYwV2lqekxsbSs3cGxzMkUzcXd2WVlzT2NDMzZ6V0lYSFJTVWZH?=
 =?utf-8?B?cGxXeVpqeW9mc3M1aUpvN1FVYXZwK2tqeDhYU016dnhKMzNjeHdFQXBRWHU4?=
 =?utf-8?B?YVVpd2t0bFAxbDVpSEZoUUYzQ1RyQnc1ZDQ5SjRialV1ZXB2TVNoakJtd1Yz?=
 =?utf-8?B?M2hNc2VuRFZkOW5ZOEZHRUNhcG93ajZMTVFWSnNpN3pBdiszNUVUSmtBdkQ5?=
 =?utf-8?B?Nnl1b2tzMWtOYkNuNytPQ2Vwa1B0M1huZWR2dytxUzkvOEg4WEdCbDVaM2Nl?=
 =?utf-8?B?OXlmZjUrR252b0p1U2hycFZVRVlTaEFjZzdoV3lVQndEQ3RBcEp0aHh4dHBl?=
 =?utf-8?B?SW1RMWkreElBMTFBdk9lbFFoTC9pc0pQbzlxMTBLejBNc2ZoUFdHVEZWTXNh?=
 =?utf-8?B?R2RxcUJTNmRselNLa3F1RFhFdUNscDNKZ2FBTEZENWwvOE5oNEx5R1BPZ25h?=
 =?utf-8?B?b0gxckJ4b3U5LytKSWxZQjdiOUVXbGpjdFZMcUQydGxsbFdTaWxKQjZDelB2?=
 =?utf-8?B?Y2JCV3VZUUV1YlVkUFlJUzJDWDA0TGYxdVB6TWNRL2p6bEt5b052ek5mN1Rz?=
 =?utf-8?B?YmxLaEVnVTRwZTBpYlRsVWFkU21hNzFEMUsrMzRmZmdXSGwzRVR6Z1RGNjZu?=
 =?utf-8?B?VlpjNCs2b3JISHJySHloYlRBZVhHZUJXYXVmd1ZnWEVmUWdFN2YreklNSnhF?=
 =?utf-8?B?Z0NKQ1VxaDlsZFdqVEpTRDhYU25yWjkxWldCRUF0SzRsOUI3LzdqNEFDSU5z?=
 =?utf-8?B?US9oQ3RDOCtlNkZ6OEZrMG9rbUlqZS9aWVdWOUNjQ3VmNGRWWHFWUmhaeXVi?=
 =?utf-8?B?WXMyOTRUTGNUaUFlSUNhUC9KbHFPRG53M1cyS1cyV0VzTTJTblErVGtOVWs0?=
 =?utf-8?B?bWpVanlhaUtiNFdnZGtlZ1dBd251akdGakREMXg4R1plQ2swZFdXUEpuejNE?=
 =?utf-8?B?OHhJa2VIMC95a25iTDNZK2tBOEwzZTNuRDJvcjlOOEpKejQzTENzdGJyaTVS?=
 =?utf-8?B?QmFvQTFHN1A0UHNGSjFTOFd4bXVheE4wdGdRSFBLcEJCZ1U1cWQwZTZ0TjNN?=
 =?utf-8?B?VWM0TjloUkJDYmZlcVd2SFR2RXAyQzluK0pJWEt4bWs0MFdjRXFkUE1mY09J?=
 =?utf-8?B?a1E9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	HL3us6sqS72ra6RbpHo4bCZAhCyQpAd7HiO29CwFRmzaG2W+dOTqLqlXFwmDHdX5BNfiJEG1p4Bd1dzNsQKyxXGcfBoaURnGZXFPG84wCVs43Q8jOZvk+7PR3gZKUryG48KDaTms5RjAk7/4VG/GxTg4s6DsYuGp5fgzHIKl6iE3KdvUmYdYGti7Wo7wWUn+t8b+qHzcXcM5XvRPW2HVOeCTtzrMzKBPirWLVFjGFV6hQ6Lk87DtYf/WwnsS5Ropbdsftp6ZQ6z7776rJg7z/piKA9RK5c4p6703rO0LwPVNA5MkQIUk5lyzGeitzeL3MsalIzy3DXfIYMB6zaJZ54RZI18MGP2kMhiDpzCzJTkE/DrzsZAm2+h+Gw3o5KZC6jAqZq1lo4Ph8/5lxhv0L4r4qy4WfOHXJBAlxdifYyH7tgag49fcR9tKNyJuuzV5UMxRM4sYVfGVrnkJd4fU/vmdvFnKX+oXbhxHjDND8WepvwRtRgazaa0FfoDW4knpOnubPxiK0CIIsFsr/FqKw7y8UH7CUsz+NqN3eoCq0lrjnGTL7dAZasWv/MAZTxrqowb8UEUhXmpSEKL6vxydMCYrGE8NfV/uRPscER/uflD3K4rOZw+CapUsgvodnTKBq36PdGBLk5t3958dJ/FG3gkTOhrFseb/0ZUqA2F/5neokYivnARZLvG8hWy3vrAwk6XyT8hcnQ8/BtccFVVnQd6VXbM71Totvhs9v9nXvLzNHFiUG9MssehGaLjL6qp6aYeidVTYcOuc/eRAxzWTPzNRZ5wkeE0nCAVvhIR2AbyQZ5e7sa0Z04rtNWaF+vhAJpZwSNAwh1mQuKYZkW26nc9oA+9BehhZH92TzOH0loM=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 746ca7fc-7a9b-4b69-21bb-08db3f36b43d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 11:27:19.0248
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aGdtP0zU7L2/u7iJvwinN3yygU5PYll3UxD1ICylToV+xQGkchHgeIgh0M9oPZwlEW/AvT7fzaFmVkAnjmruO5JfcNCatuIk59VQxhxNevc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5277

On 17/04/2023 11:28 am, Jan Beulich wrote:
> On 15.04.2023 21:58, Andrew Cooper wrote:
>> Right now, trying to apply a livepatch on any system with CET shstk (AMD Zen3
>> or later, Intel Tiger Lake or Sapphire Rapids and later) fails as follows:
>>
>>   (XEN) livepatch: lp: Verifying enabled expectations for all functions
>>   (XEN) common/livepatch.c:1591: livepatch: lp: timeout is 30000000ns
>>   (XEN) common/livepatch.c:1703: livepatch: lp: CPU28 - IPIing the other 127 CPUs
>>   (XEN) livepatch: lp: Applying 1 functions
>>   (XEN) hi_func: Hi! (called 1 times)
>>   (XEN) Hook executing.
>>   (XEN) Assertion 'local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu))' failed at arch/x86/smp.c:265
>>   (XEN) *** DOUBLE FAULT ***
>>   <many double faults>
>>
>> The assertion failure is from a global (system wide) TLB flush initiated by
>> modify_xen_mappings().  I'm not entirely sure when this broke, and I'm not
>> sure exactly what causes the #DFs, but it doesn't really matter either,
>> because they highlight a latent bug that I'd overlooked in the CET-SS vs
>> patching work in the first place.
> Which perhaps warrants a Fixes: tag at least for that latter change you
> mention?

Hmm yes.  I meant to do that and forgot.

>
>> While we're careful to arrange for the patching CPU to avoid encountering
>> non-shstk memory with transient shstk perms, other CPUs can pick these
>> mappings up too if they need to re-walk for uarch reasons.
>>
>> Another bug is that for livepatching, we only disable CET if shadow stacks are
>> in use.  Running on Intel CET systems when Xen is only using CET-IBT will
>> crash in arch_livepatch_quiesce() when trying to clear CR0.WP with CR4.CET
>> still active.
>>
>> Also, we never went and cleared the dirty bits on .rodata.  This would
>> matter (for the same reason it matters on .text - it becomes a valid target
>> for WRSS), but we never actually patch .rodata anyway.
> Maybe worth making explicit that this (the clearing of D bits for .rodata)
> also isn't changed here? Otherwise this reads as if you meant to deal with
> this as well.

Well, it is dealt with, but in a roundabout way.

With this patch in place, we don't relax the perms on .rodata, and never
crash in either alternatives or livepatching.

So we never actually write to .rodata and never set D bits; there's
nothing to clean up.

If in the future we do find a use case that involves writing to .rodata,
then we will need to relax the perms there too, and the D bits will be
cleared as a side effect of re-tightening.  This will also involve
extending virtual_region with more than just the .text reference.

>
>> --- a/xen/arch/x86/alternative.c
>> +++ b/xen/arch/x86/alternative.c
>> @@ -382,24 +382,28 @@ static int __init cf_check nmi_apply_alternatives(
>>       */
>>      if ( !(alt_done & alt_todo) )
>>      {
>> -        unsigned long cr0, cr4;
>> -
>> -        cr0 = read_cr0();
>> -        cr4 = read_cr4();
>> -
>> -        if ( cr4 & X86_CR4_CET )
>> -            write_cr4(cr4 & ~X86_CR4_CET);
>> -
>> -        /* Disable WP to allow patching read-only pages. */
>> -        write_cr0(cr0 & ~X86_CR0_WP);
>> +        /*
>> +         * Relax perms on .text to be RWX, so we can modify them.
>> +         *
>> +         * This relaxes perms globally, but we run ahead of bringing APs
>> +         * online, so only have our own TLB to worry about.
>> +         */
>> +        modify_xen_mappings_lite(XEN_VIRT_START + MB(2),
>> +                                 (unsigned long)&__2M_text_end,
>> +                                 PAGE_HYPERVISOR_RWX);
>> +        flush_local(FLUSH_TLB_GLOBAL);
>>  
>>          _apply_alternatives(__alt_instructions, __alt_instructions_end,
>>                              alt_done);
>>  
>> -        write_cr0(cr0);
>> -
>> -        if ( cr4 & X86_CR4_CET )
>> -            write_cr4(cr4);
>> +        /*
>> +         * Reinstate perms on .text to be RW.  This also cleans out the dirty
> I suppose you mean RX here, matching ...
>
>> +         * bits, which matters when CET Shstk is active.
>> +         */
>> +        modify_xen_mappings_lite(XEN_VIRT_START + MB(2),
>> +                                 (unsigned long)&__2M_text_end,
>> +                                 PAGE_HYPERVISOR_RX);
> ... the code.

Oops yes.

>
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -5879,6 +5879,77 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>>      return modify_xen_mappings(s, e, _PAGE_NONE);
>>  }
>>  
>> +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_HAS_ALTERNATIVE)
> In line with your observation that this wants to be ||, ...
>
>> + * Must be called on leaf page boundaries.
>> + */
>> +void modify_xen_mappings_lite(unsigned long s, unsigned long e, unsigned int _nf)
> ... perhaps use init_or_livepatch here?

Ah yes, missed that.

>  At which point the #if may want
> to go away, as in the !LIVEPATCH case the code then will be discarded
> post-init anyway? The more that HAS_ALTERNATIVE is always true on x86
> anyway.

I was considering if there was a nicer way to do this.  One idea was to
end up with it in some kind of lib-y form so it gets pulled in on
demand.  But that wouldn't cope nicely with putting it in .init for the
!LIVEPATCH case.

I think I'll just go with init_or_livepatch and drop the ifdefary.

>> +{
>> +    unsigned long v = s, fm, nf;
>> +
>> +    /* Set of valid PTE bits which may be altered. */
>> +#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
>> +    _nf &= FLAGS_MASK;
>> +
>> +    fm = put_pte_flags(FLAGS_MASK);
>> +    nf = put_pte_flags(_nf);
>> +
>> +    ASSERT(nf & _PAGE_PRESENT);
>> +    ASSERT(IS_ALIGNED(s, PAGE_SIZE) && s >= XEN_VIRT_START);
>> +    ASSERT(IS_ALIGNED(e, PAGE_SIZE) && e <= XEN_VIRT_END);
> I can see why you want s page-aligned, but does e really need to be?

To be honest, I copied this straight from modify_xen_mappings().

I think the logic will work without it being aligned, but I'd also
consider it an error to pass in a non-aligned end, seeing as this
function strictly operates on pagetable granularity.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 11:51:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 11:51:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522015.811115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNNt-00059Z-NW; Mon, 17 Apr 2023 11:51:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522015.811115; Mon, 17 Apr 2023 11:51:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNNt-00059S-Jb; Mon, 17 Apr 2023 11:51:01 +0000
Received: by outflank-mailman (input) for mailman id 522015;
 Mon, 17 Apr 2023 11:51:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poNNr-00059M-VK
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 11:51:00 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20616.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1dbfe4ae-dd16-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 13:50:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8379.eurprd04.prod.outlook.com (2603:10a6:10:241::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 11:50:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 11:50:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1dbfe4ae-dd16-11ed-b21e-6b7b168915f2
Message-ID: <1cd40a12-7030-ec0d-dae7-e60132c2989c@suse.com>
Date: Mon, 17 Apr 2023 13:50:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v4 1/3] xen/riscv: introduce setup_initial_pages
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1680882176.git.oleksii.kurochko@gmail.com>
 <50ed83073ccb440fb651070de8b0abebd3888b43.1680882176.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <50ed83073ccb440fb651070de8b0abebd3888b43.1680882176.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0253.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:af::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 07.04.2023 17:48, Oleksii Kurochko wrote:
> The idea was taken from xvisor but the following changes
> were done:
> * Use only a minimal part of the code enough to enable MMU
> * rename {_}setup_initial_pagetables functions
> * add an argument for setup_initial_mapping to have
>   an opportunity to set PTE flags.
> * update setup_initial_pagetables function to map sections
>   with correct PTE flags.
> * Rewrite enable_mmu() to C.
> * map linker addresses range to load addresses range without
>   1:1 mapping. It will be 1:1 only in case when
>   load_start_addr is equal to linker_start_addr.
> * add safety checks such as:
>   * Xen size is less than page size
>   * linker addresses range doesn't overlap load addresses
>     range
> * Rework macros {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK}
> * change PTE_LEAF_DEFAULT to RW instead of RWX.
> * Remove phys_offset as it is not used now
> * Remove alignment  of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0);
>   in setup_initial_mapping() as they should already be aligned.
>   Make a check that {map_pa}_start are aligned.
> * Remove clear_pagetables() as initial pagetables will be
>   zeroed during bss initialization
> * Remove __attribute__((section(".entry")) for setup_initial_pagetables()
>   as there is no such section in xen.lds.S
> * Update the argument of pte_is_valid() to "const pte_t *p"
> * Add check that Xen's load address is aligned at 4k boundary
> * Refactor setup_initial_pagetables() so it is mapping linker
>   address range to load address range. After the setup, needed
>   permissions are set for specific sections (such as .text, .rodata, etc.);
>   otherwise RW permission will be set by default.
> * Add function to check that requested SATP_MODE is supported
> 
> Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V4:
>   * use GB() macros instead of defining SZ_1G
>   * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))

Perhaps in a separate patch, may I ask that you add - like x86 and Arm
have it - a block comment to config.h laying out virtual address space
use? Knowing what is planned to be put where (even if just vaguely, i.e.
keeping open the option of changing the layout) is likely going to help
with figuring out whether this is a good placement.

Such a comment could then also be accompanied by mentioning that
virtual address space really "wraps" at certain boundaries (due to the
upper address bits simply being ignored). For an x86 person like me
this is certainly unexpected / unusual behavior.

>   * remove unnecessary 'asm' word at the end of #error
>   * encapsulate pte_t definition in a struct
>   * rename addr_to_ppn() to ppn_to_paddr().
>   * change type of paddr argument from const unsigned long to paddr_t
>   * pte_to_paddr() update prototype.
>   * calculate size of Xen binary based on an amount of page tables
>   * use unsigned int instead of uint32_t as its use isn't warranted.
>   * remove extern of bss_{start,end} as they aren't used in mm.c anymore
>   * fix code style
>   * add argument for HANDLE_PGTBL macros instead of curr_lvl_num variable
>   * make enable_mmu() as noinline to prevent under link-time optimization
>     because of the nature of enable_mmu()
>   * add function to check that SATP_MODE is supported.
>   * update the commit message
>   * update setup_initial_pagetables to set correct PTE flags in one pass
>     instead of calling setup_pte_permissions after setup_initial_pagetables()
>     as setup_initial_pagetables() isn't used to change permission flags.
> ---
> Changes in V3:
>  - update definition of pte_t structure to have a proper size of pte_t
>    in case of RV32.
>  - update asm/mm.h with new functions and remove unnecessary 'extern'.
>  - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
>  - update paddr_to_pte() to receive permissions as an argument.
>  - add check that map_start & pa_start is properly aligned.
>  - move  defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to
>    <asm/page-bits.h>
>  - Rename PTE_SHIFT to PTE_PPN_SHIFT
>  - refactor setup_initial_pagetables: map all LINK addresses to LOAD addresses
>    and after setup PTEs permission for sections; update check that linker
>    and load addresses don't overlap.
>  - refactor setup_initial_mapping: allocate pagetable 'dynamically' if it is
>    necessary.
>  - rewrite enable_mmu in C; add the check that map_start and pa_start are
>    aligned on 4k boundary.
>  - update the comment for the setup_initial_pagetable function
>  - Add RV_STAGE1_MODE to support different MMU modes
>  - set XEN_VIRT_START very high to not overlap with load address range
>  - align bss section
> ---
> Changes in V2:
>  * update the commit message:
>  * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK, SIZE,...} and
>    introduce instead of them XEN_PT_LEVEL_*() and LEVEL_*
>  * Rework pt_linear_offset() and pt_index based on  XEN_PT_LEVEL_*()
>  * Remove clear_pagetables() functions as pagetables were zeroed during
>    .bss initialization
>  * Rename _setup_initial_pagetables() to setup_initial_mapping()
>  * Make PTE_DEFAULT equal to RX.
>  * Update prototype of setup_initial_mapping(..., bool writable) -> 
>    setup_initial_mapping(..., UL flags)  
>  * Update calls of setup_initial_mapping according to new prototype
>  * Remove unnecessary call of:
>    _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
>  * Define index* in the loop of setup_initial_mapping
>  * Remove attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
>    as we don't have such section
>  * make arguments of paddr_to_pte() and pte_is_valid() as const.
>  * make xen_second_pagetable static.
>  * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
>  * update  'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
>  * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
>  * set __section(".bss.page_aligned") for page tables arrays
>  * fix indentations
>  * Change '__attribute__((section(".entry")))' to '__init'
>  * Remove phys_offset as it isn't used now.
>  * Remove alignment  of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
>    setup_initial_mapping() as they should already be aligned.
>  * Remove clear_pagetables() as initial pagetables will be
>    zeroed during bss initialization
>  * Remove __attribute__((section(".entry")) for setup_initial_pagetables()
>    as there is no such section in xen.lds.S
>  * Update the argument of pte_is_valid() to "const pte_t *p"
> ---
> 
>  xen/arch/riscv/Makefile                |   1 +
>  xen/arch/riscv/include/asm/config.h    |  12 +-
>  xen/arch/riscv/include/asm/mm.h        |   9 +
>  xen/arch/riscv/include/asm/page-bits.h |  10 +
>  xen/arch/riscv/include/asm/page.h      |  65 +++++
>  xen/arch/riscv/mm.c                    | 319 +++++++++++++++++++++++++
>  xen/arch/riscv/riscv64/head.S          |   2 +
>  xen/arch/riscv/setup.c                 |  11 +
>  xen/arch/riscv/xen.lds.S               |   4 +
>  9 files changed, 432 insertions(+), 1 deletion(-)
>  create mode 100644 xen/arch/riscv/include/asm/mm.h
>  create mode 100644 xen/arch/riscv/include/asm/page.h
>  create mode 100644 xen/arch/riscv/mm.c
> 
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 443f6bf15f..956ceb02df 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,5 +1,6 @@
>  obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>  obj-y += entry.o
> +obj-y += mm.o
>  obj-$(CONFIG_RISCV_64) += riscv64/
>  obj-y += sbi.o
>  obj-y += setup.o
> diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
> index 763a922a04..0cf9673558 100644
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -39,12 +39,22 @@
>    name:
>  #endif
>  
> -#define XEN_VIRT_START  _AT(UL, 0x80200000)
> +#ifdef CONFIG_RISCV_64
> +#define XEN_VIRT_START 0xFFFFFFFFC0000000 /* (_AC(-1, UL) + 1 - GB(1)) */
> +#else
> +#error "RV32 isn't supported"
> +#endif
>  
>  #define SMP_CACHE_BYTES (1 << 6)
>  
>  #define STACK_SIZE PAGE_SIZE
>  
> +#ifdef CONFIG_RISCV_64
> +#define RV_STAGE1_MODE SATP_MODE_SV39
> +#else
> +#define RV_STAGE1_MODE SATP_MODE_SV32
> +#endif
> +
>  #endif /* __RISCV_CONFIG_H__ */
>  /*
>   * Local variables:
> diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
> new file mode 100644
> index 0000000000..e16ce66fae
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/mm.h
> @@ -0,0 +1,9 @@
> +#ifndef _ASM_RISCV_MM_H
> +#define _ASM_RISCV_MM_H
> +
> +void setup_initial_pagetables(void);
> +
> +void enable_mmu(void);
> +void cont_after_mmu_is_enabled(void);
> +
> +#endif /* _ASM_RISCV_MM_H */
> diff --git a/xen/arch/riscv/include/asm/page-bits.h b/xen/arch/riscv/include/asm/page-bits.h
> index 1801820294..0879a527f2 100644
> --- a/xen/arch/riscv/include/asm/page-bits.h
> +++ b/xen/arch/riscv/include/asm/page-bits.h
> @@ -1,6 +1,16 @@
>  #ifndef __RISCV_PAGE_BITS_H__
>  #define __RISCV_PAGE_BITS_H__
>  
> +#ifdef CONFIG_RISCV_64
> +#define PAGETABLE_ORDER         (9)
> +#else /* CONFIG_RISCV_32 */
> +#define PAGETABLE_ORDER         (10)
> +#endif
> +
> +#define PAGETABLE_ENTRIES       (1 << PAGETABLE_ORDER)
> +
> +#define PTE_PPN_SHIFT           10
> +
>  #define PAGE_SHIFT              12 /* 4 KiB Pages */
>  #define PADDR_BITS              56 /* 44-bit PPN */
>  
> diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
> new file mode 100644
> index 0000000000..30406aa614
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/page.h
> @@ -0,0 +1,65 @@
> +#ifndef _ASM_RISCV_PAGE_H
> +#define _ASM_RISCV_PAGE_H
> +
> +#include <xen/const.h>
> +#include <xen/types.h>
> +
> +#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
> +
> +#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
> +#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
> +#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
> +#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
> +#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
> +
> +#define PTE_VALID                   BIT(0, UL)
> +#define PTE_READABLE                BIT(1, UL)
> +#define PTE_WRITABLE                BIT(2, UL)
> +#define PTE_EXECUTABLE              BIT(3, UL)
> +#define PTE_USER                    BIT(4, UL)
> +#define PTE_GLOBAL                  BIT(5, UL)
> +#define PTE_ACCESSED                BIT(6, UL)
> +#define PTE_DIRTY                   BIT(7, UL)
> +#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
> +
> +#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
> +#define PTE_TABLE                   (PTE_VALID)
> +
> +/* Calculate the offsets into the pagetables for a given VA */
> +#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
> +
> +#define pt_index(lvl, va) pt_linear_offset(lvl, (va) & XEN_PT_LEVEL_MASK(lvl))
> +
> +/* Page Table entry */
> +typedef struct {
> +#ifdef CONFIG_RISCV_64
> +uint64_t pte;
> +#else
> +uint32_t pte;
> +#endif
> +} pte_t;

Please indent both field declarations accordingly.

> +#define addr_to_pte(x) (((x) >> PTE_PPN_SHIFT) << PAGE_SHIFT)

This still looks to be converting _to_ an address, not to PTE layout, ...

> +/* Shift the VPN[x] or PPN[x] fields of a virtual or physical address
> + * to become the shifted PPN[x] fields of a page table entry */
> +#define ppn_to_paddr(x) (((x) >> PAGE_SHIFT) << PTE_PPN_SHIFT)

... while this converts an address (not a ppn) to PTE layout (not a
paddr). Getting the names of these helpers right is crucial for easy
following of any code using them. To be honest, I'll stop reviewing
here, because the names being wrong is just going to be too confusing.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 11:59:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 11:59:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522019.811124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNVV-0005oN-FZ; Mon, 17 Apr 2023 11:58:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522019.811124; Mon, 17 Apr 2023 11:58:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNVV-0005oG-Cw; Mon, 17 Apr 2023 11:58:53 +0000
Received: by outflank-mailman (input) for mailman id 522019;
 Mon, 17 Apr 2023 11:58:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YIcH=AI=invisiblethingslab.com=simon@srs-se1.protection.inumbo.net>)
 id 1poNVT-0005oA-8f
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 11:58:51 +0000
Received: from wout1-smtp.messagingengine.com (wout1-smtp.messagingengine.com
 [64.147.123.24]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3457aa80-dd17-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 13:58:48 +0200 (CEST)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id 78AB43200926;
 Mon, 17 Apr 2023 07:58:43 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Mon, 17 Apr 2023 07:58:43 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 17 Apr 2023 07:58:41 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3457aa80-dd17-11ed-8611-37d641c3527e
Message-ID: <c44c1c88-64ef-7164-2647-37709f2b2551@invisiblethingslab.com>
Date: Mon, 17 Apr 2023 13:58:27 +0200
MIME-Version: 1.0
Subject: Re: S0ix support in Xen
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
References: <9051e484-b128-715a-9253-48af8e47bb9d@invisiblethingslab.com>
 <ZCRNSeQzfYikJMmG@Air-de-Roger>
From: Simon Gaiser <simon@invisiblethingslab.com>
In-Reply-To: <ZCRNSeQzfYikJMmG@Air-de-Roger>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------dijNSEYwkfgDcQp3XA8ncRM1"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------dijNSEYwkfgDcQp3XA8ncRM1
Content-Type: multipart/mixed; boundary="------------n5AiTDCk20napUsgSkkHzLti";
 protected-headers="v1"

--------------n5AiTDCk20napUsgSkkHzLti
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Roger Pau Monné:
> On Mon, Feb 27, 2023 at 12:48:03PM +0100, Simon Gaiser wrote:
>> Hi,
>>
>> I have been looking into using S0ix with Xen. On systems with
>> 11th gen (Tiger Lake) Intel mobile CPUs or newer this is often the
>> only supported suspend method, thus we want to support it in Qubes
>> OS.
>>
>> Below is a summary of my current understanding of what's needed (and
>> known unknowns). I would appreciate some feedback (what's missing,
>> preferred solutions, etc.).
>>
>> Note this topic is much above my previous experience with Xen and x86
>> power management internals, so sorry if I'm missing things that are
>> obvious to you.
>>
>> PIT timer: During some previous private discussion it was mentioned
>> that the PIT timer that Xen initializes for IO-APIC testing prevents
>> S0ix residency and therefore that part needs to be reworked. But if
>> I'm reading the current code correctly Xen can already use the HPET
>> timer instead, either with an automatic fallback if PIT is
>> unavailable or by forcing it via hpet=legacy-replacement=1. Looking
>> at the rest I think the PIT isn't used if Xen finds another
>> clocksource. Did I miss something?
>
> Do you have some reference to documentation related to the S0ix
> states?
>
> I would like to understand exactly what's required in terms of
> hardware devices the OS can use and still be able to enter such
> states.

Unfortunately the documentation that I'm aware of is rather sparse.
There are two blog posts by Intel [1] and [2] that are quite good when
you are trying to get it working under Linux. [3] might also be useful
for debugging.

But I'm not aware of an explicit and complete list of what is required
to enter S0ix. You can deduce things from the blog posts or, for example,
from the names of the debug register fields, but yeah ...

Also [2] contains this gem:

    Currently, only the NDA version of the Intel® SoC Watch tool can
    expose the IP Link Power State. [...] If you have any questions,
    please refer to your Intel representative.

Regarding what devices are active in S0ix, the PCH datasheet [4] has a
few additional pointers, like:

    USB 3.2 only works in S0. USB 2.0 survives S0ix and Sx states and
    provides early boot access.

If someone knows about more public docs, I'm all ears.

Simon

[1]: https://01.org/blogs/qwang59/2018/how-achieve-s0ix-states-linux
[2]: https://01.org/blogs/qwang59/2020/linux-s0ix-troubleshooting
[3]: https://01.org/blogs/2019/using-power-management-controller-drivers-debug-low-power-platform-states
[4]: https://cdrdv2.intel.com/v1/dl/getContent/631119

--------------n5AiTDCk20napUsgSkkHzLti--

--------------dijNSEYwkfgDcQp3XA8ncRM1
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEE3E8ezGzG3N1CTQ//kO9xfO/xly8FAmQ9NGUACgkQkO9xfO/x
ly972A//VAAPjXpWKoQW2CE2d6UIEugoxogcsB32rpYFDqWJgGUWoIia+2WL8mMs
uvbeV51SMatKwjX3ruS0E41k98MvATPvyG0VpTUjR54NptdO8dZsW2Oz8a59eSs1
JFiQQ7RAkhAGL/+KETSu5NhUn7uvEnuQFyN3Fie/yzUM/odkC9EkqVa39cZhs6E6
uko+XShLogKpKk23N0Mt8Yr0tuz6qKCmZDm1zJHlToJanHV4YgRwDffnN/a48lcz
WFgoN8f0xzyviG8Obb0NZmkCwV2puoHtJcQURv5TSI+6k3gpFmZe+49bgWXhPMGM
JTszxb3IcKBuA9cACLbogdiLeUZbvsyFHb9CH+CYmIUApfBntMU8SdM0KA2TMQxh
HVg9Z0Zda11ZL9A6j5A2ii3FT6lWCNF/ERnyR47VKu76WsfAqs+NVw/e++V6LuzF
G9/uUhi2wFuJQd/aYyzRiS7jPEk2PlSO3edHORdbxonumbokAuWARkgtSqCC+Ilg
DaKDgOfqlmuPUp6OAiEsZKazFvsgXqaqbl8uDRde3FUbqC9iBlyjYFJRy+heY96D
2ck7l5umCzwFXO0O+1dRRqDqR1ZO6OZ8A4NEfYIO/5GVO4tVTqqSxhCy5QpJfSrj
fJqSgP+bAoLMyvILVV9PzCUBb2OVC8MZyBTbDzqhj6BuHb0+1Uw=
=iRok
-----END PGP SIGNATURE-----

--------------dijNSEYwkfgDcQp3XA8ncRM1--


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:11:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:11:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522025.811135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNhQ-0008H0-It; Mon, 17 Apr 2023 12:11:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522025.811135; Mon, 17 Apr 2023 12:11:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNhQ-0008Gt-GJ; Mon, 17 Apr 2023 12:11:12 +0000
Received: by outflank-mailman (input) for mailman id 522025;
 Mon, 17 Apr 2023 12:11:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YIcH=AI=invisiblethingslab.com=simon@srs-se1.protection.inumbo.net>)
 id 1poNhO-0008Gn-Oh
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:11:10 +0000
Received: from wout1-smtp.messagingengine.com (wout1-smtp.messagingengine.com
 [64.147.123.24]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ee31c078-dd18-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 14:11:09 +0200 (CEST)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.west.internal (Postfix) with ESMTP id 14A853200063;
 Mon, 17 Apr 2023 08:11:05 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute6.internal (MEProxy); Mon, 17 Apr 2023 08:11:05 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 17 Apr 2023 08:11:02 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee31c078-dd18-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm2; t=
	1681733464; x=1681819864; bh=wV1CyrKzAueO3Q0Ub+JKxlmobwzQ6Wd5szN
	iDhr2RRY=; b=aAGRFNpmDru8IQP6U8cJxD0r+63nulegR+IY7hTPK6M4U/47QBF
	6xUMPJXSSMHQ+VZFYo0cwZG3mpFkQ0DS/8ek1/Kdsl6hMTbkISJCN5jKhiGAhdio
	76O8VRGgpi/WWPAQCZrAVDxLvSdGTqQITUQbgcRXzsE5sDejikjaY/otEx3x9R/s
	G/4dmDWdWEIGnNtEQ768SlSHSIxyjBVtvo9EcKygdFdmUgU83axyD504Dal9JiGv
	nGc5FDYrl7ofRFKzeiOhWRROAyGDMBJpAXm6x0pspfAK7+uW0r0QaOnqTTYLEsOl
	icNzV7ZJcsFIYDQRo64GCi61ZluHIXQte4g==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1681733464; x=1681819864; bh=wV1CyrKzAueO3
	Q0Ub+JKxlmobwzQ6Wd5szNiDhr2RRY=; b=EubID4dbR/4sZkmJCqcjfLhmsTFmY
	0QbIB32GS1kBc5oBmOmEBh9guNm3K43eyn97b2HdnCcvho6Hj0bWTYVBv0g6lWyt
	y1N+ofh35N6r/tMsU11F/T22bM3YdSZuy50etIXnFWZqWpiQ4Cw/fIljbn2jewoQ
	UlNaGN0D+W1D8oaAYZf3ih0qutvQPiQPZV9E1L9ESA46dUxlzRQaFzV7TC/KovMu
	AzXavwAZp5GjkmjjlJ83ywiyzs5LVbUDX46kWNeDZ0n385A4tQQil/5kIWr4ljGS
	wd6NrXcyeCKpcDPilCNeDmD6EpQi2kCprn9QxxhVTh34P8Le6qhMK1XmQ==
X-ME-Sender: <xms:WDc9ZGhTj-0YYVKhNGLoeadKrq_hjruC3CXoOgbNsT-iQZS3GEcHIA>
    <xme:WDc9ZHAPzEYU6YMRhQbgMQfuw7YGcxUtksbR8agHZ5AMvetjl4LSs19OX5lTcd40t
    5BJHi6okbg0VxU>
X-ME-Received: <xmr:WDc9ZOGKYc2MV8dG7NsyKGIMT49JdtC9XhJgYcu11byCvELDO9MjoiJmyDT7ODeODvRpnPMadZeSWmkZYSVCQhqLOw>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrvdeliedggeejucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepkfffggfuvfevfhfhjggtsehgtderredttdejnecuhfhrohhmpefuihhmohhn
    ucfirghishgvrhcuoehsihhmohhnsehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtg
    homheqnecuggftrfgrthhtvghrnhepkeeuffeitdegffelgeelhfeltdefgfeitdevieff
    gefhgfeltdevhefhudfgteevnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpe
    hmrghilhhfrhhomhepshhimhhonhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgt
    ohhm
X-ME-Proxy: <xmx:WDc9ZPSznX_pBxyLN1r8iKOGA4d-liOmKu-iekvQhgpvJyLQHwI4hg>
    <xmx:WDc9ZDybBr-Dq3ZHiCwVdbMTqtItp451Gk9Xp8_TRbuKtEM-GesdRA>
    <xmx:WDc9ZN6cfulhbL2_dEjDftPQb4wgGMQBvEf9vkii_qSaC9ZTux0SOg>
    <xmx:WDc9ZBbDpgj3vY61qTG44LjjqUL8kTX8TADlHU15UEoeLO0NRqgFYQ>
Feedback-ID: idc5945a3:Fastmail
Message-ID: <87a2e249-4dd6-ddd7-680b-dfc21e57e638@invisiblethingslab.com>
Date: Mon, 17 Apr 2023 14:10:35 +0200
MIME-Version: 1.0
Subject: Re: RFC: disable HPET legacy mode after timer check
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
References: <cb408368-077d-edb5-b4ad-f80086db48c1@invisiblethingslab.com>
 <0ac3fce6-dcd2-4521-6207-ede4d90e656b@citrix.com>
Content-Language: en-US
From: Simon Gaiser <simon@invisiblethingslab.com>
In-Reply-To: <0ac3fce6-dcd2-4521-6207-ede4d90e656b@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------a0w7qVAi99JZJ9x0F80MJVVp"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------a0w7qVAi99JZJ9x0F80MJVVp
Content-Type: multipart/mixed; boundary="------------xRuodBTeRS0m0l3XA2JPsUtY";
 protected-headers="v1"
From: Simon Gaiser <simon@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Message-ID: <87a2e249-4dd6-ddd7-680b-dfc21e57e638@invisiblethingslab.com>
Subject: Re: RFC: disable HPET legacy mode after timer check
References: <cb408368-077d-edb5-b4ad-f80086db48c1@invisiblethingslab.com>
 <0ac3fce6-dcd2-4521-6207-ede4d90e656b@citrix.com>
In-Reply-To: <0ac3fce6-dcd2-4521-6207-ede4d90e656b@citrix.com>

--------------xRuodBTeRS0m0l3XA2JPsUtY
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Andrew Cooper:
[...]
>> It reaches low power states only for a fraction of the suspend-to-idle
>> time, so something still makes the CPU/chipset think it should leave the
>> low power mode, but that's another topic.
>
> Do you have any further info here?  There is a range of possibilities,
> from excess timers in Xen (e.g. PV guests default to a 100Hz timer even
> though no guests actually want it AFAICT), or the 1s TSC rendezvous
> (which isn't actually needed on modern systems), all the way to the
> platform devices not entering d3hot.

So in the meantime I have made some progress here.

What helps a lot is setting cpufreq to powersave before going to s2idle.
With that I get a residency of about 88 % (everything is still tested
with only dom0 running). Not yet the > 99 % that native Linux manages,
but much better than before (<< 50 %).

While, based on your and Marek's feedback, I was already looking at
active timers, I had first ignored the cpufreq dbs timer, since the idle
driver suspends it and I assumed it only showed as active because I woke
things up when triggering the debug key. But it turns out that disabling
the ondemand governor has a big effect. I'm not sure yet whether it's the
timer itself or some other part of the governor.

I also tried disabling the time calibration timer. While eyeballing the
power meter I first thought it brought some improvement, but there's no
difference according to the residency counters (I will need to improve
my power measurement setup).

Other timers I see active:

common/sched/core.c#vcpu_singleshot_timer_fn:
If I understand correctly those are configured by the domain (so dom0
here), so Linux should be doing this right. But I will have to take a
closer look.

arch/x86/cpu/mcheck/non-fatal.c#mce_work_fn:
Triggers only seldom (period >> 1 s), so an unlikely culprit. But I will
try disabling it.

arch/x86/time.c#plt_overflow:
Ditto.

Simon

--------------xRuodBTeRS0m0l3XA2JPsUtY--

--------------a0w7qVAi99JZJ9x0F80MJVVp
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEE3E8ezGzG3N1CTQ//kO9xfO/xly8FAmQ9Nz0ACgkQkO9xfO/x
ly8kHRAAp9lLysWFjmSNsOs5x5zBGrIiiiF7xmOZ4U7UGNlmeN2Olre/5ASBhoQg
fgGQNAvHViHIMVTw4pyb52udCFQnJOTQSoQEQJb67JSA9gWczLK0lsk1K8ikwyB3
oQBUxAF0jaxVctRVb0cgONy23r9XlWl6aWYhjSQ9JqoRjk1qq7fn4JttvzjH8pbM
xuXPhQzs7T2QXn50/S831aROvAfHwicF0sDUVczL9z452QphiPKjjVCJInm/3Lyo
JEYhu0zOXNX/xTKLynhha2gvJQ6tZZEWU91PYoioMcAO6Arxw/aYZUyb0jGCruY1
87UNg6iC6DR3hbqC2gFKd3ocUo3hEHRHKiaf/AcWjv65g+r37WNT3kk0qhCElHOS
293AgA3eCv388msUr7AE/33rdDg4pqiJs5t78sMBYEnei4aKhI/ThruJrd68Jatf
gOM8rVXUXatX/gJJBl6VmyktQV5mw9pep65YdYisrjK5EASoel8BD0fweq3gie7J
ITFSexNPR7AtWPhSZGwqoLNtC5mAp3slAzGLEjavwN3JmA2bT8RCS7omCDVshRut
EmuF768prQAkZses77E02JsrDH5ZeSZaQPuFIpSPsALtERncdVxUGq15BeYJNUZ+
tfJNINQcShJS6Sipegq+pKYkpBYy/rLFU3ZxkFAlSbN1iDkcuWQ=
=rjUA
-----END PGP SIGNATURE-----

--------------a0w7qVAi99JZJ9x0F80MJVVp--


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:14:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:14:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522030.811144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNkS-0000QV-0q; Mon, 17 Apr 2023 12:14:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522030.811144; Mon, 17 Apr 2023 12:14:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNkR-0000QO-UN; Mon, 17 Apr 2023 12:14:19 +0000
Received: by outflank-mailman (input) for mailman id 522030;
 Mon, 17 Apr 2023 12:14:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poNkR-0000QE-DV
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:14:19 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5f52fce9-dd19-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 14:14:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f52fce9-dd19-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681733657;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=NKlnIOh7Hws4lz5whTM0B5pgpEvxTYKB3zR4tROjvGY=;
  b=S7AV7DBGTRAYrI0help1kb93u/THwaYj+CR9T3dH6sTPk8DrsA3yTKa0
   Ctr2Wzfvbj8ES6dKiesaYYF2T9FZpEPTd8dkhPVk8IrwVWeTW4L20uQs/
   pnreHh9oU65W2xqBMIyMQnnG5iEYGhtD79aJLA0OFWXNNtFcwYIPw8k1b
   g=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108238671
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:0oHowa5P7W/Dt0pfnIT+VwxRtM3HchMFZxGqfqrLsTDasY5as4F+v
 mAbX2mDMq7YYTDzf9gjaNnjoRxSvMPQnNQ1SlBvqXszHi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+7ZwehBtC5gZlPawS4weH/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m/
 sYlKBYISk26m7iy/4qGR7I8gvgsBZy+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrlD5fydVtxS+oq0v7nKI5AdwzKLsIJzefdniqcB9xx7I/
 DmWoTugav0cHNXG8DjY6i6yvf3sgz7yfKQPGpG92uE/1TV/wURMUUZLBDNXu8KRmkO4Ht5SN
 UEQ0i4vtrQpslymSMHnWB+1q2LCuQQTM/JTFOsg4Q3L1avQ4C6eHGEPSjMHY9sj3OctXiAj3
 FKNm9LvBBRsvaeTRHbb8a2bxRuMPiwSIX4HdDUzZwIP6Nn+o6k+lhvKCN1kFcadhNDvBSv5x
 TzMqSEknqgSluYCzaD99lfC6xqOjJXUSg8+5i3MQ3moqAh+YeaNZZGs6FXdxeZNKsCeVFbpl
 GcAs9iT6qYJF57lqcCWaLxTRvfzva/DaWCCxwc1RPHN6ghB5VaoR71QxjRaeX51aMI7QQLkO
 x/XhQx4sco70GSRUUNnX26gI510nfG8ToW4B6y8gslmOcYoKlLelM16TQvJhj22zhBx+U0qE
 c3DGftAG0r2HkiOINCeY+4GmYEmySklrY84bcCqlk/3uVZyiZP8dFvkDLdtRrpjhE98iF+Jm
 +uzzuPTo/mlbMXwYzPM7akYJk0QIH4wCPje8pIHL7DcelI7RT55V5c9JI/NnKQ8xsxoehrgp
 CnhCie0NnKk7ZE4Fel6Qi86M+6+NXqOhXk6ITYtLT6V5pTXWq72tP13X8JuLdEaGBlLkaYco
 w8tJ5/RXZyii13vp1wgUHUKhNY6LE703lreYXPNjfpWV8cIejElM+TMJmPHnBTixALs3Sfii
 9VMDj/mfKc=
IronPort-HdrOrdr: A9a23:EP24ZKrQJ/zpNSOHys0CeIQaV5rveYIsimQD101hICG9Evb0qy
 nOpoV/6faQslwssR4b9uxoVJPvfZq+z+8W3WByB9eftWDd0QPFEGgL1+DfKlbbak7DH4BmtJ
 uJc8JFeafN5VoRt7eG3OFveexQvOVu88qT9JjjJ28Gd3APV0n5hT0JcjpyFCdNNW57LKt8Lr
 WwzOxdqQGtfHwGB/7LfUXsD4D41rv2fIuNW29+OyIa
X-Talos-CUID: 9a23:5Ltsfm1wQA3pVO51c1LEabxfEfoYclvY6CrpPxW8Nl5URebFTWOf0fYx
X-Talos-MUID: =?us-ascii?q?9a23=3A674nSAyqSh0xzClZ+/i9VZN2D4GaqK33MUJQick?=
 =?us-ascii?q?YgNunKzF5HxGAghiybpByfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,204,1677560400"; 
   d="scan'208";a="108238671"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Ross Lagerwall <ross.lagerwall@citrix.com>,
	"Stefano Stabellini" <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	"Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 0/3] xen/livepatch: Fix .altinstructions safety checks
Date: Mon, 17 Apr 2023 13:13:54 +0100
Message-ID: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This replaces the previous singleton patch, with several build fixes found by
GitLab CI.  I also included some feedback from Jan on patch 3.

Andrew Cooper (3):
  xen/ELF: Fix ELF32 PRI formatters
  arm/alternatives: Rename alt_instr fields which are used in common code
  xen/livepatch: Fix .altinstructions safety checks

 xen/arch/arm/alternative.c             |  6 +--
 xen/arch/arm/include/asm/alternative.h | 12 ++---
 xen/common/livepatch.c                 | 68 +++++++++++++++++++++++---
 xen/common/livepatch_elf.c             |  6 +--
 xen/include/xen/elfstructs.h           |  6 ++-
 5 files changed, 78 insertions(+), 20 deletions(-)

-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:14:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:14:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522031.811155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNkT-0000fh-7t; Mon, 17 Apr 2023 12:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522031.811155; Mon, 17 Apr 2023 12:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNkT-0000fY-5D; Mon, 17 Apr 2023 12:14:21 +0000
Received: by outflank-mailman (input) for mailman id 522031;
 Mon, 17 Apr 2023 12:14:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poNkS-0000QE-6g
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:14:20 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 610ac168-dd19-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 14:14:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 610ac168-dd19-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681733659;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=8IBGG1wJgH6bC2c5Y+Xq+OFacgbZ53MLTBgxfZw6AyY=;
  b=AjLGgLt7LmSkbnbsurRrkXmWCpUNbAEJb0SIg3SDn2eyBoeLyHzlRldd
   csAc/cO9kceba/ZDLeGO1HyV/leyjGWZDyWIQLlpddkNDCu0d/ykg89JZ
   Bkha2jykulQeIoz3SZeDlTk33N8HE4vNE6tevZU0wJ7tDFMd6vJY+acHX
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108238672
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:Tu8qLaDQ8b91dBVW/9zjw5YqxClBgxIJ4kV8jS/XYbTApDsigzUBm
 2UdXm7TOamIN2LwKdBzPdmw8k9V78fXm4JhQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G9B4QRnDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw8dxKAUdL0
 aEhNGoJUSi+irnmxI60Y7w57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTILs4kP2lmT/UdDpApUjOjaE2/3LS3Ep6172F3N/9I4TaH54FwBnCz
 o7A137cAhgQPdqU8DGMzVSXrfflnn37WatHQdVU8dY12QbOlwT/EiY+RVa95PW0lEO6c9ZeM
 FAPvDojq7Ao806mRcW7WAe3yFafpQIVUddUF+w86SmOx7DS7gLfAXILJhZ/b9ghuN4zVCYd/
 FaDlNP0BhRiqLSQD3ma89+8pz6oJTIcK2NEYCYeVBYE+PHquoR1hRXKJv5gF6ivh9GzBjD0w
 BiNtiE1g7hVhskOv42Z113ahzOnprDSUxU4oA7QWwqN7B59ZYOjT5yl7x7c9/koBJmdZkmMu
 j4Dgcf20QwVJcjTzmrXGrxLRez3oa/fa1UwnGKDAbEqzQmt3XuHILlMujVBHHpkaZYARTDAN
 Rq7VRxq2HNDAJe7RfYpM9vtUJV3nPSI+cfNDa6NMIcXCnRlXErepXw1OxbNt4z4uBJ0+ZzTL
 6t3ZipF4ZwyLa18hAS7SO4GuVPA7nBvnDiDLXwXIvnO7FZ/WJJ2Ye1fWLd2RrplhJ5oWS2Mm
 zqlC+OEyg9ETMr1aTTN/IgYIDgidCZrXM6p85QKK7HbfmKK/V3N7NeImNscl3FNxfwJxo8kA
 FnmMqOn9LYPrSKecljbApySQLjuQYx+vRoGAMDYBn7xgyJLSd/2vM8im24fIeFPGBpLkaQlE
 JHouqyoXpxyd9gw025FNcOi99QyKk3DaMDnF3PNXQXTtqVIH2ThkuIItCOznMXSJkJbbfcDn
 oA=
IronPort-HdrOrdr: A9a23:ocIKU6ueutdXp8i6MNBC7iyY7skDstV00zEX/kB9WHVpm6yj+v
 xG/c5rsCMc7Qx6ZJhOo7+90cW7L080lqQFg7X5X43DYOCOggLBQL2KhbGI/9SKIVycygcy78
 Zdm6gVMqyLMbB55/yKnTVRxbwbsaW6GKPDv5ag8590JzsaD52Jd21Ce36m+ksdfnggObMJUK
 Cyy+BgvDSadXEefq2AdwI4t7iqnaysqHr+CyR2fiIa1A==
X-Talos-CUID: 9a23:UkT1qmxcmbw+V2ZRQ/pEBgUfIP0lVHj5kU7MeUuHMVtkE+2RdVC5rfY=
X-Talos-MUID: 9a23:Sik0uAoJ1dOjWQ3NcDEezytHNM5i6qKlMUsErrM9kNGuNnJwOh7I2Q==
X-IronPort-AV: E=Sophos;i="5.99,204,1677560400"; 
   d="scan'208";a="108238672"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ross Lagerwall
	<ross.lagerwall@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 2/3] arm/alternatives: Rename alt_instr fields which are used in common code
Date: Mon, 17 Apr 2023 13:13:56 +0100
Message-ID: <20230417121357.3738919-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
References: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Alternatives auditing for livepatches is currently broken.  To fix it, the
livepatch code needs to inspect more fields of alt_instr.

Rename ARM's fields to match x86's, because:

 * ARM already exposes alt_offset under the repl name via ALT_REPL_PTR()
 * "alt" is somewhat ambiguous in a structure entirely about alternatives
   already.
 * "repl", being the same number of characters as "orig", leads to slightly
   neater code.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Ross Lagerwall <ross.lagerwall@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

The other option is to make alt_instr an entirely common structure, but it's
already different between ARM and x86 and I'm not sure doing so would result
in nicer code.
---
 xen/arch/arm/alternative.c             |  6 +++---
 xen/arch/arm/include/asm/alternative.h | 12 ++++++------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
index f00e3b9b3c11..7366af4ea646 100644
--- a/xen/arch/arm/alternative.c
+++ b/xen/arch/arm/alternative.c
@@ -44,7 +44,7 @@ static bool branch_insn_requires_update(const struct alt_instr *alt,
         return true;
 
     replptr = (unsigned long)ALT_REPL_PTR(alt);
-    if ( pc >= replptr && pc <= (replptr + alt->alt_len) )
+    if ( pc >= replptr && pc <= (replptr + alt->repl_len) )
         return false;
 
     /*
@@ -128,9 +128,9 @@ static int __apply_alternatives(const struct alt_region *region,
             continue;
 
         if ( alt->cpufeature == ARM_CB_PATCH )
-            BUG_ON(alt->alt_len != 0);
+            BUG_ON(alt->repl_len != 0);
         else
-            BUG_ON(alt->alt_len != alt->orig_len);
+            BUG_ON(alt->repl_len != alt->orig_len);
 
         origptr = ALT_ORIG_PTR(alt);
         updptr = (void *)origptr + update_offset;
diff --git a/xen/arch/arm/include/asm/alternative.h b/xen/arch/arm/include/asm/alternative.h
index 1eb4b60fbb3e..d3210e82f9e5 100644
--- a/xen/arch/arm/include/asm/alternative.h
+++ b/xen/arch/arm/include/asm/alternative.h
@@ -13,16 +13,16 @@
 
 struct alt_instr {
 	s32 orig_offset;	/* offset to original instruction */
-	s32 alt_offset;		/* offset to replacement instruction */
+	s32 repl_offset;	/* offset to replacement instruction */
 	u16 cpufeature;		/* cpufeature bit set for replacement */
 	u8  orig_len;		/* size of original instruction(s) */
-	u8  alt_len;		/* size of new instruction(s), <= orig_len */
+	u8  repl_len;		/* size of new instruction(s), <= orig_len */
 };
 
 /* Xen: helpers used by common code. */
 #define __ALT_PTR(a,f)		((void *)&(a)->f + (a)->f)
 #define ALT_ORIG_PTR(a)		__ALT_PTR(a, orig_offset)
-#define ALT_REPL_PTR(a)		__ALT_PTR(a, alt_offset)
+#define ALT_REPL_PTR(a)		__ALT_PTR(a, repl_offset)
 
 typedef void (*alternative_cb_t)(const struct alt_instr *alt,
 				 const uint32_t *origptr, uint32_t *updptr,
@@ -90,12 +90,12 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en
 #include <asm/asm_defns.h>
 #include <asm/macros.h>
 
-.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
+.macro altinstruction_entry orig_offset repl_offset feature orig_len repl_len
 	.word \orig_offset - .
-	.word \alt_offset - .
+	.word \repl_offset - .
 	.hword \feature
 	.byte \orig_len
-	.byte \alt_len
+	.byte \repl_len
 .endm
 
 .macro alternative_insn insn1, insn2, cap, enable = 1
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:14:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:14:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522032.811164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNkY-0000yp-HI; Mon, 17 Apr 2023 12:14:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522032.811164; Mon, 17 Apr 2023 12:14:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNkY-0000yi-Dn; Mon, 17 Apr 2023 12:14:26 +0000
Received: by outflank-mailman (input) for mailman id 522032;
 Mon, 17 Apr 2023 12:14:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poNkX-0000xC-0t
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:14:25 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 61fdfbb7-dd19-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 14:14:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61fdfbb7-dd19-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681733662;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=WOSwA4Gw3YJcS+3VeWA2TjaLAqOkde+i62MNsG9fL/4=;
  b=R56YTCNKt/8X7mO8MwYZL5CSyLl56X8pPaY7Txx1ycW/atagL8MBX+xd
   3O36ulpXs3ZD7V13JBdNBwrlWTS8xZKJ3Tfnjdsnkgd9p90jvIuO0d6s2
   IDXyAHLF13YKcYytSXLtQ+qVdZDBgAkNC1c1fX2fahCZvHuhyOV2K9a30
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 105144451
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:IeHEXaxETLSf45916HR6t+fDxirEfRIJ4+MujC+fZmUNrF6WrkUBn
 GQdUG6CP/jYMWGgcopxaY+18ElUuZ+Ey95lSlBo+SAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UIHUMja4mtC5QRiPK8T5TcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KVMJ1
 fkSBRFTV1PZrLrszYOJe7lQoe12eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+BgHXlfiIeg1WSvactuEDYzRBr0airO93QEjCPbZwNwx/E+
 j6bpgwVBDkXOYe6yTm760ugqdXRkQjlVpgLT4eno6sCbFq7mTVIVUx+uUGAiem0jAuyVsxSL
 2QQ+zEytu4i+UqzVN7/Uhak5nmesXY0WN1WCeQ2rh6Mzqn85ByQDWwJCDVGbbQOr9QqTDYn0
 luImdLBBjF1trCRD3WH+d+8kzS2PiQEKH4YUgUNRwAF/trLrZk6i1TESdMLOKS4lMHvEDf8h
 TWDtjEjhq47hNQOka68+DjvoRihu5zIRQ4d/RjMUySu6QYRTIy4Y42l73DL4PAGK5yWJnGeu
 FAUls7Y6/oBZaxhjwTUHr9LRuvwoa/YbnuF2wUH84QdGyqFyTmDeIp9wW9HZx1CapwJSA3YM
 HP0kFYEjHNMB0dGfZObcqroVZRzkfOxSIW5PhzHRoEQO8YsLWdr6AkrPBfNhD61zSDAhIllY
 f+mndCQ4WH24EiN5B6/XK8j3LAi3UjSLkuDFMmgn3xLPVdzDUN5qIvp03PUNIjVFIve/G3oH
 y93bqNmMSl3XuzkeTXw+oUON10MJnVTLcmo+5UJKbbbf1c5QD1J5xrtLVQJItUNokiovr2Qo
 iHVtrFwkzITekEr2S3VMys+OdsDrL50rG4hPDxEAGtEL0MLON71hI9GLstfQFXS3LA7pRKCZ
 6VfKpro7zUmYmivxgnxmrGn9NQ4K0/z1VzXV8dnCRBmF6Ndq8Xy0oeMVmPSGOMmVHPfWRcWy
 1F46j7mfA==
IronPort-HdrOrdr: A9a23:lknfQKPhoAAYE8BcTgWjsMiBIKoaSvp037BK7S1MoH1uA6mlfq
 WV9sjzuiWatN98Yh8dcLO7Scu9qBHnlaKdiLN5VduftWHd01dAR7sSjrcKrQeAJ8X/nNQtr5
 uJccJFeaDN5Y4Rt7eH3OG6eexQv+Vu6MqT9IPjJ+8Gd3ATV0lnhT0JbTqzIwlNayRtI4E2L5
 aY7tovnUvaRZxGBv7LYEXsRoL41qT2qK4=
X-Talos-CUID: 9a23:hHYlyWP/5TCIwu5Dam5H8UQ5O54cMSf2lC3dEn2JWVpOYejA
X-Talos-MUID: 9a23:pQoADQZkwAsF0+BTujK3oTFhCt5R74uUFkwmj5scopK+Knkl
X-IronPort-AV: E=Sophos;i="5.99,204,1677560400"; 
   d="scan'208";a="105144451"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Ross Lagerwall <ross.lagerwall@citrix.com>,
	"Stefano Stabellini" <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	"Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 1/3] xen/ELF: Fix ELF32 PRI formatters
Date: Mon, 17 Apr 2023 13:13:55 +0100
Message-ID: <20230417121357.3738919-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
References: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

It is rude to hide width formatting inside a PRI* macro, doubly so when it's
only in one bitness of the macro.

However, it's fully buggy when all the users use %#"PRI, because that then
expands to the common trap of %#08x, which does not do what the author intends.

Switch the ELF PRI formatters to use plain integer PRIs, and move the width
formatting to those callers where it matters.

No practical change.

Fixes: 7597fabca76e ("livepatch: Include sizes when an mismatch occurs")
Fixes: 380b229634f8 ("xsplice: Implement payload loading")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Ross Lagerwall <ross.lagerwall@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/common/livepatch_elf.c   | 6 +++---
 xen/include/xen/elfstructs.h | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/common/livepatch_elf.c b/xen/common/livepatch_elf.c
index 45d73912a3cd..d37a7af84be6 100644
--- a/xen/common/livepatch_elf.c
+++ b/xen/common/livepatch_elf.c
@@ -310,12 +310,12 @@ int livepatch_elf_resolve_symbols(struct livepatch_elf *elf)
                     break;
                 }
             }
-            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Undefined symbol resolved: %s => %#"PRIxElfAddr"\n",
+            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Undefined symbol resolved: %s => 0x%08"PRIxElfAddr"\n",
                     elf->name, elf->sym[i].name, st_value);
             break;
 
         case SHN_ABS:
-            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Absolute symbol: %s => %#"PRIxElfAddr"\n",
+            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Absolute symbol: %s => 0x%08"PRIxElfAddr"\n",
                     elf->name, elf->sym[i].name, sym->st_value);
             break;
 
@@ -344,7 +344,7 @@ int livepatch_elf_resolve_symbols(struct livepatch_elf *elf)
 
             st_value += (unsigned long)elf->sec[idx].load_addr;
             if ( elf->sym[i].name )
-                dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Symbol resolved: %s => %#"PRIxElfAddr" (%s)\n",
+                dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Symbol resolved: %s => 0x%08"PRIxElfAddr" (%s)\n",
                        elf->name, elf->sym[i].name,
                        st_value, elf->sec[idx].name);
         }
diff --git a/xen/include/xen/elfstructs.h b/xen/include/xen/elfstructs.h
index 06e6f87c3d80..3124469faeb4 100644
--- a/xen/include/xen/elfstructs.h
+++ b/xen/include/xen/elfstructs.h
@@ -561,8 +561,8 @@ typedef struct {
 #endif
 
 #if defined(ELFSIZE) && (ELFSIZE == 32)
-#define PRIxElfAddr	"08x"
-#define PRIuElfWord	"8u"
+#define PRIxElfAddr 	PRIx32
+#define PRIuElfWord 	PRIu32
 
 #define Elf_Ehdr	Elf32_Ehdr
 #define Elf_Phdr	Elf32_Phdr
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:14:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:14:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522033.811175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNkb-0001HN-0F; Mon, 17 Apr 2023 12:14:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522033.811175; Mon, 17 Apr 2023 12:14:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNka-0001HE-TZ; Mon, 17 Apr 2023 12:14:28 +0000
Received: by outflank-mailman (input) for mailman id 522033;
 Mon, 17 Apr 2023 12:14:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poNkY-0000xC-Vw
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:14:26 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6479c75b-dd19-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 14:14:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6479c75b-dd19-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681733664;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=Ew+rB9d1GFmavjWRsz8+3kGUcmwxQ2u2fM4aYYZINb4=;
  b=e/ow2rp1E2xrM7btd1qpiSEkma/y2KWlQ64rBQJRF4SoeAY681cceWwZ
   z6QNGkuXfcjuNYamDiME3LQpYSydCmhNuNCzga+XqZxzn5CD8bnFNM260
   8vmSsADNpDn6E+R9adlyUZz56L6SDgl6wymQqZp1wYpNzm055MrCFMpvJ
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 105144452
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:nPoUKaJ8dejJkM8MFE+RoJUlxSXFcZb7ZxGr2PjKsXjdYENSgTFWm
 GpOWGnSOvyLM2f8LYx1bY7l/RhU7ZWDz9dqQQVlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPSwP9TlK6q4mhA4gVhPakjUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5FRkFL/
 9E5dglONDvb3e+H8oPrds5F05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWteGknHTgNRZfr0qYv/Ef6GnP1g1hlrPqNbI5f/TTHZgMwhrE+
 ziuE2LRITIcMfiT4xW80XOW3M3UunuiW8UiPejtnhJtqALKnTFCYPEMbnOrrP/8hkOgVtZ3L
 00P5jFovaU07FasTNT2Q1u/unHslh0bXcBZH6sl6QWO4q3O6g2dCy4PSTspVTA9nJZoH3pwj
 AbPxo63Q2U169V5VE5x6J+m6hO3MwU0c1ZBPwRcFwY00eiznKYa20enoslYLEKlsjHkMWiuk
 2nW93lj1ul7Yd0jjPviow2e6964jt2QF1NuuF2KNo6wxlkhDLNJcbBE/rQyARxoCI+CBmeMs
 3Ef8yR1xLBfVMrd/MBhrQhkIV1I2xpmGGeG6bKXN8N9nwlBAlb6FWyq3BlwJV1yLuEPciLzb
 UnYtGt5vcEDZSX1NfcqPt3pV6zGKJQM8vy8D5jpgidmOMAtJGdrAgk1DaJv44wduBd1yvxuU
 XtqWc2tEWwbGcxa8dZCfM9EieVD7nlnlQvuqWXTk0zPPUy2OCTEFt/o8TKmMogE0U9ziF+Nq
 4wAbJPalUw3vS+XSnC/zLP/5GsidRATba0aYeQOJoZv/iIO9LkdNsLs
IronPort-HdrOrdr: A9a23:iUwiKaEgP1UZ1KT/pLqELMeALOsnbusQ8zAXPiBKJCC9E/bo8v
 xG+c5w6faaslkssR0b9+xoW5PwI080l6QU3WB5B97LMDUO0FHCEGgI1/qA/9SPIUzDHu4279
 YbT0B9YueAcGSTW6zBkXWF+9VL+qj5zEix792uq0uE1WtRGtldBwESMHf9LmRGADNoKLAeD5
 Sm6s9Ot1ObCA8qhpTSPAhiYwDbzee77a7bXQ==
X-Talos-CUID: =?us-ascii?q?9a23=3AOe38jWhWonu2y6/LPwG1wDm0pzJuQDr9yX39AhG?=
 =?us-ascii?q?BSjxCEOeYeG2hqJ9Dup87?=
X-Talos-MUID: 9a23:qp6IBwkdYm03/lQX3dkJdnpBF8Ftx6iEJ3kukK9cmdKjbXVzfAe02WE=
X-IronPort-AV: E=Sophos;i="5.99,204,1677560400"; 
   d="scan'208";a="105144452"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: [PATCH v2 3/3] xen/livepatch: Fix .altinstructions safety checks
Date: Mon, 17 Apr 2023 13:13:57 +0100
Message-ID: <20230417121357.3738919-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
References: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The prior check has && vs || mixups, making it tautologically false and thus
providing no safety at all.  There are boundary errors too.

First start with a comment describing how the .altinstructions and
.altinstr_replacement sections interact, and perform suitable cross-checking.

Second, rewrite the alt_instr loop entirely from scratch.  Origin sites have
non-zero size, and must be fully contained within the livepatch's .text
section(s).  Any non-zero sized replacements must be fully contained within
the .altinstr_replacement section.

Fixes: f8a10174e8b1 ("xsplice: Add support for alternatives")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Ross Lagerwall <ross.lagerwall@citrix.com>

v2:
 * Rebase over prior patches to keep the ARM build working
 * Tweak commit message and comments for clarity

As a further observation, .altinstr_replacement shouldn't survive beyond its
use in apply_alternatives(), but the disp32 relative references (for x86 at
least) in alt_instr force .altinstr_replacement to be close to the payload
while being applied.
---
 xen/common/livepatch.c       | 68 ++++++++++++++++++++++++++++++++----
 xen/include/xen/elfstructs.h |  2 ++
 2 files changed, 64 insertions(+), 6 deletions(-)

diff --git a/xen/common/livepatch.c b/xen/common/livepatch.c
index c10ab1f374e0..004b5a436569 100644
--- a/xen/common/livepatch.c
+++ b/xen/common/livepatch.c
@@ -803,28 +803,84 @@ static int prepare_payload(struct payload *payload,
     if ( sec )
     {
 #ifdef CONFIG_HAS_ALTERNATIVE
+        /*
+         * (As of April 2023), Alternatives are formed of:
+         * - An .altinstructions section with an array of struct alt_instr's.
+         * - An .altinstr_replacement section containing instructions.
+         *
+         * An individual alt_instr contains:
+         * - An orig reference, pointing into .text with a nonzero length
+         * - A repl reference, pointing into .altinstr_replacement
+         *
+         * It is legal to have zero-length replacements, meaning it is legal
+         * for the .altinstr_replacement section to be empty too.  An
+         * implementation detail means that a zero-length replacement's repl
+         * reference will still be in the .altinstr_replacement section.
+         */
+        const struct livepatch_elf_sec *repl_sec;
         struct alt_instr *a, *start, *end;
 
         if ( !section_ok(elf, sec, sizeof(*a)) )
             return -EINVAL;
 
+        /* Tolerate an empty .altinstructions section... */
+        if ( sec->sec->sh_size == 0 )
+            goto alt_done;
+
+        /* ... but otherwise, there needs to be something to alter... */
+        if ( payload->text_size == 0 )
+        {
+            printk(XENLOG_ERR LIVEPATCH "%s Alternatives provided, but no .text\n",
+                   elf->name);
+            return -EINVAL;
+        }
+
+        /* ... and something to be altered to. */
+        repl_sec = livepatch_elf_sec_by_name(elf, ".altinstr_replacement");
+        if ( !repl_sec )
+        {
+            printk(XENLOG_ERR LIVEPATCH "%s .altinstructions provided, but no .altinstr_replacement\n",
+                   elf->name);
+            return -EINVAL;
+        }
+
         start = sec->load_addr;
         end = sec->load_addr + sec->sec->sh_size;
 
         for ( a = start; a < end; a++ )
         {
-            const void *instr = ALT_ORIG_PTR(a);
-            const void *replacement = ALT_REPL_PTR(a);
+            const void *orig = ALT_ORIG_PTR(a);
+            const void *repl = ALT_REPL_PTR(a);
+
+            /* orig must be fully within .text. */
+            if ( orig               < payload->text_addr ||
+                 a->orig_len        > payload->text_size ||
+                 orig + a->orig_len > payload->text_addr + payload->text_size )
+            {
+                printk(XENLOG_ERR LIVEPATCH
+                       "%s Alternative orig %p+%#x outside payload text %p+%#zx\n",
+                       elf->name, orig, a->orig_len,
+                       payload->text_addr, payload->text_size);
+                return -EINVAL;
+            }
 
-            if ( (instr < region->start && instr >= region->end) ||
-                 (replacement < region->start && replacement >= region->end) )
+            /*
+             * repl must be fully within .altinstr_replacement, even if the
+             * replacement and the section happen to both have zero length.
+             */
+            if ( repl               < repl_sec->load_addr ||
+                 a->repl_len        > repl_sec->sec->sh_size ||
+                 repl + a->repl_len > repl_sec->load_addr + repl_sec->sec->sh_size )
             {
-                printk(XENLOG_ERR LIVEPATCH "%s Alt patching outside payload: %p\n",
-                       elf->name, instr);
+                printk(XENLOG_ERR LIVEPATCH
+                       "%s Alternative repl %p+%#x outside .altinstr_replacement %p+%#"PRIxElfWord"\n",
+                       elf->name, repl, a->repl_len,
+                       repl_sec->load_addr, repl_sec->sec->sh_size);
                 return -EINVAL;
             }
         }
         apply_alternatives(start, end);
+    alt_done:;
 #else
         printk(XENLOG_ERR LIVEPATCH "%s: We don't support alternative patching\n",
                elf->name);
diff --git a/xen/include/xen/elfstructs.h b/xen/include/xen/elfstructs.h
index 3124469faeb4..eb6b87a823a8 100644
--- a/xen/include/xen/elfstructs.h
+++ b/xen/include/xen/elfstructs.h
@@ -563,6 +563,7 @@ typedef struct {
 #if defined(ELFSIZE) && (ELFSIZE == 32)
 #define PRIxElfAddr 	PRIx32
 #define PRIuElfWord 	PRIu32
+#define PRIxElfWord 	PRIx32
 
 #define Elf_Ehdr	Elf32_Ehdr
 #define Elf_Phdr	Elf32_Phdr
@@ -591,6 +592,7 @@ typedef struct {
 #elif defined(ELFSIZE) && (ELFSIZE == 64)
 #define PRIxElfAddr	PRIx64
 #define PRIuElfWord	PRIu64
+#define PRIxElfWord	PRIx64
 
 #define Elf_Ehdr	Elf64_Ehdr
 #define Elf_Phdr	Elf64_Phdr
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:23:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:23:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522047.811184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNt4-0003dI-SY; Mon, 17 Apr 2023 12:23:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522047.811184; Mon, 17 Apr 2023 12:23:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNt4-0003dB-Ps; Mon, 17 Apr 2023 12:23:14 +0000
Received: by outflank-mailman (input) for mailman id 522047;
 Mon, 17 Apr 2023 12:23:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poNt4-0003d2-2B
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:23:14 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20629.outbound.protection.outlook.com
 [2a01:111:f400:7d00::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9eff3954-dd1a-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 14:23:12 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9203.eurprd04.prod.outlook.com (2603:10a6:102:222::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 12:23:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 12:23:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9eff3954-dd1a-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GvpLhi4pyV8hQ4j5kz7PtvVAmo7RpMOfkTwbM0gl1ZN8N/Sx8Q705nuQdnQ9H+QBuW7BYFu3xWt+U2yUWjQFZ5PmzQ+xI5nfU1dfq3KhlJT+/WwjtR3wpkKFYYckJG3FJrG8IZqEf+Qkjs2lb3cGhkQIRWu6oobFgIoUD6BAjKfSPLT9L+6t80tJ2YXyFcqsFMreVaRGhYywKgZmJypyvCrtArHXLiFwe4OqsRNHehlkinAjLpF+lr2AewEG/uDdhDQed/7S+cy92ueBQdFkQ2Edv5ALTztcn6rra5sfTHXXbttGmeFQG6anhPdkIth7pYiSdNHC9DCUTG15E6tShQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/MDRsRSp2EhX9RYEDn8zJnZs0T2Su6y3I5rhe+9sQRg=;
 b=hul+Q6xRx4IlInfqQZ1AJatsh/vESFLDPYPOp3wlWA4sGhVzH0ynf7wNa+idlMXpqtSDPF+ff6f/mjSkYf6zeVR7mDY4dmuA1n1YyaE9rIICTRtMWwSc6i0sr5XlpuavLuUQp4fCpdiBsgu7nL4ja1mp0ZEYZYHfDMJ6R1l4ISoO3d0zcYMW8CrBeWVW9s+yEliRSOJiey2rLTVZkCg9qJmnAbbargHblvRVGvUw7gt8QT3H+lkJMnHf169LRaapkZAFSgWAIhNdyKr/3woIE7Hl5cDIHh0hfoXsB7mkx2YTdPrXsaDb6Ydd7Y0WokPJKfGN44K2bGs4HV46+/Dcrw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/MDRsRSp2EhX9RYEDn8zJnZs0T2Su6y3I5rhe+9sQRg=;
 b=UxqK6wqMmx9aFSJPUddGqXjHMmw5CZtM53EJbXpo1bOCrijAeRaEN+TfvAIcPSYSIG0Dn2rkEv1ehmTKu7jnsbj0yjRS7AHrGA8ZPr/KK08fALcFyIiv6hWabHLJAfPLXN//HFfPbKJSkgkr6zQxMw+NlmqzFYXQJKXMiTJgiuh2KlrMlccxt5hnPp23GaR4xRmtdjSbqu6geBd18GD9Yp8XqqE7TkB/TMDfZ6eo3lIRXs9/PSmwE+sr7FcnLLH+a/FAuYBDr2kPOTaydg0P/TD/oHyBt8CquMFtGxcpV+LgsN0m/PeEZZiu5pJy1qnjDqAoyovGMRACAqe9xG+cjQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a056a258-51c0-2721-1ef2-1e7796c85659@suse.com>
Date: Mon, 17 Apr 2023 14:23:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86emul: avoid triggering event related assertions
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0058.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9203:EE_
X-MS-Office365-Filtering-Correlation-Id: f8b57309-4107-4807-245f-08db3f3e81b8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	d+N1tjEJNsAA+rCmzz4WN4v/WwwGo7Q7OLqw+gi8F4drXfDWv4LPvs7ZcYvW9gDuuRglgMjn8huSBknOAd/tzkGbKBRCI8zog0ao8bL6ePrAxliE8iQt59/nXYXm5ATJOD1//9YM3/UUPwczwPfklq7X2DBD9MX1A9O9xx3HTDFMSq57fGNf8DllAbobU7NX9oDcxD9VNTElMiuepotizdXDoIujZ3ZFM4Fab+k/zEQehY1RGrIJskqnqjWOYwuzv/CcTxV1btVxF7A/ev2cbEbRUybLpRue0kcHs2QmDI9XIEDrTD4NqCHug4efe93VT3SMZiCYDLwLpo33okh4WSbdbag6J9ynfDXlq5hQOWgRi/1DBSmW35Rtd/2ay/hSq4oh6WDlaBbSwFyIODs8SICS3ifevM1WHUWkvTXgaRg6OZwoF9zEKUbvUbcccmGcHUX2KDfdhbyUhOSCOkDjq/d4KoYX97OxQYZzcSY+RTa9pl1RGuKdEwEp50zyfzs3SDqjJwmtuH2AUVCimXi3ulSHJZO9uY+kIrpWWRXQd+aiN+X0Oi7fCpuGTeCZrm+yCqnotDZMA+pbGUeDwcU6HsRPOrdr2l1G9BMm9rGxKk7VVwXNqIg/xhHKTq9KFjyhcJMfWqU2q6v2L3zlscFwwA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(136003)(39860400002)(396003)(366004)(376002)(451199021)(38100700002)(8676002)(8936002)(5660300002)(2906002)(36756003)(86362001)(31696002)(478600001)(6486002)(54906003)(31686004)(186003)(2616005)(26005)(6506007)(6512007)(66946007)(66476007)(83380400001)(316002)(41300700001)(4326008)(6916009)(66556008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SUZtNmZLdVI4dUhkbVV5MWZ3Ty92SUVCMmNHNmRaRXpzNGRPNzFpM25jbDVv?=
 =?utf-8?B?d1FmbC9NbndhOGliQ1ZvNXIvSmNDaXYzSHp6cjM5NHA4akpIWmdRcGRXRi9w?=
 =?utf-8?B?VVV5M00rWmZ3S3dPcjJhY0txcHFuUE1YREE5a2l2UmZCNUpkTC81Mlpwc3Zw?=
 =?utf-8?B?MTkzZThIQVBWcnZXWkhQbDVSNUFialplUkZRUGVOdkc1cmd6clNicmlVanlo?=
 =?utf-8?B?RmZWSnRQQ2dod0xJckw4Z1lKMEdFdzIweWpOOTFDcEJSVm0yRmorVHJzbDJr?=
 =?utf-8?B?MTJ4OVhYeHJWSDErSWpyeVFHVnczbVVsUUFLZEQxVGdsTkxzSGIwRzcvWnFS?=
 =?utf-8?B?MEhaMzdTZ1ExSEMwWWtRUzhwNnlXUFk4L2dvNHQ4d21tRTVLY3pNbk14R2h3?=
 =?utf-8?B?eFBRUFB0Z0tkdW1nNksxRi9wc3Rhb29Yc2JLZGU4TzcvSHU4QlNFYUFrOHBF?=
 =?utf-8?B?UXQxM0E4ZDR2U1VjRVFkd0Z5VzAwaFVQdlVleGxEOEcwOXoya2dQeDdtNXVB?=
 =?utf-8?B?K1luWWkzVFBVTE13NmtVNkJMSm16bVFublQ1SldQTi9lRU0xL3BxYUlQQ0Z2?=
 =?utf-8?B?M3RLUFBqakppTXNpa1g1NGI0S21hRCs2WWM3WGZzS04ybXJ4a3FuWjI1NXQ5?=
 =?utf-8?B?NXFpNW1maG1NckxPdUtpOUcybGpzSHlTc1dwWjZ5ZjBDUURDN1o1SWtSUGRQ?=
 =?utf-8?B?UzV2aGh4UnJlRi82MkRrbllxY0pXblREeW1KMzRTSUcvZVExZldHakJtN0VS?=
 =?utf-8?B?VXBveHB2QUtSRGtmb29rSC9wV1NqczFlNEJVTUhyd2t5ejY2VWxySDFocGlS?=
 =?utf-8?B?YklmKzNLaXFoSWJxbXdCNXdQaUJFMnRydmhVNnhMMjdLaEx3QmZpM1FVemFK?=
 =?utf-8?B?b2VIMkFFVFlzbDNJWFpBd1dYaTJtRWh2elluaDJoUm1oelN6bmFIUUE2KzhP?=
 =?utf-8?B?U3BiNUpEalBsOHBhQ1ptaHgyY2NHSnltTERsY2owZTBmem1YakxHVzU0c3Z2?=
 =?utf-8?B?Vk9Uc0JGbExpaXhsRzZpSDQ3N1F6dTR4Z1dTVVlsSUc0cVFSZlJ5VWRjTDZD?=
 =?utf-8?B?MEQwOWZvVSs1UGExOTFJNlBoR2wwdFhka2lPMSswZDN0ck10bGMvdTlnNHB5?=
 =?utf-8?B?Z2l6MDNpd3JZdWhESnZqNTFmZStEbmdZMTdZMmppLzJIWWdLaU5ta0U0UnNq?=
 =?utf-8?B?VWQ3N1VXdGU3dVF0enMyKzZKQnM2VGJqNWs4cHJVcm4za0tRbloxWU1rcFli?=
 =?utf-8?B?d0VNckg3aG5TN2R4a2hSdnAraW0yVFowampiYnFDNkZVdFRZdmlnaHIxRnA0?=
 =?utf-8?B?VmVPRHk0dGh5MHRDUTZRZ0dQL3UwZXQvQXgvOXZNTXpOclc3dytiZXNwMlNR?=
 =?utf-8?B?NFJBY25hMjJPNlpDelpCTyt5Y1NiRVhqek5RN2tWRFVLUU90NDBpUlBiSk1K?=
 =?utf-8?B?M2NaZCt2VTdHRktDcWlEcWN3RmFCckwvbzJFekNabTdCRTlTbm5yYmxNaVpS?=
 =?utf-8?B?RWtKQTNnQXRBYUo4bmhhWFpGZ1Y5Q2F0SHBRdlRjWUZZVC9aaW1Obk9ic2lC?=
 =?utf-8?B?TDVnSEZ2Yml1MzltWUpjOGFydTdmS09ncmw4ZmpTcHFCYXFXYjJXVW5QUGlV?=
 =?utf-8?B?M2Uyem1SM2dtQ1hHTDZKTjNQME5HWlVrOXVqTmdXL0t4RUtkOFZtR2ZwSmtU?=
 =?utf-8?B?MEk3eUlwa21MRVhSQnBqTkhEVzgrMVVNTTVycHpaZHduL3VzTDZ4VjBmRUVP?=
 =?utf-8?B?QXBEclJ6NzVockhKajhua1QwMGtGQXdkQTFyRUZ6MmtTbDVJNW1CN3U2TDhh?=
 =?utf-8?B?dU03dU52RjM0YWwvQ1BkVXk5bThJMXZLN1lVeGNNeEJ3YlI3WHNNNVd0M1lV?=
 =?utf-8?B?d0lBbnptWnBwV0MweW8xM1hha2dLRmdmb210dnlDQkExaTZGWDY5MzVpczRu?=
 =?utf-8?B?NFduUHJVSzN6R1RBaE9peWtEWVdXam01MzhvSUNqaTRPQVVISkhWa0lQRFFY?=
 =?utf-8?B?Ti9JTXUxemRkY3JmWGJKSDhhTzF4YmFnRUozdlIwUitGeml2eVhqMjFtdkFN?=
 =?utf-8?B?dDFBS2Q1UDZMaVpJVUkwUXJLODY5NUF0eFJ6MlN6d1NnTjAxQ3RWYnRNZW9y?=
 =?utf-8?Q?RujD8LMDtv+t1D5iAHe8i+aO/?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f8b57309-4107-4807-245f-08db3f3e81b8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 12:23:10.1533
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: c0UlK7xsY1Yg5F7feleYKlqImkes4HDzbFvF9mWD2tjCitgwiGO1YczCsOqnr7tqBQcHTvIkFRjo9iINi/KM/w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9203

The assertion at the end of x86_emulate_wrapper() as well as the ones
in x86_emul_{hw_exception,pagefault}() can trigger if we ignore
X86EMUL_EXCEPTION coming back from certain hook functions. Squash
exceptions when merely probing MSRs, plus on SWAPGS's "best effort"
error handling path.

In adjust_bnd() add another assertion after the read_xcr(0, ...)
invocation, paralleling the one in x86emul_get_fpu() - XCR0 reads should
never fault when XSAVE is (implicitly) known to be available.

Also update the respective comment in x86_emulate_wrapper().

Fixes: 14a6be89ec04 ("x86emul: correct EFLAGS.TF handling")
Fixes: cb2626c75813 ("x86emul: conditionally clear BNDn for branches")
Fixes: 6eb43fcf8a0b ("x86emul: support SWAPGS")
Reported-by: AFL
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
EFER reads won't fault in any of the handlers we have, so in principle
the respective check could be omitted (and hence has no Fixes: tag).
Thoughts?

The Fixes: tags are for the commits introducing the affected code; I'm
not entirely sure whether the raising of exceptions from hook functions
actually pre-dates them, but even if not the issues were at least latent
ones already before.
---
v2: Also update the respective comment in x86_emulate_wrapper().

--- a/xen/arch/x86/x86_emulate/0f01.c
+++ b/xen/arch/x86/x86_emulate/0f01.c
@@ -200,8 +200,10 @@ int x86emul_0f01(struct x86_emulate_stat
         if ( (rc = ops->write_segment(x86_seg_gs, &sreg,
                                       ctxt)) != X86EMUL_OKAY )
         {
-            /* Best effort unwind (i.e. no error checking). */
-            ops->write_msr(MSR_SHADOW_GS_BASE, msr_val, ctxt);
+            /* Best effort unwind (i.e. no real error checking). */
+            if ( ops->write_msr(MSR_SHADOW_GS_BASE, msr_val,
+                                ctxt) == X86EMUL_EXCEPTION )
+                x86_emul_reset_event(ctxt);
             goto done;
         }
         break;
--- a/xen/arch/x86/x86_emulate/0fae.c
+++ b/xen/arch/x86/x86_emulate/0fae.c
@@ -55,7 +55,10 @@ int x86emul_0fae(struct x86_emulate_stat
                     cr4 = X86_CR4_OSFXSR;
                 if ( !ops->read_msr ||
                      ops->read_msr(MSR_EFER, &msr_val, ctxt) != X86EMUL_OKAY )
+                {
+                    x86_emul_reset_event(ctxt);
                     msr_val = 0;
+                }
                 if ( !(cr4 & X86_CR4_OSFXSR) ||
                      (mode_64bit() && mode_ring0() && (msr_val & EFER_FFXSE)) )
                     s->op_bytes = offsetof(struct x86_fxsr, xmm[0]);
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1143,10 +1143,18 @@ static bool is_branch_step(struct x86_em
                            const struct x86_emulate_ops *ops)
 {
     uint64_t debugctl;
+    int rc = X86EMUL_UNHANDLEABLE;
 
-    return ops->read_msr &&
-           ops->read_msr(MSR_IA32_DEBUGCTLMSR, &debugctl, ctxt) == X86EMUL_OKAY &&
-           (debugctl & IA32_DEBUGCTLMSR_BTF);
+    if ( !ops->read_msr ||
+         (rc = ops->read_msr(MSR_IA32_DEBUGCTLMSR, &debugctl,
+                             ctxt)) != X86EMUL_OKAY )
+    {
+        if ( rc == X86EMUL_EXCEPTION )
+            x86_emul_reset_event(ctxt);
+        debugctl = 0;
+    }
+
+    return debugctl & IA32_DEBUGCTLMSR_BTF;
 }
 
 static void adjust_bnd(struct x86_emulate_ctxt *ctxt,
@@ -1160,13 +1168,21 @@ static void adjust_bnd(struct x86_emulat
 
     if ( !ops->read_xcr || ops->read_xcr(0, &xcr0, ctxt) != X86EMUL_OKAY ||
          !(xcr0 & X86_XCR0_BNDREGS) || !(xcr0 & X86_XCR0_BNDCSR) )
+    {
+        ASSERT(!ctxt->event_pending);
         return;
+    }
 
     if ( !mode_ring0() )
         bndcfg = read_bndcfgu();
     else if ( !ops->read_msr ||
-              ops->read_msr(MSR_IA32_BNDCFGS, &bndcfg, ctxt) != X86EMUL_OKAY )
+              (rc = ops->read_msr(MSR_IA32_BNDCFGS, &bndcfg,
+                                  ctxt)) != X86EMUL_OKAY )
+    {
+        if ( rc == X86EMUL_EXCEPTION )
+            x86_emul_reset_event(ctxt);
         return;
+    }
     if ( (bndcfg & IA32_BNDCFGS_ENABLE) && !(bndcfg & IA32_BNDCFGS_PRESERVE) )
     {
         /*
@@ -8395,7 +8411,9 @@ int x86_emulate_wrapper(
      * An event being pending should exactly match returning
      * X86EMUL_EXCEPTION.  (If this trips, the chances are a codepath has
      * called hvm_inject_hw_exception() rather than using
-     * x86_emul_hw_exception().)
+     * x86_emul_hw_exception(), or the invocation of a hook has caused an
+     * exception to be raised, while the caller was only checking for
+     * success/failure.)
      */
     ASSERT(ctxt->event_pending == (rc == X86EMUL_EXCEPTION));
 


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:29:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:29:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522050.811194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNyZ-0004Fl-FW; Mon, 17 Apr 2023 12:28:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522050.811194; Mon, 17 Apr 2023 12:28:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poNyZ-0004Fe-Cx; Mon, 17 Apr 2023 12:28:55 +0000
Received: by outflank-mailman (input) for mailman id 522050;
 Mon, 17 Apr 2023 12:28:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=be8F=AI=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1poNyY-0004FY-PS
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:28:54 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6a59482f-dd1b-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 14:28:53 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id
 ffacd0b85a97d-2f625d52275so1509763f8f.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 05:28:53 -0700 (PDT)
Received: from [192.168.0.165] (54-240-197-233.amazon.com. [54.240.197.233])
 by smtp.gmail.com with ESMTPSA id
 l10-20020a7bc34a000000b003eeb1d6a470sm11782134wmj.13.2023.04.17.05.28.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Apr 2023 05:28:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a59482f-dd1b-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681734533; x=1684326533;
        h=content-transfer-encoding:in-reply-to:organization:content-language
         :references:cc:to:subject:reply-to:user-agent:mime-version:date
         :message-id:from:from:to:cc:subject:date:message-id:reply-to;
        bh=EHlangfw+7m1YCMw8lZV9CNZHwQKHUWeJfkJFI19eOg=;
        b=bYg3qN6rK63/iJpugod5x88LFcz1ySMUUCQvLKN8kkAOJQfnJN+YlMvQnvFOStsAh6
         BjYH7KdED5M8d0IuoHaV/lK/N4fpjAIAylJ9HBBBLx824IF7/Y1MYngc6q0UJByU1Y8r
         Uy2/4QynzcQfW7tPQpQVwhDEstIAVTaN2cwo3HT8I5kxOVpsfGNBfApxzR/rkvqhc/cT
         s5AElOg4n4GJGRPIHK0RdoejuNQOcJUOSEXsE0bvLOrGL/2n82ceKv6+z7keZkcY76WS
         Xeq6yWo1ZJL05Cz8FwzIShenJKvi0sTHeAGK9HozDDLT5QsIgvegW8tNtkFIgcCBCjf7
         V39w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681734533; x=1684326533;
        h=content-transfer-encoding:in-reply-to:organization:content-language
         :references:cc:to:subject:reply-to:user-agent:mime-version:date
         :message-id:from:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=EHlangfw+7m1YCMw8lZV9CNZHwQKHUWeJfkJFI19eOg=;
        b=cfa+trjDeDmxbUADwaLkxLArzfY84GH1GD/QqwTr4v+b5kjGWh5dNWBtB79kwv7rhh
         mQAAoaM5o5HHeDQXU7GEcSz88T8Os5ypxVEBoB4YcLuKQxQaESTpLgpxrOmv/5ckiaoH
         ASkdhBY0KTiZMu1u7MS7hfWac8r2ypsfv8tkV3w/XcIxUTMQQwfsiaaSoNyDchZFPGxo
         LCGJD9QQj39tFsOU04Ta0KDckwCdyH2m780c5GDgE8ow1JrqxAy31RdTCdJeD0RL8CG7
         JLiS8Ol5zcRtmhKZw1hnBwoN/51ewJz10Ycadkm72mX2JytRwbkAFq4myXdmqLu+h+fp
         vQ8Q==
X-Gm-Message-State: AAQBX9dFjIhO8d4A/tooFWp1B4IhEiZUKkeUiZxkhc655PMnAxKuJec4
	uDs3xo9iYgZfWr4F+y16fgA=
X-Google-Smtp-Source: AKy350abVnY/wIAxkqlJo9mt+LIZ2kcEVzwVNhlLPlEpeUu8gGoZ0KZlxQblQ+9TrYZpypLwud6o1Q==
X-Received: by 2002:a5d:6e01:0:b0:2ef:a57e:bb9a with SMTP id h1-20020a5d6e01000000b002efa57ebb9amr5522963wrz.6.1681734533109;
        Mon, 17 Apr 2023 05:28:53 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Message-ID: <95698d88-85ba-0072-f23e-91e8b686dc52@xen.org>
Date: Mon, 17 Apr 2023 13:28:50 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Reply-To: paul@xen.org
Subject: Re: [PATCH 2/5] hw/xen: Fix memory leak in libxenstore_open() for Xen
To: David Woodhouse <dwmw2@infradead.org>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Thomas Huth <thuth@redhat.com>, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, xen-devel@lists.xenproject.org
References: <20230412185102.441523-1-dwmw2@infradead.org>
 <20230412185102.441523-3-dwmw2@infradead.org>
Content-Language: en-US
Organization: Xen Project
In-Reply-To: <20230412185102.441523-3-dwmw2@infradead.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 12/04/2023 19:50, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> There was a superfluous allocation of the XS handle, leading to it
> being leaked on both the error path and the success path (where it gets
> allocated again).
> 
> Spotted by Coverity (CID 1508098).
> 
> Fixes: ba2a92db1ff6 ("hw/xen: Add xenstore operations to allow redirection to internal emulation")
> Suggested-by: Peter Maydell <peter.maydell@linaro.org>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:30:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:30:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522060.811208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poO06-0005hl-R0; Mon, 17 Apr 2023 12:30:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522060.811208; Mon, 17 Apr 2023 12:30:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poO06-0005he-OM; Mon, 17 Apr 2023 12:30:30 +0000
Received: by outflank-mailman (input) for mailman id 522060;
 Mon, 17 Apr 2023 12:30:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=be8F=AI=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1poO05-0004j0-Kc
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:30:29 +0000
Received: from mail-wm1-x333.google.com (mail-wm1-x333.google.com
 [2a00:1450:4864:20::333])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a2944436-dd1b-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 14:30:28 +0200 (CEST)
Received: by mail-wm1-x333.google.com with SMTP id
 eo6-20020a05600c82c600b003ee5157346cso15416986wmb.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 05:30:28 -0700 (PDT)
Received: from [192.168.0.165] (54-240-197-233.amazon.com. [54.240.197.233])
 by smtp.gmail.com with ESMTPSA id
 n6-20020a05600c3b8600b003f173a2b2f6sm2802937wms.12.2023.04.17.05.30.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Apr 2023 05:30:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2944436-dd1b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681734627; x=1684326627;
        h=content-transfer-encoding:in-reply-to:organization:references:cc:to
         :content-language:subject:reply-to:user-agent:mime-version:date
         :message-id:from:from:to:cc:subject:date:message-id:reply-to;
        bh=sOoY5WTG+InyofH4KHUoATfgAaPfNyAeXdK/e766al0=;
        b=f8522ysrl1IfSEB7/siDCOfEDKk/w/Do6Pgobux024FweSGC0QGYT10SBjf1VPso14
         utEY3jq0+lwGGGGvyw55J5wcn+t/pJu3jNlxravd31EZKNoPv7htBNkOECKCynWT+jHK
         xYjLRgI9Zx1zpxWbFB1HBwMH5bhPZNchVD3zVA7ADjuDHuxsumT1/czYS0eOktteISoN
         dl+Pap8Ihk9qzhrQ93ZrFA8NDYnTlGzfM2GAqnrSk6HPkT3NxdmyUGME2LhLbv6DZg9w
         nhHSrclBAjVRkd7ceyoXfa34gJHoABQ2gQ3EgvIL4UyDDh/NWJDCPg4F3AC6+DzsCN9Y
         mY/A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681734627; x=1684326627;
        h=content-transfer-encoding:in-reply-to:organization:references:cc:to
         :content-language:subject:reply-to:user-agent:mime-version:date
         :message-id:from:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=sOoY5WTG+InyofH4KHUoATfgAaPfNyAeXdK/e766al0=;
        b=lcFXOZ+j9MzlosR2ytwWYrfxIiejRDnjP3qcg1InrrHGBvlTZFlVLu1go4aXdERuKR
         Ijg0uVFnhkUq1nIZ6p1bsjORHwv9iHQ2qpuT9CqVxLgYReIdb44TIgLMdVTHsE53MtxD
         FfkdgdeYkH0xJ76PGhpjgpjtuI9wD8S78M3TxGOqFcYR39vZ4YSQxCzpOC1DHXh/yKbU
         IDUjc++FVjVF+Svd+2BGkWiocBJolYmdhhuB5o/9oFeUYRFwyhgsgVmafKhBoB6x5tTV
         Z8ILBIRbHHKgKJ0q7A3gteqKs0sI9ht9/MzXN6g8t/LunXfnhWOrCwvGQdAm0sKJUZ+i
         UgWA==
X-Gm-Message-State: AAQBX9fyZpoZjSbJAU/XH5q1fOdPLZDfrF2ozuTuOFvUji+ctoV2CljX
	/kXNYi9e+7MfSqMGHvplB4A=
X-Google-Smtp-Source: AKy350ZCWY+iLGD9Fyd2GzYGRLSxota9WngBMMlDXO/QahTk7nhjmJvBA6V5jWeqLAnMpj4K2y9GTA==
X-Received: by 2002:a1c:e90d:0:b0:3eb:42fa:39d5 with SMTP id q13-20020a1ce90d000000b003eb42fa39d5mr10451882wmc.29.1681734627479;
        Mon, 17 Apr 2023 05:30:27 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Message-ID: <79aefe17-be48-2be9-7c3e-12056f5f2819@xen.org>
Date: Mon, 17 Apr 2023 13:30:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Reply-To: paul@xen.org
Subject: Re: [PATCH 3/5] xen: Drop support for Xen versions below 4.7.1
Content-Language: en-US
To: David Woodhouse <dwmw2@infradead.org>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Thomas Huth <thuth@redhat.com>, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, xen-devel@lists.xenproject.org
References: <20230412185102.441523-1-dwmw2@infradead.org>
 <20230412185102.441523-4-dwmw2@infradead.org>
Organization: Xen Project
In-Reply-To: <20230412185102.441523-4-dwmw2@infradead.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 12/04/2023 19:51, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> In restructuring to allow for internal emulation of Xen functionality,
> I broke compatibility for Xen 4.6 and earlier. Fix this by explicitly
> removing support for anything older than 4.7.1, which is also ancient
> but does still build, and whose compatibility support is fairly
> unintrusive.
> 
> Fixes: 15e283c5b684 ("hw/xen: Add foreignmem operations to allow redirection to internal emulation")
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>   hw/xen/xen-operations.c     |  57 +------------------
>   include/hw/xen/xen_native.h | 107 +-----------------------------------
>   meson.build                 |   5 +-
>   scripts/xen-detect.c        |  60 --------------------
>   4 files changed, 3 insertions(+), 226 deletions(-)
> 

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:32:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:32:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522068.811219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poO1Y-0006Jf-Ae; Mon, 17 Apr 2023 12:32:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522068.811219; Mon, 17 Apr 2023 12:32:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poO1Y-0006JY-83; Mon, 17 Apr 2023 12:32:00 +0000
Received: by outflank-mailman (input) for mailman id 522068;
 Mon, 17 Apr 2023 12:31:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poO1W-0006Iz-VQ
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:31:58 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20607.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::607])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d78321af-dd1b-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 14:31:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9651.eurprd04.prod.outlook.com (2603:10a6:20b:4ce::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 12:31:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 12:31:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d78321af-dd1b-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xi3+8447OF2S1p+bzTgtIFsvyuZusJ/CO/hiesmK1L8=;
 b=aLmlvf+SnIrtyQTr61vHrNFJXNl3/J+VDs6Lrpk3IYWVPyzDcumR8y6Kl8oM3N9g0AKd/+Ckgs2ywurdON6C8ntDWrKSfsmq6t8B0rC/gk6EA8uBOs859WuIK2rGmFoumaP9Z01d7FVDA+eFKHnkRyg9/QUv7uzte2DmmUCp0ObwtdhZQa3W+X1vDoA5z5PhhjDbASpJ91iwxtSGgY6JCwKnBXa2dpA1LSMhXJmi7W2i57XAqCZmS1UPESEmL2NAbGAn13fQ+d5D+czT1Y3SKhr/wj8kAAazFIxfxlAMax4eqZ8cPKTg8BdPKe2XWwxIWs7gSE4ebet86rCA+tNBfQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0a94cc73-f99b-a616-d342-8d84e8a274b4@suse.com>
Date: Mon, 17 Apr 2023 14:31:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 1/3] xen/ELF: Fix ELF32 PRI formatters
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
 <20230417121357.3738919-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230417121357.3738919-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0046.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9651:EE_
X-MS-Office365-Filtering-Correlation-Id: 9ccfe117-5ad3-48a9-4a16-08db3f3fb9d0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9ccfe117-5ad3-48a9-4a16-08db3f3fb9d0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 12:31:53.7596
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oaNMbBcS4T8IajyL6Rlse3x/6XmH0A0JxoKhndvcneSMjOer0mVgPK7UekgYBECQ3CWpf2fzrdAxmd/g+u1vzA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9651

On 17.04.2023 14:13, Andrew Cooper wrote:
> --- a/xen/common/livepatch_elf.c
> +++ b/xen/common/livepatch_elf.c
> @@ -310,12 +310,12 @@ int livepatch_elf_resolve_symbols(struct livepatch_elf *elf)
>                      break;
>                  }
>              }
> -            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Undefined symbol resolved: %s => %#"PRIxElfAddr"\n",
> +            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Undefined symbol resolved: %s => 0x%08"PRIxElfAddr"\n",

I don't see what's wrong with using %# here (and below); nor do I see
the value in zero-padding to 8 digits when the printed value is either
far below 4G (when representing just a section offset) or likely far
above (when representing a real address on 64-bit). But once again I'll
leave judging to the maintainers.

>                      elf->name, elf->sym[i].name, st_value);
>              break;
>  
>          case SHN_ABS:
> -            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Absolute symbol: %s => %#"PRIxElfAddr"\n",
> +            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Absolute symbol: %s => 0x%08"PRIxElfAddr"\n",
>                      elf->name, elf->sym[i].name, sym->st_value);
>              break;
>  
> @@ -344,7 +344,7 @@ int livepatch_elf_resolve_symbols(struct livepatch_elf *elf)
>  
>              st_value += (unsigned long)elf->sec[idx].load_addr;
>              if ( elf->sym[i].name )
> -                dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Symbol resolved: %s => %#"PRIxElfAddr" (%s)\n",
> +                dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Symbol resolved: %s => 0x%08"PRIxElfAddr" (%s)\n",
>                         elf->name, elf->sym[i].name,
>                         st_value, elf->sec[idx].name);
>          }
> diff --git a/xen/include/xen/elfstructs.h b/xen/include/xen/elfstructs.h
> index 06e6f87c3d80..3124469faeb4 100644
> --- a/xen/include/xen/elfstructs.h
> +++ b/xen/include/xen/elfstructs.h
> @@ -561,8 +561,8 @@ typedef struct {
>  #endif
>  
>  #if defined(ELFSIZE) && (ELFSIZE == 32)
> -#define PRIxElfAddr	"08x"
> -#define PRIuElfWord	"8u"
> +#define PRIxElfAddr 	PRIx32
> +#define PRIuElfWord 	PRIu32
>  
>  #define Elf_Ehdr	Elf32_Ehdr
>  #define Elf_Phdr	Elf32_Phdr

This part certainly
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:33:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:33:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522072.811228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poO3D-0006tD-LC; Mon, 17 Apr 2023 12:33:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522072.811228; Mon, 17 Apr 2023 12:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poO3D-0006t6-Id; Mon, 17 Apr 2023 12:33:43 +0000
Received: by outflank-mailman (input) for mailman id 522072;
 Mon, 17 Apr 2023 12:33:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=be8F=AI=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1poO3C-0006sw-QU
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:33:42 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 158f9bf8-dd1c-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 14:33:41 +0200 (CEST)
Received: by mail-wm1-x32d.google.com with SMTP id
 ay3-20020a05600c1e0300b003f17289710aso1643917wmb.5
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 05:33:41 -0700 (PDT)
Received: from [192.168.0.165] (54-240-197-233.amazon.com. [54.240.197.233])
 by smtp.gmail.com with ESMTPSA id
 p6-20020a1c7406000000b003ed2c0a0f37sm11811485wmc.35.2023.04.17.05.33.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Apr 2023 05:33:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 158f9bf8-dd1c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681734820; x=1684326820;
        h=content-transfer-encoding:in-reply-to:organization:references:cc:to
         :content-language:subject:reply-to:user-agent:mime-version:date
         :message-id:from:from:to:cc:subject:date:message-id:reply-to;
        bh=jwFF4OlAc8WhFMJkqHAMU3vekqaWmw/4QM0BxFjkB48=;
        b=YYiMHYka6VsDJYi2gscWqaahkAinjOFRm6DbLxe3JVigfbf7X8xyh196bUhxXbwGIA
         fbFosGudK5afJc3wrMrjhajrsnrRLbLK2pjN/r3cD0Ij+J8qQsmsVlAkcleKr72sMjGf
         vG/ofoI/fa128DN8xM4UZXfHugGHV3TLODbSj8oO5ArJpWf9mZcmvFc/NPZzxPmklgtr
         MmVsREIV3J4iU1jzKbFE/PjWoiOpE3qfrdcrf/BdwDVsEJ9Blm+iRI9J+jja3VJ6O104
         gMB+5mrzDPo6ZXmKOn+5BKTgDL4/cNnBbYw7xMcL4ZsPdMSknPudzQNuRW+ah/X8n4aa
         AfEA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681734820; x=1684326820;
        h=content-transfer-encoding:in-reply-to:organization:references:cc:to
         :content-language:subject:reply-to:user-agent:mime-version:date
         :message-id:from:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jwFF4OlAc8WhFMJkqHAMU3vekqaWmw/4QM0BxFjkB48=;
        b=Yl0DFaEKS8BL6xkPDf0BQPI62D5Rg11LMH8aS8BKz22J5OFybIolwfcEIhU4dEBWKw
         DtY3UNoigh0e/1Vckzhzn/ZaB797FbYeVzeNjco3lYbvDHfdNzShO7jKg2a75DNXrFQ0
         RQvDqMDZNcAQCQ4KKv+qujHu+lbOEpFkH5sp4nOVzscCSvpAytg9+tn5ALCRTQiFcRXP
         i4S8nzpcQEaP0r0c7l2IQs0LtHTT+7vwwfySmXyxvbPeSNfUqWwVS6di5PI47wG7wRDE
         Z/GzEYDLw03Yi+4gTCrQvWNnvBJfpTnY5/1BZbf3L6yirUBmz57OK0ZpwwFif0nJ6nf5
         BDgA==
X-Gm-Message-State: AAQBX9eFZ7k5qNOrYpf1LAteETyH6j3mjxWylflao6EJTcteeLZq/NFq
	7gl1Ap2tI8x5MGntvKcf8ns=
X-Google-Smtp-Source: AKy350aN1LXr0qrwFoe7EoDFDDVNhV2/qcHWtPrIt1pJ+Qk+uDEMi1hR+uzq10YOMcZsvTQTe0DXYg==
X-Received: by 2002:a05:600c:2051:b0:3f1:6ec5:bc6e with SMTP id p17-20020a05600c205100b003f16ec5bc6emr5634089wmg.3.1681734820491;
        Mon, 17 Apr 2023 05:33:40 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Message-ID: <0b67440a-9c65-6606-5e24-6fb01e8543a3@xen.org>
Date: Mon, 17 Apr 2023 13:33:38 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Reply-To: paul@xen.org
Subject: Re: [PATCH 4/5] hw/xen: Fix double-free in xen_console
 store_con_info()
Content-Language: en-US
To: David Woodhouse <dwmw2@infradead.org>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Thomas Huth <thuth@redhat.com>, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, xen-devel@lists.xenproject.org
References: <20230412185102.441523-1-dwmw2@infradead.org>
 <20230412185102.441523-5-dwmw2@infradead.org>
Organization: Xen Project
In-Reply-To: <20230412185102.441523-5-dwmw2@infradead.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 12/04/2023 19:51, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Coverity spotted a double-free (CID 1508254); we g_string_free(path) and
> then for some reason immediately call free(path) too.
> 
> We should just use g_autoptr() for it anyway, which simplifies the code
> a bit.
> 
> Fixes: 7a8a749da7d3 ("hw/xen: Move xenstore_store_pv_console_info to xen_console.c")
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>   hw/char/xen_console.c | 13 +++----------
>   1 file changed, 3 insertions(+), 10 deletions(-)
> 

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:34:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:34:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522076.811239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poO3u-0007PJ-TK; Mon, 17 Apr 2023 12:34:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522076.811239; Mon, 17 Apr 2023 12:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poO3u-0007PC-Qj; Mon, 17 Apr 2023 12:34:26 +0000
Received: by outflank-mailman (input) for mailman id 522076;
 Mon, 17 Apr 2023 12:34:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=be8F=AI=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1poO3t-0007Hc-Kg
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:34:25 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2fd6e9e1-dd1c-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 14:34:25 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-3f176a16c03so1493005e9.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 05:34:25 -0700 (PDT)
Received: from [192.168.0.165] (54-240-197-233.amazon.com. [54.240.197.233])
 by smtp.gmail.com with ESMTPSA id
 h12-20020adffa8c000000b002d322b9a7f5sm10452463wrr.88.2023.04.17.05.34.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Apr 2023 05:34:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fd6e9e1-dd1c-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681734864; x=1684326864;
        h=content-transfer-encoding:in-reply-to:organization:references:cc:to
         :content-language:subject:reply-to:user-agent:mime-version:date
         :message-id:from:from:to:cc:subject:date:message-id:reply-to;
        bh=jQnz3n1AuRPtCJ1RDVocHpGppB0g/9zbL7gA6j6iNsk=;
        b=jWNACl3Mv8T6RnJ2Tyq1dQTW62FBIGpJNCFkGblaSI2JAcJIVilFP/umFBmbz1w/HH
         r3Nmyfh9lWQRsSoP+uPNvKi9OFgAt4pviDH/9aYuInFEjFUDPs4xscK54uNmOE9AXi5w
         9u/hB0RyE74A/MZBcWu4fP9d8+DYXZHoV8NTHsmSKOVl7QDmMl7s4wrHFkPrl98mUJlB
         IESEaRiefya3Gtgm95mePAcnbg+X88s8w4Dui87ZvlYm4+Mdh314bGeVGm8OhRs0Gp0b
         wlgDQP/Kj08xv57wSi2pHaqfOjNq8JH8BZk2hKcMdmJIcH1FfwHAd14uFUdUethfcmRY
         miyA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681734864; x=1684326864;
        h=content-transfer-encoding:in-reply-to:organization:references:cc:to
         :content-language:subject:reply-to:user-agent:mime-version:date
         :message-id:from:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jQnz3n1AuRPtCJ1RDVocHpGppB0g/9zbL7gA6j6iNsk=;
        b=De1VrwAe88AFNkXVZTMr4jn894Ig4Esms2k1BII2b7YuhLc7IyyYgQ8/Hl2lgOTApR
         cU6oIBzRdxfdOpIP06fgaN1L/c8RocAPFdIJKyO7fPQgeHea+HutZPXgFROW2CFksaXJ
         C4UBZM1u0p/Y5ouY31TXOpNd+0PiwkjhkfJ8GQLtPuf9HRVh97pBqYbytTseJ6Gh1+Fz
         dBifV0n5WadL+TjRp4ayXTPHadMdUK6nGpiWmm69I+j33kMyQs/Y3L3rqAkQki1cjUqW
         a6yfw1RzWTLWqLN/9ck0pnifZmefsX/EeLp4LSgzeJ2dX2EQfG9tQjmPy3ekjj3b6CKE
         os3A==
X-Gm-Message-State: AAQBX9fgfZAi5GPFtZvLSHVmjPCE5zXxOVIAAm0de/1mHFsiBha6xnYQ
	DFstHy0A9yQ9h/YzoT5EDvk=
X-Google-Smtp-Source: AKy350ZMoM7ssfA8tZqdPzobXT3XgXokj5MKZKhf3HYCvcgZjxuGt3q+5Ae24ff7S2+r47tuGPVhlg==
X-Received: by 2002:a5d:5409:0:b0:2f8:5d73:dbf0 with SMTP id g9-20020a5d5409000000b002f85d73dbf0mr5397935wrv.27.1681734864568;
        Mon, 17 Apr 2023 05:34:24 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Message-ID: <eea29aa3-b3e7-6579-aef4-74f496e99c66@xen.org>
Date: Mon, 17 Apr 2023 13:34:22 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Reply-To: paul@xen.org
Subject: Re: [PATCH 5/5] hw/xen: Fix broken check for invalid state in
 xs_be_open()
Content-Language: en-US
To: David Woodhouse <dwmw2@infradead.org>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Thomas Huth <thuth@redhat.com>, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@linaro.org>, xen-devel@lists.xenproject.org
References: <20230412185102.441523-1-dwmw2@infradead.org>
 <20230412185102.441523-6-dwmw2@infradead.org>
Organization: Xen Project
In-Reply-To: <20230412185102.441523-6-dwmw2@infradead.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 12/04/2023 19:51, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Coverity points out that if (!s && !s->impl) isn't really what we intended
> to do here. CID 1508131.
> 
> Fixes: 032475127225 ("hw/xen: Add emulated implementation of XenStore operations")
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>   hw/i386/kvm/xen_xenstore.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:35:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:35:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522080.811249 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poO5J-00081U-7I; Mon, 17 Apr 2023 12:35:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522080.811249; Mon, 17 Apr 2023 12:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poO5J-00081N-4c; Mon, 17 Apr 2023 12:35:53 +0000
Received: by outflank-mailman (input) for mailman id 522080;
 Mon, 17 Apr 2023 12:35:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poO5H-00081G-Kd
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:35:51 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20631.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 624ab6a3-dd1c-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 14:35:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9651.eurprd04.prod.outlook.com (2603:10a6:20b:4ce::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Mon, 17 Apr
 2023 12:35:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 12:35:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 624ab6a3-dd1c-11ed-b21e-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JAQSlWeem5oJUiOqcg42b1kKjkRZQQFZHBQYR7yQIFIdCgqXKxPCHtsf5LM5MGHqcdkMoVfFPSmnHD30eT46lmjWkHaFwVIEHnOxlWOmD3K+iNrEuc3fpdoc+ZsHCOPnU3wCYt3YdejIvEHS/nFRag+dIJmGpYV5f/FIkqMiiA0bussgAk/RKL8oUN5GsZVAk9fg1lrzE0xymmlOYhvlGoLi7P0yZJpq4jfGyMhhcl03YjC3kAuaB/jTn+elAhUxXaGHKXeEfkdNZB5gNQABRK6gPsxhstI5hQwY3E2VmZxkCqXZeB5WmZm1aytyJIXzejazy734zti/2yMuPC+1Gw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MGvUqX/dgB60Z74TjItC0AZ71tF3trodq1AxDIYlSX4=;
 b=ePSDiAz6PLn2FuuBOec+M9+trfQuWoKNIwh5bQJ2V/MHAH998cCOI3/nTNTaXK68WmfRuHJ7Gej1hhw46QkQJoKhBinQRqblMq7ILVYA+/7Z6dYlclMAnRKEpcMkLyGssqDtCsEDXKfS5xm0gKFhQ8hs36sVtnDV8DD2DAgu0Vhn8Y1qM0Ihkfy1a7qIe8sbbRzYTE0CqnHPdC83VIMjko5zwrPJOssBmGXIHoe5LS6c4O0E1WRSA0bZXXN/v+8TL/Ef/1D/POEXC+Q+tC57r6Cfr52uytqL/ldAlovwdg9bcGhqBnvSQlMidw2l2uEDGPa76n+XMaTPFh3uAVjseQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MGvUqX/dgB60Z74TjItC0AZ71tF3trodq1AxDIYlSX4=;
 b=Cc32AY0BxnJ6au6xsbWL6toLmpIsrgY8TqRKiUPaGy/PCBwa0t8/G3x+4nvfwOVYUeglMOBcLojnn5syask1vlzcDkhVdMWus7cTwmHLS1tXpbaX6h7DdasKIlj+qmWfHkQEtea5WIli3eKaZNgRaRG6DSkSPC973+HetA21POj/ahHBac8rVdnZkz0OWzgr0B3D0Z2elgVCv7belmh3wILRlkYym1EvkgEFEnXiPoesv3333nlIdK3mq9ZKxCkJovHwYccm009ajPuelXSpleqpOxcci0Ypzl8Wa6xSksMnYUAqwakq5w4jrLj34O2/YfjOJ+o2QPiFVnJzq8jqNw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3b1f3719-c8e5-8a98-8325-d907c160d81d@suse.com>
Date: Mon, 17 Apr 2023 14:35:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 3/3] xen/livepatch: Fix .altinstructions safety checks
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
 <20230417121357.3738919-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230417121357.3738919-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0051.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9651:EE_
X-MS-Office365-Filtering-Correlation-Id: 5cdf34b2-44e1-43e4-6aee-08db3f4045c8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	4dphuPQBTHOI/JgzE4FeH7nadqCGRCiNOREbjWBfE1OVB2U+1FYlrI7BJEPdK6vyx1uMtd1GL6qi6zQ6SrJWzIf26oNDQwmJLyuaWOQ5V6MVb9QbLI1UzLueQXvVhPJC0L/dJhUI8V5FKITlD2K5/WPjL3Tbzwuudz5EZlQ+BvzufT9Q3JrzMI1ZQxmZR9CDrsxg3LaVITmr+ge87UdYyJr73VMdkHpPYfGorp3I029y855REF9VZlA2JlFLoRNZxOg/twKMHA1pnf42vuojQJqu9viLtdqf9GhTgdD2pNdip8D0bgutu11cvK5BPXhyPkVL/j081tSpX+3dP3X7xlCmEufpbKjzVZClXuCAytSd+6QWvCETShbJHwiYdjk9NH1xYf5MASdW4zJpf/Rj0SgwQYinAnmhFajTTAH2rriLoJx4d8IPEM2Cvw995mUtLgTfMwPHqG+9OdvhUgqdkcmkWaREnjUMBuGRiNi+qQkNBXvC0kOkAo3MCiEcXqjDaogYSoR9Gynogi26/epES6xo9Qe06CMu9vzayTcvINhwjGHa7JTvMv0zlrX1+HXWqmqqj5yBjZ0xdDyTUqQ4mFyWRXOWomW3Mbtw01TEDCR9HeJIMo2nnTtmr59jmfoFTgyYANdM2i6oif9mWb7WX9PXw7q+LeMRfTnPsGe8x5s=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(376002)(346002)(136003)(366004)(39860400002)(451199021)(5660300002)(6486002)(6916009)(66476007)(66556008)(2906002)(4326008)(66946007)(36756003)(86362001)(8936002)(38100700002)(41300700001)(478600001)(8676002)(316002)(31696002)(54906003)(6506007)(6512007)(26005)(31686004)(53546011)(2616005)(186003)(142923001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?OFVBalFCUDdjZXlHd2pUQm5vODRFZEIwR1YvckluWnptRHVHRWVFUzRQaXFj?=
 =?utf-8?B?dW9BVEkvOWtDbWJTV3RqZ1FkdXZLWXkyWE5wMlpqVVM2TFhCS0E4cFdUeTBL?=
 =?utf-8?B?M3FjQThLMWFIblprSzNQWHVTb3F6ZVZYWFUwMDJjMVkyWDZDeXhhbkRPaURS?=
 =?utf-8?B?QWJhTzBsRmw4NmhldHN4NjdRcG52TDhnaXJCUFNIajhvQ3dad2svRWNlSXZ1?=
 =?utf-8?B?ZS9rNkpBeFV3WHc2eHBHcFk5dXhXd2trdEQ4K1hrOEJoQ3dBcFZyd3N4bjFU?=
 =?utf-8?B?YVBaUUFzNXdaZUZOZmZmNzZVZFI3SjZHTlM1bzIxT3gwczhEbVVQNG9jajQw?=
 =?utf-8?B?YmQ5djJvOFRHRWtzd1JiTWhMeHNPWVZWQnhHdHVUSm0xNEE1REttY1NFZWVI?=
 =?utf-8?B?Y1BNYllvOFYwWGQzbHIyeHVKbzJDVmlMbmQvTk11ckVNNE8yVThsQ0NlVDZv?=
 =?utf-8?B?Qk5KMkl6UEpHSTJsYndCTjgyZ1VVV3ZBTWZaZ3ZqQ05FTmJETkxOTTJhUjZT?=
 =?utf-8?B?NVlOb3oydFo4UE5VSEVQL24vVUp1UGlaYU9uNkNlMkp1dHRZQlJHSkxOMWV4?=
 =?utf-8?B?TDdVR2JTUGRadW5lRHR1NDg4VS9uY0VHZElWeHFNVlRSblJjcXVEalpQMnkr?=
 =?utf-8?B?cVFRZnh2VlM5Vy9nK1RaMTFuN0duTktyaTBadGpIeXV2UjZGUUtZTmhnaEdj?=
 =?utf-8?B?R3VXc3dNWHdXSnpid0ZNZnU0bDdtY2laekFqVU5wSWhFVzB2U2MyUlNqcnJX?=
 =?utf-8?B?dDR4Mmt3ck42aUJPS2xGWkZTQ1Bab1dTZkpiM3lQeE96dDdKWmU2R2c4ZHd1?=
 =?utf-8?B?cWoxMldDeS82VGZzRzlyRm1NRnZLOGhQM2F6MUkyRkR0NklpT1dDNy9aZmpK?=
 =?utf-8?B?RFJDODFTR3piN2pRMCtLYzVMYVA4dW4yTkxQOHo4ME9nZkpzMXo3eTAvTDhV?=
 =?utf-8?B?M09uLzhLNDhtSnl3VDJUTStuZWNFWXorL1pYOG9CdlBiWTU2NnBpRXhJekd0?=
 =?utf-8?B?T25wZUl1ZzY5aVBBQmtYTTZ4WVhZcnBjUTJ4UWI5SkY2L1RwMkpIVXdobG5p?=
 =?utf-8?B?S3VEb1N4S1ZkNmdMdHZaRW1QRmlub0N2WTdRQTdkRElNL2w1TDdZWHNGOXFq?=
 =?utf-8?B?MXNsZXVEb0ZJUURhaGxiSkJqOUJZNEM4Q1FHMEQ5YXJwNFg0cEtmNFU3cUlk?=
 =?utf-8?B?d3NBSmlUbW1QZ3BwVDBNMitwdmJwR1FJb0xGanM1bm56VHpIVjF2MEk1YlpF?=
 =?utf-8?B?TEpqUTJoOStyNWI0RmNsSXVwRzRDNitzU0VqVTE1djBVQ2hNeGFUclNlcVlP?=
 =?utf-8?B?bmVzbmxZSWQvNnZCa1VkVHJoMFpiSmVyU0s2ZzNZR3UzK1Y3Um5vWU5wSHNi?=
 =?utf-8?B?R0Y3Y1E4NFN3bHpHZmllajlPYktLOVJFTEFRcFFLZlJNN2Z4SGgyTmpBTi8w?=
 =?utf-8?B?eWttY2RDZFpCM1FJbW04MjE5dGl1aDZLMm1ES2xub0RKNEFkSUdPRTFBZ3F0?=
 =?utf-8?B?YkZjK2g2ZjVuOWE4cVNDUDFYVHJnZTNEQkpneUcyT0tXZ3NiWVdEbGxUTUJG?=
 =?utf-8?B?RThuOTg4VFkyVlo1NithNTVwVmpKbE1WZVYzZ0hWd1FRMmx1RzZRLzFyV2Mw?=
 =?utf-8?B?QVBpWGk0VVBqZmh3VitvVEE3SWFTcUNaNXhJR1prTGhDKzNzSHc2Y0I2Q3lQ?=
 =?utf-8?B?RDZvRmM0MW1qL0lrU01nN001cWdkU1p1dWI4dm9SSDFYODNQMGdWSFMyU21F?=
 =?utf-8?B?YlFncTRXd1o5RDFaUlN1UlhxaUg0RW91OXNtL3VTVi9oTGw4K2YyL2I3U2JF?=
 =?utf-8?B?ZnNOZjYzdE5ZQ3QreFRSRjJIbUhpbkFBbVRRUVJWQ1pVRHQvekpmMTJVUkc2?=
 =?utf-8?B?TEptRDhDNk5zRkF6QUVSRGdDcTQyMGp0Si9qclpWK3Q4ZjFHaXRGMlI1S1ZF?=
 =?utf-8?B?KzlNRXRDQW1IUlc5QU5SUTNtcCtUK0pPeGJnOWh4Rkp5cm5LOGZGaE9EOVQ5?=
 =?utf-8?B?VTA5Ky9qT3gwaFd4VWo4M25HSEdCY3N2VUlzSC9wYzhpM1dXbVRmamNxcTdM?=
 =?utf-8?B?QTJIT3pRWXEwb1JmdmtneWNZTXZiLzJOeFg4SU1TNW1sZHo2RWxmVkJUNXB2?=
 =?utf-8?Q?SEM846oNvfspXajkCbkrnX8BX?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5cdf34b2-44e1-43e4-6aee-08db3f4045c8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 12:35:48.5504
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mkKKtFTHDtMLXbhb66LlB7jWxTbUJMm4DJGEcX/aDnOsEXXGDGj4sJHz0PJTct7sHdGQFbeGMUIK9QLcsgdudw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9651

On 17.04.2023 14:13, Andrew Cooper wrote:
> --- a/xen/common/livepatch.c
> +++ b/xen/common/livepatch.c
> @@ -803,28 +803,84 @@ static int prepare_payload(struct payload *payload,
>      if ( sec )
>      {
>  #ifdef CONFIG_HAS_ALTERNATIVE
> +        /*
> +         * As of April 2023, alternatives are formed of:
> +         * - An .altinstructions section with an array of struct alt_instr's.
> +         * - An .altinstr_replacement section containing instructions.
> +         *
> +         * An individual alt_instr contains:
> +         * - An orig reference, pointing into .text with a nonzero length
> +         * - A repl reference, pointing into .altinstr_replacement
> +         *
> +         * It is legal to have zero-length replacements, meaning it is legal
> +         * for the .altinstr_replacement section to be empty too.  An
> +         * implementation detail means that a zero-length replacement's repl
> +         * reference will still be in the .altinstr_replacement section.

Didn't you agree that "will" is not really true and is at best "may", but 
that it also doesn't really matter here in the first place? If so, the 
sentence might best be dropped, to avoid drawing attention to something 
that would at best confuse the reader as to its relevance.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 12:51:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 12:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522086.811259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poOK6-00026N-L2; Mon, 17 Apr 2023 12:51:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522086.811259; Mon, 17 Apr 2023 12:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poOK6-00026G-ID; Mon, 17 Apr 2023 12:51:10 +0000
Received: by outflank-mailman (input) for mailman id 522086;
 Mon, 17 Apr 2023 12:51:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poOK4-00026A-M8
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 12:51:08 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0619.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::619])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 83009214-dd1e-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 14:51:03 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB9210.eurprd04.prod.outlook.com (2603:10a6:10:2f9::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 12:51:01 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 12:51:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83009214-dd1e-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AyO7EAA4USMbp63Kr6iijrENMI7rzHjsGKHr4WP1u+ipxxfhyCmUajt9vtTYVVlDNnOGhvBdLu0HeooRoxSHKu5kce+5FK4EiJY7ZhFRCbJwL5KMkjs7UF2A01iBe89MqCrt9/EjLKhmqVBZTpPl3SK1/VkXdCMtvAtw2rq+JxgECb2cKkQzQAZCiC0PQDRS4lT5wgEnoD09mBeV5F17mdba+/W6VABxSyzV03x3agq/Iv+qPcQUgqsUsor4pJkCh6j1kNIubhe8Q2aOSdZMXb9jPii9LmNWrhCDIAuj1F9TNQ9WxZePKwdD7mVhry/ojOV6mskTbl6hMRlENc5s7A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YkNfOviI4hf1AzPUv3CbDAXMwM+GNKcA60lFWDNu3Zg=;
 b=aMNwt2n9G6K6fDxY3mjZGzYmNbqLgZfOXF4yixK+HEgLgPMQG3/IwPBR0z+whooe8Cw4NIC3arz+6Xh2tls7F4H8lm3oq0eYtuO+Me1M5qNxaaxJJaDhW2lZAUGS7GQI4ayi0q4jopH/mrv9bsrtQ13ein/08Ae8NuTRkl4Yl44QHZ5NIhbK296qZr0aW64UkhyxL69zObZ09KDXit2FcWHfCh0YkbavfvUcPBi7o0P/zXC6CY9h8vzLQfewBJnKGtoyNqCWvRAwjuZX28pz2G+XlmSYv6UQV933KDOu9la8ITwfRrMevdbZLdasUuZt5ura867dY4gMKJjawSuDyw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YkNfOviI4hf1AzPUv3CbDAXMwM+GNKcA60lFWDNu3Zg=;
 b=BSR9Vvl94CRRQSqTOjeHKTh2xrpjdIxq/t+B/eB3CwAWFT471rTUnwqZIloxeZ7YwBFbnJZRHLc15uXwjHzOkTEY3CBOK9/hVLgA9pGBbh3c1xkLckqLSLYyIdWEgfWKRyV3a0d+MtByc2StLF/uYWUjlK3Tt25IY7osPpJi246wSaY+11BXSU9xK8dTBdMy/qvzBnZjx1cXdUqFQ8X2/GxVt+9oHgbuc72s1k7jApXHdEqpACC2TAyN5kgKOdWJEmcaCusG1j0bkZUXpdIN2NyFQjU5yJet/UTFQ5RakIpFsk4cq8myAC3MJU2LQ7jJIaWVKt+MTrDovyy6jYp0ug==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <823ca649-9a9d-cbb1-e7bd-a91ab38e15d5@suse.com>
Date: Mon, 17 Apr 2023 14:50:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH 8/9] x86emul: support AVX-NE-CONVERT insns
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <bdcb4822-397a-0795-08eb-74e661d9b7ae@suse.com>
 <385b175a-5123-3b1f-0663-1a956f5ca3a7@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <385b175a-5123-3b1f-0663-1a956f5ca3a7@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0154.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB9210:EE_
X-MS-Office365-Filtering-Correlation-Id: dee3e646-495c-467d-0b4e-08db3f4265cf
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	wIEhO/hkGpM5wNeAn4MohZKLqQbm84uaj41nffzeTv4yy7J59sC70+IB643+eRc1b13MeZaX8eWZ5E2otLN+L6fSsC9VeftxP/wLiLMcJjLJNR9RV3h7aczMLXpv6a2zET9UVMjP/hpAnouy6aDSstMvvCFkTEpajEU9qgtOaSasmicYWb7UY+QdZrfjkjlblNVFzkMY7x95g2fA75nudxpBDG/z2fSY79JmaRrSu8QUSAvrH9wAb6I+DUXStjeZB6sttz3WmMWaR9Dz8jLFfWllWfrgaoLuK6TSvbHPxgxx0gV/h0uy5ZZor6gR1YoxAhBB1NTFmRxWrEUZibuj2kykXpNtQgEek5MeFnsm3pJwGhksy3xe/AIFhJCBnxQIzKoph4L+SuxuOecMcgeozaMYRfFhsBBqEdGtR6Y5x/dEOIdXiACW0ydPmJo81EllggqRSiJHeX1x2ZR23N+W597t/ql+waLDPZ9deGiU6LRT5Kpb+kULAAjpjLDD9c5lmE0hvRvXttMasC8g7gJYLFOKCXqTd4adAkkVAOr06SWLVXf8FeEr56qciNRREMJg2rNmNVPukuWfXy4f4MX0uYtDFXgABMRerj06DrFmhmHpzBmgoi/2RGzKIuVTuzjXdWN4DhjgE3YtJ2Pgf//jlg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(346002)(396003)(366004)(136003)(376002)(451199021)(36756003)(2906002)(5660300002)(8936002)(8676002)(41300700001)(86362001)(31696002)(38100700002)(31686004)(478600001)(54906003)(2616005)(26005)(6506007)(53546011)(6512007)(186003)(6486002)(6916009)(4326008)(66476007)(66556008)(66946007)(316002)(83380400001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cHZ2L2x4L1NBRzEzc0NRUEFoZW8vUGYyZW1xSklBQXdmam9VV3BwbTNJbUdK?=
 =?utf-8?B?SjVUdVFyQjZoeGFJdzdCWkZBRTlTcktsaDAxaEhaN1dQWnR3cFNmMXVWNXFB?=
 =?utf-8?B?WmhnZnpoUTJ1aUw1L1pkYVh0UEJCSnFtb3lDVXEzTE9SZWloajEzL09KcGRB?=
 =?utf-8?B?RjdjQXBlT3Z3SFNwbi9XaEJhb3haM09yRW1ZZWJzZXZKNVZBQjZqTkFpaXZM?=
 =?utf-8?B?VjAvMktYdUg4TjlKN1hJcVh6R3UwRldrNWRDN1ZYVlptWEx2TlVIMHhxdzkw?=
 =?utf-8?B?SUN2cnhIL3N6NFdYMERMWExpa1NRWWxZcDhwWi9mTTFqYjVCd3liTmUyZlFZ?=
 =?utf-8?B?M3hCekx3ZjRibjdITHNsdTl2bmtOY201ck4rYUs5R1dzUnE5SXdSVTBDRHlx?=
 =?utf-8?B?bkFSTC90cG9QTmdOTVRLTEZMT3NWa29qY0NoQTlhN1hTSEQ5VjdDOE8waXhl?=
 =?utf-8?B?ckpTcENNN0F1UTNheTU3VEZsazV0Q3Y3K3FRNGJiallxR1duU1NuYmZMNjlJ?=
 =?utf-8?B?c1VaeTZobS80S2hMQXJXZEpFVWxXbzJKdXc1SjhnT0YzdWhqRmI4VmpFWUt4?=
 =?utf-8?B?T2dmaWhrT1RxQzhoc1ZNMzg2V0tDWUlYbzJ2N0c4QzV5YloxOWtyQzhsZzRj?=
 =?utf-8?B?eGRybXAvbXVGMXcraEQvb3p1M2FxMC9kWDNLMWRZdElzSCtDR3dZRUkwa2dE?=
 =?utf-8?B?QkJ6Mjh3Vjd2NURGQnJMQnd3SFZEaEFxT2JydHkwMzBlQmZ0MlpGRDF0VDdH?=
 =?utf-8?B?UVoxbWF2UnlxVGREZ3NmbXhKM3ltN1JOS2JQT05HazJWUGx0MVBtZU5DVVgw?=
 =?utf-8?B?Wkp1dU9rNHVWYURVVStIK0txbENpNnQ1TXdoMkQxMjV2YUNjbllScVJteHZI?=
 =?utf-8?B?MlZNWnpIVWwvMWdsK1J2bWpRMjd0WDJNamZtVE9pZG5nNjM2MjFrVUZwT0xo?=
 =?utf-8?B?ZENNYVM1bFozR3p3K1U2RVYwaVdvaUZmOVVia1NYbmd5dStlaFFybWVVdjlB?=
 =?utf-8?B?azdxY2VXdFBlYUZmRkVtTUlJOWlBY0xBK1J1aGNoYXlVZ3JyaXlsZFdsTVFP?=
 =?utf-8?B?YWorTXBUK2syMDA4S0QyYVZlRzZFZ1RpSlg0VHVqeHZQNEZ3WGtrNEZtVTNp?=
 =?utf-8?B?azNCN2tGZ2U2UTRoZzNDTWVWdWpLQ0xlSXFYSHRmcnZROXVPdjczcjFiU29Q?=
 =?utf-8?B?Y3dXMXN1U2JwMXFSZUJ3a1lXS1oweWlreU5LUHlieGZ3M1BUS3FiK0I5TVhN?=
 =?utf-8?B?Q2NLd0QvYitLZXFlRnZtYUJsYmtMUjMxNVRIRDhITXM3MmVtSUZMcFhuNm9D?=
 =?utf-8?B?MVFkZjFLaWRlSEt4NGx5b0lMNWsxOW8zVVZQS0VJZlFJMHNHOEtjR04xNXRu?=
 =?utf-8?B?TVQ0RVJHZEg0RG51M29rZU12SHoxRzZQdVJDd1B0SlRycWE5bExGZzk3NnlH?=
 =?utf-8?B?b09zbDl3aTNBYUxMZTRuOThBSmVrRXBoa25FN0FPeGlka28vNDBQYkFPZGZz?=
 =?utf-8?B?SmR0b0NOL245Uk12bStKaWszSmVZa3hPaE5nQXRZcTVkLzhWL3BpRG9wTnJr?=
 =?utf-8?B?aVVYaHo1L3hTU3lzMUF2TVdUR2xpdEkybzFvandZSDB5SzZBSHgxa1AwMXVZ?=
 =?utf-8?B?d2pFVTFjTW5YNGRNbjJNcEUzQ3A5Umh5bUc0TnJRZ3Z6Zzc4bnd5SzRDUlk4?=
 =?utf-8?B?L2RIelR5QmtrWlZCcnlCenAwVEZHODN0T09uNW1XWTgvYmxqL3E3ZHhGQU11?=
 =?utf-8?B?NUVUaDR5WXN4TXJSL2pvQktRZXM4b1FFZVhUY1J6RHNvSmVZa29GSFE3UGk0?=
 =?utf-8?B?QWhvRjN4Vm5Ub2hueUIwN3JQKy8wNWVqK0hHUUtZK3FkVXlXbTFSdG9OSXMy?=
 =?utf-8?B?bzZpeVZ6bHZEdmQ3NE0vZEtMRm9NYmlLamp3NUN2ZnI0Vkp3ZTUweExsM29i?=
 =?utf-8?B?MUdKR0NYUFJVOHl5YkMvZkExQXVWeUNQTGJuY3A2QVZTMFBsYVdZMThKVUd4?=
 =?utf-8?B?UGhoYUVYQnpKODFYbU1Ga2pwUVpYOTNFYjZNYTdQTzJSUENVQmwrVTBvRjlv?=
 =?utf-8?B?R0FaUjZSVXFOekRla2diNzhSSkhXMzIycUswTFJHQ3J6MTBUdzVHZGtaZExn?=
 =?utf-8?Q?dUYXEW3FJ3Yt5wsA19k6AfO6L?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dee3e646-495c-467d-0b4e-08db3f4265cf
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 12:51:01.3163
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lmfQc3Zcjy/EChEbopmibKoR301jH2FJbmZmeMIXj5mKfHRMKzqJnHC4VOGPNhvWZsm/CKIba8mHzV51J07ULQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB9210

On 06.04.2023 23:22, Andrew Cooper wrote:
> On 04/04/2023 3:54 pm, Jan Beulich wrote:
>> Matching what was done earlier, explicit tests are added only for
>> irregular insn / memory access patterns.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>,

Thanks.

> with two minor requests.
> 
>> --- a/tools/misc/xen-cpuid.c
>> +++ b/tools/misc/xen-cpuid.c
>> @@ -214,7 +214,7 @@ static const char *const str_7c1[32] =
>>  
>>  static const char *const str_7d1[32] =
>>  {
>> -    [ 4] = "avx-vnni-int8",
>> +    [ 4] = "avx-vnni-int8", [ 5] = "avx-ne-convert",
> 
> I'd leave a bit more horizontal space.  These names are getting rather
> long, and we're only 10% into this word.

Sure. I had taken neighboring arrays as reference; I've now switched
to "aligning" with rtm-always-abort.

>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -6208,6 +6208,19 @@ x86_emulate(
>>          host_and_vcpu_must_have(avx512_vbmi2);
>>          goto avx512f_no_sae;
>>  
>> +    case X86EMUL_OPC_VEX   (0x0f38, 0xb0): /* vcvtneoph2ps mem,[xy]mm */
>> +    case X86EMUL_OPC_VEX_66(0x0f38, 0xb0): /* vcvtneeph2ps mem,[xy]mm */
>> +    case X86EMUL_OPC_VEX_F3(0x0f38, 0xb0): /* vcvtneebf162ps mem,[xy]mm */
>> +    case X86EMUL_OPC_VEX_F2(0x0f38, 0xb0): /* vcvtneobf162ps mem,[xy]mm */
>> +        generate_exception_if(ea.type != OP_MEM, EXC_UD);
>> +        /* fall through */
> 
> Only just occurred to me, but we should probably be using fallthrough;
> in new code, now there's a real attribute to use.

I did actually consider doing so (and iirc also already on an earlier
occasion), but that'll be yet another item we also need to cater for
in the harness's x86-emulate.h. For the moment I prefer to stick to
comments, switching over - if necessary for e.g. Misra - all in one
go at some point.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 13:03:26 2023
Message-ID: <225d7d9e-b5a2-7324-0e23-c610dd06f954@suse.com>
Date: Mon, 17 Apr 2023 15:02:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH 9/9] x86emul+VMX: support {RD,WR}MSRLIST
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Kevin Tian <kevin.tian@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c7f748fe-f062-c2fc-4cc4-b2f888633abe@suse.com>
 <b567e068-dcab-b294-9706-ffbecb36de3c@suse.com>
 <1c2cfeec-d64a-3e4b-013a-840f812da12b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <1c2cfeec-d64a-3e4b-013a-840f812da12b@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 07.04.2023 00:03, Andrew Cooper wrote:
> On 04/04/2023 3:55 pm, Jan Beulich wrote:
>> ---
>> TODO: Use VMX tertiary execution control (once bit is known; see
>>       //todo-s) and then further adjust cpufeatureset.h.
>>
>> RFC: In vmx_vmexit_handler() handling is forwarded to the emulator
>>      blindly. Alternatively we could consult the exit qualification and
>>      process just a single MSR at a time (without involving the
>>      emulator), exiting back to the guest after every iteration. (I
>>      don't think a mix of both models makes a lot of sense.)
> 
> {RD,WR}MSRLIST are supposed to be used for context switch paths.  They
> really shouldn't be exiting in the common case.
> 
> What matters here is the conditional probability of a second MSR being
> intercepted too, given that one already has been.  And I don't have a
> clue how to answer this.
> 
> I would not expect Introspection to be intercepting a fastpath MSR. 
> (And if it does, frankly it can live with the consequences.)
> 
> For future scenarios such as reloading PMU/LBR/whatever, these will be
> all-or-nothing and we'd expect to have them eagerly in context anyway.
> 
> If I were going to guess, I'd say that probably MSR_XSS or MSR_SPEC_CTRL
> will be giving us the most grief here, because they're both ones that
> are liable to be touched on a context switch path, and have split
> host/guest bits.

I'm not really certain, but I tend to interpret your reply as
agreement with the choice made (and hence not as a request to change
anything in this regard); clarification would be appreciated.

>> RFC: For PV priv_op_ops would need to gain proper read/write hooks,
>>      which doesn't look desirable (albeit there we could refuse to
>>      handle anything else than x86_seg_none); we may want to consider to
>>      instead not support the feature for PV guests, requiring e.g. Linux
>>      to process the lists in new pvops hooks.
> 
> Ah - funny you should ask.  See patch 2.  These are even better reasons
> not to support on PV guests.

Yeah, with PV dropped there it's quite obvious not to consider it
here either. I'll drop the remark.

>> RFC: I wasn't sure whether to add preemption checks to the loops -
>>      thoughts?
> 
> I'd be tempted to.  Mostly because a guest can exert 64* longest MSR
> worth of pressure here, along with associated emulation overhead.
> 
> 64* "write hypercall page" sounds expensive.  So too does 64* MSR_PAT,
> given all the EPT actions behind the scenes.
> 
> Its probably one of those

Which leaves me inclined to add preemption checking to writes, but
keep reads as they are. Thoughts?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 13:38:13 2023
Message-ID: <881ec3af-19a8-a448-cb61-1667e146344b@citrix.com>
Date: Mon, 17 Apr 2023 14:37:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 3/3] xen/livepatch: Fix .altinstructions safety checks
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
 <20230417121357.3738919-4-andrew.cooper3@citrix.com>
 <3b1f3719-c8e5-8a98-8325-d907c160d81d@suse.com>
In-Reply-To: <3b1f3719-c8e5-8a98-8325-d907c160d81d@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17/04/2023 1:35 pm, Jan Beulich wrote:
> On 17.04.2023 14:13, Andrew Cooper wrote:
>> --- a/xen/common/livepatch.c
>> +++ b/xen/common/livepatch.c
>> @@ -803,28 +803,84 @@ static int prepare_payload(struct payload *payload,
>>      if ( sec )
>>      {
>>  #ifdef CONFIG_HAS_ALTERNATIVE
>> +        /*
>> +         * (As of April 2023), Alternatives are formed of:
>> +         * - An .altinstructions section with an array of struct alt_instr's.
>> +         * - An .altinstr_replacement section containing instructions.
>> +         *
>> +         * An individual alt_instr contains:
>> +         * - An orig reference, pointing into .text with a nonzero length
>> +         * - A repl reference, pointing into .altinstr_replacement
>> +         *
>> +         * It is legal to have zero-length replacements, meaning it is legal
>> +         * for the .altinstr_replacement section to be empty too.  An
>> +         * implementation detail means that a zero-length replacement's repl
>> +         * reference will still be in the .altinstr_replacement section.
> Didn't you agree that "will" is not really true, and it's at best "may", but
> then also doesn't really matter here in the first place (suggesting that the
> sentence might best be dropped, to avoid drawing attention to something that
> might at best confuse the reader as to its relevance)?

Only that "will be at 0" wasn't actually true.

Right now, the repl reference *will* be somewhere in
altinstr_replacement.  It is discussed here because it is what the check
enforces.

As an implementation detail, it is of course free to change in the
future if needs be.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 13:48:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 13:48:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522123.811301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPD5-0000Nz-DS; Mon, 17 Apr 2023 13:47:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522123.811301; Mon, 17 Apr 2023 13:47:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPD5-0000Ns-AC; Mon, 17 Apr 2023 13:47:59 +0000
Received: by outflank-mailman (input) for mailman id 522123;
 Mon, 17 Apr 2023 13:47:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poPD4-0000Nm-Cs
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 13:47:58 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 745da973-dd26-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 15:47:56 +0200 (CEST)
Received: from mail-bn8nam04lp2045.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Apr 2023 09:47:42 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5405.namprd03.prod.outlook.com (2603:10b6:a03:286::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 13:47:39 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 13:47:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 745da973-dd26-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681739276;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=NY3JFoJs35jQOCuOpkTH1KXun9Wc5hi94n7FdtZee/o=;
  b=Y63Q8Ko/Z4w8bVpnF4F3O/ZnOXmrcNKQBkVtT6HGwXlx8THkDPGtu7Eq
   YcaH/9Ez+S3BriKCrnVKor45toHw3+c1YuL2lDanE5z+JoL45+330A/8R
   GM+q3gVn+PnqDY7nj7XmrxK97mKFZx5p0sK32uB0+S9bHG0K5xbJbrEEh
   0=;
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <639a5440-8408-d6c8-4d6f-68e5f7857d2c@citrix.com>
Date: Mon, 17 Apr 2023 14:47:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 1/3] xen/ELF: Fix ELF32 PRI formatters
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
 <20230417121357.3738919-2-andrew.cooper3@citrix.com>
 <0a94cc73-f99b-a616-d342-8d84e8a274b4@suse.com>
In-Reply-To: <0a94cc73-f99b-a616-d342-8d84e8a274b4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0034.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:151::21) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB5405:EE_
X-MS-Office365-Filtering-Correlation-Id: 251c67a1-4f43-4424-bf0a-08db3f4a4f1d
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 251c67a1-4f43-4424-bf0a-08db3f4a4f1d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 13:47:39.2979
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ffTmkc9Y7X2GyPISrxC1AM+d8wSOr9LtCk9FD5bDXT7vfSUiTJriGUjuSrVdbYbcdESmoN18YyoyJ1kwEHkSBLIRyCtJZWbB8kLfMj95N+U=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5405

On 17/04/2023 1:31 pm, Jan Beulich wrote:
> On 17.04.2023 14:13, Andrew Cooper wrote:
>> --- a/xen/common/livepatch_elf.c
>> +++ b/xen/common/livepatch_elf.c
>> @@ -310,12 +310,12 @@ int livepatch_elf_resolve_symbols(struct livepatch_elf *elf)
>>                      break;
>>                  }
>>              }
>> -            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Undefined symbol resolved: %s => %#"PRIxElfAddr"\n",
>> +            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Undefined symbol resolved: %s => 0x%08"PRIxElfAddr"\n",
> I don't see what's wrong with using %# here (and below); I also don't see
> what value it has to zero-pad to 8 digits when the printed value either
> is far below 4G (when representing just a section offset) or likely far
> above (when representing a real address on 64-bit). But once again I'll
> leave judging to the maintainers.

Hmm - I could be persuaded to drop everything in livepatch_elf.c.  I
guess that makes it more consistent with the 64-bit side too.

Ross?

>
>>                      elf->name, elf->sym[i].name, st_value);
>>              break;
>>  
>>          case SHN_ABS:
>> -            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Absolute symbol: %s => %#"PRIxElfAddr"\n",
>> +            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Absolute symbol: %s => 0x%08"PRIxElfAddr"\n",
>>                      elf->name, elf->sym[i].name, sym->st_value);
>>              break;
>>  
>> @@ -344,7 +344,7 @@ int livepatch_elf_resolve_symbols(struct livepatch_elf *elf)
>>  
>>              st_value += (unsigned long)elf->sec[idx].load_addr;
>>              if ( elf->sym[i].name )
>> -                dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Symbol resolved: %s => %#"PRIxElfAddr" (%s)\n",
>> +                dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Symbol resolved: %s => 0x%08"PRIxElfAddr" (%s)\n",
>>                         elf->name, elf->sym[i].name,
>>                         st_value, elf->sec[idx].name);
>>          }
>> diff --git a/xen/include/xen/elfstructs.h b/xen/include/xen/elfstructs.h
>> index 06e6f87c3d80..3124469faeb4 100644
>> --- a/xen/include/xen/elfstructs.h
>> +++ b/xen/include/xen/elfstructs.h
>> @@ -561,8 +561,8 @@ typedef struct {
>>  #endif
>>  
>>  #if defined(ELFSIZE) && (ELFSIZE == 32)
>> -#define PRIxElfAddr	"08x"
>> -#define PRIuElfWord	"8u"
>> +#define PRIxElfAddr 	PRIx32
>> +#define PRIuElfWord 	PRIu32
>>  
>>  #define Elf_Ehdr	Elf32_Ehdr
>>  #define Elf_Phdr	Elf32_Phdr
> This part certainly
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 13:49:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 13:49:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522127.811311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPE0-0000tv-Nr; Mon, 17 Apr 2023 13:48:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522127.811311; Mon, 17 Apr 2023 13:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPE0-0000to-J4; Mon, 17 Apr 2023 13:48:56 +0000
Received: by outflank-mailman (input) for mailman id 522127;
 Mon, 17 Apr 2023 13:48:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poPDz-0000tg-45
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 13:48:55 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2088.outbound.protection.outlook.com [40.107.20.88])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9764f684-dd26-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 15:48:54 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB7636.eurprd04.prod.outlook.com (2603:10a6:20b:281::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 13:48:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 13:48:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9764f684-dd26-11ed-b21e-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3e11fdb1-88f7-4e4b-df0d-fe3b2039c98e@suse.com>
Date: Mon, 17 Apr 2023 15:48:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 3/3] xen/livepatch: Fix .altinstructions safety checks
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
 <20230417121357.3738919-4-andrew.cooper3@citrix.com>
 <3b1f3719-c8e5-8a98-8325-d907c160d81d@suse.com>
 <881ec3af-19a8-a448-cb61-1667e146344b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <881ec3af-19a8-a448-cb61-1667e146344b@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0095.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a1::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB7636:EE_
X-MS-Office365-Filtering-Correlation-Id: cfc3bb5d-af4b-48fb-c99e-08db3f4a6aa1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cfc3bb5d-af4b-48fb-c99e-08db3f4a6aa1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 13:48:25.3565
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tbKzfyAl0ebhmsIkx3bBdT+j7WUs92sU5wrdi2JLyBYZ69u7vFcaE94HXxLw+1lRs17Hg0UzKpzK/C4t+2PRuA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB7636

On 17.04.2023 15:37, Andrew Cooper wrote:
> On 17/04/2023 1:35 pm, Jan Beulich wrote:
>> On 17.04.2023 14:13, Andrew Cooper wrote:
>>> --- a/xen/common/livepatch.c
>>> +++ b/xen/common/livepatch.c
>>> @@ -803,28 +803,84 @@ static int prepare_payload(struct payload *payload,
>>>      if ( sec )
>>>      {
>>>  #ifdef CONFIG_HAS_ALTERNATIVE
>>> +        /*
>>> +         * (As of April 2023), Alternatives are formed of:
>>> +         * - An .altinstructions section with an array of struct alt_instr's.
>>> +         * - An .altinstr_replacement section containing instructions.
>>> +         *
>>> +         * An individual alt_instr contains:
>>> +         * - An orig reference, pointing into .text with a nonzero length
>>> +         * - A repl reference, pointing into .altinstr_replacement
>>> +         *
>>> +         * It is legal to have zero-length replacements, meaning it is legal
>>> +         * for the .altinstr_replacement section to be empty too.  An
>>> +         * implementation detail means that a zero-length replacement's repl
>>> +         * reference will still be in the .altinstr_replacement section.
>> Didn't you agree that "will" is not really true, and it's at best "may", but
>> then also doesn't really matter here in the first place (suggesting that the
>> sentence might best be dropped, to avoid drawing attention to something that
>> might at best confuse the reader as to its relevance)?
> 
> Only that "will be at 0" wasn't actually true.

Oh, right - I'm sorry for the noise.

Jan

> Right now, the repl reference *will* be somewhere in
> altinstr_replacement.  It is discussed here because it is what the check
> enforces.
> 
> As an implementation detail, it is of course free to change in the
> future if needs be.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 13:52:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 13:52:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522131.811321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPH6-0002Me-4Y; Mon, 17 Apr 2023 13:52:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522131.811321; Mon, 17 Apr 2023 13:52:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPH6-0002MX-1g; Mon, 17 Apr 2023 13:52:08 +0000
Received: by outflank-mailman (input) for mailman id 522131;
 Mon, 17 Apr 2023 13:52:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poPH4-0002MN-Gj; Mon, 17 Apr 2023 13:52:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poPH4-0005M2-B4; Mon, 17 Apr 2023 13:52:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poPH3-0000uN-MC; Mon, 17 Apr 2023 13:52:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1poPH3-0000S4-LF; Mon, 17 Apr 2023 13:52:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sMxGa2F8gu2ltvuyvd8nKZ3q2dSR6OqgigMa+tQU6Rw=; b=xgCMyAvjhaPPedD3ZeHeDKFZ0w
	gll40c5VTdaRJHBDdIEwhpVt+9APmhwdgcn3CTW5CRn27yJ57QJA4tnxV+mCDU24hvpGwNBVjI2pc
	6AWXCdBsjOIrZ5WQRwzT20/2huskup1gRv+EkrjfsvYkcdSX4XevMV+A9uuSMv4ktG1g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180281-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180281: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6a8f57ae2eb07ab39a6f0ccad60c760743051026
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Apr 2023 13:52:05 +0000

flight 180281 linux-linus real [real]
flight 180284 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180281/
http://logs.test-lab.xenproject.org/osstest/logs/180284/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6a8f57ae2eb07ab39a6f0ccad60c760743051026
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    0 days
Testing same since   180281  2023-04-17 06:24:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Torvalds <torvalds@linux-foundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6a8f57ae2eb07ab39a6f0ccad60c760743051026
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Apr 16 15:23:53 2023 -0700

    Linux 6.3-rc7


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 13:52:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 13:52:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522135.811331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPHR-0002pF-HE; Mon, 17 Apr 2023 13:52:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522135.811331; Mon, 17 Apr 2023 13:52:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPHR-0002p8-EH; Mon, 17 Apr 2023 13:52:29 +0000
Received: by outflank-mailman (input) for mailman id 522135;
 Mon, 17 Apr 2023 13:52:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poPHP-0002mZ-Bp
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 13:52:27 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1447fd55-dd27-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 15:52:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1447fd55-dd27-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681739544;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=A3p4keaS3hpXGhq0nkJV5Dw39H7+1lYcGAYPx6GvaNw=;
  b=P3v5v+O7maDkmxw/YhrfI655VIuxFj/d4jOuEJz6UNEWTJd59G4IL2J5
   7huyhCcdImFpOzWLhBI3Y9hqUDd1sNR8+Is2yhjpw0WL1/K7HDpt4NJpc
   NmcTzknPkSgiqQ7O7iacvIKF1fxIEQCxgb3KzWNdeavcZYvUzEb+FLDhA
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108251280
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:pIYz56hQmo9dGi+6dY8h/IrUX161dxAKZh0ujC45NGQN5FlHY01je
 htvWTyEOvyJNGX9edpzbYvj80MCupaAy4JmS1RsqiFnEngb9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWj0N8klgZmP6sT4AaBzyB94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQfGRUANkGau9nmzYKAc/hw3fx+AdTkadZ3VnFIlVk1DN4jSJHHBa7L+cVZzHE7gcUm8fT2P
 pRDL2A1NVKZPkMJYw1MYH49tL7Aan3XWjtUsl+K44Ew5HDe1ldZ27nxKtvFPNeNQK25m27B/
 jKcoj2jUkly2Nq3iiOj0HjwrOP1oC74cYAdDI2y06Z4jwjGroAUIEJPDgbqyRWjsWa9XNRFI
 kBS5SsqroA17kWgStS7VBq9yFaUsxhZV9dOHukS7ACW1rGS8wufHnIDTDNKdJohrsBebR4A2
 0KNntjpLSdyq7DTQnWYnp+LqRuiNC5TKnUNDRLoViNcvYOl+ttqyEuSEJA6SvXdYsDJ9S/Yx
 AGvoXBvnoko3cM77Jyq4Qv/3h+xqc2cJuIq3Tk7Tl5J/ysgOt78O9f5tAmHhRpTBN3HFwfc5
 RDoj+DbtblTVs/VyURhVc1XRNmUC+C53CowaLKFN70o7HyT9nGqZui8CxkudR4yYq7oldIEC
 XI/WD+9B7cJZhNGlYctP+qM5z0ClMAM7+jNWPHOdcZpaZNsbgKB9ywGTRfOjzmzzxB0zPxgY
 cvznSOQ4ZEyUP0P8dZLb71Fje9DKt4Wngs/uqwXPzz4iOHDNRZ5uJ8OMUeUb/BR0Z5oVD79q
 o4FX+PTkkU3bQELSnWPmWLlBQxQfCdT6FGfg5A/S9Nv1SI8QDl7VKSJke14E2Gn9owM/tr1E
 riGchcw4DLCabfvcG1mtlgLhGvTYKtC
IronPort-HdrOrdr: A9a23:/JsMH6AY3m9HI0rlHeln55DYdb4zR+YMi2TDtnoBLCC9F/byqy
 nAppgmPHPP5wr5IUtQ6OxoW5PwI080l6QU3WBLB8bHYOCOggLBRuxfBO3ZrQEIcBeOldK1u5
 0AT0F1MqyXMbBc5fyKmHjCYqxQveWvweSKoe/fynt3JDsaFJ2Ilz0JdjpyzCVNNXB77aJQLu
 vj2iPtnUvRRZ1SVLXdOkU4
X-Talos-CUID: 9a23:+vtEvmNnU2guV+5DXxhC+RZFA/weInDt907BPxSAVEt0R+jA
X-Talos-MUID: =?us-ascii?q?9a23=3AXa+gxgxSxuxU3oJnx/ghCHuIRliaqOe3LmIHzb8?=
 =?us-ascii?q?KgfSJNgVyNC+thhHvUIByfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,204,1677560400"; 
   d="scan'208";a="108251280"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: [PATCH v2] x86/livepatch: Fix livepatch application when CET is active
Date: Mon, 17 Apr 2023 14:52:19 +0100
Message-ID: <20230417135219.3776777-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Right now, trying to apply a livepatch on any system with CET shstk (AMD Zen3
or later, Intel Tiger Lake or Sapphire Rapids and later) fails as follows:

  (XEN) livepatch: lp: Verifying enabled expectations for all functions
  (XEN) common/livepatch.c:1591: livepatch: lp: timeout is 30000000ns
  (XEN) common/livepatch.c:1703: livepatch: lp: CPU28 - IPIing the other 127 CPUs
  (XEN) livepatch: lp: Applying 1 functions
  (XEN) hi_func: Hi! (called 1 times)
  (XEN) Hook executing.
  (XEN) Assertion 'local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu))' failed at arch/x86/smp.c:265
  (XEN) *** DOUBLE FAULT ***
  <many double faults>

The assertion failure is from a global (system wide) TLB flush initiated by
modify_xen_mappings().  I'm not entirely sure when this broke, and I'm not
sure exactly what causes the #DFs, but it doesn't really matter either,
because they highlight a latent bug that I'd overlooked in the CET-SS vs
patching work in the first place.

While we're careful to arrange for the patching CPU to avoid encountering
non-shstk memory with transient shstk perms, other CPUs can pick these
mappings up too if they need to re-walk for uarch reasons.

Another bug is that for livepatching, we only disable CET if shadow stacks are
in use.  Running on Intel CET systems when Xen is only using CET-IBT will
crash in arch_livepatch_quiesce() when trying to clear CR0.WP with CR4.CET
still active.

Also, we never went and cleared the dirty bits on .rodata.  This would
matter (for the same reason it matters on .text - it becomes a valid target
for WRSS), but we never actually patch .rodata anyway.

Therefore rework how we do patching for both alternatives and livepatches.

Introduce modify_xen_mappings_lite() with a purpose similar to
modify_xen_mappings(), but stripped down to the bare minimum as it's used in
weird contexts.  Leave all complexity to the caller to handle.

Instead of patching by clearing CR0.WP (and having to jump through some
fragile hoops to disable CET in order to do this), just transiently relax the
permissions on .text via l2_xenmap[].

Note that neither alternatives nor livepatching edit .rodata, so we don't need
to relax those permissions at this juncture.

The perms are relaxed globally, but this is safe enough.  Alternatives run
before we boot APs, and livepatching runs in a quiesced state where the other CPUs
are not doing anything interesting.

This approach is far more robust.

Fixes: 48cdc15a424f ("x86/alternatives: Clear CR4.CET when clearing CR0.WP")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Ross Lagerwall <ross.lagerwall@citrix.com>

v2:
 * Add a fixes tag
 * Put modify_xen_mappings_lite() in init_or_livepatch
 * Fix various comments

Pulling put_pte_flags() out of the loops in modify_xen_mappings_lite() halves
the size of the function.  The code generation of the typesafe pagetable
helpers is terrible, both because of flags needing a 32->64 expand, and
because of _PAGE_NX using cpu_has_nx behind the scenes.  We really should
improve how all of this works.
---
 xen/arch/x86/alternative.c       | 45 +++++++++------------
 xen/arch/x86/livepatch.c         | 56 +++++++++++---------------
 xen/arch/x86/mm.c                | 68 ++++++++++++++++++++++++++++++++
 xen/common/virtual_region.c      | 22 ++++++++---
 xen/include/xen/mm.h             |  1 +
 xen/include/xen/virtual_region.h |  4 +-
 6 files changed, 129 insertions(+), 67 deletions(-)

diff --git a/xen/arch/x86/alternative.c b/xen/arch/x86/alternative.c
index 2383fa66294c..99482766b51f 100644
--- a/xen/arch/x86/alternative.c
+++ b/xen/arch/x86/alternative.c
@@ -382,24 +382,28 @@ static int __init cf_check nmi_apply_alternatives(
      */
     if ( !(alt_done & alt_todo) )
     {
-        unsigned long cr0, cr4;
-
-        cr0 = read_cr0();
-        cr4 = read_cr4();
-
-        if ( cr4 & X86_CR4_CET )
-            write_cr4(cr4 & ~X86_CR4_CET);
-
-        /* Disable WP to allow patching read-only pages. */
-        write_cr0(cr0 & ~X86_CR0_WP);
+        /*
+         * Relax perms on .text to be RWX, so we can modify them.
+         *
+         * This relaxes perms globally, but we run ahead of bringing APs
+         * online, so only have our own TLB to worry about.
+         */
+        modify_xen_mappings_lite(XEN_VIRT_START + MB(2),
+                                 (unsigned long)&__2M_text_end,
+                                 PAGE_HYPERVISOR_RWX);
+        flush_local(FLUSH_TLB_GLOBAL);
 
         _apply_alternatives(__alt_instructions, __alt_instructions_end,
                             alt_done);
 
-        write_cr0(cr0);
-
-        if ( cr4 & X86_CR4_CET )
-            write_cr4(cr4);
+        /*
+         * Reinstate perms on .text to be RX.  This also cleans out the dirty
+         * bits, which matters when CET Shstk is active.
+         */
+        modify_xen_mappings_lite(XEN_VIRT_START + MB(2),
+                                 (unsigned long)&__2M_text_end,
+                                 PAGE_HYPERVISOR_RX);
+        flush_local(FLUSH_TLB_GLOBAL);
 
         alt_done |= alt_todo;
     }
@@ -454,19 +458,6 @@ static void __init _alternative_instructions(bool force)
         panic("Timed out waiting for alternatives self-NMI to hit\n");
 
     set_nmi_callback(saved_nmi_callback);
-
-    /*
-     * When Xen is using shadow stacks, the alternatives clearing CR0.WP and
-     * writing into the mappings set dirty bits, turning the mappings into
-     * shadow stack mappings.
-     *
-     * While we can execute from them, this would also permit them to be the
-     * target of WRSS instructions, so reset the dirty after patching.
-     */
-    if ( cpu_has_xen_shstk )
-        modify_xen_mappings(XEN_VIRT_START + MB(2),
-                            (unsigned long)&__2M_text_end,
-                            PAGE_HYPERVISOR_RX);
 }
 
 void __init alternative_instructions(void)
diff --git a/xen/arch/x86/livepatch.c b/xen/arch/x86/livepatch.c
index f2d783fdc567..a54d991c5f0f 100644
--- a/xen/arch/x86/livepatch.c
+++ b/xen/arch/x86/livepatch.c
@@ -61,46 +61,32 @@ int arch_livepatch_safety_check(void)
 
 int noinline arch_livepatch_quiesce(void)
 {
-    /* If Shadow Stacks are in use, disable CR4.CET so we can modify CR0.WP. */
-    if ( cpu_has_xen_shstk )
-        write_cr4(read_cr4() & ~X86_CR4_CET);
-
-    /* Disable WP to allow changes to read-only pages. */
-    write_cr0(read_cr0() & ~X86_CR0_WP);
+    /*
+     * Relax perms on .text to be RWX, so we can modify them.
+     *
+     * This relaxes perms globally, but all other CPUs are waiting on us.
+     */
+    relax_virtual_region_perms();
+    flush_local(FLUSH_TLB_GLOBAL);
 
     return 0;
 }
 
 void noinline arch_livepatch_revive(void)
 {
-    /* Reinstate WP. */
-    write_cr0(read_cr0() | X86_CR0_WP);
-
-    /* Clobber dirty bits and reinstate CET, if applicable. */
-    if ( IS_ENABLED(CONFIG_XEN_SHSTK) && cpu_has_xen_shstk )
-    {
-        unsigned long tmp;
-
-        reset_virtual_region_perms();
-
-        write_cr4(read_cr4() | X86_CR4_CET);
-
-        /*
-         * Fix up the return address on the shadow stack, which currently
-         * points at arch_livepatch_quiesce()'s caller.
-         *
-         * Note: this is somewhat fragile, and depends on both
-         * arch_livepatch_{quiesce,revive}() being called from the same
-         * function, which is currently the case.
-         *
-         * Any error will result in Xen dying with #CP, and its too late to
-         * recover in any way.
-         */
-        asm volatile ("rdsspq %[ssp];"
-                      "wrssq %[addr], (%[ssp]);"
-                      : [ssp] "=&r" (tmp)
-                      : [addr] "r" (__builtin_return_address(0)));
-    }
+    /*
+     * Reinstate perms on .text to be RX.  This also cleans out the dirty
+     * bits, which matters when CET Shstk is active.
+     *
+     * The other CPUs waiting for us could in principle have re-walked while
+     * we were patching and cached the reduced perms in their TLB.  Therefore,
+     * we need to do a global TLB flush.
+     *
+     * However, we can't use Xen's normal global TLB flush infrastructure, so
+     * delay the TLB flush to arch_livepatch_post_action(), which is called on
+     * all CPUs (including us) on the way out of patching.
+     */
+    tighten_virtual_region_perms();
 }
 
 int arch_livepatch_verify_func(const struct livepatch_func *func)
@@ -197,6 +183,8 @@ void noinline arch_livepatch_revert(const struct livepatch_func *func)
  */
 void noinline arch_livepatch_post_action(void)
 {
+    /* See arch_livepatch_revive() */
+    flush_local(FLUSH_TLB_GLOBAL);
 }
 
 static nmi_callback_t *saved_nmi_callback;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 36a07ef77eae..46df495352e9 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -91,6 +91,7 @@
 #include <xen/ioreq.h>
 #include <xen/kernel.h>
 #include <xen/lib.h>
+#include <xen/livepatch.h>
 #include <xen/mm.h>
 #include <xen/param.h>
 #include <xen/domain.h>
@@ -5879,6 +5880,73 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
     return modify_xen_mappings(s, e, _PAGE_NONE);
 }
 
+/*
+ * Similar to modify_xen_mappings(), but used by the alternatives and
+ * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
+ * responsibility of the caller, and *MUST* not be introduced here.
+ *
+ * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
+ * Must be called with present flags, and over present mappings.
+ * Must be called on leaf page boundaries.
+ */
+void init_or_livepatch modify_xen_mappings_lite(
+    unsigned long s, unsigned long e, unsigned int _nf)
+{
+    unsigned long v = s, fm, nf;
+
+    /* Set of valid PTE bits which may be altered. */
+#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
+    fm = put_pte_flags(FLAGS_MASK);
+    nf = put_pte_flags(_nf & FLAGS_MASK);
+#undef FLAGS_MASK
+
+    ASSERT(nf & _PAGE_PRESENT);
+    ASSERT(IS_ALIGNED(s, PAGE_SIZE) && s >= XEN_VIRT_START);
+    ASSERT(IS_ALIGNED(e, PAGE_SIZE) && e <= XEN_VIRT_END);
+
+    while ( v < e )
+    {
+        l2_pgentry_t *pl2e = &l2_xenmap[l2_table_offset(v)];
+        l2_pgentry_t l2e = l2e_read_atomic(pl2e);
+        unsigned int l2f = l2e_get_flags(l2e);
+
+        ASSERT(l2f & _PAGE_PRESENT);
+
+        if ( l2e_get_flags(l2e) & _PAGE_PSE )
+        {
+            ASSERT(l1_table_offset(v) == 0);
+
+            l2e_write_atomic(pl2e, l2e_from_intpte((l2e.l2 & ~fm) | nf));
+
+            v += 1UL << L2_PAGETABLE_SHIFT;
+            continue;
+        }
+
+        /* else descend to l1 */
+        {
+            l1_pgentry_t *pl1t = map_l1t_from_l2e(l2e);
+
+            while ( v < e )
+            {
+                l1_pgentry_t *pl1e = &pl1t[l1_table_offset(v)];
+                l1_pgentry_t l1e = l1e_read_atomic(pl1e);
+                unsigned int l1f = l1e_get_flags(l1e);
+
+                ASSERT(l1f & _PAGE_PRESENT);
+
+                l1e_write_atomic(pl1e, l1e_from_intpte((l1e.l1 & ~fm) | nf));
+
+                v += 1UL << L1_PAGETABLE_SHIFT;
+
+                if ( l2_table_offset(v) == 0 )
+                    break;
+            }
+
+            unmap_domain_page(pl1t);
+        }
+    }
+}
+
 void __set_fixmap(
     enum fixed_addresses idx, unsigned long mfn, unsigned long flags)
 {
diff --git a/xen/common/virtual_region.c b/xen/common/virtual_region.c
index 5ecdba9c08ed..ddac5c9147e5 100644
--- a/xen/common/virtual_region.c
+++ b/xen/common/virtual_region.c
@@ -92,16 +92,28 @@ void unregister_virtual_region(struct virtual_region *r)
     remove_virtual_region(r);
 }
 
-#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_XEN_SHSTK)
-void reset_virtual_region_perms(void)
+#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_X86)
+void relax_virtual_region_perms(void)
 {
     const struct virtual_region *region;
 
     rcu_read_lock(&rcu_virtual_region_lock);
     list_for_each_entry_rcu( region, &virtual_region_list, list )
-        modify_xen_mappings((unsigned long)region->start,
-                            ROUNDUP((unsigned long)region->end, PAGE_SIZE),
-                            PAGE_HYPERVISOR_RX);
+        modify_xen_mappings_lite((unsigned long)region->start,
+                                 ROUNDUP((unsigned long)region->end, PAGE_SIZE),
+                                 PAGE_HYPERVISOR_RWX);
+    rcu_read_unlock(&rcu_virtual_region_lock);
+}
+
+void tighten_virtual_region_perms(void)
+{
+    const struct virtual_region *region;
+
+    rcu_read_lock(&rcu_virtual_region_lock);
+    list_for_each_entry_rcu( region, &virtual_region_list, list )
+        modify_xen_mappings_lite((unsigned long)region->start,
+                                 ROUNDUP((unsigned long)region->end, PAGE_SIZE),
+                                 PAGE_HYPERVISOR_RX);
     rcu_read_unlock(&rcu_virtual_region_lock);
 }
 #endif
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 9d14aed74baa..b0dc3ba9c98d 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -100,6 +100,7 @@ int map_pages_to_xen(
     unsigned int flags);
 /* Alter the permissions of a range of Xen virtual address space. */
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags);
+void modify_xen_mappings_lite(unsigned long s, unsigned long e, unsigned int flags);
 int destroy_xen_mappings(unsigned long s, unsigned long e);
 /* Retrieve the MFN mapped by VA in Xen virtual address space. */
 mfn_t xen_map_to_mfn(unsigned long va);
diff --git a/xen/include/xen/virtual_region.h b/xen/include/xen/virtual_region.h
index ba408eb87a1a..d05362071135 100644
--- a/xen/include/xen/virtual_region.h
+++ b/xen/include/xen/virtual_region.h
@@ -33,7 +33,9 @@ void setup_virtual_regions(const struct exception_table_entry *start,
 void unregister_init_virtual_region(void);
 void register_virtual_region(struct virtual_region *r);
 void unregister_virtual_region(struct virtual_region *r);
-void reset_virtual_region_perms(void);
+
+void relax_virtual_region_perms(void);
+void tighten_virtual_region_perms(void);
 
 #endif /* __XEN_VIRTUAL_REGION_H__ */
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 13:53:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 13:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522143.811341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPIm-0003Vz-SF; Mon, 17 Apr 2023 13:53:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522143.811341; Mon, 17 Apr 2023 13:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPIm-0003Vs-PE; Mon, 17 Apr 2023 13:53:52 +0000
Received: by outflank-mailman (input) for mailman id 522143;
 Mon, 17 Apr 2023 13:53:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMb/=AI=citrix.com=prvs=464dae365=sergey.dyasli@srs-se1.protection.inumbo.net>)
 id 1poPIm-0003Vm-Aq
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 13:53:52 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 47b98e0f-dd27-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 15:53:51 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47b98e0f-dd27-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681739630;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=UOlxbCFg/XDa6iTXG/9aZ11mIf7hbb1IslY0i63jutw=;
  b=KfsxBJSV42msJk3pB3dLOX4Qn3G5OqLIBAtZq0GiVJHZ9CJSUbNBrp+R
   rWRO649wiZcKNLO9yUVoRENu/jRqqF1WUwAq9hz0mK641mSQJVR7RfVrS
   HGdParPeJvT106jOkE1e2P6xOB8wT6XTf+WOIzzenYOt/o/i6pdNhv2E2
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 105714898
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:rII7j6pNHUHZobOXw4SVeUorQnVeBmIvZRIvgKrLsJaIsI4StFCzt
 garIBmAPPiOZWfxeYsnYd6+px4OvJHdz9RkQAdqpS82Fn9HpZuZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WJwUmAWP6gR5weCziZNVfrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXADstYiqpo86H+puQENlnjN0fcZnPIpxK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVDpUiaqLtx73na1whw+LPsLMDUapqBQsA9ckOw/
 zqboD2lUkBKXDCZ4Su1012FmtLMpgTQSY4sCYa9q99UvnTGkwT/DzVJDADm8JFVkHWWS99Zb
 kAZ5Ccqhawz71CwCMnwWQWip3yJtQJaXMBfe8U44gyQzqvf4y6CG3MJCDVGbbQOq8seVTEsk
 FiTkLvBBzN1t6aOYWmA7brSpjS3UQAXMGsDaCksXQYDpd75r+kblQnTR9xuFKq0iNzdGjzqx
 T2O6i8kiN0uYdUjjvvhuwqd2nT1+8aPF1RujunKYo67xghZaLSPQ6CZ03Hwt8ZLJp+lEwmlo
 mdRzqBy89sy4YGxeD2lGbtdRe3ytqvUbFUwknY0QcB/qm3FF2qLONkJvWogfBoB3tMsI2eBX
 aPFhe9GCHa/1lOOZLQ/XY++At9CIUPIRYW8DaC8gjajj/FMmO67EMJGPxT4M5jFyhRErE3GE
 c7znTyQJXgbE7976zG9Wv0Q17QmrghnmzOKG8+jl0n6juLCDJJwdVviGALXBt3VEYve+FmFm
 zqhH5DiJ+pjvB3WPXCMrN97waEiJnknH5Hmw/Fqmhq4ClM+QgkJUqaBqY7NjqQ5x8y5YM+Up
 CDiMqKZoXKj7UD6xfKiMCg7Muy0BcYh9BrW/0UEZD6V5pTqWq73hI93Snf9VeBPGDBLpRKsc
 8Q4Rg==
IronPort-HdrOrdr: A9a23:v9ZHuqgKl24YdpJcsspT+vtzSnBQXssji2hC6mlwRA09TyX4ra
 2TdZEgvnXJYVkqKRIdcK+7Scu9qB/nm6KdgrN8AV7BZmnbUQKTRelfBODZrAEIdReeygdV79
 YET5RD
X-Talos-CUID: 9a23:S61bc2BTFpyKd836EyJH+HQ+PuwUSFT2lkfbKVaEVDZNVqLAHA==
X-Talos-MUID: =?us-ascii?q?9a23=3AZJs+1A7/fOp8FgmfbKcx3/lcxox504WBIVwwkq4?=
 =?us-ascii?q?hkNiICzFLPmq8sgqeF9o=3D?=
X-IronPort-AV: E=Sophos;i="5.99,204,1677560400"; 
   d="scan'208";a="105714898"
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Sergey Dyasli <sergey.dyasli@citrix.com>
Subject: [PATCH v5 1/3] tools/xenctrl: add xc_get_cpu_version()
Date: Mon, 17 Apr 2023 14:53:33 +0100
Message-ID: <20230417135335.17176-2-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230417135335.17176-1-sergey.dyasli@citrix.com>
References: <20230417135335.17176-1-sergey.dyasli@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

As a wrapper for the XENPF_get_cpu_version platform op.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v4 --> v5:
- Added Reviewed-by

v3 --> v4:
- Replaced DECLARE_PLATFORM_OP
- Removed NULL checks
---
 tools/include/xenctrl.h   |  1 +
 tools/libs/ctrl/xc_misc.c | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 05967ecc92..34b3b25289 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1186,6 +1186,7 @@ int xc_physinfo(xc_interface *xch, xc_physinfo_t *info);
 int xc_cputopoinfo(xc_interface *xch, unsigned *max_cpus,
                    xc_cputopo_t *cputopo);
 int xc_microcode_update(xc_interface *xch, const void *buf, size_t len);
+int xc_get_cpu_version(xc_interface *xch, struct xenpf_pcpu_version *cpu_ver);
 int xc_numainfo(xc_interface *xch, unsigned *max_nodes,
                 xc_meminfo_t *meminfo, uint32_t *distance);
 int xc_pcitopoinfo(xc_interface *xch, unsigned num_devs,
diff --git a/tools/libs/ctrl/xc_misc.c b/tools/libs/ctrl/xc_misc.c
index 265f15ec2d..90d50faa4f 100644
--- a/tools/libs/ctrl/xc_misc.c
+++ b/tools/libs/ctrl/xc_misc.c
@@ -226,6 +226,23 @@ int xc_microcode_update(xc_interface *xch, const void *buf, size_t len)
     return ret;
 }
 
+int xc_get_cpu_version(xc_interface *xch, struct xenpf_pcpu_version *cpu_ver)
+{
+    int ret;
+    struct xen_platform_op op = {
+        .cmd = XENPF_get_cpu_version,
+        .u.pcpu_version.xen_cpuid = cpu_ver->xen_cpuid,
+    };
+
+    ret = do_platform_op(xch, &op);
+    if ( ret != 0 )
+        return ret;
+
+    *cpu_ver = op.u.pcpu_version;
+
+    return 0;
+}
+
 int xc_cputopoinfo(xc_interface *xch, unsigned *max_cpus,
                    xc_cputopo_t *cputopo)
 {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 13:53:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 13:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522144.811351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPIr-0003m0-4c; Mon, 17 Apr 2023 13:53:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522144.811351; Mon, 17 Apr 2023 13:53:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPIr-0003lt-1H; Mon, 17 Apr 2023 13:53:57 +0000
Received: by outflank-mailman (input) for mailman id 522144;
 Mon, 17 Apr 2023 13:53:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMb/=AI=citrix.com=prvs=464dae365=sergey.dyasli@srs-se1.protection.inumbo.net>)
 id 1poPIq-0003Vm-7O
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 13:53:56 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4af54abe-dd27-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 15:53:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4af54abe-dd27-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681739635;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=7KS8mI+2XK81GgSuy9LkhsX3/pJwxpsay4LFxPQZ9HE=;
  b=ML4mF6vfk1UA7dfhfIy84o46AnNrd2IKeasiYeKrVyv/HgYLxT0NrBlP
   Th1LNcRqrYX4acIQlOpb5388bRd/yGwqPXYdVJlPNP85ZydX7/EhkGD68
   32mpLTwFF/9VD8M6YK0b045Qtz/0YFMkTHOx7tY5XhHcgvTyiDQ9a6yvf
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 105714904
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:fOWrdK11+YApajHoh/bD5eRxkn2cJEfYwER7XKvMYLTBsI5bpzYDy
 GZKUWGAO62KNDb9LohxOd7k9EtU7JHWmtBiQQNrpC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS+XuDgNyo4GlD5gBnOqgS1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfEWFr5
 aUnDiI2S0qjjf6/wre7Y9Y8iZF2RCXrFNt3VnBIyDjYCbAtQIzZQrWM7thdtNsyrpkQR7CEP
 ZNfMGcxKk2aOHWjOX9OYH46tPylnHbyYntUuVuOoasf6GnP1g1hlrPqNbI5f/TTHZgKxxrJ/
 j6uE2LRGCgLKuOS0mW58F2O36yVmh6qfIEQPejtnhJtqALKnTFCYPEMbnOrrP/8hkOgVtZ3L
 00P5jFovaU07FasTNT2Q1u/unHslhwWVsdUEuY6wBqQ0aeS6AGcbkAbShZRZdpgs9U5LRQo2
 UWOhMjBHiF0vfueTnf13rWJqTK/PwAFIGlEYjULJSMe+MXqqow3ihPJT/5gHbSzg9mzHiv/q
 w1mtwBn2e9V15RSkfzmoxae2WnESoX1ohAd9gXyTjuayBFCQdSFbZCh613bxMkQI9PMJrWeh
 0Toi/Ry/chXU8HUyHfcHbRRdF26z63baWOB2DaDC7Fkrm3woCD7IOi89RkkfC9U3tA4lSgFi
 aM5kSdY/9dtMXSjdsebiKrhWp1xncAM+TkIP804j+aigbArLmdrBAk0OSatM5nFySDAa50XN
 5aBatqLBn0HE6lhxzfeb75DgeZ1mXhmmD6MHcyTI/GbPV22PSf9dFv4GAHWMrBRAF2s+205D
 Oqzx+PVkk4CAYUSkwHc8JIJLEBiEEXX8ave8pQNHsbae1oOJY3UI6OJqV/XU9A/zvs9eyah1
 i3VZ3K0P3Kl3SWddl7SOi46AF4tNL4mxU8G0eUXFQ7A8xAejUyHts/zq7NfkWEbydFe
IronPort-HdrOrdr: A9a23:OQ2t/a7VQnXfEhFUYQPXwMjXdLJyesId70hD6qhwISY7TiX4rb
 HJoB11737JYVoqNU3I3OrwWpVoIkmskqKdg7NwAV7KZmCP0wGVxcNZnO7fKlXbaknDH4Vmu5
 uIHZITNDSJNykYsfrH
X-Talos-CUID: 9a23:na9ANGFBAGyK2kkGqmJ39GsEQ/8DYEb4llfaP0a6DGZAd+2aHAo=
X-Talos-MUID: 9a23:TTtyRwYKBFHTvOBTrQb9mg5vL5hU4YuMUQNXlqhXn8qGHHkl
X-IronPort-AV: E=Sophos;i="5.99,204,1677560400"; 
   d="scan'208";a="105714904"
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Sergey Dyasli <sergey.dyasli@citrix.com>
Subject: [PATCH v5 3/3] tools/xen-ucode: print information about currently loaded ucode
Date: Mon, 17 Apr 2023 14:53:35 +0100
Message-ID: <20230417135335.17176-4-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230417135335.17176-1-sergey.dyasli@citrix.com>
References: <20230417135335.17176-1-sergey.dyasli@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Add an option to the xen-ucode tool to print the currently loaded ucode
revision, and also print it as part of the usage info.  Print the CPU
signature and platform flags as well.  The raw data comes from the
XENPF_get_cpu_version and XENPF_get_ucode_revision platform ops.

Example output:
    Intel: CPU signature 06-55-04 (raw 0x00050654) pf 0x1 revision 0x02006e05
      AMD: CPU signature 19-01-01 (raw 0x00a00f11) revision 0x0a0011ce

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v4 --> v5:
- Changed AMD output to be FF-MM-SS instead of famXX
- Modified usage string
- Fixed fprintf indentation
- Printing error messages always to stderr
- Use appropriate exit codes in show_curr_cpu()
---
 tools/misc/xen-ucode.c | 85 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 75 insertions(+), 10 deletions(-)

diff --git a/tools/misc/xen-ucode.c b/tools/misc/xen-ucode.c
index ad32face2b..c6ae6498d6 100644
--- a/tools/misc/xen-ucode.c
+++ b/tools/misc/xen-ucode.c
@@ -12,22 +12,95 @@
 #include <fcntl.h>
 #include <xenctrl.h>
 
+static xc_interface *xch;
+
+static const char intel_id[] = "GenuineIntel";
+static const char   amd_id[] = "AuthenticAMD";
+
+static void show_curr_cpu(FILE *f)
+{
+    int ret;
+    struct xenpf_pcpu_version cpu_ver = { .xen_cpuid = 0 };
+    struct xenpf_ucode_revision ucode_rev = { .cpu = 0 };
+    /* Exit with 2 when called from the usage path, 1 on normal errors */
+    int exit_code = (f == stderr) ? 2 : 1;
+
+    ret = xc_get_cpu_version(xch, &cpu_ver);
+    if ( ret )
+    {
+        fprintf(stderr, "Failed to get CPU information. (err: %s)\n",
+                strerror(errno));
+        exit(exit_code);
+    }
+
+    ret = xc_get_ucode_revision(xch, &ucode_rev);
+    if ( ret )
+    {
+        fprintf(stderr, "Failed to get microcode information. (err: %s)\n",
+                strerror(errno));
+        exit(exit_code);
+    }
+
+    /*
+     * Print the signature in a form that makes it easy to identify which
+     * ucode blob to load, e.g.:
+     *
+     *      Intel:   /lib/firmware/intel-ucode/06-55-04
+     *      AMD:     /lib/firmware/amd-ucode/microcode_amd_fam19h.bin
+     */
+    if ( memcmp(cpu_ver.vendor_id, intel_id,
+                sizeof(cpu_ver.vendor_id)) == 0 )
+    {
+        fprintf(f,
+                "CPU signature %02x-%02x-%02x (raw 0x%08x) pf %#x revision 0x%08x\n",
+                cpu_ver.family, cpu_ver.model, cpu_ver.stepping,
+                ucode_rev.signature, ucode_rev.pf, ucode_rev.revision);
+    }
+    else if ( memcmp(cpu_ver.vendor_id, amd_id,
+                     sizeof(cpu_ver.vendor_id)) == 0 )
+    {
+        fprintf(f,
+                "CPU signature %02x-%02x-%02x (raw 0x%08x) revision 0x%08x\n",
+                cpu_ver.family, cpu_ver.model, cpu_ver.stepping,
+                ucode_rev.signature, ucode_rev.revision);
+    }
+    else
+    {
+        fprintf(f, "Unsupported CPU vendor: %.12s\n", cpu_ver.vendor_id);
+        exit(exit_code);
+    }
+}
+
 int main(int argc, char *argv[])
 {
     int fd, ret;
     char *filename, *buf;
     size_t len;
     struct stat st;
-    xc_interface *xch;
+
+    xch = xc_interface_open(NULL, NULL, 0);
+    if ( xch == NULL )
+    {
+        fprintf(stderr, "Error opening xc interface. (err: %s)\n",
+                strerror(errno));
+        exit(1);
+    }
 
     if ( argc < 2 )
     {
         fprintf(stderr,
                 "xen-ucode: Xen microcode updating tool\n"
-                "Usage: %s <microcode blob>\n", argv[0]);
+                "Usage: %s [<microcode file> | show-cpu-info]\n", argv[0]);
+        show_curr_cpu(stderr);
         exit(2);
     }
 
+    if ( !strcmp(argv[1], "show-cpu-info") )
+    {
+        show_curr_cpu(stdout);
+        return 0;
+    }
+
     filename = argv[1];
     fd = open(filename, O_RDONLY);
     if ( fd < 0 )
@@ -52,14 +125,6 @@ int main(int argc, char *argv[])
         exit(1);
     }
 
-    xch = xc_interface_open(NULL, NULL, 0);
-    if ( xch == NULL )
-    {
-        fprintf(stderr, "Error opening xc interface. (err: %s)\n",
-                strerror(errno));
-        exit(1);
-    }
-
     ret = xc_microcode_update(xch, buf, len);
     if ( ret )
     {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 13:54:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 13:54:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522145.811361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPIz-000494-EQ; Mon, 17 Apr 2023 13:54:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522145.811361; Mon, 17 Apr 2023 13:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPIz-00048v-Bd; Mon, 17 Apr 2023 13:54:05 +0000
Received: by outflank-mailman (input) for mailman id 522145;
 Mon, 17 Apr 2023 13:54:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMb/=AI=citrix.com=prvs=464dae365=sergey.dyasli@srs-se1.protection.inumbo.net>)
 id 1poPIy-00047e-Ip
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 13:54:04 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4def2296-dd27-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 15:54:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4def2296-dd27-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681739641;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=PgPnbMZmAtpUANOitlWrq21FjsYwMoYfOaJIFf4YOcw=;
  b=CvLmJ+HGOBFfrtLyVMnFdWMJ6rrlXAP0qkAv3tv91sa2j0OZjhY7s2RA
   lJxkQfnb96TScofw4pFj8+ydD4dKTkoVspn8WnITbM81hYCNNbHEFRkl6
   jbWqK/BJDBHWvaSmIw5oVRcsnj4yPHMHdM/co6r6c3ZmUKabusmch+jia
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106220666
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:JkgybKAVXgW+qRVW/x7jw5YqxClBgxIJ4kV8jS/XYbTApDolhTEPm
 GNLW2HSP/6OZmSjeopwO9i1pB9Sv57dm9IxQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G9B4QRnDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIwo7pMHlpE9
 eYkNCEKLTfAoPDq0L6+Y7w57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTIIkzhuillz/zYjRDrFO9rqsr+WnDigd21dABNfKMIoLQH50LwBjwS
 mTu+lSmGx9GEOWllGCU43jxxc+IvHqrcddHfFG/3qEz2wDCroAJMzUGWF3+rfSnh0qWX9NEN
 1dS6icotbI19kGgUp/6RRLQiHyOswMYWtFQO/Yn8wzLwa3Riy6GAkAUQzgHb8Yp3Oc0SiYtz
 UShhM7yCHpkt7j9YXCA8raZqxuiNC5TKnUNDQcfVhcM6dTnpIA1jzrMQ8xlHarzicf6cQwc2
 BjT8nJ43e9Ky5dWiePipwuvby+QSobhF1IO+T7dXniZ8hJ+J6nmQJ70+VTexKMVRGqGdWVtr
 EToiuDHsrBXUcrcyX3RKAkeNOr3vqjYaVUwlXYqRsB8rGr1phZPaKgKuFlDyFFV3tHokNMDS
 Gvaoktv6ZBaJxNGhocnMtvqW6zGIUUNfOkJt8w4jfIUOPCdjCfdoElTibe4hggBanQEn6AlI
 ou8es2xF3scAqkP5GPoF75Djudzm31hnT+7qXXHI/OPiOP2WZJoYe1dbAvmgh4Rt8toXzk5A
 /4AbpDXmn2zocX1YzXN8J57EG3m2UMTXMisw+QOL77rH+aTMD15YxMn6e97KtMNcmU8vrugw
 0xRrWcCkAKl2iafeVvTAp2hAZu2NatCQbsAFXREFT6VN7ILP+5DMI93m0MLQIQa
IronPort-HdrOrdr: A9a23:SpMJWqqFQNsMS5y66q00PjcaV5o/eYIsimQD101hICG8cqSj+P
 xG/c5rsSMc5wxxZJhNo7290cq7MBbhHPxOgbX5VI3KNGKNhILBFvAB0WKI+VPd8kPFmtK1rZ
 0QEJRDNA==
X-Talos-CUID: 9a23:B6aS52PlugfumO5DAXVc63UeGtoZbmzaj1CTBF3kGWpsYejA
X-Talos-MUID: 9a23:r6IPVQaZznoZjuBTsg+znT8+EP5Svv6/GUpQ0rVBmpDUHHkl
X-IronPort-AV: E=Sophos;i="5.99,204,1677560400"; 
   d="scan'208";a="106220666"
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Sergey Dyasli <sergey.dyasli@citrix.com>
Subject: [PATCH v5 0/3] xen-ucode: print information about currently loaded ucode
Date: Mon, 17 Apr 2023 14:53:32 +0100
Message-ID: <20230417135335.17176-1-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain

Currently it's impossible to get a CPU's microcode revision from Xen
after late loading without looking into the Xen logs, which is not
always convenient.  Add an option to the xen-ucode tool to print the
currently loaded ucode revision.

Sergey Dyasli (3):
  tools/xenctrl: add xc_get_cpu_version()
  x86/platform: introduce XENPF_get_ucode_revision
  tools/xen-ucode: print information about currently loaded ucode

 tools/include/xenctrl.h                  |  3 +
 tools/libs/ctrl/xc_misc.c                | 35 ++++++++++
 tools/misc/xen-ucode.c                   | 85 +++++++++++++++++++++---
 xen/arch/x86/platform_hypercall.c        | 29 ++++++++
 xen/arch/x86/x86_64/platform_hypercall.c |  4 ++
 xen/include/public/platform.h            | 11 +++
 xen/include/xlat.lst                     |  1 +
 7 files changed, 158 insertions(+), 10 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 13:54:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 13:54:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522147.811371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPJ1-0004XO-N3; Mon, 17 Apr 2023 13:54:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522147.811371; Mon, 17 Apr 2023 13:54:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poPJ1-0004Wn-Jo; Mon, 17 Apr 2023 13:54:07 +0000
Received: by outflank-mailman (input) for mailman id 522147;
 Mon, 17 Apr 2023 13:54:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMb/=AI=citrix.com=prvs=464dae365=sergey.dyasli@srs-se1.protection.inumbo.net>)
 id 1poPJ0-00047e-FX
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 13:54:06 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 507e64cb-dd27-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 15:54:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 507e64cb-dd27-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681739644;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=TNVm1fLgaF4ksJwYK+L9Xd3vsJAQXLNOU+yZsnmqhF8=;
  b=TStnw5k8DyPG5+EoF3ebiAgB6qLAMOTYUo2jtcEVWclaWM9x8hIeJKlz
   EW5jSmZ20YqvD/gZqhPfQ16EXU0E1l8+i+AM62egBE3ixPvm903By3+Zw
   j+7jkAteZPgKmgDBKnPH02DlydqmK7Jkx78OFkkTfmDs6m96a7EY3kE4U
   E=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106220668
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:FCIWI6PYGGENOVPvrR2Sl8FynXyQoLVcMsEvi/4bfWQNrUpxgjNUz
 GQeWmmCM6veNDD1coskYN6/8hgPvZbUyYRlSAto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tE5wNmPJingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0r1wOEtvz
 N0qES5XSD2qhOmM37GET9A506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLolkf2ni2i5fyxRs1aUjaE2/3LS3Ep6172F3N/9I4TUH58NwBjIz
 o7A11b4GzQ2MM618wiI8kKwod72ty6hQatHQdVU8dY12QbOlwT/EiY+RVa95PW0lEO6c9ZeM
 FAPvDojq7Ao806mRcW7WAe3yFaGtBMBX9tbE8Uh9RqAjKHT5m6xGWwsXjNHLts8u6ceRzMw0
 USSt8j0HjEpu7qQIVqf67OVoDWaKSUTa2gYakcsVhAZ6tPupIUyiBPnTdt5FqOxyNrvFlnNL
 yui9XZkwe9J1IhSivv9pAqc696xmnTXZlUy3y/2Z0OX1x0jQqOMZIeS9lvk6M8Vee51UWK9U
 Gg4d9m2tb5eVM3WxXHcHI3hD5nyua/bbWS0bUpHWsB4qm/zoyPLkZV4umkWGat/DioTldYFi
 mf3sBgZ2pJcNWDCgURfM9PoUJRCIUQN+L3YuhHogjlmOMIZmPevpn0GWKJp9zmFfLIQua8+I
 4yHVs2nEGwXD69qpBLvGbdEj+Bznn1jmjuPLXwe8/hA+ePHDEN5tJ9faAfeBgzHxPjsTPrpH
 yZ3aJLRlkQ3vBzWaSjL648DRW03wYwALcmu8aR/L7fTSjeK7Ul9U5c9N5t9Id0690mU/8+Ul
 kyAtrhwkgKn3yKccVXUMxiOqtrHBP5CkJ7yBgR0VX7A5pTpSdbHAHs3H3fvQYQayQ==
IronPort-HdrOrdr: A9a23:vtywhq4Rd9P0TF1OxwPXwMjXdLJyesId70hD6qhwISY7TiX4rb
 HJoB11737JYVoqNU3I3OrwWpVoIkmskqKdg7NwAV7KZmCP0wGVxcNZnO7fKlXbaknDH4Vmu5
 uIHZITNDSJNykYsfrH
X-Talos-CUID: 9a23:uBjjVGHGGAwWRGTxqmI883c+O/kEQETDllH9CkyzM3lqd7isHAo=
X-Talos-MUID: =?us-ascii?q?9a23=3AvnI9FAystrKaxT5FqAv2ErW2aOKaqJavEWUhlYc?=
 =?us-ascii?q?sgfuJNRxsHAakpjntW6Zyfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,204,1677560400"; 
   d="scan'208";a="106220668"
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Sergey Dyasli <sergey.dyasli@citrix.com>
Subject: [PATCH v5 2/3] x86/platform: introduce XENPF_get_ucode_revision
Date: Mon, 17 Apr 2023 14:53:34 +0100
Message-ID: <20230417135335.17176-3-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230417135335.17176-1-sergey.dyasli@citrix.com>
References: <20230417135335.17176-1-sergey.dyasli@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

Currently it's impossible to get a CPU's microcode revision from Xen after
late loading without looking into the Xen logs, which is not always
convenient.

Add a new platform op in order to get the required data from Xen and
provide a wrapper for libxenctrl.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
v4 --> v5:
- Added Reviewed-by

v3 --> v4:
- clarified the commit message
- Renamed "ucode version" to "ucode revision"
- Removed DECLARE_PLATFORM_OP and NULL checks
- Added a TODO comment about parked CPUs
- Renamed struct xenpf_ucode_revision fields
---
 tools/include/xenctrl.h                  |  2 ++
 tools/libs/ctrl/xc_misc.c                | 18 +++++++++++++++
 xen/arch/x86/platform_hypercall.c        | 29 ++++++++++++++++++++++++
 xen/arch/x86/x86_64/platform_hypercall.c |  4 ++++
 xen/include/public/platform.h            | 11 +++++++++
 xen/include/xlat.lst                     |  1 +
 6 files changed, 65 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 34b3b25289..1149f805ba 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1187,6 +1187,8 @@ int xc_cputopoinfo(xc_interface *xch, unsigned *max_cpus,
                    xc_cputopo_t *cputopo);
 int xc_microcode_update(xc_interface *xch, const void *buf, size_t len);
 int xc_get_cpu_version(xc_interface *xch, struct xenpf_pcpu_version *cpu_ver);
+int xc_get_ucode_revision(xc_interface *xch,
+                          struct xenpf_ucode_revision *ucode_rev);
 int xc_numainfo(xc_interface *xch, unsigned *max_nodes,
                 xc_meminfo_t *meminfo, uint32_t *distance);
 int xc_pcitopoinfo(xc_interface *xch, unsigned num_devs,
diff --git a/tools/libs/ctrl/xc_misc.c b/tools/libs/ctrl/xc_misc.c
index 90d50faa4f..4159294b2e 100644
--- a/tools/libs/ctrl/xc_misc.c
+++ b/tools/libs/ctrl/xc_misc.c
@@ -243,6 +243,24 @@ int xc_get_cpu_version(xc_interface *xch, struct xenpf_pcpu_version *cpu_ver)
     return 0;
 }
 
+int xc_get_ucode_revision(xc_interface *xch,
+                          struct xenpf_ucode_revision *ucode_rev)
+{
+    int ret;
+    struct xen_platform_op op = {
+        .cmd = XENPF_get_ucode_revision,
+        .u.ucode_revision.cpu = ucode_rev->cpu,
+    };
+
+    ret = do_platform_op(xch, &op);
+    if ( ret != 0 )
+        return ret;
+
+    *ucode_rev = op.u.ucode_revision;
+
+    return 0;
+}
+
 int xc_cputopoinfo(xc_interface *xch, unsigned *max_cpus,
                    xc_cputopo_t *cputopo)
 {
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index a2d9526355..9ff2da8fc3 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -640,6 +640,35 @@ ret_t do_platform_op(
     }
     break;
 
+    case XENPF_get_ucode_revision:
+    {
+        struct xenpf_ucode_revision *rev = &op->u.ucode_revision;
+
+        if ( !get_cpu_maps() )
+        {
+            ret = -EBUSY;
+            break;
+        }
+
+        /* TODO: make it possible to know ucode revisions for parked CPUs */
+        if ( (rev->cpu >= nr_cpu_ids) || !cpu_online(rev->cpu) )
+            ret = -ENOENT;
+        else
+        {
+            const struct cpu_signature *sig = &per_cpu(cpu_sig, rev->cpu);
+
+            rev->signature = sig->sig;
+            rev->pf = sig->pf;
+            rev->revision = sig->rev;
+        }
+
+        put_cpu_maps();
+
+        if ( __copy_field_to_guest(u_xenpf_op, op, u.ucode_revision) )
+            ret = -EFAULT;
+    }
+    break;
+
     case XENPF_cpu_online:
     {
         int cpu = op->u.cpu_ol.cpuid;
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index 5bf6b958d2..99440f4076 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -28,6 +28,10 @@ CHECK_pf_pcpuinfo;
 CHECK_pf_pcpu_version;
 #undef xen_pf_pcpu_version
 
+#define xen_pf_ucode_revision xenpf_ucode_revision
+CHECK_pf_ucode_revision;
+#undef xen_pf_ucode_revision
+
 #define xen_pf_enter_acpi_sleep xenpf_enter_acpi_sleep
 CHECK_pf_enter_acpi_sleep;
 #undef xen_pf_enter_acpi_sleep
diff --git a/xen/include/public/platform.h b/xen/include/public/platform.h
index 60caa5ce7e..15777b5416 100644
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -614,6 +614,16 @@ DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);
 typedef struct dom0_vga_console_info xenpf_dom0_console_t;
 DEFINE_XEN_GUEST_HANDLE(xenpf_dom0_console_t);
 
+#define XENPF_get_ucode_revision 65
+struct xenpf_ucode_revision {
+    uint32_t cpu;             /* IN:  CPU number to get the revision from.  */
+    uint32_t signature;       /* OUT: CPU signature (CPUID.1.EAX).          */
+    uint32_t pf;              /* OUT: Platform Flags (Intel only).          */
+    uint32_t revision;        /* OUT: Microcode Revision.                   */
+};
+typedef struct xenpf_ucode_revision xenpf_ucode_revision_t;
+DEFINE_XEN_GUEST_HANDLE(xenpf_ucode_revision_t);
+
 /*
  * ` enum neg_errnoval
  * ` HYPERVISOR_platform_op(const struct xen_platform_op*);
@@ -645,6 +655,7 @@ struct xen_platform_op {
         xenpf_resource_op_t           resource_op;
         xenpf_symdata_t               symdata;
         xenpf_dom0_console_t          dom0_console;
+        xenpf_ucode_revision_t        ucode_revision;
         uint8_t                       pad[128];
     } u;
 };
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index d601a8a984..9c41948514 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -157,6 +157,7 @@
 ?	xenpf_pcpuinfo			platform.h
 ?	xenpf_pcpu_version		platform.h
 ?	xenpf_resource_entry		platform.h
+?	xenpf_ucode_revision		platform.h
 ?	pmu_data			pmu.h
 ?	pmu_params			pmu.h
 !	sched_poll			sched.h
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 14:00:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 14:00:15 +0000
Message-ID: <d83288c5-6247-ef7d-b9ba-8bf24c7831ac@suse.com>
Date: Mon, 17 Apr 2023 15:59:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230417135219.3776777-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230417135219.3776777-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 17.04.2023 15:52, Andrew Cooper wrote:
> Right now, trying to apply a livepatch on any system with CET shstk (AMD Zen3
> or later, Intel Tiger Lake or Sapphire Rapids and later) fails as follows:
> 
>   (XEN) livepatch: lp: Verifying enabled expectations for all functions
>   (XEN) common/livepatch.c:1591: livepatch: lp: timeout is 30000000ns
>   (XEN) common/livepatch.c:1703: livepatch: lp: CPU28 - IPIing the other 127 CPUs
>   (XEN) livepatch: lp: Applying 1 functions
>   (XEN) hi_func: Hi! (called 1 times)
>   (XEN) Hook executing.
>   (XEN) Assertion 'local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu))' failed at arch/x86/smp.c:265
>   (XEN) *** DOUBLE FAULT ***
>   <many double faults>
> 
> The assertion failure is from a global (system wide) TLB flush initiated by
> modify_xen_mappings().  I'm not entirely sure when this broke, and I'm not
> sure exactly what causes the #DF's, but it doesn't really matter either
> because they highlight a latent bug that I'd overlooked in the CET-SS vs
> patching work in the first place.
> 
> While we're careful to arrange for the patching CPU to avoid encountering
> non-shstk memory with transient shstk perms, other CPUs can pick these
> mappings up too if they need to re-walk for uarch reasons.
> 
> Another bug is that for livepatching, we only disable CET if shadow stacks are
> in use.  Running on Intel CET systems when Xen is only using CET-IBT will
> crash in arch_livepatch_quiesce() when trying to clear CR0.WP with CR4.CET
> still active.
> 
> Also, we never went and cleared the dirty bits on .rodata.  This would
> matter (for the same reason it matters on .text - it becomes a valid target
> for WRSS), but we never actually patch .rodata anyway.
> 
> Therefore rework how we do patching for both alternatives and livepatches.
> 
> Introduce modify_xen_mappings_lite() with a purpose similar to
> modify_xen_mappings(), but stripped down to the bare minimum as it's used in
> weird contexts.  Leave all complexity to the caller to handle.
> 
> Instead of patching by clearing CR0.WP (and having to jump through some
> fragile hoops to disable CET in order to do this), just transiently relax the
> permissions on .text via l2_identmap[].
> 
> Note that neither alternatives nor livepatching edit .rodata, so we don't need
> to relax those permissions at this juncture.
> 
> The perms are relaxed globally, but this is safe enough.  Alternatives run before
> we boot APs, and Livepatching runs in a quiesced state where the other CPUs
> are not doing anything interesting.
> 
> This approach is far more robust.
> 
> Fixes: 48cdc15a424f ("x86/alternatives: Clear CR4.CET when clearing CR0.WP")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

One further remark, though:

> @@ -5879,6 +5880,73 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>      return modify_xen_mappings(s, e, _PAGE_NONE);
>  }
>  
> +/*
> + * Similar to modify_xen_mappings(), but used by the alternatives and
> + * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
> + * responsibility of the caller, and *MUST* not be introduced here.
> + *
> + * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
> + * Must be called with present flags, and over present mappings.
> + * Must be called on leaf page boundaries.

This last sentence, while wording-wise correct, could do with making it more
explicit that it is the caller's responsibility to know whether large page
mappings are in use, due to ...

> + */
> +void init_or_livepatch modify_xen_mappings_lite(
> +    unsigned long s, unsigned long e, unsigned int _nf)
> +{
> +    unsigned long v = s, fm, nf;
> +
> +    /* Set of valid PTE bits which may be altered. */
> +#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
> +    fm = put_pte_flags(FLAGS_MASK);
> +    nf = put_pte_flags(_nf & FLAGS_MASK);
> +#undef FLAGS_MASK
> +
> +    ASSERT(nf & _PAGE_PRESENT);
> +    ASSERT(IS_ALIGNED(s, PAGE_SIZE) && s >= XEN_VIRT_START);
> +    ASSERT(IS_ALIGNED(e, PAGE_SIZE) && e <= XEN_VIRT_END);
> +
> +    while ( v < e )
> +    {
> +        l2_pgentry_t *pl2e = &l2_xenmap[l2_table_offset(v)];
> +        l2_pgentry_t l2e = l2e_read_atomic(pl2e);
> +        unsigned int l2f = l2e_get_flags(l2e);
> +
> +        ASSERT(l2f & _PAGE_PRESENT);
> +
> +        if ( l2e_get_flags(l2e) & _PAGE_PSE )
> +        {
> +            ASSERT(l1_table_offset(v) == 0);

... this.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 14:12:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 14:12:25 +0000
Message-ID: <b343d8c3-b23b-c67b-76f6-c25d5892328b@suse.com>
Date: Mon, 17 Apr 2023 16:12:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 0/2] deal with GOT stuff for RISC-V
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1678970065.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <cover.1678970065.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 16.03.2023 14:22, Oleksii Kurochko wrote:
> Oleksii Kurochko (2):
>   xen/riscv: add EMBEDDED_EXTRA_CFLAGS to CFLAGS
>   xen/riscv: add explicit check that .got{.plt} is empty
> 
>  xen/arch/riscv/arch.mk   |  2 ++
>  xen/arch/riscv/xen.lds.S | 13 +++++++++++++
>  2 files changed, 15 insertions(+)

Just to mention it in case you aren't aware: Hunting down the necessary acks
is your responsibility, not that of the committers. You may want to ping Bob
and Alistair (unless this response of mine is already enough of a ping).
Provided of course the patches still apply as-is ...

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 14:12:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 14:12:55 +0000
Message-ID: <cf4922ce-6394-268a-7aa6-76aa8ca6d863@amd.com>
Date: Mon, 17 Apr 2023 10:12:44 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] xen/vpci: initialize msix->next
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>, Rahul Singh <Rahul.Singh@arm.com>
References: <20230414202932.293688-1-stewart.hildebrand@amd.com>
 <110951c5-4dc6-5fa5-1722-a86ba28f1789@suse.com>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <110951c5-4dc6-5fa5-1722-a86ba28f1789@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1PEPF00001A61:EE_|IA0PR12MB8929:EE_
X-MS-Office365-Filtering-Correlation-Id: 9093822a-5f73-474b-b603-08db3f4dd218
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 14:12:47.1360
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9093822a-5f73-474b-b603-08db3f4dd218
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1PEPF00001A61.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB8929

On 4/17/23 05:09, Jan Beulich wrote:
> On 14.04.2023 22:29, Stewart Hildebrand wrote:
>> The list was not being initialized, which could result in a crash in
>> vpci_remove_device if no list items were added.
> 
> Can you please point out the code path which may lead to such a crash?

It would be
xen/drivers/vpci/vpci.c:59:        list_del(&pdev->vpci->msix->next);

The crash was observed when msix->next had not been initialized (nor added to a list_head). It was uninitialized because ...

>> --- a/xen/drivers/vpci/msix.c
>> +++ b/xen/drivers/vpci/msix.c
>> @@ -678,6 +678,8 @@ static int cf_check init_msix(struct pci_dev *pdev)
>>      if ( !msix )
>>          return -ENOMEM;
>>
>> +    INIT_LIST_HEAD(&msix->next);
>> +
>>      rc = vpci_add_register(pdev->vpci, control_read, control_write,
>>                             msix_control_reg(msix_offset), 2, msix);
>>      if ( rc )
> 
> The error path below here frees msix again, so can't be a problem. The
> only other return path from the function is after a suitable list_add().

... when I wrote this I had applied a patch that removed the list_add() in question (on ARM). See [1].

> "... if no list items were added" is misleading too - this isn't a list
> head, but a list element. The list head is d->arch.hvm.msix_tables.

I can see now that this should more appropriately be resolved in the referenced patch, where the crash was actually observed [1]. In the current vpci on ARM work-in-progress, there's no equivalent struct list_head msix_tables...

Sorry for the noise.

[1] https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc/-/commit/9f36d1b1dffcca1ae3fcb2dfcac4709c39d1b3bc


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 14:42:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 14:42:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522215.811427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poQ3Z-0004J4-Ar; Mon, 17 Apr 2023 14:42:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522215.811427; Mon, 17 Apr 2023 14:42:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poQ3Z-0004Ix-7l; Mon, 17 Apr 2023 14:42:13 +0000
Received: by outflank-mailman (input) for mailman id 522215;
 Mon, 17 Apr 2023 14:42:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poQ3X-0004Ir-Kd
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 14:42:12 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 05360382-dd2e-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 16:42:06 +0200 (CEST)
Received: from mail-dm6nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Apr 2023 10:42:01 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH7PR03MB7196.namprd03.prod.outlook.com (2603:10b6:510:247::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 14:41:55 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 14:41:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05360382-dd2e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681742526;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=y8jdEVpiojtZlwow2Je50nscY+z6pm980Qmqfz8vV1c=;
  b=C80jNoP5hwhwzAE+USS/SwyBz3QS910Iu/gv8RQNYA6dXTKEs/JHfhr4
   7PQP/vJyUbKAZ8gV6NXEV8ZBOZMRAAcJ03XUXUWXpQKASU79oZNqAI0qi
   K+IRsvLXQXpgCEqpU9I6I5IQ9SP3NvzPRoZ8W+myG/oAHRdV0HohY5zbs
   I=;
X-IronPort-RemoteIP: 104.47.58.104
X-IronPort-MID: 104599947
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: =?us-ascii?q?9a23=3AHtui2WgYQJzmLfrw6yCS3iNAzjJuNXbW50j8fxG?=
 =?us-ascii?q?DGSU3WqaFV3CO0alVjJ87?=
X-Talos-MUID: 9a23:cWHe1gqlxP/u+e0ORQQezxZmN/g04KuqMmYMoKs5sNfbJxRdFyjI2Q==
X-IronPort-AV: E=Sophos;i="5.99,204,1677560400"; 
   d="scan'208";a="104599947"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MT91D2zWamebBEEv2k4CEYzADAy8JVyyfP8Kh0A9FZkRgu4ihAu80fs7rHyUxdL3CiNsfqJdgy73gvQCI0NQ5FSQRHtE/85Y8dN3O6YkDv7rUdCNKn8Lh3k+hGE03KvxfsPRCJ8CF1nLmWQUwbBrg0wdGG8Gh4Dgi9v9WACjvIipYdIFvpyxyqw8sQBPKp8jbgbaC7NaQ2EHMZoXyn4tHO/89AtXqaZGT4V6ES8PFKgtwJOldpjEG6FFcQOCZ/trpJX4wKRz+Fiej3kbqB1J7rfRTS3k1iiMuUuuJaGr4unDKGC3UU603WREbnqlVGbV3aDpU8raKiME+QqbpurRzw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=q8EAtqB0DW9e9+Dy70SloebbvPxYikdcuoygh95MPUI=;
 b=bFyp+lvh1j0z4BnTkz+CCmbNhw65vrHw50oDL4992EFlZCN4k3yt5rdPpipHDNMWVxzJ01zdUCf/gUHZ3lho/xJIm7MptQ+MqAWwqhwL/jberPA01po03SsFbK1EUl2F4Pqzgj535yqgNDfP/qS2Orfkw/gncdY8Qis8JoZghjIO0VSE5czvBnp6pnExcKob9fhqVJ+/Xa4KQEAj489WTA6SCrTvm67rciYQ3UHLdIwzbkNO3stAzUrTydIFfrj8a4Mr5vijmjrUzTxV9Fg/gPLHz9t4LNCqmMK9r238PZmesnj/4sDVUlB15zC8UXqBjDzOzvmQWmAMh82fzHJiYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=q8EAtqB0DW9e9+Dy70SloebbvPxYikdcuoygh95MPUI=;
 b=w2BXfBE8CDH5iQzfYWMMcZVHDBMrHnvmKL4A5lUeCqTEugjFGAZIL3EXOZKJw73jLlbK9rTn2TvbDa91h50SQKG14aeoIv5yxgbe2CSxpTVN4IW4k31/EaEOMpPGKkDvS0m11p6IlRSIxeTkKH3azVpAF1jSmaej3q2jYYcBs3k=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <22179eac-4fc9-1521-2a83-2313b8c44a2d@citrix.com>
Date: Mon, 17 Apr 2023 15:41:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230417135219.3776777-1-andrew.cooper3@citrix.com>
 <d83288c5-6247-ef7d-b9ba-8bf24c7831ac@suse.com>
In-Reply-To: <d83288c5-6247-ef7d-b9ba-8bf24c7831ac@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0209.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9e::29) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH7PR03MB7196:EE_
X-MS-Office365-Filtering-Correlation-Id: 284f86f0-96a7-4a15-9195-08db3f51e3b8
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 284f86f0-96a7-4a15-9195-08db3f51e3b8
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Apr 2023 14:41:55.0134
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5ngoeb0k84XwvN1l/kt2q26cjngm6pCkyKCUwPOjww+RLakJ6VdSZh8y8VTl8P0EIAydmwHihnt/sS/zzNmqoF6JJgmhXpxiJnXQa38zB8g=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR03MB7196

On 17/04/2023 2:59 pm, Jan Beulich wrote:
> On 17.04.2023 15:52, Andrew Cooper wrote:
>> Right now, trying to apply a livepatch on any system with CET shstk (AMD Zen3
>> or later, Intel Tiger Lake or Sapphire Rapids and later) fails as follows:
>>
>>   (XEN) livepatch: lp: Verifying enabled expectations for all functions
>>   (XEN) common/livepatch.c:1591: livepatch: lp: timeout is 30000000ns
>>   (XEN) common/livepatch.c:1703: livepatch: lp: CPU28 - IPIing the other 127 CPUs
>>   (XEN) livepatch: lp: Applying 1 functions
>>   (XEN) hi_func: Hi! (called 1 times)
>>   (XEN) Hook executing.
>>   (XEN) Assertion 'local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu))' failed at arch/x86/smp.c:265
>>   (XEN) *** DOUBLE FAULT ***
>>   <many double faults>
>>
>> The assertion failure is from a global (system wide) TLB flush initiated by
>> modify_xen_mappings().  I'm not entirely sure when this broke, and I'm not
>> sure exactly what causes the #DF's, but it doesn't really matter either
>> because they highlight a latent bug that I'd overlooked with the CET-SS vs
>> patching work in the first place.
>>
>> While we're careful to arrange for the patching CPU to avoid encountering
>> non-shstk memory with transient shstk perms, other CPUs can pick these
>> mappings up too if they need to re-walk for uarch reasons.
>>
>> Another bug is that for livepatching, we only disable CET if shadow stacks are
>> in use.  Running on Intel CET systems when Xen is only using CET-IBT will
>> crash in arch_livepatch_quiesce() when trying to clear CR0.WP with CR4.CET
>> still active.
>>
>> Also, we never went and cleared the dirty bits on .rodata.  This would
>> matter (for the same reason it matters on .text - it becomes a valid target
>> for WRSS), but we never actually patch .rodata anyway.
>>
>> Therefore rework how we do patching for both alternatives and livepatches.
>>
>> Introduce modify_xen_mappings_lite() with a purpose similar to
>> modify_xen_mappings(), but stripped down to the bare minimum as it's used in
>> weird contexts.  Leave all complexity to the caller to handle.
>>
>> Instead of patching by clearing CR0.WP (and having to jump through some
>> fragile hoops to disable CET in order to do this), just transiently relax the
>> permissions on .text via l2_identmap[].
>>
>> Note that neither alternatives nor livepatching edit .rodata, so we don't need
>> to relax those permissions at this juncture.
>>
>> The perms are relaxed globally, but this is safe enough.  Alternatives run
>> before we boot APs, and livepatching runs in a quiesced state where the
>> are not doing anything interesting.
>>
>> This approach is far more robust.
>>
>> Fixes: 48cdc15a424f ("x86/alternatives: Clear CR4.CET when clearing CR0.WP")
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
> One further remark, though:
>
>> @@ -5879,6 +5880,73 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>>      return modify_xen_mappings(s, e, _PAGE_NONE);
>>  }
>>  
>> +/*
>> + * Similar to modify_xen_mappings(), but used by the alternatives and
>> + * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
>> + * responsibility of the caller, and *MUST* not be introduced here.
>> + *
>> + * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
>> + * Must be called with present flags, and over present mappings.
>> + * Must be called on leaf page boundaries.
> This last sentence, while wording-wise correct, could do with making more
> explicit that it is the caller's responsibility to know whether large page
> mappings are in use, due to ...

The meaning here is really "this doesn't shatter superpages", and this
was the most concise phrasing I could come up with.

Would ", i.e. won't shatter 2M pages." as a clarification work?

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 14:48:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 14:48:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522233.811445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poQ9o-00055D-4Q; Mon, 17 Apr 2023 14:48:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522233.811445; Mon, 17 Apr 2023 14:48:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poQ9o-000556-0S; Mon, 17 Apr 2023 14:48:40 +0000
Received: by outflank-mailman (input) for mailman id 522233;
 Mon, 17 Apr 2023 14:48:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VMc7=AI=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1poQ9n-000550-AE
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 14:48:39 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eecc48c1-dd2e-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 16:48:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eecc48c1-dd2e-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681742915;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SuH57htx4HoqqPTADyv5r/MQySPN8DmRQYMrJ+PpUgg=;
	b=INb5SzlK5MX/Bq/m0881zfdrzueVbPX5Prz3Xgz2rxss/IDiXaObMBisydQEAC3xunlQOs
	5FoThs8SU4xHlYkNvPHTdHnazGH9MZS0yGJQJLAmwI5D7ZTNc7jYm5HqGNYCHs198vO4Fx
	Z3OtiE3BTc9mB860bA+Z0kx73U3myYwxpdggsQfQrNtW2MsIXh6QqGIKR6eyMV4JP0dPex
	ACDhdDGRvp6HxbcrUCAsTApHuoenv4J1gKFxU7NBHTucKvq6LHYcjtnHCUSe1Q33mLsYAb
	M+FB63wZmlbcOfZKg7GIVobNBFuqhpFRD3yr6B///q6aRnAMLX/Vy4rZO5+ycQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681742915;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SuH57htx4HoqqPTADyv5r/MQySPN8DmRQYMrJ+PpUgg=;
	b=PyXMLWfIzV3JqDuBaEC9RZdvFmc5btxtvXPM95519mxDIYGaw5KC7da7PccMR5S1yjqMpt
	wr1a07P8ngNdz3AA==
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de>
Date: Mon, 17 Apr 2023 16:48:34 +0200
Message-ID: <87wn2a4la5.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Paul!

On Mon, Apr 17 2023 at 13:19, Paul Menzel wrote:
> Am 15.04.23 um 01:44 schrieb Thomas Gleixner:
> [    0.258193] smpboot: CPU0: AMD A6-6400K APU with Radeon(tm) HD 
> Graphics (family: 0x15, model: 0x13, stepping: 0x1)
> […]
> [    0.259329] smp: Bringing up secondary CPUs ...
> [    0.259527] x86: Booting SMP configuration:
> [    0.259528] .... node  #0, CPUs:      #1
> [    0.261007] After schedule_preempt_disabled
> [   10.260990] CPU1 failed to report alive state

Weird. CPU1 fails to come up and report that it has reached the
synchronization point.

Does it work when you add cpuhp.parallel=off on the kernel command line?

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 14:52:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 14:52:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522239.811455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poQDI-0006a0-OE; Mon, 17 Apr 2023 14:52:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522239.811455; Mon, 17 Apr 2023 14:52:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poQDI-0006Zt-La; Mon, 17 Apr 2023 14:52:16 +0000
Received: by outflank-mailman (input) for mailman id 522239;
 Mon, 17 Apr 2023 14:52:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5Vl=AI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poQDG-0006Zn-RU
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 14:52:14 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2077.outbound.protection.outlook.com [40.107.20.77])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 702b023c-dd2f-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 16:52:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6912.eurprd04.prod.outlook.com (2603:10a6:803:134::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 14:51:45 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 14:51:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Message-ID: <3ea38da5-70a9-6887-5384-fe002d8568c4@suse.com>
Date: Mon, 17 Apr 2023 16:51:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230417135219.3776777-1-andrew.cooper3@citrix.com>
 <d83288c5-6247-ef7d-b9ba-8bf24c7831ac@suse.com>
 <22179eac-4fc9-1521-2a83-2313b8c44a2d@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <22179eac-4fc9-1521-2a83-2313b8c44a2d@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.04.2023 16:41, Andrew Cooper wrote:
> On 17/04/2023 2:59 pm, Jan Beulich wrote:
>> On 17.04.2023 15:52, Andrew Cooper wrote:
>>> @@ -5879,6 +5880,73 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>>>      return modify_xen_mappings(s, e, _PAGE_NONE);
>>>  }
>>>  
>>> +/*
>>> + * Similar to modify_xen_mappings(), but used by the alternatives and
>>> + * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
>>> + * responsibility of the caller, and *MUST* not be introduced here.
>>> + *
>>> + * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
>>> + * Must be called with present flags, and over present mappings.
>>> + * Must be called on leaf page boundaries.
>> This last sentence, while wording-wise correct, could do with making more
>> explicit that it is the caller's responsibility to know whether large page
>> mappings are in use, due to ...
> 
> The meaning here is really "this doesn't shatter superpages", and this
> was the most concise I could come up with.
> 
> Would ", i.e. won't shatter 2M pages." as a clarification work?

Yes, that would definitely help. Nevertheless I was more after something
like "..., i.e. for 2M mappings on 2M boundaries." Which, thinking about
it, points out that while you have a respective check for the start
address, the full 2M page would be changed even if the end address wasn't
2M aligned (but fell in the middle of a 2M page).
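To make the concern concrete, a minimal standalone sketch of the check Jan is asking for (not Xen code; the macro names are made up for illustration) — both the start *and* end of the range must sit on 2M boundaries, otherwise modifying the leaf L2 entry silently changes bytes outside the requested range:

```c
#include <stdbool.h>

#define TOY_SUPERPAGE_SHIFT 21                       /* 2M superpage */
#define TOY_SUPERPAGE_SIZE  (1UL << TOY_SUPERPAGE_SHIFT)
#define TOY_SUPERPAGE_MASK  (~(TOY_SUPERPAGE_SIZE - 1))

/*
 * Hypothetical helper: [s, e) may only be modified via 2M leaf entries
 * if both endpoints are 2M-aligned.  Checking only s (as in the patch)
 * would let a misaligned e change the tail of the last 2M page.
 */
static bool range_is_2m_aligned(unsigned long s, unsigned long e)
{
    return !(s & ~TOY_SUPERPAGE_MASK) && !(e & ~TOY_SUPERPAGE_MASK);
}
```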

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 15:06:38 2023
Message-ID: <6e6ee0c8-7cc8-13ab-3cf9-ea1b1147efc2@amd.com>
Date: Mon, 17 Apr 2023 17:06:08 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 13/17] xen/arm: Implement device tree node removal
 functionalities
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-14-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-14-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi Vikram,

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Introduce sysctl XEN_SYSCTL_dt_overlay to remove device-tree nodes added using
> device tree overlay.
> 
> xl dt-overlay remove file.dtbo:
>     Removes all the nodes in a given dtbo.
>     First, it removes IRQ permissions and MMIO accesses. Next, it finds the
>     nodes in dt_host and deletes the device node entries from dt_host.
> 
>     The nodes are removed only if they are not used by dom0 or domio.
> 
> Also, added an overlay_track struct to keep track of nodes added through
> device tree overlays. overlay_track has dt_host_new, which is the unflattened
> form of the updated fdt, and the names of the overlay nodes. When a node is
> removed, we also free the memory used by overlay_track for the particular
> overlay node.
> 
> Nested overlay removal is supported in a sequential manner only, i.e. if
> overlay_child nests under overlay_parent, it is assumed that the user first
> removes overlay_child and then removes overlay_parent.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/arch/arm/sysctl.c        |  16 +-
>  xen/common/Makefile          |   1 +
>  xen/common/dt_overlay.c      | 415 +++++++++++++++++++++++++++++++++++
>  xen/include/public/sysctl.h  |  24 ++
>  xen/include/xen/dt_overlay.h |  59 +++++
>  5 files changed, 514 insertions(+), 1 deletion(-)
>  create mode 100644 xen/common/dt_overlay.c
>  create mode 100644 xen/include/xen/dt_overlay.h
> 
> diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
> index b0a78a8b10..672db61650 100644
> --- a/xen/arch/arm/sysctl.c
> +++ b/xen/arch/arm/sysctl.c
> @@ -12,6 +12,7 @@
>  #include <xen/errno.h>
>  #include <xen/hypercall.h>
>  #include <public/sysctl.h>
> +#include <xen/dt_overlay.h>
xen/ headers should be grouped together so this should be moved before public/.

> 
>  void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
>  {
> @@ -21,7 +22,20 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
>  long arch_do_sysctl(struct xen_sysctl *sysctl,
>                      XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>  {
> -    return -ENOSYS;
> +    long ret = 0;
> +
> +    switch ( sysctl->cmd )
> +    {
> +    case XEN_SYSCTL_dt_overlay:
> +        ret = dt_sysctl(&sysctl->u.dt_overlay);
> +        break;
> +
> +    default:
> +        ret = -ENOSYS;
> +        break;
> +    }
> +
> +    return ret;
>  }
> 
>  /*
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index 46049eac35..be78c9a8c2 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -8,6 +8,7 @@ obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
>  obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
>  obj-$(CONFIG_IOREQ_SERVER) += dm.o
>  obj-y += domain.o
> +obj-$(CONFIG_OVERLAY_DTB) += dt_overlay.o
>  obj-y += event_2l.o
>  obj-y += event_channel.o
>  obj-y += event_fifo.o
> diff --git a/xen/common/dt_overlay.c b/xen/common/dt_overlay.c
> new file mode 100644
> index 0000000000..516e8010c5
> --- /dev/null
> +++ b/xen/common/dt_overlay.c
> @@ -0,0 +1,415 @@
> +/*
> + * SPDX-License-Identifier: GPL-2.0
Our CODING_STYLE states that SPDX tag should be a single line comment at the top of the file.

> + *
> + * xen/common/dt_overlay.c
> + *
> + * Device tree overlay support in Xen.
> + *
> + * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
The copyright year as well as company name is different here than in the header dt_overlay.h

> + * Written by Vikram Garhwal <vikram.garhwal@amd.com>
> + *
> + */
> +#include <xen/iocap.h>
> +#include <xen/xmalloc.h>
> +#include <asm/domain_build.h>
> +#include <xen/dt_overlay.h>
> +#include <xen/guest_access.h>
Sort headers alphabetically and group them depending on xen/ or asm/.

> +
> +static LIST_HEAD(overlay_tracker);
> +static DEFINE_SPINLOCK(overlay_lock);
> +
> +/* Find last descendants of the device_node. */
> +static struct dt_device_node *find_last_descendants_node(
> +                                            struct dt_device_node *device_node)
To correctly split arguments this should be:
static struct dt_device_node *
find_last_descendants_node(struct dt_device_node *device_node)

> +{
> +    struct dt_device_node *child_node;
> +
> +    for ( child_node = device_node->child; child_node->sibling != NULL;
> +          child_node = child_node->sibling )
> +    {
> +    }
> +
> +    /* If the last child_node also has children. */
> +    if ( child_node->child )
> +        child_node = find_last_descendants_node(child_node);
> +
> +    return child_node;
> +}
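For reference, the traversal above can be exercised in isolation; a toy stand-in for struct dt_device_node (just the two links the function uses — this is illustrative, not the real Xen structure) shows it lands on the deepest, last descendant:

```c
#include <stddef.h>

/* Toy stand-in for Xen's struct dt_device_node: only the links used. */
struct toy_node {
    struct toy_node *child;
    struct toy_node *sibling;
};

/*
 * Same logic as the patch's find_last_descendants_node(): walk to the
 * last sibling, then recurse into its children if it has any.  Like the
 * patch, it assumes the node has at least one child.
 */
static struct toy_node *last_descendant(struct toy_node *n)
{
    struct toy_node *c;

    for ( c = n->child; c->sibling != NULL; c = c->sibling )
        ;

    if ( c->child )
        c = last_descendant(c);

    return c;
}
```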
> +
> +static int dt_overlay_remove_node(struct dt_device_node *device_node)
> +{
> +    struct dt_device_node *np;
> +    struct dt_device_node *parent_node;
> +    struct dt_device_node *device_node_last_descendant = device_node->child;
> +
> +    parent_node = device_node->parent;
> +
> +    if ( parent_node == NULL )
> +    {
> +        dt_dprintk("%s's parent node not found\n", device_node->name);
> +        return -EFAULT;
> +    }
> +
> +    np = parent_node->child;
> +
> +    if ( np == NULL )
> +    {
> +        dt_dprintk("parent node %s's not found\n", parent_node->name);
> +        return -EFAULT;
> +    }
> +
> +    /* If node to be removed is only child node or first child. */
> +    if ( !dt_node_cmp(np->full_name, device_node->full_name) )
> +    {
> +        parent_node->child = np->sibling;
> +
> +        /*
> +         * Iterate over all child nodes of device_node. Given that we are
> +         * removing parent node, we need to remove all its descendants too.
> +         */
> +        if ( device_node_last_descendant )
> +        {
> +            device_node_last_descendant =
> +                                        find_last_descendants_node(device_node);
> +            parent_node->allnext = device_node_last_descendant->allnext;
> +        }
> +        else
> +            parent_node->allnext = np->allnext;
> +
> +        return 0;
> +    }
> +
> +    for ( np = parent_node->child; np->sibling != NULL; np = np->sibling )
> +    {
> +        if ( !dt_node_cmp(np->sibling->full_name, device_node->full_name) )
> +        {
> +            /* Found the node. Now we remove it. */
> +            np->sibling = np->sibling->sibling;
> +
> +            if ( np->child )
> +                np = find_last_descendants_node(np);
> +
> +            /*
> +             * Iterate over all child nodes of device_node. Given that we are
> +             * removing parent node, we need to remove all its descendants too.
> +             */
> +            if ( device_node_last_descendant )
> +                device_node_last_descendant =
> +                                        find_last_descendants_node(device_node);
> +
> +            if ( device_node_last_descendant )
> +                np->allnext = device_node_last_descendant->allnext;
> +            else
> +                np->allnext = np->allnext->allnext;
> +
> +            break;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
> +/* Basic sanity check for the dtbo tool stack provided to Xen. */
> +static int check_overlay_fdt(const void *overlay_fdt, uint32_t overlay_fdt_size)
> +{
> +    if ( (fdt_totalsize(overlay_fdt) != overlay_fdt_size) ||
> +          fdt_check_header(overlay_fdt) )
> +    {
> +        printk(XENLOG_ERR "The overlay FDT is not a valid Flat Device Tree\n");
> +        return -EINVAL;
> +    }
> +
> +    return 0;
> +}
> +
> +/* Count number of nodes till one level of __overlay__ tag. */
> +static unsigned int overlay_node_count(void *fdto)
fdto can be const. Also, in another function you name the same parameter
overlay_fdt, so it would be best to stick to one naming scheme.

> +{
> +    unsigned int num_overlay_nodes = 0;
> +    int fragment;
> +
> +    fdt_for_each_subnode(fragment, fdto, 0)
> +    {
> +        int subnode;
> +        int overlay;
> +
> +        overlay = fdt_subnode_offset(fdto, fragment, "__overlay__");
> +
> +        /*
> +         * overlay value can be < 0. But fdt_for_each_subnode() loop checks for
> +         * overlay >= 0. So, no need for a overlay>=0 check here.
> +         */
> +        fdt_for_each_subnode(subnode, fdto, overlay)
> +        {
> +            num_overlay_nodes++;
> +        }
> +    }
> +
> +    return num_overlay_nodes;
> +}
> +
> +static int handle_remove_irq_iommu(struct dt_device_node *device_node)
> +{
> +    int rc = 0;
> +    struct domain *d = hardware_domain;
> +    domid_t domid;
> +    unsigned int naddr, len;
> +    unsigned int i, nirq;
> +    uint64_t addr, size;
You could limit the scope of addr and size by moving them into the for loop.

> +
> +    domid = dt_device_used_by(device_node);
> +
> +    dt_dprintk("Checking if node %s is used by any domain\n",
> +               device_node->full_name);
> +
> +    /* Remove the node iff it's assigned to domain 0 or domain io. */
s/iff/if

> +    if ( domid != 0 && domid != DOMID_IO )
> +    {
> +        printk(XENLOG_ERR "Device %s as it is being used by domain %d. Removing nodes failed\n",
Use %u to print domid. Also s/as it is/is/

> +               device_node->full_name, domid);
> +        return -EINVAL;
> +    }
> +
> +    dt_dprintk("Removing node: %s\n", device_node->full_name);
> +
> +    nirq = dt_number_of_irq(device_node);
> +
> +    /* Remove IRQ permission */
> +    for ( i = 0; i < nirq; i++ )
> +    {
> +        rc = platform_get_irq(device_node, i);;
Remove the extra ';' at the end of the line.
Also, shouldn't you first make sure that rc is >= 0 before checking access?
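I.e. the loop should look roughly like this (a minimal sketch with a hypothetical stand-in for platform_get_irq(), just to show the control flow — a negative return is an error code, not an IRQ number, and must not be passed on):

```c
/*
 * Hypothetical stand-in: returns an IRQ number for valid indices,
 * a negative value (error) otherwise.
 */
static int fake_platform_get_irq(unsigned int index)
{
    return index < 2 ? (int)(100 + index) : -1;
}

/* Sketch of the loop with the missing check added. */
static int revoke_irqs(unsigned int nirq)
{
    for ( unsigned int i = 0; i < nirq; i++ )
    {
        int irq = fake_platform_get_irq(i);

        /* Check the lookup succeeded before treating irq as an IRQ number. */
        if ( irq < 0 )
            return -1; /* would be a proper errno value in Xen */

        /* ... irq_access_permitted()/irq_deny_access() would go here ... */
    }

    return 0;
}
```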

> +
> +        if ( irq_access_permitted(d, rc) == false )
Instead of d which points to hwdom, can't you use just domid?
> +        {
> +            printk(XENLOG_ERR "IRQ %d is not routed to domain %d\n", rc,
%u for domid

> +                   domid);
> +            return -EINVAL;
> +        }
> +        /*
> +         * TODO: We don't handle shared IRQs for now. So, it is assumed that
> +         * the IRQs are not shared with other devices.
> +         */
> +        rc = irq_deny_access(d, rc);
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "unable to revoke access for irq %u for %s\n",
> +                   i, device_node->full_name);
> +            return rc;
> +        }
> +    }
> +
> +    /* Check if iommu property exists. */
> +    if ( dt_get_property(device_node, "iommus", &len) )
> +    {
> +        rc = iommu_remove_dt_device(device_node);
> +        if ( rc != 0 && rc != -ENXIO )
> +            return rc;
> +    }
> +
> +    naddr = dt_number_of_address(device_node);
> +
> +    /* Remove mmio access. */
> +    for ( i = 0; i < naddr; i++ )
> +    {
> +        rc = dt_device_get_address(device_node, i, &addr, &size);
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
> +                   i, dt_node_full_name(device_node));
> +            return rc;
> +        }
> +
> +        rc = iomem_deny_access(d, paddr_to_pfn(addr),
> +                               paddr_to_pfn(PAGE_ALIGN(addr + size - 1)));
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Unable to remove dom%d access to"
> +                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
> +                   d->domain_id,
> +                   addr & PAGE_MASK, PAGE_ALIGN(addr + size) - 1);
> +            return rc;
> +        }
> +
> +    }
> +
> +    return rc;
> +}
> +
> +/* Removes all descendants of the given node. */
> +static int remove_all_descendant_nodes(struct dt_device_node *device_node)
> +{
> +    int rc = 0;
> +    struct dt_device_node *child_node;
> +
> +    for ( child_node = device_node->child; child_node != NULL;
> +         child_node = child_node->sibling )
> +    {
> +        if ( child_node->child )
> +            remove_all_descendant_nodes(child_node);
> +
> +        rc = handle_remove_irq_iommu(child_node);
> +        if ( rc )
> +            return rc;
> +    }
> +
> +    return rc;
> +}
> +
> +/* Remove nodes from dt_host. */
> +static int remove_nodes(const struct overlay_track *tracker)
> +{
> +    int rc = 0;
> +    struct dt_device_node *overlay_node;
> +    unsigned int j;
> +
> +    for ( j = 0; j < tracker->num_nodes; j++ )
> +    {
> +        overlay_node = (struct dt_device_node *)tracker->nodes_address[j];
> +        if ( overlay_node == NULL )
> +        {
> +            printk(XENLOG_ERR "Device %s is not present in the tree. Removing nodes failed\n",
> +                   overlay_node->full_name);
> +            return -EINVAL;
> +        }
> +
> +        rc = remove_all_descendant_nodes(overlay_node);
> +
> +        /* All children nodes are unmapped. Now remove the node itself. */
> +        rc = handle_remove_irq_iommu(overlay_node);
> +        if ( rc )
> +            return rc;
> +
> +        read_lock(&dt_host->lock);
> +
> +        rc = dt_overlay_remove_node(overlay_node);
> +        if ( rc )
> +        {
> +            read_unlock(&dt_host->lock);
> +
> +            return rc;
> +        }
> +
> +        read_unlock(&dt_host->lock);
> +    }
> +
> +    return rc;
> +}
> +
> +/*
> + * First finds the device node to remove. Checks if the device is being used
> + * by any domain and finally removes it from dt_host. The IOMMU is already
> + * taken care of while destroying the domain.
> + */
> +static long handle_remove_overlay_nodes(void *overlay_fdt,
> +                                        uint32_t overlay_fdt_size)
> +{
> +    int rc = 0;
> +    struct overlay_track *entry, *temp, *track;
> +    bool found_entry = false;
> +
> +    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
> +    if ( rc )
> +        return rc;
> +
> +    if ( overlay_node_count(overlay_fdt) == 0 )
> +        return -ENOMEM;
Why ENOMEM if there was no allocation attempt?

> +
> +    spin_lock(&overlay_lock);
> +
> +    /*
> +     * First check if dtbo is correct i.e. it should one of the dtbo which was
> +     * used when dynamically adding the node.
> +     * Limitation: Cases with same node names but different property are not
> +     * supported currently. We are relying on user to provide the same dtbo
> +     * as it was used when adding the nodes.
> +     */
> +    list_for_each_entry_safe( entry, temp, &overlay_tracker, entry )
> +    {
> +        if ( memcmp(entry->overlay_fdt, overlay_fdt, overlay_fdt_size) == 0 )
> +        {
> +            track = entry;
> +            found_entry = true;
> +            break;
> +        }
> +    }
> +
> +    if ( found_entry == false )
> +    {
> +        rc = -EINVAL;
> +
> +        printk(XENLOG_ERR "Cannot find any matching tracker with input dtbo."
> +               " Removing nodes is supported for only prior added dtbo. Please"
> +               " provide a valid dtbo which was used to add the nodes.\n");
> +        goto out;
> +
> +    }
> +
> +    rc = remove_nodes(entry);
> +
> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "Removing node failed\n");
> +        goto out;
> +    }
> +
> +    list_del(&entry->entry);
> +
> +    xfree(entry->dt_host_new);
> +    xfree(entry->fdt);
> +    xfree(entry->overlay_fdt);
> +
> +    xfree(entry->nodes_address);
> +
> +    xfree(entry);
> +
> +out:
> +    spin_unlock(&overlay_lock);
> +    return rc;
> +}
> +
> +long dt_sysctl(struct xen_sysctl_dt_overlay *op)
> +{
> +    long ret;
> +    void *overlay_fdt;
> +
> +    if ( op->overlay_fdt_size <= 0 || op->overlay_fdt_size > KB(500) )
overlay_fdt_size is uint32_t so it cannot be < 0.

> +        return -EINVAL;
> +
> +    overlay_fdt = xmalloc_bytes(op->overlay_fdt_size);
> +
> +    if ( overlay_fdt == NULL )
> +        return -ENOMEM;
> +
> +    ret = copy_from_guest(overlay_fdt, op->overlay_fdt, op->overlay_fdt_size);
> +    if ( ret )
> +    {
> +        gprintk(XENLOG_ERR, "copy from guest failed\n");
> +        xfree(overlay_fdt);
> +
> +        return -EFAULT;
> +    }
> +
> +    switch ( op->overlay_op )
> +    {
> +    case XEN_SYSCTL_DT_OVERLAY_REMOVE:
> +        ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
> +        xfree(overlay_fdt);
> +
> +        break;
> +
> +    default:
> +        xfree(overlay_fdt);
> +        break;
> +    }
> +
> +    return ret;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> index 2b24d6bfd0..1158c1efb3 100644
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -1057,6 +1057,25 @@ typedef struct xen_sysctl_cpu_policy xen_sysctl_cpu_policy_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpu_policy_t);
>  #endif
> 
> +#if defined(__arm__) || defined (__aarch64__)
> +#define XEN_SYSCTL_DT_OVERLAY_ADD                   1
> +#define XEN_SYSCTL_DT_OVERLAY_REMOVE                2
> +
> +/*
> + * XEN_SYSCTL_dt_overlay
> + * Performs addition/removal of device tree nodes under parent node using dtbo.
> + * This does in three steps:
This is done... would read better.

> + *  - Adds/Removes the nodes from dt_host.
> + *  - Adds/Removes IRQ permission for the nodes.
> + *  - Adds/Removes MMIO accesses.
> + */
> +struct xen_sysctl_dt_overlay {
> +    XEN_GUEST_HANDLE_64(void) overlay_fdt; /* IN: overlay fdt. */
> +    uint32_t overlay_fdt_size;  /* IN: Overlay dtb size. */
> +    uint8_t overlay_op; /* IN: Add or remove. */
Please align comments.
Also you could move the command macros right before overlay_op to improve
readability (just an idea to consider, not a request).

> +};
> +#endif
> +
>  struct xen_sysctl {
>      uint32_t cmd;
>  #define XEN_SYSCTL_readconsole                    1
> @@ -1087,6 +1106,7 @@ struct xen_sysctl {
>  #define XEN_SYSCTL_livepatch_op                  27
>  /* #define XEN_SYSCTL_set_parameter              28 */
>  #define XEN_SYSCTL_get_cpu_policy                29
> +#define XEN_SYSCTL_dt_overlay                    30
Did you check if you need to bump XEN_SYSCTL_INTERFACE_VERSION ?

>      uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
>      union {
>          struct xen_sysctl_readconsole       readconsole;
> @@ -1117,6 +1137,10 @@ struct xen_sysctl {
>  #if defined(__i386__) || defined(__x86_64__)
>          struct xen_sysctl_cpu_policy        cpu_policy;
>  #endif
> +
> +#if defined(__arm__) || defined (__aarch64__)
> +        struct xen_sysctl_dt_overlay        dt_overlay;
> +#endif
>          uint8_t                             pad[128];
>      } u;
>  };
> diff --git a/xen/include/xen/dt_overlay.h b/xen/include/xen/dt_overlay.h
> new file mode 100644
> index 0000000000..2cd975a070
> --- /dev/null
> +++ b/xen/include/xen/dt_overlay.h
> @@ -0,0 +1,59 @@
> +/*
> + * SPDX-License-Identifier: GPL-2.0
> + *
> + * xen/dt_overlay.h
> + *
> + * Device tree overlay support in Xen.
> + *
> + * Copyright (c) 2022 AMD Inc.
Different copyright info in comparison to source file.

> + * Written by Vikram Garhwal <vikram.garhwal@amd.com>
> + *
> + */
> +#ifndef __XEN_DT_SYSCTL_H__
SYSCTL? I think you want __XEN_DT_OVERLAY_H__

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 15:51:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 15:51:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522251.811475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poR8G-00058C-DC; Mon, 17 Apr 2023 15:51:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522251.811475; Mon, 17 Apr 2023 15:51:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poR8G-000585-9d; Mon, 17 Apr 2023 15:51:08 +0000
Received: by outflank-mailman (input) for mailman id 522251;
 Mon, 17 Apr 2023 15:51:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GnOu=AI=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1poR8F-00057z-Lq
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 15:51:07 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id a7a516ff-dd37-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 17:51:03 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 748271691;
 Mon, 17 Apr 2023 08:51:45 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.19.253])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 99D093F5A1;
 Mon, 17 Apr 2023 08:50:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7a516ff-dd37-11ed-b21e-6b7b168915f2
Date: Mon, 17 Apr 2023 16:50:53 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, linux-arm-kernel@lists.infradead.org,
	David Woodhouse <dwmw@amazon.co.uk>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org, Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 22/37] arm64: smp: Switch to hotplug core state
 synchronization
Message-ID: <ZD1q3TF2ixVD1f2M@FVFF77S0Q05N>
References: <20230414225551.858160935@linutronix.de>
 <20230414232310.569498144@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230414232310.569498144@linutronix.de>

On Sat, Apr 15, 2023 at 01:44:49AM +0200, Thomas Gleixner wrote:
> Switch to the CPU hotplug core state tracking and synchronization
> mechanism. No functional change intended.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org

I gave this a spin on arm64 (in a 64-vCPU VM on an M1 host), and it seems to
work fine with a bunch of vCPUs being hotplugged off and on again randomly.

FWIW:

Tested-by: Mark Rutland <mark.rutland@arm.com>

I also hacked the code to have the dying CPU spin forever before the call to
cpuhp_ap_report_dead(). In that case I see a warning, and that we don't call
arch_cpuhp_cleanup_dead_cpu(), and that the CPU is marked as offline (per
/sys/devices/system/cpu/$N/online).

As a tangent/aside, we might need to improve that for confidential compute
architectures, and we might want to generically track cpus which might still be
using kernel text/data. On arm64 we ensure that via our cpu_kill() callback
(which'll use PSCI CPU_AFFINITY_INFO), but I'm not sure if TDX and/or SEV-SNP
have a similar mechanism.

Otherwise, a malicious hypervisor can pause a vCPU just before it leaves the
kernel (e.g. immediately after the arch_cpuhp_cleanup_dead_cpu() call), wait
for a kexec (or reuse of stack memory), and unpause the vCPU to cause things
to blow up.

Thanks,
Mark.

> ---
>  arch/arm64/Kconfig           |    1 +
>  arch/arm64/include/asm/smp.h |    2 +-
>  arch/arm64/kernel/smp.c      |   14 +++++---------
>  3 files changed, 7 insertions(+), 10 deletions(-)
> 
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -216,6 +216,7 @@ config ARM64
>  	select HAVE_KPROBES
>  	select HAVE_KRETPROBES
>  	select HAVE_GENERIC_VDSO
> +	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
>  	select IRQ_DOMAIN
>  	select IRQ_FORCED_THREADING
>  	select KASAN_VMALLOC if KASAN
> --- a/arch/arm64/include/asm/smp.h
> +++ b/arch/arm64/include/asm/smp.h
> @@ -99,7 +99,7 @@ static inline void arch_send_wakeup_ipi_
>  
>  extern int __cpu_disable(void);
>  
> -extern void __cpu_die(unsigned int cpu);
> +static inline void __cpu_die(unsigned int cpu) { }
>  extern void cpu_die(void);
>  extern void cpu_die_early(void);
>  
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -333,17 +333,13 @@ static int op_cpu_kill(unsigned int cpu)
>  }
>  
>  /*
> - * called on the thread which is asking for a CPU to be shutdown -
> - * waits until shutdown has completed, or it is timed out.
> + * Called on the thread which is asking for a CPU to be shutdown after the
> + * shutdown completed.
>   */
> -void __cpu_die(unsigned int cpu)
> +void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
>  {
>  	int err;
>  
> -	if (!cpu_wait_death(cpu, 5)) {
> -		pr_crit("CPU%u: cpu didn't die\n", cpu);
> -		return;
> -	}
>  	pr_debug("CPU%u: shutdown\n", cpu);
>  
>  	/*
> @@ -370,8 +366,8 @@ void cpu_die(void)
>  
>  	local_daif_mask();
>  
> -	/* Tell __cpu_die() that this CPU is now safe to dispose of */
> -	(void)cpu_report_death();
> +	/* Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose of */
> +	cpuhp_ap_report_dead();
>  
>  	/*
>  	 * Actually shutdown the CPU. This must never fail. The specific hotplug
> 


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 16:21:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 16:21:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522260.811485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poRaw-0000fD-Pu; Mon, 17 Apr 2023 16:20:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522260.811485; Mon, 17 Apr 2023 16:20:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poRaw-0000f6-MR; Mon, 17 Apr 2023 16:20:46 +0000
Received: by outflank-mailman (input) for mailman id 522260;
 Mon, 17 Apr 2023 16:20:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poRav-0000ew-Ux; Mon, 17 Apr 2023 16:20:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poRav-0000og-NF; Mon, 17 Apr 2023 16:20:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poRav-00078Z-91; Mon, 17 Apr 2023 16:20:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1poRav-0008QW-8a; Mon, 17 Apr 2023 16:20:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fX2fu/sQmXZqFdK8ozUq4WqS9hw/0PbIHgSX2woOIjw=; b=id5Fif6D/Xn/LE98eKtdNkTpWN
	flI7Phk7x/krBUZNhVfzDWvdrYm+fGfZdWqZWTlsa36AL9oWUBwcyXgDf1fxyu03qyufCm8dQH+yo
	KrAmLRJ/7pwkQTLV96WVpUaGQICN9IFmeFf9qvPfdAEZOO9Y46N59sZsgkiirAAbpmu0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180283-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180283: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5eb6bd7454e253f4907dbeb7aa982967b21698bc
X-Osstest-Versions-That:
    xen=44843cee3d2b8daa09e5860fc4574219b57acde8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Apr 2023 16:20:45 +0000

flight 180283 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180283/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5eb6bd7454e253f4907dbeb7aa982967b21698bc
baseline version:
 xen                  44843cee3d2b8daa09e5860fc4574219b57acde8

Last test of basis   180279  2023-04-16 20:00:26 Z    0 days
Testing same since   180283  2023-04-17 13:02:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   44843cee3d..5eb6bd7454  5eb6bd7454e253f4907dbeb7aa982967b21698bc -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 17:41:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 17:41:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522269.811495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poSqb-0000O5-OF; Mon, 17 Apr 2023 17:41:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522269.811495; Mon, 17 Apr 2023 17:41:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poSqb-0000Ny-KT; Mon, 17 Apr 2023 17:41:01 +0000
Received: by outflank-mailman (input) for mailman id 522269;
 Mon, 17 Apr 2023 17:41:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=owEo=AI=molgen.mpg.de=pmenzel@srs-se1.protection.inumbo.net>)
 id 1poSqa-0000Ns-50
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 17:41:00 +0000
Received: from mx3.molgen.mpg.de (mx3.molgen.mpg.de [141.14.17.11])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 01b4edf6-dd47-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 19:40:56 +0200 (CEST)
Received: from [192.168.0.2] (ip5f5ae817.dynamic.kabel-deutschland.de
 [95.90.232.23])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested) (Authenticated sender: pmenzel)
 by mx.molgen.mpg.de (Postfix) with ESMTPSA id 40A5B61CC40F9;
 Mon, 17 Apr 2023 19:40:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01b4edf6-dd47-11ed-b21e-6b7b168915f2
Content-Type: multipart/mixed; boundary="------------v3TYoqJn0gjeHEVPTU7V0kge"
Message-ID: <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de>
Date: Mon, 17 Apr 2023 19:40:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Content-Language: en-US
To: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E. J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
From: Paul Menzel <pmenzel@molgen.mpg.de>
In-Reply-To: <87wn2a4la5.ffs@tglx>

This is a multi-part message in MIME format.
--------------v3TYoqJn0gjeHEVPTU7V0kge
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Dear Thomas,


Am 17.04.23 um 16:48 schrieb Thomas Gleixner:

> On Mon, Apr 17 2023 at 13:19, Paul Menzel wrote:
>> Am 15.04.23 um 01:44 schrieb Thomas Gleixner:
>> [    0.258193] smpboot: CPU0: AMD A6-6400K APU with Radeon(tm) HD
>> Graphics (family: 0x15, model: 0x13, stepping: 0x1)
>> […]
>> [    0.259329] smp: Bringing up secondary CPUs ...
>> [    0.259527] x86: Booting SMP configuration:
>> [    0.259528] .... node  #0, CPUs:      #1
>> [    0.261007] After schedule_preempt_disabled
>> [   10.260990] CPU1 failed to report alive state
> 
> Weird. CPU1 fails to come up and report that it has reached the
> synchronization point.
> 
> Does it work when you add cpuhp.parallel=off on the kernel command line?

Yes, the ten seconds delay is gone with `cpuhp.parallel=off`.

There was a patch set in the past that worked on that device. I think up to
v4 it did *not* work at all and hung [1]. I need some days to collect the
results again.


Kind regards,

Paul


[1]: 
https://lore.kernel.org/lkml/ab28d2ce-4a9c-387d-9eda-558045a0c35b@molgen.mpg.de/
--------------v3TYoqJn0gjeHEVPTU7V0kge
Content-Type: text/plain; charset=UTF-8;
 name="kodi-linux-6.3-rc6-smp-tglx-cpuhp.paralleloff.txt"
Content-Disposition: attachment;
 filename="kodi-linux-6.3-rc6-smp-tglx-cpuhp.paralleloff.txt"
Content-Transfer-Encoding: base64

[base64-encoded log attachment "kodi-linux-6.3-rc6-smp-tglx-cpuhp.paralleloff.txt" (kernel boot log with cpuhp.parallel=off) omitted; truncated in the archive]
cGFyYW1ldGVycyAibm9pc2FwbnAgQk9PVF9JTUFHRT0vYm9vdC92bWxpbnV6LTYuMy4wLXJj
Ni0wMDMxMS1nZGU4MjI0OTY5ZjY2Iiwgd2lsbCBiZSBwYXNzZWQgdG8gdXNlciBzcGFjZS4K
WyAgICAwLjAwNDAwMF0gRGVudHJ5IGNhY2hlIGhhc2ggdGFibGUgZW50cmllczogNTI0Mjg4
IChvcmRlcjogMTAsIDQxOTQzMDQgYnl0ZXMsIGxpbmVhcikKWyAgICAwLjAwNDAwMF0gSW5v
ZGUtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiAyNjIxNDQgKG9yZGVyOiA5LCAyMDk3MTUy
IGJ5dGVzLCBsaW5lYXIpClsgICAgMC4wMDQwMDBdIG1lbSBhdXRvLWluaXQ6IHN0YWNrOm9m
ZiwgaGVhcCBhbGxvYzpvZmYsIGhlYXAgZnJlZTpvZmYKWyAgICAwLjAwNDAwMF0gc3RhY2tk
ZXBvdDogYWxsb2NhdGluZyBoYXNoIHRhYmxlIHZpYSBhbGxvY19sYXJnZV9zeXN0ZW1faGFz
aApbICAgIDAuMDA0MDAwXSBzdGFja2RlcG90IGhhc2ggdGFibGUgZW50cmllczogMjYyMTQ0
IChvcmRlcjogOSwgMjA5NzE1MiBieXRlcywgbGluZWFyKQpbICAgIDAuMDA0MDAwXSBzb2Z0
d2FyZSBJTyBUTEI6IGFyZWEgbnVtIDIuClsgICAgMC4wMDQwMDBdIE1lbW9yeTogMzQ3NzE2
OEsvMzY1MTUwMEsgYXZhaWxhYmxlICgxNDMzNksga2VybmVsIGNvZGUsIDIzNDBLIHJ3ZGF0
YSwgNTMwOEsgcm9kYXRhLCAyOTA4SyBpbml0LCAxMTA2MEsgYnNzLCAxNzQwNzJLIHJlc2Vy
dmVkLCAwSyBjbWEtcmVzZXJ2ZWQpClsgICAgMC4wMDQwMDBdIFNMVUI6IEhXYWxpZ249NjQs
IE9yZGVyPTAtMywgTWluT2JqZWN0cz0wLCBDUFVzPTIsIE5vZGVzPTEKWyAgICAwLjAwNDAw
MF0gQWZ0ZXIgbW1faW5pdApbICAgIDAuMDA0MDAwXSBBZnRlciBwb2tpbmdfaW5pdApbICAg
IDAuMDA0MDAwXSBmdHJhY2U6IGFsbG9jYXRpbmcgMzg2NjQgZW50cmllcyBpbiAxNTIgcGFn
ZXMKWyAgICAwLjAwNDAwMF0gZnRyYWNlOiBhbGxvY2F0ZWQgMTUyIHBhZ2VzIHdpdGggMyBn
cm91cHMKWyAgICAwLjAwNDAwMF0gRHluYW1pYyBQcmVlbXB0OiBmdWxsClsgICAgMC4wMDQw
MDBdIEFmdGVyIHNjaGVkX2luaXQKWyAgICAwLjAwNDAwMF0gcmN1OiBQcmVlbXB0aWJsZSBo
aWVyYXJjaGljYWwgUkNVIGltcGxlbWVudGF0aW9uLgpbICAgIDAuMDA0MDAwXSByY3U6IAlS
Q1UgcmVzdHJpY3RpbmcgQ1BVcyBmcm9tIE5SX0NQVVM9NjQgdG8gbnJfY3B1X2lkcz0yLgpb
ICAgIDAuMDA0MDAwXSAJVHJhbXBvbGluZSB2YXJpYW50IG9mIFRhc2tzIFJDVSBlbmFibGVk
LgpbICAgIDAuMDA0MDAwXSAJUnVkZSB2YXJpYW50IG9mIFRhc2tzIFJDVSBlbmFibGVkLgpb
ICAgIDAuMDA0MDAwXSAJVHJhY2luZyB2YXJpYW50IG9mIFRhc2tzIFJDVSBlbmFibGVkLgpb
ICAgIDAuMDA0MDAwXSByY3U6IFJDVSBjYWxjdWxhdGVkIHZhbHVlIG9mIHNjaGVkdWxlci1l
bmxpc3RtZW50IGRlbGF5IGlzIDI1IGppZmZpZXMuClsgICAgMC4wMDQwMDBdIHJjdTogQWRq
dXN0aW5nIGdlb21ldHJ5IGZvciByY3VfZmFub3V0X2xlYWY9MTYsIG5yX2NwdV9pZHM9Mgpb
ICAgIDAuMDA0MDAwXSBBZnRlciByY3VfaW5pdApbICAgIDAuMDA0MDAwXSBOUl9JUlFTOiA0
MzUyLCBucl9pcnFzOiA0NDAsIHByZWFsbG9jYXRlZCBpcnFzOiAxNgpbICAgIDAuMDA0MDAw
XSByY3U6IHNyY3VfaW5pdDogU2V0dGluZyBzcmN1X3N0cnVjdCBzaXplcyBiYXNlZCBvbiBj
b250ZW50aW9uLgpbICAgIDAuMDA0MDAwXSBBZnRlciByYW5kb21faW5pdCgpClsgICAgMC4w
MDQwMDBdIEFmdGVyIGJvb3RfaW5pdF9zdGFja19jYW5hcnkKWyAgICAwLjAwNDAwMF0gc3B1
cmlvdXMgODI1OUEgaW50ZXJydXB0OiBJUlE3LgpbICAgIDAuMDA0MDAwXSBDb25zb2xlOiBj
b2xvdXIgVkdBKyA4MHgyNQpbICAgIDAuMDA0MDAwXSBwcmludGs6IGNvbnNvbGUgW3R0eTBd
IGVuYWJsZWQKWyAgICAwLjAwNDAwMF0gQUNQSTogQ29yZSByZXZpc2lvbiAyMDIyMTAyMApb
ICAgIDAuMDA0MDAwXSBjbG9ja3NvdXJjZTogaHBldDogbWFzazogMHhmZmZmZmZmZiBtYXhf
Y3ljbGVzOiAweGZmZmZmZmZmLCBtYXhfaWRsZV9uczogMTMzNDg0ODczNTA0IG5zClsgICAg
MC4wMDQwMDBdIEFQSUM6IFN3aXRjaCB0byBzeW1tZXRyaWMgSS9PIG1vZGUgc2V0dXAKWyAg
ICAwLjAwNDAwMF0gQU1ELVZpOiBVc2luZyBnbG9iYWwgSVZIRCBFRlI6MHgwLCBFRlIyOjB4
MApbICAgIDAuMDA0MDAwXSBBUElDOiBEb25lClsgICAgMC4wMDQwMDBdIEJlZm9yZSBhcGlj
X2JzYl9zZXR1cApbICAgIDAuMDA0MDAwXSBjaGVja190aW1lciBiZWdpbgpbICAgIDAuMDA0
MDAwXSBjaGVja190aW1lciBhZnRlciBsb2NhbF9pcnFfZGlzYWJsZQpbICAgIDAuMDA0MDAw
XSAuLlRJTUVSOiB2ZWN0b3I9MHgzMCBhcGljMT0wIHBpbjE9MiBhcGljMj0tMSBwaW4yPS0x
ClsgICAgMC4wMDQwMDBdIGNsb2Nrc291cmNlOiB0c2MtZWFybHk6IG1hc2s6IDB4ZmZmZmZm
ZmZmZmZmZmZmZiBtYXhfY3ljbGVzOiAweDcwNmU2MDNiYjU1LCBtYXhfaWRsZV9uczogODgx
NTkwODE5MTMzIG5zClsgICAgMC4xNDUxNTZdIENhbGlicmF0aW5nIGRlbGF5IGxvb3AgKHNr
aXBwZWQpLCB2YWx1ZSBjYWxjdWxhdGVkIHVzaW5nIHRpbWVyIGZyZXF1ZW5jeS4uIDc3OTku
OTAgQm9nb01JUFMgKGxwaj0xNTU5OTgxNikKWyAgICAwLjE0NTE2MF0gcGlkX21heDogZGVm
YXVsdDogMzI3NjggbWluaW11bTogMzAxClsgICAgMC4xNDUyNTRdIExTTTogaW5pdGlhbGl6
aW5nIGxzbT1jYXBhYmlsaXR5ClsgICAgMC4xNDUzNDhdIE1vdW50LWNhY2hlIGhhc2ggdGFi
bGUgZW50cmllczogODE5MiAob3JkZXI6IDQsIDY1NTM2IGJ5dGVzLCBsaW5lYXIpClsgICAg
MC4xNDUzNjVdIE1vdW50cG9pbnQtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA4MTkyIChv
cmRlcjogNCwgNjU1MzYgYnl0ZXMsIGxpbmVhcikKWyAgICAwLjE0NTczOF0gQml0IDMwIGlu
IENQVUlEIEVDWCBub3Qgc2V0LgpbICAgIDAuMTQ1NzYyXSBMYXN0IGxldmVsIGlUTEIgZW50
cmllczogNEtCIDUxMiwgMk1CIDEwMjQsIDRNQiA1MTIKWyAgICAwLjE0NTc2NF0gTGFzdCBs
ZXZlbCBkVExCIGVudHJpZXM6IDRLQiAxMDI0LCAyTUIgMTAyNCwgNE1CIDUxMiwgMUdCIDAK
WyAgICAwLjE0NTc2OV0gU3BlY3RyZSBWMSA6IE1pdGlnYXRpb246IHVzZXJjb3B5L3N3YXBn
cyBiYXJyaWVycyBhbmQgX191c2VyIHBvaW50ZXIgc2FuaXRpemF0aW9uClsgICAgMC4xNDU3
NzJdIFNwZWN0cmUgVjIgOiBNaXRpZ2F0aW9uOiBSZXRwb2xpbmVzClsgICAgMC4xNDU3NzNd
IFNwZWN0cmUgVjIgOiBTcGVjdHJlIHYyIC8gU3BlY3RyZVJTQiBtaXRpZ2F0aW9uOiBGaWxs
aW5nIFJTQiBvbiBjb250ZXh0IHN3aXRjaApbICAgIDAuMTQ1Nzc0XSBTcGVjdHJlIFYyIDog
U3BlY3RyZSB2MiAvIFNwZWN0cmVSU0IgOiBGaWxsaW5nIFJTQiBvbiBWTUVYSVQKWyAgICAw
LjE0NTc3NF0gU3BlY3RyZSBWMiA6IEVuYWJsaW5nIFNwZWN1bGF0aW9uIEJhcnJpZXIgZm9y
IGZpcm13YXJlIGNhbGxzClsgICAgMC4xNDU3NzVdIFJFVEJsZWVkOiBNaXRpZ2F0aW9uOiB1
bnRyYWluZWQgcmV0dXJuIHRodW5rClsgICAgMC4xNDU3NzddIFNwZWN0cmUgVjIgOiBtaXRp
Z2F0aW9uOiBFbmFibGluZyBjb25kaXRpb25hbCBJbmRpcmVjdCBCcmFuY2ggUHJlZGljdGlv
biBCYXJyaWVyClsgICAgMC4xNDU3NzldIFNwZWN1bGF0aXZlIFN0b3JlIEJ5cGFzczogTWl0
aWdhdGlvbjogU3BlY3VsYXRpdmUgU3RvcmUgQnlwYXNzIGRpc2FibGVkIHZpYSBwcmN0bApb
ICAgIDAuMTUwMjQ0XSBGcmVlaW5nIFNNUCBhbHRlcm5hdGl2ZXMgbWVtb3J5OiAzMksKWyAg
ICAwLjE1MDI0OV0gQWZ0ZXIgY2hlY2tfYnVncwpbICAgIDAuMTUwMjUwXSBBZnRlciBhY3Bp
X3N1YnN5c3RlbV9pbml0ClsgICAgMC4xNTAyNTFdIEFmdGVyIGFyY2hfcG9zdF9hY3BpX3N1
YnN5c19pbml0ClsgICAgMC4xNTAyNTJdIEFmdGVyIHJjdV9zY2hlZHVsZXJfc3RhcnRpbmcK
WyAgICAwLjE1MDMyNF0gQWZ0ZXIgZmluZF90YXNrX2J5X3BpZF9ucyBhbmQgUEZfTk9fU0VU
QUZGSU5JVFkKWyAgICAwLjE1MDMyOV0gQWZ0ZXIgbnVtYV9kZWZhdWx0X3BvbGljeQpbICAg
IDAuMTUwMzQ5XSBBZnRlciByY3VfcmVhZF9sb2NrClsgICAgMC4xNTAzNTBdIEFmdGVyIHJj
dV9yZWFkX3VubG9jawpbICAgIDAuMTUwMzUxXSBBZnRlciBrdGhyZWFkZF9kb25lClsgICAg
MC4xNTAzNjNdIHNtcGJvb3Q6IFN0YXJ0IG9mIHNtcF9wcmVwYXJlX2NwdXNfY29tbW9uClsg
ICAgMC4xNTAzNjVdIHNtcGJvb3Q6IHNtcGJvb3Q6IHphbGxvYyAwClsgICAgMC4xNTAzNjZd
IHNtcGJvb3Q6IHNtcGJvb3Q6IHphbGxvYyAxClsgICAgMC4xNTAzNjddIHNtcGJvb3Q6IHNt
cGJvb3Q6IEFmdGVyIHNldF9zY2hlZF90b3BvbG9neSgpClsgICAgMC4xNTAzNjldIHNtcGJv
b3Q6IHNtcGJvb3Q6IEFmdGVyIHNtcF9zYW5pdHlfY2hlY2soKQpbICAgIDAuMTUwMzY5XSBz
bXBib290OiBzbXBib290OiBCZWZvcmUgeDg2X2luaXQudGltZXJzLnNldHVwX3BlcmNwdV9j
bG9ja2V2KCkKWyAgICAwLjI1ODM4MV0gc21wYm9vdDogc21wYm9vdDogQWZ0ZXIgeDg2X2lu
aXQudGltZXJzLnNldHVwX3BlcmNwdV9jbG9ja2V2KCkKWyAgICAwLjI1ODM4Ml0gc21wYm9v
dDogQ1BVMDogQU1EIEE2LTY0MDBLIEFQVSB3aXRoIFJhZGVvbih0bSkgSEQgR3JhcGhpY3Mg
KGZhbWlseTogMHgxNSwgbW9kZWw6IDB4MTMsIHN0ZXBwaW5nOiAweDEpClsgICAgMC4yNTg2
MTVdIGNibGlzdF9pbml0X2dlbmVyaWM6IFNldHRpbmcgYWRqdXN0YWJsZSBudW1iZXIgb2Yg
Y2FsbGJhY2sgcXVldWVzLgpbICAgIDAuMjU4NjE4XSBjYmxpc3RfaW5pdF9nZW5lcmljOiBT
ZXR0aW5nIHNoaWZ0IHRvIDEgYW5kIGxpbSB0byAxLgpbICAgIDAuMjU4NjQ4XSBjYmxpc3Rf
aW5pdF9nZW5lcmljOiBTZXR0aW5nIHNoaWZ0IHRvIDEgYW5kIGxpbSB0byAxLgpbICAgIDAu
MjU4Njc1XSBjYmxpc3RfaW5pdF9nZW5lcmljOiBTZXR0aW5nIHNoaWZ0IHRvIDEgYW5kIGxp
bSB0byAxLgpbICAgIDAuMjU4NzAzXSBQZXJmb3JtYW5jZSBFdmVudHM6IEZhbTE1aCBjb3Jl
IHBlcmZjdHIsIEFNRCBQTVUgZHJpdmVyLgpbICAgIDAuMjU4NzI2XSAuLi4gdmVyc2lvbjog
ICAgICAgICAgICAgICAgMApbICAgIDAuMjU4NzI3XSAuLi4gYml0IHdpZHRoOiAgICAgICAg
ICAgICAgNDgKWyAgICAwLjI1ODcyOF0gLi4uIGdlbmVyaWMgcmVnaXN0ZXJzOiAgICAgIDYK
WyAgICAwLjI1ODcyOV0gLi4uIHZhbHVlIG1hc2s6ICAgICAgICAgICAgIDAwMDBmZmZmZmZm
ZmZmZmYKWyAgICAwLjI1ODczMF0gLi4uIG1heCBwZXJpb2Q6ICAgICAgICAgICAgIDAwMDA3
ZmZmZmZmZmZmZmYKWyAgICAwLjI1ODczMV0gLi4uIGZpeGVkLXB1cnBvc2UgZXZlbnRzOiAg
IDAKWyAgICAwLjI1ODczMl0gLi4uIGV2ZW50IG1hc2s6ICAgICAgICAgICAgIDAwMDAwMDAw
MDAwMDAwM2YKWyAgICAwLjI1ODg1Ml0gcmN1OiBIaWVyYXJjaGljYWwgU1JDVSBpbXBsZW1l
bnRhdGlvbi4KWyAgICAwLjI1ODg1M10gcmN1OiAJTWF4IHBoYXNlIG5vLWRlbGF5IGluc3Rh
bmNlcyBpcyAxMDAwLgpbICAgIDAuMjU5NDQxXSBOTUkgd2F0Y2hkb2c6IEVuYWJsZWQuIFBl
cm1hbmVudGx5IGNvbnN1bWVzIG9uZSBody1QTVUgY291bnRlci4KWyAgICAwLjI1OTUxNV0g
c21wOiBCcmluZ2luZyB1cCBzZWNvbmRhcnkgQ1BVcyAuLi4KWyAgICAwLjI1OTcxNF0geDg2
OiBCb290aW5nIFNNUCBjb25maWd1cmF0aW9uOgpbICAgIDAuMjU5NzE1XSAuLi4uIG5vZGUg
ICMwLCBDUFVzOiAgICAgICMxClsgICAgMC4xMjExNTFdIEJpdCAzMCBpbiBDUFVJRCBFQ1gg
bm90IHNldC4KWyAgICAwLjI1OTg0NF0gQWZ0ZXIgc2NoZWR1bGVfcHJlZW1wdF9kaXNhYmxl
ZApbICAgIDAuMjU5ODQ5XSBzbXA6IEJyb3VnaHQgdXAgMSBub2RlLCAyIENQVXMKWyAgICAw
LjI1OTg0OV0gc21wYm9vdDogTWF4IGxvZ2ljYWwgcGFja2FnZXM6IDEKWyAgICAwLjI1OTg0
OV0gc21wYm9vdDogVG90YWwgb2YgMiBwcm9jZXNzb3JzIGFjdGl2YXRlZCAoMTU1OTkuODEg
Qm9nb01JUFMpClsgICAgMC4yNjEzMTFdIGRldnRtcGZzOiBpbml0aWFsaXplZApbICAgIDAu
MjYxMzExXSB4ODYvbW06IE1lbW9yeSBibG9jayBzaXplOiAxMjhNQgpbICAgIDAuMjYyMjIw
XSBjbG9ja3NvdXJjZTogamlmZmllczogbWFzazogMHhmZmZmZmZmZiBtYXhfY3ljbGVzOiAw
eGZmZmZmZmZmLCBtYXhfaWRsZV9uczogNzY0NTA0MTc4NTEwMDAwMCBucwpbICAgIDAuMjYy
MjIwXSBmdXRleCBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXI6IDMsIDMyNzY4IGJ5
dGVzLCBsaW5lYXIpClsgICAgMC4yNjIyMjBdIHBpbmN0cmwgY29yZTogaW5pdGlhbGl6ZWQg
cGluY3RybCBzdWJzeXN0ZW0KWyAgICAwLjI2MjIyMF0gUE06IFJUQyB0aW1lOiAxNzoyNDow
OSwgZGF0ZTogMjAyMy0wNC0xNwpbICAgIDAuMjYyMjIwXSBORVQ6IFJlZ2lzdGVyZWQgUEZf
TkVUTElOSy9QRl9ST1VURSBwcm90b2NvbCBmYW1pbHkKWyAgICAwLjI2MjQzNF0gYXVkaXQ6
IGluaXRpYWxpemluZyBuZXRsaW5rIHN1YnN5cyAoZGlzYWJsZWQpClsgICAgMC4yNjI0NTNd
IGF1ZGl0OiB0eXBlPTIwMDAgYXVkaXQoMTY4MTc1MjI0OS4xNDA6MSk6IHN0YXRlPWluaXRp
YWxpemVkIGF1ZGl0X2VuYWJsZWQ9MCByZXM9MQpbICAgIDAuMjYyNDUzXSB0aGVybWFsX3N5
czogUmVnaXN0ZXJlZCB0aGVybWFsIGdvdmVybm9yICdmYWlyX3NoYXJlJwpbICAgIDAuMjYy
NDUzXSB0aGVybWFsX3N5czogUmVnaXN0ZXJlZCB0aGVybWFsIGdvdmVybm9yICdiYW5nX2Jh
bmcnClsgICAgMC4yNjI0NTNdIHRoZXJtYWxfc3lzOiBSZWdpc3RlcmVkIHRoZXJtYWwgZ292
ZXJub3IgJ3N0ZXBfd2lzZScKWyAgICAwLjI2MjQ1M10gdGhlcm1hbF9zeXM6IFJlZ2lzdGVy
ZWQgdGhlcm1hbCBnb3Zlcm5vciAndXNlcl9zcGFjZScKWyAgICAwLjI2MjQ1M10gY3B1aWRs
ZTogdXNpbmcgZ292ZXJub3IgbGFkZGVyClsgICAgMC4yNjI0NTNdIGNwdWlkbGU6IHVzaW5n
IGdvdmVybm9yIG1lbnUKWyAgICAwLjI2MjQ1M10gUENJOiBNTUNPTkZJRyBmb3IgZG9tYWlu
IDAwMDAgW2J1cyAwMC0zZl0gYXQgW21lbSAweGY4MDAwMDAwLTB4ZmJmZmZmZmZdIChiYXNl
IDB4ZjgwMDAwMDApClsgICAgMC4yNjI0NTNdIFBDSTogTU1DT05GSUcgYXQgW21lbSAweGY4
MDAwMDAwLTB4ZmJmZmZmZmZdIHJlc2VydmVkIGFzIEU4MjAgZW50cnkKWyAgICAwLjI2MjQ1
M10gUENJOiBVc2luZyBjb25maWd1cmF0aW9uIHR5cGUgMSBmb3IgYmFzZSBhY2Nlc3MKWyAg
ICAwLjI2MjQ1M10ga3Byb2Jlczoga3Byb2JlIGp1bXAtb3B0aW1pemF0aW9uIGlzIGVuYWJs
ZWQuIEFsbCBrcHJvYmVzIGFyZSBvcHRpbWl6ZWQgaWYgcG9zc2libGUuClsgICAgMC4yNzMy
MzVdIEh1Z2VUTEI6IHJlZ2lzdGVyZWQgMS4wMCBHaUIgcGFnZSBzaXplLCBwcmUtYWxsb2Nh
dGVkIDAgcGFnZXMKWyAgICAwLjI3MzIzNV0gSHVnZVRMQjogMTYzODAgS2lCIHZtZW1tYXAg
Y2FuIGJlIGZyZWVkIGZvciBhIDEuMDAgR2lCIHBhZ2UKWyAgICAwLjI3MzIzNV0gSHVnZVRM
QjogcmVnaXN0ZXJlZCAyLjAwIE1pQiBwYWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBwYWdl
cwpbICAgIDAuMjczMjM1XSBIdWdlVExCOiAyOCBLaUIgdm1lbW1hcCBjYW4gYmUgZnJlZWQg
Zm9yIGEgMi4wMCBNaUIgcGFnZQpbICAgIDAuMjc1NDI4XSBjcnlwdGQ6IG1heF9jcHVfcWxl
biBzZXQgdG8gMTAwMApbICAgIDAuMjc1NDI4XSBBQ1BJOiBBZGRlZCBfT1NJKE1vZHVsZSBE
ZXZpY2UpClsgICAgMC4yNzU0MjhdIEFDUEk6IEFkZGVkIF9PU0koUHJvY2Vzc29yIERldmlj
ZSkKWyAgICAwLjI3NTQyOF0gQUNQSTogQWRkZWQgX09TSSgzLjAgX1NDUCBFeHRlbnNpb25z
KQpbICAgIDAuMjc1NDI4XSBBQ1BJOiBBZGRlZCBfT1NJKFByb2Nlc3NvciBBZ2dyZWdhdG9y
IERldmljZSkKWyAgICAwLjI4MDg3OV0gQUNQSTogRFNEVCBzdWNjZXNzZnVsbHkgYWNxdWly
ZWQgYW5kIGxvYWRlZAoKWyAgICAwLjI4MTE1NF0gQUNQSTogNCBBQ1BJIEFNTCB0YWJsZXMg
c3VjY2Vzc2Z1bGx5IGFjcXVpcmVkIGFuZCBsb2FkZWQKWyAgICAwLjI4MTY0OV0gQUNQSTog
SW50ZXJwcmV0ZXIgZW5hYmxlZApbICAgIDAuMjgxNjY5XSBBQ1BJOiBQTTogKHN1cHBvcnRz
IFMwIFMxIFMzIFM1KQpbICAgIDAuMjgxNjcxXSBBQ1BJOiBVc2luZyBJT0FQSUMgZm9yIGlu
dGVycnVwdCByb3V0aW5nClsgICAgMC4yODE3MjRdIEhFU1Q6IFRhYmxlIHBhcnNpbmcgaGFz
IGJlZW4gaW5pdGlhbGl6ZWQuClsgICAgMC4yODE3NDNdIEdIRVM6IEZhaWxlZCB0byBlbmFi
bGUgQVBFSSBmaXJtd2FyZSBmaXJzdCBtb2RlLgpbICAgIDAuMjgxNzQ1XSBQQ0k6IFVzaW5n
IGhvc3QgYnJpZGdlIHdpbmRvd3MgZnJvbSBBQ1BJOyBpZiBuZWNlc3NhcnksIHVzZSAicGNp
PW5vY3JzIiBhbmQgcmVwb3J0IGEgYnVnClsgICAgMC4yODE3NDZdIFBDSTogSWdub3Jpbmcg
RTgyMCByZXNlcnZhdGlvbnMgZm9yIGhvc3QgYnJpZGdlIHdpbmRvd3MKWyAgICAwLjI4MjAw
M10gQUNQSTogRW5hYmxlZCA4IEdQRXMgaW4gYmxvY2sgMDAgdG8gMUYKWyAgICAwLjI4Njcz
NV0gQUNQSTogUENJIFJvb3QgQnJpZGdlIFtQQ0kwXSAoZG9tYWluIDAwMDAgW2J1cyAwMC1m
Zl0pClsgICAgMC4yODY3NDZdIGFjcGkgUE5QMEEwMzowMDogX09TQzogT1Mgc3VwcG9ydHMg
W0V4dGVuZGVkQ29uZmlnIEFTUE0gQ2xvY2tQTSBTZWdtZW50cyBNU0kgSFBYLVR5cGUzXQpb
ICAgIDAuMjg2ODM4XSBhY3BpIFBOUDBBMDM6MDA6IF9PU0M6IE9TIG5vdyBjb250cm9scyBb
UE1FIEFFUiBQQ0llQ2FwYWJpbGl0eSBMVFJdClsgICAgMC4yODY4NTJdIGFjcGkgUE5QMEEw
MzowMDogW0Zpcm13YXJlIEluZm9dOiBNTUNPTkZJRyBmb3IgZG9tYWluIDAwMDAgW2J1cyAw
MC0zZl0gb25seSBwYXJ0aWFsbHkgY292ZXJzIHRoaXMgYnJpZGdlClsgICAgMC4yODY5NDJd
IGFjcGkgUE5QMEEwMzowMDogaG9zdCBicmlkZ2Ugd2luZG93IGV4cGFuZGVkIHRvIFtpbyAg
MHgwMDAwLTB4MGNmNyB3aW5kb3ddOyBbaW8gIDB4MDNiMC0weDAzZGYgd2luZG93XSBpZ25v
cmVkClsgICAgMC4yODcxOTddIFBDSSBob3N0IGJyaWRnZSB0byBidXMgMDAwMDowMApbICAg
IDAuMjg3MTk5XSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFtpbyAgMHgw
MDAwLTB4MGNmNyB3aW5kb3ddClsgICAgMC4yODcyMDFdIHBjaV9idXMgMDAwMDowMDogcm9v
dCBidXMgcmVzb3VyY2UgW2lvICAweDBkMDAtMHhmZmZmIHdpbmRvd10KWyAgICAwLjI4NzIw
M10gcGNpX2J1cyAwMDAwOjAwOiByb290IGJ1cyByZXNvdXJjZSBbbWVtIDB4MDAwYTAwMDAt
MHgwMDBkZmZmZl0KWyAgICAwLjI4NzIwNV0gcGNpX2J1cyAwMDAwOjAwOiByb290IGJ1cyBy
ZXNvdXJjZSBbbWVtIDB4ODAwMDAwMDAtMHhmZmZmZmZmZl0KWyAgICAwLjI4NzIxMF0gcGNp
X2J1cyAwMDAwOjAwOiByb290IGJ1cyByZXNvdXJjZSBbYnVzIDAwLWZmXQpbICAgIDAuMjg3
MjM0XSBwY2kgMDAwMDowMDowMC4wOiBbMTAyMjoxNDEwXSB0eXBlIDAwIGNsYXNzIDB4MDYw
MDAwClsgICAgMC4yODczODhdIHBjaSAwMDAwOjAwOjAwLjI6IFsxMDIyOjE0MTldIHR5cGUg
MDAgY2xhc3MgMHgwODA2MDAKWyAgICAwLjI4NzQ3OV0gcGNpIDAwMDA6MDA6MDEuMDogWzEw
MDI6OTk5Nl0gdHlwZSAwMCBjbGFzcyAweDAzMDAwMApbICAgIDAuMjg3NDg4XSBwY2kgMDAw
MDowMDowMS4wOiByZWcgMHgxMDogW21lbSAweGUwMDAwMDAwLTB4ZWZmZmZmZmYgcHJlZl0K
WyAgICAwLjI4NzQ5M10gcGNpIDAwMDA6MDA6MDEuMDogcmVnIDB4MTQ6IFtpbyAgMHgxMDAw
LTB4MTBmZl0KWyAgICAwLjI4NzQ5OF0gcGNpIDAwMDA6MDA6MDEuMDogcmVnIDB4MTg6IFtt
ZW0gMHhmMDE4MDAwMC0weGYwMWJmZmZmXQpbICAgIDAuMjg3NTE0XSBwY2kgMDAwMDowMDow
MS4wOiBlbmFibGluZyBFeHRlbmRlZCBUYWdzClsgICAgMC4yODc1MjRdIHBjaSAwMDAwOjAw
OjAxLjA6IFZpZGVvIGRldmljZSB3aXRoIHNoYWRvd2VkIFJPTSBhdCBbbWVtIDB4MDAwYzAw
MDAtMHgwMDBkZmZmZl0KWyAgICAwLjI4NzU0MV0gcGNpIDAwMDA6MDA6MDEuMDogc3VwcG9y
dHMgRDEgRDIKWyAgICAwLjI4NzYxMV0gcGNpIDAwMDA6MDA6MDEuMTogWzEwMDI6OTkwMl0g
dHlwZSAwMCBjbGFzcyAweDA0MDMwMApbICAgIDAuMjg3NjE4XSBwY2kgMDAwMDowMDowMS4x
OiByZWcgMHgxMDogW21lbSAweGYwMWMwMDAwLTB4ZjAxYzNmZmZdClsgICAgMC4yODc2NDBd
IHBjaSAwMDAwOjAwOjAxLjE6IGVuYWJsaW5nIEV4dGVuZGVkIFRhZ3MKWyAgICAwLjI4NzY2
M10gcGNpIDAwMDA6MDA6MDEuMTogc3VwcG9ydHMgRDEgRDIKWyAgICAwLjI4Nzc1MF0gcGNp
IDAwMDA6MDA6MTEuMDogWzEwMjI6NzgwMV0gdHlwZSAwMCBjbGFzcyAweDAxMDYwMQpbICAg
IDAuMjg3NzYzXSBwY2kgMDAwMDowMDoxMS4wOiByZWcgMHgxMDogW2lvICAweDE0MTAtMHgx
NDE3XQpbICAgIDAuMjg3NzcxXSBwY2kgMDAwMDowMDoxMS4wOiByZWcgMHgxNDogW2lvICAw
eDE0MjAtMHgxNDIzXQpbICAgIDAuMjg3Nzc4XSBwY2kgMDAwMDowMDoxMS4wOiByZWcgMHgx
ODogW2lvICAweDE0MTgtMHgxNDFmXQpbICAgIDAuMjg3Nzg1XSBwY2kgMDAwMDowMDoxMS4w
OiByZWcgMHgxYzogW2lvICAweDE0MjQtMHgxNDI3XQpbICAgIDAuMjg3NzkzXSBwY2kgMDAw
MDowMDoxMS4wOiByZWcgMHgyMDogW2lvICAweDE0MDAtMHgxNDBmXQpbICAgIDAuMjg3ODAw
XSBwY2kgMDAwMDowMDoxMS4wOiByZWcgMHgyNDogW21lbSAweGYwMWNjMDAwLTB4ZjAxY2M3
ZmZdClsgICAgMC4yODc5NTRdIHBjaSAwMDAwOjAwOjEyLjA6IFsxMDIyOjc4MDddIHR5cGUg
MDAgY2xhc3MgMHgwYzAzMTAKWyAgICAwLjI4Nzk2N10gcGNpIDAwMDA6MDA6MTIuMDogcmVn
IDB4MTA6IFttZW0gMHhmMDFjODAwMC0weGYwMWM4ZmZmXQpbICAgIDAuMjg4MTUyXSBwY2kg
MDAwMDowMDoxMi4yOiBbMTAyMjo3ODA4XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzIwClsgICAg
MC4yODgxNjVdIHBjaSAwMDAwOjAwOjEyLjI6IHJlZyAweDEwOiBbbWVtIDB4ZjAxY2QwMDAt
MHhmMDFjZDBmZl0KWyAgICAwLjI4ODIzMF0gcGNpIDAwMDA6MDA6MTIuMjogc3VwcG9ydHMg
RDEgRDIKWyAgICAwLjI4ODIzMV0gcGNpIDAwMDA6MDA6MTIuMjogUE1FIyBzdXBwb3J0ZWQg
ZnJvbSBEMCBEMSBEMiBEM2hvdApbICAgIDAuMjg4MjMzXSBwY2kgMDAwMDowMDoxMi4yOiBw
bWVfcG9sbCA9IHRydWUKWyAgICAwLjI4ODIzNF0gcGNpIDAwMDA6MDA6MTIuMjogYWZ0ZXIg
ZGV2aWNlX3NldF93YWtldXBfY2FwYWJsZSgpClsgICAgMC4yODgyMzddIHBjaSAwMDAwOjAw
OjEyLjI6IGFmdGVyIHBjaV9wbWVfYWN0aXZlKCkKWyAgICAwLjI4ODM3M10gcGNpIDAwMDA6
MDA6MTMuMDogWzEwMjI6NzgwN10gdHlwZSAwMCBjbGFzcyAweDBjMDMxMApbICAgIDAuMjg4
Mzg3XSBwY2kgMDAwMDowMDoxMy4wOiByZWcgMHgxMDogW21lbSAweGYwMWM5MDAwLTB4ZjAx
YzlmZmZdClsgICAgMC4yODg1NzBdIHBjaSAwMDAwOjAwOjEzLjI6IFsxMDIyOjc4MDhdIHR5
cGUgMDAgY2xhc3MgMHgwYzAzMjAKWyAgICAwLjI4ODU4M10gcGNpIDAwMDA6MDA6MTMuMjog
cmVnIDB4MTA6IFttZW0gMHhmMDFjZTAwMC0weGYwMWNlMGZmXQpbICAgIDAuMjg4NjUwXSBw
Y2kgMDAwMDowMDoxMy4yOiBzdXBwb3J0cyBEMSBEMgpbICAgIDAuMjg4NjUyXSBwY2kgMDAw
MDowMDoxMy4yOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQxIEQyIEQzaG90ClsgICAgMC4y
ODg2NTNdIHBjaSAwMDAwOjAwOjEzLjI6IHBtZV9wb2xsID0gdHJ1ZQpbICAgIDAuMjg4NjU0
XSBwY2kgMDAwMDowMDoxMy4yOiBhZnRlciBkZXZpY2Vfc2V0X3dha2V1cF9jYXBhYmxlKCkK
WyAgICAwLjI4ODY1N10gcGNpIDAwMDA6MDA6MTMuMjogYWZ0ZXIgcGNpX3BtZV9hY3RpdmUo
KQpbICAgIDAuMjg4Nzk4XSBwY2kgMDAwMDowMDoxNC4wOiBbMTAyMjo3ODBiXSB0eXBlIDAw
IGNsYXNzIDB4MGMwNTAwClsgICAgMC4yODg5ODBdIHBjaSAwMDAwOjAwOjE0LjI6IFsxMDIy
Ojc4MGRdIHR5cGUgMDAgY2xhc3MgMHgwNDAzMDAKWyAgICAwLjI4ODk5N10gcGNpIDAwMDA6
MDA6MTQuMjogcmVnIDB4MTA6IFttZW0gMHhmMDFjNDAwMC0weGYwMWM3ZmZmIDY0Yml0XQpb
ICAgIDAuMjg5MDUyXSBwY2kgMDAwMDowMDoxNC4yOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQw
IEQzaG90IEQzY29sZApbICAgIDAuMjg5MDU0XSBwY2kgMDAwMDowMDoxNC4yOiBwbWVfcG9s
bCA9IHRydWUKWyAgICAwLjI4OTA1NV0gcGNpIDAwMDA6MDA6MTQuMjogYWZ0ZXIgZGV2aWNl
X3NldF93YWtldXBfY2FwYWJsZSgpClsgICAgMC4yODkwNThdIHBjaSAwMDAwOjAwOjE0LjI6
IGFmdGVyIHBjaV9wbWVfYWN0aXZlKCkKWyAgICAwLjI5MDEzMV0gcGNpIDAwMDA6MDA6MTQu
MzogWzEwMjI6NzgwZV0gdHlwZSAwMCBjbGFzcyAweDA2MDEwMApbICAgIDAuMjkwMzM1XSBw
Y2kgMDAwMDowMDoxNC40OiBbMTAyMjo3ODBmXSB0eXBlIDAxIGNsYXNzIDB4MDYwNDAxClsg
ICAgMC4yOTA0OTBdIHBjaSAwMDAwOjAwOjE0LjU6IFsxMDIyOjc4MDldIHR5cGUgMDAgY2xh
c3MgMHgwYzAzMTAKWyAgICAwLjI5MDUwNF0gcGNpIDAwMDA6MDA6MTQuNTogcmVnIDB4MTA6
IFttZW0gMHhmMDFjYTAwMC0weGYwMWNhZmZmXQpbICAgIDAuMjkwNjc3XSBwY2kgMDAwMDow
MDoxNS4wOiBbMTAyMjo0M2EwXSB0eXBlIDAxIGNsYXNzIDB4MDYwNDAwClsgICAgMC4yOTA3
MDVdIHBjaSAwMDAwOjAwOjE1LjA6IGVuYWJsaW5nIEV4dGVuZGVkIFRhZ3MKWyAgICAwLjI5
MDc0Nl0gcGNpIDAwMDA6MDA6MTUuMDogc3VwcG9ydHMgRDEgRDIKWyAgICAwLjI5MDkwN10g
cGNpIDAwMDA6MDA6MTUuMTogWzEwMjI6NDNhMV0gdHlwZSAwMSBjbGFzcyAweDA2MDQwMApb
ICAgIDAuMjkwOTM5XSBwY2kgMDAwMDowMDoxNS4xOiBlbmFibGluZyBFeHRlbmRlZCBUYWdz
ClsgICAgMC4yOTA5NzhdIHBjaSAwMDAwOjAwOjE1LjE6IHN1cHBvcnRzIEQxIEQyClsgICAg
MC4yOTExNDZdIHBjaSAwMDAwOjAwOjE1LjI6IFsxMDIyOjQzYTJdIHR5cGUgMDEgY2xhc3Mg
MHgwNjA0MDAKWyAgICAwLjI5MTE3NF0gcGNpIDAwMDA6MDA6MTUuMjogZW5hYmxpbmcgRXh0
ZW5kZWQgVGFncwpbICAgIDAuMjkxMjE0XSBwY2kgMDAwMDowMDoxNS4yOiBzdXBwb3J0cyBE
MSBEMgpbICAgIDAuMjkxMjg5XSBwY2kgMDAwMDowMDoxNi4wOiBbMTAyMjo3ODA3XSB0eXBl
IDAwIGNsYXNzIDB4MGMwMzEwClsgICAgMC4yOTEzMDNdIHBjaSAwMDAwOjAwOjE2LjA6IHJl
ZyAweDEwOiBbbWVtIDB4ZjAxY2IwMDAtMHhmMDFjYmZmZl0KWyAgICAwLjI5MTQ3M10gcGNp
IDAwMDA6MDA6MTYuMjogWzEwMjI6NzgwOF0gdHlwZSAwMCBjbGFzcyAweDBjMDMyMApbICAg
IDAuMjkxNDg2XSBwY2kgMDAwMDowMDoxNi4yOiByZWcgMHgxMDogW21lbSAweGYwMWNmMDAw
LTB4ZjAxY2YwZmZdClsgICAgMC4yOTE1NTFdIHBjaSAwMDAwOjAwOjE2LjI6IHN1cHBvcnRz
IEQxIEQyClsgICAgMC4yOTE1NTJdIHBjaSAwMDAwOjAwOjE2LjI6IFBNRSMgc3VwcG9ydGVk
IGZyb20gRDAgRDEgRDIgRDNob3QKWyAgICAwLjI5MTU1NF0gcGNpIDAwMDA6MDA6MTYuMjog
cG1lX3BvbGwgPSB0cnVlClsgICAgMC4yOTE1NTVdIHBjaSAwMDAwOjAwOjE2LjI6IGFmdGVy
IGRldmljZV9zZXRfd2FrZXVwX2NhcGFibGUoKQpbICAgIDAuMjkxNTU4XSBwY2kgMDAwMDow
MDoxNi4yOiBhZnRlciBwY2lfcG1lX2FjdGl2ZSgpClsgICAgMC4yOTE2ODRdIHBjaSAwMDAw
OjAwOjE4LjA6IFsxMDIyOjE0MDBdIHR5cGUgMDAgY2xhc3MgMHgwNjAwMDAKWyAgICAwLjI5
MTc0OF0gcGNpIDAwMDA6MDA6MTguMTogWzEwMjI6MTQwMV0gdHlwZSAwMCBjbGFzcyAweDA2
MDAwMApbICAgIDAuMjkxODA2XSBwY2kgMDAwMDowMDoxOC4yOiBbMTAyMjoxNDAyXSB0eXBl
IDAwIGNsYXNzIDB4MDYwMDAwClsgICAgMC4yOTE4NjZdIHBjaSAwMDAwOjAwOjE4LjM6IFsx
MDIyOjE0MDNdIHR5cGUgMDAgY2xhc3MgMHgwNjAwMDAKWyAgICAwLjI5MTk5OF0gcGNpIDAw
MDA6MDA6MTguNDogWzEwMjI6MTQwNF0gdHlwZSAwMCBjbGFzcyAweDA2MDAwMApbICAgIDAu
MjkyMDY0XSBwY2kgMDAwMDowMDoxOC41OiBbMTAyMjoxNDA1XSB0eXBlIDAwIGNsYXNzIDB4
MDYwMDAwClsgICAgMC4yOTIxMzZdIHBjaV9idXMgMDAwMDowMTogZXh0ZW5kZWQgY29uZmln
IHNwYWNlIG5vdCBhY2Nlc3NpYmxlClsgICAgMC4yOTIyMDBdIHBjaSAwMDAwOjAwOjE0LjQ6
IFBDSSBicmlkZ2UgdG8gW2J1cyAwMV0gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAwLjI5
MjIwOF0gcGNpIDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgwMDAwLTB4
MGNmNyB3aW5kb3ddIChzdWJ0cmFjdGl2ZSBkZWNvZGUpClsgICAgMC4yOTIyMTFdIHBjaSAw
MDAwOjAwOjE0LjQ6ICAgYnJpZGdlIHdpbmRvdyBbaW8gIDB4MGQwMC0weGZmZmYgd2luZG93
XSAoc3VidHJhY3RpdmUgZGVjb2RlKQpbICAgIDAuMjkyMjE0XSBwY2kgMDAwMDowMDoxNC40
OiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweDAwMGEwMDAwLTB4MDAwZGZmZmZdIChzdWJ0cmFj
dGl2ZSBkZWNvZGUpClsgICAgMC4yOTIyMTZdIHBjaSAwMDAwOjAwOjE0LjQ6ICAgYnJpZGdl
IHdpbmRvdyBbbWVtIDB4ODAwMDAwMDAtMHhmZmZmZmZmZl0gKHN1YnRyYWN0aXZlIGRlY29k
ZSkKWyAgICAwLjI5MjI2MV0gcGNpIDAwMDA6MDA6MTUuMDogUENJIGJyaWRnZSB0byBbYnVz
IDAyXQpbICAgIDAuMjkyMzQ1XSBwY2kgMDAwMDowMzowMC4wOiBbMWIyMToxMDQyXSB0eXBl
IDAwIGNsYXNzIDB4MGMwMzMwClsgICAgMC4yOTIzODJdIHBjaSAwMDAwOjAzOjAwLjA6IHJl
ZyAweDEwOiBbbWVtIDB4ZjAwMDAwMDAtMHhmMDAwN2ZmZiA2NGJpdF0KWyAgICAwLjI5MjU1
OV0gcGNpIDAwMDA6MDM6MDAuMDogUE1FIyBzdXBwb3J0ZWQgZnJvbSBEM2hvdCBEM2NvbGQK
WyAgICAwLjI5MjU2MF0gcGNpIDAwMDA6MDM6MDAuMDogcG1lX3BvbGwgPSB0cnVlClsgICAg
MC4yOTI1NjJdIHBjaSAwMDAwOjAzOjAwLjA6IGFmdGVyIGRldmljZV9zZXRfd2FrZXVwX2Nh
cGFibGUoKQpbICAgIDAuMjkyNTY3XSBwY2kgMDAwMDowMzowMC4wOiBhZnRlciBwY2lfcG1l
X2FjdGl2ZSgpClsgICAgMC4yOTI2MDVdIHBjaSAwMDAwOjAzOjAwLjA6IDIuMDAwIEdiL3Mg
YXZhaWxhYmxlIFBDSWUgYmFuZHdpZHRoLCBsaW1pdGVkIGJ5IDIuNSBHVC9zIFBDSWUgeDEg
bGluayBhdCAwMDAwOjAwOjE1LjEgKGNhcGFibGUgb2YgNC4wMDAgR2IvcyB3aXRoIDUuMCBH
VC9zIFBDSWUgeDEgbGluaykKWyAgICAwLjMwNTIxNV0gcGNpIDAwMDA6MDA6MTUuMTogUENJ
IGJyaWRnZSB0byBbYnVzIDAzXQpbICAgIDAuMzA1MjI4XSBwY2kgMDAwMDowMDoxNS4xOiAg
IGJyaWRnZSB3aW5kb3cgW21lbSAweGYwMDAwMDAwLTB4ZjAwZmZmZmZdClsgICAgMC4zMDUy
MzddIHBjaSAwMDAwOjAwOjE1LjI6IGJyaWRnZSBjb25maWd1cmF0aW9uIGludmFsaWQgKFti
dXMgMDAtMDBdKSwgcmVjb25maWd1cmluZwpbICAgIDAuMzA1MzU5XSBwY2kgMDAwMDowNDow
MC4wOiBbMTBlYzo4MTY4XSB0eXBlIDAwIGNsYXNzIDB4MDIwMDAwClsgICAgMC4zMDUzNzhd
IHBjaSAwMDAwOjA0OjAwLjA6IHJlZyAweDEwOiBbaW8gIDB4MDAwMC0weDAwZmZdClsgICAg
MC4zMDUzOTldIHBjaSAwMDAwOjA0OjAwLjA6IHJlZyAweDE4OiBbbWVtIDB4MDAwMDAwMDAt
MHgwMDAwMGZmZiA2NGJpdCBwcmVmXQpbICAgIDAuMzA1NDEzXSBwY2kgMDAwMDowNDowMC4w
OiByZWcgMHgyMDogW21lbSAweDAwMDAwMDAwLTB4MDAwMDNmZmYgNjRiaXQgcHJlZl0KWyAg
ICAwLjMwNTUyMF0gcGNpIDAwMDA6MDQ6MDAuMDogc3VwcG9ydHMgRDEgRDIKWyAgICAwLjMw
NTUyMl0gcGNpIDAwMDA6MDQ6MDAuMDogUE1FIyBzdXBwb3J0ZWQgZnJvbSBEMCBEMSBEMiBE
M2hvdCBEM2NvbGQKWyAgICAwLjMwNTUyNF0gcGNpIDAwMDA6MDQ6MDAuMDogcG1lX3BvbGwg
PSB0cnVlClsgICAgMC4zMDU1MjVdIHBjaSAwMDAwOjA0OjAwLjA6IGFmdGVyIGRldmljZV9z
ZXRfd2FrZXVwX2NhcGFibGUoKQpbICAgIDAuMzA1NTI5XSBwY2kgMDAwMDowNDowMC4wOiBh
ZnRlciBwY2lfcG1lX2FjdGl2ZSgpClsgICAgMC4zMjEyMTddIHBjaSAwMDAwOjAwOjE1LjI6
IFBDSSBicmlkZ2UgdG8gW2J1cyAwNC1mZl0KWyAgICAwLjMyMTIyOF0gcGNpIDAwMDA6MDA6
MTUuMjogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgwMDAwLTB4MGZmZl0KWyAgICAwLjMyMTIz
Ml0gcGNpIDAwMDA6MDA6MTUuMjogICBicmlkZ2Ugd2luZG93IFttZW0gMHgwMDAwMDAwMC0w
eDAwMGZmZmZmXQpbICAgIDAuMzIxMjM3XSBwY2kgMDAwMDowMDoxNS4yOiAgIGJyaWRnZSB3
aW5kb3cgW21lbSAweDAwMDAwMDAwLTB4MDAwZmZmZmYgNjRiaXQgcHJlZl0KWyAgICAwLjMy
MTI0MF0gcGNpX2J1cyAwMDAwOjA0OiBidXNuX3JlczogW2J1cyAwNC1mZl0gZW5kIGlzIHVw
ZGF0ZWQgdG8gMDQKWyAgICAwLjMyMTc1OV0gQUNQSTogUENJOiBJbnRlcnJ1cHQgbGluayBJ
TlRBIGNvbmZpZ3VyZWQgZm9yIElSUSAwClsgICAgMC4zMjE4NTVdIEFDUEk6IFBDSTogSW50
ZXJydXB0IGxpbmsgSU5UQiBjb25maWd1cmVkIGZvciBJUlEgMApbICAgIDAuMzIxOTQ5XSBB
Q1BJOiBQQ0k6IEludGVycnVwdCBsaW5rIElOVEMgY29uZmlndXJlZCBmb3IgSVJRIDAKWyAg
ICAwLjMyMjA0MV0gQUNQSTogUENJOiBJbnRlcnJ1cHQgbGluayBJTlREIGNvbmZpZ3VyZWQg
Zm9yIElSUSAwClsgICAgMC4zMjIxMzRdIEFDUEk6IFBDSTogSW50ZXJydXB0IGxpbmsgSU5U
RSBjb25maWd1cmVkIGZvciBJUlEgMApbICAgIDAuMzIyMjMyXSBBQ1BJOiBQQ0k6IEludGVy
cnVwdCBsaW5rIElOVEYgY29uZmlndXJlZCBmb3IgSVJRIDAKWyAgICAwLjMyMjMyNV0gQUNQ
STogUENJOiBJbnRlcnJ1cHQgbGluayBJTlRHIGNvbmZpZ3VyZWQgZm9yIElSUSAwClsgICAg
MC4zMjI0MThdIEFDUEk6IFBDSTogSW50ZXJydXB0IGxpbmsgSU5USCBjb25maWd1cmVkIGZv
ciBJUlEgMApbICAgIDAuMzIyNjQwXSBpb21tdTogRGVmYXVsdCBkb21haW4gdHlwZTogVHJh
bnNsYXRlZCAKWyAgICAwLjMyMjY0Ml0gaW9tbXU6IERNQSBkb21haW4gVExCIGludmFsaWRh
dGlvbiBwb2xpY3k6IGxhenkgbW9kZSAKWyAgICAwLjMyMjgzOV0gU0NTSSBzdWJzeXN0ZW0g
aW5pdGlhbGl6ZWQKWyAgICAwLjMyNTE1OV0gbGliYXRhIHZlcnNpb24gMy4wMCBsb2FkZWQu
ClsgICAgMC4zMjUxNTldIEFDUEk6IGJ1cyB0eXBlIFVTQiByZWdpc3RlcmVkClsgICAgMC4z
MjUxNTldIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiZnMK
WyAgICAwLjMyNTE1OV0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZl
ciBodWIKWyAgICAwLjMyNTE1OV0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgZGV2aWNlIGRy
aXZlciB1c2IKWyAgICAwLjMyNTE1OV0gUENJOiBVc2luZyBBQ1BJIGZvciBJUlEgcm91dGlu
ZwpbICAgIDAuMzI1MTU5XSBQQ0k6IHBjaV9jYWNoZV9saW5lX3NpemUgc2V0IHRvIDY0IGJ5
dGVzClsgICAgMC4zMjUxNTldIGU4MjA6IHJlc2VydmUgUkFNIGJ1ZmZlciBbbWVtIDB4MDAw
OWZjMDAtMHgwMDA5ZmZmZl0KWyAgICAwLjMyNTE1OV0gZTgyMDogcmVzZXJ2ZSBSQU0gYnVm
ZmVyIFttZW0gMHg1ZmU0ZDAwMC0weDVmZmZmZmZmXQpbICAgIDAuMzI1MTU5XSBlODIwOiBy
ZXNlcnZlIFJBTSBidWZmZXIgW21lbSAweDE3ZjAwMDAwMC0weDE3ZmZmZmZmZl0KWyAgICAw
LjMyNTE1OV0gaHBldDA6IGF0IE1NSU8gMHhmZWQwMDAwMCwgSVJRcyAyLCA4LCAwClsgICAg
MC4zMjUxNTldIGhwZXQwOiAzIGNvbXBhcmF0b3JzLCAzMi1iaXQgMTQuMzE4MTgwIE1IeiBj
b3VudGVyClsgICAgMC4zMzAyMzJdIGNsb2Nrc291cmNlOiBTd2l0Y2hlZCB0byBjbG9ja3Nv
dXJjZSB0c2MtZWFybHkKWyAgICAwLjMzMDQ3NF0gVkZTOiBEaXNrIHF1b3RhcyBkcXVvdF82
LjYuMApbICAgIDAuMzMwNTAwXSBWRlM6IERxdW90LWNhY2hlIGhhc2ggdGFibGUgZW50cmll
czogNTEyIChvcmRlciAwLCA0MDk2IGJ5dGVzKQpbICAgIDAuMzMwNjIwXSBwbnA6IFBuUCBB
Q1BJIGluaXQKWyAgICAwLjMzMDk2OV0gc3lzdGVtIDAwOjAwOiBbbWVtIDB4ZmVjMTAwMDIt
MHhmZWMxMTAwMV0gY291bGQgbm90IGJlIHJlc2VydmVkClsgICAgMC4zMzEzNjJdIHBucDog
UG5QIEFDUEk6IGZvdW5kIDIgZGV2aWNlcwpbICAgIDAuMzM3OTg2XSBjbG9ja3NvdXJjZTog
YWNwaV9wbTogbWFzazogMHhmZmZmZmYgbWF4X2N5Y2xlczogMHhmZmZmZmYsIG1heF9pZGxl
X25zOiAyMDg1NzAxMDI0IG5zClsgICAgMC4zMzgxODZdIE5FVDogUmVnaXN0ZXJlZCBQRl9J
TkVUIHByb3RvY29sIGZhbWlseQpbICAgIDAuMzM4MzU5XSBJUCBpZGVudHMgaGFzaCB0YWJs
ZSBlbnRyaWVzOiA2NTUzNiAob3JkZXI6IDcsIDUyNDI4OCBieXRlcywgbGluZWFyKQpbICAg
IDAuMzQwMTUzXSB0Y3BfbGlzdGVuX3BvcnRhZGRyX2hhc2ggaGFzaCB0YWJsZSBlbnRyaWVz
OiAyMDQ4IChvcmRlcjogMywgMzI3NjggYnl0ZXMsIGxpbmVhcikKWyAgICAwLjM0MDE2OV0g
VGFibGUtcGVydHVyYiBoYXNoIHRhYmxlIGVudHJpZXM6IDY1NTM2IChvcmRlcjogNiwgMjYy
MTQ0IGJ5dGVzLCBsaW5lYXIpClsgICAgMC4zNDAxNzddIFRDUCBlc3RhYmxpc2hlZCBoYXNo
IHRhYmxlIGVudHJpZXM6IDMyNzY4IChvcmRlcjogNiwgMjYyMTQ0IGJ5dGVzLCBsaW5lYXIp
ClsgICAgMC4zNDAyNTFdIFRDUCBiaW5kIGhhc2ggdGFibGUgZW50cmllczogMzI3NjggKG9y
ZGVyOiA4LCAxMDQ4NTc2IGJ5dGVzLCBsaW5lYXIpClsgICAgMC4zNDA2MTRdIFRDUDogSGFz
aCB0YWJsZXMgY29uZmlndXJlZCAoZXN0YWJsaXNoZWQgMzI3NjggYmluZCAzMjc2OCkKWyAg
ICAwLjM0MDY4NV0gVURQIGhhc2ggdGFibGUgZW50cmllczogMjA0OCAob3JkZXI6IDQsIDY1
NTM2IGJ5dGVzLCBsaW5lYXIpClsgICAgMC4zNDA3MTFdIFVEUC1MaXRlIGhhc2ggdGFibGUg
ZW50cmllczogMjA0OCAob3JkZXI6IDQsIDY1NTM2IGJ5dGVzLCBsaW5lYXIpClsgICAgMC4z
NDA4MjddIE5FVDogUmVnaXN0ZXJlZCBQRl9VTklYL1BGX0xPQ0FMIHByb3RvY29sIGZhbWls
eQpbICAgIDAuMzQwODYyXSBwY2kgMDAwMDowMDoxNS4yOiBCQVIgMTU6IGFzc2lnbmVkIFtt
ZW0gMHg4MDAwMDAwMC0weDgwMGZmZmZmIDY0Yml0IHByZWZdClsgICAgMC4zNDA4NjldIHBj
aSAwMDAwOjAwOjE1LjI6IEJBUiAxMzogYXNzaWduZWQgW2lvICAweDIwMDAtMHgyZmZmXQpb
ICAgIDAuMzQwODc0XSBwY2kgMDAwMDowMDoxNC40OiBQQ0kgYnJpZGdlIHRvIFtidXMgMDFd
ClsgICAgMC4zNDA4ODZdIHBjaSAwMDAwOjAwOjE1LjA6IFBDSSBicmlkZ2UgdG8gW2J1cyAw
Ml0KWyAgICAwLjM0MDg5NF0gcGNpIDAwMDA6MDA6MTUuMTogUENJIGJyaWRnZSB0byBbYnVz
IDAzXQpbICAgIDAuMzQwODk4XSBwY2kgMDAwMDowMDoxNS4xOiAgIGJyaWRnZSB3aW5kb3cg
W21lbSAweGYwMDAwMDAwLTB4ZjAwZmZmZmZdClsgICAgMC4zNDA5MDhdIHBjaSAwMDAwOjA0
OjAwLjA6IEJBUiA0OiBhc3NpZ25lZCBbbWVtIDB4ODAwMDAwMDAtMHg4MDAwM2ZmZiA2NGJp
dCBwcmVmXQpbICAgIDAuMzQwOTIyXSBwY2kgMDAwMDowNDowMC4wOiBCQVIgMjogYXNzaWdu
ZWQgW21lbSAweDgwMDA0MDAwLTB4ODAwMDRmZmYgNjRiaXQgcHJlZl0KWyAgICAwLjM0MDkz
NF0gcGNpIDAwMDA6MDQ6MDAuMDogQkFSIDA6IGFzc2lnbmVkIFtpbyAgMHgyMDAwLTB4MjBm
Zl0KWyAgICAwLjM0MDk0MV0gcGNpIDAwMDA6MDA6MTUuMjogUENJIGJyaWRnZSB0byBbYnVz
IDA0XQpbICAgIDAuMzQwOTQzXSBwY2kgMDAwMDowMDoxNS4yOiAgIGJyaWRnZSB3aW5kb3cg
W2lvICAweDIwMDAtMHgyZmZmXQpbICAgIDAuMzQwOTQ5XSBwY2kgMDAwMDowMDoxNS4yOiAg
IGJyaWRnZSB3aW5kb3cgW21lbSAweDgwMDAwMDAwLTB4ODAwZmZmZmYgNjRiaXQgcHJlZl0K
WyAgICAwLjM0MDk1Nl0gcGNpX2J1cyAwMDAwOjAwOiByZXNvdXJjZSA0IFtpbyAgMHgwMDAw
LTB4MGNmNyB3aW5kb3ddClsgICAgMC4zNDA5NThdIHBjaV9idXMgMDAwMDowMDogcmVzb3Vy
Y2UgNSBbaW8gIDB4MGQwMC0weGZmZmYgd2luZG93XQpbICAgIDAuMzQwOTYwXSBwY2lfYnVz
IDAwMDA6MDA6IHJlc291cmNlIDYgW21lbSAweDAwMGEwMDAwLTB4MDAwZGZmZmZdClsgICAg
MC4zNDA5NjRdIHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgNyBbbWVtIDB4ODAwMDAwMDAt
MHhmZmZmZmZmZl0KWyAgICAwLjM0MDk2Nl0gcGNpX2J1cyAwMDAwOjAxOiByZXNvdXJjZSA0
IFtpbyAgMHgwMDAwLTB4MGNmNyB3aW5kb3ddClsgICAgMC4zNDA5NjhdIHBjaV9idXMgMDAw
MDowMTogcmVzb3VyY2UgNSBbaW8gIDB4MGQwMC0weGZmZmYgd2luZG93XQpbICAgIDAuMzQw
OTcwXSBwY2lfYnVzIDAwMDA6MDE6IHJlc291cmNlIDYgW21lbSAweDAwMGEwMDAwLTB4MDAw
ZGZmZmZdClsgICAgMC4zNDA5NzNdIHBjaV9idXMgMDAwMDowMTogcmVzb3VyY2UgNyBbbWVt
IDB4ODAwMDAwMDAtMHhmZmZmZmZmZl0KWyAgICAwLjM0MDk3NV0gcGNpX2J1cyAwMDAwOjAz
OiByZXNvdXJjZSAxIFttZW0gMHhmMDAwMDAwMC0weGYwMGZmZmZmXQpbICAgIDAuMzQwOTc3
XSBwY2lfYnVzIDAwMDA6MDQ6IHJlc291cmNlIDAgW2lvICAweDIwMDAtMHgyZmZmXQpbICAg
IDAuMzQwOTc5XSBwY2lfYnVzIDAwMDA6MDQ6IHJlc291cmNlIDIgW21lbSAweDgwMDAwMDAw
LTB4ODAwZmZmZmYgNjRiaXQgcHJlZl0KWyAgICAwLjM0MTEwNF0gcGNpIDAwMDA6MDA6MDEu
MTogRDAgcG93ZXIgc3RhdGUgZGVwZW5kcyBvbiAwMDAwOjAwOjAxLjAKWyAgICAwLjM0MTQx
Ml0gcGNpIDAwMDA6MDA6MTIuMDogQU1EIFVTQiBkZXZpY2UKWyAgICAwLjM0MTQzNl0gcGNp
IDAwMDA6MDA6MTIuMDogQU1EIFVTQiBvaGNpIGhhbmRvZmYKWyAgICAwLjM0MTgzM10gcGNp
IDAwMDA6MDA6MTIuMjogQU1EIFVTQiBkZXZpY2UKWyAgICAwLjM0MTg0N10gcGNpIDAwMDA6
MDA6MTIuMjogQU1EIFVTQiBlaGNpIGhhbmRvZmYKWyAgICAwLjM0MjAzMl0gcGNpIDAwMDA6
MDA6MTIuMjogUE1FIyBkb2VzIG5vdCB3b3JrIHVuZGVyIEQzLCBkaXNhYmxpbmcgaXQKWyAg
ICAwLjM0MjIzNV0gcGNpIDAwMDA6MDA6MTMuMDogQU1EIFVTQiBkZXZpY2UKWyAgICAwLjM0
MjI1MF0gcGNpIDAwMDA6MDA6MTMuMDogQU1EIFVTQiBvaGNpIGhhbmRvZmYKWyAgICAwLjM0
MjYzM10gcGNpIDAwMDA6MDA6MTMuMjogQU1EIFVTQiBkZXZpY2UKWyAgICAwLjM0MjY1MF0g
cGNpIDAwMDA6MDA6MTMuMjogQU1EIFVTQiBlaGNpIGhhbmRvZmYKWyAgICAwLjM0MjgzMF0g
cGNpIDAwMDA6MDA6MTMuMjogUE1FIyBkb2VzIG5vdCB3b3JrIHVuZGVyIEQzLCBkaXNhYmxp
bmcgaXQKWyAgICAwLjM0MzA0MV0gcGNpIDAwMDA6MDA6MTQuNTogQU1EIFVTQiBkZXZpY2UK
WyAgICAwLjM0MzA1N10gcGNpIDAwMDA6MDA6MTQuNTogQU1EIFVTQiBvaGNpIGhhbmRvZmYK
WyAgICAwLjM0MzU3N10gcGNpIDAwMDA6MDA6MTYuMDogQU1EIFVTQiBkZXZpY2UKWyAgICAw
LjM0MzU5OF0gcGNpIDAwMDA6MDA6MTYuMDogQU1EIFVTQiBvaGNpIGhhbmRvZmYKWyAgICAw
LjM0Mzk5M10gcGNpIDAwMDA6MDA6MTYuMjogQU1EIFVTQiBkZXZpY2UKWyAgICAwLjM0NDAw
OF0gcGNpIDAwMDA6MDA6MTYuMjogQU1EIFVTQiBlaGNpIGhhbmRvZmYKWyAgICAwLjM0NDIw
Ml0gcGNpIDAwMDA6MDA6MTYuMjogUE1FIyBkb2VzIG5vdCB3b3JrIHVuZGVyIEQzLCBkaXNh
YmxpbmcgaXQKWyAgICAwLjM0NDUxMF0gcGNpIDAwMDA6MDM6MDAuMDogQU1EIFVTQiB4aGNp
IGhhbmRvZmYKWyAgICAwLjM0NDU3N10gUENJOiBDTFMgNjQgYnl0ZXMsIGRlZmF1bHQgNjQK
WyAgICAwLjM0NDY3OF0gcGNpIDAwMDA6MDA6MDAuMjogQU1ELVZpOiBBcHBseWluZyBlcnJh
dHVtIDc0NiB3b3JrYXJvdW5kClsgICAgMC4zNDQ3OTBdIHBjaSAwMDAwOjAwOjAxLjA6IEFk
ZGluZyB0byBpb21tdSBncm91cCAwClsgICAgMC4zNDQ4MTNdIHBjaSAwMDAwOjAwOjAxLjE6
IEFkZGluZyB0byBpb21tdSBncm91cCAwClsgICAgMC4zNDQ4NDRdIHBjaSAwMDAwOjAwOjEx
LjA6IEFkZGluZyB0byBpb21tdSBncm91cCAxClsgICAgMC4zNDQ4ODVdIHBjaSAwMDAwOjAw
OjEyLjA6IEFkZGluZyB0byBpb21tdSBncm91cCAyClsgICAgMC4zNDQ5MDVdIHBjaSAwMDAw
OjAwOjEyLjI6IEFkZGluZyB0byBpb21tdSBncm91cCAyClsgICAgMC4zNDQ5NDldIHBjaSAw
MDAwOjAwOjEzLjA6IEFkZGluZyB0byBpb21tdSBncm91cCAzClsgICAgMC4zNDQ5NzBdIHBj
aSAwMDAwOjAwOjEzLjI6IEFkZGluZyB0byBpb21tdSBncm91cCAzClsgICAgMC4zNDUwMjBd
IHBjaSAwMDAwOjAwOjE0LjA6IEFkZGluZyB0byBpb21tdSBncm91cCA0ClsgICAgMC4zNDUw
NDNdIHBjaSAwMDAwOjAwOjE0LjI6IEFkZGluZyB0byBpb21tdSBncm91cCA0ClsgICAgMC4z
NDUwNjZdIHBjaSAwMDAwOjAwOjE0LjM6IEFkZGluZyB0byBpb21tdSBncm91cCA0ClsgICAg
MC4zNDUxMDJdIHBjaSAwMDAwOjAwOjE0LjQ6IEFkZGluZyB0byBpb21tdSBncm91cCA1Clsg
ICAgMC4zNDUxMjhdIHBjaSAwMDAwOjAwOjE0LjU6IEFkZGluZyB0byBpb21tdSBncm91cCA2
ClsgICAgMC4zNDUxNzJdIHBjaSAwMDAwOjAwOjE1LjA6IEFkZGluZyB0byBpb21tdSBncm91
cCA3ClsgICAgMC4zNDUxOThdIHBjaSAwMDAwOjAwOjE1LjE6IEFkZGluZyB0byBpb21tdSBn
cm91cCA3ClsgICAgMC4zNDUyMTldIHBjaSAwMDAwOjAwOjE1LjI6IEFkZGluZyB0byBpb21t
dSBncm91cCA3ClsgICAgMC4zNDUyNjFdIHBjaSAwMDAwOjAwOjE2LjA6IEFkZGluZyB0byBp
b21tdSBncm91cCA4ClsgICAgMC4zNDUyODVdIHBjaSAwMDAwOjAwOjE2LjI6IEFkZGluZyB0
byBpb21tdSBncm91cCA4ClsgICAgMC4zNDUzNDldIHBjaSAwMDAwOjAwOjE4LjA6IEFkZGlu
ZyB0byBpb21tdSBncm91cCA5ClsgICAgMC4zNDUzNzRdIHBjaSAwMDAwOjAwOjE4LjE6IEFk
ZGluZyB0byBpb21tdSBncm91cCA5ClsgICAgMC4zNDU0MDBdIHBjaSAwMDAwOjAwOjE4LjI6
IEFkZGluZyB0byBpb21tdSBncm91cCA5ClsgICAgMC4zNDU0MjhdIHBjaSAwMDAwOjAwOjE4
LjM6IEFkZGluZyB0byBpb21tdSBncm91cCA5ClsgICAgMC4zNDU0NTFdIHBjaSAwMDAwOjAw
OjE4LjQ6IEFkZGluZyB0byBpb21tdSBncm91cCA5ClsgICAgMC4zNDU0NzldIHBjaSAwMDAw
OjAwOjE4LjU6IEFkZGluZyB0byBpb21tdSBncm91cCA5ClsgICAgMC4zNDU0ODldIHBjaSAw
MDAwOjAzOjAwLjA6IEFkZGluZyB0byBpb21tdSBncm91cCA3ClsgICAgMC4zNDU0OTldIHBj
aSAwMDAwOjA0OjAwLjA6IEFkZGluZyB0byBpb21tdSBncm91cCA3ClsgICAgMC4zNDgyNjVd
IHBjaSAwMDAwOjAwOjAwLjI6IEFNRC1WaTogRm91bmQgSU9NTVUgY2FwIDB4NDAKWyAgICAw
LjM0ODI3Ml0gQU1ELVZpOiBFeHRlbmRlZCBmZWF0dXJlcyAoMHg4MDAwMDA4NTMsIDB4MCk6
IFByZUYgUFBSIEdUIElBClsgICAgMC4zNDgyNzldIEFNRC1WaTogSW50ZXJydXB0IHJlbWFw
cGluZyBlbmFibGVkClsgICAgMC4zNDg0ODRdIFBDSS1ETUE6IFVzaW5nIHNvZnR3YXJlIGJv
dW5jZSBidWZmZXJpbmcgZm9yIElPIChTV0lPVExCKQpbICAgIDAuMzQ4NDg2XSBzb2Z0d2Fy
ZSBJTyBUTEI6IG1hcHBlZCBbbWVtIDB4MDAwMDAwMDA1YmU0ZDAwMC0weDAwMDAwMDAwNWZl
NGQwMDBdICg2NE1CKQpbICAgIDAuMzQ4NTQ3XSBMVlQgb2Zmc2V0IDAgYXNzaWduZWQgZm9y
IHZlY3RvciAweDQwMApbICAgIDAuMzQ4NTk1XSBwZXJmOiBBTUQgSUJTIGRldGVjdGVkICgw
eDAwMDAwMGZmKQpbICAgIDAuMzQ4NjAyXSBhbWRfdW5jb3JlOiA0ICBhbWRfbmIgY291bnRl
cnMgZGV0ZWN0ZWQKWyAgICAwLjM1MjIyNV0gd29ya2luZ3NldDogdGltZXN0YW1wX2JpdHM9
MzcgbWF4X29yZGVyPTIwIGJ1Y2tldF9vcmRlcj0wClsgICAgMC4zNTIyNTldIHpidWQ6IGxv
YWRlZApbICAgIDAuMzUyNzUyXSBORVQ6IFJlZ2lzdGVyZWQgUEZfQUxHIHByb3RvY29sIGZh
bWlseQpbICAgIDAuMzUyNzU4XSBLZXkgdHlwZSBhc3ltbWV0cmljIHJlZ2lzdGVyZWQKWyAg
ICAwLjM1Mjc1OV0gQXN5bW1ldHJpYyBrZXkgcGFyc2VyICd4NTA5JyByZWdpc3RlcmVkClsg
ICAgMC4zNTMxMTNdIGFsZzogc2VsZi10ZXN0cyBkaXNhYmxlZApbICAgIDAuMzUzMjA5XSBC
bG9jayBsYXllciBTQ1NJIGdlbmVyaWMgKGJzZykgZHJpdmVyIHZlcnNpb24gMC40IGxvYWRl
ZCAobWFqb3IgMjUxKQpbICAgIDAuMzUzMjYzXSBpbyBzY2hlZHVsZXIgbXEtZGVhZGxpbmUg
cmVnaXN0ZXJlZApbICAgIDAuMzUzMjY1XSBpbyBzY2hlZHVsZXIga3liZXIgcmVnaXN0ZXJl
ZApbICAgIDAuMzU0NDc5XSBwY2llcG9ydCAwMDAwOjAwOjE1LjA6IFBNRTogU2lnbmFsaW5n
IHdpdGggSVJRIDI1ClsgICAgMC4zNTQ2NDRdIHBjaWVwb3J0IDAwMDA6MDA6MTUuMTogUE1F
OiBTaWduYWxpbmcgd2l0aCBJUlEgMjYKWyAgICAwLjM1NDcxMl0gcGNpZXBvcnQgMDAwMDow
MDoxNS4yOiBlbmFibGluZyBkZXZpY2UgKDAwMDAgLT4gMDAwMykKWyAgICAwLjM1NDk0MV0g
cGNpZXBvcnQgMDAwMDowMDoxNS4yOiBQTUU6IFNpZ25hbGluZyB3aXRoIElSUSAyNwpbICAg
IDAuMzU1MjExXSBpbnB1dDogUG93ZXIgQnV0dG9uIGFzIC9kZXZpY2VzL0xOWFNZU1RNOjAw
L0xOWFBXUkJOOjAwL2lucHV0L2lucHV0MApbICAgIDAuMzU1Mjk4XSBBQ1BJOiBidXR0b246
IFBvd2VyIEJ1dHRvbiBbUFdSRl0KWyAgICAwLjM1NTM2M10gQUNQSTogXF9TQl8uUDAwMDog
Rm91bmQgMiBpZGxlIHN0YXRlcwpbICAgIDAuMzU1NDk1XSBBQ1BJOiBcX1NCXy5QMDAxOiBG
b3VuZCAyIGlkbGUgc3RhdGVzClsgICAgMC4zNTY0NDJdIHRoZXJtYWwgTE5YVEhFUk06MDA6
IHJlZ2lzdGVyZWQgYXMgdGhlcm1hbF96b25lMApbICAgIDAuMzU2NDQ1XSBBQ1BJOiB0aGVy
bWFsOiBUaGVybWFsIFpvbmUgW1RaMDBdICgyMSBDKQpbICAgIDAuMzU2Nzk3XSBOb24tdm9s
YXRpbGUgbWVtb3J5IGRyaXZlciB2MS4zClsgICAgMC4zNTY4NjRdIEFNRC1WaTogQU1EIElP
TU1VdjIgbG9hZGVkIGFuZCBpbml0aWFsaXplZApbICAgIDAuMzU3MDY1XSBhaGNpIDAwMDA6
MDA6MTEuMDogdmVyc2lvbiAzLjAKWyAgICAwLjM1NzM1Nl0gYWhjaSAwMDAwOjAwOjExLjA6
IEFIQ0kgMDAwMS4wMzAwIDMyIHNsb3RzIDggcG9ydHMgNiBHYnBzIDB4NDAgaW1wbCBTQVRB
IG1vZGUKWyAgICAwLjM1NzM2MF0gYWhjaSAwMDAwOjAwOjExLjA6IGZsYWdzOiA2NGJpdCBu
Y3Egc250ZiBpbGNrIGxlZCBjbG8gcGlvIApbICAgIDAuMzU4Nzg1XSBzY3NpIGhvc3QwOiBh
aGNpClsgICAgMC4zNTg5OTldIHNjc2kgaG9zdDE6IGFoY2kKWyAgICAwLjM1OTIxNV0gc2Nz
aSBob3N0MjogYWhjaQpbICAgIDAuMzU5NDI0XSBzY3NpIGhvc3QzOiBhaGNpClsgICAgMC4z
NTk2MjRdIHNjc2kgaG9zdDQ6IGFoY2kKWyAgICAwLjM1OTgzNl0gc2NzaSBob3N0NTogYWhj
aQpbICAgIDAuMzYwMDQ4XSBzY3NpIGhvc3Q2OiBhaGNpClsgICAgMC4zNjAyNDNdIHNjc2kg
aG9zdDc6IGFoY2kKWyAgICAwLjM2MDMzNl0gYXRhIHBvcnQxOiBEVU1NWQpbICAgIDAuMzYw
MzM4XSBhdGEgcG9ydDI6IERVTU1ZClsgICAgMC4zNjAzMzldIGF0YSBwb3J0MzogRFVNTVkK
WyAgICAwLjM2MDM0MV0gYXRhIHBvcnQ0OiBEVU1NWQpbICAgIDAuMzYwMzQyXSBhdGEgcG9y
dDU6IERVTU1ZClsgICAgMC4zNjAzNDNdIGF0YSBwb3J0NjogRFVNTVkKWyAgICAwLjM2MDM0
NV0gYXRhIHBvcnQ3OiBTQVRBIG1heCBVRE1BLzEzMyBhYmFyIG0yMDQ4QDB4ZjAxY2MwMDAg
cG9ydCAweGYwMWNjNDAwIGlycSAxOQpbICAgIDAuMzYwMzQ3XSBhdGEgcG9ydDg6IERVTU1Z
ClsgICAgMC4zNjA0MzFdIEFDUEk6IGJ1cyB0eXBlIGRybV9jb25uZWN0b3IgcmVnaXN0ZXJl
ZApbICAgIDAuMzYwNzM0XSBpODA0MjogUE5QOiBObyBQUy8yIGNvbnRyb2xsZXIgZm91bmQu
ClsgICAgMC4zNjA3MzZdIGk4MDQyOiBQcm9iaW5nIHBvcnRzIGRpcmVjdGx5LgpbICAgIDAu
MzYzMDc4XSBzZXJpbzogaTgwNDIgS0JEIHBvcnQgYXQgMHg2MCwweDY0IGlycSAxClsgICAg
MC4zNjMxODVdIHNlcmlvOiBpODA0MiBBVVggcG9ydCBhdCAweDYwLDB4NjQgaXJxIDEyClsg
ICAgMC4zNjMzNDddIG1vdXNlZGV2OiBQUy8yIG1vdXNlIGRldmljZSBjb21tb24gZm9yIGFs
bCBtaWNlClsgICAgMC4zNjM0MjBdIHJ0Y19jbW9zIDAwOjAxOiBSVEMgY2FuIHdha2UgZnJv
bSBTNApbICAgIDAuMzYzOTI0XSBydGNfY21vcyAwMDowMTogcmVnaXN0ZXJlZCBhcyBydGMw
ClsgICAgMC4zNjM5NTBdIHJ0Y19jbW9zIDAwOjAxOiBzZXR0aW5nIHN5c3RlbSBjbG9jayB0
byAyMDIzLTA0LTE3VDE3OjI0OjA5IFVUQyAoMTY4MTc1MjI0OSkKWyAgICAwLjM2NDAxMV0g
cnRjX2Ntb3MgMDA6MDE6IGFsYXJtcyB1cCB0byBvbmUgZGF5LCB5M2ssIDExNCBieXRlcyBu
dnJhbSwgaHBldCBpcnFzClsgICAgMC4zNjQwNDhdIGRldmljZS1tYXBwZXI6IHVldmVudDog
dmVyc2lvbiAxLjAuMwpbICAgIDAuMzY0MTM2XSBkZXZpY2UtbWFwcGVyOiBpb2N0bDogNC40
Ny4wLWlvY3RsICgyMDIyLTA3LTI4KSBpbml0aWFsaXNlZDogZG0tZGV2ZWxAcmVkaGF0LmNv
bQpbICAgIDAuMzY0MzAxXSBoaWQ6IHJhdyBISUQgZXZlbnRzIGRyaXZlciAoQykgSmlyaSBL
b3NpbmEKWyAgICAwLjM2NDMzNV0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNl
IGRyaXZlciB1c2JoaWQKWyAgICAwLjM2NDMzNl0gdXNiaGlkOiBVU0IgSElEIGNvcmUgZHJp
dmVyClsgICAgMC4zNjQ0MjddIEluaXRpYWxpemluZyBYRlJNIG5ldGxpbmsgc29ja2V0Clsg
ICAgMC4zNjQ0MzZdIE5FVDogUmVnaXN0ZXJlZCBQRl9QQUNLRVQgcHJvdG9jb2wgZmFtaWx5
ClsgICAgMC4zNjQ0MzhdIHg4Ni9wbTogZmFtaWx5IDB4MTUgY3B1IGRldGVjdGVkLCBNU1Ig
c2F2aW5nIGlzIG5lZWRlZCBkdXJpbmcgc3VzcGVuZGluZy4KWyAgICAwLjM2NDY1Ml0gbWlj
cm9jb2RlOiBDUFUxOiBwYXRjaF9sZXZlbD0weDA2MDAxMTFmClsgICAgMC4zNjQ2NTJdIG1p
Y3JvY29kZTogQ1BVMDogcGF0Y2hfbGV2ZWw9MHgwNjAwMTExZgpbICAgIDAuMzY0NjY0XSBt
aWNyb2NvZGU6IE1pY3JvY29kZSBVcGRhdGUgRHJpdmVyOiB2Mi4yLgpbICAgIDAuMzY0NjY4
XSBJUEkgc2hvcnRoYW5kIGJyb2FkY2FzdDogZW5hYmxlZApbICAgIDAuMzY0Njc4XSBBVlgg
dmVyc2lvbiBvZiBnY21fZW5jL2RlYyBlbmdhZ2VkLgpbICAgIDAuMzY0NzEwXSBBRVMgQ1RS
IG1vZGUgYnk4IG9wdGltaXphdGlvbiBlbmFibGVkClsgICAgMC4zNjg1NzhdIHNjaGVkX2Ns
b2NrOiBNYXJraW5nIHN0YWJsZSAoMjQ5OTcyNTkzLCAxMTcxNTE1MTcpLT4oMzY5ODY3NTE1
LCAtMjc0MzQwNSkKWyAgICAwLjM2ODgyM10gcmVnaXN0ZXJlZCB0YXNrc3RhdHMgdmVyc2lv
biAxClsgICAgMC4zNjkwODBdIHpzd2FwOiBsb2FkZWQgdXNpbmcgcG9vbCBsem8vemJ1ZApb
ICAgIDAuMzcxNjYzXSBhdGEgbGluazc6IFNBVEEgbGluayB1cCA2LjAgR2JwcyAoU1N0YXR1
cyAxMzMgU0NvbnRyb2wgMzAwKQpbICAgIDAuMzcxOTM5XSBhdGEgZGV2Ny4wOiBBVEEtOTog
U2FuRGlzayBTRFNTRFAwNjRHLCAyLjAuMCwgbWF4IFVETUEvMTMzClsgICAgMC4zNzE5NDJd
IGF0YSBkZXY3LjA6IDEyNTA0NTQyNCBzZWN0b3JzLCBtdWx0aSAxOiBMQkE0OCBOQ1EgKGRl
cHRoIDMyKQpbICAgIDAuMzcyMTQ4XSBhdGEgZGV2Ny4wOiBjb25maWd1cmVkIGZvciBVRE1B
LzEzMwpbICAgIDAuMzcyMjgzXSBzY3NpIDY6MDowOjA6IERpcmVjdC1BY2Nlc3MgICAgIEFU
QSAgICAgIFNhbkRpc2sgU0RTU0RQMDYgMCAgICBQUTogMCBBTlNJOiA1ClsgICAgMC4zNzM0
OTZdIGttZW1sZWFrOiBLZXJuZWwgbWVtb3J5IGxlYWsgZGV0ZWN0b3IgaW5pdGlhbGl6ZWQg
KG1lbSBwb29sIGF2YWlsYWJsZTogMTU2NzkpClsgICAgMC4zNzM1MDFdIGRlYnVnX3ZtX3Bn
dGFibGU6IFtkZWJ1Z192bV9wZ3RhYmxlICAgICAgICAgXTogVmFsaWRhdGluZyBhcmNoaXRl
Y3R1cmUgcGFnZSB0YWJsZSBoZWxwZXJzClsgICAgMC4zNzM5NTldIHNkIDY6MDowOjA6IFtz
ZGFdIDEyNTA0NTQyNCA1MTItYnl0ZSBsb2dpY2FsIGJsb2NrczogKDY0LjAgR0IvNTkuNiBH
aUIpClsgICAgMC4zNzM5ODJdIHNkIDY6MDowOjA6IFtzZGFdIFdyaXRlIFByb3RlY3QgaXMg
b2ZmClsgICAgMC4zNzM5ODddIHNkIDY6MDowOjA6IFtzZGFdIE1vZGUgU2Vuc2U6IDAwIDNh
IDAwIDAwClsgICAgMC4zNzQwMTVdIHNkIDY6MDowOjA6IFtzZGFdIFdyaXRlIGNhY2hlOiBl
bmFibGVkLCByZWFkIGNhY2hlOiBlbmFibGVkLCBkb2Vzbid0IHN1cHBvcnQgRFBPIG9yIEZV
QQpbICAgIDAuMzc0MDU5XSBzZCA2OjA6MDowOiBbc2RhXSBQcmVmZXJyZWQgbWluaW11bSBJ
L08gc2l6ZSA1MTIgYnl0ZXMKWyAgICAwLjM4MDQwN10ga21lbWxlYWs6IEF1dG9tYXRpYyBt
ZW1vcnkgc2Nhbm5pbmcgdGhyZWFkIHN0YXJ0ZWQKWyAgICAwLjM4MDYzM10gIHNkYTogc2Rh
MSBzZGEyIHNkYTMKWyAgICAwLjM4MTEyMF0gS2V5IHR5cGUgZW5jcnlwdGVkIHJlZ2lzdGVy
ZWQKWyAgICAwLjM4MTI0OF0gc2QgNjowOjA6MDogW3NkYV0gQXR0YWNoZWQgU0NTSSBkaXNr
ClsgICAgMC4zODQyOTRdIFBNOiAgIE1hZ2ljIG51bWJlcjogMzo0NDM6NDQxClsgICAgMC4z
OTY3MjFdIEVYVDQtZnMgKHNkYTMpOiBtb3VudGVkIGZpbGVzeXN0ZW0gZmUyOWUwZGMtNjMw
My00NDAxLTk4N2MtODQ3MmJjMWI5NTE2IHdpdGggb3JkZXJlZCBkYXRhIG1vZGUuIFF1b3Rh
IG1vZGU6IG5vbmUuClsgICAgMC4zOTY3NjZdIFZGUzogTW91bnRlZCByb290IChleHQ0IGZp
bGVzeXN0ZW0pIG9uIGRldmljZSA4OjMuClsgICAgMC4zOTg2NTBdIGRldnRtcGZzOiBtb3Vu
dGVkClsgICAgMC4zOTg2NjhdIEFmdGVyIGtlcm5lbF9pbml0X2ZyZWVhYmxlClsgICAgMC40
MDMxNzBdIEZyZWVpbmcgdW51c2VkIGtlcm5lbCBpbWFnZSAoaW5pdG1lbSkgbWVtb3J5OiAy
OTA4SwpbICAgIDAuNDE1NjM5XSBXcml0ZSBwcm90ZWN0aW5nIHRoZSBrZXJuZWwgcmVhZC1v
bmx5IGRhdGE6IDIwNDgwawpbICAgIDAuNDE1OTIyXSBGcmVlaW5nIHVudXNlZCBrZXJuZWwg
aW1hZ2UgKHJvZGF0YS9kYXRhIGdhcCkgbWVtb3J5OiA4MzZLClsgICAgMC40NTMwMjRdIHg4
Ni9tbTogQ2hlY2tlZCBXK1ggbWFwcGluZ3M6IHBhc3NlZCwgbm8gVytYIHBhZ2VzIGZvdW5k
LgpbICAgIDAuNDUzMDMwXSByb2RhdGFfdGVzdDogYWxsIHRlc3RzIHdlcmUgc3VjY2Vzc2Z1
bApbICAgIDAuNDUzMDMxXSBBZnRlciBtYXJrX3JlYWRvbmx5ClsgICAgMC40NTMwMzFdIEFm
dGVyIHB0aV9maW5hbGl6ZQpbICAgIDAuNDUzMDQ1XSByY3VfZW5kX2lua2VybmVsX2Jvb3QK
WyAgICAwLjQ1MzA1NF0gUnVuIC9zYmluL2luaXQgYXMgaW5pdCBwcm9jZXNzClsgICAgMC40
NTMwNTVdICAgd2l0aCBhcmd1bWVudHM6ClsgICAgMC40NTMwNTddICAgICAvc2Jpbi9pbml0
ClsgICAgMC40NTMwNThdICAgICBub2lzYXBucApbICAgIDAuNDUzMDU5XSAgIHdpdGggZW52
aXJvbm1lbnQ6ClsgICAgMC40NTMwNjBdICAgICBIT01FPS8KWyAgICAwLjQ1MzA2MF0gICAg
IFRFUk09bGludXgKWyAgICAwLjQ1MzA2MV0gICAgIEJPT1RfSU1BR0U9L2Jvb3Qvdm1saW51
ei02LjMuMC1yYzYtMDAzMTEtZ2RlODIyNDk2OWY2NgpbICAgIDAuNjI5NTE4XSBzeXN0ZW1k
WzFdOiBJbnNlcnRlZCBtb2R1bGUgJ2F1dG9mczQnClsgICAgMC42NTY2MDFdIE5FVDogUmVn
aXN0ZXJlZCBQRl9JTkVUNiBwcm90b2NvbCBmYW1pbHkKWyAgICAwLjY1NzQ5NV0gU2VnbWVu
dCBSb3V0aW5nIHdpdGggSVB2NgpbICAgIDAuNjU3NTIzXSBJbi1zaXR1IE9BTSAoSU9BTSkg
d2l0aCBJUHY2ClsgICAgMC42ODM3NDVdIHN5c3RlbWRbMV06IHN5c3RlbWQgMjUyLjYtMSBy
dW5uaW5nIGluIHN5c3RlbSBtb2RlICgrUEFNICtBVURJVCArU0VMSU5VWCArQVBQQVJNT1Ig
K0lNQSArU01BQ0sgK1NFQ0NPTVAgK0dDUllQVCAtR05VVExTICtPUEVOU1NMICtBQ0wgK0JM
S0lEICtDVVJMICtFTEZVVElMUyArRklETzIgK0lETjIgLUlETiArSVBUQyArS01PRCArTElC
Q1JZUFRTRVRVUCArTElCRkRJU0sgK1BDUkUyIC1QV1FVQUxJVFkgK1AxMUtJVCArUVJFTkNP
REUgK1RQTTIgK0JaSVAyICtMWjQgK1haICtaTElCICtaU1REIC1CUEZfRlJBTUVXT1JLIC1Y
S0JDT01NT04gK1VUTVAgK1NZU1ZJTklUIGRlZmF1bHQtaGllcmFyY2h5PXVuaWZpZWQpClsg
ICAgMC42ODM3NTZdIHN5c3RlbWRbMV06IERldGVjdGVkIGFyY2hpdGVjdHVyZSB4ODYtNjQu
ClsgICAgMC42ODg0ODVdIHN5c3RlbWRbMV06IEhvc3RuYW1lIHNldCB0byA8a29kaT4uClsg
ICAgMC45NTkwNzFdIHN5c3RlbWRbMV06IFF1ZXVlZCBzdGFydCBqb2IgZm9yIGRlZmF1bHQg
dGFyZ2V0IGdyYXBoaWNhbC50YXJnZXQuClsgICAgMC45ODIxOThdIHN5c3RlbWRbMV06IENy
ZWF0ZWQgc2xpY2Ugc3lzdGVtLWdldHR5LnNsaWNlIC0gU2xpY2UgL3N5c3RlbS9nZXR0eS4K
WyAgICAwLjk4MzI4Nl0gc3lzdGVtZFsxXTogQ3JlYXRlZCBzbGljZSBzeXN0ZW0tbW9kcHJv
YmUuc2xpY2UgLSBTbGljZSAvc3lzdGVtL21vZHByb2JlLgpbICAgIDAuOTg0MTIzXSBzeXN0
ZW1kWzFdOiBDcmVhdGVkIHNsaWNlIHVzZXIuc2xpY2UgLSBVc2VyIGFuZCBTZXNzaW9uIFNs
aWNlLgpbICAgIDAuOTg0MzAyXSBzeXN0ZW1kWzFdOiBTdGFydGVkIHN5c3RlbWQtYXNrLXBh
c3N3b3JkLWNvbnNvbGUucGF0aCAtIERpc3BhdGNoIFBhc3N3b3JkIFJlcXVlc3RzIHRvIENv
bnNvbGUgRGlyZWN0b3J5IFdhdGNoLgpbICAgIDAuOTg0NDE3XSBzeXN0ZW1kWzFdOiBTdGFy
dGVkIHN5c3RlbWQtYXNrLXBhc3N3b3JkLXdhbGwucGF0aCAtIEZvcndhcmQgUGFzc3dvcmQg
UmVxdWVzdHMgdG8gV2FsbCBEaXJlY3RvcnkgV2F0Y2guClsgICAgMC45ODQ4NTVdIHN5c3Rl
bWRbMV06IFNldCB1cCBhdXRvbW91bnQgcHJvYy1zeXMtZnMtYmluZm10X21pc2MuYXV0b21v
dW50IC0gQXJiaXRyYXJ5IEV4ZWN1dGFibGUgRmlsZSBGb3JtYXRzIEZpbGUgU3lzdGVtIEF1
dG9tb3VudCBQb2ludC4KWyAgICAwLjk4NDg5NF0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJn
ZXQgY3J5cHRzZXR1cC50YXJnZXQgLSBMb2NhbCBFbmNyeXB0ZWQgVm9sdW1lcy4KWyAgICAw
Ljk4NDkzNF0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgaW50ZWdyaXR5c2V0dXAudGFy
Z2V0IC0gTG9jYWwgSW50ZWdyaXR5IFByb3RlY3RlZCBWb2x1bWVzLgpbICAgIDAuOTg0OTcy
XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBwYXRocy50YXJnZXQgLSBQYXRoIFVuaXRz
LgpbICAgIDAuOTg1MDA2XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCByZW1vdGUtZnMu
dGFyZ2V0IC0gUmVtb3RlIEZpbGUgU3lzdGVtcy4KWyAgICAwLjk4NTAzOF0gc3lzdGVtZFsx
XTogUmVhY2hlZCB0YXJnZXQgc2xpY2VzLnRhcmdldCAtIFNsaWNlIFVuaXRzLgpbICAgIDAu
OTg1MDc4XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBzd2FwLnRhcmdldCAtIFN3YXBz
LgpbICAgIDAuOTg1MTE2XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCB2ZXJpdHlzZXR1
cC50YXJnZXQgLSBMb2NhbCBWZXJpdHkgUHJvdGVjdGVkIFZvbHVtZXMuClsgICAgMC45ODc1
NTBdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBzeXN0ZW1kLWNvcmVkdW1wLnNvY2tldCAt
IFByb2Nlc3MgQ29yZSBEdW1wIFNvY2tldC4KWyAgICAwLjk4NzgwMl0gc3lzdGVtZFsxXTog
TGlzdGVuaW5nIG9uIHN5c3RlbWQtZnNja2Quc29ja2V0IC0gZnNjayB0byBmc2NrZCBjb21t
dW5pY2F0aW9uIFNvY2tldC4KWyAgICAwLjk4Nzk3NV0gc3lzdGVtZFsxXTogTGlzdGVuaW5n
IG9uIHN5c3RlbWQtaW5pdGN0bC5zb2NrZXQgLSBpbml0Y3RsIENvbXBhdGliaWxpdHkgTmFt
ZWQgUGlwZS4KWyAgICAwLjk4ODI3NF0gc3lzdGVtZFsxXTogTGlzdGVuaW5nIG9uIHN5c3Rl
bWQtam91cm5hbGQtYXVkaXQuc29ja2V0IC0gSm91cm5hbCBBdWRpdCBTb2NrZXQuClsgICAg
MC45ODg1NDNdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBzeXN0ZW1kLWpvdXJuYWxkLWRl
di1sb2cuc29ja2V0IC0gSm91cm5hbCBTb2NrZXQgKC9kZXYvbG9nKS4KWyAgICAwLjk4ODgy
Ml0gc3lzdGVtZFsxXTogTGlzdGVuaW5nIG9uIHN5c3RlbWQtam91cm5hbGQuc29ja2V0IC0g
Sm91cm5hbCBTb2NrZXQuClsgICAgMC45ODkwNzJdIHN5c3RlbWRbMV06IExpc3RlbmluZyBv
biBzeXN0ZW1kLW5ldHdvcmtkLnNvY2tldCAtIE5ldHdvcmsgU2VydmljZSBOZXRsaW5rIFNv
Y2tldC4KWyAgICAwLjk4OTk0Nl0gc3lzdGVtZFsxXTogTGlzdGVuaW5nIG9uIHN5c3RlbWQt
dWRldmQtY29udHJvbC5zb2NrZXQgLSB1ZGV2IENvbnRyb2wgU29ja2V0LgpbICAgIDAuOTkw
MjEwXSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gc3lzdGVtZC11ZGV2ZC1rZXJuZWwuc29j
a2V0IC0gdWRldiBLZXJuZWwgU29ja2V0LgpbICAgIDAuOTkyOTA0XSBzeXN0ZW1kWzFdOiBN
b3VudGluZyBkZXYtaHVnZXBhZ2VzLm1vdW50IC0gSHVnZSBQYWdlcyBGaWxlIFN5c3RlbS4u
LgpbICAgIDAuOTk1OTQ0XSBzeXN0ZW1kWzFdOiBNb3VudGluZyBkZXYtbXF1ZXVlLm1vdW50
IC0gUE9TSVggTWVzc2FnZSBRdWV1ZSBGaWxlIFN5c3RlbS4uLgpbICAgIDAuOTk5MTQyXSBz
eXN0ZW1kWzFdOiBNb3VudGluZyBzeXMta2VybmVsLWRlYnVnLm1vdW50IC0gS2VybmVsIERl
YnVnIEZpbGUgU3lzdGVtLi4uClsgICAgMS4wMDI2ODJdIHN5c3RlbWRbMV06IE1vdW50aW5n
IHN5cy1rZXJuZWwtdHJhY2luZy5tb3VudCAtIEtlcm5lbCBUcmFjZSBGaWxlIFN5c3RlbS4u
LgpbICAgIDEuMDA3MjU2XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBrbW9kLXN0YXRpYy1ub2Rl
cy5zZXJ2aWNlIC0gQ3JlYXRlIExpc3Qgb2YgU3RhdGljIERldmljZSBOb2Rlcy4uLgpbICAg
IDEuMDE0Mjc2XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBtb2Rwcm9iZUBjb25maWdmcy5zZXJ2
aWNlIC0gTG9hZCBLZXJuZWwgTW9kdWxlIGNvbmZpZ2ZzLi4uClsgICAgMS4wMTczOTldIHN5
c3RlbWRbMV06IFN0YXJ0aW5nIG1vZHByb2JlQGRtX21vZC5zZXJ2aWNlIC0gTG9hZCBLZXJu
ZWwgTW9kdWxlIGRtX21vZC4uLgpbICAgIDEuMDIwNjU3XSBzeXN0ZW1kWzFdOiBTdGFydGlu
ZyBtb2Rwcm9iZUBkcm0uc2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBkcm0uLi4KWyAg
ICAxLjAyNDAwNl0gc3lzdGVtZFsxXTogU3RhcnRpbmcgbW9kcHJvYmVAZWZpX3BzdG9yZS5z
ZXJ2aWNlIC0gTG9hZCBLZXJuZWwgTW9kdWxlIGVmaV9wc3RvcmUuLi4KWyAgICAxLjAyNzI3
OF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgbW9kcHJvYmVAZnVzZS5zZXJ2aWNlIC0gTG9hZCBL
ZXJuZWwgTW9kdWxlIGZ1c2UuLi4KWyAgICAxLjAzNjA2Nl0gc3lzdGVtZFsxXTogU3RhcnRp
bmcgbW9kcHJvYmVAbG9vcC5zZXJ2aWNlIC0gTG9hZCBLZXJuZWwgTW9kdWxlIGxvb3AuLi4K
WyAgICAxLjAzNjE1MV0gc3lzdGVtZFsxXTogc3lzdGVtZC1maXJzdGJvb3Quc2VydmljZSAt
IEZpcnN0IEJvb3QgV2l6YXJkIHdhcyBza2lwcGVkIGJlY2F1c2Ugb2YgYW4gdW5tZXQgY29u
ZGl0aW9uIGNoZWNrIChDb25kaXRpb25GaXJzdEJvb3Q9eWVzKS4KWyAgICAxLjAzNjIzN10g
c3lzdGVtZFsxXTogc3lzdGVtZC1mc2NrLXJvb3Quc2VydmljZSAtIEZpbGUgU3lzdGVtIENo
ZWNrIG9uIFJvb3QgRGV2aWNlIHdhcyBza2lwcGVkIGJlY2F1c2Ugb2YgYW4gdW5tZXQgY29u
ZGl0aW9uIGNoZWNrIChDb25kaXRpb25QYXRoSXNSZWFkV3JpdGU9IS8pLgpbICAgIDEuMDM2
Mjc5XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBsb2NhbC1mcy50YXJnZXQgLSBMb2Nh
bCBGaWxlIFN5c3RlbXMuClsgICAgMS4wMzYzNjJdIHN5c3RlbWRbMV06IGFwcGFybW9yLnNl
cnZpY2UgLSBMb2FkIEFwcEFybW9yIHByb2ZpbGVzIHdhcyBza2lwcGVkIGJlY2F1c2Ugb2Yg
YW4gdW5tZXQgY29uZGl0aW9uIGNoZWNrIChDb25kaXRpb25TZWN1cml0eT1hcHBhcm1vciku
ClsgICAgMS4wNDEzNTZdIGxvb3A6IG1vZHVsZSBsb2FkZWQKWyAgICAxLjA0Mzc0OV0gc3lz
dGVtZFsxXTogU3RhcnRpbmcgc3lzdGVtZC1iaW5mbXQuc2VydmljZSAtIFNldCBVcCBBZGRp
dGlvbmFsIEJpbmFyeSBGb3JtYXRzLi4uClsgICAgMS4wNDg5ODNdIHN5c3RlbWRbMV06IFN0
YXJ0aW5nIHN5c3RlbWQtam91cm5hbGQuc2VydmljZSAtIEpvdXJuYWwgU2VydmljZS4uLgpb
ICAgIDEuMDUwNzQwXSBmdXNlOiBpbml0IChBUEkgdmVyc2lvbiA3LjM4KQpbICAgIDEuMDUy
ODI0XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBzeXN0ZW1kLXJhbmRvbS1zZWVkLnNlcnZpY2Ug
LSBMb2FkL1NhdmUgUmFuZG9tIFNlZWQuLi4KWyAgICAxLjA2MzkyNF0gc3lzdGVtZFsxXTog
U3RhcnRpbmcgc3lzdGVtZC1zeXNjdGwuc2VydmljZSAtIEFwcGx5IEtlcm5lbCBWYXJpYWJs
ZXMuLi4KWyAgICAxLjA2ODA2MV0gc3lzdGVtZFsxXTogU3RhcnRpbmcgc3lzdGVtZC1zeXN1
c2Vycy5zZXJ2aWNlIC0gQ3JlYXRlIFN5c3RlbSBVc2Vycy4uLgpbICAgIDEuMDcxMzI4XSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBzeXN0ZW1kLXVkZXYtdHJpZ2dlci5zZXJ2aWNlIC0gQ29s
ZHBsdWcgQWxsIHVkZXYgRGV2aWNlcy4uLgpbICAgIDEuMDkzNTE3XSBzeXN0ZW1kWzFdOiBN
b3VudGVkIGRldi1odWdlcGFnZXMubW91bnQgLSBIdWdlIFBhZ2VzIEZpbGUgU3lzdGVtLgpb
ICAgIDEuMDk0MTM0XSBzeXN0ZW1kWzFdOiBNb3VudGVkIGRldi1tcXVldWUubW91bnQgLSBQ
T1NJWCBNZXNzYWdlIFF1ZXVlIEZpbGUgU3lzdGVtLgpbICAgIDEuMDk0NDQ3XSBzeXN0ZW1k
WzFdOiBNb3VudGVkIHN5cy1rZXJuZWwtZGVidWcubW91bnQgLSBLZXJuZWwgRGVidWcgRmls
ZSBTeXN0ZW0uClsgICAgMS4wOTQ5ODVdIHN5c3RlbWRbMV06IE1vdW50ZWQgc3lzLWtlcm5l
bC10cmFjaW5nLm1vdW50IC0gS2VybmVsIFRyYWNlIEZpbGUgU3lzdGVtLgpbICAgIDEuMDk1
OTc0XSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBrbW9kLXN0YXRpYy1ub2Rlcy5zZXJ2aWNlIC0g
Q3JlYXRlIExpc3Qgb2YgU3RhdGljIERldmljZSBOb2Rlcy4KWyAgICAxLjA5Njg4OF0gc3lz
dGVtZFsxXTogbW9kcHJvYmVAY29uZmlnZnMuc2VydmljZTogRGVhY3RpdmF0ZWQgc3VjY2Vz
c2Z1bGx5LgpbICAgIDEuMDk3MjU3XSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBtb2Rwcm9iZUBj
b25maWdmcy5zZXJ2aWNlIC0gTG9hZCBLZXJuZWwgTW9kdWxlIGNvbmZpZ2ZzLgpbICAgIDEu
MDk4MDAxXSBzeXN0ZW1kWzFdOiBtb2Rwcm9iZUBkbV9tb2Quc2VydmljZTogRGVhY3RpdmF0
ZWQgc3VjY2Vzc2Z1bGx5LgpbICAgIDEuMDk4MzM5XSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBt
b2Rwcm9iZUBkbV9tb2Quc2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBkbV9tb2QuClsg
ICAgMS4wOTkwNzddIHN5c3RlbWRbMV06IG1vZHByb2JlQGRybS5zZXJ2aWNlOiBEZWFjdGl2
YXRlZCBzdWNjZXNzZnVsbHkuClsgICAgMS4wOTk0NDZdIHN5c3RlbWRbMV06IEZpbmlzaGVk
IG1vZHByb2JlQGRybS5zZXJ2aWNlIC0gTG9hZCBLZXJuZWwgTW9kdWxlIGRybS4KWyAgICAx
LjEwMDUxOF0gc3lzdGVtZFsxXTogbW9kcHJvYmVAZWZpX3BzdG9yZS5zZXJ2aWNlOiBEZWFj
dGl2YXRlZCBzdWNjZXNzZnVsbHkuClsgICAgMS4xMDA4NDZdIHN5c3RlbWRbMV06IEZpbmlz
aGVkIG1vZHByb2JlQGVmaV9wc3RvcmUuc2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBl
ZmlfcHN0b3JlLgpbICAgIDEuMTAxNzQ5XSBzeXN0ZW1kWzFdOiBtb2Rwcm9iZUBmdXNlLnNl
cnZpY2U6IERlYWN0aXZhdGVkIHN1Y2Nlc3NmdWxseS4KWyAgICAxLjEwMjA2NF0gc3lzdGVt
ZFsxXTogRmluaXNoZWQgbW9kcHJvYmVAZnVzZS5zZXJ2aWNlIC0gTG9hZCBLZXJuZWwgTW9k
dWxlIGZ1c2UuClsgICAgMS4xMDI4MTBdIHN5c3RlbWRbMV06IG1vZHByb2JlQGxvb3Auc2Vy
dmljZTogRGVhY3RpdmF0ZWQgc3VjY2Vzc2Z1bGx5LgpbICAgIDEuMTAzMTQyXSBzeXN0ZW1k
WzFdOiBGaW5pc2hlZCBtb2Rwcm9iZUBsb29wLnNlcnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1
bGUgbG9vcC4KWyAgICAxLjEwMzgzMF0gc3lzdGVtZFsxXTogcHJvYy1zeXMtZnMtYmluZm10
X21pc2MuYXV0b21vdW50OiBHb3QgYXV0b21vdW50IHJlcXVlc3QgZm9yIC9wcm9jL3N5cy9m
cy9iaW5mbXRfbWlzYywgdHJpZ2dlcmVkIGJ5IDEzOCAoc3lzdGVtZC1iaW5mbXQpClsgICAg
MS4xMjMyNDJdIHN5c3RlbWRbMV06IE1vdW50aW5nIHByb2Mtc3lzLWZzLWJpbmZtdF9taXNj
Lm1vdW50IC0gQXJiaXRyYXJ5IEV4ZWN1dGFibGUgRmlsZSBGb3JtYXRzIEZpbGUgU3lzdGVt
Li4uClsgICAgMS4xNDAwMjhdIHN5c3RlbWRbMV06IE1vdW50aW5nIHN5cy1mcy1mdXNlLWNv
bm5lY3Rpb25zLm1vdW50IC0gRlVTRSBDb250cm9sIEZpbGUgU3lzdGVtLi4uClsgICAgMS4x
NDgyMjNdIHN5c3RlbWRbMV06IE1vdW50aW5nIHN5cy1rZXJuZWwtY29uZmlnLm1vdW50IC0g
S2VybmVsIENvbmZpZ3VyYXRpb24gRmlsZSBTeXN0ZW0uLi4KWyAgICAxLjE0ODM0Nl0gc3lz
dGVtZFsxXTogc3lzdGVtZC1wc3RvcmUuc2VydmljZSAtIFBsYXRmb3JtIFBlcnNpc3RlbnQg
U3RvcmFnZSBBcmNoaXZhbCB3YXMgc2tpcHBlZCBiZWNhdXNlIG9mIGFuIHVubWV0IGNvbmRp
dGlvbiBjaGVjayAoQ29uZGl0aW9uRGlyZWN0b3J5Tm90RW1wdHk9L3N5cy9mcy9wc3RvcmUp
LgpbICAgIDEuMTQ4NTE1XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXJlcGFydC5zZXJ2aWNlIC0g
UmVwYXJ0aXRpb24gUm9vdCBEaXNrIHdhcyBza2lwcGVkIGJlY2F1c2Ugbm8gdHJpZ2dlciBj
b25kaXRpb24gY2hlY2tzIHdlcmUgbWV0LgpbICAgIDEuMTU5OTA5XSBzeXN0ZW1kWzFdOiBG
aW5pc2hlZCBzeXN0ZW1kLXN5c2N0bC5zZXJ2aWNlIC0gQXBwbHkgS2VybmVsIFZhcmlhYmxl
cy4KWyAgICAxLjE4NDUyOV0gc3lzdGVtZFsxXTogRmluaXNoZWQgc3lzdGVtZC1zeXN1c2Vy
cy5zZXJ2aWNlIC0gQ3JlYXRlIFN5c3RlbSBVc2Vycy4KWyAgICAxLjE4NjAwMF0gc3lzdGVt
ZFsxXTogTW91bnRlZCBwcm9jLXN5cy1mcy1iaW5mbXRfbWlzYy5tb3VudCAtIEFyYml0cmFy
eSBFeGVjdXRhYmxlIEZpbGUgRm9ybWF0cyBGaWxlIFN5c3RlbS4KWyAgICAxLjE4NjM1N10g
c3lzdGVtZFsxXTogTW91bnRlZCBzeXMtZnMtZnVzZS1jb25uZWN0aW9ucy5tb3VudCAtIEZV
U0UgQ29udHJvbCBGaWxlIFN5c3RlbS4KWyAgICAxLjE4NjYwMF0gc3lzdGVtZFsxXTogTW91
bnRlZCBzeXMta2VybmVsLWNvbmZpZy5tb3VudCAtIEtlcm5lbCBDb25maWd1cmF0aW9uIEZp
bGUgU3lzdGVtLgpbICAgIDEuMjA5ODIzXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBzeXN0ZW1k
LXRtcGZpbGVzLXNldHVwLWRldi5zZXJ2aWNlIC0gQ3JlYXRlIFN0YXRpYyBEZXZpY2UgTm9k
ZXMgaW4gL2Rldi4uLgpbICAgIDEuMjEwNTEyXSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBzeXN0
ZW1kLWJpbmZtdC5zZXJ2aWNlIC0gU2V0IFVwIEFkZGl0aW9uYWwgQmluYXJ5IEZvcm1hdHMu
ClsgICAgMS4yNTQyNzNdIHN5c3RlbWRbMV06IEZpbmlzaGVkIHN5c3RlbWQtdG1wZmlsZXMt
c2V0dXAtZGV2LnNlcnZpY2UgLSBDcmVhdGUgU3RhdGljIERldmljZSBOb2RlcyBpbiAvZGV2
LgpbICAgIDEuMjczNDY5XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBzeXN0ZW1kLXVkZXZkLnNl
cnZpY2UgLSBSdWxlLWJhc2VkIE1hbmFnZXIgZm9yIERldmljZSBFdmVudHMgYW5kIEZpbGVz
Li4uClsgICAgMS4zMTMyODZdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgc3lzdGVtZC1qb3VybmFs
ZC5zZXJ2aWNlIC0gSm91cm5hbCBTZXJ2aWNlLgpbICAgIDEuMzU5NjM2XSBzeXN0ZW1kLWpv
dXJuYWxkWzEzOV06IFJlY2VpdmVkIGNsaWVudCByZXF1ZXN0IHRvIGZsdXNoIHJ1bnRpbWUg
am91cm5hbC4KWyAgICAxLjM2OTE3M10gdHNjOiBSZWZpbmVkIFRTQyBjbG9ja3NvdXJjZSBj
YWxpYnJhdGlvbjogMzkwMC4yMjMgTUh6ClsgICAgMS4zNjkxODRdIGNsb2Nrc291cmNlOiB0
c2M6IG1hc2s6IDB4ZmZmZmZmZmZmZmZmZmZmZiBtYXhfY3ljbGVzOiAweDcwNzA1YTY0NzJj
LCBtYXhfaWRsZV9uczogODgxNTkwNTg2ODEyIG5zClsgICAgMS4zNjkyMDBdIGNsb2Nrc291
cmNlOiBTd2l0Y2hlZCB0byBjbG9ja3NvdXJjZSB0c2MKWyAgICAxLjU5NDU1OV0gc2QgNjow
OjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNnMCB0eXBlIDAKWyAgICAxLjczMTcwNF0g
YWNwaV9jcHVmcmVxOiBvdmVycmlkaW5nIEJJT1MgcHJvdmlkZWQgX1BTRCBkYXRhClsgICAg
MS45NTMxNzRdIHJhbmRvbTogY3JuZyBpbml0IGRvbmUKWyAgICAxLjk5MjY1NF0gUVVJUks6
IEVuYWJsZSBBTUQgUExMIGZpeApbICAgIDEuOTkyNzA2XSBlaGNpLXBjaSAwMDAwOjAwOjEy
LjI6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMS45OTI3MzhdIGVoY2ktcGNpIDAwMDA6
MDA6MTIuMjogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAx
ClsgICAgMS45OTI3NTFdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogYXBwbHlpbmcgQU1EIFNC
NzAwL1NCODAwL0h1ZHNvbi0yLzMgRUhDSSBkdW1teSBxaCB3b3JrYXJvdW5kClsgICAgMS45
OTI3NjBdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogZGVidWcgcG9ydCAxClsgICAgMS45OTI5
NzFdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogaXJxIDE3LCBpbyBtZW0gMHhmMDFjZDAwMApb
ICAgIDIuMDA1MTk3XSBlaGNpLXBjaSAwMDAwOjAwOjEyLjI6IFVTQiAyLjAgc3RhcnRlZCwg
RUhDSSAxLjAwClsgICAgMi4wMDU0NjhdIHVzYiB1c2IxOiBOZXcgVVNCIGRldmljZSBmb3Vu
ZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDIsIGJjZERldmljZT0gNi4wMwpbICAg
IDIuMDA1NDcxXSB1c2IgdXNiMTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFBy
b2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAgICAyLjAwNTQ3NF0gdXNiIHVzYjE6IFByb2R1
Y3Q6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMi4wMDU0NzZdIHVzYiB1c2IxOiBNYW51
ZmFjdHVyZXI6IExpbnV4IDYuMy4wLXJjNi0wMDMxMS1nZGU4MjI0OTY5ZjY2IGVoY2lfaGNk
ClsgICAgMi4wMDU0NzhdIHVzYiB1c2IxOiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6MTIuMgpb
ICAgIDIuMDA2Mjc5XSBodWIgMS0wOjEuMDogVVNCIGh1YiBmb3VuZApbICAgIDIuMDA2MzIx
XSBodWIgMS0wOjEuMDogNSBwb3J0cyBkZXRlY3RlZApbICAgIDIuMDA3MzE5XSBlaGNpLXBj
aSAwMDAwOjAwOjEzLjI6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMi4wMDczNDVdIGVo
Y2ktcGNpIDAwMDA6MDA6MTMuMjogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQg
YnVzIG51bWJlciAyClsgICAgMi4wMDczNjBdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogYXBw
bHlpbmcgQU1EIFNCNzAwL1NCODAwL0h1ZHNvbi0yLzMgRUhDSSBkdW1teSBxaCB3b3JrYXJv
dW5kClsgICAgMi4wMDczNjldIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogZGVidWcgcG9ydCAx
ClsgICAgMi4wMDc1MjNdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogaXJxIDE3LCBpbyBtZW0g
MHhmMDFjZTAwMApbICAgIDIuMDE2MTQxXSBwaWl4NF9zbWJ1cyAwMDAwOjAwOjE0LjA6IFNN
QnVzIEhvc3QgQ29udHJvbGxlciBhdCAweGIwMCwgcmV2aXNpb24gMApbICAgIDIuMDE2MTQ3
XSBwaWl4NF9zbWJ1cyAwMDAwOjAwOjE0LjA6IFVzaW5nIHJlZ2lzdGVyIDB4MmUgZm9yIFNN
QnVzIHBvcnQgc2VsZWN0aW9uClsgICAgMi4wMTY3ODhdIHBpaXg0X3NtYnVzIDAwMDA6MDA6
MTQuMDogQXV4aWxpYXJ5IFNNQnVzIEhvc3QgQ29udHJvbGxlciBhdCAweGIyMApbICAgIDIu
MDIxMTg0XSBlaGNpLXBjaSAwMDAwOjAwOjEzLjI6IFVTQiAyLjAgc3RhcnRlZCwgRUhDSSAx
LjAwClsgICAgMi4wMjE1MjNdIHVzYiB1c2IyOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRW
ZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDIsIGJjZERldmljZT0gNi4wMwpbICAgIDIuMDIx
NTI3XSB1c2IgdXNiMjogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9
MiwgU2VyaWFsTnVtYmVyPTEKWyAgICAyLjAyMTUyOV0gdXNiIHVzYjI6IFByb2R1Y3Q6IEVI
Q0kgSG9zdCBDb250cm9sbGVyClsgICAgMi4wMjE1MzFdIHVzYiB1c2IyOiBNYW51ZmFjdHVy
ZXI6IExpbnV4IDYuMy4wLXJjNi0wMDMxMS1nZGU4MjI0OTY5ZjY2IGVoY2lfaGNkClsgICAg
Mi4wMjE1MzNdIHVzYiB1c2IyOiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6MTMuMgpbICAgIDIu
MDMyMDg2XSBodWIgMi0wOjEuMDogVVNCIGh1YiBmb3VuZApbICAgIDIuMDMyMTcxXSBodWIg
Mi0wOjEuMDogNSBwb3J0cyBkZXRlY3RlZApbICAgIDIuMDMzNDIwXSBlaGNpLXBjaSAwMDAw
OjAwOjE2LjI6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMi4wMzM0NTVdIGVoY2ktcGNp
IDAwMDA6MDA6MTYuMjogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51
bWJlciAzClsgICAgMi4wMzM0NzBdIGVoY2ktcGNpIDAwMDA6MDA6MTYuMjogYXBwbHlpbmcg
QU1EIFNCNzAwL1NCODAwL0h1ZHNvbi0yLzMgRUhDSSBkdW1teSBxaCB3b3JrYXJvdW5kClsg
ICAgMi4wMzM0NzldIGVoY2ktcGNpIDAwMDA6MDA6MTYuMjogZGVidWcgcG9ydCAxClsgICAg
Mi4wMzM2NDFdIGVoY2ktcGNpIDAwMDA6MDA6MTYuMjogaXJxIDE3LCBpbyBtZW0gMHhmMDFj
ZjAwMApbICAgIDIuMDQ5MTc3XSBlaGNpLXBjaSAwMDAwOjAwOjE2LjI6IFVTQiAyLjAgc3Rh
cnRlZCwgRUhDSSAxLjAwClsgICAgMi4wNDk0NTRdIHVzYiB1c2IzOiBOZXcgVVNCIGRldmlj
ZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDIsIGJjZERldmljZT0gNi4w
MwpbICAgIDIuMDQ5NDU5XSB1c2IgdXNiMzogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZy
PTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAgICAyLjA0OTQ2MV0gdXNiIHVzYjM6
IFByb2R1Y3Q6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMi4wNDk0NjNdIHVzYiB1c2Iz
OiBNYW51ZmFjdHVyZXI6IExpbnV4IDYuMy4wLXJjNi0wMDMxMS1nZGU4MjI0OTY5ZjY2IGVo
Y2lfaGNkClsgICAgMi4wNDk0NjVdIHVzYiB1c2IzOiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6
MTYuMgpbICAgIDIuMDUwMDI1XSBodWIgMy0wOjEuMDogVVNCIGh1YiBmb3VuZApbICAgIDIu
MDUwMDcwXSBodWIgMy0wOjEuMDogNCBwb3J0cyBkZXRlY3RlZApbICAgIDIuMDUwODgxXSBv
aGNpLXBjaSAwMDAwOjAwOjE2LjA6IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcgpbICAgIDIu
MDUwOTEwXSBvaGNpLXBjaSAwMDAwOjAwOjE2LjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQs
IGFzc2lnbmVkIGJ1cyBudW1iZXIgNApbICAgIDIuMDUwOTk5XSBvaGNpLXBjaSAwMDAwOjAw
OjEyLjA6IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcgpbICAgIDIuMDUxMDIxXSBvaGNpLXBj
aSAwMDAwOjAwOjEyLjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBu
dW1iZXIgNQpbICAgIDIuMDUxMTEwXSBvaGNpLXBjaSAwMDAwOjAwOjE2LjA6IGlycSAxOCwg
aW8gbWVtIDB4ZjAxY2IwMDAKWyAgICAyLjA1MTExMV0gb2hjaS1wY2kgMDAwMDowMDoxMi4w
OiBpcnEgMTgsIGlvIG1lbSAweGYwMWM4MDAwClsgICAgMi4wNTExMjBdIG9oY2ktcGNpIDAw
MDA6MDA6MTMuMDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyClsgICAgMi4wNTExNDFdIG9o
Y2ktcGNpIDAwMDA6MDA6MTMuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQg
YnVzIG51bWJlciA2ClsgICAgMi4wNTEyNDBdIG9oY2ktcGNpIDAwMDA6MDA6MTMuMDogaXJx
IDE4LCBpbyBtZW0gMHhmMDFjOTAwMApbICAgIDIuMDUxMjUwXSBvaGNpLXBjaSAwMDAwOjAw
OjE0LjU6IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcgpbICAgIDIuMDUxMjY4XSBvaGNpLXBj
aSAwMDAwOjAwOjE0LjU6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBu
dW1iZXIgNwpbICAgIDIuMDUxMzU2XSBvaGNpLXBjaSAwMDAwOjAwOjE0LjU6IGlycSAxOCwg
aW8gbWVtIDB4ZjAxY2EwMDAKWyAgICAyLjExMzU3OV0gdXNiIHVzYjQ6IE5ldyBVU0IgZGV2
aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMSwgYmNkRGV2aWNlPSA2
LjAzClsgICAgMi4xMTM1ODZdIHVzYiB1c2I0OiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBN
ZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAgIDIuMTEzNTg5XSB1c2IgdXNi
NDogUHJvZHVjdDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyClsgICAgMi4xMTM1OTFdIHVz
YiB1c2I0OiBNYW51ZmFjdHVyZXI6IExpbnV4IDYuMy4wLXJjNi0wMDMxMS1nZGU4MjI0OTY5
ZjY2IG9oY2lfaGNkClsgICAgMi4xMTM1OTNdIHVzYiB1c2I0OiBTZXJpYWxOdW1iZXI6IDAw
MDA6MDA6MTYuMApbICAgIDIuMTE0MDk2XSBodWIgNC0wOjEuMDogVVNCIGh1YiBmb3VuZApb
ICAgIDIuMTE0MTI3XSBodWIgNC0wOjEuMDogNCBwb3J0cyBkZXRlY3RlZApbICAgIDIuMTI2
MTk0XSB1c2IgdXNiNTogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlk
UHJvZHVjdD0wMDAxLCBiY2REZXZpY2U9IDYuMDMKWyAgICAyLjEyNjIwMl0gdXNiIHVzYjU6
IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJl
cj0xClsgICAgMi4xMjYyMDRdIHVzYiB1c2I1OiBQcm9kdWN0OiBPSENJIFBDSSBob3N0IGNv
bnRyb2xsZXIKWyAgICAyLjEyNjIwN10gdXNiIHVzYjU6IE1hbnVmYWN0dXJlcjogTGludXgg
Ni4zLjAtcmM2LTAwMzExLWdkZTgyMjQ5NjlmNjYgb2hjaV9oY2QKWyAgICAyLjEyNjIwOV0g
dXNiIHVzYjU6IFNlcmlhbE51bWJlcjogMDAwMDowMDoxMi4wClsgICAgMi4xMjczNDFdIGh1
YiA1LTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMi4xMjczODBdIGh1YiA1LTA6MS4wOiA1
IHBvcnRzIGRldGVjdGVkClsgICAgMi4xMjg1MjJdIHVzYiB1c2I2OiBOZXcgVVNCIGRldmlj
ZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDEsIGJjZERldmljZT0gNi4w
MwpbICAgIDIuMTI4NTI4XSB1c2IgdXNiNjogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZy
PTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAgICAyLjEyODUzMV0gdXNiIHVzYjY6
IFByb2R1Y3Q6IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcgpbICAgIDIuMTI4NTMzXSB1c2Ig
dXNiNjogTWFudWZhY3R1cmVyOiBMaW51eCA2LjMuMC1yYzYtMDAzMTEtZ2RlODIyNDk2OWY2
NiBvaGNpX2hjZApbICAgIDIuMTI4NTM1XSB1c2IgdXNiNjogU2VyaWFsTnVtYmVyOiAwMDAw
OjAwOjEzLjAKWyAgICAyLjEyOTA0OV0gaHViIDYtMDoxLjA6IFVTQiBodWIgZm91bmQKWyAg
ICAyLjEyOTA4Ml0gaHViIDYtMDoxLjA6IDUgcG9ydHMgZGV0ZWN0ZWQKWyAgICAyLjEzMDMx
MV0gdXNiIHVzYjc6IE5ldyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFBy
b2R1Y3Q9MDAwMSwgYmNkRGV2aWNlPSA2LjAzClsgICAgMi4xMzAzMTZdIHVzYiB1c2I3OiBO
ZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9
MQpbICAgIDIuMTMwMzE5XSB1c2IgdXNiNzogUHJvZHVjdDogT0hDSSBQQ0kgaG9zdCBjb250
cm9sbGVyClsgICAgMi4xMzAzMjFdIHVzYiB1c2I3OiBNYW51ZmFjdHVyZXI6IExpbnV4IDYu
My4wLXJjNi0wMDMxMS1nZGU4MjI0OTY5ZjY2IG9oY2lfaGNkClsgICAgMi4xMzAzMjNdIHVz
YiB1c2I3OiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6MTQuNQpbICAgIDIuMTMwODU1XSBodWIg
Ny0wOjEuMDogVVNCIGh1YiBmb3VuZApbICAgIDIuMTMwODk1XSBodWIgNy0wOjEuMDogMiBw
b3J0cyBkZXRlY3RlZApbICAgIDIuMTQ4MTM3XSAxClsgICAgMi4xNDgxOTZdIDIKWyAgICAy
LjE0ODgyMF0gc25kX2hkYV9pbnRlbCAwMDAwOjAwOjAxLjE6IEZvcmNlIHRvIG5vbi1zbm9v
cCBtb2RlClsgICAgMi4xNDg4MzJdIDMKWyAgICAyLjE0ODgzM10gNApbICAgIDIuMTQ4ODM0
XSA1ClsgICAgMi4xNDg4MzVdIDcKWyAgICAyLjE0ODgzN10gOApbICAgIDIuMTQ4ODM3XSA5
ClsgICAgMi4xNDg5MjldIDEKWyAgICAyLjE0ODk3OV0gMgpbICAgIDIuMTQ5NjM3XSAzClsg
ICAgMi4xNDk2NDFdIDQKWyAgICAyLjE0OTY0Ml0gNQpbICAgIDIuMTQ5NjQzXSA3ClsgICAg
Mi4xNDk2NDVdIDgKWyAgICAyLjE0OTY0Nl0gOQpbICAgIDIuMTY2Njc0XSB4aGNpX2hjZCAw
MDAwOjAzOjAwLjA6IHhIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMi4xNjY3MDddIHhoY2lf
aGNkIDAwMDA6MDM6MDAuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVz
IG51bWJlciA4ClsgICAgMi4yMDY4NTNdIGlucHV0OiBIREEgQVRJIEhETUkgSERNSS9EUCxw
Y209MyBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MDEuMS9zb3VuZC9jYXJkMC9p
bnB1dDEKWyAgICAyLjIwODEzM10gaW5wdXQ6IEhEQSBBVEkgSERNSSBIRE1JL0RQLHBjbT03
IGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAwMDowMDowMS4xL3NvdW5kL2NhcmQwL2lucHV0
MgpbICAgIDIuMjIxNjE1XSBzbmRfaGRhX2NvZGVjX3JlYWx0ZWsgaGRhdWRpb0MxRDA6IEFM
Qzg5MjogU0tVIG5vdCByZWFkeSAweDAwMDAwMTAwClsgICAgMi4yMjIzNzddIHNuZF9oZGFf
Y29kZWNfcmVhbHRlayBoZGF1ZGlvQzFEMDogYXV0b2NvbmZpZyBmb3IgQUxDODkyOiBsaW5l
X291dHM9NCAoMHgxNC8weDE2LzB4MTUvMHgxNy8weDApIHR5cGU6bGluZQpbICAgIDIuMjIy
Mzg0XSBzbmRfaGRhX2NvZGVjX3JlYWx0ZWsgaGRhdWRpb0MxRDA6ICAgIHNwZWFrZXJfb3V0
cz0wICgweDAvMHgwLzB4MC8weDAvMHgwKQpbICAgIDIuMjIyMzg3XSBzbmRfaGRhX2NvZGVj
X3JlYWx0ZWsgaGRhdWRpb0MxRDA6ICAgIGhwX291dHM9MSAoMHgxYi8weDAvMHgwLzB4MC8w
eDApClsgICAgMi4yMjIzOTBdIHNuZF9oZGFfY29kZWNfcmVhbHRlayBoZGF1ZGlvQzFEMDog
ICAgbW9ubzogbW9ub19vdXQ9MHgwClsgICAgMi4yMjIzOTJdIHNuZF9oZGFfY29kZWNfcmVh
bHRlayBoZGF1ZGlvQzFEMDogICAgZGlnLW91dD0weDFlLzB4MApbICAgIDIuMjIyMzkzXSBz
bmRfaGRhX2NvZGVjX3JlYWx0ZWsgaGRhdWRpb0MxRDA6ICAgIGlucHV0czoKWyAgICAyLjIy
MjM5NV0gc25kX2hkYV9jb2RlY19yZWFsdGVrIGhkYXVkaW9DMUQwOiAgICAgIFJlYXIgTWlj
PTB4MTgKWyAgICAyLjIyMjM5OF0gc25kX2hkYV9jb2RlY19yZWFsdGVrIGhkYXVkaW9DMUQw
OiAgICAgIEZyb250IE1pYz0weDE5ClsgICAgMi4yMjIzOTldIHNuZF9oZGFfY29kZWNfcmVh
bHRlayBoZGF1ZGlvQzFEMDogICAgICBMaW5lPTB4MWEKWyAgICAyLjIyMjQwMV0gc25kX2hk
YV9jb2RlY19yZWFsdGVrIGhkYXVkaW9DMUQwOiAgICAgIENEPTB4MWMKWyAgICAyLjIzMTgx
MV0geGhjaV9oY2QgMDAwMDowMzowMC4wOiBoY2MgcGFyYW1zIDB4MDIwMGYxODAgaGNpIHZl
cnNpb24gMHg5NiBxdWlya3MgMHgwMDAwMDAwMDAwMDgwMDEwClsgICAgMi4yMzUyNDVdIHI4
MTY5IDAwMDA6MDQ6MDAuMDogZW5hYmxpbmcgZGV2aWNlICgwMDAwIC0+IDAwMDMpClsgICAg
Mi4yNjUwNTFdIHhoY2lfaGNkIDAwMDA6MDM6MDAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXIK
WyAgICAyLjI2NTA3Nl0geGhjaV9oY2QgMDAwMDowMzowMC4wOiBuZXcgVVNCIGJ1cyByZWdp
c3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDkKWyAgICAyLjI2NTA5Ml0geGhjaV9oY2Qg
MDAwMDowMzowMC4wOiBIb3N0IHN1cHBvcnRzIFVTQiAzLjAgU3VwZXJTcGVlZApbICAgIDIu
MjY2OTUxXSB1c2IgdXNiODogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIs
IGlkUHJvZHVjdD0wMDAyLCBiY2REZXZpY2U9IDYuMDMKWyAgICAyLjI2Njk1OF0gdXNiIHVz
Yjg6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51
bWJlcj0xClsgICAgMi4yNjY5NjFdIHVzYiB1c2I4OiBQcm9kdWN0OiB4SENJIEhvc3QgQ29u
dHJvbGxlcgpbICAgIDIuMjY2OTYzXSB1c2IgdXNiODogTWFudWZhY3R1cmVyOiBMaW51eCA2
LjMuMC1yYzYtMDAzMTEtZ2RlODIyNDk2OWY2NiB4aGNpLWhjZApbICAgIDIuMjY2OTY1XSB1
c2IgdXNiODogU2VyaWFsTnVtYmVyOiAwMDAwOjAzOjAwLjAKWyAgICAyLjI2NzAxNF0gaW5w
dXQ6IEhELUF1ZGlvIEdlbmVyaWMgUmVhciBNaWMgYXMgL2RldmljZXMvcGNpMDAwMDowMC8w
MDAwOjAwOjE0LjIvc291bmQvY2FyZDEvaW5wdXQzClsgICAgMi4yNjczNTRdIGlucHV0OiBI
RC1BdWRpbyBHZW5lcmljIEZyb250IE1pYyBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6
MDA6MTQuMi9zb3VuZC9jYXJkMS9pbnB1dDQKWyAgICAyLjI2Nzc1MF0gaW5wdXQ6IEhELUF1
ZGlvIEdlbmVyaWMgTGluZSBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MTQuMi9z
b3VuZC9jYXJkMS9pbnB1dDUKWyAgICAyLjI2ODA4OV0gaW5wdXQ6IEhELUF1ZGlvIEdlbmVy
aWMgTGluZSBPdXQgRnJvbnQgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE0LjIv
c291bmQvY2FyZDEvaW5wdXQ2ClsgICAgMi4yNjg0MzddIGlucHV0OiBIRC1BdWRpbyBHZW5l
cmljIExpbmUgT3V0IFN1cnJvdW5kIGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAwMDowMDox
NC4yL3NvdW5kL2NhcmQxL2lucHV0NwpbICAgIDIuMjY4NzgzXSBpbnB1dDogSEQtQXVkaW8g
R2VuZXJpYyBMaW5lIE91dCBDTEZFIGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAwMDowMDox
NC4yL3NvdW5kL2NhcmQxL2lucHV0OApbICAgIDIuMjY5MTMyXSBpbnB1dDogSEQtQXVkaW8g
R2VuZXJpYyBMaW5lIE91dCBTaWRlIGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAwMDowMDox
NC4yL3NvdW5kL2NhcmQxL2lucHV0OQpbICAgIDIuMjY5NzE0XSBodWIgOC0wOjEuMDogVVNC
IGh1YiBmb3VuZApbICAgIDIuMjY5ODE1XSBodWIgOC0wOjEuMDogMiBwb3J0cyBkZXRlY3Rl
ZApbICAgIDIuMjY5OTU2XSBpbnB1dDogSEQtQXVkaW8gR2VuZXJpYyBGcm9udCBIZWFkcGhv
bmUgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE0LjIvc291bmQvY2FyZDEvaW5w
dXQxMApbICAgIDIuMjcyODI2XSB1c2IgdXNiOTogV2UgZG9uJ3Qga25vdyB0aGUgYWxnb3Jp
dGhtcyBmb3IgTFBNIGZvciB0aGlzIGhvc3QsIGRpc2FibGluZyBMUE0uClsgICAgMi4yNzMw
MDZdIHVzYiB1c2I5OiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQ
cm9kdWN0PTAwMDMsIGJjZERldmljZT0gNi4wMwpbICAgIDIuMjczMDEwXSB1c2IgdXNiOTog
TmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVy
PTEKWyAgICAyLjI3MzAxMl0gdXNiIHVzYjk6IFByb2R1Y3Q6IHhIQ0kgSG9zdCBDb250cm9s
bGVyClsgICAgMi4yNzMwMTRdIHVzYiB1c2I5OiBNYW51ZmFjdHVyZXI6IExpbnV4IDYuMy4w
LXJjNi0wMDMxMS1nZGU4MjI0OTY5ZjY2IHhoY2ktaGNkClsgICAgMi4yNzMwMTZdIHVzYiB1
c2I5OiBTZXJpYWxOdW1iZXI6IDAwMDA6MDM6MDAuMApbICAgIDIuMjc1NTIxXSBodWIgOS0w
OjEuMDogVVNCIGh1YiBmb3VuZApbICAgIDIuMjc1NjQ5XSBodWIgOS0wOjEuMDogMiBwb3J0
cyBkZXRlY3RlZApbICAgIDIuMjk1MzE4XSByODE2OSAwMDAwOjA0OjAwLjAgZXRoMDogUlRM
ODE2OGYvODExMWYsIDA4OjYwOjZlOjc0OjdhOjUxLCBYSUQgNDgwLCBJUlEgMzIKWyAgICAy
LjI5NTMyN10gcjgxNjkgMDAwMDowNDowMC4wIGV0aDA6IGp1bWJvIGZlYXR1cmVzIFtmcmFt
ZXM6IDkxOTQgYnl0ZXMsIHR4IGNoZWNrc3VtbWluZzoga29dClsgICAgMi4zMTE4NDVdIHI4
MTY5IDAwMDA6MDQ6MDAuMCBlbnA0czA6IHJlbmFtZWQgZnJvbSBldGgwClsgICAgMi40MjE0
MDNdIHI4MTY5IDAwMDA6MDQ6MDAuMDogRGlyZWN0IGZpcm13YXJlIGxvYWQgZm9yIHJ0bF9u
aWMvcnRsODE2OGYtMS5mdyBmYWlsZWQgd2l0aCBlcnJvciAtMgpbICAgIDIuNDIxNDE1XSBy
ODE2OSAwMDAwOjA0OjAwLjA6IFVuYWJsZSB0byBsb2FkIGZpcm13YXJlIHJ0bF9uaWMvcnRs
ODE2OGYtMS5mdyAoLTIpClsgICAgMi40MjE5MDNdIFJUTDgyMTFFIEdpZ2FiaXQgRXRoZXJu
ZXQgcjgxNjktMC00MDA6MDA6IGF0dGFjaGVkIFBIWSBkcml2ZXIgKG1paV9idXM6cGh5X2Fk
ZHI9cjgxNjktMC00MDA6MDAsIGlycT1NQUMpClsgICAgMi40NjUxNjNdIHVzYiA1LTE6IG5l
dyBsb3ctc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1c2luZyBvaGNpLXBjaQpbICAgIDIu
NDcyNzM1XSBbZHJtXSByYWRlb24ga2VybmVsIG1vZGVzZXR0aW5nIGVuYWJsZWQuClsgICAg
Mi40NzQyMDBdIFtkcm1dIGluaXRpYWxpemluZyBrZXJuZWwgbW9kZXNldHRpbmcgKEFSVUJB
IDB4MTAwMjoweDk5OTYgMHgxMDAyOjB4OTk5NiAweDAwKS4KWyAgICAyLjQ3NDI3MV0gQVRP
TSBCSU9TOiAxMTMKWyAgICAyLjQ3NDM3Ml0gcmFkZW9uIDAwMDA6MDA6MDEuMDogVlJBTTog
NTEyTSAweDAwMDAwMDAwMDAwMDAwMDAgLSAweDAwMDAwMDAwMUZGRkZGRkYgKDUxMk0gdXNl
ZCkKWyAgICAyLjQ3NDM3Nl0gcmFkZW9uIDAwMDA6MDA6MDEuMDogR1RUOiAxMDI0TSAweDAw
MDAwMDAwMjAwMDAwMDAgLSAweDAwMDAwMDAwNUZGRkZGRkYKWyAgICAyLjQ3NDM4NF0gW2Ry
bV0gRGV0ZWN0ZWQgVlJBTSBSQU09NTEyTSwgQkFSPTI1Nk0KWyAgICAyLjQ3NDM4NV0gW2Ry
bV0gUkFNIHdpZHRoIDY0Yml0cyBERFIKWyAgICAyLjQ3NDU4OV0gW2RybV0gcmFkZW9uOiA1
MTJNIG9mIFZSQU0gbWVtb3J5IHJlYWR5ClsgICAgMi40NzQ1OTVdIFtkcm1dIHJhZGVvbjog
MTAyNE0gb2YgR1RUIG1lbW9yeSByZWFkeS4KWyAgICAyLjQ3NDY0Ml0gW2RybV0gTG9hZGlu
ZyBBUlVCQSBNaWNyb2NvZGUKWyAgICAyLjQ4MTYwOF0gW2RybV0gSW50ZXJuYWwgdGhlcm1h
bCBjb250cm9sbGVyIHdpdGhvdXQgZmFuIGNvbnRyb2wKWyAgICAyLjQ4MTk3NF0gW2RybV0g
cmFkZW9uOiBkcG0gaW5pdGlhbGl6ZWQKWyAgICAyLjQ4NjY2NF0gW2RybV0gRm91bmQgVkNF
IGZpcm13YXJlL2ZlZWRiYWNrIHZlcnNpb24gNTAuMC4xIC8gMTchClsgICAgMi40ODY3MTFd
IFtkcm1dIEdBUlQ6IG51bSBjcHUgcGFnZXMgMjYyMTQ0LCBudW0gZ3B1IHBhZ2VzIDI2MjE0
NApbICAgIDIuNDkyNDAyXSByODE2OSAwMDAwOjA0OjAwLjAgZW5wNHMwOiBMaW5rIGlzIERv
d24KWyAgICAyLjUyNjM5Nl0gW2RybV0gR0FSVDogUmVzdG9yZSBlbnRyaWVzOiBudW0gY3B1
IHBhZ2VzIDI2MjE0NCwgbnVtIGdwdSBwYWdlcyAyNjIxNDQKWyAgICAyLjUyOTg0M10gW2Ry
bV0gR0FSVDogRG9uZSByZXN0b3JpbmcgZW50cmllcwpbICAgIDIuNTI5ODQ3XSBbZHJtXSBQ
Q0lFIEdBUlQgb2YgMTAyNE0gZW5hYmxlZCAodGFibGUgYXQgMHgwMDAwMDAwMDAwMUQ2MDAw
KS4KWyAgICAyLjUzMDA4N10gcmFkZW9uIDAwMDA6MDA6MDEuMDogV0IgZW5hYmxlZApbICAg
IDIuNTMwMDkwXSByYWRlb24gMDAwMDowMDowMS4wOiBmZW5jZSBkcml2ZXIgb24gcmluZyAw
IHVzZSBncHUgYWRkciAweDAwMDAwMDAwMjAwMDBjMDAKWyAgICAyLjUzMDQ2OF0gcmFkZW9u
IDAwMDA6MDA6MDEuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgNSB1c2UgZ3B1IGFkZHIgMHgw
MDAwMDAwMDAwMDc1YTE4ClsgICAgMi41NTA1MjZdIHJhZGVvbiAwMDAwOjAwOjAxLjA6IGZl
bmNlIGRyaXZlciBvbiByaW5nIDYgdXNlIGdwdSBhZGRyIDB4MDAwMDAwMDAyMDAwMGMxOApb
ICAgIDIuNTUwNTMxXSByYWRlb24gMDAwMDowMDowMS4wOiBmZW5jZSBkcml2ZXIgb24gcmlu
ZyA3IHVzZSBncHUgYWRkciAweDAwMDAwMDAwMjAwMDBjMWMKWyAgICAyLjU1MDUzM10gcmFk
ZW9uIDAwMDA6MDA6MDEuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgMSB1c2UgZ3B1IGFkZHIg
MHgwMDAwMDAwMDIwMDAwYzA0ClsgICAgMi41NTA1MzVdIHJhZGVvbiAwMDAwOjAwOjAxLjA6
IGZlbmNlIGRyaXZlciBvbiByaW5nIDIgdXNlIGdwdSBhZGRyIDB4MDAwMDAwMDAyMDAwMGMw
OApbICAgIDIuNTUwNTM2XSByYWRlb24gMDAwMDowMDowMS4wOiBmZW5jZSBkcml2ZXIgb24g
cmluZyAzIHVzZSBncHUgYWRkciAweDAwMDAwMDAwMjAwMDBjMGMKWyAgICAyLjU1MDUzOF0g
cmFkZW9uIDAwMDA6MDA6MDEuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgNCB1c2UgZ3B1IGFk
ZHIgMHgwMDAwMDAwMDIwMDAwYzEwClsgICAgMi41NTA4MDldIHJhZGVvbiAwMDAwOjAwOjAx
LjA6IHJhZGVvbjogTVNJIGxpbWl0ZWQgdG8gMzItYml0ClsgICAgMi41NTEwMDNdIHJhZGVv
biAwMDAwOjAwOjAxLjA6IHJhZGVvbjogdXNpbmcgTVNJLgpbICAgIDIuNTUxMDcxXSBbZHJt
XSByYWRlb246IGlycSBpbml0aWFsaXplZC4KWyAgICAyLjU2OTU3OV0gW2RybV0gcmluZyB0
ZXN0IG9uIDAgc3VjY2VlZGVkIGluIDMgdXNlY3MKWyAgICAyLjU2OTU4OV0gW2RybV0gcmlu
ZyB0ZXN0IG9uIDMgc3VjY2VlZGVkIGluIDQgdXNlY3MKWyAgICAyLjU2OTU5Nl0gW2RybV0g
cmluZyB0ZXN0IG9uIDQgc3VjY2VlZGVkIGluIDQgdXNlY3MKWyAgICAyLjU4MzU5NV0gW2Ry
bV0gcmluZyB0ZXN0IG9uIDUgc3VjY2VlZGVkIGluIDIgdXNlY3MKWyAgICAyLjU4NTU5M10g
W2RybV0gVVZEIGluaXRpYWxpemVkIHN1Y2Nlc3NmdWxseS4KWyAgICAyLjY3MDIxM10gdXNi
IDUtMTogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTQxM2MsIGlkUHJvZHVjdD0y
MTA2LCBiY2REZXZpY2U9IDEuMDEKWyAgICAyLjY3MDIxOF0gdXNiIDUtMTogTmV3IFVTQiBk
ZXZpY2Ugc3RyaW5nczogTWZyPTEsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTAKWyAgICAy
LjY3MDIyMF0gdXNiIDUtMTogUHJvZHVjdDogRGVsbCBRdWlldEtleSBLZXlib2FyZApbICAg
IDIuNjcwMjIyXSB1c2IgNS0xOiBNYW51ZmFjdHVyZXI6IERFTEwKWyAgICAyLjY3ODAyN10g
aW5wdXQ6IERFTEwgRGVsbCBRdWlldEtleSBLZXlib2FyZCBhcyAvZGV2aWNlcy9wY2kwMDAw
OjAwLzAwMDA6MDA6MTIuMC91c2I1LzUtMS81LTE6MS4wLzAwMDM6NDEzQzoyMTA2LjAwMDEv
aW5wdXQvaW5wdXQxMQpbICAgIDIuNjk0OTk0XSBbZHJtXSByaW5nIHRlc3Qgb24gNiBzdWNj
ZWVkZWQgaW4gMTggdXNlY3MKWyAgICAyLjY5NTAwNl0gW2RybV0gcmluZyB0ZXN0IG9uIDcg
c3VjY2VlZGVkIGluIDMgdXNlY3MKWyAgICAyLjY5NTAwN10gW2RybV0gVkNFIGluaXRpYWxp
emVkIHN1Y2Nlc3NmdWxseS4KWyAgICAyLjY5NTE3NF0gc25kX2hkYV9pbnRlbCAwMDAwOjAw
OjAxLjE6IGJvdW5kIDAwMDA6MDA6MDEuMCAob3BzIHJhZGVvbl9hdWRpb19jb21wb25lbnRf
YmluZF9vcHMgW3JhZGVvbl0pClsgICAgMi42OTUzNDZdIFtkcm1dIGliIHRlc3Qgb24gcmlu
ZyAwIHN1Y2NlZWRlZCBpbiAwIHVzZWNzClsgICAgMi42OTU0MDBdIFtkcm1dIGliIHRlc3Qg
b24gcmluZyAzIHN1Y2NlZWRlZCBpbiAwIHVzZWNzClsgICAgMi42OTU0NDldIFtkcm1dIGli
IHRlc3Qgb24gcmluZyA0IHN1Y2NlZWRlZCBpbiAwIHVzZWNzClsgICAgMi43MTMyODBdIFtk
cm1dIGliIHRlc3Qgb24gcmluZyA1IHN1Y2NlZWRlZApbICAgIDIuNzI5MjcwXSBbZHJtXSBp
YiB0ZXN0IG9uIHJpbmcgNiBzdWNjZWVkZWQgaW4gMSB1c2VjcwpbICAgIDIuNzM3OTY0XSBo
aWQtZ2VuZXJpYyAwMDAzOjQxM0M6MjEwNi4wMDAxOiBpbnB1dCxoaWRyYXcwOiBVU0IgSElE
IHYxLjEwIEtleWJvYXJkIFtERUxMIERlbGwgUXVpZXRLZXkgS2V5Ym9hcmRdIG9uIHVzYi0w
MDAwOjAwOjEyLjAtMS9pbnB1dDAKWyAgICAyLjc0NTI3MF0gW2RybV0gaWIgdGVzdCBvbiBy
aW5nIDcgc3VjY2VlZGVkIGluIDEgdXNlY3MKWyAgICAyLjc0ODc4M10gW2RybV0gUmFkZW9u
IERpc3BsYXkgQ29ubmVjdG9ycwpbICAgIDIuNzQ4Nzg5XSBbZHJtXSBDb25uZWN0b3IgMDoK
WyAgICAyLjc0ODc5MF0gW2RybV0gICBEUC0xClsgICAgMi43NDg3OTFdIFtkcm1dICAgSFBE
MQpbICAgIDIuNzQ4NzkyXSBbZHJtXSAgIEREQzogMHg2NTMwIDB4NjUzMCAweDY1MzQgMHg2
NTM0IDB4NjUzOCAweDY1MzggMHg2NTNjIDB4NjUzYwpbICAgIDIuNzQ4Nzk1XSBbZHJtXSAg
IEVuY29kZXJzOgpbICAgIDIuNzQ4Nzk2XSBbZHJtXSAgICAgREZQMTogSU5URVJOQUxfVU5J
UEhZMgpbICAgIDIuNzQ4Nzk3XSBbZHJtXSBDb25uZWN0b3IgMToKWyAgICAyLjc0ODc5OF0g
W2RybV0gICBWR0EtMQpbICAgIDIuNzQ4Nzk5XSBbZHJtXSAgIEhQRDIKWyAgICAyLjc0ODgw
MF0gW2RybV0gICBEREM6IDB4NjU0MCAweDY1NDAgMHg2NTQ0IDB4NjU0NCAweDY1NDggMHg2
NTQ4IDB4NjU0YyAweDY1NGMKWyAgICAyLjc0ODgwMl0gW2RybV0gICBFbmNvZGVyczoKWyAg
ICAyLjc0ODgwM10gW2RybV0gICAgIENSVDE6IElOVEVSTkFMX1VOSVBIWTIKWyAgICAyLjc0
ODgwNF0gW2RybV0gICAgIENSVDE6IE5VVE1FRwpbICAgIDIuNzQ4ODA1XSBbZHJtXSBDb25u
ZWN0b3IgMjoKWyAgICAyLjc0ODgwNl0gW2RybV0gICBIRE1JLUEtMQpbICAgIDIuNzQ4ODA3
XSBbZHJtXSAgIEhQRDMKWyAgICAyLjc0ODgwOF0gW2RybV0gICBEREM6IDB4NjU1MCAweDY1
NTAgMHg2NTU0IDB4NjU1NCAweDY1NTggMHg2NTU4IDB4NjU1YyAweDY1NWMKWyAgICAyLjc0
ODgxMV0gW2RybV0gICBFbmNvZGVyczoKWyAgICAyLjc0ODgxMV0gW2RybV0gICAgIERGUDI6
IElOVEVSTkFMX1VOSVBIWQpbICAgIDMuMDE5OTk5XSBbZHJtXSBmYiBtYXBwYWJsZSBhdCAw
eEUwM0U5MDAwClsgICAgMy4wMjAwMDZdIFtkcm1dIHZyYW0gYXBwZXIgYXQgMHhFMDAwMDAw
MApbICAgIDMuMDIwMDA5XSBbZHJtXSBzaXplIDUyNDI4ODAKWyAgICAzLjAyMDAxMV0gW2Ry
bV0gZmIgZGVwdGggaXMgMjQKWyAgICAzLjAyMDAxMl0gW2RybV0gICAgcGl0Y2ggaXMgNTEy
MApbICAgIDMuMDIwNTA4XSBmYmNvbjogcmFkZW9uZHJtZmIgKGZiMCkgaXMgcHJpbWFyeSBk
ZXZpY2UKWyAgICAzLjIwMTIzOV0gdXNiIDUtMjogbmV3IGxvdy1zcGVlZCBVU0IgZGV2aWNl
IG51bWJlciAzIHVzaW5nIG9oY2ktcGNpClsgICAgMy4yMTU5ODhdIENvbnNvbGU6IHN3aXRj
aGluZyB0byBjb2xvdXIgZnJhbWUgYnVmZmVyIGRldmljZSAxNjB4NjQKWyAgICAzLjIxNzc2
N10gcmFkZW9uIDAwMDA6MDA6MDEuMDogW2RybV0gZmIwOiByYWRlb25kcm1mYiBmcmFtZSBi
dWZmZXIgZGV2aWNlClsgICAgMy4yMzc0NjJdIFtkcm1dIEluaXRpYWxpemVkIHJhZGVvbiAy
LjUwLjAgMjAwODA1MjggZm9yIDAwMDA6MDA6MDEuMCBvbiBtaW5vciAwClsgICAgMy4zOTc0
ODVdIHVzYiA1LTI6IE5ldyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0wNDZkLCBpZFBy
b2R1Y3Q9YzAxNiwgYmNkRGV2aWNlPSAzLjQwClsgICAgMy4zOTc0OTNdIHVzYiA1LTI6IE5l
dyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0xLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0w
ClsgICAgMy4zOTc0OTVdIHVzYiA1LTI6IFByb2R1Y3Q6IE9wdGljYWwgVVNCIE1vdXNlClsg
ICAgMy4zOTc0OTddIHVzYiA1LTI6IE1hbnVmYWN0dXJlcjogTG9naXRlY2gKWyAgICAzLjQw
NjAwNl0gaW5wdXQ6IExvZ2l0ZWNoIE9wdGljYWwgVVNCIE1vdXNlIGFzIC9kZXZpY2VzL3Bj
aTAwMDA6MDAvMDAwMDowMDoxMi4wL3VzYjUvNS0yLzUtMjoxLjAvMDAwMzowNDZEOkMwMTYu
MDAwMi9pbnB1dC9pbnB1dDEyClsgICAgMy40MDY3OTBdIGhpZC1nZW5lcmljIDAwMDM6MDQ2
RDpDMDE2LjAwMDI6IGlucHV0LGhpZHJhdzE6IFVTQiBISUQgdjEuMTAgTW91c2UgW0xvZ2l0
ZWNoIE9wdGljYWwgVVNCIE1vdXNlXSBvbiB1c2ItMDAwMDowMDoxMi4wLTIvaW5wdXQwClsg
ICAgNS4xMDA1MzhdIHI4MTY5IDAwMDA6MDQ6MDAuMCBlbnA0czA6IExpbmsgaXMgVXAgLSAx
R2Jwcy9GdWxsIC0gZmxvdyBjb250cm9sIHJ4L3R4ClsgICAgNS4xMDA1NTVdIElQdjY6IEFE
RFJDT05GKE5FVERFVl9DSEFOR0UpOiBlbnA0czA6IGxpbmsgYmVjb21lcyByZWFkeQpbICAg
IDUuOTQ1NDMyXSBbZHJtXSBhbWRncHUga2VybmVsIG1vZGVzZXR0aW5nIGVuYWJsZWQuClsg
ICAxMC4wNjYzNjFdIG1lbWZkX2NyZWF0ZSgpIHdpdGhvdXQgTUZEX0VYRUMgbm9yIE1GRF9O
T0VYRUNfU0VBTCwgcGlkPTI1OCAnc3lzdGVtZCcK

--------------v3TYoqJn0gjeHEVPTU7V0kge--


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 17:50:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 17:50:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522274.811505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poSzZ-0001vM-Qh; Mon, 17 Apr 2023 17:50:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522274.811505; Mon, 17 Apr 2023 17:50:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poSzZ-0001vF-M6; Mon, 17 Apr 2023 17:50:17 +0000
Received: by outflank-mailman (input) for mailman id 522274;
 Mon, 17 Apr 2023 17:50:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poSzY-0001v5-SK; Mon, 17 Apr 2023 17:50:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poSzY-0002zI-Dd; Mon, 17 Apr 2023 17:50:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poSzX-0001gl-PV; Mon, 17 Apr 2023 17:50:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1poSzX-0006ze-P3; Mon, 17 Apr 2023 17:50:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aIc1QPpsZur4viuaCVJTRNN/FT+zbGqaMdmxrQFyoPY=; b=zKz9wQR+e6aMeboPWSKZaqkwZR
	217CPOTimq5rGbJPSwdd6javboUmvwUxi55ztPyXjFOZkzAYM8s4dszUakEaOS0agFAES+RIg0p1L
	mJ6p8BpoT1800PHnGU13uI0Q9ZzEqEMp3wKdDBj5I5iKkB5ZqYxFd9ODC09N6CGvwrHc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180282-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180282: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    xen-unstable:test-armhf-armhf-examine:examine-serial/bootloader:fail:nonblocking
    xen-unstable:test-armhf-armhf-examine:examine-serial/kernel:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=44843cee3d2b8daa09e5860fc4574219b57acde8
X-Osstest-Versions-That:
    xen=f872a624cbf92de9944483eea7674ef80ced1380
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Apr 2023 17:50:15 +0000

flight 180282 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180282/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180238
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180238
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw  8 xen-boot                fail starved in 180238
 test-armhf-armhf-examine   11 examine-serial/bootloader fail starved in 180238
 test-armhf-armhf-examine     12 examine-serial/kernel   fail starved in 180238
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180238
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180238

version targeted for testing:
 xen                  44843cee3d2b8daa09e5860fc4574219b57acde8
baseline version:
 xen                  f872a624cbf92de9944483eea7674ef80ced1380

Last test of basis   180238  2023-04-13 14:38:34 Z    4 days
Failing since        180256  2023-04-14 05:34:08 Z    3 days    8 attempts
Testing same since   180280  2023-04-17 00:10:03 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f872a624cb..44843cee3d  44843cee3d2b8daa09e5860fc4574219b57acde8 -> master


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 19:35:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 19:35:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522307.811554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poUd0-00044f-Ia; Mon, 17 Apr 2023 19:35:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522307.811554; Mon, 17 Apr 2023 19:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poUd0-00044Y-FT; Mon, 17 Apr 2023 19:35:06 +0000
Received: by outflank-mailman (input) for mailman id 522307;
 Mon, 17 Apr 2023 19:35:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=696H=AI=citrix.com=prvs=464f2b76b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poUcz-00044S-Ee
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 19:35:05 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f2912397-dd56-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 21:35:04 +0200 (CEST)
Received: from mail-dm6nam11lp2174.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 Apr 2023 15:34:54 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DS7PR03MB5464.namprd03.prod.outlook.com (2603:10b6:5:2cf::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 19:34:51 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 19:34:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2912397-dd56-11ed-b21e-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <2ebda1f3-23fb-3f06-c4ca-1ac508c82b40@citrix.com>
Date: Mon, 17 Apr 2023 20:34:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230417135219.3776777-1-andrew.cooper3@citrix.com>
 <d83288c5-6247-ef7d-b9ba-8bf24c7831ac@suse.com>
 <22179eac-4fc9-1521-2a83-2313b8c44a2d@citrix.com>
 <3ea38da5-70a9-6887-5384-fe002d8568c4@suse.com>
In-Reply-To: <3ea38da5-70a9-6887-5384-fe002d8568c4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 17/04/2023 3:51 pm, Jan Beulich wrote:
> On 17.04.2023 16:41, Andrew Cooper wrote:
>> On 17/04/2023 2:59 pm, Jan Beulich wrote:
>>> On 17.04.2023 15:52, Andrew Cooper wrote:
>>>> @@ -5879,6 +5880,73 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>>>>      return modify_xen_mappings(s, e, _PAGE_NONE);
>>>>  }
>>>>  
>>>> +/*
>>>> + * Similar to modify_xen_mappings(), but used by the alternatives and
>>>> + * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
>>>> + * responsibility of the caller, and *MUST* not be introduced here.
>>>> + *
>>>> + * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
>>>> + * Must be called with present flags, and over present mappings.
>>>> + * Must be called on leaf page boundaries.
>>> This last sentence, while wording-wise correct, could do with making more
>>> explicit that it is the caller's responsibility to know whether large page
>>> mappings are in use, due to ...
>> The meaning here is really "this doesn't shatter superpages", and this
>> was the most concise I could come up with.
>>
>> Would ", i.e. won't shatter 2M pages." as a clarification work?
> Yes, that would definitely help. Nevertheless I was more after something
> like "..., i.e. for 2M mappings on 2M boundaries." Which, thinking about
> it, points out that while you have a respective check for the start
> address, the full 2M page would be changed even if the end address wasn't
> 2M aligned (but fell in the middle of a 2M page).

There's no nice way to check for this, because a range that starts on a
4k-aligned but non-2M-aligned boundary can legitimately end on a 2M
boundary while still being mapped at 4k granularity.

How about ", i.e. s and e must not be in the middle of a superpage." then?

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:48:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:48:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522316.811565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVlW-0003w4-3k; Mon, 17 Apr 2023 20:47:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522316.811565; Mon, 17 Apr 2023 20:47:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVlW-0003vx-0U; Mon, 17 Apr 2023 20:47:58 +0000
Received: by outflank-mailman (input) for mailman id 522316;
 Mon, 17 Apr 2023 20:47:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vOAz=AI=oracle.com=boris.ostrovsky@srs-se1.protection.inumbo.net>)
 id 1poVlU-0003vr-Dm
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:47:56 +0000
Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com
 [205.220.165.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ef00dd8-dd61-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 22:47:53 +0200 (CEST)
Received: from pps.filterd (m0333521.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 33HJ0eMf024457; Mon, 17 Apr 2023 20:46:36 GMT
Received: from iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta01.appoci.oracle.com [130.35.100.223])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3pyjuc46nr-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 17 Apr 2023 20:46:36 +0000
Received: from pps.filterd
 (iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 33HJZjLu031978; Mon, 17 Apr 2023 20:46:34 GMT
Received: from nam10-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam10lp2105.outbound.protection.outlook.com [104.47.55.105])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id
 3pyjcanu0c-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 17 Apr 2023 20:46:34 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by SJ0PR10MB6423.namprd10.prod.outlook.com (2603:10b6:a03:44d::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 17 Apr
 2023 20:46:31 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::70ac:313b:544d:4f45]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::70ac:313b:544d:4f45%4]) with mapi id 15.20.6298.045; Mon, 17 Apr 2023
 20:46:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ef00dd8-dd61-11ed-8611-37d641c3527e
Message-ID: <9914d517-cf10-a06b-c782-c74d2f24ad46@oracle.com>
Date: Mon, 17 Apr 2023 16:46:17 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Content-Language: en-US
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, David Woodhouse <dwmw@infradead.org>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        Brian Gerst <brgerst@gmail.com>,
        Arjan van de Veen <arjan@linux.intel.com>,
        Paolo Bonzini <pbonzini@redhat.com>,
        Paul McKenney <paulmck@kernel.org>,
        Tom Lendacky <thomas.lendacky@amd.com>,
        Sean Christopherson <seanjc@google.com>,
        Oleksandr Natalenko <oleksandr@natalenko.name>,
        Paul Menzel <pmenzel@molgen.mpg.de>,
        "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
        Piotr Gorski <lucjan.lucjanov@gmail.com>,
        Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        David Woodhouse <dwmw@amazon.co.uk>,
        Usama Arif <usama.arif@bytedance.com>,
        Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
        linux-arm-kernel@lists.infradead.org,
        Catalin Marinas <catalin.marinas@arm.com>,
        Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
        linux-csky@vger.kernel.org,
        Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
        linux-mips@vger.kernel.org,
        "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
        Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
        Paul Walmsley <paul.walmsley@sifive.com>,
        Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
        Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <20230414225551.858160935@linutronix.de>
 <20230414232310.194293270@linutronix.de>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [patch 16/37] x86/xen/smp_pv: Remove wait for CPU online
In-Reply-To: <20230414232310.194293270@linutronix.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
 engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22
 definitions=2023-04-17_14,2023-04-17_01,2023-02-09_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 suspectscore=0 bulkscore=0
 mlxscore=0 mlxlogscore=999 adultscore=0 phishscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2303200000
 definitions=main-2304170183
X-Proofpoint-GUID: QmFEh5ZhXzphhqQeiblSz4L24eGPHOo4
X-Proofpoint-ORIG-GUID: QmFEh5ZhXzphhqQeiblSz4L24eGPHOo4



On 4/14/23 7:44 PM, Thomas Gleixner wrote:
> Now that the core code drops sparse_irq_lock after the idle thread
> synchronized, it's pointless to wait for the AP to mark itself online.
> 
> Whether the control CPU runs in a wait loop or sleeps in the core code
> waiting for the online operation to complete makes no difference.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: xen-devel@lists.xenproject.org
> ---
>   arch/x86/xen/smp_pv.c |   10 +++++-----
>   1 file changed, 5 insertions(+), 5 deletions(-)
> 
> --- a/arch/x86/xen/smp_pv.c
> +++ b/arch/x86/xen/smp_pv.c
> @@ -340,11 +340,11 @@ static int xen_pv_cpu_up(unsigned int cp
>   
>   	xen_pmu_init(cpu);
>   
> -	rc = HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL);
> -	BUG_ON(rc);
> -
> -	while (cpu_report_state(cpu) != CPU_ONLINE)
> -		HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
> +	/*
> +	 * Why is this a BUG? If the hypercall fails then everything can be
> +	 * rolled back, no?
> +	 */


In many cases this indicates either some sort of hypervisor-internal error or broken logic in the guest, so it is, well, a bug. But I suppose it may also be some transient condition in the hypervisor (I don't see one now, but it could happen in the future), so perhaps we should indeed try not to die on the spot.
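
[Editorial note: purely for illustration, the "roll back instead of BUG" alternative discussed above could be sketched as below. This is a standalone toy model, not the real Xen code: fake_vcpu_up_hypercall(), fake_pmu_init() and fake_pmu_finish() are hypothetical stand-ins for HYPERVISOR_vcpu_op(VCPUOP_up, ...) and the xen_pmu_* setup/teardown paths.]

```c
#include <assert.h>

/* Hypothetical stand-in for HYPERVISOR_vcpu_op(VCPUOP_up, ...):
 * returns 0 on success, negative on a (possibly transient) failure. */
static int fake_vcpu_up_hypercall(int should_fail)
{
	return should_fail ? -1 : 0;
}

static int pmu_initialized;

/* Hypothetical stand-ins for the per-CPU setup/teardown done before
 * the hypercall (e.g. xen_pmu_init() and its undo path). */
static void fake_pmu_init(void)   { pmu_initialized = 1; }
static void fake_pmu_finish(void) { pmu_initialized = 0; }

/* Sketch of the error-handling shape: instead of BUG_ON() when the
 * hypercall fails, undo the earlier per-CPU setup and return an error
 * so the caller can retry or fail the CPU-up operation gracefully. */
static int cpu_up_sketch(int hypercall_fails)
{
	int rc;

	fake_pmu_init();

	rc = fake_vcpu_up_hypercall(hypercall_fails);
	if (rc) {
		fake_pmu_finish();	/* roll back instead of dying */
		return rc;
	}
	return 0;
}
```

Whether such a rollback is actually safe depends on what state the hypervisor has already committed at the point of failure, which is exactly the open question in the discussion above.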



-boris


> +	BUG_ON(HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL));
>   
>   	return 0;
>   }
> 


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522321.811580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr6-0005Zb-5c; Mon, 17 Apr 2023 20:53:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522321.811580; Mon, 17 Apr 2023 20:53:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr6-0005ZG-0w; Mon, 17 Apr 2023 20:53:44 +0000
Received: by outflank-mailman (input) for mailman id 522321;
 Mon, 17 Apr 2023 20:52:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqD-0005M2-1z
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:52:49 +0000
Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com
 [2607:f8b0:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cf19d0e6-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:52:48 +0200 (CEST)
Received: by mail-pl1-x635.google.com with SMTP id la15so4089324plb.11
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:52:48 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:52:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf19d0e6-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764767; x=1684356767;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=iUQ9/pimfaGXNi0CjQ5OYjWOczKCpE//S8xnkPc78yU=;
        b=FAf7Q7K913ZX6r/eLsy+yAUsXKMRxOHj1fLyHB9UIaactpRsuyZTxwm6yr9/0kHhuJ
         4Vp9zU5n5/h6vSIOmrhOIkQhtsXJjouneV7ATDr91jNlm1HLz9ERQ9iZEYmxf7IWZXzO
         7Bjl99gPzEex4CTscD2JLBVlVfqL8/78eXbRp1C2OSneg1gvPPXFUCS8B+INtmwbneb1
         7xu84pfQjybA5JWgsxDb5XtXBfw3JMpB/K4/MrDjwMi52JjctUf0w80qqRetA0UVkDY9
         /WMnuBxaObETeplAy+OYgYm1cMdR/MAIyIhp1m3TGneur/zdkE7Srlujs8An47+1uBTa
         DS5A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764767; x=1684356767;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=iUQ9/pimfaGXNi0CjQ5OYjWOczKCpE//S8xnkPc78yU=;
        b=PQxXSpe01IwKReJ7Z+SRLwFg1rEO84lynLMot3ErKS4NRBDJ7sar4ZgRB9kZ6ErLMD
         oIKuvzeqab7ggxfpSrmvXYjDO0fSrUJjurn5samwpHaqT4U1+dbyHWFCv6W4tSsb5enP
         cpobmZ1zQBGamELBsSUpShMqZ2gnH9vwsd8fUMRzQI0fWpU2r8PJc1fjS8UgwBcckKmn
         cdYX2FbQwMm6B9XjdOkVUKvgHIQy0+A45/DaWYLTEj3+YBZ1hq419RujUDcmgqWDnc9i
         WsbsbPicakNgfVFr9ZrMeM7IlkIKE9agXGAc6euKLoBY8SsaM3W7kufqi1FNDcoNhZXK
         eAjA==
X-Gm-Message-State: AAQBX9fCEyAgIQVhhNK4wO28EXFibntmKnu1FK4Ge8Zhne6pgcRxfqB+
	Cv2aqRQepv/rlYJLh9R4uU3F1HI1uKl+Zg==
X-Google-Smtp-Source: AKy350bJ+z7DqyQlo/Tq9sm7RhBTmVNaq1jAeXkgM2fHwXUjj/5Rwh13YNUG3IxyilUJQDR+HGgjBw==
X-Received: by 2002:a17:90a:e642:b0:247:83fe:12b6 with SMTP id ep2-20020a17090ae64200b0024783fe12b6mr5579845pjb.43.1681764766744;
        Mon, 17 Apr 2023 13:52:46 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 01/33] s390: Use _pt_s390_gaddr for gmap address tracking
Date: Mon, 17 Apr 2023 13:50:16 -0700
Message-Id: <20230417205048.15870-2-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

s390 uses page->index to keep track of page tables for the guest address
space. In an attempt to consolidate the usage of page fields in s390,
rename _pt_pad_2 to _pt_s390_gaddr and use it in place of page->index in gmap.

This will help with the splitting of struct ptdesc from struct page, as
well as allow s390 to use _pt_frag_refcount for fragmented page table
tracking.

Since page->_pt_s390_gaddr aliases with page->mapping, ensure it is set to
NULL before freeing the pages as well.
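
[Editorial note: the aliasing constraint above can be illustrated with a small standalone model. struct toy_page below is a hypothetical, simplified stand-in for the real layout in include/linux/mm_types.h, and free_page_check() models the allocator sanity check that rejects freeing a page whose mapping field is non-NULL; it is not the actual kernel API.]

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the struct page field overlay: the page-table view's
 * _pt_s390_gaddr occupies the same storage as the file/anon view's
 * mapping pointer (both are the third word of the union). */
struct toy_page {
	unsigned long flags;
	union {
		struct {	/* page-table page view */
			unsigned long _pt_pad_1;	/* compound_head */
			void *pmd_huge_pte;
			unsigned long _pt_s390_gaddr;	/* overlays mapping */
		};
		struct {	/* file/anon page view */
			unsigned long lru_prev;
			unsigned long lru_next;
			void *mapping;
		};
	};
};

/* Model of the free-path sanity check: a page handed back with a
 * non-NULL mapping is flagged as a bug, which is why the gmap code
 * must zero _pt_s390_gaddr before __free_pages() or
 * page_table_free_pgste(). */
static int free_page_check(const struct toy_page *page)
{
	return page->mapping == NULL ? 0 : -1;
}
```

A stale guest address left in _pt_s390_gaddr would be seen through the mapping alias at free time, so every free site in the patch clears the field first.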

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/mm/gmap.c      | 50 +++++++++++++++++++++++++++-------------
 include/linux/mm_types.h |  2 +-
 2 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 5a716bdcba05..a61ea1a491dc 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -70,7 +70,7 @@ static struct gmap *gmap_alloc(unsigned long limit)
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		goto out_free;
-	page->index = 0;
+	page->_pt_s390_gaddr = 0;
 	list_add(&page->lru, &gmap->crst_list);
 	table = page_to_virt(page);
 	crst_table_init(table, etype);
@@ -187,16 +187,20 @@ static void gmap_free(struct gmap *gmap)
 	if (!(gmap_is_shadow(gmap) && gmap->removed))
 		gmap_flush_tlb(gmap);
 	/* Free all segment & region tables. */
-	list_for_each_entry_safe(page, next, &gmap->crst_list, lru)
+	list_for_each_entry_safe(page, next, &gmap->crst_list, lru) {
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
+	}
 	gmap_radix_tree_free(&gmap->guest_to_host);
 	gmap_radix_tree_free(&gmap->host_to_guest);
 
 	/* Free additional data for a shadow gmap */
 	if (gmap_is_shadow(gmap)) {
 		/* Free all page tables. */
-		list_for_each_entry_safe(page, next, &gmap->pt_list, lru)
+		list_for_each_entry_safe(page, next, &gmap->pt_list, lru) {
+			page->_pt_s390_gaddr = 0;
 			page_table_free_pgste(page);
+		}
 		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
 		/* Release reference to the parent */
 		gmap_put(gmap->parent);
@@ -318,12 +322,14 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 		list_add(&page->lru, &gmap->crst_list);
 		*table = __pa(new) | _REGION_ENTRY_LENGTH |
 			(*table & _REGION_ENTRY_TYPE_MASK);
-		page->index = gaddr;
+		page->_pt_s390_gaddr = gaddr;
 		page = NULL;
 	}
 	spin_unlock(&gmap->guest_table_lock);
-	if (page)
+	if (page) {
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
+	}
 	return 0;
 }
 
@@ -341,7 +347,7 @@ static unsigned long __gmap_segment_gaddr(unsigned long *entry)
 	offset = (unsigned long) entry / sizeof(unsigned long);
 	offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE;
 	page = pmd_pgtable_page((pmd_t *) entry);
-	return page->index + offset;
+	return page->_pt_s390_gaddr + offset;
 }
 
 /**
@@ -1351,6 +1357,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	/* Free page table */
 	page = phys_to_page(pgt);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	page_table_free_pgste(page);
 }
 
@@ -1379,6 +1386,7 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 		/* Free page table */
 		page = phys_to_page(pgt);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		page_table_free_pgste(page);
 	}
 }
@@ -1409,6 +1417,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	/* Free segment table */
 	page = phys_to_page(sgt);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1437,6 +1446,7 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 		/* Free segment table */
 		page = phys_to_page(sgt);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1467,6 +1477,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	/* Free region 3 table */
 	page = phys_to_page(r3t);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1495,6 +1506,7 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 		/* Free region 3 table */
 		page = phys_to_page(r3t);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1525,6 +1537,7 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	/* Free region 2 table */
 	page = phys_to_page(r2t);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1557,6 +1570,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 		/* Free region 2 table */
 		page = phys_to_page(r2t);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1762,9 +1776,9 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = r2t & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_r2t = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1814,6 +1828,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -1846,9 +1861,9 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = r3t & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_r3t = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1898,6 +1913,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -1930,9 +1946,9 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = sgt & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_sgt = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1982,6 +1998,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -2014,9 +2031,9 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
 		/* Shadow page tables are full pages (pte+pgste) */
 		page = pfn_to_page(*table >> PAGE_SHIFT);
-		*pgt = page->index & ~GMAP_SHADOW_FAKE_TABLE;
+		*pgt = page->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE;
 		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
-		*fake = !!(page->index & GMAP_SHADOW_FAKE_TABLE);
+		*fake = !!(page->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE);
 		rc = 0;
 	} else  {
 		rc = -EAGAIN;
@@ -2054,9 +2071,9 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	page = page_table_alloc_pgste(sg->mm);
 	if (!page)
 		return -ENOMEM;
-	page->index = pgt & _SEGMENT_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_pgt = page_to_phys(page);
 	/* Install shadow page table */
 	spin_lock(&sg->guest_table_lock);
@@ -2101,6 +2118,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	page_table_free_pgste(page);
 	return rc;
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3fc9e680f174..2616d64c0e8c 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -144,7 +144,7 @@ struct page {
 		struct {	/* Page table pages */
 			unsigned long _pt_pad_1;	/* compound_head */
 			pgtable_t pmd_huge_pte; /* protected by page->ptl */
-			unsigned long _pt_pad_2;	/* mapping */
+			unsigned long _pt_s390_gaddr;	/* mapping */
 			union {
 				struct mm_struct *pt_mm; /* x86 pgds only */
 				atomic_t pt_frag_refcount; /* powerpc */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522320.811574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr5-0005Xd-Sm; Mon, 17 Apr 2023 20:53:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522320.811574; Mon, 17 Apr 2023 20:53:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr5-0005XW-Po; Mon, 17 Apr 2023 20:53:43 +0000
Received: by outflank-mailman (input) for mailman id 522320;
 Mon, 17 Apr 2023 20:52:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqB-0005M2-US
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:52:48 +0000
Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com
 [2607:f8b0:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ce3f57a9-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:52:47 +0200 (CEST)
Received: by mail-pl1-x62c.google.com with SMTP id d15so10117808pll.12
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:52:46 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:52:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce3f57a9-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764765; x=1684356765;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=hd0eSPgOtTAWwT17PjaULR/3wkbhGTdpKJdeXsj3/Vw=;
        b=pldku0ysKEp0NOEJOd7l0dIaF0yyGvAqpPlnmph3EM3Z2d65YTU4pF9c8hRBI8SIkB
         bXZgcndrjQg15zyEZxLW+H7cAmzYTJGKbe6hTNRu88cjrehCSqACdUD1N8GZKEqBE4Xp
         FwmFpHRnoIb29o3voauSnQ6p7drjWK4VEEwhJEits2ZcgZwb7+ryqE7mhLT9s2Dr91lB
         24kMEEC0owKF37MtG1/RM5nwSQTWlS0iZbYOKzR4iyZZCfaJAgJ4qZTiCg2KEwK1Bku2
         sGcyojF3egYr4Kr1rOZRroFrhK9ZnYopo4hTv+rSWBcsF96wp6BJQzFP7J0YdZ6544dh
         BRmg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764765; x=1684356765;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=hd0eSPgOtTAWwT17PjaULR/3wkbhGTdpKJdeXsj3/Vw=;
        b=Odt6gaH8KiAVRUICPaxDLZZgiRPjJQ6GF2iXCYxmDCBsM8XrXdCwyT48hjejjdnA57
         oDFfMM7eV89lM33dsmDLFk7xyoWu9YBHKAnQ7Xj113v+uWKncbbRGMLAUnaZqBupjO61
         EncoWY5O9/zf87XyETB7ofDnkC0K+ox2e3X5VCGRBWX5Iwbu84UBGNPW8SAtwgMMR5Dd
         eczwS/Mv0neEmEcBq8wVlVN390kJP46wTmw3AxTYq2r4BKc750KJA753Mu+suNtfBOk+
         WOef4EBgXNixc7ZgyjmImSEeGXiBaI1XacZs2i+zomK/ak7oRj/TIN+8x3h0B4X6bofZ
         Mcww==
X-Gm-Message-State: AAQBX9dRKVPWIUV7pWAcgxDX63jokAAKqAB/SU1r567DaCUnpgPBYnKc
	iFpZvfWIPBUtS7fnkalVEqg=
X-Google-Smtp-Source: AKy350bJ4sdY66hQiY/gbzNJFaJpsyj9BEPw5KYAOTG4tZNrVmb2RliejTZYzbSXKDavSzAED/FXaQ==
X-Received: by 2002:a17:90b:684:b0:246:f8a8:af02 with SMTP id m4-20020a17090b068400b00246f8a8af02mr15620340pjz.5.1681764765269;
        Mon, 17 Apr 2023 13:52:45 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 00/33] Split ptdesc from struct page
Date: Mon, 17 Apr 2023 13:50:15 -0700
Message-Id: <20230417205048.15870-1-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The MM subsystem is trying to shrink struct page. This patchset
introduces a memory descriptor for page table tracking - struct ptdesc.

It splits ptdesc out from struct page and converts many callers of the
page table constructors/destructors to use ptdescs.

Ptdesc is a foundation to further standardize page tables, and eventually
allow for dynamic allocation of page tables independent of struct page.
However, the use of pages for page table tracking is quite deeply
ingrained and varied across architectures, so there is still a lot of
work to be done before that can happen.

This series is rebased on next-20230417.
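
[Editorial note: as a rough, hypothetical sketch of the direction, the idea is that page-table metadata gets its own named descriptor type instead of reusing unrelated struct page fields. The field names below are illustrative only; the series' actual struct ptdesc is introduced in patch 03/33.]

```c
#include <assert.h>

/* Stand-in for struct page's storage (eight words on 64-bit). */
struct page_model {
	unsigned long words[8];
};

/* Illustrative descriptor: named fields for page-table bookkeeping
 * rather than overloaded struct page members such as index/mapping. */
struct toy_ptdesc {
	unsigned long __page_flags;	/* mirrors page->flags */
	unsigned long pt_rcu_head[2];	/* illustrative */
	unsigned long pmd_huge_pte;
	unsigned long __page_mapping;	/* mirrors page->mapping */
	unsigned long pt_frag_refcount;
	unsigned long ptl;		/* split page-table lock word */
};

/* While ptdescs still overlay struct page (i.e. before dynamic
 * allocation becomes possible), the descriptor must not outgrow it. */
_Static_assert(sizeof(struct toy_ptdesc) <= sizeof(struct page_model),
	       "ptdesc must fit inside struct page");
```

The size constraint is the key transitional invariant: only once page tables are allocated independently of struct page can the descriptor grow freely.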

Vishal Moola (Oracle) (33):
  s390: Use _pt_s390_gaddr for gmap address tracking
  s390: Use pt_frag_refcount for pagetables
  pgtable: Create struct ptdesc
  mm: add utility functions for ptdesc
  mm: Convert pmd_pgtable_page() to pmd_ptdesc()
  mm: Convert ptlock_alloc() to use ptdescs
  mm: Convert ptlock_ptr() to use ptdescs
  mm: Convert pmd_ptlock_init() to use ptdescs
  mm: Convert ptlock_init() to use ptdescs
  mm: Convert pmd_ptlock_free() to use ptdescs
  mm: Convert ptlock_free() to use ptdescs
  mm: Create ptdesc equivalents for pgtable_{pte,pmd}_page_{ctor,dtor}
  powerpc: Convert various functions to use ptdescs
  x86: Convert various functions to use ptdescs
  s390: Convert various gmap functions to use ptdescs
  s390: Convert various pgalloc functions to use ptdescs
  mm: Remove page table members from struct page
  pgalloc: Convert various functions to use ptdescs
  arm: Convert various functions to use ptdescs
  arm64: Convert various functions to use ptdescs
  csky: Convert __pte_free_tlb() to use ptdescs
  hexagon: Convert __pte_free_tlb() to use ptdescs
  loongarch: Convert various functions to use ptdescs
  m68k: Convert various functions to use ptdescs
  mips: Convert various functions to use ptdescs
  nios2: Convert __pte_free_tlb() to use ptdescs
  openrisc: Convert __pte_free_tlb() to use ptdescs
  riscv: Convert alloc_{pmd, pte}_late() to use ptdescs
  sh: Convert pte_free_tlb() to use ptdescs
  sparc64: Convert various functions to use ptdescs
  sparc: Convert pgtable_pte_page_{ctor, dtor}() to ptdesc equivalents
  um: Convert {pmd, pte}_free_tlb() to use ptdescs
  mm: Remove pgtable_{pmd, pte}_page_{ctor, dtor}() wrappers

 Documentation/mm/split_page_table_lock.rst    |  12 +-
 .../zh_CN/mm/split_page_table_lock.rst        |  14 +-
 arch/arm/include/asm/tlb.h                    |  12 +-
 arch/arm/mm/mmu.c                             |   6 +-
 arch/arm64/include/asm/tlb.h                  |  14 +-
 arch/arm64/mm/mmu.c                           |   7 +-
 arch/csky/include/asm/pgalloc.h               |   4 +-
 arch/hexagon/include/asm/pgalloc.h            |   8 +-
 arch/loongarch/include/asm/pgalloc.h          |  27 ++-
 arch/loongarch/mm/pgtable.c                   |   7 +-
 arch/m68k/include/asm/mcf_pgalloc.h           |  41 ++--
 arch/m68k/include/asm/sun3_pgalloc.h          |   8 +-
 arch/m68k/mm/motorola.c                       |   4 +-
 arch/mips/include/asm/pgalloc.h               |  31 +--
 arch/mips/mm/pgtable.c                        |   7 +-
 arch/nios2/include/asm/pgalloc.h              |   8 +-
 arch/openrisc/include/asm/pgalloc.h           |   8 +-
 arch/powerpc/mm/book3s64/mmu_context.c        |  10 +-
 arch/powerpc/mm/book3s64/pgtable.c            |  32 +--
 arch/powerpc/mm/pgtable-frag.c                |  46 ++--
 arch/riscv/include/asm/pgalloc.h              |   8 +-
 arch/riscv/mm/init.c                          |  16 +-
 arch/s390/include/asm/pgalloc.h               |   4 +-
 arch/s390/include/asm/tlb.h                   |   4 +-
 arch/s390/mm/gmap.c                           | 218 +++++++++++-------
 arch/s390/mm/pgalloc.c                        | 126 +++++-----
 arch/sh/include/asm/pgalloc.h                 |   8 +-
 arch/sparc/mm/init_64.c                       |  17 +-
 arch/sparc/mm/srmmu.c                         |   5 +-
 arch/um/include/asm/pgalloc.h                 |  18 +-
 arch/x86/mm/pgtable.c                         |  46 ++--
 arch/x86/xen/mmu_pv.c                         |   2 +-
 include/asm-generic/pgalloc.h                 |  62 +++--
 include/asm-generic/tlb.h                     |  11 +
 include/linux/mm.h                            | 138 +++++++----
 include/linux/mm_types.h                      |  14 --
 include/linux/pgtable.h                       |  60 +++++
 mm/memory.c                                   |   8 +-
 38 files changed, 625 insertions(+), 446 deletions(-)

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522328.811615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr8-0006R9-HX; Mon, 17 Apr 2023 20:53:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522328.811615; Mon, 17 Apr 2023 20:53:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr8-0006OB-AS; Mon, 17 Apr 2023 20:53:46 +0000
Received: by outflank-mailman (input) for mailman id 522328;
 Mon, 17 Apr 2023 20:52:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqH-0005M2-Qj
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:52:53 +0000
Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com
 [2607:f8b0:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d249996c-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:52:52 +0200 (CEST)
Received: by mail-pl1-x635.google.com with SMTP id la15so4089694plb.11
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:52:52 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:52:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d249996c-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764772; x=1684356772;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=BPCTGsUKFtNZ0E2E3gxAeEJL+3qovRd//xVc9xUETqc=;
        b=IRDdSAScs3TBGztRzu6QZgH4fMR7/UKueaVS3DDhFsZMLZlJQUDaWkKiPXvov39z8Z
         bssT73UYUy1LP7uOeEalBEo3irDXhaxWlDKHCjJjWa82XbzBt1YrruOwks0GGaa/KE2/
         kcH9ajFlIINgfw2e+eWdEuaQpWFwhb8tF79JjCU2EAC5A/73X/TUfQV/8mop72+F/v41
         nZHSSM2ss9/OewEDYIYb1Iz4F1hF7m8KXVNPHD0iWKxsJnaQ4CmjrVdNW0xNgbwEICJR
         VACO7VFEKZxOIvTtIDcu6wtwdqmMEpSRB4owcroJF6TbGx9VSAN4uiN81drhHklILqhb
         p9aA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764772; x=1684356772;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=BPCTGsUKFtNZ0E2E3gxAeEJL+3qovRd//xVc9xUETqc=;
        b=aZCkZi1xqlSGv9u8pHmgkyKKn8hY5YsJsviRDnEnIG2z9KnQ8Anbo5gE7mLF46VtOD
         eNT+PzmRRSvaYAJ0b4/mjoVs+xOSE4UDr25EaZqcLeX1Nun8l8cfRtn8sa/LuNNOm169
         nuHsSb6Qj6CAfndaW8JgUJtCGRIKyrrIZbyntJlDuLdEtijsDVEjhF1euElFwOC6zlL8
         ZXtfS+C8cR1953bizwjccqsy7ISKzMyagtlHRKJSANvsCXoXHwnmoKqk4PINpLvWszLA
         Mc86i1LrJr7LbqI7M8oY8RhZ5tXJTWRQjQdIoFjauYog+lUJ/aqO16M5QxBzZY/YiMkD
         u+QA==
X-Gm-Message-State: AAQBX9dLW3iSEaWRzHCEk0Kw2pPEa/1bt33G78OicuFoy3IhzO8rsE/n
	ZzKiinvO2tAaBa98+HCi3PU=
X-Google-Smtp-Source: AKy350bms9aSjWgOwaAnzsnpGuFvcEqsOo1myEWAVSaCBXxzFAqV0lCr0DmPVe9wT5EHiIOY8l5jIQ==
X-Received: by 2002:a17:90b:f84:b0:246:fc58:d77b with SMTP id ft4-20020a17090b0f8400b00246fc58d77bmr15834371pjb.44.1681764772342;
        Mon, 17 Apr 2023 13:52:52 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 05/33] mm: Convert pmd_pgtable_page() to pmd_ptdesc()
Date: Mon, 17 Apr 2023 13:50:20 -0700
Message-Id: <20230417205048.15870-6-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Convert pmd_pgtable_page() to pmd_ptdesc() and update all of its
callers. This removes some direct accesses to struct page, working
towards splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ec3cbe2fa665..069187e84e35 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2892,15 +2892,15 @@ static inline void pgtable_pte_page_dtor(struct page *page)
 
 #if USE_SPLIT_PMD_PTLOCKS
 
-static inline struct page *pmd_pgtable_page(pmd_t *pmd)
+static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
 {
 	unsigned long mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);
-	return virt_to_page((void *)((unsigned long) pmd & mask));
+	return virt_to_ptdesc((void *)((unsigned long) pmd & mask));
 }
 
 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(pmd_pgtable_page(pmd));
+	return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd)));
 }
 
 static inline bool pmd_ptlock_init(struct page *page)
@@ -2919,7 +2919,7 @@ static inline void pmd_ptlock_free(struct page *page)
 	ptlock_free(page);
 }
 
-#define pmd_huge_pte(mm, pmd) (pmd_pgtable_page(pmd)->pmd_huge_pte)
+#define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)
 
 #else
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522326.811605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr7-0006Eb-Ol; Mon, 17 Apr 2023 20:53:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522326.811605; Mon, 17 Apr 2023 20:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr7-0006De-Kh; Mon, 17 Apr 2023 20:53:45 +0000
Received: by outflank-mailman (input) for mailman id 522326;
 Mon, 17 Apr 2023 20:52:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqG-0005NG-No
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:52:52 +0000
Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com
 [2607:f8b0:4864:20::1034])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d095fb21-dd61-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 22:52:50 +0200 (CEST)
Received: by mail-pj1-x1034.google.com with SMTP id
 v21-20020a17090a459500b0024776162815so6438924pjg.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:52:50 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:52:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d095fb21-dd61-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764769; x=1684356769;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ds3IS9TWlY4iXOF1Vygr3Q/N2fJxO0/Dzlxnv3cq6/0=;
        b=hr7pEt7A7XcPfK4WHrNDJyXTyW/f522/QxHB9K2NM6BmG8KnIlcSk7Prak06XFsubX
         Sjsu1MPagS5Ajy0wM6nwYZFly4hHrr9DYIrVTEdELHRdfsGo64oEk7qD+41S/NMObi8y
         l8A2tz8IWZ1X+Du4Mg5V1agW+uGCUPtQdgx1HioQtiEf01hZXw902OFYpkixEmc4936W
         rbvY3hq0N/ZSHnXC+KGydT6+ZrTUjCaNrxKIoD5OlWamJ5QPi1l2Tt7uz8xTA1Fp0QB2
         97/hz5zZCcVfwzgfNZP+zBkRfPzWu3gB3ojY6/LKA9XIpl4tn1OEejbLEhRl9E9QH4p3
         YC/g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764769; x=1684356769;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ds3IS9TWlY4iXOF1Vygr3Q/N2fJxO0/Dzlxnv3cq6/0=;
        b=h3/jIAoamGS+20z7ctBPZ1PRjHNdQoAFEtcWcYwz7kIKHNnSSCa1HObFopGVsAX0d5
         VRy3iSeDLm8u+lw5x8xiuBlBiryqXV+1zVZo2UQUgJhy4OLglRO3+0Pm/7MncNWUZ6ZD
         6bHOdLkwNIeaVD2XELFl7jwYBDjRzbBJbHXHdokf01QuDrcZuTcSx7hhQpw06Bi4Vr4J
         WaYBCKB2R9tEIiPSDEVu+Vt4OGNf8cFLXprj8J+FvuE66KfecJb9lcI6SM4u78wLoSTV
         pgd3AIffFzVe8GgzlS+NS1eV3he7rO7Y/sLu1/GgJ+HFrqQk3y6EKtD/CfXsBPbSRnRc
         qNfA==
X-Gm-Message-State: AAQBX9dds3uFFEBhOyv9rK20GoyaGFQA/DCPQe3LVeotI1BYFhylD/Cy
	26+n2MMaId07II/DP56or7Y=
X-Google-Smtp-Source: AKy350bKOtS2RU5+UPauyBEeUGcMLqfRctd5dndnPm3bZbQM5N2yHnkL/dVsnC5V83h9pYCSEFQOHw==
X-Received: by 2002:a05:6a20:6595:b0:eb:ad6a:ccf4 with SMTP id p21-20020a056a20659500b000ebad6accf4mr15296929pzh.18.1681764769281;
        Mon, 17 Apr 2023 13:52:49 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 03/33] pgtable: Create struct ptdesc
Date: Mon, 17 Apr 2023 13:50:18 -0700
Message-Id: <20230417205048.15870-4-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, page table information is stored within struct page. As part
of simplifying struct page, create struct ptdesc for page table
information.
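The overlay technique can be demonstrated in miniature. A hedged
userspace sketch (struct and field names here are invented for
illustration): a narrower descriptor aliases a wider generic struct,
and compile-time asserts pin every shared field to the same offset,
which is what the patch's TABLE_MATCH() checks enforce for struct
ptdesc against struct page.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative miniature of the descriptor-overlay pattern: "view"
 * reuses the storage of "generic", so converting between them is a
 * cast, and static asserts guarantee the fields line up. */
struct generic {
	unsigned long flags;
	void *compound_head;
	void *mapping;
};

struct view {
	unsigned long __flags;
	void *list_or_pad;
	void *priv;
};

#define FIELD_MATCH(g, v)						\
	static_assert(offsetof(struct generic, g) ==			\
		      offsetof(struct view, v), "offset mismatch")
FIELD_MATCH(flags, __flags);
FIELD_MATCH(compound_head, list_or_pad);
FIELD_MATCH(mapping, priv);
#undef FIELD_MATCH
static_assert(sizeof(struct view) <= sizeof(struct generic),
	      "view must not outgrow generic");
```

If a field is ever moved in either struct, the build fails instead of
silently corrupting the aliased storage.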

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/pgtable.h | 50 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 023918666dd4..7cc6ea057ee9 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -47,6 +47,56 @@
 #define pmd_pgtable(pmd) pmd_page(pmd)
 #endif
 
+/**
+ * struct ptdesc - Memory descriptor for page tables.
+ * @__page_flags: Same as page flags. Unused for page tables.
+ * @pt_list: List of used page tables. Used for s390 and x86.
+ * @_pt_pad_1: Padding that aliases with page's compound head.
+ * @pmd_huge_pte: Protected by ptdesc->ptl, used for THPs.
+ * @_pt_s390_gaddr: Aliases with page's mapping. Used for s390 gmap only.
+ * @pt_mm: Used for x86 pgds.
+ * @pt_frag_refcount: For fragmented page table tracking. Powerpc and s390 only.
+ * @ptl: Lock for the page table.
+ *
+ * This struct overlays struct page for now. Do not modify without a good
+ * understanding of the issues.
+ */
+struct ptdesc {
+	unsigned long __page_flags;
+
+	union {
+		struct list_head pt_list;
+		struct {
+			unsigned long _pt_pad_1;
+			pgtable_t pmd_huge_pte;
+		};
+	};
+	unsigned long _pt_s390_gaddr;
+
+	union {
+		struct mm_struct *pt_mm;
+		atomic_t pt_frag_refcount;
+	};
+
+#if ALLOC_SPLIT_PTLOCKS
+	spinlock_t *ptl;
+#else
+	spinlock_t ptl;
+#endif
+};
+
+#define TABLE_MATCH(pg, pt)						\
+	static_assert(offsetof(struct page, pg) == offsetof(struct ptdesc, pt))
+TABLE_MATCH(flags, __page_flags);
+TABLE_MATCH(compound_head, pt_list);
+TABLE_MATCH(compound_head, _pt_pad_1);
+TABLE_MATCH(mapping, _pt_s390_gaddr);
+TABLE_MATCH(pmd_huge_pte, pmd_huge_pte);
+TABLE_MATCH(pt_mm, pt_mm);
+TABLE_MATCH(ptl, ptl);
+#undef TABLE_MATCH
+static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
+
 /*
  * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
  *
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522327.811609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr8-0006Kg-4x; Mon, 17 Apr 2023 20:53:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522327.811609; Mon, 17 Apr 2023 20:53:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr7-0006J9-UP; Mon, 17 Apr 2023 20:53:45 +0000
Received: by outflank-mailman (input) for mailman id 522327;
 Mon, 17 Apr 2023 20:52:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqG-0005M2-QS
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:52:52 +0000
Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com
 [2607:f8b0:4864:20::1034])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d179002a-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:52:52 +0200 (CEST)
Received: by mail-pj1-x1034.google.com with SMTP id
 fw22-20020a17090b129600b00247255b2f40so13876461pjb.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:52:52 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:52:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d179002a-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764771; x=1684356771;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=73w88W+Bv7iZwJaXp6X3OPD7H4X7IV5/5eDbDnIxbNE=;
        b=W9nPVyoXWKBUEnGfTyhkHARBTsZqdpJrkrNTKo667lyZEzjN6PxmFbbY0YylZMq9yl
         YfrdFmghyeO5HwXhYCrYBTje7eKqSW91jSFSDlT0dbMWvyCEN9ttpAXe7AFMT1oxAH0K
         BCnetbVzBpycS1Tax/m+o/Gfxhef/Kkj1D6sF8rcDiRlOa7VvGL8i+xK5Gl2/zoIf4yK
         M/9RMi7Hd+G3wH7cuJrpgQnusczVewZS29/AobewpqmBg09r8g/9YIIka/gTIbj6DHBc
         MFg7CjS5kCFyT6XG4/q3MJQ78bnC2QIXJhA82Tu9j9zbgdzbLpx3IE+L7G092ISLAdCE
         r0mA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764771; x=1684356771;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=73w88W+Bv7iZwJaXp6X3OPD7H4X7IV5/5eDbDnIxbNE=;
        b=G1GTh+wJtbBaU5O4BTvYN18k3ZvYuPiAvCHxUYyeMAwJlOrxQW4yFB8ykxKfbUoD/C
         TJrC+QjK9EE85dYe/tVnpggedVws+67lIZZGPZ0yOfy1Zk6Llt8+dXuP/nUnFQnTK6z2
         nC82F6TPn6w3lJP+vG+QOD7NYpwFBP7XSHLvXKKQXaDfWfYrWGSIBjTpCcaOtsW2pmHC
         Ru9r0ylY+klbgHOUgZu5lCcIpD3DP7ItYSUJmH8dniCYfHIKbzm+mHA3IdBnnxLaU/1h
         pwQrxzbIhGFOw+m+6v+Ttnx42pSrn7oRVcWu6duFNIrI+YuNTZlrI3EdNNeiYOKxRQ61
         T9Ag==
X-Gm-Message-State: AAQBX9dK2dO0qkYaruJwIQb5Pz87q27dVOCC5Nw0Tk4lq2rbnvVQDZYe
	Vw/lgV302Fx9iJoTFaoSfz0=
X-Google-Smtp-Source: AKy350ZgO/j6WOIysdkqHxj8eua8dcgo+KDK2FD0csNlrjJfCGtmgVmsUk4Xos3l7Ag2y16revzrHg==
X-Received: by 2002:a17:902:8bc6:b0:1a2:8924:224a with SMTP id r6-20020a1709028bc600b001a28924224amr205356plo.25.1681764770838;
        Mon, 17 Apr 2023 13:52:50 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 04/33] mm: add utility functions for ptdesc
Date: Mon, 17 Apr 2023 13:50:19 -0700
Message-Id: <20230417205048.15870-5-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce utility functions setting the foundation for ptdescs. These
will also assist in the splitting out of ptdesc from struct page.

ptdesc_alloc() is defined to allocate new ptdesc pages as compound
pages. This standardizes ptdescs by allowing for one allocation and one
free function, in contrast to the current two allocation and two free
functions.
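The intent of the single alloc/free pair can be sketched outside the
kernel. The helper names below are illustrative stand-ins, not the
kernel API; aligned_alloc() stands in for a compound-page allocation:

```c
#include <stdlib.h>

#define SKETCH_PAGE_SIZE 4096UL

/* Userspace stand-in for the "one allocator, one free" idea:
 * 2^order pages come back as a single page-aligned unit, so every
 * caller goes through the same two entry points instead of mixing
 * different allocation and free primitives. */
static void *ptdesc_alloc_sketch(unsigned int order)
{
	return aligned_alloc(SKETCH_PAGE_SIZE, SKETCH_PAGE_SIZE << order);
}

static void ptdesc_free_sketch(void *pt)
{
	free(pt);
}
```

Whatever the order, the caller holds one handle and releases it through
one function, mirroring the ptdesc_alloc()/ptdesc_free() pairing above.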

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/asm-generic/tlb.h | 11 ++++++++++
 include/linux/mm.h        | 44 +++++++++++++++++++++++++++++++++++++++
 include/linux/pgtable.h   | 13 ++++++++++++
 3 files changed, 68 insertions(+)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b46617207c93..6bade9e0e799 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -481,6 +481,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
 	return tlb_remove_page_size(tlb, page, PAGE_SIZE);
 }
 
+static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt)
+{
+	tlb_remove_table(tlb, pt);
+}
+
+/* Like tlb_remove_ptdesc, but for page-like page directories. */
+static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt)
+{
+	tlb_remove_page(tlb, ptdesc_page(pt));
+}
+
 static inline void tlb_change_page_size(struct mmu_gather *tlb,
 						     unsigned int page_size)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b18848ae7e22..ec3cbe2fa665 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2744,6 +2744,45 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
 }
 #endif /* CONFIG_MMU */
 
+static inline struct ptdesc *virt_to_ptdesc(const void *x)
+{
+	return page_ptdesc(virt_to_head_page(x));
+}
+
+static inline void *ptdesc_to_virt(struct ptdesc *pt)
+{
+	return page_to_virt(ptdesc_page(pt));
+}
+
+static inline void *ptdesc_address(struct ptdesc *pt)
+{
+	return folio_address(ptdesc_folio(pt));
+}
+
+static inline bool ptdesc_is_reserved(struct ptdesc *pt)
+{
+	return folio_test_reserved(ptdesc_folio(pt));
+}
+
+static inline struct ptdesc *ptdesc_alloc(gfp_t gfp, unsigned int order)
+{
+	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
+
+	return page_ptdesc(page);
+}
+
+static inline void ptdesc_free(struct ptdesc *pt)
+{
+	struct page *page = ptdesc_page(pt);
+
+	__free_pages(page, compound_order(page));
+}
+
+static inline void ptdesc_clear(void *x)
+{
+	clear_page(x);
+}
+
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
@@ -2970,6 +3009,11 @@ static inline void mark_page_reserved(struct page *page)
 	adjust_managed_page_count(page, -1);
 }
 
+static inline void free_reserved_ptdesc(struct ptdesc *pt)
+{
+	free_reserved_page(ptdesc_page(pt));
+}
+
 /*
  * Default method to free all the __init memory into the buddy system.
  * The freed pages will be poisoned with pattern "poison" if it's within
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 7cc6ea057ee9..7cd803aa38eb 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -97,6 +97,19 @@ TABLE_MATCH(ptl, ptl);
 #undef TABLE_MATCH
 static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
 
+#define ptdesc_page(pt)			(_Generic((pt),			\
+	const struct ptdesc *:		(const struct page *)(pt),	\
+	struct ptdesc *:		(struct page *)(pt)))
+
+#define ptdesc_folio(pt)		(_Generic((pt),			\
+	const struct ptdesc *:		(const struct folio *)(pt),	\
+	struct ptdesc *:		(struct folio *)(pt)))
+
+static inline struct ptdesc *page_ptdesc(struct page *page)
+{
+	return (struct ptdesc *)page;
+}
+
 /*
  * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
  *
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522323.811586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr6-0005fT-EA; Mon, 17 Apr 2023 20:53:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522323.811586; Mon, 17 Apr 2023 20:53:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVr6-0005di-7R; Mon, 17 Apr 2023 20:53:44 +0000
Received: by outflank-mailman (input) for mailman id 522323;
 Mon, 17 Apr 2023 20:52:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqE-0005M2-C8
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:52:50 +0000
Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com
 [2607:f8b0:4864:20::1034])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cfc6bae7-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:52:49 +0200 (CEST)
Received: by mail-pj1-x1034.google.com with SMTP id kx14so7221735pjb.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:52:49 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:52:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfc6bae7-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764768; x=1684356768;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bmbsS8DkaHNij/v9bdafzBlktB7f+f7NAeK2Q02GvbI=;
        b=BOUYQbRtprz0j+QE5FspkyGNnQXo7tua/LH56ZXhTbzE3CAOV8yTgXemF8QTCY9h4g
         g6V6zIZ7VFlE3ubnJkZ/yYqYyanZgx2V65Z6ltF6wcw0210IAZlR8To7Sk4y8TLWY4Hz
         xNxTo6TOWAdqeGCeg1Pit+CsJNU/MS0gvSxN85F+x2sLQC0IMYUPA6am8ZQrYVGpPzga
         8IeLZRTxr8/8Yg1lDFCUFpg2PfjK4sGTGgffRI7OQ6kx1dS8UMGUuVs3EuNQkywAoLYX
         E+HKq98y8quQjgUskaGG1Ic02lqe4ZtSQ+Zea8JL+rlDVF597bKHVbSZetgCXEY/xB7J
         hYgA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764768; x=1684356768;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=bmbsS8DkaHNij/v9bdafzBlktB7f+f7NAeK2Q02GvbI=;
        b=fXNO1mPalsSswNeRsYqxdVAFxqH6BaE6hu6laBpUGehltJYa0Zxf/IYbiYYSV+6aJ0
         2xKx6ohIxY65AAspRh1Bak/QosUhQS5mJyS2buv474zwHw42Vkez7VobeEU+Y0U1m8oY
         3kAcAaP+4ztMGVkbjUc89HaL0nXjPtfh4gPC31DTbeib8OgZviVyo3iNqlDLjFYqDUpi
         k4jC/3kyE2LrtsIVSqiCfkr7rbLOWyB6qfhBQAKAplm0ZG/nSYe7TJxRkts/Q3uLfLDt
         bIqyhpDfwa6o1s1C7J4fphVsKE0z9Ko1+QFaYLVAfiPkLI8dhUbB6/JDzXKApX/Ywegj
         +Gig==
X-Gm-Message-State: AAQBX9fdpUvQRsof0iFRki6YcX9AHifDIe2pv3powvg51BbVr0ZhVR3J
	6YNBdVaKQqsovICn0yF8fBs=
X-Google-Smtp-Source: AKy350ZtXMgj2Ot8GUfxTmnnYjtYP4oOp+wIKL46L+o6DOMpet2qdkXsjbB50RH+WV9/LkMm+8nkDA==
X-Received: by 2002:a17:90b:3907:b0:247:8f24:eb31 with SMTP id ob7-20020a17090b390700b002478f24eb31mr4943020pjb.48.1681764767923;
        Mon, 17 Apr 2023 13:52:47 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 02/33] s390: Use pt_frag_refcount for pagetables
Date: Mon, 17 Apr 2023 13:50:17 -0700
Message-Id: <20230417205048.15870-3-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

s390 currently uses _refcount to identify fragmented page tables. The
page table struct already has a member pt_frag_refcount used by powerpc,
so have s390 use that instead of the _refcount field as well. This
improves safety for _refcount and for the page table tracking.

This also allows us to simplify the tracking since we can once again use
the lower byte of pt_frag_refcount instead of the upper byte of _refcount.
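The low-byte tracking scheme can be exercised with plain integers. A
standalone sketch, with ordinary XOR standing in for the kernel's
atomic_xor_bits() and macro names invented for clarity (the bit values
follow the 0x01/0x02 allocation and 0x10/0x20 pending-removal cases
used in the diff below):

```c
/* Low-byte fragment tracking:
 *   bit 0: lower 2KB-pgtable allocated          (AA)
 *   bit 1: upper 2KB-pgtable allocated          (AA)
 *   bit 4: lower 2KB-pgtable pending removal    (PP)
 *   bit 5: upper 2KB-pgtable pending removal    (PP)
 * Plain XOR stands in for atomic_xor_bits(). */
#define FRAG_LOWER_ALLOC   0x01U
#define FRAG_UPPER_ALLOC   0x02U
#define FRAG_LOWER_PENDING 0x10U
#define FRAG_UPPER_PENDING 0x20U

static unsigned int frag_xor_bits(unsigned int *state, unsigned int bits)
{
	*state ^= bits;
	return *state;
}
```

Toggling a fragment's allocated bit off together with its pending bit
on (a 0x11 << bit XOR) is the two-phase free: the fragment stays marked
pending until the TLB flush clears that bit in a second XOR, and a
pending+allocated combination such as 0x13 can never legally appear.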

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/mm/pgalloc.c | 38 +++++++++++++++-----------------------
 1 file changed, 15 insertions(+), 23 deletions(-)

diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 66ab68db9842..6b99932abc66 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -182,20 +182,17 @@ void page_table_free_pgste(struct page *page)
  * As follows from the above, no unallocated or fully allocated parent
  * pages are contained in mm_context_t::pgtable_list.
  *
- * The upper byte (bits 24-31) of the parent page _refcount is used
+ * The lower byte (bits 0-7) of the parent page pt_frag_refcount is used
  * for tracking contained 2KB-pgtables and has the following format:
  *
  *   PP  AA
- * 01234567    upper byte (bits 24-31) of struct page::_refcount
+ * 01234567    lower byte (bits 0-7) of struct page::pt_frag_refcount
  *   ||  ||
  *   ||  |+--- upper 2KB-pgtable is allocated
  *   ||  +---- lower 2KB-pgtable is allocated
  *   |+------- upper 2KB-pgtable is pending for removal
  *   +-------- lower 2KB-pgtable is pending for removal
  *
- * (See commit 620b4e903179 ("s390: use _refcount for pgtables") on why
- * using _refcount is possible).
- *
  * When 2KB-pgtable is allocated the corresponding AA bit is set to 1.
  * The parent page is either:
  *   - added to mm_context_t::pgtable_list in case the second half of the
@@ -243,11 +240,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 		if (!list_empty(&mm->context.pgtable_list)) {
 			page = list_first_entry(&mm->context.pgtable_list,
 						struct page, lru);
-			mask = atomic_read(&page->_refcount) >> 24;
+			mask = atomic_read(&page->pt_frag_refcount);
 			/*
 			 * The pending removal bits must also be checked.
 			 * Failure to do so might lead to an impossible
-			 * value of (i.e 0x13 or 0x23) written to _refcount.
+			 * value (i.e. 0x13 or 0x23) written to
+			 * pt_frag_refcount.
 			 * Such values violate the assumption that pending and
 			 * allocation bits are mutually exclusive, and the rest
 			 * of the code unrails as result. That could lead to
@@ -259,8 +257,8 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 				bit = mask & 1;		/* =1 -> second 2K */
 				if (bit)
 					table += PTRS_PER_PTE;
-				atomic_xor_bits(&page->_refcount,
-							0x01U << (bit + 24));
+				atomic_xor_bits(&page->pt_frag_refcount,
+							0x01U << bit);
 				list_del(&page->lru);
 			}
 		}
@@ -281,12 +279,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 	table = (unsigned long *) page_to_virt(page);
 	if (mm_alloc_pgste(mm)) {
 		/* Return 4K page table with PGSTEs */
-		atomic_xor_bits(&page->_refcount, 0x03U << 24);
+		atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
 		memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	} else {
 		/* Return the first 2K fragment of the page */
-		atomic_xor_bits(&page->_refcount, 0x01U << 24);
+		atomic_xor_bits(&page->pt_frag_refcount, 0x01U);
 		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
 		spin_lock_bh(&mm->context.lock);
 		list_add(&page->lru, &mm->context.pgtable_list);
@@ -323,22 +321,19 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 		 * will happen outside of the critical section from this
 		 * function or from __tlb_remove_table()
 		 */
-		mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
 		if (mask & 0x03U)
 			list_add(&page->lru, &mm->context.pgtable_list);
 		else
 			list_del(&page->lru);
 		spin_unlock_bh(&mm->context.lock);
-		mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x10U << bit);
 		if (mask != 0x00U)
 			return;
 		half = 0x01U << bit;
 	} else {
 		half = 0x03U;
-		mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
 	}
 
 	page_table_release_check(page, table, half, mask);
@@ -368,8 +363,7 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 	 * outside of the critical section from __tlb_remove_table() or from
 	 * page_table_free()
 	 */
-	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
-	mask >>= 24;
+	mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
 	if (mask & 0x03U)
 		list_add_tail(&page->lru, &mm->context.pgtable_list);
 	else
@@ -391,14 +385,12 @@ void __tlb_remove_table(void *_table)
 		return;
 	case 0x01U:	/* lower 2K of a 4K page table */
 	case 0x02U:	/* higher 2K of a 4K page table */
-		mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24));
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, mask << 4);
 		if (mask != 0x00U)
 			return;
 		break;
 	case 0x03U:	/* 4K page table with pgstes */
-		mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
 		break;
 	}
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:47 2023
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 06/33] mm: Convert ptlock_alloc() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:21 -0700
Message-Id: <20230417205048.15870-7-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 6 +++---
 mm/memory.c        | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 069187e84e35..17dc6e37ea03 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2786,7 +2786,7 @@ static inline void ptdesc_clear(void *x)
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
-extern bool ptlock_alloc(struct page *page);
+bool ptlock_alloc(struct ptdesc *ptdesc);
 extern void ptlock_free(struct page *page);
 
 static inline spinlock_t *ptlock_ptr(struct page *page)
@@ -2798,7 +2798,7 @@ static inline void ptlock_cache_init(void)
 {
 }
 
-static inline bool ptlock_alloc(struct page *page)
+static inline bool ptlock_alloc(struct ptdesc *ptdesc)
 {
 	return true;
 }
@@ -2828,7 +2828,7 @@ static inline bool ptlock_init(struct page *page)
 	 * slab code uses page->slab_cache, which share storage with page->ptl.
 	 */
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
-	if (!ptlock_alloc(page))
+	if (!ptlock_alloc(page_ptdesc(page)))
 		return false;
 	spin_lock_init(ptlock_ptr(page));
 	return true;
diff --git a/mm/memory.c b/mm/memory.c
index d4d7df041b6f..37d408ac1b8d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5926,14 +5926,14 @@ void __init ptlock_cache_init(void)
 			SLAB_PANIC, NULL);
 }
 
-bool ptlock_alloc(struct page *page)
+bool ptlock_alloc(struct ptdesc *ptdesc)
 {
 	spinlock_t *ptl;
 
 	ptl = kmem_cache_alloc(page_ptl_cachep, GFP_KERNEL);
 	if (!ptl)
 		return false;
-	page->ptl = ptl;
+	ptdesc->ptl = ptl;
 	return true;
 }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:48 2023
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 07/33] mm: Convert ptlock_ptr() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:22 -0700
Message-Id: <20230417205048.15870-8-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/xen/mmu_pv.c |  2 +-
 include/linux/mm.h    | 14 +++++++-------
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index fdc91deece7e..a1c9f8dcbb5a 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -651,7 +651,7 @@ static spinlock_t *xen_pte_lock(struct page *page, struct mm_struct *mm)
 	spinlock_t *ptl = NULL;
 
 #if USE_SPLIT_PTE_PTLOCKS
-	ptl = ptlock_ptr(page);
+	ptl = ptlock_ptr(page_ptdesc(page));
 	spin_lock_nest_lock(ptl, &mm->page_table_lock);
 #endif
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 17dc6e37ea03..ed8dd0464841 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2789,9 +2789,9 @@ void __init ptlock_cache_init(void);
 bool ptlock_alloc(struct ptdesc *ptdesc);
 extern void ptlock_free(struct page *page);
 
-static inline spinlock_t *ptlock_ptr(struct page *page)
+static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-	return page->ptl;
+	return ptdesc->ptl;
 }
 #else /* ALLOC_SPLIT_PTLOCKS */
 static inline void ptlock_cache_init(void)
@@ -2807,15 +2807,15 @@ static inline void ptlock_free(struct page *page)
 {
 }
 
-static inline spinlock_t *ptlock_ptr(struct page *page)
+static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-	return &page->ptl;
+	return &ptdesc->ptl;
 }
 #endif /* ALLOC_SPLIT_PTLOCKS */
 
 static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(pmd_page(*pmd));
+	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
 }
 
 static inline bool ptlock_init(struct page *page)
@@ -2830,7 +2830,7 @@ static inline bool ptlock_init(struct page *page)
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
 	if (!ptlock_alloc(page_ptdesc(page)))
 		return false;
-	spin_lock_init(ptlock_ptr(page));
+	spin_lock_init(ptlock_ptr(page_ptdesc(page)));
 	return true;
 }
 
@@ -2900,7 +2900,7 @@ static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
 
 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd)));
+	return ptlock_ptr(pmd_ptdesc(pmd));
 }
 
 static inline bool pmd_ptlock_init(struct page *page)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:48 2023
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 08/33] mm: Convert pmd_ptlock_init() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:23 -0700
Message-Id: <20230417205048.15870-9-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ed8dd0464841..7eb562909b2c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2903,12 +2903,12 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(pmd_ptdesc(pmd));
 }
 
-static inline bool pmd_ptlock_init(struct page *page)
+static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	page->pmd_huge_pte = NULL;
+	ptdesc->pmd_huge_pte = NULL;
 #endif
-	return ptlock_init(page);
+	return ptlock_init(ptdesc_page(ptdesc));
 }
 
 static inline void pmd_ptlock_free(struct page *page)
@@ -2928,7 +2928,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }
 
-static inline bool pmd_ptlock_init(struct page *page) { return true; }
+static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; }
 static inline void pmd_ptlock_free(struct page *page) {}
 
 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
@@ -2944,7 +2944,7 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
 
 static inline bool pgtable_pmd_page_ctor(struct page *page)
 {
-	if (!pmd_ptlock_init(page))
+	if (!pmd_ptlock_init(page_ptdesc(page)))
 		return false;
 	__SetPageTable(page);
 	inc_lruvec_page_state(page, NR_PAGETABLE);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:49 2023
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 09/33] mm: Convert ptlock_init() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:24 -0700
Message-Id: <20230417205048.15870-10-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7eb562909b2c..d2485a110936 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2818,7 +2818,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
 }
 
-static inline bool ptlock_init(struct page *page)
+static inline bool ptlock_init(struct ptdesc *ptdesc)
 {
 	/*
 	 * prep_new_page() initialize page->private (and therefore page->ptl)
@@ -2827,10 +2827,10 @@ static inline bool ptlock_init(struct page *page)
 	 * It can happen if arch try to use slab for page table allocation:
 	 * slab code uses page->slab_cache, which share storage with page->ptl.
 	 */
-	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
-	if (!ptlock_alloc(page_ptdesc(page)))
+	VM_BUG_ON_PAGE(*(unsigned long *)&ptdesc->ptl, ptdesc_page(ptdesc));
+	if (!ptlock_alloc(ptdesc))
 		return false;
-	spin_lock_init(ptlock_ptr(page_ptdesc(page)));
+	spin_lock_init(ptlock_ptr(ptdesc));
 	return true;
 }
 
@@ -2843,13 +2843,13 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }
 static inline void ptlock_cache_init(void) {}
-static inline bool ptlock_init(struct page *page) { return true; }
+static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
 static inline void ptlock_free(struct page *page) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
 static inline bool pgtable_pte_page_ctor(struct page *page)
 {
-	if (!ptlock_init(page))
+	if (!ptlock_init(page_ptdesc(page)))
 		return false;
 	__SetPageTable(page);
 	inc_lruvec_page_state(page, NR_PAGETABLE);
@@ -2908,7 +2908,7 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	ptdesc->pmd_huge_pte = NULL;
 #endif
-	return ptlock_init(ptdesc_page(ptdesc));
+	return ptlock_init(ptdesc);
 }
 
 static inline void pmd_ptlock_free(struct page *page)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:53:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522338.811658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrC-0007N0-57; Mon, 17 Apr 2023 20:53:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522338.811658; Mon, 17 Apr 2023 20:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrB-0007JY-J0; Mon, 17 Apr 2023 20:53:49 +0000
Received: by outflank-mailman (input) for mailman id 522338;
 Mon, 17 Apr 2023 20:53:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqP-0005M2-6r
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:01 +0000
Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com
 [2607:f8b0:4864:20::1035])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d6820218-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:00 +0200 (CEST)
Received: by mail-pj1-x1035.google.com with SMTP id f2so18670089pjs.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:00 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:52:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6820218-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764779; x=1684356779;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=X3zVXX/N6SOzga9RDJq+tPAqPrrx50X9RMNZ27hRgPo=;
        b=sICUrUNhKpAOA4VRjrqbzI2X/H4tUjO6T2SdOnpuj4Iy+PajhE1DrUSbcJznfffRrX
         L+wg0APu+UnJsiMWo07D8eesoEiJoCPkIf7iFkO3frlAWKtnOneztwNYCGmhOQpiZeuy
         AXohiTFVSZ6j/QYK9dyhVhIaufaLIPexDAAJXmf6qW3dNeCDEUq++V0vxfiKc0g0e7Ab
         Ewxv8L1qYPD4uyzx9cQ6CGY815b+pAM3zpWc0XUF8sLnRhkHRA/O3qjcuWg0cKvRpE4M
         YyoU6OpnuIHEQ4e7n1BxpYaP+3T9kR2hf6Oim0PIubi0gprGsuCOvZCKtVwS9TGYF6kY
         TvEw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764779; x=1684356779;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=X3zVXX/N6SOzga9RDJq+tPAqPrrx50X9RMNZ27hRgPo=;
        b=RkVH6m2kAqviIsq28acqMDlEPCQh8jnkTb0VxM3wF7Fws6TyfR2LacgN9AR/dqjLrI
         rqebj3iDjfFq/6AoCHctOsH97Oho4CU4h3L3BZHDh37Zw7OeBKnpmszPDs26pZaG2b5L
         6kYf6IynPoSI5UhSguZtv5ZZYPd0LDBX4rx9XE/n51Ao4uj6Ke6Cr++2xsvQcEfBwmk/
         ToiDaHmh4d6pDcWtL8d3+NfhYbmsnPPd4miFg261udKfe3f+LW3Aw+c5xDOf/wfJnnK9
         fJc+hbjtrJGr3QYUIG0ZvaiPpi8WVNNs/QCe7WAUGUEIRhXAUlGhU5rxNZhpoKdWs95S
         QGNQ==
X-Gm-Message-State: AAQBX9d/U+OrlEatZiA0ngFH4u/1RvtRTNplNQLvIhRElBuNPtyBCf3t
	K7ocFJeBhNXt3QD0odUf/4k=
X-Google-Smtp-Source: AKy350aL1Q/BuQU4b/coHWha7jodau934g5ZfPpWwX1i7O384OcXkkUs5sEaNrPzHDYpgrYCEKrWgg==
X-Received: by 2002:a17:90a:cb8c:b0:233:f393:f6cd with SMTP id a12-20020a17090acb8c00b00233f393f6cdmr15121483pju.5.1681764779380;
        Mon, 17 Apr 2023 13:52:59 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 10/33] mm: Convert pmd_ptlock_free() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:25 -0700
Message-Id: <20230417205048.15870-11-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d2485a110936..2390fc2542aa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2911,12 +2911,12 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 	return ptlock_init(ptdesc);
 }
 
-static inline void pmd_ptlock_free(struct page *page)
+static inline void pmd_ptlock_free(struct ptdesc *ptdesc)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
+	VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc));
 #endif
-	ptlock_free(page);
+	ptlock_free(ptdesc_page(ptdesc));
 }
 
 #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)
@@ -2929,7 +2929,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 }
 
 static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; }
-static inline void pmd_ptlock_free(struct page *page) {}
+static inline void pmd_ptlock_free(struct ptdesc *ptdesc) {}
 
 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
 
@@ -2953,7 +2953,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page)
 
 static inline void pgtable_pmd_page_dtor(struct page *page)
 {
-	pmd_ptlock_free(page);
+	pmd_ptlock_free(page_ptdesc(page));
 	__ClearPageTable(page);
 	dec_lruvec_page_state(page, NR_PAGETABLE);
 }
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522341.811668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrD-0007ca-Cs; Mon, 17 Apr 2023 20:53:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522341.811668; Mon, 17 Apr 2023 20:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrC-0007Z8-AS; Mon, 17 Apr 2023 20:53:50 +0000
Received: by outflank-mailman (input) for mailman id 522341;
 Mon, 17 Apr 2023 20:53:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqQ-0005M2-SR
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:02 +0000
Received: from mail-pj1-x102c.google.com (mail-pj1-x102c.google.com
 [2607:f8b0:4864:20::102c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d77a87e0-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:02 +0200 (CEST)
Received: by mail-pj1-x102c.google.com with SMTP id
 k36-20020a17090a4ca700b0024770df9897so6598410pjh.4
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:02 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d77a87e0-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764781; x=1684356781;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Hc0m/U8RaSqOAaIvHe2pZ8la6AwQPtvql1ppirfWYYQ=;
        b=FTIIrhTe8vfWt4jgLQ0zryZ+NfsJ/OFhYpHyrwEJjHtN6fYrjayv1wmbuEiWrZWNBG
         Ab6oC0JEVFKaVtr5ERs/HGyia4m0cuJPp54Y6qKJvGxwoWwnKA3jL+mq6GAMCgFGvFWn
         bEbIp/iuCRzaSQasJ9iYfHsEIZ8C5L6UBjIsqJjVLRNCdj3wFA0VCJaoyvqsQShlPxAr
         nn14k2m42+J6xR2hfl2ffw8tFItkZsF+DKNsuBC18vZWoMB4mE4FcDOZiJ4pqP0DqYky
         PjLPxBMY5gAsTXZg/iGvnD4lxFGHfkuFymQLt7EWHTlIBpCDY+iSj33ZpMo6EFdNFztz
         HYrQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764781; x=1684356781;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Hc0m/U8RaSqOAaIvHe2pZ8la6AwQPtvql1ppirfWYYQ=;
        b=eUaK2+U2+1SbLEbiyZ/TTRxTUvM34xSPJ8d6ZVsC3Z6gA6HaW2wYdRepxBi4KskkGl
         ZaUXUuiHZ4PPdFouiVmSascoqvk+YfUJDmyUXMXglR/sH7WJI0b9u0Zjps1Mr2B1w/f/
         gU8SUO3AUp7ioopIkwFHJkrVzrhLNgbOgSlff8dP8wCQBVTg3o/K+LAV3eYVUqw+kHhK
         aU9VU6ixB8tESWMgmz5T58HeXXbtS9+dUJM1Q0Nm6o2KF1KOgo3D1amleqd+Tnm1mexk
         a6ecU3EiI+nJ00Zh8EBws0W0IobNVmn1yIrtQRBiRpp13wBuRDYFYi/JWD7zuwA7dPj/
         +/KQ==
X-Gm-Message-State: AAQBX9dF8vDG8Pd5vA3cAAtxipqn98OGIvT23JZA8GszCqqYhr4vlq/I
	Sq9ufeF/8/HcVl8W/xZQheU=
X-Google-Smtp-Source: AKy350ZMofF2taT9lCtSfGwXj3mTLikp6R4dNTLL/p579XSegIqzslR/71TUEAWTkG6EWBZpLey91g==
X-Received: by 2002:a17:90a:d143:b0:23b:2c51:6e7 with SMTP id t3-20020a17090ad14300b0023b2c5106e7mr16554327pjw.21.1681764780697;
        Mon, 17 Apr 2023 13:53:00 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 11/33] mm: Convert ptlock_free() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:26 -0700
Message-Id: <20230417205048.15870-12-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 10 +++++-----
 mm/memory.c        |  4 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2390fc2542aa..17a64cfd1430 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2787,7 +2787,7 @@ static inline void ptdesc_clear(void *x)
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
 bool ptlock_alloc(struct ptdesc *ptdesc);
-extern void ptlock_free(struct page *page);
+void ptlock_free(struct ptdesc *ptdesc);
 
 static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
@@ -2803,7 +2803,7 @@ static inline bool ptlock_alloc(struct ptdesc *ptdesc)
 	return true;
 }
 
-static inline void ptlock_free(struct page *page)
+static inline void ptlock_free(struct ptdesc *ptdesc)
 {
 }
 
@@ -2844,7 +2844,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 }
 static inline void ptlock_cache_init(void) {}
 static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
-static inline void ptlock_free(struct page *page) {}
+static inline void ptlock_free(struct ptdesc *ptdesc) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
 static inline bool pgtable_pte_page_ctor(struct page *page)
@@ -2858,7 +2858,7 @@ static inline bool pgtable_pte_page_ctor(struct page *page)
 
 static inline void pgtable_pte_page_dtor(struct page *page)
 {
-	ptlock_free(page);
+	ptlock_free(page_ptdesc(page));
 	__ClearPageTable(page);
 	dec_lruvec_page_state(page, NR_PAGETABLE);
 }
@@ -2916,7 +2916,7 @@ static inline void pmd_ptlock_free(struct ptdesc *ptdesc)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc));
 #endif
-	ptlock_free(ptdesc_page(ptdesc));
+	ptlock_free(ptdesc);
 }
 
 #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)
diff --git a/mm/memory.c b/mm/memory.c
index 37d408ac1b8d..ca74425c9405 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5937,8 +5937,8 @@ bool ptlock_alloc(struct ptdesc *ptdesc)
 	return true;
 }
 
-void ptlock_free(struct page *page)
+void ptlock_free(struct ptdesc *ptdesc)
 {
-	kmem_cache_free(page_ptl_cachep, page->ptl);
+	kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:53:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522343.811672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrD-0007kQ-OS; Mon, 17 Apr 2023 20:53:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522343.811672; Mon, 17 Apr 2023 20:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrC-0007fk-QV; Mon, 17 Apr 2023 20:53:50 +0000
Received: by outflank-mailman (input) for mailman id 522343;
 Mon, 17 Apr 2023 20:53:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqR-0005M2-Vw
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:03 +0000
Received: from mail-pl1-x634.google.com (mail-pl1-x634.google.com
 [2607:f8b0:4864:20::634])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d81c3f24-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:03 +0200 (CEST)
Received: by mail-pl1-x634.google.com with SMTP id kh6so25752950plb.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:03 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d81c3f24-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764782; x=1684356782;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=WrZX7kBoYSlYOsS+wVtDQN8RsK5AFCnoOOzdjwi5Mms=;
        b=Fq/GP8pcdLi4I5bPiFEsvLaXpxmzLee60TMK8PZJIeuQRzzOkEw4LNRxem1TjMHkl4
         hpoJjh3mkwkBNIc+7ZgY54HKylkrI6wgo3KFZgMqWIFkgvOs6SvSJyDnyNXKU57Vfgni
         F3o9PaXHz7p6eZRgp/mKtHM6HxVxSp+JnGgjpmMtA2nsWbkEb7dNgfcs3WzKBla+e/KK
         mEc/vn9ckYnWIktc03O3OhGV7y0sYxPsn1QEH3DBGZVQjs25sE8N0p2VjcijmIU6liuv
         XPjI1oYrX0OJGyw0BE4l6r4tCUbdIF44UY/L4uGFZKlEP+7PRKuryCCDJJcwRQi8c96H
         TYTQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764782; x=1684356782;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=WrZX7kBoYSlYOsS+wVtDQN8RsK5AFCnoOOzdjwi5Mms=;
        b=JcepvHO19eyN4ba9adbbT3N5bg1Ja9yAHGfl8hcbiXdbeT5ksgyaDL7vERX6wAYews
         PSZ93L6gFUw3jN9oUHBavSKj7l0i8ISMiQ2bPnwP2z/QQjCRydwKE9gzUPwpqpeKf2lw
         JW299mx4fyn+XTKX2IysDav93GaFRv2AgyIckM1JQuZZpWE6N4yH65eSSjJ9xBE0IArI
         UeLOLBnY4XJ6Hsyof7KzquruwE00oQsjkkIybfXf+szkVouBsfCUklNAuu0Cguke66JB
         +KnDw8Ke9ksm/vxX13Sqhh/+hxipecNrSrtgLb67dg9fVwBbA6/Wak1sQMmfb9Uwb3TS
         /Ejg==
X-Gm-Message-State: AAQBX9dtwbfv8vb+EBbcsmvdD9pdK2GVa6MabkU8MRWdfBviozOLsYes
	IVj1aSXy29vUMne7EJbEgCQ=
X-Google-Smtp-Source: AKy350ZKSmY2j825YUr7PKp4UY4ecmOCun/8MK3oszIbqT0tc1tYYx3cO7v8JESnTYGwYaXHIhiESg==
X-Received: by 2002:a17:902:ecd2:b0:1a6:9671:253e with SMTP id a18-20020a170902ecd200b001a69671253emr219179plh.47.1681764781984;
        Mon, 17 Apr 2023 13:53:01 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 12/33] mm: Create ptdesc equivalents for pgtable_{pte,pmd}_page_{ctor,dtor}
Date: Mon, 17 Apr 2023 13:50:27 -0700
Message-Id: <20230417205048.15870-13-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Create ptdesc_pte_ctor(), ptdesc_pmd_ctor(), ptdesc_pte_dtor(), and
ptdesc_pmd_dtor(), and make the original pgtable constructors/destructors
wrappers around them.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 56 ++++++++++++++++++++++++++++++++++------------
 1 file changed, 42 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 17a64cfd1430..cb136d2fdf74 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2847,20 +2847,34 @@ static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
 static inline void ptlock_free(struct ptdesc *ptdesc) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
-static inline bool pgtable_pte_page_ctor(struct page *page)
+static inline bool ptdesc_pte_ctor(struct ptdesc *ptdesc)
 {
-	if (!ptlock_init(page_ptdesc(page)))
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	if (!ptlock_init(ptdesc))
 		return false;
-	__SetPageTable(page);
-	inc_lruvec_page_state(page, NR_PAGETABLE);
+	__SetPageTable(&folio->page);
+	lruvec_stat_add_folio(folio, NR_PAGETABLE);
 	return true;
 }
 
+static inline bool pgtable_pte_page_ctor(struct page *page)
+{
+	return ptdesc_pte_ctor(page_ptdesc(page));
+}
+
+static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc)
+{
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	ptlock_free(ptdesc);
+	__ClearPageTable(&folio->page);
+	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
+}
+
 static inline void pgtable_pte_page_dtor(struct page *page)
 {
-	ptlock_free(page_ptdesc(page));
-	__ClearPageTable(page);
-	dec_lruvec_page_state(page, NR_PAGETABLE);
+	ptdesc_pte_dtor(page_ptdesc(page));
 }
 
 #define pte_offset_map_lock(mm, pmd, address, ptlp)	\
@@ -2942,20 +2956,34 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
 	return ptl;
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page)
+static inline bool ptdesc_pmd_ctor(struct ptdesc *ptdesc)
 {
-	if (!pmd_ptlock_init(page_ptdesc(page)))
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	if (!pmd_ptlock_init(ptdesc))
 		return false;
-	__SetPageTable(page);
-	inc_lruvec_page_state(page, NR_PAGETABLE);
+	__SetPageTable(&folio->page);
+	lruvec_stat_add_folio(folio, NR_PAGETABLE);
 	return true;
 }
 
+static inline bool pgtable_pmd_page_ctor(struct page *page)
+{
+	return ptdesc_pmd_ctor(page_ptdesc(page));
+}
+
+static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc)
+{
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	pmd_ptlock_free(ptdesc);
+	__ClearPageTable(&folio->page);
+	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
+}
+
 static inline void pgtable_pmd_page_dtor(struct page *page)
 {
-	pmd_ptlock_free(page_ptdesc(page));
-	__ClearPageTable(page);
-	dec_lruvec_page_state(page, NR_PAGETABLE);
+	ptdesc_pmd_dtor(page_ptdesc(page));
 }
 
 /*
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:53:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522346.811683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrE-00083a-Pm; Mon, 17 Apr 2023 20:53:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522346.811683; Mon, 17 Apr 2023 20:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrD-0007vp-R8; Mon, 17 Apr 2023 20:53:51 +0000
Received: by outflank-mailman (input) for mailman id 522346;
 Mon, 17 Apr 2023 20:53:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqW-0005NG-6D
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:08 +0000
Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com
 [2607:f8b0:4864:20::102a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d9f3da1c-dd61-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 22:53:06 +0200 (CEST)
Received: by mail-pj1-x102a.google.com with SMTP id
 b2-20020a17090a6e0200b002470b249e59so16985581pjk.4
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:06 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9f3da1c-dd61-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764785; x=1684356785;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/Q5rpHIbLtPdsxzDzaO3buqXOd2yqrT0Z007lfiZkpI=;
        b=oN3WN3FaawXE5CF6Cg/W/uIlOgFpp+g16jZXog0VNnI/qtOkBdWsDVehFGMED7dn5+
         e56tIJJLtydkdncbDj6EgMfHci5dsJ/ubCUoN7HG1abvk1Sxq0PSQWiGjWfSNWnYoCUZ
         iexosl+123qGqL0uhQqkt6/lQEj+ZZbGmt11o31Lgc0lv69E3YkAOFWPJgHBPxqW6Hcs
         AY1JBassDB9z9gn7HCEtsHeuw7WKOVmcQJj/XwZLajgpzdXqkSttOF9JB8ZOKpdhpqCp
         KcapkWtUG6onUSGsHkpFKHhWq2wTFC19ykRFpbUoVueZkn9rzc97nSY2du8bEMz2qr3M
         LcZg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764785; x=1684356785;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=/Q5rpHIbLtPdsxzDzaO3buqXOd2yqrT0Z007lfiZkpI=;
        b=c1C40fQWVM2/VrfY8yHgYMucVP7sFL3eqqKv2UF++n5wJL4cmhBhr3jZne8fG/0sfc
         npZHbC8tsxUTVHDvrqOJjNM+DS04QVDSenBRLJR0aciRH8q7yonhbasyD2qejEcbBSoe
         pqdgV6KmqqXnidZ/pRUP6M3rFchDOj3UacWYjraa5kgmpVYOVrmQCvT1GP6q5JgqTHgr
         3DOh94Ftfq4/9IkOGK0/QTnoRbrmAcBQ4MBRsLSaDK+7GI0ORUu4OH80aT8iYK245MlE
         Hxo+RxSlKYYV0LCwXwqsDC/WsvMip7/GOGAzEdqqcyzlZLvLwFiO0r5Y3QJ4TnQhcrUO
         eoMw==
X-Gm-Message-State: AAQBX9eHdo4VhLjS9l7Yd0lfuw2K1ecKW3cKGOs0jssLUlpaZeQb6QVV
	3GjJ2LYQdGrkxOimZm0jj0KLP30axEpYTQ==
X-Google-Smtp-Source: AKy350bQu6z/qcpmnOWvPCYz19t9kGfi+ogJbAChD8E648rYRlxNxJTxaR2oZuddVpCSFFn5irXClA==
X-Received: by 2002:a17:90b:4d81:b0:240:7f0d:9232 with SMTP id oj1-20020a17090b4d8100b002407f0d9232mr16482647pjb.3.1681764785136;
        Mon, 17 Apr 2023 13:53:05 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 14/33] x86: Convert various functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:29 -0700
Message-Id: <20230417205048.15870-15-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to split struct ptdesc from struct page, convert various
functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/mm/pgtable.c | 46 +++++++++++++++++++++++++------------------
 1 file changed, 27 insertions(+), 19 deletions(-)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index afab0bc7862b..9b6f81c8eb32 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -52,7 +52,7 @@ early_param("userpte", setup_userpte);
 
 void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
 {
-	pgtable_pte_page_dtor(pte);
+	ptdesc_pte_dtor(page_ptdesc(pte));
 	paravirt_release_pte(page_to_pfn(pte));
 	paravirt_tlb_remove_table(tlb, pte);
 }
@@ -60,7 +60,7 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
 #if CONFIG_PGTABLE_LEVELS > 2
 void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
 {
-	struct page *page = virt_to_page(pmd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
 	paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT);
 	/*
 	 * NOTE! For PAE, any changes to the top page-directory-pointer-table
@@ -69,8 +69,8 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
 #ifdef CONFIG_X86_PAE
 	tlb->need_flush_all = 1;
 #endif
-	pgtable_pmd_page_dtor(page);
-	paravirt_tlb_remove_table(tlb, page);
+	ptdesc_pmd_dtor(ptdesc);
+	paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc));
 }
 
 #if CONFIG_PGTABLE_LEVELS > 3
@@ -92,16 +92,16 @@ void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d)
 
 static inline void pgd_list_add(pgd_t *pgd)
 {
-	struct page *page = virt_to_page(pgd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgd);
 
-	list_add(&page->lru, &pgd_list);
+	list_add(&ptdesc->pt_list, &pgd_list);
 }
 
 static inline void pgd_list_del(pgd_t *pgd)
 {
-	struct page *page = virt_to_page(pgd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgd);
 
-	list_del(&page->lru);
+	list_del(&ptdesc->pt_list);
 }
 
 #define UNSHARED_PTRS_PER_PGD				\
@@ -112,12 +112,12 @@ static inline void pgd_list_del(pgd_t *pgd)
 
 static void pgd_set_mm(pgd_t *pgd, struct mm_struct *mm)
 {
-	virt_to_page(pgd)->pt_mm = mm;
+	virt_to_ptdesc(pgd)->pt_mm = mm;
 }
 
 struct mm_struct *pgd_page_get_mm(struct page *page)
 {
-	return page->pt_mm;
+	return page_ptdesc(page)->pt_mm;
 }
 
 static void pgd_ctor(struct mm_struct *mm, pgd_t *pgd)
@@ -213,11 +213,14 @@ void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd)
 static void free_pmds(struct mm_struct *mm, pmd_t *pmds[], int count)
 {
 	int i;
+	struct ptdesc *ptdesc;
 
 	for (i = 0; i < count; i++)
 		if (pmds[i]) {
-			pgtable_pmd_page_dtor(virt_to_page(pmds[i]));
-			free_page((unsigned long)pmds[i]);
+			ptdesc = virt_to_ptdesc(pmds[i]);
+
+			ptdesc_pmd_dtor(ptdesc);
+			ptdesc_free(ptdesc);
 			mm_dec_nr_pmds(mm);
 		}
 }
@@ -232,16 +235,21 @@ static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[], int count)
 		gfp &= ~__GFP_ACCOUNT;
 
 	for (i = 0; i < count; i++) {
-		pmd_t *pmd = (pmd_t *)__get_free_page(gfp);
-		if (!pmd)
+		pmd_t *pmd = NULL;
+		struct ptdesc *ptdesc = ptdesc_alloc(gfp, 0);
+
+		if (!ptdesc)
 			failed = true;
-		if (pmd && !pgtable_pmd_page_ctor(virt_to_page(pmd))) {
-			free_page((unsigned long)pmd);
-			pmd = NULL;
+		if (ptdesc && !ptdesc_pmd_ctor(ptdesc)) {
+			ptdesc_free(ptdesc);
+			ptdesc = NULL;
 			failed = true;
 		}
-		if (pmd)
+		if (ptdesc) {
 			mm_inc_nr_pmds(mm);
+			pmd = (pmd_t *)ptdesc_address(ptdesc);
+		}
+
 		pmds[i] = pmd;
 	}
 
@@ -838,7 +846,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 
 	free_page((unsigned long)pmd_sv);
 
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	ptdesc_pmd_dtor(virt_to_ptdesc(pmd));
 	free_page((unsigned long)pmd);
 
 	return 1;
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:54 2023
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 15/33] s390: Convert various gmap functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:30 -0700
Message-Id: <20230417205048.15870-16-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to split struct ptdesc from struct page, convert various
functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/mm/gmap.c | 230 ++++++++++++++++++++++++--------------------
 1 file changed, 128 insertions(+), 102 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index a61ea1a491dc..9c6ea1d16e09 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -34,7 +34,7 @@
 static struct gmap *gmap_alloc(unsigned long limit)
 {
 	struct gmap *gmap;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long *table;
 	unsigned long etype, atype;
 
@@ -67,12 +67,12 @@ static struct gmap *gmap_alloc(unsigned long limit)
 	spin_lock_init(&gmap->guest_table_lock);
 	spin_lock_init(&gmap->shadow_lock);
 	refcount_set(&gmap->ref_count, 1);
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		goto out_free;
-	page->_pt_s390_gaddr = 0;
-	list_add(&page->lru, &gmap->crst_list);
-	table = page_to_virt(page);
+	ptdesc->_pt_s390_gaddr = 0;
+	list_add(&ptdesc->pt_list, &gmap->crst_list);
+	table = ptdesc_to_virt(ptdesc);
 	crst_table_init(table, etype);
 	gmap->table = table;
 	gmap->asce = atype | _ASCE_TABLE_LENGTH |
@@ -181,25 +181,25 @@ static void gmap_rmap_radix_tree_free(struct radix_tree_root *root)
  */
 static void gmap_free(struct gmap *gmap)
 {
-	struct page *page, *next;
+	struct ptdesc *ptdesc, *next;
 
 	/* Flush tlb of all gmaps (if not already done for shadows) */
 	if (!(gmap_is_shadow(gmap) && gmap->removed))
 		gmap_flush_tlb(gmap);
 	/* Free all segment & region tables. */
-	list_for_each_entry_safe(page, next, &gmap->crst_list, lru) {
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+	list_for_each_entry_safe(ptdesc, next, &gmap->crst_list, pt_list) {
+		ptdesc->_pt_s390_gaddr = 0;
+		ptdesc_free(ptdesc);
 	}
 	gmap_radix_tree_free(&gmap->guest_to_host);
 	gmap_radix_tree_free(&gmap->host_to_guest);
 
 	/* Free additional data for a shadow gmap */
 	if (gmap_is_shadow(gmap)) {
-		/* Free all page tables. */
-		list_for_each_entry_safe(page, next, &gmap->pt_list, lru) {
-			page->_pt_s390_gaddr = 0;
-			page_table_free_pgste(page);
+		/* Free all page tables. */
+		list_for_each_entry_safe(ptdesc, next, &gmap->pt_list, pt_list) {
+			ptdesc->_pt_s390_gaddr = 0;
+			page_table_free_pgste(ptdesc_page(ptdesc));
 		}
 		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
 		/* Release reference to the parent */
@@ -308,27 +308,27 @@ EXPORT_SYMBOL_GPL(gmap_get_enabled);
 static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 			    unsigned long init, unsigned long gaddr)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long *new;
 
 	/* since we dont free the gmap table until gmap_free we can unlock */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	new = page_to_virt(page);
+	new = ptdesc_to_virt(ptdesc);
 	crst_table_init(new, init);
 	spin_lock(&gmap->guest_table_lock);
 	if (*table & _REGION_ENTRY_INVALID) {
-		list_add(&page->lru, &gmap->crst_list);
+		list_add(&ptdesc->pt_list, &gmap->crst_list);
 		*table = __pa(new) | _REGION_ENTRY_LENGTH |
 			(*table & _REGION_ENTRY_TYPE_MASK);
-		page->_pt_s390_gaddr = gaddr;
-		page = NULL;
+		ptdesc->_pt_s390_gaddr = gaddr;
+		ptdesc = NULL;
 	}
 	spin_unlock(&gmap->guest_table_lock);
-	if (page) {
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+	if (ptdesc) {
+		ptdesc->_pt_s390_gaddr = 0;
+		ptdesc_free(ptdesc);
 	}
 	return 0;
 }
@@ -341,13 +341,13 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
  */
 static unsigned long __gmap_segment_gaddr(unsigned long *entry)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long offset;
 
 	offset = (unsigned long) entry / sizeof(unsigned long);
 	offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE;
-	page = pmd_pgtable_page((pmd_t *) entry);
-	return page->_pt_s390_gaddr + offset;
+	ptdesc = pmd_ptdesc((pmd_t *) entry);
+	return ptdesc->_pt_s390_gaddr + offset;
 }
 
 /**
@@ -1343,6 +1343,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	unsigned long *ste;
 	phys_addr_t sto, pgt;
 	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	ste = gmap_table_walk(sg, raddr, 1); /* get segment pointer */
@@ -1356,9 +1357,11 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 	/* Free page table */
 	page = phys_to_page(pgt);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	page_table_free_pgste(page);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	page_table_free_pgste(ptdesc_page(ptdesc));
 }
 
 /**
@@ -1372,9 +1375,10 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 				unsigned long *sgt)
 {
-	struct page *page;
 	phys_addr_t pgt;
 	int i;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	for (i = 0; i < _CRST_ENTRIES; i++, raddr += _SEGMENT_SIZE) {
@@ -1385,9 +1389,11 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 		/* Free page table */
 		page = phys_to_page(pgt);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		page_table_free_pgste(page);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		page_table_free_pgste(ptdesc_page(ptdesc));
 	}
 }
 
@@ -1403,6 +1409,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	unsigned long r3o, *r3e;
 	phys_addr_t sgt;
 	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	r3e = gmap_table_walk(sg, raddr, 2); /* get region-3 pointer */
@@ -1416,9 +1423,11 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_sgt(sg, raddr, __va(sgt));
 	/* Free segment table */
 	page = phys_to_page(sgt);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 }
 
 /**
@@ -1432,9 +1441,10 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 				unsigned long *r3t)
 {
-	struct page *page;
 	phys_addr_t sgt;
 	int i;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	for (i = 0; i < _CRST_ENTRIES; i++, raddr += _REGION3_SIZE) {
@@ -1445,9 +1455,11 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_sgt(sg, raddr, __va(sgt));
 		/* Free segment table */
 		page = phys_to_page(sgt);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		ptdesc_free(ptdesc);
 	}
 }
 
@@ -1463,6 +1475,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	unsigned long r2o, *r2e;
 	phys_addr_t r3t;
 	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	r2e = gmap_table_walk(sg, raddr, 3); /* get region-2 pointer */
@@ -1476,9 +1489,11 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_r3t(sg, raddr, __va(r3t));
 	/* Free region 3 table */
 	page = phys_to_page(r3t);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 }
 
 /**
@@ -1493,8 +1508,9 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 				unsigned long *r2t)
 {
 	phys_addr_t r3t;
-	struct page *page;
 	int i;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	for (i = 0; i < _CRST_ENTRIES; i++, raddr += _REGION2_SIZE) {
@@ -1505,9 +1521,11 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_r3t(sg, raddr, __va(r3t));
 		/* Free region 3 table */
 		page = phys_to_page(r3t);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		ptdesc_free(ptdesc);
 	}
 }
 
@@ -1523,6 +1541,7 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	unsigned long r1o, *r1e;
 	struct page *page;
 	phys_addr_t r2t;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	r1e = gmap_table_walk(sg, raddr, 4); /* get region-1 pointer */
@@ -1536,9 +1555,11 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_r2t(sg, raddr, __va(r2t));
 	/* Free region 2 table */
 	page = phys_to_page(r2t);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 }
 
 /**
@@ -1556,6 +1577,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 	struct page *page;
 	phys_addr_t r2t;
 	int i;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	asce = __pa(r1t) | _ASCE_TYPE_REGION1;
@@ -1569,9 +1591,11 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 		r1t[i] = _REGION1_ENTRY_EMPTY;
 		/* Free region 2 table */
 		page = phys_to_page(r2t);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		ptdesc_free(ptdesc);
 	}
 }
 
@@ -1768,18 +1792,18 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_r2t;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	/* Allocate a shadow region second table */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_r2t = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_r2t = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 4); /* get region-1 pointer */
@@ -1800,7 +1824,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 		 _REGION_ENTRY_TYPE_R1 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= (r2t & _REGION_ENTRY_PROTECT);
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1828,8 +1852,8 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_r2t);
@@ -1853,18 +1877,18 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_r3t;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	/* Allocate a shadow region second table */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_r3t = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_r3t = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 3); /* get region-2 pointer */
@@ -1885,7 +1909,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 		 _REGION_ENTRY_TYPE_R2 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= (r3t & _REGION_ENTRY_PROTECT);
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1913,8 +1937,8 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_r3t);
@@ -1938,18 +1962,18 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_sgt;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg) || (sgt & _REGION3_ENTRY_LARGE));
 	/* Allocate a shadow segment table */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_sgt = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_sgt = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 2); /* get region-3 pointer */
@@ -1970,7 +1994,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 		 _REGION_ENTRY_TYPE_R3 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= sgt & _REGION_ENTRY_PROTECT;
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1998,8 +2022,8 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_sgt);
@@ -2022,8 +2046,9 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 			   int *fake)
 {
 	unsigned long *table;
-	struct page *page;
 	int rc;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	spin_lock(&sg->guest_table_lock);
@@ -2031,9 +2056,10 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
 		/* Shadow page tables are full pages (pte+pgste) */
 		page = pfn_to_page(*table >> PAGE_SHIFT);
-		*pgt = page->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE;
+		ptdesc = page_ptdesc(page);
+		*pgt = ptdesc->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE;
 		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
-		*fake = !!(page->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE);
+		*fake = !!(ptdesc->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE);
 		rc = 0;
 	} else  {
 		rc = -EAGAIN;
@@ -2062,19 +2088,19 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 {
 	unsigned long raddr, origin;
 	unsigned long *table;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	phys_addr_t s_pgt;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg) || (pgt & _SEGMENT_ENTRY_LARGE));
 	/* Allocate a shadow page table */
-	page = page_table_alloc_pgste(sg->mm);
-	if (!page)
+	ptdesc = page_ptdesc(page_table_alloc_pgste(sg->mm));
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_pgt = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_pgt = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow page table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 1); /* get segment pointer */
@@ -2092,7 +2118,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	/* mark as invalid as long as the parent table is not protected */
 	*table = (unsigned long) s_pgt | _SEGMENT_ENTRY |
 		 (pgt & _SEGMENT_ENTRY_PROTECT) | _SEGMENT_ENTRY_INVALID;
-	list_add(&page->lru, &sg->pt_list);
+	list_add(&ptdesc->pt_list, &sg->pt_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_SEGMENT_ENTRY_INVALID;
@@ -2118,8 +2144,8 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	page_table_free_pgste(page);
+	ptdesc->_pt_s390_gaddr = 0;
+	page_table_free_pgste(ptdesc_page(ptdesc));
 	return rc;
 
 }
@@ -2823,11 +2849,11 @@ EXPORT_SYMBOL_GPL(__s390_uv_destroy_range);
  */
 void s390_unlist_old_asce(struct gmap *gmap)
 {
-	struct page *old;
+	struct ptdesc *old;
 
-	old = virt_to_page(gmap->table);
+	old = virt_to_ptdesc(gmap->table);
 	spin_lock(&gmap->guest_table_lock);
-	list_del(&old->lru);
+	list_del(&old->pt_list);
 	/*
 	 * Sometimes the topmost page might need to be "removed" multiple
 	 * times, for example if the VM is rebooted into secure mode several
@@ -2842,7 +2868,7 @@ void s390_unlist_old_asce(struct gmap *gmap)
 	 * pointers, so list_del can work (and do nothing) without
 	 * dereferencing stale or invalid pointers.
 	 */
-	INIT_LIST_HEAD(&old->lru);
+	INIT_LIST_HEAD(&old->pt_list);
 	spin_unlock(&gmap->guest_table_lock);
 }
 EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
@@ -2860,15 +2886,15 @@ EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
 int s390_replace_asce(struct gmap *gmap)
 {
 	unsigned long asce;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	void *table;
 
 	s390_unlist_old_asce(gmap);
 
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	table = page_to_virt(page);
+	table = ptdesc_to_virt(ptdesc);
 	memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT));
 
 	/*
@@ -2877,7 +2903,7 @@ int s390_replace_asce(struct gmap *gmap)
 	 * it will be freed when the VM is torn down.
 	 */
 	spin_lock(&gmap->guest_table_lock);
-	list_add(&page->lru, &gmap->crst_list);
+	list_add(&ptdesc->pt_list, &gmap->crst_list);
 	spin_unlock(&gmap->guest_table_lock);
 
 	/* Set new table origin while preserving existing ASCE control bits */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:55 2023
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 18/33] pgalloc: Convert various functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:33 -0700
Message-Id: <20230417205048.15870-19-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/asm-generic/pgalloc.h | 62 +++++++++++++++++++++--------------
 1 file changed, 37 insertions(+), 25 deletions(-)

diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index a7cf825befae..7d4a1f5d3c17 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -18,7 +18,11 @@
  */
 static inline pte_t *__pte_alloc_one_kernel(struct mm_struct *mm)
 {
-	return (pte_t *)__get_free_page(GFP_PGTABLE_KERNEL);
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_PGTABLE_KERNEL, 0);
+
+	if (!ptdesc)
+		return NULL;
+	return (pte_t *)ptdesc_address(ptdesc);
 }
 
 #ifndef __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
@@ -41,7 +45,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
  */
 static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-	free_page((unsigned long)pte);
+	ptdesc_free(virt_to_ptdesc(pte));
 }
 
 /**
@@ -49,7 +53,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
  * @mm: the mm_struct of the current context
  * @gfp: GFP flags to use for the allocation
  *
- * Allocates a page and runs the pgtable_pte_page_ctor().
+ * Allocates a ptdesc and runs the ptdesc_pte_ctor().
  *
  * This function is intended for architectures that need
  * anything beyond simple page allocation or must have custom GFP flags.
@@ -58,17 +62,17 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
  */
 static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp)
 {
-	struct page *pte;
+	struct ptdesc *ptdesc;
 
-	pte = alloc_page(gfp);
-	if (!pte)
+	ptdesc = ptdesc_alloc(gfp, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(pte)) {
-		__free_page(pte);
+	if (!ptdesc_pte_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
 
-	return pte;
+	return ptdesc_page(ptdesc);
 }
 
 #ifndef __HAVE_ARCH_PTE_ALLOC_ONE
@@ -76,7 +80,7 @@ static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp)
  * pte_alloc_one - allocate a page for PTE-level user page table
  * @mm: the mm_struct of the current context
  *
- * Allocates a page and runs the pgtable_pte_page_ctor().
+ * Allocates a ptdesc and runs the ptdesc_pte_ctor().
  *
  * Return: `struct page` initialized as page table or %NULL on error
  */
@@ -98,8 +102,10 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
  */
 static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
 {
-	pgtable_pte_page_dtor(pte_page);
-	__free_page(pte_page);
+	struct ptdesc *ptdesc = page_ptdesc(pte_page);
+
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 
@@ -110,7 +116,7 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
  * pmd_alloc_one - allocate a page for PMD-level page table
  * @mm: the mm_struct of the current context
  *
- * Allocates a page and runs the pgtable_pmd_page_ctor().
+ * Allocates a ptdesc and runs the ptdesc_pmd_ctor().
  * Allocations use %GFP_PGTABLE_USER in user context and
  * %GFP_PGTABLE_KERNEL in kernel context.
  *
@@ -118,28 +124,30 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
  */
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	gfp_t gfp = GFP_PGTABLE_USER;
 
 	if (mm == &init_mm)
 		gfp = GFP_PGTABLE_KERNEL;
-	page = alloc_page(gfp);
-	if (!page)
+	ptdesc = ptdesc_alloc(gfp, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pmd_page_ctor(page)) {
-		__free_page(page);
+	if (!ptdesc_pmd_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
-	return (pmd_t *)page_address(page);
+	return (pmd_t *)ptdesc_address(ptdesc);
 }
 #endif
 
 #ifndef __HAVE_ARCH_PMD_FREE
 static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
+
 	BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
-	free_page((unsigned long)pmd);
+	ptdesc_pmd_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 #endif
 
@@ -149,11 +157,15 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 
 static inline pud_t *__pud_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
-	gfp_t gfp = GFP_PGTABLE_USER;
+	gfp_t gfp = GFP_PGTABLE_USER | __GFP_ZERO;
+	struct ptdesc *ptdesc;
 
 	if (mm == &init_mm)
 		gfp = GFP_PGTABLE_KERNEL;
-	return (pud_t *)get_zeroed_page(gfp);
+	ptdesc = ptdesc_alloc(gfp, 0);
+	if (!ptdesc)
+		return NULL;
+	return (pud_t *)ptdesc_address(ptdesc);
 }
 
 #ifndef __HAVE_ARCH_PUD_ALLOC_ONE
@@ -175,7 +187,7 @@ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
 static inline void __pud_free(struct mm_struct *mm, pud_t *pud)
 {
 	BUG_ON((unsigned long)pud & (PAGE_SIZE-1));
-	free_page((unsigned long)pud);
+	ptdesc_free(virt_to_ptdesc(pud));
 }
 
 #ifndef __HAVE_ARCH_PUD_FREE
@@ -190,7 +202,7 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 #ifndef __HAVE_ARCH_PGD_FREE
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	free_page((unsigned long)pgd);
+	ptdesc_free(virt_to_ptdesc(pgd));
 }
 #endif
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:53:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522352.811715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrK-000123-Jl; Mon, 17 Apr 2023 20:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522352.811715; Mon, 17 Apr 2023 20:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrJ-0000pU-E4; Mon, 17 Apr 2023 20:53:57 +0000
Received: by outflank-mailman (input) for mailman id 522352;
 Mon, 17 Apr 2023 20:53:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqi-0005NG-8B
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:20 +0000
Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com
 [2607:f8b0:4864:20::632])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e1303aca-dd61-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 22:53:18 +0200 (CEST)
Received: by mail-pl1-x632.google.com with SMTP id lh8so14361625plb.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:18 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1303aca-dd61-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764797; x=1684356797;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Pf7K/iFMD6od7/SRYpuPsYcodkkXMgOdjXv5YJvZldk=;
        b=Bwk4ooUgupEoGCPug2H2Eo/HpnuuhxU7x81C/Qk6PZMSk1B54ZJ/T95wzfLs2bHmOL
         EoIfThp59UctKku6w1OEzm9Tx1+NGxZtvh5x5EZcL/GqxrGONyvWeZgGKJ7IB/ueJlyr
         3shywEfhQgNVSuay+LMKRYZJ2fcroNukWPhJ2nbiEAP2p1v1zQae9MOASciSz7iDNPe8
         PaaHFDAGtzWOJsB/jrA0V4yjZJzVFkj1D7lHE74htYvR1G0N63C9Bb92oSk5fpPTqJrg
         y20llT4kvkmAZ0rau94Yl1KLiffp2Awm3htmoO0PKOmikRHdd7NJITZhSry9QyqyhUwZ
         8Oiw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764797; x=1684356797;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Pf7K/iFMD6od7/SRYpuPsYcodkkXMgOdjXv5YJvZldk=;
        b=hN8KLvu/Y21qIFzJ9tezUTPIhYAho6cuwzj2FiTFABKAL/osh5juZLVCGm54Esw3Gl
         8UzFNHVXrtNAJa6qPn9pRstHsu1t9RNgTywkpsPkh0nlGlH6DK/FvhU0U0udIf8oVSMo
         UMeSmuuNt36q0BmDmNHvOBU+W0i1YXYGDcg9aEYY0zkPStcVlYWNwfPVjRO0Ay1rVSxq
         f7QxyEuJC4XaJGIqAN3/IlrfL4QZT8Ael6lPcmb+6qKgaURQvKBSdvba7653xbiCcYDS
         Fvbmu65hi4X9GgKOwBkhJCtjrNgVAZUz+XWk6uBjRnquVJ1IfnetBPJsz0VlA3RBBBy+
         vxGw==
X-Gm-Message-State: AAQBX9fjrqOSXPKT3wkIO/AQJ+21Lg8DczTglFL6ALZjvcGj17k3D1p1
	N3+CiYPIueLReb7iuGZ/07g=
X-Google-Smtp-Source: AKy350YGLzUMfl8U7k4p6Sd9nO3FUwkbMa1TWeVYLUOOgANIZAClkJJEoJzR0+dt425gFfwmjH24tg==
X-Received: by 2002:a17:90a:ab12:b0:246:8b47:3d5b with SMTP id m18-20020a17090aab1200b002468b473d5bmr16661185pjq.18.1681764797279;
        Mon, 17 Apr 2023 13:53:17 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 23/33] loongarch: Convert various functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:38 -0700
Message-Id: <20230417205048.15870-24-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/loongarch/include/asm/pgalloc.h | 27 +++++++++++++++------------
 arch/loongarch/mm/pgtable.c          |  7 ++++---
 2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h
index af1d1e4a6965..1fe074f85b6b 100644
--- a/arch/loongarch/include/asm/pgalloc.h
+++ b/arch/loongarch/include/asm/pgalloc.h
@@ -45,9 +45,9 @@ extern void pagetable_init(void);
 extern pgd_t *pgd_alloc(struct mm_struct *mm);
 
 #define __pte_free_tlb(tlb, pte, address)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), pte);			\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));	\
 } while (0)
 
 #ifndef __PAGETABLE_PMD_FOLDED
@@ -55,18 +55,18 @@ do {							\
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pmd_t *pmd;
-	struct page *pg;
+	struct ptdesc *ptdesc;
 
-	pg = alloc_page(GFP_KERNEL_ACCOUNT);
-	if (!pg)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, 0);
+	if (!ptdesc)
 		return NULL;
 
-	if (!pgtable_pmd_page_ctor(pg)) {
-		__free_page(pg);
+	if (!ptdesc_pmd_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
 
-	pmd = (pmd_t *)page_address(pg);
+	pmd = (pmd_t *)ptdesc_address(ptdesc);
 	pmd_init(pmd);
 	return pmd;
 }
@@ -80,10 +80,13 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pud_t *pud;
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
 
-	pud = (pud_t *) __get_free_page(GFP_KERNEL);
-	if (pud)
-		pud_init(pud);
+	if (!ptdesc)
+		return NULL;
+	pud = (pud_t *)ptdesc_address(ptdesc);
+
+	pud_init(pud);
 	return pud;
 }
 
diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
index 36a6dc0148ae..ff07b8f1ef30 100644
--- a/arch/loongarch/mm/pgtable.c
+++ b/arch/loongarch/mm/pgtable.c
@@ -11,10 +11,11 @@
 
 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
-	pgd_t *ret, *init;
+	pgd_t *init, *ret = NULL;
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
 
-	ret = (pgd_t *) __get_free_page(GFP_KERNEL);
-	if (ret) {
+	if (ptdesc) {
+		ret = (pgd_t *)ptdesc_address(ptdesc);
 		init = pgd_offset(&init_mm, 0UL);
 		pgd_init(ret);
 		memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:54:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:54:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522354.811725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrN-0001f3-0k; Mon, 17 Apr 2023 20:54:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522354.811725; Mon, 17 Apr 2023 20:54:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrM-0001Ri-0E; Mon, 17 Apr 2023 20:54:00 +0000
Received: by outflank-mailman (input) for mailman id 522354;
 Mon, 17 Apr 2023 20:53:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVql-0005NG-3y
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:23 +0000
Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com
 [2607:f8b0:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e2e21f6f-dd61-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 22:53:21 +0200 (CEST)
Received: by mail-pl1-x633.google.com with SMTP id p8so27388894plk.9
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:21 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2e21f6f-dd61-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764800; x=1684356800;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=kyicI3UFSJTH3R33HGjLFGEyUDrTPXlIkqJCgIfzdrc=;
        b=DwA2KoT+Gkb+TSG9gl3xh2UjcEOrukpYDh0FxXwVNpDTsL49sodGBFe/MxZJLiffh8
         Okfe4RTRfpJn4SFFKcweRaXMogMu3aLJXW8S0ijOu/y8rquyNOCuhOYqIlMfHtHNwFXl
         0RIVF9f99J773/WS0WxWYyLGFniLvieyUAz5torJ7RwCjhPN4iKRxNqBQLtPHzo1HkHq
         UDiYLsTcdIpx6G4pcrHTW0RLQiAILxALX6WF9BC1qxq+9QtqiExh0N2abkWgWZYyYNmB
         dTtZwe83/hP4XI3Q8dsRpB6pTlzqUQszpTQqyb8BBRRnRiGvSruO77w1spjFd7OVj31k
         rIHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764800; x=1684356800;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=kyicI3UFSJTH3R33HGjLFGEyUDrTPXlIkqJCgIfzdrc=;
        b=U1VafAO5p3OUFOtqHPfK0RkQl/vt5OGXWE1FbZ4Zt+dZA/9UCsY+FqMBf6nUhkZ/EJ
         kLA5pUUWMcAFeIFRMtH5ICOFgRlctS7297yWHxg460qe8mvMp5SOainKVzOAMrhez9eb
         UfTsEO9vHJJvsJeTJ17Bur1x1H8FFUTbV5M3sJZkpgZNpnhpLoISG+jAFgyl5biCkSXF
         74L7N+wxcunKSs6cCl1QzV0bshS13iczgaYPShont6/9Nk8yrkuMNIcSEELOLaFQWW+Q
         b4Y+woQO/O7qssjcphQhtR5OJfLPbiMovkORU1YvRDraLUQTZHdM5YGzWc+5BB66dARX
         fbPw==
X-Gm-Message-State: AAQBX9eqI+YFJ5pwK1QJ/btpifhF7DzGdOsyNT3tzeofXz3fCQ6sTvs8
	eofsuGmKiDYUcyMcziX5OHQ=
X-Google-Smtp-Source: AKy350b8thcmecOoaOKiNwMZgeAazmiEiQ9V6nLrivAsLyOgVvPy05KEMe/JXWQl3cGgcJMZ1iU0KA==
X-Received: by 2002:a17:90b:3848:b0:247:4ad1:f69b with SMTP id nl8-20020a17090b384800b002474ad1f69bmr11748993pjb.26.1681764800112;
        Mon, 17 Apr 2023 13:53:20 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 25/33] mips: Convert various functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:40 -0700
Message-Id: <20230417205048.15870-26-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/mips/include/asm/pgalloc.h | 31 +++++++++++++++++--------------
 arch/mips/mm/pgtable.c          |  7 ++++---
 2 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
index f72e737dda21..7f7cc3140b27 100644
--- a/arch/mips/include/asm/pgalloc.h
+++ b/arch/mips/include/asm/pgalloc.h
@@ -51,13 +51,13 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm);
 
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	free_pages((unsigned long)pgd, PGD_TABLE_ORDER);
+	ptdesc_free(virt_to_ptdesc(pgd));
 }
 
-#define __pte_free_tlb(tlb,pte,address)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), pte);			\
+#define __pte_free_tlb(tlb, pte, address)			\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));	\
 } while (0)
 
 #ifndef __PAGETABLE_PMD_FOLDED
@@ -65,18 +65,18 @@ do {							\
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pmd_t *pmd;
-	struct page *pg;
+	struct ptdesc *ptdesc;
 
-	pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER);
-	if (!pg)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER);
+	if (!ptdesc)
 		return NULL;
 
-	if (!pgtable_pmd_page_ctor(pg)) {
-		__free_pages(pg, PMD_TABLE_ORDER);
+	if (!ptdesc_pmd_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
 
-	pmd = (pmd_t *)page_address(pg);
+	pmd = (pmd_t *)ptdesc_address(ptdesc);
 	pmd_init(pmd);
 	return pmd;
 }
@@ -90,10 +90,13 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pud_t *pud;
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, PUD_TABLE_ORDER);
 
-	pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_TABLE_ORDER);
-	if (pud)
-		pud_init(pud);
+	if (!ptdesc)
+		return NULL;
+	pud = (pud_t *)ptdesc_address(ptdesc);
+
+	pud_init(pud);
 	return pud;
 }
 
diff --git a/arch/mips/mm/pgtable.c b/arch/mips/mm/pgtable.c
index b13314be5d0e..d626db9ac224 100644
--- a/arch/mips/mm/pgtable.c
+++ b/arch/mips/mm/pgtable.c
@@ -10,10 +10,11 @@
 
 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
-	pgd_t *ret, *init;
+	pgd_t *init, *ret = NULL;
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, PGD_TABLE_ORDER);
 
-	ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_TABLE_ORDER);
-	if (ret) {
+	if (ptdesc) {
+		ret = (pgd_t *) ptdesc_address(ptdesc);
 		init = pgd_offset(&init_mm, 0UL);
 		pgd_init(ret);
 		memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:54:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:54:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522355.811734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrP-00026J-6Q; Mon, 17 Apr 2023 20:54:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522355.811734; Mon, 17 Apr 2023 20:54:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrN-0001yh-W9; Mon, 17 Apr 2023 20:54:02 +0000
Received: by outflank-mailman (input) for mailman id 522355;
 Mon, 17 Apr 2023 20:53:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqm-0005NG-Hz
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:24 +0000
Received: from mail-pj1-x102b.google.com (mail-pj1-x102b.google.com
 [2607:f8b0:4864:20::102b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e3bf2385-dd61-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 22:53:23 +0200 (CEST)
Received: by mail-pj1-x102b.google.com with SMTP id
 h24-20020a17090a9c1800b002404be7920aso27918675pjp.5
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:22 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3bf2385-dd61-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764801; x=1684356801;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tniPUuVL526Jec1TVIaHRLAUx7OruO8jRIKlA5wqt8Q=;
        b=p/jfRaNPQGaoJd3ujqaeLDuUP7t0WkHa2oZwiGdpehhESyVFAOdOmg0fusWkZ2XuJJ
         Yw6K8nYA6/5N6tLufuYh20gi7M7wxwYV1IodMymMAu3e2vjlzp/s4l9Vf7kZibjxovMR
         cghNEQkYGyGSVWGuIAzI28gyVhrT8GgnWs6YLc4dnuPB71AZU/a5qfpWgaMEJ0Qfl1S+
         OG2NEJp7fE32TkPWsKahDfFv2MiE7qQWnDtWKvNsVMNbEDAHg0y2Zsbd1R1MkIpv2dly
         3HDaLyPuLXfOS4PxJXenmM3XNPnXJ5LYctnHIHC9Ub4peO/zkx0GIKpeh+s1nuyRNP2C
         xoAQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764801; x=1684356801;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=tniPUuVL526Jec1TVIaHRLAUx7OruO8jRIKlA5wqt8Q=;
        b=Yy/m++MqbpsiOa7MgHWT05ETqQOV8w8yPsA/0M+encr7Wf7gWIaieehJKviJOW30Vd
         MV2rOOG9xlp4fEfgqCk9CdCHkrgiPLjTfxQ4v0u6D1qKICsIcQijw6W8bwnYQILEk+Fe
         ZgUIdYZ/seFNWG75ygdm2ZtbszZ0vOY51UrDHhjaCSor9Ylp2jX5n2ze2XGINdm7ixqN
         a1Z85FgQW7Tio3KcWm1PeOyXvaoXQg5YOJgpXO4jJ9AQFvb5NdAUQII1iNM5Zmgpvr2z
         wQaWOGYAvlaby41D892rSDq2gEk40lPkUe1JvOGu0A4QuF9sms3FJUge/fXuCawH3zAN
         Xl1A==
X-Gm-Message-State: AAQBX9ckvlENhEBprMCXatJwcUUG2QOwtAVzxeFhTN6ua5qAkwbr5ZzL
	Y4Dn0EmZLJnV6/WvPN0W7dA=
X-Google-Smtp-Source: AKy350YG/84D/pba4O2jep64IWrC+mjA+ueyHqImP8NgwjI0qVBUMRNq8Z2uYJzIP+ywt0PW/sWA2w==
X-Received: by 2002:a17:90a:8048:b0:247:78c0:125e with SMTP id e8-20020a17090a804800b0024778c0125emr8083390pjw.15.1681764801496;
        Mon, 17 Apr 2023 13:53:21 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 26/33] nios2: Convert __pte_free_tlb() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:41 -0700
Message-Id: <20230417205048.15870-27-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/nios2/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h
index ecd1657bb2ce..ed868f4c0ca9 100644
--- a/arch/nios2/include/asm/pgalloc.h
+++ b/arch/nios2/include/asm/pgalloc.h
@@ -28,10 +28,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
 
 extern pgd_t *pgd_alloc(struct mm_struct *mm);
 
-#define __pte_free_tlb(tlb, pte, addr)				\
-	do {							\
-		pgtable_pte_page_dtor(pte);			\
-		tlb_remove_page((tlb), (pte));			\
+#define __pte_free_tlb(tlb, pte, addr)					\
+	do {								\
+		ptdesc_pte_dtor(page_ptdesc(pte));			\
+		tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 	} while (0)
 
 #endif /* _ASM_NIOS2_PGALLOC_H */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:54:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:54:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522358.811746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrR-0002gG-Kt; Mon, 17 Apr 2023 20:54:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522358.811746; Mon, 17 Apr 2023 20:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVrQ-0002XP-4O; Mon, 17 Apr 2023 20:54:04 +0000
Received: by outflank-mailman (input) for mailman id 522358;
 Mon, 17 Apr 2023 20:53:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqu-0005NG-P4
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:32 +0000
Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com
 [2607:f8b0:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e8ad6386-dd61-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 22:53:31 +0200 (CEST)
Received: by mail-pl1-x62c.google.com with SMTP id n17so11378174pln.8
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:31 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8ad6386-dd61-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764810; x=1684356810;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=m8/b+eKq2obnq3wrn4AaChyOhPXbZKipyE2UTOB7XHg=;
        b=RTJID0Q/WW7iedj/afuCWToOfH/aBrY/NrWNeyN7ByP/usee9y9Bzy2eV2Pryddh5k
         SgN8rXjbYlFESamvA2kpEGUYeNDAEQAcb5DBJGBzqZ39H1PUjeRb3TeWmdsscSkj+duB
         XdhsoeufQBiAlQ1VDbbJWcUzg1wdK3pe72OAkzLWRjQ1C0S075Nqq87qXF8IdfaSlDxW
         +Bud3Bjv7jwgjZOfa7R1sSZhrnf0y6c/vZkSvJ9Qym0uRrXk7trecdbr2o902twf3Tei
         jaw/7g2aCCpoNC5Ttid57NGpw+n0jzjqhu5TM+KCbKbeUljRTtHuGZwGu624c+XnB9eN
         r1BQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764810; x=1684356810;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=m8/b+eKq2obnq3wrn4AaChyOhPXbZKipyE2UTOB7XHg=;
        b=LE8wa4Aq5OJBQIcsjvoXpxwcapTbQV1veOtygpoAsHAJfXKDq3cX4a7IM2VFSo8EbV
         EMXUQ8/DceidAt8hgkIs+O430j09LeX4tBP2s1tDmfLqeRHTMn/dv1WBgfvP3WRvGiq+
         W58J60YCWutNm8JXsbV1gzksrjCA8uQR/uE4ylA/xQpJtc+D5XlhY6kJ92vH6HTTZw9e
         A9IQjbkQFuqzGfSUAtKASyb4EhkosCVjgCH8Cp7IMg1XjAlYkTJBwAzIFLczOSbq/hVz
         Iya6jMeC8AlgOzDIe6JRIIs7cJetO5SrTbSeHx/n8uLYjZpsWDRd71FMd/ATZb64CS6s
         iw2A==
X-Gm-Message-State: AAQBX9fuQDv93I0ibwZ4/rSfCIMndtHALVB9+4npEPg/Q+G19FwJFTXu
	/OacaiZHoRXoJjdSURvoeOU=
X-Google-Smtp-Source: AKy350Ygryz7WWGm7uHh1637qBCeDzWZK5X9wTShPkAHp+hvTpFfVrAPSXU4dDYY+IJtVCH6njxkhg==
X-Received: by 2002:a17:90a:4144:b0:240:973d:b436 with SMTP id m4-20020a17090a414400b00240973db436mr14169767pjg.49.1681764809815;
        Mon, 17 Apr 2023 13:53:29 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 32/33] um: Convert {pmd, pte}_free_tlb() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:47 -0700
Message-Id: <20230417205048.15870-33-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents. Also cleans up some spacing issues.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/um/include/asm/pgalloc.h | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h
index 8ec7cd46dd96..760b029505c1 100644
--- a/arch/um/include/asm/pgalloc.h
+++ b/arch/um/include/asm/pgalloc.h
@@ -25,19 +25,19 @@
  */
 extern pgd_t *pgd_alloc(struct mm_struct *);
 
-#define __pte_free_tlb(tlb,pte, address)		\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb),(pte));			\
+#define __pte_free_tlb(tlb, pte, address)			\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #ifdef CONFIG_3_LEVEL_PGTABLES
 
-#define __pmd_free_tlb(tlb, pmd, address)		\
-do {							\
-	pgtable_pmd_page_dtor(virt_to_page(pmd));	\
-	tlb_remove_page((tlb),virt_to_page(pmd));	\
-} while (0)						\
+#define __pmd_free_tlb(tlb, pmd, address)			\
+do {								\
+	ptdesc_pmd_dtor(virt_to_ptdesc(pmd));			\
+	tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd));	\
+} while (0)
 
 #endif
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522405.811811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwE-0000cl-PX; Mon, 17 Apr 2023 20:59:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522405.811811; Mon, 17 Apr 2023 20:59:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwE-0000Zr-0X; Mon, 17 Apr 2023 20:59:02 +0000
Received: by outflank-mailman (input) for mailman id 522405;
 Mon, 17 Apr 2023 20:59:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqq-0005M2-P6
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:28 +0000
Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com
 [2607:f8b0:4864:20::1029])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6f33626-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:28 +0200 (CEST)
Received: by mail-pj1-x1029.google.com with SMTP id
 cm18-20020a17090afa1200b0024713adf69dso15386300pjb.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:28 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6f33626-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764807; x=1684356807;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=uIsUNYOwSPNxjYFxvF4LgB4Ah8JH2T7Nmtlw8Cz+gsw=;
        b=fsz9HBXpyixYv0zGEIQNfLPb4EjMqtI/28pihon+vRduZpglxs4kGo7QlnsgzbeVep
         MUefXrtmBIPAEKph1/xE0Hixz+XFSj9eLb4OXqcCPGwVWZta2b+6OV8DcUNUc3MYWho8
         foQqBfLzT6QF4GBhL/zISw/ZQfIxqSnZu2sd2q6CE2U54ygVclXX08i47edCNHvoHxB+
         sBBNY+ovhIlsXyYoYKNVSm2boGQ3Di1szM5293gnTSGOnWYTxtAAUVimXGr6dZV0Zyrz
         TtkMrcCltGZOVK2SbO9vEj4TQWwRgKIU6/H0ozoMQVL6Kfmq0ZrnJgq0p7Ga0/De5Qi0
         sDcA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764807; x=1684356807;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=uIsUNYOwSPNxjYFxvF4LgB4Ah8JH2T7Nmtlw8Cz+gsw=;
        b=X+dKtr3i+9ru0YPT86uu/dTFW8aoSATMbfcUPE1a6tYtHB0c87LBYvB2o1OlqOPDSz
         RrNauRVAiOvwBZlSwU+RrlLbcM8+Lv3eCc9H2XGlfz8GZPGMamYUBqvrQM5vtt44DXTB
         /KSaZ/sG8HNaDUQH/eWynA+a6200dv9NDeBPqiutHxi/+HSik1Wum5P2Gv/f/YpeN6Hc
         Q9HA3GugCpjYCuYlvQhPmKcM7coGprb/CeqYrs6ufMV1xJ2QYJlKxgMlRZV/KCC4Xz0X
         tTCTVgSmgrGqi4gIynEHa436Lvc5vOwLyIH4x+sbDZOPUom2cgQA7y6+heP07n5fr90W
         kibw==
X-Gm-Message-State: AAQBX9cprxw4//Sw2VEWnLTlPwIT3haCPrpX4ARYlTuTwdy3RuqeJgAR
	OSr55XTaXFUhT5wiPtiKMFzroBbx3AOW2Q==
X-Google-Smtp-Source: AKy350YfnaT8KE6LOLQ9wRoeSIO99cnRXYKnv4nnzRXnI3u1QAONlMDFC8rrZFtHP4TZD3hWsCKEag==
X-Received: by 2002:a17:90a:fa3:b0:237:3dfb:9095 with SMTP id 32-20020a17090a0fa300b002373dfb9095mr16284735pjz.6.1681764806946;
        Mon, 17 Apr 2023 13:53:26 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 30/33] sparc64: Convert various functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:45 -0700
Message-Id: <20230417205048.15870-31-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/sparc/mm/init_64.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 04f9db0c3111..eedb3e03b1fe 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2893,14 +2893,15 @@ pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
 
 pgtable_t pte_alloc_one(struct mm_struct *mm)
 {
-	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-	if (!page)
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
-		__free_page(page);
+	if (!ptdesc_pte_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
-	return (pte_t *) page_address(page);
+	return (pte_t *) ptdesc_address(ptdesc);
 }
 
 void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
@@ -2910,10 +2911,10 @@ void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 
 static void __pte_free(pgtable_t pte)
 {
-	struct page *page = virt_to_page(pte);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pte);
 
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 void pte_free(struct mm_struct *mm, pgtable_t pte)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522403.811800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwD-0000Rw-TL; Mon, 17 Apr 2023 20:59:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522403.811800; Mon, 17 Apr 2023 20:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwD-0000PL-EZ; Mon, 17 Apr 2023 20:59:01 +0000
Received: by outflank-mailman (input) for mailman id 522403;
 Mon, 17 Apr 2023 20:59:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqv-0005NG-P9
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:33 +0000
Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com
 [2607:f8b0:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e993bcd0-dd61-11ed-8611-37d641c3527e;
 Mon, 17 Apr 2023 22:53:31 +0200 (CEST)
Received: by mail-pl1-x633.google.com with SMTP id p8so27389711plk.9
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:31 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e993bcd0-dd61-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764811; x=1684356811;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=lI4mQ87WoWgY+PGEzL+RyVT2V9KzYHmGTUoQkkpMa78=;
        b=OlBOZiQXTEQ2yJNCXWXVeEV70jBd/5g1C1TEFSYKGJH4RB2t5MeymRQTJkCAMhqY8X
         xT0QIMwkP3fej3rBN/YVKJNeHnVxdz5dc1MiD+bCMEF9s0dHv6uN4wyDmNsH2Y2Ym+iY
         2ZNsAP/RgJk51+uVp81qcQOj+bGNImtPBjaen/JrqoOw/PBUibNGoaM/ZCPpfG5uzvkW
         XBTMu6ICJd2ADipenkzYOjnqUxYIKWTiKPtzD1nHG87S/riR67Qes5ac47aDjBNBedgZ
         nWtoSmVXLxbkA7A5m2dqK2dJ5Frc5F3qBCVwCqlbD2BXSOmg77x4kXyTkenC4W3QlTw8
         reIw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764811; x=1684356811;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=lI4mQ87WoWgY+PGEzL+RyVT2V9KzYHmGTUoQkkpMa78=;
        b=e308nrmnFee7IpR+yc/1yogBW0TbofTGSdGNhLxQFZBwhewQRwiWZkZYBCUg5v4X6Y
         DC83eJA0gOuwkPtLPVcfWZQpAu22un2w8psuslUN0dAMYSJ2/5mXtJHQ1FEr1YOg3KQv
         dgDC9SmBXIMUIk+KhpV+QB6R0aIMQFwlxxRutMTYXLxkZm+YMfNgu/RI0FyyYrKLbSuc
         +xtWxLDMfGuyMhCX2qPaMIZIuvBqfHjm87NVaEi5YV8NlE3329YqMp8OtqXjPLu7zgUP
         t2T0556PdS1ho/02nBe6s7tNisapCwIl+BtWvdcPsc3i8Tf7NlWlvTRxjakJZsscEoCW
         CDIw==
X-Gm-Message-State: AAQBX9eDwPOwlZ3tlB9bqbmxRz2pwtS1uxWs/8Ry5lyNbXkJkLHt3y/T
	sS1YZALntvwEL316JJknYjewdvTqIVcdoA==
X-Google-Smtp-Source: AKy350bto7gkYUEXZmV5HcAXbLEbfMM+TO53YOoTMdsdAnKO7c7cFoDN5d9Q3fv9Tr0h94wXesVw1Q==
X-Received: by 2002:a17:902:ec82:b0:1a1:b137:4975 with SMTP id x2-20020a170902ec8200b001a1b1374975mr256275plg.49.1681764811306;
        Mon, 17 Apr 2023 13:53:31 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 33/33] mm: Remove pgtable_{pmd, pte}_page_{ctor, dtor}() wrappers
Date: Mon, 17 Apr 2023 13:50:48 -0700
Message-Id: <20230417205048.15870-34-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

These functions are no longer necessary. Remove them and clean up the
Documentation that references them.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 Documentation/mm/split_page_table_lock.rst    | 12 +++++------
 .../zh_CN/mm/split_page_table_lock.rst        | 14 ++++++-------
 include/linux/mm.h                            | 20 -------------------
 3 files changed, 13 insertions(+), 33 deletions(-)

diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
index 50ee0dfc95be..b3c612183135 100644
--- a/Documentation/mm/split_page_table_lock.rst
+++ b/Documentation/mm/split_page_table_lock.rst
@@ -53,7 +53,7 @@ Support of split page table lock by an architecture
 ===================================================
 
 There's no need in special enabling of PTE split page table lock: everything
-required is done by pgtable_pte_page_ctor() and pgtable_pte_page_dtor(), which
+required is done by ptdesc_pte_ctor() and ptdesc_pte_dtor(), which
 must be called on PTE table allocation / freeing.
 
 Make sure the architecture doesn't use slab allocator for page table
@@ -63,8 +63,8 @@ This field shares storage with page->ptl.
 PMD split lock only makes sense if you have more than two page table
 levels.
 
-PMD split lock enabling requires pgtable_pmd_page_ctor() call on PMD table
-allocation and pgtable_pmd_page_dtor() on freeing.
+PMD split lock enabling requires ptdesc_pmd_ctor() call on PMD table
+allocation and ptdesc_pmd_dtor() on freeing.
 
 Allocation usually happens in pmd_alloc_one(), freeing in pmd_free() and
 pmd_free_tlb(), but make sure you cover all PMD table allocation / freeing
@@ -72,7 +72,7 @@ paths: i.e X86_PAE preallocate few PMDs on pgd_alloc().
 
 With everything in place you can set CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK.
 
-NOTE: pgtable_pte_page_ctor() and pgtable_pmd_page_ctor() can fail -- it must
+NOTE: ptdesc_pte_ctor() and ptdesc_pmd_ctor() can fail -- it must
 be handled properly.
 
 page->ptl
@@ -92,7 +92,7 @@ trick:
    split lock with enabled DEBUG_SPINLOCK or DEBUG_LOCK_ALLOC, but costs
    one more cache line for indirect access;
 
-The spinlock_t allocated in pgtable_pte_page_ctor() for PTE table and in
-pgtable_pmd_page_ctor() for PMD table.
+The spinlock_t allocated in ptdesc_pte_ctor() for PTE table and in
+ptdesc_pmd_ctor() for PMD table.
 
 Please, never access page->ptl directly -- use appropriate helper.
diff --git a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
index 4fb7aa666037..a3323eb9dc40 100644
--- a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
+++ b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
@@ -56,16 +56,16 @@ Hugetlb特定的辅助函数:
 架构对分页表锁的支持
 ====================
 
-没有必要特别启用PTE分页表锁：所有需要的东西都由pgtable_pte_page_ctor()
-和pgtable_pte_page_dtor()完成，它们必须在PTE表分配/释放时被调用。
+没有必要特别启用PTE分页表锁：所有需要的东西都由ptdesc_pte_ctor()
+和ptdesc_pte_dtor()完成，它们必须在PTE表分配/释放时被调用。
 
 确保架构不使用slab分配器来分配页表：slab使用page->slab_cache来分配其页
 面。这个区域与page->ptl共享存储。
 
 PMD分页锁只有在你有两个以上的页表级别时才有意义。
 
-启用PMD分页锁需要在PMD表分配时调用pgtable_pmd_page_ctor()，在释放时调
-用pgtable_pmd_page_dtor()。
+启用PMD分页锁需要在PMD表分配时调用ptdesc_pmd_ctor()，在释放时调
+用ptdesc_pmd_dtor()。
 
 分配通常发生在pmd_alloc_one()中，释放发生在pmd_free()和pmd_free_tlb()
 中，但要确保覆盖所有的PMD表分配/释放路径：即X86_PAE在pgd_alloc()中预先
@@ -73,7 +73,7 @@ PMD分页锁只有在你有两个以上的页表级别时才有意义。
 
 一切就绪后，你可以设置CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK。
 
-注意：pgtable_pte_page_ctor()和pgtable_pmd_page_ctor()可能失败--必
+注意：ptdesc_pte_ctor()和ptdesc_pmd_ctor()可能失败--必
 须正确处理。
 
 page->ptl
@@ -90,7 +90,7 @@ page->ptl用于访问分割页表锁，其中'page'是包含该表的页面struc
    的指针并动态分配它。这允许在启用DEBUG_SPINLOCK或DEBUG_LOCK_ALLOC的
    情况下使用分页锁，但由于间接访问而多花了一个缓存行。
 
-PTE表的spinlock_t分配在pgtable_pte_page_ctor()中，PMD表的spinlock_t
-分配在pgtable_pmd_page_ctor()中。
+PTE表的spinlock_t分配在ptdesc_pte_ctor()中，PMD表的spinlock_t
+分配在ptdesc_pmd_ctor()中。
 
 请不要直接访问page->ptl - -使用适当的辅助函数。
diff --git a/include/linux/mm.h b/include/linux/mm.h
index cb136d2fdf74..e08638dc58cf 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2858,11 +2858,6 @@ static inline bool ptdesc_pte_ctor(struct ptdesc *ptdesc)
 	return true;
 }
 
-static inline bool pgtable_pte_page_ctor(struct page *page)
-{
-	return ptdesc_pte_ctor(page_ptdesc(page));
-}
-
 static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc)
 {
 	struct folio *folio = ptdesc_folio(ptdesc);
@@ -2872,11 +2867,6 @@ static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc)
 	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
 }
 
-static inline void pgtable_pte_page_dtor(struct page *page)
-{
-	ptdesc_pte_dtor(page_ptdesc(page));
-}
-
 #define pte_offset_map_lock(mm, pmd, address, ptlp)	\
 ({							\
 	spinlock_t *__ptl = pte_lockptr(mm, pmd);	\
@@ -2967,11 +2957,6 @@ static inline bool ptdesc_pmd_ctor(struct ptdesc *ptdesc)
 	return true;
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page)
-{
-	return ptdesc_pmd_ctor(page_ptdesc(page));
-}
-
 static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc)
 {
 	struct folio *folio = ptdesc_folio(ptdesc);
@@ -2981,11 +2966,6 @@ static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc)
 	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
 }
 
-static inline void pgtable_pmd_page_dtor(struct page *page)
-{
-	ptdesc_pmd_dtor(page_ptdesc(page));
-}
-
 /*
  * No scalability reason to split PUD locks yet, but follow the same pattern
  * as the PMD locks to make it easier if we decide to.  The VM should not be
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522402.811794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwD-0000Fs-Cb; Mon, 17 Apr 2023 20:59:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522402.811794; Mon, 17 Apr 2023 20:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwC-0000Da-Rk; Mon, 17 Apr 2023 20:59:00 +0000
Received: by outflank-mailman (input) for mailman id 522402;
 Mon, 17 Apr 2023 20:58:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqm-0005M2-OE
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:24 +0000
Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com
 [2607:f8b0:4864:20::1030])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e4875a46-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:24 +0200 (CEST)
Received: by mail-pj1-x1030.google.com with SMTP id
 l9-20020a17090a3f0900b0023d32684e7fso256114pjc.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:24 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4875a46-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764803; x=1684356803;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=M2PCFvKq9JWcIC4Exd9oDYO0v0PQOHyXRap+yEgbOb4=;
        b=Sg9gXChNrNLShRRgfwOKl/OTTCvmKSrllpuIiui1MavGQQvyi4/hV9+QaVJ3t9hABY
         7LNyY9zC8MZPe/8uyGF6BwZFFCebqKsRkCF7WD2/MczVElNAeTdXAZoU8m9zjTh6P7Zy
         0Ui6eydpR2I1tZyrhRUO19d2JsU/fHL/oyFe81PNxO86Jxw/ya4bgHlBy023YgAYk0Ea
         ro6AAOx594UY8fA/m7D7bEpjpZMo/caINuRe5NJsUpyuSeut+/oOBeuT5hXbXmkAYI67
         Pav/++zqcV0Pbp5FUcdmVoTNUO7tgK9EwNCc3eK51W0dnm+P+Og2+jgMY/+82hRUmWz8
         imHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764803; x=1684356803;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=M2PCFvKq9JWcIC4Exd9oDYO0v0PQOHyXRap+yEgbOb4=;
        b=OQt8SMvintAfiCdJYwSCvZYc7J/Ev0H/3vmQYqW+BehYmsAcbcSCx7Pa2DvVHXYXpE
         nelAoJ4RsxE1nMyhvlMOrrDemBHgz+DwM4o2eZiDv2DNsjcmVqeOzZNxflhj2677+re5
         ldmq54h2JKxyf4OGOR6yarzcMZtj43Y2rOPfsmoE8dyqUHTlHZECYODx759gTOj7+Gue
         5ybw9o4uY2jU4t9vvLXfV9sqxvxb2MKohkB3FK/BnWC56nEtsJkwt196d8ybpTqDJP20
         Wrfp5gHU2yO98Tgl5lP48q3XtUG1wNLesPc2xOlG1PdgtlM9BCncnYReAedksJ2oZz/4
         fGUg==
X-Gm-Message-State: AAQBX9eVKbKt7TQZ6IvdhhQpo7Zfo24udMZWmegWuwQsKUepvfM/lDha
	nKKcDbec9Dq3h4UNysPHIdM=
X-Google-Smtp-Source: AKy350ZH2s2lfJdxSYj+T2S9Ef0UoYzkgLYY3eeGq1S7Q2hAbd0lFGX52BFTYQkOd2khPVGSck6LXA==
X-Received: by 2002:a17:90a:e28f:b0:247:4200:7432 with SMTP id d15-20020a17090ae28f00b0024742007432mr11696070pjz.40.1681764802858;
        Mon, 17 Apr 2023 13:53:22 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 27/33] openrisc: Convert __pte_free_tlb() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:42 -0700
Message-Id: <20230417205048.15870-28-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert __pte_free_tlb() to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/openrisc/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h
index b7b2b8d16fad..14e641686281 100644
--- a/arch/openrisc/include/asm/pgalloc.h
+++ b/arch/openrisc/include/asm/pgalloc.h
@@ -66,10 +66,10 @@ extern inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm);
 
-#define __pte_free_tlb(tlb, pte, addr)	\
-do {					\
-	pgtable_pte_page_dtor(pte);	\
-	tlb_remove_page((tlb), (pte));	\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #endif
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522398.811779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwC-0008Vv-Dq; Mon, 17 Apr 2023 20:59:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522398.811779; Mon, 17 Apr 2023 20:59:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwC-0008VG-8i; Mon, 17 Apr 2023 20:59:00 +0000
Received: by outflank-mailman (input) for mailman id 522398;
 Mon, 17 Apr 2023 20:58:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqn-0005M2-OZ
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:25 +0000
Received: from mail-pj1-x102c.google.com (mail-pj1-x102c.google.com
 [2607:f8b0:4864:20::102c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e557d221-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:24 +0200 (CEST)
Received: by mail-pj1-x102c.google.com with SMTP id
 k36-20020a17090a4ca700b0024770df9897so6600486pjh.4
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:24 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e557d221-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764804; x=1684356804;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=uzUB7crKqn7Of69Iz0IiuKaVcaC8bpzNVtqjj02cJJs=;
        b=CQxyKkc21JMWEGZgaxo+DzkT7hmUe/EPeRLNy9307TK3NRBVO8qYFVHUBh0a+wrrFK
         REDRofZ4pQ77HB/xKTTnWP/BT1cyKC/pHZN9NhEm5iKL1tflp83IUSNn/z3JIjWng98Y
         6RfpzxV5XfPWxGKOX4DdYV/wLmmlXNt332CwEYLgb9fOwCusASfTkzDGq48OmDuUGGpq
         tIsA4gMM5D4jkK0NC3tYS5RarJ0ZWQtkI1tM39+DRuLci+gK44bt1F+fDRaPwS3T5T9g
         S4uLRylS37AnplaeXGFnk7T6Qzad8RMawrqwBO6UKOhHtptnemCZZZWNN4xJh7uqqhEQ
         vrGQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764804; x=1684356804;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=uzUB7crKqn7Of69Iz0IiuKaVcaC8bpzNVtqjj02cJJs=;
        b=WAn06BLUCLJJLtM5i6kNg6GkZMNWnyY9gmd9QQt0eiTQritRCey7jZ7trMo1/SEKzS
         FhjmtMp4cfH4C4r9hAlrs81EzNYQPpqvojQPNDQA4nBfOM5Sk/K9XDzQQVsA3TWCaLXI
         Y3xVjC83uwJ2Dsm0lP9ETtX33P+yBccz7Qo8BmO/KB1LXT//ABLn3O//IZNJAN/g3X1S
         mKNLrdyV02UVYM9uzH7Gnf67pk13WOvfBjpRsL52BRsFUMcmejJojAHNiOJWBgmygouN
         WUXHiBwTTxfR8qx0HFsfHYTY+6aIN/6nwjHmyvSItBJbG7Zr57nLH2umiLz6XdmCosxp
         SGKQ==
X-Gm-Message-State: AAQBX9cwnhSvrt7/ZAoQVVsSSFABziVZnoTZ8AGqGp+mpgWr7IP2grXU
	oBRGb1rssrIqdOQPXBZ0HPc=
X-Google-Smtp-Source: AKy350avL9PdXaEjhmJuV1ybk5n54BLKq2822oDesoZKEo2GpNTKDMiGoy71eNPEPZjYfA/nM1qr3w==
X-Received: by 2002:a17:90a:1286:b0:246:bb61:4a5b with SMTP id g6-20020a17090a128600b00246bb614a5bmr16053424pja.8.1681764804243;
        Mon, 17 Apr 2023 13:53:24 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 28/33] riscv: Convert alloc_{pmd, pte}_late() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:43 -0700
Message-Id: <20230417205048.15870-29-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/riscv/include/asm/pgalloc.h |  8 ++++----
 arch/riscv/mm/init.c             | 16 ++++++----------
 2 files changed, 10 insertions(+), 14 deletions(-)

diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index 59dc12b5b7e8..cb5536403bd8 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -153,10 +153,10 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-#define __pte_free_tlb(tlb, pte, buf)   \
-do {                                    \
-	pgtable_pte_page_dtor(pte);     \
-	tlb_remove_page((tlb), pte);    \
+#define __pte_free_tlb(tlb, pte, buf)			\
+do {							\
+	ptdesc_pte_dtor(page_ptdesc(pte));		\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));\
 } while (0)
 #endif /* CONFIG_MMU */
 
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 0f14f4a8d179..2737cbc4ad12 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -346,12 +346,10 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va)
 
 static phys_addr_t __init alloc_pte_late(uintptr_t va)
 {
-	unsigned long vaddr;
-
-	vaddr = __get_free_page(GFP_KERNEL);
-	BUG_ON(!vaddr || !pgtable_pte_page_ctor(virt_to_page(vaddr)));
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
 
-	return __pa(vaddr);
+	BUG_ON(!ptdesc || !ptdesc_pte_ctor(ptdesc));
+	return __pa((pte_t *)ptdesc_address(ptdesc));
 }
 
 static void __init create_pte_mapping(pte_t *ptep,
@@ -429,12 +427,10 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va)
 
 static phys_addr_t __init alloc_pmd_late(uintptr_t va)
 {
-	unsigned long vaddr;
-
-	vaddr = __get_free_page(GFP_KERNEL);
-	BUG_ON(!vaddr || !pgtable_pmd_page_ctor(virt_to_page(vaddr)));
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
 
-	return __pa(vaddr);
+	BUG_ON(!ptdesc || !ptdesc_pmd_ctor(ptdesc));
+	return __pa((pmd_t *)ptdesc_address(ptdesc));
 }
 
 static void __init create_pmd_mapping(pmd_t *pmdp,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522399.811786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwC-0000AX-QG; Mon, 17 Apr 2023 20:59:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522399.811786; Mon, 17 Apr 2023 20:59:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwC-00008F-HR; Mon, 17 Apr 2023 20:59:00 +0000
Received: by outflank-mailman (input) for mailman id 522399;
 Mon, 17 Apr 2023 20:58:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqb-0005M2-Tu
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:13 +0000
Received: from mail-pl1-x634.google.com (mail-pl1-x634.google.com
 [2607:f8b0:4864:20::634])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id de9d8b95-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:13 +0200 (CEST)
Received: by mail-pl1-x634.google.com with SMTP id kh6so25753854plb.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:13 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de9d8b95-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764793; x=1684356793;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=oJJ/5JNaAhhn0BtCWyESRHZW29NfaaBB7RT6Dtpfjog=;
        b=VrQ0lWqrs/ZeVvxhrMkEnri25zysQvP6rvUVlziuAvcyoO7aiCs2Ip8nrZdL+4djlt
         EalUxXukap5u9TcR0Ypqd1nj4oCE7Zy1crkVgOFcBvFvQfkcdrgLVV5yu4aZ8sXiNQzS
         MzbdyHaLybqiTpdHLCapkJdM135ufCzQfSX7LCwPasNWotwzcPJsMf59UKAQgZOOczBx
         ZYZuB1ddeu/80rtF2gv16gY+bwHM4NdL4pbXt10qFjUxYFen/hCDj/7OzFab+Hq/fc/w
         UWRA/d9TiIVFZRkxxpOQPugklzg3SfxU/z6fpq4Q2x5DyNF+cvnkXmPJMDWg2KdcBHj4
         X1Mg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764793; x=1684356793;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=oJJ/5JNaAhhn0BtCWyESRHZW29NfaaBB7RT6Dtpfjog=;
        b=cBxTBuILQz+ECMYWhaBioulgunJltiGrJl/PcRMJShYjUWAcTwM/CGcab3KJ7bKCFn
         GiZe9vw9gowy+CRtLgq+JG0INcsUMchih7j22qRYvn7NmYF1ABnvtrrPG0cO51dbaECT
         ZT5jct4hdt4CwbnH9ewYsvoGIZIZQLxnlkpLwv2NhiLnEPHr/1GlQXweAD3HDubz0HGF
         5W71GzxBY4BIkYOPlXWpWHioZQEf6y8tSacTiayPYfYvH50RcWQtump5pMDnWdXjG8XE
         aTOTYWHdlO8Es4uhA8PkoduUvAfFa+030d5g3EBZVxHtHrGwnymUGUiDyEgnWV2cn7fY
         sZ1w==
X-Gm-Message-State: AAQBX9flsa6Wg0Qrw0ADjsmmjIFIvyzgbzGW3OcId3U4ezuDNW0VTDUt
	9mqJY9eUlQ4KqxRTchXhOkE=
X-Google-Smtp-Source: AKy350Y2w7IaE9PrAymAnxztuomQd4Ga7koe4Zw+ByQfv97VTbICRP51vtaSBBH4u41LkAvC07UcrA==
X-Received: by 2002:a17:902:bd86:b0:1a6:c6d4:5586 with SMTP id q6-20020a170902bd8600b001a6c6d45586mr227041pls.13.1681764792942;
        Mon, 17 Apr 2023 13:53:12 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 20/33] arm64: Convert various functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:35 -0700
Message-Id: <20230417205048.15870-21-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/arm64/include/asm/tlb.h | 14 ++++++++------
 arch/arm64/mm/mmu.c          |  7 ++++---
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index c995d1f4594f..6cb70c247e30 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -75,18 +75,20 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 				  unsigned long addr)
 {
-	pgtable_pte_page_dtor(pte);
-	tlb_remove_table(tlb, pte);
+	struct ptdesc *ptdesc = page_ptdesc(pte);
+
+	ptdesc_pte_dtor(ptdesc);
+	tlb_remove_ptdesc(tlb, ptdesc);
 }
 
 #if CONFIG_PGTABLE_LEVELS > 2
 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 				  unsigned long addr)
 {
-	struct page *page = virt_to_page(pmdp);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmdp);
 
-	pgtable_pmd_page_dtor(page);
-	tlb_remove_table(tlb, page);
+	ptdesc_pmd_dtor(ptdesc);
+	tlb_remove_ptdesc(tlb, ptdesc);
 }
 #endif
 
@@ -94,7 +96,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
 				  unsigned long addr)
 {
-	tlb_remove_table(tlb, virt_to_page(pudp));
+	tlb_remove_ptdesc(tlb, virt_to_ptdesc(pudp));
 }
 #endif
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index af6bc8403ee4..5ba005fd607e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -426,6 +426,7 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
 static phys_addr_t pgd_pgtable_alloc(int shift)
 {
 	phys_addr_t pa = __pgd_pgtable_alloc(shift);
+	struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa));
 
 	/*
 	 * Call proper page table ctor in case later we need to
@@ -433,12 +434,12 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 	 * this pre-allocated page table.
 	 *
 	 * We don't select ARCH_ENABLE_SPLIT_PMD_PTLOCK if pmd is
-	 * folded, and if so pgtable_pmd_page_ctor() becomes nop.
+	 * folded, and if so ptdesc_pmd_ctor() becomes nop.
 	 */
 	if (shift == PAGE_SHIFT)
-		BUG_ON(!pgtable_pte_page_ctor(phys_to_page(pa)));
+		BUG_ON(!ptdesc_pte_ctor(ptdesc));
 	else if (shift == PMD_SHIFT)
-		BUG_ON(!pgtable_pmd_page_ctor(phys_to_page(pa)));
+		BUG_ON(!ptdesc_pmd_ctor(ptdesc));
 
 	return pa;
 }
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522397.811776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwC-0008TU-7l; Mon, 17 Apr 2023 20:59:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522397.811776; Mon, 17 Apr 2023 20:59:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwC-0008TH-0k; Mon, 17 Apr 2023 20:59:00 +0000
Received: by outflank-mailman (input) for mailman id 522397;
 Mon, 17 Apr 2023 20:58:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqa-0005M2-MZ
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:12 +0000
Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com
 [2607:f8b0:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ddd561f7-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:12 +0200 (CEST)
Received: by mail-pl1-x62c.google.com with SMTP id d15so10119811pll.12
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:12 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddd561f7-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764791; x=1684356791;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gfpIRfFq9zSU9T7uNfhlcYr/W6dIxhZkZgP28wF3SDA=;
        b=gYiTGJsq3tS0FDSy3/Ogj5YUQaoNGj6XWEQGKGQUuQuqn1egVSKA3seChXbxd6ie6X
         KVLVgtxbt+Gl9lkEwLQ+F+iG9Kz1ErH6Jdv2PrLPdPwp/UgBJZIaGw67YMksWIwKnmyC
         7jVlh7AKx280dtpirqSUFhv9Pyd2cc0T8yO2lBO5mVRpibO3YWd6y8PCZ2jFDrnw2dOM
         qIyvjKI6SfUletTh72Za02rULTtpxgw/RsbeOg0p8ic4ZkVJUUvnJu+rLoRN8U3nnoDL
         0yuCtQCJj2oWJ5XTJVsuTcfyXkKiXbSXxA+TapEEDARVJ5R07j+Rkpo9klzzaVdh3pXw
         NvCg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764791; x=1684356791;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=gfpIRfFq9zSU9T7uNfhlcYr/W6dIxhZkZgP28wF3SDA=;
        b=XBifD4YBTX0xJl5DMVH9amP2GDDWDAi43EUPC6Hm5XtPHfIVUC1nvpdTeC6EC1365X
         Xz6niuccyZvbd78jdf7utFPVzCEk3b9hemAcU28jhItTOGK0QzKeWDwl0I3YKj0ElByG
         2iOnQEP3auBO9b4oq+cI1fnp9DyJz1Yj+iOJTSwrCSOjjLhRf/2U45G/osa6I1WzmwBZ
         0j5C/rSPmcbHqcfyjLQgN16uLrSX2cjFJzS+svb75B/GLOHvH6rcAlxVXritK3qazivy
         sPwZefB7hxoZZfEKAkAWm7pz9aTJnyQAC9s5XXYxptlTB5BWEdNq0ikc4VA4+uOd7nhS
         hwfA==
X-Gm-Message-State: AAQBX9fyQGkP7ecAmC2HSjRiNpej5Iu8AkpH3IvAfJKAIBgANX/ZDHv+
	RVXtHyRy+xH4Awq520BV014=
X-Google-Smtp-Source: AKy350bvWV8yzhiozqrrwWcRpctJK00ZbC2YeKZYOK2IgaSQcSotd8fWuCQxrW1E1m+ogfO6p1yZ/Q==
X-Received: by 2002:a17:902:b789:b0:1a6:8024:321e with SMTP id e9-20020a170902b78900b001a68024321emr189655pls.34.1681764791672;
        Mon, 17 Apr 2023 13:53:11 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 19/33] arm: Convert various functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:34 -0700
Message-Id: <20230417205048.15870-20-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

late_alloc() also uses the __get_free_pages() helper function. Convert
this to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/arm/include/asm/tlb.h | 12 +++++++-----
 arch/arm/mm/mmu.c          |  6 +++---
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index b8cbe03ad260..9ab8a6929d35 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -39,7 +39,9 @@ static inline void __tlb_remove_table(void *_table)
 static inline void
 __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr)
 {
-	pgtable_pte_page_dtor(pte);
+	struct ptdesc *ptdesc = page_ptdesc(pte);
+
+	ptdesc_pte_dtor(ptdesc);
 
 #ifndef CONFIG_ARM_LPAE
 	/*
@@ -50,17 +52,17 @@ __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr)
 	__tlb_adjust_range(tlb, addr - PAGE_SIZE, 2 * PAGE_SIZE);
 #endif
 
-	tlb_remove_table(tlb, pte);
+	tlb_remove_ptdesc(tlb, ptdesc);
 }
 
 static inline void
 __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
 {
 #ifdef CONFIG_ARM_LPAE
-	struct page *page = virt_to_page(pmdp);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmdp);
 
-	pgtable_pmd_page_dtor(page);
-	tlb_remove_table(tlb, page);
+	ptdesc_pmd_dtor(ptdesc);
+	tlb_remove_ptdesc(tlb, ptdesc);
 #endif
 }
 
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 463fc2a8448f..7add505bd797 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -737,11 +737,11 @@ static void __init *early_alloc(unsigned long sz)
 
 static void *__init late_alloc(unsigned long sz)
 {
-	void *ptr = (void *)__get_free_pages(GFP_PGTABLE_KERNEL, get_order(sz));
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_PGTABLE_KERNEL, get_order(sz));
 
-	if (!ptr || !pgtable_pte_page_ctor(virt_to_page(ptr)))
+	if (!ptdesc || !ptdesc_pte_ctor(ptdesc))
 		BUG();
-	return ptr;
+	return ptdesc_address(ptdesc);
 }
 
 static pte_t * __init arm_pte_alloc(pmd_t *pmd, unsigned long addr,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522409.811834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwH-0001cr-HX; Mon, 17 Apr 2023 20:59:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522409.811834; Mon, 17 Apr 2023 20:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwH-0001bd-BC; Mon, 17 Apr 2023 20:59:05 +0000
Received: by outflank-mailman (input) for mailman id 522409;
 Mon, 17 Apr 2023 20:59:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqr-0005M2-Dc
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:29 +0000
Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com
 [2607:f8b0:4864:20::1031])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e7e36d1f-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:29 +0200 (CEST)
Received: by mail-pj1-x1031.google.com with SMTP id
 z11-20020a17090abd8b00b0024721c47ceaso14042290pjr.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:28 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7e36d1f-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764808; x=1684356808;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=CSVYEUhM0zGHlDR1yqa5KIpwwPynV1rh1Z3AvIT239o=;
        b=Bs6MBnbjQkX1GScZCbLklar3CQYsHMcNQJhdEJMSRoz85XciXhpnQ0YtDU9kdG/nWk
         TxMCtPr19eviQJI5aQ6Z23KjvD0/UYdwUUF801SK8Z3tk/XuPH8n6ufK4lU6OPOoTkEa
         tcxsOYaiL60cLVhSI6wekl2Ojt4T3pVi0l4/Kpd6dmP8a2GTqJPhoPlibhYfl7AzaVkC
         8dHP626PMZuKBlHUuWDeE5Ws0Xf5nvhx1+eNXqdjJYxnU1JpJLYxfuGpVyKb6jdKhgqK
         l+AozFzFnnh8JDe5VzEpQS6lDqTnIa40opwHXXYLbmcB20BMSqSIISIYF0OD142rh4KG
         4Sbw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764808; x=1684356808;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=CSVYEUhM0zGHlDR1yqa5KIpwwPynV1rh1Z3AvIT239o=;
        b=Pgnl94wXYuNbdYqhyDixr7L17R6FHnB93BCzR4GYzBpd6sbO+nA0zmUQpu1bc8DvH8
         ScW1U7Us6WwVlyQi3xkyigmitKkWduIY8DvrPfGWMPx5l8k0Y4/KssFt9+ccSm9hG9p0
         o+ZvETrSeO61aVCOOOvKFwIwkQBzUYF0KEbNZbPQYwod4XYBVdEtQiYrXwInwHkbsT0G
         OYzmK7IZ+QyFYe8XHpKUk1dQIib1ptsEMzMEgbuT5RzXStKBVyrRbbrL2lm5WNmFEojm
         PY2rMSpPsZu7L4zsuQQMQPaSD804W8/FEeKzz1mxAPE21R//7BiK/3mDatJL0WAXFSzk
         VW+g==
X-Gm-Message-State: AAQBX9crOyHh89F60C5RA5yaHWpVwD3gwNMLgg/EOYn0VV4sghITQxVL
	TGKJuLlDRrLVoZf+RD1Z7/5mfZaoxwcvQw==
X-Google-Smtp-Source: AKy350a2GcdGbRUVrvD14pA7yb7zjU30cICJUnhJQSgTB8CRLuH7Cup5yhi/f0UjvURoMBG/RaZdVQ==
X-Received: by 2002:a17:903:1206:b0:19f:1871:3dcd with SMTP id l6-20020a170903120600b0019f18713dcdmr272381plh.5.1681764808447;
        Mon, 17 Apr 2023 13:53:28 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 31/33] sparc: Convert pgtable_pte_page_{ctor, dtor}() to ptdesc equivalents
Date: Mon, 17 Apr 2023 13:50:46 -0700
Message-Id: <20230417205048.15870-32-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable pte constructor/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/sparc/mm/srmmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index 13f027afc875..964938aa7b88 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -355,7 +355,8 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
 		return NULL;
 	page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT);
 	spin_lock(&mm->page_table_lock);
-	if (page_ref_inc_return(page) == 2 && !pgtable_pte_page_ctor(page)) {
+	if (page_ref_inc_return(page) == 2 &&
+			!ptdesc_pte_ctor(page_ptdesc(page))) {
 		page_ref_dec(page);
 		ptep = NULL;
 	}
@@ -371,7 +372,7 @@ void pte_free(struct mm_struct *mm, pgtable_t ptep)
 	page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT);
 	spin_lock(&mm->page_table_lock);
 	if (page_ref_dec_return(page) == 1)
-		pgtable_pte_page_dtor(page);
+		ptdesc_pte_dtor(page_ptdesc(page));
 	spin_unlock(&mm->page_table_lock);
 
 	srmmu_free_nocache(ptep, SRMMU_PTE_TABLE_SIZE);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522413.811845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwK-00021K-2a; Mon, 17 Apr 2023 20:59:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522413.811845; Mon, 17 Apr 2023 20:59:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwJ-00020l-Rl; Mon, 17 Apr 2023 20:59:07 +0000
Received: by outflank-mailman (input) for mailman id 522413;
 Mon, 17 Apr 2023 20:59:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqf-0005M2-45
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:17 +0000
Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com
 [2607:f8b0:4864:20::1031])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e0599bb9-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:16 +0200 (CEST)
Received: by mail-pj1-x1031.google.com with SMTP id
 z11-20020a17090abd8b00b0024721c47ceaso14041257pjr.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:16 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0599bb9-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764796; x=1684356796;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=apg836Gn0N7QExyyhUUGNgc2fd2r3iyoAghRwsUjc2o=;
        b=DU+PC+Lb75+sxyzE8TFBhUq23MhtXNiuA1nw0c/GhsuV5IK3M7l/Hh4azDLq1DRa+j
         fu7HXiEzvV3T+N5Mt1is7DhkrUBl4PJsqFnRPYaX3oFVj8P6Qlk5ZLs6mljyu96mU+sv
         zpquQbUhbrKn4GUU7udC1O03s9ThOPwnMOZQlJkJdwRy8ikjPxI0naLAwtnZbb4WzEoy
         3BNpuQVnvIHEKJKSenAtt9bKOamb6q7fxWlfz+PtLFjOTztncr7nNFnGXeF7WX7UQ2CN
         r/J7luQKDmaqSpU1aG4/YKRgzEf0E8SysEU40ARw1q6PEFiJTtJVgg7dYOEMGKOTGrN2
         xyCQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764796; x=1684356796;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=apg836Gn0N7QExyyhUUGNgc2fd2r3iyoAghRwsUjc2o=;
        b=DRWHoCAWuATZ/OU6t6+Hs81ZWw0UC/AuODIxF7BZESaiaGkPLJwZeojQhwYoAjK6oa
         QXz3LKah4Ck1+5kvcC1savi+d7JlBcsYtnuuiMbnKaY56kJSIi0r/Fmu68XqUyhRD1XH
         Izqur0vh4xoUsJFQwNIu5p+8U8kEbL8fdrCCnwqVPrGrIYDo986vN5c1uZNqgHid+TtX
         AtVS4zbhp9uIh0tdEIwtv8t3+LWkHU/apKR2ZBqkf1cFq2D5ZGWMBfgEy72vP8NQpwi3
         LRPJK0ddjj9L0l3H8FbMLgrkn/HOvwZ2fKGrC4tXN3CUo//0o0Mh6hRYDGA/5LSLVUQd
         qvvQ==
X-Gm-Message-State: AAQBX9cLCQh3QHCUxSPrIOIG4Ax7V0GS+pML8V98rrNhc1G7TiCtXKYT
	OhJqok3i4rl2Jnj4AaFXHBg=
X-Google-Smtp-Source: AKy350ZcoHXe5N321oYs87Q7Ayd5DnxwU4W2BhAi27QaGMsyV7i9NrgOQAsujiVTD54cy1B75qhHjg==
X-Received: by 2002:a17:90b:3b8c:b0:23f:9d83:ad76 with SMTP id pc12-20020a17090b3b8c00b0023f9d83ad76mr12727299pjb.23.1681764795745;
        Mon, 17 Apr 2023 13:53:15 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 22/33] hexagon: Convert __pte_free_tlb() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:37 -0700
Message-Id: <20230417205048.15870-23-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/hexagon/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/hexagon/include/asm/pgalloc.h b/arch/hexagon/include/asm/pgalloc.h
index f0c47e6a7427..0f8432430e68 100644
--- a/arch/hexagon/include/asm/pgalloc.h
+++ b/arch/hexagon/include/asm/pgalloc.h
@@ -87,10 +87,10 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
 		max_kernel_seg = pmdindex;
 }
 
-#define __pte_free_tlb(tlb, pte, addr)		\
-do {						\
-	pgtable_pte_page_dtor((pte));		\
-	tlb_remove_page((tlb), (pte));		\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	ptdesc_pte_dtor((page_ptdesc(pte)));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #endif
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522414.811848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwK-00024V-If; Mon, 17 Apr 2023 20:59:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522414.811848; Mon, 17 Apr 2023 20:59:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwK-00023D-6n; Mon, 17 Apr 2023 20:59:08 +0000
Received: by outflank-mailman (input) for mailman id 522414;
 Mon, 17 Apr 2023 20:59:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqo-0005M2-Op
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:26 +0000
Received: from mail-pl1-x634.google.com (mail-pl1-x634.google.com
 [2607:f8b0:4864:20::634])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e63ab257-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:26 +0200 (CEST)
Received: by mail-pl1-x634.google.com with SMTP id kh6so25754738plb.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:26 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e63ab257-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764806; x=1684356806;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1imWRfaXT28Tln454VWhUpR4BzNKR6UHESFdZqWEUY0=;
        b=bMzg27GqJpInZYbq5g7DlCbGyMQLVTO8/WaIXUjb9+Y1sP2Yg8U4pzTwcvHRDHiFNW
         xnksAVwNdJxrKURy2fgxnULAI227N9ToKvbiocqPRvOKs7RzP8ExmOpSZqrcqZUrdfu0
         Ly3bYv1kY3GNKQVg3ZctvJCnyIezYAOXblSfIzPoK3U4x0lQOFf2Xm+WIvk4tnnJrN+1
         YHPfjXznGMNehBha6m//o8BZSI/Chio1eErrXl3QeUgRmQPwnQk3cvQT2roiB3zkJZil
         tzHED45+1PigKvsFpmle9q7jl79yHSt/NZKlZjrviShfr3b19IL3cewZ6zZfWBjcahF9
         kN0Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764806; x=1684356806;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=1imWRfaXT28Tln454VWhUpR4BzNKR6UHESFdZqWEUY0=;
        b=cK8pwtGGiDn3rBDlIpz3mK6e71n6mP98C4TAOEiqHkaeCeqFC4fGwsS2mugfrf73in
         F6CzG9+bWDBcWgKj6hFRCwyplsFnOLVK9tjFcfMG04dNMqM078rGCU7Z5Yx26ReaJJps
         /XzESYN/lsRlPK+0iCgAB6eYpw9gE69HKGBOMAk87x8SBSPta7ZPbx8f8caxyHMQknfR
         wofU2/EBUsMsULgKrgrMRTJXuTMZzFrWsLGrNYIQZpmDFwMHy1emMHXmw1t94DZiAjbv
         ah/nNjmJ3iNkshn1aPh7OlV8YGkPocYW2e16ZRpxLOb+LYttoi04SC4zcycGfIKfQHbv
         LaYw==
X-Gm-Message-State: AAQBX9e/M/WK5doa7tPgYnB0U5oofYFpYrm37Y+Zp7wzdtp3c3g0OszO
	jYQVKzH4E/SQsvFEivbvncE=
X-Google-Smtp-Source: AKy350Z5+qBQNd0Z/JNfcMKaitdnDjfxbnOB0qE080qoviMZywL0T3VtP+5OcGBInDjHaerFJSFHYg==
X-Received: by 2002:a17:90a:d142:b0:246:5fbb:43bf with SMTP id t2-20020a17090ad14200b002465fbb43bfmr15902207pjw.4.1681764805699;
        Mon, 17 Apr 2023 13:53:25 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 29/33] sh: Convert pte_free_tlb() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:44 -0700
Message-Id: <20230417205048.15870-30-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents. Also cleans up some spacing issues.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/sh/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h
index a9e98233c4d4..30e1823d2347 100644
--- a/arch/sh/include/asm/pgalloc.h
+++ b/arch/sh/include/asm/pgalloc.h
@@ -31,10 +31,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
 	set_pmd(pmd, __pmd((unsigned long)page_address(pte)));
 }
 
-#define __pte_free_tlb(tlb,pte,addr)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), (pte));			\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #endif /* __ASM_SH_PGALLOC_H */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522419.811860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwM-0002WL-Ie; Mon, 17 Apr 2023 20:59:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522419.811860; Mon, 17 Apr 2023 20:59:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwM-0002VR-8Z; Mon, 17 Apr 2023 20:59:10 +0000
Received: by outflank-mailman (input) for mailman id 522419;
 Mon, 17 Apr 2023 20:59:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqe-0005M2-3p
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:16 +0000
Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com
 [2607:f8b0:4864:20::1031])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id df69ae06-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:15 +0200 (CEST)
Received: by mail-pj1-x1031.google.com with SMTP id
 z11-20020a17090abd8b00b0024721c47ceaso14041077pjr.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:15 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df69ae06-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764794; x=1684356794;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=WInZHFLF0E37CPvH/fyQuQmpONrFi7mPGahLu3Z9gNY=;
        b=Xg6P0UpaqLjmTr9GC8q8FEqPaWuvWAOVoK//8dKOM2wu82xE/PCXSYeAATRXUJ/HKh
         6EEKIr46u43LbmKVnfil2W4E6LUnTir89HhCdDllvASzF+yevHo7MjwPE9+XRv+3hhES
         OUUsnstcNrA0rLCFMYj7vWl5ZczSf3KPYpA7wTt20fxP/HKu0nYQRIzVhW6nrnGOA5jn
         ZX8zcSMK8Y+FlG/fX2gye51alnw5R0vQJIqmi3wnRTlMtqWonWHxJepWd7O0uYkErLAf
         nElaONPL6snfMupRjx7SgoPL0fo06TUuoxc3sYPVOppMWFUfAwgvDIOHO8Ugh+LFqXUj
         IYjg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764794; x=1684356794;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=WInZHFLF0E37CPvH/fyQuQmpONrFi7mPGahLu3Z9gNY=;
        b=LUFbqHmPvT8DhobpKf2HeN3cWmrz7hYpKKWsq7esnYpnbMsu7Jce4beoMbj+kfJY4s
         /6k972fpyiCW88ivcBdTnBJ8yLBIK/EJaBoDOugqzM0vH24Bq1cQLoUfR7Mycnpoi4Rj
         gX6K2lPWzLd4upUuoFM0CtKKdNQqMgc/KmRxwOV3QBcItyTpaTpAiaQioPiZTI762VQ5
         b9EppEjE1QG/+UU8eBq3h87H6lqIOt5T0+C7oInZjhqvWaPkzqd4KP2a4xjwlx76qMfO
         omzTA08WXLfGoBfzv0EvPAlWeWWL0HJURrYL4cQJHW2joaeu//A39eoB6vIXWfwQQdf7
         NCCQ==
X-Gm-Message-State: AAQBX9eY1IGmTs96kW3vUBMyQGgQfyTXkvoh66YTiaXJm26x4TBAVQYn
	lWgKkZS2I507P5f524U5Y48=
X-Google-Smtp-Source: AKy350Ym3GSROxWd30N3vnOcQ+scJJk4j2bn57g6f2y/wdjBz5ko3J6EyTomKSlBO1Xk2XiBoSAe7w==
X-Received: by 2002:a17:90a:5315:b0:23f:81c0:eadd with SMTP id x21-20020a17090a531500b0023f81c0eaddmr15717621pjh.47.1681764794292;
        Mon, 17 Apr 2023 13:53:14 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 21/33] csky: Convert __pte_free_tlb() to use ptdescs
Date: Mon, 17 Apr 2023 13:50:36 -0700
Message-Id: <20230417205048.15870-22-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/csky/include/asm/pgalloc.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h
index 7d57e5da0914..af26f1191b43 100644
--- a/arch/csky/include/asm/pgalloc.h
+++ b/arch/csky/include/asm/pgalloc.h
@@ -63,8 +63,8 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 #define __pte_free_tlb(tlb, pte, address)		\
 do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page(tlb, pte);			\
+	ptdesc_pte_dtor(page_ptdesc(pte));		\
+	tlb_remove_page_ptdesc(tlb, page_ptdesc(pte));	\
 } while (0)
 
 extern void pagetable_init(void);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522425.811874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwP-0003BC-DG; Mon, 17 Apr 2023 20:59:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522425.811874; Mon, 17 Apr 2023 20:59:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwP-0003A4-2j; Mon, 17 Apr 2023 20:59:13 +0000
Received: by outflank-mailman (input) for mailman id 522425;
 Mon, 17 Apr 2023 20:59:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqh-0005M2-Ks
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:19 +0000
Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com
 [2607:f8b0:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e1fce408-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:19 +0200 (CEST)
Received: by mail-pl1-x62c.google.com with SMTP id d15so10120360pll.12
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:19 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1fce408-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764798; x=1684356798;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=nlmIG8UgshEYTOBASKXa1dMTltHF5q5LujwV+qMeaTE=;
        b=MO1PC1xt/KvuZ3BR0uFzYFvll2jFuloCAIP6GEEELaZGVsUuayg4+lNKAol0RlX2+3
         Iz38d0dC9QwTB6d/mc2DYTZXN+iAw+HkmmeqA37ilN3ijvqPuJYcJPAfuZ5NSiq7Oy0y
         5tk6CT5Flw0tO/XZTIWPNors/JAYwfVMvyqbu0GP1qyv9cYfy3iQHS4IKEIm4tGX0nuK
         r/0iigtw61JwQ8Pxh0uDlXeUn+URgKqdZ4DDg0uzUbspXwSfQOQ/MuFlx/Zb5Bn9cIh/
         DsllyX0j/egvd/MrVX7PxBJ0K/+L/v/BhuCd1hvi1MmWpxA5K0b5kps6s+vigGrLYnHf
         Yhaw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764798; x=1684356798;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=nlmIG8UgshEYTOBASKXa1dMTltHF5q5LujwV+qMeaTE=;
        b=LbV+JLd6mNmO9ifCLn5Dab37xzmSDY0ag6eK5ofEzsvEHaOfaSHoYj5W5m9U2cfTqw
         peeYWF3bawvuz2RhfC7pboYTvNjhwypW/Hmlsy3+Nt6juqCTqFPicT55Tp7I/ktGuLuv
         01nLxTxvKsmjXbp7vpfPEO/gjRuUgTMuuIJr58jLfGp302gRcOrnaiqPTf4vPuibOMMz
         3L57sRERt/lgtTYLI4V16VxO6I+HQYu5slQ0sWjDvUF/o+oXMsaEczA3HXpCxSegh21g
         36PJLIHfG0RPhxmaYvOAQd7iwBQ35xrvjw9val2k7+ON39AEs34v3iouq+ZWhFjeHhzz
         XuCg==
X-Gm-Message-State: AAQBX9diZTlHuXYvdkJMl/BEHRiIGI5zxGM1ysYvQDIuNFxVOZBSwSvs
	9guErFJMs+eG2DliO6h7e/M=
X-Google-Smtp-Source: AKy350aJaFntHhHbUG9MI8ooj+3D57RkURLGHHynthItQ5IDNnotpeWzZJLOOPG0rt2wtYTic7ZhtQ==
X-Received: by 2002:a17:90a:7086:b0:247:83ed:7e5d with SMTP id g6-20020a17090a708600b0024783ed7e5dmr6464463pjk.18.1681764798565;
        Mon, 17 Apr 2023 13:53:18 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 24/33] m68k: Convert various functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:39 -0700
Message-Id: <20230417205048.15870-25-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/m68k/include/asm/mcf_pgalloc.h  | 41 ++++++++++++++--------------
 arch/m68k/include/asm/sun3_pgalloc.h |  8 +++---
 arch/m68k/mm/motorola.c              |  4 +--
 3 files changed, 27 insertions(+), 26 deletions(-)

diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h
index 5c2c0a864524..b0e909e23e14 100644
--- a/arch/m68k/include/asm/mcf_pgalloc.h
+++ b/arch/m68k/include/asm/mcf_pgalloc.h
@@ -7,20 +7,19 @@
 
 extern inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-	free_page((unsigned long) pte);
+	ptdesc_free(virt_to_ptdesc(pte));
 }
 
 extern const char bad_pmd_string[];
 
 extern inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
 {
-	unsigned long page = __get_free_page(GFP_DMA);
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_DMA | __GFP_ZERO, 0);
 
-	if (!page)
+	if (!ptdesc)
 		return NULL;
 
-	memset((void *)page, 0, PAGE_SIZE);
-	return (pte_t *) (page);
+	return (pte_t *) (ptdesc_address(ptdesc));
 }
 
 extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
@@ -35,36 +34,36 @@ extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable,
 				  unsigned long address)
 {
-	struct page *page = virt_to_page(pgtable);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);
 
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
 {
-	struct page *page = alloc_pages(GFP_DMA, 0);
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_DMA, 0);
 	pte_t *pte;
 
-	if (!page)
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
-		__free_page(page);
+	if (!ptdesc_pte_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
 
-	pte = page_address(page);
-	clear_page(pte);
+	pte = ptdesc_address(ptdesc);
+	ptdesc_clear(pte);
 
 	return pte;
 }
 
 static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable)
 {
-	struct page *page = virt_to_page(pgtable);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);
 
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 /*
@@ -75,16 +74,18 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable)
 
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	free_page((unsigned long) pgd);
+	ptdesc_free(virt_to_ptdesc(pgd));
 }
 
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	pgd_t *new_pgd;
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_DMA | __GFP_NOWARN, 0);
 
-	new_pgd = (pgd_t *)__get_free_page(GFP_DMA | __GFP_NOWARN);
-	if (!new_pgd)
+	if (!ptdesc)
 		return NULL;
+	new_pgd = (pgd_t *) ptdesc_address(ptdesc);
+
 	memcpy(new_pgd, swapper_pg_dir, PTRS_PER_PGD * sizeof(pgd_t));
 	memset(new_pgd, 0, PAGE_OFFSET >> PGDIR_SHIFT);
 	return new_pgd;
diff --git a/arch/m68k/include/asm/sun3_pgalloc.h b/arch/m68k/include/asm/sun3_pgalloc.h
index 198036aff519..013d375fc239 100644
--- a/arch/m68k/include/asm/sun3_pgalloc.h
+++ b/arch/m68k/include/asm/sun3_pgalloc.h
@@ -17,10 +17,10 @@
 
 extern const char bad_pmd_string[];
 
-#define __pte_free_tlb(tlb,pte,addr)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), pte);			\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));	\
 } while (0)
 
 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte)
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 911301224078..1e47b977bcf1 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -161,7 +161,7 @@ void *get_pointer_table(int type)
 			 * m68k doesn't have SPLIT_PTE_PTLOCKS for not having
 			 * SMP.
 			 */
-			pgtable_pte_page_ctor(virt_to_page(page));
+			ptdesc_pte_ctor(virt_to_ptdesc(page));
 		}
 
 		mmu_page_ctor(page);
@@ -201,7 +201,7 @@ int free_pointer_table(void *table, int type)
 		list_del(dp);
 		mmu_page_dtor((void *)page);
 		if (type == TABLE_PTE)
-			pgtable_pte_page_dtor(virt_to_page(page));
+			ptdesc_pte_dtor(virt_to_ptdesc(page));
 		free_page (page);
 		return 1;
 	} else if (ptable_list[type].next != dp) {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522426.811879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwQ-0003MI-Iu; Mon, 17 Apr 2023 20:59:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522426.811879; Mon, 17 Apr 2023 20:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poVwP-0003Gf-Me; Mon, 17 Apr 2023 20:59:13 +0000
Received: by outflank-mailman (input) for mailman id 522426;
 Mon, 17 Apr 2023 20:59:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RIN=AI=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poVqT-0005M2-Ne
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 20:53:05 +0000
Received: from mail-pj1-x102c.google.com (mail-pj1-x102c.google.com
 [2607:f8b0:4864:20::102c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d90dfda3-dd61-11ed-b21e-6b7b168915f2;
 Mon, 17 Apr 2023 22:53:05 +0200 (CEST)
Received: by mail-pj1-x102c.google.com with SMTP id
 hg14-20020a17090b300e00b002471efa7a8fso14356117pjb.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 13:53:04 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139])
 by smtp.googlemail.com with ESMTPSA id
 h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Apr 2023 13:53:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d90dfda3-dd61-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681764783; x=1684356783;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=WiO1HawkIhn5Pxy3bGXHIBKOPwk6qSqlLgZYbsR783I=;
        b=i+wzmC4THa/GsSihCeauBzLBRwN2BkRSUDfwFHkQgHnG3pkWReRH8MPN+shfgDqHTj
         IeqmXv7VhbjU+1u0UCoy1GJsHdeP9LwybB0OHvKEyW1YIxBwbDPSb41pvZGpSpOpMkiQ
         WphjKneGpmymAtIBRE8JTRbqkCrOvlbdVWpBDf/QtnTqQmVA9yQNunQ+rFi4omAX2wkV
         RN+R+ZBLGm6OPOimIfvRu38tdAQhoIVp1Mn1HnU9Gwaf7SKiulzeceUhW9g6F763mHYB
         xGmH5zofQsKiypu290WAifFetv+03PwrQODgzNDZ7SB6mx5PwXcbwgLkn67IOgTYLm5f
         HyhQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681764783; x=1684356783;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=WiO1HawkIhn5Pxy3bGXHIBKOPwk6qSqlLgZYbsR783I=;
        b=MJwGIkqOc9Q4uPHDxppG83GbCiEAPmO4K4LrRckXGccBWtbSjDkJJohdLemYvvgXTN
         Wesk+KhW8A3xH1cyFgoL5NO+EZXa/0yd1MEjbdkZwiJv/KfVKsLZvkzkhfMiD6zPrIC9
         SfqiXVii7gldRdvBbSyR5BRkY2bnTsXCOabOrkZvdW/eeXaVfHIuGnl3DtN23tAaOMie
         NE6LqgHIM/aGOFxn1GvLbCILK2QeUkk2t/piViGdz/SsvNgX190cOlM3l2pgM0yw7aMh
         /sLr4nKnfshRmyeAkUXmP/KelFRZoIFVvAn259tmmTd5AabdHP/XMrNZfSOp5lvjq0w9
         u3Hw==
X-Gm-Message-State: AAQBX9focVMDF0ewv5/3s80QDUwLFx8HSRQCUI3rMP1vChNiD6t0mhkD
	9Vos2kFA8S7wxKVZPCWNq+k=
X-Google-Smtp-Source: AKy350ZsayM5AzK3U9g9Pm56Yxkj5Ek92aNaeMW1rTxdVse4fOpzec7lc2oZYj+3ilv2TEF1mDgQ5A==
X-Received: by 2002:a17:90b:4f8f:b0:23f:58d6:b532 with SMTP id qe15-20020a17090b4f8f00b0023f58d6b532mr15289201pjb.5.1681764783502;
        Mon, 17 Apr 2023 13:53:03 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 13/33] powerpc: Convert various functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:28 -0700
Message-Id: <20230417205048.15870-14-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to split struct ptdesc from struct page, convert various
functions to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/powerpc/mm/book3s64/mmu_context.c | 10 +++---
 arch/powerpc/mm/book3s64/pgtable.c     | 32 +++++++++---------
 arch/powerpc/mm/pgtable-frag.c         | 46 +++++++++++++-------------
 3 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c
index c766e4c26e42..b22ad2839897 100644
--- a/arch/powerpc/mm/book3s64/mmu_context.c
+++ b/arch/powerpc/mm/book3s64/mmu_context.c
@@ -246,15 +246,15 @@ static void destroy_contexts(mm_context_t *ctx)
 static void pmd_frag_destroy(void *pmd_frag)
 {
 	int count;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
-	page = virt_to_page(pmd_frag);
+	ptdesc = virt_to_ptdesc(pmd_frag);
 	/* drop all the pending references */
 	count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT;
 	/* We allow PTE_FRAG_NR fragments from a PTE page */
-	if (atomic_sub_and_test(PMD_FRAG_NR - count, &page->pt_frag_refcount)) {
-		pgtable_pmd_page_dtor(page);
-		__free_page(page);
+	if (atomic_sub_and_test(PMD_FRAG_NR - count, &ptdesc->pt_frag_refcount)) {
+		ptdesc_pmd_dtor(ptdesc);
+		ptdesc_free(ptdesc);
 	}
 }
 
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 85c84e89e3ea..7693be80c0f9 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -306,22 +306,22 @@ static pmd_t *get_pmd_from_cache(struct mm_struct *mm)
 static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
 {
 	void *ret = NULL;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO;
 
 	if (mm == &init_mm)
 		gfp &= ~__GFP_ACCOUNT;
-	page = alloc_page(gfp);
-	if (!page)
+	ptdesc = ptdesc_alloc(gfp);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pmd_page_ctor(page)) {
-		__free_pages(page, 0);
+	if (!ptdesc_pmd_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
 
-	atomic_set(&page->pt_frag_refcount, 1);
+	atomic_set(&ptdesc->pt_frag_refcount, 1);
 
-	ret = page_address(page);
+	ret = ptdesc_address(ptdesc);
 	/*
 	 * if we support only one fragment just return the
 	 * allocated page.
@@ -331,12 +331,12 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
 
 	spin_lock(&mm->page_table_lock);
 	/*
-	 * If we find pgtable_page set, we return
+	 * If we find ptdesc_page set, we return
 	 * the allocated page with single fragment
 	 * count.
 	 */
 	if (likely(!mm->context.pmd_frag)) {
-		atomic_set(&page->pt_frag_refcount, PMD_FRAG_NR);
+		atomic_set(&ptdesc->pt_frag_refcount, PMD_FRAG_NR);
 		mm->context.pmd_frag = ret + PMD_FRAG_SIZE;
 	}
 	spin_unlock(&mm->page_table_lock);
@@ -357,15 +357,15 @@ pmd_t *pmd_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr)
 
 void pmd_fragment_free(unsigned long *pmd)
 {
-	struct page *page = virt_to_page(pmd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
 
-	if (PageReserved(page))
-		return free_reserved_page(page);
+	if (ptdesc_is_reserved(ptdesc))
+		return free_reserved_ptdesc(ptdesc);
 
-	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
-	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
-		pgtable_pmd_page_dtor(page);
-		__free_page(page);
+	BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0);
+	if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) {
+		ptdesc_pmd_dtor(ptdesc);
+		ptdesc_free(ptdesc);
 	}
 }
 
diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c
index 20652daa1d7e..cf08831fa7c3 100644
--- a/arch/powerpc/mm/pgtable-frag.c
+++ b/arch/powerpc/mm/pgtable-frag.c
@@ -18,15 +18,15 @@
 void pte_frag_destroy(void *pte_frag)
 {
 	int count;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
-	page = virt_to_page(pte_frag);
+	ptdesc = virt_to_ptdesc(pte_frag);
 	/* drop all the pending references */
 	count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT;
 	/* We allow PTE_FRAG_NR fragments from a PTE page */
-	if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) {
-		pgtable_pte_page_dtor(page);
-		__free_page(page);
+	if (atomic_sub_and_test(PTE_FRAG_NR - count, &ptdesc->pt_frag_refcount)) {
+		ptdesc_pte_dtor(ptdesc);
+		ptdesc_free(ptdesc);
 	}
 }
 
@@ -55,25 +55,25 @@ static pte_t *get_pte_from_cache(struct mm_struct *mm)
 static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
 {
 	void *ret = NULL;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
 	if (!kernel) {
-		page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT);
-		if (!page)
+		ptdesc = ptdesc_alloc(PGALLOC_GFP | __GFP_ACCOUNT, 0);
+		if (!ptdesc)
 			return NULL;
-		if (!pgtable_pte_page_ctor(page)) {
-			__free_page(page);
+		if (!ptdesc_pte_ctor(ptdesc)) {
+			ptdesc_free(ptdesc);
 			return NULL;
 		}
 	} else {
-		page = alloc_page(PGALLOC_GFP);
-		if (!page)
+		ptdesc = ptdesc_alloc(PGALLOC_GFP, 0);
+		if (!ptdesc)
 			return NULL;
 	}
 
-	atomic_set(&page->pt_frag_refcount, 1);
+	atomic_set(&ptdesc->pt_frag_refcount, 1);
 
-	ret = page_address(page);
+	ret = ptdesc_address(ptdesc);
 	/*
 	 * if we support only one fragment just return the
 	 * allocated page.
@@ -82,12 +82,12 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
 		return ret;
 	spin_lock(&mm->page_table_lock);
 	/*
-	 * If we find pgtable_page set, we return
+	 * If we find ptdesc_page set, we return
 	 * the allocated page with single fragment
 	 * count.
 	 */
 	if (likely(!pte_frag_get(&mm->context))) {
-		atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR);
+		atomic_set(&ptdesc->pt_frag_refcount, PTE_FRAG_NR);
 		pte_frag_set(&mm->context, ret + PTE_FRAG_SIZE);
 	}
 	spin_unlock(&mm->page_table_lock);
@@ -108,15 +108,15 @@ pte_t *pte_fragment_alloc(struct mm_struct *mm, int kernel)
 
 void pte_fragment_free(unsigned long *table, int kernel)
 {
-	struct page *page = virt_to_page(table);
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
-	if (PageReserved(page))
-		return free_reserved_page(page);
+	if (ptdesc_is_reserved(ptdesc))
+		return free_reserved_ptdesc(ptdesc);
 
-	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
-	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
+	BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0);
+	if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) {
 		if (!kernel)
-			pgtable_pte_page_dtor(page);
-		__free_page(page);
+			ptdesc_pte_dtor(ptdesc);
+		ptdesc_free(ptdesc);
 	}
 }
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:16 +0000
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 16/33] s390: Convert various pgalloc functions to use ptdescs
Date: Mon, 17 Apr 2023 13:50:31 -0700
Message-Id: <20230417205048.15870-17-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/include/asm/pgalloc.h |   4 +-
 arch/s390/include/asm/tlb.h     |   4 +-
 arch/s390/mm/pgalloc.c          | 108 ++++++++++++++++----------------
 3 files changed, 59 insertions(+), 57 deletions(-)

diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 17eb618f1348..9841481560ae 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -86,7 +86,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr)
 	if (!table)
 		return NULL;
 	crst_table_init(table, _SEGMENT_ENTRY_EMPTY);
-	if (!pgtable_pmd_page_ctor(virt_to_page(table))) {
+	if (!ptdesc_pmd_ctor(virt_to_ptdesc(table))) {
 		crst_table_free(mm, table);
 		return NULL;
 	}
@@ -97,7 +97,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
 	if (mm_pmd_folded(mm))
 		return;
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	ptdesc_pmd_dtor(virt_to_ptdesc(pmd));
 	crst_table_free(mm, (unsigned long *) pmd);
 }
 
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index b91f4a9b044c..1388c819b467 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -89,12 +89,12 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
 {
 	if (mm_pmd_folded(tlb->mm))
 		return;
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	ptdesc_pmd_dtor(virt_to_ptdesc(pmd));
 	__tlb_adjust_range(tlb, address, PAGE_SIZE);
 	tlb->mm->context.flush_mm = 1;
 	tlb->freed_tables = 1;
 	tlb->cleared_puds = 1;
-	tlb_remove_table(tlb, pmd);
+	tlb_remove_ptdesc(tlb, pmd);
 }
 
 /*
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 6b99932abc66..16a29d2cfe85 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -43,17 +43,17 @@ __initcall(page_table_register_sysctl);
 
 unsigned long *crst_table_alloc(struct mm_struct *mm)
 {
-	struct page *page = alloc_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, CRST_ALLOC_ORDER);
 
-	if (!page)
+	if (!ptdesc)
 		return NULL;
-	arch_set_page_dat(page, CRST_ALLOC_ORDER);
-	return (unsigned long *) page_to_virt(page);
+	arch_set_page_dat(ptdesc_page(ptdesc), CRST_ALLOC_ORDER);
+	return (unsigned long *) ptdesc_to_virt(ptdesc);
 }
 
 void crst_table_free(struct mm_struct *mm, unsigned long *table)
 {
-	free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+	ptdesc_free(virt_to_ptdesc(table));
 }
 
 static void __crst_table_upgrade(void *arg)
@@ -140,21 +140,21 @@ static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits)
 
 struct page *page_table_alloc_pgste(struct mm_struct *mm)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	u64 *table;
 
-	page = alloc_page(GFP_KERNEL);
-	if (page) {
-		table = (u64 *)page_to_virt(page);
+	ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
+	if (ptdesc) {
+		table = (u64 *)ptdesc_to_virt(ptdesc);
 		memset64(table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64(table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	}
-	return page;
+	return ptdesc_page(ptdesc);
 }
 
 void page_table_free_pgste(struct page *page)
 {
-	__free_page(page);
+	ptdesc_free(page_ptdesc(page));
 }
 
 #endif /* CONFIG_PGSTE */
@@ -230,7 +230,7 @@ void page_table_free_pgste(struct page *page)
 unsigned long *page_table_alloc(struct mm_struct *mm)
 {
 	unsigned long *table;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned int mask, bit;
 
 	/* Try to get a fragment of a 4K page as a 2K page table */
@@ -238,9 +238,9 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 		table = NULL;
 		spin_lock_bh(&mm->context.lock);
 		if (!list_empty(&mm->context.pgtable_list)) {
-			page = list_first_entry(&mm->context.pgtable_list,
-						struct page, lru);
-			mask = atomic_read(&page->pt_frag_refcount);
+			ptdesc = list_first_entry(&mm->context.pgtable_list,
+						struct ptdesc, pt_list);
+			mask = atomic_read(&ptdesc->pt_frag_refcount);
 			/*
 			 * The pending removal bits must also be checked.
 			 * Failure to do so might lead to an impossible
@@ -253,13 +253,13 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 			 */
 			mask = (mask | (mask >> 4)) & 0x03U;
 			if (mask != 0x03U) {
-				table = (unsigned long *) page_to_virt(page);
+				table = (unsigned long *) ptdesc_to_virt(ptdesc);
 				bit = mask & 1;		/* =1 -> second 2K */
 				if (bit)
 					table += PTRS_PER_PTE;
-				atomic_xor_bits(&page->pt_frag_refcount,
+				atomic_xor_bits(&ptdesc->pt_frag_refcount,
 							0x01U << bit);
-				list_del(&page->lru);
+				list_del(&ptdesc->pt_list);
 			}
 		}
 		spin_unlock_bh(&mm->context.lock);
@@ -267,27 +267,27 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 			return table;
 	}
 	/* Allocate a fresh page */
-	page = alloc_page(GFP_KERNEL);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
-		__free_page(page);
+	if (!ptdesc_pte_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
-	arch_set_page_dat(page, 0);
+	arch_set_page_dat(ptdesc_page(ptdesc), 0);
 	/* Initialize page table */
-	table = (unsigned long *) page_to_virt(page);
+	table = (unsigned long *) ptdesc_to_virt(ptdesc);
 	if (mm_alloc_pgste(mm)) {
 		/* Return 4K page table with PGSTEs */
-		atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 		memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	} else {
 		/* Return the first 2K fragment of the page */
-		atomic_xor_bits(&page->pt_frag_refcount, 0x01U);
+		atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x01U);
 		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
 		spin_lock_bh(&mm->context.lock);
-		list_add(&page->lru, &mm->context.pgtable_list);
+		list_add(&ptdesc->pt_list, &mm->context.pgtable_list);
 		spin_unlock_bh(&mm->context.lock);
 	}
 	return table;
@@ -309,9 +309,8 @@ static void page_table_release_check(struct page *page, void *table,
 void page_table_free(struct mm_struct *mm, unsigned long *table)
 {
 	unsigned int mask, bit, half;
-	struct page *page;
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
-	page = virt_to_page(table);
 	if (!mm_alloc_pgste(mm)) {
 		/* Free 2K page table fragment of a 4K page */
 		bit = ((unsigned long) table & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
@@ -321,39 +320,38 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 		 * will happen outside of the critical section from this
 		 * function or from __tlb_remove_table()
 		 */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x11U << bit);
 		if (mask & 0x03U)
-			list_add(&page->lru, &mm->context.pgtable_list);
+			list_add(&ptdesc->pt_list, &mm->context.pgtable_list);
 		else
-			list_del(&page->lru);
+			list_del(&ptdesc->pt_list);
 		spin_unlock_bh(&mm->context.lock);
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x10U << bit);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x10U << bit);
 		if (mask != 0x00U)
 			return;
 		half = 0x01U << bit;
 	} else {
 		half = 0x03U;
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 	}
 
-	page_table_release_check(page, table, half, mask);
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	page_table_release_check(ptdesc_page(ptdesc), table, half, mask);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 			 unsigned long vmaddr)
 {
 	struct mm_struct *mm;
-	struct page *page;
 	unsigned int bit, mask;
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
 	mm = tlb->mm;
-	page = virt_to_page(table);
 	if (mm_alloc_pgste(mm)) {
 		gmap_unlink(mm, table, vmaddr);
 		table = (unsigned long *) ((unsigned long)table | 0x03U);
-		tlb_remove_table(tlb, table);
+		tlb_remove_ptdesc(tlb, table);
 		return;
 	}
 	bit = ((unsigned long) table & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t));
@@ -363,11 +361,11 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 	 * outside of the critical section from __tlb_remove_table() or from
 	 * page_table_free()
 	 */
-	mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
+	mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x11U << bit);
 	if (mask & 0x03U)
-		list_add_tail(&page->lru, &mm->context.pgtable_list);
+		list_add_tail(&ptdesc->pt_list, &mm->context.pgtable_list);
 	else
-		list_del(&page->lru);
+		list_del(&ptdesc->pt_list);
 	spin_unlock_bh(&mm->context.lock);
 	table = (unsigned long *) ((unsigned long) table | (0x01U << bit));
 	tlb_remove_table(tlb, table);
@@ -377,7 +375,7 @@ void __tlb_remove_table(void *_table)
 {
 	unsigned int mask = (unsigned long) _table & 0x03U, half = mask;
 	void *table = (void *)((unsigned long) _table ^ mask);
-	struct page *page = virt_to_page(table);
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
 	switch (half) {
 	case 0x00U:	/* pmd, pud, or p4d */
@@ -385,18 +383,18 @@ void __tlb_remove_table(void *_table)
 		return;
 	case 0x01U:	/* lower 2K of a 4K page table */
 	case 0x02U:	/* higher 2K of a 4K page table */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, mask << 4);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, mask << 4);
 		if (mask != 0x00U)
 			return;
 		break;
 	case 0x03U:	/* 4K page table with pgstes */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 		break;
 	}
 
-	page_table_release_check(page, table, half, mask);
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	page_table_release_check(ptdesc_page(ptdesc), table, half, mask);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 /*
@@ -424,16 +422,20 @@ static void base_pgt_free(unsigned long *table)
 static unsigned long *base_crst_alloc(unsigned long val)
 {
 	unsigned long *table;
+	struct ptdesc *ptdesc;
 
-	table =	(unsigned long *)__get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
-	if (table)
-		crst_table_init(table, val);
+	ptdesc = ptdesc_alloc(GFP_KERNEL, CRST_ALLOC_ORDER);
+	if (!ptdesc)
+		return NULL;
+	table = ptdesc_address(ptdesc);
+
+	crst_table_init(table, val);
 	return table;
 }
 
 static void base_crst_free(unsigned long *table)
 {
-	free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+	ptdesc_free(virt_to_ptdesc(table));
 }
 
 #define BASE_ADDR_END_FUNC(NAME, SIZE)					\
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 20:59:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 20:59:18 +0000
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 17/33] mm: Remove page table members from struct page
Date: Mon, 17 Apr 2023 13:50:32 -0700
Message-Id: <20230417205048.15870-18-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The page table members are now split out into their own ptdesc struct.
Remove them from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm_types.h | 14 --------------
 include/linux/pgtable.h  |  3 ---
 2 files changed, 17 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 2616d64c0e8c..4355f95abc5a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -141,20 +141,6 @@ struct page {
 		struct {	/* Tail pages of compound page */
 			unsigned long compound_head;	/* Bit zero is set */
 		};
-		struct {	/* Page table pages */
-			unsigned long _pt_pad_1;	/* compound_head */
-			pgtable_t pmd_huge_pte; /* protected by page->ptl */
-			unsigned long _pt_s390_gaddr;	/* mapping */
-			union {
-				struct mm_struct *pt_mm; /* x86 pgds only */
-				atomic_t pt_frag_refcount; /* powerpc */
-			};
-#if ALLOC_SPLIT_PTLOCKS
-			spinlock_t *ptl;
-#else
-			spinlock_t ptl;
-#endif
-		};
 		struct {	/* ZONE_DEVICE pages */
 			/** @pgmap: Points to the hosting device page map. */
 			struct dev_pagemap *pgmap;
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 7cd803aa38eb..8cacdf1fc411 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -91,9 +91,6 @@ TABLE_MATCH(flags, __page_flags);
 TABLE_MATCH(compound_head, pt_list);
 TABLE_MATCH(compound_head, _pt_pad_1);
 TABLE_MATCH(mapping, _pt_s390_gaddr);
-TABLE_MATCH(pmd_huge_pte, pmd_huge_pte);
-TABLE_MATCH(pt_mm, pt_mm);
-TABLE_MATCH(ptl, ptl);
 #undef TABLE_MATCH
 static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Apr 17 21:16:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 21:16:11 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180286-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180286: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=65c4e7472cafb60f478e7a5f358ee1eeac28b5a8
X-Osstest-Versions-That:
    xen=5eb6bd7454e253f4907dbeb7aa982967b21698bc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Apr 2023 21:15:53 +0000

flight 180286 xen-unstable-smoke real [real]
flight 180288 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180286/
http://logs.test-lab.xenproject.org/osstest/logs/180288/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail REGR. vs. 180283

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  65c4e7472cafb60f478e7a5f358ee1eeac28b5a8
baseline version:
 xen                  5eb6bd7454e253f4907dbeb7aa982967b21698bc

Last test of basis   180283  2023-04-17 13:02:10 Z    0 days
Testing same since   180286  2023-04-17 17:03:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 65c4e7472cafb60f478e7a5f358ee1eeac28b5a8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 17 18:11:45 2023 +0200

    x86emul: support AVX-NE-CONVERT insns
    
    Matching what was done earlier, explicit tests are added only for
    irregular insn / memory access patterns.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 842acaa743a503726d6c4d77a7982cc64f07c4bf
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 17 18:11:06 2023 +0200

    x86emul: support AVX-VNNI-INT8
    
    These are close relatives of the AVX-VNNI ISA extension. Since the insns
    here and in particular their memory access patterns follow the usual
    scheme (and especially the byte variants of AVX-VNNI), I didn't think it
    was necessary to add a contrived test specifically for them.
    
    While making the addition also re-wire AVX-VNNI's handling to
    simd_0f_ymm: There's no reason to check the AVX feature alongside the
    one actually of interest (there are a few features where two checks are
    actually necessary, e.g. GFNI+AVX, but this isn't the case here).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit da232f1f1118e8c8fad520dedee312005c2984fb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 17 18:10:14 2023 +0200

    x86emul: support AVX-IFMA insns
    
    As in a few cases before (in particular: AVX512_IFMA), since the insns
    here and in particular their memory access patterns follow the usual
    scheme, I didn't think it was necessary to add a contrived test
    specifically for them.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Apr 17 22:19:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Apr 2023 22:19:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522509.811928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poXCA-00089c-15; Mon, 17 Apr 2023 22:19:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522509.811928; Mon, 17 Apr 2023 22:19:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poXC9-00089V-Ta; Mon, 17 Apr 2023 22:19:33 +0000
Received: by outflank-mailman (input) for mailman id 522509;
 Mon, 17 Apr 2023 22:19:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/9ng=AI=intel.com=lkp@srs-se1.protection.inumbo.net>)
 id 1poXC8-00089P-J3
 for xen-devel@lists.xenproject.org; Mon, 17 Apr 2023 22:19:32 +0000
Received: from mga03.intel.com (mga03.intel.com [134.134.136.65])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ea78ef94-dd6d-11ed-b21e-6b7b168915f2;
 Tue, 18 Apr 2023 00:19:29 +0200 (CEST)
Received: from orsmga007.jf.intel.com ([10.7.209.58])
 by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 17 Apr 2023 15:19:11 -0700
Received: from lkp-server01.sh.intel.com (HELO b613635ddfff) ([10.239.97.150])
 by orsmga007.jf.intel.com with ESMTP; 17 Apr 2023 15:19:05 -0700
Received: from kbuild by b613635ddfff with local (Exim 4.96)
 (envelope-from <lkp@intel.com>) id 1poXBh-000chh-0f;
 Mon, 17 Apr 2023 22:19:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea78ef94-dd6d-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1681769969; x=1713305969;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=yMAw32GB6bnYHTIaJx+3YJu4UkN11Fi/gjV1eDLzAFs=;
  b=nYSxFZMeKsX+4yJ8F1tmRNvEuM43dszGufAI93tf80hIrbx6Kq+PTOiZ
   DF+ztBMDJrQdeuzZfKVu+qxPbvhVZhBXfoPWBh+G1rjfjEDbmuTFQ3rze
   yBo1F4lSsFSwS9KvbPaJX2NMmx98f7+LSsbyJBlDrSINmaeG3uiyRUpai
   5/XC/CJ1Jt6dCXai7jg3ll8IE/mFmO97MqU4DGVjS9UjmWgajgbXflh/p
   skYRil/egQZ2xYw9Wg7QDeZSvt9y5elXZKaxOB09MTsNNKXTx1jNF96B4
   wKDlqC2RapiMH9U8mxwwWopYSmxy/Bmkoun5+VJk6z3fPquEM9yWC+61p
   A==;
X-IronPort-AV: E=McAfee;i="6600,9927,10683"; a="347767944"
X-IronPort-AV: E=Sophos;i="5.99,205,1677571200"; 
   d="scan'208";a="347767944"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10683"; a="684313012"
X-IronPort-AV: E=Sophos;i="5.99,205,1677571200"; 
   d="scan'208";a="684313012"
Date: Tue, 18 Apr 2023 06:18:32 +0800
From: kernel test robot <lkp@intel.com>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: oe-kbuild-all@lists.linux.dev,
	Linux Memory Management List <linux-mm@kvack.org>,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: Re: [PATCH 24/33] m68k: Convert various functions to use ptdescs
Message-ID: <202304180652.LeoLmaNQ-lkp@intel.com>
References: <20230417205048.15870-25-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230417205048.15870-25-vishal.moola@gmail.com>

Hi Vishal,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on next-20230417]
[cannot apply to s390/features powerpc/next powerpc/fixes geert-m68k/for-next geert-m68k/for-linus linus/master v6.3-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Vishal-Moola-Oracle/s390-Use-_pt_s390_gaddr-for-gmap-address-tracking/20230418-045832
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20230417205048.15870-25-vishal.moola%40gmail.com
patch subject: [PATCH 24/33] m68k: Convert various functions to use ptdescs
config: m68k-allyesconfig (https://download.01.org/0day-ci/archive/20230418/202304180652.LeoLmaNQ-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/630b38053b213e6138d3deb3e4325b24ad6dcb1f
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Vishal-Moola-Oracle/s390-Use-_pt_s390_gaddr-for-gmap-address-tracking/20230418-045832
        git checkout 630b38053b213e6138d3deb3e4325b24ad6dcb1f
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=m68k olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=m68k SHELL=/bin/bash arch/m68k/mm/

If you fix the issue, kindly add the following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202304180652.LeoLmaNQ-lkp@intel.com/

All warnings (new ones prefixed by >>):

   arch/m68k/mm/motorola.c: In function 'free_pointer_table':
>> arch/m68k/mm/motorola.c:204:56: warning: passing argument 1 of 'virt_to_ptdesc' makes pointer from integer without a cast [-Wint-conversion]
     204 |                         ptdesc_pte_dtor(virt_to_ptdesc(page));
         |                                                        ^~~~
         |                                                        |
         |                                                        long unsigned int
   In file included from arch/m68k/mm/motorola.c:15:
   include/linux/mm.h:2721:57: note: expected 'const void *' but argument is of type 'long unsigned int'
    2721 | static inline struct ptdesc *virt_to_ptdesc(const void *x)
         |                                             ~~~~~~~~~~~~^
   arch/m68k/mm/motorola.c: At top level:
   arch/m68k/mm/motorola.c:418:13: warning: no previous prototype for 'paging_init' [-Wmissing-prototypes]
     418 | void __init paging_init(void)
         |             ^~~~~~~~~~~


vim +/virt_to_ptdesc +204 arch/m68k/mm/motorola.c

   185	
   186	int free_pointer_table(void *table, int type)
   187	{
   188		ptable_desc *dp;
   189		unsigned long ptable = (unsigned long)table;
   190		unsigned long page = ptable & PAGE_MASK;
   191		unsigned int mask = 1U << ((ptable - page)/ptable_size(type));
   192	
   193		dp = PD_PTABLE(page);
   194		if (PD_MARKBITS (dp) & mask)
   195			panic ("table already free!");
   196	
   197		PD_MARKBITS (dp) |= mask;
   198	
   199		if (PD_MARKBITS(dp) == ptable_mask(type)) {
   200			/* all tables in page are free, free page */
   201			list_del(dp);
   202			mmu_page_dtor((void *)page);
   203			if (type == TABLE_PTE)
 > 204				ptdesc_pte_dtor(virt_to_ptdesc(page));
   205			free_page (page);
   206			return 1;
   207		} else if (ptable_list[type].next != dp) {
   208			/*
   209			 * move this descriptor to the front of the list, since
   210			 * it has one or more free tables.
   211			 */
   212			list_move(dp, &ptable_list[type]);
   213		}
   214		return 0;
   215	}
   216	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 00:34:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 00:34:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522518.811947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poZIG-0005lD-8d; Tue, 18 Apr 2023 00:34:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522518.811947; Tue, 18 Apr 2023 00:34:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poZIG-0005l6-5K; Tue, 18 Apr 2023 00:34:00 +0000
Received: by outflank-mailman (input) for mailman id 522518;
 Tue, 18 Apr 2023 00:33:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poZIE-0005kw-Dv; Tue, 18 Apr 2023 00:33:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poZIE-0004kv-3z; Tue, 18 Apr 2023 00:33:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poZID-0001eF-PC; Tue, 18 Apr 2023 00:33:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1poZID-0006gk-Ok; Tue, 18 Apr 2023 00:33:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=25NHRpQDDo9yOgHkcnYW67BHex9Hm/FTCf0S/67Ny0c=; b=aUfvclLuRsPp1AiVKs1haLLen0
	k0V1cQ1MG/LvYNy/lZf4xhzaqg7Q7rOuKfbvRpyiT+YsdwJDGUreb+w1h0v+wL+2xhCK0/afcAFmu
	jfc0E9/SZG2oFA8OYNjHl2CjecUsodq6ADc89lCQgoO+C45kPCeVm0osDqITQCYclZGU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180290-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180290: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1213ebfb9f35920b3e0f5dff71bb917f5fb4be5f
X-Osstest-Versions-That:
    xen=5eb6bd7454e253f4907dbeb7aa982967b21698bc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Apr 2023 00:33:57 +0000

flight 180290 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180290/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1213ebfb9f35920b3e0f5dff71bb917f5fb4be5f
baseline version:
 xen                  5eb6bd7454e253f4907dbeb7aa982967b21698bc

Last test of basis   180283  2023-04-17 13:02:10 Z    0 days
Failing since        180286  2023-04-17 17:03:22 Z    0 days    2 attempts
Testing same since   180290  2023-04-17 22:00:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5eb6bd7454..1213ebfb9f  1213ebfb9f35920b3e0f5dff71bb917f5fb4be5f -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 00:45:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 00:45:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522526.811956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poZT6-0007Hk-Ag; Tue, 18 Apr 2023 00:45:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522526.811956; Tue, 18 Apr 2023 00:45:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poZT6-0007Hd-7w; Tue, 18 Apr 2023 00:45:12 +0000
Received: by outflank-mailman (input) for mailman id 522526;
 Tue, 18 Apr 2023 00:45:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poZT4-0007HQ-De; Tue, 18 Apr 2023 00:45:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poZT4-0004xK-8K; Tue, 18 Apr 2023 00:45:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poZT3-0001vD-Qc; Tue, 18 Apr 2023 00:45:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1poZT3-0001T5-QA; Tue, 18 Apr 2023 00:45:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OiVZfVl18QEOk5e8Vm9WhNCFdNe5mdq6DfCGJrUqrRg=; b=KpVQW4kVwb+hbeAXa+BcHSMo9k
	zcwhj8QbANa+fOW2s+ptwsuu37LJ/XUJuzxTSbUM8zd+vhcAETFh3SNR4FVP7I8/44VwU48Dc62dL
	uts+IWMFKMRpJ5WPmnLPcRwsG83ElkbZEpvk5K2kcdVgS8/+T79b2cNEUWtwZHJdwu5M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180285-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180285: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6a8f57ae2eb07ab39a6f0ccad60c760743051026
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Apr 2023 00:45:09 +0000

flight 180285 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180285/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-vhd       8 xen-boot                   fail pass in 180281
 test-amd64-amd64-freebsd11-amd64 19 guest-localmigrate/x10 fail pass in 180281

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 180281 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 180281 never pass
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                6a8f57ae2eb07ab39a6f0ccad60c760743051026
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    1 days
Testing same since   180281  2023-04-17 06:24:36 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Torvalds <torvalds@linux-foundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6a8f57ae2eb07ab39a6f0ccad60c760743051026
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Apr 16 15:23:53 2023 -0700

    Linux 6.3-rc7


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 01:23:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 01:23:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522533.811967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poa45-0001Xf-As; Tue, 18 Apr 2023 01:23:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522533.811967; Tue, 18 Apr 2023 01:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poa45-0001XY-7v; Tue, 18 Apr 2023 01:23:25 +0000
Received: by outflank-mailman (input) for mailman id 522533;
 Tue, 18 Apr 2023 01:23:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jS8q=AJ=intel.com=lkp@srs-se1.protection.inumbo.net>)
 id 1poa43-0001XS-4T
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 01:23:23 +0000
Received: from mga07.intel.com (mga07.intel.com [134.134.136.100])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 985ed0fe-dd87-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 03:23:19 +0200 (CEST)
Received: from orsmga006.jf.intel.com ([10.7.209.51])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 17 Apr 2023 18:23:15 -0700
Received: from lkp-server01.sh.intel.com (HELO b613635ddfff) ([10.239.97.150])
 by orsmga006.jf.intel.com with ESMTP; 17 Apr 2023 18:23:10 -0700
Received: from kbuild by b613635ddfff with local (Exim 4.96)
 (envelope-from <lkp@intel.com>) id 1poa3p-000co7-26;
 Tue, 18 Apr 2023 01:23:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 985ed0fe-dd87-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1681780999; x=1713316999;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=xxj4orMdwSEXYJlb7suHd8py6WoU3VU9TrspjC7QyJg=;
  b=hdb43JKFAGuPDJ01N/RoqxdhyfAKl0xmllIzz5b67OxrMWOdEg39v222
   v3Znk5rMNCFHLKFekryh1oz9NiaR+wyUlJNzWR5CoAkWttY6EMOHcJ+HD
   LSi6sdk/4OUQWBCMFDH7bzV/fXpcHGipgLsTvVfXPexujhnTUNUnJITh2
   inaiVKOLP+eyPPJpBRuLPzuTEUSM2B6TQifQgxnuBOWgQ5tJByKdljiOJ
   Wq7LZthKwooB6sn25v6izBatxKcwl4adndmX3Is/GoWk48FXuWNk3T+sY
   /iz1Ilf/MKu8JuRafoLV0eWxHxIvKthuUDcPeMnUq79vO+5qT9LYstAFB
   w==;
X-IronPort-AV: E=McAfee;i="6600,9927,10683"; a="410264165"
X-IronPort-AV: E=Sophos;i="5.99,205,1677571200"; 
   d="scan'208";a="410264165"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10683"; a="668298836"
X-IronPort-AV: E=Sophos;i="5.99,205,1677571200"; 
   d="scan'208";a="668298836"
Date: Tue, 18 Apr 2023 09:22:18 +0800
From: kernel test robot <lkp@intel.com>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: oe-kbuild-all@lists.linux.dev,
	Linux Memory Management List <linux-mm@kvack.org>,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: Re: [PATCH 04/33] mm: add utility functions for ptdesc
Message-ID: <202304180913.p1BuXrBb-lkp@intel.com>
References: <20230417205048.15870-5-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230417205048.15870-5-vishal.moola@gmail.com>

Hi Vishal,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on s390/features powerpc/next powerpc/fixes geert-m68k/for-next geert-m68k/for-linus linus/master v6.3-rc7 next-20230417]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Vishal-Moola-Oracle/s390-Use-_pt_s390_gaddr-for-gmap-address-tracking/20230418-045832
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20230417205048.15870-5-vishal.moola%40gmail.com
patch subject: [PATCH 04/33] mm: add utility functions for ptdesc
config: sh-allmodconfig (https://download.01.org/0day-ci/archive/20230418/202304180913.p1BuXrBb-lkp@intel.com/config)
compiler: sh4-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/1b6f8137ca50a543ad2937092836635ca58c78ce
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Vishal-Moola-Oracle/s390-Use-_pt_s390_gaddr-for-gmap-address-tracking/20230418-045832
        git checkout 1b6f8137ca50a543ad2937092836635ca58c78ce
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=sh olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=sh prepare

If you fix the issue, kindly add the following tag where applicable:
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202304180913.p1BuXrBb-lkp@intel.com/

All error/warnings (new ones prefixed by >>):

   In file included from arch/sh/kernel/asm-offsets.c:14:
   include/linux/mm.h: In function 'virt_to_ptdesc':
>> include/linux/mm.h:2723:16: error: implicit declaration of function 'page_ptdesc' [-Werror=implicit-function-declaration]
    2723 |         return page_ptdesc(virt_to_head_page(x));
         |                ^~~~~~~~~~~
>> include/linux/mm.h:2723:16: warning: returning 'int' from a function with return type 'struct ptdesc *' makes pointer from integer without a cast [-Wint-conversion]
    2723 |         return page_ptdesc(virt_to_head_page(x));
         |                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from arch/sh/include/asm/thread_info.h:13,
                    from include/linux/thread_info.h:60,
                    from include/asm-generic/preempt.h:5,
                    from ./arch/sh/include/generated/asm/preempt.h:1,
                    from include/linux/preempt.h:78,
                    from include/linux/spinlock.h:56,
                    from include/linux/mmzone.h:8,
                    from include/linux/gfp.h:7,
                    from include/linux/mm.h:7:
   include/linux/mm.h: In function 'ptdesc_to_virt':
>> include/linux/mm.h:2728:29: error: implicit declaration of function 'ptdesc_page'; did you mean 'pte_page'? [-Werror=implicit-function-declaration]
    2728 |         return page_to_virt(ptdesc_page(pt));
         |                             ^~~~~~~~~~~
   arch/sh/include/asm/page.h:139:27: note: in definition of macro '___va'
     139 | #define ___va(x)        ((x)+PAGE_OFFSET)
         |                           ^
   include/linux/mm.h:117:25: note: in expansion of macro '__va'
     117 | #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
         |                         ^~~~
   include/linux/mm.h:117:30: note: in expansion of macro 'PFN_PHYS'
     117 | #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
         |                              ^~~~~~~~
   include/asm-generic/memory_model.h:64:21: note: in expansion of macro '__page_to_pfn'
      64 | #define page_to_pfn __page_to_pfn
         |                     ^~~~~~~~~~~~~
   include/linux/mm.h:2728:16: note: in expansion of macro 'page_to_virt'
    2728 |         return page_to_virt(ptdesc_page(pt));
         |                ^~~~~~~~~~~~
>> include/asm-generic/memory_model.h:46:35: warning: initialization of 'const struct page *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
      46 | ({      const struct page *__pg = (pg);                         \
         |                                   ^
   arch/sh/include/asm/page.h:139:27: note: in definition of macro '___va'
     139 | #define ___va(x)        ((x)+PAGE_OFFSET)
         |                           ^
   include/linux/mm.h:117:25: note: in expansion of macro '__va'
     117 | #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
         |                         ^~~~
   include/linux/mm.h:117:30: note: in expansion of macro 'PFN_PHYS'
     117 | #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
         |                              ^~~~~~~~
   include/asm-generic/memory_model.h:64:21: note: in expansion of macro '__page_to_pfn'
      64 | #define page_to_pfn __page_to_pfn
         |                     ^~~~~~~~~~~~~
   include/linux/mm.h:117:39: note: in expansion of macro 'page_to_pfn'
     117 | #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
         |                                       ^~~~~~~~~~~
   include/linux/mm.h:2728:16: note: in expansion of macro 'page_to_virt'
    2728 |         return page_to_virt(ptdesc_page(pt));
         |                ^~~~~~~~~~~~
   include/linux/mm.h: In function 'ptdesc_address':
>> include/linux/mm.h:2733:30: error: implicit declaration of function 'ptdesc_folio'; did you mean 'page_folio'? [-Werror=implicit-function-declaration]
    2733 |         return folio_address(ptdesc_folio(pt));
         |                              ^~~~~~~~~~~~
         |                              page_folio
>> include/linux/mm.h:2733:30: warning: passing argument 1 of 'folio_address' makes pointer from integer without a cast [-Wint-conversion]
    2733 |         return folio_address(ptdesc_folio(pt));
         |                              ^~~~~~~~~~~~~~~~
         |                              |
         |                              int
   include/linux/mm.h:2151:55: note: expected 'const struct folio *' but argument is of type 'int'
    2151 | static inline void *folio_address(const struct folio *folio)
         |                                   ~~~~~~~~~~~~~~~~~~~~^~~~~
   include/linux/mm.h: In function 'ptdesc_is_reserved':
>> include/linux/mm.h:2738:36: warning: passing argument 1 of 'folio_test_reserved' makes pointer from integer without a cast [-Wint-conversion]
    2738 |         return folio_test_reserved(ptdesc_folio(pt));
         |                                    ^~~~~~~~~~~~~~~~
         |                                    |
         |                                    int
   In file included from include/linux/mmzone.h:23:
   include/linux/page-flags.h:375:62: note: expected 'struct folio *' but argument is of type 'int'
     375 | static __always_inline bool folio_test_##lname(struct folio *folio)     \
         |                                                ~~~~~~~~~~~~~~^~~~~
   include/linux/page-flags.h:423:9: note: in expansion of macro 'TESTPAGEFLAG'
     423 |         TESTPAGEFLAG(uname, lname, policy)                              \
         |         ^~~~~~~~~~~~
   include/linux/page-flags.h:494:1: note: in expansion of macro 'PAGEFLAG'
     494 | PAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
         | ^~~~~~~~
   include/linux/mm.h: In function 'ptdesc_alloc':
   include/linux/mm.h:2745:16: warning: returning 'int' from a function with return type 'struct ptdesc *' makes pointer from integer without a cast [-Wint-conversion]
    2745 |         return page_ptdesc(page);
         |                ^~~~~~~~~~~~~~~~~
   include/linux/mm.h: In function 'ptdesc_free':
>> include/linux/mm.h:2750:29: warning: initialization of 'struct page *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
    2750 |         struct page *page = ptdesc_page(pt);
         |                             ^~~~~~~~~~~
   include/linux/mm.h: In function 'free_reserved_ptdesc':
>> include/linux/mm.h:2988:28: warning: passing argument 1 of 'free_reserved_page' makes pointer from integer without a cast [-Wint-conversion]
    2988 |         free_reserved_page(ptdesc_page(pt));
         |                            ^~~~~~~~~~~~~~~
         |                            |
         |                            int
   include/linux/mm.h:2971:52: note: expected 'struct page *' but argument is of type 'int'
    2971 | static inline void free_reserved_page(struct page *page)
         |                                       ~~~~~~~~~~~~~^~~~
   cc1: some warnings being treated as errors
   make[2]: *** [scripts/Makefile.build:114: arch/sh/kernel/asm-offsets.s] Error 1
   make[2]: Target 'prepare' not remade because of errors.
   make[1]: *** [Makefile:1286: prepare0] Error 2
   make[1]: Target 'prepare' not remade because of errors.
   make: *** [Makefile:226: __sub-make] Error 2
   make: Target 'prepare' not remade because of errors.


vim +/page_ptdesc +2723 include/linux/mm.h

  2720	
  2721	static inline struct ptdesc *virt_to_ptdesc(const void *x)
  2722	{
> 2723		return page_ptdesc(virt_to_head_page(x));
  2724	}
  2725	
  2726	static inline void *ptdesc_to_virt(struct ptdesc *pt)
  2727	{
> 2728		return page_to_virt(ptdesc_page(pt));
  2729	}
  2730	
  2731	static inline void *ptdesc_address(struct ptdesc *pt)
  2732	{
> 2733		return folio_address(ptdesc_folio(pt));
  2734	}
  2735	
  2736	static inline bool ptdesc_is_reserved(struct ptdesc *pt)
  2737	{
> 2738		return folio_test_reserved(ptdesc_folio(pt));
  2739	}
  2740	
  2741	static inline struct ptdesc *ptdesc_alloc(gfp_t gfp, unsigned int order)
  2742	{
  2743		struct page *page = alloc_pages(gfp | __GFP_COMP, order);
  2744	
  2745		return page_ptdesc(page);
  2746	}
  2747	
  2748	static inline void ptdesc_free(struct ptdesc *pt)
  2749	{
> 2750		struct page *page = ptdesc_page(pt);
  2751	
  2752		__free_pages(page, compound_order(page));
  2753	}
  2754	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 02:13:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 02:13:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522539.811977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poaqZ-00078q-U0; Tue, 18 Apr 2023 02:13:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522539.811977; Tue, 18 Apr 2023 02:13:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poaqZ-00078j-Qf; Tue, 18 Apr 2023 02:13:31 +0000
Received: by outflank-mailman (input) for mailman id 522539;
 Tue, 18 Apr 2023 02:13:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jS8q=AJ=intel.com=lkp@srs-se1.protection.inumbo.net>)
 id 1poaqY-00078d-H0
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 02:13:30 +0000
Received: from mga04.intel.com (mga04.intel.com [192.55.52.120])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9743184a-dd8e-11ed-b21e-6b7b168915f2;
 Tue, 18 Apr 2023 04:13:25 +0200 (CEST)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 17 Apr 2023 19:13:19 -0700
Received: from lkp-server01.sh.intel.com (HELO b613635ddfff) ([10.239.97.150])
 by orsmga001.jf.intel.com with ESMTP; 17 Apr 2023 19:13:13 -0700
Received: from kbuild by b613635ddfff with local (Exim 4.96)
 (envelope-from <lkp@intel.com>) id 1poaqG-000cqb-33;
 Tue, 18 Apr 2023 02:13:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9743184a-dd8e-11ed-b21e-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1681784005; x=1713320005;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=wqlsjQk8Hlncd0Tit9Ob615YFcxS+GeVN0PRT6FhUrY=;
  b=czD+pxJvj3ElkP5b8s20F9FqE+YovoFFjgpTU5ssuoS9hxa2f4w9ckKj
   EAxyiGF41u6hOHX/qdGjXUwFPst+Ly9I6Fgax4HDdX0DDveDRXgFyIt5f
   sfM7uNNnVsK39ICbwIRsPSGHh4N4LMq6X059lbm9bCg8cs475CVOHmrKr
   /GeKzrCnJgb+Stdjy7nZBt+pfFAukBlDtskYcesnYLCZ3Ohv0AeGXowbT
   m/yYOze0pESuuByvW8oHPOqRhcsi3YT9uFMlvNfpZiGLF7Uq0miXINn5W
   MaOV1O5Du6JNgQcd7JFRREYg0OpZTtlIjmtKycr1RMW6YM5GCC186kW2Q
   Q==;
X-IronPort-AV: E=McAfee;i="6600,9927,10683"; a="343806401"
X-IronPort-AV: E=Sophos;i="5.99,205,1677571200"; 
   d="scan'208";a="343806401"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10683"; a="723460357"
X-IronPort-AV: E=Sophos;i="5.99,205,1677571200"; 
   d="scan'208";a="723460357"
Date: Tue, 18 Apr 2023 10:13:01 +0800
From: kernel test robot <lkp@intel.com>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: oe-kbuild-all@lists.linux.dev,
	Linux Memory Management List <linux-mm@kvack.org>,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: Re: [PATCH 12/33] mm: Create ptdesc equivalents for
 pgtable_{pte,pmd}_page_{ctor,dtor}
Message-ID: <202304180959.YFCTfVKw-lkp@intel.com>
References: <20230417205048.15870-13-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230417205048.15870-13-vishal.moola@gmail.com>

Hi Vishal,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on next-20230417]
[cannot apply to s390/features powerpc/next powerpc/fixes geert-m68k/for-next geert-m68k/for-linus linus/master v6.3-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Vishal-Moola-Oracle/s390-Use-_pt_s390_gaddr-for-gmap-address-tracking/20230418-045832
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20230417205048.15870-13-vishal.moola%40gmail.com
patch subject: [PATCH 12/33] mm: Create ptdesc equivalents for pgtable_{pte,pmd}_page_{ctor,dtor}
config: sh-allmodconfig (https://download.01.org/0day-ci/archive/20230418/202304180959.YFCTfVKw-lkp@intel.com/config)
compiler: sh4-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/d53de56a2dbf659b53aee1aa2eac60bcc936f10b
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Vishal-Moola-Oracle/s390-Use-_pt_s390_gaddr-for-gmap-address-tracking/20230418-045832
        git checkout d53de56a2dbf659b53aee1aa2eac60bcc936f10b
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=sh olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=sh prepare

If you fix the issue, kindly add the following tag where applicable:
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202304180959.YFCTfVKw-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from arch/sh/kernel/asm-offsets.c:14:
   include/linux/mm.h: In function 'virt_to_ptdesc':
   include/linux/mm.h:2723:16: error: implicit declaration of function 'page_ptdesc' [-Werror=implicit-function-declaration]
    2723 |         return page_ptdesc(virt_to_head_page(x));
         |                ^~~~~~~~~~~
   include/linux/mm.h:2723:16: warning: returning 'int' from a function with return type 'struct ptdesc *' makes pointer from integer without a cast [-Wint-conversion]
    2723 |         return page_ptdesc(virt_to_head_page(x));
         |                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from arch/sh/include/asm/thread_info.h:13,
                    from include/linux/thread_info.h:60,
                    from include/asm-generic/preempt.h:5,
                    from ./arch/sh/include/generated/asm/preempt.h:1,
                    from include/linux/preempt.h:78,
                    from include/linux/spinlock.h:56,
                    from include/linux/mmzone.h:8,
                    from include/linux/gfp.h:7,
                    from include/linux/mm.h:7:
   include/linux/mm.h: In function 'ptdesc_to_virt':
   include/linux/mm.h:2728:29: error: implicit declaration of function 'ptdesc_page'; did you mean 'pte_page'? [-Werror=implicit-function-declaration]
    2728 |         return page_to_virt(ptdesc_page(pt));
         |                             ^~~~~~~~~~~
   arch/sh/include/asm/page.h:139:27: note: in definition of macro '___va'
     139 | #define ___va(x)        ((x)+PAGE_OFFSET)
         |                           ^
   include/linux/mm.h:117:25: note: in expansion of macro '__va'
     117 | #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
         |                         ^~~~
   include/linux/mm.h:117:30: note: in expansion of macro 'PFN_PHYS'
     117 | #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
         |                              ^~~~~~~~
   include/asm-generic/memory_model.h:64:21: note: in expansion of macro '__page_to_pfn'
      64 | #define page_to_pfn __page_to_pfn
         |                     ^~~~~~~~~~~~~
   include/linux/mm.h:2728:16: note: in expansion of macro 'page_to_virt'
    2728 |         return page_to_virt(ptdesc_page(pt));
         |                ^~~~~~~~~~~~
   include/asm-generic/memory_model.h:46:35: warning: initialization of 'const struct page *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
      46 | ({      const struct page *__pg = (pg);                         \
         |                                   ^
   arch/sh/include/asm/page.h:139:27: note: in definition of macro '___va'
     139 | #define ___va(x)        ((x)+PAGE_OFFSET)
         |                           ^
   include/linux/mm.h:117:25: note: in expansion of macro '__va'
     117 | #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
         |                         ^~~~
   include/linux/mm.h:117:30: note: in expansion of macro 'PFN_PHYS'
     117 | #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
         |                              ^~~~~~~~
   include/asm-generic/memory_model.h:64:21: note: in expansion of macro '__page_to_pfn'
      64 | #define page_to_pfn __page_to_pfn
         |                     ^~~~~~~~~~~~~
   include/linux/mm.h:117:39: note: in expansion of macro 'page_to_pfn'
     117 | #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
         |                                       ^~~~~~~~~~~
   include/linux/mm.h:2728:16: note: in expansion of macro 'page_to_virt'
    2728 |         return page_to_virt(ptdesc_page(pt));
         |                ^~~~~~~~~~~~
   include/linux/mm.h: In function 'ptdesc_address':
   include/linux/mm.h:2733:30: error: implicit declaration of function 'ptdesc_folio'; did you mean 'page_folio'? [-Werror=implicit-function-declaration]
    2733 |         return folio_address(ptdesc_folio(pt));
         |                              ^~~~~~~~~~~~
         |                              page_folio
   include/linux/mm.h:2733:30: warning: passing argument 1 of 'folio_address' makes pointer from integer without a cast [-Wint-conversion]
    2733 |         return folio_address(ptdesc_folio(pt));
         |                              ^~~~~~~~~~~~~~~~
         |                              |
         |                              int
   include/linux/mm.h:2151:55: note: expected 'const struct folio *' but argument is of type 'int'
    2151 | static inline void *folio_address(const struct folio *folio)
         |                                   ~~~~~~~~~~~~~~~~~~~~^~~~~
   include/linux/mm.h: In function 'ptdesc_is_reserved':
   include/linux/mm.h:2738:36: warning: passing argument 1 of 'folio_test_reserved' makes pointer from integer without a cast [-Wint-conversion]
    2738 |         return folio_test_reserved(ptdesc_folio(pt));
         |                                    ^~~~~~~~~~~~~~~~
         |                                    |
         |                                    int
   In file included from include/linux/mmzone.h:23:
   include/linux/page-flags.h:375:62: note: expected 'struct folio *' but argument is of type 'int'
     375 | static __always_inline bool folio_test_##lname(struct folio *folio)     \
         |                                                ~~~~~~~~~~~~~~^~~~~
   include/linux/page-flags.h:423:9: note: in expansion of macro 'TESTPAGEFLAG'
     423 |         TESTPAGEFLAG(uname, lname, policy)                              \
         |         ^~~~~~~~~~~~
   include/linux/page-flags.h:494:1: note: in expansion of macro 'PAGEFLAG'
     494 | PAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
         | ^~~~~~~~
   include/linux/mm.h: In function 'ptdesc_alloc':
   include/linux/mm.h:2745:16: warning: returning 'int' from a function with return type 'struct ptdesc *' makes pointer from integer without a cast [-Wint-conversion]
    2745 |         return page_ptdesc(page);
         |                ^~~~~~~~~~~~~~~~~
   include/linux/mm.h: In function 'ptdesc_free':
   include/linux/mm.h:2750:29: warning: initialization of 'struct page *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
    2750 |         struct page *page = ptdesc_page(pt);
         |                             ^~~~~~~~~~~
   include/linux/mm.h: In function 'ptdesc_pte_ctor':
>> include/linux/mm.h:2826:31: warning: initialization of 'struct folio *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
    2826 |         struct folio *folio = ptdesc_folio(ptdesc);
         |                               ^~~~~~~~~~~~
   include/linux/mm.h: In function 'pgtable_pte_page_ctor':
   include/linux/mm.h:2837:32: warning: passing argument 1 of 'ptdesc_pte_ctor' makes pointer from integer without a cast [-Wint-conversion]
    2837 |         return ptdesc_pte_ctor(page_ptdesc(page));
         |                                ^~~~~~~~~~~~~~~~~
         |                                |
         |                                int
   include/linux/mm.h:2824:51: note: expected 'struct ptdesc *' but argument is of type 'int'
    2824 | static inline bool ptdesc_pte_ctor(struct ptdesc *ptdesc)
         |                                    ~~~~~~~~~~~~~~~^~~~~~
   include/linux/mm.h: In function 'ptdesc_pte_dtor':
   include/linux/mm.h:2842:31: warning: initialization of 'struct folio *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
    2842 |         struct folio *folio = ptdesc_folio(ptdesc);
         |                               ^~~~~~~~~~~~
   include/linux/mm.h: In function 'pgtable_pte_page_dtor':
   include/linux/mm.h:2851:25: warning: passing argument 1 of 'ptdesc_pte_dtor' makes pointer from integer without a cast [-Wint-conversion]
    2851 |         ptdesc_pte_dtor(page_ptdesc(page));
         |                         ^~~~~~~~~~~~~~~~~
         |                         |
         |                         int
   include/linux/mm.h:2840:51: note: expected 'struct ptdesc *' but argument is of type 'int'
    2840 | static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc)
         |                                    ~~~~~~~~~~~~~~~^~~~~~
   include/linux/mm.h: In function 'ptdesc_pmd_ctor':
   include/linux/mm.h:2935:31: warning: initialization of 'struct folio *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
    2935 |         struct folio *folio = ptdesc_folio(ptdesc);
         |                               ^~~~~~~~~~~~
   include/linux/mm.h: In function 'pgtable_pmd_page_ctor':
   include/linux/mm.h:2946:32: warning: passing argument 1 of 'ptdesc_pmd_ctor' makes pointer from integer without a cast [-Wint-conversion]
    2946 |         return ptdesc_pmd_ctor(page_ptdesc(page));
         |                                ^~~~~~~~~~~~~~~~~
         |                                |
         |                                int
   include/linux/mm.h:2933:51: note: expected 'struct ptdesc *' but argument is of type 'int'
    2933 | static inline bool ptdesc_pmd_ctor(struct ptdesc *ptdesc)
         |                                    ~~~~~~~~~~~~~~~^~~~~~
   include/linux/mm.h: In function 'ptdesc_pmd_dtor':
   include/linux/mm.h:2951:31: warning: initialization of 'struct folio *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
    2951 |         struct folio *folio = ptdesc_folio(ptdesc);
         |                               ^~~~~~~~~~~~
   include/linux/mm.h: In function 'pgtable_pmd_page_dtor':
   include/linux/mm.h:2960:25: warning: passing argument 1 of 'ptdesc_pmd_dtor' makes pointer from integer without a cast [-Wint-conversion]
    2960 |         ptdesc_pmd_dtor(page_ptdesc(page));
         |                         ^~~~~~~~~~~~~~~~~
         |                         |
         |                         int
   include/linux/mm.h:2949:51: note: expected 'struct ptdesc *' but argument is of type 'int'
    2949 | static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc)
         |                                    ~~~~~~~~~~~~~~~^~~~~~
   include/linux/mm.h: In function 'free_reserved_ptdesc':
   include/linux/mm.h:3016:28: warning: passing argument 1 of 'free_reserved_page' makes pointer from integer without a cast [-Wint-conversion]
    3016 |         free_reserved_page(ptdesc_page(pt));
         |                            ^~~~~~~~~~~~~~~
         |                            |
         |                            int
   include/linux/mm.h:2999:52: note: expected 'struct page *' but argument is of type 'int'
    2999 | static inline void free_reserved_page(struct page *page)
         |                                       ~~~~~~~~~~~~~^~~~
   cc1: some warnings being treated as errors
   make[2]: *** [scripts/Makefile.build:114: arch/sh/kernel/asm-offsets.s] Error 1
   make[2]: Target 'prepare' not remade because of errors.
   make[1]: *** [Makefile:1286: prepare0] Error 2
   make[1]: Target 'prepare' not remade because of errors.
   make: *** [Makefile:226: __sub-make] Error 2
   make: Target 'prepare' not remade because of errors.


vim +2826 include/linux/mm.h

  2823	
  2824	static inline bool ptdesc_pte_ctor(struct ptdesc *ptdesc)
  2825	{
> 2826		struct folio *folio = ptdesc_folio(ptdesc);
  2827	
  2828		if (!ptlock_init(ptdesc))
  2829			return false;
  2830		__SetPageTable(&folio->page);
  2831		lruvec_stat_add_folio(folio, NR_PAGETABLE);
  2832		return true;
  2833	}
  2834	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 02:17:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 02:17:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522544.811986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poauE-0007mL-HL; Tue, 18 Apr 2023 02:17:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522544.811986; Tue, 18 Apr 2023 02:17:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poauE-0007mD-Eh; Tue, 18 Apr 2023 02:17:18 +0000
Received: by outflank-mailman (input) for mailman id 522544;
 Tue, 18 Apr 2023 02:17:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=woKg=AJ=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1poauD-0007m5-0c
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 02:17:17 +0000
Received: from mail-ua1-x936.google.com (mail-ua1-x936.google.com
 [2607:f8b0:4864:20::936])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 224d6f0a-dd8f-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 04:17:15 +0200 (CEST)
Received: by mail-ua1-x936.google.com with SMTP id p12so5921636uak.13
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 19:17:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 224d6f0a-dd8f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681784234; x=1684376234;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=FmsrhIlqPhNfC3v/mD58KM5Yio1KnveeLKkcAYOXjeo=;
        b=CECXZ4WQPjSS2V5dKzTtIadvWHYmxLh9NBqNhhEJ7ifYb8OhcbVb3aPDF4jqhMsyCe
         IOLfD0Vp7+k9fWagsVQaHxAhlKYjjTVFwjzdDXiTGYxjSsBdt6MjEQbUshZSZeC3jjBd
         iTVO+gR/wAf5QlwTLaJrVoJTZ8hkstouFq4wSouTzZJwtvllQH5UJhFIC4zjaImdPJS5
         QeRW++vWVLEugpEuDjY7y/4wV87XEGOg2HNgU80L8wWO1R0J1UfwjqPjGYtkSyszDyBu
         jHc0kT8B5IyvCk10qW2hCBfqKBjZU547CSN7QRvWci0xbKVfHezIJmH6qnafIW18AQG3
         UB/w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681784234; x=1684376234;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=FmsrhIlqPhNfC3v/mD58KM5Yio1KnveeLKkcAYOXjeo=;
        b=NMl5AIqULMCtbYGqxLJKnO50bq9tNDF9PZDPqrOAXfuC0ChnSKYyb0HPwsSue7LVh5
         wXPrQZ39nkgCLJYuHTr3icE7ZnA1U1gmL0OKYu6bfw/6T6+e7nhalXRF5FEX2W2+Mhy/
         etMDyjzNZ4UPTheXplttCqktULSM7SGy4qG5QRCp1kmJoklEDqRlp9SoqBfCMje6UXLy
         LHVptfXvUVSd1z8bUd/LWuZx7SJJuG0sT8Vi5hJYrR7HiaR3w1yZmCiewyktgt8H6Itb
         uQEVvvCugD2xFp1ZU8wb7ypcOlct5hNTSe+XxYCP5nsNjHEDGlmpo04I8M3Wo4gr8r3h
         oZlw==
X-Gm-Message-State: AAQBX9cINyAyOqnRAVWer4Zd6Nfob7FNqTR6QippLbhEKHzp9wLsEkwJ
	5urc7sx/BKH5r2klRwvM6COqK+EXbHJSh926m8E=
X-Google-Smtp-Source: AKy350abip4qDxzRZ+maiO/oRT+Jew3HTxluYCQI3bC+PB9CVOXK5jAPdmxRqBCSwDmVz3Ezfd8ostQG11krWDSgebs=
X-Received: by 2002:a1f:bd52:0:b0:440:125:7e59 with SMTP id
 n79-20020a1fbd52000000b0044001257e59mr4930034vkf.1.1681784233897; Mon, 17 Apr
 2023 19:17:13 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1678970065.git.oleksii.kurochko@gmail.com> <2785518800dce64fafb3096480a5ae4c4e026bcb.1678970065.git.oleksii.kurochko@gmail.com>
In-Reply-To: <2785518800dce64fafb3096480a5ae4c4e026bcb.1678970065.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 18 Apr 2023 12:16:47 +1000
Message-ID: <CAKmqyKOUKv+6yw8R4ccm_rJL8nwxKJ0RRYq0NXDfUneGu15Fzg@mail.gmail.com>
Subject: Re: [PATCH v2 1/2] xen/riscv: add EMBEDDED_EXTRA_CFLAGS to CFLAGS
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, 
	Alistair Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Mar 16, 2023 at 11:22 PM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> The patch is needed to keep all addresses of cpu0_boot_stack
> PC-relative.
>
> The pseudoinstruction 'la' can be transformed to 'auipc/addi' or
> 'auipc/l{w|d}'. Which form is used depends on the .option directive
> (nopic or pic) or on compiler flags.
>
> Right now, 'la' transforms to 'auipc/l{w|d}', which in the case of
> cpu0_boot_stack[] leads to the usage of _GLOBAL_OFFSET_TABLE_. The
> addresses stored there don't account for the possibility that the
> link address != the load address (addresses inside the GOT sections
> are fixed at link time).
>
> This happens because the compiler from the riscv64 docker image was
> built with --enable-default-pie:
>   [user@49295ae49cbe build]$ riscv64-linux-gnu-gcc -v
>   Using built-in specs.
>   COLLECT_GCC=riscv64-linux-gnu-gcc
>   COLLECT_LTO_WRAPPER=/usr/lib/gcc/riscv64-linux-gnu/12.2.0/lto-wrapper
>   Target: riscv64-linux-gnu
>   Configured with: /build/riscv64-linux-gnu-gcc/src/gcc-12.2.0/configure
>   --prefix=/usr --program-prefix=riscv64-linux-gnu-
>   --with-local-prefix=/usr/riscv64-linux-gnu
>   --with-sysroot=/usr/riscv64-linux-gnu
>   --with-build-sysroot=/usr/riscv64-linux-gnu --libdir=/usr/lib
>   --libexecdir=/usr/lib --target=riscv64-linux-gnu
>   --host=x86_64-pc-linux-gnu --build=x86_64-pc-linux-gnu
>   --with-system-zlib --with-isl --with-linker-hash-style=gnu
>   --disable-nls --disable-libunwind-exceptions --disable-libstdcxx-pch
>   --disable-libssp --disable-multilib --disable-werror
>   --enable-languages=c,c++ --enable-shared --enable-threads=posix
>   --enable-__cxa_atexit --enable-clocale=gnu --enable-gnu-unique-object
>   --enable-linker-build-id --enable-lto --enable-plugin
>   --enable-install-libiberty --enable-gnu-indirect-function
>   --enable-default-pie --enable-checking=release
>   Thread model: posix
>   Supported LTO compression algorithms: zlib zstd
>   gcc version 12.2.0 (GCC)
>
> Looking at gcc spec file for the RISC-V architecture:
>   [user@49295ae49cbe build]$ riscv64-linux-gnu-gcc -dumpspecs | grep -i
>   pic
>   --traditional-format %(subtarget_asm_debugging_spec)
>   %{fno-pie|fno-PIE|fno-pic|fno-PIC:;:-fpic} %{march=*} %{mabi=*}
>   %{mno-relax} %{mbig-endian} %{mlittle-endian}
>   %(subtarget_asm_spec)%{misa-spec=*}
> which means that -fpic is enabled if none of the following options are
> present on the command line:
>     -fno-pie
>     -fno-PIE
>     -fno-pic
>     -fno-PIC
>
> That's the reason why 'la' is transformed to 'auipc/l{w|d}' via the
> GOT; adding -fno-PIE makes the expansion independent of the
> toolchain's defaults.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Acked-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  Changes in V2:
>  * instead of changing 'la' to 'lla' to keep cpu0_boot_stack
>    PC-relative, CFLAGS was updated with EMBEDDED_EXTRA_CFLAGS, which
>    contains -fno-PIE; thereby 'la' will be transformed to 'auipc/addi'
>    without GOT usage.
>  * update the commit message with additional details.
> ---
>  xen/arch/riscv/arch.mk | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
> index 45fe858ee0..7448f759b4 100644
> --- a/xen/arch/riscv/arch.mk
> +++ b/xen/arch/riscv/arch.mk
> @@ -1,6 +1,8 @@
>  ########################################
>  # RISCV-specific definitions
>
> +$(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
> +
>  CFLAGS-$(CONFIG_RISCV_64) += -mabi=lp64
>
>  riscv-march-$(CONFIG_RISCV_ISA_RV64G) := rv64g
> --
> 2.39.2
>
>
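For illustration, the two expansions of 'la' discussed above can be sketched roughly as follows (the relocation operators are indicative; exact code generation depends on the assembler and flags):

```asm
    # .option nopic / -fno-PIE: PC-relative address computation, no GOT
    la   a0, cpu0_boot_stack   # auipc a0, %pcrel_hi(cpu0_boot_stack)
                               # addi  a0, a0, %pcrel_lo(.)

    # .option pic / default PIE: address loaded from the GOT
    la   a0, cpu0_boot_stack   # auipc a0, %got_pcrel_hi(cpu0_boot_stack)
                               # ld    a0, %pcrel_lo(.)(a0)
```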


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 02:18:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 02:18:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522548.811997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poaux-0008I5-QN; Tue, 18 Apr 2023 02:18:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522548.811997; Tue, 18 Apr 2023 02:18:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poaux-0008Hy-N1; Tue, 18 Apr 2023 02:18:03 +0000
Received: by outflank-mailman (input) for mailman id 522548;
 Tue, 18 Apr 2023 02:18:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=woKg=AJ=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1poauw-0007m5-8L
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 02:18:02 +0000
Received: from mail-ua1-x931.google.com (mail-ua1-x931.google.com
 [2607:f8b0:4864:20::931])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3d5b53b8-dd8f-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 04:18:00 +0200 (CEST)
Received: by mail-ua1-x931.google.com with SMTP id q10so4624902uas.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 Apr 2023 19:18:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d5b53b8-dd8f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681784279; x=1684376279;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=zb73uaT0Z+rRNam7JCgWVn/vfNombeY2fg/tQca0kHA=;
        b=YGTGUhCMLb6f95hUBLHzh/b1zU4cZtTuKZqItNcdB+001D6HNrlzjGqBOy9TRMLG4U
         AoV3MX87PfW40aSTgL1qNQhReTpn4ZstJz4u9mppNmPj6CYH7iSEg33oDuj0nuN3nCgK
         Oazx0Wq/KGbCQ28HuEbkEBY4/5AYHmdbggccmaZP3wbgsdqOkFl/AZgjizTfqpOSt32b
         Ll2hbTbUnruTDjZiBrFHX4bQiH2Hn6y3w+gM5cR5X/qdhxDiQz7YdDuDhUXo5MEp29mV
         eJFnuP/iwfcTpRmicuguBbJpuOLMGDyycbgoC//yXBc03nkrPG2zXNA9YoPpzf/HG0Ny
         J1Jw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681784279; x=1684376279;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=zb73uaT0Z+rRNam7JCgWVn/vfNombeY2fg/tQca0kHA=;
        b=P2vSA5Ks9QC38xdPvhFft5pFjLgtDC8uXUQHPwClnMpo/9MYjq/orBUzbqoL+FJ+js
         doeL+2RbDETq46KuwKjaORtlNCEZJyIeKhOhrf86HRe+tPuwOZaQWYXECcW4FbN0HzBP
         rCSbNY7J95yLvGCiTpf8E0eyRNIV2TDej8I14LI2By+lEzH/n4PNqHp5yLx3vRsJj/9O
         PasIxY83m3b+0MXScY8r3jPDDd5Up3366qEOxIV6/Tj1zTKpbFTm7nZjxyiTb1e/x6k0
         eGT0RRd5jk1SNO778aeFwmT2+J17RNr9zP3MLKSNl/liIavNDjQeW/vjyQZp/smoMml8
         9aJw==
X-Gm-Message-State: AAQBX9dG1ORzScdcD5dz/lCnNlg/j/AGx6bEopWGAs07yHi7nJ0BmP+r
	LZhSp6cBnkg5Sc7B5Lu9Z0cEOJiE5tcDg6yQuq8=
X-Google-Smtp-Source: AKy350YkmJW7VmWXJK25j1G2LAtdUYtgU99d1RWAHaXOLLdf77ZwOucMlNkUhK1jLVZsSWmOs9ZkIOofbU1ZQmxOrrw=
X-Received: by 2002:a1f:bd4b:0:b0:439:bd5c:630 with SMTP id
 n72-20020a1fbd4b000000b00439bd5c0630mr4872877vkf.6.1681784279186; Mon, 17 Apr
 2023 19:17:59 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1678970065.git.oleksii.kurochko@gmail.com> <7c066d6dcc0618749df04785b34b93819148087d.1678970065.git.oleksii.kurochko@gmail.com>
In-Reply-To: <7c066d6dcc0618749df04785b34b93819148087d.1678970065.git.oleksii.kurochko@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Tue, 18 Apr 2023 12:17:33 +1000
Message-ID: <CAKmqyKPj43hVKjUXyNSiy4Y+VeJAPTM_E7=DzAY8dp=wWETJ1A@mail.gmail.com>
Subject: Re: [PATCH v2 2/2] xen/riscv: add explicit check that .got{.plt} is empty
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, 
	Alistair Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Mar 16, 2023 at 11:22 PM Oleksii Kurochko
<oleksii.kurochko@gmail.com> wrote:
>
> Usage of GOT sections should be avoided in the hypervisor. To catch
> such use cases earlier, as soon as GOT entries are produced, the
> patch introduces .got and .got.plt output sections and adds asserts
> that they're empty.
>
> The sections won't actually be created as long as they remain empty;
> otherwise the asserts would cause an early build failure.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Acked-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> Changes in V2:
>  * the patch was introduced in patch series v2.
> ---
>  xen/arch/riscv/xen.lds.S | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
> index ca57cce75c..f299ea8422 100644
> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -1,3 +1,4 @@
> +#include <xen/lib.h>
>  #include <xen/xen.lds.h>
>
>  #undef ENTRY
> @@ -123,6 +124,15 @@ SECTIONS
>          *(SORT(.init_array.*))
>          __ctors_end = .;
>      } :text
> +
> +    .got : {
> +        *(.got)
> +    } : text
> +
> +    .got.plt : {
> +        *(.got.plt)
> +    } : text
> +
>      . = ALIGN(POINTER_ALIGN);
>      __init_end =3D .;
>
> @@ -156,3 +166,6 @@ SECTIONS
>
>      ELF_DETAILS_SECTIONS
>  }
> +
> +ASSERT(!SIZEOF(.got),      ".got non-empty")
> +ASSERT(!SIZEOF(.got.plt),  ".got.plt non-empty")
> --
> 2.39.2
>
>
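The mechanism of the check above can be shown as a standalone linker-script sketch (section and message names taken from the patch; the ASSERTs only work because the output sections are declared, giving SIZEOF() something to measure):

```ld
SECTIONS
{
    /* ... other output sections ... */

    /* Collect any GOT input sections so their size can be inspected. */
    .got : {
        *(.got)
    } :text

    .got.plt : {
        *(.got.plt)
    } :text
}

/* Fail the link as soon as any GOT entry is emitted. */
ASSERT(!SIZEOF(.got),     ".got non-empty")
ASSERT(!SIZEOF(.got.plt), ".got.plt non-empty")
```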


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 05:16:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 05:16:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522553.812007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1podhb-0001Jb-7n; Tue, 18 Apr 2023 05:16:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522553.812007; Tue, 18 Apr 2023 05:16:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1podhb-0001JU-4k; Tue, 18 Apr 2023 05:16:27 +0000
Received: by outflank-mailman (input) for mailman id 522553;
 Tue, 18 Apr 2023 05:16:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LW8D=AJ=fujitsu.com=dietmar.hahn@srs-se1.protection.inumbo.net>)
 id 1podhZ-0001JO-N5
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 05:16:25 +0000
Received: from esa8.fujitsucc.c3s2.iphmx.com (esa8.fujitsucc.c3s2.iphmx.com
 [68.232.159.88]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 285fc3f6-dda8-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 07:16:22 +0200 (CEST)
Received: from mail-be0deu01lp2170.outbound.protection.outlook.com (HELO
 DEU01-BE0-obe.outbound.protection.outlook.com) ([104.47.7.170])
 by ob1.fujitsucc.c3s2.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 Apr 2023 14:16:21 +0900
Received: from FR0P281MB2123.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:29::6) by
 FRYP281MB2558.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:76::11) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.45; Tue, 18 Apr 2023 05:16:19 +0000
Received: from FR0P281MB2123.DEUP281.PROD.OUTLOOK.COM
 ([fe80::b0e4:c7ef:40f4:36f]) by FR0P281MB2123.DEUP281.PROD.OUTLOOK.COM
 ([fe80::b0e4:c7ef:40f4:36f%6]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 05:16:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 285fc3f6-dda8-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=fujitsu.com; i=@fujitsu.com; q=dns/txt; s=fj1;
  t=1681794982; x=1713330982;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=6uAz/N+NDIV2IK/52/YqNAowWVLd0xRgDPHWNnQ0qQA=;
  b=qfti54PEw6hnj0KRXuwCO885AszUp7e5rZCslsgC7p/4TZbBZ0vJpTTt
   r6HV9HlVikw1eOSFM2ToNvMKdB34yU1d15zpA7UJuVbTw/EYk1jQ/0RNR
   J1FnwvnnC04MJ7m3hNwPHiMxYy8FnlvQKCDUejwr6Jt1axpcrLu8Uf27Z
   TzoiXZNKvDLiwsie8INf0rlztlTkmNcp9PN6mDtmudMiANVfp39a3jkNS
   OfXG/VrxMkC4/u49xyZ4UjOsaKlVWT014phfnaS3NX9FfnHzzuFMDcleQ
   QqE8gm/FOQ3OwBa1U2HlxVanJfph5RkD5HTPl+blKuyVqu0DHSUnagb+9
   g==;
X-IronPort-AV: E=McAfee;i="6600,9927,10683"; a="82339988"
X-IronPort-AV: E=Sophos;i="5.99,206,1677510000"; 
   d="scan'208";a="82339988"
From: "Dietmar Hahn (Fujitsu)" <dietmar.hahn@fujitsu.com>
To: Juergen Gross <jgross@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: RE: [PATCH v3 1/2] xen: move CONFIG_DEBUG_INFO out of EXPERT section
Thread-Topic: [PATCH v3 1/2] xen: move CONFIG_DEBUG_INFO out of EXPERT section
Thread-Index: AQHZZ8aayKNwRwnxU0qZJCKcF1kheq8wmN2g
Date: Tue, 18 Apr 2023 05:16:19 +0000
Message-ID:
 <FR0P281MB2123CD306618A20094AD0749949D9@FR0P281MB2123.DEUP281.PROD.OUTLOOK.COM>
References: <20230405135629.21829-1-jgross@suse.com>
 <20230405135629.21829-2-jgross@suse.com>
In-Reply-To: <20230405135629.21829-2-jgross@suse.com>
Accept-Language: de-DE, en-US
Content-Language: de-DE
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=fujitsu.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: FR0P281MB2123:EE_|FRYP281MB2558:EE_
x-ms-office365-filtering-correlation-id: 4e6b225d-2557-4049-8626-08db3fcc0b09
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="Windows-1252"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: fujitsu.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: FR0P281MB2123.DEUP281.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-Network-Message-Id: 4e6b225d-2557-4049-8626-08db3fcc0b09
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Apr 2023 05:16:19.3862
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: a19f121d-81e1-4858-a9d8-736e267fd4c7
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: XpG/qKmsP3+WcW5ljE1jlyYT65huJ3HTaRL/yvT1uZ12t1KqRhOr4mPvbEJV5fGdKlFgyPoeFhaaqv7zqr+K0dfLCo499CatK/a+C2AD8xs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: FRYP281MB2558

Hi Juergen,

> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
> Sent: Wednesday, April 5, 2023 3:56 PM
> To: xen-devel@lists.xenproject.org
> Cc: Juergen Gross <jgross@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Jan
> Beulich <jbeulich@suse.com>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
> Subject: [PATCH v3 1/2] xen: move CONFIG_DEBUG_INFO out of EXPERT section
>
> In order to support hypervisor analysis of crash dumps, xen-syms needs
> to contain debug_info. It should be allowed to configure the hypervisor
> to be built with CONFIG_DEBUG_INFO in non-debug builds without having
> to enable EXPERT.
>
> Using a rather old gcc (7.5) it was verified that code generation
> doesn't really differ with CONFIG_DEBUG_INFO on or off as long as
> CONFIG_DEBUG is not set (the only observed differences were slightly
> different symbol addresses, verified via "objdump -d", resulting from
> the different config.gz embedded in the binary). The choice of an old
> gcc version was based on the assumption that newer gcc won't regress
> in this regard.
>
> So move CONFIG_DEBUG_INFO out of the section guarded by EXPERT.
>=20
> It should be mentioned that there have been reports that the linking
> of the xen.efi might take considerably longer with CONFIG_DEBUG_INFO
> selected when using newer binutils.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - expanded commit message (Jan Beulich)
> V3:
> - move DEBUG_INFO block to the end of the file (Jan Beulich)
> ---
>  xen/Kconfig.debug | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/xen/Kconfig.debug b/xen/Kconfig.debug
> index fad3050d4f..279dbe8274 100644
> --- a/xen/Kconfig.debug
> +++ b/xen/Kconfig.debug
> @@ -28,13 +28,6 @@ config GDBSX
>  	  If you want to enable support for debugging guests from dom0 via
>  	  gdbsx then say Y.
>
> -config DEBUG_INFO
> -	bool "Compile Xen with debug info"
> -	default y
> -	---help---
> -	  If you say Y here the resulting Xen will include debugging info
> -	  resulting in a larger binary image.
> -
>  config FRAME_POINTER
>  	bool "Compile Xen with frame pointers"
>  	default DEBUG
> @@ -132,4 +125,11 @@ source "arch/$(SRCARCH)/Kconfig.debug"
>
>  endif # DEBUG || EXPERT
>=20
> +config DEBUG_INFO
> +	bool "Compile Xen with debug info"
> +	default DEBUG
> +	help
> +	  If you say Y here the resulting Xen will include debugging info
> +	  resulting in a larger binary image.
> +
>  endmenu
> --
> 2.35.3
>

For the non-efi xen:

Tested-by: Dietmar Hahn <dietmar.hahn@fujitsu.com>

Dietmar.
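
[Editorial note] The commit message above says the "no codegen difference" claim was checked via "objdump -d". That kind of check can be reproduced standalone on any trivial translation unit; the sketch below uses hypothetical file names and assumes a working C compiler (`cc`) and binutils `objdump` are installed — it is an illustration of the method, not the actual Xen comparison:

```shell
# Build the same translation unit with and without debug info, then
# compare the disassembly: -g is expected to add debug sections only,
# not to change the generated code.
cat > t.c <<'EOF'
int add(int a, int b) { return a + b; }
EOF
cc -O2 -c -o plain.o t.c        # without debug info
cc -O2 -g -c -o dbg.o t.c       # with debug info
# objdump -d disassembles code sections only; skip the leading header
# lines, which contain the (differing) object file names.
objdump -d plain.o | tail -n +3 > plain.dis
objdump -d dbg.o   | tail -n +3 > dbg.dis
diff plain.dis dbg.dis && echo "identical"
```

For xen-syms the commit message notes one extra wrinkle: the embedded config.gz differs between configs, which shifts symbol addresses, so a raw diff of the disassembly is noisier there than in this toy case.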


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 05:59:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 05:59:31 +0000
Message-ID: <49115307-753f-8196-55a7-8e8c1e50b503@suse.com>
Date: Tue, 18 Apr 2023 07:58:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230417135219.3776777-1-andrew.cooper3@citrix.com>
 <d83288c5-6247-ef7d-b9ba-8bf24c7831ac@suse.com>
 <22179eac-4fc9-1521-2a83-2313b8c44a2d@citrix.com>
 <3ea38da5-70a9-6887-5384-fe002d8568c4@suse.com>
 <2ebda1f3-23fb-3f06-c4ca-1ac508c82b40@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2ebda1f3-23fb-3f06-c4ca-1ac508c82b40@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: suse.com

On 17.04.2023 21:34, Andrew Cooper wrote:
> On 17/04/2023 3:51 pm, Jan Beulich wrote:
>> On 17.04.2023 16:41, Andrew Cooper wrote:
>>> On 17/04/2023 2:59 pm, Jan Beulich wrote:
>>>> On 17.04.2023 15:52, Andrew Cooper wrote:
>>>>> @@ -5879,6 +5880,73 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>>>>>      return modify_xen_mappings(s, e, _PAGE_NONE);
>>>>>  }
>>>>>  
>>>>> +/*
>>>>> + * Similar to modify_xen_mappings(), but used by the alternatives and
>>>>> + * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
>>>>> + * responsibility of the caller, and *MUST* not be introduced here.
>>>>> + *
>>>>> + * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
>>>>> + * Must be called with present flags, and over present mappings.
>>>>> + * Must be called on leaf page boundaries.
>>>> This last sentence, while wording-wise correct, could do with making more
>>>> explicit that it is the caller's responsibility to know whether large page
>>>> mappings are in use, due to ...
>>> The meaning here is really "this doesn't shatter superpages", and this
>>> was the most concise I could come up with.
>>>
>>> Would ", i.e. won't shatter 2M pages." as a clarification work?
>> Yes, that would definitely help. Nevertheless I was more after something
>> like "..., i.e. for 2M mappings on 2M boundaries." Which, thinking about
>> it, points out that while you have a respective check for the start
>> address, the full 2M page would be changed even if the end address wasn't
>> 2M aligned (but fell in the middle of a 2M page).
> 
> There's no nice way to check for that, because a range that starts on a
> 4k non-2M boundary can legitimately end on a 2M boundary at 4k granularity.

How about

        if ( l2e_get_flags(l2e) & _PAGE_PSE )
        {
            ASSERT(l1_table_offset(v) == 0);
            ASSERT(e - v >= (1UL << L2_PAGETABLE_SHIFT));

?

> How about ", i.e. s and e must not be in the middle of a superpage." then?

That sounds good, thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 06:14:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 06:14:52 +0000
Message-ID: <3cf8cdbd-75b6-b1a0-6230-b7efdb9031be@suse.com>
Date: Tue, 18 Apr 2023 08:14:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v3 0/2] xen: some CONFIG_DEBUG_INFO changes
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230405135629.21829-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230405135629.21829-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: suse.com

On 05.04.2023 15:56, Juergen Gross wrote:
> Enabling crash dump analysis of the hypervisor requires the hypervisor
> having been built with CONFIG_DEBUG_INFO enabled. Today this requires
> either CONFIG_DEBUG or CONFIG_EXPERT to be set, neither of which is
> security supported.
> 
> This small series changes that in order to allow security supported
> Xen builds with the capability to do crash dump analysis via the
> "crash" tool.
> 
> Note that due to problems with test machines proper support for EFI
> booted systems hasn't been verified, so this will likely need some more
> work.
> 
> Changes in V2:
> - comments addressed
> 
> Changes in V3:
> - comments addressed
> 
> Juergen Gross (2):
>   xen: move CONFIG_DEBUG_INFO out of EXPERT section
>   xen: update CONFIG_DEBUG_INFO help text
> 
>  xen/Kconfig.debug | 22 +++++++++++++++-------
>  1 file changed, 15 insertions(+), 7 deletions(-)

Seeing no-one else showing any interest either way:
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 06:30:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 06:30:45 +0000
Message-ID: <fdc599f9-da0f-815c-1850-3120c5f69b73@suse.com>
Date: Tue, 18 Apr 2023 08:30:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 0/2] deal with GOT stuff for RISC-V
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1678970065.git.oleksii.kurochko@gmail.com>
 <69e031eb-6172-1ab0-5ffa-4650f69e83a7@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <69e031eb-6172-1ab0-5ffa-4650f69e83a7@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
 =?utf-8?B?ZzBUdE1laTdoejV4TTk5eENtdHhtdWdjNkpGM0k4aWVDQ08xYk1Xd2dtMUwz?=
 =?utf-8?B?OEdkMlNwUyt5aWtUODMySnBQTVpneU5EdE95ZDd6VXBKbkxuNXhiN0cxZG9x?=
 =?utf-8?B?WkVkdW5qQzlXcWM2aUpwODZFcml3OS9NeHcvTlVNZzVSRnU4b1NZV3FRNGFj?=
 =?utf-8?B?M2FKMmNKdEtHQ2NWdjI2Q3FhakZvNUlGL0FwRzl0ZEhxYnN3RVdNL2Vtekpv?=
 =?utf-8?B?RUIrbHN0b3ljNGhydEFDZnQvcjYyU0p6VXZnamdUZ3NaWnFrbWc5SDZQSk4y?=
 =?utf-8?B?UXBweWVSZTlOWWxDUlZrVTVCK1VUeFFRL3ZHY1dCaVI3UTVZT2lUSzltb2V4?=
 =?utf-8?B?UFhaeXNLd01Vck0zeGJOcW1TcUc0SnEwME1jWlZ1a3dXckk3ZFBOWVRaK29j?=
 =?utf-8?B?N09EeE93cVc4ME5Meld6enBNOUJLbVZPWlJtQTFGZ0pSTDVSaC85KzIvUFpa?=
 =?utf-8?B?SlpmKzlvaUlNRWMzeTYzOVNDOUlOQUdFbk1LbjBMWmU4c0lrS3pIQlBBRkdY?=
 =?utf-8?B?alp3RE9ScittVzNJb3cva0Fmd09RRVhORXc0WEcraGVPYnc2R1VlSDZPeFYy?=
 =?utf-8?B?c0xOVmoxZHNuQmExYWVRbDJGL0NHV1hEMFFxZ1NuUHhyM1hIM1QrY281bExp?=
 =?utf-8?B?SFN2dkpTa2VWYkkvZnJZcTZENlJSUTFSY3FRMDNaaGtwd1BFT09lanZyaHZM?=
 =?utf-8?B?c2FxWWZ5TUJTSWJWczRuNld4TkN6SmpaMy9TOUZ5ejlmSnZOanA5dUk3SlNC?=
 =?utf-8?B?TlZUbW1xYStna1d6bVNaeUFFR2duN0Y5eHRObXIwdzUyMkc0eFhmNFNmMUlD?=
 =?utf-8?B?andnZnptS1NVYThjUlEwTmNIN3JEVTBEOU8rMy9qWDhtbmhZRVdVdzFrNUFm?=
 =?utf-8?B?d1pINmVuQnYzQldlcFN2YjJJSzRGUis1WXRFQmJ0STh3TWRqZG94czdIbFJv?=
 =?utf-8?B?amRUZVVlbzU3Y3ZRQXc4Vndack1WOVNGNWMvbTZMT2IwQUN5c095U2R0elFa?=
 =?utf-8?B?cjRtUVBtQ0NlSGdIbXdpYVowMmtKRVQxL3hNUEF1cXNneVRIaDZJK1Z6OTk5?=
 =?utf-8?B?eXBtSzBzOXZ3eEtReURuZlNNb0YwN3FuczdPUXR5cG9LVzZHYnBKWjRnYkxG?=
 =?utf-8?B?ZGxqS0JWd0V1ZzJqcS8wbnhPYlMvZWVEbENyZVNLbFdRR0dTMlYxTG9uVXp0?=
 =?utf-8?B?dEh2Q011Z3NSU1p4RWR5eDUwREZid3BqQndGL0l5MllabkhvdklqSmdPbkRG?=
 =?utf-8?B?N21BT3BhRWFlc0JuNHdaQWhQTEM1dUo3T1U4Z1hoelJVMlJTWDYxOU9TcldY?=
 =?utf-8?B?U3N3Q1lEUy81ZGt4Ti9HMUJCWnB5VEtrSkZFWkZWaDZ5SXJ5TEhqNFhaQXFX?=
 =?utf-8?B?TGc4emxWTXpGbWJQT0g1MHFOMnY2ZDNVMmU0SUdycEVmM2lKaFZIaDhYYnNX?=
 =?utf-8?B?UjBBSFhBaUJuSlJGeHJWeDY0WW1tS0hPVm9jQ2QvSkJJaUhzZnA0ME5LSFFa?=
 =?utf-8?B?Vy9PQnFMaXNWNEF4WU1VK3JNaC91K0FjRnpiMi93eEgzU01MVjRXeS95am9z?=
 =?utf-8?B?Z2dDbnhsTEJoaW40UUlMdW9tdnNxQmZkem1xcXhTOVZFZ2dDWmp3c1BZcUdx?=
 =?utf-8?Q?NrscsKCOyfOQG3hoGxnvTG+NJ?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ad6db745-76fa-4814-76de-08db3fd659b3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 06:30:06.5589
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: nkPeFQ0Z08ZGJO6d68C8U9vkSx7K63OJiYLRVCMVWD9O/UmR0QHTW92qlNoIxqwKuBnqFqPqAIv+nSHcSj3BaQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8554

On 16.03.2023 14:59, Andrew Cooper wrote:
> On 16/03/2023 1:22 pm, Oleksii Kurochko wrote:
>> Oleksii Kurochko (2):
>>   xen/riscv: add EMBEDDED_EXTRA_CFLAGS to CFLAGS
>>   xen/riscv: add explicit check that .got{.plt} is empty
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

I'm sorry, I failed to apply this ack while committing.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 06:32:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 06:32:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522574.812047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poetc-0002e1-GY; Tue, 18 Apr 2023 06:32:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522574.812047; Tue, 18 Apr 2023 06:32:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poetc-0002du-Bh; Tue, 18 Apr 2023 06:32:56 +0000
Received: by outflank-mailman (input) for mailman id 522574;
 Tue, 18 Apr 2023 06:32:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poetb-0002dk-GY; Tue, 18 Apr 2023 06:32:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poetb-00049D-9V; Tue, 18 Apr 2023 06:32:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poeta-00082c-Pz; Tue, 18 Apr 2023 06:32:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1poeta-0004fM-PZ; Tue, 18 Apr 2023 06:32:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g7EUvKOJSSVHaU2nDJbknJQBUhXFAqxkRxR01eqJODg=; b=IvZwe+QgSV0I35pyVCuzHhnYLW
	YLWsopP2yXP5KzA+kSqOjdvsd6ToWFYXh4JowNxtJ5OfiRoIdX23/10bk34rhGv5Ua9wk0YanCBjX
	61wUhR3Qnr/7Df3DjeTtuSxBo8/wOPw8YPMhZDxBWzfrDRh+wA6VFQ2EmwSYVLrJkfq0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180287-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180287: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-shadow:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5eb6bd7454e253f4907dbeb7aa982967b21698bc
X-Osstest-Versions-That:
    xen=44843cee3d2b8daa09e5860fc4574219b57acde8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Apr 2023 06:32:54 +0000

flight 180287 xen-unstable real [real]
flight 180294 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180287/
http://logs.test-lab.xenproject.org/osstest/logs/180294/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-shadow     7 xen-install         fail pass in 180294-retest
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180294-retest
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install fail pass in 180294-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 180294 like 180282
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180280
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180282
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180282
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180282
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180282
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180282
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180282
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180282
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180282
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180282
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180282
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5eb6bd7454e253f4907dbeb7aa982967b21698bc
baseline version:
 xen                  44843cee3d2b8daa09e5860fc4574219b57acde8

Last test of basis   180282  2023-04-17 07:34:27 Z    0 days
Testing same since   180287  2023-04-17 18:09:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   44843cee3d..5eb6bd7454  5eb6bd7454e253f4907dbeb7aa982967b21698bc -> master


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 06:36:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 06:36:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522589.812081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poewp-0003ar-BM; Tue, 18 Apr 2023 06:36:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522589.812081; Tue, 18 Apr 2023 06:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poewp-0003ak-7l; Tue, 18 Apr 2023 06:36:15 +0000
Received: by outflank-mailman (input) for mailman id 522589;
 Tue, 18 Apr 2023 06:36:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TTWx=AJ=citrix.com=prvs=4659928b3=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poewo-0003ae-8d
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 06:36:14 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d4a9452-ddb3-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 08:36:09 +0200 (CEST)
Received: from mail-bn8nam12lp2169.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 02:36:06 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY1PR03MB7285.namprd03.prod.outlook.com (2603:10b6:a03:529::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 06:36:04 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 06:36:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d4a9452-ddb3-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681799770;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=jeN5R5IJXCD1wndUshTfAot4E7Y8F76ddWvdj6WVdwk=;
  b=MhNtM4kscE9/VGec2Vdj66qCACDv1CZmii64bvSg48zdPjuGvAap5Plz
   1HW7JsWzIgXJytPfs/U6gjp96MA9KHiAxi6mRveQuZ2cuN7zP2bJ1r6EM
   CD7fjXo8zF5r+c070OcOvDJYb9YMUdlcCzTZhZyn9M/M57QIxOozrLw6Z
   Y=;
X-IronPort-RemoteIP: 104.47.55.169
X-IronPort-MID: 106315822
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:Z4E9fK6yXMvGZgV/sr+C2wxRtIzGchMFZxGqfqrLsTDasY5as4F+v
 mtJXTqBOquNYGXzed10OYqxphlSuZDcmIJnSAQ9pSg8Hi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+7ZwehBtC5gZlPawS7AeH/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5mz
 M4qOhsLdE6/n+vm4YmZRuc1jcEbM5y+VG8fkikIITDxK98DGMqGaYOaoNhS0XE3m9xEGuvYa
 4wBcz1zYR/cYhpJfFAKFJY5m+TujX76G9FagAvN+exrvC6OkUoojumF3Nn9I7RmQe18mEqCq
 32A1GP+GhwAb/SUyCaf82LqjejK9c/+cNtKSubhq64w2jV/wEQaLRcWVl6cqsWeyVfvS+MFF
 3MfoysX+P1aGEuDC4OVsweDiHyOswMYWtFQO/Yn8wzLwa3Riy6CHXQNRDNFbN0gtec1SCYs2
 1vPmMnmbRRgsbSTTW+W/5+OrC21IikTJikJYipsZQEC6dPyrZozih/KR9BLH6u8j9mzEjb1q
 xiDqCklm7wSl4gFzay99lHcqy2grd7CSQtdzg7QWGSi7A9weo++T4Ot4Fnfq/1HKe6xSV2Mv
 2MFmo6d8foJBpGOkwSCRewMGPei4PPtGC3RhxtjEocs8xyp+mW/ZsZA7TdmPkBrP80YPzjzb
 yfuVRh54ZZSOD6ga/9xaofoUcAyl/G+RJLiS+zeacdIbt5pbgib8SpyZEmWmWfwjEwrlqJ5M
 pCeGSqxMUsn5W1c5GLeb48gPXUDn0jSGUu7qUjH8ima
IronPort-HdrOrdr: A9a23:imE71auECa9taiE3Ts9p5U6p7skDZ9V00zEX/kB9WHVpm62j+v
 xG+c5xvyMc5wxhO03I5urwWpVoLUmzyXcX2+Us1NWZPDUO0VHARL2KhrGM/9SPIUzDH+dmpM
 JdT5Q=
X-Talos-CUID: 9a23:+Jb+vmAFMr0t/bn6ExBVxlw/HNp7TmXUnFbrAUO2UG1scKLAHA==
X-Talos-MUID: 9a23:SLtSJwSjDrwm7M+cRXTloTtiN8JP6JiJS1lcgaRch/aPbC1JbmI=
X-IronPort-AV: E=Sophos;i="5.99,206,1677560400"; 
   d="scan'208";a="106315822"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OODWZ4HOGd9FOgJ+v0o/eoUJhlSVH0te2FgnWZevUTJi0abxvclv52LAVmgif1zpIBhAuVsFvUVYLVcfZiLfYltIv6hN+GTK/TQplgzz0m3FfMLILqqljlGswHgM1UfjjYPEmbOV5W3BX8lDiiSo8imiCs+mEJOepDclKQ0j0zRgE3A2nqhioovdmNJ84BtlKaDwZt6dJ813AQBydDmk5Dh5sDWWTsPT7hkPsjpkQ2VClp3Uicvi1mlwRJOJuKBHx54o34/bm1jrWJ61Jb6LsCp2n2AF4bcQ3Cj/IPKSNT0kpmQH1xq53+FjhqurRE15ZRHYIK8sLwLdaPPFjQiwCw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+pl9fNOVUoXt7aab3EwC3zBNIsAVcmpyPKFit//ZfZU=;
 b=DMFILtwAnhQlwBwusCHBGj5Srw/WO5BBHz6o61PWowClrbEP2N4aWRF0h7CK5FwJDro8eyh7LVQac9xF45l8PI/ldSy8nD70Xpyj/MfBZeaymzOzphura2qH5NGfSS2eeSpTrhHV+C3hTl68kR67J1haYwcISohjx1sHHoFH7dykOckVp8qzISxyyohguAkf3ld5OY/DIaGQyjqxB0eilz75GEWROfGA4d67MshaGS1YC02z32NLcnjHzwBT1bAAWKXzrSZjPfk5d2yTunrGIv6nDtyVJD0ekL1HGwhEDK6mbfx5Xw9PzojqrhrjYPdR+l23xOdTXNqqk90O7CMBDA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+pl9fNOVUoXt7aab3EwC3zBNIsAVcmpyPKFit//ZfZU=;
 b=hoqyH4g3tgbM+J3Twi8DDnpV2uFNMoFZqCfobRUli1mAWFEO0alvSryL4IPstgVjzhaqmWqDk+/wC6BkCX3tA3DO5aIGRJrjaWBvJhut9RTSDdF3YU8NyGPVA18nuI58u6sDopuD8h7pz7FtzrPq85vFFfbEhaeP7NZX0tZ2sks=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <e6df88d7-25de-4ae9-2187-9cb9afc624e2@citrix.com>
Date: Tue, 18 Apr 2023 07:35:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 0/2] deal with GOT stuff for RISC-V
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1678970065.git.oleksii.kurochko@gmail.com>
 <69e031eb-6172-1ab0-5ffa-4650f69e83a7@citrix.com>
 <fdc599f9-da0f-815c-1850-3120c5f69b73@suse.com>
In-Reply-To: <fdc599f9-da0f-815c-1850-3120c5f69b73@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LNXP265CA0010.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5e::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BY1PR03MB7285:EE_
X-MS-Office365-Filtering-Correlation-Id: b521721f-29ea-4f83-5430-08db3fd72ed6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ea8DPUzSaooyk6iEBklNZG4eQUDFKxddOU5oqSTkQx7Y+LvVDj59Y4e4IRUpajdwMM9eM/1F3y/s7ct8fH+dv1MXOLPP/PSCXW3xyWs1tlvDDuib4VSUfIFq7SLhVkCM8633BAfj9IXE3P2PJJRdyocXy/ZfZDDeWxnICJeQQIK1UIINNeh0pTu7WjPvphuIHaDd315z6xxQHpQiovZrpBlODlsOYchtaTRFM6FSQkRrFmRj5bOCITk2JWvEBtjRf7siKk/mubrur5Q6Fru1KBOC1Glcmq7yHhpIivMEjbYA+sK9EoDx1+gCAHjfDtWxjFHuQYcvPwqdNxL+Vq71jGL4YPqF8PvbNHppuuYitLWs+sA4C6mzIgSvE88yCHzPHU1nmAHYFTCtV3+LuMZSxGRqaTYK8kOK4+89sv3OwWalu3dtkfhC6kpfL8V1RzlUTgSJezGWQvfDd8ap+1M6dfDsegBjBes5g6S0oFVUSMlapVZuXKn+1FfKrmYdSvqtjrlDxdsvUbF+SIPflJWoKL+nGboATC7fLYg02Fuq1Gsa3/7lWiyWtIxdlTDXKMLWnkTNZtoLN+eTN9DfX+MxpwtJzGQPAkR+Lq5dU9bx/J2i7WoC/SqDW34j697K63OvL4W34GEbRIFTizFh7a7xng==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(366004)(136003)(346002)(376002)(39860400002)(451199021)(5660300002)(2616005)(31696002)(86362001)(186003)(26005)(6512007)(53546011)(82960400001)(6506007)(38100700002)(8936002)(8676002)(54906003)(478600001)(6486002)(6666004)(41300700001)(316002)(36756003)(66556008)(66476007)(4326008)(66946007)(31686004)(6916009)(2906002)(4744005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?OGJ0SW9DOWNlekJNQTFLTm14TzJwM1VqaEMremRuWWxURktFN2dpenlJVzh5?=
 =?utf-8?B?czlHeXMra2Z4QkRqZTBLRzkrZWs3eGl4aXpXOVNZVlg0bzZQUEtiQS9rVnpu?=
 =?utf-8?B?SVVSb3dMdlYvTEFpSGduT0dWdFBBMFZMQVd5anB0NUFadFQ3N0Rkc2NRY2gz?=
 =?utf-8?B?SEJyRnpiYWV6ekVlQkE3YkZtazRsbFhIWmF5dUlxc2VLMldnSFV3TlIvM0hj?=
 =?utf-8?B?MnNwNWFFUTRGRTgxMXcybUJXcW8rQUpody9BbjB3UXZ1M21sMC9oOVF0NXQ5?=
 =?utf-8?B?ME9FYVZORjdCT2VnRE52WktINEc2VWMvTFpGSVFrQ3IyaFNEOHAzRTUwMHI5?=
 =?utf-8?B?WmhVRmZ6ei9QMGVsRjN3UXZqQnJkUENzVVJRYXkrd3pxQ1EyUnhDcnBNTXVx?=
 =?utf-8?B?TG9Lek9tdXBEdmFhZzBUNlQrSFZIWXA0cE5NS3J2TGhsSmtSV1FwUU5jeXh0?=
 =?utf-8?B?K0RCYmliMUNrY2FMSE05dlBqaW9JUzdkZllzeHpjYWErWmRlOHNHeHc5Z1dC?=
 =?utf-8?B?NUg0c2VhOVNpWnFmWkZueVdMT2VWdk0zNTQyeGx2YmJBSzBaNkdFWGh0Qzhn?=
 =?utf-8?B?M2dNaThKSEtHM0xKR0xkV3kydUFENm42OEo2YlZzb0lJOEloejVvK2Y5R3BO?=
 =?utf-8?B?K0pTOVF3MmNsNHFrS1l1YitPV3JZdi9ZMk5VMWYwV1RUZHpoVTluR0wxNFNs?=
 =?utf-8?B?YTlDbnp5L0prZDJhOXczbmdmVDkraEZRNk8vRms5ZHRPY3JGMWJLQW8zc0xq?=
 =?utf-8?B?ZkViYk9ncklCRlZxdytaeDBzRDhsNFRsWXAvaml3dUlKVWIxMUpoOG5Yd3g5?=
 =?utf-8?B?QlpXQTFQSlhYOHZleE1CYnBidmtuNktGRlFPYlhvT0s0Y1lHVG1Va1p6VU9i?=
 =?utf-8?B?bDB0NlJ1UGVkbUw2TjVvZGlUTHNrMG1VNGJIOTZ4ZTVpS2NnaDVuNjJkTEpS?=
 =?utf-8?B?SkpmNmEwOERLV0tmTm9XWjZoQ2tDaEJoVlpvQ3U2bkl6NjJpTm5QcHd0UzE5?=
 =?utf-8?B?VFdJYzVLRU43elp0YVZIZFg4WXhVZVhhU2FIczJUUlJBandRZXVRbVB6TEZ0?=
 =?utf-8?B?RGFEYjdEb25pTjl2cWE3S2VkUC8xcnhKZE9zR004RUZPSjNXTWRMTlVLS3VX?=
 =?utf-8?B?VDZxT2wweXQ3bkQvalN5SVpEdDV5M3hnR2V3NU5BaFBlTUZvVVppTzh3WWVI?=
 =?utf-8?B?L0gwZUlENm9XeE9sbzlMcS9FNkNMTFUxNFR2UVJSVnJxK1pWYnZYcFVsWHNK?=
 =?utf-8?B?VHFtUUkvQ0cxZFVEczM1SmJGL1Y1djd6enVsdVVuek9xN2VwS3RTN0hwQ2FE?=
 =?utf-8?B?UGFRZUpiSk9nTXhrNUkzOUJRWXRRMnZyR2J6SFNDRWgvclJBMWNaRkxqYUN5?=
 =?utf-8?B?VWVET1BSVGZiQS9CcUlnbkdTS1JxMDhkQ3VqVy8vQU9UQVYrYlM1Y1RrM1Yv?=
 =?utf-8?B?ZXNQaGEwWjVwK2NFaWFPek9BY1BObWN0UU9OSlBqRUZNNUZFL1h6bHFDaDg2?=
 =?utf-8?B?WjhBMkQwNmZnelZmajl2a3A1N3AvYzBROUVybFJmKzFTb2hoeFNJam5kZTBt?=
 =?utf-8?B?L3ViS216R1EvZm42QUZEL2E0K0hyVnhaRXNybktKclZWS084VUlDNXc0b1Jm?=
 =?utf-8?B?SEwvUnl5dXNvUzJzcWN2K3ptNFF0N293TEVPL2JIbll5R213cldBUWRvTVJ2?=
 =?utf-8?B?T0NESUlxY3ArS0xHTmZPaXUzR2F0YWlKTndFQTRzdGtyRHdWbnVRMFNrU0lD?=
 =?utf-8?B?Tno2WHN4UVAyY2dHMlovMzFvRjJJWDd1ZEV4VENvbm1aZFppbE1PelJyV25q?=
 =?utf-8?B?R0dtakNpU1VGV21DZ1JleEtoTGwzQWJMWU1lR1hWMlByTTZVN0c2VENNTzJL?=
 =?utf-8?B?bkxHcUQrcTN5WHhIdFg4M0d5YUpUaHFjb1JaOXg0TGVKSjBRb0pEcFhBTXpv?=
 =?utf-8?B?UHRTMi96ck9IMzFEbTZIc1dRZjkyOWtCcXpvWkJWSHNvUHQ4YzVwT3BxVzVF?=
 =?utf-8?B?UVJ2WW5xY2VVNHJoQy93VDZ4elNybmU5UlZDZW5aUEMxc3pjbUZCL2xYSnN3?=
 =?utf-8?B?Sk1QYXcrNTVzVUhTeDRVRlU0c0VSRFZPTWE5Z0QvQnRwWVZBQUxRVWdGSnJw?=
 =?utf-8?B?OStlTC85NWhXL2RiOGJEY29XMVZSOWZOaGUwK2pVY3U3aUFybGNmUGVBNnZV?=
 =?utf-8?B?L1E9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	SIIhLi+s2dDhRytYdtqvpYyxKjg/CSCpa6t1Mo+lWx5o1I1LrUEJMB3NInuzY3f9CIh2YOcoMl7DKmIzg9PNUKTYm81VyOtk8LxR7uwmD8N/qe59RAdnbILwp7Ro3qC8PF4dbMDOFMMWgIjd3bcBfTNDUzEjgHwFc2InfthEr2pdvhB4qhDKQ9+vK/mHtGEASuXecsj0UIJPsudSoHhnUDxVkO9pK7L4/y9vxD7aGxAw6OJfOtlX8sOJrzzgMtfqEaEtfBTar/17FYFgR6waE893VKWysQfMCcj5WtB0XSkUjlRrde7Hi4yXV5eGMV38ybrGKeuEQ1gZ+loVyT8u3Qsk3GhSjBLhQY+QhIJhuaOTsnAIrNxp5X3qf9u7ixJnzAx8AZdTg9GUi3h7m9htAcGsgJI0eCIu3DBJ9SG9ZmtS2a4DNsCseiSdR+8+Sz7j04OjQtN7Llb4CuGs0vxaaHFX4CZmfGh+TD2NitBdivjmD8h5SiLlu/3ZwsmUacgdfjoPiN0sgEPNcEKvxu5+9n+btF4yuCrWnkv7MMapYPmSsUDhu0MST68ALkCGP9oBWXa6YlKu1e1cDFaHV9expbMx+JIEqyEplUY5onPi89ViVI80jiKzOpUHKp/Dn7GvY6xS+npAm/Ykt05i8OMvt5fz2j3Hl29LdoaFVx7rGNQl7Yi2jnZu1F4R4IlhGy++iY56RgZoeNadeEx2ZSHelcY5lrW/BI06keD74JK7piAC7FsIP2wFZTFc4HPSErG/7ABTlbNN2LVFyB4kw8Vlf/8CccD/5ukQjxHtyxFem7ASZVTVagmvyKPHjV2ao+bCkto54WprjmnMX36R29i7t2xjURYWyonXY0bX5o30VF2AhljNItIB3+dm/gpf+mVNen+lR1+JZjyaFnrW/STAoffjM/Q4NP0c6HEwOe0i/LA=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b521721f-29ea-4f83-5430-08db3fd72ed6
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 06:36:04.2275
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 13rYrn/+Re1GKAYanHYzCVtMXuH63HQKQ82NiY8f5/bpyrzqZmRZQH2NiF9EXQI8NdBwksoGQNvcXSALEo7fkvrApunkTNSxOu1RS4Ywp7g=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY1PR03MB7285

On 18/04/2023 7:30 am, Jan Beulich wrote:
> On 16.03.2023 14:59, Andrew Cooper wrote:
>> On 16/03/2023 1:22 pm, Oleksii Kurochko wrote:
>>> Oleksii Kurochko (2):
>>>   xen/riscv: add EMBEDDED_EXTRA_CFLAGS to CFLAGS
>>>   xen/riscv: add explicit check that .got{.plt} is empty
>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> I'm sorry, I failed to apply this ack while committing.

Oh well.  The important thing is that it's in now.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 06:58:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 06:58:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522594.812090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pofIV-00063B-2R; Tue, 18 Apr 2023 06:58:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522594.812090; Tue, 18 Apr 2023 06:58:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pofIU-000634-Vr; Tue, 18 Apr 2023 06:58:38 +0000
Received: by outflank-mailman (input) for mailman id 522594;
 Tue, 18 Apr 2023 06:58:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dFV=AJ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pofIT-00062y-IE
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 06:58:37 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6fffcca6-ddb6-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 08:58:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fffcca6-ddb6-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681801114;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7JrvTFx0WVeMY9cJy762Xh8k6ZgcD547tgSMeBaRvfE=;
	b=Et6llJLqBk0yfoU12mOHK72SfYQgEWu+q48gtjbOp01/oW1pWwNWzYkMj3rsubBi7GEzq5
	mPT5IZDLQl4QKMdJBFAIqKyeL5/EHZMZmHN21initZgWfHDK9dVyf4svi0aLfhUnyeDqQK
	I3vYRVmGWdTJWrOGtNsTNLH4myxFa9B/Cic2FVRJKblYIYu57zyU8GZBAWAfanD+weXHiR
	ms33Y/xbIFrXQYi8r9H5uTRDm8lfMric1/URavwJ7tQwqLrW0R7pwMbi7rSmGj07t3TRXL
	qWncTgcrXxLCvAZcnDKbYXnQQ7HAmFlnUm/wlHJLSqR024Ajk5DxDVTQi7RWeg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681801114;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7JrvTFx0WVeMY9cJy762Xh8k6ZgcD547tgSMeBaRvfE=;
	b=2IDo3vCdzNc6EWeefm49U4euIaA6bLjlIhlJmFMjsR5vvqBpGyDNOwFnUfvXHrOiAPFmV7
	jCwlVmV4XYpFQwCw==
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de>
Date: Tue, 18 Apr 2023 08:58:32 +0200
Message-ID: <87ttxd4qxz.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Paul!

On Mon, Apr 17 2023 at 19:40, Paul Menzel wrote:
> On 17.04.23 at 16:48, Thomas Gleixner wrote:
>
>> On Mon, Apr 17 2023 at 13:19, Paul Menzel wrote:
>>> On 15.04.23 at 01:44, Thomas Gleixner wrote:
>>> [    0.258193] smpboot: CPU0: AMD A6-6400K APU with Radeon(tm) HD
>>> Graphics (family: 0x15, model: 0x13, stepping: 0x1)
>>> […]
>>> [    0.259329] smp: Bringing up secondary CPUs ...
>>> [    0.259527] x86: Booting SMP configuration:
>>> [    0.259528] .... node  #0, CPUs:      #1
>>> [    0.261007] After schedule_preempt_disabled
>>> [   10.260990] CPU1 failed to report alive state
>>
>> Weird. CPU1 fails to come up and report that it has reached the
>> synchronization point.
>>
>> Does it work when you add cpuhp.parallel=off on the kernel command line?
>
> Yes, the ten seconds delay is gone with `cpuhp.parallel=off`.
>
> There was a patch set in the past, that worked on that device. I think
> up to v4 it did *not* work at all and hung [1]. I need some days to
> collect the results again.

Can you please apply the patch below on top of the pile and remove the
command line option again?

Thanks,


        tglx
---
 kernel/cpu.c |    1 +
 1 file changed, 1 insertion(+)

--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1777,6 +1777,7 @@ static void __init cpuhp_bringup_mask(co
 			 */
 			WARN_ON(cpuhp_invoke_callback_range(false, cpu, st, CPUHP_OFFLINE));
 		}
+		msleep(20);
 	}
 }


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 07:47:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 07:47:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522619.812100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pog2y-00039E-Ki; Tue, 18 Apr 2023 07:46:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522619.812100; Tue, 18 Apr 2023 07:46:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pog2y-000397-I8; Tue, 18 Apr 2023 07:46:40 +0000
Received: by outflank-mailman (input) for mailman id 522619;
 Tue, 18 Apr 2023 07:46:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pog2w-00038x-Tb; Tue, 18 Apr 2023 07:46:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pog2w-0005i7-L1; Tue, 18 Apr 2023 07:46:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pog2w-0001Xe-8V; Tue, 18 Apr 2023 07:46:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pog2w-0007uS-86; Tue, 18 Apr 2023 07:46:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=87z+yaiu1nxYN62Lf7U/+A63vg3Bt1B89ResqD9s/iI=; b=yyeZyUvFFMcFdvqmV8VGGLbeM6
	3KDZh+5LNMF4bQR/smFu1jCrPXJ1komf1oUODMTIZZpulvbpO6crtQgZBhf1dxO/KJn7j5NN0n3lb
	FRulkeW5GbCEYh1amofoLFPGsfWy74EJeyv+Ov6rO/vd41VIsidD8bpxi8JF7R3xwm7I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180295-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180295: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b16284e2a0011489f6e16dfcc6af7623c3cbaf0b
X-Osstest-Versions-That:
    ovmf=6ded9f50c3aa123fe581c42ff6c03789b9b593c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Apr 2023 07:46:38 +0000

flight 180295 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180295/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b16284e2a0011489f6e16dfcc6af7623c3cbaf0b
baseline version:
 ovmf                 6ded9f50c3aa123fe581c42ff6c03789b9b593c1

Last test of basis   180277  2023-04-16 18:10:48 Z    1 days
Testing same since   180295  2023-04-18 06:12:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Duggapu Chinni B <chinni.b.duggapu@intel.com>
  Duggapu, Chinni B <chinni.b.duggapu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6ded9f50c3..b16284e2a0  b16284e2a0011489f6e16dfcc6af7623c3cbaf0b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 08:28:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 08:28:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522629.812111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poghb-0007zt-9R; Tue, 18 Apr 2023 08:28:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522629.812111; Tue, 18 Apr 2023 08:28:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poghb-0007zm-5u; Tue, 18 Apr 2023 08:28:39 +0000
Received: by outflank-mailman (input) for mailman id 522629;
 Tue, 18 Apr 2023 08:28:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TTWx=AJ=citrix.com=prvs=4659928b3=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poghY-0007zW-Uo
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 08:28:37 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 00151f5a-ddc3-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 10:28:32 +0200 (CEST)
Received: from mail-dm6nam04lp2049.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.49])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 04:28:29 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5264.namprd03.prod.outlook.com (2603:10b6:208:1e5::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 08:28:27 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 08:28:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00151f5a-ddc3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681806512;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=76ZmziPpUuNh75qK4Sndi8IQuoGT93fpXTeReNkm2Wg=;
  b=T+Rrmw9kNKW4MJdW8pYw22UfT9uk2WCkh75Kp0jNM1ak0pBT7ayGLIiE
   asa7nCyoSY96DwK1MWVRPDifsegLziUY6PZGaMGHM4DQfrpnQ2Savf46I
   pv8kkFKxppyGUv8R/77/GcGsbLBICY0fSDICDE/BJIRnKMViz0Pse9bPN
   k=;
X-IronPort-RemoteIP: 104.47.73.49
X-IronPort-MID: 105823278
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:32uqnq1crtK8WhuYZvbD5elwkn2cJEfYwER7XKvMYLTBsI5bpzEOy
 mZLD2qCPfqIZWOhLox0Pojj8h4O7MSAztFhQQU/pC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS+XuDgNyo4GlD5gBnNagS1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfEz53q
 voaNz02MxGEm96PkfWwRbUvv5F2RCXrFNt3VnBI6xj8VapjbbWdBqLA6JlfwSs6gd1IEbDGf
 c0FZDFzbRPGJRpSJlMQD5F4l+Ct7pX9W2QA9BTJ+uxouC6PnWSd05C0WDbRUvWMSd9YgQCzo
 WXe8n6iKhobKMae2XyO9XfEaurnxHumCNhJTuLonhJsqB7P+0FPMkVKb1ymqvOEgRHna89kO
 0NBr0LCqoB3riRHVOLVXRe1vXqFtR40QMdLHqsx7wTl4rXQyxaUAC4DVDEpQN8hstU/SXo11
 1uKt9TzDDdrvfueTnf13qeZq3a+NDYYKUcGZDQYVk0V7t/7uoYxgxnTCNF5H8aIYsbdHDjxx
 3WGqXY4jrBL0coTjf3nrBbAni6moYXPQkgt/ALLU2m57wR/Iom4e4iv7lud5vFFRGqEcmS8U
 LE/s5D2xIgz4VulzkRhnM1l8GmV2su4
IronPort-HdrOrdr: A9a23:ejwkbaCWdMXWV/TlHem955DYdb4zR+YMi2TDtnoddfUxSKfzqy
 nApoV56faKskdyZJhNo7690cq7LU80l6QU3WB5B97LYOCMggSVxe9ZjLcKygeQfhHDyg==
X-Talos-CUID: 9a23:9mETCm1f9RAv/6SgRPVeWLxfCJ8cLXvf03rrfk6/FVpQGfqeRgPTwfYx
X-Talos-MUID: 9a23:/yHkbwaKF21T9eBT9GPRqW1YbJtS5LmlAn0Gzsw2pvKUOnkl
X-IronPort-AV: E=Sophos;i="5.99,206,1677560400"; 
   d="scan'208";a="105823278"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NFo7BXOrmM3Sd8nwgG9MMVmeu6plSCcUEvfghnDFfg6/uYVF3W8tYTUks2Gmilfm7li4LPMtO+6XtdC5VS3aFNoroi9FDqECFdbqLPAsjdlzTezHJPk3s/OOcKiaMz2bLjqBUjoXxojNIG34U4cRtlpEvEEGVB9ki3LviV9bZjgk/4hSosi0DrmVFBA0csxBrNP3MzK1YYMg8xZM+OjHOwSkLRedZCf4qBA+B4zPgJ+4Lvog4sD6CRmLCB1RwnPCRRbYaJRR+283fulIcDJAeRL930OKVJj5UfoNoXDeeODts5Ik+iqZXRCqaij40fL4LebSQ8Xl+IzLbe4dCuRmIA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MUhhF4R2A7KldBjtaBfEHpx11Crn0DQE8HGcoGlBhvg=;
 b=C+fvDBUMAGGRi7UoBxxJma9pqUy7eCMzMKALQ+HKB3oTXzzyYxzVTUspCg/SO5Wd44/3+dMMLnWdO12EFJimsLuTf2gazBm8MjdiZ3zePth+7dLtF554XlX3ARVHHnHQtCeJPYQVaO1rR+WK5q78tyyw8ZnbzotLjELlyliS+BPc2i4RIDAvf5rNY31rCC7dK4/t5YNolnq6l0RZ8cUzRB1ITSbWrTWleLx/n0fBYey3Vtkb6clGL5LYS+XAJ7WK7lq+sdOEubxcQ58E59/vP6sSzxq/zX1B3hArNipzxmPkssyWy8Qu7o8Zl9ZtplpD7z3aW1dottsPGRIvwgVe5A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MUhhF4R2A7KldBjtaBfEHpx11Crn0DQE8HGcoGlBhvg=;
 b=J0P3gbD0d+Z5o+1mlqrVS54W8JfaPs0w6RHRH9r3dmzaH0NAVZkxiwElReiDeU3P4Lo6/U9SKgrJR+spm5cX8tU1soUi0Aw5YNrdRLVb02ZMDHXLnhTA7w29NLA2RwwdSMZdHaU5nFz7cpaSgbpaoxJzcljuigaLMRvOuX88JcY=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <a80610dd-c988-eeb9-22ac-b91fd84df4e8@citrix.com>
Date: Tue, 18 Apr 2023 09:28:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230417135219.3776777-1-andrew.cooper3@citrix.com>
 <d83288c5-6247-ef7d-b9ba-8bf24c7831ac@suse.com>
 <22179eac-4fc9-1521-2a83-2313b8c44a2d@citrix.com>
 <3ea38da5-70a9-6887-5384-fe002d8568c4@suse.com>
 <2ebda1f3-23fb-3f06-c4ca-1ac508c82b40@citrix.com>
 <49115307-753f-8196-55a7-8e8c1e50b503@suse.com>
In-Reply-To: <49115307-753f-8196-55a7-8e8c1e50b503@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0510.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13b::17) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MN2PR03MB5264:EE_
X-MS-Office365-Filtering-Correlation-Id: 7359f1e8-ea56-4803-3f83-08db3fe6e1b9
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7359f1e8-ea56-4803-3f83-08db3fe6e1b9
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 08:28:26.8842
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Y8s9jyTYvTYGIdK8/ZS263BRxB2gcWVeTfAL1lTlb2QSnohNkAzsY0G5Sp/rWJxF5W5AELrMSTIvd5YtzRnwwaQ+njljOj4djj2VQ97ANpk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5264

On 18/04/2023 6:58 am, Jan Beulich wrote:
> On 17.04.2023 21:34, Andrew Cooper wrote:
>> On 17/04/2023 3:51 pm, Jan Beulich wrote:
>>> On 17.04.2023 16:41, Andrew Cooper wrote:
>>>> On 17/04/2023 2:59 pm, Jan Beulich wrote:
>>>>> On 17.04.2023 15:52, Andrew Cooper wrote:
>>>>>> @@ -5879,6 +5880,73 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>>>>>>      return modify_xen_mappings(s, e, _PAGE_NONE);
>>>>>>  }
>>>>>>  
>>>>>> +/*
>>>>>> + * Similar to modify_xen_mappings(), but used by the alternatives and
>>>>>> + * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
>>>>>> + * responsibility of the caller, and *MUST* not be introduced here.
>>>>>> + *
>>>>>> + * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
>>>>>> + * Must be called with present flags, and over present mappings.
>>>>>> + * Must be called on leaf page boundaries.
>>>>> This last sentence, while wording-wise correct, could do with making more
>>>>> explicit that it is the caller's responsibility to know whether large page
>>>>> mappings are in use, due to ...
>>>> The meaning here is really "this doesn't shatter superpages", and this
>>>> was the most concise I could come up with.
>>>>
>>>> Would ", i.e. won't shatter 2M pages." as a clarification work?
>>> Yes, that would definitely help. Nevertheless I was more after something
>>> like "..., i.e. for 2M mappings on 2M boundaries." Which, thinking about
>>> it, points out that while you have a respective check for the start
>>> address, the full 2M page would be changed even if the end address wasn't
>>> 2M aligned (but fell in the middle of a 2M page).
>> There's no nice way to check for that, because a range that starts on a
>> 4k, non-2M boundary can legitimately end on a 2M boundary at 4k granularity.
> How about
>
>         if ( l2e_get_flags(l2e) & _PAGE_PSE )
>         {
>             ASSERT(l1_table_offset(v) == 0);
>             ASSERT(e - v >= (1UL << L2_PAGETABLE_SHIFT));

Ah, not as bad as I feared.  I'll include this, and post a v3 for
completeness.

~Andrew
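[Editorial note: the check Jan sketches above can be modelled stand-alone. The following is a minimal sketch assuming x86's 4k leaf pages and 2M (PSE) superpages; l1_table_offset() and L2_PAGETABLE_SHIFT are reimplemented here for illustration, not taken from Xen's headers.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative constants mirroring x86 paging: 4k leaf pages, 2M superpages. */
#define PAGE_SHIFT          12
#define L2_PAGETABLE_SHIFT  21
#define L1_MASK             ((1UL << L2_PAGETABLE_SHIFT) - 1)

/* Offset of a virtual address within its 2M region, in units of 4k pages. */
static unsigned long l1_table_offset(unsigned long v)
{
    return (v & L1_MASK) >> PAGE_SHIFT;
}

/*
 * When [v, e) hits a present 2M (PSE) mapping, refuse to shatter it:
 * v must sit on a 2M boundary, and the remaining range must cover the
 * whole 2M page.  This mirrors the two ASSERT()s proposed above.
 */
static bool pse_range_ok(unsigned long v, unsigned long e)
{
    return l1_table_offset(v) == 0 &&
           (e - v) >= (1UL << L2_PAGETABLE_SHIFT);
}
```

With these checks, a present 2M mapping is only ever modified whole: a start address inside the superpage, or an end address short of its end, trips the assertion instead of silently widening the change to the full 2M page.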


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 08:37:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 08:37:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522634.812120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pogq5-0001B4-6S; Tue, 18 Apr 2023 08:37:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522634.812120; Tue, 18 Apr 2023 08:37:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pogq5-0001Ax-3p; Tue, 18 Apr 2023 08:37:25 +0000
Received: by outflank-mailman (input) for mailman id 522634;
 Tue, 18 Apr 2023 08:37:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Bo8Z=AJ=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1pogq3-0001An-V0
 for xen-devel@lists.xen.org; Tue, 18 Apr 2023 08:37:23 +0000
Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com
 [2607:f8b0:4864:20::1029])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3bee4f77-ddc4-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 10:37:21 +0200 (CEST)
Received: by mail-pj1-x1029.google.com with SMTP id
 98e67ed59e1d1-246f856d751so1190988a91.0
 for <xen-devel@lists.xen.org>; Tue, 18 Apr 2023 01:37:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3bee4f77-ddc4-11ed-8611-37d641c3527e
X-Received: by 2002:a17:90a:fd87:b0:23e:f8e2:9ed3 with SMTP id
 cx7-20020a17090afd8700b0023ef8e29ed3mr1282535pjb.43.1681807039731; Tue, 18
 Apr 2023 01:37:19 -0700 (PDT)
MIME-Version: 1.0
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Tue, 18 Apr 2023 11:43:23 +0300
Message-ID: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
Subject: xen cache colors in ARM
To: xen-devel@lists.xen.org
Content-Type: multipart/alternative; boundary="0000000000002f1b4805f9983792"

--0000000000002f1b4805f9983792
Content-Type: text/plain; charset="UTF-8"

Hello,

I tried to turn on this scheme and ran into a panic.
Where did I go wrong?

Xen command line
xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";

Xen config color build settings
CONFIG_COLORING=y

Xen log:
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
(XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) Coloring general information
(XEN) Way size: 64kB
(XEN) Max. number of colors available: 16
(XEN) Xen color(s): [ 0 ]
(XEN) alternatives: Patching with alt table 00000000002cc690 ->
00000000002ccc0c
(XEN) Color array allocation failed for dom0
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Error creating domain 0
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...

Regards,
Oleg

--0000000000002f1b4805f9983792--


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 08:41:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 08:41:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522639.812131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pogtb-0002bk-MR; Tue, 18 Apr 2023 08:41:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522639.812131; Tue, 18 Apr 2023 08:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pogtb-0002bd-Iv; Tue, 18 Apr 2023 08:41:03 +0000
Received: by outflank-mailman (input) for mailman id 522639;
 Tue, 18 Apr 2023 08:41:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2dFV=AJ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pogta-0002bX-2A
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 08:41:02 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id be6e8f58-ddc4-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 10:41:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be6e8f58-ddc4-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom
 Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <87ttxd4qxz.ffs@tglx>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
Date: Tue, 18 Apr 2023 10:40:57 +0200
Message-ID: <87r0sh4m7a.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Tue, Apr 18 2023 at 08:58, Thomas Gleixner wrote:
> On Mon, Apr 17 2023 at 19:40, Paul Menzel wrote:
>> Am 17.04.23 um 16:48 schrieb Thomas Gleixner:
>>
>>> On Mon, Apr 17 2023 at 13:19, Paul Menzel wrote:
>>>> Am 15.04.23 um 01:44 schrieb Thomas Gleixner:
>>>> [    0.258193] smpboot: CPU0: AMD A6-6400K APU with Radeon(tm) HD
>>>> Graphics (family: 0x15, model: 0x13, stepping: 0x1)
>>>> […]
>>>> [    0.259329] smp: Bringing up secondary CPUs ...
>>>> [    0.259527] x86: Booting SMP configuration:
>>>> [    0.259528] .... node  #0, CPUs:      #1
>>>> [    0.261007] After schedule_preempt_disabled
>>>> [   10.260990] CPU1 failed to report alive state
>>>
>>> Weird. CPU1 fails to come up and report that it has reached the
>>> synchronization point.
>>>
>>> Does it work when you add cpuhp.parallel=off on the kernel command line?
>>
>> Yes, the ten-second delay is gone with `cpuhp.parallel=off`.
>>
>> There was a patch set in the past that worked on that device. I think
>> up to v4 it did *not* work at all and hung [1]. I need some days to
>> collect the results again.
>
> Can you please apply the patch below on top of the pile and remove the
> command line option again?

Bah. That patch does not make any sense at all. Not enough coffee.

Can you please provide the output of cpuid?

Thanks,

        tglx






From xen-devel-bounces@lists.xenproject.org Tue Apr 18 09:00:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 09:00:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522664.812153 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohBx-0005Ak-DA; Tue, 18 Apr 2023 09:00:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522664.812153; Tue, 18 Apr 2023 09:00:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohBx-0005Ad-9a; Tue, 18 Apr 2023 09:00:01 +0000
Received: by outflank-mailman (input) for mailman id 522664;
 Tue, 18 Apr 2023 09:00:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VaWE=AJ=xen.org=julien@srs-se1.protection.inumbo.net>)
 id 1pohBv-0005AX-Uw
 for xen-devel@lists.xen.org; Tue, 18 Apr 2023 08:59:59 +0000
Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 645f1f92-ddc7-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 10:59:57 +0200 (CEST)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pohBs-0007jm-GL; Tue, 18 Apr 2023 08:59:56 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=[192.168.26.51]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pohBs-0005Na-9K; Tue, 18 Apr 2023 08:59:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 645f1f92-ddc7-11ed-8611-37d641c3527e
Message-ID: <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org>
Date: Tue, 18 Apr 2023 09:59:54 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: xen cache colors in ARM
Content-Language: en-US
To: Oleg Nikitenko <oleshiiwood@gmail.com>, xen-devel@lists.xen.org
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Carlo Nonato <carlo.nonato@minervasys.tech>
From: Julien Grall <julien@xen.org>
In-Reply-To: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

+Stefano, + Bertrand, +Carlo,

On 18/04/2023 09:43, Oleg Nikitenko wrote:
> Hello,

Hi,

> I tried to turn on this scheme and ran into panic.
> Where was I wrong ?

This feature has not been merged into Xen upstream yet. So can you tell us 
which patches you applied, or which tree you are using?

> 
> Xen command line
> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
> dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";

Can you provide the following information:
  * HW
  * Where are the banks located?
  * Where do you load the various modules (e.g. kernel, xen...)?

> 
> Xen config color build settings
> CONFIG_COLORING=y
> 
> Xen log:
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> (XEN) Coloring general information
> (XEN) Way size: 64kB
> (XEN) Max. number of colors available: 16
> (XEN) Xen color(s): [ 0 ]
> (XEN) alternatives: Patching with alt table 00000000002cc690 ->
> 00000000002ccc0c
> (XEN) Color array allocation failed for dom0
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Error creating domain 0
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...

Cheers,

-- 
Julien Grall
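[Editorial note: the "Max. number of colors available: 16" line in the quoted log follows directly from the printed way size — with last-level-cache coloring, the color count is the way size divided by the page size. A minimal sketch of that relation, with illustrative names rather than Xen's actual code:]

```c
#include <assert.h>

/* 4k pages, as used by Xen on Arm in the quoted configuration. */
#define PAGE_SIZE 4096u

/*
 * Number of usable cache colors: consecutive pages cycle through the
 * cache's way, so way_size / page_size distinct page colors exist.
 */
static unsigned int max_colors(unsigned int way_size_bytes)
{
    return way_size_bytes / PAGE_SIZE;
}
```

A 64kB way size with 4k pages therefore yields 16 colors, matching the boot log above, so the requested xen_colors=0-3 and dom0_colors=4-7 are both in range.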


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 09:02:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 09:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522668.812163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohDw-0006ZY-Nu; Tue, 18 Apr 2023 09:02:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522668.812163; Tue, 18 Apr 2023 09:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohDw-0006ZR-L5; Tue, 18 Apr 2023 09:02:04 +0000
Received: by outflank-mailman (input) for mailman id 522668;
 Tue, 18 Apr 2023 09:02:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DyEx=AJ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pohDv-0006ZL-1R
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 09:02:03 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20628.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::628])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id adbffd0a-ddc7-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 11:02:00 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8552.eurprd04.prod.outlook.com (2603:10a6:10:2d7::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 09:01:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 09:01:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: adbffd0a-ddc7-11ed-8611-37d641c3527e
Message-ID: <80a02af5-9154-8289-4e92-6016c0948a61@suse.com>
Date: Tue, 18 Apr 2023 11:01:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] tests/cpu-policy: fix "run" goal
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0009.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8552:EE_
X-MS-Office365-Filtering-Correlation-Id: 18ef0179-acf7-4cfd-5fa0-08db3feb909b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 18ef0179-acf7-4cfd-5fa0-08db3feb909b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 09:01:58.0855
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GtRATYqo8HKTqrU+d12/w7qMoGh+i3/QNfE9KNx4AlAhqDGQjrtiV/HIZXMSZ0Qi0n7BG1qmucIL6tv5hRWXZA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8552

An earlier change converted TARGET-y to TARGETS, but failed to replace
all references. Convert run's dependency, but use $< in the command to
avoid the leading blank that += inserts.

Fixes: 6a9f5477637a ("tests/cpu-policy: Rework Makefile")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/cpu-policy/Makefile
+++ b/tools/tests/cpu-policy/Makefile
@@ -16,8 +16,8 @@ endif
 all: $(TARGETS)
 
 .PHONY: run
-run: $(TARGET-y)
-	./$(TARGET-y)
+run: $(TARGETS)
+	./$<
 
 .PHONY: clean
 clean:
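
For context, GNU make's `+=` appends its text with a separating blank even when the variable's existing value is empty, which is what broke the direct `./$(TARGETS)` invocation; a minimal, hypothetical reproduction (not the actual tests/cpu-policy Makefile):

```make
TARGETS :=
TARGETS += test-cpu-policy   # value is now " test-cpu-policy" (leading blank)

run: $(TARGETS)
	./$(TARGETS)             # expands to "./ test-cpu-policy" and fails
#	./$<                     # first prerequisite: "./test-cpu-policy", no blank
```

Using `$<` (the first prerequisite) sidesteps the leading blank without changing how TARGETS is assembled.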


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 09:03:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 09:03:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522673.812173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohFJ-000785-1a; Tue, 18 Apr 2023 09:03:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522673.812173; Tue, 18 Apr 2023 09:03:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohFI-00077y-V9; Tue, 18 Apr 2023 09:03:28 +0000
Received: by outflank-mailman (input) for mailman id 522673;
 Tue, 18 Apr 2023 09:03:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cpzf=AJ=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pohFH-00077q-O4
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 09:03:28 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20618.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::618])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dfc40ba6-ddc7-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 11:03:24 +0200 (CEST)
Received: from DS7PR03CA0173.namprd03.prod.outlook.com (2603:10b6:5:3b2::28)
 by IA0PR12MB8326.namprd12.prod.outlook.com (2603:10b6:208:40d::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 09:03:19 +0000
Received: from DM6NAM11FT085.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b2:cafe::32) by DS7PR03CA0173.outlook.office365.com
 (2603:10b6:5:3b2::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Tue, 18 Apr 2023 09:03:19 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT085.mail.protection.outlook.com (10.13.172.236) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.20 via Frontend Transport; Tue, 18 Apr 2023 09:03:19 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 18 Apr
 2023 04:03:18 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 18 Apr
 2023 02:03:18 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 18 Apr 2023 04:03:17 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfc40ba6-ddc7-11ed-8611-37d641c3527e
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <fd604522-c2bd-770c-6548-19aeaeb57ad9@amd.com>
Date: Tue, 18 Apr 2023 11:03:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN][PATCH v5 14/17] xen/arm: Implement device tree node
 addition functionalities
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>
References: <20230411191636.26926-1-vikram.garhwal@amd.com>
 <20230411191636.26926-15-vikram.garhwal@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230411191636.26926-15-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT085:EE_|IA0PR12MB8326:EE_
X-MS-Office365-Filtering-Correlation-Id: be385fb5-3389-4ff6-3d73-08db3febc138
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 09:03:19.3828
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: be385fb5-3389-4ff6-3d73-08db3febc138
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT085.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB8326

Hi Vikram,

On 11/04/2023 21:16, Vikram Garhwal wrote:
> 
> 
> Update sysctl XEN_SYSCTL_dt_overlay to enable support for dtbo nodes addition
> using device tree overlay.
> 
> xl dt-overlay add file.dtbo:
>     Each time overlay nodes are added using .dtbo, a new fdt(memcpy of
>     device_tree_flattened) is created and updated with overlay nodes. This
>     updated fdt is further unflattened to a dt_host_new. Next, it checks if any
>     of the overlay nodes already exists in the dt_host. If overlay nodes doesn't
>     exist then find the overlay nodes in dt_host_new, find the overlay node's
>     parent in dt_host and add the nodes as child under their parent in the
>     dt_host. The node is attached as the last node under target parent.
> 
>     Finally, add IRQs, add device to IOMMUs, set permissions and map MMIO for the
>     overlay node.
> 
> When a node is added using overlay, a new entry is allocated in the
> overlay_track to keep the track of memory allocation due to addition of overlay
> node. This is helpful for freeing the memory allocated when a device tree node
> is removed.
> 
> The main purpose of this to address first part of dynamic programming i.e.
> making xen aware of new device tree node which means updating the dt_host with
> overlay node information. Here we are adding/removing node from dt_host, and
> checking/setting IOMMU and IRQ permission but never mapping them to any domain.
> Right now, mapping/Un-mapping will happen only when a new domU is
> created/destroyed using "xl create".
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/common/dt_overlay.c | 482 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 482 insertions(+)
> 
> diff --git a/xen/common/dt_overlay.c b/xen/common/dt_overlay.c
> index 516e8010c5..3344bad313 100644
> --- a/xen/common/dt_overlay.c
> +++ b/xen/common/dt_overlay.c
> @@ -36,6 +36,25 @@ static struct dt_device_node *find_last_descendants_node(
>      return child_node;
>  }
> 
> +/*
> + * Returns next node to the input node. If node has children then return
> + * last descendant's next node.
> +*/
> +static struct dt_device_node *dt_find_next_node(struct dt_device_node *dt,
> +                                            const struct dt_device_node *node)
This should be:
static struct dt_device_node *
dt_find_next_node(struct dt_device_node *dt, const struct dt_device_node *node)

> +{
> +    struct dt_device_node *np;
> +
> +    dt_for_each_device_node(dt, np)
> +        if ( np == node )
> +            break;
> +
> +    if ( np->child )
> +        np = find_last_descendants_node(np);
> +
> +    return np->allnext;
> +}
> +
>  static int dt_overlay_remove_node(struct dt_device_node *device_node)
>  {
>      struct dt_device_node *np;
> @@ -109,6 +128,72 @@ static int dt_overlay_remove_node(struct dt_device_node *device_node)
>      return 0;
>  }
> 
> +static int dt_overlay_add_node(struct dt_device_node *device_node,
> +                               const char *parent_node_path)
> +{
> +    struct dt_device_node *parent_node;
> +    struct dt_device_node *np, *np_last_descendant;
> +    struct dt_device_node *next_node;
> +    struct dt_device_node *device_node_last_descendant;
You can limit the scope of at least 3 variables above.

> +
> +    parent_node = dt_find_node_by_path(parent_node_path);
> +
> +    if ( parent_node == NULL )
> +    {
> +        dt_dprintk("Node not found. Overlay node will not be added\n");
It would help if you printed the node name.

> +        return -EINVAL;
> +    }
> +
> +    /* If parent has no child. */
> +    if ( parent_node->child == NULL )
> +    {
> +        next_node = parent_node->allnext;
> +        device_node->parent = parent_node;
> +        parent_node->allnext = device_node;
> +        parent_node->child = device_node;
> +    }
> +    else
> +    {
> +        /* If parent has at least one child node.
> +         * Iterate to the last child node of parent.
> +         */
> +        for ( np = parent_node->child; np->sibling != NULL; np = np->sibling );
NIT: Your empty for() loop style is inconsistent: here you use ';' and elsewhere '{}'.

> +
> +        /* Iterate over all child nodes of np node. */
> +        if ( np->child )
> +        {
> +            np_last_descendant = find_last_descendants_node(np);
> +
> +            next_node = np_last_descendant->allnext;
> +            np_last_descendant->allnext = device_node;
> +        }
> +        else
> +        {
> +            next_node = np->allnext;
> +            np->allnext = device_node;
> +        }
> +
> +        device_node->parent = parent_node;
> +        np->sibling = device_node;
> +        np->sibling->sibling = NULL;
> +    }
> +
> +    /* Iterate over all child nodes of device_node to add children too. */
> +    if ( device_node->child )
> +    {
> +        device_node_last_descendant = find_last_descendants_node(device_node);
> +        /* Plug next_node at the end of last children of device_node. */
> +        device_node_last_descendant->allnext = next_node;
> +    }
> +    else
> +    {
> +        /* Now plug next_node at the end of device_node. */
> +        device_node->allnext = next_node;
> +    }
> +
> +    return 0;
> +}
> +
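As an aside for other readers: the three-link splice dt_overlay_add_node() performs (child/sibling plus the depth-first "allnext" chain) can be illustrated with a self-contained toy. The struct and function names below are invented for this sketch (the real code operates on struct dt_device_node), and the appended node is assumed to be a leaf for brevity:

```c
#include <stddef.h>

/* Simplified stand-in for struct dt_device_node: child/sibling links plus
 * the "allnext" chain threading every node in depth-first order. */
struct node {
    const char *name;
    struct node *parent, *child, *sibling, *allnext;
};

/* Last descendant in allnext order: repeatedly take the last child. */
static struct node *last_descendant(struct node *n)
{
    while ( n->child )
    {
        n = n->child;
        while ( n->sibling )
            n = n->sibling;
    }
    return n;
}

/* Append leaf "n" as the last child of "parent", splicing it into the
 * allnext chain after parent's last descendant. */
static void append_child(struct node *parent, struct node *n)
{
    struct node *next;

    n->parent = parent;
    n->sibling = NULL;

    if ( parent->child == NULL )
    {
        next = parent->allnext;
        parent->child = n;
        parent->allnext = n;
    }
    else
    {
        struct node *last, *deepest;

        for ( last = parent->child; last->sibling; last = last->sibling )
            ;
        deepest = last_descendant(last);
        next = deepest->allnext;
        deepest->allnext = n;
        last->sibling = n;
    }

    n->allnext = next; /* n is a leaf in this sketch */
}

/* Build root -> {a, b}, append c under root and d under a, then count
 * nodes reachable via allnext (expected order: root, a, d, b, c). */
int demo(void)
{
    static struct node root = { "root" }, a = { "a" }, b = { "b" },
                       c = { "c" }, d = { "d" };
    struct node *n;
    int count = 0;

    root.child = &a; root.allnext = &a;
    a.parent = &root; a.sibling = &b; a.allnext = &b;
    b.parent = &root;

    append_child(&root, &c);
    append_child(&a, &d);

    for ( n = &root; n; n = n->allnext )
        ++count;

    return count;
}
```

The key invariant, as in the quoted code, is that the new node must be stitched into "allnext" after the target subtree's deepest descendant, or iteration via dt_for_each_device_node() would skip or mis-order nodes.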
>  /* Basic sanity check for the dtbo tool stack provided to Xen. */
>  static int check_overlay_fdt(const void *overlay_fdt, uint32_t overlay_fdt_size)
>  {
> @@ -148,6 +233,79 @@ static unsigned int overlay_node_count(void *fdto)
>      return num_overlay_nodes;
>  }
> 
> +/*
> + * overlay_get_nodes_info will get full name with path for all the nodes which
s/will get/gets

> + * are in one level of __overlay__ tag. This is useful when checking node for
> + * duplication i.e. dtbo tries to add nodes which already exists in device tree.
> + */
> +static int overlay_get_nodes_info(const void *fdto, char ***nodes_full_path,
> +                                  unsigned int num_overlay_nodes)
> +{
> +    int fragment;
> +
> +    *nodes_full_path = xzalloc_bytes(num_overlay_nodes * sizeof(char *));
> +
> +    if ( *nodes_full_path == NULL )
> +        return -ENOMEM;
> +
> +    fdt_for_each_subnode(fragment, fdto, 0)
> +    {
> +        int target;
> +        int overlay;
> +        int subnode;
> +        const char *target_path;
> +
> +        target = fdt_overlay_target_offset(device_tree_flattened, fdto,
> +                                           fragment, &target_path);
Shouldn't you also check if target_path is not NULL?
I can see that it is possible for fdt_overlay_target_offset() to return >= 0 and NULL as target_path.
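To illustrate the hazard with a standalone toy (the functions below are invented stand-ins, not libfdt code): a lookup can report success through its return value while still leaving the out-parameter NULL, so the caller must check both.

```c
#include <stddef.h>

/* Stand-in for a lookup like fdt_overlay_target_offset(): may succeed
 * (return >= 0) yet leave the out-parameter NULL, e.g. when the target
 * is identified by phandle rather than by path. Hypothetical. */
static int lookup_target(int by_phandle, const char **path)
{
    *path = by_phandle ? NULL : "/soc/uart";
    return 7; /* a valid offset either way */
}

/* Caller therefore checks both the return value and the pointer. */
int safe_lookup(int by_phandle, const char **path)
{
    int off = lookup_target(by_phandle, path);

    if ( off < 0 )
        return off;
    if ( *path == NULL )
        return -22; /* -EINVAL: cannot build a full node path */

    return off;
}
```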

> +        if ( target < 0 )
> +            return target;
> +
> +        overlay = fdt_subnode_offset(fdto, fragment, "__overlay__");
> +
> +        /*
> +         * overlay value can be < 0. But fdt_for_each_subnode() loop checks for
> +         * overlay >= 0. So, no need for a overlay>=0 check here.
> +         */
> +        fdt_for_each_subnode(subnode, fdto, overlay)
> +        {
> +            const char *node_name = NULL;
> +            int node_name_len;
> +            unsigned int target_path_len = strlen(target_path);
> +            unsigned int node_full_name_len;
> +            unsigned int node_num = 0;
> +
> +            node_name = fdt_get_name(fdto, subnode, &node_name_len);
> +
> +            if ( node_name == NULL )
> +                return node_name_len;
> +
> +            /*
> +             * Magic number 2 is for adding '/'. This is done to keep the
I think this is 2 because of adding both '/' and '\0'.

> +             * node_full_name in the correct full node name format.
No such variable as node_full_name

> +             */
> +            node_full_name_len = target_path_len + node_name_len + 2;
> +
> +            (*nodes_full_path)[node_num] = xmalloc_bytes(node_full_name_len);
> +
> +            if ( (*nodes_full_path)[node_num] == NULL )
> +                return -ENOMEM;
> +
> +            memcpy((*nodes_full_path)[node_num], target_path, target_path_len);
> +
> +            (*nodes_full_path)[node_num][target_path_len] = '/';
> +
> +            memcpy((*nodes_full_path)[node_num] + target_path_len + 1,
> +                    node_name, node_name_len);
> +
> +            (*nodes_full_path)[node_num][node_full_name_len - 1] = '\0';
> +
> +            node_num++;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
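The length arithmetic discussed above (the "+ 2" covering the '/' separator and the terminating NUL) can be checked with a standalone libc sketch; the function name here is invented, and plain malloc() stands in for Xen's allocators:

```c
#include <stdlib.h>
#include <string.h>

/* Join "<target_path>/<node_name>"; the "+ 2" covers the '/' separator
 * and the trailing '\0', mirroring the quoted node_full_name_len math. */
char *join_node_path(const char *target_path, const char *node_name)
{
    size_t tlen = strlen(target_path), nlen = strlen(node_name);
    char *full = malloc(tlen + nlen + 2);

    if ( full == NULL )
        return NULL;

    memcpy(full, target_path, tlen);
    full[tlen] = '/';
    memcpy(full + tlen + 1, node_name, nlen);
    full[tlen + 1 + nlen] = '\0';

    return full;
}
```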
>  static int handle_remove_irq_iommu(struct dt_device_node *device_node)
>  {
>      int rc = 0;
> @@ -367,6 +525,322 @@ out:
>      return rc;
>  }
> 
> +/*
> + * Handles IRQ and IOMMU mapping for the overlay_node and all descendants of the
> + * overlay_nodes.
s/overlay_nodes/overlay_node/ ?

> + */
> +static int handle_add_irq_iommu(struct domain *d,
> +                                struct dt_device_node *overlay_node)
> +{
> +    int rc;
> +    unsigned int naddr, i, len;
> +    uint64_t addr, size;
Please limit the scope of these variables.

> +    struct dt_device_node *np;
> +
> +    /* First let's handle the interrupts. */
> +    rc = handle_device_interrupts(d, overlay_node, false);
> +    if ( rc < 0 )
> +    {
> +        printk(XENLOG_ERR "Interrupt failed\n");
How about: "Failed to retrieve interrupts configuration"

> +        return rc;
> +    }
> +
> +    /* Check if iommu property exists. */
> +    if ( dt_get_property(overlay_node, "iommus", &len) )
> +    {
> +
Remove the extra blank line.

> +        /* Add device to IOMMUs. */
> +        rc = iommu_add_dt_device(overlay_node);
> +        if ( rc < 0 )
> +        {
> +            printk(XENLOG_ERR "Failed to add %s to the IOMMU\n",
> +                   dt_node_full_name(overlay_node));
> +            return rc;
> +        }
> +    }
> +
> +    /* Set permissions. */
> +    naddr = dt_number_of_address(overlay_node);
> +
> +    dt_dprintk("%s passthrough = %d naddr = %u\n",
Why do you need to print passthrough if it is always false?

> +               dt_node_full_name(overlay_node), false, naddr);
> +
> +    /* Give permission for map MMIOs */
s/for/to/

> +    for ( i = 0; i < naddr; i++ )
> +    {
> +        /*
> +         * For now, we skip_mapping which means it will only permit iomem access
> +         * to hardware_domain using iomem_permit_access() but will never map as
> +         * map_range_p2mt() will not be called.
> +         */
> +        struct map_range_data mr_data = { .d = d,
> +                                          .p2mt = p2m_mmio_direct_c,
> +                                          .skip_mapping = true };
}; should be placed on its own line

> +
> +        rc = dt_device_get_address(overlay_node, i, &addr, &size);
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
> +                   i, dt_node_full_name(overlay_node));
> +            return rc;
> +        }
> +
> +        rc = map_range_to_domain(overlay_node, addr, size, &mr_data);
> +        if ( rc )
> +            return rc;
> +    }
> +
> +    /* Map IRQ and IOMMU for overlay_node's children. */
> +    for ( np = overlay_node->child; np != NULL; np = np->sibling)
Missing space before the closing parenthesis.

> +    {
> +        rc = handle_add_irq_iommu(d, np);
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Adding IRQ and IOMMU failed\n");
> +            return rc;
> +        }
> +    }
> +
> +    return rc;
> +}
> +
> +/*
> + * Adds device tree nodes under target node.
> + * We use tr->dt_host_new to unflatten the updated device_tree_flattened. This
> + * is done to avoid the removal of device_tree generation, iomem regions mapping
> + * to hardware domain done by handle_node().
> + */
> +static long handle_add_overlay_nodes(void *overlay_fdt,
> +                                     uint32_t overlay_fdt_size)
> +{
> +    int rc, j, i;
> +    struct dt_device_node *overlay_node, *prev_node, *next_node;
Please limit the scope.

> +    struct domain *d = hardware_domain;
No need for an extra variable if it is only used in one place. Just use hardware_domain.

> +    struct overlay_track *tr = NULL;
> +    char **nodes_full_path = NULL;
> +    unsigned int new_fdt_size;
> +
> +    tr = xzalloc(struct overlay_track);
> +    if ( tr == NULL )
> +        return -ENOMEM;
> +
> +    new_fdt_size = fdt_totalsize(device_tree_flattened) +
> +                                 fdt_totalsize(overlay_fdt);
> +
> +    tr->fdt = xzalloc_bytes(new_fdt_size);
> +    if ( tr->fdt == NULL )
> +    {
> +        xfree(tr);
> +        return -ENOMEM;
> +    }
> +
> +    tr->num_nodes = overlay_node_count(overlay_fdt);
> +    if ( tr->num_nodes == 0 )
> +    {
> +        xfree(tr->fdt);
> +        xfree(tr);
> +        return -ENOMEM;
> +    }
> +
> +    tr->nodes_address = xzalloc_bytes(tr->num_nodes * sizeof(unsigned long));
> +    if ( tr->nodes_address == NULL )
> +    {
> +        xfree(tr->fdt);
> +        xfree(tr);
> +        return -ENOMEM;
> +    }
> +
> +    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
> +    if ( rc )
> +    {
> +        xfree(tr->nodes_address);
> +        xfree(tr->fdt);
> +        xfree(tr);
> +        return rc;
> +    }
> +
> +    /*
> +     * Keep a copy of overlay_fdt as fdt_overlay_apply will change the input
> +     * overlay's content(magic) when applying overlay.
> +     */
> +    tr->overlay_fdt = xzalloc_bytes(overlay_fdt_size);
> +    if ( tr->overlay_fdt == NULL )
> +    {
> +        xfree(tr->nodes_address);
> +        xfree(tr->fdt);
> +        xfree(tr);
> +        return -ENOMEM;
> +    }
> +
> +    memcpy(tr->overlay_fdt, overlay_fdt, overlay_fdt_size);
> +
> +    spin_lock(&overlay_lock);
> +
> +    memcpy(tr->fdt, device_tree_flattened,
> +           fdt_totalsize(device_tree_flattened));
> +
> +    /* Open tr->fdt with more space to accommodate the overlay_fdt. */
> +    fdt_open_into(tr->fdt, tr->fdt, new_fdt_size);
You are not checking the return value for an error.

> +
> +    /*
> +     * overlay_get_nodes_info is called to get the node information from dtbo.
> +     * This is done before fdt_overlay_apply() because the overlay apply will
> +     * erase the magic of overlay_fdt.
> +     */
> +    rc = overlay_get_nodes_info(overlay_fdt, &nodes_full_path,
> +                                tr->num_nodes);
> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "Getting nodes information failed with error %d\n",
> +               rc);
> +        goto err;
Looking at err: if you fail here, you will free dt_host_new which was not yet allocated.
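
As a general pattern (generic, hypothetical sketch, not the patch's code): error labels are usually layered so that each one frees only what was successfully set up before the failing step, which makes it impossible for an early failure to free something not yet allocated.

```c
#include <stdlib.h>

/* Generic cleanup-ladder pattern: an early failure must never jump to a
 * label that frees state which was not yet allocated. */
int setup_pair(char **out_a, char **out_b)
{
    char *a, *b;

    a = malloc(16);
    if ( a == NULL )
        return -1;          /* nothing to undo yet */

    b = malloc(16);
    if ( b == NULL )
        goto free_a;        /* only 'a' exists at this point */

    *out_a = a;
    *out_b = b;
    return 0;

 free_a:
    free(a);
    return -1;
}
```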

> +    }
> +
> +    rc = fdt_overlay_apply(tr->fdt, overlay_fdt);
> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "Adding overlay node failed with error %d\n", rc);
> +        goto err;
> +    }
> +
> +    /*
> +     * Check if any of the node already exists in dt_host. If node already exits
> +     * we can return here as this overlay_fdt is not suitable for overlay ops.
> +     */
> +    for ( j = 0; j < tr->num_nodes; j++ )
> +    {
> +        overlay_node = dt_find_node_by_path(nodes_full_path[j]);
> +        if ( overlay_node != NULL )
> +        {
> +            printk(XENLOG_ERR "node %s exists in device tree\n",
> +                   nodes_full_path[j]);
> +            rc = -EINVAL;
> +            goto err;
> +        }
> +    }
> +
> +    /* Unflatten the tr->fdt into a new dt_host. */
> +    unflatten_device_tree(tr->fdt, &tr->dt_host_new);
> +
> +    for ( j = 0; j < tr->num_nodes; j++ )
> +    {
> +        dt_dprintk("Adding node: %s\n", nodes_full_path[j]);
> +
> +        /* Find the newly added node in tr->dt_host_new by it's full path. */
> +        overlay_node = device_tree_find_node_by_path(tr->dt_host_new,
> +                                                     nodes_full_path[j]);
> +        if ( overlay_node == NULL )
> +        {
> +            dt_dprintk("%s node not found\n", nodes_full_path[j]);
> +            rc = -EFAULT;
> +            goto remove_node;
You assign rc and then go to remove_node, which will overwrite it. Please fix.

> +        }
> +
> +        /*
> +         * Find previous and next node to overlay_node in dt_host_new. We will
> +         * need these nodes to fix the dt_host_new mapping. When overlay_node is
> +         * take out of dt_host_new tree and added to dt_host, link between
> +         * previous node and next_node is broken. We will need to refresh
> +         * dt_host_new with correct linking for any other overlay nodes
> +         * extraction in future.
> +         */
> +        dt_for_each_device_node(tr->dt_host_new, prev_node)
> +            if ( prev_node->allnext == overlay_node )
> +                break;
> +
> +        next_node = dt_find_next_node(tr->dt_host_new, overlay_node);
> +
> +        read_lock(&dt_host->lock);
> +
> +        /* Add the node to dt_host. */
> +        rc = dt_overlay_add_node(overlay_node, overlay_node->parent->full_name);
> +        if ( rc )
> +        {
> +            read_unlock(&dt_host->lock);
> +
> +            /* Node not added in dt_host. */
> +            goto remove_node;
> +        }
> +
> +        read_unlock(&dt_host->lock);
> +
> +        prev_node->allnext = next_node;
> +
> +        overlay_node = dt_find_node_by_path(overlay_node->full_name);
> +        if ( overlay_node == NULL )
> +        {
> +            /* Sanity check. But code will never come here. */
> +            ASSERT_UNREACHABLE();
> +            goto remove_node;
> +        }
> +
> +        rc = handle_add_irq_iommu(d, overlay_node);
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Adding IRQ and IOMMU failed\n");
> +            return rc;
> +        }
> +
> +        /* Keep overlay_node address in tracker. */
> +        tr->nodes_address[j] = (unsigned long)overlay_node;
> +    }
> +
> +    INIT_LIST_HEAD(&tr->entry);
> +    list_add_tail(&tr->entry, &overlay_tracker);
> +
> +    spin_unlock(&overlay_lock);
> +
> +    if ( nodes_full_path != NULL )
> +    {
> +        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
> +              i++ )
> +        {
> +            xfree(nodes_full_path[i]);
> +        }
> +        xfree(nodes_full_path);
> +    }
> +
> +    return rc;
> +
> +/*
> + * Failure case. We need to remove the nodes, free tracker(if tr exists) and
> + * tr->dt_host_new.
> + */
> +remove_node:
> +    tr->num_nodes = j;
> +    rc = remove_nodes(tr);
> +
> +    if ( rc )
> +    {
> +        /* If removing node fails, this may cause memory leaks. */
I think we are missing information on why it is OK to leak memory.

> +        printk(XENLOG_ERR "Removing node failed.\n");
> +        spin_unlock(&overlay_lock);
> +        return rc;
> +    }
> +
> +err:
> +    spin_unlock(&overlay_lock);
> +
> +    xfree(tr->dt_host_new);
> +    xfree(tr->fdt);
> +    xfree(tr->overlay_fdt);
> +    xfree(tr->nodes_address);
What is the order here? This does not look like the reverse of the allocation order.

> +
> +    if ( nodes_full_path != NULL )
> +    {
> +        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
> +              i++ )
> +        {
> +            xfree(nodes_full_path[i]);
> +        }
> +        xfree(nodes_full_path);
> +    }
> +
> +    xfree(tr);
> +
> +    return rc;
> +}
> +
>  long dt_sysctl(struct xen_sysctl_dt_overlay *op)
>  {
>      long ret;
> @@ -391,6 +865,14 @@ long dt_sysctl(struct xen_sysctl_dt_overlay *op)
> 
>      switch ( op->overlay_op )
>      {
> +    case XEN_SYSCTL_DT_OVERLAY_ADD:
> +        ret = handle_add_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
> +
No need for an extra blank line before checking the return value.

> +        if ( ret )
> +            xfree(overlay_fdt);
> +
> +        break;
> +
>      case XEN_SYSCTL_DT_OVERLAY_REMOVE:
>          ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
>          xfree(overlay_fdt);
> --
> 2.17.1
> 
> 

~Michal


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 09:24:38 2023
Message-ID: <116bb94f-955c-4c46-f16a-d7a5e1cc72b5@suse.com>
Date: Tue, 18 Apr 2023 11:24:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v6] x86: detect CMOS aliasing on ports other than 0x70/0x71
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
Content-Language: en-US
In-Reply-To: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

... in order to also intercept Dom0 accesses through the alias ports.

Also stop intercepting accesses to the CMOS ports if we won't use the
CMOS RTC ourselves, because there is none.

Note that rtc_init() deliberately uses 16 as the upper loop bound,
despite probe_cmos_alias() using 8: The higher bound is benign now, but
would save us from having to touch the code (or, worse, forgetting to
touch it) if the lower one were doubled.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v6: Restore lost "return" in rtc_init(). Convert printk() to dprintk()
    in probe_cmos_alias(). Correct is_cmos_port() for hwdom.
v5: Simplify logic in is_cmos_port(). Limit the scope of a local
    variable. Adjust a comment that's being moved.
v4: Also conditionally mask top bit for guest index port accesses. Add
    missing adjustments to rtc_init(). Re-work to avoid recursive
    read_lock(). Also adjust guest_io_{read,write}(). Re-base.
v3: Re-base over change to earlier patch.
v2: Re-base.

--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -27,7 +27,7 @@
 #include <asm/hvm/vpt.h>
 #include <asm/hvm/io.h>
 #include <asm/hvm/save.h>
-#include <asm/current.h>
+#include <asm/iocap.h>
 #include <xen/trace.h>
 #include <public/hvm/params.h>
 
@@ -836,9 +836,19 @@ void rtc_init(struct domain *d)
 
     if ( !has_vrtc(d) )
     {
-        if ( is_hardware_domain(d) )
-            /* Hardware domain gets mediated access to the physical RTC. */
-            register_portio_handler(d, RTC_PORT(0), 2, hw_rtc_io);
+        unsigned int port;
+
+        if ( !is_hardware_domain(d) )
+            return;
+
+        /*
+         * Hardware domain gets mediated access to the physical RTC/CMOS (of
+         * course unless we don't use it ourselves, for there being none).
+         */
+        for ( port = RTC_PORT(0); port < RTC_PORT(0) + 0x10; port += 2 )
+            if ( is_cmos_port(port, 2, d) )
+                register_portio_handler(d, port, 2, hw_rtc_io);
+
         return;
     }
 
--- a/xen/arch/x86/include/asm/mc146818rtc.h
+++ b/xen/arch/x86/include/asm/mc146818rtc.h
@@ -9,6 +9,10 @@
 
 extern spinlock_t rtc_lock;             /* serialize CMOS RAM access */
 
+struct domain;
+bool is_cmos_port(unsigned int port, unsigned int bytes,
+                  const struct domain *d);
+
 /**********************************************************************
  * register summary
  **********************************************************************/
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -208,7 +208,7 @@ static bool admin_io_okay(unsigned int p
         return false;
 
     /* We also never permit direct access to the RTC/CMOS registers. */
-    if ( port <= RTC_PORT(1) && port + bytes > RTC_PORT(0) )
+    if ( is_cmos_port(port, bytes, d) )
         return false;
 
     return ioports_access_permitted(d, port, port + bytes - 1);
@@ -278,7 +278,7 @@ static uint32_t guest_io_read(unsigned i
         {
             sub_data = pv_pit_handler(port, 0, 0);
         }
-        else if ( port == RTC_PORT(0) || port == RTC_PORT(1) )
+        else if ( is_cmos_port(port, 1, currd) )
         {
             sub_data = rtc_guest_read(port);
         }
@@ -424,7 +424,7 @@ static void guest_io_write(unsigned int
         {
             pv_pit_handler(port, (uint8_t)data, 1);
         }
-        else if ( port == RTC_PORT(0) || port == RTC_PORT(1) )
+        else if ( is_cmos_port(port, 1, currd) )
         {
             rtc_guest_write(port, data);
         }
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -2130,37 +2130,36 @@ int __hwdom_init xen_in_range(unsigned l
 static int __hwdom_init cf_check io_bitmap_cb(
     unsigned long s, unsigned long e, void *ctx)
 {
-    struct domain *d = ctx;
+    const struct domain *d = ctx;
     unsigned int i;
 
     ASSERT(e <= INT_MAX);
     for ( i = s; i <= e; i++ )
-        __clear_bit(i, d->arch.hvm.io_bitmap);
+        /*
+         * Accesses to RTC ports also need to be trapped in order to keep
+         * consistency with hypervisor accesses.
+         */
+        if ( !is_cmos_port(i, 1, d) )
+            __clear_bit(i, d->arch.hvm.io_bitmap);
 
     return 0;
 }
 
 void __hwdom_init setup_io_bitmap(struct domain *d)
 {
-    int rc;
+    if ( !is_hvm_domain(d) )
+        return;
 
-    if ( is_hvm_domain(d) )
-    {
-        bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
-        rc = rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
-                                    io_bitmap_cb, d);
-        BUG_ON(rc);
-        /*
-         * NB: we need to trap accesses to 0xcf8 in order to intercept
-         * 4 byte accesses, that need to be handled by Xen in order to
-         * keep consistency.
-         * Access to 1 byte RTC ports also needs to be trapped in order
-         * to keep consistency with PV.
-         */
-        __set_bit(0xcf8, d->arch.hvm.io_bitmap);
-        __set_bit(RTC_PORT(0), d->arch.hvm.io_bitmap);
-        __set_bit(RTC_PORT(1), d->arch.hvm.io_bitmap);
-    }
+    bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
+    if ( rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
+                                io_bitmap_cb, d) )
+        BUG();
+
+    /*
+     * We need to trap 4-byte accesses to 0xcf8 (see admin_io_okay(),
+     * guest_io_read(), and guest_io_write()).
+     */
+    __set_bit(0xcf8, d->arch.hvm.io_bitmap);
 }
 
 /*
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1234,7 +1234,10 @@ static unsigned long get_cmos_time(void)
         if ( seconds < 60 )
         {
             if ( rtc.sec != seconds )
+            {
                 cmos_rtc_probe = false;
+                acpi_gbl_FADT.boot_flags &= ~ACPI_FADT_NO_CMOS_RTC;
+            }
             break;
         }
 
@@ -1249,6 +1252,79 @@ static unsigned long get_cmos_time(void)
     return mktime(rtc.year, rtc.mon, rtc.day, rtc.hour, rtc.min, rtc.sec);
 }
 
+static unsigned int __ro_after_init cmos_alias_mask;
+
+static int __init cf_check probe_cmos_alias(void)
+{
+    unsigned int offs;
+
+    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
+        return 0;
+
+    for ( offs = 2; offs < 8; offs <<= 1 )
+    {
+        unsigned int i;
+        bool read = true;
+
+        for ( i = RTC_REG_D + 1; i < 0x80; ++i )
+        {
+            uint8_t normal, alt;
+            unsigned long flags;
+
+            if ( i == acpi_gbl_FADT.century )
+                continue;
+
+            spin_lock_irqsave(&rtc_lock, flags);
+
+            normal = CMOS_READ(i);
+            if ( inb(RTC_PORT(offs)) != i )
+                read = false;
+
+            alt = inb(RTC_PORT(offs + 1));
+
+            spin_unlock_irqrestore(&rtc_lock, flags);
+
+            if ( normal != alt )
+                break;
+
+            process_pending_softirqs();
+        }
+        if ( i == 0x80 )
+        {
+            cmos_alias_mask |= offs;
+            dprintk(XENLOG_INFO, "CMOS aliased at %02x, index %s\n",
+                    RTC_PORT(offs), read ? "r/w" : "w/o");
+        }
+    }
+
+    return 0;
+}
+__initcall(probe_cmos_alias);
+
+bool is_cmos_port(unsigned int port, unsigned int bytes, const struct domain *d)
+{
+    unsigned int offs;
+
+    if ( !is_hardware_domain(d) )
+        return port <= RTC_PORT(1) && port + bytes > RTC_PORT(0);
+
+    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
+        return false;
+
+    if ( port <= RTC_PORT(1) && port + bytes > RTC_PORT(0) )
+        return true;
+
+    for ( offs = 2; offs <= cmos_alias_mask; offs <<= 1 )
+    {
+        if ( !(offs & cmos_alias_mask) )
+            continue;
+        if ( port <= RTC_PORT(offs | 1) && port + bytes > RTC_PORT(offs) )
+            return true;
+    }
+
+    return false;
+}
+
 /* Helpers for guest accesses to the physical RTC. */
 unsigned int rtc_guest_read(unsigned int port)
 {
@@ -1256,23 +1332,25 @@ unsigned int rtc_guest_read(unsigned int
     unsigned long flags;
     unsigned int data = ~0;
 
-    switch ( port )
+    switch ( port & ~cmos_alias_mask )
     {
     case RTC_PORT(0):
         /*
          * All PV domains (and PVH dom0) are allowed to read the latched value
          * of the first RTC port, as there's no access to the physical IO
-         * ports.
+         * ports.  Note that we return the index value regardless of whether
+         * underlying hardware would permit doing so.
          */
-        data = currd->arch.cmos_idx;
+        data = currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(0)));
         break;
 
     case RTC_PORT(1):
-        if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
+        if ( !ioports_access_permitted(currd, port - 1, port) )
             break;
         spin_lock_irqsave(&rtc_lock, flags);
-        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
-        data = inb(RTC_PORT(1));
+        outb(currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(1))),
+             port - 1);
+        data = inb(port);
         spin_unlock_irqrestore(&rtc_lock, flags);
         break;
 
@@ -1288,9 +1366,10 @@ void rtc_guest_write(unsigned int port,
     struct domain *currd = current->domain;
     unsigned long flags;
 
-    switch ( port )
+    switch ( port & ~cmos_alias_mask )
     {
         typeof(pv_rtc_handler) hook;
+        unsigned int idx;
 
     case RTC_PORT(0):
         /*
@@ -1298,20 +1377,22 @@ void rtc_guest_write(unsigned int port,
          * value of the first RTC port, as there's no access to the physical IO
          * ports.
          */
-        currd->arch.cmos_idx = data;
+        currd->arch.cmos_idx = data & (0xff >> (port == RTC_PORT(0)));
         break;
 
     case RTC_PORT(1):
-        if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
+        if ( !ioports_access_permitted(currd, port - 1, port) )
             break;
 
+        idx = currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(1)));
+
         hook = ACCESS_ONCE(pv_rtc_handler);
         if ( hook )
-            hook(currd->arch.cmos_idx & 0x7f, data);
+            hook(idx, data);
 
         spin_lock_irqsave(&rtc_lock, flags);
-        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
-        outb(data, RTC_PORT(1));
+        outb(idx, port - 1);
+        outb(data, port);
         spin_unlock_irqrestore(&rtc_lock, flags);
         break;
 


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 09:26:49 2023
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] x86/livepatch: enable livepatching assembly source files
Date: Tue, 18 Apr 2023 11:24:58 +0200
Message-Id: <20230418092458.15253-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0214.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:33a::9) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SJ0PR03MB6359:EE_
X-MS-Office365-Filtering-Correlation-Id: 15abcf2d-30cd-4459-62ea-08db3feeff20
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	dDpZT2wKPBG6R/gGFKfHVccNlVUSpcuR1P9F8w1RDh0zJNyuFO7jZkRCQNLY3dbCes4jmcbvZwmRrzbbikw6Gsw+75SKORMpLwFJ7dI+ltAMebA3oQddgMiPw/a1vWQ0fNH+/D9+r2mRpsb2kAzZLS7fljR6NJOFcXxIIklYmruObMdgG5r+ut6i4od6kGSRuZEVj4mICuvFP42TPK8sE33fsyvCTktq47lLX8VYgSMHCX04i61mxq7eAFCt0vhJL+BHYw9ootkM7ZEecORxyKMuIAqhQNB0CzRuc7aEY8GNyYJ1A+L9X3z7jlJUUX185IhPNXhCEWjYuhzQuBU8SUu7ALPp4yHoclZ4sIu5Nv++VuYT1VAPl5Me+KnR6KgQSK4OSZVUmWIAIAi7SxRhzVFHtx1rdZXQxmRRP2tdx9ETldbj01/waNBerGCrcOZDip7XjA8Ty3v20zSYsTKrmARFjvFb097LRW/3Ow+5XMFnn220SCiGkp6GbvTXnCQHUTe1jR8RxLCxoRxHaxqfNwjovyjlYF8NP1GPmflQIenhN6PSz2hG86YldyCuYX15
X-Forefront-Antispam-Report:
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 15abcf2d-30cd-4459-62ea-08db3feeff20
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 09:26:32.0821
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6359

In order to be able to livepatch code from assembly files we need:

 * Proper function symbols from assembly code, including the size.
 * Separate sections for each function.

However, assembly code doesn't really have the concept of a function,
and hence such code tends to chain different labels that can also be
entry points.

In order to be able to livepatch such code we need to enclose the
assembly code in isolated function-like blocks, so they can be
handled by livepatch.  Introduce two new macros to do so,
{START,END}_LP(), which take a unique function-like name, create the
function symbol and put the code into a separate text section.  Note
that START_LP() emits a jump ahead of the section change, so that any
preceding code that falls through correctly continues execution, as
sections can be reordered.  Chaining consecutive livepatchable blocks
likewise requires that the previous section explicitly jumps into the
next one where fallthrough is intended.
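
To illustrate (with made-up block names, not taken from this patch),
chaining two consecutive livepatchable blocks would look roughly like:

```asm
/* Sketch only: first_block and second_block are hypothetical names. */
START_LP(first_block)
        mov   %rax, %rbx
        /*
         * Explicit jump: .text.first_block may be placed anywhere by
         * the linker, so we cannot rely on falling through into
         * second_block.
         */
        jmp   second_block
END_LP(first_block)
START_LP(second_block)
        mov   %rbx, %rcx
        ret
END_LP(second_block)
```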

A couple of shortcomings:

 * We don't check that the size of the section is enough to fit a jump
   instruction (ARCH_PATCH_INSN_SIZE).  Some logic from the
   alternatives framework could be reused to pad sections if required.
 * Any label inside a {START,END}_LP() section must not be referenced
   from another section, as patching would break such references.  I
   haven't figured out a way to detect them.  We already use .L to
   denote local labels, but we would have to be careful.
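
As an illustration of the second point (hypothetical names again), a
reference from outside the block into it would keep pointing at the
old code once the block has been replaced:

```asm
START_LP(lp_block)
.Linner:                        /* label inside the patched section */
        add   $1, %rax
        ret
END_LP(lp_block)

        /*
         * BAD: once lp_block is livepatched, this still jumps into
         * the stale copy of the code instead of the replacement.
         */
        jmp   .Linner
```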

Some of the assembly entry points cannot be safely patched until it's
safe to use jmp, as livepatch can replace a whole block with a jmp to
a new address, and that won't be safe until speculative mitigations
have been applied.

I could also look into allowing livepatch of sections where jmp
replacement is not safe by requesting in-place code replacement only;
we could then maybe allow adding some nop padding to those sections in
order to cope with the size increasing in further livepatches.

So far this patch only converts two functions: restore_all_xen and
common_interrupt.  I don't really want to switch more code until we
agree on the approach, so take this as a kind of RFC patch.  Obviously
the conversion doesn't need to be done in one go, nor does all
assembly code need to be transformed in this way.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/include/asm/config.h | 14 ++++++++++++++
 xen/arch/x86/x86_64/entry.S       |  5 ++++-
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/include/asm/config.h b/xen/arch/x86/include/asm/config.h
index fbc4bb3416bd..68e7fdfe3517 100644
--- a/xen/arch/x86/include/asm/config.h
+++ b/xen/arch/x86/include/asm/config.h
@@ -44,6 +44,20 @@
 /* Linkage for x86 */
 #ifdef __ASSEMBLY__
 #define ALIGN .align 16,0x90
+#ifdef CONFIG_LIVEPATCH
+#define START_LP(name)                          \
+  jmp name;                                     \
+  .pushsection .text.name, "ax", @progbits;     \
+  name:
+#define END_LP(name)                            \
+  .size name, . - name;                         \
+  .type name, @function;                        \
+  .popsection
+#else
+#define START_LP(name)                          \
+  name:
+#define END_LP(name)
+#endif
 #define ENTRY(name)                             \
   .globl name;                                  \
   ALIGN;                                        \
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 7675a59ff057..c204634910c4 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -660,7 +660,7 @@ ENTRY(early_page_fault)
 
         ALIGN
 /* No special register assumptions. */
-restore_all_xen:
+START_LP(restore_all_xen)
         /*
          * Check whether we need to switch to the per-CPU page tables, in
          * case we return to late PV exit code (from an NMI or #MC).
@@ -677,6 +677,7 @@ UNLIKELY_END(exit_cr3)
 
         RESTORE_ALL adj=8
         iretq
+END_LP(restore_all_xen)
 
 ENTRY(common_interrupt)
         ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
@@ -687,6 +688,7 @@ ENTRY(common_interrupt)
         SPEC_CTRL_ENTRY_FROM_INTR /* Req: %rsp=regs, %r14=end, %rdx=0, Clob: acd */
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
+START_LP(common_interrupt_lp)
         mov   STACK_CPUINFO_FIELD(xen_cr3)(%r14), %rcx
         mov   STACK_CPUINFO_FIELD(use_pv_cr3)(%r14), %bl
         mov   %rcx, %r15
@@ -707,6 +709,7 @@ ENTRY(common_interrupt)
         mov   %r15, STACK_CPUINFO_FIELD(xen_cr3)(%r14)
         mov   %bl, STACK_CPUINFO_FIELD(use_pv_cr3)(%r14)
         jmp ret_from_intr
+END_LP(common_interrupt_lp)
 
 ENTRY(page_fault)
         ENDBR64
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 09:31:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 09:31:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522688.812202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohgA-00043u-Tg; Tue, 18 Apr 2023 09:31:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522688.812202; Tue, 18 Apr 2023 09:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohgA-00043n-R6; Tue, 18 Apr 2023 09:31:14 +0000
Received: by outflank-mailman (input) for mailman id 522688;
 Tue, 18 Apr 2023 09:31:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EUq5=AJ=citrix.com=prvs=465f4c9e2=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pohg9-00043h-4U
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 09:31:13 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c038a8b6-ddcb-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 11:31:11 +0200 (CEST)
Received: from mail-mw2nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 05:31:05 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA1PR03MB6514.namprd03.prod.outlook.com (2603:10b6:806:1c5::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 09:31:01 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 09:31:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c038a8b6-ddcb-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681810271;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=ZDsajIU8iR14slYM81FilTBlNuBK4bFHte6iNbW1qTk=;
  b=YCeYtCTZNMVbo3Zy1XJAPnWf+kY6baRB64CrQ4Cce5ZFSh+yQNzPZOuk
   h5Pi7oTXUfZ/aY9bEZPZafldXppt3uKODXkhV8Iw7YkNw/9WurTNIYvG5
   dkStt8Ivc6Bu7IhN34QDHN+j8AgVRARDt6tIGTUkETFoAFqqKrn2eeSI8
   c=;
X-IronPort-RemoteIP: 104.47.55.109
X-IronPort-MID: 108368436
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,206,1677560400"; 
   d="scan'208";a="108368436"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=V69n4NIWYQQZctgTPCZQvh2d8JAUt0cuHyasPiOLdpdkA6ElYIUtdkfyK/idS/zEoq1xWpkRgyz8Z1COOD4xaZh9IS/0okEsMcB7xWTt7ooyA53+ltfgIgmO2LTCQJ1zW0s7y+QU71EOPftylE1ic36QZuj6s29iOmers7JHMAjxSf3dOGU18rr8nmKVeyRFm3/lqU/yKdWEQzDKyUcMjx8X47rAyJA5PC10e4mShVYXe7SvcP/cZw/rps2zehdrVQ2FdDOIT+4vWZIY/cqmSFsQPWRGDjxQCz6hsP2C4l9CRwnpoO4IucKLn3QEtY6JzX6jDvBSr46juHOp94lUgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lLSBuNUz2YRkFh5WhmdJDM9ZFjV5lt5FDeN8iDjoiUI=;
 b=Fl9iqnD3rBqTvFZklw9WonrJWe48XagEfDw8+B6aYYtT7eLo/bY1hofcVa7SnqCw+pKhrhoqzMHVUoJA018xO5SBJzDOAiOq9LsdXUFGhx5JmrGblrtlf5FMjA0mJFZ9OS1n8fOEwsogQdDQDenK5zKjh2K41QxPQY9Es16xXyjuIGKGu1d9UgJD1JGxCa2mtW2GVCHDCiKaw7cYNJxw8WWQv8aMBtrpivk1FySKg4RuoWKBEvN09i57sWmHrBbXKQoeLXDfxHqNWZUsdE2NdtlKV8636O+SwnPM12wJ+99pyGZOkUM8O6ULeRFeesEEATp32akZJIVxGYGd4eh7LA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lLSBuNUz2YRkFh5WhmdJDM9ZFjV5lt5FDeN8iDjoiUI=;
 b=gx5Nb8ku99TA562YV6A9u6bSMG08UpXxN3EbnE5OU4AAN2kByFMj1GazSvxI0uTDTThDgZKcV2YesRIhbrIw8SWWZHSAis5evSqnM+n4JemQddm5FTwYIO7JDF4ys1cW6KlG6i3k4V3sDFMXy/B6qIPk8PyhDoL3MBJf0Zne1x8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 18 Apr 2023 11:30:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH] tests/cpu-policy: fix "run" goal
Message-ID: <ZD5jT/F8b82ZkGxo@Air-de-Roger>
References: <80a02af5-9154-8289-4e92-6016c0948a61@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <80a02af5-9154-8289-4e92-6016c0948a61@suse.com>
X-ClientProxiedBy: LO2P123CA0085.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:138::18) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SA1PR03MB6514:EE_
X-MS-Office365-Filtering-Correlation-Id: 6e43946f-621b-482b-702c-08db3fef9fa3
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6e43946f-621b-482b-702c-08db3fef9fa3
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 09:31:01.1974
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6514

On Tue, Apr 18, 2023 at 11:01:56AM +0200, Jan Beulich wrote:
> An earlier change converted TARGET-y to TARGETS, but failed to replace
> all references. Convert run's dependency, but use $< in the command to
> avoid the leading blank that += inserts.
> 
> Fixes: 6a9f5477637a ("tests/cpu-policy: Rework Makefile")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/tools/tests/cpu-policy/Makefile
> +++ b/tools/tests/cpu-policy/Makefile
> @@ -16,8 +16,8 @@ endif
>  all: $(TARGETS)
>  
>  .PHONY: run
> -run: $(TARGET-y)
> -	./$(TARGET-y)
> +run: $(TARGETS)
> +	./$<

Since it seems like TARGETS can contain multiple outputs, do we want
to have a for loop here?
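
For example (untested, and assuming each entry in TARGETS is directly
executable):

```make
.PHONY: run
run: $(TARGETS)
	set -e; for t in $(TARGETS); do ./$$t; done
```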

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 09:31:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 09:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522691.812212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohgo-0004ZC-6g; Tue, 18 Apr 2023 09:31:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522691.812212; Tue, 18 Apr 2023 09:31:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohgo-0004Z5-3h; Tue, 18 Apr 2023 09:31:54 +0000
Received: by outflank-mailman (input) for mailman id 522691;
 Tue, 18 Apr 2023 09:31:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pohgm-0004Yp-TU; Tue, 18 Apr 2023 09:31:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pohgm-00009P-Lk; Tue, 18 Apr 2023 09:31:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pohgm-0007F7-8l; Tue, 18 Apr 2023 09:31:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pohgm-0004U5-8K; Tue, 18 Apr 2023 09:31:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qJRa1XObNe17pGFgomC/lzqVbR0JFfjoi6oVmg4RMpU=; b=2jVYX3EIWPfdcnBVsGwYVIBPRV
	QEm9yqUe+eI1YewYTXpR2LZRyW6AKy8UXa5TcS1AHngoQDOX+OHRPm8mLg+sWODy6OoBOvVqJEbuE
	VdNd0XKjfGW5sE6T8tTcrls7YibSgGjwS3F3qtAMi0xof7OzacXnGmjg61lIEVNgKNDw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180297-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180297: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cbe828581b4a1717a4331b754c25a27a41d1bc58
X-Osstest-Versions-That:
    xen=1213ebfb9f35920b3e0f5dff71bb917f5fb4be5f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Apr 2023 09:31:52 +0000

flight 180297 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180297/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cbe828581b4a1717a4331b754c25a27a41d1bc58
baseline version:
 xen                  1213ebfb9f35920b3e0f5dff71bb917f5fb4be5f

Last test of basis   180290  2023-04-17 22:00:29 Z    0 days
Testing same since   180297  2023-04-18 07:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Dietmar Hahn <dietmar.hahn@fujitsu.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1213ebfb9f..cbe828581b  cbe828581b4a1717a4331b754c25a27a41d1bc58 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 09:35:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 09:35:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522701.812223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohka-0005F5-N8; Tue, 18 Apr 2023 09:35:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522701.812223; Tue, 18 Apr 2023 09:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohka-0005Ey-KJ; Tue, 18 Apr 2023 09:35:48 +0000
Received: by outflank-mailman (input) for mailman id 522701;
 Tue, 18 Apr 2023 09:35:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DyEx=AJ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pohka-0005Es-96
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 09:35:48 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20600.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 64ac5037-ddcc-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 11:35:45 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8624.eurprd04.prod.outlook.com (2603:10a6:102:21b::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 09:35:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 09:35:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64ac5037-ddcc-11ed-b21f-6b7b168915f2
Message-ID: <a24ace58-2ac5-6152-c42b-0037355ce9c4@suse.com>
Date: Tue, 18 Apr 2023 11:35:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: cpu{id,}_policy_updated() can be static
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

The function merely needs moving earlier in the file to avoid the need
for a forward declaration. While moving it, also rename it following the
recent folding of CPUID and MSR policies.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -288,6 +288,16 @@ void update_guest_memory_policy(struct v
     }
 }
 
+/*
+ * Called during vcpu construction, and each time the toolstack changes the
+ * CPUID configuration for the domain.
+ */
+static void cpu_policy_updated(struct vcpu *v)
+{
+    if ( is_hvm_vcpu(v) )
+        hvm_cpuid_policy_changed(v);
+}
+
 void domain_cpu_policy_changed(struct domain *d)
 {
     const struct cpu_policy *p = d->arch.cpu_policy;
@@ -446,7 +456,7 @@ void domain_cpu_policy_changed(struct do
 
     for_each_vcpu ( d, v )
     {
-        cpuid_policy_updated(v);
+        cpu_policy_updated(v);
 
         /* If PMU version is zero then the guest doesn't have VPMU */
         if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
@@ -591,7 +601,7 @@ int arch_vcpu_create(struct vcpu *v)
     {
         vpmu_initialise(v);
 
-        cpuid_policy_updated(v);
+        cpu_policy_updated(v);
     }
 
     return rc;
@@ -2416,16 +2426,6 @@ int domain_relinquish_resources(struct d
     return 0;
 }
 
-/*
- * Called during vcpu construction, and each time the toolstack changes the
- * CPUID configuration for the domain.
- */
-void cpuid_policy_updated(struct vcpu *v)
-{
-    if ( is_hvm_vcpu(v) )
-        hvm_cpuid_policy_changed(v);
-}
-
 void arch_dump_domain_info(struct domain *d)
 {
     paging_dump_domain_info(d);
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -83,8 +83,6 @@ void toggle_guest_mode(struct vcpu *);
 /* x86/64: toggle guest page tables between kernel and user modes. */
 void toggle_guest_pt(struct vcpu *);
 
-void cpuid_policy_updated(struct vcpu *v);
-
 /*
  * Initialise a hypercall-transfer page. The given pointer must be mapped
  * in Xen virtual address space (accesses are not validated or checked).


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 09:39:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 09:39:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522706.812233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohna-0005oW-52; Tue, 18 Apr 2023 09:38:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522706.812233; Tue, 18 Apr 2023 09:38:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohna-0005oP-2E; Tue, 18 Apr 2023 09:38:54 +0000
Received: by outflank-mailman (input) for mailman id 522706;
 Tue, 18 Apr 2023 09:38:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DyEx=AJ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pohnY-0005oJ-U3
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 09:38:52 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2060e.outbound.protection.outlook.com
 [2a01:111:f400:7d00::60e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d4063bc7-ddcc-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 11:38:52 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8624.eurprd04.prod.outlook.com (2603:10a6:102:21b::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 09:38:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 09:38:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4063bc7-ddcc-11ed-b21f-6b7b168915f2
Message-ID: <963ede97-efd2-e63c-429d-32426386c3d8@suse.com>
Date: Tue, 18 Apr 2023 11:38:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] tests/cpu-policy: fix "run" goal
Content-Language: en-US
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <80a02af5-9154-8289-4e92-6016c0948a61@suse.com>
 <ZD5jT/F8b82ZkGxo@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD5jT/F8b82ZkGxo@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 18.04.2023 11:30, Roger Pau Monné wrote:
> On Tue, Apr 18, 2023 at 11:01:56AM +0200, Jan Beulich wrote:
>> An earlier change converted TARGET-y to TARGETS, but failed to replace
>> all references. Convert run's dependency, but use $< in the command to
>> avoid the leading blank that += inserts.
>>
>> Fixes: 6a9f5477637a ("tests/cpu-policy: Rework Makefile")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/tools/tests/cpu-policy/Makefile
>> +++ b/tools/tests/cpu-policy/Makefile
>> @@ -16,8 +16,8 @@ endif
>>  all: $(TARGETS)
>>  
>>  .PHONY: run
>> -run: $(TARGET-y)
>> -	./$(TARGET-y)
>> +run: $(TARGETS)
>> +	./$<
> 
> Since it seems like TARGETS can contain multiple outputs, do we want
> to have a for loop here?

Imo TARGETS is just the conventional name, even if it expands to only
a single target. I'd prefer to stick with the simple rule until such
time as there really are multiple executables here.

Jan
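For reference, `$<` in a recipe expands to the first prerequisite only, so the rule stays correct for a single executable without the leading blank that `+=` would introduce via `$(TARGETS)`. A minimal sketch, with a hypothetical target name:

```make
TARGETS := test-cpu-policy      # hypothetical; currently a single executable

.PHONY: run
run: $(TARGETS)
	./$<                    # $< = first prerequisite, no leading blank

# Were TARGETS ever to list several executables, the rule would need a loop:
# run: $(TARGETS)
# 	for t in $^; do ./$$t || exit $$?; done
```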


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 09:42:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 09:42:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522711.812244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohrA-0007Ic-Q7; Tue, 18 Apr 2023 09:42:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522711.812244; Tue, 18 Apr 2023 09:42:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pohrA-0007IV-Lo; Tue, 18 Apr 2023 09:42:36 +0000
Received: by outflank-mailman (input) for mailman id 522711;
 Tue, 18 Apr 2023 09:42:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EUq5=AJ=citrix.com=prvs=465f4c9e2=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pohr9-0007IP-6y
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 09:42:35 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 573e3221-ddcd-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 11:42:33 +0200 (CEST)
Received: from mail-dm6nam12lp2168.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 05:42:30 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SJ0PR03MB5439.namprd03.prod.outlook.com (2603:10b6:a03:286::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 09:42:29 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 09:42:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 573e3221-ddcd-11ed-b21f-6b7b168915f2
Date: Tue, 18 Apr 2023 11:42:23 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86: cpu{id,}_policy_updated() can be static
Message-ID: <ZD5l/y15PkJS8jbw@Air-de-Roger>
References: <a24ace58-2ac5-6152-c42b-0037355ce9c4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a24ace58-2ac5-6152-c42b-0037355ce9c4@suse.com>
MIME-Version: 1.0

On Tue, Apr 18, 2023 at 11:35:41AM +0200, Jan Beulich wrote:
> The function merely needs moving earlier in the file to avoid the need
> for a forward declaration. While moving it, also rename it following the
> recent folding of CPUID and MSR policies.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

We might also want to rename the hvm_function_table hook.

One minor comment below.

> 
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -288,6 +288,16 @@ void update_guest_memory_policy(struct v
>      }
>  }
>  
> +/*
> + * Called during vcpu construction, and each time the toolstack changes the
> + * CPUID configuration for the domain.

The comment also needs to be updated to mention CPUID/MSR or some
such now.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 09:44:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 09:44:36 +0000
Date: Tue, 18 Apr 2023 11:44:03 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH] tests/cpu-policy: fix "run" goal
Message-ID: <ZD5mY4ayWXO15Cpf@Air-de-Roger>
References: <80a02af5-9154-8289-4e92-6016c0948a61@suse.com>
 <ZD5jT/F8b82ZkGxo@Air-de-Roger>
 <963ede97-efd2-e63c-429d-32426386c3d8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <963ede97-efd2-e63c-429d-32426386c3d8@suse.com>
MIME-Version: 1.0

On Tue, Apr 18, 2023 at 11:38:48AM +0200, Jan Beulich wrote:
> On 18.04.2023 11:30, Roger Pau Monné wrote:
> > On Tue, Apr 18, 2023 at 11:01:56AM +0200, Jan Beulich wrote:
> >> An earlier change converted TARGET-y to TARGETS, but failed to replace
> >> all references. Convert run's dependency, but use $< in the command to
> >> avoid the leading blank that += inserts.
> >>
> >> Fixes: 6a9f5477637a ("tests/cpu-policy: Rework Makefile")
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>
> >> --- a/tools/tests/cpu-policy/Makefile
> >> +++ b/tools/tests/cpu-policy/Makefile
> >> @@ -16,8 +16,8 @@ endif
> >>  all: $(TARGETS)
> >>  
> >>  .PHONY: run
> >> -run: $(TARGET-y)
> >> -	./$(TARGET-y)
> >> +run: $(TARGETS)
> >> +	./$<
> > 
> > Since it seems like TARGETS can contain multiple outputs, do we want
> > to have a for loop here?
> 
> Imo TARGETS is just the conventional name, even if it expands to only
> a single target. I'd prefer to stick with the simple rule until such
> time that there really are multiple executables here.

Not especially fussed either way, and it's certainly an improvement
over the current state:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.
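As a footnote to the discussion above: `$<` expands to the first prerequisite only, so if TARGETS ever grows a second executable, the rule as committed would silently run just the first one. The loop variant Roger floated could look like this (a sketch, not part of the applied patch):

```make
.PHONY: run
run: $(TARGETS)
	for t in $(TARGETS); do ./$$t || exit $$?; done
```

The `|| exit $$?` keeps the recipe failing on the first test that fails, matching the single-target behaviour.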


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 10:16:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 10:16:16 +0000
Message-ID: <65f14053-1816-7f8f-b20e-c108575eabd4@suse.com>
Date: Tue, 18 Apr 2023 12:15:28 +0200
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: fix build with old gcc after CPU policy changes
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Old gcc won't cope with initializers involving unnamed struct/union
fields.

Fixes: 441b1b2a50ea ("x86/emul: Switch x86_emulate_ctxt to cpu_policy")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
+++ b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
@@ -893,12 +893,14 @@ int LLVMFuzzerTestOneInput(const uint8_t
     struct x86_emulate_ctxt ctxt = {
         .data = &state,
         .regs = &input.regs,
-        .cpu_policy = &cp,
         .addr_size = 8 * sizeof(void *),
         .sp_size = 8 * sizeof(void *),
     };
     int rc;
 
+    /* Not part of the initializer, for old gcc to cope. */
+    ctxt.cpu_policy = &cp;
+
     /* Reset all global state variables */
     memset(&input, 0, sizeof(input));
 
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -1313,12 +1313,14 @@ int pv_emulate_privileged_op(struct cpu_
     struct domain *currd = curr->domain;
     struct priv_op_ctxt ctxt = {
         .ctxt.regs = regs,
-        .ctxt.cpu_policy = currd->arch.cpu_policy,
         .ctxt.lma = !is_pv_32bit_domain(currd),
     };
     int rc;
     unsigned int eflags, ar;
 
+    /* Not part of the initializer, for old gcc to cope. */
+    ctxt.ctxt.cpu_policy = currd->arch.cpu_policy;
+
     if ( !pv_emul_read_descriptor(regs->cs, curr, &ctxt.cs.base,
                                   &ctxt.cs.limit, &ar, 1) ||
          !(ar & _SEGMENT_S) ||
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -344,7 +344,6 @@ int pv_ro_page_fault(unsigned long addr,
     unsigned int addr_size = is_pv_32bit_domain(currd) ? 32 : BITS_PER_LONG;
     struct x86_emulate_ctxt ctxt = {
         .regs      = regs,
-        .cpu_policy = currd->arch.cpu_policy,
         .addr_size = addr_size,
         .sp_size   = addr_size,
         .lma       = addr_size > 32,
@@ -352,6 +351,9 @@ int pv_ro_page_fault(unsigned long addr,
     int rc;
     bool mmio_ro;
 
+    /* Not part of the initializer, for old gcc to cope. */
+    ctxt.cpu_policy = currd->arch.cpu_policy;
+
     /* Attempt to read the PTE that maps the VA being accessed. */
     pte = guest_get_eff_kern_l1e(addr);
 


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 10:17:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 10:17:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522727.812273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poiP3-0003WE-3M; Tue, 18 Apr 2023 10:17:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522727.812273; Tue, 18 Apr 2023 10:17:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poiP2-0003W7-Uz; Tue, 18 Apr 2023 10:17:36 +0000
Received: by outflank-mailman (input) for mailman id 522727;
 Tue, 18 Apr 2023 10:17:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TTWx=AJ=citrix.com=prvs=4659928b3=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1poiP1-0003Vw-VU
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 10:17:35 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3bbaee54-ddd2-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 12:17:34 +0200 (CEST)
Received: from mail-dm6nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 06:17:26 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN9PR03MB6075.namprd03.prod.outlook.com (2603:10b6:408:118::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 10:17:24 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 10:17:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3bbaee54-ddd2-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681813054;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=/pjCGsjSFflJwwCj7qG4e/Fx2p1aYOB73xPCXI/OSdQ=;
  b=gsiAW5WEBFfEXQqQoLvS9sdahTk/VbcgpP8xiKJe9VKApiak+7ZI/aFH
   xO9fniJmtD6NfWBFdq4vDWN6emLsja2P3LrVKZY0Qd73Vmp0/WTnWtCyN
   1B3Do/gqY/Eb2QsTyK4AnBhLso2DnBTNJ/9s6zCdP99e9ihSvpEVX+4QH
   U=;
X-IronPort-RemoteIP: 104.47.57.168
X-IronPort-MID: 108373692
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,206,1677560400"; 
   d="scan'208";a="108373692"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=h6FtxsPcnBKX61nOMi7UFdaqcK6IBMluwHg72Ygoi6ptfAC5W7BLJd7h3pnzgK61fvq6iH/9XRpMu1DBZO9vMWnMTEr6WEza+vGRGwXZQbf+f0R3t7Td+Z/DLdibzhUkfce9pXox2SNMNbvlMZbP4y6ypIKEG0iJw6m3b4KcHHW1AJnvi8lfxJquES/aBgp9xtA9h5EnMdLH2iUucf2jglZdQAcRfwYvfbinWtqZDtlxm+TsPOHR+/ItB8a3lqyWb/rFAPCgo6Nqq0DxtnsYZBmrE9c0rb5E7lhc8kQBytReSG2qsOxqIsyXuriBwajmPR7ic4A2BSDBik1zlHE6Rg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/pjCGsjSFflJwwCj7qG4e/Fx2p1aYOB73xPCXI/OSdQ=;
 b=H00wVlF/3i/pH2ncRSyzr20yXUXIVHohaF3MdIAyftbpaHN3RcCA9qpmcvGtm7mk7yXfkDtO27XNUlVffinMEoBf34HxzJi64Ln4y8jJDlpLvsRjhtMactWF1DWCSNNwnmOnKVrANE2fI9rC8WRbC9M5nmWlcOhxf3/PLrcVCg7RpnZCjQCki8+WIMQEXDVf89lAl54MvYtqrIO34jxLAbn6FzhhGTvFrl4uk3aHa60b0QXr+DL9BYyxs80LXjwEEO+3biIT9eONib40j+QqIcX3C7yfqBYuOtmWat1ZHLgLp+SlGnaKM7HUcUfskNZSM45Vq00NGzqT0b0tvJoF8g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/pjCGsjSFflJwwCj7qG4e/Fx2p1aYOB73xPCXI/OSdQ=;
 b=MrVaZtnLi2AJ0SC6exWDOvcpotY0Zjgd1BjkhL509hAn8jv00oYA6mccEs0eQfY/BWWv79ri8Df511CeRee75ZUyY3rxHyg6Vb8EBo53W/l2rxLcVb3buNQYkOOpLxbpzgPZccrX47iW7JxFPkJAJ1jedZ52PyuUY+2UEVLm/vU=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <a00a5295-e3a6-feb5-3c30-1cf6237cb26d@citrix.com>
Date: Tue, 18 Apr 2023 11:17:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86: fix build with old gcc after CPU policy changes
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <65f14053-1816-7f8f-b20e-c108575eabd4@suse.com>
In-Reply-To: <65f14053-1816-7f8f-b20e-c108575eabd4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P302CA0011.GBRP302.PROD.OUTLOOK.COM
 (2603:10a6:600:2c2::7) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BN9PR03MB6075:EE_
X-MS-Office365-Filtering-Correlation-Id: d09c75b0-06e0-4240-bed5-08db3ff61a72
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d09c75b0-06e0-4240-bed5-08db3ff61a72
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 10:17:24.4861
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SDMyPvOWzjT9Qi8LO87K5otBeOvIe7ZZpmSNGu0BxNjJqK79nnIvZOmDIPqsqsewMcwmDMzxKLwupyB5ZSdP7V0CvIp5InvmAKUhKxuPfiE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR03MB6075

On 18/04/2023 11:15 am, Jan Beulich wrote:
> Old gcc won't cope with initializers involving unnamed struct/union
> fields.
>
> Fixes: 441b1b2a50ea ("x86/emul: Switch x86_emulate_ctxt to cpu_policy")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>


Urgh.  Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> I guess, but
I'd honestly far rather delete support for such obsolete compilers.



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 10:22:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 10:22:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522733.812283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poiTf-0004zK-Jn; Tue, 18 Apr 2023 10:22:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522733.812283; Tue, 18 Apr 2023 10:22:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poiTf-0004zD-Gu; Tue, 18 Apr 2023 10:22:23 +0000
Received: by outflank-mailman (input) for mailman id 522733;
 Tue, 18 Apr 2023 10:22:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DyEx=AJ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poiTe-0004z7-J4
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 10:22:22 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0605.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::605])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6ebbda8-ddd2-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 12:22:21 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB9206.eurprd04.prod.outlook.com (2603:10a6:20b:44d::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 10:22:18 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 10:22:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6ebbda8-ddd2-11ed-b21f-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gOsi0yy7lOhmxstAYj3fg49KJ9Fbaq8ekvTHFXEdMbPSgkCu768OkNfT46IbCCI5+HAMCufGZWlr+fzCBq+I2hY9KcCfmcdatvRVYSezXUjniY52mYFWLHggZe6MidxWTwv8ybLvxOQawt6lKCEa6SAH+dkldsCNlp6Eduo7F60cT7Lzk0SCqgQ86fHd1cT8gnbx5yWBpXjJ7/HltPm/aBqNHI2UdJjZk6Gm+sGKDk+/hZc5HC2foLYxbO9BsPQ66dKLGltItXUrJXS5Pb8Vqq6fz+zTJLCJBWkzaV82BjVWlme5UnXejJnDP7YSMl2WlVg4CeKjbB6yWINB+5xjLQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/fdHGOELZfMAL7Kk0jxADdmkjbHVsTQAjBTablrwl60=;
 b=Mzd8Lzq81GTkiHs8J7qfTj/LKYkukeCnIpR87Ufeez4dyBJbitMbVRC93/IMGKAG0vOet5HSIRC7y4N9y2NwNrp9jVCaRpKlYFzeZ/0ll2lXadJgIORmubzHn4gOU82cnrmEQbm83q4epzK1jzm8RQ7NrLRvRovYb5qHVWQLRDHKQQYIANA2vq+uu7OxdRYxFRKPw05XfVrkmY+U49+uRD8N6STsVAFUFi0aFgsB9n8/ML1w4oz92zYc4s/mFLl87Zwx+GEM88BolzzqWvy3vr5fEdaRMcooIZ97caEF7zbARrLqE0VrxI4PkfLqL0ZriJ5wC1YnEPTDnpcFqRHLJg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/fdHGOELZfMAL7Kk0jxADdmkjbHVsTQAjBTablrwl60=;
 b=yK77VDnGg7cg22BmX4TrMytszpFxwbD/8K7Xjc/yeuZX09olNKa5jGylAD4nURtqPlCPxEcEEXQx33mlTSuQjs165attmq+WFnHFx7/9aJbcWdDbuRO7WoThL8NeR8unzqa89B3+KZUPYDwyJHzaIGmELkslt85a0FuR176VgqCHQAQNgala1TQJ25q294u5X8da1Ah/22ql3a+GuvYo2DOYXsDSD3J5QEulr6g6CWyjCTw6siCn9uxD3aGqO14gMvFQqhdrnGGtGZ3BD1DIc/Y0gR/HQVcgROVBfr1CVoE140EykIQAw7bPOXsVZPk8WIjLmttdwV8a5WwMt54l8A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <cdd717b4-dd70-fff5-eef9-33376de4baa9@suse.com>
Date: Tue, 18 Apr 2023 12:22:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86: cpu{id,}_policy_updated() can be static
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <a24ace58-2ac5-6152-c42b-0037355ce9c4@suse.com>
 <ZD5l/y15PkJS8jbw@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD5l/y15PkJS8jbw@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0187.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ab::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB9206:EE_
X-MS-Office365-Filtering-Correlation-Id: 34c3c0b4-8006-4b62-977f-08db3ff6c9b8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 34c3c0b4-8006-4b62-977f-08db3ff6c9b8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 10:22:18.3242
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UY9ffsUo9SIkBBMsDmfZD5CV31tdgc2giMJgst9JWyoeYjG2L/dD02Y3VEkUtUZWCg/tY0e1avtqty4ZweW8PQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB9206

On 18.04.2023 11:42, Roger Pau Monné wrote:
> On Tue, Apr 18, 2023 at 11:35:41AM +0200, Jan Beulich wrote:
>> The function merely needs moving earlier in the file to avoid the need
>> for a forward declaration. While moving it, also rename it following the
>> recent folding of CPUID and MSR policies.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> We might also want to rename the hvm_function_table hook.

I did notice this, but it seemed orthogonal enough to not do it right here.

> One minor comment below.
> 
>>
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -288,6 +288,16 @@ void update_guest_memory_policy(struct v
>>      }
>>  }
>>  
>> +/*
>> + * Called during vcpu construction, and each time the toolstack changes the
>> + * CPUID configuration for the domain.
> 
> The comment also needs to be updated to contain CPUID/MSR or some
> such now.

This isn't the case just yet aiui, but will be soon. Saying something
like "MSR configuration" would read as misleading to me, so I'd prefer
"CPUID etc configuration", if that's okay with you (and Andrew).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 10:34:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 10:34:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522738.812293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poif8-0006Tj-N6; Tue, 18 Apr 2023 10:34:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522738.812293; Tue, 18 Apr 2023 10:34:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poif8-0006Tc-KC; Tue, 18 Apr 2023 10:34:14 +0000
Received: by outflank-mailman (input) for mailman id 522738;
 Tue, 18 Apr 2023 10:34:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DyEx=AJ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1poif6-0006TW-VC
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 10:34:12 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2077.outbound.protection.outlook.com [40.107.7.77])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8c77cee9-ddd4-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 12:34:08 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9362.eurprd04.prod.outlook.com (2603:10a6:20b:4e7::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Tue, 18 Apr
 2023 10:33:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 10:33:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c77cee9-ddd4-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MyPKMuuaePvTIe3zjSuAEyroFYPdGjwxQ60Nr8j2m9jXKJ61cdQR+VcXQoEp+vEt9mcGHPrOOZqEsK3wNgnLEhCPvXIKecopAnwtvMyZwX/QQp0HLb+6n1fiGHrOsht3+s0X5d2Nn69T/9n6+tJ+dyY61gzHukyvsqhEZW9dk9dkJH+a2Os6jAzoDJzV193wEwn2YEQbU5x+0rzvCN35KbB3JnqoEgSs6XLkbtG3524VtpwnnaaISfzRevYZXGrwISo9yoGK3SOHGlidhNzdDKFrBfHf3OiXNPACTdZA0Q3crRO6k7wwIpuN4vo+4E8wvTTvOCK3zir8rpvoZnOQ/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ho7DQTfcrRGgvdcXsi/AG6ZA4fuFOVAjl3n8bd4GpfQ=;
 b=LR+wdnEQpUnMsheH8hFbZ0v0AXE7cirouxPSjwI4yARBNrVuo6o5TbGn3BSaOaVRzkVECQPTb3hh1bYPpUrY9ADCHiw8vHeKMR7KLG3JQeVaTViDzfV4Y+O+Jv0UYAscIvogVgxt2zEZKk60TXT74iRLCWqJPKbuKpqKb2klYKB8Gt6c2BjoDp/FujRxpYIbfwRR6jYAmy1Imu4vjx0xhXjvIJv/kcmlKdc9q0rEaSpBYZ+lgpA7MM5/y34YDTTMZZglPQcXNuO4g+qKbOwhV8+nxdI2ULXsXDaKcLFenjgYCzVx/b+JJcXVh895clB47EtjvWNTwCZ3Ls222C4oIg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ho7DQTfcrRGgvdcXsi/AG6ZA4fuFOVAjl3n8bd4GpfQ=;
 b=UXt+oNcJQVeyh6MKnboS/axEsI4wucqN2UQhBt7LIReP9tC8RArz+XxAwAnZgWkHIXTzeI6G7Sy2QlDrI7H3LM6gcf3qw03dgS0gXwjrF5W1L/Q6Ei6LpLUaLIQHbY+yA5/p7XSDF+zkwaFgitji6E+KDr3L4EQJIj20gJGMksm/tJ3jRdNO+DuwG5GZjBI2U6v7/CAVCvAm9opw+sPJY/b9J8vVyyH9HoHYjeKR7bEG0YjKXamMLHGZ5VwLdYT6GVf4yBsjpcyjVd9wf0xnbs+aQ/QelJfdX578GnaT8rHcBmVlRAeoVRExr3kkeMm3GhcPMvHyNWAJdvyTLzX+gw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2c603b4a-c126-edfa-2b64-b114e43606cf@suse.com>
Date: Tue, 18 Apr 2023 12:33:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] SVM: svm_get_insn_len() improvements
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0250.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:af::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9362:EE_
X-MS-Office365-Filtering-Correlation-Id: a0bc21e4-d893-49b9-4415-08db3ff85ff7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a0bc21e4-d893-49b9-4415-08db3ff85ff7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 10:33:39.9056
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ME2Loae6ciTlGy8QF4GBMZLhTz2zPxeIk620BHmJCTch+bxgpJr2j5bAWTDsqkbI+SqX96N3n0FfsiSbcOr5sQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9362

Don't let x86_decode_insn() failures pass silently.

Check the hardware-provided value (if sensible) against the decoder-
provided one. Also use it as the return value on the error path - there's
no real reason to inject #GP if we have a presumably good value in hand.

Check that, when no ModR/M byte is expected, the decoder also didn't
think there was one. This makes things symmetric with the opposite case,
where the presence of a valid ModR/M byte is implicitly checked by the
first of the involved comparisons.

While adding the initializers, also switch emul_len to "unsigned int",
matching both the function's return type and that of x86_insn_length().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/svm/emulate.c
+++ b/xen/arch/x86/hvm/svm/emulate.c
@@ -56,8 +56,8 @@ unsigned int svm_get_insn_len(struct vcp
 {
     struct hvm_emulate_ctxt ctxt;
     struct x86_emulate_state *state;
-    unsigned long nrip_len, emul_len;
-    unsigned int instr_opcode, instr_modrm;
+    unsigned long nrip_len;
+    unsigned int emul_len = 0, instr_opcode = 0, instr_modrm = 0;
     unsigned int modrm_rm, modrm_reg;
     int modrm_mod;
 
@@ -75,19 +75,22 @@ unsigned int svm_get_insn_len(struct vcp
     hvm_emulate_init_per_insn(&ctxt, NULL, 0);
     state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
     if ( IS_ERR_OR_NULL(state) )
-        return 0;
+        goto bad;
 
     emul_len = x86_insn_length(state, &ctxt.ctxt);
     modrm_mod = x86_insn_modrm(state, &modrm_rm, &modrm_reg);
     x86_emulate_free_state(state);
 
+    if ( nrip_len > 0 && nrip_len <= MAX_INST_LEN && emul_len != nrip_len )
+        goto bad;
+
     /* Extract components from instr_enc. */
     instr_modrm  = instr_enc & 0xff;
     instr_opcode = instr_enc >> 8;
 
     if ( instr_opcode == ctxt.ctxt.opcode )
     {
-        if ( !instr_modrm )
+        if ( !instr_modrm && modrm_mod < 0 )
             return emul_len;
 
         if ( modrm_mod       == MASK_EXTR(instr_modrm, 0300) &&
@@ -96,12 +99,16 @@ unsigned int svm_get_insn_len(struct vcp
             return emul_len;
     }
 
+ bad:
     printk(XENLOG_G_WARNING
-           "Insn mismatch: Expected opcode %#x, modrm %#x, got nrip_len %lu, emul_len %lu\n",
+           "Insn mismatch: Expected opcode %#x, modrm %#x, got nrip_len %lu, emul_len %u\n",
            instr_opcode, instr_modrm, nrip_len, emul_len);
     hvm_dump_emulation_state(XENLOG_G_WARNING, "SVM Insn len",
                              &ctxt, X86EMUL_UNHANDLEABLE);
 
+    if ( nrip_len > 0 && nrip_len <= MAX_INST_LEN )
+        return nrip_len;
+
     hvm_inject_hw_exception(X86_EXC_GP, 0);
     return 0;
 }


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 10:39:37 2023
Message-ID: <7eab4bb3-f20c-6ce8-cb05-2bcaa8ac35f3@citrix.com>
Date: Tue, 18 Apr 2023 11:39:18 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86: cpu{id,}_policy_updated() can be static
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
References: <a24ace58-2ac5-6152-c42b-0037355ce9c4@suse.com>
 <ZD5l/y15PkJS8jbw@Air-de-Roger>
 <cdd717b4-dd70-fff5-eef9-33376de4baa9@suse.com>
In-Reply-To: <cdd717b4-dd70-fff5-eef9-33376de4baa9@suse.com>

On 18/04/2023 11:22 am, Jan Beulich wrote:
> On 18.04.2023 11:42, Roger Pau Monné wrote:
>> On Tue, Apr 18, 2023 at 11:35:41AM +0200, Jan Beulich wrote:
>>> The function merely needs moving earlier in the file to avoid the need
>>> for a forward declaration. While moving it, also rename it following the
>>> recent folding of CPUID and MSR policies.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
> Thanks.
>
>> We might also want to rename the hvm_function_table hook.
> I did notice this, but it seemed orthogonal enough to not do it right here.
>
>> One minor comment below.
>>
>>> --- a/xen/arch/x86/domain.c
>>> +++ b/xen/arch/x86/domain.c
>>> @@ -288,6 +288,16 @@ void update_guest_memory_policy(struct v
>>>      }
>>>  }
>>>  
>>> +/*
>>> + * Called during vcpu construction, and each time the toolstack changes the
>>> + * CPUID configuration for the domain.
>> The comment also needs to be updated to contain CPUID/MSR or some
>> such now.
> This isn't the case just yet aiui, but will be soon. Saying something
> like "MSR configuration" would read as misleading to me, so I'd prefer
> "CPUID etc configuration", if that's okay with you (and Andrew).

Technically it already contains one MSR's worth of configuration, which
is misc info and cpuid faulting.  It will imminently contain two.

Please use "CPU policy" here, which I think will cover things suitably.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 11:01:16 2023
Message-ID: <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
Date: Tue, 18 Apr 2023 13:00:53 +0200
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230418092458.15253-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230418092458.15253-1-roger.pau@citrix.com>

On 18.04.2023 11:24, Roger Pau Monne wrote:
> Some of the assembly entry points cannot be safely patched until it's
> safe to use jmp, as livepatch can replace a whole block with a jmp to
> a new address, and that won't be safe until speculative mitigations
> have been applied.

Isn't the issue only with indirect JMP, whereas livepatch uses only
direct ones?

> --- a/xen/arch/x86/include/asm/config.h
> +++ b/xen/arch/x86/include/asm/config.h
> @@ -44,6 +44,20 @@
>  /* Linkage for x86 */
>  #ifdef __ASSEMBLY__
>  #define ALIGN .align 16,0x90
> +#ifdef CONFIG_LIVEPATCH
> +#define START_LP(name)                          \
> +  jmp name;                                     \
> +  .pushsection .text.name, "ax", @progbits;     \

To what extent is livepatch susceptible to two .text.* sections of the
same name? Such a clash can arise here, and perhaps also for static C
functions.

> +  name:
> +#define END_LP(name)                            \
> +  .size name, . - name;                         \
> +  .type name, @function;                        \
> +  .popsection
> +#else
> +#define START_LP(name)                          \
> +  name:
> +#define END_LP(name)
> +#endif
>  #define ENTRY(name)                             \
>    .globl name;                                  \
>    ALIGN;                                        \

Do these really need to go into config.h, instead of e.g. asm_defns.h?
I'd prefer it if stuff like this was moved out of here, rather than more
things accumulating. (Perhaps these would also be better as assembler
macros, in which case asm-defns.h might be the place to put them, but I
guess that's largely a matter of taste.)

Couldn't END_LP() set type and size unconditionally? (But see also
below.)

> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -660,7 +660,7 @@ ENTRY(early_page_fault)
>  
>          ALIGN
>  /* No special register assumptions. */
> -restore_all_xen:
> +START_LP(restore_all_xen)
>          /*
>           * Check whether we need to switch to the per-CPU page tables, in
>           * case we return to late PV exit code (from an NMI or #MC).
> @@ -677,6 +677,7 @@ UNLIKELY_END(exit_cr3)
>  
>          RESTORE_ALL adj=8
>          iretq
> +END_LP(restore_all_xen)

While I'm fine with this conversion, ...

>  ENTRY(common_interrupt)
>          ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
> @@ -687,6 +688,7 @@ ENTRY(common_interrupt)
>          SPEC_CTRL_ENTRY_FROM_INTR /* Req: %rsp=regs, %r14=end, %rdx=0, Clob: acd */
>          /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
>  
> +START_LP(common_interrupt_lp)
>          mov   STACK_CPUINFO_FIELD(xen_cr3)(%r14), %rcx
>          mov   STACK_CPUINFO_FIELD(use_pv_cr3)(%r14), %bl
>          mov   %rcx, %r15
> @@ -707,6 +709,7 @@ ENTRY(common_interrupt)
>          mov   %r15, STACK_CPUINFO_FIELD(xen_cr3)(%r14)
>          mov   %bl, STACK_CPUINFO_FIELD(use_pv_cr3)(%r14)
>          jmp ret_from_intr
> +END_LP(common_interrupt_lp)

... this one's odd, as it doesn't cover the entire "function". How would
you envision we sensibly add ELF metadata also for common_interrupt?

Jan
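
[The assembler-macro alternative mentioned in the review could look
roughly as follows - a hypothetical sketch only, reusing the names from
the patch, not code that exists in the tree:

```
        /* GAS .macro variant of the proposed START_LP/END_LP (sketch). */
        .macro START_LP name
                jmp     \name
                .pushsection .text.\name, "ax", @progbits
        \name:
        .endm

        .macro END_LP name
                .size   \name, . - \name
                .type   \name, @function
                .popsection
        .endm
```

With .macro the arguments are substituted by the assembler rather than
the C preprocessor, so such macros would live in an assembly-only header
and could not be shared with C code.]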


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 11:09:33 2023
Message-ID: <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org>
Date: Tue, 18 Apr 2023 12:09:21 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: xen cache colors in ARM
Content-Language: en-US
To: Oleg Nikitenko <oleshiiwood@gmail.com>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org>
 <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Carlo Nonato <carlo.nonato@minervasys.tech>
From: Julien Grall <julien@xen.org>
In-Reply-To: <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

I have added back xen-devel and the others. Please reply to all, so they 
can have the full conversation.

On 18/04/2023 11:13, Oleg Nikitenko wrote:
>> HW
> Board: Xilinx ZynqMP
> 
>> Where are the banks located?
> I did not catch this question. Could you rephrase it?

I am referring to the memory banks. But you provided the board, so we 
should be able to infer them.

> 
>> Where do you load the various modules (e.g. kernel, xen...)?
> BOOTMOD_XEN,
> BOOTMOD_FDT,
> BOOTMOD_KERNEL

At which addresses do you load them? What are their sizes?

> 
> Should I use another branch ?
> If yes then which one ?

I don't know which branch would work on Xilinx ZynqMP with cache 
coloring (although I would assume that upstream + the series on the ML 
[1] work).


 > A company's active branch is xlnx_rebase_4.16.

The branch you pointed out is not directly maintained by the Xen Project, 
and from what you wrote below there are some differences from upstream. So 
it would be best to speak directly with Xilinx/AMD. Stefano, in CC, 
should be able to assist you.

Cheers,

> 
> Regards,
> Oleg
> 
> On Tue, 18 Apr 2023 at 12:31, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> 
>> Hi Julien,
>>
>> Thanks for the answer.
>>
>> A company's active branch is xlnx_rebase_4.16.
>> There are added there patches
>>
>>  From c14a26b9c9e6dc5711f3155e44adee8dfa634e33 Mon Sep 17 00:00:00 2001
>> From: Ayan Kumar Halder <ayan.kumar.halder@xilinx.com>
>> Date: Mon, 25 Apr 2022 11:21:19 +0100
>> Subject: [PATCH 1/6] xen/arm: smmuv1: Remove iommu group when deassign
>>
>>  From 6a7ace399f70f0001664d727476c59f211f389f5 Mon Sep 17 00:00:00 2001
>> From: Stefano Stabellini <stefano.stabellini@amd.com>
>> Date: Thu, 23 Jun 2022 11:52:47 -0700
>> Subject: [PATCH 2/6] libxl: add support for emulated TPM on ARM
>>
>>  From 6dc26f1d82a8942dc5a00c55ee29ce4be5359529 Mon Sep 17 00:00:00 2001
>> From: Tanmay Shah <tanmay.shah@xilinx.com>
>> Date: Wed, 3 Aug 2022 08:56:56 -0700
>> Subject: [PATCH 3/6] xen/eemi: Add EEMI calls to support SGI registration
>>
>>  From 9fd67311c1253a170b1364de070a7535551bba52 Mon Sep 17 00:00:00 2001
>> From: Tanmay Shah <tanmay.shah@amd.com>
>> Date: Thu, 4 Aug 2022 16:34:31 -0700
>> Subject: [PATCH 4/6] xen: eemi: make xen passthrough for unknown EEMI calls
>>   from Dom0
>>
>>  From f81a621a28dfde7b8d0d5c5c125f2f250291b7e8 Mon Sep 17 00:00:00 2001
>> From: Michal Orzel <michal.orzel@amd.com>
>> Date: Mon, 29 Aug 2022 15:09:07 +0200
>> Subject: [PATCH 5/6] platforms: xilinx: Add support for mapping PM nodes
>> into
>>   64-bit addresses
>>
>>  From 47ce40314bbec31b683da56d007d14603f002d0c Mon Sep 17 00:00:00 2001
>> From: Ayan Kumar Halder <ayankuma@amd.com>
>> Date: Tue, 30 Aug 2022 12:48:25 +0100
>> Subject: [PATCH 6/6] Arm: Enable BOOT_PIN_CTRL for Dom0
>>
>> Regards,
>> Oleg
>>
>> On Tue, 18 Apr 2023 at 11:59, Julien Grall <julien@xen.org> wrote:
>>
>>> +Stefano, + Bertrand, +Carlo,
>>>
>>> On 18/04/2023 09:43, Oleg Nikitenko wrote:
>>>> Hello,
>>>
>>> Hi,
>>>
>>>> I tried to turn on this scheme and ran into panic.
>>>> Where was I wrong ?
>>>
>>> This feature has not been merged in Xen upstream yet. So can you tell us
>>> what patches did you apply or which tree?
>>>
>>>>
>>>> Xen command line
>>>> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>>>> dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>>>> timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>>>
>>> Can you provide the following information:
>>>    * HW
>>>    * Where are the banks located?
>>>    * Where do you load the various modules (e.g. kernel, xen...)?
>>>
>>>>
>>>> Xen config color build settings
>>>> CONFIG_COLORING=y
>>>>
>>>> Xen log:
>>>> (XEN) I/O virtualisation enabled
>>>> (XEN)  - Dom0 mode: Relaxed
>>>> (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>>>> (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>>>> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>>>> (XEN) Coloring general information
>>>> (XEN) Way size: 64kB
>>>> (XEN) Max. number of colors available: 16
>>>> (XEN) Xen color(s): [ 0 ]
>>>> (XEN) alternatives: Patching with alt table 00000000002cc690 ->
>>>> 00000000002ccc0c
>>>> (XEN) Color array allocation failed for dom0
>>>> (XEN)
>>>> (XEN) ****************************************
>>>> (XEN) Panic on CPU 0:
>>>> (XEN) Error creating domain 0
>>>> (XEN) ****************************************
>>>> (XEN)
>>>> (XEN) Reboot in five seconds...
>>>
>>> Cheers,
>>>
>>> --
>>> Julien Grall
>>>
>>
> 

[1] 
https://lore.kernel.org/xen-devel/20230123154735.74832-1-carlo.nonato@minervasys.tech

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 11:11:03 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: [PATCH v3] x86/livepatch: Fix livepatch application when CET is active
Date: Tue, 18 Apr 2023 12:10:32 +0100
Message-ID: <20230418111032.487587-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Right now, trying to apply a livepatch on any system with CET shstk (AMD Zen3
or later, Intel Tiger Lake or Sapphire Rapids and later) fails as follows:

  (XEN) livepatch: lp: Verifying enabled expectations for all functions
  (XEN) common/livepatch.c:1591: livepatch: lp: timeout is 30000000ns
  (XEN) common/livepatch.c:1703: livepatch: lp: CPU28 - IPIing the other 127 CPUs
  (XEN) livepatch: lp: Applying 1 functions
  (XEN) hi_func: Hi! (called 1 times)
  (XEN) Hook executing.
  (XEN) Assertion 'local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu))' failed at arch/x86/smp.c:265
  (XEN) *** DOUBLE FAULT ***
  <many double faults>

The assertion failure is from a global (system-wide) TLB flush initiated by
modify_xen_mappings().  I'm not entirely sure when this broke, and I'm not
sure exactly what causes the #DF's, but it doesn't really matter either
because they highlight a latent bug that I'd overlooked in the CET-SS vs
patching work in the first place.

While we're careful to arrange for the patching CPU to avoid encountering
non-shstk memory with transient shstk perms, other CPUs can pick these
mappings up too if they need to re-walk for uarch reasons.

Another bug is that for livepatching, we only disable CET if shadow stacks are
in use.  Running on Intel CET systems when Xen is only using CET-IBT will
crash in arch_livepatch_quiesce() when trying to clear CR0.WP with CR4.CET
still active.

Also, we never went and cleared the dirty bits on .rodata.  This would
matter (for the same reason it matters on .text - it becomes a valid target
for WRSS), but we never actually patch .rodata anyway.

Therefore rework how we do patching for both alternatives and livepatches.

Introduce modify_xen_mappings_lite() with a purpose similar to
modify_xen_mappings(), but stripped down to the bare minimum as it's used in
weird contexts.  Leave all complexity to the caller to handle.

Instead of patching by clearing CR0.WP (and having to jump through some
fragile hoops to disable CET in order to do this), just transiently relax the
permissions on .text via l2_identmap[].

Note that neither alternatives nor livepatching edit .rodata, so we don't need
to relax those permissions at this juncture.

The perms are relaxed globally, but this is safe enough.  Alternatives run
before we boot APs, and livepatching runs in a quiesced state where the other
CPUs are not doing anything interesting.

This approach is far more robust.

Fixes: 48cdc15a424f ("x86/alternatives: Clear CR4.CET when clearing CR0.WP")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Ross Lagerwall <ross.lagerwall@citrix.com>

v3:
 * Provide extra superpage assertion
 * Tweak comments

v2:
 * Add a fixes tag
 * Put modify_xen_mappings_lite() in init_or_livepatch
 * Fix various comments

Pulling put_pte_flags() out of the loops in modify_xen_mappings_lite() halves
the size of the function.  The code generation of the typesafe pagetable
helpers is terrible, both because the flags need a 32->64 expand, and
because _PAGE_NX uses cpu_has_nx behind the scenes.  We really should
improve how all of this works.
---
 xen/arch/x86/alternative.c       | 45 ++++++++------------
 xen/arch/x86/livepatch.c         | 56 ++++++++++---------------
 xen/arch/x86/mm.c                | 70 ++++++++++++++++++++++++++++++++
 xen/common/virtual_region.c      | 22 +++++++---
 xen/include/xen/mm.h             |  1 +
 xen/include/xen/virtual_region.h |  4 +-
 6 files changed, 131 insertions(+), 67 deletions(-)

diff --git a/xen/arch/x86/alternative.c b/xen/arch/x86/alternative.c
index 2383fa66294c..99482766b51f 100644
--- a/xen/arch/x86/alternative.c
+++ b/xen/arch/x86/alternative.c
@@ -382,24 +382,28 @@ static int __init cf_check nmi_apply_alternatives(
      */
     if ( !(alt_done & alt_todo) )
     {
-        unsigned long cr0, cr4;
-
-        cr0 = read_cr0();
-        cr4 = read_cr4();
-
-        if ( cr4 & X86_CR4_CET )
-            write_cr4(cr4 & ~X86_CR4_CET);
-
-        /* Disable WP to allow patching read-only pages. */
-        write_cr0(cr0 & ~X86_CR0_WP);
+        /*
+         * Relax perms on .text to be RWX, so we can modify them.
+         *
+         * This relaxes perms globally, but we run ahead of bringing APs
+         * online, so only have our own TLB to worry about.
+         */
+        modify_xen_mappings_lite(XEN_VIRT_START + MB(2),
+                                 (unsigned long)&__2M_text_end,
+                                 PAGE_HYPERVISOR_RWX);
+        flush_local(FLUSH_TLB_GLOBAL);
 
         _apply_alternatives(__alt_instructions, __alt_instructions_end,
                             alt_done);
 
-        write_cr0(cr0);
-
-        if ( cr4 & X86_CR4_CET )
-            write_cr4(cr4);
+        /*
+         * Reinstate perms on .text to be RX.  This also cleans out the dirty
+         * bits, which matters when CET Shstk is active.
+         */
+        modify_xen_mappings_lite(XEN_VIRT_START + MB(2),
+                                 (unsigned long)&__2M_text_end,
+                                 PAGE_HYPERVISOR_RX);
+        flush_local(FLUSH_TLB_GLOBAL);
 
         alt_done |= alt_todo;
     }
@@ -454,19 +458,6 @@ static void __init _alternative_instructions(bool force)
         panic("Timed out waiting for alternatives self-NMI to hit\n");
 
     set_nmi_callback(saved_nmi_callback);
-
-    /*
-     * When Xen is using shadow stacks, the alternatives clearing CR0.WP and
-     * writing into the mappings set dirty bits, turning the mappings into
-     * shadow stack mappings.
-     *
-     * While we can execute from them, this would also permit them to be the
-     * target of WRSS instructions, so reset the dirty after patching.
-     */
-    if ( cpu_has_xen_shstk )
-        modify_xen_mappings(XEN_VIRT_START + MB(2),
-                            (unsigned long)&__2M_text_end,
-                            PAGE_HYPERVISOR_RX);
 }
 
 void __init alternative_instructions(void)
diff --git a/xen/arch/x86/livepatch.c b/xen/arch/x86/livepatch.c
index f2d783fdc567..a54d991c5f0f 100644
--- a/xen/arch/x86/livepatch.c
+++ b/xen/arch/x86/livepatch.c
@@ -61,46 +61,32 @@ int arch_livepatch_safety_check(void)
 
 int noinline arch_livepatch_quiesce(void)
 {
-    /* If Shadow Stacks are in use, disable CR4.CET so we can modify CR0.WP. */
-    if ( cpu_has_xen_shstk )
-        write_cr4(read_cr4() & ~X86_CR4_CET);
-
-    /* Disable WP to allow changes to read-only pages. */
-    write_cr0(read_cr0() & ~X86_CR0_WP);
+    /*
+     * Relax perms on .text to be RWX, so we can modify them.
+     *
+     * This relaxes perms globally, but all other CPUs are waiting on us.
+     */
+    relax_virtual_region_perms();
+    flush_local(FLUSH_TLB_GLOBAL);
 
     return 0;
 }
 
 void noinline arch_livepatch_revive(void)
 {
-    /* Reinstate WP. */
-    write_cr0(read_cr0() | X86_CR0_WP);
-
-    /* Clobber dirty bits and reinstate CET, if applicable. */
-    if ( IS_ENABLED(CONFIG_XEN_SHSTK) && cpu_has_xen_shstk )
-    {
-        unsigned long tmp;
-
-        reset_virtual_region_perms();
-
-        write_cr4(read_cr4() | X86_CR4_CET);
-
-        /*
-         * Fix up the return address on the shadow stack, which currently
-         * points at arch_livepatch_quiesce()'s caller.
-         *
-         * Note: this is somewhat fragile, and depends on both
-         * arch_livepatch_{quiesce,revive}() being called from the same
-         * function, which is currently the case.
-         *
-         * Any error will result in Xen dying with #CP, and its too late to
-         * recover in any way.
-         */
-        asm volatile ("rdsspq %[ssp];"
-                      "wrssq %[addr], (%[ssp]);"
-                      : [ssp] "=&r" (tmp)
-                      : [addr] "r" (__builtin_return_address(0)));
-    }
+    /*
+     * Reinstate perms on .text to be RX.  This also cleans out the dirty
+     * bits, which matters when CET Shstk is active.
+     *
+     * The other CPUs waiting for us could in principle have re-walked while
+     * we were patching and cached the reduced perms in their TLB.  Therefore,
+     * we need to do a global TLB flush.
+     *
+     * However, we can't use Xen's normal global TLB flush infrastructure, so
+     * delay the TLB flush to arch_livepatch_post_action(), which is called on
+     * all CPUs (including us) on the way out of patching.
+     */
+    tighten_virtual_region_perms();
 }
 
 int arch_livepatch_verify_func(const struct livepatch_func *func)
@@ -197,6 +183,8 @@ void noinline arch_livepatch_revert(const struct livepatch_func *func)
  */
 void noinline arch_livepatch_post_action(void)
 {
+    /* See arch_livepatch_revive() */
+    flush_local(FLUSH_TLB_GLOBAL);
 }
 
 static nmi_callback_t *saved_nmi_callback;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 36a07ef77eae..98529215ddec 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -91,6 +91,7 @@
 #include <xen/ioreq.h>
 #include <xen/kernel.h>
 #include <xen/lib.h>
+#include <xen/livepatch.h>
 #include <xen/mm.h>
 #include <xen/param.h>
 #include <xen/domain.h>
@@ -5879,6 +5880,75 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
     return modify_xen_mappings(s, e, _PAGE_NONE);
 }
 
+/*
+ * Similar to modify_xen_mappings(), but used by the alternatives and
+ * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
+ * responsibility of the caller, and *MUST* not be introduced here.
+ *
+ * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
+ * Must be called with present flags, and over present mappings.
+ * Must be called on leaf page boundaries, i.e. s and e must not be in the
+ * middle of a superpage.
+ */
+void init_or_livepatch modify_xen_mappings_lite(
+    unsigned long s, unsigned long e, unsigned int _nf)
+{
+    unsigned long v = s, fm, nf;
+
+    /* Set of valid PTE bits which may be altered. */
+#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
+    fm = put_pte_flags(FLAGS_MASK);
+    nf = put_pte_flags(_nf & FLAGS_MASK);
+#undef FLAGS_MASK
+
+    ASSERT(nf & _PAGE_PRESENT);
+    ASSERT(IS_ALIGNED(s, PAGE_SIZE) && s >= XEN_VIRT_START);
+    ASSERT(IS_ALIGNED(e, PAGE_SIZE) && e <= XEN_VIRT_END);
+
+    while ( v < e )
+    {
+        l2_pgentry_t *pl2e = &l2_xenmap[l2_table_offset(v)];
+        l2_pgentry_t l2e = l2e_read_atomic(pl2e);
+        unsigned int l2f = l2e_get_flags(l2e);
+
+        ASSERT(l2f & _PAGE_PRESENT);
+
+        if ( l2e_get_flags(l2e) & _PAGE_PSE )
+        {
+            ASSERT(l1_table_offset(v) == 0);
+            ASSERT(e - v >= (1UL << L2_PAGETABLE_SHIFT));
+
+            l2e_write_atomic(pl2e, l2e_from_intpte((l2e.l2 & ~fm) | nf));
+
+            v += 1UL << L2_PAGETABLE_SHIFT;
+            continue;
+        }
+
+        /* else descend to l1 */
+        {
+            l1_pgentry_t *pl1t = map_l1t_from_l2e(l2e);
+
+            while ( v < e )
+            {
+                l1_pgentry_t *pl1e = &pl1t[l1_table_offset(v)];
+                l1_pgentry_t l1e = l1e_read_atomic(pl1e);
+                unsigned int l1f = l1e_get_flags(l1e);
+
+                ASSERT(l1f & _PAGE_PRESENT);
+
+                l1e_write_atomic(pl1e, l1e_from_intpte((l1e.l1 & ~fm) | nf));
+
+                v += 1UL << L1_PAGETABLE_SHIFT;
+
+                if ( l2_table_offset(v) == 0 )
+                    break;
+            }
+
+            unmap_domain_page(pl1t);
+        }
+    }
+}
+
 void __set_fixmap(
     enum fixed_addresses idx, unsigned long mfn, unsigned long flags)
 {
diff --git a/xen/common/virtual_region.c b/xen/common/virtual_region.c
index 5ecdba9c08ed..ddac5c9147e5 100644
--- a/xen/common/virtual_region.c
+++ b/xen/common/virtual_region.c
@@ -92,16 +92,28 @@ void unregister_virtual_region(struct virtual_region *r)
     remove_virtual_region(r);
 }
 
-#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_XEN_SHSTK)
-void reset_virtual_region_perms(void)
+#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_X86)
+void relax_virtual_region_perms(void)
 {
     const struct virtual_region *region;
 
     rcu_read_lock(&rcu_virtual_region_lock);
     list_for_each_entry_rcu( region, &virtual_region_list, list )
-        modify_xen_mappings((unsigned long)region->start,
-                            ROUNDUP((unsigned long)region->end, PAGE_SIZE),
-                            PAGE_HYPERVISOR_RX);
+        modify_xen_mappings_lite((unsigned long)region->start,
+                                 ROUNDUP((unsigned long)region->end, PAGE_SIZE),
+                                 PAGE_HYPERVISOR_RWX);
+    rcu_read_unlock(&rcu_virtual_region_lock);
+}
+
+void tighten_virtual_region_perms(void)
+{
+    const struct virtual_region *region;
+
+    rcu_read_lock(&rcu_virtual_region_lock);
+    list_for_each_entry_rcu( region, &virtual_region_list, list )
+        modify_xen_mappings_lite((unsigned long)region->start,
+                                 ROUNDUP((unsigned long)region->end, PAGE_SIZE),
+                                 PAGE_HYPERVISOR_RX);
     rcu_read_unlock(&rcu_virtual_region_lock);
 }
 #endif
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 9d14aed74baa..b0dc3ba9c98d 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -100,6 +100,7 @@ int map_pages_to_xen(
     unsigned int flags);
 /* Alter the permissions of a range of Xen virtual address space. */
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags);
+void modify_xen_mappings_lite(unsigned long s, unsigned long e, unsigned int flags);
 int destroy_xen_mappings(unsigned long s, unsigned long e);
 /* Retrieve the MFN mapped by VA in Xen virtual address space. */
 mfn_t xen_map_to_mfn(unsigned long va);
diff --git a/xen/include/xen/virtual_region.h b/xen/include/xen/virtual_region.h
index ba408eb87a1a..d05362071135 100644
--- a/xen/include/xen/virtual_region.h
+++ b/xen/include/xen/virtual_region.h
@@ -33,7 +33,9 @@ void setup_virtual_regions(const struct exception_table_entry *start,
 void unregister_init_virtual_region(void);
 void register_virtual_region(struct virtual_region *r);
 void unregister_virtual_region(struct virtual_region *r);
-void reset_virtual_region_perms(void);
+
+void relax_virtual_region_perms(void);
+void tighten_virtual_region_perms(void);
 
 #endif /* __XEN_VIRTUAL_REGION_H__ */
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 11:21:06 2023
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org> <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com> <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org>
In-Reply-To: <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Tue, 18 Apr 2023 14:26:52 +0300
Message-ID: <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	Carlo Nonato <carlo.nonato@minervasys.tech>
Content-Type: multipart/alternative; boundary="000000000000e570bf05f99a7f7c"

--000000000000e570bf05f99a7f7c
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Julien,

>> This feature has not been merged in Xen upstream yet

> would assume that upstream + the series on the ML [1] work

Please clarify this point,
because the two statements seem contradictory.

Regards,
Oleg

Tue, 18 Apr 2023 at 14:09, Julien Grall <julien@xen.org>:

> Hi,
>
> I have added back xen-devel and the others. Please reply to all, so they
> can have the full conversation.
>
> On 18/04/2023 11:13, Oleg Nikitenko wrote:
> >> HW
> > Board: Xilinx ZynqMP
> >> Where are the banks located?
> >
> >> Where are the banks located?
> > I did not catch this question. Could you rephrase it ?
>
> I am referring to the memory banks. But you provided the board, so we
> should be able to infer them.
>
> >
> >> Where do you load the various modules (e.g. kernel, xen...)?
> > BOOTMOD_XEN,
> > BOOTMOD_FDT,
> > BOOTMOD_KERNEL
>
> At which address do you load them? What are their sizes?
>
> >
> > Should I use another branch ?
> > If yes then which one ?
>
> I don't know which branch would work on Xilinx ZynqMP with cache
> coloring (although I would assume that upstream + the series on the ML
> [1] work).
>
>
>  > A company's active branch is xlnx_rebase_4.16.
>
> The branch you pointed out is not directly maintained by Xen Project and
> from what you wrote below there are some differences with upstream. So
> it would be best if you speak directly with Xilinx/AMD. Stefano in CC
> should be able to assist you.
>
> Cheers,
>
> >
> > Regards,
> > Oleg
> >
> > Tue, 18 Apr 2023 at 12:31, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >
> >> Hi Julien,
> >>
> >> Thanks for the answer.
> >>
> >> A company's active branch is xlnx_rebase_4.16.
> >> The following patches were added there:
> >>
> >>  From c14a26b9c9e6dc5711f3155e44adee8dfa634e33 Mon Sep 17 00:00:00 2001
> >> From: Ayan Kumar Halder <ayan.kumar.halder@xilinx.com>
> >> Date: Mon, 25 Apr 2022 11:21:19 +0100
> >> Subject: [PATCH 1/6] xen/arm: smmuv1: Remove iommu group when deassign
> >>
> >>  From 6a7ace399f70f0001664d727476c59f211f389f5 Mon Sep 17 00:00:00 2001
> >> From: Stefano Stabellini <stefano.stabellini@amd.com>
> >> Date: Thu, 23 Jun 2022 11:52:47 -0700
> >> Subject: [PATCH 2/6] libxl: add support for emulated TPM on ARM
> >>
> >>  From 6dc26f1d82a8942dc5a00c55ee29ce4be5359529 Mon Sep 17 00:00:00 2001
> >> From: Tanmay Shah <tanmay.shah@xilinx.com>
> >> Date: Wed, 3 Aug 2022 08:56:56 -0700
> >> Subject: [PATCH 3/6] xen/eemi: Add EEMI calls to support SGI
> registration
> >>
> >>  From 9fd67311c1253a170b1364de070a7535551bba52 Mon Sep 17 00:00:00 2001
> >> From: Tanmay Shah <tanmay.shah@amd.com>
> >> Date: Thu, 4 Aug 2022 16:34:31 -0700
> >> Subject: [PATCH 4/6] xen: eemi: make xen passthrough for unknown EEMI
> calls
> >>   from Dom0
> >>
> >>  From f81a621a28dfde7b8d0d5c5c125f2f250291b7e8 Mon Sep 17 00:00:00 2001
> >> From: Michal Orzel <michal.orzel@amd.com>
> >> Date: Mon, 29 Aug 2022 15:09:07 +0200
> >> Subject: [PATCH 5/6] platforms: xilinx: Add support for mapping PM nodes
> >> into
> >>   64-bit addresses
> >>
> >>  From 47ce40314bbec31b683da56d007d14603f002d0c Mon Sep 17 00:00:00 2001
> >> From: Ayan Kumar Halder <ayankuma@amd.com>
> >> Date: Tue, 30 Aug 2022 12:48:25 +0100
> >> Subject: [PATCH 6/6] Arm: Enable BOOT_PIN_CTRL for Dom0
> >>
> >> Regards,
> >> Oleg
> >>
> >> Tue, 18 Apr 2023 at 11:59, Julien Grall <julien@xen.org>:
> >>
> >>> +Stefano, + Bertrand, +Carlo,
> >>>
> >>> On 18/04/2023 09:43, Oleg Nikitenko wrote:
> >>>> Hello,
> >>>
> >>> Hi,
> >>>
> >>>> I tried to turn on this scheme and ran into panic.
> >>>> Where was I wrong ?
> >>>
> >>> This feature has not been merged in Xen upstream yet. So can you tell
> us
> >>> what patches did you apply or which tree?
> >>>
> >>>>
> >>>> Xen command line
> >>>> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
> >>>> dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> >>>> timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
> >>>
> >>> Can you provide the following information:
> >>>    * HW
> >>>    * Where are the banks located?
> >>>    * Where do you load the various modules (e.g. kernel, xen...)?
> >>>
> >>>>
> >>>> Xen config color build settings
> >>>> CONFIG_COLORING=y
> >>>>
> >>>> Xen log:
> >>>> (XEN) I/O virtualisation enabled
> >>>> (XEN)  - Dom0 mode: Relaxed
> >>>> (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> >>>> (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> >>>> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> >>>> (XEN) Coloring general information
> >>>> (XEN) Way size: 64kB
> >>>> (XEN) Max. number of colors available: 16
> >>>> (XEN) Xen color(s): [ 0 ]
> >>>> (XEN) alternatives: Patching with alt table 00000000002cc690 ->
> >>>> 00000000002ccc0c
> >>>> (XEN) Color array allocation failed for dom0
> >>>> (XEN)
> >>>> (XEN) ****************************************
> >>>> (XEN) Panic on CPU 0:
> >>>> (XEN) Error creating domain 0
> >>>> (XEN) ****************************************
> >>>> (XEN)
> >>>> (XEN) Reboot in five seconds...
> >>>
> >>> Cheers,
> >>>
> >>> --
> >>> Julien Grall
> >>>
> >>
> >
>
> [1]
>
> https://lore.kernel.org/xen-devel/20230123154735.74832-1-carlo.nonato@minervasys.tech
>
> --
> Julien Grall
>

--000000000000e570bf05f99a7f7c--


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 11:29:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 11:29:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522772.812353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pojWb-0006Jz-CS; Tue, 18 Apr 2023 11:29:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522772.812353; Tue, 18 Apr 2023 11:29:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pojWb-0006Js-8j; Tue, 18 Apr 2023 11:29:29 +0000
Received: by outflank-mailman (input) for mailman id 522772;
 Tue, 18 Apr 2023 11:29:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pojWZ-0006Ji-Kn
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 11:29:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pojWZ-0002fH-2j; Tue, 18 Apr 2023 11:29:27 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=[192.168.26.51]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pojWY-0003vx-Sp; Tue, 18 Apr 2023 11:29:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <a83e1891-99d2-2503-3675-e5c6573a9b69@xen.org>
Date: Tue, 18 Apr 2023 12:29:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: xen cache colors in ARM
Content-Language: en-US
To: Oleg Nikitenko <oleshiiwood@gmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Carlo Nonato <carlo.nonato@minervasys.tech>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org>
 <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org>
 <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 18/04/2023 12:26, Oleg Nikitenko wrote:
> Hi Julien,

Hi Oleg,

> 
>>> This feature has not been merged in Xen upstream yet
> 
>> would assume that upstream + the series on the ML [1] work
> 
> Please clarify this point,
> because the two statements seem contradictory.

It is not clear to me how what I wrote is contradictory. A series was 
sent on the ML for cache coloring support, and this was tested on Xilinx 
ZynqMP (see the cover letter).

This work was sponsored by Xilinx/AMD. So my assumption is they have 
done the same amount of testing as they did for their own branch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 11:35:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 11:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522778.812363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pojcX-0007mT-1G; Tue, 18 Apr 2023 11:35:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522778.812363; Tue, 18 Apr 2023 11:35:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pojcW-0007mM-UY; Tue, 18 Apr 2023 11:35:36 +0000
Received: by outflank-mailman (input) for mailman id 522778;
 Tue, 18 Apr 2023 11:35:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EUq5=AJ=citrix.com=prvs=465f4c9e2=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pojcV-0007mF-Jb
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 11:35:35 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 20ab7977-dddd-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 13:35:33 +0200 (CEST)
Received: from mail-bn8nam12lp2172.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 07:35:30 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SN7PR03MB7294.namprd03.prod.outlook.com (2603:10b6:806:2e4::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 11:35:28 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 11:35:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20ab7977-dddd-11ed-b21f-6b7b168915f2
Date: Tue, 18 Apr 2023 13:35:22 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v6] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Message-ID: <ZD6AejXJxQxAyrx1@Air-de-Roger>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <116bb94f-955c-4c46-f16a-d7a5e1cc72b5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <116bb94f-955c-4c46-f16a-d7a5e1cc72b5@suse.com>
X-ClientProxiedBy: LO2P265CA0243.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8a::15) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0

On Tue, Apr 18, 2023 at 11:24:19AM +0200, Jan Beulich wrote:
> ... in order to also intercept Dom0 accesses through the alias ports.
> 
> Also stop intercepting accesses to the CMOS ports if we won't ourselves
> use the CMOS RTC, because of there being none.
> 
> Note that rtc_init() deliberately uses 16 as the upper loop bound,
> despite probe_cmos_alias() using 8: The higher bound is benign now, but
> would save us touching the code (or, worse, failing to touch it) in case
> the lower one was doubled.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> v6: Restore lost "return" in rtc_init(). Convert printk() to dprintk()
>     in probe_cmos_alias(). Correct is_cmos_port() for hwdom.
> v5: Simplify logic in is_cmos_port(). Limit the scope of a local
>     variable. Adjust a comment that's being moved.
> v4: Also conditionally mask top bit for guest index port accesses. Add
>     missing adjustments to rtc_init(). Re-work to avoid recursive
>     read_lock(). Also adjust guest_io_{read,write}(). Re-base.
> v3: Re-base over change to earlier patch.
> v2: Re-base.
> 
> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -27,7 +27,7 @@
>  #include <asm/hvm/vpt.h>
>  #include <asm/hvm/io.h>
>  #include <asm/hvm/save.h>
> -#include <asm/current.h>
> +#include <asm/iocap.h>
>  #include <xen/trace.h>
>  #include <public/hvm/params.h>
>  
> @@ -836,9 +836,19 @@ void rtc_init(struct domain *d)
>  
>      if ( !has_vrtc(d) )
>      {
> -        if ( is_hardware_domain(d) )
> -            /* Hardware domain gets mediated access to the physical RTC. */
> -            register_portio_handler(d, RTC_PORT(0), 2, hw_rtc_io);
> +        unsigned int port;
> +
> +        if ( !is_hardware_domain(d) )
> +            return;
> +
> +        /*
> +         * Hardware domain gets mediated access to the physical RTC/CMOS (of
> +         * course unless we don't use it ourselves, for there being none).
> +         */
> +        for ( port = RTC_PORT(0); port < RTC_PORT(0) + 0x10; port += 2 )
> +            if ( is_cmos_port(port, 2, d) )
> +                register_portio_handler(d, port, 2, hw_rtc_io);
> +
>          return;
>      }
>  
> --- a/xen/arch/x86/include/asm/mc146818rtc.h
> +++ b/xen/arch/x86/include/asm/mc146818rtc.h
> @@ -9,6 +9,10 @@
>  
>  extern spinlock_t rtc_lock;             /* serialize CMOS RAM access */
>  
> +struct domain;
> +bool is_cmos_port(unsigned int port, unsigned int bytes,
> +                  const struct domain *d);
> +
>  /**********************************************************************
>   * register summary
>   **********************************************************************/
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -208,7 +208,7 @@ static bool admin_io_okay(unsigned int p
>          return false;
>  
>      /* We also never permit direct access to the RTC/CMOS registers. */

Hm, it's unclear to me whether the comment above would need updating:
we don't allow direct access to the RTC/CMOS registers, but we do allow
direct access to the RTC/CMOS ports if there's no device behind them.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 12:04:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 12:04:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522790.812372 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pok3u-0002nH-Dl; Tue, 18 Apr 2023 12:03:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522790.812372; Tue, 18 Apr 2023 12:03:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pok3u-0002nA-BA; Tue, 18 Apr 2023 12:03:54 +0000
Received: by outflank-mailman (input) for mailman id 522790;
 Tue, 18 Apr 2023 12:03:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oQ2u=AJ=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pok3t-0002n4-SQ
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 12:03:53 +0000
Received: from mail-lf1-x132.google.com (mail-lf1-x132.google.com
 [2a00:1450:4864:20::132])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1587bceb-dde1-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 14:03:51 +0200 (CEST)
Received: by mail-lf1-x132.google.com with SMTP id
 2adb3069b0e04-4edcc885d8fso566275e87.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Apr 2023 05:03:51 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 w15-20020a056512098f00b004eb2f35045bsm2349851lft.269.2023.04.18.05.03.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Apr 2023 05:03:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1587bceb-dde1-11ed-8611-37d641c3527e
X-Gm-Message-State: AAQBX9fHw9v1tRbekQmkAwSgVZmZlEyVrP7oKPZFS8bz0aKuOFkuJSmX
	yQkNAtwrUVUxJx/qBvCaBtU=
X-Google-Smtp-Source: AKy350aMnhEJFJ6UfyJ8uptzdRtE6nKJzzkk/AsO+RIMvs/l/JTWCB8PQ4me22FE9QI6J1BSuG2v4w==
X-Received: by 2002:ac2:46c9:0:b0:4ed:c537:d0ca with SMTP id p9-20020ac246c9000000b004edc537d0camr1702418lfo.59.1681819431140;
        Tue, 18 Apr 2023 05:03:51 -0700 (PDT)
Message-ID: <48a1fd97d34a37a2cdbdadf35811d31523b61a4e.camel@gmail.com>
Subject: Re: [PATCH v2 0/2] deal with GOT stuff for RISC-V
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>,
 xen-devel@lists.xenproject.org
Date: Tue, 18 Apr 2023 15:03:50 +0300
In-Reply-To: <b343d8c3-b23b-c67b-76f6-c25d5892328b@suse.com>
References: <cover.1678970065.git.oleksii.kurochko@gmail.com>
	 <b343d8c3-b23b-c67b-76f6-c25d5892328b@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.4 (3.46.4-1.fc37) 
MIME-Version: 1.0

On Mon, 2023-04-17 at 16:12 +0200, Jan Beulich wrote:
> On 16.03.2023 14:22, Oleksii Kurochko wrote:
> > Oleksii Kurochko (2):
> >   xen/riscv: add EMBEDDED_EXTRA_CFLAGS to CFLAGS
> >   xen/riscv: add explicit check that .got{.plt} is empty
> >
> >  xen/arch/riscv/arch.mk   |  2 ++
> >  xen/arch/riscv/xen.lds.S | 13 +++++++++++++
> >  2 files changed, 15 insertions(+)
>
> Just to mention it in case you aren't aware: Hunting down the necessary
> acks is your responsibility, not the committers'. You may want to ping
> Bob and Alistair (unless this response of mine is already enough of a
> ping). Provided of course the patches still apply as-is ...
>
Thanks. I'll take that into account.

I thought my only option was to wait for a response from a maintainer.

~ Oleksii




From xen-devel-bounces@lists.xenproject.org Tue Apr 18 12:06:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 12:06:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522795.812383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pok60-0003My-Qd; Tue, 18 Apr 2023 12:06:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522795.812383; Tue, 18 Apr 2023 12:06:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pok60-0003Mr-Mz; Tue, 18 Apr 2023 12:06:04 +0000
Received: by outflank-mailman (input) for mailman id 522795;
 Tue, 18 Apr 2023 12:06:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Bo8Z=AJ=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1pok60-0003Ml-0N
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 12:06:04 +0000
Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com
 [2607:f8b0:4864:20::1029])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 631b9427-dde1-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 14:06:02 +0200 (CEST)
Received: by mail-pj1-x1029.google.com with SMTP id
 98e67ed59e1d1-24704a7bf34so1888080a91.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Apr 2023 05:06:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 631b9427-dde1-11ed-b21f-6b7b168915f2
X-Gm-Message-State: AAQBX9fgfNwAYUGn5XSTMD7f9BCj408v9nBWS3HB626slIeyG1JOjBMF
	I1PZdOyuuzIr2rCtWizS1U5B+nMmm53FWqSH5C0=
X-Google-Smtp-Source: AKy350Z6RwQfPyHMXSuCCfEVriC5Yp+5OaFmCLK9PNOvyUayIdDJnffuDuIIdOOfKvHdv7Xq56UAMzm/hFxchSUJ0bI=
X-Received: by 2002:a17:90a:4e4a:b0:247:2ff9:1cff with SMTP id
 t10-20020a17090a4e4a00b002472ff91cffmr1766307pjl.25.1681819561440; Tue, 18
 Apr 2023 05:06:01 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org> <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <a83e1891-99d2-2503-3675-e5c6573a9b69@xen.org> <CA+SAi2sZnrLzQoBn-e0GDy5De6PcGzxDCuJ3MSKicB_wB7o+Nw@mail.gmail.com>
In-Reply-To: <CA+SAi2sZnrLzQoBn-e0GDy5De6PcGzxDCuJ3MSKicB_wB7o+Nw@mail.gmail.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Tue, 18 Apr 2023 15:12:04 +0300
Message-ID: <CA+SAi2tsd6j-iriefD6nXbvd4Z1qjK1curTw7ZrCTjmX35fz7w@mail.gmail.com>
Subject: Fwd: xen cache colors in ARM
To: xen-devel@lists.xen.org, xen-devel@lists.xenproject.org, 
	Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	Carlo Nonato <carlo.nonato@minervasys.tech>
Content-Type: multipart/alternative; boundary="000000000000893d0b05f99b2134"

--000000000000893d0b05f99b2134
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

---------- Forwarded message ---------
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Tue, 18 Apr 2023 at 15:05
Subject: Re: xen cache colors in ARM
To: Julien Grall <julien@xen.org>


Hi Julien,

We are talking past each other: you are using your own terminology, and
I am using mine, so we did not understand each other.

So I should download the repository from xenbits.org, and then switch to
the ML branch.

Is that correct?

Regards,
Oleg

On Tue, 18 Apr 2023 at 14:29, Julien Grall <julien@xen.org> wrote:

>
>
> On 18/04/2023 12:26, Oleg Nikitenko wrote:
> > Hi Julien,
>
> Hi Oleg,
>
> >
> >>> This feature has not been merged in Xen upstream yet
> >
> >> would assume that upstream + the series on the ML [1] work
> >
> > Please clarify this point.
> > Because the two thoughts are controversial.
>
> It is not clear to me how what I wrote is controversial. A series was
> sent on the ML for cache coloring support and this was tested on Xilinx
> ZynqMP (see cover letter).
>
> This work was sponsored by Xilinx/AMD. So my assumption is they have
> done the same amount of testing as they did for their own branch.
>
> Cheers,
>
> --
> Julien Grall
>

--000000000000893d0b05f99b2134--



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 12:18:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 12:18:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522806.812403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pokHq-0005RU-5v; Tue, 18 Apr 2023 12:18:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522806.812403; Tue, 18 Apr 2023 12:18:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pokHq-0005RN-2k; Tue, 18 Apr 2023 12:18:18 +0000
Received: by outflank-mailman (input) for mailman id 522806;
 Tue, 18 Apr 2023 12:18:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TTWx=AJ=citrix.com=prvs=4659928b3=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pokHo-0005RH-Rn
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 12:18:17 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 16edd25c-dde3-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 14:18:14 +0200 (CEST)
Received: from mail-bn7nam10lp2101.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.101])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 08:18:05 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA0PR03MB5611.namprd03.prod.outlook.com (2603:10b6:806:bf::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 12:18:01 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 12:18:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16edd25c-dde3-11ed-b21f-6b7b168915f2
X-IronPort-RemoteIP: 104.47.70.101
X-IronPort-MID: 108385673
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,207,1677560400"; 
   d="scan'208";a="108385673"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GqyxmP9HeldJzpv6aoQFauD26P96/wWOOlx4KcB2rs7NbDeObaqq0rvezOjEi3+98mkhVdASVTAZNulhckJ7aiUdHRsUqpWQXfG/GO3fGxMQuCEkvAGtuXmemnzp9JFLRlWyBNZWE7TN2/hba4ZemJ1Hxry1CX4LUnjf8KJgvFTcyvNxXEJ+T300XP+skIQNgzLfuva/XfkHAmS8pPR4CV+dPkPLhA/FKWCTdwPFZU8iHM94Mn9L9LuYrtf6L/Z4SIPQctwvaChgsweqa9KzL6Z/gm8YEZV2hjKqeKeGZaZXTG8DJYNvWera/ZhXKIbPvGToethn5TChnszVfnmqgQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EmqEYLVZS4IB3+FJFhu6PP4ws0zRPgYBkWt5s1tk+UA=;
 b=ECZG9nnpInqIJNMCiU1QAC8CE+EkQRENJT4TQh8t8sH49inbSNkFP/FRtfB8nxMeOa5pCLDMQjg462k/9K4B5joRPkjMkou/rLclu9UU7Q5UM2ZuVkE/9QV7bu2Brx4XabIHL3bmRyoj8M6f3NNcex/aZF+QurSWr6N7udQuSqAtDj1Ca6LwTc9CUFc4/AXAvp3KuqV+hTlCg6Fi0xeZ7vhNf9fINIH/vZ/cn/FfkThfFwxiBSGUnNWiMfSIJ/qhyEGXE8UF5jbiskr4plZZPutttKQedL/e8E8ljc0QCPOP661SEel9ovSlVX1r6q6rO7JBYysMyI2EejCY57Rwjw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EmqEYLVZS4IB3+FJFhu6PP4ws0zRPgYBkWt5s1tk+UA=;
 b=i0qOHFDgN54fathBtnA3Vc5MCUm0maROLIBllwZJXLCVg/XYPWlqrLD5giUN6QxQPKQikyFj6CfUvTAR7pBu897Dr8UiO691BITDECWJl/3nsNwsO909quGSlVdfDMyQSssQoPaTKuMNR0edqdbAs7k2gc7RG6HF2GY/fkk4n+g=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <fc9085f6-25b7-b94f-e7d3-ebc1d6283d73@citrix.com>
Date: Tue, 18 Apr 2023 13:17:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Content-Language: en-GB
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20230418092458.15253-1-roger.pau@citrix.com>
In-Reply-To: <20230418092458.15253-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 18/04/2023 10:24 am, Roger Pau Monne wrote:

> diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
> index 7675a59ff057..c204634910c4 100644
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -660,7 +660,7 @@ ENTRY(early_page_fault)
>  
>          ALIGN
>  /* No special register assumptions. */
> -restore_all_xen:
> +START_LP(restore_all_xen)
>          /*
>           * Check whether we need to switch to the per-CPU page tables, in
>           * case we return to late PV exit code (from an NMI or #MC).
> @@ -677,6 +677,7 @@ UNLIKELY_END(exit_cr3)
>  
>          RESTORE_ALL adj=8
>          iretq
> +END_LP(restore_all_xen)


While it's useful to have a concrete idea of what is necessary to fix
all of this, I do not wish to put in markers like this.  This isn't
about livepatching - it's about getting sane ELF metadata.

This is why I had Jane work on using the Linux macros.  They account for
*all* interesting ELF metadata, as well as taking care of things like
the global function alignment settings, CFI patching space, etc.

Putting functions in separate sections should be hidden in the normal
SYM_FUNC_START(), and dependent on CONFIG_SPLIT_SECTIONS behind the
scenes, seeing as we have that as an option already.
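
As a rough illustration only, gating the per-function section placement
inside the macro could look along these lines (the macro names, the config
option spelling, and the section naming here are assumptions for the sketch,
not Xen's or Linux's actual implementation):

```
/*
 * Hypothetical sketch: SYM_FUNC_START places each function in its own
 * .text.<name> section only when CONFIG_SPLIT_SECTIONS is enabled, and
 * always emits the ELF symbol type/size metadata so tooling (including
 * livepatch) sees well-formed function symbols.
 */
.macro SYM_FUNC_START name
#ifdef CONFIG_SPLIT_SECTIONS
    .section .text.\name, "ax", @progbits
#else
    .text
#endif
    .globl \name
    .type  \name, @function
\name:
.endm

.macro SYM_FUNC_END name
    .size \name, . - \name
.endm
```

Use sites would then just open with SYM_FUNC_START and close with
SYM_FUNC_END, so the section split stays invisible in the assembly sources.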

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 12:41:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 12:41:24 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZbSQq1oowf9vRd0a2ZPNGYXQmqK8xC64A
Date: Tue, 18 Apr 2023 12:40:41 +0000
Message-ID: <109F3491-6845-4A5F-9F77-F24D8970B1BE@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-6-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-6-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <5B339A795A41A64295EC7B9DFA4FF1A4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Luca,

> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> Save/restore the SVE context on context switch: allocate memory to hold
> the Z0-31 registers, whose length is at most 2048 bits each, and FFR,
> which can be at most 256 bits; the amount of memory allocated depends on
> the vector length for the domain and on how many bits the platform
> supports.
>
> Save P0-15, whose length is at most 256 bits each; in this case the
> memory used comes from the fpregs field in struct vfp_state, because
> V0-31 are part of Z0-31 and that space would otherwise be unused for an
> SVE domain.
>
> Create zcr_el{1,2} fields in arch_vcpu, initialise zcr_el2 on vcpu
> creation given the requested vector length and restore it on
> context switch, save/restore ZCR_EL1 value as well.
>
> Remove headers from sve.c that are already included using
> xen/sched.h.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes from v4:
> - No changes
> Changes from v3:
> - don't use fixed len types when not needed (Jan)
> - now VL is an encoded value, decode it before using.
> Changes from v2:
> - No changes
> Changes from v1:
> - No changes
> Changes from RFC:
> - Moved zcr_el2 field introduction in this patch, restore its
>   content inside sve_restore_state function. (Julien)
> ---
> xen/arch/arm/arm64/sve-asm.S             | 141 +++++++++++++++++++++++
> xen/arch/arm/arm64/sve.c                 |  68 ++++++++++-
> xen/arch/arm/arm64/vfp.c                 |  79 +++++++------
> xen/arch/arm/domain.c                    |   7 ++
> xen/arch/arm/include/asm/arm64/sve.h     |  13 +++
> xen/arch/arm/include/asm/arm64/sysregs.h |   3 +
> xen/arch/arm/include/asm/arm64/vfp.h     |  10 ++
> xen/arch/arm/include/asm/domain.h        |   2 +
> 8 files changed, 284 insertions(+), 39 deletions(-)
>
> diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
> index 4d1549344733..8c37d7bc95d5 100644
> --- a/xen/arch/arm/arm64/sve-asm.S
> +++ b/xen/arch/arm/arm64/sve-asm.S
> @@ -17,6 +17,18 @@
>     .endif
> .endm
>
> +.macro _sve_check_zreg znr
> +    .if (\znr) < 0 || (\znr) > 31
> +        .error "Bad Scalable Vector Extension vector register number \znr."
> +    .endif
> +.endm
> +
> +.macro _sve_check_preg pnr
> +    .if (\pnr) < 0 || (\pnr) > 15
> +        .error "Bad Scalable Vector Extension predicate register number \pnr."
> +    .endif
> +.endm
> +
> .macro _check_num n, min, max
>     .if (\n) < (\min) || (\n) > (\max)
>         .error "Number \n out of range [\min,\max]"
> @@ -26,6 +38,54 @@
> /* SVE instruction encodings for non-SVE-capable assemblers */
> /* (pre binutils 2.28, all kernel capable clang versions support SVE) */
>
> +/* STR (vector): STR Z\nz, [X\nxbase, #\offset, MUL VL] */
> +.macro _sve_str_v nz, nxbase, offset=0
> +    _sve_check_zreg \nz
> +    _check_general_reg \nxbase
> +    _check_num (\offset), -0x100, 0xff
> +    .inst 0xe5804000                \
> +        | (\nz)                     \
> +        | ((\nxbase) << 5)          \
> +        | (((\offset) & 7) << 10)   \
> +        | (((\offset) & 0x1f8) << 13)
> +.endm
> +
> +/* LDR (vector): LDR Z\nz, [X\nxbase, #\offset, MUL VL] */
> +.macro _sve_ldr_v nz, nxbase, offset=0
> +    _sve_check_zreg \nz
> +    _check_general_reg \nxbase
> +    _check_num (\offset), -0x100, 0xff
> +    .inst 0x85804000                \
> +        | (\nz)                     \
> +        | ((\nxbase) << 5)          \
> +        | (((\offset) & 7) << 10)   \
> +        | (((\offset) & 0x1f8) << 13)
> +.endm
> +
> +/* STR (predicate): STR P\np, [X\nxbase, #\offset, MUL VL] */
> +.macro _sve_str_p np, nxbase, offset=0
> +    _sve_check_preg \np
> +    _check_general_reg \nxbase
> +    _check_num (\offset), -0x100, 0xff
> +    .inst 0xe5800000                \
> +        | (\np)                     \
> +        | ((\nxbase) << 5)          \
> +        | (((\offset) & 7) << 10)   \
> +        | (((\offset) & 0x1f8) << 13)
> +.endm
> +
> +/* LDR (predicate): LDR P\np, [X\nxbase, #\offset, MUL VL] */
> +.macro _sve_ldr_p np, nxbase, offset=0
> +    _sve_check_preg \np
> +    _check_general_reg \nxbase
> +    _check_num (\offset), -0x100, 0xff
> +    .inst 0x85800000                \
> +        | (\np)                     \
> +        | ((\nxbase) << 5)          \
> +        | (((\offset) & 7) << 10)   \
> +        | (((\offset) & 0x1f8) << 13)
> +.endm
> +
> /* RDVL X\nx, #\imm */
> .macro _sve_rdvl nx, imm
>     _check_general_reg \nx
> @@ -35,11 +95,92 @@
>         | (((\imm) & 0x3f) << 5)
> .endm
>
> +/* RDFFR (unpredicated): RDFFR P\np.B */
> +.macro _sve_rdffr np
> +    _sve_check_preg \np
> +    .inst 0x2519f000                \
> +        | (\np)
> +.endm
> +
> +/* WRFFR P\np.B */
> +.macro _sve_wrffr np
> +    _sve_check_preg \np
> +    .inst 0x25289000                \
> +        | ((\np) << 5)
> +.endm
> +
> +.macro __for from:req, to:req
> +    .if (\from) == (\to)
> +        _for__body %\from
> +    .else
> +        __for %\from, %((\from) + ((\to) - (\from)) / 2)
> +        __for %((\from) + ((\to) - (\from)) / 2 + 1), %\to
> +    .endif
> +.endm
> +
> +.macro _for var:req, from:req, to:req, insn:vararg
> +    .macro _for__body \var:req
> +        .noaltmacro
> +        \insn
> +        .altmacro
> +    .endm
> +
> +    .altmacro
> +    __for \from, \to
> +    .noaltmacro
> +
> +    .purgem _for__body
> +.endm
> +
> +.macro sve_save nxzffrctx, nxpctx, save_ffr
> +    _for n, 0, 31, _sve_str_v \n, \nxzffrctx, \n - 32
> +    _for n, 0, 15, _sve_str_p \n, \nxpctx, \n
> +        cbz \save_ffr, 1f
> +        _sve_rdffr 0
> +        _sve_str_p 0, \nxzffrctx
> +        _sve_ldr_p 0, \nxpctx
> +        b 2f
> +1:
> +        str xzr, [x\nxzffrctx]      // Zero out FFR
> +2:
> +.endm
> +
> +.macro sve_load nxzffrctx, nxpctx, restore_ffr
> +    _for n, 0, 31, _sve_ldr_v \n, \nxzffrctx, \n - 32
> +        cbz \restore_ffr, 1f
> +        _sve_ldr_p 0, \nxzffrctx
> +        _sve_wrffr 0
> +1:
> +    _for n, 0, 15, _sve_ldr_p \n, \nxpctx, \n
> +.endm
> +
> /* Gets the current vector register size in bytes */
> GLOBAL(sve_get_hw_vl)
>     _sve_rdvl 0, 1
>     ret
>
> +/*
> + * Save the SVE context
> + *
> + * x0 - pointer to buffer for Z0-31 + FFR
> + * x1 - pointer to buffer for P0-15
> + * x2 - Save FFR if non-zero
> + */
> +GLOBAL(sve_save_ctx)
> +    sve_save 0, 1, x2
> +    ret
> +
> +/*
> + * Load the SVE context
> + *
> + * x0 - pointer to buffer for Z0-31 + FFR
> + * x1 - pointer to buffer for P0-15
> + * x2 - Restore FFR if non-zero
> + */
> +GLOBAL(sve_load_ctx)
> +    sve_load 0, 1, x2
> +    ret
> +
> /*
>  * Local variables:
>  * mode: ASM
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> index 78f7482619da..5485648850a0 100644
> --- a/xen/arch/arm/arm64/sve.c
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -5,14 +5,29 @@
>  * Copyright (C) 2022 ARM Ltd.
>  */
>
> -#include <xen/types.h>
> -#include <asm/cpufeature.h>
> +#include <xen/sched.h>
> +#include <xen/sizes.h>
> #include <asm/arm64/sve.h>
> -#include <asm/arm64/sysregs.h>
> -#include <asm/processor.h>
> -#include <asm/system.h>
>
> extern unsigned int sve_get_hw_vl(void);
> +extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
> +extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
> +                         int restore_ffr);
> +
> +static inline unsigned int sve_zreg_ctx_size(unsigned int vl)
> +{
> +    /*
> +     * Z0-31 registers size in bytes is computed from VL that is in bits, so VL
> +     * in bytes is VL/8.
> +     */
> +    return (vl / 8U) * 32U;
> +}
> +
> +static inline unsigned int sve_ffrreg_ctx_size(unsigned int vl)
> +{
> +    /* FFR register size is VL/8, which is in bytes (VL/8)/8 */
> +    return (vl / 64U);
> +}
>
> register_t compute_max_zcr(void)
> {
> @@ -60,3 +75,46 @@ unsigned int get_sys_vl_len(void)
>     return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
>             SVE_VL_MULTIPLE_VAL;
> }
> +
> +int sve_context_init(struct vcpu *v)
> +{
> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
> +    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
> +                             sve_ffrreg_ctx_size(sve_vl_bits),
> +                             L1_CACHE_BYTES);
> +
> +    if ( !ctx )
> +        return -ENOMEM;
> +
> +    v->arch.vfp.sve_context = ctx;
> +
> +    return 0;
> +}
> +
> +void sve_context_free(struct vcpu *v)
> +{
> +    xfree(v->arch.vfp.sve_context);
> +}
> +
> +void sve_save_state(struct vcpu *v)
> +{
> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
> +    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
> +            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));

You do quite some computation here for something which does not change
during the life of the VM.
Could we save the context_end in the vcpu instead and just do this
computation on init and free only?

> +
> +    v->arch.zcr_el1 = READ_SYSREG(ZCR_EL1);
> +
> +    sve_save_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
> +}
> +
> +void sve_restore_state(struct vcpu *v)
> +{
> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
> +    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
> +            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));

Same as before.

> +
> +    WRITE_SYSREG(v->arch.zcr_el1, ZCR_EL1);
> +    WRITE_SYSREG(v->arch.zcr_el2, ZCR_EL2);
> +
> +    sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
> +}
> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
> index 47885e76baae..2d0d7c2e6ddb 100644
> --- a/xen/arch/arm/arm64/vfp.c
> +++ b/xen/arch/arm/arm64/vfp.c
> @@ -2,29 +2,35 @@
> #include <asm/processor.h>
> #include <asm/cpufeature.h>
> #include <asm/vfp.h>
> +#include <asm/arm64/sve.h>
> 
> void vfp_save_state(struct vcpu *v)
> {
>     if ( !cpu_has_fp )
>         return;
> 
> -    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
> -                 "stp q2, q3, [%1, #16 * 2]\n\t"
> -                 "stp q4, q5, [%1, #16 * 4]\n\t"
> -                 "stp q6, q7, [%1, #16 * 6]\n\t"
> -                 "stp q8, q9, [%1, #16 * 8]\n\t"
> -                 "stp q10, q11, [%1, #16 * 10]\n\t"
> -                 "stp q12, q13, [%1, #16 * 12]\n\t"
> -                 "stp q14, q15, [%1, #16 * 14]\n\t"
> -                 "stp q16, q17, [%1, #16 * 16]\n\t"
> -                 "stp q18, q19, [%1, #16 * 18]\n\t"
> -                 "stp q20, q21, [%1, #16 * 20]\n\t"
> -                 "stp q22, q23, [%1, #16 * 22]\n\t"
> -                 "stp q24, q25, [%1, #16 * 24]\n\t"
> -                 "stp q26, q27, [%1, #16 * 26]\n\t"
> -                 "stp q28, q29, [%1, #16 * 28]\n\t"
> -                 "stp q30, q31, [%1, #16 * 30]\n\t"
> -                 : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
> +    if ( is_sve_domain(v->domain) )
> +        sve_save_state(v);
> +    else
> +    {
> +        asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
> +                     "stp q2, q3, [%1, #16 * 2]\n\t"
> +                     "stp q4, q5, [%1, #16 * 4]\n\t"
> +                     "stp q6, q7, [%1, #16 * 6]\n\t"
> +                     "stp q8, q9, [%1, #16 * 8]\n\t"
> +                     "stp q10, q11, [%1, #16 * 10]\n\t"
> +                     "stp q12, q13, [%1, #16 * 12]\n\t"
> +                     "stp q14, q15, [%1, #16 * 14]\n\t"
> +                     "stp q16, q17, [%1, #16 * 16]\n\t"
> +                     "stp q18, q19, [%1, #16 * 18]\n\t"
> +                     "stp q20, q21, [%1, #16 * 20]\n\t"
> +                     "stp q22, q23, [%1, #16 * 22]\n\t"
> +                     "stp q24, q25, [%1, #16 * 24]\n\t"
> +                     "stp q26, q27, [%1, #16 * 26]\n\t"
> +                     "stp q28, q29, [%1, #16 * 28]\n\t"
> +                     "stp q30, q31, [%1, #16 * 30]\n\t"
> +                     : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
> +    }
> 
>     v->arch.vfp.fpsr = READ_SYSREG(FPSR);
>     v->arch.vfp.fpcr = READ_SYSREG(FPCR);
> @@ -37,23 +43,28 @@ void vfp_restore_state(struct vcpu *v)
>     if ( !cpu_has_fp )
>         return;
> 
> -    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
> -                 "ldp q2, q3, [%1, #16 * 2]\n\t"
> -                 "ldp q4, q5, [%1, #16 * 4]\n\t"
> -                 "ldp q6, q7, [%1, #16 * 6]\n\t"
> -                 "ldp q8, q9, [%1, #16 * 8]\n\t"
> -                 "ldp q10, q11, [%1, #16 * 10]\n\t"
> -                 "ldp q12, q13, [%1, #16 * 12]\n\t"
> -                 "ldp q14, q15, [%1, #16 * 14]\n\t"
> -                 "ldp q16, q17, [%1, #16 * 16]\n\t"
> -                 "ldp q18, q19, [%1, #16 * 18]\n\t"
> -                 "ldp q20, q21, [%1, #16 * 20]\n\t"
> -                 "ldp q22, q23, [%1, #16 * 22]\n\t"
> -                 "ldp q24, q25, [%1, #16 * 24]\n\t"
> -                 "ldp q26, q27, [%1, #16 * 26]\n\t"
> -                 "ldp q28, q29, [%1, #16 * 28]\n\t"
> -                 "ldp q30, q31, [%1, #16 * 30]\n\t"
> -                 : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
> +    if ( is_sve_domain(v->domain) )
> +        sve_restore_state(v);
> +    else
> +    {
> +        asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
> +                     "ldp q2, q3, [%1, #16 * 2]\n\t"
> +                     "ldp q4, q5, [%1, #16 * 4]\n\t"
> +                     "ldp q6, q7, [%1, #16 * 6]\n\t"
> +                     "ldp q8, q9, [%1, #16 * 8]\n\t"
> +                     "ldp q10, q11, [%1, #16 * 10]\n\t"
> +                     "ldp q12, q13, [%1, #16 * 12]\n\t"
> +                     "ldp q14, q15, [%1, #16 * 14]\n\t"
> +                     "ldp q16, q17, [%1, #16 * 16]\n\t"
> +                     "ldp q18, q19, [%1, #16 * 18]\n\t"
> +                     "ldp q20, q21, [%1, #16 * 20]\n\t"
> +                     "ldp q22, q23, [%1, #16 * 22]\n\t"
> +                     "ldp q24, q25, [%1, #16 * 24]\n\t"
> +                     "ldp q26, q27, [%1, #16 * 26]\n\t"
> +                     "ldp q28, q29, [%1, #16 * 28]\n\t"
> +                     "ldp q30, q31, [%1, #16 * 30]\n\t"
> +                     : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
> +    }
> 
>     WRITE_SYSREG(v->arch.vfp.fpsr, FPSR);
>     WRITE_SYSREG(v->arch.vfp.fpcr, FPCR);
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 769fae8fe25e..060fc30bbb5d 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -552,7 +552,12 @@ int arch_vcpu_create(struct vcpu *v)
> 
>     v->arch.cptr_el2 = get_default_cptr_flags();
>     if ( is_sve_domain(v->domain) )
> +    {
> +        if ( (rc = sve_context_init(v)) != 0 )
> +            goto fail;
>         v->arch.cptr_el2 &= ~HCPTR_CP(8);
> +        v->arch.zcr_el2 = vl_to_zcr(sve_decode_vl(v->domain->arch.sve_vl));
> +    }
> 
>     v->arch.hcr_el2 = get_default_hcr_flags();
> 
> @@ -582,6 +587,8 @@ fail:
> 
> void arch_vcpu_destroy(struct vcpu *v)
> {
> +    if ( is_sve_domain(v->domain) )
> +        sve_context_free(v);
>     vcpu_timer_destroy(v);
>     vcpu_vgic_free(v);
>     free_xenheap_pages(v->arch.stack, STACK_ORDER);
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> index a4c53e3e8e2e..fc162c9d2cf7 100644
> --- a/xen/arch/arm/include/asm/arm64/sve.h
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -24,6 +24,10 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
> register_t compute_max_zcr(void);
> register_t vl_to_zcr(unsigned int vl);
> unsigned int get_sys_vl_len(void);
> +int sve_context_init(struct vcpu *v);
> +void sve_context_free(struct vcpu *v);
> +void sve_save_state(struct vcpu *v);
> +void sve_restore_state(struct vcpu *v);
> 
> #else /* !CONFIG_ARM64_SVE */
>=20
> @@ -42,6 +46,15 @@ static inline unsigned int get_sys_vl_len(void)
>     return 0;
> }
> 
> +static inline int sve_context_init(struct vcpu *v)
> +{
> +    return 0;
> +}
> +
> +static inline void sve_context_free(struct vcpu *v) {}
> +static inline void sve_save_state(struct vcpu *v) {}
> +static inline void sve_restore_state(struct vcpu *v) {}
> +
> #endif /* CONFIG_ARM64_SVE */
> 
> #endif /* _ARM_ARM64_SVE_H */
> diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
> index 4cabb9eb4d5e..3fdeb9d8cdef 100644
> --- a/xen/arch/arm/include/asm/arm64/sysregs.h
> +++ b/xen/arch/arm/include/asm/arm64/sysregs.h
> @@ -88,6 +88,9 @@
> #ifndef ID_AA64ISAR2_EL1
> #define ID_AA64ISAR2_EL1            S3_0_C0_C6_2
> #endif
> +#ifndef ZCR_EL1
> +#define ZCR_EL1                     S3_0_C1_C2_0
> +#endif
> 

What about ZCR_EL2?

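For what it's worth, a matching guard could look like the snippet below. This is a sketch: in the real sysregs.h, `S3_4_C1_C2_0` would be the raw assembler encoding name (op0=3, op1=4, CRn=1, CRm=2, op2=0 per the Arm ARM, worth double-checking against the spec); it is modelled as a string here only so the fragment is self-contained.

```c
#include <assert.h>
#include <string.h>

/* Stand-in for the raw system-register encoding token; in Xen's sysregs.h
 * this would be a bare assembler name, not a string. */
#define S3_4_C1_C2_0 "S3_4_C1_C2_0"

/* Hypothetical companion to the ZCR_EL1 guard in the hunk above. */
#ifndef ZCR_EL2
#define ZCR_EL2                     S3_4_C1_C2_0
#endif
```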
> /* ID registers (imported from arm64/include/asm/sysreg.h in Linux) */
>=20
> diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
> index e6e8c363bc16..8af714cb8ecc 100644
> --- a/xen/arch/arm/include/asm/arm64/vfp.h
> +++ b/xen/arch/arm/include/asm/arm64/vfp.h
> @@ -6,7 +6,17 @@
> 
> struct vfp_state
> {
> +    /*
> +     * When SVE is enabled for the guest, fpregs memory will be used to
> +     * save/restore P0-P15 registers, otherwise it will be used for the V0-V31
> +     * registers.
> +     */
>     uint64_t fpregs[64] __vfp_aligned;
> +    /*
> +     * When SVE is enabled for the guest, sve_context contains memory to
> +     * save/restore Z0-Z31 registers and FFR.
> +     */
> +    uint64_t *sve_context;
>     register_t fpcr;
>     register_t fpexc32_el2;
>     register_t fpsr;
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index 78cc2da3d4e5..6b5ec3bd0680 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -195,6 +195,8 @@ struct arch_vcpu
>     register_t tpidrro_el0;
> 
>     /* HYP configuration */
> +    register_t zcr_el1;
> +    register_t zcr_el2;
>     register_t cptr_el2;
>     register_t hcr_el2;
>     register_t mdcr_el2;
> -- 
> 2.34.1
> 

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 12:48:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 12:48:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Index: AQHZbSQxHdlbNP/OhU+SnBXlOek6la8xDZGA
Date: Tue, 18 Apr 2023 12:47:35 +0000
Message-ID: <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-8-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi Luca,

> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> Add a command line parameter to allow Dom0 to use SVE resources:
> the command line parameter sve=<integer>, a sub-argument of dom0=,
> controls the feature for this domain and sets the maximum SVE vector
> length for Dom0.
> 
> Add a new function, parse_signed_integer(), to parse an integer
> command line argument.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes from v4:
> - Negative values as user param means max supported HW VL (Jan)
> - update documentation, make use of no_config_param(), rename
>   parse_integer into parse_signed_integer and take long long *,
>   also put a comment on the -2 return condition, update
>   declaration comment to reflect the modifications (Jan)
> Changes from v3:
> - Don't use fixed len types when not needed (Jan)
> - renamed domainconfig_encode_vl to sve_encode_vl
> - Use a sub argument of dom0= to enable the feature (Jan)
> - Add parse_integer() function
> Changes from v2:
> - xen_domctl_createdomain field has changed into sve_vl and its
>   value now is the VL / 128, create a helper function for that.
> Changes from v1:
> - No changes
> Changes from RFC:
> - Changed docs to explain that the domain won't be created if the
>   requested vector length is above the supported one from the
>   platform.
> ---
> docs/misc/xen-command-line.pandoc    | 18 ++++++++++++++++--
> xen/arch/arm/arm64/sve.c             | 21 +++++++++++++++++++++
> xen/arch/arm/domain_build.c          | 27 +++++++++++++++++++++++++++
> xen/arch/arm/include/asm/arm64/sve.h | 15 +++++++++++++++
> xen/common/kernel.c                  | 25 +++++++++++++++++++++++++
> xen/include/xen/lib.h                | 10 ++++++++++
> 6 files changed, 114 insertions(+), 2 deletions(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index e0b89b7d3319..9c0790ce6c7c 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -777,9 +777,9 @@ Specify the bit width of the DMA heap.
> 
> ### dom0
>     = List of [ pv | pvh, shadow=<bool>, verbose=<bool>,
> -                cpuid-faulting=<bool>, msr-relaxed=<bool> ]
> +                cpuid-faulting=<bool>, msr-relaxed=<bool> ] (x86)
> 
> -    Applicability: x86
> +    = List of [ sve=<integer> ] (Arm)
> 
> Controls for how dom0 is constructed on x86 systems.
> 
> @@ -838,6 +838,20 @@ Controls for how dom0 is constructed on x86 systems.
> 
>     If using this option is necessary to fix an issue, please report a bug.
> 
> +Enables features on dom0 on Arm systems.
> +
> +*   The `sve` integer parameter enables Arm SVE usage for the Dom0 domain
> +    and sets the maximum SVE vector length.
> +    A value equal to 0 disables the feature; this is the default.
> +    A value below 0 selects the maximum SVE vector length supported by the
> +    hardware; be aware that if the hardware doesn't support SVE, the
> +    feature remains disabled.
> +    A value above 0 explicitly sets the maximum SVE vector length for Dom0;
> +    allowed values range from 128 to 2048 and must be a multiple of 128.
> +    Please note that if the user explicitly specifies a value above the
> +    maximum SVE vector length supported by the hardware, domain creation
> +    will fail and the system will stop.
> +
> ### dom0-cpuid
>     = List of comma separated booleans
> 
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> index 5485648850a0..ad5db62e1805 100644
> --- a/xen/arch/arm/arm64/sve.c
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -9,6 +9,9 @@
> #include <xen/sizes.h>
> #include <asm/arm64/sve.h>
> 
> +/* opt_dom0_sve: allow Dom0 to use SVE and set maximum vector length. */
> +int __initdata opt_dom0_sve;
> +
> extern unsigned int sve_get_hw_vl(void);
> extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
> extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
> @@ -118,3 +121,21 @@ void sve_restore_state(struct vcpu *v)
> 
>     sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
> }
> +
> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
> +{
> +    /*
> +     * Negative SVE parameter value means to use the maximum supported
> +     * vector length; otherwise, if a positive value is provided, check
> +     * that the vector length is a multiple of 128 and not bigger than
> +     * the maximum value of 2048.
> +     */
> +    if ( val < 0 )
> +        *out = get_sys_vl_len();
> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
> +        *out = val;

Shouldn't you also check that it is not greater than the maximum supported
vector length?
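One possible shape for that extra check, as a standalone sketch: `hw_max_vl` stands in for what `get_sys_vl_len()` returns in the real code, and the function name is illustrative, not the actual patch.

```c
#include <assert.h>

#define SVE_VL_MULTIPLE_VAL 128
#define SVE_VL_MAX_BITS     2048

/* Model of sve_sanitize_vl_param() with the additional bound against the
 * hardware-supported maximum vector length. */
static int sanitize_vl_model(int val, unsigned int hw_max_vl,
                             unsigned int *out)
{
    if ( val < 0 )
        *out = hw_max_vl;                        /* negative: use HW maximum */
    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) &&
              (val <= SVE_VL_MAX_BITS) &&
              ((unsigned int)val <= hw_max_vl) ) /* the extra check */
        *out = val;
    else
        return -1;

    return 0;
}
```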

> +    else
> +        return -1;
> +
> +    return 0;
> +}
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index eeb4662f0eee..3f30ef5c37b6 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -26,6 +26,7 @@
> #include <asm/platform.h>
> #include <asm/psci.h>
> #include <asm/setup.h>
> +#include <asm/arm64/sve.h>
> #include <asm/cpufeature.h>
> #include <asm/domain_build.h>
> #include <xen/event.h>
> @@ -61,6 +62,21 @@ custom_param("dom0_mem", parse_dom0_mem);
> 
> int __init parse_arch_dom0_param(const char *s, const char *e)
> {
> +    long long val;
> +
> +    if ( !parse_signed_integer("sve", s, e, &val) )
> +    {
> +#ifdef CONFIG_ARM64_SVE
> +        if ( (val >= INT_MIN) && (val <= INT_MAX) )
> +            opt_dom0_sve = val;
> +        else
> +            printk(XENLOG_INFO "'sve=%lld' value out of range!\n", val);
> +#else
> +        no_config_param("ARM64_SVE", "sve", s, e);
> +#endif

Correct me if my understanding is wrong, but here you just ignore the sve
parameter if SVE is not supported by Xen?

I am wondering a bit whether we should not just refuse it here, as the user
might wrongly think that the parameter had some effect.

Or is this the usual way to handle this case?

> +        return 0;
> +    }
> +
>     return -EINVAL;
> }
> 
> @@ -4109,6 +4125,17 @@ void __init create_dom0(void)
>     if ( iommu_enabled )
>         dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu;
>=20
> +    if ( opt_dom0_sve )
> +    {
> +        unsigned int vl;
> +
> +        if ( !sve_sanitize_vl_param(opt_dom0_sve, &vl) )
> +            dom0_cfg.arch.sve_vl = sve_encode_vl(vl);
> +        else
> +            printk(XENLOG_WARNING
> +                   "SVE vector length error, disable feature for Dom0\n");
> +    }
> +
>     dom0 = domain_create(0, &dom0_cfg, CDF_privileged | CDF_directmap);
>     if ( IS_ERR(dom0) )
>         panic("Error creating domain 0 (rc = %ld)\n", PTR_ERR(dom0));
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> index fc162c9d2cf7..f1801876b5de 100644
> --- a/xen/arch/arm/include/asm/arm64/sve.h
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -19,8 +19,15 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
>     return sve_vl * SVE_VL_MULTIPLE_VAL;
> }
> 
> +static inline unsigned int sve_encode_vl(unsigned int sve_vl_bits)
> +{
> +    return sve_vl_bits / SVE_VL_MULTIPLE_VAL;
> +}
> +
> #ifdef CONFIG_ARM64_SVE
> 
> +extern int opt_dom0_sve;
> +
> register_t compute_max_zcr(void);
> register_t vl_to_zcr(unsigned int vl);
> unsigned int get_sys_vl_len(void);
> @@ -28,9 +35,12 @@ int sve_context_init(struct vcpu *v);
> void sve_context_free(struct vcpu *v);
> void sve_save_state(struct vcpu *v);
> void sve_restore_state(struct vcpu *v);
> +int sve_sanitize_vl_param(int val, unsigned int *out);
> 
> #else /* !CONFIG_ARM64_SVE */
>=20
> +#define opt_dom0_sve (0)
> +
> static inline register_t compute_max_zcr(void)
> {
>     return 0;
> @@ -55,6 +65,11 @@ static inline void sve_context_free(struct vcpu *v) {}
> static inline void sve_save_state(struct vcpu *v) {}
> static inline void sve_restore_state(struct vcpu *v) {}
> 
> +static inline int sve_sanitize_vl_param(int val, unsigned int *out)
> +{
> +    return -1;
> +}
> +
> #endif /* CONFIG_ARM64_SVE */
> 
> #endif /* _ARM_ARM64_SVE_H */
> diff --git a/xen/common/kernel.c b/xen/common/kernel.c
> index f7b1f65f373c..29d05282c8bb 100644
> --- a/xen/common/kernel.c
> +++ b/xen/common/kernel.c
> @@ -314,6 +314,31 @@ int parse_boolean(const char *name, const char *s, c=
onst char *e)
>     return -1;
> }
> 
> +int __init parse_signed_integer(const char *name, const char *s, const char *e,
> +                                long long *val)
> +{
> +    size_t slen, nlen;
> +    const char *str;
> +    long long pval;
> +
> +    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
> +    nlen = strlen(name);
> +
> +    /* Does s start with name, or contain only the name? */
> +    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
> +        return -1;
> +
> +    pval = simple_strtoll(&s[nlen + 1], &str, 0);
> +
> +    /* Number not recognised */
> +    if ( str != e )
> +        return -2;
> +
> +    *val = pval;
> +
> +    return 0;
> +}
> +
> int cmdline_strcmp(const char *frag, const char *name)
> {
>     for ( ; ; frag++, name++ )
> diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
> index e914ccade095..5343ee7a944a 100644
> --- a/xen/include/xen/lib.h
> +++ b/xen/include/xen/lib.h
> @@ -94,6 +94,16 @@ int parse_bool(const char *s, const char *e);
>  */
> int parse_boolean(const char *name, const char *s, const char *e);
> 
> +/**
> + * Given a specific name, parses a string of the form:
> + *   $NAME=<integer number>
> + * returning 0 and a value in val, for a recognised integer.
> + * Returns -1 if the name is not found or on general errors, or -2 if the
> + * name is found but the number is not recognised.
> + */
> +int parse_signed_integer(const char *name, const char *s, const char *e,
> +                         long long *val);
> +
> /**
>  * Very similar to strcmp(), but will declare a match if the NUL in 'name'
>  * lines up with comma, colon, semicolon or equals in 'frag'.  Designed for
> -- 
> 2.34.1
> 

Cheers
Bertrand




From xen-devel-bounces@lists.xenproject.org Tue Apr 18 12:48:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 12:48:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522821.812433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pokki-0001FL-7y; Tue, 18 Apr 2023 12:48:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522821.812433; Tue, 18 Apr 2023 12:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pokki-0001FC-4W; Tue, 18 Apr 2023 12:48:08 +0000
Received: by outflank-mailman (input) for mailman id 522821;
 Tue, 18 Apr 2023 12:48:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EUq5=AJ=citrix.com=prvs=465f4c9e2=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pokkg-0001ED-Fn
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 12:48:06 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 414bd391-dde7-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 14:48:03 +0200 (CEST)
Received: from mail-dm6nam11lp2175.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 08:48:00 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DS7PR03MB5542.namprd03.prod.outlook.com (2603:10b6:5:2c5::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 12:47:56 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 12:47:56 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH] CHANGELOG: add gnttab_max_{maptrack_,}frames option changes
Date: Tue, 18 Apr 2023 14:47:48 +0200
Message-Id: <20230418124748.17881-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Note in the changelog that the purpose of the
gnttab_max_{maptrack_,}frames command line options has changed.

Fixes: b2ea81d2b935 ('xen/grants: repurpose command line max options')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 CHANGELOG.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index c978cfd9b68f..2a7e62495104 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -14,6 +14,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
      wide impact of a guest misusing atomic instructions.
  - xl/libxl can customize SMBIOS strings for HVM guests.
+ - Repurpose command line gnttab_max_{maptrack_,}frames options so they don't
+   cap toolstack-provided values.
 
 ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
 
-- 
2.40.0
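Concretely, after b2ea81d2b935 the command line values act as defaults rather than upper bounds; an illustrative setup (values arbitrary, semantics as described by the repurposing commit):

```
# Xen command line: defaults applied when the toolstack does not
# specify values (no longer caps on toolstack-provided values)
gnttab_max_frames=64 gnttab_max_maptrack_frames=8192

# xl guest config: toolstack-provided values, honoured even when they
# exceed the command line defaults above
max_grant_frames = 128
max_maptrack_frames = 16384
```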



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 12:50:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 12:50:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522829.812443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pokmW-00029W-IF; Tue, 18 Apr 2023 12:50:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522829.812443; Tue, 18 Apr 2023 12:50:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pokmW-00029P-F4; Tue, 18 Apr 2023 12:50:00 +0000
Received: by outflank-mailman (input) for mailman id 522829;
 Tue, 18 Apr 2023 12:49:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BdQL=AJ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pokmV-00029D-DB
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 12:49:59 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2060b.outbound.protection.outlook.com
 [2a01:111:f400:fe16::60b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 86a59318-dde7-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 14:49:58 +0200 (CEST)
Received: from DB3PR08CA0034.eurprd08.prod.outlook.com (2603:10a6:8::47) by
 AM9PR08MB6065.eurprd08.prod.outlook.com (2603:10a6:20b:2dd::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Tue, 18 Apr
 2023 12:49:51 +0000
Received: from DBAEUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:8:0:cafe::f2) by DB3PR08CA0034.outlook.office365.com
 (2603:10a6:8::47) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.46 via Frontend
 Transport; Tue, 18 Apr 2023 12:49:51 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT005.mail.protection.outlook.com (100.127.142.81) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.20 via Frontend Transport; Tue, 18 Apr 2023 12:49:51 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Tue, 18 Apr 2023 12:49:51 +0000
Received: from dfbf5f12ac50.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 81C4CF4B-3866-4927-964E-D0487FE82FD7.1; 
 Tue, 18 Apr 2023 12:49:44 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id dfbf5f12ac50.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 Apr 2023 12:49:44 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DB9PR08MB8203.eurprd08.prod.outlook.com (2603:10a6:10:39e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 12:49:37 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 12:49:36 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 08/12] xen/physinfo: encode Arm SVE vector length in
 arch_capabilities
Thread-Topic: [PATCH v5 08/12] xen/physinfo: encode Arm SVE vector length in
 arch_capabilities
Thread-Index: AQHZbSQst7ttECjUJEKfLWpPDMYlwq8xDiEA
Date: Tue, 18 Apr 2023 12:49:36 +0000
Message-ID: <EDF6FF5F-2825-4001-BE7E-7361932936C1@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-9-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-9-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
x-mailer: Apple Mail (2.3731.500.231)
Content-Type: text/plain; charset="us-ascii"
Content-ID: <822E84A6CCF5B6408AF04DE5EBFF6AE7@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Luca,

> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> When the arm platform supports SVE, advertise the feature in the
> field arch_capabilities in struct xen_sysctl_physinfo by encoding
> the SVE vector length in it.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> Changes from v4:
> - Write arch_capabilities from arch_do_physinfo instead of using
>   stub functions (Jan)
> Changes from v3:
> - domainconfig_encode_vl is now named sve_encode_vl
> Changes from v2:
> - Remove XEN_SYSCTL_PHYSCAP_ARM_SVE_SHFT, use MASK_INSR and
>   protect with ifdef XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK (Jan)
> - Use the helper function sve_arch_cap_physinfo to encode
>   the VL into physinfo arch_capabilities field.
> Changes from v1:
> - Use only arch_capabilities and some defines to encode SVE VL
>   (Bertrand, Stefano, Jan)
> Changes from RFC:
> - new patch
> ---
> xen/arch/arm/sysctl.c       | 4 ++++
> xen/include/public/sysctl.h | 4 ++++
> 2 files changed, 8 insertions(+)
>
> diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
> index b0a78a8b10d0..e9a0661146e4 100644
> --- a/xen/arch/arm/sysctl.c
> +++ b/xen/arch/arm/sysctl.c
> @@ -11,11 +11,15 @@
> #include <xen/lib.h>
> #include <xen/errno.h>
> #include <xen/hypercall.h>
> +#include <asm/arm64/sve.h>
> #include <public/sysctl.h>
>
> void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
> {
>     pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm | XEN_SYSCTL_PHYSCAP_hap;
> +
> +    pi->arch_capabilities |= MASK_INSR(sve_encode_vl(get_sys_vl_len()),
> +                                       XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK);
> }
>
> long arch_do_sysctl(struct xen_sysctl *sysctl,
> diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> index 2b24d6bfd00e..9d06e92d0f6a 100644
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -94,6 +94,10 @@ struct xen_sysctl_tbuf_op {
> /* Max XEN_SYSCTL_PHYSCAP_* constant.  Used for ABI checking. */
> #define XEN_SYSCTL_PHYSCAP_MAX XEN_SYSCTL_PHYSCAP_gnttab_v2
>
> +#if defined(__arm__) || defined(__aarch64__)
> +#define XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK  (0x1FU)
> +#endif
> +
> struct xen_sysctl_physinfo {
>     uint32_t threads_per_core;
>     uint32_t cores_per_socket;
> --
> 2.34.1
>



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 12:50:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 12:50:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522830.812449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pokmW-0002D2-S5; Tue, 18 Apr 2023 12:50:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522830.812449; Tue, 18 Apr 2023 12:50:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pokmW-0002CR-ME; Tue, 18 Apr 2023 12:50:00 +0000
Received: by outflank-mailman (input) for mailman id 522830;
 Tue, 18 Apr 2023 12:49:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TTWx=AJ=citrix.com=prvs=4659928b3=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pokmV-00029E-GE
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 12:49:59 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 847e53d1-dde7-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 14:49:57 +0200 (CEST)
Received: from mail-dm6nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 08:48:47 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6952.namprd03.prod.outlook.com (2603:10b6:a03:41a::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 12:48:44 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 12:48:43 +0000
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MRUdGIkBe+/samxQSGlU7uiOQ2DDQnQ65pV1K5DDiSI=;
 b=AwiUbdAAPdVJp/CXHiKy8c9tVsk25wpp97vdZDzNi4upXT8P40MV+k8X17cGHp5nrTusLMCBJHT0hEFMmQ6S9K7Lw4Woeobv9EhKpeUN75mbFQ4lI7LdiKTTqMRFUOvE9Y0OBhTQD9/aIO64TcDY5xMFyrdsPyfDNMYQZxPTsJGuPTjTNq3LduMZIhPBCczxIb7cBP9EaWyGD3NNfVH2x+EeX0Oc5/JCrlEsqceXlG4+FUp2jTqQqfX9WZiTjqhu2k/uZv85uHtI7q1SV4Qlk/PXJMQJnx7z6YN7hgRy+kAHzGZ4tVsLzHq1PLtTTRtwYNCKsx5FuRAf+sx+PHhQsg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MRUdGIkBe+/samxQSGlU7uiOQ2DDQnQ65pV1K5DDiSI=;
 b=wuCGYbW+gP7P/s7h+/6xpBPLxBSpQS3OcbE9XiX0DNrxh+aj94peMlcYeaF2aEvIsn3T0ItsdgluOfGeGe5+mwP4G4lvm2z1EoYVQL0IPVV9FpLgCkvIdu+I9gvobIzSFSZcccBmK0Q3V2nF093mvm3gGkWxdbGbjCJFjuom4iI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <69c0fb99-5c04-7b6e-bd0c-6e32bce12358@citrix.com>
Date: Tue, 18 Apr 2023 13:48:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
In-Reply-To: <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0006.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ad::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6952:EE_
X-MS-Office365-Filtering-Correlation-Id: 34623a80-25b3-4f0e-cbb5-08db400b3e19
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	a0ElOzN8+J3SDlRFNwce7bK2XyReFGjKW/gRmcklRhM9H2AaiJ+lqEUWXEuGJ2IGcTQNs/omgjcoDfHq8Mm9lvJsTDUjvwArfX+dz7/glcY/EWawJ8awQs/SObOYAFa7qDCpG6xgUhJ3zQFK/+ip2F1AKci/+WzPhXHUwS1qZQP67GOuD5dIX6Q7q4VPxK+5YzCUzGkRYpWxkBINBgAwDpvFB8WzhSuC+JKkTZKT2VOi5L9P3Opgl6kK3u+dKcUpxDnwfnB7PhM4JXjJCPMWEEvVkKJupyISqlkwl0XyvsYPR41hlvOsI89HmSHAfc8MpRjD9KP16n2Isnw78invGt19CZFR0TbAxvZQiPen9eMomQ2cxhTDKqoVyaDNt4fE6Qw1H/OEUvS9jQCJCGt+E2ji6JhD0EiSlfWdI4oT7rWDJnL7ZboXFJor6Y8NNrZk9Ssm6HJLMTDposwPxwlFe0Su5nKTuMU58jM2QXAQYQiHfZ/fOioh0s3o3knOnmMCyT3WJ+OwcG2BU0ljySH3iQ9LjNN5jvIlXdA7xIJnGtPpT4CmqinPNdh2o024l6zkv+FQ17+zjUzrB/uNn+UcUq/+y4Q4IbDiyKqhUdFRkNx8Tjsd8cXqJ+/+m/2rsOdLl6MGEccdMnVMmPfHl323qg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(366004)(376002)(346002)(136003)(451199021)(31696002)(2906002)(82960400001)(478600001)(6486002)(6666004)(2616005)(38100700002)(83380400001)(26005)(53546011)(6506007)(6512007)(31686004)(86362001)(186003)(36756003)(41300700001)(316002)(5660300002)(8676002)(8936002)(6636002)(110136005)(66556008)(66476007)(4326008)(66946007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Sk9YRmd4aVQzZGttcmVRR2VkdUYwMVJxR1RZTCt2VTlvelRxVU9rU1UrT01k?=
 =?utf-8?B?RFJ6c2orVWRnY1h6MWxRZkdPb0VzTG52MnZqV09zL0YrRWRoVnQ5eDhZbG5E?=
 =?utf-8?B?YXk0eklRM2plcm84WEVNOG9KNVpJdXVSZ2RhUHF3S2huOHNVT0VyOHRIMjVT?=
 =?utf-8?B?S0tFZGRiK04wWms2dWN6c2xMMUFTRi93MUxkQUhQRlJ5TUhkRUtNM3ZoSmIv?=
 =?utf-8?B?bjQ0bktXSzBlR2hOeVFjaDBJZmhtMmp6ejZXQzJUM21MaHRMRTYvTzZtZ2c3?=
 =?utf-8?B?OXROUTUvNm1hUWJ6MHk5Yi96UlhSODk3ZTA4OU5kVS91Vnl5bGQ4MklveTBL?=
 =?utf-8?B?Zmorc09lN2NUVW5wOUYvUFR1bHRtY3ljdkcvWjZmSTRVUDlkemZ0MGlkRkJO?=
 =?utf-8?B?ZGhPRnhUeHRqMTNTYkRSdlRmbjVHdHpTTzBKZEhYOUFwRUFoa3V0eWZhQWV1?=
 =?utf-8?B?K2tCdnY4aVBDYnVaZys4M1liOXBCNEFrL0VpQTdMZkRicGNrT2g5R09zK3hO?=
 =?utf-8?B?cThvL2dZeEcrK2FsR1NjWmZ1ZnJTQ25vWmdHQzFRL0dWV3diR3dvRmtOYlhB?=
 =?utf-8?B?OEtiTVpycnIzZW5jazF3VUs4V20vY3ZpZXBvK2FkeXFWL201Sys4OG1aS054?=
 =?utf-8?B?c2tXN2xrWkd6UUdOWVplSGo4cXc0TW5sYXk0K29Oa2FmbjgxV0dyZWJ1Q1k2?=
 =?utf-8?B?K21WV1dONWtVNVJ2Z3BhTjhpS0R3bnpDanVNS0RWUnRENW1vajh3TGtkWDRQ?=
 =?utf-8?B?QnpIOWNpT0lNZVM0MWVncDd2cjl6ekFKaEFPMWV4UWdRZmpzM0xsQ01GNnBB?=
 =?utf-8?B?TU5UZ0lFa1RtNjJNWW9XcmkwUmUvakZEQmdETmlEdDRJV3JpT2ZzUGZrVUpF?=
 =?utf-8?B?TXUyNmhFWlFPc1p0eXd0dE1ROUNEY0lsaUx6N2pjbW5iMUtsQUVuck1QRytD?=
 =?utf-8?B?bFFFK0hHbVRxdkU0Nmk0VkZxRy82cHpjeGVxYnp1UjU1VXl0WFJBSDAyTytp?=
 =?utf-8?B?V0VvenZQdDJzVU9UakV1bWM4VWtpNkJhZ05Hd0xGbmJXRE1XZWhnTHorb2ho?=
 =?utf-8?B?bFJyUTBscXdJYjFidGVDSmZUclo5MnRlbDlJWnF1RnZoUmRDakhoaWYzNHZH?=
 =?utf-8?B?dzlSU29GaDhyOGFYUHNZMVJkRHp1MXhkWDZiaEhPTjNyTjMyZ0s1cDljUUVP?=
 =?utf-8?B?am5SZ3NnemRKZjAwZm45bTg3Qy9nRzhQNjNHOTdSRHdSNGN0TG1sTnI0ZzJp?=
 =?utf-8?B?aWtZQ3Mxa0dsNGRtU29BWnVLajBGaVlOdnpMRTl1TDQ2eSt1ankxaElDc1Zu?=
 =?utf-8?B?NXEyZEhKWXdJRmtLZC94WExaMlBjNGRzRFFqd1BVSXdvNTlmYTVzWk9MZVhE?=
 =?utf-8?B?N21TcXIranlpTEFFZEpkdjVkZEVZb0NnTFJFbzZyZzRSbjhCaGtMeUFwZHNm?=
 =?utf-8?B?UzJza0tRbVB3STFudDh0WEJqWVNGWk0zNnYyR1B6SlVNM0pIb05WcVl2WC9p?=
 =?utf-8?B?L09tTmlZMlNuQ2E1UlRxY2grYVpTclpialNOTndhOHJvQXpDMGt3dk5TNGZ2?=
 =?utf-8?B?SzV6MnJWNkFaWm96OEdVdkhuUkVXNngrVFFFNW5kUEhFZElDUkNqZ3I0Z3N3?=
 =?utf-8?B?VjBIQ01RN2JQUmFIYXpmUklNbmR2Q2lIWEJ6aXVsRkxYamVLNHFYSWtNWGRS?=
 =?utf-8?B?aGordzZkdm5mRmh6SWxrOTlaRmRHRnh0Q1Y3M0dNamoyckc0cWVuV2IrRW0z?=
 =?utf-8?B?UWtBTXFoMVZZaEd6a01JOE1yTXpVL1VMNVhSVVJVb1c5TDB0UGU3TmJ5RE1H?=
 =?utf-8?B?ZUt4bEJHTmFqMnRkYnd2MHZkRmtmS1lpNlBXd2pKN3czZ2xvZmFUVEdJamNM?=
 =?utf-8?B?Y2VaMUN4aXpkTExJc1FHcWdIVHVQVkVnbk9uT2prSzhYZVNvYUs5RXZpMmhF?=
 =?utf-8?B?Nzl2NVBuWDhGUUtjWDVhTGtGdHpFRGNPYjlPUmYrNEVwSlNHeVVWVmZsVXFa?=
 =?utf-8?B?UVNpM0s5OGFoS2hpd2w2MHNLUkw1S1F5UkN5TmZmMFJlNFpubUJobE04Z1ZE?=
 =?utf-8?B?dDlobERhTlE5eU1MNWw4SmsrYjh4cHJ4UXE1UGdSKzFVdUlVSWlIYVBqWFR1?=
 =?utf-8?B?RzBDak5Mb003WXNUQjREL09tNUdIREFLRFg1dHg4aUlHTk5Ka250OThwSDdr?=
 =?utf-8?B?bXc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	GiHwHvVnhpCMl1KJBuPMwRHPTOHmapwrDNEPKVEy+7NqRmKuErM5R9LCHi3g4v/JZ4eB4MZZkCFCT6PKlTkBAzwAnh5nhUcM/f2Tj/1edNQfJWfVuRv3Nm0opqfJURKx6xJWv1e4GgPsTvTtJkx93V1ZbqJm6/EcD7+3ohyFRnv+QhFnzPrV9EpBWNBqs1LS+U5S0ifxSgWMqA3lfZbNK5mNTc9rnG+TCIVl2XWpbRs9roKtAjhK98cZbnQhO11bjUVxs79u208DFaOL1G693rJ14ewE5q0fP6ZEzIIZFAnI2zv6pEOo9NTLErSxXYA2S1r5/UrSRK7aTSDcWzVveDs3o7NlDS2mJWKWJyYf5ijvTewi8TfdMwJMLOBeov6aTGSeHO3xs+6xfGCz/gMQIvUANV/27zAgnsOEtDd7swQ/hvfmNeU9ANjvwKC349KTmWDxW68+l1Jv0JD4r7xqe9CH+JxmhUxBLjpFlFkOyORJo5bT35c+o8BQHitqPmMcILdY7KNDlkw+YfYynMDwAvACVoroOTizFDYcPlahbyco7a8Q+CL6hEjFx7E4YWrz/xqt6srH66LMBualrUZfnmFnREb+/gAGYArRL7OYRHwQztPcDM1qFjzbUBucPwVZHYFL6jFRmTs85plnh2rmXO+37CvpCKtWBg9AiklraX9y7sqlSlznLav8tGkianEPByG0d49f38hZgV5yNTvC+VA/gEUmHoR/HZtoTRcx21Ef55yPM73LEEjBvi/gvhicdd8/L/f7ytTrRJYIjugTFFdQjYEV2FctyycU2fX57i/KVxrLms3DQc0OOrECOX6pJ3vVMhgjFSCHicj6QU3q/Q==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 34623a80-25b3-4f0e-cbb5-08db400b3e19
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 12:48:43.6660
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XSiJM4MCRSFVAd+g4rmwwXMQH5lTV2dC/UJBOGRAfIq9LDNzwekA5KdMVt3txl0PjL2xnTjZZ0fEwMxrJAuQdUVNCMsycihxgYMRs00YE4o=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6952

On 18/04/2023 12:00 pm, Jan Beulich wrote:
> On 18.04.2023 11:24, Roger Pau Monne wrote:
>> Some of the assembly entry points cannot be safely patched until it's
>> safe to use jmp, as livepatch can replace a whole block with a jmp to
>> a new address, and that won't be safe until speculative mitigations
>> have been applied.
> Isn't the issue only with indirect JMP, whereas livepatch uses only
> direct ones?

We already have direct jumps prior to the speculation safety logic;
livepatch putting more in doesn't change our safety.

>> --- a/xen/arch/x86/include/asm/config.h
>> +++ b/xen/arch/x86/include/asm/config.h
>> @@ -44,6 +44,20 @@
>>  /* Linkage for x86 */
>>  #ifdef __ASSEMBLY__
>>  #define ALIGN .align 16,0x90
>> +#ifdef CONFIG_LIVEPATCH
>> +#define START_LP(name)                          \
>> +  jmp name;                                     \
>> +  .pushsection .text.name, "ax", @progbits;     \
> In how far is livepatch susceptible to two .text.* sections of the same
> name? This can result here and perhaps also for static C functions.

Well - the section is the unit at which binary diffing notices a difference.

If we have a naming collision here, then I expect the linker will merge
the two sections, and the livepatch will end up bigger than it strictly
needs to be.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 12:55:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 12:55:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522839.812463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poks7-0004Eq-Hf; Tue, 18 Apr 2023 12:55:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522839.812463; Tue, 18 Apr 2023 12:55:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poks7-0004Ej-Er; Tue, 18 Apr 2023 12:55:47 +0000
Received: by outflank-mailman (input) for mailman id 522839;
 Tue, 18 Apr 2023 12:55:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BdQL=AJ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1poks6-0004Ed-Cz
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 12:55:46 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20621.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 555e5f0c-dde8-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 14:55:45 +0200 (CEST)
Received: from DUZPR01CA0008.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:3c3::17) by AS1PR08MB7475.eurprd08.prod.outlook.com
 (2603:10a6:20b:4dd::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 12:55:42 +0000
Received: from DBAEUR03FT038.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:3c3:cafe::b2) by DUZPR01CA0008.outlook.office365.com
 (2603:10a6:10:3c3::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Tue, 18 Apr 2023 12:55:41 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT038.mail.protection.outlook.com (100.127.143.23) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.20 via Frontend Transport; Tue, 18 Apr 2023 12:55:41 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Tue, 18 Apr 2023 12:55:41 +0000
Received: from 523b1c7270fb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4D28E417-4703-4A49-AC2C-C6CA1B1C168D.1; 
 Tue, 18 Apr 2023 12:55:35 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 523b1c7270fb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 Apr 2023 12:55:35 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DB9PR08MB8203.eurprd08.prod.outlook.com (2603:10a6:10:39e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 12:55:28 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 12:55:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 555e5f0c-dde8-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HtQr+Y5ziVpwrOZc/shJYPPr8VuAC0TAvdtDs9X7vZY=;
 b=MhCLcSZUemYhPHzP2AGoZEAiYwqC1rHCUNwXqeDRum1NIMUEhkm9Dr0W+6aBEXx2e6nvGjNV4roP4WyTpiMC/gf4TTEAGWZKHLkHDDonQWNjbOvq1x4z5YnuSgBo6QWBVXsyIXNQGxlDNmp+jYKiQMHTFkpZIPMGeJrP0AMnJ+w=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4158d203a396abc0
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZttQ7XtY3+bEUlfYlYE/rtxq3mGzulWkdH8VvRLuu+EjHfdM9+X4mVNyrvY5rjgGMlGLI8YJny80w6D8/udz4m0d25aGf4ywjFm3l9tfCa7RukIjV+hktNKQz214Nj4Xl0ZdUP5FEJUL3MtK1f9mBXUdUYqqkZH6YebTtOFO4gAihSF5jq0TNpRnbWdnF1S6ftfewiIA46v9xyJluRX7VSbQ21HhuCkDIMn1iY6YsxB2xCKgI92JGIOYQLDLuvYWcVRpdg5SqQgNfVNItmnOYFO+Rpogq/3DBu4C12WuESNPTM/hUjBx2tUBfmjSUSp+iosg80aClg3qlsChi8d68w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HtQr+Y5ziVpwrOZc/shJYPPr8VuAC0TAvdtDs9X7vZY=;
 b=kGaHykQc+CuEhobnwnDrYdzzsVuTEoywjqVYALvNC2JNlpkwBG7P03AIzd0kk6x9MXzqlJmvpW+RWYTJgCnLsTJX8f0f6S0OCYvOFOvL+yjmmfz02eXuJF5rRWhWbEFYVLQVPf4wmkcORpPmHUzKIThbjLlXia18bdXbCbq7wok5zcPKu0tDC5o72qhS4a+CWOg+Ir08zr1AhvhJyhhmD9Panlh819S4nNCOXmKDHLEYgHHDd/GJn3D+qXxI2gA6X4CiRNGN/1+okIcIZfl1+CzpDC7ah6XGJ7rkyhvslGj7kz9XZaEuBy0K1MJwDf1O6mEPcjiQpGQWGzjw/VPWRw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HtQr+Y5ziVpwrOZc/shJYPPr8VuAC0TAvdtDs9X7vZY=;
 b=MhCLcSZUemYhPHzP2AGoZEAiYwqC1rHCUNwXqeDRum1NIMUEhkm9Dr0W+6aBEXx2e6nvGjNV4roP4WyTpiMC/gf4TTEAGWZKHLkHDDonQWNjbOvq1x4z5YnuSgBo6QWBVXsyIXNQGxlDNmp+jYKiQMHTFkpZIPMGeJrP0AMnJ+w=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 11/12] xen/arm: add sve property for dom0less domUs
Thread-Topic: [PATCH v5 11/12] xen/arm: add sve property for dom0less domUs
Thread-Index: AQHZbSQwrRQ9/UblkkiINOEljpHZc68xD8SA
Date: Tue, 18 Apr 2023 12:55:28 +0000
Message-ID: <5DF67AEC-54FC-4742-8377-995AFB390EFD@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-12-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-12-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|DB9PR08MB8203:EE_|DBAEUR03FT038:EE_|AS1PR08MB7475:EE_
X-MS-Office365-Filtering-Correlation-Id: e31d44c3-376e-42a2-c29a-08db400c373e
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Z70khT5G/fvTLv7IrhlQNnjEKC/oVPtp+gsXlG/72oNcU3v6PuaByc+XmQs61JMhtZFSmBt8A3fF2ijesqpbaXcF2aZGc/eGzSinOhhhdGaJq62tYOZG/JxXSEhupBZVpYEX+oCjzo8C03XWjL3qTPTBZTgo09W75uT/7Ew8aRXB3W16Lbk3uRWy1BEzn2xlSTHPNvhKTlszLi9s29eboAGjJ9Pk8GYuPeKUzz679J3XOXbLlzTgoDE3BJuFl+Ha28CQVJcEZtHLfvx/E2HKVVyeKx08CVG3Tvtiqxorj0tVDwtQCa/9gWcb2mUv0Uzo4hkGD13V63jMx0WplGCJ1ABKeZtFWKcfmi0J2JSX+3FXJWLOhMkr/5sYBUdyHfVFz3iiSU+XFsdvu+Mufp8Vq6aMYjChKQNTiGVwKUhPUd8hTRwMEIzMKykGSxyaVW4tTvodnyZ0XGTTD1zUrCNFiO893rMF+bO8PGOQn304T4CW1jpthG633xrfUr0lL9IJn66daDwOwEE8z0rz+CitC2aBHVw8X6KmVcewKnOeM0mMxuHkwS+LkvbQYjcNAMTtc+0nu/JvTlm7XRwzsASiLSUBUfyQOUyERA4dERxIiu2sK3fVHdr4k09uBc6atRhmzsXyhhFF+D5MO5DZhoFY+nSYv8vPyI6SMA9iWHMotCA=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(366004)(136003)(376002)(346002)(451199021)(37006003)(478600001)(71200400001)(38100700002)(8936002)(6862004)(8676002)(316002)(41300700001)(91956017)(4326008)(6636002)(76116006)(66446008)(66556008)(66946007)(54906003)(122000001)(66476007)(186003)(2906002)(53546011)(38070700005)(6512007)(64756008)(6506007)(86362001)(83380400001)(36756003)(33656002)(2616005)(5660300002)(6486002)(32563001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9425F9668954674AA42CD34FC095D1A7@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB8203
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	de68a510-65a2-40bd-7180-08db400c2f4f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7+tuVC6Eh/3UXApNM1/3Sik6Jfu76v9NIsgbrlB9aOf7NbzLHY5Kxc1v65JQK6FUUY9Z1wXV9B/Dy2s0jGhZZqJ3emfX8z9E6JoY8q+PjG1mTLvPODUkfxuDuAMU/m9SeKMHJ5tI5ALewHkUo4VNMfbFSpE/PlYgBIndu/edV283Qaj6toKI4Hb5dunCvOD/wbeKKpZ3dTVNaqgw6G8dxq90n7udIRVuviwscEFxl9eMEuTjkbQ9Ms04bWmhgBlLYEWYw1QfXkH5YQFJCw6LedVT6YSFA35CO39vercD91U8s1QtkTzORkaxNXrcuscUeETHZnE2pofRXLXO1tEnm7QmCqKoTMK5b9OrVCeeEfiVBvHpU671Wa/Zu3U/NHmn0j02xNxLWVa+FaaSt2oc2tikI9/ri5J2yQGXjRxOTwo9wvW10ZcDXt1hP7npc5Tlg1gLBnn/3OvV0j9x0TZZWiudan++baEj24haIfHH46Du5dmQDyqpReTmq+NihQIoawBcLXk9KF/FG1F+TQ2sVh/QS1Air/F3k3yh65faGqoA9zIsVJLOH5Ig2lJZllQvhzE4fsS2k/QzKIAeib4HQkn/vvNGTt/v379PuKElOieLpqvbpHMXixVP1HlpET2JDj4wVkoCZ/8oRsIDC6/6yoRpKgVFVeSdsDR3iZgGwLdJYH0Ng37NpUDamXZS9jx42ITtwUxqt56+2Y5KKWwOaUq6tdFcslwYp/45z3FCDjo=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(396003)(39860400002)(376002)(451199021)(36840700001)(40470700004)(46966006)(6486002)(478600001)(86362001)(36860700001)(36756003)(47076005)(2616005)(336012)(33656002)(83380400001)(40480700001)(107886003)(186003)(6506007)(6512007)(26005)(53546011)(40460700003)(82740400003)(82310400005)(81166007)(356005)(316002)(4326008)(70206006)(70586007)(2906002)(8676002)(8936002)(5660300002)(6862004)(41300700001)(37006003)(54906003)(6636002)(32563001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 12:55:41.3546
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e31d44c3-376e-42a2-c29a-08db400c373e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR08MB7475

Hi Luca,

> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> Add a device tree property in the dom0less domU configuration
> to enable the guest to use SVE.
>
> Update documentation.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> Changes from v4:
> - Now it is possible to specify the property "sve" for dom0less
>   device tree node without any value, that means the platform
>   supported VL will be used.
> Changes from v3:
> - Now domainconfig_encode_vl is named sve_encode_vl
> Changes from v2:
> - xen_domctl_createdomain field name has changed into sve_vl
>   and its value is the VL/128, use domainconfig_encode_vl
>   to encode a plain VL in bits.
> Changes from v1:
> - No changes
> Changes from RFC:
> - Changed documentation
> ---
> docs/misc/arm/device-tree/booting.txt | 11 +++++++++++
> xen/arch/arm/domain_build.c           | 24 ++++++++++++++++++++++++
> 2 files changed, 35 insertions(+)
>
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index 3879340b5e0a..f9d2ecdda48a 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -193,6 +193,17 @@ with the following properties:
>     Optional. Handle to a xen,cpupool device tree node that identifies the
>     cpupool where the guest will be started at boot.
>
> +- sve
> +
> +    Optional. A number that, when above 0, enables SVE for this guest and sets
> +    its maximum SVE vector length. The default value is 0, that means this
> +    guest is not allowed to use SVE, the maximum value allowed is 2048, any
> +    other value must be multiple of 128.
> +    Please note that if the platform supports a lower value of bits, then the
> +    domain creation will fail.
> +    Specifying this property with no value, means that the SVE vector length
> +    will be set equal to the maximum vector length supported by the platform.
> +
> - xen,enhanced
>
>     A string property. Possible property values are:
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 3f30ef5c37b6..c1f0d1d78431 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -4004,6 +4004,30 @@ void __init create_domUs(void)
>             d_cfg.max_maptrack_frames = val;
>         }
>
> +        if ( dt_get_property(node, "sve", &val) )
> +        {
> +            unsigned int sve_vl_bits;
> +
> +            if ( !val )
> +            {
> +                /* Property found with no value, means max HW VL supported */
> +                rc = sve_sanitize_vl_param(-1, &sve_vl_bits);
> +            }
> +            else
> +            {
> +                if ( dt_property_read_u32(node, "sve", &val) )
> +                    rc = sve_sanitize_vl_param(val, &sve_vl_bits);
> +                else
> +                    panic("Error reading 'sve' property");
> +            }
> +
> +            if ( !rc )
> +                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
> +            else
> +                printk(XENLOG_WARNING
> +                       "SVE vector length error, disable feature for Dom0less DomU\n");
> +        }
> +
>         /*
>          * The variable max_init_domid is initialized with zero, so here it's
>          * very important to use the pre-increment operator to call
> --
> 2.34.1
>



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 12:57:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 12:57:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522843.812473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poktU-0004n9-Rr; Tue, 18 Apr 2023 12:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522843.812473; Tue, 18 Apr 2023 12:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poktU-0004n2-Oz; Tue, 18 Apr 2023 12:57:12 +0000
Received: by outflank-mailman (input) for mailman id 522843;
 Tue, 18 Apr 2023 12:57:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BdQL=AJ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1poktU-0004mu-1m
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 12:57:12 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2061d.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::61d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 886efcef-dde8-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 14:57:11 +0200 (CEST)
Received: from AS9PR06CA0297.eurprd06.prod.outlook.com (2603:10a6:20b:45a::35)
 by PA4PR08MB7458.eurprd08.prod.outlook.com (2603:10a6:102:2a5::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 12:57:09 +0000
Received: from AM7EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:45a:cafe::a) by AS9PR06CA0297.outlook.office365.com
 (2603:10a6:20b:45a::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Tue, 18 Apr 2023 12:57:09 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT004.mail.protection.outlook.com (100.127.140.210) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.21 via Frontend Transport; Tue, 18 Apr 2023 12:57:08 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Tue, 18 Apr 2023 12:57:08 +0000
Received: from 456185edfabf.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 82ABA9C7-6FD8-4569-B3D3-5DB79EA50F2F.1; 
 Tue, 18 Apr 2023 12:56:57 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 456185edfabf.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 Apr 2023 12:56:57 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DB9PR08MB7889.eurprd08.prod.outlook.com (2603:10a6:10:39c::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 12:56:54 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 12:56:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 886efcef-dde8-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SOuwxyJBtLtaiSrci+hIe2SzH93EPi6tYpi44x7/Uuc=;
 b=bThBtTdrr9icMiQD4l4TBNqzHsXslnZ8SVolur9lxMuLS42VYS710UQOQuNOdUoPc9o+4X1j71lWdz//X9kyR4QiRbKhDTkkpy7TS0RFDso+G4vZk3bu0Vjmcyay0O/oN99ZGs4J+zUfz6EG6bGOgvu5zGzK6gmDeWbgJ0yrC00=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: c48f50a5554029e8
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kCv3YF5nxtjWS59i3TE1ylOYZqQUi/uvPRkjGxjYJFu6X6CkgTIxS4kIUKOMAsGRRAljXM6J3TyIN1ActRfbHfvO+bnrS+qvC2PlB3hc8kwP6/FKk4uFCoNUvlwLp5MP2Ts0Nma59z4lyarQVLaRdg3eMv7vHUt+u248ddYrTVG7LD3zhy/53fBYXAv23rPB6+ngUskuX5SD15mhtOBPxRlXrp99OPtbX1mrYD0pEq6qktQ59mKkNl5OsoRf851NN8vs84PhhsuWokV+4CInS/JtaX1TlT5RwzMzzfCu4xpcx100X2q43punkiP16PMeWPA9/1CtHwsx45WjzIx9nQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SOuwxyJBtLtaiSrci+hIe2SzH93EPi6tYpi44x7/Uuc=;
 b=Oq82pTD/lKwLD4kGvBeptam+4cVB3infL7lr7sM9PsDjy+AXUyOZ2eQXkM8g3iDwWSkTPNZlXUeUnnh9Fqhh5iq8LrgNhKRTpzugPxMXUVoyMG4GrkI7jEDN0Tii7jXdtuFEutDHqM3b/shtzKsI8gDCMjxgPnDXx2TR+xvXvrJNdaMNobbTmrQ2oLAhB0NZS4S7zu7BWrnkI9RMqPor50giI2pMPg0yjSBRuM7kZUsuvduSJZV9NHntPfcC359IyPtQ2FW5Bsrup3IEJra0uxJEgGZ1wSG0yuR7gOZlJ5f6/AmLCtk6+0dCxf3bMXZfXXPCvvJM7NIlEDExspNhkA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SOuwxyJBtLtaiSrci+hIe2SzH93EPi6tYpi44x7/Uuc=;
 b=bThBtTdrr9icMiQD4l4TBNqzHsXslnZ8SVolur9lxMuLS42VYS710UQOQuNOdUoPc9o+4X1j71lWdz//X9kyR4QiRbKhDTkkpy7TS0RFDso+G4vZk3bu0Vjmcyay0O/oN99ZGs4J+zUfz6EG6bGOgvu5zGzK6gmDeWbgJ0yrC00=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Henry Wang <Henry.Wang@arm.com>, Community Manager
	<community.manager@xenproject.org>
Subject: Re: [PATCH v5 12/12] xen/changelog: Add SVE and "dom0" options to the
 changelog for Arm
Thread-Topic: [PATCH v5 12/12] xen/changelog: Add SVE and "dom0" options to
 the changelog for Arm
Thread-Index: AQHZbSQvB461HU2dmk2buv9fQooVPa8xECsA
Date: Tue, 18 Apr 2023 12:56:54 +0000
Message-ID: <20BBA7B1-AF5B-49D0-BA79-17B90893C3CC@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-13-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-13-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|DB9PR08MB7889:EE_|AM7EUR03FT004:EE_|PA4PR08MB7458:EE_
X-MS-Office365-Filtering-Correlation-Id: 0575205c-dc08-427f-c37d-08db400c6b48
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 BsIAKZZ0p+96WtGrnMjP9VGNzoUO9b0CJoCDhn4OlAjh5RDyR2Iwf/9WPz7LWPNklQDcFU+JoRsIGXXKVznoOd0RN3g4cRKNXGH8y3hgb8jq05w8xCG811Ou92TywgvZ31EAhsfc71OYapfpq3+CdlzaSKTlQbRWAB0ulOAytSCgOQIoV5vvIfwy+P6ibI6+/dmRqJ15+pm7OoZI+MFJ/43cC7jrnobz5c6ltwgWqBrRF/e/QSKMmZcAxn7tLIohC2kRuzg22ct/oiLl6cn9DHIYbx1sKvYUDX0MtFslKFXk/Oy4R+2TVK2bpXxyiQeK5RijVjyNYlUU5MV8xSGZvRHgv5/MzCHJ7v98AzrwukVzpPheXiAkmxBVUlyxMYJ3TYtR2jw2VfQZxDQapBRHziQvBKPvgbawTZ9E01nvhStqRKNECNr8jUgswo7IsErzdvqtxw0kaVR4+YKLvL2yw2Y28KuLcXd38xuAi5RcqxJrUzVWXB/DNMFH44yH+17viHmMZ633quRug/zswSkUYCFQDOsbV6a5fE23mMollueOYnQhbladce8Xv7kMx0DgKojpAPeDi5KkbNf+WnyHQSeyWBUzD3ma5E3Cs56Z809xs7C1i2WJyeuTB+umjvHgEcj9bGDDciPQwINlf0WmHysKQT/nvL0G4p6U832O1LbTdMNj5RIKKs631AzGgEXmhJ4Ys4t+iomV5HbY9x4PJ53zIlS3XWEOBA8uDntxTy0=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(396003)(346002)(39860400002)(366004)(451199021)(4326008)(37006003)(54906003)(6636002)(316002)(66446008)(91956017)(66556008)(66476007)(64756008)(76116006)(66946007)(478600001)(71200400001)(6486002)(8936002)(5660300002)(41300700001)(6862004)(8676002)(2906002)(4001150100001)(122000001)(86362001)(38070700005)(36756003)(33656002)(38100700002)(2616005)(6506007)(6512007)(186003)(53546011)(83380400001)(219803003)(207903002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <0EFA6AB05C93F540BA7543068B1685F7@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7889
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	57732175-04b4-497c-9fc0-08db400c6308
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VcGYJHGDn0+D7sYazDIcb5k25XYyR5pkQP3wbde9onV7D4mIVGhawmDYDS+4uKzIqFk8A07YNuiZk3RGS/m3qXKHnTeCf86dRauav7hpBMXUrV/9w4ulZQvLaMZ1gmEvJNv4QKYMV3LuPPCGsDQMRU1du3BJ0rvHtfuPjog1RczyP18gVXjKVukLKIbqWmIn8whkTHdXn+JwSSd7+giMG3SVrMpev5em2Eboju+PAoA1Pnoatj1MrOnWt8Hk4O0lXF5f/g9CssvOSX25outkBeKtGUxgVEiEp1pNBvytbx/QlTTLYB9gJzLSAuZH+wTdVl0EoDwJ/+v3ibU6hVd5Yb4NcZBSslE1+2oHwra6eJP327lDzLQZMGk2/se1AMXfjYtllvM0qT+YVNM/J3kG/RkuYyP/nVebiHhDFX140cQeATWl/wp/4/f7FtrmWAPcUAlNz8mI6m3iqND9M/h9PKU4rCSrAOWdgi/B42eDXClCx+okZ8zm6gzTc3UzpS4kS2BBHs4daRWKKq3ra/mWCGEN6wwNmtSLXmjdPfUXv1WvpDUlfZ+DYJBn3eLHWBkmTBBCoQ4n16uO3r2ehLc91ASlSVcs6WaCP79Lw/DsXFppPkmx9YNryYp4iqdY1KDqK2qfUOxwy6ynpjgBO2SwBVLk5e8xDZoHmChnX2I7UPnGRcGrmDiYz8VYHVENqfhXffOAOd87fHe6huR5j/1bneoYlPJ+uYZh+LQmEUWenMdqj0zXU024lVdDmQoKM3U4hpNpdSiMQOeyvOLS7n8+e3DjWGuX9NbJ8AT279iSKt4=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(136003)(376002)(346002)(451199021)(46966006)(36840700001)(40470700004)(37006003)(478600001)(8936002)(6862004)(8676002)(316002)(41300700001)(82740400003)(4326008)(6636002)(70586007)(70206006)(40480700001)(81166007)(54906003)(356005)(40460700003)(186003)(4001150100001)(2906002)(53546011)(6512007)(336012)(26005)(6506007)(86362001)(83380400001)(36756003)(47076005)(82310400005)(33656002)(2616005)(36860700001)(5660300002)(6486002)(219803003)(207903002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 12:57:08.6148
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0575205c-dc08-427f-c37d-08db400c6b48
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB7458

Hi Luca,

> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> Arm now can use the "dom0=" Xen command line option and the support
> for guests running SVE instructions is added, put entries in the
> changelog.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes from v4:
> - No changes
> Change from v3:
> - new patch
> ---
> CHANGELOG.md | 5 +++++
> 1 file changed, 5 insertions(+)
> 
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index c978cfd9b68f..a24951603359 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -6,6 +6,10 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
> 
> ## [unstable UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=staging) - TBD
> 
> +### Changed
> +- The "dom0" option is now supported on Arm and "sve=" sub-option can be used
> +  to enable dom0 guest to use SVE/SVE2 instructions.
> +
> ### Added
>  - On x86, support for features new in Intel Sapphire Rapids CPUs:
>    - PKS (Protection Key Supervisor) available to HVM/PVH guests.
> @@ -14,6 +18,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>    - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
>      wide impact of a guest misusing atomic instructions.
>  - xl/libxl can customize SMBIOS strings for HVM guests.
> + - On Arm, Xen supports guests running SVE/SVE2 instructions.

Might be a good idea to mention that this is a tech preview?

Cheers
Bertrand

> 
> ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
> 
> -- 
> 2.34.1
> 



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 13:06:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 13:06:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522850.812483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pol2q-0006KY-OU; Tue, 18 Apr 2023 13:06:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522850.812483; Tue, 18 Apr 2023 13:06:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pol2q-0006KR-Lg; Tue, 18 Apr 2023 13:06:52 +0000
Received: by outflank-mailman (input) for mailman id 522850;
 Tue, 18 Apr 2023 13:06:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EUq5=AJ=citrix.com=prvs=465f4c9e2=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pol2p-0006KL-0Z
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 13:06:51 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dfb503af-dde9-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 15:06:48 +0200 (CEST)
Received: from mail-bn8nam04lp2046.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 09:06:35 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DS7PR03MB5528.namprd03.prod.outlook.com (2603:10b6:5:2c5::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 13:06:33 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 13:06:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfb503af-dde9-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681823208;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=vfbdOvdBTQ1QyuohBhSl2KUhIhJ8XGo8bJ9EKlH/nXk=;
  b=MbEFLVSL0N2e19cFA5vXEdKUN5EPRoNW7Pq1HvPwJzGmqe1NOR9xqSg5
   kgw48ijdEpVEGn5X8BSP2DYsC/9OpPEXxH9A/lS4IskO+gmpqq/YvQqrR
   jFB/Xc23RA7RM3laoWJZMbSDCfDpZ/Mvm90Ov6WhgdDpZF+XnkHFaQNo6
   A=;
X-IronPort-RemoteIP: 104.47.74.46
X-IronPort-MID: 105853709
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:1TBGiKlvJMt4pO5FFbLXgxfo5gynJ0RdPkR7XQ2eYbSJt1+Wr1Gzt
 xJNWj3XM/mPMGagL9wga4Wy90JXusfSyYdjHgNqqS81FCMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icfHgqH2eIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7aWaVA8w5ARkPqgX5Q+GzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 cYFMnMRVCiuvfro5LiUdOZLh5hzAvC+aevzulk4pd3YJdAPZMifBo/stZpf1jp2gd1SF/HDY
 cZfcSBocBnLfxxIPBEQFY46m+CrwHL4dlW0qnrM/fZxvzeVk1A3jOaF3Nn9I7RmQe1PmUmVv
 CTe9nnRCRAGLt2PjzGC9xpAg8eWxX6rBdlNTOzQGvhCiUyxzU8IGh8sURiEm+KHgR+SSt8cE
 hlBksYphe1onKCxdfH/VRClpH+PvjYHRsFdVeY97Wml1a788wufQG8eQVZpeNEg8cM7WzEu/
 luIhM/yQyxitqWPTnCQ/avSqim9URX5NkcHbC4ACAEDs9/qpdlvigqVFoo9VqmoktfyBDf8h
 SiQqzQzjKkSishN0Lin+VfAgHSnoZ2hohMJ2zg7l1mNtmtRDLNJraTzsDA3Md4owF6lc2S8
IronPort-HdrOrdr: A9a23:cjYpn64geWZ1Dte8xAPXwPPXdLJyesId70hD6qkRc20tTiX8ra
 uTdZsgpHjJYVoqKRIdcKm7WJVoIkmsk6Kdg7N9AV7KZmCP0ldASrsSj7cKqAeQfxEWmNQtsJ
 uIRJITNDQgNzlHZZeT2meF+4hJ+ra6zJw=
X-Talos-CUID: =?us-ascii?q?9a23=3AoXBXuGjQq90j4huhyz6jSyLR+zJuSGDHj2vqG2K?=
 =?us-ascii?q?CEThNQpKrRniRpPNKqp87?=
X-Talos-MUID: 9a23:BsNuMwSNvM0WALvlRXTllRNpF9x64Z6qS383iqsN6uWVOG9JbmI=
X-IronPort-AV: E=Sophos;i="5.99,207,1677560400"; 
   d="scan'208";a="105853709"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PRVtbJuaJp9/S2y/nmRHpvyIfnI1R5DFgzahlILFVX+6JQa40mXUNVj50XvOhwrIqmKCM3nsjObLr0qSbtx4ApZpkuLCDKyqpHa+J6chaz8B4Q10j3oFyG+2a/yGOEAB3xOKULIdfTyqx9wGOAu6Z/prtjlkwfZXZXsas25dWPlZVdFly+eDR4Z82BYHw/3eDcyUuDdl4smVkb4fWKS59+R4P+OTClegiHopY4nnKrRWDRwp9TPZ7TY9K9XOspq5Hspdt+rJ0NpGt7hYKDP+yM6fjvci6n51YaUGVCXLpOyrR+J5/FJ30oOc18HJNgI1FMrFN6qfRgw4I6sgfR4+mA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JKRKTIdcqwDDA0HCTO2pA69812HWzmrY+qeBp1Pmt4U=;
 b=c1wtAqCJlLfQ7PSvfsxumzjy1sva1azmhUUcWQo0uad9O/C6rTZynTwOWj6aU6EmAi8ELxEqUS32QSwylBIM3JgxRzQvZoUjrKG6Z1grQW59RgdV60CxzcvV7pE147MbmZeCBk8seISsxwM3EnOR0g5a7f8TbT63nP81dOMNK0/JaiLG7KSPeo0YjeYESYrK+1153QqCaO06wUPBRVCFe9dGCL30xuVf6YeP8W+VlBhwytSizk9w8y6n8WRYrb4mfFJocUO+KMmlTQT5WzIT48eoCta+RUQpxB+Jq9AEV3lLgyEhyafQOEyXOsTcLwjko2TNMdzhocA2x/LsWhPK2Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JKRKTIdcqwDDA0HCTO2pA69812HWzmrY+qeBp1Pmt4U=;
 b=sSB47TCJo6Lx0/pk4CM+/uRkBvd5xVzHAdDGTctsWv/u7mHgjNAaXP0QM85Cew8kzhQDcrhm4G5z+MTEpXH2MKIkK6+yl2aZbVgyLrtozt0vM1zX65sQDD3WdRSwh+b42CdM5CEdj3r32Tl6euPCt4MAlHp/mJTNCf6nRNvHkv0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 18 Apr 2023 15:06:27 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Message-ID: <ZD6V0wzw/VS/MMw/@Air-de-Roger>
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
X-ClientProxiedBy: LO4P123CA0552.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:319::14) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DS7PR03MB5528:EE_
X-MS-Office365-Filtering-Correlation-Id: 6ec97f29-5e48-4750-4234-08db400dbb62
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Xkx5A6f77MxAOwXtX39keVEHOC3dZ78bCVXeVwJdXJYyJTR114XbgAeV6VVXbQOHC95K/XkGRNBMNCrpQ24QGf/St94C+Cf838FD/AbYa3BSThlyBY3vRodos4w+9zyLVT8E4hX61oSGQeyL+IErhgG/m/jj4Qgi622wewuPwomSspkGMFemyrX7u8WSlNVl3JsBSgNwbi7z3m+7whNEw3DtP+0RcB9Rr6vzS8gocUoKsQqjmjHSqZU7fMYIjFSM2aoCcl7y7qCKABIlTUAhkq2N83fcx4LOBsmRHXzKg6xqeD+DKgqrdYoOVET93D2N9kZ+waOkhCAusrdVEvoBkM/7aABhG9Zfi/djVsnjBIrI0jVN9YCmAgWc37SXhqYhNrk9WcrS/dV6tI8NLCpMspfA58I48+768f/GBwl2m+cQvM8EuHXuSaC0bGmhARnguMvwZ+ri4TnityzoavqwpOfyADhby1RqAkLfwrLLzpYiKgaC78vlALq+X9e/l+WWDN59xnB3MzxtQt3Z+M9UvybwWXeegRm0/sSGtVKz8ibq0HZevuQGuSUbgda3Opqa
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(7916004)(346002)(136003)(366004)(376002)(39860400002)(396003)(451199021)(66899021)(38100700002)(8936002)(8676002)(5660300002)(2906002)(85182001)(33716001)(86362001)(478600001)(6486002)(6666004)(54906003)(186003)(9686003)(6512007)(66946007)(53546011)(6506007)(66476007)(26005)(41300700001)(82960400001)(316002)(83380400001)(6916009)(4326008)(66556008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Mm0rOCsyZnIveHFyMHduOGdEcG5RY3g1cjlZSnc0WWZhMUQ3ZWsvakp4SG9l?=
 =?utf-8?B?L0RMenNmS1psbXdSbkI0dTY3OXpIbXRaNVI0UC90RzVQeUQyREdob2ZnR25Y?=
 =?utf-8?B?R0Y3NjN6SEJwRW8zb0JKZ29UVS96c1dpd2FyOG5JMVQ4ZCszbTJXMkNEUDl5?=
 =?utf-8?B?alh0OGE2SHFsR2ZFYkgwYmhZK0drTGJ4V0FsR1pvWExXcS8wNWp6WkhtOHRo?=
 =?utf-8?B?WVBkSnNzVXYzRUJJd3c3Z3JoMm12WE9qaWl6RHFvSko0bXBuYmJERk9VRHNs?=
 =?utf-8?B?Wkg1eUVFMDdIbXFyZDdPTGpZaUpaTWJLZE9pcUgrYXhSaXI5eTl2WUVsa205?=
 =?utf-8?B?QkgrTDdqSW5aUlluTXg2Mi9IeDVhMDFTMDlLNXl6bXE1R3A5cjRGSWJhNXFU?=
 =?utf-8?B?bmwyS0RZQlUweSt2d2R6NXFoOVZyZHJiSmJLSTVhbGZ6eFBPNFNuWVdoVTFV?=
 =?utf-8?B?MU42MlZiektqRXpKWCswSllsL2QrQ3lIMUFOK05teW9TcEx1cVBHNGczSjJ5?=
 =?utf-8?B?MzdQS1J4WkcyRE55VGRqajkvWjk0QjlrbXd4cXYxdDgwekxvT09Mam5rY090?=
 =?utf-8?B?dVBIU1gxSXdjaldhK3FPcVBhZDNwdHFkcUJVU0w4enMzZXp4aEV2VDJQWXhP?=
 =?utf-8?B?U05jUGszZW9lUytadEpGMG1lOGhRTGxCU0RsbFU4SHJMWDdSeTZlM0ViQnFU?=
 =?utf-8?B?TEdzUk9iRndJYjc1cjVkbTNLZld1Qi9CNkxwSDByZmJMZTlVUnFNbjRmTitx?=
 =?utf-8?B?N3N3QkVRMHJjS3hZRWh0VHphZk1SZFlyZkEzM1pQWDVjQmllS1BOQ0xaNjBD?=
 =?utf-8?B?SU5OTFlHdVRUM1IvYVZaSEVkU1V6Q3g0RTdNR01pNGlOSGpreWVMa2ZwQjBH?=
 =?utf-8?B?WHEyOEFRYzZkOXNxR1VJWU1QYWcwckQwdktnS0dJTlYwTGwwcUhhcjRSbk5p?=
 =?utf-8?B?UGh1WkF3OUNBalYxZm9KRTZYcDEraDdXQ3FzZExsWVZ2R0loTG1BbFc5VTMv?=
 =?utf-8?B?eFRJY0hvOE11Z21sNERlOU1MZnBJUkNxUHN3SkM1enowU0pLOUJoQnlIWlht?=
 =?utf-8?B?M0Q1dVZBM3NKdDBYQ3dmd1pMWld6SUhqd1JxeGFjTHVsVFdHTEsyZXFVdzlS?=
 =?utf-8?B?WDk3alVFWjZvdjZxZkdkMEtGNVI5K0kyRDJjeG5kamUzU3dJQWVwaHBnN2hR?=
 =?utf-8?B?UmQyV0ovd0dPQjZhUG1IWTdrQW1YbWhXWWZzVUhVMlFuZVR0OTFwNWdoemVD?=
 =?utf-8?B?eTYxYU4vOWhEL0lFOGNFM1R3YzkyU0tGd2N6L01POFpKZXFSNkJSc1VBWWxh?=
 =?utf-8?B?cjZYQW9ueVZXOG1LNGE2NDNZMWJaaUpxTFpsRUdUQndyV0EvRUxlbFVzQ1pR?=
 =?utf-8?B?cnp4SGdmVzVCc3dIU3RxMWpqRDJUODJxZ094alVjcGRIMmtiR2RoZjVnVVNm?=
 =?utf-8?B?RGVHUTdlb0RTb0lEMXJLRG9KRE43TnpNNmsxdVlKVG1TSGMwcDNBZEpHWm9i?=
 =?utf-8?B?b3ZOYTd6WGQ5OXRYTi9WbGRUUGZjamQvYXg3ZUlQejlYdzEydnRKeGFxSVY2?=
 =?utf-8?B?eXNVQTZiQmJ0SU1JRXY5b25sbVFFMCsrU0Z5Q2RETTBJQ1FUL2s3TWo5cWk1?=
 =?utf-8?B?QkFCKzhDNm40SzFoRmtqcFJGNGtPODJ5bXZ6a2grdXdWd3RnZEhVOGo5Kzd3?=
 =?utf-8?B?N1FQamNvdkNlc1hqaWJ5QTgwQVJxMXZza3lqQ3FBWVN0eEhwOHh6MHFPalcz?=
 =?utf-8?B?a01YUmVDY1JXUzFucjdrRnBiUUE1ak9UcXFMVmFVSGJwTEkzUkQ1Tm82WFY4?=
 =?utf-8?B?V1VyNk9sMFVUMU1VWklOcnFkWm42eDVMQS9pK2VicUJPWi9QbTU5NURLMFJ5?=
 =?utf-8?B?RVVrZUR2UjRtbzFCVnR0eFlNcVBWSm1GdTJlTCt3bXN6cmtNQlk5Z3A2R2NG?=
 =?utf-8?B?Z0tqZVRmRmlTcDRjNlFnbXk5T3FGTHpMaitUaGF0a3ZiSURWa1cvZTBGZWdS?=
 =?utf-8?B?V0dya3hKb3NwejY1Q1ZzekkwdXJoaHBMVEpYSWxXdmNxbnZBSy95WVQ1M253?=
 =?utf-8?B?TWxEWkN3bHRsNVFZdCtWUXFVeWJIczQzZU42V2EwOFFCQjBFZ2c4d1h6L2xx?=
 =?utf-8?Q?8S8LSSW8D/93sXa7ysiLqP5yd?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	YTQKA08kFwEw37aLVoo6Shroj92esiodwQKjflHR40zdnV0lcO6AjlFv6/SVDG5w18+RZslYiGp0QY5SbDEUlq/g749NCxF+YmOABmpial8KeF1LnbNxqDSi8QkgumdVPbPqLqqv4B1WBu7UgWIoysBc/qewZzzfrtvXB3eedqMCxh6dZaGMww400bC6zpvHDs6QP4gOa0c6LoS9SlX/ZYbdezapH+0tb+kfjYKiSB0ND+ze1a6pIBO04lVXObbW7Wm58bDJ+DsKRcOZ2L6c7uc+Kbo8JjM8BtTYEnsUBO/bL/VBGFYt5GSynxdsff9pA8K7dedGf8fb278r0k/xz8PqdYMRdUKvNqWcwcsWjxw0POHOTsGZM1LuZbJ4eejJrfH1TNzfvxAf+f7b54v6g0HI4n1lsuqZ+AQkhD+x1AxRwF/cWmV3/abEm62xVJdA91PAJVu31dRlcT0KVcQx7N5BG52bDKwYPhZNkYOhs4RW7eR8hdyDW9q4NSodi3jRHi6NlhGL3EmaMF1Z0LHD/Q84r0nf5KJU2e7m8GzXattt+25jb8L/42fXJCReTxdD+8AkbFVLPDPMVdWITWMKVu6YjiNdAYJN3VSRl6URTnSVqZ6Q4391UhBXB5bJgI8Ss84rIXxNHsV8tYdxludMFpxzGF8ddgp7syCv54Mhn+oDBqwQtiPxYfIW0+fTxl0n3p6InqOL8nqLRrePyr4eRLvN4hkCMrPAafjwuZvJSSQFeuxfQkNrPxOmu4FvsbKgmitYoxC7zGzCIBQU9ck9d3NDjoJYsQk28MvZg6MZfsImKts/4uRPtci8w3fITGpk
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6ec97f29-5e48-4750-4234-08db400dbb62
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 13:06:32.8948
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: nDsgRylornKOUy/LUKK7hx1RpH5aGxnyaoLz/RQm4sI838pNf5DE7bRmTiFwU2C5UWSVM1Rza3bjxP9z81Qw8Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5528

On Tue, Apr 18, 2023 at 01:00:53PM +0200, Jan Beulich wrote:
> On 18.04.2023 11:24, Roger Pau Monne wrote:
> > Some of the assembly entry points cannot be safely patched until it's
> > safe to use jmp, as livepatch can replace a whole block with a jmp to
> > a new address, and that won't be safe until speculative mitigations
> > have been applied.
> 
> Isn't the issue only with indirect JMP, whereas livepatch uses only
> direct ones?

Oh, I see, livepatch uses a relative JMP, so that's not affected.

> > --- a/xen/arch/x86/include/asm/config.h
> > +++ b/xen/arch/x86/include/asm/config.h
> > @@ -44,6 +44,20 @@
> >  /* Linkage for x86 */
> >  #ifdef __ASSEMBLY__
> >  #define ALIGN .align 16,0x90
> > +#ifdef CONFIG_LIVEPATCH
> > +#define START_LP(name)                          \
> > +  jmp name;                                     \
> > +  .pushsection .text.name, "ax", @progbits;     \
> 
> To what extent is livepatch susceptible to two .text.* sections of the same
> name? This can arise here and perhaps also for static C functions.

This is all fine, as long as they are not in the same translation
unit.  Livepatch creation operates against object files, so as long as
there are no section name clashes in a translation unit it should be
able to deal with it.

> > +  name:
> > +#define END_LP(name)                            \
> > +  .size name, . - name;                         \
> > +  .type name, @function;                        \
> > +  .popsection
> > +#else
> > +#define START_LP(name)                          \
> > +  name:
> > +#define END_LP(name)
> > +#endif
> >  #define ENTRY(name)                             \
> >    .globl name;                                  \
> >    ALIGN;                                        \
> 
> Do these really need to go into config.h, instead of e.g. asm_defns.h?

I've just put them next to the ENTRY() macros, but yes, I see no
reason not to move them to asm{_,-}defns.h.

> I'd prefer if stuff like this was moved out of here, rather than more
> things accumulating. (Perhaps these also would better be assembler
> macros, in which case asm-defns.h might be the place to put them, but I
> guess that's fairly much a matter of taste.)
> 
> Couldn't END_LP() set type and size unconditionally? (But see also
> below.)

I see, so that we could also use it for debug purposes.  I guess at
that point it might be better to use {START,END}_FUNC() to note that
the macros also have an effect beyond that of livepatching.

Maybe also introduce a START_ENTRY() that replaces ENTRY()?  Although I
find START_ENTRY a weird name.

> > --- a/xen/arch/x86/x86_64/entry.S
> > +++ b/xen/arch/x86/x86_64/entry.S
> > @@ -660,7 +660,7 @@ ENTRY(early_page_fault)
> >  
> >          ALIGN
> >  /* No special register assumptions. */
> > -restore_all_xen:
> > +START_LP(restore_all_xen)
> >          /*
> >           * Check whether we need to switch to the per-CPU page tables, in
> >           * case we return to late PV exit code (from an NMI or #MC).
> > @@ -677,6 +677,7 @@ UNLIKELY_END(exit_cr3)
> >  
> >          RESTORE_ALL adj=8
> >          iretq
> > +END_LP(restore_all_xen)
> 
> While I'm fine with this conversion, ...

So I take it that, overall, you would agree to adding this extra
information using a pair of macros similar to the proposed ones.

> >  ENTRY(common_interrupt)
> >          ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
> > @@ -687,6 +688,7 @@ ENTRY(common_interrupt)
> >          SPEC_CTRL_ENTRY_FROM_INTR /* Req: %rsp=regs, %r14=end, %rdx=0, Clob: acd */
> >          /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
> >  
> > +START_LP(common_interrupt_lp)
> >          mov   STACK_CPUINFO_FIELD(xen_cr3)(%r14), %rcx
> >          mov   STACK_CPUINFO_FIELD(use_pv_cr3)(%r14), %bl
> >          mov   %rcx, %r15
> > @@ -707,6 +709,7 @@ ENTRY(common_interrupt)
> >          mov   %r15, STACK_CPUINFO_FIELD(xen_cr3)(%r14)
> >          mov   %bl, STACK_CPUINFO_FIELD(use_pv_cr3)(%r14)
> >          jmp ret_from_intr
> > +END_LP(common_interrupt_lp)
> 
> ... this one's odd, as it doesn't cover the entire "function". How would
> you envision we sensibly add ELF metadata also for common_interrupt?

That was done so as to avoid patching the first part of the function
that applies the speculative mitigations, but since the jmp introduced
by livepatch is a relative one, we are safe and could indeed patch the
whole function.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 13:13:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 13:13:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522858.812492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pol9T-0007pH-Jq; Tue, 18 Apr 2023 13:13:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522858.812492; Tue, 18 Apr 2023 13:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pol9T-0007pA-H7; Tue, 18 Apr 2023 13:13:43 +0000
Received: by outflank-mailman (input) for mailman id 522858;
 Tue, 18 Apr 2023 13:13:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pol9S-0007p0-6E; Tue, 18 Apr 2023 13:13:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pol9S-00053V-00; Tue, 18 Apr 2023 13:13:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pol9R-0007Y8-0r; Tue, 18 Apr 2023 13:13:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pol9R-0002ku-0N; Tue, 18 Apr 2023 13:13:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B07jrrMo2lsXLSd4qwCA2jENTs7b1yr1hOlPiRZCpxg=; b=lJnGm5uk3nV5OPqw+J0EB5//Vi
	fbsbEXKr5TT5Su9sbELXQR5KY5qX424qkVnG54a2j1f5wBXwPL1EFqJJG8yCPKgSp63uEaBQnBsN+
	eFn2jBU7TGYKmNZ/2NDWpl93YDb8ArAwPCM95oAjSVRYkH7J4KGxxqup4d4sTq1dOhpQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180292-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180292: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6a8f57ae2eb07ab39a6f0ccad60c760743051026
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Apr 2023 13:13:41 +0000

flight 180292 linux-linus real [real]
flight 180298 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180292/
http://logs.test-lab.xenproject.org/osstest/logs/180298/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6a8f57ae2eb07ab39a6f0ccad60c760743051026
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    1 days
Testing same since   180281  2023-04-17 06:24:36 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Torvalds <torvalds@linux-foundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6a8f57ae2eb07ab39a6f0ccad60c760743051026
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Apr 16 15:23:53 2023 -0700

    Linux 6.3-rc7


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 13:13:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 13:13:58 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	Roger Pau Monné <roger.pau@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Anthony PERARD <anthony.perard@citrix.com>, Juergen
 Gross <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Marek Marczykowski-Górecki
	<marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>, Community
 Manager <community.manager@xenproject.org>
Subject: Re: [PATCH v5 00/12] SVE feature for arm guests
Thread-Topic: [PATCH v5 00/12] SVE feature for arm guests
Thread-Index: AQHZbSQqKlfgrZX3xkqta7+H+1gq3K8xFMoA
Date: Tue, 18 Apr 2023 13:13:26 +0000
Message-ID: <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
In-Reply-To: <20230412094938.2693890-1-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi Luca,

On this series I would like to open a discussion on how to handle the vector
size and the corresponding command line / configuration / device tree
parameters.

In general the user must either give the vector size it wants or have a way to
simply request the maximum supported size.

In the current implementation, if a size bigger than the supported one is
provided:
- we silently disable SVE for dom0
- we silently disable SVE for dom0less
- we do not create the guest when done through the tools

This is not completely coherent, and I think we should aim for coherent
behaviour unless we have arguments for the current status.

Is there any good reason to silently disable for dom0 and dom0less only?

I see some possible solutions here:

- modify the parameter behaviour to use the supported size if the parameter is
bigger than it. This would at least keep SVE enabled if a VM depends on it, and
could simplify some of the handling: passing 2048 would select the maximum
supported size.

- coherently stop if the parameter value is not supported (including if SVE
itself is not supported).

- always disable SVE if the parameter value is not supported.

To be honest I am not quite sure which solution is best, but I am not happy
with the different kinds of behaviour we have right now.

What are your thoughts?

Regards
Bertrand



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 13:16:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 13:16:19 +0000
From: Henry Wang <Henry.Wang@arm.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Community Manager <community.manager@xenproject.org>
Subject: RE: [PATCH] CHANGELOG: add gnttab_max_{maptrack_,}frames option
 changes
Thread-Topic: [PATCH] CHANGELOG: add gnttab_max_{maptrack_,}frames option
 changes
Thread-Index: AQHZcfQO8iHeqtg3REq5C5iSSt7pEa8xCqxw
Date: Tue, 18 Apr 2023 13:16:03 +0000
Message-ID:
 <AS8PR08MB79919D898B50404869AF4A3E929D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230418124748.17881-1-roger.pau@citrix.com>
In-Reply-To: <20230418124748.17881-1-roger.pau@citrix.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 19F81B9DDFB1B748919E88B71CDFCC5E.0
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|GV1PR08MB7874:EE_|DBAEUR03FT045:EE_|DU0PR08MB9461:EE_
X-MS-Office365-Filtering-Correlation-Id: 3caf9491-f163-4588-167f-08db400f1657
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 7tnzi6aoXzb8jlcqezVb+lUmLwSKZsr65P4+pikrltgcwmeI1k1KgCBHbht7nYdb+yruaF/1b55O9PGAnhsFWh1hTJQFzY81oAf/262cHNR+ugThRu/SB6cOSufGbKtwAMX01bXmy1+tpM65geYhCSdulQTOa/ZeRIiw3Cb9Aj7KHi2VdWYP9HgWYsRU5fDoHpYaDNfva8LlU6sjPEeEpooweH/cNxKfviwI2LhCdmm0b+BpAs4QJrcXQpGduCu1bw8EXlP0rRIe7Sqqv7v2Z6tEqiUHCDe5+z5bXM75OWhNLzWDDMBTUEKmFGGL028YrsopEK0iS5E1G4bH5VD+uWMPERYbYHOA95GnnmpU3l5VQ9zQi5t2KgFsZzdQqv2gL95VdMEPwzHKpIgJY1zp+Hv9/eF22B/hGeyROm84MWw81dc3m4woQEB3Begn22KGIXoUYEdhYXXLZnOSkAYS+0Yj3hXHMfb2VanJr+NWH7jAt2Jiol7lr+4uAflZkV+IhXfl0tMFAZz+qtQlH8CuhgxarBOobzbi7NRjfbJDiG9MGX0BvRU2lemLia80Uq5pxuaLgffaY4lkAqmY1t0QntVP2Na83E01tYDgmxOAJWJGM9oMAX3YyaB+z/JTLQGrSC+64eN+bhkL+tBjn2BASxC8Kcq3hvO+pgbTe3Tz3qYrpp+6/MWmkjFIsXZA6bOs
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(366004)(396003)(136003)(39860400002)(346002)(451199021)(33656002)(76116006)(4326008)(110136005)(316002)(66446008)(64756008)(66946007)(66556008)(66476007)(71200400001)(7696005)(478600001)(41300700001)(5660300002)(8676002)(55016003)(8936002)(2906002)(52536014)(38070700005)(86362001)(122000001)(38100700002)(6506007)(9686003)(26005)(186003)(83380400001)(59356011)(207903002)(219803003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7874
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT045.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8803d16b-776e-413c-b924-08db400f0ffe
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	63kJmLp8gIG1dFWWfpFnBtxUyXtfTuy7rNpcXGvAd+I5oqZFY6AOVa7ns1QJjrv9XgPE4XUaGQO/+1kCLp6jWL/mMi5Ex9/6aVtTbvl3DiDNQIYfAVLYP8bf6tLKcwHo8xyOGQQnJheaEI9sJ9yirIguKCsdOSkVmVJniR5HVpUFhzRAfrB8TepDhCh3wLqIdmLx6B2h4xNGyhxATAv/mgAxjZk9fyKsmX5wDJpAWN6NH3g3nmJ/NICiN49yX4Y750gtAJ4D6wG3IIybvS4zDXdcrRgRG4Hrk1nV5nDRN8TcaqVgS9Gi1/KvMj0Bb+ocLIWkpQASIY/YoidsIF5QKrjgaxmezMMUlqjd//EaCWJBNV/gWHftghn+mAlKMqwFxcoaVPixGdq051xwwPpkP532Ul0cNAu8EP3UKoGCjtcVWijQ5I+5pjrtlJrMFSTE7J84kq/zkrDiqPeLEKNW/dKeoUGiHfCTqyDXgJkfY6MGLNWcKmYBbMkSpb8kNbOAfbfeD9VvdYwBCdZrIibKt8jQP+W5EDBQ/OaSQzviyKbe2pH50goac8rt7g3O5ZBRTJHKfZrEEbUoO8C634l/2lcr3CHavzLn3pAIPrEP6FcILjkjA863Lxax/8GTY2DQKF6ZTR3P7VLlDuRn6/JqYww16lyTAFoSs4Irv0nGJxWN/sPg2/YBMcBvNt3ZC0PCEN7vGmheAtkGOTQVDJSc4Ki3RRRvmBjWYcL6eLpjoJ7n02nVvCE9AcGiJGxHdAZsrhNu4EQvgEEINeR8WMW6uA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(136003)(376002)(39860400002)(451199021)(36840700001)(40470700004)(46966006)(40480700001)(55016003)(40460700003)(70586007)(41300700001)(316002)(70206006)(4326008)(52536014)(81166007)(82740400003)(356005)(2906002)(8676002)(8936002)(5660300002)(336012)(83380400001)(6506007)(26005)(186003)(110136005)(9686003)(36860700001)(7696005)(478600001)(47076005)(86362001)(33656002)(82310400005)(59356011)(207903002)(219803003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 13:16:14.6394
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3caf9491-f163-4588-167f-08db400f1657
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT045.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9461

Hi Roger,

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Subject: [PATCH] CHANGELOG: add gnttab_max_{maptrack_,}frames option
> changes
>
> Note in the changelog that the purpose of
> gnttab_max_{maptrack_,}frames command line options has been changed.

Thanks for remembering this!

>
> Fixes: b2ea81d2b935 ('xen/grants: repurpose command line max options')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  CHANGELOG.md | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index c978cfd9b68f..2a7e62495104 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -14,6 +14,8 @@ The format is based on [Keep a
> Changelog](https://keepachangelog.com/en/1.0.0/)
>     - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
>       wide impact of a guest misusing atomic instructions.
>   - xl/libxl can customize SMBIOS strings for HVM guests.
> + - Repurpose command line gnttab_max_{maptrack_,}frames options so
> they don't
> +   cap toolstack provided values.

However, seeing the title and the "repurpose" here, may I please suggest
adding a "### Changed" section on top of the "### Added" section and
move the "gnttab_max_{maptrack_,}frames option changes" entry there?

I think this can be done on commit if you agree (and also the committer
would like to do the favor for us), so:

Acked-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
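For context, the restructuring Henry suggests would leave the top of CHANGELOG.md looking roughly like the sketch below. The entry wording is taken from Roger's patch; the exact placement is only a suggestion until it is applied on commit:

```markdown
### Changed
 - Repurpose command line gnttab_max_{maptrack_,}frames options so they don't
   cap toolstack provided values.

### Added
 - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
   wide impact of a guest misusing atomic instructions.
 - xl/libxl can customize SMBIOS strings for HVM guests.
```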


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 13:21:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 13:21:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522877.812522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polHJ-00022D-WE; Tue, 18 Apr 2023 13:21:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522877.812522; Tue, 18 Apr 2023 13:21:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polHJ-000226-TW; Tue, 18 Apr 2023 13:21:49 +0000
Received: by outflank-mailman (input) for mailman id 522877;
 Tue, 18 Apr 2023 13:21:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BdQL=AJ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1polHI-000220-KF
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 13:21:48 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on061e.outbound.protection.outlook.com
 [2a01:111:f400:fe02::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f874097d-ddeb-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 15:21:47 +0200 (CEST)
Received: from AM5PR0301CA0014.eurprd03.prod.outlook.com
 (2603:10a6:206:14::27) by AS2PR08MB9295.eurprd08.prod.outlook.com
 (2603:10a6:20b:599::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 13:21:45 +0000
Received: from AM7EUR03FT047.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:14:cafe::8c) by AM5PR0301CA0014.outlook.office365.com
 (2603:10a6:206:14::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Tue, 18 Apr 2023 13:21:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT047.mail.protection.outlook.com (100.127.140.69) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.21 via Frontend Transport; Tue, 18 Apr 2023 13:21:44 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Tue, 18 Apr 2023 13:21:44 +0000
Received: from 03b4f212f8aa.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 65F6F6F3-417C-4630-85B4-892963CB7D7E.1; 
 Tue, 18 Apr 2023 13:21:36 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 03b4f212f8aa.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 Apr 2023 13:21:36 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AM9PR08MB6115.eurprd08.prod.outlook.com (2603:10a6:20b:2df::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 13:21:31 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 13:21:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f874097d-ddeb-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H+Jl7JSPnutaULJ58g5w7OynAPuCNtNpSrzK8J4VLJo=;
 b=arfwS7BWFuTbIu9EgujLOBB4FBlwgrqIR0tsbgBWC52/lfT1EpvGirbvUQs5PJnHQVP79KHB4i+YXyO7ymU++45USPuKkiX0YV8VIfFE8XNVyu9ytN8nWo+z3ZzAqj/36PuTvWtrcDUp7No3NdMtrnqsLefWnABJ4AQcV5Jv5dI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0ce952031f4f546f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DwkQXWeCYeKtxw0C77FsKeqGT9WX9hX0Ebpsm+o13/U3zNJX6US20kUIZO2QbIGlcw+PQPwQ81ANNvTmZeHPnouBBPQnFHrSXVZnjAhr3zNdOcfPO+WLcLaZQCBA00CC+wqhrq0nbyU+K4FnfYDe0DFAQLT8nhHACMQ72D2JbyOe5yWduM4DzOz+kU/1eAFoBvcBBLONnM1CKkfud1DTnjMHDFzQZxRwlhYqoeHzDKXC+mrdvt0d2Soyi7ZpjlI8lY7ok+Wd6YCwW1+PyRHDrGHw+f+6zD8RkpXtisCZzXEQCQckJAaXQcnoGOFDK0eJ4jvp2tzx20OjN4v8Mc0TZQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=H+Jl7JSPnutaULJ58g5w7OynAPuCNtNpSrzK8J4VLJo=;
 b=JtoWJLd/fndbGibM3/PVNi+Btb6yODuARo0j1cCACCQqixJu0SeZFF9bFZSEjTGozGY5YqGVWp78vx5wrSJHT8icb10gxk8twZgpL/uq6j3ZRN0oUS9PjEhFxd6vPn5++2z7QFArYb6Ilt6IGFPIAJBNFkt+dXJxB4G+XGwiyXQXU2Gy7Enyd8HvwpuiLITNICiezaoTMKAL6nmfnU7HtRpXuhpbXqeQjymlipNmO2bP6G7PefCWQ4LFAMpM18T+sFDX3UsSSX4DYYqOFvTzlrCQ7Y+uh+Hl63r2+p9mJjKqO3tV/cd6Q3TNh0pkSWz6vFHa4EaBwKBbtCcGYAvKvg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H+Jl7JSPnutaULJ58g5w7OynAPuCNtNpSrzK8J4VLJo=;
 b=arfwS7BWFuTbIu9EgujLOBB4FBlwgrqIR0tsbgBWC52/lfT1EpvGirbvUQs5PJnHQVP79KHB4i+YXyO7ymU++45USPuKkiX0YV8VIfFE8XNVyu9ytN8nWo+z3ZzAqj/36PuTvWtrcDUp7No3NdMtrnqsLefWnABJ4AQcV5Jv5dI=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 11/12] xen/arm: add sve property for dom0less domUs
Thread-Topic: [PATCH v5 11/12] xen/arm: add sve property for dom0less domUs
Thread-Index: AQHZbSQwrRQ9/UblkkiINOEljpHZc68xD8SAgAAHRwA=
Date: Tue, 18 Apr 2023 13:21:31 +0000
Message-ID: <91C846EF-CA01-4D8B-8D02-51ABA35B26E7@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-12-luca.fancellu@arm.com>
 <5DF67AEC-54FC-4742-8377-995AFB390EFD@arm.com>
In-Reply-To: <5DF67AEC-54FC-4742-8377-995AFB390EFD@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AM9PR08MB6115:EE_|AM7EUR03FT047:EE_|AS2PR08MB9295:EE_
X-MS-Office365-Filtering-Correlation-Id: f5a1ae97-f384-485f-2500-08db400fdb40
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Zu0i1ZwH0DMmhKRzawn+cBz6gyqciKq/cXa8IG97Z73IeUx1mFODit7CodOp6YNozR2YSZLeCvoLPqA90v9kBtnanoiQ3B+zc3p+ILwkPJq4DvmW/gkS766OhEEq6TJnwryFRFDcVsY2U+Etz0bWpB1P7JfmFdFf81fdij58tPCXX7qVM/j/CsaATZ3IPismh+h6LlBC8ESiqqMEeSf/1O/mWQx+EfEL7FiMIs7ko+PYR1lhVY0ZA5P+gw0JilpKcGttaaYwpu7Rq+WZCsUyt5RuvqFqQ7fhNwaBiKDHEM9UeLU5/Oe3pgFlZuAOZd8VLJdcAPTUzHhbaZRBAaTrhm8Vtct000nsROJWNJcpXoikMXW0pkpWQ29o8yv6L3V4YLyJOkmB+ZJoPndP60wHfd9vPnvJbN50cY1MqgCMemw6VaCeTwt0R+InveV4P7cy9N1QEHTnmSZwKqrwNtH7f1nb2xSE+97G6nyioAOA0WiJHjKzu+2PKjrExIWxCPYp/YpjUeC54Ytyts1ggOMuOawe++MqpNlHPt+Mjd8rKFQhRXSDX+klWwlmjU5mvjqKAReEUt4Xsv1poecHXsF0eWhvXVxnO5DmjAmwMLS99RlioTeKrr4o/BnV1kpuyrWoPu23TgqFXobf3FXunIk69uyiJ2P+rMiLRfiq9GySECY=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(346002)(396003)(39860400002)(366004)(451199021)(6486002)(478600001)(86362001)(71200400001)(2616005)(36756003)(83380400001)(33656002)(6512007)(6506007)(53546011)(186003)(38070700005)(38100700002)(122000001)(64756008)(76116006)(66556008)(66946007)(66476007)(316002)(66446008)(2906002)(4326008)(91956017)(8936002)(5660300002)(8676002)(6862004)(41300700001)(37006003)(54906003)(6636002)(32563001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <A6353425C0BDFE4E9DA089F57BB9B25C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6115
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ab856496-962d-4e43-3a50-08db400fd30c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	qULi76ywHedW+x1GFSTGV+E7tbmv5Glym8rpMLh4ml/R7d1paAcYiCx0B9kflyXvc0BzAzeEfaCyUaaLOlfzRhHV9yMusJ3RwMoe7pmjK/S/IXiv1058nWeBzNT4NbJKDLJxQ6mRb8xHKqgARlibbKJXhP7EdtOw3QfltY0mn0+/Joh4RgoJ4b5dNb9U3Q6T0Fau/EAMIJRJioyoE56qMU+QwFEGla+CarT1zn7sjH5heoMD2eRP++g3o2Bg/XaFARoXb0zzxezbdTbQ/29q+3s02qZMSAex20ceDncZlSdWNOIl5SBf20o/YE2F8/Zz28XyJJlkw0Z+VW+fn2BQn//TxZOMU5erPm2zqex9p72dj/jiuOGFuCLWVUnwRSQdfcmAU/9Ful48klT54LfizMcRLFOoCs5HkCzaUk24ztAXVbPcDZLs1tuvGVfXXXH5z7ztkWr7yDbzIP6aEsqzUrSjU49qx8+EMVUfNrlcVDLkw1ythvJI43OhyTR/WpUOvxuKD01RAKRJGRcOueBJbNY8/bx62iKTP6UeE3hVBaDSf7eHLXou02D/laZh8Cp1b5EpJEMFm8INH1UOtoy6Y/byTxhe6EkMA381vtHRxBTTsMzvAHaiDZzQZ9nw81bNVCqozfS86BRxp6KgixYwaRWmDHYsvHEQntzinmxbAF0GkJeftjLJOn4XsbujQwml2hrnNCIMkiasn8seFOmESOT3TTf5w/d1No03WtkfP0g=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(346002)(396003)(39860400002)(451199021)(36840700001)(40470700004)(46966006)(336012)(6486002)(478600001)(86362001)(2616005)(47076005)(36756003)(36860700001)(40480700001)(26005)(83380400001)(33656002)(107886003)(6512007)(6506007)(53546011)(186003)(40460700003)(82740400003)(356005)(70586007)(70206006)(81166007)(316002)(2906002)(4326008)(8936002)(5660300002)(8676002)(6862004)(41300700001)(82310400005)(37006003)(54906003)(6636002)(32563001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 13:21:44.9377
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f5a1ae97-f384-485f-2500-08db400fdb40
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9295

Hi,

> On 18 Apr 2023, at 14:55, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>
> Hi Luca,
>
>> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>
>> Add a device tree property in the dom0less domU configuration
>> to enable the guest to use SVE.
>>
>> Update documentation.
>>
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

In fact the doc needs fixing as the domain creation does not fail.
@Luca thanks for mentioning that to me

So please fix that and you can keep my R-b (unless some changes are
needed after the discussion on parameter handling).

Cheers
Bertrand

>
> Cheers
> Bertrand
>
>> ---
>> Changes from v4:
>> - Now it is possible to specify the property "sve" for dom0less
>>  device tree node without any value, that means the platform
>>  supported VL will be used.
>> Changes from v3:
>> - Now domainconfig_encode_vl is named sve_encode_vl
>> Changes from v2:
>> - xen_domctl_createdomain field name has changed into sve_vl
>>  and its value is the VL/128, use domainconfig_encode_vl
>>  to encode a plain VL in bits.
>> Changes from v1:
>> - No changes
>> Changes from RFC:
>> - Changed documentation
>> ---
>> docs/misc/arm/device-tree/booting.txt | 11 +++++++++++
>> xen/arch/arm/domain_build.c           | 24 ++++++++++++++++++++++++
>> 2 files changed, 35 insertions(+)
>>
>> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
>> index 3879340b5e0a..f9d2ecdda48a 100644
>> --- a/docs/misc/arm/device-tree/booting.txt
>> +++ b/docs/misc/arm/device-tree/booting.txt
>> @@ -193,6 +193,17 @@ with the following properties:
>>    Optional. Handle to a xen,cpupool device tree node that identifies the
>>    cpupool where the guest will be started at boot.
>>
>> +- sve
>> +
>> +    Optional. A number that, when above 0, enables SVE for this guest and sets
>> +    its maximum SVE vector length. The default value is 0, that means this
>> +    guest is not allowed to use SVE, the maximum value allowed is 2048, any
>> +    other value must be multiple of 128.
>> +    Please note that if the platform supports a lower value of bits, then the
>> +    domain creation will fail.
>> +    Specifying this property with no value, means that the SVE vector length
>> +    will be set equal to the maximum vector length supported by the platform.
>> +
>> - xen,enhanced
>>
>>    A string property. Possible property values are:
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 3f30ef5c37b6..c1f0d1d78431 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -4004,6 +4004,30 @@ void __init create_domUs(void)
>>            d_cfg.max_maptrack_frames = val;
>>        }
>>
>> +        if ( dt_get_property(node, "sve", &val) )
>> +        {
>> +            unsigned int sve_vl_bits;
>> +
>> +            if ( !val )
>> +            {
>> +                /* Property found with no value, means max HW VL supported */
>> +                rc = sve_sanitize_vl_param(-1, &sve_vl_bits);
>> +            }
>> +            else
>> +            {
>> +                if ( dt_property_read_u32(node, "sve", &val) )
>> +                    rc = sve_sanitize_vl_param(val, &sve_vl_bits);
>> +                else
>> +                    panic("Error reading 'sve' property");
>> +            }
>> +
>> +            if ( !rc )
>> +                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
>> +            else
>> +                printk(XENLOG_WARNING
>> +                       "SVE vector length error, disable feature for Dom0less DomU\n");
>> +        }
>> +
>>        /*
>>         * The variable max_init_domid is initialized with zero, so here it's
>>         * very important to use the pre-increment operator to call
>> --
>> 2.34.1
>>
>
>



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 13:51:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 13:51:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522892.812533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polk3-0005PE-Cr; Tue, 18 Apr 2023 13:51:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522892.812533; Tue, 18 Apr 2023 13:51:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polk3-0005P7-8X; Tue, 18 Apr 2023 13:51:31 +0000
Received: by outflank-mailman (input) for mailman id 522892;
 Tue, 18 Apr 2023 13:51:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BdQL=AJ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1polk1-0005P1-6I
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 13:51:29 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2060a.outbound.protection.outlook.com
 [2a01:111:f400:fe12::60a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1d4dfdd0-ddf0-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 15:51:27 +0200 (CEST)
Received: from AS9PR06CA0250.eurprd06.prod.outlook.com (2603:10a6:20b:45f::8)
 by DB9PR08MB6746.eurprd08.prod.outlook.com (2603:10a6:10:2a0::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 13:51:25 +0000
Received: from AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:45f:cafe::2d) by AS9PR06CA0250.outlook.office365.com
 (2603:10a6:20b:45f::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Tue, 18 Apr 2023 13:51:24 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT003.mail.protection.outlook.com (100.127.140.227) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.21 via Frontend Transport; Tue, 18 Apr 2023 13:51:24 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Tue, 18 Apr 2023 13:51:24 +0000
Received: from 1b2537ff0838.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 507DFE23-106B-4ED2-9730-14D6C5327966.1; 
 Tue, 18 Apr 2023 13:51:18 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1b2537ff0838.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 Apr 2023 13:51:18 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by GVXPR08MB7749.eurprd08.prod.outlook.com (2603:10a6:150:69::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Tue, 18 Apr
 2023 13:51:15 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 13:51:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d4dfdd0-ddf0-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DuyqEWRAZsnQzZLqYDtLoGjpSahmTRSrOVECSbWYQq8=;
 b=NrH9WSH9wSd8vzN25QkatWmN9OOmZ6ANYRKuAtnmvdvs4emNHiIkylnMN5bGI83AtObGgeLsArEpfOic+2qBKop9EkmqcRQbYqUphlQU8NTzYO2OWZZLzVveW/JjBahcXAFOQfZPslycvTqaGdEGmTjVgEfFhGhgIH+vE2QAxDg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4ad9fe6f39e9026b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nMKwQnykp1943zmngx4dvliRbnVl6wWY4q+cpfBxxYA4cDHBiQeR7SMfEk6pTFGa46hZ/R/BM2zyaFK3QmyULq5sow5ET1s2Os8ZP11vEaxBxcqAcPEW9SYFnQot2aS2dxGDD6Q4pNRUW5ev6h31T6BF/Vl5fyO363ybElOtE2CNEeDwwNKPOd1vMs6toAZnxHZt1ZfdHuL8sd0b0MeElDbNoa18gB2DCsB9HWEGYR62SHc5DP39kQAQxsUxkMv0rrQ+Q6PdnXdrqjm28oQ/4OUakkRApTA52Wo6Kc32YjLJ4hBYdyxkKR9vKIRORP71YMW3XeW4oqAvpUPZMRYR3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DuyqEWRAZsnQzZLqYDtLoGjpSahmTRSrOVECSbWYQq8=;
 b=l08STyj5i9kvnUS+aq0xAroiYMRGYna7Rz5/1kqdhbFknq09yZUSS+YoTYac2NW7QUHNQStpQFs0+w59eQS/JmshjAlFmy82Ij32R2W8d3odBwJTkNQ+or63sBkdlhP3qSaK3GrXv/qYKw1oEDpB5RDAlALuk0XhtLd3bTFqK9FKEiNfzMmgdoZzZ6nOBQto0B5nVs7kuhUPcYxk5cv6JXgiUZ+kFjl9zNZ4i3vSGOTd9BJXaPG5YRopJA+1PLGla98IsBmiTd9jBqQIFkB3jm49KvTVL5A3ELgisRCLGy/4HEPfL09GPDeU83FeVmvA/XADT95nEAb2pm+oZIB5FA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DuyqEWRAZsnQzZLqYDtLoGjpSahmTRSrOVECSbWYQq8=;
 b=NrH9WSH9wSd8vzN25QkatWmN9OOmZ6ANYRKuAtnmvdvs4emNHiIkylnMN5bGI83AtObGgeLsArEpfOic+2qBKop9EkmqcRQbYqUphlQU8NTzYO2OWZZLzVveW/JjBahcXAFOQfZPslycvTqaGdEGmTjVgEfFhGhgIH+vE2QAxDg=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Luca Fancellu
	<Luca.Fancellu@arm.com>, Michal Orzel <michal.orzel@amd.com>, Julien Grall
	<jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>, Henry Wang <Henry.Wang@arm.com>
Subject: Re: [PATCH v7 1/5] xen/arm32: head: Widen the use of the temporary
 mapping
Thread-Topic: [PATCH v7 1/5] xen/arm32: head: Widen the use of the temporary
 mapping
Thread-Index: AQHZcHBN/ascjg9fJEuBeoKcDpUO0a8xGMCA
Date: Tue, 18 Apr 2023 13:51:14 +0000
Message-ID: <358316D9-5665-44C7-8BCE-9FB98A0ADDF8@arm.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-2-julien@xen.org>
In-Reply-To: <20230416143211.72227-2-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|GVXPR08MB7749:EE_|AM7EUR03FT003:EE_|DB9PR08MB6746:EE_
X-MS-Office365-Filtering-Correlation-Id: ce9e75d5-982f-419e-a283-08db4013ffcd
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <6875D22D89EBDB449D4822EA3600B650@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR08MB7749
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	bba3e2f5-2da8-4b96-dc48-08db4013f9e8
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 13:51:24.2342
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ce9e75d5-982f-419e-a283-08db4013ffcd
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6746

Hi Julien,

> On 16 Apr 2023, at 16:32, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, the temporary mapping is only used when the virtual
> runtime region of Xen is clashing with the physical region.
> 
> In follow-up patches, we will rework how secondary CPU bring-up works
> and it will be convenient to use the fixmap area for accessing
> the root page-table (it is per-cpu).
> 
> Rework the code to use the temporary mapping when the Xen physical
> address is not overlapping with the temporary mapping.
> 
> This also has the advantage to simplify the logic to identity map
> Xen.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>
> Tested-by: Henry Wang <Henry.Wang@arm.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> 
> ----
> 
> Even if this patch is rewriting part of the previous patch, I decided
> to keep them separated to help the review.
> 
> The "follow-up patches" are still in draft at the moment. I still haven't
> found a way to split them nicely without requiring too much more work
> on the coloring side.
> 
> I have provided some medium-term goals in the cover letter.
> 
>    Changes in v6:
>        - Add Henry's reviewed-by and tested-by tag
>        - Add Michal's reviewed-by
>        - Add newline in remove_identity_mapping for clarity
> 
>    Changes in v5:
>        - Fix typo in a comment
>        - No need to link boot_{second, third}_id again if we need to
>          create a temporary area.
> 
>    Changes in v3:
>        - Resolve conflicts after switching from "ldr rX, <label>" to
>          "mov_w rX, <label>" in a previous patch
> 
>    Changes in v2:
>        - Patch added
> ---
> xen/arch/arm/arm32/head.S | 86 ++++++++-------------------------------
> 1 file changed, 16 insertions(+), 70 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index df51550baa8a..9befffd85079 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -459,7 +459,6 @@ ENDPROC(cpu_init)
> create_page_tables:
>         /* Prepare the page-tables for mapping Xen */
>         mov_w r0, XEN_VIRT_START
> -        create_table_entry boot_pgtable, boot_second, r0, 1
>         create_table_entry boot_second, boot_third, r0, 2
> 
>         /* Setup boot_third: */
> @@ -479,70 +478,37 @@ create_page_tables:
>         cmp   r1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512*8-byte entries per page */
>         blo   1b
> 
> -        /*
> -         * If Xen is loaded at exactly XEN_VIRT_START then we don't
> -         * need an additional 1:1 mapping, the virtual mapping will
> -         * suffice.
> -         */
> -        cmp   r9, #XEN_VIRT_START
> -        moveq pc, lr
> -
>         /*
>          * Setup the 1:1 mapping so we can turn the MMU on. Note that
>          * only the first page of Xen will be part of the 1:1 mapping.
> -         *
> -         * In all the cases, we will link boot_third_id. So create the
> -         * mapping in advance.
>          */
> +        create_table_entry boot_pgtable, boot_second_id, r9, 1
> +        create_table_entry boot_second_id, boot_third_id, r9, 2
>         create_mapping_entry boot_third_id, r9, r9
> 
>         /*
> -         * Find the first slot used. If the slot is not XEN_FIRST_SLOT,
> -         * then the 1:1 mapping will use its own set of page-tables from
> -         * the second level.
> +         * Find the first slot used. If the slot is not the same
> +         * as TEMPORARY_AREA_FIRST_SLOT, then we will want to switch
> +         * to the temporary mapping before jumping to the runtime
> +         * virtual mapping.
>          */
>         get_table_slot r1, r9, 1     /* r1 := first slot */
> -        cmp   r1, #XEN_FIRST_SLOT
> -        beq   1f
> -        create_table_entry boot_pgtable, boot_second_id, r9, 1
> -        b     link_from_second_id
> -
> -1:
> -        /*
> -         * Find the second slot used. If the slot is XEN_SECOND_SLOT, then the
> -         * 1:1 mapping will use its own set of page-tables from the
> -         * third level.
> -         */
> -        get_table_slot r1, r9, 2     /* r1 := second slot */
> -        cmp   r1, #XEN_SECOND_SLOT
> -        beq   virtphys_clash
> -        create_table_entry boot_second, boot_third_id, r9, 2
> -        b     link_from_third_id
> +        cmp   r1, #TEMPORARY_AREA_FIRST_SLOT
> +        bne   use_temporary_mapping
> 
> -link_from_second_id:
> -        create_table_entry boot_second_id, boot_third_id, r9, 2
> -link_from_third_id:
> -        /* Good news, we are not clashing with Xen virtual mapping */
> +        mov_w r0, XEN_VIRT_START
> +        create_table_entry boot_pgtable, boot_second, r0, 1
>         mov   r12, #0                /* r12 := temporary mapping not created */
>         mov   pc, lr
> 
> -virtphys_clash:
> +use_temporary_mapping:
>         /*
> -         * The identity map clashes with boot_third. Link boot_first_id and
> -         * map Xen to a temporary mapping. See switch_to_runtime_mapping
> -         * for more details.
> +         * The identity mapping is not using the first slot
> +         * TEMPORARY_AREA_FIRST_SLOT. Create a temporary mapping.
> +         * See switch_to_runtime_mapping for more details.
>          */
> -        PRINT("- Virt and Phys addresses clash  -\r\n")
>         PRINT("- Create temporary mapping -\r\n")
> 
> -        /*
> -         * This will override the link to boot_second in XEN_FIRST_SLOT.
> -         * The page-tables are not live yet. So no need to use
> -         * break-before-make.
> -         */
> -        create_table_entry boot_pgtable, boot_second_id, r9, 1
> -        create_table_entry boot_second_id, boot_third_id, r9, 2
> -
>         /* Map boot_second (cover Xen mappings) to the temporary 1st slot */
>         mov_w r0, TEMPORARY_XEN_VIRT_START
>         create_table_entry boot_pgtable, boot_second, r0, 1
> @@ -675,33 +641,13 @@ remove_identity_mapping:
>         /* r2:r3 := invalid page-table entry */
>         mov   r2, #0x0
>         mov   r3, #0x0
> -        /*
> -         * Find the first slot used. Remove the entry for the first
> -         * table if the slot is not XEN_FIRST_SLOT.
> -         */
> +
> +        /* Find the first slot used and remove it */
>         get_table_slot r1, r9, 1     /* r1 := first slot */
> -        cmp   r1, #XEN_FIRST_SLOT
> -        beq   1f
> -        /* It is not in slot 0, remove the entry */
>         mov_w r0, boot_pgtable       /* r0 := root table */
>         lsl   r1, r1, #3             /* r1 := Slot offset */
>         strd  r2, r3, [r0, r1]
> -        b     identity_mapping_removed
> -
> -1:
> -        /*
> -         * Find the second slot used. Remove the entry for the first
> -         * table if the slot is not XEN_SECOND_SLOT.
> -         */
> -        get_table_slot r1, r9, 2     /* r1 := second slot */
> -        cmp   r1, #XEN_SECOND_SLOT
> -        beq   identity_mapping_removed
> -        /* It is not in slot 1, remove the entry */
> -        mov_w r0, boot_second        /* r0 := second table */
> -        lsl   r1, r1, #3             /* r1 := Slot offset */
> -        strd  r2, r3, [r0, r1]
> 
> -identity_mapping_removed:
>         flush_xen_tlb_local r0
>         mov   pc, lr
> ENDPROC(remove_identity_mapping)
> -- 
> 2.39.2
> 



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 13:52:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 13:52:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522896.812542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polka-0005sE-Qd; Tue, 18 Apr 2023 13:52:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522896.812542; Tue, 18 Apr 2023 13:52:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polka-0005s7-NP; Tue, 18 Apr 2023 13:52:04 +0000
Received: by outflank-mailman (input) for mailman id 522896;
 Tue, 18 Apr 2023 13:52:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bqrX=AJ=citrix.com=prvs=465e465d1=ross.lagerwall@srs-se1.protection.inumbo.net>)
 id 1polkZ-0005P1-JG
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 13:52:03 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3124a44f-ddf0-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 15:52:02 +0200 (CEST)
Received: from mail-dm6nam11lp2176.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 09:51:59 -0400
Received: from DM6PR03MB5372.namprd03.prod.outlook.com (2603:10b6:5:24f::15)
 by BN8PR03MB5059.namprd03.prod.outlook.com (2603:10b6:408:d9::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 13:51:53 +0000
Received: from DM6PR03MB5372.namprd03.prod.outlook.com
 ([fe80::f4f8:2c53:17cd:f3cb]) by DM6PR03MB5372.namprd03.prod.outlook.com
 ([fe80::f4f8:2c53:17cd:f3cb%4]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 13:51:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3124a44f-ddf0-11ed-b21f-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Josh Poimboeuf <jpoimboe@redhat.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
Subject: Re: [PATCH] create-diff-object: handle missing padding at end of
 special section
Thread-Topic: [PATCH] create-diff-object: handle missing padding at end of
 special section
Thread-Index: AQHZbuSTaQYCrBXOqU2gniRwRe4uLa8xG8Ih
Date: Tue, 18 Apr 2023 13:51:52 +0000
Message-ID:
 <DM6PR03MB53726ABC80D3687499672BA5F09D9@DM6PR03MB5372.namprd03.prod.outlook.com>
References: <20230414151933.53851-1-roger.pau@citrix.com>
In-Reply-To: <20230414151933.53851-1-roger.pau@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
msip_labels:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: DM6PR03MB5372:EE_|BN8PR03MB5059:EE_
x-ms-office365-filtering-correlation-id: 38328439-a2e7-48ab-34b7-08db401410aa
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DM6PR03MB5372.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 38328439-a2e7-48ab-34b7-08db401410aa
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Apr 2023 13:51:52.5752
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 6ZRJovDOeY1nk/C8GoKoL73pYMDdzNM+z99AiJBlovhnU7lGJaWp5Dz767RzUqOyXOfyMG3aPc5SMIkjVykgbzzgeM9ZvWGenDVbfNKt2NU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5059

> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: Friday, April 14, 2023 4:19 PM
> To: xen-devel@lists.xenproject.org <xen-devel@lists.xenproject.org>
> Cc: Josh Poimboeuf <jpoimboe@redhat.com>; Roger Pau Monne <roger.pau@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Ross Lagerwall <ross.lagerwall@citrix.com>
> Subject: [PATCH] create-diff-object: handle missing padding at end of special section
> 
> From: Josh Poimboeuf <jpoimboe@redhat.com>
> 
> The paravirt_patch_site struct has 12 bytes of data and 4 bytes of
> padding, for a total of 16 bytes. However, when laying out the structs
> in the .parainstructions section, the vmlinux script only aligns before
> each struct's data, not after. So the last entry doesn't have the
> 4-byte padding, which breaks kpatch_regenerate_special_section()'s
> assumption of a 16-byte struct, resulting in a memcpy past the end of
> the section.
> 
> Fixes #747.
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
> 
> This is commit:
> 
> c2dc3836e862 create-diff-object: handle missing padding at end of special section
> 
> in the kpatch repository.
> 
> I've seen the .fixup section get an alignment of 16 but a size of 81,
> which makes the error removed in this patch trigger. Overall I'm not
> sure why the original alignment check was done against the size of the
> section; the alignment applies to the address of the section, not its
> size.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---

Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 13:52:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 13:52:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522902.812553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pollI-0006Tq-4N; Tue, 18 Apr 2023 13:52:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522902.812553; Tue, 18 Apr 2023 13:52:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pollI-0006Tj-1q; Tue, 18 Apr 2023 13:52:48 +0000
Received: by outflank-mailman (input) for mailman id 522902;
 Tue, 18 Apr 2023 13:52:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pollG-0006TV-Na; Tue, 18 Apr 2023 13:52:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pollG-0005uX-Go; Tue, 18 Apr 2023 13:52:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pollG-0000lV-4x; Tue, 18 Apr 2023 13:52:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pollG-000560-4Q; Tue, 18 Apr 2023 13:52:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180293-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180293: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=edd604a6723522f4c17a38ff2df35460208b0c81
X-Osstest-Versions-That:
    libvirt=7cbbd45af115c24dba1b1be9631a32d6215ff0cc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Apr 2023 13:52:46 +0000

flight 180293 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180293/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180272
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180272
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180272
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 libvirt              edd604a6723522f4c17a38ff2df35460208b0c81
baseline version:
 libvirt              7cbbd45af115c24dba1b1be9631a32d6215ff0cc

Last test of basis   180272  2023-04-16 04:20:12 Z    2 days
Testing same since   180293  2023-04-18 04:20:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   7cbbd45af1..edd604a672  edd604a6723522f4c17a38ff2df35460208b0c81 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 13:53:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 13:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522910.812563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polmH-00075Q-Fe; Tue, 18 Apr 2023 13:53:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522910.812563; Tue, 18 Apr 2023 13:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polmH-00075J-Br; Tue, 18 Apr 2023 13:53:49 +0000
Received: by outflank-mailman (input) for mailman id 522910;
 Tue, 18 Apr 2023 13:53:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BdQL=AJ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1polmG-00075B-Fu
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 13:53:48 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7d00::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6fe7afa8-ddf0-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 15:53:46 +0200 (CEST)
Received: from DUZPR01CA0028.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:46b::7) by AS8PR08MB8682.eurprd08.prod.outlook.com
 (2603:10a6:20b:564::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 13:53:40 +0000
Received: from DBAEUR03FT036.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:46b:cafe::5b) by DUZPR01CA0028.outlook.office365.com
 (2603:10a6:10:46b::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Tue, 18 Apr 2023 13:53:40 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT036.mail.protection.outlook.com (100.127.142.193) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.20 via Frontend Transport; Tue, 18 Apr 2023 13:53:40 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Tue, 18 Apr 2023 13:53:40 +0000
Received: from d7779ff6267e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 187A7BE3-F101-490B-A825-E943D5D518A4.1; 
 Tue, 18 Apr 2023 13:53:33 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d7779ff6267e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 Apr 2023 13:53:33 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by GVXPR08MB7749.eurprd08.prod.outlook.com (2603:10a6:150:69::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Tue, 18 Apr
 2023 13:53:30 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 13:53:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fe7afa8-ddf0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SX0a2+ixL92Lrl4Ob+ESXYJT+MtQ6svD3u4HBZLaXl8=;
 b=SDk4IijRbEOdg9HcEt/v5JPWiiY7A15uC+UfAG2baFnYqlAGvi2tA73v6mIB6xjwOuErf5Id6zIdNiD1Jk9gB9Vxl7tdX2b1CCNZT77pz01WgrYte7l3upY2pJqveXCuvN/FL/Y7UbK9quYQbGf/LDIRYRC+UcCqeLUsUoK4R/g=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: caf32e46ad596eab
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Luca Fancellu
	<Luca.Fancellu@arm.com>, "michal.orzel@amd.com" <michal.orzel@amd.com>,
	Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 2/5] xen/arm64: Rework the memory layout
Thread-Topic: [PATCH v7 2/5] xen/arm64: Rework the memory layout
Thread-Index: AQHZcHBTPKRFJKI7hUu6krfNforKJa8xGWKA
Date: Tue, 18 Apr 2023 13:53:30 +0000
Message-ID: <D357504A-B3BA-4058-98AF-973A7286D19A@arm.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-3-julien@xen.org>
In-Reply-To: <20230416143211.72227-3-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|GVXPR08MB7749:EE_|DBAEUR03FT036:EE_|AS8PR08MB8682:EE_
X-MS-Office365-Filtering-Correlation-Id: 3b282845-69c4-4328-e8fa-08db401450ea
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D0C1ECB4F9F1EE4793D1A2B189B96588@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR08MB7749
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8aad34d8-96d4-4d2d-4484-08db40144b19
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 13:53:40.3996
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3b282845-69c4-4328-e8fa-08db401450ea
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8682

Hi Julien,

> On 16 Apr 2023, at 16:32, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> Xen is currently not fully compliant with the Arm Arm because it will
> switch the TTBR with the MMU on.
>
> In order to be compliant, we need to disable the MMU before
> switching the TTBR. The implication is that the page-tables should
> contain an identity mapping of the code switching the TTBR.
>
> In most cases we expect Xen to be loaded in low memory. I am aware
> of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
> To give us some slack, consider that Xen may be loaded in the first 2TB
> of the physical address space.
>
> The memory layout is reshuffled to keep the first four slots of the zeroeth
> level free. All the regions currently in L0 slot 0 will now be part of
> slot 4 (2TB). This requires a slight tweak of the boot code because
> XEN_VIRT_START (2TB + 2MB) cannot be used as an immediate.
>
> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
> loaded below 2TB.
>
> Lastly, take the opportunity to check at compile time if any of the
> regions may overlap with the reserved area for identity mapping.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

With the 2 typos found by Michal fixed:
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

>
> ----
>    Changes in v7:
>        - Remove all tags
>        - Add BUILD_BUG_ON()s
>        - Don't forget to update FRAMETABLE_VIRT_START and
>          VMAP_VIRT_START
>
>    Changes in v6:
>        - Correct the BUILD_BUG_ON(), Xen virtual address should be
>          above 2TB (i.e. slot0 > 4).
>        - Add Bertrand's reviewed-by
>
>    Changes in v5:
>        - We are reserving 4 slots rather than 2.
>        - Fix the addresses in the layout comment.
>        - Fix the size of the region in the layout comment
>        - Add Luca's tested-by (the reviewed-by was not added
>          because of the changes requested by Michal)
>        - Add Michal's reviewed-by
>
>    Changes in v4:
>        - Correct the documentation
>        - The start address is 2TB, so slot0 is 4 not 2.
>
>    Changes in v2:
>        - Reword the commit message
>        - Load Xen at 2TB + 2MB
>        - Update the documentation to reflect the new layout
> ---
> xen/arch/arm/arm64/head.S         |  3 ++-
> xen/arch/arm/include/asm/config.h | 38 +++++++++++++++++++++----------
> xen/arch/arm/mm.c                 | 23 +++++++++++++++----
> 3 files changed, 46 insertions(+), 18 deletions(-)
>
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 4a3f87117c83..663f5813b12e 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -607,7 +607,8 @@ create_page_tables:
>          * need an additional 1:1 mapping, the virtual mapping will
>          * suffice.
>          */
> -        cmp   x19, #XEN_VIRT_START
> +        ldr   x0, =XEN_VIRT_START
> +        cmp   x19, x0
>         bne   1f
>         ret
> 1:
> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
> index 5df0e4c4959b..2cfe5e480256 100644
> --- a/xen/arch/arm/include/asm/config.h
> +++ b/xen/arch/arm/include/asm/config.h
> @@ -72,16 +72,13 @@
> #include <xen/page-size.h>
>
> /*
> - * Common ARM32 and ARM64 layout:
> + * ARM32 layout:
>  *   0  -   2M   Unmapped
>  *   2M -   4M   Xen text, data, bss
>  *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>  *   6M -  10M   Early boot mapping of FDT
>  *   10M - 12M   Livepatch vmap (if compiled in)
>  *
> - * ARM32 layout:
> - *   0  -  12M   <COMMON>
> - *
>  *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
>  * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>  *                    space
> @@ -90,14 +87,23 @@
>  *   2G -   4G   Domheap: on-demand-mapped
>  *
>  * ARM64 layout:
> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
> - *   0  -  12M   <COMMON>
> + * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
> + *
> + *  Reserved to identity map Xen
> + *
> + * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4])
> + *  (Relative offsets)
> + *   0  -   2M   Unmapped
> + *   2M -   4M   Xen text, data, bss
> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
> + *   6M -  10M   Early boot mapping of FDT
> + *  10M -  12M   Livepatch vmap (if compiled in)
>  *
>  *   1G -   2G   VMAP: ioremap and early_ioremap
>  *
>  *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
>  *
> - * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
> + * 0x0000028000000000 - 0x00007fffffffffff (125TB, L0 slots [5..255])
>  *  Unused
>  *
>  * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
> @@ -107,7 +113,17 @@
>  *  Unused
>  */
>
> +#ifdef CONFIG_ARM_32
> #define XEN_VIRT_START          _AT(vaddr_t, MB(2))
> +#else
> +
> +#define SLOT0_ENTRY_BITS  39
> +#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
> +#define SLOT0_ENTRY_SIZE  SLOT0(1)
> +
> +#define XEN_VIRT_START          (SLOT0(4) + _AT(vaddr_t, MB(2)))
> +#endif
> +
> #define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))
>
> #define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
> @@ -163,14 +179,12 @@
>
> #else /* ARM_64 */
>=20
> -#define SLOT0_ENTRY_BITS  39
> -#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
> -#define SLOT0_ENTRY_SIZE  SLOT0(1)
> +#define IDENTITY_MAPPING_AREA_NR_L0  4
>
> -#define VMAP_VIRT_START  GB(1)
> +#define VMAP_VIRT_START  (SLOT0(4) + GB(1))
> #define VMAP_VIRT_SIZE   GB(1)
>=20
> -#define FRAMETABLE_VIRT_START  GB(32)
> +#define FRAMETABLE_VIRT_START  (SLOT0(4) + GB(32))
> #define FRAMETABLE_SIZE        GB(32)
> #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
>
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index b99806af996c..1d09d61dd922 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -153,7 +153,19 @@ static void __init __maybe_unused build_assertions(void)
> #endif
>     /* Page table structure constraints */
> #ifdef CONFIG_ARM_64
> -    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
> +    /*
> +     * The first few slots of the L0 table is reserved for the identity
> +     * mapping. Check that none of the other regions are overlapping
> +     * with it.
> +     */
> +#define CHECK_OVERLAP_WITH_IDMAP(virt) \
> +    BUILD_BUG_ON(zeroeth_table_offset(virt) < IDENTITY_MAPPING_AREA_NR_L0)
> +
> +    CHECK_OVERLAP_WITH_IDMAP(XEN_VIRT_START);
> +    CHECK_OVERLAP_WITH_IDMAP(VMAP_VIRT_START);
> +    CHECK_OVERLAP_WITH_IDMAP(FRAMETABLE_VIRT_START);
> +    CHECK_OVERLAP_WITH_IDMAP(DIRECTMAP_VIRT_START);
> +#undef CHECK_OVERLAP_WITH_IDMAP
> #endif
>     BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
> #ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
> @@ -496,10 +508,11 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
>     phys_offset = boot_phys_offset;
>
> #ifdef CONFIG_ARM_64
> -    p = (void *) xen_pgtable;
> -    p[0] = pte_of_xenaddr((uintptr_t)xen_first);
> -    p[0].pt.table = 1;
> -    p[0].pt.xn = 0;
> +    pte = pte_of_xenaddr((uintptr_t)xen_first);
> +    pte.pt.table = 1;
> +    pte.pt.xn = 0;
> +    xen_pgtable[zeroeth_table_offset(XEN_VIRT_START)] = pte;
> +
>     p = (void *) xen_first;
> #else
>     p = (void *) cpu0_pgtable;
> --
> 2.39.2
>



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 13:57:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 13:57:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522917.812573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polpq-0007oT-3O; Tue, 18 Apr 2023 13:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522917.812573; Tue, 18 Apr 2023 13:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polpq-0007oM-08; Tue, 18 Apr 2023 13:57:30 +0000
Received: by outflank-mailman (input) for mailman id 522917;
 Tue, 18 Apr 2023 13:57:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BdQL=AJ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1polpo-0007oG-EH
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 13:57:28 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20604.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f315fdef-ddf0-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 15:57:26 +0200 (CEST)
Received: from DB7PR05CA0015.eurprd05.prod.outlook.com (2603:10a6:10:36::28)
 by DB5PR08MB10311.eurprd08.prod.outlook.com (2603:10a6:10:4a5::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 13:57:17 +0000
Received: from DBAEUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:36:cafe::b7) by DB7PR05CA0015.outlook.office365.com
 (2603:10a6:10:36::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Tue, 18 Apr 2023 13:57:17 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT005.mail.protection.outlook.com (100.127.142.81) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.20 via Frontend Transport; Tue, 18 Apr 2023 13:57:17 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Tue, 18 Apr 2023 13:57:17 +0000
Received: from bd2105df09d4.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7D46468E-156E-4919-9B50-7B1680D333DA.1; 
 Tue, 18 Apr 2023 13:57:16 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bd2105df09d4.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 Apr 2023 13:57:15 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS2PR08MB8479.eurprd08.prod.outlook.com (2603:10a6:20b:55d::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Tue, 18 Apr
 2023 13:57:05 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 13:57:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f315fdef-ddf0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=haJ1iiFJBzgzTdkpwwcZ6vF0qjVcYXmKCk0mqHYCvKg=;
 b=yiMNHMoqMAwQHzG5+By3tRidg+zgclxyMQoatjRTG5Q4TX5s/0GcJmk5VaQlcF/OuFlNbZ77MyDmFkbd9dKFYZl7WYbFD7aZNd+ZFmpbrZws6wwLy7tTTtd7DkniLfmQ2kPlP+bfHgNCkhbsaeNmXGwKmQM6HrS8ovjC4UYVjVU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 66a6a5f4fa3e655e
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Luca Fancellu
	<Luca.Fancellu@arm.com>, "michal.orzel@amd.com" <michal.orzel@amd.com>,
	Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 3/5] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Thread-Topic: [PATCH v7 3/5] xen/arm64: mm: Introduce helpers to
 prepare/enable/disable the identity mapping
Thread-Index: AQHZcHBPCfFzIjsHxUmcm0cDknsveK8xGmMA
Date: Tue, 18 Apr 2023 13:57:05 +0000
Message-ID: <28908BA4-19B2-4B77-BF28-9F3F890474B6@arm.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-4-julien@xen.org>
In-Reply-To: <20230416143211.72227-4-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AS2PR08MB8479:EE_|DBAEUR03FT005:EE_|DB5PR08MB10311:EE_
X-MS-Office365-Filtering-Correlation-Id: 4f72a8de-04ac-4124-54b2-08db4014d22d
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <CABC70C4202CAB4DB7022A00E89C0095@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8479
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	25b54d94-7f07-45a8-d483-08db4014cb1c
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 13:57:17.2610
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4f72a8de-04ac-4124-54b2-08db4014d22d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB5PR08MB10311

Hi Julien,

> On 16 Apr 2023, at 16:32, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> In follow-up patches we will need to have part of Xen identity mapped in
> order to safely switch the TTBR.
>
> On some platforms, the identity mapping may have to start at 0. If we
> always keep the identity region mapped, a NULL pointer dereference would
> lead to an access to a valid mapping.
>
> It would be possible to relocate Xen to avoid clashing with address 0.
> However, the identity mapping is only meant to be used in very limited
> places. Therefore it would be better to keep the identity region invalid
> for most of the time.
>
> Two new external helpers are introduced:
>    - arch_setup_page_tables() will set up the page tables so it is
>      easy to create the mapping afterwards.
>    - update_identity_mapping() will create/remove the identity mapping
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand


>
> ----
>    Changes in v7:
>        - The definition of IDENTITY_MAPPING_AREA_NR_L0 was moved
>          in the previous patch.
>
>    Changes in v6:
>        - Correctly check the placement of the identity mapping (take
>          2).
>        - Fix typos
>
>    Changes in v5:
>        - The reserved area for the identity mapping is 2TB (so 4 slots)
>          rather than 512GB.
>
>    Changes in v4:
>        - Fix typo in a comment
>        - Clarify which page-tables are updated
>
>    Changes in v2:
>        - Remove the arm32 part
>        - Use a different logic for the boot page tables and runtime
>          one because Xen may be running in a different place.
> ---
> xen/arch/arm/arm64/Makefile         |   1 +
> xen/arch/arm/arm64/mm.c             | 130 ++++++++++++++++++++++++++++
> xen/arch/arm/include/asm/arm32/mm.h |   4 +
> xen/arch/arm/include/asm/arm64/mm.h |  13 +++
> xen/arch/arm/include/asm/setup.h    |  11 +++
> xen/arch/arm/mm.c                   |   6 +-
> 6 files changed, 163 insertions(+), 2 deletions(-)
> create mode 100644 xen/arch/arm/arm64/mm.c
>
> diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
> index 6d507da0d44d..28481393e98f 100644
> --- a/xen/arch/arm/arm64/Makefile
> +++ b/xen/arch/arm/arm64/Makefile
> @@ -10,6 +10,7 @@ obj-y += entry.o
> obj-y += head.o
> obj-y += insn.o
> obj-$(CONFIG_LIVEPATCH) += livepatch.o
> +obj-y += mm.o
> obj-y += smc.o
> obj-y += smpboot.o
> obj-y += traps.o
> diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mm.c
> new file mode 100644
> index 000000000000..56b9e9b8d3ef
> --- /dev/null
> +++ b/xen/arch/arm/arm64/mm.c
> @@ -0,0 +1,130 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <xen/init.h>
> +#include <xen/mm.h>
> +
> +#include <asm/setup.h>
> +
> +/* Override macros from asm/page.h to make them work with mfn_t */
> +#undef virt_to_mfn
> +#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> +
> +static DEFINE_PAGE_TABLE(xen_first_id);
> +static DEFINE_PAGE_TABLE(xen_second_id);
> +static DEFINE_PAGE_TABLE(xen_third_id);
> +
> +/*
> + * The identity mapping may start at physical address 0. So we don't want
> + * to keep it mapped longer than necessary.
> + *
> + * When this is called, we are still using the boot_pgtable.
> + *
> + * We need to prepare the identity mapping for both the boot page tables
> + * and runtime page tables.
> + *
> + * The logic to create the entry is slightly different because Xen may
> + * be running at a different location at runtime.
> + */
> +static void __init prepare_boot_identity_mapping(void)
> +{
> +    paddr_t id_addr = virt_to_maddr(_start);
> +    lpae_t pte;
> +    DECLARE_OFFSETS(id_offsets, id_addr);
> +
> +    /*
> +     * We will be re-using the boot ID tables. They may not have been
> +     * zeroed but they should be unlinked. So it is fine to use
> +     * clear_page().
> +     */
> +    clear_page(boot_first_id);
> +    clear_page(boot_second_id);
> +    clear_page(boot_third_id);
> +
> +    if ( id_offsets[0] >= IDENTITY_MAPPING_AREA_NR_L0 )
> +        panic("Cannot handle ID mapping above 2TB\n");
> +
> +    /* Link first ID table */
> +    pte = mfn_to_xen_entry(virt_to_mfn(boot_first_id), MT_NORMAL);
> +    pte.pt.table = 1;
> +    pte.pt.xn = 0;
> +
> +    write_pte(&boot_pgtable[id_offsets[0]], pte);
> +
> +    /* Link second ID table */
> +    pte = mfn_to_xen_entry(virt_to_mfn(boot_second_id), MT_NORMAL);
> +    pte.pt.table = 1;
> +    pte.pt.xn = 0;
> +
> +    write_pte(&boot_first_id[id_offsets[1]], pte);
> +
> +    /* Link third ID table */
> +    pte = mfn_to_xen_entry(virt_to_mfn(boot_third_id), MT_NORMAL);
> +    pte.pt.table = 1;
> +    pte.pt.xn = 0;
> +
> +    write_pte(&boot_second_id[id_offsets[2]], pte);
> +
> +    /* The mapping in the third table will be created at a later stage */
> +}
> +
> +static void __init prepare_runtime_identity_mapping(void)
> +{
> +    paddr_t id_addr = virt_to_maddr(_start);
> +    lpae_t pte;
> +    DECLARE_OFFSETS(id_offsets, id_addr);
> +
> +    if ( id_offsets[0] >= IDENTITY_MAPPING_AREA_NR_L0 )
> +        panic("Cannot handle ID mapping above 2TB\n");
> +
> +    /* Link first ID table */
> +    pte = pte_of_xenaddr((vaddr_t)xen_first_id);
> +    pte.pt.table = 1;
> +    pte.pt.xn = 0;
> +
> +    write_pte(&xen_pgtable[id_offsets[0]], pte);
> +
> +    /* Link second ID table */
> +    pte = pte_of_xenaddr((vaddr_t)xen_second_id);
> +    pte.pt.table = 1;
> +    pte.pt.xn = 0;
> +
> +    write_pte(&xen_first_id[id_offsets[1]], pte);
> +
> +    /* Link third ID table */
> +    pte = pte_of_xenaddr((vaddr_t)xen_third_id);
> +    pte.pt.table = 1;
> +    pte.pt.xn = 0;
> +
> +    write_pte(&xen_second_id[id_offsets[2]], pte);
> +
> +    /* The mapping in the third table will be created at a later stage */
> +}
> +
> +void __init arch_setup_page_tables(void)
> +{
> +    prepare_boot_identity_mapping();
> +    prepare_runtime_identity_mapping();
> +}
> +
> +void update_identity_mapping(bool enable)
> +{
> +    paddr_t id_addr = virt_to_maddr(_start);
> +    int rc;
> +
> +    if ( enable )
> +        rc = map_pages_to_xen(id_addr, maddr_to_mfn(id_addr), 1,
> +                              PAGE_HYPERVISOR_RX);
> +    else
> +        rc = destroy_xen_mappings(id_addr, id_addr + PAGE_SIZE);
> +
> +    BUG_ON(rc);
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
> index 8bfc906e7178..856f2dbec4ad 100644
> --- a/xen/arch/arm/include/asm/arm32/mm.h
> +++ b/xen/arch/arm/include/asm/arm32/mm.h
> @@ -18,6 +18,10 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
>
> bool init_domheap_mappings(unsigned int cpu);
>
> +static inline void arch_setup_page_tables(void)
> +{
> +}
> +
> #endif /* __ARM_ARM32_MM_H__ */
>
> /*
> diff --git a/xen/arch/arm/include/asm/arm64/mm.h b/xen/arch/arm/include/asm/arm64/mm.h
> index aa2adac63189..e0bd23a6ed0c 100644
> --- a/xen/arch/arm/include/asm/arm64/mm.h
> +++ b/xen/arch/arm/include/asm/arm64/mm.h
> @@ -1,6 +1,8 @@
> #ifndef __ARM_ARM64_MM_H__
> #define __ARM_ARM64_MM_H__
>
> +extern DEFINE_PAGE_TABLE(xen_pgtable);
> +
> /*
>  * On ARM64, all the RAM is currently direct mapped in Xen.
>  * Hence return always true.
> @@ -10,6 +12,17 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
>     return true;
> }
>
> +void arch_setup_page_tables(void);
> +
> +/*
> + * Enable/disable the identity mapping in the live page-tables (i.e.
> + * the ones pointed to by TTBR_EL2).
> + *
> + * Note that nested calls (e.g. enable=true, enable=true) are not
> + * supported.
> + */
> +void update_identity_mapping(bool enable);
> +
> #endif /* __ARM_ARM64_MM_H__ */
>
> /*
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index a926f30a2be4..66b27f2b57c1 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -166,6 +166,17 @@ u32 device_tree_get_u32(const void *fdt, int node,
> int map_range_to_domain(const struct dt_device_node *dev,
>                         u64 addr, u64 len, void *data);
>
> +extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
> +
> +#ifdef CONFIG_ARM_64
> +extern DEFINE_BOOT_PAGE_TABLE(boot_first_id);
> +#endif
> +extern DEFINE_BOOT_PAGE_TABLE(boot_second_id);
> +extern DEFINE_BOOT_PAGE_TABLE(boot_third_id);
> +
> +/* Find where Xen will be residing at runtime and return a PT entry */
> +lpae_t pte_of_xenaddr(vaddr_t);
> +
> extern const char __ro_after_init_start[], __ro_after_init_end[];
>
> struct init_info
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 1d09d61dd922..b7104d8d33ba 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -93,7 +93,7 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
>
> #ifdef CONFIG_ARM_64
> #define HYP_PT_ROOT_LEVEL 0
> -static DEFINE_PAGE_TABLE(xen_pgtable);
> +DEFINE_PAGE_TABLE(xen_pgtable);
> static DEFINE_PAGE_TABLE(xen_first);
> #define THIS_CPU_PGTABLE xen_pgtable
> #else
> @@ -400,7 +400,7 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
>         invalidate_icache();
> }
>
> -static inline lpae_t pte_of_xenaddr(vaddr_t va)
> +lpae_t pte_of_xenaddr(vaddr_t va)
> {
>     paddr_t ma = va + phys_offset;
>
> @@ -507,6 +507,8 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
>
>     phys_offset = boot_phys_offset;
>
> +    arch_setup_page_tables();
> +
> #ifdef CONFIG_ARM_64
>     pte = pte_of_xenaddr((uintptr_t)xen_first);
>     pte.pt.table = 1;
> --
> 2.39.2
>



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 13:59:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 13:59:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522922.812583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polro-0008NS-Fx; Tue, 18 Apr 2023 13:59:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522922.812583; Tue, 18 Apr 2023 13:59:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1polro-0008NL-CY; Tue, 18 Apr 2023 13:59:32 +0000
Received: by outflank-mailman (input) for mailman id 522922;
 Tue, 18 Apr 2023 13:59:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BdQL=AJ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1polrm-0008NA-Tw
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 13:59:31 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2047.outbound.protection.outlook.com [40.107.7.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d15c8dc-ddf1-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 15:59:30 +0200 (CEST)
Received: from AS8PR04CA0035.eurprd04.prod.outlook.com (2603:10a6:20b:312::10)
 by AM7PR08MB5431.eurprd08.prod.outlook.com (2603:10a6:20b:10c::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 13:59:00 +0000
Received: from AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:312:cafe::a4) by AS8PR04CA0035.outlook.office365.com
 (2603:10a6:20b:312::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Tue, 18 Apr 2023 13:59:00 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT046.mail.protection.outlook.com (100.127.140.78) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.20 via Frontend Transport; Tue, 18 Apr 2023 13:58:59 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Tue, 18 Apr 2023 13:58:59 +0000
Received: from 953d40ecfdfd.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 073B72D5-8D01-424A-9865-3D02F11ADA55.1; 
 Tue, 18 Apr 2023 13:58:48 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 953d40ecfdfd.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 Apr 2023 13:58:48 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS2PR08MB8479.eurprd08.prod.outlook.com (2603:10a6:20b:55d::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Tue, 18 Apr
 2023 13:58:45 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 13:58:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d15c8dc-ddf1-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pmNww2X8yBE+p66gp08L+YaqPUcK4zCTMo5f2J6NXB4=;
 b=6dD9/+kZ4jb3wgbvTX+J3vpKNBvOqHWKmYYSQPnYA17oXsvitsK+ww6OSgWXfNa2JUALpun1tVxror5lus1PucOon4fn3XbwU3GuRFsyX4MWRLOtFwRPDjg460AIuIfxbBlgbSqP5PJcvOXebpy5WDp6nVpsebXwlNhHMrTAu/E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: e3978173ef834f1b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nfM+vY9l9eKHnow2/yhLxjfhKS4O/G8BBZjrewi1WqjXlk2pjl4hL9b7HvnhiLiFftOamXlqhtKZ4mMjMBzW+vWiYqtRRfrdhLOEMVBBib6L6jeFa/2wdizpwQsodN7NTv3AuIYUIVOmTQUe0XEiWzfbhRrGvRe/6AQYV9D1IZ7b6SGuw6SILkQkcQIFstL1fa46f0LN4tKztop232MxtYqWymMJcAfK1iuqfYgPXwcfoDLnKwsqk/SMN+mGUVhFdvYM4P+qJpXtJU0EfUGM1YGWMH/iml8hoF3SwY9s0ouwt0iBS5zn8iZrpvjJ1/g7WvQ1tZOmmMg8Pb3d/EB6CQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pmNww2X8yBE+p66gp08L+YaqPUcK4zCTMo5f2J6NXB4=;
 b=BktKO+Gr3j0YRVa0bA68xaq+Mc2SRrnjMuw8Ick+MRNkJuTc1J2Z25FVY2iANEIW2i9L2iA/qY9Sp66rI8+QVtWtVHSYIU/BjV8XKI0k30/K5ZpSNG5GrZySafVKgFQl0b9Tm4cqpbG4HhgwCK0NvGaS6PcONFUUnmGatYXernu+0RB01DP+Ke6im6GdhZSXcAnFDWQ+6xbOsmXRjCqu5W7SGAkUyfCw+w9F41Bx4P0oKlL3DRWfDeJ4akgvV/1+3PIfwPf3XjS8dENuclLMZbvm8HLdaygIA8GU/oMq65DJFOK3tA3AsCZyuU3TJBzuGKK6DlkKJKCIJfEXkCofGQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pmNww2X8yBE+p66gp08L+YaqPUcK4zCTMo5f2J6NXB4=;
 b=6dD9/+kZ4jb3wgbvTX+J3vpKNBvOqHWKmYYSQPnYA17oXsvitsK+ww6OSgWXfNa2JUALpun1tVxror5lus1PucOon4fn3XbwU3GuRFsyX4MWRLOtFwRPDjg460AIuIfxbBlgbSqP5PJcvOXebpy5WDp6nVpsebXwlNhHMrTAu/E=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Luca Fancellu
	<Luca.Fancellu@arm.com>, "michal.orzel@amd.com" <michal.orzel@amd.com>,
	Julien Grall <jgrall@amazon.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 2/5] xen/arm64: Rework the memory layout
Thread-Topic: [PATCH v7 2/5] xen/arm64: Rework the memory layout
Thread-Index: AQHZcHBTPKRFJKI7hUu6krfNforKJa8xGWKAgAABeAA=
Date: Tue, 18 Apr 2023 13:58:45 +0000
Message-ID: <1AB976E8-4A52-4392-899B-7ECFEF12F8E9@arm.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-3-julien@xen.org>
 <D357504A-B3BA-4058-98AF-973A7286D19A@arm.com>
In-Reply-To: <D357504A-B3BA-4058-98AF-973A7286D19A@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AS2PR08MB8479:EE_|AM7EUR03FT046:EE_|AM7PR08MB5431:EE_
X-MS-Office365-Filtering-Correlation-Id: 6b6bca53-3d58-462e-6232-08db40150f4f
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <3AA1E2DE678BE441AD7EB08A92B3BCC2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8479
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7214eae8-7ad1-43a3-a304-08db401506a8
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 13:58:59.7627
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6b6bca53-3d58-462e-6232-08db40150f4f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5431



> On 18 Apr 2023, at 15:53, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>
> Hi Julien,
>
>> On 16 Apr 2023, at 16:32, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Xen is currently not fully compliant with the Arm Arm because it will
>> switch the TTBR with the MMU on.
>>
>> In order to be compliant, we need to disable the MMU before
>> switching the TTBR. The implication is that the page tables should
>> contain an identity mapping of the code switching the TTBR.
>>
>> In most cases we expect Xen to be loaded in low memory. I am aware
>> of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
>> To give us some slack, consider that Xen may be loaded in the first 2TB
>> of the physical address space.
>>
>> The memory layout is reshuffled to keep the first four slots of the
>> zeroeth level free. All the regions currently in L0 slot 0 will now be
>> part of slot 4 (2TB). This requires a slight tweak of the boot code
>> because XEN_VIRT_START (2TB + 2MB) cannot be used as an immediate.
>>
>> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
>> loaded below 2TB.
>>
>> Lastly, take the opportunity to check at compile time whether any of the
>> regions may overlap with the reserved area for the identity mapping.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>
> With the 2 typos found by Michal fixed:
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

You can definitely fix them on commit :-)

Cheers
Bertrand

>
> Cheers
> Bertrand
>
>>
>> ----
>>   Changes in v7:
>>       - Remove all tags
>>       - Add BUILD_BUG_ON()s
>>       - Don't forget to update FRAMETABLE_VIRT_START and
>>         VMAP_VIRT_START
>>
>>   Changes in v6:
>>       - Correct the BUILD_BUG_ON(), Xen virtual address should be
>>         above 2TB (i.e. slot0 > 4).
>>       - Add Bertrand's reviewed-by
>>
>>   Changes in v5:
>>       - We are reserving 4 slots rather than 2.
>>       - Fix the addresses in the layout comment.
>>       - Fix the size of the region in the layout comment
>>       - Add Luca's tested-by (the reviewed-by was not added
>>         because of the changes requested by Michal)
>>       - Add Michal's reviewed-by
>>
>>   Changes in v4:
>>       - Correct the documentation
>>       - The start address is 2TB, so slot0 is 4 not 2.
>>
>>   Changes in v2:
>>       - Reword the commit message
>>       - Load Xen at 2TB + 2MB
>>       - Update the documentation to reflect the new layout
>> ---
>> xen/arch/arm/arm64/head.S         |  3 ++-
>> xen/arch/arm/include/asm/config.h | 38 +++++++++++++++++++++----------
>> xen/arch/arm/mm.c                 | 23 +++++++++++++++----
>> 3 files changed, 46 insertions(+), 18 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>> index 4a3f87117c83..663f5813b12e 100644
>> --- a/xen/arch/arm/arm64/head.S
>> +++ b/xen/arch/arm/arm64/head.S
>> @@ -607,7 +607,8 @@ create_page_tables:
>>         * need an additional 1:1 mapping, the virtual mapping will
>>         * suffice.
>>         */
>> -        cmp   x19, #XEN_VIRT_START
>> +        ldr   x0, =XEN_VIRT_START
>> +        cmp   x19, x0
>>        bne   1f
>>        ret
>> 1:
>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/as=
m/config.h
>> index 5df0e4c4959b..2cfe5e480256 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -72,16 +72,13 @@
>> #include <xen/page-size.h>
>>=20
>> /*
>> - * Common ARM32 and ARM64 layout:
>> + * ARM32 layout:
>> *   0  -   2M   Unmapped
>> *   2M -   4M   Xen text, data, bss
>> *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>> *   6M -  10M   Early boot mapping of FDT
>> *   10M - 12M   Livepatch vmap (if compiled in)
>> *
>> - * ARM32 layout:
>> - *   0  -  12M   <COMMON>
>> - *
>> *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
>> * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>> *                    space
>> @@ -90,14 +87,23 @@
>> *   2G -   4G   Domheap: on-demand-mapped
>> *
>> * ARM64 layout:
>> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
>> - *   0  -  12M   <COMMON>
>> + * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
>> + *
>> + *  Reserved to identity map Xen
>> + *
>> + * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4]
>> + *  (Relative offsets)
>> + *   0  -   2M   Unmapped
>> + *   2M -   4M   Xen text, data, bss
>> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>> + *   6M -  10M   Early boot mapping of FDT
>> + *  10M -  12M   Livepatch vmap (if compiled in)
>> *
>> *   1G -   2G   VMAP: ioremap and early_ioremap
>> *
>> *  32G -  64G   Frametable: 56 bytes per page for 2TB of RAM
>> *
>> - * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
>> + * 0x0000028000000000 - 0x00007fffffffffff (125TB, L0 slots [5..255])
>> *  Unused
>> *
>> * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
>> @@ -107,7 +113,17 @@
>> *  Unused
>> */
>>
>> +#ifdef CONFIG_ARM_32
>> #define XEN_VIRT_START          _AT(vaddr_t, MB(2))
>> +#else
>> +
>> +#define SLOT0_ENTRY_BITS  39
>> +#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
>> +#define SLOT0_ENTRY_SIZE  SLOT0(1)
>> +
>> +#define XEN_VIRT_START          (SLOT0(4) + _AT(vaddr_t, MB(2)))
>> +#endif
>> +
>> #define XEN_VIRT_SIZE           _AT(vaddr_t, MB(2))
>>
>> #define FIXMAP_VIRT_START       (XEN_VIRT_START + XEN_VIRT_SIZE)
>> @@ -163,14 +179,12 @@
>>
>> #else /* ARM_64 */
>>=20
>> -#define SLOT0_ENTRY_BITS  39
>> -#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
>> -#define SLOT0_ENTRY_SIZE  SLOT0(1)
>> +#define IDENTITY_MAPPING_AREA_NR_L0  4
>>
>> -#define VMAP_VIRT_START  GB(1)
>> +#define VMAP_VIRT_START  (SLOT0(4) + GB(1))
>> #define VMAP_VIRT_SIZE   GB(1)
>>=20
>> -#define FRAMETABLE_VIRT_START  GB(32)
>> +#define FRAMETABLE_VIRT_START  (SLOT0(4) + GB(32))
>> #define FRAMETABLE_SIZE        GB(32)
>> #define FRAMETABLE_NR          (FRAMETABLE_SIZE / sizeof(*frame_table))
>>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index b99806af996c..1d09d61dd922 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -153,7 +153,19 @@ static void __init __maybe_unused build_assertions(void)
>> #endif
>>    /* Page table structure constraints */
>> #ifdef CONFIG_ARM_64
>> -    BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
>> +    /*
>> +     * The first few slots of the L0 table are reserved for the identity
>> +     * mapping. Check that none of the other regions are overlapping
>> +     * with it.
>> +     */
>> +#define CHECK_OVERLAP_WITH_IDMAP(virt) \
>> +    BUILD_BUG_ON(zeroeth_table_offset(virt) < IDENTITY_MAPPING_AREA_NR_L0)
>> +
>> +    CHECK_OVERLAP_WITH_IDMAP(XEN_VIRT_START);
>> +    CHECK_OVERLAP_WITH_IDMAP(VMAP_VIRT_START);
>> +    CHECK_OVERLAP_WITH_IDMAP(FRAMETABLE_VIRT_START);
>> +    CHECK_OVERLAP_WITH_IDMAP(DIRECTMAP_VIRT_START);
>> +#undef CHECK_OVERLAP_WITH_IDMAP
>> #endif
>>    BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
>> #ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
>> @@ -496,10 +508,11 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
>>    phys_offset = boot_phys_offset;
>>
>> #ifdef CONFIG_ARM_64
>> -    p = (void *) xen_pgtable;
>> -    p[0] = pte_of_xenaddr((uintptr_t)xen_first);
>> -    p[0].pt.table = 1;
>> -    p[0].pt.xn = 0;
>> +    pte = pte_of_xenaddr((uintptr_t)xen_first);
>> +    pte.pt.table = 1;
>> +    pte.pt.xn = 0;
>> +    xen_pgtable[zeroeth_table_offset(XEN_VIRT_START)] = pte;
>> +
>>    p =3D (void *) xen_first;
>> #else
>>    p =3D (void *) cpu0_pgtable;
>> -- 
>> 2.39.2




From xen-devel-bounces@lists.xenproject.org Tue Apr 18 14:15:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 14:15:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522933.812593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pom6p-0002U0-V2; Tue, 18 Apr 2023 14:15:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522933.812593; Tue, 18 Apr 2023 14:15:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pom6p-0002Tt-Qs; Tue, 18 Apr 2023 14:15:03 +0000
Received: by outflank-mailman (input) for mailman id 522933;
 Tue, 18 Apr 2023 14:15:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EUq5=AJ=citrix.com=prvs=465f4c9e2=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pom6p-0002Tn-8A
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 14:15:03 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 678552c2-ddf3-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 16:15:01 +0200 (CEST)
Received: from mail-dm6nam12lp2170.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 10:14:59 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by MW4PR03MB6443.namprd03.prod.outlook.com (2603:10b6:303:122::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 14:14:57 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 14:14:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 678552c2-ddf3-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681827301;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=GotnlFM9qrukBsZi1IP8oU3vONNWmAYOaty+glrG1Jo=;
  b=D4e5sjq68/DMNudUXWYWXXRJYxoteR/pNrNP/JLopqmmC5Wds6hqvPCm
   +3psw0kaEdhC3Vw6vKb73nOR4Dz0SE+4mVx1XC3v1/1R4iuImuiWLrFDb
   LMpEASwgHH14Et2ZbD4nyVavsqCkmcaHJUyZ1ZS/7fSu/o+zeFDYJgG+u
   Y=;
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U28vIQqkB/2JUOxtlde5/DKuEjXYh0ckFK37v6OTpQpjLgyqKDqvLdTl1qRgMio0J3USR9fG7FnBdB9uyy0bbRzdzZ0trhvrARQOtid+ATr2mx1t6Zl+7Ng93HWq/jrFa5QSdvxBB54MetXZBsznYMBil5hEdUnZwC1xA+6H7/9ZU9u9/2EheySgFpzD4DidL5rxrM5oXTyZ5Q1uvQmkU0y3LgFsPyuMV//LxnMccRUbHRAq6X8g9DP2v1BnlRizZ9u5jw5BwJAZSPdCOkgRtg5RBqUwyPBA86qImyW81onJWOF1eU926+2n+zRhNovQYWEwUjQ9+SJPXDtOLm6tdA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=iajh/4buaGwggkintFP9KUyTMsoP4KFgsTOOytx99xw=;
 b=VGhtUaEdwzVNbxpPKBwEMyPb9HxpnyjbmnQw9cqk1T9x1dpvrex23wc5aLYZtk6cSXusYHERu1xDjwF5NNOttzvoG484P2ZVV9MknwesPiSeyVpdfgm0Eu5477YltmRuYyOfsGyMQ4JdymYYmi81qAAM/z7OuHuQ0z/7RtbeuID+GvX7mv0yC05SSjK4wU/+Z6RfGbnxmYarZotbm5XZvRZ8ZViWxLgIjSQQrGTxDY5PoouHJxSmJB5LtUtdd0h4T40fn5tBkJidKWB/xOb4yEJIv+L3FZVFp4tYVkSB83yzVWYyMENG9EPXe4uRSsl2H91eRa9sAZiO0BVAoY/lWA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iajh/4buaGwggkintFP9KUyTMsoP4KFgsTOOytx99xw=;
 b=drkubDVRHd91dUUXwWpj7ppa/imNluzlmhCRYzxjhkw/JSVGKMLxhm+7KTFRx48u89FmhAYCeFEFswEiLJpFxNIx6iFReI553238BdrKE/s/tKZd6SewH38Z30V1OxxA8ZRT9xRW3CP0yvJF9PIDj35wSM6SZWgYNBMpVc1ULgI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 18 Apr 2023 16:14:51 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Message-ID: <ZD6l2wcK00O7nSlS@Air-de-Roger>
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <fc9085f6-25b7-b94f-e7d3-ebc1d6283d73@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <fc9085f6-25b7-b94f-e7d3-ebc1d6283d73@citrix.com>
X-ClientProxiedBy: LO4P265CA0067.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2af::17) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|MW4PR03MB6443:EE_
X-MS-Office365-Filtering-Correlation-Id: e1678d72-51eb-4fe5-a579-08db401749dd
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e1678d72-51eb-4fe5-a579-08db401749dd
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 14:14:57.3592
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1hIiwCoJi4Orz4Pe9OtsEb5d5zte/DI9lZDLGSjA9NTT90a1GXl7yBwSqF2N9GX7HLpQVEmeQJ+QcrLCOZf4qA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6443

On Tue, Apr 18, 2023 at 01:17:55PM +0100, Andrew Cooper wrote:
> On 18/04/2023 10:24 am, Roger Pau Monne wrote:
> 
> > diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
> > index 7675a59ff057..c204634910c4 100644
> > --- a/xen/arch/x86/x86_64/entry.S
> > +++ b/xen/arch/x86/x86_64/entry.S
> > @@ -660,7 +660,7 @@ ENTRY(early_page_fault)
> >  
> >          ALIGN
> >  /* No special register assumptions. */
> > -restore_all_xen:
> > +START_LP(restore_all_xen)
> >          /*
> >           * Check whether we need to switch to the per-CPU page tables, in
> >           * case we return to late PV exit code (from an NMI or #MC).
> > @@ -677,6 +677,7 @@ UNLIKELY_END(exit_cr3)
> >  
> >          RESTORE_ALL adj=8
> >          iretq
> > +END_LP(restore_all_xen)
> 
> 
> While it's useful to have a concrete idea of what is necessary to fix
> all of this, I do not wish to put in markers like this.  This isn't
> about livepatching - it's about getting sane ELF metadata.

Right, Jan has expressed a similar opinion.

> This is why I had Jane work on using the Linux macros.  They account for
> *all* interesting ELF metadata, as well as taking care of things like
> the global function alignment settings, CFI patching space, etc.

I don't see much issue in using the Linux macros, if we agree we want
them.  Some of them would likely be unused; do we want to import the
set wholesale, or just introduce the ones required?  Initially I might
just introduce SYM_FUNC_START{,_LOCAL}() and SYM_FUNC_END().
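For reference, the annotations being discussed look roughly like the sketch
below (simplified from the idea behind Linux's include/linux/linkage.h, and
expressed here as GAS macros for brevity; the real Linux versions are C
preprocessor macros that also handle alignment, CFI padding space and
per-architecture overrides):

```asm
/* Sketch only: emit proper ELF symbol type/size metadata around a function. */
.macro SYM_FUNC_START name
        .globl \name
        .type  \name, @function
\name:
.endm

.macro SYM_FUNC_START_LOCAL name
        .type  \name, @function
\name:
.endm

.macro SYM_FUNC_END name
        /* .size is what gives tools (including livepatch) sane ELF metadata. */
        .size  \name, . - \name
.endm
```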

> Putting functions in separate sections should be hidden in the normal
> SYM_FUNC_START(), and dependent on CONFIG_SPLIT_SECTIONS behind the
> scenes seeing as we have that as an option already.

Sure.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 14:25:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 14:25:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522939.812603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pomGp-0003zo-Tq; Tue, 18 Apr 2023 14:25:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522939.812603; Tue, 18 Apr 2023 14:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pomGp-0003zh-Qx; Tue, 18 Apr 2023 14:25:23 +0000
Received: by outflank-mailman (input) for mailman id 522939;
 Tue, 18 Apr 2023 14:25:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pomGo-0003zb-7S
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 14:25:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pomGl-0006mm-5v; Tue, 18 Apr 2023 14:25:19 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=[192.168.26.51]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pomGk-0004F7-V9; Tue, 18 Apr 2023 14:25:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=jr0CkdzxrtGTXuZcfeBcPc5F9ABh5BiZuEkLntsQRRs=; b=nkDpxpwzemHnIVwJX6r+JU7TnR
	qCKAeVd+Evk5JdUkhUoEXltTw0loXl8JQ+BYp3n7GvyczfigJB+Uw5TfkE4esHvl6A8IVRZg8zZDe
	Q/G3vsLhsUTgRX5Km+rDnV08Ye64T0R5XY5gbISJgQ9xyJGBbZy44PdxKjY6/eHPG5C0=;
Message-ID: <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
Date: Tue, 18 Apr 2023 15:25:15 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 00/12] SVE feature for arm guests
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 18/04/2023 14:13, Bertrand Marquis wrote:
> Hi Luca,

Hi,

> On this series I would like to open a discussion on how to handle the vector size
> and the corresponding command line / configuration / device tree parameters.
> 
> In general the user must either give the vector size they want or have a way to
> simply request the maximum supported size.
> 
> In the current implementation if a size bigger than the supported one is provided:
> - we silently disable SVE for dom0
> - we silently disable SVE for dom0less
> - we do not create a guest when done through tools
> 
> This is not completely coherent and I think we should aim for coherent behaviour
> unless we have arguments for the current state.

+1.

> Is there any good reason to silently disable for Dom0 and dom0less only?
> 
> I see some possible solutions here:
> 
> - modify the parameter behaviour to clamp to the supported size if the requested
> value is bigger. This would at least keep SVE enabled if a VM depends on it, and
> could simplify some of the handling by letting 2048 mean the maximum supported size.

My concern with this approach and the third one is that the user may take 
some time to realize the problem in the xl.cfg. So...

> 
> - coherently stop if the parameter value is not supported (including if sve is
> not supported)

... this is my preferred approach because it would be clear that the 
value passed to Xen is bogus.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 14:39:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 14:39:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522945.812613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pomU1-0005Xp-3x; Tue, 18 Apr 2023 14:39:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522945.812613; Tue, 18 Apr 2023 14:39:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pomU1-0005Xi-1G; Tue, 18 Apr 2023 14:39:01 +0000
Received: by outflank-mailman (input) for mailman id 522945;
 Tue, 18 Apr 2023 14:38:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BdQL=AJ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pomTz-0005XB-C6
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 14:38:59 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20608.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::608])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c0ad611f-ddf6-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 16:38:58 +0200 (CEST)
Received: from AS9PR06CA0394.eurprd06.prod.outlook.com (2603:10a6:20b:461::27)
 by DU0PR08MB8090.eurprd08.prod.outlook.com (2603:10a6:10:3e9::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 14:38:56 +0000
Received: from AM7EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:461:cafe::b5) by AS9PR06CA0394.outlook.office365.com
 (2603:10a6:20b:461::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Tue, 18 Apr 2023 14:38:56 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT034.mail.protection.outlook.com (100.127.140.87) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.20 via Frontend Transport; Tue, 18 Apr 2023 14:38:56 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Tue, 18 Apr 2023 14:38:56 +0000
Received: from fbd32eaa3ee3.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4CEF2662-0269-499D-A059-62FD0BF2E09B.1; 
 Tue, 18 Apr 2023 14:38:55 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id fbd32eaa3ee3.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 Apr 2023 14:38:55 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS4PR08MB8069.eurprd08.prod.outlook.com (2603:10a6:20b:588::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 14:38:36 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 14:38:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0ad611f-ddf6-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0EljlqRfyBN5u+AH1HiyHn2aQySznZuUmDSCl1CQLBk=;
 b=WgJDP4m2tu+uq9RO4gSQiIzM4YXBb9SUy/d6sZSXrBowyRXIRhU56nJvZvgPBOSUM8Ha1zo+urqBuyH16evPmgfWi555+m6UjjwPQyYJGHHFF2XEyI0CyoqqgY3ZTjLQIGuLKAwFCGz95udU9PKPJ/JLS7opFT6ZVoXtC+vcQEY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0baeb10b0392f62e
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RJQLbCLWUn9N0l2FtTkWuWDoi2XLZluyPrjtfa0w2wDmnF2nB+UKTAzQOWFRgRE6hw5+LaE+PLNqd1a8QRfz/ihVozllX7LI2NQnQ335ik1Ymb9bS1GB93SncBD9HEfmTcDk2nA+9hc9u4iHqAqfDl0UylUiwlfPIOjy823SDMmQfKt8ayndDtRAhZriwzptguQfUvluMmJWNT3aaybMZ+2wuP61ljv064bdxxx7WnWA+pTAwvtct2kD2qyjLrp1eecFEXHfEvmoWgX9Ii8HKekT3WVE/2FcxZ0FA2QqKIkL5Mc2BgjbHhgfsPVuM+jHA7c9VScMMFvx6o4cSrRLng==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0EljlqRfyBN5u+AH1HiyHn2aQySznZuUmDSCl1CQLBk=;
 b=T8uFi8y9Q8DzJ1/KYjhPu6Yjn4LTi7qKHKanKjhxTMeCB1wR688EtFPdvFpLpFRACraPCRUf3G2SCCe9EAG6VrZmXwbY0/O2cMJsw/ATmBwWfr1uksc2mbssep6M5vWW24rDmDH5bLkaDQw05R1YVaDzM6+nGba4waCD9Hj2ScIAE/+M1dvq1on0Ubp7Ky/Ou5hFNTeDUqhMLjLB4lr7GI16ppVRycTHTXGvfp5mug0yq5GKiuD5Vx8iIrKGhYst6qNWG8LRAUhLLuUPUjQuSnYQnYK4D/+NDt5qjENqh5yrgzGJvw2a/vn19vUWt48fqjKk20FRrjegAa2crQDmsQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0EljlqRfyBN5u+AH1HiyHn2aQySznZuUmDSCl1CQLBk=;
 b=WgJDP4m2tu+uq9RO4gSQiIzM4YXBb9SUy/d6sZSXrBowyRXIRhU56nJvZvgPBOSUM8Ha1zo+urqBuyH16evPmgfWi555+m6UjjwPQyYJGHHFF2XEyI0CyoqqgY3ZTjLQIGuLKAwFCGz95udU9PKPJ/JLS7opFT6ZVoXtC+vcQEY=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>, =?iso-8859-1?Q?Roger_Pau_Monn=E9?=
	<roger.pau@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>, Christian
 Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>,
	=?iso-8859-1?Q?Marek_Marczykowski-G=F3recki?=
	<marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>, Community
 Manager <community.manager@xenproject.org>
Subject: Re: [PATCH v5 00/12] SVE feature for arm guests
Thread-Topic: [PATCH v5 00/12] SVE feature for arm guests
Thread-Index: AQHZbSQqKlfgrZX3xkqta7+H+1gq3K8xFMoAgAAUHYCAAAOtgA==
Date: Tue, 18 Apr 2023 14:38:36 +0000
Message-ID: <5D0FF62F-AA83-4A35-8A39-74A2F0D29603@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
 <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
In-Reply-To: <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi Julien,

> On 18 Apr 2023, at 16:25, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 18/04/2023 14:13, Bertrand Marquis wrote:
>> Hi Luca,
>
> Hi,
>
>> On this series I would like to open a discussion on how to handle the vector size
>> and the corresponding command line / configuration / device tree parameters.
>> In general the user must either give the vector size they want or have a way to
>> simply request the maximum supported size.
>> In the current implementation, if a size bigger than the supported one is provided:
>> - we silently disable SVE for dom0
>> - we silently disable SVE for dom0less
>> - we do not create the guest when done through the tools
>> This is not completely coherent, and I think we should aim for coherent behaviour
>> unless we have arguments for the current status.
>
> +1.
>
>> Is there any good reason to silently disable for dom0 and dom0less only?
>> I see some possible solutions here:
>> - modify the parameter behaviour to use the supported size if the parameter is
>> bigger than it. This would at least keep SVE enabled if a VM depends on it, and
>> could simplify some of the handling by using 2048 to request the maximum
>> supported size.
>
> My concern with this approach and the third one is that the user may take some
> time to realize the problem in the xl.cfg. So...

Good point

>
>> - coherently stop if the parameter value is not supported (including if SVE is
>> not supported)
>
> ... this is my preferred approach because it would be clear that the value
> passed to Xen is bogus.

I agree: we should not silently ignore configuration parameters or try to
"fix" them.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 14:41:30 2023
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>,
	Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
	Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	Roger Pau Monné <roger.pau@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Anthony PERARD <anthony.perard@citrix.com>, Juergen
	Gross <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Marek Marczykowski-Górecki
	<marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: Re: [PATCH v5 00/12] SVE feature for arm guests
Thread-Topic: [PATCH v5 00/12] SVE feature for arm guests
Thread-Index: AQHZbSQ8i9h7etA7H0yQyBJ8Cc6LPa8xFNYAgAAUEICAAAO7AIAAAKcA
Date: Tue, 18 Apr 2023 14:41:07 +0000
Message-ID: <36E824E5-B33C-4F98-8A29-9C642AC3F7D7@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
 <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
 <5D0FF62F-AA83-4A35-8A39-74A2F0D29603@arm.com>
In-Reply-To: <5D0FF62F-AA83-4A35-8A39-74A2F0D29603@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US



> On 18 Apr 2023, at 15:38, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>
> Hi Julien,
>
>> On 18 Apr 2023, at 16:25, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 18/04/2023 14:13, Bertrand Marquis wrote:
>>> Hi Luca,
>>
>> Hi,
>>
>>> On this series I would like to open a discussion on how to handle the vector size
>>> and the corresponding command line / configuration / device tree parameters.
>>> In general the user must either give the vector size they want or have a way to
>>> simply request the maximum supported size.
>>> In the current implementation, if a size bigger than the supported one is provided:
>>> - we silently disable SVE for dom0
>>> - we silently disable SVE for dom0less
>>> - we do not create the guest when done through the tools
>>> This is not completely coherent, and I think we should aim for coherent behaviour
>>> unless we have arguments for the current status.
>>
>> +1.
>>
>>> Is there any good reason to silently disable for dom0 and dom0less only?
>>> I see some possible solutions here:
>>> - modify the parameter behaviour to use the supported size if the parameter is
>>> bigger than it. This would at least keep SVE enabled if a VM depends on it, and
>>> could simplify some of the handling by using 2048 to request the maximum
>>> supported size.
>>
>> My concern with this approach and the third one is that the user may take some
>> time to realize the problem in the xl.cfg. So...
>
> Good point
>
>>
>>> - coherently stop if the parameter value is not supported (including if SVE is
>>> not supported)
>>
>> ... this is my preferred approach because it would be clear that the value
>> passed to Xen is bogus.
>
> I agree: we should not silently ignore configuration parameters or try to
> "fix" them.

Hi Bertrand and Julien,

Ok, I will update the series to stop the domain creation if the parameter
supplied is wrong or SVE is not supported by the platform.

>
> Cheers
> Bertrand




From xen-devel-bounces@lists.xenproject.org Tue Apr 18 14:54:09 2023
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH v2] CHANGELOG: add gnttab_max_{maptrack_,}frames option changes
Date: Tue, 18 Apr 2023 16:53:27 +0200
Message-Id: <20230418145327.19352-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0667.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:316::20) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SJ0PR03MB5854:EE_
X-MS-Office365-Filtering-Correlation-Id: 7746041f-9bc1-4005-06b5-08db401cb027
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1AvQ8M1M6HHTLr3Zm71MVU2j6jlw3F/wfoZ39HVNHedpsxolHDvf6cLtLdu7hiSpJRRehsN+Qfs6r6q2krsy4WXZi0H/EPJMvZqJ/TbGfTuOPZBW0/gw7fRoEHMYRRQHW6q0KOlPTMrMahyjqRYZPi5JCIhnmyhHLU6JIO6mEX4rMJ0Z5naQ2zlGanfIlriLc1EqPDbu3+MHuxIKfdc9+opT5moMRMPc2m5ymqwSWPv+i8Dc9viL1gOhI07OCBqC7hUfSRrgSpDG6JZAvoEEgtJxb8RObw3DtfryQkaGT5WhhHuW+B7TI4bj+L3c1JT7tFpjrPvlUAzPXK4AaY2cZEleX8/J1q6rspOzEP6x4Yyw/rJo62ASK5jAfu8vdNOpHKYgLMKvn0YbVkpa4wxEPA2+YWl3TePiOBs8WyYnbXYrjA+UGgjv7qyq18BURoOwAt16brN66fLpgiM+DT8lt7VzdWKwUWJj8Tls/Ymwm9gzVhYEoSQAiQWFmgeTfOHIInSDUrRCJi0Mr+nQkvddVz6UX5O7cOm4aO3fnHoPNT0xFp4h2Bfj9omy1UNS4yBusoW5DnkL2mcnqnXB8xVPt/M1jyaoEPavwduKVUUGiTUborMjWvC1wmiTZ8pTb8Jy
X-Forefront-Antispam-Report:
X-OriginatorOrg: citrix.com

Note in the changelog that the purpose of
gnttab_max_{maptrack_,}frames command line options has been changed.

Fixes: b2ea81d2b935 ('xen/grants: repurpose command line max options')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Henry Wang <Henry.Wang@arm.com>
---
Changes since v1:
 - Introduce and move to 'Changed' section.
---
 CHANGELOG.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index c978cfd9b68f..5dbf8b06d72c 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,10 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 
 ## [unstable UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=staging) - TBD
 
+### Changed
+ - Repurpose command line gnttab_max_{maptrack_,}frames options so they don't
+   cap toolstack provided values.
+
 ### Added
  - On x86, support for features new in Intel Sapphire Rapids CPUs:
    - PKS (Protection Key Supervisor) available to HVM/PVH guests.
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 15:01:28 2023
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>,
	Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	Roger Pau Monné <roger.pau@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Anthony PERARD <anthony.perard@citrix.com>, Juergen
 Gross <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Marek Marczykowski-Górecki
	<marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>, Community
 Manager <community.manager@xenproject.org>
Subject: Re: [PATCH v5 00/12] SVE feature for arm guests
Thread-Topic: [PATCH v5 00/12] SVE feature for arm guests
Thread-Index: AQHZbSQqKlfgrZX3xkqta7+H+1gq3K8xFMoAgAAUHYCAAAOtgIAAAMGAgAAFWgA=
Date: Tue, 18 Apr 2023 15:00:28 +0000
Message-ID: <59A4EC98-2087-479F-B585-BDA1FF9B6B08@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
 <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
 <5D0FF62F-AA83-4A35-8A39-74A2F0D29603@arm.com>
 <36E824E5-B33C-4F98-8A29-9C642AC3F7D7@arm.com>
In-Reply-To: <36E824E5-B33C-4F98-8A29-9C642AC3F7D7@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <34C76A398557C8438D2BAA9CD332CFE6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Luca,

> On 18 Apr 2023, at 16:41, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> 
> 
>> On 18 Apr 2023, at 15:38, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>> 
>> Hi Julien,
>> 
>>> On 18 Apr 2023, at 16:25, Julien Grall <julien@xen.org> wrote:
>>> 
>>> 
>>> 
>>> On 18/04/2023 14:13, Bertrand Marquis wrote:
>>>> Hi Luca,
>>> 
>>> Hi,
>>> 
>>>> On this series I would like to open a discussion on how to handle the vector
>>>> size and the corresponding command line / configuration / device tree
>>>> parameters.
>>>> In general the user must either give the vector size it wants or have a way
>>>> to just request the maximum supported size.
>>>> In the current implementation, if a size bigger than the supported one is
>>>> provided:
>>>> - we silently disable SVE for dom0
>>>> - we silently disable SVE for dom0less
>>>> - we do not create a guest when done through tools
>>>> This is not completely coherent, and I think we should aim for coherent
>>>> behaviour unless we have arguments for the current one.
>>> 
>>> +1.
>>> 
>>>> Is there any good reason to silently disable for dom0 and dom0less only?
>>>> I see some possible solutions here:
>>>> - modify the parameter behaviour to use the supported size if the parameter
>>>> is bigger than it. This would at least keep SVE enabled if a VM depends on
>>>> it and could simplify some of the handling by using 2048 to request the
>>>> maximum supported size.
>>> 
>>> My concern with this approach and the third one is that the user may take
>>> some time to realize the problem in the xl.cfg. So...
>> 
>> Good point
>> 
>>> 
>>>> - coherently stop if the parameter value is not supported (including if
>>>> SVE is not supported)
>>> 
>>> ... this is my preferred approach because it would be clear that the value
>>> passed to Xen is bogus.
>> 
>> I agree: we should not silently ignore configuration parameters or try to
>> "fix" them.
> 
> Hi Bertrand and Julien,
> 
> OK, I will update the series to stop the domain creation if the parameter
> supplied is wrong or SVE is not supported by the platform.

Thanks

Bertrand

> 
>> 
>> Cheers
>> Bertrand




From xen-devel-bounces@lists.xenproject.org Tue Apr 18 15:21:44 2023
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Jan Beulich <jbeulich@suse.com>, Roger
 Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
Subject: Re: [PATCH v3] x86/livepatch: Fix livepatch application when CET is
 active
Thread-Topic: [PATCH v3] x86/livepatch: Fix livepatch application when CET is
 active
Thread-Index: AQHZceZuos1XEE2/ekWAsw5hIrlFZa8xLr3Z
Date: Tue, 18 Apr 2023 15:21:18 +0000
Message-ID:
 <DM6PR03MB53728958A3638343888DFEB8F09D9@DM6PR03MB5372.namprd03.prod.outlook.com>
References: <20230418111032.487587-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230418111032.487587-1-andrew.cooper3@citrix.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: Tuesday, April 18, 2023 12:10 PM
> To: Xen-devel <xen-devel@lists.xenproject.org>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Jan Beulich <jbeulich@suse.com>;
> Roger Pau Monne <roger.pau@citrix.com>; Wei Liu <wl@xen.org>;
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Ross Lagerwall <ross.lagerwall@citrix.com>
> Subject: [PATCH v3] x86/livepatch: Fix livepatch application when CET is active
>
> Right now, trying to apply a livepatch on any system with CET shstk (AMD Zen3
> or later, Intel Tiger Lake or Sapphire Rapids and later) fails as follows:
>
>   (XEN) livepatch: lp: Verifying enabled expectations for all functions
>   (XEN) common/livepatch.c:1591: livepatch: lp: timeout is 30000000ns
>   (XEN) common/livepatch.c:1703: livepatch: lp: CPU28 - IPIing the other 127 CPUs
>   (XEN) livepatch: lp: Applying 1 functions
>   (XEN) hi_func: Hi! (called 1 times)
>   (XEN) Hook executing.
>   (XEN) Assertion 'local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu))' failed at arch/x86/smp.c:265
>   (XEN) *** DOUBLE FAULT ***
>   <many double faults>
>
> The assertion failure is from a global (system-wide) TLB flush initiated by
> modify_xen_mappings().  I'm not entirely sure when this broke, and I'm not
> sure exactly what causes the #DFs, but it doesn't really matter either,
> because they highlight a latent bug that I'd overlooked in the CET-SS vs
> patching work in the first place.
>
> While we're careful to arrange for the patching CPU to avoid encountering
> non-shstk memory with transient shstk perms, other CPUs can pick these
> mappings up too if they need to re-walk for uarch reasons.
>
> Another bug is that for livepatching, we only disable CET if shadow stacks are
> in use.  Running on Intel CET systems when Xen is only using CET-IBT will
> crash in arch_livepatch_quiesce() when trying to clear CR0.WP with CR4.CET
> still active.
>
> Also, we never went and cleared the dirty bits on .rodata.  This would
> matter (for the same reason it matters on .text - it becomes a valid target
> for WRSS), but we never actually patch .rodata anyway.
>
> Therefore, rework how we do patching for both alternatives and livepatches.
>
> Introduce modify_xen_mappings_lite() with a purpose similar to
> modify_xen_mappings(), but stripped down to the bare minimum as it's used in
> weird contexts.  Leave all complexity to the caller to handle.
>
> Instead of patching by clearing CR0.WP (and having to jump through some
> fragile hoops to disable CET in order to do this), just transiently relax the
> permissions on .text via l2_identmap[].
>
> Note that neither alternatives nor livepatching edit .rodata, so we don't
> need to relax those permissions at this juncture.
>
> The perms are relaxed globally, but this is safe enough.  Alternatives run
> before we boot APs, and livepatching runs in a quiesced state where the other
> CPUs are not doing anything interesting.
>
> This approach is far more robust.
>
> Fixes: 48cdc15a424f ("x86/alternatives: Clear CR4.CET when clearing CR0.WP")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com> (live patching bits)


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 15:36:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 15:36:34 +0000
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] xen/livepatch: Fix secure_payload() in non-debug builds
Thread-Topic: [PATCH] xen/livepatch: Fix secure_payload() in non-debug builds
Thread-Index: AQHZcRMmMLpASuHOmkGih8/02E4a3a8vUDkAgAHj1ME=
Date: Tue, 18 Apr 2023 15:36:21 +0000
Message-ID:
 <DM6PR03MB5372AD212F31482D7B451ABFF09D9@DM6PR03MB5372.namprd03.prod.outlook.com>
References: <20230417095815.3734434-1-andrew.cooper3@citrix.com>
 <2b8c7abe-9bc5-cd8f-b650-1de0205c4ee9@suse.com>
In-Reply-To: <2b8c7abe-9bc5-cd8f-b650-1de0205c4ee9@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Monday, April 17, 2023 11:41 AM
> To: Andrew Cooper <Andrew.Cooper3@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Ross Lagerwall
> <ross.lagerwall@citrix.com>; Xen-devel <xen-devel@lists.xenproject.org>
> Subject: Re: [PATCH] xen/livepatch: Fix secure_payload() in non-debug builds
>
> On 17.04.2023 11:58, Andrew Cooper wrote:
> > The ro_pages + rw_pages + text_pages != payload->pages check is not something
> > which is reasonable to skip at runtime.  Rewrite it to not be an ASSERT().
>
> Isn't this merely a sanity check? IOW, isn't returning -EINVAL in this case
> misleading, as it calls "invalid input" what really is an internal error
> in Xen? But anyway, I guess I'll leave this to the maintainers.

Yes, it looks like it is just a sanity check of the payload->pages
calculation in move_payload(). Since it is not dependent on the
payload, I think ASSERT() is correct.

Ross


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 15:42:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 15:42:50 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Date: Tue, 18 Apr 2023 17:42:23 +0200
Message-Id: <20230418154223.20181-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

The addition of the flags field in the vcpu_set_singleshot_timer in
505ef3ea8687 is an ABI breakage, as the size of the structure is
increased.

Remove the field addition and drop the implementation of the
VCPU_SSHOTTMR_future flag.  If a request provides an already-expired
timeout value, just inject the timer interrupt.

Bump the Xen interface version, and keep the flags field and
VCPU_SSHOTTMR_future available for guests using the old interface.

Note that removing the field from the vcpu_set_singleshot_timer
struct also allows removing the compat translation of the struct.

Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 CHANGELOG.md                    |  2 ++
 xen/common/compat/domain.c      | 18 +++++-------------
 xen/common/domain.c             | 13 ++++++++++---
 xen/include/public/vcpu.h       | 12 +++++++-----
 xen/include/public/xen-compat.h |  2 +-
 xen/include/xlat.lst            |  2 +-
 6 files changed, 26 insertions(+), 23 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5dbf8b06d72c..b0d9bf4edbda 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -9,6 +9,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 ### Changed
  - Repurpose command line gnttab_max_{maptrack_,}frames options so they don't
    cap toolstack provided values.
+ - Remove flags field from vcpu_set_singleshot_timer: its introduction was an
+   ABI breakage.
 
 ### Added
  - On x86, support for features new in Intel Sapphire Rapids CPUs:
diff --git a/xen/common/compat/domain.c b/xen/common/compat/domain.c
index c4254905359e..ffc73a9a1dc9 100644
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -16,6 +16,10 @@ EMIT_FILE;
 CHECK_vcpu_set_periodic_timer;
 #undef xen_vcpu_set_periodic_timer
 
+#define xen_vcpu_set_singleshot_timer vcpu_set_singleshot_timer
+CHECK_vcpu_set_singleshot_timer;
+#undef xen_vcpu_set_singleshot_timer
+
 #define xen_vcpu_info vcpu_info
 CHECK_SIZE_(struct, vcpu_info);
 #undef xen_vcpu_info
@@ -97,6 +101,7 @@ int compat_common_vcpu_op(int cmd, struct vcpu *v,
     case VCPUOP_is_up:
     case VCPUOP_set_periodic_timer:
     case VCPUOP_stop_periodic_timer:
+    case VCPUOP_set_singleshot_timer:
     case VCPUOP_stop_singleshot_timer:
     case VCPUOP_register_vcpu_info:
         rc = common_vcpu_op(cmd, v, arg);
@@ -116,19 +121,6 @@ int compat_common_vcpu_op(int cmd, struct vcpu *v,
         break;
     }
 
-    case VCPUOP_set_singleshot_timer:
-    {
-        struct compat_vcpu_set_singleshot_timer cmp;
-        struct vcpu_set_singleshot_timer *nat;
-
-        if ( copy_from_guest(&cmp, arg, 1) )
-            return -EFAULT;
-        nat = COMPAT_ARG_XLAT_VIRT_BASE;
-        XLAT_vcpu_set_singleshot_timer(nat, &cmp);
-        rc = do_vcpu_op(cmd, vcpuid, guest_handle_from_ptr(nat, void));
-        break;
-    }
-
     default:
         rc = -ENOSYS;
         break;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 626debbae095..6a440590fe2a 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1762,9 +1762,16 @@ long common_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( copy_from_guest(&set, arg, 1) )
             return -EFAULT;
 
-        if ( (set.flags & VCPU_SSHOTTMR_future) &&
-             (set.timeout_abs_ns < NOW()) )
-            return -ETIME;
+        if ( set.timeout_abs_ns < NOW() )
+        {
+            /*
+             * Simplify the logic if the timeout has already expired and just
+             * inject the event.
+             */
+            stop_timer(&v->singleshot_timer);
+            send_timer_event(v);
+            break;
+        }
 
         migrate_timer(&v->singleshot_timer, smp_processor_id());
         set_timer(&v->singleshot_timer, set.timeout_abs_ns);
diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 81a3b3a7438c..6d86a661bd67 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -144,15 +144,17 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_set_periodic_timer_t);
 #define VCPUOP_stop_singleshot_timer 9 /* arg == NULL */
 struct vcpu_set_singleshot_timer {
     uint64_t timeout_abs_ns;   /* Absolute system time value in nanoseconds. */
-    uint32_t flags;            /* VCPU_SSHOTTMR_??? */
+#if __XEN_INTERFACE_VERSION__ < 0x00040f00
+    uint32_t flags;            /* Ignored. */
+#endif
 };
 typedef struct vcpu_set_singleshot_timer vcpu_set_singleshot_timer_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_set_singleshot_timer_t);
 
-/* Flags to VCPUOP_set_singleshot_timer. */
- /* Require the timeout to be in the future (return -ETIME if it's passed). */
-#define _VCPU_SSHOTTMR_future (0)
-#define VCPU_SSHOTTMR_future  (1U << _VCPU_SSHOTTMR_future)
+#if __XEN_INTERFACE_VERSION__ < 0x00040f00
+/* Ignored. */
+#define VCPU_SSHOTTMR_future  1
+#endif
 
 /*
  * Register a memory location in the guest address space for the
diff --git a/xen/include/public/xen-compat.h b/xen/include/public/xen-compat.h
index 97fe6984989a..dc43cc9567c0 100644
--- a/xen/include/public/xen-compat.h
+++ b/xen/include/public/xen-compat.h
@@ -10,7 +10,7 @@
 #ifndef __XEN_PUBLIC_XEN_COMPAT_H__
 #define __XEN_PUBLIC_XEN_COMPAT_H__
 
-#define __XEN_LATEST_INTERFACE_VERSION__ 0x00040e00
+#define __XEN_LATEST_INTERFACE_VERSION__ 0x00040f00
 
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 /* Xen is built with matching headers and implements the latest interface. */
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index d601a8a98421..5463961ce26b 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -168,7 +168,7 @@
 ?	vcpu_register_vcpu_info		vcpu.h
 !	vcpu_runstate_info		vcpu.h
 ?	vcpu_set_periodic_timer		vcpu.h
-!	vcpu_set_singleshot_timer	vcpu.h
+?	vcpu_set_singleshot_timer	vcpu.h
 ?	build_id                        version.h
 ?	compile_info                    version.h
 ?	feature_info                    version.h
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 15:45:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 15:45:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522984.812682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ponW5-0007oD-QI; Tue, 18 Apr 2023 15:45:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522984.812682; Tue, 18 Apr 2023 15:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ponW5-0007o6-NY; Tue, 18 Apr 2023 15:45:13 +0000
Received: by outflank-mailman (input) for mailman id 522984;
 Tue, 18 Apr 2023 15:45:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7BjK=AJ=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1ponW3-0007mc-RH
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 15:45:11 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ffc16f2e-ddff-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 17:45:10 +0200 (CEST)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-650-5hH2RS9OPUeBpaTsb3BCaw-1; Tue, 18 Apr 2023 11:45:07 -0400
Received: by mail-wm1-f72.google.com with SMTP id
 5b1f17b1804b1-3f08ed462c0so42522965e9.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Apr 2023 08:45:07 -0700 (PDT)
Received: from ?IPV6:2003:cb:c715:3f00:7545:deb6:f2f4:27ef?
 (p200300cbc7153f007545deb6f2f427ef.dip0.t-ipconnect.de.
 [2003:cb:c715:3f00:7545:deb6:f2f4:27ef])
 by smtp.gmail.com with ESMTPSA id
 z10-20020a5d654a000000b002daeb108304sm13380984wrv.33.2023.04.18.08.45.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Apr 2023 08:45:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffc16f2e-ddff-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681832709;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=27YuiGTc/1Ayk3pIwHwifC0cK9cQSm41NYDsP6qMI7E=;
	b=CDPJWjw7922+HWw5MqXWuotkaTCG5XVEvGU1OSihcrxAc9NydIVybPt59TN32MBRtuA/5m
	UFxuV07/YeaK2h96WTHKE4Xc2Lrlnzyri5go6ZXhcHvV4A47wf/QxXJa4YP8FT4DCwQ8QM
	+Mu/7eMO8xZc5VF/cbvXjIiEcZcwDkE=
X-MC-Unique: 5hH2RS9OPUeBpaTsb3BCaw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681832706; x=1684424706;
        h=content-transfer-encoding:in-reply-to:organization:from:references
         :cc:to:content-language:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=27YuiGTc/1Ayk3pIwHwifC0cK9cQSm41NYDsP6qMI7E=;
        b=c8P0YZdRCMs56jb1Pyj9l4vwKmf//YYdNP4JteeKDm6u/tUZePhFzVSdRrTZeTvQz9
         +8DSJdJvpHxCm27go79Mx8EptBa2ovDKUKAV7HM2vpbHrK1QOcVnsrT/2ckLzta1/nsO
         pdAw8R8/sDrENW/ic6F+KIw44nUFVKu5LHIGd4ciWyrMul8GOe1hJU9kAFwVB+DSlOd/
         qNzAwAYxVHnX+QtXpsM/gLnHoc9dCRRPNTXeJvvrs8QG8z0xgY7R3cfuHvzVy1ayOGXq
         ZUFuDWrrZZBQAWdp3r+VdlDsY8et4uT/gD3goXbu4nola7YiNgWYSOFkuBZAUhcJyjdb
         XcWg==
X-Gm-Message-State: AAQBX9csBihzehVtQyX3S/Gl2HudE/QXYdYGBT+R5+azviqmrundvHV7
	dKTv4LCOX+6nroUZAKUGHFmyqFdg1fj66GIDYya8knZAeQ+IJnkTFrmzv4Fo2cAHdQSbQT8wl+N
	Uzj/KBkpw3vrvZ0RTkrXluccbrIw=
X-Received: by 2002:a5d:6dcc:0:b0:2ef:b8ae:8791 with SMTP id d12-20020a5d6dcc000000b002efb8ae8791mr2341926wrz.10.1681832706618;
        Tue, 18 Apr 2023 08:45:06 -0700 (PDT)
X-Google-Smtp-Source: AKy350boD2pgyaTgtT87B/OGZT/ZiXFytTcP4U4+Qx8x/yMODSTgYDG7+n20ySQtuSYJgFc2NPKTdg==
X-Received: by 2002:a5d:6dcc:0:b0:2ef:b8ae:8791 with SMTP id d12-20020a5d6dcc000000b002efb8ae8791mr2341904wrz.10.1681832706235;
        Tue, 18 Apr 2023 08:45:06 -0700 (PDT)
Message-ID: <da600570-51c7-8088-b46b-7524c9e66e5d@redhat.com>
Date: Tue, 18 Apr 2023 17:45:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH 01/33] s390: Use _pt_s390_gaddr for gmap address tracking
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
 Andrew Morton <akpm@linux-foundation.org>,
 Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev,
 linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
 linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org
References: <20230417205048.15870-1-vishal.moola@gmail.com>
 <20230417205048.15870-2-vishal.moola@gmail.com>
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
In-Reply-To: <20230417205048.15870-2-vishal.moola@gmail.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 17.04.23 22:50, Vishal Moola (Oracle) wrote:
> s390 uses page->index to keep track of page tables for the guest address
> space. In an attempt to consolidate the usage of page fields in s390,
> replace _pt_pad_2 with _pt_s390_gaddr to replace page->index in gmap.
> 
> This will help with the splitting of struct ptdesc from struct page, as
> well as allow s390 to use _pt_frag_refcount for fragmented page table
> tracking.
> 
> Since page->_pt_s390_gaddr aliases with mapping, ensure its set to NULL
> before freeing the pages as well.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---

[...]

> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 3fc9e680f174..2616d64c0e8c 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -144,7 +144,7 @@ struct page {
>   		struct {	/* Page table pages */
>   			unsigned long _pt_pad_1;	/* compound_head */
>   			pgtable_t pmd_huge_pte; /* protected by page->ptl */
> -			unsigned long _pt_pad_2;	/* mapping */
> +			unsigned long _pt_s390_gaddr;	/* mapping */
>   			union {
>   				struct mm_struct *pt_mm; /* x86 pgds only */
>   				atomic_t pt_frag_refcount; /* powerpc */

The confusing part is that these gmap page tables are not ordinary 
process page tables that we would usually place into this section 
here. That's why they are also not allocated/freed using the typical 
page table constructor/destructor ...

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Tue Apr 18 15:55:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 15:55:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522989.812692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ponfi-0000yA-PJ; Tue, 18 Apr 2023 15:55:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522989.812692; Tue, 18 Apr 2023 15:55:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ponfi-0000y3-Ma; Tue, 18 Apr 2023 15:55:10 +0000
Received: by outflank-mailman (input) for mailman id 522989;
 Tue, 18 Apr 2023 15:55:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TTWx=AJ=citrix.com=prvs=4659928b3=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ponfh-0000xu-AE
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 15:55:09 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 61a00cfd-de01-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 17:55:05 +0200 (CEST)
Received: from mail-bn8nam12lp2175.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 11:55:01 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5055.namprd03.prod.outlook.com (2603:10b6:208:1aa::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 15:54:56 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 15:54:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61a00cfd-de01-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681833304;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=5gkS6MMLCm9yrvjojfcGRnvcPbyUje4BRXnXYqdssbY=;
  b=cYkFvlfK4om1gpw4OGamBJKA5HE/q3FidyHFvA0bRjoP+XLIMRv88R0m
   dLZjiFDpL6gguDRHham/0j5CD5Nn8KH0qhxdeQ48tb0DZQCfAL7SBzyl6
   oBVm+OjrOyPWfbYWlTwpoa/LHf20RcuubMdiTqnDbzyqtlkP2Hj6MnRkr
   s=;
X-IronPort-RemoteIP: 104.47.55.175
X-IronPort-MID: 105879446
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,207,1677560400"; 
   d="scan'208";a="105879446"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tcPbJuOJMLDO1T3Gj0DzKmnUAlE+EEb0FngBro3YOgM=;
 b=jxXVoOgAPxuca+JJH0xZiAKUW8ctMusfqt1d+o4rWpKw3NlyJlyVQG9OhJT3OLdJaa4CItewyIZCYqUuT9T0sVsBEoe8sicXTS7h9hNBvIsctSwLqqADsrarGwGiU0R3HPqykMztrW9vwuSRKapEKYfQe1BzNedl9GTPFrGFMRw=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
Date: Tue, 18 Apr 2023 16:54:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Content-Language: en-GB
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230418154223.20181-1-roger.pau@citrix.com>
In-Reply-To: <20230418154223.20181-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO0P123CA0011.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:354::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MN2PR03MB5055:EE_
X-MS-Office365-Filtering-Correlation-Id: 89f3bed2-2fd6-49ca-b833-08db40254110
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 89f3bed2-2fd6-49ca-b833-08db40254110
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 15:54:55.7121
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pv/mHCjIXlIMZSQxrmYB4pLTQYZxPNO2TYQs2CT11ptkK7yjV0/v062EgnERjNE6Q/Vk3ns17u0AA1slBTmH8ePKrjiVi8PbJu+lBaxboro=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5055

On 18/04/2023 4:42 pm, Roger Pau Monne wrote:
> The addition of the flags field in the vcpu_set_singleshot_timer in
> 505ef3ea8687 is an ABI breakage, as the size of the structure is
> increased.
>
> Remove such field addition and drop the implementation of the
> VCPU_SSHOTTMR_future flag.  If a timer provides an expired timeout
> value just inject the timer interrupt.
>
> Bump the Xen interface version, and keep the flags field and
> VCPU_SSHOTTMR_future available for guests using the old interface.
>
> Note the removal of the field from the vcpu_set_singleshot_timer
> struct allows removing the compat translation of the struct.
>
> Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

While everything said is true, this isn't the reason to get rid of
VCPU_SSHOTTMR_future.

505ef3ea8687 does appear to have been an ABI break, but that's
incidental.  It dates from 2007, so whatever we have now is the de-facto
ABI, whether we like it or not.

The reason to delete this is that it is a monumentally stupid idea
which should have been rejected outright at the time, and the only
guest we can find which uses it also BUG_ON()'s in response to
-ETIME.

It can literally only be used to shoot yourself in the foot with, and
more recent Linuxes have dropped their use of it.

The structure needs to stay its current shape, and while it's fine to
hide VCPU_SSHOTTMR_future behind an interface version macro, we do
need to say that it is explicitly ignored.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 16:03:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 16:03:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522994.812703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ponn5-00030U-IY; Tue, 18 Apr 2023 16:02:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522994.812703; Tue, 18 Apr 2023 16:02:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ponn5-00030N-Fl; Tue, 18 Apr 2023 16:02:47 +0000
Received: by outflank-mailman (input) for mailman id 522994;
 Tue, 18 Apr 2023 16:02:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EUq5=AJ=citrix.com=prvs=465f4c9e2=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ponn4-00030H-W2
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 16:02:46 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74123969-de02-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 18:02:45 +0200 (CEST)
Received: from mail-dm6nam11lp2173.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 12:02:38 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by MN2PR03MB4927.namprd03.prod.outlook.com (2603:10b6:208:1a8::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 16:02:36 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 16:02:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74123969-de02-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681833765;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=N1krN1ooRmqcggwJyqWyLtHWJy0g0Cn4AoVgz6SsMG8=;
  b=bFSd9v/iX5NWIQzrtWJG7q53i57ED+SQ6Rpbn5IJe6R9OAI9VSGY9Zui
   wygIhz4HMXhhZwD0p+5QqTHSEo8JYnBp8poLQDQ0TtjZsBG7RgNzVlul9
   VF3MGyF6EpNXcao4OyNCV2zNnxceDP1sEFZvjQ2zGrztUSeQcT0NfXb6N
   I=;
X-IronPort-RemoteIP: 104.47.57.173
X-IronPort-MID: 104760151
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:/pz1tKnkFhUlj60hze4WBEPo5gwFJ0RdPkR7XQ2eYbSJt1+Wr1Gzt
 xIeDGjUaamCYmKme4p2PI3j/U4PuJHRmtRlSgA/rHxnFSMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icfHgqH2eIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7aWaVA8w5ARkPqgX5Q+GzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 fUaCzVdMz+Yu/qv/I2AZ+lylv0Ad9a+aevzulk4pd3YJdAPZMmaBonvu5pf1jp2gd1SF/HDY
 cZfcSBocBnLfxxIPBEQFY46m+CrwHL4dlW0qnrM/fZxvzeVkVM3iee2WDbWUoXiqcF9hEGXq
 3iA523kKhobKMae2XyO9XfEaurnxHunB9xKTeDhnhJsqFy9llIeCh8ubkqij9+irmu7HMBRN
 nVBr0LCqoB3riRHVOLVTxC+5XKJoBMYc95RCPEhrhGAzLLO5ASUDXRCSSROAPQGucksVHoV3
 1mGt9rzAHpkt7j9YXma87KJqzKuKG4QJGkLaiIeZRsI5cH5p4M+hQ6JScxseIa3hNDoHTD7w
 xiRsTMzwb4UiKYj1bi//F3BqyKhoN7OVAFdzh7MQmuv4wd9ZYikT4+l817W6bBHNonxZkaFl
 GgJnY6Z9u9mMH2WvCmEQeFIELT34f+AaWTYmQQ2QMJn8Cmx8Xm+e4wW+Ct5OEpiLscDf3nuf
 VPXvgRSopRUORNGcJNKXm54MOxypYCIKDgvfqu8ggZmCnSpSDK6wQ==
IronPort-HdrOrdr: A9a23:HQKxf6BlCsZT1MTlHelo55DYdb4zR+YMi2TDt3oddfU1SL38qy
 nKpp4mPHDP5wr5NEtPpTniAtjjfZq/z/5ICOAqVN/PYOCPggCVxepZnOjfKlPbehEX9oRmpN
 1dm6oVMqyMMbCt5/yKnDVRELwbsaa6GLjDv5a785/0JzsaE52J6W1Ce2GmO3wzfiZqL7wjGq
 GR48JWzgDQAkj+PqyAdx84t/GonayzqK7b
X-Talos-CUID: 9a23:RQdOwGHM26UkxSrQqmJu9m0RSp0KaUHF1UvvGGK4F3ZQa+aKHAo=
X-Talos-MUID: 9a23:hihEzQgBeDx3B+g8amgwWcMpNek43amsCng2gIwondKgECk3KTmAtWHi
X-IronPort-AV: E=Sophos;i="5.99,207,1677560400"; 
   d="scan'208";a="104760151"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aFq17xOadN19rDjjQbxlctf06EQ3MmRjzz6SAIhOV7dtaknppqHGB3uNr3BQDdMQVm7HBRPUJiZZyhYbUJY/Kyi2uB2ajUIdabNQVgXSbkaR88SLBxtwioH0KLcZ2xl2Qd2tV5edMZzDFg/xX4nJGtYpr5/TY82rBDsAq6acSj6akhTLxQ+1fzY+raz1nlzEJu895oGcvpBpM3nmmpLZ3v1i4OgooIBwyShC4nKcALLfFJsgMp2ASrivyE4oUeAzt0mLAVF2t6ZJeAB9XFgPSKJPkxQXh8Xsx/aJ57E+V4+/s1kpFi4DoFEL20uU72fO8VoYDaLIIuTodMRcHfUW3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=j+YHflBB5WhGYW9CALxtTnbjAYuGnnaW8Gd6ckZPrug=;
 b=POdoVM0fLNuGsuImy/DQndp2khF+PYe0dBUcEeVl2WyYwByRmeQ+f3MgSOKl0kRKG08b1VIyZ7N3Xz8HBxvzFZm4h88nBFvllyoP6bUAcLCYMKatZqmyBlUiBP1oZjx/ZF7V+HMH9wDJumTw4Y1Zn+OSRdfoId+pCyADb0g6mvrjU5INo6SagPUe+DtAQVdiSI/v2mf3jvxQAx75DtXLntzOEgo5/S6AO1mIbG2L2xJ+zIOjV1ydEncXTOVE/HBSY6BkkCacGHm21LFzxnaWQVjYzybgGOu5mQs1JIzuQlJSHwVK4ycVe6uTDfH5/0CfwKwy79GgkpJpaf61CTlrow==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=j+YHflBB5WhGYW9CALxtTnbjAYuGnnaW8Gd6ckZPrug=;
 b=ZpDV0apxQuzGzMG7LD1qm66ghm4nlOd/j23sXMnAONCOxYvDBsjh19xV7hcqTViOnADgqhMwhbCDmpxiHuzSen1srmN3rNCdML80ezEmm9h3AXpsx+zMFfTQL+cN/Pj8Fr/5hrLaBnYSqQmpoFJq38wD0h2C9pg880zig/pR9XA=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 18 Apr 2023 18:02:30 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Message-ID: <ZD6/Fk6S6D421AgE@Air-de-Roger>
References: <20230418154223.20181-1-roger.pau@citrix.com>
 <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
X-ClientProxiedBy: LO2P265CA0122.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::14) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|MN2PR03MB4927:EE_
X-MS-Office365-Filtering-Correlation-Id: 5df2920b-22c6-4a49-4c7c-08db40265374
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Hg58Ffi2RWeRP6VTvTFzmzzlTtx+DbunAPjYKQAHKSRpi96VQ2zjnbnXhVhrByrrHLe53ebv4NLaKWBo8lzZKh8mQ6+TcYHcO9aQrMGfnZ8mjRBNELIl1Tb+txZn4DtTxwyFoHCywhoS+7oyRtrSbajA7IOyCQwUF9g1JQaKljMPmGfjWv47EBj0h/rDYkdzM/zGuvVBCRwTlDH7PPzIU1V+zbTBQ0VQHOPuTwCzoSHoWk/u7qDQ/vYf4Rp8AmwaRXM/zsPCCGkk7G00PclmIfOhItqk9UGNZFXlKWpIxx5GTMLEN6av/LEo+TfNTselm49Z/AnDkZd7uvF4tzCsnj3wHc++SX81lolnL2Bcek7X8nrLAglzDhJks3z1NNT2T53rf1+5948EQgim+oGcLi8qGpOGLeMV1r+YMffnMlN+znx85wr+PtGIPN3wM1hfBIa9+mkrGRHr3wRl/dBjO17LO0trQPdx5bYnd6qSJNkT7lCjKAo1ENesdRZzb36PQWB64GCF81lD0eHfgr8IfdxMK4dgfNnuohik9VKepK+l80NCXn89AORucZbFkiKJ
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(7916004)(136003)(376002)(346002)(39860400002)(396003)(366004)(451199021)(82960400001)(6666004)(6486002)(6636002)(54906003)(2906002)(38100700002)(478600001)(83380400001)(33716001)(186003)(6506007)(6512007)(26005)(9686003)(53546011)(4326008)(5660300002)(316002)(66476007)(66946007)(66556008)(41300700001)(85182001)(86362001)(6862004)(8936002)(8676002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?YVlaazBnSkJoT2ExSlNydXBZdUpDTFlNbWpJcWlFZksyZ1FqWDdPbXJCUUxn?=
 =?utf-8?B?UmxIcDErWEpBOS91SjQreGpHYUZXcXQrdjkrQ2UrQmhVQlhYV2JCZTFzeTRm?=
 =?utf-8?B?VUp6ZGcxR1kzNnFmdWo3U1ZYeGdCbWFaZitRUmRsZzkvajgvQ3BuMW0zYlB2?=
 =?utf-8?B?OStlSkUzdmVodElaWlpDbmRwTUxPNUVYOUxSSlE2VWpPN21tTWNidkYrN2ZN?=
 =?utf-8?B?c2c4YVZFN2hTd3FKak5aNFA5RkV6TlZDcUJhWWNSc0Q5akp2RHBXdUVmMDg0?=
 =?utf-8?B?MUNwZUNjT3NTaDVmaFl6QlZJZ0hEa1NMM0hzUzlRbitHOUNXVkpNRGsvc05U?=
 =?utf-8?B?dkF3empQcFhrMVJRcW0zYVN6c2dZZWE4dVhyRkVZNnNuNmwvcGRrMjl5UnB2?=
 =?utf-8?B?L21mRGs1UWFDUVQ1OStPUWc4WTB5ZWdZazRaSDgxbTJBWlgzekZkZGhTVTNy?=
 =?utf-8?B?MzI3T2tqYUZ5TjZmMk40REs0bWxseHR6R2pHWWVHeWluc1JxYUk1cEFKbXVO?=
 =?utf-8?B?OWJxOW1kMjREYUhCSXNGdTZsWVlBWDJ1NnpkbU1IUGFUcXRNQWVna0pjakhs?=
 =?utf-8?B?bVdZRU1Ndm5DN1FGWFowK1FrdDVncDcyU0M5OUtaZUlGRWtyZVMzZ0x3U3lY?=
 =?utf-8?B?TnI1c2tDVTFESGhIazgwTzFDSCtycVFaS0xBMFRvMHNMM2VtRWdDTGgydmxU?=
 =?utf-8?B?dkk4VmNocEcrV1ZLeTQydkFFQkhlT25LUHBDK1RERzg5eFJiaXQ2ZnN2UFlD?=
 =?utf-8?B?WVlqeTJMOFBxQmdNNlNGUTZGTmhLQzlGS3lIVDd3WXYvUnBkallIbG5tamtC?=
 =?utf-8?B?VDR4TS92NFUwcDg2bENZZXhDWWlQK3dqdG1hQjhtdXdoWXFnVkJIVjhYdjJM?=
 =?utf-8?B?QTdZZW9RTHM4RUhQeG9VVUFqNzNtZXI1MGc4dGZrRUU3Z3NhRWg4aXJRWnh3?=
 =?utf-8?B?SFBLMzJhSldPYUZmVUdHQ2VicFRPS2grWVIyQUR1NjVLbmkvQjYxRUtrcWlR?=
 =?utf-8?B?SFBXdCsrSE4wQ2JYS0t3aHRJcUl5bG4yUjIxSXE4ZlNib0NBRkYrS0JMWEFI?=
 =?utf-8?B?aW9VQWNZVHFQc1kvR3dPNDdTMmw4aUZ0dytRTXliVXRWamF5eERvWXEyNkgv?=
 =?utf-8?B?dFVxelNMcm83blpIOEsrZnQxbjF4M0w1UHl4UVVlMVdDSUtsRXpsZHc4QTFV?=
 =?utf-8?B?VjIwS3RhM3hrL09XZ1RFR0NpM2lnK0VleE0zaGpveHd6UThDbUJBa2g4Zm9J?=
 =?utf-8?B?QjAwTWhXcWZPRExjYUsvZ01idmJRKytmdGU3bXZnVERIeW04RHA4QWdUU0R0?=
 =?utf-8?B?VkQ4eDhVUVBvQk9NK0cydzNRd3JwblpYTG5XYU9ncy9uSjdJMGtmYmlIaXdk?=
 =?utf-8?B?dmtqVng0MUM2YWhmbTNHVDF6eUFvSGRjL1M5SVVrL3AvaXc1NnJGYzQ1N3o0?=
 =?utf-8?B?ZkJUZzJEaE9lQXd5TFBIQUx4QXYvcjYwejVuaEtnRStpQ2Z2amNWeTNVTXdK?=
 =?utf-8?B?a3JXckFuTE5aQ0Nxd053alJMZGN0NHUxYmNWZmNIbGFnbHQxVDdMK3BwTnZo?=
 =?utf-8?B?Y2c0T0pUWVpVSXJrd3cvRGhXNitJY21KKzhOZjl3MjA0dHBWYmJBaGp0dUpW?=
 =?utf-8?B?eXBlejZocWx5RHViY1dlWUFIS2J5RDBCa0hhT3had3pBR2wrTk1OWmVIQUhw?=
 =?utf-8?B?dC9NRzk4V3hsMzlTR2tyNUxFeVdrYktSWTFYbVNEa0ZoOGFXdnBNdmdYMWND?=
 =?utf-8?B?REo5bTY4RDRnMm8yRWZmZ1dLbGoxdDhISlF1TXJlNzNsY3Y4S2ZFaUIzUFBn?=
 =?utf-8?B?ZVJzQWRQMlhTdGpKTWZ4SGNtNk9OVEx2b2I2WlJ0L1U3RFVFYkM4OVp2Zkhp?=
 =?utf-8?B?OVhINERrWHl4WXEzSnhBWndhZHlQTUhkclZqZEJNZnU1K1RWeC9vbk9oeXR1?=
 =?utf-8?B?UU9sZWlYN0VUNzhSUVVRMUY2OXlWTG1vTFdybTkrVWZ0ZTlGY3M2alBJSHBv?=
 =?utf-8?B?R05yMzNocWxrUnlkeDh2SDdXZUtWTDhmQ0h2M1hhSzJVVEJCNEpKU0dtckhJ?=
 =?utf-8?B?RU5FeTN6OUsremhWbGxQNVFUcHA2SUY0cXEwUUNqN0xDMDBZd0ZwSkF1RDhN?=
 =?utf-8?B?dXZTamVnN21YMlpHUW5oakR2dmNDamtRZG14OGhESnBkVlpnZjFyV2RhOUxY?=
 =?utf-8?B?N0E9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	cn1/mFb2oJ1bXQ0lqleNVmHMs+69yO7NMnqjRPASoflD4tL2srzYiqb96Y+mZ6omRVcBcE8L8q61iIVQD9EmnevIjIFwXFYHbghUnMnqluy1mpireGhlDNioJDyJcjEODfdGXvQJJO64JbZudnuBsvXg6i8xFlq7qD3nZ1NZK0VWPzs5ujppW3zWhrLIGxhlz9kQGhkVMrTbCdgpbghRGY6rWlFS/9e1k/x/B3f+SVybPmF9sYFsHn6Mvp6zmO/CHPtXDDHK6GFlsNR6cuYSJsIJt14he0Gej5Xd7yfAkzTM7Bz/5MfJAeXfBmkMEw6yisAl3AHrvbMT+nKnfVGKQFeBKpTA2U3lfeYOoqUoH2QQiNBQMz2cjpm3c485bEbvB+gA8bjBWgGWhzpE8vLmltPjYtUVrWusykZAwod6BMND0RM0yYQqy/XlmtDss3x7U0VVIAwl6Cl4dtPg4pNoJq/l46uTjgQ5u4/l6cHlaQAAyltc3DNOAzcBhkcIvKH1vXVwr9BFVy+9BLiJxIYd9oc9yYEOtSPkLo7w3Q5sUBq9uUfqVp1eD3gRDoOnr/JABpKBOBKJLHJ6kdn2bCmAEgI6n0w93LN69vwa9sHeTAmf9SLlBhvDXFyeBMw/U6KE7A+1hg5BYTZZsb/TRj53dnOagSAeEhSymS3f3jIGEVv7k3H4iDAohoZnfleGdv1pgeeO9k28RRJ/7z4iF0wdhOhfPeHXAarsZ6B+YDst48YzW0U+JiNlaw7zN/biiApLnmjgJR/bmd+lQvwlWf++N5XR4ViahkFUQqw3mawpFKSygWwff2RrflGt/+1D7Ja1kgQW6G9kl+c1aZtLdYHPWO1a9FtF8GewiPuQ2np3W/h6KIhch+eMMcIJR8GY3+mG8UMqjR4PkzgCuf+MvA2rsElQXcVumk87L61VXdZ4xEU=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5df2920b-22c6-4a49-4c7c-08db40265374
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 16:02:35.9020
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3RwDx6E0NyAgihiA9fqlhP16yJsYykIAjlshBu7mfNfWyre1nzzqioIqmZQ4KgDZD+bRr9Gs38sBagnIfOehjg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB4927

On Tue, Apr 18, 2023 at 04:54:49PM +0100, Andrew Cooper wrote:
> On 18/04/2023 4:42 pm, Roger Pau Monne wrote:
> > The addition of the flags field in the vcpu_set_singleshot_timer in
> > 505ef3ea8687 is an ABI breakage, as the size of the structure is
> > increased.
> >
> > Remove such field addition and drop the implementation of the
> > VCPU_SSHOTTMR_future flag.  If a timer provides an expired timeout
> > value just inject the timer interrupt.
> >
> > Bump the Xen interface version, and keep the flags field and
> > VCPU_SSHOTTMR_future available for guests using the old interface.
> >
> > Note the removal of the field from the vcpu_set_singleshot_timer
> > struct allows removing the compat translation of the struct.
> >
> > Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
> > Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> While everything said is true, this isn't the reason to get rid of
> VCPU_SSHOTTMR_future.
> 
> While 505ef3ea8687 does appear to have been an ABI break, that's
> incidental.  It dates from 2007 so whatever we have now is the de-facto
> ABI, whether we like it or not.
> 
> The reason to delete this is that it is a monumentally and entirely
> stupid idea which should have been rejected outright at the time, and
> the only guest we can find which uses it also BUG_ON()'s in response to
> -ETIME.

I agree, but didn't think it was necessary to get into debating
whether it's useful or not, since its introduction was bogus anyway.

> It can literally only be used to shoot yourself in the foot with, and
> more recent Linuxes have dropped their use of it.
> 
> The structure needs to stay its current shape, and while it's fine to
> hide the VCPU_SSHOTTMR_future behind an interface version macro, we do
> need to say that it is explicitly ignored.

Oh, I think I've dropped the comment I had added next to
VCPU_SSHOTTMR_future that contained /* Ignored. */ (just like for the whole
flags field).

I can elaborate a bit on why VCPU_SSHOTTMR_future is not useful in the
commit log, and add that Ignored comment to the flag.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 16:12:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 16:12:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.522999.812713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ponwL-0004Xi-La; Tue, 18 Apr 2023 16:12:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 522999.812713; Tue, 18 Apr 2023 16:12:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ponwL-0004Xb-Hh; Tue, 18 Apr 2023 16:12:21 +0000
Received: by outflank-mailman (input) for mailman id 522999;
 Tue, 18 Apr 2023 16:12:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TTWx=AJ=citrix.com=prvs=4659928b3=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ponwK-0004XV-4T
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 16:12:20 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c91e2f5b-de03-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 18:12:17 +0200 (CEST)
Received: from mail-co1nam11lp2174.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 12:12:14 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5920.namprd03.prod.outlook.com (2603:10b6:a03:2d6::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 16:12:12 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 16:12:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c91e2f5b-de03-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681834337;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=3iF0pFZWflMoXkz3LeLPPyJL2uX5n7WWH48UEJX9Xg0=;
  b=DwN932RNHF45uAuHK9DeBVHp9Ehw075JGDDUaSUhDoa8zsmqWuti7Rht
   Wz3XEpt/gZXfCxKZWm1bBz+X4bK/Y1sf6Kt0BkiMq094E+sPbSVQtN1y6
   ZCAlAE2gtCveicVNVx3dC9XbCsq22+ddV34nH6TF+vhqLatU0KFGxgal7
   8=;
X-IronPort-RemoteIP: 104.47.56.174
X-IronPort-MID: 105327146
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:1VT7Wa2mZpquVKlqQPbD5VBwkn2cJEfYwER7XKvMYLTBsI5bpzEDn
 2UYCmiCM//cY2KjLd0ib97lo0gFvZWGz9FgTANrpC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS+XuDgNyo4GlD5gBnNagR1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfKF9rr
 +0JCy00LQ2bqsOJxZG4ZPRtr5F2RCXrFNt3VnBI6xj8VK9jareaBqLA6JlfwSs6gd1IEbDGf
 c0FZDFzbRPGJRpSJlMQD5F4l+Ct7pX9W2QA9BTJ+uxqvS6Kk1AZPLvFabI5fvSjQ8lPk1nej
 WXB52njWTkRNcCFyCrD+XWp7gPKtXqjBdlIS+TkqJaGhnWrmH0PS14nTWK/oNehhESQYe9VG
 1c9r39GQa8asRbDosPGdw21pjuIswARX/JUEvYm80edx6zM+QGbC2MYCDlbZ7QOlMIwXy1s6
 VaPkPvgHzkpu7qQIVqW8bKRsDWzJTlTKGYEbCAJVyMV7t/7uoYxgxnTCNF5H8adjNf4BDXxy
 DCitzUlivMYistj/6em+VHKhRq8q56PSRQ6ji3MRX6s5A59YI+jZqSr5ELd4PIGK5yWJnGeu
 FAUls7Y6/oBZaxhjwSISeQJWbquvvCMNWSFhUY1RsZ9sTOw53SkYIZcpilkI1tkOdoFfjmvZ
 1LPvQRW59lYO37CgbJLXr9dwv8ClcDIfekJnNiPBjaSSvCdrDO6wRw=
IronPort-HdrOrdr: A9a23:dbL9MqMJF6LsncBcTuqjsMiBIKoaSvp037BL7TEVdfUxSKb0qy
 nAppgmPHPP5wr5IUtQ4OxoW5PwI080l6QU3WB5B97LYOCBggWVxepZnOjfKlPbehEWwdQtsZ
 uII5IUNDQpNykAsS8h2njfLz/8+qjhzEl1v5an856yd3ARV51d
X-Talos-CUID: 9a23:a+M/em7Jt22wqZxUndss3XAxQNE0UWDkj2rIDm6HDEVqZqCUYArF
X-Talos-MUID: 9a23:/WhC6AkJ6WiYEeR75UUodnpaatZ0oOefEHkJkLsetcOqDxF6Cyy02WE=
X-IronPort-AV: E=Sophos;i="5.99,207,1677560400"; 
   d="scan'208";a="105327146"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z7CzyLhJ5Tdg9na78zKhXBu2r2LnAhwCa2XK7GkmZbe1QbZmRaRZrUP7fLHdUybRr4xqoHp7jAJQG25MDSW+VXRhF+4+smSf3JCNS3K+DdUiSm+HvlpNTe3w3WRT0ncGyCw5pH7wth5MwbJc3VxaatAw639GDhSrKHkf0bltNUzVBQqmILY5YEoJ+FP5xfKjTV7p8ih6jBXIaUzagEjWJOmnh3BfyU77t46PnLWod8qe7T2bv5ZsxNJkP2L58LKw8cw9QJMFm2/3g7ZHalIzgE/7kY/w3qYKRTIsyfdlTM61PFjGoXKJE/squ1e+KVz0RU4nkC2ldmI+6xrqUg0Yog==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pR4juPX5MTHn0GLeJm0iu0Ao2luZDAcVEmc+G0mbQek=;
 b=k44TbrN3uywrm+gJNqKc6DvaKL6xRHKvHaohJzWF7M2rx3Vh2qOpkYYHcQ5HiKdllDjEjeDTPLENjRwJ4HnGWy6sfylCIROB45q91u4cuZyfezIRlsqp2nRNzs13gPYicS3ptH9cvXDVoYjbgvU76XGyr0eAI4wvmgdcmmJ8c+9wj+Vp38CzRIz/r90W22iCcO2JvDwUY1uITAwCfd2vyicZsJpXse2ZrV0wnn0Z5T2x4KSiNoOS4vHKCRS2yT+KZsolhWzi8eGKdbBMyXriHpzapr/AI/Sq2pj1KkmDrJRiwoyzdU57SyR+C1QPYzOH7SuLta6jC8gPsdX/gPvgzw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pR4juPX5MTHn0GLeJm0iu0Ao2luZDAcVEmc+G0mbQek=;
 b=IxaxzOsRjVwG6F9tipzWQkxs5dqAZWykhPYHG4UoiH4AG4JNuzmvX7n/kE/xzmJig/x4KHJKzWsbGMtDT8Kwh66ww2TqdnXoidHOQvNWguSGkGN/LEKT4vz3kPI6NPivHVykUAwwGFtfRhDCQw4slOWlMXcAGFhcbe966pTGYV8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <acea0109-967b-f3d3-2a60-b71e5a207ea6@citrix.com>
Date: Tue, 18 Apr 2023 17:12:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Content-Language: en-GB
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230418154223.20181-1-roger.pau@citrix.com>
 <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
 <ZD6/Fk6S6D421AgE@Air-de-Roger>
In-Reply-To: <ZD6/Fk6S6D421AgE@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0015.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB5920:EE_
X-MS-Office365-Filtering-Correlation-Id: ea8fa997-b373-46a3-0907-08db4027ab13
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	EB8wIlXVpD66ezRqJ9hs3Z3vVwFTx1HURXxy494neQ88oIIHIELfxphHgGBun3Chg7fNuANJPWUheWgRixWLqBrtEOXzvK4LYFtO3rzi2LYzTDcO1joUIpWtVR3XzzvbnO2K6wGsWDT0CBpNKqYIJr0CHcKFf4WRaaBVVM96+i6jgDSj+4b46Kyv/yVuol8ALbkS0pj0Z11qvm+rl98auByfmedbyWVSXzTD93+X7YN3RJklKfFN7AL3w4Ghg6IZM/bI03HAw+IZ0YaQJhRDkkVzlPvJZ9fmK+3ZNCNcBd+j57ILliqSi8jHaBoJ7KCJGN+CyNZhB9wiipUcsqPy9qkNyeQaJz8KWvbAeKu13VdDzoJsDjatQ11LH5jy5w1UPQgcCsH5k7vfkFAW5xAonXme5ex0Mn0xHMDLIqB/cT50YlB78GwIJxqilN/Tv+O8aQ1Gz6otxipZrlD0gSSLGqQPzVDi0P0KN9duR4eCETCzMSZSq6fj+eShK611wR3fQxgVbDE4VEmiRDznnbDLcfa11QutktJkDwk9SDDtZAfOHaV34Wkkb5c2bfvoRKlpu5X4OCkSUBqYcGOdb0spMcBHLjXknskLc+/rltJdxm4Gjdpggk6VZj9njWCklhXAXi/8+ecb79ygRWIZGOaP/A==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(39860400002)(376002)(396003)(366004)(451199021)(8936002)(36756003)(38100700002)(6862004)(8676002)(5660300002)(2906002)(86362001)(31696002)(478600001)(6486002)(6666004)(54906003)(6636002)(37006003)(31686004)(186003)(2616005)(6512007)(53546011)(66946007)(6506007)(66476007)(26005)(41300700001)(82960400001)(316002)(83380400001)(4326008)(66556008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bHpxOGs5VmpKaEhxVXVnUThndlpPUHpMOHF1cC8wTDJ5Tll6enNFaWhiQVJv?=
 =?utf-8?B?NjBhSE54U2hsb3M4bEs5TVo3cThIV3k3bjVIanZZRStDc2JqVUlxK3QzNFJV?=
 =?utf-8?B?alFRcXNJSldqTVVBYlZsbURPdG5uZFdXV0RQVXBSREhsMmVycEFNMGFqQUMy?=
 =?utf-8?B?S3F1aHQ4eXE4cVIwRDM3WWJSYk9tSFphL0k2OFc0ZWlmRC9ITjM5SXdCdy94?=
 =?utf-8?B?NlRibWdHUVMrakp5VnBHMGRtNU1HUjZqTER1RVV0dTh0UnVGRFZCdkk4WTlm?=
 =?utf-8?B?TVlNeVRLclZQQ3UzQlBwT0Y0aGVmV1VDZEZjZktPYnkxcnAvWEZnTEVOaVk5?=
 =?utf-8?B?Wk9YVmxCZ3FLVytPNTNYS3BHaTdRM3p5ZzcrL3BiMUoyeURaZ2orVm94NWNa?=
 =?utf-8?B?ZWJ1Tm9qOHpYTnFwcGY0T3dueGlGWUNXOXhGQW1EMXZWb2hzMHR1aitsZVgx?=
 =?utf-8?B?eFdqVDNXUUdCZFVUNFVhcGpQVlYzeHI0K1I3cEkyNnpDczA2L21kNmpDT1Yz?=
 =?utf-8?B?SnROb0N5ZDBhL0NsODBFQ2dQbWs4eW9PNjlVUnd0SXRHd2dHenBkZGhZeCty?=
 =?utf-8?B?aGVJR0JVQjlXSU00YWQ5VWt5NTFNTWlNS3l0cExvVGJkNE9vNytWV3hQMDU0?=
 =?utf-8?B?S3Uzb1hxY0hwRDZFNUpNbFpzMmRLQ2VMRXBnSXZaYWI0WXdqTm8rZEM1QWUv?=
 =?utf-8?B?L0RpNVZHR0phc25BMVZwbHJyV2NpUEdTaHhRRTd1aDhIT3N1TzBoTjcwcU9N?=
 =?utf-8?B?NlUrOHVhRGtnRjZLbjM3WGF1Y2tCdExNaU9KblVKWm42bVdaN000bVZ2MmI2?=
 =?utf-8?B?d3Zvc2JGY2RNUmFMakVtbk1vbk5UMzVjWnUvMU5WVDB4cVBRYUtFS2d1VlN0?=
 =?utf-8?B?a1RCdEMyREVEYnZQaVRSbWl6azhQQTJVMm4xOW03Z1VCVzdXM0xsaGdCUk0v?=
 =?utf-8?B?OENTc3pKYnRWTi92SzJVbWRtdkFtaTJNYmc5c0J5RkVKVlB0Qi9ucGdhSWpl?=
 =?utf-8?B?dkNnYnRQVCtMc2ZYcmhleXljeUw5a2pQUnNocTZuMEJNNjVVUWJXZHJrTW0x?=
 =?utf-8?B?dVptcVI0bkpFbTFtcC9PelpWcktMNXFCbTNxT3BuUkpEaTIvSm9JZXJ0a2Nu?=
 =?utf-8?B?M2QzSEZobzluVDU4TXd0N0l2Z3FGL2Y3SWhsQjhad0tCSUhUQXB4NnNQL1NL?=
 =?utf-8?B?Rkx2REhWaHlhelVmTnE0SDl5NDRSNFVhVzAvMjUvamY1SGk1aGtWZWZ5a1VS?=
 =?utf-8?B?dFA1Y3kvN1RNNWI5ZU82dWpxQjhqb295N2VEd3R3bGFtTlk4cWgrUEpEcXNM?=
 =?utf-8?B?MTJzRXk2VFRQQ08yU0JvR3ptQVhLWVFkZ1U5blhFbldoV21xWCtxVFZYQnJz?=
 =?utf-8?B?amZwcWpJODVvQlhzazJhcDFMbWN3ODkxdlFFdnBtbjh6L1p3V2xBU3F1RS9a?=
 =?utf-8?B?ZURQZ3N2QnIzS1ZSam5HQjdWeTUwM0tDWUhuK05tYWJ1R1F1RlFFN0xZVGZi?=
 =?utf-8?B?WVErSGI5anAvWitiOU53Z0dwZE5pdklFZ0hFTXI2bDB6b2pyYXZaYjN3Y296?=
 =?utf-8?B?SWNORDFEd1ZQcEhTZGlqWjZleStnZFhNd0FWQmtaaWNQRm5abmtBYVFBWnJN?=
 =?utf-8?B?Z3NFZEZGUC8wdUhEQVJGTk1EUjQycXVxTHZaM0ZZUjd0WnF4OVdicCtocFFD?=
 =?utf-8?B?VU8yYjJNMXN2OEVmY2U0L3VHUDczOC9lY293YTVJS2JIb0JjSDFELytMQVBW?=
 =?utf-8?B?MXpaUWU1RjZRL3R0Q2JiZGxKQjRWc0VoTDFacHVtY3VWdDNjQ00yZE9qZGNN?=
 =?utf-8?B?VEQzemFsbmt1Y1NLd2Z6ZUhyVzhHRFlrNy9oZ0lMY3kweTV3ZjFjMFRkL0hq?=
 =?utf-8?B?c3lwcnRocFJmNFVxOGFHQzRUTFN5cUZrZWwvc3doa2dOd2tna0hFSEpZN2ls?=
 =?utf-8?B?enpsaEZxSEt0TjNSdndkajFoQVR3REJDZjl0bXBFQnhqNmFhNU1Jd2NRNnU0?=
 =?utf-8?B?R2lJWnJIQzhCb2lnamU3ZHBoRGV3cERudU16M1ZIZTdvZGxJNHJLbUZOWmhX?=
 =?utf-8?B?enVDN2o3TVVTQk9aZGJsZHZNZGV5bS9qamVPQXdXeENxOGxHYjFQUEd0UXVX?=
 =?utf-8?B?ejVBbEZoNUVwaTVWdHZSUXVTaVNnR1Z3VjAwc01sTXVpY005UTNXWmV4ZUtC?=
 =?utf-8?B?RWc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	Xfmy/1pVOy+tNC1W8zTEgm7O7MXXUBdbDWvS+hC+N3qq0JBqick2CjFOauwdVbeRUpyqUPrBZxiotvDQnxOLS4/GDr6AmwjZL2MdqvSR8+XTtERiMRN0Q+SMRqbpsmMkBzksk3tRWqmhwXt1nMKdzZKtN3lVRValc+9BZJR99Dg980tXXZHOaH5uniTzOPdBvfA2cshRYY4cFv2NCZ+k6bdILz8Lco+tyDJu0pyD+3jnP9FFbg5zYgjcLN3Qh+H5109qLCl2peKNj4RuyVacVnVuhDiX8Pcqhf+cDPqBIRoo60sfsjVKIVWTXL+91jZkfde4ES9+EVYGHC/9vVHQ0noynR3rQebo1vj4t6iWF5oJ6IhPj6XMhQO6tcI/3cgSTo+xqQ6B+TkE6xkKCBYajM0MpSDSGy1TKjRayJpDziExGZB7wyoX0u6dGSnB787jFkZd8nK8nQO+AIpSZxf5Q5nqCC474mg4q3u6eOBRNf8kh5Zfe6UsMXuiY9w8Nb8baIH63Pby/9sgvUFVsjnkEjZ8expthDgOvBFhrfKp/382o3WC9zMHUdyIKXw4WE/XS0QEa6SU8wjnqFLX3xZQP91m0FCWyAKuV1KGN9AILtArMUB/V/SKejuz1F27Orxp0FgOydOBGU6l5857hSqk0bLF0IrQRK4D2YagOhCFKhymsA5tGAdlHno9h7hOMVwrF9amiy9U262f6F38jMJUY6cOe9Ky8CEOhJJROBAn86v0co4GtPzd+LIF9Ro1hJI5L/IQMs/wO3yLdAmXlxa0RvITGqz8UXefaJpUTsjJHoPhlHjNCNj1PqJFcU64BJkxgs2dyLsPIr6JEbHoWFWh/FDz2VbVRfujQsLHHntVGff/dpCUSQV0GSGs2TlaC1pBeCStRW683QwiZuxwrbkS5BT7kcQMI6/ukE6Ml1WliUA=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ea8fa997-b373-46a3-0907-08db4027ab13
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 16:12:12.3963
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: P7nWCkjU3ImeGVozdOhmkGsGW2Z66Qjd7h/lG6uhYSIe2gacNbo8+04Jvi5wVnJ5kkPKEuFYyAUhLjVzJ9buOfdK1e/TF1djc5uiXH8AEaA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5920

On 18/04/2023 5:02 pm, Roger Pau Monné wrote:
> On Tue, Apr 18, 2023 at 04:54:49PM +0100, Andrew Cooper wrote:
>> On 18/04/2023 4:42 pm, Roger Pau Monne wrote:
>>> The addition of the flags field in the vcpu_set_singleshot_timer in
>>> 505ef3ea8687 is an ABI breakage, as the size of the structure is
>>> increased.
>>>
>>> Remove such field addition and drop the implementation of the
>>> VCPU_SSHOTTMR_future flag.  If a timer provides an expired timeout
>>> value just inject the timer interrupt.
>>>
>>> Bump the Xen interface version, and keep the flags field and
>>> VCPU_SSHOTTMR_future available for guests using the old interface.
>>>
>>> Note the removal of the field from the vcpu_set_singleshot_timer
>>> struct allows removing the compat translation of the struct.
>>>
>>> Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> While everything said is true, this isn't the reason to get rid of
>> VCPU_SSHOTTMR_future.
>>
>> While 505ef3ea8687 does appear to have been an ABI break, that's
>> incidental.  It dates from 2007 so whatever we have now is the de-facto
>> ABI, whether we like it or not.
>>
>> The reason to delete this is that it is a monumentally and entirely
>> stupid idea which should have been rejected outright at the time, and
>> the only guest we can find which uses it also BUG_ON()'s in response to
>> -ETIME.
> I agree, but didn't think it was necessary to get into debating
> whether it's useful or not, since its introduction was bogus anyway.

Well - the reason to actually make a change is that (older) guests are
really exploding on that BUG_ON() for reasons outside of their own control.

And the reason to fix it by ignoring VCPU_SSHOTTMR_future is because the
entire concept is broken and should never have existed.

The ABI argument just adds to why the patch ought to have been rejected
at the time.  But it was done, and the fact it has been like this for 16
years means that the ABI shouldn't change further, even if it was done in
error in the first place.

>
>> It can literally only be used to shoot yourself in the foot with, and
>> more recent Linuxes have dropped their use of it.
>>
>> The structure needs to stay its current shape, and while it's fine to
>> hide the VCPU_SSHOTTMR_future behind an interface version macro, we do
>> need to say that it is explicitly ignored.
> Oh, I think I've dropped the comment I had added next to
> VCPU_SSHOTTMR_future that contained /* Ignored. */ (just like for the whole
> flags field).
>
> I can elaborate a bit on why VCPU_SSHOTTMR_future is not useful in the
> commit log, and add that Ignored comment to the flag.

The important thing is to not actually change the size of the structure,
and to change the commit message to explain the real reason why we need
to make the change.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 16:15:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 16:15:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523004.812723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ponz3-00056Z-1q; Tue, 18 Apr 2023 16:15:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523004.812723; Tue, 18 Apr 2023 16:15:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ponz2-00056S-VL; Tue, 18 Apr 2023 16:15:08 +0000
Received: by outflank-mailman (input) for mailman id 523004;
 Tue, 18 Apr 2023 16:15:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bqrX=AJ=citrix.com=prvs=465e465d1=ross.lagerwall@srs-se1.protection.inumbo.net>)
 id 1ponz1-00056M-10
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 16:15:07 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2c5ba041-de04-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 18:15:04 +0200 (CEST)
Received: from mail-dm6nam12lp2174.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 12:15:00 -0400
Received: from DM6PR03MB5372.namprd03.prod.outlook.com (2603:10b6:5:24f::15)
 by CH0PR03MB6130.namprd03.prod.outlook.com (2603:10b6:610:b9::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 16:14:57 +0000
Received: from DM6PR03MB5372.namprd03.prod.outlook.com
 ([fe80::f4f8:2c53:17cd:f3cb]) by DM6PR03MB5372.namprd03.prod.outlook.com
 ([fe80::f4f8:2c53:17cd:f3cb%4]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 16:14:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c5ba041-de04-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681834504;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=OTnlHU1fFwShJQqPzdH7gtEcjZ3s249B0e2b45mwTNg=;
  b=S7wr1pis4WVnKbOgkbBryN4GTT6x6du1z4GVnsoUsji9l0nXmuDO7wLw
   coVT+y2PDzJbDEupTF+EUbqFRwg705QHr2Dar1WoPEfPZP6J8NOUP/f08
   reGzxQEYAje1Lv7tFTJvBL5ZwQe05N+hhocm+e8xJ3kW6cXzKHGLVXCC5
   Y=;
X-IronPort-RemoteIP: 104.47.59.174
X-IronPort-MID: 104761964
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:SarOP2yIVdxBVjP6Gck3BgUdC9EjSyPNnU75YE+3TjoqcqGJW3OfrfY=
X-Talos-MUID: 9a23:AUDEvgVk+RkdCNDq/GPKq29laJ1K2qeBBFAHsJcd4eiAdiMlbg==
X-IronPort-AV: E=Sophos;i="5.99,207,1677560400"; 
   d="scan'208";a="104761964"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YzoDgSrEJcutBWnAqA7g3qMpzD9m+hS00FrlEu7uffGcoQOM1faComVlO12gFz5CCfT60gIWhha79i3ywrWN/izy/EA6446otRAg5ppW3c2P1iAN1ZYVcXFN7V4oa5fqkYjWQN4KPs1a5cAHcriedPqmOpJm+SwvX46Nw+mA3U368nsbeFpwUgIzMAmBWvHjP6k/gz1Cl1L4665ILVJuR2bwQnxpIZsVqU4Jy3cQVvkNYP3DEF69d35MNyrKP001R9fmvZ1MPZCN6Jy+PX1j2fsn0wQvCCXnPLLr76m6M1AJBSk9r0D9ad5j1qb9emb39Z7jx9qmche9beVbPxUSrg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OTnlHU1fFwShJQqPzdH7gtEcjZ3s249B0e2b45mwTNg=;
 b=O5TMVSiEb9XhMg0Ar3W3mDl6hnCGnRvb+QMQtoChZzAsaiXpsZUA8xj3VMMXcf97DLGfy7mFj9kYk8BP8ckaJi8A3pBFOUDtKNZKLJdmVghid9uEdm+TUzJTMFY1rnqCZMad0h76VH3XWA7qdmj970F5UnS3PnoddEwg9Ohe307GLXXKXehclEMymxx/rvTr0DpjCxZ0hgaRz8WhkxdG7x0s7aUSGKUVxuMBE86oM14mCbn94juEL1sJiPIwwrud8X7WtpPw/kDQzqPgiml1b1CXmBYPCXRoj2uZ0H0Bno9mDJHtDczbQWXd2csP70hewEHbOewRUnvQ64NJuyR/5A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OTnlHU1fFwShJQqPzdH7gtEcjZ3s249B0e2b45mwTNg=;
 b=GehfqOngaC8rJNK0HqsL8J7m7XNv2zSv1fM8Xiu0/F2vXdVkIx9PxnUAoUmChincEZoSrK9BpumYlWTT/rAqhMHdZ06GIfXPbY/ZwFPVUY0zj8pDpIHqeschTRN4gTLgAlGSrYA1fanf2gbGiOe8RIsLd9Qw/2QZ3QPa3RAvA1I=
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 1/3] xen/ELF: Fix ELF32 PRI formatters
Thread-Topic: [PATCH v2 1/3] xen/ELF: Fix ELF32 PRI formatters
Thread-Index: AQHZcSYeWIT0W+bO3UKuBt801j9Sd68vbuAAgAAVJYCAAbqv2A==
Date: Tue, 18 Apr 2023 16:14:57 +0000
Message-ID:
 <DM6PR03MB5372A4026B618404FF65EC39F09D9@DM6PR03MB5372.namprd03.prod.outlook.com>
References: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
 <20230417121357.3738919-2-andrew.cooper3@citrix.com>
 <0a94cc73-f99b-a616-d342-8d84e8a274b4@suse.com>
 <639a5440-8408-d6c8-4d6f-68e5f7857d2c@citrix.com>
In-Reply-To: <639a5440-8408-d6c8-4d6f-68e5f7857d2c@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
msip_labels:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: DM6PR03MB5372:EE_|CH0PR03MB6130:EE_
x-ms-office365-filtering-correlation-id: 31f0845a-7515-456f-4399-08db40280de4
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DM6PR03MB5372.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 31f0845a-7515-456f-4399-08db40280de4
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Apr 2023 16:14:57.8868
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Tt7vxkR3EqVDtoR/btQwoiP/mlz313byrHkZF9jIwYzK+NsSKGBLmmGpOoL+qmcMdN2ABjhlGYWDBNsxPS2DiNNgK4ylzoZyWZDlqZi6ih4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB6130

> From: Andrew Cooper <Andrew.Cooper3@citrix.com>
> Sent: Monday, April 17, 2023 2:47 PM
> To: Jan Beulich <jbeulich@suse.com>; Xen-devel <xen-devel@lists.xenproject.org>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Ross Lagerwall <ross.lagerwall@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Bertrand Marquis <bertrand.marquis@arm.com>; Roger Pau Monne <roger.pau@citrix.com>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v2 1/3] xen/ELF: Fix ELF32 PRI formatters
>
> On 17/04/2023 1:31 pm, Jan Beulich wrote:
> > On 17.04.2023 14:13, Andrew Cooper wrote:
> >> --- a/xen/common/livepatch_elf.c
> >> +++ b/xen/common/livepatch_elf.c
> >> @@ -310,12 +310,12 @@ int livepatch_elf_resolve_symbols(struct livepatch_elf *elf)
> >>                       break;
> >>                   }
> >>               }
> >> -            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Undefined symbol resolved: %s => %#"PRIxElfAddr"\n",
> >> +            dprintk(XENLOG_DEBUG, LIVEPATCH "%s: Undefined symbol resolved: %s => 0x%08"PRIxElfAddr"\n",
> > I don't see what's wrong with using %# here (and below); I also don't see
> > what value it has to zero-pad to 8 digits when the printed value either
> > is far below 4G (when representing just a section offset) or likely far
> > above (when representing a real address on 64-bit). But once again I'll
> > leave judging to the maintainers.
>
> Hmm - I could be persuaded to drop everything in livepatch_elf.c.  I
> guess that makes it more consistent with the 64bit side too.

Indeed, I would prefer without the changes in xen/common/livepatch_elf.c

With those dropped,

Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 16:26:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 16:26:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523011.812733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poo9X-0006hm-7M; Tue, 18 Apr 2023 16:25:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523011.812733; Tue, 18 Apr 2023 16:25:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poo9X-0006hf-3U; Tue, 18 Apr 2023 16:25:59 +0000
Received: by outflank-mailman (input) for mailman id 523011;
 Tue, 18 Apr 2023 16:25:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bqrX=AJ=citrix.com=prvs=465e465d1=ross.lagerwall@srs-se1.protection.inumbo.net>)
 id 1poo9W-0006hZ-08
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 16:25:58 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b0d26f3c-de05-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 18:25:55 +0200 (CEST)
Received: from mail-bn7nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.102])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 12:25:53 -0400
Received: from DM6PR03MB5372.namprd03.prod.outlook.com (2603:10b6:5:24f::15)
 by DM6PR03MB4985.namprd03.prod.outlook.com (2603:10b6:5:1f0::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 16:25:51 +0000
Received: from DM6PR03MB5372.namprd03.prod.outlook.com
 ([fe80::f4f8:2c53:17cd:f3cb]) by DM6PR03MB5372.namprd03.prod.outlook.com
 ([fe80::f4f8:2c53:17cd:f3cb%4]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 16:25:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0d26f3c-de05-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681835155;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=GheJe5J+AZWn6GMpDCG8af9lZgEy0F/0Z2KZ8WdeIhY=;
  b=XXWXwQ0elcb2xhu5cEhcXb541FzMsmx5HVqRwDRRsat2VVGZrDNklRyi
   awlLva/XxLk+WCf4B5/+y9DmIMDQpQMKu2eMrTAQJz1QyJvKK+BgDJ2lP
   MGskCxoMymCiWzkYkOA1K1gQugT6HgmGQNk8f9UFDrOoBY11lFLhrTryA
   M=;
X-IronPort-RemoteIP: 104.47.70.102
X-IronPort-MID: 105328711
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:LyT3HGG/zLU+1cK+qmJc7hEIIP0rTEaNyWrrM3CgDEJjZpqsHAo=
X-Talos-MUID: 9a23:zgPd1QtcG4WKvMziSc2nhRNtGel3/JiXM2su1owNp+24FXdeEmLI
X-IronPort-AV: E=Sophos;i="5.99,207,1677560400"; 
   d="scan'208";a="105328711"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ez119Df///LlqbMLVqOGhZqlHba5t3MSeWqIjhA/3JDpVqlJTRHHkNd+ghPQjHoBE5MDbfsa9wZPLqEUf6iEQp4BPIFXu0nFcZCKPptbsWdIrKxf8gZoVGJMosmt4wdbMA5I9nCRIrMvUTfdwwBDM/nB+TuY25+ajhgqm3AtakGWKZbA77ZFAbDoeF9XfKANMS7ZDLJfWt65k4I2tjcwtb3YQ8H85/eV2tpWSU5W4E9ydb98guuZLiGJapSiRo+YuhyrPQgCuU+/IE9lXszDHW8A3k2hNjaDI1iOTZh/M+zPHjGkk7g1NZu1IACcI8q5GAO643C2FYG5d3m3eVT49g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GheJe5J+AZWn6GMpDCG8af9lZgEy0F/0Z2KZ8WdeIhY=;
 b=E/12EJRBpgao1wGsrmdh7+SoEk2eCxHtmCENvD+kIn4yfdnqNbxFAR1dY4PT8buqmgPV0mRZDdHUjAWy+aLLEs73+6z89VwBjTKKWiVRZQJvTn5eCz1fT9qBLkipwGFvQOQ1dY4boCOjUqE+7ZwRNoyqx6hO4cl/34DXds7ho/CzDKs/pB+bUodVoETpcGi+1vLW2IuE95u3aj1C8YBSxpB224B0EUnQLPDj1TVKmjh5K7C8LuUW8AynAdwpPRuHjOI12cPOllPVgbIChYaHJ7m11EHbrXVY+MSi9VDIOyUcXnzAtb0mKBm+tuZpN0WJBe7Kx6czX9/QsGRVJlJ+Zg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GheJe5J+AZWn6GMpDCG8af9lZgEy0F/0Z2KZ8WdeIhY=;
 b=W0sLBI7So3Ype3llSfMJZiITS/sSjd/Vcw1ovoptoJ8zjBwidQg7L/oFR/BIoixK1iM/vNcnzEs5Ai/NMWzqWqh/T+VxeE93xPvrvrPeGmp3qLq2qGWADBcKdOUk26jCH2JhRriq36N0ZwEAskqYuw/dEZzjhtuWJv9trU5rAZE=
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [PATCH v2 3/3] xen/livepatch: Fix .altinstructions safety checks
Thread-Topic: [PATCH v2 3/3] xen/livepatch: Fix .altinstructions safety checks
Thread-Index: AQHZcSYgbPbLeOW+Q0WPDx+blTOlEq8xQGxn
Date: Tue, 18 Apr 2023 16:25:51 +0000
Message-ID:
 <DM6PR03MB53728A41166A01EAF9A86482F09D9@DM6PR03MB5372.namprd03.prod.outlook.com>
References: <20230417121357.3738919-1-andrew.cooper3@citrix.com>
 <20230417121357.3738919-4-andrew.cooper3@citrix.com>
In-Reply-To: <20230417121357.3738919-4-andrew.cooper3@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
msip_labels:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: DM6PR03MB5372:EE_|DM6PR03MB4985:EE_
x-ms-office365-filtering-correlation-id: a11249d0-44a5-4d99-a4b8-08db40299345
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DM6PR03MB5372.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a11249d0-44a5-4d99-a4b8-08db40299345
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Apr 2023 16:25:51.1621
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: KX70nIRh943IXSWq2wwgc5R1xLSfKdLdK/87v7C/ToJ7mNZv5C5+LzTBwK17VxXl+GkBvVzopA+Z2ZlnBTV+tDiTsy7/Ym7Q0jYOTfB/yPg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4985

> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: Monday, April 17, 2023 1:13 PM
> To: Xen-devel <xen-devel@lists.xenproject.org>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Ross Lagerwall <ross.lagerwall@citrix.com>
> Subject: [PATCH v2 3/3] xen/livepatch: Fix .altinstructions safety checks
>
> The prior check has && vs || mixups, making it tautologically false and thus
> providing no safety at all.  There are boundary errors too.
>
> First start with a comment describing how the .altinstructions and
> .altinstr_replacement sections interact, and perform suitable cross-checking.
>
> Second, rewrite the alt_instr loop entirely from scratch.  Origin sites have
> non-zero size, and must be fully contained within the livepatches .text
> section(s).  Any non-zero sized replacements must be fully contained within
> the .altinstr_replacement section.
>
> Fixes: f8a10174e8b1 ("xsplice: Add support for alternatives")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---

Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>
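[Editor's note: a rough standalone illustration of the containment rule the commit message describes. The struct and helper names here are hypothetical -- the real patch operates on Xen's struct alt_instr and the livepatch section descriptors -- but it shows an overflow-safe "fully contained within" check using && throughout, so that every sub-condition must hold; mixing in || is the class of bug being fixed.]

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical region descriptor standing in for a livepatch section. */
struct region {
    uintptr_t start;
    size_t    size;
};

/* True iff the non-empty range [addr, addr + len) lies entirely inside r.
 * len <= r->size is checked first so that r->size - len cannot underflow,
 * and the final comparison is written as a subtraction on the known-larger
 * side so addr + len cannot overflow either. */
static bool contained(const struct region *r, uintptr_t addr, size_t len)
{
    return len > 0 &&
           addr >= r->start &&
           len <= r->size &&
           addr - r->start <= r->size - len;
}
```

A zero-length origin site is rejected outright, matching the "non-zero size" requirement quoted above.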


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 17:31:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 17:31:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523017.812743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1popAn-0005PH-2G; Tue, 18 Apr 2023 17:31:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523017.812743; Tue, 18 Apr 2023 17:31:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1popAm-0005PA-UH; Tue, 18 Apr 2023 17:31:20 +0000
Received: by outflank-mailman (input) for mailman id 523017;
 Tue, 18 Apr 2023 17:31:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TTWx=AJ=citrix.com=prvs=4659928b3=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1popAl-0005P4-9F
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 17:31:19 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d1b0e980-de0e-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 19:31:16 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 13:31:07 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5902.namprd03.prod.outlook.com (2603:10b6:a03:2da::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 17:31:05 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 17:31:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1b0e980-de0e-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681839076;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=fqTLQ8QMx/L0Df/mejP/A+c7F0UDtXJOlbscrTKeTKs=;
  b=JCCJbP4wMF1eAflvwSdGlvw0WeTtVLYyrTAN+bIH1AL+icP4Uaas1Du3
   nqu+vn2xBaLcFaeT5uSUftrHP9mxQq+QtauzSp2U4OIJ+rhlbD4HjSVNX
   Uf0qfnSlWi+ShRkuIEzJ925P6bC23w37lW9juqvNTw5t9x1UtqebcF+Lz
   Y=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 105336779
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: =?us-ascii?q?9a23=3A6yUu0Gke73mUM90WLsx1IhTH4svXOWL5nHP6fUK?=
 =?us-ascii?q?DM283Gbu+U2W33ft4j9U7zg=3D=3D?=
X-Talos-MUID: 9a23:3FWnFQqx/Cz9ykKBAW8ezyxNb9lz442IMxEIlqs7tMScB3FdEDjI2Q==
X-IronPort-AV: E=Sophos;i="5.99,207,1677560400"; 
   d="scan'208";a="105336779"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WpzWBp2UoOVlVnUvaX6/QXVy1rLJjOjXMBgKF1ol1mjTl452hUcM9VOvGHYNEt8HafAJNs3Pj6QLSpoZYGetz/D633zWOds3ePo0VMVatjE9ALN4JyOQ0jk3jJO374ZaaHybghVGXyBA6aZz1Uu6JxC2Dx3CGYHxZVFRYBSDekIj84fCtSLEZb11Wq5Ht0i3GrgO6/e+nPbBwNK62HB4prN5Lg3uLZs1vf1Shd/ReS6PnBZxSD1er+xAWdkywzUWegqNss05cQJe/vUoE87/A8GdTINRpN2oAVyyg2wpMkbAqjzqCIdFHKxmuhlF7vSqVXdOf2tfUUOGE5Iupyo41A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vbRp9Mx95LV5ej9oSFLZYhqDm13n7JL2CcAFETtf5rE=;
 b=izlvzej05pfxzfrtQobTTmzLM4lkYAG42cQsVwAhLOV+7eygQRcYNdta1z/rwfIlq5pR7q8OeJs/E7CsCyHEG4IsIHgb0Bm+GNO7KEohyDmH5IquDAP2hwHjJXliWm00Jj+BBUOekng/RfTS/QBDCFwrzTITjVyHCkTkfXeKlSPFTItzhYzO+Nk4uq1ziOm40LW33QFgQSJxbjEG4JeANKHIK0H8S3nwFEwwa5RodxRyJmcZcJ5Vtdwv2aazXE2CVjPSt7LTJ8kyLx7RcYBmW85rSNpX1fm4xiNqd+QBKWwbs0a2rTTc1M5btcI/QIC8QePKPFRnCjikUQ9/BMCjOw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vbRp9Mx95LV5ej9oSFLZYhqDm13n7JL2CcAFETtf5rE=;
 b=xU7q6IpV1HPD2OOXUFP/whAs7oKImh9bYZmtLOSu5XQNFqY/viKZlMCLWJpcc52dmn4j+2R2HKkvjBp8e9ywrh5qsj1AUtN6oopkRpLuN14fuv1RSQAMwxjSPZiD8BdeEwvz5OJtMkXdFK1Zkz3WN7Fu9O2GPlrwh929A2+p6rE=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <c2693ac0-4f6a-83ae-c477-75b3f05b938a@citrix.com>
Date: Tue, 18 Apr 2023 18:30:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-GB
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>
References: <20230418111032.487587-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230418111032.487587-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 18/04/2023 12:10 pm, Andrew Cooper wrote:
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 36a07ef77eae..98529215ddec 100644
> @@ -5879,6 +5880,75 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>      return modify_xen_mappings(s, e, _PAGE_NONE);
>  }
>  
> +/*
> + * Similar to modify_xen_mappings(), but used by the alternatives and
> + * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
> + * responsibility of the caller, and *MUST* not be introduced here.
> + *
> + * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
> + * Must be called with present flags, and over present mappings.
> + * Must be called on leaf page boundaries, i.e. s and e must not be in the
> + * middle of a superpage.
> + */
> +void init_or_livepatch modify_xen_mappings_lite(
> +    unsigned long s, unsigned long e, unsigned int _nf)
> +{
> +    unsigned long v = s, fm, nf;
> +
> +    /* Set of valid PTE bits which may be altered. */
> +#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
> +    fm = put_pte_flags(FLAGS_MASK);
> +    nf = put_pte_flags(_nf & FLAGS_MASK);
> +#undef FLAGS_MASK
> +
> +    ASSERT(nf & _PAGE_PRESENT);
> +    ASSERT(IS_ALIGNED(s, PAGE_SIZE) && s >= XEN_VIRT_START);
> +    ASSERT(IS_ALIGNED(e, PAGE_SIZE) && e <= XEN_VIRT_END);
> +
> +    while ( v < e )
> +    {
> +        l2_pgentry_t *pl2e = &l2_xenmap[l2_table_offset(v)];
> +        l2_pgentry_t l2e = l2e_read_atomic(pl2e);
> +        unsigned int l2f = l2e_get_flags(l2e);
> +
> +        ASSERT(l2f & _PAGE_PRESENT);
> +
> +        if ( l2e_get_flags(l2e) & _PAGE_PSE )
> +        {
> +            ASSERT(l1_table_offset(v) == 0);
> +            ASSERT(e - v >= (1UL << L2_PAGETABLE_SHIFT));

On second thoughts, no.  This has just triggered in my final sanity
testing before pushing.

Currently debugging.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 17:44:28 2023
Date: Tue, 18 Apr 2023 10:44:14 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleg Nikitenko <oleshiiwood@gmail.com>
cc: Julien Grall <julien@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Carlo Nonato <carlo.nonato@minervasys.tech>, michal.orzel@amd.com
Subject: Re: xen cache colors in ARM
In-Reply-To: <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
Message-ID: <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com> <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org> <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com> <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> Hi Julien,
> 
> >> This feature has not been merged in Xen upstream yet
> 
> > would assume that upstream + the series on the ML [1] work
> 
> Please clarify this point.
> Because the two thoughts are controversial.

Hi Oleg,

As Julien wrote, there is nothing controversial. As you are aware,
Xilinx maintains a separate Xen tree specific to Xilinx platforms here:
https://github.com/xilinx/xen

and the branch you are using (xlnx_rebase_4.16) comes from there.


Instead, the upstream Xen tree lives here:
https://xenbits.xen.org/gitweb/?p=xen.git;a=summary


The Cache Coloring feature that you are trying to configure is present
in xlnx_rebase_4.16, but not yet upstream (there is an outstanding
patch series to add cache coloring to upstream Xen, but it hasn't been
merged yet).


Anyway, if you are using xlnx_rebase_4.16 this doesn't matter much for
you, as that tree already includes the Cache Coloring feature.


I take it you are using ImageBuilder to generate the boot
configuration? If so, please post the ImageBuilder config file that you
are using.

But from the boot message, it looks like the colors configuration for
Dom0 is incorrect.
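[Editorial note: for readers unfamiliar with the setup being discussed, a coloring configuration in the Xilinx tree is typically driven by the Xen command line embedded in the ImageBuilder config. The fragment below is only an illustrative sketch: the variable names and the "dom0_colors=" option syntax are assumptions based on the Xilinx coloring series and may differ between branches, so check the ImageBuilder README and your tree's documentation before using them.]

```shell
# Hypothetical ImageBuilder config fragment (names are assumptions;
# verify against the ImageBuilder README for your branch).
MEMORY_START="0x0"
MEMORY_END="0x80000000"

# Xen command line: give Dom0 a subset of the available cache colors.
# "dom0_colors=" is the syntax used by the Xilinx coloring series;
# the upstream patch series has proposed different option names.
XEN_CMD="console=dtuart dom0_mem=1024M dom0_colors=0-3"

DOM0_KERNEL="Image"
DOM0_RAMDISK="rootfs.cpio.gz"
NUM_DOMUS=0
```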


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 17:54:04 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180296-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180296: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Apr 2023 17:53:47 +0000

flight 180296 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180296/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-examine    11 examine-serial/bootloader fail REGR. vs. 180287
 test-armhf-armhf-examine     12 examine-serial/kernel    fail REGR. vs. 180287
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 180287

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop      fail blocked in 180287
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180287
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180287
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180287
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180287
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180287
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180287
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180287
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180287
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180287
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180287
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180287
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  1213ebfb9f35920b3e0f5dff71bb917f5fb4be5f
baseline version:
 xen                  5eb6bd7454e253f4907dbeb7aa982967b21698bc

Last test of basis   180287  2023-04-17 18:09:05 Z    0 days
Testing same since   180296  2023-04-18 06:36:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 1213ebfb9f35920b3e0f5dff71bb917f5fb4be5f
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Wed Apr 5 13:17:50 2023 +0200

    xen/arm: vpl011: Do not try to handle TX FIFO status when backend in Xen
    
    From vpl011_rx_char_xen(), we call vpl011_data_avail(), which handles
    both RX and TX state. Because we pass 0 as out_fifo_level and
    SBSA_UART_FIFO_SIZE as out_size, we end up calling
    vpl011_update_tx_fifo_status(), which performs TXI bit handling
    depending on the FIFO trigger level. This does not make sense when the
    backend is in Xen, as we maintain a single TX state where data can
    always be written, and as such there is no TX FIFO handling.
    Furthermore, this function assumes that the backend is in a domain by
    making use of struct xencons_interface unconditionally. Fix it by
    calling this function only when the backend is in a domain, and add an
    assert for sanity.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Henry Wang <Henry.Wang@arm.com>

commit d3784f16bbfeabde92b55ee6d5d66dcb1d82d060
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Wed Apr 5 13:17:49 2023 +0200

    xen/arm: vpl011: Handle correctly TXFE when backend in Xen
    
    When the backend is in Xen, the handling of data written to the DR
    register is a bit special, because we want to tell the guest that we
    are always ready for new data to be written (i.e. no real FIFO, with
    TXFF/BUSY never set and TXI always set). This conflicts with the
    current handling of the TXFE bit, which we always clear and never set
    on the write path (we happen to set it when we receive a char from
    serial input, due to the use of vpl011_data_avail(), but this might
    never be called). This can lead to issues if a guest driver uses the
    TXFE bit to check for TX transmission completion (such a guest could
    then wait endlessly). Fix it by keeping TXFE always set, to match the
    current emulation logic.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>

commit 005e84e695ed086b9ebb281ee6711fd1aa6aaaba
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Wed Apr 5 13:17:48 2023 +0200

    xen/arm: vpl011: Fix misleading comments
    
    In both vpl011_read_data() and vpl011_read_data_xen(), there is a
    comment stating that the guest is expected to read the DR register
    only if the TXFE bit of the FR register is not set. This is logically
    wrong: it should be RXFE (i.e. RX FIFO empty bit set -> nothing to
    read).
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>

commit 65c4e7472cafb60f478e7a5f358ee1eeac28b5a8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 17 18:11:45 2023 +0200

    x86emul: support AVX-NE-CONVERT insns
    
    Matching what was done earlier, explicit tests are added only for
    irregular insn / memory access patterns.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 842acaa743a503726d6c4d77a7982cc64f07c4bf
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 17 18:11:06 2023 +0200

    x86emul: support AVX-VNNI-INT8
    
    These are close relatives of the AVX-VNNI ISA extension. Since the insns
    here and in particular their memory access patterns follow the usual
    scheme (and especially the byte variants of AVX-VNNI), I didn't think it
    was necessary to add a contrived test specifically for them.
    
    While making the addition also re-wire AVX-VNNI's handling to
    simd_0f_ymm: There's no reason to check the AVX feature alongside the
    one actually of interest (there are a few features where two checks are
    actually necessary, e.g. GFNI+AVX, but this isn't the case here).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit da232f1f1118e8c8fad520dedee312005c2984fb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 17 18:10:14 2023 +0200

    x86emul: support AVX-IFMA insns
    
    As in a few cases before (in particular: AVX512_IFMA), since the insns
    here and in particular their memory access patterns follow the usual
    scheme, I didn't think it was necessary to add a contrived test
    specifically for them.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 17:55:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 17:55:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523032.812773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1popXm-0000Ya-Fp; Tue, 18 Apr 2023 17:55:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523032.812773; Tue, 18 Apr 2023 17:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1popXm-0000YT-C8; Tue, 18 Apr 2023 17:55:06 +0000
Received: by outflank-mailman (input) for mailman id 523032;
 Tue, 18 Apr 2023 17:55:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TTWx=AJ=citrix.com=prvs=4659928b3=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1popXk-0000YF-Ov
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 17:55:04 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 24756fc8-de12-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 19:55:03 +0200 (CEST)
Received: from mail-sn1nam02lp2040.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 Apr 2023 13:54:52 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN9PR03MB6185.namprd03.prod.outlook.com (2603:10b6:408:11e::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Tue, 18 Apr
 2023 17:54:49 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6298.045; Tue, 18 Apr 2023
 17:54:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24756fc8-de12-11ed-b21f-6b7b168915f2
X-IronPort-RemoteIP: 104.47.57.40
X-IronPort-MID: 104773415
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,207,1677560400"; 
   d="scan'208";a="104773415"
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <226fba6c-aeca-d38b-7d47-07b2f8d6b403@citrix.com>
Date: Tue, 18 Apr 2023 18:54:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>
References: <20230418111032.487587-1-andrew.cooper3@citrix.com>
 <c2693ac0-4f6a-83ae-c477-75b3f05b938a@citrix.com>
In-Reply-To: <c2693ac0-4f6a-83ae-c477-75b3f05b938a@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0059.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:153::10) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BN9PR03MB6185:EE_
X-MS-Office365-Filtering-Correlation-Id: 75690730-7c52-4001-77e5-08db403600da
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 75690730-7c52-4001-77e5-08db403600da
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Apr 2023 17:54:49.3338
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lR7pXNC5gaVajTy1vagIenBO+cLqV7xHjVSjDMH7Tz6J06IUkFXFeNm+tfVrYtU2w2z247dYHjmdXxitqks67qHtMtr1uFpOHEyvuJswdlc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR03MB6185

On 18/04/2023 6:30 pm, Andrew Cooper wrote:
> On 18/04/2023 12:10 pm, Andrew Cooper wrote:
>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>> index 36a07ef77eae..98529215ddec 100644
>> @@ -5879,6 +5880,75 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>>      return modify_xen_mappings(s, e, _PAGE_NONE);
>>  }
>>  
>> +/*
>> + * Similar to modify_xen_mappings(), but used by the alternatives and
>> + * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
>> + * responsibility of the caller, and *MUST* not be introduced here.
>> + *
>> + * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
>> + * Must be called with present flags, and over present mappings.
>> + * Must be called on leaf page boundaries, i.e. s and e must not be in the
>> + * middle of a superpage.
>> + */
>> +void init_or_livepatch modify_xen_mappings_lite(
>> +    unsigned long s, unsigned long e, unsigned int _nf)
>> +{
>> +    unsigned long v = s, fm, nf;
>> +
>> +    /* Set of valid PTE bits which may be altered. */
>> +#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
>> +    fm = put_pte_flags(FLAGS_MASK);
>> +    nf = put_pte_flags(_nf & FLAGS_MASK);
>> +#undef FLAGS_MASK
>> +
>> +    ASSERT(nf & _PAGE_PRESENT);
>> +    ASSERT(IS_ALIGNED(s, PAGE_SIZE) && s >= XEN_VIRT_START);
>> +    ASSERT(IS_ALIGNED(e, PAGE_SIZE) && e <= XEN_VIRT_END);
>> +
>> +    while ( v < e )
>> +    {
>> +        l2_pgentry_t *pl2e = &l2_xenmap[l2_table_offset(v)];
>> +        l2_pgentry_t l2e = l2e_read_atomic(pl2e);
>> +        unsigned int l2f = l2e_get_flags(l2e);
>> +
>> +        ASSERT(l2f & _PAGE_PRESENT);
>> +
>> +        if ( l2e_get_flags(l2e) & _PAGE_PSE )
>> +        {
>> +            ASSERT(l1_table_offset(v) == 0);
>> +            ASSERT(e - v >= (1UL << L2_PAGETABLE_SHIFT));
> On second thoughts, no.  This has just triggered in my final sanity
> testing before pushing.
>
> Currently debugging.

(XEN) livepatch: lp: Applying 1 functions
(XEN) *** ML (ffff82d040200000, ffff82d0403b4000, 0x163)
(XEN)   l2t[001] SP: 000000009f4001a1->000000009f4001e3  (v
ffff82d040200000, e ffff82d0403b4000)
(XEN) hi_func: Hi! (called 1 times)
(XEN) Hook executing.
(XEN) *** ML (ffff82d040200000, ffff82d0403b4000, 0x121)
(XEN)   l2t[001] SP: 000000009f4001e3->000000009f4001a1  (v
ffff82d040200000, e ffff82d0403b4000)
(XEN) livepatch: module metadata:

When Xen is using forced 2M alignment, the virtual_region entry for
.text isn't aligned up to the end of the region.

So the final bullet point is actually wrong.  I'm going to relax it to
say that it is the caller's responsibility to make sure that bad things
don't happen if s or e are in the middle of a superpage, because I'm not
changing how virtual_region works to satisfy this assert.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 20:10:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 20:10:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523057.812786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1porei-0006L3-9W; Tue, 18 Apr 2023 20:10:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523057.812786; Tue, 18 Apr 2023 20:10:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1porei-0006Kw-6g; Tue, 18 Apr 2023 20:10:24 +0000
Received: by outflank-mailman (input) for mailman id 523057;
 Tue, 18 Apr 2023 20:10:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=szI2=AJ=molgen.mpg.de=pmenzel@srs-se1.protection.inumbo.net>)
 id 1poreg-0006Kq-IF
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 20:10:22 +0000
Received: from mx3.molgen.mpg.de (mx3.molgen.mpg.de [141.14.17.11])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0a03efc9-de25-11ed-b21f-6b7b168915f2;
 Tue, 18 Apr 2023 22:10:18 +0200 (CEST)
Received: from [192.168.0.2] (ip5f5aebaa.dynamic.kabel-deutschland.de
 [95.90.235.170])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested) (Authenticated sender: pmenzel)
 by mx.molgen.mpg.de (Postfix) with ESMTPSA id A233F61CC40F9;
 Tue, 18 Apr 2023 22:10:16 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a03efc9-de25-11ed-b21f-6b7b168915f2
Content-Type: multipart/mixed; boundary="------------0DImfNerhQq1M72qua4azj6z"
Message-ID: <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
Date: Tue, 18 Apr 2023 22:10:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Content-Language: en-US
To: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E. J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
 <87r0sh4m7a.ffs@tglx>
From: Paul Menzel <pmenzel@molgen.mpg.de>
In-Reply-To: <87r0sh4m7a.ffs@tglx>

This is a multi-part message in MIME format.
--------------0DImfNerhQq1M72qua4azj6z
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Dear Thomas,


On 18.04.23 at 10:40, Thomas Gleixner wrote:
> On Tue, Apr 18 2023 at 08:58, Thomas Gleixner wrote:
>> On Mon, Apr 17 2023 at 19:40, Paul Menzel wrote:
>>> On 17.04.23 at 16:48, Thomas Gleixner wrote:
>>>
>>>> On Mon, Apr 17 2023 at 13:19, Paul Menzel wrote:
>>>>> On 15.04.23 at 01:44, Thomas Gleixner wrote:
>>>>> [    0.258193] smpboot: CPU0: AMD A6-6400K APU with Radeon(tm) HD Graphics (family: 0x15, model: 0x13, stepping: 0x1)
>>>>> […]
>>>>> [    0.259329] smp: Bringing up secondary CPUs ...
>>>>> [    0.259527] x86: Booting SMP configuration:
>>>>> [    0.259528] .... node  #0, CPUs:      #1
>>>>> [    0.261007] After schedule_preempt_disabled
>>>>> [   10.260990] CPU1 failed to report alive state
>>>>
>>>> Weird. CPU1 fails to come up and report that it has reached the
>>>> synchronization point.
>>>>
>>>> Does it work when you add cpuhp.parallel=off on the kernel command line?
>>>
>>> Yes, the ten seconds delay is gone with `cpuhp.parallel=off`.
>>>
>>> There was a patch set in the past that worked on that device. I think
>>> up to v4 it did *not* work at all and hung [1]. I need some days to
>>> collect the results again.
>>
>> Can you please apply the patch below on top of the pile and remove the
>> command line option again?
> 
> Bah. That patch does not make any sense at all. Not enough coffee.
> 
> Can you please provide the output of cpuid?

Of course. Here is the top; the whole output is attached.

```
CPU 0:
    vendor_id = "AuthenticAMD"
    version information (1/eax):
       processor type  = primary processor (0)
       family          = 0xf (15)
       model           = 0x3 (3)
       stepping id     = 0x1 (1)
       extended family = 0x6 (6)
       extended model  = 0x1 (1)
       (family synth)  = 0x15 (21)
       (model synth)   = 0x13 (19)
       (simple synth)  = AMD (unknown type) (Richland RL-A1) [Piledriver], 32nm
[…]
```
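The "(family synth)" and "(model synth)" values above follow the standard x86 CPUID leaf-1 convention: for base family 0xf, the extended family is added and the extended model is prepended as the high nibble. A minimal sketch of that arithmetic (the function name and the reconstructed EAX value are illustrative, not taken from the attached cpuid output verbatim):

```python
def synth_family_model(eax: int) -> tuple[int, int]:
    """Derive the displayed family/model from CPUID.1:EAX fields."""
    model = (eax >> 4) & 0xF
    family = (eax >> 8) & 0xF
    ext_model = (eax >> 16) & 0xF
    ext_family = (eax >> 20) & 0xFF
    # For family 0xf the extended fields kick in: add the extended
    # family, and prepend the extended model as the high nibble.
    if family == 0xF:
        family += ext_family
        model |= ext_model << 4
    return family, model

# EAX encoding the fields shown above: family 0xf, model 0x3,
# stepping 0x1, extended family 0x6, extended model 0x1.
eax = (0x6 << 20) | (0x1 << 16) | (0xF << 8) | (0x3 << 4) | 0x1
print(synth_family_model(eax))  # → (21, 19), i.e. 0x15 / 0x13
```

This matches the synth values in the output: family 0xf + 0x6 = 0x15 (21) and model 0x1·16 + 0x3 = 0x13 (19).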


Kind regards,

Paul
--------------0DImfNerhQq1M72qua4azj6z
Content-Type: text/plain; charset=UTF-8; name="cpuid.txt"
Content-Disposition: attachment; filename="cpuid.txt"
Content-Transfer-Encoding: base64

Q1BVIDA6CiAgIHZlbmRvcl9pZCA9ICJBdXRoZW50aWNBTUQiCiAgIHZlcnNpb24gaW5mb3Jt
YXRpb24gKDEvZWF4KToKICAgICAgcHJvY2Vzc29yIHR5cGUgID0gcHJpbWFyeSBwcm9jZXNz
b3IgKDApCiAgICAgIGZhbWlseSAgICAgICAgICA9IDB4ZiAoMTUpCiAgICAgIG1vZGVsICAg
ICAgICAgICA9IDB4MyAoMykKICAgICAgc3RlcHBpbmcgaWQgICAgID0gMHgxICgxKQogICAg
ICBleHRlbmRlZCBmYW1pbHkgPSAweDYgKDYpCiAgICAgIGV4dGVuZGVkIG1vZGVsICA9IDB4
MSAoMSkKICAgICAgKGZhbWlseSBzeW50aCkgID0gMHgxNSAoMjEpCiAgICAgIChtb2RlbCBz
eW50aCkgICA9IDB4MTMgKDE5KQogICAgICAoc2ltcGxlIHN5bnRoKSAgPSBBTUQgKHVua25v
d24gdHlwZSkgKFJpY2hsYW5kIFJMLUExKSBbUGlsZWRyaXZlcl0sIDMybm0KICAgbWlzY2Vs
bGFuZW91cyAoMS9lYngpOgogICAgICBwcm9jZXNzIGxvY2FsIEFQSUMgcGh5c2ljYWwgSUQg
PSAweDAgKDApCiAgICAgIG1heGltdW0gSURzIGZvciBDUFVzIGluIHBrZyAgICA9IDB4MiAo
MikKICAgICAgQ0xGTFVTSCBsaW5lIHNpemUgICAgICAgICAgICAgID0gMHg4ICg4KQogICAg
ICBicmFuZCBpbmRleCAgICAgICAgICAgICAgICAgICAgPSAweDAgKDApCiAgIGJyYW5kIGlk
ID0gMHgwMCAoMCk6IHVua25vd24KICAgZmVhdHVyZSBpbmZvcm1hdGlvbiAoMS9lZHgpOgog
ICAgICB4ODcgRlBVIG9uIGNoaXAgICAgICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAg
ICAgVk1FOiB2aXJ0dWFsLTgwODYgbW9kZSBlbmhhbmNlbWVudCAgICAgPSB0cnVlCiAgICAg
IERFOiBkZWJ1Z2dpbmcgZXh0ZW5zaW9ucyAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBQ
U0U6IHBhZ2Ugc2l6ZSBleHRlbnNpb25zICAgICAgICAgICAgICA9IHRydWUKICAgICAgVFND
OiB0aW1lIHN0YW1wIGNvdW50ZXIgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIFJETVNS
IGFuZCBXUk1TUiBzdXBwb3J0ICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBQQUU6IHBo
eXNpY2FsIGFkZHJlc3MgZXh0ZW5zaW9ucyAgICAgICA9IHRydWUKICAgICAgTUNFOiBtYWNo
aW5lIGNoZWNrIGV4Y2VwdGlvbiAgICAgICAgICAgPSB0cnVlCiAgICAgIENNUFhDSEc4QiBp
bnN0LiAgICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBBUElDIG9uIGNoaXAg
ICAgICAgICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgU1lTRU5URVIgYW5kIFNZ
U0VYSVQgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIE1UUlI6IG1lbW9yeSB0eXBl
IHJhbmdlIHJlZ2lzdGVycyAgICAgID0gdHJ1ZQogICAgICBQVEUgZ2xvYmFsIGJpdCAgICAg
ICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgTUNBOiBtYWNoaW5lIGNoZWNrIGFy
Y2hpdGVjdHVyZSAgICAgICAgPSB0cnVlCiAgICAgIENNT1Y6IGNvbmRpdGlvbmFsIG1vdmUv
Y29tcGFyZSBpbnN0ciAgID0gdHJ1ZQogICAgICBQQVQ6IHBhZ2UgYXR0cmlidXRlIHRhYmxl
ICAgICAgICAgICAgICA9IHRydWUKICAgICAgUFNFLTM2OiBwYWdlIHNpemUgZXh0ZW5zaW9u
ICAgICAgICAgICAgPSB0cnVlCiAgICAgIFBTTjogcHJvY2Vzc29yIHNlcmlhbCBudW1iZXIg
ICAgICAgICAgID0gZmFsc2UKICAgICAgQ0xGTFVTSCBpbnN0cnVjdGlvbiAgICAgICAgICAg
ICAgICAgICAgPSB0cnVlCiAgICAgIERTOiBkZWJ1ZyBzdG9yZSAgICAgICAgICAgICAgICAg
ICAgICAgID0gZmFsc2UKICAgICAgQUNQSTogdGhlcm1hbCBtb25pdG9yIGFuZCBjbG9jayBj
dHJsICAgPSBmYWxzZQogICAgICBNTVggVGVjaG5vbG9neSAgICAgICAgICAgICAgICAgICAg
ICAgICA9IHRydWUKICAgICAgRlhTQVZFL0ZYUlNUT1IgICAgICAgICAgICAgICAgICAgICAg
ICAgPSB0cnVlCiAgICAgIFNTRSBleHRlbnNpb25zICAgICAgICAgICAgICAgICAgICAgICAg
ID0gdHJ1ZQogICAgICBTU0UyIGV4dGVuc2lvbnMgICAgICAgICAgICAgICAgICAgICAgICA9
IHRydWUKICAgICAgU1M6IHNlbGYgc25vb3AgICAgICAgICAgICAgICAgICAgICAgICAgPSBm
YWxzZQogICAgICBoeXBlci10aHJlYWRpbmcgLyBtdWx0aS1jb3JlIHN1cHBvcnRlZCA9IHRy
dWUKICAgICAgVE06IHRoZXJtLiBtb25pdG9yICAgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICBJQTY0ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNl
CiAgICAgIFBCRTogcGVuZGluZyBicmVhayBldmVudCAgICAgICAgICAgICAgID0gZmFsc2UK
ICAgZmVhdHVyZSBpbmZvcm1hdGlvbiAoMS9lY3gpOgogICAgICBQTkkvU1NFMzogUHJlc2Nv
dHQgTmV3IEluc3RydWN0aW9ucyAgICAgPSB0cnVlCiAgICAgIFBDTE1VTERRIGluc3RydWN0
aW9uICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgRFRFUzY0OiA2NC1iaXQgZGVi
dWcgc3RvcmUgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgTU9OSVRPUi9NV0FJVCAgICAg
ICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBDUEwtcXVhbGlmaWVkIGRlYnVn
IHN0b3JlICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBWTVg6IHZpcnR1YWwgbWFjaGlu
ZSBleHRlbnNpb25zICAgICAgICAgPSBmYWxzZQogICAgICBTTVg6IHNhZmVyIG1vZGUgZXh0
ZW5zaW9ucyAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBFbmhhbmNlZCBJbnRlbCBTcGVl
ZFN0ZXAgVGVjaG5vbG9neSAgICAgPSBmYWxzZQogICAgICBUTTI6IHRoZXJtYWwgbW9uaXRv
ciAyICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBTU1NFMyBleHRlbnNpb25zICAg
ICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIGNvbnRleHQgSUQ6IGFkYXB0aXZl
IG9yIHNoYXJlZCBMMSBkYXRhICA9IGZhbHNlCiAgICAgIFNEQkc6IElBMzJfREVCVUdfSU5U
RVJGQUNFICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIEZNQSBpbnN0cnVjdGlvbiAgICAg
ICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgQ01QWENIRzE2QiBpbnN0cnVjdGlv
biAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICB4VFBSIGRpc2FibGUgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBQRENNOiBwZXJmbW9uIGFuZCBkZWJ1
ZyAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBQQ0lEOiBwcm9jZXNzIGNvbnRleHQg
aWRlbnRpZmllcnMgICAgICAgPSBmYWxzZQogICAgICBEQ0E6IGRpcmVjdCBjYWNoZSBhY2Nl
c3MgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBTU0U0LjEgZXh0ZW5zaW9ucyAgICAg
ICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIFNTRTQuMiBleHRlbnNpb25zICAgICAg
ICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgeDJBUElDOiBleHRlbmRlZCB4QVBJQyBz
dXBwb3J0ICAgICAgICAgID0gZmFsc2UKICAgICAgTU9WQkUgaW5zdHJ1Y3Rpb24gICAgICAg
ICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgUE9QQ05UIGluc3RydWN0aW9uICAgICAg
ICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICB0aW1lIHN0YW1wIGNvdW50ZXIgZGVhZGxp
bmUgICAgICAgICAgICAgPSBmYWxzZQogICAgICBBRVMgaW5zdHJ1Y3Rpb24gICAgICAgICAg
ICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIFhTQVZFL1hTVE9SIHN0YXRlcyAgICAgICAg
ICAgICAgICAgICAgICA9IHRydWUKICAgICAgT1MtZW5hYmxlZCBYU0FWRS9YU1RPUiAgICAg
ICAgICAgICAgICAgID0gdHJ1ZQogICAgICBBVlg6IGFkdmFuY2VkIHZlY3RvciBleHRlbnNp
b25zICAgICAgICAgPSB0cnVlCiAgICAgIEYxNkMgaGFsZi1wcmVjaXNpb24gY29udmVydCBp
bnN0cnVjdGlvbiA9IHRydWUKICAgICAgUkRSQU5EIGluc3RydWN0aW9uICAgICAgICAgICAg
ICAgICAgICAgID0gZmFsc2UKICAgICAgaHlwZXJ2aXNvciBndWVzdCBzdGF0dXMgICAgICAg
ICAgICAgICAgID0gZmFsc2UKICAgY2FjaGUgYW5kIFRMQiBpbmZvcm1hdGlvbiAoMik6CiAg
IHByb2Nlc3NvciBzZXJpYWwgbnVtYmVyID0gMDA2MS0wRjMxLTAwMDAtMDAwMC0wMDAwLTAw
MDAKICAgZGV0ZXJtaW5pc3RpYyBjYWNoZSBwYXJhbWV0ZXJzICg0KToKICAgICAgLS0tIGNh
Y2hlIDAgLS0tCiAgICAgIGNhY2hlIHR5cGUgICAgICAgICAgICAgICAgICAgICAgICAgPSBu
byBtb3JlIGNhY2hlcyAoMCkKICAgTU9OSVRPUi9NV0FJVCAoNSk6CiAgICAgIHNtYWxsZXN0
IG1vbml0b3ItbGluZSBzaXplIChieXRlcykgICAgICAgPSAweDQwICg2NCkKICAgICAgbGFy
Z2VzdCBtb25pdG9yLWxpbmUgc2l6ZSAoYnl0ZXMpICAgICAgICA9IDB4NDAgKDY0KQogICAg
ICBlbnVtIG9mIE1vbml0b3ItTVdBSVQgZXh0cyBzdXBwb3J0ZWQgICAgID0gdHJ1ZQogICAg
ICBzdXBwb3J0cyBpbnRycyBhcyBicmVhay1ldmVudCBmb3IgTVdBSVQgID0gdHJ1ZQogICAg
ICBudW1iZXIgb2YgQzAgc3ViIEMtc3RhdGVzIHVzaW5nIE1XQUlUICAgID0gMHgwICgwKQog
ICAgICBudW1iZXIgb2YgQzEgc3ViIEMtc3RhdGVzIHVzaW5nIE1XQUlUICAgID0gMHgwICgw
KQogICAgICBudW1iZXIgb2YgQzIgc3ViIEMtc3RhdGVzIHVzaW5nIE1XQUlUICAgID0gMHgw
ICgwKQogICAgICBudW1iZXIgb2YgQzMgc3ViIEMtc3RhdGVzIHVzaW5nIE1XQUlUICAgID0g
MHgwICgwKQogICAgICBudW1iZXIgb2YgQzQgc3ViIEMtc3RhdGVzIHVzaW5nIE1XQUlUICAg
ID0gMHgwICgwKQogICAgICBudW1iZXIgb2YgQzUgc3ViIEMtc3RhdGVzIHVzaW5nIE1XQUlU
ICAgID0gMHgwICgwKQogICAgICBudW1iZXIgb2YgQzYgc3ViIEMtc3RhdGVzIHVzaW5nIE1X
QUlUICAgID0gMHgwICgwKQogICAgICBudW1iZXIgb2YgQzcgc3ViIEMtc3RhdGVzIHVzaW5n
IE1XQUlUICAgID0gMHgwICgwKQogICBUaGVybWFsIGFuZCBQb3dlciBNYW5hZ2VtZW50IEZl
YXR1cmVzICg2KToKICAgICAgZGlnaXRhbCB0aGVybW9tZXRlciAgICAgICAgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgSW50ZWwgVHVyYm8gQm9vc3QgVGVjaG5vbG9neSAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgQVJBVCBhbHdheXMgcnVubmluZyBBUElDIHRpbWVyICAgICAg
ICAgID0gZmFsc2UKICAgICAgUExOIHBvd2VyIGxpbWl0IG5vdGlmaWNhdGlvbiAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgRUNNRCBleHRlbmRlZCBjbG9jayBtb2R1bGF0aW9uIGR1dHkg
ICAgID0gZmFsc2UKICAgICAgUFRNIHBhY2thZ2UgdGhlcm1hbCBtYW5hZ2VtZW50ICAgICAg
ICAgID0gZmFsc2UKICAgICAgSFdQIGJhc2UgcmVnaXN0ZXJzICAgICAgICAgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgSFdQIG5vdGlmaWNhdGlvbiAgICAgICAgICAgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgSFdQIGFjdGl2aXR5IHdpbmRvdyAgICAgICAgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgSFdQIGVuZXJneSBwZXJmb3JtYW5jZSBwcmVmZXJlbmNlICAg
ICAgID0gZmFsc2UKICAgICAgSFdQIHBhY2thZ2UgbGV2ZWwgcmVxdWVzdCAgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgSERDIGJhc2UgcmVnaXN0ZXJzICAgICAgICAgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgSW50ZWwgVHVyYm8gQm9vc3QgTWF4IFRlY2hub2xvZ3kgMy4w
ICAgID0gZmFsc2UKICAgICAgSFdQIGNhcGFiaWxpdGllcyAgICAgICAgICAgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgSFdQIFBFQ0kgb3ZlcnJpZGUgICAgICAgICAgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgZmxleGlibGUgSFdQICAgICAgICAgICAgICAgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgSUEzMl9IV1BfUkVRVUVTVCBNU1IgZmFzdCBhY2Nlc3MgbW9k
ZSAgID0gZmFsc2UKICAgICAgSFdfRkVFREJBQ0sgTVNScyBzdXBwb3J0ZWQgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgaWdub3JpbmcgaWRsZSBsb2dpY2FsIHByb2Nlc3NvciBIV1Ag
cmVxID0gZmFsc2UKICAgICAgVGhyZWFkIERpcmVjdG9yICAgICAgICAgICAgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgSUEzMl9IV19GRUVEQkFDS19USFJFQURfQ09ORklHIGJpdCAy
NSAgID0gZmFsc2UKICAgICAgZGlnaXRhbCB0aGVybW9tZXRlciB0aHJlc2hvbGRzICAgICAg
ICAgID0gMHgwICgwKQogICAgICBoYXJkd2FyZSBjb29yZGluYXRpb24gZmVlZGJhY2sgICAg
ICAgICAgPSB0cnVlCiAgICAgIEFDTlQyIGF2YWlsYWJsZSAgICAgICAgICAgICAgICAgICAg
ICAgICA9IGZhbHNlCiAgICAgIHBlcmZvcm1hbmNlLWVuZXJneSBiaWFzIGNhcGFiaWxpdHkg
ICAgICA9IGZhbHNlCiAgICAgIG51bWJlciBvZiBlbmggaGFyZHdhcmUgZmVlZGJhY2sgY2xh
c3NlcyA9IDB4MCAoMCkKICAgICAgcGVyZm9ybWFuY2UgY2FwYWJpbGl0eSByZXBvcnRpbmcg
ICAgICAgID0gZmFsc2UKICAgICAgZW5lcmd5IGVmZmljaWVuY3kgY2FwYWJpbGl0eSByZXBv
cnRpbmcgID0gZmFsc2UKICAgICAgc2l6ZSBvZiBmZWVkYmFjayBzdHJ1Y3QgKDRLQiBwYWdl
cykgICAgID0gMHgxICgxKQogICAgICBpbmRleCBvZiBDUFUncyByb3cgaW4gZmVlZGJhY2sg
c3RydWN0ICAgPSAweDAgKDApCiAgIGV4dGVuZGVkIGZlYXR1cmUgZmxhZ3MgKDcpOgogICAg
ICBGU0dTQkFTRSBpbnN0cnVjdGlvbnMgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAg
ICAgSUEzMl9UU0NfQURKVVNUIE1TUiBzdXBwb3J0ZWQgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIFNHWDogU29mdHdhcmUgR3VhcmQgRXh0ZW5zaW9ucyBzdXBwb3J0ZWQgPSBmYWxzZQog
ICAgICBCTUkxIGluc3RydWN0aW9ucyAgICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQog
ICAgICBITEUgaGFyZHdhcmUgbG9jayBlbGlzaW9uICAgICAgICAgICAgICAgID0gZmFsc2UK
ICAgICAgQVZYMjogYWR2YW5jZWQgdmVjdG9yIGV4dGVuc2lvbnMgMiAgICAgICA9IGZhbHNl
CiAgICAgIEZEUF9FWENQVE5fT05MWSAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICBTTUVQIHN1cGVydmlzb3IgbW9kZSBleGVjIHByb3RlY3Rpb24gICAgID0gZmFs
c2UKICAgICAgQk1JMiBpbnN0cnVjdGlvbnMgICAgICAgICAgICAgICAgICAgICAgICA9IGZh
bHNlCiAgICAgIGVuaGFuY2VkIFJFUCBNT1ZTQi9TVE9TQiAgICAgICAgICAgICAgICAgPSBm
YWxzZQogICAgICBJTlZQQ0lEIGluc3RydWN0aW9uICAgICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgUlRNOiByZXN0cmljdGVkIHRyYW5zYWN0aW9uYWwgbWVtb3J5ICAgICA9
IGZhbHNlCiAgICAgIFJEVC1DTVQvUFFvUyBjYWNoZSBtb25pdG9yaW5nICAgICAgICAgICAg
PSBmYWxzZQogICAgICBkZXByZWNhdGVkIEZQVSBDUy9EUyAgICAgICAgICAgICAgICAgICAg
ID0gZmFsc2UKICAgICAgTVBYOiBpbnRlbCBtZW1vcnkgcHJvdGVjdGlvbiBleHRlbnNpb25z
ICA9IGZhbHNlCiAgICAgIFJEVC1DQVQvUFFFIGNhY2hlIGFsbG9jYXRpb24gICAgICAgICAg
ICAgPSBmYWxzZQogICAgICBBVlg1MTJGOiBBVlgtNTEyIGZvdW5kYXRpb24gaW5zdHJ1Y3Rp
b25zID0gZmFsc2UKICAgICAgQVZYNTEyRFE6IGRvdWJsZSAmIHF1YWR3b3JkIGluc3RydWN0
aW9ucyA9IGZhbHNlCiAgICAgIFJEU0VFRCBpbnN0cnVjdGlvbiAgICAgICAgICAgICAgICAg
ICAgICAgPSBmYWxzZQogICAgICBBRFggaW5zdHJ1Y3Rpb25zICAgICAgICAgICAgICAgICAg
ICAgICAgID0gZmFsc2UKICAgICAgU01BUDogc3VwZXJ2aXNvciBtb2RlIGFjY2VzcyBwcmV2
ZW50aW9uICA9IGZhbHNlCiAgICAgIEFWWDUxMklGTUE6IGludGVnZXIgZnVzZWQgbXVsdGlw
bHkgYWRkICAgPSBmYWxzZQogICAgICBQQ09NTUlUIGluc3RydWN0aW9uICAgICAgICAgICAg
ICAgICAgICAgID0gZmFsc2UKICAgICAgQ0xGTFVTSE9QVCBpbnN0cnVjdGlvbiAgICAgICAg
ICAgICAgICAgICA9IGZhbHNlCiAgICAgIENMV0IgaW5zdHJ1Y3Rpb24gICAgICAgICAgICAg
ICAgICAgICAgICAgPSBmYWxzZQogICAgICBJbnRlbCBwcm9jZXNzb3IgdHJhY2UgICAgICAg
ICAgICAgICAgICAgID0gZmFsc2UKICAgICAgQVZYNTEyUEY6IHByZWZldGNoIGluc3RydWN0
aW9ucyAgICAgICAgICA9IGZhbHNlCiAgICAgIEFWWDUxMkVSOiBleHBvbmVudCAmIHJlY2lw
cm9jYWwgaW5zdHJzICAgPSBmYWxzZQogICAgICBBVlg1MTJDRDogY29uZmxpY3QgZGV0ZWN0
aW9uIGluc3RycyAgICAgID0gZmFsc2UKICAgICAgU0hBIGluc3RydWN0aW9ucyAgICAgICAg
ICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIEFWWDUxMkJXOiBieXRlICYgd29yZCBp
bnN0cnVjdGlvbnMgICAgICAgPSBmYWxzZQogICAgICBBVlg1MTJWTDogdmVjdG9yIGxlbmd0
aCAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgUFJFRkVUQ0hXVDEgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIEFWWDUxMlZCTUk6IHZlY3RvciBi
eXRlIG1hbmlwdWxhdGlvbiAgICAgPSBmYWxzZQogICAgICBVTUlQOiB1c2VyLW1vZGUgaW5z
dHJ1Y3Rpb24gcHJldmVudGlvbiAgID0gZmFsc2UKICAgICAgUEtVIHByb3RlY3Rpb24ga2V5
cyBmb3IgdXNlci1tb2RlICAgICAgICA9IGZhbHNlCiAgICAgIE9TUEtFIENSNC5QS0UgYW5k
IFJEUEtSVS9XUlBLUlUgICAgICAgICAgPSBmYWxzZQogICAgICBXQUlUUEtHIGluc3RydWN0
aW9ucyAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgQVZYNTEyX1ZCTUkyOiBi
eXRlIFZQQ09NUFJFU1MsIFZQRVhQQU5EICA9IGZhbHNlCiAgICAgIENFVF9TUzogQ0VUIHNo
YWRvdyBzdGFjayAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBHRk5JOiBHYWxvaXMg
RmllbGQgTmV3IEluc3RydWN0aW9ucyAgICAgID0gZmFsc2UKICAgICAgVkFFUyBpbnN0cnVj
dGlvbnMgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFZQQ0xNVUxRRFEg
aW5zdHJ1Y3Rpb24gICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBBVlg1MTJfVk5O
STogbmV1cmFsIG5ldHdvcmsgaW5zdHJ1Y3Rpb25zID0gZmFsc2UKICAgICAgQVZYNTEyX0JJ
VEFMRzogYml0IGNvdW50L3NoaWZmbGUgICAgICAgICA9IGZhbHNlCiAgICAgIFRNRTogVG90
YWwgTWVtb3J5IEVuY3J5cHRpb24gICAgICAgICAgICAgPSBmYWxzZQogICAgICBBVlg1MTI6
IFZQT1BDTlREUSBpbnN0cnVjdGlvbiAgICAgICAgICAgID0gZmFsc2UKICAgICAgTEE1Nzog
NTctYml0IGFkZHJzICYgNS1sZXZlbCBwYWdpbmcgICAgICA9IGZhbHNlCiAgICAgIEJORExE
WC9CTkRTVFggTUFXQVUgdmFsdWUgaW4gNjQtYml0IG1vZGUgPSAweDAgKDApCiAgICAgIFJE
UElEOiByZWFkIHByb2Nlc3NvciBJRCBzdXBwb3J0ZWQgICAgICAgPSBmYWxzZQogICAgICBL
TDoga2V5IGxvY2tlciAgICAgICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
YnVzIGxvY2sgZGV0ZWN0aW9uICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAg
IENMREVNT1RFIHN1cHBvcnRzIGNhY2hlIGxpbmUgZGVtb3RlICAgICAgPSBmYWxzZQogICAg
ICBNT1ZESVJJIGluc3RydWN0aW9uICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAg
ICAgTU9WRElSNjRCIGluc3RydWN0aW9uICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIEVOUUNNRCBpbnN0cnVjdGlvbiAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQog
ICAgICBTR1hfTEM6IFNHWCBsYXVuY2ggY29uZmlnIHN1cHBvcnRlZCAgICAgID0gZmFsc2UK
ICAgICAgUEtTOiBzdXBlcnZpc29yIHByb3RlY3Rpb24ga2V5cyAgICAgICAgICA9IGZhbHNl
CiAgICAgIFNHWC1LRVlTOiBTR1ggYXR0ZXN0YXRpb24gc2VydmljZXMgICAgICAgPSBmYWxz
ZQogICAgICBBVlg1MTJfNFZOTklXOiBuZXVyYWwgbmV0d29yayBpbnN0cnMgICAgID0gZmFs
c2UKICAgICAgQVZYNTEyXzRGTUFQUzogbXVsdGlwbHkgYWNjIHNpbmdsZSBwcmVjICA9IGZh
bHNlCiAgICAgIGZhc3Qgc2hvcnQgUkVQIE1PViAgICAgICAgICAgICAgICAgICAgICAgPSBm
YWxzZQogICAgICBVSU5UUjogdXNlciBpbnRlcnJ1cHRzICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgQVZYNTEyX1ZQMklOVEVSU0VDVDogaW50ZXJzZWN0IG1hc2sgcmVncyA9
IGZhbHNlCiAgICAgIElBMzJfTUNVX09QVF9DVFJMIFNSQkRTIG1pdGlnYXRpb24gTVNSICAg
PSBmYWxzZQogICAgICBWRVJXIE1EX0NMRUFSIG1pY3JvY29kZSBzdXBwb3J0ICAgICAgICAg
ID0gZmFsc2UKICAgICAgUlRNIHRyYW5zYWN0aW9uIGFsd2F5cyBhYm9ydHMgICAgICAgICAg
ICA9IGZhbHNlCiAgICAgIElBMzJfVFNYX0ZPUkNFX0FCT1JUIE1TUiAgICAgICAgICAgICAg
ICAgPSBmYWxzZQogICAgICBTRVJJQUxJWkUgaW5zdHJ1Y3Rpb24gICAgICAgICAgICAgICAg
ICAgID0gZmFsc2UKICAgICAgaHlicmlkIHBhcnQgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICA9IGZhbHNlCiAgICAgIFRTWExEVFJLOiBUU1ggc3VzcGVuZCBsb2FkIGFkZHIgdHJh
Y2tpbmcgPSBmYWxzZQogICAgICBQQ09ORklHIGluc3RydWN0aW9uICAgICAgICAgICAgICAg
ICAgICAgID0gZmFsc2UKICAgICAgTEJSOiBhcmNoaXRlY3R1cmFsIGxhc3QgYnJhbmNoIHJl
Y29yZHMgICA9IGZhbHNlCiAgICAgIENFVF9JQlQ6IENFVCBpbmRpcmVjdCBicmFuY2ggdHJh
Y2tpbmcgICAgPSBmYWxzZQogICAgICBBTVgtQkYxNjogdGlsZSBiZmxvYXQxNiBzdXBwb3J0
ICAgICAgICAgID0gZmFsc2UKICAgICAgQVZYNTEyX0ZQMTY6IGZwMTYgc3VwcG9ydCAgICAg
ICAgICAgICAgICA9IGZhbHNlCiAgICAgIEFNWC1USUxFOiB0aWxlIGFyY2hpdGVjdHVyZSBz
dXBwb3J0ICAgICAgPSBmYWxzZQogICAgICBBTVgtSU5UODogdGlsZSA4LWJpdCBpbnRlZ2Vy
IHN1cHBvcnQgICAgID0gZmFsc2UKICAgICAgSUJSUy9JQlBCOiBpbmRpcmVjdCBicmFuY2gg
cmVzdHJpY3Rpb25zICA9IGZhbHNlCiAgICAgIFNUSUJQOiAxIHRociBpbmRpcmVjdCBicmFu
Y2ggcHJlZGljdG9yICAgPSBmYWxzZQogICAgICBMMURfRkxVU0g6IElBMzJfRkxVU0hfQ01E
IE1TUiAgICAgICAgICAgID0gZmFsc2UKICAgICAgSUEzMl9BUkNIX0NBUEFCSUxJVElFUyBN
U1IgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIElBMzJfQ09SRV9DQVBBQklMSVRJRVMg
TVNSICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBTU0JEOiBzcGVjdWxhdGl2ZSBzdG9y
ZSBieXBhc3MgZGlzYWJsZSAgID0gZmFsc2UKICAgRGlyZWN0IENhY2hlIEFjY2VzcyBQYXJh
bWV0ZXJzICg5KToKICAgICAgUExBVEZPUk1fRENBX0NBUCBNU1IgYml0cyA9IDAKICAgQXJj
aGl0ZWN0dXJlIFBlcmZvcm1hbmNlIE1vbml0b3JpbmcgRmVhdHVyZXMgKDB4YSk6CiAgICAg
IHZlcnNpb24gSUQgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSAweDAgKDApCiAg
ICAgIG51bWJlciBvZiBjb3VudGVycyBwZXIgbG9naWNhbCBwcm9jZXNzb3IgPSAweDAgKDAp
CiAgICAgIGJpdCB3aWR0aCBvZiBjb3VudGVyICAgICAgICAgICAgICAgICAgICAgPSAweDAg
KDApCiAgICAgIGxlbmd0aCBvZiBFQlggYml0IHZlY3RvciAgICAgICAgICAgICAgICAgPSAw
eDAgKDApCiAgICAgIGNvcmUgY3ljbGUgZXZlbnQgICAgICAgICAgICAgICAgICAgICAgICAg
PSBub3QgYXZhaWxhYmxlCiAgICAgIGluc3RydWN0aW9uIHJldGlyZWQgZXZlbnQgICAgICAg
ICAgICAgICAgPSBub3QgYXZhaWxhYmxlCiAgICAgIHJlZmVyZW5jZSBjeWNsZXMgZXZlbnQg
ICAgICAgICAgICAgICAgICAgPSBub3QgYXZhaWxhYmxlCiAgICAgIGxhc3QtbGV2ZWwgY2Fj
aGUgcmVmIGV2ZW50ICAgICAgICAgICAgICAgPSBub3QgYXZhaWxhYmxlCiAgICAgIGxhc3Qt
bGV2ZWwgY2FjaGUgbWlzcyBldmVudCAgICAgICAgICAgICAgPSBub3QgYXZhaWxhYmxlCiAg
ICAgIGJyYW5jaCBpbnN0IHJldGlyZWQgZXZlbnQgICAgICAgICAgICAgICAgPSBub3QgYXZh
aWxhYmxlCiAgICAgIGJyYW5jaCBtaXNwcmVkIHJldGlyZWQgZXZlbnQgICAgICAgICAgICAg
PSBub3QgYXZhaWxhYmxlCiAgICAgIHRvcC1kb3duIHNsb3RzIGV2ZW50ICAgICAgICAgICAg
ICAgICAgICAgPSBub3QgYXZhaWxhYmxlCiAgICAgIGZpeGVkIGNvdW50ZXIgIDAgc3VwcG9y
dGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBjb3VudGVyICAxIHN1cHBv
cnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQgY291bnRlciAgMiBzdXBw
b3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNvdW50ZXIgIDMgc3Vw
cG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBjb3VudGVyICA0IHN1
cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQgY291bnRlciAgNSBz
dXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNvdW50ZXIgIDYg
c3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBjb3VudGVyICA3
IHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQgY291bnRlciAg
OCBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNvdW50ZXIg
IDkgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBjb3VudGVy
IDEwIHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQgY291bnRl
ciAxMSBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNvdW50
ZXIgMTIgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBjb3Vu
dGVyIDEzIHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQgY291
bnRlciAxNCBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNv
dW50ZXIgMTUgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBj
b3VudGVyIDE2IHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQg
Y291bnRlciAxNyBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVk
IGNvdW50ZXIgMTggc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhl
ZCBjb3VudGVyIDE5IHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4
ZWQgY291bnRlciAyMCBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZp
eGVkIGNvdW50ZXIgMjEgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBm
aXhlZCBjb3VudGVyIDIyIHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
Zml4ZWQgY291bnRlciAyMyBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAg
IGZpeGVkIGNvdW50ZXIgMjQgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICBmaXhlZCBjb3VudGVyIDI1IHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAg
ICAgZml4ZWQgY291bnRlciAyNiBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIGZpeGVkIGNvdW50ZXIgMjcgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQog
ICAgICBmaXhlZCBjb3VudGVyIDI4IHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UK
ICAgICAgZml4ZWQgY291bnRlciAyOSBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNl
CiAgICAgIGZpeGVkIGNvdW50ZXIgMzAgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICBmaXhlZCBjb3VudGVyIDMxIHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFs
c2UKICAgICAgbnVtYmVyIG9mIGNvbnRpZ3VvdXMgZml4ZWQgY291bnRlcnMgICAgICA9IDB4
MCAoMCkKICAgICAgYml0IHdpZHRoIG9mIGZpeGVkIGNvdW50ZXJzICAgICAgICAgICAgICA9
IDB4MCAoMCkKICAgICAgYW55dGhyZWFkIGRlcHJlY2F0aW9uICAgICAgICAgICAgICAgICAg
ICA9IGZhbHNlCiAgIHgyQVBJQyBmZWF0dXJlcyAvIHByb2Nlc3NvciB0b3BvbG9neSAoMHhi
KToKICAgICAgZXh0ZW5kZWQgQVBJQyBJRCAgICAgICAgICAgICAgICAgICAgICA9IDAKICAg
ICAgLS0tIGxldmVsIDAgLS0tCiAgICAgIGxldmVsIG51bWJlciAgICAgICAgICAgICAgICAg
ICAgICAgICAgPSAweDAgKDApCiAgICAgIGxldmVsIHR5cGUgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgPSBpbnZhbGlkICgwKQogICAgICBiaXQgd2lkdGggb2YgbGV2ZWwgICAgICAg
ICAgICAgICAgICAgID0gMHgwICgwKQogICAgICBudW1iZXIgb2YgbG9naWNhbCBwcm9jZXNz
b3JzIGF0IGxldmVsID0gMHgwICgwKQogICBYU0FWRSBmZWF0dXJlcyAoMHhkLzApOgogICAg
ICBYQ1IwIHZhbGlkIGJpdCBmaWVsZCBtYXNrICAgICAgICAgICAgICAgPSAweDQwMDAwMDAw
MDAwMDAwMDcKICAgICAgICAgeDg3IHN0YXRlICAgICAgICAgICAgICAgICAgICAgICAgICAg
ID0gdHJ1ZQogICAgICAgICBTU0Ugc3RhdGUgICAgICAgICAgICAgICAgICAgICAgICAgICAg
PSB0cnVlCiAgICAgICAgIEFWWCBzdGF0ZSAgICAgICAgICAgICAgICAgICAgICAgICAgICA9
IHRydWUKICAgICAgICAgTVBYIEJORFJFR1MgICAgICAgICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgICAgTVBYIEJORENTUiAgICAgICAgICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgICAgQVZYLTUxMiBvcG1hc2sgICAgICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgICAgQVZYLTUxMiBaTU1fSGkyNTYgICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgICAgQVZYLTUxMiBIaTE2X1pNTSAgICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgICAgUEtSVSBzdGF0ZSAgICAgICAgICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgICAgWFRJTEVDRkcgc3RhdGUgICAgICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgICAgWFRJTEVEQVRBIHN0YXRlICAgICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgYnl0ZXMgcmVxdWlyZWQgYnkgZmllbGRzIGluIFhDUjAgICAgICAgID0g
MHgwMDAwMDM0MCAoODMyKQogICAgICBieXRlcyByZXF1aXJlZCBieSBYU0FWRS9YUlNUT1Ig
YXJlYSAgICAgPSAweDAwMDAwM2MwICg5NjApCiAgICAgIFhTQVZFT1BUIGluc3RydWN0aW9u
ICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFhTQVZFQyBpbnN0cnVjdGlvbiAg
ICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFhHRVRCViBpbnN0cnVjdGlvbiAg
ICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFhTQVZFUy9YUlNUT1JTIGluc3Ry
dWN0aW9ucyAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFhGRDogZXh0ZW5kZWQgZmVhdHVy
ZSBkaXNhYmxlIHN1cHBvcnRlZCA9IGZhbHNlCiAgICAgIFNBVkUgYXJlYSBzaXplIGluIGJ5
dGVzICAgICAgICAgICAgICAgICA9IDB4MDAwMDAwMDAgKDApCiAgICAgIElBMzJfWFNTIHZh
bGlkIGJpdCBmaWVsZCBtYXNrICAgICAgICAgICA9IDB4MDAwMDAwMDAwMDAwMDAwMAogICAg
ICAgICBQVCBzdGF0ZSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICAgICBQQVNJRCBzdGF0ZSAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICAgICBDRVRfVSB1c2VyIHN0YXRlICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICAgICBDRVRfUyBzdXBlcnZpc29yIHN0YXRlICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICAgICBIREMgc3RhdGUgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICAgICBVSU5UUiBzdGF0ZSAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICAgICBMQlIgc3RhdGUgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICAgICBIV1Agc3RhdGUgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICBB
VlgvWU1NIGZlYXR1cmVzICgweGQvMik6CiAgICAgIEFWWC9ZTU0gc2F2ZSBzdGF0ZSBieXRl
IHNpemUgICAgICAgICAgICAgPSAweDAwMDAwMTAwICgyNTYpCiAgICAgIEFWWC9ZTU0gc2F2
ZSBzdGF0ZSBieXRlIG9mZnNldCAgICAgICAgICAgPSAweDAwMDAwMjQwICg1NzYpCiAgICAg
IHN1cHBvcnRlZCBpbiBJQTMyX1hTUyBvciBYQ1IwICAgICAgICAgICAgPSBYQ1IwICh1c2Vy
IHN0YXRlKQogICAgICA2NC1ieXRlIGFsaWdubWVudCBpbiBjb21wYWN0ZWQgWFNBVkUgICAg
ID0gZmFsc2UKICAgICAgWEZEIGZhdWx0aW5nIHN1cHBvcnRlZCAgICAgICAgICAgICAgICAg
ICA9IGZhbHNlCiAgIExXUCBmZWF0dXJlcyAoMHhkLzB4M2UpOgogICAgICBMV1Agc2F2ZSBz
dGF0ZSBieXRlIHNpemUgICAgICAgICAgICAgICAgID0gMHgwMDAwMDA4MCAoMTI4KQogICAg
ICBMV1Agc2F2ZSBzdGF0ZSBieXRlIG9mZnNldCAgICAgICAgICAgICAgID0gMHgwMDAwMDM0
MCAoODMyKQogICAgICBzdXBwb3J0ZWQgaW4gSUEzMl9YU1Mgb3IgWENSMCAgICAgICAgICAg
ID0gWENSMCAodXNlciBzdGF0ZSkKICAgICAgNjQtYnl0ZSBhbGlnbm1lbnQgaW4gY29tcGFj
dGVkIFhTQVZFICAgICA9IGZhbHNlCiAgICAgIFhGRCBmYXVsdGluZyBzdXBwb3J0ZWQgICAg
ICAgICAgICAgICAgICAgPSBmYWxzZQogICBleHRlbmRlZCBwcm9jZXNzb3Igc2lnbmF0dXJl
ICgweDgwMDAwMDAxL2VheCk6CiAgICAgIGZhbWlseS9nZW5lcmF0aW9uID0gMHhmICgxNSkK
ICAgICAgbW9kZWwgICAgICAgICAgID0gMHgzICgzKQogICAgICBzdGVwcGluZyBpZCAgICAg
PSAweDEgKDEpCiAgICAgIGV4dGVuZGVkIGZhbWlseSA9IDB4NiAoNikKICAgICAgZXh0ZW5k
ZWQgbW9kZWwgID0gMHgxICgxKQogICAgICAoZmFtaWx5IHN5bnRoKSAgPSAweDE1ICgyMSkK
ICAgICAgKG1vZGVsIHN5bnRoKSAgID0gMHgxMyAoMTkpCiAgICAgIChzaW1wbGUgc3ludGgp
ICA9IEFNRCAodW5rbm93biB0eXBlKSAoUmljaGxhbmQgUkwtQTEpIFtQaWxlZHJpdmVyXSwg
MzJubQogICBleHRlbmRlZCBmZWF0dXJlIGZsYWdzICgweDgwMDAwMDAxL2VkeCk6CiAgICAg
IHg4NyBGUFUgb24gY2hpcCAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIHZp
cnR1YWwtODA4NiBtb2RlIGVuaGFuY2VtZW50ICAgICAgICAgPSB0cnVlCiAgICAgIGRlYnVn
Z2luZyBleHRlbnNpb25zICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIHBhZ2Ugc2l6
ZSBleHRlbnNpb25zICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIHRpbWUgc3RhbXAg
Y291bnRlciAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIFJETVNSIGFuZCBXUk1T
UiBzdXBwb3J0ICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIHBoeXNpY2FsIGFkZHJlc3Mg
ZXh0ZW5zaW9ucyAgICAgICAgICAgPSB0cnVlCiAgICAgIG1hY2hpbmUgY2hlY2sgZXhjZXB0
aW9uICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIENNUFhDSEc4QiBpbnN0LiAgICAgICAg
ICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIEFQSUMgb24gY2hpcCAgICAgICAgICAgICAg
ICAgICAgICAgICAgPSB0cnVlCiAgICAgIFNZU0NBTEwgYW5kIFNZU1JFVCBpbnN0cnVjdGlv
bnMgICAgICAgPSB0cnVlCiAgICAgIG1lbW9yeSB0eXBlIHJhbmdlIHJlZ2lzdGVycyAgICAg
ICAgICAgPSB0cnVlCiAgICAgIGdsb2JhbCBwYWdpbmcgZXh0ZW5zaW9uICAgICAgICAgICAg
ICAgPSB0cnVlCiAgICAgIG1hY2hpbmUgY2hlY2sgYXJjaGl0ZWN0dXJlICAgICAgICAgICAg
PSB0cnVlCiAgICAgIGNvbmRpdGlvbmFsIG1vdmUvY29tcGFyZSBpbnN0cnVjdGlvbiAgPSB0
cnVlCiAgICAgIHBhZ2UgYXR0cmlidXRlIHRhYmxlICAgICAgICAgICAgICAgICAgPSB0cnVl
CiAgICAgIHBhZ2Ugc2l6ZSBleHRlbnNpb24gICAgICAgICAgICAgICAgICAgPSB0cnVlCiAg
ICAgIG11bHRpcHJvY2Vzc2luZyBjYXBhYmxlICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICBuby1leGVjdXRlIHBhZ2UgcHJvdGVjdGlvbiAgICAgICAgICAgID0gdHJ1ZQogICAgICBB
TUQgbXVsdGltZWRpYSBpbnN0cnVjdGlvbiBleHRlbnNpb25zID0gdHJ1ZQogICAgICBNTVgg
VGVjaG5vbG9neSAgICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBGWFNBVkUv
RlhSU1RPUiAgICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBTU0UgZXh0ZW5z
aW9ucyAgICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICAxLUdCIGxhcmdlIHBh
Z2Ugc3VwcG9ydCAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBSRFRTQ1AgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBsb25nIG1vZGUgKEFBLTY0KSAg
ICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICAzRE5vdyEgaW5zdHJ1Y3Rpb24gZXh0
ZW5zaW9ucyAgICAgICAgID0gZmFsc2UKICAgICAgM0ROb3chIGluc3RydWN0aW9ucyAgICAg
ICAgICAgICAgICAgICA9IGZhbHNlCiAgIGV4dGVuZGVkIGJyYW5kIGlkICgweDgwMDAwMDAx
L2VieCk6CiAgICAgIHJhdyAgICAgPSAweDIwMDAwMDAwICg1MzY4NzA5MTIpCiAgICAgIEJy
YW5kSWQgPSAweDAgKDApCiAgICAgIFBrZ1R5cGUgPSBGTTIgKFBHQSkgKDIpCiAgIEFNRCBm
ZWF0dXJlIGZsYWdzICgweDgwMDAwMDAxL2VjeCk6CiAgICAgIExBSEYvU0FIRiBzdXBwb3J0
ZWQgaW4gNjQtYml0IG1vZGUgICAgID0gdHJ1ZQogICAgICBDTVAgTGVnYWN5ICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgU1ZNOiBzZWN1cmUgdmlydHVhbCBt
YWNoaW5lICAgICAgICAgICAgPSB0cnVlCiAgICAgIGV4dGVuZGVkIEFQSUMgc3BhY2UgICAg
ICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBBbHRNb3ZDcjggICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICA9IHRydWUKICAgICAgTFpDTlQgYWR2YW5jZWQgYml0IG1hbmlwdWxh
dGlvbiAgICAgICAgPSB0cnVlCiAgICAgIFNTRTRBIHN1cHBvcnQgICAgICAgICAgICAgICAg
ICAgICAgICAgID0gdHJ1ZQogICAgICBtaXNhbGlnbmVkIFNTRSBtb2RlICAgICAgICAgICAg
ICAgICAgICA9IHRydWUKICAgICAgM0ROb3chIFBSRUZFVENIL1BSRUZFVENIVyBpbnN0cnVj
dGlvbnMgPSB0cnVlCiAgICAgIE9TIHZpc2libGUgd29ya2Fyb3VuZCAgICAgICAgICAgICAg
ICAgID0gdHJ1ZQogICAgICBpbnN0cnVjdGlvbiBiYXNlZCBzYW1wbGluZyAgICAgICAgICAg
ICA9IHRydWUKICAgICAgWE9QIHN1cHBvcnQgICAgICAgICAgICAgICAgICAgICAgICAgICAg
PSB0cnVlCiAgICAgIFNLSU5JVC9TVEdJIHN1cHBvcnQgICAgICAgICAgICAgICAgICAgID0g
dHJ1ZQogICAgICB3YXRjaGRvZyB0aW1lciBzdXBwb3J0ICAgICAgICAgICAgICAgICA9IHRy
dWUKICAgICAgbGlnaHR3ZWlnaHQgcHJvZmlsaW5nIHN1cHBvcnQgICAgICAgICAgPSB0cnVl
CiAgICAgIDQtb3BlcmFuZCBGTUEgaW5zdHJ1Y3Rpb24gICAgICAgICAgICAgID0gdHJ1ZQog
ICAgICBUQ0U6IHRyYW5zbGF0aW9uIGNhY2hlIGV4dGVuc2lvbiAgICAgICA9IHRydWUKICAg
ICAgTm9kZUlkIE1TUiBDMDAxMTAwQyAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAg
IFRCTSBzdXBwb3J0ICAgICAgICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICB0
b3BvbG9neSBleHRlbnNpb25zICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgY29y
ZSBwZXJmb3JtYW5jZSBjb3VudGVyIGV4dGVuc2lvbnMgICAgPSB0cnVlCiAgICAgIE5CL0RG
IHBlcmZvcm1hbmNlIGNvdW50ZXIgZXh0ZW5zaW9ucyAgID0gdHJ1ZQogICAgICBkYXRhIGJy
ZWFrcG9pbnQgZXh0ZW5zaW9uICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIHBlcmZvcm1h
bmNlIHRpbWUtc3RhbXAgY291bnRlciBzdXBwb3J0ID0gZmFsc2UKICAgICAgTExDIHBlcmZv
cm1hbmNlIGNvdW50ZXIgZXh0ZW5zaW9ucyAgICAgPSBmYWxzZQogICAgICBNV0FJVFgvTU9O
SVRPUlggc3VwcG9ydGVkICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIEFkZHJlc3MgbWFz
ayBleHRlbnNpb24gc3VwcG9ydCAgICAgICAgID0gZmFsc2UKICAgYnJhbmQgPSAiQU1EIEE2
LTY0MDBLIEFQVSB3aXRoIFJhZGVvbih0bSkgSEQgR3JhcGhpY3MgICAiCiAgIEwxIFRMQi9j
YWNoZSBpbmZvcm1hdGlvbjogMk0vNE0gcGFnZXMgJiBMMSBUTEIgKDB4ODAwMDAwMDUvZWF4
KToKICAgICAgaW5zdHJ1Y3Rpb24gIyBlbnRyaWVzICAgICA9IDB4MTggKDI0KQogICAgICBp
bnN0cnVjdGlvbiBhc3NvY2lhdGl2aXR5ID0gMHhmZiAoMjU1KQogICAgICBkYXRhICMgZW50
cmllcyAgICAgICAgICAgID0gMHg0MCAoNjQpCiAgICAgIGRhdGEgYXNzb2NpYXRpdml0eSAg
ICAgICAgPSAweGZmICgyNTUpCiAgIEwxIFRMQi9jYWNoZSBpbmZvcm1hdGlvbjogNEsgcGFn
ZXMgJiBMMSBUTEIgKDB4ODAwMDAwMDUvZWJ4KToKICAgICAgaW5zdHJ1Y3Rpb24gIyBlbnRy
aWVzICAgICA9IDB4MzAgKDQ4KQogICAgICBpbnN0cnVjdGlvbiBhc3NvY2lhdGl2aXR5ID0g
MHhmZiAoMjU1KQogICAgICBkYXRhICMgZW50cmllcyAgICAgICAgICAgID0gMHg0MCAoNjQp
CiAgICAgIGRhdGEgYXNzb2NpYXRpdml0eSAgICAgICAgPSAweGZmICgyNTUpCiAgIEwxIGRh
dGEgY2FjaGUgaW5mb3JtYXRpb24gKDB4ODAwMDAwMDUvZWN4KToKICAgICAgbGluZSBzaXpl
IChieXRlcykgPSAweDQwICg2NCkKICAgICAgbGluZXMgcGVyIHRhZyAgICAgPSAweDEgKDEp
CiAgICAgIGFzc29jaWF0aXZpdHkgICAgID0gMHg0ICg0KQogICAgICBzaXplIChLQikgICAg
ICAgICA9IDB4MTAgKDE2KQogICBMMSBpbnN0cnVjdGlvbiBjYWNoZSBpbmZvcm1hdGlvbiAo
MHg4MDAwMDAwNS9lZHgpOgogICAgICBsaW5lIHNpemUgKGJ5dGVzKSA9IDB4NDAgKDY0KQog
ICAgICBsaW5lcyBwZXIgdGFnICAgICA9IDB4MSAoMSkKICAgICAgYXNzb2NpYXRpdml0eSAg
ICAgPSAweDIgKDIpCiAgICAgIHNpemUgKEtCKSAgICAgICAgID0gMHg0MCAoNjQpCiAgIEwy
IFRMQi9jYWNoZSBpbmZvcm1hdGlvbjogMk0vNE0gcGFnZXMgJiBMMiBUTEIgKDB4ODAwMDAw
MDYvZWF4KToKICAgICAgaW5zdHJ1Y3Rpb24gIyBlbnRyaWVzICAgICA9IDB4NDAwICgxMDI0
KQogICAgICBpbnN0cnVjdGlvbiBhc3NvY2lhdGl2aXR5ID0gOCB0byAxNS13YXkgKDYpCiAg
ICAgIGRhdGEgIyBlbnRyaWVzICAgICAgICAgICAgPSAweDQwMCAoMTAyNCkKICAgICAgZGF0
YSBhc3NvY2lhdGl2aXR5ICAgICAgICA9IDggdG8gMTUtd2F5ICg2KQogICBMMiBUTEIvY2Fj
aGUgaW5mb3JtYXRpb246IDRLIHBhZ2VzICYgTDIgVExCICgweDgwMDAwMDA2L2VieCk6CiAg
ICAgIGluc3RydWN0aW9uICMgZW50cmllcyAgICAgPSAweDIwMCAoNTEyKQogICAgICBpbnN0
cnVjdGlvbiBhc3NvY2lhdGl2aXR5ID0gNCB0byA1LXdheSAoNCkKICAgICAgZGF0YSAjIGVu
dHJpZXMgICAgICAgICAgICA9IDB4NDAwICgxMDI0KQogICAgICBkYXRhIGFzc29jaWF0aXZp
dHkgICAgICAgID0gOCB0byAxNS13YXkgKDYpCiAgIEwyIHVuaWZpZWQgY2FjaGUgaW5mb3Jt
YXRpb24gKDB4ODAwMDAwMDYvZWN4KToKICAgICAgbGluZSBzaXplIChieXRlcykgPSAweDQw
ICg2NCkKICAgICAgbGluZXMgcGVyIHRhZyAgICAgPSAweDEgKDEpCiAgICAgIGFzc29jaWF0
aXZpdHkgICAgID0gMTYgdG8gMzEtd2F5ICg4KQogICAgICBzaXplIChLQikgICAgICAgICA9
IDB4NDAwICgxMDI0KQogICBMMyBjYWNoZSBpbmZvcm1hdGlvbiAoMHg4MDAwMDAwNi9lZHgp
OgogICAgICBsaW5lIHNpemUgKGJ5dGVzKSAgICAgPSAweDAgKDApCiAgICAgIGxpbmVzIHBl
ciB0YWcgICAgICAgICA9IDB4MCAoMCkKICAgICAgYXNzb2NpYXRpdml0eSAgICAgICAgID0g
TDIgb2ZmICgwKQogICAgICBzaXplIChpbiA1MTJLQiB1bml0cykgPSAweDAgKDApCiAgIFJB
UyBDYXBhYmlsaXR5ICgweDgwMDAwMDA3L2VieCk6CiAgICAgIE1DQSBvdmVyZmxvdyByZWNv
dmVyeSBzdXBwb3J0ID0gZmFsc2UKICAgICAgU1VDQ09SIHN1cHBvcnQgICAgICAgICAgICAg
ICAgPSBmYWxzZQogICAgICBIV0E6IGhhcmR3YXJlIGFzc2VydCBzdXBwb3J0ICA9IGZhbHNl
CiAgICAgIHNjYWxhYmxlIE1DQSBzdXBwb3J0ICAgICAgICAgID0gZmFsc2UKICAgQWR2YW5j
ZWQgUG93ZXIgTWFuYWdlbWVudCBGZWF0dXJlcyAoMHg4MDAwMDAwNy9lY3gpOgogICAgICBD
bXBVbml0UHdyU2FtcGxlVGltZVJhdGlvID0gMHgwICgwKQogICBBZHZhbmNlZCBQb3dlciBN
YW5hZ2VtZW50IEZlYXR1cmVzICgweDgwMDAwMDA3L2VkeCk6CiAgICAgIFRTOiB0ZW1wZXJh
dHVyZSBzZW5zaW5nIGRpb2RlICAgICAgICAgICA9IHRydWUKICAgICAgRklEOiBmcmVxdWVu
Y3kgSUQgY29udHJvbCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgVklEOiB2b2x0YWdl
IElEIGNvbnRyb2wgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgVFRQOiB0aGVybWFs
IHRyaXAgICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBUTTogdGhlcm1hbCBt
b25pdG9yICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIFNUQzogc29mdHdhcmUg
dGhlcm1hbCBjb250cm9sICAgICAgICAgICA9IGZhbHNlCiAgICAgIDEwMCBNSHogbXVsdGlw
bGllciBjb250cm9sICAgICAgICAgICAgICA9IHRydWUKICAgICAgaGFyZHdhcmUgUC1TdGF0
ZSBjb250cm9sICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBUc2NJbnZhcmlhbnQgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIENQQjogY29yZSBwZXJmb3Jt
YW5jZSBib29zdCAgICAgICAgICAgICA9IHRydWUKICAgICAgcmVhZC1vbmx5IGVmZmVjdGl2
ZSBmcmVxdWVuY3kgaW50ZXJmYWNlID0gdHJ1ZQogICAgICBwcm9jZXNzb3IgZmVlZGJhY2sg
aW50ZXJmYWNlICAgICAgICAgICAgPSBmYWxzZQogICAgICBBUE0gcG93ZXIgcmVwb3J0aW5n
ICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBjb25uZWN0ZWQgc3RhbmRieSAg
ICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBSQVBMOiBydW5uaW5nIGF2ZXJh
Z2UgcG93ZXIgbGltaXQgICAgICAgPSBmYWxzZQogICBQaHlzaWNhbCBBZGRyZXNzIGFuZCBM
aW5lYXIgQWRkcmVzcyBTaXplICgweDgwMDAwMDA4L2VheCk6CiAgICAgIG1heGltdW0gcGh5
c2ljYWwgYWRkcmVzcyBiaXRzICAgICAgICAgPSAweDMwICg0OCkKICAgICAgbWF4aW11bSBs
aW5lYXIgKHZpcnR1YWwpIGFkZHJlc3MgYml0cyA9IDB4MzAgKDQ4KQogICAgICBtYXhpbXVt
IGd1ZXN0IHBoeXNpY2FsIGFkZHJlc3MgYml0cyAgID0gMHgwICgwKQogICBFeHRlbmRlZCBG
ZWF0dXJlIEV4dGVuc2lvbnMgSUQgKDB4ODAwMDAwMDgvZWJ4KToKICAgICAgQ0xaRVJPIGlu
c3RydWN0aW9uICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGluc3RydWN0
aW9ucyByZXRpcmVkIGNvdW50IHN1cHBvcnQgICAgICAgPSBmYWxzZQogICAgICBhbHdheXMg
c2F2ZS9yZXN0b3JlIGVycm9yIHBvaW50ZXJzICAgICAgID0gZmFsc2UKICAgICAgSU5WTFBH
QiBpbnN0cnVjdGlvbiAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFJEUFJV
IGluc3RydWN0aW9uICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBtZW1v
cnkgYmFuZHdpZHRoIGVuZm9yY2VtZW50ICAgICAgICAgICAgID0gZmFsc2UKICAgICAgTUNP
TU1JVCBpbnN0cnVjdGlvbiAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFdC
Tk9JTlZEIGluc3RydWN0aW9uICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBJ
QlBCOiBpbmRpcmVjdCBicmFuY2ggcHJlZGljdGlvbiBiYXJyaWVyID0gdHJ1ZQogICAgICBp
bnRlcnJ1cHRpYmxlIFdCSU5WRCwgV0JOT0lOVkQgICAgICAgICAgID0gZmFsc2UKICAgICAg
SUJSUzogaW5kaXJlY3QgYnJhbmNoIHJlc3RyIHNwZWN1bGF0aW9uICA9IGZhbHNlCiAgICAg
IFNUSUJQOiAxIHRociBpbmRpcmVjdCBicmFuY2ggcHJlZGljdG9yICAgPSBmYWxzZQogICAg
ICBDUFUgcHJlZmVyczogSUJSUyBhbHdheXMgb24gICAgICAgICAgICAgID0gZmFsc2UKICAg
ICAgQ1BVIHByZWZlcnM6IFNUSUJQIGFsd2F5cyBvbiAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIElCUlMgcHJlZmVycmVkIG92ZXIgc29mdHdhcmUgc29sdXRpb24gICAgPSBmYWxzZQog
ICAgICBJQlJTIHByb3ZpZGVzIHNhbWUgbW9kZSBwcm90ZWN0aW9uICAgICAgID0gZmFsc2UK
ICAgICAgRUZFUltMTVNMRV0gbm90IHN1cHBvcnRlZCAgICAgICAgICAgICAgICA9IGZhbHNl
CiAgICAgIElOVkxQR0Igc3VwcG9ydHMgVExCIGZsdXNoIGd1ZXN0IG5lc3RlZCAgPSBmYWxz
ZQogICAgICBwcGluIHByb2Nlc3NvciBpZCBudW1iZXIgc3VwcG9ydGVkICAgICAgID0gZmFs
c2UKICAgICAgU1NCRDogc3BlY3VsYXRpdmUgc3RvcmUgYnlwYXNzIGRpc2FibGUgICA9IGZh
bHNlCiAgICAgIHZpcnR1YWxpemVkIFNTQkQgICAgICAgICAgICAgICAgICAgICAgICAgPSBm
YWxzZQogICAgICBTU0JEIGZpeGVkIGluIGhhcmR3YXJlICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgQ1BQQzogY29sbGFib3JhdGl2ZSBwcm9jZXNzb3IgcGVyZiBjdHJsICA9
IGZhbHNlCiAgICAgIFBTRkQ6IHByZWRpY3RpdmUgc3RvcmUgZm9yd2FyZCBkaXNhYmxlICAg
PSBmYWxzZQogICAgICBub3QgdnVsbmVyYWJsZSB0byBicmFuY2ggdHlwZSBjb25mdXNpb24g
ID0gZmFsc2UKICAgICAgYnJhbmNoIHNhbXBsaW5nIGZlYXR1cmUgc3VwcG9ydCAgICAgICAg
ICA9IGZhbHNlCiAgICAgICh2dWxuIHRvIGJyYW5jaCB0eXBlIGNvbmZ1c2lvbiBzeW50aCkg
ICAgPSB0cnVlCiAgIFNpemUgSWRlbnRpZmllcnMgKDB4ODAwMDAwMDgvZWN4KToKICAgICAg
bnVtYmVyIG9mIENQVSBjb3JlcyAgICAgICAgICAgICAgICAgPSAweDIgKDIpCiAgICAgIEFw
aWNJZENvcmVJZFNpemUgICAgICAgICAgICAgICAgICAgID0gMHg0ICg0KQogICAgICBwZXJm
b3JtYW5jZSB0aW1lLXN0YW1wIGNvdW50ZXIgc2l6ZSA9IDQwIGJpdHMgKDApCiAgIEZlYXR1
cmUgRXh0ZW5kZWQgU2l6ZSAoMHg4MDAwMDAwOC9lZHgpOgogICAgICBtYXggcGFnZSBjb3Vu
dCBmb3IgSU5WTFBHQiBpbnN0cnVjdGlvbiA9IDB4MCAoMCkKICAgICAgUkRQUlUgaW5zdHJ1
Y3Rpb24gbWF4IGlucHV0IHN1cHBvcnQgICAgPSAweDAgKDApCiAgIFNWTSBTZWN1cmUgVmly
dHVhbCBNYWNoaW5lICgweDgwMDAwMDBhL2VheCk6CiAgICAgIFN2bVJldjogU1ZNIHJldmlz
aW9uID0gMHgxICgxKQogICBTVk0gU2VjdXJlIFZpcnR1YWwgTWFjaGluZSAoMHg4MDAwMDAw
YS9lZHgpOgogICAgICBuZXN0ZWQgcGFnaW5nICAgICAgICAgICAgICAgICAgICAgICAgICAg
PSB0cnVlCiAgICAgIExCUiB2aXJ0dWFsaXphdGlvbiAgICAgICAgICAgICAgICAgICAgICA9
IHRydWUKICAgICAgU1ZNIGxvY2sgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgID0g
dHJ1ZQogICAgICBOUklQIHNhdmUgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSB0
cnVlCiAgICAgIE1TUiBiYXNlZCBUU0MgcmF0ZSBjb250cm9sICAgICAgICAgICAgICA9IHRy
dWUKICAgICAgVk1DQiBjbGVhbiBiaXRzIHN1cHBvcnQgICAgICAgICAgICAgICAgID0gdHJ1
ZQogICAgICBmbHVzaCBieSBBU0lEICAgICAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVl
CiAgICAgIGRlY29kZSBhc3Npc3RzICAgICAgICAgICAgICAgICAgICAgICAgICA9IHRydWUK
ICAgICAgU1NTRTMvU1NFNSBvcGNvZGUgc2V0IGRpc2FibGUgICAgICAgICAgID0gZmFsc2UK
ICAgICAgcGF1c2UgaW50ZXJjZXB0IGZpbHRlciAgICAgICAgICAgICAgICAgID0gdHJ1ZQog
ICAgICBwYXVzZSBmaWx0ZXIgdGhyZXNob2xkICAgICAgICAgICAgICAgICAgPSB0cnVlCiAg
ICAgIEFWSUM6IEFNRCB2aXJ0dWFsIGludGVycnVwdCBjb250cm9sbGVyICA9IGZhbHNlCiAg
ICAgIHZpcnR1YWxpemVkIFZNTE9BRC9WTVNBVkUgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIHZpcnR1YWxpemVkIGdsb2JhbCBpbnRlcnJ1cHQgZmxhZyAoR0lGKSA9IGZhbHNlCiAg
ICAgIEdNRVQ6IGd1ZXN0IG1vZGUgZXhlY3V0ZSB0cmFwICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIFgyQVZJQzogdmlydHVhbGl6ZWQgWDJBUElDICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIHN1cGVydmlzb3Igc2hhZG93IHN0YWNrICAgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIGd1ZXN0IFNwZWNfY3RsIHN1cHBvcnQgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIFJPR1BUOiByZWFkLW9ubHkgZ3Vlc3QgcGFnZSB0YWJsZSAgICAgICA9IGZhbHNlCiAg
ICAgIGhvc3QgTUNFIG92ZXJyaWRlICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIElOVkxQR0IvVExCU1lOQyBoeXBlcnYgaW50ZXJjIGVuYWJsZSAgICA9IGZhbHNlCiAg
ICAgIFZOTUk6IE5NSSB2aXJ0dWFsaXphdGlvbiAgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIElCUyB2aXJ0dWFsaXphdGlvbiAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIGd1ZXN0IFNWTUUgYWRkciBjaGVjayAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAg
IE5BU0lEOiBudW1iZXIgb2YgYWRkcmVzcyBzcGFjZSBpZGVudGlmaWVycyA9IDB4MTAwMDAg
KDY1NTM2KToKICAgTDEgVExCIGluZm9ybWF0aW9uOiAxRyBwYWdlcyAoMHg4MDAwMDAxOS9l
YXgpOgogICAgICBpbnN0cnVjdGlvbiAjIGVudHJpZXMgICAgID0gMHgxOCAoMjQpCiAgICAg
IGluc3RydWN0aW9uIGFzc29jaWF0aXZpdHkgPSBmdWxsICgxNSkKICAgICAgZGF0YSAjIGVu
dHJpZXMgICAgICAgICAgICA9IDB4NDAgKDY0KQogICAgICBkYXRhIGFzc29jaWF0aXZpdHkg
ICAgICAgID0gZnVsbCAoMTUpCiAgIEwyIFRMQiBpbmZvcm1hdGlvbjogMUcgcGFnZXMgKDB4
ODAwMDAwMTkvZWJ4KToKICAgICAgaW5zdHJ1Y3Rpb24gIyBlbnRyaWVzICAgICA9IDB4NDAw
ICgxMDI0KQogICAgICBpbnN0cnVjdGlvbiBhc3NvY2lhdGl2aXR5ID0gOCB0byAxNS13YXkg
KDYpCiAgICAgIGRhdGEgIyBlbnRyaWVzICAgICAgICAgICAgPSAweDQwMCAoMTAyNCkKICAg
ICAgZGF0YSBhc3NvY2lhdGl2aXR5ICAgICAgICA9IDggdG8gMTUtd2F5ICg2KQogICBQZXJm
b3JtYW5jZSBPcHRpbWl6YXRpb24gSWRlbnRpZmllcnMgKDB4ODAwMDAwMWEvZWF4KToKICAg
ICAgMTI4LWJpdCBTU0UgZXhlY3V0ZWQgZnVsbC13aWR0aCA9IHRydWUKICAgICAgTU9WVSog
YmV0dGVyIHRoYW4gTU9WTCovTU9WSCogICA9IHRydWUKICAgICAgMjU2LWJpdCBTU0UgZXhl
Y3V0ZWQgZnVsbC13aWR0aCA9IGZhbHNlCiAgIEluc3RydWN0aW9uIEJhc2VkIFNhbXBsaW5n
IElkZW50aWZpZXJzICgweDgwMDAwMDFiL2VheCk6CiAgICAgIElCUyBmZWF0dXJlIGZsYWdz
IHZhbGlkICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIElCUyBmZXRjaCBzYW1wbGlu
ZyAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIElCUyBleGVjdXRpb24gc2Ft
cGxpbmcgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIHJlYWQgd3JpdGUgb2Ygb3Ag
Y291bnRlciAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIG9wIGNvdW50aW5nIG1vZGUg
ICAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIGJyYW5jaCB0YXJnZXQgYWRk
cmVzcyByZXBvcnRpbmcgICAgICAgICAgPSB0cnVlCiAgICAgIElic09wQ3VyQ250IGFuZCBJ
YnNPcE1heENudCBleHRlbmQgNyAgICAgPSB0cnVlCiAgICAgIGludmFsaWQgUklQIGluZGlj
YXRpb24gc3VwcG9ydCAgICAgICAgICAgPSB0cnVlCiAgICAgIGZ1c2VkIGJyYW5jaCBtaWNy
by1vcCBpbmRpY2F0aW9uIHN1cHBvcnQgPSBmYWxzZQogICAgICBJQlMgZmV0Y2ggY29udHJv
bCBleHRlbmRlZCBNU1Igc3VwcG9ydCAgID0gZmFsc2UKICAgICAgSUJTIG9wIGRhdGEgNCBN
U1Igc3VwcG9ydCAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIElCUyBMMyBtaXNzIGZp
bHRlcmluZyBzdXBwb3J0ICAgICAgICAgICAgPSBmYWxzZQogICBMaWdodHdlaWdodCBQcm9m
aWxpbmcgQ2FwYWJpbGl0aWVzOiBBdmFpbGFiaWxpdHkgKDB4ODAwMDAwMWMvZWF4KToKICAg
ICAgbGlnaHR3ZWlnaHQgcHJvZmlsaW5nICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICBMV1BWQUwgaW5zdHJ1Y3Rpb24gICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAg
IGluc3RydWN0aW9uIHJldGlyZWQgZXZlbnQgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
YnJhbmNoIHJldGlyZWQgZXZlbnQgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBE
QyBtaXNzIGV2ZW50ICAgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGNv
cmUgY2xvY2tzIG5vdCBoYWx0ZWQgZXZlbnQgICAgICAgICAgID0gZmFsc2UKICAgICAgY29y
ZSByZWZlcmVuY2UgY2xvY2tzIG5vdCBoYWx0ZWQgZXZlbnQgPSBmYWxzZQogICAgICBjb250
aW51b3VzIG1vZGUgc2FtcGxpbmcgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIHRzYyBp
biBldmVudCByZWNvcmQgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgaW50ZXJy
dXB0IG9uIHRocmVzaG9sZCBvdmVyZmxvdyAgICAgICAgPSBmYWxzZQogICBMaWdodHdlaWdo
dCBQcm9maWxpbmcgQ2FwYWJpbGl0aWVzOiBTdXBwb3J0ZWQgKDB4ODAwMDAwMWMvZWR4KToK
ICAgICAgbGlnaHR3ZWlnaHQgcHJvZmlsaW5nICAgICAgICAgICAgICAgICAgPSB0cnVlCiAg
ICAgIExXUFZBTCBpbnN0cnVjdGlvbiAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAg
ICBpbnN0cnVjdGlvbiByZXRpcmVkIGV2ZW50ICAgICAgICAgICAgICA9IHRydWUKICAgICAg
YnJhbmNoIHJldGlyZWQgZXZlbnQgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIERD
IG1pc3MgZXZlbnQgICAgICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgY29y
ZSBjbG9ja3Mgbm90IGhhbHRlZCBldmVudCAgICAgICAgICAgPSBmYWxzZQogICAgICBjb3Jl
IHJlZmVyZW5jZSBjbG9ja3Mgbm90IGhhbHRlZCBldmVudCA9IGZhbHNlCiAgICAgIGNvbnRp
bnVvdXMgbW9kZSBzYW1wbGluZyAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgdHNjIGlu
IGV2ZW50IHJlY29yZCAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBpbnRlcnJ1
cHQgb24gdGhyZXNob2xkIG92ZXJmbG93ICAgICAgICA9IHRydWUKICAgTGlnaHR3ZWlnaHQg
UHJvZmlsaW5nIENhcGFiaWxpdGllcyAoMHg4MDAwMDAxYy9lYngpOgogICAgICBMV1BDQiBi
eXRlIHNpemUgICAgICAgICAgICAgPSAweDEzICgxOSkKICAgICAgZXZlbnQgcmVjb3JkIGJ5
dGUgc2l6ZSAgICAgID0gMHgyMCAoMzIpCiAgICAgIG1heGltdW0gRXZlbnRJZCAgICAgICAg
ICAgICA9IDB4MyAoMykKICAgICAgRXZlbnRJbnRlcnZhbDEgZmllbGQgb2Zmc2V0ID0gMHg4
MCAoMTI4KQogICBMaWdodHdlaWdodCBQcm9maWxpbmcgQ2FwYWJpbGl0aWVzICgweDgwMDAw
MDFjL2VjeCk6CiAgICAgIGxhdGVuY3kgY291bnRlciBiaXQgc2l6ZSAgICAgICAgICA9IDB4
MCAoMCkKICAgICAgZGF0YSBjYWNoZSBtaXNzIGFkZHJlc3MgdmFsaWQgICAgID0gZmFsc2UK
ICAgICAgYW1vdW50IGNhY2hlIGxhdGVuY3kgaXMgcm91bmRlZCAgID0gMHgwICgwKQogICAg
ICBMV1AgaW1wbGVtZW50YXRpb24gdmVyc2lvbiAgICAgICAgPSAweDEgKDEpCiAgICAgIGV2
ZW50IHJpbmcgYnVmZmVyIHNpemUgaW4gcmVjb3JkcyA9IDB4MSAoMSkKICAgICAgYnJhbmNo
IHByZWRpY3Rpb24gZmlsdGVyaW5nICAgICAgID0gZmFsc2UKICAgICAgSVAgZmlsdGVyaW5n
ICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgY2FjaGUgbGV2ZWwgZmlsdGVy
aW5nICAgICAgICAgICAgID0gZmFsc2UKICAgICAgY2FjaGUgbGF0ZW5jeSBmaWx0ZWluZyAg
ICAgICAgICAgID0gZmFsc2UKICAgQ2FjaGUgUHJvcGVydGllcyAoMHg4MDAwMDAxZCk6CiAg
ICAgIC0tLSBjYWNoZSAwIC0tLQogICAgICB0eXBlICAgICAgICAgICAgICAgICAgICAgICAg
ICAgID0gZGF0YSAoMSkKICAgICAgbGV2ZWwgICAgICAgICAgICAgICAgICAgICAgICAgICA9
IDB4MSAoMSkKICAgICAgc2VsZi1pbml0aWFsaXppbmcgICAgICAgICAgICAgICA9IHRydWUK
ICAgICAgZnVsbHkgYXNzb2NpYXRpdmUgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGV4
dHJhIGNvcmVzIHNoYXJpbmcgdGhpcyBjYWNoZSAgPSAweDAgKDApCiAgICAgIGxpbmUgc2l6
ZSBpbiBieXRlcyAgICAgICAgICAgICAgPSAweDQwICg2NCkKICAgICAgcGh5c2ljYWwgbGlu
ZSBwYXJ0aXRpb25zICAgICAgICA9IDB4MSAoMSkKICAgICAgbnVtYmVyIG9mIHdheXMgICAg
ICAgICAgICAgICAgICA9IDB4NCAoNCkKICAgICAgbnVtYmVyIG9mIHNldHMgICAgICAgICAg
ICAgICAgICA9IDY0CiAgICAgIHdyaXRlLWJhY2sgaW52YWxpZGF0ZSAgICAgICAgICAgPSBm
YWxzZQogICAgICBjYWNoZSBpbmNsdXNpdmUgb2YgbG93ZXIgbGV2ZWxzID0gZmFsc2UKICAg
ICAgKHN5bnRoIHNpemUpICAgICAgICAgICAgICAgICAgICA9IDE2Mzg0ICgxNiBLQikKICAg
ICAgLS0tIGNhY2hlIDEgLS0tCiAgICAgIHR5cGUgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgPSBpbnN0cnVjdGlvbiAoMikKICAgICAgbGV2ZWwgICAgICAgICAgICAgICAgICAgICAg
ICAgICA9IDB4MSAoMSkKICAgICAgc2VsZi1pbml0aWFsaXppbmcgICAgICAgICAgICAgICA9
IHRydWUKICAgICAgZnVsbHkgYXNzb2NpYXRpdmUgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIGV4dHJhIGNvcmVzIHNoYXJpbmcgdGhpcyBjYWNoZSAgPSAweDEgKDEpCiAgICAgIGxp
bmUgc2l6ZSBpbiBieXRlcyAgICAgICAgICAgICAgPSAweDQwICg2NCkKICAgICAgcGh5c2lj
YWwgbGluZSBwYXJ0aXRpb25zICAgICAgICA9IDB4MSAoMSkKICAgICAgbnVtYmVyIG9mIHdh
eXMgICAgICAgICAgICAgICAgICA9IDB4MiAoMikKICAgICAgbnVtYmVyIG9mIHNldHMgICAg
ICAgICAgICAgICAgICA9IDUxMgogICAgICB3cml0ZS1iYWNrIGludmFsaWRhdGUgICAgICAg
ICAgID0gZmFsc2UKICAgICAgY2FjaGUgaW5jbHVzaXZlIG9mIGxvd2VyIGxldmVscyA9IGZh
bHNlCiAgICAgIChzeW50aCBzaXplKSAgICAgICAgICAgICAgICAgICAgPSA2NTUzNiAoNjQg
S0IpCiAgICAgIC0tLSBjYWNoZSAyIC0tLQogICAgICB0eXBlICAgICAgICAgICAgICAgICAg
ICAgICAgICAgID0gdW5pZmllZCAoMykKICAgICAgbGV2ZWwgICAgICAgICAgICAgICAgICAg
ICAgICAgICA9IDB4MiAoMikKICAgICAgc2VsZi1pbml0aWFsaXppbmcgICAgICAgICAgICAg
ICA9IHRydWUKICAgICAgZnVsbHkgYXNzb2NpYXRpdmUgICAgICAgICAgICAgICA9IGZhbHNl
CiAgICAgIGV4dHJhIGNvcmVzIHNoYXJpbmcgdGhpcyBjYWNoZSAgPSAweDEgKDEpCiAgICAg
IGxpbmUgc2l6ZSBpbiBieXRlcyAgICAgICAgICAgICAgPSAweDQwICg2NCkKICAgICAgcGh5
c2ljYWwgbGluZSBwYXJ0aXRpb25zICAgICAgICA9IDB4MSAoMSkKICAgICAgbnVtYmVyIG9m
IHdheXMgICAgICAgICAgICAgICAgICA9IDB4MTAgKDE2KQogICAgICBudW1iZXIgb2Ygc2V0
cyAgICAgICAgICAgICAgICAgID0gMTAyNAogICAgICB3cml0ZS1iYWNrIGludmFsaWRhdGUg
ICAgICAgICAgID0gdHJ1ZQogICAgICBjYWNoZSBpbmNsdXNpdmUgb2YgbG93ZXIgbGV2ZWxz
ID0gZmFsc2UKICAgICAgKHN5bnRoIHNpemUpICAgICAgICAgICAgICAgICAgICA9IDEwNDg1
NzYgKDEwMjQgS0IpCiAgIGV4dGVuZGVkIEFQSUMgSUQgPSAxNgogICBDb21wdXRlIFVuaXQg
SWRlbnRpZmllcnMgKDB4ODAwMDAwMWUvZWJ4KToKICAgICAgY29tcHV0ZSB1bml0IElEICAg
ICAgICA9IDB4MCAoMCkKICAgICAgY29yZXMgcGVyIGNvbXB1dGUgdW5pdCA9IDB4MiAoMikK
ICAgTm9kZSBJZGVudGlmaWVycyAoMHg4MDAwMDAxZS9lY3gpOgogICAgICBub2RlIElEICAg
ICAgICAgICAgID0gMHgwICgwKQogICAgICBub2RlcyBwZXIgcHJvY2Vzc29yID0gMHgxICgx
KQogICAoaW5zdHJ1Y3Rpb24gc3VwcG9ydGVkIHN5bnRoKToKICAgICAgQ01QWENIRzhCICAg
ICAgICAgICAgICAgID0gdHJ1ZQogICAgICBjb25kaXRpb25hbCBtb3ZlL2NvbXBhcmUgPSB0
cnVlCiAgICAgIFBSRUZFVENIL1BSRUZFVENIVyAgICAgICA9IHRydWUKICAgKG11bHRpLXBy
b2Nlc3Npbmcgc3ludGgpID0gbXVsdGktY29yZSAoYz0yKQogICAobXVsdGktcHJvY2Vzc2lu
ZyBtZXRob2QpID0gQU1ECiAgIChBUElDIHdpZHRocyBzeW50aCk6IENPUkVfd2lkdGg9MSBT
TVRfd2lkdGg9MAogICAoQVBJQyBzeW50aCk6IFBLR19JRD0wIENPUkVfSUQ9MCBTTVRfSUQ9
MAogICAodWFyY2ggc3ludGgpID0gQU1EIFBpbGVkcml2ZXIsIDMybm0KICAgKHN5bnRoKSA9
IEFNRCBBLVNlcmllcyAoUmljaGxhbmQgUkwtQTEpIFtQaWxlZHJpdmVyXSwgMzJubQpDUFUg
MToKICAgdmVuZG9yX2lkID0gIkF1dGhlbnRpY0FNRCIKICAgdmVyc2lvbiBpbmZvcm1hdGlv
biAoMS9lYXgpOgogICAgICBwcm9jZXNzb3IgdHlwZSAgPSBwcmltYXJ5IHByb2Nlc3NvciAo
MCkKICAgICAgZmFtaWx5ICAgICAgICAgID0gMHhmICgxNSkKICAgICAgbW9kZWwgICAgICAg
ICAgID0gMHgzICgzKQogICAgICBzdGVwcGluZyBpZCAgICAgPSAweDEgKDEpCiAgICAgIGV4
dGVuZGVkIGZhbWlseSA9IDB4NiAoNikKICAgICAgZXh0ZW5kZWQgbW9kZWwgID0gMHgxICgx
KQogICAgICAoZmFtaWx5IHN5bnRoKSAgPSAweDE1ICgyMSkKICAgICAgKG1vZGVsIHN5bnRo
KSAgID0gMHgxMyAoMTkpCiAgICAgIChzaW1wbGUgc3ludGgpICA9IEFNRCAodW5rbm93biB0
eXBlKSAoUmljaGxhbmQgUkwtQTEpIFtQaWxlZHJpdmVyXSwgMzJubQogICBtaXNjZWxsYW5l
b3VzICgxL2VieCk6CiAgICAgIHByb2Nlc3MgbG9jYWwgQVBJQyBwaHlzaWNhbCBJRCA9IDB4
MSAoMSkKICAgICAgbWF4aW11bSBJRHMgZm9yIENQVXMgaW4gcGtnICAgID0gMHgyICgyKQog
ICAgICBDTEZMVVNIIGxpbmUgc2l6ZSAgICAgICAgICAgICAgPSAweDggKDgpCiAgICAgIGJy
YW5kIGluZGV4ICAgICAgICAgICAgICAgICAgICA9IDB4MCAoMCkKICAgYnJhbmQgaWQgPSAw
eDAwICgwKTogdW5rbm93bgogICBmZWF0dXJlIGluZm9ybWF0aW9uICgxL2VkeCk6CiAgICAg
IHg4NyBGUFUgb24gY2hpcCAgICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBW
TUU6IHZpcnR1YWwtODA4NiBtb2RlIGVuaGFuY2VtZW50ICAgICA9IHRydWUKICAgICAgREU6
IGRlYnVnZ2luZyBleHRlbnNpb25zICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIFBTRTog
cGFnZSBzaXplIGV4dGVuc2lvbnMgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBUU0M6IHRp
bWUgc3RhbXAgY291bnRlciAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgUkRNU1IgYW5k
IFdSTVNSIHN1cHBvcnQgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIFBBRTogcGh5c2lj
YWwgYWRkcmVzcyBleHRlbnNpb25zICAgICAgID0gdHJ1ZQogICAgICBNQ0U6IG1hY2hpbmUg
Y2hlY2sgZXhjZXB0aW9uICAgICAgICAgICA9IHRydWUKICAgICAgQ01QWENIRzhCIGluc3Qu
ICAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIEFQSUMgb24gY2hpcCAgICAg
ICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBTWVNFTlRFUiBhbmQgU1lTRVhJ
VCAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgTVRSUjogbWVtb3J5IHR5cGUgcmFu
Z2UgcmVnaXN0ZXJzICAgICAgPSB0cnVlCiAgICAgIFBURSBnbG9iYWwgYml0ICAgICAgICAg
ICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBNQ0E6IG1hY2hpbmUgY2hlY2sgYXJjaGl0
ZWN0dXJlICAgICAgICA9IHRydWUKICAgICAgQ01PVjogY29uZGl0aW9uYWwgbW92ZS9jb21w
YXJlIGluc3RyICAgPSB0cnVlCiAgICAgIFBBVDogcGFnZSBhdHRyaWJ1dGUgdGFibGUgICAg
ICAgICAgICAgID0gdHJ1ZQogICAgICBQU0UtMzY6IHBhZ2Ugc2l6ZSBleHRlbnNpb24gICAg
ICAgICAgICA9IHRydWUKICAgICAgUFNOOiBwcm9jZXNzb3Igc2VyaWFsIG51bWJlciAgICAg
ICAgICAgPSBmYWxzZQogICAgICBDTEZMVVNIIGluc3RydWN0aW9uICAgICAgICAgICAgICAg
ICAgICA9IHRydWUKICAgICAgRFM6IGRlYnVnIHN0b3JlICAgICAgICAgICAgICAgICAgICAg
ICAgPSBmYWxzZQogICAgICBBQ1BJOiB0aGVybWFsIG1vbml0b3IgYW5kIGNsb2NrIGN0cmwg
ICA9IGZhbHNlCiAgICAgIE1NWCBUZWNobm9sb2d5ICAgICAgICAgICAgICAgICAgICAgICAg
ID0gdHJ1ZQogICAgICBGWFNBVkUvRlhSU1RPUiAgICAgICAgICAgICAgICAgICAgICAgICA9
IHRydWUKICAgICAgU1NFIGV4dGVuc2lvbnMgICAgICAgICAgICAgICAgICAgICAgICAgPSB0
cnVlCiAgICAgIFNTRTIgZXh0ZW5zaW9ucyAgICAgICAgICAgICAgICAgICAgICAgID0gdHJ1
ZQogICAgICBTUzogc2VsZiBzbm9vcCAgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNl
CiAgICAgIGh5cGVyLXRocmVhZGluZyAvIG11bHRpLWNvcmUgc3VwcG9ydGVkID0gdHJ1ZQog
ICAgICBUTTogdGhlcm0uIG1vbml0b3IgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIElBNjQgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAg
ICAgUEJFOiBwZW5kaW5nIGJyZWFrIGV2ZW50ICAgICAgICAgICAgICAgPSBmYWxzZQogICBm
ZWF0dXJlIGluZm9ybWF0aW9uICgxL2VjeCk6CiAgICAgIFBOSS9TU0UzOiBQcmVzY290dCBO
ZXcgSW5zdHJ1Y3Rpb25zICAgICA9IHRydWUKICAgICAgUENMTVVMRFEgaW5zdHJ1Y3Rpb24g
ICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBEVEVTNjQ6IDY0LWJpdCBkZWJ1ZyBz
dG9yZSAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBNT05JVE9SL01XQUlUICAgICAgICAg
ICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIENQTC1xdWFsaWZpZWQgZGVidWcgc3Rv
cmUgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFZNWDogdmlydHVhbCBtYWNoaW5lIGV4
dGVuc2lvbnMgICAgICAgICA9IGZhbHNlCiAgICAgIFNNWDogc2FmZXIgbW9kZSBleHRlbnNp
b25zICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIEVuaGFuY2VkIEludGVsIFNwZWVkU3Rl
cCBUZWNobm9sb2d5ICAgICA9IGZhbHNlCiAgICAgIFRNMjogdGhlcm1hbCBtb25pdG9yIDIg
ICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFNTU0UzIGV4dGVuc2lvbnMgICAgICAg
ICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgY29udGV4dCBJRDogYWRhcHRpdmUgb3Ig
c2hhcmVkIEwxIGRhdGEgID0gZmFsc2UKICAgICAgU0RCRzogSUEzMl9ERUJVR19JTlRFUkZB
Q0UgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgRk1BIGluc3RydWN0aW9uICAgICAgICAg
ICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBDTVBYQ0hHMTZCIGluc3RydWN0aW9uICAg
ICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIHhUUFIgZGlzYWJsZSAgICAgICAgICAgICAg
ICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFBEQ006IHBlcmZtb24gYW5kIGRlYnVnICAg
ICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFBDSUQ6IHByb2Nlc3MgY29udGV4dCBpZGVu
dGlmaWVycyAgICAgICA9IGZhbHNlCiAgICAgIERDQTogZGlyZWN0IGNhY2hlIGFjY2VzcyAg
ICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFNTRTQuMSBleHRlbnNpb25zICAgICAgICAg
ICAgICAgICAgICAgICA9IHRydWUKICAgICAgU1NFNC4yIGV4dGVuc2lvbnMgICAgICAgICAg
ICAgICAgICAgICAgID0gdHJ1ZQogICAgICB4MkFQSUM6IGV4dGVuZGVkIHhBUElDIHN1cHBv
cnQgICAgICAgICAgPSBmYWxzZQogICAgICBNT1ZCRSBpbnN0cnVjdGlvbiAgICAgICAgICAg
ICAgICAgICAgICAgPSBmYWxzZQogICAgICBQT1BDTlQgaW5zdHJ1Y3Rpb24gICAgICAgICAg
ICAgICAgICAgICAgPSB0cnVlCiAgICAgIHRpbWUgc3RhbXAgY291bnRlciBkZWFkbGluZSAg
ICAgICAgICAgICA9IGZhbHNlCiAgICAgIEFFUyBpbnN0cnVjdGlvbiAgICAgICAgICAgICAg
ICAgICAgICAgICA9IHRydWUKICAgICAgWFNBVkUvWFNUT1Igc3RhdGVzICAgICAgICAgICAg
ICAgICAgICAgID0gdHJ1ZQogICAgICBPUy1lbmFibGVkIFhTQVZFL1hTVE9SICAgICAgICAg
ICAgICAgICAgPSB0cnVlCiAgICAgIEFWWDogYWR2YW5jZWQgdmVjdG9yIGV4dGVuc2lvbnMg
ICAgICAgICA9IHRydWUKICAgICAgRjE2QyBoYWxmLXByZWNpc2lvbiBjb252ZXJ0IGluc3Ry
dWN0aW9uID0gdHJ1ZQogICAgICBSRFJBTkQgaW5zdHJ1Y3Rpb24gICAgICAgICAgICAgICAg
ICAgICAgPSBmYWxzZQogICAgICBoeXBlcnZpc29yIGd1ZXN0IHN0YXR1cyAgICAgICAgICAg
ICAgICAgPSBmYWxzZQogICBjYWNoZSBhbmQgVExCIGluZm9ybWF0aW9uICgyKToKICAgcHJv
Y2Vzc29yIHNlcmlhbCBudW1iZXIgPSAwMDYxLTBGMzEtMDAwMC0wMDAwLTAwMDAtMDAwMAog
ICBkZXRlcm1pbmlzdGljIGNhY2hlIHBhcmFtZXRlcnMgKDQpOgogICAgICAtLS0gY2FjaGUg
MCAtLS0KICAgICAgY2FjaGUgdHlwZSAgICAgICAgICAgICAgICAgICAgICAgICA9IG5vIG1v
cmUgY2FjaGVzICgwKQogICBNT05JVE9SL01XQUlUICg1KToKICAgICAgc21hbGxlc3QgbW9u
aXRvci1saW5lIHNpemUgKGJ5dGVzKSAgICAgICA9IDB4NDAgKDY0KQogICAgICBsYXJnZXN0
IG1vbml0b3ItbGluZSBzaXplIChieXRlcykgICAgICAgID0gMHg0MCAoNjQpCiAgICAgIGVu
dW0gb2YgTW9uaXRvci1NV0FJVCBleHRzIHN1cHBvcnRlZCAgICAgPSB0cnVlCiAgICAgIHN1
cHBvcnRzIGludHJzIGFzIGJyZWFrLWV2ZW50IGZvciBNV0FJVCAgPSB0cnVlCiAgICAgIG51
bWJlciBvZiBDMCBzdWIgQy1zdGF0ZXMgdXNpbmcgTVdBSVQgICAgPSAweDAgKDApCiAgICAg
IG51bWJlciBvZiBDMSBzdWIgQy1zdGF0ZXMgdXNpbmcgTVdBSVQgICAgPSAweDAgKDApCiAg
ICAgIG51bWJlciBvZiBDMiBzdWIgQy1zdGF0ZXMgdXNpbmcgTVdBSVQgICAgPSAweDAgKDAp
CiAgICAgIG51bWJlciBvZiBDMyBzdWIgQy1zdGF0ZXMgdXNpbmcgTVdBSVQgICAgPSAweDAg
KDApCiAgICAgIG51bWJlciBvZiBDNCBzdWIgQy1zdGF0ZXMgdXNpbmcgTVdBSVQgICAgPSAw
eDAgKDApCiAgICAgIG51bWJlciBvZiBDNSBzdWIgQy1zdGF0ZXMgdXNpbmcgTVdBSVQgICAg
PSAweDAgKDApCiAgICAgIG51bWJlciBvZiBDNiBzdWIgQy1zdGF0ZXMgdXNpbmcgTVdBSVQg
ICAgPSAweDAgKDApCiAgICAgIG51bWJlciBvZiBDNyBzdWIgQy1zdGF0ZXMgdXNpbmcgTVdB
SVQgICAgPSAweDAgKDApCiAgIFRoZXJtYWwgYW5kIFBvd2VyIE1hbmFnZW1lbnQgRmVhdHVy
ZXMgKDYpOgogICAgICBkaWdpdGFsIHRoZXJtb21ldGVyICAgICAgICAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBJbnRlbCBUdXJibyBCb29zdCBUZWNobm9sb2d5ICAgICAgICAgICAg
PSBmYWxzZQogICAgICBBUkFUIGFsd2F5cyBydW5uaW5nIEFQSUMgdGltZXIgICAgICAgICAg
PSBmYWxzZQogICAgICBQTE4gcG93ZXIgbGltaXQgbm90aWZpY2F0aW9uICAgICAgICAgICAg
PSBmYWxzZQogICAgICBFQ01EIGV4dGVuZGVkIGNsb2NrIG1vZHVsYXRpb24gZHV0eSAgICAg
PSBmYWxzZQogICAgICBQVE0gcGFja2FnZSB0aGVybWFsIG1hbmFnZW1lbnQgICAgICAgICAg
PSBmYWxzZQogICAgICBIV1AgYmFzZSByZWdpc3RlcnMgICAgICAgICAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBIV1Agbm90aWZpY2F0aW9uICAgICAgICAgICAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBIV1AgYWN0aXZpdHkgd2luZG93ICAgICAgICAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBIV1AgZW5lcmd5IHBlcmZvcm1hbmNlIHByZWZlcmVuY2UgICAgICAg
PSBmYWxzZQogICAgICBIV1AgcGFja2FnZSBsZXZlbCByZXF1ZXN0ICAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBIREMgYmFzZSByZWdpc3RlcnMgICAgICAgICAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBJbnRlbCBUdXJibyBCb29zdCBNYXggVGVjaG5vbG9neSAzLjAgICAg
PSBmYWxzZQogICAgICBIV1AgY2FwYWJpbGl0aWVzICAgICAgICAgICAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBIV1AgUEVDSSBvdmVycmlkZSAgICAgICAgICAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBmbGV4aWJsZSBIV1AgICAgICAgICAgICAgICAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBJQTMyX0hXUF9SRVFVRVNUIE1TUiBmYXN0IGFjY2VzcyBtb2RlICAg
PSBmYWxzZQogICAgICBIV19GRUVEQkFDSyBNU1JzIHN1cHBvcnRlZCAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBpZ25vcmluZyBpZGxlIGxvZ2ljYWwgcHJvY2Vzc29yIEhXUCByZXEg
PSBmYWxzZQogICAgICBUaHJlYWQgRGlyZWN0b3IgICAgICAgICAgICAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBJQTMyX0hXX0ZFRURCQUNLX1RIUkVBRF9DT05GSUcgYml0IDI1ICAg
PSBmYWxzZQogICAgICBkaWdpdGFsIHRoZXJtb21ldGVyIHRocmVzaG9sZHMgICAgICAgICAg
PSAweDAgKDApCiAgICAgIGhhcmR3YXJlIGNvb3JkaW5hdGlvbiBmZWVkYmFjayAgICAgICAg
ICA9IHRydWUKICAgICAgQUNOVDIgYXZhaWxhYmxlICAgICAgICAgICAgICAgICAgICAgICAg
ID0gZmFsc2UKICAgICAgcGVyZm9ybWFuY2UtZW5lcmd5IGJpYXMgY2FwYWJpbGl0eSAgICAg
ID0gZmFsc2UKICAgICAgbnVtYmVyIG9mIGVuaCBoYXJkd2FyZSBmZWVkYmFjayBjbGFzc2Vz
ID0gMHgwICgwKQogICAgICBwZXJmb3JtYW5jZSBjYXBhYmlsaXR5IHJlcG9ydGluZyAgICAg
ICAgPSBmYWxzZQogICAgICBlbmVyZ3kgZWZmaWNpZW5jeSBjYXBhYmlsaXR5IHJlcG9ydGlu
ZyAgPSBmYWxzZQogICAgICBzaXplIG9mIGZlZWRiYWNrIHN0cnVjdCAoNEtCIHBhZ2VzKSAg
ICAgPSAweDEgKDEpCiAgICAgIGluZGV4IG9mIENQVSdzIHJvdyBpbiBmZWVkYmFjayBzdHJ1
Y3QgICA9IDB4MCAoMCkKICAgZXh0ZW5kZWQgZmVhdHVyZSBmbGFncyAoNyk6CiAgICAgIEZT
R1NCQVNFIGluc3RydWN0aW9ucyAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBJ
QTMyX1RTQ19BREpVU1QgTVNSIHN1cHBvcnRlZCAgICAgICAgICAgID0gZmFsc2UKICAgICAg
U0dYOiBTb2Z0d2FyZSBHdWFyZCBFeHRlbnNpb25zIHN1cHBvcnRlZCA9IGZhbHNlCiAgICAg
IEJNSTEgaW5zdHJ1Y3Rpb25zICAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAg
IEhMRSBoYXJkd2FyZSBsb2NrIGVsaXNpb24gICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICBBVlgyOiBhZHZhbmNlZCB2ZWN0b3IgZXh0ZW5zaW9ucyAyICAgICAgID0gZmFsc2UKICAg
ICAgRkRQX0VYQ1BUTl9PTkxZICAgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIFNNRVAgc3VwZXJ2aXNvciBtb2RlIGV4ZWMgcHJvdGVjdGlvbiAgICAgPSBmYWxzZQog
ICAgICBCTUkyIGluc3RydWN0aW9ucyAgICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UK
ICAgICAgZW5oYW5jZWQgUkVQIE1PVlNCL1NUT1NCICAgICAgICAgICAgICAgICA9IGZhbHNl
CiAgICAgIElOVlBDSUQgaW5zdHJ1Y3Rpb24gICAgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICBSVE06IHJlc3RyaWN0ZWQgdHJhbnNhY3Rpb25hbCBtZW1vcnkgICAgID0gZmFs
c2UKICAgICAgUkRULUNNVC9QUW9TIGNhY2hlIG1vbml0b3JpbmcgICAgICAgICAgICA9IGZh
bHNlCiAgICAgIGRlcHJlY2F0ZWQgRlBVIENTL0RTICAgICAgICAgICAgICAgICAgICAgPSBm
YWxzZQogICAgICBNUFg6IGludGVsIG1lbW9yeSBwcm90ZWN0aW9uIGV4dGVuc2lvbnMgID0g
ZmFsc2UKICAgICAgUkRULUNBVC9QUUUgY2FjaGUgYWxsb2NhdGlvbiAgICAgICAgICAgICA9
IGZhbHNlCiAgICAgIEFWWDUxMkY6IEFWWC01MTIgZm91bmRhdGlvbiBpbnN0cnVjdGlvbnMg
PSBmYWxzZQogICAgICBBVlg1MTJEUTogZG91YmxlICYgcXVhZHdvcmQgaW5zdHJ1Y3Rpb25z
ID0gZmFsc2UKICAgICAgUkRTRUVEIGluc3RydWN0aW9uICAgICAgICAgICAgICAgICAgICAg
ICA9IGZhbHNlCiAgICAgIEFEWCBpbnN0cnVjdGlvbnMgICAgICAgICAgICAgICAgICAgICAg
ICAgPSBmYWxzZQogICAgICBTTUFQOiBzdXBlcnZpc29yIG1vZGUgYWNjZXNzIHByZXZlbnRp
b24gID0gZmFsc2UKICAgICAgQVZYNTEySUZNQTogaW50ZWdlciBmdXNlZCBtdWx0aXBseSBh
ZGQgICA9IGZhbHNlCiAgICAgIFBDT01NSVQgaW5zdHJ1Y3Rpb24gICAgICAgICAgICAgICAg
ICAgICAgPSBmYWxzZQogICAgICBDTEZMVVNIT1BUIGluc3RydWN0aW9uICAgICAgICAgICAg
ICAgICAgID0gZmFsc2UKICAgICAgQ0xXQiBpbnN0cnVjdGlvbiAgICAgICAgICAgICAgICAg
ICAgICAgICA9IGZhbHNlCiAgICAgIEludGVsIHByb2Nlc3NvciB0cmFjZSAgICAgICAgICAg
ICAgICAgICAgPSBmYWxzZQogICAgICBBVlg1MTJQRjogcHJlZmV0Y2ggaW5zdHJ1Y3Rpb25z
ICAgICAgICAgID0gZmFsc2UKICAgICAgQVZYNTEyRVI6IGV4cG9uZW50ICYgcmVjaXByb2Nh
bCBpbnN0cnMgICA9IGZhbHNlCiAgICAgIEFWWDUxMkNEOiBjb25mbGljdCBkZXRlY3Rpb24g
aW5zdHJzICAgICAgPSBmYWxzZQogICAgICBTSEEgaW5zdHJ1Y3Rpb25zICAgICAgICAgICAg
ICAgICAgICAgICAgID0gZmFsc2UKICAgICAgQVZYNTEyQlc6IGJ5dGUgJiB3b3JkIGluc3Ry
dWN0aW9ucyAgICAgICA9IGZhbHNlCiAgICAgIEFWWDUxMlZMOiB2ZWN0b3IgbGVuZ3RoICAg
ICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBQUkVGRVRDSFdUMSAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgQVZYNTEyVkJNSTogdmVjdG9yIGJ5dGUg
bWFuaXB1bGF0aW9uICAgICA9IGZhbHNlCiAgICAgIFVNSVA6IHVzZXItbW9kZSBpbnN0cnVj
dGlvbiBwcmV2ZW50aW9uICAgPSBmYWxzZQogICAgICBQS1UgcHJvdGVjdGlvbiBrZXlzIGZv
ciB1c2VyLW1vZGUgICAgICAgID0gZmFsc2UKICAgICAgT1NQS0UgQ1I0LlBLRSBhbmQgUkRQ
S1JVL1dSUEtSVSAgICAgICAgICA9IGZhbHNlCiAgICAgIFdBSVRQS0cgaW5zdHJ1Y3Rpb25z
ICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBBVlg1MTJfVkJNSTI6IGJ5dGUg
VlBDT01QUkVTUywgVlBFWFBBTkQgID0gZmFsc2UKICAgICAgQ0VUX1NTOiBDRVQgc2hhZG93
IHN0YWNrICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIEdGTkk6IEdhbG9pcyBGaWVs
ZCBOZXcgSW5zdHJ1Y3Rpb25zICAgICAgPSBmYWxzZQogICAgICBWQUVTIGluc3RydWN0aW9u
cyAgICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgVlBDTE1VTFFEUSBpbnN0
cnVjdGlvbiAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIEFWWDUxMl9WTk5JOiBu
ZXVyYWwgbmV0d29yayBpbnN0cnVjdGlvbnMgPSBmYWxzZQogICAgICBBVlg1MTJfQklUQUxH
OiBiaXQgY291bnQvc2hpZmZsZSAgICAgICAgID0gZmFsc2UKICAgICAgVE1FOiBUb3RhbCBN
ZW1vcnkgRW5jcnlwdGlvbiAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIEFWWDUxMjogVlBP
UENOVERRIGluc3RydWN0aW9uICAgICAgICAgICAgPSBmYWxzZQogICAgICBMQTU3OiA1Ny1i
aXQgYWRkcnMgJiA1LWxldmVsIHBhZ2luZyAgICAgID0gZmFsc2UKICAgICAgQk5ETERYL0JO
RFNUWCBNQVdBVSB2YWx1ZSBpbiA2NC1iaXQgbW9kZSA9IDB4MCAoMCkKICAgICAgUkRQSUQ6
IHJlYWQgcHJvY2Vzc29yIElEIHN1cHBvcnRlZCAgICAgICA9IGZhbHNlCiAgICAgIEtMOiBr
ZXkgbG9ja2VyICAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBidXMg
bG9jayBkZXRlY3Rpb24gICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgQ0xE
RU1PVEUgc3VwcG9ydHMgY2FjaGUgbGluZSBkZW1vdGUgICAgICA9IGZhbHNlCiAgICAgIE1P
VkRJUkkgaW5zdHJ1Y3Rpb24gICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBN
T1ZESVI2NEIgaW5zdHJ1Y3Rpb24gICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
RU5RQ01EIGluc3RydWN0aW9uICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAg
IFNHWF9MQzogU0dYIGxhdW5jaCBjb25maWcgc3VwcG9ydGVkICAgICAgPSBmYWxzZQogICAg
ICBQS1M6IHN1cGVydmlzb3IgcHJvdGVjdGlvbiBrZXlzICAgICAgICAgID0gZmFsc2UKICAg
ICAgU0dYLUtFWVM6IFNHWCBhdHRlc3RhdGlvbiBzZXJ2aWNlcyAgICAgICA9IGZhbHNlCiAg
ICAgIEFWWDUxMl80Vk5OSVc6IG5ldXJhbCBuZXR3b3JrIGluc3RycyAgICAgPSBmYWxzZQog
ICAgICBBVlg1MTJfNEZNQVBTOiBtdWx0aXBseSBhY2Mgc2luZ2xlIHByZWMgID0gZmFsc2UK
ICAgICAgZmFzdCBzaG9ydCBSRVAgTU9WICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNl
CiAgICAgIFVJTlRSOiB1c2VyIGludGVycnVwdHMgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICBBVlg1MTJfVlAySU5URVJTRUNUOiBpbnRlcnNlY3QgbWFzayByZWdzID0gZmFs
c2UKICAgICAgSUEzMl9NQ1VfT1BUX0NUUkwgU1JCRFMgbWl0aWdhdGlvbiBNU1IgICA9IGZh
bHNlCiAgICAgIFZFUlcgTURfQ0xFQVIgbWljcm9jb2RlIHN1cHBvcnQgICAgICAgICAgPSBm
YWxzZQogICAgICBSVE0gdHJhbnNhY3Rpb24gYWx3YXlzIGFib3J0cyAgICAgICAgICAgID0g
ZmFsc2UKICAgICAgSUEzMl9UU1hfRk9SQ0VfQUJPUlQgTVNSICAgICAgICAgICAgICAgICA9
IGZhbHNlCiAgICAgIFNFUklBTElaRSBpbnN0cnVjdGlvbiAgICAgICAgICAgICAgICAgICAg
PSBmYWxzZQogICAgICBoeWJyaWQgcGFydCAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ID0gZmFsc2UKICAgICAgVFNYTERUUks6IFRTWCBzdXNwZW5kIGxvYWQgYWRkciB0cmFja2lu
ZyA9IGZhbHNlCiAgICAgIFBDT05GSUcgaW5zdHJ1Y3Rpb24gICAgICAgICAgICAgICAgICAg
ICAgPSBmYWxzZQogICAgICBMQlI6IGFyY2hpdGVjdHVyYWwgbGFzdCBicmFuY2ggcmVjb3Jk
cyAgID0gZmFsc2UKICAgICAgQ0VUX0lCVDogQ0VUIGluZGlyZWN0IGJyYW5jaCB0cmFja2lu
ZyAgICA9IGZhbHNlCiAgICAgIEFNWC1CRjE2OiB0aWxlIGJmbG9hdDE2IHN1cHBvcnQgICAg
ICAgICAgPSBmYWxzZQogICAgICBBVlg1MTJfRlAxNjogZnAxNiBzdXBwb3J0ICAgICAgICAg
ICAgICAgID0gZmFsc2UKICAgICAgQU1YLVRJTEU6IHRpbGUgYXJjaGl0ZWN0dXJlIHN1cHBv
cnQgICAgICA9IGZhbHNlCiAgICAgIEFNWC1JTlQ4OiB0aWxlIDgtYml0IGludGVnZXIgc3Vw
cG9ydCAgICAgPSBmYWxzZQogICAgICBJQlJTL0lCUEI6IGluZGlyZWN0IGJyYW5jaCByZXN0
cmljdGlvbnMgID0gZmFsc2UKICAgICAgU1RJQlA6IDEgdGhyIGluZGlyZWN0IGJyYW5jaCBw
cmVkaWN0b3IgICA9IGZhbHNlCiAgICAgIEwxRF9GTFVTSDogSUEzMl9GTFVTSF9DTUQgTVNS
ICAgICAgICAgICAgPSBmYWxzZQogICAgICBJQTMyX0FSQ0hfQ0FQQUJJTElUSUVTIE1TUiAg
ICAgICAgICAgICAgID0gZmFsc2UKICAgICAgSUEzMl9DT1JFX0NBUEFCSUxJVElFUyBNU1Ig
ICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFNTQkQ6IHNwZWN1bGF0aXZlIHN0b3JlIGJ5
cGFzcyBkaXNhYmxlICAgPSBmYWxzZQogICBEaXJlY3QgQ2FjaGUgQWNjZXNzIFBhcmFtZXRl
cnMgKDkpOgogICAgICBQTEFURk9STV9EQ0FfQ0FQIE1TUiBiaXRzID0gMAogICBBcmNoaXRl
Y3R1cmUgUGVyZm9ybWFuY2UgTW9uaXRvcmluZyBGZWF0dXJlcyAoMHhhKToKICAgICAgdmVy
c2lvbiBJRCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IDB4MCAoMCkKICAgICAg
bnVtYmVyIG9mIGNvdW50ZXJzIHBlciBsb2dpY2FsIHByb2Nlc3NvciA9IDB4MCAoMCkKICAg
ICAgYml0IHdpZHRoIG9mIGNvdW50ZXIgICAgICAgICAgICAgICAgICAgICA9IDB4MCAoMCkK
ICAgICAgbGVuZ3RoIG9mIEVCWCBiaXQgdmVjdG9yICAgICAgICAgICAgICAgICA9IDB4MCAo
MCkKICAgICAgY29yZSBjeWNsZSBldmVudCAgICAgICAgICAgICAgICAgICAgICAgICA9IG5v
dCBhdmFpbGFibGUKICAgICAgaW5zdHJ1Y3Rpb24gcmV0aXJlZCBldmVudCAgICAgICAgICAg
ICAgICA9IG5vdCBhdmFpbGFibGUKICAgICAgcmVmZXJlbmNlIGN5Y2xlcyBldmVudCAgICAg
ICAgICAgICAgICAgICA9IG5vdCBhdmFpbGFibGUKICAgICAgbGFzdC1sZXZlbCBjYWNoZSBy
ZWYgZXZlbnQgICAgICAgICAgICAgICA9IG5vdCBhdmFpbGFibGUKICAgICAgbGFzdC1sZXZl
bCBjYWNoZSBtaXNzIGV2ZW50ICAgICAgICAgICAgICA9IG5vdCBhdmFpbGFibGUKICAgICAg
YnJhbmNoIGluc3QgcmV0aXJlZCBldmVudCAgICAgICAgICAgICAgICA9IG5vdCBhdmFpbGFi
bGUKICAgICAgYnJhbmNoIG1pc3ByZWQgcmV0aXJlZCBldmVudCAgICAgICAgICAgICA9IG5v
dCBhdmFpbGFibGUKICAgICAgdG9wLWRvd24gc2xvdHMgZXZlbnQgICAgICAgICAgICAgICAg
ICAgICA9IG5vdCBhdmFpbGFibGUKICAgICAgZml4ZWQgY291bnRlciAgMCBzdXBwb3J0ZWQg
ICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNvdW50ZXIgIDEgc3VwcG9ydGVk
ICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBjb3VudGVyICAyIHN1cHBvcnRl
ZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQgY291bnRlciAgMyBzdXBwb3J0
ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNvdW50ZXIgIDQgc3VwcG9y
dGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBjb3VudGVyICA1IHN1cHBv
cnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQgY291bnRlciAgNiBzdXBw
b3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNvdW50ZXIgIDcgc3Vw
cG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBjb3VudGVyICA4IHN1
cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQgY291bnRlciAgOSBz
dXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNvdW50ZXIgMTAg
c3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBjb3VudGVyIDEx
IHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQgY291bnRlciAx
MiBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNvdW50ZXIg
MTMgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBjb3VudGVy
IDE0IHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQgY291bnRl
ciAxNSBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNvdW50
ZXIgMTYgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBjb3Vu
dGVyIDE3IHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQgY291
bnRlciAxOCBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVkIGNv
dW50ZXIgMTkgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhlZCBj
b3VudGVyIDIwIHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4ZWQg
Y291bnRlciAyMSBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZpeGVk
IGNvdW50ZXIgMjIgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBmaXhl
ZCBjb3VudGVyIDIzIHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZml4
ZWQgY291bnRlciAyNCBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGZp
eGVkIGNvdW50ZXIgMjUgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBm
aXhlZCBjb3VudGVyIDI2IHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
Zml4ZWQgY291bnRlciAyNyBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAg
IGZpeGVkIGNvdW50ZXIgMjggc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQogICAg
ICBmaXhlZCBjb3VudGVyIDI5IHN1cHBvcnRlZCAgICAgICAgICAgICAgID0gZmFsc2UKICAg
ICAgZml4ZWQgY291bnRlciAzMCBzdXBwb3J0ZWQgICAgICAgICAgICAgICA9IGZhbHNlCiAg
ICAgIGZpeGVkIGNvdW50ZXIgMzEgc3VwcG9ydGVkICAgICAgICAgICAgICAgPSBmYWxzZQog
ICAgICBudW1iZXIgb2YgY29udGlndW91cyBmaXhlZCBjb3VudGVycyAgICAgID0gMHgwICgw
KQogICAgICBiaXQgd2lkdGggb2YgZml4ZWQgY291bnRlcnMgICAgICAgICAgICAgID0gMHgw
ICgwKQogICAgICBhbnl0aHJlYWQgZGVwcmVjYXRpb24gICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgeDJBUElDIGZlYXR1cmVzIC8gcHJvY2Vzc29yIHRvcG9sb2d5ICgweGIpOgog
ICAgICBleHRlbmRlZCBBUElDIElEICAgICAgICAgICAgICAgICAgICAgID0gMAogICAgICAt
LS0gbGV2ZWwgMCAtLS0KICAgICAgbGV2ZWwgbnVtYmVyICAgICAgICAgICAgICAgICAgICAg
ICAgICA9IDB4MCAoMCkKICAgICAgbGV2ZWwgdHlwZSAgICAgICAgICAgICAgICAgICAgICAg
ICAgICA9IGludmFsaWQgKDApCiAgICAgIGJpdCB3aWR0aCBvZiBsZXZlbCAgICAgICAgICAg
ICAgICAgICAgPSAweDAgKDApCiAgICAgIG51bWJlciBvZiBsb2dpY2FsIHByb2Nlc3NvcnMg
YXQgbGV2ZWwgPSAweDAgKDApCiAgIFhTQVZFIGZlYXR1cmVzICgweGQvMCk6CiAgICAgIFhD
UjAgdmFsaWQgYml0IGZpZWxkIG1hc2sgICAgICAgICAgICAgICA9IDB4NDAwMDAwMDAwMDAw
MDAwNwogICAgICAgICB4ODcgc3RhdGUgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSB0
cnVlCiAgICAgICAgIFNTRSBzdGF0ZSAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IHRy
dWUKICAgICAgICAgQVZYIHN0YXRlICAgICAgICAgICAgICAgICAgICAgICAgICAgID0gdHJ1
ZQogICAgICAgICBNUFggQk5EUkVHUyAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICAgICBNUFggQk5EQ1NSICAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICAgICBBVlgtNTEyIG9wbWFzayAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICAgICBBVlgtNTEyIFpNTV9IaTI1NiAgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICAgICBBVlgtNTEyIEhpMTZfWk1NICAgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICAgICBQS1JVIHN0YXRlICAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICAgICBYVElMRUNGRyBzdGF0ZSAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICAgICBYVElMRURBVEEgc3RhdGUgICAgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICBieXRlcyByZXF1aXJlZCBieSBmaWVsZHMgaW4gWENSMCAgICAgICAgPSAweDAw
MDAwMzQwICg4MzIpCiAgICAgIGJ5dGVzIHJlcXVpcmVkIGJ5IFhTQVZFL1hSU1RPUiBhcmVh
ICAgICA9IDB4MDAwMDAzYzAgKDk2MCkKICAgICAgWFNBVkVPUFQgaW5zdHJ1Y3Rpb24gICAg
ICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgWFNBVkVDIGluc3RydWN0aW9uICAgICAg
ICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgWEdFVEJWIGluc3RydWN0aW9uICAgICAg
ICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgWFNBVkVTL1hSU1RPUlMgaW5zdHJ1Y3Rp
b25zICAgICAgICAgICAgID0gZmFsc2UKICAgICAgWEZEOiBleHRlbmRlZCBmZWF0dXJlIGRp
c2FibGUgc3VwcG9ydGVkID0gZmFsc2UKICAgICAgU0FWRSBhcmVhIHNpemUgaW4gYnl0ZXMg
ICAgICAgICAgICAgICAgID0gMHgwMDAwMDAwMCAoMCkKICAgICAgSUEzMl9YU1MgdmFsaWQg
Yml0IGZpZWxkIG1hc2sgICAgICAgICAgID0gMHgwMDAwMDAwMDAwMDAwMDAwCiAgICAgICAg
IFBUIHN0YXRlICAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgICAg
IFBBU0lEIHN0YXRlICAgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgICAg
IENFVF9VIHVzZXIgc3RhdGUgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgICAg
IENFVF9TIHN1cGVydmlzb3Igc3RhdGUgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgICAg
IEhEQyBzdGF0ZSAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgICAg
IFVJTlRSIHN0YXRlICAgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgICAg
IExCUiBzdGF0ZSAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgICAg
IEhXUCBzdGF0ZSAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgIEFWWC9Z
TU0gZmVhdHVyZXMgKDB4ZC8yKToKICAgICAgQVZYL1lNTSBzYXZlIHN0YXRlIGJ5dGUgc2l6
ZSAgICAgICAgICAgICA9IDB4MDAwMDAxMDAgKDI1NikKICAgICAgQVZYL1lNTSBzYXZlIHN0
YXRlIGJ5dGUgb2Zmc2V0ICAgICAgICAgICA9IDB4MDAwMDAyNDAgKDU3NikKICAgICAgc3Vw
cG9ydGVkIGluIElBMzJfWFNTIG9yIFhDUjAgICAgICAgICAgICA9IFhDUjAgKHVzZXIgc3Rh
dGUpCiAgICAgIDY0LWJ5dGUgYWxpZ25tZW50IGluIGNvbXBhY3RlZCBYU0FWRSAgICAgPSBm
YWxzZQogICAgICBYRkQgZmF1bHRpbmcgc3VwcG9ydGVkICAgICAgICAgICAgICAgICAgID0g
ZmFsc2UKICAgTFdQIGZlYXR1cmVzICgweGQvMHgzZSk6CiAgICAgIExXUCBzYXZlIHN0YXRl
IGJ5dGUgc2l6ZSAgICAgICAgICAgICAgICAgPSAweDAwMDAwMDgwICgxMjgpCiAgICAgIExX
UCBzYXZlIHN0YXRlIGJ5dGUgb2Zmc2V0ICAgICAgICAgICAgICAgPSAweDAwMDAwMzQwICg4
MzIpCiAgICAgIHN1cHBvcnRlZCBpbiBJQTMyX1hTUyBvciBYQ1IwICAgICAgICAgICAgPSBY
Q1IwICh1c2VyIHN0YXRlKQogICAgICA2NC1ieXRlIGFsaWdubWVudCBpbiBjb21wYWN0ZWQg
WFNBVkUgICAgID0gZmFsc2UKICAgICAgWEZEIGZhdWx0aW5nIHN1cHBvcnRlZCAgICAgICAg
ICAgICAgICAgICA9IGZhbHNlCiAgIGV4dGVuZGVkIHByb2Nlc3NvciBzaWduYXR1cmUgKDB4
ODAwMDAwMDEvZWF4KToKICAgICAgZmFtaWx5L2dlbmVyYXRpb24gPSAweGYgKDE1KQogICAg
ICBtb2RlbCAgICAgICAgICAgPSAweDMgKDMpCiAgICAgIHN0ZXBwaW5nIGlkICAgICA9IDB4
MSAoMSkKICAgICAgZXh0ZW5kZWQgZmFtaWx5ID0gMHg2ICg2KQogICAgICBleHRlbmRlZCBt
b2RlbCAgPSAweDEgKDEpCiAgICAgIChmYW1pbHkgc3ludGgpICA9IDB4MTUgKDIxKQogICAg
ICAobW9kZWwgc3ludGgpICAgPSAweDEzICgxOSkKICAgICAgKHNpbXBsZSBzeW50aCkgID0g
QU1EICh1bmtub3duIHR5cGUpIChSaWNobGFuZCBSTC1BMSkgW1BpbGVkcml2ZXJdLCAzMm5t
CiAgIGV4dGVuZGVkIGZlYXR1cmUgZmxhZ3MgKDB4ODAwMDAwMDEvZWR4KToKICAgICAgeDg3
IEZQVSBvbiBjaGlwICAgICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgdmlydHVh
bC04MDg2IG1vZGUgZW5oYW5jZW1lbnQgICAgICAgICA9IHRydWUKICAgICAgZGVidWdnaW5n
IGV4dGVuc2lvbnMgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgcGFnZSBzaXplIGV4
dGVuc2lvbnMgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgdGltZSBzdGFtcCBjb3Vu
dGVyICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgUkRNU1IgYW5kIFdSTVNSIHN1
cHBvcnQgICAgICAgICAgICAgICA9IHRydWUKICAgICAgcGh5c2ljYWwgYWRkcmVzcyBleHRl
bnNpb25zICAgICAgICAgICA9IHRydWUKICAgICAgbWFjaGluZSBjaGVjayBleGNlcHRpb24g
ICAgICAgICAgICAgICA9IHRydWUKICAgICAgQ01QWENIRzhCIGluc3QuICAgICAgICAgICAg
ICAgICAgICAgICA9IHRydWUKICAgICAgQVBJQyBvbiBjaGlwICAgICAgICAgICAgICAgICAg
ICAgICAgICA9IHRydWUKICAgICAgU1lTQ0FMTCBhbmQgU1lTUkVUIGluc3RydWN0aW9ucyAg
ICAgICA9IHRydWUKICAgICAgbWVtb3J5IHR5cGUgcmFuZ2UgcmVnaXN0ZXJzICAgICAgICAg
ICA9IHRydWUKICAgICAgZ2xvYmFsIHBhZ2luZyBleHRlbnNpb24gICAgICAgICAgICAgICA9
IHRydWUKICAgICAgbWFjaGluZSBjaGVjayBhcmNoaXRlY3R1cmUgICAgICAgICAgICA9IHRy
dWUKICAgICAgY29uZGl0aW9uYWwgbW92ZS9jb21wYXJlIGluc3RydWN0aW9uICA9IHRydWUK
ICAgICAgcGFnZSBhdHRyaWJ1dGUgdGFibGUgICAgICAgICAgICAgICAgICA9IHRydWUKICAg
ICAgcGFnZSBzaXplIGV4dGVuc2lvbiAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAg
bXVsdGlwcm9jZXNzaW5nIGNhcGFibGUgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIG5v
LWV4ZWN1dGUgcGFnZSBwcm90ZWN0aW9uICAgICAgICAgICAgPSB0cnVlCiAgICAgIEFNRCBt
dWx0aW1lZGlhIGluc3RydWN0aW9uIGV4dGVuc2lvbnMgPSB0cnVlCiAgICAgIE1NWCBUZWNo
bm9sb2d5ICAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIEZYU0FWRS9GWFJT
VE9SICAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIFNTRSBleHRlbnNpb25z
ICAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIDEtR0IgbGFyZ2UgcGFnZSBz
dXBwb3J0ICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIFJEVFNDUCAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIGxvbmcgbW9kZSAoQUEtNjQpICAgICAg
ICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIDNETm93ISBpbnN0cnVjdGlvbiBleHRlbnNp
b25zICAgICAgICAgPSBmYWxzZQogICAgICAzRE5vdyEgaW5zdHJ1Y3Rpb25zICAgICAgICAg
ICAgICAgICAgID0gZmFsc2UKICAgZXh0ZW5kZWQgYnJhbmQgaWQgKDB4ODAwMDAwMDEvZWJ4
KToKICAgICAgcmF3ICAgICA9IDB4MjAwMDAwMDAgKDUzNjg3MDkxMikKICAgICAgQnJhbmRJ
ZCA9IDB4MCAoMCkKICAgICAgUGtnVHlwZSA9IEZNMiAoUEdBKSAoMikKICAgQU1EIGZlYXR1
cmUgZmxhZ3MgKDB4ODAwMDAwMDEvZWN4KToKICAgICAgTEFIRi9TQUhGIHN1cHBvcnRlZCBp
biA2NC1iaXQgbW9kZSAgICAgPSB0cnVlCiAgICAgIENNUCBMZWdhY3kgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBTVk06IHNlY3VyZSB2aXJ0dWFsIG1hY2hp
bmUgICAgICAgICAgICA9IHRydWUKICAgICAgZXh0ZW5kZWQgQVBJQyBzcGFjZSAgICAgICAg
ICAgICAgICAgICAgPSB0cnVlCiAgICAgIEFsdE1vdkNyOCAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgID0gdHJ1ZQogICAgICBMWkNOVCBhZHZhbmNlZCBiaXQgbWFuaXB1bGF0aW9u
ICAgICAgICA9IHRydWUKICAgICAgU1NFNEEgc3VwcG9ydCAgICAgICAgICAgICAgICAgICAg
ICAgICAgPSB0cnVlCiAgICAgIG1pc2FsaWduZWQgU1NFIG1vZGUgICAgICAgICAgICAgICAg
ICAgID0gdHJ1ZQogICAgICAzRE5vdyEgUFJFRkVUQ0gvUFJFRkVUQ0hXIGluc3RydWN0aW9u
cyA9IHRydWUKICAgICAgT1MgdmlzaWJsZSB3b3JrYXJvdW5kICAgICAgICAgICAgICAgICAg
PSB0cnVlCiAgICAgIGluc3RydWN0aW9uIGJhc2VkIHNhbXBsaW5nICAgICAgICAgICAgID0g
dHJ1ZQogICAgICBYT1Agc3VwcG9ydCAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IHRy
dWUKICAgICAgU0tJTklUL1NUR0kgc3VwcG9ydCAgICAgICAgICAgICAgICAgICAgPSB0cnVl
CiAgICAgIHdhdGNoZG9nIHRpbWVyIHN1cHBvcnQgICAgICAgICAgICAgICAgID0gdHJ1ZQog
ICAgICBsaWdodHdlaWdodCBwcm9maWxpbmcgc3VwcG9ydCAgICAgICAgICA9IHRydWUKICAg
ICAgNC1vcGVyYW5kIEZNQSBpbnN0cnVjdGlvbiAgICAgICAgICAgICAgPSB0cnVlCiAgICAg
IFRDRTogdHJhbnNsYXRpb24gY2FjaGUgZXh0ZW5zaW9uICAgICAgID0gdHJ1ZQogICAgICBO
b2RlSWQgTVNSIEMwMDExMDBDICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgVEJN
IHN1cHBvcnQgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIHRvcG9s
b2d5IGV4dGVuc2lvbnMgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBjb3JlIHBl
cmZvcm1hbmNlIGNvdW50ZXIgZXh0ZW5zaW9ucyAgICA9IHRydWUKICAgICAgTkIvREYgcGVy
Zm9ybWFuY2UgY291bnRlciBleHRlbnNpb25zICAgPSB0cnVlCiAgICAgIGRhdGEgYnJlYWtw
b2ludCBleHRlbnNpb24gICAgICAgICAgICAgID0gZmFsc2UKICAgICAgcGVyZm9ybWFuY2Ug
dGltZS1zdGFtcCBjb3VudGVyIHN1cHBvcnQgPSBmYWxzZQogICAgICBMTEMgcGVyZm9ybWFu
Y2UgY291bnRlciBleHRlbnNpb25zICAgICA9IGZhbHNlCiAgICAgIE1XQUlUWC9NT05JVE9S
WCBzdXBwb3J0ZWQgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgQWRkcmVzcyBtYXNrIGV4
dGVuc2lvbiBzdXBwb3J0ICAgICAgICAgPSBmYWxzZQogICBicmFuZCA9ICJBTUQgQTYtNjQw
MEsgQVBVIHdpdGggUmFkZW9uKHRtKSBIRCBHcmFwaGljcyAgICIKICAgTDEgVExCL2NhY2hl
IGluZm9ybWF0aW9uOiAyTS80TSBwYWdlcyAmIEwxIFRMQiAoMHg4MDAwMDAwNS9lYXgpOgog
ICAgICBpbnN0cnVjdGlvbiAjIGVudHJpZXMgICAgID0gMHgxOCAoMjQpCiAgICAgIGluc3Ry
dWN0aW9uIGFzc29jaWF0aXZpdHkgPSAweGZmICgyNTUpCiAgICAgIGRhdGEgIyBlbnRyaWVz
ICAgICAgICAgICAgPSAweDQwICg2NCkKICAgICAgZGF0YSBhc3NvY2lhdGl2aXR5ICAgICAg
ICA9IDB4ZmYgKDI1NSkKICAgTDEgVExCL2NhY2hlIGluZm9ybWF0aW9uOiA0SyBwYWdlcyAm
IEwxIFRMQiAoMHg4MDAwMDAwNS9lYngpOgogICAgICBpbnN0cnVjdGlvbiAjIGVudHJpZXMg
ICAgID0gMHgzMCAoNDgpCiAgICAgIGluc3RydWN0aW9uIGFzc29jaWF0aXZpdHkgPSAweGZm
ICgyNTUpCiAgICAgIGRhdGEgIyBlbnRyaWVzICAgICAgICAgICAgPSAweDQwICg2NCkKICAg
ICAgZGF0YSBhc3NvY2lhdGl2aXR5ICAgICAgICA9IDB4ZmYgKDI1NSkKICAgTDEgZGF0YSBj
YWNoZSBpbmZvcm1hdGlvbiAoMHg4MDAwMDAwNS9lY3gpOgogICAgICBsaW5lIHNpemUgKGJ5
dGVzKSA9IDB4NDAgKDY0KQogICAgICBsaW5lcyBwZXIgdGFnICAgICA9IDB4MSAoMSkKICAg
ICAgYXNzb2NpYXRpdml0eSAgICAgPSAweDQgKDQpCiAgICAgIHNpemUgKEtCKSAgICAgICAg
ID0gMHgxMCAoMTYpCiAgIEwxIGluc3RydWN0aW9uIGNhY2hlIGluZm9ybWF0aW9uICgweDgw
MDAwMDA1L2VkeCk6CiAgICAgIGxpbmUgc2l6ZSAoYnl0ZXMpID0gMHg0MCAoNjQpCiAgICAg
IGxpbmVzIHBlciB0YWcgICAgID0gMHgxICgxKQogICAgICBhc3NvY2lhdGl2aXR5ICAgICA9
IDB4MiAoMikKICAgICAgc2l6ZSAoS0IpICAgICAgICAgPSAweDQwICg2NCkKICAgTDIgVExC
L2NhY2hlIGluZm9ybWF0aW9uOiAyTS80TSBwYWdlcyAmIEwyIFRMQiAoMHg4MDAwMDAwNi9l
YXgpOgogICAgICBpbnN0cnVjdGlvbiAjIGVudHJpZXMgICAgID0gMHg0MDAgKDEwMjQpCiAg
ICAgIGluc3RydWN0aW9uIGFzc29jaWF0aXZpdHkgPSA4IHRvIDE1LXdheSAoNikKICAgICAg
ZGF0YSAjIGVudHJpZXMgICAgICAgICAgICA9IDB4NDAwICgxMDI0KQogICAgICBkYXRhIGFz
c29jaWF0aXZpdHkgICAgICAgID0gOCB0byAxNS13YXkgKDYpCiAgIEwyIFRMQi9jYWNoZSBp
bmZvcm1hdGlvbjogNEsgcGFnZXMgJiBMMiBUTEIgKDB4ODAwMDAwMDYvZWJ4KToKICAgICAg
aW5zdHJ1Y3Rpb24gIyBlbnRyaWVzICAgICA9IDB4MjAwICg1MTIpCiAgICAgIGluc3RydWN0
aW9uIGFzc29jaWF0aXZpdHkgPSA0IHRvIDUtd2F5ICg0KQogICAgICBkYXRhICMgZW50cmll
cyAgICAgICAgICAgID0gMHg0MDAgKDEwMjQpCiAgICAgIGRhdGEgYXNzb2NpYXRpdml0eSAg
ICAgICAgPSA4IHRvIDE1LXdheSAoNikKICAgTDIgdW5pZmllZCBjYWNoZSBpbmZvcm1hdGlv
biAoMHg4MDAwMDAwNi9lY3gpOgogICAgICBsaW5lIHNpemUgKGJ5dGVzKSA9IDB4NDAgKDY0
KQogICAgICBsaW5lcyBwZXIgdGFnICAgICA9IDB4MSAoMSkKICAgICAgYXNzb2NpYXRpdml0
eSAgICAgPSAxNiB0byAzMS13YXkgKDgpCiAgICAgIHNpemUgKEtCKSAgICAgICAgID0gMHg0
MDAgKDEwMjQpCiAgIEwzIGNhY2hlIGluZm9ybWF0aW9uICgweDgwMDAwMDA2L2VkeCk6CiAg
ICAgIGxpbmUgc2l6ZSAoYnl0ZXMpICAgICA9IDB4MCAoMCkKICAgICAgbGluZXMgcGVyIHRh
ZyAgICAgICAgID0gMHgwICgwKQogICAgICBhc3NvY2lhdGl2aXR5ICAgICAgICAgPSBMMiBv
ZmYgKDApCiAgICAgIHNpemUgKGluIDUxMktCIHVuaXRzKSA9IDB4MCAoMCkKICAgUkFTIENh
cGFiaWxpdHkgKDB4ODAwMDAwMDcvZWJ4KToKICAgICAgTUNBIG92ZXJmbG93IHJlY292ZXJ5
IHN1cHBvcnQgPSBmYWxzZQogICAgICBTVUNDT1Igc3VwcG9ydCAgICAgICAgICAgICAgICA9
IGZhbHNlCiAgICAgIEhXQTogaGFyZHdhcmUgYXNzZXJ0IHN1cHBvcnQgID0gZmFsc2UKICAg
ICAgc2NhbGFibGUgTUNBIHN1cHBvcnQgICAgICAgICAgPSBmYWxzZQogICBBZHZhbmNlZCBQ
b3dlciBNYW5hZ2VtZW50IEZlYXR1cmVzICgweDgwMDAwMDA3L2VjeCk6CiAgICAgIENtcFVu
aXRQd3JTYW1wbGVUaW1lUmF0aW8gPSAweDAgKDApCiAgIEFkdmFuY2VkIFBvd2VyIE1hbmFn
ZW1lbnQgRmVhdHVyZXMgKDB4ODAwMDAwMDcvZWR4KToKICAgICAgVFM6IHRlbXBlcmF0dXJl
IHNlbnNpbmcgZGlvZGUgICAgICAgICAgID0gdHJ1ZQogICAgICBGSUQ6IGZyZXF1ZW5jeSBJ
RCBjb250cm9sICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBWSUQ6IHZvbHRhZ2UgSUQg
Y29udHJvbCAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBUVFA6IHRoZXJtYWwgdHJp
cCAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIFRNOiB0aGVybWFsIG1vbml0
b3IgICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgU1RDOiBzb2Z0d2FyZSB0aGVy
bWFsIGNvbnRyb2wgICAgICAgICAgID0gZmFsc2UKICAgICAgMTAwIE1IeiBtdWx0aXBsaWVy
IGNvbnRyb2wgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBoYXJkd2FyZSBQLVN0YXRlIGNv
bnRyb2wgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIFRzY0ludmFyaWFudCAgICAgICAg
ICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgQ1BCOiBjb3JlIHBlcmZvcm1hbmNl
IGJvb3N0ICAgICAgICAgICAgID0gdHJ1ZQogICAgICByZWFkLW9ubHkgZWZmZWN0aXZlIGZy
ZXF1ZW5jeSBpbnRlcmZhY2UgPSB0cnVlCiAgICAgIHByb2Nlc3NvciBmZWVkYmFjayBpbnRl
cmZhY2UgICAgICAgICAgICA9IGZhbHNlCiAgICAgIEFQTSBwb3dlciByZXBvcnRpbmcgICAg
ICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGNvbm5lY3RlZCBzdGFuZGJ5ICAgICAg
ICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIFJBUEw6IHJ1bm5pbmcgYXZlcmFnZSBw
b3dlciBsaW1pdCAgICAgICA9IGZhbHNlCiAgIFBoeXNpY2FsIEFkZHJlc3MgYW5kIExpbmVh
ciBBZGRyZXNzIFNpemUgKDB4ODAwMDAwMDgvZWF4KToKICAgICAgbWF4aW11bSBwaHlzaWNh
bCBhZGRyZXNzIGJpdHMgICAgICAgICA9IDB4MzAgKDQ4KQogICAgICBtYXhpbXVtIGxpbmVh
ciAodmlydHVhbCkgYWRkcmVzcyBiaXRzID0gMHgzMCAoNDgpCiAgICAgIG1heGltdW0gZ3Vl
c3QgcGh5c2ljYWwgYWRkcmVzcyBiaXRzICAgPSAweDAgKDApCiAgIEV4dGVuZGVkIEZlYXR1
cmUgRXh0ZW5zaW9ucyBJRCAoMHg4MDAwMDAwOC9lYngpOgogICAgICBDTFpFUk8gaW5zdHJ1
Y3Rpb24gICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgaW5zdHJ1Y3Rpb25z
IHJldGlyZWQgY291bnQgc3VwcG9ydCAgICAgICA9IGZhbHNlCiAgICAgIGFsd2F5cyBzYXZl
L3Jlc3RvcmUgZXJyb3IgcG9pbnRlcnMgICAgICAgPSBmYWxzZQogICAgICBJTlZMUEdCIGlu
c3RydWN0aW9uICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgUkRQUlUgaW5z
dHJ1Y3Rpb24gICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIG1lbW9yeSBi
YW5kd2lkdGggZW5mb3JjZW1lbnQgICAgICAgICAgICAgPSBmYWxzZQogICAgICBNQ09NTUlU
IGluc3RydWN0aW9uICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgV0JOT0lO
VkQgaW5zdHJ1Y3Rpb24gICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIElCUEI6
IGluZGlyZWN0IGJyYW5jaCBwcmVkaWN0aW9uIGJhcnJpZXIgPSB0cnVlCiAgICAgIGludGVy
cnVwdGlibGUgV0JJTlZELCBXQk5PSU5WRCAgICAgICAgICAgPSBmYWxzZQogICAgICBJQlJT
OiBpbmRpcmVjdCBicmFuY2ggcmVzdHIgc3BlY3VsYXRpb24gID0gZmFsc2UKICAgICAgU1RJ
QlA6IDEgdGhyIGluZGlyZWN0IGJyYW5jaCBwcmVkaWN0b3IgICA9IGZhbHNlCiAgICAgIENQ
VSBwcmVmZXJzOiBJQlJTIGFsd2F5cyBvbiAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBD
UFUgcHJlZmVyczogU1RJQlAgYWx3YXlzIG9uICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
SUJSUyBwcmVmZXJyZWQgb3ZlciBzb2Z0d2FyZSBzb2x1dGlvbiAgICA9IGZhbHNlCiAgICAg
IElCUlMgcHJvdmlkZXMgc2FtZSBtb2RlIHByb3RlY3Rpb24gICAgICAgPSBmYWxzZQogICAg
ICBFRkVSW0xNU0xFXSBub3Qgc3VwcG9ydGVkICAgICAgICAgICAgICAgID0gZmFsc2UKICAg
ICAgSU5WTFBHQiBzdXBwb3J0cyBUTEIgZmx1c2ggZ3Vlc3QgbmVzdGVkICA9IGZhbHNlCiAg
ICAgIHBwaW4gcHJvY2Vzc29yIGlkIG51bWJlciBzdXBwb3J0ZWQgICAgICAgPSBmYWxzZQog
ICAgICBTU0JEOiBzcGVjdWxhdGl2ZSBzdG9yZSBieXBhc3MgZGlzYWJsZSAgID0gZmFsc2UK
ICAgICAgdmlydHVhbGl6ZWQgU1NCRCAgICAgICAgICAgICAgICAgICAgICAgICA9IGZhbHNl
CiAgICAgIFNTQkQgZml4ZWQgaW4gaGFyZHdhcmUgICAgICAgICAgICAgICAgICAgPSBmYWxz
ZQogICAgICBDUFBDOiBjb2xsYWJvcmF0aXZlIHByb2Nlc3NvciBwZXJmIGN0cmwgID0gZmFs
c2UKICAgICAgUFNGRDogcHJlZGljdGl2ZSBzdG9yZSBmb3J3YXJkIGRpc2FibGUgICA9IGZh
bHNlCiAgICAgIG5vdCB2dWxuZXJhYmxlIHRvIGJyYW5jaCB0eXBlIGNvbmZ1c2lvbiAgPSBm
YWxzZQogICAgICBicmFuY2ggc2FtcGxpbmcgZmVhdHVyZSBzdXBwb3J0ICAgICAgICAgID0g
ZmFsc2UKICAgICAgKHZ1bG4gdG8gYnJhbmNoIHR5cGUgY29uZnVzaW9uIHN5bnRoKSAgICA9
IHRydWUKICAgU2l6ZSBJZGVudGlmaWVycyAoMHg4MDAwMDAwOC9lY3gpOgogICAgICBudW1i
ZXIgb2YgQ1BVIGNvcmVzICAgICAgICAgICAgICAgICA9IDB4MiAoMikKICAgICAgQXBpY0lk
Q29yZUlkU2l6ZSAgICAgICAgICAgICAgICAgICAgPSAweDQgKDQpCiAgICAgIHBlcmZvcm1h
bmNlIHRpbWUtc3RhbXAgY291bnRlciBzaXplID0gNDAgYml0cyAoMCkKICAgRmVhdHVyZSBF
eHRlbmRlZCBTaXplICgweDgwMDAwMDA4L2VkeCk6CiAgICAgIG1heCBwYWdlIGNvdW50IGZv
ciBJTlZMUEdCIGluc3RydWN0aW9uID0gMHgwICgwKQogICAgICBSRFBSVSBpbnN0cnVjdGlv
biBtYXggaW5wdXQgc3VwcG9ydCAgICA9IDB4MCAoMCkKICAgU1ZNIFNlY3VyZSBWaXJ0dWFs
IE1hY2hpbmUgKDB4ODAwMDAwMGEvZWF4KToKICAgICAgU3ZtUmV2OiBTVk0gcmV2aXNpb24g
PSAweDEgKDEpCiAgIFNWTSBTZWN1cmUgVmlydHVhbCBNYWNoaW5lICgweDgwMDAwMDBhL2Vk
eCk6CiAgICAgIG5lc3RlZCBwYWdpbmcgICAgICAgICAgICAgICAgICAgICAgICAgICA9IHRy
dWUKICAgICAgTEJSIHZpcnR1YWxpemF0aW9uICAgICAgICAgICAgICAgICAgICAgID0gdHJ1
ZQogICAgICBTVk0gbG9jayAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSB0cnVl
CiAgICAgIE5SSVAgc2F2ZSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IHRydWUK
ICAgICAgTVNSIGJhc2VkIFRTQyByYXRlIGNvbnRyb2wgICAgICAgICAgICAgID0gdHJ1ZQog
ICAgICBWTUNCIGNsZWFuIGJpdHMgc3VwcG9ydCAgICAgICAgICAgICAgICAgPSB0cnVlCiAg
ICAgIGZsdXNoIGJ5IEFTSUQgICAgICAgICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAg
ICAgZGVjb2RlIGFzc2lzdHMgICAgICAgICAgICAgICAgICAgICAgICAgID0gdHJ1ZQogICAg
ICBTU1NFMy9TU0U1IG9wY29kZSBzZXQgZGlzYWJsZSAgICAgICAgICAgPSBmYWxzZQogICAg
ICBwYXVzZSBpbnRlcmNlcHQgZmlsdGVyICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAg
IHBhdXNlIGZpbHRlciB0aHJlc2hvbGQgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAg
QVZJQzogQU1EIHZpcnR1YWwgaW50ZXJydXB0IGNvbnRyb2xsZXIgID0gZmFsc2UKICAgICAg
dmlydHVhbGl6ZWQgVk1MT0FEL1ZNU0FWRSAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
dmlydHVhbGl6ZWQgZ2xvYmFsIGludGVycnVwdCBmbGFnIChHSUYpID0gZmFsc2UKICAgICAg
R01FVDogZ3Vlc3QgbW9kZSBleGVjdXRlIHRyYXAgICAgICAgICAgID0gZmFsc2UKICAgICAg
WDJBVklDOiB2aXJ0dWFsaXplZCBYMkFQSUMgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
c3VwZXJ2aXNvciBzaGFkb3cgc3RhY2sgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
Z3Vlc3QgU3BlY19jdGwgc3VwcG9ydCAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
Uk9HUFQ6IHJlYWQtb25seSBndWVzdCBwYWdlIHRhYmxlICAgICAgID0gZmFsc2UKICAgICAg
aG9zdCBNQ0Ugb3ZlcnJpZGUgICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
SU5WTFBHQi9UTEJTWU5DIGh5cGVydiBpbnRlcmMgZW5hYmxlICAgID0gZmFsc2UKICAgICAg
Vk5NSTogTk1JIHZpcnR1YWxpemF0aW9uICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
SUJTIHZpcnR1YWxpemF0aW9uICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
Z3Vlc3QgU1ZNRSBhZGRyIGNoZWNrICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgTkFT
SUQ6IG51bWJlciBvZiBhZGRyZXNzIHNwYWNlIGlkZW50aWZpZXJzID0gMHgxMDAwMCAoNjU1
MzYpOgogICBMMSBUTEIgaW5mb3JtYXRpb246IDFHIHBhZ2VzICgweDgwMDAwMDE5L2VheCk6
CiAgICAgIGluc3RydWN0aW9uICMgZW50cmllcyAgICAgPSAweDE4ICgyNCkKICAgICAgaW5z
dHJ1Y3Rpb24gYXNzb2NpYXRpdml0eSA9IGZ1bGwgKDE1KQogICAgICBkYXRhICMgZW50cmll
cyAgICAgICAgICAgID0gMHg0MCAoNjQpCiAgICAgIGRhdGEgYXNzb2NpYXRpdml0eSAgICAg
ICAgPSBmdWxsICgxNSkKICAgTDIgVExCIGluZm9ybWF0aW9uOiAxRyBwYWdlcyAoMHg4MDAw
MDAxOS9lYngpOgogICAgICBpbnN0cnVjdGlvbiAjIGVudHJpZXMgICAgID0gMHg0MDAgKDEw
MjQpCiAgICAgIGluc3RydWN0aW9uIGFzc29jaWF0aXZpdHkgPSA4IHRvIDE1LXdheSAoNikK
ICAgICAgZGF0YSAjIGVudHJpZXMgICAgICAgICAgICA9IDB4NDAwICgxMDI0KQogICAgICBk
YXRhIGFzc29jaWF0aXZpdHkgICAgICAgID0gOCB0byAxNS13YXkgKDYpCiAgIFBlcmZvcm1h
bmNlIE9wdGltaXphdGlvbiBJZGVudGlmaWVycyAoMHg4MDAwMDAxYS9lYXgpOgogICAgICAx
MjgtYml0IFNTRSBleGVjdXRlZCBmdWxsLXdpZHRoID0gdHJ1ZQogICAgICBNT1ZVKiBiZXR0
ZXIgdGhhbiBNT1ZMKi9NT1ZIKiAgID0gdHJ1ZQogICAgICAyNTYtYml0IFNTRSBleGVjdXRl
ZCBmdWxsLXdpZHRoID0gZmFsc2UKICAgSW5zdHJ1Y3Rpb24gQmFzZWQgU2FtcGxpbmcgSWRl
bnRpZmllcnMgKDB4ODAwMDAwMWIvZWF4KToKICAgICAgSUJTIGZlYXR1cmUgZmxhZ3MgdmFs
aWQgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgSUJTIGZldGNoIHNhbXBsaW5nICAg
ICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgSUJTIGV4ZWN1dGlvbiBzYW1wbGlu
ZyAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgcmVhZCB3cml0ZSBvZiBvcCBjb3Vu
dGVyICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgb3AgY291bnRpbmcgbW9kZSAgICAg
ICAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgYnJhbmNoIHRhcmdldCBhZGRyZXNz
IHJlcG9ydGluZyAgICAgICAgICA9IHRydWUKICAgICAgSWJzT3BDdXJDbnQgYW5kIElic09w
TWF4Q250IGV4dGVuZCA3ICAgICA9IHRydWUKICAgICAgaW52YWxpZCBSSVAgaW5kaWNhdGlv
biBzdXBwb3J0ICAgICAgICAgICA9IHRydWUKICAgICAgZnVzZWQgYnJhbmNoIG1pY3JvLW9w
IGluZGljYXRpb24gc3VwcG9ydCA9IGZhbHNlCiAgICAgIElCUyBmZXRjaCBjb250cm9sIGV4
dGVuZGVkIE1TUiBzdXBwb3J0ICAgPSBmYWxzZQogICAgICBJQlMgb3AgZGF0YSA0IE1TUiBz
dXBwb3J0ICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgSUJTIEwzIG1pc3MgZmlsdGVy
aW5nIHN1cHBvcnQgICAgICAgICAgICA9IGZhbHNlCiAgIExpZ2h0d2VpZ2h0IFByb2ZpbGlu
ZyBDYXBhYmlsaXRpZXM6IEF2YWlsYWJpbGl0eSAoMHg4MDAwMDAxYy9lYXgpOgogICAgICBs
aWdodHdlaWdodCBwcm9maWxpbmcgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIExX
UFZBTCBpbnN0cnVjdGlvbiAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgaW5z
dHJ1Y3Rpb24gcmV0aXJlZCBldmVudCAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBicmFu
Y2ggcmV0aXJlZCBldmVudCAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIERDIG1p
c3MgZXZlbnQgICAgICAgICAgICAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgY29yZSBj
bG9ja3Mgbm90IGhhbHRlZCBldmVudCAgICAgICAgICAgPSBmYWxzZQogICAgICBjb3JlIHJl
ZmVyZW5jZSBjbG9ja3Mgbm90IGhhbHRlZCBldmVudCA9IGZhbHNlCiAgICAgIGNvbnRpbnVv
dXMgbW9kZSBzYW1wbGluZyAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgdHNjIGluIGV2
ZW50IHJlY29yZCAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBpbnRlcnJ1cHQg
b24gdGhyZXNob2xkIG92ZXJmbG93ICAgICAgICA9IGZhbHNlCiAgIExpZ2h0d2VpZ2h0IFBy
b2ZpbGluZyBDYXBhYmlsaXRpZXM6IFN1cHBvcnRlZCAoMHg4MDAwMDAxYy9lZHgpOgogICAg
ICBsaWdodHdlaWdodCBwcm9maWxpbmcgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAg
TFdQVkFMIGluc3RydWN0aW9uICAgICAgICAgICAgICAgICAgICAgPSB0cnVlCiAgICAgIGlu
c3RydWN0aW9uIHJldGlyZWQgZXZlbnQgICAgICAgICAgICAgID0gdHJ1ZQogICAgICBicmFu
Y2ggcmV0aXJlZCBldmVudCAgICAgICAgICAgICAgICAgICA9IHRydWUKICAgICAgREMgbWlz
cyBldmVudCAgICAgICAgICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBjb3JlIGNs
b2NrcyBub3QgaGFsdGVkIGV2ZW50ICAgICAgICAgICA9IGZhbHNlCiAgICAgIGNvcmUgcmVm
ZXJlbmNlIGNsb2NrcyBub3QgaGFsdGVkIGV2ZW50ID0gZmFsc2UKICAgICAgY29udGludW91
cyBtb2RlIHNhbXBsaW5nICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICB0c2MgaW4gZXZl
bnQgcmVjb3JkICAgICAgICAgICAgICAgICAgICA9IGZhbHNlCiAgICAgIGludGVycnVwdCBv
biB0aHJlc2hvbGQgb3ZlcmZsb3cgICAgICAgID0gdHJ1ZQogICBMaWdodHdlaWdodCBQcm9m
aWxpbmcgQ2FwYWJpbGl0aWVzICgweDgwMDAwMDFjL2VieCk6CiAgICAgIExXUENCIGJ5dGUg
c2l6ZSAgICAgICAgICAgICA9IDB4MTMgKDE5KQogICAgICBldmVudCByZWNvcmQgYnl0ZSBz
aXplICAgICAgPSAweDIwICgzMikKICAgICAgbWF4aW11bSBFdmVudElkICAgICAgICAgICAg
ID0gMHgzICgzKQogICAgICBFdmVudEludGVydmFsMSBmaWVsZCBvZmZzZXQgPSAweDgwICgx
MjgpCiAgIExpZ2h0d2VpZ2h0IFByb2ZpbGluZyBDYXBhYmlsaXRpZXMgKDB4ODAwMDAwMWMv
ZWN4KToKICAgICAgbGF0ZW5jeSBjb3VudGVyIGJpdCBzaXplICAgICAgICAgID0gMHgwICgw
KQogICAgICBkYXRhIGNhY2hlIG1pc3MgYWRkcmVzcyB2YWxpZCAgICAgPSBmYWxzZQogICAg
ICBhbW91bnQgY2FjaGUgbGF0ZW5jeSBpcyByb3VuZGVkICAgPSAweDAgKDApCiAgICAgIExX
UCBpbXBsZW1lbnRhdGlvbiB2ZXJzaW9uICAgICAgICA9IDB4MSAoMSkKICAgICAgZXZlbnQg
cmluZyBidWZmZXIgc2l6ZSBpbiByZWNvcmRzID0gMHgxICgxKQogICAgICBicmFuY2ggcHJl
ZGljdGlvbiBmaWx0ZXJpbmcgICAgICAgPSBmYWxzZQogICAgICBJUCBmaWx0ZXJpbmcgICAg
ICAgICAgICAgICAgICAgICAgPSBmYWxzZQogICAgICBjYWNoZSBsZXZlbCBmaWx0ZXJpbmcg
ICAgICAgICAgICAgPSBmYWxzZQogICAgICBjYWNoZSBsYXRlbmN5IGZpbHRlaW5nICAgICAg
ICAgICAgPSBmYWxzZQogICBDYWNoZSBQcm9wZXJ0aWVzICgweDgwMDAwMDFkKToKICAgICAg
LS0tIGNhY2hlIDAgLS0tCiAgICAgIHR5cGUgICAgICAgICAgICAgICAgICAgICAgICAgICAg
PSBkYXRhICgxKQogICAgICBsZXZlbCAgICAgICAgICAgICAgICAgICAgICAgICAgID0gMHgx
ICgxKQogICAgICBzZWxmLWluaXRpYWxpemluZyAgICAgICAgICAgICAgID0gdHJ1ZQogICAg
ICBmdWxseSBhc3NvY2lhdGl2ZSAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAgZXh0cmEg
Y29yZXMgc2hhcmluZyB0aGlzIGNhY2hlICA9IDB4MCAoMCkKICAgICAgbGluZSBzaXplIGlu
IGJ5dGVzICAgICAgICAgICAgICA9IDB4NDAgKDY0KQogICAgICBwaHlzaWNhbCBsaW5lIHBh
cnRpdGlvbnMgICAgICAgID0gMHgxICgxKQogICAgICBudW1iZXIgb2Ygd2F5cyAgICAgICAg
ICAgICAgICAgID0gMHg0ICg0KQogICAgICBudW1iZXIgb2Ygc2V0cyAgICAgICAgICAgICAg
ICAgID0gNjQKICAgICAgd3JpdGUtYmFjayBpbnZhbGlkYXRlICAgICAgICAgICA9IGZhbHNl
CiAgICAgIGNhY2hlIGluY2x1c2l2ZSBvZiBsb3dlciBsZXZlbHMgPSBmYWxzZQogICAgICAo
c3ludGggc2l6ZSkgICAgICAgICAgICAgICAgICAgID0gMTYzODQgKDE2IEtCKQogICAgICAt
LS0gY2FjaGUgMSAtLS0KICAgICAgdHlwZSAgICAgICAgICAgICAgICAgICAgICAgICAgICA9
IGluc3RydWN0aW9uICgyKQogICAgICBsZXZlbCAgICAgICAgICAgICAgICAgICAgICAgICAg
ID0gMHgxICgxKQogICAgICBzZWxmLWluaXRpYWxpemluZyAgICAgICAgICAgICAgID0gdHJ1
ZQogICAgICBmdWxseSBhc3NvY2lhdGl2ZSAgICAgICAgICAgICAgID0gZmFsc2UKICAgICAg
ZXh0cmEgY29yZXMgc2hhcmluZyB0aGlzIGNhY2hlICA9IDB4MSAoMSkKICAgICAgbGluZSBz
aXplIGluIGJ5dGVzICAgICAgICAgICAgICA9IDB4NDAgKDY0KQogICAgICBwaHlzaWNhbCBs
aW5lIHBhcnRpdGlvbnMgICAgICAgID0gMHgxICgxKQogICAgICBudW1iZXIgb2Ygd2F5cyAg
ICAgICAgICAgICAgICAgID0gMHgyICgyKQogICAgICBudW1iZXIgb2Ygc2V0cyAgICAgICAg
ICAgICAgICAgID0gNTEyCiAgICAgIHdyaXRlLWJhY2sgaW52YWxpZGF0ZSAgICAgICAgICAg
PSBmYWxzZQogICAgICBjYWNoZSBpbmNsdXNpdmUgb2YgbG93ZXIgbGV2ZWxzID0gZmFsc2UK
ICAgICAgKHN5bnRoIHNpemUpICAgICAgICAgICAgICAgICAgICA9IDY1NTM2ICg2NCBLQikK
ICAgICAgLS0tIGNhY2hlIDIgLS0tCiAgICAgIHR5cGUgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgPSB1bmlmaWVkICgzKQogICAgICBsZXZlbCAgICAgICAgICAgICAgICAgICAgICAg
ICAgID0gMHgyICgyKQogICAgICBzZWxmLWluaXRpYWxpemluZyAgICAgICAgICAgICAgID0g
dHJ1ZQogICAgICBmdWxseSBhc3NvY2lhdGl2ZSAgICAgICAgICAgICAgID0gZmFsc2UKICAg
ICAgZXh0cmEgY29yZXMgc2hhcmluZyB0aGlzIGNhY2hlICA9IDB4MSAoMSkKICAgICAgbGlu
ZSBzaXplIGluIGJ5dGVzICAgICAgICAgICAgICA9IDB4NDAgKDY0KQogICAgICBwaHlzaWNh
bCBsaW5lIHBhcnRpdGlvbnMgICAgICAgID0gMHgxICgxKQogICAgICBudW1iZXIgb2Ygd2F5
cyAgICAgICAgICAgICAgICAgID0gMHgxMCAoMTYpCiAgICAgIG51bWJlciBvZiBzZXRzICAg
ICAgICAgICAgICAgICAgPSAxMDI0CiAgICAgIHdyaXRlLWJhY2sgaW52YWxpZGF0ZSAgICAg
ICAgICAgPSB0cnVlCiAgICAgIGNhY2hlIGluY2x1c2l2ZSBvZiBsb3dlciBsZXZlbHMgPSBm
YWxzZQogICAgICAoc3ludGggc2l6ZSkgICAgICAgICAgICAgICAgICAgID0gMTA0ODU3NiAo
MTAyNCBLQikKICAgZXh0ZW5kZWQgQVBJQyBJRCA9IDE3CiAgIENvbXB1dGUgVW5pdCBJZGVu
dGlmaWVycyAoMHg4MDAwMDAxZS9lYngpOgogICAgICBjb21wdXRlIHVuaXQgSUQgICAgICAg
ID0gMHgwICgwKQogICAgICBjb3JlcyBwZXIgY29tcHV0ZSB1bml0ID0gMHgyICgyKQogICBO
b2RlIElkZW50aWZpZXJzICgweDgwMDAwMDFlL2VjeCk6CiAgICAgIG5vZGUgSUQgICAgICAg
ICAgICAgPSAweDAgKDApCiAgICAgIG5vZGVzIHBlciBwcm9jZXNzb3IgPSAweDEgKDEpCiAg
IChpbnN0cnVjdGlvbiBzdXBwb3J0ZWQgc3ludGgpOgogICAgICBDTVBYQ0hHOEIgICAgICAg
ICAgICAgICAgPSB0cnVlCiAgICAgIGNvbmRpdGlvbmFsIG1vdmUvY29tcGFyZSA9IHRydWUK
ICAgICAgUFJFRkVUQ0gvUFJFRkVUQ0hXICAgICAgID0gdHJ1ZQogICAobXVsdGktcHJvY2Vz
c2luZyBzeW50aCkgPSBtdWx0aS1jb3JlIChjPTIpCiAgIChtdWx0aS1wcm9jZXNzaW5nIG1l
dGhvZCkgPSBBTUQKICAgKEFQSUMgd2lkdGhzIHN5bnRoKTogQ09SRV93aWR0aD0xIFNNVF93
aWR0aD0wCiAgIChBUElDIHN5bnRoKTogUEtHX0lEPTAgQ09SRV9JRD0xIFNNVF9JRD0wCiAg
ICh1YXJjaCBzeW50aCkgPSBBTUQgUGlsZWRyaXZlciwgMzJubQogICAoc3ludGgpID0gQU1E
IEEtU2VyaWVzIChSaWNobGFuZCBSTC1BMSkgW1BpbGVkcml2ZXJdLCAzMm5tCg==

--------------0DImfNerhQq1M72qua4azj6z--


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 21:33:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 21:33:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523074.812796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1posx1-00065O-GL; Tue, 18 Apr 2023 21:33:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523074.812796; Tue, 18 Apr 2023 21:33:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1posx1-00065H-Cl; Tue, 18 Apr 2023 21:33:23 +0000
Received: by outflank-mailman (input) for mailman id 523074;
 Tue, 18 Apr 2023 21:33:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6sTN=AJ=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1poswz-00065B-Hq
 for xen-devel@lists.xenproject.org; Tue, 18 Apr 2023 21:33:21 +0000
Received: from mail-yw1-x1136.google.com (mail-yw1-x1136.google.com
 [2607:f8b0:4864:20::1136])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a29bc7f8-de30-11ed-8611-37d641c3527e;
 Tue, 18 Apr 2023 23:33:19 +0200 (CEST)
Received: by mail-yw1-x1136.google.com with SMTP id
 00721157ae682-555bc7f6746so31469377b3.6
 for <xen-devel@lists.xenproject.org>; Tue, 18 Apr 2023 14:33:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a29bc7f8-de30-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681853598; x=1684445598;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=fZ+faik75720SnLDzvntxl1cmAaRHsLQ7oOmPoVPf3o=;
        b=fBPSS93aalbav0LUzUejRt2NjbHpBJmNsHVPRRbsF+jG5Nu8WjfWT+weSiXFGrkEbN
         Bl+EcMNC7lpcawm/MuwqggHfJmNk7zAK9haLpnJV08P367c5YohyB4lPm9ubYCuINJZG
         pmhBTFu6UKlSFmyU0QWGBEDpdg/7xojdmyIrq4CWcHgeoPLavovDZAMaCmQm5Tmi+PdF
         awAr9pJ9qRfL4qAoU/GIaaVF0Yb/A9oc1EXdyleFSucNiyAMBSWiFjcubi/Cl7MqSDQT
         LqfYl5LxNuy3TqKDTGZky+Cx6fyOI9+XqrTm2mW88NG+xvACaI/puCphU95FqO3MwFDd
         on2A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681853598; x=1684445598;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=fZ+faik75720SnLDzvntxl1cmAaRHsLQ7oOmPoVPf3o=;
        b=JhK2FEGp7dgdrFMBu+UxVmA/19pO+O1rfu2g4BD1mF0laTtIIN6peIOqQbVzEBUSKn
         JGMGjH2DrM9yRubabscvBBDRrFnoiktpkA8kC84LI7LCU1EsTArDfHScj7pNEMeczEKv
         anxcAk0rGujXQU1IM6SMObziCq7nyoiuLTtFp3Vi4NDCSOB5u3vd0W3KVt2qMpAFheHg
         mHdZmDoETjkUe75OaJMPWW03P0r9JWIGT6fD7htd0iS8jHDEgim2zcBP4INpxLf0h7nb
         bdyQjxMeHWBdwZrRALfggCQkuV5fcQ5LX8abBIaTFPUC0qKwo4LtGZKa6eEXFBvf6heM
         HB5A==
X-Gm-Message-State: AAQBX9dSbdQTb6dOTp60oh1YsXp2rCeQwHbwB+dHXXPPQHmcxrfg36aL
	f+bTgtX2KqjAZmpnUC3ZlHESIE+TJp3+i0ARkEY=
X-Google-Smtp-Source: AKy350Zg9/w1pIq1h9Mfl98gkIvdjlXYVLE/yxlCrmuaudFEJKPjKGaCNnj1KVCOnrcLYZjlnD0Q2KC4cnvL5HhzBhY=
X-Received: by 2002:a81:8443:0:b0:54f:8af3:6488 with SMTP id
 u64-20020a818443000000b0054f8af36488mr1215032ywf.23.1681853598073; Tue, 18
 Apr 2023 14:33:18 -0700 (PDT)
MIME-Version: 1.0
References: <20230417205048.15870-1-vishal.moola@gmail.com>
 <20230417205048.15870-2-vishal.moola@gmail.com> <da600570-51c7-8088-b46b-7524c9e66e5d@redhat.com>
In-Reply-To: <da600570-51c7-8088-b46b-7524c9e66e5d@redhat.com>
From: Vishal Moola <vishal.moola@gmail.com>
Date: Tue, 18 Apr 2023 14:33:06 -0700
Message-ID: <CAOzc2pwpRhNoFbdzdzuvrqbZdf2OsrTvBGs40QCZJjA5fS_q1A@mail.gmail.com>
Subject: Re: [PATCH 01/33] s390: Use _pt_s390_gaddr for gmap address tracking
To: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org, 
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, 
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, 
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Apr 18, 2023 at 8:45 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 17.04.23 22:50, Vishal Moola (Oracle) wrote:
> > s390 uses page->index to keep track of page tables for the guest address
> > space. In an attempt to consolidate the usage of page fields in s390,
> > replace _pt_pad_2 with _pt_s390_gaddr to replace page->index in gmap.
> >
> > This will help with the splitting of struct ptdesc from struct page, as
> > well as allow s390 to use _pt_frag_refcount for fragmented page table
> > tracking.
> >
> > Since page->_pt_s390_gaddr aliases with mapping, ensure it is set to NULL
> > before freeing the pages as well.
> >
> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> > ---
>
> [...]
>
> > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > index 3fc9e680f174..2616d64c0e8c 100644
> > --- a/include/linux/mm_types.h
> > +++ b/include/linux/mm_types.h
> > @@ -144,7 +144,7 @@ struct page {
> >               struct {        /* Page table pages */
> >                       unsigned long _pt_pad_1;        /* compound_head */
> >                       pgtable_t pmd_huge_pte; /* protected by page->ptl */
> > -                     unsigned long _pt_pad_2;        /* mapping */
> > +                     unsigned long _pt_s390_gaddr;   /* mapping */
> >                       union {
> >                               struct mm_struct *pt_mm; /* x86 pgds only */
> >                               atomic_t pt_frag_refcount; /* powerpc */
>
> The confusing part is, that these gmap page tables are not ordinary
> process page tables that we would ordinarily place into this section
> here. That's why they are also not allocated/freed using the typical
> page table constructor/destructor ...

I initially thought the same, so I was quite confused when I saw
__gmap_segment_gaddr was using pmd_pgtable_page().

Although they are not ordinary process page tables, since we
eventually want to move them out of struct page, I think shifting them
into ptdescs, the memory descriptor for page tables, makes the most
sense.

Another option is to leave pmd_pgtable_page() as is just for this case.
Or we can revert commit 7e25de77bc5ea, which uses the function here,
and figure out where these gmap page table pages will go later.


From xen-devel-bounces@lists.xenproject.org Tue Apr 18 23:34:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Apr 2023 23:34:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523082.812806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pouq1-00015P-5J; Tue, 18 Apr 2023 23:34:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523082.812806; Tue, 18 Apr 2023 23:34:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pouq1-00015I-2Q; Tue, 18 Apr 2023 23:34:17 +0000
Received: by outflank-mailman (input) for mailman id 523082;
 Tue, 18 Apr 2023 23:34:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poupz-000158-1m; Tue, 18 Apr 2023 23:34:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poupy-0002zg-LI; Tue, 18 Apr 2023 23:34:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1poupy-0007qM-5h; Tue, 18 Apr 2023 23:34:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1poupy-00029O-4p; Tue, 18 Apr 2023 23:34:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b7n78PsOx6FGmQo7NjyciJ/Ka0FxwDAwMqPcYJBxYWg=; b=U4bbFLAZCg4x5H4FeWeSslWpCG
	pIPfz3d9s8KBUVmdhqMZjC+VTwusyP+MaPZN6JXDh2KxuDi5oG3BuazODaMh+G5X21UMt6s4BNwWg
	9KULqOCSCFJl2jnqKyrH8xzPbxV+UaX4HKcqu0N//u18e1JianWaE98VPZni0Cp2FHc8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180302-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180302: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
X-Osstest-Versions-That:
    xen=cbe828581b4a1717a4331b754c25a27a41d1bc58
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Apr 2023 23:34:14 +0000

flight 180302 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180302/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542
baseline version:
 xen                  cbe828581b4a1717a4331b754c25a27a41d1bc58

Last test of basis   180297  2023-04-18 07:00:25 Z    0 days
Testing same since   180302  2023-04-18 20:01:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cbe828581b..8676092a0f  8676092a0f16ca6ad188d3fb270784a2caecf542 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 00:12:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 00:12:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523091.812825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1povQW-000662-P0; Wed, 19 Apr 2023 00:12:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523091.812825; Wed, 19 Apr 2023 00:12:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1povQW-00065v-MH; Wed, 19 Apr 2023 00:12:00 +0000
Received: by outflank-mailman (input) for mailman id 523091;
 Wed, 19 Apr 2023 00:11:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1povQU-00065l-Gi; Wed, 19 Apr 2023 00:11:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1povQU-0004QK-24; Wed, 19 Apr 2023 00:11:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1povQT-0000Su-LA; Wed, 19 Apr 2023 00:11:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1povQT-00067A-Kd; Wed, 19 Apr 2023 00:11:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Z6Fgt0u5xMHUTX51MxmZJsY3cUM1aYZWE/XbDNOvhk4=; b=OSwUjRBq2iwvLKMoAEnX4YDnZe
	IFwG+E/1i69TN6VXb1e9rHCgrlApBWhSlHj4NsaBijcHFP0ngsKyohSS/d6EvSrRF+lVtuAbNv1IN
	Hqa/Hm3Hh8pY1AgCuSCdCLeMpyc5xdk9HzIr+yrIGLRSnUFgox+VBbeVaSRqE5Fyx9f8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180299-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180299: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6a8f57ae2eb07ab39a6f0ccad60c760743051026
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Apr 2023 00:11:57 +0000

flight 180299 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180299/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd      21 guest-start/debian.repeat  fail pass in 180292

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6a8f57ae2eb07ab39a6f0ccad60c760743051026
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    2 days
Testing same since   180281  2023-04-17 06:24:36 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Torvalds <torvalds@linux-foundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6a8f57ae2eb07ab39a6f0ccad60c760743051026
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Apr 16 15:23:53 2023 -0700

    Linux 6.3-rc7


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 00:44:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 00:44:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523098.812834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1povvp-000122-8b; Wed, 19 Apr 2023 00:44:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523098.812834; Wed, 19 Apr 2023 00:44:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1povvp-00011v-5v; Wed, 19 Apr 2023 00:44:21 +0000
Received: by outflank-mailman (input) for mailman id 523098;
 Wed, 19 Apr 2023 00:44:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LUBK=AK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1povvo-00011p-P6
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 00:44:20 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 502df9e8-de4b-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 02:44:18 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 6674262F6A;
 Wed, 19 Apr 2023 00:44:16 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 613FAC433D2;
 Wed, 19 Apr 2023 00:44:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 502df9e8-de4b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1681865055;
	bh=RVRfQO5esDlerePnzZ5V1QxEUUsFiDgRsMwmpFYjjAg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=mAOhyMH6MBAEvqL7onW4AW6qU5ZGQyWQWR2kpMg28fQYv/HB2BFcfsI7+j17zi3ld
	 Vg5UNMQZPf/qq/gV6WcU+NPcIT8CUQX56JrqlU+6ed+XubuxQkJna08056TNvqq8D6
	 wNKrx1kLDpzQdxHthjaQrVocCrRzqWvSCmof7Uh/7g5rSjLYdbBPpT9gGFeBSUvPvr
	 FrrGTJaY0S6R7GdTDyf50t+rqGljy0tEa0ZvcByjlRVNUSkws1uy8AL//qsQ8WrIJR
	 nYCLloM1T2gblN5RHEILZIyvygwPgeMgmHUlj+MtZg0xswGR9BbSOHLcHSxdqapnJ7
	 zS/WlaPz9hKQg==
Date: Tue, 18 Apr 2023 17:44:12 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
    Ross Lagerwall <ross.lagerwall@citrix.com>, 
    Jan Beulich <JBeulich@suse.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 2/3] arm/alternatives: Rename alt_instr fields which
 are used in common code
In-Reply-To: <20230417121357.3738919-3-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2304181744060.15580@ubuntu-linux-20-04-desktop>
References: <20230417121357.3738919-1-andrew.cooper3@citrix.com> <20230417121357.3738919-3-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-399832817-1681865055=:15580"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-399832817-1681865055=:15580
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Mon, 17 Apr 2023, Andrew Cooper wrote:
> Alternatives auditing for livepatches is currently broken.  To fix it, the
> livepatch code needs to inspect more fields of alt_instr.
> 
> Rename ARM's fields to match x86's, because:
> 
>  * ARM already exposes alt_offset under the repl name via ALT_REPL_PTR()
>  * "alt" is somewhat ambiguous in a structure entirely about alternatives
>    already.
>  * "repl", being the same number of characters as orig, leads to slightly
>    neater code.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> CC: Bertrand Marquis <bertrand.marquis@arm.com>
> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> CC: Ross Lagerwall <ross.lagerwall@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> The other option is to make alt_instr an entirely common structure, but it's
> already different between ARM and x86, and I'm not sure doing so would
> result in nicer code.
> ---
>  xen/arch/arm/alternative.c             |  6 +++---
>  xen/arch/arm/include/asm/alternative.h | 12 ++++++------
>  2 files changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
> index f00e3b9b3c11..7366af4ea646 100644
> --- a/xen/arch/arm/alternative.c
> +++ b/xen/arch/arm/alternative.c
> @@ -44,7 +44,7 @@ static bool branch_insn_requires_update(const struct alt_instr *alt,
>          return true;
>  
>      replptr = (unsigned long)ALT_REPL_PTR(alt);
> -    if ( pc >= replptr && pc <= (replptr + alt->alt_len) )
> +    if ( pc >= replptr && pc <= (replptr + alt->repl_len) )
>          return false;
>  
>      /*
> @@ -128,9 +128,9 @@ static int __apply_alternatives(const struct alt_region *region,
>              continue;
>  
>          if ( alt->cpufeature == ARM_CB_PATCH )
> -            BUG_ON(alt->alt_len != 0);
> +            BUG_ON(alt->repl_len != 0);
>          else
> -            BUG_ON(alt->alt_len != alt->orig_len);
> +            BUG_ON(alt->repl_len != alt->orig_len);
>  
>          origptr = ALT_ORIG_PTR(alt);
>          updptr = (void *)origptr + update_offset;
> diff --git a/xen/arch/arm/include/asm/alternative.h b/xen/arch/arm/include/asm/alternative.h
> index 1eb4b60fbb3e..d3210e82f9e5 100644
> --- a/xen/arch/arm/include/asm/alternative.h
> +++ b/xen/arch/arm/include/asm/alternative.h
> @@ -13,16 +13,16 @@
>  
>  struct alt_instr {
>  	s32 orig_offset;	/* offset to original instruction */
> -	s32 alt_offset;		/* offset to replacement instruction */
> +	s32 repl_offset;	/* offset to replacement instruction */
>  	u16 cpufeature;		/* cpufeature bit set for replacement */
>  	u8  orig_len;		/* size of original instruction(s) */
> -	u8  alt_len;		/* size of new instruction(s), <= orig_len */
> +	u8  repl_len;		/* size of new instruction(s), <= orig_len */
>  };
>  
>  /* Xen: helpers used by common code. */
>  #define __ALT_PTR(a,f)		((void *)&(a)->f + (a)->f)
>  #define ALT_ORIG_PTR(a)		__ALT_PTR(a, orig_offset)
> -#define ALT_REPL_PTR(a)		__ALT_PTR(a, alt_offset)
> +#define ALT_REPL_PTR(a)		__ALT_PTR(a, repl_offset)
>  
>  typedef void (*alternative_cb_t)(const struct alt_instr *alt,
>  				 const uint32_t *origptr, uint32_t *updptr,
> @@ -90,12 +90,12 @@ int apply_alternatives(const struct alt_instr *start, const struct alt_instr *en
>  #include <asm/asm_defns.h>
>  #include <asm/macros.h>
>  
> -.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len
> +.macro altinstruction_entry orig_offset repl_offset feature orig_len repl_len
>  	.word \orig_offset - .
> -	.word \alt_offset - .
> +	.word \repl_offset - .
>  	.hword \feature
>  	.byte \orig_len
> -	.byte \alt_len
> +	.byte \repl_len
>  .endm
>  
>  .macro alternative_insn insn1, insn2, cap, enable = 1
> -- 
> 2.30.2
> 
--8323329-399832817-1681865055=:15580--


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 02:16:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 02:16:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523109.812851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poxMP-0000Tq-50; Wed, 19 Apr 2023 02:15:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523109.812851; Wed, 19 Apr 2023 02:15:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1poxMP-0000Tj-1L; Wed, 19 Apr 2023 02:15:53 +0000
Received: by outflank-mailman (input) for mailman id 523109;
 Wed, 19 Apr 2023 02:15:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VWR0=AK=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1poxMO-0000Td-1Q
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 02:15:52 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20629.outbound.protection.outlook.com
 [2a01:111:f400:fe16::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 18c2ebde-de58-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 04:15:49 +0200 (CEST)
Received: from AM6PR08CA0042.eurprd08.prod.outlook.com (2603:10a6:20b:c0::30)
 by GV2PR08MB8414.eurprd08.prod.outlook.com (2603:10a6:150:bc::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 02:15:45 +0000
Received: from AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:c0:cafe::8c) by AM6PR08CA0042.outlook.office365.com
 (2603:10a6:20b:c0::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22 via Frontend
 Transport; Wed, 19 Apr 2023 02:15:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT003.mail.protection.outlook.com (100.127.140.227) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 02:15:44 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Wed, 19 Apr 2023 02:15:44 +0000
Received: from 7d1344cefdb7.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 50B84CE3-A403-401E-B6ED-7DEE19306672.1; 
 Wed, 19 Apr 2023 02:15:38 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7d1344cefdb7.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Apr 2023 02:15:38 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by VE1PR08MB5616.eurprd08.prod.outlook.com (2603:10a6:800:1a1::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 02:15:36 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.021; Wed, 19 Apr 2023
 02:15:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18c2ebde-de58-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LZJ3mj7dJhBtSn19zVsoHBg83pKwHBYdzpDAnjfWBnA=;
 b=dHuUjNTe2XtqjUsYRkinL8Rmhzpt+3W078bOwAMB/Yr03w0Be+HsBYeo05Oz7/Uaa5eeI3I7c3mp3y6mzmhC9zLTNttRJsiTd4N3TUAUzeRqg8IbdIYM0HTueykGPCTlwis/NJsgZH+yi5aLI8ex2gNB+d/7SI4n6V1+Mkhg8Xk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QwMSUuBbxsq361URjnmkFIv58oiJnuGupYXzlDywSSlwHP6Bg3iJDVM3k0CXQmjThnLX7UifCkRsquCdesNjOBuGihYjJ3OCIqVWsxs6WLyBisyrULic5xa8997qe++l4r+/T0zEL2Gnlu4V2t5DtOKsg7kfENPdnq0dp0NtReXGyAqWrUUeYZkIscz987c1y+HyV1SacvW/3hG/zfry9szJAArGpQQIjUU6PIByiQPeakPYK5Xf9SyQe4qsfG4Z831zWdr24B4cf9c118xH6rDRf7tcWZXoqJRsU1EBw42VGwiV1q7fsx+itzZmjYzNAntjXYbB0NS7ClnCyIEAbA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LZJ3mj7dJhBtSn19zVsoHBg83pKwHBYdzpDAnjfWBnA=;
 b=I23v0cFDJbrBLh9T5wumKE4xHvRlenof8rEb70cUBcdbYeTzYq5WWpdvfQvwDGif2S41coMVPjMjQah9jipx9eX794Z9QaFRtDUMY6Vf7T5U0ONXpmAzhiHeosRcanAObqTbxKV2f50aWrmV5DH+jLZchqdnft8ZSzgcza8hs1Nr5I1k+V6VA7tpr2qDcP6aM6Wue+ImXfZbmti96buXcuxlBSm5k639f67OPhXelZFOLid98YLSR4PgjV30+O8HM4ww9kTQFyLmgpCgjKEyk1gTcTG+r5Rj/f8BIvWnWXjX8eYKWfdhPXsrTpoqAV2um+h1WziamLnwwuponJ6RBA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LZJ3mj7dJhBtSn19zVsoHBg83pKwHBYdzpDAnjfWBnA=;
 b=dHuUjNTe2XtqjUsYRkinL8Rmhzpt+3W078bOwAMB/Yr03w0Be+HsBYeo05Oz7/Uaa5eeI3I7c3mp3y6mzmhC9zLTNttRJsiTd4N3TUAUzeRqg8IbdIYM0HTueykGPCTlwis/NJsgZH+yi5aLI8ex2gNB+d/7SI4n6V1+Mkhg8Xk=
From: Henry Wang <Henry.Wang@arm.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Community Manager <community.manager@xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: RE: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Thread-Topic: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Thread-Index: AQHZcgxvaGx+5HqAbEWvNzQ3gFsUBa8x5Ulw
Date: Wed, 19 Apr 2023 02:15:32 +0000
Message-ID:
 <AS8PR08MB799102ADF81C961A29678F7D92629@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230418154223.20181-1-roger.pau@citrix.com>
In-Reply-To: <20230418154223.20181-1-roger.pau@citrix.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: B17FB9B12C7025458FB53AAC40ABD4AB.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|VE1PR08MB5616:EE_|AM7EUR03FT003:EE_|GV2PR08MB8414:EE_
X-MS-Office365-Filtering-Correlation-Id: 7d7fb849-b01f-44a8-96bd-08db407bfb6e
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 bDjMBiqE9KiRpUv9aP9jhod5cQrAqR3wLPPRziI11BNL/j/pCuQ3lBZRAwhGcEkKAM2vg+7P5NFgqNH9Mgd5JqU6apaf8sCyEO8NXs6iUOXGUxRsAfUcWA07qYhaFHp7H2IX6n4nFn9qLUX4/ne0E3A9c6kk220jkFA/8y5KyoP2PuY/yD4RZiaj8a8fxaQTGSttM+OVsdqbDn5cWzpt3RRe+uDq8M67crjkT2k3l7zW+YibUg7WqHD2cNqNE0SgOU4UlOboRbOGpUO1wuojF735+7pIboc3n++GugZrBc22kvGc25TeZnSJz3z3mn38FRVB9gqMmkiLVOvcN/PnNrH2NbndXw7xIkxihCifH1M9Uu41pnsBmX1B117qX/sXsQQa+W+p3B/EIdXYa6eqmpwCaLpPWB6u2r0zSAbQfw9zJOQAazgAN4ejRdZeDPRgkk0lS8jyt38ogqbKf+LUqClo0IgE1IfRXL+AE/sr8xoLwDmBq1eyuwWZt7FJpy6mUTtNORTUPH9lm5mm+8ZaGRSEvpenS92P3zRyNuM9leVFcYSfKF4onj3UUDjcygJ6Bdstn/aJfy9fWyzRriUoY/PQL47irsA5PaLU9L2B+qGP6bgiFEF8oIJFVvMF8RaT
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(39850400004)(376002)(366004)(346002)(136003)(451199021)(7696005)(6666004)(478600001)(86362001)(110136005)(55016003)(71200400001)(83380400001)(33656002)(26005)(38070700005)(9686003)(186003)(6506007)(38100700002)(122000001)(64756008)(66556008)(76116006)(4326008)(66446008)(2906002)(66476007)(4744005)(66946007)(316002)(8676002)(5660300002)(8936002)(52536014)(41300700001)(54906003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5616
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	af6ffbbf-5768-4a8d-e069-08db407bf422
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5hzjLkekwAMCPmmR7U88fX6W/ZgLIP1ImJ0700QxFvN/mUY9eLHq44h0oz89QFdQ3aZW+4HNmvuQmZJVAnN3Ihd4/Hl8Y5R+nG1u60jstDy4upXc0uVDaojMqUzI+4rcwH8CQF0GuqiLAkHwvhAgkYB8Lm7qdQTGmord6GpbisKnS7ZaNaj6Qfm3Owv4q+gpfegljmQL4yqjYuDUyOPlm7UU9VbA/G8rE/rD8eDYAzvCyy6RQKTopIasq0jOU/LtHnqig94Cpp0EaQaChc0IQSEaGPAIbOZ8SafAyc792vzyuVdXyDTMP2JeHbx9xM7KBu9Mgt1C17X1Dipej/0tfhXgQTiPCueZxiRluhcyQ1d7OMNPzzozJeyvcb+DSy0l4fZspDLrcftG+aGtBA+vw1XLJtoYmv25rOTP3qWTiyXyoEdYMl7Z37j2z3zIM2rebShWJyIrnevF8njijq0EaJojDXURBVQvIUshbTu/VIGrUIGLXpMen9m++/TigdbKUI+C6OuuJKgS6HDMnWdMvQp9aa5cneQS5SXM2ykZZkdPvMGFa3Lebe3PMiGrq+TU4mS1wRJqeUhaSdMcvw6D7LzfvsAxr7k3s80+qnmoOkTx/cS49kKf3cwxQpxIQA21iAPebi6i+NtNzyYnMDJUdknvww8DfhDrtvfoFdAZpIIwHelq6IuCBXO3O7fv2c1Dgg8EdulZrTOnR2QaPptYcw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(39860400002)(376002)(136003)(451199021)(46966006)(40470700004)(36840700001)(110136005)(54906003)(316002)(70206006)(4326008)(40460700003)(70586007)(26005)(186003)(9686003)(6506007)(36860700001)(336012)(47076005)(83380400001)(82310400005)(5660300002)(8936002)(41300700001)(8676002)(55016003)(478600001)(6666004)(7696005)(82740400003)(86362001)(33656002)(356005)(52536014)(2906002)(81166007)(40480700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 02:15:44.5916
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7d7fb849-b01f-44a8-96bd-08db407bfb6e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8414

Hi Roger,

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Subject: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
> 
> The addition of the flags field in the vcpu_set_singleshot_timer in
> 505ef3ea8687 is an ABI breakage, as the size of the structure is
> increased.
> 
> Remove such field addition and drop the implementation of the
> VCPU_SSHOTTMR_future flag.  If a timer provides an expired timeout
> value just inject the timer interrupt.
> 
> Bump the Xen interface version, and keep the flags field and
> VCPU_SSHOTTMR_future available for guests using the old interface.
> 
> Note the removal of the field from the vcpu_set_singleshot_timer
> struct allows removing the compat translation of the struct.
> 
> Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

For the CHANGELOG part:
Acked-by: Henry Wang <Henry.Wang@arm.com> # CHANGELOG

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 04:38:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 04:38:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523114.812861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pozaL-0006KE-2R; Wed, 19 Apr 2023 04:38:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523114.812861; Wed, 19 Apr 2023 04:38:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pozaK-0006K7-V9; Wed, 19 Apr 2023 04:38:24 +0000
Received: by outflank-mailman (input) for mailman id 523114;
 Wed, 19 Apr 2023 04:38:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pozaI-0006Jx-SK; Wed, 19 Apr 2023 04:38:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pozaI-0000rD-JU; Wed, 19 Apr 2023 04:38:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pozaI-0003CY-8a; Wed, 19 Apr 2023 04:38:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pozaI-0001Ye-85; Wed, 19 Apr 2023 04:38:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=quyhWiDR8ugSYMEHC17ftEBaMoVS5HZHCZ5lFqFsZC8=; b=rchT557psE6q+jcfJJAnzqCVqe
	ane1IUOMGextfc8aEQQc1UFCoGK7UnHmfLjA8YSDDMlRFige+8rxmYL9lqoxzkPDlOwL1Y/YbTZvI
	FEd8LvzrCjMAah2BUY9pqHN300jzv19osJMI4Y39JZNUoL9vqaLeZ566TAdilozR1f5o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180301-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180301: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-livepatch:xen-install:fail:regression
    xen-unstable:test-armhf-armhf-examine:examine-serial/bootloader:fail:regression
    xen-unstable:test-armhf-armhf-examine:examine-serial/kernel:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cbe828581b4a1717a4331b754c25a27a41d1bc58
X-Osstest-Versions-That:
    xen=5eb6bd7454e253f4907dbeb7aa982967b21698bc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Apr 2023 04:38:22 +0000

flight 180301 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180301/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-livepatch     7 xen-install              fail REGR. vs. 180287
 test-armhf-armhf-examine    11 examine-serial/bootloader fail REGR. vs. 180287
 test-armhf-armhf-examine     12 examine-serial/kernel    fail REGR. vs. 180287
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 180287

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180287
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180287
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180287
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180287
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180287
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180287
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180287
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180287
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180287
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180287
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180287
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install       fail like 180287
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  cbe828581b4a1717a4331b754c25a27a41d1bc58
baseline version:
 xen                  5eb6bd7454e253f4907dbeb7aa982967b21698bc

Last test of basis   180287  2023-04-17 18:09:05 Z    1 days
Failing since        180296  2023-04-18 06:36:57 Z    0 days    2 attempts
Testing same since   180301  2023-04-18 18:07:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Dietmar Hahn <dietmar.hahn@fujitsu.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cbe828581b4a1717a4331b754c25a27a41d1bc58
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Apr 18 08:28:15 2023 +0200

    xen: update CONFIG_DEBUG_INFO help text
    
    Update the help text of the CONFIG_DEBUG_INFO option to be a little
    bit more specific.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 14a9f07d99f314a86fc4e94d5e5577fbf3f8a3ef
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Apr 18 08:26:24 2023 +0200

    xen: move CONFIG_DEBUG_INFO out of EXPERT section
    
    In order to support hypervisor analysis of crash dumps, xen-syms needs
    to contain debug_info. It should be allowed to configure the hypervisor
    to be built with CONFIG_DEBUG_INFO in non-debug builds without having
    to enable EXPERT.
    
    Using a rather old gcc (7.5), it was verified that code generation
    doesn't really differ with CONFIG_DEBUG_INFO on or off when
    CONFIG_DEBUG is not set (the only observed differences were slightly
    different symbol addresses, verified via "objdump -d" and resulting
    from the different config.gz embedded in the binary). The old gcc
    version was chosen on the assumption that newer gcc won't regress in
    this regard.
    
    So move CONFIG_DEBUG_INFO out of the section guarded by EXPERT.
    
    It should be mentioned that there have been reports that the linking
    of the xen.efi might take considerably longer with CONFIG_DEBUG_INFO
    selected when using newer binutils.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Tested-by: Dietmar Hahn <dietmar.hahn@fujitsu.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 3146c0f10140e23594c8185568f887111e504977
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Apr 18 08:25:50 2023 +0200

    xen/riscv: add explicit check that .got{.plt} is empty
    
    Use of GOT sections should be avoided in the hypervisor. To catch
    such use cases early, should GOT entries ever be produced, the patch
    introduces .got and .got.plt output sections and adds asserts that
    they are empty.
    
    As long as the sections remain empty they won't actually be created;
    otherwise the asserts cause an early link failure.
    
    Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
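
The check can be sketched roughly like this in a linker script (a simplified illustration of the approach, not the exact hunk from the patch; section names follow standard ELF conventions):

```
/* Collect any GOT contents into dedicated output sections so their
 * size can be measured. */
.got : { *(.got) }
.got.plt : { *(.got.plt) }

/* Fail the link early if anything ended up in them. */
ASSERT(SIZEOF(.got) == 0, ".got non-empty")
ASSERT(SIZEOF(.got.plt) == 0, ".got.plt non-empty")
```

Because an output section of size zero is not emitted, the asserts are a no-op for a clean build and only trigger when GOT entries appear.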

commit c57cd4d45c4e54b3906171499258dbd3556855d4
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Tue Apr 18 08:22:20 2023 +0200

    xen/riscv: add EMBEDDED_EXTRA_CFLAGS to CFLAGS
    
    The patch is needed to keep all addressing of cpu0_boot_stack
    PC-relative.
    
    The pseudoinstruction 'la' can expand to either 'auipc/addi' or
    'auipc/l{w|d}', depending on the .option directive (nopic or pic) or
    on compiler flags.
    
    Right now 'la' expands to 'auipc/l{w|d}', which in the case of
    cpu0_boot_stack[] leads to use of _GLOBAL_OFFSET_TABLE_. Addresses
    there are link-time values, which does not account for the fact that
    the link address may differ from the load address (addresses inside
    GOT sections are relative to link time).
    
    This happens because the compiler in the riscv64 docker image is
    built with --enable-default-pie:
      [user@49295ae49cbe build]$ riscv64-linux-gnu-gcc -v
      Using built-in specs.
      COLLECT_GCC=riscv64-linux-gnu-gcc
      COLLECT_LTO_WRAPPER=/usr/lib/gcc/riscv64-linux-gnu/12.2.0/lto-wrapper
      Target: riscv64-linux-gnu
      Configured with: /build/riscv64-linux-gnu-gcc/src/gcc-12.2.0/configure
      --prefix=/usr --program-prefix=riscv64-linux-gnu- --with-local-
      prefix=/usr/riscv64-linux-gnu --with-sysroot=/usr/riscv64-linux-gnu --
      with-build-sysroot=/usr/riscv64-linux-gnu --libdir=/usr/lib --
      libexecdir=/usr/lib --target=riscv64-linux-gnu --host=x86_64-pc-linux-
      gnu --build=x86_64-pc-linux-gnu --with-system-zlib --with-isl --with-
      linker-hash-style=gnu --disable-nls --disable-libunwind-exceptions --
      disable-libstdcxx-pch --disable-libssp --disable-multilib --disable-
      werror --enable-languages=c,c++ --enable-shared --enable-threads=posix
      --enable-__cxa_atexit --enable-clocale=gnu --enable-gnu-unique-object -
      -enable-linker-build-id --enable-lto --enable-plugin --enable-install-
      libiberty --enable-gnu-indirect-function --enable-default-pie --enable-
      checking=release
      Thread model: posix
      Supported LTO compression algorithms: zlib zstd
      gcc version 12.2.0 (GCC)
    
    Looking at gcc spec file for the RISC-V architecture:
      [user@49295ae49cbe build]$ riscv64-linux-gnu-gcc -dumpspecs | grep -i
      pic
      --traditional-format %(subtarget_asm_debugging_spec) %{fno-pie|fno-
      PIE|fno-pic|fno-PIC:;:-fpic} %{march=*} %{mabi=*} %{mno-relax} %{mbig-
      endian} %{mlittle-endian} %(subtarget_asm_spec)%{misa-spec=*}
    which means that -fpic is enabled if none of the following options are
    present on the command line:
        -fno-pie
        -fno-PIE
        -fno-pic
        -fno-PIC
    
    That is why 'la' is expanded to 'auipc/l{w|d}' via the GOT. Adding
    the -fno-* flags (through EMBEDDED_EXTRA_CFLAGS) makes the expansion
    independent of how the toolchain was configured.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
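
As an illustration of the two expansions discussed above (a sketch only; label and relocation spelling follow the usual GNU assembler notation, and cpu0_boot_stack is the symbol from the commit):

```
    /* PIC (the --enable-default-pie default): the address is loaded
     * from the GOT, i.e. it is a link-time value. */
1:  auipc a0, %got_pcrel_hi(cpu0_boot_stack)
    ld    a0, %pcrel_lo(1b)(a0)

    /* non-PIC (-fno-pie, as added via EMBEDDED_EXTRA_CFLAGS): pure
     * PC-relative computation, no GOT involved. */
2:  auipc a0, %pcrel_hi(cpu0_boot_stack)
    addi  a0, a0, %pcrel_lo(2b)
```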

commit 1213ebfb9f35920b3e0f5dff71bb917f5fb4be5f
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Wed Apr 5 13:17:50 2023 +0200

    xen/arm: vpl011: Do not try to handle TX FIFO status when backend in Xen
    
    From vpl011_rx_char_xen(), we call vpl011_data_avail(), which handles
    both RX and TX state. Because we pass 0 as out_fifo_level and
    SBSA_UART_FIFO_SIZE as out_size, we end up calling
    vpl011_update_tx_fifo_status(), which performs TXI bit handling
    depending on the FIFO trigger level. This does not make sense when the
    backend is in Xen, as we maintain a single TX state where data can
    always be written, and as such there is no TX FIFO handling.
    Furthermore, this function assumes that the backend is in a domain, by
    making use of struct xencons_interface unconditionally. Fix it by
    calling this function only when the backend is in a domain. Also add
    an assert for sanity.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Henry Wang <Henry.Wang@arm.com>

commit d3784f16bbfeabde92b55ee6d5d66dcb1d82d060
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Wed Apr 5 13:17:49 2023 +0200

    xen/arm: vpl011: Handle correctly TXFE when backend in Xen
    
    When the backend is in Xen, the handling of data written to the DR
    register is a bit special, because we want to tell the guest that we
    are always ready for new data to be written (i.e. no real FIFO,
    TXFF/BUSY never set and TXI always set). This conflicts with the
    current handling of the TXFE bit, which we always clear and never set
    on the write path (we happen to set it when we receive a char from
    serial input, due to the use of vpl011_data_avail(), but this might
    never be called). This can lead to issues if a guest driver makes use
    of the TXFE bit to check for TX transmission completion (such a guest
    could then wait endlessly). Fix it by keeping TXFE always set, to
    match the current emulation logic.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>
    Tested-by: Henry Wang <Henry.Wang@arm.com>

commit 005e84e695ed086b9ebb281ee6711fd1aa6aaaba
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Wed Apr 5 13:17:48 2023 +0200

    xen/arm: vpl011: Fix misleading comments
    
    In both vpl011_read_data() and vpl011_read_data_xen(), there is a comment
    stating that the guest is expected to read the DR register only if the
    TXFE bit of FR register is not set. This is obviously logically wrong and
    it should be RXFE (i.e. RX FIFO empty bit set -> nothing to read).
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Henry Wang <Henry.Wang@arm.com>

commit 65c4e7472cafb60f478e7a5f358ee1eeac28b5a8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 17 18:11:45 2023 +0200

    x86emul: support AVX-NE-CONVERT insns
    
    Matching what was done earlier, explicit tests are added only for
    irregular insn / memory access patterns.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 842acaa743a503726d6c4d77a7982cc64f07c4bf
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 17 18:11:06 2023 +0200

    x86emul: support AVX-VNNI-INT8
    
    These are close relatives of the AVX-VNNI ISA extension. Since the insns
    here and in particular their memory access patterns follow the usual
    scheme (and especially the byte variants of AVX-VNNI), I didn't think it
    was necessary to add a contrived test specifically for them.
    
    While making the addition also re-wire AVX-VNNI's handling to
    simd_0f_ymm: There's no reason to check the AVX feature alongside the
    one actually of interest (there are a few features where two checks are
    actually necessary, e.g. GFNI+AVX, but this isn't the case here).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit da232f1f1118e8c8fad520dedee312005c2984fb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Apr 17 18:10:14 2023 +0200

    x86emul: support AVX-IFMA insns
    
    As in a few cases before (in particular: AVX512_IFMA), since the insns
    here and in particular their memory access patterns follow the usual
    scheme, I didn't think it was necessary to add a contrived test
    specifically for them.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 06:03:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 06:03:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523126.812877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp0u9-0007R7-Ac; Wed, 19 Apr 2023 06:02:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523126.812877; Wed, 19 Apr 2023 06:02:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp0u9-0007R0-7z; Wed, 19 Apr 2023 06:02:57 +0000
Received: by outflank-mailman (input) for mailman id 523126;
 Wed, 19 Apr 2023 06:02:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp0u7-0007Qs-KV
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 06:02:55 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20604.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d238b77d-de77-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 08:02:53 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6926.eurprd04.prod.outlook.com (2603:10a6:803:133::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 06:02:51 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 06:02:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d238b77d-de77-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <38127c62-2dad-7072-706f-8e58b89bde0a@suse.com>
Date: Wed, 19 Apr 2023 08:02:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2 0/2] deal with GOT stuff for RISC-V
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1678970065.git.oleksii.kurochko@gmail.com>
 <b343d8c3-b23b-c67b-76f6-c25d5892328b@suse.com>
 <48a1fd97d34a37a2cdbdadf35811d31523b61a4e.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <48a1fd97d34a37a2cdbdadf35811d31523b61a4e.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0262.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB6926:EE_
X-MS-Office365-Filtering-Correlation-Id: f7820bec-ec14-4cc6-5e9e-08db409bb599
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f7820bec-ec14-4cc6-5e9e-08db409bb599
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 06:02:51.5618
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6926

On 18.04.2023 14:03, Oleksii wrote:
> On Mon, 2023-04-17 at 16:12 +0200, Jan Beulich wrote:
>> On 16.03.2023 14:22, Oleksii Kurochko wrote:
>>> Oleksii Kurochko (2):
>>>   xen/riscv: add EMBEDDED_EXTRA_CFLAGS to CFLAGS
>>>   xen/riscv: add explicit check that .got{.plt} is empty
>>>
>>>  xen/arch/riscv/arch.mk   |  2 ++
>>>  xen/arch/riscv/xen.lds.S | 13 +++++++++++++
>>>  2 files changed, 15 insertions(+)
>>
>> Just to mention it in case you aren't aware: Hunting down the
>> necessary acks
>> is your responsibility, not one of the committers. You may want to
>> ping Bob
>> and Alistair (unless this response of mine is already enough of a
>> ping).
>> Provided of course the patches still apply as-is ...
>>
> Thanks. I'll take that into account.
> 
> I thought the only option I had was to wait for a response from a maintainer.

Well, in principle yes. But pinging is appropriate after a reasonable
amount of time (rule of thumb: a week, maybe two, depending on e.g.
complexity).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 06:17:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 06:17:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523130.812886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp18a-0000Yi-LS; Wed, 19 Apr 2023 06:17:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523130.812886; Wed, 19 Apr 2023 06:17:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp18a-0000Yb-IP; Wed, 19 Apr 2023 06:17:52 +0000
Received: by outflank-mailman (input) for mailman id 523130;
 Wed, 19 Apr 2023 06:17:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp18Z-0000YV-5z
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 06:17:51 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20602.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::602])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e8356cb7-de79-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 08:17:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6905.eurprd04.prod.outlook.com (2603:10a6:10:113::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Wed, 19 Apr
 2023 06:17:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 06:17:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8356cb7-de79-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bBY/QE4tzfh0nhGIPjJWZlEkzul5+oZ0/0a/aRBS8mk5dA7d3rguoGzPmR9m2JTOx7FqGPa/5skqKrI4EEsElE13U8KZ5SWRjxc4w5dP79sZvCzSxtlmjOOMInp3ijoo+gvD0cS1NjCKijlCrLNS6QQR4qvYkDE+1Wqt0NABf0h0cPuHLkjPOJLXmZMLYz006MRHQyd6yVdoDCjkPvU7lZHuKzZENej7RZSxthrnAg2p5RQkwkW/Ob+IDpjtyX82d6AiHQf23fOpJp8vunYup80H3baXA0L5bpEX9CD0A5h0vwkRZ6zQBA8FqjJxy0uNUaVlAJEgJkq1f9/BTEjKKw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Cja/AsluzgAtOzgFCGClXMEUNs0H7z6a5o6h9BqqQWM=;
 b=Jneec/e4JvNPFugxqIqxm4GRhKe9BnJED4WW/OOmyIcojIIhLS5Nlei5TMeNVPwNYe+K28LEtQqbOLFJg/PKDkbhgyi3u/49UyxARdgIxr9M1zmAf4tDhC+ctWjhKNSQpoWnVAyyQOlqUlEaPv2W5PYp1GyzUdEyb3Bg9jC/xT4NGpikX4QzDGX/pnlLduufZMy7yLNXGcfh9+npY4rJVYj/enRK0D3LNj7Uig0B5XFRnvyAfHtg4MPs2xPjlA1dsav59AjsjA6wWMq7BG7xW8a25/bmpnxq2fPl9GHj77qA3BRjcU9BrwYoVG2VMAhf0NrOE2sJIVJWwgLSoY6HHA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Cja/AsluzgAtOzgFCGClXMEUNs0H7z6a5o6h9BqqQWM=;
 b=rv8aJyweU6yWAtxuIVJNjv2BeykTITfl36PNhjsojsfJuSuoD2Jhp098vmX6g7JNVl1p0GB1K+tBn0DAgE0wLQ8DRqT/BQuPR8Z76ZgW+b29WOvbzKo+6DlrhjH3vZGLDE0e9KIkKHpeProyhvpwTvbQd1sKnpDn0G2pemuYcf/TFEpWM2uUgxC0MXo1za+GIP96lHmq0lVC8+PEDglKnF5dKkVOUfaCICiqXKT8vGCF0V1v5oNA9cEgMKygekn5vLy9DOGcIT5A4Ps/BrfDYmIjJOlQlc8+CX1ItxHGk8e8SFIm4lBC51f1KRszgQIiu9E1fXmz3nUdKwmLtO7zNw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d301e110-f840-a032-c406-2f7404752783@suse.com>
Date: Wed, 19 Apr 2023 08:17:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
 <ZD6V0wzw/VS/MMw/@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD6V0wzw/VS/MMw/@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0139.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6905:EE_
X-MS-Office365-Filtering-Correlation-Id: cd2024fb-2904-4fe2-bfd3-08db409dcb6f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	jNLVGxjxC3exHirwS53lRWPLayh0KbfxFvSlwXoa5MbA2/WJGxCTmiNFTBBNxjsKmjgGc8UyG+7FPlqh29plhHs5NTpYaikA1mRn9Wynp2Fg5UzJXz0dOMgvYqa3x0DdWiKRlut4mF5zPkikD4ERm6KvWGSLZ0sPEr9qqk59qAWPAlYv0aDkaGw93zF4vxrRJAgo+OyM1bOQwgSEF/rEELZ8CS9Oy9tNEKDgHLE/5nR0SqcYPOVLYjblxFnOtYspP/SZux28V515x0zYuFzzbF1bneFzvxoa/EXMy+SarxAXCU77UVs/MVhA9EofPvpK6A1STaaaYazRMXSsQdXXjHaOZn4wy65/1Z9oDa3F9DyemIEZgeN30VY+QeF2CBCfrtNqTcy5cJYfCZHjtUwoEsgXLcIGfjU2+Q4AoQI+DkkjXdi8/Sd5DO7YwVhpgkIzFuPqzZxQLrrS+Lwg7px29gzmQkqiFzN97PBfv9VeqdIQQtXyTHXugabGXdCExJLy64GeUaUj+p+0yNf/vLz2VF0JPv1PEH+YfbKmC5QZo3LX0IXCubffpeXd8fBdsOlZraFXKPnQtN7Resicc+3NKtaUMRVNoV9IxWlnwtUEOA8qmtp7EXzpuYHd+FfdLoweYevEJ6L1vdS1tkfLCn7nJQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(136003)(376002)(346002)(39860400002)(366004)(451199021)(2906002)(6486002)(478600001)(54906003)(8936002)(8676002)(53546011)(31686004)(5660300002)(38100700002)(316002)(186003)(66476007)(66556008)(66946007)(6506007)(26005)(4326008)(6916009)(6512007)(41300700001)(31696002)(86362001)(36756003)(83380400001)(2616005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?S3JYYXVnNDF4TWJ6Qnc5cm16TEhaNmp2OWVNQ1hHYVd5bFZDLzUvUW9YMUpO?=
 =?utf-8?B?cW5HZVNrTkNDUjVnVmhrS3lWTFBLcW1vdHRnWjVpbUtUbGZnTmVKNWpnMGlo?=
 =?utf-8?B?Qkc5Uk5GTkRGeWh0R2RkcmNDRCt2MjJyZW9zbktqVFk5RUVCNnJuRUVtMHpO?=
 =?utf-8?B?cjZPWWI3MFRtcUhwVEQrRkt4aHo4TUw3RUM0TDh2VFNEamUrcTFtTnYyMFpE?=
 =?utf-8?B?dE54ZGczWXdDd1VpN3Fxb0dSRWZvYmgxbFZRUE51a2M0UE1icWozOEVWU0N0?=
 =?utf-8?B?WklmZDBFdWtXR215NTZ3MGk5ZWVQakZKOENSNmVrMldUd1FIeFdhKzJyT3Bx?=
 =?utf-8?B?Y3prVDdrZjF2dzg3c01aa1UyQ3A5SXUyc2hHVDJieVozZm03ZnhDUjEwMEVF?=
 =?utf-8?B?bWRsUUJ5Z2Z3bDN5WnByaGJ1RVlkNlFCRlJUKzlldXFCWFYrd1dVeDRJQ3NM?=
 =?utf-8?B?Y2tTZjFNRWdhb05TQzVwTndvcEd4RWRxRmJKVXlvVTdwL1lnTXV5MUhPQmhZ?=
 =?utf-8?B?RE1KbXNvUzVFTFVxZHJIYkJ2Zkp6a0cxZ1QvcDhDbllXN3Z4dDVocGYyd1hK?=
 =?utf-8?B?MDhSWmxyRThKOTJxb2tiSTFKRHltTGNITnFodVQ2bUxRQTdKSXJzSFIzNEhu?=
 =?utf-8?B?NVF3Q1Y3SWRWNkh0NEQ1bERRMUdFdjZQdlJ4UEhZakZYb21wTzlTSEdveFB6?=
 =?utf-8?B?eUdReVRNNU1iejYxZW9xTzhVZHFNODkrUjExWk9NVC94eHBOUXBoU2pVakN4?=
 =?utf-8?B?ZlFzbWUwaXhtd3pSQ3hjaHJtTXIwL29hVUI4bDJVTjJpYks3aktvNTdETVAr?=
 =?utf-8?B?ZGRRMGhySC8zenE1enFsUXNJazVEdDhSVWErL0Q0SDJmR2pQbnhtUVBXRXFB?=
 =?utf-8?B?bTJsa2NjRDFDTFNWU3BJdll2TXNENE5tczJNaGlQem0vdXJ0UjJhWHlyekdt?=
 =?utf-8?B?d0wrRlQyN0NEQUxZK21KK2k2UzkyTU5tVWRqSmpBbU56RncvUVVrZzl1LzNx?=
 =?utf-8?B?ZmprQTV5Z0JucjQ2RzZQeCtDRkhCMzJuUi9oODcxVWgrYnBXdGVUVlVSMW44?=
 =?utf-8?B?ZzJGWERxZGxCbEdqb0xkSy81bEt6b1FhZUJINmp4eXFFVkFPWmVIM1RUcUZV?=
 =?utf-8?B?QU1udkNvTVlYZnlBWnkyQkZHSkNCQ3ZlWVMwdWhDTFRaNjJzSEoxWk9KOUUv?=
 =?utf-8?B?akJqekxzVTNjb2lWNkRpaWVXc3k1WmNnZCtteE1DTzhpN0k2S1JXc1lJNVJV?=
 =?utf-8?B?ZFkvR3pYTWtzaFFxNkNVaVB3RWNNS0FSSWd3RnZTN0k4R1JWZFRzR1RRcUhw?=
 =?utf-8?B?RjFYc2tnRVRUc3c4QW5Bb1E4VkhHRzkvZWc0NWtrd2ZnSDU3cDlKVWs5bmV5?=
 =?utf-8?B?NGFDVmgyWUVmWW8wSzVLdnpLMURXaGpoU1ZiVXo2cERKVFIrUmNGN2FKdE1r?=
 =?utf-8?B?QmdMalFMZzNmZ2hhSzFlSkVDQkYveXlCM3phVncya2xLN0g4eXdmK0xNbjVW?=
 =?utf-8?B?MkNrWWUxbUdGVHVLLzJ6YjNBcFJvYm1yWWd1TVk1YkcwcFpjeXdjQXA0UHh5?=
 =?utf-8?B?T3hFbnBXU1lKRUppc0xWQ2ZNeUxLRmROS0x2c3h3Rko0WlNzZjR1ZW52ZjdI?=
 =?utf-8?B?ZFJSb00yZitXbXRBUkttcHZ1VUYwWURUMURWeUJmSGZaYXgxZlR4QlZSZmJI?=
 =?utf-8?B?U1VEbGsxbHd2VmZLcW5JSHQyeUdyK0FsM3JacWRzajJlMTFTWmR5RDFLRTN6?=
 =?utf-8?B?dlR6WUpQZEdnNkhSWjRINklDNmlHV1dqcklxTStmVW1WN0ZDTlVScmcwQ2wy?=
 =?utf-8?B?eUlkbzlWVDBsMWJyN1NFNTdDWkhZSHJkWGt3UERGVDRNbEJoNXdvaEZ2Nkww?=
 =?utf-8?B?VFFQQ0ZzdXBoMlpFSHQ4M09PaTZucXlnNXNMdFlBUHBtZy9BNWJ6MUFLUXNo?=
 =?utf-8?B?aFl5UTQ2TUVIRVBlYm9BSWJhRWxHN2dRZmRMOTNZa2NOL1QxV2RHWW93Rmpv?=
 =?utf-8?B?OXpNZzdrMnRmZkdFdVl6Z1hQNHVJcnVjZzdlRUgxK0xkVkF4d2hDTEQxZHk1?=
 =?utf-8?B?UC9RK01nWDhMbE1NOHZNcTBMNU9Gb3B5ZnB3THNuT0xzNUdrQ0tNQ0sza0Zp?=
 =?utf-8?Q?/O7XUw8Ma/Tjl4TGA5gqU1PoK?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cd2024fb-2904-4fe2-bfd3-08db409dcb6f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 06:17:47.1926
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ylK5MK6GvE06Zjo/mvOFXSpIUjeRdhf0b9tLEZoThdQA4g0NVdXvXguSzziAdkwwQ+C86PkiViMmkFeUYBZxyA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6905

On 18.04.2023 15:06, Roger Pau Monné wrote:
> On Tue, Apr 18, 2023 at 01:00:53PM +0200, Jan Beulich wrote:
>> On 18.04.2023 11:24, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/include/asm/config.h
>>> +++ b/xen/arch/x86/include/asm/config.h
>>> @@ -44,6 +44,20 @@
>>>  /* Linkage for x86 */
>>>  #ifdef __ASSEMBLY__
>>>  #define ALIGN .align 16,0x90
>>> +#ifdef CONFIG_LIVEPATCH
>>> +#define START_LP(name)                          \
>>> +  jmp name;                                     \
>>> +  .pushsection .text.name, "ax", @progbits;     \
>>> +  name:
>>> +#define END_LP(name)                            \
>>> +  .size name, . - name;                         \
>>> +  .type name, @function;                        \
>>> +  .popsection
>>> +#else
>>> +#define START_LP(name)                          \
>>> +  name:
>>> +#define END_LP(name)
>>> +#endif
>>>  #define ENTRY(name)                             \
>>>    .globl name;                                  \
>>>    ALIGN;                                        \
>>
>> Couldn't END_LP() set type and size unconditionally? (But see also
>> below.)
> 
> I see, so that we could also use it for debug purposes.  I guess at
> that point it might be better to use {START,END}_FUNC() to note that
> the macros also have an effect beyond that of livepatching.
> 
> Maybe also introduce a START_ENTRY() that replaces ENTRY()?  Though I
> find START_ENTRY a weird name.

So do I. {START,END}_FUNC() or whatever else would in principle be fine,
but I take it you're aware that we now have two or even three competing
proposals for a general scheme of such annotations, and we don't seem
able to agree on one. (I guess I'll make a design session proposal on
this topic for Prague.)

One thing needs to be clear though: macros doing things solely needed
for livepatching must not have extra effects when it is disabled, and
such macros also shouldn't e.g. insert a stray JMP when not really
needed. Hence I expect we still want (some) LP-specific macros besides
whatever we settle on as the generic ones.
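
For clarity, with CONFIG_LIVEPATCH enabled the quoted macros would expand a
guarded symbol roughly as follows. This is only a sketch derived from the
patch hunk quoted above, using restore_all_xen as the example name; the
function body is a placeholder:

```asm
/* START_LP(restore_all_xen) expands to: */
        jmp restore_all_xen
        .pushsection .text.restore_all_xen, "ax", @progbits
restore_all_xen:
        /* ... function body ... */
/* END_LP(restore_all_xen) expands to: */
        .size restore_all_xen, . - restore_all_xen
        .type restore_all_xen, @function
        .popsection
```

With CONFIG_LIVEPATCH disabled, START_LP() degrades to a plain label and
END_LP() to nothing, so neither the JMP nor the separate section is emitted.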

>>> --- a/xen/arch/x86/x86_64/entry.S
>>> +++ b/xen/arch/x86/x86_64/entry.S
>>> @@ -660,7 +660,7 @@ ENTRY(early_page_fault)
>>>  
>>>          ALIGN
>>>  /* No special register assumptions. */
>>> -restore_all_xen:
>>> +START_LP(restore_all_xen)
>>>          /*
>>>           * Check whether we need to switch to the per-CPU page tables, in
>>>           * case we return to late PV exit code (from an NMI or #MC).
>>> @@ -677,6 +677,7 @@ UNLIKELY_END(exit_cr3)
>>>  
>>>          RESTORE_ALL adj=8
>>>          iretq
>>> +END_LP(restore_all_xen)
>>
>> While I'm fine with this conversion, ...
> 
> So I take it that overall you would agree to adding this extra
> information using a pair of macros similar to the proposed ones.

Yes (with the above in mind, though).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 06:28:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 06:28:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523134.812896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1Il-000244-K4; Wed, 19 Apr 2023 06:28:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523134.812896; Wed, 19 Apr 2023 06:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1Il-00023x-HS; Wed, 19 Apr 2023 06:28:23 +0000
Received: by outflank-mailman (input) for mailman id 523134;
 Wed, 19 Apr 2023 06:28:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp1Ik-00023r-2Y
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 06:28:22 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0615.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::615])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 60e498ec-de7b-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 08:28:21 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB7128.eurprd04.prod.outlook.com (2603:10a6:20b:11c::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.20; Wed, 19 Apr
 2023 06:28:19 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 06:28:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60e498ec-de7b-11ed-b21f-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IBGDIOjIgDDsr0umM0Ohv3XrvY3yeflntmmBwrmzRJDsBAHwqbzOETlSpZ+PrtYsMrLKFWw6SITHGJ/YSlze9WBUWh6wZ5gyUjX/j1uKQJjOPJb+CRk7+ULap8VGB/9JWgmadoeRwfpx+Z/yF9bjYSz0MFxg/X3DkorrOtwg9AYTN/5aG3SwwMSmD/2/odgOxTpp1JZqE6W/8bnFqyLqKm4pBrE0TPDmmOrgpfkLm9KWGqeh0h1N5fXu/f3bvG4xf+2KvjPlBdaYjRoSoK1j58e9sTovDym/extkrngexgajjnI3XYsKA6suhvEKhylMz1NZv4XlqTvenZOw39mLrQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zppZioHxju2k1rOcyDNlXjKuHYmML04feaRLOpSEdxY=;
 b=FZL9ZCZE6Hg/wUXSyx6mRWXHHWichmXfaX5YHia9DhH0R0RyF8dZ1Cy1A8p7Qaos0jbfXtOJ/MupaE7x42WkVUeDilupu0N/Y4LEVKkY3HUFdXNnA2SX+GB9aoBO3wIzKnccTQmeYFAzOflbz5T0Gn1uroFOfo2iPhItkAGGOhxx5cCz50LEEnowO7DcrrZI2YWBSRdNrWQIY6zmWHwUpdDvcMNISU+mCGt484YI68uzH+YEyU0SnwuS7vmzNk3zRcCQPm6qbqCbQr2Fip37bPHW4DEvCaA6HtRWwocU06x0IyPdG22XNb4Twi+TP0QINLLcljk+d5F2dQKkaTvCCQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zppZioHxju2k1rOcyDNlXjKuHYmML04feaRLOpSEdxY=;
 b=sZer6Iqf81bMPlzyi+oZUx6pqpB13qre771vs5SlkeCsaMX3Z8px75hcCZB9M3i3Zd2rNQ0c5l2YjtP7idbwPUusnaDpirwIU6saStxK8Rg4TQRbr+S8/7ccLO7aVo4l0UzYs0hQmpqTozf8+GZyhhUx1GZgRyPdJUAzfI74d3oyJ9O+31wjtcWBVMEQb+R9dJJmjDkCcVKGc/K4q9hyCtFmimaaLP3H6zHJodkvq/EozWw9z9/U1VQNk77mUFJVGc4FxP1tmw3/3ARTiVDXjDqXmBZ2h+zhgzlUR95unIVrJ4oYX0o1cXDF+WFaaPma4Gr1pWzg1TzHRPK4nRe6IA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4cbaaf12-bd11-ca04-eed1-f8848290a692@suse.com>
Date: Wed, 19 Apr 2023 08:28:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v5 00/12] SVE feature for arm guests
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Bertrand Marquis
 <Bertrand.Marquis@arm.com>, Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Nick Rosbrook <rosbrookn@gmail.com>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
 <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0210.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB7128:EE_
X-MS-Office365-Filtering-Correlation-Id: 71040519-5356-4ece-d47d-08db409f43dc
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1h/bk+3Zmin+UgtdK05ZodSQ3Y3Gulk8KO8yKh2pjmihF4AROav1zv6XPJwTnc5lJLHsEQC8E2X+MH+ms6+kSlx4UIrQul/34O6HrsW5Uc9oOJzkBaR28v23xyHfb+tMUDTSNpqjpSElevKpDn6uUipNVfdPCKmoOF2630TxrO2P21FzinJAd7yI3bzOwptCJ2rtEZxaVrJK0up/BqkBdU3DDFsInyHXj7WKQ0ftLuM7BERuT25+W8c52iMLlZLZYUoAik0tGPbi0esjIUs3ktU9fViP7ZDp8QTN38tb65Efp4LpIZ6HvrecaLdArCVjK6oGpI51Rr4foI5jnoaDrjPCUDqJz+Vfcrs6mCGj3F5gTibpT/aYzKjFwbl2f02j4/u7H1LqEF8KhFwFfh9EFyIIQ8pW3JuYzdhN6ihUl5JNFjcwFWBRcxYJvdyp8zi2opm5N43/pr94Ehu9naz7zIgK2erlELvZCnUxSUVklfbe8z4jV50N9V2/Ows4UzZE5tCN4Dzm8jSkaSxDRPpsrqK5uosgH2zJmTNQnMHBT5pdvNbjK4M3LzXDA+GKHCGYmsTMHJFndQjAw+1tG9Gj+XkttGr8u4LuAGz/lKiaAGeU8JsoyYyj/glrWtE3b6F094HtPASLBdgC74sMkQGDNA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39850400004)(396003)(376002)(366004)(346002)(136003)(451199021)(6666004)(6486002)(478600001)(31696002)(110136005)(86362001)(36756003)(2616005)(83380400001)(6512007)(186003)(6506007)(53546011)(26005)(38100700002)(316002)(66946007)(66476007)(66556008)(2906002)(8676002)(5660300002)(8936002)(7416002)(31686004)(41300700001)(4326008)(54906003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NllXVUsvMStYVXlvTU05bTlYMjM5bXVwQmsyS0kwWUh0THdpc25NeDN4cGNs?=
 =?utf-8?B?dWJZUzY1YW9UczljaW0xNE5SSkNyalhsMFBCL3ErVXAyVWYyWHJaMCt5NHZU?=
 =?utf-8?B?S0xuTUJwZUpVRUk4ejgwdFV3aFNrQ0JhYlhqaDZ6RW9rUm5HUWhpRENBV3Rj?=
 =?utf-8?B?VDhsQTZQRnJiZXI2M1EvUVlLVWJGOEFKWlBlSFZxbmFoNU5EVGRGVUYrbzEr?=
 =?utf-8?B?RStWYVZyQ2syOE1lZGFBMlQ1VDhJUUVxbHpCVmUzaCs1N0dlZFd2aWdUMGt3?=
 =?utf-8?B?U2o4SFM5ZHNKUWdreDFNQkR2UHN1MFY1cmtkTFVXdUdZeko5enpOYXdoOXNF?=
 =?utf-8?B?TmFleExJVWJNblZtWkV6VDArbU5pSmNUcTN1VWZGQjY4TWZSM0xrcXV2V3RC?=
 =?utf-8?B?ZEM4Q1VnbUY4Z0FpdmR3TjlaUnEwRk4xN25lZmV6VTZGYXhtUVJPc2xKTlZC?=
 =?utf-8?B?RzFzZVBoT3RWb2F0cXdYQ0FBczl3S3ZKVjVXd2FuYXExK1Biekp1NTYzVVNS?=
 =?utf-8?B?UEtPMW05QWVwTFE3RCtROWtRa3lHVFNlWldXeGU1cG04MUROdmxyNTZURW5k?=
 =?utf-8?B?NFZIRk51TUc3VmU4ZlZxdWlKRkljaDlvdUxqRWRHSkJzMkpXc3dUUVhoMlZ6?=
 =?utf-8?B?YVJGTWY0MW5pN2kzUVZJNWVkbkJ5TDdZOFZieUttVWVsMG84QWM3bVhsYXlD?=
 =?utf-8?B?VDZZREwyU3pnNlBuMTd1RGJYVGNUUkwweE8wN0M3WmRyVjVRTkJMYTZtWDlY?=
 =?utf-8?B?SzZ1L3dwdFZtYTg3SjZBUWt2Y1ZrUWNzYzRmU1JCS01vMnhZMnppZENxSjly?=
 =?utf-8?B?TDh1bkM3cWYrOFk5UUYyTlFtYjFuWUloWldRQUd6YVhIeHF4RElMdWxxMWxS?=
 =?utf-8?B?T0g1MVFYU3lvUW9nSkZPZU0xVncrdUgzZlg5R3MwUW5rcElMT0dqSmx3SytE?=
 =?utf-8?B?MHJhMzlPUXJnbi9aTjBYVHRRYi9VdGhndnZndzl3Y3VsVS9PUG1DTTNJRDY1?=
 =?utf-8?B?K25Rakp2OWg0TUZndThQbnRTaTlVUHNQUnVKMUZDbk9OZlh5YTlHV1JuOGNr?=
 =?utf-8?B?Zmc1Z2hRNDhnWHFvS1EzVWdqOFQxdTlCMG5LYUxHQmMyYW0yK1l2V2ZnMlBE?=
 =?utf-8?B?M0hoZmwvL09uelFOQ005YTkyMjdPYjRQZjN4b2VHR3hMazNVenpGTXcvT1g5?=
 =?utf-8?B?SzJRY204Q2Y4QThSSDJEZ3RUUytIR1oyeVFCWlB6V2FpT1lMQTlPeUk1L0FH?=
 =?utf-8?B?QjJFYlpBSk1tR3YrUlAwbkJJVlJOdGQzeHYwL2FVSVhhVVlNYTFPSGt2SFpy?=
 =?utf-8?B?Q0tQdWk4QXZxY2g5NmFrQ1dlRW9MTW16czRsRHZRc0k5TC9IdC9FS0VQcytV?=
 =?utf-8?B?b0dod3EyazAxR1kvSmdpcXBMYUxIWkxtUDVLckFSRkFrWFEzeTNLV0pGQUd0?=
 =?utf-8?B?RmVZa28xdzhBUG8yNG5vQjFBcVpCQWwzTXF0UlRrdXB5MC9wOEh0bzhhVWt2?=
 =?utf-8?B?QVZJU0xweDZEWHQ1UHVIYjBSNk54a2xlT2lPNnI5eVh1T3ZXNXNWSnZBeDhY?=
 =?utf-8?B?ZnVtM0VsMVBBbEVWU29SL2lscUk5VDR4elVaaHRDZmNSWkF2T0lIM2k2aXAy?=
 =?utf-8?B?YytWM05qUzBUMit1ZVRSN0x2Tk5yYmpiazRDeGdwSXJPaVQ5aE85RXRwWWgz?=
 =?utf-8?B?SXA2Rlg2OWc2eHlvd2Z5ayt0L1lXTituVDNtTDhuUlhqSlBpNzdZRmVzY3Ra?=
 =?utf-8?B?ZHRUVlV3cmRBaytLajY5SjhKOHRSZkt1RDFBNE1CZGEvOFpBbkNoaG9iSHpq?=
 =?utf-8?B?NnVrQkNsVG9YZkRQUzlYa3RwZXNOV3NhREl2YkdGTUFSNWpoR1ZvbjJzeEhQ?=
 =?utf-8?B?TzI2SU1xcDdTdWJSQTJNaU9HdEpIR1oxVzNkSjlBK1hBdm9CTmp3WjZuRzkw?=
 =?utf-8?B?anRobGNhRFpGRERCQm94WTVDSVg4b21LamVucUNSWS9yOGtKUFRpalFhN2xl?=
 =?utf-8?B?Q3RTS2F2NHBUenVsOW5nTVNGSERDSXp3dGlpSXdJMXdkTmtBaWJBMlgyVnBK?=
 =?utf-8?B?bDV6S3Riby9lOEtEWmQzMWdJUzZCM2pkcTBmRFhaSmhMT2hKaU9mVVhFOFBB?=
 =?utf-8?Q?4jVbFs9VV2niCypNefUc1oneM?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 71040519-5356-4ece-d47d-08db409f43dc
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 06:28:18.8141
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tX9GdR3zJAzh3W6h85zVHlTXCfd0+Wt3RHPwskZMzrzydXv8Oz5EUXMqqUm3ObEabLvUsngu14wxLae2aZbXhQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB7128

On 18.04.2023 16:25, Julien Grall wrote:
> On 18/04/2023 14:13, Bertrand Marquis wrote:
>> With this series I would like to open a discussion on how to handle the
>> vector size and the corresponding command line / configuration / device
>> tree parameters.
>>
>> In general the user must either specify the vector size they want or
>> have a way to simply request the maximum supported size.
>>
>> In the current implementation if a size bigger than the supported one is provided:
>> - we silently disable SVE for dom0
>> - we silently disable SVE for dom0less
>> - we do not create a guest when done through tools
>>
>> This is not entirely consistent, and I think we should aim for
>> consistent behaviour unless we have arguments for the status quo.
> 
> +1.
> 
>> Is there any good reason to silently disable only for Dom0 and dom0less?
>>
>> I see some possible solutions here:
>>
>> - modify the parameter behaviour to use the supported size if the
>> parameter is bigger than it. This would at least keep SVE enabled if a
>> VM depends on it, and could simplify some of the handling by allowing
>> 2048 to request the maximum supported size.
> 
> My concern with this approach and the third one is that the user may
> take some time to realize the problem in the xl.cfg.  So...
> 
>>
>> - consistently stop if the parameter value is not supported (including
>> if SVE is not supported)
> 
> ... this is my preferred approach because it would be clear that the 
> value passed to Xen is bogus.

I did say earlier on that this comes with its own downside of preventing
boot from completing for no real reason. It's all Arm code, so you're
free to ignore me, but in similar situations elsewhere (sorry, I don't
recall a concrete example off the top of my head) we've aimed to allow
the system to boot, for the admin to then take corrective action if/as
needed.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 06:34:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 06:34:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523140.812907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1OQ-0003Za-Dy; Wed, 19 Apr 2023 06:34:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523140.812907; Wed, 19 Apr 2023 06:34:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1OQ-0003ZT-BG; Wed, 19 Apr 2023 06:34:14 +0000
Received: by outflank-mailman (input) for mailman id 523140;
 Wed, 19 Apr 2023 06:34:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp1OP-0003ZN-86
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 06:34:13 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20608.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::608])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 325617a6-de7c-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 08:34:12 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAWPR04MB9805.eurprd04.prod.outlook.com (2603:10a6:102:37e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 06:34:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 06:34:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 325617a6-de7c-11ed-b21f-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EmATe0gAi7pdiWwHuTfksl4bDpXDVlIiOui+1eHA7HJe4r6P37Ue23arT5g04ZopCdz/70UqQCnDHgqUPuRdzOI9aUDXCGnSJah0WN/TqPGOHU3ApZYlM+GcvfliqYSB3kavEsp3bMr2lFaD2VD291TSB1va0nwHn2KhMXNEZuFcwOgpgDfZMlNPF7CgIXQyE+QcG2qFaLkwbYOf/myERU5WR8/kf8HiHvIz/d2nRGfT3qrQBQQT+zpH+95HqhdnUhi7IQL5OD0FY5eXByXQnlnuMzfumOfmfwpgKuKfTV6ktDJcIw4ZsXthD9vbvRP7BzEvqRxsMyfPqo8YQm5S8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=iv6XBxAGE7g6mX4Zf7J1MBL4jIBjwrJDI5W2QHdeuz0=;
 b=nX4msxL/LPVy5+o7PXgBFTCOG7UqAFZzYG6wnCvtJ39ypGW9BcOonTjFHLpoXunLomTPjWec53JohnFbRdd9AT7N2TNY4tLdFNn586sGlIDRkFfWe3tBWm/dD8ZfcI00HOYt7To2UuClyX3QYF9WrZx0v46TDZ/O7z+BssBWxCD45pxdlK3p5SJG4g8bUxYMEHWHNPKZd7nqrq3Brap1dpO8GbFn1LKIsWGvb9y259zw3iMOe5NK5MACouWBsEJTTnQE0DP2kqUCd6VBoGxdgCbFw/Ixt+OqFDB1xX4YLHsrguVIqDeOKUOXy/SUEf36pxxqqDhnzIeW6BUX+ks5tQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iv6XBxAGE7g6mX4Zf7J1MBL4jIBjwrJDI5W2QHdeuz0=;
 b=jRssQ7vs4UCbzhJMKtxvU949dlOUD7mBO7h8+yoLMTW0BrSnZr1/rXqRGCssrAEsAdWXCR43pC9rGWoDtNdRbWFDn8GR009WKo2FZ42yYSW3NROAmiES7m+XTKFMgbxx3ZPtf0sM4wFQ8b3NRL9vDO17lEs5pU3AEa1XENFQlPqD5rlB2eKJ6POTOH7MqStHScITeSseB/t98mbKkimL0rnzBGrErzKkwlZ+kiKY0Er29V2qNM9jGuzNx1a16cRYk1pl0t1c5JRS41y+tDniEUzyIM/qK3PZ+wsoZjZ3H5tmbaaS2N3nZqYH69pNkDSj9wPT1wPwQvyf3lerlmtrOQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7969e744-bf32-94e9-0607-b16f36edac92@suse.com>
Date: Wed, 19 Apr 2023 08:34:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Luca Fancellu <Luca.Fancellu@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 18.04.2023 14:47, Bertrand Marquis wrote:
>> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>> @@ -118,3 +121,21 @@ void sve_restore_state(struct vcpu *v)
>>
>>     sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
>> }
>> +
>> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
>> +{
>> +    /*
>> +     * Negative SVE parameter value means to use the maximum supported
>> +     * vector length, otherwise if a positive value is provided, check if the
>> +     * vector length is a multiple of 128 and not bigger than the maximum value
>> +     * 2048
>> +     */
>> +    if ( val < 0 )
>> +        *out = get_sys_vl_len();
>> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
>> +        *out = val;
> 
> Shouldn't you also check that it is not greater than the maximum supported vector length?

Perhaps not "also" but "instead", because the supported length shouldn't be
larger than SVE_VL_MAX_BITS (or if there was a risk that it might be, that
should be taken care of elsewhere, e.g. in get_sys_vl_len(), not here).
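To make the suggestion concrete, here is a standalone sketch of the check bounded by the platform maximum instead of the architectural one. The constants and the mocked get_sys_vl_len() are stand-ins for Xen's Arm SVE internals, not the real definitions:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for Xen's SVE constants and helpers. */
#define SVE_VL_MULTIPLE_VAL 128
#define SVE_VL_MAX_BITS     2048

/* Example platform maximum vector length, in bits. */
static unsigned int get_sys_vl_len(void)
{
    return 512;
}

/*
 * Sketch of the parameter check: a negative value selects the platform
 * maximum; a positive value must be a multiple of 128 and no larger
 * than the platform maximum (rather than the architectural 2048).
 */
static bool sve_sanitize_vl_param(int val, unsigned int *out)
{
    if ( val < 0 )
        *out = get_sys_vl_len();
    else if ( (val % SVE_VL_MULTIPLE_VAL) == 0 &&
              val <= (int)get_sys_vl_len() )
        *out = val;
    else
        return false;

    return true;
}
```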

>> @@ -61,6 +62,21 @@ custom_param("dom0_mem", parse_dom0_mem);
>>
>> int __init parse_arch_dom0_param(const char *s, const char *e)
>> {
>> +    long long val;
>> +
>> +    if ( !parse_signed_integer("sve", s, e, &val) )
>> +    {
>> +#ifdef CONFIG_ARM64_SVE
>> +        if ( (val >= INT_MIN) && (val <= INT_MAX) )
>> +            opt_dom0_sve = val;
>> +        else
>> +            printk(XENLOG_INFO "'sve=%lld' value out of range!\n", val);
>> +#else
>> +        no_config_param("ARM64_SVE", "sve", s, e);
>> +#endif
> 
> Correct me if my understanding is wrong but here you just ignore the sve
> parameter if SVE is not supported by Xen ?
> 
> I am wondering whether we should not just refuse it here, as the user might
> wrongly think that the parameter had some effect.
> 
> Or is this the usual way to handle this case?

It is, or else we'd need to alter what no_config_param() does. Plus ignoring
the argument when !ARM64_SVE is no different from passing the same argument
to an older Xen version, or from passing any unknown one.
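For reference, the range check being discussed reduces to the following standalone sketch; store_sve_param() and the mocked opt_dom0_sve are illustrative names, not Xen's actual helpers:

```c
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

/* Mock of the int-typed option the parsed value is stored into. */
static int opt_dom0_sve;

/*
 * A parsed "sve=" value arrives as long long; it is only stored if it
 * fits in an int, otherwise a message is logged and it is ignored.
 */
static bool store_sve_param(long long val)
{
    if ( val >= INT_MIN && val <= INT_MAX )
    {
        opt_dom0_sve = (int)val;
        return true;
    }

    printf("'sve=%lld' value out of range!\n", val);
    return false;
}
```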

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 06:42:33 2023
Message-ID: <b81f6d44-080a-10fa-3148-67aa23504561@suse.com>
Date: Wed, 19 Apr 2023 08:41:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v3] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230418111032.487587-1-andrew.cooper3@citrix.com>
 <c2693ac0-4f6a-83ae-c477-75b3f05b938a@citrix.com>
 <226fba6c-aeca-d38b-7d47-07b2f8d6b403@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <226fba6c-aeca-d38b-7d47-07b2f8d6b403@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18.04.2023 19:54, Andrew Cooper wrote:
> On 18/04/2023 6:30 pm, Andrew Cooper wrote:
>> On 18/04/2023 12:10 pm, Andrew Cooper wrote:
>>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>>> index 36a07ef77eae..98529215ddec 100644
>>> @@ -5879,6 +5880,75 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>>>      return modify_xen_mappings(s, e, _PAGE_NONE);
>>>  }
>>>  
>>> +/*
>>> + * Similar to modify_xen_mappings(), but used by the alternatives and
>>> + * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
>>> + * responsibility of the caller, and *MUST* not be introduced here.
>>> + *
>>> + * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
>>> + * Must be called with present flags, and over present mappings.
>>> + * Must be called on leaf page boundaries, i.e. s and e must not be in the
>>> + * middle of a superpage.
>>> + */
>>> +void init_or_livepatch modify_xen_mappings_lite(
>>> +    unsigned long s, unsigned long e, unsigned int _nf)
>>> +{
>>> +    unsigned long v = s, fm, nf;
>>> +
>>> +    /* Set of valid PTE bits which may be altered. */
>>> +#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
>>> +    fm = put_pte_flags(FLAGS_MASK);
>>> +    nf = put_pte_flags(_nf & FLAGS_MASK);
>>> +#undef FLAGS_MASK
>>> +
>>> +    ASSERT(nf & _PAGE_PRESENT);
>>> +    ASSERT(IS_ALIGNED(s, PAGE_SIZE) && s >= XEN_VIRT_START);
>>> +    ASSERT(IS_ALIGNED(e, PAGE_SIZE) && e <= XEN_VIRT_END);
>>> +
>>> +    while ( v < e )
>>> +    {
>>> +        l2_pgentry_t *pl2e = &l2_xenmap[l2_table_offset(v)];
>>> +        l2_pgentry_t l2e = l2e_read_atomic(pl2e);
>>> +        unsigned int l2f = l2e_get_flags(l2e);
>>> +
>>> +        ASSERT(l2f & _PAGE_PRESENT);
>>> +
>>> +        if ( l2e_get_flags(l2e) & _PAGE_PSE )
>>> +        {
>>> +            ASSERT(l1_table_offset(v) == 0);
>>> +            ASSERT(e - v >= (1UL << L2_PAGETABLE_SHIFT));
>> On second thoughts, no.  This has just triggered in my final sanity
>> testing before pushing.
>>
>> Currently debugging.
> 
> (XEN) livepatch: lp: Applying 1 functions
> (XEN) *** ML (ffff82d040200000, ffff82d0403b4000, 0x163)
> (XEN)   l2t[001] SP: 000000009f4001a1->000000009f4001e3  (v
> ffff82d040200000, e ffff82d0403b4000)
> (XEN) hi_func: Hi! (called 1 times)
> (XEN) Hook executing.
> (XEN) *** ML (ffff82d040200000, ffff82d0403b4000, 0x121)
> (XEN)   l2t[001] SP: 000000009f4001e3->000000009f4001a1  (v
> ffff82d040200000, e ffff82d0403b4000)
> (XEN) livepatch: module metadata:
> 
> When Xen is using forced 2M alignment, the virtual_region entry for
> .text isn't aligned up to the end of the region.
> 
> So the final bullet point is actually wrong.  I'm going to relax it to
> say that it is the caller's responsibility to make sure that bad things
> don't happen if s or e are in the middle of a superpage, because I'm not
> changing how virtual_region works to satisfy this assert.

I.e. you'll drop both assertions, not just the one added recently? Or
else what you say you'll tweak the comment to isn't going to be
consistent with the code. (I ask because you say "this", not "these",
which is a little ambiguous in its meaning here.)
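The condition the triggered assertions encode can be reproduced in isolation. This is a sketch with illustrative x86 constants (4K pages, 2M superpages); the l2_xenmap walk itself is Xen-internal and not reproduced here:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative x86 constants: 4K pages, 2M (L2) superpages. */
#define L2_PAGETABLE_SHIFT  21
#define SUPERPAGE_SIZE      (1UL << L2_PAGETABLE_SHIFT)

#define IS_ALIGNED(x, a)    (((x) & ((a) - 1)) == 0)

/*
 * When a 2M superpage mapping is hit at v, the assertions require that
 * v sits on a 2M boundary and that the remaining range [v, e) covers
 * the whole superpage.
 */
static bool range_ok_for_superpage(uintptr_t v, uintptr_t e)
{
    return IS_ALIGNED(v, SUPERPAGE_SIZE) && (e - v) >= SUPERPAGE_SIZE;
}
```

With the addresses from the log above (v = ffff82d040200000, e = ffff82d0403b4000), v is 2M-aligned but e - v is only 0x1b4000 bytes, below 2M, so the second assertion fires.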

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 06:48:12 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180307-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180307: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e3d2c08322bc61e9c5b87b3c282dd2af3d52aec6
X-Osstest-Versions-That:
    ovmf=b16284e2a0011489f6e16dfcc6af7623c3cbaf0b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Apr 2023 06:48:02 +0000

flight 180307 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180307/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e3d2c08322bc61e9c5b87b3c282dd2af3d52aec6
baseline version:
 ovmf                 b16284e2a0011489f6e16dfcc6af7623c3cbaf0b

Last test of basis   180295  2023-04-18 06:12:09 Z    1 days
Testing same since   180307  2023-04-19 04:10:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Yi Li <yi1.li@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b16284e2a0..e3d2c08322  e3d2c08322bc61e9c5b87b3c282dd2af3d52aec6 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 06:48:42 2023
Message-ID: <af739920-4ef5-4762-7931-3479f688ab43@suse.com>
Date: Wed, 19 Apr 2023 08:48:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [xen-unstable test] 180296: regressions - FAIL
Content-Language: en-US
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-180296-mainreport@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <osstest-180296-mainreport@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0099.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB9438:EE_
X-MS-Office365-Filtering-Correlation-Id: 1e19ee83-630f-4e95-fba7-08db40a21987
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	FSP+3Q5A83B7yYIduAY2orMWQi2Xoke1++mv7AOrYbwyohLZL/02ZF4iJ3bO1U7STfcUi6ktYP9b/VNibAir3LZ0GYth0MPbPFTcdd+z1IyO8kXMliLB8NavHtHhwJWtAIPZtE+oXiHBK6dfQoYhoVSaNxWG2aWx61PJLeINNrK29Cs/2xMsOL6f4WJuaS/zMJ4QwOFuIh8OvSCn7ZbBtjDtL4aY2hzBXFfNPuwXEl/h+t6PyrzOGTa2co7mn/fMn0xSLAZGwH/0uZzhVkCkWNGgZoZNnZiEmFR6cvzSR9HT2wtSQjB74II8G0LSxoYGCjyk0x8oKBIiTGG4c1D0hv9e4coKvfoYamlXt7/DDlqsQOad7afhg5X3em8fiXH8RIAW6oLuhAvxJxJlhoehwV9NnH2jUvQmgxm/NmT1wqtWNAaU7qSCcUJrv2xqv1GOedPJ6awdjqshOyWYSHTlgd5vZYY65bMAx94AId9xk2CIGsWBOzRg6u14Tox4Xa3nWpENndV7fVuwHGh+hAONaLXZjXD5J0y/9NiooMQqbnnReKYwnv1ZUXuq153TJlETrqiWFsNfkdQwQAGWiOJulGbPFg4gQdSwzSfdn+hw5WQSt0FRh51VyP0qoUCczI9Z8q8s5v8YW9BU3/u6tOoJBQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(136003)(346002)(376002)(396003)(39860400002)(451199021)(2616005)(53546011)(6512007)(2906002)(6506007)(186003)(26005)(38100700002)(8936002)(83380400001)(4744005)(66946007)(86362001)(66556008)(66476007)(31686004)(478600001)(6486002)(966005)(36756003)(5660300002)(316002)(8676002)(41300700001)(31696002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1e19ee83-630f-4e95-fba7-08db40a21987
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 06:48:36.2620
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 13m2ylsFy3mD8QxUEu6pKqHG5n7Qm+0nUbq4uwrmRm5u7ZqTj7RiLg/NUgCxBGRFaAJnp9CfZcPqNzbzMldQJA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9438

On 18.04.2023 19:53, osstest service owner wrote:
> flight 180296 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/180296/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-armhf-armhf-examine    11 examine-serial/bootloader fail REGR. vs. 180287
>  test-armhf-armhf-examine     12 examine-serial/kernel    fail REGR. vs. 180287

I'm afraid I don't even know where to look for hints as to the reason
for this failure. The places I did look at left me without any clue.

>  test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 180287

This one looks to have been transient (though such transient failures
are still a little worrying).

linux-linus flights also have an issue:

Volume group "arndale-lakeside-vg" not found
Cannot process volume group arndale-lakeside-vg

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 06:57:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 06:57:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523160.812950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1lA-0007p1-1m; Wed, 19 Apr 2023 06:57:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523160.812950; Wed, 19 Apr 2023 06:57:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1l9-0007ou-UG; Wed, 19 Apr 2023 06:57:43 +0000
Received: by outflank-mailman (input) for mailman id 523160;
 Wed, 19 Apr 2023 06:57:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=APhf=AK=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1pp1l9-0007oo-0X
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 06:57:43 +0000
Received: from mail-pl1-x629.google.com (mail-pl1-x629.google.com
 [2607:f8b0:4864:20::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 782ed5dd-de7f-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 08:57:38 +0200 (CEST)
Received: by mail-pl1-x629.google.com with SMTP id
 d9443c01a7336-1a920d484bdso654515ad.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Apr 2023 23:57:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 782ed5dd-de7f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681887458; x=1684479458;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=pNpECdtaAli+4MRrq43TxkkMjq4pU1Ea7BhTbv50bzM=;
        b=P91l2YfrHSWpd/1Xtmo+ygNkws1/NftsrF3zzykxepxcs7YRhf/nh2qLhj2qJyArfv
         4sHukWyvs0bMuJQmUGPYWOduFyZR/hYNs+CW4sSfGD43NxKByrwW43Z7TC22j2RYbQ3L
         N1xVXTZ7b1wBx6Pa28SFNdfHD0HKLSTCDFiqIyHkc4rNQw/wtg71w+m+8ddiUZwRkXYI
         nlKfU0iW6jSHk1PmOwYao05OBUrl+uXXlJyYbf3+n32KsTDLdgFcnxW0RVpkynprvziH
         kjzMh7YhklECc+90qgM3N0xjTuZdKUXlwPRTkJDNxoIP/sGZ137gCU5MWQHCXnL3tyOi
         td8g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681887458; x=1684479458;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=pNpECdtaAli+4MRrq43TxkkMjq4pU1Ea7BhTbv50bzM=;
        b=IWucvvYK5ulXL6vr3x2DWdVdaeBY4705xva+3hBn5fFdvaCGNXBdvSjF441Y5YEEBo
         KBeWAmbBYSSVAc/5Wf4gnDp9k4mW73kEqjFyGxInYEXqLhncHxtmxbKd3pf5YF1/v6f7
         ZJ2AhlKYm/A5qWhT9OD7xtr2v/YdEKIotmoRKKX4+wVTL1Ei6KlTxj+822XQuB9B6geM
         lljNNjay/5C6NW7JXWPoWFq5jB0kA7XJ9bsfwy81SrORU4QijOdqD48AIHxuT2aJx5Ie
         z0GD3I3CpLWmzXExbfSfIMoZD+/vQdaeQn0fAp6AoRxdG/jo+pN+fpYXNrVEOCtlRdhC
         mMKg==
X-Gm-Message-State: AAQBX9c+1J54xJQlKiMnQhChoDyyqqXt79KsxlkzM9wHCjMwRLXkwCTB
	1JhUGv9/SwatBLYW49q6fknxFhChmpdxIdHZONY=
X-Google-Smtp-Source: AKy350YlEH6YgI7pzbT/f8s7mN/LPa1OZoALb+r7bOuaUqylheGtb9Wk7BhEDdRbVkQuLxuOYl8kG/vNHSAGhpy7SEE=
X-Received: by 2002:a17:903:294c:b0:1a6:ebc1:c54d with SMTP id
 li12-20020a170903294c00b001a6ebc1c54dmr4072785plb.30.1681887457650; Tue, 18
 Apr 2023 23:57:37 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org> <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Wed, 19 Apr 2023 10:03:42 +0300
Message-ID: <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>, 
	michal.orzel@amd.com
Content-Type: multipart/alternative; boundary="000000000000772bad05f9aaf025"

--000000000000772bad05f9aaf025
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello Stefano,

Thanks for the clarification.
My company uses Yocto for image generation.
What information do you need in order to advise me in this case?

Maybe the module sizes/addresses that were mentioned by @Julien Grall
<julien@xen.org>?

Regards,
Oleg

Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org>:

> On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> > Hi Julien,
> >
> > >> This feature has not been merged in Xen upstream yet
> >
> > > would assume that upstream + the series on the ML [1] work
> >
> > Please clarify this point.
> > Because the two thoughts are controversial.
>
> Hi Oleg,
>
> As Julien wrote, there is nothing controversial. As you are aware,
> Xilinx maintains a separate Xen tree specific for Xilinx here:
> https://github.com/xilinx/xen
>
> and the branch you are using (xlnx_rebase_4.16) comes from there.
>
>
> Instead, the upstream Xen tree lives here:
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary
>
>
> The Cache Coloring feature that you are trying to configure is present
> in xlnx_rebase_4.16, but not yet present upstream (there is an
> outstanding patch series to add cache coloring to Xen upstream but it
> hasn't been merged yet.)
>
>
> Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
> you as you already have Cache Coloring as a feature there.
>
>
> I take it you are using ImageBuilder to generate the boot configuration? If
> so, please post the ImageBuilder config file that you are using.
>
> But from the boot message, it looks like the colors configuration for
> Dom0 is incorrect.
>


--000000000000772bad05f9aaf025--


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:08:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:08:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523167.812960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1uu-0000wE-V2; Wed, 19 Apr 2023 07:07:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523167.812960; Wed, 19 Apr 2023 07:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1uu-0000w7-SI; Wed, 19 Apr 2023 07:07:48 +0000
Received: by outflank-mailman (input) for mailman id 523167;
 Wed, 19 Apr 2023 07:07:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp1ut-0000w0-Kl
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:07:47 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0626.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::626])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e2df8140-de80-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 09:07:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7798.eurprd04.prod.outlook.com (2603:10a6:20b:2a3::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 07:07:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 07:07:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2df8140-de80-11ed-b21f-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=htQTAAA10KK5xflJ7o6kU8iqdMhFeJoRH6OjcqdDkOWkVq+FzkhzfFFs87Vv+4w4hENKTDfrSQmZ1OCp8qpZdPQ2fnbgovyhEo0zQJwy8Xk0oirCb+XnYBBtG+9+E1laP186tZ219nYE9Jn5HVaee9QGKcpCd/N+fxyzljdrKsKMATQaOp9hSDkb1JY7+Jlp53lNRpaLvkPr0XPD8dgs2gIZJc4sA/1ofAwAbV0cavDBNnW87f5N8wz5cA/ZuLhjQECdFmExDZUQjy0Q76sPdvs1SzmmtSYnGd7fsaY+l04DYt/R63RYrH5jPUW1Ir0BMz06wbBCGJ2UH4L+KxNQLA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=r35Rb35zAHx8L0l0T4lYDIXwIFCBnnjs+xh5DuqE/4w=;
 b=Nzz4Pjy9sap1ZMsr0EMEt/NQWiJAjVsXm9nQoFUbz8kZb9kPTJJsNjWHSRmVhjari+Uxf+PQxlZrKV6Xd76I2TiJD8wvpWl8VkSb+In6Ab5o9Wc6oeh2Zkufe8Lf+m++Z3HnmF2wRieaIomxPGGF0FFdgisAxyKvt5hhL47VKEMJ76hxmYleYGLEqIyvZvwHKAznIPoe8xYsSfVEDxfrnIgSXP2kZK1553evbapgUO8q5lFfwBM0TeQvboLj9EZeayK1jg9m+af/gafrjOS3lJnMYgoTfvDEiOualOdot8OppBd2zLaBEkTOZpOHczlAiIUu8ACRUjpojyNjuZYQIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r35Rb35zAHx8L0l0T4lYDIXwIFCBnnjs+xh5DuqE/4w=;
 b=4qyiLa51KiNAvvC3hVEJb/lFS4Z2Bs75gDivRuHfhu5CtJQVvpqOT468+ug9YlMfaq1twalc6KMZ5dg2lpo2MrZRPACl0Xe7+DNcKccNHVJ71u+EV0Rxzx9QyDUn2gwkcDWLCv2kNee2EsRrfx5VJ/NR+0kg6Q94gv7FlxyAQuwwwhpG/8lvnDuxTXAap+3kR9q9CTczIqLLsFTnF0bpj1cWT1KfoZDyE7MNWP8/wwIfRXzHKjhmkWDmS3jIwf51oN/pQ4Jxm5Drg4COJ34I4xBT3uq3FYFIOdtbChVSq1aMBvY2X0K6cb+T6srezVjnbgMlaoIR519rNfU1XkIuLA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7452a070-48b8-03fb-26c7-3dc7d652dcba@suse.com>
Date: Wed, 19 Apr 2023 09:07:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Roger Pau Monne <roger.pau@citrix.com>
References: <20230418154223.20181-1-roger.pau@citrix.com>
 <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0259.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7798:EE_
X-MS-Office365-Filtering-Correlation-Id: e70aae77-990c-4afc-e043-08db40a4c5b3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(396003)(136003)(366004)(346002)(376002)(451199021)(66946007)(66556008)(66476007)(6916009)(4326008)(478600001)(316002)(54906003)(8936002)(8676002)(5660300002)(41300700001)(38100700002)(186003)(53546011)(2616005)(83380400001)(6486002)(6666004)(6512007)(6506007)(26005)(31696002)(86362001)(36756003)(2906002)(31686004)(66899021)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e70aae77-990c-4afc-e043-08db40a4c5b3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 07:07:44.0365
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0Kvx03GiMOKVmOMrG/pM9mragmNHeOSk9b29QNXOGl7TiPCVNpvvdPI94qx5N4swI//5PQ+wVYSOFe9qc/YCVA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7798

On 18.04.2023 17:54, Andrew Cooper wrote:
> On 18/04/2023 4:42 pm, Roger Pau Monne wrote:
>> The addition of the flags field in the vcpu_set_singleshot_timer in
>> 505ef3ea8687 is an ABI breakage, as the size of the structure is
>> increased.
>>
>> Remove such field addition and drop the implementation of the
>> VCPU_SSHOTTMR_future flag.  If a timer provides an expired timeout
>> value just inject the timer interrupt.
>>
>> Bump the Xen interface version, and keep the flags field and
>> VCPU_SSHOTTMR_future available for guests using the old interface.
>>
>> Note the removal of the field from the vcpu_set_singleshot_timer
>> struct allows removing the compat translation of the struct.
>>
>> Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> While everything said is true, this isn't the reason to get rid of
> VCPU_SSHOTTMR_future
> 
> While 505ef3ea8687 does appear to have been an ABI break, that's
> incidental.  It dates from 2007, so whatever we have now is the de facto
> ABI, whether we like it or not.
> 
> The reason to delete this is that it is a monumentally and entirely
> stupid idea which should have been rejected outright at the time, and
> the only guest we can find which uses it also BUG_ON()'s in response to
> -ETIME.

The instance in Linux (up to 4.6) that I could find was

	BUG_ON(ret != 0 && ret != -ETIME);

i.e. not really dying when getting back -ETIME. (And if there really was
a BUG_ON(ret) somewhere despite setting the flag, it would be a bug there,
not something to "fix" in Xen.) I'm afraid I also disagree on "stupid
idea" as well as ...

> It can literally only be used to shoot yourself in the foot with, and
> more recent Linuxes have dropped their use of it.

... this: If used correctly, it can avoid injection of a pointless event.
Clearly the Linux change dropping use of the flag indicates that its use
wasn't correct (anymore?), likely because of not properly dealing with
-ETIME up the call stack. I'm willing to trust Jeremy / Keir that at the
time of its introduction such a problem didn't exist.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:10:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:10:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523173.812972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1xA-0002J5-BW; Wed, 19 Apr 2023 07:10:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523173.812972; Wed, 19 Apr 2023 07:10:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1xA-0002Iy-8p; Wed, 19 Apr 2023 07:10:08 +0000
Received: by outflank-mailman (input) for mailman id 523173;
 Wed, 19 Apr 2023 07:10:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/+8i=AK=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pp1x8-0002Cl-EB
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:10:06 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20613.outbound.protection.outlook.com
 [2a01:111:f400:7d00::613])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 34916905-de81-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 09:10:03 +0200 (CEST)
Received: from DUZPR01CA0038.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:468::12) by DB4PR08MB9240.eurprd08.prod.outlook.com
 (2603:10a6:10:3f8::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Wed, 19 Apr
 2023 07:09:56 +0000
Received: from DBAEUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:468:cafe::39) by DUZPR01CA0038.outlook.office365.com
 (2603:10a6:10:468::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Wed, 19 Apr 2023 07:09:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT016.mail.protection.outlook.com (100.127.142.204) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 07:09:55 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Wed, 19 Apr 2023 07:09:55 +0000
Received: from f8277197a3a9.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 934AAE71-0034-4042-82FC-187394D6FC55.1; 
 Wed, 19 Apr 2023 07:09:48 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f8277197a3a9.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Apr 2023 07:09:48 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB6232.eurprd08.prod.outlook.com (2603:10a6:20b:296::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Wed, 19 Apr
 2023 07:09:47 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.022; Wed, 19 Apr 2023
 07:09:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34916905-de81-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JxOb2ldTZDZMjSMYWyWXZMPpukECH1gEsHRfQ3OwtoU=;
 b=e+fGq9v7uNRSMQv9D1RrwfPe+3pZ0O4bb8F9JiezIsNKAOrdPtd4f8q15lE0mqANhhGYPzOa0hxLS/sHZD+PD3QyJ88avwOU6uNKu93kb9jXJatqkTUnQg/ITmO2rYBR5fu77aJW6R3eV6lINhi+xfdiTdtzel6S1U5iZ5sd5hE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: cc3a2528ca62d7d4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jdP9rlMc0UaL3C4L8ZTZvt/eIPHPZbrMjA3GXOfnx5XDtXk/vk/DSF2ZJuaeUY+yPlenhRPl78fGUFJL9zTQwOp8J2Sqcm0TkzZgquw3cczWdfnKx04k1CV/ehsb4sRnXTcvHEF3aokoHvM3Eer/iOuVCmRLOgMOP7KxYmGIr9go7By3Lbrxph7FjL3jObLNk53JIH234S/Wy0owLcV5OftpVUgffEQe8+/CMGQbIzBCDjwwHn9PrCNWuTPHvJ72A8vNo2mP9KSr4cAJYhINTt641a6QRZ3tojpTFrn2ArI88QHAluMmvpOT++cE6UzeLdBDK8L+JKiqVKYIceTJDA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JxOb2ldTZDZMjSMYWyWXZMPpukECH1gEsHRfQ3OwtoU=;
 b=jLRFPn84oqRSaZtSTmAdBWkY4cSHrCXKe6QM1UOlAZ2cCBmTYYxwxhyoTHKq7wkMw8NESt8DobEZa8VbPQa0zi7oknzFhG1vNxsmQGNXXx3DuLnWI+DD3Pdd9/o/6t+sur+/TqyuU9QKSFUQMG1lr16JiJnJGMZV1CjSs4aw87t8lLqI5botv936yGgP7oNFg3ecCCkmmiYj8rAPDaqPeSZRZMqV/WPbpVT4UlqIqk0A/poEkqT9tefXFUkGPVMsg18nCcx9rigBQuwdUHkFPzTOJaYKLRUtkbqYfdow0VV3fWkQMiQfwnbGwo8NN59EqLq+GWtjg4XgpeyD6Hvf9w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JxOb2ldTZDZMjSMYWyWXZMPpukECH1gEsHRfQ3OwtoU=;
 b=e+fGq9v7uNRSMQv9D1RrwfPe+3pZ0O4bb8F9JiezIsNKAOrdPtd4f8q15lE0mqANhhGYPzOa0hxLS/sHZD+PD3QyJ88avwOU6uNKu93kb9jXJatqkTUnQg/ITmO2rYBR5fu77aJW6R3eV6lINhi+xfdiTdtzel6S1U5iZ5sd5hE=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZbSQ26+FC1T02dEGoq+zimN/6Ra8xC6+AgAE104A=
Date: Wed, 19 Apr 2023 07:09:46 +0000
Message-ID: <C99DF25D-538F-4373-9F3A-F4E62B9C4E54@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-6-luca.fancellu@arm.com>
 <109F3491-6845-4A5F-9F77-F24D8970B1BE@arm.com>
In-Reply-To: <109F3491-6845-4A5F-9F77-F24D8970B1BE@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB6232:EE_|DBAEUR03FT016:EE_|DB4PR08MB9240:EE_
X-MS-Office365-Filtering-Correlation-Id: 387c9a26-6053-4075-793b-08db40a513f8
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 lL6H7xurtju3njEHw6H8Gg6l1q0qvKtHpFSYy8ccOKoraUtKN8HgmrOcQcgzgTY1RdPEevski6Qt1KPqyWnMfSjXRnLWzZmRh6AfoO2KiN4ObNGV2M7y7HNOa7RmPdi20dkk3/IL0i1JkEUId3jPZwyLJHhV8EaZnQSCleKdrEaab2fOZkuEtJJUDcSMuTGg00PLv7Mlk1OwYp0k//frDUWo1G6XgASzBLy7c4+wqDYm4E1XNiwU7l2XazCKbfoR05uKooNKPAgA/6b3+E0jFKVkMIGPyw1VrM+wogw9wwr/3D/55zKIb/MVMwL4Ei64eZsmuz5VyPonlonA4SbapWompM9JZ45JGnCKzKuExA5lH7szy6BT88qB05RNr4lXsLXvlHyjObYGtM9mkTsXseyS8aeVGNuF9euNeiOoxxqRsEohGk2pxLVgUaPxULQrF5Yv8mrehqfzHkZ+cqnUjLat2pcMkafLBRA65njPTOklv0OnnjSLw5TFx5LrLvcydqBDOs/svwx2F9s3h1AGzIy/gPGH4sMquUBObK7G5Wbp0Jvt5nqL5QzVYnw18Lcqe+T1aImRgJJKPM1ZVjRNKWbarfPNL+yL2UjWNsx/hVB4/6hUYgGM+oHYkonVlApwqWP/rALbFLujZKlWfT5elA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(396003)(366004)(39860400002)(136003)(451199021)(186003)(71200400001)(6512007)(6506007)(53546011)(83380400001)(26005)(2616005)(38070700005)(6486002)(86362001)(2906002)(33656002)(316002)(5660300002)(76116006)(4326008)(6636002)(91956017)(6862004)(41300700001)(8676002)(64756008)(66556008)(8936002)(66476007)(66446008)(38100700002)(66946007)(478600001)(54906003)(122000001)(37006003)(36756003)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <46715C9128255D4F96508A79040EF3C9@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6232
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f4e888cd-bd73-4632-3150-08db40a50e9f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	antkDHpXvO8+N1lsWutVUCgsniB+jpcpdJ/WxRQv3ZpsytHLyIg3ocf4wJnk9zh66EtFwrk6JxDbAkBZh56gKXe8L6Aq5qWgmeZ0alUMB7t0gTRHZSRFpdutCBO8pS5Pufp7aUl2HjOS//FTqIEZiKZ+yCtv63husj+MRFRYhSL4lGPeuSqNgbUfEWHBUL2iV/PYRs7UYaAjRjX9xp+TIJvzLl50urMBijCeJSCGKmWYDBc2kqxPb7WyBQv1LsQ6jN5yZxr+oYoYLYt6QFf9DcROajfQ5b2z4XMJc0jee9TM+YMttO2i1N717+JF/eT4SU1O6+gkRQeQwuPtNG1VDzrOu8bDstXyjNIBgirR1Rrj/jzSnPs75jSgDy8RYW3E5PrlH2pEDswca6YS/OY9lKWLUQte8u/nUVbaUtcpyFxpCmrYoo2eRw5oCmue2yyGX8fdgp01JyJY06jSxbANhHFx3aBhI/MDH9YmfGvpLxSObJ796hDgAQfUqDBmU7cuNNGL1i9dkTwZ8o0UMZgM+qrR0NxdjMCWUfSY2kq1SeFLX6CvSPLwxop7yACBjKhc+JrNJjKhkTOB+uvd2xp+vL4qT1qOFQXpkiGl1cqeS6jtXQL1W4GE1Q0zn4YcMx2ozoR/+iltlotxlaF6tXS72+yN3ZK88/fFntAGDpxSs3s6n4Rpfd+WG9nron1LD0GT+wdWF/LCjzAGuCTI5g8ViQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(376002)(39850400004)(396003)(451199021)(40470700004)(46966006)(36840700001)(478600001)(6636002)(37006003)(33656002)(54906003)(2616005)(336012)(107886003)(41300700001)(6862004)(8936002)(8676002)(6486002)(86362001)(186003)(2906002)(40460700003)(47076005)(83380400001)(26005)(6506007)(6512007)(5660300002)(53546011)(36860700001)(36756003)(70206006)(70586007)(82310400005)(316002)(4326008)(82740400003)(81166007)(356005)(40480700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 07:09:55.1733
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 387c9a26-6053-4075-793b-08db40a513f8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB9240

Hi Bertrand,

> On 18 Apr 2023, at 13:40, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
> 
> Hi Luca,
> 
>> 
>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>> index 78f7482619da..5485648850a0 100644
>> --- a/xen/arch/arm/arm64/sve.c
>> +++ b/xen/arch/arm/arm64/sve.c
>> @@ -5,14 +5,29 @@
>> * Copyright (C) 2022 ARM Ltd.
>> */
>> 
>> -#include <xen/types.h>
>> -#include <asm/cpufeature.h>
>> +#include <xen/sched.h>
>> +#include <xen/sizes.h>
>> #include <asm/arm64/sve.h>
>> -#include <asm/arm64/sysregs.h>
>> -#include <asm/processor.h>
>> -#include <asm/system.h>
>> 
>> extern unsigned int sve_get_hw_vl(void);
>> +extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
>> +extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
>> +                         int restore_ffr);
>> +
>> +static inline unsigned int sve_zreg_ctx_size(unsigned int vl)
>> +{
>> +    /*
>> +     * Z0-31 registers size in bytes is computed from VL that is in bits, so VL
>> +     * in bytes is VL/8.
>> +     */
>> +    return (vl / 8U) * 32U;
>> +}
>> +
>> +static inline unsigned int sve_ffrreg_ctx_size(unsigned int vl)
>> +{
>> +    /* FFR register size is VL/8 bits, which is (VL/8)/8 bytes */
>> +    return (vl / 64U);
>> +}
>> 
>> register_t compute_max_zcr(void)
>> {
>> @@ -60,3 +75,46 @@ unsigned int get_sys_vl_len(void)
>>    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
>>            SVE_VL_MULTIPLE_VAL;
>> }
>> +
>> +int sve_context_init(struct vcpu *v)
>> +{
>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>> +    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
>> +                             sve_ffrreg_ctx_size(sve_vl_bits),
>> +                             L1_CACHE_BYTES);
>> +
>> +    if ( !ctx )
>> +        return -ENOMEM;
>> +
>> +    v->arch.vfp.sve_context = ctx;
>> +
>> +    return 0;
>> +}
>> +
>> +void sve_context_free(struct vcpu *v)
>> +{
>> +    xfree(v->arch.vfp.sve_context);
>> +}
>> +
>> +void sve_save_state(struct vcpu *v)
>> +{
>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>> +    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
>> +            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
> 
> You do quite a lot of computation here for something that does not change
> during the life of the VM.
> Could we store the context_end in the vcpu instead, and do this
> computation only on init and free?

Yes, sure. Would you be OK with having it in struct vfp_state?

> 
>> 
>> #endif /* _ARM_ARM64_SVE_H */
>> diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
>> index 4cabb9eb4d5e..3fdeb9d8cdef 100644
>> --- a/xen/arch/arm/include/asm/arm64/sysregs.h
>> +++ b/xen/arch/arm/include/asm/arm64/sysregs.h
>> @@ -88,6 +88,9 @@
>> #ifndef ID_AA64ISAR2_EL1
>> #define ID_AA64ISAR2_EL1            S3_0_C0_C6_2
>> #endif
>> +#ifndef ZCR_EL1
>> +#define ZCR_EL1                     S3_0_C1_C2_0
>> +#endif
>> 
> 
> What about ZCR_EL2 ?

It was already introduced in patch #1, because we use it there in compute_max_zcr().





From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:11:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:11:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523178.812983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1yq-0002yU-Qm; Wed, 19 Apr 2023 07:11:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523178.812983; Wed, 19 Apr 2023 07:11:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp1yq-0002yN-O0; Wed, 19 Apr 2023 07:11:52 +0000
Received: by outflank-mailman (input) for mailman id 523178;
 Wed, 19 Apr 2023 07:11:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/+8i=AK=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pp1yp-0002yH-4z
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:11:51 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2061f.outbound.protection.outlook.com
 [2a01:111:f400:7d00::61f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 73c0bfc5-de81-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 09:11:49 +0200 (CEST)
Received: from DU2PR04CA0345.eurprd04.prod.outlook.com (2603:10a6:10:2b4::16)
 by AM8PR08MB6532.eurprd08.prod.outlook.com (2603:10a6:20b:316::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 07:11:45 +0000
Received: from DBAEUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b4:cafe::9f) by DU2PR04CA0345.outlook.office365.com
 (2603:10a6:10:2b4::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Wed, 19 Apr 2023 07:11:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT063.mail.protection.outlook.com (100.127.142.255) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.20 via Frontend Transport; Wed, 19 Apr 2023 07:11:44 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Wed, 19 Apr 2023 07:11:44 +0000
Received: from 305ad627bad9.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5014C673-458E-4AE2-B3B4-CABB0EF1B6A7.1; 
 Wed, 19 Apr 2023 07:11:33 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 305ad627bad9.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Apr 2023 07:11:33 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB6232.eurprd08.prod.outlook.com (2603:10a6:20b:296::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Wed, 19 Apr
 2023 07:11:30 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.022; Wed, 19 Apr 2023
 07:11:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73c0bfc5-de81-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kMrIzwUcoDqf/Kh2TV1PHFWdZFOzdZ1c1nR199FGlZg=;
 b=LbsdVQqpnMn2EtvPPHRMVDWznGQ12lLCchn8vSMZ2qNoz9ohNgHeUwVI+WUE6c2cekVMgGyH9dRS61shemBu3uwzVhYyECoJTyxPMmVFVaaVYNxgMJ13dP/vRHCuAXGHhBYbR1M2l+8F2h2o/NabHb/OHmCh7y9/x+FF9yUGyD4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: e0aeb721de568a90
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z/uokU0HYyVXttR13m1mzImEN420OEW7pW/1W+m2P8IbhSLdHqqteTr/koRbMDI8jKuZzx2o7X+umqEb6mU5E7qUiF6+vE0PfejRQJFf4XoK0YbU/VmMnhaLvPZhGliEG8l7MWQRzwDTm6C6wRcvUmmqvUmrB5QfSEP+/yCDA6h/yf/lX6zylukpaD2K0H9DjwGq2Uj9uJ1kXlyyTdNbznukPUNCyt+nMhlGNEa6LnRCrVjOHu8qOmWDtHteym/guM3UlOtqNzaCtzP6gmhvCUfSmY441V624/X4wM9874F974NAtouOMZBRbxD5C/jrgpUWx7ehTJQbt/AcFHQO7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kMrIzwUcoDqf/Kh2TV1PHFWdZFOzdZ1c1nR199FGlZg=;
 b=Pe+AzWM1uC5f0qXvHXPaS1eA/2GVPS0FA9xVeK71J1Y8eaXvXhR6fX/DTTjbRx5KxqWBsk0Fu3D40hYaG4NorUFPvImw64V8Ve9/Lf1tjhnXYBcjcEYSMnUJ4geSx+hS/u7ZzOePhd0NX84IBmXdtg7eMg0kiXGmot/CZWyz5wtmgmxUIzSIABqGZfiAG0WF3LwFw5ThosdZCXwc3cY/fiJxRwFmW3ont9TodeHgfMrrDNfrxIeitJbDBDshA+pMqKQQGoo3HCsP3WbemFPZ5NpliIE5YAxQ+o+CA6m9tVbsF2rVMuGZbaYDpkn4059NMcAGFm7rEwWbN4e2IlJOoA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kMrIzwUcoDqf/Kh2TV1PHFWdZFOzdZ1c1nR199FGlZg=;
 b=LbsdVQqpnMn2EtvPPHRMVDWznGQ12lLCchn8vSMZ2qNoz9ohNgHeUwVI+WUE6c2cekVMgGyH9dRS61shemBu3uwzVhYyECoJTyxPMmVFVaaVYNxgMJ13dP/vRHCuAXGHhBYbR1M2l+8F2h2o/NabHb/OHmCh7y9/x+FF9yUGyD4=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Henry Wang <Henry.Wang@arm.com>, Community Manager
	<community.manager@xenproject.org>
Subject: Re: [PATCH v5 12/12] xen/changelog: Add SVE and "dom0" options to the
 changelog for Arm
Thread-Topic: [PATCH v5 12/12] xen/changelog: Add SVE and "dom0" options to
 the changelog for Arm
Thread-Index: AQHZbSQ7MYMS4SfeGk2TvWO0qogYra8xEDcAgAExxYA=
Date: Wed, 19 Apr 2023 07:11:28 +0000
Message-ID: <DEE57E48-1ADB-447B-90FB-0AC559B936F7@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-13-luca.fancellu@arm.com>
 <20BBA7B1-AF5B-49D0-BA79-17B90893C3CC@arm.com>
In-Reply-To: <20BBA7B1-AF5B-49D0-BA79-17B90893C3CC@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB6232:EE_|DBAEUR03FT063:EE_|AM8PR08MB6532:EE_
X-MS-Office365-Filtering-Correlation-Id: 64ea2526-9f42-4cfc-d965-08db40a5555f
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 R5gANSXd4o2pI69OHGmSws9Ws2vImYkb1QlsW2hVkNsj7O/TsPpF7sC4JmOXmRnJlN0tmp+2uRjh4k8eDGoECLTMVQHR/E6u23w3y3higtUS6TJUBf8wrVLifGQfAvnmeAZ6NLiKHMC/+NO1p6lPvWeWpLgIqoItKcxxlPmsihSDfb43upKgIfJnkdrCGh8/mWHkTJYugbynM4wf8OVDY5kM+yTel5jsFzRHY27BaY+DM8OVa+Bb5QqJC23dQBhhgplUMcuw0P1fvIkpj6e6xv5vS9o1dmmsU9Qn1uTsnbyr/H7+PN+OmJmFQDmxPo4f4u0dUrvY4VjLVqqSvyPfmAXVE+6+UkxlBHndQ3oLwuYZhe1bzsfW84pKoNHLn7LY+bfkttYxok1+N/Qj/3v9ukzwr+flYtIdEgVZE4R7PO7sOJQCGdaVTq77/9u55f4OUzDFFASJYCNQOhfGgjWZMGpttNI1PQ5yjfI2yCdmZeS0poUCKJ22cjSDIYwiJLn+PPsWIbIQX3r6KpBor980ux2o3j4b8mmugWr2/DM5gOrh+gEYbr1oySo5nRdDFbioaW3uQXcjYORm6pyXXLC6okso6xf+ObPLhOabe75tn8dYN6yimqnDG1uQ32Bm1CCBeHcfn2Fbmy4VqLRQdGkAoRRrrLkZUqibJFEkuI/h5UgJpYlcHDcEyRxpVOnfmRuK2XoeO1dVdv2qp1eUp9tcLtkmOjpl7iZeGmv78zfgLcw=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(396003)(366004)(39860400002)(136003)(451199021)(186003)(71200400001)(6512007)(6506007)(53546011)(83380400001)(26005)(2616005)(38070700005)(6486002)(86362001)(4001150100001)(2906002)(33656002)(316002)(5660300002)(76116006)(4326008)(6636002)(91956017)(6862004)(41300700001)(8676002)(64756008)(66556008)(8936002)(66476007)(66446008)(38100700002)(66946007)(478600001)(54906003)(122000001)(37006003)(36756003)(207903002)(219803003)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <752BA23F0DB10F4A9E26AC0FCD0F9313@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6232
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8f2fc8c2-c8d3-4f81-cb60-08db40a54bac
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	353yw2gVI+obJzwEf7kA6huWVp5RFJW8Mm+zzrmTKv9ctaikSeNt533FtqeZeo5ap9B5hV07/5EmZ0+oBoy+GdxUl0of8iGVbh9TyrpKct9cO56gg0R7mazR2Grwqtx/XX6bTbwCOWCgQ6PMujD/SAB/wUm2uDSNM/Lr7fR/oQso4MC/qumbGS31rQyvYH4Ho4NjtUjK5hJvd20+66YcxKCytAMogWz8THsJFbBpFLQQSWOOyXQtCY+B4+BZcxeD/tDW5Hr52T5fDi2VtNWRUhrPSmoR2nCuJ5deiIExHPv8p1jByfRbtTnDlxbew2hCrbH7y6ahmJH43Zvj5/L0SwD3J7rs9r3HO7RCrasU3NCVi1Odyk096tKeXcaTCIQPRL4px3Ii5SH0izysMFQ3AYPlhleG23Lg0E29OqQQZASSJMRA+1uOT1nxHDgEP17yaGmNFZYXxV/1AJsdyy0w5FljEY70NMlMzHNFCsP2YPkL53maBpdjIZdMF18vvpXcw13nI6xB4ZtCf5C4gVBOcZyS6+gYsWwIvNKxq/tl18ihU5fi3DqSxWqfy+iPR3FAvQUt3KSZFf4YRlFDXzdWeUI7SP/Mci9uJawxq2APrb/pB80GeSEkZjhPBemN6GpQR6Tf9X7Lj+zkWQ7hFvJ42VQeaFQep58IUzkQhmUL0WICRn911G+l8otm3P2fyyl+Funpqg1w74cD1jZG99M6G3UHeodABpQUJUTlf9lxnisQEH8MVAQCtHn3+UxR/C6jn1fUl519uyo01lrH3gpWGiekGRGtE3iP2QTTceLNjas=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(136003)(346002)(376002)(451199021)(46966006)(40470700004)(36840700001)(6486002)(86362001)(40480700001)(33656002)(40460700003)(36756003)(83380400001)(81166007)(82740400003)(356005)(47076005)(36860700001)(2616005)(82310400005)(336012)(53546011)(6506007)(6512007)(26005)(186003)(478600001)(54906003)(6636002)(37006003)(316002)(8676002)(8936002)(4326008)(70206006)(41300700001)(70586007)(5660300002)(4001150100001)(6862004)(2906002)(207903002)(219803003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 07:11:44.9183
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 64ea2526-9f42-4cfc-d965-08db40a5555f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6532

> On 18 Apr 2023, at 13:56, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
> 
> Hi Luca,
> 
>> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>> 
>> Arm can now use the "dom0=" Xen command line option, and support
>> for guests running SVE instructions has been added; put entries in the
>> changelog.
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> Changes from v4:
>> - No changes
>> Changes from v3:
>> - new patch
>> ---
>> CHANGELOG.md | 5 +++++
>> 1 file changed, 5 insertions(+)
>> 
>> diff --git a/CHANGELOG.md b/CHANGELOG.md
>> index c978cfd9b68f..a24951603359 100644
>> --- a/CHANGELOG.md
>> +++ b/CHANGELOG.md
>> @@ -6,6 +6,10 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>> 
>> ## [unstable UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=staging) - TBD
>> 
>> +### Changed
>> +- The "dom0" option is now supported on Arm and "sve=" sub-option can be used
>> +  to enable dom0 guest to use SVE/SVE2 instructions.
>> +
>> ### Added
>> - On x86, support for features new in Intel Sapphire Rapids CPUs:
>>   - PKS (Protection Key Supervisor) available to HVM/PVH guests.
>> @@ -14,6 +18,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>>   - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
>>     wide impact of a guest misusing atomic instructions.
>> - xl/libxl can customize SMBIOS strings for HVM guests.
>> + - On Arm, Xen supports guests running SVE/SVE2 instructions.
> 
> Might be a good idea to mention that this is a tech preview?

Sure, I'll mention that. Is there something in particular that needs to be done, or is it enough to say that here?

> 
> Cheers
> Bertrand
> 
>> 
>> ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
>> 
>> --
>> 2.34.1


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:14:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:14:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523183.812993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp21C-0003Y6-7X; Wed, 19 Apr 2023 07:14:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523183.812993; Wed, 19 Apr 2023 07:14:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp21C-0003Xz-4b; Wed, 19 Apr 2023 07:14:18 +0000
Received: by outflank-mailman (input) for mailman id 523183;
 Wed, 19 Apr 2023 07:14:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qUCI=AK=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pp21B-0003Xt-3B
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:14:17 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0613.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::613])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb4aa338-de81-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 09:14:16 +0200 (CEST)
Received: from AM6PR10CA0049.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:209:80::26)
 by AS8PR08MB9503.eurprd08.prod.outlook.com (2603:10a6:20b:61d::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 07:14:14 +0000
Received: from AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:80:cafe::2d) by AM6PR10CA0049.outlook.office365.com
 (2603:10a6:209:80::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.20 via Frontend
 Transport; Wed, 19 Apr 2023 07:14:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT032.mail.protection.outlook.com (100.127.140.65) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 07:14:14 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Wed, 19 Apr 2023 07:14:14 +0000
Received: from f7c348952fff.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D7347220-EE19-4D83-9924-94F6E8012A94.1; 
 Wed, 19 Apr 2023 07:14:02 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f7c348952fff.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Apr 2023 07:14:02 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AM9PR08MB5876.eurprd08.prod.outlook.com (2603:10a6:20b:2d5::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 07:13:48 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 07:13:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb4aa338-de81-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qabZrQ2wjO5kr9XBBY2OyNobnlg9Rn35ZXmKMpR4CNI=;
 b=glVYZvqOyvTRgcPJVAPUSeBB9XYmSsZ85I8Mz14LuHV0m7SgyQmpYkeoUHFl45T/QOifmK2Gmet57u9zOVep6uM1iMK312vekNnyh5wxtCwGHMe/jfuyiEvrTINVZJ9K4cOyx+jH5E/LL9i9i2lH1PRU0rPtsNSYFVCADLFs5Eo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: b51d2e3333e6b565
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eNBXoRfHZyQ/7p5jICL6K8Ea9JDBZ2wc6gmwWbjbjkEjxDegSfh9q0OsuTfU24V/OrwexcwGefOt8mezrBOxyFcR+kJ+n22H0Y9b5wo2EvxoSl5j/Ki3lZKGKu1An8GdAZeV+bycYMFobSixXY5RQPL0ZnkLXK9I6E/gGtDe4KUTe50BSFEj/1DHkN6GKbZ9S7NXb/VPiKz1hc/Ua0KY3GjVa87tsFjKj8zzx67vfoK0bjibmqE+evc34hmoUC98b9yBcrdLGHGpBReqD80CebBsa2KJ7IIIgk+lQPZrNW7dB6EOoD7iZ7YMA6REByPrUQt6MO/6KlBfVsVK/hn6pA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qabZrQ2wjO5kr9XBBY2OyNobnlg9Rn35ZXmKMpR4CNI=;
 b=cvynrUhbNJNFCc3LbExaN3Wb8Tb7BqAjq5cnyO5Y9UTWcffP72+9kiwuD5GFRY7CirQ2vwkj6ZsJVYVTa9wxUKxPclvnrgiyl6cHOfUERblFe7pYeeikhIX8hJ8W6LuRu8GCDOYbn0uD4k9+t9PpXcQOlum3fsnu1hdIuMZ09774kF79sGMP1wiKAL7Od24xWG/GdkKhK4Nq+n5QWJ7lPNQF1wcU3w3tR8sT4rWvssQ3kg3gMelyXldvAw4+m9GFVTtAA485uKiv1vN7BXOaR12+UWF8QNWMeW0CV6utLVSusi6mP+vPiKf5LlqncwAdDjq+JOce745g8z0S7u7LgA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qabZrQ2wjO5kr9XBBY2OyNobnlg9Rn35ZXmKMpR4CNI=;
 b=glVYZvqOyvTRgcPJVAPUSeBB9XYmSsZ85I8Mz14LuHV0m7SgyQmpYkeoUHFl45T/QOifmK2Gmet57u9zOVep6uM1iMK312vekNnyh5wxtCwGHMe/jfuyiEvrTINVZJ9K4cOyx+jH5E/LL9i9i2lH1PRU0rPtsNSYFVCADLFs5Eo=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZbSQq1oowf9vRd0a2ZPNGYXQmqK8xC64AgAE14QCAAAESAA==
Date: Wed, 19 Apr 2023 07:13:47 +0000
Message-ID: <2B510623-0438-4D01-A916-14A8DE8D0A8C@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-6-luca.fancellu@arm.com>
 <109F3491-6845-4A5F-9F77-F24D8970B1BE@arm.com>
 <C99DF25D-538F-4373-9F3A-F4E62B9C4E54@arm.com>
In-Reply-To: <C99DF25D-538F-4373-9F3A-F4E62B9C4E54@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AM9PR08MB5876:EE_|AM7EUR03FT032:EE_|AS8PR08MB9503:EE_
X-MS-Office365-Filtering-Correlation-Id: ac79462d-7d56-430a-7080-08db40a5ae66
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <5DB8A14516F7D64DB9BEFBB583FF57FE@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5876
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	23aa60d8-ca38-4769-b42c-08db40a59e5e
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 07:14:14.2319
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ac79462d-7d56-430a-7080-08db40a5ae66
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9503

Hi Luca,

> On 19 Apr 2023, at 09:09, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> Hi Bertrand,
>
>> On 18 Apr 2023, at 13:40, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>>
>> Hi Luca,
>>
>>>
>>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>>> index 78f7482619da..5485648850a0 100644
>>> --- a/xen/arch/arm/arm64/sve.c
>>> +++ b/xen/arch/arm/arm64/sve.c
>>> @@ -5,14 +5,29 @@
>>> * Copyright (C) 2022 ARM Ltd.
>>> */
>>>
>>> -#include <xen/types.h>
>>> -#include <asm/cpufeature.h>
>>> +#include <xen/sched.h>
>>> +#include <xen/sizes.h>
>>> #include <asm/arm64/sve.h>
>>> -#include <asm/arm64/sysregs.h>
>>> -#include <asm/processor.h>
>>> -#include <asm/system.h>
>>>
>>> extern unsigned int sve_get_hw_vl(void);
>>> +extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
>>> +extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
>>> +                         int restore_ffr);
>>> +
>>> +static inline unsigned int sve_zreg_ctx_size(unsigned int vl)
>>> +{
>>> +    /*
>>> +     * Z0-31 registers size in bytes is computed from VL that is in bits, so VL
>>> +     * in bytes is VL/8.
>>> +     */
>>> +    return (vl / 8U) * 32U;
>>> +}
>>> +
>>> +static inline unsigned int sve_ffrreg_ctx_size(unsigned int vl)
>>> +{
>>> +    /* FFR register size is VL/8, which is in bytes (VL/8)/8 */
>>> +    return (vl / 64U);
>>> +}
>>>
>>> register_t compute_max_zcr(void)
>>> {
>>> @@ -60,3 +75,46 @@ unsigned int get_sys_vl_len(void)
>>>   return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
>>>           SVE_VL_MULTIPLE_VAL;
>>> }
>>> +
>>> +int sve_context_init(struct vcpu *v)
>>> +{
>>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>>> +    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
>>> +                             sve_ffrreg_ctx_size(sve_vl_bits),
>>> +                             L1_CACHE_BYTES);
>>> +
>>> +    if ( !ctx )
>>> +        return -ENOMEM;
>>> +
>>> +    v->arch.vfp.sve_context = ctx;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +void sve_context_free(struct vcpu *v)
>>> +{
>>> +    xfree(v->arch.vfp.sve_context);
>>> +}
>>> +
>>> +void sve_save_state(struct vcpu *v)
>>> +{
>>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>>> +    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
>>> +            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>>
>> You do quite some computation here for something which does not change
>> during the life of the VM.
>> Could we save the context_end in the vcpu instead and just do this
>> computation on init and free only ?
>
> Yes sure, would you be ok to have it in struct vfp_state?

Yes, definitely, I would store it instead of the current sve_context.

>
>>
>>>
>>> #endif /* _ARM_ARM64_SVE_H */
>>> diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
>>> index 4cabb9eb4d5e..3fdeb9d8cdef 100644
>>> --- a/xen/arch/arm/include/asm/arm64/sysregs.h
>>> +++ b/xen/arch/arm/include/asm/arm64/sysregs.h
>>> @@ -88,6 +88,9 @@
>>> #ifndef ID_AA64ISAR2_EL1
>>> #define ID_AA64ISAR2_EL1            S3_0_C0_C6_2
>>> #endif
>>> +#ifndef ZCR_EL1
>>> +#define ZCR_EL1                     S3_0_C1_C2_0
>>> +#endif
>>>
>>
>> What about ZCR_EL2 ?
>
> It was introduced in patch #1 because we use it in register_t compute_max_zcr(void)
>

Sorry, I missed that.

Cheers
Bertrand

>
>



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:16:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:16:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523187.813002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp22p-00048v-Ih; Wed, 19 Apr 2023 07:15:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523187.813002; Wed, 19 Apr 2023 07:15:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp22p-00048o-Fw; Wed, 19 Apr 2023 07:15:59 +0000
Received: by outflank-mailman (input) for mailman id 523187;
 Wed, 19 Apr 2023 07:15:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/+8i=AK=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pp22o-00048g-C8
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:15:58 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20625.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0769f26a-de82-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 09:15:57 +0200 (CEST)
Received: from DUZPR01CA0032.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:468::11) by AS2PR08MB8746.eurprd08.prod.outlook.com
 (2603:10a6:20b:55e::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Wed, 19 Apr
 2023 07:15:50 +0000
Received: from DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:468:cafe::e6) by DUZPR01CA0032.outlook.office365.com
 (2603:10a6:10:468::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Wed, 19 Apr 2023 07:15:49 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT007.mail.protection.outlook.com (100.127.142.161) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 07:15:49 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Wed, 19 Apr 2023 07:15:49 +0000
Received: from a331d0faefe1.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0C3E11AD-592C-4A4C-BDF4-F11910E74BE6.1; 
 Wed, 19 Apr 2023 07:15:38 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a331d0faefe1.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Apr 2023 07:15:38 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PR3PR08MB5595.eurprd08.prod.outlook.com (2603:10a6:102:83::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 07:15:34 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.022; Wed, 19 Apr 2023
 07:15:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0769f26a-de82-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KK7PKWpjrjn0hXtChFEkJ3JAre4L3RVXSIAdZH3Vf4o=;
 b=GhmU3iWGHygi8IX6ZOGqfTLuT4BlCAouao93v+iBNLzLwwI92dNOnOMNYt/pR/4/60hsPI0HSoDheymOqb4QRcedosLc1UZjT1icRqCh0+gI9g83e7y3JXlI2YEcdNWcxbhYJvOFATU4tsRlGsbbC4J0jOvpkl5v9b8cf61np80=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: c64387cdb22c641a
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Index: AQHZbSQ+XccnxtxNG0WbEgpX88e2za8xDZ2AgAE1hQA=
Date: Wed, 19 Apr 2023 07:15:34 +0000
Message-ID: <F30CEF99-693A-4218-AC80-433A183DE866@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
In-Reply-To: <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PR3PR08MB5595:EE_|DBAEUR03FT007:EE_|AS2PR08MB8746:EE_
X-MS-Office365-Filtering-Correlation-Id: cca6dc1a-b5a4-4ad9-ba65-08db40a5e75c
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <1E74389164BEEC4F9101B62E7F773D99@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5595
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	48826e2e-24e4-44b6-2324-08db40a5de54
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 07:15:49.8452
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cca6dc1a-b5a4-4ad9-ba65-08db40a5e75c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8746

Hi Bertrand,

>> 
>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>> index 5485648850a0..ad5db62e1805 100644
>> --- a/xen/arch/arm/arm64/sve.c
>> +++ b/xen/arch/arm/arm64/sve.c
>> @@ -9,6 +9,9 @@
>> #include <xen/sizes.h>
>> #include <asm/arm64/sve.h>
>> 
>> +/* opt_dom0_sve: allow Dom0 to use SVE and set maximum vector length. */
>> +int __initdata opt_dom0_sve;
>> +
>> extern unsigned int sve_get_hw_vl(void);
>> extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
>> extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
>> @@ -118,3 +121,21 @@ void sve_restore_state(struct vcpu *v)
>> 
>>    sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
>> }
>> +
>> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
>> +{
>> +    /*
>> +     * Negative SVE parameter value means to use the maximum supported
>> +     * vector length, otherwise if a positive value is provided, check if the
>> +     * vector length is a multiple of 128 and not bigger than the maximum value
>> +     * 2048
>> +     */
>> +    if ( val < 0 )
>> +        *out = get_sys_vl_len();
>> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
>> +        *out = val;
> 
> Shouldn't you also check if it is not greater than the maximum vector length ?

I don't understand, I am checking that the value is below or equal to SVE_VL_MAX_BITS.
If you mean that it should also be checked against the maximum length supported by the platform,
I think this is not the right place: the check is already in arch_sanitise_domain_config(), introduced
in patch #2.

> 
>> +    else
>> +        return -1;
>> +
>> +    return 0;
>> +}
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index eeb4662f0eee..3f30ef5c37b6 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -26,6 +26,7 @@
>> #include <asm/platform.h>
>> #include <asm/psci.h>
>> #include <asm/setup.h>
>> +#include <asm/arm64/sve.h>
>> #include <asm/cpufeature.h>
>> #include <asm/domain_build.h>
>> #include <xen/event.h>
>> @@ -61,6 +62,21 @@ custom_param("dom0_mem", parse_dom0_mem);
>> 
>> int __init parse_arch_dom0_param(const char *s, const char *e)
>> {
>> +    long long val;
>> +
>> +    if ( !parse_signed_integer("sve", s, e, &val) )
>> +    {
>> +#ifdef CONFIG_ARM64_SVE
>> +        if ( (val >= INT_MIN) && (val <= INT_MAX) )
>> +            opt_dom0_sve = val;
>> +        else
>> +            printk(XENLOG_INFO "'sve=%lld' value out of range!\n", val);
>> +#else
>> +        no_config_param("ARM64_SVE", "sve", s, e);
>> +#endif
> 
> Correct me if my understanding is wrong but here you just ignore the sve
> parameter if SVE is not supported by Xen ?
> 
> I am a bit wondering if we should not just refuse it here as the user might
> wrongly think that his parameter had some effect.
> 
> Or is it a usual way to handle this case ?

Jan suggested this approach, as it should be the preferred way to handle the case,
looking into the x86 code it seems so.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:16:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:16:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523192.813013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp23W-0004iB-Ui; Wed, 19 Apr 2023 07:16:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523192.813013; Wed, 19 Apr 2023 07:16:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp23W-0004i4-Rz; Wed, 19 Apr 2023 07:16:42 +0000
Received: by outflank-mailman (input) for mailman id 523192;
 Wed, 19 Apr 2023 07:16:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qUCI=AK=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pp23V-00048g-Tn
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:16:41 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0628.outbound.protection.outlook.com
 [2a01:111:f400:fe02::628])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 21c521b5-de82-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 09:16:41 +0200 (CEST)
Received: from DBBPR09CA0025.eurprd09.prod.outlook.com (2603:10a6:10:d4::13)
 by DB5PR08MB10115.eurprd08.prod.outlook.com (2603:10a6:10:4a2::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 07:16:34 +0000
Received: from DBAEUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:d4:cafe::a5) by DBBPR09CA0025.outlook.office365.com
 (2603:10a6:10:d4::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22 via Frontend
 Transport; Wed, 19 Apr 2023 07:16:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT051.mail.protection.outlook.com (100.127.142.148) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 07:16:34 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Wed, 19 Apr 2023 07:16:34 +0000
Received: from 5acadf1ab1e9.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 EBDE0E92-F75B-4569-85DB-C8938004437E.1; 
 Wed, 19 Apr 2023 07:16:23 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5acadf1ab1e9.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Apr 2023 07:16:23 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AM8PR08MB6499.eurprd08.prod.outlook.com (2603:10a6:20b:317::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 07:16:21 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 07:16:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21c521b5-de82-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HL61DGg9KUlQhzPgLOsxXQAgFBDQneW30RVqjM8Jgbs=;
 b=j/wQadt+gLhIeizhYoLDITuYX3SG5lUUsNNEaiPp9a+rbXzBgh8k9S114eLK2hoKaPzBRZiIgrOgTcDXxJ0GBIIBbI0ucFFtoTY2SgajxsfqlQFi9kMDnWWvrxs36h0cKwx9L6sjKxUNEErKqGbsXRUZoICsYzCarMC5EMmfx9A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8aa47bec2dbdabbf
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Henry Wang <Henry.Wang@arm.com>, Community Manager
	<community.manager@xenproject.org>
Subject: Re: [PATCH v5 12/12] xen/changelog: Add SVE and "dom0" options to the
 changelog for Arm
Thread-Topic: [PATCH v5 12/12] xen/changelog: Add SVE and "dom0" options to
 the changelog for Arm
Thread-Index: AQHZbSQvB461HU2dmk2buv9fQooVPa8xECsAgAEx3gCAAAFQAA==
Date: Wed, 19 Apr 2023 07:16:21 +0000
Message-ID: <3C0380BC-EC6F-4C86-AB7F-A87DEE843AA0@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-13-luca.fancellu@arm.com>
 <20BBA7B1-AF5B-49D0-BA79-17B90893C3CC@arm.com>
 <DEE57E48-1ADB-447B-90FB-0AC559B936F7@arm.com>
In-Reply-To: <DEE57E48-1ADB-447B-90FB-0AC559B936F7@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AM8PR08MB6499:EE_|DBAEUR03FT051:EE_|DB5PR08MB10115:EE_
X-MS-Office365-Filtering-Correlation-Id: 72527751-e3ee-4dcd-f565-08db40a60201
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 bx5kgNmdFBwoDPo6KnL1APn0VL1rl7aD9KAzY3IEIo2aj7UHNe6CIAbSR8oUR1Xlv2eaql2yfiIY4yhwiaM7ohgkujvDwGh0PqWx9saWR5igOG5SkOOTJtFTNCvHn3iH3qdoL0n+khsM79UlOd3MC1f6QCSuPGdTlhcpWMv9/TiTcEOGZb4x/RN+Kx0SkQTh1dGnvuZqAT63dQg3R+EGS5/PGCCRiypaJno0aa6MoKJk0dCCiijmcOaxvOo5VnN7ZO9YrJM7wpT/DwjVLhmzaFsDdVjKKVd3xTjCcbSf98YvNVg86ZYjSkIuFWZl1wYy6OAVjdygQAiBCCARtAY2q0pCflcPiMfj0MOuOiI6SL2SVZ2nASYIrDHR2VHAMYTemD1sfvVvnMFcqSYYLE0hIX7hhJ12E7pxO2gmwXySviF/cZ9BdH+NUQ90NLxulLUcj29b95TC6wCzV602R02TRHVGzpc6oJVSy9d7zXG3Ez1cSRHxQF4NQqbwwOkQ7PbSSfI7XBnMvDjRvcNRqhePTaM+bE9rLmiSCaH3xwIwbdP0t+l6MeW1b421wVCO7Qp8+ttIQahV461BFWPnZif+tAf+CCGF0V9ASMTMJ9TLoWa8nnPt9bJWE3DgahFpcKxDQ2N993dBQAz07V+DP/WjMH4WQUfVvZZLoQLbNfpEZwof913aaoyyyd4kd3rqlCxEmrARZwQlswLf44BLRfc2Kew3LNooOyNUixFFbV8zWJk=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(366004)(376002)(39860400002)(396003)(451199021)(38070700005)(4001150100001)(2906002)(8676002)(8936002)(6862004)(38100700002)(5660300002)(36756003)(33656002)(86362001)(6486002)(71200400001)(6512007)(6506007)(37006003)(54906003)(6636002)(478600001)(2616005)(83380400001)(53546011)(66946007)(186003)(76116006)(91956017)(316002)(41300700001)(122000001)(66556008)(66476007)(64756008)(66446008)(4326008)(219803003)(207903002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <B4387C3BB6B94942B6F544BD347644A5@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6499
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	fc6fc68a-d3ef-4a4b-ae90-08db40a5f9ff
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	t7x/sM8FRVx2igJZnRFGLBi6HJmZMVI170uMFj+SqsY/vRL/9rdDNqbM99f3W15L5plYpmld2SBm/btHhliTEvBWGeTpR6EgnIvE5PVK6y5gTXlFWyg3lRR6PLAxivHtVOsErjTRwVYiGLqAFBhKxkt4yWYHfVipRhv9HT2P8DYDPoUNdibnNd1ppmeAjTObkQcD2evTy6V058zd/Ke4TsCZUyVXDQu8NnThCZLRB4OOo6DL/6S8PBRLPzkFmVSm5pDvb/jUew+8p2Y89Y3buSocJHpVzAzMd7ATKF+oVbJ85kZDUnCZbZ4vVWcf1BhAY2g+8mMrmdETnol2dXB4ta/WDuKUBFns59/hZiJQ+AKYPUndy/XucqxquXDzZcl2LYj9mJn4XiYXRzqxHv0VYaNvfOlp05mERMQxZPNj8V6S/QJR+a3WgKJPG0bhPviFYcRY97+9qcIcG96Ro4Ow8/jxu8DOgy0TJURfmyfBNWuWuclEGiRCE41AXp+8+YfvrmTbAAURrH5X4BGFguo/cTt5UWmjQqlglchn0M0B8duKIbrQGdcQokZY+ArGwA3poL62KGwBsgZWo7vQJVOnT0LN8edw1clvTjTRpq4IsZGPvhKMra7TmY3PfDqw6s8X45F3hcaPA6T7vmyaIwwDkbzko2BDggR9nNL+H63bk9g3GTnYTdzKMttPXIOT6bVN7/8zuYVoR6k2gXChHRnpmLJ0LI2VqCbv5nAMrViumveGsBNfzvtyDDOEzl6DeD1ubDzKg6TQ/7n/1c7nxjmscn25QB5qm2KJYGYJv4CxrOY=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(136003)(346002)(39850400004)(451199021)(46966006)(36840700001)(40470700004)(37006003)(478600001)(8936002)(6862004)(8676002)(316002)(82740400003)(41300700001)(6636002)(4326008)(70586007)(70206006)(40480700001)(81166007)(54906003)(356005)(186003)(40460700003)(4001150100001)(2906002)(53546011)(6512007)(36756003)(26005)(83380400001)(6506007)(86362001)(336012)(47076005)(82310400005)(33656002)(2616005)(5660300002)(36860700001)(6486002)(219803003)(207903002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 07:16:34.5332
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 72527751-e3ee-4dcd-f565-08db40a60201
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB5PR08MB10115

Hi Luca,

> On 19 Apr 2023, at 09:11, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> 
> 
>> On 18 Apr 2023, at 13:56, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>> 
>> Hi Luca,
>> 
>>> On 12 Apr 2023, at 11:49, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>> 
>>> Arm now can use the "dom0=" Xen command line option and the support
>>> for guests running SVE instructions is added, put entries in the
>>> changelog.
>>> 
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> ---
>>> Changes from v4:
>>> - No changes
>>> Change from v3:
>>> - new patch
>>> ---
>>> CHANGELOG.md | 5 +++++
>>> 1 file changed, 5 insertions(+)
>>> 
>>> diff --git a/CHANGELOG.md b/CHANGELOG.md
>>> index c978cfd9b68f..a24951603359 100644
>>> --- a/CHANGELOG.md
>>> +++ b/CHANGELOG.md
>>> @@ -6,6 +6,10 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>>> 
>>> ## [unstable UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=staging) - TBD
>>> 
>>> +### Changed
>>> +- The "dom0" option is now supported on Arm and "sve=" sub-option can be used
>>> +  to enable dom0 guest to use SVE/SVE2 instructions.
>>> +
>>> ### Added
>>> - On x86, support for features new in Intel Sapphire Rapids CPUs:
>>>  - PKS (Protection Key Supervisor) available to HVM/PVH guests.
>>> @@ -14,6 +18,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>>>  - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
>>>    wide impact of a guest misusing atomic instructions.
>>> - xl/libxl can customize SMBIOS strings for HVM guests.
>>> + - On Arm, Xen supports guests running SVE/SVE2 instructions.
>> 
>> Might be a good idea to mention that this is a tech preview ?
> 
> Sure I’ll mention that, is there something in particular that needs to be done or is it enough to say that here?

It is clearer i think if you mention here that it is a tech preview.

For the rest it must be listed as UNSUPPORTED in Kconfig and Tech Preview state must be also mentioned in SUPPORT.md.

Cheers
Bertrand

> 
>> 
>> Cheers
>> Bertrand
>> 
>>> 
>>> ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
>>> 
>>> --
>>> 2.34.1
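[Editor's note: the changelog entry discussed above refers to the new "sve=" sub-option of the Xen "dom0=" command-line option. A hedged illustration of what a boot command line using it might look like, based solely on the parsing code quoted later in this series (a negative value selects the hardware maximum; positive values must be a multiple of 128 bits, up to 2048); the exact documented syntax may differ in the final series:]

```
# Illustrative Xen boot command-line fragments (not from the patch itself):
dom0=sve=-1    # let dom0 use SVE with the platform's maximum vector length
dom0=sve=256   # request a 256-bit vector length (multiple of 128, <= 2048)
```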


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:21:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:21:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523196.813023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp28C-0006BY-I1; Wed, 19 Apr 2023 07:21:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523196.813023; Wed, 19 Apr 2023 07:21:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp28C-0006BR-El; Wed, 19 Apr 2023 07:21:32 +0000
Received: by outflank-mailman (input) for mailman id 523196;
 Wed, 19 Apr 2023 07:21:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qUCI=AK=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pp28B-0006BL-50
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:21:31 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7d00::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cdbf7391-de82-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 09:21:30 +0200 (CEST)
Received: from DB9PR06CA0030.eurprd06.prod.outlook.com (2603:10a6:10:1db::35)
 by AS8PR08MB10027.eurprd08.prod.outlook.com (2603:10a6:20b:63b::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 07:21:27 +0000
Received: from DBAEUR03FT022.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1db:cafe::1e) by DB9PR06CA0030.outlook.office365.com
 (2603:10a6:10:1db::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22 via Frontend
 Transport; Wed, 19 Apr 2023 07:21:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT022.mail.protection.outlook.com (100.127.142.217) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 07:21:27 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Wed, 19 Apr 2023 07:21:26 +0000
Received: from 1a5e34f44208.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4C26D933-B6F2-43F4-9B4C-3D816DCC898E.1; 
 Wed, 19 Apr 2023 07:21:16 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1a5e34f44208.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Apr 2023 07:21:16 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by PAVPR08MB9434.eurprd08.prod.outlook.com (2603:10a6:102:318::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 07:21:06 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 07:21:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdbf7391-de82-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wMg3ZPl2fYI3hMPRJtCOP9wN1cxUOVBWkBIJKA0EJfk=;
 b=4fLtBDBBhDzZI/zrA+D+R1qUAelkUF9VeCYT9U2UoAlVWUkvp4I/u74m59xsFLHDHH+NkgzsEYMCv8lvrluZETh0Udhos03qjMPHHmu2vLcuN67fjO+PdUYLqg7S7stlUx+bGkr8zXIxUaWvDomIrDXURuWkhOZGIyiCFJ5zkIA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: aaf4216b1bde690a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RBripDd4xF5cBIonh4MA4r2zPuyJ6ObaQ+7H4EB0x2j+4VoJ661ChRT1OWWn3BK/pHTZgLjs/MzULH6QbJOfM/UKFpP910UfU5vNaqHcdnr8A8IYx/OI5J3YXTOi0aowFPGv4bi21wh1z8BuoHJyFi/lGC3xOr580ev6fy7laeFKcHXszAYYqMRsO7gYRHLcuTnropSChI2ETzqOnGb2xLjSrIVllxfvscBetCMaDn/L6E0qBlTXWPjEjGyBrdQlcUG57MxC6KOFpQWS/SZGTNdVWLniAXsApYlLRGUg2oru4TDXk4cw42oJORPfNzmsiO1TXkg9YVIHOcdEINH1FA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wMg3ZPl2fYI3hMPRJtCOP9wN1cxUOVBWkBIJKA0EJfk=;
 b=A+TkcPYMJ3G5rHo9hW1BY+zar6TLXN0B3amESMuQk5+MBKUXSIcU8UTrF/hcl2BWt87l8f1hImyEMZqPSGVmkSleG59y28Hhaxdr63Ulb9iGdGDZlEn5qpEOak4Sadj6vmaIauQAD+p9WNS/DIZdBAGUwHwu9wSNz8kyeaVfX7HWvTFnFXWWjSG2k02/kGfz90Ch+dRaje5FOYT4r4c2anBTKNJS/ZZCSOclvrVwqGR3qgxnOLZTPCIOV7TIhsu9a9AaZ+0mXb2k9dWRq6JuMFuwJ6V1wzHzacve94AHc3+kFawUaTTVU3mO/mweTDMIXlwItCsJiIDii3lH3yImtw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wMg3ZPl2fYI3hMPRJtCOP9wN1cxUOVBWkBIJKA0EJfk=;
 b=4fLtBDBBhDzZI/zrA+D+R1qUAelkUF9VeCYT9U2UoAlVWUkvp4I/u74m59xsFLHDHH+NkgzsEYMCv8lvrluZETh0Udhos03qjMPHHmu2vLcuN67fjO+PdUYLqg7S7stlUx+bGkr8zXIxUaWvDomIrDXURuWkhOZGIyiCFJ5zkIA=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Index: AQHZbSQxHdlbNP/OhU+SnBXlOek6la8xDZGAgAE1nQCAAAF/gA==
Date: Wed, 19 Apr 2023 07:21:06 +0000
Message-ID: <3DE2B914-FA6E-49EF-8748-BB8DE4B2CC11@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
 <F30CEF99-693A-4218-AC80-433A183DE866@arm.com>
In-Reply-To: <F30CEF99-693A-4218-AC80-433A183DE866@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|PAVPR08MB9434:EE_|DBAEUR03FT022:EE_|AS8PR08MB10027:EE_
X-MS-Office365-Filtering-Correlation-Id: 6b617a79-77a0-4b95-9166-08db40a6b060
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 0trHPcvmc9rX+/5a/jeocExLmvnMQFWYAdqbpZsnbaLIpkWawD9tHYz73cXjmP7HXZC+pWXfoXw3JIVNmX3QHPIro+nClfVkZcXsWKQAWrInePaSyWLSJPJfyMi+ggXtZghtEhZZCGefcMlsqvMGNioHXTOPjS0mVGpLH0Y3lnXxeDThXkE/ECBd6+ozWoOHLlT4f4O4K52u0LLVyDd1UH4STN9LM97FW5CHoRPlEmqIDEZqaqWJTDNzsnFFQksyaXMpekX8baCZtbpqhuTvEO7+7zB0neKr6E5E/rMc5s+NqUxt3CiVBwUNbY0SIMXKndD9Ui0615UdsW9QCDTzy4TBBvLfNIVk3RnfeA8PS3rk4SyjLmcrvRb+P9OYYEUXtoiPAg8LgKA5aiabOFddPSvF7ziKuFf0DYaweWCSeCQaWpWEEBniPmOWbsSnMKRgcBOVKoAvZ7UCpqD4IKz9jkVzjKL88kQGNG/6/cpPjhukul2MJ0Nnrk2ghUmlXS6PxeQgRYcM/EqSY4TGPFb0jSHEZjEaYaQf8fXC0XpyqpjIUqS99AQCMozkj79lcZlbz3K0Wl3N/3wC8qFhozpSy7p5TuvJtrY8tPT1HdvtX3TBXzsixA0fkNTkM6waGtBEsbYD1KTOS9oc1lnp2uLNvA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(376002)(366004)(39860400002)(396003)(451199021)(122000001)(6506007)(6512007)(38100700002)(478600001)(6486002)(66446008)(66946007)(66556008)(66476007)(64756008)(6636002)(76116006)(36756003)(4326008)(71200400001)(86362001)(41300700001)(316002)(37006003)(91956017)(54906003)(83380400001)(186003)(38070700005)(53546011)(8676002)(8936002)(2906002)(33656002)(5660300002)(6862004)(2616005)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <918E763527C09046887CEE950C910D32@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9434
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f5b2eb8d-f243-4d00-c64c-08db40a6a3d3
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GmWHQyKyn6EWaaXTTjFbSScgiUN2t3NDykCHbidw+YAR3hRv9kBrgrUly2XXlqnr63R4up+1caUiRmlGe/u+SDlL2xqV50NC411L0MEmXdxmAkmDFG8kuIDLZe3nJdDfm19bT35nuAXRCWtZjVjB8lWZ+1+NK1LdDN2y0UuiyRCDOa+fgu1STtoCAn0G1y8rMRTxlC4+dRdOyAkBB3+wubg/ftlxH/uTt9kSjHQwyuPGt0SuwyS9Qogbg0VqP/3piJKWk1M+8VJNi0Q/fpW6WqGqUI8QTim7Qv0sZJkjpyGw+Lq4pS196hvF6/69XSOOLN1knrRo4N0zNARTWr7krnutQViRNknRS7GHjKdA+F27zOYYcdlIll7ejI7OxZLuORjRDmS50kQNSoI8tYpCa/C0L3dkXTh/5OliQeSR1XxhCVjpswyRkmgMiFjRr+GQ9F0LiSPC7xY3iabrBzfOk/iXmySSb2PgKGjR4rGVbUdbpUEByPQLrJWTR0V+cm+U4ew/F0c7+XJd+AbuqLcZmTpZx+YczW3hIQjjKQtqHP2tzNrZyGbBIG4Rb0U4PjbZFTPZgA3GmQIJZLmLtRXN0oCpMlrDHw/hXjqSZW6w3ZSrZEJOYEFJ1HQKktoh3MHPTWj7d4LZOefHHZaiKNrlApICnGqKpdA7eAd47yUjF3CCYfilSTUQB6cHv42dVMLkGkEova9NCFsmG9PlwRw0hA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(396003)(376002)(136003)(451199021)(40470700004)(36840700001)(46966006)(36860700001)(40460700003)(33656002)(36756003)(2906002)(6862004)(8676002)(8936002)(86362001)(40480700001)(5660300002)(70206006)(70586007)(81166007)(356005)(41300700001)(82310400005)(82740400003)(4326008)(2616005)(47076005)(83380400001)(336012)(54906003)(6506007)(107886003)(6636002)(37006003)(186003)(316002)(26005)(53546011)(6512007)(6486002)(478600001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 07:21:27.0768
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6b617a79-77a0-4b95-9166-08db40a6b060
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB10027

Hi Luca,

> On 19 Apr 2023, at 09:15, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> Hi Bertrand,
> 
>>> 
>>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>>> index 5485648850a0..ad5db62e1805 100644
>>> --- a/xen/arch/arm/arm64/sve.c
>>> +++ b/xen/arch/arm/arm64/sve.c
>>> @@ -9,6 +9,9 @@
>>> #include <xen/sizes.h>
>>> #include <asm/arm64/sve.h>
>>> 
>>> +/* opt_dom0_sve: allow Dom0 to use SVE and set maximum vector length. */
>>> +int __initdata opt_dom0_sve;
>>> +
>>> extern unsigned int sve_get_hw_vl(void);
>>> extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
>>> extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
>>> @@ -118,3 +121,21 @@ void sve_restore_state(struct vcpu *v)
>>> 
>>>   sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
>>> }
>>> +
>>> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
>>> +{
>>> +    /*
>>> +     * Negative SVE parameter value means to use the maximum supported
>>> +     * vector length, otherwise if a positive value is provided, check if the
>>> +     * vector length is a multiple of 128 and not bigger than the maximum value
>>> +     * 2048
>>> +     */
>>> +    if ( val < 0 )
>>> +        *out = get_sys_vl_len();
>>> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
>>> +        *out = val;
>> 
>> Shouldn't you also check if it is not greater than the maximum vector length ?
> 
> I don’t understand, I am checking that the value is below or equal to SVE_VL_MAX_BITS,
> If you mean if it should be checked also against the maximum supported length by the platform,
> I think this is not the right place, the check is already in arch_sanitise_domain_config(), introduced
> in patch #2

If this is not the right place to check it then why checking the rest here ?

From a user or a developer point of view I would expect the validity of the input to be checked only
in one place.
If here is not the place for that it is ok but then i would check everything in arch_sanitise_domain_config
(multiple, min and supported) instead of doing it partly in 2 places.

> 
>> 
>>> +    else
>>> +        return -1;
>>> +
>>> +    return 0;
>>> +}
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index eeb4662f0eee..3f30ef5c37b6 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -26,6 +26,7 @@
>>> #include <asm/platform.h>
>>> #include <asm/psci.h>
>>> #include <asm/setup.h>
>>> +#include <asm/arm64/sve.h>
>>> #include <asm/cpufeature.h>
>>> #include <asm/domain_build.h>
>>> #include <xen/event.h>
>>> @@ -61,6 +62,21 @@ custom_param("dom0_mem", parse_dom0_mem);
>>> 
>>> int __init parse_arch_dom0_param(const char *s, const char *e)
>>> {
>>> +    long long val;
>>> +
>>> +    if ( !parse_signed_integer("sve", s, e, &val) )
>>> +    {
>>> +#ifdef CONFIG_ARM64_SVE
>>> +        if ( (val >= INT_MIN) && (val <= INT_MAX) )
>>> +            opt_dom0_sve = val;
>>> +        else
>>> +            printk(XENLOG_INFO "'sve=%lld' value out of range!\n", val);
>>> +#else
>>> +        no_config_param("ARM64_SVE", "sve", s, e);
>>> +#endif
>> 
>> Correct me if my understanding is wrong but here you just ignore the sve
>> parameter if SVE is not supported by Xen ?
>> 
>> I am a bit wondering if we should not just refuse it here as the user might
>> wrongly think that his parameter had some effect.
>> 
>> Or is it a usual way to handle this case ?
> 
> Jan suggested this approach, as it should be the preferred way to handle the case,
> looking into the x86 code it seems so.
> 

This is somehow going around the global discussion: is it really ok to ignore sve
param if it is not supported. Let's have this discussion on the other thread instead.

Cheers
Bertrand
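[Editor's note: the review above centres on how `sve_sanitize_vl_param()` validates the vector-length parameter. A minimal standalone C sketch of that validation logic, for illustration only: it is not the Xen source, and `HW_MAX_VL` is a hypothetical stand-in for what `get_sys_vl_len()` would return on a given platform:]

```c
/* Illustrative re-implementation of the check discussed in the review:
 * a requested SVE vector length is valid when it is a multiple of 128
 * bits and no larger than the architectural maximum of 2048 bits; a
 * negative value means "use the hardware maximum".
 */
#define SVE_VL_MULTIPLE_VAL 128
#define SVE_VL_MAX_BITS     2048
#define HW_MAX_VL           512   /* hypothetical platform maximum */

int sanitize_vl_param(int val, unsigned int *out)
{
    if (val < 0)
        *out = HW_MAX_VL;         /* negative => hardware maximum */
    else if ((val % SVE_VL_MULTIPLE_VAL) == 0 && val <= SVE_VL_MAX_BITS)
        *out = (unsigned int)val; /* multiple of 128, within 2048 bits */
    else
        return -1;                /* reject anything else */
    return 0;
}
```

[Note that, as Bertrand points out, this only enforces the architectural bound; whether the platform actually supports the requested length is checked elsewhere (`arch_sanitise_domain_config()` in patch #2 of the series).]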


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:32:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:32:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523203.813033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2Ij-0007jq-M9; Wed, 19 Apr 2023 07:32:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523203.813033; Wed, 19 Apr 2023 07:32:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2Ij-0007jj-Ip; Wed, 19 Apr 2023 07:32:25 +0000
Received: by outflank-mailman (input) for mailman id 523203;
 Wed, 19 Apr 2023 07:32:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qUCI=AK=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1pp2Ii-0007jd-EY
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:32:24 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20608.outbound.protection.outlook.com
 [2a01:111:f400:7d00::608])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 52930091-de84-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 09:32:22 +0200 (CEST)
Received: from DU2PR04CA0028.eurprd04.prod.outlook.com (2603:10a6:10:3b::33)
 by AS8PR08MB6583.eurprd08.prod.outlook.com (2603:10a6:20b:33f::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 07:31:55 +0000
Received: from DBAEUR03FT048.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:3b:cafe::ba) by DU2PR04CA0028.outlook.office365.com
 (2603:10a6:10:3b::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Wed, 19 Apr 2023 07:31:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT048.mail.protection.outlook.com (100.127.142.200) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 07:31:54 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Wed, 19 Apr 2023 07:31:54 +0000
Received: from eee6270813dc.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C96D62AB-B61B-49B4-9463-F4971291F570.1; 
 Wed, 19 Apr 2023 07:31:44 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id eee6270813dc.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Apr 2023 07:31:44 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS2PR08MB10324.eurprd08.prod.outlook.com (2603:10a6:20b:5e7::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 07:31:36 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 07:31:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52930091-de84-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RhW/DpWlQYOySUClXSe+IdtcoIJA9DmLjpE0ZxwRwNE=;
 b=r1VKvZxMjSj3KKvsuV7lJCr/khhWCq8AI5jvTETalmsZ7w4ekW7zD7WhPBADge3z0up6/U8aqlcXVpPFFWImSoz7LngDxbJjD7TLAe436pALBrKvSbOz/y37RelH60yPuqeyruC1RywReKcEz9rlGDUhs+BXjZCSqQRPhj+uj+8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6475b652457a3906
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QOL79pecvbqJISECt2TKxSLd97MVFm5KxKfuysRYUnSmSoyy4TCOU5heCA+vt3XPDAvJLU8MZPwGw8c7w73mpBu07S43Zde6N1SncAgVnUcHAURu+L0HGObsBkLaBE44nO8z2lWJBxCoOytkmuCLgv9N6JxkHXfeJwTq1MTa/uRLfyMtJhsUI7loMdXPl1YGfRzcOW4aujd/t71mjGo5jL2tUDxI3THF/BvPAx1O6xw9hR5fr2XUctnoLxlW8mNmozFbuRWM+Ppqmh/zkfI9oyBfsvSxQHIj8V7HIccjhqBgW+wL34zGgyK0EmtBA1Npd4Q2VMJBCYKFv5OD3Vnb9w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RhW/DpWlQYOySUClXSe+IdtcoIJA9DmLjpE0ZxwRwNE=;
 b=B4VcA2L0zeH4xnnB3yNTh5JxkrlWWVYWItWA1lrK4aoBpj4MsObM6rtk5dCaawXeSEKC9O++BCKWywR7D7kymO9hFxd6zU9tBBvfMRHGBASOlZ/mqcFAqAazaiSHYZTGg9KhVkn9rl2mLJEqMQ+ICwoY5Fwl8Yu4ufEdVsd06BhaGfIhj+O8/FeaZ12A7W01arT1UuO7ZbKooSJOleVwk4acPNzl3EWgb5fHfECo5y9OT4kT9Z0vgqTGTQKBMdXs07EZaFz6eU2zsLyQ43cujJfmCLPYSzVviWe2w9M8tkbw8a6wxVOnpeOMxW5oDbnBqOmxwkPYT0RgME5b1kJGvw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RhW/DpWlQYOySUClXSe+IdtcoIJA9DmLjpE0ZxwRwNE=;
 b=r1VKvZxMjSj3KKvsuV7lJCr/khhWCq8AI5jvTETalmsZ7w4ekW7zD7WhPBADge3z0up6/U8aqlcXVpPFFWImSoz7LngDxbJjD7TLAe436pALBrKvSbOz/y37RelH60yPuqeyruC1RywReKcEz9rlGDUhs+BXjZCSqQRPhj+uj+8=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien@xen.org>, Luca Fancellu <Luca.Fancellu@arm.com>,
	Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Anthony PERARD <anthony.perard@citrix.com>, Juergen
 Gross <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, =?iso-8859-1?Q?Marek_Marczykowski-G=F3recki?=
	<marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>, Community
 Manager <community.manager@xenproject.org>
Subject: Re: [PATCH v5 00/12] SVE feature for arm guests
Thread-Topic: [PATCH v5 00/12] SVE feature for arm guests
Thread-Index: AQHZbSQqKlfgrZX3xkqta7+H+1gq3K8xFMoAgAAUHYCAAQ0QAIAAEaQA
Date: Wed, 19 Apr 2023 07:31:35 +0000
Message-ID: <C21BD176-AD46-4379-947F-4271D3EE05A1@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
 <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
 <4cbaaf12-bd11-ca04-eed1-f8848290a692@suse.com>
In-Reply-To: <4cbaaf12-bd11-ca04-eed1-f8848290a692@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AS2PR08MB10324:EE_|DBAEUR03FT048:EE_|AS8PR08MB6583:EE_
X-MS-Office365-Filtering-Correlation-Id: 44af45ec-8ee0-4aa3-93b9-08db40a8268c
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 7Yl22+g3vZ3fvdiLBJHEg0Q0oMNxWRCcV+VZOYtpS7wsuza7AaOfCWyOdprvPUkrCf35YuSUwbZh6Hcg+XZLF6Q4yHciGp0VwQ/kXR2+LszkLwRucM/lM79uv7VOY7JQzc4OE0HdX4PuHVBnWIg/mt4a/naDk8YLlgqgo0fPJz0WH/ZrEFWowN+ff9p/gRlBbF8JU9G47jldbwbgUZomLSYVdgEjvvL5tEaeMQW1boFwRwcJ2x42Ie7hm4va9l9L1cEL/zChGNK5AEGVaT9BVVImjMJUqadIS/0ngVyzoXlfYLdIO9K8QfDQ+T0Yj9QRhKvISwGVMYEe1BE0yDqDbSR1UqCeA7uGK2j3TA6Hbq7wHVdOQU+DYtl14q3QKsC6MMznOVbFyafaWmx0Uc4PkwZs2ddEvAaUXKQ9hlIXrMX4UGOMQp3348jh5YX+QrSHzfocj+ZHiRagq6dIszCstH9QXVhw728K7LvLtdvIXTpfBS74VtgCCp4VaP8Y4K8YapZSBqUjU/TP+ofB7e0WB8weAtnf3sHmKCR9F0AHg5twQvtQttUT5A91FwtXto/aehnGblCiOhT9Eo94pVw78NIYWYBgA1ZetkFA6OfsMVWFP2iYKDRfjoJz5fGLXNTdXym95okYQOW2sqUsAZf16g==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(366004)(376002)(39860400002)(396003)(451199021)(38070700005)(2906002)(8676002)(8936002)(38100700002)(7416002)(5660300002)(36756003)(33656002)(86362001)(6486002)(71200400001)(6512007)(6506007)(54906003)(478600001)(2616005)(83380400001)(53546011)(66946007)(186003)(76116006)(91956017)(316002)(41300700001)(122000001)(66556008)(66476007)(64756008)(66446008)(6916009)(4326008)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <42428FC86BEB154282AB2FC610EBD04F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB10324
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	dc98c86d-d4e5-4782-b176-08db40a81ada
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CeV5b90Fy6DMd61QXl0C7vwvwAfeB4oMNGsoIyD4MbJsFmPUS+Fa/m9zRquJOR+866CWa62bfoXFXn8/EgnKOkefPm292sz+GnBfUcINjL6YgXKVMr7GBcM7s8AyUIV49rO4jEAnAP0BjwOXK9WYbe+JqtZfa/j9MBpJR0wApvO5+pdPaJtWUzaN6+awJfISjqsaH6hab4zAQ/I5REzLOR+7mwJG0fBqNsu+h/E7/nNdgZKNO3Hr08bJXDcy42wWusY398sg/QxAxIKu2n9lsGUIJIvxUtKPhjgSVALG+ddTaqmfojb0/0HgzoBuZzWggo7yGgUNDPSMjlD+TpkKV+wr4foxpDxHJ22Ks8s8fdSOOoEWGXszFCA0SJLqXOOzPTFZKcLMx01qEvAkGlQDcwORlgYhoYcYz46wAWF2RGuyidtiOcbUSGoMzWfVU4y3EBucDHCfhld3sGFZYNhNv5e675xaFqi/MxXYbNSzOukyexIR15N0zHN8a9+sXJmOng6UCFqEl9/1Lv1gLMHp1LPZu17sjnL3dBV1S7WQCnV3IpSCNtBFNu0O4Ty4zV2aB+rkDCgD9ZoRWkHTm4JjljysGrgiNChh3ZDavhazXQoK1RA0iJrPiMTDbqww6Zb7hG0FoontgOisQ6FyRMb+oJ0wo5EZ5O6FI0iz3sjjj9rQZgRsIAPw4MjerYhJuJeGmV9rjg26d6hCWMfNzY4dkA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(376002)(39860400002)(346002)(451199021)(46966006)(40470700004)(36840700001)(33656002)(36756003)(356005)(81166007)(82740400003)(82310400005)(2906002)(47076005)(36860700001)(83380400001)(336012)(2616005)(6862004)(8676002)(5660300002)(86362001)(8936002)(41300700001)(26005)(186003)(53546011)(6506007)(6512007)(316002)(4326008)(70586007)(70206006)(6486002)(40460700003)(40480700001)(54906003)(478600001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 07:31:54.8331
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 44af45ec-8ee0-4aa3-93b9-08db40a8268c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6583

Hi Jan,

> On 19 Apr 2023, at 08:28, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 18.04.2023 16:25, Julien Grall wrote:
>> On 18/04/2023 14:13, Bertrand Marquis wrote:
>>> On this series I would like to open a discussion on how to handle the vector size
>>> and the corresponding command line / configuration / device tree parameters.
>>>
>>> In general, the user must either give the vector size they want, or have a way
>>> to simply request the maximum supported size.
>>>
>>> In the current implementation, if a size bigger than the supported one is provided:
>>> - we silently disable SVE for dom0
>>> - we silently disable SVE for dom0less
>>> - we do not create a guest when done through tools
>>>
>>> This is not completely coherent, and I think we should aim for a coherent
>>> behaviour unless we have arguments for the current status.
>>
>> +1.
>>
>>> Is there any good reason to silently disable for Dom0 and dom0less only?
>>>
>>> I see some possible solutions here:
>>>
>>> - modify the parameter behaviour to use the supported size if the parameter
>>> is bigger than it. This would at least keep SVE enabled if a VM depends on
>>> it, and could simplify some of the handling: passing 2048 would request the
>>> maximum supported size.
>>
>> My concern with this approach and the third one is the user may take
>> some time to realize the problem in the xl.cfg. So...
>>
>>>
>>> - coherently stop if the parameter value is not supported (including if SVE
>>> is not supported)
>>
>> ... this is my preferred approach because it would be clear that the
>> value passed to Xen is bogus.
>
> I did say earlier on that this comes with its own downside of preventing
> boot from completing for no real reason. It's all Arm code, so you're fine
> to ignore me, but in similar situations elsewhere (sorry, don't recall a
> concrete example off the top of my head) we've aimed to allow the system
> to boot, for the admin to then take corrective action if/as needed.

But a guest depending on the feature will just crash later when booting.
This makes the assumption that guests are all able to properly adapt
to different hardware possibilities.
That might be the case with a full Linux guest, but in an embedded use case,
if something is activated through command line or configuration parameters,
I do not think it should be expected that those are silently ignored.

There are definitely two different needs here; maybe we need something
like a "strict" switch to allow both use cases?

Bertrand



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:37:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523207.813042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2NQ-0008La-7Z; Wed, 19 Apr 2023 07:37:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523207.813042; Wed, 19 Apr 2023 07:37:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2NQ-0008LT-4x; Wed, 19 Apr 2023 07:37:16 +0000
Received: by outflank-mailman (input) for mailman id 523207;
 Wed, 19 Apr 2023 07:37:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ifGd=AK=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pp2NO-0008LN-JS
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:37:14 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2062f.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fe9282b0-de84-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 09:37:11 +0200 (CEST)
Received: from DM6PR21CA0015.namprd21.prod.outlook.com (2603:10b6:5:174::25)
 by SJ0PR12MB6989.namprd12.prod.outlook.com (2603:10b6:a03:448::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Wed, 19 Apr
 2023 07:37:08 +0000
Received: from DM6NAM11FT054.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:174:cafe::d5) by DM6PR21CA0015.outlook.office365.com
 (2603:10b6:5:174::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.8 via Frontend
 Transport; Wed, 19 Apr 2023 07:37:07 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT054.mail.protection.outlook.com (10.13.173.95) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 07:37:07 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 19 Apr
 2023 02:37:07 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 19 Apr
 2023 02:37:07 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 19 Apr 2023 02:37:05 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe9282b0-de84-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZtLUKTns89HL5C5JowSrge1qll5CjuWMtpSuM6iWO2piywwPF9DY145T4ht9eIArw3s5tbTXEpv0kfeyrbwA4SZ6YhyRqr9TkPSdYrAG1t9kszzyzoJtHUkVKlrFO8Px+NJlaWsoEaFNhsrivlLGoB6DnW71PSdKcoLmZP3jhu7qi9A5Kw1UdH+rEZ3tC5r/I4CG8G2yfXHHriHXxzmacTPrS8eDHX2ukS+/mMqoFwpgUmVWV3zPr9dk1EWjnSS3kbjZXyNE/bAfqusuUvKPD1yRImrVtEZJ54PzjfYQ0mBWCe8S22ZaPIhV2r/EgqZnHMRir/tlNigJ6tfXZYcfbg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=93SaySx2WRYEqqXGLGfz1BxIEOVaXJypyjYBnhWojm4=;
 b=fiNLa0KdKXJ2vIBWgK5JeFH4I3rV3ove7jEAKHAt60Dr1YMzbAnCTQW+JDuCXewn/wt7gaqxyj0ieKQdbQI+uEGCoKQ7R5xjcG5eJEAXZ1AnCrIFj9YRifAR32ahcSBSrM5pAAhDRxJDG9Wgqs/o3hvaj8xzYQrVgkb/TYn3AJrif7G/Hitz+fzpCdyfUJmURc/gsHhFhQ9MxfhbYPmCQEpOMtOdFa+4ABoKX1OARrJKr98ba6RoPz7TyXCFS+fbncWsbkEx1sN8y2/bcSrs6fxJJi612F8p3mppozM1Lv3NDSOV2CgvbHuOc18NlS4oksePsB24pVZojoqUb3b2AA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=gmail.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=93SaySx2WRYEqqXGLGfz1BxIEOVaXJypyjYBnhWojm4=;
 b=aNZUhqJBixLNg7YXMHLQ3b/Pou4T+RbMSlmteYIrnhMyAPmKi17UCiBTqwMkAxoZg60KPxgsOikREUVpwIXkg5PodxL/5SpHjcO4UjdxRShgVBDvkgo5gMR2ypzQKTejhOnDIyyrdgpyd8U+JO7q3rly5sNMall7QFJJrIqana4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com>
Date: Wed, 19 Apr 2023 09:37:00 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: xen cache colors in ARM
To: Oleg Nikitenko <oleshiiwood@gmail.com>, Stefano Stabellini
	<sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org>
 <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org>
 <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT054:EE_|SJ0PR12MB6989:EE_
X-MS-Office365-Filtering-Correlation-Id: 11755536-8380-427e-4f63-08db40a8e12f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WaoIr+daA0J1urrqBPIvUD+KgtlVxTLqnZ3cU4bS3aLS981uO5fa/yLQZNRdqe1bM1G36UevFwF0kLBIdDP/pM8c96xABEfozJDMl/pxE6CgplakckF9H47/dulVRWUawX77OSYjG2rowt0vrs8uJgDw3uv6tPvduQIGtD0bTgWVO1Cuom195Bmw1ulZakRinTDl1s1Dak1OyVfZNHcyBmmmU47l+LBABFkluTbU1A3pQQMTBli1gQx8rrgORfIGxrZarr3Skqps/MMwuDb7n9fY63Ok0TzayniZXDJgmj4I9JosOgVMWFJYy5W0LUplmNixVGc7QsrHLUThVv2pqIIfALkXzIjsW6uPR3XT5akNvSlb0kxCf7T5o8rJYmOv210jRqcM4A3T8CZKn/3ak5eRsrkM4YRWEopU19LPHmrr4yFUuxvhB7BYasu36GNttWttmsZqj1zpjM3/8JAJTdnXIzCKj6hblSHHhN6ZoucmlVV2+lAEFY7lOQ7/r47DNhlVcRK/BBheb60L1ZTIjFe+6C+8xSXOSwMTxFF59oF06dO4K3+AnkAKAzxBfYuvlPcG0xDVx6hn1FJe8JX5aE6gk4k6wKzOI1m2ynjiHtuq3eyZwxCJY0b9pW21UO1m30vR6Igs6AxcStH1f06NY2K3WJyulM2VIkMgCImtEBNr8MV1trc++ewBJIrRv14ImUlRhyNmtCBpqdD5B5vVThPqtPA4E9QT3rmYzitUxXKZMpUITdPolNPrEQMIkGPw
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(376002)(136003)(39860400002)(396003)(451199021)(46966006)(40470700004)(36840700001)(41300700001)(2906002)(82740400003)(356005)(70206006)(70586007)(81166007)(40480700001)(8676002)(8936002)(44832011)(40460700003)(4326008)(36756003)(316002)(53546011)(186003)(966005)(86362001)(47076005)(2616005)(426003)(5660300002)(83380400001)(336012)(82310400005)(26005)(31686004)(36860700001)(31696002)(478600001)(54906003)(16576012)(110136005)(6666004)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 07:37:07.8933
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 11755536-8380-427e-4f63-08db40a8e12f
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT054.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR12MB6989

Hi Oleg,

On 19/04/2023 09:03, Oleg Nikitenko wrote:
> 
> Hello Stefano,
> 
> Thanks for the clarification.
> My company uses yocto for image generation.
> What kind of information do you need from me in this case?
> 
> Maybe the module sizes/addresses which were mentioned by Julien Grall <julien@xen.org>?

Sorry for jumping into the discussion, but FWICS the Xen command line you provided
does not seem to be the one Xen actually booted with. The error you are observing is
most likely due to the dom0 colors configuration not being specified (i.e. a missing
dom0_colors=<> parameter). Although this parameter is set in the command line you
provided, I strongly doubt that it is the command line actually in use.

You wrote:
xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";

but:
1) way_szize has a typo (presumably it should be way_size)
2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
(XEN) Xen color(s): [ 0 ]

This makes me believe that no colors configuration actually ended up in the
command line Xen booted with. A single color for Xen is the default when none is
specified, and the way size was probably calculated by querying the HW.

So I would suggest first cross-checking the command line in use.
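For reference, the quoted bootargs with only the typo fixed would look like the following. The values themselves are copied unchanged from your message; whether they suit your platform is for you to confirm against the command line Xen actually reports at boot:

```
xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
```

If the colors configuration is really in effect, the boot log should then report four Xen colors rather than "(XEN) Xen color(s): [ 0 ]".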

~Michal


> 
> Regards,
> Oleg
> 
> On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
>     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>     > Hi Julien,
>     >
>     > >> This feature has not been merged in Xen upstream yet
>     >
>     > > I would assume that upstream + the series on the ML [1] work
>     >
>     > Please clarify this point.
>     > Because the two statements seem contradictory.
> 
>     Hi Oleg,
> 
>     As Julien wrote, there is nothing controversial. As you are aware,
>     Xilinx maintains a separate Xen tree specific for Xilinx here:
>     https://github.com/xilinx/xen
> 
>     and the branch you are using (xlnx_rebase_4.16) comes from there.
> 
> 
>     Instead, the upstream Xen tree lives here:
>     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> 
> 
>     The Cache Coloring feature that you are trying to configure is present
>     in xlnx_rebase_4.16, but not yet present upstream (there is an
>     outstanding patch series to add cache coloring to Xen upstream but it
>     hasn't been merged yet.)
> 
> 
>     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
>     you as you already have Cache Coloring as a feature there.
> 
> 
>     I take it you are using ImageBuilder to generate the boot configuration? If
>     so, please post the ImageBuilder config file that you are using.
> 
>     But from the boot message, it looks like the colors configuration for
>     Dom0 is incorrect.
> 


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:42:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:42:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523211.813052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2Sg-0001L4-T7; Wed, 19 Apr 2023 07:42:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523211.813052; Wed, 19 Apr 2023 07:42:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2Sg-0001Kx-QI; Wed, 19 Apr 2023 07:42:42 +0000
Received: by outflank-mailman (input) for mailman id 523211;
 Wed, 19 Apr 2023 07:42:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/+8i=AK=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pp2Sf-0001Kr-9m
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:42:41 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2045.outbound.protection.outlook.com [40.107.7.45])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c1d1574f-de85-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 09:42:38 +0200 (CEST)
Received: from AS9PR06CA0004.eurprd06.prod.outlook.com (2603:10a6:20b:462::20)
 by DB9PR08MB9586.eurprd08.prod.outlook.com (2603:10a6:10:454::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 07:42:04 +0000
Received: from AM7EUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:462:cafe::29) by AS9PR06CA0004.outlook.office365.com
 (2603:10a6:20b:462::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22 via Frontend
 Transport; Wed, 19 Apr 2023 07:42:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT061.mail.protection.outlook.com (100.127.140.72) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.20 via Frontend Transport; Wed, 19 Apr 2023 07:42:03 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Wed, 19 Apr 2023 07:42:03 +0000
Received: from 29c7130c41e4.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4B1C7173-A561-4CCA-BF7D-8EB6A300B81A.1; 
 Wed, 19 Apr 2023 07:41:52 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 29c7130c41e4.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Apr 2023 07:41:52 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS2PR08MB9317.eurprd08.prod.outlook.com (2603:10a6:20b:59b::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Wed, 19 Apr
 2023 07:41:51 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.022; Wed, 19 Apr 2023
 07:41:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1d1574f-de85-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xSHOW64FiWvIeoQDQYQfnO8qeilHAcWJniHIJc1w5j4=;
 b=VOYeCyTwpH+QMTLxAWQNSh2uXgcu2lcT3Zm+8LHDSrWjMqgVkedgnb9UoVKaj63OsQX2DuBBkxDGW1Arb5gnXZ6KDu5Vz2vDaL0Z7fFLpTpGtFnLbX1oNbaf3y0mGl6tyhiYyg7IOpPX5dycqdI4TCJN8XkY+TQxjVE/oT08gwI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: e74356d34f632442
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=drd2Gg2MOirxK2NHtgFjeCXB/uil5eZ9DeI5YuULKxeMd/1GTD/nyXHs//HGo3pHRr8l1wOiiOa/u35COVHdVfoT/hlKehT4058NF4oTA1Ihw93NZVQU8y3nVLQ6PB+UkUKjGXwU+Jen7yHsmO4/W6YfCsIsaBq06ivvctb8/ScVl73dV1WQFSgjyKZSrb5BKzWmGinIQ8DDpMVVmINMVOSVKHX80l0Xc0/7yISTrWO1bFVoNngfhXrObmaxV/YgS6/f0ARaqCvO49ICnES9LdALikV23e58I5Q53Im3HeY10vwP14wTSB3pFm6OdVl0kO7b/qwHFQjjvY5q9rfeuw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xSHOW64FiWvIeoQDQYQfnO8qeilHAcWJniHIJc1w5j4=;
 b=E+zmEwJs2VjMekq2KAUGsxM4HEg/uJ2I4BJXnw6sM/Zw7dllsFDOlfpkkPS/ER4px+CGSXGmja8UnoD7bnJMh26w6KwR1zvweE4gscR9PcJ9V6m9rih5MUSinQa6LwwSiEiEZkos+aQIEBny8zNDp+jpchDFqP2G+lWG/CaazkXPhbl1b7Nco/SkueskRyLkz7YKjcNoO6YQXuoXqXeeIdpH8RkJUC6bPXe8AT2vrGTDwl0JhUYnyNpJWmjRA2SiB0AUDqg81F4zXLhENxkvpuGgBbvmDU7PBOgaSufG2VvFA67E0k3Ug+IEwrJD8lRbNC4p9PlJAL9kfMvc1ItfrQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xSHOW64FiWvIeoQDQYQfnO8qeilHAcWJniHIJc1w5j4=;
 b=VOYeCyTwpH+QMTLxAWQNSh2uXgcu2lcT3Zm+8LHDSrWjMqgVkedgnb9UoVKaj63OsQX2DuBBkxDGW1Arb5gnXZ6KDu5Vz2vDaL0Z7fFLpTpGtFnLbX1oNbaf3y0mGl6tyhiYyg7IOpPX5dycqdI4TCJN8XkY+TQxjVE/oT08gwI=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Index: AQHZbSQ+XccnxtxNG0WbEgpX88e2za8xDZ2AgAE1hQCAAAGYAIAABb6A
Date: Wed, 19 Apr 2023 07:41:50 +0000
Message-ID: <8DA3FECA-DEBD-479E-9E5A-57676B98ADA4@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
 <F30CEF99-693A-4218-AC80-433A183DE866@arm.com>
 <3DE2B914-FA6E-49EF-8748-BB8DE4B2CC11@arm.com>
In-Reply-To: <3DE2B914-FA6E-49EF-8748-BB8DE4B2CC11@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS2PR08MB9317:EE_|AM7EUR03FT061:EE_|DB9PR08MB9586:EE_
X-MS-Office365-Filtering-Correlation-Id: 92ff2ae5-4f00-4351-452e-08db40a991a2
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 8k70fhjAoIoOCxgeHEACv7gwfts0wiP8RkejCmR9CW6TIn4QyaaoznjL5Cagwgr8iXN0n4d6NIxbDa2blMn+EIAb55ujcoK3u6afnvKrebR/59f//KrqSybNG4AVR7MJ4mUg+FY/Kk1v8CCtF5i8SFTor8XsOGwMeoIW8yT1zSsApFUs11W/ZxCgqVJCSKIFXTaV9sfdrxptWqqKBncotKZH59lSPidRTtTL13nmfm3cNzvftVjTgbzv3WQc9tpJC4Piixr6vwDUeVH+Uxw/ihjVJOJv4yUxXIt3HqPtp9L5rGKWPyGKjxRCpvZ7CYFaJs0WdCWAiwZJVcivnV8H8Ba2vsstRryoLR+GdKlPKIcWeiQLtTpXcJPcUtlb+qNheCWPQVLJUuUOxW6MgK/osNYNH62EoPLQNPxRKr8jlqGUGoQEL3960VcXQL9gNFft5oVemLxW1oILu22nCt3vw6K9aXOOWKcy0+Qdzarq9XjRavTR/9FOu0M7axCSgnt35Dy037m3TS3olCxVLsJZoLzQ36Wjv2JXMJbuDvItlkGrn1VO3ZJ/1LBjBEpkOS7OUgHmEtMVJsUVTVP5INKS59DyofRf7ZnMxTWdzI27Wz9LTn+DGlmTf3fcY+u4bEhpGzspk39sxtrk+vJD8L+lBA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(376002)(366004)(396003)(39860400002)(451199021)(26005)(6512007)(186003)(4326008)(53546011)(6506007)(316002)(8676002)(6862004)(8936002)(36756003)(33656002)(86362001)(2906002)(5660300002)(478600001)(91956017)(54906003)(37006003)(6636002)(64756008)(41300700001)(83380400001)(2616005)(38070700005)(122000001)(38100700002)(66446008)(66556008)(66476007)(71200400001)(6486002)(76116006)(66946007)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <9BC31B854F245448A63D458F3FDBB061@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0


> On 19 Apr 2023, at 08:21, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
> 
> Hi Luca,
> 
>> On 19 Apr 2023, at 09:15, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>> 
>> Hi Bertrand,
>> 
>>>> 
>>>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>>>> index 5485648850a0..ad5db62e1805 100644
>>>> --- a/xen/arch/arm/arm64/sve.c
>>>> +++ b/xen/arch/arm/arm64/sve.c
>>>> @@ -9,6 +9,9 @@
>>>> #include <xen/sizes.h>
>>>> #include <asm/arm64/sve.h>
>>>> 
>>>> +/* opt_dom0_sve: allow Dom0 to use SVE and set maximum vector length. */
>>>> +int __initdata opt_dom0_sve;
>>>> +
>>>> extern unsigned int sve_get_hw_vl(void);
>>>> extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
>>>> extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
>>>> @@ -118,3 +121,21 @@ void sve_restore_state(struct vcpu *v)
>>>> 
>>>>  sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
>>>> }
>>>> +
>>>> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
>>>> +{
>>>> +    /*
>>>> +     * Negative SVE parameter value means to use the maximum supported
>>>> +     * vector length, otherwise if a positive value is provided, check if the
>>>> +     * vector length is a multiple of 128 and not bigger than the maximum value
>>>> +     * 2048
>>>> +     */
>>>> +    if ( val < 0 )
>>>> +        *out = get_sys_vl_len();
>>>> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
>>>> +        *out = val;
>>> 
>>> Shouldn't you also check if it is not greater than the maximum vector length ?
>> 
>> I don't understand, I am checking that the value is below or equal to SVE_VL_MAX_BITS,
>> If you mean if it should be checked also against the maximum supported length by the platform,
>> I think this is not the right place, the check is already in arch_sanitise_domain_config(), introduced
>> in patch #2
> 
> If this is not the right place to check it then why checking the rest here ?
> 
> From a user or a developer point of view I would expect the validity of the input to be checked only
> in one place.
> If here is not the place for that it is ok but then i would check everything in arch_sanitise_domain_config
> (multiple, min and supported) instead of doing it partly in 2 places.

Ok, given the way we encoded the value in the xen_domctl_createdomain structure, we have that the value takes
very little space, but a small issue is that when we encode it, we are dividing it by 128, which is fine for user params
that are multiple of 128, but it's less fine if the user passes "129".

To overcome this issue we are checking the value when it is not already encoded. Now, thinking about it, the check
"&& (val <= SVE_VL_MAX_BITS)" is not really needed, because even if the value is above, then in arch_sanitise_domain_config
we will hit the top limit of the platform maximum VL.

int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
{
    unsigned int max_vcpus;
    unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
    unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);

    if ( (config->flags & ~flags_optional) != flags_required )
    {
        dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
                config->flags);
        return -EINVAL;
    }

    /* Check feature flags */
    if ( sve_vl_bits > 0 )
    {
        unsigned int zcr_max_bits = get_sys_vl_len();

        if ( !zcr_max_bits )
        {
            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
            return -EINVAL;
        }

        if ( sve_vl_bits > zcr_max_bits )
        {
            dprintk(XENLOG_INFO,
                    "Requested SVE vector length (%u) > supported length (%u)\n",
                    sve_vl_bits, zcr_max_bits);
            return -EINVAL;
        }
    }
   [...]

Now, I understand your point, we could check everything in sve_sanitize_vl_param(), but it would leave a problem
for domains created by hypercalls if I am not wrong.

What do you think?


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:47:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:47:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523217.813062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2Ww-00020f-Iv; Wed, 19 Apr 2023 07:47:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523217.813062; Wed, 19 Apr 2023 07:47:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2Ww-00020Y-GP; Wed, 19 Apr 2023 07:47:06 +0000
Received: by outflank-mailman (input) for mailman id 523217;
 Wed, 19 Apr 2023 07:47:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pp2Wu-00020S-PG
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:47:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pp2Wr-0005Jh-VX; Wed, 19 Apr 2023 07:47:01 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pp2Wr-00083S-LJ; Wed, 19 Apr 2023 07:47:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <5949f98b-9cd6-7903-0be6-731665caff75@xen.org>
Date: Wed, 19 Apr 2023 08:46:58 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.9.0
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Jan Beulich <jbeulich@suse.com>
Cc: Luca Fancellu <Luca.Fancellu@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Nick Rosbrook <rosbrookn@gmail.com>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
 <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
 <4cbaaf12-bd11-ca04-eed1-f8848290a692@suse.com>
 <C21BD176-AD46-4379-947F-4271D3EE05A1@arm.com>
From: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v5 00/12] SVE feature for arm guests
In-Reply-To: <C21BD176-AD46-4379-947F-4271D3EE05A1@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

Answering Jan's and Bertrand's e-mails at the same time.

On 19/04/2023 08:31, Bertrand Marquis wrote:
>> On 19 Apr 2023, at 08:28, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 18.04.2023 16:25, Julien Grall wrote:
>>> On 18/04/2023 14:13, Bertrand Marquis wrote:
>>>> On this series I would like to open a discussion on how to handle the vector size
>>>> and the corresponding command line / configuration / device tree parameters.
>>>>
>>>> In general the user must either give a vector size it wants or has a solution to
>>>> just request the maximum supported size.
>>>>
>>>> In the current implementation if a size bigger than the supported one is provided:
>>>> - we silently disable SVE for dom0
>>>> - we silently disable SVE for dom0less
>>>> - we do not create a guest when done through tools
>>>>
>>>> This is not completely coherent and I think we should aim for a coherent behaviour
>>>> unless we have arguments for this status.
>>>
>>> +1.
>>>
>>>> Is there any good reason to silently disable for Dom0 and dom0less only ?
>>>>
>>>> I see some possible solutions here:
>>>>
>>>> - modify parameter behaviour to use the supported size if parameter is bigger than
>>>> it. This would at least keep SVE enabled if a VM depends on it and could simplify
>>>> some of the handling by using 2048 to use the maximum supported size.
>>>
>>> My concern with this approach and the third one is the user may take
>>> some time to realize the problem in the xl.cfg. So...
>>>
>>>>
>>>> - coherently stop if the parameter value is not supported (including if sve is
>>>> not supported)
>>>
>>> ... this is my preferred approach because it would be clear that the
>>> value passed to Xen is bogus.
>>
>> I did say earlier on that this comes with its own downside of preventing
>> boot to complete for no real reason. It's all Arm code, so you're fine
>> to ignore me, but in similar situations elsewhere (sorry, don't recall a
>> concrete example off the top of my head) we've aimed to allow the system
>> to boot, for the admin to then take corrective action if/as needed.

It is not clear to me why it is necessary for the system to boot in 
order to take corrective action. In the case of GRUB, you would likely 
have a fallback entry for a bare-metal Linux boot just in case Xen 
crashes at boot. So you can use the fallback to boot and update your 
command line in grub.cfg.

I agree this is less efficient, but it would make it easier for a user 
to find out that there was an issue in the command line passed to Xen.

> 
> But a guest depending on the feature will just crash later when booting.
> This is making the assumption that guests are all able to properly adapt
> to different hardware possibilities.
> This might be the case when you have a full Linux but if you consider an
> embedded use case, if something is activated due to command line parameters
> or configuration ones, it should not be expected that those are ignored I think.
> 
> There are definitely 2 different needs here, maybe we need to have something
> like a "strict" switch to allow both use cases ?

At the moment, I am not in favor of introducing a switch to relax the 
check. It will add more code in Xen for a benefit that is not clear to 
me (see above).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:53:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:53:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523222.813073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2cg-0003R1-6W; Wed, 19 Apr 2023 07:53:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523222.813073; Wed, 19 Apr 2023 07:53:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2cg-0003Qu-3k; Wed, 19 Apr 2023 07:53:02 +0000
Received: by outflank-mailman (input) for mailman id 523222;
 Wed, 19 Apr 2023 07:53:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp2cf-0003Qo-1g
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:53:01 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2061b.outbound.protection.outlook.com
 [2a01:111:f400:fe12::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 336371d9-de87-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 09:52:58 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7637.eurprd04.prod.outlook.com (2603:10a6:20b:29d::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.20; Wed, 19 Apr
 2023 07:52:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 07:52:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <5f5b65eb-d1fc-271a-02db-aa347cc708e9@suse.com>
Date: Wed, 19 Apr 2023 09:52:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v5 00/12] SVE feature for arm guests
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Julien Grall <julien@xen.org>, Luca Fancellu <Luca.Fancellu@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Nick Rosbrook <rosbrookn@gmail.com>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
 <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
 <4cbaaf12-bd11-ca04-eed1-f8848290a692@suse.com>
 <C21BD176-AD46-4379-947F-4271D3EE05A1@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <C21BD176-AD46-4379-947F-4271D3EE05A1@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 19.04.2023 09:31, Bertrand Marquis wrote:
> Hi Jan,
> 
>> On 19 Apr 2023, at 08:28, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 18.04.2023 16:25, Julien Grall wrote:
>>> On 18/04/2023 14:13, Bertrand Marquis wrote:
>>>> On this series I would like to open a discussion on how to handle the vector size
>>>> and the corresponding command line / configuration / device tree parameters.
>>>>
>>>> In general the user must either give a vector size it wants or has a solution to
>>>> just request the maximum supported size.
>>>>
>>>> In the current implementation if a size bigger than the supported one is provided:
>>>> - we silently disable SVE for dom0
>>>> - we silently disable SVE for dom0less
>>>> - we do not create a guest when done through tools
>>>>
>>>> This is not completely coherent and I think we should aim for a coherent behaviour
>>>> unless we have arguments for this status.
>>>
>>> +1.
>>>
>>>> Is there any good reason to silently disable for Dom0 and dom0less only ?
>>>>
>>>> I see some possible solutions here:
>>>>
>>>> - modify parameter behaviour to use the supported size if parameter is bigger than
>>>> it. This would at least keep SVE enabled if a VM depends on it and could simplify
>>>> some of the handling by using 2048 to use the maximum supported size.
>>>
>>> My concern with this approach and the third one is the user may take 
>>> some time to realize the problem in the xl.cfg. So...
>>>
>>>>
>>>> - coherently stop if the parameter value is not supported (including if sve is
>>>> not supported)
>>>
>>> ... this is my preferred approach because it would be clear that the 
>>> value passed to Xen is bogus.
>>
>> I did say earlier on that this comes with its own downside of preventing
>> boot to complete for no real reason. It's all Arm code, so you're fine
>> to ignore me, but in similar situations elsewhere (sorry, don't recall a
>> concrete example off the top of my head) we've aimed to allow the system
>> to boot, for the admin to then take corrective action if/as needed.
> 
> But a guest depending on the feature will just crash later when booting.
> This is making the assumption that guests are all able to properly adapt
> to different hardware possibilities. 
> This might be the case when you have a full Linux but if you consider an
> embedded use case, if something is activated due to command line parameters
> or configuration ones, it should not be expected that those are ignored I think.
> 
> There are definitely 2 different needs here, maybe we need to have something
> like a "strict" switch to allow both use cases ?

Possibly. Yet, along the lines of what I have said before: would you then also
mean to fail the boot upon encountering entirely unknown command line options?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:54:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:54:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523226.813083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2du-0003xs-Ha; Wed, 19 Apr 2023 07:54:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523226.813083; Wed, 19 Apr 2023 07:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2du-0003xl-Dt; Wed, 19 Apr 2023 07:54:18 +0000
Received: by outflank-mailman (input) for mailman id 523226;
 Wed, 19 Apr 2023 07:54:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tfww=AK=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1pp2ds-0003xS-MS
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:54:17 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 601949d2-de87-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 09:54:14 +0200 (CEST)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-620-4G2n1ZMvMEOjIN49LYlOtA-1; Wed, 19 Apr 2023 03:54:10 -0400
Received: by mail-wm1-f69.google.com with SMTP id
 fl8-20020a05600c0b8800b003f16fe94249so817242wmb.9
 for <xen-devel@lists.xenproject.org>; Wed, 19 Apr 2023 00:54:09 -0700 (PDT)
Received: from ?IPV6:2003:cb:c70b:7b00:7c52:a5fa:8004:96fd?
 (p200300cbc70b7b007c52a5fa800496fd.dip0.t-ipconnect.de.
 [2003:cb:c70b:7b00:7c52:a5fa:8004:96fd])
 by smtp.gmail.com with ESMTPSA id
 l26-20020a1ced1a000000b003eeb1d6a470sm1327085wmh.13.2023.04.19.00.54.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Apr 2023 00:54:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 601949d2-de87-11ed-8611-37d641c3527e
Message-ID: <e0c0ad67-f23f-ff35-80bf-841dcfd43d99@redhat.com>
Date: Wed, 19 Apr 2023 09:54:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
To: Vishal Moola <vishal.moola@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
 Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
 loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org
References: <20230417205048.15870-1-vishal.moola@gmail.com>
 <20230417205048.15870-2-vishal.moola@gmail.com>
 <da600570-51c7-8088-b46b-7524c9e66e5d@redhat.com>
 <CAOzc2pwpRhNoFbdzdzuvrqbZdf2OsrTvBGs40QCZJjA5fS_q1A@mail.gmail.com>
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Subject: Re: [PATCH 01/33] s390: Use _pt_s390_gaddr for gmap address tracking
In-Reply-To: <CAOzc2pwpRhNoFbdzdzuvrqbZdf2OsrTvBGs40QCZJjA5fS_q1A@mail.gmail.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 18.04.23 23:33, Vishal Moola wrote:
> On Tue, Apr 18, 2023 at 8:45 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 17.04.23 22:50, Vishal Moola (Oracle) wrote:
>>> s390 uses page->index to keep track of page tables for the guest address
>>> space. In an attempt to consolidate the usage of page fields in s390,
>>> replace _pt_pad_2 with _pt_s390_gaddr to replace page->index in gmap.
>>>
>>> This will help with the splitting of struct ptdesc from struct page, as
>>> well as allow s390 to use _pt_frag_refcount for fragmented page table
>>> tracking.
>>>
>>> Since page->_pt_s390_gaddr aliases with mapping, ensure it's set to NULL
>>> before freeing the pages as well.
>>>
>>> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
>>> ---
>>
>> [...]
>>
>>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>>> index 3fc9e680f174..2616d64c0e8c 100644
>>> --- a/include/linux/mm_types.h
>>> +++ b/include/linux/mm_types.h
>>> @@ -144,7 +144,7 @@ struct page {
>>>                struct {        /* Page table pages */
>>>                        unsigned long _pt_pad_1;        /* compound_head */
>>>                        pgtable_t pmd_huge_pte; /* protected by page->ptl */
>>> -                     unsigned long _pt_pad_2;        /* mapping */
>>> +                     unsigned long _pt_s390_gaddr;   /* mapping */
>>>                        union {
>>>                                struct mm_struct *pt_mm; /* x86 pgds only */
>>>                                atomic_t pt_frag_refcount; /* powerpc */
>>
>> The confusing part is that these gmap page tables are not ordinary
>> process page tables that we would ordinarily place into this section
>> here. That's why they are also not allocated/freed using the typical
>> page table constructor/destructor ...
> 
> I initially thought the same, so I was quite confused when I saw
> __gmap_segment_gaddr was using pmd_pgtable_page().
> 
> Although they are not ordinary process page tables, since we
> eventually want to move them out of struct page, I think shifting them
> into ptdescs, the memory descriptor for page tables, makes the most
> sense.

Seeing utilities like tlb_remove_page_ptdesc() that don't really apply 
to such page tables, I wonder if we should rather treat such 
shadow/auxiliary page tables (which other architectures, e.g. x86 and 
arm, employ as well) as a distinct type.

And have ptdesc be the common type for all process page tables.

> 
> Another option is to leave pmd_pgtable_page() as is just for this case.
> Or we can revert commit 7e25de77bc5ea, which uses the function here,
> then figure out where these gmap page table pages will go later.

I'm always confused when reading gmap code, so let me have another look :)

The confusing part is that s390x shares the lowest level page tables 
(PTE tables) between the process and gmap ("guest mapping", similar to 
EPT on x86-64). It maps these process PTE tables (covering 1 MiB) into 
gmap-specific PMD tables.

pmd_pgtable_page() should indeed always give us a gmap-specific 
PMD table; in fact, something allocated via gmap_alloc_table().

Decoupling both concepts sounds like a good idea.

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 07:57:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 07:57:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523230.813092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2ge-0004bR-Vh; Wed, 19 Apr 2023 07:57:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523230.813092; Wed, 19 Apr 2023 07:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp2ge-0004bK-T7; Wed, 19 Apr 2023 07:57:08 +0000
Received: by outflank-mailman (input) for mailman id 523230;
 Wed, 19 Apr 2023 07:57:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp2gd-0004bE-Dj
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 07:57:07 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2085.outbound.protection.outlook.com [40.107.7.85])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c7415788-de87-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 09:57:06 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB9176.eurprd04.prod.outlook.com (2603:10a6:20b:44b::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Wed, 19 Apr
 2023 07:56:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 07:56:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7415788-de87-11ed-b21f-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5e90e951-20bc-3ff5-30b3-da17cb14d260@suse.com>
Date: Wed, 19 Apr 2023 09:56:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v6] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <116bb94f-955c-4c46-f16a-d7a5e1cc72b5@suse.com>
 <ZD6AejXJxQxAyrx1@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD6AejXJxQxAyrx1@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0059.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 18.04.2023 13:35, Roger Pau Monné wrote:
> On Tue, Apr 18, 2023 at 11:24:19AM +0200, Jan Beulich wrote:
>> ... in order to also intercept Dom0 accesses through the alias ports.
>>
>> Also stop intercepting accesses to the CMOS ports if we won't ourselves
>> use the CMOS RTC, because of there being none.
>>
>> Note that rtc_init() deliberately uses 16 as the upper loop bound,
>> despite probe_cmos_alias() using 8: the higher bound is benign now, but
>> would save us from touching the code (or, worse, forgetting to touch it)
>> in case the lower one were doubled.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> --- a/xen/arch/x86/pv/emul-priv-op.c
>> +++ b/xen/arch/x86/pv/emul-priv-op.c
>> @@ -208,7 +208,7 @@ static bool admin_io_okay(unsigned int p
>>          return false;
>>  
>>      /* We also never permit direct access to the RTC/CMOS registers. */
> 
> Hm, it's unclear to me whether the comment above would need updating:
> we don't allow direct access to the RTC/CMOS registers, but we allow
> direct access to the RTC/CMOS ports if there's no device behind.

Right, but those ports then don't allow access to said registers. So
I think the comment is fine as is.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 08:19:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 08:19:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523239.813102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp324-0007dx-8G; Wed, 19 Apr 2023 08:19:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523239.813102; Wed, 19 Apr 2023 08:19:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp324-0007dq-5U; Wed, 19 Apr 2023 08:19:16 +0000
Received: by outflank-mailman (input) for mailman id 523239;
 Wed, 19 Apr 2023 08:19:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=APhf=AK=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1pp323-0007dk-7l
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 08:19:15 +0000
Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com
 [2607:f8b0:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dc904dc3-de8a-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 10:19:11 +0200 (CEST)
Received: by mail-pl1-x635.google.com with SMTP id
 d9443c01a7336-1a67d2601b3so22844245ad.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Apr 2023 01:19:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc904dc3-de8a-11ed-8611-37d641c3527e
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org> <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com> <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com>
In-Reply-To: <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Wed, 19 Apr 2023 11:25:14 +0300
Message-ID: <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Michal Orzel <michal.orzel@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>
Content-Type: multipart/alternative; boundary="000000000000129f0405f9ac1448"

--000000000000129f0405f9ac1448
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Michal,

You pointed me straight at the problem. Thank you.
I am going to follow your suggestion.
Let's see what happens.

Regards,
Oleg


Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:

> Hi Oleg,
>
> On 19/04/2023 09:03, Oleg Nikitenko wrote:
> >
> >
> >
> > Hello Stefano,
> >
> > Thanks for the clarification.
> > My company uses yocto for image generation.
> > What kind of information do you need to consult me in this case ?
> >
> > Maybe modules sizes/addresses which were mentioned by @Julien Grall
> <mailto:julien@xen.org> ?
>
> Sorry for jumping into the discussion, but FWICS the Xen command line you
> provided seems not to be the one Xen booted with. The error you are
> observing is most likely due to the dom0 colors configuration not being
> specified (i.e. lack of a dom0_colors=<> parameter). Although this
> parameter is set in the command line you provided, I strongly doubt that
> it is the actual command line in use.
>
> You wrote:
> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
> dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>
> but:
> 1) way_szize has a typo
> 2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen
> has only one:
> (XEN) Xen color(s): [ 0 ]
>
> This makes me believe that no colors configuration actually ends up in
> the command line that Xen booted with.
> A single color for Xen is the "default if not specified", and the way
> size was probably calculated by querying the HW.
>
> So I would suggest first cross-checking the command line in use.
>
> ~Michal
>
>
> >
> > Regards,
> > Oleg
> >
> > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org
> > <mailto:sstabellini@kernel.org>>:
> >
> >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> >     > Hi Julien,
> >     >
> >     > >> This feature has not been merged in Xen upstream yet
> >     >
> >     > > would assume that upstream + the series on the ML [1] work
> >     >
> >     > Please clarify this point.
> >     > Because the two thoughts are controversial.
> >
> >     Hi Oleg,
> >
> >     As Julien wrote, there is nothing controversial. As you are aware,
> >     Xilinx maintains a separate Xen tree specific for Xilinx here:
> >     https://github.com/xilinx/xen <https://github.com/xilinx/xen>
> >
> >     and the branch you are using (xlnx_rebase_4.16) comes from there.
> >
> >
> >     Instead, the upstream Xen tree lives here:
> >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> >     <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>
> >
> >
> >     The Cache Coloring feature that you are trying to configure is
> present
> >     in xlnx_rebase_4.16, but not yet present upstream (there is an
> >     outstanding patch series to add cache coloring to Xen upstream but it
> >     hasn't been merged yet.)
> >
> >
> >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
> >     you as you already have Cache Coloring as a feature there.
> >
> >
> >     I take you are using ImageBuilder to generate the boot
> configuration? If
> >     so, please post the ImageBuilder config file that you are using.
> >
> >     But from the boot message, it looks like the colors configuration for
> >     Dom0 is incorrect.
> >
>

--000000000000129f0405f9ac1448
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div>Hi Michal,</div><div><br></div><div></div><div>You pu=
t my nose into the problem. Thank you.</div><div>I am going to use your poi=
nt.</div><div>Let&#39;s see what happens.</div><div><br></div><div>Regards,=
</div><div>Oleg<br></div><div><br></div></div><br><div class=3D"gmail_quote=
"><div dir=3D"ltr" class=3D"gmail_attr">=D1=81=D1=80, 19 =D0=B0=D0=BF=D1=80=
. 2023=E2=80=AF=D0=B3. =D0=B2 10:37, Michal Orzel &lt;<a href=3D"mailto:mic=
hal.orzel@amd.com">michal.orzel@amd.com</a>&gt;:<br></div><blockquote class=
=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-left:1px solid rg=
b(204,204,204);padding-left:1ex">Hi Oleg,<br>
<br>
On 19/04/2023 09:03, Oleg Nikitenko wrote:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0<br>
&gt; <br>
&gt; <br>
&gt; Hello Stefano,<br>
&gt; <br>
&gt; Thanks for the clarification.<br>
&gt; My company uses yocto for image generation.<br>
&gt; What kind of information do you need to consult me in this case ?<br>
&gt; <br>
&gt; Maybe modules sizes/addresses which were mentioned by @Julien Grall &l=
t;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org=
</a>&gt; ?<br>
<br>
Sorry for jumping into discussion, but FWICS the Xen command line you provi=
ded seems to be not the one<br>
Xen booted with. The error you are observing most likely is due to dom0 col=
ors configuration not being<br>
specified (i.e. lack of dom0_colors=3D&lt;&gt; parameter). Although in the =
command line you provided, this parameter<br>
is set, I strongly doubt that this is the actual command line in use.<br>
<br>
You wrote:<br>
xen,xen-bootargs =3D &quot;console=3Ddtuart dtuart=3Dserial0 dom0_mem=3D160=
0M dom0_max_vcpus=3D2 dom0_vcpus_pin bootscrub=3D0 vwfi=3Dnative sched=3Dnu=
ll timer_slop=3D0 way_szize=3D65536 xen_colors=3D0-3 dom0_colors=3D4-7&quot=
;;<br>
<br>
but:<br>
1) way_szize has a typo<br>
2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has=
 only one:<br>
(XEN) Xen color(s): [ 0 ]<br>
<br>
This makes me believe that no colors configuration actually end up in comma=
nd line that Xen booted with.<br>
Single color for Xen is a &quot;default if not specified&quot; and way size=
 was probably calculated by asking HW.<br>
<br>
So I would suggest to first cross-check the command line in use.<br>
<br>
~Michal<br>
<br>
<br>
> 
> Regards,
> Oleg
> 
> On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
>     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>     > Hi Julien,
>     >
>     > >> This feature has not been merged in Xen upstream yet
>     >
>     > > would assume that upstream + the series on the ML [1] work
>     >
>     > Please clarify this point.
>     > Because the two thoughts are contradictory.
> 
>     Hi Oleg,
> 
>     As Julien wrote, there is nothing contradictory. As you are aware,
>     Xilinx maintains a separate Xen tree specific to Xilinx here:
>     https://github.com/xilinx/xen
> 
>     and the branch you are using (xlnx_rebase_4.16) comes from there.
> 
>     Instead, the upstream Xen tree lives here:
>     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> 
>     The Cache Coloring feature that you are trying to configure is present
>     in xlnx_rebase_4.16, but not yet present upstream (there is an
>     outstanding patch series to add cache coloring to Xen upstream, but it
>     hasn't been merged yet.)
> 
>     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
>     you, as you already have Cache Coloring as a feature there.
> 
>     I take it you are using ImageBuilder to generate the boot configuration?
>     If so, please post the ImageBuilder config file that you are using.
> 
>     But from the boot message, it looks like the colors configuration for
>     Dom0 is incorrect.
> 
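
[Editorial note: for illustration of the ImageBuilder side mentioned above, the Xen command line is typically carried by a variable in the ImageBuilder config file. The variable and file names below are assumptions based on common ImageBuilder conventions, not taken from this thread; adjust to your actual config.]

```sh
# Hypothetical ImageBuilder config fragment (names assumed, not verified)
XEN="xen"
DOM0_KERNEL="Image"
XEN_CMD="console=dtuart dtuart=serial0 dom0_mem=1600M xen_colors=0-3 dom0_colors=4-7"
```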
</blockquote></div>

--000000000000129f0405f9ac1448--


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 08:20:58 2023
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien@xen.org>, Luca Fancellu <Luca.Fancellu@arm.com>,
	Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau Monné <roger.pau@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Anthony PERARD <anthony.perard@citrix.com>, Juergen
 Gross <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Marek Marczykowski-Górecki
	<marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>, Community
 Manager <community.manager@xenproject.org>
Subject: Re: [PATCH v5 00/12] SVE feature for arm guests
Thread-Topic: [PATCH v5 00/12] SVE feature for arm guests
Thread-Index:
 AQHZbSQqKlfgrZX3xkqta7+H+1gq3K8xFMoAgAAUHYCAAQ0QAIAAEaQAgAAGAICAAAewAA==
Date: Wed, 19 Apr 2023 08:20:35 +0000
Message-ID: <7614AE25-F59D-430A-9C3E-30B1CE0E1580@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
 <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
 <4cbaaf12-bd11-ca04-eed1-f8848290a692@suse.com>
 <C21BD176-AD46-4379-947F-4271D3EE05A1@arm.com>
 <5f5b65eb-d1fc-271a-02db-aa347cc708e9@suse.com>
In-Reply-To: <5f5b65eb-d1fc-271a-02db-aa347cc708e9@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi Jan,

> On 19 Apr 2023, at 09:52, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 19.04.2023 09:31, Bertrand Marquis wrote:
>> Hi Jan,
>> 
>>> On 19 Apr 2023, at 08:28, Jan Beulich <jbeulich@suse.com> wrote:
>>> 
>>> On 18.04.2023 16:25, Julien Grall wrote:
>>>> On 18/04/2023 14:13, Bertrand Marquis wrote:
>>>>> On this series I would like to open a discussion on how to handle the vector size
>>>>> and the corresponding command line / configuration / device tree parameters.
>>>>> 
>>>>> In general the user must either give the vector size they want, or have a way to
>>>>> just request the maximum supported size.
>>>>> 
>>>>> In the current implementation, if a size bigger than the supported one is provided:
>>>>> - we silently disable SVE for dom0
>>>>> - we silently disable SVE for dom0less
>>>>> - we do not create a guest when done through the tools
>>>>> 
>>>>> This is not completely coherent, and I think we should aim for coherent behaviour
>>>>> unless we have arguments for the current status.
>>>> 
>>>> +1.
>>>> 
>>>>> Is there any good reason to silently disable for Dom0 and dom0less only?
>>>>> 
>>>>> I see some possible solutions here:
>>>>> 
>>>>> - modify the parameter behaviour to use the supported size if the parameter is
>>>>> bigger than it. This would at least keep SVE enabled if a VM depends on it, and
>>>>> could simplify some of the handling by letting 2048 mean the maximum supported size.
>>>> 
>>>> My concern with this approach and the third one is that the user may take
>>>> some time to realize the problem in the xl.cfg. So...
>>>> 
>>>>> 
>>>>> - coherently stop if the parameter value is not supported (including if SVE is
>>>>> not supported)
>>>> 
>>>> ... this is my preferred approach because it would be clear that the
>>>> value passed to Xen is bogus.
>>> 
>>> I did say earlier on that this comes with its own downside of preventing
>>> boot from completing for no real reason. It's all Arm code, so you're fine
>>> to ignore me, but in similar situations elsewhere (sorry, I don't recall a
>>> concrete example off the top of my head) we've aimed to allow the system
>>> to boot, for the admin to then take corrective action if/as needed.
>> 
>> But a guest depending on the feature will just crash later when booting.
>> This makes the assumption that guests are all able to properly adapt
>> to different hardware capabilities.
>> This might be the case when you have a full Linux, but in an embedded use
>> case, if something is activated through command line or configuration
>> parameters, it should not be expected that those are ignored, I think.
>> 
>> There are definitely 2 different needs here; maybe we need something
>> like a "strict" switch to allow both use cases?
> 
> Possibly. Yet along the lines of what I've said before - would you then also
> mean to fail boot upon encountering entirely unknown command line options?

I think this should depend:
- completely unknown: we can ignore it
- not supported (SVE while SVE is not supported by the platform or Xen): we should not ignore it

I agree that one could use custom command line arguments for lots of reasons
(in Linux you can do that and get them back from /proc, for example), but
silently ignoring a parameter that is known to Xen is not something I think
we should do.

I think in most cases one could believe their system is running correctly but
hit problems later (or in some cases never hit any), so a clear error at the
start is much better.

Bertrand
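
[Editorial note: the two policies under discussion — clamp a too-large request to the supported maximum versus refuse it outright — can be sketched as below. This is an illustration, not Xen code; the function, constants, and the "strict" switch are hypothetical names mirroring the suggestion above.]

```c
#include <assert.h>

/* Hypothetical platform maximum SVE vector length, in bits. */
#define SVE_VL_HW_MAX   512
/* Architectural upper bound for SVE vector lengths, in bits. */
#define SVE_VL_ARCH_MAX 2048

enum vl_policy {
    VL_POLICY_CLAMP,  /* silently fall back to the supported maximum */
    VL_POLICY_STRICT, /* refuse guest creation / fail boot loudly */
};

/*
 * Resolve a requested vector length against the hardware limit.
 * Returns the length to use, or -1 when the request must be rejected.
 */
static int resolve_sve_vl(int requested, enum vl_policy policy)
{
    if ( requested < 0 || requested > SVE_VL_ARCH_MAX )
        return -1; /* malformed request: always an error */
    if ( requested <= SVE_VL_HW_MAX )
        return requested;
    /* Larger than the platform supports: policy decides. */
    return policy == VL_POLICY_CLAMP ? SVE_VL_HW_MAX : -1;
}
```

With VL_POLICY_CLAMP, a request of 2048 conveniently means "the maximum the platform supports"; with VL_POLICY_STRICT the same request fails loudly, which is the behaviour argued for above.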



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 08:26:21 2023
Date: Wed, 19 Apr 2023 10:25:49 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Message-ID: <ZD+ljXSEPCmPMAtN@Air-de-Roger>
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
 <ZD6V0wzw/VS/MMw/@Air-de-Roger>
 <d301e110-f840-a032-c406-2f7404752783@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d301e110-f840-a032-c406-2f7404752783@suse.com>

On Wed, Apr 19, 2023 at 08:17:45AM +0200, Jan Beulich wrote:
> On 18.04.2023 15:06, Roger Pau Monné wrote:
> > On Tue, Apr 18, 2023 at 01:00:53PM +0200, Jan Beulich wrote:
> >> On 18.04.2023 11:24, Roger Pau Monne wrote:
> >>> --- a/xen/arch/x86/include/asm/config.h
> >>> +++ b/xen/arch/x86/include/asm/config.h
> >>> @@ -44,6 +44,20 @@
> >>>  /* Linkage for x86 */
> >>>  #ifdef __ASSEMBLY__
> >>>  #define ALIGN .align 16,0x90
> >>> +#ifdef CONFIG_LIVEPATCH
> >>> +#define START_LP(name)                          \
> >>> +  jmp name;                                     \
> >>> +  .pushsection .text.name, "ax", @progbits;     \
> >>> +  name:
> >>> +#define END_LP(name)                            \
> >>> +  .size name, . - name;                         \
> >>> +  .type name, @function;                        \
> >>> +  .popsection
> >>> +#else
> >>> +#define START_LP(name)                          \
> >>> +  name:
> >>> +#define END_LP(name)
> >>> +#endif
> >>>  #define ENTRY(name)                             \
> >>>    .globl name;                                  \
> >>>    ALIGN;                                        \
> >>
> >> Couldn't END_LP() set type and size unconditionally? (But see also
> >> below.)
> > 
> > I see, so that we could also use it for debug purposes.  I guess at
> > that point it might be better to use {START,END}_FUNC() to note that
> > the macros also have an effect beyond that of livepatching.
> > 
> > Maybe also introduce a START_ENTRY() that replaces ENTRY()?  Albeit I
> > find START_ENTRY a weird name.
> 
> So do I. {START,END}_FUNC() or whatever else are in principle fine, but
> I take it that you're aware that we meanwhile have two or even three
> competing proposals on a general scheme of such annotations, and we
> don't seem to be able to agree on one. (I guess I'll make a design
> session proposal on this topic for Prague.)

Oh, I wasn't aware we had other proposals; I've been away on and off
quite a lot recently, and haven't been able to keep up with all the
xen-devel email.  Do you have any references at hand?

> One thing needs to be clear though: Macros doing things solely needed
> for LP need to not have extra effects with it disabled, and such
> macros also better wouldn't e.g. insert stray JMP when not really
> needed. Hence I expect we still want (some) LP-specific macros besides
> whatever we settle on as the generic ones.

The stray jmp can be inserted only in the livepatch case, if we end up
having to add it.
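
To illustrate that point, one possible shape (purely a sketch with
hypothetical {START,END}_FUNC() names, not the actual patch) keeps the
.type/.size annotations unconditional and makes only the jmp and the
per-function section depend on CONFIG_LIVEPATCH:

```asm
/* Hypothetical sketch: .type/.size are always emitted; the jmp and
 * the out-of-line .text.name section exist only when CONFIG_LIVEPATCH
 * is enabled. */
#ifdef CONFIG_LIVEPATCH
#define START_FUNC(name)                        \
  jmp name;                                     \
  .pushsection .text.name, "ax", @progbits;     \
  name:
#define FUNC_SECT_END .popsection
#else
#define START_FUNC(name)                        \
  name:
#define FUNC_SECT_END
#endif
#define END_FUNC(name)                          \
  .size name, . - name;                         \
  .type name, @function;                        \
  FUNC_SECT_END
```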

Maybe we should just go with Linux names, so initially I would like to
use:

SYM_FUNC_START{_NOALIGN}(name)
SYM_FUNC_START_LOCAL{_NOALIGN}(name)
SYM_FUNC_END(name)
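
If those names were adopted, they could plausibly be layered on top of
the livepatch-aware macros from this patch.  A minimal sketch under that
assumption (the _LOCAL variants simply omit .globl):

```asm
/* Hypothetical sketch only: Linux-style names expressed in terms of
 * the START_LP/END_LP macros proposed in this patch.  Note that with
 * CONFIG_LIVEPATCH the ALIGN applies to the jmp site, not to the
 * out-of-line function body. */
#define SYM_FUNC_START(name)                    \
  .globl name;                                  \
  ALIGN;                                        \
  START_LP(name)
#define SYM_FUNC_START_NOALIGN(name)            \
  .globl name;                                  \
  START_LP(name)
#define SYM_FUNC_START_LOCAL(name)              \
  ALIGN;                                        \
  START_LP(name)
#define SYM_FUNC_START_LOCAL_NOALIGN(name)      \
  START_LP(name)
#define SYM_FUNC_END(name)                      \
  END_LP(name)
```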

> >>> --- a/xen/arch/x86/x86_64/entry.S
> >>> +++ b/xen/arch/x86/x86_64/entry.S
> >>> @@ -660,7 +660,7 @@ ENTRY(early_page_fault)
> >>>  
> >>>          ALIGN
> >>>  /* No special register assumptions. */
> >>> -restore_all_xen:
> >>> +START_LP(restore_all_xen)
> >>>          /*
> >>>           * Check whether we need to switch to the per-CPU page tables, in
> >>>           * case we return to late PV exit code (from an NMI or #MC).
> >>> @@ -677,6 +677,7 @@ UNLIKELY_END(exit_cr3)
> >>>  
> >>>          RESTORE_ALL adj=8
> >>>          iretq
> >>> +END_LP(restore_all_xen)
> >>
> >> While I'm fine with this conversion, ...
> > 
> > So I take that overall you would agree to adding this extra
> > information using a pair of macros similar to the proposed ones.
> 
> Yes (with the above in mind, though).

Sure, thanks for the feedback.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 08:43:47 2023
Message-ID: <5c476b65-0340-2a0e-e436-46368d3236b7@suse.com>
Date: Wed, 19 Apr 2023 10:43:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
 <ZD6V0wzw/VS/MMw/@Air-de-Roger>
 <d301e110-f840-a032-c406-2f7404752783@suse.com>
 <ZD+ljXSEPCmPMAtN@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD+ljXSEPCmPMAtN@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19.04.2023 10:25, Roger Pau Monné wrote:
> On Wed, Apr 19, 2023 at 08:17:45AM +0200, Jan Beulich wrote:
>> On 18.04.2023 15:06, Roger Pau Monné wrote:
>>> On Tue, Apr 18, 2023 at 01:00:53PM +0200, Jan Beulich wrote:
>>>> On 18.04.2023 11:24, Roger Pau Monne wrote:
>>>>> --- a/xen/arch/x86/include/asm/config.h
>>>>> +++ b/xen/arch/x86/include/asm/config.h
>>>>> @@ -44,6 +44,20 @@
>>>>>  /* Linkage for x86 */
>>>>>  #ifdef __ASSEMBLY__
>>>>>  #define ALIGN .align 16,0x90
>>>>> +#ifdef CONFIG_LIVEPATCH
>>>>> +#define START_LP(name)                          \
>>>>> +  jmp name;                                     \
>>>>> +  .pushsection .text.name, "ax", @progbits;     \
>>>>> +  name:
>>>>> +#define END_LP(name)                            \
>>>>> +  .size name, . - name;                         \
>>>>> +  .type name, @function;                        \
>>>>> +  .popsection
>>>>> +#else
>>>>> +#define START_LP(name)                          \
>>>>> +  name:
>>>>> +#define END_LP(name)
>>>>> +#endif
>>>>>  #define ENTRY(name)                             \
>>>>>    .globl name;                                  \
>>>>>    ALIGN;                                        \
>>>>
>>>> Couldn't END_LP() set type and size unconditionally? (But see also
>>>> below.)
>>>
>>> I see, so that we could also use it for debug purposes.  I guess at
>>> that point it might be better to use {START,END}_FUNC() to note that
>>> the macros also have an effect beyond that of livepatching.
>>>
>>> Maybe also introduce a START_ENTRY() that replaces ENTRY()?  Albeit I
>>> find START_ENTRY a weird name.
>>
>> So do I. {START,END}_FUNC() or whatever else are in principle fine, but
>> I take it that you're aware that we meanwhile have two or even three
>> competing proposals on a general scheme of such annotations, and we
>> don't seem to be able to agree on one. (I guess I'll make a design
>> session proposal on this topic for Prague.)
> 
> Oh, I wasn't aware we had other proposals; I've been away on and off
> quite a lot recently, and haven't been able to keep up with all the
> xen-devel email.  Do you have any references at hand?

Andrew said he had posted something long ago, but I don't recall it and
hence have no reference. My posting from about a year ago is
https://lists.xen.org/archives/html/xen-devel/2022-04/msg00876.html
Subsequently Jane went more or less the Linux route:
https://lists.xen.org/archives/html/xen-devel/2022-08/msg00236.html

>> One thing needs to be clear though: Macros doing things solely needed
>> for LP need to not have extra effects with it disabled, and such
>> macros also better wouldn't e.g. insert stray JMP when not really
>> needed. Hence I expect we still want (some) LP-specific macros besides
>> whatever we settle on as the generic ones.
> 
> The stray jmp can be inserted only in the livepatch case, if we end up
> having to add it.
> 
> Maybe we should just go with Linux names, so initially I would like to
> use:
> 
> SYM_FUNC_START{_NOALIGN}(name)
> SYM_FUNC_START_LOCAL{_NOALIGN}(name)
> SYM_FUNC_END(name)

As said in replies on the earlier threads, I think these are overly
verbose and come in overly many variations.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 09:03:23 2023
Date: Wed, 19 Apr 2023 11:02:53 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Message-ID: <ZD+uPd/wICTK6qB4@Air-de-Roger>
References: <20230418154223.20181-1-roger.pau@citrix.com>
 <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
 <7452a070-48b8-03fb-26c7-3dc7d652dcba@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7452a070-48b8-03fb-26c7-3dc7d652dcba@suse.com>

On Wed, Apr 19, 2023 at 09:07:41AM +0200, Jan Beulich wrote:
> On 18.04.2023 17:54, Andrew Cooper wrote:
> > On 18/04/2023 4:42 pm, Roger Pau Monne wrote:
> >> The addition of the flags field in the vcpu_set_singleshot_timer in
> >> 505ef3ea8687 is an ABI breakage, as the size of the structure is
> >> increased.
> >>
> >> Remove such field addition and drop the implementation of the
> >> VCPU_SSHOTTMR_future flag.  If a timer provides an expired timeout
> >> value just inject the timer interrupt.
> >>
> >> Bump the Xen interface version, and keep the flags field and
> >> VCPU_SSHOTTMR_future available for guests using the old interface.
> >>
> >> Note the removal of the field from the vcpu_set_singleshot_timer
> >> struct allows removing the compat translation of the struct.
> >>
> >> Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
> >> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > 
> > While everything said is true, this isn't the reason to get rid of
> > VCPU_SSHOTTMR_future.
> > 
> > While 505ef3ea8687 does appear to have been an ABI break, that's
> > incidental.  It dates from 2007 so whatever we have now is the de-facto
> > ABI, whether we like it or not.
> > 
> > The reason to delete this is that it is a monumentally and entirely
> > stupid idea which should have been rejected outright at the time, and
> > the only guest we can find which uses it also BUG_ON()'s in response to
> > -ETIME.
> 
> The instance in Linux (up to 4.6) that I could find was
> 
> 	BUG_ON(ret != 0 && ret != -ETIME);
> 
> i.e. not really dying when getting back -ETIME. (And if there really was
> a BUG_ON(ret) somewhere despite setting the flag, it would be a bug there,
> not something to "fix" in Xen.) I'm afraid I also disagree on "stupid
> idea" as well as ...

The logic in old Linux is indeed 'fine' in the sense that it doesn't
hit a BUG_ON.

The problem we are seeing is that when logdirty is enabled on a guest
with 32 vCPUs (and without any kind of logdirty hardware assistance)
the contention on the p2m lock is so high that by the time
VCPUOP_set_singleshot_timer has copied the hypercall data from HVM
context the provided timeout has already expired, leading to:

[   65.543736] hrtimer: interrupt took 10817714 ns
[   65.514171] CE: xen increased min_delta_ns to 150000 nsec
[   65.514171] CE: xen increased min_delta_ns to 225000 nsec
[   65.514171] CE: xen increased min_delta_ns to 337500 nsec
[   65.566495] CE: xen increased min_delta_ns to 150000 nsec
[   65.514171] CE: xen increased min_delta_ns to 506250 nsec
[   65.573088] CE: xen increased min_delta_ns to 150000 nsec
[   65.572884] CE: xen increased min_delta_ns to 150000 nsec
[   65.514171] CE: xen increased min_delta_ns to 759375 nsec
[   65.638644] CE: xen increased min_delta_ns to 150000 nsec
[   65.566495] CE: xen increased min_delta_ns to 225000 nsec
[   65.514171] CE: xen increased min_delta_ns to 1000000 nsec
[   65.572884] CE: xen increased min_delta_ns to 225000 nsec
[   65.573088] CE: xen increased min_delta_ns to 225000 nsec
[   65.630224] CE: xen increased min_delta_ns to 150000 nsec
...

xenrt1062821 login: [   82.752788] CE: Reprogramming failure. Giving up
[   82.779470] CE: xen increased min_delta_ns to 1000000 nsec
[   82.793075] CE: Reprogramming failure. Giving up
[   82.779470] CE: Reprogramming failure. Giving up
[   82.821864] CE: xen increased min_delta_ns to 506250 nsec
[   82.821864] CE: xen increased min_delta_ns to 759375 nsec
[   82.821864] CE: xen increased min_delta_ns to 1000000 nsec
[   82.821864] CE: Reprogramming failure. Giving up
[   82.856256] CE: Reprogramming failure. Giving up
[   84.566279] CE: Reprogramming failure. Giving up
[   84.649493] Freezing user space processes ... 
[  130.604032] INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
[  130.604032] Task dump for CPU 14:
[  130.604032] swapper/14      R  running task        0     0      1 0x00000000
[  130.604032] Call Trace:
[  130.604032]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
[  130.604032]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
[  130.604032]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
[  130.604032]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
[  130.604032]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
[  130.604032]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
[  549.654536] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
[  549.655463] Task dump for CPU 26:
[  549.655463] swapper/26      R  running task        0     0      1 0x00000000
[  549.655463] Call Trace:
[  549.655463]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
[  549.655463]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
[  549.655463]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
[  549.655463]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
[  549.655463]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
[  549.655463]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
[  821.888478] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
[  821.888596] Task dump for CPU 26:
[  821.888622] swapper/26      R  running task        0     0      1 0x00000000
[  821.888677] Call Trace:
[  821.888712]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
[  821.888771]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
[  821.888818]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
[  821.888865]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
[  821.888917]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
[  821.888966]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14

At some point Linux simply gives up trying to reprogram the timer, and
that obviously leads to CPU stalls.

Ignoring the VCPU_SSHOTTMR_future flag allows the guest to survive, by
not returning -ETIME and just injecting the expired interrupt.

Overall I'm not sure how useful VCPU_SSHOTTMR_future is, at least as
currently implemented in Linux.

Instead of trying to reprogram the timer, Linux should do the
equivalent of self-injecting a timer interrupt in order to cope with
the fact that the selected timeout has already expired.

> > It can literally only be used to shoot yourself in the foot with, and
> > more recent Linuxes have dropped their use of it.
> 
> ... this: If used correctly, it can avoid injection of a pointless event.
> Clearly the Linux change dropping use of the flag indicates that its use
> wasn't correct (anymore?), likely because of not properly dealing with
> -ETIME up the call stack. I'm willing to trust Jeremy / Keir that at the
> time of its introduction such a problem didn't exist.

I'm afraid Linux didn't implement this properly originally, as it
attempted to reprogram the timer with a bigger timeout rather than
just doing the equivalent of self-injecting a timer interrupt.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 09:19:02 2023
Message-ID: <619fe14d-e5cd-c355-bcfa-1d20e0c219ca@suse.com>
Date: Wed, 19 Apr 2023 11:18:38 +0200
Subject: Re: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Henry Wang
 <Henry.Wang@arm.com>, Community Manager <community.manager@xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230418154223.20181-1-roger.pau@citrix.com>
 <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
 <7452a070-48b8-03fb-26c7-3dc7d652dcba@suse.com>
 <ZD+uPd/wICTK6qB4@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD+uPd/wICTK6qB4@Air-de-Roger>

On 19.04.2023 11:02, Roger Pau Monné wrote:
> On Wed, Apr 19, 2023 at 09:07:41AM +0200, Jan Beulich wrote:
>> On 18.04.2023 17:54, Andrew Cooper wrote:
>>> On 18/04/2023 4:42 pm, Roger Pau Monne wrote:
>>>> The addition of the flags field in the vcpu_set_singleshot_timer in
>>>> 505ef3ea8687 is an ABI breakage, as the size of the structure is
>>>> increased.
>>>>
>>>> Remove such field addition and drop the implementation of the
>>>> VCPU_SSHOTTMR_future flag.  If a timer provides an expired timeout
>>>> value just inject the timer interrupt.
>>>>
>>>> Bump the Xen interface version, and keep the flags field and
>>>> VCPU_SSHOTTMR_future available for guests using the old interface.
>>>>
>>>> Note the removal of the field from the vcpu_set_singleshot_timer
>>>> struct allows removing the compat translation of the struct.
>>>>
>>>> Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
>>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>
>>> While everything said is true, this isn't the reason to get rid of
>>> VCPU_SSHOTTMR_future.
>>>
>>> While 505ef3ea8687 does appear to have been an ABI break, that's
>>> incidental.  It dates from 2007 so whatever we have now is the de-facto
>>> ABI, whether we like it or not.
>>>
>>> The reason to delete this is that it is a monumentally and entirely
>>> stupid idea which should have been rejected outright at the time, and
>>> the only guest we can find which uses it also BUG_ON()'s in response to
>>> -ETIME.
>>
>> The instance in Linux (up to 4.6) that I could find was
>>
>> 	BUG_ON(ret != 0 && ret != -ETIME);
>>
>> i.e. not really dying when getting back -ETIME. (And if there really was
>> a BUG_ON(ret) somewhere despite setting the flag, it would be a bug there,
>> not something to "fix" in Xen.) I'm afraid I also disagree on "stupid
>> idea" as well as ...
> 
> The logic in old Linux is indeed 'fine' in the sense that it doesn't
> hit a BUG_ON.
> 
> The problem we are seeing is that when logdirty is enabled on a guest
> with 32 vCPUs (and without any kind of logdirty hardware assistance)
> the contention on the p2m lock is so high that by the time
> VCPUOP_set_singleshot_timer has copied the hypercall data from HVM
> context the provided timeout has already expired, leading to:
> 
> [log snipped]
> 
> At some point Linux simply gives up trying to reprogram the timer, and
> that obviously leads to CPU stalls.

And that's all with old enough Linux then, I suppose?

> Ignoring the VCPU_SSHOTTMR_future flag allows the guest to survive, by
> not returning -ETIME and just injecting the expired interrupt.
> 
> Overall I'm not sure how useful VCPU_SSHOTTMR_future is, at least as
> currently implemented in Linux.
> 
> Instead of trying to reprogram the timer, Linux should do the
> equivalent of self-injecting a timer interrupt in order to cope with
> the fact that the selected timeout has already expired.

Indeed - that's what I was expecting would be happening, but I didn't
go check their code. Yet their getting it wrong still isn't a reason
to ignore the request, at least not unconditionally. OSes could be
getting it right, and they could then benefit from the avoided event.

As to "unconditionally": Introducing a per-guest control is likely too
much overhead for something that, aiui, isn't commonly used (anymore).
But tying this to a command line option might make sense - engaging it
shouldn't (hopefully) lead to misbehavior in guests properly using the
flag, so it ought to be okay to enable in a system-wide manner.

I vaguely recall considerations for similar overrides to hypercall
behavior in other areas, so such an option - if made extensible -
might find further uses down the road.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 09:31:23 2023
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org> <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com> <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
In-Reply-To: <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Wed, 19 Apr 2023 12:36:47 +0300
Message-ID: <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Michal Orzel <michal.orzel@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>

Hi Michal,

I corrected Xen's command line.
Now it is:
xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";

Unfortunately the result was the same.

(XEN)  - Dom0 mode: Relaxed
(XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
(XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) Coloring general information
(XEN) Way size: 64kB
(XEN) Max. number of colors available: 16
(XEN) Xen color(s): [ 0 ]
(XEN) alternatives: Patching with alt table 00000000002cc690 ->
00000000002ccc0c
(XEN) Color array allocation failed for dom0
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Error creating domain 0
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...

I am going to find out how the command line arguments are passed and parsed.
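Since Xen echoes the command line it actually parsed early in its boot
output (a "(XEN) Command line: ..." banner; exact wording may vary
between Xen versions), grepping the hypervisor log is a quick way to
cross-check it. The snippet below works on a saved serial log; on a
live dom0, `xl dmesg` can be used instead of the file:

```shell
# Simulated captured serial log; on real hardware this would be the
# console capture from the failing boot.
cat > /tmp/xen-boot.log <<'EOF'
(XEN) Xen version 4.16
(XEN) Command line: console=dtuart dom0_mem=1600M way_size=65536 xen_colors=0-3 dom0_colors=4-7
EOF

# Extract the command line Xen actually parsed.
cmdline=$(grep -o 'Command line:.*' /tmp/xen-boot.log)
echo "$cmdline"

# Verify the coloring options really made it in.
if printf '%s\n' "$cmdline" | grep -q 'dom0_colors='; then
    echo "dom0_colors present"
else
    echo "dom0_colors missing from booted command line"
fi
```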

Regards,
Oleg

Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:

> Hi Michal,
>
> You put my nose into the problem. Thank you.
> I am going to use your point.
> Let's see what happens.
>
> Regards,
> Oleg
>
>
> Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
>
>> Hi Oleg,
>>
>> On 19/04/2023 09:03, Oleg Nikitenko wrote:
>> >
>> >
>> >
>> > Hello Stefano,
>> >
>> > Thanks for the clarification.
>> > My company uses yocto for image generation.
>> > What kind of information do you need in order to advise me in this case?
>> >
>> > Maybe the module sizes/addresses which were mentioned by Julien Grall?
>>
>> Sorry for jumping into the discussion, but FWICS the Xen command line
>> you provided seems not to be the one Xen actually booted with. The
>> error you are observing is most likely due to the dom0 colors
>> configuration not being specified (i.e. a missing dom0_colors=<>
>> parameter). Although this parameter is set in the command line you
>> provided, I strongly doubt that it is the actual command line in use.
>>
>> You wrote:
>> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>> dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>> timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>>
>> but:
>> 1) way_szize has a typo
>> 2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen
>> has only one:
>> (XEN) Xen color(s): [ 0 ]
>>
>> This makes me believe that no colors configuration actually ended up in
>> the command line that Xen booted with. A single color for Xen is the
>> "default if not specified", and the way size was probably calculated by
>> querying the HW.
>>
>> So I would suggest first cross-checking the command line in use.
>>
>> ~Michal
>>
>>
>> >
>> > Regards,
>> > Oleg
>> >
>> > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org>:
>> >
>> >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>> >     > Hi Julien,
>> >     >
>> >     > >> This feature has not been merged in Xen upstream yet
>> >     >
>> >     > > would assume that upstream + the series on the ML [1] work
>> >     >
>> >     > Please clarify this point.
>> >     > Because the two thoughts are controversial.
>> >
>> >     Hi Oleg,
>> >
>> >     As Julien wrote, there is nothing controversial. As you are aware,
>> >     Xilinx maintains a separate Xen tree specific for Xilinx here:
>> >     https://github.com/xilinx/xen
>> >
>> >     and the branch you are using (xlnx_rebase_4.16) comes from there.
>> >
>> >
>> >     Instead, the upstream Xen tree lives here:
>> >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>> >
>> >
>> >     The Cache Coloring feature that you are trying to configure is
>> >     present in xlnx_rebase_4.16, but not yet present upstream (there
>> >     is an outstanding patch series to add cache coloring to Xen
>> >     upstream, but it hasn't been merged yet.)
>> >
>> >
>> >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too
>> >     much for you, as you already have Cache Coloring as a feature there.
>> >
>> >
>> >     I take it you are using ImageBuilder to generate the boot
>> >     configuration? If so, please post the ImageBuilder config file
>> >     that you are using.
>> >
>> >     But from the boot message, it looks like the colors configuration
>> >     for Dom0 is incorrect.
>> >
>>
>
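
For context, the xen,xen-bootargs property discussed above normally lives in
the /chosen node of the device tree that the bootloader hands to Xen; a
hedged sketch of what a correctly generated fragment would look like (the
node layout is assumed, only the property value comes from this thread):

```dts
/* Illustrative /chosen fragment. The coloring options must appear inside
 * xen,xen-bootargs for Xen to parse them at boot. */
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
};
```

If this node is missing or carries a different string, Xen silently falls
back to its defaults, which would explain the single-color boot log.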

--000000000000ec2d5e05f9ad13bc--


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 09:38:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 09:38:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523283.813178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp4Gi-0002MS-RD; Wed, 19 Apr 2023 09:38:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523283.813178; Wed, 19 Apr 2023 09:38:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp4Gi-0002ML-OD; Wed, 19 Apr 2023 09:38:28 +0000
Received: by outflank-mailman (input) for mailman id 523283;
 Wed, 19 Apr 2023 09:38:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dNGQ=AK=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pp4Gh-0002MF-1A
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 09:38:27 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eda8cb58-de95-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 11:38:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eda8cb58-de95-11ed-b21f-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681897102;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=3ahWW2+Q3S/RvB6qR72sS8HIsTdCuAt1S9bqdH2z+7w=;
	b=qc5CvobL2r3hZWvSzSJqNdvsc3YOVQi05uwUK7zoWReiby465BIJI/zuxMUu5qXpSXXyVJ
	vzTv9ILxgQrJkJR8zzEJpve7BFlEE6OioXHOKbuuQuiRK8FoLk871pSTksNqCCIxl8ucz5
	wmXiG7xJVjNb7Bo6RoRV9J4Lz7UUc8BPgbwLe4YNDPdH4DGHf7Ptnhs3DupHwgSPtRAHBW
	rOKHJXmOZJEWGQe1t29Q6HPEqx9hwBtb4vRyuoV7kVeWG6zEx76YqS0yoDbWMbvMs5WCQW
	MFgUf5iaArW/mcuy/Cw2R/4IofuuCDjuJoLMyvJOsnpRGsf+HQwI40VP2uCO1Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681897102;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=3ahWW2+Q3S/RvB6qR72sS8HIsTdCuAt1S9bqdH2z+7w=;
	b=h9kGxEZF0/449e5pzKL8RjkYF5PGK7O3l0s7czECIEu7mvAFDvboALol8fOrjIlj6I4Yg9
	stFZXVlGYGrqZrAQ==
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
 <87r0sh4m7a.ffs@tglx> <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
Date: Wed, 19 Apr 2023 11:38:21 +0200
Message-ID: <87a5z443g2.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

Paul!

On Tue, Apr 18 2023 at 22:10, Paul Menzel wrote:
> Am 18.04.23 um 10:40 schrieb Thomas Gleixner:
>> Can you please provide the output of cpuid?
>
> Of course. Here is the top, and the whole output is attached.

Thanks for the data. Can you please apply the debug patch below and
provide the dmesg output? Just the line added by the patch is enough.
You can boot with cpuhp.parallel=off so you don't have to wait for
10 seconds.

Thanks,

        tglx
---
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -814,6 +814,7 @@ static int wakeup_secondary_cpu_via_init
 	unsigned long send_status = 0, accept_status = 0;
 	int maxlvt, num_starts, j;
 
+	pr_info("Kicking AP alive: %d\n", phys_apicid);
 	preempt_disable();
 	maxlvt = lapic_get_maxlvt();
 


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 09:42:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 09:42:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523287.813189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp4KM-0003nm-BL; Wed, 19 Apr 2023 09:42:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523287.813189; Wed, 19 Apr 2023 09:42:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp4KM-0003nf-8b; Wed, 19 Apr 2023 09:42:14 +0000
Received: by outflank-mailman (input) for mailman id 523287;
 Wed, 19 Apr 2023 09:42:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAk3=AK=citrix.com=prvs=46623c849=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pp4KK-0003nZ-S8
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 09:42:13 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74154e24-de96-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 11:42:11 +0200 (CEST)
Received: from mail-dm6nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 05:42:03 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DS7PR03MB5510.namprd03.prod.outlook.com (2603:10b6:5:2ce::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 09:42:01 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 09:42:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74154e24-de96-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681897331;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=RLWIMFXKjEqHXJfkfeAHLj0Cn07ctCqIClDuKE7Z+O0=;
  b=Fas0rAJwLq3lt3SbqTinQmO3TfC7osFVl1Y2fm8f2N4+SnqyDU+Q9ypQ
   HXCZC3YPJ5N5lQHbvQdhgivvfIvF+oJuBkdVYPShRkxGHf5Z9RtTwy03z
   bQmAt+1pQplzbGkfJ2Nr6OMKbIHjt85xghYBRlYSt4jTQ843/JbHWaPQQ
   U=;
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jZYwLqpT5CDZKJZvDuwbJHUUBh7vS610tyZkM1KO2N9asaT0PcgpbAWohpoAouFcy3cpIKkj5GtdAl82v2tDx1TdUJMDvB0ZiYt96C3jPUCv0adqcPyZCU/dNg0k6wXxb29RnhbtCERspX/0d038smfEMdRqBWsySjepA1dANSLwgPOQEhQp2it2+6oDvbx7Of4jOdniEbj8SfvXK4iQH8qy84kHgi4+rR2qvX03F2otLFotxfaXihqB4LAXL8SZ7l0eUfRnYHJzIuzhQm1qZwrvnOcxdCbYSrbV6NGJYYP6k/STMR4maV7CgHXarEqA62dTG9gmDp2Cb9clPv2R0A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=A15T0GhBZob4zDopz2ez9OET1z5EhIqj/3n3yuOhFkc=;
 b=GAThWiEUABvteL8V4DgpfrNv7CFeIegqbdfSq5XrwMjLjkxrKdaSI4BmLfdrdxbMY/th8PdhBhjztK44eTRo5IwuWkMPMxwPs0wpHgr7+ClN2D2eXZpxqk2t3ug2r4ye0EQB4vGN48Ee69V9CUP524DpvXK1YqBAHzhqLzGkezeCG1WDDACPOWdzjt7uJh/z/DERpK2OAT79YqmuAV2scJ1FIWptK43ySJK9sCFmCDmFTNOaOT5WI/xhYSQIOOrqUMJSWaX4li4Rc64spICSiG3JcUNxOXYV8QUrq8RFt43Tkk/WhNuHOcAJ+gh19/iI/R1vcAx+3LqI182Iswv5VA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A15T0GhBZob4zDopz2ez9OET1z5EhIqj/3n3yuOhFkc=;
 b=mmYhOy1WSIv3ncvvMQ3OOHtpjkFXF9p6cg7uXNbjhRE2Mf66OkvmoL+YlBwo6Dil4v91mtH/BEfZNmxXA3Ll6O+2uUw1lkzfhG617HDW96B1izk21n6p0uAbfoEuAW0Eva+hSGhD2diP6SbWToXxY0XnKYo6aEfDyuie4Zwl1Q8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 19 Apr 2023 11:41:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Message-ID: <ZD+3Y+YYQxFSoAJi@Air-de-Roger>
References: <20230418154223.20181-1-roger.pau@citrix.com>
 <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
 <ZD6/Fk6S6D421AgE@Air-de-Roger>
 <acea0109-967b-f3d3-2a60-b71e5a207ea6@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <acea0109-967b-f3d3-2a60-b71e5a207ea6@citrix.com>
X-ClientProxiedBy: LO4P123CA0364.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18e::9) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DS7PR03MB5510:EE_
X-MS-Office365-Filtering-Correlation-Id: fb1ed386-ee0e-4423-42de-08db40ba536c
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fb1ed386-ee0e-4423-42de-08db40ba536c
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 09:42:01.3667
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6MYJP01+qXclTssKy3etEmXEY/a46OEKN++chyG6hr+hhFmeu9/JjtgsyTrlyP//HCmKCKyj+oTwu6Z4pieRgQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5510

On Tue, Apr 18, 2023 at 05:12:07PM +0100, Andrew Cooper wrote:
> On 18/04/2023 5:02 pm, Roger Pau Monné wrote:
> > On Tue, Apr 18, 2023 at 04:54:49PM +0100, Andrew Cooper wrote:
> >> On 18/04/2023 4:42 pm, Roger Pau Monne wrote:
> >>> The addition of the flags field in the vcpu_set_singleshot_timer in
> >>> 505ef3ea8687 is an ABI breakage, as the size of the structure is
> >>> increased.
> >>>
> >>> Remove such field addition and drop the implementation of the
> >>> VCPU_SSHOTTMR_future flag.  If a timer provides an expired timeout
> >>> value just inject the timer interrupt.
> >>>
> >>> Bump the Xen interface version, and keep the flags field and
> >>> VCPU_SSHOTTMR_future available for guests using the old interface.
> >>>
> >>> Note the removal of the field from the vcpu_set_singleshot_timer
> >>> struct allows removing the compat translation of the struct.
> >>>
> >>> Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
> >>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >> While everything said is true, this isn't the reason to get rid of
> >> VCPU_SSHOTTMR_future.
> >>
> >> While 505ef3ea8687 does appear to have been an ABI break, that's
> >> incidental.  It dates from 2007, so whatever we have now is the de-facto
> >> ABI, whether we like it or not.
> >>
> >> The reason to delete this is that it is a monumentally and entirely
> >> stupid idea which should have been rejected outright at the time, and
> >> the only guest we can find which uses it also BUG_ON()'s in response to
> >> -ETIME.
> > I agree, but didn't think it was necessary to get into debating
> > whether it's useful or not, since its introduction was bogus anyway.
> 
> Well - the reason to actually make a change is that (older) guests are
> really exploding on that BUG_ON() for reasons outside of their own control.
> 
> And the reason to fix it by ignoring VCPU_SSHOTTMR_future is because the
> entire concept is broken and should never have existed.
> 
> The ABI argument just adds to why the patch ought to have been rejected
> at the time.  But it was done, and the fact it has been like this for 16
> years means that the ABI shouldn't change further, even if it was done in
> error in the first place.
> 
> >
> >> It can literally only be used to shoot yourself in the foot with, and
> >> more recent Linuxes have dropped their use of it.
> >>
> >> The structure needs to stay its current shape, and while it's fine to
> >> hide the VCPU_SSHOTTMR_future behind an interface version macro, we do
> >> need to say that it is explicitly ignored.
> > Oh, I think I've dropped the comment I had added next to
> > VCPU_SSHOTTMR_future that contained /* Ignored. */ (just like for the whole
> > flags field).
> >
> > I can elaborate a bit on why VCPU_SSHOTTMR_future is not useful in the
> > commit log, and add that Ignored comment to the flag.
> 
> The important thing is to not actually change the size of the structure,
> and to change the commit message to explain the real reason why we need
> to make the change.

Why not revert back to the previous (smaller) size of the structure?

That would work for guests that have been built with Xen 3.0 headers.
Currently Xen copies past the original (3.0) structure and likely
picks up rubbish from the guest stack of those guests, which might
have the VCPU_SSHOTTMR_future bit set and end up returning -ETIME
unexpectedly.

Overall I don't see the benefit of keeping the flags field, even if
the ABI break happened so long ago that it technically doesn't
matter anymore.  Keeping an empty flags field is kind of useless,
all the more so since removing it allows dropping the compat code
handling for VCPUOP_set_singleshot_timer.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:07:21 2023
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v1] tools/libs/guest: assist gcc13's realloc analyzer
Date: Wed, 19 Apr 2023 10:06:33 +0000
Message-Id: <20230419100633.13047-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

gcc13 fails to track the allocated memory in backup_ptes:

xg_offline_page.c: In function 'backup_ptes':
xg_offline_page.c:191:13: error: pointer 'orig' may be used after 'realloc' [-Werror=use-after-free]
  191 |             free(orig);

Assist the analyzer by slightly rearranging the code:
In case realloc succeeds, the previous allocation is either extended
or released internally. In case realloc fails, the previous allocation
is left unchanged. Return an error in this case; the caller will
release the currently allocated memory in its error path.

http://bugzilla.suse.com/show_bug.cgi?id=1210570

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_offline_page.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/tools/libs/guest/xg_offline_page.c b/tools/libs/guest/xg_offline_page.c
index ccd0299f0f..8f0a252417 100644
--- a/tools/libs/guest/xg_offline_page.c
+++ b/tools/libs/guest/xg_offline_page.c
@@ -181,18 +181,14 @@ static int backup_ptes(xen_pfn_t table_mfn, int offset,
 
     if (backup->max == backup->cur)
     {
-        void *orig = backup->entries;
+        void *entries = realloc(backup->entries, backup->max * 2 *
+                                sizeof(struct pte_backup_entry));
 
-        backup->entries = realloc(
-            orig, backup->max * 2 * sizeof(struct pte_backup_entry));
-
-        if (backup->entries == NULL)
-        {
-            free(orig);
+        if (entries == NULL)
             return -1;
-        }
-        else
-            backup->max *= 2;
+
+        backup->entries = entries;
+        backup->max *= 2;
     }
 
     backup->entries[backup->cur].table_mfn = table_mfn;


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:11:21 2023
Message-ID: <38fcc039-bdd2-b939-d619-97621abc74d4@suse.com>
Date: Wed, 19 Apr 2023 12:11:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20230418154223.20181-1-roger.pau@citrix.com>
 <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
 <ZD6/Fk6S6D421AgE@Air-de-Roger>
 <acea0109-967b-f3d3-2a60-b71e5a207ea6@citrix.com>
 <ZD+3Y+YYQxFSoAJi@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD+3Y+YYQxFSoAJi@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19.04.2023 11:41, Roger Pau Monné wrote:
> On Tue, Apr 18, 2023 at 05:12:07PM +0100, Andrew Cooper wrote:
>> On 18/04/2023 5:02 pm, Roger Pau Monné wrote:
>>> On Tue, Apr 18, 2023 at 04:54:49PM +0100, Andrew Cooper wrote:
>>>> On 18/04/2023 4:42 pm, Roger Pau Monne wrote:
>>>>> The addition of the flags field in the vcpu_set_singleshot_timer in
>>>>> 505ef3ea8687 is an ABI breakage, as the size of the structure is
>>>>> increased.
>>>>>
>>>>> Remove such field addition and drop the implementation of the
>>>>> VCPU_SSHOTTMR_future flag.  If a timer provides an expired timeout
>>>>> value just inject the timer interrupt.
>>>>>
>>>>> Bump the Xen interface version, and keep the flags field and
>>>>> VCPU_SSHOTTMR_future available for guests using the old interface.
>>>>>
>>>>> Note the removal of the field from the vcpu_set_singleshot_timer
>>>>> struct allows removing the compat translation of the struct.
>>>>>
>>>>> Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
>>>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> >>>> While everything said is true, this isn't the reason to get rid of
>> >>>> VCPU_SSHOTTMR_future.
>>>>
>> >>>> While 505ef3ea8687 does appear to have been an ABI break, that's
>>>> incidental.  It dates from 2007 so whatever we have now is the de-facto
>>>> ABI, whether we like it or not.
>>>>
>> >>>> The reason to delete this is because it is a monumentally and entirely
>>>> stupid idea which should have been rejected outright at the time, and
>>>> the only guest we can find which uses it also BUG_ON()'s in response to
>>>> -ETIME.
>>> I agree, but didn't think it was necessary to get into debating
>>> whether it's useful or not, since its introduction was bogus anyway.
>>
>> Well - the reason to actually make a change is that (older) guests are
>> really exploding on that BUG_ON() for reasons outside of their own control.
>>
>> And the reason to fix it by ignoring VCPU_SSHOTTMR_future is because the
>> entire concept is broken and should never have existed.
>>
>> The ABI argument just adds to why the patch ought to have been rejected
>> at the time.  But it was done, and the fact it has been like this for 16
>> years means that the ABI shouldn't change further, even if it was done in
>> error in the first place.
>>
>>>
>>>> It can literally only be used to shoot yourself in the foot with, and
>>>> more recent Linuxes have dropped their use of it.
>>>>
>> >>>> The structure needs to stay its current shape, and while it's fine to
>>>> hide the VCPU_SSHOTTMR_future behind an interface version macro, we do
>>>> need to say that it is explicitly ignored.
>>> Oh, I think I've dropped the comment I had added next to
>>> VCPU_SSHOTTMR_future that contained /* Ignored. */ (just like for the whole
>>> flags field).
>>>
>>> I can elaborate a bit on why VCPU_SSHOTTMR_future is not useful in the
>>> commit log, and add that Ignored comment to the flag.
>>
>> The important thing is to not actually change the size of the structure,
>> and to change the commit message to explain the real reason why we need
>> to make the change.
> 
> Why not revert back to the previous (smaller) size of the structure?
> 
> That would work for guests that have been built with Xen 3.0 headers.

Are there any such guests known to still be in active use? Linux iirc
requires 4.0 as a minimum ...

Jan

> Currently Xen copies past the original (3.0) structure and likely
> picks up rubbish from the guest stack of those guests, which might
> have the VCPU_SSHOTTMR_future bit set and end up returning -ETIME
> unexpectedly.
> 
> Overall I don't see the benefit of keeping the flags field, even if
> the ABI break happened so long ago that it technically doesn't
> matter anymore.  Keeping an empty flags field is kind of useless,
> all the more so since removing it allows dropping the compat code
> handling for VCPUOP_set_singleshot_timer.
> 
> Thanks, Roger.



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:26:53 2023
Date: Wed, 19 Apr 2023 12:26:09 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Message-ID: <ZD/BwQ8u+rmOTt9S@Air-de-Roger>
References: <20230418154223.20181-1-roger.pau@citrix.com>
 <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
 <7452a070-48b8-03fb-26c7-3dc7d652dcba@suse.com>
 <ZD+uPd/wICTK6qB4@Air-de-Roger>
 <619fe14d-e5cd-c355-bcfa-1d20e0c219ca@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <619fe14d-e5cd-c355-bcfa-1d20e0c219ca@suse.com>
X-ClientProxiedBy: LO4P123CA0438.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a9::11) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SJ0PR03MB6584:EE_
X-MS-Office365-Filtering-Correlation-Id: c2a5fb83-bfaa-49de-b60f-08db40c08148
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c2a5fb83-bfaa-49de-b60f-08db40c08148
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 10:26:15.3184
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6584

On Wed, Apr 19, 2023 at 11:18:38AM +0200, Jan Beulich wrote:
> On 19.04.2023 11:02, Roger Pau Monné wrote:
> > On Wed, Apr 19, 2023 at 09:07:41AM +0200, Jan Beulich wrote:
> >> On 18.04.2023 17:54, Andrew Cooper wrote:
> >>> On 18/04/2023 4:42 pm, Roger Pau Monne wrote:
> >>>> The addition of the flags field in the vcpu_set_singleshot_timer in
> >>>> 505ef3ea8687 is an ABI breakage, as the size of the structure is
> >>>> increased.
> >>>>
> >>>> Remove such field addition and drop the implementation of the
> >>>> VCPU_SSHOTTMR_future flag.  If a timer provides an expired timeout
> >>>> value just inject the timer interrupt.
> >>>>
> >>>> Bump the Xen interface version, and keep the flags field and
> >>>> VCPU_SSHOTTMR_future available for guests using the old interface.
> >>>>
> >>>> Note the removal of the field from the vcpu_set_singleshot_timer
> >>>> struct allows removing the compat translation of the struct.
> >>>>
> >>>> Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
> >>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >>>
> >>> While everything said is true, this isn't the reason to get rid of
> >>> VCPU_SSHOTTMR_future.
> >>>
> >>> While 505ef3ea8687 does appear to have been an ABI break, that's
> >>> incidental.  It dates from 2007, so whatever we have now is the de
> >>> facto ABI, whether we like it or not.
> >>>
> >>> The reason to delete this is that it is a monumentally stupid idea
> >>> which should have been rejected outright at the time, and the only
> >>> guest we can find which uses it also BUG_ON()s in response to
> >>> -ETIME.
> >>
> >> The instance in Linux (up to 4.6) that I could find was
> >>
> >> 	BUG_ON(ret != 0 && ret != -ETIME);
> >>
> >> i.e. not really dying when getting back -ETIME. (And if there really was
> >> a BUG_ON(ret) somewhere despite setting the flag, it would be a bug there,
> >> not something to "fix" in Xen.) I'm afraid I also disagree on "stupid
> >> idea" as well as ...
> > 
> > The logic in old Linux is indeed 'fine' in the sense that it doesn't
> > hit a BUG_ON.
> > 
> > The problem we are seeing is that when logdirty is enabled on a guest
> > with 32vCPUs (and without any kind of logdirty hardware assistance)
> > the contention on the p2m lock is so high that by the time
> > VCPUOP_set_singleshot_timer has copied the hypercall data from HVM
> > context the provided timeout has already expired, leading to:
> > 
> > [   65.543736] hrtimer: interrupt took 10817714 ns
> > [   65.514171] CE: xen increased min_delta_ns to 150000 nsec
> > [   65.514171] CE: xen increased min_delta_ns to 225000 nsec
> > [   65.514171] CE: xen increased min_delta_ns to 337500 nsec
> > [   65.566495] CE: xen increased min_delta_ns to 150000 nsec
> > [   65.514171] CE: xen increased min_delta_ns to 506250 nsec
> > [   65.573088] CE: xen increased min_delta_ns to 150000 nsec
> > [   65.572884] CE: xen increased min_delta_ns to 150000 nsec
> > [   65.514171] CE: xen increased min_delta_ns to 759375 nsec
> > [   65.638644] CE: xen increased min_delta_ns to 150000 nsec
> > [   65.566495] CE: xen increased min_delta_ns to 225000 nsec
> > [   65.514171] CE: xen increased min_delta_ns to 1000000 nsec
> > [   65.572884] CE: xen increased min_delta_ns to 225000 nsec
> > [   65.573088] CE: xen increased min_delta_ns to 225000 nsec
> > [   65.630224] CE: xen increased min_delta_ns to 150000 nsec
> > ...
> > 
> > xenrt1062821 login: [   82.752788] CE: Reprogramming failure. Giving up
> > [   82.779470] CE: xen increased min_delta_ns to 1000000 nsec
> > [   82.793075] CE: Reprogramming failure. Giving up
> > [   82.779470] CE: Reprogramming failure. Giving up
> > [   82.821864] CE: xen increased min_delta_ns to 506250 nsec
> > [   82.821864] CE: xen increased min_delta_ns to 759375 nsec
> > [   82.821864] CE: xen increased min_delta_ns to 1000000 nsec
> > [   82.821864] CE: Reprogramming failure. Giving up
> > [   82.856256] CE: Reprogramming failure. Giving up
> > [   84.566279] CE: Reprogramming failure. Giving up
> > [   84.649493] Freezing user space processes ... 
> > [  130.604032] INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
> > [  130.604032] Task dump for CPU 14:
> > [  130.604032] swapper/14      R  running task        0     0      1 0x00000000
> > [  130.604032] Call Trace:
> > [  130.604032]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> > [  130.604032]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> > [  130.604032]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> > [  130.604032]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> > [  130.604032]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> > [  130.604032]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> > [  549.654536] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
> > [  549.655463] Task dump for CPU 26:
> > [  549.655463] swapper/26      R  running task        0     0      1 0x00000000
> > [  549.655463] Call Trace:
> > [  549.655463]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> > [  549.655463]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> > [  549.655463]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> > [  549.655463]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> > [  549.655463]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> > [  549.655463]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> > [  821.888478] INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
> > [  821.888596] Task dump for CPU 26:
> > [  821.888622] swapper/26      R  running task        0     0      1 0x00000000
> > [  821.888677] Call Trace:
> > [  821.888712]  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> > [  821.888771]  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> > [  821.888818]  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> > [  821.888865]  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> > [  821.888917]  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> > [  821.888966]  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> > 
> > At some point Linux simply gives up trying to reprogram the timer, and
> > that obviously leads to CPU stalls.
> 
> And that's all with old enough Linux then, I suppose?

That's Linux 3.10.

> > Ignoring the VCPU_SSHOTTMR_future flag allows the guest to survive, by
> > not returning -ETIME and just injecting the expired interrupt.
> > 
> > Overall I'm not sure how useful VCPU_SSHOTTMR_future is, at least as
> > currently implemented in Linux.
> > 
> > Instead of trying to reprogram the timer Linux should do the
> > equivalent of self-injecting a timer interrupt in order to cope with
> > the fact that the selected timeout has already expired.
> 
> Indeed - that's what I was expecting would be happening. But I didn't
> go check their code ... Yet them getting it wrong still isn't a reason
> to ignore the request, at least not unconditionally. OSes could be
> getting it right, and they could then benefit from the avoided event.

Well, the reason to ignore it would be that the introduction of the
flags field and the VCPU_SSHOTTMR_future option did break the ABI.

If we care about that behavior we should introduce a new hypercall
that either behaves that way by default, or has a flags field with
which to implement it.
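For illustration only, the argument struct of such a new hypercall could carry a flags field from the start, so its size can never change underneath the ABI again. Every name below (the _v2 struct, the flag constant, the field names) is hypothetical, not an actual Xen proposal:

```c
/* Hypothetical replacement for VCPUOP_set_singleshot_timer whose
 * argument carries a flags field from day one.  All names here are
 * illustrative only, not an actual Xen interface. */
#include <stdint.h>

struct vcpu_set_singleshot_timer_v2 {
    uint64_t timeout_abs_ns; /* absolute expiry, system time in ns */
    uint32_t flags;          /* VCPU_SSHOT_V2_* */
    uint32_t pad;            /* explicit padding: size stays 16 bytes */
};

/* Fail with -ETIME instead of injecting if the timeout already passed. */
#define VCPU_SSHOT_V2_FUTURE (1u << 0)
```

Making the padding explicit (and having the hypervisor reject a non-zero pad) keeps the struct the same size for 32-bit and 64-bit guests, avoiding the compat translation mentioned in the patch description.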

> As to "unconditionally": Introducing a per-guest control is likely too
> much overhead for something that, aiui, isn't commonly used (anymore).

No, I don't think any guest I've looked at (Linux, NetBSD, FreeBSD)
uses the VCPU_SSHOTTMR_future flag.

> But tying this to a command line option might make sense - engaging it
> shouldn't (hopefully) lead to misbehavior in guests properly using the
> flag, so ought to be okay to enable in a system-wide manner.

I personally don't think we should go to those lengths in order to
keep this behavior, because ignoring VCPU_SSHOTTMR_future (and thus
never returning -ETIME) is compatible with the current
implementation.  The Linux implementation shows that even
its current/past users didn't know how the flag should be used anyway.

As said above, if there's a willingness to have this behavior (which,
based on the current public implementations, there seems not to be),
it can always be implemented as a separate hypercall.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:42:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 10:42:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523307.813229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5GR-0003MH-K9; Wed, 19 Apr 2023 10:42:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523307.813229; Wed, 19 Apr 2023 10:42:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5GR-0003MA-Gr; Wed, 19 Apr 2023 10:42:15 +0000
Received: by outflank-mailman (input) for mailman id 523307;
 Wed, 19 Apr 2023 10:42:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ifGd=AK=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pp5GQ-0003M4-Kn
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 10:42:14 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2062c.outbound.protection.outlook.com
 [2a01:111:f400:fe59::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d65371ff-de9e-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 12:42:11 +0200 (CEST)
Received: from BN9PR03CA0945.namprd03.prod.outlook.com (2603:10b6:408:108::20)
 by PH7PR12MB6442.namprd12.prod.outlook.com (2603:10b6:510:1fa::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 10:42:08 +0000
Received: from BN8NAM11FT016.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:108:cafe::b5) by BN9PR03CA0945.outlook.office365.com
 (2603:10b6:408:108::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Wed, 19 Apr 2023 10:42:07 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT016.mail.protection.outlook.com (10.13.176.97) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 10:42:07 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 19 Apr
 2023 05:42:05 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 19 Apr 2023 05:42:04 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d65371ff-de9e-11ed-8611-37d641c3527e
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <2941a6f0-6d4a-56af-648e-e5734362fce1@amd.com>
Date: Wed, 19 Apr 2023 12:42:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: xen cache colors in ARM
Content-Language: en-US
To: Oleg Nikitenko <oleshiiwood@gmail.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org>
 <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org>
 <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com>
 <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT016:EE_|PH7PR12MB6442:EE_
X-MS-Office365-Filtering-Correlation-Id: 3ac11610-ee8b-4346-ce91-08db40c2b908
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 10:42:07.4939
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ac11610-ee8b-4346-ce91-08db40c2b908
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT016.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB6442


On 19/04/2023 11:36, Oleg Nikitenko wrote:
> 
> 
> Hi Michal,
> 
> I corrected xen's command line.
> Now it is
> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> 
> Unfortunately the result was the same.
> 
> (XEN)  - Dom0 mode: Relaxed
> (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> (XEN) Coloring general information
> (XEN) Way size: 64kB
> (XEN) Max. number of colors available: 16
> (XEN) Xen color(s): [ 0 ]
> (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> (XEN) Color array allocation failed for dom0
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Error creating domain 0
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
> 
> I am going to find out how command line arguments passed and parsed.
Best would be to cross-check the cmdline you provided with the one Xen sees.
For that you would need to enable early printk, so that Xen will print the cmdline (+ the boot modules).
Is Yocto the only workflow for building Xen in your case?

Early printk for zynqMP can be enabled through menuconfig:
Debugging Options->Early printk->Early printk with Cadence UART for Xilinx ZynqMP SOCs
This will automatically set early UART address to serial0 which is at 0xff000000.

I think using Yocto, you could either do something like:
bitbake xen -c menuconfig
or provide the necessary Kconfig options in a config file added to SRC_URI (most likely you already have
such a file with CONFIG_COLORING=y, as it is not enabled by default).
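For reference, the config-file route could look roughly like this; the .bbappend path, the fragment name, and the early-printk symbol are assumptions to adapt to your layer (confirm the exact symbols via menuconfig), while CONFIG_COLORING is the option already mentioned:

```
# In your layer: recipes-extended/xen/xen_%.bbappend (path illustrative)
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://xen-colors.cfg"

# files/xen-colors.cfg -- Kconfig fragment merged into Xen's .config
CONFIG_COLORING=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
```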

~Michal

> 
> Regards,
> Oleg
> 
> ср, 19 апр. 2023 г. в 11:25, Oleg Nikitenko <oleshiiwood@gmail.com <mailto:oleshiiwood@gmail.com>>:
> 
>     Hi Michal,
> 
>     You put my nose into the problem. Thank you.
>     I am going to use your point.
>     Let's see what happens.
> 
>     Regards,
>     Oleg
> 
> 
>     ср, 19 апр. 2023 г. в 10:37, Michal Orzel <michal.orzel@amd.com <mailto:michal.orzel@amd.com>>:
> 
>         Hi Oleg,
> 
>         On 19/04/2023 09:03, Oleg Nikitenko wrote:
>         >       
>         >
>         >
>         > Hello Stefano,
>         >
>         > Thanks for the clarification.
>         > My company uses yocto for image generation.
>         > What kind of information do you need to consult me in this case ?
>         >
>         > Maybe modules sizes/addresses which were mentioned by @Julien Grall <mailto:julien@xen.org <mailto:julien@xen.org>> ?
> 
>         Sorry for jumping into the discussion, but FWICS the Xen command line you provided seems not to be the one
>         Xen booted with. The error you are observing is most likely due to the dom0 colors configuration not being
>         specified (i.e. lack of a dom0_colors=<> parameter). Although this parameter is set in the command line you
>         provided, I strongly doubt that it is the actual command line in use.
> 
>         You wrote:
>         xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
> 
>         but:
>         1) way_szize has a typo
>         2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
>         (XEN) Xen color(s): [ 0 ]
> 
>         This makes me believe that no colors configuration actually ended up in the command line that Xen booted with.
>         A single color for Xen is the "default if not specified" and the way size was probably calculated by asking the HW.
> 
>         So I would suggest to first cross-check the command line in use.
> 
>         ~Michal
> 
> 
>         >
>         > Regards,
>         > Oleg
>         >
>         > вт, 18 апр. 2023 г. в 20:44, Stefano Stabellini <sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>>:
>         >
>         >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>         >     > Hi Julien,
>         >     >
>         >     > >> This feature has not been merged in Xen upstream yet
>         >     >
>         >     > > would assume that upstream + the series on the ML [1] work
>         >     >
>         >     > Please clarify this point.
>         >     > Because the two thoughts are controversial.
>         >
>         >     Hi Oleg,
>         >
>         >     As Julien wrote, there is nothing controversial. As you are aware,
>         >     Xilinx maintains a separate Xen tree specific for Xilinx here:
>         >     https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>
>         >
>         >     and the branch you are using (xlnx_rebase_4.16) comes from there.
>         >
>         >
>         >     Instead, the upstream Xen tree lives here:
>         >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>
>         >
>         >
>         >     The Cache Coloring feature that you are trying to configure is present
>         >     in xlnx_rebase_4.16, but not yet present upstream (there is an
>         >     outstanding patch series to add cache coloring to Xen upstream but it
>         >     hasn't been merged yet.)
>         >
>         >
>         >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
>         >     you as you already have Cache Coloring as a feature there.
>         >
>         >
>         >     I take it you are using ImageBuilder to generate the boot configuration?
>         >     If so, please post the ImageBuilder config file that you are using.
>         >
>         >     But from the boot message, it looks like the colors configuration for
>         >     Dom0 is incorrect.
>         >
> 


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:43:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 10:43:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523312.813239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5Hf-0003v4-2g; Wed, 19 Apr 2023 10:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523312.813239; Wed, 19 Apr 2023 10:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5He-0003ux-Uv; Wed, 19 Apr 2023 10:43:30 +0000
Received: by outflank-mailman (input) for mailman id 523312;
 Wed, 19 Apr 2023 10:43:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAk3=AK=citrix.com=prvs=46623c849=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pp5Hd-0003un-I4
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 10:43:29 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 01accab8-de9f-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 12:43:27 +0200 (CEST)
Received: from mail-co1nam11lp2176.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 06:43:10 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by MN2PR03MB4989.namprd03.prod.outlook.com (2603:10b6:208:1a5::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 10:43:08 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 10:43:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01accab8-de9f-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681901007;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=rlNNSuetub0TtpXM4QrP+7ZksCvo/FS6QUeGxDg1gYU=;
  b=dJ2tV8sdHBrE+OCV8V6/Suuh6t4KkUzTc1KPMqduzfnjpG9D43fUCA3o
   uT7QkAYOL/zLTgi7gZf8Pm/LHUOCKVLJuAfve4snWnfgqYQbwbLuzc9D3
   oioLWhYUHrnccOTQNZfLhZJDI7Gr0HGV0zojGQIyHFlYdMd59fm+cjg/K
   U=;
X-IronPort-RemoteIP: 104.47.56.176
X-IronPort-MID: 105428109
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:F+9ao60IJZYQVB04cPbD5Vdwkn2cJEfYwER7XKvMYLTBsI5bp2RSy
 WAcXzrQPPmIM2D8c4x1PoWwoE0Pu8TVy4QwS1Y5pC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS+XuDgNyo4GlD5gBnNagQ1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfLjwW0
 vUUNhw0TE7emOyp8amAaqpTr5F2RCXrFNt3VnBI6xj8VK9jareaBqLA6JlfwSs6gd1IEbDGf
 c0FZDFzbRPGJRpSJlMQD5F4l+Ct7pX9W2QA9BTJ+uxqvS6Kk1IZPLvFabI5fvSQQspYhACAr
 3/u9GXlGBAKcteYzFJp91r13rKTx3OiANN6+LuQ9a83mU+WwWgqWRQudF3imd6Dmh+ZcocKQ
 6AT0m90xUQoz2SpRNTgWxyzoFafowURHdFXFoUS+AyLj6bZ/QudLmwFVSJaLswrstcsQj4n3
 UPPmMnmbRRPvbuPWDSi/7GbhTqoPG4eKmpqTSQDSA4Y5dj/scc2hxTGQdt5OL64iMXvHjP9y
 CzMqzIx750RkMoK2qOT7V3BxTW2qfDhVRUp7w/aWmak6AJRZ4O/YYGsr1/B4p5oM4KxXlSH+
 n8elKCjAPsmCJiMkGmGR7wLFbTwvvKdamSD3xhoAoUr8Cmr9zi7Z4dM7TpiJUBvdMEZZTvuZ
 0yVsgRUjHNOAEaXgWZMS9rZI6wXIWLITrwJiti8ggJyX6VM
IronPort-HdrOrdr: A9a23:tIpUvKxuw0HITgwQEBeVKrPwFr1zdoMgy1knxilNoH1uHvBw8v
 rEoB1173DJYVoqNk3I4OrwXpVoI0m9yXcF2+gs1N6ZNWGN1VdAR7sSjrcKrQeQfxHWx6pw0r
 phbrg7KPCYNykcsS8i2njbLz/3+qjjzJyV
X-Talos-CUID: 9a23:c0GYTWC/ssaxSEP6EwY6rGkdFt4JS1+DkWjePH6jMkYqd5TAHA==
X-Talos-MUID: 9a23:Kg7IPggjmkecBJNwyv6dkMMpN5Y1+rarD0Q0wJw+ms/dDQtKK2a6pWHi
X-IronPort-AV: E=Sophos;i="5.99,208,1677560400"; 
   d="scan'208";a="105428109"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ig7LToGyKX4fzh420WyVzuyhyjBcwS5A8r1h3xSLuOwuHTBfL36jgjAkC2+PEWNoWNVjjLeg2PPQHoHLrAabDmM9TGGUrUJYh2r7rofib+BX4fEO6XiscNZWCHj2axWwGpCYR3CeY9VGsvHyg5teX42lnUzCWM6HE4UtKWz2C4YmS3CLrL6R4eaUoi8REANCBR9zT+3vVno4xLxYue1ONFjVZ5q1WLPensrgCoZjky1/3HWHa3efdLnKgJlw8pGgKIDjKD8+e4vJ5p8fKWXYlygekqhtwxo/AXXv0aUzwcdb9hPlJuuGB8NSH/ig7xC3ixsVdiOd1Ncx2dLgFNelDg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=M1dZ4MdggJ3XVqh4kGp8C/FJwRikOHRyrhdlvHMotLw=;
 b=D2w3JXJCBzsP98VUmflzQXM4a/ejKzkPJ8LVY2QYdvOoukEr2at2OhKEpsxuMCi5wK+Th1XU4jZCPvdCIE35vWqiN4I0rw1cnSOOdMnHX0prCRdzNM5qwDHAbzoo9lMDOLK3v69ysiSxqmlvRYThF01JdOiqw0kemnhqF0/kh6Ux7Ixr8KZLBrS0oQraigDRN+mK9I9tj1BdmixyZSg4rRzVVXiopVGQvPDLhkQYnABz5aOJ3nvR733vTZ2BUfc4RHnC4yyNgKeAamECROucKeRD3kvgv3PssEr+Kd36pPKpD802bnkku2HU2QoHA/13+rlyA5BPZ944j5gaQJJvag==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M1dZ4MdggJ3XVqh4kGp8C/FJwRikOHRyrhdlvHMotLw=;
 b=IPgYuYWffizwaXcxfIKt7UlCPt2FykRgzpVG+R9SC4qkJzl5XqfXUwLVMNoG2sLiww4oVDqq0jQnbzejC8mQ2lj9bA1nds5jyaLI0O4jVR8tNbNphCXJI+d11jd2JjfhRUU51EHqNRM7367lHN8+Lxqx7Xp4NrQtkzptKStl87c=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 19 Apr 2023 12:43:02 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] xen/vcpu: remove vcpu_set_singleshot_timer flags field
Message-ID: <ZD/FttS9Fkz7XExY@Air-de-Roger>
References: <20230418154223.20181-1-roger.pau@citrix.com>
 <225aeacd-7d8d-3832-8043-4f565403c2d7@citrix.com>
 <ZD6/Fk6S6D421AgE@Air-de-Roger>
 <acea0109-967b-f3d3-2a60-b71e5a207ea6@citrix.com>
 <ZD+3Y+YYQxFSoAJi@Air-de-Roger>
 <38fcc039-bdd2-b939-d619-97621abc74d4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <38fcc039-bdd2-b939-d619-97621abc74d4@suse.com>
X-ClientProxiedBy: LO4P123CA0634.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:294::9) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|MN2PR03MB4989:EE_
X-MS-Office365-Filtering-Correlation-Id: 7adea02f-478a-46d5-1028-08db40c2dd0b
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CmJF+f3R7hbZY+AnAfOz1+erBjCwy80SpTJqYDT7m2upngpsECC5Pf5Stlj04/2xyDt+mv7tKh35t6HoI/DL+AlGCunsPHGo4A94PLqBPJ9gIjpXw3s/RjpWHuDk6DE3jvjxitnUwhi3BVGhBYQz9hXiOCLU82Mso1XZdClvG1iky8mt1A0pG/Zvbj+DAEfKzDqGxX/UoCRwtlgF8Fwgww1Sot4IJdn5O+tbQR5nGr8E4vfmftX7o7zM7TCsNDM3oz0lpUFvf0HB28GuSmHo4DFPYrQR8zx/v9WlxvrYNCuxgF2Nezm9KfH6BXRWYxosfuqGft6Ego+7Z2czeHGKaquMeYzpbib4L8UxyIRm54Dh12JbOTAJg3L6fbvwcgY0LAmVmYwB2PSYat/84zATm8Q5nrOcNoc54M8cHnJt1OvWUb3wzPNd6C09nyML663lcGYndO2Exrhiu9FYBVbD87Mh4AMymmcxtuAXjBpV6nRmJ5dzPmH9owUseAvYn4F9IvgwsbN98NEBI1INAcaLPATn5sVZXm5lHAgn6C5mxBoK2PQZCelvBN1VH3J5m+Sv
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(7916004)(376002)(346002)(366004)(136003)(39860400002)(396003)(451199021)(33716001)(86362001)(2906002)(85182001)(9686003)(6486002)(6666004)(186003)(53546011)(83380400001)(6512007)(6506007)(26005)(107886003)(478600001)(66946007)(66556008)(66476007)(6916009)(4326008)(82960400001)(41300700001)(38100700002)(54906003)(316002)(5660300002)(8936002)(8676002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RkpoK0RiNTVRd3BzcEdremNjSWc3NXEzSitNVjZPMjBWSzRoSlJJbVB6RkRO?=
 =?utf-8?B?cDg0YlZSaXhHTGdPd1pYNEZJc3cvSHdSeSt1MlhiajFsdGM0ZWhiQXRPTDk2?=
 =?utf-8?B?VndPc0NlWFlZVlVkOVJTL2tsWHZiYTB6MTNWT05qT2NkdDI3TXFGUWxZOFlr?=
 =?utf-8?B?OGREaGl6OGJqWHlKbDhJcFRMZGkyQjV3Mm9XNDJ3dTRialBJRDVEaFQvaHJ1?=
 =?utf-8?B?LzU5K25OYWxkTjE2QkRsUUlQcXA3bnZJTWxsb0I0T2o2d3lJV1J1VzhlWVNm?=
 =?utf-8?B?bWtCMnBhUks1dDVDdVQ4SGtWOFd1dFp0STNVYlpGb3BBa0dCaUE5d2xSN3JM?=
 =?utf-8?B?VDZsTHFNYXBYR2Y1ZTIwVGZqbWpDb1BkL3U1eDNESVBra3hJWkwxelVLVEhM?=
 =?utf-8?B?Wmp4V0xRTEIwR0dTRWdHd3FpYno5dTFyMUxaU3BlYXZVUWF0dlQ2bnh6Yno5?=
 =?utf-8?B?aENNMHBXMkkzQWM4K0ZWOGtxdnptb1UzMFRxUkQ5NmZoZzQ3V2loSVhPZktz?=
 =?utf-8?B?Wkh2cVF6NzVLMzdDTXJTa2FDSlRuVnVpWUNKY2tFcFRGVjAxMDN5UThKMndr?=
 =?utf-8?B?djVnTW54a29EY09keHpEMWtOVENXakNrbVB2eTJpL2ZOeUFQVzBKdFppdXls?=
 =?utf-8?B?OU1mMkdHMHBqSm9YM2xNOGhRWWY0a01ZNFo1bFpxNVVhdk1ibnVzdElzQTNw?=
 =?utf-8?B?dVBjc21FaktKZEpzaE5XUmlWNHo1OGdaWm5LVm5ES0duZjVRRDJEWXRGNlFy?=
 =?utf-8?B?U2tUMm9BTFVkazJZODFBVDNodTMwa0psc0FKd0I3Wmt0bmFyeTBEbENBN2JT?=
 =?utf-8?B?dUd1NEZyRFRQMWswMkxaejF3S28vTUx0SGFOYVVxbjhiSDhROENaZWJlUnBH?=
 =?utf-8?B?NldzZHJSODFXS3l1dkNnUE93bTJaampoTlZyblVabW05T1V4NlJ4a0xQNUxm?=
 =?utf-8?B?RFBLY2Nhb1o1aUd5YVM3aE1OU0hZNkZqSzBJdGRUaGJnazZUc3YwTnZnSGFL?=
 =?utf-8?B?UE1mM1hlZWs2L29XTGhWcm1JZzZWWElKaE9JMEVqSE93WlRBUmNwbk1KZGhD?=
 =?utf-8?B?c0ZRRnFoNXJ0Wkk3K3IrMEwxRmNhZnF4MWZCUUU3V2V1K1FWL1JybU9yQ2lO?=
 =?utf-8?B?U2hGMWY0OWNJeEJjTVlVaXV5ZDg5VThMSVBNMFM0RnNWVXhjUDQrOXl6b1Ry?=
 =?utf-8?B?RENuYzdXb3pSajFwUmk5c3VLK3YxS0xWTGV1YWRaV2ltdWpvc09lRmFkeFd1?=
 =?utf-8?B?dFhXQ09YLzVKU1dCZGZqL3VkV1UwNmxjQUdjcTJ0QXVIN0ZNbk9QdHJTQmlY?=
 =?utf-8?B?L3NvODlhbWxEekp3S2haaEppdlhpdkwwQ2tPOElpZzhjL1orSi9kdGt1alJH?=
 =?utf-8?B?eFg2V05oR1pUYjR0Q3BRUVJZQXpHdWl4WWM1MFIyczdZOXJrTkpocEJjMEgw?=
 =?utf-8?B?QzhQNndQZzZjVVJURWdYalgvcHpTcFE1ZHZJdldCcko3Nlc0Mk82eDU5cjU0?=
 =?utf-8?B?M0FucFJ0OWFNaFZrcjU4b01TcmVPQ0lkbFZoSGNkbGh5NXpJdnlaR2pDTW9C?=
 =?utf-8?B?dWRTMlJnRXdDSGVXZ29TNkwyaWx1ODE0bVhHbzd0aHRTYm5sT3JSOXY1Q0VG?=
 =?utf-8?B?RmxxUzRFNjdaakRFYmRXbVJEVTlJNlpLdFFyNGpKOW56Z3c1dE9jYzNmVS91?=
 =?utf-8?B?RTNtQkE5S1BFN1hFR1pLcEkzdS9qNGhIOGpqV3dzdElSLzRyRjNoNWp4TFdy?=
 =?utf-8?B?bXJJdFN5WHB4YzByVTBiQUJPZWpSdmpNUmtPWlQvVVRNL3JURFhsN1VNOElk?=
 =?utf-8?B?YmthYTNJWm9wcWdjQmI3OU9NS0dLTThTSG1sNzJxSTMxTWFwcUx4NUNaVW44?=
 =?utf-8?B?SVVUNHp5Q0lRTmtGcXNqL2t2V1hmUlQ4Z2VCZW1CRUdXaklEL0Z2ZmxyRGFJ?=
 =?utf-8?B?eVFqMC8yTnBjdzBQSTVVZzB6VGxqbWNXYWhWNXo0MXlnNjdCaDcrK2NieUEw?=
 =?utf-8?B?bVk1RHdrQnhYY1lNM25rTFgxa1pONHBDdXlQNzQ3MUFxZUFWN2QzVkxmdnFR?=
 =?utf-8?B?RVBRZGhiSHgxRE84cGhIZEZxVmUrNTBtcXB1UXNFNWR6bU5oZlYxTVVweUdy?=
 =?utf-8?B?RXNabHRNajhvcjlUb2hlaEdIam1ncmx1V1RkeDR0OHpkQk42SkVQNi91ZVFJ?=
 =?utf-8?B?NWc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	ITOg5casl78EXE5zUbKjpeDQN8o71cp6TSM8vkrDLYCcTlaR39WRMV7i6kMU0JlQgU7KEuGbgF3Wn61fpKz2fPxTDVl7H/6uR86bPgJfFnfr9jDQxY8KKnt68jI772iJepkPo/WKie0//aXj5lRsl1CzaPu4pTjch89zXVCHiD0fd4WUQ3XROB2Y0tqVrCE8xL/QWliBciRMliQUIQplEfEAbdZ97B7W0vZ/fmFqfL+gI0NqiDgySGL2j/0lam6odcc4a6HptqY6kuwcFp2gLAsRrU7Qik5LdRfioP59d5J5PpHngdHHH8hUWXX4cI/z0odXPYjN0bWRDZ/rCVP2Ug/OjujC5VWR3TqPv/7yWysJaNxhzZRQiOmqJaP2bX42MkUGn0uQM8iPzQ9Q5qMJOSHu2RvewhLsnp4RIbsz2u/kio58opRFux6jGDXMsy5Xi7wwW8+sFr5Urrtt98+KJD6YAFMj0i3RbVPbuI0cwI4ReOnjUVQAIW8LQe0HoLE+uzxy8Z7IntiQt6PRx0UK3R1LTcguplvoCcWEX9IykNI5gFQ5TlnKQYkin40+aNwoQ/QKiEDNFvR+r3h5u31hv+P0afB5Qubu3AHAKWXhfpQmbGns9i6D4PVGgP/XH8NdNo5ol43L7SH0cIRV8WCjvchmNN/ur6Wb5Iil6cNZFF4QqLUQHqNlrpn+KaQejoDcwel6bVAh/jezqeBCRY95u+ghKraguCNX2XUEddv/CriMuMJf5x2sgmbiIOiQu7zwJa/m5LhtvCt5BuFduGEkOUZCnVsic1jGjae8TJK/nFtAdkc/aPpWM4gJR12eh5GLaC61JBx8bdCNMUBQ0wkGO2GCPoPzZNrK8yEbKFHvXLiZIBAtpiL8CE5xybyW8DUOa1Fh6dUs0dtDhrEYgmP06Dp/PinCnb/fo7RQab6byPY=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7adea02f-478a-46d5-1028-08db40c2dd0b
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 10:43:08.1292
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: F9Kass2sEkgzOSqmunKs2iZl+cPyZAD9Q3mMj2AMs9DbSewPRSbe6GphSA732jwOQDficbTB+VwDDuF7Ig7XkQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB4989

On Wed, Apr 19, 2023 at 12:11:09PM +0200, Jan Beulich wrote:
> On 19.04.2023 11:41, Roger Pau Monné wrote:
> > On Tue, Apr 18, 2023 at 05:12:07PM +0100, Andrew Cooper wrote:
> >> On 18/04/2023 5:02 pm, Roger Pau Monné wrote:
> >>> On Tue, Apr 18, 2023 at 04:54:49PM +0100, Andrew Cooper wrote:
> >>>> On 18/04/2023 4:42 pm, Roger Pau Monne wrote:
> >>>>> The addition of the flags field in the vcpu_set_singleshot_timer in
> >>>>> 505ef3ea8687 is an ABI breakage, as the size of the structure is
> >>>>> increased.
> >>>>>
> >>>>> Remove such field addition and drop the implementation of the
> >>>>> VCPU_SSHOTTMR_future flag.  If a timer provides an expired timeout
> >>>>> value just inject the timer interrupt.
> >>>>>
> >>>>> Bump the Xen interface version, and keep the flags field and
> >>>>> VCPU_SSHOTTMR_future available for guests using the old interface.
> >>>>>
> >>>>> Note the removal of the field from the vcpu_set_singleshot_timer
> >>>>> struct allows removing the compat translation of the struct.
> >>>>>
> >>>>> Fixes: 505ef3ea8687 ('Add flags field to VCPUOP_set_singlsehot_timer.')
> >>>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >>>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >>>> While everything said is true, this isn't the reason to get rid of
> >>>> VCPU_SSHOTTMR_future.
> >>>>
> >>>> While 505ef3ea8687 does appear to have been an ABI break, that's
> >>>> incidental.  It dates from 2007, so whatever we have now is the de-facto
> >>>> ABI, whether we like it or not.
> >>>>
> >>>> The reason to delete this is that it is a monumentally stupid idea
> >>>> which should have been rejected outright at the time, and the only
> >>>> guest we can find which uses it also BUG_ON()'s in response to
> >>>> -ETIME.
> >>> I agree, but didn't think it was necessary to get into debating
> >>> whether it's useful or not, since its introduction was bogus anyway.
> >>
> >> Well - the reason to actually make a change is that (older) guests are
> >> really exploding on that BUG_ON() for reasons outside of their own control.
> >>
> >> And the reason to fix it by ignoring VCPU_SSHOTTMR_future is that the
> >> entire concept is broken and should never have existed.
> >>
> >> The ABI argument just adds to why the patch ought to have been rejected
> >> at the time.  But it was done, and the fact it has been like this for 16
> >> years means that the ABI shouldn't change further, even if it was done
> >> in error in the first place.
> >>
> >>>
> >>>> It can literally only be used to shoot yourself in the foot with, and
> >>>> more recent Linuxes have dropped their use of it.
> >>>>
> >>>> The structure needs to stay its current shape, and while it's fine to
> >>>> hide VCPU_SSHOTTMR_future behind an interface version macro, we do
> >>>> need to say that it is explicitly ignored.
> >>> Oh, I think I've dropped the comment I had added next to
> >>> VCPU_SSHOTTMR_future that contained /* Ignored. */ (just like for the whole
> >>> flags field).
> >>>
> >>> I can elaborate a bit on why VCPU_SSHOTTMR_future is not useful in the
> >>> commit log, and add that Ignored comment to the flag.
> >>
> >> The important thing is to not actually change the size of the structure,
> >> and to change the commit message to explain the real reason why we need
> >> to make the change.
> > 
> > Why not revert back to the previous (smaller) size of the structure?
> > 
> > That would work for guests that have been built with Xen 3.0 headers.
> 
> Are there any such guests known to still be in active use? Linux iirc
> requires 4.0 as a minimum ...

That would be something from the pre-pvops Linux days.

I looked into this a bit further, and I don't think the introduction of
the field was an ABI breakage, because it was done a day after introducing
the original hypercall, so no released version of the hypervisor
headers contained the structure without the flags field:

commit 505ef3ea86870bb8a35533ec9d446f98a6b61ea6
Author: kfraser@localhost.localdomain <kfraser@localhost.localdomain>
Date:   Sat Mar 10 16:58:11 2007 +0000

    Add flags field to VCPUOP_set_singlsehot_timer.
    Flag 'future' causes Xen to check if the timeout is in the past and
    return -ETIME if so.
    From: Jeremy Fitzhardinge <jeremy@goop.org>
    Signed-off-by: Keir Fraser <keir@xensource.com>

commit eb1a565927c0fdcd89be41f6d063c458539cca8d
Author: kfraser@localhost.localdomain <kfraser@localhost.localdomain>
Date:   Fri Mar 9 18:26:47 2007 +0000

    xen: New vcpu_op commands for setting periodic and single-shot timers.
    Signed-off-by: Keir Fraser <keir@xensource.com>
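
For reference, the ABI concern discussed above is purely about the struct's
size: appending a flags field after the uint64_t timeout grows the structure.
A standalone sketch (hypothetical type names, not the real
xen/include/public/vcpu.h definitions):

```c
#include <stdint.h>

/* Sketch of the two layouts under discussion, NOT the actual public
 * header.  v1 models the struct as introduced by eb1a565927c0; v2
 * models it after 505ef3ea8687 added a flags field. */
struct sshot_timer_v1 {
    uint64_t timeout_abs_ns;   /* absolute expiry time */
};

struct sshot_timer_v2 {
    uint64_t timeout_abs_ns;   /* absolute expiry time */
    uint32_t flags;            /* e.g. VCPU_SSHOTTMR_future */
};
```

On a 64-bit ABI the uint64_t's 8-byte alignment pads v2 out past v1's size,
which is why truncating the struct again would itself be a (second) ABI change.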

So my proposal is to declare the flag deprecated (and effectively
ignored by the hypervisor) due to its bogus usage in Linux.
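
To make the intended semantics concrete, a minimal standalone model of the
proposal (hypothetical types and helper names, not the actual Xen code): the
flags argument is accepted but ignored, and an already-expired timeout fires
the timer immediately instead of failing with -ETIME:

```c
#include <stdbool.h>
#include <stdint.h>

#define VCPU_SSHOTTMR_future (1U << 0) /* deprecated: ignored */

/* Hypothetical stand-in for a vCPU's singleshot timer state. */
struct sshot_timer {
    uint64_t now_ns;      /* current system time */
    uint64_t deadline_ns; /* armed deadline, 0 when unarmed/fired */
    bool fired;           /* timer interrupt injected */
};

static int set_singleshot_timer(struct sshot_timer *t,
                                uint64_t timeout_abs_ns, uint32_t flags)
{
    (void)flags; /* VCPU_SSHOTTMR_future is accepted but ignored */

    if (timeout_abs_ns <= t->now_ns) {
        /* Expired timeout: inject the timer interrupt right away,
         * rather than returning -ETIME as the flag used to request. */
        t->fired = true;
        t->deadline_ns = 0;
        return 0;
    }

    t->deadline_ns = timeout_abs_ns; /* arm normally */
    return 0;
}
```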

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:43:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 10:43:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523313.813248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5Hj-0004CV-9Y; Wed, 19 Apr 2023 10:43:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523313.813248; Wed, 19 Apr 2023 10:43:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5Hj-0004CO-70; Wed, 19 Apr 2023 10:43:35 +0000
Received: by outflank-mailman (input) for mailman id 523313;
 Wed, 19 Apr 2023 10:43:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp5Hi-0003M4-62
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 10:43:34 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20614.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::614])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 077a0a5c-de9f-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 12:43:32 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7671.eurprd04.prod.outlook.com (2603:10a6:20b:299::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 10:43:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 10:43:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 077a0a5c-de9f-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nWZusUWWS4XnvYn3sYaFSAyrB+noZ9zD46RCHxst2erhSvm6/hC1vRBx4Z/7w6TR+OitJit7Nc8C+289fZ6aZAWg36NqqyA6ysDNBKMV3MmtoGE4rckUPa7pXUYT0reudupigBsGkVXVPylaK7vN+3zqbWpbLJ5/jbppOa4H2XWj2rCtPKQ2oDBVUNb9bnbj/TEfM/qu8LcXAWLuHhwvHX2spoVcerE/9tmtbJK+tcFTGGcOWkN9vjLZgJILZBWxD7HSSi1XivKrnXZqlBoZYEAxUfm4GJ4e6aXOLWtTiZ7PmbSq260+ZphxC7gr2eKX+erzgB34wWiKxTOXLWvP7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=1WLiDF52gwlGsA56jTGQEg+iCy+r7F/0Tm2Kl2/K1xA=;
 b=IgNZaSQkmnY7xhBhJQPJxsy6rO1B9sx999f5NrHyOWIaYzWEdOgstJc1roINwTtvshnCC1jZN0VhbEqnzjJXTg/F6402olEJ4C9wNLbdMzF/raX/2dQNmQm85qV+EVIlWWyc/UOkDYl/YVBd6lWQiVNT2i9iNtgbZoACCB1fSlg9XwnGsO7sKW0/+UOvmNRuiYTPOO4NtZJyt9FPliDL8TMPM6zgEkimGRkoox7oTYSA5a/+HNbO6YsgUWgSgH5UpEx5aXNNv/ZkzEwOUaacYlvmKKqC5VvKMq/nScZKzx8LkNVriFdVjBlzmFUgkLZWMGY+kLXq4HKDLKOlKrfbGA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1WLiDF52gwlGsA56jTGQEg+iCy+r7F/0Tm2Kl2/K1xA=;
 b=t0HsMPW6fVjuf7p2ADgLhNu5QlgJLQnxfOChS6WCr6zzohDWLobOlRz0CpruFyu7EDNXL2f05/OR1XF0mqqbi1XLNx8EOhNuRAyRAlwU7mEtxoeRBajpCXZ1TtLfQ165L7xPSW3KoYreTD1mG0xcgpN7ZJ++y2PzV0Jkx+knMBeBBllT1R9Tgat5DDAv12De17SD85dq27TxhxZOIkL1gj8+UboTkZpim/DEod1ADTR8zmGvOIdO7CuIWlkOg/8CWpZ6a05WFz991Wsjg5fIFI6VhpAlUaDDxIt6CACW0V2rkpqsmAqU6N6cpTKuQaoL54m23/VxVfpWNqM3in08+Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
Date: Wed, 19 Apr 2023 12:43:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/5] x86: reduce cache flushing overhead
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0161.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7671:EE_
X-MS-Office365-Filtering-Correlation-Id: 00dcff7c-defb-4db7-7c5b-08db40c2ea35
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WFyGo7IyS7NTh2vE7ainYgSzvDmcOgt6C6z4GpZ78Yx0DAXCCAE0tlPXCxqYoan9d5bto5ceINwyQXnXPplqibVsRW0GanumBSNDEkaKmcCIpuGl8rDNcHQB+OlNaTg5HXnpK3e1kXd+vKTs5IkW5lza85i+71zzmx7LUds9fP/81LmAPUPhys6JH0/UDaC07eMcH2vUicI4j/rR6sdXWccqFGclHy5F1oCQamPVpSHzjdQyLiJDJazr5Nlyn9qgT1qUUs8aZs8LMYix3HNET7J4J+tVp1HnUBqtH0voOf9h/P/ovyIMXgRWkllskJqhfU6O7tTS/3AxRRoYfs3F3GMnV/f3OS0ISqTxgiHdPP/5rHTP4U25kiGLZqHg6pPkE/VPaH8QQXcTc/xgp8ex6bhJ53xJFgcpiFhblIkLhk0lrR2MFBlKFPOZn2lBeVczi8xQe07mqsQwje4h/YH08HGl/ftfo9GN/Ac+10StXG+sC8Mm0zCW0F8CYeXBUIypCPn9o5PU+Y6RFc1JC18xvM8EuLrzxVbPld7jXLXkxlEO3HQBTHrPI58CjXldaI5C3ZZP6iMiDGkyy7wgHixMUYdKTVBXwzwrfghRQzr6/lN8mxThCLma14pb/mqIYUYo12RiQ0TNMsmrFtGo4CO7jQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(396003)(346002)(39860400002)(376002)(366004)(451199021)(5660300002)(86362001)(2616005)(83380400001)(31696002)(6512007)(186003)(6506007)(26005)(38100700002)(8676002)(8936002)(54906003)(478600001)(6486002)(316002)(41300700001)(36756003)(4326008)(6916009)(66946007)(66556008)(31686004)(66476007)(2906002)(4744005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?a2Z5MDVmYmxlL08reXV6OXdFM01Pc1VFNk00RW5yVkRHQTQrT1hBUU9PeEtN?=
 =?utf-8?B?S3dEc0pLU1gwdk9Yb2hXOFFJdFZ4VnJ6akhpY0pwdmRNN1pRK3VkZTBOTjM1?=
 =?utf-8?B?SDlmOTlTV092NnpmSlVvemVVbmtXYUpOMU45bGV3RExCcWNwZnFTLzQ4Q1Q5?=
 =?utf-8?B?VVFJZjN5dlhMUGxoZEtBSU51MjNiczZid3ViTjROWWlUd29kL1gwK1RmUE1M?=
 =?utf-8?B?R1NCVW5TYzZpNCtndlVmYytvMVFoelBtdVBFNDd1eXBKOEFtUnN6SzRpZnpa?=
 =?utf-8?B?UVh6cnNjN3A0aWdWNXB3TDZUc0dYUHFEazUvQUNubGtZMVl0T0VMeUlwelRx?=
 =?utf-8?B?THdEVEZCdHpWNEpubDVIWXhYdkR1Uk9kUGdiQU5vSmZ3WmovRWhuNEwxazNX?=
 =?utf-8?B?Zy9pYksxOXJucE1DVnBhWTRNY2RnVVE1QVY5QWhHVk52REtIUjRIazVpOEtt?=
 =?utf-8?B?VXFCaWEzTHc0aWlKZnBrSEtaeFNQckkycDNRNlBtbkRKMkpkVXM4YW84Sm85?=
 =?utf-8?B?NzFnYXUxMHA2VlpsdHBWaWRNNytpZzRMVnJHL2hGd2RYeW9NakRlZ2pGUFlw?=
 =?utf-8?B?Vm5td3RpM3NnUEJ3cDF4VjVDQVZocmlncHovN2toYlQyUzZ3TENibHdHZE04?=
 =?utf-8?B?OHVVL3V6NnhRRXB1WkovTTFCNWk4UUdMeWhob21wU2VXNC9nRzZMVHpQOFZq?=
 =?utf-8?B?OFhFc1VqYlc0NWFkNW0yZTR3NHhDbGR4Uis1TkxWY0h6b2U4T3FLbExCN1Uv?=
 =?utf-8?B?VkVqZEIweWlVdUdpdzZlWERLWGFjbHFVSnhCa21NQmIzbnJhZHIxWDUvcG55?=
 =?utf-8?B?T0p4TkdaWVZPWHMzeVBuTnErZTk3bm95OUQwV1RuZ2d3aE1TYnFQa0xOOUEy?=
 =?utf-8?B?Zk1zTW90VEtReTRweUdEN09zc0FMWEMvTCtMRHMvUG82NnNCUUVESkl4QlhU?=
 =?utf-8?B?T3NLT2pPT2ZLcUZtcnhJN0tZWTVKbDFUc3ErOEhvemVYcUxsRHZCWGM1OHFr?=
 =?utf-8?B?VE5QUTR4MHF4V3VBWmZnaGtQKzRlcEs2ZmpST2ZHVTZSRCtXaTlndmo4STlB?=
 =?utf-8?B?Tno1Uk1hUFZzWUpTanZzejZrRG9zUzZmc21pMmdLeHhSRm1WSjFhRGc4dWVk?=
 =?utf-8?B?YWt6c3dyMzZsYzRSOUcwWXhQK1pZU2tvSTZpN1lOb3owK0o1L0NnN0tkZGZH?=
 =?utf-8?B?NTV2bGFVNm02aW1ycTJBbU9KT0NEbWtPcGJkY1JHL3BZelpOSUhVeGVETVdh?=
 =?utf-8?B?d3NSaUJuamlzbldqNnVsYVZCL3FDb3U1b1dmMXBmMy8yZFFObXRQZG9KSkFU?=
 =?utf-8?B?RkN2WWFoM2g4U1QxOEJCNTRBME81VFk0NHZBUW1ONGUybUdzOVBPbHE4WVJr?=
 =?utf-8?B?RWl1dUxzakNINkJ5eWV5RCtHZWJncUkwY1V4NVI5MXNKdWtGZ2I0WTVqcThI?=
 =?utf-8?B?czV6K1Z0STczQ0xQT2pnMUttMk9vdS9IeW1VanNPWFZ2WDk0WFZpWXNWNWtN?=
 =?utf-8?B?T1ZnNDRSdWExYzN6YmZoaDQ4c3k4WWVrcTR2VFRuaTdENHVVUThLUURham12?=
 =?utf-8?B?dlM5bXJmWTJYMVNrYllPRUorRWVFQVBaMll1WDU5bHFZQXlOYzJPYnNTQWZ0?=
 =?utf-8?B?YWh6RlZESWZwMHJLQVBZd0Z2d2E0dzExS1l5MjNCekVMSXVIQzZOWFN4Qkt6?=
 =?utf-8?B?TWp3RmZDR2I3N1pIU0p6RU9NSXFnRDhpMjZwTVo5SVVzZ28zMWp5MHFDQktt?=
 =?utf-8?B?M3cxQ2ROSENBcmQxZG90My9Ud1dUUVhWVmRocTczWXBsY2N1VTE1b2tDb0E2?=
 =?utf-8?B?NTB5RVM4bm9yZUQrZE5RcVhnZDFZVVVxRG1iciswSkhockcyRmdhNlVoaWFF?=
 =?utf-8?B?Y0FlNFdJa1UwemJuMVZGRnMwVWFjcDVNdXhROTBZbnc4bXNPUVViMU1sL1Nz?=
 =?utf-8?B?M28ycU0xSzF2a2tNd0NxWDBMc1J3N1VXMGR0RTdxK3MxMUZFVkZmNWczNW1L?=
 =?utf-8?B?T0JUa2hkQlNsZkNrbXdtanlicUJZOGdOcDQ4TnUwWW1kUXBrbmJnM09MU01u?=
 =?utf-8?B?cmN2SGFjYlpKWVRGc05PeW42cCt5UG1ic1RDRjZ2VEgyWmtES0hzQlNMYTdK?=
 =?utf-8?Q?GW1nrg0LKPiX1+JV8vQWs+z2Z?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 00dcff7c-defb-4db7-7c5b-08db40c2ea35
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 10:43:30.2466
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YDbfOIhRX8IbIUUWytVwqrW39cqScV8DTIl5Uacri0uSMy0gCvPAqwX+CkmZtbNEdEbqMwmiWUFLU5Cttkptmw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7671

..., first and foremost by using cache write-back operations instead
of flushing ones when available (and sufficient for the purpose).

While putting together the last patch I started wondering whether for
PV we flush (write back) too little for MMUEXT_FLUSH_CACHE: just like
for HVM, pCPU-s a vCPU has run on before could still hold data in
their caches. (We clearly still flush / write back too much in
MMUEXT_FLUSH_CACHE_GLOBAL even with this series in place.) We also
can't call this the guest's responsibility, as it may not have any
means to have one of its vCPU-s run on the intended pCPU.
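
As a rough illustration of the series' core idea (a standalone sketch with
made-up names, not the actual flush_area_local() interface): a caller that
only needs dirty lines written back to memory can be steered to a
write-back-style operation (CLWB/WBNOINVD-like, lines stay valid and clean),
falling back to a full flush (CLFLUSH/WBINVD-like, lines also invalidated)
only when invalidation is genuinely required or the CPU lacks the cheaper
instruction:

```c
#include <stdbool.h>

/* Hypothetical flush-intent selector.  Write-back alone is cheaper
 * because the caches keep their (now clean) contents; invalidation is
 * only needed in rare cases such as memory-type changes. */
enum cache_op {
    CACHE_OP_WRITEBACK, /* e.g. CLWB / WBNOINVD */
    CACHE_OP_FLUSH,     /* e.g. CLFLUSH / WBINVD */
};

static enum cache_op pick_cache_op(bool need_invalidate,
                                   bool cpu_has_writeback_only)
{
    if (need_invalidate || !cpu_has_writeback_only)
        return CACHE_OP_FLUSH;  /* heavier, but always correct */
    return CACHE_OP_WRITEBACK;  /* sufficient when data just needs to
                                 * reach memory, e.g. ahead of DMA */
}
```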

1: x86: support cache-writeback in flush_area_local() et al
2: x86/HVM: restrict guest-induced WBINVD to cache writeback
3: x86/PV: restrict guest-induced WBINVD (or alike) to cache writeback
4: VT-d: restrict iommu_flush_all() to cache writeback
5: x86/HVM: limit cache writeback overhead

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:44:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 10:44:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523319.813259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5Ij-00051p-KV; Wed, 19 Apr 2023 10:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523319.813259; Wed, 19 Apr 2023 10:44:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5Ij-00051i-Hn; Wed, 19 Apr 2023 10:44:37 +0000
Received: by outflank-mailman (input) for mailman id 523319;
 Wed, 19 Apr 2023 10:44:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp5Ii-0003M4-Bn
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 10:44:36 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2060d.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2c56b75d-de9f-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 12:44:34 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7671.eurprd04.prod.outlook.com (2603:10a6:20b:299::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 10:44:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 10:44:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c56b75d-de9f-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SF3nigEobWvO4lZQvPg+fseXc0vqkXnOzT7Y66cbs3o4KzggUqnRa4N3ul0pGawJ/WYCByJSMDe1o8iVXV2yLUYJ29hFAQjE089vYsVQyHZ3VO8ZNQfdcd+Q/F+CjzHVPIZsvjnd5Y+U1Hf7/A1rFOcYiuRdv/L8o3ULnG/r9VnwJVntYWC9t740XAT+oMZFWT9gEMxNUiO2W1yUys1xvgIHpo8ahF38cbInlrCtlLGCdcpIKPGT9NplwxzwOrKcglxGoZpH8Ow+LDGpK4YKeY0IUlNvr9qRP3QuVV/N33mdmcXPjfvr82F3O3aPqhmnWXFTDsjR0Py22k2+yAolNw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XejQV07T8V2gYiQel4lhjkXlJAuuZk41L4stliIWs7I=;
 b=Hk/G0cOXWuIFoXbBkcN+W/iNWM3OXPMjIpOxtcV3WtkN+wxQkOnV7hXsPcdUEIAAZSrnekgEWYuy9CGkAj8GKgYS+xAAs/2ye8SQaQ5tuwNwuhDEydlRHP+jdD0C/BBdP0QVxOizuEV7VKcGo+nbrndyD1GLOqkJEEdx2HvN0CpowfGfsbSk69hnrovWWDhaHnvDzROFS1Q0IU7QhtaYNyXiKldRbgdaoRgvx5gUIl7N0NO5gQhvdggUUIC+Akfm2McqzxF0yrh2RfrC/AYewtjzY2n1aRsTmHeq1MXkZhGBuZ/8MurSonYsLU+8jOYXoGjVwtKgizXi88HjEyvX2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XejQV07T8V2gYiQel4lhjkXlJAuuZk41L4stliIWs7I=;
 b=51Nud44WJbSocPCFTtFgaf3+mGifwESn3rwb4M+PSahBQ6txqCeMNrTilwhzZZR+Y8kan9v4vnvCHtyN0uhls2eg1+24J6m6BENVb3peVb+UwKibZ48Y4IXa95239xM6jO7bLGIMC28zFgEpz6xCFy9BixftRhWIeXZ5X7gHnRwmbuarp5SKmM8iet+vA8P9hy7xUtpzuloNjMBhktPmHD7kdAv9Jio8DmcfJzAglwksW848ZKziizLQM2P0mPCSjCZtlV1r6C17/dDh+TFTiKvAVvS3Gopo8N2UHwBf8U41qwW/icfjf1xcn2IjxvZI5kFM3pMmsaCbUr8SXf3qBA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ee33ad20-ef6e-504d-6987-59ccb166f8e4@suse.com>
Date: Wed, 19 Apr 2023 12:44:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 1/5] x86: support cache-writeback in flush_area_local() et al
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
In-Reply-To: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Most of the present callers aren't after invalidating cache contents,
but only after writeback. Make this available by simply extending the
FLUSH_CACHE handling accordingly. No feature checks are required here:
cache_writeback() falls back to cache_flush() as necessary, while
WBNOINVD degenerates to WBINVD on older hardware.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -232,7 +232,7 @@ unsigned int flush_area_local(const void
     if ( flags & FLUSH_HVM_ASID_CORE )
         hvm_flush_guest_tlbs();
 
-    if ( flags & FLUSH_CACHE )
+    if ( flags & (FLUSH_CACHE | FLUSH_WRITEBACK) )
     {
         const struct cpuinfo_x86 *c = &current_cpu_data;
         unsigned long sz = 0;
@@ -245,13 +245,16 @@ unsigned int flush_area_local(const void
              c->x86_clflush_size && c->x86_cache_size && sz &&
              ((sz >> 10) < c->x86_cache_size) )
         {
-            cache_flush(va, sz);
-            flags &= ~FLUSH_CACHE;
+            if ( flags & FLUSH_CACHE )
+                cache_flush(va, sz);
+            else
+                cache_writeback(va, sz);
+            flags &= ~(FLUSH_CACHE | FLUSH_WRITEBACK);
         }
-        else
-        {
+        else if ( flags & FLUSH_CACHE )
             wbinvd();
-        }
+        else
+            wbnoinvd();
     }
 
     if ( flags & FLUSH_ROOT_PGTBL )
--- a/xen/arch/x86/include/asm/flushtlb.h
+++ b/xen/arch/x86/include/asm/flushtlb.h
@@ -135,6 +135,8 @@ void switch_cr3_cr4(unsigned long cr3, u
 #else
 # define FLUSH_NO_ASSIST 0
 #endif
+ /* Write back data cache contents */
+#define FLUSH_WRITEBACK  0x10000
 
 /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
@@ -194,7 +196,11 @@ static inline int clean_and_invalidate_d
 }
 static inline int clean_dcache_va_range(const void *p, unsigned long size)
 {
-    return clean_and_invalidate_dcache_va_range(p, size);
+    unsigned int order = get_order_from_bytes(size);
+
+    /* sub-page granularity support needs to be added if necessary */
+    flush_area_local(p, FLUSH_WRITEBACK | FLUSH_ORDER(order));
+    return 0;
 }
 
 unsigned int guest_flush_tlb_flags(const struct domain *d);



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:45:23 2023
Message-ID: <78858961-9c8c-af8d-286d-f90981f85e07@suse.com>
Date: Wed, 19 Apr 2023 12:45:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 2/5] x86/HVM: restrict guest-induced WBINVD to cache writeback
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
In-Reply-To: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

We only allow guest use of WBINVD for writeback purposes anyway, so
let's also carry those flushes out that way on capable hardware.

With WBNOINVD known to use the same VM exit code as WBINVD on both SVM
and VT-x, we can also expose the feature without needing to further
distinguish the specific causes of those VM exits. Note that on SVM
this builds upon INSTR_WBINVD also covering WBNOINVD, as the decoder
won't set prefix-related bits for this encoding in the resulting
canonicalized opcode.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2364,7 +2364,7 @@ static void svm_vmexit_mce_intercept(
 static void cf_check svm_wbinvd_intercept(void)
 {
     if ( cache_flush_permitted(current->domain) )
-        flush_all(FLUSH_CACHE);
+        flush_all(FLUSH_WRITEBACK);
 }
 
 static void svm_vmexit_do_invalidate_cache(struct cpu_user_regs *regs,
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1881,12 +1881,12 @@ void cf_check vmx_do_resume(void)
     {
         /*
          * For pass-through domain, guest PCI-E device driver may leverage the
-         * "Non-Snoop" I/O, and explicitly WBINVD or CLFLUSH to a RAM space.
-         * Since migration may occur before WBINVD or CLFLUSH, we need to
-         * maintain data consistency either by:
-         *  1: flushing cache (wbinvd) when the guest is scheduled out if
+         * "Non-Snoop" I/O, and explicitly WB{NO,}INVD or CL{WB,FLUSH} RAM space.
+         * Since migration may occur before WB{NO,}INVD or CL{WB,FLUSH}, we need
+         * to maintain data consistency either by:
+         *  1: flushing cache (wbnoinvd) when the guest is scheduled out if
          *     there is no wbinvd exit, or
-         *  2: execute wbinvd on all dirty pCPUs when guest wbinvd exits.
+         *  2: execute wbnoinvd on all dirty pCPUs when guest wbinvd exits.
          * If VT-d engine can force snooping, we don't need to do these.
          */
         if ( has_arch_pdevs(v->domain) && !iommu_snoop
@@ -1894,7 +1894,7 @@ void cf_check vmx_do_resume(void)
         {
             int cpu = v->arch.hvm.vmx.active_cpu;
             if ( cpu != -1 )
-                flush_mask(cpumask_of(cpu), FLUSH_CACHE);
+                flush_mask(cpumask_of(cpu), FLUSH_WRITEBACK);
         }
 
         vmx_clear_vmcs(v);
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3714,9 +3714,9 @@ static void cf_check vmx_wbinvd_intercep
         return;
 
     if ( cpu_has_wbinvd_exiting )
-        flush_all(FLUSH_CACHE);
+        flush_all(FLUSH_WRITEBACK);
     else
-        wbinvd();
+        wbnoinvd();
 }
 
 static void ept_handle_violation(ept_qual_t q, paddr_t gpa)
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -238,7 +238,7 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /
 /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
 XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
 XEN_CPUFEATURE(RSTR_FP_ERR_PTRS, 8*32+ 2) /*A  (F)X{SAVE,RSTOR} always saves/restores FPU Error pointers */
-XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*   WBNOINVD instruction */
+XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*S  WBNOINVD instruction */
 XEN_CPUFEATURE(IBPB,          8*32+12) /*A  IBPB support only (no IBRS, used by AMD) */
 XEN_CPUFEATURE(IBRS,          8*32+14) /*S  MSR_SPEC_CTRL.IBRS */
 XEN_CPUFEATURE(AMD_STIBP,     8*32+15) /*S  MSR_SPEC_CTRL.STIBP */



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:46:08 2023
Message-ID: <0e60520c-d660-1a83-3f57-3466a0ad617b@suse.com>
Date: Wed, 19 Apr 2023 12:45:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 3/5] x86/PV: restrict guest-induced WBINVD (or alike) to cache
 writeback
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
In-Reply-To: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

We allow its use for writeback purposes only anyway, so let's also carry
these flushes out that way on capable hardware.

We can then also expose the WBNOINVD feature, as there's no difference
from WBINVD anymore. Note that the respective emulation logic has already
been in place since ad3abc47dd23 ("x86emul: support WBNOINVD").

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3772,7 +3772,7 @@ long do_mmuext_op(
             else if ( unlikely(!cache_flush_permitted(currd)) )
                 rc = -EACCES;
             else
-                wbinvd();
+                wbnoinvd();
             break;
 
         case MMUEXT_FLUSH_CACHE_GLOBAL:
@@ -3788,7 +3788,7 @@ long do_mmuext_op(
                     if ( !cpumask_intersects(mask,
                                              per_cpu(cpu_sibling_mask, cpu)) )
                         __cpumask_set_cpu(cpu, mask);
-                flush_mask(mask, FLUSH_CACHE);
+                flush_mask(mask, FLUSH_WRITEBACK);
             }
             else
                 rc = -EINVAL;
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -1196,10 +1196,8 @@ static int cf_check cache_op(
          * newer linux uses this in some start-of-day timing loops.
          */
         ;
-    else if ( op == x86emul_wbnoinvd /* && cpu_has_wbnoinvd */ )
-        wbnoinvd();
     else
-        wbinvd();
+        wbnoinvd();
 
     return X86EMUL_OKAY;
 }
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -238,7 +238,7 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /
 /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
 XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
 XEN_CPUFEATURE(RSTR_FP_ERR_PTRS, 8*32+ 2) /*A  (F)X{SAVE,RSTOR} always saves/restores FPU Error pointers */
-XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*S  WBNOINVD instruction */
+XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*A  WBNOINVD instruction */
 XEN_CPUFEATURE(IBPB,          8*32+12) /*A  IBPB support only (no IBRS, used by AMD) */
 XEN_CPUFEATURE(IBRS,          8*32+14) /*S  MSR_SPEC_CTRL.IBRS */
 XEN_CPUFEATURE(AMD_STIBP,     8*32+15) /*S  MSR_SPEC_CTRL.STIBP */



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:46:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 10:46:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523328.813289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5KG-0006XE-Hw; Wed, 19 Apr 2023 10:46:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523328.813289; Wed, 19 Apr 2023 10:46:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5KG-0006X5-FA; Wed, 19 Apr 2023 10:46:12 +0000
Received: by outflank-mailman (input) for mailman id 523328;
 Wed, 19 Apr 2023 10:46:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAk3=AK=citrix.com=prvs=46623c849=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pp5KF-00068C-CI
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 10:46:11 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6400a856-de9f-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 12:46:09 +0200 (CEST)
Received: from mail-bn8nam12lp2175.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 06:46:06 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB6384.namprd03.prod.outlook.com (2603:10b6:510:aa::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 10:46:01 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 10:46:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6400a856-de9f-11ed-8611-37d641c3527e
Date: Wed, 19 Apr 2023 12:45:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v6] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Message-ID: <ZD/GY9ru4RtfK5RU@Air-de-Roger>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <116bb94f-955c-4c46-f16a-d7a5e1cc72b5@suse.com>
 <ZD6AejXJxQxAyrx1@Air-de-Roger>
 <5e90e951-20bc-3ff5-30b3-da17cb14d260@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5e90e951-20bc-3ff5-30b3-da17cb14d260@suse.com>
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On Wed, Apr 19, 2023 at 09:56:35AM +0200, Jan Beulich wrote:
> On 18.04.2023 13:35, Roger Pau Monné wrote:
> > On Tue, Apr 18, 2023 at 11:24:19AM +0200, Jan Beulich wrote:
> >> ... in order to also intercept Dom0 accesses through the alias ports.
> >>
> >> Also stop intercepting accesses to the CMOS ports if we won't ourselves
> >> use the CMOS RTC, because of there being none.
> >>
> >> Note that rtc_init() deliberately uses 16 as the upper loop bound,
> >> despite probe_cmos_alias() using 8: The higher bound is benign now, but
> >> would save us touching the code (or, worse, missing to touch it) in case
> >> the lower one was doubled.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks.
> 
> >> --- a/xen/arch/x86/pv/emul-priv-op.c
> >> +++ b/xen/arch/x86/pv/emul-priv-op.c
> >> @@ -208,7 +208,7 @@ static bool admin_io_okay(unsigned int p
> >>          return false;
> >>  
> >>      /* We also never permit direct access to the RTC/CMOS registers. */
> > 
> > Hm, it's unclear to me whether the comment above would need updating:
> > we don't allow direct access to the RTC/CMOS registers, but we allow
> > direct access to the RTC/CMOS ports if there's no device behind.
> 
> Right, but those ports then don't allow access to said registers. So
> I think the comment is fine as is.

Yes, that's why I wasn't really sure whether to comment.  The comment
is formally correct, but it might lead to confusion if one doesn't
carefully read 'RTC/CMOS registers' (vs RTC/CMOS IO ports).

Anyway, sorry for the noise.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:46:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 10:46:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523330.813299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5KU-000738-Vr; Wed, 19 Apr 2023 10:46:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523330.813299; Wed, 19 Apr 2023 10:46:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5KU-000731-Re; Wed, 19 Apr 2023 10:46:26 +0000
Received: by outflank-mailman (input) for mailman id 523330;
 Wed, 19 Apr 2023 10:46:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp5KS-0006CV-VX
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 10:46:24 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on061e.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6dd92642-de9f-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 12:46:24 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB7013.eurprd04.prod.outlook.com (2603:10a6:20b:116::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 10:46:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 10:46:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6dd92642-de9f-11ed-b21f-6b7b168915f2
Message-ID: <d07ee286-52f2-c7ec-2d0d-1c343dbc78be@suse.com>
Date: Wed, 19 Apr 2023 12:46:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 4/5] VT-d: restrict iommu_flush_all() to cache writeback
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
In-Reply-To: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: suse.com

We don't need to invalidate caches here; all we're after is that earlier
writes have made it to main memory (and, as I understand it, even that is
just precautionary).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
This being, as I understand it, an analogue to the uses of
iommu_sync_cache() (just not range-restricted), I wonder whether it
shouldn't be conditional upon iommu_non_coherent. Then again, I'm vaguely
under the impression that we have been here before, possibly even as far
as questioning the need for this call altogether.

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -693,7 +693,7 @@ static int __must_check iommu_flush_all(
     bool_t flush_dev_iotlb;
     int rc = 0;
 
-    flush_local(FLUSH_CACHE);
+    flush_local(FLUSH_WRITEBACK);
 
     for_each_drhd_unit ( drhd )
     {



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 10:47:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 10:47:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523340.813309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5L3-0007rh-79; Wed, 19 Apr 2023 10:47:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523340.813309; Wed, 19 Apr 2023 10:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5L3-0007rW-4Q; Wed, 19 Apr 2023 10:47:01 +0000
Received: by outflank-mailman (input) for mailman id 523340;
 Wed, 19 Apr 2023 10:47:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp5L2-0006CV-4b
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 10:47:00 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0602.outbound.protection.outlook.com
 [2a01:111:f400:fe02::602])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 82aa2f45-de9f-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 12:46:59 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9272.eurprd04.prod.outlook.com (2603:10a6:102:2a7::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 10:46:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 10:46:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82aa2f45-de9f-11ed-b21f-6b7b168915f2
Message-ID: <18fcf499-a2ae-ab48-a66f-ca0499097e8a@suse.com>
Date: Wed, 19 Apr 2023 12:46:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: [PATCH 5/5] x86/HVM: limit cache writeback overhead
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
In-Reply-To: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0141.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB9272:EE_
X-MS-Office365-Filtering-Correlation-Id: ba814f8b-5957-4809-610d-08db40c3660c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ba814f8b-5957-4809-610d-08db40c3660c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 10:46:57.9280
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: k6Khoj9pc60HiNVQA0ybCULb6DXIyh9KQLJ9s2WHQxVgnw+7wpqzz51G7waPWtWXFhtu+0WYxKIfyxFeE4zupg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9272

There's no need to write back caches on all CPUs upon seeing a WBINVD
exit; CPUs that the vCPU hasn't run on since the last writeback (or since
it was created) can't hold data which may need writing back.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
With us not running AMD IOMMUs in non-coherent ways, I wonder whether
svm_wbinvd_intercept() really needs to do anything (or whether it
couldn't check iommu_snoop just like VMX does, knowing that as of
c609108b2190 ["x86/shadow: make iommu_snoop usage consistent with
HAP's"] that flag is always set). The check would then largely serve as
grep fodder, to make sure this code is updated once / when we do away
with this global variable, and it would be the penultimate step towards
being able to fold SVM's and VT-x's functions.

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -537,6 +537,8 @@ void hvm_do_resume(struct vcpu *v)
         v->arch.hvm.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
     }
 
+    __cpumask_set_cpu(v->processor, v->arch.hvm.cache_dirty_mask);
+
     if ( unlikely(v->arch.vm_event) && v->arch.monitor.next_interrupt_enabled )
     {
         struct x86_event info;
@@ -1592,6 +1594,10 @@ int hvm_vcpu_initialise(struct vcpu *v)
     if ( rc )
         goto fail6;
 
+    rc = -ENOMEM;
+    if ( !zalloc_cpumask_var(&v->arch.hvm.cache_dirty_mask) )
+        goto fail6;
+
     rc = ioreq_server_add_vcpu_all(d, v);
     if ( rc != 0 )
         goto fail6;
@@ -1621,6 +1627,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
     hvm_vcpu_cacheattr_destroy(v);
  fail1:
     viridian_vcpu_deinit(v);
+    FREE_CPUMASK_VAR(v->arch.hvm.cache_dirty_mask);
     return rc;
 }
 
@@ -1628,6 +1635,8 @@ void hvm_vcpu_destroy(struct vcpu *v)
 {
     viridian_vcpu_deinit(v);
 
+    FREE_CPUMASK_VAR(v->arch.hvm.cache_dirty_mask);
+
     ioreq_server_remove_vcpu_all(v->domain, v);
 
     if ( hvm_altp2m_supported() )
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2363,8 +2363,14 @@ static void svm_vmexit_mce_intercept(
 
 static void cf_check svm_wbinvd_intercept(void)
 {
-    if ( cache_flush_permitted(current->domain) )
-        flush_all(FLUSH_WRITEBACK);
+    struct vcpu *curr = current;
+
+    if ( !cache_flush_permitted(curr->domain) )
+        return;
+
+    flush_mask(curr->arch.hvm.cache_dirty_mask, FLUSH_WRITEBACK);
+    cpumask_copy(curr->arch.hvm.cache_dirty_mask,
+                 cpumask_of(curr->processor));
 }
 
 static void svm_vmexit_do_invalidate_cache(struct cpu_user_regs *regs,
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3710,11 +3710,17 @@ static void vmx_do_extint(struct cpu_use
 
 static void cf_check vmx_wbinvd_intercept(void)
 {
-    if ( !cache_flush_permitted(current->domain) || iommu_snoop )
+    struct vcpu *curr = current;
+
+    if ( !cache_flush_permitted(curr->domain) || iommu_snoop )
         return;
 
     if ( cpu_has_wbinvd_exiting )
-        flush_all(FLUSH_WRITEBACK);
+    {
+        flush_mask(curr->arch.hvm.cache_dirty_mask, FLUSH_WRITEBACK);
+        cpumask_copy(curr->arch.hvm.cache_dirty_mask,
+                     cpumask_of(curr->processor));
+    }
     else
         wbnoinvd();
 }
--- a/xen/arch/x86/include/asm/hvm/vcpu.h
+++ b/xen/arch/x86/include/asm/hvm/vcpu.h
@@ -161,6 +161,8 @@ struct hvm_vcpu {
         struct svm_vcpu svm;
     };
 
+    cpumask_var_t       cache_dirty_mask;
+
     struct tasklet      assert_evtchn_irq_tasklet;
 
     struct nestedvcpu   nvcpu;



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 11:01:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 11:01:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523350.813319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5YI-0001yl-Ei; Wed, 19 Apr 2023 11:00:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523350.813319; Wed, 19 Apr 2023 11:00:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp5YI-0001ye-Br; Wed, 19 Apr 2023 11:00:42 +0000
Received: by outflank-mailman (input) for mailman id 523350;
 Wed, 19 Apr 2023 11:00:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iFQL=AK=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pp5YH-0001yY-DZ
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 11:00:41 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [85.215.255.51]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6b791d32-dea1-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 13:00:39 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz3JB0Ts80
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 19 Apr 2023 13:00:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b791d32-dea1-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1681902030; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=qLn7eJ/SBtQxncBY5F2ADI24CAHM/wdJl74j/yoVkCa3QvYrJVa4URBUuEgHiSZZ1h
    /xxp1BPOPeRFp0H6mS8Q3esewq17RAdaihh2Jq3XT3GFMMhEj5t2CC2evXL55jZT0rm6
    7tF1WLW67Z081XP5Vbh/ewSdtw4uZPRvSW92bXokSWm/lwQPQIVu9R1uzsR7yfz1B1QT
    b/nHpyzBg5g6RTvAeDV52dT/mfPm5ky+uNT1ffISBpgdYIy66v5H5w2n3TRryAsAVf+o
    akfFqfJ0LDSdvRAly4L6r1iMZFmO9r0zNfvIoRop9qF1Wnskjo5e1ruAOHgF4e2Vl9oF
    NWFg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1681902030;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=OObpQ2qPdIBjpIQyaVqcTd8ZC+MfyA043X1uQ6czbLM=;
    b=NtjQvrz0w7v7t+Ct9jjD/TRBWBS9JKioUdAqBQ+FXwVcXdtOBLyCuJKHR6vov6vObL
    /hoRquQjyuP0lecqoaXUKSBvhwcAwt1aUByQh6leIlIfJZ+NyHTBIbBKVqbal+SztCtg
    0gR+jkeyED8zHc+8TXy6oiwTSEMpnEqej0b1QYkyN15rAXgX+HP2SNprF/BhEVrqQHtM
    DjC5SQn01pmSW2ZE4+pyYDbAC7FHuQFkAMMtSiMorb3K3m/1t3ZbsOEbakoG78uqMHet
    VRLXXZ5DXlpV35ypGwX485uHcYgpJrFr7JBE1ZqzDBzhrq3RB4Yx7O/cYanf78OlOKsB
    bL0g==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1681902030;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=OObpQ2qPdIBjpIQyaVqcTd8ZC+MfyA043X1uQ6czbLM=;
    b=mFmMjjLIQtYFvQzzwuGkfohNOjS4nXQhIN9bPqORzfP/bRh+73qr5FkhMCh9JINb2l
    SXfO3trnn/5rA0+eYVrMGjz1yRu4L46UkZRe0lC41A1CW/f+SdgIfYuwiIqbmls2c2OQ
    oU3N8KS+4F/jhP0gIJadl2MY/aSwLLRplWwfGS0GGJ3tjW01j94DU8eNNXane/5zXEjg
    TpErBpr2/SM8Qq8et9UnAUOthWLddqFgQEih2DCKvESY+zt99GtSn5/zmPsT/ajzTniD
    ILndD0bQPfQzi6PL7a3/aBWxLrbcMWr2P9pQyie24Rg9N5dpCUCDdleG22FqdscHXP7C
    EzeA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1681902030;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=OObpQ2qPdIBjpIQyaVqcTd8ZC+MfyA043X1uQ6czbLM=;
    b=rLiOJ3JrpCDt2v2F4irOOjx15mBCxMzhTZzLBvyeM1QV9qm3uRMcP8/i0MpF6JW1Wp
    S7i1gq88Wd+xZZPR1mBw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4wqlr7GpgtSxIX+ZWs95M7PYKTHoBaxED20qrwFA=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1] tools: ipxe: update for fixing build with GCC12
Date: Wed, 19 Apr 2023 11:00:26 +0000
Message-Id: <20230419110026.25429-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

Use a snapshot which includes commit
b0ded89e917b48b73097d3b8b88dfa3afb264ed0 ("[build] Disable dangling
pointer checking for GCC"), which fixes the build with GCC 12.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/firmware/etherboot/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/firmware/etherboot/Makefile b/tools/firmware/etherboot/Makefile
index 6ab9e5bc6b..884ba35692 100644
--- a/tools/firmware/etherboot/Makefile
+++ b/tools/firmware/etherboot/Makefile
@@ -7,7 +7,7 @@ include Config
 IPXE_GIT_URL ?= https://github.com/ipxe/ipxe.git
 
 # put an updated tar.gz on xenbits after changes to this variable
-IPXE_GIT_TAG := 3c040ad387099483102708bb1839110bc788cefb
+IPXE_GIT_TAG := 1d1cf74a5e58811822bee4b3da3cff7282fcdfca
 
 IPXE_TARBALL_URL ?= $(XEN_EXTFILES_URL)/ipxe-git-$(IPXE_GIT_TAG).tar.gz
 


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 11:37:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 11:37:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523358.813335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp67h-0005R8-Cz; Wed, 19 Apr 2023 11:37:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523358.813335; Wed, 19 Apr 2023 11:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp67h-0005R1-8T; Wed, 19 Apr 2023 11:37:17 +0000
Received: by outflank-mailman (input) for mailman id 523358;
 Wed, 19 Apr 2023 11:37:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp67f-0005Qr-NW; Wed, 19 Apr 2023 11:37:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp67f-0002pq-Ef; Wed, 19 Apr 2023 11:37:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp67f-0004eQ-3E; Wed, 19 Apr 2023 11:37:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pp67f-0000Lm-2R; Wed, 19 Apr 2023 11:37:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nfOuYarAz0+7XVui0diqzechvgviRlxHMebtvsSSSPo=; b=Rq4uPVvTrs441pGRFh81U7+VbN
	71/hJMVIApWXIyHvT5Q1uxWDXMhgvf59zpiRYMsPfNdK0g9TJ6OFBOSjGKkZ2o9XyR76vpkpLCYrP
	RmWOm7UGweowrNHWLG4kDlndUpIH+mggB7lAN8TbpBY9NqKJYhlkewtGPSSu7A6A4J/k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180305-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180305: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=af67688dca57999fd848f051eeea1d375ba546b2
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Apr 2023 11:37:15 +0000

flight 180305 linux-linus real [real]
flight 180315 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180305/
http://logs.test-lab.xenproject.org/osstest/logs/180315/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180315-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                af67688dca57999fd848f051eeea1d375ba546b2
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    2 days
Failing since        180281  2023-04-17 06:24:36 Z    2 days    5 attempts
Testing same since   180305  2023-04-19 00:41:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Arnd Bergmann <arnd@arndb.de>
  Bhavya Kapoor <b-kapoor@ti.com>
  Bjorn Andersson <andersson@kernel.org>
  Chris Morgan <macromorgan@hotmail.com>
  Conor Dooley <conor.dooley@microchip.com>
  Dan Johansen <strit@manjaro.org>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dragan Simic <dragan.simic@gmail.com>
  Fabio Estevam <festevam@denx.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Heiko Stuebner <heiko@sntech.de>
  Javier Martinez Canillas <javierm@redhat.com>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jianqun Xu <jay.xu@rock-chips.com>
  Johan Hovold <johan+linaro@kernel.org>
  JR Gonzalez <jrg@scientiam.org>
  Jules Maselbas <jmaselbas@kalray.eu>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Mark Rutland <mark.rutland@arm.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Neil Armstrong <neil.armstrong@linaro.org>
  Peng Fan <peng.fan@nxp.com>
  Peter Geis <pgwipeout@gmail.com>
  Rob Herring <robh@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shawn Guo <shawnguo@kernel.org>
  Steev Klimaszewski <steev@kali.org> #Thinkpad X13s
  Sudeep Holla <sudeep.holla@arm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Will Deacon <will@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 832 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 11:45:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 11:45:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523366.813345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6FL-0006wC-Bo; Wed, 19 Apr 2023 11:45:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523366.813345; Wed, 19 Apr 2023 11:45:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6FL-0006w5-8Q; Wed, 19 Apr 2023 11:45:11 +0000
Received: by outflank-mailman (input) for mailman id 523366;
 Wed, 19 Apr 2023 11:45:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAk3=AK=citrix.com=prvs=46623c849=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pp6FJ-0006vz-VY
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 11:45:10 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a0cfbf22-dea7-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 13:45:07 +0200 (CEST)
Received: from mail-bn8nam04lp2044.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.44])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 07:44:58 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by BN8PR03MB5121.namprd03.prod.outlook.com (2603:10b6:408:7e::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 11:44:56 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 11:44:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0cfbf22-dea7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681904707;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Y8BGdEJ3QQnufhepIT7ms4V5yYFycjifquAfuOhsTK4=;
  b=Cz+RO03QXv849c+5vnIiIkUF1ZVp6l9tTUtXXn0ka/DuVX4IsXod0JhI
   jyDErCI1ilkbokM6F5eBCfALcfHV5UIGstMp4+JBA7Gq77pwxBqKV2ujy
   BjDtv/YYACyjDZMLhy+YHG1ZJaTh0vgxKbv61bDtf1izHu8+1amlGfZXR
   Y=;
X-IronPort-RemoteIP: 104.47.74.44
X-IronPort-MID: 104864156
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,208,1677560400"; 
   d="scan'208";a="104864156"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OjH70qYoosI1RLRxb+BUoy5DAUlN2UIL1R5R9XzbskWQFCmWBVucma/eT0hUk5WK3B9uf0u3vyGgOceg6C491tlxZCZK36e3na3BnIbpDAaOjjmM8XHNZcZB6bnB7GR9A0pA2lz2QWSucuKHo3+2CoVHeTLBTpqxj9FaNTkdsLyC4sqpvdnTmwyizCFGw9tSXYYwKs/e2AK0KCrYz5YrqqmYVM32d1IPy6HOPxAw+tkrwpIN26LBYLUW+sZBXc1FBkizg2ciTzspjk8Glb1W8ML+kyvKopgZ7F5xOn/Pvn1JkHQMGyt+iKiyh5XdxAPevzgugsTYsNFF6Xl1F3lezg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3isFPEO9Mtsln/cHqi8ql12EhebXSOOgUxQ4vgrp4nQ=;
 b=Xkj3Xd8jgASV2i6QMKcQDAWlBOc2jztfnU/sU6cmbrTukbLNpnqC2gyX+yLhaxiQptilepdcqQLFSXyqMPGwai+NhsY397HFvBjbTyJgFrsW3+9G9oPm5sMcaFOKlJ4jsoQ3qbnJE8Ib1RJOd2gMOA9D40WKCRA50spgzfibyU/DiFpgrdbIQgcJNydgLCcsg1LicNOTkXrioZF3xMSy9yIMuSyf3cJnMmq8h//jLNnv5/d909EoCPXUX9Hn0C8rh5N7vTwxBVEA0Kh8UdTS1k3XW0yFPglPIu7gYMYx5ZlcswnMSv+KcRIO1K4lwqP8QjjJUjNivwYpuYTW1OlhTw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3isFPEO9Mtsln/cHqi8ql12EhebXSOOgUxQ4vgrp4nQ=;
 b=N2iHYDP5bRapSoQIfWXkOec6AkkWjNKOuOlWEL4sapqFtVZZRpF4o7fE6Q/VCE8di+/GLoAp6zv7Zr9Kudtn/IXHvmvpULv0zZUn9BevL4IKZYRCWEFZ0SRT1V9rgJwX4weznkYnIOEf+w6O+hjz0NY00X/7WQkwEWb5Ldbs0AM=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 19 Apr 2023 13:44:51 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Message-ID: <ZD/UMyeckvCq0ivf@Air-de-Roger>
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
 <ZD6V0wzw/VS/MMw/@Air-de-Roger>
 <d301e110-f840-a032-c406-2f7404752783@suse.com>
 <ZD+ljXSEPCmPMAtN@Air-de-Roger>
 <5c476b65-0340-2a0e-e436-46368d3236b7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5c476b65-0340-2a0e-e436-46368d3236b7@suse.com>
X-ClientProxiedBy: LNXP265CA0027.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5c::15) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|BN8PR03MB5121:EE_
X-MS-Office365-Filtering-Correlation-Id: a9dd60d1-6814-44b6-f7e7-08db40cb7f13
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a9dd60d1-6814-44b6-f7e7-08db40cb7f13
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 11:44:56.0654
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OxvMMriiAHCInlCRZDVt0ngz8zTrWfuwKnDfDgNWdBvEgmBm2fTJIRSow00UDHNc48WRuJ95/xSUIcPsCjt9Mg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5121

On Wed, Apr 19, 2023 at 10:43:22AM +0200, Jan Beulich wrote:
> On 19.04.2023 10:25, Roger Pau Monné wrote:
> > On Wed, Apr 19, 2023 at 08:17:45AM +0200, Jan Beulich wrote:
> >> On 18.04.2023 15:06, Roger Pau Monné wrote:
> >>> On Tue, Apr 18, 2023 at 01:00:53PM +0200, Jan Beulich wrote:
> >>>> On 18.04.2023 11:24, Roger Pau Monne wrote:
> >>>>> --- a/xen/arch/x86/include/asm/config.h
> >>>>> +++ b/xen/arch/x86/include/asm/config.h
> >>>>> @@ -44,6 +44,20 @@
> >>>>>  /* Linkage for x86 */
> >>>>>  #ifdef __ASSEMBLY__
> >>>>>  #define ALIGN .align 16,0x90
> >>>>> +#ifdef CONFIG_LIVEPATCH
> >>>>> +#define START_LP(name)                          \
> >>>>> +  jmp name;                                     \
> >>>>> +  .pushsection .text.name, "ax", @progbits;     \
> >>>>> +  name:
> >>>>> +#define END_LP(name)                            \
> >>>>> +  .size name, . - name;                         \
> >>>>> +  .type name, @function;                        \
> >>>>> +  .popsection
> >>>>> +#else
> >>>>> +#define START_LP(name)                          \
> >>>>> +  name:
> >>>>> +#define END_LP(name)
> >>>>> +#endif
> >>>>>  #define ENTRY(name)                             \
> >>>>>    .globl name;                                  \
> >>>>>    ALIGN;                                        \
> >>>>
> >>>> Couldn't END_LP() set type and size unconditionally? (But see also
> >>>> below.)
> >>>
> >>> I see, so that we could also use it for debug purposes.  I guess at
> >>> that point it might be better to use {START,END}_FUNC() to note that
> >>> the macros also have an effect beyond that of livepatching.
> >>>
> >>> Maybe also introduce a START_ENTRY() that replaces ENTRY()?  Albeit I
> >>> find START_ENTRY a weird name.
> >>
> >> So do I. {START,END}_FUNC() or whatever else are in principle fine, but
> >> I take it you're aware that we meanwhile have two or even three
> >> competing proposals for a general scheme of such annotations, and we
> >> don't seem to be able to agree on one. (I guess I'll make a design
> >> session proposal on this topic for Prague.)
> > 
> > Oh, I wasn't aware we had other proposals; I've been away on and off
> > quite a lot recently and haven't been able to keep up with all the
> > xen-devel email.  Do you have any references at hand?
> 
> Andrew said he had posted something long ago, but I didn't recall and
> hence have no reference. My posting from about a year ago is
> https://lists.xen.org/archives/html/xen-devel/2022-04/msg00876.html
> Subsequently Jane went kind of the Linux route:
> https://lists.xen.org/archives/html/xen-devel/2022-08/msg00236.html
> 
> >> One thing needs to be clear though: macros doing things solely needed
> >> for LP must not have extra effects with it disabled, and such macros
> >> had better not e.g. insert a stray JMP when not really needed. Hence I
> >> expect we still want (some) LP-specific macros besides whatever we
> >> settle on as the generic ones.
> > 
> > The stray jmp can be inserted only in the livepatch case, if we end up
> > having to add it.
> > 
> > Maybe we should just go with Linux names, so initially I would like to
> > use:
> > 
> > SYM_FUNC_START{_NOALIGN}(name)
> > SYM_FUNC_START_LOCAL{_NOALIGN}(name)
> > SYM_FUNC_END(name)
> 
> As said in replies on the earlier threads, I think these are overly
> verbose and come in too many variations.

Right, I would only introduce the ones above, and only as long as I
have at least one user for them.  I don't think there's much value in
importing the file wholesale if we have no use case for many of the
imported macros.
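
For reference, with CONFIG_LIVEPATCH enabled the START_LP()/END_LP()
pair from the quoted patch expands roughly as below; `foo` is purely an
illustrative name and the function body is elided:

```asm
        /* START_LP(foo): jump to the out-of-line copy, then open a
         * dedicated .text.foo section so the function can be replaced
         * as a unit by a livepatch. */
        jmp foo
        .pushsection .text.foo, "ax", @progbits
foo:
        /* ... function body ... */
        /* END_LP(foo): record symbol size and type so livepatch
         * tooling can identify the function, then return to the
         * previous section. */
        .size foo, . - foo
        .type foo, @function
        .popsection
```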

The main issue with ENTRY() and ENDPROC() / ENDDATA() is that we still
need a tag for local function-like entry point labels; would you then
use PROC() for those?  ENTRY_LOCAL()?

I have to admit I prefer FUNC_START{_LOCAL} for that purpose, as I
think it's clearer.  I would agree to dropping the SYM_ prefix from
the Linux ones if there's consensus.
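
As a concrete sketch of how the Linux-style names would read in Xen
assembly (semantics assumed to mirror Linux's SYM_FUNC_* annotations;
the function names below are only illustrative):

```asm
SYM_FUNC_START(hypothetical_entry)          /* global, aligned entry point */
        /* ... */
        ret
SYM_FUNC_END(hypothetical_entry)            /* emits .size/.type */

SYM_FUNC_START_LOCAL_NOALIGN(hypothetical_helper) /* file-local, no padding */
        /* ... */
        ret
SYM_FUNC_END(hypothetical_helper)
```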

I would ideally like to make progress on this before the XenSummit.  I
think we all agree that we want these annotations; being blocked on
the names of the helpers seems absurd.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 11:46:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 11:46:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523371.813354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6GL-0007T2-MF; Wed, 19 Apr 2023 11:46:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523371.813354; Wed, 19 Apr 2023 11:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6GL-0007Sv-Iy; Wed, 19 Apr 2023 11:46:13 +0000
Received: by outflank-mailman (input) for mailman id 523371;
 Wed, 19 Apr 2023 11:46:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAk3=AK=citrix.com=prvs=46623c849=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pp6GJ-0007Er-Vy
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 11:46:12 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c53d552d-dea7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 13:46:08 +0200 (CEST)
Received: from mail-mw2nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 07:46:05 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by BY5PR03MB5362.namprd03.prod.outlook.com (2603:10b6:a03:220::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 11:46:03 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 11:46:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c53d552d-dea7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681904768;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=Fxc8zK/6+cjf1tbj2ZNFP8PJ2Vjq2AjBT97baPYizP4=;
  b=WqToZ4apjaZq/zNJJOkhkO6m0wPOvr9qZLeljnXx6SrjZm1Gam3EHS/H
   PJyNiGzIvCA/6JESACBWCLA54I5FTPbtE8Wss94bP4fNs9LDiXlG5jcKi
   Hs530bPJnVFN1NjuLgcp53o0jABRByUVyBUw65GmX4J2zSnM/XBa9ihQh
   E=;
X-IronPort-RemoteIP: 104.47.55.109
X-IronPort-MID: 106497649
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,208,1677560400"; 
   d="scan'208";a="106497649"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iY0UqmCi1XErs+058fENRile4pT1goR4tSoDwzHT2T5NzmuFDn/SfmyH8HP43OQPNEbpXb/+Y1QLJ9qKwbj6GSZ5sLo5c8RjNtq5X5vphqVBwZUaC5J9fOdp3J2Jl9eWP5jC4a94xkTWgukZUYVfkvHUxHK0gfkSoAbAQD2mYxhMVHbW9rWRsUDl8EXPE0Oe9N9yydBdsA5YX9ifvISTvl29w+PzeRjLAg3+7A3cYxlVAfGtpbrjrgx3CbSlfBAvUoig/y+CdaC+vBlVSPhlM9rP2gugRrssMggNIctqbUIziMPDtkEUw8kYpFpVlt+AH1yLxjFPm56sji3j6wIb3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hZNmm6+aNePSyn3Qu4PBb3LZ/GwamRTwp7o8Waa//Vw=;
 b=fnj1T4e9CuD3q+7u+GudISZfi2BOTuUwNETw+eg3Lbqu6zyMslvJQXcdMCh+F8g7Bw3ypy/W+1NPb6l9kH9Wif6pVAi0sIOAr5YOVDhf4uFtHVeacYfvON6xTfGTFm/oUdVa13T7XjmKykshdAV3Jny2R5I8PoQx7KvQNsTkYNlGUJ56VBBciP030zXDiNJLDrjB+ttd2brmW5Ba+Y2o8YOSVFKLkLgG1WRWqbV3C5DlbonaiTIsBATXAp5qFoWkeCy0NN7tGVCQj/h8KyawyMY2GbVHNy4FVmSUAiL/N5hoRr8NtmUVT8A6trprtl/NcrXI7ZTGJOm3j9zMz4uBQw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hZNmm6+aNePSyn3Qu4PBb3LZ/GwamRTwp7o8Waa//Vw=;
 b=MY2ir9ArsVQk3s3vWmmhssJbDLwGtjOCbAEHvFXPyiOF2+qg+k1bT814Y5qSixNTbQ9Y5tx5Ab+KNithsOzIgpGu+gGbOO7EiYtqSZ3cKQo2ulonxg4V51ltoP0S4wEyKThxIS7fl99429R/tHExzmpZjS0op963UeOFMJpagJw=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] xen/vcpu: ignore VCPU_SSHOTTMR_future
Date: Wed, 19 Apr 2023 13:45:56 +0200
Message-Id: <20230419114556.34856-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0092.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2bc::13) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|BY5PR03MB5362:EE_
X-MS-Office365-Filtering-Correlation-Id: a5be0f05-b06b-4143-156f-08db40cba71e
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a5be0f05-b06b-4143-156f-08db40cba71e
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 11:46:03.1779
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KZ2pobRHl6tcBvJYjw/Qz9bbrZVXGl3uO1pTsQwLgXqpX/y+PKRRE9ujB3YSkV4F0AM8QQSLNax50rULKCZPjg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5362

The usage of VCPU_SSHOTTMR_future in Linux prior to 4.7 is bogus.
When the hypervisor returns -ETIME (timeout in the past) Linux keeps
retrying to set up the timer with a higher timeout instead of
self-injecting a timer interrupt.
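
For illustration, here is a minimal, self-contained C model of that
interaction (this is not Xen or Linux code; all names, constants and
timings are hypothetical): a hypervisor that honours
VCPU_SSHOTTMR_future rejects past timeouts, and a pre-4.7 style guest
responds by bumping min_delta_ns and retrying rather than
self-injecting the interrupt, eventually giving up.

```c
#include <assert.h>
#include <stdbool.h>

#define MODEL_ETIME 62 /* stand-in for the real -ETIME errno value */

/*
 * Hypothetical model of VCPUOP_set_singleshot_timer: when
 * VCPU_SSHOTTMR_future is honoured, a timeout already in the past is
 * rejected; when it is ignored (the patched behaviour), the request
 * always succeeds because the hypervisor injects the event itself.
 */
static int set_singleshot_timer(long timeout_abs_ns, long now_ns,
                                bool honour_future)
{
    if (honour_future && timeout_abs_ns < now_ns)
        return -MODEL_ETIME;
    return 0; /* timer armed, or event injected immediately */
}

/*
 * Model of the buggy pre-4.7 clockevent logic: on failure it doubles
 * min_delta_ns and retries instead of self-injecting the interrupt.
 * Time keeps advancing while the retry is made, so under load a short
 * timeout keeps landing in the past and the guest eventually gives up.
 */
static bool guest_program_timer(long now_ns, long min_delta_ns,
                                bool honour_future)
{
    for (int attempt = 0; attempt < 3; attempt++) {
        long timeout = now_ns + min_delta_ns;
        now_ns += 2 * min_delta_ns; /* delay before the hypercall lands */
        if (set_singleshot_timer(timeout, now_ns, honour_future) == 0)
            return true; /* timer accepted */
        min_delta_ns *= 2; /* "CE: xen increased min_delta_ns ..." */
    }
    return false; /* "CE: Reprogramming failure. Giving up" */
}
```

In this sketch, flipping honour_future to false is the equivalent of
the patch below: the hypervisor stops rejecting past timeouts, so the
guest's broken retry path is never exercised.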

On boxes without any hardware assistance for logdirty we have seen HVM
Linux guests < 4.7 with 32 vCPUs give up trying to set up the timer
when logdirty is enabled:

CE: Reprogramming failure. Giving up
CE: xen increased min_delta_ns to 1000000 nsec
CE: Reprogramming failure. Giving up
CE: Reprogramming failure. Giving up
CE: xen increased min_delta_ns to 506250 nsec
CE: xen increased min_delta_ns to 759375 nsec
CE: xen increased min_delta_ns to 1000000 nsec
CE: Reprogramming failure. Giving up
CE: Reprogramming failure. Giving up
CE: Reprogramming failure. Giving up
Freezing user space processes ...
INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
Task dump for CPU 14:
swapper/14      R  running task        0     0      1 0x00000000
Call Trace:
 [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
 [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
 [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
 [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
 [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
 [<ffffffff900000d5>] ? start_cpu+0x5/0x14
INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
Task dump for CPU 26:
swapper/26      R  running task        0     0      1 0x00000000
Call Trace:
 [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
 [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
 [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
 [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
 [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
 [<ffffffff900000d5>] ? start_cpu+0x5/0x14
INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
Task dump for CPU 26:
swapper/26      R  running task        0     0      1 0x00000000
Call Trace:
 [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
 [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
 [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
 [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
 [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
 [<ffffffff900000d5>] ? start_cpu+0x5/0x14

This leads to CPU stalls and a broken system.

Work around this bogus usage by ignoring VCPU_SSHOTTMR_future in the
hypervisor.  Old Linux versions are the only ones known to have
(wrongly) attempted to use the flag, and ignoring it is compatible
with the behavior expected by any guests setting it.

Note the usage of the flag has been removed from Linux by commit:

c06b6d70feb3 xen/x86: don't lose event interrupts

which landed in Linux 4.7.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Just ignore the flag, as there's no ABI breakage.
---
 CHANGELOG.md              |  2 ++
 xen/common/domain.c       | 13 ++++++++++---
 xen/include/public/vcpu.h |  2 +-
 3 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5dbf8b06d72c..ffe009af2dc8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -9,6 +9,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 ### Changed
  - Repurpose command line gnttab_max_{maptrack_,}frames options so they don't
    cap toolstack provided values.
+ - Ignore VCPU_SSHOTTMR_future VCPUOP_set_singleshot_timer flag. The only
+   known user doesn't use it properly, leading to in-guest breakage.
 
 ### Added
  - On x86, support for features new in Intel Sapphire Rapids CPUs:
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 626debbae095..6a440590fe2a 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1762,9 +1762,16 @@ long common_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( copy_from_guest(&set, arg, 1) )
             return -EFAULT;
 
-        if ( (set.flags & VCPU_SSHOTTMR_future) &&
-             (set.timeout_abs_ns < NOW()) )
-            return -ETIME;
+        if ( set.timeout_abs_ns < NOW() )
+        {
+            /*
+             * Simplify the logic if the timeout has already expired and just
+             * inject the event.
+             */
+            stop_timer(&v->singleshot_timer);
+            send_timer_event(v);
+            break;
+        }
 
         migrate_timer(&v->singleshot_timer, smp_processor_id());
         set_timer(&v->singleshot_timer, set.timeout_abs_ns);
diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 81a3b3a7438c..30b5291cd447 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -150,7 +150,7 @@ typedef struct vcpu_set_singleshot_timer vcpu_set_singleshot_timer_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_set_singleshot_timer_t);
 
 /* Flags to VCPUOP_set_singleshot_timer. */
- /* Require the timeout to be in the future (return -ETIME if it's passed). */
+ /* Ignored. */
 #define _VCPU_SSHOTTMR_future (0)
 #define VCPU_SSHOTTMR_future  (1U << _VCPU_SSHOTTMR_future)
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 11:52:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 11:52:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523379.813365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6Ls-0000ad-H9; Wed, 19 Apr 2023 11:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523379.813365; Wed, 19 Apr 2023 11:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6Ls-0000aW-Ce; Wed, 19 Apr 2023 11:51:56 +0000
Received: by outflank-mailman (input) for mailman id 523379;
 Wed, 19 Apr 2023 11:51:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAk3=AK=citrix.com=prvs=46623c849=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pp6Lq-0000aQ-Iz
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 11:51:54 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 92a5fb83-dea8-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 13:51:53 +0200 (CEST)
Received: from mail-dm6nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 07:51:48 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SJ0PR03MB6875.namprd03.prod.outlook.com (2603:10b6:a03:43a::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 11:51:45 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 11:51:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92a5fb83-dea8-11ed-b21f-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 19 Apr 2023 13:51:39 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Removing armhf boxes again (was: Re: HEADS UP: re-adding the armhf
 boxes to osstest)
Message-ID: <ZD/Vy1HAhvvr2tPM@Air-de-Roger>
References: <ZDkmu0mgy23ypaL7@Air-de-Roger>
 <92e6ea3e-a381-a77e-f909-bf65d009647f@suse.com>
 <ZD0fbNMqT7tMZVAq@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZD0fbNMqT7tMZVAq@Air-de-Roger>
X-ClientProxiedBy: LO4P123CA0125.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:192::22) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SJ0PR03MB6875:EE_
X-MS-Office365-Filtering-Correlation-Id: 84019c47-25bf-4aa3-1fe0-08db40cc7316
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 84019c47-25bf-4aa3-1fe0-08db40cc7316
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 11:51:45.3707
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dyFdO3D8wTCbqeD4kT0aqpfaMcnAQriMdl9Q4tV1qBurJuQAv77Y7Z27eyhhbrshlkdaH4zInrsXEo2eDzj4vQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6875

I'm afraid the serial output doesn't work on any of the Cubietruck
boxes, so I had to unbless all of them (the arndales are not suitable
as builders).

I have already contacted Credativ to investigate further.

Roger.

On Mon, Apr 17, 2023 at 12:29:00PM +0200, Roger Pau Monné wrote:
> On Mon, Apr 17, 2023 at 10:23:59AM +0200, Jan Beulich wrote:
> > On 14.04.2023 12:11, Roger Pau Monné wrote:
> > > We finally had the broken PDU replaced in the osstest colo, and the
> > > armhf boxes are operational again (those are the arndales and the
> > > cubietrucks).
> > > 
> > > I've run some ad-hoc tests on them and they look fine. I plan to bless
> > > them before the end of the day.
> > > 
> > > As usual, keep and eye on any failures that could be caused by the
> > > newly added boxes.
> > 
> > Sadly recent flights look to be reporting them as broken again.
> 
> I've unblessed arndale-bluewater which doesn't seem to reboot properly
> when the reboot is initiated by the OS:
> 
> The system is going down NOW!
> Sent SIGKILL to all processes
> Apr 17 06:59:53.337634 
> Requesting system reboot
> Apr 17 06:59:53.337668 [ 1109.675149] reboot: Reû
> 
> The other boxes seem to be fine.
> 
> Thanks, Roger.
> 


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 12:01:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 12:01:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523384.813375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6Ut-0002Am-Gr; Wed, 19 Apr 2023 12:01:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523384.813375; Wed, 19 Apr 2023 12:01:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6Ut-0002Af-DI; Wed, 19 Apr 2023 12:01:15 +0000
Received: by outflank-mailman (input) for mailman id 523384;
 Wed, 19 Apr 2023 12:01:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp6Ur-0002AZ-Fr
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 12:01:13 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2054.outbound.protection.outlook.com [40.107.7.54])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e0d91257-dea9-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 14:01:12 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8120.eurprd04.prod.outlook.com (2603:10a6:20b:3f1::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.20; Wed, 19 Apr
 2023 12:00:41 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 12:00:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0d91257-dea9-11ed-b21f-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <86823b76-6be1-da65-7608-af391ff48978@suse.com>
Date: Wed, 19 Apr 2023 14:00:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
 <ZD6V0wzw/VS/MMw/@Air-de-Roger>
 <d301e110-f840-a032-c406-2f7404752783@suse.com>
 <ZD+ljXSEPCmPMAtN@Air-de-Roger>
 <5c476b65-0340-2a0e-e436-46368d3236b7@suse.com>
 <ZD/UMyeckvCq0ivf@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD/UMyeckvCq0ivf@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0031.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8120:EE_
X-MS-Office365-Filtering-Correlation-Id: be2e5d68-acde-404b-1c90-08db40cdb22d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: be2e5d68-acde-404b-1c90-08db40cdb22d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 12:00:40.6668
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kFvZ7EtEmBA9YJJB0S2chhj+DPOaeGttipWumUxaclO+STKMYoQwQzG8ZUXHoLS8+4z+kGDzCSYDdnEf7E5KCg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8120

On 19.04.2023 13:44, Roger Pau Monné wrote:
> On Wed, Apr 19, 2023 at 10:43:22AM +0200, Jan Beulich wrote:
>> On 19.04.2023 10:25, Roger Pau Monné wrote:
>>> On Wed, Apr 19, 2023 at 08:17:45AM +0200, Jan Beulich wrote:
>>>> On 18.04.2023 15:06, Roger Pau Monné wrote:
>>>>> On Tue, Apr 18, 2023 at 01:00:53PM +0200, Jan Beulich wrote:
>>>>>> On 18.04.2023 11:24, Roger Pau Monne wrote:
>>>>>>> --- a/xen/arch/x86/include/asm/config.h
>>>>>>> +++ b/xen/arch/x86/include/asm/config.h
>>>>>>> @@ -44,6 +44,20 @@
>>>>>>>  /* Linkage for x86 */
>>>>>>>  #ifdef __ASSEMBLY__
>>>>>>>  #define ALIGN .align 16,0x90
>>>>>>> +#ifdef CONFIG_LIVEPATCH
>>>>>>> +#define START_LP(name)                          \
>>>>>>> +  jmp name;                                     \
>>>>>>> +  .pushsection .text.name, "ax", @progbits;     \
>>>>>>> +  name:
>>>>>>> +#define END_LP(name)                            \
>>>>>>> +  .size name, . - name;                         \
>>>>>>> +  .type name, @function;                        \
>>>>>>> +  .popsection
>>>>>>> +#else
>>>>>>> +#define START_LP(name)                          \
>>>>>>> +  name:
>>>>>>> +#define END_LP(name)
>>>>>>> +#endif
>>>>>>>  #define ENTRY(name)                             \
>>>>>>>    .globl name;                                  \
>>>>>>>    ALIGN;                                        \
>>>>>>
>>>>>> Couldn't END_LP() set type and size unconditionally? (But see also
>>>>>> below.)
>>>>>
>>>>> I see, so that we could also use it for debug purposes.  I guess at
>>>>> that point it might be better to use {START,END}_FUNC() to note that
>>>>> the macros also have an effect beyond that of livepatching.
>>>>>
>>>>> Maybe also introduce a START_ENTRY() that replaces ENTRY()?  Albeit I
>>>>> find START_ENTRY a weird name.
>>>>
>>>> So do I. {START,END}_FUNC() or whatever else are in principle fine, but
>>>> I take it that you're aware that we meanwhile have two or even three
>>>> concurring proposals on a general scheme of such annotations, and we
>>>> don't seem to be able to agree on one. (I guess I'll make a design
>>>> session proposal on this topic for Prague.)
>>>
>>> Oh, I wasn't aware we had other proposals; I've been away on and off
>>> quite a lot recently and haven't been able to keep up with all
>>> xen-devel email.  Do you have any references at hand?
>>
>> Andrew said he had posted something long ago, but I didn't recall and
>> hence have no reference. My posting from about a year ago is
>> https://lists.xen.org/archives/html/xen-devel/2022-04/msg00876.html
>> Subsequently Jane went kind of the Linux route:
>> https://lists.xen.org/archives/html/xen-devel/2022-08/msg00236.html
>>
>>>> One thing needs to be clear though: Macros doing things solely needed
>>>> for LP need to not have extra effects with it disabled, and such
>>>> macros also better wouldn't e.g. insert stray JMP when not really
>>>> needed. Hence I expect we still want (some) LP-specific macros besides
>>>> whatever we settle on as the generic ones.
>>>
>>> The stray jmp can be inserted only in the livepatch case, if we end up
>>> having to add it.
>>>
>>> Maybe we should just go with Linux names, so initially I would like to
>>> use:
>>>
>>> SYM_FUNC_START{_NOALIGN}(name)
>>> SYM_FUNC_START_LOCAL{_NOALIGN}(name)
>>> SYM_FUNC_END(name)
>>
>> As said in replies on the earlier threads, I think these are overly
>> verbose and come in overly many variations.
> 
> Right, I would only introduce the ones above, and only as long as I have
> at least one user for them. I don't think there's much value in importing
> the file wholesale if we have no use case for a lot of the imported
> macros.
> 
> The main issue with ENTRY() and ENDPROC() / ENDDATA() is that we still
> need a tag for local function-like entry point labels, would you then
> use PROC() for those? ENTRY_LOCAL()?
> 
> I have to admit I prefer the FUNC_START{_LOCAL} for that purpose as I
> think it's clearer.  I would agree on dropping the SYM_ prefix from
> the Linux ones if there's consensus.

Okay, I'm glad we can agree on no SYM_. But what value does START have?
And why would the type be (re)specified via ..._END()? FUNC(), DATA(),
and END() ought to be all we need. The type would be set by the entry
point macros, and the size by END(). To cover local vs global I could
live with _LOCAL suffixes, but personally would prefer e.g. LFUNC()
and GFUNC(). We could also limit ourselves to FUNC() plus DATA(), and
have (non-)global expressed by END() and e.g. LEND() or END_LOCAL().
One less macro, but maybe slightly odd to have the .global directives
then at the end rather than at the beginning.
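To make the trade-off concrete, here is a minimal sketch of what the reduced macro set discussed above might look like (FUNC/LFUNC/END are illustrative names only, not an agreed proposal; modelled on the ENTRY() pattern quoted earlier in the thread):

```c
/* Illustrative only -- not actual Xen code. */
#ifdef __ASSEMBLY__

/* Global function entry: visibility and type are set up front. */
#define FUNC(name)        \
  .globl name;            \
  .type name, @function;  \
  ALIGN;                  \
  name:

/* File-local function entry: same, minus .globl. */
#define LFUNC(name)       \
  .type name, @function;  \
  ALIGN;                  \
  name:

/* Common terminator: records the symbol's size. */
#define END(name)         \
  .size name, . - name

#endif /* __ASSEMBLY__ */
```

With this shape, the type is attached by the entry-point macro and only the size computation lives at the end, so END() stays shared between functions and data.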

Note that this is different from my proposed patch, where I aimed at
the change being unintrusive. This includes that this was matching
what Arm has (and hence required no change there at all). I think it
would certainly be nice if these constructs were as similar as
possible between architectures; some may even be possible to share.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 12:06:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 12:06:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523388.813385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6ZR-0002nX-2c; Wed, 19 Apr 2023 12:05:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523388.813385; Wed, 19 Apr 2023 12:05:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6ZQ-0002nQ-Ul; Wed, 19 Apr 2023 12:05:56 +0000
Received: by outflank-mailman (input) for mailman id 523388;
 Wed, 19 Apr 2023 12:05:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ifGd=AK=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pp6ZP-0002nK-Cg
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 12:05:55 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20631.outbound.protection.outlook.com
 [2a01:111:f400:fe59::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 86622ecc-deaa-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 14:05:51 +0200 (CEST)
Received: from BN0PR03CA0057.namprd03.prod.outlook.com (2603:10b6:408:e7::32)
 by BY5PR12MB4115.namprd12.prod.outlook.com (2603:10b6:a03:20f::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.20; Wed, 19 Apr
 2023 12:05:47 +0000
Received: from BN8NAM11FT100.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e7:cafe::72) by BN0PR03CA0057.outlook.office365.com
 (2603:10b6:408:e7::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22 via Frontend
 Transport; Wed, 19 Apr 2023 12:05:47 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT100.mail.protection.outlook.com (10.13.177.100) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 12:05:47 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 19 Apr
 2023 07:05:46 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 19 Apr 2023 07:05:44 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86622ecc-deaa-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YIKC/Fds7A9GVj+UogZd7za14uuWKuuCeWA78S7PZlnDhAMNjOue0xLPYxZKcW/nNiixsDdBtBTwKUuNSGl05aljDBiAyIm9hoVevoJWHVLQQD/YmqgKO3DO9Xk24Ouomadl7xg7/0G4aFglmRyqEBaSUOJeLeaGgRNLLk8GYrDmOLaCsMYxrGO1ly0UAmQ0VMHfwjg4xLPcAz11/U/UwhqIe8qbbjI5vrUCpkMXscJk7BM7lNW3SV/Z0YE8dmxqqR48y3TmFHeJ6BRZFV22wA4BtbHUTplMaLPeWyCsW9oAtLGhUtEVM3RmdArAKQD/sqmtBtdiSUvMPnR9RaOVFA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=koVgJq0CDskxBs3mtL1uJ4fcZrANCwjGhvanYDgGC4k=;
 b=VAd7Tft5Or48kERkDmej1FYJAKJPF57FlKj4lNkqOmyPflhD1DDHh49sUPDgr9yoftyx6AxAnEMQc9KdQvQl64mRnMxrdXemQJpaBvDbAAfPjXRpRX3DXhVP4rGHIHyXB+qBeQUKIrUC71Q6AiBNGG5h14hkzahN/5IcROLj2nZo/DFAhJq5g/zZ/xYGCK0N6+andI8eyKja4X3FkICExqGHPoYpdPiXh/+c5l3V8NJ2p1BMf4KuNKErBSvHYJajCmUe4eNRMfjIAxhuYModEl1D/V+8OVsTIIiyVlLEr2ctWpUxqc9RZeYdmE6GrDLcKB7aFNY+kuycIkETro/WPQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=koVgJq0CDskxBs3mtL1uJ4fcZrANCwjGhvanYDgGC4k=;
 b=ht9u83trJSHaw29RHdnIsLTVTkj+2Ye6zF1y1gVqpPKZifWAmTQ+qYq2bqadgMORdGnzG2WPO9qyf4/CyqL1u5oE7HEiyGkj+F7rhWDtwAIvAfv3USVscY0UhDO7yEzumm9MHDU4BIwLHp62ZbavC/YTnh94RbZWBMXvG/34Noc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <77b015b2-cc30-0534-4e0c-c392b5e8a7b3@amd.com>
Date: Wed, 19 Apr 2023 14:05:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 01/10] xen/arm: domain_build: Track unallocated pages
 using the frame number
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-2-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230413173735.48387-2-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT100:EE_|BY5PR12MB4115:EE_
X-MS-Office365-Filtering-Correlation-Id: 529f73a3-e959-4a2c-cf77-08db40ce6918
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6jmyByN+Dw65wwFS0yaI4CyCm3hukPn2YWijC86Vf7/7ioCw9DF9JSv0JOn2buPL/I7SytUDH09nimsyPkHj2Bj1Cnx+r3hYovtEPiZN6kAJ+1JWBrcqxXt+g08jYFcBSwmb7TR738SjZE2D4Fv4NT19j/q+KQPTSX2mbwsgFhNHNPQVdvTCWM63KfpYIbS0RcyIj+6S7nDN/QwlWhiTSYnmoVYnHUaRUZTlT6yifJ47fsPdMtpF7dgoOfvk7291UgmPxJsC1DIHFmoYHvDLSUw3IWBp6kgRiakrWOfM9eQ1Lsj/kYINkKTXbrsAzMZiOxFjYetsM9YMuUSj1xImFaUMGDt9akMQ1yySvijn+qmewLxlCRqC8mK27rFicZKlKa60mdH54ucwD1yIUgwqDJuau0SZIRrcmBqJx29iNcxqOs3v7RJzwXI0AQbHT8iiUurnWU8Wll1eD41MPsJTvgcF5/JyL6wt0LtxMg9TKEj7bjzcCw5hM2joyIGlXmY6GlvkRXCy0tzWm1YMFurWraMp/FDdW2zyWnivBJxURfRILYSpJkhj+5sM1hvYmz/y9l+MvMc54bBYo97eGu04NizJPQhUdkN3vjMB/F4s9i3NskeM1Sz5HCITi22iddrnJR6bgUCYZyoN2lqPI5ilh1B4egYW01vWcMZJgCzUhSIp1kF2+STzM+5dCtQAH9snhecSjmNo30+IqdKG9AW4u0XpEZS2VC70TSuSp7RjXJVFEgBwiRV/z1JQJv3Dbmj7X7zZgw8cAfOumhJ3xIeSi7swLMiazlGKV8LIL1qEWII=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(396003)(136003)(346002)(451199021)(36840700001)(46966006)(40470700004)(36756003)(186003)(70586007)(4326008)(16576012)(110136005)(54906003)(316002)(70206006)(478600001)(40480700001)(8936002)(82310400005)(41300700001)(8676002)(5660300002)(7416002)(44832011)(82740400003)(2906002)(86362001)(356005)(81166007)(336012)(426003)(2616005)(26005)(53546011)(40460700003)(36860700001)(47076005)(31696002)(31686004)(21314003)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 12:05:47.3418
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 529f73a3-e959-4a2c-cf77-08db40ce6918
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT100.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4115

Hi Ayan,

On 13/04/2023 19:37, Ayan Kumar Halder wrote:
> 
> 
> rangeset_{xxx}_range() functions are invoked with 'start' and 'size' as
> arguments which are either 'uint64_t' or 'paddr_t'. However, the function
> accepts 'unsigned long' for 'start' and 'size'. 'unsigned long' is 32 bits for
> Arm32. Thus, there is an implicit downcasting from 'uint64_t'/'paddr_t' to
> 'unsigned long' when invoking rangeset_{xxx}_range().
> 
> So, it may seem there is a possibility of loss of data due to truncation.
> 
> In reality, 'start' and 'size' are always page aligned. And Arm32 currently
> supports 40 bits as the width of physical address.
> So if the addresses are page aligned, the last 12 bits contain zeroes.
> Thus, we could instead pass page frame number which will contain 28 bits (40-12
> on Arm32) and this can be represented using 'unsigned long'.
> 
> On Arm64, this change will not induce any adverse side effect as the width of
> physical address is 48 bits. 
NIT: This reads as const, so it would be better to write: "as the max supported width of ..."

> Thus, the width of 'gfn' (i.e. 48 - 12 = 36) can be
> represented using 'unsigned long' (which is 64 bits wide).
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

With or without (after all this is just a commit msg):
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 12:07:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 12:07:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523392.813395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6al-0003Ky-Bh; Wed, 19 Apr 2023 12:07:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523392.813395; Wed, 19 Apr 2023 12:07:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6al-0003Kq-8q; Wed, 19 Apr 2023 12:07:19 +0000
Received: by outflank-mailman (input) for mailman id 523392;
 Wed, 19 Apr 2023 12:07:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WT8v=AK=zen2.lab.linutronix.de=alex@srs-se1.protection.inumbo.net>)
 id 1pp6aj-0003Kg-Tb
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 12:07:17 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b8ac257e-deaa-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 14:07:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8ac257e-deaa-11ed-8611-37d641c3527e
From: Alexander Kanavin <alex@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681906034;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=uB/axOQ5En9PckK/Aq+OqwZra7JiQP1h450lnS9RVyM=;
	b=iF8gDbav+fN6Eyk58sDi7615PCC6ZzLC9zAHucwDBKETpxMhdI+XDS9+lfal4ZbN9C5OIN
	oHnhx2bUTq7xx1xNLjonuvFa4IwTbqNdHQ5fsW9DU4c1oIfylmH+O9xB8t3Z0kMGNdE81/
	Tq0Ag872mbLJYUVVDsYSa1qgFqT4HSbRGFUlkhYAf5sgT5inZ0/6KmZif4tpDKJEmRct+Z
	J31PCoKp8TUgm1NvEBcq8tZDOD3BHxvFISDflSKkDrC9UEcDalrKwRc4sqd9/mUCg0S5D/
	IzitHA3lUbvHM2/LKYzJcXdIR0lpn9uQTTPcABrFvqf+AK7ja5v7qUBTCfb7ZA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681906034;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=uB/axOQ5En9PckK/Aq+OqwZra7JiQP1h450lnS9RVyM=;
	b=3slBlkz2qwt9oXtBNR+XVv1opP7r0DPcWckCgI9aPsIE2ggyok7RJW8g1OfMrQ0cgjyVdL
	q/96aaEPUpq6I7AQ==
To: xen-devel@lists.xenproject.org
Cc: Alexander Kanavin <alex@linutronix.de>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2] tools/xenstore/xenstored_control.c: correctly print time_t
Date: Wed, 19 Apr 2023 14:07:09 +0200
Message-Id: <20230419120710.855128-1-alex@linutronix.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On 32-bit systems with a 64-bit time_t (hello, Y2038 problem),
the following error occurs otherwise:

| xenstored_control.c: In function 'lu_reject_reason':
| xenstored_control.c:646:70: error: format '%ld' expects argument of type 'long int', but argument 5 has type 'time_t' {aka 'long long int'} [-Werror=format=]

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
---
 tools/xenstore/xenstored_control.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index cbd62556c3..403295788a 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -666,12 +666,12 @@ static const char *lu_reject_reason(const void *ctx)
 	time_t now = time(NULL);
 
 	list_for_each_entry(conn, &connections, list) {
-		if (conn->ta_start_time &&
-		    (now - conn->ta_start_time >= lu_status->timeout)) {
+		unsigned long tdiff = now - conn->ta_start_time;
+
+		if (conn->ta_start_time && (tdiff >= lu_status->timeout)) {
 			ret = talloc_asprintf(ctx, "%s\nDomain %u: %ld s",
 					      ret ? : "Domains with long running transactions:",
-					      conn->id,
-					      now - conn->ta_start_time);
+					      conn->id, tdiff);
 		}
 	}
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 12:08:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 12:08:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523396.813405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6bU-0003qy-L2; Wed, 19 Apr 2023 12:08:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523396.813405; Wed, 19 Apr 2023 12:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6bU-0003qp-Hi; Wed, 19 Apr 2023 12:08:04 +0000
Received: by outflank-mailman (input) for mailman id 523396;
 Wed, 19 Apr 2023 12:08:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YXz3=AK=linutronix.de=alex@srs-se1.protection.inumbo.net>)
 id 1pp6bT-0003mD-Cw
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 12:08:03 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d532f2a6-deaa-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 14:08:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d532f2a6-deaa-11ed-b21f-6b7b168915f2
Message-ID: <97678ded-9ddf-3b5c-d39d-570614094c35@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681906082;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FmRIyeoPLiG7jYj40cDzoIoUUKUYA8sqqivPiq0rFvE=;
	b=WoUZGdDHccQ45v1fuReuq0PljU9xscLWXwehJWRYSzaKowDb4m8/LpfkYEYU/6vMtkv4qZ
	GA4dcofUk/TD3KdrCLTu85nG+QVKFMOSCTk04m0pbAFE3PJaFLMkqEnCeYCGNHhBijQ0Pc
	7oBZNBRVL+YTuYfdgPSUVXWWfGw3G8jaiYeLuAEvzR8zGyAuceTimarFxeTAdcq/ghOq3L
	XjbBLl5ralJl3EvrnLa4WW3YDuRC2Vcd/bZEHq6fdyrzfXhQX3d6uubuMEvF6ZGP9pJNB+
	GlzQ8Z3BG+qKmOfqYYULLupcEwvnWMi/1Sq3E5FEgqX72/6l/fGZglC/sPVSZQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681906082;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FmRIyeoPLiG7jYj40cDzoIoUUKUYA8sqqivPiq0rFvE=;
	b=ih/FJzYQ4Beod+3ysVKTE46fl6NY48E93HhJ/s2AIxdICSbSWQIZk5t/M+IimzF85Xx85s
	QSZ5h5pTF7EWTFDA==
Date: Wed, 19 Apr 2023 14:08:01 +0200
MIME-Version: 1.0
Subject: Re: [PATCH] tools/xenstore/xenstored_control.c: correctly print
 time_t
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230412090104.3794213-1-alex@linutronix.de>
 <fda641f1-e87e-3dc0-85a5-acf91d6f39ff@suse.com>
From: Alexander Kanavin <alex@linutronix.de>
In-Reply-To: <fda641f1-e87e-3dc0-85a5-acf91d6f39ff@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 4/17/23 08:35, Juergen Gross wrote:
>
> I'd rather have something like:
>
> diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
> index cbd62556c3..f9452d63b4 100644
> --- a/tools/xenstore/xenstored_control.c
> +++ b/tools/xenstore/xenstored_control.c
> @@ -666,12 +666,12 @@ static const char *lu_reject_reason(const void *ctx)
>         time_t now = time(NULL);
>
>         list_for_each_entry(conn, &connections, list) {
> -               if (conn->ta_start_time &&
> -                   (now - conn->ta_start_time >= lu_status->timeout)) {
> +               unsigned long tdiff = now - conn->ta_start_time;
> +
> +               if (conn->ta_start_time && tdiff >= lu_status->timeout) {
>                         ret = talloc_asprintf(ctx, "%s\nDomain %u: %ld s",
>                                               ret ? : "Domains with long running transactions:",
> -                                             conn->id,
> -                                             now - conn->ta_start_time);
> +                                             conn->id, tdiff);
>                 }
>         }


Thanks, I just sent a v2 that does this.


-- 
Alexander Kanavin
Linutronix GmbH | Bahnhofstrasse 3 | D-88690 Uhldingen-Mühlhofen
Phone: +49 7556 25 999 39; Fax.: +49 7556 25 999 99

Hinweise zum Datenschutz finden Sie hier (Informations on data privacy
can be found here): https://linutronix.de/legal/data-protection.php

Linutronix GmbH | Firmensitz (Registered Office): Uhldingen-Mühlhofen |
Registergericht (Registration Court): Amtsgericht Freiburg i.Br., HRB700
806 | Geschäftsführer (Managing Directors): Heinz Egger, Thomas Gleixner,
Sharon Heck, Yulia Beck, Tiffany Silva



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 12:09:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 12:09:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523403.813415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6d8-0004X4-4B; Wed, 19 Apr 2023 12:09:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523403.813415; Wed, 19 Apr 2023 12:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6d8-0004Wx-1E; Wed, 19 Apr 2023 12:09:46 +0000
Received: by outflank-mailman (input) for mailman id 523403;
 Wed, 19 Apr 2023 12:09:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VWR0=AK=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pp6d5-0004Wp-Rn
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 12:09:44 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20603.outbound.protection.outlook.com
 [2a01:111:f400:7d00::603])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0f1441e4-deab-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 14:09:39 +0200 (CEST)
Received: from DB6PR07CA0006.eurprd07.prod.outlook.com (2603:10a6:6:2d::16) by
 DU0PR08MB9749.eurprd08.prod.outlook.com (2603:10a6:10:447::5) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.30; Wed, 19 Apr 2023 12:09:35 +0000
Received: from DBAEUR03FT042.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2d:cafe::97) by DB6PR07CA0006.outlook.office365.com
 (2603:10a6:6:2d::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22 via Frontend
 Transport; Wed, 19 Apr 2023 12:09:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT042.mail.protection.outlook.com (100.127.142.143) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 12:09:35 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Wed, 19 Apr 2023 12:09:35 +0000
Received: from 11222cb61d24.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F95AF0CE-BD80-46A7-9061-333E9A9CD6A9.1; 
 Wed, 19 Apr 2023 12:09:28 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 11222cb61d24.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Apr 2023 12:09:28 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DU0PR08MB8186.eurprd08.prod.outlook.com (2603:10a6:10:3ed::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 12:09:26 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.021; Wed, 19 Apr 2023
 12:09:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f1441e4-deab-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=awbrBfCio7YdH3xda32FMegz90Dh6jCKwnYTiPnYrKU=;
 b=KFWcSGvsGrtadckYupMKVEmG2WrZFPeFvfMwlpC6vMeisVsk7HgSWOzTmR6xS4ItaLAwopT8qG+dJpBjR4tupZNq0odjCgUvR6kU/n+AK95dDJSIijJY/4mY6r3wtk7V88iiYm/CdttLrIEK83TWwr8WgFPn1tvzEWw+oYNBsUg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6c29f8f1fbe5ec7a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=W81b7/9WFx25EUuLy69CDM1G/YnhyrJXvFdZUwouYAczQnz3d5aQxy6o2xbTOfaBbuvYZPr4GM96+3jg86dZXjGjLWYLiD1u2q++OyQT/dFgVvPnAv8HlZ9hdXZWJeUTmQjxl2E8x+TCok28J3D+ivglhnTQ5QQlu87XmEilLQkQChKeawk3VEB/szuEh7Ti7Ub7VrTyUTLVVdl7XAtYOrheRN3jYZJhTHiJ4YhS+x5TCi929JXyEmc7tRKAv3i4B33riGZgpxQjD+wE5coQc/NwlUWnt/uqPlzUrs/J+m7gqF/pOmzd0r4ldcBClqKgP8+kPXgCG/mQM7sNaS8E/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=awbrBfCio7YdH3xda32FMegz90Dh6jCKwnYTiPnYrKU=;
 b=IgufVjchQE6v/pgjCsVzIecKXcnwV3qunpeOuRLLEQFPwwyIxZNs9KnMWMHEXWPqYLc5msKfu3NkQCKuoxT8eL6ufvNQch1mAchgMmYDce8s5NHBWQJMfRqM0aC/TxoYDaFp2HYAzofDxK6CLEZ5ufmr6QqOmMN02sW2UdxKj3LdarT5P2jSM8dVTbF6gfGiYfNYam3+Sw8A0JAVpEgp2CF0S8IknlWwmeAX8LtxoIfOM04TU0+yeGpOe7hsm2Yll1Ehruc2J2QTm6f+enDrZ71uxRy40qfHdejfA5U5D0zBXy8gNTABMRYPKV0+JcSjun96wlgKeEbdqAADDXXyoA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=awbrBfCio7YdH3xda32FMegz90Dh6jCKwnYTiPnYrKU=;
 b=KFWcSGvsGrtadckYupMKVEmG2WrZFPeFvfMwlpC6vMeisVsk7HgSWOzTmR6xS4ItaLAwopT8qG+dJpBjR4tupZNq0odjCgUvR6kU/n+AK95dDJSIijJY/4mY6r3wtk7V88iiYm/CdttLrIEK83TWwr8WgFPn1tvzEWw+oYNBsUg=
From: Henry Wang <Henry.Wang@arm.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Community Manager <community.manager@xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: RE: [PATCH v2] xen/vcpu: ignore VCPU_SSHOTTMR_future
Thread-Topic: [PATCH v2] xen/vcpu: ignore VCPU_SSHOTTMR_future
Thread-Index: AQHZcrSSidx067y7IEiVJ78Xlkkoda8yiarw
Date: Wed, 19 Apr 2023 12:09:26 +0000
Message-ID:
 <AS8PR08MB79911D8D4C0B64076EDCF11E92629@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230419114556.34856-1-roger.pau@citrix.com>
In-Reply-To: <20230419114556.34856-1-roger.pau@citrix.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: B09FA0A7610A0A498DFA606086FD79EC.0
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DU0PR08MB8186:EE_|DBAEUR03FT042:EE_|DU0PR08MB9749:EE_
X-MS-Office365-Filtering-Correlation-Id: 0de764b0-610a-43cf-2469-08db40cef108
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 JQMp6prhlB+S2Bp+R5I6g8DtZqV48HYrpQfmltsw4uA69FhMc49XjjaZEdRWecEii/IL1eU3DJFdLrijQ71FPkKIMuDKURjU7DshG+7Sc8zjKvm4Beseauk+YZY/8KOw3hn0RDyX0tXM3tf9hBnIdeqQpk4u8SRAPaGH0xPDSnbM2XOls6Iypxz5J931iZ/6EvR8M+pXkFT+PUzpyVIAT2/WsqTxr+8ClrdX9WYv8qk3rlqnP7MxbSy9lY0UzvIGFYrx3XXP9FziQkYL4Q9PePpH1eEuDZaRU/VW6RKHpSm6TZXxwDiBtsdA9Ffat+GYiskBSFwn/sbBlBw94dPXzsIbk28zku1LHwkmcVHG+1eg5G0BKOIfOYCi9cM9469k7bCRwBdk6u3phlkni3HNrcJj1kgR7HC/H51LsI38MUrcHWsniZHcJWVBEOEUcRvEe8WErcwcN/6IsGxvs2P9T0pA7O4191JUdxxH33tLGa8gOnk+cmh4F3Yk/6XFWwzqg1DFvF6Gd242Y0wyYbvWyMqBGW5bmaBkpLqfXZ6M7sKbpRz9OXz35uAJFZlVuuWXRb5SzUxKlHRumoahCBa8Glz1y0ill8TVJjrsHEe6UuC6BX/wbgThUuZZE1PY02Ej
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(366004)(376002)(396003)(346002)(451199021)(54906003)(4326008)(110136005)(66946007)(66556008)(316002)(66446008)(64756008)(76116006)(66476007)(7696005)(478600001)(38070700005)(8936002)(71200400001)(41300700001)(8676002)(5660300002)(122000001)(52536014)(33656002)(86362001)(55016003)(2906002)(83380400001)(6506007)(9686003)(26005)(186003)(38100700002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB8186
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT042.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ea9a9f6b-95a5-4316-cedc-08db40ceeba8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	XUQgNHeL69TSas7WEI0R5C2XXirKHAA+LmjzLWusgxfCwePTPPEG10J+q1fxXzUaVkh+PcfuSpzsnav6ZgSpB6MzdCRar/LD2hp+KtS7kDqSULVQ5AqNdeXR/qIALTD7mi9WdE+zCKLeP5LAsn+yqq6hti7FjGrORY0ykqbhzzKgAWK01Yqj/bZ4oCngpo71VBTEV7i8WfNy4tSyU5sXYcJPgiBiegcEdVJjHHFmzI5Ryh3sKjRtE5bfK2kd09S1UHeGX8FZSES38QzDCeFLmjxXQoXEU0EYC31htDeQ0deSgRfCIXkFCO9zzK72YgC/9Sxyt70VxIcv3ABPM1zS0+vCdlApwqgB354ZMR4e5q6cPRVhpzH0/FzPMPylvrfj+2ouiw1TuUlkDBbz64hZkScvopAZKUooj+kSx9QOajx/uu1gBASexwFaliHClegJgqqA95+UWFH7hVS9NSz3vBeSFugg95eGjcCncC9+i98BlxCkJTYcQEnQHqd8y9/vD9SmzulAmjTq7rL0BmS1hagd4qb7NRwMr/1EVvaKo7gZy3X5bta0pdkOuI3hB6pgzj9108m32Wk6aI2Dh90OnJUFfQZ18N69GRQrXiyp3lAuc+uOF/MaU2NgdNuQuYGUCfPnOKVMaplp2J/VheSjWR72ooRbyp14klxjlbDEpnlcRKWYFF4+71s/JV0wrc/yufrnXCSnBQ8C8nhzyGdsZN9WjKh8OmHEgmSk9ckyQuIQE/6oxXYpJAPcVA20UzVa
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(136003)(376002)(39860400002)(451199021)(36840700001)(46966006)(40470700004)(2906002)(86362001)(8676002)(41300700001)(70206006)(70586007)(8936002)(4326008)(5660300002)(52536014)(316002)(110136005)(7696005)(356005)(54906003)(82310400005)(82740400003)(81166007)(478600001)(40460700003)(26005)(47076005)(33656002)(36860700001)(6506007)(83380400001)(336012)(55016003)(9686003)(186003)(40480700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 12:09:35.4184
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0de764b0-610a-43cf-2469-08db40cef108
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT042.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9749

SGkgUm9nZXIsDQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogUm9nZXIg
UGF1IE1vbm5lIDxyb2dlci5wYXVAY2l0cml4LmNvbT4NCj4gU3ViamVjdDogW1BBVENIIHYyXSB4
ZW4vdmNwdTogaWdub3JlIFZDUFVfU1NIT1RUTVJfZnV0dXJlDQo+IA0KPiBUaGUgdXNhZ2Ugb2Yg
VkNQVV9TU0hPVFRNUl9mdXR1cmUgaW4gTGludXggcHJpb3IgdG8gNC43IGlzIGJvZ3VzLg0KPiBX
aGVuIHRoZSBoeXBlcnZpc29yIHJldHVybnMgLUVOT1RJTUUgKHRpbWVvdXQgaW4gdGhlIHBhc3Qp
IExpbnV4IGtlZXBzDQo+IHJldHJ5aW5nIHRvIHNldHVwIHRoZSB0aW1lciB3aXRoIGEgaGlnaGVy
IHRpbWVvdXQgaW5zdGVhZCBvZg0KPiBzZWxmLWluamVjdGluZyBhIHRpbWVyIGludGVycnVwdC4N
Cj4gDQo+IE9uIGJveGVzIHdpdGhvdXQgYW55IGhhcmR3YXJlIGFzc2lzdGFuY2UgZm9yIGxvZ2Rp
cnR5IHdlIGhhdmUgc2VlbiBIVk0NCj4gTGludXggZ3Vlc3RzIDwgNC43IHdpdGggMzJ2Q1BVcyBn
aXZlIHVwIHRyeWluZyB0byBzZXR1cCB0aGUgdGltZXIgd2hlbg0KPiBsb2dkaXJ0eSBpcyBlbmFi
bGVkOg0KPiANCj4gQ0U6IFJlcHJvZ3JhbW1pbmcgZmFpbHVyZS4gR2l2aW5nIHVwDQo+IENFOiB4
ZW4gaW5jcmVhc2VkIG1pbl9kZWx0YV9ucyB0byAxMDAwMDAwIG5zZWMNCj4gQ0U6IFJlcHJvZ3Jh
bW1pbmcgZmFpbHVyZS4gR2l2aW5nIHVwDQo+IENFOiBSZXByb2dyYW1taW5nIGZhaWx1cmUuIEdp
dmluZyB1cA0KPiBDRTogeGVuIGluY3JlYXNlZCBtaW5fZGVsdGFfbnMgdG8gNTA2MjUwIG5zZWMN
Cj4gQ0U6IHhlbiBpbmNyZWFzZWQgbWluX2RlbHRhX25zIHRvIDc1OTM3NSBuc2VjDQo+IENFOiB4
ZW4gaW5jcmVhc2VkIG1pbl9kZWx0YV9ucyB0byAxMDAwMDAwIG5zZWMNCj4gQ0U6IFJlcHJvZ3Jh
bW1pbmcgZmFpbHVyZS4gR2l2aW5nIHVwDQo+IENFOiBSZXByb2dyYW1taW5nIGZhaWx1cmUuIEdp
dmluZyB1cA0KPiBDRTogUmVwcm9ncmFtbWluZyBmYWlsdXJlLiBHaXZpbmcgdXANCj4gRnJlZXpp
bmcgdXNlciBzcGFjZSBwcm9jZXNzZXMgLi4uDQo+IElORk86IHJjdV9zY2hlZCBkZXRlY3RlZCBz
dGFsbHMgb24gQ1BVcy90YXNrczogeyAxNH0gKGRldGVjdGVkIGJ5IDEwLCB0PTYwMDAyDQo+IGpp
ZmZpZXMsIGc9NDAwNiwgYz00MDA1LCBxPTE0MTMwKQ0KPiBUYXNrIGR1bXAgZm9yIENQVSAxNDoN
Cj4gc3dhcHBlci8xNCAgICAgIFIgIHJ1bm5pbmcgdGFzayAgICAgICAgMCAgICAgMCAgICAgIDEg
MHgwMDAwMDAwMA0KPiBDYWxsIFRyYWNlOg0KPiAgWzxmZmZmZmZmZjkwMTYwZjVkPl0gPyByY3Vf
ZXFzX2VudGVyX2NvbW1vbi5pc3JhLjMwKzB4M2QvMHhmMA0KPiAgWzxmZmZmZmZmZjkwN2I5YmRl
Pl0gPyBkZWZhdWx0X2lkbGUrMHgxZS8weGQwDQo+ICBbPGZmZmZmZmZmOTAwMzk1NzA+XSA/IGFy
Y2hfY3B1X2lkbGUrMHgyMC8weGMwDQo+ICBbPGZmZmZmZmZmOTAxMDgyMGE+XSA/IGNwdV9zdGFy
dHVwX2VudHJ5KzB4MTRhLzB4MWUwDQo+ICBbPGZmZmZmZmZmOTAwNWQzYTc+XSA/IHN0YXJ0X3Nl
Y29uZGFyeSsweDFmNy8weDI3MA0KPiAgWzxmZmZmZmZmZjkwMDAwMGQ1Pl0gPyBzdGFydF9jcHUr
MHg1LzB4MTQNCj4gSU5GTzogcmN1X3NjaGVkIGRldGVjdGVkIHN0YWxscyBvbiBDUFVzL3Rhc2tz
OiB7IDI2fSAoZGV0ZWN0ZWQgYnkgMjQsIHQ9NjAwMDINCj4gamlmZmllcywgZz02OTIyLCBjPTY5
MjEsIHE9NzAxMykNCj4gVGFzayBkdW1wIGZvciBDUFUgMjY6DQo+IHN3YXBwZXIvMjYgICAgICBS
ICBydW5uaW5nIHRhc2sgICAgICAgIDAgICAgIDAgICAgICAxIDB4MDAwMDAwMDANCj4gQ2FsbCBU
cmFjZToNCj4gIFs8ZmZmZmZmZmY5MDE2MGY1ZD5dID8gcmN1X2Vxc19lbnRlcl9jb21tb24uaXNy
YS4zMCsweDNkLzB4ZjANCj4gIFs8ZmZmZmZmZmY5MDdiOWJkZT5dID8gZGVmYXVsdF9pZGxlKzB4
MWUvMHhkMA0KPiAgWzxmZmZmZmZmZjkwMDM5NTcwPl0gPyBhcmNoX2NwdV9pZGxlKzB4MjAvMHhj
MA0KPiAgWzxmZmZmZmZmZjkwMTA4MjBhPl0gPyBjcHVfc3RhcnR1cF9lbnRyeSsweDE0YS8weDFl
MA0KPiAgWzxmZmZmZmZmZjkwMDVkM2E3Pl0gPyBzdGFydF9zZWNvbmRhcnkrMHgxZjcvMHgyNzAN
Cj4gIFs8ZmZmZmZmZmY5MDAwMDBkNT5dID8gc3RhcnRfY3B1KzB4NS8weDE0DQo+IElORk86IHJj
dV9zY2hlZCBkZXRlY3RlZCBzdGFsbHMgb24gQ1BVcy90YXNrczogeyAyNn0gKGRldGVjdGVkIGJ5
IDI0LCB0PTYwMDAyDQo+IGppZmZpZXMsIGc9ODQ5OSwgYz04NDk4LCBxPTc2NjQpDQo+IFRhc2sg
ZHVtcCBmb3IgQ1BVIDI2Og0KPiBzd2FwcGVyLzI2ICAgICAgUiAgcnVubmluZyB0YXNrICAgICAg
ICAwICAgICAwICAgICAgMSAweDAwMDAwMDAwDQo+IENhbGwgVHJhY2U6DQo+ICBbPGZmZmZmZmZm
OTAxNjBmNWQ+XSA/IHJjdV9lcXNfZW50ZXJfY29tbW9uLmlzcmEuMzArMHgzZC8weGYwDQo+ICBb
PGZmZmZmZmZmOTA3YjliZGU+XSA/IGRlZmF1bHRfaWRsZSsweDFlLzB4ZDANCj4gIFs8ZmZmZmZm
ZmY5MDAzOTU3MD5dID8gYXJjaF9jcHVfaWRsZSsweDIwLzB4YzANCj4gIFs8ZmZmZmZmZmY5MDEw
ODIwYT5dID8gY3B1X3N0YXJ0dXBfZW50cnkrMHgxNGEvMHgxZTANCj4gIFs8ZmZmZmZmZmY5MDA1
ZDNhNz5dID8gc3RhcnRfc2Vjb25kYXJ5KzB4MWY3LzB4MjcwDQo+ICBbPGZmZmZmZmZmOTAwMDAw
ZDU+XSA/IHN0YXJ0X2NwdSsweDUvMHgxNA0KPiANCj4gVGh1cyBsZWFkaW5nIHRvIENQVSBzdGFs
bHMgYW5kIGEgYnJva2VuIHN5c3RlbSBhcyBhIHJlc3VsdC4NCj4gDQo+IFdvcmthcm91bmQgdGhp
cyBib2d1cyB1c2FnZSBieSBpZ25vcmluZyB0aGUgVkNQVV9TU0hPVFRNUl9mdXR1cmUgaW4NCj4g
dGhlIGh5cGVydmlzb3IuICBPbGQgTGludXggdmVyc2lvbnMgYXJlIHRoZSBvbmx5IG9uZXMga25v
d24gdG8gaGF2ZQ0KPiAod3JvbmdseSkgYXR0ZW1wdGVkIHRvIHVzZSB0aGUgZmxhZywgYW5kIGln
bm9yaW5nIGl0IGlzIGNvbXBhdGlibGUNCj4gd2l0aCB0aGUgYmVoYXZpb3IgZXhwZWN0ZWQgYnkg
YW55IGd1ZXN0cyBzZXR0aW5nIHRoYXQgZmxhZy4NCj4gDQo+IE5vdGUgdGhlIHVzYWdlIG9mIHRo
ZSBmbGFnIGhhcyBiZWVuIHJlbW92ZWQgZnJvbSBMaW51eCBieSBjb21taXQ6DQo+IA0KPiBjMDZi
NmQ3MGZlYjMgeGVuL3g4NjogZG9uJ3QgbG9zZSBldmVudCBpbnRlcnJ1cHRzDQo+IA0KPiBXaGlj
aCBsYW5kZWQgaW4gTGludXggNC43Lg0KPiANCj4gU2lnbmVkLW9mZi1ieTogUm9nZXIgUGF1IE1v
bm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+DQoNCkFja2VkLWJ5OiBIZW5yeSBXYW5nIDxIZW5y
eS5XYW5nQGFybS5jb20+ICMgQ0hBTkdFTE9HDQoNCktpbmQgcmVnYXJkcywNCkhlbnJ5DQo=


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 12:15:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 12:15:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523409.813424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6i4-0005zG-NJ; Wed, 19 Apr 2023 12:14:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523409.813424; Wed, 19 Apr 2023 12:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp6i4-0005z9-Ki; Wed, 19 Apr 2023 12:14:52 +0000
Received: by outflank-mailman (input) for mailman id 523409;
 Wed, 19 Apr 2023 12:14:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp6i3-0005z3-PM
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 12:14:51 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0627.outbound.protection.outlook.com
 [2a01:111:f400:fe02::627])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c7eb98f3-deab-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 14:14:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB7059.eurprd04.prod.outlook.com (2603:10a6:208:192::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Wed, 19 Apr
 2023 12:14:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 12:14:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7eb98f3-deab-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A8fAzLnzEdjpGhjAYtPwBIawNKIqU2ciziDeKIuqITLoyve7rMRCH4pREB9ojWIfHk+XBt45W+nJN/egKNBspLcAiSp/S3Ij3MN9XM8jjYHESD9rfb9SH/LZt/hShH5j0NEWAltPhbuw/yf2bdV1c2RMSsDfzOdhL87TxDa0v5cWPlYRIlkCv4UH2/5zIGCcJ7YsoewkCeK8S5KZF+wE2v4RcMhV4kNWx/67RXG80gnryqOGzinWUJ5RJFzWhXslt3s5OXcFtPR1hXJuXa3bRbIC3oOMzM8I/GXUUZOwxnAhigQ1bWF6/6jPr3knvFV4EBo1qzHlCaxuVV/+QEHUUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qYPWiUehziRNTyCPJNuNqeT2LIGKZjN4b6i91CCECGw=;
 b=aafTOkiThYD2//7ePg1LMTuLKKcXahsDaWyiA2HebVOJINQAQuOiiaGi/u5JNWMuFITRs6fT3bf/xreZSFR9rT9CIpZ3WqBXu+e1fxY0BRgcw48Vq8mm3SFuuP9SCa7Yr++6NfwNT8SJWC8xjwefVVofK07PvEk/ZvKH9hQ6XkTnbisoveLnpi84OhBcmeKP/tvSnOUPiL5xXVlCFP9JuhuP+my3pSN2/k6ymYsPTcTACF+Mv5JHqQZLJjF+LdlYjP81D0uK5SSqiWXjTJPlgjiurkvWCEjPTOcRazcIj7VRGQkSUU1UIwvzA3wdbiXFLxnfhtp5r+qf94ja0UjkSg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qYPWiUehziRNTyCPJNuNqeT2LIGKZjN4b6i91CCECGw=;
 b=F716ODBD0Q5+2IinQfGPNnsPZBwFPEA0gS18fxF2ERwLqwKdf8nVY6lza0wDMtJhq5z22peykNYLIodwWpZOnVb11Wa86wB2HTWNl4cwrv6dDwEyYxNiSNkhGzSoYd21jiPpfqcY1lXAnBSizqC4o6Ofx3HzzTTstJbTkCuDXZ9yiTeUwjaUAQTJRLD9BjsPkMi+SF8O48JqXyoxkNwlotgCJbszojrX/BLpCmuywin9ri9uVk0o+246iLTcS3x1H3b2o3eVQDr34c9vjPYveONrCmgpSCEI+hSx4fAV+SIbPy1+PZsBWu82X9QQMrl3JorNYQZINoguAunwWyeRNA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e27c6431-053f-924e-27c4-763b3c45fdc2@suse.com>
Date: Wed, 19 Apr 2023 14:14:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2] xen/vcpu: ignore VCPU_SSHOTTMR_future
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230419114556.34856-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230419114556.34856-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0010.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB7059:EE_
X-MS-Office365-Filtering-Correlation-Id: 8007b967-6405-47b1-5d1b-08db40cfaa79
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ZxTdalgNrgDSGRRiJV7+KrWzjMclD7w4CGDcGxhzWOKXx6hq2E4sk9l7IFMTCDjaeo8kn+NBZzoF8/GQCYZBEwdbrlHwF5Pkoyugh2nWbpGM99zaoNZSLtxnMFQB492Zt9OBsy+270zVVQAJlkE89aerD8/uYUmOz/rL1wZU00yQBe1e5C8onAkK7vyalUMzY5zYTuEo5cw45PKFjB6+3zjsJbX7VEEjVempViJtvcauP4B+5MlqUF3Brk0TUMf6W4NTt9q6ysFIkK0U8wmAJ+y3waUl41JTcgWhhk87ens/b3HUhicu6qU3W+5YWfU1tNGM6dBKss3evn4Yd0XLAyVVW+xgEooOblhIuiwnvQZsnEm+3LDfqk/m8FbX8EGNtC2NCzBwotUPhPNPF2m55Pa9ZY9uz7Jxc8kHiYcPKFRTwy25sF50BPdi6U7Xv3uL0/uTAxoza0EuCclmwKmLWSb327gZzvMHHTHKxfPAUs7UPlcdS1jNPyMhC8LZVCUsczg9Y1KYG61O6y5K6gs5gDMoyOjJv+GUfedV48RRaDjMM7mkDycfPYyQ7tLr5f2q4s5VuKZDQU4Jce7d1KnE8Y2pGJBCb9SaFZB6qNjwkyXS8M5siO9Vyfri+iuen/L/atCUx761NtKcdGEp2P/WrQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(366004)(376002)(396003)(346002)(39860400002)(451199021)(53546011)(6512007)(6506007)(26005)(186003)(54906003)(478600001)(6486002)(5660300002)(8676002)(36756003)(8936002)(2906002)(66946007)(4326008)(6916009)(316002)(66556008)(86362001)(66476007)(38100700002)(41300700001)(31696002)(31686004)(2616005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NEIweWZwNzZlclY0Q0dMRFFXclF1cUtJUCtlRXFUYW9kalFXV1I0TGpPSjQr?=
 =?utf-8?B?MVM1Q3FWeE0zUDFFWlJRRU8zNkt3VzZpcGxjTy85dDhiTERYSldNbzRkR3gz?=
 =?utf-8?B?NDJ5ZWJkSHRWbXhHWWVENFlhUm9KU0Y3Mzd2Q2huekd4aGlSVEhGK2E0amlv?=
 =?utf-8?B?bFNhN2Q3bXNBcS8xR1ZXaWlVRCs5eDlyUzVhclo0YzJZSnptbGl4aDFOWDNa?=
 =?utf-8?B?MGhSQW91N01qeTdQbjRaVGNOdVFrVEs0S2RZV1BwenBrWXdKOCsrelJQVW1K?=
 =?utf-8?B?WUl4aXNQZ3BTd1c2aWNHKzMrZ1piYlJyaW5aa25iMDE1T3o3d1QzeDVXUzFk?=
 =?utf-8?B?NW5tWHBUNDZFNzlBQ2o4bnpvUFVnRmxrSCtFOFc1djMxcjVHei94eW1MTmd3?=
 =?utf-8?B?eFdnM2E2Q0JXMVB1UnpRRjhEUmluNDNPUG1LVFVDWHpndDBUT3FDM0E5Tk1W?=
 =?utf-8?B?OUFlZW13cVpEVDM5WVVzNzE1eTVDZ2tEYzlHY1dIelhyU041aUxBKzlrL0tH?=
 =?utf-8?B?cys5TE85Q1VGektBVytxRFg4MEpPUFlVNXVCdURsZnRNcFRzbFEzYzNJZ3c2?=
 =?utf-8?B?MTljbDNvbHhVdFZjSU1vSXZWQjg2UE9ObEhvUWVuRlRUVjN0Y2xBUG5DNER6?=
 =?utf-8?B?S3YweEg1VW41MkU0WE9yMndvelJaT2pMQU96RWVjMXRDTmx0a1k4NjJTT24y?=
 =?utf-8?B?dDlVYnRUbmZUTzhQcllJMEpUcUtZTnd3Z3h0Vmxydk45Z1RlMlExdGNFM3J2?=
 =?utf-8?B?U2UwaS9SZjdVR2w4VFdtUnRZSEFVbHNSbytXY2xqZXpad1lYQzU3YnEvSDVM?=
 =?utf-8?B?N1MwNk9nNEdYaTZGa0VJaG1xSjJCMW4wS0tzdDBKbVEyQXlPa1h2Q0xBQVZy?=
 =?utf-8?B?UnRBdTgxSzhoczRkQmRoZU5uQmNRVVVreWRFMFNNbm9mUzlyNk1MekxZTXJv?=
 =?utf-8?B?OGx0dE9nMlRsT0VGL0lEK2trK3dTYkUvTCtUVGlmdmhYQ2xGdTBiend5ZkJ1?=
 =?utf-8?B?NGI4d05raEhuU21sWHBzeXljcFRPQ0RFRzFHZmhvSTFRUGJFRUpzdFFTRTJY?=
 =?utf-8?B?NVgwUUlBL2xBWnlGUDNGS3RIL21rd2h0TFRKV3lDWGt0VWdWNWFjUDY3NWRm?=
 =?utf-8?B?UDNBaHNkZlE4aXBEYzZXUVd3YVJtbHFBTUp6ckdxcE1BYWVaRkptRyt6bnJZ?=
 =?utf-8?B?RXhSNzBESGV4TXJEZDNJMllYR2Z0YTRqcE10ekduL3pEd2JuUm1RUE9CZnox?=
 =?utf-8?B?bGZ4WG4xbkFLaVNJUWx0djBRZTNMdkUxRlZwVm9QUHI5UkdQSzRIemJBRGM2?=
 =?utf-8?B?NC9WRGcyWlBKSU9ET084ZHdoS2d0aDd0ZWE2eWpTaDhGWW1UN1hzUXdlWjJo?=
 =?utf-8?B?Z2M5UE5zd1pvb0MrU2prSFJ6YlozTUd5VE5WYmNkZzhZb0dGbU9PMm9sZ2Rp?=
 =?utf-8?B?WUozU29OckJMVHBjaXdaK2xvVFJmWXFGQzUzNkFHWk5KcWtUOGhqOHZxOTJ0?=
 =?utf-8?B?NEVqdW1qV1RJTFFlcExvUnpmTVN2ZzZDM2V3ZVdUb1NQQncycnF5SmttRUdk?=
 =?utf-8?B?dVN0eVMrTGI1RE94TURVcEFtdFdoYllmcmc4T0xLVDlXZVhNVHNnRGwwa0Nn?=
 =?utf-8?B?V1lSRFF2YUwxeSs4Si9MdUpBODNVbnZUM1d6YXZzMlpuM3Q5OE93aHh6WS9o?=
 =?utf-8?B?UGgva2JwcjFjc1RlSytZdlZMajU0a1pCdFZ6MXlxK1lIa01BOHBOK2hFN29s?=
 =?utf-8?B?bkhYdHhuZW1PSjhlU0ttNERXa2N6LzJxWFF5N1VhcExhTnpvVmMrRUJnRGZh?=
 =?utf-8?B?SnlpbXB6SUhHUlR1cUl1cGNVelBnajMyQ3BFRE80S3FBZkNmTkhqNzUvQU15?=
 =?utf-8?B?T0g1NHpOMklpWCtFbFc3VFh1ajBJbUZPUllvMUNSMitDMVdzWDNreGdHVTVq?=
 =?utf-8?B?V0xSdUlFUW5wZi9zVVcyN2l0cmx6OU84UXBFZGhWaTlUaXdKUDdNOGg4QlVm?=
 =?utf-8?B?bnJMUGRWc1RyS3JBRjc2eGRHZk5OcUk5OXdWVVpBUExxbkJaV1VRTXJUM25D?=
 =?utf-8?B?VEFWaFpOTk5NcE93NStVY1g0Q1RpQ1g4Q04xZVhBNDlTWnYwNEhvcHdvRmd3?=
 =?utf-8?Q?bLgZszVP4Y0pza3BWDBMc60kW?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8007b967-6405-47b1-5d1b-08db40cfaa79
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 12:14:46.7410
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yAX7nKu/KtCbwji+d9IfWj41I8PvaglW4ohli5MERq1leE741i6n+ZK+tNL+08wKvjuaUUD2oHSwfsQD1LRhbg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB7059

On 19.04.2023 13:45, Roger Pau Monne wrote:
> The usage of VCPU_SSHOTTMR_future in Linux prior to 4.7 is bogus.
> When the hypervisor returns -ENOTIME (timeout in the past) Linux keeps

Nit: -ETIME

> retrying to setup the timer with a higher timeout instead of
> self-injecting a timer interrupt.
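
The semantics at stake can be sketched as follows. The flag value matches xen/include/public/vcpu.h; the function, its parameters, and the errno plumbing are hypothetical, a model for illustration rather than the actual hypervisor code:

```c
#include <stdint.h>

#define ETIME 62                        /* errno value behind -ETIME */
#define VCPU_SSHOTTMR_future (1UL << 0) /* from xen/include/public/vcpu.h */

/*
 * Hypothetical model of VCPUOP_set_singleshot_timer handling.  Before the
 * patch, a timeout already in the past is rejected with -ETIME when the
 * flag is set; with the patch, the flag is ignored and the timer is armed
 * regardless (firing immediately for past timeouts).
 */
static int set_singleshot_timer(uint64_t timeout_abs_ns, uint32_t flags,
                                uint64_t now_ns, int flag_ignored)
{
    if (!flag_ignored && (flags & VCPU_SSHOTTMR_future) &&
        timeout_abs_ns < now_ns)
        return -ETIME;  /* old behavior: reject a timeout in the past */

    /* arm the timer; a past timeout simply fires at once */
    return 0;
}
```

Pre-4.7 Linux reacted to the -ETIME case by retrying with an ever larger timeout rather than self-injecting a timer event, which is the retry loop described above.
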
>[...]
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -9,6 +9,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>  ### Changed
>   - Repurpose command line gnttab_max_{maptrack_,}frames options so they don't
>     cap toolstack provided values.
> + - Ignore VCPU_SSHOTTMR_future VCPUOP_set_singleshot_timer flag. The only
> +   known user doesn't use it properly, leading to in-guest breakage.

Might this read a little better as

 - Ignore VCPUOP_set_singleshot_timer's VCPU_SSHOTTMR_future flag. The only
   known user doesn't use it properly, leading to in-guest breakage.

> --- a/xen/include/public/vcpu.h
> +++ b/xen/include/public/vcpu.h
> @@ -150,7 +150,7 @@ typedef struct vcpu_set_singleshot_timer vcpu_set_singleshot_timer_t;
>  DEFINE_XEN_GUEST_HANDLE(vcpu_set_singleshot_timer_t);
>  
>  /* Flags to VCPUOP_set_singleshot_timer. */
> - /* Require the timeout to be in the future (return -ETIME if it's passed). */
> + /* Ignored. */

I think this could do with something like "as of Xen 4.18", as the public
header shouldn't be tied to a specific version (and then perhaps also
retaining the original text). Arguably mentioning a specific version may be
a little odd in case we'd consider backporting this, but something would
imo better be said.

All this said - while I'm likely to ack the final patch, I would still feel
a little uneasy doing so.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 12:38:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 12:38:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523416.813438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp74p-0008U7-Ic; Wed, 19 Apr 2023 12:38:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523416.813438; Wed, 19 Apr 2023 12:38:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp74p-0008U0-Fe; Wed, 19 Apr 2023 12:38:23 +0000
Received: by outflank-mailman (input) for mailman id 523416;
 Wed, 19 Apr 2023 12:38:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dNGQ=AK=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pp74o-0008Tu-Ls
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 12:38:22 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 10f979a1-deaf-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 14:38:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10f979a1-deaf-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681907899;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=TzrbFJX0mLRnVyg6gK50wjW8gAdRTyt3eKsS6bO2Ho8=;
	b=UQOJzBaZIKBAqIIp2iqE9MqaQt4o5jf4X54iENF8PIgb3XVSKHeAToYhSAetn/VDc3XDSc
	vbXK6VM/E7725N29s2FSKaatj7nI1wyCTjA86cgr2JkrX8DxeEun3hjlUqogLtb5s5zfSa
	SywUUUueeIYsZNv4w36kFnEjVOxY/DSPPwkyq+kebzFI7z2gYg0r9CnlmtgFv9TqDyfAlD
	YKid0ziPPcge9cOBT8GWWTc95C+prcMhIbymQfHwn03mi0EsBFOddKVOKY2XMi3tesk7+V
	TiCAx54bzs6ELiLXkmKcxWv4fFnjWSoKaHB0PW1kBv0yCnbUWTFXcy/RdtQ2bA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681907899;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=TzrbFJX0mLRnVyg6gK50wjW8gAdRTyt3eKsS6bO2Ho8=;
	b=G/PDlG02+awXAj/Znv2ir94TrCNuE4soVJYG72z/i6F9KUyMxTR0qJUk/z9cfxNZH98BUp
	FAdnV9eglXscn1DA==
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom
 Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <87a5z443g2.ffs@tglx>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
 <87r0sh4m7a.ffs@tglx> <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
 <87a5z443g2.ffs@tglx>
Date: Wed, 19 Apr 2023 14:38:18 +0200
Message-ID: <877cu83v45.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Wed, Apr 19 2023 at 11:38, Thomas Gleixner wrote:
> On Tue, Apr 18 2023 at 22:10, Paul Menzel wrote:
>> Am 18.04.23 um 10:40 schrieb Thomas Gleixner:
>>> Can you please provide the output of cpuid?
>>
>> Of course. Here the top, and the whole output is attached.
>
> Thanks for the data. Can you please apply the debug patch below and
> provide the dmesg output? Just the line which is added by the patch is
> enough. You can boot with cpuhp.parallel=off so you don't have to wait
> for 10 seconds.

Borislav found a machine which also refuses to boot. It turns out
the debug patch was spot on:

[    0.462724] .... node  #0, CPUs:      #1
[    0.462731] smpboot: Kicking AP alive: 17
[    0.465723]  #2
[    0.465732] smpboot: Kicking AP alive: 18
[    0.467641]  #3
[    0.467641] smpboot: Kicking AP alive: 19

So the kernel gets APICID 17, 18, 19 from ACPI, but CPUID leaf 0x1
ebx[31:24], which is the initial APICID, has:

CPU1		0x01
CPU2		0x02
CPU3		0x03

Which means the APICID to Linux CPU number lookup based on CPUID 0x01
fails for all of them and stops them dead in the low level startup code.

IOW, the BIOS assigns random numbers to the AP APICs for whatever
raisins, which leaves the parallel startup low level code up a creek
without a paddle, except for actually reading the APICID back from the
APIC. *SHUDDER*

I'm leaning towards disabling the CPUID leaf 0x01 based discovery and be
done with it.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 12:42:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 12:42:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523421.813447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp78w-0001Wz-6n; Wed, 19 Apr 2023 12:42:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523421.813447; Wed, 19 Apr 2023 12:42:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp78w-0001Ws-4J; Wed, 19 Apr 2023 12:42:38 +0000
Received: by outflank-mailman (input) for mailman id 523421;
 Wed, 19 Apr 2023 12:42:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K18G=AK=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pp78v-0001Wm-5V
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 12:42:37 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a8bfadd7-deaf-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 14:42:35 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id C5EF221986;
 Wed, 19 Apr 2023 12:42:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 96AF71390E;
 Wed, 19 Apr 2023 12:42:34 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id /whYI7rhP2TMEAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 19 Apr 2023 12:42:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8bfadd7-deaf-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1681908154; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=5+NYry55bNWgpBMPW2QTjRl5RzMdlcbH8Fpx7gA6h+E=;
	b=S7cL/Zv6O7o/O5YvaY7zPFnZHN5czWMiUYbSqVHjpUMTpgUn6QL/mEN8LsH/c/VV9YLnly
	FMl+q2OxNdlXZL4PAkheMZInKTcGLd8siomNAruEcdWviqDSQ1BIrMZSaWQGtO2tIFjMYk
	nxEdvxcXaohUr/u4Vj6tQYxanPrVzvU=
Message-ID: <99abd9a7-87cd-2f37-05bd-f4cdffd47a9c@suse.com>
Date: Wed, 19 Apr 2023 14:42:34 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2] tools/xenstore/xenstored_control.c: correctly print
 time_t
Content-Language: en-US
To: Alexander Kanavin <alex@linutronix.de>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230419120710.855128-1-alex@linutronix.de>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230419120710.855128-1-alex@linutronix.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------08iSab8KZaYOdHQsdzRTyQAc"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------08iSab8KZaYOdHQsdzRTyQAc
Content-Type: multipart/mixed; boundary="------------Kv070jtSObHpcN0gkewur5bV";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Alexander Kanavin <alex@linutronix.de>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <99abd9a7-87cd-2f37-05bd-f4cdffd47a9c@suse.com>
Subject: Re: [PATCH v2] tools/xenstore/xenstored_control.c: correctly print
 time_t
References: <20230419120710.855128-1-alex@linutronix.de>
In-Reply-To: <20230419120710.855128-1-alex@linutronix.de>

--------------Kv070jtSObHpcN0gkewur5bV
Content-Type: multipart/mixed; boundary="------------97XNhJeuIboOGXtWz7fcczD4"

--------------97XNhJeuIboOGXtWz7fcczD4
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 19.04.23 14:07, Alexander Kanavin wrote:
> On 32 bit systems with 64 bit time_t (hello, Y2038 problem),
> the following error occurs otherwise:
>
> | xenstored_control.c: In function 'lu_reject_reason':
> | xenstored_control.c:646:70: error: format '%ld' expects argument of type 'long int', but argument 5 has type 'time_t' {aka 'long long int'} [-Werror=format=]
>
> Signed-off-by: Alexander Kanavin <alex@linutronix.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------97XNhJeuIboOGXtWz7fcczD4
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------97XNhJeuIboOGXtWz7fcczD4--

--------------Kv070jtSObHpcN0gkewur5bV--

--------------08iSab8KZaYOdHQsdzRTyQAc
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmQ/4boFAwAAAAAACgkQsN6d1ii/Ey9E
CQf+MNqY8qwH7szIksdPBiWVxwjdG+Rll6WKITATZMu/DFtsz4pD/TbQgyspoHyOscrdO+qS1Oh1
rJUhwF//Unp5FuQGRyytwIF0VargnDtuua7U9J9pDBhJ08L/PyqIDTi+180T+/t5ZH15wk38SNV4
xHfbAClSl1CRVKYcflnnzCBpxxUvnlGqRJ8rvvuudvHnNcUTF3mxd+0asO+wAwyP3NORH8rqbGyu
z0qr7yLzZDPntiRrtwsjKRLt4KOtCDiPOUg8slwM17BocndfhKDkAw0TkG5n6zBbZkCdNZUsYeW5
9GZ5UHGaehOBioT4E1AsLrBHtGrL1qbTngQMtK6kbg==
=DBx9
-----END PGP SIGNATURE-----

--------------08iSab8KZaYOdHQsdzRTyQAc--


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 12:48:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 12:48:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523426.813458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7EY-0002AN-RM; Wed, 19 Apr 2023 12:48:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523426.813458; Wed, 19 Apr 2023 12:48:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7EY-0002AG-OR; Wed, 19 Apr 2023 12:48:26 +0000
Received: by outflank-mailman (input) for mailman id 523426;
 Wed, 19 Apr 2023 12:48:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp7EX-0002A6-1h; Wed, 19 Apr 2023 12:48:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp7EW-0004LP-QS; Wed, 19 Apr 2023 12:48:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp7EW-0007bt-IA; Wed, 19 Apr 2023 12:48:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pp7EW-00019q-Hi; Wed, 19 Apr 2023 12:48:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NgqvswyJ/R3SbDTvB6ArLFnBtyc4zkRPHKz+TcGk/PY=; b=LGKN5h77RUXXUFN68p6XnfMBk+
	45n78MwZrJmPRWvPze3j+BY+1PQgArjqu3TW2XHJLXiE1MsxMxT6Tx6SvYCyzsHC0d4iBJV7vtUMy
	eb8uqlR/m08QrpGMeRE4T1SIM7mEOaL69ntO8GTOboBIbyteFQ44dULzoABGKt9/uWa0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180314-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180314: trouble: broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:<job status>:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:hosts-allocate:broken:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=569632a5832c02bd84790e0411940b8d3150fa17
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Apr 2023 12:48:24 +0000

flight 180314 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180314/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl           3 hosts-allocate         broken REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  569632a5832c02bd84790e0411940b8d3150fa17
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    0 days
Testing same since   180314  2023-04-19 10:00:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl broken
broken-step test-armhf-armhf-xl hosts-allocate

Not pushing.

------------------------------------------------------------
commit 569632a5832c02bd84790e0411940b8d3150fa17
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed Apr 19 11:03:30 2023 +0200

    CHANGELOG: add gnttab_max_{maptrack_,}frames option changes
    
    Note in the changelog that the purpose of
    gnttab_max_{maptrack_,}frames command line options has been changed.
    
    Fixes: b2ea81d2b935 ('xen/grants: repurpose command line max options')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Henry Wang <Henry.Wang@arm.com>

commit 768846690d64bc730c1a1123e8de3af731bb2eb3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:02:47 2023 +0200

    x86: fix build with old gcc after CPU policy changes
    
    Old gcc won't cope with initializers involving unnamed struct/union
    fields.
    
    Fixes: 441b1b2a50ea ("x86/emul: Switch x86_emulate_ctxt to cpu_policy")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 741599fa521fbbb4cf71a98d7ec22ba5f4671cfa
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:01:29 2023 +0200

    x86: cpu{id,}_policy_updated() can be static
    
    The function merely needs moving earlier in the file to avoid the need
    for a forward declaration. While moving it, also rename it following the
    recent folding of CPUID and MSR policies.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 224211c55bdded74c5a65f5a7e34281a8c5c56f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:00:19 2023 +0200

    tests/cpu-policy: fix "run" goal
    
    An earlier change converted TARGET-y to TARGETS, but failed to replace
    all references. Convert run's dependency, but use $< in the command to
    avoid the leading blank that += inserts.
    
    Fixes: 6a9f5477637a ("tests/cpu-policy: Rework Makefile")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 12:55:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 12:55:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523432.813468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7L7-0003cR-KV; Wed, 19 Apr 2023 12:55:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523432.813468; Wed, 19 Apr 2023 12:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7L7-0003cK-F4; Wed, 19 Apr 2023 12:55:13 +0000
Received: by outflank-mailman (input) for mailman id 523432;
 Wed, 19 Apr 2023 12:55:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K18G=AK=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pp7L5-0003bQ-Tz
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 12:55:11 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6a5eef20-deb1-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 14:55:09 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 108C21FD8C;
 Wed, 19 Apr 2023 12:55:09 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id DF9921390E;
 Wed, 19 Apr 2023 12:55:08 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id /t3NNKzkP2RUGAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 19 Apr 2023 12:55:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a5eef20-deb1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1681908909; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Yv4ELE92aFYt2QdmV0+wT68rjfxHkC6qcTXsnlMZF+8=;
	b=sRcZ+SaE1YSe+dB96HiaGgAbP9xRGBuM3/1JtJUaGgDlr7hHc0RfcbkE00ltyS91uIq4QB
	PXdqXT1skRONE7gn5k3eB0n1DGhgEi5NIQhbfgoYBuL5V9bmnHNOv0wPLWeOxYnBOVtpqu
	yN7Z5hR77eqPzJKJLRN1rUqiWmp5NrU=
Message-ID: <0fa11dd4-da6f-df95-bff7-dc4a80553b01@suse.com>
Date: Wed, 19 Apr 2023 14:55:08 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v1] tools/libs/guest: assist gcc13's realloc analyzer
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230419100633.13047-1-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230419100633.13047-1-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------xYCdlYbqKLWLzjsumfVXOtKv"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------xYCdlYbqKLWLzjsumfVXOtKv
Content-Type: multipart/mixed; boundary="------------7DXRDtF2RGCrUM6MolD2vxQB";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <0fa11dd4-da6f-df95-bff7-dc4a80553b01@suse.com>
Subject: Re: [PATCH v1] tools/libs/guest: assist gcc13's realloc analyzer
References: <20230419100633.13047-1-olaf@aepfle.de>
In-Reply-To: <20230419100633.13047-1-olaf@aepfle.de>

--------------7DXRDtF2RGCrUM6MolD2vxQB
Content-Type: multipart/mixed; boundary="------------GwQcupx21pbiuN66Cfh6D5c5"

--------------GwQcupx21pbiuN66Cfh6D5c5
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 19.04.23 12:06, Olaf Hering wrote:
> gcc13 fails to track the allocated memory in backup_ptes:
>
> xg_offline_page.c: In function 'backup_ptes':
> xg_offline_page.c:191:13: error: pointer 'orig' may be used after 'realloc' [-Werror=use-after-free]
>    191 |             free(orig);
>
> Assist the analyzer by slightly rearranging the code:
> In case realloc succeeds, the previous allocation is either extended
> or released internally. In case realloc fails, the previous allocation
> is left unchanged. Return an error in this case, the caller will
> release the currently allocated memory in its error path.
>
> http://bugzilla.suse.com/show_bug.cgi?id=1210570
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------GwQcupx21pbiuN66Cfh6D5c5
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------GwQcupx21pbiuN66Cfh6D5c5--

--------------7DXRDtF2RGCrUM6MolD2vxQB--

--------------xYCdlYbqKLWLzjsumfVXOtKv
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmQ/5KwFAwAAAAAACgkQsN6d1ii/Ey/6
UQf/RRQwx+Hn90kvjYfmeP7SLbTx8TmPJQTPH8wEgI/SNqqoCdilfg3O5XNZMZ2JLNbiWabL3guu
m4YqC5H6POtvI/COskpRczr6qEEOW2NXWqvjVfNU2lXhFX8/kO/Bj2YxT/yNUbEDLIWV4zd1+H50
C7KNJDSmY2M9yQsndN1kN5XBcZ9RcKdRv5cVq7HBqstgi/Svsn+OUOGA6WpcyjQfUPadnM6vBua8
db7Ios+q9TqHUibyA7mD8kjVvzcBSp7LgCro8dTcb90TNBdsbeMU55fuuZHAsnNhqBv2wIbvuGIf
kcw8V/SbHcb//sMP9xWx5nG2PvzQil0MT+igXGRFeg==
=J0pG
-----END PGP SIGNATURE-----

--------------xYCdlYbqKLWLzjsumfVXOtKv--


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:08:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:08:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523436.813478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7Y1-0005F2-Ok; Wed, 19 Apr 2023 13:08:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523436.813478; Wed, 19 Apr 2023 13:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7Y1-0005Ev-Lx; Wed, 19 Apr 2023 13:08:33 +0000
Received: by outflank-mailman (input) for mailman id 523436;
 Wed, 19 Apr 2023 13:08:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp7Y0-0005Ej-Dj; Wed, 19 Apr 2023 13:08:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp7Y0-0004qm-1W; Wed, 19 Apr 2023 13:08:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp7Xz-0000BW-Je; Wed, 19 Apr 2023 13:08:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pp7Xz-0000Lc-J9; Wed, 19 Apr 2023 13:08:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rH9pqrSpUKf+WyTRY6w8vMgiDOvxKSrB/6+TATXT3As=; b=j/ZX2pTGbp2GRJesN07OqvHgI2
	gpkMBy37BYXBZPEGWf3svPRTdMpeuQN6L2Sxcw90LMz8O4KgAbYnK+aTr0QiWbkl5QCvXD2R/qqDe
	xrSBzjeuxdeBa5YE5K5WKdhRdKiLXT+1LsyOETa0iyadoTDBapT9G/aGpso/yDSZ+8pc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180308-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180308: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=b486430db34d0db1dcbf39b0d9840d03cd57f615
X-Osstest-Versions-That:
    libvirt=edd604a6723522f4c17a38ff2df35460208b0c81
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Apr 2023 13:08:31 +0000

flight 180308 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180308/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180293
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180293
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180293
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 libvirt              b486430db34d0db1dcbf39b0d9840d03cd57f615
baseline version:
 libvirt              edd604a6723522f4c17a38ff2df35460208b0c81

Last test of basis   180293  2023-04-18 04:20:14 Z    1 days
Testing same since   180308  2023-04-19 04:18:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Jim Fehlig <jfehlig@suse.com>
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Borecki <pavel.borecki@gmail.com>
  Temuri Doghonadze <temuri.doghonadze@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   edd604a672..b486430db3  b486430db34d0db1dcbf39b0d9840d03cd57f615 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:16:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:16:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523444.813488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7fn-0006mu-Md; Wed, 19 Apr 2023 13:16:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523444.813488; Wed, 19 Apr 2023 13:16:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7fn-0006mn-Jr; Wed, 19 Apr 2023 13:16:35 +0000
Received: by outflank-mailman (input) for mailman id 523444;
 Wed, 19 Apr 2023 13:16:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDf1=AK=citrix.com=prvs=466cd93b2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pp7fm-0006mh-Gw
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 13:16:34 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 652c6cbd-deb4-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 15:16:31 +0200 (CEST)
Received: from mail-mw2nam12lp2049.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.49])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 09:16:27 -0400
Received: from BN7PR03MB3618.namprd03.prod.outlook.com (2603:10b6:406:c3::27)
 by MN2PR03MB5295.namprd03.prod.outlook.com (2603:10b6:208:1e7::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 13:16:21 +0000
Received: from BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::b6eb:c7db:7393:52b3]) by BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::b6eb:c7db:7393:52b3%7]) with mapi id 15.20.6319.022; Wed, 19 Apr 2023
 13:16:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 652c6cbd-deb4-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681910191;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=1YdBZKWa+jw6Bg2n6ohQ0OAf47Qgi9VMmp8+aX0fASo=;
  b=HNVSSYx10zHh3992xfQGunqI5u5rHrUgD+3IXryh1gLHjkhXTQyL6p+E
   AQfifmwlYs7Rq2ZsftGhqUx/Th6OH0e35LwD0YbTUswienhenhqmSloqB
   8lRrwlGN7aZ3BlfUgjb3x9z1Nrl1P4Lj4BA/K8eMBqvYU5tm0XXck6xLE
   s=;
X-IronPort-RemoteIP: 104.47.66.49
X-IronPort-MID: 106134305
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:MikOraLRHm4gq5TVFE+R8JQlxSXFcZb7ZxGr2PjKsXjdYENShjFTm
 2saD2+POayMYDTyf94kPo+x/BgGscLVztY2SlZlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPSwP9TlK6q4mhA4gVvPakjUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5cXT1o7
 /8UOQsHMAmint/v7ou6e/dj05FLwMnDZOvzu1lG5BSAV7MDfsqGRK/Ho9hFwD03m8ZCW+7EY
 NYUYiZuaxKGZABTPlAQC9Q1m+LAanvXKmUE7g7K4/RppTSCpOBy+OGF3N79U9qGX8hK2G2fo
 XrL5T/RCRAGLt2PjzGC9xpAg8eWxX2iA95JTODQGvhCpQOq7FIWWDwvVgWmhKHltEW5fM5uN
 BlBksYphe1onKCxdfH6WxS2iHeJphAYVpxcHoUS+AyLj6bZ/QudLmwFVSJaLswrstcsQj4n3
 UPPmMnmbRRwtJWFRHTb8a2bxQ5eIgAQJG4GICobFw0M5oC5pJlp1k6eCNF+DKSyk9v5Xynqx
 CyHpzQ/gLNVitMX06K8/hbMhDfESoX1czPZLz7/BgqNhj6Vrqb5D2B0wTA3Ncp9Ebs=
IronPort-HdrOrdr: A9a23:GlJIFa4jDlcCu5jAcAPXwOPXdLJyesId70hD6qkRc20xTiX8ra
 rCoB1173PJYVoqN03I4OrwQZVoIkmsl6Kdg7NwAV7KZmCPhILPFu9fBODZsl7d8kPFl9K14p
 0QF5SWWOeaMbGjt7eA3OBjKadH/DBbytHOuQ4D9QYUcei1UdAb0ztE
X-Talos-CUID: 9a23:AdzsJmN0zE3uF+5Dc3U70lQXFtEeeCPE7W7temGBCFd3R+jA
X-Talos-MUID: =?us-ascii?q?9a23=3A3k7aSg8LDROvntkLXbgIZhWQf+I22o6QDRAsqJZ?=
 =?us-ascii?q?YpuzcaRJ7PBCBvSviFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,208,1677560400"; 
   d="scan'208";a="106134305"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mYtVXrDM5PFcjrfOtlTHMaigpIv+226ZjjARtgFkquIFla46UOklPzJDGxnXXI3+cBLQ/njpjrcHVSYJ7lDP3ngWqXKsiAAMvFu+rEsLURHGJoaOW/KDjbVDnJ3aFFt/IK/hwnfoMy3bHnMmDrGwIiDkEwD3gm9fQqb6rVquVs1xUF9Q3/ZV3kS//DKvl8qQZOSvKDEesL/HfeFZDhtwPw5yaYlGswagTASYGiskJ4IyhKYEYWSJuSXnQLGedw3E8p4B5fHrGR0KnrnrvCDRGMEFK3SxP2dcNsJUFq2FJPUm9gb9O+KEcsxmBUdHFMCeUFfW8GM6FF9y6HCo58+QQA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=1YdBZKWa+jw6Bg2n6ohQ0OAf47Qgi9VMmp8+aX0fASo=;
 b=FPTZixnyhp7UzAfXwiBCnmapWhatiJSyOqK9omE8kUBMIbU+NPAuLQnVyk6iw+eqdVkZjBG5Gof1rXKHBvB18aLeJRdfhOEndwnYzOtipsmSIY5wGh133B0g6EjG91UW4CzwLQpXblgsvLwZqmuq7t0oL0/OGe1Pn4ZRlSRa6IFFbBbgcmm0kzfyfVpyJkspq4pHTpOj+e6LHPFf+r0B/6y8Sq4GVgPVOzlYf9AZ3JfwMSotJHtg9V/bu8GcE5Krp4bc5ba8Jbqo3B7xTUQx7217gexm6UhDHKDXYbItqmJiDe5vpDa2BHoXoXA6TCG5MYH6/z91dlcNkUmCATK5lQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1YdBZKWa+jw6Bg2n6ohQ0OAf47Qgi9VMmp8+aX0fASo=;
 b=toUliGOnGfA2OhzlE07KQzeG3dFQ8NCuAZrYeJF+Y+GgZPbUxHYIDXWxZuoyTU8eCqTPieKSI3oj8yyFZ2s2BS9T4hbiYhQs8q41YWjyrpiDLcVU6sa1fUnnwLdXJ0sQoXCKPZ0YKpIfk61R/TvUUB3xO6RC5pJYaMi9kVwK88Q=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <0293e96a-3eed-5624-6580-226bbb5c03df@citrix.com>
Date: Wed, 19 Apr 2023 14:16:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v1] tools: ipxe: update for fixing build with GCC12
Content-Language: en-GB
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230419110026.25429-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230419110026.25429-1-olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO3P265CA0016.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:bb::21) To BN7PR03MB3618.namprd03.prod.outlook.com
 (2603:10b6:406:c3::27)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN7PR03MB3618:EE_|MN2PR03MB5295:EE_
X-MS-Office365-Filtering-Correlation-Id: 3179ef82-a3cf-49f7-0331-08db40d8445c
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+nuX9986eAI8nYlOakP23JpzlRp/pifLee8PGtnvIDNpjYzN1bv2Cj56j+g17aRj2DrRgTdHwN/emwho4Avfwyk2WRems6ImH3xpmEoDQ9yx5+ZPmTPuY0mUT8YlqntTbz4F0vn1zzr8kTHXmtXavPT6nrx/RMFiLRVR37D+MMwFo/8LfjoOe3vjQNh5idqzdELJDwTknnA2fnmX42ykHTAC9XGVRZApZQHJ2n2IYq3bsUwC9b3/8R/LEr+4UDBmMaZCUo4jtF9rSf0+2Pew66wATzcTZOQ4bC8WjAdVEcDRR1EqdZVXpwjeq3wWkfK59vK577U1YZVEuZjEhdqOX7WCmo7SmS360ptmab8SoJnlkpygTqY/Q0mME4FC8FSKF1WowdqCsuxHYcv45Tgokhjg+UKGBBIR320firCzFBvZQWvnQeK1x284H4qZsTxloH7NFkyHdI2EnONs8M0pCZINE0EWz+9l6jhbaglm4AlMT8nipBb/frMS65VBZGagZylxftcj8HXNgMoJIISAAl5SiS0zpwrlrlg8YZ6GflAeV0P9IH5Pgfc6r8JqTl0Pe1VVf6e4xNlAPGBQkiVh+/Oj63An5L5ZYk9QCfp6TYiA/6CTe0cfE6lq8tQgKNlIWfNT2IDJLnUFBLWFnOLulQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BN7PR03MB3618.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(346002)(396003)(376002)(136003)(39860400002)(451199021)(4744005)(2906002)(8936002)(38100700002)(8676002)(5660300002)(36756003)(41300700001)(86362001)(31696002)(6486002)(6666004)(107886003)(53546011)(6512007)(6506007)(26005)(54906003)(478600001)(2616005)(31686004)(186003)(316002)(82960400001)(66946007)(4326008)(66476007)(66556008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RWttMUJYWldKSGMzcy9JR0trRTd5VnpsdHdRWWd2cExXZk5wOHZkQnRiaXZD?=
 =?utf-8?B?a0I2RDZaVldtNDBUOVM0dnRFNDVnZUlKQUdiNkpQZm5OTnJ6dDhLRHQyY1BP?=
 =?utf-8?B?U1UzbzNMZG9Rb2ZVU29sd3pGUDBxZE54VkU0cFZ6azNneUR1MWNkVy80ZnhY?=
 =?utf-8?B?bnNwV01zMnhVNm1XZVB5RDRjS0VpVUNlQ3pKdU9RczErckNBcENGckcrUi9k?=
 =?utf-8?B?dFdqbmdvRU9relZoSE1RemJ0UG5aNXFiVjdaNDVhRVd3WFRQVWlLYmVOcGpE?=
 =?utf-8?B?RGd4VDlONWUxYU13eHpyZkhmYzdaSDY3c25IbkZ6VUc5MVFJVkdpYUxNT0sx?=
 =?utf-8?B?VjNRczc5SzVhcy80YzVBamREWGhuZkt1T1VpTnVUWVRQL2l1MlZMSEJIbyt5?=
 =?utf-8?B?NG81L1VVNDUrQ1J3S20xN2VHMHZxbTRqQjR1UVVSMWJCL0V3RThNWlAxc3lk?=
 =?utf-8?B?REhkY0xoaHlaVXN0b1J2OGtyZTdpN00vbHJnNnRVNEl2ZlBJTGJxaFRaR0Rq?=
 =?utf-8?B?R3BDbTRuRkhLQkJjblVWTFB4a2txaXU0OUNLNHlxV29HaWJXbUsrMllEbFlE?=
 =?utf-8?B?OEFWRHJmc0UwSmk5NkFsWXBhdktXQzV6S1luM0lxalhidHpIWVFUNFg5Y20r?=
 =?utf-8?B?aWRJMGtDR2Zib1paT1pKejU3eEwxNzlwK0RkT1lRaXFoVmhValFzKzFzQUR1?=
 =?utf-8?B?d3VlY25mdEZIdlNRb1FKTUEyaEh0dlkxOEtTWk9aaytvaUJFU1hRdUFFVXQw?=
 =?utf-8?B?WHlBRVIzakg2RUU4V2QwNXJGL25zeU5RTHA4SlhaUjVUN1RzWXBGQXh4Z2xL?=
 =?utf-8?B?ZXd4TXNGMzltMHZzQ1FKZG10SjJ2WUNhbGduUHNIYkZSVGRDVVBzZGl5dXA3?=
 =?utf-8?B?TGFzU3RhNEJuRkpVanorSkp5bEFwcDBYdEhXakR4TWxOS0tKMWVXYWltZkVr?=
 =?utf-8?B?dEZJbHJ6L0VQeE8rd09tY1pkcnpjNFNnUGdwNytNeVd3SjZQVVVDSEJQaUEw?=
 =?utf-8?B?N3RNcVg1L2oxeFdmTkFFOWx6ZzdiNDBpc0pWMU5aMXo5YUczUFpJY3MwRDVV?=
 =?utf-8?B?Vzc5N21NWi9oQm5mREhqZ0VXY2F2Sk4wWTlwaWVrcFFMVlU2cFVQOXJRd1lv?=
 =?utf-8?B?QUhzRWZaUVJ0RUlId2xHVWJFaUY0T1RKMUdoSU4wQkREWGpxbDV1YVZZMnVW?=
 =?utf-8?B?bTFvTzkzT1drTDZYZnkwcE9MMTV6aG9sblF5bm0rUml2ZzNUcWJick5BekNj?=
 =?utf-8?B?SlAxVDdmQ0QwbjE3ajNyVEMweElRNWZDSU15SVoyck8rckFkVlhBSDExUUVr?=
 =?utf-8?B?S1Q3NGVML0tJWTcwZlJVcGVlQ1NmcmlWeDBCREx5YngyWnJNN3l0dXQ2ZXI0?=
 =?utf-8?B?V3BCVlBiVG16L3p6Q3NtY2ZnQU9qdHNwN1Q5MUtKZGkwSFhvVEk0Y1pIaE00?=
 =?utf-8?B?bE5qZVdDRU5UUktGMmkwQ1NoNWJNU0MxTjU0ajZKdkxSR0hyb2hzYmh3YkpY?=
 =?utf-8?B?TmxwZnZhakJQVmtLOW9mdWd6NUxyUjdsSitKcFlDTDZvWnU2MWx0ZXBQRnRL?=
 =?utf-8?B?WDJUcysxNVI2bjlXSEhEWG9GdXhrNDJyaWg1VnRJSWVJYzVHWlNzMkxld0pW?=
 =?utf-8?B?Q1dnMGNBRy9JZW5iY2sxeTJLek1XV3BQVXd3NUFzTDAwc0UvQ3llTVpTYUpl?=
 =?utf-8?B?Qm9SZ25lbWwzd0d2SXhZWjZmZUMwOXFQbE1BcmpzMXNHa0hud2VDSWJZSGhM?=
 =?utf-8?B?ZnRtOEZNbHQ5T2JxdFQwWW11WG9Sa3p0MUx0VTdkOWMxQlJoMHFyTGJGNDZ0?=
 =?utf-8?B?ZFRnYWZUN1RiV05tSjhmby9WTkFSc3BhcjZNaWwydkxDckRRQ0Npa0p4L20z?=
 =?utf-8?B?UDFLNGtWcitydlJIRGdMZ1dtdWI5OHRmbXpTRldMUHNwL05zd3hlVExxdGQ2?=
 =?utf-8?B?UGhLalBZUU5zR1ZpVTFxWkNLMG9aQnBLMkh2WW1xdTVpa2hOYm1pV3Z1NnJk?=
 =?utf-8?B?blA5N0VxYjI4N0ZCalREdTZQdTBBcWg0MEswWG1KUUx3MEQrbWNsUnlnWEFw?=
 =?utf-8?B?S0dLamRMV2k5eW1PRHdJaCtrcWtFQldwRlpRdmhVc3lPdkt0cVVKSTR6YXM3?=
 =?utf-8?B?VHYrTzE1WVFKbjlQZzBxdkxiN2NuanRBRWR3Y0RnUmFSelczQTFEQjJTaVg0?=
 =?utf-8?B?U0E9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	A3+AX/ijRDUe/i9Gb6zEGft/QuAM7TmAyJJ2gj4hs5WIMfSZ5SbKd/hNkT8XGfiIT0afU+6GjMfPsuPAA+BW3cWNJYkE694FMwkOqwNk0RCuiLu741XGp676APTG9cKpuGzityNfqqR7rter6lhrdI5ll52RyOp0HnmhnvImu+Mkoif0oJZ06768cXZ67KVXovvVelO8lQFhxH0kYIS9Vq1vTHffr3IumWH6w0+/Q+ZkorzgsGrh8r5xucXP/aCo5WhQE21q+cbfMwA1/cxxZpqsjK4ai3lZPb99EcsC94nTMcYtYeGr5GDZP/yPz7IdH8TEHIogyZa3A8wxcGTQte32+P3sTXrOFfUzch6fsFtlLJKNHUjKp95mt10lrakajaRKGgSQTn2B1egDWYiZpHyKsXl8T0AI0jkQFS9DnWG96TwN6wXFtVZOHJQi0dEAum9aHTyDxLvFQuD7dOfBnrAf+m45eRpj2t2gO51TvQWcQr3Yf32+9xm6NegMWiQbAMfhxcx50Jc8mm25mGusy7SwbGPGJoGVEYWST96e0ze+im7bkSRWc8/ucfE4fLbxAYxDWbc5urCPrYLUTuAXsYygUPTA46w6L6kasot+w47/ffCqGh2dlURFa8P6LO+tJhxDkvJGK44E009OdHkwzhmnHXQLzIUUwDtaYVXAbvS/aCC2WeAoj3QL2kluwHoQR/XHdUaTgvodW6zp+sQRg8jFtXkIqlyEbfEwbp9Ud4Mpplqtg/UR7iC33jC/vO5i5xWgOJ43bLtDRK8PyEFpgjqMP9OXUhxZq8EFp/Me92D6ALOlk4YPkWWLy/YdaEfzDsE7622fYOjtBgF2BB1psw==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3179ef82-a3cf-49f7-0331-08db40d8445c
X-MS-Exchange-CrossTenant-AuthSource: BN7PR03MB3618.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 13:16:21.0886
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ticP/DxnxMLi+Hq5QsuxxghV21fiWOr06l4fjGO2C1DwZcWOQrPctr1cQ7ARbefVCjHBni+tYgeNcj2xrjZ6ZD1VmbveMps7KOWTm6op/g8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5295

On 19/04/2023 12:00 pm, Olaf Hering wrote:
> Use a snapshot which includes commit
> b0ded89e917b48b73097d3b8b88dfa3afb264ed0 ("[build] Disable dangling
> pointer checking for GCC"), which fixes build with gcc12.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

But the usual note to whoever commits this: a tarball needs to end
up on xenbits, or OSSTest will explode.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:19:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:19:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523448.813498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7iq-0007MK-5e; Wed, 19 Apr 2023 13:19:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523448.813498; Wed, 19 Apr 2023 13:19:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7iq-0007M9-2l; Wed, 19 Apr 2023 13:19:44 +0000
Received: by outflank-mailman (input) for mailman id 523448;
 Wed, 19 Apr 2023 13:19:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ifGd=AK=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pp7ip-0007M3-33
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 13:19:43 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on20602.outbound.protection.outlook.com
 [2a01:111:f400:7e8b::602])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d5f3f86e-deb4-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 15:19:39 +0200 (CEST)
Received: from MW4PR04CA0272.namprd04.prod.outlook.com (2603:10b6:303:89::7)
 by CH2PR12MB4168.namprd12.prod.outlook.com (2603:10b6:610:a8::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.20; Wed, 19 Apr
 2023 13:19:35 +0000
Received: from CO1NAM11FT025.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:89:cafe::5f) by MW4PR04CA0272.outlook.office365.com
 (2603:10b6:303:89::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22 via Frontend
 Transport; Wed, 19 Apr 2023 13:19:35 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT025.mail.protection.outlook.com (10.13.175.232) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6298.30 via Frontend Transport; Wed, 19 Apr 2023 13:19:35 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 19 Apr
 2023 08:19:33 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 19 Apr
 2023 08:19:33 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 19 Apr 2023 08:19:31 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5f3f86e-deb4-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YaHyQs0nWGbgGyzizOjDs1QofaAdJIhy25ojJNCZ3Z7k8OHChUbNEhrpW8t5PsEE10vSaniKHVdR6K6VM2fS+ki4VPUMSCXd9wWRWBLr76kKapq8bwpUFVG1NG0d0AUtkgVIXMFBwQWIiNI/ZzJshxm+EibZ8oj1Xr5b+DJMojR8TB5pibQRpanliRGnU2YJ5cm3Z8HzI5dHUZ72uGVh94Rqpjzpc87ScdFj2BPKwjjJBb3GJtOBs8SUIMWind9G3bHTlJYBQ8VPsbvWsL3UhUs4St9X+3yhQTJoEQBLqPy51nuohwVWVUvO4f71KwLxAHXB4rOGP5KHJlQ6376vmQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Rt6oCy42EVeybJ8eQSutycI+l984cEPoJ+XxXBYIffc=;
 b=GvFWisdvgTlkC/cP5c31w2cBP2h0xYZDPUdXiFcWg9gWZos+j6AolQyYeWLkW+lea7HoGUQEzIGEAYog8yoPUsdtZfkjTZdaM2SlR3G/bbg9wHuSu3sNpB8x6vqS0cU2O6zf/anVO5ldBkJLmyZLQtmwtW/p/tH7kvMuWIYf4Yrvw5XPdJcEO5b7wMs392+5MxQVoD2ANH+QauYNFgous/r57HLLo6onzxaL4NOhFS7UPHTB9GBywQHlT4nwEoSzkjxUxa21D7ehpipwEldtcGXLAOtYRcDjIa13XomWLIMvSf6bqGfppUvrVesozZ4E8ElZxxXujOB9ldggcye+tA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rt6oCy42EVeybJ8eQSutycI+l984cEPoJ+XxXBYIffc=;
 b=bNmhvdGQURfWMciGxSRx3b02ruc+LX9DK54w7oJUd9Uebn4LAGr9s84Uq8zGoMyh8K/86pWlQ+7YYCDavb1OrxL5ufvxeNOAxZaM6d14Y7lkX38NPs74k3WlOP3meTpG9f195Hy3amXDdMy1whi2QMurG/120pu0im9u8utX/Cc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
Date: Wed, 19 Apr 2023 15:19:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230413173735.48387-3-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT025:EE_|CH2PR12MB4168:EE_
X-MS-Office365-Filtering-Correlation-Id: 4b84ec55-088f-4b3e-0d21-08db40d8b847
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	cmLen4DxFd7na6ZtbWEcni1IN25glXKqBulPYzJRWZoA+ozVsqXBDgcIweW0+6wyyTR3PDo7ObxdrQTLCiEg83+KbgS9JT9PI33SP3IL03/kI5X+5B31Ljyg9jUwaDNqK5ftDf9XYYyT5JOTL1GxBUYj8QXyx1XfdMF9vzfpPiUayEElBV/jjKUseohiLNiqICR4Vc0RW8d4GKYRYPAFGtCSvSPpjDA+ef5Qcao++nPoCkwklXRhxYwYXjq/Kf8ugDKtATttMeNTFIT3Ja44fG78sCYZRNss/hIesPvlzWNl2hP7MbgDfAecz+qVfhUM3KB+NPUcOTzBt+ZsCp1PG0wKqfXis4A9dLX1yj2v1mWG+5LkEiVJk7GTKRQ8GddpEKdUUaJrTWLEruqN7H37StaFmmPTxAUiB3tyRJZXoZS+KtijMXF0HjPYUeLkfC0E6nftEL9aRk8FKSV88Jrc2EyeFIuFaRFhrWKPsAGffWDpI3z+UDy0UzLoxhvN7Sm3P9fumxQtybVYsqdKfnyIjuPwke7ioWHoaH+fucNHxgYrO5X0P3y0EgK2MCMoafhWpOgJ/tZUHA0Qyk5KdYufnys6O9kx17dGxFaO5EwY7qNrSAukkaldDZqKb3OOzcHNQFoX+w3cZ+qkkCy2vSJKICDDLkaVLCGlRmKyQzDLu11iZQFtwWSK1BugU1ppdBt6f5bPEritARGTst6UOB3Hju1/P9KKaWtXc3byMBojnSL/YYNXnpHqk5Zw3Dzp2lpBmRsdUNiWsSPqnCwFR4M0rg==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(39860400002)(346002)(136003)(451199021)(36840700001)(40470700004)(46966006)(478600001)(31696002)(110136005)(86362001)(36860700001)(47076005)(36756003)(2616005)(426003)(336012)(83380400001)(40480700001)(186003)(53546011)(40460700003)(82740400003)(26005)(81166007)(82310400005)(316002)(356005)(70206006)(70586007)(30864003)(2906002)(44832011)(8676002)(5660300002)(8936002)(7416002)(31686004)(41300700001)(4326008)(54906003)(16576012)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 13:19:35.0428
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b84ec55-088f-4b3e-0d21-08db40d8b847
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT025.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4168

Hi Ayan,

On 13/04/2023 19:37, Ayan Kumar Halder wrote:
> 
> 
> The DT functions (dt_read_number(), device_tree_get_reg(), fdt_get_mem_rsv())
> currently accept or return 64-bit values.
> 
> In future when we support 32-bit physical address, these DT functions are
> expected to accept/return 32-bit or 64-bit values (depending on the width of
> physical address). Also, we wish to detect if any truncation has occurred
> (i.e. while parsing 32-bit physical addresses from 64-bit values read from DT).
> 
> device_tree_get_reg() should now be able to return paddr_t. This is invoked by
> various callers to get DT address and size.
> 
> For fdt_get_mem_rsv(), we have introduced a wrapper named
> fdt_get_mem_rsv_paddr() which will invoke fdt_get_mem_rsv() and translate
> uint64_t to paddr_t. The reason being we cannot modify fdt_get_mem_rsv() as it
> has been imported from external source.
> 
> For dt_read_number(), we have also introduced a wrapper named dt_read_paddr()
> dt_read_paddr() to read physical addresses. We chose not to modify the original
> function as it is used in places where it needs to specifically read 64-bit
> values from dt (For e.g. dt_property_read_u64()).
> 
> Xen prints warning when it detects truncation in cases where it is not able to
> return error.
> 
> Also, replaced u32/u64 with uint32_t/uint64_t in the functions touched
> by the code changes.
> 
> Also, initialized variables to fix the warning "-Werror=maybe-uninitialized".
I can see that you now explicitly set to 0 the variables passed to fdt_get_mem_rsv_paddr()
which had not been initialized before being passed to fdt_get_mem_rsv(). Is this what
you are referring to? I cannot reproduce the warning, hence my question.

> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from
> 
> v1 - 1. Dropped "[XEN v1 2/9] xen/arm: Define translate_dt_address_size() for the translation between u64 and paddr_t" and
> "[XEN v1 4/9] xen/arm: Use translate_dt_address_size() to translate between device tree addr/size and paddr_t", instead
> this approach achieves the same purpose.
> 
> 2. No need to check for truncation while converting values from u64 to paddr_t.
> 
> v2 - 1. Use "( (dt_start >> (PADDR_SHIFT - 1)) > 1 )" to detect truncation.
> 2. Introduced libfdt_xen.h to implement fdt_get_mem_rsv_paddr
> 3. Logged error messages in case truncation is detected.
> 
> v3 - 1. Renamed libfdt_xen.h to libfdt-xen.h.
> 2. Replaced u32/u64 with uint32_t/uint64_t
> 3. Use "(paddr_t)val != val" to check for truncation.
> 4. Removed the alias "#define PADDR_SHIFT PADDR_BITS".
> 
> v4 - 1. Added a WARN() when truncation is detected.
> 2. Always check the return value of fdt_get_mem_rsv().
> 
>  xen/arch/arm/bootfdt.c              | 48 +++++++++++++++++++------
>  xen/arch/arm/domain_build.c         |  2 +-
>  xen/arch/arm/include/asm/setup.h    |  4 +--
>  xen/arch/arm/setup.c                | 18 +++++-----
>  xen/arch/arm/smpboot.c              |  2 +-
>  xen/include/xen/device_tree.h       | 23 ++++++++++++
>  xen/include/xen/libfdt/libfdt-xen.h | 55 +++++++++++++++++++++++++++++
>  7 files changed, 129 insertions(+), 23 deletions(-)
>  create mode 100644 xen/include/xen/libfdt/libfdt-xen.h
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index 0085c28d74..ac8148da55 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -11,7 +11,7 @@
>  #include <xen/efi.h>
>  #include <xen/device_tree.h>
>  #include <xen/lib.h>
> -#include <xen/libfdt/libfdt.h>
> +#include <xen/libfdt/libfdt-xen.h>
>  #include <xen/sort.h>
>  #include <xsm/xsm.h>
>  #include <asm/setup.h>
> @@ -52,11 +52,37 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
>      return false;
>  }
> 
> -void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
> -                                u32 size_cells, u64 *start, u64 *size)
> +void __init device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
> +                                uint32_t size_cells, paddr_t *start,
> +                                paddr_t *size)
>  {
> -    *start = dt_next_cell(address_cells, cell);
> -    *size = dt_next_cell(size_cells, cell);
> +    uint64_t dt_start, dt_size;
> +
> +    /*
> +     * dt_next_cell will return uint64_t whereas paddr_t may not be 64-bit.
> +     * Thus, there is an implicit cast from uint64_t to paddr_t.
> +     */
> +    dt_start = dt_next_cell(address_cells, cell);
> +    dt_size = dt_next_cell(size_cells, cell);
> +
> +    if ( dt_start != (paddr_t)dt_start )
> +    {
> +        printk("Error: Physical address greater than max width supported\n");
I find it a bit odd to see "Error" here and then a warning on the next line. I would
drop "Error", but feel free not to, as this is just my POV.

> +        WARN();
> +    }
> +
> +    if ( dt_size != (paddr_t)dt_size )
> +    {
> +        printk("Error: Physical size greater than max width supported\n");
> +        WARN();
> +    }
> +
> +    /*
> +     * Xen will truncate the address/size if it is greater than the maximum
> +     * supported width and it will give an appropriate warning.
> +     */
> +    *start = dt_start;
> +    *size = dt_size;
>  }
> 
>  static int __init device_tree_get_meminfo(const void *fdt, int node,
> @@ -326,7 +352,7 @@ static int __init process_chosen_node(const void *fdt, int node,
>          printk("linux,initrd-start property has invalid length %d\n", len);
>          return -EINVAL;
>      }
> -    start = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
> +    start = dt_read_paddr((void *)&prop->data, dt_size_to_cells(len));
> 
>      prop = fdt_get_property(fdt, node, "linux,initrd-end", &len);
>      if ( !prop )
> @@ -339,7 +365,7 @@ static int __init process_chosen_node(const void *fdt, int node,
>          printk("linux,initrd-end property has invalid length %d\n", len);
>          return -EINVAL;
>      }
> -    end = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
> +    end = dt_read_paddr((void *)&prop->data, dt_size_to_cells(len));
> 
>      if ( start >= end )
>      {
> @@ -593,10 +619,12 @@ static void __init early_print_info(void)
>      nr_rsvd = fdt_num_mem_rsv(device_tree_flattened);
>      for ( i = 0; i < nr_rsvd; i++ )
>      {
> -        paddr_t s, e;
> -        if ( fdt_get_mem_rsv(device_tree_flattened, i, &s, &e) < 0 )
> +        paddr_t s = 0, e = 0;
> +
> +        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &s, &e) < 0 )
>              continue;
> -        /* fdt_get_mem_rsv returns length */
> +
> +        /* fdt_get_mem_rsv_paddr returns length */
>          e += s;
>          printk(" RESVD[%u]: %"PRIpaddr" - %"PRIpaddr"\n", i, s, e);
>      }
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index c8f08d8ee2..15c8bdd9e4 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -949,7 +949,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>          BUG_ON(!prop);
>          cells = (const __be32 *)prop->value;
>          device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
> -        psize = dt_read_number(cells, size_cells);
> +        psize = dt_read_paddr(cells, size_cells);
>          if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
>          {
>              printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index a926f30a2b..7b697d879e 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -157,8 +157,8 @@ const char *boot_module_kind_as_string(bootmodule_kind kind);
>  extern uint32_t hyp_traps_vector[];
>  void init_traps(void);
> 
> -void device_tree_get_reg(const __be32 **cell, u32 address_cells,
> -                         u32 size_cells, u64 *start, u64 *size);
> +void device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
> +                         uint32_t size_cells, paddr_t *start, paddr_t *size);
> 
>  u32 device_tree_get_u32(const void *fdt, int node,
>                          const char *prop_name, u32 dflt);
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 1f26f67b90..d2a3d8c340 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -29,7 +29,7 @@
>  #include <xen/virtual_region.h>
>  #include <xen/vmap.h>
>  #include <xen/trace.h>
> -#include <xen/libfdt/libfdt.h>
> +#include <xen/libfdt/libfdt-xen.h>
>  #include <xen/acpi.h>
>  #include <xen/warning.h>
>  #include <asm/alternative.h>
> @@ -220,13 +220,13 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
> 
>      for ( i = first; i < nr ; i++ )
>      {
> -        paddr_t r_s, r_e;
> +        paddr_t r_s = 0, r_e = 0;
> 
> -        if ( fdt_get_mem_rsv(device_tree_flattened, i, &r_s, &r_e ) < 0 )
> +        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &r_s, &r_e ) < 0 )
>              /* If we can't read it, pretend it doesn't exist... */
>              continue;
> 
> -        r_e += r_s; /* fdt_get_mem_rsv returns length */
> +        r_e += r_s; /* fdt_get_mem_rsv_paddr returns length */
> 
>          if ( s < r_e && r_s < e )
>          {
> @@ -500,15 +500,15 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
> 
>      for ( ; i < mi->nr_mods + nr; i++ )
>      {
> -        paddr_t mod_s, mod_e;
> +        paddr_t mod_s = 0, mod_e = 0;
> 
> -        if ( fdt_get_mem_rsv(device_tree_flattened,
> -                             i - mi->nr_mods,
> -                             &mod_s, &mod_e ) < 0 )
> +        if ( fdt_get_mem_rsv_paddr(device_tree_flattened,
> +                                   i - mi->nr_mods,
> +                                   &mod_s, &mod_e ) < 0 )
>              /* If we can't read it, pretend it doesn't exist... */
>              continue;
> 
> -        /* fdt_get_mem_rsv returns length */
> +        /* fdt_get_mem_rsv_paddr returns length */
>          mod_e += mod_s;
> 
>          if ( s < mod_e && mod_s < e )
> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> index 412ae22869..c15c177487 100644
> --- a/xen/arch/arm/smpboot.c
> +++ b/xen/arch/arm/smpboot.c
> @@ -159,7 +159,7 @@ static void __init dt_smp_init_cpus(void)
>              continue;
>          }
> 
> -        addr = dt_read_number(prop, dt_n_addr_cells(cpu));
> +        addr = dt_read_paddr(prop, dt_n_addr_cells(cpu));
> 
>          hwid = addr;
>          if ( hwid != addr )
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index 19a74909ce..11bda2fd3d 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -241,6 +241,29 @@ static inline u64 dt_read_number(const __be32 *cell, int size)
>      return r;
>  }
> 
> +/* Wrapper for dt_read_number() to return paddr_t (instead of uint64_t) */
> +static inline paddr_t dt_read_paddr(const __be32 *cell, int size)
> +{
> +    uint64_t dt_r = 0;
no need for this assignment

> +    paddr_t r;
> +
> +    dt_r = dt_read_number(cell, size);
In device_tree_get_reg() you added a note about the implicit cast, but here it is missing.

> +
> +    if ( dt_r != (paddr_t)dt_r )
> +    {
> +        printk("Error: Physical address greater than max width supported\n");
> +        WARN();
> +    }
> +
> +    /*
> +     * Xen will truncate the address/size if it is greater than the maximum
> +     * supported width and it will give an appropriate warning.
> +     */
> +    r = dt_r;
> +
> +    return r;
> +}
> +
>  /* Helper to convert a number of cells to bytes */
>  static inline int dt_cells_to_size(int size)
>  {
> diff --git a/xen/include/xen/libfdt/libfdt-xen.h b/xen/include/xen/libfdt/libfdt-xen.h
> new file mode 100644
> index 0000000000..3296a368a6
> --- /dev/null
> +++ b/xen/include/xen/libfdt/libfdt-xen.h
> @@ -0,0 +1,55 @@
> +/*
> + * SPDX-License-Identifier: GPL-2.0-only
Our CODING_STYLE says:
New files should start with a single-line SPDX comment, ..., e.g.
/* SPDX-License-Identifier: GPL-2.0 */

For me it would be perfectly fine to do as you did but it is not what our docs state
(i.e. single-line comment). It might be that we need to modify CODING_STYLE instead.

> + *
> + * xen/include/xen/libfdt/libfdt-xen.h
> + *
> + * Wrapper functions for device tree. This helps to convert dt values
> + * between uint64_t and paddr_t.
> + *
> + * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
> + */
> +
> +#ifndef LIBFDT_XEN_H
> +#define LIBFDT_XEN_H
> +
> +#include <xen/libfdt/libfdt.h>
> +
> +static inline int fdt_get_mem_rsv_paddr(const void *fdt, int n,
> +                                        paddr_t *address,
> +                                        paddr_t *size)
> +{
> +    uint64_t dt_addr;
> +    uint64_t dt_size;
> +    int ret = 0;
no need for this assignment

> +
> +    ret = fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size);
> +    if ( ret )
> +        return ret;
> +
> +    if ( dt_addr != (paddr_t)dt_addr )
> +    {
> +        printk("Error: Physical address greater than max width supported\n");
> +        return -FDT_ERR_MAX;
> +    }
> +
> +    if ( dt_size != (paddr_t)dt_size )
> +    {
> +        printk("Error: Physical size greater than max width supported\n");
> +        return -FDT_ERR_MAX;
> +    }
> +
> +    *address = dt_addr;
> +    *size = dt_size;
> +
> +    return ret;
> +}
> +
> +#endif /* LIBFDT_XEN_H */
please add an empty line here.

> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> --
> 2.17.1
> 
> 

~Michal



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:22:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:22:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523453.813508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7lU-0000OX-Nl; Wed, 19 Apr 2023 13:22:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523453.813508; Wed, 19 Apr 2023 13:22:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7lU-0000OQ-Ks; Wed, 19 Apr 2023 13:22:28 +0000
Received: by outflank-mailman (input) for mailman id 523453;
 Wed, 19 Apr 2023 13:22:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAk3=AK=citrix.com=prvs=46623c849=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pp7lT-0000OI-0H
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 13:22:27 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3879940d-deb5-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 15:22:25 +0200 (CEST)
Received: from mail-bn7nam10lp2106.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.106])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 09:22:17 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH7PR03MB7172.namprd03.prod.outlook.com (2603:10b6:510:243::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 13:22:14 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 13:22:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Wed, 19 Apr 2023 15:22:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] xen/vcpu: ignore VCPU_SSHOTTMR_future
Message-ID: <ZD/q/3yI0+8gfJ0g@Air-de-Roger>
References: <20230419114556.34856-1-roger.pau@citrix.com>
 <e27c6431-053f-924e-27c4-763b3c45fdc2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <e27c6431-053f-924e-27c4-763b3c45fdc2@suse.com>
MIME-Version: 1.0

On Wed, Apr 19, 2023 at 02:14:44PM +0200, Jan Beulich wrote:
> On 19.04.2023 13:45, Roger Pau Monne wrote:
> > The usage of VCPU_SSHOTTMR_future in Linux prior to 4.7 is bogus.
> > When the hypervisor returns -ENOTIME (timeout in the past) Linux keeps
> 
> Nit: -ETIME

Oh, thanks.

> 
> > retrying to setup the timer with a higher timeout instead of
> > self-injecting a timer interrupt.
> >[...]
> > --- a/CHANGELOG.md
> > +++ b/CHANGELOG.md
> > @@ -9,6 +9,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
> >  ### Changed
> >   - Repurpose command line gnttab_max_{maptrack_,}frames options so they don't
> >     cap toolstack provided values.
> > + - Ignore VCPU_SSHOTTMR_future VCPUOP_set_singleshot_timer flag. The only
> > +   known user doesn't use it properly, leading to in-guest breakage.
> 
> Might this read a little better as
> 
>  - Ignore VCPUOP_set_singleshot_timer's VCPU_SSHOTTMR_future flag. The only
>    known user doesn't use it properly, leading to in-guest breakage.

Sure.

> > --- a/xen/include/public/vcpu.h
> > +++ b/xen/include/public/vcpu.h
> > @@ -150,7 +150,7 @@ typedef struct vcpu_set_singleshot_timer vcpu_set_singleshot_timer_t;
> >  DEFINE_XEN_GUEST_HANDLE(vcpu_set_singleshot_timer_t);
> >  
> >  /* Flags to VCPUOP_set_singleshot_timer. */
> > - /* Require the timeout to be in the future (return -ETIME if it's passed). */
> > + /* Ignored. */
> 
> I think this could do with something like "as of Xen 4.18", as the public
> header shouldn't be tied to a specific version (and then perhaps also
> retaining the original text). Arguably mentioning a specific version may be
> a little odd in case we'd consider backporting this, but something would
> imo better be said.

Hm, at least for XenServer we will backport this to 4.13, and other
vendors might backport to different versions, at which point I'm not
sure the comment is very helpful.  It can be misleading, because it
might seem to imply that checking the Xen version will tell you
whether the flag is ignored or not.

What about:

/*
 * Request the timeout to be in the future (return -ETIME if it's passed)
 * but can be ignored by the hypervisor.
 */

> All this said - while I'm likely to ack the final patch, I would still feel
> a little uneasy doing so.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:26:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:26:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523459.813521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7pe-00012N-8f; Wed, 19 Apr 2023 13:26:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523459.813521; Wed, 19 Apr 2023 13:26:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7pe-00012G-63; Wed, 19 Apr 2023 13:26:46 +0000
Received: by outflank-mailman (input) for mailman id 523459;
 Wed, 19 Apr 2023 13:26:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pp7pc-00012A-2x
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 13:26:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pp7pb-0005EK-JK; Wed, 19 Apr 2023 13:26:43 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.29.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pp7pb-0006MR-BN; Wed, 19 Apr 2023 13:26:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <752ce1ba-8c23-e397-3f6a-15c93ac6cee0@xen.org>
Date: Wed, 19 Apr 2023 14:26:39 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
 <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

>> diff --git a/xen/include/xen/libfdt/libfdt-xen.h b/xen/include/xen/libfdt/libfdt-xen.h
>> new file mode 100644
>> index 0000000000..3296a368a6
>> --- /dev/null
>> +++ b/xen/include/xen/libfdt/libfdt-xen.h
>> @@ -0,0 +1,55 @@
>> +/*
>> + * SPDX-License-Identifier: GPL-2.0-only
> Our CODING_STYLE says:
> New files should start with a single-line SPDX comment, ..., e.g.
> /* SPDX-License-Identifier: GPL-2.0 */
> 
> For me it would be perfectly fine to do as you did but it is not what our docs state
> (i.e. single-line comment). It might be that we need to modify CODING_STYLE instead.

From my reading of https://spdx.dev/ids/#how, what you suggest would 
not be a valid SPDX-License-Identifier, as everything needs to be on 
*one* line (including the begin/end comment markers).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:34:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:34:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523465.813531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7x1-0002Vd-42; Wed, 19 Apr 2023 13:34:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523465.813531; Wed, 19 Apr 2023 13:34:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7x1-0002VW-0d; Wed, 19 Apr 2023 13:34:23 +0000
Received: by outflank-mailman (input) for mailman id 523465;
 Wed, 19 Apr 2023 13:34:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5rk+=AK=casper.srs.infradead.org=BATV+223ac6495519acf57df5+7178+infradead.org+dwmw2@srs-se1.protection.inumbo.net>)
 id 1pp7wz-0002VQ-By
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 13:34:22 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e22c9be4-deb6-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 15:34:18 +0200 (CEST)
Received: from [2001:8b0:10b:5:7e08:a586:e17f:4e92]
 (helo=u3832b3a9db3152.ant.amazon.com)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pp7vc-00DHgo-Ts; Wed, 19 Apr 2023 13:32:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e22c9be4-deb6-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=MIME-Version:Content-Type:References:
	In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=Z7wk1+WmrpLynoJg1RNmwt+ebZZ6qkCFyyApOA/bZSg=; b=pbbaxCF5CqzOna10Z26iIQqoHa
	FMWUstUEg19L+jzS89Q8o3YvVhxwOz4GGnUkHfaO+6wUdHmKAyxyyJP0/G7UfxRY5WlGDXHqvBdZ2
	e4HDWEncFWZv9twu1VoxyJxOI3YcZoVfGAanC9CgbRJ0mHy5yfJQMCjHu+p4/m40u6Ox/Q4qMOIi9
	+YbTCMoImhIsqD7nFp7L/RxopBODKanckZ56PE9Lz2iIMNRCPv40nIInK0xKE8itAoxZqYrSN63Ti
	qL1Z8iR+yn2YmVWmByYciR2uXcT9Op5RSlv+zMRsIl8vAf+GY03YIUQZLgolA7liAXjfiCIWutABN
	k36YyyAA==;
Message-ID: <c185e8a668e9b879af139ffa8d5104e643b3f66a.camel@infradead.org>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
From: David Woodhouse <dwmw2@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>, Paul Menzel <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>, Arjan van de
 Veen <arjan@linux.intel.com>, Paolo Bonzini <pbonzini@redhat.com>, Paul
 McKenney <paulmck@kernel.org>, Tom Lendacky <thomas.lendacky@amd.com>, Sean
 Christopherson <seanjc@google.com>, Oleksandr Natalenko
 <oleksandr@natalenko.name>, "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama Arif
 <usama.arif@bytedance.com>, =?ISO-8859-1?Q?J=FCrgen_Gro=DF?=
 <jgross@suse.com>,  Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,  Arnd
 Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org, Catalin
 Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>,  linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>,  Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>, 
 Sabin Rapan <sabrapan@amazon.com>
Date: Wed, 19 Apr 2023 14:32:54 +0100
In-Reply-To: <877cu83v45.ffs@tglx>
References: <20230414225551.858160935@linutronix.de>
	 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
	 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
	 <87r0sh4m7a.ffs@tglx> <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
	 <87a5z443g2.ffs@tglx> <877cu83v45.ffs@tglx>
Content-Type: multipart/signed; micalg="sha-256"; protocol="application/pkcs7-signature";
	boundary="=-X0h7NkHlm7AixhyIbeie"
User-Agent: Evolution 3.44.4-0ubuntu1 
MIME-Version: 1.0
X-SRS-Rewrite: SMTP reverse-path rewritten from <dwmw2@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html


--=-X0h7NkHlm7AixhyIbeie
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2023-04-19 at 14:38 +0200, Thomas Gleixner wrote:
> 
> I'm leaning towards disabling the CPUID leaf 0x01 based discovery and be
> done with it.

Makes sense. The large machines where users really want the parallel
startup all ought to have X2APIC and hence CPUID 0x0b.


--=-X0h7NkHlm7AixhyIbeie
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Disposition: attachment; filename="smime.p7s"
Content-Transfer-Encoding: base64

MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0BBwEAAKCCEkQw
ggYQMIID+KADAgECAhBNlCwQ1DvglAnFgS06KwZPMA0GCSqGSIb3DQEBDAUAMIGIMQswCQYDVQQG
EwJVUzETMBEGA1UECBMKTmV3IEplcnNleTEUMBIGA1UEBxMLSmVyc2V5IENpdHkxHjAcBgNVBAoT
FVRoZSBVU0VSVFJVU1QgTmV0d29yazEuMCwGA1UEAxMlVVNFUlRydXN0IFJTQSBDZXJ0aWZpY2F0
aW9uIEF1dGhvcml0eTAeFw0xODExMDIwMDAwMDBaFw0zMDEyMzEyMzU5NTlaMIGWMQswCQYDVQQG
EwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYD
VQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMTNVNlY3RpZ28gUlNBIENsaWVudCBBdXRoZW50
aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
AQEAyjztlApB/975Rrno1jvm2pK/KxBOqhq8gr2+JhwpKirSzZxQgT9tlC7zl6hn1fXjSo5MqXUf
ItMltrMaXqcESJuK8dtK56NCSrq4iDKaKq9NxOXFmqXX2zN8HHGjQ2b2Xv0v1L5Nk1MQPKA19xeW
QcpGEGFUUd0kN+oHox+L9aV1rjfNiCj3bJk6kJaOPabPi2503nn/ITX5e8WfPnGw4VuZ79Khj1YB
rf24k5Ee1sLTHsLtpiK9OjG4iQRBdq6Z/TlVx/hGAez5h36bBJMxqdHLpdwIUkTqT8se3ed0PewD
ch/8kHPo5fZl5u1B0ecpq/sDN/5sCG52Ds+QU5O5EwIDAQABo4IBZDCCAWAwHwYDVR0jBBgwFoAU
U3m/WqorSs9UgOHYm8Cd8rIDZsswHQYDVR0OBBYEFAnA8vwL2pTbX/4r36iZQs/J4K0AMA4GA1Ud
DwEB/wQEAwIBhjASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEF
BQcDBDARBgNVHSAECjAIMAYGBFUdIAAwUAYDVR0fBEkwRzBFoEOgQYY/aHR0cDovL2NybC51c2Vy
dHJ1c3QuY29tL1VTRVJUcnVzdFJTQUNlcnRpZmljYXRpb25BdXRob3JpdHkuY3JsMHYGCCsGAQUF
BwEBBGowaDA/BggrBgEFBQcwAoYzaHR0cDovL2NydC51c2VydHJ1c3QuY29tL1VTRVJUcnVzdFJT
QUFkZFRydXN0Q0EuY3J0MCUGCCsGAQUFBzABhhlodHRwOi8vb2NzcC51c2VydHJ1c3QuY29tMA0G
CSqGSIb3DQEBDAUAA4ICAQBBRHUAqznCFfXejpVtMnFojADdF9d6HBA4kMjjsb0XMZHztuOCtKF+
xswhh2GqkW5JQrM8zVlU+A2VP72Ky2nlRA1GwmIPgou74TZ/XTarHG8zdMSgaDrkVYzz1g3nIVO9
IHk96VwsacIvBF8JfqIs+8aWH2PfSUrNxP6Ys7U0sZYx4rXD6+cqFq/ZW5BUfClN/rhk2ddQXyn7
kkmka2RQb9d90nmNHdgKrwfQ49mQ2hWQNDkJJIXwKjYA6VUR/fZUFeCUisdDe/0ABLTI+jheXUV1
eoYV7lNwNBKpeHdNuO6Aacb533JlfeUHxvBz9OfYWUiXu09sMAviM11Q0DuMZ5760CdO2VnpsXP4
KxaYIhvqPqUMWqRdWyn7crItNkZeroXaecG03i3mM7dkiPaCkgocBg0EBYsbZDZ8bsG3a08LwEsL
1Ygz3SBsyECa0waq4hOf/Z85F2w2ZpXfP+w8q4ifwO90SGZZV+HR/Jh6rEaVPDRF/CEGVqR1hiuQ
OZ1YL5ezMTX0ZSLwrymUE0pwi/KDaiYB15uswgeIAcA6JzPFf9pLkAFFWs1QNyN++niFhsM47qod
x/PL+5jR87myx5uYdBEQkkDc+lKB1Wct6ucXqm2EmsaQ0M95QjTmy+rDWjkDYdw3Ms6mSWE3Bn7i
5ZgtwCLXgAIe5W8mybM2JzCCBhQwggT8oAMCAQICEQDGvhmWZ0DEAx0oURL6O6l+MA0GCSqGSIb3
DQEBCwUAMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYD
VQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMTNVNlY3RpZ28g
UlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBMB4XDTIyMDEwNzAw
MDAwMFoXDTI1MDEwNjIzNTk1OVowJDEiMCAGCSqGSIb3DQEJARYTZHdtdzJAaW5mcmFkZWFkLm9y
ZzCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALQ3GpC2bomUqk+91wLYBzDMcCj5C9m6
oZaHwvmIdXftOgTbCJXADo6G9T7BBAebw2JV38EINgKpy/ZHh7htyAkWYVoFsFPrwHounto8xTsy
SSePMiPlmIdQ10BcVSXMUJ3Juu16GlWOnAMJY2oYfEzmE7uT9YgcBqKCo65pTFmOnR/VVbjJk4K2
xE34GC2nAdUQkPFuyaFisicc6HRMOYXPuF0DuwITEKnjxgNjP+qDrh0db7PAjO1D4d5ftfrsf+kd
RR4gKVGSk8Tz2WwvtLAroJM4nXjNPIBJNT4w/FWWc/5qPHJy2U+eITZ5LLE5s45mX2oPFknWqxBo
bQZ8a9dsZ3dSPZBvE9ZrmtFLrVrN4eo1jsXgAp1+p7bkfqd3BgBEmfsYWlBXO8rVXfvPgLs32VdV
NZxb/CDWPqBsiYv0Hv3HPsz07j5b+/cVoWqyHDKzkaVbxfq/7auNVRmPB3v5SWEsH8xi4Bez2V9U
KxfYCnqsjp8RaC2/khxKt0A552Eaxnz/4ly/2C7wkwTQnBmdlFYhAflWKQ03Ufiu8t3iBE3VJbc2
5oMrglj7TRZrmKq3CkbFnX0fyulB+kHimrt6PIWn7kgyl9aelIl6vtbhMA+l0nfrsORMa4kobqQ5
C5rveVgmcIad67EDa+UqEKy/GltUwlSh6xy+TrK1tzDvAgMBAAGjggHMMIIByDAfBgNVHSMEGDAW
gBQJwPL8C9qU21/+K9+omULPyeCtADAdBgNVHQ4EFgQUzMeDMcimo0oz8o1R1Nver3ZVpSkwDgYD
VR0PAQH/BAQDAgWgMAwGA1UdEwEB/wQCMAAwHQYDVR0lBBYwFAYIKwYBBQUHAwQGCCsGAQUFBwMC
MEAGA1UdIAQ5MDcwNQYMKwYBBAGyMQECAQEBMCUwIwYIKwYBBQUHAgEWF2h0dHBzOi8vc2VjdGln
by5jb20vQ1BTMFoGA1UdHwRTMFEwT6BNoEuGSWh0dHA6Ly9jcmwuc2VjdGlnby5jb20vU2VjdGln
b1JTQUNsaWVudEF1dGhlbnRpY2F0aW9uYW5kU2VjdXJlRW1haWxDQS5jcmwwgYoGCCsGAQUFBwEB
BH4wfDBVBggrBgEFBQcwAoZJaHR0cDovL2NydC5zZWN0aWdvLmNvbS9TZWN0aWdvUlNBQ2xpZW50
QXV0aGVudGljYXRpb25hbmRTZWN1cmVFbWFpbENBLmNydDAjBggrBgEFBQcwAYYXaHR0cDovL29j
c3Auc2VjdGlnby5jb20wHgYDVR0RBBcwFYETZHdtdzJAaW5mcmFkZWFkLm9yZzANBgkqhkiG9w0B
AQsFAAOCAQEAyW6MUir5dm495teKqAQjDJwuFCi35h4xgnQvQ/fzPXmtR9t54rpmI2TfyvcKgOXp
qa7BGXNFfh1JsqexVkIqZP9uWB2J+uVMD+XZEs/KYNNX2PvIlSPrzIB4Z2wyIGQpaPLlYflrrVFK
v9CjT2zdqvy2maK7HKOQRt3BiJbVG5lRiwbbygldcALEV9ChWFfgSXvrWDZspnU3Gjw/rMHrGnql
Htlyebp3pf3fSS9kzQ1FVtVIDrL6eqhTwJxe+pXSMMqFiN0whpBtXdyDjzBtQTaZJ7zTT/vlehc/
tDuqZwGHm/YJy883Ll+GP3NvOkgaRGWEuYWJJ6hFCkXYjyR9IzCCBhQwggT8oAMCAQICEQDGvhmW
Z0DEAx0oURL6O6l+MA0GCSqGSIb3DQEBCwUAMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3Jl
YXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0
ZWQxPjA8BgNVBAMTNVNlY3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJl
IEVtYWlsIENBMB4XDTIyMDEwNzAwMDAwMFoXDTI1MDEwNjIzNTk1OVowJDEiMCAGCSqGSIb3DQEJ
ARYTZHdtdzJAaW5mcmFkZWFkLm9yZzCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALQ3
GpC2bomUqk+91wLYBzDMcCj5C9m6oZaHwvmIdXftOgTbCJXADo6G9T7BBAebw2JV38EINgKpy/ZH
h7htyAkWYVoFsFPrwHounto8xTsySSePMiPlmIdQ10BcVSXMUJ3Juu16GlWOnAMJY2oYfEzmE7uT
9YgcBqKCo65pTFmOnR/VVbjJk4K2xE34GC2nAdUQkPFuyaFisicc6HRMOYXPuF0DuwITEKnjxgNj
P+qDrh0db7PAjO1D4d5ftfrsf+kdRR4gKVGSk8Tz2WwvtLAroJM4nXjNPIBJNT4w/FWWc/5qPHJy
2U+eITZ5LLE5s45mX2oPFknWqxBobQZ8a9dsZ3dSPZBvE9ZrmtFLrVrN4eo1jsXgAp1+p7bkfqd3
BgBEmfsYWlBXO8rVXfvPgLs32VdVNZxb/CDWPqBsiYv0Hv3HPsz07j5b+/cVoWqyHDKzkaVbxfq/
7auNVRmPB3v5SWEsH8xi4Bez2V9UKxfYCnqsjp8RaC2/khxKt0A552Eaxnz/4ly/2C7wkwTQnBmd
lFYhAflWKQ03Ufiu8t3iBE3VJbc25oMrglj7TRZrmKq3CkbFnX0fyulB+kHimrt6PIWn7kgyl9ae
lIl6vtbhMA+l0nfrsORMa4kobqQ5C5rveVgmcIad67EDa+UqEKy/GltUwlSh6xy+TrK1tzDvAgMB
AAGjggHMMIIByDAfBgNVHSMEGDAWgBQJwPL8C9qU21/+K9+omULPyeCtADAdBgNVHQ4EFgQUzMeD
Mcimo0oz8o1R1Nver3ZVpSkwDgYDVR0PAQH/BAQDAgWgMAwGA1UdEwEB/wQCMAAwHQYDVR0lBBYw
FAYIKwYBBQUHAwQGCCsGAQUFBwMCMEAGA1UdIAQ5MDcwNQYMKwYBBAGyMQECAQEBMCUwIwYIKwYB
BQUHAgEWF2h0dHBzOi8vc2VjdGlnby5jb20vQ1BTMFoGA1UdHwRTMFEwT6BNoEuGSWh0dHA6Ly9j
cmwuc2VjdGlnby5jb20vU2VjdGlnb1JTQUNsaWVudEF1dGhlbnRpY2F0aW9uYW5kU2VjdXJlRW1h
aWxDQS5jcmwwgYoGCCsGAQUFBwEBBH4wfDBVBggrBgEFBQcwAoZJaHR0cDovL2NydC5zZWN0aWdv
LmNvbS9TZWN0aWdvUlNBQ2xpZW50QXV0aGVudGljYXRpb25hbmRTZWN1cmVFbWFpbENBLmNydDAj
BggrBgEFBQcwAYYXaHR0cDovL29jc3Auc2VjdGlnby5jb20wHgYDVR0RBBcwFYETZHdtdzJAaW5m
cmFkZWFkLm9yZzANBgkqhkiG9w0BAQsFAAOCAQEAyW6MUir5dm495teKqAQjDJwuFCi35h4xgnQv
Q/fzPXmtR9t54rpmI2TfyvcKgOXpqa7BGXNFfh1JsqexVkIqZP9uWB2J+uVMD+XZEs/KYNNX2PvI
lSPrzIB4Z2wyIGQpaPLlYflrrVFKv9CjT2zdqvy2maK7HKOQRt3BiJbVG5lRiwbbygldcALEV9Ch
WFfgSXvrWDZspnU3Gjw/rMHrGnqlHtlyebp3pf3fSS9kzQ1FVtVIDrL6eqhTwJxe+pXSMMqFiN0w
hpBtXdyDjzBtQTaZJ7zTT/vlehc/tDuqZwGHm/YJy883Ll+GP3NvOkgaRGWEuYWJJ6hFCkXYjyR9
IzGCBMcwggTDAgEBMIGsMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVz
dGVyMRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMT
NVNlY3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEA
xr4ZlmdAxAMdKFES+jupfjANBglghkgBZQMEAgEFAKCCAeswGAYJKoZIhvcNAQkDMQsGCSqGSIb3
DQEHATAcBgkqhkiG9w0BCQUxDxcNMjMwNDE5MTMzMjU0WjAvBgkqhkiG9w0BCQQxIgQgwW0ReM7Y
k0AUHDo2oaxceej5KOXcSCJ68XNeDJ3HJKkwgb0GCSsGAQQBgjcQBDGBrzCBrDCBljELMAkGA1UE
BhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEYMBYG
A1UEChMPU2VjdGlnbyBMaW1pdGVkMT4wPAYDVQQDEzVTZWN0aWdvIFJTQSBDbGllbnQgQXV0aGVu
dGljYXRpb24gYW5kIFNlY3VyZSBFbWFpbCBDQQIRAMa+GZZnQMQDHShREvo7qX4wgb8GCyqGSIb3
DQEJEAILMYGvoIGsMIGWMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3JlYXRlciBNYW5jaGVzdGVy
MRAwDgYDVQQHEwdTYWxmb3JkMRgwFgYDVQQKEw9TZWN0aWdvIExpbWl0ZWQxPjA8BgNVBAMTNVNl
Y3RpZ28gUlNBIENsaWVudCBBdXRoZW50aWNhdGlvbiBhbmQgU2VjdXJlIEVtYWlsIENBAhEAxr4Z
lmdAxAMdKFES+jupfjANBgkqhkiG9w0BAQEFAASCAgBnAfV7CT1u9GiQqbPqf4fb9t35esc45ipr
KevtZZHK+O5wRZoTxvTfqh0Qic5v+YDPnW+4U/i5yDdhiaFlfjbEqdvwty8JSES8E/ap1EaHEXKv
/cGk1MIvh5CggGNCqu3m6utWxSvZQBo78tTbuNUA5/Ji83ITVsMpL77LDf3MOx5W4MFD40ta3oiP
rFE/QRxiHgTepRMSOA2/3jblcOCwpJnbVvJW5VK5D8J0Khc4YL6a3uUc2Cbz44bJK14CB+rTbfSS
/SJRbROvJ4XH7731ZRM9JYcWqq3A42vtxoX59PfRpo8BVuIMAK1PKcry3faDbFSQ2QXx3r3B5Wvn
gfpGRODS9lUAcSMacHbmGXmu+v3jREp45epsk/fMyRJ75i/Z5+daCbkho5a5YQs1z6S3/EUyaKio
/Mf5myho6UsuwfI1zpyFUGKjNnsRUlRLTw5ODu7Ow8QUWqmHFybhlnpv0EaYAXapbRQuuc7p3VGK
S/KgGPk35BxL78VlFVoP2WFx3hOgXVfCPfsLQFY2moWznexb/4BLEyKkJYYImNCaeS8Y9s+hS8Fr
ZVGQGyP91q/Bjc8C9s2twz5DQ1NexIz6SiDqCYkbEJM3tAeQbwWAYTD6bxwN1C5dZS408rI27q6e
CPzhDHpNr9fym8XsGWPOhM271uSLwx/YoSJxAyIp9wAAAAAAAA==


--=-X0h7NkHlm7AixhyIbeie--


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:35:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:35:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523463.813541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7yN-00037d-Hg; Wed, 19 Apr 2023 13:35:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523463.813541; Wed, 19 Apr 2023 13:35:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7yN-00037W-Eu; Wed, 19 Apr 2023 13:35:47 +0000
Received: by outflank-mailman (input) for mailman id 523463;
 Wed, 19 Apr 2023 13:34:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DmUl=AK=gmail.com=vernon2gm@srs-se1.protection.inumbo.net>)
 id 1pp7wj-0002Up-FJ
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 13:34:05 +0000
Received: from mail-pf1-x430.google.com (mail-pf1-x430.google.com
 [2607:f8b0:4864:20::430])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d99e031e-deb6-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 15:34:04 +0200 (CEST)
Received: by mail-pf1-x430.google.com with SMTP id
 d2e1a72fcca58-63b73203e0aso16098001b3a.1
 for <xen-devel@lists.xenproject.org>; Wed, 19 Apr 2023 06:34:04 -0700 (PDT)
Received: from vernon-pc ([114.231.52.113]) by smtp.gmail.com with ESMTPSA id
 fv5-20020a17090b0e8500b0023b4d4ca3a9sm1392755pjb.50.2023.04.19.06.33.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Apr 2023 06:34:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d99e031e-deb6-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681911243; x=1684503243;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=PxKBdJz6P62xw3o1oOHy6gvL/UmSXqmJ0vRsGDHfXz4=;
        b=mzt7zUu9/LUJvSIPaNj+nC+ArFvk3Wmz36d7P4hExfQnu/+5EJ/luadmhJCFNiq8eA
         5mamh3uH/3u0VIJzJy4W31TNXYt1DvfBn6ZHGOaQZO/LaodxyaE7yMJE8BXQrMBetDD1
         or6ct0PxWqhNSzFTI7QUR0GNkBsKZpZ79DqaN9iIdwxdDsEv8FenNeci7+RSvuikuLAT
         68J4r2aw28njQbzjpScT2QNoBA3X+SSSqlRvn9uiCaccB9t2Uwb/G4C4D/IZDIaojb9q
         azW1GR2HjveVXh0qkoLiSeS/y+b+MPHfKKr5FHg+1Fk7e61fzU4VCdWEM74x3aiYuZwi
         ZxNA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681911243; x=1684503243;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=PxKBdJz6P62xw3o1oOHy6gvL/UmSXqmJ0vRsGDHfXz4=;
        b=c6i2fXe4NVJ7Ie8UUaMqkSI5IQUPNGxNhYE57bMrQ/3lplS0Re6bv8a1+c0kGwcirY
         JtwumdlzfeGxrUXTZYlotT26ufT2SBZO4CglP5TgJRGtltMY1HZEhvXyqU/JAy4hw9fF
         xdLLWcFGOaeudvVnav6w7VlKlV/ula9afAcVKSxpSzcmXrugMKyo6rPm28ZdKfUOksut
         9KeGhM3NJ9RFZnLUqPt1w9qDT874RLxvw5GK6F8viU1bczdLk5IT2TKHkr1zqlDbq5l7
         iEC9oadoDGxlI4Nqe17vcv8X5AdZv8cvTWEetzZivxWY5upmT8lolC0wWzLC+lAdgi8o
         C29g==
X-Gm-Message-State: AAQBX9cYMlT74UAkEOHXfGGt0yEbbloQA+EiXIMgI9spj7/qQ6TUCH5T
	QTBC7jB/Q3TjYBTfdLzMuHc=
X-Google-Smtp-Source: AKy350YIBnQaE7WQxRLYlOkzgJwtc2fN5JwvE5Bu5492IB3LxeDfFswMVcmvTtBOzAP11iX9vi1kMA==
X-Received: by 2002:a17:90a:c095:b0:247:4e73:cbdd with SMTP id o21-20020a17090ac09500b002474e73cbddmr2798673pjs.9.1681911242426;
        Wed, 19 Apr 2023 06:34:02 -0700 (PDT)
Date: Wed, 19 Apr 2023 21:33:54 +0800
From: Vernon Yang <vernon2gm@gmail.com>
To: Vishal Moola <vishal.moola@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Subject: Re: [PATCH 4/33] mm: add utility functions for ptdesc
Message-ID: <ZD/syK8RYO9FZ6ks@vernon-pc>
References: <20230417205048.15870-1-vishal.moola@gmail.com>
 <20230417205048.15870-5-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230417205048.15870-5-vishal.moola@gmail.com>

On Mon, Apr 17, 2023 at 01:50:19PM -0700, Vishal Moola wrote:
> Introduce utility functions setting the foundation for ptdescs. These
> will also assist in the splitting out of ptdesc from struct page.
>
> ptdesc_alloc() is defined to allocate new ptdesc pages as compound
> pages. This is to standardize ptdescs by allowing for one allocation
> and one free function, in contrast to the current two allocation and two free functions.
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  include/asm-generic/tlb.h | 11 ++++++++++
>  include/linux/mm.h        | 44 +++++++++++++++++++++++++++++++++++++++
>  include/linux/pgtable.h   | 13 ++++++++++++
>  3 files changed, 68 insertions(+)
>
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index b46617207c93..6bade9e0e799 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -481,6 +481,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
>  	return tlb_remove_page_size(tlb, page, PAGE_SIZE);
>  }
>
> +static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt)
> +{
> +	tlb_remove_table(tlb, pt);
> +}
> +
> +/* Like tlb_remove_ptdesc, but for page-like page directories. */
> +static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt)
> +{
> +	tlb_remove_page(tlb, ptdesc_page(pt));
> +}
> +
>  static inline void tlb_change_page_size(struct mmu_gather *tlb,
>  						     unsigned int page_size)
>  {
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b18848ae7e22..ec3cbe2fa665 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2744,6 +2744,45 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
>  }
>  #endif /* CONFIG_MMU */
>
> +static inline struct ptdesc *virt_to_ptdesc(const void *x)
> +{
> +	return page_ptdesc(virt_to_head_page(x));
> +}
> +
> +static inline void *ptdesc_to_virt(struct ptdesc *pt)
> +{
> +	return page_to_virt(ptdesc_page(pt));
> +}
> +
> +static inline void *ptdesc_address(struct ptdesc *pt)
> +{
> +	return folio_address(ptdesc_folio(pt));
> +}
> +
> +static inline bool ptdesc_is_reserved(struct ptdesc *pt)
> +{
> +	return folio_test_reserved(ptdesc_folio(pt));
> +}
> +
> +static inline struct ptdesc *ptdesc_alloc(gfp_t gfp, unsigned int order)
> +{
> +	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> +
> +	return page_ptdesc(page);
> +}
> +
> +static inline void ptdesc_free(struct ptdesc *pt)
> +{
> +	struct page *page = ptdesc_page(pt);
> +
> +	__free_pages(page, compound_order(page));
> +}
> +
> +static inline void ptdesc_clear(void *x)
> +{
> +	clear_page(x);
> +}
> +
>  #if USE_SPLIT_PTE_PTLOCKS
>  #if ALLOC_SPLIT_PTLOCKS
>  void __init ptlock_cache_init(void);
> @@ -2970,6 +3009,11 @@ static inline void mark_page_reserved(struct page *page)
>  	adjust_managed_page_count(page, -1);
>  }
>
> +static inline void free_reserved_ptdesc(struct ptdesc *pt)
> +{
> +	free_reserved_page(ptdesc_page(pt));
> +}
> +
>  /*
>   * Default method to free all the __init memory into the buddy system.
>   * The freed pages will be poisoned with pattern "poison" if it's within
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 7cc6ea057ee9..7cd803aa38eb 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -97,6 +97,19 @@ TABLE_MATCH(ptl, ptl);
>  #undef TABLE_MATCH
>  static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
>
> +#define ptdesc_page(pt)			(_Generic((pt),			\
> +	const struct ptdesc *:		(const struct page *)(pt),	\
> +	struct ptdesc *:		(struct page *)(pt)))
> +
> +#define ptdesc_folio(pt)		(_Generic((pt),			\
> +	const struct ptdesc *:		(const struct folio *)(pt),	\
> +	struct ptdesc *:		(struct folio *)(pt)))
> +
> +static inline struct ptdesc *page_ptdesc(struct page *page)
> +{
> +	return (struct ptdesc *)page;
> +}

Hi Vishal,

I'm a little curious: why does page_ptdesc() use an inline function
instead of a macro? If there is some magic here, please tell me. Thank
you very much.

> +
>  /*
>   * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
>   *
>
> --
> 2.39.2
>


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:36:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:36:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523475.813550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7zJ-0003fv-Qe; Wed, 19 Apr 2023 13:36:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523475.813550; Wed, 19 Apr 2023 13:36:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp7zJ-0003fm-Nb; Wed, 19 Apr 2023 13:36:45 +0000
Received: by outflank-mailman (input) for mailman id 523475;
 Wed, 19 Apr 2023 13:36:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAk3=AK=citrix.com=prvs=46623c849=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pp7zI-0003fS-LK
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 13:36:44 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 37dbae19-deb7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 15:36:43 +0200 (CEST)
Received: from mail-dm6nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 09:36:40 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA1PR03MB7146.namprd03.prod.outlook.com (2603:10b6:806:335::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.20; Wed, 19 Apr
 2023 13:36:37 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 13:36:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37dbae19-deb7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681911402;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=oiujBh1zgNY4u3WSkT/hwx1Y74HixuxpZDiSlSutRuw=;
  b=U2bMztMOL/QPBlb14MUA6UktsDhRoY///ioUrDHS1lPLsuGO/J7oxAL+
   LpUpMrrtBQGLQ2UTe82yYh/nrbvqDN3fpQQYX7oAIUebNs8NTE0RWRl8I
   BHgtNN5c8SBdruRa8ZIx4Okb26LjTr5xe8dZd0rg0ExQ3ppEMZhOqGvzR
   8=;
X-IronPort-RemoteIP: 104.47.58.109
X-IronPort-MID: 105447036
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:GcBLKazn6/Bh13z9R/h6t+cRxyrEfRIJ4+MujC+fZmUNrF6WrkVSx
 2YYW2yHO//ZMWb0L9x0YIjg/E5UvsWDm4I1HgA9+yAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UIHUMja4mtC5QRiPKET5TcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KXAN2
 9sRaxIBVxSGvfuEmJe1VNZGve12eaEHPKtH0p1h5RfwKK9+BLzmHeDN79Ie2yosjMdTG/qYf
 9AedTdkcBXHZVtIJ0sTD5U92uyvgxETcRUB8A7T+fVxvjiVlVQguFTuGIO9ltiiX8Jak1zev
 mvb12/4HgsbJJqUzj/tHneE37eSwX+kANlMfFG+3tpOrU+VwVciMicPfmq9n8G8sWuEZOsKf
 iT4/QJr98De7neDTNPwQhm5q36spQMHVpxbFOhSwB6J4rrZ5UCeHGdsZi5MbpkqudE7QRQu1
 0SVhJX5CDp3qrqXRHmBsLCOoluP1TM9KGYDYWoISFUD6ty6+IUr1EuXH5BkDbK/icDzFXfo2
 TeWoSMihrIVy8kWy6G8+lOBiDWpznTUcjMICszsdjrNxmtEiESNPuRENXCzAS58Ebuk
IronPort-HdrOrdr: A9a23:kZvFmqwTrxSWoPoKoNKJKrPw6L1zdoMgy1knxilNoHxuH/Bw9v
 re+cjzsCWftN9/Yh4dcLy7VpVoIkmsl6Kdg7NwAV7KZmCP1FdARLsI0WKI+UyCJ8SRzI9gPa
 cLSdkFNDXzZ2IK8PoTNmODYqodKNrsytHWuQ/HpU0dKT2D88tbnn9E4gDwKDwQeCB2QaAXOb
 C7/cR9qz+paR0sH7+G7ilsZZmkmzXT/qiWGCI7Ow==
X-Talos-CUID: =?us-ascii?q?9a23=3AC5bKDGpgac00Cx8bannOyaTmUdFmXFrUj1XcGku?=
 =?us-ascii?q?9M39bEJibV2C6w4oxxg=3D=3D?=
X-Talos-MUID: 9a23:pt4p9AaLuWaYruBTkxvBljdlD/ZS2b2FOUYzqZA9nvO7DHkl
X-IronPort-AV: E=Sophos;i="5.99,208,1677560400"; 
   d="scan'208";a="105447036"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OkiwOZdh02YO6zwREs11PNCIe8tQui7KMLaz0WZdUnR/sD6iwiiDx/ZTEd4hRal/9gl7IBLScb1HeABAmwZJxjAeY+GknrucmmzJUbxzsJw1HCRJbYf0hI+Llb9fmx/oCVEfboTL+R2s79U76u7K4csD0e1ddEP9zdxhJ9PZqXfZ3J5YsmGn1kW4602Ry9DqarBAI8V2uCUyytKD0LY4/zFtSlI+VuXEfN4qRhYdV1ng65DE5nVj8S50rZO0MFVfbhMZRv9PoIaqgpCKg5Kg+Ber+W9GHIM/zEo/7HJ7MfRxUBl4lM3mMOR+Me1sTQNHGijQUkedxXsqHas6tJnEow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HdGuTUn1srRQHQZM4vrpK/BDUTN31bZMdHTs2ip2RBo=;
 b=CIcpB9WogqlJP+kpipCdwMgtkekz0jkfWRraLTmeO7IFQDUPjSwAHVxsNvby2gnZ+oOya3h9lYcFSq4hef5msDtKnrXFpSn0JQIKCXo0YyuW0hBdmXpPx9sj1gAQc6DjPK2XU/yx/+aoPX+F7RdHMsAeIrtS/vkgdZfAqCpT78Mt9C2j8mGArcaib0GbCr1zVMnSNWkfymM+7t30bW0ZMQX1bEIwOPFlZXGCqXkJmydfRrRKorojo5r+35K0c876fZ+wglfDa4Jot1EvwW76C2eOrkwXOBfkUD5fBW7pAUctsl0rkVYedv50SCpOxR0AvlmH3o9JqiVd7zuaOXZhFA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HdGuTUn1srRQHQZM4vrpK/BDUTN31bZMdHTs2ip2RBo=;
 b=ptkdRrm2x75ygOndzIpml0iN7pu3Q0pS/KFENjKiVUK5R5UjzRGReht0ZKcdEGPvEnaSLbABeLhkOQOG4L/N1obNPOWUFLnsE3tT5GTTE3jpmtR4FpwRn5GQ4q0SIX+Fq1415C7iFdhLSF6ApzwEbnaWUu29sHR+EZbJLCGt7Ps=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 19 Apr 2023 15:36:31 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Message-ID: <ZD/uX1VqYchQ4GgT@Air-de-Roger>
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
 <ZD6V0wzw/VS/MMw/@Air-de-Roger>
 <d301e110-f840-a032-c406-2f7404752783@suse.com>
 <ZD+ljXSEPCmPMAtN@Air-de-Roger>
 <5c476b65-0340-2a0e-e436-46368d3236b7@suse.com>
 <ZD/UMyeckvCq0ivf@Air-de-Roger>
 <86823b76-6be1-da65-7608-af391ff48978@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <86823b76-6be1-da65-7608-af391ff48978@suse.com>
X-ClientProxiedBy: LO0P265CA0013.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:355::8) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SA1PR03MB7146:EE_
X-MS-Office365-Filtering-Correlation-Id: 5dcdebaf-ae8c-4fa2-84ff-08db40db1927
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5dcdebaf-ae8c-4fa2-84ff-08db40db1927
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 13:36:36.9583
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB7146

On Wed, Apr 19, 2023 at 02:00:38PM +0200, Jan Beulich wrote:
> On 19.04.2023 13:44, Roger Pau Monné wrote:
> > On Wed, Apr 19, 2023 at 10:43:22AM +0200, Jan Beulich wrote:
> >> On 19.04.2023 10:25, Roger Pau Monné wrote:
> >>> On Wed, Apr 19, 2023 at 08:17:45AM +0200, Jan Beulich wrote:
> >>>> On 18.04.2023 15:06, Roger Pau Monné wrote:
> >>>>> On Tue, Apr 18, 2023 at 01:00:53PM +0200, Jan Beulich wrote:
> >>>>>> On 18.04.2023 11:24, Roger Pau Monne wrote:
> >>>>>>> --- a/xen/arch/x86/include/asm/config.h
> >>>>>>> +++ b/xen/arch/x86/include/asm/config.h
> >>>>>>> @@ -44,6 +44,20 @@
> >>>>>>>  /* Linkage for x86 */
> >>>>>>>  #ifdef __ASSEMBLY__
> >>>>>>>  #define ALIGN .align 16,0x90
> >>>>>>> +#ifdef CONFIG_LIVEPATCH
> >>>>>>> +#define START_LP(name)                          \
> >>>>>>> +  jmp name;                                     \
> >>>>>>> +  .pushsection .text.name, "ax", @progbits;     \
> >>>>>>> +  name:
> >>>>>>> +#define END_LP(name)                            \
> >>>>>>> +  .size name, . - name;                         \
> >>>>>>> +  .type name, @function;                        \
> >>>>>>> +  .popsection
> >>>>>>> +#else
> >>>>>>> +#define START_LP(name)                          \
> >>>>>>> +  name:
> >>>>>>> +#define END_LP(name)
> >>>>>>> +#endif
> >>>>>>>  #define ENTRY(name)                             \
> >>>>>>>    .globl name;                                  \
> >>>>>>>    ALIGN;                                        \
> >>>>>>
> >>>>>> Couldn't END_LP() set type and size unconditionally? (But see also
> >>>>>> below.)
> >>>>>
> >>>>> I see, so that we could also use it for debug purposes.  I guess at
> >>>>> that point it might be better to use {START,END}_FUNC() to note that
> >>>>> the macros also have an effect beyond that of livepatching.
> >>>>>
> >>>>> Maybe also introduce a START_ENTRY() that replaces ENTRY()?  Albeit I
> >>>>> find START_ENTRY a weird name.
> >>>>
> >>>> So do I. {START,END}_FUNC() or whatever else are in principle fine, but
> >>>> I take it that you're aware that we meanwhile have two or even three
> >>>> concurring proposals on a general scheme of such annotations, and we
> >>>> don't seem to be able to agree on one. (I guess I'll make a design
> >>>> session proposal on this topic for Prague.)
> >>>
> >>> Oh, I wasn't aware we had other proposals; I've been away on and off
> >>> quite a lot recently, and haven't been able to keep up with all
> >>> xen-devel email.  Do you have any references at hand?
> >>
> >> Andrew said he had posted something long ago, but I didn't recall and
> >> hence have no reference. My posting from about a year ago is
> >> https://lists.xen.org/archives/html/xen-devel/2022-04/msg00876.html
> >> Subsequently Jane went kind of the Linux route:
> >> https://lists.xen.org/archives/html/xen-devel/2022-08/msg00236.html
> >>
> >>>> One thing needs to be clear though: Macros doing things solely needed
> >>>> for LP need to not have extra effects with it disabled, and such
> >>>> macros also better wouldn't e.g. insert stray JMP when not really
> >>>> needed. Hence I expect we still want (some) LP-specific macros besides
> >>>> whatever we settle on as the generic ones.
> >>>
> >>> The stray jmp can be inserted only in the livepatch case, if we end up
> >>> having to add it.
> >>>
> >>> Maybe we should just go with Linux names, so initially I would like to
> >>> use:
> >>>
> >>> SYM_FUNC_START{_NOALIGN}(name)
> >>> SYM_FUNC_START_LOCAL{_NOALIGN}(name)
> >>> SYM_FUNC_END(name)
> >>
> >> As said in replies on the earlier threads, I think these are overly
> >> verbose and come in overly many variations.
> > 
> > Right, I would only introduce the ones above, and as long as I have at
> > least one user for them. I don't think there's much value in importing
> > the file wholesale if we have no use case for a lot of the imported
> > macros.
> > 
> > The main issue with ENTRY() and ENDPROC() / ENDDATA() is that we still
> > need a tag for local function-like entry point labels, would you then
> > use PROC() for those? ENTRY_LOCAL()?
> > 
> > I have to admit I prefer the FUNC_START{_LOCAL} for that purpose as I
> > think it's clearer.  I would agree on dropping the SYM_ prefix from
> > the Linux ones if there's consensus.
> 
> Okay, I'm glad we can agree on no SYM_. But what value does START have?
> And why would the type be (re)specified via ..._END()? FUNC(), DATA(),
> and END() ought to be all we need.

Does it imply that we would then drop ENTRY()? (seems so, would just
like to confirm).

> The type would be set by the entry
> point macros, and the size by END(). To cover local vs global I could
> live with _LOCAL suffixes, but personally would prefer e.g. LFUNC()
> and GFUNC(). We could also limit ourselves to FUNC() plus DATA(), and
> have (non-)global expressed by END() and e.g. LEND() or END_LOCAL().
> One less macro, but maybe slightly odd to have the .global directives
> then at the end rather than at the beginning.

Hm, yes, I do find it odd to have the .global at the end.  FUNC and
FUNC_LOCAL would be my preference; I find {L,G}FUNC a bit
confusing.

> 
> Note that this is different from my proposed patch, where I aimed at
> the change being unintrusive. This includes that this was matching
> what Arm has (and hence required no change there at all). I think it
> would certainly be nice if these constructs were as similar as
> possible between arch-es; some may even be possible to share.

Well, yes, that would seem desirable as long as we can agree on a set
of helper names.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:39:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:39:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523480.813561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp81l-0004JG-6Y; Wed, 19 Apr 2023 13:39:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523480.813561; Wed, 19 Apr 2023 13:39:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp81l-0004J9-3p; Wed, 19 Apr 2023 13:39:17 +0000
Received: by outflank-mailman (input) for mailman id 523480;
 Wed, 19 Apr 2023 13:39:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RA+/=AK=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pp81j-0004J1-Cy
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 13:39:15 +0000
Received: from mail-lj1-x22a.google.com (mail-lj1-x22a.google.com
 [2a00:1450:4864:20::22a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 929858f8-deb7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 15:39:14 +0200 (CEST)
Received: by mail-lj1-x22a.google.com with SMTP id
 38308e7fff4ca-2a8b766322bso27992341fa.1
 for <xen-devel@lists.xenproject.org>; Wed, 19 Apr 2023 06:39:13 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 j25-20020a19f519000000b004edc6067affsm950399lfb.8.2023.04.19.06.39.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Apr 2023 06:39:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 929858f8-deb7-11ed-b21f-6b7b168915f2
Message-ID: <7934f6b1545f161f876755ed1ab7ce5364220a83.camel@gmail.com>
Subject: Re: [PATCH v4 1/3] xen/riscv: introduce setup_initial_pages
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Wed, 19 Apr 2023 16:39:12 +0300
In-Reply-To: <1cd40a12-7030-ec0d-dae7-e60132c2989c@suse.com>
References: <cover.1680882176.git.oleksii.kurochko@gmail.com>
	 <50ed83073ccb440fb651070de8b0abebd3888b43.1680882176.git.oleksii.kurochko@gmail.com>
	 <1cd40a12-7030-ec0d-dae7-e60132c2989c@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.46.4 (3.46.4-1.fc37) 
MIME-Version: 1.0

Hi Jan,

On Mon, 2023-04-17 at 13:50 +0200, Jan Beulich wrote:
> On 07.04.2023 17:48, Oleksii Kurochko wrote:
> > The idea was taken from xvisor but the following changes
> > were done:
> > * Use only a minimal part of the code enough to enable MMU
> > * rename {_}setup_initial_pagetables functions
> > * add an argument for setup_initial_mapping to have
> >   an opportunity to set PTE flags.
> > * update setup_initial_pagetables function to map sections
> >   with correct PTE flags.
> > * Rewrite enable_mmu() in C.
> > * map linker address range to load address range without
> >   1:1 mapping. It will be 1:1 only in case when
> >   load_start_addr is equal to linker_start_addr.
> > * add safety checks such as:
> >   * Xen size is less than page size
> >   * linker address range doesn't overlap load address
> >     range
> > * Rework macros {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK}
> > * change PTE_LEAF_DEFAULT to RW instead of RWX.
> > * Remove phys_offset as it is not used now
> > * Remove alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0);
> >   in setup_initial_mapping() as they should be already aligned.
> >   Make a check that {map, pa}_start are aligned.
> > * Remove clear_pagetables() as initial pagetables will be
> >   zeroed during bss initialization
> > * Remove __attribute__((section(".entry"))) for setup_initial_pagetables()
> >   as there is no such section in xen.lds.S
> > * Update the argument of pte_is_valid() to "const pte_t *p"
> > * Add check that Xen's load address is aligned at a 4k boundary
> > * Refactor setup_initial_pagetables() so it maps the linker
> >   address range to the load address range, and afterwards sets the
> >   needed permissions for specific sections (such as .text, .rodata, etc);
> >   otherwise RW permissions are set by default.
> > * Add function to check that the requested SATP_MODE is supported
> >
> > Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> > Changes in V4:
> >   * use GB() macros instead of defining SZ_1G
> >   * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))
>
> Perhaps in a separate patch, may I ask that you add - like x86 and Arm
> have it - a block comment to config.h laying out virtual address space
> use? Knowing what is planned to be put where (even if just vaguely,
> i.e. keeping open the option of changing the layout) is likely going
> to help with figuring whether this is a good placement.
>
> Such a comment could then also be accompanied by mentioning that
> virtual address space really "wraps" at certain boundaries (due to the
> upper address bits simply being ignored). For an x86 person like me
> this is certainly unexpected / unusual behavior.
>
Sure, it makes sense. I'll add that to the new version of the patch series.

> >   * remove unnecessary 'asm' word at the end of #error
> >   * encapsulate pte_t definition in a struct
> >   * rename addr_to_ppn() to ppn_to_paddr().
> >   * change type of paddr argument from const unsigned long to paddr_t
> >   * update pte_to_paddr() prototype.
> >   * calculate size of Xen binary based on the amount of page tables
> >   * use unsigned int instead of uint32_t as its use isn't warranted.
> >   * remove extern of bss_{start,end} as they aren't used in mm.c anymore
> >   * fix code style
> >   * add argument for HANDLE_PGTBL macros instead of curr_lvl_num variable
> >   * make enable_mmu() noinline to prevent link-time optimization,
> >     because of the nature of enable_mmu()
> >   * add function to check that SATP_MODE is supported.
> >   * update the commit message
> >   * update setup_initial_pagetables to set correct PTE flags in one pass
> >     instead of calling setup_pte_permissions after setup_initial_pagetables(),
> >     as setup_initial_pagetables() isn't used to change permission flags.
> > ---
> > Changes in V3:
> >  - update definition of pte_t structure to have a proper size of pte_t
> >    in case of RV32.
> >  - update asm/mm.h with new functions and remove unnecessary 'extern'.
> >  - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
> >  - update paddr_to_pte() to receive permissions as an argument.
> >  - add check that map_start & pa_start are properly aligned.
> >  - move defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to
> >    <asm/page-bits.h>
> >  - Rename PTE_SHIFT to PTE_PPN_SHIFT
> >  - refactor setup_initial_pagetables: map all LINK addresses to LOAD
> >    addresses and afterwards set up PTE permissions for sections; update
> >    the check that linker and load addresses don't overlap.
> >  - refactor setup_initial_mapping: allocate pagetables 'dynamically'
> >    if necessary.
> >  - rewrite enable_mmu in C; add the check that map_start and pa_start
> >    are aligned on a 4k boundary.
> >  - update the comment for the setup_initial_pagetable function
> >  - Add RV_STAGE1_MODE to support different MMU modes
> >  - set XEN_VIRT_START very high to not overlap with the load address
> >    range
> >  - align bss section
> > ---
> > Changes in V2:
> >  * update the commit message:
> >  * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK,SIZE,...} and
> >    introduce XEN_PT_LEVEL_*() and LEVEL_* instead
> >  * Rework pt_linear_offset() and pt_index based on XEN_PT_LEVEL_*()
> >  * Remove clear_pagetables() functions as pagetables were zeroed
> >    during .bss initialization
> >  * Rename _setup_initial_pagetables() to setup_initial_mapping()
> >  * Make PTE_DEFAULT equal to RX.
> >  * Update prototype of setup_initial_mapping(..., bool writable) ->
> >    setup_initial_mapping(..., UL flags)
> >  * Update calls of setup_initial_mapping according to the new prototype
> >  * Remove unnecessary call of:
> >    _setup_initial_pagetables(..., load_addr_start, load_addr_end,
> >    load_addr_start, ...)
> >  * Define index* in the loop of setup_initial_mapping
> >  * Remove attribute "__attribute__((section(".entry")))" for
> >    setup_initial_pagetables() as we don't have such a section
> >  * make arguments of paddr_to_pte() and pte_is_valid() const.
> >  * make xen_second_pagetable static.
> >  * use <xen/kernel.h> instead of declaring extern unsigned long
> >    _stext, _etext, _srodata, _erodata
> >  * update 'extern unsigned long __init_begin' to 'extern unsigned
> >    long __init_begin[]'
> >  * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
> >  * set __section(".bss.page_aligned") for page table arrays
> >  * fix indentations
> >  * Change '__attribute__((section(".entry")))' to '__init'
> >  * Remove phys_offset as it isn't used now.
> >  * Remove alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
> >    setup_initial_mapping() as they should be already aligned.
> >  * Remove clear_pagetables() as initial pagetables will be
> >    zeroed during bss initialization
> >  * Remove __attribute__((section(".entry"))) for
> >    setup_initial_pagetables() as there is no such section in xen.lds.S
> >  * Update the argument of pte_is_valid() to "const pte_t *p"
> > ---
> >
> >  xen/arch/riscv/Makefile                |   1 +
> >  xen/arch/riscv/include/asm/config.h    |  12 +-
> >  xen/arch/riscv/include/asm/mm.h        |   9 +
> >  xen/arch/riscv/include/asm/page-bits.h |  10 +
> >  xen/arch/riscv/include/asm/page.h      |  65 +++++
> >  xen/arch/riscv/mm.c                    | 319 +++++++++++++++++++++++++
> >  xen/arch/riscv/riscv64/head.S          |   2 +
> >  xen/arch/riscv/setup.c                 |  11 +
> >  xen/arch/riscv/xen.lds.S               |   4 +
> >  9 files changed, 432 insertions(+), 1 deletion(-)
> >  create mode 100644 xen/arch/riscv/include/asm/mm.h
> >  create mode 100644 xen/arch/riscv/include/asm/page.h
> >  create mode 100644 xen/arch/riscv/mm.c
> >
> > diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> > index 443f6bf15f..956ceb02df 100644
> > --- a/xen/arch/riscv/Makefile
> > +++ b/xen/arch/riscv/Makefile
> > @@ -1,5 +1,6 @@
> >  obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
> >  obj-y += entry.o
> > +obj-y += mm.o
> >  obj-$(CONFIG_RISCV_64) += riscv64/
> >  obj-y += sbi.o
> >  obj-y += setup.o
> > diff --git a/xen/arch/riscv/include/asm/config.h
> > b/xen/arch/riscv/include/asm/config.h
> > index 763a922a04..0cf9673558 100644
> > --- a/xen/arch/riscv/include/asm/config.h
> > +++ b/xen/arch/riscv/include/asm/config.h
> > @@ -39,12 +39,22 @@
> >    name:
> >  #endif
> >  
> > -#define XEN_VIRT_START  _AT(UL, 0x80200000)
> > +#ifdef CONFIG_RISCV_64
> > +#define XEN_VIRT_START 0xFFFFFFFFC0000000 /* (_AC(-1, UL) + 1 - GB(1)) */
> > +#else
> > +#error "RV32 isn't supported"
> > +#endif
> >  
> >  #define SMP_CACHE_BYTES (1 << 6)
> >  
> >  #define STACK_SIZE PAGE_SIZE
> >  
> > +#ifdef CONFIG_RISCV_64
> > +#define RV_STAGE1_MODE SATP_MODE_SV39
> > +#else
> > +#define RV_STAGE1_MODE SATP_MODE_SV32
> > +#endif
> > +
> >  #endif /* __RISCV_CONFIG_H__ */
> >  /*
> >   * Local variables:
> > diff --git a/xen/arch/riscv/include/asm/mm.h
> > b/xen/arch/riscv/include/asm/mm.h
> > new file mode 100644
> > index 0000000000..e16ce66fae
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/mm.h
> > @@ -0,0 +1,9 @@
> > +#ifndef _ASM_RISCV_MM_H
> > +#define _ASM_RISCV_MM_H
> > +
> > +void setup_initial_pagetables(void);
> > +
> > +void enable_mmu(void);
> > +void cont_after_mmu_is_enabled(void);
> > +
> > +#endif /* _ASM_RISCV_MM_H */
> > diff --git a/xen/arch/riscv/include/asm/page-bits.h
> > b/xen/arch/riscv/include/asm/page-bits.h
> > index 1801820294..0879a527f2 100644
> > --- a/xen/arch/riscv/include/asm/page-bits.h
> > +++ b/xen/arch/riscv/include/asm/page-bits.h
> > @@ -1,6 +1,16 @@
> >  #ifndef __RISCV_PAGE_BITS_H__
> >  #define __RISCV_PAGE_BITS_H__
> >  
> > +#ifdef CONFIG_RISCV_64
> > +#define PAGETABLE_ORDER         (9)
> > +#else /* CONFIG_RISCV_32 */
> > +#define PAGETABLE_ORDER         (10)
> > +#endif
> > +
> > +#define PAGETABLE_ENTRIES       (1 << PAGETABLE_ORDER)
> > +
> > +#define PTE_PPN_SHIFT           10
> > +
> >  #define PAGE_SHIFT              12 /* 4 KiB Pages */
> >  #define PADDR_BITS              56 /* 44-bit PPN */
> >  
> > diff --git a/xen/arch/riscv/include/asm/page.h
> > b/xen/arch/riscv/include/asm/page.h
> > new file mode 100644
> > index 0000000000..30406aa614
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/page.h
> > @@ -0,0 +1,65 @@
> > +#ifndef _ASM_RISCV_PAGE_H
> > +#define _ASM_RISCV_PAGE_H
> > +
> > +#include <xen/const.h>
> > +#include <xen/types.h>
> > +
> > +#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
> > +
> > +#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
> > +#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
> > +#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
> > +#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
> > +#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
> > +
> > +#define PTE_VALID                   BIT(0, UL)
> > +#define PTE_READABLE                BIT(1, UL)
> > +#define PTE_WRITABLE                BIT(2, UL)
> > +#define PTE_EXECUTABLE              BIT(3, UL)
> > +#define PTE_USER                    BIT(4, UL)
> > +#define PTE_GLOBAL                  BIT(5, UL)
> > +#define PTE_ACCESSED                BIT(6, UL)
> > +#define PTE_DIRTY                   BIT(7, UL)
> > +#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
> > +
> > +#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
> > +#define PTE_TABLE                   (PTE_VALID)
> > +
> > +/* Calculate the offsets into the pagetables for a given VA */
> > +#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
> > +
> > +#define pt_index(lvl, va) pt_linear_offset(lvl, (va) & XEN_PT_LEVEL_MASK(lvl))
> > +
> > +/* Page Table entry */
> > +typedef struct {
> > +#ifdef CONFIG_RISCV_64
> > +uint64_t pte;
> > +#else
> > +uint32_t pte;
> > +#endif
> > +} pte_t;
> 
> Please indent both field declarations accordingly.
> 
> > +#define addr_to_pte(x) (((x) >> PTE_PPN_SHIFT) << PAGE_SHIFT)
> 
> This still looks to be converting _to_ an address, not to PTE layout,
> ...
> 
> > +/* Shift the VPN[x] or PPN[x] fields of a virtual or physical address
> > + * to become the shifted PPN[x] fields of a page table entry */
> > +#define ppn_to_paddr(x) (((x) >> PAGE_SHIFT) << PTE_PPN_SHIFT)
> 
> ... while this converts an address (not a ppn) to PTE layout (not a
> paddr). Getting the names of these helpers right is crucial for easy
> following of any code using them. To be honest, I'll stop reviewing
> here, because the names being wrong is just going to be too confusing.
You are right, the names are confusing and should be renamed.
Thanks.

~ Oleksii



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:39:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:39:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523485.813570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp829-0004qL-HS; Wed, 19 Apr 2023 13:39:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523485.813570; Wed, 19 Apr 2023 13:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp829-0004qC-El; Wed, 19 Apr 2023 13:39:41 +0000
Received: by outflank-mailman (input) for mailman id 523485;
 Wed, 19 Apr 2023 13:39:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ifGd=AK=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pp828-0004iP-Hf
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 13:39:40 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20601.outbound.protection.outlook.com
 [2a01:111:f400:7eae::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a01b7343-deb7-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 15:39:37 +0200 (CEST)
Received: from DS7PR03CA0223.namprd03.prod.outlook.com (2603:10b6:5:3ba::18)
 by BY5PR12MB4033.namprd12.prod.outlook.com (2603:10b6:a03:213::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 13:39:33 +0000
Received: from DM6NAM11FT009.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3ba:cafe::3c) by DS7PR03CA0223.outlook.office365.com
 (2603:10b6:5:3ba::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22 via Frontend
 Transport; Wed, 19 Apr 2023 13:39:33 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT009.mail.protection.outlook.com (10.13.173.20) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 13:39:33 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 19 Apr
 2023 08:39:31 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 19 Apr 2023 08:39:29 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a01b7343-deb7-11ed-8611-37d641c3527e
Message-ID: <5915f963-97d9-19f3-e797-3fd02b6fb406@amd.com>
Date: Wed, 19 Apr 2023 15:39:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
 <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
 <752ce1ba-8c23-e397-3f6a-15c93ac6cee0@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <752ce1ba-8c23-e397-3f6a-15c93ac6cee0@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT009:EE_|BY5PR12MB4033:EE_
X-MS-Office365-Filtering-Correlation-Id: 1876b437-0673-4bcc-94a0-08db40db824a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 13:39:33.0217
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1876b437-0673-4bcc-94a0-08db40db824a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT009.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4033

Hi Julien,

On 19/04/2023 15:26, Julien Grall wrote:
> 
> 
> Hi,
> 
>>> diff --git a/xen/include/xen/libfdt/libfdt-xen.h b/xen/include/xen/libfdt/libfdt-xen.h
>>> new file mode 100644
>>> index 0000000000..3296a368a6
>>> --- /dev/null
>>> +++ b/xen/include/xen/libfdt/libfdt-xen.h
>>> @@ -0,0 +1,55 @@
>>> +/*
>>> + * SPDX-License-Identifier: GPL-2.0-only
>> Our CODING_STYLE says:
>> New files should start with a single-line SPDX comment, ..., e.g.
>> /* SPDX-License-Identifier: GPL-2.0 */
>>
>> For me it would be perfectly fine to do as you did but it is not what our docs state
>> (i.e. single-line comment). It might be that we need to modify CODING_STYLE instead.
> 
>  From my reading of https://spdx.dev/ids/#how, what you suggest would
> not be a valid SPDX-License-Identifier, as everything needs to be on
> *one* line (including the begin/end comment).
What I said is what is written in our CODING_STYLE and what we have already done for lots of files, i.e. having two comments:
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * ...
 */

In the link you provided, the "*one line*" requirement refers only to the SPDX short-form identifier,
which does not contain any other information like author, description, etc.

~Michal


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:43:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523491.813581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp85m-0006LB-28; Wed, 19 Apr 2023 13:43:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523491.813581; Wed, 19 Apr 2023 13:43:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp85l-0006L4-Ux; Wed, 19 Apr 2023 13:43:25 +0000
Received: by outflank-mailman (input) for mailman id 523491;
 Wed, 19 Apr 2023 13:43:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dNGQ=AK=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pp85k-0006Ky-P6
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 13:43:24 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 26cbe37b-deb8-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 15:43:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26cbe37b-deb8-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom
 Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, Jürgen Groß <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <877cu83v45.ffs@tglx>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
 <87r0sh4m7a.ffs@tglx> <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
 <87a5z443g2.ffs@tglx> <877cu83v45.ffs@tglx>
Date: Wed, 19 Apr 2023 15:43:20 +0200
Message-ID: <874jpc3s3r.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Wed, Apr 19 2023 at 14:38, Thomas Gleixner wrote:
> On Wed, Apr 19 2023 at 11:38, Thomas Gleixner wrote:
> IOW, the BIOS assigns random numbers to the AP APICs for whatever
> raisins, which leaves the parallel startup low level code up a creek
> without a paddle, except for actually reading the APICID back from the
> APIC. *SHUDDER*

So Andrew just pointed out on IRC that this might be related to the
ancient issue of the 3-wire APIC bus where IO/APIC and APIC shared the
ID space, but that system is definitely post 3-wire APIC :)

Thanks,

        tglx




From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:51:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:51:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523497.813590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8DV-0007pE-QH; Wed, 19 Apr 2023 13:51:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523497.813590; Wed, 19 Apr 2023 13:51:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8DV-0007p7-NX; Wed, 19 Apr 2023 13:51:25 +0000
Received: by outflank-mailman (input) for mailman id 523497;
 Wed, 19 Apr 2023 13:51:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDf1=AK=citrix.com=prvs=466cd93b2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pp8DU-0007oz-3s
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 13:51:24 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 40c830bd-deb9-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 15:51:18 +0200 (CEST)
Received: from mail-mw2nam12lp2049.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.49])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 09:51:09 -0400
Received: from BN7PR03MB3618.namprd03.prod.outlook.com (2603:10b6:406:c3::27)
 by SA1PR03MB6593.namprd03.prod.outlook.com (2603:10b6:806:1c8::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 13:51:06 +0000
Received: from BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::b6eb:c7db:7393:52b3]) by BN7PR03MB3618.namprd03.prod.outlook.com
 ([fe80::b6eb:c7db:7393:52b3%7]) with mapi id 15.20.6319.022; Wed, 19 Apr 2023
 13:51:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40c830bd-deb9-11ed-8611-37d641c3527e
Message-ID: <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
Date: Wed, 19 Apr 2023 14:50:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Content-Language: en-GB
To: Thomas Gleixner <tglx@linutronix.de>, Paul Menzel <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 Jürgen Groß <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E. J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
 <87r0sh4m7a.ffs@tglx> <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
 <87a5z443g2.ffs@tglx> <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
In-Reply-To: <874jpc3s3r.ffs@tglx>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0348.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:d::24) To BN7PR03MB3618.namprd03.prod.outlook.com
 (2603:10b6:406:c3::27)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN7PR03MB3618:EE_|SA1PR03MB6593:EE_
X-MS-Office365-Filtering-Correlation-Id: 2b7f846f-55fa-4669-a5cf-08db40dd1eea
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com

On 19/04/2023 2:43 pm, Thomas Gleixner wrote:
> On Wed, Apr 19 2023 at 14:38, Thomas Gleixner wrote:
>> On Wed, Apr 19 2023 at 11:38, Thomas Gleixner wrote:
>> IOW, the BIOS assigns random numbers to the AP APICs for whatever
>> raisins, which leaves the parallel startup low level code up a creek
>> without a paddle, except for actually reading the APICID back from the
>> APIC. *SHUDDER*
> So Andrew just pointed out on IRC that this might be related to the
> ancient issue of the 3-wire APIC bus where IO/APIC and APIC shared the
> ID space, but that system is definitely post 3-wire APIC :)

Doesn't mean the BIOS code was updated adequately following that.

What I'm confused by is why this system boots in the first place.  I can
only think that it's a system which only has 4-bit APIC IDs, and happens
to function when bit 4 gets truncated off the top of the SIPI destination...

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:54:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:54:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Message-ID: <8560fac2-5c92-8362-090d-bbaeae3f5d22@amd.com>
Date: Wed, 19 Apr 2023 15:54:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
 <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
In-Reply-To: <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit


On 19/04/2023 15:19, Michal Orzel wrote:
> 
> 
> Hi Ayan,
> 
> On 13/04/2023 19:37, Ayan Kumar Halder wrote:
>>
>>
>> The DT functions (dt_read_number(), device_tree_get_reg(), fdt_get_mem_rsv())
>> currently accept or return 64-bit values.
>>
>> In the future, when we support 32-bit physical addresses, these DT functions
>> are expected to accept/return 32-bit or 64-bit values (depending on the width
>> of the physical address). Also, we wish to detect whether any truncation has
>> occurred (i.e. while parsing 32-bit physical addresses from 64-bit values read
>> from DT).
>>
>> device_tree_get_reg() should now be able to return paddr_t. This is invoked by
>> various callers to get DT address and size.
>>
>> For fdt_get_mem_rsv(), we have introduced a wrapper named
>> fdt_get_mem_rsv_paddr() which will invoke fdt_get_mem_rsv() and translate
>> uint64_t to paddr_t. The reason is that we cannot modify fdt_get_mem_rsv(),
>> as it has been imported from an external source.
>>
>> For dt_read_number(), we have also introduced a wrapper named dt_read_paddr()
>> to read physical addresses. We chose not to modify the original function as it
>> is used in places where it needs to specifically read 64-bit values from DT
>> (e.g. dt_property_read_u64()).
>>
>> Xen prints a warning when it detects truncation in cases where it is not
>> able to return an error.
>>
>> Also, replaced u32/u64 with uint32_t/uint64_t in the functions touched
>> by the code changes.
>>
>> Also, initialized variables to fix the warning "-Werror=maybe-uninitialized".
> I can see that now you explicitly set to 0 variables passed to fdt_get_mem_rsv_paddr()
> which haven't been initialized before being passed to fdt_get_mem_rsv(). Is this what
> you are referring to? I cannot reproduce it, hence my question.
I can see why you got this error.
Before your change, we always checked for an error from fdt_get_mem_rsv() by
checking whether the return value was < 0. In your wrapper
fdt_get_mem_rsv_paddr(), you switched (not sure why) to checking whether it was
non-zero. Because of this, you got the error and tried to fix it by
initializing the variables to 0.

~Michal


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 13:58:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 13:58:52 +0000
Message-ID: <c736271f-96ad-dfdb-48ae-b8e9cc002d9b@suse.com>
Date: Wed, 19 Apr 2023 15:58:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v6] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <116bb94f-955c-4c46-f16a-d7a5e1cc72b5@suse.com>
 <ZD6AejXJxQxAyrx1@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD6AejXJxQxAyrx1@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 18.04.2023 13:35, Roger Pau Monné wrote:
> On Tue, Apr 18, 2023 at 11:24:19AM +0200, Jan Beulich wrote:
>> ... in order to also intercept Dom0 accesses through the alias ports.
>>
>> Also stop intercepting accesses to the CMOS ports if we won't ourselves
>> use the CMOS RTC, because of there being none.
>>
>> Note that rtc_init() deliberately uses 16 as the upper loop bound,
>> despite probe_cmos_alias() using 8: The higher bound is benign now, but
>> would save us touching the code (or, worse, forgetting to touch it) in case
>> the lower one was doubled.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Before committing I went back to read through the doc and earlier comments,
in particular regarding the NMI disable. As a result I'm now inclined
to follow your earlier request and fold in the change below. Thoughts?

Jan

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1305,6 +1305,13 @@ bool is_cmos_port(unsigned int port, uns
 {
     unsigned int offs;
 
+    /*
+     * While not really CMOS-related, port 0x70 always needs intercepting
+     * to deal with the NMI disable bit.
+     */
+    if ( port <= RTC_PORT(0) && port + bytes > RTC_PORT(0) )
+        return true;
+
     if ( !is_hardware_domain(d) )
         return port <= RTC_PORT(1) && port + bytes > RTC_PORT(0);
 
@@ -1342,6 +1349,17 @@ unsigned int rtc_guest_read(unsigned int
          * underlying hardware would permit doing so.
          */
         data = currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(0)));
+
+        /*
+         * When there's (supposedly) no RTC/CMOS, we don't intercept the other
+         * ports. While reading the index register isn't normally possible,
+         * play safe and return back whatever can be read (just in case a value
+         * written through an alias would be attempted to be read back here).
+         */
+        if ( port == RTC_PORT(0) &&
+             (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) &&
+             ioports_access_permitted(currd, port, port) )
+            data = inb(port) & 0x7f;
         break;
 
     case RTC_PORT(1):
@@ -1378,6 +1396,16 @@ void rtc_guest_write(unsigned int port,
          * ports.
          */
         currd->arch.cmos_idx = data & (0xff >> (port == RTC_PORT(0)));
+
+        /*
+         * When there's (supposedly) no RTC/CMOS, we don't intercept the other
+         * ports. Therefore the port write, with the NMI disable bit zapped,
+         * needs carrying out right away.
+         */
+        if ( port == RTC_PORT(0) &&
+             (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) &&
+             ioports_access_permitted(currd, port, port) )
+            outb(data & 0x7f, port);
         break;
 
     case RTC_PORT(1):




From xen-devel-bounces@lists.xenproject.org Wed Apr 19 14:02:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 14:02:57 +0000
Message-ID: <c18daf62-47be-8f17-7e18-4dc8952fd5b8@suse.com>
Date: Wed, 19 Apr 2023 16:02:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2] xen/vcpu: ignore VCPU_SSHOTTMR_future
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230419114556.34856-1-roger.pau@citrix.com>
 <e27c6431-053f-924e-27c4-763b3c45fdc2@suse.com>
 <ZD/q/3yI0+8gfJ0g@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD/q/3yI0+8gfJ0g@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0159.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7202:EE_
X-MS-Office365-Filtering-Correlation-Id: f6475e78-c0da-4917-c267-08db40deb0f4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f6475e78-c0da-4917-c267-08db40deb0f4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 14:02:20.0276
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8JVbfBopvZEDNIozbeypmgBZ2mzNnxRtck2fW8gCTo8iufis2wi5mhV+5JkMpNBtSUFO+pUD2iw/+vXus85Hqg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7202

On 19.04.2023 15:22, Roger Pau Monné wrote:
> On Wed, Apr 19, 2023 at 02:14:44PM +0200, Jan Beulich wrote:
>> On 19.04.2023 13:45, Roger Pau Monne wrote:
>>> --- a/xen/include/public/vcpu.h
>>> +++ b/xen/include/public/vcpu.h
>>> @@ -150,7 +150,7 @@ typedef struct vcpu_set_singleshot_timer vcpu_set_singleshot_timer_t;
>>>  DEFINE_XEN_GUEST_HANDLE(vcpu_set_singleshot_timer_t);
>>>  
>>>  /* Flags to VCPUOP_set_singleshot_timer. */
>>> - /* Require the timeout to be in the future (return -ETIME if it's passed). */
>>> + /* Ignored. */
>>
>> I think this could do with something like "as of Xen 4.18", as the public
>> header shouldn't be tied to a specific version (and then perhaps also
>> retaining the original text). Arguably mentioning a specific version may be
>> a little odd in case we'd consider backporting this, but imo something
>> should be said.
> 
> Hm, at least for XenServer we will backport this to 4.13, other
> vendors might backport to different versions, at which point I'm not
> sure the comment is very helpful.  It can be misleading because it
> might seem to imply that checking the Xen version will tell you
> whether the flag is ignored or not.
> 
> What about:
> 
> /*
>  * Request the timeout to be in the future (return -ETIME if it's passed)
>  * but can be ignored by the hypervisor.
>  */

Would be fine with me.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 14:21:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 14:21:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523518.813631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8fv-0004gs-3y; Wed, 19 Apr 2023 14:20:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523518.813631; Wed, 19 Apr 2023 14:20:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8fv-0004gl-1M; Wed, 19 Apr 2023 14:20:47 +0000
Received: by outflank-mailman (input) for mailman id 523518;
 Wed, 19 Apr 2023 14:20:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pp8ft-0004gf-E1
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 14:20:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pp8fs-0006qG-Hy; Wed, 19 Apr 2023 14:20:44 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.29.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pp8fs-0000EE-9b; Wed, 19 Apr 2023 14:20:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=CGAyeELYbnc6DNVGhu+h9UWuX5iEvXwGyTwvm3jBQ8k=; b=vQQ2glI0FT6JZQK27jWrxb1009
	th20PsbAdy4Wv1yy7s54YREWd6amZNcXBywBi27lleG+NKut6kIMPSK/mfd0OrIgRImmNgMooOT2f
	1WtizPgX9pNcRAIeYmrmBfXh9IQJ9i/FT0loYET5dtcSoexA342a+NpbpaGm+W5yPnBk=;
Message-ID: <fcb46556-c729-df8a-1db1-820f7400e429@xen.org>
Date: Wed, 19 Apr 2023 15:20:40 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
 <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
 <752ce1ba-8c23-e397-3f6a-15c93ac6cee0@xen.org>
 <5915f963-97d9-19f3-e797-3fd02b6fb406@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <5915f963-97d9-19f3-e797-3fd02b6fb406@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 19/04/2023 14:39, Michal Orzel wrote:
> Hi Julien,
> 
> On 19/04/2023 15:26, Julien Grall wrote:
>>
>>
>> Hi,
>>
>>>> diff --git a/xen/include/xen/libfdt/libfdt-xen.h b/xen/include/xen/libfdt/libfdt-xen.h
>>>> new file mode 100644
>>>> index 0000000000..3296a368a6
>>>> --- /dev/null
>>>> +++ b/xen/include/xen/libfdt/libfdt-xen.h
>>>> @@ -0,0 +1,55 @@
>>>> +/*
>>>> + * SPDX-License-Identifier: GPL-2.0-only
>>> Our CODING_STYLE says:
>>> New files should start with a single-line SPDX comment, ..., e.g.
>>> /* SPDX-License-Identifier: GPL-2.0 */
>>>
>>> For me it would be perfectly fine to do as you did but it is not what our docs state
>>> (i.e. single-line comment). It might be that we need to modify CODING_STYLE instead.
>>
>>   From my reading of https://spdx.dev/ids/#how, what you suggest would
>> not be a valid SPDX-License-Identifier, as everything needs to be on
>> *one* line (including the begin/end comment).
> I quoted what is written in our CODING_STYLE and what we have already done for lots of files, i.e. having two comments:
> /* SPDX-License-Identifier: GPL-2.0 */
> /*
>   * ...
>   */

My comment was about your suggestion to update the CODING_STYLE, not
about what is currently written. Sorry if this wasn't clear enough.

> In the link you provided, the "*one line*" requirement refers only to
> the SPDX short form identifier
> which does not contain any other information like author, description, etc..
Right, because the SPDX identifier is intended to be in its own block,
and there is another block for the author, description, etc.

I am not in favor of changing the CODING_STYLE, and I thought I would
say so right now to spare the round-trip of writing a patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 14:29:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 14:29:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523522.813640 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8oB-0005L0-VA; Wed, 19 Apr 2023 14:29:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523522.813640; Wed, 19 Apr 2023 14:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8oB-0005Kt-SS; Wed, 19 Apr 2023 14:29:19 +0000
Received: by outflank-mailman (input) for mailman id 523522;
 Wed, 19 Apr 2023 14:29:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ifGd=AK=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pp8o9-0005Kn-UG
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 14:29:18 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20614.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::614])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8f104cff-debe-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 16:29:14 +0200 (CEST)
Received: from BN9PR03CA0394.namprd03.prod.outlook.com (2603:10b6:408:111::9)
 by BL0PR12MB5009.namprd12.prod.outlook.com (2603:10b6:208:1c2::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 14:29:10 +0000
Received: from BN8NAM11FT017.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:111:cafe::fa) by BN9PR03CA0394.outlook.office365.com
 (2603:10b6:408:111::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.47 via Frontend
 Transport; Wed, 19 Apr 2023 14:29:10 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT017.mail.protection.outlook.com (10.13.177.93) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 14:29:10 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 19 Apr
 2023 09:29:10 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 19 Apr
 2023 07:29:09 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 19 Apr 2023 09:29:07 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f104cff-debe-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BX4NlSJfUQZL6A9Baa9+U61yjvbXc9ipY8wq06jPbhAaH39QM8JI4UtGiARoq0ZyJOHTcP+zNg2iR3SiSsQMej3zONBh/mWArLxGTGaATZCR7zCXRo8AoLjmWTX3UTZvqGTo3pO/OHWc9dzLhoBGI64mjSn+c4TD56mQa6/BU3i2Vl12yxQ6DY0tyiWO3r6d5yRnanKwjLsElpOB95MrjYWhcxR6atXQJOLl5rkIHx7bKpPt90G6S0iuM6UFdLgdShLgBhGXj+iwfd6JHt7rgpZvPd93iG/Qk8jme/LkUdQMFnRq33qCA9VZrhVPF8o0Og9hm9/FWRUz3LGxQqFOYQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=BTbDgP5w/WjI39w2xuG1k38dsw3CiWoafhOH97Jjcuw=;
 b=aPyB1Xc1wNoVz7g2bSQ85Lw5XAHVnA0pJpaEMErMc7q1UklGdzNF2AJiEjxXdU/9odGbvwfqlPIEA+GZs7Yha9sDRNXwxYy3AzcuXzZYCcWACCQynAtqwKcuMO2lmz+R1AzLsNBk6MtSXgoN8LC9IGBnEz8eVMaaFNGj/VwUcAJ2qL4eSLkS+q6NBhXAZvq1K+Xr+FHlAmwRDk2OU53phnhEJGttaIulLfTaK58tffSVQYRV50KW3/vTfLS7h31oGEaSA90UTezY0W92Fndff7evJULEcogmEQw/+fA/SwQZekW1e3J1WPysfRMnwAmKFWG/xFuKBarFeCpoDQxyeQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BTbDgP5w/WjI39w2xuG1k38dsw3CiWoafhOH97Jjcuw=;
 b=Gpyygrg7rsLLZ9V1/xLSassOY46TNvdVviOovT3adqKjReyWN5svVH6bUsbvR7f1oGT/AfoBWAX5502d+pn/mSyI5xQTl60JT6XBpmkP4xee8sO2sqR2WoUOOuMdgQSqZupdekeP2KFW4/x6tHRcd9EYgxoPIrXvS3jRj6mMKLo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <0a1246d2-092a-2767-4c1e-0db4ddf71013@amd.com>
Date: Wed, 19 Apr 2023 16:29:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
 <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
 <752ce1ba-8c23-e397-3f6a-15c93ac6cee0@xen.org>
 <5915f963-97d9-19f3-e797-3fd02b6fb406@amd.com>
 <fcb46556-c729-df8a-1db1-820f7400e429@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <fcb46556-c729-df8a-1db1-820f7400e429@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT017:EE_|BL0PR12MB5009:EE_
X-MS-Office365-Filtering-Correlation-Id: 0a58a53c-0875-456a-250d-08db40e270eb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 14:29:10.3833
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0a58a53c-0875-456a-250d-08db40e270eb
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT017.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR12MB5009


On 19/04/2023 16:20, Julien Grall wrote:
> 
> 
> On 19/04/2023 14:39, Michal Orzel wrote:
>> Hi Julien,
>>
>> On 19/04/2023 15:26, Julien Grall wrote:
>>>
>>>
>>> Hi,
>>>
>>>>> diff --git a/xen/include/xen/libfdt/libfdt-xen.h b/xen/include/xen/libfdt/libfdt-xen.h
>>>>> new file mode 100644
>>>>> index 0000000000..3296a368a6
>>>>> --- /dev/null
>>>>> +++ b/xen/include/xen/libfdt/libfdt-xen.h
>>>>> @@ -0,0 +1,55 @@
>>>>> +/*
>>>>> + * SPDX-License-Identifier: GPL-2.0-only
>>>> Our CODING_STYLE says:
>>>> New files should start with a single-line SPDX comment, ..., e.g.
>>>> /* SPDX-License-Identifier: GPL-2.0 */
>>>>
>>>> For me it would be perfectly fine to do as you did but it is not what our docs state
>>>> (i.e. single-line comment). It might be that we need to modify CODING_STYLE instead.
>>>
>>>   From my reading of https://spdx.dev/ids/#how, what you suggest would
>>> not be a valid SPDX-License-Identifier, as everything needs to be on
>>> *one* line (including the begin/end comment).
>> I quoted what is written in our CODING_STYLE and what we have already done for lots of files, i.e. having two comments:
>> /* SPDX-License-Identifier: GPL-2.0 */
>> /*
>>   * ...
>>   */
> 
> My comment was about your suggestion to update the CODING_STYLE, not
> about what is currently written. Sorry if this wasn't clear enough.
> 
>> In the link you provided, the "*one line*" requirement refers only to
>> the SPDX short form identifier
>> which does not contain any other information like author, description, etc..
> Right, because the SPDX identifier is intended to be in its own block,
> and there is another block for the author, description, etc.
> 
> I am not in favor of changing the CODING_STYLE, and I thought I would
> say so right now to spare the round-trip of writing a patch.
Sure thing :)
So, all in all, we agree that the SPDX comment must be a single-line comment
on its own (which is also what our CODING_STYLE states), and not placed as it
was in this patch, right?

If so, the wrong placement has somehow sneaked into the recently added RISC-V files:
arch/riscv/include/asm/csr.h
arch/riscv/include/asm/riscv_encoding.h
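[Archive note: for illustration, the placement CODING_STYLE prescribes, sketched on a made-up header. Only the first line carries the SPDX tag; author and description go in a separate block.]

```c
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * example.h (hypothetical file)
 *
 * Author, description, and other information live in this separate
 * comment block, below the single-line SPDX comment above.
 */
```

The form used in the patch, where the identifier is merely the first line inside the multi-line descriptive comment, is the placement being objected to.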

~Michal


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 14:32:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 14:32:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523527.813651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8r9-0006kB-DU; Wed, 19 Apr 2023 14:32:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523527.813651; Wed, 19 Apr 2023 14:32:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8r9-0006k4-AY; Wed, 19 Apr 2023 14:32:23 +0000
Received: by outflank-mailman (input) for mailman id 523527;
 Wed, 19 Apr 2023 14:32:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAk3=AK=citrix.com=prvs=46623c849=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pp8r8-0006jw-3q
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 14:32:22 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fbeb6f4d-debe-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 16:32:19 +0200 (CEST)
Received: from mail-bn7nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 10:32:09 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA2PR03MB5884.namprd03.prod.outlook.com (2603:10b6:806:f8::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 14:32:05 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 14:32:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fbeb6f4d-debe-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681914739;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=+cZqqRjoP/N1+XErZEgvp63Q1BcuHi5fGFvgZGYAYi8=;
  b=JjPJ/tMeXdNiFrKVEy9NVUUdXd9R/A/hepEvYTkogl3Rl/br7E/sy7nm
   946Xyi/huad3vgILss9dEff6KPlSTEvPfJ81SLMrmsKXdNsde+Q4CBMSR
   X8fhm0ET2R2XksK3OvnYiDgGcEX8sl2MfmoVguCfE1DKIbVQi692z7ieb
   A=;
X-IronPort-RemoteIP: 104.47.70.104
X-IronPort-MID: 108550460
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,208,1677560400"; 
   d="scan'208";a="108550460"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FTFHBd25KYuz5FpXNgfT0MHtdG8XrxA5w2yOpH9WEPYbJiZIB6Vm3DYGpFUusff2nfre0TnrGHlr/X1myOvgRsBS4CLBFwsbY4abYVPrU0u0irY5Ix/a6VkTGjj7n2paIoAqcpdtsZFpVgEHwnw6M5HalDmovC9/Mlsne3lNPEz+GwGfR3cR75B2XYBle3FKyBb5VzZyUY2d00UBlSqmoPSEMX2/dlbz7EPtU+gtMJlzGk4phbjSlpOQw2aY3PIKAeJrkA1x+YQj346w9CoWQJLav8okdmd2k+fDKaqoebokXVPapK9ynDBJD78WGfk7FctB0WFguW27ZbKjSBUUaw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Lj4RJ6agQM5HczTdtZAY9gfemG/J0BB2RY0p7h0DKPM=;
 b=KPZs+JfClQ3Mi95dnqne6t+2T2qJwVQ9uXmzS/8iYqpAYibNgSnyXvDJFld+yyfhwAe0sCZf3CsjkkIapUssI1GnlRJSct4t1aNjsv34oQzvhqjI5j8T952fq9nPh3xrCqs5u47aMwbcEpFsKGUqFErztuEJaNg8MQokEkqI0YwGpgH9fOACQHC6Zrof3++wFpsnTwPabwsi8Q5BaaGVWMH6XxM34+H3kU1sjbKCQbSyvnP0oMfji9mSiwT/NTTuqxV/Jl1mMUkYI+PK18pDIOMfM3rk7Ro5CwSeuVlaGTR5QBoZZycdXHY1u/LNnc0TJLOihsXaQK58Q3WIJaNRKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Lj4RJ6agQM5HczTdtZAY9gfemG/J0BB2RY0p7h0DKPM=;
 b=GVf4y30En45axHcCnIWTOAx6NNAGjDgU/tqBRllDf1oLjCiFX+TuutXh/n1APehtZY7Mq48WjMmaedSUPu7iBWPWmnFTlmed+X0xOWSFWAXPN8MC/1ezC/Cf8JkuohxgSMWOXenteOyX2ifPOFQPpBv7AqL6+2h4T8XIFRj+ZRo=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3] xen/vcpu: ignore VCPU_SSHOTTMR_future
Date: Wed, 19 Apr 2023 16:31:55 +0200
Message-Id: <20230419143155.36864-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0440.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a9::13) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SA2PR03MB5884:EE_
X-MS-Office365-Filtering-Correlation-Id: bde78113-365a-40db-6247-08db40e2d8c7
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	LNpQwg36wjEBorcQ6NusEDUB2HdasU0qHwQ7wmyqA6J6+bAmZ3onrBgBUVX6vPKRLwRsBHzN7VAjGBC/o9xH4R1E/oCKN/3Vdlpe1po8GHmQ+m/nym9Hi7CN+A7fJw3PQVYsvSM/ixUfWYbP1JkgW9svvIj+YlzoB75nJY/lgTZr1YEpUd4bPBxxBJAZF0On+0ShjvPrAC22/Yn8o7cveb04UhAe18f1aUPUgt3hd6O3TA03oTn5D2XraIOWSlxhmgk1KsEi0qfzk5ZGbLRAOknxfzonNVN9AZS72qLms+UkrPiw1a/u8MU6ExoPH3PGsv+tsIHKgvPnV1XpuwKF9MpTOpyVcxpKa1eJv3xQgR8DyDjDTjzkXYvfbfAfEXr0fth1Za5ua/pa7GGy7HqxNfYgu1glfF85CHGJr4PUDYTyvGrPuaFQKrP1l1OOl/bnhLCvtJhodIJhhSwu309l7FvM+QnkMX6352jdhE8LcMHAAC9erbWs5yPDNmMuYLkmSR8/EGlQPvAoqm9QAxfLhz7JvJu3jzIcgM4+4LJw4hI=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(346002)(376002)(396003)(366004)(451199021)(8936002)(38100700002)(36756003)(8676002)(7416002)(2906002)(86362001)(5660300002)(478600001)(6486002)(6666004)(54906003)(186003)(2616005)(6506007)(1076003)(66946007)(66476007)(6512007)(26005)(316002)(82960400001)(6916009)(83380400001)(66556008)(41300700001)(4326008);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WTRFOEJ5V25vTW45N0g3d1BOVUxETkFuSllMQVIwL2VQbEZaL2tsbUxZYzBa?=
 =?utf-8?B?dGg2UEt4eXAwRGtibU4xWWV3dGE2MjdLVS9DSEZ2TkxMQ2pNdDk2MEdOakF6?=
 =?utf-8?B?VlUzYnhaOFRRc1Q5MzZmRHQ5UTRNUXRGdzgyVHlQM2ZFVmkwM0VxZjl5TGZM?=
 =?utf-8?B?M3p3MVdlb2Y0ZjIxWjdzRmpEUWZJZWNrWVNTb2ZaaGxRTkxwRmxlNUFtYU5u?=
 =?utf-8?B?cENVWUdVcXhlWDZWV2Nvd1NVdVRjd0Nxa29xNjJURzE4OWI3K0lGa0pRcE5s?=
 =?utf-8?B?V1FBaThFakV0YnNyaE1WYVY4aCtVQlY0M3JKb2MxUEZreDBlU3RaSS91OG1s?=
 =?utf-8?B?RDNVYXliLzZZalhsNDExbmlNaDJRcElxSFhYWUVnNFVjSzJZVFZaN1J0dEpi?=
 =?utf-8?B?ZjJVSUNzOUQxZ1FFekpaa1ExVldCdFk3SnB5TSswUVJiS0VIT2kvdndVYlly?=
 =?utf-8?B?NGhaVG5hcjhWZy9xZ1UrMWZvODNkWmltRVZ4d1dRdjRKVlpvYzY1b3RReDda?=
 =?utf-8?B?TENOQWZiZnB3R2dJSjh4OWVFZXRRMHNZMGc1V0taTnBYYUZWZWFiSGp3OWlS?=
 =?utf-8?B?YThLNmg2R1JUeHJYd0Q1SDFqQ3dqOVJoMVAyRXNNYWwwL0FiRjhCNGpEUmFB?=
 =?utf-8?B?bm82SS9wMXFSd1FQM2NWL0hXd1U3UGlKckl0eXJtaGF1eUUyQTRyQi9KdWtv?=
 =?utf-8?B?TzAzSlBJS0R6ejZrd2NPdlZSb3kvalJnMUVORjM0RGx4djJ4ZENBaWxjLzBp?=
 =?utf-8?B?UDhUbTFSNDZsNlRSYVVJWFpGTGFESmE3UVdTUkVjUkZtdXIzTzdFa09Sb1NN?=
 =?utf-8?B?NDgxZTByb1daYi82UytBdDBWSXd3clBReWx3aDlydHl5S3lacVFiZUxiWkxs?=
 =?utf-8?B?ZmVIVXJNY0ZVQmVwYlB1Qkp1c0JNc2IvVHY0V1RGUGFYTkY2WjNXUzNiUExO?=
 =?utf-8?B?YWlDa0FIWW9XK25DdkZSRElVazN3enBWR1JBOXVDWVB5WWdiTS9ldkNsa2Nk?=
 =?utf-8?B?R2tqWXQwd0R3Y3l4SDNVb1F5OXNCSEp1aW5PNHZaclFwVHkyVExBRlJHMXpL?=
 =?utf-8?B?alRBY3RySnREY1NmK0YyTCtWSnNoY052OXpFNzhUU1BkblVSSmx4Z3Izczkr?=
 =?utf-8?B?c0tZaGp5SFc5NXJmM2ZmaStTejdhL0EzMnE2b3ZSYjVWVjdLSkFwYkcwNjZY?=
 =?utf-8?B?L2Z0QUZNMmFjbXBlK2pmTzVRS0xnbVJKRmNEc2pKQXVwR2VkYzE3WmQrdTJl?=
 =?utf-8?B?SmhhZGoxek9WdDFxVk5aaGZtd25QMjEvcklUdnd2MlNxcHJubndROUFRK3lj?=
 =?utf-8?B?Sk9XcGd6eGNTRDI4bHdqdUdkTm1lTWFGeWZzNzFCQTVuV1pFcnhkdysyNnNZ?=
 =?utf-8?B?T1RTZDhFUjNjRlhlb2k1V3BZVTRkWnVGSTN0eVpXVnJXeHJMWHFaWWJ3eWp0?=
 =?utf-8?B?WldybXU0OW5qS1NESVFKeWV4cUNJQjZjREpZZUFkS2VzK3U4YytRalczayti?=
 =?utf-8?B?WExKd2xoZXRYWjRVNFpReXdLNWROQWFhTXVpSS9aQ3VUZmx1NWdUMHBYUlZP?=
 =?utf-8?B?eEJSMTVWWS9veUdsMUR1TGNPNHNlVjZsYVhKZWNzeks4OGFSSkpUS1YxMjJK?=
 =?utf-8?B?dmM1SXluaC94SG9leDJMTkI4MUFRc1cxK2lnSFo3MytZNUJ1cVBaQ21lUVlu?=
 =?utf-8?B?UXZaQWRxSnB1U0p5NGxlWG5UR1VJTzcra1MvSERZZnRObnRpUnBrYnZVclJv?=
 =?utf-8?B?cS9aMjFnMUQxZFAwditWMlpOcjlWSG5XSVp1Mm1GdlZHMlRnazVoZEhnRWFZ?=
 =?utf-8?B?dU1xZlFFdlJLQWFXZm5TOFlTaExCaThhUXhLK0NrVzR4TUx6MUFCSGsveWpR?=
 =?utf-8?B?NytrNVJaeFAvQ0tRNmJHbFZlTWdmMjJIajNuNGVheW1HOE92SmJ6WW1sZVQx?=
 =?utf-8?B?TkZJR0RLZlh2dlhRSmN3cjNtMUlxano0NVJiVDhVbEp1YmF1Q255MTdGVkta?=
 =?utf-8?B?T2QvNUFGZCtYUlphWi9NWUxKTDgyQS9Oam8yVit6MFZnZ0MvWjZtZVFQR0Na?=
 =?utf-8?B?VFZnTXRydjFFSVBJUG1xSFB6MzB1NXlpUzFYWnhCNXlPaXZvZWRsZlQrNFcv?=
 =?utf-8?B?MVZDdDNCTkZianRmekJnNS8va0FlSjlTNkljaWxaV01mU2RvL2tiSVVNb2Rl?=
 =?utf-8?B?aGc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	n/zM06wXCBt7fEuTq5/JQCZQAwWkRHF3JL1x+qgRZmfcigwR68oFFx36t5B+bMAEPlDugeuIpdnIbMIQzG/5JbjNYtMpoK9IDPfXvgaJ8DeucbRNzQBUqdbKZ1KcG+2UFcj1QVjkjTWufmetYXGh6jM6xcuDz7duBAl8xbn+F9XgdbZ5FS3tdTwu+IHjAI4sCksIQKdnVfyj/wbwTDkp3SEKQWgqKkY4W/cl68RddRRDuqkMshaiUbSu69hntN0U2V3ra5wOA/j7Cas5BZCH8tJyKy4pKBw2tZSjd+RnkE6FIWEEX7j50sgBdM9gWCySblg5/oiD0HEdgqcfAXVehP17Aa7VnahWKvRkzoSlMyRY1UbtMRGu1o7tFAmC3NrvzLlvCp3U3Z4onWhvJSNeYYu5Ev0oLqipH6mR3QbPucj5ubuGSxCpiUhoCFzeszXvffgQAnyiuXVXrJ60Rgxw1qfItsuAPapFHtg7kO9AIPOCOSViw5+nhryIGeAkzgIaEQy6JWIPexpBcPpPO1RZaARMBqqMC0he9d6Tu/PO0A4VFLqe2YJsVFE+PcA092HNXcR1mLAreNt7Un0S4yiWdTRCFYDXG3Auh4gpOb4JXjpXRxopV713LqMdKo3cNuq8XdAM3a3jHFEw3uaNlVxcaQnkUEjnU3STHBzgEu0IwTO56uzpC1Z1XjlMMDFK7R0KGmujMNC78dLz+9rJefve5HA0SAIYU7K1mWxBl+BkwLkm/IgWUes7u9MMb8miC9V7igpcA4C6aIJAw4Z1PyoUZ87rvehgq5qUVCL+rFC5RYA6D6JtIombPms6wpsg7x0BFRZZZO65V9BIibiSyI4WIwx7Qq4Dw51CAsP48lGiZjzaFNS7o2q/YeiNtNChBUL3ybap6aAEzlb2JZ6o5g3DhrATnH0xu5R02tOO7Nwfqzs=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bde78113-365a-40db-6247-08db40e2d8c7
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 14:32:05.0101
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ALceAurmeDXYJKM4kV4/onJL78f166YK0d3a+9rNtMQDgcbq3v8J7hQd0tyA0r0jOQGckxZX4ktVO0JCUnhTVg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5884

The usage of VCPU_SSHOTTMR_future in Linux prior to 4.7 is bogus.
When the hypervisor returns -ETIME (timeout in the past) Linux keeps
retrying to set up the timer with a higher timeout instead of
self-injecting a timer interrupt.
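
To illustrate the failure mode, here is a hypothetical sketch (not the
actual Linux clockevents code) of how the guest-side retry loop can
spiral: each -ETIME makes the guest grow min_delta_ns and retry, but by
then "now" has moved on again, so it eventually gives up instead of
injecting the missed tick itself:

```c
#include <assert.h>

/* Hypothetical guest-side state: grows on every failed reprogram. */
static long long min_delta_ns = 100000;

/* Stand-in for VCPUOP_set_singleshot_timer with VCPU_SSHOTTMR_future set:
 * the hypervisor rejects timeouts that are already in the past. */
static int set_timer(long long now_ns, long long expires_ns)
{
    return expires_ns < now_ns ? -1 /* -ETIME */ : 0;
}

/* Sketch of the broken retry loop: on -ETIME, grow the delta and retry
 * instead of self-injecting a timer interrupt for the missed deadline. */
static int program_event(long long now_ns, long long expires_ns)
{
    int retries = 0;

    while ( set_timer(now_ns, expires_ns) != 0 )
    {
        if ( ++retries > 10 )
            return -1;  /* "CE: Reprogramming failure. Giving up" */
        min_delta_ns += min_delta_ns / 2;    /* grow min_delta_ns ... */
        expires_ns = now_ns + min_delta_ns;  /* ... pick a new deadline, */
        now_ns = expires_ns + 1;             /* but time has moved on again */
    }
    return 0;
}
```

With the hypervisor ignoring the flag, an expired timeout results in the
event being injected directly, so the loop above never starts.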

On boxes without any hardware assistance for logdirty we have seen HVM
Linux guests < 4.7 with 32 vCPUs give up trying to set up the timer when
logdirty is enabled:

CE: Reprogramming failure. Giving up
CE: xen increased min_delta_ns to 1000000 nsec
CE: Reprogramming failure. Giving up
CE: Reprogramming failure. Giving up
CE: xen increased min_delta_ns to 506250 nsec
CE: xen increased min_delta_ns to 759375 nsec
CE: xen increased min_delta_ns to 1000000 nsec
CE: Reprogramming failure. Giving up
CE: Reprogramming failure. Giving up
CE: Reprogramming failure. Giving up
Freezing user space processes ...
INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
Task dump for CPU 14:
swapper/14      R  running task        0     0      1 0x00000000
Call Trace:
 [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
 [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
 [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
 [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
 [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
 [<ffffffff900000d5>] ? start_cpu+0x5/0x14
INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
Task dump for CPU 26:
swapper/26      R  running task        0     0      1 0x00000000
Call Trace:
 [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
 [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
 [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
 [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
 [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
 [<ffffffff900000d5>] ? start_cpu+0x5/0x14
INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
Task dump for CPU 26:
swapper/26      R  running task        0     0      1 0x00000000
Call Trace:
 [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
 [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
 [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
 [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
 [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
 [<ffffffff900000d5>] ? start_cpu+0x5/0x14

This leads to CPU stalls and a broken system.

Work around this bogus usage by ignoring VCPU_SSHOTTMR_future in
the hypervisor.  Old Linux versions are the only ones known to have
(wrongly) attempted to use the flag, and ignoring it is compatible
with the behavior expected by any guests setting that flag.

Note the usage of the flag has been removed from Linux by commit:

c06b6d70feb3 xen/x86: don't lose event interrupts

which landed in Linux 4.7.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Henry Wang <Henry.Wang@arm.com> # CHANGELOG
---
Changes since v2:
 - Reword the comment about VCPU_SSHOTTMR_future being ignored.
 - s/ENOTIME/ETIME/ in the commit message.
 - Reword changelog message.

Changes since v1:
 - Just ignore the flag, as there's no ABI breakage.
---
 CHANGELOG.md              |  2 ++
 xen/common/domain.c       | 13 ++++++++++---
 xen/include/public/vcpu.h |  5 ++++-
 3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5dbf8b06d72c..5bfd3aa5c0d5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -9,6 +9,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 ### Changed
  - Repurpose command line gnttab_max_{maptrack_,}frames options so they don't
    cap toolstack provided values.
+ - Ignore VCPUOP_set_singleshot_timer's VCPU_SSHOTTMR_future flag. The only
+   known user doesn't use it properly, leading to in-guest breakage.
 
 ### Added
  - On x86, support for features new in Intel Sapphire Rapids CPUs:
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 626debbae095..6a440590fe2a 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1762,9 +1762,16 @@ long common_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( copy_from_guest(&set, arg, 1) )
             return -EFAULT;
 
-        if ( (set.flags & VCPU_SSHOTTMR_future) &&
-             (set.timeout_abs_ns < NOW()) )
-            return -ETIME;
+        if ( set.timeout_abs_ns < NOW() )
+        {
+            /*
+             * Simplify the logic if the timeout has already expired and just
+             * inject the event.
+             */
+            stop_timer(&v->singleshot_timer);
+            send_timer_event(v);
+            break;
+        }
 
         migrate_timer(&v->singleshot_timer, smp_processor_id());
         set_timer(&v->singleshot_timer, set.timeout_abs_ns);
diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 81a3b3a7438c..a836b264a911 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -150,7 +150,10 @@ typedef struct vcpu_set_singleshot_timer vcpu_set_singleshot_timer_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_set_singleshot_timer_t);
 
 /* Flags to VCPUOP_set_singleshot_timer. */
- /* Require the timeout to be in the future (return -ETIME if it's passed). */
+ /*
+  * Request the timeout to be in the future (return -ETIME if it's passed)
+  * but can be ignored by the hypervisor.
+  */
 #define _VCPU_SSHOTTMR_future (0)
 #define VCPU_SSHOTTMR_future  (1U << _VCPU_SSHOTTMR_future)
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 14:36:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 14:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523533.813661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8ur-0007PY-3t; Wed, 19 Apr 2023 14:36:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523533.813661; Wed, 19 Apr 2023 14:36:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8ur-0007PR-1E; Wed, 19 Apr 2023 14:36:13 +0000
Received: by outflank-mailman (input) for mailman id 523533;
 Wed, 19 Apr 2023 14:36:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pp8uq-0007PL-3W
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 14:36:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pp8up-0007KU-IG; Wed, 19 Apr 2023 14:36:11 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.29.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pp8up-0000sK-Ak; Wed, 19 Apr 2023 14:36:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Ya2xh48fL0W2mtUddvUGYN9g8EKkLWyBHqJGJGejEdE=; b=MJ+ulhP2i8dtXIFiTNecNW236b
	redCzOWXYcUdMsedKhtFEC9k+jE2L6/KxdQGsai0ndAQKq1eEk+wRN5//rvfp7ck2mvkgUkK4Vj0n
	hop5C2ecmosT8issExNvk1rcI6tFAz+5kXI6Wa2QT86+FArg/OMeqR5sCl+1XBp+XzZw=;
Message-ID: <ad321af4-574b-6133-9f10-1d41d4b544f0@xen.org>
Date: Wed, 19 Apr 2023 15:36:08 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
 <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
 <752ce1ba-8c23-e397-3f6a-15c93ac6cee0@xen.org>
 <5915f963-97d9-19f3-e797-3fd02b6fb406@amd.com>
 <fcb46556-c729-df8a-1db1-820f7400e429@xen.org>
 <0a1246d2-092a-2767-4c1e-0db4ddf71013@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <0a1246d2-092a-2767-4c1e-0db4ddf71013@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 19/04/2023 15:29, Michal Orzel wrote:
>> I am not in favor of changing the CODING_STYLE and I thought I would
>> express it right now to spare the round-trip of writing a patch.
> Sure thing :)
> So, all in all, we agree that the SPDX comment must be a single-line comment on its own
> (which is also stated in our CODING_STYLE) and not placed as it was in this patch, right?

Correct.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 14:39:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 14:39:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523537.813671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8xh-0007yp-Hr; Wed, 19 Apr 2023 14:39:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523537.813671; Wed, 19 Apr 2023 14:39:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp8xh-0007yi-Ep; Wed, 19 Apr 2023 14:39:09 +0000
Received: by outflank-mailman (input) for mailman id 523537;
 Wed, 19 Apr 2023 14:39:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pp8xf-0007ya-GE
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 14:39:07 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0607.outbound.protection.outlook.com
 [2a01:111:f400:fe02::607])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ef061e91-debf-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 16:39:05 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8735.eurprd04.prod.outlook.com (2603:10a6:102:21f::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 14:39:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 14:39:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef061e91-debf-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j3BdDt+8H0P+DlCngnec5CxLrDQoqcY7XkZzTZhYcYALpsXie0L+jXpgXgwWae+WStiLuoLpG/VdMru0mXCxPHdGjtuqOvBvtvengd5qkR3M5/iLysGh52QsEx6M7Mp4b5TgW9x9NxJ6cGkgpE1ieueU+KC9CAFdsCl+fS0fpN77j/Kddx6Kpf+Vnm3EfATOcDi9HtehPeb9ss0QK/FpFCqAPHQbmV5U+LIxSRitzNzm/ptiIlwPddA10DOKYugVl0qAriLHOoU8YhLJXZToM/b2Mb0+hHskqNxzSRFstRbmYYRuGLQWTQXADBmKmYS0qj/nKUKHaCBoSEJ2A1HONg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xssnYWGobsNkq1BAfJb2W+65MdVPlriedpNJgVRsLzY=;
 b=inx2ovpC0xM6jkwYNlUsWB1ITSYOiX0HYvqVhImwzYtT/6Z/XXoMR8814p8DlF0DJxv/8Wen3rMQF5Uj1S154Y8tCwuz0/WNPU/PihmjQJEOQZC74tSiSmzOkr2Hkx1ATPOTnClxFYXO6XdLhnJ87R38xhA2Va9JfIUE5NE7aYY4++SK3rpq9VUm8eM7pjC74YPnBdPlCuqROQGLtrsW5IPOGRnsOT/pco82pGcm+vCiQOIp4nXM7vzSUCJ0UbhCEBO9Bolvaskp1bKBbZ+8wBbnLS+cXD4/fSRWESuyUr/XkXd6vVxUWLzyuHaTSXEsW1bDJv+6JJqu20+z94b6pw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xssnYWGobsNkq1BAfJb2W+65MdVPlriedpNJgVRsLzY=;
 b=KYBhsXZob4jW3/7srTptVFpx1CpAyNEpoOmYDg+TgxVRojtUTJRYZI9soLLSMAX3lGeE2KCWbHQAdMM0wxnLhyc2jeHQ7EIVMKFzL93LHQ2XlEHv40a+8OkVthz57ui3717OA6FHLolFIndIB6qkZtpEjkbqIiguTsj5lknJfS2ukxGBDJZ2yjJPlYohs8fQEOcvhyXHSbO08oXrXlYuKrR1WG93WpfmfBpuSu16Urq24k6i2Yb0uNJG7F6ikaldTstvjLTZOAFt/1VVkpUjWTjrtTR45qqxQHBYdKd4OzJNk0HAhHTe4ZB4GN/1W8xmaTCw4iScXsq7puW7+yuFow==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4cfcaed2-21e9-a794-86b4-97f9b350c0d4@suse.com>
Date: Wed, 19 Apr 2023 16:39:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
 <ZD6V0wzw/VS/MMw/@Air-de-Roger>
 <d301e110-f840-a032-c406-2f7404752783@suse.com>
 <ZD+ljXSEPCmPMAtN@Air-de-Roger>
 <5c476b65-0340-2a0e-e436-46368d3236b7@suse.com>
 <ZD/UMyeckvCq0ivf@Air-de-Roger>
 <86823b76-6be1-da65-7608-af391ff48978@suse.com>
 <ZD/uX1VqYchQ4GgT@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZD/uX1VqYchQ4GgT@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0120.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8735:EE_
X-MS-Office365-Filtering-Correlation-Id: 000d9b4f-a9be-4fde-0d2e-08db40e3d22b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	H/6/ERKnqDZrlUykk+rLTfM/5aBdvTTK7FwoTfRVK2KXlFV/8tLsmSdf/+hFz+MPsAXzwhTXA27u5i0TfKWUIHXZdsUlubRc1d9W5/sMfqDQvANelkPHprxJ/GK/lq6GQw592MgcbpxYs4p7k+aFyoPPa9HlXNA/fU2Uxwc3Z6iWolhBALzKL8CKpaND9Wfjvpkxl2l4IpASOgZPtv/trHzhCg9+Dnzry3SD5nuLl2yamOH4oAyu10qctMJkFCNA8V1G6TkjRbhvVPZt0Ps5ob2MMBddk0nB32m8MIT5cAmO4Lj7enYAnGZX33Srh2f5Lcq1pU5EXnMwsvSgCF7wnywjkxVLlkSQFbayIvBUupnInR63pWnr8m5EVX2qJl27KP4winjSSprbpkOLPSSDRZuo/7CDLTg1M6WaDlUSGVEjkVkVA9xeqxM9PwK0rUTzqYpf2jQkILSWh11bl96SqM1EEi0iE+UbMxGVIRy8Wo8NCb8GgPoq/5m8Nr9pWwzzNHakuJvvemxNDw2T93vHw/VdNCpE2np3QwiQoW1MZX/3KOp1ASWkcsBF3VtPjbkqRZLWP0BDVpQZ/r7LLehau2QajvKMEyZBAmcN9+zISCbCjZz4gPyU/TUomRk7E4cmNuqg6J/c4xF4A5ML0HhBXw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(136003)(376002)(366004)(346002)(39860400002)(451199021)(36756003)(86362001)(31696002)(38100700002)(478600001)(26005)(6506007)(6512007)(966005)(6486002)(5660300002)(41300700001)(66556008)(4326008)(316002)(8676002)(54906003)(8936002)(2906002)(6916009)(66476007)(66946007)(31686004)(83380400001)(66899021)(2616005)(53546011)(186003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?YXMvWkROMU5MVTZ4dmxNNi9talo1T0pxUkE1c3BYREIvYjVMaVB0cW5WakV6?=
 =?utf-8?B?YjRZS1ZieldrcTVlNXRuSTRCc2N0VzNLdnpVU28xejI4MFVueFBsOWR0Ulkr?=
 =?utf-8?B?V3ZEUUZCSnM1ZDZUQUpLT2lPalc1MTduenNGeVJmWlhjSFhQZEM5TnlLSENR?=
 =?utf-8?B?VXVsRDl1OGpzVEJ0UHVsUVZkVlFQRnl1aHU1dEdJdHMrbUZYUzUrWGZXNWpB?=
 =?utf-8?B?RnlEY1MxVEVZRkpXMCswRk5lanFvemV2RTU5SFhqbGk2TVRTMjFyeW5WYTFu?=
 =?utf-8?B?Qy9vRGYwYmdzQUhCUFNwSEdNZmF0ZUNWcnJ5YjR6amtTN1NLWFlMdnFtN1J6?=
 =?utf-8?B?eWlNbnIzQytNVUtYOS9QRk4yY2VWOTNvL2c0WW5idXpqcGgvZDUxT2R5ZHNj?=
 =?utf-8?B?UUx3dXRjVHBVNU0rZG9mWk5vMUUwbTRVRXlTbE5RNUkvbWZZbmZIS1BkMmNr?=
 =?utf-8?B?Y2o5YlhaaDgxWXY1L3dVUWRKSXFOV01NV0FzM0xlbXQ0T2xvMWh5b1NYS3VI?=
 =?utf-8?B?MytoUERSK3BnS0ZLcDJtUWNoV3dqVThYaU5haGhlZTBxOFVoTG5hL01VMjE0?=
 =?utf-8?B?WU4vV1Z3UHQvTXVNdzVpOUtJTWliWjFTL2N6WlkxMlM1Y1hTK3ZzTWZaYlNW?=
 =?utf-8?B?SWVWbUViOGhITVlKL1ozZ3hNRUl1V0pCRHZFa1J1TTBNOFFpMkFLN1FsMTZy?=
 =?utf-8?B?OWFiQ0VvNnJqMzlDbmI0THV3a1lJUis4MTZ1NUpTVW1KSWZ5b0pQd2g4aHZ2?=
 =?utf-8?B?RjJsUy8yWW5teU1zQTF5UG5WVUdsTjZzS20rVld5anNqOWgwaFVtYUJ1T1lD?=
 =?utf-8?B?YXJPQ25kRGVpWlNTbFEvRm1aQXRpdmhPUjlEOTRKbVdjd0N5dGc0STY3WVRM?=
 =?utf-8?B?aCs2SWhodXhZeEs0Y3NjQzRtNitDMkJYck9HUkQ1dWQ4UkV6U1Z4WU5INkI5?=
 =?utf-8?B?OWFjajYybE9UMTZZeC9ralJJVmpXaHBzZDl0TEQ3TTZHZ1pCUlM4VTI3azVO?=
 =?utf-8?B?UHcxOXNQVDYybGtpMC8rUlB6cWJsL1JxaFVzSWJGUEJGUHBIaWxucHN5cktP?=
 =?utf-8?B?S0tQM2NWNlJnTnF0MkhRR2Z6U1FqODhLWjR3bXByeFZ0cCszUzdtczZROHVz?=
 =?utf-8?B?aENrQUFvam1Cb0E1ZGVsVXRaSEI0V3ZZbVp5enZ2UXdoYS9iblRSbVJqY25F?=
 =?utf-8?B?ZHVJcUxDVjJES3NWMWNTKy95Y3RueGVaNUVieEhPZnI1RUxwUWlPNVRwUHR6?=
 =?utf-8?B?bGlqUkd0NkVUUHd1VHgvY2RiaWt2NWFJNVFJdlVwUG9CRHVnUlJOcm5RQWl0?=
 =?utf-8?B?VjkwRUZUcmxoR20wMmQyOFhZTWlyRitzY0JMUGFyYmxQLytMbVhUWTNIVGR1?=
 =?utf-8?B?dXE3WFQxWW9UWmNUUG5sVTdZeGg4SlRkQ2pUdktmSDNtM0xjb05NUDZCSUI2?=
 =?utf-8?B?SUYxTU5TMDNlVEtXODJzSThEc1M1ZFZrQ28vQVpENGNYRmxISHRWUXpaa1o4?=
 =?utf-8?B?MXVNcVkwbi9RL1lwVGlzYmNxMUFtdmFZbEFPeWZNTHlEVjJPb1MvQjJTVERP?=
 =?utf-8?B?MzRXbVAyQ0l2QXkwWm0wRDV3eFpsb2s2elRMR0UwanU5cXdOOUwxTVJLV3Ay?=
 =?utf-8?B?ZzFiZllDcjB4NjcrUEZNN1I0Y0g1MzRnWjdwNzhOeWlsempxNWRXcWZYQzZ0?=
 =?utf-8?B?dFIxNzRnUUxXYW5TYU1XOWdscUtxZVFScUhQZURyTndWSVRrN3lqYnBSdzdl?=
 =?utf-8?B?VkZ4VVlKT3hiN0R4MTU4WkViNTlUbFcvdUZvVysybEhTbm1rV2RGZ010OWhY?=
 =?utf-8?B?d2FXcExHbUwzelVzRlo3cHR1K1BFN3ErVmdkVGVYbWVxQU9Yb1pXNFQ0TmVo?=
 =?utf-8?B?TVVUOEJ3OGtVQVVDaVRsK0w3cVhaNjhtR3JaYXYxYjZadEM3T0R1SCtQUVR1?=
 =?utf-8?B?czVJNVlOWUJwQTBzY0Z0Qkt0bytZYS9WRFFxTmNKVWhKNmk4ZlNSUlltYVg1?=
 =?utf-8?B?VlRXQVJtcm5xMHRjdkZ6YkdYcWJDUk9TSFh4ajQxeE02R1BVU0hVd1IwNGVm?=
 =?utf-8?B?dlQ2TDJaS3FZK28rSmZmTDV6UkFoQVRReElRclBleVQxYzlDYy9QcGp2dWhx?=
 =?utf-8?Q?El1c4OC9R5S/IV7l1RZcCH3H+?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 000d9b4f-a9be-4fde-0d2e-08db40e3d22b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 14:39:03.2269
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dAzlJQC8XB34porN/UNXvQnI3XNtwoPwR5FcdNKjhRncWkQ7ubbV1kJ4TxAz2J8tBRtYvyhErFNxx4014WLsIQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8735

On 19.04.2023 15:36, Roger Pau Monné wrote:
> On Wed, Apr 19, 2023 at 02:00:38PM +0200, Jan Beulich wrote:
>> On 19.04.2023 13:44, Roger Pau Monné wrote:
>>> On Wed, Apr 19, 2023 at 10:43:22AM +0200, Jan Beulich wrote:
>>>> On 19.04.2023 10:25, Roger Pau Monné wrote:
>>>>> On Wed, Apr 19, 2023 at 08:17:45AM +0200, Jan Beulich wrote:
>>>>>> On 18.04.2023 15:06, Roger Pau Monné wrote:
>>>>>>> On Tue, Apr 18, 2023 at 01:00:53PM +0200, Jan Beulich wrote:
>>>>>>>> On 18.04.2023 11:24, Roger Pau Monne wrote:
>>>>>>>>> --- a/xen/arch/x86/include/asm/config.h
>>>>>>>>> +++ b/xen/arch/x86/include/asm/config.h
>>>>>>>>> @@ -44,6 +44,20 @@
>>>>>>>>>  /* Linkage for x86 */
>>>>>>>>>  #ifdef __ASSEMBLY__
>>>>>>>>>  #define ALIGN .align 16,0x90
>>>>>>>>> +#ifdef CONFIG_LIVEPATCH
>>>>>>>>> +#define START_LP(name)                          \
>>>>>>>>> +  jmp name;                                     \
>>>>>>>>> +  .pushsection .text.name, "ax", @progbits;     \
>>>>>>>>> +  name:
>>>>>>>>> +#define END_LP(name)                            \
>>>>>>>>> +  .size name, . - name;                         \
>>>>>>>>> +  .type name, @function;                        \
>>>>>>>>> +  .popsection
>>>>>>>>> +#else
>>>>>>>>> +#define START_LP(name)                          \
>>>>>>>>> +  name:
>>>>>>>>> +#define END_LP(name)
>>>>>>>>> +#endif
>>>>>>>>>  #define ENTRY(name)                             \
>>>>>>>>>    .globl name;                                  \
>>>>>>>>>    ALIGN;                                        \
>>>>>>>>
>>>>>>>> Couldn't END_LP() set type and size unconditionally? (But see also
>>>>>>>> below.)
>>>>>>>
>>>>>>> I see, so that we could also use it for debug purposes.  I guess at
>>>>>>> that point it might be better to use {START,END}_FUNC() to note that
>>>>>>> the macros also have an effect beyond that of livepatching.
>>>>>>>
>>>>>>> Maybe also introduce a START_ENTRY() that replaces ENTRY()?  Albeit I
>>>>>>> find START_ENTRY a weird name.
>>>>>>
>>>>>> So do I. {START,END}_FUNC() or whatever else are in principle fine, but
>>>>>> I take it that you're aware that we meanwhile have two or even three
>>>>>> concurring proposals on a general scheme of such annotations, and we
>>>>>> don't seem to be able to agree on one. (I guess I'll make a design
>>>>>> session proposal on this topic for Prague.)
>>>>>
>>>>> Oh, I wasn't aware we had other proposals, I've been away on and off
>>>>> quite a lot recently, and haven't been able to keep up with all
>>>>> xen-devel email.  Do you have any references at hand?
>>>>
>>>> Andrew said he had posted something long ago, but I didn't recall and
>>>> hence have no reference. My posting from about a year ago is
>>>> https://lists.xen.org/archives/html/xen-devel/2022-04/msg00876.html
>>>> Subsequently Jane went kind of the Linux route:
>>>> https://lists.xen.org/archives/html/xen-devel/2022-08/msg00236.html
>>>>
>>>>>> One thing needs to be clear though: Macros doing things solely needed
>>>>>> for LP need to not have extra effects with it disabled, and such
>>>>>> macros also better wouldn't e.g. insert stray JMP when not really
>>>>>> needed. Hence I expect we still want (some) LP-specific macros besides
>>>>>> whatever we settle on as the generic ones.
>>>>>
>>>>> The stray jmp can be inserted only in the livepatch case, if we end up
>>>>> having to add it.
>>>>>
>>>>> Maybe we should just go with Linux names, so initially I would like to
>>>>> use:
>>>>>
>>>>> SYM_FUNC_START{_NOALIGN}(name)
>>>>> SYM_FUNC_START_LOCAL{_NOALIGN}(name)
>>>>> SYM_FUNC_END(name)
>>>>
>>>> As said in replies on the earlier threads, I think these are overly
>>>> verbose and come in overly many variations.
>>>
>>> Right, I would only introduce the ones above and as long as I have at
>>> least one user for them. I don't think there's much value in importing
>>> the file wholesale if we have no use case for a lot of the imported
>>> macros.
>>>
>>> The main issue with ENTRY() and ENDPROC() / ENDDATA() is that we still
>>> need a tag for local function-like entry point labels, would you then
>>> use PROC() for those? ENTRY_LOCAL()?
>>>
>>> I have to admit I prefer the FUNC_START{_LOCAL} for that purpose as I
>>> think it's clearer.  I would agree on dropping the SYM_ prefix from
>>> the Linux ones if there's consensus.
>>
>> Okay, I'm glad we can agree on no SYM_. But what value does START have?
>> And why would the type be (re)specified via ..._END()? FUNC(), DATA(),
>> and END() ought to be all we need.
> 
> Does it imply that we would then drop ENTRY()? (seems so, would just
> like to confirm).

Yes. ENTRY() may not go away immediately, but I'd expect it to be
phased out.

>> The type would be set by the entry
>> point macros, and the size by END(). To cover local vs global I could
>> live with _LOCAL suffixes, but personally would prefer e.g. LFUNC()
>> and GFUNC(). We could also limit ourselves to FUNC() plus DATA(), and
>> have (non-)global expressed by END() and e.g. LEND() or END_LOCAL().
>> One less macro, but maybe slightly odd to have the .global directives
>> then at the end rather than at the beginning.
> 
> Hm, yes, I do find it odd to have the .global at the end.  FUNC and
> FUNC_LOCAL would be my preference, I do find {L,G}FUNC a bit
> confusing.

Well, yes, I was expecting this to be the case, which is why I said I
could live with _LOCAL suffixes, even if they aren't my preference. What
we may want to keep in mind is that sooner or later we may want
non-aligning variants of these. That'll again make for larger names,
unless we go with e.g. an optional 2nd parameter: if absent, it means
default alignment; if present, it specifies the alignment (which can
then be used to effectively specify no alignment). E.g.

#define ALIGN(algn...) .balign algn

#define GLOBAL(name)                \
    .globl name;                    \
    .hidden name;                   \
    name:

#define FUNC(name, algn...)         \
    ALIGN(LAST(16, ## algn), 0x90); \
    GLOBAL(name);                   \
    .type name, @function

with these helpers (and count_args() as we already have it), or ideally
something yet simpler (which right now I can't seem to think of):

#define ARG1_(x, y...) (x)
#define ARG2_(x, y...) (y)

#define LAST__(nr) ARG ## nr ## _
#define LAST_(nr)  LAST__(nr)
#define LAST(x, y...) LAST_(count_args(x, ## y))(x, ## y)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 14:47:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 14:47:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523541.813680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp95Y-00012N-9Z; Wed, 19 Apr 2023 14:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523541.813680; Wed, 19 Apr 2023 14:47:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp95Y-00012G-6z; Wed, 19 Apr 2023 14:47:16 +0000
Received: by outflank-mailman (input) for mailman id 523541;
 Wed, 19 Apr 2023 14:47:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp95W-000126-HS; Wed, 19 Apr 2023 14:47:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp95W-0007Zf-A8; Wed, 19 Apr 2023 14:47:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pp95W-0004Hr-3m; Wed, 19 Apr 2023 14:47:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pp95W-0005Ga-3R; Wed, 19 Apr 2023 14:47:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0oz+6iMalNB+r9cLBtCG4pAn/9m4zJsyjVLo/w4L5U8=; b=b1KjuQ3yncfn0xHUxGnBuJK7SI
	61Jr++YpkL9mG3FqjQod3KymXjySsmxAeMeyCRpOeRu0m5s1fhIvYFRh2uoTjeb8c98EZWYwSIDiD
	y4aBmBDr9SNcjODBpq1L8djuGbP/RmMng96QYOvTS882HVsErBdEKa6tZ5Gyj3X+PAD0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180318-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180318: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:hosts-allocate:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=569632a5832c02bd84790e0411940b8d3150fa17
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Apr 2023 14:47:14 +0000

flight 180318 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180318/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  569632a5832c02bd84790e0411940b8d3150fa17
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    0 days
Testing same since   180314  2023-04-19 10:00:24 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

------------------------------------------------------------
commit 569632a5832c02bd84790e0411940b8d3150fa17
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed Apr 19 11:03:30 2023 +0200

    CHANGELOG: add gnttab_max_{maptrack_,}frames option changes
    
    Note in the changelog that the purpose of
    gnttab_max_{maptrack_,}frames command line options has been changed.
    
    Fixes: b2ea81d2b935 ('xen/grants: repurpose command line max options')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Henry Wang <Henry.Wang@arm.com>

commit 768846690d64bc730c1a1123e8de3af731bb2eb3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:02:47 2023 +0200

    x86: fix build with old gcc after CPU policy changes
    
    Old gcc won't cope with initializers involving unnamed struct/union
    fields.
    
    Fixes: 441b1b2a50ea ("x86/emul: Switch x86_emulate_ctxt to cpu_policy")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 741599fa521fbbb4cf71a98d7ec22ba5f4671cfa
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:01:29 2023 +0200

    x86: cpu{id,}_policy_updated() can be static
    
    The function merely needs moving earlier in the file to avoid the need
    for a forward declaration. While moving it, also rename it following the
    recent folding of CPUID and MSR policies.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 224211c55bdded74c5a65f5a7e34281a8c5c56f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:00:19 2023 +0200

    tests/cpu-policy: fix "run" goal
    
    An earlier change converted TARGET-y to TARGETS, but failed to replace
    all references. Convert run's dependency, but use $< in the command to
    avoid the leading blank that += inserts.
    
    Fixes: 6a9f5477637a ("tests/cpu-policy: Rework Makefile")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 14:53:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 14:53:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523548.813691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9BO-0002T4-1P; Wed, 19 Apr 2023 14:53:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523548.813691; Wed, 19 Apr 2023 14:53:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9BN-0002Sx-Sz; Wed, 19 Apr 2023 14:53:17 +0000
Received: by outflank-mailman (input) for mailman id 523548;
 Wed, 19 Apr 2023 14:53:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDf1=AK=citrix.com=prvs=466cd93b2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pp9BM-0002Sr-PL
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 14:53:16 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e870c41a-dec1-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 16:53:14 +0200 (CEST)
Received: from mail-mw2nam04lp2177.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 10:53:11 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA2PR03MB5834.namprd03.prod.outlook.com (2603:10b6:806:f8::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 14:53:08 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6319.022; Wed, 19 Apr 2023
 14:53:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e870c41a-dec1-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681915994;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=9Lf9L4TIjuPLV+y9efo4rR+Ke087A6RoihdoNsBV/0g=;
  b=PQABCnRSFyj1B0KMPwsStfPU30yY7OGP+ag3nIbxTA45VnBwNLN9hq9g
   zYJdvL+JPK1R6oNFxw/wBs9h4pHSceWaRRcONjh/c7JoJw4M3Id4Ju83Q
   43VfAYBr4wCwZNiZUJlUSsWbm4IkhjIPMtBTBqAWkX0u7vVqtuoedjOKx
   s=;
X-IronPort-RemoteIP: 104.47.73.177
X-IronPort-MID: 106011074
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,208,1677560400"; 
   d="scan'208";a="106011074"
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <dc2bb5fe-f440-74ed-5e3e-caed9ca626f0@citrix.com>
Date: Wed, 19 Apr 2023 15:53:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3] x86/livepatch: Fix livepatch application when CET is
 active
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230418111032.487587-1-andrew.cooper3@citrix.com>
 <c2693ac0-4f6a-83ae-c477-75b3f05b938a@citrix.com>
 <226fba6c-aeca-d38b-7d47-07b2f8d6b403@citrix.com>
 <b81f6d44-080a-10fa-3148-67aa23504561@suse.com>
In-Reply-To: <b81f6d44-080a-10fa-3148-67aa23504561@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LNXP123CA0017.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:d2::29) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SA2PR03MB5834:EE_
X-MS-Office365-Filtering-Correlation-Id: d51df30b-2385-4c76-2dbc-08db40e5c98d
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d51df30b-2385-4c76-2dbc-08db40e5c98d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 14:53:07.9414
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: rFtLM3M68hwUrAMs2JaHsjaj5M1YYtuE9wclEf4hGh1MV8VU2BRysq0cOmuTrZmMeZlsfE96BIleqaoKjzCXbfEmYI1qvHvBFrtUi8ziq7M=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5834

On 19/04/2023 7:41 am, Jan Beulich wrote:
> On 18.04.2023 19:54, Andrew Cooper wrote:
>> On 18/04/2023 6:30 pm, Andrew Cooper wrote:
>>> On 18/04/2023 12:10 pm, Andrew Cooper wrote:
>>>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>>>> index 36a07ef77eae..98529215ddec 100644
>>>> @@ -5879,6 +5880,75 @@ int destroy_xen_mappings(unsigned long s, unsigned long e)
>>>>      return modify_xen_mappings(s, e, _PAGE_NONE);
>>>>  }
>>>>  
>>>> +/*
>>>> + * Similar to modify_xen_mappings(), but used by the alternatives and
>>>> + * livepatch in weird contexts.  All synchronization, TLB flushing, etc is the
>>>> + * responsibility of the caller, and *MUST* not be introduced here.
>>>> + *
>>>> + * Must be limited to XEN_VIRT_{START,END}, i.e. over l2_xenmap[].
>>>> + * Must be called with present flags, and over present mappings.
>>>> + * Must be called on leaf page boundaries, i.e. s and e must not be in the
>>>> + * middle of a superpage.
>>>> + */
>>>> +void init_or_livepatch modify_xen_mappings_lite(
>>>> +    unsigned long s, unsigned long e, unsigned int _nf)
>>>> +{
>>>> +    unsigned long v = s, fm, nf;
>>>> +
>>>> +    /* Set of valid PTE bits which may be altered. */
>>>> +#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
>>>> +    fm = put_pte_flags(FLAGS_MASK);
>>>> +    nf = put_pte_flags(_nf & FLAGS_MASK);
>>>> +#undef FLAGS_MASK
>>>> +
>>>> +    ASSERT(nf & _PAGE_PRESENT);
>>>> +    ASSERT(IS_ALIGNED(s, PAGE_SIZE) && s >= XEN_VIRT_START);
>>>> +    ASSERT(IS_ALIGNED(e, PAGE_SIZE) && e <= XEN_VIRT_END);
>>>> +
>>>> +    while ( v < e )
>>>> +    {
>>>> +        l2_pgentry_t *pl2e = &l2_xenmap[l2_table_offset(v)];
>>>> +        l2_pgentry_t l2e = l2e_read_atomic(pl2e);
>>>> +        unsigned int l2f = l2e_get_flags(l2e);
>>>> +
>>>> +        ASSERT(l2f & _PAGE_PRESENT);
>>>> +
>>>> +        if ( l2e_get_flags(l2e) & _PAGE_PSE )
>>>> +        {
>>>> +            ASSERT(l1_table_offset(v) == 0);
>>>> +            ASSERT(e - v >= (1UL << L2_PAGETABLE_SHIFT));
>>> On second thoughts, no.  This has just triggered in my final sanity
>>> testing before pushing.
>>>
>>> Currently debugging.
>> (XEN) livepatch: lp: Applying 1 functions
>> (XEN) *** ML (ffff82d040200000, ffff82d0403b4000, 0x163)
>> (XEN)   l2t[001] SP: 000000009f4001a1->000000009f4001e3  (v
>> ffff82d040200000, e ffff82d0403b4000)
>> (XEN) hi_func: Hi! (called 1 times)
>> (XEN) Hook executing.
>> (XEN) *** ML (ffff82d040200000, ffff82d0403b4000, 0x121)
>> (XEN)   l2t[001] SP: 000000009f4001e3->000000009f4001a1  (v
>> ffff82d040200000, e ffff82d0403b4000)
>> (XEN) livepatch: module metadata:
>>
>> When Xen is using forced 2M alignment, the virtual_region entry for
>> .text isn't aligned up to the end of the region.
>>
>> So the final bullet point is actually wrong.  I'm going to relax it to
>> say that it is the caller's responsibility to make sure that bad things
>> don't happen if s or e are in the middle of a superpage, because I'm not
>> changing how virtual_region works to satisfy this assert.
> I.e. you'll drop both assertions, not just the one added recently?

Yes, I dropped both.  I didn't think it was reasonable to keep one but
not the other.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 14:58:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 14:58:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523553.813701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9GZ-00038V-OE; Wed, 19 Apr 2023 14:58:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523553.813701; Wed, 19 Apr 2023 14:58:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9GZ-00038O-K8; Wed, 19 Apr 2023 14:58:39 +0000
Received: by outflank-mailman (input) for mailman id 523553;
 Wed, 19 Apr 2023 14:58:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=69VN=AK=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pp9GY-00038I-W7
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 14:58:38 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20626.outbound.protection.outlook.com
 [2a01:111:f400:7e89::626])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a75f8d41-dec2-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 16:58:34 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by PH0PR12MB7905.namprd12.prod.outlook.com (2603:10b6:510:28b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 14:58:31 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94%7]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 14:58:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a75f8d41-dec2-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <4e11e153-2224-15a1-2563-0f1a3c5a6a81@amd.com>
Date: Wed, 19 Apr 2023 15:58:22 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
To: Michal Orzel <michal.orzel@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com, julien@xen.org,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
 <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
 <8560fac2-5c92-8362-090d-bbaeae3f5d22@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <8560fac2-5c92-8362-090d-bbaeae3f5d22@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LNXP265CA0067.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5d::31) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|PH0PR12MB7905:EE_
X-MS-Office365-Filtering-Correlation-Id: f0e3f49a-dc99-425c-3f20-08db40e68a15
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	oV/09N8Qwwetmmni0d/flCSIxKG/p5D1xstxuYKw84KKyfC9mFmek7fGVuM2+PxydUkJ2HOkt9aHZ12I5nnBl9sShakNUri9DPnwsiMkfFcpxb9UXoltVGKFVbth596VLC5ACxAqwm5uLeoBfvVYUDUONHoPr6EjKZI+Qzlte5+P+DN7K1cWtzCrHEegbYJKqrUb8iHfRUYiMyYGHZDX2hgREBNbGyXN6Y5P5egkNoudPOyniirSQ0VKXKA86tKMAkW02RIHx5cyMjjOwKksym1F6J4uaAUGwcFRZ164BdCSIO55pJn+rbqZ8HchwVv6sb/60loNvWj/w5cQpF7R+11Y64L6Bqd8WqhgGqKBMe0NWR6UKHE9XKFT13Ku7+0cvo7JnnQx9w6JmQw1nW5tUU7h9Z3ZcpqaWyTvIliZpvJSTzZMFsBzuanv/G/jSthsQObaNYhAnbSY+oXBsVsBDCfDKWvUj1Ge1tOaocQzrZhTOuXdSM9YukYaU2Xt8Zt6b7walonh/UrW+PoB4IRQr6aIupshTdg1pQUTdmMuu3CRyV8UqFT6s6QRDFsRYD1DfcmsO46T+wqbwB0hdD/OfuGFciqyP5gz1PgycbegOU1Ryl+Sv6GKeLXBXTZSJYeEA+t2hmtGtf37jAq8DWXhog==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN6PR12MB2621.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(366004)(376002)(346002)(136003)(451199021)(36756003)(110136005)(4326008)(316002)(66946007)(66556008)(66476007)(478600001)(6486002)(6666004)(8936002)(5660300002)(41300700001)(8676002)(7416002)(2906002)(38100700002)(2616005)(6512007)(6506007)(26005)(186003)(53546011)(83380400001)(31696002)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cjFZQXhMRjJCUTVIMzBtNVA4R2dESjFTZWJzb2pGTitEK09QNHFtWU9HMy9U?=
 =?utf-8?B?R3lLU2dzRVhQaWFpQzQrQVVGWUhqQmU1b1RzaTNLcHNUcGZHTmxncWVZVVFC?=
 =?utf-8?B?S2ErV0VYRkRtNFNwVUFNVkdINXdKVUdBZTBTL3BJcm9lOHFEeWZmU1hNSnY2?=
 =?utf-8?B?dkZsVm5aWGM4bGtUUE52Umt1QnlBVTZpcDFqZG40UGlYVzlJa2J0K05QTldz?=
 =?utf-8?B?M1ZpQUp0a1VUdmRnMkhGN2RyZzJCMWFneUdDMytNbUd5dFg4UnQxdkt0dDlX?=
 =?utf-8?B?YnZaSG9DNHJmNlNuU0Y2dGNQSm93R25Na0Yzalhhb1QwcFg4cTAzOVdKcjBU?=
 =?utf-8?B?QmtrSkd2alpGRmRwMkZBVWZnZ1RkNmJmSDh2VUh6ZGpMWHVya3VLMjdpUXFp?=
 =?utf-8?B?VXUyRGgrT3JWSjBoOUd2Q1BicUZMNXFCNzI2LzU2Z3I3TDJycG0wK3ZxSEVJ?=
 =?utf-8?B?dWIvcTBXVzZyU21zM0FzY2JzVG80UTVPMGNLK3B6YWlwRlBJcnB4dUEza3hY?=
 =?utf-8?B?dmVNZCt3MndVZWc3eWJKKy83M08yak5MNzI3OVYzZWtuV3dYNDIyU0Q1ZHRV?=
 =?utf-8?B?U0JIWFJFMisyN2lEd1ZpSmk2K0RuM2ZqNmNJb01ycEtNdkNNSEhUZ0hvakh5?=
 =?utf-8?B?a2JyMDdaeGNCMEJFemRFNE9YRnkxZzIvYXowVjJKRjNrKysxVlFTZ2hOOWJ2?=
 =?utf-8?B?cHNQdEpoMHIzSTYyOWlkOHFxMXIrbHFab21kU0orbG9sR2ZwTnArQ2lGbmR1?=
 =?utf-8?B?dTFzRko4b3l1UmJETjlOcjhoZ2RjSUZMV2VPc3pLdUs2YlpCd3kyMk43akly?=
 =?utf-8?B?ZS9lWHJRNXcrZjVIRjQvS01QTkYyVDEzbzFLQnRxQ01WVlVNaW9NR2lNeEVR?=
 =?utf-8?B?WEZMR2NpUVhCS2sxcWdRb3gyQVJtM0ptUkkwb3BrNEViRUJkdTdpeVJtN3Bj?=
 =?utf-8?B?czdIOWJ6aHdic2I5dm92OEwrek43a2pKNHpTcThqdGNyRTk4TTZhdUl3ODhh?=
 =?utf-8?B?VmlCRGJseU1PZmNaNGEydTQwSGcvMWE2VDFyWkc4aS9TZXRJQzlVNTBaUFU1?=
 =?utf-8?B?ZCtBUTQ1cG5UM3JtRzh1b3BidnYzSkltOEhReWVxMGxyZXdXNVp1SE9iTncy?=
 =?utf-8?B?NHVYY3R3VDBXRWlST3ZUUW5rdU93b3RRZjNQdkxlTmlMbnZoTkRudUUvNG5C?=
 =?utf-8?B?UEQzQ3pwY2MzdkVyUEhDOXc3VktxYkY5Q0xQVllhanpPbVVsc0Mva0E1NEV3?=
 =?utf-8?B?NVVDcXlDSEtNWGRRSjJ0azI3S0FZeGVaQzkyOVhzd1RXTTcwRVlDQmhQdUMr?=
 =?utf-8?B?andFbVdEbDY5eDNTOE1DdTFIRGszeUdmOUcyTjBKc25zOVlWeS9RNi9jOG1t?=
 =?utf-8?B?TVR0dXFLRUozVTFvYm4wM0ZkYkhhV3pkOUpGeTlwWlpOVnczbjNYYWtBNlJG?=
 =?utf-8?B?WjNiQTR0alJZUkJlcXMrQ2RqdmF0OWdOY2grSnZNTXhTMGJQL0xTVFh5WFJL?=
 =?utf-8?B?RzBkTjViMGxBYkx5LzJXUUphMlAyODA2NkNNbFo4MFR5Nkg1UlBjbUt6aUEr?=
 =?utf-8?B?ZWNDc1drc0hpR1dKVnIvbXEzZ0FiUGhJSVJmY2lYbm0wNjludlhQdlYyb1VG?=
 =?utf-8?B?ZVBvazRLVXJpL3hYRVJ3MVQwQXVSOG5XRndiSERiakdCem16YU9vVjloaEVh?=
 =?utf-8?B?RTVqV21IVURHdXcvQmU1Ry9XS29mMWZKSkM4NXJUUUtDZDNpNzBKVHpHNVNM?=
 =?utf-8?B?V1g0S0JnRktpREY2UzA2NjdZQmRBbDJQeW41dE5kSXdrcFBGK20yaDFkakFl?=
 =?utf-8?B?ZDJZWDlmTGd3b2RwZVkxYmhLcWFDN2xqS1JJNXdiWjdWdnRYQkVNYTYzWUx5?=
 =?utf-8?B?UXdPNG5uU0FGcVo0V3VML2ZaRU1JRTJmT05wcWhEVU95dkJEWkNpTEpuSWpo?=
 =?utf-8?B?Z1ZZbFJ5M2FYeFY3QzhnYzBhMlhvZFFOSG05b2VHODV1SHJ0K0F5QWYzd05t?=
 =?utf-8?B?ZnFlTzBsUGY4MWtteGJyYlUvZ3d5Ui9kaXkwTUs0U3NBNHl2ZTN1RmpReWJk?=
 =?utf-8?B?UXoxZGNUTjgzK2daTG9JSzg2aFZlaEpZUHlWbWdmMUE1Rk9Fc08rNHJrT2Zl?=
 =?utf-8?Q?NsEUoEKpBuROpzPopIjJbizI9?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f0e3f49a-dc99-425c-3f20-08db40e68a15
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 14:58:30.9583
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0PD7zh2BZM/Wkz+dsdsUvgPpqjf7XedB1l+TKCAQY4hZffUdSlNBJqSPcNVYPsK9I+L89t6zcvd+NFUXcubACw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB7905


On 19/04/2023 14:54, Michal Orzel wrote:
> On 19/04/2023 15:19, Michal Orzel wrote:
>>
>> Hi Ayan,
>>
>> On 13/04/2023 19:37, Ayan Kumar Halder wrote:
>>>
>>> The DT functions (dt_read_number(), device_tree_get_reg(), fdt_get_mem_rsv())
>>> currently accept or return 64-bit values.
>>>
>>> In future when we support 32-bit physical address, these DT functions are
>>> expected to accept/return 32-bit or 64-bit values (depending on the width of
>>> physical address). Also, we wish to detect if any truncation has occurred
>>> (i.e. while parsing 32-bit physical addresses from 64-bit values read from DT).
>>>
>>> device_tree_get_reg() should now be able to return paddr_t. This is invoked by
>>> various callers to get DT address and size.
>>>
>>> For fdt_get_mem_rsv(), we have introduced a wrapper named
>>> fdt_get_mem_rsv_paddr() which will invoke fdt_get_mem_rsv() and translate
>>> uint64_t to paddr_t. The reason being we cannot modify fdt_get_mem_rsv() as it
>>> has been imported from external source.
>>>
>>> For dt_read_number(), we have also introduced a wrapper named dt_read_paddr()
>>> dt_read_paddr() to read physical addresses. We chose not to modify the original
>>> function as it is used in places where it needs to specifically read 64-bit
>>> values from dt (For e.g. dt_property_read_u64()).
>>>
>>> Xen prints warning when it detects truncation in cases where it is not able to
>>> return error.
>>>
>>> Also, replaced u32/u64 with uint32_t/uint64_t in the functions touched
>>> by the code changes.
>>>
>>> Also, initialized variables to fix the warning "-Werror=maybe-uninitialized".
>> I can see that now you explicitly set to 0 variables passed to fdt_get_mem_rsv_paddr()
>> which haven't been initialized before being passed to fdt_get_mem_rsv(). Is this what
>> you are referring to? I cannot reproduce it, hence my question.
> I can see why you got this error.
> Before your change, we always checked for an error from fdt_get_mem_rsv() by checking if < 0.
> In your wrapper fdt_get_mem_rsv_paddr(), you switched (not sure why) to checking if not zero.
> Because of this, you got an error and tried to fix it by initializing the variables to 0.

I actually wanted to return the error code obtained from 
fdt_get_mem_rsv() to the caller.

In this case, it returns a single error code. So does this look sane to 
you?

diff --git a/xen/include/xen/libfdt/libfdt-xen.h 
b/xen/include/xen/libfdt/libfdt-xen.h
index 3296a368a6..1da87d6668 100644
--- a/xen/include/xen/libfdt/libfdt-xen.h
+++ b/xen/include/xen/libfdt/libfdt-xen.h
@@ -22,9 +22,8 @@ static inline int fdt_get_mem_rsv_paddr(const void 
*fdt, int n,
      uint64_t dt_size;
      int ret = 0;

-    ret = fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size);
-    if ( ret )
-        return ret;
+    if ( fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size) < 0 )
+        return -FDT_ERR_BADOFFSET;

      if ( dt_addr != (paddr_t)dt_addr )
      {

- Ayan

>
> ~Michal


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 15:13:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 15:13:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523558.813714 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9UJ-0005YG-W0; Wed, 19 Apr 2023 15:12:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523558.813714; Wed, 19 Apr 2023 15:12:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9UJ-0005Y9-Ss; Wed, 19 Apr 2023 15:12:51 +0000
Received: by outflank-mailman (input) for mailman id 523558;
 Wed, 19 Apr 2023 15:12:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=69VN=AK=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pp9UJ-0005Y3-0P
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 15:12:51 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2060a.outbound.protection.outlook.com
 [2a01:111:f400:fe59::60a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a4e9779c-dec4-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 17:12:49 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by PH0PR12MB5466.namprd12.prod.outlook.com (2603:10b6:510:d7::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 15:12:46 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94%7]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 15:12:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4e9779c-dec4-11ed-b21f-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SJewN83j+zcTt4VUNz8J99K6+L4/JQMw4u6rwMvk47gRLJtAdIV9PAqpX2FyzY59B2V2EPNx9byBDfG89K3weIwqzzrmbdJYCVCiTaHUMobjzCk+WB1xJamq+IJzbyjmcWjWOLNl8IFi71BasTdv75gshw/ZFBOPqxXpW1O5r4AlyXLXAz4o8OSdA7krfYJ2lbqMueMXYrIyC6G8NXG1aNKTlSW7FhwyZ8KvUdi4V5UB0D3e7gAcNVUbYvNSjhNwThMgpwO9mKB2QXxSf7M4ctj4hYy2qzuMxV4eecSRWyV/NsCIp52pf4PCGHHFewvefDieJ7eSKQjq8Hk0DRWjGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PjStGSEMB1ILhnUhnahiOGJY3M/slAJ3k+sqICye4Ls=;
 b=FNYs01OPbfGrSeRn5miJjObZYcjQENrFCZoVWJqTPtAfRsDPt8i8USpr0+wglqYSnYc9XumjKSIXEiVa/mxvGzXM/dt+IcADjFkQglIEg7/6sWCHJAfs27hputZpmw+cTUwcPYXt66zVG7PrPq28Ecr1Qqf2Z9/pUYYfA+u2eS6y12EWfT8WO1uxTcZ4uZtC+/7EmWCz73ZV0JSmYJBcKBjUgjPU3HNCRmdeRYJ1ZEfkQl3jCGE+OcCaFUEDiMtbaR6yDZh7xq9o8UtHBO/De27608LMV022QvAziHze4BneJxvdtoImAW0MhWYWNmVtCMgQuf0aQ6xLAee9rGwy9Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PjStGSEMB1ILhnUhnahiOGJY3M/slAJ3k+sqICye4Ls=;
 b=HE/NHwFGM2nEVPdz/FPbiuQoQeMrz8hh/kob+5vb5ufceX7GAfGMUDVULeUd7FIuDnLo0YQm99gMby8tGdZnBd+YSeJGvVBVslQA1XbHa7ubprGIOGLubZlq8Z4c1bRuu6dbUkNqINlpphucZ/IQVNct9n+Y7a3KfN7lUZ5tNh8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <f6c88703-a3bf-94e6-246e-ab0d0582eb7a@amd.com>
Date: Wed, 19 Apr 2023 16:12:38 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
To: Michal Orzel <michal.orzel@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com, julien@xen.org,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
 <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0446.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a9::19) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|PH0PR12MB5466:EE_
X-MS-Office365-Filtering-Correlation-Id: c1a4698d-03ef-4a07-698e-08db40e887ce
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	egMCrr0uzyoemsytCwFOAWLmyYirnIFNgFdsxmZn3ULsORxKbVPPNmJG2WFz/A2xwI4aQvbGmnuvtv5ygHRAw0WZthzvKe0DTki/9hfBGHkifIOD8wgVTXoN3Ea3GpdxkX45k1kNVSoMZYr87EU6U+0X8h1i5KpgwYygZNZdK/dmyWB2SwXASKKoax3XeYWLvuCVpfhvIN+7TNRvfbi2BZVs/9XiQl2XwXRosnnUgjWq/6kmOhxcki4crh67yqtuRePSM+8MpnQ+iD8d31bcDBCZt/m42S7/OnHOShQJbiUf674m+itLszvES7/D6EOSO+z4yJy793gjgYCWKJoR41UOg5z55aVA/cACiJshGhXcITB3ZnXf8/IO+6/GNXRyY7FX9Ve/1vF6wKoVy0rLxSxxg2Wm4PygB7SXq4b93/sPgHusAOsl83tGW+bS2J83/53iqHCU2YxEsT7a/R5Sq7gJRGdBfDdF8bwDz3YK1OQSclV0BtDdYS+JoLt4PWZRV06gm/a9QOrV+6iVWLthrVfx5j/VM9NZxWpk5YKcxkVzuw7j+HGPldY4bKQPUsibrLheSuEd1fK4RJjOGJB8YY9EdjcKFDpqt/8q9mkgJ5TZ41SPSB3FFl4sVKOM8qkIDpbhvTFQp0u61sE0+macJA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN6PR12MB2621.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(346002)(376002)(136003)(366004)(451199021)(7416002)(36756003)(2906002)(8936002)(5660300002)(8676002)(41300700001)(31696002)(38100700002)(31686004)(478600001)(110136005)(2616005)(316002)(53546011)(26005)(6506007)(6512007)(186003)(6666004)(6486002)(4326008)(66476007)(66556008)(66946007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?aEhOcXVkOCtDVnRKT3U0ODdPeTNsdm9zdUFJT1VTUTVYTi81QlFzMnlIVy96?=
 =?utf-8?B?cU9VK3Z5WFZvanU4bjR2K3A5Y0dGcWV5RnhFdGV5UlQ1MlZMV0hTSkZwY0Jx?=
 =?utf-8?B?UVVUOS9WV0h2T3Rjb0owL0NnUE1CTThnVTFDT3JWT0x5cGIxT0VKYnIvQ3Zw?=
 =?utf-8?B?YTE5RSttSUlTd3QrbDRGT0lNYTV6amtVWmt4a0hnQk9xSDRUUHN0SDdoZGwx?=
 =?utf-8?B?dUF4T3RnckxpU0E3ME9TU3hxUmhIUkhBKzAwR2pFemdhNUtTTm1ibkZzUTN1?=
 =?utf-8?B?Zmx3RnBjc2lUZHJWV2Y0TmtGTExjNmhyZmpmTVJqNy9vV0pqdHBEalFjaFpn?=
 =?utf-8?B?dG9IQlRNaTdkSnozbXlVYi9IajAvNUhuU3VpeVE1aVBDaVlIa3AwOVo2RnJC?=
 =?utf-8?B?Z0V3Z0VnRHNqSUk4eWFUYmg1b0U5aVVidmQwNXV2MWhJc0QvWjQwSVdNSUNl?=
 =?utf-8?B?WG80VWJ2cnRrZEdqL1UxR2dvdWNGS0dqcGZuUXRQQUxQa212S2tLZzVaVk4v?=
 =?utf-8?B?WVdQcmFFa0tYeWtRZXFWKzlUOWcyb3FKSy9JRjl3Wnp4ZjlrSWpJNU8zN01N?=
 =?utf-8?B?by8rcWR4a2h3Z0t4R1BudGV4cVF5TkNZQ1dPcmlKTUdXTU83U2pOcnNLRlNX?=
 =?utf-8?B?YVByODRQYVFlZVdVeHpWNDNCUGJZUXJWVHRORmJ1OEIzOFp2S3I3MGxGejBD?=
 =?utf-8?B?dmJFNmJTR0sxRk01ckRPeGoyb2lSbXNVYkU4SElnbi9pdnkyTEt6OGdlVjR1?=
 =?utf-8?B?MnJLMUQ0RnlMWUQzTmh4VEdKZnZFTFBDZWZqR0Q2Q092d2QrbmM2bUpLTmxl?=
 =?utf-8?B?dUFBR1luYVlvQ2FZU1hKRUxJMUpkNzNWVytCZEMrKzZLRjA4b05mK0lxdE9a?=
 =?utf-8?B?OWZRTk1aTnZySXdmTXJGZTRiWm5EanZDY1UzeFVzSVFSbWxXVTBDQTlIZXJY?=
 =?utf-8?B?KzRySndhNldTanpkclh0VzRYcitnWnRDeHk2UFpSMW42ODl0aTZCNVJOWjNO?=
 =?utf-8?B?bWc0SWd1NXhGY1VOQjNqTkdqMEswZHFmdzRwZlkzZTVsa3RpYTBuQ051M3Fa?=
 =?utf-8?B?b29Rc3ZERGtTMHJ3VmVnYXBRQ1BoNks2UUYvd2NaSkxHVFBpMjJzaGkrLzRY?=
 =?utf-8?B?K21udkltaElSb3ZhUEZqd1NrMUptN1NSZnJnTC9vdFpXN3M0QUVWaVhsYmlE?=
 =?utf-8?B?c3kxczIzdEJIaHdpd1lJUUFkQ0pnaUFtRWZVVWdmamQ3aE9uYjlKSGVMZ2to?=
 =?utf-8?B?aDA1TjVsVUk5WkNZb3NWY3BqdUJhbGROWHNmVmFDaGtabk4zSWc4WXRrVEQv?=
 =?utf-8?B?QlBNbk9YbnJXc0NySUp4UzhrSzJub3E1R0tWdTVGK0k4YUhFMWpIK2JNYVBq?=
 =?utf-8?B?RUJHaG8vblhENVJDQlk0ajJFZkJZTnlRRHNWUDVjdjRYbndHTk8zNW9jcHUw?=
 =?utf-8?B?VHE2cnJSSHVYOUZYVm55blc4Nko0RFZrcThwRVRMM1phSmxYU0dYcmZVUm9O?=
 =?utf-8?B?MC9EaHdhUXQ4Smh4ZFVMYXorYm5YRjRmdk1SelQvTytzazJkVVpXTFdQMXpD?=
 =?utf-8?B?UjdFWUtIR0VpMDVHSmtXVVdiMk1Zam5RU2tGR3E3Mm5lY05wZ3FUMmZ2eFJ4?=
 =?utf-8?B?Zmk3cHpUMUNJbEUyZjBTWVdId2NJL2Y4SEpETzVISGozVCtaMCt6VmcrUHBt?=
 =?utf-8?B?NlRqd0QwSCtWc2JIbUV6cmNLSlBQSmI5dit2SEs4TW55b2c5ajJ2dkJoeUx1?=
 =?utf-8?B?VGR6RHREL1k1bTE5ZVp4eVE3VmVkNXl4bE82UEVCSXV3Y01ibjQrN01zYlJD?=
 =?utf-8?B?ajJHSVczVS91NC9Jam84S2NJamhhYVlEeFJoRFp2NGIvSEpmZ1V0QzVPUHBD?=
 =?utf-8?B?WXRIQ21yZE9ReFJjb0JRajRxVVB1a2UyQm0vcGhUSjg5ZU1UVUpJTU9YNGtV?=
 =?utf-8?B?ZmNhbHd6MWRjUHZhbUM4QkQxWktTQ1lvcUMzRktGNzgzMFk5Rjh5d0lNSlRF?=
 =?utf-8?B?eFQwenUvc2RVRVhiRU1vb0xTVDZXcmUwNWFyb05DcUVTNTNITjFmMjFvRzZm?=
 =?utf-8?B?Z1laR1p2ZTR5OGxrYVEvbzRLSEJmOWVaU1RXdFRQcVMweVoyWnlKbkhQSlFW?=
 =?utf-8?Q?5XafUDp6DGJCKLetKgE/gmtyQ?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c1a4698d-03ef-4a07-698e-08db40e887ce
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 15:12:46.1011
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: U0YX2bPd8rhUvqE0/QVr65aEEvAoVXy0V6tkbrddhUmOA59vVGKtNndRGUma9ygAOGXGPnmN5LfViv3++XGaCQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB5466


On 19/04/2023 14:19, Michal Orzel wrote:
> Hi Ayan,

Hi Michal,

...

> --- /dev/null
> +++ b/xen/include/xen/libfdt/libfdt-xen.h
> @@ -0,0 +1,55 @@
> +/*
> + * SPDX-License-Identifier: GPL-2.0-only
> Our CODING_STYLE says:
> New files should start with a single-line SPDX comment, ..., e.g.
> /* SPDX-License-Identifier: GPL-2.0 */
>
> For me it would be perfectly fine to do as you did but it is not what our docs state
> (i.e. single-line comment). It might be that we need to modify CODING_STYLE instead.

Just to be clear, this is what we should have (as Julien had earlier 
suggested using **GPL-2.0-only**):

diff --git a/xen/include/xen/libfdt/libfdt-xen.h 
b/xen/include/xen/libfdt/libfdt-xen.h
index 3296a368a6..cad7ad3bfb 100644
--- a/xen/include/xen/libfdt/libfdt-xen.h
+++ b/xen/include/xen/libfdt/libfdt-xen.h
@@ -1,6 +1,5 @@
+// SPDX-License-Identifier: GPL-2.0-only
  /*
- * SPDX-License-Identifier: GPL-2.0-only
- *
   * xen/include/xen/libfdt/libfdt-xen.h
   *
   * Wrapper functions for device tree. This helps to convert dt values

- Ayan



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 15:19:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 15:19:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523562.813724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9aF-0006AO-MG; Wed, 19 Apr 2023 15:18:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523562.813724; Wed, 19 Apr 2023 15:18:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9aF-0006AH-JY; Wed, 19 Apr 2023 15:18:59 +0000
Received: by outflank-mailman (input) for mailman id 523562;
 Wed, 19 Apr 2023 15:18:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pp9aE-0006AB-M5
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 15:18:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pp9aE-0008PI-2Y; Wed, 19 Apr 2023 15:18:58 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.29.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pp9aD-0002qq-R2; Wed, 19 Apr 2023 15:18:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=4DpkaVD9eoUsHlIbEkGYNGcpGPLsG/M87jhWJrUp+vg=; b=wnPXoUnjYpDrWpMNtlIyMA21dT
	g13Z/EefJ1zY5Za/HKzUVmYdkvj87xEnmIXEykaBfn++8Tuc6WOPDhMwT/QwOsAzBCFDPG3aoggsu
	giSWdnsBSe4TZlNLZPD2dI5zwdEImgAx4VmU0p10Z3p/2/sEoOCJ6Iuk7WW+whUAMMks=;
Message-ID: <b9dbb568-e35a-4854-3b75-2b4b3ebb9ab6@xen.org>
Date: Wed, 19 Apr 2023 16:18:54 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>, Michal Orzel
 <michal.orzel@amd.com>, Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
 <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
 <8560fac2-5c92-8362-090d-bbaeae3f5d22@amd.com>
 <4e11e153-2224-15a1-2563-0f1a3c5a6a81@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <4e11e153-2224-15a1-2563-0f1a3c5a6a81@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Ayan,

On 19/04/2023 15:58, Ayan Kumar Halder wrote:
> 
> On 19/04/2023 14:54, Michal Orzel wrote:
>> On 19/04/2023 15:19, Michal Orzel wrote:
>>>
>>> Hi Ayan,
>>>
>>> On 13/04/2023 19:37, Ayan Kumar Halder wrote:
>>>>
>>>> The DT functions (dt_read_number(), device_tree_get_reg(), 
>>>> fdt_get_mem_rsv())
>>>> currently accept or return 64-bit values.
>>>>
>>>> In future when we support 32-bit physical address, these DT 
>>>> functions are
>>>> expected to accept/return 32-bit or 64-bit values (depending on the 
>>>> width of
>>>> physical address). Also, we wish to detect if any truncation has 
>>>> occurred
>>>> (i.e. while parsing 32-bit physical addresses from 64-bit values 
>>>> read from DT).
>>>>
>>>> device_tree_get_reg() should now be able to return paddr_t. This is 
>>>> invoked by
>>>> various callers to get DT address and size.
>>>>
>>>> For fdt_get_mem_rsv(), we have introduced a wrapper named
>>>> fdt_get_mem_rsv_paddr() which will invoke fdt_get_mem_rsv() and 
>>>> translate
>>>> uint64_t to paddr_t. The reason being we cannot modify 
>>>> fdt_get_mem_rsv() as it
>>>> has been imported from external source.
>>>>
>>>> For dt_read_number(), we have also introduced a wrapper named 
>>>> dt_read_paddr()
>>>> dt_read_paddr() to read physical addresses. We chose not to modify 
>>>> the original
>>>> function as it is used in places where it needs to specifically read 
>>>> 64-bit
>>>> values from dt (For e.g. dt_property_read_u64()).
>>>>
>>>> Xen prints warning when it detects truncation in cases where it is 
>>>> not able to
>>>> return error.
>>>>
>>>> Also, replaced u32/u64 with uint32_t/uint64_t in the functions touched
>>>> by the code changes.
>>>>
>>>> Also, initialized variables to fix the warning 
>>>> "-Werror=maybe-uninitialized".
>>> I can see that now you explicitly set to 0 variables passed to 
>>> fdt_get_mem_rsv_paddr()
>>> which haven't been initialized before being passed to 
>>> fdt_get_mem_rsv(). Is this what
>>> you are referring to? I cannot reproduce it, hence my question.
>> I can see why you got this error.
>> Before your change, we always checked for an error from 
>> fdt_get_mem_rsv() by checking if < 0.
>> In your wrapper fdt_get_mem_rsv_paddr(), you switched (not sure why) 
>> to checking if not zero.
>> Because of this, you got an error and tried to fix it by initializing 
>> the variables to 0.
> 
> I actually wanted to return the error code obtained from 
> fdt_get_mem_rsv() to the caller.
> 
> In this case, it returns a single error code.

I would rather not rely on this.

> So does this look sane to 
> you?
> 
> diff --git a/xen/include/xen/libfdt/libfdt-xen.h 
> b/xen/include/xen/libfdt/libfdt-xen.h
> index 3296a368a6..1da87d6668 100644
> --- a/xen/include/xen/libfdt/libfdt-xen.h
> +++ b/xen/include/xen/libfdt/libfdt-xen.h
> @@ -22,9 +22,8 @@ static inline int fdt_get_mem_rsv_paddr(const void 
> *fdt, int n,
>       uint64_t dt_size;
>       int ret = 0;
> 
> -    ret = fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size);
> -    if ( ret )
> -        return ret;
> +    if ( fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size) < 0 )
> +        return -FDT_ERR_BADOFFSET;

So the problem is that you check whether ret is non-zero, but the caller 
of fdt_get_mem_rsv_paddr() checks for < 0.

Given that fdt_get_mem_rsv() is not inline, the compiler doesn't know 
that it will not return a positive value (other than 0). Hence why I 
think you get the uninitialized-value warning.

The snippet below should work:

if ( ret < 0 )
   return ret;

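For illustration, the whole wrapper with that check would look roughly like the standalone sketch below. Everything here is a stand-in, not the actual tree code: paddr_t is narrowed to uint32_t so the truncation path can be exercised, and fdt_get_mem_rsv() is a stub that only mimics the real libfdt contract (0 on success, a negative FDT_ERR_* code on failure).

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins, NOT the real Xen/libfdt definitions. */
typedef uint32_t paddr_t;

#define FDT_ERR_BADOFFSET 4

/* Stub mimicking fdt_get_mem_rsv(): fills in a 64-bit address/size
 * pair, returns 0 on success or a negative error code. */
static int fdt_get_mem_rsv(int n, uint64_t *addr, uint64_t *size)
{
    switch ( n )
    {
    case 0: /* entry that fits in a 32-bit paddr_t */
        *addr = 0x80000000u;
        *size = 0x1000;
        return 0;
    case 1: /* entry whose address does not fit in 32 bits */
        *addr = UINT64_C(1) << 32;
        *size = 0x1000;
        return 0;
    default: /* no such entry */
        return -FDT_ERR_BADOFFSET;
    }
}

static int fdt_get_mem_rsv_paddr(int n, paddr_t *addr, paddr_t *size)
{
    uint64_t dt_addr, dt_size;
    int ret = fdt_get_mem_rsv(n, &dt_addr, &dt_size);

    /* Checking < 0 (rather than != 0) matches what the callers do and
     * lets the compiler see that the success path always has
     * dt_addr/dt_size initialised. */
    if ( ret < 0 )
        return ret;

    /* Detect truncation when narrowing 64-bit DT values to paddr_t. */
    if ( dt_addr != (paddr_t)dt_addr || dt_size != (paddr_t)dt_size )
        return -FDT_ERR_BADOFFSET;

    *addr = dt_addr;
    *size = dt_size;
    return 0;
}
```

Returning the negative libfdt code unchanged keeps the wrapper's contract identical to fdt_get_mem_rsv(), so the existing `< 0` checks in callers keep working.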
Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 15:40:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 15:40:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523570.813743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9uz-00013U-HN; Wed, 19 Apr 2023 15:40:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523570.813743; Wed, 19 Apr 2023 15:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9uz-00013N-Ec; Wed, 19 Apr 2023 15:40:25 +0000
Received: by outflank-mailman (input) for mailman id 523570;
 Wed, 19 Apr 2023 15:40:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=69VN=AK=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pp9ux-00013H-Sm
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 15:40:23 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20615.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::615])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7d717ad5-dec8-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 17:40:20 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by MN2PR12MB4470.namprd12.prod.outlook.com (2603:10b6:208:260::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Wed, 19 Apr
 2023 15:40:17 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94%7]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 15:40:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d717ad5-dec8-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JGQkYB+1If2dlpkI1fyLEM+3nTfmAaivMRP6zX1z01O6KVTUOU5DzvCmknt3GydlCzya5RRgH1pbHSdEVpsez3G/Xw2WNvDyo5RWbSVM3YFfxdfEguNwyloQaP1v0qkVm6Uv5zI+jWTW/PS0r3WqpsECgxqRjNdOrjrg6EStLuGccfA6gKb32euGJfMio37XWBo60p493/bNfDT3pb3RJhMSHWQ6oIRHh5utGVgQbOSG6B+7aqtbzbdA5p6ObtWjmPWdswy9IA3MRpMuSpa7Lu4kpCs+YUizWqmC8H5ha22oZuq1XuEp3SiS+Rmy75hOQ3CoCmikIPj41fGU/wNfxA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DeXM0AacA8l146vdLOHHv0X0V//zGxDisucT65c7Oxk=;
 b=YMrYj2TwTyvyk36Od3ObCb3aUA0OmLTOGaCTstf4UeavMzyejSWaBsLz/pRpZ5b8Yr1dnCOeHHuUBlTm4Zew1f5/VQI8pqB/Ctp0ymmSNEKWBOj7/FKpPM0gr0NLQnZUk+d6XziPu4zvIw4+PM8g3F1RthHv42etQQ4gfPhr+2RzzxqdNKYNixoRtieSoGs49JDvgRSGanVYPYy0BL8LYrjLrvjC1PlFpQBKvMNJ2femsKkg9boDWhc6EMCQfwEXW04kcLPcL/iz8CLfBW26Pjm42xLoGEIBtjrRkn2oF1RvsOs+HR5ZtDwCywT1KcgTnAy+mXSe8Cf0u99uzQZRmQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DeXM0AacA8l146vdLOHHv0X0V//zGxDisucT65c7Oxk=;
 b=zDjGmnPfDKKeaiLPitoVkU0pUq9RSonubdxVGZLohUVf/gw3yqitY20i9yy5gFXMfXENYXc38jL0V384eq1AsKoTjcUS5Le1J6BTCEQyLKBZEcUG/kBRGV+OqZ6J/CUHm3yg6+Jt19YuucqseS9VzyUJJGtuCxOeGU7kngGGbxU=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <561c8918-8d3b-91f6-5641-0313b0df5b82@amd.com>
Date: Wed, 19 Apr 2023 16:40:10 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
To: Julien Grall <julien@xen.org>, Michal Orzel <michal.orzel@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
 <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
 <8560fac2-5c92-8362-090d-bbaeae3f5d22@amd.com>
 <4e11e153-2224-15a1-2563-0f1a3c5a6a81@amd.com>
 <b9dbb568-e35a-4854-3b75-2b4b3ebb9ab6@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <b9dbb568-e35a-4854-3b75-2b4b3ebb9ab6@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0470.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::26) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|MN2PR12MB4470:EE_
X-MS-Office365-Filtering-Correlation-Id: 4206cc03-cb1e-4017-8447-08db40ec601b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CSEIrdmfwPL0RgSW0BtJwCrc9ErgAQNYeJ54B2P1YgVGiys1R4tk9WQXA+GtIhj/sgsUGLWNi8kXP1UifV59SNdZw0GbnoDSNQYp4ekBXK4U4svHM++4HO4cT4YAelz0Tvqzw0sFX9JgfcsjJdO/kjIxumk9jfcJnigzal504ks+QQeDDj1e2BJz7omfBO/8P6MLrFlMm5s18dAV9DsVLsUk7znVHg4eNPZFqUymKjjjW0j8R6gLzPSUa+sF51aM36o/HJSy7FN3y9W/CM4zaI6zvK3LYpg95EdSRIi2JaE5sPSKM83NjtcGeQZziPghmsJsgd13D2SDMyBlrgK6hwga+iPcPJCQyR+m/wF2mZwkj/9uGi2O+zl4do75XK2M4L0/N7+/b22MFWNhb3n7U33BNCScuJnr/VG30lQOzYjJhgcldEb/lbTcN5rFvclE+ti8U/aOSSWryMIZTBHQkOw4ZEablZFtOaLCCsOcTlW6tEEG0arvb4rMa90qFjJg5ioCFh4BZLBC1URkxYNh9f0Jov7O5j32j1Id4vOR/suqua+CuDLChK5GUbLRusl18vjx3+W7fEra8NCUKJejAj2gSicZ+Q8HLhV1GsY3WHFK/g5ntYrjAXr33ZWRDDNszWLAHzguS7Rf1QRrCQT9TA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN6PR12MB2621.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(346002)(396003)(366004)(136003)(451199021)(36756003)(2616005)(83380400001)(5660300002)(31686004)(7416002)(38100700002)(53546011)(8936002)(8676002)(2906002)(478600001)(6666004)(6486002)(316002)(26005)(4326008)(6506007)(6512007)(41300700001)(110136005)(66946007)(66556008)(66476007)(31696002)(186003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZWs0ZXVtdzZNcVhqdml2a1ZiREVZRTB4RlRQRUtHZ242WHFFRzgyZmFTOUE4?=
 =?utf-8?B?NUp5L3d0djJMbysrbG54UWtDRlptQm43cVN6dy9qcUljaEhReEJGRGkrSk1r?=
 =?utf-8?B?Q1poenRpeWdQSVVoRVZEY0dPc2NSUkx4enJhRmYvSk5jUEZDdThaN0wyS2Rp?=
 =?utf-8?B?SlBPYXhvUkxBSFVuamZqMVZHMWVvVXoyb05DMXBETkNJYW9xeG5POHJ3aXVa?=
 =?utf-8?B?QjZVRWVqakhRaTg3VzQzSjUxQUI2QnhVWlJ3d3c2c0hvMzE5aEQzRkVBazkx?=
 =?utf-8?B?OTRLdE1Mek5PTkxYUC9HMHZOZ2lTOTU4c3RBNmdYeDlLSlExS05mNW5IdkZC?=
 =?utf-8?B?bU8xU1NsSUhwM0c5MGNKVHlweHB3d0VzWUM0b0RFWUIwaVpCSDRCdVVUZnM4?=
 =?utf-8?B?Q201ZE55TVorMjJ1bWkxbGdXOGxPUzhBakxySjFEVzczNFVITTVNSGszTjA0?=
 =?utf-8?B?TXhmdGtaQzQ4OEhtS09qSnJiQ00xQk9WN3FzVzBRaGlmbDMyYzRJc3NSZEw1?=
 =?utf-8?B?UW5NelEybEpoSUlZTk5GWWRJQnp4SDFuNVBpd282VUtNTHZFZVQ3aU5yOWw5?=
 =?utf-8?B?b3hjQStTeENmMTVIUE5YRjF1OEFVRHJnN2hwWlc0cGIyQ0JLY1BnREd6M0E5?=
 =?utf-8?B?dVJvR0lrNmplUjl5cXl0S3U2SUdwTVZDN1dTb0VmQnV6OVY2aXVNOXpGTzRX?=
 =?utf-8?B?dHMzWFNEemgvYzZyYXAyeVBSazl2djFTQUFCcDhkTnhiQ1BlTmNiSGwwZmtN?=
 =?utf-8?B?UnRlbVdWZFZQYlJVOUN3MTBmYXAzRnNldGU5UzZUdkZlVlUzU2VJZlk2RFhK?=
 =?utf-8?B?U2Fkb2xXelZNbVFsanV1SWVvRmxONUVoSUJyUmdyR09kQTZ0QzM3b2lWMGto?=
 =?utf-8?B?Y2Rsc3dRaTJLcXNLSGcwTWxTSzE5dk9naG9lN1hxcmFDQ2VKaVYxdHJZYmh4?=
 =?utf-8?B?b0VWN3EyZlVablIwUkdhOVlmSnRZcFBaN0xrNWxlcE9lMVdrSjRtQitZU1Uz?=
 =?utf-8?B?V3BtNG9hU0tHZmZoYjR4RXl4Ry9zTVJoajRyd01ERXJJd2RWZUh4SmI4WERu?=
 =?utf-8?B?TzB4eUtWOStKOVFvckFQeFBRUG1vZUhRQUppaG9KRzczajVEUkNsb0IrQ0dB?=
 =?utf-8?B?bExINi9GSnpiT1k1ZmJLcGI1UWRYOVhTUm01bzZ3RHdSK3dFZWlqN051aGo3?=
 =?utf-8?B?b2NxS0d3dnBwY1QxVlJubG5mNUxvdzhjUDgxQU0vVVNCK3dwOW9WRmJscmhB?=
 =?utf-8?B?VFovYU1IQzRTUExqRTdESEQyc2d0d2UrdXRXRk5Pa1pUMDVVNzNXOG1oQ21B?=
 =?utf-8?B?eDJRVU5WcE5tZ3hxbis1eDdrSDBpM2dSM0ladDdYK0g0a0pLdUdCNTUySFpV?=
 =?utf-8?B?L2JMNmIvcUFJb25JVlBUZ25Va3M1d3RMVHZaS3JiRWhucEZURjBheExtdzlP?=
 =?utf-8?B?RnJPNjJjNVRaVkNLTmJ4eE9QL3pYaGZBd0RTOWdBR2NER1Q5b3lpZ0hzZS9Q?=
 =?utf-8?B?bEFCZ3RaRk1ZYU8rNFBiOEdFalNtTVFHWHZOdmprVXlsdkgyWjRsTmx5WFRt?=
 =?utf-8?B?SjdxR1dYRCt0ZFJnaVp0bVpTb2tQQjg2dmtQSXdkYW9sTU5GWmhrM3JkRGtW?=
 =?utf-8?B?T1dkZTN4dkIveEF3cEwwTDVQanFqTEJIc29UZ0ZWUnlGUi9Gak5EMTZJbWp0?=
 =?utf-8?B?RUpDQ2tBVGkxeVVJTFlwc25PVUhjTkplUG00RHpzRFdPRGYzYlFNU2w4Vlhn?=
 =?utf-8?B?N1JHUGJoQjRpQkR4Y2Q1d1hTanJSWGl5KyszUlRCTmM3S0Y0WWhEL0NRVjZl?=
 =?utf-8?B?eEM1VzBVMGhVQnJqQ3FOUUQwcjlxcENCMGRDRW40dld3VTcxd1NqdWdGQ0lj?=
 =?utf-8?B?ZnJkZzN6WFBPaThXTDFRS1Z4RnJvS01FQVZyNFZwM2dJMkVVNDVUbS85dmpw?=
 =?utf-8?B?cjV0aG1NWUtKK3lrY1pDcCtZK05JS0VOM0RGbmlCcDl0OHJsWkZ5WTZjdnor?=
 =?utf-8?B?OGJURGJFd0lNYm1qR1llMUlDT241dVcyN2EvdkZ1aTJKbFdxb2VQamt2dTFW?=
 =?utf-8?B?WHFrb2FXdUFSa01RTXlSNzM5azROWUd5WmhTa0kwWWVsMDJzdzFOZmYxM29E?=
 =?utf-8?Q?dTCzQ8oUBebpa6a2kpFEx+HIo?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4206cc03-cb1e-4017-8447-08db40ec601b
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 15:40:17.4744
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: N+K+RgsLuc8o7xx9QLDppJdQ3JvEIlSMnzdcbXEfqRvqdqVXplJK0Joc7hMyNq0AnibXowKHzQ1qMa53l3c61w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4470


On 19/04/2023 16:18, Julien Grall wrote:
> Hi Ayan,
Hi Julien,
>
> On 19/04/2023 15:58, Ayan Kumar Halder wrote:
>>
>> On 19/04/2023 14:54, Michal Orzel wrote:
>>> On 19/04/2023 15:19, Michal Orzel wrote:
>>>>
>>>> Hi Ayan,
>>>>
>>>> On 13/04/2023 19:37, Ayan Kumar Halder wrote:
>>>>>
>>>>> The DT functions (dt_read_number(), device_tree_get_reg(), 
>>>>> fdt_get_mem_rsv())
>>>>> currently accept or return 64-bit values.
>>>>>
>>>>> In the future, when we support 32-bit physical addresses, these DT 
>>>>> functions are
>>>>> expected to accept/return 32-bit or 64-bit values (depending on 
>>>>> the width of
>>>>> the physical address). Also, we wish to detect if any truncation has 
>>>>> occurred
>>>>> (i.e. while parsing 32-bit physical addresses from 64-bit values 
>>>>> read from the DT).
>>>>>
>>>>> device_tree_get_reg() should now be able to return paddr_t. This 
>>>>> is invoked by
>>>>> various callers to get DT address and size.
>>>>>
>>>>> For fdt_get_mem_rsv(), we have introduced a wrapper named
>>>>> fdt_get_mem_rsv_paddr() which will invoke fdt_get_mem_rsv() and 
>>>>> translate
>>>>> uint64_t to paddr_t. The reason is that we cannot modify 
>>>>> fdt_get_mem_rsv(), as it
>>>>> has been imported from an external source.
>>>>>
>>>>> For dt_read_number(), we have also introduced a wrapper named 
>>>>> dt_read_paddr() to read physical addresses. We chose not to modify 
>>>>> the original
>>>>> function as it is used in places where it needs to specifically 
>>>>> read 64-bit
>>>>> values from the DT (e.g. dt_property_read_u64()).
>>>>>
>>>>> Xen prints a warning when it detects truncation in cases where it is 
>>>>> not able to
>>>>> return an error.
>>>>>
>>>>> Also, replaced u32/u64 with uint32_t/uint64_t in the functions 
>>>>> touched
>>>>> by the code changes.
>>>>>
>>>>> Also, initialized variables to fix the warning 
>>>>> "-Werror=maybe-uninitialized".
>>>> I can see that you now explicitly set to 0 the variables passed to 
>>>> fdt_get_mem_rsv_paddr()
>>>> which hadn't been initialized before being passed to 
>>>> fdt_get_mem_rsv(). Is this what
>>>> you are referring to? I cannot reproduce it, hence my question.
>>> I can see why you got this error.
>>> Before your change, we always checked for an error from 
>>> fdt_get_mem_rsv() by checking if < 0.
>>> In your wrapper fdt_get_mem_rsv_paddr(), you switched (not sure why) 
>>> to checking if not zero.
>>> Because of this, you got an error and tried to fix it by 
>>> initializing the variables to 0.
>>
>> I actually wanted to return the error code obtained from 
>> fdt_get_mem_rsv() to the caller.
>>
>> In this case, it returns a single error code.
>
> I would rather not rely on this.
>
>> So does this look sane to you ?
>>
>> diff --git a/xen/include/xen/libfdt/libfdt-xen.h 
>> b/xen/include/xen/libfdt/libfdt-xen.h
>> index 3296a368a6..1da87d6668 100644
>> --- a/xen/include/xen/libfdt/libfdt-xen.h
>> +++ b/xen/include/xen/libfdt/libfdt-xen.h
>> @@ -22,9 +22,8 @@ static inline int fdt_get_mem_rsv_paddr(const void 
>> *fdt, int n,
>>       uint64_t dt_size;
>>       int ret = 0;
>>
>> -    ret = fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size);
>> -    if ( ret )
>> -        return ret;
>> +    if ( fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size) < 0 )
>> +        return -FDT_ERR_BADOFFSET;
>
> So the problem is that you check for ret to be non-zero, but the caller of 
> fdt_get_mem_rsv_paddr() checks for < 0.
>
> Given that fdt_get_mem_rsv() is not inline, the compiler doesn't know 
> that it will not return a positive value (other than 0). Hence why I 
> think you get the maybe-uninitialized warning.
>
> The snippet below should work:
>
> if ( ret < 0 )
>   return ret;

Awesome, thanks for the explanation. This works. :)

- Ayan

>
> Cheers,
>


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 15:43:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 15:43:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523575.813752 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9xO-0001ff-2o; Wed, 19 Apr 2023 15:42:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523575.813752; Wed, 19 Apr 2023 15:42:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9xN-0001fY-W6; Wed, 19 Apr 2023 15:42:53 +0000
Received: by outflank-mailman (input) for mailman id 523575;
 Wed, 19 Apr 2023 15:42:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RA+/=AK=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pp9xN-0001fK-11
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 15:42:53 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d8444d25-dec8-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 17:42:52 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id q21so5348682ljp.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Apr 2023 08:42:52 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 f3-20020a2e6a03000000b00298dc945e9bsm2945367ljc.125.2023.04.19.08.42.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Apr 2023 08:42:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8444d25-dec8-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681918971; x=1684510971;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=E46jpy0Zie/1cfMUcFkLi1mB8rgGvBGdIMdB+mtmZc0=;
        b=JOkinOp95RvBpDpCom11brtIcJVSBUfZN6ck+zgam3uwRNxX9SCjWu2SqRvA1EuTyH
         A4VyJiZzLCuWSZJPFlx9W82rJswDupB1BLgj0GXe3pD0QmcYKWZd9OSMaELKalBxW2Pr
         8UvvGqlXxRNWmQdMsyBzM/DSqZdQeBk6NB5NBKZ2nTyFN7eRYXtYdFEMjXcBI4TQNmib
         sqIJQNZYdTpV4l2YQDRwtXqGnlOwshTWbc56bzjg3sXBkQSFGGxUyklk3iW1KFucXAgO
         RfdMbtbuFbpGsXxs1K4faeKTMtbrHhBE+sfVUWA6AuvKSTpvxA3TLy58DsgIndGowtQB
         5C7g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681918971; x=1684510971;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=E46jpy0Zie/1cfMUcFkLi1mB8rgGvBGdIMdB+mtmZc0=;
        b=UapNJct3B/qYeP11Gq9BFv040QRr130ZoakcRqNGkNaNBUi2M5bEOpd7mT/G5X3ALl
         o1bWLUZKEcnvNn1jPlcdXzygYeHEKJw2l0SQkr7+z/fdFEcZoSnxa29WHo0IXVGQikBa
         4vHl1ITzCgFA0gJK0wk72zzNpaZW4UuyQmQXFr1eaZeNgSu1KnBU1wH9xlnWhVovvlx4
         ekV1OdzhAlfOBRfZoa4nhxclhBBvX9QPOb+ot5kPuKzQQRvvxUBVpnEdKspLH8c3APEz
         6zr6EuZy9GbmpTEHJ6l1aZgAvCuz8FKIfgPFqaPIofxCD9vFV8vmzb2g0jGaOfe9QWG9
         LxZg==
X-Gm-Message-State: AAQBX9d0l6jwPRX1o4pUV01SxP0+QM4wenE3R5v2Y/NdtYzPANK8Ka2a
	NsmHGcr9FbKBH5NiidBeBc5sJOTnolQ=
X-Google-Smtp-Source: AKy350Y9kQRULjxFCNmtC9rjhZdI6coee3xk3hp29Rj7PYWHXtuixmN0nW2dU10oJ4yVUoYlrOk7AA==
X-Received: by 2002:a05:651c:22a:b0:2a7:a54b:a10c with SMTP id z10-20020a05651c022a00b002a7a54ba10cmr2109510ljn.3.1681918971266;
        Wed, 19 Apr 2023 08:42:51 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v5 1/4] xen/riscv: add VM space layout
Date: Wed, 19 Apr 2023 18:42:44 +0300
Message-Id: <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.1681918194.git.oleksii.kurochko@gmail.com>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Also, add an explanation of why the top VA bits are ignored during translation.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V5:
* the patch was introduced in the current patch series.
---
 xen/arch/riscv/include/asm/config.h | 31 +++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 763a922a04..0c860e88ce 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -4,6 +4,37 @@
 #include <xen/const.h>
 #include <xen/page-size.h>
 
+/*
+ * RISC-V64 Layout:
+ *
+ * From the riscv-privileged doc:
+ *   When mapping between narrower and wider addresses,
+ *   RISC-V zero-extends a narrower physical address to a wider size.
+ *   The mapping between 64-bit virtual addresses and the 39-bit usable
+ *   address space of Sv39 is not based on zero-extension but instead
+ *   follows an entrenched convention that allows an OS to use one or
+ *   a few of the most-significant bits of a full-size (64-bit) virtual
+ *   address to quickly distinguish user and supervisor address regions.
+ *
+ * This means that:
+ *   the top VA bits are simply ignored for the purpose of translating to PA.
+ *
+ * The same is true for the other Sv{32, 48, 57} modes.
+ *
+ * ============================================================================
+ *    Start addr    |   End addr        |  Size  | VM area description
+ * ============================================================================
+ * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | Xen
+ * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | FDT
+ * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | Fixmap
+ *     .................. unused ..................
+ * 0000003200000000 |  0000007f40000000 | 331 GB | Direct map(L2 slot: 200-509)
+ * 0000003100000000 |  0000003140000000 |  1 GB  | Frametable(L2 slot: 196-197)
+ * 0000003080000000 |  00000030c0000000 |  1 GB  | VMAP (L2 slot: 194-195)
+ *     .................. unused ..................
+ * ============================================================================
+ */
+
 #if defined(CONFIG_RISCV_64)
 # define LONG_BYTEORDER 3
 # define ELFSIZE 64
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 15:43:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 15:43:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523576.813763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9xP-0001uw-9r; Wed, 19 Apr 2023 15:42:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523576.813763; Wed, 19 Apr 2023 15:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9xP-0001up-6S; Wed, 19 Apr 2023 15:42:55 +0000
Received: by outflank-mailman (input) for mailman id 523576;
 Wed, 19 Apr 2023 15:42:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RA+/=AK=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pp9xN-0001fS-Nq
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 15:42:53 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d7f15698-dec8-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 17:42:51 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id h8so18879865ljf.3
 for <xen-devel@lists.xenproject.org>; Wed, 19 Apr 2023 08:42:51 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 f3-20020a2e6a03000000b00298dc945e9bsm2945367ljc.125.2023.04.19.08.42.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Apr 2023 08:42:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7f15698-dec8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681918971; x=1684510971;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=m9Im33ejIUp7ApAAW/Ua4ZyW06n6nVeWXflx7lL6ZEM=;
        b=rd4a1AN3oWPZ6y22KK4Flk/1/D9CGdXd5tyPAB6U4Lx+sw6bedkIdEHH2kmMp3lpQZ
         ATgRl7K1MT6ApV1vT0Zz3HoJODR//jiQCkrbO4Wu+OVvgH3bCZVWPU++k/74OZqszNmm
         wkdrjkO0oa9rbze3PED7DLMeg7EMuWghY/tmWXNPVYoO5Vb8VcTAWgcCZ6eF48MB+aBu
         ivnVliPJadsrINW6QRPDupZrBB/uXnPXq4f8XH4+WrCshuL+Xoj/2VjFdMBMfvWoO/fr
         oqDoU57wPuOpymQb1Miei9rAuK1iU8bSUZ2WcNMX7/P897jAyvCQHbbJ4mnm00NHqIks
         Mf1Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681918971; x=1684510971;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=m9Im33ejIUp7ApAAW/Ua4ZyW06n6nVeWXflx7lL6ZEM=;
        b=SmD4g8B7jj6MRZky8atmQHPjN2PgDJs4qgqE/nDaW+Lmgnq4uXrjhNYDoeXvTSk0Ez
         dOm5d8VU7kUrWUt2AnQgyipWe8zPm1qm3GFqI07zRhTtWggvxx7NekS7odE2x+3IcN6A
         1wsDE7VLC5C/HCf/ceZgc+6fxVHn27qxSUmF3pVZTN2eQmuWAKRCFbULuyvSGDQGxfxj
         4W7KRhQ2iYKIMvrcaeS5Q98Ab0wU//czt4VGXZVnlcLoVbMCjmBQ6bv3WpF4KcuFCNHQ
         aOnYOkRZTduSTyYpkmaW94pzZcUDwXP5dH1OWaSS9Zlmn1O3keYiny7hp8GFaDJ+MGjM
         3B9w==
X-Gm-Message-State: AAQBX9cfJ8txzANmT63fdEfPz02LKSCLMNTjey72HUgziCQgpNdrP7qG
	Ah9XjsOWvB8SCKi0mBMa8/UJiozbOgk=
X-Google-Smtp-Source: AKy350YvXotrsrOAfp1KIsVtnNfTmiB4HcxPkAUCpXC9hm8f/HY0ULz/E8RXXc9AohmAk7p8eUw2VQ==
X-Received: by 2002:a2e:9091:0:b0:2a7:6e12:f75a with SMTP id l17-20020a2e9091000000b002a76e12f75amr2080863ljg.53.1681918970586;
        Wed, 19 Apr 2023 08:42:50 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v5 0/4] enable MMU for RISC-V
Date: Wed, 19 Apr 2023 18:42:43 +0300
Message-Id: <cover.1681918194.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following things:
1. Functionality to build the page tables for Xen that map the
   link-time address range to the load-time (physical) address range.
2. Check that Xen is less than the page size.
3. Check that load addresses don't overlap with linker addresses.
4. Prepare things for a proper switch to the virtual memory world.
5. Load the built page table into the SATP register.
6. Enable the MMU.

---
Changes in V5:
  * rebase the patch series on top of the current staging
  * update the cover letter: the info about the patches on which the
    MMU patch series is based was removed, as they were merged to staging
  * add a new patch with a description of the VM layout for RISC-V64
  * indent the fields of the pte_t struct
  * rename addr_to_pte() and ppn_to_paddr() to match their content
---
Changes in V4:
  * use GB() macros instead of defining SZ_1G
  * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))
  * remove unnecessary 'asm' word at the end of #error
  * encapsulate pte_t definition in a struct
  * rename addr_to_ppn() to ppn_to_paddr().
  * change type of paddr argument from const unsigned long to paddr_t
  * pte_to_paddr() update prototype.
  * calculate size of Xen binary based on an amount of page tables
  * use unsigned int instead of uint32_t, as the use of a fixed-width
    type isn't warranted
  * remove extern of bss_{start,end} as they aren't used in mm.c anymore
  * fix code style
  * add argument for HANDLE_PGTBL macros instead of curr_lvl_num variable
  * make enable_mmu() noinline to prevent issues under link-time
    optimization, because of the nature of enable_mmu()
  * add function to check that SATP_MODE is supported.
  * update the commit message
  * update setup_initial_pagetables to set correct PTE flags in one pass
    instead of calling setup_pte_permissions after setup_initial_pagetables()
    as setup_initial_pagetables() isn't used to change permission flags.
---
Changes in V3:
  * Update the cover letter message: the patch series doesn't depend on
    [ RISC-V basic exception handling implementation ] as it was decided
    to enable the MMU before the implementation of exception handling. Also, the
    MMU patch series is based on two other patches which weren't merged [1]
    and [2]
  - Update the commit message for [ [PATCH v3 1/3]
    xen/riscv: introduce setup_initial_pages ].
  - update definition of pte_t structure to have a proper size of pte_t in case of RV32.
  - update asm/mm.h with new functions and remove unnecessary 'extern'.
  - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
  - update paddr_to_pte() to receive permissions as an argument.
  - add check that map_start & pa_start is properly aligned.
  - move  defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to <asm/page-bits.h>
  - Rename PTE_SHIFT to PTE_PPN_SHIFT
  - refactor setup_initial_pagetables: map all LINK addresses to LOAD addresses and
    afterwards set up the PTE permissions for the sections; update the check that
    linker and load addresses don't overlap.
  - refactor setup_initial_mapping: allocate pagetable 'dynamically' if it is necessary.
  - rewrite enable_mmu in C; add the check that map_start and pa_start are aligned on 4k
    boundary.
  - update the comment for the setup_initial_pagetables function
  - Add RV_STAGE1_MODE to support different MMU modes.
  - update the commit message that MMU is also enabled here
  - set XEN_VIRT_START very high to not overlap with load address range
  - align bss section
---
Changes in V2:
  * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK,SIZE,...} and
    introduce XEN_PT_LEVEL_*() and LEVEL_* instead
  * Rework pt_linear_offset() and pt_index() based on XEN_PT_LEVEL_*()
  * Remove clear_pagetables() functions as pagetables were zeroed during
    .bss initialization
  * Rename _setup_initial_pagetables() to setup_initial_mapping()
  * Make PTE_DEFAULT equal to RX.
  * Update prototype of setup_initial_mapping(..., bool writable) ->
    setup_initial_mapping(..., UL flags)
  * Update calls of setup_initial_mapping according to new prototype
  * Remove unnecessary call of:
    _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
  * Define index* in the loop of setup_initial_mapping
  * Remove attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
    as we don't have such section
  * make arguments of paddr_to_pte() and pte_is_valid() as const.
  * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
  * update  'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
  * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
  * set __section(".bss.page_aligned") for page tables arrays
  * fix indentation
  * Change '__attribute__((section(".entry")))' to '__init'
  * Remove alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
    setup_initial_mapping() as they should already be aligned.
  * Remove clear_pagetables() as initial pagetables will be
    zeroed during bss initialization
  * Remove __attribute__((section(".entry")) for setup_initial_pagetables()
    as there is no such section in xen.lds.S
  * Update the argument of pte_is_valid() to "const pte_t *p"
  * Remove patch "[PATCH v1 3/3] automation: update RISC-V smoke test" from the patch series
    as it was introduced simplified approach for RISC-V smoke test by Andrew Cooper
  * Add patch [xen/riscv: remove dummy_bss variable] as the dummy_bss
    variable no longer makes sense after the introduction of initial page tables.
---

Oleksii Kurochko (4):
  xen/riscv: add VM space layout
  xen/riscv: introduce setup_initial_pages
  xen/riscv: setup initial pagetables
  xen/riscv: remove dummy_bss variable

 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  43 +++-
 xen/arch/riscv/include/asm/mm.h        |   9 +
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  63 +++++
 xen/arch/riscv/mm.c                    | 319 +++++++++++++++++++++++++
 xen/arch/riscv/riscv64/head.S          |   2 +
 xen/arch/riscv/setup.c                 |  22 +-
 xen/arch/riscv/xen.lds.S               |   4 +
 9 files changed, 464 insertions(+), 9 deletions(-)
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 15:43:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 15:43:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523577.813769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9xP-0001z4-OV; Wed, 19 Apr 2023 15:42:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523577.813769; Wed, 19 Apr 2023 15:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9xP-0001yM-GM; Wed, 19 Apr 2023 15:42:55 +0000
Received: by outflank-mailman (input) for mailman id 523577;
 Wed, 19 Apr 2023 15:42:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RA+/=AK=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pp9xN-0001fK-Tc
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 15:42:54 +0000
Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com
 [2a00:1450:4864:20::22c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d89d7361-dec8-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 17:42:52 +0200 (CEST)
Received: by mail-lj1-x22c.google.com with SMTP id z38so18635590ljq.12
 for <xen-devel@lists.xenproject.org>; Wed, 19 Apr 2023 08:42:52 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 f3-20020a2e6a03000000b00298dc945e9bsm2945367ljc.125.2023.04.19.08.42.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Apr 2023 08:42:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d89d7361-dec8-11ed-b21f-6b7b168915f2
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v5 2/4] xen/riscv: introduce setup_initial_pages
Date: Wed, 19 Apr 2023 18:42:45 +0300
Message-Id: <5b27693bcdf6d64381314aeef72cfe03dee8d73a.1681918194.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.1681918194.git.oleksii.kurochko@gmail.com>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The idea was taken from xvisor, but the following changes
were made:
* Use only the minimal part of the code needed to enable the MMU.
* Rename the {_}setup_initial_pagetables functions.
* Add an argument to setup_initial_mapping to allow setting
  PTE flags.
* Update the setup_initial_pagetables function to map sections
  with the correct PTE flags.
* Rewrite enable_mmu() in C.
* Map the linker address range to the load address range without
  a 1:1 mapping. The mapping is 1:1 only when load_start_addr
  is equal to linker_start_addr.
* Add safety checks such as:
  * Xen's size is less than the page size
  * the linker address range doesn't overlap the load address
    range
* Rework the {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK} macros.
* Change PTE_LEAF_DEFAULT to RW instead of RWX.
* Remove phys_offset as it is no longer used.
* Remove the alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0);
  in setup_initial_mapping() as they should already be aligned.
  Add a check that {map, pa}_start are aligned.
* Remove clear_pagetables() as the initial page tables will be
  zeroed during .bss initialization.
* Remove __attribute__((section(".entry"))) for setup_initial_pagetables()
  as there is no such section in xen.lds.S.
* Update the argument of pte_is_valid() to "const pte_t *p".
* Add a check that Xen's load address is aligned on a 4 KB boundary.
* Refactor setup_initial_pagetables() so it maps the linker
  address range to the load address range and afterwards sets the
  needed permissions for specific sections (such as .text, .rodata,
  etc.); otherwise RW permissions are set by default.
* Add a function to check that the requested SATP_MODE is supported.

Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V5:
	* Indent fields of pte_t struct
	* Rename addr_to_pte() and ppn_to_paddr() to match their content
---
Changes in V4:
  * use GB() macros instead of defining SZ_1G
  * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))
  * remove unnecessary 'asm' word at the end of #error
  * encapsulate pte_t definition in a struct
  * rename addr_to_ppn() to ppn_to_paddr().
  * change type of paddr argument from const unsigned long to paddr_t
  * pte_to_paddr() update prototype.
  * calculate the size of the Xen binary based on the number of page tables
  * use unsigned int instead of uint32_t where a fixed-width type
    isn't warranted.
  * remove extern of bss_{start,end} as they aren't used in mm.c anymore
  * fix code style
  * add an argument to the HANDLE_PGTBL macro instead of using a
    curr_lvl_num variable
  * mark enable_mmu() as noinline to prevent it from being inlined
    under link-time optimization, given the nature of enable_mmu()
  * add function to check that SATP_MODE is supported.
  * update the commit message
  * update setup_initial_pagetables to set the correct PTE flags in one
    pass instead of calling setup_pte_permissions() after
    setup_initial_pagetables(), as the latter isn't meant to change
    permission flags.
---
Changes in V3:
 - update definition of pte_t structure to have a proper size of pte_t
   in case of RV32.
 - update asm/mm.h with new functions and remove unnecessary 'extern'.
 - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
 - update paddr_to_pte() to receive permissions as an argument.
 - add check that map_start & pa_start is properly aligned.
 - move the PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT defines to
   <asm/page-bits.h>
 - Rename PTE_SHIFT to PTE_PPN_SHIFT
 - refactor setup_initial_pagetables: map all LINK addresses to LOAD addresses
   and afterwards set up PTE permissions for sections; update the check that
   linker and load addresses don't overlap.
 - refactor setup_initial_mapping: allocate pagetable 'dynamically' if it is
   necessary.
 - rewrite enable_mmu in C; add the check that map_start and pa_start are
   aligned on 4k boundary.
 - update the comment for the setup_initial_pagetables function
 - Add RV_STAGE1_MODE to support different MMU modes
 - set XEN_VIRT_START very high to not overlap with load address range
 - align bss section
---
Changes in V2:
 * update the commit message:
 * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK,SIZE,...} and introduce
   XEN_PT_LEVEL_*() and LEVEL_* instead
 * Rework pt_linear_offset() and pt_index() based on XEN_PT_LEVEL_*()
 * Remove clear_pagetables() functions as pagetables were zeroed during
   .bss initialization
 * Rename _setup_initial_pagetables() to setup_initial_mapping()
 * Make PTE_DEFAULT equal to RX.
 * Update prototype of setup_initial_mapping(..., bool writable) -> 
   setup_initial_mapping(..., UL flags)  
 * Update calls of setup_initial_mapping according to new prototype
 * Remove unnecessary call of:
   _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
 * Define index* in the loop of setup_initial_mapping
 * Remove attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
   as we don't have such section
 * make arguments of paddr_to_pte() and pte_is_valid() as const.
 * make xen_second_pagetable static.
 * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
 * update  'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
 * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
 * set __section(".bss.page_aligned") for page tables arrays
 * fix indentation
 * Change '__attribute__((section(".entry")))' to '__init'
 * Remove phys_offset as it isn't used now.
 * Remove alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
   setup_initial_mapping() as they should already be aligned.
 * Remove clear_pagetables() as initial pagetables will be
   zeroed during bss initialization
 * Remove __attribute__((section(".entry")) for setup_initial_pagetables()
   as there is no such section in xen.lds.S
 * Update the argument of pte_is_valid() to "const pte_t *p"
---

 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  12 +-
 xen/arch/riscv/include/asm/mm.h        |   9 +
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  63 +++++
 xen/arch/riscv/mm.c                    | 319 +++++++++++++++++++++++++
 xen/arch/riscv/riscv64/head.S          |   2 +
 xen/arch/riscv/setup.c                 |  11 +
 xen/arch/riscv/xen.lds.S               |   4 +
 9 files changed, 430 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 443f6bf15f..956ceb02df 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += entry.o
+obj-y += mm.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 0c860e88ce..4cd4f7a701 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -70,12 +70,22 @@
   name:
 #endif
 
-#define XEN_VIRT_START  _AT(UL, 0x80200000)
+#ifdef CONFIG_RISCV_64
+#define XEN_VIRT_START 0xFFFFFFFFC0000000 /* (_AC(-1, UL) + 1 - GB(1)) */
+#else
+#error "RV32 isn't supported"
+#endif
 
 #define SMP_CACHE_BYTES (1 << 6)
 
 #define STACK_SIZE PAGE_SIZE
 
+#ifdef CONFIG_RISCV_64
+#define RV_STAGE1_MODE SATP_MODE_SV39
+#else
+#define RV_STAGE1_MODE SATP_MODE_SV32
+#endif
+
 #endif /* __RISCV_CONFIG_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
new file mode 100644
index 0000000000..e16ce66fae
--- /dev/null
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -0,0 +1,9 @@
+#ifndef _ASM_RISCV_MM_H
+#define _ASM_RISCV_MM_H
+
+void setup_initial_pagetables(void);
+
+void enable_mmu(void);
+void cont_after_mmu_is_enabled(void);
+
+#endif /* _ASM_RISCV_MM_H */
diff --git a/xen/arch/riscv/include/asm/page-bits.h b/xen/arch/riscv/include/asm/page-bits.h
index 1801820294..0879a527f2 100644
--- a/xen/arch/riscv/include/asm/page-bits.h
+++ b/xen/arch/riscv/include/asm/page-bits.h
@@ -1,6 +1,16 @@
 #ifndef __RISCV_PAGE_BITS_H__
 #define __RISCV_PAGE_BITS_H__
 
+#ifdef CONFIG_RISCV_64
+#define PAGETABLE_ORDER         (9)
+#else /* CONFIG_RISCV_32 */
+#define PAGETABLE_ORDER         (10)
+#endif
+
+#define PAGETABLE_ENTRIES       (1 << PAGETABLE_ORDER)
+
+#define PTE_PPN_SHIFT           10
+
 #define PAGE_SHIFT              12 /* 4 KiB Pages */
 #define PADDR_BITS              56 /* 44-bit PPN */
 
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
new file mode 100644
index 0000000000..daa880558e
--- /dev/null
+++ b/xen/arch/riscv/include/asm/page.h
@@ -0,0 +1,63 @@
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <xen/const.h>
+#include <xen/types.h>
+
+#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
+
+#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
+#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
+#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
+#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
+#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
+
+#define PTE_VALID                   BIT(0, UL)
+#define PTE_READABLE                BIT(1, UL)
+#define PTE_WRITABLE                BIT(2, UL)
+#define PTE_EXECUTABLE              BIT(3, UL)
+#define PTE_USER                    BIT(4, UL)
+#define PTE_GLOBAL                  BIT(5, UL)
+#define PTE_ACCESSED                BIT(6, UL)
+#define PTE_DIRTY                   BIT(7, UL)
+#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
+
+#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
+#define PTE_TABLE                   (PTE_VALID)
+
+/* Calculate the offsets into the pagetables for a given VA */
+#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
+
+#define pt_index(lvl, va) pt_linear_offset(lvl, (va) & XEN_PT_LEVEL_MASK(lvl))
+
+/* Page Table entry */
+typedef struct {
+#ifdef CONFIG_RISCV_64
+    uint64_t pte;
+#else
+    uint32_t pte;
+#endif
+} pte_t;
+
+#define pte_to_addr(x) (((x) >> PTE_PPN_SHIFT) << PAGE_SHIFT)
+
+#define addr_to_pte(x) (((x) >> PAGE_SHIFT) << PTE_PPN_SHIFT)
+
+static inline pte_t paddr_to_pte(const paddr_t paddr,
+                                 const unsigned long permissions)
+{
+    unsigned long tmp = addr_to_pte(paddr);
+    return (pte_t) { .pte = tmp | permissions };
+}
+
+static inline paddr_t pte_to_paddr(const pte_t pte)
+{
+    return pte_to_addr(pte.pte);
+}
+
+static inline bool pte_is_valid(const pte_t p)
+{
+    return p.pte & PTE_VALID;
+}
+
+#endif /* _ASM_RISCV_PAGE_H */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
new file mode 100644
index 0000000000..43b7181c33
--- /dev/null
+++ b/xen/arch/riscv/mm.c
@@ -0,0 +1,319 @@
+#include <xen/compiler.h>
+#include <xen/init.h>
+#include <xen/kernel.h>
+
+#include <asm/early_printk.h>
+#include <asm/config.h>
+#include <asm/csr.h>
+#include <asm/mm.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+
+struct mmu_desc {
+    unsigned long num_levels;
+    unsigned int pgtbl_count;
+    pte_t *next_pgtbl;
+    pte_t *pgtbl_base;
+};
+
+extern unsigned char cpu0_boot_stack[STACK_SIZE];
+
+#define PHYS_OFFSET ((unsigned long)_start - XEN_VIRT_START)
+#define LOAD_TO_LINK(addr) ((addr) - PHYS_OFFSET)
+#define LINK_TO_LOAD(addr) ((addr) + PHYS_OFFSET)
+
+
+/*
+ * It is expected that Xen won't be larger than 2 MB.
+ * The check in xen.lds.S guarantees that.
+ *
+ * At least 4 page tables are needed to cover 2 MB when
+ * Sv48 or Sv57 is used: one page table for each page
+ * table level, with PAGE_SIZE = 4 KB.
+ *
+ * One L0 page table can cover 2 MB
+ * (512 entries of one page table * PAGE_SIZE).
+ *
+ * One more page table may be needed if Xen's load
+ * address isn't 2 MB aligned.
+ */
+#define PGTBL_INITIAL_COUNT     (5)
+
+#define PGTBL_ENTRY_AMOUNT  (PAGE_SIZE / sizeof(pte_t))
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_root[PGTBL_ENTRY_AMOUNT];
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_nonroot[PGTBL_INITIAL_COUNT * PGTBL_ENTRY_AMOUNT];
+
+#define HANDLE_PGTBL(curr_lvl_num)                                          \
+    index = pt_index(curr_lvl_num, page_addr);                              \
+    if ( pte_is_valid(pgtbl[index]) )                                       \
+    {                                                                       \
+        /* Find L{ 0-3 } table */                                           \
+        pgtbl = (pte_t *)pte_to_paddr(pgtbl[index]);                        \
+    }                                                                       \
+    else                                                                    \
+    {                                                                       \
+        /* Allocate new L{0-3} page table */                                \
+        if ( mmu_desc->pgtbl_count == PGTBL_INITIAL_COUNT )                 \
+        {                                                                   \
+            early_printk("(XEN) No initial table available\n");             \
+            /* panic(), BUG() or ASSERT() aren't ready now. */              \
+            die();                                                          \
+        }                                                                   \
+        mmu_desc->pgtbl_count++;                                            \
+        pgtbl[index] = paddr_to_pte((unsigned long)mmu_desc->next_pgtbl,    \
+                                    PTE_VALID);                             \
+        pgtbl = mmu_desc->next_pgtbl;                                       \
+        mmu_desc->next_pgtbl += PGTBL_ENTRY_AMOUNT;                         \
+    }
+
+static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
+                                         unsigned long map_start,
+                                         unsigned long map_end,
+                                         unsigned long pa_start,
+                                         unsigned long permissions)
+{
+    unsigned int index;
+    pte_t *pgtbl;
+    unsigned long page_addr;
+    pte_t pte_to_be_written;
+    unsigned long paddr;
+    unsigned long tmp_permissions;
+
+    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
+    {
+        early_printk("(XEN) Xen should be loaded at a 4k boundary\n");
+        die();
+    }
+
+    if ( map_start & ~XEN_PT_LEVEL_MAP_MASK(0) ||
+         pa_start & ~XEN_PT_LEVEL_MAP_MASK(0) )
+    {
+        early_printk("(XEN) map and pa start addresses should be aligned\n");
+        /* panic(), BUG() or ASSERT() aren't ready now. */
+        die();
+    }
+
+    page_addr = map_start;
+    while ( page_addr < map_end )
+    {
+        pgtbl = mmu_desc->pgtbl_base;
+
+        switch (mmu_desc->num_levels)
+        {
+            case 4: /* Level 3 */
+                HANDLE_PGTBL(3);
+            case 3: /* Level 2 */
+                HANDLE_PGTBL(2);
+            case 2: /* Level 1 */
+                HANDLE_PGTBL(1);
+            case 1: /* Level 0 */
+                index = pt_index(0, page_addr);
+                paddr = (page_addr - map_start) + pa_start;
+
+                tmp_permissions = permissions;
+
+                if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
+                     is_kernel_inittext(LINK_TO_LOAD(page_addr)) )
+                    tmp_permissions =
+                        PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
+
+                if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
+                    tmp_permissions = PTE_READABLE | PTE_VALID;
+
+                pte_to_be_written = paddr_to_pte(paddr, tmp_permissions);
+
+                if ( !pte_is_valid(pgtbl[index]) )
+                    pgtbl[index] = pte_to_be_written;
+                else
+                {
+                    /*
+                     * Get the addresses of the current PTE and the
+                     * one to be written, without permission flags.
+                     */
+                    unsigned long curr_pte =
+                        pgtbl[index].pte & ~((1 << PTE_PPN_SHIFT) - 1);
+
+                    pte_to_be_written.pte &= ~((1 << PTE_PPN_SHIFT) - 1);
+
+                    if ( curr_pte != pte_to_be_written.pte )
+                    {
+                        early_printk("The PTE to be written doesn't "
+                                     "match the PTE being overwritten\n");
+                        /* panic(), <asm/bug.h> aren't ready now. */
+                        die();
+                    }
+                }
+        }
+
+        /* Point to next page */
+        page_addr += XEN_PT_LEVEL_SIZE(0);
+    }
+}
+
+static void __init calc_pgtbl_lvls_num(struct mmu_desc *mmu_desc)
+{
+    unsigned long satp_mode = RV_STAGE1_MODE;
+
+    /* Number of page table levels */
+    switch (satp_mode)
+    {
+        case SATP_MODE_SV32:
+            mmu_desc->num_levels = 2;
+            break;
+        case SATP_MODE_SV39:
+            mmu_desc->num_levels = 3;
+            break;
+        case SATP_MODE_SV48:
+            mmu_desc->num_levels = 4;
+            break;
+        default:
+            early_printk("(XEN) Unsupported SATP_MODE\n");
+            die();
+    }
+}
+
+static bool __init check_pgtbl_mode_support(struct mmu_desc *mmu_desc,
+                                            unsigned long load_start,
+                                            unsigned long satp_mode)
+{
+    bool is_mode_supported = false;
+    unsigned int index;
+    unsigned int page_table_level = (mmu_desc->num_levels - 1);
+    unsigned long level_map_mask = XEN_PT_LEVEL_MAP_MASK(page_table_level);
+
+    unsigned long aligned_load_start = load_start & level_map_mask;
+    unsigned long aligned_page_size = XEN_PT_LEVEL_SIZE(page_table_level);
+    unsigned long xen_size = (unsigned long)(_end - start);
+
+    if ( (load_start + xen_size) > (aligned_load_start + aligned_page_size) )
+    {
+        early_printk("please place Xen within a single "
+                     "XEN_PT_LEVEL_SIZE( {L3 | L2 | L1} ) region, "
+                     "depending on the expected SATP_MODE\n"
+                     "XEN_PT_LEVEL_SIZE is defined in <asm/page.h>\n");
+        die();
+    }
+
+    index = pt_index(page_table_level, aligned_load_start);
+    stage1_pgtbl_root[index] = paddr_to_pte(aligned_load_start,
+                                            PTE_LEAF_DEFAULT | PTE_EXECUTABLE);
+
+    asm volatile("sfence.vma");
+    csr_write(CSR_SATP,
+              ((unsigned long)stage1_pgtbl_root >> PAGE_SHIFT) |
+              satp_mode << SATP_MODE_SHIFT);
+
+    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == satp_mode )
+        is_mode_supported = true;
+
+    /* Clean MMU root page table and disable MMU */
+    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
+
+    csr_write(CSR_SATP, 0);
+    asm volatile("sfence.vma");
+
+    return is_mode_supported;
+}
+
+/*
+ * setup_initial_pagetables:
+ *
+ * Build the initial page tables for Xen:
+ *  1. Calculate the number of page table levels.
+ *  2. Initialise the MMU description structure.
+ *  3. Check that the linker address range doesn't overlap
+ *     the load address range.
+ *  4. Map the linker address range to the load address range
+ *     (the mapping is not necessarily 1:1; it is 1:1 only when
+ *     the linker address equals the load address), with RW
+ *     permissions by default.
+ *  5. Set up proper PTE permissions for each section.
+ */
+void __init setup_initial_pagetables(void)
+{
+    struct mmu_desc mmu_desc = { 0, 0, NULL, 0 };
+
+    /*
+     * Access to _{start, end} is always PC-relative,
+     * so accessing them yields the load addresses of
+     * the start and end of Xen.
+     * LOAD_TO_LINK() must be used to get the linker
+     * addresses.
+     */
+    unsigned long load_start    = (unsigned long)_start;
+    unsigned long load_end      = (unsigned long)_end;
+    unsigned long linker_start  = LOAD_TO_LINK(load_start);
+    unsigned long linker_end    = LOAD_TO_LINK(load_end);
+
+    if ( (linker_start != load_start) &&
+         (linker_start <= load_end) && (load_start <= linker_end) ) {
+        early_printk("(XEN) linker and load address ranges overlap\n");
+        die();
+    }
+
+    calc_pgtbl_lvls_num(&mmu_desc);
+
+    if ( !check_pgtbl_mode_support(&mmu_desc, load_start, RV_STAGE1_MODE) )
+    {
+        early_printk("requested MMU mode isn't supported by CPU\n"
+                     "Please choose different in <asm/config.h>\n");
+        die();
+    }
+
+    mmu_desc.pgtbl_base = stage1_pgtbl_root;
+    mmu_desc.next_pgtbl = stage1_pgtbl_nonroot;
+
+    setup_initial_mapping(&mmu_desc,
+                          linker_start,
+                          linker_end,
+                          load_start,
+                          PTE_LEAF_DEFAULT);
+}
+
+void __init noinline enable_mmu(void)
+{
+    /*
+     * Calculate the link-time address of the mmu_is_enabled
+     * label and write it to CSR_STVEC.
+     * The MMU is configured so that linker addresses are mapped
+     * to load addresses; thus, if the linker addresses are not
+     * equal to the load addresses, enabling the MMU will cause an
+     * exception and a jump to the link-time addresses. Otherwise,
+     * if the load addresses equal the linker addresses, the code
+     * after the mmu_is_enabled label runs without an exception.
+     */
+    csr_write(CSR_STVEC, LOAD_TO_LINK((unsigned long)&&mmu_is_enabled));
+
+    /* Ensure page table writes precede loading the SATP */
+    asm volatile("sfence.vma");
+
+    /* Enable the MMU and load the new pagetable for Xen */
+    csr_write(CSR_SATP,
+              ((unsigned long)stage1_pgtbl_root >> PAGE_SHIFT) |
+              RV_STAGE1_MODE << SATP_MODE_SHIFT);
+
+    asm volatile(".align 2");
+      mmu_is_enabled:
+    /*
+     * The stack should be re-initialised because:
+     * 1. The current stack pointer is a load-time address, which
+     *    causes an issue when the load start address isn't equal
+     *    to the linker start address.
+     * 2. Addresses already on the stack are load-time relative,
+     *    which is likewise an issue when the load start address
+     *    isn't equal to the linker start address.
+     */
+    asm volatile ("mv sp, %0" : : "r"((unsigned long)cpu0_boot_stack + STACK_SIZE));
+
+    /*
+     * We can't return to the caller because the stack was reset
+     * and the caller may have stashed variables on it.
+     * Jump to a brand new function instead.
+     */
+    cont_after_mmu_is_enabled();
+}
+
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 8887f0cbd4..b3309d902c 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,4 +1,5 @@
 #include <asm/asm.h>
+#include <asm/asm-offsets.h>
 #include <asm/riscv_encoding.h>
 
         .section .text.header, "ax", %progbits
@@ -32,3 +33,4 @@ ENTRY(start)
         add     sp, sp, t0
 
         tail    start_xen
+
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 3786f337e0..315804aa87 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -2,6 +2,7 @@
 #include <xen/init.h>
 
 #include <asm/early_printk.h>
+#include <asm/mm.h>
 
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
@@ -26,3 +27,13 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 
     unreachable();
 }
+
+void __init noreturn cont_after_mmu_is_enabled(void)
+{
+    early_printk("All set up\n");
+
+    for ( ;; )
+        asm volatile ("wfi");
+
+    unreachable();
+}
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index 31e0d3576c..2f7f76bee6 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -136,6 +136,7 @@ SECTIONS
     . = ALIGN(POINTER_ALIGN);
     __init_end = .;
 
+    . = ALIGN(PAGE_SIZE);
     .bss : {                     /* BSS */
         __bss_start = .;
         *(.bss.stack_aligned)
@@ -172,3 +173,6 @@ ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
 
 ASSERT(!SIZEOF(.got),      ".got non-empty")
 ASSERT(!SIZEOF(.got.plt),  ".got.plt non-empty")
+
+ASSERT(_end - _start <= MB(2), "Xen too large for early-boot assumptions")
+
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 15:43:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 15:43:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523578.813774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9xQ-00025C-0F; Wed, 19 Apr 2023 15:42:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523578.813774; Wed, 19 Apr 2023 15:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9xP-00023t-QR; Wed, 19 Apr 2023 15:42:55 +0000
Received: by outflank-mailman (input) for mailman id 523578;
 Wed, 19 Apr 2023 15:42:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RA+/=AK=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pp9xO-0001fK-QX
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 15:42:54 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d9981225-dec8-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 17:42:54 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id r9so21017386ljp.9
 for <xen-devel@lists.xenproject.org>; Wed, 19 Apr 2023 08:42:54 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 f3-20020a2e6a03000000b00298dc945e9bsm2945367ljc.125.2023.04.19.08.42.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Apr 2023 08:42:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9981225-dec8-11ed-b21f-6b7b168915f2
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v5 4/4] xen/riscv: remove dummy_bss variable
Date: Wed, 19 Apr 2023 18:42:47 +0300
Message-Id: <6b56f750edc5d8f3ed41769896c865e3ea89c68f.1681918194.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.1681918194.git.oleksii.kurochko@gmail.com>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

After the introduction of the initial page tables there is no longer
any point in the dummy_bss variable, as the .bss section will not be
empty anymore.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V5:
 - Nothing changed. Only rebase
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 * patch was introduced in this patch series (v3).
---
Changes in V2:
 * patch was introduced in this patch series (v2).
---
 xen/arch/riscv/setup.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index cf5dc5824e..845d18d86f 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -8,14 +8,6 @@
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
-/*  
- * To be sure that .bss isn't zero. It will simplify code of
- * .bss initialization.
- * TODO:
- *   To be deleted when the first real .bss user appears
- */
-int dummy_bss __attribute__((unused));
-
 void __init noreturn start_xen(unsigned long bootcpu_id,
                                paddr_t dtb_addr)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 15:43:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 15:43:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523579.813782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9xQ-0002J5-Ii; Wed, 19 Apr 2023 15:42:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523579.813782; Wed, 19 Apr 2023 15:42:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pp9xQ-0002Eg-D3; Wed, 19 Apr 2023 15:42:56 +0000
Received: by outflank-mailman (input) for mailman id 523579;
 Wed, 19 Apr 2023 15:42:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RA+/=AK=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pp9xO-0001fS-WB
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 15:42:54 +0000
Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com
 [2a00:1450:4864:20::22c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d9091f47-dec8-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 17:42:53 +0200 (CEST)
Received: by mail-lj1-x22c.google.com with SMTP id y24so6980342ljm.6
 for <xen-devel@lists.xenproject.org>; Wed, 19 Apr 2023 08:42:53 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 f3-20020a2e6a03000000b00298dc945e9bsm2945367ljc.125.2023.04.19.08.42.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Apr 2023 08:42:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9091f47-dec8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681918973; x=1684510973;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=sK6lOiJmrrx8EIkDgF6+ULo+pkwl5Br6n5iUV/9+gh0=;
        b=JY3+kb4x2B3fIW868V/DEQbhu+mTCUCi1tY01W+zSiNxgxm9xuR3y8tCW4kOQViPIK
         /RCkVPNQUc4KfL2ShZuG+bT0+PNuDRLZkhIw1xQaVaOqDGdiXJGeXQzLyxqhFM+YkbxK
         qXw/4BuAUE90ea2JDSeWo1ujNpatGO1FPLbx7I885L15uACE2DjzA20Vx/JgFwWFkpYr
         g7tUR/jB7BuCYhb2qv2dGuiu00lF+rT70xuwZNAn5zlX7yzWNVNtOfJfWKf2Q9ts9UIe
         hJM1vfO42IUk4/UFJYhcfHeAILiMpvr/Pye9uTf9k2kCtLPzVjZiAhNV+BDZS04nu383
         wqEw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681918973; x=1684510973;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=sK6lOiJmrrx8EIkDgF6+ULo+pkwl5Br6n5iUV/9+gh0=;
        b=SKQ3xIjjGIxoOQ79mLsmDqmEF338cowCVNKrO/N0GxJoRmkyrGWdegi9nGpBFKMLnG
         SNMaw/6FBogIvluuCB1U1zQ4U/DoQoHOEimIO2pgRBQXeR2tSrKsZHb2QbQCR2HbN98g
         cwQK9JxzUKQvPnQvQAnsBOuW8oaIG4ScE9j1rUvxDpxUf5lHZ4Ywr0N0K0fL4mYeIObK
         oJpQWsf8NGnhvj/iWimSzLk9xNckWSILM5PfsLF7GrK2npUif3js7tfL5bKS7uAHBdXG
         apnp5+4nFlHep7RiuSk1jdY1J4cRNSxcLVLFevl3mm6XrgSzXx8y+3mGEBLkCIaHhTLx
         6kEA==
X-Gm-Message-State: AAQBX9cS4HA7z9IjdGEHIPojyUFuk38tvLLVNbu8f8BWZyqX5mnJRyMf
	OxzEjAgFqwCJLOrUEMCelkc2Sv5bs+M=
X-Google-Smtp-Source: AKy350bGcPFjMvPqoJslalD3B9AiEdsxHV+y7m279vTxSFKXJbRv+mXVwElB34h0bTjwCmFPzOHvNw==
X-Received: by 2002:a2e:b041:0:b0:2a8:ea22:28b5 with SMTP id d1-20020a2eb041000000b002a8ea2228b5mr1253128ljl.4.1681918972863;
        Wed, 19 Apr 2023 08:42:52 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v5 3/4] xen/riscv: setup initial pagetables
Date: Wed, 19 Apr 2023 18:42:46 +0300
Message-Id: <cbd1417ddfc62de0d730511040d48c608cc09ea4.1681918194.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.1681918194.git.oleksii.kurochko@gmail.com>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch does two things:
1. Setup initial pagetables.
2. Enable the MMU, after which execution continues in
   cont_after_mmu_is_enabled()

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V5:
 - Nothing changed. Only rebase
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 - update the commit message that MMU is also enabled here
 - remove early_printk("All set up\n") as it was moved to the
   cont_after_mmu_is_enabled() function, which runs after the MMU is enabled.
---
Changes in V2:
 * Update the commit message
---
 xen/arch/riscv/setup.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 315804aa87..cf5dc5824e 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -21,7 +21,10 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 {
     early_printk("Hello from C env\n");
 
-    early_printk("All set up\n");
+    setup_initial_pagetables();
+
+    enable_mmu();
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 15:56:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 15:56:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523597.813806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppA9v-0005VL-RU; Wed, 19 Apr 2023 15:55:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523597.813806; Wed, 19 Apr 2023 15:55:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppA9v-0005VE-O8; Wed, 19 Apr 2023 15:55:51 +0000
Received: by outflank-mailman (input) for mailman id 523597;
 Wed, 19 Apr 2023 15:55:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAk3=AK=citrix.com=prvs=46623c849=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ppA9u-0005Up-9b
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 15:55:50 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a51151dc-deca-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 17:55:47 +0200 (CEST)
Received: from mail-co1nam11lp2177.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 11:55:38 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA1PR03MB7030.namprd03.prod.outlook.com (2603:10b6:806:333::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 15:55:35 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 15:55:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a51151dc-deca-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681919747;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=E/xgNpqFB3QeVJdJrTJxWmIQMMWo15dz77hJ4wJEjzk=;
  b=c0OvsGa9IEKnmUlInha/9m0diyLz1OG8xaw6Wk7IwpIaPreVrWeqP/WG
   DKnIAzR/u+QJpg1wVeXlZ5rckyh/oQPdYVW2zNnfiYH4d5vu/Eyi8osJ3
   8QDU8qGSEczTTlQX70gyh950BWTmGlxU0wfM9dq1qtoyZho3NCj61b/kT
   A=;
X-IronPort-RemoteIP: 104.47.56.177
X-IronPort-MID: 108565700
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:Y7fvbahGdakqwiIIzRrcQnhiX161VBEKZh0ujC45NGQN5FlHY01je
 htvWWDUa6reZmOmLtEgPoznoE5VsZfSytZkGlRprnw9EH4b9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWj0N8klgZmP6sT4AaPzyB94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQIJHdcaCifg9mu++2aG8MrnOkRdODCadZ3VnFIlVk1DN4AaLWaGuDhwoYd2z09wMdTAfzZe
 swVLyJ1awjNaAFOPVFRD48imOCvhT/0dDgwRFC9/PJrpTSMilEvluGyb7I5efTTLSlRtlyfq
 W/cuXzwHzkRNcCFyCrD+XWp7gPKtXqjCN9MSeLgrpaGhnXOzGYBByU2bGeEoOarj2yFdoNSC
 nMLr39GQa8asRbDosPGdx+yrWOAvxUcc8FNCOB84waIooLE7gDcCmUaQzppbN09qNRwVTEsz
 kWOnd7iGXpoqrL9YWqU67O8vT60fy8PIgcqeissXQYDpd75r+kOYgnnS99iFOu+iYTzEDSpm
 jSS9nFh2PMUkNIB0Li98RbfmTWwq5PVTwkzoALKQmai6QA/b4mgD2C11WXmAT97BN7xZjG8U
 LIswqByMMhm4UmxqRGw
IronPort-HdrOrdr: A9a23:QZYqGa6LOo9inBxHNAPXwNnXdLJyesId70hD6qkRc203TiX8ra
 vFoB1173PJYUkqKRMdcLy7VZVoIkm9yXcW2+cs1N6ZNWHbUQCTQ72Kg7GC/9ToIVyaytJg
X-Talos-CUID: =?us-ascii?q?9a23=3ArFrU5GtXG79faxrO/DBUMc/u6Is1fjqMyk/MJ3S?=
 =?us-ascii?q?gMjlIGZfSU3KioIpNxp8=3D?=
X-Talos-MUID: 9a23:d/KAeguViQOn3lJgjc2n2TtoJshQw52SN0ESg5AkgOTVOA1PJGLI
X-IronPort-AV: E=Sophos;i="5.99,208,1677560400"; 
   d="scan'208";a="108565700"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WrOh7XV1fyMTqusNF13J19r3xsZKOP6bnBh6qdFM7yozjUpKe78yZSTNeEpNEyIQqTvz/CBcX+JAOemIqQyFGmP6TuulL8Vf2C3e/gJiXhbMII9M/aQHOLzEpA7z1LsqPxX3snysW/fcPqaeJf7fEE/RwgCqSPaCcl5X347J0tCltS6fQF7hi9WcB4Dyu/1k3UQj6vRq4CmvcGaL0cHqwFzFTieRkd2pnosRvrh8EsBJqfNT7bKr7+m7ZEda7o0siks8Fql3a7q6vTrDAONcmw/1Rfa3vHcf9vCfzhRLqaVEQqHs0nOEx0BtL4YUgdzOS8YLqf8Bq1ctN+BYnajWGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JAi+37JWDwMaeQymdzel3P9769qLutm3BqZRLCAEZHI=;
 b=A2s1k4hr5tQYLXtUjLy35EdS/O+QPj8PwH0GOR9b0wjkAuSNnCOvY10WCA0wvPRkxTvZh2Bz0EhEdfphRoZsj1H028ewad3AkTc4ADGuW/oJ1fibKxMhNIFNIaF1g647PTyM0ANw9sh1KtgFJW3PEHSTeDbQhHljEzP6fxrNztjf1okGogEn+Q8NtwW1QJ796TZzVzBu1WS6uh1qX49GYS37jdsSb8t+XapVoo9En6mKvGzpC0bfuf6fdY9Qn0zH6zPQy9ZelUA2sRs6EfdH7AhWr5U/aPCvRLrA9p8xV/t3ydsepC7etNVR3dvvYbx6nXickHxGX3mrhtiw7mWGew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JAi+37JWDwMaeQymdzel3P9769qLutm3BqZRLCAEZHI=;
 b=QadnrXKWe34fCTunwM2rFgU9iuUDJwcCvxtAZXASKKombyOFHquvUmG6JH89SkReqZjGMrTw5j2fX5cAp6M+fZJnBGowV7Z+6/gv5Fho47R2skSJBhRXcfVO1xnkF9GEKWVxJq6N4KST4K6ox7kkBmW4C50C+DNsVuuhOukaMKg=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 19 Apr 2023 17:55:29 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v6] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Message-ID: <ZEAO8e9iTjmi86fr@Air-de-Roger>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <116bb94f-955c-4c46-f16a-d7a5e1cc72b5@suse.com>
 <ZD6AejXJxQxAyrx1@Air-de-Roger>
 <c736271f-96ad-dfdb-48ae-b8e9cc002d9b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c736271f-96ad-dfdb-48ae-b8e9cc002d9b@suse.com>
X-ClientProxiedBy: LO6P123CA0009.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:338::12) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SA1PR03MB7030:EE_
X-MS-Office365-Filtering-Correlation-Id: 4e4963d9-ab06-4f60-9318-08db40ee833e
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	EvXw1KMx5bYt+0OyCmR2HaSt7HBKU9ROTFB66SSR+hjsweCBJZXOL1x95tGJNJVt7uj+0hL7t55Q/YEvHXa9cSB8TuHQlThYVGTr/qC4X4miwTU5Qd6PPjWNq9Bi/1FGs7JXp+ddxvk1re/2UcFJZ6J2FSVyX8FINs6Zn8VoCZLSp169v6yTXPnxFCuPLtAu3KjkPZ50HRw1HguSDFL2DNcWopPsssigjkHo2RLWM+l3usxxVvaX1v8DH5Jtphts3dbJikLAMVk1rF+U7zaxIrt+6mlX+S8H6jomlYjaPXsU7MxtLMnGZe/lIbJD9Oe2KsH/CPEhQd/TJ0roFmxLCqdm+Zydhky7uRSaZafw1MgJ/rWG2mZD2QtuBSo4CxMI4sVVxHkR98qkPNTDU7fWtFLXnhLL5X8PK5axWJJFZBRfEfTDR0NywKszhGd2HY3c/7Mtg0f58sgdeE0KBTU5AopNqwq3SMvwTm2fVBqehU4uCXAmUSKsvwgt1CB1Q7685lUYKQaUfPbzsiYArgE8TP0M4QuR0GSyzi+8HWiVTKqlf5e5lqTvfzGV1Sf9Hf+d
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(7916004)(366004)(39860400002)(346002)(136003)(396003)(376002)(451199021)(83380400001)(6486002)(82960400001)(2906002)(38100700002)(54906003)(316002)(5660300002)(41300700001)(8936002)(8676002)(66556008)(66946007)(66476007)(478600001)(6916009)(4326008)(86362001)(85182001)(6666004)(26005)(186003)(6512007)(6506007)(9686003)(53546011)(33716001)(66899021);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VWphV04vbHBBOWY2WlV0SjIzMHVvL2NDVTFZRlFmSzV1VXljSWpyN3dlUVdL?=
 =?utf-8?B?eUxqT0JnNVM0Q2NrbnpoK29VeHZ0VERORzV6NU5oUGFJS21xOUFySStXalRK?=
 =?utf-8?B?Z0ZrVjlSTDV2eEhVTGZiUGdNMEJzRkE3UVpwN01EdWNFbzFJVExFemVtL29h?=
 =?utf-8?B?MEs2bVVHSkJ6NEtUUFd4RDJaYkhSbXJKVno3MW1IeWxaRktyWnlyV3E2ZWF5?=
 =?utf-8?B?RXhBVll5aS9NOVo0UjRyYU85Tmp3MWZhbDBaNCtOWmRScUpEbnozbnlGMkNJ?=
 =?utf-8?B?QUs5K0pVc2c4cVBNRG04UndMSUQrYjZHb0xpanhzM2RzYWdyNlJLVkk2K3hn?=
 =?utf-8?B?MlV6R0J6N3VvWXFDMmtka0dOb05QUHBTTCs0Yndlb0ZvMkJCTlNPVTJOTC9D?=
 =?utf-8?B?ZUkxeHdkektGeW9Td0ZHdUlSVjlMUVBFcVJKTW5XczF2d0hMSG8zdk55ckJo?=
 =?utf-8?B?MDlFNE1LSlVYZmVPZitHbERpUE84MUVNR29GTFhia1VXaUJaSmlYS0Y5bzZ3?=
 =?utf-8?B?d3R5Mi9CS3BRalJkWVcvSDJ1TklzdTZucVczOGs2Y1lGRGZndWdySm16cGFH?=
 =?utf-8?B?YnROME1ueDdyNGsrZU5NZ21NeVZYN2hwQzROQ202ak5iamJmaDhZNXBIL0x2?=
 =?utf-8?B?MGN5TFFvYTNlMzhSTCt1Vmw5N2pVM2ZGblRXOHZuQVg3dTR4dENpL1Q1MkpZ?=
 =?utf-8?B?eGRONy9wQlFVRVR0THBmL1k2elZvcDRxVkp3b2JHM2FjQnloNUVxL2FRemts?=
 =?utf-8?B?RGhBOUpldjhwOTZsOGgxVzJkTnVNVy94aWozdnQ1cmREamw0TjN1SVBnaFh5?=
 =?utf-8?B?Y3J1M0M0R3BqTlBMQUZDQmtXaHdnWXZmbW9Qc25CY2VYU1hsWWlQck5QbC8w?=
 =?utf-8?B?NzhTWDFvMWNuYVIrWC9YY3pUWFgzclFDTWNWRVpDOU1FMkJQZkJlV3FzRUdE?=
 =?utf-8?B?Z2l3OGdiREhramVBb2pYeGQxcW9OMjRiSy8yOE1FUlo3M1hiTDRyWVA0RkdW?=
 =?utf-8?B?NG5wR3l0eWNkTVVNclJ5RmM5c2VCZWNKeFh1TXFEd3J6UFRSSllOWlJLQzJh?=
 =?utf-8?B?NXE0RHV0M2dnS2g3RFVZeDRadVZCVG81aVZ0Y2ZrY3FyNEtEWXg1N3NaZ0U5?=
 =?utf-8?B?NTAyaldRZE50ZG9Ya1c2bnE2djM3OXZnS3UreGMzL0tmcmVacHptbGZIckg3?=
 =?utf-8?B?SHFiOWJ4VG43VlY5aFhHdXZScGNwMHBTUjhVRVFPVCtBUHVMMElHRXY3QkI5?=
 =?utf-8?B?ZHNVVjZrcUo4TmVDU2hVM2N5UEsvZ0pZVnRoenBWSFNRRHcyb3ZpcUtreEVM?=
 =?utf-8?B?NC8rOEFKMWhzYTZ0M05DeVFVUWNGdmlHZmxZTEo3Rng3TjlUdEFES1VNUzNh?=
 =?utf-8?B?R3BXaE94SHdja0RiSFhETStWekE3V2ZuQUlGSUg4SGwxT0RHRXZRdlhQNUcy?=
 =?utf-8?B?WFF0bzc4bVl3dzRrNHZaM2VRb3YzV3l3eVRUS2o3ZlNrMU51Sno1dmVCenRN?=
 =?utf-8?B?MllQRGVOaUoyQmIzSnlnNzcrTC91ZnZ3UUVSTEZIcmE0eVVrYzMzMjY3OE1V?=
 =?utf-8?B?ZlE0aEhwMnZFU3ZLZEwxd3IvY1QxQTNxZDFCcWR5VzJZcmlwMmdvWVVtVjRX?=
 =?utf-8?B?bXZXUFMvUjFFZGU2dG5OaDlPUWI2UGVxOTkvamlmRnhPVS9Qa3ZVSTJKQ0dy?=
 =?utf-8?B?MnJMQk5UNitrdnY5VFdWbjBLdjhzeW1jUHl1dC8zcnYwY09xdUE5U0lPTHFs?=
 =?utf-8?B?L0tHaWR4SGZnTDhvNGZtSW5jNXl3ZTVvYkhzZEVvV0dsVDZBeDgxcm85dnNm?=
 =?utf-8?B?OUJNYWE5cUY0Z1NBVG5QU056THpjMDYxcFJrOU9XUWtjR000Q24wRWx5Q0Ir?=
 =?utf-8?B?WjRlT0ZHckZHbXd6T3luQStsUUFONlJTMzdIeE4vZi93K05UOVpWZ3pRZ0Yx?=
 =?utf-8?B?QkxNeHJuZkFhcjRoa2FmK2pma1Vmd0I0UUk2SDFiTVhwSTN2VkliZzRkTDZV?=
 =?utf-8?B?cnRzRW1yMEdsUFFJeFIzZHBOdStZMzR2QzUyTk1ZMklqcnk4VUZIWWpZc3pr?=
 =?utf-8?B?OVVxTnNid1pQSlJ6eFBPNXljdEFFemNXTUlOYlZsazRrRjJzbGxqWUJpejZO?=
 =?utf-8?B?NnZoaExZWWNZY2lYcTJFT2xTRnp5STVsd0Z0YWhUSWFiSWRZQk1Vc01PMjdV?=
 =?utf-8?B?d3c9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	x2EuZ+s/5C9xoGhloUfY1+1MMEoKj5WZVdr9yRPlb7VeuXnfQUlkhmHt8YnhzSRfQbPHsKlm4+dEBa+PWKtevw8JI3wRtZAUrIYhlL1q2u01cznH9wYMjSQkHEqcImxdJWnjHAL84uy1GRfRwN2jWIRih9ODG32tbqCdgzvyHH7vdJPK90jphNVZe7SIOh1PdNd/E3GQ95LebzDP2NvBjnQJhonS9rf9PUrXks1BNpPgmldYEJ1+H+oGJlwCxwcLv72mhtMVtA+zwu5Rd/jNg3zhN3ii45/HRDOIsQ+3WyNCpGoVfrWB4IMwK2dATUJAFwWy8a2PDt7i4zNckKJG9qTcQ6+upQV7M6ju5WhDNMeDKJdax3GSqxr/PRx3ybx1pL3XfTDGzewu6ki7SYGRp6ptEbmzB5UF8C+5iw3aNkLxOoeq634hyGFHJmn6vBwxAVdDvvTEi4WB1WHafmSmgz9ZcvHoFq3s9NsEg7+PWt+ov2vxp6SbbxI3aDtLBX7oyp/1rS90rK/+AqoQdKGdLepcBA7/XtzLl7YdZjhfjTm3U0Qd7rgIFzTe2fpUCj3XT9bD3WA5xOp2V6qZFxuEtyDwGmk2Hp5wf34uGPoiZCxhdZYkypC8I7li1umnNtwKBlFJiA5kUSTayG7K8QIKalJc91TIXIoUdFdH+T+ouKpoFqaH6OktD6RMSFZ2xuv4AI9aq1XQYun+U+w5tuyYGLQllLyTVJ/ovhTC2Qb84qgdzKWghXi4KYaZEY3S++JiVksaPAjhi86XaMcjoSZY3Uu8JOaguf0orL+p6yBCLk9zMGdrq35O4e9AxOUONiyu
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4e4963d9-ab06-4f60-9318-08db40ee833e
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 15:55:35.3001
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wY3JhhKCnDAoBl5/7wU49e7nlnLtoFBTAs8cx/UKN0tS5glfVoj6W+XQ+I/BqbVWS+wTjCZHcOqHimxRav/UUA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB7030

On Wed, Apr 19, 2023 at 03:58:10PM +0200, Jan Beulich wrote:
> On 18.04.2023 13:35, Roger Pau Monné wrote:
> > On Tue, Apr 18, 2023 at 11:24:19AM +0200, Jan Beulich wrote:
> >> ... in order to also intercept Dom0 accesses through the alias ports.
> >>
> >> Also stop intercepting accesses to the CMOS ports if we won't ourselves
> >> use the CMOS RTC, because of there being none.
> >>
> >> Note that rtc_init() deliberately uses 16 as the upper loop bound,
> >> despite probe_cmos_alias() using 8: The higher bound is benign now, but
> >> would save us touching the code (or, worse, missing to touch it) in case
> >> the lower one was doubled.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Before committing I went back to read through doc and earlier comments,
> in particular regarding the NMI disable. As a result I'm now inclined
> to follow your earlier request and fold in the change below. Thoughts?

It was unclear to me whether port 0x70 also had this NMI-disabling
behavior when the RTC/CMOS is not present, but it seems that port is
shared between the RTC index and the NMI logic, so lack of an RTC doesn't
imply lack of the NMI bit.

> Jan
> 
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1305,6 +1305,13 @@ bool is_cmos_port(unsigned int port, uns
>  {
>      unsigned int offs;
>  
> +    /*
> +     * While not really CMOS-related, port 0x70 always needs intercepting
> +     * to deal with the NMI disable bit.
> +     */
> +    if ( port <= RTC_PORT(0) && port + bytes > RTC_PORT(0) )
> +        return true;

It might make it clearer to move this after the !is_hardware_domain(d)
check, as non-hardware domains don't get access to that port anyway?

> +
>      if ( !is_hardware_domain(d) )
>          return port <= RTC_PORT(1) && port + bytes > RTC_PORT(0);
>  
> @@ -1342,6 +1349,17 @@ unsigned int rtc_guest_read(unsigned int
>           * underlying hardware would permit doing so.
>           */
>          data = currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(0)));
> +
> +        /*
> +         * When there's (supposedly) no RTC/CMOS, we don't intercept the other
> +         * ports. While reading the index register isn't normally possible,
> +         * play safe and return back whatever can be read (just in case a value
> +         * written through an alias would be attempted to be read back here).
> +         */
> +        if ( port == RTC_PORT(0) &&
> +             (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) &&
> +             ioports_access_permitted(currd, port, port) )
> +            data = inb(port) & 0x7f;

Do we really need to mask the high bit here?  We don't allow setting
that bit in the first place.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 15:58:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 15:58:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523602.813816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppAC6-00063D-71; Wed, 19 Apr 2023 15:58:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523602.813816; Wed, 19 Apr 2023 15:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppAC6-000636-3a; Wed, 19 Apr 2023 15:58:06 +0000
Received: by outflank-mailman (input) for mailman id 523602;
 Wed, 19 Apr 2023 15:58:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WAk3=AK=citrix.com=prvs=46623c849=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ppAC4-000630-Aw
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 15:58:04 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f62d25ec-deca-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 17:58:02 +0200 (CEST)
Received: from mail-bn7nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 11:58:00 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA0PR03MB5612.namprd03.prod.outlook.com (2603:10b6:806:b8::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 15:57:57 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 15:57:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f62d25ec-deca-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681919882;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=GFYifgGDZpvOJBMjSWQWQ6bFHATjgDEHKSezjHxZ/6g=;
  b=YdpFlycWs1HgTaTtwb+AoQGu0svuf5PQyOKUqgn3ZgDXYM0NXF1Y5WnN
   9nLJ78dbs1OgOVxCdpcjc4f5qB2DblOmP0GT+JppA/wgR+wfQctBMwpvr
   cuVC4j7SZyd+8I0qgw5Vy6qnQ/eIcGRID0TSt1EgqOcLKh0pMrV7GlUde
   Q=;
X-IronPort-RemoteIP: 104.47.70.108
X-IronPort-MID: 108566003
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:1BECjqABlr+nzxVW/w/iw5YqxClBgxIJ4kV8jS/XYbTApG923mYGn
 DQWXm7Vb6yDamr1fNAnbYSxpx8C6MfTnNUyQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G9B7wRnDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw//pXWHt8x
 dUjLWpdbSKsiNuqyY24Rbw57igjBJGD0II3nFhFlGmcJ9B5BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTK/exuuzi7IA9ZidABNPLPfdOHX4NNl1uwr
 WPa5WXpRBodMbRzzBLcqiLx2LGXxXiTtIQ6BaKyqtpYp3io12USBEUIW2qbu/mpsxvrMz5YA
 wlOksY0loAw/kG2Stj2XzWjvWWJ+BUbXrJ4DOkS+AyLjK3O7G6xFmUCCzJMdtEinMs3XiAxk
 E+EmcvzAj5iu6HTTmiSnop4thu3MCkRaGUEOikNSFJd58G5+dljyBXSUtxkDai5yMXvHi39y
 CyLqy54gKgPickM1OOw+lWvby+Qm6UlhzUdvm3/Nl9JJCsgDGJ5T+REMWTm0Ms=
IronPort-HdrOrdr: A9a23:mPJQeqA6HsdG9mjlHemh55DYdb4zR+YMi2TDtnoBLiC9F/bzqy
 nApoV56faZslYssRIb+OxoWpPwI080nKQdieIs1NyZLWzbUQWTXeVfBEjZrwEI2ReSygeQ78
 hdmmFFZuHNMQ==
X-Talos-CUID: 9a23:xhmng28xE0gwazLMbOGVv0sIOpk/QE3093KKc0yKJmlQVLSqSlDFrQ==
X-Talos-MUID: =?us-ascii?q?9a23=3AwukSWA8k+zIFpX42q4Z85iyQf9Z36fiKOmcoqoQ?=
 =?us-ascii?q?PusXcKyJBGy2dnA3iFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,208,1677560400"; 
   d="scan'208";a="108566003"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Dr0t/WXEsYegYWAmAzcoVVsGNNm2gifWFlNPAatc1sC4B6SHHiMOkNRRPQa/VbPEaxVvC7PmQYh2O9W9mKl6YGOHYmiOaVr5PfZ6w+bYgid2FOUtOKbrSJv82G87JmGuVKkp/sSYas86tyGRW62sgAlPvRafOFhSvLzNQS1AoktcSfgTnqVE2dJKdhmifK4M5hdGjscplPUj1q/NrYx/OGUmisPoCjtrjptB9BL/O9Hy++kb2FF7VISY5DFW7F4hL2jsgerq8j9XdjWKTmxHlIVLwDszR602RbjS3mhWKG97kHL1BT6h90WVa3wdCEUVB5Ck8Kp/8xNDjCwtVhvH+A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=x+ZJXdJRAw4xA0MVtYBgRDgmGrbVUrXuptUuzWPSoyg=;
 b=GBeoNLwkGG35iT4dr24Bfc0SxdOA6AhAaV7TDkEGPw6tQbyh3Cbi9ykzAOmOaxw6607wnRHuIxxMcjZpJ+n/6lFA0ceFdZxIsnvvgeK6y6iY8Tfj39RvpU+n22DsJ2eb2SvZAosaGfepbP/p9sndJCjKW5cIHukmJnAcDOCBd2Y/A/V2ain9Bc/hgND5kJw1XagTFki6we3M9UNgDr7ExsZ7sHqke4UxYY2sdO20n3PEmqGhK8NItTLa5zEPC33xS+HDi2U22nG2xNam44BQnD50+SA18bPBn5aqQcuUnpGHwnO6GwutBxQtH/qMV2CzqRos7RnSlJ5D+uHRntjmAA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x+ZJXdJRAw4xA0MVtYBgRDgmGrbVUrXuptUuzWPSoyg=;
 b=svXmD57+FXpG8+t+Q12s2+aA/MASqpZD//4N3ec31Fbaph4lstj3b5J/VHHME1X1tiMLUT+9jnIIe08uBWbiBHgY9ggZvqJD3akZmyph4t+ZqHcDvokeHhdj79XiHc5ej5a9BZrd8Yq/XCVz9sFT71bghYwtG7GUm/bwdDeW7xg=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 19 Apr 2023 17:57:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Message-ID: <ZEAPgFiYZoeJMLqc@Air-de-Roger>
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
 <ZD6V0wzw/VS/MMw/@Air-de-Roger>
 <d301e110-f840-a032-c406-2f7404752783@suse.com>
 <ZD+ljXSEPCmPMAtN@Air-de-Roger>
 <5c476b65-0340-2a0e-e436-46368d3236b7@suse.com>
 <ZD/UMyeckvCq0ivf@Air-de-Roger>
 <86823b76-6be1-da65-7608-af391ff48978@suse.com>
 <ZD/uX1VqYchQ4GgT@Air-de-Roger>
 <4cfcaed2-21e9-a794-86b4-97f9b350c0d4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4cfcaed2-21e9-a794-86b4-97f9b350c0d4@suse.com>
X-ClientProxiedBy: LO2P123CA0051.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1::15) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0

On Wed, Apr 19, 2023 at 04:39:01PM +0200, Jan Beulich wrote:
> On 19.04.2023 15:36, Roger Pau Monné wrote:
> > On Wed, Apr 19, 2023 at 02:00:38PM +0200, Jan Beulich wrote:
> >> On 19.04.2023 13:44, Roger Pau Monné wrote:
> >>> On Wed, Apr 19, 2023 at 10:43:22AM +0200, Jan Beulich wrote:
> >>>> On 19.04.2023 10:25, Roger Pau Monné wrote:
> >>>>> On Wed, Apr 19, 2023 at 08:17:45AM +0200, Jan Beulich wrote:
> >>>>>> On 18.04.2023 15:06, Roger Pau Monné wrote:
> >>>>>>> On Tue, Apr 18, 2023 at 01:00:53PM +0200, Jan Beulich wrote:
> >>>>>>>> On 18.04.2023 11:24, Roger Pau Monne wrote:
> >>>>>>>>> --- a/xen/arch/x86/include/asm/config.h
> >>>>>>>>> +++ b/xen/arch/x86/include/asm/config.h
> >>>>>>>>> @@ -44,6 +44,20 @@
> >>>>>>>>>  /* Linkage for x86 */
> >>>>>>>>>  #ifdef __ASSEMBLY__
> >>>>>>>>>  #define ALIGN .align 16,0x90
> >>>>>>>>> +#ifdef CONFIG_LIVEPATCH
> >>>>>>>>> +#define START_LP(name)                          \
> >>>>>>>>> +  jmp name;                                     \
> >>>>>>>>> +  .pushsection .text.name, "ax", @progbits;     \
> >>>>>>>>> +  name:
> >>>>>>>>> +#define END_LP(name)                            \
> >>>>>>>>> +  .size name, . - name;                         \
> >>>>>>>>> +  .type name, @function;                        \
> >>>>>>>>> +  .popsection
> >>>>>>>>> +#else
> >>>>>>>>> +#define START_LP(name)                          \
> >>>>>>>>> +  name:
> >>>>>>>>> +#define END_LP(name)
> >>>>>>>>> +#endif
> >>>>>>>>>  #define ENTRY(name)                             \
> >>>>>>>>>    .globl name;                                  \
> >>>>>>>>>    ALIGN;                                        \
> >>>>>>>>
> >>>>>>>> Couldn't END_LP() set type and size unconditionally? (But see also
> >>>>>>>> below.)
> >>>>>>>
> >>>>>>> I see, so that we could also use it for debug purposes.  I guess at
> >>>>>>> that point it might be better to use {START,END}_FUNC() to note that
> >>>>>>> the macros also have an effect beyond that of livepatching.
> >>>>>>>
> >>>>>>> Maybe also introduce a START_ENTRY() that replaces ENTRY()?  Albeit I
> >>>>>>> find START_ENTRY a weird name.
> >>>>>>
> >>>>>> So do I. {START,END}_FUNC() or whatever else are in principle fine, but
> >>>>>> I take it that you're aware that we meanwhile have two or even three
> >>>>>> concurring proposals on a general scheme of such annotations, and we
> >>>>>> don't seem to be able to agree on one. (I guess I'll make a design
> >>>>>> session proposal on this topic for Prague.)
> >>>>>
> >>>>> Oh, I wasn't aware we had other proposals, I've been away on and off
> >>>>> quite a lot recently, and haven't been able to keep up with all
> >>>>> xen-devel email.  Do you have any references at hand?
> >>>>
> >>>> Andrew said he had posted something long ago, but I didn't recall and
> >>>> hence have no reference. My posting from about a year ago is
> >>>> https://lists.xen.org/archives/html/xen-devel/2022-04/msg00876.html
> >>>> Subsequently Jane went kind of the Linux route:
> >>>> https://lists.xen.org/archives/html/xen-devel/2022-08/msg00236.html
> >>>>
> >>>>>> One thing needs to be clear though: Macros doing things solely needed
> >>>>>> for LP must not have extra effects when it is disabled, and such
> >>>>>> macros also had better not insert e.g. a stray JMP when not really
> >>>>>> needed. Hence I expect we still want (some) LP-specific macros besides
> >>>>>> whatever we settle on as the generic ones.
> >>>>>
> >>>>> The stray jmp can be inserted only in the livepatch case, if we end up
> >>>>> having to add it.
> >>>>>
> >>>>> Maybe we should just go with Linux names, so initially I would like to
> >>>>> use:
> >>>>>
> >>>>> SYM_FUNC_START{_NOALIGN}(name)
> >>>>> SYM_FUNC_START_LOCAL{_NOALIGN}(name)
> >>>>> SYM_FUNC_END(name)
> >>>>
> >>>> As said in replies on the earlier threads, I think these are overly
> >>>> verbose and come in overly many variations.
> >>>
> >>> Right, I would only introduce the ones above, and only as long as I have
> >>> at least one user for them. I don't think there's much value in importing
> >>> the file wholesale if we have no use case for a lot of the imported
> >>> macros.
> >>>
> >>> The main issue with ENTRY() and ENDPROC() / ENDDATA() is that we still
> >>> need a tag for local function-like entry point labels, would you then
> >>> use PROC() for those? ENTRY_LOCAL()?
> >>>
> >>> I have to admit I prefer the FUNC_START{_LOCAL} for that purpose as I
> >>> think it's clearer.  I would agree on dropping the SYM_ prefix from
> >>> the Linux ones if there's consensus.
> >>
> >> Okay, I'm glad we can agree on no SYM_. But what value does START have?
> >> And why would the type be (re)specified via ..._END()? FUNC(), DATA(),
> >> and END() ought to be all we need.
> > 
> > Does it imply that we would then drop ENTRY()? (seems so, would just
> > like to confirm).
> 
> Yes. ENTRY() may not go away immediately, but I'd expect it to be
> phased out.
> 
> >> The type would be set by the entry
> >> point macros, and the size by END(). To cover local vs global I could
> >> live with _LOCAL suffixes, but personally would prefer e.g. LFUNC()
> >> and GFUNC(). We could also limit ourselves to FUNC() plus DATA(), and
> >> have (non-)global expressed by END() and e.g. LEND() or END_LOCAL().
> >> One less macro, but maybe slightly odd to have the .global directives
> >> then at the end rather than at the beginning.
> > 
> > Hm, yes, I do find it odd to have the .global at the end.  FUNC and
> > FUNC_LOCAL would be my preference, I do find {L,G}FUNC a bit
> > confusing.
> 
> Well, yes, I was expecting this to be the case. Hence why I said I could
> live with _LOCAL suffixes, even if they aren't my preference. What we
> may want to keep in mind is that sooner or later we may want to have
> non-aligning variants of these. That'll again make for larger names,
> unless we went with e.g. an optional 2nd parameter which, if absent,
> means default alignment, while if present it would specify the alignment
> (which then can be used to effectively specify no alignment). E.g.
> 
> #define ALIGN(algn...) .balign algn
> 
> #define GLOBAL(name)                \
>     .globl name;                    \
>     .hidden name;                   \
>     name:
> 
> #define FUNC(name, algn...)         \
>     ALIGN(LAST(16, ## algn), 0x90); \
>     GLOBAL(name);                   \
>     .type name, @function
> 
> with these helpers (and count_args() as we already have it), or ideally
> something simpler (which I can't think of right now):
> 
> #define ARG1_(x, y...) (x)
> #define ARG2_(x, y...) (y)
> 
> #define LAST__(nr) ARG ## nr ## _
> #define LAST_(nr)  LAST__(nr)
> #define LAST(x, y...) LAST_(count_args(x, ## y))(x, ## y)

That would seem acceptable to me.  Would you like to make a proposal
(likely updating your previous patch) along these lines?

Thanks, Roger.
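For reference, a hypothetical use of the START_LP()/END_LP() pair quoted at the top of the thread might look as follows (the function name and body are invented; this is inferred from the macro definitions, not taken from the patch):

```asm
ENTRY(do_something)
        START_LP(do_something_lp)  /* jmp + .pushsection .text.do_something_lp */
        /* ... body, now replaceable as a unit by a livepatch ... */
        ret
        END_LP(do_something_lp)    /* .size/.type, then .popsection */
```

The jmp emitted by START_LP() is the price for placing the body in its own per-function section, which is what lets the livepatch tooling swap it out.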


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 16:01:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 16:01:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523608.813826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppAFA-00084z-QQ; Wed, 19 Apr 2023 16:01:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523608.813826; Wed, 19 Apr 2023 16:01:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppAFA-00084s-NC; Wed, 19 Apr 2023 16:01:16 +0000
Received: by outflank-mailman (input) for mailman id 523608;
 Wed, 19 Apr 2023 16:01:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ifGd=AK=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1ppAFA-00084m-4Y
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 16:01:16 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20626.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::626])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 67cc0629-decb-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 18:01:13 +0200 (CEST)
Received: from MW2PR2101CA0005.namprd21.prod.outlook.com (2603:10b6:302:1::18)
 by IA0PR12MB8693.namprd12.prod.outlook.com (2603:10b6:208:48e::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.20; Wed, 19 Apr
 2023 16:01:09 +0000
Received: from CO1NAM11FT087.eop-nam11.prod.protection.outlook.com
 (2603:10b6:302:1:cafe::71) by MW2PR2101CA0005.outlook.office365.com
 (2603:10b6:302:1::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.8 via Frontend
 Transport; Wed, 19 Apr 2023 16:01:09 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT087.mail.protection.outlook.com (10.13.174.68) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.22 via Frontend Transport; Wed, 19 Apr 2023 16:01:08 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 19 Apr
 2023 11:00:22 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 19 Apr
 2023 09:00:20 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 19 Apr 2023 11:00:18 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67cc0629-decb-11ed-8611-37d641c3527e
Message-ID: <46daf4f9-ef43-b4fa-9246-c81ee3d8f13c@amd.com>
Date: Wed, 19 Apr 2023 18:00:17 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 02/10] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-3-ayan.kumar.halder@amd.com>
 <458367fe-1781-7751-230c-8a43cecbfca6@amd.com>
 <f6c88703-a3bf-94e6-246e-ab0d0582eb7a@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <f6c88703-a3bf-94e6-246e-ab0d0582eb7a@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hi Ayan,

On 19/04/2023 17:12, Ayan Kumar Halder wrote:
> 
> On 19/04/2023 14:19, Michal Orzel wrote:
>> Hi Ayan,
> 
> Hi Michal,
> 
> ...
> 
>> --- /dev/null
>> +++ b/xen/include/xen/libfdt/libfdt-xen.h
>> @@ -0,0 +1,55 @@
>> +/*
>> + * SPDX-License-Identifier: GPL-2.0-only
>> Our CODING_STYLE says:
>> New files should start with a single-line SPDX comment, ..., e.g.
>> /* SPDX-License-Identifier: GPL-2.0 */
>>
>> For me it would be perfectly fine to do as you did but it is not what our docs state
>> (i.e. single-line comment). It might be that we need to modify CODING_STYLE instead.
> 
> Just to be clear, this is what we should have (as Julien had earlier
> suggested using **GPL-2.0-only**):
I used GPL-2.0 just as an example. Of course we want GPL-2.0-only.

> 
> diff --git a/xen/include/xen/libfdt/libfdt-xen.h 
> b/xen/include/xen/libfdt/libfdt-xen.h
> index 3296a368a6..cad7ad3bfb 100644
> --- a/xen/include/xen/libfdt/libfdt-xen.h
> +++ b/xen/include/xen/libfdt/libfdt-xen.h
> @@ -1,6 +1,5 @@
> +// SPDX-License-Identifier: GPL-2.0-only
You should use /* */ style instead of //

>   /*
> - * SPDX-License-Identifier: GPL-2.0-only
> - *
>    * xen/include/xen/libfdt/libfdt-xen.h
>    *
>    * Wrapper functions for device tree. This helps to convert dt values
> 
> - Ayan
> 
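Putting the two remarks together (single-line /* */ SPDX comment, GPL-2.0-only tag), the file would then start as below; this is a sketch based on the quoted diff, with the rest of the comment elided:

```c
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * xen/include/xen/libfdt/libfdt-xen.h
 *
 * Wrapper functions for device tree. This helps to convert dt values
 * ...
 */
```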

~Michal


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 16:03:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 16:03:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523613.813836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppAHX-0000C5-9H; Wed, 19 Apr 2023 16:03:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523613.813836; Wed, 19 Apr 2023 16:03:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppAHX-0000By-4j; Wed, 19 Apr 2023 16:03:43 +0000
Received: by outflank-mailman (input) for mailman id 523613;
 Wed, 19 Apr 2023 16:03:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8hvW=AK=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppAHW-0000Ba-5p
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 16:03:42 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2062b.outbound.protection.outlook.com
 [2a01:111:f400:7d00::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bfe9145a-decb-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 18:03:40 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8942.eurprd04.prod.outlook.com (2603:10a6:102:20d::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 16:03:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6298.045; Wed, 19 Apr 2023
 16:03:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfe9145a-decb-11ed-8611-37d641c3527e
Message-ID: <add420e4-cdaa-64dd-74ec-08244b9e238e@suse.com>
Date: Wed, 19 Apr 2023 18:03:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] x86/livepatch: enable livepatching assembly source files
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230418092458.15253-1-roger.pau@citrix.com>
 <ab604666-e9a4-3656-73a6-c09b2ae9d3bd@suse.com>
 <ZD6V0wzw/VS/MMw/@Air-de-Roger>
 <d301e110-f840-a032-c406-2f7404752783@suse.com>
 <ZD+ljXSEPCmPMAtN@Air-de-Roger>
 <5c476b65-0340-2a0e-e436-46368d3236b7@suse.com>
 <ZD/UMyeckvCq0ivf@Air-de-Roger>
 <86823b76-6be1-da65-7608-af391ff48978@suse.com>
 <ZD/uX1VqYchQ4GgT@Air-de-Roger>
 <4cfcaed2-21e9-a794-86b4-97f9b350c0d4@suse.com>
 <ZEAPgFiYZoeJMLqc@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZEAPgFiYZoeJMLqc@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8942:EE_
X-MS-Office365-Filtering-Correlation-Id: 9354a200-b68e-4eef-9c55-08db40efa210
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	H5DRLGmvr46QVMecjji/xG1Dl6vgpG2tj3+GH8y+0/9BfFABX21gzjo8F2/fW4j0P+V39lAhpGxb28cdwTF2pEyWUs+VgYvn4m25fGxBc/qJtKrEPzwWGlFwDfZg2aQAc4QTPfoji9DS8VhVI/pZ7XMi89Od9RsI6yR92Ga0Kze2xE0K3M/1wOAteD3+O/xCjMs7lWzldYeVY5/JsQMg/9zMH9KIJhavxuyGlpvtfpHNi256EhRbemjYqmKyGUxvmcMFFOnC2JMwfL+qyWg2TshsFDtc1dONVaD3W2d/McqMJpoWhrUi2LJ4grMjLWkOADCf1B54giYc5WLbUmWgUFWce0AWUhmm1QQjEXWzLzy1GzJSaxU/bhd08ugEuTp+KzWfZRG75dW+b3uJb8BdURKXeACJRELtXJuKUQLhsfoMILAcesKZ7qYRO/6lpp2Krs6YK3eAicnl7/qdu7wArqjJ/s3UquwufqjJLTEKBicABpNQ5K5PHaeUiDyTJTcbn6ts+p8n4h7qwXHAbduhJclPQaepdPC6QApOnMrrUy+URBRfUOjIe5R0ayk7gaY5zGiM5xv9F47ZhBm9wIN+Y12nDGbA86fTS2PLzqyL+1kKBY3TOhpVgMAe8FmWrfJ0DwYok5TqyESOnI3k6O/cXA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(376002)(136003)(396003)(346002)(366004)(451199021)(5660300002)(4326008)(86362001)(2616005)(31696002)(83380400001)(966005)(53546011)(186003)(6506007)(6512007)(26005)(38100700002)(8676002)(8936002)(478600001)(110136005)(6486002)(36756003)(41300700001)(316002)(66556008)(31686004)(66946007)(66476007)(66899021)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1

On 19.04.2023 17:57, Roger Pau Monné wrote:
> On Wed, Apr 19, 2023 at 04:39:01PM +0200, Jan Beulich wrote:
>> On 19.04.2023 15:36, Roger Pau Monné wrote:
>>> On Wed, Apr 19, 2023 at 02:00:38PM +0200, Jan Beulich wrote:
>>>> On 19.04.2023 13:44, Roger Pau Monné wrote:
>>>>> On Wed, Apr 19, 2023 at 10:43:22AM +0200, Jan Beulich wrote:
>>>>>> On 19.04.2023 10:25, Roger Pau Monné wrote:
>>>>>>> On Wed, Apr 19, 2023 at 08:17:45AM +0200, Jan Beulich wrote:
>>>>>>>> On 18.04.2023 15:06, Roger Pau Monné wrote:
>>>>>>>>> On Tue, Apr 18, 2023 at 01:00:53PM +0200, Jan Beulich wrote:
>>>>>>>>>> On 18.04.2023 11:24, Roger Pau Monne wrote:
>>>>>>>>>>> --- a/xen/arch/x86/include/asm/config.h
>>>>>>>>>>> +++ b/xen/arch/x86/include/asm/config.h
>>>>>>>>>>> @@ -44,6 +44,20 @@
>>>>>>>>>>>  /* Linkage for x86 */
>>>>>>>>>>>  #ifdef __ASSEMBLY__
>>>>>>>>>>>  #define ALIGN .align 16,0x90
>>>>>>>>>>> +#ifdef CONFIG_LIVEPATCH
>>>>>>>>>>> +#define START_LP(name)                          \
>>>>>>>>>>> +  jmp name;                                     \
>>>>>>>>>>> +  .pushsection .text.name, "ax", @progbits;     \
>>>>>>>>>>> +  name:
>>>>>>>>>>> +#define END_LP(name)                            \
>>>>>>>>>>> +  .size name, . - name;                         \
>>>>>>>>>>> +  .type name, @function;                        \
>>>>>>>>>>> +  .popsection
>>>>>>>>>>> +#else
>>>>>>>>>>> +#define START_LP(name)                          \
>>>>>>>>>>> +  name:
>>>>>>>>>>> +#define END_LP(name)
>>>>>>>>>>> +#endif
>>>>>>>>>>>  #define ENTRY(name)                             \
>>>>>>>>>>>    .globl name;                                  \
>>>>>>>>>>>    ALIGN;                                        \
>>>>>>>>>>
>>>>>>>>>> Couldn't END_LP() set type and size unconditionally? (But see also
>>>>>>>>>> below.)
>>>>>>>>>
>>>>>>>>> I see, so that we could also use it for debug purposes.  I guess at
>>>>>>>>> that point it might be better to use {START,END}_FUNC() to note that
>>>>>>>>> the macros also have an effect beyond that of livepatching.
>>>>>>>>>
>>>>>>>>> Maybe also introduce a START_ENTRY() that replaces ENTRY()?  Albeit I
>>>>>>>>> find START_ENTRY a weird name.
>>>>>>>>
>>>>>>>> So do I. {START,END}_FUNC() or whatever else are in principle fine, but
>>>>>>>> I take it that you're aware that we meanwhile have two or even three
>>>>>>>> concurring proposals on a general scheme of such annotations, and we
>>>>>>>> don't seem to be able to agree on one. (I guess I'll make a design
>>>>>>>> session proposal on this topic for Prague.)
>>>>>>>
>>>>>>> Oh, I wasn't aware we had other proposals, I've been away on and
>>>>>>> off quite a lot recently, and haven't been able to keep up with all
>>>>>>> xen-devel email.  Do you have any references at hand?
>>>>>>
>>>>>> Andrew said he had posted something long ago, but I didn't recall and
>>>>>> hence have no reference. My posting from about a year ago is
>>>>>> https://lists.xen.org/archives/html/xen-devel/2022-04/msg00876.html
>>>>>> Subsequently Jane went kind of the Linux route:
>>>>>> https://lists.xen.org/archives/html/xen-devel/2022-08/msg00236.html
>>>>>>
>>>>>>>> One thing needs to be clear though: Macros doing things solely needed
>>>>>>>> for LP need to not have extra effects with it disabled, and such
>>>>>>>> macros also better wouldn't e.g. insert stray JMP when not really
>>>>>>>> needed. Hence I expect we still want (some) LP-specific macros besides
>>>>>>>> whatever we settle on as the generic ones.
>>>>>>>
>>>>>>> The stray jmp can be inserted only in the livepatch case, if we end up
>>>>>>> having to add it.
>>>>>>>
>>>>>>> Maybe we should just go with Linux names, so initially I would like to
>>>>>>> use:
>>>>>>>
>>>>>>> SYM_FUNC_START{_NOALIGN}(name)
>>>>>>> SYM_FUNC_START_LOCAL{_NOALIGN}(name)
>>>>>>> SYM_FUNC_END(name)
>>>>>>
>>>>>> As said in replies on the earlier threads, I think these are overly
>>>>>> verbose and come in overly many variations.
>>>>>
>>>>> Right, I would only introduce the ones above, and as long as I have
>>>>> at least one user for them. I don't think there's much value in importing
>>>>> the file wholesale if we have no use case for a lot of the imported
>>>>> macros.
>>>>>
>>>>> The main issue with ENTRY() and ENDPROC() / ENDDATA() is that we still
>>>>> need a tag for local function-like entry point labels, would you then
>>>>> use PROC() for those? ENTRY_LOCAL()?
>>>>>
>>>>> I have to admit I prefer the FUNC_START{_LOCAL} for that purpose as I
>>>>> think it's clearer.  I would agree on dropping the SYM_ prefix from
>>>>> the Linux ones if there's consensus.
>>>>
>>>> Okay, I'm glad we can agree on no SYM_. But what value does START have?
>>>> And why would the type be (re)specified via ..._END()? FUNC(), DATA(),
>>>> and END() ought to be all we need.
>>>
>>> Does it imply that we would then drop ENTRY()? (seems so, would just
>>> like to confirm).
>>
>> Yes. ENTRY() may not go away immediately, but I'd expect it to be
>> phased out.
>>
>>>> The type would be set by the entry
>>>> point macros, and the size by END(). To cover local vs global I could
>>>> live with _LOCAL suffixes, but personally would prefer e.g. LFUNC()
>>>> and GFUNC(). We could also limit ourselves to FUNC() plus DATA(), and
>>>> have (non-)global expressed by END() and e.g. LEND() or END_LOCAL().
>>>> One less macro, but maybe slightly odd to have the .global directives
>>>> then at the end rather than at the beginning.
>>>
>>> Hm, yes, I do find it odd to have the .global at the end.  FUNC and
>>> FUNC_LOCAL would be my preference, I do find {L,G}FUNC a bit
>>> confusing.
>>
>> Well, yes, I was expecting this to be the case. Hence why I said I could
>> live with _LOCAL suffixes, even if they aren't my preference. What we
>> may want to keep in mind is that sooner or later we may want to have
>> non-aligning variants of these. That'll again make for larger names,
>> unless we went with e.g. an optional 2nd parameter which, if absent,
>> means default alignment, while if present it would specify the alignment
>> (which then can be used to effectively specify no alignment). E.g.
>>
>> #define ALIGN(algn...) .balign algn
>>
>> #define GLOBAL(name)                \
>>     .globl name;                    \
>>     .hidden name;                   \
>>     name:
>>
>> #define FUNC(name, algn...)         \
>>     ALIGN(LAST(16, ## algn), 0x90); \
>>     GLOBAL(name);                   \
>>     .type name, @function
>>
>> with these helpers (and count_args() as we already have it), or ideally
>> something simpler still (which right now I can't think of):
>>
>> #define ARG1_(x, y...) (x)
>> #define ARG2_(x, y...) (y)
>>
>> #define LAST__(nr) ARG ## nr ## _
>> #define LAST_(nr)  LAST__(nr)
>> #define LAST(x, y...) LAST_(count_args(x, ## y))(x, ## y)
> 
> Would seem acceptable to me.  Would you like to make a proposal
> (likely updating your previous patch) along these lines?

I wouldn't mind doing so, as long as there was at least a vague chance
that this also comes somewhat close to meeting Andrew's expectations.
Andrew?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 16:21:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 16:21:48 +0000
Message-ID: <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com>
Date: Wed, 19 Apr 2023 17:21:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Thomas Gleixner <tglx@linutronix.de>, Paul Menzel <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E. J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
 <87r0sh4m7a.ffs@tglx> <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
 <87a5z443g2.ffs@tglx> <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
 <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
In-Reply-To: <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 19/04/2023 2:50 pm, Andrew Cooper wrote:
> On 19/04/2023 2:43 pm, Thomas Gleixner wrote:
>> On Wed, Apr 19 2023 at 14:38, Thomas Gleixner wrote:
>>> On Wed, Apr 19 2023 at 11:38, Thomas Gleixner wrote:
>>> IOW, the BIOS assigns random numbers to the AP APICs for whatever
>>> raisins, which leaves the parallel startup low level code up a creek
>>> without a paddle, except for actually reading the APICID back from the
>>> APIC. *SHUDDER*
>> So Andrew just pointed out on IRC that this might be related to the
>> ancient issue of the 3-wire APIC bus where IO/APIC and APIC shared the
>> ID space, but that system is definitely post 3-wire APIC :)
> Doesn't mean the BIOS code was updated adequately following that.
>
> What I'm confused by is why this system boots in the first place.  I can
> only think that it's a system which only has 4-bit APIC IDs, and happens
> to function when bit 4 gets truncated off the top of the SIPI destination...

https://www.amd.com/system/files/TechDocs/42300_15h_Mod_10h-1Fh_BKDG.pdf

This system does still require the IO-APICs to be at 0, and the LAPICs
to start at some offset, which is clearly 16 in this case.  Also, this
system has configurable 4-bit or 8-bit wide APIC IDs, and I can't tell
which mode is active just from the manual.

But, it does mean that the BIOS has genuinely modified the APIC IDs of
the logical processors.  This highlights an error in the reasoning of
the parallel bringup code.

For xAPIC, the APIC_ID register is writeable (at least
model-specifically), and CPUID only reports the value it would have had
at reset.  So the AP bringup logic can't actually use CPUID reliably.

This was changed in x2APIC, which made the x2APIC_ID immutable.

I don't see an option other than having the AP bringup code query for
xAPIC vs x2APIC mode, and either look at the real APIC_ID register or
fall back to CPUID.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 16:45:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 16:45:25 +0000
Content-Type: multipart/mixed; boundary="------------RIXs5aB9kAV059bPykAqBHIR"
Message-ID: <c12e97b7-1aea-ca98-0728-efecb6670629@molgen.mpg.de>
Date: Wed, 19 Apr 2023 18:45:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Content-Language: en-US
To: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E. J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
 <87r0sh4m7a.ffs@tglx> <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
 <87a5z443g2.ffs@tglx> <877cu83v45.ffs@tglx>
From: Paul Menzel <pmenzel@molgen.mpg.de>
In-Reply-To: <877cu83v45.ffs@tglx>

This is a multi-part message in MIME format.
--------------RIXs5aB9kAV059bPykAqBHIR
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Dear Thomas,


Am 19.04.23 um 14:38 schrieb Thomas Gleixner:
> On Wed, Apr 19 2023 at 11:38, Thomas Gleixner wrote:
>> On Tue, Apr 18 2023 at 22:10, Paul Menzel wrote:
>>> Am 18.04.23 um 10:40 schrieb Thomas Gleixner:
>>>> Can you please provide the output of cpuid?
>>>
>>> Of course. Here is the top, and the whole output is attached.
>>
>> Thanks for the data. Can you please apply the debug patch below and
>> provide the dmesg output? Just the line which is added by the patch is
>> enough. You can boot with cpuhp.parallel=off so you don't have to wait
>> for 10 seconds.
> 
> Borislav found a machine which also refuses to boot. It turns out
> the debug patch was spot on:
> 
> [    0.462724] .... node  #0, CPUs:      #1
> [    0.462731] smpboot: Kicking AP alive: 17
> [    0.465723]  #2
> [    0.465732] smpboot: Kicking AP alive: 18
> [    0.467641]  #3
> [    0.467641] smpboot: Kicking AP alive: 19
> 
> So the kernel gets APICID 17, 18, 19 from ACPI but CPUID leaf 0x1
> ebx[31:24], which is the initial APICID, has:
> 
> CPU1		0x01
> CPU2		0x02
> CPU3		0x03
> 
> Which means the APICID to Linux CPU number lookup based on CPUID 0x01
> fails for all of them and stops them dead in the low level startup code.

I am attaching the logs for completeness. Linux is built from your 
branch with the debug print on top. The firmware, coreboot based, is 
built from [1], but it also happened with non-parallel MP init. As far 
as I can see, that code has better debug prints (attached) though. As 
Borislav is able to reproduce this too with some non-coreboot firmware, 
I assume it’s unrelated to coreboot.

```
[    0.259247] smp: Bringing up secondary CPUs ...
[    0.259446] x86: Booting SMP configuration:
[    0.259448] .... node  #0, CPUs:      #1
[    0.259453] smpboot: Kicking AP alive: 17
[   10.260918] CPU1 failed to report alive state
[   10.260998] smp: Brought up 1 node, 1 CPU
[   10.261000] smpboot: Max logical packages: 2
[   10.261001] smpboot: Total of 1 processors activated (7801.09 BogoMIPS)
```

> IOW, the BIOS assigns random numbers to the AP APICs for whatever
> raisins, which leaves the parallel startup low level code up a creek
> without a paddle, except for actually reading the APICID back from the
> APIC. *SHUDDER*
> 
> I'm leaning towards disabling the CPUID leaf 0x01 based discovery and
> being done with it.


Kind regards,

Paul


[1]: https://review.coreboot.org/68169
--------------RIXs5aB9kAV059bPykAqBHIR
Content-Type: text/plain; charset=UTF-8;
 name="kodi-linux-6.3-rc3-smp-tglx.txt"
Content-Disposition: attachment; filename="kodi-linux-6.3-rc3-smp-tglx.txt"
Content-Transfer-Encoding: base64

WyAgICAwLjAwMDAwMF0gTGludXggdmVyc2lvbiA2LjMuMC1yYzMtMDAwNDUtZzY0ZGU0ZGY5
YzgwYiAocm9vdEBiZjE2ZjM2NDZhODQpIChnY2MgKERlYmlhbiAxMS4yLjAtMTIpIDExLjIu
MCwgR05VIGxkIChHTlUgQmludXRpbHMgZm9yIERlYmlhbikgMi40MCkgIzQ0OSBTTVAgUFJF
RU1QVF9EWU5BTUlDIFdlZCBBcHIgMTkgMTY6MTM6NTQgVVRDIDIwMjMKWyAgICAwLjAwMDAw
MF0gQ29tbWFuZCBsaW5lOiBCT09UX0lNQUdFPS9ib290L3ZtbGludXotNi4zLjAtcmMzLTAw
MDQ1LWc2NGRlNGRmOWM4MGIgcm9vdD0vZGV2L3NkYTMgcncgcXVpZXQgbm9pc2FwbnAgY3J5
cHRvbWdyLm5vdGVzdHMgaXB2Ni5kaXNhYmxlX2lwdjY9MSBzZWxpbnV4PTAKWyAgICAwLjAw
MDAwMF0geDg2L2ZwdTogU3VwcG9ydGluZyBYU0FWRSBmZWF0dXJlIDB4MDAxOiAneDg3IGZs
b2F0aW5nIHBvaW50IHJlZ2lzdGVycycKWyAgICAwLjAwMDAwMF0geDg2L2ZwdTogU3VwcG9y
dGluZyBYU0FWRSBmZWF0dXJlIDB4MDAyOiAnU1NFIHJlZ2lzdGVycycKWyAgICAwLjAwMDAw
MF0geDg2L2ZwdTogU3VwcG9ydGluZyBYU0FWRSBmZWF0dXJlIDB4MDA0OiAnQVZYIHJlZ2lz
dGVycycKWyAgICAwLjAwMDAwMF0geDg2L2ZwdTogeHN0YXRlX29mZnNldFsyXTogIDU3Niwg
eHN0YXRlX3NpemVzWzJdOiAgMjU2ClsgICAgMC4wMDAwMDBdIHg4Ni9mcHU6IEVuYWJsZWQg
eHN0YXRlIGZlYXR1cmVzIDB4NywgY29udGV4dCBzaXplIGlzIDgzMiBieXRlcywgdXNpbmcg
J3N0YW5kYXJkJyBmb3JtYXQuClsgICAgMC4wMDAwMDBdIHNpZ25hbDogbWF4IHNpZ2ZyYW1l
IHNpemU6IDE3NzYKWyAgICAwLjAwMDAwMF0gQklPUy1wcm92aWRlZCBwaHlzaWNhbCBSQU0g
bWFwOgpbICAgIDAuMDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAwMDAwMDAwMDAw
LTB4MDAwMDAwMDAwMDA5ZmJmZl0gdXNhYmxlClsgICAgMC4wMDAwMDBdIEJJT1MtZTgyMDog
W21lbSAweDAwMDAwMDAwMDAwOWZjMDAtMHgwMDAwMDAwMDAwMDlmZmZmXSByZXNlcnZlZApb
ICAgIDAuMDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAwMDAwMGYwMDAwLTB4MDAw
MDAwMDAwMDBmZmZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gQklPUy1lODIwOiBbbWVt
IDB4MDAwMDAwMDAwMDEwMDAwMC0weDAwMDAwMDAwNWZlM2NmZmZdIHVzYWJsZQpbICAgIDAu
MDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAwMDVmZTNkMDAwLTB4MDAwMDAwMDA3
ZmZmZmZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gQklPUy1lODIwOiBbbWVtIDB4MDAw
MDAwMDBmODAwMDAwMC0weDAwMDAwMDAwZmJmZmZmZmZdIHJlc2VydmVkClsgICAgMC4wMDAw
MDBdIEJJT1MtZTgyMDogW21lbSAweDAwMDAwMDAwZmVjMTAwMDAtMHgwMDAwMDAwMGZlYzEw
ZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAw
MTAwMDAwMDAwLTB4MDAwMDAwMDE3ZWZmZmZmZl0gdXNhYmxlClsgICAgMC4wMDAwMDBdIE5Y
IChFeGVjdXRlIERpc2FibGUpIHByb3RlY3Rpb246IGFjdGl2ZQpbICAgIDAuMDAwMDAwXSBT
TUJJT1MgMy4wLjAgcHJlc2VudC4KWyAgICAwLjAwMDAwMF0gRE1JOiBBU1VTIEYyQTg1LU1f
UFJPL0YyQTg1LU1fUFJPLCBCSU9TIDQuMTgtMTUtZ2M3ODJlZjQzNDUgMDQvMTkvMjAyMwpb
ICAgIDAuMDAwMDAwXSB0c2M6IEZhc3QgVFNDIGNhbGlicmF0aW9uIHVzaW5nIFBJVApbICAg
IDAuMDAwMDAwXSB0c2M6IERldGVjdGVkIDM5MDAuNTQ5IE1IeiBwcm9jZXNzb3IKWyAgICAw
LjAwMDc1Nl0gZTgyMDogdXBkYXRlIFttZW0gMHgwMDAwMDAwMC0weDAwMDAwZmZmXSB1c2Fi
bGUgPT0+IHJlc2VydmVkClsgICAgMC4wMDA3NTldIGU4MjA6IHJlbW92ZSBbbWVtIDB4MDAw
YTAwMDAtMHgwMDBmZmZmZl0gdXNhYmxlClsgICAgMC4wMDA3NjNdIGxhc3RfcGZuID0gMHgx
N2YwMDAgbWF4X2FyY2hfcGZuID0gMHg0MDAwMDAwMDAKWyAgICAwLjAwMDc2OF0geDg2L1BB
VDogQ29uZmlndXJhdGlvbiBbMC03XTogV0IgIFdDICBVQy0gVUMgIFdCICBXUCAgVUMtIFdU
ICAKWyAgICAwLjAwMDk0MF0gbGFzdF9wZm4gPSAweDVmZTNkIG1heF9hcmNoX3BmbiA9IDB4
NDAwMDAwMDAwClsgICAgMC4wMDQwMDBdIFVzaW5nIEdCIHBhZ2VzIGZvciBkaXJlY3QgbWFw
cGluZwpbICAgIDAuMDA0MDAwXSBBQ1BJOiBFYXJseSB0YWJsZSBjaGVja3N1bSB2ZXJpZmlj
YXRpb24gZGlzYWJsZWQKWyAgICAwLjAwNDAwMF0gQUNQSTogUlNEUCAweDAwMDAwMDAwMDAw
RjY4MzAgMDAwMDI0ICh2MDIgQ09SRXY0KQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBYU0RUIDB4
MDAwMDAwMDA1RkU0QTBFMCAwMDAwNzQgKHYwMSBDT1JFdjQgQ09SRUJPT1QgMDAwMDAwMDAg
Q09SRSAyMDIwMDkyNSkKWyAgICAwLjAwNDAwMF0gQUNQSTogRkFDUCAweDAwMDAwMDAwNUZF
NEJCQzAgMDAwMTE0ICh2MDYgQ09SRXY0IENPUkVCT09UIDAwMDAwMDAwIENPUkUgMjAyMDA5
MjUpClsgICAgMC4wMDQwMDBdIEFDUEk6IERTRFQgMHgwMDAwMDAwMDVGRTRBMjgwIDAwMTkz
QSAodjAyIENPUkV2NCBDT1JFQk9PVCAwMDAxMDAwMSBJTlRMIDIwMjAwOTI1KQpbICAgIDAu
MDA0MDAwXSBBQ1BJOiBGQUNTIDB4MDAwMDAwMDA1RkU0QTI0MCAwMDAwNDAKWyAgICAwLjAw
NDAwMF0gQUNQSTogRkFDUyAweDAwMDAwMDAwNUZFNEEyNDAgMDAwMDQwClsgICAgMC4wMDQw
MDBdIEFDUEk6IFNTRFQgMHgwMDAwMDAwMDVGRTRCQ0UwIDAwMDA4QSAodjAyIENPUkV2NCBD
T1JFQk9PVCAwMDAwMDAyQSBDT1JFIDIwMjAwOTI1KQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBN
Q0ZHIDB4MDAwMDAwMDA1RkU0QkQ3MCAwMDAwM0MgKHYwMSBDT1JFdjQgQ09SRUJPT1QgMDAw
MDAwMDAgQ09SRSAyMDIwMDkyNSkKWyAgICAwLjAwNDAwMF0gQUNQSTogQVBJQyAweDAwMDAw
MDAwNUZFNEJEQjAgMDAwMDYyICh2MDMgQ09SRXY0IENPUkVCT09UIDAwMDAwMDAwIENPUkUg
MjAyMDA5MjUpClsgICAgMC4wMDQwMDBdIEFDUEk6IEhQRVQgMHgwMDAwMDAwMDVGRTRCRTIw
IDAwMDAzOCAodjAxIENPUkV2NCBDT1JFQk9PVCAwMDAwMDAwMCBDT1JFIDIwMjAwOTI1KQpb
ICAgIDAuMDA0MDAwXSBBQ1BJOiBIRVNUIDB4MDAwMDAwMDA1RkU0QkU2MCAwMDAxRDAgKHYw
MSBDT1JFdjQgQ09SRUJPT1QgMDAwMDAwMDAgQ09SRSAyMDIwMDkyNSkKWyAgICAwLjAwNDAw
MF0gQUNQSTogSVZSUyAweDAwMDAwMDAwNUZFNEMwMzAgMDAwMDcwICh2MDIgQU1EICAgIEFN
RElPTU1VIDAwMDAwMDAxIEFNRCAgMDAwMDAwMDApClsgICAgMC4wMDQwMDBdIEFDUEk6IFNT
RFQgMHgwMDAwMDAwMDVGRTRDMEEwIDAwMDUxRiAodjAyIEFNRCAgICBBTElCICAgICAwMDAw
MDAwMSBNU0ZUIDA0MDAwMDAwKQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBTU0RUIDB4MDAwMDAw
MDA1RkU0QzVDMCAwMDA2QjIgKHYwMSBBTUQgICAgUE9XRVJOT1cgMDAwMDAwMDEgQU1EICAw
MDAwMDAwMSkKWyAgICAwLjAwNDAwMF0gQUNQSTogVkZDVCAweDAwMDAwMDAwNUZFNENDODAg
MDBGMjY5ICh2MDEgQ09SRXY0IENPUkVCT09UIDAwMDAwMDAwIENPUkUgMjAyMDA5MjUpClsg
ICAgMC4wMDQwMDBdIEFDUEk6IFJlc2VydmluZyBGQUNQIHRhYmxlIG1lbW9yeSBhdCBbbWVt
IDB4NWZlNGJiYzAtMHg1ZmU0YmNkM10KWyAgICAwLjAwNDAwMF0gQUNQSTogUmVzZXJ2aW5n
IERTRFQgdGFibGUgbWVtb3J5IGF0IFttZW0gMHg1ZmU0YTI4MC0weDVmZTRiYmI5XQpbICAg
IDAuMDA0MDAwXSBBQ1BJOiBSZXNlcnZpbmcgRkFDUyB0YWJsZSBtZW1vcnkgYXQgW21lbSAw
eDVmZTRhMjQwLTB4NWZlNGEyN2ZdClsgICAgMC4wMDQwMDBdIEFDUEk6IFJlc2VydmluZyBG
QUNTIHRhYmxlIG1lbW9yeSBhdCBbbWVtIDB4NWZlNGEyNDAtMHg1ZmU0YTI3Zl0KWyAgICAw
LjAwNDAwMF0gQUNQSTogUmVzZXJ2aW5nIFNTRFQgdGFibGUgbWVtb3J5IGF0IFttZW0gMHg1
ZmU0YmNlMC0weDVmZTRiZDY5XQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBSZXNlcnZpbmcgTUNG
RyB0YWJsZSBtZW1vcnkgYXQgW21lbSAweDVmZTRiZDcwLTB4NWZlNGJkYWJdClsgICAgMC4w
MDQwMDBdIEFDUEk6IFJlc2VydmluZyBBUElDIHRhYmxlIG1lbW9yeSBhdCBbbWVtIDB4NWZl
NGJkYjAtMHg1ZmU0YmUxMV0KWyAgICAwLjAwNDAwMF0gQUNQSTogUmVzZXJ2aW5nIEhQRVQg
dGFibGUgbWVtb3J5IGF0IFttZW0gMHg1ZmU0YmUyMC0weDVmZTRiZTU3XQpbICAgIDAuMDA0
MDAwXSBBQ1BJOiBSZXNlcnZpbmcgSEVTVCB0YWJsZSBtZW1vcnkgYXQgW21lbSAweDVmZTRi
ZTYwLTB4NWZlNGMwMmZdClsgICAgMC4wMDQwMDBdIEFDUEk6IFJlc2VydmluZyBJVlJTIHRh
YmxlIG1lbW9yeSBhdCBbbWVtIDB4NWZlNGMwMzAtMHg1ZmU0YzA5Zl0KWyAgICAwLjAwNDAw
MF0gQUNQSTogUmVzZXJ2aW5nIFNTRFQgdGFibGUgbWVtb3J5IGF0IFttZW0gMHg1ZmU0YzBh
MC0weDVmZTRjNWJlXQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBSZXNlcnZpbmcgU1NEVCB0YWJs
ZSBtZW1vcnkgYXQgW21lbSAweDVmZTRjNWMwLTB4NWZlNGNjNzFdClsgICAgMC4wMDQwMDBd
IEFDUEk6IFJlc2VydmluZyBWRkNUIHRhYmxlIG1lbW9yeSBhdCBbbWVtIDB4NWZlNGNjODAt
MHg1ZmU1YmVlOF0KWyAgICAwLjAwNDAwMF0gTm8gTlVNQSBjb25maWd1cmF0aW9uIGZvdW5k
ClsgICAgMC4wMDQwMDBdIEZha2luZyBhIG5vZGUgYXQgW21lbSAweDAwMDAwMDAwMDAwMDAw
MDAtMHgwMDAwMDAwMTdlZmZmZmZmXQpbICAgIDAuMDA0MDAwXSBOT0RFX0RBVEEoMCkgYWxs
b2NhdGVkIFttZW0gMHgxN2VmZTkwMDAtMHgxN2VmZmZmZmZdClsgICAgMC4wMDQwMDBdIFpv
bmUgcmFuZ2VzOgpbICAgIDAuMDA0MDAwXSAgIERNQSAgICAgIFttZW0gMHgwMDAwMDAwMDAw
MDAxMDAwLTB4MDAwMDAwMDAwMGZmZmZmZl0KWyAgICAwLjAwNDAwMF0gICBETUEzMiAgICBb
bWVtIDB4MDAwMDAwMDAwMTAwMDAwMC0weDAwMDAwMDAwZmZmZmZmZmZdClsgICAgMC4wMDQw
MDBdICAgTm9ybWFsICAgW21lbSAweDAwMDAwMDAxMDAwMDAwMDAtMHgwMDAwMDAwMTdlZmZm
ZmZmXQpbICAgIDAuMDA0MDAwXSAgIERldmljZSAgIGVtcHR5ClsgICAgMC4wMDQwMDBdIE1v
dmFibGUgem9uZSBzdGFydCBmb3IgZWFjaCBub2RlClsgICAgMC4wMDQwMDBdIEVhcmx5IG1l
bW9yeSBub2RlIHJhbmdlcwpbICAgIDAuMDA0MDAwXSAgIG5vZGUgICAwOiBbbWVtIDB4MDAw
MDAwMDAwMDAwMTAwMC0weDAwMDAwMDAwMDAwOWVmZmZdClsgICAgMC4wMDQwMDBdICAgbm9k
ZSAgIDA6IFttZW0gMHgwMDAwMDAwMDAwMTAwMDAwLTB4MDAwMDAwMDA1ZmUzY2ZmZl0KWyAg
ICAwLjAwNDAwMF0gICBub2RlICAgMDogW21lbSAweDAwMDAwMDAxMDAwMDAwMDAtMHgwMDAw
MDAwMTdlZmZmZmZmXQpbICAgIDAuMDA0MDAwXSBJbml0bWVtIHNldHVwIG5vZGUgMCBbbWVt
IDB4MDAwMDAwMDAwMDAwMTAwMC0weDAwMDAwMDAxN2VmZmZmZmZdClsgICAgMC4wMDQwMDBd
IE9uIG5vZGUgMCwgem9uZSBETUE6IDEgcGFnZXMgaW4gdW5hdmFpbGFibGUgcmFuZ2VzClsg
ICAgMC4wMDQwMDBdIE9uIG5vZGUgMCwgem9uZSBETUE6IDk3IHBhZ2VzIGluIHVuYXZhaWxh
YmxlIHJhbmdlcwpbICAgIDAuMDA0MDAwXSBPbiBub2RlIDAsIHpvbmUgTm9ybWFsOiA0NTEg
cGFnZXMgaW4gdW5hdmFpbGFibGUgcmFuZ2VzClsgICAgMC4wMDQwMDBdIE9uIG5vZGUgMCwg
em9uZSBOb3JtYWw6IDQwOTYgcGFnZXMgaW4gdW5hdmFpbGFibGUgcmFuZ2VzClsgICAgMC4w
MDQwMDBdIEFDUEk6IFBNLVRpbWVyIElPIFBvcnQ6IDB4ODE4ClsgICAgMC4wMDQwMDBdIEFD
UEk6IExBUElDX05NSSAoYWNwaV9pZFsweGZmXSBoaWdoIGVkZ2UgbGludFsweDFdKQpbICAg
IDAuMDA0MDAwXSBJT0FQSUNbMF06IGFwaWNfaWQgNCwgdmVyc2lvbiAzMywgYWRkcmVzcyAw
eGZlYzAwMDAwLCBHU0kgMC0yMwpbICAgIDAuMDA0MDAwXSBBQ1BJOiBJTlRfU1JDX09WUiAo
YnVzIDAgYnVzX2lycSAwIGdsb2JhbF9pcnEgMiBkZmwgZGZsKQpbICAgIDAuMDA0MDAwXSBB
Q1BJOiBJTlRfU1JDX09WUiAoYnVzIDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2
ZWwpClsgICAgMC4wMDQwMDBdIEFDUEk6IFVzaW5nIEFDUEkgKE1BRFQpIGZvciBTTVAgY29u
ZmlndXJhdGlvbiBpbmZvcm1hdGlvbgpbICAgIDAuMDA0MDAwXSBBQ1BJOiBIUEVUIGlkOiAw
eDEwMjI4MjEwIGJhc2U6IDB4ZmVkMDAwMDAKWyAgICAwLjAwNDAwMF0gc21wYm9vdDogQWxs
b3dpbmcgMiBDUFVzLCAwIGhvdHBsdWcgQ1BVcwpbICAgIDAuMDA0MDAwXSBbbWVtIDB4ODAw
MDAwMDAtMHhmN2ZmZmZmZl0gYXZhaWxhYmxlIGZvciBQQ0kgZGV2aWNlcwpbICAgIDAuMDA0
MDAwXSBjbG9ja3NvdXJjZTogcmVmaW5lZC1qaWZmaWVzOiBtYXNrOiAweGZmZmZmZmZmIG1h
eF9jeWNsZXM6IDB4ZmZmZmZmZmYsIG1heF9pZGxlX25zOiA3NjQ1NTE5NjAwMjExNTY4IG5z
ClsgICAgMC4wMDQwMDBdIHNldHVwX3BlcmNwdTogTlJfQ1BVUzo2NCBucl9jcHVtYXNrX2Jp
dHM6MiBucl9jcHVfaWRzOjIgbnJfbm9kZV9pZHM6MQpbICAgIDAuMDA0MDAwXSBwZXJjcHU6
IEVtYmVkZGVkIDU1IHBhZ2VzL2NwdSBzMTg4MzkyIHI4MTkyIGQyODY5NiB1MTA0ODU3Ngpb
ICAgIDAuMDA0MDAwXSBwY3B1LWFsbG9jOiBzMTg4MzkyIHI4MTkyIGQyODY5NiB1MTA0ODU3
NiBhbGxvYz0xKjIwOTcxNTIKWyAgICAwLjAwNDAwMF0gcGNwdS1hbGxvYzogWzBdIDAgMSAK
WyAgICAwLjAwNDAwMF0gRmFsbGJhY2sgb3JkZXIgZm9yIE5vZGUgMDogMCAKWyAgICAwLjAw
NDAwMF0gQnVpbHQgMSB6b25lbGlzdHMsIG1vYmlsaXR5IGdyb3VwaW5nIG9uLiAgVG90YWwg
cGFnZXM6IDg5ODQzNgpbICAgIDAuMDA0MDAwXSBQb2xpY3kgem9uZTogTm9ybWFsClsgICAg
MC4wMDQwMDBdIEtlcm5lbCBjb21tYW5kIGxpbmU6IEJPT1RfSU1BR0U9L2Jvb3Qvdm1saW51
ei02LjMuMC1yYzMtMDAwNDUtZzY0ZGU0ZGY5YzgwYiByb290PS9kZXYvc2RhMyBydyBxdWll
dCBub2lzYXBucCBjcnlwdG9tZ3Iubm90ZXN0cyBpcHY2LmRpc2FibGVfaXB2Nj0xIHNlbGlu
dXg9MApbICAgIDAuMDA0MDAwXSBVbmtub3duIGtlcm5lbCBjb21tYW5kIGxpbmUgcGFyYW1l
dGVycyAibm9pc2FwbnAgQk9PVF9JTUFHRT0vYm9vdC92bWxpbnV6LTYuMy4wLXJjMy0wMDA0
NS1nNjRkZTRkZjljODBiIiwgd2lsbCBiZSBwYXNzZWQgdG8gdXNlciBzcGFjZS4KWyAgICAw
LjAwNDAwMF0gRGVudHJ5IGNhY2hlIGhhc2ggdGFibGUgZW50cmllczogNTI0Mjg4IChvcmRl
cjogMTAsIDQxOTQzMDQgYnl0ZXMsIGxpbmVhcikKWyAgICAwLjAwNDAwMF0gSW5vZGUtY2Fj
aGUgaGFzaCB0YWJsZSBlbnRyaWVzOiAyNjIxNDQgKG9yZGVyOiA5LCAyMDk3MTUyIGJ5dGVz
LCBsaW5lYXIpClsgICAgMC4wMDQwMDBdIG1lbSBhdXRvLWluaXQ6IHN0YWNrOm9mZiwgaGVh
cCBhbGxvYzpvZmYsIGhlYXAgZnJlZTpvZmYKWyAgICAwLjAwNDAwMF0gc3RhY2tkZXBvdDog
YWxsb2NhdGluZyBoYXNoIHRhYmxlIHZpYSBhbGxvY19sYXJnZV9zeXN0ZW1faGFzaApbICAg
IDAuMDA0MDAwXSBzdGFja2RlcG90IGhhc2ggdGFibGUgZW50cmllczogMjYyMTQ0IChvcmRl
cjogOSwgMjA5NzE1MiBieXRlcywgbGluZWFyKQpbICAgIDAuMDA0MDAwXSBzb2Z0d2FyZSBJ
TyBUTEI6IGFyZWEgbnVtIDIuClsgICAgMC4wMDQwMDBdIE1lbW9yeTogMzQ3NzEwNEsvMzY1
MTQzNksgYXZhaWxhYmxlICgxNDMzNksga2VybmVsIGNvZGUsIDIzNDBLIHJ3ZGF0YSwgNTMw
OEsgcm9kYXRhLCAyOTA4SyBpbml0LCAxMTA2NEsgYnNzLCAxNzQwNzJLIHJlc2VydmVkLCAw
SyBjbWEtcmVzZXJ2ZWQpClsgICAgMC4wMDQwMDBdIFNMVUI6IEhXYWxpZ249NjQsIE9yZGVy
PTAtMywgTWluT2JqZWN0cz0wLCBDUFVzPTIsIE5vZGVzPTEKWyAgICAwLjAwNDAwMF0gZnRy
YWNlOiBhbGxvY2F0aW5nIDM4NjUyIGVudHJpZXMgaW4gMTUxIHBhZ2VzClsgICAgMC4wMDQw
MDBdIGZ0cmFjZTogYWxsb2NhdGVkIDE1MSBwYWdlcyB3aXRoIDUgZ3JvdXBzClsgICAgMC4w
MDQwMDBdIER5bmFtaWMgUHJlZW1wdDogZnVsbApbICAgIDAuMDA0MDAwXSByY3U6IFByZWVt
cHRpYmxlIGhpZXJhcmNoaWNhbCBSQ1UgaW1wbGVtZW50YXRpb24uClsgICAgMC4wMDQwMDBd
IHJjdTogCVJDVSByZXN0cmljdGluZyBDUFVzIGZyb20gTlJfQ1BVUz02NCB0byBucl9jcHVf
aWRzPTIuClsgICAgMC4wMDQwMDBdIAlUcmFtcG9saW5lIHZhcmlhbnQgb2YgVGFza3MgUkNV
IGVuYWJsZWQuClsgICAgMC4wMDQwMDBdIAlSdWRlIHZhcmlhbnQgb2YgVGFza3MgUkNVIGVu
YWJsZWQuClsgICAgMC4wMDQwMDBdIAlUcmFjaW5nIHZhcmlhbnQgb2YgVGFza3MgUkNVIGVu
YWJsZWQuClsgICAgMC4wMDQwMDBdIHJjdTogUkNVIGNhbGN1bGF0ZWQgdmFsdWUgb2Ygc2No
ZWR1bGVyLWVubGlzdG1lbnQgZGVsYXkgaXMgMjUgamlmZmllcy4KWyAgICAwLjAwNDAwMF0g
cmN1OiBBZGp1c3RpbmcgZ2VvbWV0cnkgZm9yIHJjdV9mYW5vdXRfbGVhZj0xNiwgbnJfY3B1
X2lkcz0yClsgICAgMC4wMDQwMDBdIE5SX0lSUVM6IDQzNTIsIG5yX2lycXM6IDQ0MCwgcHJl
YWxsb2NhdGVkIGlycXM6IDE2ClsgICAgMC4wMDQwMDBdIHJjdTogc3JjdV9pbml0OiBTZXR0
aW5nIHNyY3Vfc3RydWN0IHNpemVzIGJhc2VkIG9uIGNvbnRlbnRpb24uClsgICAgMC4wMDQw
MDBdIHNwdXJpb3VzIDgyNTlBIGludGVycnVwdDogSVJRNy4KWyAgICAwLjAwNDAwMF0gQ29u
c29sZTogY29sb3VyIFZHQSsgODB4MjUKWyAgICAwLjAwNDAwMF0gcHJpbnRrOiBjb25zb2xl
IFt0dHkwXSBlbmFibGVkClsgICAgMC4wMDQwMDBdIEFDUEk6IENvcmUgcmV2aXNpb24gMjAy
MjEwMjAKWyAgICAwLjAwNDAwMF0gY2xvY2tzb3VyY2U6IGhwZXQ6IG1hc2s6IDB4ZmZmZmZm
ZmYgbWF4X2N5Y2xlczogMHhmZmZmZmZmZiwgbWF4X2lkbGVfbnM6IDEzMzQ4NDg3MzUwNCBu
cwpbICAgIDAuMDA0MDAwXSBBUElDOiBTd2l0Y2ggdG8gc3ltbWV0cmljIEkvTyBtb2RlIHNl
dHVwClsgICAgMC4wMDQwMDBdIEFNRC1WaTogVXNpbmcgZ2xvYmFsIElWSEQgRUZSOjB4MCwg
RUZSMjoweDAKWyAgICAwLjAwNDAwMF0gLi5USU1FUjogdmVjdG9yPTB4MzAgYXBpYzE9MCBw
aW4xPTIgYXBpYzI9LTEgcGluMj0tMQpbICAgIDAuMDA0MDAwXSBjbG9ja3NvdXJjZTogdHNj
LWVhcmx5OiBtYXNrOiAweGZmZmZmZmZmZmZmZmZmZmYgbWF4X2N5Y2xlczogMHg3MDcyYzNj
MGY1NCwgbWF4X2lkbGVfbnM6IDg4MTU5MDc1MjgwNiBucwpbICAgIDAuMTQ0OTEwXSBDYWxp
YnJhdGluZyBkZWxheSBsb29wIChza2lwcGVkKSwgdmFsdWUgY2FsY3VsYXRlZCB1c2luZyB0
aW1lciBmcmVxdWVuY3kuLiA3ODAxLjA5IEJvZ29NSVBTIChscGo9MTU2MDIxOTYpClsgICAg
MC4xNDQ5MTNdIHBpZF9tYXg6IGRlZmF1bHQ6IDMyNzY4IG1pbmltdW06IDMwMQpbICAgIDAu
MTQ1MDA4XSBMU006IGluaXRpYWxpemluZyBsc209Y2FwYWJpbGl0eQpbICAgIDAuMTQ1MTAz
XSBNb3VudC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDgxOTIgKG9yZGVyOiA0LCA2NTUz
NiBieXRlcywgbGluZWFyKQpbICAgIDAuMTQ1MTIwXSBNb3VudHBvaW50LWNhY2hlIGhhc2gg
dGFibGUgZW50cmllczogODE5MiAob3JkZXI6IDQsIDY1NTM2IGJ5dGVzLCBsaW5lYXIpClsg
ICAgMC4xNDU1MTVdIExhc3QgbGV2ZWwgaVRMQiBlbnRyaWVzOiA0S0IgNTEyLCAyTUIgMTAy
NCwgNE1CIDUxMgpbICAgIDAuMTQ1NTE4XSBMYXN0IGxldmVsIGRUTEIgZW50cmllczogNEtC
IDEwMjQsIDJNQiAxMDI0LCA0TUIgNTEyLCAxR0IgMApbICAgIDAuMTQ1NTIyXSBTcGVjdHJl
IFYxIDogTWl0aWdhdGlvbjogdXNlcmNvcHkvc3dhcGdzIGJhcnJpZXJzIGFuZCBfX3VzZXIg
cG9pbnRlciBzYW5pdGl6YXRpb24KWyAgICAwLjE0NTUyNV0gU3BlY3RyZSBWMiA6IE1pdGln
YXRpb246IFJldHBvbGluZXMKWyAgICAwLjE0NTUyNl0gU3BlY3RyZSBWMiA6IFNwZWN0cmUg
djIgLyBTcGVjdHJlUlNCIG1pdGlnYXRpb246IEZpbGxpbmcgUlNCIG9uIGNvbnRleHQgc3dp
dGNoClsgICAgMC4xNDU1MjZdIFNwZWN0cmUgVjIgOiBTcGVjdHJlIHYyIC8gU3BlY3RyZVJT
QiA6IEZpbGxpbmcgUlNCIG9uIFZNRVhJVApbICAgIDAuMTQ1NTI3XSBTcGVjdHJlIFYyIDog
RW5hYmxpbmcgU3BlY3VsYXRpb24gQmFycmllciBmb3IgZmlybXdhcmUgY2FsbHMKWyAgICAw
LjE0NTUyOF0gUkVUQmxlZWQ6IE1pdGlnYXRpb246IHVudHJhaW5lZCByZXR1cm4gdGh1bmsK
WyAgICAwLjE0NTUzMF0gU3BlY3RyZSBWMiA6IG1pdGlnYXRpb246IEVuYWJsaW5nIGNvbmRp
dGlvbmFsIEluZGlyZWN0IEJyYW5jaCBQcmVkaWN0aW9uIEJhcnJpZXIKWyAgICAwLjE0NTUz
Ml0gU3BlY3VsYXRpdmUgU3RvcmUgQnlwYXNzOiBNaXRpZ2F0aW9uOiBTcGVjdWxhdGl2ZSBT
dG9yZSBCeXBhc3MgZGlzYWJsZWQgdmlhIHByY3RsClsgICAgMC4xNDk5NzNdIEZyZWVpbmcg
U01QIGFsdGVybmF0aXZlcyBtZW1vcnk6IDMySwpbICAgIDAuMjU4MTEyXSBzbXBib290OiBD
UFUwOiBBTUQgQTYtNjQwMEsgQVBVIHdpdGggUmFkZW9uKHRtKSBIRCBHcmFwaGljcyAoZmFt
aWx5OiAweDE1LCBtb2RlbDogMHgxMywgc3RlcHBpbmc6IDB4MSkKWyAgICAwLjI1ODM0OF0g
Y2JsaXN0X2luaXRfZ2VuZXJpYzogU2V0dGluZyBhZGp1c3RhYmxlIG51bWJlciBvZiBjYWxs
YmFjayBxdWV1ZXMuClsgICAgMC4yNTgzNTBdIGNibGlzdF9pbml0X2dlbmVyaWM6IFNldHRp
bmcgc2hpZnQgdG8gMSBhbmQgbGltIHRvIDEuClsgICAgMC4yNTgzODBdIGNibGlzdF9pbml0
X2dlbmVyaWM6IFNldHRpbmcgc2hpZnQgdG8gMSBhbmQgbGltIHRvIDEuClsgICAgMC4yNTg0
MDZdIGNibGlzdF9pbml0X2dlbmVyaWM6IFNldHRpbmcgc2hpZnQgdG8gMSBhbmQgbGltIHRv
IDEuClsgICAgMC4yNTg0MzNdIFBlcmZvcm1hbmNlIEV2ZW50czogRmFtMTVoIGNvcmUgcGVy
ZmN0ciwgQU1EIFBNVSBkcml2ZXIuClsgICAgMC4yNTg0NTVdIC4uLiB2ZXJzaW9uOiAgICAg
ICAgICAgICAgICAwClsgICAgMC4yNTg0NTZdIC4uLiBiaXQgd2lkdGg6ICAgICAgICAgICAg
ICA0OApbICAgIDAuMjU4NDU3XSAuLi4gZ2VuZXJpYyByZWdpc3RlcnM6ICAgICAgNgpbICAg
IDAuMjU4NDU4XSAuLi4gdmFsdWUgbWFzazogICAgICAgICAgICAgMDAwMGZmZmZmZmZmZmZm
ZgpbICAgIDAuMjU4NDU5XSAuLi4gbWF4IHBlcmlvZDogICAgICAgICAgICAgMDAwMDdmZmZm
ZmZmZmZmZgpbICAgIDAuMjU4NDYwXSAuLi4gZml4ZWQtcHVycG9zZSBldmVudHM6ICAgMApb
ICAgIDAuMjU4NDYxXSAuLi4gZXZlbnQgbWFzazogICAgICAgICAgICAgMDAwMDAwMDAwMDAw
MDAzZgpbICAgIDAuMjU4NTgxXSByY3U6IEhpZXJhcmNoaWNhbCBTUkNVIGltcGxlbWVudGF0
aW9uLgpbICAgIDAuMjU4NTgyXSByY3U6IAlNYXggcGhhc2Ugbm8tZGVsYXkgaW5zdGFuY2Vz
IGlzIDEwMDAuClsgICAgMC4yNTkxNzNdIE5NSSB3YXRjaGRvZzogRW5hYmxlZC4gUGVybWFu
ZW50bHkgY29uc3VtZXMgb25lIGh3LVBNVSBjb3VudGVyLgpbICAgIDAuMjU5MjQ3XSBzbXA6
IEJyaW5naW5nIHVwIHNlY29uZGFyeSBDUFVzIC4uLgpbICAgIDAuMjU5NDQ2XSB4ODY6IEJv
b3RpbmcgU01QIGNvbmZpZ3VyYXRpb246ClsgICAgMC4yNTk0NDhdIC4uLi4gbm9kZSAgIzAs
IENQVXM6ICAgICAgIzEKWyAgICAwLjI1OTQ1M10gc21wYm9vdDogS2lja2luZyBBUCBhbGl2
ZTogMTcKWyAgIDEwLjI2MDkxOF0gQ1BVMSBmYWlsZWQgdG8gcmVwb3J0IGFsaXZlIHN0YXRl
ClsgICAxMC4yNjA5OThdIHNtcDogQnJvdWdodCB1cCAxIG5vZGUsIDEgQ1BVClsgICAxMC4y
NjEwMDBdIHNtcGJvb3Q6IE1heCBsb2dpY2FsIHBhY2thZ2VzOiAyClsgICAxMC4yNjEwMDFd
IHNtcGJvb3Q6IFRvdGFsIG9mIDEgcHJvY2Vzc29ycyBhY3RpdmF0ZWQgKDc4MDEuMDkgQm9n
b01JUFMpClsgICAxMC4yNjE1MzJdIGRldnRtcGZzOiBpbml0aWFsaXplZApbICAgMTAuMjYx
NjI4XSB4ODYvbW06IE1lbW9yeSBibG9jayBzaXplOiAxMjhNQgpbICAgMTAuMjYyNzE4XSBj
bG9ja3NvdXJjZTogamlmZmllczogbWFzazogMHhmZmZmZmZmZiBtYXhfY3ljbGVzOiAweGZm
ZmZmZmZmLCBtYXhfaWRsZV9uczogNzY0NTA0MTc4NTEwMDAwMCBucwpbICAgMTAuMjYyNzI2
XSBmdXRleCBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXI6IDMsIDMyNzY4IGJ5dGVz
LCBsaW5lYXIpClsgICAxMC4yNjI4MTddIHBpbmN0cmwgY29yZTogaW5pdGlhbGl6ZWQgcGlu
Y3RybCBzdWJzeXN0ZW0KWyAgIDEwLjI2Mjg4OV0gUE06IFJUQyB0aW1lOiAxNjoyNToyNCwg
ZGF0ZTogMjAyMy0wNC0xOQpbICAgMTAuMjYzNjM2XSBORVQ6IFJlZ2lzdGVyZWQgUEZfTkVU
TElOSy9QRl9ST1VURSBwcm90b2NvbCBmYW1pbHkKWyAgIDEwLjI2Mzg2MV0gYXVkaXQ6IGlu
aXRpYWxpemluZyBuZXRsaW5rIHN1YnN5cyAoZGlzYWJsZWQpClsgICAxMC4yNjQxMThdIHRo
ZXJtYWxfc3lzOiBSZWdpc3RlcmVkIHRoZXJtYWwgZ292ZXJub3IgJ2ZhaXJfc2hhcmUnClsg
ICAxMC4yNjQxMjBdIHRoZXJtYWxfc3lzOiBSZWdpc3RlcmVkIHRoZXJtYWwgZ292ZXJub3Ig
J2JhbmdfYmFuZycKWyAgIDEwLjI2NDEyMV0gdGhlcm1hbF9zeXM6IFJlZ2lzdGVyZWQgdGhl
cm1hbCBnb3Zlcm5vciAnc3RlcF93aXNlJwpbICAgMTAuMjY0MTIyXSB0aGVybWFsX3N5czog
UmVnaXN0ZXJlZCB0aGVybWFsIGdvdmVybm9yICd1c2VyX3NwYWNlJwpbICAgMTAuMjY0MTQx
XSBjcHVpZGxlOiB1c2luZyBnb3Zlcm5vciBsYWRkZXIKWyAgIDEwLjI2NDE0Nl0gY3B1aWRs
ZTogdXNpbmcgZ292ZXJub3IgbWVudQpbICAgMTAuMjY0MzU3XSBQQ0k6IE1NQ09ORklHIGZv
ciBkb21haW4gMDAwMCBbYnVzIDAwLTNmXSBhdCBbbWVtIDB4ZjgwMDAwMDAtMHhmYmZmZmZm
Zl0gKGJhc2UgMHhmODAwMDAwMCkKWyAgIDEwLjI2NDM2Ml0gUENJOiBNTUNPTkZJRyBhdCBb
bWVtIDB4ZjgwMDAwMDAtMHhmYmZmZmZmZl0gcmVzZXJ2ZWQgYXMgRTgyMCBlbnRyeQpbICAg
MTAuMjY0Mzc0XSBQQ0k6IFVzaW5nIGNvbmZpZ3VyYXRpb24gdHlwZSAxIGZvciBiYXNlIGFj
Y2VzcwpbICAgMTAuMjY0NTc0XSBrcHJvYmVzOiBrcHJvYmUganVtcC1vcHRpbWl6YXRpb24g
aXMgZW5hYmxlZC4gQWxsIGtwcm9iZXMgYXJlIG9wdGltaXplZCBpZiBwb3NzaWJsZS4KWyAg
IDEwLjI2ODk5M10gYXVkaXQ6IHR5cGU9MjAwMCBhdWRpdCgxNjgxOTIxNTI0LjE0MDoxKTog
c3RhdGU9aW5pdGlhbGl6ZWQgYXVkaXRfZW5hYmxlZD0wIHJlcz0xClsgICAxMC4yODEwMDld
IEh1Z2VUTEI6IHJlZ2lzdGVyZWQgMS4wMCBHaUIgcGFnZSBzaXplLCBwcmUtYWxsb2NhdGVk
IDAgcGFnZXMKWyAgIDEwLjI4MTAxMl0gSHVnZVRMQjogMTYzODAgS2lCIHZtZW1tYXAgY2Fu
IGJlIGZyZWVkIGZvciBhIDEuMDAgR2lCIHBhZ2UKWyAgIDEwLjI4MTAxNF0gSHVnZVRMQjog
cmVnaXN0ZXJlZCAyLjAwIE1pQiBwYWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBwYWdlcwpb
ICAgMTAuMjgxMDE1XSBIdWdlVExCOiAyOCBLaUIgdm1lbW1hcCBjYW4gYmUgZnJlZWQgZm9y
IGEgMi4wMCBNaUIgcGFnZQpbICAgMTAuMjg1ODY0XSBjcnlwdGQ6IG1heF9jcHVfcWxlbiBz
ZXQgdG8gMTAwMApbICAgMTAuMjg5MTg4XSBBQ1BJOiBBZGRlZCBfT1NJKE1vZHVsZSBEZXZp
Y2UpClsgICAxMC4yODkxOTFdIEFDUEk6IEFkZGVkIF9PU0koUHJvY2Vzc29yIERldmljZSkK
WyAgIDEwLjI4OTE5Ml0gQUNQSTogQWRkZWQgX09TSSgzLjAgX1NDUCBFeHRlbnNpb25zKQpb
ICAgMTAuMjg5MTkzXSBBQ1BJOiBBZGRlZCBfT1NJKFByb2Nlc3NvciBBZ2dyZWdhdG9yIERl
dmljZSkKWyAgIDEwLjI5NTIwM10gQUNQSTogNCBBQ1BJIEFNTCB0YWJsZXMgc3VjY2Vzc2Z1
bGx5IGFjcXVpcmVkIGFuZCBsb2FkZWQKWyAgIDEwLjI5NjcwMl0gQUNQSTogSW50ZXJwcmV0
ZXIgZW5hYmxlZApbICAgMTAuMjk2NzI3XSBBQ1BJOiBQTTogKHN1cHBvcnRzIFMwIFMxIFMz
IFM1KQpbICAgMTAuMjk2NzI4XSBBQ1BJOiBVc2luZyBJT0FQSUMgZm9yIGludGVycnVwdCBy
b3V0aW5nClsgICAxMC4yOTY3NzldIEhFU1Q6IFRhYmxlIHBhcnNpbmcgaGFzIGJlZW4gaW5p
dGlhbGl6ZWQuClsgICAxMC4yOTY4MDBdIEdIRVM6IEZhaWxlZCB0byBlbmFibGUgQVBFSSBm
aXJtd2FyZSBmaXJzdCBtb2RlLgpbICAgMTAuMjk2ODAzXSBQQ0k6IFVzaW5nIGhvc3QgYnJp
ZGdlIHdpbmRvd3MgZnJvbSBBQ1BJOyBpZiBuZWNlc3NhcnksIHVzZSAicGNpPW5vY3JzIiBh
bmQgcmVwb3J0IGEgYnVnClsgICAxMC4yOTY4MDRdIFBDSTogSWdub3JpbmcgRTgyMCByZXNl
cnZhdGlvbnMgZm9yIGhvc3QgYnJpZGdlIHdpbmRvd3MKWyAgIDEwLjI5NzA4NF0gQUNQSTog
RW5hYmxlZCA4IEdQRXMgaW4gYmxvY2sgMDAgdG8gMUYKWyAgIDEwLjMwMjg5OF0gQUNQSTog
UENJIFJvb3QgQnJpZGdlIFtQQ0kwXSAoZG9tYWluIDAwMDAgW2J1cyAwMC1mZl0pClsgICAx
MC4zMDI5MDldIGFjcGkgUE5QMEEwMzowMDogX09TQzogT1Mgc3VwcG9ydHMgW0V4dGVuZGVk
Q29uZmlnIEFTUE0gQ2xvY2tQTSBTZWdtZW50cyBNU0kgSFBYLVR5cGUzXQpbICAgMTAuMzAy
OTg3XSBhY3BpIFBOUDBBMDM6MDA6IF9PU0M6IE9TIG5vdyBjb250cm9scyBbUE1FIEFFUiBQ
Q0llQ2FwYWJpbGl0eSBMVFJdClsgICAxMC4zMDMwMDBdIGFjcGkgUE5QMEEwMzowMDogW0Zp
cm13YXJlIEluZm9dOiBNTUNPTkZJRyBmb3IgZG9tYWluIDAwMDAgW2J1cyAwMC0zZl0gb25s
eSBwYXJ0aWFsbHkgY292ZXJzIHRoaXMgYnJpZGdlClsgICAxMC4zMDMwNzldIGFjcGkgUE5Q
MEEwMzowMDogaG9zdCBicmlkZ2Ugd2luZG93IGV4cGFuZGVkIHRvIFtpbyAgMHgwMDAwLTB4
MGNmNyB3aW5kb3ddOyBbaW8gIDB4MDNiMC0weDAzZGYgd2luZG93XSBpZ25vcmVkClsgICAx
MC4zMDMzMTFdIFBDSSBob3N0IGJyaWRnZSB0byBidXMgMDAwMDowMApbICAgMTAuMzAzMzEz
XSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFtpbyAgMHgwMDAwLTB4MGNm
NyB3aW5kb3ddClsgICAxMC4zMDMzMTZdIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVz
b3VyY2UgW2lvICAweDBkMDAtMHhmZmZmIHdpbmRvd10KWyAgIDEwLjMwMzMxOF0gcGNpX2J1
cyAwMDAwOjAwOiByb290IGJ1cyByZXNvdXJjZSBbbWVtIDB4MDAwYTAwMDAtMHgwMDBkZmZm
Zl0KWyAgIDEwLjMwMzMyMF0gcGNpX2J1cyAwMDAwOjAwOiByb290IGJ1cyByZXNvdXJjZSBb
bWVtIDB4ODAwMDAwMDAtMHhmZmZmZmZmZl0KWyAgIDEwLjMwMzMyMl0gcGNpX2J1cyAwMDAw
OjAwOiByb290IGJ1cyByZXNvdXJjZSBbYnVzIDAwLWZmXQpbICAgMTAuMzAzMzQ1XSBwY2kg
MDAwMDowMDowMC4wOiBbMTAyMjoxNDEwXSB0eXBlIDAwIGNsYXNzIDB4MDYwMDAwClsgICAx
MC4zMDM0OTBdIHBjaSAwMDAwOjAwOjAwLjI6IFsxMDIyOjE0MTldIHR5cGUgMDAgY2xhc3Mg
MHgwODA2MDAKWyAgIDEwLjMwMzU4MF0gcGNpIDAwMDA6MDA6MDEuMDogWzEwMDI6OTk5Nl0g
dHlwZSAwMCBjbGFzcyAweDAzMDAwMApbICAgMTAuMzAzNTg4XSBwY2kgMDAwMDowMDowMS4w
OiByZWcgMHgxMDogW21lbSAweGUwMDAwMDAwLTB4ZWZmZmZmZmYgcHJlZl0KWyAgIDEwLjMw
MzU5M10gcGNpIDAwMDA6MDA6MDEuMDogcmVnIDB4MTQ6IFtpbyAgMHgxMDAwLTB4MTBmZl0K
WyAgIDEwLjMwMzU5OF0gcGNpIDAwMDA6MDA6MDEuMDogcmVnIDB4MTg6IFttZW0gMHhmMDE4
MDAwMC0weGYwMWJmZmZmXQpbICAgMTAuMzAzNjE0XSBwY2kgMDAwMDowMDowMS4wOiBlbmFi
bGluZyBFeHRlbmRlZCBUYWdzClsgICAxMC4zMDM2MjVdIHBjaSAwMDAwOjAwOjAxLjA6IFZp
ZGVvIGRldmljZSB3aXRoIHNoYWRvd2VkIFJPTSBhdCBbbWVtIDB4MDAwYzAwMDAtMHgwMDBk
ZmZmZl0KWyAgIDEwLjMwMzY0Ml0gcGNpIDAwMDA6MDA6MDEuMDogc3VwcG9ydHMgRDEgRDIK
WyAgIDEwLjMwMzcwNl0gcGNpIDAwMDA6MDA6MDEuMTogWzEwMDI6OTkwMl0gdHlwZSAwMCBj
bGFzcyAweDA0MDMwMApbICAgMTAuMzAzNzE0XSBwY2kgMDAwMDowMDowMS4xOiByZWcgMHgx
MDogW21lbSAweGYwMWMwMDAwLTB4ZjAxYzNmZmZdClsgICAxMC4zMDM3MzVdIHBjaSAwMDAw
OjAwOjAxLjE6IGVuYWJsaW5nIEV4dGVuZGVkIFRhZ3MKWyAgIDEwLjMwMzc1OV0gcGNpIDAw
MDA6MDA6MDEuMTogc3VwcG9ydHMgRDEgRDIKWyAgIDEwLjMwMzg0Ml0gcGNpIDAwMDA6MDA6
MTEuMDogWzEwMjI6NzgwMV0gdHlwZSAwMCBjbGFzcyAweDAxMDYwMQpbICAgMTAuMzAzODU1
XSBwY2kgMDAwMDowMDoxMS4wOiByZWcgMHgxMDogW2lvICAweDE0MTAtMHgxNDE3XQpbICAg
MTAuMzAzODYzXSBwY2kgMDAwMDowMDoxMS4wOiByZWcgMHgxNDogW2lvICAweDE0MjAtMHgx
NDIzXQpbICAgMTAuMzAzODcwXSBwY2kgMDAwMDowMDoxMS4wOiByZWcgMHgxODogW2lvICAw
eDE0MTgtMHgxNDFmXQpbICAgMTAuMzAzODc4XSBwY2kgMDAwMDowMDoxMS4wOiByZWcgMHgx
YzogW2lvICAweDE0MjQtMHgxNDI3XQpbICAgMTAuMzAzODg1XSBwY2kgMDAwMDowMDoxMS4w
OiByZWcgMHgyMDogW2lvICAweDE0MDAtMHgxNDBmXQpbICAgMTAuMzAzODkyXSBwY2kgMDAw
MDowMDoxMS4wOiByZWcgMHgyNDogW21lbSAweGYwMWNjMDAwLTB4ZjAxY2M3ZmZdClsgICAx
MC4zMDQwMzhdIHBjaSAwMDAwOjAwOjEyLjA6IFsxMDIyOjc4MDddIHR5cGUgMDAgY2xhc3Mg
MHgwYzAzMTAKWyAgIDEwLjMwNDA1MV0gcGNpIDAwMDA6MDA6MTIuMDogcmVnIDB4MTA6IFtt
ZW0gMHhmMDFjODAwMC0weGYwMWM4ZmZmXQpbICAgMTAuMzA0MjIwXSBwY2kgMDAwMDowMDox
Mi4yOiBbMTAyMjo3ODA4XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzIwClsgICAxMC4zMDQyMzRd
IHBjaSAwMDAwOjAwOjEyLjI6IHJlZyAweDEwOiBbbWVtIDB4ZjAxY2QwMDAtMHhmMDFjZDBm
Zl0KWyAgIDEwLjMwNDI5OV0gcGNpIDAwMDA6MDA6MTIuMjogc3VwcG9ydHMgRDEgRDIKWyAg
IDEwLjMwNDMwMF0gcGNpIDAwMDA6MDA6MTIuMjogUE1FIyBzdXBwb3J0ZWQgZnJvbSBEMCBE
MSBEMiBEM2hvdApbICAgMTAuMzA0NDMyXSBwY2kgMDAwMDowMDoxMy4wOiBbMTAyMjo3ODA3
XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAxMC4zMDQ0NDZdIHBjaSAwMDAwOjAwOjEz
LjA6IHJlZyAweDEwOiBbbWVtIDB4ZjAxYzkwMDAtMHhmMDFjOWZmZl0KWyAgIDEwLjMwNDYx
NF0gcGNpIDAwMDA6MDA6MTMuMjogWzEwMjI6NzgwOF0gdHlwZSAwMCBjbGFzcyAweDBjMDMy
MApbICAgMTAuMzA0NjI3XSBwY2kgMDAwMDowMDoxMy4yOiByZWcgMHgxMDogW21lbSAweGYw
MWNlMDAwLTB4ZjAxY2UwZmZdClsgICAxMC4zMDQ2OTJdIHBjaSAwMDAwOjAwOjEzLjI6IHN1
cHBvcnRzIEQxIEQyClsgICAxMC4zMDQ2OTNdIHBjaSAwMDAwOjAwOjEzLjI6IFBNRSMgc3Vw
cG9ydGVkIGZyb20gRDAgRDEgRDIgRDNob3QKWyAgIDEwLjMwNDgyM10gcGNpIDAwMDA6MDA6
MTQuMDogWzEwMjI6NzgwYl0gdHlwZSAwMCBjbGFzcyAweDBjMDUwMApbICAgMTAuMzA1NTU0
XSBwY2kgMDAwMDowMDoxNC4yOiBbMTAyMjo3ODBkXSB0eXBlIDAwIGNsYXNzIDB4MDQwMzAw
ClsgICAxMC4zMDU1NzNdIHBjaSAwMDAwOjAwOjE0LjI6IHJlZyAweDEwOiBbbWVtIDB4ZjAx
YzQwMDAtMHhmMDFjN2ZmZiA2NGJpdF0KWyAgIDEwLjMwNTYzM10gcGNpIDAwMDA6MDA6MTQu
MjogUE1FIyBzdXBwb3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgIDEwLjMwNTc2OV0g
cGNpIDAwMDA6MDA6MTQuMzogWzEwMjI6NzgwZV0gdHlwZSAwMCBjbGFzcyAweDA2MDEwMApb
ICAgMTAuMzA1OTQ1XSBwY2kgMDAwMDowMDoxNC40OiBbMTAyMjo3ODBmXSB0eXBlIDAxIGNs
YXNzIDB4MDYwNDAxClsgICAxMC4zMDYwOTBdIHBjaSAwMDAwOjAwOjE0LjU6IFsxMDIyOjc4
MDldIHR5cGUgMDAgY2xhc3MgMHgwYzAzMTAKWyAgIDEwLjMwNjEwNF0gcGNpIDAwMDA6MDA6
MTQuNTogcmVnIDB4MTA6IFttZW0gMHhmMDFjYTAwMC0weGYwMWNhZmZmXQpbICAgMTAuMzA2
MjY5XSBwY2kgMDAwMDowMDoxNS4wOiBbMTAyMjo0M2EwXSB0eXBlIDAxIGNsYXNzIDB4MDYw
NDAwClsgICAxMC4zMDYyOThdIHBjaSAwMDAwOjAwOjE1LjA6IGVuYWJsaW5nIEV4dGVuZGVk
IFRhZ3MKWyAgIDEwLjMwNjMzOF0gcGNpIDAwMDA6MDA6MTUuMDogc3VwcG9ydHMgRDEgRDIK
WyAgIDEwLjMwNjQ5OV0gcGNpIDAwMDA6MDA6MTUuMTogWzEwMjI6NDNhMV0gdHlwZSAwMSBj
bGFzcyAweDA2MDQwMApbICAgMTAuMzA2NTMwXSBwY2kgMDAwMDowMDoxNS4xOiBlbmFibGlu
ZyBFeHRlbmRlZCBUYWdzClsgICAxMC4zMDY1NjldIHBjaSAwMDAwOjAwOjE1LjE6IHN1cHBv
cnRzIEQxIEQyClsgICAxMC4zMDY3MjVdIHBjaSAwMDAwOjAwOjE1LjI6IFsxMDIyOjQzYTJd
IHR5cGUgMDEgY2xhc3MgMHgwNjA0MDAKWyAgIDEwLjMwNjc1Ml0gcGNpIDAwMDA6MDA6MTUu
MjogZW5hYmxpbmcgRXh0ZW5kZWQgVGFncwpbICAgMTAuMzA2NzkxXSBwY2kgMDAwMDowMDox
NS4yOiBzdXBwb3J0cyBEMSBEMgpbICAgMTAuMzA2ODY2XSBwY2kgMDAwMDowMDoxNi4wOiBb
MTAyMjo3ODA3XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAxMC4zMDY4NzldIHBjaSAw
MDAwOjAwOjE2LjA6IHJlZyAweDEwOiBbbWVtIDB4ZjAxY2IwMDAtMHhmMDFjYmZmZl0KWyAg
IDEwLjMwNzA1N10gcGNpIDAwMDA6MDA6MTYuMjogWzEwMjI6NzgwOF0gdHlwZSAwMCBjbGFz
cyAweDBjMDMyMApbICAgMTAuMzA3MDcwXSBwY2kgMDAwMDowMDoxNi4yOiByZWcgMHgxMDog
W21lbSAweGYwMWNmMDAwLTB4ZjAxY2YwZmZdClsgICAxMC4zMDcxMzVdIHBjaSAwMDAwOjAw
OjE2LjI6IHN1cHBvcnRzIEQxIEQyClsgICAxMC4zMDcxMzZdIHBjaSAwMDAwOjAwOjE2LjI6
IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDEgRDIgRDNob3QKWyAgIDEwLjMwNzI3OF0gcGNp
IDAwMDA6MDA6MTguMDogWzEwMjI6MTQwMF0gdHlwZSAwMCBjbGFzcyAweDA2MDAwMApbICAg
MTAuMzA3MzQyXSBwY2kgMDAwMDowMDoxOC4xOiBbMTAyMjoxNDAxXSB0eXBlIDAwIGNsYXNz
IDB4MDYwMDAwClsgICAxMC4zMDc0MDJdIHBjaSAwMDAwOjAwOjE4LjI6IFsxMDIyOjE0MDJd
IHR5cGUgMDAgY2xhc3MgMHgwNjAwMDAKWyAgIDEwLjMwNzQ2N10gcGNpIDAwMDA6MDA6MTgu
MzogWzEwMjI6MTQwM10gdHlwZSAwMCBjbGFzcyAweDA2MDAwMApbICAgMTAuMzA3NjAwXSBw
Y2kgMDAwMDowMDoxOC40OiBbMTAyMjoxNDA0XSB0eXBlIDAwIGNsYXNzIDB4MDYwMDAwClsg
ICAxMC4zMDc2NjNdIHBjaSAwMDAwOjAwOjE4LjU6IFsxMDIyOjE0MDVdIHR5cGUgMDAgY2xh
c3MgMHgwNjAwMDAKWyAgIDEwLjMwNzczOF0gcGNpX2J1cyAwMDAwOjAxOiBleHRlbmRlZCBj
b25maWcgc3BhY2Ugbm90IGFjY2Vzc2libGUKWyAgIDEwLjMwNzgwNF0gcGNpIDAwMDA6MDA6
MTQuNDogUENJIGJyaWRnZSB0byBbYnVzIDAxXSAoc3VidHJhY3RpdmUgZGVjb2RlKQpbICAg
MTAuMzA3ODEzXSBwY2kgMDAwMDowMDoxNC40OiAgIGJyaWRnZSB3aW5kb3cgW2lvICAweDAw
MDAtMHgwY2Y3IHdpbmRvd10gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgIDEwLjMwNzgxNl0g
cGNpIDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgwZDAwLTB4ZmZmZiB3
aW5kb3ddIChzdWJ0cmFjdGl2ZSBkZWNvZGUpClsgICAxMC4zMDc4MThdIHBjaSAwMDAwOjAw
OjE0LjQ6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4MDAwYTAwMDAtMHgwMDBkZmZmZl0gKHN1
YnRyYWN0aXZlIGRlY29kZSkKWyAgIDEwLjMwNzgyMF0gcGNpIDAwMDA6MDA6MTQuNDogICBi
cmlkZ2Ugd2luZG93IFttZW0gMHg4MDAwMDAwMC0weGZmZmZmZmZmXSAoc3VidHJhY3RpdmUg
ZGVjb2RlKQpbICAgMTAuMzA3ODY4XSBwY2kgMDAwMDowMDoxNS4wOiBQQ0kgYnJpZGdlIHRv
IFtidXMgMDJdClsgICAxMC4zMDc5NTNdIHBjaSAwMDAwOjAzOjAwLjA6IFsxYjIxOjEwNDJd
IHR5cGUgMDAgY2xhc3MgMHgwYzAzMzAKWyAgIDEwLjMwNzk4OV0gcGNpIDAwMDA6MDM6MDAu
MDogcmVnIDB4MTA6IFttZW0gMHhmMDAwMDAwMC0weGYwMDA3ZmZmIDY0Yml0XQpbICAgMTAu
MzA4MTY3XSBwY2kgMDAwMDowMzowMC4wOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQzaG90IEQz
Y29sZApbICAgMTAuMzA4MjEwXSBwY2kgMDAwMDowMzowMC4wOiAyLjAwMCBHYi9zIGF2YWls
YWJsZSBQQ0llIGJhbmR3aWR0aCwgbGltaXRlZCBieSAyLjUgR1QvcyBQQ0llIHgxIGxpbmsg
YXQgMDAwMDowMDoxNS4xIChjYXBhYmxlIG9mIDQuMDAwIEdiL3Mgd2l0aCA1LjAgR1QvcyBQ
Q0llIHgxIGxpbmspClsgICAxMC4zMTcwNDddIHBjaSAwMDAwOjAwOjE1LjE6IFBDSSBicmlk
Z2UgdG8gW2J1cyAwM10KWyAgIDEwLjMxNzA1OV0gcGNpIDAwMDA6MDA6MTUuMTogICBicmlk
Z2Ugd2luZG93IFttZW0gMHhmMDAwMDAwMC0weGYwMGZmZmZmXQpbICAgMTAuMzE3MDY4XSBw
Y2kgMDAwMDowMDoxNS4yOiBicmlkZ2UgY29uZmlndXJhdGlvbiBpbnZhbGlkIChbYnVzIDAw
LTAwXSksIHJlY29uZmlndXJpbmcKWyAgIDEwLjMxNzE4OF0gcGNpIDAwMDA6MDQ6MDAuMDog
WzEwZWM6ODE2OF0gdHlwZSAwMCBjbGFzcyAweDAyMDAwMApbICAgMTAuMzE3MjA2XSBwY2kg
MDAwMDowNDowMC4wOiByZWcgMHgxMDogW2lvICAweDAwMDAtMHgwMGZmXQpbICAgMTAuMzE3
MjI3XSBwY2kgMDAwMDowNDowMC4wOiByZWcgMHgxODogW21lbSAweDAwMDAwMDAwLTB4MDAw
MDBmZmYgNjRiaXQgcHJlZl0KWyAgIDEwLjMxNzI0MV0gcGNpIDAwMDA6MDQ6MDAuMDogcmVn
IDB4MjA6IFttZW0gMHgwMDAwMDAwMC0weDAwMDAzZmZmIDY0Yml0IHByZWZdClsgICAxMC4z
MTczNDldIHBjaSAwMDAwOjA0OjAwLjA6IHN1cHBvcnRzIEQxIEQyClsgICAxMC4zMTczNTFd
IHBjaSAwMDAwOjA0OjAwLjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDEgRDIgRDNob3Qg
RDNjb2xkClsgICAxMC4zMjg5NzBdIHBjaSAwMDAwOjAwOjE1LjI6IFBDSSBicmlkZ2UgdG8g
W2J1cyAwNC1mZl0KWyAgIDEwLjMyODk4MV0gcGNpIDAwMDA6MDA6MTUuMjogICBicmlkZ2Ug
d2luZG93IFtpbyAgMHgwMDAwLTB4MGZmZl0KWyAgIDEwLjMyODk4NV0gcGNpIDAwMDA6MDA6
MTUuMjogICBicmlkZ2Ugd2luZG93IFttZW0gMHgwMDAwMDAwMC0weDAwMGZmZmZmXQpbICAg
MTAuMzI4OTg5XSBwY2kgMDAwMDowMDoxNS4yOiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweDAw
MDAwMDAwLTB4MDAwZmZmZmYgNjRiaXQgcHJlZl0KWyAgIDEwLjMyODk5M10gcGNpX2J1cyAw
MDAwOjA0OiBidXNuX3JlczogW2J1cyAwNC1mZl0gZW5kIGlzIHVwZGF0ZWQgdG8gMDQKWyAg
IDEwLjMyOTUwMl0gQUNQSTogUENJOiBJbnRlcnJ1cHQgbGluayBJTlRBIGNvbmZpZ3VyZWQg
Zm9yIElSUSAwClsgICAxMC4zMjk1OTZdIEFDUEk6IFBDSTogSW50ZXJydXB0IGxpbmsgSU5U
QiBjb25maWd1cmVkIGZvciBJUlEgMApbICAgMTAuMzI5Njg3XSBBQ1BJOiBQQ0k6IEludGVy
cnVwdCBsaW5rIElOVEMgY29uZmlndXJlZCBmb3IgSVJRIDAKWyAgIDEwLjMyOTc3N10gQUNQ
STogUENJOiBJbnRlcnJ1cHQgbGluayBJTlREIGNvbmZpZ3VyZWQgZm9yIElSUSAwClsgICAx
MC4zMjk4NjldIEFDUEk6IFBDSTogSW50ZXJydXB0IGxpbmsgSU5URSBjb25maWd1cmVkIGZv
ciBJUlEgMApbICAgMTAuMzI5OTU5XSBBQ1BJOiBQQ0k6IEludGVycnVwdCBsaW5rIElOVEYg
Y29uZmlndXJlZCBmb3IgSVJRIDAKWyAgIDEwLjMzMDA1MV0gQUNQSTogUENJOiBJbnRlcnJ1
cHQgbGluayBJTlRHIGNvbmZpZ3VyZWQgZm9yIElSUSAwClsgICAxMC4zMzAxNDFdIEFDUEk6
IFBDSTogSW50ZXJydXB0IGxpbmsgSU5USCBjb25maWd1cmVkIGZvciBJUlEgMApbICAgMTAu
MzMwMzcwXSBpb21tdTogRGVmYXVsdCBkb21haW4gdHlwZTogVHJhbnNsYXRlZCAKWyAgIDEw
LjMzMDM3Ml0gaW9tbXU6IERNQSBkb21haW4gVExCIGludmFsaWRhdGlvbiBwb2xpY3k6IGxh
enkgbW9kZSAKWyAgIDEwLjMzMDU1MV0gU0NTSSBzdWJzeXN0ZW0gaW5pdGlhbGl6ZWQKWyAg
IDEwLjMzMDYzNV0gbGliYXRhIHZlcnNpb24gMy4wMCBsb2FkZWQuClsgICAxMC4zMzA2Njhd
IEFDUEk6IGJ1cyB0eXBlIFVTQiByZWdpc3RlcmVkClsgICAxMC4zMzA2OTJdIHVzYmNvcmU6
IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiZnMKWyAgIDEwLjMzMDcwMl0g
dXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWIKWyAgIDEwLjMz
MDcxNV0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgZGV2aWNlIGRyaXZlciB1c2IKWyAgIDEw
LjMzMTA0NF0gUENJOiBVc2luZyBBQ1BJIGZvciBJUlEgcm91dGluZwpbICAgMTAuMzMyNjEz
XSBQQ0k6IHBjaV9jYWNoZV9saW5lX3NpemUgc2V0IHRvIDY0IGJ5dGVzClsgICAxMC4zMzI2
NjRdIGU4MjA6IHJlc2VydmUgUkFNIGJ1ZmZlciBbbWVtIDB4MDAwOWZjMDAtMHgwMDA5ZmZm
Zl0KWyAgIDEwLjMzMjY2N10gZTgyMDogcmVzZXJ2ZSBSQU0gYnVmZmVyIFttZW0gMHg1ZmUz
ZDAwMC0weDVmZmZmZmZmXQpbICAgMTAuMzMyNjY5XSBlODIwOiByZXNlcnZlIFJBTSBidWZm
ZXIgW21lbSAweDE3ZjAwMDAwMC0weDE3ZmZmZmZmZl0KWyAgIDEwLjMzMjcxNF0gaHBldDA6
IGF0IE1NSU8gMHhmZWQwMDAwMCwgSVJRcyAyLCA4LCAwClsgICAxMC4zMzI3MTldIGhwZXQw
OiAzIGNvbXBhcmF0b3JzLCAzMi1iaXQgMTQuMzE4MTgwIE1IeiBjb3VudGVyClsgICAxMC4z
MzM5OTJdIGNsb2Nrc291cmNlOiBTd2l0Y2hlZCB0byBjbG9ja3NvdXJjZSB0c2MtZWFybHkK
WyAgIDEwLjM1MDk4Ml0gVkZTOiBEaXNrIHF1b3RhcyBkcXVvdF82LjYuMApbICAgMTAuMzUx
MDEyXSBWRlM6IERxdW90LWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogNTEyIChvcmRlciAw
LCA0MDk2IGJ5dGVzKQpbICAgMTAuMzUxMTIzXSBwbnA6IFBuUCBBQ1BJIGluaXQKWyAgIDEw
LjM1MTQxNl0gc3lzdGVtIDAwOjAwOiBbbWVtIDB4ZmVjMTAwMDItMHhmZWMxMTAwMV0gY291
bGQgbm90IGJlIHJlc2VydmVkClsgICAxMC4zNTE3MTldIHBucDogUG5QIEFDUEk6IGZvdW5k
IDIgZGV2aWNlcwpbICAgMTAuMzU4ODcwXSBjbG9ja3NvdXJjZTogYWNwaV9wbTogbWFzazog
MHhmZmZmZmYgbWF4X2N5Y2xlczogMHhmZmZmZmYsIG1heF9pZGxlX25zOiAyMDg1NzAxMDI0
IG5zClsgICAxMC4zNTkwMDNdIE5FVDogUmVnaXN0ZXJlZCBQRl9JTkVUIHByb3RvY29sIGZh
bWlseQpbICAgMTAuMzU5MTUwXSBJUCBpZGVudHMgaGFzaCB0YWJsZSBlbnRyaWVzOiA2NTUz
NiAob3JkZXI6IDcsIDUyNDI4OCBieXRlcywgbGluZWFyKQpbICAgMTAuMzYwNzE2XSB0Y3Bf
bGlzdGVuX3BvcnRhZGRyX2hhc2ggaGFzaCB0YWJsZSBlbnRyaWVzOiAyMDQ4IChvcmRlcjog
MywgMzI3NjggYnl0ZXMsIGxpbmVhcikKWyAgIDEwLjM2MDczMV0gVGFibGUtcGVydHVyYiBo
YXNoIHRhYmxlIGVudHJpZXM6IDY1NTM2IChvcmRlcjogNiwgMjYyMTQ0IGJ5dGVzLCBsaW5l
YXIpClsgICAxMC4zNjA3MzldIFRDUCBlc3RhYmxpc2hlZCBoYXNoIHRhYmxlIGVudHJpZXM6
IDMyNzY4IChvcmRlcjogNiwgMjYyMTQ0IGJ5dGVzLCBsaW5lYXIpClsgICAxMC4zNjA4MDVd
IFRDUCBiaW5kIGhhc2ggdGFibGUgZW50cmllczogMzI3NjggKG9yZGVyOiA4LCAxMDQ4NTc2
IGJ5dGVzLCBsaW5lYXIpClsgICAxMC4zNjEyNTZdIFRDUDogSGFzaCB0YWJsZXMgY29uZmln
dXJlZCAoZXN0YWJsaXNoZWQgMzI3NjggYmluZCAzMjc2OCkKWyAgIDEwLjM2MTMyN10gVURQ
IGhhc2ggdGFibGUgZW50cmllczogMjA0OCAob3JkZXI6IDQsIDY1NTM2IGJ5dGVzLCBsaW5l
YXIpClsgICAxMC4zNjEzNDddIFVEUC1MaXRlIGhhc2ggdGFibGUgZW50cmllczogMjA0OCAo
b3JkZXI6IDQsIDY1NTM2IGJ5dGVzLCBsaW5lYXIpClsgICAxMC4zNjE0NTNdIE5FVDogUmVn
aXN0ZXJlZCBQRl9VTklYL1BGX0xPQ0FMIHByb3RvY29sIGZhbWlseQpbICAgMTAuMzYxNDg3
XSBwY2kgMDAwMDowMDoxNS4yOiBCQVIgMTU6IGFzc2lnbmVkIFttZW0gMHg4MDAwMDAwMC0w
eDgwMGZmZmZmIDY0Yml0IHByZWZdClsgICAxMC4zNjE0OTJdIHBjaSAwMDAwOjAwOjE1LjI6
IEJBUiAxMzogYXNzaWduZWQgW2lvICAweDIwMDAtMHgyZmZmXQpbICAgMTAuMzYxNDk2XSBw
Y2kgMDAwMDowMDoxNC40OiBQQ0kgYnJpZGdlIHRvIFtidXMgMDFdClsgICAxMC4zNjE1MDdd
IHBjaSAwMDAwOjAwOjE1LjA6IFBDSSBicmlkZ2UgdG8gW2J1cyAwMl0KWyAgIDEwLjM2MTUx
NV0gcGNpIDAwMDA6MDA6MTUuMTogUENJIGJyaWRnZSB0byBbYnVzIDAzXQpbICAgMTAuMzYx
NTE5XSBwY2kgMDAwMDowMDoxNS4xOiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGYwMDAwMDAw
LTB4ZjAwZmZmZmZdClsgICAxMC4zNjE1MjZdIHBjaSAwMDAwOjA0OjAwLjA6IEJBUiA0OiBh
c3NpZ25lZCBbbWVtIDB4ODAwMDAwMDAtMHg4MDAwM2ZmZiA2NGJpdCBwcmVmXQpbICAgMTAu
MzYxNTM5XSBwY2kgMDAwMDowNDowMC4wOiBCQVIgMjogYXNzaWduZWQgW21lbSAweDgwMDA0
MDAwLTB4ODAwMDRmZmYgNjRiaXQgcHJlZl0KWyAgIDEwLjM2MTU1MF0gcGNpIDAwMDA6MDQ6
MDAuMDogQkFSIDA6IGFzc2lnbmVkIFtpbyAgMHgyMDAwLTB4MjBmZl0KWyAgIDEwLjM2MTU1
NV0gcGNpIDAwMDA6MDA6MTUuMjogUENJIGJyaWRnZSB0byBbYnVzIDA0XQpbICAgMTAuMzYx
NTU3XSBwY2kgMDAwMDowMDoxNS4yOiAgIGJyaWRnZSB3aW5kb3cgW2lvICAweDIwMDAtMHgy
ZmZmXQpbICAgMTAuMzYxNTYyXSBwY2kgMDAwMDowMDoxNS4yOiAgIGJyaWRnZSB3aW5kb3cg
W21lbSAweDgwMDAwMDAwLTB4ODAwZmZmZmYgNjRiaXQgcHJlZl0KWyAgIDEwLjM2MTU2OV0g
cGNpX2J1cyAwMDAwOjAwOiByZXNvdXJjZSA0IFtpbyAgMHgwMDAwLTB4MGNmNyB3aW5kb3dd
ClsgICAxMC4zNjE1NzBdIHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgNSBbaW8gIDB4MGQw
MC0weGZmZmYgd2luZG93XQpbICAgMTAuMzYxNTcyXSBwY2lfYnVzIDAwMDA6MDA6IHJlc291
cmNlIDYgW21lbSAweDAwMGEwMDAwLTB4MDAwZGZmZmZdClsgICAxMC4zNjE1NzRdIHBjaV9i
dXMgMDAwMDowMDogcmVzb3VyY2UgNyBbbWVtIDB4ODAwMDAwMDAtMHhmZmZmZmZmZl0KWyAg
IDEwLjM2MTU3NV0gcGNpX2J1cyAwMDAwOjAxOiByZXNvdXJjZSA0IFtpbyAgMHgwMDAwLTB4
MGNmNyB3aW5kb3ddClsgICAxMC4zNjE1NzddIHBjaV9idXMgMDAwMDowMTogcmVzb3VyY2Ug
NSBbaW8gIDB4MGQwMC0weGZmZmYgd2luZG93XQpbICAgMTAuMzYxNTc4XSBwY2lfYnVzIDAw
MDA6MDE6IHJlc291cmNlIDYgW21lbSAweDAwMGEwMDAwLTB4MDAwZGZmZmZdClsgICAxMC4z
NjE1ODBdIHBjaV9idXMgMDAwMDowMTogcmVzb3VyY2UgNyBbbWVtIDB4ODAwMDAwMDAtMHhm
ZmZmZmZmZl0KWyAgIDEwLjM2MTU4Ml0gcGNpX2J1cyAwMDAwOjAzOiByZXNvdXJjZSAxIFtt
ZW0gMHhmMDAwMDAwMC0weGYwMGZmZmZmXQpbICAgMTAuMzYxNTgzXSBwY2lfYnVzIDAwMDA6
MDQ6IHJlc291cmNlIDAgW2lvICAweDIwMDAtMHgyZmZmXQpbICAgMTAuMzYxNTg0XSBwY2lf
YnVzIDAwMDA6MDQ6IHJlc291cmNlIDIgW21lbSAweDgwMDAwMDAwLTB4ODAwZmZmZmYgNjRi
aXQgcHJlZl0KWyAgIDEwLjM2MTY3M10gcGNpIDAwMDA6MDA6MDEuMTogRDAgcG93ZXIgc3Rh
dGUgZGVwZW5kcyBvbiAwMDAwOjAwOjAxLjAKWyAgIDEwLjM2MjM1MF0gcGNpIDAwMDA6MDA6
MTIuMjogUE1FIyBkb2VzIG5vdCB3b3JrIHVuZGVyIEQzLCBkaXNhYmxpbmcgaXQKWyAgIDEw
LjM2MjkxNV0gcGNpIDAwMDA6MDA6MTMuMjogUE1FIyBkb2VzIG5vdCB3b3JrIHVuZGVyIEQz
LCBkaXNhYmxpbmcgaXQKWyAgIDEwLjM2Mzc3N10gcGNpIDAwMDA6MDA6MTYuMjogUE1FIyBk
b2VzIG5vdCB3b3JrIHVuZGVyIEQzLCBkaXNhYmxpbmcgaXQKWyAgIDEwLjM2NDA0OF0gUENJ
OiBDTFMgNjQgYnl0ZXMsIGRlZmF1bHQgNjQKWyAgIDEwLjM2NDE2Ml0gcGNpIDAwMDA6MDA6
MDAuMjogQU1ELVZpOiBBcHBseWluZyBlcnJhdHVtIDc0NiB3b3JrYXJvdW5kClsgICAxMC4z
NjQyNDVdIHBjaSAwMDAwOjAwOjAxLjA6IEFkZGluZyB0byBpb21tdSBncm91cCAwClsgICAx
MC4zNjQyNjddIHBjaSAwMDAwOjAwOjAxLjE6IEFkZGluZyB0byBpb21tdSBncm91cCAwClsg
ICAxMC4zNjQyOTBdIHBjaSAwMDAwOjAwOjExLjA6IEFkZGluZyB0byBpb21tdSBncm91cCAx
ClsgICAxMC4zNjQzMjNdIHBjaSAwMDAwOjAwOjEyLjA6IEFkZGluZyB0byBpb21tdSBncm91
cCAyClsgICAxMC4zNjQzNDVdIHBjaSAwMDAwOjAwOjEyLjI6IEFkZGluZyB0byBpb21tdSBn
cm91cCAyClsgICAxMC4zNjQzNzddIHBjaSAwMDAwOjAwOjEzLjA6IEFkZGluZyB0byBpb21t
dSBncm91cCAzClsgICAxMC4zNjQzOTVdIHBjaSAwMDAwOjAwOjEzLjI6IEFkZGluZyB0byBp
b21tdSBncm91cCAzClsgICAxMC4zNjQ0MzNdIHBjaSAwMDAwOjAwOjE0LjA6IEFkZGluZyB0
byBpb21tdSBncm91cCA0ClsgICAxMC4zNjQ0NTBdIHBjaSAwMDAwOjAwOjE0LjI6IEFkZGlu
ZyB0byBpb21tdSBncm91cCA0ClsgICAxMC4zNjQ0NjhdIHBjaSAwMDAwOjAwOjE0LjM6IEFk
ZGluZyB0byBpb21tdSBncm91cCA0ClsgICAxMC4zNjQ0ODhdIHBjaSAwMDAwOjAwOjE0LjQ6
IEFkZGluZyB0byBpb21tdSBncm91cCA1ClsgICAxMC4zNjQ1MTJdIHBjaSAwMDAwOjAwOjE0
LjU6IEFkZGluZyB0byBpb21tdSBncm91cCA2ClsgICAxMC4zNjQ1NDldIHBjaSAwMDAwOjAw
OjE1LjA6IEFkZGluZyB0byBpb21tdSBncm91cCA3ClsgICAxMC4zNjQ1NjZdIHBjaSAwMDAw
OjAwOjE1LjE6IEFkZGluZyB0byBpb21tdSBncm91cCA3ClsgICAxMC4zNjQ1ODVdIHBjaSAw
MDAwOjAwOjE1LjI6IEFkZGluZyB0byBpb21tdSBncm91cCA3ClsgICAxMC4zNjQ2MTddIHBj
aSAwMDAwOjAwOjE2LjA6IEFkZGluZyB0byBpb21tdSBncm91cCA4ClsgICAxMC4zNjQ2MzRd
IHBjaSAwMDAwOjAwOjE2LjI6IEFkZGluZyB0byBpb21tdSBncm91cCA4ClsgICAxMC4zNjQ2
OTBdIHBjaSAwMDAwOjAwOjE4LjA6IEFkZGluZyB0byBpb21tdSBncm91cCA5ClsgICAxMC4z
NjQ3MDhdIHBjaSAwMDAwOjAwOjE4LjE6IEFkZGluZyB0byBpb21tdSBncm91cCA5ClsgICAx
MC4zNjQ3MjldIHBjaSAwMDAwOjAwOjE4LjI6IEFkZGluZyB0byBpb21tdSBncm91cCA5Clsg
ICAxMC4zNjQ3NDZdIHBjaSAwMDAwOjAwOjE4LjM6IEFkZGluZyB0byBpb21tdSBncm91cCA5
ClsgICAxMC4zNjQ3NjZdIHBjaSAwMDAwOjAwOjE4LjQ6IEFkZGluZyB0byBpb21tdSBncm91
cCA5ClsgICAxMC4zNjQ3ODRdIHBjaSAwMDAwOjAwOjE4LjU6IEFkZGluZyB0byBpb21tdSBn
cm91cCA5ClsgICAxMC4zNjQ3OTJdIHBjaSAwMDAwOjAzOjAwLjA6IEFkZGluZyB0byBpb21t
dSBncm91cCA3ClsgICAxMC4zNjQ4MDZdIHBjaSAwMDAwOjA0OjAwLjA6IEFkZGluZyB0byBp
b21tdSBncm91cCA3ClsgICAxMC4zNjY5MzddIHBjaSAwMDAwOjAwOjAwLjI6IEFNRC1WaTog
Rm91bmQgSU9NTVUgY2FwIDB4NDAKWyAgIDEwLjM2Njk0Ml0gQU1ELVZpOiBFeHRlbmRlZCBm
ZWF0dXJlcyAoMHg4MDAwMDA4NTMsIDB4MCk6IFByZUYgUFBSIEdUIElBClsgICAxMC4zNjY5
NDhdIEFNRC1WaTogSW50ZXJydXB0IHJlbWFwcGluZyBlbmFibGVkClsgICAxMC4zNjcxMzFd
IFBDSS1ETUE6IFVzaW5nIHNvZnR3YXJlIGJvdW5jZSBidWZmZXJpbmcgZm9yIElPIChTV0lP
VExCKQpbICAgMTAuMzY3MTMyXSBzb2Z0d2FyZSBJTyBUTEI6IG1hcHBlZCBbbWVtIDB4MDAw
MDAwMDA1YmUzZDAwMC0weDAwMDAwMDAwNWZlM2QwMDBdICg2NE1CKQpbICAgMTAuMzY3MTgy
XSBMVlQgb2Zmc2V0IDAgYXNzaWduZWQgZm9yIHZlY3RvciAweDQwMApbICAgMTAuMzY3MjAz
XSBwZXJmOiBBTUQgSUJTIGRldGVjdGVkICgweDAwMDAwMGZmKQpbICAgMTAuMzY3MjExXSBh
bWRfdW5jb3JlOiA0ICBhbWRfbmIgY291bnRlcnMgZGV0ZWN0ZWQKWyAgIDEwLjM2ODAxN10g
d29ya2luZ3NldDogdGltZXN0YW1wX2JpdHM9MzcgbWF4X29yZGVyPTIwIGJ1Y2tldF9vcmRl
cj0wClsgICAxMC4zNjgwNDddIHpidWQ6IGxvYWRlZApbICAgMTAuMzY4NTA4XSBORVQ6IFJl
Z2lzdGVyZWQgUEZfQUxHIHByb3RvY29sIGZhbWlseQpbICAgMTAuMzY4NTEyXSBLZXkgdHlw
ZSBhc3ltbWV0cmljIHJlZ2lzdGVyZWQKWyAgIDEwLjM2ODUxNF0gQXN5bW1ldHJpYyBrZXkg
cGFyc2VyICd4NTA5JyByZWdpc3RlcmVkClsgICAxMC4zNjg3ODNdIGFsZzogc2VsZi10ZXN0
cyBkaXNhYmxlZApbICAgMTAuMzY4ODc2XSBCbG9jayBsYXllciBTQ1NJIGdlbmVyaWMgKGJz
ZykgZHJpdmVyIHZlcnNpb24gMC40IGxvYWRlZCAobWFqb3IgMjUxKQpbICAgMTAuMzY5MzIw
XSBpbyBzY2hlZHVsZXIgbXEtZGVhZGxpbmUgcmVnaXN0ZXJlZApbICAgMTAuMzY5MzIyXSBp
byBzY2hlZHVsZXIga3liZXIgcmVnaXN0ZXJlZApbICAgMTAuMzcwNDQ3XSBwY2llcG9ydCAw
MDAwOjAwOjE1LjA6IFBNRTogU2lnbmFsaW5nIHdpdGggSVJRIDI1ClsgICAxMC4zNzA2MDhd
IHBjaWVwb3J0IDAwMDA6MDA6MTUuMTogUE1FOiBTaWduYWxpbmcgd2l0aCBJUlEgMjYKWyAg
IDEwLjM3MDY4MF0gcGNpZXBvcnQgMDAwMDowMDoxNS4yOiBlbmFibGluZyBkZXZpY2UgKDAw
MDAgLT4gMDAwMykKWyAgIDEwLjM3MDg4M10gcGNpZXBvcnQgMDAwMDowMDoxNS4yOiBQTUU6
IFNpZ25hbGluZyB3aXRoIElSUSAyNwpbICAgMTAuMzcxMTM4XSBpbnB1dDogUG93ZXIgQnV0
dG9uIGFzIC9kZXZpY2VzL0xOWFNZU1RNOjAwL0xOWFBXUkJOOjAwL2lucHV0L2lucHV0MApb
ICAgMTAuMzcxMjAxXSBBQ1BJOiBidXR0b246IFBvd2VyIEJ1dHRvbiBbUFdSRl0KWyAgIDEw
LjM3MTI1M10gQUNQSTogXF9TQl8uUDAwMDogRm91bmQgMiBpZGxlIHN0YXRlcwpbICAgMTAu
MzcxMzY5XSBBQ1BJOiBcX1NCXy5QMDAxOiBGb3VuZCAyIGlkbGUgc3RhdGVzClsgICAxMC4z
NzIyNTVdIHRoZXJtYWwgTE5YVEhFUk06MDA6IHJlZ2lzdGVyZWQgYXMgdGhlcm1hbF96b25l
MApbICAgMTAuMzcyMjU4XSBBQ1BJOiB0aGVybWFsOiBUaGVybWFsIFpvbmUgW1RaMDBdICgw
IEMpClsgICAxMC4zNzI1NzNdIE5vbi12b2xhdGlsZSBtZW1vcnkgZHJpdmVyIHYxLjMKWyAg
IDEwLjM3MjY0OF0gQU1ELVZpOiBBTUQgSU9NTVV2MiBsb2FkZWQgYW5kIGluaXRpYWxpemVk
ClsgICAxMC4zNzI2NjddIEFDUEk6IGJ1cyB0eXBlIGRybV9jb25uZWN0b3IgcmVnaXN0ZXJl
ZApbICAgMTAuMzcyNzk5XSBhaGNpIDAwMDA6MDA6MTEuMDogdmVyc2lvbiAzLjAKWyAgIDEw
LjM3MzEwOV0gYWhjaSAwMDAwOjAwOjExLjA6IEFIQ0kgMDAwMS4wMzAwIDMyIHNsb3RzIDgg
cG9ydHMgNiBHYnBzIDB4NDAgaW1wbCBTQVRBIG1vZGUKWyAgIDEwLjM3MzExM10gYWhjaSAw
MDAwOjAwOjExLjA6IGZsYWdzOiA2NGJpdCBuY3Egc250ZiBpbGNrIGxlZCBjbG8gcGlvIApb
ICAgMTAuMzc0MzMxXSBzY3NpIGhvc3QwOiBhaGNpClsgICAxMC4zNzQ1MzFdIHNjc2kgaG9z
dDE6IGFoY2kKWyAgIDEwLjM3NDcwNF0gc2NzaSBob3N0MjogYWhjaQpbICAgMTAuMzc0ODk2
XSBzY3NpIGhvc3QzOiBhaGNpClsgICAxMC4zNzUwNzZdIHNjc2kgaG9zdDQ6IGFoY2kKWyAg
IDEwLjM3NTI2NV0gc2NzaSBob3N0NTogYWhjaQpbICAgMTAuMzc1NDQ1XSBzY3NpIGhvc3Q2
OiBhaGNpClsgICAxMC4zNzU2MjJdIHNjc2kgaG9zdDc6IGFoY2kKWyAgIDEwLjM3NTcxNF0g
YXRhMTogRFVNTVkKWyAgIDEwLjM3NTcxNV0gYXRhMjogRFVNTVkKWyAgIDEwLjM3NTcxNl0g
YXRhMzogRFVNTVkKWyAgIDEwLjM3NTcxN10gYXRhNDogRFVNTVkKWyAgIDEwLjM3NTcxOF0g
YXRhNTogRFVNTVkKWyAgIDEwLjM3NTcxOF0gYXRhNjogRFVNTVkKWyAgIDEwLjM3NTcyMF0g
YXRhNzogU0FUQSBtYXggVURNQS8xMzMgYWJhciBtMjA0OEAweGYwMWNjMDAwIHBvcnQgMHhm
MDFjYzQwMCBpcnEgMTkKWyAgIDEwLjM3NTcyMl0gYXRhODogRFVNTVkKWyAgIDEwLjM3NTk4
Ml0gaTgwNDI6IFBOUDogTm8gUFMvMiBjb250cm9sbGVyIGZvdW5kLgpbICAgMTAuMzc1OTgz
XSBpODA0MjogUHJvYmluZyBwb3J0cyBkaXJlY3RseS4KWyAgIDEwLjM3ODUyOV0gc2VyaW86
IGk4MDQyIEtCRCBwb3J0IGF0IDB4NjAsMHg2NCBpcnEgMQpbICAgMTAuMzc4NTk1XSBzZXJp
bzogaTgwNDIgQVVYIHBvcnQgYXQgMHg2MCwweDY0IGlycSAxMgpbICAgMTAuMzc4NzA4XSBt
b3VzZWRldjogUFMvMiBtb3VzZSBkZXZpY2UgY29tbW9uIGZvciBhbGwgbWljZQpbICAgMTAu
Mzc4NzYwXSBydGNfY21vcyAwMDowMTogUlRDIGNhbiB3YWtlIGZyb20gUzQKWyAgIDEwLjM3
OTAwMl0gcnRjX2Ntb3MgMDA6MDE6IHJlZ2lzdGVyZWQgYXMgcnRjMApbICAgMTAuMzc5MDI2
XSBydGNfY21vcyAwMDowMTogc2V0dGluZyBzeXN0ZW0gY2xvY2sgdG8gMjAyMy0wNC0xOVQx
NjoyNToyNCBVVEMgKDE2ODE5MjE1MjQpClsgICAxMC4zNzkwNjddIHJ0Y19jbW9zIDAwOjAx
OiBhbGFybXMgdXAgdG8gb25lIGRheSwgeTNrLCAxMTQgYnl0ZXMgbnZyYW0sIGhwZXQgaXJx
cwpbICAgMTAuMzc5MTAxXSBkZXZpY2UtbWFwcGVyOiB1ZXZlbnQ6IHZlcnNpb24gMS4wLjMK
WyAgIDEwLjM3OTE2N10gZGV2aWNlLW1hcHBlcjogaW9jdGw6IDQuNDcuMC1pb2N0bCAoMjAy
Mi0wNy0yOCkgaW5pdGlhbGlzZWQ6IGRtLWRldmVsQHJlZGhhdC5jb20KWyAgIDEwLjM3OTMy
M10gaGlkOiByYXcgSElEIGV2ZW50cyBkcml2ZXIgKEMpIEppcmkgS29zaW5hClsgICAxMC4z
NzkzNjNdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiaGlk
ClsgICAxMC4zNzkzNjRdIHVzYmhpZDogVVNCIEhJRCBjb3JlIGRyaXZlcgpbICAgMTAuMzc5
NDU3XSBJbml0aWFsaXppbmcgWEZSTSBuZXRsaW5rIHNvY2tldApbICAgMTAuMzc5NDY1XSBO
RVQ6IFJlZ2lzdGVyZWQgUEZfUEFDS0VUIHByb3RvY29sIGZhbWlseQpbICAgMTAuMzc5NDY3
XSB4ODYvcG06IGZhbWlseSAweDE1IGNwdSBkZXRlY3RlZCwgTVNSIHNhdmluZyBpcyBuZWVk
ZWQgZHVyaW5nIHN1c3BlbmRpbmcuClsgICAxMC4zNzk2MzNdIG1pY3JvY29kZTogQ1BVMDog
cGF0Y2hfbGV2ZWw9MHgwNjAwMTExZgpbICAgMTAuMzc5NjQyXSBtaWNyb2NvZGU6IE1pY3Jv
Y29kZSBVcGRhdGUgRHJpdmVyOiB2Mi4yLgpbICAgMTAuMzc5NjQ2XSBJUEkgc2hvcnRoYW5k
IGJyb2FkY2FzdDogZW5hYmxlZApbICAgMTAuMzc5NjU0XSBBVlggdmVyc2lvbiBvZiBnY21f
ZW5jL2RlYyBlbmdhZ2VkLgpbICAgMTAuMzc5NjcwXSBBRVMgQ1RSIG1vZGUgYnk4IG9wdGlt
aXphdGlvbiBlbmFibGVkClsgICAxMC4zODM2NTBdIHNjaGVkX2Nsb2NrOiBNYXJraW5nIHN0
YWJsZSAoMTAyNjQwMDY2NjMsIDExNjkwNTc2MiktPigxMDM4MzQxNzIwNywgLTI1MDQ3ODIp
ClsgICAxMC4zODM4MzZdIHJlZ2lzdGVyZWQgdGFza3N0YXRzIHZlcnNpb24gMQpbICAgMTAu
Mzg0MDc5XSB6c3dhcDogbG9hZGVkIHVzaW5nIHBvb2wgbHpvL3pidWQKWyAgIDEwLjM4ODI3
Nl0ga21lbWxlYWs6IEtlcm5lbCBtZW1vcnkgbGVhayBkZXRlY3RvciBpbml0aWFsaXplZCAo
bWVtIHBvb2wgYXZhaWxhYmxlOiAxNTY3OSkKWyAgIDEwLjM4ODI4Ml0gZGVidWdfdm1fcGd0
YWJsZTogW2RlYnVnX3ZtX3BndGFibGUgICAgICAgICBdOiBWYWxpZGF0aW5nIGFyY2hpdGVj
dHVyZSBwYWdlIHRhYmxlIGhlbHBlcnMKWyAgIDEwLjM5MTQ4N10ga21lbWxlYWs6IEF1dG9t
YXRpYyBtZW1vcnkgc2Nhbm5pbmcgdGhyZWFkIHN0YXJ0ZWQKWyAgIDEwLjM5MzMyOV0gS2V5
IHR5cGUgZW5jcnlwdGVkIHJlZ2lzdGVyZWQKWyAgIDEwLjM5NjA5NV0gUE06ICAgTWFnaWMg
bnVtYmVyOiAzOjQ4Nzo0MzkKWyAgIDEwLjQ4ODQ0OF0gYXRhNzogU0FUQSBsaW5rIHVwIDYu
MCBHYnBzIChTU3RhdHVzIDEzMyBTQ29udHJvbCAzMDApClsgICAxMC40ODg2MDNdIGF0YTcu
MDA6IEFUQS05OiBTYW5EaXNrIFNEU1NEUDA2NEcsIDIuMC4wLCBtYXggVURNQS8xMzMKWyAg
IDEwLjQ4ODYwNl0gYXRhNy4wMDogMTI1MDQ1NDI0IHNlY3RvcnMsIG11bHRpIDE6IExCQTQ4
IE5DUSAoZGVwdGggMzIpClsgICAxMC40ODg4MDVdIGF0YTcuMDA6IGNvbmZpZ3VyZWQgZm9y
IFVETUEvMTMzClsgICAxMC40ODkwMDNdIHNjc2kgNjowOjA6MDogRGlyZWN0LUFjY2VzcyAg
ICAgQVRBICAgICAgU2FuRGlzayBTRFNTRFAwNiAwICAgIFBROiAwIEFOU0k6IDUKWyAgIDEw
LjQ5MDA2MV0gc2QgNjowOjA6MDogW3NkYV0gMTI1MDQ1NDI0IDUxMi1ieXRlIGxvZ2ljYWwg
YmxvY2tzOiAoNjQuMCBHQi81OS42IEdpQikKWyAgIDEwLjQ5MDA3OF0gc2QgNjowOjA6MDog
W3NkYV0gV3JpdGUgUHJvdGVjdCBpcyBvZmYKWyAgIDEwLjQ5MDA4M10gc2QgNjowOjA6MDog
W3NkYV0gTW9kZSBTZW5zZTogMDAgM2EgMDAgMDAKWyAgIDEwLjQ5MDEwNV0gc2QgNjowOjA6
MDogW3NkYV0gV3JpdGUgY2FjaGU6IGVuYWJsZWQsIHJlYWQgY2FjaGU6IGVuYWJsZWQsIGRv
ZXNuJ3Qgc3VwcG9ydCBEUE8gb3IgRlVBClsgICAxMC40OTAxMzZdIHNkIDY6MDowOjA6IFtz
ZGFdIFByZWZlcnJlZCBtaW5pbXVtIEkvTyBzaXplIDUxMiBieXRlcwpbICAgMTAuNDkxNTEw
XSAgc2RhOiBzZGExIHNkYTIgc2RhMwpbICAgMTAuNDkxOTUwXSBzZCA2OjA6MDowOiBbc2Rh
XSBBdHRhY2hlZCBTQ1NJIGRpc2sKWyAgIDEwLjUwNDA0N10gRVhUNC1mcyAoc2RhMyk6IG1v
dW50ZWQgZmlsZXN5c3RlbSBmZTI5ZTBkYy02MzAzLTQ0MDEtOTg3Yy04NDcyYmMxYjk1MTYg
d2l0aCBvcmRlcmVkIGRhdGEgbW9kZS4gUXVvdGEgbW9kZTogbm9uZS4KWyAgIDEwLjUwNDA5
MF0gVkZTOiBNb3VudGVkIHJvb3QgKGV4dDQgZmlsZXN5c3RlbSkgb24gZGV2aWNlIDg6My4K
WyAgIDEwLjUwNjA0N10gZGV2dG1wZnM6IG1vdW50ZWQKWyAgIDEwLjUxMDAxMl0gRnJlZWlu
ZyB1bnVzZWQga2VybmVsIGltYWdlIChpbml0bWVtKSBtZW1vcnk6IDI5MDhLClsgICAxMC41
MTcwNjBdIFdyaXRlIHByb3RlY3RpbmcgdGhlIGtlcm5lbCByZWFkLW9ubHkgZGF0YTogMjA0
ODBrClsgICAxMC41MTczMTldIEZyZWVpbmcgdW51c2VkIGtlcm5lbCBpbWFnZSAocm9kYXRh
L2RhdGEgZ2FwKSBtZW1vcnk6IDgzNksKWyAgIDEwLjU1NDM4NV0geDg2L21tOiBDaGVja2Vk
IFcrWCBtYXBwaW5nczogcGFzc2VkLCBubyBXK1ggcGFnZXMgZm91bmQuClsgICAxMC41NTQz
OTFdIHJvZGF0YV90ZXN0OiBhbGwgdGVzdHMgd2VyZSBzdWNjZXNzZnVsClsgICAxMC41NTQ0
MTNdIFJ1biAvc2Jpbi9pbml0IGFzIGluaXQgcHJvY2VzcwpbICAgMTAuNTU0NDE1XSAgIHdp
dGggYXJndW1lbnRzOgpbICAgMTAuNTU0NDE2XSAgICAgL3NiaW4vaW5pdApbICAgMTAuNTU0
NDE3XSAgICAgbm9pc2FwbnAKWyAgIDEwLjU1NDQxOF0gICB3aXRoIGVudmlyb25tZW50Ogpb
ICAgMTAuNTU0NDE5XSAgICAgSE9NRT0vClsgICAxMC41NTQ0MjBdICAgICBURVJNPWxpbnV4
ClsgICAxMC41NTQ0MjBdICAgICBCT09UX0lNQUdFPS9ib290L3ZtbGludXotNi4zLjAtcmMz
LTAwMDQ1LWc2NGRlNGRmOWM4MGIKWyAgIDEwLjc1Nzg5NV0gc3lzdGVtZFsxXTogSW5zZXJ0
ZWQgbW9kdWxlICdhdXRvZnM0JwpbICAgMTAuNzg1ODg2XSBORVQ6IFJlZ2lzdGVyZWQgUEZf
SU5FVDYgcHJvdG9jb2wgZmFtaWx5ClsgICAxMC43ODY3MzFdIFNlZ21lbnQgUm91dGluZyB3
aXRoIElQdjYKWyAgIDEwLjc4Njc2MF0gSW4tc2l0dSBPQU0gKElPQU0pIHdpdGggSVB2Ngpb
ICAgMTAuODEyNDYwXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kIDI1Mi42LTEgcnVubmluZyBpbiBz
eXN0ZW0gbW9kZSAoK1BBTSArQVVESVQgK1NFTElOVVggK0FQUEFSTU9SICtJTUEgK1NNQUNL
ICtTRUNDT01QICtHQ1JZUFQgLUdOVVRMUyArT1BFTlNTTCArQUNMICtCTEtJRCArQ1VSTCAr
RUxGVVRJTFMgK0ZJRE8yICtJRE4yIC1JRE4gK0lQVEMgK0tNT0QgK0xJQkNSWVBUU0VUVVAg
K0xJQkZESVNLICtQQ1JFMiAtUFdRVUFMSVRZICtQMTFLSVQgK1FSRU5DT0RFICtUUE0yICtC
WklQMiArTFo0ICtYWiArWkxJQiArWlNURCAtQlBGX0ZSQU1FV09SSyAtWEtCQ09NTU9OICtV
VE1QICtTWVNWSU5JVCBkZWZhdWx0LWhpZXJhcmNoeT11bmlmaWVkKQpbICAgMTAuODEyNDcw
XSBzeXN0ZW1kWzFdOiBEZXRlY3RlZCBhcmNoaXRlY3R1cmUgeDg2LTY0LgpbICAgMTAuODE2
ODEwXSBzeXN0ZW1kWzFdOiBIb3N0bmFtZSBzZXQgdG8gPGtvZGk+LgpbICAgMTEuMTEzMjA2
XSBzeXN0ZW1kWzFdOiBRdWV1ZWQgc3RhcnQgam9iIGZvciBkZWZhdWx0IHRhcmdldCBncmFw
aGljYWwudGFyZ2V0LgpbICAgMTEuMTIzNTUwXSBzeXN0ZW1kWzFdOiBDcmVhdGVkIHNsaWNl
IHN5c3RlbS1nZXR0eS5zbGljZSAtIFNsaWNlIC9zeXN0ZW0vZ2V0dHkuClsgICAxMS4xMjQ2
MzZdIHN5c3RlbWRbMV06IENyZWF0ZWQgc2xpY2Ugc3lzdGVtLW1vZHByb2JlLnNsaWNlIC0g
U2xpY2UgL3N5c3RlbS9tb2Rwcm9iZS4KWyAgIDExLjEyNTQ3OF0gc3lzdGVtZFsxXTogQ3Jl
YXRlZCBzbGljZSB1c2VyLnNsaWNlIC0gVXNlciBhbmQgU2Vzc2lvbiBTbGljZS4KWyAgIDEx
LjEyNTY1M10gc3lzdGVtZFsxXTogU3RhcnRlZCBzeXN0ZW1kLWFzay1wYXNzd29yZC1jb25z
b2xlLnBhdGggLSBEaXNwYXRjaCBQYXNzd29yZCBSZXF1ZXN0cyB0byBDb25zb2xlIERpcmVj
dG9yeSBXYXRjaC4KWyAgIDExLjEyNTc4MF0gc3lzdGVtZFsxXTogU3RhcnRlZCBzeXN0ZW1k
LWFzay1wYXNzd29yZC13YWxsLnBhdGggLSBGb3J3YXJkIFBhc3N3b3JkIFJlcXVlc3RzIHRv
IFdhbGwgRGlyZWN0b3J5IFdhdGNoLgpbICAgMTEuMTI2MjA0XSBzeXN0ZW1kWzFdOiBTZXQg
dXAgYXV0b21vdW50IHByb2Mtc3lzLWZzLWJpbmZtdF9taXNjLmF1dG9tb3VudCAtIEFyYml0
cmFyeSBFeGVjdXRhYmxlIEZpbGUgRm9ybWF0cyBGaWxlIFN5c3RlbSBBdXRvbW91bnQgUG9p
bnQuClsgICAxMS4xMjYyNDBdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IGNyeXB0c2V0
dXAudGFyZ2V0IC0gTG9jYWwgRW5jcnlwdGVkIFZvbHVtZXMuClsgICAxMS4xMjYyODRdIHN5
c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IGludGVncml0eXNldHVwLnRhcmdldCAtIExvY2Fs
IEludGVncml0eSBQcm90ZWN0ZWQgVm9sdW1lcy4KWyAgIDExLjEyNjMzMV0gc3lzdGVtZFsx
XTogUmVhY2hlZCB0YXJnZXQgcGF0aHMudGFyZ2V0IC0gUGF0aCBVbml0cy4KWyAgIDExLjEy
NjM2Ml0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgcmVtb3RlLWZzLnRhcmdldCAtIFJl
bW90ZSBGaWxlIFN5c3RlbXMuClsgICAxMS4xMjYzODldIHN5c3RlbWRbMV06IFJlYWNoZWQg
dGFyZ2V0IHNsaWNlcy50YXJnZXQgLSBTbGljZSBVbml0cy4KWyAgIDExLjEyNjQyN10gc3lz
dGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgc3dhcC50YXJnZXQgLSBTd2Fwcy4KWyAgIDExLjEy
NjQ2OF0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgdmVyaXR5c2V0dXAudGFyZ2V0IC0g
TG9jYWwgVmVyaXR5IFByb3RlY3RlZCBWb2x1bWVzLgpbICAgMTEuMTI4OTYyXSBzeXN0ZW1k
WzFdOiBMaXN0ZW5pbmcgb24gc3lzdGVtZC1jb3JlZHVtcC5zb2NrZXQgLSBQcm9jZXNzIENv
cmUgRHVtcCBTb2NrZXQuClsgICAxMS4xMjkyMjFdIHN5c3RlbWRbMV06IExpc3RlbmluZyBv
biBzeXN0ZW1kLWZzY2tkLnNvY2tldCAtIGZzY2sgdG8gZnNja2QgY29tbXVuaWNhdGlvbiBT
b2NrZXQuClsgICAxMS4xMjkzODFdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBzeXN0ZW1k
LWluaXRjdGwuc29ja2V0IC0gaW5pdGN0bCBDb21wYXRpYmlsaXR5IE5hbWVkIFBpcGUuClsg
ICAxMS4xMjk2OTBdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBzeXN0ZW1kLWpvdXJuYWxk
LWF1ZGl0LnNvY2tldCAtIEpvdXJuYWwgQXVkaXQgU29ja2V0LgpbICAgMTEuMTI5OTYwXSBz
eXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gc3lzdGVtZC1qb3VybmFsZC1kZXYtbG9nLnNvY2tl
dCAtIEpvdXJuYWwgU29ja2V0ICgvZGV2L2xvZykuClsgICAxMS4xMzAyMzldIHN5c3RlbWRb
MV06IExpc3RlbmluZyBvbiBzeXN0ZW1kLWpvdXJuYWxkLnNvY2tldCAtIEpvdXJuYWwgU29j
a2V0LgpbICAgMTEuMTMwNDg4XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gc3lzdGVtZC1u
ZXR3b3JrZC5zb2NrZXQgLSBOZXR3b3JrIFNlcnZpY2UgTmV0bGluayBTb2NrZXQuClsgICAx
MS4xMzEzMDhdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBzeXN0ZW1kLXVkZXZkLWNvbnRy
b2wuc29ja2V0IC0gdWRldiBDb250cm9sIFNvY2tldC4KWyAgIDExLjEzMTU4MF0gc3lzdGVt
ZFsxXTogTGlzdGVuaW5nIG9uIHN5c3RlbWQtdWRldmQta2VybmVsLnNvY2tldCAtIHVkZXYg
S2VybmVsIFNvY2tldC4KWyAgIDExLjEzNDM1OF0gc3lzdGVtZFsxXTogTW91bnRpbmcgZGV2
LWh1Z2VwYWdlcy5tb3VudCAtIEh1Z2UgUGFnZXMgRmlsZSBTeXN0ZW0uLi4KWyAgIDExLjEz
NjgwN10gc3lzdGVtZFsxXTogTW91bnRpbmcgZGV2LW1xdWV1ZS5tb3VudCAtIFBPU0lYIE1l
c3NhZ2UgUXVldWUgRmlsZSBTeXN0ZW0uLi4KWyAgIDExLjE0MDczM10gc3lzdGVtZFsxXTog
TW91bnRpbmcgc3lzLWtlcm5lbC1kZWJ1Zy5tb3VudCAtIEtlcm5lbCBEZWJ1ZyBGaWxlIFN5
c3RlbS4uLgpbICAgMTEuMTUyNjkzXSBzeXN0ZW1kWzFdOiBNb3VudGluZyBzeXMta2VybmVs
LXRyYWNpbmcubW91bnQgLSBLZXJuZWwgVHJhY2UgRmlsZSBTeXN0ZW0uLi4KWyAgIDExLjE1
OTIxNV0gc3lzdGVtZFsxXTogU3RhcnRpbmcga21vZC1zdGF0aWMtbm9kZXMuc2VydmljZSAt
IENyZWF0ZSBMaXN0IG9mIFN0YXRpYyBEZXZpY2UgTm9kZXMuLi4KWyAgIDExLjE2Nzk2OV0g
c3lzdGVtZFsxXTogU3RhcnRpbmcgbW9kcHJvYmVAY29uZmlnZnMuc2VydmljZSAtIExvYWQg
S2VybmVsIE1vZHVsZSBjb25maWdmcy4uLgpbICAgMTEuMTg1MjY1XSBzeXN0ZW1kWzFdOiBT
dGFydGluZyBtb2Rwcm9iZUBkbV9tb2Quc2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBk
bV9tb2QuLi4KWyAgIDExLjE4NzkwNF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgbW9kcHJvYmVA
ZHJtLnNlcnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1bGUgZHJtLi4uClsgICAxMS4xOTQ5ODFd
IHN5c3RlbWRbMV06IFN0YXJ0aW5nIG1vZHByb2JlQGVmaV9wc3RvcmUuc2VydmljZSAtIExv
YWQgS2VybmVsIE1vZHVsZSBlZmlfcHN0b3JlLi4uClsgICAxMS4yMDIyMDBdIHN5c3RlbWRb
MV06IFN0YXJ0aW5nIG1vZHByb2JlQGZ1c2Uuc2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVs
ZSBmdXNlLi4uClsgICAxMS4yMTMyMzVdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIG1vZHByb2Jl
QGxvb3Auc2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBsb29wLi4uClsgICAxMS4yMTMz
MDNdIHN5c3RlbWRbMV06IHN5c3RlbWQtZmlyc3Rib290LnNlcnZpY2UgLSBGaXJzdCBCb290
IFdpemFyZCB3YXMgc2tpcHBlZCBiZWNhdXNlIG9mIGFuIHVubWV0IGNvbmRpdGlvbiBjaGVj
ayAoQ29uZGl0aW9uRmlyc3RCb290PXllcykuClsgICAxMS4yMTMzNzJdIHN5c3RlbWRbMV06
IHN5c3RlbWQtZnNjay1yb290LnNlcnZpY2UgLSBGaWxlIFN5c3RlbSBDaGVjayBvbiBSb290
IERldmljZSB3YXMgc2tpcHBlZCBiZWNhdXNlIG9mIGFuIHVubWV0IGNvbmRpdGlvbiBjaGVj
ayAoQ29uZGl0aW9uUGF0aElzUmVhZFdyaXRlPSEvKS4KWyAgIDExLjIxMzQwOV0gc3lzdGVt
ZFsxXTogUmVhY2hlZCB0YXJnZXQgbG9jYWwtZnMudGFyZ2V0IC0gTG9jYWwgRmlsZSBTeXN0
ZW1zLgpbICAgMTEuMjEzNDcxXSBzeXN0ZW1kWzFdOiBhcHBhcm1vci5zZXJ2aWNlIC0gTG9h
ZCBBcHBBcm1vciBwcm9maWxlcyB3YXMgc2tpcHBlZCBiZWNhdXNlIG9mIGFuIHVubWV0IGNv
bmRpdGlvbiBjaGVjayAoQ29uZGl0aW9uU2VjdXJpdHk9YXBwYXJtb3IpLgpbICAgMTEuMjE3
Mzg3XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBzeXN0ZW1kLWJpbmZtdC5zZXJ2aWNlIC0gU2V0
IFVwIEFkZGl0aW9uYWwgQmluYXJ5IEZvcm1hdHMuLi4KWyAgIDExLjIyNTYxMV0gbG9vcDog
bW9kdWxlIGxvYWRlZApbICAgMTEuMjI3OTMzXSBmdXNlOiBpbml0IChBUEkgdmVyc2lvbiA3
LjM4KQpbICAgMTEuMjQxMzk0XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBzeXN0ZW1kLWpvdXJu
YWxkLnNlcnZpY2UgLSBKb3VybmFsIFNlcnZpY2UuLi4KWyAgIDExLjI0NDAwOF0gc3lzdGVt
ZFsxXTogU3RhcnRpbmcgc3lzdGVtZC1yYW5kb20tc2VlZC5zZXJ2aWNlIC0gTG9hZC9TYXZl
IFJhbmRvbSBTZWVkLi4uClsgICAxMS4yNjUyMTNdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHN5
c3RlbWQtc3lzY3RsLnNlcnZpY2UgLSBBcHBseSBLZXJuZWwgVmFyaWFibGVzLi4uClsgICAx
MS4yNjc4NzNdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHN5c3RlbWQtc3lzdXNlcnMuc2Vydmlj
ZSAtIENyZWF0ZSBTeXN0ZW0gVXNlcnMuLi4KWyAgIDExLjI5NDA1OV0gc3lzdGVtZFsxXTog
U3RhcnRpbmcgc3lzdGVtZC11ZGV2LXRyaWdnZXIuc2VydmljZSAtIENvbGRwbHVnIEFsbCB1
ZGV2IERldmljZXMuLi4KWyAgIDExLjMxNjA5Nl0gc3lzdGVtZFsxXTogTW91bnRlZCBkZXYt
aHVnZXBhZ2VzLm1vdW50IC0gSHVnZSBQYWdlcyBGaWxlIFN5c3RlbS4KWyAgIDExLjMxNjI5
M10gc3lzdGVtZFsxXTogTW91bnRlZCBkZXYtbXF1ZXVlLm1vdW50IC0gUE9TSVggTWVzc2Fn
ZSBRdWV1ZSBGaWxlIFN5c3RlbS4KWyAgIDExLjMxNjQ2N10gc3lzdGVtZFsxXTogTW91bnRl
ZCBzeXMta2VybmVsLWRlYnVnLm1vdW50IC0gS2VybmVsIERlYnVnIEZpbGUgU3lzdGVtLgpb
ICAgMTEuMzE2NjQwXSBzeXN0ZW1kWzFdOiBNb3VudGVkIHN5cy1rZXJuZWwtdHJhY2luZy5t
b3VudCAtIEtlcm5lbCBUcmFjZSBGaWxlIFN5c3RlbS4KWyAgIDExLjMzNzk5NF0gc3lzdGVt
ZFsxXTogRmluaXNoZWQga21vZC1zdGF0aWMtbm9kZXMuc2VydmljZSAtIENyZWF0ZSBMaXN0
IG9mIFN0YXRpYyBEZXZpY2UgTm9kZXMuClsgICAxMS4zMzg3OTBdIHN5c3RlbWRbMV06IG1v
ZHByb2JlQGNvbmZpZ2ZzLnNlcnZpY2U6IERlYWN0aXZhdGVkIHN1Y2Nlc3NmdWxseS4KWyAg
IDExLjM0MTYyMl0gc3lzdGVtZFsxXTogRmluaXNoZWQgbW9kcHJvYmVAY29uZmlnZnMuc2Vy
dmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBjb25maWdmcy4KWyAgIDExLjM0MjI4NV0gc3lz
dGVtZFsxXTogbW9kcHJvYmVAZG1fbW9kLnNlcnZpY2U6IERlYWN0aXZhdGVkIHN1Y2Nlc3Nm
dWxseS4KWyAgIDExLjM0MjU0Nl0gc3lzdGVtZFsxXTogRmluaXNoZWQgbW9kcHJvYmVAZG1f
bW9kLnNlcnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1bGUgZG1fbW9kLgpbICAgMTEuMzQzMTIy
XSBzeXN0ZW1kWzFdOiBtb2Rwcm9iZUBkcm0uc2VydmljZTogRGVhY3RpdmF0ZWQgc3VjY2Vz
c2Z1bGx5LgpbICAgMTEuMzQzMzY0XSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBtb2Rwcm9iZUBk
cm0uc2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBkcm0uClsgICAxMS4zNDM5MjhdIHN5
c3RlbWRbMV06IG1vZHByb2JlQGVmaV9wc3RvcmUuc2VydmljZTogRGVhY3RpdmF0ZWQgc3Vj
Y2Vzc2Z1bGx5LgpbICAgMTEuMzQ0MTc2XSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBtb2Rwcm9i
ZUBlZmlfcHN0b3JlLnNlcnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1bGUgZWZpX3BzdG9yZS4K
WyAgIDExLjM0NDczNF0gc3lzdGVtZFsxXTogbW9kcHJvYmVAZnVzZS5zZXJ2aWNlOiBEZWFj
dGl2YXRlZCBzdWNjZXNzZnVsbHkuClsgICAxMS4zNTg5MDVdIHN5c3RlbWRbMV06IEZpbmlz
aGVkIG1vZHByb2JlQGZ1c2Uuc2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBmdXNlLgpb
ICAgMTEuMzU5NjE2XSBzeXN0ZW1kWzFdOiBtb2Rwcm9iZUBsb29wLnNlcnZpY2U6IERlYWN0
aXZhdGVkIHN1Y2Nlc3NmdWxseS4KWyAgIDExLjM2NTM1OF0gc3lzdGVtZFsxXTogRmluaXNo
ZWQgbW9kcHJvYmVAbG9vcC5zZXJ2aWNlIC0gTG9hZCBLZXJuZWwgTW9kdWxlIGxvb3AuClsg
ICAxMS4zNjY2ODVdIHN5c3RlbWRbMV06IEZpbmlzaGVkIHN5c3RlbWQtc3lzY3RsLnNlcnZp
Y2UgLSBBcHBseSBLZXJuZWwgVmFyaWFibGVzLgpbICAgMTEuMzY3NjM0XSBzeXN0ZW1kWzFd
OiBGaW5pc2hlZCBzeXN0ZW1kLXN5c3VzZXJzLnNlcnZpY2UgLSBDcmVhdGUgU3lzdGVtIFVz
ZXJzLgpbICAgMTEuMzY4MTUwXSBzeXN0ZW1kWzFdOiBwcm9jLXN5cy1mcy1iaW5mbXRfbWlz
Yy5hdXRvbW91bnQ6IEdvdCBhdXRvbW91bnQgcmVxdWVzdCBmb3IgL3Byb2Mvc3lzL2ZzL2Jp
bmZtdF9taXNjLCB0cmlnZ2VyZWQgYnkgMTMzIChzeXN0ZW1kLWJpbmZtdCkKWyAgIDExLjM4
MjQ0MV0gc3lzdGVtZFsxXTogTW91bnRpbmcgcHJvYy1zeXMtZnMtYmluZm10X21pc2MubW91
bnQgLSBBcmJpdHJhcnkgRXhlY3V0YWJsZSBGaWxlIEZvcm1hdHMgRmlsZSBTeXN0ZW0uLi4K
WyAgIDExLjM4NTI1MF0gdHNjOiBSZWZpbmVkIFRTQyBjbG9ja3NvdXJjZSBjYWxpYnJhdGlv
bjogMzkwMC4yMjUgTUh6ClsgICAxMS4zODUyNThdIGNsb2Nrc291cmNlOiB0c2M6IG1hc2s6
IDB4ZmZmZmZmZmZmZmZmZmZmZiBtYXhfY3ljbGVzOiAweDcwNzA1ZDdhOWFlLCBtYXhfaWRs
ZV9uczogODgxNTkwNDk1NTMyIG5zClsgICAxMS4zODUyNzBdIGNsb2Nrc291cmNlOiBTd2l0
Y2hlZCB0byBjbG9ja3NvdXJjZSB0c2MKWyAgIDExLjQyOTEzOV0gc3lzdGVtZFsxXTogTW91
bnRpbmcgc3lzLWZzLWZ1c2UtY29ubmVjdGlvbnMubW91bnQgLSBGVVNFIENvbnRyb2wgRmls
ZSBTeXN0ZW0uLi4KWyAgIDExLjQ0MjE5Ml0gc3lzdGVtZFsxXTogTW91bnRpbmcgc3lzLWtl
cm5lbC1jb25maWcubW91bnQgLSBLZXJuZWwgQ29uZmlndXJhdGlvbiBGaWxlIFN5c3RlbS4u
LgpbICAgMTEuNDQyMzIwXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXBzdG9yZS5zZXJ2aWNlIC0g
UGxhdGZvcm0gUGVyc2lzdGVudCBTdG9yYWdlIEFyY2hpdmFsIHdhcyBza2lwcGVkIGJlY2F1
c2Ugb2YgYW4gdW5tZXQgY29uZGl0aW9uIGNoZWNrIChDb25kaXRpb25EaXJlY3RvcnlOb3RF
bXB0eT0vc3lzL2ZzL3BzdG9yZSkuClsgICAxMS40NDI0NzRdIHN5c3RlbWRbMV06IHN5c3Rl
bWQtcmVwYXJ0LnNlcnZpY2UgLSBSZXBhcnRpdGlvbiBSb290IERpc2sgd2FzIHNraXBwZWQg
YmVjYXVzZSBubyB0cmlnZ2VyIGNvbmRpdGlvbiBjaGVja3Mgd2VyZSBtZXQuClsgICAxMS40
NjU5MzBdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHN5c3RlbWQtdG1wZmlsZXMtc2V0dXAtZGV2
LnNlcnZpY2UgLSBDcmVhdGUgU3RhdGljIERldmljZSBOb2RlcyBpbiAvZGV2Li4uClsgICAx
MS40NjczNTVdIHN5c3RlbWRbMV06IE1vdW50ZWQgcHJvYy1zeXMtZnMtYmluZm10X21pc2Mu
bW91bnQgLSBBcmJpdHJhcnkgRXhlY3V0YWJsZSBGaWxlIEZvcm1hdHMgRmlsZSBTeXN0ZW0u
ClsgICAxMS40ODU5MzldIHN5c3RlbWRbMV06IEZpbmlzaGVkIHN5c3RlbWQtYmluZm10LnNl
cnZpY2UgLSBTZXQgVXAgQWRkaXRpb25hbCBCaW5hcnkgRm9ybWF0cy4KWyAgIDExLjQ4Nzc4
M10gc3lzdGVtZFsxXTogTW91bnRlZCBzeXMtZnMtZnVzZS1jb25uZWN0aW9ucy5tb3VudCAt
IEZVU0UgQ29udHJvbCBGaWxlIFN5c3RlbS4KWyAgIDExLjQ4Nzk2OV0gc3lzdGVtZFsxXTog
TW91bnRlZCBzeXMta2VybmVsLWNvbmZpZy5tb3VudCAtIEtlcm5lbCBDb25maWd1cmF0aW9u
IEZpbGUgU3lzdGVtLgpbICAgMTEuNTU0NzI3XSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBzeXN0
ZW1kLXRtcGZpbGVzLXNldHVwLWRldi5zZXJ2aWNlIC0gQ3JlYXRlIFN0YXRpYyBEZXZpY2Ug
Tm9kZXMgaW4gL2Rldi4KWyAgIDExLjU2NTI5MF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgc3lz
dGVtZC11ZGV2ZC5zZXJ2aWNlIC0gUnVsZS1iYXNlZCBNYW5hZ2VyIGZvciBEZXZpY2UgRXZl
bnRzIGFuZCBGaWxlcy4uLgpbICAgMTEuNTcwODA5XSBzeXN0ZW1kWzFdOiBTdGFydGVkIHN5
c3RlbWQtam91cm5hbGQuc2VydmljZSAtIEpvdXJuYWwgU2VydmljZS4KWyAgIDExLjYxNzI4
Ml0gc3lzdGVtZC1qb3VybmFsZFsxMzRdOiBSZWNlaXZlZCBjbGllbnQgcmVxdWVzdCB0byBm
bHVzaCBydW50aW1lIGpvdXJuYWwuClsgICAxMi4xMTgwMzNdIHNkIDY6MDowOjA6IEF0dGFj
aGVkIHNjc2kgZ2VuZXJpYyBzZzAgdHlwZSAwClsgICAxMi4yMjQ5MjJdIHJhbmRvbTogY3Ju
ZyBpbml0IGRvbmUKWyAgIDEyLjQxMjQ5OF0gYWNwaV9jcHVmcmVxOiBvdmVycmlkaW5nIEJJ
T1MgcHJvdmlkZWQgX1BTRCBkYXRhClsgICAxMi41NjEzNzRdIFFVSVJLOiBFbmFibGUgQU1E
IFBMTCBmaXgKWyAgIDEyLjU2MTQyMl0gZWhjaS1wY2kgMDAwMDowMDoxMi4yOiBFSENJIEhv
c3QgQ29udHJvbGxlcgpbICAgMTIuNTYxNDQ5XSBlaGNpLXBjaSAwMDAwOjAwOjEyLjI6IG5l
dyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1iZXIgMQpbICAgMTIuNTYx
NDYxXSBlaGNpLXBjaSAwMDAwOjAwOjEyLjI6IGFwcGx5aW5nIEFNRCBTQjcwMC9TQjgwMC9I
dWRzb24tMi8zIEVIQ0kgZHVtbXkgcWggd29ya2Fyb3VuZApbICAgMTIuNTYxNDY5XSBlaGNp
LXBjaSAwMDAwOjAwOjEyLjI6IGRlYnVnIHBvcnQgMQpbICAgMTIuNTYxNjQwXSBlaGNpLXBj
aSAwMDAwOjAwOjEyLjI6IGlycSAxNywgaW8gbWVtIDB4ZjAxY2QwMDAKWyAgIDEyLjU3Njk0
M10gZWhjaS1wY2kgMDAwMDowMDoxMi4yOiBVU0IgMi4wIHN0YXJ0ZWQsIEVIQ0kgMS4wMApb
ICAgMTIuNTc3MjMwXSB1c2IgdXNiMTogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9y
PTFkNmIsIGlkUHJvZHVjdD0wMDAyLCBiY2REZXZpY2U9IDYuMDMKWyAgIDEyLjU3NzIzM10g
dXNiIHVzYjE6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNl
cmlhbE51bWJlcj0xClsgICAxMi41NzcyMzVdIHVzYiB1c2IxOiBQcm9kdWN0OiBFSENJIEhv
c3QgQ29udHJvbGxlcgpbICAgMTIuNTc3MjM2XSB1c2IgdXNiMTogTWFudWZhY3R1cmVyOiBM
aW51eCA2LjMuMC1yYzMtMDAwNDUtZzY0ZGU0ZGY5YzgwYiBlaGNpX2hjZApbICAgMTIuNTc3
MjM4XSB1c2IgdXNiMTogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjEyLjIKWyAgIDEyLjU3NzY4
MF0gaHViIDEtMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgIDEyLjU3NzcwOF0gaHViIDEtMDox
LjA6IDUgcG9ydHMgZGV0ZWN0ZWQKWyAgIDEyLjU3ODM2M10gZWhjaS1wY2kgMDAwMDowMDox
My4yOiBFSENJIEhvc3QgQ29udHJvbGxlcgpbICAgMTIuNTc4MzgyXSBlaGNpLXBjaSAwMDAw
OjAwOjEzLjI6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1iZXIg
MgpbICAgMTIuNTc4MzkzXSBlaGNpLXBjaSAwMDAwOjAwOjEzLjI6IGFwcGx5aW5nIEFNRCBT
QjcwMC9TQjgwMC9IdWRzb24tMi8zIEVIQ0kgZHVtbXkgcWggd29ya2Fyb3VuZApbICAgMTIu
NTc4NDAyXSBlaGNpLXBjaSAwMDAwOjAwOjEzLjI6IGRlYnVnIHBvcnQgMQpbICAgMTIuNTc4
NTI5XSBlaGNpLXBjaSAwMDAwOjAwOjEzLjI6IGlycSAxNywgaW8gbWVtIDB4ZjAxY2UwMDAK
WyAgIDEyLjU4NzE3MF0gcGlpeDRfc21idXMgMDAwMDowMDoxNC4wOiBTTUJ1cyBIb3N0IENv
bnRyb2xsZXIgYXQgMHhiMDAsIHJldmlzaW9uIDAKWyAgIDEyLjU4NzE3N10gcGlpeDRfc21i
dXMgMDAwMDowMDoxNC4wOiBVc2luZyByZWdpc3RlciAweDJlIGZvciBTTUJ1cyBwb3J0IHNl
bGVjdGlvbgpbICAgMTIuNTg3NjU0XSBwaWl4NF9zbWJ1cyAwMDAwOjAwOjE0LjA6IEF1eGls
aWFyeSBTTUJ1cyBIb3N0IENvbnRyb2xsZXIgYXQgMHhiMjAKWyAgIDEyLjU5MjkzOV0gZWhj
aS1wY2kgMDAwMDowMDoxMy4yOiBVU0IgMi4wIHN0YXJ0ZWQsIEVIQ0kgMS4wMApbICAgMTIu
NTkzNDI5XSB1c2IgdXNiMjogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIs
IGlkUHJvZHVjdD0wMDAyLCBiY2REZXZpY2U9IDYuMDMKWyAgIDEyLjU5MzQzM10gdXNiIHVz
YjI6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51
bWJlcj0xClsgICAxMi41OTM0MzVdIHVzYiB1c2IyOiBQcm9kdWN0OiBFSENJIEhvc3QgQ29u
dHJvbGxlcgpbICAgMTIuNTkzNDM3XSB1c2IgdXNiMjogTWFudWZhY3R1cmVyOiBMaW51eCA2
LjMuMC1yYzMtMDAwNDUtZzY0ZGU0ZGY5YzgwYiBlaGNpX2hjZApbICAgMTIuNTkzNDM4XSB1
c2IgdXNiMjogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjEzLjIKWyAgIDEyLjU5MzkyOV0gaHVi
IDItMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgIDEyLjU5Mzk1Nl0gaHViIDItMDoxLjA6IDUg
cG9ydHMgZGV0ZWN0ZWQKWyAgIDEyLjU5NDYyMl0gZWhjaS1wY2kgMDAwMDowMDoxNi4yOiBF
SENJIEhvc3QgQ29udHJvbGxlcgpbICAgMTIuNTk0NjQwXSBlaGNpLXBjaSAwMDAwOjAwOjE2
LjI6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1iZXIgMwpbICAg
MTIuNTk0NjUwXSBlaGNpLXBjaSAwMDAwOjAwOjE2LjI6IGFwcGx5aW5nIEFNRCBTQjcwMC9T
QjgwMC9IdWRzb24tMi8zIEVIQ0kgZHVtbXkgcWggd29ya2Fyb3VuZApbICAgMTIuNTk0NjU5
XSBlaGNpLXBjaSAwMDAwOjAwOjE2LjI6IGRlYnVnIHBvcnQgMQpbICAgMTIuNTk0NzkxXSBl
aGNpLXBjaSAwMDAwOjAwOjE2LjI6IGlycSAxNywgaW8gbWVtIDB4ZjAxY2YwMDAKWyAgIDEy
LjYwODkzNl0gZWhjaS1wY2kgMDAwMDowMDoxNi4yOiBVU0IgMi4wIHN0YXJ0ZWQsIEVIQ0kg
MS4wMApbICAgMTIuNjA5MjIwXSB1c2IgdXNiMzogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlk
VmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyLCBiY2REZXZpY2U9IDYuMDMKWyAgIDEyLjYw
OTIyM10gdXNiIHVzYjM6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0
PTIsIFNlcmlhbE51bWJlcj0xClsgICAxMi42MDkyMjVdIHVzYiB1c2IzOiBQcm9kdWN0OiBF
SENJIEhvc3QgQ29udHJvbGxlcgpbICAgMTIuNjA5MjI2XSB1c2IgdXNiMzogTWFudWZhY3R1
cmVyOiBMaW51eCA2LjMuMC1yYzMtMDAwNDUtZzY0ZGU0ZGY5YzgwYiBlaGNpX2hjZApbICAg
MTIuNjA5MjI4XSB1c2IgdXNiMzogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjE2LjIKWyAgIDEy
LjYwOTY2N10gaHViIDMtMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgIDEyLjYwOTY5NF0gaHVi
IDMtMDoxLjA6IDQgcG9ydHMgZGV0ZWN0ZWQKWyAgIDEyLjYxMTM4N10gb2hjaS1wY2kgMDAw
MDowMDoxMi4wOiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXIKWyAgIDEyLjYxMTQyMF0gb2hj
aS1wY2kgMDAwMDowMDoxMi4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBi
dXMgbnVtYmVyIDQKWyAgIDEyLjYxMTU5Nl0gb2hjaS1wY2kgMDAwMDowMDoxMi4wOiBpcnEg
MTgsIGlvIG1lbSAweGYwMWM4MDAwClsgICAxMi42NjA0NjNdIHI4MTY5IDAwMDA6MDQ6MDAu
MDogZW5hYmxpbmcgZGV2aWNlICgwMDAwIC0+IDAwMDMpClsgICAxMi42OTAzNjddIHVzYiB1
c2I0OiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAw
MDEsIGJjZERldmljZT0gNi4wMwpbICAgMTIuNjkwMzc0XSB1c2IgdXNiNDogTmV3IFVTQiBk
ZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAgIDEy
LjY5MDM3Nl0gdXNiIHVzYjQ6IFByb2R1Y3Q6IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcgpb
ICAgMTIuNjkwMzc4XSB1c2IgdXNiNDogTWFudWZhY3R1cmVyOiBMaW51eCA2LjMuMC1yYzMt
MDAwNDUtZzY0ZGU0ZGY5YzgwYiBvaGNpX2hjZApbICAgMTIuNjkwMzgwXSB1c2IgdXNiNDog
U2VyaWFsTnVtYmVyOiAwMDAwOjAwOjEyLjAKWyAgIDEyLjY5OTE0MV0gaHViIDQtMDoxLjA6
IFVTQiBodWIgZm91bmQKWyAgIDEyLjcwMzI2Nl0gaHViIDQtMDoxLjA6IDUgcG9ydHMgZGV0
ZWN0ZWQKWyAgIDEyLjcyNTczMV0gb2hjaS1wY2kgMDAwMDowMDoxMy4wOiBPSENJIFBDSSBo
b3N0IGNvbnRyb2xsZXIKWyAgIDEyLjcyNTc1Nl0gb2hjaS1wY2kgMDAwMDowMDoxMy4wOiBu
ZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDUKWyAgIDEyLjcy
NTg2NV0gb2hjaS1wY2kgMDAwMDowMDoxMy4wOiBpcnEgMTgsIGlvIG1lbSAweGYwMWM5MDAw
ClsgICAxMi43Mjc1NDBdIHI4MTY5IDAwMDA6MDQ6MDAuMCBldGgwOiBSVEw4MTY4Zi84MTEx
ZiwgMDg6NjA6NmU6NzQ6N2E6NTEsIFhJRCA0ODAsIElSUSAyOApbICAgMTIuNzI3NTQ1XSBy
ODE2OSAwMDAwOjA0OjAwLjAgZXRoMDoganVtYm8gZmVhdHVyZXMgW2ZyYW1lczogOTE5NCBi
eXRlcywgdHggY2hlY2tzdW1taW5nOiBrb10KWyAgIDEyLjc0MTU4Ml0geGhjaV9oY2QgMDAw
MDowMzowMC4wOiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAgMTIuNzQxNjA3XSB4aGNpX2hj
ZCAwMDAwOjAzOjAwLjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBu
dW1iZXIgNgpbICAgMTIuNzQ4MjAwXSBzbmRfaGRhX2ludGVsIDAwMDA6MDA6MDEuMTogRm9y
Y2UgdG8gbm9uLXNub29wIG1vZGUKWyAgIDEyLjgwMjg1NV0geGhjaV9oY2QgMDAwMDowMzow
MC4wOiBoY2MgcGFyYW1zIDB4MDIwMGYxODAgaGNpIHZlcnNpb24gMHg5NiBxdWlya3MgMHgw
MDAwMDAwMDAwMDgwMDEwClsgICAxMi44MDM2NjRdIHhoY2lfaGNkIDAwMDA6MDM6MDAuMDog
eEhDSSBIb3N0IENvbnRyb2xsZXIKWyAgIDEyLjgwMzY3OV0geGhjaV9oY2QgMDAwMDowMzow
MC4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDcKWyAg
IDEyLjgwMzY5Ml0geGhjaV9oY2QgMDAwMDowMzowMC4wOiBIb3N0IHN1cHBvcnRzIFVTQiAz
LjAgU3VwZXJTcGVlZApbICAgMTIuODAzODYwXSB1c2IgdXNiNjogTmV3IFVTQiBkZXZpY2Ug
Zm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyLCBiY2REZXZpY2U9IDYuMDMK
WyAgIDEyLjgwMzg2M10gdXNiIHVzYjY6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0z
LCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAxMi44MDM4NjVdIHVzYiB1c2I2OiBQ
cm9kdWN0OiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAgMTIuODAzODY3XSB1c2IgdXNiNjog
TWFudWZhY3R1cmVyOiBMaW51eCA2LjMuMC1yYzMtMDAwNDUtZzY0ZGU0ZGY5YzgwYiB4aGNp
LWhjZApbICAgMTIuODAzODY4XSB1c2IgdXNiNjogU2VyaWFsTnVtYmVyOiAwMDAwOjAzOjAw
LjAKWyAgIDEyLjgwNzY0NF0gaHViIDYtMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgIDEyLjgw
NzY3N10gaHViIDYtMDoxLjA6IDIgcG9ydHMgZGV0ZWN0ZWQKWyAgIDEyLjgwOTc1MV0gdXNi
IHVzYjc6IFdlIGRvbid0IGtub3cgdGhlIGFsZ29yaXRobXMgZm9yIExQTSBmb3IgdGhpcyBo
b3N0LCBkaXNhYmxpbmcgTFBNLgpbICAgMTIuODA5ODg1XSB1c2IgdXNiNzogTmV3IFVTQiBk
ZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAzLCBiY2REZXZpY2U9
IDYuMDMKWyAgIDEyLjgwOTg4OF0gdXNiIHVzYjc6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6
IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAxMi44MDk4OTBdIHVzYiB1
c2I3OiBQcm9kdWN0OiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAgMTIuODA5ODkxXSB1c2Ig
dXNiNzogTWFudWZhY3R1cmVyOiBMaW51eCA2LjMuMC1yYzMtMDAwNDUtZzY0ZGU0ZGY5Yzgw
YiB4aGNpLWhjZApbICAgMTIuODA5ODkzXSB1c2IgdXNiNzogU2VyaWFsTnVtYmVyOiAwMDAw
OjAzOjAwLjAKWyAgIDEyLjgxMDM1M10gaHViIDctMDoxLjA6IFVTQiBodWIgZm91bmQKWyAg
IDEyLjgxMDM4Ml0gaHViIDctMDoxLjA6IDIgcG9ydHMgZGV0ZWN0ZWQKWyAgIDEyLjgxMjcx
Ml0gaW5wdXQ6IEhEQSBBVEkgSERNSSBIRE1JL0RQLHBjbT0zIGFzIC9kZXZpY2VzL3BjaTAw
MDA6MDAvMDAwMDowMDowMS4xL3NvdW5kL2NhcmQwL2lucHV0MQpbICAgMTIuODEzMDI3XSBp
bnB1dDogSERBIEFUSSBIRE1JIEhETUkvRFAscGNtPTcgYXMgL2RldmljZXMvcGNpMDAwMDow
MC8wMDAwOjAwOjAxLjEvc291bmQvY2FyZDAvaW5wdXQyClsgICAxMi44MjU2NjBdIHNuZF9o
ZGFfY29kZWNfcmVhbHRlayBoZGF1ZGlvQzFEMDogQUxDODkyOiBTS1Ugbm90IHJlYWR5IDB4
MDAwMDAxMDAKWyAgIDEyLjgyNjUyN10gc25kX2hkYV9jb2RlY19yZWFsdGVrIGhkYXVkaW9D
MUQwOiBhdXRvY29uZmlnIGZvciBBTEM4OTI6IGxpbmVfb3V0cz00ICgweDE0LzB4MTYvMHgx
NS8weDE3LzB4MCkgdHlwZTpsaW5lClsgICAxMi44MjY1MzRdIHNuZF9oZGFfY29kZWNfcmVh
bHRlayBoZGF1ZGlvQzFEMDogICAgc3BlYWtlcl9vdXRzPTAgKDB4MC8weDAvMHgwLzB4MC8w
eDApClsgICAxMi44MjY1MzZdIHNuZF9oZGFfY29kZWNfcmVhbHRlayBoZGF1ZGlvQzFEMDog
ICAgaHBfb3V0cz0xICgweDFiLzB4MC8weDAvMHgwLzB4MCkKWyAgIDEyLjgyNjUzOV0gc25k
X2hkYV9jb2RlY19yZWFsdGVrIGhkYXVkaW9DMUQwOiAgICBtb25vOiBtb25vX291dD0weDAK
WyAgIDEyLjgyNjU0MF0gc25kX2hkYV9jb2RlY19yZWFsdGVrIGhkYXVkaW9DMUQwOiAgICBk
aWctb3V0PTB4MWUvMHgwClsgICAxMi44MjY1NDFdIHNuZF9oZGFfY29kZWNfcmVhbHRlayBo
ZGF1ZGlvQzFEMDogICAgaW5wdXRzOgpbICAgMTIuODI2NTQzXSBzbmRfaGRhX2NvZGVjX3Jl
YWx0ZWsgaGRhdWRpb0MxRDA6ICAgICAgUmVhciBNaWM9MHgxOApbICAgMTIuODI2NTQ1XSBz
bmRfaGRhX2NvZGVjX3JlYWx0ZWsgaGRhdWRpb0MxRDA6ICAgICAgRnJvbnQgTWljPTB4MTkK
WyAgIDEyLjgyNjU0Nl0gc25kX2hkYV9jb2RlY19yZWFsdGVrIGhkYXVkaW9DMUQwOiAgICAg
IExpbmU9MHgxYQpbICAgMTIuODI2NTQ3XSBzbmRfaGRhX2NvZGVjX3JlYWx0ZWsgaGRhdWRp
b0MxRDA6ICAgICAgQ0Q9MHgxYwpbICAgMTIuODU0NzQ0XSBpbnB1dDogSEQtQXVkaW8gR2Vu
ZXJpYyBSZWFyIE1pYyBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MTQuMi9zb3Vu
ZC9jYXJkMS9pbnB1dDMKWyAgIDEyLjg1NTAzN10gaW5wdXQ6IEhELUF1ZGlvIEdlbmVyaWMg
RnJvbnQgTWljIGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAwMDowMDoxNC4yL3NvdW5kL2Nh
cmQxL2lucHV0NApbICAgMTIuODU1MzE2XSBpbnB1dDogSEQtQXVkaW8gR2VuZXJpYyBMaW5l
IGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAwMDowMDoxNC4yL3NvdW5kL2NhcmQxL2lucHV0
NQpbICAgMTIuODU1NjAyXSBpbnB1dDogSEQtQXVkaW8gR2VuZXJpYyBMaW5lIE91dCBGcm9u
dCBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MTQuMi9zb3VuZC9jYXJkMS9pbnB1
dDYKWyAgIDEyLjg1NTg3OF0gaW5wdXQ6IEhELUF1ZGlvIEdlbmVyaWMgTGluZSBPdXQgU3Vy
cm91bmQgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE0LjIvc291bmQvY2FyZDEv
aW5wdXQ3ClsgICAxMi44NTYxNjJdIGlucHV0OiBIRC1BdWRpbyBHZW5lcmljIExpbmUgT3V0
IENMRkUgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE0LjIvc291bmQvY2FyZDEv
aW5wdXQ4ClsgICAxMi44NTY0MzNdIGlucHV0OiBIRC1BdWRpbyBHZW5lcmljIExpbmUgT3V0
IFNpZGUgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE0LjIvc291bmQvY2FyZDEv
aW5wdXQ5ClsgICAxMi44NTY3MDRdIGlucHV0OiBIRC1BdWRpbyBHZW5lcmljIEZyb250IEhl
YWRwaG9uZSBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MTQuMi9zb3VuZC9jYXJk
MS9pbnB1dDEwClsgICAxMi44NjEwMzBdIHVzYiB1c2I1OiBOZXcgVVNCIGRldmljZSBmb3Vu
ZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDEsIGJjZERldmljZT0gNi4wMwpbICAg
MTIuODYxMDM3XSB1c2IgdXNiNTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFBy
b2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAgIDEyLjg2MTAzOV0gdXNiIHVzYjU6IFByb2R1
Y3Q6IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcgpbICAgMTIuODYxMDQxXSB1c2IgdXNiNTog
TWFudWZhY3R1cmVyOiBMaW51eCA2LjMuMC1yYzMtMDAwNDUtZzY0ZGU0ZGY5YzgwYiBvaGNp
X2hjZApbICAgMTIuODYxMDQzXSB1c2IgdXNiNTogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjEz
LjAKWyAgIDEyLjg2NzEwMV0gaHViIDUtMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgIDEyLjg3
NDEwNF0gaHViIDUtMDoxLjA6IDUgcG9ydHMgZGV0ZWN0ZWQKWyAgIDEyLjkwMjAyMV0gb2hj
aS1wY2kgMDAwMDowMDoxNC41OiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXIKWyAgIDEyLjkw
MjA1Ml0gb2hjaS1wY2kgMDAwMDowMDoxNC41OiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBh
c3NpZ25lZCBidXMgbnVtYmVyIDgKWyAgIDEyLjkwMjE3NV0gb2hjaS1wY2kgMDAwMDowMDox
NC41OiBpcnEgMTgsIGlvIG1lbSAweGYwMWNhMDAwClsgICAxMi45MTQ4OTldIHI4MTY5IDAw
MDA6MDQ6MDAuMCBlbnA0czA6IHJlbmFtZWQgZnJvbSBldGgwClsgICAxMi45ODE1OTJdIHVz
YiB1c2I4OiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0
PTAwMDEsIGJjZERldmljZT0gNi4wMwpbICAgMTIuOTgxNTk5XSB1c2IgdXNiODogTmV3IFVT
QiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAg
IDEyLjk4MTYwMV0gdXNiIHVzYjg6IFByb2R1Y3Q6IE9IQ0kgUENJIGhvc3QgY29udHJvbGxl
cgpbICAgMTIuOTgxNjAzXSB1c2IgdXNiODogTWFudWZhY3R1cmVyOiBMaW51eCA2LjMuMC1y
YzMtMDAwNDUtZzY0ZGU0ZGY5YzgwYiBvaGNpX2hjZApbICAgMTIuOTgxNjA0XSB1c2IgdXNi
ODogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjE0LjUKWyAgIDEyLjk4MzMwOF0gaHViIDgtMDox
LjA6IFVTQiBodWIgZm91bmQKWyAgIDEyLjk5ODE3Ml0gaHViIDgtMDoxLjA6IDIgcG9ydHMg
ZGV0ZWN0ZWQKWyAgIDEzLjAwNDQ1N10gb2hjaS1wY2kgMDAwMDowMDoxNi4wOiBPSENJIFBD
SSBob3N0IGNvbnRyb2xsZXIKWyAgIDEzLjAwNDQ4MV0gb2hjaS1wY2kgMDAwMDowMDoxNi4w
OiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDkKWyAgIDEz
LjAwNDU4Nl0gb2hjaS1wY2kgMDAwMDowMDoxNi4wOiBpcnEgMTgsIGlvIG1lbSAweGYwMWNi
MDAwClsgICAxMy4wMjg5MjldIHVzYiA0LTE6IG5ldyBsb3ctc3BlZWQgVVNCIGRldmljZSBu
dW1iZXIgMiB1c2luZyBvaGNpLXBjaQpbICAgMTMuMTE3NTE1XSB1c2IgdXNiOTogTmV3IFVT
QiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxLCBiY2REZXZp
Y2U9IDYuMDMKWyAgIDEzLjExNzUyMl0gdXNiIHVzYjk6IE5ldyBVU0IgZGV2aWNlIHN0cmlu
Z3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAxMy4xMTc1MjRdIHVz
YiB1c2I5OiBQcm9kdWN0OiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXIKWyAgIDEzLjExNzUy
Nl0gdXNiIHVzYjk6IE1hbnVmYWN0dXJlcjogTGludXggNi4zLjAtcmMzLTAwMDQ1LWc2NGRl
NGRmOWM4MGIgb2hjaV9oY2QKWyAgIDEzLjExNzUyN10gdXNiIHVzYjk6IFNlcmlhbE51bWJl
cjogMDAwMDowMDoxNi4wClsgICAxMy4xMjI4MjNdIGh1YiA5LTA6MS4wOiBVU0IgaHViIGZv
dW5kClsgICAxMy4xMjI4NTVdIGh1YiA5LTA6MS4wOiA0IHBvcnRzIGRldGVjdGVkClsgICAx
My4xNTAxODddIHI4MTY5IDAwMDA6MDQ6MDAuMDogRGlyZWN0IGZpcm13YXJlIGxvYWQgZm9y
IHJ0bF9uaWMvcnRsODE2OGYtMS5mdyBmYWlsZWQgd2l0aCBlcnJvciAtMgpbICAgMTMuMTUw
MTk4XSByODE2OSAwMDAwOjA0OjAwLjA6IFVuYWJsZSB0byBsb2FkIGZpcm13YXJlIHJ0bF9u
aWMvcnRsODE2OGYtMS5mdyAoLTIpClsgICAxMy4xNTA2ODhdIFJUTDgyMTFFIEdpZ2FiaXQg
RXRoZXJuZXQgcjgxNjktMC00MDA6MDA6IGF0dGFjaGVkIFBIWSBkcml2ZXIgKG1paV9idXM6
cGh5X2FkZHI9cjgxNjktMC00MDA6MDAsIGlycT1NQUMpClsgICAxMy4yMzEzODddIHI4MTY5
IDAwMDA6MDQ6MDAuMCBlbnA0czA6IExpbmsgaXMgRG93bgpbICAgMTMuMjU2MDUxXSBbZHJt
XSByYWRlb24ga2VybmVsIG1vZGVzZXR0aW5nIGVuYWJsZWQuClsgICAxMy4yNjIxMDldIFtk
cm1dIGluaXRpYWxpemluZyBrZXJuZWwgbW9kZXNldHRpbmcgKEFSVUJBIDB4MTAwMjoweDk5
OTYgMHgxMDAyOjB4OTk5NiAweDAwKS4KWyAgIDEzLjI2MjE3N10gQVRPTSBCSU9TOiAxMTMK
WyAgIDEzLjI2MjI4NF0gcmFkZW9uIDAwMDA6MDA6MDEuMDogVlJBTTogNTEyTSAweDAwMDAw
MDAwMDAwMDAwMDAgLSAweDAwMDAwMDAwMUZGRkZGRkYgKDUxMk0gdXNlZCkKWyAgIDEzLjI2
MjI4OF0gcmFkZW9uIDAwMDA6MDA6MDEuMDogR1RUOiAxMDI0TSAweDAwMDAwMDAwMjAwMDAw
MDAgLSAweDAwMDAwMDAwNUZGRkZGRkYKWyAgIDEzLjI2MjI5Nl0gW2RybV0gRGV0ZWN0ZWQg
VlJBTSBSQU09NTEyTSwgQkFSPTI1Nk0KWyAgIDEzLjI2MjI5N10gW2RybV0gUkFNIHdpZHRo
IDY0Yml0cyBERFIKWyAgIDEzLjI2MjQ4NF0gW2RybV0gcmFkZW9uOiA1MTJNIG9mIFZSQU0g
bWVtb3J5IHJlYWR5ClsgICAxMy4yNjI0OTBdIFtkcm1dIHJhZGVvbjogMTAyNE0gb2YgR1RU
IG1lbW9yeSByZWFkeS4KWyAgIDEzLjI2MjUzMF0gW2RybV0gTG9hZGluZyBBUlVCQSBNaWNy
b2NvZGUKWyAgIDEzLjI3MDY2N10gW2RybV0gSW50ZXJuYWwgdGhlcm1hbCBjb250cm9sbGVy
IHdpdGhvdXQgZmFuIGNvbnRyb2wKWyAgIDEzLjI3MTA4N10gW2RybV0gcmFkZW9uOiBkcG0g
aW5pdGlhbGl6ZWQKWyAgIDEzLjI3NTcxNF0gW2RybV0gRm91bmQgVkNFIGZpcm13YXJlL2Zl
ZWRiYWNrIHZlcnNpb24gNTAuMC4xIC8gMTchClsgICAxMy4yNzU3NzFdIFtkcm1dIEdBUlQ6
IG51bSBjcHUgcGFnZXMgMjYyMTQ0LCBudW0gZ3B1IHBhZ2VzIDI2MjE0NApbICAgMTMuMjc5
MDk1XSB1c2IgNC0xOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9NDEzYywgaWRQ
cm9kdWN0PTIxMDYsIGJjZERldmljZT0gMS4wMQpbICAgMTMuMjc5MTAxXSB1c2IgNC0xOiBO
ZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MSwgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9
MApbICAgMTMuMjc5MTAzXSB1c2IgNC0xOiBQcm9kdWN0OiBEZWxsIFF1aWV0S2V5IEtleWJv
YXJkClsgICAxMy4yNzkxMDVdIHVzYiA0LTE6IE1hbnVmYWN0dXJlcjogREVMTApbICAgMTMu
Mjg2ODMzXSBpbnB1dDogREVMTCBEZWxsIFF1aWV0S2V5IEtleWJvYXJkIGFzIC9kZXZpY2Vz
L3BjaTAwMDA6MDAvMDAwMDowMDoxMi4wL3VzYjQvNC0xLzQtMToxLjAvMDAwMzo0MTNDOjIx
MDYuMDAwMS9pbnB1dC9pbnB1dDExClsgICAxMy4zMTczNDhdIFtkcm1dIFBDSUUgR0FSVCBv
ZiAxMDI0TSBlbmFibGVkICh0YWJsZSBhdCAweDAwMDAwMDAwMDAxRDYwMDApLgpbICAgMTMu
MzE3NTk2XSByYWRlb24gMDAwMDowMDowMS4wOiBXQiBlbmFibGVkClsgICAxMy4zMTc1OTld
IHJhZGVvbiAwMDAwOjAwOjAxLjA6IGZlbmNlIGRyaXZlciBvbiByaW5nIDAgdXNlIGdwdSBh
ZGRyIDB4MDAwMDAwMDAyMDAwMGMwMApbICAgMTMuMzE3OTc3XSByYWRlb24gMDAwMDowMDow
MS4wOiBmZW5jZSBkcml2ZXIgb24gcmluZyA1IHVzZSBncHUgYWRkciAweDAwMDAwMDAwMDAw
NzVhMTgKWyAgIDEzLjMzODg1MF0gcmFkZW9uIDAwMDA6MDA6MDEuMDogZmVuY2UgZHJpdmVy
IG9uIHJpbmcgNiB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDIwMDAwYzE4ClsgICAxMy4zMzg4
NTRdIHJhZGVvbiAwMDAwOjAwOjAxLjA6IGZlbmNlIGRyaXZlciBvbiByaW5nIDcgdXNlIGdw
dSBhZGRyIDB4MDAwMDAwMDAyMDAwMGMxYwpbICAgMTMuMzM4ODU2XSByYWRlb24gMDAwMDow
MDowMS4wOiBmZW5jZSBkcml2ZXIgb24gcmluZyAxIHVzZSBncHUgYWRkciAweDAwMDAwMDAw
MjAwMDBjMDQKWyAgIDEzLjMzODg1OF0gcmFkZW9uIDAwMDA6MDA6MDEuMDogZmVuY2UgZHJp
dmVyIG9uIHJpbmcgMiB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDIwMDAwYzA4ClsgICAxMy4z
Mzg4NjBdIHJhZGVvbiAwMDAwOjAwOjAxLjA6IGZlbmNlIGRyaXZlciBvbiByaW5nIDMgdXNl
IGdwdSBhZGRyIDB4MDAwMDAwMDAyMDAwMGMwYwpbICAgMTMuMzM4ODYxXSByYWRlb24gMDAw
MDowMDowMS4wOiBmZW5jZSBkcml2ZXIgb24gcmluZyA0IHVzZSBncHUgYWRkciAweDAwMDAw
MDAwMjAwMDBjMTAKWyAgIDEzLjMzOTE1M10gcmFkZW9uIDAwMDA6MDA6MDEuMDogcmFkZW9u
OiBNU0kgbGltaXRlZCB0byAzMi1iaXQKWyAgIDEzLjMzOTM0Ml0gcmFkZW9uIDAwMDA6MDA6
MDEuMDogcmFkZW9uOiB1c2luZyBNU0kuClsgICAxMy4zMzk0MTRdIFtkcm1dIHJhZGVvbjog
aXJxIGluaXRpYWxpemVkLgpbICAgMTMuMzQ1NTcxXSBoaWQtZ2VuZXJpYyAwMDAzOjQxM0M6
MjEwNi4wMDAxOiBpbnB1dCxoaWRyYXcwOiBVU0IgSElEIHYxLjEwIEtleWJvYXJkIFtERUxM
IERlbGwgUXVpZXRLZXkgS2V5Ym9hcmRdIG9uIHVzYi0wMDAwOjAwOjEyLjAtMS9pbnB1dDAK
WyAgIDEzLjM5MDQxMl0gW2RybV0gcmluZyB0ZXN0IG9uIDAgc3VjY2VlZGVkIGluIDMgdXNl
Y3MKWyAgIDEzLjM5MDQyM10gW2RybV0gcmluZyB0ZXN0IG9uIDMgc3VjY2VlZGVkIGluIDQg
dXNlY3MKWyAgIDEzLjM5MDQzMV0gW2RybV0gcmluZyB0ZXN0IG9uIDQgc3VjY2VlZGVkIGlu
IDQgdXNlY3MKWyAgIDEzLjQ0MTkxNV0gW2RybV0gcmluZyB0ZXN0IG9uIDUgc3VjY2VlZGVk
IGluIDIgdXNlY3MKWyAgIDEzLjQ2MTg1OF0gW2RybV0gVVZEIGluaXRpYWxpemVkIHN1Y2Nl
c3NmdWxseS4KWyAgIDEzLjU3MTM1NV0gW2RybV0gcmluZyB0ZXN0IG9uIDYgc3VjY2VlZGVk
IGluIDE4IHVzZWNzClsgICAxMy41NzEzNjddIFtkcm1dIHJpbmcgdGVzdCBvbiA3IHN1Y2Nl
ZWRlZCBpbiA0IHVzZWNzClsgICAxMy41NzEzNjhdIFtkcm1dIFZDRSBpbml0aWFsaXplZCBz
dWNjZXNzZnVsbHkuClsgICAxMy41NzE0OTRdIHNuZF9oZGFfaW50ZWwgMDAwMDowMDowMS4x
OiBib3VuZCAwMDAwOjAwOjAxLjAgKG9wcyByYWRlb25fYXVkaW9fY29tcG9uZW50X2JpbmRf
b3BzIFtyYWRlb25dKQpbICAgMTMuNTcxNjY3XSBbZHJtXSBpYiB0ZXN0IG9uIHJpbmcgMCBz
dWNjZWVkZWQgaW4gMCB1c2VjcwpbICAgMTMuNTcxNzIxXSBbZHJtXSBpYiB0ZXN0IG9uIHJp
bmcgMyBzdWNjZWVkZWQgaW4gMCB1c2VjcwpbICAgMTMuNTcxNzcxXSBbZHJtXSBpYiB0ZXN0
IG9uIHJpbmcgNCBzdWNjZWVkZWQgaW4gMCB1c2VjcwpbICAgMTMuNzY4OTkzXSB1c2IgNC0y
OiBuZXcgbG93LXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDMgdXNpbmcgb2hjaS1wY2kKWyAg
IDEzLjk2NTE5Nl0gdXNiIDQtMjogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTA0
NmQsIGlkUHJvZHVjdD1jMDE2LCBiY2REZXZpY2U9IDMuNDAKWyAgIDEzLjk2NTIwOF0gdXNi
IDQtMjogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTEsIFByb2R1Y3Q9MiwgU2VyaWFs
TnVtYmVyPTAKWyAgIDEzLjk2NTIxMl0gdXNiIDQtMjogUHJvZHVjdDogT3B0aWNhbCBVU0Ig
TW91c2UKWyAgIDEzLjk2NTIxNl0gdXNiIDQtMjogTWFudWZhY3R1cmVyOiBMb2dpdGVjaApb
ICAgMTMuOTc2MjE4XSBpbnB1dDogTG9naXRlY2ggT3B0aWNhbCBVU0IgTW91c2UgYXMgL2Rl
dmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjEyLjAvdXNiNC80LTIvNC0yOjEuMC8wMDAzOjA0
NkQ6QzAxNi4wMDAyL2lucHV0L2lucHV0MTIKWyAgIDEzLjk3NzgzN10gaGlkLWdlbmVyaWMg
MDAwMzowNDZEOkMwMTYuMDAwMjogaW5wdXQsaGlkcmF3MTogVVNCIEhJRCB2MS4xMCBNb3Vz
ZSBbTG9naXRlY2ggT3B0aWNhbCBVU0IgTW91c2VdIG9uIHVzYi0wMDAwOjAwOjEyLjAtMi9p
bnB1dDAKWyAgIDE0LjEwNTEwOV0gW2RybV0gaWIgdGVzdCBvbiByaW5nIDUgc3VjY2VlZGVk
ClsgICAxNC42NDkwMzZdIFtkcm1dIGliIHRlc3Qgb24gcmluZyA2IHN1Y2NlZWRlZApbICAg
MTUuMTYxMDI3XSBbZHJtXSBpYiB0ZXN0IG9uIHJpbmcgNyBzdWNjZWVkZWQKWyAgIDE1LjE3
MzM2OV0gW2RybV0gUmFkZW9uIERpc3BsYXkgQ29ubmVjdG9ycwpbICAgMTUuMTczMzc2XSBb
ZHJtXSBDb25uZWN0b3IgMDoKWyAgIDE1LjE3MzM3OF0gW2RybV0gICBEUC0xClsgICAxNS4x
NzMzODBdIFtkcm1dICAgSFBEMQpbICAgMTUuMTczMzgyXSBbZHJtXSAgIEREQzogMHg2NTMw
IDB4NjUzMCAweDY1MzQgMHg2NTM0IDB4NjUzOCAweDY1MzggMHg2NTNjIDB4NjUzYwpbICAg
MTUuMTczMzg3XSBbZHJtXSAgIEVuY29kZXJzOgpbICAgMTUuMTczMzg4XSBbZHJtXSAgICAg
REZQMTogSU5URVJOQUxfVU5JUEhZMgpbICAgMTUuMTczMzkwXSBbZHJtXSBDb25uZWN0b3Ig
MToKWyAgIDE1LjE3MzM5Ml0gW2RybV0gICBWR0EtMQpbICAgMTUuMTczMzkzXSBbZHJtXSAg
IEhQRDIKWyAgIDE1LjE3MzM5NV0gW2RybV0gICBEREM6IDB4NjU0MCAweDY1NDAgMHg2NTQ0
IDB4NjU0NCAweDY1NDggMHg2NTQ4IDB4NjU0YyAweDY1NGMKWyAgIDE1LjE3MzM5OV0gW2Ry
bV0gICBFbmNvZGVyczoKWyAgIDE1LjE3MzQwMF0gW2RybV0gICAgIENSVDE6IElOVEVSTkFM
X1VOSVBIWTIKWyAgIDE1LjE3MzQwMl0gW2RybV0gICAgIENSVDE6IE5VVE1FRwpbICAgMTUu
MTczNDAzXSBbZHJtXSBDb25uZWN0b3IgMjoKWyAgIDE1LjE3MzQwNV0gW2RybV0gICBIRE1J
LUEtMQpbICAgMTUuMTczNDA2XSBbZHJtXSAgIEhQRDMKWyAgIDE1LjE3MzQwOF0gW2RybV0g
ICBEREM6IDB4NjU1MCAweDY1NTAgMHg2NTU0IDB4NjU1NCAweDY1NTggMHg2NTU4IDB4NjU1
YyAweDY1NWMKWyAgIDE1LjE3MzQxMV0gW2RybV0gICBFbmNvZGVyczoKWyAgIDE1LjE3MzQx
Ml0gW2RybV0gICAgIERGUDI6IElOVEVSTkFMX1VOSVBIWQpbICAgMTUuNDQ3NzM0XSBbZHJt
XSBmYiBtYXBwYWJsZSBhdCAweEUwM0U5MDAwClsgICAxNS40NDc3NDFdIFtkcm1dIHZyYW0g
YXBwZXIgYXQgMHhFMDAwMDAwMApbICAgMTUuNDQ3NzQzXSBbZHJtXSBzaXplIDUyNDI4ODAK
WyAgIDE1LjQ0Nzc0NV0gW2RybV0gZmIgZGVwdGggaXMgMjQKWyAgIDE1LjQ0Nzc0N10gW2Ry
bV0gICAgcGl0Y2ggaXMgNTEyMApbICAgMTUuNDQ4MzQyXSBmYmNvbjogcmFkZW9uZHJtZmIg
KGZiMCkgaXMgcHJpbWFyeSBkZXZpY2UKWyAgIDE1LjYzMTY3MF0gQ29uc29sZTogc3dpdGNo
aW5nIHRvIGNvbG91ciBmcmFtZSBidWZmZXIgZGV2aWNlIDE2MHg2NApbICAgMTUuNjMzNTUx
XSByYWRlb24gMDAwMDowMDowMS4wOiBbZHJtXSBmYjA6IHJhZGVvbmRybWZiIGZyYW1lIGJ1
ZmZlciBkZXZpY2UKWyAgIDE1LjY0NTIzMF0gW2RybV0gSW5pdGlhbGl6ZWQgcmFkZW9uIDIu
NTAuMCAyMDA4MDUyOCBmb3IgMDAwMDowMDowMS4wIG9uIG1pbm9yIDAKWyAgIDE1Ljg3Mjgw
NF0gcjgxNjkgMDAwMDowNDowMC4wIGVucDRzMDogTGluayBpcyBVcCAtIDFHYnBzL0Z1bGwg
LSBmbG93IGNvbnRyb2wgcngvdHgKWyAgIDE1Ljg3MjgxOV0gSVB2NjogQUREUkNPTkYoTkVU
REVWX0NIQU5HRSk6IGVucDRzMDogbGluayBiZWNvbWVzIHJlYWR5ClsgICAxOC4xODU5Mjhd
IG1lbWZkX2NyZWF0ZSgpIHdpdGhvdXQgTUZEX0VYRUMgbm9yIE1GRF9OT0VYRUNfU0VBTCwg
cGlkPTI0NiAnc3lzdGVtZCcKWyAgIDE4Ljk4MzgzNl0gW2RybV0gYW1kZ3B1IGtlcm5lbCBt
b2Rlc2V0dGluZyBlbmFibGVkLgo=
--------------RIXs5aB9kAV059bPykAqBHIR
Content-Type: text/plain; charset=UTF-8;
 name="20230419-coreboot-cbmem-log-cb-68169.txt"
Content-Disposition: attachment;
 filename="20230419-coreboot-cbmem-log-cb-68169.txt"
Content-Transfer-Encoding: base64

CgpbTk9URSBdICBjb3JlYm9vdC00LjE4LTE1LWdjNzgyZWY0MzQ1IFdlZCBBcHIgMTkgMTY6
MDQ6NTQgVVRDIDIwMjMgYm9vdGJsb2NrIHN0YXJ0aW5nIChsb2cgbGV2ZWw6IDcpLi4uCltE
RUJVR10gIEZNQVA6IEZvdW5kICJGTEFTSCIgdmVyc2lvbiAxLjEgYXQgMHgxMDAwMC4KW0RF
QlVHXSAgRk1BUDogYmFzZSA9IDB4ZmZjMDAwMDAgc2l6ZSA9IDB4NDAwMDAwICNhcmVhcyA9
IDQKW0RFQlVHXSAgRk1BUDogYXJlYSBDT1JFQk9PVCBmb3VuZCBAIDEwMjAwICg0MTI4MjU2
IGJ5dGVzKQpbSU5GTyBdICBDQkZTOiBtY2FjaGUgQDB4MDAwMzRlMDAgYnVpbHQgZm9yIDE1
IGZpbGVzLCB1c2VkIDB4MzMwIG9mIDB4NDAwMCBieXRlcwpbSU5GTyBdICBDQkZTOiBGb3Vu
ZCAnZmFsbGJhY2svcm9tc3RhZ2UnIEAweDIwZTAwIHNpemUgMHg0Y2Y5MCBpbiBtY2FjaGUg
QDB4MDAwMzRmYzAKW0RFQlVHXSAgQlM6IGJvb3RibG9jayB0aW1lcyAoZXhlYyAvIGNvbnNv
bGUpOiB0b3RhbCAodW5rbm93bikgLyAxIG1zCgoKW05PVEUgXSAgY29yZWJvb3QtNC4xOC0x
NS1nYzc4MmVmNDM0NSBXZWQgQXByIDE5IDE2OjA0OjU0IFVUQyAyMDIzIHJvbXN0YWdlIHN0
YXJ0aW5nIChsb2cgbGV2ZWw6IDcpLi4uCltERUJVR10gIEFQSUMgMDA6IENQVSBGYW1pbHlf
TW9kZWwgPSAwMDYxMGYzMQoKW0RFQlVHXSAgQVBJQyAwMDogKiogRW50ZXIgQW1kSW5pdFJl
c2V0IFswMDAyMDAwN10KW0RFQlVHXSAgRmNoIE9FTSBjb25maWcgaW4gSU5JVCBSRVNFVApb
REVCVUddICBBbWRJbml0UmVzZXQoKSByZXR1cm5lZCBBR0VTQV9TVUNDRVNTCltERUJVR10g
IEFQSUMgMDA6IEhlYXAgaW4gTG9jYWxDYWNoZSAoMikgYXQgMHgwMDQwMDAwMApbREVCVUdd
ICBBUElDIDAwOiAqKiBFeGl0ICBBbWRJbml0UmVzZXQgWzAwMDIwMDA3XQoKW0RFQlVHXSAg
QVBJQyAwMDogKiogRW50ZXIgQW1kSW5pdEVhcmx5IFswMDAyMDAwMl0KW0RFQlVHXSAgQW1k
SW5pdEVhcmx5KCkgcmV0dXJuZWQgQUdFU0FfU1VDQ0VTUwpbREVCVUddICBBUElDIDAwOiBI
ZWFwIGluIExvY2FsQ2FjaGUgKDIpIGF0IDB4MDA0MDAwMDAKW0RFQlVHXSAgQVBJQyAwMDog
KiogRXhpdCAgQW1kSW5pdEVhcmx5IFswMDAyMDAwMl0KCltERUJVR10gIEFQSUMgMDA6ICoq
IEVudGVyIEFtZEluaXRQb3N0IFswMDAyMDAwNl0KW0VSUk9SXSAgLS0tLS0tLS0tLS0tLVNQ
RCBSRUFEIEVSUk9SLS0tLS0tLS0tLS0KW0VSUk9SXSAgLS0tLS0tLS0tLS0tLVNQRCBSRUFE
IEVSUk9SLS0tLS0tLS0tLS0KW0RFQlVHXSAgQW1kSW5pdFBvc3QoKSByZXR1cm5lZCBBR0VT
QV9TVUNDRVNTCltERUJVR10gIEFQSUMgMDA6IEhlYXAgaW4gVGVtcE1lbSAoMykgYXQgMHgw
MDBiMDAwMApbREVCVUddICBBUElDIDAwOiAqKiBFeGl0ICBBbWRJbml0UG9zdCBbMDAwMjAw
MDZdCltERUJVR10gIENCTUVNOgpbREVCVUddICBJTUQ6IHJvb3QgQCAweDVmZmZmMDAwIDI1
NCBlbnRyaWVzLgpbREVCVUddICBJTUQ6IHJvb3QgQCAweDVmZmZlYzAwIDYyIGVudHJpZXMu
CltERUJVR10gIEZNQVA6IGFyZWEgQ09SRUJPT1QgZm91bmQgQCAxMDIwMCAoNDEyODI1NiBi
eXRlcykKW0RFQlVHXSAgTm9ybWFsIGJvb3QKW0lORk8gXSAgQ0JGUzogRm91bmQgJ2ZhbGxi
YWNrL3Bvc3RjYXInIEAweDIzNDAgc2l6ZSAweDUxYTQgaW4gbWNhY2hlIEAweDAwMDM0ZWU4
CltERUJVR10gIExvYWRpbmcgbW9kdWxlIGF0IDB4NWZmZDAwMDAgd2l0aCBlbnRyeSAweDVm
ZmQwMDMxLiBmaWxlc2l6ZTogMHg0ZWUwIG1lbXNpemU6IDB4YjFmMApbREVCVUddICBQcm9j
ZXNzaW5nIDE2MSByZWxvY3MuIE9mZnNldCB2YWx1ZSBvZiAweDVkZmQwMDAwCltERUJVR10g
IEJTOiByb21zdGFnZSB0aW1lcyAoZXhlYyAvIGNvbnNvbGUpOiB0b3RhbCAodW5rbm93bikg
LyAyIG1zCgoKW05PVEUgXSAgY29yZWJvb3QtNC4xOC0xNS1nYzc4MmVmNDM0NSBXZWQgQXBy
IDE5IDE2OjA0OjU0IFVUQyAyMDIzIHBvc3RjYXIgc3RhcnRpbmcgKGxvZyBsZXZlbDogNyku
Li4KW0RFQlVHXSAgRk1BUDogYXJlYSBDT1JFQk9PVCBmb3VuZCBAIDEwMjAwICg0MTI4MjU2
IGJ5dGVzKQpbSU5GTyBdICBDQkZTOiBGb3VuZCAnZmFsbGJhY2svcmFtc3RhZ2UnIEAweDZk
ZTQwIHNpemUgMHgyMTdmZSBpbiBtY2FjaGUgQDB4NWZmZGQyNDAKW0RFQlVHXSAgTG9hZGlu
ZyBtb2R1bGUgYXQgMHg1ZmViODAwMCB3aXRoIGVudHJ5IDB4NWZlYjgwMDAuIGZpbGVzaXpl
OiAweDQ2MDM4IG1lbXNpemU6IDB4MTE2ODI4CltERUJVR10gIFByb2Nlc3NpbmcgMzk3OCBy
ZWxvY3MuIE9mZnNldCB2YWx1ZSBvZiAweDViZWI4MDAwCltERUJVR10gIEJTOiBwb3N0Y2Fy
IHRpbWVzIChleGVjIC8gY29uc29sZSk6IHRvdGFsICh1bmtub3duKSAvIDAgbXMKCgpbTk9U
RSBdICBjb3JlYm9vdC00LjE4LTE1LWdjNzgyZWY0MzQ1IFdlZCBBcHIgMTkgMTY6MDQ6NTQg
VVRDIDIwMjMgcmFtc3RhZ2Ugc3RhcnRpbmcgKGxvZyBsZXZlbDogNykuLi4KW0RFQlVHXSAg
Tm9ybWFsIGJvb3QKCltERUJVR10gIEFQSUMgMDA6ICoqIEVudGVyIEFtZEluaXRFbnYgWzAw
MDIwMDAzXQpbREVCVUddICBXaXBlZCBIRUFQIGF0IFsxMDAwMDAwMCAtIDEwMDJmZmZmXQpb
REVCVUddICBGY2ggT0VNIGNvbmZpZyBpbiBJTklUIEVOVgpbREVCVUddICBBbWRJbml0RW52
KCkgcmV0dXJuZWQgQUdFU0FfU1VDQ0VTUwpbREVCVUddICBBUElDIDAwOiBIZWFwIGluIFN5
c3RlbU1lbSAoNCkgYXQgMHgxMDAwMDAxNApbREVCVUddICBBUElDIDAwOiAqKiBFeGl0ICBB
bWRJbml0RW52IFswMDAyMDAwM10KW0RFQlVHXSAgQlM6IEJTX1BSRV9ERVZJQ0UgZW50cnkg
dGltZXMgKGV4ZWMgLyBjb25zb2xlKTogMjUgLyAwIG1zCltJTkZPIF0gIEVudW1lcmF0aW5n
IGJ1c2VzLi4uCltERUJVR10gIFJvb3QgRGV2aWNlIHNjYW5uaW5nLi4uCltERUJVR10gIENQ
VV9DTFVTVEVSOiAwIGVuYWJsZWQKW0RFQlVHXSAgRE9NQUlOOiAwMDAwIGVuYWJsZWQKW0RF
QlVHXSAgRE9NQUlOOiAwMDAwIHNjYW5uaW5nLi4uCltERUJVR10gIFBDSTogcGNpX3NjYW5f
YnVzIGZvciBidXMgMDAKW0RFQlVHXSAgUENJOiAwMDowMC4wIFsxMDIyLzE0MTBdIGVuYWJs
ZWQKW0RFQlVHXSAgUENJOiAwMDowMC4yIFsxMDIyLzE0MTldIGVuYWJsZWQKW0RFQlVHXSAg
UENJOiAwMDowMS4wIFsxMDAyLzk5OTZdIGVuYWJsZWQKW0RFQlVHXSAgUENJOiAwMDowMS4x
IFsxMDAyLzk5MDJdIGVuYWJsZWQKW0lORk8gXSAgUENJOiBTdGF0aWMgZGV2aWNlIFBDSTog
MDA6MDIuMCBub3QgZm91bmQsIGRpc2FibGluZyBpdC4KW0RFQlVHXSAgaHVkc29uX2VuYWJs
ZSgpCltJTkZPIF0gIFBDSTogU3RhdGljIGRldmljZSBQQ0k6IDAwOjEwLjAgbm90IGZvdW5k
LCBkaXNhYmxpbmcgaXQuCltERUJVR10gIGh1ZHNvbl9lbmFibGUoKQpbSU5GTyBdICBQQ0k6
IFN0YXRpYyBkZXZpY2UgUENJOiAwMDoxMC4xIG5vdCBmb3VuZCwgZGlzYWJsaW5nIGl0Lgpb
REVCVUddICBodWRzb25fZW5hYmxlKCkKW0RFQlVHXSAgUENJOiAwMDoxMS4wIFsxMDIyLzc4
MDFdIGVuYWJsZWQKW0RFQlVHXSAgaHVkc29uX2VuYWJsZSgpCltERUJVR10gIFBDSTogMDA6
MTIuMCBbMTAyMi83ODA3XSBlbmFibGVkCltERUJVR10gIGh1ZHNvbl9lbmFibGUoKQpbREVC
VUddICBQQ0k6IDAwOjEyLjIgWzEwMjIvNzgwOF0gZW5hYmxlZApbREVCVUddICBodWRzb25f
ZW5hYmxlKCkKW0RFQlVHXSAgUENJOiAwMDoxMy4wIFsxMDIyLzc4MDddIGVuYWJsZWQKW0RF
QlVHXSAgaHVkc29uX2VuYWJsZSgpCltERUJVR10gIFBDSTogMDA6MTMuMiBbMTAyMi83ODA4
XSBlbmFibGVkCltERUJVR10gIGh1ZHNvbl9lbmFibGUoKQpbREVCVUddICBQQ0k6IDAwOjE0
LjAgWzEwMjIvNzgwYl0gZW5hYmxlZApbREVCVUddICBodWRzb25fZW5hYmxlKCkKW0RFQlVH
XSAgaHVkc29uX2VuYWJsZSgpCltERUJVR10gIFBDSTogMDA6MTQuMiBbMTAyMi83ODBkXSBl
bmFibGVkCltERUJVR10gIGh1ZHNvbl9lbmFibGUoKQpbREVCVUddICBQQ0k6IDAwOjE0LjMg
WzEwMjIvNzgwZV0gZW5hYmxlZApbREVCVUddICBodWRzb25fZW5hYmxlKCkKW0RFQlVHXSAg
UENJOiAwMDoxNC40IFsxMDIyLzc4MGZdIGVuYWJsZWQKW0RFQlVHXSAgUENJOiAwMDoxNC41
IFsxMDIyLzc4MDldIGVuYWJsZWQKW0RFQlVHXSAgaHVkc29uX2VuYWJsZSgpCltERUJVR10g
IGh1ZHNvbl9lbmFibGUoKQpbREVCVUddICBQQ0k6IDAwOjE1LjAgWzEwMjIvNDNhMF0gZW5h
YmxlZApbREVCVUddICBodWRzb25fZW5hYmxlKCkKW0RFQlVHXSAgUENJOiAwMDoxNS4xIFsx
MDIyLzQzYTFdIGVuYWJsZWQKW0RFQlVHXSAgaHVkc29uX2VuYWJsZSgpCltERUJVR10gIFBD
STogMDA6MTUuMiBbMTAyMi80M2EyXSBkaXNhYmxlZApbREVCVUddICBQQ0k6IDAwOjE2LjAg
WzEwMjIvNzgwN10gZW5hYmxlZApbREVCVUddICBQQ0k6IDAwOjE2LjIgWzEwMjIvNzgwOF0g
ZW5hYmxlZApbREVCVUddICBQQ0k6IDAwOjE4LjAgWzEwMjIvMTQwMF0gZW5hYmxlZApbREVC
VUddICBQQ0k6IDAwOjE4LjEgWzEwMjIvMTQwMV0gZW5hYmxlZApbREVCVUddICBQQ0k6IDAw
OjE4LjIgWzEwMjIvMTQwMl0gZW5hYmxlZApbREVCVUddICBQQ0k6IDAwOjE4LjMgWzEwMjIv
MTQwM10gZW5hYmxlZApbREVCVUddICBQQ0k6IDAwOjE4LjQgWzEwMjIvMTQwNF0gZW5hYmxl
ZApbREVCVUddICBQQ0k6IDAwOjE4LjUgWzEwMjIvMTQwNV0gZW5hYmxlZApbV0FSTiBdICBQ
Q0k6IExlZnRvdmVyIHN0YXRpYyBkZXZpY2VzOgpbV0FSTiBdICBQQ0k6IDAwOjAyLjAKW1dB
Uk4gXSAgUENJOiAwMDoxMC4wCltXQVJOIF0gIFBDSTogMDA6MTAuMQpbV0FSTiBdICBQQ0k6
IDAwOjE0LjEKW1dBUk4gXSAgUENJOiAwMDoxNC43CltXQVJOIF0gIFBDSTogQ2hlY2sgeW91
ciBkZXZpY2V0cmVlLmNiLgpbREVCVUddICBQQ0k6IDAwOjE0LjAgc2Nhbm5pbmcuLi4KW0RF
QlVHXSAgc2Nhbl9idXM6IGJ1cyBQQ0k6IDAwOjE0LjAgZmluaXNoZWQgaW4gMCBtc2Vjcwpb
REVCVUddICBQQ0k6IDAwOjE0LjMgc2Nhbm5pbmcuLi4KW0RFQlVHXSAgUE5QOiAwMDJlLjAg
ZGlzYWJsZWQKW0RFQlVHXSAgUE5QOiAwMDJlLjEgZGlzYWJsZWQKW0RFQlVHXSAgUE5QOiAw
MDJlLjIgZW5hYmxlZApbREVCVUddICBQTlA6IDAwMmUuMyBkaXNhYmxlZApbREVCVUddICBQ
TlA6IDAwMmUuNSBlbmFibGVkCltERUJVR10gIFBOUDogMDAyZS42IGRpc2FibGVkCltERUJV
R10gIFBOUDogMDAyZS43IGVuYWJsZWQKW0RFQlVHXSAgUE5QOiAwMDJlLjggZGlzYWJsZWQK
W0RFQlVHXSAgUE5QOiAwMDJlLjEwOCBlbmFibGVkCltERUJVR10gIFBOUDogMDAyZS45IGRp
c2FibGVkCltERUJVR10gIFBOUDogMDAyZS4xMDkgZW5hYmxlZApbREVCVUddICBQTlA6IDAw
MmUuMjA5IGVuYWJsZWQKW0RFQlVHXSAgUE5QOiAwMDJlLjMwOSBlbmFibGVkCltERUJVR10g
IFBOUDogMDAyZS40MDkgZW5hYmxlZApbREVCVUddICBQTlA6IDAwMmUuNTA5IGVuYWJsZWQK
W0RFQlVHXSAgUE5QOiAwMDJlLjYwOSBlbmFibGVkCltERUJVR10gIFBOUDogMDAyZS43MDkg
ZW5hYmxlZApbREVCVUddICBQTlA6IDAwMmUuYSBlbmFibGVkCltERUJVR10gIFBOUDogMDAy
ZS5iIGVuYWJsZWQKW0RFQlVHXSAgUE5QOiAwMDJlLmQgZGlzYWJsZWQKW0RFQlVHXSAgUE5Q
OiAwMDJlLmUgZGlzYWJsZWQKW0RFQlVHXSAgUE5QOiAwMDJlLmYgZW5hYmxlZApbREVCVUdd
ICBQTlA6IDAwMmUuMTQgZW5hYmxlZApbREVCVUddICBQTlA6IDAwMmUuMTYgZGlzYWJsZWQK
W0RFQlVHXSAgc2Nhbl9idXM6IGJ1cyBQQ0k6IDAwOjE0LjMgZmluaXNoZWQgaW4gMCBtc2Vj
cwpbREVCVUddICBQQ0k6IDAwOjE0LjQgc2Nhbm5pbmcuLi4KW0RFQlVHXSAgUENJOiBwY2lf
c2Nhbl9idXMgZm9yIGJ1cyAwMQpbREVCVUddICBzY2FuX2J1czogYnVzIFBDSTogMDA6MTQu
NCBmaW5pc2hlZCBpbiAwIG1zZWNzCltERUJVR10gIFBDSTogMDA6MTUuMCBzY2FubmluZy4u
LgpbREVCVUddICBQQ0k6IHBjaV9zY2FuX2J1cyBmb3IgYnVzIDAyCltERUJVR10gIHNjYW5f
YnVzOiBidXMgUENJOiAwMDoxNS4wIGZpbmlzaGVkIGluIDAgbXNlY3MKW0RFQlVHXSAgUENJ
OiAwMDoxNS4xIHNjYW5uaW5nLi4uCltERUJVR10gIFBDSTogcGNpX3NjYW5fYnVzIGZvciBi
dXMgMDMKW0RFQlVHXSAgUENJOiAwMzowMC4wIFsxYjIxLzEwNDJdIGVuYWJsZWQKW0RFQlVH
XSAgc2Nhbl9idXM6IGJ1cyBQQ0k6IDAwOjE1LjEgZmluaXNoZWQgaW4gMCBtc2VjcwpbREVC
VUddICBzY2FuX2J1czogYnVzIERPTUFJTjogMDAwMCBmaW5pc2hlZCBpbiAwIG1zZWNzCltE
RUJVR10gIHNjYW5fYnVzOiBidXMgUm9vdCBEZXZpY2UgZmluaXNoZWQgaW4gMCBtc2Vjcwpb
SU5GTyBdICBkb25lCltERUJVR10gIEJTOiBCU19ERVZfRU5VTUVSQVRFIHJ1biB0aW1lcyAo
ZXhlYyAvIGNvbnNvbGUpOiAxIC8gMCBtcwpbREVCVUddICBmb3VuZCBWR0EgYXQgUENJOiAw
MDowMS4wCltERUJVR10gIFNldHRpbmcgdXAgVkdBIGZvciBQQ0k6IDAwOjAxLjAKW0RFQlVH
XSAgU2V0dGluZyBQQ0lfQlJJREdFX0NUTF9WR0EgZm9yIGJyaWRnZSBET01BSU46IDAwMDAK
W0RFQlVHXSAgU2V0dGluZyBQQ0lfQlJJREdFX0NUTF9WR0EgZm9yIGJyaWRnZSBSb290IERl
dmljZQpbSU5GTyBdICBBbGxvY2F0aW5nIHJlc291cmNlcy4uLgpbSU5GTyBdICBSZWFkaW5n
IHJlc291cmNlcy4uLgpbREVCVUddICBmeF9kZXZzPTB4MQpbRVJST1JdICBQTlA6IDAwMmUu
NyBtaXNzaW5nIHJlYWRfcmVzb3VyY2VzCltERUJVR10gIEFkZGluZyBQQ0llIGVuaGFuY2Vk
IGNvbmZpZyBzcGFjZSBCQVIgMHhmODAwMDAwMC0weGZjMDAwMDAwLgpbSU5GTyBdICBEb25l
IHJlYWRpbmcgcmVzb3VyY2VzLgpbRVJST1JdICBza2lwcGluZyBQTlA6IDAwMmUuN0BmNCBm
aXhlZCByZXNvdXJjZSwgc2l6ZT0wIQpbRVJST1JdICBza2lwcGluZyBQTlA6IDAwMmUuN0Bl
MCBmaXhlZCByZXNvdXJjZSwgc2l6ZT0wIQpbRVJST1JdICBza2lwcGluZyBQTlA6IDAwMmUu
N0BlMSBmaXhlZCByZXNvdXJjZSwgc2l6ZT0wIQpbRVJST1JdICBza2lwcGluZyBQTlA6IDAw
MmUuMTA4QGUwIGZpeGVkIHJlc291cmNlLCBzaXplPTAhCltFUlJPUl0gIHNraXBwaW5nIFBO
UDogMDAyZS4xMDhAZTIgZml4ZWQgcmVzb3VyY2UsIHNpemU9MCEKW0VSUk9SXSAgc2tpcHBp
bmcgUE5QOiAwMDJlLjEwOEBlNCBmaXhlZCByZXNvdXJjZSwgc2l6ZT0wIQpbRVJST1JdICBz
a2lwcGluZyBQTlA6IDAwMmUuMTA4QGYwIGZpeGVkIHJlc291cmNlLCBzaXplPTAhCltFUlJP
Ul0gIHNraXBwaW5nIFBOUDogMDAyZS4xMDhAZjQgZml4ZWQgcmVzb3VyY2UsIHNpemU9MCEK
W0VSUk9SXSAgc2tpcHBpbmcgUE5QOiAwMDJlLjEwOEBmNSBmaXhlZCByZXNvdXJjZSwgc2l6
ZT0wIQpbRVJST1JdICBza2lwcGluZyBQTlA6IDAwMmUuMTA4QGY2IGZpeGVkIHJlc291cmNl
LCBzaXplPTAhCltFUlJPUl0gIHNraXBwaW5nIFBOUDogMDAyZS4xMDhAZjcgZml4ZWQgcmVz
b3VyY2UsIHNpemU9MCEKW0VSUk9SXSAgc2tpcHBpbmcgUE5QOiAwMDJlLjIwOUBlMCBmaXhl
ZCByZXNvdXJjZSwgc2l6ZT0wIQpbRVJST1JdICBza2lwcGluZyBQTlA6IDAwMmUuMzA5QGU0
IGZpeGVkIHJlc291cmNlLCBzaXplPTAhCltFUlJPUl0gIHNraXBwaW5nIFBOUDogMDAyZS4z
MDlAZTUgZml4ZWQgcmVzb3VyY2UsIHNpemU9MCEKW0VSUk9SXSAgc2tpcHBpbmcgUE5QOiAw
MDJlLjQwOUBmMCBmaXhlZCByZXNvdXJjZSwgc2l6ZT0wIQpbRVJST1JdICBza2lwcGluZyBQ
TlA6IDAwMmUuYUBlNiBmaXhlZCByZXNvdXJjZSwgc2l6ZT0wIQpbRVJST1JdICBza2lwcGlu
ZyBQTlA6IDAwMmUuYUBlNyBmaXhlZCByZXNvdXJjZSwgc2l6ZT0wIQpbRVJST1JdICBza2lw
cGluZyBQTlA6IDAwMmUuYkBlMiBmaXhlZCByZXNvdXJjZSwgc2l6ZT0wIQpbRVJST1JdICBz
a2lwcGluZyBQTlA6IDAwMmUuYkBlNCBmaXhlZCByZXNvdXJjZSwgc2l6ZT0wIQpbRVJST1Jd
ICBza2lwcGluZyBQTlA6IDAwMmUuZkBlNiBmaXhlZCByZXNvdXJjZSwgc2l6ZT0wIQpbRVJS
T1JdICBza2lwcGluZyBQTlA6IDAwMmUuMTRAZTAgZml4ZWQgcmVzb3VyY2UsIHNpemU9MCEK
W0lORk8gXSAgU2V0dGluZyByZXNvdXJjZXMuLi4KW0RFQlVHXSAgbm9kZSAwOiBtbWlvX2Jh
c2VrPTAwMjAwMDAwLCBiYXNlaz0wMDQwMDAwMCwgbGltaXRrPTAwNWZjMDAwCltJTkZPIF0g
IGFkZF91bWFfcmVzb3VyY2VfYmVsb3dfdG9sbTogdW1hIHNpemUgMHgyMDAwMDAwMCwgbWVt
b3J5IHN0YXJ0IDB4NjAwMDAwMDAKW0RFQlVHXSAgUENJOiAwMDowMC4yIDQ0IDwtIFsweDAw
MDAwMDAwZjAxMDAwMDAgLSAweDAwMDAwMDAwZjAxN2ZmZmZdIHNpemUgMHgwMDA4MDAwMCBn
cmFuIDB4MTMgbWVtCltERUJVR10gIFBDSTogMDA6MDEuMCAxMCA8LSBbMHgwMDAwMDAwMGUw
MDAwMDAwIC0gMHgwMDAwMDAwMGVmZmZmZmZmXSBzaXplIDB4MTAwMDAwMDAgZ3JhbiAweDFj
IHByZWZtZW0KW0RFQlVHXSAgUENJOiAwMDowMS4wIDE0IDwtIFsweDAwMDAwMDAwMDAwMDEw
MDAgLSAweDAwMDAwMDAwMDAwMDEwZmZdIHNpemUgMHgwMDAwMDEwMCBncmFuIDB4MDggaW8K
W0RFQlVHXSAgUENJOiAwMDowMS4wIDE4IDwtIFsweDAwMDAwMDAwZjAxODAwMDAgLSAweDAw
MDAwMDAwZjAxYmZmZmZdIHNpemUgMHgwMDA0MDAwMCBncmFuIDB4MTIgbWVtCltERUJVR10g
IFBDSTogMDA6MDEuMSAxMCA8LSBbMHgwMDAwMDAwMGYwMWMwMDAwIC0gMHgwMDAwMDAwMGYw
MWMzZmZmXSBzaXplIDB4MDAwMDQwMDAgZ3JhbiAweDBlIG1lbQpbREVCVUddICBQQ0k6IDAw
OjExLjAgMTAgPC0gWzB4MDAwMDAwMDAwMDAwMTQxMCAtIDB4MDAwMDAwMDAwMDAwMTQxN10g
c2l6ZSAweDAwMDAwMDA4IGdyYW4gMHgwMyBpbwpbREVCVUddICBQQ0k6IDAwOjExLjAgMTQg
PC0gWzB4MDAwMDAwMDAwMDAwMTQyMCAtIDB4MDAwMDAwMDAwMDAwMTQyM10gc2l6ZSAweDAw
MDAwMDA0IGdyYW4gMHgwMiBpbwpbREVCVUddICBQQ0k6IDAwOjExLjAgMTggPC0gWzB4MDAw
MDAwMDAwMDAwMTQxOCAtIDB4MDAwMDAwMDAwMDAwMTQxZl0gc2l6ZSAweDAwMDAwMDA4IGdy
YW4gMHgwMyBpbwpbREVCVUddICBQQ0k6IDAwOjExLjAgMWMgPC0gWzB4MDAwMDAwMDAwMDAw
MTQyNCAtIDB4MDAwMDAwMDAwMDAwMTQyN10gc2l6ZSAweDAwMDAwMDA0IGdyYW4gMHgwMiBp
bwpbREVCVUddICBQQ0k6IDAwOjExLjAgMjAgPC0gWzB4MDAwMDAwMDAwMDAwMTQwMCAtIDB4
MDAwMDAwMDAwMDAwMTQwZl0gc2l6ZSAweDAwMDAwMDEwIGdyYW4gMHgwNCBpbwpbREVCVUdd
ICBQQ0k6IDAwOjExLjAgMjQgPC0gWzB4MDAwMDAwMDBmMDFjYzAwMCAtIDB4MDAwMDAwMDBm
MDFjYzdmZl0gc2l6ZSAweDAwMDAwODAwIGdyYW4gMHgwYiBtZW0KW0RFQlVHXSAgUENJOiAw
MDoxMi4wIDEwIDwtIFsweDAwMDAwMDAwZjAxYzgwMDAgLSAweDAwMDAwMDAwZjAxYzhmZmZd
IHNpemUgMHgwMDAwMTAwMCBncmFuIDB4MGMgbWVtCltERUJVR10gIFBDSTogMDA6MTIuMiAx
MCA8LSBbMHgwMDAwMDAwMGYwMWNkMDAwIC0gMHgwMDAwMDAwMGYwMWNkMGZmXSBzaXplIDB4
MDAwMDAxMDAgZ3JhbiAweDA4IG1lbQpbREVCVUddICBQQ0k6IDAwOjEzLjAgMTAgPC0gWzB4
MDAwMDAwMDBmMDFjOTAwMCAtIDB4MDAwMDAwMDBmMDFjOWZmZl0gc2l6ZSAweDAwMDAxMDAw
IGdyYW4gMHgwYyBtZW0KW0RFQlVHXSAgUENJOiAwMDoxMy4yIDEwIDwtIFsweDAwMDAwMDAw
ZjAxY2UwMDAgLSAweDAwMDAwMDAwZjAxY2UwZmZdIHNpemUgMHgwMDAwMDEwMCBncmFuIDB4
MDggbWVtCltERUJVR10gIFBDSTogMDA6MTQuMiAxMCA8LSBbMHgwMDAwMDAwMGYwMWM0MDAw
IC0gMHgwMDAwMDAwMGYwMWM3ZmZmXSBzaXplIDB4MDAwMDQwMDAgZ3JhbiAweDBlIG1lbTY0
CltERUJVR10gIFBOUDogMDAyZS4yIDYwIDwtIFsweDAwMDAwMDAwMDAwMDAzZjggLSAweDAw
MDAwMDAwMDAwMDAzZmZdIHNpemUgMHgwMDAwMDAwOCBncmFuIDB4MDMgaW8KW0RFQlVHXSAg
UE5QOiAwMDJlLjIgNzAgPC0gWzB4MDAwMDAwMDAwMDAwMDAwNCAtIDB4MDAwMDAwMDAwMDAw
MDAwNF0gc2l6ZSAweDAwMDAwMDAxIGdyYW4gMHgwMCBpcnEKW0RFQlVHXSAgUE5QOiAwMDJl
LjUgNjAgPC0gWzB4MDAwMDAwMDAwMDAwMDA2MCAtIDB4MDAwMDAwMDAwMDAwMDA2MF0gc2l6
ZSAweDAwMDAwMDAxIGdyYW4gMHgwMCBpbwpbREVCVUddICBQTlA6IDAwMmUuNSA2MiA8LSBb
MHgwMDAwMDAwMDAwMDAwMDY0IC0gMHgwMDAwMDAwMDAwMDAwMDY0XSBzaXplIDB4MDAwMDAw
MDEgZ3JhbiAweDAwIGlvCltERUJVR10gIFBOUDogMDAyZS41IDcwIDwtIFsweDAwMDAwMDAw
MDAwMDAwMDEgLSAweDAwMDAwMDAwMDAwMDAwMDFdIHNpemUgMHgwMDAwMDAwMSBncmFuIDB4
MDAgaXJxCltERUJVR10gIFBOUDogMDAyZS41IDcyIDwtIFsweDAwMDAwMDAwMDAwMDAwMGMg
LSAweDAwMDAwMDAwMDAwMDAwMGNdIHNpemUgMHgwMDAwMDAwMSBncmFuIDB4MDAgaXJxCltF
UlJPUl0gIFBOUDogMDAyZS43IG1pc3Npbmcgc2V0X3Jlc291cmNlcwpbREVCVUddICBQTlA6
IDAwMmUuMTA4IGUwIDwtIFsweDAwMDAwMDAwMDAwMDAwZmYgLSAweDAwMDAwMDAwMDAwMDAw
ZmVdIHNpemUgMHgwMDAwMDAwMCBncmFuIDB4MDAgaXJxCltERUJVR10gIFBOUDogMDAyZS4x
MDggZTIgPC0gWzB4MDAwMDAwMDAwMDAwMDBmZiAtIDB4MDAwMDAwMDAwMDAwMDBmZV0gc2l6
ZSAweDAwMDAwMDAwIGdyYW4gMHgwMCBpcnEKW0RFQlVHXSAgUE5QOiAwMDJlLjEwOCBlNCA8
LSBbMHgwMDAwMDAwMDAwMDAwMGZmIC0gMHgwMDAwMDAwMDAwMDAwMGZlXSBzaXplIDB4MDAw
MDAwMDAgZ3JhbiAweDAwIGlycQpbREVCVUddICBQTlA6IDAwMmUuMTA4IGYwIDwtIFsweDAw
MDAwMDAwMDAwMDAwZmYgLSAweDAwMDAwMDAwMDAwMDAwZmVdIHNpemUgMHgwMDAwMDAwMCBn
cmFuIDB4MDAgaXJxCltERUJVR10gIFBOUDogMDAyZS4xMDggZjQgPC0gWzB4MDAwMDAwMDAw
MDAwMDAwOCAtIDB4MDAwMDAwMDAwMDAwMDAwN10gc2l6ZSAweDAwMDAwMDAwIGdyYW4gMHgw
MCBpcnEKW0RFQlVHXSAgUE5QOiAwMDJlLjEwOCBmNSA8LSBbMHgwMDAwMDAwMDAwMDAwMGZm
IC0gMHgwMDAwMDAwMDAwMDAwMGZlXSBzaXplIDB4MDAwMDAwMDAgZ3JhbiAweDAwIGlycQpb
REVCVUddICBQTlA6IDAwMmUuMTA4IGY2IDwtIFsweDAwMDAwMDAwMDAwMDAwMDAgLSAweGZm
ZmZmZmZmZmZmZmZmZmZdIHNpemUgMHgwMDAwMDAwMCBncmFuIDB4MDAgaXJxCltERUJVR10g
IFBOUDogMDAyZS4xMDggZjcgPC0gWzB4MDAwMDAwMDAwMDAwMDBmZiAtIDB4MDAwMDAwMDAw
MDAwMDBmZV0gc2l6ZSAweDAwMDAwMDAwIGdyYW4gMHgwMCBpcnEKW0RFQlVHXSAgUE5QOiAw
MDJlLjIwOSBlMCA8LSBbMHgwMDAwMDAwMDAwMDAwMGZmIC0gMHgwMDAwMDAwMDAwMDAwMGZl
XSBzaXplIDB4MDAwMDAwMDAgZ3JhbiAweDAwIGlycQpbREVCVUddICBQTlA6IDAwMmUuMzA5
IGU0IDwtIFsweDAwMDAwMDAwMDAwMDAwN2YgLSAweDAwMDAwMDAwMDAwMDAwN2VdIHNpemUg
MHgwMDAwMDAwMCBncmFuIDB4MDAgaXJxCltERUJVR10gIFBOUDogMDAyZS4zMDkgZTUgPC0g
WzB4MDAwMDAwMDAwMDAwMDAwMCAtIDB4ZmZmZmZmZmZmZmZmZmZmZl0gc2l6ZSAweDAwMDAw
MDAwIGdyYW4gMHgwMCBpcnEKW0RFQlVHXSAgUE5QOiAwMDJlLjQwOSBmMCA8LSBbMHgwMDAw
MDAwMDAwMDAwMGZmIC0gMHgwMDAwMDAwMDAwMDAwMGZlXSBzaXplIDB4MDAwMDAwMDAgZ3Jh
biAweDAwIGlycQpbREVCVUddICBQTlA6IDAwMmUuNTA5IGY0IDwtIFsweDAwMDAwMDAwMDAw
MDAwZmYgLSAweDAwMDAwMDAwMDAwMDAwZmZdIHNpemUgMHgwMDAwMDAwMSBncmFuIDB4MDAg
aXJxCltXQVJOIF0gIFBOUDogMDAyZS41MDkgZjUgaXJxIHNpemU6IDB4MDAwMDAwMDAwMSBu
b3QgYXNzaWduZWQgaW4gZGV2aWNldHJlZQpbV0FSTiBdICBQTlA6IDAwMmUuNjA5IGY0IGly
cSBzaXplOiAweDAwMDAwMDAwMDEgbm90IGFzc2lnbmVkIGluIGRldmljZXRyZWUKW1dBUk4g
XSAgUE5QOiAwMDJlLjYwOSBmNSBpcnEgc2l6ZTogMHgwMDAwMDAwMDAxIG5vdCBhc3NpZ25l
ZCBpbiBkZXZpY2V0cmVlCltERUJVR10gIFBOUDogMDAyZS5hIGU2IDwtIFsweDAwMDAwMDAw
MDAwMDAwNGMgLSAweDAwMDAwMDAwMDAwMDAwNGJdIHNpemUgMHgwMDAwMDAwMCBncmFuIDB4
MDAgaXJxCltERUJVR10gIFBOUDogMDAyZS5hIGU3IDwtIFsweDAwMDAwMDAwMDAwMDAwMTEg
LSAweDAwMDAwMDAwMDAwMDAwMTBdIHNpemUgMHgwMDAwMDAwMCBncmFuIDB4MDAgaXJxCltE
RUJVR10gIFBOUDogMDAyZS5hIGYyIDwtIFsweDAwMDAwMDAwMDAwMDAwNWQgLSAweDAwMDAw
MDAwMDAwMDAwNWRdIHNpemUgMHgwMDAwMDAwMSBncmFuIDB4MDAgaXJxCltERUJVR10gIFBO
UDogMDAyZS5iIDYwIDwtIFsweDAwMDAwMDAwMDAwMDAyOTAgLSAweDAwMDAwMDAwMDAwMDAy
OTFdIHNpemUgMHgwMDAwMDAwMiBncmFuIDB4MDEgaW8KW0RFQlVHXSAgUE5QOiAwMDJlLmIg
NjIgPC0gWzB4MDAwMDAwMDAwMDAwMDAwMCAtIDB4MDAwMDAwMDAwMDAwMDAwMV0gc2l6ZSAw
eDAwMDAwMDAyIGdyYW4gMHgwMSBpbwpbREVCVUddICBQTlA6IDAwMmUuYiA3MCA8LSBbMHgw
MDAwMDAwMDAwMDAwMDAwIC0gMHgwMDAwMDAwMDAwMDAwMDAwXSBzaXplIDB4MDAwMDAwMDEg
Z3JhbiAweDAwIGlvCltERUJVR10gIFBOUDogMDAyZS5iIGUyIDwtIFsweDAwMDAwMDAwMDAw
MDAwN2YgLSAweDAwMDAwMDAwMDAwMDAwN2VdIHNpemUgMHgwMDAwMDAwMCBncmFuIDB4MDAg
aXJxCltERUJVR10gIFBOUDogMDAyZS5iIGU0IDwtIFsweDAwMDAwMDAwMDAwMDAwZjEgLSAw
eDAwMDAwMDAwMDAwMDAwZjBdIHNpemUgMHgwMDAwMDAwMCBncmFuIDB4MDAgaXJxCltXQVJO
IF0gIFBOUDogMDAyZS5iIGYwIGlycSBzaXplOiAweDAwMDAwMDAwMDEgbm90IGFzc2lnbmVk
IGluIGRldmljZXRyZWUKW0RFQlVHXSAgUE5QOiAwMDJlLmYgZTYgPC0gWzB4MDAwMDAwMDAw
MDAwMDAwNyAtIDB4MDAwMDAwMDAwMDAwMDAwNl0gc2l6ZSAweDAwMDAwMDAwIGdyYW4gMHgw
MCBpcnEKW0RFQlVHXSAgUE5QOiAwMDJlLjE0IGUwIDwtIFsweDAwMDAwMDAwMDAwMDAwMDAg
LSAweGZmZmZmZmZmZmZmZmZmZmZdIHNpemUgMHgwMDAwMDAwMCBncmFuIDB4MDAgaXJxCltE
RUJVR10gIFBDSTogMDA6MTQuNCAxYyA8LSBbMHgwMDAwMDAwMDAwMDBmZmZmIC0gMHgwMDAw
MDAwMDAwMDBmZmZlXSBzaXplIDB4MDAwMDAwMDAgZ3JhbiAweDBjIGJ1cyAwMSBpbwpbREVC
VUddICBQQ0k6IDAwOjE0LjQgMjQgPC0gWzB4MDAwMDAwMDBmN2ZmZmZmZiAtIDB4MDAwMDAw
MDBmN2ZmZmZmZV0gc2l6ZSAweDAwMDAwMDAwIGdyYW4gMHgxNCBidXMgMDEgcHJlZm1lbQpb
REVCVUddICBQQ0k6IDAwOjE0LjQgMjAgPC0gWzB4MDAwMDAwMDBmN2ZmZmZmZiAtIDB4MDAw
MDAwMDBmN2ZmZmZmZV0gc2l6ZSAweDAwMDAwMDAwIGdyYW4gMHgxNCBidXMgMDEgbWVtCltE
RUJVR10gIFBDSTogMDA6MTQuNSAxMCA8LSBbMHgwMDAwMDAwMGYwMWNhMDAwIC0gMHgwMDAw
MDAwMGYwMWNhZmZmXSBzaXplIDB4MDAwMDEwMDAgZ3JhbiAweDBjIG1lbQpbREVCVUddICBQ
Q0k6IDAwOjE1LjAgMWMgPC0gWzB4MDAwMDAwMDAwMDAwZmZmZiAtIDB4MDAwMDAwMDAwMDAw
ZmZmZV0gc2l6ZSAweDAwMDAwMDAwIGdyYW4gMHgwYyBidXMgMDIgaW8KW0RFQlVHXSAgUENJ
OiAwMDoxNS4wIDI0IDwtIFsweDAwMDAwMDAwZjdmZmZmZmYgLSAweDAwMDAwMDAwZjdmZmZm
ZmVdIHNpemUgMHgwMDAwMDAwMCBncmFuIDB4MTQgYnVzIDAyIHByZWZtZW0KW0RFQlVHXSAg
UENJOiAwMDoxNS4wIDIwIDwtIFsweDAwMDAwMDAwZjdmZmZmZmYgLSAweDAwMDAwMDAwZjdm
ZmZmZmVdIHNpemUgMHgwMDAwMDAwMCBncmFuIDB4MTQgYnVzIDAyIG1lbQpbREVCVUddICBQ
Q0k6IDAwOjE1LjEgMWMgPC0gWzB4MDAwMDAwMDAwMDAwZmZmZiAtIDB4MDAwMDAwMDAwMDAw
ZmZmZV0gc2l6ZSAweDAwMDAwMDAwIGdyYW4gMHgwYyBidXMgMDMgaW8KW0RFQlVHXSAgUENJ
OiAwMDoxNS4xIDI0IDwtIFsweDAwMDAwMDAwZjdmZmZmZmYgLSAweDAwMDAwMDAwZjdmZmZm
ZmVdIHNpemUgMHgwMDAwMDAwMCBncmFuIDB4MTQgYnVzIDAzIHByZWZtZW0KW0RFQlVHXSAg
UENJOiAwMDoxNS4xIDIwIDwtIFsweDAwMDAwMDAwZjAwMDAwMDAgLSAweDAwMDAwMDAwZjAw
ZmZmZmZdIHNpemUgMHgwMDEwMDAwMCBncmFuIDB4MTQgYnVzIDAzIG1lbQpbREVCVUddICBQ
Q0k6IDAzOjAwLjAgMTAgPC0gWzB4MDAwMDAwMDBmMDAwMDAwMCAtIDB4MDAwMDAwMDBmMDAw
N2ZmZl0gc2l6ZSAweDAwMDA4MDAwIGdyYW4gMHgwZiBtZW02NApbREVCVUddICBQQ0k6IDAw
OjE2LjAgMTAgPC0gWzB4MDAwMDAwMDBmMDFjYjAwMCAtIDB4MDAwMDAwMDBmMDFjYmZmZl0g
c2l6ZSAweDAwMDAxMDAwIGdyYW4gMHgwYyBtZW0KW0RFQlVHXSAgUENJOiAwMDoxNi4yIDEw
IDwtIFsweDAwMDAwMDAwZjAxY2YwMDAgLSAweDAwMDAwMDAwZjAxY2YwZmZdIHNpemUgMHgw
MDAwMDEwMCBncmFuIDB4MDggbWVtCltJTkZPIF0gIERvbmUgc2V0dGluZyByZXNvdXJjZXMu
CltJTkZPIF0gIERvbmUgYWxsb2NhdGluZyByZXNvdXJjZXMuCltERUJVR10gIEJTOiBCU19E
RVZfUkVTT1VSQ0VTIHJ1biB0aW1lcyAoZXhlYyAvIGNvbnNvbGUpOiAxIC8gMCBtcwoKW0RF
QlVHXSAgQVBJQyAwMDogKiogRW50ZXIgQW1kSW5pdE1pZCBbMDAwMjAwMDVdCltERUJVR10g
IEFtZEluaXRNaWQoKSByZXR1cm5lZCBBR0VTQV9TVUNDRVNTCltERUJVR10gIEFQSUMgMDA6
IEhlYXAgaW4gU3lzdGVtTWVtICg0KSBhdCAweDEwMDAwMDE0CltERUJVR10gIEFQSUMgMDA6
ICoqIEV4aXQgIEFtZEluaXRNaWQgWzAwMDIwMDA1XQpbREVCVUddICBQQ0lfSU5UUiB0YWJs
ZXM6IFdyaXRpbmcgcmVnaXN0ZXJzIEMwMC9DMDEgZm9yIFBJQyBtb2RlIFBDSSBJUlEgcm91
dGluZzoKW0RFQlVHXSAgCVBDSV9JTlRSX0lOREVYCQlQQ0lfSU5UUl9EQVRBCltERUJVR10g
IAkweDAwIElOVEEjCQk6IDB4MUYKW0RFQlVHXSAgCTB4MDEgSU5UQiMJCTogMHgxRgpbREVC
VUddICAJMHgwMiBJTlRDIwkJOiAweDFGCltERUJVR10gIAkweDAzIElOVEQjCQk6IDB4MUYK
W0RFQlVHXSAgCTB4MDQgSU5URSMJCTogMHgxRgpbREVCVUddICAJMHgwNSBJTlRGIwkJOiAw
eDFGCltERUJVR10gIAkweDA2IElOVEcjCQk6IDB4MUYKW0RFQlVHXSAgCTB4MDcgSU5USCMJ
CTogMHgxRgpbREVCVUddICAJMHgwOCBNaXNjCQk6IDB4MEEKW0RFQlVHXSAgCTB4MDkgTWlz
YzAJCTogMHhGMQpbREVCVUddICAJMHgwQSBNaXNjMQkJOiAweDAwCltERUJVR10gIAkweDBC
IE1pc2MyCQk6IDB4MDAKW0RFQlVHXSAgCTB4MEMgU2VyIElSUSBJTlRBCTogMHgxRgpbREVC
VUddICAJMHgwRCBTZXIgSVJRIElOVEIJOiAweDFGCltERUJVR10gIAkweDBFIFNlciBJUlEg
SU5UQwk6IDB4MUYKW0RFQlVHXSAgCTB4MEYgU2VyIElSUSBJTlRECTogMHgxRgpbREVCVUdd
ICAJMHgxMCBTQ0kJCTogMHgwOQpbREVCVUddICAJMHgxMSBTTUJVUzAJCTogMHgxRgpbREVC
VUddICAJMHgxMiBBU0YJCTogMHgxRgpbREVCVUddICAJMHgxMyBIREEJCTogMHgxRgpbREVC
VUddICAJMHgxNCBTRAkJCTogMHgxRgpbREVCVUddICAJMHgxNSBHRUMJCTogMHgxRgpbREVC
VUddICAJMHgxNiBQZXJNb24JCTogMHgxRgpbREVCVUddICAJMHgyMCBJTUMgSU5UMAkJOiAw
eDFGCltERUJVR10gIAkweDIxIElNQyBJTlQxCQk6IDB4MUYKW0RFQlVHXSAgCTB4MjIgSU1D
IElOVDIJCTogMHgxRgpbREVCVUddICAJMHgyMyBJTUMgSU5UMwkJOiAweDFGCltERUJVR10g
IAkweDI0IElNQyBJTlQ0CQk6IDB4MUYKW0RFQlVHXSAgCTB4MjUgSU1DIElOVDUJCTogMHgx
RgpbREVCVUddICAJMHgzMCBEZXYxOC4wIElOVEEJOiAweDFGCltERUJVR10gIAkweDMxIERl
djE4LjIgSU5UQgk6IDB4MUYKW0RFQlVHXSAgCTB4MzIgRGV2MTkuMCBJTlRBCTogMHgxRgpb
REVCVUddICAJMHgzMyBEZXYxOS4yIElOVEIJOiAweDFGCltERUJVR10gIAkweDM0IERldjIy
LjAgSU5UQQk6IDB4MUYKW0RFQlVHXSAgCTB4MzUgRGV2MjIuMiBJTlRCCTogMHgxRgpbREVC
VUddICAJMHgzNiBEZXYyMC41IElOVEMJOiAweDFGCltERUJVR10gIAkweDQwIElERQkJOiAw
eDFGCltERUJVR10gIAkweDQxIFNBVEEJCTogMHgxRgpbREVCVUddICAJMHg1MCBHUFBJbnQw
CQk6IDB4MUYKW0RFQlVHXSAgCTB4NTEgR1BQSW50MQkJOiAweDFGCltERUJVR10gIAkweDUy
IEdQUEludDIJCTogMHgxRgpbREVCVUddICAJMHg1MyBHUFBJbnQzCQk6IDB4MUYKW0RFQlVH
XSAgUENJX0lOVFIgdGFibGVzOiBXcml0aW5nIHJlZ2lzdGVycyBDMDAvQzAxIGZvciBBUElD
IG1vZGUgUENJIElSUSByb3V0aW5nOgpbREVCVUddICAJUENJX0lOVFJfSU5ERVgJCVBDSV9J
TlRSX0RBVEEKW0RFQlVHXSAgCTB4MDAgSU5UQSMJCTogMHgxMApbREVCVUddICAJMHgwMSBJ
TlRCIwkJOiAweDExCltERUJVR10gIAkweDAyIElOVEMjCQk6IDB4MTIKW0RFQlVHXSAgCTB4
MDMgSU5URCMJCTogMHgxMwpbREVCVUddICAJMHgwNCBJTlRFIwkJOiAweDE0CltERUJVR10g
IAkweDA1IElOVEYjCQk6IDB4MTUKW0RFQlVHXSAgCTB4MDYgSU5URyMJCTogMHgxNgpbREVC
VUddICAJMHgwNyBJTlRIIwkJOiAweDE3CltERUJVR10gIAkweDA4IE1pc2MJCTogMHgwMApb
REVCVUddICAJMHgwOSBNaXNjMAkJOiAweDAwCltERUJVR10gIAkweDBBIE1pc2MxCQk6IDB4
MDAKW0RFQlVHXSAgCTB4MEIgTWlzYzIJCTogMHgwMApbREVCVUddICAJMHgwQyBTZXIgSVJR
IElOVEEJOiAweDFGCltERUJVR10gIAkweDBEIFNlciBJUlEgSU5UQgk6IDB4MUYKW0RFQlVH
XSAgCTB4MEUgU2VyIElSUSBJTlRDCTogMHgxRgpbREVCVUddICAJMHgwRiBTZXIgSVJRIElO
VEQJOiAweDFGCltERUJVR10gIAkweDEwIFNDSQkJOiAweDA5CltERUJVR10gIAkweDExIFNN
QlVTMAkJOiAweDFGCltERUJVR10gIAkweDEyIEFTRgkJOiAweDFGCltERUJVR10gIAkweDEz
IEhEQQkJOiAweDEwCltERUJVR10gIAkweDE0IFNECQkJOiAweDFGCltERUJVR10gIAkweDE1
IEdFQwkJOiAweDEwCltERUJVR10gIAkweDE2IFBlck1vbgkJOiAweDFGCltERUJVR10gIAkw
eDIwIElNQyBJTlQwCQk6IDB4MDUKW0RFQlVHXSAgCTB4MjEgSU1DIElOVDEJCTogMHgxRgpb
REVCVUddICAJMHgyMiBJTUMgSU5UMgkJOiAweDFGCltERUJVR10gIAkweDIzIElNQyBJTlQz
CQk6IDB4MUYKW0RFQlVHXSAgCTB4MjQgSU1DIElOVDQJCTogMHgxRgpbREVCVUddICAJMHgy
NSBJTUMgSU5UNQkJOiAweDFGCltERUJVR10gIAkweDMwIERldjE4LjAgSU5UQQk6IDB4MTIK
W0RFQlVHXSAgCTB4MzEgRGV2MTguMiBJTlRCCTogMHgxMQpbREVCVUddICAJMHgzMiBEZXYx
OS4wIElOVEEJOiAweDEyCltERUJVR10gIAkweDMzIERldjE5LjIgSU5UQgk6IDB4MTEKW0RF
QlVHXSAgCTB4MzQgRGV2MjIuMCBJTlRBCTogMHgxMgpbREVCVUddICAJMHgzNSBEZXYyMi4y
IElOVEIJOiAweDExCltERUJVR10gIAkweDM2IERldjIwLjUgSU5UQwk6IDB4MTIKW0RFQlVH
XSAgCTB4NDAgSURFCQk6IDB4MTEKW0RFQlVHXSAgCTB4NDEgU0FUQQkJOiAweDEzCltERUJV
R10gIAkweDUwIEdQUEludDAJCTogMHgxMApbREVCVUddICAJMHg1MSBHUFBJbnQxCQk6IDB4
MTEKW0RFQlVHXSAgCTB4NTIgR1BQSW50MgkJOiAweDEyCltERUJVR10gIAkweDUzIEdQUElu
dDMJCTogMHgxMwpbV0FSTiBdICBDYW4ndCB3cml0ZSBQQ0kgSVJRIGFzc2lnbm1lbnRzIGJl
Y2F1c2UgJ21haW5ib2FyZF9waXJxX2RhdGEnIHN0cnVjdHVyZSBkb2VzIG5vdCBleGlzdApb
REVCVUddICBCUzogQlNfREVWX0VOQUJMRSBlbnRyeSB0aW1lcyAoZXhlYyAvIGNvbnNvbGUp
OiA1IC8gMCBtcwpbSU5GTyBdICBFbmFibGluZyByZXNvdXJjZXMuLi4KW0RFQlVHXSAgUENJ
OiAwMDowMC4wIHN1YnN5c3RlbSA8LSAxMDIyLzE0MTAKW0RFQlVHXSAgUENJOiAwMDowMC4w
IGNtZCA8LSAwNgpbREVCVUddICBQQ0k6IDAwOjAwLjIgY21kIDwtIDA2CltERUJVR10gIFBD
STogMDA6MDEuMCBjbWQgPC0gMDcKW0RFQlVHXSAgUENJOiAwMDowMS4xIGNtZCA8LSAwMgpb
REVCVUddICBQQ0k6IDAwOjExLjAgY21kIDwtIDAzCltERUJVR10gIFBDSTogMDA6MTIuMCBz
dWJzeXN0ZW0gPC0gMTAyMi8xNDEwCltERUJVR10gIFBDSTogMDA6MTIuMCBjbWQgPC0gMDIK
W0RFQlVHXSAgUENJOiAwMDoxMi4yIHN1YnN5c3RlbSA8LSAxMDIyLzE0MTAKW0RFQlVHXSAg
UENJOiAwMDoxMi4yIGNtZCA8LSAwMgpbREVCVUddICBQQ0k6IDAwOjEzLjAgc3Vic3lzdGVt
IDwtIDEwMjIvMTQxMApbREVCVUddICBQQ0k6IDAwOjEzLjAgY21kIDwtIDAyCltERUJVR10g
IFBDSTogMDA6MTMuMiBzdWJzeXN0ZW0gPC0gMTAyMi8xNDEwCltERUJVR10gIFBDSTogMDA6
MTMuMiBjbWQgPC0gMDIKW0RFQlVHXSAgUENJOiAwMDoxNC4wIHN1YnN5c3RlbSA8LSAxMDIy
LzE0MTAKW0RFQlVHXSAgUENJOiAwMDoxNC4wIGNtZCA8LSA0MDMKW0RFQlVHXSAgUENJOiAw
MDoxNC4yIHN1YnN5c3RlbSA8LSAxMDIyLzE0MTAKW0RFQlVHXSAgUENJOiAwMDoxNC4yIGNt
ZCA8LSAwMgpbREVCVUddICBQQ0k6IDAwOjE0LjMgc3Vic3lzdGVtIDwtIDEwMjIvMTQxMApb
REVCVUddICBQQ0k6IDAwOjE0LjMgY21kIDwtIDBmCltERUJVR10gIGh1ZHNvbiBscGMgZGVj
b2RlOlBOUDogMDAyZS4yLCBiYXNlPTB4MDAwMDAzZjgsIGVuZD0weDAwMDAwM2ZmCltERUJV
R10gIGh1ZHNvbiBscGMgZGVjb2RlOlBOUDogMDAyZS41LCBiYXNlPTB4MDAwMDAwNjAsIGVu
ZD0weDAwMDAwMDYwCltERUJVR10gIGh1ZHNvbiBscGMgZGVjb2RlOlBOUDogMDAyZS41LCBi
YXNlPTB4MDAwMDAwNjQsIGVuZD0weDAwMDAwMDY0CltERUJVR10gIGh1ZHNvbiBscGMgZGVj
b2RlOlBOUDogMDAyZS5iLCBiYXNlPTB4MDAwMDAyOTAsIGVuZD0weDAwMDAwMjkxCltERUJV
R10gIFBDSTogMDA6MTQuNCBicmlkZ2UgY3RybCA8LSAwMDEzCltERUJVR10gIFBDSTogMDA6
MTQuNCBjbWQgPC0gMDAKW0RFQlVHXSAgUENJOiAwMDoxNC41IGNtZCA8LSAwMgpbREVCVUdd
ICBQQ0k6IDAwOjE1LjAgYnJpZGdlIGN0cmwgPC0gMDAxMwpbREVCVUddICBQQ0k6IDAwOjE1
LjAgY21kIDwtIDAwCltERUJVR10gIFBDSTogMDA6MTUuMSBicmlkZ2UgY3RybCA8LSAwMDEz
CltERUJVR10gIFBDSTogMDA6MTUuMSBjbWQgPC0gMDYKW0RFQlVHXSAgUENJOiAwMDoxNi4w
IGNtZCA8LSAwMgpbREVCVUddICBQQ0k6IDAwOjE2LjIgY21kIDwtIDAyCltERUJVR10gIFBD
STogMDA6MTguMCBjbWQgPC0gMDAKW0RFQlVHXSAgUENJOiAwMDoxOC4xIHN1YnN5c3RlbSA8
LSAxMDIyLzE0MTAKW0RFQlVHXSAgUENJOiAwMDoxOC4xIGNtZCA8LSAwMApbREVCVUddICBQ
Q0k6IDAwOjE4LjIgc3Vic3lzdGVtIDwtIDEwMjIvMTQxMApbREVCVUddICBQQ0k6IDAwOjE4
LjIgY21kIDwtIDAwCltERUJVR10gIFBDSTogMDA6MTguMyBzdWJzeXN0ZW0gPC0gMTAyMi8x
NDEwCltERUJVR10gIFBDSTogMDA6MTguMyBjbWQgPC0gMDAKW0RFQlVHXSAgUENJOiAwMDox
OC40IHN1YnN5c3RlbSA8LSAxMDIyLzE0MTAKW0RFQlVHXSAgUENJOiAwMDoxOC40IGNtZCA8
LSAwMApbREVCVUddICBQQ0k6IDAwOjE4LjUgc3Vic3lzdGVtIDwtIDEwMjIvMTQxMApbREVC
VUddICBQQ0k6IDAwOjE4LjUgY21kIDwtIDAwCltERUJVR10gIFBDSTogMDM6MDAuMCBjbWQg
PC0gMDIKW0lORk8gXSAgZG9uZS4KW0lORk8gXSAgSW5pdGlhbGl6aW5nIGRldmljZXMuLi4K
W0RFQlVHXSAgQ1BVX0NMVVNURVI6IDAgaW5pdAoKW0RFQlVHXSAgTVRSUiBjaGVjawpbREVC
VUddICBGaXhlZCBNVFJScyAgIDogRGlzYWJsZWQKW0RFQlVHXSAgVmFyaWFibGUgTVRSUnM6
IEVuYWJsZWQKCltJTkZPIF0gIENQVTogQU1EIEE2LTY0MDBLIEFQVSB3aXRoIFJhZGVvbih0
bSkgSEQgR3JhcGhpY3MgICAuCltJTkZPIF0gIExBUElDIDB4MTAgaW4gWEFQSUMgbW9kZS4K
W0RFQlVHXSAgTG9hZGluZyBtb2R1bGUgYXQgMHgwMDAzMDAwMCB3aXRoIGVudHJ5IDB4MDAw
MzAwMDAuIGZpbGVzaXplOiAweDFiOCBtZW1zaXplOiAweDFiOApbREVCVUddICBQcm9jZXNz
aW5nIDE4IHJlbG9jcy4gT2Zmc2V0IHZhbHVlIG9mIDB4MDAwMzAwMDAKW0RFQlVHXSAgQXR0
ZW1wdGluZyB0byBzdGFydCAxIEFQcwpbREVCVUddICBXYWl0aW5nIGZvciAxMG1zIGFmdGVy
IHNlbmRpbmcgSU5JVC4KW0RFQlVHXSAgV2FpdGluZyBmb3IgU0lQSSB0byBjb21wbGV0ZS4u
LgpbSU5GTyBdICBMQVBJQyAweDExIGluIFhBUElDIG1vZGUuCltERUJVR10gIGRvbmUuCltJ
TkZPIF0gIEFQOiBzbG90IDEgYXBpY19pZCAxMQpbREVCVUddICBXYWl0aW5nIGZvciBTSVBJ
IHRvIGNvbXBsZXRlLi4uCltERUJVR10gIGRvbmUuCltJTkZPIF0gIEluaXRpYWxpemluZyBD
UFUgIzAKW0RFQlVHXSAgQ1BVOiB2ZW5kb3IgQU1EIGRldmljZSA2MTBmMzEKW0RFQlVHXSAg
Q1BVOiBmYW1pbHkgMTUsIG1vZGVsIDEzLCBzdGVwcGluZyAwMQpbREVCVUddICBNb2RlbCAx
NSBJbml0LgpbREVCVUddICBzaWJsaW5ncyA9IDAxLCBDUFUgIzAgaW5pdGlhbGl6ZWQKW0lO
Rk8gXSAgSW5pdGlhbGl6aW5nIENQVSAjMQpbREVCVUddICBDUFU6IHZlbmRvciBBTUQgZGV2
aWNlIDYxMGYzMQpbREVCVUddICBDUFU6IGZhbWlseSAxNSwgbW9kZWwgMTMsIHN0ZXBwaW5n
IDAxCltERUJVR10gIE1vZGVsIDE1IEluaXQuCltERUJVR10gIHNpYmxpbmdzID0gMDEsIENQ
VSAjMSBpbml0aWFsaXplZApbSU5GTyBdICBic3BfZG9fZmxpZ2h0X3BsYW4gZG9uZSBhZnRl
ciAwIG1zZWNzLgpbREVCVUddICBDUFVfQ0xVU1RFUjogMCBpbml0IGZpbmlzaGVkIGluIDEw
IG1zZWNzCltERUJVR10gIFBDSTogMDA6MDAuMCBpbml0CltERUJVR10gIFBDSTogMDA6MDAu
MCBpbml0IGZpbmlzaGVkIGluIDAgbXNlY3MKW0RFQlVHXSAgUENJOiAwMDowMS4wIGluaXQK
W0RFQlVHXSAgUENJOiAwMDowMS4wIGluaXQgZmluaXNoZWQgaW4gMCBtc2VjcwpbREVCVUdd
ICBQQ0k6IDAwOjAxLjEgaW5pdApbREVCVUddICBQQ0k6IDAwOjAxLjEgaW5pdCBmaW5pc2hl
ZCBpbiAwIG1zZWNzCltERUJVR10gIFBDSTogMDA6MTEuMCBpbml0CltERUJVR10gIFBDSTog
MDA6MTEuMCBpbml0IGZpbmlzaGVkIGluIDAgbXNlY3MKW0RFQlVHXSAgUENJOiAwMDoxMi4w
IGluaXQKW0RFQlVHXSAgUENJOiAwMDoxMi4wIGluaXQgZmluaXNoZWQgaW4gMCBtc2Vjcwpb
REVCVUddICBQQ0k6IDAwOjEyLjIgaW5pdApbREVCVUddICBQQ0k6IDAwOjEyLjIgaW5pdCBm
aW5pc2hlZCBpbiAwIG1zZWNzCltERUJVR10gIFBDSTogMDA6MTMuMCBpbml0CltERUJVR10g
IFBDSTogMDA6MTMuMCBpbml0IGZpbmlzaGVkIGluIDAgbXNlY3MKW0RFQlVHXSAgUENJOiAw
MDoxMy4yIGluaXQKW0RFQlVHXSAgUENJOiAwMDoxMy4yIGluaXQgZmluaXNoZWQgaW4gMCBt
c2VjcwpbREVCVUddICBQQ0k6IDAwOjE0LjAgaW5pdApbREVCVUddICBJT0FQSUM6IEluaXRp
YWxpemluZyBJT0FQSUMgYXQgMHhmZWMwMDAwMApbREVCVUddICBJT0FQSUM6IElEID0gMHgw
NApbREVCVUddICBJT0FQSUM6IDI0IGludGVycnVwdHMKW0RFQlVHXSAgSU9BUElDOiBDbGVh
cmluZyBJT0FQSUMgYXQgMHhmZWMwMDAwMApbREVCVUddICBJT0FQSUM6IEJvb3RzdHJhcCBQ
cm9jZXNzb3IgTG9jYWwgQVBJQyA9IDB4MTAKW0RFQlVHXSAgUENJOiAwMDoxNC4wIGluaXQg
ZmluaXNoZWQgaW4gMCBtc2VjcwpbREVCVUddICBQQ0k6IDAwOjE0LjIgaW5pdApbREVCVUdd
ICBQQ0k6IDAwOjE0LjIgaW5pdCBmaW5pc2hlZCBpbiAwIG1zZWNzCltERUJVR10gIFBDSTog
MDA6MTQuMyBpbml0CltERUJVR10gIFJUQyBJbml0CltERUJVR10gIFBDSTogMDA6MTQuMyBp
bml0IGZpbmlzaGVkIGluIDAgbXNlY3MKW0RFQlVHXSAgUENJOiAwMDoxNC41IGluaXQKW0RF
QlVHXSAgUENJOiAwMDoxNC41IGluaXQgZmluaXNoZWQgaW4gMCBtc2VjcwpbREVCVUddICBQ
Q0k6IDAwOjE1LjAgaW5pdApbREVCVUddICBQQ0k6IDAwOjE1LjAgaW5pdCBmaW5pc2hlZCBp
biAwIG1zZWNzCltERUJVR10gIFBDSTogMDA6MTUuMSBpbml0CltERUJVR10gIFBDSTogMDA6
MTUuMSBpbml0IGZpbmlzaGVkIGluIDAgbXNlY3MKW0RFQlVHXSAgUENJOiAwMDoxNi4wIGlu
aXQKW0RFQlVHXSAgUENJOiAwMDoxNi4wIGluaXQgZmluaXNoZWQgaW4gMCBtc2VjcwpbREVC
VUddICBQQ0k6IDAwOjE2LjIgaW5pdApbREVCVUddICBQQ0k6IDAwOjE2LjIgaW5pdCBmaW5p
c2hlZCBpbiAwIG1zZWNzCltERUJVR10gIFBDSTogMDA6MTguMSBpbml0CltERUJVR10gIFBD
STogMDA6MTguMSBpbml0IGZpbmlzaGVkIGluIDAgbXNlY3MKW0RFQlVHXSAgUENJOiAwMDox
OC4yIGluaXQKW0RFQlVHXSAgUENJOiAwMDoxOC4yIGluaXQgZmluaXNoZWQgaW4gMCBtc2Vj
cwpbREVCVUddICBQQ0k6IDAwOjE4LjMgaW5pdApbREVCVUddICBQQ0k6IDAwOjE4LjMgaW5p
dCBmaW5pc2hlZCBpbiAwIG1zZWNzCltERUJVR10gIFBDSTogMDA6MTguNCBpbml0CltERUJV
R10gIFBDSTogMDA6MTguNCBpbml0IGZpbmlzaGVkIGluIDAgbXNlY3MKW0RFQlVHXSAgUENJ
OiAwMDoxOC41IGluaXQKW0RFQlVHXSAgUENJOiAwMDoxOC41IGluaXQgZmluaXNoZWQgaW4g
MCBtc2VjcwpbREVCVUddICBQTlA6IDAwMmUuMiBpbml0CltERUJVR10gIFBOUDogMDAyZS4y
IGluaXQgZmluaXNoZWQgaW4gMCBtc2VjcwpbREVCVUddICBQTlA6IDAwMmUuNSBpbml0CltE
RUJVR10gIFBOUDogMDAyZS41IGluaXQgZmluaXNoZWQgaW4gMCBtc2VjcwpbREVCVUddICBQ
TlA6IDAwMmUuMTA4IGluaXQKW0RFQlVHXSAgUE5QOiAwMDJlLjEwOCBpbml0IGZpbmlzaGVk
IGluIDAgbXNlY3MKW0RFQlVHXSAgUE5QOiAwMDJlLjEwOSBpbml0CltERUJVR10gIFBOUDog
MDAyZS4xMDkgaW5pdCBmaW5pc2hlZCBpbiAwIG1zZWNzCltERUJVR10gIFBOUDogMDAyZS4y
MDkgaW5pdApbREVCVUddICBQTlA6IDAwMmUuMjA5IGluaXQgZmluaXNoZWQgaW4gMCBtc2Vj
cwpbREVCVUddICBQTlA6IDAwMmUuMzA5IGluaXQKW0RFQlVHXSAgUE5QOiAwMDJlLjMwOSBp
bml0IGZpbmlzaGVkIGluIDAgbXNlY3MKW0RFQlVHXSAgUE5QOiAwMDJlLjQwOSBpbml0CltE
RUJVR10gIFBOUDogMDAyZS40MDkgaW5pdCBmaW5pc2hlZCBpbiAwIG1zZWNzCltERUJVR10g
IFBOUDogMDAyZS41MDkgaW5pdApbREVCVUddICBQTlA6IDAwMmUuNTA5IGluaXQgZmluaXNo
ZWQgaW4gMCBtc2VjcwpbREVCVUddICBQTlA6IDAwMmUuNjA5IGluaXQKW0RFQlVHXSAgUE5Q
OiAwMDJlLjYwOSBpbml0IGZpbmlzaGVkIGluIDAgbXNlY3MKW0RFQlVHXSAgUE5QOiAwMDJl
LjcwOSBpbml0CltERUJVR10gIFBOUDogMDAyZS43MDkgaW5pdCBmaW5pc2hlZCBpbiAwIG1z
ZWNzCltERUJVR10gIFBOUDogMDAyZS5hIGluaXQKW0RFQlVHXSAgUE5QOiAwMDJlLmEgaW5p
dCBmaW5pc2hlZCBpbiAwIG1zZWNzCltERUJVR10gIFBOUDogMDAyZS5iIGluaXQKW0RFQlVH
XSAgUE5QOiAwMDJlLmIgaW5pdCBmaW5pc2hlZCBpbiAwIG1zZWNzCltERUJVR10gIFBOUDog
MDAyZS5mIGluaXQKW0RFQlVHXSAgUE5QOiAwMDJlLmYgaW5pdCBmaW5pc2hlZCBpbiAwIG1z
ZWNzCltERUJVR10gIFBOUDogMDAyZS4xNCBpbml0CltERUJVR10gIFBOUDogMDAyZS4xNCBp
bml0IGZpbmlzaGVkIGluIDAgbXNlY3MKW0RFQlVHXSAgUENJOiAwMzowMC4wIGluaXQKW0RF
QlVHXSAgUENJOiAwMzowMC4wIGluaXQgZmluaXNoZWQgaW4gMCBtc2VjcwpbSU5GTyBdICBE
ZXZpY2VzIGluaXRpYWxpemVkCltERUJVR10gIEJTOiBCU19ERVZfSU5JVCBydW4gdGltZXMg
KGV4ZWMgLyBjb25zb2xlKTogMTAgLyAwIG1zCltJTkZPIF0gIEZpbmFsaXplIGRldmljZXMu
Li4KW0RFQlVHXSAgUENJOiAwMDoxNC4zIGZpbmFsCltJTkZPIF0gIERldmljZXMgZmluYWxp
emVkCgpbREVCVUddICBBUElDIDAwOiAqKiBFbnRlciBBbWRJbml0TGF0ZSBbMDAwMjAwMDRd
CltFTUVSR10gIEFTU0VSVElPTiBFUlJPUjogZmlsZSAnc3JjL3ZlbmRvcmNvZGUvYW1kL2Fn
ZXNhL2YxNXRuL1Byb2MvQ29tbW9uL0NvbW1vblJldHVybnMuYycsIGxpbmUgMTg3CltFTUVS
R10gIEFTU0VSVElPTiBFUlJPUjogZmlsZSAnc3JjL3ZlbmRvcmNvZGUvYW1kL2FnZXNhL2Yx
NXRuL1Byb2MvQ1BVL2NwdUdlbmVyYWxTZXJ2aWNlcy5jJywgbGluZSA3NzYKW0VNRVJHXSAg
QVNTRVJUSU9OIEVSUk9SOiBmaWxlICdzcmMvdmVuZG9yY29kZS9hbWQvYWdlc2EvZjE1dG4v
UHJvYy9Db21tb24vQ29tbW9uUmV0dXJucy5jJywgbGluZSAxODcKW0VNRVJHXSAgQVNTRVJU
SU9OIEVSUk9SOiBmaWxlICdzcmMvdmVuZG9yY29kZS9hbWQvYWdlc2EvZjE1dG4vUHJvYy9D
UFUvY3B1R2VuZXJhbFNlcnZpY2VzLmMnLCBsaW5lIDc3NgpbRU1FUkddICBBU1NFUlRJT04g
RVJST1I6IGZpbGUgJ3NyYy92ZW5kb3Jjb2RlL2FtZC9hZ2VzYS9mMTV0bi9Qcm9jL0NvbW1v
bi9Db21tb25SZXR1cm5zLmMnLCBsaW5lIDE4NwpbRU1FUkddICBBU1NFUlRJT04gRVJST1I6
IGZpbGUgJ3NyYy92ZW5kb3Jjb2RlL2FtZC9hZ2VzYS9mMTV0bi9Qcm9jL0NQVS9jcHVHZW5l
cmFsU2VydmljZXMuYycsIGxpbmUgNzc2CltFTUVSR10gIEFTU0VSVElPTiBFUlJPUjogZmls
ZSAnc3JjL3ZlbmRvcmNvZGUvYW1kL2FnZXNhL2YxNXRuL1Byb2MvQ29tbW9uL0NvbW1vblJl
dHVybnMuYycsIGxpbmUgMTg3CltFTUVSR10gIEFTU0VSVElPTiBFUlJPUjogZmlsZSAnc3Jj
L3ZlbmRvcmNvZGUvYW1kL2FnZXNhL2YxNXRuL1Byb2MvQ1BVL2NwdUdlbmVyYWxTZXJ2aWNl
cy5jJywgbGluZSA3NzYKW0VNRVJHXSAgQVNTRVJUSU9OIEVSUk9SOiBmaWxlICdzcmMvdmVu
ZG9yY29kZS9hbWQvYWdlc2EvZjE1dG4vUHJvYy9Db21tb24vQ29tbW9uUmV0dXJucy5jJywg
bGluZSAxODcKW0VNRVJHXSAgQVNTRVJUSU9OIEVSUk9SOiBmaWxlICdzcmMvdmVuZG9yY29k
ZS9hbWQvYWdlc2EvZjE1dG4vUHJvYy9DUFUvY3B1R2VuZXJhbFNlcnZpY2VzLmMnLCBsaW5l
IDc3NgpbREVCVUddICBBbWRJbml0TGF0ZSgpIHJldHVybmVkIEFHRVNBX1NVQ0NFU1MKW0RF
QlVHXSAgQVBJQyAwMDogSGVhcCBpbiBTeXN0ZW1NZW0gKDQpIGF0IDB4MTAwMDAwMTQKW0RF
QlVHXSAgQVBJQyAwMDogKiogRXhpdCAgQW1kSW5pdExhdGUgWzAwMDIwMDA0XQoKW0RFQlVH
XSAgQVBJQyAwMDogKiogRW50ZXIgQW1kUzNTYXZlIFswMDAyMDAwYl0KW0RFQlVHXSAgRk1B
UDogYXJlYSBDT1JFQk9PVCBmb3VuZCBAIDEwMjAwICg0MTI4MjU2IGJ5dGVzKQpbSU5GTyBd
ICBDQkZTOiBGb3VuZCAncGNpMTAwMiw5OTk2LnJvbScgQDB4OGY2YzAgc2l6ZSAweGYyMDAg
aW4gbWNhY2hlIEAweDVmZmRkMjk0CltERUJVR10gIEZNQVA6IGFyZWEgUldfTVJDX0NBQ0hF
IGZvdW5kIEAgMCAoNjU1MzYgYnl0ZXMpCltERUJVR10gIE1SQzogQ2hlY2tpbmcgY2FjaGVk
IGRhdGEgdXBkYXRlIGZvciAnUldfTVJDX0NBQ0hFJy4KW0lORk8gXSAgTWFudWZhY3R1cmVy
OiBlZgpbSU5GTyBdICBTRjogRGV0ZWN0ZWQgZWYgNDAxNiB3aXRoIHNlY3RvciBzaXplIDB4
MTAwMCwgdG90YWwgMHg0MDAwMDAKW0RFQlVHXSAgTVJDOiBjYWNoZSBkYXRhICdSV19NUkNf
Q0FDSEUnIG5lZWRzIHVwZGF0ZS4KW0RFQlVHXSAgTVJDOiB1cGRhdGVkICdSV19NUkNfQ0FD
SEUnLgpbREVCVUddICBBbWRTM1NhdmUoKSByZXR1cm5lZCBBR0VTQV9TVUNDRVNTCltFTUVS
R10gIEFTU0VSVElPTiBFUlJPUjogZmlsZSAnc3JjL2RyaXZlcnMvYW1kL2FnZXNhL3N0YXRl
X21hY2hpbmUuYycsIGxpbmUgMjc2CltERUJVR10gIEFQSUMgMDA6IEhlYXAgaW4gU3lzdGVt
TWVtICg0KSBhdCAweDEwMDAwMDE0CltERUJVR10gIEFQSUMgMDA6ICoqIEV4aXQgIEFtZFMz
U2F2ZSBbMDAwMjAwMGJdCltERUJVR10gIEJTOiBCU19QT1NUX0RFVklDRSBleGl0IHRpbWVz
IChleGVjIC8gY29uc29sZSk6IDE0IC8gMCBtcwpbSU5GTyBdICBXcml0aW5nIElSUSByb3V0
aW5nIHRhYmxlcyB0byAweGYwMDAwLi4ud3JpdGVfcGlycV9yb3V0aW5nX3RhYmxlIGRvbmUu
CltJTkZPIF0gIFdyaXRpbmcgSVJRIHJvdXRpbmcgdGFibGVzIHRvIDB4NWZlNmUwMDAuLi53
cml0ZV9waXJxX3JvdXRpbmdfdGFibGUgZG9uZS4KW0RFQlVHXSAgUElSUSB0YWJsZTogNDgg
Ynl0ZXMuCltJTkZPIF0gIENCRlM6IEZvdW5kICdmYWxsYmFjay9kc2R0LmFtbCcgQDB4NzAw
IHNpemUgMHgxOTNhIGluIG1jYWNoZSBAMHg1ZmZkZDA5NApbV0FSTiBdICBDQkZTOiAnZmFs
bGJhY2svc2xpYycgbm90IGZvdW5kLgpbSU5GTyBdICBBQ1BJOiBXcml0aW5nIEFDUEkgdGFi
bGVzIGF0IDVmZTRhMDAwLgpbREVCVUddICBBQ1BJOiAgICAqIEZBQ1MKW0RFQlVHXSAgQUNQ
STogICAgKiBEU0RUCltERUJVR10gIEFDUEk6ICAgICogRkFEVApbREVCVUddICBwbV9iYXNl
OiAweDA4MDAKW0RFQlVHXSAgQUNQSTogYWRkZWQgdGFibGUgMS8zMiwgbGVuZ3RoIG5vdyA0
MApbREVCVUddICBBQ1BJOiAgICAgKiBTU0RUCltJTkZPIF0gIENCRlM6IEZvdW5kICdwY2kx
MDAyLDk5OTYucm9tJyBAMHg4ZjZjMCBzaXplIDB4ZjIwMCBpbiBtY2FjaGUgQDB4NWZmZGQy
OTQKW0RFQlVHXSAgSW4gQ0JGUywgUk9NIGFkZHJlc3MgZm9yIFBDSTogMDA6MDEuMCA9IDB4
ZmZjOWY4ZWMKW0VSUk9SXSAgUENJOiAwMDowMS4wOiBNaXNzaW5nIEFDUEkgc2NvcGUKW0RF
QlVHXSAgQUNQSTogYWRkZWQgdGFibGUgMi8zMiwgbGVuZ3RoIG5vdyA0NApbREVCVUddICBB
Q1BJOiAgICAqIE1DRkcKW0RFQlVHXSAgQUNQSTogYWRkZWQgdGFibGUgMy8zMiwgbGVuZ3Ro
IG5vdyA0OApbREVCVUddICBBQ1BJOiAgICAqIE1BRFQKW0RFQlVHXSAgQUNQSTogYWRkZWQg
dGFibGUgNC8zMiwgbGVuZ3RoIG5vdyA1MgpbREVCVUddICBjdXJyZW50ID0gNWZlNGJlMjAK
W0RFQlVHXSAgQUNQSTogICAgKiBIUEVUCltERUJVR10gIEFDUEk6IGFkZGVkIHRhYmxlIDUv
MzIsIGxlbmd0aCBub3cgNTYKW0RFQlVHXSAgQUNQSTogYWRkZWQgdGFibGUgNi8zMiwgbGVu
Z3RoIG5vdyA2MApbREVCVUddICBBQ1BJOiAgICAqIElWUlMgYXQgNWZlNGMwMzAKW0RFQlVH
XSAgQUNQSTogYWRkZWQgdGFibGUgNy8zMiwgbGVuZ3RoIG5vdyA2NApbREVCVUddICBBQ1BJ
OiAgICAqIFNSQVQgYXQgNWZlNGMwYTAKW0RFQlVHXSAgICBBR0VTQSBTUkFUIHRhYmxlIE5V
TEwuIFNraXBwaW5nLgpbREVCVUddICBBQ1BJOiAgICogU0xJVCBhdCA1ZmU0YzBhMApbREVC
VUddICAgIEFHRVNBIFNMSVQgdGFibGUgTlVMTC4gU2tpcHBpbmcuCltERUJVR10gIEFDUEk6
ICAqIEFHRVNBIEFMSUIgU1NEVCBhdCA1ZmU0YzBhMApbREVCVUddICBBQ1BJOiBhZGRlZCB0
YWJsZSA4LzMyLCBsZW5ndGggbm93IDY4CltERUJVR10gIEFDUEk6ICAgICogU1NEVCBhdCA1
ZmU0YzVjMApbREVCVUddICBBQ1BJOiBhZGRlZCB0YWJsZSA5LzMyLCBsZW5ndGggbm93IDcy
CltERUJVR10gIEFDUEk6ICAgICogU1NEVCBmb3IgUFN0YXRlIGF0IDVmZTRjYzcyCltJTkZP
IF0gIENCRlM6IEZvdW5kICdwY2kxMDAyLDk5OTYucm9tJyBAMHg4ZjZjMCBzaXplIDB4ZjIw
MCBpbiBtY2FjaGUgQDB4NWZmZGQyOTQKW0RFQlVHXSAgSW4gQ0JGUywgUk9NIGFkZHJlc3Mg
Zm9yIFBDSTogMDA6MDEuMCA9IDB4ZmZjOWY4ZWMKW0RFQlVHXSAgICAgICAgICAgICBDb3B5
aW5nIFZCSU9TIGltYWdlIGZyb20gMHhmZmM5ZjhlYwpbREVCVUddICBBQ1BJOiAgICAqIFZG
Q1QgYXQgNWZlNGNjODAKW0RFQlVHXSAgQUNQSTogYWRkZWQgdGFibGUgMTAvMzIsIGxlbmd0
aCBub3cgNzYKW0lORk8gXSAgQUNQSTogZG9uZS4KW0RFQlVHXSAgQUNQSSB0YWJsZXM6IDcz
NDU2IGJ5dGVzLgpbREVCVUddICBzbWJpb3Nfd3JpdGVfdGFibGVzOiA1ZmU0MjAwMApbREVC
VUddICBTTUJJT1MgZmlybXdhcmUgdmVyc2lvbiBpcyBzZXQgdG8gY29yZWJvb3RfdmVyc2lv
bjogJzQuMTgtMTUtZ2M3ODJlZjQzNDUnCltERUJVR10gIFNNQklPUyB0YWJsZXM6IDUzMSBi
eXRlcy4KW0RFQlVHXSAgV3JpdGluZyB0YWJsZSBmb3J3YXJkIGVudHJ5IGF0IDB4MDAwMDA1
MDAKW0RFQlVHXSAgV3JvdGUgY29yZWJvb3QgdGFibGUgYXQ6IDB4MDAwMDA1MDAsIDB4MTAg
Ynl0ZXMsIGNoZWNrc3VtIGFmZjcKW0RFQlVHXSAgV3JpdGluZyBjb3JlYm9vdCB0YWJsZSBh
dCAweDVmZTZmMDAwCltERUJVR10gICAwLiAwMDAwMDAwMDAwMDAwMDAwLTAwMDAwMDAwMDAw
MDBmZmY6IENPTkZJR1VSQVRJT04gVEFCTEVTCltERUJVR10gICAxLiAwMDAwMDAwMDAwMDAx
MDAwLTAwMDAwMDAwMDAwOWZmZmY6IFJBTQpbREVCVUddICAgMi4gMDAwMDAwMDAwMDBjMDAw
MC0wMDAwMDAwMDVmZTQxZmZmOiBSQU0KW0RFQlVHXSAgIDMuIDAwMDAwMDAwNWZlNDIwMDAt
MDAwMDAwMDA1ZmViN2ZmZjogQ09ORklHVVJBVElPTiBUQUJMRVMKW0RFQlVHXSAgIDQuIDAw
MDAwMDAwNWZlYjgwMDAtMDAwMDAwMDA1ZmZjZWZmZjogUkFNU1RBR0UKW0RFQlVHXSAgIDUu
IDAwMDAwMDAwNWZmY2YwMDAtMDAwMDAwMDA1ZmZmZmZmZjogQ09ORklHVVJBVElPTiBUQUJM
RVMKW0RFQlVHXSAgIDYuIDAwMDAwMDAwNjAwMDAwMDAtMDAwMDAwMDA3ZmZmZmZmZjogUkVT
RVJWRUQKW0RFQlVHXSAgIDcuIDAwMDAwMDAwZjgwMDAwMDAtMDAwMDAwMDBmYmZmZmZmZjog
UkVTRVJWRUQKW0RFQlVHXSAgIDguIDAwMDAwMDAwZmVjMTAwMDAtMDAwMDAwMDBmZWMxMGZm
ZjogUkVTRVJWRUQKW0RFQlVHXSAgIDkuIDAwMDAwMDAxMDAwMDAwMDAtMDAwMDAwMDE3ZWZm
ZmZmZjogUkFNCltERUJVR10gIFdyb3RlIGNvcmVib290IHRhYmxlIGF0OiAweDVmZTZmMDAw
LCAweDM1NCBieXRlcywgY2hlY2tzdW0gZDZjOApbREVCVUddICBjb3JlYm9vdCB0YWJsZTog
ODc2IGJ5dGVzLgpbREVCVUddICBJTUQgUk9PVCAgICAwLiAweDVmZmZmMDAwIDB4MDAwMDEw
MDAKW0RFQlVHXSAgSU1EIFNNQUxMICAgMS4gMHg1ZmZmZTAwMCAweDAwMDAxMDAwCltERUJV
R10gIENPTlNPTEUgICAgIDIuIDB4NWZmZGUwMDAgMHgwMDAyMDAwMApbREVCVUddICBSTyBN
Q0FDSEUgICAzLiAweDVmZmRkMDAwIDB4MDAwMDAzMzAKW0RFQlVHXSAgVElNRSBTVEFNUCAg
NC4gMHg1ZmZkYzAwMCAweDAwMDAwOTEwCltERUJVR10gIEFGVEVSIENBUiAgIDUuIDB4NWZm
Y2YwMDAgMHgwMDAwZDAwMApbREVCVUddICBSQU1TVEFHRSAgICA2LiAweDVmZWI3MDAwIDB4
MDAxMTgwMDAKW0RFQlVHXSAgU01NIEJBQ0tVUCAgNy4gMHg1ZmVhNzAwMCAweDAwMDEwMDAw
CltERUJVR10gIEFDUElTQ1JBVENIIDguIDB4NWZlNzcwMDAgMHgwMDAzMDAwMApbREVCVUdd
ICBDT1JFQk9PVCAgICA5LiAweDVmZTZmMDAwIDB4MDAwMDgwMDAKW0RFQlVHXSAgSVJRIFRB
QkxFICAxMC4gMHg1ZmU2ZTAwMCAweDAwMDAxMDAwCltERUJVR10gIEFDUEkgICAgICAgMTEu
IDB4NWZlNGEwMDAgMHgwMDAyNDAwMApbREVCVUddICBTTUJJT1MgICAgIDEyLiAweDVmZTQy
MDAwIDB4MDAwMDgwMDAKW0RFQlVHXSAgSU1EIHNtYWxsIHJlZ2lvbjoKW0RFQlVHXSAgICBJ
TUQgUk9PVCAgICAwLiAweDVmZmZlYzAwIDB4MDAwMDA0MDAKW0RFQlVHXSAgICBGTUFQICAg
ICAgICAxLiAweDVmZmZlYjIwIDB4MDAwMDAwZTAKW0RFQlVHXSAgICBST01TVEFHRSAgICAy
LiAweDVmZmZlYjAwIDB4MDAwMDAwMDQKW0RFQlVHXSAgICBST01TVEcgU1RDSyAzLiAweDVm
ZmZlYTYwIDB4MDAwMDAwODgKW0RFQlVHXSAgICBBR0VTQSBNVFJSICA0LiAweDVmZmZlOTYw
IDB4MDAwMDAwZjAKW0RFQlVHXSAgQlM6IEJTX1dSSVRFX1RBQkxFUyBydW4gdGltZXMgKGV4
ZWMgLyBjb25zb2xlKTogOSAvIDAgbXMKW0lORk8gXSAgQ0JGUzogRm91bmQgJ2ZhbGxiYWNr
L3BheWxvYWQnIEAweDllOTAwIHNpemUgMHgxMThlOSBpbiBtY2FjaGUgQDB4NWZmZGQyYzAK
W0RFQlVHXSAgQ2hlY2tpbmcgc2VnbWVudCBmcm9tIFJPTSBhZGRyZXNzIDB4ZmZjYWViMmMK
W0RFQlVHXSAgQ2hlY2tpbmcgc2VnbWVudCBmcm9tIFJPTSBhZGRyZXNzIDB4ZmZjYWViNDgK
W0RFQlVHXSAgTG9hZGluZyBzZWdtZW50IGZyb20gUk9NIGFkZHJlc3MgMHhmZmNhZWIyYwpb
REVCVUddICAgIGNvZGUgKGNvbXByZXNzaW9uPTEpCltERUJVR10gICAgTmV3IHNlZ21lbnQg
ZHN0YWRkciAweDAwMGRlZTgwIG1lbXNpemUgMHgyMTE4MCBzcmNhZGRyIDB4ZmZjYWViNjQg
ZmlsZXNpemUgMHgxMThiMQpbREVCVUddICBMb2FkaW5nIFNlZ21lbnQ6IGFkZHI6IDB4MDAw
ZGVlODAgbWVtc3o6IDB4MDAwMDAwMDAwMDAyMTE4MCBmaWxlc3o6IDB4MDAwMDAwMDAwMDAx
MThiMQpbREVCVUddICB1c2luZyBMWk1BCltERUJVR10gIExvYWRpbmcgc2VnbWVudCBmcm9t
IFJPTSBhZGRyZXNzIDB4ZmZjYWViNDgKW0RFQlVHXSAgICBFbnRyeSBQb2ludCAweDAwMGZk
MjY2CltERUJVR10gIEJTOiBCU19QQVlMT0FEX0xPQUQgcnVuIHRpbWVzIChleGVjIC8gY29u
c29sZSk6IDEzIC8gMCBtcwpbREVCVUddICBKdW1waW5nIHRvIGJvb3QgY29kZSBhdCAweDAw
MGZkMjY2KDB4NWZlNmYwMDApClNlYUJJT1MgKHZlcnNpb24gcmVsLTEuMTYuMi0wLWdlYTFi
N2EwKQpCVUlMRDogZ2NjOiAoRGViaWFuIDEyLjIuMC0xNCkgMTIuMi4wIGJpbnV0aWxzOiAo
R05VIEJpbnV0aWxzIGZvciBEZWJpYW4pIDIuNDAKRm91bmQgY29yZWJvb3QgY2JtZW0gY29u
c29sZSBAIDVmZmRlMDAwCkZvdW5kIG1haW5ib2FyZCBBU1VTIEYyQTg1LU1fUFJPClJlbG9j
YXRpbmcgaW5pdCBmcm9tIDB4MDAwZTA1YzAgdG8gMHg1ZWUzNGIyMCAoc2l6ZSA1NDMzNikK
Rm91bmQgQ0JGUyBoZWFkZXIgYXQgMHhmZmMxMDIyYwptdWx0aWJvb3Q6IGVheD01ZmVmYjNk
OCwgZWJ4PTVmZWZiM2E0CkZvdW5kIDI2IFBDSSBkZXZpY2VzIChtYXggUENJIGJ1cyBpcyAw
MykKQ29weWluZyBTTUJJT1MgZnJvbSAweDVmZTQyMDAwIHRvIDB4MDAwZjY4ODAKQ29weWlu
ZyBTTUJJT1MgMy4wIGZyb20gMHg1ZmU0MjAyMCB0byAweDAwMGY2ODYwCkNvcHlpbmcgQUNQ
SSBSU0RQIGZyb20gMHg1ZmU0YTAwMCB0byAweDAwMGY2ODMwCkNvcHlpbmcgUElSIGZyb20g
MHg1ZmU2ZTAwMCB0byAweDAwMGY2ODAwCnRhYmxlKDUwNDM0MTQ2KT0weDVmZTRiYmMwICh2
aWEgeHNkdCkKVXNpbmcgcG10aW1lciwgaW9wb3J0IDB4ODE4ClNjYW4gZm9yIFZHQSBvcHRp
b24gcm9tClJ1bm5pbmcgb3B0aW9uIHJvbSBhdCBjMDAwOjAwMDMKVHVybmluZyBvbiB2Z2Eg
dGV4dCBtb2RlIGNvbnNvbGUKU2VhQklPUyAodmVyc2lvbiByZWwtMS4xNi4yLTAtZ2VhMWI3
YTApClBDSTogWEhDSSBhdCAwMzowMC4wIChtbWlvIDB4ZjAwMDAwMDApClhIQ0kgaW5pdDog
cmVncyBAIDB4ZjAwMDAwMDAsIDQgcG9ydHMsIDMyIHNsb3RzLCAzMiBieXRlIGNvbnRleHRz
ClhIQ0kgICAgZXh0Y2FwIDB4MSBAIDB4ZjAwMDA4MDAKWEhDSSAgICBwcm90b2NvbCBVU0Ig
IDMuMDAsIDIgcG9ydHMgKG9mZnNldCAxKSwgZGVmIDAKWEhDSSAgICBwcm90b2NvbCBVU0Ig
IDIuMDAsIDIgcG9ydHMgKG9mZnNldCAzKSwgZGVmIDEKRUhDSSBpbml0IG9uIGRldiAwMDox
Mi4yIChyZWdzPTB4ZjAxY2QwMjApCkVIQ0kgaW5pdCBvbiBkZXYgMDA6MTMuMiAocmVncz0w
eGYwMWNlMDIwKQpFSENJIGluaXQgb24gZGV2IDAwOjE2LjIgKHJlZ3M9MHhmMDFjZjAyMCkK
T0hDSSBpbml0IG9uIGRldiAwMDoxMi4wIChyZWdzPTB4ZjAxYzgwMDApCk9IQ0kgaW5pdCBv
biBkZXYgMDA6MTMuMCAocmVncz0weGYwMWM5MDAwKQpPSENJIGluaXQgb24gZGV2IDAwOjE0
LjUgKHJlZ3M9MHhmMDFjYTAwMCkKT0hDSSBpbml0IG9uIGRldiAwMDoxNi4wIChyZWdzPTB4
ZjAxY2IwMDApCkFIQ0kgY29udHJvbGxlciBhdCAwMDoxMS4wLCBpb2Jhc2UgMHhmMDFjYzAw
MCwgaXJxIDAKU2VhcmNoaW5nIGJvb3RvcmRlciBmb3I6IEhBTFQKRm91bmQgMCBscHQgcG9y
dHMKRm91bmQgMSBzZXJpYWwgcG9ydHMKU2VhcmNoaW5nIGJvb3RvcmRlciBmb3I6IC9wY2lA
aTBjZjgvKkAxMS9kcml2ZUA2L2Rpc2tAMApBSENJLzY6IFNldCB0cmFuc2ZlciBtb2RlIHRv
IFVETUEtNgpTZWFyY2hpbmcgYmlvcy1nZW9tZXRyeSBmb3I6IC9wY2lAaTBjZjgvKkAxMS9k
cml2ZUA2L2Rpc2tAMApBSENJLzY6IHJlZ2lzdGVyaW5nOiAiQUhDSS82OiBTYW5EaXNrIFNE
U1NEUDA2NEcgQVRBLTkgSGFyZC1EaXNrICg2MTA1NyBNaUJ5dGVzKSIKVVNCIGtleWJvYXJk
IGluaXRpYWxpemVkClVTQiBtb3VzZSBpbml0aWFsaXplZApYSENJIG5vIGRldmljZXMgZm91
bmQKUFMyIGtleWJvYXJkIGluaXRpYWxpemVkCkFsbCB0aHJlYWRzIGNvbXBsZXRlLgpTY2Fu
IGZvciBvcHRpb24gcm9tcwoKUHJlc3MgRVNDIGZvciBib290IG1lbnUuCgpTZWFyY2hpbmcg
Ym9vdG9yZGVyIGZvcjogSEFMVApkcml2ZSAweDAwMGY2NzkwOiBQQ0hTPTE2MzgzLzE2LzYz
IHRyYW5zbGF0aW9uPWxiYSBMQ0hTPTEwMjQvMjU1LzYzIHM9MTI1MDQ1NDI0ClNwYWNlIGF2
YWlsYWJsZSBmb3IgVU1COiBjZjgwMC1lYzAwMCwgZjYwYTAtZjY3OTAKUmV0dXJuZWQgMTY3
NTY3MzYgYnl0ZXMgb2YgWm9uZUhpZ2gKZTgyMCBtYXAgaGFzIDggaXRlbXM6CiAgMDogMDAw
MDAwMDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwOWZjMDAgPSAxIFJBTQogIDE6IDAwMDAwMDAw
MDAwOWZjMDAgLSAwMDAwMDAwMDAwMGEwMDAwID0gMiBSRVNFUlZFRAogIDI6IDAwMDAwMDAw
MDAwZjAwMDAgLSAwMDAwMDAwMDAwMTAwMDAwID0gMiBSRVNFUlZFRAogIDM6IDAwMDAwMDAw
MDAxMDAwMDAgLSAwMDAwMDAwMDVmZTNkMDAwID0gMSBSQU0KICA0OiAwMDAwMDAwMDVmZTNk
MDAwIC0gMDAwMDAwMDA4MDAwMDAwMCA9IDIgUkVTRVJWRUQKICA1OiAwMDAwMDAwMGY4MDAw
MDAwIC0gMDAwMDAwMDBmYzAwMDAwMCA9IDIgUkVTRVJWRUQKICA2OiAwMDAwMDAwMGZlYzEw
MDAwIC0gMDAwMDAwMDBmZWMxMTAwMCA9IDIgUkVTRVJWRUQKICA3OiAwMDAwMDAwMTAwMDAw
MDAwIC0gMDAwMDAwMDE3ZjAwMDAwMCA9IDEgUkFNCmVudGVyIGhhbmRsZV8xOToKICBOVUxM
CkJvb3RpbmcgZnJvbSBIYXJkIERpc2suLi4KQm9vdGluZyBmcm9tIDAwMDA6N2MwMAo=

--------------RIXs5aB9kAV059bPykAqBHIR--


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:19:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:19:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523634.813869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBSM-0000Ew-Gr; Wed, 19 Apr 2023 17:18:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523634.813869; Wed, 19 Apr 2023 17:18:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBSM-0000Ep-DM; Wed, 19 Apr 2023 17:18:58 +0000
Received: by outflank-mailman (input) for mailman id 523634;
 Wed, 19 Apr 2023 17:18:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppBSL-0000EX-K0; Wed, 19 Apr 2023 17:18:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppBSL-0003Ec-6n; Wed, 19 Apr 2023 17:18:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppBSK-0008A7-Mr; Wed, 19 Apr 2023 17:18:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppBSK-0004CH-MN; Wed, 19 Apr 2023 17:18:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5HK0wKztW4S60Yhkl5HnOdpoTPrRQaVi/c08KW0dpQ0=; b=VsujQW1rpSo2BbxuYD3nZBfprh
	eU/J6meUVDJGmQ4cRV1Lrd2ciN9huvOd0A9+f9zB/5d5V/c+MeUrsSk974vpFL1gP1Ahvh0JKQ5+k
	1zDYJ2Fx1yCII4LXhMPNpLqPAgSyqBPihGS9NPfd0eHexzk/ONHNpDDYGpjkG49ORb1E=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180319-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180319: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:hosts-allocate:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=569632a5832c02bd84790e0411940b8d3150fa17
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Apr 2023 17:18:56 +0000

flight 180319 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180319/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  569632a5832c02bd84790e0411940b8d3150fa17
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    0 days
Testing same since   180314  2023-04-19 10:00:24 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

------------------------------------------------------------
commit 569632a5832c02bd84790e0411940b8d3150fa17
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed Apr 19 11:03:30 2023 +0200

    CHANGELOG: add gnttab_max_{maptrack_,}frames option changes
    
    Note in the changelog that the purpose of
    gnttab_max_{maptrack_,}frames command line options has been changed.
    
    Fixes: b2ea81d2b935 ('xen/grants: repurpose command line max options')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Henry Wang <Henry.Wang@arm.com>

commit 768846690d64bc730c1a1123e8de3af731bb2eb3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:02:47 2023 +0200

    x86: fix build with old gcc after CPU policy changes
    
    Old gcc won't cope with initializers involving unnamed struct/union
    fields.
    
    Fixes: 441b1b2a50ea ("x86/emul: Switch x86_emulate_ctxt to cpu_policy")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 741599fa521fbbb4cf71a98d7ec22ba5f4671cfa
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:01:29 2023 +0200

    x86: cpu{id,}_policy_updated() can be static
    
    The function merely needs moving earlier in the file to avoid the need
    for a forward declaration. While moving it, also rename it following the
    recent folding of CPUID and MSR policies.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 224211c55bdded74c5a65f5a7e34281a8c5c56f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:00:19 2023 +0200

    tests/cpu-policy: fix "run" goal
    
    An earlier change converted TARGET-y to TARGETS, but failed to replace
    all references. Convert run's dependency, but use $< in the command to
    avoid the leading blank that += inserts.
    
    Fixes: 6a9f5477637a ("tests/cpu-policy: Rework Makefile")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:28:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:28:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523642.813899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbd-0002DZ-Np; Wed, 19 Apr 2023 17:28:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523642.813899; Wed, 19 Apr 2023 17:28:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbd-0002DS-L2; Wed, 19 Apr 2023 17:28:33 +0000
Received: by outflank-mailman (input) for mailman id 523642;
 Wed, 19 Apr 2023 17:28:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBbc-0001ia-BW
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:28:32 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 98e55cf1-ded7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 19:28:29 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-171-_tjGd7aHPKSYhHr_wbflLA-1; Wed, 19 Apr 2023 13:28:24 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 19BA985A5A3;
 Wed, 19 Apr 2023 17:28:23 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 81D2D483EC4;
 Wed, 19 Apr 2023 17:28:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98e55cf1-ded7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925308;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TxF1pTMYrXy3osQY76QFRJXTyPl1McCp6rJWUy+E5CI=;
	b=hRQQiwtAIG4oo+d8OQ4SbFcTvtubQTjwAozD9IkIe61VBzhyO56Ar1v9a4vMv+Vcv7KmUd
	+4FP35yF6pl8lfVxSMLPtOmG5FnXBcLAMztYYcsXR2xc2uD1s/PVsZjuTrREbV2i0rqBLX
	lSsXaQbB63JH7NfatWH7GGEv81pb1Jg=
X-MC-Unique: _tjGd7aHPKSYhHr_wbflLA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 01/16] hw/qdev: introduce qdev_is_realized() helper
Date: Wed, 19 Apr 2023 13:28:02 -0400
Message-Id: <20230419172817.272758-2-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

Add a helper function to check whether the device is realized without
requiring the Big QEMU Lock. The next patch adds a second caller. The
goal is to avoid spreading DeviceState field accesses throughout the
code.
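
The shape of the helper can be sketched with a stand-in struct rather than QEMU's real DeviceState (names with a _stub suffix are illustrative assumptions, not QEMU APIs). The point is that the acquire-load is hidden behind one function, so lock-free callers never touch the field directly:

```c
/* Stand-in for QEMU's DeviceState (assumption: illustration only). */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct DeviceStateStub {
    atomic_bool realized;   /* written with release semantics on (un)realize */
} DeviceStateStub;

/* The acquire-load pairs with the release-store performed at realize
 * time, so a reader that observes realized==true also observes the
 * fully constructed device. */
static inline bool qdev_is_realized_stub(DeviceStateStub *dev)
{
    return atomic_load_explicit(&dev->realized, memory_order_acquire);
}

static inline void device_set_realized_stub(DeviceStateStub *dev, bool value)
{
    atomic_store_explicit(&dev->realized, value, memory_order_release);
}
```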

Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/qdev-core.h | 17 ++++++++++++++---
 hw/scsi/scsi-bus.c     |  3 +--
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
index bd50ad5ee1..4d734cf35e 100644
--- a/include/hw/qdev-core.h
+++ b/include/hw/qdev-core.h
@@ -1,6 +1,7 @@
 #ifndef QDEV_CORE_H
 #define QDEV_CORE_H
 
+#include "qemu/atomic.h"
 #include "qemu/queue.h"
 #include "qemu/bitmap.h"
 #include "qemu/rcu.h"
@@ -164,9 +165,6 @@ struct NamedClockList {
 
 /**
  * DeviceState:
- * @realized: Indicates whether the device has been fully constructed.
- *            When accessed outside big qemu lock, must be accessed with
- *            qatomic_load_acquire()
  * @reset: ResettableState for the device; handled by Resettable interface.
  *
  * This structure should not be accessed directly.  We declare it here
@@ -332,6 +330,19 @@ DeviceState *qdev_new(const char *name);
  */
 DeviceState *qdev_try_new(const char *name);
 
+/**
+ * qdev_is_realized:
+ * @dev: The device to check.
+ *
+ * May be called outside big qemu lock.
+ *
+ * Returns: %true% if the device has been fully constructed, %false% otherwise.
+ */
+static inline bool qdev_is_realized(DeviceState *dev)
+{
+    return qatomic_load_acquire(&dev->realized);
+}
+
 /**
  * qdev_realize: Realize @dev.
  * @dev: device to realize
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index c97176110c..07275fb631 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -60,8 +60,7 @@ static SCSIDevice *do_scsi_device_find(SCSIBus *bus,
      * the user access the device.
      */
 
-    if (retval && !include_unrealized &&
-        !qatomic_load_acquire(&retval->qdev.realized)) {
+    if (retval && !include_unrealized && !qdev_is_realized(&retval->qdev)) {
         retval = NULL;
     }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:28:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:28:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523643.813909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbf-0002TQ-06; Wed, 19 Apr 2023 17:28:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523643.813909; Wed, 19 Apr 2023 17:28:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbe-0002TJ-TV; Wed, 19 Apr 2023 17:28:34 +0000
Received: by outflank-mailman (input) for mailman id 523643;
 Wed, 19 Apr 2023 17:28:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBbd-0001ia-Br
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:28:33 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9a72062f-ded7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 19:28:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-78-r-TkgJWjMKico9NCxDBaVg-1; Wed, 19 Apr 2023 13:28:29 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4E9633801F59;
 Wed, 19 Apr 2023 17:28:28 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5CB2D40C2064;
 Wed, 19 Apr 2023 17:28:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a72062f-ded7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925310;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mEUiDYhTGG6/grGbCAPXx5jwh+m436EKZLWWN7O6BQM=;
	b=BhrFeIxCW9QgSBsPhxIzmYgXwF4ynexsTTCtwuv8ZSeFV0jGMBFJ2MNHwYtUqeyOh5Dsr1
	3JQOJPSUC7tS23brp+NgPvi2qjrVjxWms1DAp9fKKGXbHNAWhbRWL4H5s2G3R627eLtDZM
	94o726pLFl8s0+jYx56nZBr5tnBVlh8=
X-MC-Unique: r-TkgJWjMKico9NCxDBaVg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: [PATCH v2 03/16] virtio-scsi: stop using aio_disable_external() during unplug
Date: Wed, 19 Apr 2023 13:28:04 -0400
Message-Id: <20230419172817.272758-4-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

This patch is part of an effort to remove the aio_disable_external()
API because it does not fit in a multi-queue block layer world where
many AioContexts may be submitting requests to the same disk.

The SCSI emulation code is already in good shape to stop using
aio_disable_external(). It was only used by commit 9c5aad84da1c
("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
disk") to ensure that virtio_scsi_hotunplug() works while the guest
driver is submitting I/O.

Ensure virtio_scsi_hotunplug() is safe as follows:

1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
   device_set_realized() calls qatomic_set(&dev->realized, false) so
   that future scsi_device_get() calls return NULL because they exclude
   SCSIDevices with realized=false.

   That means virtio-scsi will reject new I/O requests to this
   SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
   virtio_scsi_hotunplug() is still executing. We are protected against
   new requests!

2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
   that in-flight requests are cancelled synchronously. This ensures
   that no in-flight requests remain once qdev_simple_device_unplug_cb()
   returns.

Thanks to these two conditions we don't need aio_disable_external()
anymore.
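
The two conditions can be modeled with stand-in types (assumption: the _stub names are illustrative, not QEMU APIs). Step 1 makes lookups on the request path fail, rejecting new I/O; step 2 synchronously cancels what is already in flight:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct ScsiDevStub {
    atomic_bool realized;
    int in_flight;              /* outstanding requests */
} ScsiDevStub;

/* Request-path lookup: excludes unrealized devices, so new I/O is
 * rejected (e.g. with VIRTIO_SCSI_S_BAD_TARGET) once unplug has begun. */
static ScsiDevStub *scsi_device_get_stub(ScsiDevStub *dev)
{
    return atomic_load_explicit(&dev->realized, memory_order_acquire)
               ? dev : NULL;
}

/* Synchronous cancellation, as scsi_device_purge_requests() provides. */
static void purge_requests_stub(ScsiDevStub *dev)
{
    dev->in_flight = 0;
}

static void hotunplug_stub(ScsiDevStub *dev)
{
    atomic_store_explicit(&dev->realized, false, memory_order_release); /* 1 */
    purge_requests_stub(dev);                                           /* 2 */
}
```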

Cc: Zhengui Li <lizhengui@huawei.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/scsi-disk.c   | 1 +
 hw/scsi/virtio-scsi.c | 3 ---
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 97c9b1c8cd..e01bd84541 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2522,6 +2522,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
 
 static void scsi_unrealize(SCSIDevice *dev)
 {
+    scsi_device_purge_requests(dev, SENSE_CODE(RESET));
     del_boot_device_lchs(&dev->qdev, NULL);
 }
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 000961446c..a02f9233ec 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1061,11 +1061,8 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
-    AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
-    aio_enable_external(ctx);
 
     if (s->ctx) {
         virtio_scsi_acquire(s);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:28:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:28:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523640.813879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbb-0001in-95; Wed, 19 Apr 2023 17:28:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523640.813879; Wed, 19 Apr 2023 17:28:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbb-0001ig-66; Wed, 19 Apr 2023 17:28:31 +0000
Received: by outflank-mailman (input) for mailman id 523640;
 Wed, 19 Apr 2023 17:28:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBba-0001ia-M1
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:28:30 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 985f871d-ded7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 19:28:28 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-637-61a8q54tM8SArQfWECWbMQ-1; Wed, 19 Apr 2023 13:28:22 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CF33185C069;
 Wed, 19 Apr 2023 17:28:20 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 65E3C483EC4;
 Wed, 19 Apr 2023 17:28:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 985f871d-ded7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925307;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=JAqVXMqjwf5xGB6ARdoAeUgfvsknmkfSYSzm+VSDhMc=;
	b=B/gffyY6tWv2BCXraGKpu2rn5Fy+0sL3j0wosTVDP7L4lg5wFOmT8bf3qog7y3en4Qcg3k
	JDOIBGCNx0/mxxnB9HHBCnzAH/3jWvA42caQyOlAaNzHyw/2CAtZWAF1DixhnxAcSbjIvu
	jDtpAIZMsaf9Gb15G0Q+ZZQFG/CP7RY=
X-MC-Unique: 61a8q54tM8SArQfWECWbMQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 00/16] block: remove aio_disable_external() API
Date: Wed, 19 Apr 2023 13:28:01 -0400
Message-Id: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

v2:
- Do not rely on BlockBackend request queuing, implement .drained_begin/end()
  instead in xen-block, virtio-blk, and virtio-scsi [Paolo]
- Add qdev_is_realized() API [Philippe]
- Add patch to avoid AioContext lock around blk_exp_ref/unref() [Paolo]
- Add patch to call .drained_begin/end() from main loop thread to simplify
  callback implementations

The aio_disable_external() API temporarily suspends file descriptor monitoring
in the event loop. The block layer uses this to prevent new I/O requests from
being submitted by the guest and elsewhere between bdrv_drained_begin() and
bdrv_drained_end().

While the block layer still needs to prevent new I/O requests in drained
sections, the aio_disable_external() API can be replaced with
.drained_begin/end/poll() callbacks that have been added to BdrvChildClass and
BlockDevOps.

This newer .drained_begin/end/poll() approach is attractive because it works
without specifying a specific AioContext. The block layer is moving towards
multi-queue and that means multiple AioContexts may be processing I/O
simultaneously.

The aio_disable_external() API was always somewhat hacky. It suspends all file
descriptors that were registered with is_external=true, even if they have
nothing to do with the BlockDriverState graph nodes that are being drained.
It's better to solve a block layer problem in the block layer than to have an
odd event loop API solution.

The approach in this patch series is to implement BlockDevOps
.drained_begin/end() callbacks that temporarily stop file descriptor handlers.
This ensures that new I/O requests are not submitted in drained sections.
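The per-device callback pattern can be sketched as follows (assumption: stand-in types with _stub names; the real code calls aio_set_fd_handler() on the device's own file descriptors). Instead of globally suspending every is_external fd, each device detaches only its own handlers, and a counter allows drained sections to nest:

```c
#include <stdbool.h>

typedef struct DevStub {
    int quiesce_counter;        /* drained sections can nest */
    bool fd_attached;           /* is the device's fd being monitored? */
} DevStub;

/* BlockDevOps-style .drained_begin(): stop accepting new I/O by
 * detaching this device's fd handlers on the outermost drain. */
static void drained_begin_stub(DevStub *d)
{
    if (d->quiesce_counter++ == 0) {
        d->fd_attached = false;
    }
}

/* .drained_end(): reattach handlers when the outermost drain ends. */
static void drained_end_stub(DevStub *d)
{
    if (--d->quiesce_counter == 0) {
        d->fd_attached = true;
    }
}
```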

The first two virtio-scsi patches were already sent as a separate series. I
included them because they are necessary in order to fully remove
aio_disable_external().

Based-on: 087bc644b7634436ca9d52fe58ba9234e2bef026 (kevin/block-next)

Stefan Hajnoczi (16):
  hw/qdev: introduce qdev_is_realized() helper
  virtio-scsi: avoid race between unplug and transport event
  virtio-scsi: stop using aio_disable_external() during unplug
  block/export: only acquire AioContext once for
    vhost_user_server_stop()
  util/vhost-user-server: rename refcount to in_flight counter
  block/export: wait for vhost-user-blk requests when draining
  block/export: stop using is_external in vhost-user-blk server
  hw/xen: do not use aio_set_fd_handler(is_external=true) in
    xen_xenstore
  block: add blk_in_drain() API
  block: drain from main loop thread in bdrv_co_yield_to_drain()
  xen-block: implement BlockDevOps->drained_begin()
  hw/xen: do not set is_external=true on evtchn fds
  block/export: rewrite vduse-blk drain code
  block/export: don't require AioContext lock around blk_exp_ref/unref()
  block/fuse: do not set is_external=true on FUSE fd
  virtio: make it possible to detach host notifier from any thread

 hw/block/dataplane/xen-block.h              |   2 +
 include/block/export.h                      |   2 +
 include/hw/qdev-core.h                      |  17 ++-
 include/qemu/vhost-user-server.h            |   8 +-
 include/sysemu/block-backend-common.h       |  25 ++--
 include/sysemu/block-backend-global-state.h |   1 +
 block/block-backend.c                       |   7 ++
 block/export/export.c                       |  13 +-
 block/export/fuse.c                         |  58 ++++++++-
 block/export/vduse-blk.c                    | 128 ++++++++++++++------
 block/export/vhost-user-blk-server.c        |  73 +++++++----
 block/io.c                                  |   3 +-
 hw/block/dataplane/virtio-blk.c             |   2 +
 hw/block/dataplane/xen-block.c              |  42 +++++--
 hw/block/xen-block.c                        |  24 +++-
 hw/i386/kvm/xen_xenstore.c                  |   2 +-
 hw/scsi/scsi-bus.c                          |   6 +-
 hw/scsi/scsi-disk.c                         |   1 +
 hw/scsi/virtio-scsi-dataplane.c             |   9 ++
 hw/scsi/virtio-scsi.c                       |  21 ++--
 hw/xen/xen-bus.c                            |  11 +-
 util/vhost-user-server.c                    |  37 +++---
 22 files changed, 348 insertions(+), 144 deletions(-)

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:28:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:28:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523641.813889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbc-0001xm-G2; Wed, 19 Apr 2023 17:28:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523641.813889; Wed, 19 Apr 2023 17:28:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbc-0001xf-DC; Wed, 19 Apr 2023 17:28:32 +0000
Received: by outflank-mailman (input) for mailman id 523641;
 Wed, 19 Apr 2023 17:28:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBbb-0001ia-BU
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:28:31 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 99217fe4-ded7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 19:28:29 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-196-Qupe_d5wMdSygq6czM087A-1; Wed, 19 Apr 2023 13:28:27 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D81E485C06D;
 Wed, 19 Apr 2023 17:28:25 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 208C6C16024;
 Wed, 19 Apr 2023 17:28:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99217fe4-ded7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925308;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ct1ICfub90ccCopLja+U3Sq9ZQX/65uvuN+fpfLMIoc=;
	b=JXD632/F9qlMu+j6HuP+J9WZudIIE2fWrz48o6YGHH9epoaWs3P+5hP3Mtey49f0ILHNHA
	1r0E9d8Ygk44FZDVEXhEI5QyIfd5FPvno8DvAZ9Ub602XAK65dLCbFesrbjLb6vZqOJRYg
	vRh9rohEqEz52jeNz7s/oK0eO6sJP1o=
X-MC-Unique: Qupe_d5wMdSygq6czM087A-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: [PATCH v2 02/16] virtio-scsi: avoid race between unplug and transport event
Date: Wed, 19 Apr 2023 13:28:03 -0400
Message-Id: <20230419172817.272758-3-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

Only report a transport reset event to the guest after the SCSIDevice
has been unrealized by qdev_simple_device_unplug_cb().

qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
to false so that scsi_device_find/get() no longer see it.

scsi_target_emulate_report_luns() also needs to be updated to filter out
SCSIDevices that are unrealized.

These changes ensure that the guest driver does not see the SCSIDevice
that's being unplugged if it responds very quickly to the transport
reset event.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/scsi-bus.c    |  3 ++-
 hw/scsi/virtio-scsi.c | 18 +++++++++---------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 07275fb631..64d7311757 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -486,7 +486,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
             DeviceState *qdev = kid->child;
             SCSIDevice *dev = SCSI_DEVICE(qdev);
 
-            if (dev->channel == channel && dev->id == id && dev->lun != 0) {
+            if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
+                qdev_is_realized(&dev->qdev)) {
                 store_lun(tmp, dev->lun);
                 g_byte_array_append(buf, tmp, 8);
                 len += 8;
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..000961446c 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1063,15 +1063,6 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     SCSIDevice *sd = SCSI_DEVICE(dev);
     AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
-        virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, sd,
-                               VIRTIO_SCSI_T_TRANSPORT_RESET,
-                               VIRTIO_SCSI_EVT_RESET_REMOVED);
-        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
-        virtio_scsi_release(s);
-    }
-
     aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
     aio_enable_external(ctx);
@@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
         virtio_scsi_release(s);
     }
+
+    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
+        virtio_scsi_acquire(s);
+        virtio_scsi_push_event(s, sd,
+                               VIRTIO_SCSI_T_TRANSPORT_RESET,
+                               VIRTIO_SCSI_EVT_RESET_REMOVED);
+        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
+        virtio_scsi_release(s);
+    }
 }
 
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:28:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523644.813919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbl-0002rN-HL; Wed, 19 Apr 2023 17:28:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523644.813919; Wed, 19 Apr 2023 17:28:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbl-0002rA-D0; Wed, 19 Apr 2023 17:28:41 +0000
Received: by outflank-mailman (input) for mailman id 523644;
 Wed, 19 Apr 2023 17:28:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBbk-0001ia-Fs
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:28:40 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9f49f692-ded7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 19:28:40 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-178-QXXZW6VEOQuQZxBq_W3LMA-1; Wed, 19 Apr 2023 13:28:32 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E9D802823810;
 Wed, 19 Apr 2023 17:28:30 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5495F2166B33;
 Wed, 19 Apr 2023 17:28:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f49f692-ded7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925318;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cpef4kT6gqf7n/rR0/ynntTmCBLqt8QRSEC7iM5/ics=;
	b=Gj6JkrpS82YHcn37K3ZEmocqzGqkD3zrNgIxXNj+0sB3sBh53Zp3l8EvES1VVrz3CY4IWV
	KvTOfpbRICb++cmuJfEIWLQ4csRiUVOktAjTo7MakOILzRRYGruIUXAKL2RCfrYHPdMm3O
	pshpmkeniLnTV34rAt204jArzf/n3TI=
X-MC-Unique: QXXZW6VEOQuQZxBq_W3LMA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 04/16] block/export: only acquire AioContext once for vhost_user_server_stop()
Date: Wed, 19 Apr 2023 13:28:05 -0400
Message-Id: <20230419172817.272758-5-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

vhost_user_server_stop() uses AIO_WAIT_WHILE(). AIO_WAIT_WHILE()
requires that AioContext is only acquired once.

Since blk_exp_request_shutdown() already acquires the AioContext, it
shouldn't be acquired again in vhost_user_server_stop().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 40f36ea214..5b6216069c 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -346,10 +346,9 @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
     aio_context_release(server->ctx);
 }
 
+/* server->ctx acquired by caller */
 void vhost_user_server_stop(VuServer *server)
 {
-    aio_context_acquire(server->ctx);
-
     qemu_bh_delete(server->restart_listener_bh);
     server->restart_listener_bh = NULL;
 
@@ -366,8 +365,6 @@ void vhost_user_server_stop(VuServer *server)
         AIO_WAIT_WHILE(server->ctx, server->co_trip);
     }
 
-    aio_context_release(server->ctx);
-
     if (server->listener) {
         qio_net_listener_disconnect(server->listener);
         object_unref(OBJECT(server->listener));
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:28:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:28:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523645.813929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbm-00039G-QD; Wed, 19 Apr 2023 17:28:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523645.813929; Wed, 19 Apr 2023 17:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbm-000390-MF; Wed, 19 Apr 2023 17:28:42 +0000
Received: by outflank-mailman (input) for mailman id 523645;
 Wed, 19 Apr 2023 17:28:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBbl-0001ia-R8
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:28:41 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a0094f25-ded7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 19:28:41 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-486-UcPUP67CM-aZaF8Q22BlDA-1; Wed, 19 Apr 2023 13:28:36 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 962993C14844;
 Wed, 19 Apr 2023 17:28:35 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id D2833492B04;
 Wed, 19 Apr 2023 17:28:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0094f25-ded7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925320;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+bLg6uTzP7Qgt6lQ3xKdY10AY3Nt3xLke1uGL1u9ShU=;
	b=JoXg8kWo+HvsWqgyV4zeD1qapFv8eXMPyGRyIG15Js1YHFbfccP8uA3OdR9F5R08P+LnEC
	lNb41IBhFjndMGGX68m2kcqOknhn9XYEBmH5GRWgdQ9RrjxuZFGWg1WDZ5hpPLHyNFaId/
	cYRQScHyAdh2epn9Ma/uTmzylurGPC0=
X-MC-Unique: UcPUP67CM-aZaF8Q22BlDA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 06/16] block/export: wait for vhost-user-blk requests when draining
Date: Wed, 19 Apr 2023 13:28:07 -0400
Message-Id: <20230419172817.272758-7-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9

Each vhost-user-blk request runs in a coroutine. When the BlockBackend
enters a drained section we need to enter a quiescent state. Currently
any in-flight requests race with bdrv_drained_begin() because it is
unaware of vhost-user-blk requests.

When blk_co_preadv/pwritev()/etc return, they wake the
bdrv_drained_begin() thread, but vhost-user-blk request processing has
not yet finished. The request coroutine continues executing while the
main loop thread thinks it is in a drained section.

One example where this is unsafe is for blk_set_aio_context() where
bdrv_drained_begin() is called before .aio_context_detached() and
.aio_context_attach(). If request coroutines are still running after
bdrv_drained_begin(), then the AioContext could change underneath them
and they race with new requests processed in the new AioContext. This
could lead to virtqueue corruption, for example.

(This example is theoretical; I came across it while reading the code
and have not tried to reproduce it.)

It's easy to make bdrv_drained_begin() wait for in-flight requests: add
a .drained_poll() callback that checks the VuServer's in-flight counter.
VuServer just needs an API that returns true when there are requests in
flight. The in-flight counter needs to be atomic.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/vhost-user-server.h     |  4 +++-
 block/export/vhost-user-blk-server.c | 19 +++++++++++++++++++
 util/vhost-user-server.c             | 14 ++++++++++----
 3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index bc0ac9ddb6..b1c1cda886 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -40,8 +40,9 @@ typedef struct {
     int max_queues;
     const VuDevIface *vu_iface;
 
+    unsigned int in_flight; /* atomic */
+
     /* Protected by ctx lock */
-    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -62,6 +63,7 @@ void vhost_user_server_stop(VuServer *server);
 
 void vhost_user_server_inc_in_flight(VuServer *server);
 void vhost_user_server_dec_in_flight(VuServer *server);
+bool vhost_user_server_has_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index e93f2ed6b4..dbf5207162 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -254,6 +254,22 @@ static void vu_blk_exp_request_shutdown(BlockExport *exp)
     vhost_user_server_stop(&vexp->vu_server);
 }
 
+/*
+ * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
+ *
+ * Called with vexp->export.ctx acquired.
+ */
+static bool vu_blk_drained_poll(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    return vhost_user_server_has_in_flight(&vexp->vu_server);
+}
+
+static const BlockDevOps vu_blk_dev_ops = {
+    .drained_poll  = vu_blk_drained_poll,
+};
+
 static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              Error **errp)
 {
@@ -292,6 +308,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
                              logical_block_size, num_queues);
 
+    blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vexp);
 
@@ -299,6 +316,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                                  num_queues, &vu_blk_iface, errp)) {
         blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
                                         blk_aio_detach, vexp);
+        blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
     }
@@ -312,6 +330,7 @@ static void vu_blk_exp_delete(BlockExport *exp)
 
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vexp);
+    blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
 
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 1622f8cfb3..2e6b640050 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
 void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->in_flight++;
+    qatomic_inc(&server->in_flight);
 }
 
 void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->in_flight--;
-    if (server->wait_idle && !server->in_flight) {
-        aio_co_wake(server->co_trip);
+    if (qatomic_fetch_dec(&server->in_flight) == 1) {
+        if (server->wait_idle) {
+            aio_co_wake(server->co_trip);
+        }
     }
 }
 
+bool vhost_user_server_has_in_flight(VuServer *server)
+{
+    return qatomic_load_acquire(&server->in_flight) > 0;
+}
+
 static bool coroutine_fn
 vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:28:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:28:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523646.813939 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbp-0003Ss-3z; Wed, 19 Apr 2023 17:28:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523646.813939; Wed, 19 Apr 2023 17:28:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbp-0003Sh-04; Wed, 19 Apr 2023 17:28:45 +0000
Received: by outflank-mailman (input) for mailman id 523646;
 Wed, 19 Apr 2023 17:28:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBbo-00036z-4B
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:28:44 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9fb41f81-ded7-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 19:28:40 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-608-H9zMJQ8GPMyry8zMS7zBDw-1; Wed, 19 Apr 2023 13:28:34 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 3A49285A588;
 Wed, 19 Apr 2023 17:28:33 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 9305D492C3E;
 Wed, 19 Apr 2023 17:28:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9fb41f81-ded7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925319;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=leMfEV1uRMjOAqE+D3iAWq8u1umrQobSbCNMGjDuzrE=;
	b=KXbY1NielY7yQfYfm+k9uRC64FeecyuiWCy6KQYfbItRrbE3LiMbvEcyaBnCf54npF3mMg
	79X+MFK2URSVT8BxA2YY3nXvsYU4+pL5lmcyRCcTC8eZrbD9DrAxM6qGfVzfXwVYu5ySVf
	4KXgXSEZQUjKzJ38GZGfQdkEo7OjUvk=
X-MC-Unique: H9zMJQ8GPMyry8zMS7zBDw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 05/16] util/vhost-user-server: rename refcount to in_flight counter
Date: Wed, 19 Apr 2023 13:28:06 -0400
Message-Id: <20230419172817.272758-6-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

The VuServer object has a refcount field and ref/unref APIs. The name is
confusing because it's actually an in-flight request counter instead of
a refcount.

Normally a refcount destroys the object upon reaching zero. The VuServer
counter is used to wake up the vhost-user coroutine when there are no
more requests.

Avoid confusion by renaming refcount and ref/unref to in_flight and
inc/dec.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/vhost-user-server.h     |  6 +++---
 block/export/vhost-user-blk-server.c | 11 +++++++----
 util/vhost-user-server.c             | 14 +++++++-------
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index 25c72433ca..bc0ac9ddb6 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -41,7 +41,7 @@ typedef struct {
     const VuDevIface *vu_iface;
 
     /* Protected by ctx lock */
-    unsigned int refcount;
+    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -60,8 +60,8 @@ bool vhost_user_server_start(VuServer *server,
 
 void vhost_user_server_stop(VuServer *server);
 
-void vhost_user_server_ref(VuServer *server);
-void vhost_user_server_unref(VuServer *server);
+void vhost_user_server_inc_in_flight(VuServer *server);
+void vhost_user_server_dec_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 3409d9e02e..e93f2ed6b4 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -49,7 +49,10 @@ static void vu_blk_req_complete(VuBlkReq *req, size_t in_len)
     free(req);
 }
 
-/* Called with server refcount increased, must decrease before returning */
+/*
+ * Called with server in_flight counter increased, must decrease before
+ * returning.
+ */
 static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
 {
     VuBlkReq *req = opaque;
@@ -67,12 +70,12 @@ static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
                                     in_num, out_num);
     if (in_len < 0) {
         free(req);
-        vhost_user_server_unref(server);
+        vhost_user_server_dec_in_flight(server);
         return;
     }
 
     vu_blk_req_complete(req, in_len);
-    vhost_user_server_unref(server);
+    vhost_user_server_dec_in_flight(server);
 }
 
 static void vu_blk_process_vq(VuDev *vu_dev, int idx)
@@ -94,7 +97,7 @@ static void vu_blk_process_vq(VuDev *vu_dev, int idx)
         Coroutine *co =
             qemu_coroutine_create(vu_blk_virtio_process_req, req);
 
-        vhost_user_server_ref(server);
+        vhost_user_server_inc_in_flight(server);
         qemu_coroutine_enter(co);
     }
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 5b6216069c..1622f8cfb3 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -75,16 +75,16 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
     error_report("vu_panic: %s", buf);
 }
 
-void vhost_user_server_ref(VuServer *server)
+void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->refcount++;
+    server->in_flight++;
 }
 
-void vhost_user_server_unref(VuServer *server)
+void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->refcount--;
-    if (server->wait_idle && !server->refcount) {
+    server->in_flight--;
+    if (server->wait_idle && !server->in_flight) {
         aio_co_wake(server->co_trip);
     }
 }
@@ -192,13 +192,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->refcount) {
+    if (server->in_flight) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
-    assert(server->refcount == 0);
+    assert(server->in_flight == 0);
 
     vu_deinit(vu_dev);
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:28:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:28:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523647.813949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbr-0003u9-DI; Wed, 19 Apr 2023 17:28:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523647.813949; Wed, 19 Apr 2023 17:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbr-0003ty-9T; Wed, 19 Apr 2023 17:28:47 +0000
Received: by outflank-mailman (input) for mailman id 523647;
 Wed, 19 Apr 2023 17:28:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBbp-00036z-ES
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:28:45 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a1822235-ded7-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 19:28:43 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-140-B9iU9cznPoul00GA4s393w-1; Wed, 19 Apr 2023 13:28:39 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E5A10185A791;
 Wed, 19 Apr 2023 17:28:37 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4AB5E492C3E;
 Wed, 19 Apr 2023 17:28:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1822235-ded7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925322;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rd8h2e2Z0uhjua/p0PDxA8jGvK3/DbHaZUIqG5W/DmA=;
	b=JqtDMXgTsQ1H/W3AelkJtzxSnizjEvx8CqYrFlyEfVH6FhqxhFs3jDEd7tXSt1yIBqWju2
	uYUrWz4CCHxkhpxpBtAomjv2/kln1f2WRxBPoPvQvqt6i0b/dEMV88b9/5F9JJ+liwEblz
	wsdtVNkUz94QhY+XEc97qIYcRp3WBFs=
X-MC-Unique: B9iU9cznPoul00GA4s393w-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 07/16] block/export: stop using is_external in vhost-user-blk server
Date: Wed, 19 Apr 2023 13:28:08 -0400
Message-Id: <20230419172817.272758-8-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.

Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used since multiple AioContexts may be processing I/O, not
just one.

Switch to BlockDevOps->drained_begin/end() callbacks.
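
[Editor's illustration, not part of the patch: a minimal standalone C sketch of the drained_begin/end idea this commit applies. All names here are invented for illustration; the real QEMU code detaches/attaches the vhost-user server's AioContext instead of flipping a flag.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Mock of the drained-section contract: drained_begin detaches the
 * export's event handlers so no new requests start; drained_end
 * re-attaches them once all nested drains have ended.
 */
typedef struct {
    bool handlers_attached; /* stands in for "fd handlers registered" */
    int drain_count;        /* drained sections nest */
} MockVuExport;

static void mock_drained_begin(MockVuExport *exp)
{
    exp->drain_count++;
    exp->handlers_attached = false; /* stop reacting to virtqueue kicks */
}

static void mock_drained_end(MockVuExport *exp)
{
    exp->drain_count--;
    if (exp->drain_count == 0) {
        exp->handlers_attached = true; /* resume processing */
    }
}

/* A new request may only be accepted while handlers are attached */
static bool mock_can_accept_request(const MockVuExport *exp)
{
    return exp->handlers_attached;
}
```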

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
 util/vhost-user-server.c             | 10 +++----
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index dbf5207162..6e1bc196fb 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -207,22 +207,6 @@ static const VuDevIface vu_blk_iface = {
     .process_msg           = vu_blk_process_msg,
 };
 
-static void blk_aio_attached(AioContext *ctx, void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vexp->export.ctx = ctx;
-    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
-}
-
-static void blk_aio_detach(void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vhost_user_server_detach_aio_context(&vexp->vu_server);
-    vexp->export.ctx = NULL;
-}
-
 static void
 vu_blk_initialize_config(BlockDriverState *bs,
                          struct virtio_blk_config *config,
@@ -254,6 +238,25 @@ static void vu_blk_exp_request_shutdown(BlockExport *exp)
     vhost_user_server_stop(&vexp->vu_server);
 }
 
+/* Called with vexp->export.ctx acquired */
+static void vu_blk_drained_begin(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
+}
+
+/* Called with vexp->export.blk AioContext acquired */
+static void vu_blk_drained_end(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    vexp->export.ctx = blk_get_aio_context(vexp->export.blk);
+
+    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
+}
+
 /*
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
  *
@@ -267,6 +270,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end   = vu_blk_drained_end,
     .drained_poll  = vu_blk_drained_poll,
 };
 
@@ -309,13 +314,9 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              logical_block_size, num_queues);
 
     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
-    blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                 vexp);
 
     if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
                                  num_queues, &vu_blk_iface, errp)) {
-        blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
-                                        blk_aio_detach, vexp);
         blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
@@ -328,8 +329,6 @@ static void vu_blk_exp_delete(BlockExport *exp)
 {
     VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
 
-    blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                    vexp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 2e6b640050..332aea9306 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                        NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:28:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:28:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523649.813959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbs-0004HF-Tx; Wed, 19 Apr 2023 17:28:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523649.813959; Wed, 19 Apr 2023 17:28:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbs-0004Fo-Op; Wed, 19 Apr 2023 17:28:48 +0000
Received: by outflank-mailman (input) for mailman id 523649;
 Wed, 19 Apr 2023 17:28:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBbr-00036z-HM
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:28:47 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a2d7273d-ded7-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 19:28:45 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-581-gx2svh4pM8KqGWApR1ZRlw-1; Wed, 19 Apr 2023 13:28:41 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5B7E68996E0;
 Wed, 19 Apr 2023 17:28:40 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id BABDC2026D16;
 Wed, 19 Apr 2023 17:28:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2d7273d-ded7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925324;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1INS84S/tNTOXSgAzqbNWqBblrNC9wvg2yPbndQJO20=;
	b=PJFWR14PpG+d062hNmHOVKeOR3b+9CRPzlKJj0AN2yN96fPIUJCYgRnycJU4eNPoK2czPl
	WzQs9676Xf6UYI/IlGb2NncQfFRH559BK/HqFbnjJ0xHHGYP8s5f6GCxBVI1Wgmu+KRZbn
	ARV0ivAhVH4AvVRYv6dsfIIOUQffyBo=
X-MC-Unique: gx2svh4pM8KqGWApR1ZRlw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: [PATCH v2 08/16] hw/xen: do not use aio_set_fd_handler(is_external=true) in xen_xenstore
Date: Wed, 19 Apr 2023 13:28:09 -0400
Message-Id: <20230419172817.272758-9-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

There is no need to suspend activity between aio_disable_external() and
aio_enable_external(), which are mainly used for the block layer's drain
operation.

This is part of ongoing work to remove the aio_disable_external() API.

Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/i386/kvm/xen_xenstore.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 900679af8a..6e81bc8791 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), true,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:29:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:29:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523651.813969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbu-0004bj-At; Wed, 19 Apr 2023 17:28:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523651.813969; Wed, 19 Apr 2023 17:28:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBbu-0004b6-4l; Wed, 19 Apr 2023 17:28:50 +0000
Received: by outflank-mailman (input) for mailman id 523651;
 Wed, 19 Apr 2023 17:28:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBbt-0001ia-BU
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:28:49 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a47b38c0-ded7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 19:28:48 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-542-_TxeHOorMreDsIuRonGL0g-1; Wed, 19 Apr 2023 13:28:43 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 42CE31C189B0;
 Wed, 19 Apr 2023 17:28:42 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B1BA2492C3E;
 Wed, 19 Apr 2023 17:28:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a47b38c0-ded7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925327;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6sTqQmJRDK/JwUQj6FoYqmlPFJsGwZK9pc+TSucKp9o=;
	b=Mv8C/c4wquXxxhjQPf7nJkxAOdO8BEPQw2gqJE7arr1WGda4/uL01w6m3THBIC/eELrKCX
	aisiPPFcfWpujsDIL1eiKTkbccxC5Id0DXBYmVxoTl0V/POOQIUlxXHL7DD4nsaDm1VeUR
	5n7SB5drWtpAf3hdY1lVRdFUwD3MWwE=
X-MC-Unique: _TxeHOorMreDsIuRonGL0g-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 09/16] block: add blk_in_drain() API
Date: Wed, 19 Apr 2023 13:28:10 -0400
Message-Id: <20230419172817.272758-10-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

The BlockBackend quiesce_counter is greater than zero during drained
sections. Add an API to check whether the BlockBackend is in a drained
section.

The next patch will use this API.
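
[Editor's illustration, not part of the patch: a self-contained C model of the quiesce_counter semantics blk_in_drain() exposes. The struct and function names are invented; the real implementation reads blk->quiesce_counter atomically under GLOBAL_STATE_CODE().]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Drained sections nest: each bdrv_drained_begin() increments the
 * counter and each bdrv_drained_end() decrements it. "In drain" simply
 * means the counter is greater than zero.
 */
typedef struct {
    int quiesce_counter;
} MockBlk;

static void mock_drain_begin(MockBlk *blk)
{
    blk->quiesce_counter++;
}

static void mock_drain_end(MockBlk *blk)
{
    assert(blk->quiesce_counter > 0);
    blk->quiesce_counter--;
}

/* Mock of blk_in_drain(): are we inside at least one drained section? */
static bool mock_blk_in_drain(const MockBlk *blk)
{
    return blk->quiesce_counter > 0;
}
```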

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/sysemu/block-backend-global-state.h | 1 +
 block/block-backend.c                       | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/include/sysemu/block-backend-global-state.h b/include/sysemu/block-backend-global-state.h
index 2b6d27db7c..ac7cbd6b5e 100644
--- a/include/sysemu/block-backend-global-state.h
+++ b/include/sysemu/block-backend-global-state.h
@@ -78,6 +78,7 @@ void blk_activate(BlockBackend *blk, Error **errp);
 int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags);
 void blk_aio_cancel(BlockAIOCB *acb);
 int blk_commit_all(void);
+bool blk_in_drain(BlockBackend *blk);
 void blk_drain(BlockBackend *blk);
 void blk_drain_all(void);
 void blk_set_on_error(BlockBackend *blk, BlockdevOnError on_read_error,
diff --git a/block/block-backend.c b/block/block-backend.c
index 9e0f48692a..e0a1d9ec0f 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1270,6 +1270,13 @@ blk_check_byte_request(BlockBackend *blk, int64_t offset, int64_t bytes)
     return 0;
 }
 
+/* Are we currently in a drained section? */
+bool blk_in_drain(BlockBackend *blk)
+{
+    GLOBAL_STATE_CODE(); /* change to IO_OR_GS_CODE(), if necessary */
+    return qatomic_read(&blk->quiesce_counter);
+}
+
 /* To be called between exactly one pair of blk_inc/dec_in_flight() */
 static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:39:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:39:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523672.813992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBll-00006g-AP; Wed, 19 Apr 2023 17:39:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523672.813992; Wed, 19 Apr 2023 17:39:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBll-0008VQ-01; Wed, 19 Apr 2023 17:39:01 +0000
Received: by outflank-mailman (input) for mailman id 523672;
 Wed, 19 Apr 2023 17:38:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBcS-00036z-4g
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:29:24 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b7ecc5cb-ded7-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 19:29:21 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-131-E-rqK67lM7ud5XIt6PKv0Q-1; Wed, 19 Apr 2023 13:29:14 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id DC4C9101A531;
 Wed, 19 Apr 2023 17:29:13 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 41B9740BC798;
 Wed, 19 Apr 2023 17:29:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7ecc5cb-ded7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925360;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Kvgpibm7uG8By4vHBOsqesPFxvQEinDBgjjk4WE12qY=;
	b=c1zupwSmsln/Eeac95ycfoim13UmKYh85VP0gxXovSb2NwWWzGjj9SEam+QiCoh0DcRDPQ
	nbif/jDNfRPMF8+FR3M6I9WU6E2YPjHt5svK2oyazcE16dowSlmsonfqvswJ+5eunl3eDa
	uXejSZ2f8DK5Pj3wbwJ+D97SlVA6wb4=
X-MC-Unique: E-rqK67lM7ud5XIt6PKv0Q-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 13/16] block/export: rewrite vduse-blk drain code
Date: Wed, 19 Apr 2023 13:28:14 -0400
Message-Id: <20230419172817.272758-14-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

vduse_blk_detach_ctx() waits for in-flight requests using
AIO_WAIT_WHILE(). This is not allowed according to a comment in
bdrv_set_aio_context_commit():

  /*
   * Take the old AioContex when detaching it from bs.
   * At this point, new_context lock is already acquired, and we are now
   * also taking old_context. This is safe as long as bdrv_detach_aio_context
   * does not call AIO_POLL_WHILE().
   */

Use this opportunity to rewrite the drain code in vduse-blk:

- Use the BlockExport refcount so that vduse_blk_exp_delete() is only
  called when there are no more requests in flight.

- Implement .drained_poll() so in-flight request coroutines are stopped
  by the time .bdrv_detach_aio_context() is called.

- Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
  .bdrv_detach_aio_context() constraint violation. It's no longer
  needed due to the previous changes.

- Always handle the VDUSE file descriptor, even in drained sections. The
  VDUSE file descriptor doesn't submit I/O, so it's safe to handle it in
  drained sections. This ensures that the VDUSE kernel code gets a fast
  response.

- Suspend virtqueue fd handlers in .drained_begin() and resume them in
  .drained_end(). This eliminates the need for the
  aio_set_fd_handler(is_external=true) flag, which is being removed from
  QEMU.

This is a long list but splitting it into individual commits would
probably lead to git bisect failures - the changes are all related.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
 1 file changed, 93 insertions(+), 39 deletions(-)

diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index f7ae44e3ce..35dc8fcf45 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
     VduseDev *dev;
     uint16_t num_queues;
     char *recon_file;
-    unsigned int inflight;
+    unsigned int inflight; /* atomic */
+    bool vqs_started;
 } VduseBlkExport;
 
 typedef struct VduseBlkReq {
@@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
 
 static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
-    vblk_exp->inflight++;
+    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
+        /* Prevent export from being deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_ref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
+    }
 }
 
 static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
 {
-    if (--vblk_exp->inflight == 0) {
+    if (qatomic_fetch_dec(&vblk_exp->inflight) == 1) {
+        /* Wake AIO_WAIT_WHILE() */
         aio_wait_kick();
+
+        /* Now the export can be deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_unref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -124,8 +136,12 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
 
+    if (!vblk_exp->vqs_started) {
+        return; /* vduse_blk_drained_end() will start vqs later */
+    }
+
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -133,9 +149,14 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
+    int fd = vduse_queue_get_fd(vq);
 
-    aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, NULL, NULL, NULL, NULL, NULL);
+    if (fd < 0) {
+        return;
+    }
+
+    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+                       NULL, NULL, NULL, NULL, NULL);
 }
 
 static const VduseOps vduse_blk_ops = {
@@ -152,42 +173,19 @@ static void on_vduse_dev_kick(void *opaque)
 
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
-    int i;
-
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, on_vduse_dev_kick, NULL, NULL, NULL,
+                       false, on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd, true,
-                           on_vduse_vq_kick, NULL, NULL, NULL, vq);
-    }
+    /* Virtqueues are handled by vduse_blk_drained_end() */
 }
 
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
-    int i;
-
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd,
-                           true, NULL, NULL, NULL, NULL, NULL);
-    }
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, NULL, NULL, NULL, NULL, NULL);
+                       false, NULL, NULL, NULL, NULL, NULL);
 
-    AIO_WAIT_WHILE(vblk_exp->export.ctx, vblk_exp->inflight > 0);
+    /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
 
 
@@ -220,8 +218,55 @@ static void vduse_blk_resize(void *opaque)
                             (char *)&config.capacity);
 }
 
+static void vduse_blk_stop_virtqueues(VduseBlkExport *vblk_exp)
+{
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_disable_queue(vblk_exp->dev, vq);
+    }
+
+    vblk_exp->vqs_started = false;
+}
+
+static void vduse_blk_start_virtqueues(VduseBlkExport *vblk_exp)
+{
+    vblk_exp->vqs_started = true;
+
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_enable_queue(vblk_exp->dev, vq);
+    }
+}
+
+static void vduse_blk_drained_begin(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_stop_virtqueues(vblk_exp);
+}
+
+static void vduse_blk_drained_end(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_start_virtqueues(vblk_exp);
+}
+
+static bool vduse_blk_drained_poll(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    return qatomic_read(&vblk_exp->inflight) > 0;
+}
+
 static const BlockDevOps vduse_block_ops = {
-    .resize_cb = vduse_blk_resize,
+    .resize_cb     = vduse_blk_resize,
+    .drained_begin = vduse_blk_drained_begin,
+    .drained_end   = vduse_blk_drained_end,
+    .drained_poll  = vduse_blk_drained_poll,
 };
 
 static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
@@ -268,6 +313,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vblk_exp->handler.serial = g_strdup(vblk_opts->serial ?: "");
     vblk_exp->handler.logical_block_size = logical_block_size;
     vblk_exp->handler.writable = opts->writable;
+    vblk_exp->vqs_started = true;
 
     config.capacity =
             cpu_to_le64(blk_getlength(exp->blk) >> VIRTIO_BLK_SECTOR_BITS);
@@ -322,14 +368,20 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), true,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vblk_exp);
-
     blk_set_dev_ops(exp->blk, &vduse_block_ops, exp);
 
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * virtqueue fd handlers. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->blk, true);
+
     return 0;
 err:
     vduse_dev_destroy(vblk_exp->dev);
@@ -344,6 +396,9 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
     int ret;
 
+    assert(qatomic_read(&vblk_exp->inflight) == 0);
+
+    vduse_blk_detach_ctx(vblk_exp);
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vblk_exp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
@@ -355,13 +410,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     g_free(vblk_exp->handler.serial);
 }
 
+/* Called with exp->ctx acquired */
 static void vduse_blk_exp_request_shutdown(BlockExport *exp)
 {
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
 
-    aio_context_acquire(vblk_exp->export.ctx);
-    vduse_blk_detach_ctx(vblk_exp);
-    aio_context_acquire(vblk_exp->export.ctx);
+    vduse_blk_stop_virtqueues(vblk_exp);
 }
 
 const BlockExportDriver blk_exp_vduse_blk = {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:39:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:39:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523676.814018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBln-0000d4-5R; Wed, 19 Apr 2023 17:39:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523676.814018; Wed, 19 Apr 2023 17:39:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBlm-0000aE-Kj; Wed, 19 Apr 2023 17:39:02 +0000
Received: by outflank-mailman (input) for mailman id 523676;
 Wed, 19 Apr 2023 17:38:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBcR-00036z-4h
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:29:23 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b7ffd628-ded7-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 19:29:21 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-263-whon3Qt8MtqXVgvzgaJ_Ug-1; Wed, 19 Apr 2023 13:29:17 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5340285A5B1;
 Wed, 19 Apr 2023 17:29:16 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 9494440C2064;
 Wed, 19 Apr 2023 17:29:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7ffd628-ded7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925360;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=svW5Iqq6ywfg5XGxDCHKSniKJYjg3etTmQ7BLHFrTFw=;
	b=bJ5eK0foYwZXHicu/RQwVMa+0Kc3xER1cfViU5+dVZz1Ou018pGa6YD0BGU5W8a1tuDxAy
	s/tkL4BOJD+Qgy5XodMygaSfUX7qOEIIqtNQv65I2lQh6ZjkIcCcAm9NEO1cj0E1I2Vn5x
	zz7JinxcSLT4B39hYq8Xz0404G+7YlE=
X-MC-Unique: whon3Qt8MtqXVgvzgaJ_Ug-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 14/16] block/export: don't require AioContext lock around blk_exp_ref/unref()
Date: Wed, 19 Apr 2023 13:28:15 -0400
Message-Id: <20230419172817.272758-15-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

The FUSE export calls blk_exp_ref/unref() without the AioContext lock.
Instead of fixing the FUSE export, adjust blk_exp_ref/unref() so they
work without the AioContext lock. This is less error-prone overall.
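
The lock-free unref pattern can be sketched like this (a simplified
model: C11 atomics stand in for qatomic_*, and the Export struct and its
deleted flag are hypothetical — the real code schedules a main-loop BH
instead of setting a flag):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical export object with an atomic reference count. */
typedef struct {
    atomic_int refcount;
    bool deleted;
} Export;

static void exp_ref(Export *e)
{
    assert(atomic_load(&e->refcount) > 0);
    atomic_fetch_add(&e->refcount, 1);
}

/*
 * The thread whose decrement takes the count to zero is the only one
 * that can still reach the object, so it may trigger deletion without
 * holding any lock (QEMU defers the actual delete to a main-loop BH).
 */
static void exp_unref(Export *e)
{
    assert(atomic_load(&e->refcount) > 0);
    if (atomic_fetch_sub(&e->refcount, 1) == 1) {
        e->deleted = true;  /* stands in for aio_bh_schedule_oneshot() */
    }
}
```

The key point is that fetch-and-decrement returning 1 identifies exactly
one caller as the last holder, so no lock is needed to serialize the
decision to delete.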

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/export.h   |  2 ++
 block/export/export.c    | 13 ++++++-------
 block/export/vduse-blk.c |  4 ----
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/include/block/export.h b/include/block/export.h
index 7feb02e10d..f2fe0f8078 100644
--- a/include/block/export.h
+++ b/include/block/export.h
@@ -57,6 +57,8 @@ struct BlockExport {
      * Reference count for this block export. This includes strong references
      * both from the owner (qemu-nbd or the monitor) and clients connected to
      * the export.
+     *
+     * Use atomics to access this field.
      */
     int refcount;
 
diff --git a/block/export/export.c b/block/export/export.c
index e3fee60611..edb05c9268 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -201,11 +201,10 @@ fail:
     return NULL;
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_ref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    exp->refcount++;
+    assert(qatomic_read(&exp->refcount) > 0);
+    qatomic_inc(&exp->refcount);
 }
 
 /* Runs in the main thread */
@@ -227,11 +226,10 @@ static void blk_exp_delete_bh(void *opaque)
     aio_context_release(aio_context);
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_unref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    if (--exp->refcount == 0) {
+    assert(qatomic_read(&exp->refcount) > 0);
+    if (qatomic_fetch_dec(&exp->refcount) == 1) {
         /* Touch the block_exports list only in the main thread */
         aio_bh_schedule_oneshot(qemu_get_aio_context(), blk_exp_delete_bh,
                                 exp);
@@ -339,7 +337,8 @@ void qmp_block_export_del(const char *id,
     if (!has_mode) {
         mode = BLOCK_EXPORT_REMOVE_MODE_SAFE;
     }
-    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE && exp->refcount > 1) {
+    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE &&
+        qatomic_read(&exp->refcount) > 1) {
         error_setg(errp, "export '%s' still in use", exp->id);
         error_append_hint(errp, "Use mode='hard' to force client "
                           "disconnect\n");
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index 35dc8fcf45..611430afda 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -44,9 +44,7 @@ static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
     if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
         /* Prevent export from being deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_ref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -57,9 +55,7 @@ static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
         aio_wait_kick();
 
         /* Now the export can be deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_unref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:39:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:39:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523675.814013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBlm-0000UE-Pc; Wed, 19 Apr 2023 17:39:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523675.814013; Wed, 19 Apr 2023 17:39:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBlm-0000Si-5e; Wed, 19 Apr 2023 17:39:02 +0000
Received: by outflank-mailman (input) for mailman id 523675;
 Wed, 19 Apr 2023 17:38:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBcW-0001ia-0d
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:29:28 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bb7d62a9-ded7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 19:29:27 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-457-UORK3zzVOxmLu7977uaTMQ-1; Wed, 19 Apr 2023 13:29:22 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 59D9D1C189A4;
 Wed, 19 Apr 2023 17:29:21 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 8045B40C2064;
 Wed, 19 Apr 2023 17:29:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb7d62a9-ded7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925366;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Qs0Wu6hxc7j58DwPPRdDhx+qIxo2MCSBXMmfJ14jr7A=;
	b=P81Hbad2MkrxYTfi71uOxxkGaggxm+9W6bTi5GM2BTkPDcPFnK7pF6YXAlQOQkOq/8nV1O
	hWV0a02+t5ibJI9zvKkiL8RyS6ToCmoM4vuje/hz+FkZktCEZmssTifAfPzGl6DH0z5Wb/
	GjxSdH06n467P6qYOIzjoaC5c7SG0UE=
X-MC-Unique: UORK3zzVOxmLu7977uaTMQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 16/16] virtio: make it possible to detach host notifier from any thread
Date: Wed, 19 Apr 2023 13:28:17 -0400
Message-Id: <20230419172817.272758-17-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

virtio_queue_aio_detach_host_notifier() does two things:
1. It removes the fd handler from the event loop.
2. It processes the virtqueue one last time.

The first step can be performed by any thread and without taking the
AioContext lock.

The second step may need the AioContext lock (depending on the device
implementation) and runs in the thread where request processing takes
place. virtio-blk and virtio-scsi therefore call
virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in
the AioContext.

Scheduling a BH is undesirable for .drained_begin() functions. The next
patch will introduce a .drained_begin() function that needs to call
virtio_queue_aio_detach_host_notifier().

Move the virtqueue processing out to the callers of
virtio_queue_aio_detach_host_notifier() so that the function can be
called from any thread. This is in preparation for the next patch.
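
The split can be modelled in miniature as follows. All types and names
here are hypothetical stand-ins for the virtio/AioContext APIs, meant
only to show why step 1 becomes thread-agnostic once step 2 moves to the
caller:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of a virtqueue with a registered fd handler. */
typedef struct {
    bool handler_registered;
    int pending;    /* requests sitting in the virtqueue */
    int processed;
} VirtQueueModel;

/* Step 1: safe from any thread - just unregister the fd handler. */
static void detach_host_notifier(VirtQueueModel *vq)
{
    vq->handler_registered = false;
}

/* Step 2: run in the request-processing thread - drain what was queued
 * before the handler was removed, so no request is lost. */
static void host_notifier_read(VirtQueueModel *vq)
{
    vq->processed += vq->pending;
    vq->pending = 0;
}
```

After this patch, virtio_queue_aio_detach_host_notifier() corresponds
only to step 1, and callers such as the dataplane stop BHs perform step 2
explicitly.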

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 2 ++
 hw/scsi/virtio-scsi-dataplane.c | 9 +++++++++
 2 files changed, 11 insertions(+)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index b28d81737e..bd7cc6e76b 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -286,8 +286,10 @@ static void virtio_blk_data_plane_stop_bh(void *opaque)
 
     for (i = 0; i < s->conf->num_queues; i++) {
         VirtQueue *vq = virtio_get_queue(s->vdev, i);
+        EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
 
         virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }
 
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 20bb91766e..81643445ed 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -71,12 +71,21 @@ static void virtio_scsi_dataplane_stop_bh(void *opaque)
 {
     VirtIOSCSI *s = opaque;
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
+    EventNotifier *host_notifier;
     int i;
 
     virtio_queue_aio_detach_host_notifier(vs->ctrl_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->ctrl_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     virtio_queue_aio_detach_host_notifier(vs->event_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->event_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     for (i = 0; i < vs->conf.num_queues; i++) {
         virtio_queue_aio_detach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        host_notifier = virtio_queue_get_host_notifier(vs->cmd_vqs[i]);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:39:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:39:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523673.814005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBlm-0000LP-6M; Wed, 19 Apr 2023 17:39:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523673.814005; Wed, 19 Apr 2023 17:39:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBll-0000IR-Mj; Wed, 19 Apr 2023 17:39:01 +0000
Received: by outflank-mailman (input) for mailman id 523673;
 Wed, 19 Apr 2023 17:38:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBcF-0001ia-1N
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:29:11 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b17cf833-ded7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 19:29:10 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-561-t53o4P_VPVaZURreVJcYqg-1; Wed, 19 Apr 2023 13:29:06 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 186651C189A4;
 Wed, 19 Apr 2023 17:29:05 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C83E540BC798;
 Wed, 19 Apr 2023 17:28:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b17cf833-ded7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925349;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TBoAHwSzsacA9uVcaYGhwuzX40gWj9dN8dhM1t9uyLM=;
	b=ghl6274+2MaUilmMocS4No0tX1RYmpz8NG2nuTYF6E13PtUUkMSLNhklTCIqivjJyPq7+F
	9RUd5MihozaH7yLfLGhtA4IhmlAeC12gTtoA52fHSU8UdJh4wcPULefWO+kluGv/nueh3q
	SxR8kS0AdwplrB/TKo/fB18LMGRwACQ=
X-MC-Unique: t53o4P_VPVaZURreVJcYqg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 10/16] block: drain from main loop thread in bdrv_co_yield_to_drain()
Date: Wed, 19 Apr 2023 13:28:11 -0400
Message-Id: <20230419172817.272758-11-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

For simplicity, always run BlockDevOps .drained_begin/end/poll()
callbacks in the main loop thread. This makes it easier to implement the
callbacks and avoids extra locks.

Move the function pointer declarations from the I/O Code section to the
Global State section in block-backend-common.h.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/sysemu/block-backend-common.h | 25 +++++++++++++------------
 block/io.c                            |  3 ++-
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/include/sysemu/block-backend-common.h b/include/sysemu/block-backend-common.h
index 2391679c56..780cea7305 100644
--- a/include/sysemu/block-backend-common.h
+++ b/include/sysemu/block-backend-common.h
@@ -59,6 +59,19 @@ typedef struct BlockDevOps {
      */
     bool (*is_medium_locked)(void *opaque);
 
+    /*
+     * Runs when the backend receives a drain request.
+     */
+    void (*drained_begin)(void *opaque);
+    /*
+     * Runs when the backend's last drain request ends.
+     */
+    void (*drained_end)(void *opaque);
+    /*
+     * Is the device still busy?
+     */
+    bool (*drained_poll)(void *opaque);
+
     /*
      * I/O API functions. These functions are thread-safe.
      *
@@ -76,18 +89,6 @@ typedef struct BlockDevOps {
      * Runs when the size changed (e.g. monitor command block_resize)
      */
     void (*resize_cb)(void *opaque);
-    /*
-     * Runs when the backend receives a drain request.
-     */
-    void (*drained_begin)(void *opaque);
-    /*
-     * Runs when the backend's last drain request ends.
-     */
-    void (*drained_end)(void *opaque);
-    /*
-     * Is the device still busy?
-     */
-    bool (*drained_poll)(void *opaque);
 } BlockDevOps;
 
 /*
diff --git a/block/io.c b/block/io.c
index db438c7657..6285d67546 100644
--- a/block/io.c
+++ b/block/io.c
@@ -331,7 +331,8 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
     if (ctx != co_ctx) {
         aio_context_release(ctx);
     }
-    replay_bh_schedule_oneshot_event(ctx, bdrv_co_drain_bh_cb, &data);
+    replay_bh_schedule_oneshot_event(qemu_get_aio_context(),
+                                     bdrv_co_drain_bh_cb, &data);
 
     qemu_coroutine_yield();
     /* If we are resumed from some other event (such as an aio completion or a
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:39:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:39:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523670.813984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBlk-0008Pk-O3; Wed, 19 Apr 2023 17:39:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523670.813984; Wed, 19 Apr 2023 17:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBlk-0008PO-Ec; Wed, 19 Apr 2023 17:39:00 +0000
Received: by outflank-mailman (input) for mailman id 523670;
 Wed, 19 Apr 2023 17:38:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBcL-0001ia-He
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:29:17 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b53042d3-ded7-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 19:29:16 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-615-GPguznyEPGaF7KXF2UnWHQ-1; Wed, 19 Apr 2023 13:29:12 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5554B3801F5C;
 Wed, 19 Apr 2023 17:29:11 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 462742026D16;
 Wed, 19 Apr 2023 17:29:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b53042d3-ded7-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925355;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l1BfjF5WOabKn8yxsLU/Ixv2u9YtxHZYDFF52BuLnDA=;
	b=gQVtIVBw6ET3cfMWOrRuDOfFgt87/WReAZzNzBBWFRLDy8L1DtVIEMs1ngB/Lkqr60+sJi
	+ZjiQX9Q2guXRl+Yrso2qoLz+hwtboka8K/J0rgZBle+iVoRJa7sqVn5J/+gNv1tdw5/1l
	6o0g/o+hTe0ZgYp5IMKHW8bMJRBjhhQ=
X-MC-Unique: GPguznyEPGaF7KXF2UnWHQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 12/16] hw/xen: do not set is_external=true on evtchn fds
Date: Wed, 19 Apr 2023 13:28:13 -0400
Message-Id: <20230419172817.272758-13-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

is_external=true suspends fd handlers between aio_disable_external() and
aio_enable_external(). The block layer's drain operation uses this
mechanism to prevent new I/O from sneaking in between
bdrv_drained_begin() and bdrv_drained_end().

The previous commit converted the xen-block device to use BlockDevOps
.drained_begin/end() callbacks. It no longer relies on is_external=true
so it is safe to pass is_external=false.

This is part of ongoing work to remove the aio_disable_external() API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/xen/xen-bus.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index b8f408c9ed..bf256d4da2 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           true, xen_device_event, NULL, xen_device_poll, NULL,
-                           channel);
+                           false, xen_device_event, NULL, xen_device_poll,
+                           NULL, channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:39:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:39:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523668.813979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBlk-0008NZ-9t; Wed, 19 Apr 2023 17:39:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523668.813979; Wed, 19 Apr 2023 17:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBlk-0008NS-7E; Wed, 19 Apr 2023 17:39:00 +0000
Received: by outflank-mailman (input) for mailman id 523668;
 Wed, 19 Apr 2023 17:38:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBcK-00036z-NX
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:29:16 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b40ec7e9-ded7-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 19:29:14 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-649-EFEK4HXsNuO_lqX_O-YnCw-1; Wed, 19 Apr 2023 13:29:09 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5B48A1C189A7;
 Wed, 19 Apr 2023 17:29:08 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id BA5F8C08492;
 Wed, 19 Apr 2023 17:29:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b40ec7e9-ded7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925353;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pr0VKOkDxuZ5hinADqgnbKz4Kufrp9KJiNLqpg8yv3Q=;
	b=XbgJMXcZ5G0LgnQYMqMOsRWh9n1JNMXcHY7N3fuaTRezUEqJUa3iW64P+uYSd0c/Q4+Exx
	2gUw21WWqKVLxD5hQ/Ri4S2Erb/wm7nWoktuxZU8BLxaIy50bK38Cm8V2XForXcmEzQsZp
	24S9fibe23r8ZFdiE+e1C8x4Wd0yu3g=
X-MC-Unique: EFEK4HXsNuO_lqX_O-YnCw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 11/16] xen-block: implement BlockDevOps->drained_begin()
Date: Wed, 19 Apr 2023 13:28:12 -0400
Message-Id: <20230419172817.272758-12-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

Detach event channels during drained sections to stop I/O submission
from the ring. xen-block is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.

Extend xen_device_set_event_channel_context() to allow ctx=NULL. The
event channel still exists but the event loop does not monitor the file
descriptor. Event channel processing can resume by calling
xen_device_set_event_channel_context() with a non-NULL ctx.

Factor out xen_device_set_event_channel_context() calls in
hw/block/dataplane/xen-block.c into attach/detach helper functions.
Incidentally, these don't require the AioContext lock because
aio_set_fd_handler() is thread-safe.

It's safer to register BlockDevOps after the dataplane instance has been
created. The BlockDevOps .drained_begin/end() callbacks depend on the
dataplane instance, so move the blk_set_dev_ops() call after
xen_block_dataplane_create().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/xen-block.h |  2 ++
 hw/block/dataplane/xen-block.c | 42 +++++++++++++++++++++++++---------
 hw/block/xen-block.c           | 24 ++++++++++++++++---
 hw/xen/xen-bus.c               |  7 ++++--
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/hw/block/dataplane/xen-block.h b/hw/block/dataplane/xen-block.h
index 76dcd51c3d..7b8e9df09f 100644
--- a/hw/block/dataplane/xen-block.h
+++ b/hw/block/dataplane/xen-block.h
@@ -26,5 +26,7 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
                                unsigned int protocol,
                                Error **errp);
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane);
 
 #endif /* HW_BLOCK_DATAPLANE_XEN_BLOCK_H */
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 734da42ea7..02e0fd6115 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -663,6 +663,30 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
     g_free(dataplane);
 }
 
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         NULL, &error_abort);
+}
+
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         dataplane->ctx, &error_abort);
+}
+
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 {
     XenDevice *xendev;
@@ -673,13 +697,11 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 
     xendev = dataplane->xendev;
 
-    aio_context_acquire(dataplane->ctx);
-    if (dataplane->event_channel) {
-        /* Only reason for failure is a NULL channel */
-        xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                             qemu_get_aio_context(),
-                                             &error_abort);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_detach(dataplane);
     }
+
+    aio_context_acquire(dataplane->ctx);
     /* Xen doesn't have multiple users for nodes, so this can't fail */
     blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(dataplane->ctx);
@@ -818,11 +840,9 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
     blk_set_aio_context(dataplane->blk, dataplane->ctx, NULL);
     aio_context_release(old_context);
 
-    /* Only reason for failure is a NULL channel */
-    aio_context_acquire(dataplane->ctx);
-    xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                         dataplane->ctx, &error_abort);
-    aio_context_release(dataplane->ctx);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_attach(dataplane);
+    }
 
     return;
 
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index f5a744589d..f099914831 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -189,8 +189,26 @@ static void xen_block_resize_cb(void *opaque)
     xen_device_backend_printf(xendev, "state", "%u", state);
 }
 
+/* Suspend request handling */
+static void xen_block_drained_begin(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_detach(blockdev->dataplane);
+}
+
+/* Resume request handling */
+static void xen_block_drained_end(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_attach(blockdev->dataplane);
+}
+
 static const BlockDevOps xen_block_dev_ops = {
-    .resize_cb = xen_block_resize_cb,
+    .resize_cb     = xen_block_resize_cb,
+    .drained_begin = xen_block_drained_begin,
+    .drained_end   = xen_block_drained_end,
 };
 
 static void xen_block_realize(XenDevice *xendev, Error **errp)
@@ -242,8 +260,6 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
         return;
     }
 
-    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
-
     if (conf->discard_granularity == -1) {
         conf->discard_granularity = conf->physical_block_size;
     }
@@ -277,6 +293,8 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     blockdev->dataplane =
         xen_block_dataplane_create(xendev, blk, conf->logical_block_size,
                                    blockdev->props.iothread);
+
+    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
 }
 
 static void xen_block_frontend_changed(XenDevice *xendev,
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c59850b1de..b8f408c9ed 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -846,8 +846,11 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
-                       xen_device_event, NULL, xen_device_poll, NULL, channel);
+    if (ctx) {
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
+                           true, xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
+    }
 }
 
 XenEventChannel *xen_device_bind_event_channel(XenDevice *xendev,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 17:39:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 17:39:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523671.813988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBll-0008S3-07; Wed, 19 Apr 2023 17:39:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523671.813988; Wed, 19 Apr 2023 17:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppBlk-0008QZ-Ls; Wed, 19 Apr 2023 17:39:00 +0000
Received: by outflank-mailman (input) for mailman id 523671;
 Wed, 19 Apr 2023 17:38:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Pu/K=AK=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppBcT-00036z-56
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 17:29:25 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b89e3c4e-ded7-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 19:29:22 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-441-nt5MpswVMfyrsieZJncL0w-1; Wed, 19 Apr 2023 13:29:19 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id AD2C71C189A7;
 Wed, 19 Apr 2023 17:29:18 +0000 (UTC)
Received: from localhost (unknown [10.39.192.234])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 10F09C16024;
 Wed, 19 Apr 2023 17:29:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b89e3c4e-ded7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681925361;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AI5kqkdzipCjtseeQPE5/aVyAJxrbcrgigA/+qZzi/8=;
	b=KpRyrFruTGvEL+gAea8wNrOCH9N0EyfTVmy7JCe2Aqy4RFAjjB9HQmmdJ2Ilhh9B1/2IQd
	W1C5XGAc/1YVUAAhZoXDJ0sisv6EerlfC1SJuMQ9znA4pzHoGRBSTo3Iy3K133x5y9m5Qp
	t3qmNAZmb/Ys6xcWYzAOvB2xwS6+n+8=
X-MC-Unique: nt5MpswVMfyrsieZJncL0w-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Paul Durrant <paul@xen.org>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Juan Quintela <quintela@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Peter Lieven <pl@kamp.de>,
	eesposit@redhat.com,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	David Woodhouse <dwmw2@infradead.org>
Subject: [PATCH v2 15/16] block/fuse: do not set is_external=true on FUSE fd
Date: Wed, 19 Apr 2023 13:28:16 -0400
Message-Id: <20230419172817.272758-16-stefanha@redhat.com>
In-Reply-To: <20230419172817.272758-1-stefanha@redhat.com>
References: <20230419172817.272758-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

This is part of ongoing work to remove the aio_disable_external() API.

Use BlockDevOps .drained_begin/end/poll() instead of
aio_set_fd_handler(is_external=true).

As a side-effect the FUSE export now follows AioContext changes like the
other export types.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/fuse.c | 58 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 56 insertions(+), 2 deletions(-)

diff --git a/block/export/fuse.c b/block/export/fuse.c
index 06fa41079e..65a7f4d723 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -50,6 +50,7 @@ typedef struct FuseExport {
 
     struct fuse_session *fuse_session;
     struct fuse_buf fuse_buf;
+    unsigned int in_flight; /* atomic */
     bool mounted, fd_handler_set_up;
 
     char *mountpoint;
@@ -78,6 +79,42 @@ static void read_from_fuse_export(void *opaque);
 static bool is_regular_file(const char *path, Error **errp);
 
 
+static void fuse_export_drained_begin(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       NULL, NULL, NULL, NULL, NULL);
+    exp->fd_handler_set_up = false;
+}
+
+static void fuse_export_drained_end(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    exp->common.ctx = blk_get_aio_context(exp->common.blk);
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       read_from_fuse_export, NULL, NULL, NULL, exp);
+    exp->fd_handler_set_up = true;
+}
+
+static bool fuse_export_drained_poll(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    return qatomic_read(&exp->in_flight) > 0;
+}
+
+static const BlockDevOps fuse_export_blk_dev_ops = {
+    .drained_begin = fuse_export_drained_begin,
+    .drained_end   = fuse_export_drained_end,
+    .drained_poll  = fuse_export_drained_poll,
+};
+
 static int fuse_export_create(BlockExport *blk_exp,
                               BlockExportOptions *blk_exp_args,
                               Error **errp)
@@ -101,6 +138,15 @@ static int fuse_export_create(BlockExport *blk_exp,
         }
     }
 
+    blk_set_dev_ops(exp->common.blk, &fuse_export_blk_dev_ops, exp);
+
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * the FUSE fd handler. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->common.blk, true);
+
     init_exports_table();
 
     /*
@@ -224,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), true,
+                       fuse_session_fd(exp->fuse_session), false,
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -246,6 +292,8 @@ static void read_from_fuse_export(void *opaque)
 
     blk_exp_ref(&exp->common);
 
+    qatomic_inc(&exp->in_flight);
+
     do {
         ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
     } while (ret == -EINTR);
@@ -256,6 +304,10 @@ static void read_from_fuse_export(void *opaque)
     fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);
 
 out:
+    if (qatomic_fetch_dec(&exp->in_flight) == 1) {
+        aio_wait_kick(); /* wake AIO_WAIT_WHILE() */
+    }
+
     blk_exp_unref(&exp->common);
 }
 
@@ -268,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
         if (exp->fd_handler_set_up) {
             aio_set_fd_handler(exp->common.ctx,
-                               fuse_session_fd(exp->fuse_session), true,
+                               fuse_session_fd(exp->fuse_session), false,
                                NULL, NULL, NULL, NULL, NULL);
             exp->fd_handler_set_up = false;
         }
@@ -287,6 +339,8 @@ static void fuse_export_delete(BlockExport *blk_exp)
 {
     FuseExport *exp = container_of(blk_exp, FuseExport, common);
 
+    blk_set_dev_ops(exp->common.blk, NULL, NULL);
+
     if (exp->fuse_session) {
         if (exp->mounted) {
             fuse_session_unmount(exp->fuse_session);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 18:26:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 18:26:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523727.814049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCVg-0000O8-Uv; Wed, 19 Apr 2023 18:26:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523727.814049; Wed, 19 Apr 2023 18:26:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCVg-0000O1-RP; Wed, 19 Apr 2023 18:26:28 +0000
Received: by outflank-mailman (input) for mailman id 523727;
 Wed, 19 Apr 2023 18:26:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ppCVf-0000Ns-H6
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 18:26:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppCVe-0004uY-Rp; Wed, 19 Apr 2023 18:26:26 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.29.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppCVe-0008AF-Ka; Wed, 19 Apr 2023 18:26:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=NTxGG+Wi2y1G3NRuaUPx296WWyqbtUW6lIoscbs5XIw=; b=NlGya7ac/zgDkKuwWugGlzMGGt
	wZSpJb8ozd/gR+pM5ybRL1rvlGCTQaba7mltRje2t5zCDjgBzuJM74Z6JiDKxFosS3ylyB+vaQ0+4
	9kqOFpvda1bjB6XvGUeHm4fEVtZEOlkpLeHsKihj1u408PlThBLEv0fM9VRAMUU6/8n8=;
Message-ID: <32060944-324d-4c38-ad60-69553ab5a6be@xen.org>
Date: Wed, 19 Apr 2023 19:26:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH 3/3] xen/arm: fix unitialized use warning
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
 <20230414185714.292881-4-stewart.hildebrand@amd.com>
 <5fb567c5-1e82-a048-1cfe-f6f69e0b5ebc@xen.org>
 <3833c906-8d88-d35d-b9dd-b70d5f7a9fa7@amd.com>
 <b779a5cf-1421-086b-f7f3-188fcb9af3db@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <b779a5cf-1421-086b-f7f3-188fcb9af3db@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Stewart,

On 17/04/2023 03:08, Stewart Hildebrand wrote:
> On 4/16/23 22:03, Stewart Hildebrand wrote:
>> On 4/16/23 08:53, Julien Grall wrote:
>>> Hi Stewart,
>>
>> Hi Julien,
>>
>>> On 14/04/2023 19:57, Stewart Hildebrand wrote:
>>>> When building the hypervisor with -Og, we encounter the following error:
>>>
>>> Is this with GCC 12 as well?
>>
>> Yes. If my memory serves me correctly this particular one occurs with both GCC 11 and 12.

Thanks. I will update the commit message to mention it.

>>
>>>> arch/arm/domain_build.c: In function ‘make_cpus_node’:
>>>> arch/arm/domain_build.c:2040:12: error: ‘clock_valid’ may be used uninitialized [-Werror=maybe-uninitialized]
>>>>    2040 |         if ( clock_valid )
>>>>         |            ^
>>>> arch/arm/domain_build.c:1947:10: note: ‘clock_valid’ was declared here
>>>>    1947 |     bool clock_valid;
>>>>         |          ^~~~~~~~~~~
>>>> cc1: all warnings being treated as errors
>>>>
>>>> Fix it by initializing the variable.
>>>>
>>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
>>>> ---
>>>> See previous discussion here
>>>> https://lists.xenproject.org/archives/html/xen-devel/2022-10/msg00741.html
>>>> ---
>>>>    xen/arch/arm/domain_build.c | 2 +-
>>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>>> index 4f9d4f9d8867..18b350734a8e 100644
>>>> --- a/xen/arch/arm/domain_build.c
>>>> +++ b/xen/arch/arm/domain_build.c
>>>> @@ -1944,7 +1944,7 @@ static int __init make_cpus_node(const struct domain *d, void *fdt)
>>>>        /* Placeholder for cpu@ + a 32-bit hexadecimal number + \0 */
>>>>        char buf[13];
>>>>        u32 clock_frequency;
>>>> -    bool clock_valid;
>>>> +    bool clock_valid = false;
>>>
>>> NIT: I would add "Keep the compiler happy with -Og"
>>>
>>> I am happy to add it while committing if you agree.
>>
>> Yes, please do. Thanks.
> 
> One more thing, there is a typo in the subject, if you are willing to correct it while committing. s/unitialized/uninitialized/

Sure. I will do that.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 18:29:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 18:29:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523731.814058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCYu-0000xk-DF; Wed, 19 Apr 2023 18:29:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523731.814058; Wed, 19 Apr 2023 18:29:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCYu-0000xd-Ad; Wed, 19 Apr 2023 18:29:48 +0000
Received: by outflank-mailman (input) for mailman id 523731;
 Wed, 19 Apr 2023 18:29:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ppCYs-0000xV-JP
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 18:29:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppCYs-0004yF-3a; Wed, 19 Apr 2023 18:29:46 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.29.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppCYr-0008EW-Q4; Wed, 19 Apr 2023 18:29:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=72hGMZruVJR4m0hVe2ukYNscQjKQ+xnXYty2iQP2X14=; b=3LNIU4x11vTdZn/5wqKank1G1p
	h2Bwy9IWxZG4qi/tFYREPsp8WoVqT6XYEujzk5qHHm3+B9RjgWy36iEM5zQDvnN1JVtXcXT7dCZ5D
	rLbE1hk6+8Ho0FTJenHGsX/BPcB3ZdrcEW4zS6dwZdKBD7uVlcuKs0J55zpyCFhk1PeE=;
Message-ID: <1c6c87bc-df60-830b-a6c1-9e99df8b865d@xen.org>
Date: Wed, 19 Apr 2023 19:29:43 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH 0/3] xen/arm: fix build errors with -Og
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <20230414185714.292881-1-stewart.hildebrand@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230414185714.292881-1-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stewart,

On 14/04/2023 19:57, Stewart Hildebrand wrote:
> This is a collection of fixes needed to build the hypervisor with -Og for arm64.
> 
> I build-tested this with the following command:
> 
> make -j $(nproc) \
>      EXTRA_CFLAGS_XEN_CORE="-Og" \
>      XEN_TARGET_ARCH=arm64 \
>      CROSS_COMPILE=aarch64-none-linux-gnu- \
>      dist-xen
> 
> 
> Stewart Hildebrand (3):
>    xen/arm: mark __guest_cmpxchg always_inline
>    xen/efi: fix unitialized use warning
>    xen/arm: fix unitialized use warning

I have committed patch #1 and #3. Jan already committed patch #2.

Thanks,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 18:31:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 18:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523734.814069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCas-0002MQ-Pf; Wed, 19 Apr 2023 18:31:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523734.814069; Wed, 19 Apr 2023 18:31:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCas-0002MJ-M6; Wed, 19 Apr 2023 18:31:50 +0000
Received: by outflank-mailman (input) for mailman id 523734;
 Wed, 19 Apr 2023 18:31:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ppCar-0002MD-HA
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 18:31:49 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppCap-0005Bx-94; Wed, 19 Apr 2023 18:31:47 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.29.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppCap-0008H3-0J; Wed, 19 Apr 2023 18:31:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=35h22Yf84WTwtpzdSSa7TKg5qV7e96LLjfsZ+RDUjiQ=; b=MdtPMdFdmLKpiY4jm/2Wdv/5fp
	u31J+DV8IBzy048iW5Bue7+sK4bBOmJt3qFjaHSOIfqMCfU8Lf5Zpzfp63GJpkND1gFgJq1Iw4d+4
	xxndZr9okmlL//bk9Aq29JDwFd3/VowulS7ZT49ZOAD9lAkivtURyz5TNUPQeWQlrQqI=;
Message-ID: <cd940d9c-a12c-e52a-0363-e5770c282688@xen.org>
Date: Wed, 19 Apr 2023 19:31:45 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2] tools/xenstore/xenstored_control.c: correctly print
 time_t
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, Alexander Kanavin <alex@linutronix.de>,
 xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230419120710.855128-1-alex@linutronix.de>
 <99abd9a7-87cd-2f37-05bd-f4cdffd47a9c@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <99abd9a7-87cd-2f37-05bd-f4cdffd47a9c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 19/04/2023 13:42, Juergen Gross wrote:
> On 19.04.23 14:07, Alexander Kanavin wrote:
>> On 32 bit systems with 64 bit time_t (hello, Y2038 problem),
>> the following error occurs otherwise:
>>
>> | xenstored_control.c: In function 'lu_reject_reason':
>> | xenstored_control.c:646:70: error: format '%ld' expects argument of 
>> type 'long int', but argument 5 has type 'time_t' {aka 'long long 
>> int'} [-Werror=format=]
>>
>> Signed-off-by: Alexander Kanavin <alex@linutronix.de>
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

I have committed it.

Thanks,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 18:38:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 18:38:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523738.814079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCh2-000300-Dq; Wed, 19 Apr 2023 18:38:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523738.814079; Wed, 19 Apr 2023 18:38:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCh2-0002zt-AU; Wed, 19 Apr 2023 18:38:12 +0000
Received: by outflank-mailman (input) for mailman id 523738;
 Wed, 19 Apr 2023 18:38:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ppCh0-0002zl-Sv
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 18:38:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppCh0-0005Kx-D8; Wed, 19 Apr 2023 18:38:10 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.29.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppCh0-0000IA-5l; Wed, 19 Apr 2023 18:38:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=1X/1FwzRbM6pd+AOIWz25wUqPjVNmZ+pmCYZ5OuuyRk=; b=zCEigFSQh9F2crMzCjTwrxc6uk
	s+mmgfuMuMb6INrTGV5u0In9p6gbgeyvuV2HoONtmfM5ic40R8VNWO2Qa4yJk0gDb4DMTAIpEYmDj
	AE/LKv9M84F4R0bMHoETC3poB370RLYQDnAJysFuu4mCNf4wwt/LXN2ABsUTiLL7YQSg=;
Message-ID: <d69d88f5-c388-e72a-343f-a96c0a3753c8@xen.org>
Date: Wed, 19 Apr 2023 19:38:07 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v7 2/5] xen/arm64: Rework the memory layout
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230416143211.72227-1-julien@xen.org>
 <20230416143211.72227-3-julien@xen.org>
 <b088d287-3809-0a37-1a41-992d6ed9d631@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <b088d287-3809-0a37-1a41-992d6ed9d631@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 17/04/2023 08:28, Michal Orzel wrote:
> On 16/04/2023 16:32, Julien Grall wrote:
>>
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Xen is currently not fully compliant with the Arm Arm because it will
>> switch the TTBR with the MMU on.
>>
>> In order to be compliant, we need to disable the MMU before
>> switching the TTBR. The implication is the page-tables should
>> contain an identity mapping of the code switching the TTBR.
>>
>> In most cases we expect Xen to be loaded in low memory. I am aware
>> of one platform (i.e. AMD Seattle) where the memory starts above 512GB.
>> To give us some slack, consider that Xen may be loaded in the first 2TB
>> of the physical address space.
>>
>> The memory layout is reshuffled to keep the first four slots of the zeroth
>> level free. All the regions currently in L0 slot 0 will now be part of
>> slot 4 (2TB). This requires a slight tweak of the boot code because
>> XEN_VIRT_START (2TB + 2MB) cannot be used as an immediate.
>>
>> This reshuffle will make it trivial to create a 1:1 mapping when Xen is
>> loaded below 2TB.
>>
>> Lastly, take the opportunity to check a compile time if any of the
> s/a/at/ compile time
> 
>> regions may overlap with the reserved area for identity mapping.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ----
>>      Changes in v7:
>>          - Remove all tags
>>          - Add BUILD_BUG_ON()s
>>          - Don't forget to update FRAMETABLE_VIRT_START and
>>            VMAP_VIRT_START
>>
>>      Changes in v6:
>>          - Correct the BUILD_BUG_ON(), Xen virtual address should be
>>            above 2TB (i.e. slot0 > 4).
>>          - Add Bertrand's reviewed-by
>>
>>      Changes in v5:
>>          - We are reserving 4 slots rather than 2.
>>          - Fix the addresses in the layout comment.
>>          - Fix the size of the region in the layout comment
>>          - Add Luca's tested-by (the reviewed-by was not added
>>            because of the changes requested by Michal)
>>          - Add Michal's reviewed-by
>>
>>      Changes in v4:
>>          - Correct the documentation
>>          - The start address is 2TB, so slot0 is 4 not 2.
>>
>>      Changes in v2:
>>          - Reword the commit message
>>          - Load Xen at 2TB + 2MB
>>          - Update the documentation to reflect the new layout
>> ---
>>   xen/arch/arm/arm64/head.S         |  3 ++-
>>   xen/arch/arm/include/asm/config.h | 38 +++++++++++++++++++++----------
>>   xen/arch/arm/mm.c                 | 23 +++++++++++++++----
>>   3 files changed, 46 insertions(+), 18 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>> index 4a3f87117c83..663f5813b12e 100644
>> --- a/xen/arch/arm/arm64/head.S
>> +++ b/xen/arch/arm/arm64/head.S
>> @@ -607,7 +607,8 @@ create_page_tables:
>>            * need an additional 1:1 mapping, the virtual mapping will
>>            * suffice.
>>            */
>> -        cmp   x19, #XEN_VIRT_START
>> +        ldr   x0, =XEN_VIRT_START
>> +        cmp   x19, x0
>>           bne   1f
>>           ret
>>   1:
>> diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
>> index 5df0e4c4959b..2cfe5e480256 100644
>> --- a/xen/arch/arm/include/asm/config.h
>> +++ b/xen/arch/arm/include/asm/config.h
>> @@ -72,16 +72,13 @@
>>   #include <xen/page-size.h>
>>
>>   /*
>> - * Common ARM32 and ARM64 layout:
>> + * ARM32 layout:
>>    *   0  -   2M   Unmapped
>>    *   2M -   4M   Xen text, data, bss
>>    *   4M -   6M   Fixmap: special-purpose 4K mapping slots
>>    *   6M -  10M   Early boot mapping of FDT
>>    *   10M - 12M   Livepatch vmap (if compiled in)
>>    *
>> - * ARM32 layout:
>> - *   0  -  12M   <COMMON>
>> - *
>>    *  32M - 128M   Frametable: 32 bytes per page for 12GB of RAM
>>    * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
>>    *                    space
>> @@ -90,14 +87,23 @@
>>    *   2G -   4G   Domheap: on-demand-mapped
>>    *
>>    * ARM64 layout:
>> - * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
>> - *   0  -  12M   <COMMON>
>> + * 0x0000000000000000 - 0x000001ffffffffff (2TB, L0 slots [0..3])
>> + *
>> + *  Reserved to identity map Xen
>> + *
>> + * 0x0000020000000000 - 0x0000027fffffffff (512GB, L0 slot [4]
> missing closing parenthesis at the end of line
> 
> This can be done on commit, so:

Thanks for spotting the typos. I will update on commit.

> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 18:42:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 18:42:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523741.814089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCl0-0004RM-UC; Wed, 19 Apr 2023 18:42:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523741.814089; Wed, 19 Apr 2023 18:42:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCl0-0004RF-QY; Wed, 19 Apr 2023 18:42:18 +0000
Received: by outflank-mailman (input) for mailman id 523741;
 Wed, 19 Apr 2023 18:42:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ppCkz-0004R9-H0
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 18:42:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppCky-0005Rl-UM; Wed, 19 Apr 2023 18:42:16 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.29.18]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppCky-0000Sn-Ng; Wed, 19 Apr 2023 18:42:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=+YTg87TRLWy8kwISNa9qkM4Xpo8XA4iiV4IhtWsdpn8=; b=gq91/KHslv0BbGCgSjYaXnnbW2
	ZbjYWdshgDl0HRoXAawp33OoISTeGSBVwQykZHbr4AL/AdocosWzsjuCAlbQaPhz2QxcxGipRqvdy
	yZeIsHEnqxcfIVVR2T4MunhGL5Jt/RTw525dMbRYNYJsgY4407XiVGvUsJkMar9/sU8k=;
Message-ID: <7578ab9d-40d0-2ca1-394f-b0de38684103@xen.org>
Date: Wed, 19 Apr 2023 19:42:14 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v7 0/5] xen/arm: Don't switch TTBR while the MMU is on
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: Luca.Fancellu@arm.com, michal.orzel@amd.com,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230416143211.72227-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230416143211.72227-1-julien@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 16/04/2023 15:32, Julien Grall wrote:
> Currently, Xen on Arm will switch TTBR whilst the MMU is on. This is
> similar to replacing existing mappings with new ones. So we need to
> follow a break-before-make sequence.
> 
> When switching the TTBR, we need to temporarily disable the MMU
> before updating the TTBR. This means the page-tables must contain an
> identity mapping.
> 
> The current memory layout is not very flexible and has a higher chance
> to clash with the identity mapping.
> 
> On Arm64, we have plenty of unused virtual address space. Therefore, we can
> simply reshuffle the layout to leave the first part of the virtual
> address space empty.
> 
> On Arm32, the virtual address space is already quite full. Even if we
> find space, it would be necessary to have a dynamic layout. So a
> different approach will be necessary. The chosen one is to have
> a temporary mapping that will be used to jump from the ID mapping
> to the runtime mapping (or vice versa). The temporary mapping will
> be overlapping with the domheap area as it should not be used when
> switching on/off the MMU.
> 
> The Arm32 part is not yet addressed and will be handled in a follow-up
> series.
> 
> After this series, most of Xen page-table code should be compliant
> with the Arm Arm. The last two issues I am aware of are:
>   - domheap: Mappings are replaced without using the Break-Before-Make
>     approach.
>   - The cache is not cleaned/invalidated when updating the page-tables
>     with Data cache off (like during early boot).
> 
> The long term plan is to get rid of boot_* page tables and then
> directly use the runtime pages. This means for coloring, we will
> need to build the pages in the relocated Xen rather than the current
> Xen.
> 
> For convenience, I pushed a branch with everything applied:
> 
> https://xenbits.xen.org/git-http/people/julieng/xen-unstable.git
> branch boot-pt-rework-v7
> 
> Cheers,
> 
> Julien Grall (5):
>    xen/arm32: head: Widen the use of the temporary mapping
>    xen/arm64: Rework the memory layout
>    xen/arm64: mm: Introduce helpers to prepare/enable/disable the
>      identity mapping
>    xen/arm64: mm: Rework switch_ttbr()
>    xen/arm64: smpboot: Directly switch to the runtime page-tables

This series is now fully committed.

Thanks for the help reviewing it!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 18:52:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 18:52:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523751.814099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCuS-0005vw-Re; Wed, 19 Apr 2023 18:52:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523751.814099; Wed, 19 Apr 2023 18:52:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppCuS-0005vp-On; Wed, 19 Apr 2023 18:52:04 +0000
Received: by outflank-mailman (input) for mailman id 523751;
 Wed, 19 Apr 2023 18:52:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=v8Um=AK=redhat.com=eblake@srs-se1.protection.inumbo.net>)
 id 1ppCuR-0005vj-M7
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 18:52:03 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4401538b-dee3-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 20:52:00 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-615-ozoFPFQuMuWMzNh6HcmPCA-1; Wed, 19 Apr 2023 14:51:56 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2DF4F3C0E464;
 Wed, 19 Apr 2023 18:51:55 +0000 (UTC)
Received: from redhat.com (unknown [10.2.16.177])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id EEF8F492B04;
 Wed, 19 Apr 2023 18:51:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4401538b-dee3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681930319;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hS4DWN2qmeP36WD9ZHX7mMIPBRS78Jiokpqf6BGkG+4=;
	b=dXRxY4Yi13jI6j59NoYgQPW1RJ68o0DGcD64mqkHH+UrvZhSoB0VX41gwTFfHlSJ+/iaBY
	ISAXbpuuAQPChLzYv1FVCCOy+9FhWm/xqoTTOf8++Vky0tPBwSjttiGLj4CYl8/NMxxu6+
	m8K/hEvynw7CbeIIQDiL9+MYeCztmF4=
X-MC-Unique: ozoFPFQuMuWMzNh6HcmPCA-1
Date: Wed, 19 Apr 2023 13:51:48 -0500
From: Eric Blake <eblake@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, 
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Fam Zheng <fam@euphon.net>, Julia Suvorova <jusual@redhat.com>, 
	Hanna Reitz <hreitz@redhat.com>, Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Coiby Xu <Coiby.Xu@gmail.com>, Paul Durrant <paul@xen.org>, 
	Ronnie Sahlberg <ronniesahlberg@gmail.com>, Eduardo Habkost <eduardo@habkost.net>, 
	Juan Quintela <quintela@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Garzarella <sgarzare@redhat.com>, Anthony Perard <anthony.perard@citrix.com>, 
	Kevin Wolf <kwolf@redhat.com>, "Richard W.M. Jones" <rjones@redhat.com>, 
	Richard Henderson <richard.henderson@linaro.org>, xen-devel@lists.xenproject.org, qemu-block@nongnu.org, 
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>, Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, 
	Peter Lieven <pl@kamp.de>, eesposit@redhat.com, Aarushi Mehta <mehta.aaru20@gmail.com>, 
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>, 
	David Woodhouse <dwmw2@infradead.org>
Subject: Re: [PATCH v2 16/16] virtio: make it possible to detach host
 notifier from any thread
Message-ID: <msjl3ep44f2dxpno7xw3zxjrkuh5iegyieszertt6ppkhpk62q@xxi7a5shhkc2>
References: <20230419172817.272758-1-stefanha@redhat.com>
 <20230419172817.272758-17-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230419172817.272758-17-stefanha@redhat.com>
User-Agent: NeoMutt/20230407
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9

On Wed, Apr 19, 2023 at 01:28:17PM -0400, Stefan Hajnoczi wrote:
> virtio_queue_aio_detach_host_notifier() does two things:
> 1. It removes the fd handler from the event loop.
> 2. It processes the virtqueue one last time.
> 
> The first step can be performed by any thread and without taking the
> AioContext lock.
> 
> The second step may need the AioContext lock (depending on the device
> implementation) and runs in the thread where request processing takes
> place. virtio-blk and virtio-scsi therefore call
> virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in the
> AioContext.
> 
> Scheduling a BH is undesirable for .drained_begin() functions. The next
> patch will introduce a .drained_begin() function that needs to call
> virtio_queue_aio_detach_host_notifier().
> 
> Move the virtqueue processing out to the callers of
> virtio_queue_aio_detach_host_notifier() so that the function can be
> called from any thread. This is in preparation for the next patch.
>

This mentions a next patch, but is 16/16 in the series.  Am I missing
something?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Wed Apr 19 19:31:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 19:31:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523763.814109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppDWr-0001qP-Ug; Wed, 19 Apr 2023 19:31:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523763.814109; Wed, 19 Apr 2023 19:31:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppDWr-0001qI-Qu; Wed, 19 Apr 2023 19:31:45 +0000
Received: by outflank-mailman (input) for mailman id 523763;
 Wed, 19 Apr 2023 19:31:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BGcS=AK=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1ppDWq-0001qC-As
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 19:31:44 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d07953ae-dee8-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 21:31:43 +0200 (CEST)
Received: by mail-ej1-x633.google.com with SMTP id b16so1098771ejz.3
 for <xen-devel@lists.xenproject.org>; Wed, 19 Apr 2023 12:31:43 -0700 (PDT)
Received: from [127.0.0.1] (dynamic-077-183-003-214.77.183.pool.telefonica.de.
 [77.183.3.214]) by smtp.gmail.com with ESMTPSA id
 hr33-20020a1709073fa100b0094f71c73d35sm4932629ejc.145.2023.04.19.12.31.41
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Apr 2023 12:31:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d07953ae-dee8-11ed-b21f-6b7b168915f2
Date: Wed, 19 Apr 2023 19:31:31 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: qemu-devel@nongnu.org
CC: "Michael S. Tsirkin" <mst@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>, Eduardo Habkost <eduardo@habkost.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Chuck Zmudzinski <brchuckz@aol.com>, Aurelien Jarno <aurelien@aurel32.net>,
 Hervé Poussineau <hpoussin@reactos.org>,
 Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Philippe Mathieu-Daudé <philmd@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v4 2/7] hw/pci/pci.c: Don't leak PCIBus::irq_count[] in pci_bus_irqs()
In-Reply-To: <20230403074124.3925-3-shentey@gmail.com>
References: <20230403074124.3925-1-shentey@gmail.com> <20230403074124.3925-3-shentey@gmail.com>
Message-ID: <C82C3BC0-E328-4AE8-B182-ADACF3E34A45@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On 3 April 2023 07:41:19 UTC, Bernhard Beschow <shentey@gmail.com> wrote:
>When calling pci_bus_irqs() multiple times on the same object without
>calling pci_bus_irqs_cleanup() in between, PCIBus::irq_count[] is
>currently leaked. Let's fix this because Xen will do just that in a few
>commits, and because calling pci_bus_irqs_cleanup() in between seems
>fragile and cumbersome.
>
>Note that pci_bus_irqs_cleanup() now has to NULL irq_count such that
>pci_bus_irqs() doesn't do a double free.
>
>Signed-off-by: Bernhard Beschow <shentey@gmail.com>

Ping PCI maintainers

>---
> hw/pci/pci.c | 2 ++
> 1 file changed, 2 insertions(+)
>
>diff --git a/hw/pci/pci.c b/hw/pci/pci.c
>index def5000e7b..be1c5d16ec 100644
>--- a/hw/pci/pci.c
>+++ b/hw/pci/pci.c
>@@ -558,6 +558,7 @@ void pci_bus_irqs(PCIBus *bus, pci_set_irq_fn set_irq,
>     bus->set_irq = set_irq;
>     bus->irq_opaque = irq_opaque;
>     bus->nirq = nirq;
>+    g_free(bus->irq_count);
>     bus->irq_count = g_malloc0(nirq * sizeof(bus->irq_count[0]));
> }
> 
>@@ -573,6 +574,7 @@ void pci_bus_irqs_cleanup(PCIBus *bus)
>     bus->irq_opaque = NULL;
>     bus->nirq = 0;
>     g_free(bus->irq_count);
>+    bus->irq_count = NULL;
> }
> 
> PCIBus *pci_register_root_bus(DeviceState *parent, const char *name,


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 19:57:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 19:57:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523780.814163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppDvK-0004uE-KA; Wed, 19 Apr 2023 19:57:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523780.814163; Wed, 19 Apr 2023 19:57:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppDvK-0004u7-HC; Wed, 19 Apr 2023 19:57:02 +0000
Received: by outflank-mailman (input) for mailman id 523780;
 Wed, 19 Apr 2023 19:57:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppDvJ-0004tt-4f; Wed, 19 Apr 2023 19:57:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppDvJ-0007Ah-2S; Wed, 19 Apr 2023 19:57:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppDvI-0007UJ-GX; Wed, 19 Apr 2023 19:57:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppDvI-0000o1-G5; Wed, 19 Apr 2023 19:57:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180309-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180309: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
X-Osstest-Versions-That:
    xen=5eb6bd7454e253f4907dbeb7aa982967b21698bc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Apr 2023 19:57:00 +0000

flight 180309 xen-unstable real [real]
flight 180322 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180309/
http://logs.test-lab.xenproject.org/osstest/logs/180322/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 180322-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop      fail blocked in 180287
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180287
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180287
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180287
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180287
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180287
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180287
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180287
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180287
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180287
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180287
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180287
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542
baseline version:
 xen                  5eb6bd7454e253f4907dbeb7aa982967b21698bc

Last test of basis   180287  2023-04-17 18:09:05 Z    2 days
Failing since        180296  2023-04-18 06:36:57 Z    1 days    3 attempts
Testing same since   180309  2023-04-19 04:41:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dietmar Hahn <dietmar.hahn@fujitsu.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Michal Orzel <michal.orzel@amd.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5eb6bd7454..8676092a0f  8676092a0f16ca6ad188d3fb270784a2caecf542 -> master


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 19:57:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 19:57:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523783.814172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppDvT-0005B7-T9; Wed, 19 Apr 2023 19:57:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523783.814172; Wed, 19 Apr 2023 19:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppDvT-0005B0-QQ; Wed, 19 Apr 2023 19:57:11 +0000
Received: by outflank-mailman (input) for mailman id 523783;
 Wed, 19 Apr 2023 19:57:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDf1=AK=citrix.com=prvs=466cd93b2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ppDvS-0005AZ-B5
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 19:57:10 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5cb1f496-deec-11ed-b21f-6b7b168915f2;
 Wed, 19 Apr 2023 21:57:08 +0200 (CEST)
Received: from mail-bn8nam04lp2041.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.41])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 15:57:05 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DS7PR03MB5590.namprd03.prod.outlook.com (2603:10b6:5:2c8::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Wed, 19 Apr
 2023 19:57:02 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6319.022; Wed, 19 Apr 2023
 19:57:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5cb1f496-deec-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681934228;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=XY5n4IacEEi2TagWgZ5BUriE5NoAcsy2HbugFzefi7w=;
  b=F7vTZyWF4GR2be+TcHbVTLkCuRujLqJxxJBluEXywB0kvjVTx5m9hUVZ
   5BSvxWOUNlAJD0wdfQvwDq28l8HGzsn+tWINW9aPIgxZhdIgbHOwR7mTi
   8jLaIVAE0m4q/3497Z+viZhUvuxnfz1dlxn2Bj1zk93JzmERC7NYxqdTb
   E=;
X-IronPort-RemoteIP: 104.47.74.41
X-IronPort-MID: 106048928
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,210,1677560400"; 
   d="scan'208";a="106048928"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QtLoD7zo9ZX2HtuWp0qQwQzK+3b88phfVWfLlOWehsipq5gQZnWxiijxdugYTFEXudRtxG00ep5DmiJ7kl9NUAzW3Mo3JU0Yd5kVhCWIk+IHbsfQg3+zLkZqdNIqSnqkFl73lDKaAANgEVa+Z+WkLbJ1xyekU/5h+L+PNLucaCvKG14QfrQPR8bsqybg2qdxUlT4NQ4EwvpGEKiuIRSdDiLjStkdmlBwYTQzl8GoEKrm8X9qCS9HWaEKmgQHedYSTPkujSG05+s9mJnVo1p7OAY6G8PxvdWPjcMM4Q4CK5u55nFFvmAk1Em4P/HBjfzIE8mfw7aOfUoykZdXjpxFRA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IiD3lT9jsNQl5Yegvqz5+XSeV3o+sEPBLk96esFJqL4=;
 b=CKnc9cClRlErsVxD3DMO8ySgQiipZ4Xi4f88kncnp+eHKKFEYTPUSR4Z+MIXQRjYZG3GlCV8A55A+QEj9hHtv2YK7JPoboDpmeTQqM3F3j9w5y97TCwnUe6yGP18GXngDmHl32VVugCdU2Cm9yJ/Bn8+e8ne3dfObxM9Infvi7pq9MJ6A7q6IvNFK8FZmh9h1qr3akUQkPggitZByccUSIBhWrCuL1a7WvHzpYHiUxHcf2fTB1RWJEMV8Fvs6YBzf1YG3gr7wNhpZHPBjxuAfDe8RZTSIgOEhXpO1T8h0dPrzvdfFUXwJlqJiSLmfMZknjjg9z6Gght6nbxMNKwjTA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IiD3lT9jsNQl5Yegvqz5+XSeV3o+sEPBLk96esFJqL4=;
 b=p3bsYgm7Q29Yt8+zGbdvhYWKHP/69i0YVaTy43lOb+7G6Yk8uy0IsLzX3MNpupWyfx3NZmTeMKXNcnNi1VZgsAIZgtdDd1q0/3zPBuiYHnKoJF1Hl6nzKckd+E9qCc5e5HcfmRJFN+E1+Owzv6vbgM+koqz4sa9MTM12K33zJQ4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <53204261-3dac-579f-ede5-7acffd04f4db@citrix.com>
Date: Wed, 19 Apr 2023 20:56:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/5] x86: support cache-writeback in flush_area_local() et
 al
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
 <ee33ad20-ef6e-504d-6987-59ccb166f8e4@suse.com>
In-Reply-To: <ee33ad20-ef6e-504d-6987-59ccb166f8e4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO6P123CA0001.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:338::7) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DS7PR03MB5590:EE_
X-MS-Office365-Filtering-Correlation-Id: 7253f363-ea9a-4a99-248e-08db41103e1d
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7253f363-ea9a-4a99-248e-08db41103e1d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 19:57:02.4657
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5590

On 19/04/2023 11:44 am, Jan Beulich wrote:
> --- a/xen/arch/x86/flushtlb.c
> +++ b/xen/arch/x86/flushtlb.c
> @@ -232,7 +232,7 @@ unsigned int flush_area_local(const void
>      if ( flags & FLUSH_HVM_ASID_CORE )
>          hvm_flush_guest_tlbs();
>  
> -    if ( flags & FLUSH_CACHE )
> +    if ( flags & (FLUSH_CACHE | FLUSH_WRITEBACK) )

Given that we already have FLUSH_CACHE, adding writeback also seems
fine, but we need to get the naming corrected first.

We've got a file called flushtlb.c which flushes more than the TLBs now,
and various APIs in it.

We have a bunch of ARM-specific APIs which AFAICT exist purely to prop
up the ARM-specific gnttab_cache_flush().  That needs to go and live
behind an ifdef and stop polluting other architectures with an
incredibly short-sighted hypercall interface decision.

The "area" in the low level calls isn't good.  Range might be better,
but I'm not sure.  The "mask" part of the name would be better as "some"
or perhaps "others", to be a better counterpoint to "local".  Some of
the wrappers really ought to be dropped too - there are lots of them,
and too few users to justify them.

But on to the main thing which caught my eye...

The FLUSH in FLUSH_CACHE means the flush infrastructure, not "cache
flushing", and FLUSH_WRITEBACK is nonsensical next to this.  All other
things we flush have a qualification that makes them clear in context.
(Other than the assist one, which I'm going to time out objections to
and revert to the name which made more sense.)

At an absolutely minimum, FLUSH_CACHE first needs renaming to
FLUSH_CACHE_EVICT and then this new one you're adding needs to be
FLUSH_CACHE_WRITEBACK.

Except...

Is there any credible reason to have EVICT as an option by the end of
this cleanup?

CLDEMOTE does exist for a reason (reducing coherency traffic overhead
when you know the consumer is on a different CPU), but it would be
totally bogus to use this in an all or mask form, and you wouldn't want
to use it in local form either, simply from an overhead point of view.

We have evict behaviour simply because `clflush` was the only game in
town for decades, not because evicting the cacheline is what you
actually want to do.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 20:05:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 20:05:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523792.814186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppE3D-000743-Pd; Wed, 19 Apr 2023 20:05:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523792.814186; Wed, 19 Apr 2023 20:05:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppE3D-00073w-MZ; Wed, 19 Apr 2023 20:05:11 +0000
Received: by outflank-mailman (input) for mailman id 523792;
 Wed, 19 Apr 2023 20:05:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LUBK=AK=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1ppE3C-00073q-L6
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 20:05:10 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7b4f374e-deed-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 22:05:08 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 1FFAE6135A;
 Wed, 19 Apr 2023 20:05:07 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8A254C433D2;
 Wed, 19 Apr 2023 20:05:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b4f374e-deed-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1681934706;
	bh=skettEFhn5mLTCfTNgZ5V3AOrv9yC9N0PC6N1/XXApA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=pClq4ub2UWdWgmvOXlN4pPFqHJKJ72tJbCC3xIpyGU7VStfHCv1lfdeSnwpTiSpDV
	 1hqqzuDztb3kYpfnrI6MkpzfwntbX/tJbAHXsqZWyj9xCd78OWCKKjqR7JDHZURNU1
	 qSRO1Num4chsJu3GkNp2vOydMoJpZBTCPgJETrARze9xgaNystidjOonrhd7Sbh4WR
	 tbKESFZZ80WhfU+6Y+yoWAO3gPEd/2+FhPRLE8e5B3yoWdq0ha7IshwyaNjtgF4c7J
	 AlEXBfiaVJdkzDfqypbGJP3MPoYttIQ6C/bW27MXGzGieS365abEhctdX3OdbBgRmh
	 1oBVHT0KrCmDQ==
Date: Wed, 19 Apr 2023 13:05:04 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleg Nikitenko <oleshiiwood@gmail.com>
cc: Michal Orzel <michal.orzel@amd.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Carlo Nonato <carlo.nonato@minervasys.tech>
Subject: Re: xen cache colors in ARM
In-Reply-To: <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com> <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org> <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com> <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com> <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop> <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com> <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com> <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-133349997-1681934706=:15580"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-133349997-1681934706=:15580
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> Hi Michal,
> 
> I corrected xen's command line.
> Now it is
> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";

4 colors is way too many for Xen; just do xen_colors=0-0. There is no
advantage in using more than 1 color for Xen.

4 colors is too few for dom0 if you are giving 1600M of memory to Dom0.
Each color is 256M. For 1600M you should give at least 7 colors. Try:

xen_colors=0-0 dom0_colors=1-8



> Unfortunately the result was the same.
> 
> (XEN)  - Dom0 mode: Relaxed
> (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> (XEN) Coloring general information
> (XEN) Way size: 64kB
> (XEN) Max. number of colors available: 16
> (XEN) Xen color(s): [ 0 ]
> (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> (XEN) Color array allocation failed for dom0
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Error creating domain 0
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
> 
> I am going to find out how command line arguments passed and parsed.
> 
> Regards,
> Oleg
> 
> Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
>       Hi Michal,
> 
> You put my nose into the problem. Thank you.
> I am going to use your point.
> Let's see what happens.
> 
> Regards,
> Oleg
> 
> 
> Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
>       Hi Oleg,
> 
>       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>       >       
>       >
>       >
>       > Hello Stefano,
>       >
>       > Thanks for the clarification.
>       > My company uses yocto for image generation.
>       > What kind of information do you need to consult me in this case ?
>       >
>       > Maybe modules sizes/addresses which were mentioned by @Julien Grall <mailto:julien@xen.org> ?
> 
>       Sorry for jumping into the discussion, but FWICS the Xen command line you provided does not seem to be the one
>       Xen booted with. The error you are observing is most likely due to the dom0 colors configuration not being
>       specified (i.e. lack of a dom0_colors=<> parameter). Although this parameter is set in the command line you
>       provided, I strongly doubt that it is the actual command line in use.
> 
>       You wrote:
>       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native
>       sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
> 
>       but:
>       1) way_szize has a typo
>       2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
>       (XEN) Xen color(s): [ 0 ]
> 
>       This makes me believe that no colors configuration actually ended up in the command line that Xen booted with.
>       A single color for Xen is the "default if not specified", and the way size was probably calculated by querying the HW.
> 
>       So I would suggest first cross-checking the command line in use.
> 
>       ~Michal
> 
> 
>       >
>       > Regards,
>       > Oleg
>       >
>       > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org <mailto:sstabellini@kernel.org>>:
>       >
>       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>       >     > Hi Julien,
>       >     >
>       >     > >> This feature has not been merged in Xen upstream yet
>       >     >
>       >     > > would assume that upstream + the series on the ML [1] work
>       >     >
>       >     > Please clarify this point.
>       >     > Because the two thoughts are controversial.
>       >
>       >     Hi Oleg,
>       >
>       >     As Julien wrote, there is nothing controversial. As you are aware,
>       >     Xilinx maintains a separate Xen tree specific for Xilinx here:
>       >     https://github.com/xilinx/xen <https://github.com/xilinx/xen>
>       >
>       >     and the branch you are using (xlnx_rebase_4.16) comes from there.
>       >
>       >
>       >     Instead, the upstream Xen tree lives here:
>       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>
>       >
>       >
>       >     The Cache Coloring feature that you are trying to configure is present
>       >     in xlnx_rebase_4.16, but not yet present upstream (there is an
>       >     outstanding patch series to add cache coloring to Xen upstream but it
>       >     hasn't been merged yet.)
>       >
>       >
>       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
>       >     you as you already have Cache Coloring as a feature there.
>       >
>       >
>       >     I take you are using ImageBuilder to generate the boot configuration? If
>       >     so, please post the ImageBuilder config file that you are using.
>       >
>       >     But from the boot message, it looks like the colors configuration for
>       >     Dom0 is incorrect.
>       >
> 
> 
> 
--8323329-133349997-1681934706=:15580--


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 20:10:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 20:10:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523797.814196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppE8Q-0008W7-DV; Wed, 19 Apr 2023 20:10:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523797.814196; Wed, 19 Apr 2023 20:10:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppE8Q-0008W0-A3; Wed, 19 Apr 2023 20:10:34 +0000
Received: by outflank-mailman (input) for mailman id 523797;
 Wed, 19 Apr 2023 20:10:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDf1=AK=citrix.com=prvs=466cd93b2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ppE8P-0008Vu-Bp
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 20:10:33 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3ae5d928-deee-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 22:10:30 +0200 (CEST)
Received: from mail-mw2nam12lp2043.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 16:10:27 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB5797.namprd03.prod.outlook.com (2603:10b6:510:30::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Wed, 19 Apr
 2023 20:10:26 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6319.022; Wed, 19 Apr 2023
 20:10:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ae5d928-deee-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681935030;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Pk64iZcYGJmdTnk/DsC7aNC27xnIgMKl1KzYdE7kF5Y=;
  b=AUXzTA/dAbJWkMjiZgHEmewUAhHCcVCVIqNV/XtBG65tJJjaWiHMIiRK
   IkpcC3DoiD43cWZZbwuZtFT07h8hAnZUxjEAMCmiOO87obTjG9Klz9MMJ
   aaHcZh/YJ9B5wNViyir8y9iWlcKcmJM98blwYImrvQP61y5sBRMp9X89N
   E=;
X-IronPort-RemoteIP: 104.47.66.43
X-IronPort-MID: 106565352
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,210,1677560400"; 
   d="scan'208";a="106565352"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iR6qkiG8M7tFq6ZhZSuu//czQvXqlm/BffIaYiGw+2ciAHucj8lMaR7kVF4kXxXqB3vdIKxBSPyQP4r4/3ktS5dFr3fQ5k54cTJzGyiAs8+zdJuaZUuBEaxFt+BtC6fVsNLVnLj7SYG83U6gpVCsGe415mvUns9s37KWR714badd38cvfxALaFdC66o/XOe4Pwo9R8mPwDtZ2F5cCdf1cd7fHU/XVVnRx5hhIIGMoTDOec6kTzjOug3CfJWjE5pKJ/2ee4MWw8pLFLyz+mwEVU7P1Bo0uhBeESdV0LOBD0SUxQ83WAgaIhZMZXzFUbIuEvKGH+PvHxbdCZjAA/Lkog==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PvW8nhshFmywhubnDrkF86hWHYWnj8nZfNpbs2AdZ9Q=;
 b=jaU/5ylxPYt7SbMxetsIIKmdkubTKAbPwbE1NtYuUVGy4rj6pICXkMiAMgRqpUHY/rBEL1RaPo1b8+LmPbcqDkdi/v8lwI4EbFZ1fh0vYBt2kp/jiBzMEqEZxI9sN4zmQovjxzMpxSrB+0lnvm+jY5x4kozsUKim8/jmmYr0pDrXCao5GaMXjiVo+0SDOZVj47zL/ZvTlPtuhIZOm6hp/rWrA6eqLUa1O+KxFGupC2RY0exrpicVFEKSmIhmURBjb77ohixx94IgyQ1gD3EoETtEtYhQkMLccXpRgCStJZnegzsbgH1DijaF8SRBKFyvkt4s1qgrbYRLmBprWU64mw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PvW8nhshFmywhubnDrkF86hWHYWnj8nZfNpbs2AdZ9Q=;
 b=QUdg3Z0WDiT2aLiDVaBuhKG5285fb8cxblmQBcE6Th6IdTUQ4PxPzkk/MKOLbYurESBk/j3YYEejOXAhX18q+zVtpXuCJ/Y0ywXGv3WA3TyMA9I7LTzdcqLzCqI4omzrCCoonD8xowizF3u9D4zEgnnYaFzZ1kYyMN4EQzHgNd8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <e6315912-7d1f-bad4-71d0-355d361ccbc1@citrix.com>
Date: Wed, 19 Apr 2023 21:10:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/5] x86/PV: restrict guest-induced WBINVD (or alike) to
 cache writeback
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
 <0e60520c-d660-1a83-3f57-3466a0ad617b@suse.com>
In-Reply-To: <0e60520c-d660-1a83-3f57-3466a0ad617b@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0287.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a1::35) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH0PR03MB5797:EE_
X-MS-Office365-Filtering-Correlation-Id: 58afb9ff-4c79-49bb-8560-08db41121d4a
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
 =?utf-8?B?MDA1QktzN1lHYW04NEdFKzFIcWJEOVVjcUgzVG83UkJKdEd4cHQ5ajB1TElU?=
 =?utf-8?B?WnRWMEpBeE1lc053VGJJL0JrM2h5ZFdHUExSeEw1Ym5SVDRCWHN4SEN3eXQ0?=
 =?utf-8?B?Vmc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	qviV5KK7qa0WAi81QE9Ul1mOynFGWcq7bsKPR8lrjQ17OuRYklSkk1RKzP0FoyzhvwJGbyv439m7sWXWROkDv10R1B47eyWHHns+O9XZiL3HF8PBV0ApsRe6wjDg4BRUaaXKM2iajIfRe1Adr5drUS7+UBx9165Q7JqVUPwr2620QMhwG0E38PRjsRe7q8u73dJGVqYcMxPe+PyyeoZtk/9X+N47EuB4mbeG7V2xkbGlEWZgyN4FlqQt/xlOfwFqKM/3UdgkDtM1oGj9zleHbArEqkcdUqn8O/hkltr68q9JlHYk0LVmbDUFlil+kj23L1GA0Y/PSPCS0qssv4RYsIfX6gs7QB7zkfDN8MWMO+hG3Qyrrvc+nKmKhu7KNR3VhA2W/8ekyb+6q44zbTKYWU5PlJXVmtHw+Dx/BVi7QX8QpmlZRjV/Q2cf5s+YeWP0k7OH0PZVlkjX2xBmJsI2BDM1yA/jWhTZhGVktx55DTD1jOHE/akt7VVskJbXoNj3j91Kx4ilFUJNVAuDs5dT/zIOBWgtWW8YNfOWXedKLeIJVpYvHDwcESgkpH2YBnaT7hHerEsblyXdUhcNTWfkF4rVNLFPDgsTJgQIqh8hnFOGkw9CtchPAG4HRZraO6yJOyEX42KvM8fszSPndVuTawOp9om6TSuuVOByu1Q9fSit8A8n5scM7js0bit1ZWZmJDhp27Oty0MQrJ8AjO8cef2FUA01Y2S/kwLZ9fNo3v53R71HfxfC/NdCIkar/0k/LSdpNrVsAbsgOjNw2x0AMCkRIs14zWQQB0Fmbg7Qgi05LGV9L9x3G5rvpkvHTTFX
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 58afb9ff-4c79-49bb-8560-08db41121d4a
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Apr 2023 20:10:26.3973
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MO03/mjy6stGcJZklwYL/Z15+3p2l1TuojQtJ2v9l6DSwriKNZGMmR1rb1sWSUy3lXtL90bAOzibXIb8wrGe2M7D0LyBKDdb43A7/LdsScI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5797

On 19/04/2023 11:45 am, Jan Beulich wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3772,7 +3772,7 @@ long do_mmuext_op(
>              else if ( unlikely(!cache_flush_permitted(currd)) )
>                  rc = -EACCES;
>              else
> -                wbinvd();
> +                wbnoinvd();
>              break;

It occurs to me that this is fundamentally broken.

The guest is not in any position to know (or control) whether it gets
rescheduled elsewhere between now and it logically continuing execution.

So if there is actually any cache maintenance to do, it can't be a
WB{...} of any form on just this CPU alone.

> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -238,7 +238,7 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /
>  /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
>  XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
>  XEN_CPUFEATURE(RSTR_FP_ERR_PTRS, 8*32+ 2) /*A  (F)X{SAVE,RSTOR} always saves/restores FPU Error pointers */
> -XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*S  WBNOINVD instruction */
> +XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*A  WBNOINVD instruction */

Given that PV guests do have several real hypercalls for doing this, I'm
not particularly inclined to let them do it via an emulated path,
however easy it might be at a technical level.

I doubt anything good can come from it.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 20:11:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 20:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523799.814206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppE8v-0000Zf-M0; Wed, 19 Apr 2023 20:11:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523799.814206; Wed, 19 Apr 2023 20:11:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppE8v-0000ZY-If; Wed, 19 Apr 2023 20:11:05 +0000
Received: by outflank-mailman (input) for mailman id 523799;
 Wed, 19 Apr 2023 20:11:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppE8u-0000ZJ-P5; Wed, 19 Apr 2023 20:11:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppE8u-0007gD-J3; Wed, 19 Apr 2023 20:11:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppE8u-00084l-7Y; Wed, 19 Apr 2023 20:11:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppE8u-0003rG-76; Wed, 19 Apr 2023 20:11:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=w64XKtEYXIRq8vwqnHJaUrQUAQKM4vomDVEkVsKV5W0=; b=ChZrbow0aTuQoQRP12cmHSiSs7
	dDRmiAfwidRlulxL0mY8rxBlfx/s6lJ2U0RRec/sFeQOw/3BR3Lo5RQdcE+5g2WEt01981UwNQooD
	Bo7g5acEtxMEhqwHtFE0XYwSgJOFy0UFBjMDe2IVpTF06YOlgl+1hYFmQyUb9D4kY4Lg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180321-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180321: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:hosts-allocate:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e74360e4ba4a6b6827a44f8b1b22a0ec4311694a
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Apr 2023 20:11:04 +0000

flight 180321 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180321/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e74360e4ba4a6b6827a44f8b1b22a0ec4311694a
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    1 days
Failing since        180314  2023-04-19 10:00:24 Z    0 days    4 attempts
Testing same since   180321  2023-04-19 18:01:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

------------------------------------------------------------
commit e74360e4ba4a6b6827a44f8b1b22a0ec4311694a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Apr 13 20:56:15 2023 +0100

    xen/livepatch: Fix .altinstructions safety checks
    
    The prior check has && vs || mixups, making it tautologically false and thus
    providing no safety at all.  There are boundary errors too.
    
    First start with a comment describing how the .altinstructions and
    .altinstr_replacement sections interact, and perform suitable cross-checking.
    
    Second, rewrite the alt_instr loop entirely from scratch.  Origin sites have
    non-zero size, and must be fully contained within the livepatch's .text
    section(s).  Any non-zero sized replacements must be fully contained within
    the .altinstr_replacement section.
    
    Fixes: f8a10174e8b1 ("xsplice: Add support for alternatives")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>

commit 418cf59c4e29451010d7efb3835b900690d19866
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Sun Apr 16 01:10:43 2023 +0100

    arm/alternatives: Rename alt_instr fields which are used in common code
    
    Alternatives auditing for livepatches is currently broken.  To fix it, the
    livepatch code needs to inspect more fields of alt_instr.
    
    Rename ARM's fields to match x86's, because:
    
     * ARM already exposes alt_offset under the repl name via ALT_REPL_PTR().
     * "alt" is ambiguous in a structure entirely about alternatives already.
     * "repl", being the same width as orig, leads to slightly neater code.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit cfa2bb82c01f0c656804cedd8f44eb2a99a2b5bc
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Sun Apr 16 01:29:09 2023 +0100

    xen/ELF: Fix ELF32 PRI formatters
    
    It is rude to hide width formatting inside a PRI* macro, doubly so when it's
    only in one bitness of the macro.
    
    However it's fully buggy when all the users use %#"PRI because then it expands
    to the common trap of %#08x, which does not do what the author intends.
    
    Switch the 32bit ELF PRI formatters to use plain integer PRI's, just like on
    the 64bit side already.  No practical change.
    
    Fixes: 7597fabca76e ("livepatch: Include sizes when an mismatch occurs")
    Fixes: 380b229634f8 ("xsplice: Implement payload loading")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>

commit 569632a5832c02bd84790e0411940b8d3150fa17
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed Apr 19 11:03:30 2023 +0200

    CHANGELOG: add gnttab_max_{maptrack_,}frames option changes
    
    Note in the changelog that the purpose of
    gnttab_max_{maptrack_,}frames command line options has been changed.
    
    Fixes: b2ea81d2b935 ('xen/grants: repurpose command line max options')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Henry Wang <Henry.Wang@arm.com>

commit 768846690d64bc730c1a1123e8de3af731bb2eb3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:02:47 2023 +0200

    x86: fix build with old gcc after CPU policy changes
    
    Old gcc won't cope with initializers involving unnamed struct/union
    fields.
    
    Fixes: 441b1b2a50ea ("x86/emul: Switch x86_emulate_ctxt to cpu_policy")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 741599fa521fbbb4cf71a98d7ec22ba5f4671cfa
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:01:29 2023 +0200

    x86: cpu{id,}_policy_updated() can be static
    
    The function merely needs moving earlier in the file to avoid the need
    for a forward declaration. While moving it, also rename it following the
    recent folding of CPUID and MSR policies.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 224211c55bdded74c5a65f5a7e34281a8c5c56f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Apr 19 11:00:19 2023 +0200

    tests/cpu-policy: fix "run" goal
    
    An earlier change converted TARGET-y to TARGETS, but failed to replace
    all references. Convert run's dependency, but use $< in the command to
    avoid the leading blank that += inserts.
    
    Fixes: 6a9f5477637a ("tests/cpu-policy: Rework Makefile")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 20:29:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 20:29:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523809.814215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppEQl-0002J1-9j; Wed, 19 Apr 2023 20:29:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523809.814215; Wed, 19 Apr 2023 20:29:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppEQl-0002Iu-6w; Wed, 19 Apr 2023 20:29:31 +0000
Received: by outflank-mailman (input) for mailman id 523809;
 Wed, 19 Apr 2023 20:29:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yb30=AK=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ppEQk-0002Io-0Q
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 20:29:30 +0000
Received: from mail-yw1-x1134.google.com (mail-yw1-x1134.google.com
 [2607:f8b0:4864:20::1134])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e105fc7a-def0-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 22:29:27 +0200 (CEST)
Received: by mail-yw1-x1134.google.com with SMTP id
 00721157ae682-54fa9da5e5bso17488767b3.1
 for <xen-devel@lists.xenproject.org>; Wed, 19 Apr 2023 13:29:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e105fc7a-def0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681936166; x=1684528166;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ri4M7wwnqfUyWg7ITQYtSCFlkjwgDD80RYs577YrqxA=;
        b=WIB0Mw6mTuaRsirvoZFjheEp9yY5Y/q9onOHwXnfWFMdFmYyli6xaVNh9nuRkFM8Bu
         OnXLT3OfefylOw1YM4VwkunQfq+Wm/ji1lKG5q5iuDSnqXz+pveEOmagYUHY70pV+qoa
         KvVDM5rfvXh7k0t2ztxy0CpMZU4jrTcGpdw3ugWQ7PYvTc5DIMuci51oMb3/89hTRu18
         ASTMyUIpSKSpdVyzVJqnIsAfKfcTX+AN3kdlEImle/BVRj8WezpdfvQGOxxtdhWDpR5M
         v2CCX9NS0Q1nwilMdzeBzucKwgU44O+OvH3u0Wiv6qbWAvaUduq3fyhR1v2QD/zzuIcP
         d4gg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681936166; x=1684528166;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ri4M7wwnqfUyWg7ITQYtSCFlkjwgDD80RYs577YrqxA=;
        b=aeV0Rh+qLo/N+on7GDdnyvogjzhN+ilMLlyItGw/9d2bFzr0bCdrFQr73wZYxYn9nP
         UOwOyEtD07+LzbhUsBz3OOUbk08Y/b6jXOIYq0iJ2eD3/p6KgMho3QyV30FeQgpKFHhF
         Fzf6tFGFvrt9hzLQOOeQzcFog6WknwXQin7DZlDJMhlF0zHDx7Cr5k16LJq3zGT39yZ5
         903FGWslrpJGpuA6GPMMnfQVmayM3njTzR7+M6f6FEFe9KQx2jfy49HfLRLmwmS8PPaw
         kRrz52z45VMAx18OeI49fP26zno7AALj2DWi89NHhWMr7OfmxhNhadIBIcN38K7uMwgM
         izwA==
X-Gm-Message-State: AAQBX9eYoeqFAxVT8Wc/sBF3U6sdBbXCOZGGTNUecvn/oFpwe6U6p+Fh
	YQAE4avfdcnIc2Rd1cbkyZhia8e4LZWK5eweZkY=
X-Google-Smtp-Source: AKy350aNDMALBM0VAdPN5cwgCAJgGmI7MyMPzfpEMhO/TACZWqeGBJUZoGoWlGr6ZHAaYh0Tzxhzwi/dxg7Fov1Ofog=
X-Received: by 2002:a81:53c2:0:b0:54e:84f6:6669 with SMTP id
 h185-20020a8153c2000000b0054e84f66669mr4502802ywb.49.1681936166163; Wed, 19
 Apr 2023 13:29:26 -0700 (PDT)
MIME-Version: 1.0
References: <20230417205048.15870-1-vishal.moola@gmail.com>
 <20230417205048.15870-5-vishal.moola@gmail.com> <ZD/syK8RYO9FZ6ks@vernon-pc>
In-Reply-To: <ZD/syK8RYO9FZ6ks@vernon-pc>
From: Vishal Moola <vishal.moola@gmail.com>
Date: Wed, 19 Apr 2023 13:29:14 -0700
Message-ID: <CAOzc2pyt8MBv7N0qizdxr0__RKXK7hMLX-Jqvsd6RPh3nyTFVw@mail.gmail.com>
Subject: Re: [PATCH 4/33] mm: add utility functions for ptdesc
To: Vernon Yang <vernon2gm@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org, 
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, 
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, 
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Apr 19, 2023 at 6:34 AM Vernon Yang <vernon2gm@gmail.com> wrote:
>
> On Mon, Apr 17, 2023 at 01:50:19PM -0700, Vishal Moola wrote:
> > Introduce utility functions setting the foundation for ptdescs. These
> > will also assist in the splitting out of ptdesc from struct page.
> >
> > ptdesc_alloc() is defined to allocate new ptdesc pages as compound
> > pages. This is to standardize ptdescs by allowing for one allocation
> > and one free function, in contrast to 2 allocation and 2 free functions.
> >
> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> > ---
> >  include/asm-generic/tlb.h | 11 ++++++++++
> >  include/linux/mm.h        | 44 +++++++++++++++++++++++++++++++++++++++
> >  include/linux/pgtable.h   | 13 ++++++++++++
> >  3 files changed, 68 insertions(+)
> >
> > diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> > index b46617207c93..6bade9e0e799 100644
> > --- a/include/asm-generic/tlb.h
> > +++ b/include/asm-generic/tlb.h
> > @@ -481,6 +481,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
> >       return tlb_remove_page_size(tlb, page, PAGE_SIZE);
> >  }
> >
> > +static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt)
> > +{
> > +     tlb_remove_table(tlb, pt);
> > +}
> > +
> > +/* Like tlb_remove_ptdesc, but for page-like page directories. */
> > +static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt)
> > +{
> > +     tlb_remove_page(tlb, ptdesc_page(pt));
> > +}
> > +
> >  static inline void tlb_change_page_size(struct mmu_gather *tlb,
> >                                                    unsigned int page_size)
> >  {
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index b18848ae7e22..ec3cbe2fa665 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2744,6 +2744,45 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
> >  }
> >  #endif /* CONFIG_MMU */
> >
> > +static inline struct ptdesc *virt_to_ptdesc(const void *x)
> > +{
> > +     return page_ptdesc(virt_to_head_page(x));
> > +}
> > +
> > +static inline void *ptdesc_to_virt(struct ptdesc *pt)
> > +{
> > +     return page_to_virt(ptdesc_page(pt));
> > +}
> > +
> > +static inline void *ptdesc_address(struct ptdesc *pt)
> > +{
> > +     return folio_address(ptdesc_folio(pt));
> > +}
> > +
> > +static inline bool ptdesc_is_reserved(struct ptdesc *pt)
> > +{
> > +     return folio_test_reserved(ptdesc_folio(pt));
> > +}
> > +
> > +static inline struct ptdesc *ptdesc_alloc(gfp_t gfp, unsigned int order)
> > +{
> > +     struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> > +
> > +     return page_ptdesc(page);
> > +}
> > +
> > +static inline void ptdesc_free(struct ptdesc *pt)
> > +{
> > +     struct page *page = ptdesc_page(pt);
> > +
> > +     __free_pages(page, compound_order(page));
> > +}
> > +
> > +static inline void ptdesc_clear(void *x)
> > +{
> > +     clear_page(x);
> > +}
> > +
> >  #if USE_SPLIT_PTE_PTLOCKS
> >  #if ALLOC_SPLIT_PTLOCKS
> >  void __init ptlock_cache_init(void);
> > @@ -2970,6 +3009,11 @@ static inline void mark_page_reserved(struct page *page)
> >       adjust_managed_page_count(page, -1);
> >  }
> >
> > +static inline void free_reserved_ptdesc(struct ptdesc *pt)
> > +{
> > +     free_reserved_page(ptdesc_page(pt));
> > +}
> > +
> >  /*
> >   * Default method to free all the __init memory into the buddy system.
> >   * The freed pages will be poisoned with pattern "poison" if it's within
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 7cc6ea057ee9..7cd803aa38eb 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -97,6 +97,19 @@ TABLE_MATCH(ptl, ptl);
> >  #undef TABLE_MATCH
> >  static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
> >
> > +#define ptdesc_page(pt)                      (_Generic((pt),                 \
> > +     const struct ptdesc *:          (const struct page *)(pt),      \
> > +     struct ptdesc *:                (struct page *)(pt)))
> > +
> > +#define ptdesc_folio(pt)             (_Generic((pt),                 \
> > +     const struct ptdesc *:          (const struct folio *)(pt),     \
> > +     struct ptdesc *:                (struct folio *)(pt)))
> > +
> > +static inline struct ptdesc *page_ptdesc(struct page *page)
> > +{
> > +     return (struct ptdesc *)page;
> > +}
>
> Hi Vishal,
>
> I'm a little curious, why is the page_ptdesc() using inline functions instead of macro?
> If this is any magic, please tell me, thank you very much.

No magic here, I was mainly basing it off Matthew's netmem
series. I'm not too clear on when to use macros vs inlines
myself :/.

If there's a benefit to having it be a macro let me
know and I can make that change in v2.


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 20:46:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Apr 2023 20:46:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523814.814227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppEh9-0004j2-PC; Wed, 19 Apr 2023 20:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523814.814227; Wed, 19 Apr 2023 20:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppEh9-0004it-KM; Wed, 19 Apr 2023 20:46:27 +0000
Received: by outflank-mailman (input) for mailman id 523814;
 Wed, 19 Apr 2023 20:46:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDf1=AK=citrix.com=prvs=466cd93b2=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ppEh8-0004in-O1
 for xen-devel@lists.xenproject.org; Wed, 19 Apr 2023 20:46:26 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3e1ce108-def3-11ed-8611-37d641c3527e;
 Wed, 19 Apr 2023 22:46:23 +0200 (CEST)
Received: from mail-mw2nam12lp2045.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 Apr 2023 16:46:17 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB6000.namprd03.prod.outlook.com (2603:10b6:5:38b::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.20; Wed, 19 Apr
 2023 20:46:15 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6319.022; Wed, 19 Apr 2023
 20:46:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e1ce108-def3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681937183;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=c77FjfaBuZdj5rwIN+hqJQqXOUBWPnGOnHHzw1Z7E5M=;
  b=UOwLwi4dvCf+SHxLEELUhskc9XHf2Fcoscb1Q5+AcFC/VetnnxNKwE4U
   eRZm7qVfDyn39iydzHBlfJ0akbFvuKzJJ+XjNyyDUnAoLsdW2DxaMKjUe
   U347kBy9gYEyNU5VLg9zZRVumKQtWkD5qtXPe5VmDtF0TenMgXQiDFMsV
   U=;
X-IronPort-RemoteIP: 104.47.66.45
X-IronPort-MID: 105502908
Message-ID: <583d3f63-c3b3-dc73-6ca9-0ab5c60b26d8@citrix.com>
Date: Wed, 19 Apr 2023 21:46:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 4/5] VT-d: restrict iommu_flush_all() to cache writeback
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Kevin Tian <kevin.tian@intel.com>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
 <d07ee286-52f2-c7ec-2d0d-1c343dbc78be@suse.com>
In-Reply-To: <d07ee286-52f2-c7ec-2d0d-1c343dbc78be@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 19/04/2023 11:46 am, Jan Beulich wrote:
> We don't need to invalidate caches here; all we're after is that earlier
> writes have made it to main memory (and aiui even that just in case).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> This, aiui, being an analogue to uses of iommu_sync_cache() (just not
> range restricted), I wonder whether it shouldn't be conditional upon
> iommu_non_coherent. Then again I'm vaguely under the impression that
> we had been here before, possibly even as far as questioning the need
> for this call altogether.

I'd far rather we fix it properly than continue to massage around the
sides of known-broken logic.

Coherency, or not, of the memory accesses of an IOMMU is a per-IOMMU
property, not a system-wide property.  What the iommu_non_coherent
global boolean currently does for us is enforce cache maintenance on all
IOMMUs, even the coherent ones, when any single IOMMU in the system is
non-coherent.

Inside the IOMMU driver, cache maintenance should depend on iommu->ecap
alone, disregarding anything else.  This will improve the performance on
any system with mixed coherent and non-coherent IOMMUs, without any
other behavioural impact.
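As a sketch of that decision (hypothetical structure and macro names,
not Xen's actual declarations), the driver would key cache maintenance
off each unit's own coherency capability:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical per-IOMMU state.  On VT-d, bit 0 of the extended
 * capability register (ECAP) is the Coherency bit, reporting whether
 * this unit's pagetable walks snoop the CPU caches.
 */
struct iommu {
    unsigned long ecap;
};

#define ECAP_COHERENT(ecap) ((ecap) & 1UL)

/*
 * Decide cache maintenance from this IOMMU's capabilities alone,
 * rather than from a system-wide iommu_non_coherent boolean.
 */
static bool iommu_needs_sync_cache(const struct iommu *iommu)
{
    return !ECAP_COHERENT(iommu->ecap);
}
```

A coherent unit then skips the flush entirely, regardless of what any
other IOMMU in the system advertises.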

The complication arises in p2m-ept when we're sharing EPT and IOMMU
pagetables, because the EPT can be accessed by more than one IOMMU if
the guest has devices behind different IOMMUs.

But we know the devices assigned to the domain.  They're almost always
static for the lifetime of the domain, and generally just a single
device, so there may be value in considering a per-domain "I've got a
non-coherent IOMMU" flag, because we absolutely don't want to be walking
the list of attached IOMMUs on each EPT update.
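As a minimal sketch (hypothetical structure and function names, not
Xen's actual interfaces), such a per-domain summary flag would be
maintained once, at device-attach time:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-IOMMU and per-domain state. */
struct iommu {
    bool coherent;              /* from this unit's ECAP Coherency bit */
};

struct domain_iommu {
    bool need_sync;             /* any attached IOMMU non-coherent? */
};

/*
 * Update the summary flag when a device (and hence its IOMMU) is
 * attached, so each EPT update tests one boolean instead of walking
 * the list of attached IOMMUs.  The flag is sticky: it only ever
 * tightens while devices remain assigned.
 */
static void domain_attach_iommu(struct domain_iommu *hd,
                                const struct iommu *iommu)
{
    if ( !iommu->coherent )
        hd->need_sync = true;
}
```

The EPT update path would then consult only `hd->need_sync`, keeping the
fast path a single branch.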


But...

SandyBridge-era systems are the only ones I'm aware of where the IOMMUs
advertise themselves as non-coherent.  And on that generation we quirk
the superpage capability off, meaning they are ineligible for HAP-PT
sharing anyway.

And if we are doing HAP-PT sharing, the cache maintenance for regular
EPT updates will be crippling for CPU performance, especially as the
software bit updates are not relevant to the IOMMU anyway.

A better option would be to simply disallow HAP-PT sharing when there's
a non-coherent IOMMU in the system, or (slightly more fine-grained) to
disallow adding a device behind a non-coherent IOMMU to a domain using
HAP-PT sharing (as there's a disable now anyway).

Either will reduce the complexity of Xen's code without any loss of
functionality in real systems AFAICT.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 21:56:29 2023
Message-ID: <1c79060a-89c0-6dbd-47cf-964192b82020@citrix.com>
Date: Wed, 19 Apr 2023 22:55:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 5/5] x86/HVM: limit cache writeback overhead
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Kevin Tian <kevin.tian@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
 <18fcf499-a2ae-ab48-a66f-ca0499097e8a@suse.com>
In-Reply-To: <18fcf499-a2ae-ab48-a66f-ca0499097e8a@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 19/04/2023 11:46 am, Jan Beulich wrote:
> There's no need to write back caches on all CPUs upon seeing a WBINVD
> exit; ones that a vCPU hasn't run on since the last writeback (or since
> it was started) can't hold data which may need writing back.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I find it unlikely that this is an improvement in any way at all.

You're adding a memory allocation, and making the fastpath slower, for
all HVM domains, even the ~100% of them which, in practice, are never
given a device in the first place.

Just so you can skip the WBINVD side effect on the L1/L2 caches of the
CPUs this domain happens not to have run on since the last time they
flushed (which is already an underestimate).  Note how this does not
change the behaviour for the L3 caches, which form the overwhelming
majority of the WBINVD overhead in the first place.
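For illustration, the optimisation under discussion can be pictured like
this (hypothetical names; a 64-bit word stands in for a real cpumask, so
it is capped at 64 pCPUs and is not the patch's actual code):

```c
#include <assert.h>

/* One bit per pCPU; a stand-in for Xen's cpumask_t. */
typedef unsigned long cpumask_t;

struct domain {
    cpumask_t dirty_cache_cpus;   /* pCPUs run on since last writeback */
};

/* Called when one of the domain's vCPUs is scheduled onto @cpu. */
static void domain_ran_on(struct domain *d, unsigned int cpu)
{
    d->dirty_cache_cpus |= 1UL << cpu;
}

/*
 * On a WBINVD exit: flush only the recorded CPUs, then reset the
 * tracker.  As noted above, this only narrows the L1/L2 impact; the
 * shared L3 is written back wherever any flush runs.
 */
static cpumask_t wbinvd_target_mask(struct domain *d)
{
    cpumask_t mask = d->dirty_cache_cpus;
    d->dirty_cache_cpus = 0;
    return mask;
}
```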

So my response was going to be "definitely not without the per-domain
'reduced cacheability permitted' setting we've discussed".  And even
then, not without numbers suggesting it's a problem in the first place,
or at least a better explanation of why it might plausibly be an issue.


But, in writing this, I've realised a real bug.

Cache snoops can occur and pull lines sideways for microarchitectural
reasons.  And even if we want to hand-wave that away as being unlikely
(it is), you can't hand-wave away rogue speculation in the directmap.

By not issuing WBINVD on all cores, you've got a real chance of letting
some lines escape the attempt to evict them.

But it's worse than that - even when IPIing all cores, there's a
speculation pattern which can cause some lines to survive.  Rare, sure,
but not impossible.

Right now, I'm not sure that WBINVD can even be used safely without the
extra careful use of CR0.{CD,NW}, which provides a workaround for
native, but nothing helpful for hypervisors...

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 19 23:12:59 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180323-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180323: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:hosts-allocate:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dff17457c482dcb38b7caafd095334f461ef6d35
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Apr 2023 23:12:37 +0000

flight 180323 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180323/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dff17457c482dcb38b7caafd095334f461ef6d35
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    1 days
Failing since        180314  2023-04-19 10:00:24 Z    0 days    5 attempts
Testing same since   180323  2023-04-19 21:01:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

(No revision log; it would be 317 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 01:50:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 01:50:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523872.814256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppJR5-0000q9-8A; Thu, 20 Apr 2023 01:50:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523872.814256; Thu, 20 Apr 2023 01:50:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppJR5-0000q2-4B; Thu, 20 Apr 2023 01:50:11 +0000
Received: by outflank-mailman (input) for mailman id 523872;
 Thu, 20 Apr 2023 01:50:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppJR3-0000ps-6v; Thu, 20 Apr 2023 01:50:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppJR2-0005cY-PL; Thu, 20 Apr 2023 01:50:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppJR2-00080i-B6; Thu, 20 Apr 2023 01:50:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppJR2-0000zx-AL; Thu, 20 Apr 2023 01:50:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DIRknC+wqmyeZz8kYsWCIOy24G81MtOHiFszN7OCqgc=; b=Z+tIgr0TnogK/wIUa4DWeXexmZ
	Wu9oHQGim6gNLUVmFIdFVhNejPIrPQrR3bxdDAoYYw7qjp05vH5uEZXfXfTI3IsZIo4FCFv+J5T7+
	Ptb0kwZirQmRjdZV6PvF7UJIb7iSuPaGz80GVHjI9oAM1kH9z9jGFrM+Ua/sbpyihve8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180324-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180324: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:hosts-allocate:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dff17457c482dcb38b7caafd095334f461ef6d35
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 01:50:08 +0000

flight 180324 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180324/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dff17457c482dcb38b7caafd095334f461ef6d35
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    1 days
Failing since        180314  2023-04-19 10:00:24 Z    0 days    6 attempts
Testing same since   180323  2023-04-19 21:01:54 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

(No revision log; it would be 317 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 03:40:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 03:40:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523880.814265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppL9B-0002xp-VS; Thu, 20 Apr 2023 03:39:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523880.814265; Thu, 20 Apr 2023 03:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppL9B-0002xi-Si; Thu, 20 Apr 2023 03:39:49 +0000
Received: by outflank-mailman (input) for mailman id 523880;
 Thu, 20 Apr 2023 03:39:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppL9A-0002xY-IA; Thu, 20 Apr 2023 03:39:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppL9A-00006Z-5f; Thu, 20 Apr 2023 03:39:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppL99-0004XI-Ka; Thu, 20 Apr 2023 03:39:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppL99-0000ez-KB; Thu, 20 Apr 2023 03:39:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DFaYlqRNSK1cGX+eWEQOFO5RQG5yQ9bsYuDQZ/p31Rg=; b=0URa33ceRPPjGSSzBSdi7mBUd0
	FXy4bp53i40vub3zqTCgjVwcz5JCNQVF/mAcSm3dLeSLjasrJaVgYDx/5avQb9g8VmQMYmzBUu8uj
	OKk4IGa6D1TLUCJmPGS2z++QkNzqTtgzwc8zDjnA09MEuSt3Eg77k6ZQrpb+p64r6saU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180317-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180317: regressions - trouble: fail/pass/starved
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):starved:nonblocking
    linux-linus:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=af67688dca57999fd848f051eeea1d375ba546b2
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 03:39:47 +0000

flight 180317 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180317/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-vhd     21 guest-start/debian.repeat fail REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot       fail in 180305 REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180305
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180305

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot              fail in 180305 like 180278
 test-armhf-armhf-xl           8 xen-boot            fail in 180305 like 180278
 test-armhf-armhf-libvirt      8 xen-boot            fail in 180305 like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot            fail in 180305 like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot            fail in 180305 like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot          fail in 180305 like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot            fail in 180305 like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot            fail in 180305 like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot           fail in 180305 like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot            fail in 180305 like 180278
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail in 180305 never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail in 180305 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-examine      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                af67688dca57999fd848f051eeea1d375ba546b2
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    3 days
Failing since        180281  2023-04-17 06:24:36 Z    2 days    6 attempts
Testing same since   180305  2023-04-19 00:41:47 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Arnd Bergmann <arnd@arndb.de>
  Bhavya Kapoor <b-kapoor@ti.com>
  Bjorn Andersson <andersson@kernel.org>
  Chris Morgan <macromorgan@hotmail.com>
  Conor Dooley <conor.dooley@microchip.com>
  Dan Johansen <strit@manjaro.org>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dragan Simic <dragan.simic@gmail.com>
  Fabio Estevam <festevam@denx.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Heiko Stuebner <heiko@sntech.de>
  Javier Martinez Canillas <javierm@redhat.com>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jianqun Xu <jay.xu@rock-chips.com>
  Johan Hovold <johan+linaro@kernel.org>
  JR Gonzalez <jrg@scientiam.org>
  Jules Maselbas <jmaselbas@kalray.eu>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Mark Rutland <mark.rutland@arm.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Neil Armstrong <neil.armstrong@linaro.org>
  Peng Fan <peng.fan@nxp.com>
  Peter Geis <pgwipeout@gmail.com>
  Rob Herring <robh@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shawn Guo <shawnguo@kernel.org>
  Steev Klimaszewski <steev@kali.org> #Thinkpad X13s
  Sudeep Holla <sudeep.holla@arm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Will Deacon <will@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  starved 
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     starved 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               starved 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     starved 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 832 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 04:14:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 04:14:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523888.814276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppLgt-0007U1-Rs; Thu, 20 Apr 2023 04:14:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523888.814276; Thu, 20 Apr 2023 04:14:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppLgt-0007Tu-O3; Thu, 20 Apr 2023 04:14:39 +0000
Received: by outflank-mailman (input) for mailman id 523888;
 Thu, 20 Apr 2023 04:14:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppLgs-0007Tk-I4; Thu, 20 Apr 2023 04:14:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppLgs-0000vq-8R; Thu, 20 Apr 2023 04:14:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppLgr-0006UA-Q5; Thu, 20 Apr 2023 04:14:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppLgr-00088H-PN; Thu, 20 Apr 2023 04:14:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2DwXf8YxTZskjySyu69zK3cES/cgU9yoWi9oryZjCAo=; b=pcuSltsyhwXgQjs5acsYnqPCjo
	gnfWSvlBASXutK2ZAU0u5fhobUZSyKEYr4KkrAyQSuzPR26Y/SdsCdY7KBWrIlsfZXrPwW+rKGSS8
	TGuG+0UBHOhFGzEQf4ptGs6CpYlQL4+ywB1rvd/SP8iwClQzX/oISRZNx/ved8rufvlI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180326-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180326: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:hosts-allocate:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dff17457c482dcb38b7caafd095334f461ef6d35
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 04:14:37 +0000

flight 180326 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180326/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dff17457c482dcb38b7caafd095334f461ef6d35
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    1 days
Failing since        180314  2023-04-19 10:00:24 Z    0 days    7 attempts
Testing same since   180323  2023-04-19 21:01:54 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

(No revision log; it would be 317 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 06:25:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 06:25:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523896.814286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppNjh-0003o8-66; Thu, 20 Apr 2023 06:25:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523896.814286; Thu, 20 Apr 2023 06:25:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppNjh-0003o1-25; Thu, 20 Apr 2023 06:25:41 +0000
Received: by outflank-mailman (input) for mailman id 523896;
 Thu, 20 Apr 2023 06:25:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9VJI=AL=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1ppNjf-0003nv-VC
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 06:25:40 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20620.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 279dba2d-df44-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 08:25:33 +0200 (CEST)
Received: from DB9PR02CA0006.eurprd02.prod.outlook.com (2603:10a6:10:1d9::11)
 by AS4PR08MB8143.eurprd08.prod.outlook.com (2603:10a6:20b:58e::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Thu, 20 Apr
 2023 06:25:31 +0000
Received: from DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1d9:cafe::75) by DB9PR02CA0006.outlook.office365.com
 (2603:10a6:10:1d9::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22 via Frontend
 Transport; Thu, 20 Apr 2023 06:25:31 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT028.mail.protection.outlook.com (100.127.142.236) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.24 via Frontend Transport; Thu, 20 Apr 2023 06:25:31 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Thu, 20 Apr 2023 06:25:31 +0000
Received: from cd09c05a7ffa.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2CB9B858-BA73-4B56-8A3F-D8C8D6469CB2.1; 
 Thu, 20 Apr 2023 06:25:20 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cd09c05a7ffa.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 Apr 2023 06:25:20 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by GV2PR08MB9349.eurprd08.prod.outlook.com (2603:10a6:150:da::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.30; Thu, 20 Apr
 2023 06:25:16 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 06:25:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 279dba2d-df44-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GHTXyT3w69jhk3j5qZqqNOYC4IXsOYyARxZGILvn4MA=;
 b=iZd7ZNHgcULBpBTBWF1A9d/22Y53gzxRrQ66hhU6JUPlTUpYsQEUKrAKGg8UqjxMJZKdq2AUDblDf+/c3rHWKIThbULBOLytLgCZIDBt2GoWwgf5mrohXoS4sZULTDPoEi2lLJmhkMwme/LVZj/IQUPJYXWZxxGFYzqNZY9bo5k=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 09701844e10beace
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Index: AQHZbSQ+XccnxtxNG0WbEgpX88e2za8vRy4AgASAOoA=
Date: Thu, 20 Apr 2023 06:25:16 +0000
Message-ID: <81ED2CE3-D6AD-4CAD-B6CA-C7A045444AAE@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2978b495-c222-a3f2-16e1-ff577c7b699c@suse.com>
In-Reply-To: <2978b495-c222-a3f2-16e1-ff577c7b699c@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|GV2PR08MB9349:EE_|DBAEUR03FT028:EE_|AS4PR08MB8143:EE_
X-MS-Office365-Filtering-Correlation-Id: d02146af-83b8-427a-0e12-08db41680a83
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <1EB89040FAD9DA468EC9D630FFA4C777@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB9349
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9a91b0bd-284d-44ce-2559-08db416801b9
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 06:25:31.1822
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d02146af-83b8-427a-0e12-08db41680a83
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8143

Hi Jan,

Sorry for the late reply, I missed this one.

> On 17 Apr 2023, at 10:41, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 12.04.2023 11:49, Luca Fancellu wrote:
>> @@ -118,3 +121,21 @@ void sve_restore_state(struct vcpu *v)
>> 
>>     sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
>> }
>> +
>> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
>> +{
>> +    /*
>> +     * Negative SVE parameter value means to use the maximum supported
>> +     * vector length, otherwise if a positive value is provided, check if the
>> +     * vector length is a multiple of 128 and not bigger than the maximum value
>> +     * 2048
>> +     */
>> +    if ( val < 0 )
>> +        *out = get_sys_vl_len();
>> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
>> +        *out = val;
>> +    else
>> +        return -1;
>> +
>> +    return 0;
>> +}
> 
> I think such a function wants to either return boolean, or -E... in the
> error case. Boolean would ...
> 
>> @@ -4109,6 +4125,17 @@ void __init create_dom0(void)
>>     if ( iommu_enabled )
>>         dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu;
>> 
>> +    if ( opt_dom0_sve )
>> +    {
>> +        unsigned int vl;
>> +
>> +        if ( !sve_sanitize_vl_param(opt_dom0_sve, &vl) )
> 
> ... yield a slightly better call site here, imo.

Ok, I'll choose one of the two, depending on the discussion with Bertrand about the parameter checking.

> 
>> +            dom0_cfg.arch.sve_vl = sve_encode_vl(vl);
>> +        else
>> +            printk(XENLOG_WARNING
>> +                   "SVE vector length error, disable feature for Dom0\n");
> 
> I appreciate the now better behavior here, but I think the respective part
> of the doc is now stale?
> 
>> @@ -28,9 +35,12 @@ int sve_context_init(struct vcpu *v);
>> void sve_context_free(struct vcpu *v);
>> void sve_save_state(struct vcpu *v);
>> void sve_restore_state(struct vcpu *v);
>> +int sve_sanitize_vl_param(int val, unsigned int *out);
>> 
>> #else /* !CONFIG_ARM64_SVE */
>> 
>> +#define opt_dom0_sve (0)
> 
> With this I don't think you need ...
> 
>> @@ -55,6 +65,11 @@ static inline void sve_context_free(struct vcpu *v) {}
>> static inline void sve_save_state(struct vcpu *v) {}
>> static inline void sve_restore_state(struct vcpu *v) {}
>> 
>> +static inline int sve_sanitize_vl_param(int val, unsigned int *out)
>> +{
>> +    return -1;
>> +}
> 
> ... such a stub function; having the declaration visible should be
> enough for things to build (thanks to DCE, which we use for similar
> purposes on many other places).

Ok, I'll try this approach and change it if I don't find any issue; thanks for suggesting that.

> 
>> --- a/xen/common/kernel.c
>> +++ b/xen/common/kernel.c
>> @@ -314,6 +314,31 @@ int parse_boolean(const char *name, const char *s, const char *e)
>>     return -1;
>> }
>> 
>> +int __init parse_signed_integer(const char *name, const char *s, const char *e,
>> +                                long long *val)
>> +{
>> +    size_t slen, nlen;
>> +    const char *str;
>> +    long long pval;
> 
> What use is this extra variable?

I'm using pval to avoid writing to *val in the case where we find that the parsed number is not good; I think it's better not to change *val if any error comes out. What do you think?

> 
>> +    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
>> +    nlen = strlen(name);
>> +
>> +    /* Does s start with name or contains only the name? */
>> +    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
>> +        return -1;
> 
> The comment imo wants wording consistently positive or consistently
> negative. IOW either you say what you're looking for, or you say
> what you're meaning to reject.

Ok, I'll rephrase to:

/* Check that this is the name we are looking for and the syntax is right */

Is that better?

> 
>> +    pval = simple_strtoll(&s[nlen + 1], &str, 0);
> 
> I wonder whether, when potentially expecting negative numbers,
> accepting other than decimal numbers here is useful.

You are right, I'll change to base 10.

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 06:45:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 06:45:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523901.814299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppO38-0006Fz-OZ; Thu, 20 Apr 2023 06:45:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523901.814299; Thu, 20 Apr 2023 06:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppO38-0006Fs-M0; Thu, 20 Apr 2023 06:45:46 +0000
Received: by outflank-mailman (input) for mailman id 523901;
 Thu, 20 Apr 2023 06:45:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppO38-0006Fi-1f; Thu, 20 Apr 2023 06:45:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppO37-0004fB-Qy; Thu, 20 Apr 2023 06:45:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppO37-0007uX-FQ; Thu, 20 Apr 2023 06:45:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppO37-0004lm-F1; Thu, 20 Apr 2023 06:45:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IStIO3FTEgf3n7B8kwjMiVZUauZ6k6gEuwIxCdPKxSU=; b=UBDYRMbs2dug0aecbZtV9R6eQW
	fm/5OpiadiOR8ofvaC2P2Drn9rGwQa3AUgWG1jFayewC2w0O10O7KYLUY5bO/Ze6G8Wzp4IKTXbWm
	heVC2Gkl4seBYQP/l1pJHPIfmp8n7uhWk8UAdhEtEFXuJFeFwRO1m+m53yGvDYLDvXg4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180329-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180329: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:hosts-allocate:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dff17457c482dcb38b7caafd095334f461ef6d35
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 06:45:45 +0000

flight 180329 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180329/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dff17457c482dcb38b7caafd095334f461ef6d35
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    1 days
Failing since        180314  2023-04-19 10:00:24 Z    0 days    8 attempts
Testing same since   180323  2023-04-19 21:01:54 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

(No revision log; it would be 317 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 07:43:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 07:43:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523909.814312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppOwt-00042z-4C; Thu, 20 Apr 2023 07:43:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523909.814312; Thu, 20 Apr 2023 07:43:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppOwt-00042s-1D; Thu, 20 Apr 2023 07:43:23 +0000
Received: by outflank-mailman (input) for mailman id 523909;
 Thu, 20 Apr 2023 07:43:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9VJI=AL=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1ppOwr-00042m-6N
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 07:43:21 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20621.outbound.protection.outlook.com
 [2a01:111:f400:7d00::621])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 03402b5e-df4f-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 09:43:17 +0200 (CEST)
Received: from AS9PR06CA0182.eurprd06.prod.outlook.com (2603:10a6:20b:45d::16)
 by DU2PR08MB10087.eurprd08.prod.outlook.com (2603:10a6:10:491::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Thu, 20 Apr
 2023 07:43:12 +0000
Received: from AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:45d:cafe::d7) by AS9PR06CA0182.outlook.office365.com
 (2603:10a6:20b:45d::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.24 via Frontend
 Transport; Thu, 20 Apr 2023 07:43:12 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT025.mail.protection.outlook.com (100.127.140.199) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.24 via Frontend Transport; Thu, 20 Apr 2023 07:43:11 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Thu, 20 Apr 2023 07:43:11 +0000
Received: from 566a26e9f737.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D472B863-D985-416A-AB30-7ABD77FB96E2.1; 
 Thu, 20 Apr 2023 07:43:05 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 566a26e9f737.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 Apr 2023 07:43:05 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAXPR08MB6511.eurprd08.prod.outlook.com (2603:10a6:102:12d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 07:43:03 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 07:43:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03402b5e-df4f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0ejMdyeZuH+ZWQ3q+pm85H2YppzItml6TjaAJ/NV8I8=;
 b=YhLvKhZok2tCe1RsQRfuNIAw7gtSh+93L14XuOdGoUFA11S+Nu8boBjBwIXcJOdwClms4icQsnlffiTrunpS+O5/J1C1KtKH2KTHc1Dgz7xeL1LPVhg4qQWby8Wyxp/bB7YQc1vLu2bzEGF86y8zkhJp+Hksw3q0bRwIduiHVQY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3222906a3534b6ab
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kYu4zkGMFBb1Jzc+PAzQF2QgK0Woxn9EYRRcgPskQ28YvI6xzYxuXCxaTpBhmA7O1MaMHdoVJgJd9WAqUerv2QH3xq8kpduZBfVX0ERqNMwbz9CghtiCEGhj4VBDCcNx6PwF2NWYKwVKWObGTZr9+lhmbzcANDl/NFYWH6P3yyUQLIdSeQjCCCTf/enED8hxFfPtV0cQubg5hiXWQCMB4Ep6wYEoXfSQ/+0xjYYwk8Bsi6P5ky8DNd8z5EYxrZLfLZhHoTe627cBwowvoNSyAtZNi6694jfduhc0s6+pHjtwH1KxSBbzE3i5M2Y0QTvTyy/UjdBiLJ7Y7xLOAKzWTg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0ejMdyeZuH+ZWQ3q+pm85H2YppzItml6TjaAJ/NV8I8=;
 b=MqYirsW6kfXNdVBQJDoXB3zQIIyd7ZXkLqDhWaMH2MXTEqrZRbqJj0EFjVGFs/sLYvAkzGcJGYkGDP9ZmbF+KPpDtYl8y84ctdxL4zPnrUveAwD37nYsH7xD431Y/WlOgBa8Gi03DtK2ZJT5aWLamQFH0i240jMVzDeZDNpXwP9GZuE+tdgwT4ZZVDS+QJLmSq0zjqiGVDB7nw/nBOLMoLQYd9tH2uqaJHjiE0fuHG+aDpEXXT+a09WJsVPwuAl1LTTCvOdBBMq87qkSuBhltQaL0zx4UlClwb8Ahf6ZaIDOYnU89XFvRWEZ9PJ/v/m5gHz245YSLa8eZ5NGtqLIhg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZbSQ26+FC1T02dEGoq+zimN/6Ra8xC6+AgAE104CAAAEsgIABmneA
Date: Thu, 20 Apr 2023 07:43:03 +0000
Message-ID: <5AE27C3A-3857-4044-9010-F452C7CF7E6F@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-6-luca.fancellu@arm.com>
 <109F3491-6845-4A5F-9F77-F24D8970B1BE@arm.com>
 <C99DF25D-538F-4373-9F3A-F4E62B9C4E54@arm.com>
 <2B510623-0438-4D01-A916-14A8DE8D0A8C@arm.com>
In-Reply-To: <2B510623-0438-4D01-A916-14A8DE8D0A8C@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAXPR08MB6511:EE_|AM7EUR03FT025:EE_|DU2PR08MB10087:EE_
X-MS-Office365-Filtering-Correlation-Id: 6b162689-1aaa-4c8e-7fb8-08db4172e459
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <061D7AB672F1134F98F0389DB33CCF72@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6511
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	bb73254e-8300-4da1-e26d-08db4172df82
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 07:43:11.5580
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6b162689-1aaa-4c8e-7fb8-08db4172e459
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR08MB10087

>>>> +void sve_save_state(struct vcpu *v)
>>>> +{
>>>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>>>> +    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
>>>> +            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>>>
>>> You do quite some computation here for something which does not change
>>> during the life of the VM.
>>> Could we save the context_end in the vcpu instead and just do this
>>> computation on init and free only ?
>>
>> Yes sure, would you be ok to have it in struct vfp_state?
>
> Yes definitely i would store it instead of the current sve_context.
>

Hi Bertrand,

These are the changes I’m doing to this patch to address your comment, are you ok with them?

diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index f0eab18dc384..1fef466ba0aa 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -91,35 +91,35 @@ int sve_context_init(struct vcpu *v)
     if ( !ctx )
         return -ENOMEM;
 
-    v->arch.vfp.sve_context = ctx;
+    /* Point to the end of Z0-Z31 memory, just before FFR memory */
+    v->arch.vfp.sve_zreg_ctx_end = ctx +
+        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
 
     return 0;
 }
 
 void sve_context_free(struct vcpu *v)
 {
-    XFREE(v->arch.vfp.sve_context);
+    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
+
+    /* Point back to the beginning of Z0-Z31 + FFR memory */
+    v->arch.vfp.sve_zreg_ctx_end = v->arch.vfp.sve_zreg_ctx_end -
+        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
+
+    XFREE(v->arch.vfp.sve_zreg_ctx_end);
 }
 
 void sve_save_state(struct vcpu *v)
 {
-    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
-    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
-            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
-
     v->arch.zcr_el1 = READ_SYSREG(ZCR_EL1);
 
-    sve_save_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
+    sve_save_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
 }
 
 void sve_restore_state(struct vcpu *v)
 {
-    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
-    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
-            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
-
     WRITE_SYSREG(v->arch.zcr_el1, ZCR_EL1);
     WRITE_SYSREG(v->arch.zcr_el2, ZCR_EL2);
 
-    sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
+    sve_load_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
 }
diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
index 8af714cb8ecc..4aa371e85d26 100644
--- a/xen/arch/arm/include/asm/arm64/vfp.h
+++ b/xen/arch/arm/include/asm/arm64/vfp.h
@@ -13,10 +13,12 @@ struct vfp_state
      */
     uint64_t fpregs[64] __vfp_aligned;
     /*
-     * When SVE is enabled for the guest, sve_context contains memory to
-     * save/restore Z0-Z31 registers and FFR.
+     * When SVE is enabled for the guest, sve_zreg_ctx_end points to memory
+     * where Z0-Z31 registers and FFR can be saved/restored, it points at the
+     * end of the Z0-Z31 space and at the beginning of the FFR space, it's done
+     * like that to ease the save/restore assembly operations.
      */
-    uint64_t *sve_context;
+    uint64_t *sve_zreg_ctx_end;
     register_t fpcr;
     register_t fpexc32_el2;
     register_t fpsr;


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 07:54:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 07:54:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523914.814325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppP7Y-0005cO-9G; Thu, 20 Apr 2023 07:54:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523914.814325; Thu, 20 Apr 2023 07:54:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppP7Y-0005cH-6R; Thu, 20 Apr 2023 07:54:24 +0000
Received: by outflank-mailman (input) for mailman id 523914;
 Thu, 20 Apr 2023 07:54:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AMeW=AL=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1ppP7W-0005c9-Hr
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 07:54:22 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8dfe138a-df50-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 09:54:19 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-3f17fdb520dso4547835e9.3
 for <xen-devel@lists.xenproject.org>; Thu, 20 Apr 2023 00:54:19 -0700 (PDT)
Received: from [192.168.6.46] (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id
 m2-20020a056000008200b002f53fa16239sm1196264wrx.103.2023.04.20.00.54.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Apr 2023 00:54:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8dfe138a-df50-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681977259; x=1684569259;
        h=content-transfer-encoding:in-reply-to:organization:references:cc:to
         :content-language:subject:reply-to:user-agent:mime-version:date
         :message-id:from:from:to:cc:subject:date:message-id:reply-to;
        bh=rlMZGzABfYVwgRPwdO0lLnSJgoIp6XmXJrdTz3/kXaA=;
        b=ECc9asmq2Lz+YurUvhjRkiSwU5CCV5r6VG4FSRg7bkfEYkNji8piCScs8/i3C1Z5Ug
         Eu7Sut11pXOaas2NyhGS+3Xpd1keubuzuajHS3qEwv5k4Ddaz0PqQONRZgK5CodzBu1+
         dzlZQcAx1ldrB9dNXBdBk1//D9hkIIuOvVNOI6I4njYdS9gYrXWUmwMecbCa7z4LOhHe
         TtMRy4PdPFI968J2aEavPPsCIJCLZnGL6KzSyG7RqaVkv0VNMj1xT3xpoYnJBovrmSer
         kE10BeCndvKj5q3/T4ylpDchCZAw41ICzhEguxZy4jbdqDPU3exJNIIZ84DEHlFvT7sY
         RWcw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681977259; x=1684569259;
        h=content-transfer-encoding:in-reply-to:organization:references:cc:to
         :content-language:subject:reply-to:user-agent:mime-version:date
         :message-id:from:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=rlMZGzABfYVwgRPwdO0lLnSJgoIp6XmXJrdTz3/kXaA=;
        b=fcVG9wi4+wJosa2WbN66fL25D7B0ZTZJlJC9xtWWLdQ4o3tAPlqqpMccegsyz6JcMU
         BoEB9KU3ymYHFefI3oZdPmIth8LJlQIW0TU1Q7RD/MnKCe2b08T/7Og9jTfJnGV8meCP
         dwk35l2g6Pw9EytGmMQ/TN5oS2vZnY5pb3D3lok44xlVbN550Ldrm1DKQ/CifRp14CyI
         9/CMfy1nqMgwAXdTDGrSvAMubC8EktqmjgP2kfmka+qDDSAaVyPiqrehQ+M6FdZOIPrR
         AR4BZRKziq2ZQV/wHHS2M6DvTk2Ubp8zK+Du3y3M/k1qXyFnF1VNBr4BJQ0+SNjco8mH
         5NvA==
X-Gm-Message-State: AAQBX9eU4/rC+kfi2fp3YrHF6PP4wz77k//dPpLmRqh/XNCUOKtHs0k8
	l1rutdD96YRTHMjAHXc2okM=
X-Google-Smtp-Source: AKy350YwSxaMZK3BIhAnKSE4ARMe4QpMbE9h4JdhWydGxfvL57OHI8B911smipOAj0mZNoxRya3sKg==
X-Received: by 2002:adf:e886:0:b0:2f9:94ae:99f8 with SMTP id d6-20020adfe886000000b002f994ae99f8mr617299wrm.14.1681977258895;
        Thu, 20 Apr 2023 00:54:18 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Message-ID: <2e7ce635-a3f6-ebf4-6866-51b78c736265@xen.org>
Date: Thu, 20 Apr 2023 08:54:16 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Reply-To: paul@xen.org
Subject: Re: [PATCH v2 08/16] hw/xen: do not use
 aio_set_fd_handler(is_external=true) in xen_xenstore
Content-Language: en-US
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Fam Zheng <fam@euphon.net>,
 Julia Suvorova <jusual@redhat.com>, Hanna Reitz <hreitz@redhat.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Coiby Xu <Coiby.Xu@gmail.com>,
 Ronnie Sahlberg <ronniesahlberg@gmail.com>,
 Eduardo Habkost <eduardo@habkost.net>, Juan Quintela <quintela@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Garzarella <sgarzare@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Kevin Wolf <kwolf@redhat.com>,
 "Richard W.M. Jones" <rjones@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>,
 Peter Lieven <pl@kamp.de>, eesposit@redhat.com,
 Aarushi Mehta <mehta.aaru20@gmail.com>, Stefan Weil <sw@weilnetz.de>,
 Xie Yongji <xieyongji@bytedance.com>, David Woodhouse <dwmw2@infradead.org>,
 David Woodhouse <dwmw@amazon.co.uk>
References: <20230419172817.272758-1-stefanha@redhat.com>
 <20230419172817.272758-9-stefanha@redhat.com>
Organization: Xen Project
In-Reply-To: <20230419172817.272758-9-stefanha@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 19/04/2023 18:28, Stefan Hajnoczi wrote:
> There is no need to suspend activity between aio_disable_external() and
> aio_enable_external(), which is mainly used for the block layer's drain
> operation.
> 
> This is part of ongoing work to remove the aio_disable_external() API.
> 
> Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   hw/i386/kvm/xen_xenstore.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 07:55:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 07:55:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523917.814335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppP8N-00069y-Hy; Thu, 20 Apr 2023 07:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523917.814335; Thu, 20 Apr 2023 07:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppP8N-00069r-F0; Thu, 20 Apr 2023 07:55:15 +0000
Received: by outflank-mailman (input) for mailman id 523917;
 Thu, 20 Apr 2023 07:55:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8Idg=AL=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1ppP8M-0005c9-5V
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 07:55:14 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20610.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::610])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ae3f2b8b-df50-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 09:55:13 +0200 (CEST)
Received: from DB6PR0202CA0022.eurprd02.prod.outlook.com (2603:10a6:4:29::32)
 by PAWPR08MB8983.eurprd08.prod.outlook.com (2603:10a6:102:340::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Thu, 20 Apr
 2023 07:55:11 +0000
Received: from DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:29::4) by DB6PR0202CA0022.outlook.office365.com
 (2603:10a6:4:29::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.24 via Frontend
 Transport; Thu, 20 Apr 2023 07:55:11 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT026.mail.protection.outlook.com (100.127.142.242) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.24 via Frontend Transport; Thu, 20 Apr 2023 07:55:11 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Thu, 20 Apr 2023 07:55:11 +0000
Received: from 24d640a6055e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A8673633-E96A-46A4-8E8A-A36F6B371A44.1; 
 Thu, 20 Apr 2023 07:55:04 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 24d640a6055e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 Apr 2023 07:55:04 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DBAPR08MB5831.eurprd08.prod.outlook.com (2603:10a6:10:1a8::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Thu, 20 Apr
 2023 07:55:02 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 07:55:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae3f2b8b-df50-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZCIp0LNKgmmQzKLmDUIZqNT0jh+9nsp5OHiCSiuEWrQ=;
 b=HcXHLRTR2U7CVR5396yZAo2WEAf6GsDaf7CocJ4lmg1gLIfSt/NnlE9Ak7jAyaChg2hJz0Mo1w7fCMBUUTBZr39vL9aK0c4zYYrWV8+m1ydJXZRNMLugrSaOOQT1ay4n1o7LbLOfIMuVACR8lJQfyNzBYWs+Yy60BpJGjg91w30=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 73dbad5c90052534
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=epQnDpQkMI3NKuoxlF0vI5MDahbIZxIJO+PmjVjUtat7XgHAdTlIV+bZyCOs8lGMOMTYRNLzsOAKbD/JxqExPWm5Zg42f2z36kIkm4VB0gq6cQWOI2L4u3PP59883rURcea65666cb0/c4cSbR149wSLSLGpVBJuYAbpvYImMREGbKldEtjn2hRWRM2NfO5RzS70ib5a+0PfIH732pO01dp4Hv7h96keTDJ9NoR8dnLCK5MdgrAXl4i+mchg0KPmFyDmDgCXGmlluklXyr2M0cMCIFANr5nRHcvxpU/ofb1U+9ft40R0xqeM/yhmsOjNZUPadLsEoD9GRpcxoI9NSA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ZCIp0LNKgmmQzKLmDUIZqNT0jh+9nsp5OHiCSiuEWrQ=;
 b=mNMBvozis/Zq1lHzxJya3+OcIo2sAHYzZguwABtfIBYS6lOlol1/cgvWCiPC3k7ln2P63Fq8ox0FHYHA9bWmswWa999G9A05g48cfnMCy7sGatRukaqZDfByNQRLS8u8bIz1crfnnyPUqyUCrvTjvToxoHJ+eJDl0DkdapgKcgJBndiq6eCbRJix9b3lF7ZtoYxL42/8/LauJH8DTdiqDcibQRjCuo0FaOq34YMltsdjO+63EOnSsvIXKcYMOmN/y1Wq+DMw6b5b1njZh8exV1CIozhn1IGQwP1RadTxmP2VxgLSyGhzr0Eq2YLPY9+AocvS7ULweb9CqKibiEmnuw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZbSQq1oowf9vRd0a2ZPNGYXQmqK8xC64AgAE14QCAAAESAIABmpCAgAADSgA=
Date: Thu, 20 Apr 2023 07:55:01 +0000
Message-ID: <06C78335-3FD1-4AD2-A576-BCE636018280@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-6-luca.fancellu@arm.com>
 <109F3491-6845-4A5F-9F77-F24D8970B1BE@arm.com>
 <C99DF25D-538F-4373-9F3A-F4E62B9C4E54@arm.com>
 <2B510623-0438-4D01-A916-14A8DE8D0A8C@arm.com>
 <5AE27C3A-3857-4044-9010-F452C7CF7E6F@arm.com>
In-Reply-To: <5AE27C3A-3857-4044-9010-F452C7CF7E6F@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|DBAPR08MB5831:EE_|DBAEUR03FT026:EE_|PAWPR08MB8983:EE_
X-MS-Office365-Filtering-Correlation-Id: 59b8f5de-c046-4b94-2a1d-08db41749131
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 5Bnxu/2kJ+D7Yr2LhZ6sAXMJsiG06sASh99Xew2/7tS+7oFIfQ58e7p2Ke+TBB4Rk5K3Tldm+qeQHWegFnmGK4YkEpkjo5iyUEGPK+kwoeY1A5qhbTw4cO7K3seez855UK3gQY2hKAIwpM/rUMzsSBPYlbUeAlrw1lC5sPjTi7CfNthgw6tsWFHqOOV5sUThUR44szr7LHLpS/Ek/RR4fTLA5/ODN2IqquQ9peHBaMiNXpCewrKE6rhjIeLxJsjZCHFgotFW9ZiN+655WTozRrY7bjn0njJfSJeze9CQsoY+B/iSVhHhIHblvrb5S+YLmNRqS38ZvZ6Ecnk18Ccg5LuGoVdb2ovVy8fOSI7VUr+eo6FDb6eG//SWeRhFZG/5q9mbun3rv6ohIVRNuWtmRJV7Q4itbGfok3+8bmBjB7Liw1FMVP9HIftJSNNqynwTgXsVChvVKNIzKFqWxKvePyolrVCrJerb2vpRS+OTE7hR+Wt+mrGQPJW7rQE4O4xly642E+FxkG3W8VLoC4uvJMd5fb9lc2+IHT00m0Vqu2VWZo7O0vpFcnOHs0MWW6D6qjIyyCmTbEmOq+u1EySfMvEieJrkIXsLxshPgriyAFrpYfg1DYCTaPaTHRCEmcUlTvaMyAnV5gTM3bvqUZ8Qsw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(376002)(366004)(346002)(39850400004)(451199021)(36756003)(2616005)(2906002)(66946007)(76116006)(64756008)(66446008)(6636002)(66476007)(91956017)(186003)(53546011)(6506007)(6512007)(83380400001)(71200400001)(86362001)(6486002)(38070700005)(66556008)(478600001)(5660300002)(54906003)(37006003)(33656002)(4326008)(122000001)(38100700002)(6862004)(41300700001)(8936002)(8676002)(316002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <9A6C09646F827D4EB4328405CAF96744@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5831
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d8aa8866-19da-443e-1a82-08db41748b6a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	DnDBpyf+x9vkvbPICa37wLT2qsiwHdNXjE1hjZHM6R3VSrRLe900bVKx/JZSL1T+k9HiLHwvE5p9aWXkexHZshzuN8lT2M075qcvF7CbEyWYHQ4mRVhS0N/UitX0PKvAj1Wvw/KklFCaxQAzfMr8rqu+gx21q32OT85HQER7QHRU0D5py0H6PsD3ENVHdUGW6M8Pr8gbgDbWY1wbXqXO+FC1JdyevVSp7yIbIGCneautAYqSv6feExBlaLUGsCUlbStpZn6YhVD2HbP1QCMtHTcZERZrPzrB6sCjY9HDIGkoNLAx6J0CerGSyDYxhGxQyMCJL1emR5kEGAqx/Rc1U5T/Lsgy7zpZFT4WLaxiPxY0dq+hxUY2NNG7fF9ZaCuAnIkmpFbGQA7MNjpRyJuIVxHW/GrvXXPWWvbL8Vi7CeQ1FNtuqWhw6r67B0W2fj1rmT3H8eNJGj7ySx+wS+B6sDG6BkUbK/X6zTFR3uXMqILuPSITMBVfWydWJ+lVrBEKaXOZg1p9UQCglGxY+W8VLA41kmvKmE/xIrh2jtw1Je1/A0dBm/5rb1a0wJq8OhOUT/8EPpUv9reN2G8YumABdVgbhQR5Bqax4HrateZOQXUUeDQAjQqzfrZEj7ldST1AzW1/ae0GF1pWCU5g/8q7B4jnNn8LPUGRFkzl6jcXESy9N/UFGHp9vfNk7SbT/BnzjEOlk2MuDCTABcLuR+S+7g==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(346002)(376002)(136003)(39860400002)(396003)(451199021)(40470700004)(46966006)(36840700001)(82310400005)(6512007)(6506007)(82740400003)(81166007)(26005)(356005)(40460700003)(6486002)(478600001)(70586007)(4326008)(70206006)(36756003)(6636002)(41300700001)(86362001)(316002)(37006003)(54906003)(83380400001)(36860700001)(186003)(40480700001)(47076005)(336012)(107886003)(53546011)(33656002)(8676002)(6862004)(8936002)(2906002)(5660300002)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 07:55:11.0987
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 59b8f5de-c046-4b94-2a1d-08db41749131
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB8983

Hi Luca,

> On 20 Apr 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> 
>>>>> +void sve_save_state(struct vcpu *v)
>>>>> +{
>>>>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>>>>> +    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
>>>>> +            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>>>> 
>>>> You do quite some computation here for something which does not change
>>>> during the life of the VM.
>>>> Could we save the context_end in the vcpu instead and just do this
>>>> computation on init and free only ?
>>> 
>>> Yes sure, would you be ok to have it in struct vfp_state?
>> 
>> Yes definitely i would store it instead of the current sve_context.
>> 
> 
> Hi Bertrand,
> 
> These are the changes I'm doing to this patch to address your comment, are you ok with them?
> 
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> index f0eab18dc384..1fef466ba0aa 100644
> --- a/xen/arch/arm/arm64/sve.c
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -91,35 +91,35 @@ int sve_context_init(struct vcpu *v)
>      if ( !ctx )
>          return -ENOMEM;
> 
> -    v->arch.vfp.sve_context = ctx;
> +    /* Point to the end of Z0-Z31 memory, just before FFR memory */
> +    v->arch.vfp.sve_zreg_ctx_end = ctx +
> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
> 
>      return 0;
>  }
> 
>  void sve_context_free(struct vcpu *v)
>  {
> -    XFREE(v->arch.vfp.sve_context);
> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
> +
> +    /* Point back to the beginning of Z0-Z31 + FFR memory */
> +    v->arch.vfp.sve_zreg_ctx_end = v->arch.vfp.sve_zreg_ctx_end -
> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));

Please use a local variable here instead.

For the rest all good yes, it makes the save/restore code simpler :-)

Thanks
Bertrand

> +
> +    XFREE(v->arch.vfp.sve_zreg_ctx_end);
>  }
> 
>  void sve_save_state(struct vcpu *v)
>  {
> -    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
> -    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
> -            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
> -
>      v->arch.zcr_el1 = READ_SYSREG(ZCR_EL1);
> 
> -    sve_save_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
> +    sve_save_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
>  }
> 
>  void sve_restore_state(struct vcpu *v)
>  {
> -    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
> -    uint64_t *sve_ctx_zreg_end = v->arch.vfp.sve_context +
> -            (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
> -
>      WRITE_SYSREG(v->arch.zcr_el1, ZCR_EL1);
>      WRITE_SYSREG(v->arch.zcr_el2, ZCR_EL2);
> 
> -    sve_load_ctx(sve_ctx_zreg_end, v->arch.vfp.fpregs, 1);
> +    sve_load_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
>  }
> diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
> index 8af714cb8ecc..4aa371e85d26 100644
> --- a/xen/arch/arm/include/asm/arm64/vfp.h
> +++ b/xen/arch/arm/include/asm/arm64/vfp.h
> @@ -13,10 +13,12 @@ struct vfp_state
>       */
>      uint64_t fpregs[64] __vfp_aligned;
>      /*
> -     * When SVE is enabled for the guest, sve_context contains memory to
> -     * save/restore Z0-Z31 registers and FFR.
> +     * When SVE is enabled for the guest, sve_zreg_ctx_end points to memory
> +     * where Z0-Z31 registers and FFR can be saved/restored, it points at the
> +     * end of the Z0-Z31 space and at the beginning of the FFR space, it's done
> +     * like that to ease the save/restore assembly operations.
>      */
> -    uint64_t *sve_context;
> +    uint64_t *sve_zreg_ctx_end;
>      register_t fpcr;
>      register_t fpexc32_el2;
>      register_t fpsr;
> 
> 


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 07:57:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 07:57:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523921.814345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppP9t-0006jk-TK; Thu, 20 Apr 2023 07:56:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523921.814345; Thu, 20 Apr 2023 07:56:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppP9t-0006jd-QY; Thu, 20 Apr 2023 07:56:49 +0000
Received: by outflank-mailman (input) for mailman id 523921;
 Thu, 20 Apr 2023 07:56:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fY2H=AL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppP9t-0006jV-70
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 07:56:49 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20620.outbound.protection.outlook.com
 [2a01:111:f400:fe12::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e5d66291-df50-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 09:56:47 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8717.eurprd04.prod.outlook.com (2603:10a6:102:21c::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 07:56:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 07:56:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5d66291-df50-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <abe3b379-d46c-e770-6080-88e12ac4ace2@suse.com>
Date: Thu, 20 Apr 2023 09:56:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2978b495-c222-a3f2-16e1-ff577c7b699c@suse.com>
 <81ED2CE3-D6AD-4CAD-B6CA-C7A045444AAE@arm.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <81ED2CE3-D6AD-4CAD-B6CA-C7A045444AAE@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0258.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8717:EE_
X-MS-Office365-Filtering-Correlation-Id: 6488545b-025a-4c92-4160-08db4174c863
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6488545b-025a-4c92-4160-08db4174c863
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 07:56:43.8808
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hMmUF7pWfuTQ5prktidfbbG/+PXlZwmrrUj83gPmdPmVHswc4WSyYi2CvHRcueWvJU/++KT/XfetKkbMQKfCRg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8717

On 20.04.2023 08:25, Luca Fancellu wrote:
>> On 17 Apr 2023, at 10:41, Jan Beulich <jbeulich@suse.com> wrote:
>> On 12.04.2023 11:49, Luca Fancellu wrote:
>>> --- a/xen/common/kernel.c
>>> +++ b/xen/common/kernel.c
>>> @@ -314,6 +314,31 @@ int parse_boolean(const char *name, const char *s, const char *e)
>>>     return -1;
>>> }
>>>
>>> +int __init parse_signed_integer(const char *name, const char *s, const char *e,
>>> +                                long long *val)
>>> +{
>>> +    size_t slen, nlen;
>>> +    const char *str;
>>> +    long long pval;
>>
>> What use is this extra variable?
> 
> I’m using pval so that *val is only written once we know the parsed number is good;
> I think it’s better not to change *val if any error comes up. What do you think?

Caller ought to check the return value before even considering to look
at the value. Then again I can see how, if the address of a global
variable was passed in, that global may be unduly affected. So I guess
what you have is actually okay.

>>> +    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
>>> +    nlen = strlen(name);
>>> +
>>> +    /* Does s start with name or contains only the name? */
>>> +    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
>>> +        return -1;
>>
>> The comment imo wants wording consistently positive or consistently
>> negative. IOW either you say what you're looking for, or you say
>> what you're meaning to reject.
> 
> Ok I’ll rephrase to:
> 
> /* Check that this is the name we are looking for and the syntax is right */
> 
> Is that better?

It is, thanks. Alternatively how about "... and a value was provided"?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 07:59:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 07:59:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523926.814355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPC8-0007OJ-CX; Thu, 20 Apr 2023 07:59:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523926.814355; Thu, 20 Apr 2023 07:59:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPC8-0007OC-9A; Thu, 20 Apr 2023 07:59:08 +0000
Received: by outflank-mailman (input) for mailman id 523926;
 Thu, 20 Apr 2023 07:59:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9VJI=AL=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1ppPC7-0007O1-58
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 07:59:07 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0613.outbound.protection.outlook.com
 [2a01:111:f400:fe02::613])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 38a0f633-df51-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 09:59:05 +0200 (CEST)
Received: from DUZPR01CA0100.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:4bb::8) by DBAPR08MB5640.eurprd08.prod.outlook.com
 (2603:10a6:10:1a3::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Thu, 20 Apr
 2023 07:59:03 +0000
Received: from DBAEUR03FT029.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:4bb:cafe::74) by DUZPR01CA0100.outlook.office365.com
 (2603:10a6:10:4bb::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.24 via Frontend
 Transport; Thu, 20 Apr 2023 07:59:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT029.mail.protection.outlook.com (100.127.142.181) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.24 via Frontend Transport; Thu, 20 Apr 2023 07:59:03 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Thu, 20 Apr 2023 07:59:02 +0000
Received: from 8374955078cc.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B891B144-7EB9-4236-A79C-C88F58A347BF.1; 
 Thu, 20 Apr 2023 07:58:52 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8374955078cc.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 Apr 2023 07:58:52 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB5911.eurprd08.prod.outlook.com (2603:10a6:20b:292::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Thu, 20 Apr
 2023 07:58:48 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 07:58:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38a0f633-df51-11ed-b21f-6b7b168915f2
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 777f873ce0e3541d
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Index:
 AQHZbSQ26+FC1T02dEGoq+zimN/6Ra8xC6+AgAE104CAAAEsgIABmneAgAADY4CAAAEDAA==
Date: Thu, 20 Apr 2023 07:58:48 +0000
Message-ID: <0015B539-BB74-498A-8E05-DF0D84AE1B0A@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-6-luca.fancellu@arm.com>
 <109F3491-6845-4A5F-9F77-F24D8970B1BE@arm.com>
 <C99DF25D-538F-4373-9F3A-F4E62B9C4E54@arm.com>
 <2B510623-0438-4D01-A916-14A8DE8D0A8C@arm.com>
 <5AE27C3A-3857-4044-9010-F452C7CF7E6F@arm.com>
 <06C78335-3FD1-4AD2-A576-BCE636018280@arm.com>
In-Reply-To: <06C78335-3FD1-4AD2-A576-BCE636018280@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB5911:EE_|DBAEUR03FT029:EE_|DBAPR08MB5640:EE_
X-MS-Office365-Filtering-Correlation-Id: b752edbf-85f0-4922-e6e0-08db41751b70
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <3977301F44F620428386C65B12A855AF@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB5911
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	db81b06c-a8a3-4369-3bf6-08db417512e9
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 07:59:03.0420
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b752edbf-85f0-4922-e6e0-08db41751b70
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5640

>> 
>> Hi Bertrand,
>> 
>> These are the changes I'm doing to this patch to address your comment, are you ok with them?
>> 
>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>> index f0eab18dc384..1fef466ba0aa 100644
>> --- a/xen/arch/arm/arm64/sve.c
>> +++ b/xen/arch/arm/arm64/sve.c
>> @@ -91,35 +91,35 @@ int sve_context_init(struct vcpu *v)
>>      if ( !ctx )
>>          return -ENOMEM;
>> 
>> -    v->arch.vfp.sve_context = ctx;
>> +    /* Point to the end of Z0-Z31 memory, just before FFR memory */
>> +    v->arch.vfp.sve_zreg_ctx_end = ctx +
>> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>> 
>>      return 0;
>>  }
>> 
>>  void sve_context_free(struct vcpu *v)
>>  {
>> -    XFREE(v->arch.vfp.sve_context);
>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>> +
>> +    /* Point back to the beginning of Z0-Z31 + FFR memory */
>> +    v->arch.vfp.sve_zreg_ctx_end = v->arch.vfp.sve_zreg_ctx_end -
>> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
> 
> Please use a local variable here instead.
> 
> For the rest all good yes, it makes the save/restore code simpler :-)
> 

I did at the beginning, but then I realised XFREE would set to NULL the local variable instead,
I could open code it and call xfree on the local variable and put v->arch.vfp.sve_zreg_ctx_end
to NULL afterwards, but Julien asked me to use XFREE.

What is your preference here?


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 07:59:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 07:59:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523927.814365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPCN-0007ht-Kk; Thu, 20 Apr 2023 07:59:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523927.814365; Thu, 20 Apr 2023 07:59:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPCN-0007hm-HR; Thu, 20 Apr 2023 07:59:23 +0000
Received: by outflank-mailman (input) for mailman id 523927;
 Thu, 20 Apr 2023 07:59:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppPCM-0007hC-A3; Thu, 20 Apr 2023 07:59:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppPCM-0006Gc-1V; Thu, 20 Apr 2023 07:59:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppPCL-0002G6-Kw; Thu, 20 Apr 2023 07:59:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppPCL-0001dC-KO; Thu, 20 Apr 2023 07:59:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y8V53DRj7+1MBTJI+C0XgvFe7I8i1pe4UtSb6sJnpcw=; b=F/MEHRxqMbmVsTkOfbd1yjD89U
	MAnRvUPv5FfhrAZ5ELWWH1COX3Mri3cxOLMbQHIgOMurn16OsxLPt7PbXYOtlp5Tdpx9roMpmuDcZ
	8jPz/nHpjl2uMVlDmmZHYFlu717L9NsbX7qAcsWJc6c8R7V0/WBPHiHol05+25V4JnOA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180320-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180320: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:test-amd64-i386-pair:guest-migrate/dst_host/src_host/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c1eb2ddf0f8075faddc5f7c3d39feae3e8e9d6b4
X-Osstest-Versions-That:
    qemuu=7dbd6f8a27e30fe14adb3d5869097cddf24038d6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 07:59:21 +0000

flight 180320 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180320/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 test-amd64-i386-pair 28 guest-migrate/dst_host/src_host/debian.repeat fail REGR. vs. 180258

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 build-armhf                   2 hosts-allocate        broken starved in 180258
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180258
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180258
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180258
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180258
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180258
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                c1eb2ddf0f8075faddc5f7c3d39feae3e8e9d6b4
baseline version:
 qemuu                7dbd6f8a27e30fe14adb3d5869097cddf24038d6

Last test of basis   180258  2023-04-14 08:48:34 Z    5 days
Testing same since   180320  2023-04-19 16:38:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

------------------------------------------------------------
commit c1eb2ddf0f8075faddc5f7c3d39feae3e8e9d6b4
Author: Peter Maydell <peter.maydell@linaro.org>
Date:   Wed Apr 19 17:27:13 2023 +0100

    Update version for v8.0.0 release
    
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 08:17:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 08:17:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523941.814375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPU3-0002Yo-LW; Thu, 20 Apr 2023 08:17:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523941.814375; Thu, 20 Apr 2023 08:17:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPU3-0002Yh-Hu; Thu, 20 Apr 2023 08:17:39 +0000
Received: by outflank-mailman (input) for mailman id 523941;
 Thu, 20 Apr 2023 08:17:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wBYt=AL=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1ppPU1-0002Yb-Ih
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 08:17:37 +0000
Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com
 [2a00:1450:4864:20::32b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ce078657-df53-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 10:17:35 +0200 (CEST)
Received: by mail-wm1-x32b.google.com with SMTP id
 eo4-20020a05600c82c400b003f05a99a841so690851wmb.3
 for <xen-devel@lists.xenproject.org>; Thu, 20 Apr 2023 01:17:35 -0700 (PDT)
Received: from [192.168.30.216] ([81.0.6.76]) by smtp.gmail.com with ESMTPSA id
 k6-20020a5d5246000000b002fe87e0706bsm1247834wrc.97.2023.04.20.01.17.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Apr 2023 01:17:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce078657-df53-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681978654; x=1684570654;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=f13MQ93ddq/Nd75N1ll36Z5zAz1KfuOg0cGpRV2ear0=;
        b=Dctjjv3Her1C+F0+KW0Qt046ZCEeDgZkO8003lMWpMr0cRK2zh6yCL6RYOmwdiFB6B
         zmVyNkKHCVk8PWYN3iSJuDcg/Hz/2pl7O56QaaORZ062381fPJidzo9qZzw2LJo4+PWS
         yU+r2fu7RH06XYw+XmjTlXFB3H5PlhyY4vJRDzWOIcLPu6vIoNpwcPWZDMmpOjjGIrrP
         Zx2XXZBP/gitGY/tqxUfd5jSGImn2wm+RiPX2kUPGke3B0mUvaeCM1+R6PUZNgtS1js4
         kDoD9FFhmx9jmoJroTu5BK21cqrW4TfUQjOAIvDyUfNtZJk3wnMEwpE+HJreU0YNXdnO
         e14w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681978654; x=1684570654;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=f13MQ93ddq/Nd75N1ll36Z5zAz1KfuOg0cGpRV2ear0=;
        b=VoSB5KnM6TShHth9xURHkuFgPQDtej8vvkXsFHYuCUcL86WtlbpGAhg49S9Qr04IYs
         DPxgUgadFX2XidRAOg9DsBNlk+Usq33SFeazhmJrlq4ZCMe/xSRbioM+oNI0adKLyJtY
         Enar0kVAEDq7pgH6ag0a1HfrFUlOfcL80aVIhZBnOJ8or0tMP3czFHq0RJJfh1mudqj9
         26feM1nngZgfOfmqc0B8IB73/Oy+wZ6rCTj/jCcGN0o0/Ee2OZqXOj0iNvPS3vR082Dh
         7+NV1VLaatzFNtfleGs0ZZnJ69FW+S0jGciIvMATOMq0tiCoIxuwXkqOeRo0OkyNsazp
         NsGQ==
X-Gm-Message-State: AAQBX9eMZfsGWJYpQDZFiXHX0v3wGbEOKjPwZT0GF0z2Nm3uDtDjfK+P
	8Lj7LqLcpafTeji4aakSO+r5Ww==
X-Google-Smtp-Source: AKy350a+S7Bo5cgeIITonHrFz17ZzQ14J39mXHJXktyn9+4RrBlds2+14MAKJinCJiahV/R/tP2QIA==
X-Received: by 2002:a05:600c:2183:b0:3f1:7848:6740 with SMTP id e3-20020a05600c218300b003f178486740mr647559wme.8.1681978654594;
        Thu, 20 Apr 2023 01:17:34 -0700 (PDT)
Message-ID: <cd8bb8f9-5d4f-76ba-1783-aebabb66a896@linaro.org>
Date: Thu, 20 Apr 2023 10:17:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2 01/16] hw/qdev: introduce qdev_is_realized() helper
Content-Language: en-US
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Fam Zheng <fam@euphon.net>,
 Julia Suvorova <jusual@redhat.com>, Hanna Reitz <hreitz@redhat.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Coiby Xu <Coiby.Xu@gmail.com>,
 Paul Durrant <paul@xen.org>, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
 Eduardo Habkost <eduardo@habkost.net>, Juan Quintela <quintela@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Garzarella <sgarzare@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Kevin Wolf <kwolf@redhat.com>,
 "Richard W.M. Jones" <rjones@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Peter Lieven <pl@kamp.de>,
 eesposit@redhat.com, Aarushi Mehta <mehta.aaru20@gmail.com>,
 Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
 David Woodhouse <dwmw2@infradead.org>
References: <20230419172817.272758-1-stefanha@redhat.com>
 <20230419172817.272758-2-stefanha@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <20230419172817.272758-2-stefanha@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 19/4/23 19:28, Stefan Hajnoczi wrote:
> Add a helper function to check whether the device is realized without
> requiring the Big QEMU Lock. The next patch adds a second caller. The
> goal is to avoid spreading DeviceState field accesses throughout the
> code.
> 
> Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   include/hw/qdev-core.h | 17 ++++++++++++++---
>   hw/scsi/scsi-bus.c     |  3 +--
>   2 files changed, 15 insertions(+), 5 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>

Thank you!



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 08:19:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 08:19:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523945.814384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPVD-000355-Tr; Thu, 20 Apr 2023 08:18:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523945.814384; Thu, 20 Apr 2023 08:18:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPVD-00034y-RI; Thu, 20 Apr 2023 08:18:51 +0000
Received: by outflank-mailman (input) for mailman id 523945;
 Thu, 20 Apr 2023 08:18:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wBYt=AL=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1ppPVC-00034i-NK
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 08:18:50 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fa0591f1-df53-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 10:18:49 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-3f178da21b5so3145415e9.3
 for <xen-devel@lists.xenproject.org>; Thu, 20 Apr 2023 01:18:49 -0700 (PDT)
Received: from [192.168.30.216] ([81.0.6.76]) by smtp.gmail.com with ESMTPSA id
 d18-20020a5d4f92000000b002c7163660a9sm1252678wru.105.2023.04.20.01.18.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Apr 2023 01:18:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa0591f1-df53-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1681978728; x=1684570728;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=LudFVx/kmb9nhOnpzQ2miKh295xVDuQ1UuHNRlYF5E8=;
        b=ZKvH4ZcWBUH/wCLtBz693lPtHcwRTD4bd932yhoiieRpluJcdneoiyJ88m2vBBZFoN
         Ijbn7atjnwH4cst2NNEcbt9dzwzAUuIW8O+bb6EQRkrSTAB+5khMOWc5rCA3mCcsUtf8
         fKK65BEau5W2824TUa1Ohd/Vqc3GcW1njv56LvA0yJKYWLuW5foU9kMfBeIi/BSkZlAw
         LpYyY1gY7MNisFs3dAqsjnyFMEPW5iJ0MISjV1Qlc5W+lr7/Zt2KeDvoY+13nPVkhVkf
         tkV82da0HgJE7NiRM4gmM+V9odZbEFTdRG9U30ZqqnGnkWLIxL06qACDZsly3QMLsh+y
         gZ1A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681978728; x=1684570728;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=LudFVx/kmb9nhOnpzQ2miKh295xVDuQ1UuHNRlYF5E8=;
        b=dw9BUxSg1aIvqAwQKcm8n7yPMwFazjkOk9nmjrkEHfkdc5nYnaxjhkv1ej/wkbGtTy
         Ta4zr4oUvzG9ZScyVGOXL8Tij8m8gvq3+KLOjw99t285Ozsn0+srPJpmJQf331C2Amhv
         x2kUloGfphP1JyL6MaTHO4nFc6d+NRtT40SV/CuyBYSllAQwTwy+m6pJeXvAQxhI0Niu
         PzZc3WLUYRVHYTuk4gUFs8yVUalawI9dB8ej+6QDHGb6KCfD+byMDcrFbaWVaT5Fe7gX
         6oEPoIl3YOGZJtBhrfNz9S2YYcyze9lDvuT2k1Oh5kk/AV4OfcjmVFxAYcFVUqn+UlTq
         P/0Q==
X-Gm-Message-State: AAQBX9cHJKJlzULY5VaNpzcq3RL2NVGYEffNmlGqtSa1KADTLznNX7Qk
	V2hCGNskT1zkhCGq+6iNhauRkw==
X-Google-Smtp-Source: AKy350YVxN5lhmruZ0zTbMvn34EXb83NU6+ImB/xmwIw6S7TfcXRxCS0cGTACE75Tjyv4xAwd3Bvkw==
X-Received: by 2002:a5d:6592:0:b0:2e5:56f8:61fc with SMTP id q18-20020a5d6592000000b002e556f861fcmr578070wru.15.1681978728456;
        Thu, 20 Apr 2023 01:18:48 -0700 (PDT)
Message-ID: <f25c43d7-2924-b739-ec65-a10523d4e566@linaro.org>
Date: Thu, 20 Apr 2023 10:18:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2 05/16] util/vhost-user-server: rename refcount to
 in_flight counter
Content-Language: en-US
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Fam Zheng <fam@euphon.net>,
 Julia Suvorova <jusual@redhat.com>, Hanna Reitz <hreitz@redhat.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Coiby Xu <Coiby.Xu@gmail.com>,
 Paul Durrant <paul@xen.org>, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
 Eduardo Habkost <eduardo@habkost.net>, Juan Quintela <quintela@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefano Garzarella <sgarzare@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, Kevin Wolf <kwolf@redhat.com>,
 "Richard W.M. Jones" <rjones@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Peter Lieven <pl@kamp.de>,
 eesposit@redhat.com, Aarushi Mehta <mehta.aaru20@gmail.com>,
 Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
 David Woodhouse <dwmw2@infradead.org>
References: <20230419172817.272758-1-stefanha@redhat.com>
 <20230419172817.272758-6-stefanha@redhat.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@linaro.org>
In-Reply-To: <20230419172817.272758-6-stefanha@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 19/4/23 19:28, Stefan Hajnoczi wrote:
> The VuServer object has a refcount field and ref/unref APIs. The name is
> confusing because it's actually an in-flight request counter instead of
> a refcount.
> 
> Normally a refcount destroys the object upon reaching zero. The VuServer
> counter is used to wake up the vhost-user coroutine when there are no
> more requests.
> 
> Avoid confusion by renaming refcount and ref/unref to in_flight and
> inc/dec.
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   include/qemu/vhost-user-server.h     |  6 +++---
>   block/export/vhost-user-blk-server.c | 11 +++++++----
>   util/vhost-user-server.c             | 14 +++++++-------
>   3 files changed, 17 insertions(+), 14 deletions(-)
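The semantic distinction the commit message draws can be sketched in miniature (a hypothetical illustration, not the actual QEMU code; the `Server` type and `wake_pending` field are invented for the example):

```c
#include <assert.h>

/* Sketch of the distinction above: a true refcount would destroy the
 * object on reaching zero, whereas VuServer's counter only records that
 * the last in-flight request finished so a waiting coroutine can be
 * woken. */
typedef struct {
    unsigned int in_flight;
    int wake_pending;   /* stand-in for entering the waiting coroutine */
} Server;

static void inflight_inc(Server *s)
{
    s->in_flight++;
}

static void inflight_dec(Server *s)
{
    assert(s->in_flight > 0);
    if (--s->in_flight == 0)
        s->wake_pending = 1;   /* wake the waiter, don't destroy */
}
```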

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 08:31:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 08:31:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523951.814395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPhF-0005TI-1A; Thu, 20 Apr 2023 08:31:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523951.814395; Thu, 20 Apr 2023 08:31:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPhE-0005TB-U2; Thu, 20 Apr 2023 08:31:16 +0000
Received: by outflank-mailman (input) for mailman id 523951;
 Thu, 20 Apr 2023 08:31:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fY2H=AL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppPhD-0005T5-Hh
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 08:31:15 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20620.outbound.protection.outlook.com
 [2a01:111:f400:7d00::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b5acc986-df55-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 10:31:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7624.eurprd04.prod.outlook.com (2603:10a6:20b:291::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 08:31:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 08:31:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5acc986-df55-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VBJ87ElbuPgqpTjCkLnuMzZtb0yClOFQoRc3ESD0OqcQf6JVOsi2WMU7Oj5O1dC7uJrkWm9vmcbrb4RHMLSQYe7283VtdlCVqFRKbCOQdwDo0JLo+25lIx6B7JacCp4OgfCp7O8taVGscT0FCln9oJk9w74/NIsJcF0+87r/3A1VwxudrTQv5cyOWz9T/SiIXhcM23IjI6U76kEPJnG3srjOxFI5cXQJqBAzBnoYXZNoOWK1G+9feFpa2ji3DmErbu2D2IzeS1bkXx2S9JIK1sbgJNo48lPOtZNM7G4755X2Q25plM44zJUtyjuRqB0BsFzcjZflwzsItiU+pXH8tg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AaMuU00G0QkFqYddEGM362WOg54PsulMYrJYjY/kpaM=;
 b=HiL3tbfRhsff1Up99UFZyYYBgklYLaREgueE4avqeFwD5wdIKLjtE8lORi1OWQCeIaH7fCrNVYTeLy79SDBoU4+FsCe2wpzy4MDmwgTcc1N+aaCPMMasM9X2gOj7uVzWykB0nle7KDc0WJL13zOUPKNMCvoUaSn2XT1MjTU7CiMU3Z+4VRZlNa62TGmaeJjvbeKLuhRZFD8juj1kYzD+pRSqQXDcd0BLe+PlAYqyOq7PHKrKJJO09j7qbCjNcovnEXNdSfdzDXIG7iYU+mxe2880wYm68CBzYaFV7oR93cB1Er1LXsbRfXWiFJJ3sSh2WfQ+HqZGydz6Ja6l3bhQqg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AaMuU00G0QkFqYddEGM362WOg54PsulMYrJYjY/kpaM=;
 b=Z2ks5EwYsw6TUjcX1XiRuJkNjLSolXjhcetFpU8nRpUnQqL3zRUjVKlvHK0fP3SqiBjtV1cdqdw4cmyed1r1qdheEzl0Xyu6ESZghDSyjEdnMXtg3YMgANDxbRr1lYNp24cuDNKs1jxa30G9J4Y0K37Ixo96uspIvNh6v8cBCsujrbuB5eeZ51jQlh66GOHFGpG/d9BwwnZFJYZw0BUsHI+6FGwDDcl40HWMrD/vqsJLBv16bXUBV80KNqWaCXEDZiy9t8R6mJsaYvkntb5zx3voOZ654HdhqW1WzQfuofCcb8a9I3xqHehE1zcqg6+fr2ybiGJeR+qiAomMHzZqlQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7e3246c7-6de2-b3eb-477f-99ef9bd1b33b@suse.com>
Date: Thu, 20 Apr 2023 10:31:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v6] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <116bb94f-955c-4c46-f16a-d7a5e1cc72b5@suse.com>
 <ZD6AejXJxQxAyrx1@Air-de-Roger>
 <c736271f-96ad-dfdb-48ae-b8e9cc002d9b@suse.com>
 <ZEAO8e9iTjmi86fr@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZEAO8e9iTjmi86fr@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0145.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7624:EE_
X-MS-Office365-Filtering-Correlation-Id: c8a79e16-1da4-4726-58ff-08db4179984f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c8a79e16-1da4-4726-58ff-08db4179984f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 08:31:10.6960
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ir2gkDC1/3AL9SrZmq8Go9LB+nVH4MFO48cICFCfhMF82r1gKM93JvagZLc1mFnmC0iIF2Eeh0RnpkuEij54Gg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7624

On 19.04.2023 17:55, Roger Pau Monné wrote:
> On Wed, Apr 19, 2023 at 03:58:10PM +0200, Jan Beulich wrote:
>> On 18.04.2023 13:35, Roger Pau Monné wrote:
>>> On Tue, Apr 18, 2023 at 11:24:19AM +0200, Jan Beulich wrote:
>>>> ... in order to also intercept Dom0 accesses through the alias ports.
>>>>
>>>> Also stop intercepting accesses to the CMOS ports if we won't ourselves
>>>> use the CMOS RTC, because of there being none.
>>>>
>>>> Note that rtc_init() deliberately uses 16 as the upper loop bound,
>>>> despite probe_cmos_alias() using 8: The higher bound is benign now, but
>>>> would save us touching the code (or, worse, missing to touch it) in case
>>>> the lower one was doubled.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>>
>> Before committing I went back to read through doc and earlier comments,
>> in particular regarding the NMI disable. As a result I'm now inclined
>> to follow your earlier request and fold in the change below. Thoughts?
> 
> It was unclear to me whether port 0x70 also had this NMI disabling
> behavior when the RTC/CMOS is not present but it seems that port is
> shared between the RTC index and the NMI logic, so lack of RTC doesn't
> imply lack of the NMI bit.

Right. My earlier oversight was that the datasheet I had pointed you
at actually explicitly mentions the NMI disable bit (really NMI# enable
there, named NMI_EN) in a separate section.
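The shared layout under discussion can be sketched as follows (bit positions as in typical chipset datasheets: bit 7 of a write to port 0x70 gates NMI, bits 6:0 select the CMOS index; the helper names are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* A single write to port 0x70 carries both the NMI gate and the CMOS
 * index, which is why the port needs intercepting even when there is
 * no RTC behind it. */
#define NMI_DISABLE_BIT 0x80u   /* bit 7: NMI disable (NMI_EN inverted) */
#define CMOS_IDX_MASK   0x7fu   /* bits 6:0: CMOS/RTC index */

static uint8_t cmos_index(uint8_t port70_val)
{
    return port70_val & CMOS_IDX_MASK;
}

static int nmi_disabled(uint8_t port70_val)
{
    return (port70_val & NMI_DISABLE_BIT) != 0;
}
```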

>> --- a/xen/arch/x86/time.c
>> +++ b/xen/arch/x86/time.c
>> @@ -1305,6 +1305,13 @@ bool is_cmos_port(unsigned int port, uns
>>  {
>>      unsigned int offs;
>>  
>> +    /*
>> +     * While not really CMOS-related, port 0x70 always needs intercepting
>> +     * to deal with the NMI disable bit.
>> +     */
>> +    if ( port <= RTC_PORT(0) && port + bytes > RTC_PORT(0) )
>> +        return true;
> 
> It might make it clearer to move this after the !is_hardware_domain(d)
> check, as non-hardware domains don't get access to that port anyway?

I've done this; I had it earlier because when moved ...

>> +
>>      if ( !is_hardware_domain(d) )
>>          return port <= RTC_PORT(1) && port + bytes > RTC_PORT(0);

... below here it becomes yet more odd with the 2nd of the following if()s.
But I guess that's still "acceptably odd".

>> @@ -1342,6 +1349,17 @@ unsigned int rtc_guest_read(unsigned int
>>           * underlying hardware would permit doing so.
>>           */
>>          data = currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(0)));
>> +
>> +        /*
>> +         * When there's (supposedly) no RTC/CMOS, we don't intercept the other
>> +         * ports. While reading the index register isn't normally possible,
>> +         * play safe and return back whatever can be read (just in case a value
>> +         * written through an alias would be attempted to be read back here).
>> +         */
>> +        if ( port == RTC_PORT(0) &&
>> +             (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) &&
>> +             ioports_access_permitted(currd, port, port) )
>> +            data = inb(port) & 0x7f;
> 
> Do we really need to mask the high bit here?  We don't allow setting
> that bit in the first place.

I think it's more consistent to mask it off, specifically with the code
visible in context right above the insertion. The doc isn't really clear
about readability of that bit: On one hand it says R/W for port 0x70 in
the NMI_EN section, yet otoh in the RTC section it says "Note that port
70h is not directly readable. The only way to read this register is
through Alt Access mode." (I think the NMI_EN section is more trustworthy,
but still.) Plus if we were to ever make use of the NMI disable, we
wouldn't want Dom0 to see the bit set.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 08:32:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 08:32:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523954.814404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPiK-0005zh-B2; Thu, 20 Apr 2023 08:32:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523954.814404; Thu, 20 Apr 2023 08:32:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPiK-0005za-8D; Thu, 20 Apr 2023 08:32:24 +0000
Received: by outflank-mailman (input) for mailman id 523954;
 Thu, 20 Apr 2023 08:32:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J46Y=AL=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1ppPiJ-0005zQ-6x
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 08:32:23 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id deb4d0fa-df55-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 10:32:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: deb4d0fa-df55-11ed-b21f-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681979541;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=I+ebiUykc2oGiW9xPV9ZKUG/P8FsTezy0HSUaGX7/pY=;
	b=Jz/cXv5/DHLlLjV0z8BL66SPZKoeNMGD449tum189TN4LYaNtkzbsEgLqCXxeR5D2x4DmB
	AiiH4E3wu4mBGTwiuHyl8BJq1iKIMjEryKHI4Rwh2aIdHQXjHwgf71hZZEbZyP9aBjU6fv
	3zXpJo+I3lE6AOTbjLnufJBUwJfr8usMbNL18Mci5Spg10fimG4h36b3KXVFHqqEKIS4NL
	6O/vuWFYSqyD35wd+FaTjGBiDpyu2BeCfT9L9YjBMhEbpDoYxV0wuRUxlhwDe39bW/c/BG
	Fut9OwTjPUuzPGtLKQMIt2adUOVPSweDTmlzV8g0EUacW9ueX1QxUKw2lpWF0g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681979541;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=I+ebiUykc2oGiW9xPV9ZKUG/P8FsTezy0HSUaGX7/pY=;
	b=fiV2/prU9cb5dhmyq+s9y7Q+ypTlDxy8b3yL8uPmb2zNIq4Xu7UguxKO2bu7hAd0D1st/s
	PIPzYABPo+6ab0BQ==
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Menzel
 <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Brian Gerst <brgerst@gmail.com>, Arjan van de Veen
 <arjan@linux.intel.com>, Paolo Bonzini <pbonzini@redhat.com>, Paul
 McKenney <paulmck@kernel.org>, Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>, Oleksandr Natalenko
 <oleksandr@natalenko.name>, "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>, Arnd
 Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org, Catalin
 Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
 <87r0sh4m7a.ffs@tglx> <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
 <87a5z443g2.ffs@tglx> <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
 <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
 <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com>
Date: Thu, 20 Apr 2023 10:32:19 +0200
Message-ID: <871qkf3qek.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Wed, Apr 19 2023 at 17:21, Andrew Cooper wrote:
> On 19/04/2023 2:50 pm, Andrew Cooper wrote:
>> What I'm confused by is why this system boots in the first place. I can
>> only think that it's a system which only has 4-bit APIC IDs, and happens
>> to function when bit 4 gets truncated off the top of the SIPI destination...
>
> https://www.amd.com/system/files/TechDocs/42300_15h_Mod_10h-1Fh_BKDG.pdf
>
> This system does still require the IO-APICs to be at 0, and the LAPICs
> to start at some offset, which is clearly 16 in this case. Also, this
> system has configurable 4-bit or 8-bit wide APIC IDs, and I can't tell
> which mode is active just from the manual.

That document contradicts itself:

  "The ApicId of core j must be enumerated/assigned as:
   ApicId[core=j] = (OFFSET_IDX) * MNC + j

   Where OFFSET_IDX is an integer offset (0 to N) used to shift up the
   core ApicId values to allow room for IOAPIC devices.

   It is recommended that BIOS use the following APIC ID assignments for
   the broadest operating system support. Given N = MNC and M =
   Number_Of_IOAPICs:

   • Assign the core ApicId’s first from 0 to N-1, and the IOAPIC IDs
     from N to N+(M-1)."
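As a quick check of the quoted formula: with MNC = 16 and OFFSET_IDX = 1, the first core lands at APIC ID 16, matching the offset Andrew observed. A literal transcription (function name invented):

```c
#include <assert.h>

/* The BKDG formula quoted above: ApicId[core j] = OFFSET_IDX * MNC + j,
 * which reserves the IDs below OFFSET_IDX * MNC for IOAPICs. */
static unsigned int bkdg_apic_id(unsigned int offset_idx,
                                 unsigned int mnc,
                                 unsigned int j)
{
    return offset_idx * mnc + j;
}
```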

Oh well. If the rest of these docs is of the same quality then it's not
a surprise that BIOSes are trainwrecks.

> But, it does mean that the BIOS has genuinely modified the APIC IDs of
> the logical processors. This does highlight an error in reasoning with
> the parallel bringup code.

Yes.

> For xAPIC, the APIC_ID register is writeable (at least, model
> specifically), and CPUID is only the value it would have had at reset.
> So the AP bringup logic can't actually use CPUID reliably.
>
> This was changed in x2APIC, which made the x2APIC_ID immutable.
>
> I don't see an option other than the AP bringup code query for xAPIC vs
> x2APIC mode, and either looking at the real APIC_ID register, or falling
> back to CPUID.
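A mode check along the lines Andrew describes would consult the IA32_APIC_BASE MSR (0x1B), where bit 11 globally enables the APIC and bit 10 (EXTD) selects x2APIC mode. The decoder below is a sketch operating on an already-read MSR value, not actual bringup code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* IA32_APIC_BASE MSR layout (relevant bits only):
 * bit 11 = APIC globally enabled, bit 10 = x2APIC mode (EXTD).
 * x2APIC mode requires both bits set. */
#define APIC_BASE_ENABLE (1ULL << 11)
#define APIC_BASE_EXTD   (1ULL << 10)

static bool apic_in_x2apic_mode(uint64_t apic_base_msr)
{
    return (apic_base_msr & (APIC_BASE_ENABLE | APIC_BASE_EXTD)) ==
           (APIC_BASE_ENABLE | APIC_BASE_EXTD);
}
```

In x2APIC mode the immutable x2APIC_ID can be trusted; otherwise the code would fall back to reading the (writeable) xAPIC APIC_ID register rather than CPUID.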

I'm pondering whether to simply deny parallel mode if x2APIC is not there.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 08:33:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 08:33:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523960.814415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPja-0006cY-O7; Thu, 20 Apr 2023 08:33:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523960.814415; Thu, 20 Apr 2023 08:33:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPja-0006cR-LI; Thu, 20 Apr 2023 08:33:42 +0000
Received: by outflank-mailman (input) for mailman id 523960;
 Thu, 20 Apr 2023 08:33:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8Idg=AL=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1ppPjY-0006cL-Qn
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 08:33:40 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20616.outbound.protection.outlook.com
 [2a01:111:f400:fe12::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0be465ea-df56-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 10:33:38 +0200 (CEST)
Received: from DU2PR04CA0004.eurprd04.prod.outlook.com (2603:10a6:10:3b::9) by
 PA4PR08MB6319.eurprd08.prod.outlook.com (2603:10a6:102:e8::5) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.22; Thu, 20 Apr 2023 08:33:28 +0000
Received: from DBAEUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:3b:cafe::ad) by DU2PR04CA0004.outlook.office365.com
 (2603:10a6:10:3b::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.24 via Frontend
 Transport; Thu, 20 Apr 2023 08:33:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT064.mail.protection.outlook.com (100.127.143.3) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.23 via Frontend Transport; Thu, 20 Apr 2023 08:33:27 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Thu, 20 Apr 2023 08:33:27 +0000
Received: from ed74f0d169c7.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3F377C9B-1E92-43CC-8836-74022DB5F8CA.1; 
 Thu, 20 Apr 2023 08:33:21 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ed74f0d169c7.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 Apr 2023 08:33:21 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by PAVPR08MB9860.eurprd08.prod.outlook.com (2603:10a6:102:2f4::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Thu, 20 Apr
 2023 08:33:19 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 08:33:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0be465ea-df56-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IvpCEQ0WeMAYJbTx3wkz02Zg2UNJAx0Run0ZkNj+LAE=;
 b=DkloXxhLAFKb2qfo3yHnxIJ0OUMCgGedUJ2ZBuhfMM9oS0EVlZ8UVXpXtBuvxUieEltdQf0PkZBZf4TiAMXAKoynOjC0iM1zlgbnPqq3tstYgfAfyLC8dZIs5U2It94D9/RaCM0sEEX7ojoUfwLw/LF7vn4Cpf+B4ajaYPlK/IM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 24bff3db67057052
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=J+5jCV9p1l0IijFPj/le/l8MkPi5DyDmGCzdx9HeNWVViK+MHEY7JJEo7NtwVoJNXfj9nqTqvEoT+xc3JrOYfSluWr34Lq8qVpZC5uoKBYeo75TyaMvCjg8QlNA8oaQHBOgXL69jo+51wme8UrZ/HDUuvldtNbWzXKzdDF9KienhWabgj6jbksCrnWMdZx8hdP6nZHmoUhTq3GA4WKF1qk0y7jH963wqqJESAnAeDl/oPy10+0ZkhWf9D8gsYyNdD/Qe8RwK0VwoWl1CLi6+amlbpzd6ipEmmHizYbGJq16pdl9mbJnMrxrLVRK+2/1qDOCykpbBpayfZSu1Pt3lwg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IvpCEQ0WeMAYJbTx3wkz02Zg2UNJAx0Run0ZkNj+LAE=;
 b=AlDY5iWwvWve462+WXeEKoXYC9uxBbe3iNsfixYyk67eRQ6kbzbmLcDx7WAGivD/t+atn3Qu0+XN3dpo00bJ8WufaOjyH9hI80raE+fA5hYKAr9xmKJZKQrfMOTthBCTrcfcdNmyiZ1Np05SoEGTbChjQBQbGbsRDdtcLyYAuYmHW0w6Gtpe4m5em7pw9RJ1i1xCvmvgCfi47f4GfcWjMm5DTupNC14DySGK/MnGpvXt3pjr+eA4Ppu34WSULuHenbOakxudXIh4lWo9PRK3Y6yavgdFuGtsNjSDIQCdCF0rIf+fa6BCpne+vs8zfUIDm7TRwlKtrLe74kvhVUWTDg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IvpCEQ0WeMAYJbTx3wkz02Zg2UNJAx0Run0ZkNj+LAE=;
 b=DkloXxhLAFKb2qfo3yHnxIJ0OUMCgGedUJ2ZBuhfMM9oS0EVlZ8UVXpXtBuvxUieEltdQf0PkZBZf4TiAMXAKoynOjC0iM1zlgbnPqq3tstYgfAfyLC8dZIs5U2It94D9/RaCM0sEEX7ojoUfwLw/LF7vn4Cpf+B4ajaYPlK/IM=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v5 05/12] arm/sve: save/restore SVE context switch
Thread-Index:
 AQHZbSQq1oowf9vRd0a2ZPNGYXQmqK8xC64AgAE14QCAAAESAIABmpCAgAADSgCAAAEcAIAACZgA
Date: Thu, 20 Apr 2023 08:33:18 +0000
Message-ID: <3EE3D284-0964-4E73-9F59-37BD62989228@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-6-luca.fancellu@arm.com>
 <109F3491-6845-4A5F-9F77-F24D8970B1BE@arm.com>
 <C99DF25D-538F-4373-9F3A-F4E62B9C4E54@arm.com>
 <2B510623-0438-4D01-A916-14A8DE8D0A8C@arm.com>
 <5AE27C3A-3857-4044-9010-F452C7CF7E6F@arm.com>
 <06C78335-3FD1-4AD2-A576-BCE636018280@arm.com>
 <0015B539-BB74-498A-8E05-DF0D84AE1B0A@arm.com>
In-Reply-To: <0015B539-BB74-498A-8E05-DF0D84AE1B0A@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|PAVPR08MB9860:EE_|DBAEUR03FT064:EE_|PA4PR08MB6319:EE_
X-MS-Office365-Filtering-Correlation-Id: 4301f68a-27a2-4041-60ac-08db4179ea42
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <C9F087BDEBDCAA47A9F0889C602AABBE@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9860
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e855161b-901c-4499-40a9-08db4179e4da
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(39860400002)(346002)(376002)(451199021)(40470700004)(46966006)(36840700001)(40480700001)(40460700003)(70586007)(4326008)(70206006)(478600001)(37006003)(316002)(54906003)(6636002)(8676002)(6862004)(8936002)(5660300002)(81166007)(41300700001)(356005)(82740400003)(53546011)(186003)(83380400001)(336012)(47076005)(2616005)(6486002)(36860700001)(26005)(107886003)(6506007)(6512007)(86362001)(82310400005)(36756003)(2906002)(33656002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 08:33:27.9967
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4301f68a-27a2-4041-60ac-08db4179ea42
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6319

Hi Luca,

> On 20 Apr 2023, at 09:58, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
>>> 
>>> Hi Bertrand,
>>> 
>>> These are the changes I'm doing to this patch to address your comment, are you ok with them?
>>> 
>>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>>> index f0eab18dc384..1fef466ba0aa 100644
>>> --- a/xen/arch/arm/arm64/sve.c
>>> +++ b/xen/arch/arm/arm64/sve.c
>>> @@ -91,35 +91,35 @@ int sve_context_init(struct vcpu *v)
>>>   if ( !ctx )
>>>       return -ENOMEM;
>>> 
>>> -    v->arch.vfp.sve_context = ctx;
>>> +    /* Point to the end of Z0-Z31 memory, just before FFR memory */
>>> +    v->arch.vfp.sve_zreg_ctx_end = ctx +
>>> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>>> 
>>>   return 0;
>>> }
>>> 
>>> void sve_context_free(struct vcpu *v)
>>> {
>>> -    XFREE(v->arch.vfp.sve_context);
>>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>>> +
>>> +    /* Point back to the beginning of Z0-Z31 + FFR memory */
>>> +    v->arch.vfp.sve_zreg_ctx_end = v->arch.vfp.sve_zreg_ctx_end -
>>> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>> 
>> Please use a local variable here instead.
>> 
>> For the rest all good yes, it makes the save/restore code simpler :-)
>> 
> 
> I did at the beginning, but then I realised XFREE would set to NULL the local variable instead,
> I could open code it and call xfree on the local variable and put v->arch.vfp.sve_zreg_ctx_end
> to NULL afterwards, but Julien asked me to use XFREE.
> 
> What is your preference here?
> 
I had not realised XFREE was actually also setting it to NULL.

Then I would keep your code but use -= instead.

Cheers
Bertrand
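The XFREE() behaviour the thread turns on can be sketched with a stand-in macro (a hypothetical, self-contained version; Xen's real XFREE() wraps its own xfree() allocator): applying it to the struct field nulls the field itself, while applying it to a local alias nulls only that local and leaves the field dangling.

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for Xen's XFREE(): free the allocation and NULL the exact
 * lvalue the macro was applied to. Illustrative sketch, not Xen code. */
#define XFREE(p) do { free(p); (p) = NULL; } while (0)

struct ctx_holder { int *buf; };

/* XFREE on the field: the field itself ends up NULL. */
static int field_null_after_xfree(void)
{
    struct ctx_holder h = { .buf = malloc(sizeof(int)) };
    XFREE(h.buf);
    return h.buf == NULL;
}

/* XFREE on a local alias: only the local is NULLed; the field keeps its
 * old (now dangling) value -- the pitfall Luca describes above. */
static int field_dangles_after_local_xfree(void)
{
    struct ctx_holder h = { .buf = malloc(sizeof(int)) };
    int *local = h.buf;
    XFREE(local);
    return local == NULL && h.buf != NULL;
}
```

This is why open-coding xfree() on a local would require explicitly setting the struct member to NULL afterwards, as Luca notes.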


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 08:47:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 08:47:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523964.814425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPwQ-0008Al-Sj; Thu, 20 Apr 2023 08:46:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523964.814425; Thu, 20 Apr 2023 08:46:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppPwQ-0008Ae-Pl; Thu, 20 Apr 2023 08:46:58 +0000
Received: by outflank-mailman (input) for mailman id 523964;
 Thu, 20 Apr 2023 08:46:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9VJI=AL=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1ppPwO-0008AY-Ss
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 08:46:57 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7d00::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e5666946-df57-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 10:46:52 +0200 (CEST)
Received: from AS4P251CA0030.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:5d3::18)
 by DBBPR08MB6171.eurprd08.prod.outlook.com (2603:10a6:10:20f::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 08:46:34 +0000
Received: from AM7EUR03FT059.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:5d3:cafe::48) by AS4P251CA0030.outlook.office365.com
 (2603:10a6:20b:5d3::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.24 via Frontend
 Transport; Thu, 20 Apr 2023 08:46:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT059.mail.protection.outlook.com (100.127.140.215) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.22 via Frontend Transport; Thu, 20 Apr 2023 08:46:33 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Thu, 20 Apr 2023 08:46:33 +0000
Received: from 481ba923cfeb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 38BEA464-B33A-4533-BA3C-C20D90575DA0.1; 
 Thu, 20 Apr 2023 08:46:25 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 481ba923cfeb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 Apr 2023 08:46:25 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AM9PR08MB6147.eurprd08.prod.outlook.com (2603:10a6:20b:2da::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 08:46:23 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 08:46:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5666946-df57-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7Rz4yL6V/xGbuBke0zDKVD1bLFGce80LP6JNTzJD9jk=;
 b=jp3tzm7IeTZZLSY9sUVpKWe4qGidAcJUBdwgZkD5BGO+WO3mFac6FyvdWtBvDg6/gqAbqq/PbhQfemws5awzeRH59oyziBCSA5T3k8ipjJsN/WZvCpvJEApZAu61tw6MDoTmx2L+7hid54AD1fNOkBIiyfm3Drl5MoudxpBIamQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 922d5b6c40c24139
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TMeHuXRM7+R3ZV6ioZCriLwp4ximu3BCwbD59QlYth37L5Psf1tXo+Loimy0wOokSzmzpTnaYutn+Xp/BwyuJbMKp0l7pbthDN68s3w8NOE2HhLl+0ISK5C38mveeLOcy8+ry5F6wwGBD8ao8NXfndUS0JrQsn2v//A+XeVzmlcCFp8YUGvUiqHffqIQuTtcsMCRyZBjZ2upUMQgLNQZ0taOK2wVUAtAwF34RLRTIni+dXu91y+NXcOkA9Oyw5HLekCBQUULAfE34jGyNqG0xyt338Zjoq5nKt3i1UTtINSz8d7D0iLgt0LUYMwhavWbboWeSep9dqbgnhCVMnRJnw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7Rz4yL6V/xGbuBke0zDKVD1bLFGce80LP6JNTzJD9jk=;
 b=RjUFaVZHRJfaTcIND6dVXDNhGCWHbVk9Wk3QdXqW7sSFAoKaKl+H1ymhLXJ/ABzIQ3ZZjh00b1xsgSBTEPPJUrqF4qUlmHXKQChiNimpPqo/tetW9mrWxXqVAYIRLAAlWQyRe4csFxFuZ3kLxAZX9FFd3km7hmhRIuFDFfCsyeoKZ8hV9U3iWXkE+kFwcq9mgNhRiVGdesrc6NoUD/56MK8jJf+xyrfoJPJtxjbmNVQD3DCyMUcAPkfo+dNzvqnjy6J5XVcRlU1EbXp1Ax58wEEsq40z4Gp5UrdSpN3b5CBbk0EttgViPgR+s5JRrEiU6WPCphCiK3EtkLk/tjfuuQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7Rz4yL6V/xGbuBke0zDKVD1bLFGce80LP6JNTzJD9jk=;
 b=jp3tzm7IeTZZLSY9sUVpKWe4qGidAcJUBdwgZkD5BGO+WO3mFac6FyvdWtBvDg6/gqAbqq/PbhQfemws5awzeRH59oyziBCSA5T3k8ipjJsN/WZvCpvJEApZAu61tw6MDoTmx2L+7hid54AD1fNOkBIiyfm3Drl5MoudxpBIamQ=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Index: AQHZbSQ+XccnxtxNG0WbEgpX88e2za8xDZ2AgAE1hQCAAAGYAIAABb6AgAGkXIA=
Date: Thu, 20 Apr 2023 08:46:22 +0000
Message-ID: <DE00F3DB-C6D9-4D90-97A8-FD964FD03099@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
 <F30CEF99-693A-4218-AC80-433A183DE866@arm.com>
 <3DE2B914-FA6E-49EF-8748-BB8DE4B2CC11@arm.com>
 <8DA3FECA-DEBD-479E-9E5A-57676B98ADA4@arm.com>
In-Reply-To: <8DA3FECA-DEBD-479E-9E5A-57676B98ADA4@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AM9PR08MB6147:EE_|AM7EUR03FT059:EE_|DBBPR08MB6171:EE_
X-MS-Office365-Filtering-Correlation-Id: 9cf9bbe5-26b4-4410-9e9f-08db417bbe75
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(366004)(376002)(396003)(39860400002)(451199021)(54906003)(6636002)(37006003)(83380400001)(478600001)(2616005)(6486002)(33656002)(6506007)(26005)(6512007)(71200400001)(316002)(41300700001)(66446008)(66946007)(76116006)(64756008)(4326008)(122000001)(91956017)(66556008)(66476007)(186003)(8676002)(38100700002)(8936002)(5660300002)(36756003)(2906002)(38070700005)(86362001)(6862004)(21314003)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <61FBC4766C87CB45A1EC3A1C6704272C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6147
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ef546177-5b48-4355-444e-08db417bb812
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(376002)(39860400002)(396003)(451199021)(40470700004)(36840700001)(46966006)(33656002)(186003)(107886003)(336012)(36756003)(37006003)(478600001)(82310400005)(40460700003)(86362001)(54906003)(356005)(6512007)(2616005)(26005)(6506007)(81166007)(6486002)(4326008)(6636002)(316002)(36860700001)(70586007)(70206006)(82740400003)(41300700001)(8936002)(8676002)(6862004)(83380400001)(5660300002)(47076005)(2906002)(40480700001)(21314003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 08:46:33.4438
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9cf9bbe5-26b4-4410-9e9f-08db417bbe75
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6171


>>>>> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
>>>>> +{
>>>>> +    /*
>>>>> +     * Negative SVE parameter value means to use the maximum supported
>>>>> +     * vector length, otherwise if a positive value is provided, check if the
>>>>> +     * vector length is a multiple of 128 and not bigger than the maximum value
>>>>> +     * 2048
>>>>> +     */
>>>>> +    if ( val < 0 )
>>>>> +        *out = get_sys_vl_len();
>>>>> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
>>>>> +        *out = val;
>>>> 
>>>> Shouldn't you also check if it is not greater than the maximum vector length ?
>>> 
>>> I don't understand, I am checking that the value is below or equal to SVE_VL_MAX_BITS,
>>> If you mean if it should be checked also against the maximum supported length by the platform,
>>> I think this is not the right place, the check is already in arch_sanitise_domain_config(), introduced
>>> in patch #2
>> 
>> If this is not the right place to check it then why checking the rest here ?
>> 
>> From a user or a developer point of view I would expect the validity of the input to be checked only
>> in one place.
>> If here is not the place for that it is ok but then I would check everything in arch_sanitise_domain_config
>> (multiple, min and supported) instead of doing it partly in 2 places.
> 
> Ok, given the way we encoded the value in the xen_domctl_createdomain structure, the value takes
> very little space, but a small issue is that when we encode it, we are dividing it by 128, which is fine for user params
> that are multiple of 128, but it's less fine if the user passes "129".
> 
> To overcome this issue we are checking the value when it is not already encoded. Now, thinking about it, the check
> "&& (val <= SVE_VL_MAX_BITS)" is not really needed, because even if the value is above, then in arch_sanitise_domain_config
> we will hit the top limit of the platform maximum VL.
> 
> int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
> {
>    unsigned int max_vcpus;
>    unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>    unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
> 
>    if ( (config->flags & ~flags_optional) != flags_required )
>    {
>        dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
>                config->flags);
>        return -EINVAL;
>    }
> 
>    /* Check feature flags */
>    if ( sve_vl_bits > 0 )
>    {
>        unsigned int zcr_max_bits = get_sys_vl_len();
> 
>        if ( !zcr_max_bits )
>        {
>            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>            return -EINVAL;
>        }
> 
>        if ( sve_vl_bits > zcr_max_bits )
>        {
>            dprintk(XENLOG_INFO,
>                    "Requested SVE vector length (%u) > supported length (%u)\n",
>                    sve_vl_bits, zcr_max_bits);
>            return -EINVAL;
>        }
>    }
>   [...]
> 
> Now, I understand your point, we could check everything in sve_sanitize_vl_param(), but it would leave a problem
> for domains created by hypercalls if I am not wrong.
> 
> What do you think?

I thought about that; another possibility is to store "sve_vl" as uint16_t inside struct xen_arch_domainconfig and
check it inside arch_sanitise_domain_config() for being a multiple of 128 and no greater than the max supported VL.
This would allow all the checks to be in one place, taking a bit more space; anyway, we would take the space from the
implicit padding, as this is the current status:

struct arch_domain {
enum domain_type           type;                 /*     0     4 */
uint8_t                    sve_vl;               /*     4     1 */

/* XXX 3 bytes hole, try to pack */

struct p2m_domain          p2m;                  /*     8   328 */
/* --- cacheline 5 boundary (320 bytes) was 16 bytes ago --- */
struct hvm_domain          hvm;                  /*   336   312 */
/* --- cacheline 10 boundary (640 bytes) was 8 bytes ago --- */
struct paging_domain       paging;               /*   648    32 */
struct vmmio               vmmio;                /*   680    32 */
/* --- cacheline 11 boundary (704 bytes) was 8 bytes ago --- */
unsigned int               rel_priv;             /*   712     4 */

/* XXX 4 bytes hole, try to pack */

struct {
uint64_t           offset;               /*   720     8 */
s_time_t           nanoseconds;          /*   728     8 */
} virt_timer_base;                                       /*   720    16 */
struct vgic_dist           vgic;                 /*   736   200 */

/* XXX last struct has 2 bytes of padding */

/* --- cacheline 14 boundary (896 bytes) was 40 bytes ago --- */
struct vuart               vuart;                /*   936    32 */
/* --- cacheline 15 boundary (960 bytes) was 8 bytes ago --- */
unsigned int               evtchn_irq;           /*   968     4 */
struct {
uint8_t            privileged_call_enabled:1; /*   972: 0  1 */
} monitor;                                               /*   972     1 */

/* XXX 3 bytes hole, try to pack */

struct vpl011              vpl011;               /*   976    72 */

/* size: 1152, cachelines: 18, members: 13 */
/* sum members: 1038, holes: 3, sum holes: 10 */
/* padding: 104 */
/* paddings: 1, sum paddings: 2 */
} __attribute__((__aligned__(128)));
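The encode/decode scheme Luca describes (storing the vector length divided by 128) can be sketched as below; the names mirror the thread, but the bodies are illustrative assumptions rather than Xen's exact implementation. It shows why a value such as 129 is "less fine": the integer division silently truncates it.

```c
#include <assert.h>

#define SVE_VL_MULTIPLE_VAL 128U   /* SVE vector-length granule, in bits */

/* Illustrative encode: bits / 128 fits in a uint8_t for VLs up to 2048. */
static unsigned char sve_encode_vl(unsigned int bits)
{
    return bits / SVE_VL_MULTIPLE_VAL;
}

/* Illustrative decode, mirroring the sve_decode_vl() used in the thread. */
static unsigned int sve_decode_vl(unsigned char encoded)
{
    return encoded * SVE_VL_MULTIPLE_VAL;
}
```

Encoding 129 and decoding it back yields 128, which is why the multiple-of-128 check has to happen before the value is encoded.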

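The padding argument from the pahole dump can be checked with toy types (these are not Xen's structures; the point is only the hole after a 1-byte member): widening an 8-bit field to 16 bits in front of a member with stricter alignment consumes implicit padding and leaves the struct size unchanged.

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-ins for the layout discussed above: a 4-byte field, the
 * sve_vl member, then a pointer forcing pointer-sized alignment. */
struct toy_u8  { int32_t type; uint8_t  sve_vl; void *next; }; /* 3-byte hole */
struct toy_u16 { int32_t type; uint16_t sve_vl; void *next; }; /* 2-byte hole */
```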

From xen-devel-bounces@lists.xenproject.org Thu Apr 20 08:50:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 08:50:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523969.814435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQ01-0001EE-HZ; Thu, 20 Apr 2023 08:50:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523969.814435; Thu, 20 Apr 2023 08:50:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQ01-0001E7-Ed; Thu, 20 Apr 2023 08:50:41 +0000
Received: by outflank-mailman (input) for mailman id 523969;
 Thu, 20 Apr 2023 08:50:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7ocM=AL=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1ppPzz-0001Dz-R9
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 08:50:39 +0000
Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com
 [2607:f8b0:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6b94d160-df58-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 10:50:38 +0200 (CEST)
Received: by mail-pl1-x62a.google.com with SMTP id
 d9443c01a7336-1a50cb65c92so7733865ad.0
 for <xen-devel@lists.xenproject.org>; Thu, 20 Apr 2023 01:50:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b94d160-df58-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681980637; x=1684572637;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=C7UaJjs9bvBkYzqOF6kajdzJbTKr/ZiaP1QqK+7GkNI=;
        b=MkNb928H4ghrEc9c7ucgz55RSqcrf/k3Ei20sz/WOd8qmRbxXJ1pWO2T3FBEdpUwPF
         17McAedy9E+ISphvtc1wiUDoagGy263DShj2aiRPfmA0fCVOSg5or5BuX13IzrR7YtXP
         IDjg1N6d99/tZgJpHdFe1g2lq0xwU9wGFExYYRU0v4N2+kMrXuEBL3mchv+8tU6Qc90E
         hwErR/habZ5VcIFSXvbLpkHxXxcFG9HPQiX5UBsTvl7BWei94iFgLN8Npx8VCkbElgmy
         T9VEnIyiqaFtP66vun9ABtglvAldkcZBaoJ6QM4+bQeJt93Owba9mmDQN94NOPMYGKOc
         yUpg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681980637; x=1684572637;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=C7UaJjs9bvBkYzqOF6kajdzJbTKr/ZiaP1QqK+7GkNI=;
        b=R7LUv97+whl3yIQle8Ty/g1ZrzgNuieytcL3QHbIMPfeNoc+NkxaPzxRV2zffc8fxq
         aaPoEwRfSKlhn9de8fRU+ELQcgId1KAycRPI6gc5zWJYXY5uLwwPFGU/cONCO1/X0Rzp
         pPItBtFFNGn5bz9qqDgPP1CJndVG/nsLsSkk55Rjhv45NsZ23iffw6cNxDlSoodHvKa7
         8Z1kLbFKdcz09cXKb3pqkxFFrl06By4kwy9ktMcVYJyJvF4/Hhbf0/FjWClCLeEIgTgi
         /1cHbmgXMF2MY2zP3MDh1qLUesAcZRVffB6TWxdslqe3NaxgjOqWiw/RDokGvl4XhGW4
         1Nag==
X-Gm-Message-State: AAQBX9frSzLE9NkjS9LNkwM8nVXBLwpkFWdYePNe2qLzygDaWcJfWpAC
	qQUPhksCRD50+FJKHUGh2Pxc3BklkvNI2g+3kVI=
X-Google-Smtp-Source: AKy350YwvZ6FtJvtNFDa8S/UL7XlUjfB+hmqz13+mA3AvHBBZKIjAefrcjqh61EF2qtRWFqyFXmPiDi12FdfojPsxEo=
X-Received: by 2002:a17:903:8c7:b0:1a0:5349:6606 with SMTP id
 lk7-20020a17090308c700b001a053496606mr865597plb.56.1681980636533; Thu, 20 Apr
 2023 01:50:36 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org> <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com> <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com> <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Thu, 20 Apr 2023 11:56:43 +0300
Message-ID: <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Michal Orzel <michal.orzel@amd.com>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>
Content-Type: multipart/alternative; boundary="0000000000005c1af205f9c0a284"

--0000000000005c1af205f9c0a284
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Thanks Stefano.

I am going to do it today.

Regards,
O.

Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:

> On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> > Hi Michal,
> >
> > I corrected xen's command line.
> > Now it is
> > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
> > dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>
> 4 colors is way too many for xen, just do xen_colors=0-0. There is no
> advantage in using more than 1 color for Xen.
>
> 4 colors is too few for dom0, if you are giving 1600M of memory to Dom0.
> Each color is 256M. For 1600M you should give at least 7 colors. Try:
>
> xen_colors=0-0 dom0_colors=1-8
>
>
>
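Stefano's arithmetic can be sketched as follows. The way-size/page-size relation matches the boot log quoted in this thread ("Way size: 64kB", "Max. number of colors available: 16"); the 256 MiB-per-color figure is taken from his reply, and the ceiling division reproduces the "at least 7 colors for 1600M" claim. This is a hedged illustration, not Xen's coloring code.

```c
#include <assert.h>

/* Number of available cache colors = LLC way size / page size. */
static unsigned int max_colors(unsigned int way_size, unsigned int page_size)
{
    return way_size / page_size;
}

/* Minimum colors needed to back mem_mib of RAM when each color can map
 * mib_per_color of it (ceiling division). */
static unsigned int min_colors_for(unsigned int mem_mib,
                                   unsigned int mib_per_color)
{
    return (mem_mib + mib_per_color - 1) / mib_per_color;
}
```

With a 64 KiB way and 4 KiB pages there are 16 colors; 1600 MiB at 256 MiB per color needs 7 of them, hence dom0_colors spanning at least 7 values.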
> > Unfortunately the result was the same.
> >
> > (XEN)  - Dom0 mode: Relaxed
> > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> > (XEN) Coloring general information
> > (XEN) Way size: 64kB
> > (XEN) Max. number of colors available: 16
> > (XEN) Xen color(s): [ 0 ]
> > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> > (XEN) Color array allocation failed for dom0
> > (XEN)
> > (XEN) ****************************************
> > (XEN) Panic on CPU 0:
> > (XEN) Error creating domain 0
> > (XEN) ****************************************
> > (XEN)
> > (XEN) Reboot in five seconds...
> >
> > I am going to find out how the command line arguments are passed and parsed.
> >
> > Regards,
> > Oleg
> >
> > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       Hi Michal,
> >
> > You pointed me right at the problem. Thank you.
> > I am going to follow your suggestion.
> > Let's see what happens.
> >
> > Regards,
> > Oleg
> >
> >
> > Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
> >       Hi Oleg,
> >
> >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
> >       >
> >       >
> >       >
> >       > Hello Stefano,
> >       >
> >       > Thanks for the clarification.
> >       > My company uses Yocto for image generation.
> >       > What kind of information do you need to advise me in this case?
> >       >
> >       > Maybe modules sizes/addresses which were mentioned by @Julien
> Grall <mailto:julien@xen.org> ?
> >
> >       Sorry for jumping into the discussion, but FWICS the Xen command
> >       line you provided seems not to be the one Xen booted with. The
> >       error you are observing is most likely due to the dom0 colors
> >       configuration not being specified (i.e. lack of a dom0_colors=<>
> >       parameter). Although this parameter is set in the command line
> >       you provided, I strongly doubt that it is the actual command line
> >       in use.
> >
> >       You wrote:
> >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
> >       dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native
> >       sched=null timer_slop=0 way_szize=65536 xen_colors=0-3
> >       dom0_colors=4-7";
> >
> >       but:
> >       1) way_szize has a typo
> >       2) you specified 4 colors (0-3) for Xen, but the boot log says
> that Xen has only one:
> >       (XEN) Xen color(s): [ 0 ]
> >
> >       This makes me believe that no colors configuration actually ends
> >       up in the command line that Xen booted with. A single color for
> >       Xen is the default if not specified, and the way size was
> >       probably calculated by querying the HW.
> >
> >       So I would suggest first cross-checking the command line in use.
> >
> >       ~Michal
> >
> >
> >       >
> >       > Regards,
> >       > Oleg
> >       >
> >       > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org>:
> >       >
> >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> >       >     > Hi Julien,
> >       >     >
> >       >     > >> This feature has not been merged in Xen upstream yet
> >       >     >
> >       >     > > would assume that upstream + the series on the ML [1] work
> >       >     >
> >       >     > Please clarify this point.
> >       >     > Because the two thoughts are controversial.
> >       >
> >       >     Hi Oleg,
> >       >
> >       >     As Julien wrote, there is nothing controversial. As you are
> >       >     aware, Xilinx maintains a separate Xen tree specific for
> >       >     Xilinx here:
> >       >     https://github.com/xilinx/xen
> >       >
> >       >     and the branch you are using (xlnx_rebase_4.16) comes from
> there.
> >       >
> >       >
> >       >     Instead, the upstream Xen tree lives here:
> >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> >       >
> >       >
> >       >     The Cache Coloring feature that you are trying to configure
> >       >     is present in xlnx_rebase_4.16, but not yet upstream (there
> >       >     is an outstanding patch series to add cache coloring to Xen
> >       >     upstream, but it hasn't been merged yet).
> >       >
> >       >
> >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter
> >       >     too much for you, as you already have Cache Coloring as a
> >       >     feature there.
> >       >
> >       >
> >       >     I take it you are using ImageBuilder to generate the boot
> >       >     configuration? If so, please post the ImageBuilder config
> >       >     file that you are using.
> >       >
> >       >     But from the boot message, it looks like the colors
> >       >     configuration for Dom0 is incorrect.
> >       >
> >
> >
> >

--0000000000005c1af205f9c0a284--


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 08:51:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 08:51:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523972.814445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQ0R-0001hG-PX; Thu, 20 Apr 2023 08:51:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523972.814445; Thu, 20 Apr 2023 08:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQ0R-0001h9-Mh; Thu, 20 Apr 2023 08:51:07 +0000
Received: by outflank-mailman (input) for mailman id 523972;
 Thu, 20 Apr 2023 08:51:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fY2H=AL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppQ0Q-0001gm-8S
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 08:51:06 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7b231cd3-df58-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 10:51:04 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9462.eurprd04.prod.outlook.com (2603:10a6:102:2aa::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 08:51:01 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 08:51:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b231cd3-df58-11ed-8611-37d641c3527e
Message-ID: <6f3726d6-60af-3db7-40cc-63308e427e85@suse.com>
Date: Thu, 20 Apr 2023 10:50:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 1/5] x86: support cache-writeback in flush_area_local() et
 al
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
 <ee33ad20-ef6e-504d-6987-59ccb166f8e4@suse.com>
 <53204261-3dac-579f-ede5-7acffd04f4db@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <53204261-3dac-579f-ede5-7acffd04f4db@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 19.04.2023 21:56, Andrew Cooper wrote:
> On 19/04/2023 11:44 am, Jan Beulich wrote:
>> --- a/xen/arch/x86/flushtlb.c
>> +++ b/xen/arch/x86/flushtlb.c
>> @@ -232,7 +232,7 @@ unsigned int flush_area_local(const void
>>      if ( flags & FLUSH_HVM_ASID_CORE )
>>          hvm_flush_guest_tlbs();
>>  
>> -    if ( flags & FLUSH_CACHE )
>> +    if ( flags & (FLUSH_CACHE | FLUSH_WRITEBACK) )
> 
> Given that we already have FLUSH_CACHE, adding writeback also seems
> fine, but we need to get the naming corrected first.
> 
> We've got a file called flushtlb.c which flushes more than the TLBs now,
> and various APIs in it.
> 
> We have a bunch of ARM specific APIs which AFAICT exist purely to prop
> up the ARM-specific gnttab_cache_flush().  That needs to go and live
> behind an ifdef and stop polluting other architectures with an
> incredibly short-sighted hypercall interface decision.
> 
> The "area" in the low level calls isn't good.  Range might be better,
> but I'm not sure.  The "mask" part of the name would be better as "some"
> or perhaps "others", to be a better counterpoint to "local".  Some of
> the wrappers really ought to be dropped too - there are lots of them,
> and too few users to justify them.
> 
> But on to the main thing which caught my eye...
> 
> The FLUSH in FLUSH_CACHE means the flush infrastructure, not "cache
> flushing", and FLUSH_WRITEBACK is nonsensical next to this.

I agree; I chose the name simply to avoid further changes, while still
fitting the naming scheme (i.e. the FLUSH_ prefix). I'm okay to change
to ...

>  All other
> things we flush have a qualification that makes them clear in context.
> (other than the assist one which I'm going to time out objections to and
> revert back to the name which made more sense.)
> 
> At an absolutely minimum, FLUSH_CACHE first needs renaming to
> FLUSH_CACHE_EVICT and then this new one you're adding needs to be
> FLUSH_CACHE_WRITEBACK.

... these, but I don't think they're much better (still primarily
because of the FLUSH_ prefixes).

FTAOD - while I'm going to make these adjustments (adding a single
prereq patch to carry out the initial rename), I don't really see me
doing any of the other adjustments you were outlining above (at
least not within this series).

> Except...
> 
> Is there any credible reason to have EVICT as an option by the end of
> this cleanup?

Yes: map_pages_to_xen(), hvm_shadow_handle_cd(), memory_type_changed(),
and hvm_set_mem_pinned_cacheattr(), all mean to evict caches aiui.

clean_and_invalidate_dcache_va_range(), as you say, really shouldn't
be required, but I'm unconvinced we can easily drop support for a
gnttab sub-op that's been around for a number of years. (All the more
so given there is (still) no support for GNTTAB_CACHE_SOURCE_GREF, I'm
(still) thinking this shouldn't have been a gnttab sub-op in the first
place, but something truly arch-specific [a platform-op, or a memory
one, or ...].)

Jan

> CLDEMOTE does exist for a reason (reducing coherency traffic overhead
> when you know the consumer is on a different CPU), but it would be
> totally bogus to use this in an all or mask form, and you wouldn't want
> to use it in local form either, simply from an overhead point of view.
> 
> We have evict behaviour simply because `clflush` was the only game in
> town for decades, not because evicting the cacheline is what you
> actually want to do.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 08:51:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 08:51:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523976.814454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQ0p-0002HX-5L; Thu, 20 Apr 2023 08:51:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523976.814454; Thu, 20 Apr 2023 08:51:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQ0p-0002HQ-2S; Thu, 20 Apr 2023 08:51:31 +0000
Received: by outflank-mailman (input) for mailman id 523976;
 Thu, 20 Apr 2023 08:51:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7ocM=AL=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1ppQ0n-0001Dz-RB
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 08:51:29 +0000
Received: from mail-pf1-x42f.google.com (mail-pf1-x42f.google.com
 [2607:f8b0:4864:20::42f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 89b43660-df58-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 10:51:28 +0200 (CEST)
Received: by mail-pf1-x42f.google.com with SMTP id
 d2e1a72fcca58-63b73203e0aso5327207b3a.1
 for <xen-devel@lists.xenproject.org>; Thu, 20 Apr 2023 01:51:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89b43660-df58-11ed-b21f-6b7b168915f2
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org> <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com> <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
 <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop> <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
In-Reply-To: <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Thu, 20 Apr 2023 11:57:33 +0300
Message-ID: <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Michal Orzel <michal.orzel@amd.com>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>
Content-Type: multipart/alternative; boundary="00000000000061f03105f9c0a5b8"

--00000000000061f03105f9c0a5b8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Thanks Michal,

You gave me an idea.
I am going to try it today.

Regards,
O.

Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:

> Thanks Stefano.
>
> I am going to do it today.
>
> Regards,
> O.
>
> Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
>
>> On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>> > Hi Michal,
>> >
>> > I corrected xen's command line.
>> > Now it is
>> > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>> > dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>> > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>>
>> 4 colors is way too many for Xen; just do xen_colors=0-0. There is no
>> advantage in using more than one color for Xen.
>>
>> 4 colors is too few for dom0 if you are giving 1600M of memory to Dom0.
>> Each color is 256M. For 1600M you should give at least 7 colors. Try:
>>
>> xen_colors=0-0 dom0_colors=1-8
>>
>>
>>
>> > Unfortunately the result was the same.
>> >
>> > (XEN)  - Dom0 mode: Relaxed
>> > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>> > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>> > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>> > (XEN) Coloring general information
>> > (XEN) Way size: 64kB
>> > (XEN) Max. number of colors available: 16
>> > (XEN) Xen color(s): [ 0 ]
>> > (XEN) alternatives: Patching with alt table 00000000002cc690 ->
>> > 00000000002ccc0c
>> > (XEN) Color array allocation failed for dom0
>> > (XEN)
>> > (XEN) ****************************************
>> > (XEN) Panic on CPU 0:
>> > (XEN) Error creating domain 0
>> > (XEN) ****************************************
>> > (XEN)
>> > (XEN) Reboot in five seconds...
>> >
>> > I am going to find out how the command line arguments are passed and parsed.
>> >
>> > Regards,
>> > Oleg
>> >
>> > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
>> >       Hi Michal,
>> >
>> > You put my nose into the problem. Thank you.
>> > I am going to use your point.
>> > Let's see what happens.
>> >
>> > Regards,
>> > Oleg
>> >
>> >
>> > Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
>> >       Hi Oleg,
>> >
>> >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>> >       >
>> >       >
>> >       >
>> >       > Hello Stefano,
>> >       >
>> >       > Thanks for the clarification.
>> >       > My company uses yocto for image generation.
>> >       > What kind of information do you need to consult me in this case?
>> >       >
>> >       > Maybe modules sizes/addresses which were mentioned by @Julien
>> Grall <mailto:julien@xen.org> ?
>> >
>> >       Sorry for jumping into the discussion, but FWICS the Xen
>> >       command line you provided seems to be not the one Xen booted
>> >       with. The error you are observing is most likely due to the
>> >       dom0 colors configuration not being specified (i.e. lack of a
>> >       dom0_colors=<> parameter). Although this parameter is set in
>> >       the command line you provided, I strongly doubt that this is
>> >       the actual command line in use.
>> >
>> >       You wrote:
>> >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>> >       dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native
>> >       sched=null timer_slop=0 way_szize=65536 xen_colors=0-3
>> >       dom0_colors=4-7";
>> >
>> >       but:
>> >       1) way_szize has a typo
>> >       2) you specified 4 colors (0-3) for Xen, but the boot log says
>> >       that Xen has only one:
>> >       (XEN) Xen color(s): [ 0 ]
>> >
>> >       This makes me believe that no colors configuration actually
>> >       ended up in the command line that Xen booted with. A single
>> >       color for Xen is the "default if not specified", and the way
>> >       size was probably calculated by asking the HW.
>> >
>> >       So I would suggest first cross-checking the command line in use.
>> >
>> >       ~Michal
>> >
>> >
>> >       >
>> >       > Regards,
>> >       > Oleg
>> >       >
>> >       > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org>:
>> >       >
>> >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>> >       >     > Hi Julien,
>> >       >     >
>> >       >     > >> This feature has not been merged in Xen upstream yet
>> >       >     >
>> >       >     > > would assume that upstream + the series on the ML [1]
>> work
>> >       >     >
>> >       >     > Please clarify this point.
>> >       >     > Because the two thoughts are controversial.
>> >       >
>> >       >     Hi Oleg,
>> >       >
>> >       >     As Julien wrote, there is nothing controversial. As you
>> >       >     are aware,
>> >       >     Xilinx maintains a separate Xen tree specific for Xilinx
>> here:
>> >       >     https://github.com/xilinx/xen
>> >       >
>> >       >     and the branch you are using (xlnx_rebase_4.16) comes from
>> there.
>> >       >
>> >       >
>> >       >     Instead, the upstream Xen tree lives here:
>> >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>> >       >
>> >       >
>> >       >     The Cache Coloring feature that you are trying to
>> >       >     configure is present in xlnx_rebase_4.16, but not yet
>> >       >     present upstream (there is an outstanding patch series to
>> >       >     add cache coloring to Xen upstream, but it hasn't been
>> >       >     merged yet.)
>> >       >
>> >       >
>> >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't
>> >       >     matter too much for you, as you already have Cache
>> >       >     Coloring as a feature there.
>> >       >
>> >       >
>> >       >     I take it you are using ImageBuilder to generate the boot
>> >       >     configuration? If so, please post the ImageBuilder config
>> >       >     file that you are using.
>> >       >
>> >       >     But from the boot message, it looks like the colors
>> >       >     configuration for Dom0 is incorrect.
>> >       >
>> >
>> >
>> >
>
>
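[Editor's note] Stefano's color arithmetic quoted above can be sketched as follows. This is a minimal sketch: the way size and page size come from the boot log in this thread, while the 4 GiB total-DRAM figure is an assumption introduced here solely to reproduce the 256M-per-color number.

```python
import math

PAGE_SIZE = 4 * 1024      # 4 KiB pages on this Arm platform
WAY_SIZE = 64 * 1024      # "(XEN) Way size: 64kB" from the boot log
TOTAL_RAM = 4 * 1024**3   # assumption: 4 GiB of DRAM on the board

# A color selects pages by the LLC index bits above the page offset,
# so the number of available colors is way_size / page_size.
max_colors = WAY_SIZE // PAGE_SIZE        # matches "Max. number of colors available: 16"

# RAM is striped evenly across colors, so each color can map
# total_ram / max_colors bytes.
mem_per_color = TOTAL_RAM // max_colors   # 256 MiB per color

# dom0_mem=1600M therefore needs ceil(1600M / 256M) colors.
colors_needed = math.ceil((1600 * 1024**2) / mem_per_color)
print(max_colors, mem_per_color // 1024**2, colors_needed)
```

This reproduces Stefano's numbers: 16 colors of 256M each, and at least 7 colors for a 1600M dom0.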

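[Editor's note] Since the thread asks about the ImageBuilder configuration, a hypothetical fragment carrying the suggested coloring options might look like the following (the variable name and layout are assumptions, not taken from this thread; check them against your ImageBuilder version):

```shell
# Hypothetical ImageBuilder-style config fragment (names assumed):
# Xen command line with the coloring suggested in the thread.
XEN_CMD="console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 \
dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 \
way_size=65536 xen_colors=0-0 dom0_colors=1-8"
```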
--00000000000061f03105f9c0a5b8--


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 08:55:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 08:55:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523982.814465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQ4I-0002yA-Kr; Thu, 20 Apr 2023 08:55:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523982.814465; Thu, 20 Apr 2023 08:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQ4I-0002y3-I4; Thu, 20 Apr 2023 08:55:06 +0000
Received: by outflank-mailman (input) for mailman id 523982;
 Thu, 20 Apr 2023 08:55:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fY2H=AL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppQ4G-0002xx-GU
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 08:55:04 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2062f.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 09c5ea9a-df59-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 10:55:03 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9659.eurprd04.prod.outlook.com (2603:10a6:10:320::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Thu, 20 Apr
 2023 08:55:00 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 08:55:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09c5ea9a-df59-11ed-b21f-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ab185590-910b-6ada-d607-cc91e9002cfb@suse.com>
Date: Thu, 20 Apr 2023 10:54:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 1/5] x86: support cache-writeback in flush_area_local() et
 al
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
 <ee33ad20-ef6e-504d-6987-59ccb166f8e4@suse.com>
 <53204261-3dac-579f-ede5-7acffd04f4db@citrix.com>
 <6f3726d6-60af-3db7-40cc-63308e427e85@suse.com>
In-Reply-To: <6f3726d6-60af-3db7-40cc-63308e427e85@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0099.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9659:EE_
X-MS-Office365-Filtering-Correlation-Id: 13da261c-21f2-4ea3-95e1-08db417cec3a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 13da261c-21f2-4ea3-95e1-08db417cec3a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 08:54:59.9957
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BH68eD9pvUObDd0kVcN3EJy7EeOmdH8Uu8Qb7RfnMfMDy52C6Q8MIgQ/j3tRVfjK1ENcYc5sgsx9xsr8hSOwzA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9659

On 20.04.2023 10:50, Jan Beulich wrote:
> On 19.04.2023 21:56, Andrew Cooper wrote:
>> But on to the main thing which caught my eye...
>>
>> The FLUSH in FLUSH_CACHE means the flush infrastructure, not "cache
>> flushing", and FLUSH_WRITEBACK is nonsensical next to this.
> 
> I agree; I chose the name simply to avoid further changes, while still
> fitting the naming scheme (i.e. the FLUSH_ prefix). I'm okay to change
> to ...
> 
>>   All other
>> things we flush have a qualification that makes them clear in context.
>> (other than the assist one, which I'm going to time out objections to
>> and revert back to the name which made more sense.)
>>
>> At an absolutely minimum, FLUSH_CACHE first needs renaming to
>> FLUSH_CACHE_EVICT and then this new one you're adding needs to be
>> FLUSH_CACHE_WRITEBACK.
> 
> ... these, but I don't think they're much better (still primarily
> because of the FLUSH_ prefixes).

Actually - are you going to insist on "first"? There would be less code
churn if I first introduced and used FLUSH_CACHE_WRITEBACK, and only
then renamed the few remaining FLUSH_CACHE to FLUSH_CACHE_EVICT.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 09:09:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 09:09:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523988.814475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQHc-0004bm-WA; Thu, 20 Apr 2023 09:08:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523988.814475; Thu, 20 Apr 2023 09:08:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQHc-0004bf-TJ; Thu, 20 Apr 2023 09:08:52 +0000
Received: by outflank-mailman (input) for mailman id 523988;
 Thu, 20 Apr 2023 09:08:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fY2H=AL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppQHb-0004bZ-C5
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 09:08:51 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2062f.outbound.protection.outlook.com
 [2a01:111:f400:fe16::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f628b43e-df5a-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 11:08:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DUZPR04MB9848.eurprd04.prod.outlook.com (2603:10a6:10:4dd::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 09:08:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 09:08:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f628b43e-df5a-11ed-b21f-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a3add69d-8ff5-a206-0a37-4bee454be86c@suse.com>
Date: Thu, 20 Apr 2023 11:08:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 3/5] x86/PV: restrict guest-induced WBINVD (or alike) to
 cache writeback
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
 <0e60520c-d660-1a83-3f57-3466a0ad617b@suse.com>
 <e6315912-7d1f-bad4-71d0-355d361ccbc1@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e6315912-7d1f-bad4-71d0-355d361ccbc1@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS9PR05CA0234.eurprd05.prod.outlook.com
 (2603:10a6:20b:494::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DUZPR04MB9848:EE_
X-MS-Office365-Filtering-Correlation-Id: c7eaebd2-9dd5-4118-5caa-08db417ed942
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c7eaebd2-9dd5-4118-5caa-08db417ed942
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 09:08:47.2045
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DUZPR04MB9848

On 19.04.2023 22:10, Andrew Cooper wrote:
> On 19/04/2023 11:45 am, Jan Beulich wrote:
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -3772,7 +3772,7 @@ long do_mmuext_op(
>>              else if ( unlikely(!cache_flush_permitted(currd)) )
>>                  rc = -EACCES;
>>              else
>> -                wbinvd();
>> +                wbnoinvd();
>>              break;
> 
> It occurs to me that this is fundamentally broken.
> 
> The guest is not in any position to know (or control) whether it gets
> rescheduled elsewhere between now and it logically continuing execution.
> 
> So if there is actually any cache maintenance to do, it can't be a
> WB{...} of any form on just this CPU alone.

Aren't you merely confirming the respective paragraph in the cover letter
here? (The implication would be that at least the data added in the last
patch wants to be guest-type-independent, so that it can be re-used for
PV, even if maybe not in the same patch.)

>> --- a/xen/include/public/arch-x86/cpufeatureset.h
>> +++ b/xen/include/public/arch-x86/cpufeatureset.h
>> @@ -238,7 +238,7 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /
>>  /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
>>  XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
>>  XEN_CPUFEATURE(RSTR_FP_ERR_PTRS, 8*32+ 2) /*A  (F)X{SAVE,RSTOR} always saves/restores FPU Error pointers */
>> -XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*S  WBNOINVD instruction */
>> +XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*A  WBNOINVD instruction */
> 
> Given that PV guests do have several real hypercalls for doing this, I'm
> not particularly inclined to let them do it via an emulated path,
> however easy it might be at a technical level.
> 
> I doubt anything good can come from it.

We permit use of WBINVD; it looks inconsistent to me to not also permit
WBNOINVD then. Furthermore, exposing the feature can then serve as a
(better) hint that behind the scenes we actually carry out the mmuext
ops as write-back, not evict.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 09:15:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 09:15:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523992.814485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQNa-00064t-LY; Thu, 20 Apr 2023 09:15:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523992.814485; Thu, 20 Apr 2023 09:15:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQNa-00064m-Hr; Thu, 20 Apr 2023 09:15:02 +0000
Received: by outflank-mailman (input) for mailman id 523992;
 Thu, 20 Apr 2023 09:15:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fY2H=AL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppQNZ-00064Q-F7
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 09:15:01 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20624.outbound.protection.outlook.com
 [2a01:111:f400:7d00::624])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d3815105-df5b-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 11:15:00 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7182.eurprd04.prod.outlook.com (2603:10a6:800:121::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 09:14:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 09:14:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3815105-df5b-11ed-b21f-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2ca39f14-a1fa-e9c3-1ce2-65d0d5ed79c9@suse.com>
Date: Thu, 20 Apr 2023 11:14:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 4/5] VT-d: restrict iommu_flush_all() to cache writeback
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Kevin Tian <kevin.tian@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
 <d07ee286-52f2-c7ec-2d0d-1c343dbc78be@suse.com>
 <583d3f63-c3b3-dc73-6ca9-0ab5c60b26d8@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <583d3f63-c3b3-dc73-6ca9-0ab5c60b26d8@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0131.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7182:EE_
X-MS-Office365-Filtering-Correlation-Id: afd2dcba-c489-4ab6-695c-08db417fb6c9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: afd2dcba-c489-4ab6-695c-08db417fb6c9
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 09:14:58.8233
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7182

On 19.04.2023 22:46, Andrew Cooper wrote:
> On 19/04/2023 11:46 am, Jan Beulich wrote:
>> We don't need to invalidate caches here; all we're after is that earlier
>> writes have made it to main memory (and aiui even that just in case).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> This, aiui, being an analogue to uses of iommu_sync_cache() (just not
>> range restricted), I wonder whether it shouldn't be conditional upon
>> iommu_non_coherent. Then again I'm vaguely under the impression that
>> we had been here before, possibly even as far as questioning the need
>> for this call altogether.
> 
> I'd far rather we fix it properly than continue to massage around the
> sides of known-broken logic.

I agree in principle, but you're not asking that I actually amend this
single-line change with all the work you outline, are you? To be quite
honest, what you ask for really is something the VT-d maintainer(s)
(i.e. Kevin as it presently stands) ought to be doing (or really have
done long ago) ...

Jan

> Coherency, or not, of the memory accesses of an IOMMU is a per-IOMMU
> property, not a system-wide property.  What the iommu_non_coherent
> global boolean currently does for us is enforce cache maintenance on all
> IOMMUs, even the coherent ones, when any single IOMMU in the system is
> non-coherent.
> 
> Inside the IOMMU driver, cache maintenance should depend on iommu->ecap
> alone, disregarding anything else.  This will improve the performance on
> any system with mixed coherent and non-coherent IOMMUs, without any
> other behavioural impact.
> 
> The complication comes in in p2m-ept when we're sharing EPT and IOMMU
> pagetables, because the EPT can be accessed by more than one IOMMU if
> the guest has devices behind different IOMMUs.
> 
> But we know the devices assigned to the domain.  They're almost always
> static for the lifetime of the domain, and generally single device only,
> so there may be value in considering a per-domain "I've got a
> non-coherent IOMMU" flag, because we absolutely don't want to be walking
> the list of attached IOMMUs on each EPT update.
> 
> 
> But...
> 
> SandyBridge era systems are the only ones I'm aware of where the IOMMUs
> advertise themselves as non-coherent.  And on that generation we quirk
> the superpage capability off anyway, meaning they are ineligible for
> HAP-PT sharing.
> 
> And if we are doing HAP-PT sharing, the cache maintenance for regular
> EPT updates will be crippling for CPU performance, especially as the
> software bit updates are not relevant to the IOMMU anyway.
> 
> A better option would be to simply disallow HAP-PT sharing when there's
> a non-coherent IOMMU in the system, or (slightly more fine grained) to
> disallow adding a device behind a non-coherent IOMMU to a domain using
> HAP-PT sharing (as there's a disable now anyway).
> 
> Either will reduce the complexity of Xen's code without any loss of
> functionality in real systems AFAICT.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 09:24:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 09:24:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.523996.814495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQWI-0007Zs-G9; Thu, 20 Apr 2023 09:24:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 523996.814495; Thu, 20 Apr 2023 09:24:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQWI-0007Zl-Cj; Thu, 20 Apr 2023 09:24:02 +0000
Received: by outflank-mailman (input) for mailman id 523996;
 Thu, 20 Apr 2023 09:24:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=seYc=AL=citrix.com=prvs=467f33b72=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ppQWG-0007ZM-1v
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 09:24:00 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 12924233-df5d-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 11:23:57 +0200 (CEST)
Received: from mail-bn1nam02lp2047.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.47])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Apr 2023 05:23:48 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM6PR03MB5052.namprd03.prod.outlook.com (2603:10b6:5:1f2::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 09:23:47 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%3]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 09:23:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12924233-df5d-11ed-8611-37d641c3527e
X-IronPort-RemoteIP: 104.47.51.47
X-IronPort-MID: 104989347
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,212,1677560400"; 
   d="scan'208";a="104989347"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ce9f+CnsLS53jZdO3yENs/I6TWQ0Bpki9dH27jC8ao8=;
 b=sA8uCbdOP9XJGurAi8TGLTXfoUPnjTvu6Si1chW8lR35ta2B+jXx/9MZauL5TPDWT5ebOtLgUpmUAjWU6xNmFr5oDWJ9H7fV/kVxa9o76JV4bAIVbI7hGT4MU3g3j2pNEIAPYvous/ftqGRK60Jy2gBH0Sh19CclgCKpB6H21Gs=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <26d385da-2ede-5d73-2959-84c8f7d89e03@citrix.com>
Date: Thu, 20 Apr 2023 10:23:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Content-Language: en-GB
To: Thomas Gleixner <tglx@linutronix.de>, Paul Menzel <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E. J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
 <87r0sh4m7a.ffs@tglx> <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
 <87a5z443g2.ffs@tglx> <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
 <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
 <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com> <871qkf3qek.ffs@tglx>
In-Reply-To: <871qkf3qek.ffs@tglx>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 20/04/2023 9:32 am, Thomas Gleixner wrote:
> On Wed, Apr 19 2023 at 17:21, Andrew Cooper wrote:
>> On 19/04/2023 2:50 pm, Andrew Cooper wrote:
>> For xAPIC, the APIC_ID register is writeable (at least,
>> model-specifically), and CPUID only reports the value it would have
>> had at reset.  So the AP bringup logic can't actually use CPUID
>> reliably.
>>
>> This was changed in x2APIC, which made the x2APIC_ID immutable.
>>
>> I don't see an option other than the AP bringup code query for xAPIC vs
>> x2APIC mode, and either looking at the real APIC_ID register, or falling
>> back to CPUID.
> I'm pondering to simply deny parallel mode if x2APIC is not there.

I'm not sure if that will help much.

Just because x2APIC is there doesn't mean it's in use.  There are
several generations of Intel systems which have x2APIC but also use the
opt-out bit in ACPI tables.  There are some machines which have
mismatched APIC-ness settings in the BIOS->OS handover.

There's very little you can do on the BSP alone to know for certain that
the APs come out of wait-for-SIPI already in x2APIC mode.

One way is the ÆPIC Leak "locked into x2APIC mode" giant security
bodge.  If the system really does have a CPU with an APIC ID above 0xfe,
then chances are good that the APs come out consistently...

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 09:42:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 09:42:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524001.814505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQnx-0001gK-2f; Thu, 20 Apr 2023 09:42:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524001.814505; Thu, 20 Apr 2023 09:42:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQnw-0001gD-WC; Thu, 20 Apr 2023 09:42:17 +0000
Received: by outflank-mailman (input) for mailman id 524001;
 Thu, 20 Apr 2023 09:42:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fY2H=AL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppQnu-0001g7-Tg
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 09:42:14 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20605.outbound.protection.outlook.com
 [2a01:111:f400:7d00::605])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a0db647f-df5f-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 11:42:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8759.eurprd04.prod.outlook.com (2603:10a6:10:2e2::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 09:42:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 09:42:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0db647f-df5f-11ed-b21f-6b7b168915f2
Message-ID: <8041b90d-ccdf-20a5-a2b9-d21a5454646a@suse.com>
Date: Thu, 20 Apr 2023 11:42:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 5/5] x86/HVM: limit cache writeback overhead
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Kevin Tian <kevin.tian@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4b42e920-f007-186c-d838-a0294bfa86b5@suse.com>
 <18fcf499-a2ae-ab48-a66f-ca0499097e8a@suse.com>
 <1c79060a-89c0-6dbd-47cf-964192b82020@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <1c79060a-89c0-6dbd-47cf-964192b82020@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 19.04.2023 23:55, Andrew Cooper wrote:
> On 19/04/2023 11:46 am, Jan Beulich wrote:
>> There's no need to write back caches on all CPUs upon seeing a WBINVD
>> exit; ones that a vCPU hasn't run on since the last writeback (or since
>> it was started) can't hold data which may need writing back.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I find it unlikely that this is an improvement in any way at all.
> 
> You're adding a memory allocation, and making the fastpath slower, for
> all HVM domains, even the ~100% of them in practice which never get given
> a device in the first place.
> 
> Just so you can skip the WBINVD side effect on the L1/L2 caches of the
> CPUs this domain happens not to have run on since the last time they
> flushed (which is already an underestimate).  Note how this does not
> change the behaviour for the L3 caches, which form the overwhelming
> majority of the WBINVD overhead in the first place.

You're thinking of only single-node systems here, it seems? I view this
change as particularly relevant for node-constrained domains on NUMA
systems.

As to "making the fastpath slower": That can only be the
__cpumask_set_cpu() added to hvm_do_resume(). What do you suggest? I
can certainly add a conditional there (and then I could also make the
memory allocation conditional), but I thought a LOCK-less RMW memory
operation might better be done in straight line code.

As an aside - after having thought of making this change, I did go and
check what KVM does. And (surprise or not) they do exactly this.

> So my response was going to be "definitely not without the per-domain
> 'reduced cacheability permitted' setting we've discussed".  And even
> then, not without numbers suggesting it's a problem in the first place,
> or at least a better explanation of why it might plausibly be an issue.
> 
> 
> But, in writing this, I've realised a real bug.
> 
> Cache snoops can occur and pull lines sideways for microarchitectural
> reasons.  And even if we want to hand-wave that away as being unlikely
> (it is), you can't hand-wave away rogue speculation in the directmap.
> 
> By not issuing WBINVD on all cores, you've got a real chance of letting
> some lines escape the attempt to evict them.
> 
> But it's worse than that - even IPIing all cores, there's a speculation
> pattern which can cause some lines to survive.  Rare, sure, but not
> impossible.
> 
> Right now, I'm not sure that WBINVD can even be used safely without the
> extra careful use of CR0.{CD,NW}, which provides a workaround for
> native, but nothing helpful for hypervisors...

Wait: See the title and the earlier patches. We're not talking of "evict"
here anymore, but of "write-back". The few cases of "evict" are left alone
by this change. If any of those are affected by what you say (and it looks
like some might be), then fixing that definitely is unrelated work. (You
may have meant the latter part of your reply that way, but you haven't
said so in any way I would recognize.)

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 09:54:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 09:54:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524005.814515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQzi-0003BJ-45; Thu, 20 Apr 2023 09:54:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524005.814515; Thu, 20 Apr 2023 09:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppQzi-0003BC-1N; Thu, 20 Apr 2023 09:54:26 +0000
Received: by outflank-mailman (input) for mailman id 524005;
 Thu, 20 Apr 2023 09:54:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppQzg-0003B2-Ox; Thu, 20 Apr 2023 09:54:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppQzg-0001B0-Ek; Thu, 20 Apr 2023 09:54:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppQzf-0006kz-U8; Thu, 20 Apr 2023 09:54:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppQzf-0006Zx-Tk; Thu, 20 Apr 2023 09:54:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180330-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180330: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:hosts-allocate:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dff17457c482dcb38b7caafd095334f461ef6d35
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 09:54:23 +0000

flight 180330 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180330/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dff17457c482dcb38b7caafd095334f461ef6d35
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    1 days
Failing since        180314  2023-04-19 10:00:24 Z    0 days    9 attempts
Testing same since   180323  2023-04-19 21:01:54 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

(No revision log; it would be 317 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 10:30:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 10:30:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524012.814525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppRYZ-0007Yu-RX; Thu, 20 Apr 2023 10:30:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524012.814525; Thu, 20 Apr 2023 10:30:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppRYZ-0007Yn-Oc; Thu, 20 Apr 2023 10:30:27 +0000
Received: by outflank-mailman (input) for mailman id 524012;
 Thu, 20 Apr 2023 10:20:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ilLs=AL=citrix.com=prvs=4670623b3=Mark.Syms@srs-se1.protection.inumbo.net>)
 id 1ppRP8-0006cO-Si
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 10:20:42 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id feae3d7c-df64-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 12:20:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: feae3d7c-df64-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681986040;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=MULDkdNFZurv6NiIbs1+xdg+sYuQspr9nF1H9kICi2I=;
  b=PdYKy7xxlQFDBPrDbAtlMTjH85HzfBye6Ei0unshd+U3mqt/fMIE7p8d
   4FrNLfPrmpQ0I8Lio+sKkCNwWpRGAusOpg+xZu2KZ8qGUGLGBkFH8oDwF
   CSoZzCsaez8j1loSE8IcntLjgWhddWKJpBzaB4n+/v2uwABUGPvQTFuTr
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106259391
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:CD1G2KycRb9U0fLWNmZ6t+chxirEfRIJ4+MujC+fZmUNrF6WrkUCy
 zcaXGiAaP6NZWv3KYglOdjio0JV6JDczIVrSwVrpSAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UIHUMja4mtC5QRiP6gT5zcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KUwT/
 NcyEhVQUj24odC1wrCYROxLoMt2eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+Nj2P8NQZJrUm9rqsr+WnDigd21dABNfKMIoLQGJQOzx/wS
 mTu5GXZAFJHCPekwjPf0WmAiOieoDiiYddHfFG/3qEz2wDCroAJMzUPWF6m5PW0lEO6c9RYL
 UMS52wpt6da3FewUtD3Uhm8oXiFlh0RQdxdF6s98g7l4rLd/gKxFmUCCDlbZ7QOpMIwADAny
 FKNt9foHiB09q2YT2qH8bWZpi/0PjIaRVLufgddE1FDuYO65thu0FSWFI0L/LOJYsPdNGz56
 BqwiXUCqo41v80J1Ya1+HfhjGf5znTWdTLZ9jk7T0r8sFMgO9/9Pt34gbTIxa0eddjEFzFtq
 FBBwpHDt75WUPlhgQTXGI0w8KeVC+Fp2dE2qXpmBNEf+juk4BZPlqgAsWgldC+F3ivpEAIFg
 XM/WisLvve/xFPwMcdKj3uZUqzGN5TIG9X/TezzZdFTeJV3fwLv1HgwNRfBhTG3zxJ2z/5X1
 XKnnSGEVC9yNEia5GDuG7d1PUEDnUjSOl8/tbiklk/6gNJylVaeSKsfMUvmU93VGJis+V2Pm
 /4Gbpvi9vmqeLGmCsUh2dJJfA9iwLlSLcyelvG7gcbZeFA8RDp8WqGOqV7jEqQ895loei7z1
 inVcidlJJDX3BUr9S3ihqhfVY7S
IronPort-HdrOrdr: A9a23:PvRYTqkxb7zZj4WHIUG/ma5hwlnpDfI93DAbv31ZSRFFG/Fw9v
 rCoB1/73SftN9/YgBCpTn+AtjjfZqxz/BICOoqUYtKPjOHhILAFugL0WKI+VLd8kPFl9K13J
 0QFpRDNA==
X-Talos-CUID: 9a23:gfxRNWMiEAtfL+5DdXRu32QuP50ZX0b7/nTSIWy2DEFPR+jA
X-Talos-MUID: =?us-ascii?q?9a23=3AgYyXdQ4V/S92jm6AoAS2narPxoxz4oSsSx0xuq4?=
 =?us-ascii?q?NspOnLgB+PGagl22OF9o=3D?=
X-IronPort-AV: E=Sophos;i="5.99,212,1677560400"; 
   d="scan'208";a="106259391"
From: Mark Syms <mark.syms@citrix.com>
To: <qemu-devel@nongnu.org>
CC: <sstabellini@kernel.org>, <anthony.perard@citrix.com>, <paul@xen.org>,
	<xen-devel@lists.xenproject.org>, Mark Syms <mark.syms@citrix.com>
Subject: [PATCH 0/1] Updated: Ensure PV ring is drained on disconnect
Date: Thu, 20 Apr 2023 11:20:14 +0100
Message-ID: <20230420102014.647446-1-mark.syms@citrix.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230329105344.3465706-2-mark.syms@citrix.com>
References: <20230329105344.3465706-2-mark.syms@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Updated patch to address intermittent SIGSEGV on domain disconnect/shutdown.

Mark Syms (1):
  Ensure the PV ring is drained on disconnect

 hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)

-- 
2.40.0

>From 21724baa15a72534d98aa2653e9ec39e83559319 Mon Sep 17 00:00:00 2001
From: Mark Syms <mark.syms@citrix.com>
Date: Thu, 20 Apr 2023 11:08:34 +0100
Subject: [PATCH 1/1] Ensure the PV ring is drained on disconnect

Also ensure all pending AIO is complete.

Signed-off-by: Mark Syms <mark.syms@citrix.com>
---
 hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 734da42ea7..d9da4090bf 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -523,6 +523,10 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
 
     dataplane->more_work = 0;
 
+    if (dataplane->sring == 0) {
+        return done_something;
+    }
+
     rc = dataplane->rings.common.req_cons;
     rp = dataplane->rings.common.sring->req_prod;
     xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
@@ -666,14 +670,35 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 {
     XenDevice *xendev;
+    XenBlockRequest *request, *next;
 
     if (!dataplane) {
         return;
     }
 
+    /* We're about to drain the ring. We can cancel the scheduling of any
+     * bottom half now */
+    qemu_bh_cancel(dataplane->bh);
+
+    /* Ensure we have drained the ring */
+    aio_context_acquire(dataplane->ctx);
+    do {
+        xen_block_handle_requests(dataplane);
+    } while (dataplane->more_work);
+    aio_context_release(dataplane->ctx);
+
+    /* Now ensure that all inflight requests are complete */
+    while (!QLIST_EMPTY(&dataplane->inflight)) {
+        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
+            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
+                        request);
+        }
+    }
+
     xendev = dataplane->xendev;
 
     aio_context_acquire(dataplane->ctx);
+
     if (dataplane->event_channel) {
         /* Only reason for failure is a NULL channel */
         xen_device_set_event_channel_context(xendev, dataplane->event_channel,
@@ -684,12 +709,6 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
     blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(dataplane->ctx);
 
-    /*
-     * Now that the context has been moved onto the main thread, cancel
-     * further processing.
-     */
-    qemu_bh_cancel(dataplane->bh);
-
     if (dataplane->event_channel) {
         Error *local_err = NULL;
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:03:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:03:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524018.814537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppS3s-0002Ys-9o; Thu, 20 Apr 2023 11:02:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524018.814537; Thu, 20 Apr 2023 11:02:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppS3s-0002Yl-7G; Thu, 20 Apr 2023 11:02:48 +0000
Received: by outflank-mailman (input) for mailman id 524018;
 Thu, 20 Apr 2023 11:02:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ilLs=AL=citrix.com=prvs=4670623b3=Mark.Syms@srs-se1.protection.inumbo.net>)
 id 1ppS3r-0002Yf-Gx
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:02:47 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id df0aba39-df6a-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:02:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df0aba39-df6a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681988563;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=L0UWyMFi6bXAICg+oR9Paaiks0muCSEBbrXn0Xzhacc=;
  b=bReMXjFebhBzln97If2KBKdO48r0p/yLf6i0VA4QYvWiPiYAKFgflc0W
   Y6LPiDfEZDAewe9oXYox5lfLa0Qz4+vliPEwuPOk/ASZPxQSUxQr8b7bW
   boqVBq8E00uGJdA++BfpuSYzL6NdJn8C/9oztAEif13CJdHP+aAlQA7Nv
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 104999695
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:VvND4apsIwlr4fIpZIz/BxCjsX9eBmIOZRIvgKrLsJaIsI4StFCzt
 garIBmEbvveYzOgKNgnO4vjpxsC7Zbczt5nTwRk/yA9ESxHo5uZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WJwUmAWP6gR5weCzSFNV/rzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXADBRVhqOgtmZ+6CcR7NznME6FZXIPLpK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxV9oUiW45Em5nP7xw1tyrn9dtHSf7RmQO0MxhrJ9
 zOYrjmR7hcyDPOjjmOk7UmQiuLowRPrd6cDLpzg6as/6LGU7jNKU0BHPbehmtGgh0ujHt5SN
 UEQ0iwpq6c06QqsVNaVdwajvHeOsxoYWtxRO+438geAzuzT+QnxLnANUzppeNEg8sgsSlQCx
 lKP2t/kGzFrmLmUUm6GsKeZqyuoPioYJnNEYjULJTbp+PG6/tt11EiWCI8+Tujs1Iad9SzML
 y6irHQGjbgWtuEwxYK2p0/dhiuV+rvJd1tgjunIZV5J/j+Vdab8OdzxtgmDtKcQRGqKZgLf5
 SZZwqBy+MhLVMjQz3LVHY3hCZnzv5643CvgbUmD9nXL3xCk4DadcI9Z+1mSz285Y59fKVcFj
 KI+0D69BaO/31PwN8ebm6rrV6wXIVHITLwJrMz8YNtUeYRWfwSa5ixobkP49zmzwBF9y/BlZ
 s3BL5nE4ZMm5UNPlWPeegvg+eVzmnBWKZ37HvgXMChLIZLBPSXIGN/pwXOFb/wj7bPsnTg5B
 +13bpPQoz0GCb2WX8Ui2dJLRbz8BSRhVM+eRg0+XrLrHzeK70l7UaWLneh8KtI690mX/8+Rl
 kyAtoZj4AKXrRX6xc+iMxiPtJuHsU5DkE8G
IronPort-HdrOrdr: A9a23:ipFU0qhpEc84WM3vHMLGMvF/M3BQXssji2hC6mlwRA09TyX4rb
 HMoB1/73SftN9/YhwdcK+7Scu9qB/nmaKdgrNwAV7BZmfbUQKTRelfBODZogEIdReQygdV79
 YET5RD
X-Talos-CUID: 9a23:Nt9/rG0KcTpBQuEBArtHl7xfGpwdeHDY91jrMWi1U3lzTaSsFXyQwfYx
X-Talos-MUID: 9a23:tcPH2Ab1elzZReBTjxDQw2hDK/xU76mJVHAdvrAXgejcKnkl
X-IronPort-AV: E=Sophos;i="5.99,212,1677560400"; 
   d="scan'208";a="104999695"
From: <mark.syms@citrix.com>
To: <qemu-devel@nongnu.org>
CC: Mark Syms <mark.syms@citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, "Paul
 Durrant" <paul@xen.org>, <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
Date: Thu, 20 Apr 2023 12:02:05 +0100
Message-ID: <20230420110205.688689-1-mark.syms@citrix.com>
X-Mailer: git-send-email 2.40.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

From: Mark Syms <mark.syms@citrix.com>

Ensure the PV ring is drained on disconnect. Also ensure all pending
AIO is complete; otherwise AIO tries to complete into a mapping of the
ring that has already been torn down.

Signed-off-by: Mark Syms <mark.syms@citrix.com>
---
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Anthony Perard <anthony.perard@citrix.com>
CC: Paul Durrant <paul@xen.org>
CC: xen-devel@lists.xenproject.org

v2:
 * Ensure all inflight requests are completed before teardown
 * RESEND to fix formatting
---
 hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 734da42ea7..d9da4090bf 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -523,6 +523,10 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
 
     dataplane->more_work = 0;
 
+    if (dataplane->sring == 0) {
+        return done_something;
+    }
+
     rc = dataplane->rings.common.req_cons;
     rp = dataplane->rings.common.sring->req_prod;
     xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
@@ -666,14 +670,35 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 {
     XenDevice *xendev;
+    XenBlockRequest *request, *next;
 
     if (!dataplane) {
         return;
     }
 
+    /* We're about to drain the ring. We can cancel the scheduling of any
+     * bottom half now */
+    qemu_bh_cancel(dataplane->bh);
+
+    /* Ensure we have drained the ring */
+    aio_context_acquire(dataplane->ctx);
+    do {
+        xen_block_handle_requests(dataplane);
+    } while (dataplane->more_work);
+    aio_context_release(dataplane->ctx);
+
+    /* Now ensure that all inflight requests are complete */
+    while (!QLIST_EMPTY(&dataplane->inflight)) {
+        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
+            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
+                        request);
+        }
+    }
+
     xendev = dataplane->xendev;
 
     aio_context_acquire(dataplane->ctx);
+
     if (dataplane->event_channel) {
         /* Only reason for failure is a NULL channel */
         xen_device_set_event_channel_context(xendev, dataplane->event_channel,
@@ -684,12 +709,6 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
     blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(dataplane->ctx);
 
-    /*
-     * Now that the context has been moved onto the main thread, cancel
-     * further processing.
-     */
-    qemu_bh_cancel(dataplane->bh);
-
     if (dataplane->event_channel) {
         Error *local_err = NULL;
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:08:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:08:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524023.814547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppS9P-0003Ej-00; Thu, 20 Apr 2023 11:08:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524023.814547; Thu, 20 Apr 2023 11:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppS9O-0003Ec-TU; Thu, 20 Apr 2023 11:08:30 +0000
Received: by outflank-mailman (input) for mailman id 524023;
 Thu, 20 Apr 2023 11:08:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JtFG=AL=citrix.com=prvs=4670f3580=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ppS9O-0003EW-5H
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:08:30 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id acdfe25b-df6b-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:08:28 +0200 (CEST)
Received: from mail-mw2nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Apr 2023 07:08:26 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SJ0PR03MB5520.namprd03.prod.outlook.com (2603:10b6:a03:282::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 11:08:24 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6298.045; Thu, 20 Apr 2023
 11:08:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acdfe25b-df6b-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1681988908;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=YgEzoChlIHWl5PCA8ukSD9zeZqs27v1RNPO4uWiJ/qQ=;
  b=EXwicNGAUn/VjYU9iQRbbMxv4PD0QgXez0MB+NzoevBvNDr1D3HooNi+
   Yl9JhvdXTpYfuyiGCHTWef7Ppl/hGSXvtVo0DcMsVw0HwPZ53Woxdr8FM
   KWfMcPtAYfTAppEodQBIb8QotCDQxITdol0lGgAmQUSO4iY3EWlUes5d9
   4=;
X-IronPort-RemoteIP: 104.47.55.109
X-IronPort-MID: 106264113
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:4A4WZ6ClptqNVRVW/wviw5YqxClBgxIJ4kV8jS/XYbTApGsr3mECz
 mAeWWmCPamPYGOhLdF/O9ni/U9Su8LUnII1QQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G9C5gRlDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIwx7gwCjpB1
 fkkDRczc0iYotKE34mrVbw57igjBJGD0II3nFhFlGmcKMl8BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTI+Oxuuzm7IA9ZidABNPLPfdOHX4NNl1uwr
 WPa5WXpRBodMbRzzBLcqiz22LOWxHiTtIQ6DpS51fVvjVyp+y83VUAsTVKlmuLglRvrMz5YA
 wlOksY0loAi+UruQtTjUhmQpH+fogVaS9dWC/c96gyG1uzT+QnxLmoOQyNFadcmnNQrXjFs3
 ViM9/vrGDhuvbu9WX+bsLCOoluP1TM9KGYDYWoBUlED6ty6+IUr1EuXH5BkDbK/icDzFXfo2
 TeWoSMihrIVy8kWy6G8+lOBiDWpznTUcjMICszsdjrNxmtEiESNPeRENXCzAS58Ebuk
IronPort-HdrOrdr: A9a23:9L1Gu6/2Ke8JeG7/lpRuk+DnI+orL9Y04lQ7vn2ZhyYlC/Bw9v
 re5MjzsCWftN9/YgBEpTntAtjjfZqYz+8X3WBzB9aftWvdyQ+VxehZhOOI/9SjIU3DH4VmpM
 BdmsZFebvN5JtB4foSIjPULz/t+ra6GWmT69vj8w==
X-Talos-CUID: =?us-ascii?q?9a23=3AB1Rrw2nmKasU9bWa6Q5WmwSWQVrXOUCDi3yLLG+?=
 =?us-ascii?q?9NUNOdqDNUnqgyrxfg9U7zg=3D=3D?=
X-Talos-MUID: 9a23:okxf5AUE2GIMAzbq/CXFhRJpMcpp2ZntLB1Qq40PpfKfbDMlbg==
X-IronPort-AV: E=Sophos;i="5.99,212,1677560400"; 
   d="scan'208";a="106264113"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VqF6gwSBTmvJl9eEWzeH2buzfUhhsL7oPgjxerSB4Y0pMaHv2BoUrRYwwEgnI81dv0FRWMA1Sq4/nTC4OZEQJTioXbEHfAB6FIvTR2/9WI3/FnMnYqEORle07rkSN4Rut11R/73DsGBjuq/hZrdnWzdv22+wDybpZHpIfktc9vLd/KfkOi8WmsXwo6R/kOAf3KpJG30y/mx0Mvmb0ZX1Jx8Flyt9t4gxW4HyE1xDW86sCQhjdiN86CrbvNbRjBMGIyZKooF9tULDlYGVMoJueN2FG/49csuOioLwb43CghbZ4WLqSmMBksKGmJp/n4Yzc+/wFAL106i6oC/BI2nsuw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ZRqCU4EAavlft/I/NT6IwiF4Ze+Fd1tEXZgv8kJkg38=;
 b=Qmw4uOgFUBnqHHv6Quiv0wblEnaSJXZlWLxSdLJQnw9lY8tTvqEMvyIErImK2Q8BJiTv00FgUr4xJKTYEFdAVUK0x5krNYKGaWjMD75JhujnH6AoM4EMRxckZ3qUH+27qxWUx9ofvrUyFQ8HYj2aVy7QoIjp0DcS1viPE2Q+XtOaJbZIjyY7Vk1gOIWUtI2qrfKyhOQAKtIsZj9Zr29tbGGAbnf5UOLy4SRrOjC/Cm1V3iU1r+uy7UhloVUlB6AJplBOqcCMOG55TX4XHTtZEWjLCAKzDJrEbaT/INJ1i61WTCkoOms6X+Q+MqtsBeJJj6m9DwA60e/nJJGmTQZMqQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZRqCU4EAavlft/I/NT6IwiF4Ze+Fd1tEXZgv8kJkg38=;
 b=CMYcPWlFF8r0MKpqozmwl8RKofg4nFyWJb3kDb+YZq06i53lrBtKMPC9Kxr8qpfjVpfGEHBQ+YTybUTMbwc8VBZzRGZQxL0J86bUqht5eauoYOuhjQIGeRDJVBYD+us7Mz8+puPwu6/PpLMnBbZghZwVDlukmau4fa6iwc4CeSs=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 20 Apr 2023 13:08:19 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <julien@xen.org>
Subject: Re: Removing armhf boxes again (was: Re: HEADS UP: re-adding the
 armhf boxes to osstest)
Message-ID: <ZEEdIzlMLZEj2PEv@Air-de-Roger>
References: <ZDkmu0mgy23ypaL7@Air-de-Roger>
 <92e6ea3e-a381-a77e-f909-bf65d009647f@suse.com>
 <ZD0fbNMqT7tMZVAq@Air-de-Roger>
 <ZD/Vy1HAhvvr2tPM@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZD/Vy1HAhvvr2tPM@Air-de-Roger>
X-ClientProxiedBy: LO2P265CA0500.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13b::7) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SJ0PR03MB5520:EE_
X-MS-Office365-Filtering-Correlation-Id: 434054d1-7636-401c-1593-08db418f8f33
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nowy3E2ht71UVSMY/O9ijB7Nm5nDxxjcB4ZCCvRfp0uVXkg/zq6SHO6DqBghvKoVQFiUgigfGZyGBcjnrjtQUdag5hHFEFfIocXBFiQG9pBs0Y7GgBIM0qx13IVdroSwj0pa92uOP5bkedTT2uo8CW/o46NArek0343A58h625T1MC57fkWOEGphxSOwDEGqO6svhzOMT8TZ1XsKAnWfArolpEKcXHri7zPDRtZEpDnm+kZQ+TXUPuVOdnqqGeF1kvq4wBXrs5cxYSKFFKWFat5f3jQE1QsnhjeUybgmwH6NcEb3PrZwjsVfwSKtJ91DiBXwgK0lFczVuJat+dVN1Ottsn+WNoFjsGZE6SF39KxmkvdL2M/c1vSrtaUgoGI5BnEvxYN+kbKzMZaigdGAofkh9knWxB/KzW7zSGHNtSGm9h1A0OTpRb1FtScaUJXwopQ01Lb+PIeXKipxMJq1CZ+fvm6+U2O6chfkcsoz8WayvFsEo1nwe7aqCh9F8Y2ZQmhbywipjYDz7cZI3QffAHcEsFuHWDhGtP5cxmNoGajeDS5pE6AkZPyw4Rsm8JfR
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(7916004)(4636009)(346002)(136003)(396003)(366004)(39860400002)(376002)(451199021)(82960400001)(38100700002)(26005)(186003)(6506007)(6512007)(9686003)(5660300002)(2906002)(4744005)(8676002)(8936002)(85182001)(478600001)(54906003)(86362001)(6486002)(6666004)(66556008)(66946007)(66476007)(41300700001)(6916009)(316002)(4326008)(33716001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UjVwQWR0OWppZ05veitzUEhUbEdUa0ZTcFgxMTRKeW12SEpOVWxBMU5NcDBW?=
 =?utf-8?B?Z3NQbC9ScUM3emtjSmxqYmJ0b0N1NnRKVWdORDhTM0dkcDAvOTh0WmtaSlJk?=
 =?utf-8?B?ampqMHptRjd5NTdtUTVXRHMveHk4UnJGL0NZL0llOXZtRWR4NXZOUUYzZ05k?=
 =?utf-8?B?N0Y4bEpsRkMxdXhxQm1GOWhRNnpFdDVScFBEaE44OE5uOWxNQmY4czVVM05i?=
 =?utf-8?B?Z25kODZISm9JbzJDekdQSGs3OXRkeWk1cE51bGlIYTErRGNSZWJ0eEhnZ3lj?=
 =?utf-8?B?eUhGQmlMQTFZUHhheCtvdFJoNUVtVDY2NVA3T093eStuVXJrZXlaZ2R3QmZj?=
 =?utf-8?B?Q2F5WCtWYVJ4L2JqTTViM3lXR2NYL3J1S0FxNzdCNUxQY0xZL2gzWGlzR2VV?=
 =?utf-8?B?UkJjRTA4emhDK2lNS216R0ZweHVNVVk5MmEycWlpT0wzZ0R1Rk9UaFZIM3ZF?=
 =?utf-8?B?MGsraGc3VHJQZEdwWkJsQUxKMWlqaEMrakhBRHlSbjVPSytDRjhuS2dVcWdu?=
 =?utf-8?B?YW40d2NydHpmWEdteFhYZGJuOFJaZ2h1Q01saWNlVERnWnhYMWtsZWdwQVZS?=
 =?utf-8?B?RG1acnFFRXBPU1BTRW9SemcrV2wzYzhSdHIweWt2aTNRWmNHVEtHVmF2K1o0?=
 =?utf-8?B?eGcyRnBrOGxTL2xibEIwOTg4Sk1JazErOVNZZVJyNVgyVEd2UlJicU9EQzBo?=
 =?utf-8?B?d25lNk9taW13bXJRQXdlWmRGTndRcGhXWUhRR05MU1hnR2I3Slo0SG5PV3J3?=
 =?utf-8?B?WTAxZEMwWkZIUnR3Vis1U1ZNTFBWa3FSZ216cFhqc2pTSVEzVVF1RTlwUHBy?=
 =?utf-8?B?cDNmM0VpUzJSVkI0aGR5Ykwxb2VCc1lnRENBeGRlOCt6OUZMcC9WZm9iRE5F?=
 =?utf-8?B?ZjdQSGdTMndmdmpNT29haVBDS05pc0pEYW9QNVBoQUtIZndXSW13c09GNkhO?=
 =?utf-8?B?NG5oUVlJZW1KcW1GWUorempRYzJPSW1ySnVyQjlNc3dHUW9EaEtpaGhURnIz?=
 =?utf-8?B?Z0xsNHhaS3dmNDEwM2FBRExSVEt5T2ozM2w0YjY5WUVIRWFoNlJrbUF4Tldk?=
 =?utf-8?B?Y0FDWG1EZHJSWGluTTlKRFBuQnYraFE3QmpBZUE4YlZabEF4K0VBckVOU09l?=
 =?utf-8?B?dWIwai9FUUdlWTdzdVhHdkZlcjZlYUcwWVQzS1Mxb2pKaERQOGNGYWxTdjJm?=
 =?utf-8?B?ZXBHZ0pxQ1FZd01GY0YybDRzSHd2dUhjc2FkSEo5Qi95bldHc2NmQkNVcGlm?=
 =?utf-8?B?ZDFwRU9HM3NUQ1llMTRKaVBuTWpKMGh0YzFCTkdYM3NqTHdGVXRDWTN3dDZT?=
 =?utf-8?B?dVVRZmVLUkNpZGZYU3NUeHlXbDdYdkwxU3dQemtUb0ltT0FJK2RuUERKT3du?=
 =?utf-8?B?a01vNS9mbUk4WGlyNVNnOEFJY3hXM3VRYVNsc1MyUmhmYTAwanhHbWRweGJE?=
 =?utf-8?B?TGNoMytEVHI4Rm5LM0ZMSWU2blRsZUM2S0VOQ3BUcm51eGlKS2FXdlcybDM5?=
 =?utf-8?B?aGVNdWRITWZQdU9NSlBScHJHcFIzUHVuYjRqVGdqb3NBbmRibzBESEpQMkxZ?=
 =?utf-8?B?eVJVMUIvTlRuQXlrcnRDazQwa1hxSVBLQTdBUkEvdTFUM2VsVW5iT1pSSEJj?=
 =?utf-8?B?M2dBRTdOMjFPdk1ZZ2Zza3VlRXhkWURJU2tETW9odGZsMjE2ZUlVZDFpZmpP?=
 =?utf-8?B?bHN0aDZBNlpLZjI1UHZzQTVYcXVCUVNMRUhFaEV4YlI3OGFXQmdnUEdpRkht?=
 =?utf-8?B?Q1AvUlhYcG5tMExTbmtHMENGMzBnYWJ5T3BKVlg5YTd4T1NVMm1SVFVES3ZH?=
 =?utf-8?B?di9rR3BPcWZoajMrZ1BvcG5SSjFSYW53bzNtVmZuOGs0T0c3MWZMdnh6c1ZO?=
 =?utf-8?B?NDJUY01majVGeE56S081ajZrQlVPQmMzaDc5YmZyaUVXVGlZbzlxZGdxd1ox?=
 =?utf-8?B?YVJ3c1N5R3IveGJmcGV1azBRbnoyNzBvems2QmxWOEVsVVREazFlYXFBcjlR?=
 =?utf-8?B?bm1NYlpoTkVRSEREaDNmWEtCMmZoblZzT0lUYkZPWEJicTlTcEE0M3R0czBY?=
 =?utf-8?B?dHdiZDI5N05LTlIzTGpudEJ5cEhPcm5oYWMvUXhndXo2d3luUmt5amlmbmp2?=
 =?utf-8?Q?upvaeA9SmvHsAwhX9iNfbfPWx?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	djIYO5JN+AvoJLmcRNoCLyYqiexbkMctXDa5BWzZeIKVmiQd7aK2LUDAKEfqCgbBpLBAb8l7JvhOKjSbeY2PPE/LXk7VOSvgmD2F/gyzqFEKhWhVVjoQngfY2YNXOb5//0W1RYD0ueYO73+4d5icxIiDfIBWT+ehG8v3MEZwdn0xl09PGNA62ksYbDkmcRrEJ3/Ohf/fK9+P3QY7Y8fk85RCX/U6OAvxayyyCWmRSBNGPJuwAjuKWR8k1bbJfkmgJkGUiZcU1PdrN0HIzc1gNFTKXl8L7tFuzWshyIBasJ3sZ2e/U/FgPfZyNUb1X7tvT3WLx1QlX4nCF4bkMUNZaej88/a3ix+Or+7kRyeLLIaAJ7kbErDEjCkmoPhA0eNL81njyFwj5WhmMTg5Zu72ubSxh5vEpa/JLOL/onvQ1XvA+h8pqH23tTu4mAqNWPj1psPjYLJKGQWnVZ+z+xC1KMd2DAUyidYgY3yfmnGHiiUDOyiA4W/qLMp7gBSv0ldgGu29YvIrd6Wtw+X8XbXpm/YE6m1fPj/1mg0GJbuQgOR67Oxv9d3H98KlAS3O2w9Jsfiz/M2GV4XCvEDBSwcXimehuQLJryt1aThJ0xUYSr9qtwvaSzr8DxpplT9FE3JGQoBMo68Y8OOb4wX7hhw8OsV8dhV9Udn7TUSgp28tv1m1XaIA3aSOU+7tzVu+j1uCLmWJlUZu2PMZHYmN99+++NVUylC0mfPxq/Nxkd4kHVOVPfcn1ABpH5XD6BDdBufUzYsh9uPrLPBRPSbGPstdHTUAyenV41AVjllMEp3mYgV0tqbMA53fu/fBLYP2DKq0n0eWq/6hAtk8W2p1xhXXRQ==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 434054d1-7636-401c-1593-08db418f8f33
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 11:08:24.4120
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GC0RQVtpdN/ZApGaoH9Dsnu2gxJHBZAXTS68MKxhf+hpSQMhH4p/fWZfw1/7QrsowtsLbzi736nQPK7KsEfeqw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5520

On Wed, Apr 19, 2023 at 01:51:39PM +0200, Roger Pau Monné wrote:
> I'm afraid the serial output doesn't work on any of the Cubietruck
> boxes, so I had to unbless all of them (because the arndales are not
> suitable builders).
> 
> Have already contacted Credativ to further investigate.

In the meantime I think we can work around the issue by using the
cubietrucks only for builds: that should also prevent them from getting
picked up for examine flights.

I've removed the purpose-test flag from the cubietrucks and blessed
them again.  That however leaves us with just 3 armhf boxes for
running the tests, because arndale-bluewater is busted.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:18:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:18:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524027.814558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSIN-0004jg-Rt; Thu, 20 Apr 2023 11:17:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524027.814558; Thu, 20 Apr 2023 11:17:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSIN-0004jZ-OS; Thu, 20 Apr 2023 11:17:47 +0000
Received: by outflank-mailman (input) for mailman id 524027;
 Thu, 20 Apr 2023 11:17:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J46Y=AL=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1ppSIM-0004jT-DZ
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:17:46 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f830a07d-df6c-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:17:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f830a07d-df6c-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1681989462;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=t5XqcHn966UpzdXXi00/lSAzdYJUEVQOjSwT6u3WS8Y=;
	b=pAb6n5DzMyj9cPuaDVh1nEa4eMPTGWjlabRanF0gKYgC9Dt16bSxIF65k8A6Phxdd7b+m1
	8rtjGcomkGZGHPfqi5gCMDsNgIxkIz7fE3iWI1yVPEr7CzAxuLO52VMUjueL9yA1cignQ0
	2aZBXvrGK95/A3SZLIzR4CO2Gt6BjAJeGZFwZ0r5DeQJZY9f+42NHleeHSDTjxF+sNMncp
	Pj8zsu3xSjO1U4uiCaq79l7sMSO/XzBLws+gTsKxCyOwMVcTz52DRxt/Fl9qBPDScBZpAE
	EzRwxWNG9FQP9wQGTFVsN5ZBm94BxGBjsCTcRwuXnpgNipezItO8FzlRvwCjfQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1681989462;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=t5XqcHn966UpzdXXi00/lSAzdYJUEVQOjSwT6u3WS8Y=;
	b=K7n7gkxobQSbdvGZErkz25VRgfZFGdff1cfdZxrOPRjWpL/QdasoOCujc47S1yFH9hOfDu
	jdhNH58O+gxzlPDw==
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Menzel
 <pmenzel@molgen.mpg.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Brian Gerst <brgerst@gmail.com>, Arjan van de Veen
 <arjan@linux.intel.com>, Paolo Bonzini <pbonzini@redhat.com>, Paul
 McKenney <paulmck@kernel.org>, Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>, Oleksandr Natalenko
 <oleksandr@natalenko.name>, "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>, Arnd
 Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org, Catalin
 Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <26d385da-2ede-5d73-2959-84c8f7d89e03@citrix.com>
References: <20230414225551.858160935@linutronix.de>
 <8247ce4d-15b7-03b2-0c9b-74f8cd6cad50@molgen.mpg.de> <87wn2a4la5.ffs@tglx>
 <bd5a6a93-def1-9248-2258-c3d3b40071ef@molgen.mpg.de> <87ttxd4qxz.ffs@tglx>
 <87r0sh4m7a.ffs@tglx> <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
 <87a5z443g2.ffs@tglx> <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
 <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
 <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com> <871qkf3qek.ffs@tglx>
 <26d385da-2ede-5d73-2959-84c8f7d89e03@citrix.com>
Date: Thu, 20 Apr 2023 13:17:40 +0200
Message-ID: <87y1mm3iqz.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Thu, Apr 20 2023 at 10:23, Andrew Cooper wrote:
> On 20/04/2023 9:32 am, Thomas Gleixner wrote:
>> I'm pondering to simply deny parallel mode if x2APIC is not there.
>
> I'm not sure if that will help much.

Spoilsport.

> Just because x2APIC is there doesn't mean it's in use.  There are
> several generations of Intel system which have x2APIC but also use the
> opt-out bit in ACPI tables.  There are some machines which have
> mismatched APIC-ness settings in the BIOS->OS handover.
>
> There's very little you can do on the BSP alone to know for certain that
> the APs come out of wait-for-SIPI already in x2APIC mode.

Yeah. Reading the APIC that early is going to be entertaining too :)

> One way is the ÆPIC Leak "locked into x2APIC mode" giant security
> bodge.

Bah.

> If the system really does have a CPU with an APIC ID above 0xfe, then
> chances are good that the APs come out consistently...

Anything else would be really magic :)


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:25:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:25:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524031.814567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSPy-0006Gs-JV; Thu, 20 Apr 2023 11:25:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524031.814567; Thu, 20 Apr 2023 11:25:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSPy-0006Gl-Go; Thu, 20 Apr 2023 11:25:38 +0000
Received: by outflank-mailman (input) for mailman id 524031;
 Thu, 20 Apr 2023 11:25:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSPx-0006GU-2x
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:25:37 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 10ab30ad-df6e-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:25:34 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1853B1480;
 Thu, 20 Apr 2023 04:26:17 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7B0443F587;
 Thu, 20 Apr 2023 04:25:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10ab30ad-df6e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <wei.chen@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 00/17] Device tree based NUMA support for Arm - Part#3
Date: Thu, 20 Apr 2023 19:25:04 +0800
Message-Id: <20230420112521.3272732-1-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

(Henry: Following the offline discussion with Wei, I will be
the one following up on the upstream comments for this series.
I have already addressed all comments on v2 so far, hence sending v3 out.)

The preparation work to support NUMA on Arm has been merged
and can be found at [1] and [2]. The initial discussions of
the Arm NUMA support can be found at [3].

- Background of this series:

Xen's memory allocation and scheduler modules are NUMA aware.
However, only x86 has implemented the architecture APIs to
support NUMA; Arm has been providing a set of fake architecture
APIs to stay compatible with the NUMA-aware memory allocator
and scheduler.

Arm systems worked well as single-node NUMA systems with these
fake APIs, because there were no multi-node NUMA systems on Arm.
In recent years, however, more and more Arm devices have come
with multiple NUMA nodes.

This creates a new problem: when Xen runs on these Arm devices,
it still treats them as single-node SMP systems. The NUMA affinity
capabilities of Xen's memory allocator and scheduler become
meaningless, because they rely on input data that does not reflect
the real NUMA layout.

Xen still assumes the access time to all memory is the same for
all CPUs, yet it may allocate a VM's memory from different NUMA
nodes with different access speeds. Workloads inside the VM can
amplify this difference, causing performance instability and
timeouts.

So in this patch series, we implement a set of NUMA APIs that use
the device tree to describe the NUMA layout. We reuse most of the
x86 NUMA code to create and maintain the mapping between memory
and CPUs, and to build the distance matrix between any two NUMA
nodes. Except for ACPI and some x86-specific code, we have moved
the rest to common code. In a later stage, when we implement
ACPI-based NUMA for Arm64, we may move the ACPI NUMA code to
common as well, but for now we keep it x86-only.

This patch series has been tested and boots well on an FVP in
Arm64 mode with NUMA configuration in the device tree, and on an
HPE x86 NUMA machine.

[1] https://lists.xenproject.org/archives/html/xen-devel/2022-06/msg00499.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2022-11/msg01043.html
[3] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg01903.html
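
As a hedged illustration (not taken from this series), the kind of layout
this series parses is typically described in the device tree with
`numa-node-id` properties and a `distance-map` node, following the standard
devicetree NUMA binding (`numa-distance-map-v1`). Addresses, node IDs and
distances below are examples only:

```dts
/* Hypothetical two-node system; values are illustrative only. */
/ {
    memory@80000000 {
        device_type = "memory";
        reg = <0x0 0x80000000 0x0 0x80000000>;
        numa-node-id = <0>;
    };

    memory@880000000 {
        device_type = "memory";
        reg = <0x8 0x80000000 0x0 0x80000000>;
        numa-node-id = <1>;
    };

    cpus {
        cpu@0   { numa-node-id = <0>; /* ... */ };
        cpu@100 { numa-node-id = <1>; /* ... */ };
    };

    distance-map {
        compatible = "numa-distance-map-v1";
        distance-matrix = <0 0 10>,  /* local distance  */
                          <0 1 20>,  /* remote distance */
                          <1 0 20>,
                          <1 1 10>;
    };
};
```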

Henry Wang (1):
  xen/arm: Set correct per-cpu cpu_core_mask

Wei Chen (16):
  xen/arm: use NR_MEM_BANKS to override default NR_NODE_MEMBLKS
  xen/arm: implement helpers to get and update NUMA status
  xen/arm: implement node distance helpers for Arm
  xen/arm: use arch_get_ram_range to get memory ranges from bootinfo
  xen/arm: build NUMA cpu_to_node map in dt_smp_init_cpus
  xen/arm: Add boot and secondary CPU to NUMA system
  xen/arm: introduce a helper to parse device tree processor node
  xen/arm: introduce a helper to parse device tree memory node
  xen/arm: introduce a helper to parse device tree NUMA distance map
  xen/arm: unified entry to parse all NUMA data from device tree
  xen/arm: keep guests NUMA unaware
  xen/arm: enable device tree based NUMA in system init
  xen/arm: implement numa_node_to_arch_nid for device tree NUMA
  xen/arm: use CONFIG_NUMA to gate node_online_map in smpboot
  xen/arm: Provide Kconfig options for Arm to enable NUMA
  docs: update numa command line to support Arm

 SUPPORT.md                        |   1 +
 docs/misc/xen-command-line.pandoc |   2 +-
 xen/arch/arm/Kconfig              |  11 ++
 xen/arch/arm/Makefile             |   2 +
 xen/arch/arm/domain_build.c       |   6 +
 xen/arch/arm/include/asm/numa.h   |  92 +++++++++-
 xen/arch/arm/numa.c               | 185 +++++++++++++++++++
 xen/arch/arm/numa_device_tree.c   | 290 ++++++++++++++++++++++++++++++
 xen/arch/arm/setup.c              |  17 ++
 xen/arch/arm/smpboot.c            |  38 ++++
 xen/arch/x86/include/asm/numa.h   |   1 -
 xen/arch/x86/srat.c               |   2 +-
 xen/include/xen/numa.h            |  10 ++
 13 files changed, 653 insertions(+), 4 deletions(-)
 create mode 100644 xen/arch/arm/numa.c
 create mode 100644 xen/arch/arm/numa_device_tree.c

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:25:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:25:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524032.814578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQ2-0006X0-SI; Thu, 20 Apr 2023 11:25:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524032.814578; Thu, 20 Apr 2023 11:25:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQ2-0006Wt-O8; Thu, 20 Apr 2023 11:25:42 +0000
Received: by outflank-mailman (input) for mailman id 524032;
 Thu, 20 Apr 2023 11:25:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQ1-0006Vv-6M
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:25:41 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 14432ac5-df6e-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:25:40 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 339771516;
 Thu, 20 Apr 2023 04:26:23 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id D9E013F587;
 Thu, 20 Apr 2023 04:25:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14432ac5-df6e-11ed-b21f-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 01/17] xen/arm: use NR_MEM_BANKS to override default NR_NODE_MEMBLKS
Date: Thu, 20 Apr 2023 19:25:05 +0800
Message-Id: <20230420112521.3272732-2-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

A memory range described in the device tree cannot be split across
multiple nodes, and it is very likely that if you have more than
64 nodes, you will need a lot more than 2 regions per node. So the
default NR_NODE_MEMBLKS value (MAX_NUMNODES * 2) makes no sense
on Arm.

Therefore, for Arm, define NR_NODE_MEMBLKS as an alias of
NR_MEM_BANKS. In the future NR_MEM_BANKS will be user-configurable
via Kconfig, but for now it stays at 128 on Arm. This avoids having
different ways to define the value for NUMA vs non-NUMA.

Further discussions can be found here[1].

[1] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
By checking the discussion in [1] and [2]
[1] https://lists.xenproject.org/archives/html/xen-devel/2023-01/msg00595.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
v2 -> v3:
1. No change
v1 -> v2:
1. Add code comments to explain using NR_MEM_BANKS for Arm
2. Refine commit messages.
---
 xen/arch/arm/include/asm/numa.h | 19 ++++++++++++++++++-
 xen/include/xen/numa.h          |  9 +++++++++
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index e2bee2bd82..7d6ae36a19 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -3,9 +3,26 @@
 
 #include <xen/mm.h>
 
+#include <asm/setup.h>
+
 typedef u8 nodeid_t;
 
-#ifndef CONFIG_NUMA
+#ifdef CONFIG_NUMA
+
+/*
+ * It is very likely that if you have more than 64 nodes, you will
+ * need a lot more than 2 regions per node. So, for Arm, we simply
+ * define NR_NODE_MEMBLKS as an alias of NR_MEM_BANKS.
+ * In the future NR_MEM_BANKS will be bumped for new platforms,
+ * but for now leave NR_MEM_BANKS as it is on Arm. This avoids
+ * having different ways to define the value for NUMA vs non-NUMA.
+ *
+ * Further discussions can be found here:
+ * https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
+ */
+#define NR_NODE_MEMBLKS NR_MEM_BANKS
+
+#else
 
 /* Fake one node for now. See also node_online_map. */
 #define cpu_to_node(cpu) 0
diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index 29b8c2df89..b86d0851fc 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -13,7 +13,16 @@
 #define MAX_NUMNODES 1
 #endif
 
+/*
+ * Some architectures may have different requirements for the
+ * number of node memory blocks. They can define their own
+ * NR_NODE_MEMBLKS in asm/numa.h to reflect the architecture's
+ * implementation. If an arch does not provide a specific value,
+ * the following default NR_NODE_MEMBLKS will be used.
+ */
+#ifndef NR_NODE_MEMBLKS
 #define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)
+#endif
 
 #define vcpu_to_node(v) (cpu_to_node((v)->processor))
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:25:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:25:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524033.814588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQ7-0006px-8g; Thu, 20 Apr 2023 11:25:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524033.814588; Thu, 20 Apr 2023 11:25:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQ7-0006pq-5V; Thu, 20 Apr 2023 11:25:47 +0000
Received: by outflank-mailman (input) for mailman id 524033;
 Thu, 20 Apr 2023 11:25:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQ5-0006Vv-M4
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:25:45 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 17195841-df6e-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:25:44 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0CAE21480;
 Thu, 20 Apr 2023 04:26:28 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 846CA3F587;
 Thu, 20 Apr 2023 04:25:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17195841-df6e-11ed-b21f-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 02/17] xen/arm: implement helpers to get and update NUMA status
Date: Thu, 20 Apr 2023 19:25:06 +0800
Message-Id: <20230420112521.3272732-3-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

NUMA has one global switch and one implementation-specific switch.
For the ACPI NUMA implementation Xen has acpi_numa, so introduce
device_tree_numa for the device tree NUMA implementation, using an
enumeration to indicate the default, off and on states.

arch_numa_disabled reports the device_tree_numa state, but since
no boot arguments are provided yet to set up device_tree_numa,
arch_numa_setup simply returns -EINVAL in this patch.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. Rename the first entry of enum dt_numa_status as DT_NUMA_DEFAULT.
2. Make enum dt_numa_status device_tree_numa as __ro_after_init and
   assign it explicitly to DT_NUMA_DEFAULT.
3. Update the year in copyright to 2023.
4. Don't move the x86 numa_disabled() and make Arm's numa_disabled()
   a static inline function for !CONFIG_NUMA.
v1 -> v2:
1. Use arch_numa_disabled to replace numa_enable_with_firmware.
2. Introduce enumerations for device tree numa status.
3. Use common numa_disabled, drop Arm version numa_disabled.
4. Introduce arch_numa_setup for Arm.
5. Rename bad_srat to numa_bad.
6. Add numa_enable_with_firmware helper.
7. Add numa_disabled helper.
8. Refine commit message.
---
 xen/arch/arm/include/asm/numa.h | 17 +++++++++++
 xen/arch/arm/numa.c             | 50 +++++++++++++++++++++++++++++++++
 2 files changed, 67 insertions(+)
 create mode 100644 xen/arch/arm/numa.c

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 7d6ae36a19..83f60ad05b 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -22,6 +22,8 @@ typedef u8 nodeid_t;
  */
 #define NR_NODE_MEMBLKS NR_MEM_BANKS
 
+extern bool numa_disabled(void);
+
 #else
 
 /* Fake one node for now. See also node_online_map. */
@@ -39,6 +41,21 @@ extern mfn_t first_valid_mfn;
 #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
 
+static inline bool numa_disabled(void)
+{
+    return true;
+}
+
+static inline bool arch_numa_unavailable(void)
+{
+    return true;
+}
+
+static inline bool arch_numa_broken(void)
+{
+    return true;
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
new file mode 100644
index 0000000000..eb5d0632cb
--- /dev/null
+++ b/xen/arch/arm/numa.c
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm Architecture support layer for NUMA.
+ *
+ * Copyright (C) 2023 Arm Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+#include <xen/init.h>
+#include <xen/numa.h>
+
+enum dt_numa_status {
+    DT_NUMA_DEFAULT,
+    DT_NUMA_ON,
+    DT_NUMA_OFF,
+};
+
+static enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
+
+void __init numa_fw_bad(void)
+{
+    printk(KERN_ERR "NUMA: device tree numa info table not used.\n");
+    device_tree_numa = DT_NUMA_OFF;
+}
+
+bool __init arch_numa_unavailable(void)
+{
+    return device_tree_numa != DT_NUMA_ON;
+}
+
+bool arch_numa_disabled(void)
+{
+    return device_tree_numa == DT_NUMA_OFF;
+}
+
+int __init arch_numa_setup(const char *opt)
+{
+    return -EINVAL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:26:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:26:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524034.814598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQC-0007B7-GK; Thu, 20 Apr 2023 11:25:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524034.814598; Thu, 20 Apr 2023 11:25:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQC-0007AV-Cb; Thu, 20 Apr 2023 11:25:52 +0000
Received: by outflank-mailman (input) for mailman id 524034;
 Thu, 20 Apr 2023 11:25:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQB-0006Vv-20
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:25:51 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 1a488df7-df6e-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:25:50 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3F8CE1516;
 Thu, 20 Apr 2023 04:26:33 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A49843F587;
 Thu, 20 Apr 2023 04:25:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a488df7-df6e-11ed-b21f-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 03/17] xen/arm: implement node distance helpers for Arm
Date: Thu, 20 Apr 2023 19:25:07 +0800
Message-Id: <20230420112521.3272732-4-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

We will parse NUMA node distances from the device tree, so we need
a matrix to record the distance between any two parsed nodes.
Accordingly, this patch provides the numa_set_distance API for
device tree NUMA to set the distance between any two nodes. When
NUMA initialization fails, __node_distance returns
NUMA_REMOTE_DISTANCE; this helps us avoid rolling back the
distance matrix on failure.

As both x86 and Arm implement __node_distance, move its
declaration from asm/numa.h to xen/numa.h. At the same time, the
outdated u8 return value on x86 is changed to unsigned char.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. Use __ro_after_init for node_distance_map.
2. Correct format of if condition identation in numa_set_distance().
3. Drop the unnecessary change to the year of copyright.
4. Use ARRAY_SIZE() to determine node_distance_map's row, column size.
v1 -> v2:
1. Use unsigned int/char instead of uint32_t/u8.
2. Re-org the commit message.
---
 xen/arch/arm/Makefile           |  1 +
 xen/arch/arm/include/asm/numa.h | 13 +++++++++
 xen/arch/arm/numa.c             | 52 +++++++++++++++++++++++++++++++++
 xen/arch/x86/include/asm/numa.h |  1 -
 xen/arch/x86/srat.c             |  2 +-
 xen/include/xen/numa.h          |  1 +
 6 files changed, 68 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 4d076b278b..9073398d6e 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -38,6 +38,7 @@ obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += mem_access.o
 obj-y += mm.o
 obj-y += monitor.o
+obj-$(CONFIG_NUMA) += numa.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += platform.o
diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 83f60ad05b..123a1a8dd0 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -22,7 +22,20 @@ typedef u8 nodeid_t;
  */
 #define NR_NODE_MEMBLKS NR_MEM_BANKS
 
+/*
+ * In the ACPI spec, 0-9 are reserved values for node distance,
+ * 10 indicates local node distance and 20 indicates remote node
+ * distance. Setting the node distance map from the device tree
+ * follows the ACPI definition.
+ */
+#define NUMA_DISTANCE_UDF_MIN   0
+#define NUMA_DISTANCE_UDF_MAX   9
+#define NUMA_LOCAL_DISTANCE     10
+#define NUMA_REMOTE_DISTANCE    20
+
 extern bool numa_disabled(void);
+extern void numa_set_distance(nodeid_t from, nodeid_t to,
+                              unsigned int distance);
 
 #else
 
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index eb5d0632cb..c5ef81aad3 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -28,6 +28,11 @@ enum dt_numa_status {
 
 static enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
 
+static unsigned char __ro_after_init
+node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
+    { 0 }
+};
+
 void __init numa_fw_bad(void)
 {
     printk(KERN_ERR "NUMA: device tree numa info table not used.\n");
@@ -48,3 +53,50 @@ int __init arch_numa_setup(const char *opt)
 {
     return -EINVAL;
 }
+
+void __init numa_set_distance(nodeid_t from, nodeid_t to,
+                              unsigned int distance)
+{
+    if ( from >= ARRAY_SIZE(node_distance_map) ||
+         to >= ARRAY_SIZE(node_distance_map[0]) )
+    {
+        printk(KERN_WARNING
+               "NUMA: invalid nodes: from=%"PRIu8" to=%"PRIu8" MAX=%"PRIu8"\n",
+               from, to, MAX_NUMNODES);
+        return;
+    }
+
+    /* NUMA defines 0xFF as an unreachable distance; values 0-9 are reserved */
+    if ( distance >= NUMA_NO_DISTANCE ||
+         (distance >= NUMA_DISTANCE_UDF_MIN &&
+          distance <= NUMA_DISTANCE_UDF_MAX) ||
+         (from == to && distance != NUMA_LOCAL_DISTANCE) )
+    {
+        printk(KERN_WARNING
+               "NUMA: invalid distance: from=%"PRIu8" to=%"PRIu8" distance=%"PRIu32"\n",
+               from, to, distance);
+        return;
+    }
+
+    node_distance_map[from][to] = distance;
+}
+
+unsigned char __node_distance(nodeid_t from, nodeid_t to)
+{
+    /* When NUMA is off, any distance will be treated as remote. */
+    if ( numa_disabled() )
+        return NUMA_REMOTE_DISTANCE;
+
+    /*
+     * Check whether the nodes are within the matrix range.
+     * If any node is out of range, treat the distance as unreachable
+     * (return 0xFF), unless the from and to nodes are the same.
+     */
+    if ( from >= ARRAY_SIZE(node_distance_map) ||
+         to >= ARRAY_SIZE(node_distance_map[0]) )
+        return from == to ? NUMA_LOCAL_DISTANCE : NUMA_NO_DISTANCE;
+
+    return node_distance_map[from][to];
+}
+
+EXPORT_SYMBOL(__node_distance);
diff --git a/xen/arch/x86/include/asm/numa.h b/xen/arch/x86/include/asm/numa.h
index 7866afa408..45456ac441 100644
--- a/xen/arch/x86/include/asm/numa.h
+++ b/xen/arch/x86/include/asm/numa.h
@@ -22,7 +22,6 @@ extern void init_cpu_to_node(void);
 #define arch_want_default_dmazone() (num_online_nodes() > 1)
 
 void srat_parse_regions(paddr_t addr);
-extern u8 __node_distance(nodeid_t a, nodeid_t b);
 unsigned int arch_get_dma_bitsize(void);
 
 #endif
diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index 56749ddca5..50faf5d352 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -328,7 +328,7 @@ unsigned int numa_node_to_arch_nid(nodeid_t n)
 	return 0;
 }
 
-u8 __node_distance(nodeid_t a, nodeid_t b)
+unsigned char __node_distance(nodeid_t a, nodeid_t b)
 {
 	unsigned index;
 	u8 slit_val;
diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index b86d0851fc..8356e47b61 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -114,6 +114,7 @@ extern bool numa_memblks_available(void);
 extern bool numa_update_node_memblks(nodeid_t node, unsigned int arch_nid,
                                      paddr_t start, paddr_t size, bool hotplug);
 extern void numa_set_processor_nodes_parsed(nodeid_t node);
+extern unsigned char __node_distance(nodeid_t a, nodeid_t b);
 
 #else
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:26:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:26:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524035.814608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQF-0007dT-PQ; Thu, 20 Apr 2023 11:25:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524035.814608; Thu, 20 Apr 2023 11:25:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQF-0007dI-Kf; Thu, 20 Apr 2023 11:25:55 +0000
Received: by outflank-mailman (input) for mailman id 524035;
 Thu, 20 Apr 2023 11:25:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQE-0006Vv-91
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:25:54 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 1c6c5186-df6e-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:25:53 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E60761480;
 Thu, 20 Apr 2023 04:26:36 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id A30503F587;
 Thu, 20 Apr 2023 04:25:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c6c5186-df6e-11ed-b21f-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 04/17] xen/arm: use arch_get_ram_range to memory ranges from bootinfo
Date: Thu, 20 Apr 2023 19:25:08 +0800
Message-Id: <20230420112521.3272732-5-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Implement the same helper "arch_get_ram_range" as on x86, so the
common NUMA code can retrieve memory banks from the Arm bootinfo.
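
The helper's contract (bank index in, start/end out, -ENOENT past the
last bank) can be sketched outside Xen as follows; the `bootinfo_mem`
structure and its contents are hypothetical stand-ins for the real
Arm bootinfo:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

typedef uint64_t paddr_t;

struct membank { paddr_t start; paddr_t size; };
struct meminfo { unsigned int nr_banks; struct membank bank[4]; };

/* Hypothetical miniature of the boot-time memory bank list. */
static struct meminfo bootinfo_mem = {
    .nr_banks = 2,
    .bank = { { 0x40000000ULL, 0x40000000ULL },
              { 0x100000000ULL, 0x80000000ULL } },
};

/* Mirrors the helper's contract: -ENOENT past the last bank. */
static int get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
{
    if (idx >= bootinfo_mem.nr_banks)
        return -ENOENT;
    *start = bootinfo_mem.bank[idx].start;
    *end = *start + bootinfo_mem.bank[idx].size;
    return 0;
}

/* Typical caller pattern: iterate until -ENOENT. */
static unsigned int count_ram_banks(void)
{
    paddr_t s, e;
    unsigned int n = 0;
    while (get_ram_range(n, &s, &e) == 0)
        n++;
    return n;
}
```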

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. Use arch_get_ram_range instead of arch_get_memory_map.
---
 xen/arch/arm/numa.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index c5ef81aad3..a0e7b14925 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -100,3 +100,14 @@ unsigned char __node_distance(nodeid_t from, nodeid_t to)
 }
 
 EXPORT_SYMBOL(__node_distance);
+
+int __init arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
+{
+    if ( idx >= bootinfo.mem.nr_banks )
+        return -ENOENT;
+
+    *start = bootinfo.mem.bank[idx].start;
+    *end = *start + bootinfo.mem.bank[idx].size;
+
+    return 0;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:26:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:26:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524041.814618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQK-00083y-1w; Thu, 20 Apr 2023 11:26:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524041.814618; Thu, 20 Apr 2023 11:26:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQJ-00083X-Ub; Thu, 20 Apr 2023 11:25:59 +0000
Received: by outflank-mailman (input) for mailman id 524041;
 Thu, 20 Apr 2023 11:25:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQI-0006Vv-5P
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:25:58 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 1e98d4a4-df6e-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:25:57 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 924971480;
 Thu, 20 Apr 2023 04:26:40 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 57DFA3F587;
 Thu, 20 Apr 2023 04:25:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e98d4a4-df6e-11ed-b21f-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 05/17] xen/arm: build NUMA cpu_to_node map in dt_smp_init_cpus
Date: Thu, 20 Apr 2023 19:25:09 +0800
Message-Id: <20230420112521.3272732-6-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

The NUMA implementation has a cpu_to_node array storing the CPU-to-node
map. Xen uses CPU logical IDs in its runtime components, so we use the
CPU logical ID as the index into cpu_to_node.

In the device tree case, cpu_logical_map is created in dt_smp_init_cpus.
So, when NUMA is enabled, dt_smp_init_cpus will fetch the CPU's NUMA
node ID at the same time to fill cpu_to_node.
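
The clamping behavior described here (any invalid or missing
numa-node-id falls back to node 0) can be shown in a small standalone
sketch; the array and function names are illustrative, not Xen's:

```c
#include <assert.h>

#define NR_CPUS      4
#define MAX_NUMNODES 8
#define NUMA_NO_NODE 0xFFu

/* Hypothetical per-CPU node map, indexed by CPU logical ID. */
static unsigned char cpu_to_node[NR_CPUS];

/* Record a parsed numa-node-id; invalid values fall back to node 0. */
static void set_cpu_node(unsigned int cpu, unsigned int nid)
{
    if (nid >= MAX_NUMNODES)
        nid = 0;
    cpu_to_node[cpu] = (unsigned char)nid;
}
```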

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. Use static inline functions instead of macros to perform
   function parameter type checks.
2. Add numa_disabled to gate the numa-node-id check for
   CONFIG_NUMA on but numa disabled user case.
3. Use macro instead of static inline function to stub
   numa_set_node.
---
 xen/arch/arm/include/asm/numa.h |  4 ++++
 xen/arch/arm/smpboot.c          | 36 +++++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 123a1a8dd0..e7a7d4e835 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -69,6 +69,10 @@ static inline bool arch_numa_broken(void)
     return true;
 }
 
+static inline void numa_set_node(unsigned int cpu, nodeid_t node)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 4a89b3a834..da7f2afd97 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -118,7 +118,12 @@ static void __init dt_smp_init_cpus(void)
     {
         [0 ... NR_CPUS - 1] = MPIDR_INVALID
     };
+    static nodeid_t node_map[NR_CPUS] __initdata =
+    {
+        [0 ... NR_CPUS - 1] = NUMA_NO_NODE
+    };
     bool bootcpu_valid = false;
+    unsigned int nid = 0;
     int rc;
 
     mpidr = system_cpuinfo.mpidr.bits & MPIDR_HWID_MASK;
@@ -169,6 +174,28 @@ static void __init dt_smp_init_cpus(void)
             continue;
         }
 
+        if ( IS_ENABLED(CONFIG_NUMA) )
+        {
+            /*
+             * When CONFIG_NUMA is set, try to fetch the NUMA node ID
+             * from the CPU's DT node; otherwise nid stays 0.
+             */
+            if ( !dt_property_read_u32(cpu, "numa-node-id", &nid) )
+            {
+                printk(XENLOG_WARNING
+                       "cpu[%d] dts path: %s: doesn't have numa information!\n",
+                       cpuidx, dt_node_full_name(cpu));
+                /*
+                 * During the early stage of NUMA initialization, if Xen
+                 * finds any CPU DT node without numa-node-id info, NUMA
+                 * is treated as off and all CPUs are assigned to a fake
+                 * node 0. So if reading numa-node-id fails here, set
+                 * nid to 0.
+                 */
+                nid = 0;
+            }
+        }
+
         /*
          * 8 MSBs must be set to 0 in the DT since the reg property
          * defines the MPIDR[23:0]
@@ -228,9 +255,13 @@ static void __init dt_smp_init_cpus(void)
         {
             printk("cpu%d init failed (hwid %"PRIregister"): %d\n", i, hwid, rc);
             tmp_map[i] = MPIDR_INVALID;
+            node_map[i] = NUMA_NO_NODE;
         }
         else
+        {
             tmp_map[i] = hwid;
+            node_map[i] = nid;
+        }
     }
 
     if ( !bootcpu_valid )
@@ -246,6 +277,11 @@ static void __init dt_smp_init_cpus(void)
             continue;
         cpumask_set_cpu(i, &cpu_possible_map);
         cpu_logical_map(i) = tmp_map[i];
+
+        nid = node_map[i];
+        if ( nid >= MAX_NUMNODES )
+            nid = 0;
+        numa_set_node(i, nid);
     }
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:26:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:26:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524047.814628 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQO-0000A7-BL; Thu, 20 Apr 2023 11:26:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524047.814628; Thu, 20 Apr 2023 11:26:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQO-00009s-7J; Thu, 20 Apr 2023 11:26:04 +0000
Received: by outflank-mailman (input) for mailman id 524047;
 Thu, 20 Apr 2023 11:26:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQM-0006GU-OY
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:02 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 20cf2ecc-df6e-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:26:01 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3FA141480;
 Thu, 20 Apr 2023 04:26:44 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 056CD3F587;
 Thu, 20 Apr 2023 04:25:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20cf2ecc-df6e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 06/17] xen/arm: Add boot and secondary CPU to NUMA system
Date: Thu, 20 Apr 2023 19:25:10 +0800
Message-Id: <20230420112521.3272732-7-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

In this patch, we bring NUMA nodes online and add each CPU to
its NUMA node. This gives NUMA-aware components the NUMA
affinity data they need to do their work.

To keep mostly the same behavior as x86, we use
numa_detect_cpu_node to bring nodes online. The difference is that
cpu_to_node has already been prepared in dt_smp_init_cpus, so we
don't need to set up cpu_to_node in numa_detect_cpu_node.
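
The node-onlining step can be modeled outside Xen with a bitmap: each
CPU's recorded node (falling back to node 0 when none was recorded)
gets its bit set. Names and the sample table are illustrative only:

```c
#include <assert.h>

#define MAX_NUMNODES 8
#define NUMA_NO_NODE 0xFFu

/* Hypothetical cpu_to_node contents after DT enumeration. */
static unsigned char cpu_node_tbl[] = { 0, 0, 1, NUMA_NO_NODE };
static unsigned int online_mask;

/* A CPU with no recorded node falls back to node 0, as in the patch. */
static void detect_cpu_node(unsigned int cpu)
{
    unsigned int node = cpu_node_tbl[cpu];

    if (node == NUMA_NO_NODE)
        node = 0;
    online_mask |= 1u << node;   /* mark the node online */
}
```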

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. Use unsigned int instead of int for cpu id.
2. Use static inline for stub to do type check.
---
 xen/arch/arm/include/asm/numa.h |  9 +++++++++
 xen/arch/arm/numa.c             | 10 ++++++++++
 xen/arch/arm/setup.c            |  5 +++++
 3 files changed, 24 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index e7a7d4e835..2f3d7079d9 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -36,6 +36,7 @@ typedef u8 nodeid_t;
 extern bool numa_disabled(void);
 extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
+extern void numa_detect_cpu_node(unsigned int cpu);
 
 #else
 
@@ -73,6 +74,14 @@ static inline void numa_set_node(unsigned int cpu, nodeid_t node)
 {
 }
 
+static inline void numa_add_cpu(unsigned int cpu)
+{
+}
+
+static inline void numa_detect_cpu_node(unsigned int cpu)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index a0e7b14925..05a339b044 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -81,6 +81,16 @@ void __init numa_set_distance(nodeid_t from, nodeid_t to,
     node_distance_map[from][to] = distance;
 }
 
+void numa_detect_cpu_node(unsigned int cpu)
+{
+    nodeid_t node = cpu_to_node[cpu];
+
+    if ( node == NUMA_NO_NODE )
+        node = 0;
+
+    node_set_online(node);
+}
+
 unsigned char __node_distance(nodeid_t from, nodeid_t to)
 {
     /* When NUMA is off, any distance will be treated as remote. */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 6f9f4d8c8a..09e18d32df 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1205,6 +1205,11 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     for_each_present_cpu ( i )
     {
+        /* Detect and online node based on cpu_to_node[]. */
+        numa_detect_cpu_node(i);
+        /* Set up node_to_cpumask based on cpu_to_node[]. */
+        numa_add_cpu(i);
+
         if ( (num_online_cpus() < nr_cpu_ids) && !cpu_online(i) )
         {
             int ret = cpu_up(i);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:26:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:26:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524052.814638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQV-0000r6-MR; Thu, 20 Apr 2023 11:26:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524052.814638; Thu, 20 Apr 2023 11:26:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQV-0000qy-J0; Thu, 20 Apr 2023 11:26:11 +0000
Received: by outflank-mailman (input) for mailman id 524052;
 Thu, 20 Apr 2023 11:26:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQU-0006GU-BS
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:10 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 2527de5a-df6e-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:26:08 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7B6B21480;
 Thu, 20 Apr 2023 04:26:51 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 412E33F587;
 Thu, 20 Apr 2023 04:26:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2527de5a-df6e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 08/17] xen/arm: introduce a helper to parse device tree memory node
Date: Thu, 20 Apr 2023 19:25:12 +0800
Message-Id: <20230420112521.3272732-9-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

A memory block's NUMA node ID is stored in the device tree's
memory nodes as the "numa-node-id" property. We need a new helper
to parse and verify this ID from memory nodes.
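
The verification step distinguishes two failure modes: a missing
property means a normal DTB without NUMA info, while an out-of-range
ID means bad data. A minimal sketch of that check, with a hypothetical
sentinel value standing in for the "property absent" case:

```c
#include <assert.h>
#include <errno.h>

#define MAX_NUMNODES 8
#define NID_ABSENT   0xFFFFFFFFu  /* hypothetical "no property" sentinel */

/*
 * Mirrors the parse path's policy: absent property -> -ENODATA
 * (normal DTB without NUMA info), out-of-range id -> -EINVAL (bad DTB).
 */
static int check_mem_nid(unsigned int nid)
{
    if (nid == NID_ABSENT)
        return -ENODATA;
    if (nid >= MAX_NUMNODES)
        return -EINVAL;
    return 0;
}
```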

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. Move numa_disabled check to fdt_parse_numa_memory_node.
2. Use numa_bad to replace bad_srat.
3. Replace tabs by spaces.
4. Align parameters.
5. return ENODATA for a normal dtb without numa info.
6. Un-addressed comment:
   "Why not parse numa-node-id and call fdt_numa_memory_affinity_init
   from xen/arch/arm/bootfdt.c:device_tree_get_meminfo. Is it because
   device_tree_get_meminfo is called too early?"
   I checked the device_tree_get_meminfo code and I think the answer
   is similar to my reply on the RFC. I prefer a unified NUMA
   initialization entry point and don't want to scatter NUMA parsing
   code across different places.
7. Use node id as dummy PXM for numa_update_node_memblks.
---
 xen/arch/arm/numa_device_tree.c | 89 +++++++++++++++++++++++++++++++++
 1 file changed, 89 insertions(+)

diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
index 52051e4537..9796490747 100644
--- a/xen/arch/arm/numa_device_tree.c
+++ b/xen/arch/arm/numa_device_tree.c
@@ -34,6 +34,26 @@ static int __init fdt_numa_processor_affinity_init(nodeid_t node)
     return 0;
 }
 
+/* Callback for parsing of the memory regions affinity */
+static int __init fdt_numa_memory_affinity_init(nodeid_t node,
+                                                paddr_t start, paddr_t size)
+{
+    if ( !numa_memblks_available() )
+    {
+        dprintk(XENLOG_WARNING,
+                "Too many NUMA entries, try a bigger NR_NODE_MEMBLKS\n");
+        return -EINVAL;
+    }
+
+    numa_fw_nid_name = "numa-node-id";
+    if ( !numa_update_node_memblks(node, node, start, size, false) )
+        return -EINVAL;
+
+    device_tree_numa = DT_NUMA_ON;
+
+    return 0;
+}
+
 /* Parse CPU NUMA node info */
 static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
 {
@@ -62,3 +82,72 @@ static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
 
     return fdt_numa_processor_affinity_init(nid);
 }
+
+/* Parse memory node NUMA info */
+static int __init fdt_parse_numa_memory_node(const void *fdt, int node,
+                                             const char *name,
+                                             unsigned int addr_cells,
+                                             unsigned int size_cells)
+{
+    unsigned int nid;
+    int ret = 0, len;
+    paddr_t addr, size;
+    const struct fdt_property *prop;
+    unsigned int idx, ranges;
+    const __be32 *addresses;
+
+    if ( numa_disabled() )
+        return -EINVAL;
+
+    /*
+     * device_tree_get_u32 will return NUMA_NO_NODE when this memory
+     * DT node doesn't have numa-node-id. This helps us distinguish
+     * a bad DTB from a normal DTB without NUMA info.
+     */
+    nid = device_tree_get_u32(fdt, node, "numa-node-id", NUMA_NO_NODE);
+    if ( nid == NUMA_NO_NODE )
+    {
+        numa_fw_bad();
+        return -ENODATA;
+    }
+    else if ( nid >= MAX_NUMNODES )
+    {
+        printk(XENLOG_WARNING "Node id %u exceeds maximum value\n", nid);
+        goto invalid_data;
+    }
+
+    prop = fdt_get_property(fdt, node, "reg", &len);
+    if ( !prop )
+    {
+        printk(XENLOG_WARNING
+               "fdt: node `%s': missing `reg' property\n", name);
+        goto invalid_data;
+    }
+
+    addresses = (const __be32 *)prop->data;
+    ranges = len / (sizeof(__be32) * (addr_cells + size_cells));
+    for ( idx = 0; idx < ranges; idx++ )
+    {
+        device_tree_get_reg(&addresses, addr_cells, size_cells, &addr, &size);
+        /* Skip zero size ranges */
+        if ( !size )
+            continue;
+
+        ret = fdt_numa_memory_affinity_init(nid, addr, size);
+        if ( ret )
+            goto invalid_data;
+    }
+
+    if ( idx == 0 )
+    {
+        printk(XENLOG_ERR
+               "bad property in memory node, idx=%u ret=%d\n", idx, ret);
+        goto invalid_data;
+    }
+
+    return 0;
+
+invalid_data:
+    numa_fw_bad();
+    return -EINVAL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:26:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:26:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524055.814648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQa-0001G8-4X; Thu, 20 Apr 2023 11:26:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524055.814648; Thu, 20 Apr 2023 11:26:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQa-0001Fz-0w; Thu, 20 Apr 2023 11:26:16 +0000
Received: by outflank-mailman (input) for mailman id 524055;
 Thu, 20 Apr 2023 11:26:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQY-0006GU-Cz
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:14 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 272e8d04-df6e-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:26:11 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0F2121480;
 Thu, 20 Apr 2023 04:26:55 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id C8EF03F587;
 Thu, 20 Apr 2023 04:26:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 272e8d04-df6e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 09/17] xen/arm: introduce a helper to parse device tree NUMA distance map
Date: Thu, 20 Apr 2023 19:25:13 +0800
Message-Id: <20230420112521.3272732-10-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

A NUMA-aware device tree provides a "distance-map" node to
describe the distance between any two nodes. This patch introduces
a new helper to parse this distance map.
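
The distance-matrix property is a flat array of (from, to, distance)
triplets; the helper validates each triplet and fills the opposite
direction when it has not been set yet. A standalone sketch of that
loop (illustrative names, plain arrays instead of FDT accessors):

```c
#include <assert.h>
#include <stdint.h>

#define MAXN                4
#define NUMA_LOCAL_DISTANCE 10

static unsigned char dist[MAXN][MAXN];

/* Parse (from, to, distance) triplets; returns -1 on invalid data. */
static int parse_matrix(const uint32_t *m, unsigned int entries)
{
    unsigned int i;

    for (i = 0; i + 2 < entries; i += 3) {
        uint32_t from = m[i], to = m[i + 1], d = m[i + 2];

        if (from >= MAXN || to >= MAXN)
            return -1;
        /* Self-distance must be local; remote must exceed local. */
        if ((from == to && d != NUMA_LOCAL_DISTANCE) ||
            (from != to && d <= NUMA_LOCAL_DISTANCE))
            return -1;

        dist[from][to] = (unsigned char)d;
        if (dist[to][from] == 0)        /* opposite way not yet set */
            dist[to][from] = (unsigned char)d;
    }
    return 0;
}
```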

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. Get rid of useless braces.
2. Use new NUMA status helper.
3. Use PRIu32 to replace u in print messages.
4. Fix opposite = __node_distance(to, from).
5. disable dtb numa info table when we find an invalid data
   in dtb.
---
 xen/arch/arm/numa_device_tree.c | 107 ++++++++++++++++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
index 9796490747..718afca826 100644
--- a/xen/arch/arm/numa_device_tree.c
+++ b/xen/arch/arm/numa_device_tree.c
@@ -151,3 +151,110 @@ invalid_data:
     numa_fw_bad();
     return -EINVAL;
 }
+
+/* Parse NUMA distance map v1 */
+static int __init fdt_parse_numa_distance_map_v1(const void *fdt, int node)
+{
+    const struct fdt_property *prop;
+    const __be32 *matrix;
+    unsigned int i, entry_count;
+    int len;
+
+    printk(XENLOG_INFO "NUMA: parsing numa-distance-map\n");
+
+    prop = fdt_get_property(fdt, node, "distance-matrix", &len);
+    if ( !prop )
+    {
+        printk(XENLOG_WARNING
+               "NUMA: No distance-matrix property in distance-map\n");
+        goto invalid_data;
+    }
+
+    if ( len % sizeof(__be32) != 0 )
+    {
+        printk(XENLOG_WARNING
+               "distance-matrix in node is not a multiple of u32\n");
+        goto invalid_data;
+    }
+
+    entry_count = len / sizeof(__be32);
+    if ( entry_count == 0 )
+    {
+        printk(XENLOG_WARNING "NUMA: Invalid distance-matrix\n");
+        goto invalid_data;
+    }
+
+    matrix = (const __be32 *)prop->data;
+    for ( i = 0; i + 2 < entry_count; i += 3 )
+    {
+        unsigned int from, to, distance, opposite;
+
+        from = dt_next_cell(1, &matrix);
+        to = dt_next_cell(1, &matrix);
+        distance = dt_next_cell(1, &matrix);
+        if ( (from == to && distance != NUMA_LOCAL_DISTANCE) ||
+            (from != to && distance <= NUMA_LOCAL_DISTANCE) )
+        {
+            printk(XENLOG_WARNING
+                   "NUMA: Invalid distance: NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
+                   from, to, distance);
+            goto invalid_data;
+        }
+
+        printk(XENLOG_INFO "NUMA: distance: NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
+               from, to, distance);
+
+        /* Get opposite way distance */
+        opposite = __node_distance(to, from);
+        if ( opposite == 0 )
+        {
+            /* Bi-directions are not set, set both */
+            numa_set_distance(from, to, distance);
+            numa_set_distance(to, from, distance);
+        }
+        else
+        {
+            /*
+             * The opposite-way distance has been set to a different
+             * value. This may be a firmware/device tree bug.
+             */
+            if ( opposite != distance )
+            {
+                /*
+                 * The device tree NUMA distance-matrix binding:
+                 * https://www.kernel.org/doc/Documentation/devicetree/bindings/numa.txt
+                 * contains a note that says:
+                 * "Each entry represents distance from first node to
+                 *  second node. The distances are equal in either
+                 *  direction."
+                 *
+                 * That means the device tree doesn't permit this case.
+                 * However, the ACPI spec specifically permits this
+                 * case:
+                 * "Except for the relative distance from a System Locality
+                 *  to itself, each relative distance is stored twice in the
+                 *  matrix. This provides the capability to describe the
+                 *  scenario where the relative distances for the two
+                 *  directions between System Localities is different."
+                 *
+                 * That means a real machine may have such a NUMA
+                 * configuration. So print a WARNING here to let system
+                 * administrators know, in case they have hijacked the
+                 * device tree to support such a rare machine.
+                 */
+                printk(XENLOG_WARNING
+                       "Un-matched bi-direction! NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32", NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
+                       from, to, distance, to, from, opposite);
+            }
+
+            /* Opposite way distance has been set, just set this way */
+            numa_set_distance(from, to, distance);
+        }
+    }
+
+    return 0;
+
+invalid_data:
+    numa_fw_bad();
+    return -EINVAL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:26:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:26:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524057.814658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQc-0001cj-Ee; Thu, 20 Apr 2023 11:26:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524057.814658; Thu, 20 Apr 2023 11:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSQc-0001cU-9v; Thu, 20 Apr 2023 11:26:18 +0000
Received: by outflank-mailman (input) for mailman id 524057;
 Thu, 20 Apr 2023 11:26:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQb-0006GU-9N
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:17 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 295bbe74-df6e-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:26:15 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 92C8D1480;
 Thu, 20 Apr 2023 04:26:58 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 589843F587;
 Thu, 20 Apr 2023 04:26:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 295bbe74-df6e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 10/17] xen/arm: unified entry to parse all NUMA data from device tree
Date: Thu, 20 Apr 2023 19:25:14 +0800
Message-Id: <20230420112521.3272732-11-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

In this function, we scan the whole device tree to parse the CPU
node id, memory node id and distance-map. Although early_scan_node
will invoke a handler to process memory nodes, parsing the memory
node id in that handler would mean embedding the NUMA parsing code
there, and we would still need to scan the whole device tree to
find the CPU NUMA ids and the distance-map. So we include the
memory NUMA id parsing in this function too. Another benefit is
that this gives us a single entry point for device tree NUMA data
parsing.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. Fix typos in commit message.
2. Fix code style and align parameters.
3. Use strncmp to replace memcmp.
---
 xen/arch/arm/include/asm/numa.h |  1 +
 xen/arch/arm/numa_device_tree.c | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index fe1bf4251f..a2b4f6cc10 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -45,6 +45,7 @@ extern bool numa_disabled(void);
 extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
 extern void numa_detect_cpu_node(unsigned int cpu);
+extern int numa_device_tree_init(const void *fdt);
 
 #else
 
diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
index 718afca826..a7dc58188e 100644
--- a/xen/arch/arm/numa_device_tree.c
+++ b/xen/arch/arm/numa_device_tree.c
@@ -258,3 +258,33 @@ invalid_data:
     numa_fw_bad();
     return -EINVAL;
 }
+
+static int __init fdt_scan_numa_nodes(const void *fdt, int node,
+                                      const char *uname, int depth,
+                                      unsigned int address_cells,
+                                      unsigned int size_cells, void *data)
+{
+    int len, ret = 0;
+    const void *prop;
+
+    prop = fdt_getprop(fdt, node, "device_type", &len);
+    if ( prop )
+    {
+        if ( strncmp(prop, "cpu", len) == 0 )
+            ret = fdt_parse_numa_cpu_node(fdt, node);
+        else if ( strncmp(prop, "memory", len) == 0 )
+            ret = fdt_parse_numa_memory_node(fdt, node, uname,
+                                address_cells, size_cells);
+    }
+    else if ( fdt_node_check_compatible(fdt, node,
+                                        "numa-distance-map-v1") == 0 )
+        ret = fdt_parse_numa_distance_map_v1(fdt, node);
+
+    return ret;
+}
+
+/* Initialize NUMA from device tree */
+int __init numa_device_tree_init(const void *fdt)
+{
+    return device_tree_for_each_node(fdt, 0, fdt_scan_numa_nodes, NULL);
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:29:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:29:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524064.814672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTD-0003c5-AS; Thu, 20 Apr 2023 11:28:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524064.814672; Thu, 20 Apr 2023 11:28:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTD-0003bI-4y; Thu, 20 Apr 2023 11:28:59 +0000
Received: by outflank-mailman (input) for mailman id 524064;
 Thu, 20 Apr 2023 11:28:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQh-0006Vv-Id
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:23 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 2d8eb75b-df6e-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:26:22 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B552E1480;
 Thu, 20 Apr 2023 04:27:05 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7AEBC3F587;
 Thu, 20 Apr 2023 04:26:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d8eb75b-df6e-11ed-b21f-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 12/17] xen/arm: enable device tree based NUMA in system init
Date: Thu, 20 Apr 2023 19:25:16 +0800
Message-Id: <20230420112521.3272732-13-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

In this patch, we enable the creation of a device tree based
NUMA system during system initialization.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. Replace ~0 with INVALID_PADDR.
2. Only print error messages for invalid dtb data.
3. Remove unnecessary return.
4. Remove the parameter of numa_init.
---
 xen/arch/arm/include/asm/numa.h |  5 +++
 xen/arch/arm/numa.c             | 57 +++++++++++++++++++++++++++++++++
 xen/arch/arm/setup.c            |  7 ++++
 3 files changed, 69 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index a2b4f6cc10..8057db09c4 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -46,6 +46,7 @@ extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
 extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
+extern void numa_init(void);
 
 #else
 
@@ -91,6 +92,10 @@ static inline void numa_detect_cpu_node(unsigned int cpu)
 {
 }
 
+static inline void numa_init(void)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 06e067712d..0a8a67512b 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -18,7 +18,11 @@
  *
  */
 #include <xen/init.h>
+#include <xen/device_tree.h>
+#include <xen/nodemask.h>
 #include <xen/numa.h>
+#include <xen/pfn.h>
+#include <xen/acpi.h>
 
 enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
 
@@ -105,6 +109,59 @@ unsigned char __node_distance(nodeid_t from, nodeid_t to)
 
 EXPORT_SYMBOL(__node_distance);
 
+void __init numa_init(void)
+{
+    unsigned int idx;
+    paddr_t ram_start = INVALID_PADDR;
+    paddr_t ram_size = 0;
+    paddr_t ram_end = 0;
+
+    /* NUMA has been turned off through Xen parameters */
+    if ( numa_off )
+        goto mem_init;
+
+    /* Initialize NUMA from device tree when system is not ACPI booted */
+    if ( acpi_disabled )
+    {
+        int ret = numa_device_tree_init(device_tree_flattened);
+        if ( ret )
+        {
+            numa_off = true;
+            if ( ret == -EINVAL )
+                printk(XENLOG_WARNING
+                       "Init NUMA from device tree failed, ret=%d\n", ret);
+        }
+    }
+    else
+    {
+        /* We don't support NUMA for ACPI boot currently */
+        printk(XENLOG_WARNING
+               "ACPI NUMA has not been supported yet, NUMA off!\n");
+        numa_off = true;
+    }
+
+mem_init:
+    /*
+     * Find the minimum and maximum addresses of RAM. NUMA will
+     * build a memory-to-node mapping table for the whole range.
+     */
+    ram_start = bootinfo.mem.bank[0].start;
+    ram_size  = bootinfo.mem.bank[0].size;
+    ram_end   = ram_start + ram_size;
+    for ( idx = 1 ; idx < bootinfo.mem.nr_banks; idx++ )
+    {
+        paddr_t bank_start = bootinfo.mem.bank[idx].start;
+        paddr_t bank_size = bootinfo.mem.bank[idx].size;
+        paddr_t bank_end = bank_start + bank_size;
+
+        ram_size  = ram_size + bank_size;
+        ram_start = min(ram_start, bank_start);
+        ram_end   = max(ram_end, bank_end);
+    }
+
+    numa_initmem_init(PFN_UP(ram_start), PFN_DOWN(ram_end));
+}
+
 int __init arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
 {
     if ( idx >= bootinfo.mem.nr_banks )
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 09e18d32df..f05e233f3a 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1121,6 +1121,13 @@ void __init start_xen(unsigned long boot_phys_offset,
     /* Parse the ACPI tables for possible boot-time configuration */
     acpi_boot_table_init();
 
+    /*
+     * Try to initialize the NUMA system. If it fails, the system
+     * will fall back to a uniform system, which means the system
+     * has only one NUMA node.
+     */
+    numa_init();
+
     end_boot_allocator();
 
     /*
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:29:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:29:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524066.814677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTD-0003ej-IB; Thu, 20 Apr 2023 11:28:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524066.814677; Thu, 20 Apr 2023 11:28:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTD-0003dK-CI; Thu, 20 Apr 2023 11:28:59 +0000
Received: by outflank-mailman (input) for mailman id 524066;
 Thu, 20 Apr 2023 11:28:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQz-0006Vv-20
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:41 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 384c8b37-df6e-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:26:40 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A99C81480;
 Thu, 20 Apr 2023 04:27:23 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id DF3963F587;
 Thu, 20 Apr 2023 04:26:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 384c8b37-df6e-11ed-b21f-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 17/17] docs: update numa command line to support Arm
Date: Thu, 20 Apr 2023 19:25:21 +0800
Message-Id: <20230420112521.3272732-18-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

The numa command line option is currently documented as x86 only.
Remove the x86 arch limitation from the option in this patch.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v2 -> v3:
1. Add the Acked-by tag from Jan.
v1 -> v2:
1. Update Arm NUMA status in SUPPORT.md to "Tech Preview".
---
 SUPPORT.md                        | 1 +
 docs/misc/xen-command-line.pandoc | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index aa1940e55f..da4e3b9aa2 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -401,6 +401,7 @@ on embedded platforms and the x86 PV shim.
 Enables NUMA aware scheduling in Xen
 
     Status, x86: Supported
+    Status, Arm: Tech Preview
 
 ## Scalability
 
diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e0b89b7d33..2fea22dd70 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1890,7 +1890,7 @@ i.e. a limit on the number of guests it is possible to start each having
 assigned a device sharing a common interrupt line.  Accepts values between
 1 and 255.
 
-### numa (x86)
+### numa
 > `= on | off | fake=<integer> | noacpi`
 
 > Default: `on`
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:29:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:29:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524061.814668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTD-0003a6-0v; Thu, 20 Apr 2023 11:28:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524061.814668; Thu, 20 Apr 2023 11:28:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTC-0003Zz-UE; Thu, 20 Apr 2023 11:28:58 +0000
Received: by outflank-mailman (input) for mailman id 524061;
 Thu, 20 Apr 2023 11:28:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQd-0006Vv-Ha
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:19 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 2b79b97b-df6e-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:26:18 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 262E91480;
 Thu, 20 Apr 2023 04:27:02 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id E01963F587;
 Thu, 20 Apr 2023 04:26:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b79b97b-df6e-11ed-b21f-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 11/17] xen/arm: keep the guest NUMA unaware
Date: Thu, 20 Apr 2023 19:25:15 +0800
Message-Id: <20230420112521.3272732-12-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

The NUMA information provided in the host device tree is only
for Xen. For dom0, we want to hide it, as the layout may be
different (for now, dom0 is still not aware of NUMA). The CPU
and memory nodes are recreated from scratch for the domain, so
we already skip the "numa-node-id" property for these two types
of nodes.

However, some devices such as PCIe devices may have a
"numa-node-id" property too. We have to skip it as well.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. Add Rb
---
 xen/arch/arm/domain_build.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f80fdd1af2..2bf586fc45 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1185,6 +1185,10 @@ static int __init write_properties(struct domain *d, struct kernel_info *kinfo,
                 continue;
         }
 
+        /* Dom0 is currently NUMA unaware */
+        if ( dt_property_name_is_equal(prop, "numa-node-id") )
+            continue;
+
         res = fdt_property(kinfo->fdt, prop->name, prop_data, prop_len);
 
         if ( res )
@@ -2567,6 +2571,8 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
         DT_MATCH_TYPE("memory"),
         /* The memory mapped timer is not supported by Xen. */
         DT_MATCH_COMPATIBLE("arm,armv7-timer-mem"),
+        /* Numa info doesn't need to be exposed to Domain-0 */
+        DT_MATCH_COMPATIBLE("numa-distance-map-v1"),
         { /* sentinel */ },
     };
     static const struct dt_device_match timer_matches[] __initconst =
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:29:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:29:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524078.814697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTK-0004SF-Qh; Thu, 20 Apr 2023 11:29:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524078.814697; Thu, 20 Apr 2023 11:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTK-0004Rs-Mp; Thu, 20 Apr 2023 11:29:06 +0000
Received: by outflank-mailman (input) for mailman id 524078;
 Thu, 20 Apr 2023 11:29:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQu-0006Vv-V6
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:36 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 35dbd8a5-df6e-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:26:36 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A9C751480;
 Thu, 20 Apr 2023 04:27:19 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 6F9433F587;
 Thu, 20 Apr 2023 04:26:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35dbd8a5-df6e-11ed-b21f-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 16/17] xen/arm: Provide Kconfig options for Arm to enable NUMA
Date: Thu, 20 Apr 2023 19:25:20 +0800
Message-Id: <20230420112521.3272732-17-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Arm platforms support both ACPI and device tree. We don't want
users to have to select device tree NUMA or ACPI NUMA manually.
Instead, users should just enable NUMA for Arm, and device tree
NUMA or ACPI NUMA will be selected automatically depending on
the device tree and ACPI feature status. This way, both kinds of
NUMA support code can coexist in one Xen binary, and Xen can
check the feature flags to decide whether to use device tree or
ACPI as the NUMA firmware source.

So in this patch, we introduce a generic option, CONFIG_ARM_NUMA,
for users to enable NUMA for Arm, and a CONFIG_DEVICE_TREE_NUMA
option for ARM_NUMA to select when the HAS_DEVICE_TREE option is
enabled. Once ACPI NUMA for Arm is supported, ACPI_NUMA can be
selected here too.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. Remove the condition of selecting DEVICE_TREE_NUMA.
---
 xen/arch/arm/Kconfig | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..e751ad50d1 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -39,6 +39,17 @@ config ACPI
 config ARM_EFI
 	bool
 
+config ARM_NUMA
+	bool "Arm NUMA (Non-Uniform Memory Access) Support (UNSUPPORTED)" if UNSUPPORTED
+	depends on HAS_DEVICE_TREE
+	select DEVICE_TREE_NUMA
+	help
+	  Enable Non-Uniform Memory Access (NUMA) for Arm architectures
+
+config DEVICE_TREE_NUMA
+	bool
+	select NUMA
+
 config GICV3
 	bool "GICv3 driver"
 	depends on !NEW_VGIC
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:29:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:29:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524079.814708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTO-0004oj-8J; Thu, 20 Apr 2023 11:29:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524079.814708; Thu, 20 Apr 2023 11:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTO-0004oV-4P; Thu, 20 Apr 2023 11:29:10 +0000
Received: by outflank-mailman (input) for mailman id 524079;
 Thu, 20 Apr 2023 11:29:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQk-0006Vv-LM
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:26 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 2fb9e0f7-df6e-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:26:26 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4ED0E1480;
 Thu, 20 Apr 2023 04:27:09 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 0BB0E3F587;
 Thu, 20 Apr 2023 04:26:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fb9e0f7-df6e-11ed-b21f-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 13/17] xen/arm: implement numa_node_to_arch_nid for device tree NUMA
Date: Thu, 20 Apr 2023 19:25:17 +0800
Message-Id: <20230420112521.3272732-14-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Device-tree-based NUMA doesn't have a proximity domain like
ACPI does, so we can return the node id directly as the arch nid.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. Use numa_node_to_arch_nid instead of dummy node_to_pxm.
---
 xen/arch/arm/include/asm/numa.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 8057db09c4..30b1fa39f1 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -48,6 +48,15 @@ extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
 extern void numa_init(void);
 
+/*
+ * Device tree NUMA doesn't have an architectural node id.
+ * So we can just return the node id as the arch nid.
+ */
+static inline unsigned int numa_node_to_arch_nid(nodeid_t n)
+{
+    return n;
+}
+
 #else
 
 /* Fake one node for now. See also node_online_map. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:29:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:29:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524080.814713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTO-0004s4-PT; Thu, 20 Apr 2023 11:29:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524080.814713; Thu, 20 Apr 2023 11:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTO-0004qx-E7; Thu, 20 Apr 2023 11:29:10 +0000
Received: by outflank-mailman (input) for mailman id 524080;
 Thu, 20 Apr 2023 11:29:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQp-0006GU-A6
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:31 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 31d2b099-df6e-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:26:29 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E14AB1480;
 Thu, 20 Apr 2023 04:27:12 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9DD043F587;
 Thu, 20 Apr 2023 04:26:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31d2b099-df6e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 14/17] xen/arm: use CONFIG_NUMA to gate node_online_map in smpboot
Date: Thu, 20 Apr 2023 19:25:18 +0800
Message-Id: <20230420112521.3272732-15-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

node_online_map in smpboot is still needed for Arm when NUMA
is turned off by Kconfig.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. No change.
---
 xen/arch/arm/smpboot.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index da7f2afd97..4f71cc974a 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -41,8 +41,10 @@ integer_param("maxcpus", max_cpus);
 /* CPU logical map: map xen cpuid to an MPIDR */
 register_t __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = MPIDR_INVALID };
 
+#ifndef CONFIG_NUMA
 /* Fake one node for now. See also asm/numa.h */
 nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
+#endif
 
 /* Xen stack for bringing up the first CPU. */
 static unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:29:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:29:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524087.814724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTQ-0005Hw-3t; Thu, 20 Apr 2023 11:29:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524087.814724; Thu, 20 Apr 2023 11:29:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTP-0005GY-Ur; Thu, 20 Apr 2023 11:29:11 +0000
Received: by outflank-mailman (input) for mailman id 524087;
 Thu, 20 Apr 2023 11:29:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQP-0006Vv-IF
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:05 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 22ef9f1a-df6e-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:26:04 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C0B9B1480;
 Thu, 20 Apr 2023 04:26:47 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 867A73F587;
 Thu, 20 Apr 2023 04:26:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22ef9f1a-df6e-11ed-b21f-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v3 07/17] xen/arm: introduce a helper to parse device tree processor node
Date: Thu, 20 Apr 2023 19:25:11 +0800
Message-Id: <20230420112521.3272732-8-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Processor NUMA ID information is stored in the device tree's
processor node as "numa-node-id". We need a new helper to parse
this ID from the processor node, and the parsed ID's validity
still needs to be checked. If we get an invalid NUMA ID from
any processor node, the device tree will be marked as having
invalid NUMA information.
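For reference, the property parsed here follows the standard devicetree
NUMA binding; a CPU node carrying it typically looks like the following
(illustrative fragment only -- node names, compatible string and reg
values are made up):

```dts
cpus {
    #address-cells = <2>;
    #size-cells = <0>;

    cpu@0 {
        device_type = "cpu";
        compatible = "arm,cortex-a57";
        reg = <0x0 0x0>;
        numa-node-id = <0>;    /* the property read by the new helper */
    };
};
```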

Since the new helpers need to know the NUMA status, move the
enum dt_numa_status to the Arm NUMA header.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. Move the enum dt_numa_status to the Arm NUMA header.
2. Update the year in copyright to 2023.
v1 -> v2:
1. Move numa_disabled from fdt_numa_processor_affinity_init
   to fdt_parse_numa_cpu_node.
2. Move invalid NUMA id check to fdt_parse_numa_cpu_node.
3. Return ENODATA for normal dtb without NUMA info.
4. Use NUMA status helpers instead of SRAT functions.
---
 xen/arch/arm/Makefile           |  1 +
 xen/arch/arm/include/asm/numa.h |  8 +++++
 xen/arch/arm/numa.c             |  8 +----
 xen/arch/arm/numa_device_tree.c | 64 +++++++++++++++++++++++++++++++++
 4 files changed, 74 insertions(+), 7 deletions(-)
 create mode 100644 xen/arch/arm/numa_device_tree.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 9073398d6e..bbc68e3735 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -39,6 +39,7 @@ obj-y += mem_access.o
 obj-y += mm.o
 obj-y += monitor.o
 obj-$(CONFIG_NUMA) += numa.o
+obj-$(CONFIG_DEVICE_TREE_NUMA) += numa_device_tree.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += platform.o
diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 2f3d7079d9..fe1bf4251f 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -22,6 +22,14 @@ typedef u8 nodeid_t;
  */
 #define NR_NODE_MEMBLKS NR_MEM_BANKS
 
+enum dt_numa_status {
+    DT_NUMA_DEFAULT,
+    DT_NUMA_ON,
+    DT_NUMA_OFF,
+};
+
+extern enum dt_numa_status device_tree_numa;
+
 /*
  * In ACPI spec, 0-9 are the reserved values for node distance,
  * 10 indicates local node distance, 20 indicates remote node
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 05a339b044..06e067712d 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -20,13 +20,7 @@
 #include <xen/init.h>
 #include <xen/numa.h>
 
-enum dt_numa_status {
-    DT_NUMA_DEFAULT,
-    DT_NUMA_ON,
-    DT_NUMA_OFF,
-};
-
-static enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
+enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
 
 static unsigned char __ro_after_init
 node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
new file mode 100644
index 0000000000..52051e4537
--- /dev/null
+++ b/xen/arch/arm/numa_device_tree.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm Architecture support layer for device tree NUMA.
+ *
+ * Copyright (C) 2023 Arm Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+#include <xen/init.h>
+#include <xen/nodemask.h>
+#include <xen/numa.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/device_tree.h>
+
+/* Callback for device tree processor affinity */
+static int __init fdt_numa_processor_affinity_init(nodeid_t node)
+{
+    numa_set_processor_nodes_parsed(node);
+    device_tree_numa = DT_NUMA_ON;
+
+    printk(KERN_INFO "DT: NUMA node %"PRIu8" processor parsed\n", node);
+
+    return 0;
+}
+
+/* Parse CPU NUMA node info */
+static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
+{
+    unsigned int nid;
+
+    if ( numa_disabled() )
+        return -EINVAL;
+
+    /*
+     * device_tree_get_u32 will return NUMA_NO_NODE when this CPU
+     * DT node doesn't have numa-node-id. This can help us to
+     * distinguish a bad DTB and a normal DTB without NUMA info.
+     */
+    nid = device_tree_get_u32(fdt, node, "numa-node-id", NUMA_NO_NODE);
+    if ( nid == NUMA_NO_NODE )
+    {
+        numa_fw_bad();
+        return -ENODATA;
+    }
+    else if ( nid >= MAX_NUMNODES )
+    {
+        printk(XENLOG_ERR "DT: CPU numa node id %u is invalid\n", nid);
+        numa_fw_bad();
+        return -EINVAL;
+    }
+
+    return fdt_numa_processor_affinity_init(nid);
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:29:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:29:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524096.814738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTY-0006JM-Ji; Thu, 20 Apr 2023 11:29:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524096.814738; Thu, 20 Apr 2023 11:29:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTY-0006J7-Dr; Thu, 20 Apr 2023 11:29:20 +0000
Received: by outflank-mailman (input) for mailman id 524096;
 Thu, 20 Apr 2023 11:29:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppSQs-0006GU-Qw
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:26:34 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 33c33523-df6e-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:26:32 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2270F1480;
 Thu, 20 Apr 2023 04:27:16 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 2A2183F587;
 Thu, 20 Apr 2023 04:26:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 33c33523-df6e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v3 15/17] xen/arm: Set correct per-cpu cpu_core_mask
Date: Thu, 20 Apr 2023 19:25:19 +0800
Message-Id: <20230420112521.3272732-16-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420112521.3272732-1-Henry.Wang@arm.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In the common sysctl command XEN_SYSCTL_physinfo, cores_per_socket
is calculated based on the cpu_core_mask of CPU0. Currently on Arm
this is a fixed value of 1 (as can be checked via xl info), which is
not correct. This is because during the Arm CPU online process,
set_cpu_sibling_map() only sets the per-cpu cpu_core_mask for the
CPU itself.

cores_per_socket refers to the number of cores that belong to the
same socket (NUMA node). Therefore, this commit introduces a helper
function, numa_set_cpu_core_mask(cpu), which sets the per-cpu
cpu_core_mask to the cpus in the same NUMA node as cpu. Calling this
function at boot time ensures a correct cpu_core_mask, so that the
correct cores_per_socket is returned by XEN_SYSCTL_physinfo.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v2 -> v3:
1. No change.
v1 -> v2:
1. New patch
---
 xen/arch/arm/include/asm/numa.h |  7 +++++++
 xen/arch/arm/numa.c             | 11 +++++++++++
 xen/arch/arm/setup.c            |  5 +++++
 3 files changed, 23 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 30b1fa39f1..d340b6c147 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -47,6 +47,7 @@ extern void numa_set_distance(nodeid_t from, nodeid_t to,
 extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
 extern void numa_init(void);
+extern void numa_set_cpu_core_mask(int cpu);
 
 /*
  * Device tree NUMA doesn't have an architectural node id.
@@ -63,6 +64,12 @@ static inline unsigned int numa_node_to_arch_nid(nodeid_t n)
 #define cpu_to_node(cpu) 0
 #define node_to_cpumask(node)   (cpu_online_map)
 
+static inline void numa_set_cpu_core_mask(int cpu)
+{
+    cpumask_or(per_cpu(cpu_core_mask, cpu),
+               per_cpu(cpu_core_mask, cpu), &cpu_possible_map);
+}
+
 /*
  * TODO: make first_valid_mfn static when NUMA is supported on Arm, this
  * is required because the dummy helpers are using it.
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 0a8a67512b..2251d7177c 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -52,6 +52,17 @@ int __init arch_numa_setup(const char *opt)
     return -EINVAL;
 }
 
+void numa_set_cpu_core_mask(int cpu)
+{
+    nodeid_t node = cpu_to_node[cpu];
+
+    if ( node == NUMA_NO_NODE )
+        node = 0;
+
+    cpumask_or(per_cpu(cpu_core_mask, cpu),
+               per_cpu(cpu_core_mask, cpu), &node_to_cpumask(node));
+}
+
 void __init numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance)
 {
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index f05e233f3a..7cef913b7c 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1226,6 +1226,11 @@ void __init start_xen(unsigned long boot_phys_offset,
     }
 
     printk("Brought up %ld CPUs\n", (long)num_online_cpus());
+
+    /* Set per-cpu cpu_core_mask to cpus that belong to the same NUMA node. */
+    for_each_online_cpu ( i )
+        numa_set_cpu_core_mask(i);
+
     /* TODO: smp_cpus_done(); */
 
     /* This should be done in a vpmu driver but we do not have one yet. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:29:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:29:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524107.814748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTg-000775-S5; Thu, 20 Apr 2023 11:29:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524107.814748; Thu, 20 Apr 2023 11:29:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSTg-00076s-Oz; Thu, 20 Apr 2023 11:29:28 +0000
Received: by outflank-mailman (input) for mailman id 524107;
 Thu, 20 Apr 2023 11:29:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zYAX=AL=gmail.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppSTf-0004yB-6l
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:29:27 +0000
Received: from mail-yw1-x112e.google.com (mail-yw1-x112e.google.com
 [2607:f8b0:4864:20::112e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a7bbd1f-df6e-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:29:25 +0200 (CEST)
Received: by mail-yw1-x112e.google.com with SMTP id
 00721157ae682-54f6a796bd0so34920817b3.12
 for <xen-devel@lists.xenproject.org>; Thu, 20 Apr 2023 04:29:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a7bbd1f-df6e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1681990164; x=1684582164;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=mjZ/RzcdZKkVa0/zTvijUOjpF3h3Jp+5IB6rgUeEN4w=;
        b=WM/+x+qYreWmVU9adHqM+UxWyC0/dM50XgkM+CW+H+0ve0pPwJnx1QlGcZM0a3l9UW
         76C8ueuGvXkZ7se81BFsKNQJRq2p7ETQbYA2HnjcUrZwmeNPAgR7vBiOlJGk5m1+AvRN
         IBZBeF+XQ4yx2nSk231PpYEPg+kY5jvlmGO+nGt3mUahvja5y2mBRnckgmG5qnC3bKZJ
         56V9fUsSnSDOFyJPKBLuIJLHoPvB+0zo4aTLHYnCqXvE+4VQfEq2vKMmWcmha9cCy1/S
         vJCghyT2lP4p3ZDveLapf+W5D4+eW3fCAlug0gz/URAzGTc85PavbG4pKCs/1OqjQzuk
         8ETg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681990164; x=1684582164;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=mjZ/RzcdZKkVa0/zTvijUOjpF3h3Jp+5IB6rgUeEN4w=;
        b=ZAg7Ku0FCR0upMnJMZUKLlF9MAHHMa8zMwzehFE5eJnxekZP+fhUmmu+Ld1NGtGLSd
         k7b2Wkq5WmwJbkXpNftSKZCNbE52iQHXanNHeuY3mzq4WN188OHyKHo4lHUEsZ4xHiJd
         qSi2HwFmJbBYW6VqMOnuP1qmloFiT/XriZspn82nLf7zvimCNouYvCbWfkj+jUtalofU
         O5RP5EXfgyKrkHlNUmCAqIjcxfpweUqtnx1nnws7Gey8aOy5wyHGy5wFQ6BJ7VWJOcmJ
         PTRHEWrAA/2KFvTCxg5HuAly4rP0+Qy2L7RWsxv3siRye19BCn5U3oTIHHT+QHfQ9JOx
         cPdQ==
X-Gm-Message-State: AAQBX9eq7xbOrPJWoWzAa7qrBx6Tbsp8N0HQx/kU332hC5zOi0QkKoBd
	6j6MD2ARtzyWUiFEsN03LTrLmo+89xtk+AsxQU0=
X-Google-Smtp-Source: AKy350YixT78MM+0m6MWX93M4LzsNss7C/9AVyxLfCe7ytGhtrZgR5oTod2XZI+W20XYg1P6Z9PRr+6+RonHjuqDLAo=
X-Received: by 2002:a0d:dd82:0:b0:555:d4ad:8067 with SMTP id
 g124-20020a0ddd82000000b00555d4ad8067mr512290ywe.17.1681990164463; Thu, 20
 Apr 2023 04:29:24 -0700 (PDT)
MIME-Version: 1.0
References: <20230419172817.272758-1-stefanha@redhat.com> <20230419172817.272758-17-stefanha@redhat.com>
 <msjl3ep44f2dxpno7xw3zxjrkuh5iegyieszertt6ppkhpk62q@xxi7a5shhkc2>
In-Reply-To: <msjl3ep44f2dxpno7xw3zxjrkuh5iegyieszertt6ppkhpk62q@xxi7a5shhkc2>
From: Stefan Hajnoczi <stefanha@gmail.com>
Date: Thu, 20 Apr 2023 07:29:12 -0400
Message-ID: <CAJSP0QVjFcicweDxVvLyhijmdQqQPTN_uhzP2wU7ZS4ZXxKkEQ@mail.gmail.com>
Subject: Re: [PATCH v2 16/16] virtio: make it possible to detach host notifier
 from any thread
To: Eric Blake <eblake@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org, 
	Stefano Stabellini <sstabellini@kernel.org>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
	Fam Zheng <fam@euphon.net>, Julia Suvorova <jusual@redhat.com>, Hanna Reitz <hreitz@redhat.com>, 
	=?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Coiby Xu <Coiby.Xu@gmail.com>, Paul Durrant <paul@xen.org>, 
	Ronnie Sahlberg <ronniesahlberg@gmail.com>, Eduardo Habkost <eduardo@habkost.net>, 
	Juan Quintela <quintela@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Garzarella <sgarzare@redhat.com>, Anthony Perard <anthony.perard@citrix.com>, 
	Kevin Wolf <kwolf@redhat.com>, "Richard W.M. Jones" <rjones@redhat.com>, 
	Richard Henderson <richard.henderson@linaro.org>, xen-devel@lists.xenproject.org, 
	qemu-block@nongnu.org, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, 
	=?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@linaro.org>, 
	Peter Lieven <pl@kamp.de>, eesposit@redhat.com, Aarushi Mehta <mehta.aaru20@gmail.com>, 
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>, 
	David Woodhouse <dwmw2@infradead.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, 19 Apr 2023 at 14:52, Eric Blake <eblake@redhat.com> wrote:
>
> On Wed, Apr 19, 2023 at 01:28:17PM -0400, Stefan Hajnoczi wrote:
> > virtio_queue_aio_detach_host_notifier() does two things:
> > 1. It removes the fd handler from the event loop.
> > 2. It processes the virtqueue one last time.
> >
> > The first step can be performed by any thread and without taking the
> > AioContext lock.
> >
> > The second step may need the AioContext lock (depending on the device
> > implementation) and runs in the thread where request processing takes
> > place. virtio-blk and virtio-scsi therefore call
> > virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in
> > the AioContext.
> >
> > Scheduling a BH is undesirable for .drained_begin() functions. The next
> > patch will introduce a .drained_begin() function that needs to call
> > virtio_queue_aio_detach_host_notifier().
> >
> > Move the virtqueue processing out to the callers of
> > virtio_queue_aio_detach_host_notifier() so that the function can be
> > called from any thread. This is in preparation for the next patch.
> >
>
> This mentions a next patch, but is 16/16 in the series.  Am I missing
> something?

Good thing you caught this. The patch series was truncated because I
was in the middle of git rebase -i :(.

I will send a v3 with the remaining patches.

Thanks,
Stefan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:37:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:37:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524121.814758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSbh-000187-LT; Thu, 20 Apr 2023 11:37:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524121.814758; Thu, 20 Apr 2023 11:37:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSbh-000180-I8; Thu, 20 Apr 2023 11:37:45 +0000
Received: by outflank-mailman (input) for mailman id 524121;
 Thu, 20 Apr 2023 11:37:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppSbg-00017u-Jp
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:37:44 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c28d34f5-df6f-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:37:42 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-178-a4reCKcJMC-maL8cmDVnuQ-1; Thu, 20 Apr 2023 07:37:37 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0F9FB280BF69;
 Thu, 20 Apr 2023 11:37:36 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 7A6FC2166B33;
 Thu, 20 Apr 2023 11:37:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c28d34f5-df6f-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990661;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=NjZzjvFlly0+fag4k7wDm16HnoaaLh2yTrNsegeJk9A=;
	b=bhWwn3VHQNh+DU/GBBa7Q41H1799jHIO6uQCtz5oqe/vY6OpX+W3z9IaDNwaYgKszDVahE
	9p05l5pGaS77sdJHGSjb69jdECyJBgR2FJh9H39O2dikonbZHpUTuSRwv6AttxP3yWo4/1
	KdTqRVqcqru/RtQTGA1w2pFIeiK1ZQo=
X-MC-Unique: a4reCKcJMC-maL8cmDVnuQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 00/20] block: remove aio_disable_external() API
Date: Thu, 20 Apr 2023 07:37:12 -0400
Message-Id: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

v3:
- Resend full patch series. v2 was sent in the middle of a git rebase and was
  missing patches. [Eric]
- Apply Reviewed-by tags.
v2:
- Do not rely on BlockBackend request queuing, implement .drained_begin/end()
  instead in xen-block, virtio-blk, and virtio-scsi [Paolo]
- Add qdev_is_realized() API [Philippe]
- Add patch to avoid AioContext lock around blk_exp_ref/unref() [Paolo]
- Add patch to call .drained_begin/end() from main loop thread to simplify
  callback implementations

The aio_disable_external() API temporarily suspends file descriptor monitoring
in the event loop. The block layer uses this to prevent new I/O requests from
being submitted by the guest and elsewhere between bdrv_drained_begin() and
bdrv_drained_end().

While the block layer still needs to prevent new I/O requests in drained
sections, the aio_disable_external() API can be replaced with
.drained_begin/end/poll() callbacks that have been added to BdrvChildClass and
BlockDevOps.

This newer .drained_begin/end/poll() approach is attractive because it works
without being tied to a specific AioContext. The block layer is moving towards
multi-queue, which means multiple AioContexts may be processing I/O
simultaneously.

The aio_disable_external() API was always somewhat hacky. It suspends all file
descriptors that were registered with is_external=true, even if they have
nothing to do with the BlockDriverState graph nodes that are being drained.
It's better to solve a block layer problem in the block layer than to have an
odd event loop API solution.

The approach in this patch series is to implement BlockDevOps
.drained_begin/end() callbacks that temporarily stop file descriptor handlers.
This ensures that new I/O requests are not submitted in drained sections.

The first two virtio-scsi patches were already sent as a separate series. I
included them because they are necessary to fully remove
aio_disable_external().

Based-on: 087bc644b7634436ca9d52fe58ba9234e2bef026 (kevin/block-next)

Stefan Hajnoczi (20):
  hw/qdev: introduce qdev_is_realized() helper
  virtio-scsi: avoid race between unplug and transport event
  virtio-scsi: stop using aio_disable_external() during unplug
  block/export: only acquire AioContext once for
    vhost_user_server_stop()
  util/vhost-user-server: rename refcount to in_flight counter
  block/export: wait for vhost-user-blk requests when draining
  block/export: stop using is_external in vhost-user-blk server
  hw/xen: do not use aio_set_fd_handler(is_external=true) in
    xen_xenstore
  block: add blk_in_drain() API
  block: drain from main loop thread in bdrv_co_yield_to_drain()
  xen-block: implement BlockDevOps->drained_begin()
  hw/xen: do not set is_external=true on evtchn fds
  block/export: rewrite vduse-blk drain code
  block/export: don't require AioContext lock around blk_exp_ref/unref()
  block/fuse: do not set is_external=true on FUSE fd
  virtio: make it possible to detach host notifier from any thread
  virtio-blk: implement BlockDevOps->drained_begin()
  virtio-scsi: implement BlockDevOps->drained_begin()
  virtio: do not set is_external=true on host notifiers
  aio: remove aio_disable_external() API

 hw/block/dataplane/xen-block.h              |   2 +
 include/block/aio.h                         |  55 ---------
 include/block/export.h                      |   2 +
 include/hw/qdev-core.h                      |  17 ++-
 include/hw/scsi/scsi.h                      |  14 +++
 include/qemu/vhost-user-server.h            |   8 +-
 include/sysemu/block-backend-common.h       |  25 ++--
 include/sysemu/block-backend-global-state.h |   1 +
 util/aio-posix.h                            |   1 -
 block.c                                     |   7 --
 block/blkio.c                               |  15 +--
 block/block-backend.c                       |   7 ++
 block/curl.c                                |  10 +-
 block/export/export.c                       |  13 +-
 block/export/fuse.c                         |  58 ++++++++-
 block/export/vduse-blk.c                    | 128 ++++++++++++++------
 block/export/vhost-user-blk-server.c        |  73 +++++++----
 block/io.c                                  |   5 +-
 block/io_uring.c                            |   4 +-
 block/iscsi.c                               |   3 +-
 block/linux-aio.c                           |   4 +-
 block/nfs.c                                 |   5 +-
 block/nvme.c                                |   8 +-
 block/ssh.c                                 |   4 +-
 block/win32-aio.c                           |   6 +-
 hw/block/dataplane/virtio-blk.c             |  19 ++-
 hw/block/dataplane/xen-block.c              |  42 +++++--
 hw/block/virtio-blk.c                       |  38 +++++-
 hw/block/xen-block.c                        |  24 +++-
 hw/i386/kvm/xen_xenstore.c                  |   2 +-
 hw/scsi/scsi-bus.c                          |  46 ++++++-
 hw/scsi/scsi-disk.c                         |  28 ++++-
 hw/scsi/virtio-scsi-dataplane.c             |  31 +++--
 hw/scsi/virtio-scsi.c                       |  59 +++++++--
 hw/virtio/virtio.c                          |   6 +-
 hw/xen/xen-bus.c                            |  11 +-
 io/channel-command.c                        |   6 +-
 io/channel-file.c                           |   3 +-
 io/channel-socket.c                         |   3 +-
 migration/rdma.c                            |  16 +--
 tests/unit/test-aio.c                       |  27 +----
 tests/unit/test-fdmon-epoll.c               |  73 -----------
 util/aio-posix.c                            |  20 +--
 util/aio-win32.c                            |   8 +-
 util/async.c                                |   3 +-
 util/fdmon-epoll.c                          |  10 --
 util/fdmon-io_uring.c                       |   8 +-
 util/fdmon-poll.c                           |   3 +-
 util/main-loop.c                            |   7 +-
 util/qemu-coroutine-io.c                    |   7 +-
 util/vhost-user-server.c                    |  38 +++---
 hw/scsi/trace-events                        |   2 +
 tests/unit/meson.build                      |   3 -
 53 files changed, 582 insertions(+), 436 deletions(-)
 delete mode 100644 tests/unit/test-fdmon-epoll.c

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:37:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:37:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524122.814767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSbn-0001Op-2Q; Thu, 20 Apr 2023 11:37:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524122.814767; Thu, 20 Apr 2023 11:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSbm-0001Oi-VR; Thu, 20 Apr 2023 11:37:50 +0000
Received: by outflank-mailman (input) for mailman id 524122;
 Thu, 20 Apr 2023 11:37:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppSbl-00017u-1y
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:37:49 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c619e470-df6f-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:37:48 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-587-VCLxhnfSPFqLD1O2cun8pQ-1; Thu, 20 Apr 2023 07:37:42 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 44168380671E;
 Thu, 20 Apr 2023 11:37:41 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 213381410F1C;
 Thu, 20 Apr 2023 11:37:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c619e470-df6f-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990667;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ct1ICfub90ccCopLja+U3Sq9ZQX/65uvuN+fpfLMIoc=;
	b=SibGHQhKPwH260n+GV9EZs4FUJwLqfTDL30rrhZFCTcwSTBOLGtZGqgAZSiUpjayUmkVUa
	OG5EZJmXmeDXx3fluHL0Td9uOw3GVj+OPBoxefJt2QKYdWIpCGtjGrWASqsHD7EIYU1XTg
	pA/PCNmLM4cw6qidhmKcTFPvIpZ/qnA=
X-MC-Unique: VCLxhnfSPFqLD1O2cun8pQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: [PATCH v3 02/20] virtio-scsi: avoid race between unplug and transport event
Date: Thu, 20 Apr 2023 07:37:14 -0400
Message-Id: <20230420113732.336620-3-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

Only report a transport reset event to the guest after the SCSIDevice
has been unrealized by qdev_simple_device_unplug_cb().

qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
to false so that scsi_device_find/get() no longer see it.

scsi_target_emulate_report_luns() also needs to be updated to filter out
SCSIDevices that are unrealized.

These changes ensure that the guest driver does not see the SCSIDevice
that's being unplugged if it responds very quickly to the transport
reset event.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/scsi-bus.c    |  3 ++-
 hw/scsi/virtio-scsi.c | 18 +++++++++---------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 07275fb631..64d7311757 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -486,7 +486,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
             DeviceState *qdev = kid->child;
             SCSIDevice *dev = SCSI_DEVICE(qdev);
 
-            if (dev->channel == channel && dev->id == id && dev->lun != 0) {
+            if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
+                qdev_is_realized(&dev->qdev)) {
                 store_lun(tmp, dev->lun);
                 g_byte_array_append(buf, tmp, 8);
                 len += 8;
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..000961446c 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1063,15 +1063,6 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     SCSIDevice *sd = SCSI_DEVICE(dev);
     AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
-        virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, sd,
-                               VIRTIO_SCSI_T_TRANSPORT_RESET,
-                               VIRTIO_SCSI_EVT_RESET_REMOVED);
-        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
-        virtio_scsi_release(s);
-    }
-
     aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
     aio_enable_external(ctx);
@@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
         virtio_scsi_release(s);
     }
+
+    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
+        virtio_scsi_acquire(s);
+        virtio_scsi_push_event(s, sd,
+                               VIRTIO_SCSI_T_TRANSPORT_RESET,
+                               VIRTIO_SCSI_EVT_RESET_REMOVED);
+        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
+        virtio_scsi_release(s);
+    }
 }
 
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:38:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:38:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524123.814778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSbz-0001rT-CY; Thu, 20 Apr 2023 11:38:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524123.814778; Thu, 20 Apr 2023 11:38:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSbz-0001qe-7U; Thu, 20 Apr 2023 11:38:03 +0000
Received: by outflank-mailman (input) for mailman id 524123;
 Thu, 20 Apr 2023 11:38:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppSbx-0001nK-W9
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:01 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cbebed97-df6f-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:37:59 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-433-iWi0qQDrOru7dTdAiGIe9Q-1; Thu, 20 Apr 2023 07:37:39 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4AD328996F3;
 Thu, 20 Apr 2023 11:37:38 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id A66435AB7A;
 Thu, 20 Apr 2023 11:37:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cbebed97-df6f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990677;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DrOhiPPcL+d6htjDrKGwRstWfOt7XytfnWLBr7kygzo=;
	b=WIsjsDVv7HyDtM1F6k3FF5u8ijz8HUAezGGDfpYIUXukK6UH9iK2vO6JZghNPOWHQUn4Tx
	+HfDJXSjmcjtDDw44RO5K66REStYoIKlqmjiJj0gTqmfzWnttunz00HLGeumfj9B4xTBzX
	NqabMQREecI3QkKYSiObZJeKxJeKOqs=
X-MC-Unique: iWi0qQDrOru7dTdAiGIe9Q-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 01/20] hw/qdev: introduce qdev_is_realized() helper
Date: Thu, 20 Apr 2023 07:37:13 -0400
Message-Id: <20230420113732.336620-2-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5

Add a helper function to check whether the device is realized without
requiring the Big QEMU Lock. The next patch adds a second caller. The
goal is to avoid spreading DeviceState field accesses throughout the
code.

Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/qdev-core.h | 17 ++++++++++++++---
 hw/scsi/scsi-bus.c     |  3 +--
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
index bd50ad5ee1..4d734cf35e 100644
--- a/include/hw/qdev-core.h
+++ b/include/hw/qdev-core.h
@@ -1,6 +1,7 @@
 #ifndef QDEV_CORE_H
 #define QDEV_CORE_H
 
+#include "qemu/atomic.h"
 #include "qemu/queue.h"
 #include "qemu/bitmap.h"
 #include "qemu/rcu.h"
@@ -164,9 +165,6 @@ struct NamedClockList {
 
 /**
  * DeviceState:
- * @realized: Indicates whether the device has been fully constructed.
- *            When accessed outside big qemu lock, must be accessed with
- *            qatomic_load_acquire()
  * @reset: ResettableState for the device; handled by Resettable interface.
  *
  * This structure should not be accessed directly.  We declare it here
@@ -332,6 +330,19 @@ DeviceState *qdev_new(const char *name);
  */
 DeviceState *qdev_try_new(const char *name);
 
+/**
+ * qdev_is_realized:
+ * @dev: The device to check.
+ *
+ * May be called outside big qemu lock.
+ *
+ * Returns: %true% if the device has been fully constructed, %false% otherwise.
+ */
+static inline bool qdev_is_realized(DeviceState *dev)
+{
+    return qatomic_load_acquire(&dev->realized);
+}
+
 /**
  * qdev_realize: Realize @dev.
  * @dev: device to realize
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index c97176110c..07275fb631 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -60,8 +60,7 @@ static SCSIDevice *do_scsi_device_find(SCSIBus *bus,
      * the user access the device.
      */
 
-    if (retval && !include_unrealized &&
-        !qatomic_load_acquire(&retval->qdev.realized)) {
+    if (retval && !include_unrealized && !qdev_is_realized(&retval->qdev)) {
         retval = NULL;
     }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:38:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:38:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524131.814788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScN-0002cq-KO; Thu, 20 Apr 2023 11:38:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524131.814788; Thu, 20 Apr 2023 11:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScN-0002cj-HH; Thu, 20 Apr 2023 11:38:27 +0000
Received: by outflank-mailman (input) for mailman id 524131;
 Thu, 20 Apr 2023 11:38:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppScM-0002cJ-Cw
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:26 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dc04e1c9-df6f-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:38:25 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-556-i8H5la--OhiNA-YvKIl4IA-1; Thu, 20 Apr 2023 07:38:21 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 1588A1C08982;
 Thu, 20 Apr 2023 11:38:20 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 00AB92026D16;
 Thu, 20 Apr 2023 11:37:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc04e1c9-df6f-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990704;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mEUiDYhTGG6/grGbCAPXx5jwh+m436EKZLWWN7O6BQM=;
	b=WokIk7Sd4qauXqXo/HxBQkO6cavPsbyX9Z/3pt7RvwHMs5Q8OqfFxhlmo9oVddy7cvRI+w
	enHK4QXeKlDVJo8FxheFYR2v2SB+vT45SSSSKjUlT3Vz/B/finNbQiokyajldsPi317VD4
	AhalpofJu/UkzA38X5qfpThJeJj3tYM=
X-MC-Unique: i8H5la--OhiNA-YvKIl4IA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: [PATCH v3 03/20] virtio-scsi: stop using aio_disable_external() during unplug
Date: Thu, 20 Apr 2023 07:37:15 -0400
Message-Id: <20230420113732.336620-4-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

This patch is part of an effort to remove the aio_disable_external()
API because it does not fit in a multi-queue block layer world where
many AioContexts may be submitting requests to the same disk.

The SCSI emulation code is already in good shape to stop using
aio_disable_external(). It was only used by commit 9c5aad84da1c
("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
disk") to ensure that virtio_scsi_hotunplug() works while the guest
driver is submitting I/O.

Ensure virtio_scsi_hotunplug() is safe as follows:

1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
   device_set_realized() calls qatomic_set(&dev->realized, false) so
   that future scsi_device_get() calls return NULL because they exclude
   SCSIDevices with realized=false.

   That means virtio-scsi will reject new I/O requests to this
   SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
   virtio_scsi_hotunplug() is still executing. We are protected against
   new requests!

2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
   that in-flight requests are cancelled synchronously. This ensures
   that no in-flight requests remain once qdev_simple_device_unplug_cb()
   returns.

Thanks to these two conditions we don't need aio_disable_external()
anymore.

Cc: Zhengui Li <lizhengui@huawei.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/scsi-disk.c   | 1 +
 hw/scsi/virtio-scsi.c | 3 ---
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 97c9b1c8cd..e01bd84541 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2522,6 +2522,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
 
 static void scsi_unrealize(SCSIDevice *dev)
 {
+    scsi_device_purge_requests(dev, SENSE_CODE(RESET));
     del_boot_device_lchs(&dev->qdev, NULL);
 }
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 000961446c..a02f9233ec 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1061,11 +1061,8 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
-    AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
-    aio_enable_external(ctx);
 
     if (s->ctx) {
         virtio_scsi_acquire(s);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:38:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:38:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524132.814798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScP-0002t8-Re; Thu, 20 Apr 2023 11:38:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524132.814798; Thu, 20 Apr 2023 11:38:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScP-0002sz-P0; Thu, 20 Apr 2023 11:38:29 +0000
Received: by outflank-mailman (input) for mailman id 524132;
 Thu, 20 Apr 2023 11:38:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppScO-0002cJ-Ih
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:28 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dd9ef4fb-df6f-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:38:28 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-179-ZfmDbr5BPEGcI-TkGJ9ftw-1; Thu, 20 Apr 2023 07:38:23 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CC1A43C106AD;
 Thu, 20 Apr 2023 11:38:22 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 3653640BC798;
 Thu, 20 Apr 2023 11:38:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd9ef4fb-df6f-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990706;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cpef4kT6gqf7n/rR0/ynntTmCBLqt8QRSEC7iM5/ics=;
	b=NppMzTwMG4TkN0+g+uLrJIVy3l1ZuT19TuVx2gqvqJ05deYCFkOtq42HmqYoBRRWaU0I9M
	N3N93axiU/ujJr2PLSDuHucuUrmG4VJmsvKUqvCcEXjzScj2c6ohZ3ArPP5FcvNH99Sgin
	qYyZZSd0C8wvjsLfm5QoLyS04aunt2I=
X-MC-Unique: ZfmDbr5BPEGcI-TkGJ9ftw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 04/20] block/export: only acquire AioContext once for vhost_user_server_stop()
Date: Thu, 20 Apr 2023 07:37:16 -0400
Message-Id: <20230420113732.336620-5-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

vhost_user_server_stop() uses AIO_WAIT_WHILE(). AIO_WAIT_WHILE()
requires that AioContext is only acquired once.

Since blk_exp_request_shutdown() already acquires the AioContext, it
shouldn't be acquired again in vhost_user_server_stop().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/vhost-user-server.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 40f36ea214..5b6216069c 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -346,10 +346,9 @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
     aio_context_release(server->ctx);
 }
 
+/* server->ctx acquired by caller */
 void vhost_user_server_stop(VuServer *server)
 {
-    aio_context_acquire(server->ctx);
-
     qemu_bh_delete(server->restart_listener_bh);
     server->restart_listener_bh = NULL;
 
@@ -366,8 +365,6 @@ void vhost_user_server_stop(VuServer *server)
         AIO_WAIT_WHILE(server->ctx, server->co_trip);
     }
 
-    aio_context_release(server->ctx);
-
     if (server->listener) {
         qio_net_listener_disconnect(server->listener);
         object_unref(OBJECT(server->listener));
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:38:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:38:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524133.814808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScR-0003Aq-3i; Thu, 20 Apr 2023 11:38:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524133.814808; Thu, 20 Apr 2023 11:38:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScR-0003Af-05; Thu, 20 Apr 2023 11:38:31 +0000
Received: by outflank-mailman (input) for mailman id 524133;
 Thu, 20 Apr 2023 11:38:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppScP-0002cJ-Fz
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:29 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id de1390bc-df6f-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:38:28 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-551-6HovKqMOObKQmv-qX6PXMg-1; Thu, 20 Apr 2023 07:38:26 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 659D3380671E;
 Thu, 20 Apr 2023 11:38:25 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5E2681410F1C;
 Thu, 20 Apr 2023 11:38:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de1390bc-df6f-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990707;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PELhC/uFAvp+63YQ0JdZUaAWVtmgk8FdDWmhMxd1TSU=;
	b=UeiENIqJchyFt7kOAYluFBvmt/FAnRPzn/HmhrwVz8YuwEQoZbthq7p+b2kEoSxat0ElwE
	pBhhWWgP1BQbPJx2PzjNtTefey4Hi+UMDthpCVR3TzKJuXmD6n305UUKLC0ZPSz//BzXt6
	+g5R+sECbfESeDy3oJz3NAfd+XOMFLI=
X-MC-Unique: 6HovKqMOObKQmv-qX6PXMg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 05/20] util/vhost-user-server: rename refcount to in_flight counter
Date: Thu, 20 Apr 2023 07:37:17 -0400
Message-Id: <20230420113732.336620-6-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

The VuServer object has a refcount field and ref/unref APIs. The name is
confusing because it's actually an in-flight request counter instead of
a refcount.

Normally a refcount destroys the object upon reaching zero. The VuServer
counter is instead used to wake up the vhost-user coroutine when there
are no more requests in flight.

Avoid confusion by renaming refcount and ref/unref to in_flight and
inc/dec.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/vhost-user-server.h     |  6 +++---
 block/export/vhost-user-blk-server.c | 11 +++++++----
 util/vhost-user-server.c             | 14 +++++++-------
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index 25c72433ca..bc0ac9ddb6 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -41,7 +41,7 @@ typedef struct {
     const VuDevIface *vu_iface;
 
     /* Protected by ctx lock */
-    unsigned int refcount;
+    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -60,8 +60,8 @@ bool vhost_user_server_start(VuServer *server,
 
 void vhost_user_server_stop(VuServer *server);
 
-void vhost_user_server_ref(VuServer *server);
-void vhost_user_server_unref(VuServer *server);
+void vhost_user_server_inc_in_flight(VuServer *server);
+void vhost_user_server_dec_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 3409d9e02e..e93f2ed6b4 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -49,7 +49,10 @@ static void vu_blk_req_complete(VuBlkReq *req, size_t in_len)
     free(req);
 }
 
-/* Called with server refcount increased, must decrease before returning */
+/*
+ * Called with server in_flight counter increased, must decrease before
+ * returning.
+ */
 static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
 {
     VuBlkReq *req = opaque;
@@ -67,12 +70,12 @@ static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
                                     in_num, out_num);
     if (in_len < 0) {
         free(req);
-        vhost_user_server_unref(server);
+        vhost_user_server_dec_in_flight(server);
         return;
     }
 
     vu_blk_req_complete(req, in_len);
-    vhost_user_server_unref(server);
+    vhost_user_server_dec_in_flight(server);
 }
 
 static void vu_blk_process_vq(VuDev *vu_dev, int idx)
@@ -94,7 +97,7 @@ static void vu_blk_process_vq(VuDev *vu_dev, int idx)
         Coroutine *co =
             qemu_coroutine_create(vu_blk_virtio_process_req, req);
 
-        vhost_user_server_ref(server);
+        vhost_user_server_inc_in_flight(server);
         qemu_coroutine_enter(co);
     }
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 5b6216069c..1622f8cfb3 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -75,16 +75,16 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
     error_report("vu_panic: %s", buf);
 }
 
-void vhost_user_server_ref(VuServer *server)
+void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->refcount++;
+    server->in_flight++;
 }
 
-void vhost_user_server_unref(VuServer *server)
+void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->refcount--;
-    if (server->wait_idle && !server->refcount) {
+    server->in_flight--;
+    if (server->wait_idle && !server->in_flight) {
         aio_co_wake(server->co_trip);
     }
 }
@@ -192,13 +192,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->refcount) {
+    if (server->in_flight) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
-    assert(server->refcount == 0);
+    assert(server->in_flight == 0);
 
     vu_deinit(vu_dev);
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:38:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:38:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524136.814818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScX-0003bC-Fz; Thu, 20 Apr 2023 11:38:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524136.814818; Thu, 20 Apr 2023 11:38:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScX-0003b2-CU; Thu, 20 Apr 2023 11:38:37 +0000
Received: by outflank-mailman (input) for mailman id 524136;
 Thu, 20 Apr 2023 11:38:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppScV-0001nK-Cf
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:35 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e0e935c8-df6f-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:38:33 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-135-Ag2DDzwlMYCeNFYqlalYBQ-1; Thu, 20 Apr 2023 07:38:29 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2BF321C08976;
 Thu, 20 Apr 2023 11:38:28 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 7FBD1C16024;
 Thu, 20 Apr 2023 11:38:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0e935c8-df6f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990712;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+bLg6uTzP7Qgt6lQ3xKdY10AY3Nt3xLke1uGL1u9ShU=;
	b=APo/zLq4WecTRndfbXeZ96HbecFzPnDdiY2zExDS7D9BF/bda666K4lZry2iK6ZOkYU2sL
	LbuBZ1r6QfAxuFkKoC/mcqi+pJkT2zRgKOmuHZ7qP9BFmTnBu5CZ3g/Hoorvxf/Xc3TnQ5
	nQG3NSL+EgmroC5WSo20NjdZ/XTOtOg=
X-MC-Unique: Ag2DDzwlMYCeNFYqlalYBQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 06/20] block/export: wait for vhost-user-blk requests when draining
Date: Thu, 20 Apr 2023 07:37:18 -0400
Message-Id: <20230420113732.336620-7-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

Each vhost-user-blk request runs in a coroutine. When the BlockBackend
enters a drained section we need to enter a quiescent state. Currently
any in-flight requests race with bdrv_drained_begin() because it is
unaware of vhost-user-blk requests.

When blk_co_preadv/pwritev()/etc returns, it wakes the
bdrv_drained_begin() thread, but vhost-user-blk request processing has
not yet finished. The request coroutine continues executing while the
main loop thread thinks it is in a drained section.

One example where this is unsafe is for blk_set_aio_context() where
bdrv_drained_begin() is called before .aio_context_detached() and
.aio_context_attach(). If request coroutines are still running after
bdrv_drained_begin(), then the AioContext could change underneath them
and they race with new requests processed in the new AioContext. This
could lead to virtqueue corruption, for example.

(This example is theoretical; I came across it while reading the code
and have not tried to reproduce it.)

It's easy to make bdrv_drained_begin() wait for in-flight requests: add
a .drained_poll() callback that checks the VuServer's in-flight counter.
VuServer just needs an API that returns true when there are requests in
flight. The in-flight counter needs to be atomic.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/vhost-user-server.h     |  4 +++-
 block/export/vhost-user-blk-server.c | 19 +++++++++++++++++++
 util/vhost-user-server.c             | 14 ++++++++++----
 3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index bc0ac9ddb6..b1c1cda886 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -40,8 +40,9 @@ typedef struct {
     int max_queues;
     const VuDevIface *vu_iface;
 
+    unsigned int in_flight; /* atomic */
+
     /* Protected by ctx lock */
-    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -62,6 +63,7 @@ void vhost_user_server_stop(VuServer *server);
 
 void vhost_user_server_inc_in_flight(VuServer *server);
 void vhost_user_server_dec_in_flight(VuServer *server);
+bool vhost_user_server_has_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index e93f2ed6b4..dbf5207162 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -254,6 +254,22 @@ static void vu_blk_exp_request_shutdown(BlockExport *exp)
     vhost_user_server_stop(&vexp->vu_server);
 }
 
+/*
+ * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
+ *
+ * Called with vexp->export.ctx acquired.
+ */
+static bool vu_blk_drained_poll(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    return vhost_user_server_has_in_flight(&vexp->vu_server);
+}
+
+static const BlockDevOps vu_blk_dev_ops = {
+    .drained_poll  = vu_blk_drained_poll,
+};
+
 static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              Error **errp)
 {
@@ -292,6 +308,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
                              logical_block_size, num_queues);
 
+    blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vexp);
 
@@ -299,6 +316,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                                  num_queues, &vu_blk_iface, errp)) {
         blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
                                         blk_aio_detach, vexp);
+        blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
     }
@@ -312,6 +330,7 @@ static void vu_blk_exp_delete(BlockExport *exp)
 
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vexp);
+    blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
 
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 1622f8cfb3..2e6b640050 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
 void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->in_flight++;
+    qatomic_inc(&server->in_flight);
 }
 
 void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->in_flight--;
-    if (server->wait_idle && !server->in_flight) {
-        aio_co_wake(server->co_trip);
+    if (qatomic_fetch_dec(&server->in_flight) == 1) {
+        if (server->wait_idle) {
+            aio_co_wake(server->co_trip);
+        }
     }
 }
 
+bool vhost_user_server_has_in_flight(VuServer *server)
+{
+    return qatomic_load_acquire(&server->in_flight) > 0;
+}
+
 static bool coroutine_fn
 vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:38:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:38:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524138.814828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSca-00042O-Q9; Thu, 20 Apr 2023 11:38:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524138.814828; Thu, 20 Apr 2023 11:38:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSca-00042B-LL; Thu, 20 Apr 2023 11:38:40 +0000
Received: by outflank-mailman (input) for mailman id 524138;
 Thu, 20 Apr 2023 11:38:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppSca-0002cJ-53
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:40 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e45ca430-df6f-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:38:39 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-202-Vrel_1NgP6-pdSb2HQrVtQ-1; Thu, 20 Apr 2023 07:38:32 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id AA82B1C08976;
 Thu, 20 Apr 2023 11:38:31 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id DDC2C40C2064;
 Thu, 20 Apr 2023 11:38:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e45ca430-df6f-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990718;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rd8h2e2Z0uhjua/p0PDxA8jGvK3/DbHaZUIqG5W/DmA=;
	b=fvHgmVgXIotklSEcHXw+WuOBbpHBiW0m5ejbQXp7SA0O/VECdVok49Ox0pn8ycNkcBdFTq
	3yc0b1IqyCghBFBhFRibbtcebgTj5KW9y0vyzAr7UN6BXDKXC8N7gKuiDGzUJHiz4O7kqX
	9s1/P4fUTGhimn770wMIsnFhG6+doGU=
X-MC-Unique: Vrel_1NgP6-pdSb2HQrVtQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 07/20] block/export: stop using is_external in vhost-user-blk server
Date: Thu, 20 Apr 2023 07:37:19 -0400
Message-Id: <20230420113732.336620-8-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.

Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used since multiple AioContexts may be processing I/O, not
just one.

Switch to BlockDevOps->drained_begin/end() callbacks.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
 util/vhost-user-server.c             | 10 +++----
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index dbf5207162..6e1bc196fb 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -207,22 +207,6 @@ static const VuDevIface vu_blk_iface = {
     .process_msg           = vu_blk_process_msg,
 };
 
-static void blk_aio_attached(AioContext *ctx, void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vexp->export.ctx = ctx;
-    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
-}
-
-static void blk_aio_detach(void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vhost_user_server_detach_aio_context(&vexp->vu_server);
-    vexp->export.ctx = NULL;
-}
-
 static void
 vu_blk_initialize_config(BlockDriverState *bs,
                          struct virtio_blk_config *config,
@@ -254,6 +238,25 @@ static void vu_blk_exp_request_shutdown(BlockExport *exp)
     vhost_user_server_stop(&vexp->vu_server);
 }
 
+/* Called with vexp->export.ctx acquired */
+static void vu_blk_drained_begin(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
+}
+
+/* Called with vexp->export.blk AioContext acquired */
+static void vu_blk_drained_end(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    vexp->export.ctx = blk_get_aio_context(vexp->export.blk);
+
+    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
+}
+
 /*
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
  *
@@ -267,6 +270,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end   = vu_blk_drained_end,
     .drained_poll  = vu_blk_drained_poll,
 };
 
@@ -309,13 +314,9 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              logical_block_size, num_queues);
 
     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
-    blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                 vexp);
 
     if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
                                  num_queues, &vu_blk_iface, errp)) {
-        blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
-                                        blk_aio_detach, vexp);
         blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
@@ -328,8 +329,6 @@ static void vu_blk_exp_delete(BlockExport *exp)
 {
     VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
 
-    blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                    vexp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 2e6b640050..332aea9306 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                        NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:38:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:38:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524141.814838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSce-0004XC-1A; Thu, 20 Apr 2023 11:38:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524141.814838; Thu, 20 Apr 2023 11:38:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScd-0004X1-UQ; Thu, 20 Apr 2023 11:38:43 +0000
Received: by outflank-mailman (input) for mailman id 524141;
 Thu, 20 Apr 2023 11:38:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppScd-0002cJ-3V
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:43 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e62d0ea4-df6f-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:38:42 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-424-DvBNxRfbMdOy5h9dVxM1-w-1; Thu, 20 Apr 2023 07:38:35 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4E34C857FB2;
 Thu, 20 Apr 2023 11:38:34 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id A89495AB7A;
 Thu, 20 Apr 2023 11:38:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e62d0ea4-df6f-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990721;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SGtAMiwxZ6UOXxi5QjVUlwj1Vpn4MSdaUDJYH4vesKw=;
	b=ghXl9mq63vET+nT25ry2UGPpAmUuzFmER1IYBuC4eH18JpncV63Q97BLxNTW5np7sTjYqf
	aMpyvshndRK+WXuDI7+/tU8SB3pESeCxZk1Gc8QQ5uGsVpsYospTZgbqE/IIRGulYUqMlt
	fJAg00AWks3F3fJu8C/WNkguxdZVtH4=
X-MC-Unique: DvBNxRfbMdOy5h9dVxM1-w-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: [PATCH v3 08/20] hw/xen: do not use aio_set_fd_handler(is_external=true) in xen_xenstore
Date: Thu, 20 Apr 2023 07:37:20 -0400
Message-Id: <20230420113732.336620-9-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5

There is no need to suspend activity between aio_disable_external() and
aio_enable_external(), which are mainly used for the block layer's drain
operation.

This is part of ongoing work to remove the aio_disable_external() API.

Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/i386/kvm/xen_xenstore.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 900679af8a..6e81bc8791 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), true,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:38:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:38:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524142.814848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScg-0004tf-D3; Thu, 20 Apr 2023 11:38:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524142.814848; Thu, 20 Apr 2023 11:38:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScg-0004tP-93; Thu, 20 Apr 2023 11:38:46 +0000
Received: by outflank-mailman (input) for mailman id 524142;
 Thu, 20 Apr 2023 11:38:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppSce-0001nK-BX
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:44 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e64fda60-df6f-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:38:42 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-121-zMSb4NMHMQGsHyfUy_KGfA-1; Thu, 20 Apr 2023 07:38:40 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 59959886063;
 Thu, 20 Apr 2023 11:38:39 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 95E6F492C3E;
 Thu, 20 Apr 2023 11:38:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e64fda60-df6f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990721;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TBoAHwSzsacA9uVcaYGhwuzX40gWj9dN8dhM1t9uyLM=;
	b=IOM8cZKyFfKXM8lt2kn57jDDH/ibCphamlDD/EO4eSbo/Gx5FtLV5ylddDsXEV+Yc/ImAl
	fQWgpzKa5KfZUBxC6IrggOH3MXlCE8EBjorDgbxErDZjKx0rkKXkBcpDzH0q5oYXlZy3QG
	gEMHbEfbYtk9NVxh490V9j2w1aWyYhs=
X-MC-Unique: zMSb4NMHMQGsHyfUy_KGfA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 10/20] block: drain from main loop thread in bdrv_co_yield_to_drain()
Date: Thu, 20 Apr 2023 07:37:22 -0400
Message-Id: <20230420113732.336620-11-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

For simplicity, always run BlockDevOps .drained_begin/end/poll()
callbacks in the main loop thread. This makes it easier to implement the
callbacks and avoids extra locks.

Move the function pointer declarations from the I/O Code section to the
Global State section in block-backend-common.h.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/sysemu/block-backend-common.h | 25 +++++++++++++------------
 block/io.c                            |  3 ++-
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/include/sysemu/block-backend-common.h b/include/sysemu/block-backend-common.h
index 2391679c56..780cea7305 100644
--- a/include/sysemu/block-backend-common.h
+++ b/include/sysemu/block-backend-common.h
@@ -59,6 +59,19 @@ typedef struct BlockDevOps {
      */
     bool (*is_medium_locked)(void *opaque);
 
+    /*
+     * Runs when the backend receives a drain request.
+     */
+    void (*drained_begin)(void *opaque);
+    /*
+     * Runs when the backend's last drain request ends.
+     */
+    void (*drained_end)(void *opaque);
+    /*
+     * Is the device still busy?
+     */
+    bool (*drained_poll)(void *opaque);
+
     /*
      * I/O API functions. These functions are thread-safe.
      *
@@ -76,18 +89,6 @@ typedef struct BlockDevOps {
      * Runs when the size changed (e.g. monitor command block_resize)
      */
     void (*resize_cb)(void *opaque);
-    /*
-     * Runs when the backend receives a drain request.
-     */
-    void (*drained_begin)(void *opaque);
-    /*
-     * Runs when the backend's last drain request ends.
-     */
-    void (*drained_end)(void *opaque);
-    /*
-     * Is the device still busy?
-     */
-    bool (*drained_poll)(void *opaque);
 } BlockDevOps;
 
 /*
diff --git a/block/io.c b/block/io.c
index db438c7657..6285d67546 100644
--- a/block/io.c
+++ b/block/io.c
@@ -331,7 +331,8 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
     if (ctx != co_ctx) {
         aio_context_release(ctx);
     }
-    replay_bh_schedule_oneshot_event(ctx, bdrv_co_drain_bh_cb, &data);
+    replay_bh_schedule_oneshot_event(qemu_get_aio_context(),
+                                     bdrv_co_drain_bh_cb, &data);
 
     qemu_coroutine_yield();
     /* If we are resumed from some other event (such as an aio completion or a
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:38:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:38:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524143.814854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSch-000526-3d; Thu, 20 Apr 2023 11:38:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524143.814854; Thu, 20 Apr 2023 11:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppScg-000500-S8; Thu, 20 Apr 2023 11:38:46 +0000
Received: by outflank-mailman (input) for mailman id 524143;
 Thu, 20 Apr 2023 11:38:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppScg-0001nK-59
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:46 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e76816f6-df6f-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:38:44 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-669-FevHGkSHNviPfel5s39mPA-1; Thu, 20 Apr 2023 07:38:37 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D7E978996F3;
 Thu, 20 Apr 2023 11:38:36 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 242C340C2064;
 Thu, 20 Apr 2023 11:38:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e76816f6-df6f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990723;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6sTqQmJRDK/JwUQj6FoYqmlPFJsGwZK9pc+TSucKp9o=;
	b=X1W3GlOs2z9YBADvE1imCa68m4ULzfoqSPlxKlz8A1Dm+c4oNlaz46A4jp/GX9B2JeMZeg
	2P59So8U5chJGviYUq7pS7zvRDcd+F4962cCU5sd220bf8Kga6zCqTFbwqWVAw+J5/dVNL
	HM9KLI7Nmbig5+K1l83RvRd6L7yht5c=
X-MC-Unique: FevHGkSHNviPfel5s39mPA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 09/20] block: add blk_in_drain() API
Date: Thu, 20 Apr 2023 07:37:21 -0400
Message-Id: <20230420113732.336620-10-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

The BlockBackend quiesce_counter is greater than zero during drained
sections. Add an API to check whether the BlockBackend is in a drained
section.

The next patch will use this API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/sysemu/block-backend-global-state.h | 1 +
 block/block-backend.c                       | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/include/sysemu/block-backend-global-state.h b/include/sysemu/block-backend-global-state.h
index 2b6d27db7c..ac7cbd6b5e 100644
--- a/include/sysemu/block-backend-global-state.h
+++ b/include/sysemu/block-backend-global-state.h
@@ -78,6 +78,7 @@ void blk_activate(BlockBackend *blk, Error **errp);
 int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags);
 void blk_aio_cancel(BlockAIOCB *acb);
 int blk_commit_all(void);
+bool blk_in_drain(BlockBackend *blk);
 void blk_drain(BlockBackend *blk);
 void blk_drain_all(void);
 void blk_set_on_error(BlockBackend *blk, BlockdevOnError on_read_error,
diff --git a/block/block-backend.c b/block/block-backend.c
index 9e0f48692a..e0a1d9ec0f 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1270,6 +1270,13 @@ blk_check_byte_request(BlockBackend *blk, int64_t offset, int64_t bytes)
     return 0;
 }
 
+/* Are we currently in a drained section? */
+bool blk_in_drain(BlockBackend *blk)
+{
+    GLOBAL_STATE_CODE(); /* change to IO_OR_GS_CODE(), if necessary */
+    return qatomic_read(&blk->quiesce_counter);
+}
+
 /* To be called between exactly one pair of blk_inc/dec_in_flight() */
 static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:38:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:38:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524147.814868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSck-0005ha-BH; Thu, 20 Apr 2023 11:38:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524147.814868; Thu, 20 Apr 2023 11:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSck-0005hJ-76; Thu, 20 Apr 2023 11:38:50 +0000
Received: by outflank-mailman (input) for mailman id 524147;
 Thu, 20 Apr 2023 11:38:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppScj-0002cJ-At
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:49 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e9e9ddee-df6f-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:38:48 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-450-Pr_VEZpzNU-e5HY9btg3sQ-1; Thu, 20 Apr 2023 07:38:43 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 575F81C0897F;
 Thu, 20 Apr 2023 11:38:42 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4356C40BC798;
 Thu, 20 Apr 2023 11:38:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9e9ddee-df6f-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990727;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pr0VKOkDxuZ5hinADqgnbKz4Kufrp9KJiNLqpg8yv3Q=;
	b=MOuAiicXINLawUAJu7KYm0xS8qi1jrt+8xG82IVAe9ZoCp7lX6ZEvI2EQB1KUcjJ/95s28
	YW8HFfahxJvSCV1LL7JagQUS0VYaiLvKMA2boFzXEhzvSi0695rcimGPNM/VRXmJEgu/Qf
	HeaFCRyse25AyYAuOm+M+GqHGvDDn+g=
X-MC-Unique: Pr_VEZpzNU-e5HY9btg3sQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 11/20] xen-block: implement BlockDevOps->drained_begin()
Date: Thu, 20 Apr 2023 07:37:23 -0400
Message-Id: <20230420113732.336620-12-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

Detach event channels during drained sections to stop I/O submission
from the ring. xen-block is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.

Extend xen_device_set_event_channel_context() to allow ctx=NULL. The
event channel still exists but the event loop does not monitor the file
descriptor. Event channel processing can resume by calling
xen_device_set_event_channel_context() with a non-NULL ctx.

Factor out xen_device_set_event_channel_context() calls in
hw/block/dataplane/xen-block.c into attach/detach helper functions.
Incidentally, these don't require the AioContext lock because
aio_set_fd_handler() is thread-safe.

It's safer to register BlockDevOps after the dataplane instance has been
created. The BlockDevOps .drained_begin/end() callbacks depend on the
dataplane instance, so move the blk_set_dev_ops() call after
xen_block_dataplane_create().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/xen-block.h |  2 ++
 hw/block/dataplane/xen-block.c | 42 +++++++++++++++++++++++++---------
 hw/block/xen-block.c           | 24 ++++++++++++++++---
 hw/xen/xen-bus.c               |  7 ++++--
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/hw/block/dataplane/xen-block.h b/hw/block/dataplane/xen-block.h
index 76dcd51c3d..7b8e9df09f 100644
--- a/hw/block/dataplane/xen-block.h
+++ b/hw/block/dataplane/xen-block.h
@@ -26,5 +26,7 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
                                unsigned int protocol,
                                Error **errp);
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane);
 
 #endif /* HW_BLOCK_DATAPLANE_XEN_BLOCK_H */
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 734da42ea7..02e0fd6115 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -663,6 +663,30 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
     g_free(dataplane);
 }
 
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         NULL, &error_abort);
+}
+
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         dataplane->ctx, &error_abort);
+}
+
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 {
     XenDevice *xendev;
@@ -673,13 +697,11 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 
     xendev = dataplane->xendev;
 
-    aio_context_acquire(dataplane->ctx);
-    if (dataplane->event_channel) {
-        /* Only reason for failure is a NULL channel */
-        xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                             qemu_get_aio_context(),
-                                             &error_abort);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_detach(dataplane);
     }
+
+    aio_context_acquire(dataplane->ctx);
     /* Xen doesn't have multiple users for nodes, so this can't fail */
     blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(dataplane->ctx);
@@ -818,11 +840,9 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
     blk_set_aio_context(dataplane->blk, dataplane->ctx, NULL);
     aio_context_release(old_context);
 
-    /* Only reason for failure is a NULL channel */
-    aio_context_acquire(dataplane->ctx);
-    xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                         dataplane->ctx, &error_abort);
-    aio_context_release(dataplane->ctx);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_attach(dataplane);
+    }
 
     return;
 
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index f5a744589d..f099914831 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -189,8 +189,26 @@ static void xen_block_resize_cb(void *opaque)
     xen_device_backend_printf(xendev, "state", "%u", state);
 }
 
+/* Suspend request handling */
+static void xen_block_drained_begin(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_detach(blockdev->dataplane);
+}
+
+/* Resume request handling */
+static void xen_block_drained_end(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_attach(blockdev->dataplane);
+}
+
 static const BlockDevOps xen_block_dev_ops = {
-    .resize_cb = xen_block_resize_cb,
+    .resize_cb     = xen_block_resize_cb,
+    .drained_begin = xen_block_drained_begin,
+    .drained_end   = xen_block_drained_end,
 };
 
 static void xen_block_realize(XenDevice *xendev, Error **errp)
@@ -242,8 +260,6 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
         return;
     }
 
-    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
-
     if (conf->discard_granularity == -1) {
         conf->discard_granularity = conf->physical_block_size;
     }
@@ -277,6 +293,8 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     blockdev->dataplane =
         xen_block_dataplane_create(xendev, blk, conf->logical_block_size,
                                    blockdev->props.iothread);
+
+    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
 }
 
 static void xen_block_frontend_changed(XenDevice *xendev,
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c59850b1de..b8f408c9ed 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -846,8 +846,11 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
-                       xen_device_event, NULL, xen_device_poll, NULL, channel);
+    if (ctx) {
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
+                           true, xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
+    }
 }
 
 XenEventChannel *xen_device_bind_event_channel(XenDevice *xendev,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:39:08 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 12/20] hw/xen: do not set is_external=true on evtchn fds
Date: Thu, 20 Apr 2023 07:37:24 -0400
Message-Id: <20230420113732.336620-13-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>

is_external=true suspends fd handlers between aio_disable_external() and
aio_enable_external(). The block layer's drain operation uses this
mechanism to prevent new I/O from sneaking in between
bdrv_drained_begin() and bdrv_drained_end().
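
The mechanism being retired here is a simple disable counter; the inline helpers removed by the final patch of this series amount to the following simplified, single-threaded sketch (the real helpers use qatomic_* operations and also kick the event loop):

```c
#include <assert.h>
#include <stdbool.h>

typedef struct AioCtx {
    int external_disable_cnt;
} AioCtx;

/* Non-atomic sketch of aio_disable_external(). */
static void aio_disable_external_sketch(AioCtx *ctx)
{
    ctx->external_disable_cnt++;
}

/* Non-atomic sketch of aio_enable_external(). */
static void aio_enable_external_sketch(AioCtx *ctx)
{
    assert(ctx->external_disable_cnt > 0);
    ctx->external_disable_cnt--;
}

/* Sketch of aio_node_check(): true means this handler may be polled now. */
static bool aio_node_check_sketch(AioCtx *ctx, bool is_external)
{
    return !is_external || ctx->external_disable_cnt == 0;
}
```

Handlers registered with is_external=false are never affected by the counter, which is why converting devices to is_external=false (plus explicit drained_begin/end callbacks) makes the counter dead code.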

The previous commit converted the xen-block device to use BlockDevOps
.drained_begin/end() callbacks. It no longer relies on is_external=true
so it is safe to pass is_external=false.

This is part of ongoing work to remove the aio_disable_external() API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/xen/xen-bus.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index b8f408c9ed..bf256d4da2 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           true, xen_device_event, NULL, xen_device_poll, NULL,
-                           channel);
+                           false, xen_device_event, NULL, xen_device_poll,
+                           NULL, channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:49:13 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 16/20] virtio: make it possible to detach host notifier from any thread
Date: Thu, 20 Apr 2023 07:37:28 -0400
Message-Id: <20230420113732.336620-17-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>

virtio_queue_aio_detach_host_notifier() does two things:
1. It removes the fd handler from the event loop.
2. It processes the virtqueue one last time.

The first step can be performed by any thread and without taking the
AioContext lock.

The second step may need the AioContext lock (depending on the device
implementation) and runs in the thread where request processing takes
place. virtio-blk and virtio-scsi therefore call
virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in
the AioContext.

Scheduling a BH is undesirable for .drained_begin() functions. The next
patch will introduce a .drained_begin() function that needs to call
virtio_queue_aio_detach_host_notifier().

Move the virtqueue processing out to the callers of
virtio_queue_aio_detach_host_notifier() so that the function can be
called from any thread. This is in preparation for the next patch.
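
The split can be illustrated with a toy model (hypothetical names, not the real virtio structures): after this patch, detach only unregisters the handler and is thread-agnostic, while the caller explicitly performs the final processing pass, mirroring virtio_queue_host_notifier_read():

```c
#include <stdbool.h>

typedef struct VQ {
    bool handler_registered;
    int pending;     /* requests the notifier has not yet processed */
    int processed;
} VQ;

/* After this series: only removes the fd handler; safe from any thread
 * and needs no AioContext lock. */
static void vq_detach_host_notifier(VQ *vq)
{
    vq->handler_registered = false;
}

/* The final processing step, now invoked explicitly by callers in the
 * thread where request processing takes place. */
static void vq_notifier_read(VQ *vq)
{
    vq->processed += vq->pending;
    vq->pending = 0;
}
```

This is why the stop_bh functions in the diff below now pair each virtio_queue_aio_detach_host_notifier() call with a virtio_queue_host_notifier_read() call.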

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 2 ++
 hw/scsi/virtio-scsi-dataplane.c | 9 +++++++++
 2 files changed, 11 insertions(+)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index b28d81737e..bd7cc6e76b 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -286,8 +286,10 @@ static void virtio_blk_data_plane_stop_bh(void *opaque)
 
     for (i = 0; i < s->conf->num_queues; i++) {
         VirtQueue *vq = virtio_get_queue(s->vdev, i);
+        EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
 
         virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }
 
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 20bb91766e..81643445ed 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -71,12 +71,21 @@ static void virtio_scsi_dataplane_stop_bh(void *opaque)
 {
     VirtIOSCSI *s = opaque;
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
+    EventNotifier *host_notifier;
     int i;
 
     virtio_queue_aio_detach_host_notifier(vs->ctrl_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->ctrl_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     virtio_queue_aio_detach_host_notifier(vs->event_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->event_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     for (i = 0; i < vs->conf.num_queues; i++) {
         virtio_queue_aio_detach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        host_notifier = virtio_queue_get_host_notifier(vs->cmd_vqs[i]);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:49:20 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 20/20] aio: remove aio_disable_external() API
Date: Thu, 20 Apr 2023 07:37:32 -0400
Message-Id: <20230420113732.336620-21-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>

All callers now pass is_external=false to aio_set_fd_handler() and
aio_set_event_notifier(). The aio_disable_external() API that
temporarily disables fd handlers registered with is_external=true is
therefore dead code.

Remove aio_disable_external(), aio_enable_external(), and the
is_external arguments to aio_set_fd_handler() and
aio_set_event_notifier().

The entire test-fdmon-epoll test is removed because its sole purpose was
testing aio_disable_external().

Parts of this patch were generated using the following coccinelle
(https://coccinelle.lip6.fr/) semantic patch:

  @@
  expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
  @@
  - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
  + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)

  @@
  expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
  @@
  - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
  + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)

Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/aio.h           | 55 --------------------------
 util/aio-posix.h              |  1 -
 block.c                       |  7 ----
 block/blkio.c                 | 15 +++----
 block/curl.c                  | 10 ++---
 block/export/fuse.c           |  8 ++--
 block/export/vduse-blk.c      | 10 ++---
 block/io.c                    |  2 -
 block/io_uring.c              |  4 +-
 block/iscsi.c                 |  3 +-
 block/linux-aio.c             |  4 +-
 block/nfs.c                   |  5 +--
 block/nvme.c                  |  8 ++--
 block/ssh.c                   |  4 +-
 block/win32-aio.c             |  6 +--
 hw/i386/kvm/xen_xenstore.c    |  2 +-
 hw/virtio/virtio.c            |  6 +--
 hw/xen/xen-bus.c              |  8 ++--
 io/channel-command.c          |  6 +--
 io/channel-file.c             |  3 +-
 io/channel-socket.c           |  3 +-
 migration/rdma.c              | 16 ++++----
 tests/unit/test-aio.c         | 27 +------------
 tests/unit/test-fdmon-epoll.c | 73 -----------------------------------
 util/aio-posix.c              | 20 +++-------
 util/aio-win32.c              |  8 +---
 util/async.c                  |  3 +-
 util/fdmon-epoll.c            | 10 -----
 util/fdmon-io_uring.c         |  8 +---
 util/fdmon-poll.c             |  3 +-
 util/main-loop.c              |  7 ++--
 util/qemu-coroutine-io.c      |  7 ++--
 util/vhost-user-server.c      | 11 +++---
 tests/unit/meson.build        |  3 --
 34 files changed, 76 insertions(+), 290 deletions(-)
 delete mode 100644 tests/unit/test-fdmon-epoll.c

diff --git a/include/block/aio.h b/include/block/aio.h
index e267d918fd..d4ce01ea08 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -467,7 +467,6 @@ bool aio_poll(AioContext *ctx, bool blocking);
  */
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -483,7 +482,6 @@ void aio_set_fd_handler(AioContext *ctx,
  */
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            bool is_external,
                             EventNotifierHandler *io_read,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready);
@@ -612,59 +610,6 @@ static inline void aio_timer_init(AioContext *ctx,
  */
 int64_t aio_compute_timeout(AioContext *ctx);
 
-/**
- * aio_disable_external:
- * @ctx: the aio context
- *
- * Disable the further processing of external clients.
- */
-static inline void aio_disable_external(AioContext *ctx)
-{
-    qatomic_inc(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_enable_external:
- * @ctx: the aio context
- *
- * Enable the processing of external clients.
- */
-static inline void aio_enable_external(AioContext *ctx)
-{
-    int old;
-
-    old = qatomic_fetch_dec(&ctx->external_disable_cnt);
-    assert(old > 0);
-    if (old == 1) {
-        /* Kick event loop so it re-arms file descriptors */
-        aio_notify(ctx);
-    }
-}
-
-/**
- * aio_external_disabled:
- * @ctx: the aio context
- *
- * Return true if the external clients are disabled.
- */
-static inline bool aio_external_disabled(AioContext *ctx)
-{
-    return qatomic_read(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_node_check:
- * @ctx: the aio context
- * @is_external: Whether or not the checked node is an external event source.
- *
- * Check if the node's is_external flag is okay to be polled by the ctx at this
- * moment. True means green light.
- */
-static inline bool aio_node_check(AioContext *ctx, bool is_external)
-{
-    return !is_external || !qatomic_read(&ctx->external_disable_cnt);
-}
-
 /**
  * aio_co_schedule:
  * @ctx: the aio context
diff --git a/util/aio-posix.h b/util/aio-posix.h
index 80b927c7f4..4264c518be 100644
--- a/util/aio-posix.h
+++ b/util/aio-posix.h
@@ -38,7 +38,6 @@ struct AioHandler {
 #endif
     int64_t poll_idle_timeout; /* when to stop userspace polling */
     bool poll_ready; /* has polling detected an event? */
-    bool is_external;
 };
 
 /* Add a handler to a ready list */
diff --git a/block.c b/block.c
index a79297f99b..e9625ffeee 100644
--- a/block.c
+++ b/block.c
@@ -7254,9 +7254,6 @@ static void bdrv_detach_aio_context(BlockDriverState *bs)
         bs->drv->bdrv_detach_aio_context(bs);
     }
 
-    if (bs->quiesce_counter) {
-        aio_enable_external(bs->aio_context);
-    }
     bs->aio_context = NULL;
 }
 
@@ -7266,10 +7263,6 @@ static void bdrv_attach_aio_context(BlockDriverState *bs,
     BdrvAioNotifier *ban, *ban_tmp;
     GLOBAL_STATE_CODE();
 
-    if (bs->quiesce_counter) {
-        aio_disable_external(new_context);
-    }
-
     bs->aio_context = new_context;
 
     if (bs->drv && bs->drv->bdrv_attach_aio_context) {
diff --git a/block/blkio.c b/block/blkio.c
index 0cdc99a729..72117fa005 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -306,23 +306,18 @@ static void blkio_attach_aio_context(BlockDriverState *bs,
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(new_context,
-                       s->completion_fd,
-                       false,
-                       blkio_completion_fd_read,
-                       NULL,
+    aio_set_fd_handler(new_context, s->completion_fd,
+                       blkio_completion_fd_read, NULL,
                        blkio_completion_fd_poll,
-                       blkio_completion_fd_poll_ready,
-                       bs);
+                       blkio_completion_fd_poll_ready, bs);
 }
 
 static void blkio_detach_aio_context(BlockDriverState *bs)
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(bdrv_get_aio_context(bs),
-                       s->completion_fd,
-                       false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(bdrv_get_aio_context(bs), s->completion_fd, NULL, NULL,
+                       NULL, NULL, NULL);
 }
 
 /* Call with s->blkio_lock held to submit I/O after enqueuing a new request */
diff --git a/block/curl.c b/block/curl.c
index 8bb39a134e..0fc42d03d7 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -132,7 +132,7 @@ static gboolean curl_drop_socket(void *key, void *value, void *opaque)
     CURLSocket *socket = value;
     BDRVCURLState *s = socket->s;
 
-    aio_set_fd_handler(s->aio_context, socket->fd, false,
+    aio_set_fd_handler(s->aio_context, socket->fd,
                        NULL, NULL, NULL, NULL, NULL);
     return true;
 }
@@ -180,20 +180,20 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     trace_curl_sock_cb(action, (int)fd);
     switch (action) {
         case CURL_POLL_IN:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                curl_multi_do, NULL, NULL, NULL, socket);
             break;
         case CURL_POLL_OUT:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                NULL, curl_multi_do, NULL, NULL, socket);
             break;
         case CURL_POLL_INOUT:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                curl_multi_do, curl_multi_do,
                                NULL, NULL, socket);
             break;
         case CURL_POLL_REMOVE:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                NULL, NULL, NULL, NULL, NULL);
             break;
     }
diff --git a/block/export/fuse.c b/block/export/fuse.c
index 65a7f4d723..5c75c9407e 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -84,7 +84,7 @@ static void fuse_export_drained_begin(void *opaque)
     FuseExport *exp = opaque;
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        NULL, NULL, NULL, NULL, NULL);
     exp->fd_handler_set_up = false;
 }
@@ -97,7 +97,7 @@ static void fuse_export_drained_end(void *opaque)
     exp->common.ctx = blk_get_aio_context(exp->common.blk);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 }
@@ -270,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -320,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
         if (exp->fd_handler_set_up) {
             aio_set_fd_handler(exp->common.ctx,
-                               fuse_session_fd(exp->fuse_session), false,
+                               fuse_session_fd(exp->fuse_session),
                                NULL, NULL, NULL, NULL, NULL);
             exp->fd_handler_set_up = false;
         }
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index 611430afda..048bdcbfb6 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -137,7 +137,7 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
     }
 
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -151,7 +151,7 @@ static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
         return;
     }
 
-    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+    aio_set_fd_handler(vblk_exp->export.ctx, fd,
                        NULL, NULL, NULL, NULL, NULL);
 }
 
@@ -170,7 +170,7 @@ static void on_vduse_dev_kick(void *opaque)
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, on_vduse_dev_kick, NULL, NULL, NULL,
+                       on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
     /* Virtqueues are handled by vduse_blk_drained_end() */
@@ -179,7 +179,7 @@ static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
 
     /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
@@ -364,7 +364,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev),
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
diff --git a/block/io.c b/block/io.c
index 6285d67546..5cb76e39f4 100644
--- a/block/io.c
+++ b/block/io.c
@@ -357,7 +357,6 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent,
 
     /* Stop things in parent-to-child order */
     if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
-        aio_disable_external(bdrv_get_aio_context(bs));
         bdrv_parent_drained_begin(bs, parent);
         if (bs->drv && bs->drv->bdrv_drain_begin) {
             bs->drv->bdrv_drain_begin(bs);
@@ -410,7 +409,6 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
             bs->drv->bdrv_drain_end(bs);
         }
         bdrv_parent_drained_end(bs, parent);
-        aio_enable_external(bdrv_get_aio_context(bs));
     }
 }
 
diff --git a/block/io_uring.c b/block/io_uring.c
index 989f9a99ed..b64a3e6285 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -406,7 +406,7 @@ int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
 
 void luring_detach_aio_context(LuringState *s, AioContext *old_context)
 {
-    aio_set_fd_handler(old_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(old_context, s->ring.ring_fd,
                        NULL, NULL, NULL, NULL, s);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
@@ -416,7 +416,7 @@ void luring_attach_aio_context(LuringState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_luring_completion_bh, s);
-    aio_set_fd_handler(s->aio_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(s->aio_context, s->ring.ring_fd,
                        qemu_luring_completion_cb, NULL,
                        qemu_luring_poll_cb, qemu_luring_poll_ready, s);
 }
diff --git a/block/iscsi.c b/block/iscsi.c
index 9fc0bed90b..34f97ab646 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -363,7 +363,6 @@ iscsi_set_events(IscsiLun *iscsilun)
 
     if (ev != iscsilun->events) {
         aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsi),
-                           false,
                            (ev & POLLIN) ? iscsi_process_read : NULL,
                            (ev & POLLOUT) ? iscsi_process_write : NULL,
                            NULL, NULL,
@@ -1540,7 +1539,7 @@ static void iscsi_detach_aio_context(BlockDriverState *bs)
     IscsiLun *iscsilun = bs->opaque;
 
     aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsilun->iscsi),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     iscsilun->events = 0;
 
     if (iscsilun->nop_timer) {
diff --git a/block/linux-aio.c b/block/linux-aio.c
index fc50cdd1bf..129908531a 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -443,7 +443,7 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
 
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &s->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &s->e, NULL, NULL, NULL);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
 }
@@ -452,7 +452,7 @@ void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_laio_completion_bh, s);
-    aio_set_event_notifier(new_context, &s->e, false,
+    aio_set_event_notifier(new_context, &s->e,
                            qemu_laio_completion_cb,
                            qemu_laio_poll_cb,
                            qemu_laio_poll_ready);
diff --git a/block/nfs.c b/block/nfs.c
index 351dc6ec8d..2f603bba42 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -195,7 +195,6 @@ static void nfs_set_events(NFSClient *client)
     int ev = nfs_which_events(client->context);
     if (ev != client->events) {
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false,
                            (ev & POLLIN) ? nfs_process_read : NULL,
                            (ev & POLLOUT) ? nfs_process_write : NULL,
                            NULL, NULL, client);
@@ -373,7 +372,7 @@ static void nfs_detach_aio_context(BlockDriverState *bs)
     NFSClient *client = bs->opaque;
 
     aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     client->events = 0;
 }
 
@@ -391,7 +390,7 @@ static void nfs_client_close(NFSClient *client)
     if (client->context) {
         qemu_mutex_lock(&client->mutex);
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false, NULL, NULL, NULL, NULL, NULL);
+                           NULL, NULL, NULL, NULL, NULL);
         qemu_mutex_unlock(&client->mutex);
         if (client->fh) {
             nfs_close(client->context, client->fh);
diff --git a/block/nvme.c b/block/nvme.c
index 5b744c2bda..17937d398d 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -862,7 +862,7 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
     }
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     if (!nvme_identify(bs, namespace, errp)) {
@@ -948,7 +948,7 @@ static void nvme_close(BlockDriverState *bs)
     g_free(s->queues);
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
     event_notifier_cleanup(&s->irq_notifier[MSIX_SHARED_IRQ_IDX]);
     qemu_vfio_pci_unmap_bar(s->vfio, 0, s->bar0_wo_map,
                             0, sizeof(NvmeBar) + NVME_DOORBELL_SIZE);
@@ -1546,7 +1546,7 @@ static void nvme_detach_aio_context(BlockDriverState *bs)
 
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
 }
 
 static void nvme_attach_aio_context(BlockDriverState *bs,
@@ -1556,7 +1556,7 @@ static void nvme_attach_aio_context(BlockDriverState *bs,
 
     s->aio_context = new_context;
     aio_set_event_notifier(new_context, &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     for (unsigned i = 0; i < s->queue_count; i++) {
diff --git a/block/ssh.c b/block/ssh.c
index b3b3352075..2748253d4a 100644
--- a/block/ssh.c
+++ b/block/ssh.c
@@ -1019,7 +1019,7 @@ static void restart_coroutine(void *opaque)
     AioContext *ctx = bdrv_get_aio_context(bs);
 
     trace_ssh_restart_coroutine(restart->co);
-    aio_set_fd_handler(ctx, s->sock, false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(ctx, s->sock, NULL, NULL, NULL, NULL, NULL);
 
     aio_co_wake(restart->co);
 }
@@ -1049,7 +1049,7 @@ static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs)
     trace_ssh_co_yield(s->sock, rd_handler, wr_handler);
 
     aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock,
-                       false, rd_handler, wr_handler, NULL, NULL, &restart);
+                       rd_handler, wr_handler, NULL, NULL, &restart);
     qemu_coroutine_yield();
     trace_ssh_co_yield_back(s->sock);
 }
diff --git a/block/win32-aio.c b/block/win32-aio.c
index ee87d6048f..6327861e1d 100644
--- a/block/win32-aio.c
+++ b/block/win32-aio.c
@@ -174,7 +174,7 @@ int win32_aio_attach(QEMUWin32AIOState *aio, HANDLE hfile)
 void win32_aio_detach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &aio->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &aio->e, NULL, NULL, NULL);
     aio->aio_ctx = NULL;
 }
 
@@ -182,8 +182,8 @@ void win32_aio_attach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *new_context)
 {
     aio->aio_ctx = new_context;
-    aio_set_event_notifier(new_context, &aio->e, false,
-                           win32_aio_completion_cb, NULL, NULL);
+    aio_set_event_notifier(new_context, &aio->e, win32_aio_completion_cb,
+                           NULL, NULL);
 }
 
 QEMUWin32AIOState *win32_aio_init(void)
diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 6e81bc8791..0b189c6ab8 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh),
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index dcd7aabb4e..6125e4d556 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, NULL, NULL, NULL);
     /* Test and clear notifier before after disabling event,
      * in case poll callback didn't have time to run. */
     virtio_queue_host_notifier_read(&vq->host_notifier);
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index bf256d4da2..1e08cf027a 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           false, xen_device_event, NULL, xen_device_poll,
-                           NULL, channel);
+                           xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
diff --git a/io/channel-command.c b/io/channel-command.c
index e7edd091af..7ed726c802 100644
--- a/io/channel-command.c
+++ b/io/channel-command.c
@@ -337,10 +337,8 @@ static void qio_channel_command_set_aio_fd_handler(QIOChannel *ioc,
                                                    void *opaque)
 {
     QIOChannelCommand *cioc = QIO_CHANNEL_COMMAND(ioc);
-    aio_set_fd_handler(ctx, cioc->readfd, false,
-                       io_read, NULL, NULL, NULL, opaque);
-    aio_set_fd_handler(ctx, cioc->writefd, false,
-                       NULL, io_write, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->readfd, io_read, NULL, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->writefd, NULL, io_write, NULL, NULL, opaque);
 }
 
 
diff --git a/io/channel-file.c b/io/channel-file.c
index d76663e6ae..8b5821f452 100644
--- a/io/channel-file.c
+++ b/io/channel-file.c
@@ -198,8 +198,7 @@ static void qio_channel_file_set_aio_fd_handler(QIOChannel *ioc,
                                                 void *opaque)
 {
     QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
-    aio_set_fd_handler(ctx, fioc->fd, false, io_read, io_write,
-                       NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, fioc->fd, io_read, io_write, NULL, NULL, opaque);
 }
 
 static GSource *qio_channel_file_create_watch(QIOChannel *ioc,
diff --git a/io/channel-socket.c b/io/channel-socket.c
index b0ea7d48b3..d99945ebec 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -899,8 +899,7 @@ static void qio_channel_socket_set_aio_fd_handler(QIOChannel *ioc,
                                                   void *opaque)
 {
     QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
-    aio_set_fd_handler(ctx, sioc->fd, false,
-                       io_read, io_write, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, sioc->fd, io_read, io_write, NULL, NULL, opaque);
 }
 
 static GSource *qio_channel_socket_create_watch(QIOChannel *ioc,
diff --git a/migration/rdma.c b/migration/rdma.c
index df646be35e..aee41ca43e 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3104,15 +3104,15 @@ static void qio_channel_rdma_set_aio_fd_handler(QIOChannel *ioc,
 {
     QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
     if (io_read) {
-        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
     } else {
-        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
     }
 }
 
diff --git a/tests/unit/test-aio.c b/tests/unit/test-aio.c
index 321d7ab01a..519440eed3 100644
--- a/tests/unit/test-aio.c
+++ b/tests/unit/test-aio.c
@@ -130,7 +130,7 @@ static void *test_acquire_thread(void *opaque)
 static void set_event_notifier(AioContext *ctx, EventNotifier *notifier,
                                EventNotifierHandler *handler)
 {
-    aio_set_event_notifier(ctx, notifier, false, handler, NULL, NULL);
+    aio_set_event_notifier(ctx, notifier, handler, NULL, NULL);
 }
 
 static void dummy_notifier_read(EventNotifier *n)
@@ -383,30 +383,6 @@ static void test_flush_event_notifier(void)
     event_notifier_cleanup(&data.e);
 }
 
-static void test_aio_external_client(void)
-{
-    int i, j;
-
-    for (i = 1; i < 3; i++) {
-        EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
-        event_notifier_init(&data.e, false);
-        aio_set_event_notifier(ctx, &data.e, true, event_ready_cb, NULL, NULL);
-        event_notifier_set(&data.e);
-        for (j = 0; j < i; j++) {
-            aio_disable_external(ctx);
-        }
-        for (j = 0; j < i; j++) {
-            assert(!aio_poll(ctx, false));
-            assert(event_notifier_test_and_clear(&data.e));
-            event_notifier_set(&data.e);
-            aio_enable_external(ctx);
-        }
-        assert(aio_poll(ctx, false));
-        set_event_notifier(ctx, &data.e, NULL);
-        event_notifier_cleanup(&data.e);
-    }
-}
-
 static void test_wait_event_notifier_noflush(void)
 {
     EventNotifierTestData data = { .n = 0 };
@@ -935,7 +911,6 @@ int main(int argc, char **argv)
     g_test_add_func("/aio/event/wait",              test_wait_event_notifier);
     g_test_add_func("/aio/event/wait/no-flush-cb",  test_wait_event_notifier_noflush);
     g_test_add_func("/aio/event/flush",             test_flush_event_notifier);
-    g_test_add_func("/aio/external-client",         test_aio_external_client);
     g_test_add_func("/aio/timer/schedule",          test_timer_schedule);
 
     g_test_add_func("/aio/coroutine/queue-chaining", test_queue_chaining);
diff --git a/tests/unit/test-fdmon-epoll.c b/tests/unit/test-fdmon-epoll.c
deleted file mode 100644
index ef5a856d09..0000000000
--- a/tests/unit/test-fdmon-epoll.c
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * fdmon-epoll tests
- *
- * Copyright (c) 2020 Red Hat, Inc.
- */
-
-#include "qemu/osdep.h"
-#include "block/aio.h"
-#include "qapi/error.h"
-#include "qemu/main-loop.h"
-
-static AioContext *ctx;
-
-static void dummy_fd_handler(EventNotifier *notifier)
-{
-    event_notifier_test_and_clear(notifier);
-}
-
-static void add_event_notifiers(EventNotifier *notifiers, size_t n)
-{
-    for (size_t i = 0; i < n; i++) {
-        event_notifier_init(&notifiers[i], false);
-        aio_set_event_notifier(ctx, &notifiers[i], false,
-                               dummy_fd_handler, NULL, NULL);
-    }
-}
-
-static void remove_event_notifiers(EventNotifier *notifiers, size_t n)
-{
-    for (size_t i = 0; i < n; i++) {
-        aio_set_event_notifier(ctx, &notifiers[i], false, NULL, NULL, NULL);
-        event_notifier_cleanup(&notifiers[i]);
-    }
-}
-
-/* Check that fd handlers work when external clients are disabled */
-static void test_external_disabled(void)
-{
-    EventNotifier notifiers[100];
-
-    /* fdmon-epoll is only enabled when many fd handlers are registered */
-    add_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
-
-    event_notifier_set(&notifiers[0]);
-    assert(aio_poll(ctx, true));
-
-    aio_disable_external(ctx);
-    event_notifier_set(&notifiers[0]);
-    assert(aio_poll(ctx, true));
-    aio_enable_external(ctx);
-
-    remove_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
-}
-
-int main(int argc, char **argv)
-{
-    /*
-     * This code relies on the fact that fdmon-io_uring disables itself when
-     * the glib main loop is in use. The main loop uses fdmon-poll and upgrades
-     * to fdmon-epoll when the number of fds exceeds a threshold.
-     */
-    qemu_init_main_loop(&error_fatal);
-    ctx = qemu_get_aio_context();
-
-    while (g_main_context_iteration(NULL, false)) {
-        /* Do nothing */
-    }
-
-    g_test_init(&argc, &argv, NULL);
-    g_test_add_func("/fdmon-epoll/external-disabled", test_external_disabled);
-    return g_test_run();
-}
diff --git a/util/aio-posix.c b/util/aio-posix.c
index a8be940f76..934b1bbb85 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -99,7 +99,6 @@ static bool aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
 
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -144,7 +143,6 @@ void aio_set_fd_handler(AioContext *ctx,
         new_node->io_poll = io_poll;
         new_node->io_poll_ready = io_poll_ready;
         new_node->opaque = opaque;
-        new_node->is_external = is_external;
 
         if (is_new) {
             new_node->pfd.fd = fd;
@@ -196,12 +194,11 @@ static void aio_set_fd_poll(AioContext *ctx, int fd,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            bool is_external,
                             EventNotifierHandler *io_read,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready)
 {
-    aio_set_fd_handler(ctx, event_notifier_get_fd(notifier), is_external,
+    aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
                        (IOHandler *)io_read, NULL, io_poll,
                        (IOHandler *)io_poll_ready, notifier);
 }
@@ -285,13 +282,11 @@ bool aio_pending(AioContext *ctx)
 
         /* TODO should this check poll ready? */
         revents = node->pfd.revents & node->pfd.events;
-        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
-            aio_node_check(ctx, node->is_external)) {
+        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
             result = true;
             break;
         }
-        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
-            aio_node_check(ctx, node->is_external)) {
+        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
             result = true;
             break;
         }
@@ -350,9 +345,7 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
         QLIST_INSERT_HEAD(&ctx->poll_aio_handlers, node, node_poll);
     }
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
-        poll_ready && revents == 0 &&
-        aio_node_check(ctx, node->is_external) &&
-        node->io_poll_ready) {
+        poll_ready && revents == 0 && node->io_poll_ready) {
         node->io_poll_ready(node->opaque);
 
         /*
@@ -364,7 +357,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
 
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
         (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
-        aio_node_check(ctx, node->is_external) &&
         node->io_read) {
         node->io_read(node->opaque);
 
@@ -375,7 +367,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
     }
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
         (revents & (G_IO_OUT | G_IO_ERR)) &&
-        aio_node_check(ctx, node->is_external) &&
         node->io_write) {
         node->io_write(node->opaque);
         progress = true;
@@ -436,8 +427,7 @@ static bool run_poll_handlers_once(AioContext *ctx,
     AioHandler *tmp;
 
     QLIST_FOREACH_SAFE(node, &ctx->poll_aio_handlers, node_poll, tmp) {
-        if (aio_node_check(ctx, node->is_external) &&
-            node->io_poll(node->opaque)) {
+        if (node->io_poll(node->opaque)) {
             aio_add_poll_ready_handler(ready_list, node);
 
             node->poll_idle_timeout = now + POLL_IDLE_INTERVAL_NS;
diff --git a/util/aio-win32.c b/util/aio-win32.c
index 6bded009a4..948ef47a4d 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -32,7 +32,6 @@ struct AioHandler {
     GPollFD pfd;
     int deleted;
     void *opaque;
-    bool is_external;
     QLIST_ENTRY(AioHandler) node;
 };
 
@@ -64,7 +63,6 @@ static void aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
 
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -111,7 +109,6 @@ void aio_set_fd_handler(AioContext *ctx,
         node->opaque = opaque;
         node->io_read = io_read;
         node->io_write = io_write;
-        node->is_external = is_external;
 
         if (io_read) {
             bitmask |= FD_READ | FD_ACCEPT | FD_CLOSE;
@@ -135,7 +132,6 @@ void aio_set_fd_handler(AioContext *ctx,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *e,
-                            bool is_external,
                             EventNotifierHandler *io_notify,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready)
@@ -161,7 +157,6 @@ void aio_set_event_notifier(AioContext *ctx,
             node->e = e;
             node->pfd.fd = (uintptr_t)event_notifier_get_handle(e);
             node->pfd.events = G_IO_IN;
-            node->is_external = is_external;
             QLIST_INSERT_HEAD_RCU(&ctx->aio_handlers, node, node);
 
             g_source_add_poll(&ctx->source, &node->pfd);
@@ -368,8 +363,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     /* fill fd sets */
     count = 0;
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
-        if (!node->deleted && node->io_notify
-            && aio_node_check(ctx, node->is_external)) {
+        if (!node->deleted && node->io_notify) {
             assert(count < MAXIMUM_WAIT_OBJECTS);
             events[count++] = event_notifier_get_handle(node->e);
         }
diff --git a/util/async.c b/util/async.c
index 21016a1ac7..be0726038e 100644
--- a/util/async.c
+++ b/util/async.c
@@ -377,7 +377,7 @@ aio_ctx_finalize(GSource     *source)
         g_free(bh);
     }
 
-    aio_set_event_notifier(ctx, &ctx->notifier, false, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL, NULL);
     event_notifier_cleanup(&ctx->notifier);
     qemu_rec_mutex_destroy(&ctx->lock);
     qemu_lockcnt_destroy(&ctx->list_lock);
@@ -561,7 +561,6 @@ AioContext *aio_context_new(Error **errp)
     QSLIST_INIT(&ctx->scheduled_coroutines);
 
     aio_set_event_notifier(ctx, &ctx->notifier,
-                           false,
                            aio_context_notifier_cb,
                            aio_context_notifier_poll,
                            aio_context_notifier_poll_ready);
diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c
index e11a8a022e..ef3eacacd2 100644
--- a/util/fdmon-epoll.c
+++ b/util/fdmon-epoll.c
@@ -64,11 +64,6 @@ static int fdmon_epoll_wait(AioContext *ctx, AioHandlerList *ready_list,
     int i, ret = 0;
     struct epoll_event events[128];
 
-    /* Fall back while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return fdmon_poll_ops.wait(ctx, ready_list, timeout);
-    }
-
     if (timeout > 0) {
         ret = qemu_poll_ns(&pfd, 1, timeout);
         if (ret > 0) {
@@ -131,11 +126,6 @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned npfd)
         return false;
     }
 
-    /* Do not upgrade while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return false;
-    }
-
     if (npfd >= EPOLL_ENABLE_THRESHOLD) {
         if (fdmon_epoll_try_enable(ctx)) {
             return true;
diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
index ab43052dd7..17ec18b7bd 100644
--- a/util/fdmon-io_uring.c
+++ b/util/fdmon-io_uring.c
@@ -276,11 +276,6 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
     unsigned wait_nr = 1; /* block until at least one cqe is ready */
     int ret;
 
-    /* Fall back while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return fdmon_poll_ops.wait(ctx, ready_list, timeout);
-    }
-
     if (timeout == 0) {
         wait_nr = 0; /* non-blocking */
     } else if (timeout > 0) {
@@ -315,8 +310,7 @@ static bool fdmon_io_uring_need_wait(AioContext *ctx)
         return true;
     }
 
-    /* Are we falling back to fdmon-poll? */
-    return qatomic_read(&ctx->external_disable_cnt);
+    return false;
 }
 
 static const FDMonOps fdmon_io_uring_ops = {
diff --git a/util/fdmon-poll.c b/util/fdmon-poll.c
index 5fe3b47865..17df917cf9 100644
--- a/util/fdmon-poll.c
+++ b/util/fdmon-poll.c
@@ -65,8 +65,7 @@ static int fdmon_poll_wait(AioContext *ctx, AioHandlerList *ready_list,
     assert(npfd == 0);
 
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
-        if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events
-                && aio_node_check(ctx, node->is_external)) {
+        if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events) {
             add_pollfd(node);
         }
     }
diff --git a/util/main-loop.c b/util/main-loop.c
index e180c85145..3e43a9cd38 100644
--- a/util/main-loop.c
+++ b/util/main-loop.c
@@ -642,14 +642,13 @@ void qemu_set_fd_handler(int fd,
                          void *opaque)
 {
     iohandler_init();
-    aio_set_fd_handler(iohandler_ctx, fd, false,
-                       fd_read, fd_write, NULL, NULL, opaque);
+    aio_set_fd_handler(iohandler_ctx, fd, fd_read, fd_write, NULL, NULL,
+                       opaque);
 }
 
 void event_notifier_set_handler(EventNotifier *e,
                                 EventNotifierHandler *handler)
 {
     iohandler_init();
-    aio_set_event_notifier(iohandler_ctx, e, false,
-                           handler, NULL, NULL);
+    aio_set_event_notifier(iohandler_ctx, e, handler, NULL, NULL);
 }
diff --git a/util/qemu-coroutine-io.c b/util/qemu-coroutine-io.c
index d791932d63..364f4d5abf 100644
--- a/util/qemu-coroutine-io.c
+++ b/util/qemu-coroutine-io.c
@@ -74,8 +74,7 @@ typedef struct {
 static void fd_coroutine_enter(void *opaque)
 {
     FDYieldUntilData *data = opaque;
-    aio_set_fd_handler(data->ctx, data->fd, false,
-                       NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(data->ctx, data->fd, NULL, NULL, NULL, NULL, NULL);
     qemu_coroutine_enter(data->co);
 }
 
@@ -87,7 +86,7 @@ void coroutine_fn yield_until_fd_readable(int fd)
     data.ctx = qemu_get_current_aio_context();
     data.co = qemu_coroutine_self();
     data.fd = fd;
-    aio_set_fd_handler(
-        data.ctx, fd, false, fd_coroutine_enter, NULL, NULL, NULL, &data);
+    aio_set_fd_handler(data.ctx, fd, fd_coroutine_enter, NULL, NULL, NULL,
+                       &data);
     qemu_coroutine_yield();
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 332aea9306..9ba19121a2 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,8 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, false,
-                       NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(server->ioc->ctx, fd, NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
     g_free(vu_fd_watch);
@@ -362,7 +361,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +402,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +416,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
diff --git a/tests/unit/meson.build b/tests/unit/meson.build
index fa63cfe6ff..2980170397 100644
--- a/tests/unit/meson.build
+++ b/tests/unit/meson.build
@@ -121,9 +121,6 @@ if have_block
   if nettle.found() or gcrypt.found()
     tests += {'test-crypto-pbkdf': [io]}
   endif
-  if config_host_data.get('CONFIG_EPOLL_CREATE1')
-    tests += {'test-fdmon-epoll': [testblock]}
-  endif
 endif
 
 if have_system
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:49:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:49:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524178.814908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSmn-0001dx-7O; Thu, 20 Apr 2023 11:49:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524178.814908; Thu, 20 Apr 2023 11:49:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSmn-0001do-4J; Thu, 20 Apr 2023 11:49:13 +0000
Received: by outflank-mailman (input) for mailman id 524178;
 Thu, 20 Apr 2023 11:49:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppScs-0001nK-Ff
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:58 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eedc5052-df6f-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:38:56 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-223-ugElyGhbO4WVcymr-LwNgg-1; Thu, 20 Apr 2023 07:38:51 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 92E361C0897A;
 Thu, 20 Apr 2023 11:38:50 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 8A8FD51E3;
 Thu, 20 Apr 2023 11:38:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eedc5052-df6f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990735;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=svW5Iqq6ywfg5XGxDCHKSniKJYjg3etTmQ7BLHFrTFw=;
	b=an3g+b76QwMPE/QW5urr0RtNFLf/9WPXWqCz87YYVTEjqILJC8ytc7K0rP0KXSPk2Y3KJi
	XDexm/xEgHwFepMeW5/HmqXBNClgcK2VVkHWTHG7TqeTKLMCLt2ZfmlLbq628U65UKT5NT
	Dy2BlnY35Bxsu1dHwAaRMzxx1BKBpI4=
X-MC-Unique: ugElyGhbO4WVcymr-LwNgg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 14/20] block/export: don't require AioContext lock around blk_exp_ref/unref()
Date: Thu, 20 Apr 2023 07:37:26 -0400
Message-Id: <20230420113732.336620-15-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5

The FUSE export calls blk_exp_ref/unref() without holding the AioContext
lock. Instead of fixing the FUSE export, adjust blk_exp_ref/unref() so
that they work without the AioContext lock. This makes the API less
error-prone.
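
The lock-free refcount pattern used below can be sketched with C11
atomics. This is a simplified analogue of QEMU's qatomic_* helpers, not
the actual QEMU code; the Export type and the delete_scheduled flag are
hypothetical stand-ins for BlockExport and aio_bh_schedule_oneshot():

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-in for BlockExport: only the refcount matters here. */
typedef struct {
    atomic_int refcount;   /* accessed with atomics, no lock required */
    bool delete_scheduled; /* stands in for aio_bh_schedule_oneshot() */
} Export;

static void exp_ref(Export *exp)
{
    assert(atomic_load(&exp->refcount) > 0);
    atomic_fetch_add(&exp->refcount, 1);
}

static void exp_unref(Export *exp)
{
    assert(atomic_load(&exp->refcount) > 0);
    /* fetch-and-decrement returns the old value: 1 means we were last */
    if (atomic_fetch_sub(&exp->refcount, 1) == 1) {
        exp->delete_scheduled = true; /* defer deletion to the main thread */
    }
}
```

Because the decrement and the "was I last?" check happen in a single
atomic operation, two threads can never both schedule deletion.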

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/export.h   |  2 ++
 block/export/export.c    | 13 ++++++-------
 block/export/vduse-blk.c |  4 ----
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/include/block/export.h b/include/block/export.h
index 7feb02e10d..f2fe0f8078 100644
--- a/include/block/export.h
+++ b/include/block/export.h
@@ -57,6 +57,8 @@ struct BlockExport {
      * Reference count for this block export. This includes strong references
      * both from the owner (qemu-nbd or the monitor) and clients connected to
      * the export.
+     *
+     * Use atomics to access this field.
      */
     int refcount;
 
diff --git a/block/export/export.c b/block/export/export.c
index e3fee60611..edb05c9268 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -201,11 +201,10 @@ fail:
     return NULL;
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_ref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    exp->refcount++;
+    assert(qatomic_read(&exp->refcount) > 0);
+    qatomic_inc(&exp->refcount);
 }
 
 /* Runs in the main thread */
@@ -227,11 +226,10 @@ static void blk_exp_delete_bh(void *opaque)
     aio_context_release(aio_context);
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_unref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    if (--exp->refcount == 0) {
+    assert(qatomic_read(&exp->refcount) > 0);
+    if (qatomic_fetch_dec(&exp->refcount) == 1) {
         /* Touch the block_exports list only in the main thread */
         aio_bh_schedule_oneshot(qemu_get_aio_context(), blk_exp_delete_bh,
                                 exp);
@@ -339,7 +337,8 @@ void qmp_block_export_del(const char *id,
     if (!has_mode) {
         mode = BLOCK_EXPORT_REMOVE_MODE_SAFE;
     }
-    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE && exp->refcount > 1) {
+    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE &&
+        qatomic_read(&exp->refcount) > 1) {
         error_setg(errp, "export '%s' still in use", exp->id);
         error_append_hint(errp, "Use mode='hard' to force client "
                           "disconnect\n");
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index 35dc8fcf45..611430afda 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -44,9 +44,7 @@ static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
     if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
         /* Prevent export from being deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_ref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -57,9 +55,7 @@ static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
         aio_wait_kick();
 
         /* Now the export can be deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_unref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:49:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:49:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524183.814918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSmx-00027l-H9; Thu, 20 Apr 2023 11:49:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524183.814918; Thu, 20 Apr 2023 11:49:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSmx-00027b-Du; Thu, 20 Apr 2023 11:49:23 +0000
Received: by outflank-mailman (input) for mailman id 524183;
 Thu, 20 Apr 2023 11:49:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppScv-0002cJ-Ir
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:39:01 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f125a873-df6f-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 13:39:00 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-186-PHYKj5zEMma4KAeFkTC-BQ-1; Thu, 20 Apr 2023 07:38:54 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8CD341C0897A;
 Thu, 20 Apr 2023 11:38:53 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 9B3182166B33;
 Thu, 20 Apr 2023 11:38:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f125a873-df6f-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990739;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AI5kqkdzipCjtseeQPE5/aVyAJxrbcrgigA/+qZzi/8=;
	b=HHUpdBLDRTwFPcIjBZnOmohb/I7xh62pEK1gw+7GruAtIux/mc/ZQh4ztYOqu07MMgKuhz
	unt234Ykss/YxMKeEAyrGxPfsYdBJhHimWjohlBUa90uZQAlnLBfdUswFRu//HI0f8ShOP
	Y/RWrd6pJ5FDYvjD3upAQOQnwThlIxk=
X-MC-Unique: PHYKj5zEMma4KAeFkTC-BQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 15/20] block/fuse: do not set is_external=true on FUSE fd
Date: Thu, 20 Apr 2023 07:37:27 -0400
Message-Id: <20230420113732.336620-16-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

This is part of ongoing work to remove the aio_disable_external() API.

Use BlockDevOps .drained_begin/end/poll() instead of
aio_set_fd_handler(is_external=true).

As a side effect, the FUSE export now follows AioContext changes like
the other export types.
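
The in-flight counter scheme used by the drained callbacks below can be
sketched as follows. This is a minimal model of the pattern, not the
FUSE export itself: Exp, request_begin/end and the kicked flag are
hypothetical stand-ins for FuseExport, the request handler and
aio_wait_kick():

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical export with an in-flight request counter (cf. FuseExport). */
typedef struct {
    atomic_uint in_flight;
    bool kicked; /* stands in for aio_wait_kick() */
} Exp;

static void request_begin(Exp *exp)
{
    atomic_fetch_add(&exp->in_flight, 1);
}

static void request_end(Exp *exp)
{
    /* The last completing request wakes anyone waiting in AIO_WAIT_WHILE() */
    if (atomic_fetch_sub(&exp->in_flight, 1) == 1) {
        exp->kicked = true;
    }
}

/* .drained_poll(): drain keeps polling until this returns false */
static bool drained_poll(Exp *exp)
{
    return atomic_load(&exp->in_flight) > 0;
}
```

Drain then reduces to: stop accepting new requests (unregister the fd
handler), poll until drained_poll() returns false, and resume afterwards.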

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/fuse.c | 58 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 56 insertions(+), 2 deletions(-)

diff --git a/block/export/fuse.c b/block/export/fuse.c
index 06fa41079e..65a7f4d723 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -50,6 +50,7 @@ typedef struct FuseExport {
 
     struct fuse_session *fuse_session;
     struct fuse_buf fuse_buf;
+    unsigned int in_flight; /* atomic */
     bool mounted, fd_handler_set_up;
 
     char *mountpoint;
@@ -78,6 +79,42 @@ static void read_from_fuse_export(void *opaque);
 static bool is_regular_file(const char *path, Error **errp);
 
 
+static void fuse_export_drained_begin(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       NULL, NULL, NULL, NULL, NULL);
+    exp->fd_handler_set_up = false;
+}
+
+static void fuse_export_drained_end(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    exp->common.ctx = blk_get_aio_context(exp->common.blk);
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       read_from_fuse_export, NULL, NULL, NULL, exp);
+    exp->fd_handler_set_up = true;
+}
+
+static bool fuse_export_drained_poll(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    return qatomic_read(&exp->in_flight) > 0;
+}
+
+static const BlockDevOps fuse_export_blk_dev_ops = {
+    .drained_begin = fuse_export_drained_begin,
+    .drained_end   = fuse_export_drained_end,
+    .drained_poll  = fuse_export_drained_poll,
+};
+
 static int fuse_export_create(BlockExport *blk_exp,
                               BlockExportOptions *blk_exp_args,
                               Error **errp)
@@ -101,6 +138,15 @@ static int fuse_export_create(BlockExport *blk_exp,
         }
     }
 
+    blk_set_dev_ops(exp->common.blk, &fuse_export_blk_dev_ops, exp);
+
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * the FUSE fd handler. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->common.blk, true);
+
     init_exports_table();
 
     /*
@@ -224,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), true,
+                       fuse_session_fd(exp->fuse_session), false,
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -246,6 +292,8 @@ static void read_from_fuse_export(void *opaque)
 
     blk_exp_ref(&exp->common);
 
+    qatomic_inc(&exp->in_flight);
+
     do {
         ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
     } while (ret == -EINTR);
@@ -256,6 +304,10 @@ static void read_from_fuse_export(void *opaque)
     fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);
 
 out:
+    if (qatomic_fetch_dec(&exp->in_flight) == 1) {
+        aio_wait_kick(); /* wake AIO_WAIT_WHILE() */
+    }
+
     blk_exp_unref(&exp->common);
 }
 
@@ -268,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
         if (exp->fd_handler_set_up) {
             aio_set_fd_handler(exp->common.ctx,
-                               fuse_session_fd(exp->fuse_session), true,
+                               fuse_session_fd(exp->fuse_session), false,
                                NULL, NULL, NULL, NULL, NULL);
             exp->fd_handler_set_up = false;
         }
@@ -287,6 +339,8 @@ static void fuse_export_delete(BlockExport *blk_exp)
 {
     FuseExport *exp = container_of(blk_exp, FuseExport, common);
 
+    blk_set_dev_ops(exp->common.blk, NULL, NULL);
+
     if (exp->fuse_session) {
         if (exp->mounted) {
             fuse_session_unmount(exp->fuse_session);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:49:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 11:49:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524186.814928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSn0-0002Sk-QJ; Thu, 20 Apr 2023 11:49:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524186.814928; Thu, 20 Apr 2023 11:49:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSn0-0002Sb-NM; Thu, 20 Apr 2023 11:49:26 +0000
Received: by outflank-mailman (input) for mailman id 524186;
 Thu, 20 Apr 2023 11:49:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=onos=AL=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ppScr-0001nK-FU
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 11:38:57 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ee117f44-df6f-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 13:38:55 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-630-cuBqF_L8OE-OpA04zSRkcQ-1; Thu, 20 Apr 2023 07:38:48 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CADD83C1069E;
 Thu, 20 Apr 2023 11:38:47 +0000 (UTC)
Received: from localhost (unknown [10.39.193.254])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 90FFB40C2064;
 Thu, 20 Apr 2023 11:38:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee117f44-df6f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681990734;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Kvgpibm7uG8By4vHBOsqesPFxvQEinDBgjjk4WE12qY=;
	b=NawdEaNaS46y90chATYDNut8Qz5U9S/Vih5uxGdMNEf26z5u2WN4PMN/WfO+3K0lwB9tyS
	vm8gg83QN3XnNgk5d3V7+Yk33YF5U2e9ON/AWby32V6IR0lZs/WnjZMJDzllGb0kI8Jme/
	z+MawDflQzHLlLmqvdTwHoDsI3FN1kg=
X-MC-Unique: cuBqF_L8OE-OpA04zSRkcQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 13/20] block/export: rewrite vduse-blk drain code
Date: Thu, 20 Apr 2023 07:37:25 -0400
Message-Id: <20230420113732.336620-14-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

vduse_blk_detach_ctx() waits for in-flight requests using
AIO_WAIT_WHILE(). This is not allowed according to a comment in
bdrv_set_aio_context_commit():

  /*
   * Take the old AioContex when detaching it from bs.
   * At this point, new_context lock is already acquired, and we are now
   * also taking old_context. This is safe as long as bdrv_detach_aio_context
   * does not call AIO_POLL_WHILE().
   */

Use this opportunity to rewrite the drain code in vduse-blk:

- Use the BlockExport refcount so that vduse_blk_exp_delete() is only
  called when there are no more requests in flight.

- Implement .drained_poll() so in-flight request coroutines are stopped
  by the time .bdrv_detach_aio_context() is called.

- Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
  .bdrv_detach_aio_context() constraint violation. It's no longer
  needed due to the previous changes.

- Always handle the VDUSE file descriptor, even in drained sections. The
  VDUSE file descriptor doesn't submit I/O, so handling it while drained
  is safe and ensures that the VDUSE kernel code gets a fast response.

- Suspend virtqueue fd handlers in .drained_begin() and resume them in
  .drained_end(). This eliminates the need for the
  aio_set_fd_handler(is_external=true) flag, which is being removed from
  QEMU.

This is a long list, but splitting it into individual commits would
probably lead to git bisect failures; the changes are all related.
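
The suspend/resume of virtqueue fd handlers described above can be
sketched like this. It is a toy model under stated assumptions:
set_fd_handler() stands in for aio_set_fd_handler() (one fd, one read
callback), and VduseExp/vq_kick are hypothetical stand-ins for
VduseBlkExport and on_vduse_vq_kick():

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef void FdHandler(void *opaque);

/* Toy stand-in for aio_set_fd_handler(): one fd, one read handler. */
static FdHandler *fd_read_handler;
static void *fd_opaque;

static void set_fd_handler(FdHandler *read_cb, void *opaque)
{
    fd_read_handler = read_cb;
    fd_opaque = opaque;
}

typedef struct {
    bool vqs_started; /* virtqueue handlers only registered when true */
} VduseExp;

static void vq_kick(void *opaque) { (void)opaque; }

static void vqs_drained_begin(VduseExp *exp)
{
    exp->vqs_started = false;
    set_fd_handler(NULL, NULL); /* stop processing virtqueue kicks */
}

static void vqs_drained_end(VduseExp *exp)
{
    exp->vqs_started = true;
    set_fd_handler(vq_kick, exp); /* resume; no is_external flag needed */
}
```

The vqs_started flag keeps enable_queue callbacks from re-registering
handlers while drained; .drained_end() re-registers them explicitly.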

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
 1 file changed, 93 insertions(+), 39 deletions(-)

diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index f7ae44e3ce..35dc8fcf45 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
     VduseDev *dev;
     uint16_t num_queues;
     char *recon_file;
-    unsigned int inflight;
+    unsigned int inflight; /* atomic */
+    bool vqs_started;
 } VduseBlkExport;
 
 typedef struct VduseBlkReq {
@@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
 
 static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
-    vblk_exp->inflight++;
+    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
+        /* Prevent export from being deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_ref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
+    }
 }
 
 static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
 {
-    if (--vblk_exp->inflight == 0) {
+    if (qatomic_fetch_dec(&vblk_exp->inflight) == 1) {
+        /* Wake AIO_WAIT_WHILE() */
         aio_wait_kick();
+
+        /* Now the export can be deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_unref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -124,8 +136,12 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
 
+    if (!vblk_exp->vqs_started) {
+        return; /* vduse_blk_drained_end() will start vqs later */
+    }
+
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick after reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -133,9 +149,14 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
+    int fd = vduse_queue_get_fd(vq);
 
-    aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, NULL, NULL, NULL, NULL, NULL);
+    if (fd < 0) {
+        return;
+    }
+
+    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+                       NULL, NULL, NULL, NULL, NULL);
 }
 
 static const VduseOps vduse_blk_ops = {
@@ -152,42 +173,19 @@ static void on_vduse_dev_kick(void *opaque)
 
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
-    int i;
-
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, on_vduse_dev_kick, NULL, NULL, NULL,
+                       false, on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd, true,
-                           on_vduse_vq_kick, NULL, NULL, NULL, vq);
-    }
+    /* Virtqueues are handled by vduse_blk_drained_end() */
 }
 
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
-    int i;
-
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd,
-                           true, NULL, NULL, NULL, NULL, NULL);
-    }
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, NULL, NULL, NULL, NULL, NULL);
+                       false, NULL, NULL, NULL, NULL, NULL);
 
-    AIO_WAIT_WHILE(vblk_exp->export.ctx, vblk_exp->inflight > 0);
+    /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
 
 
@@ -220,8 +218,55 @@ static void vduse_blk_resize(void *opaque)
                             (char *)&config.capacity);
 }
 
+static void vduse_blk_stop_virtqueues(VduseBlkExport *vblk_exp)
+{
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_disable_queue(vblk_exp->dev, vq);
+    }
+
+    vblk_exp->vqs_started = false;
+}
+
+static void vduse_blk_start_virtqueues(VduseBlkExport *vblk_exp)
+{
+    vblk_exp->vqs_started = true;
+
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_enable_queue(vblk_exp->dev, vq);
+    }
+}
+
+static void vduse_blk_drained_begin(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_stop_virtqueues(vblk_exp);
+}
+
+static void vduse_blk_drained_end(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_start_virtqueues(vblk_exp);
+}
+
+static bool vduse_blk_drained_poll(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    return qatomic_read(&vblk_exp->inflight) > 0;
+}
+
 static const BlockDevOps vduse_block_ops = {
-    .resize_cb = vduse_blk_resize,
+    .resize_cb     = vduse_blk_resize,
+    .drained_begin = vduse_blk_drained_begin,
+    .drained_end   = vduse_blk_drained_end,
+    .drained_poll  = vduse_blk_drained_poll,
 };
 
 static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
@@ -268,6 +313,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vblk_exp->handler.serial = g_strdup(vblk_opts->serial ?: "");
     vblk_exp->handler.logical_block_size = logical_block_size;
     vblk_exp->handler.writable = opts->writable;
+    vblk_exp->vqs_started = true;
 
     config.capacity =
             cpu_to_le64(blk_getlength(exp->blk) >> VIRTIO_BLK_SECTOR_BITS);
@@ -322,14 +368,20 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), true,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vblk_exp);
-
     blk_set_dev_ops(exp->blk, &vduse_block_ops, exp);
 
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * virtqueue fd handlers. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->blk, true);
+
     return 0;
 err:
     vduse_dev_destroy(vblk_exp->dev);
@@ -344,6 +396,9 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
     int ret;
 
+    assert(qatomic_read(&vblk_exp->inflight) == 0);
+
+    vduse_blk_detach_ctx(vblk_exp);
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vblk_exp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
@@ -355,13 +410,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     g_free(vblk_exp->handler.serial);
 }
 
+/* Called with exp->ctx acquired */
 static void vduse_blk_exp_request_shutdown(BlockExport *exp)
 {
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
 
-    aio_context_acquire(vblk_exp->export.ctx);
-    vduse_blk_detach_ctx(vblk_exp);
-    aio_context_acquire(vblk_exp->export.ctx);
+    vduse_blk_stop_virtqueues(vblk_exp);
 }
 
 const BlockExportDriver blk_exp_vduse_blk = {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:49:42 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 17/20] virtio-blk: implement BlockDevOps->drained_begin()
Date: Thu, 20 Apr 2023 07:37:29 -0400
Message-Id: <20230420113732.336620-18-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Detach ioeventfds during drained sections to stop I/O submission from
the guest. virtio-blk is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.

Take extra care to avoid attaching/detaching ioeventfds if the data
plane is started/stopped during a drained section. This should be rare,
but the mirror block job may be able to trigger it.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 17 +++++++++------
 hw/block/virtio-blk.c           | 38 ++++++++++++++++++++++++++++++++-
 2 files changed, 48 insertions(+), 7 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index bd7cc6e76b..d77fc6028c 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -245,13 +245,15 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
     }
 
     /* Get this show started by hooking up our callbacks */
-    aio_context_acquire(s->ctx);
-    for (i = 0; i < nvqs; i++) {
-        VirtQueue *vq = virtio_get_queue(s->vdev, i);
+    if (!blk_in_drain(s->conf->conf.blk)) {
+        aio_context_acquire(s->ctx);
+        for (i = 0; i < nvqs; i++) {
+            VirtQueue *vq = virtio_get_queue(s->vdev, i);
 
-        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+        }
+        aio_context_release(s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;
 
   fail_aio_context:
@@ -317,7 +319,10 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
     trace_virtio_blk_data_plane_stop(s);
 
     aio_context_acquire(s->ctx);
-    aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+
+    if (!blk_in_drain(s->conf->conf.blk)) {
+        aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+    }
 
     /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
     blk_drain(s->conf->conf.blk);
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index cefca93b31..d8dedc575c 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1109,8 +1109,44 @@ static void virtio_blk_resize(void *opaque)
     aio_bh_schedule_oneshot(qemu_get_aio_context(), virtio_resize_cb, vdev);
 }
 
+/* Suspend virtqueue ioeventfd processing during drain */
+static void virtio_blk_drained_begin(void *opaque)
+{
+    VirtIOBlock *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
+    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
+
+    if (!s->dataplane || !s->dataplane_started) {
+        return;
+    }
+
+    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_detach_host_notifier(vq, ctx);
+    }
+}
+
+/* Resume virtqueue ioeventfd processing after drain */
+static void virtio_blk_drained_end(void *opaque)
+{
+    VirtIOBlock *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
+    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
+
+    if (!s->dataplane || !s->dataplane_started) {
+        return;
+    }
+
+    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, ctx);
+    }
+}
+
 static const BlockDevOps virtio_block_ops = {
-    .resize_cb = virtio_blk_resize,
+    .resize_cb     = virtio_blk_resize,
+    .drained_begin = virtio_blk_drained_begin,
+    .drained_end   = virtio_blk_drained_end,
 };
 
 static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:49:48 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 19/20] virtio: do not set is_external=true on host notifiers
Date: Thu, 20 Apr 2023 07:37:31 -0400
Message-Id: <20230420113732.336620-20-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Host notifiers can now use is_external=false since virtio-blk and
virtio-scsi no longer rely on is_external=true for drained sections.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/virtio/virtio.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 98c4819fcc..dcd7aabb4e 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
     /* Test and clear notifier after disabling event,
      * in case poll callback didn't have time to run. */
     virtio_queue_host_notifier_read(&vq->host_notifier);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 11:49:54 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>,
	Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org,
	Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v3 18/20] virtio-scsi: implement BlockDevOps->drained_begin()
Date: Thu, 20 Apr 2023 07:37:30 -0400
Message-Id: <20230420113732.336620-19-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The virtio-scsi Host Bus Adapter provides access to devices on a SCSI
bus. Those SCSI devices typically have a BlockBackend. When the
BlockBackend enters a drained section, the SCSI device must temporarily
stop submitting new I/O requests.

Implement this behavior by temporarily stopping virtio-scsi virtqueue
processing when one of the SCSI devices enters a drained section. The
new scsi_device_drained_begin() API allows scsi-disk to notify the
virtio-scsi HBA.

scsi_device_drained_begin() uses a drain counter so that multiple SCSI
devices can have overlapping drained sections. The HBA only sees one
pair of .drained_begin/end() calls.

After this commit, virtio-scsi no longer depends on hw/virtio's
ioeventfd aio_set_event_notifier(is_external=true). This commit is a
step towards removing the aio_disable_external() API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/scsi/scsi.h          | 14 ++++++++++++
 hw/scsi/scsi-bus.c              | 40 +++++++++++++++++++++++++++++++++
 hw/scsi/scsi-disk.c             | 27 +++++++++++++++++-----
 hw/scsi/virtio-scsi-dataplane.c | 22 ++++++++++--------
 hw/scsi/virtio-scsi.c           | 38 +++++++++++++++++++++++++++++++
 hw/scsi/trace-events            |  2 ++
 6 files changed, 129 insertions(+), 14 deletions(-)

diff --git a/include/hw/scsi/scsi.h b/include/hw/scsi/scsi.h
index 6f23a7a73e..e2bb1a2fbf 100644
--- a/include/hw/scsi/scsi.h
+++ b/include/hw/scsi/scsi.h
@@ -133,6 +133,16 @@ struct SCSIBusInfo {
     void (*save_request)(QEMUFile *f, SCSIRequest *req);
     void *(*load_request)(QEMUFile *f, SCSIRequest *req);
     void (*free_request)(SCSIBus *bus, void *priv);
+
+    /*
+     * Temporarily stop submitting new requests between drained_begin() and
+     * drained_end(). Called from the main loop thread with the BQL held.
+     *
+     * Implement these callbacks if request processing is triggered by a file
+     * descriptor like an EventNotifier. Otherwise set them to NULL.
+     */
+    void (*drained_begin)(SCSIBus *bus);
+    void (*drained_end)(SCSIBus *bus);
 };
 
 #define TYPE_SCSI_BUS "SCSI"
@@ -144,6 +154,8 @@ struct SCSIBus {
 
     SCSISense unit_attention;
     const SCSIBusInfo *info;
+
+    int drain_count; /* protected by BQL */
 };
 
 /**
@@ -213,6 +225,8 @@ void scsi_req_cancel_complete(SCSIRequest *req);
 void scsi_req_cancel(SCSIRequest *req);
 void scsi_req_cancel_async(SCSIRequest *req, Notifier *notifier);
 void scsi_req_retry(SCSIRequest *req);
+void scsi_device_drained_begin(SCSIDevice *sdev);
+void scsi_device_drained_end(SCSIDevice *sdev);
 void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense);
 void scsi_device_set_ua(SCSIDevice *sdev, SCSISense sense);
 void scsi_device_report_change(SCSIDevice *dev, SCSISense sense);
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 64d7311757..b571fdf895 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -1668,6 +1668,46 @@ void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense)
     scsi_device_set_ua(sdev, sense);
 }
 
+void scsi_device_drained_begin(SCSIDevice *sdev)
+{
+    SCSIBus *bus = DO_UPCAST(SCSIBus, qbus, sdev->qdev.parent_bus);
+    if (!bus) {
+        return;
+    }
+
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    assert(bus->drain_count < INT_MAX);
+
+    /*
+     * Multiple BlockBackends can be on a SCSIBus and each may begin/end
+     * draining at any time. Keep a counter so HBAs only see begin/end once.
+     */
+    if (bus->drain_count++ == 0) {
+        trace_scsi_bus_drained_begin(bus, sdev);
+        if (bus->info->drained_begin) {
+            bus->info->drained_begin(bus);
+        }
+    }
+}
+
+void scsi_device_drained_end(SCSIDevice *sdev)
+{
+    SCSIBus *bus = DO_UPCAST(SCSIBus, qbus, sdev->qdev.parent_bus);
+    if (!bus) {
+        return;
+    }
+
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    assert(bus->drain_count > 0);
+
+    if (bus->drain_count-- == 1) {
+        trace_scsi_bus_drained_end(bus, sdev);
+        if (bus->info->drained_end) {
+            bus->info->drained_end(bus);
+        }
+    }
+}
+
 static char *scsibus_get_dev_path(DeviceState *dev)
 {
     SCSIDevice *d = SCSI_DEVICE(dev);
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index e01bd84541..2249087d6a 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2360,6 +2360,20 @@ static void scsi_disk_reset(DeviceState *dev)
     s->qdev.scsi_version = s->qdev.default_scsi_version;
 }
 
+static void scsi_disk_drained_begin(void *opaque)
+{
+    SCSIDiskState *s = opaque;
+
+    scsi_device_drained_begin(&s->qdev);
+}
+
+static void scsi_disk_drained_end(void *opaque)
+{
+    SCSIDiskState *s = opaque;
+
+    scsi_device_drained_end(&s->qdev);
+}
+
 static void scsi_disk_resize_cb(void *opaque)
 {
     SCSIDiskState *s = opaque;
@@ -2414,16 +2428,19 @@ static bool scsi_cd_is_medium_locked(void *opaque)
 }
 
 static const BlockDevOps scsi_disk_removable_block_ops = {
-    .change_media_cb = scsi_cd_change_media_cb,
+    .change_media_cb  = scsi_cd_change_media_cb,
+    .drained_begin    = scsi_disk_drained_begin,
+    .drained_end      = scsi_disk_drained_end,
     .eject_request_cb = scsi_cd_eject_request_cb,
-    .is_tray_open = scsi_cd_is_tray_open,
     .is_medium_locked = scsi_cd_is_medium_locked,
-
-    .resize_cb = scsi_disk_resize_cb,
+    .is_tray_open     = scsi_cd_is_tray_open,
+    .resize_cb        = scsi_disk_resize_cb,
 };
 
 static const BlockDevOps scsi_disk_block_ops = {
-    .resize_cb = scsi_disk_resize_cb,
+    .drained_begin = scsi_disk_drained_begin,
+    .drained_end   = scsi_disk_drained_end,
+    .resize_cb     = scsi_disk_resize_cb,
 };
 
 static void scsi_disk_unit_attention_reported(SCSIDevice *dev)
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 81643445ed..1060038e13 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -153,14 +153,16 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
     s->dataplane_starting = false;
     s->dataplane_started = true;
 
-    aio_context_acquire(s->ctx);
-    virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
-    virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
+    if (s->bus.drain_count == 0) {
+        aio_context_acquire(s->ctx);
+        virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
+        virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
 
-    for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        for (i = 0; i < vs->conf.num_queues; i++) {
+            virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        }
+        aio_context_release(s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;
 
 fail_host_notifiers:
@@ -206,9 +208,11 @@ void virtio_scsi_dataplane_stop(VirtIODevice *vdev)
     }
     s->dataplane_stopping = true;
 
-    aio_context_acquire(s->ctx);
-    aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
-    aio_context_release(s->ctx);
+    if (s->bus.drain_count == 0) {
+        aio_context_acquire(s->ctx);
+        aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
+        aio_context_release(s->ctx);
+    }
 
     blk_drain_all(); /* ensure there are no in-flight requests */
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index a02f9233ec..eba1e84dac 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1081,6 +1081,42 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     }
 }
 
+/* Suspend virtqueue ioeventfd processing during drain */
+static void virtio_scsi_drained_begin(SCSIBus *bus)
+{
+    VirtIOSCSI *s = container_of(bus, VirtIOSCSI, bus);
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t total_queues = VIRTIO_SCSI_VQ_NUM_FIXED +
+                            s->parent_obj.conf.num_queues;
+
+    if (!s->dataplane_started) {
+        return;
+    }
+
+    for (uint32_t i = 0; i < total_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+    }
+}
+
+/* Resume virtqueue ioeventfd processing after drain */
+static void virtio_scsi_drained_end(SCSIBus *bus)
+{
+    VirtIOSCSI *s = container_of(bus, VirtIOSCSI, bus);
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t total_queues = VIRTIO_SCSI_VQ_NUM_FIXED +
+                            s->parent_obj.conf.num_queues;
+
+    if (!s->dataplane_started) {
+        return;
+    }
+
+    for (uint32_t i = 0; i < total_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+    }
+}
+
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
     .tcq = true,
     .max_channel = VIRTIO_SCSI_MAX_CHANNEL,
@@ -1095,6 +1131,8 @@ static struct SCSIBusInfo virtio_scsi_scsi_info = {
     .get_sg_list = virtio_scsi_get_sg_list,
     .save_request = virtio_scsi_save_request,
     .load_request = virtio_scsi_load_request,
+    .drained_begin = virtio_scsi_drained_begin,
+    .drained_end = virtio_scsi_drained_end,
 };
 
 void virtio_scsi_common_realize(DeviceState *dev,
diff --git a/hw/scsi/trace-events b/hw/scsi/trace-events
index ab238293f0..bdd4e2c7c7 100644
--- a/hw/scsi/trace-events
+++ b/hw/scsi/trace-events
@@ -6,6 +6,8 @@ scsi_req_cancel(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_data(int target, int lun, int tag, int len) "target %d lun %d tag %d len %d"
 scsi_req_data_canceled(int target, int lun, int tag, int len) "target %d lun %d tag %d len %d"
 scsi_req_dequeue(int target, int lun, int tag) "target %d lun %d tag %d"
+scsi_bus_drained_begin(void *bus, void *sdev) "bus %p sdev %p"
+scsi_bus_drained_end(void *bus, void *sdev) "bus %p sdev %p"
 scsi_req_continue(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_continue_canceled(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_parsed(int target, int lun, int tag, int cmd, int mode, int xfer) "target %d lun %d tag %d command %d dir %d length %d"
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:01:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:01:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524218.814968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSyU-0007UM-Vv; Thu, 20 Apr 2023 12:01:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524218.814968; Thu, 20 Apr 2023 12:01:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSyU-0007UF-SZ; Thu, 20 Apr 2023 12:01:18 +0000
Received: by outflank-mailman (input) for mailman id 524218;
 Thu, 20 Apr 2023 12:01:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8Idg=AL=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1ppSyT-0007U9-HT
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 12:01:17 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20612.outbound.protection.outlook.com
 [2a01:111:f400:7d00::612])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0c664fb4-df73-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 14:01:14 +0200 (CEST)
Received: from DU2PR04CA0051.eurprd04.prod.outlook.com (2603:10a6:10:234::26)
 by AM7PR08MB5317.eurprd08.prod.outlook.com (2603:10a6:20b:101::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 12:01:11 +0000
Received: from DBAEUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:234:cafe::ac) by DU2PR04CA0051.outlook.office365.com
 (2603:10a6:10:234::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.23 via Frontend
 Transport; Thu, 20 Apr 2023 12:01:11 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT056.mail.protection.outlook.com (100.127.142.88) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.25 via Frontend Transport; Thu, 20 Apr 2023 12:01:11 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Thu, 20 Apr 2023 12:01:11 +0000
Received: from 15006531b184.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 09E57CB8-0153-4205-849A-5066F25DFC7F.1; 
 Thu, 20 Apr 2023 12:01:00 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 15006531b184.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 Apr 2023 12:01:00 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS2PR08MB8381.eurprd08.prod.outlook.com (2603:10a6:20b:558::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Thu, 20 Apr
 2023 12:00:57 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 12:00:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c664fb4-df73-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gzGnHnbRrqpGWIdkxLYSf6zrAOvAZ4s7dC7fJoJhHfw=;
 b=veRxT9ezyHiAXJE9kV3a0NULlIyrdcMkxVRwqq/sCz5VyT7i99aiIUDL4KhZQszPma4uDCPiwTzJFXSiZ6QQrcTWuwkyMj5lgM6woxMIdMb0EvQJ+h5qoiRNYupTd8N0K/7hOLx9xDdTw3+7/iM2i6878nvBeeaG9MUPZNpndL0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 349c65c05fb1fc8e
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RHbv4dmy21UWDl/0LVMT/r875pOuRMTgkALySlQQkNvh86h42/S9mHp0SD7g9jIloV4hTZFMX0/aL+wYNak2a/nrt/g6ViKUi0mvovYmDwxyS9eYbmr9Ikv/sMVJFoB5IakmFjE8Bnvgh0nwf0JeqbWqvDEQ9tShloHbMc44EOVNx0AijGFkz8eNsarHDj6NqrQ/1vorvwlMcf8hD6z3fyHSV9CBv4zS1B19DjgfJXb4Gfp9wvg7TzzWBm7f0xGFRN5TQ//GaTkaB3TOsMX2GxsaAX4sKYCmFnqg6Cmd51rZsRdkCjgXGeggq1IldlQp9dlp9mOq5wH5AwCkzzdHwg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gzGnHnbRrqpGWIdkxLYSf6zrAOvAZ4s7dC7fJoJhHfw=;
 b=Ex85n6YYzkGYCRGL45+HXUUDjp9z88+s1yvIo1NTmc9Ie34OcjOyrl/Tr1eB44uY+OiZotfRSvckDmr39B9kigZycl7sEj3AGq0M+MCO3ccTqzk7dPkSsDq5VSXqiOekbUqR/YeSilXU5MGIDMymGqNu3msaOTAGyybqDpcBymRDqvZAhmdZPXS9nthwMGfScxd4U9iRF8M/stFaax68OpWh/Fh+qO03MnnCJlIUimdorrsZ8X2Je4DJtdOiMa5nqgjN5KB0Amzy1A5ys0y3/OY/0cSm9nUa7dwsPvxPCbq9gb3n022wryGHxaHzuxIH/GtgJHOWX7bU9pPEg7+v3w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gzGnHnbRrqpGWIdkxLYSf6zrAOvAZ4s7dC7fJoJhHfw=;
 b=veRxT9ezyHiAXJE9kV3a0NULlIyrdcMkxVRwqq/sCz5VyT7i99aiIUDL4KhZQszPma4uDCPiwTzJFXSiZ6QQrcTWuwkyMj5lgM6woxMIdMb0EvQJ+h5qoiRNYupTd8N0K/7hOLx9xDdTw3+7/iM2i6878nvBeeaG9MUPZNpndL0=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Index:
 AQHZbSQxHdlbNP/OhU+SnBXlOek6la8xDZGAgAE1nQCAAAF/gIAABdgAgAGkXACAADZRAA==
Date: Thu, 20 Apr 2023 12:00:56 +0000
Message-ID: <24B35030-092A-493F-AC78-52732746FA63@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
 <F30CEF99-693A-4218-AC80-433A183DE866@arm.com>
 <3DE2B914-FA6E-49EF-8748-BB8DE4B2CC11@arm.com>
 <8DA3FECA-DEBD-479E-9E5A-57676B98ADA4@arm.com>
 <DE00F3DB-C6D9-4D90-97A8-FD964FD03099@arm.com>
In-Reply-To: <DE00F3DB-C6D9-4D90-97A8-FD964FD03099@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AS2PR08MB8381:EE_|DBAEUR03FT056:EE_|AM7PR08MB5317:EE_
X-MS-Office365-Filtering-Correlation-Id: fce1ee73-fa2f-4cc6-df9e-08db4196eef4
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ISSNA85t0pfbPCN/ETvnNZoC7d4LgVF2o04L+qUGKmaVKH89p3T9yOuSetIfg0VnUcGZWtVhAsD34PzRYlab+x7S2OZiYnM0MJh+O5OW6iZwclInlJVj0dX83ujIQoPsJnTgtlaK+HyyHyUwRPJPXGy8V/zVmlsRyv4ud5PPDcOeL3FOldDZg5WTnwbAzlCfFk8t1/y2f9TMGa/1nXu4UMAuCf3+1i4hSNlTkLxHY+Kdr7cCvMwDgKzN1WHxBCNKBdxBYro4+F/EOpXC5/grVloOPa64MeQYSXmcKDolwm6Te5GJLFzVcGtgy82TP7W1HjYrJPjtCM5o8eGrbAKWjxDcb2sEN7+luu0YbqrE2TdvyBNxsWZcsmTuv6rWTR+Ou8LFM/OVRjcDGA6doFcFyYu1GKMdcwb/vnELAk9pQa5aXfbwfq7s6L1Q2siHf+BsJrDtVHcH3M6gLKTN16IX5cz0ayZWvPCfrZJsrXX4dnR1Cha0cR/GC6+CnnGGcpJGvKIl0OeSo46Bj75XMgAHlHHeMabdz/OJnAItZuIra+LWrvbLC0nHCMbEL4onvomMK4avCam/4bUPfERQjc8tI7oAyrkQHGcfBsX6Gyidc2eUo7t392PCjJW/X38Kj8SiLbNUNgYO+metm86uv1Geg/cTZ2QEywp+scS8j6JturU=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(396003)(39850400004)(376002)(136003)(346002)(451199021)(83380400001)(6512007)(53546011)(6506007)(41300700001)(122000001)(6486002)(71200400001)(2616005)(186003)(91956017)(478600001)(33656002)(4326008)(66476007)(6636002)(76116006)(66556008)(316002)(64756008)(66946007)(66446008)(5660300002)(37006003)(54906003)(38070700005)(36756003)(86362001)(2906002)(8936002)(6862004)(8676002)(38100700002)(21314003)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <9EA39343E09B144097F1BEAA562FDBAD@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8381
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b7ecf22b-e438-428b-8034-08db4196e625
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WzZoKAPfWhdwICrkWtBTzfVbA8d7Vp2ZE6tRTK2RIZBbzHW/qnis8UdAwcuzYUn5QWViHXWEH9SgRfI1FuD84oa2GtgnH7gRUR/Lohiyuiw3orXtUib67yYRu+/66+LmE2IO+yX+oHnr3xguyzUWDiXhgD2XeIPP76gYYLDawCEexXamIrLqHk/jqumj8FzTCXlr2MPHx41St1N6RjOl/UI+1WtBULFJ2+M+SJU3VE9leQ+9YnYQJYiJVNjTR129lNrVlG1DYGu7uBQoYwn5Gg/cBkV9ObCW43lttY5Iif4vS3X1VJrjsA30A8Rp1gQqmNWf8/fTLLX7/SsTFoN05gbvbwyiDW5fsPl2DNJd9GBxBY04GEnJfudPKAPmpPVgS5o6opuRumEIuZrwV0iWe8Kvo51/POrYCr2j3BC7o72gnfDBz0Pacuc4lZv5SjtTtfxFk4w4sX129onbAbIDX97cgeM01czyTSDNhq/NQ9HTPYFww+XUop21YLRUUYNVVN5BOJAAiGttPjL855YlXGY959Ps5QMkCQcLCY5W5tuNFilQkSBRQWBrQxj/Auk6T975Q+ufQE/nC5XkqDJ4hajMqjgx+szH53FFlmPMMkLVGWtUOXpu+2/ELmS10Xw/0+PdJ7QS3AJEmVlFcSTmPalmHSJUVFyWCoS3AX4UkX1fm0JO2Qf38SI9oKuvy3zpgL1OZQ0kPJGBeS0C3E0ZZZCXL30Pva/7+X3QRf8qiv0=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(346002)(39850400004)(136003)(376002)(396003)(451199021)(36840700001)(40470700004)(46966006)(186003)(6506007)(6512007)(53546011)(26005)(6486002)(107886003)(40480700001)(5660300002)(316002)(47076005)(2616005)(336012)(4326008)(41300700001)(70206006)(6636002)(70586007)(8936002)(8676002)(82740400003)(37006003)(6862004)(54906003)(82310400005)(478600001)(2906002)(86362001)(40460700003)(36756003)(33656002)(81166007)(83380400001)(36860700001)(356005)(21314003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 12:01:11.2826
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fce1ee73-fa2f-4cc6-df9e-08db4196eef4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5317

SGkgTHVjYSwNCg0KPiBPbiAyMCBBcHIgMjAyMywgYXQgMTA6NDYsIEx1Y2EgRmFuY2VsbHUgPEx1
Y2EuRmFuY2VsbHVAYXJtLmNvbT4gd3JvdGU6DQo+IA0KPj4+Pj4+IA0KPj4+Pj4+ICtpbnQgX19p
bml0IHN2ZV9zYW5pdGl6ZV92bF9wYXJhbShpbnQgdmFsLCB1bnNpZ25lZCBpbnQgKm91dCkNCj4+
Pj4+PiArew0KPj4+Pj4+ICsgICAgLyoNCj4+Pj4+PiArICAgICAqIE5lZ2F0aXZlIFNWRSBwYXJh
bWV0ZXIgdmFsdWUgbWVhbnMgdG8gdXNlIHRoZSBtYXhpbXVtIHN1cHBvcnRlZA0KPj4+Pj4+ICsg
ICAgICogdmVjdG9yIGxlbmd0aCwgb3RoZXJ3aXNlIGlmIGEgcG9zaXRpdmUgdmFsdWUgaXMgcHJv
dmlkZWQsIGNoZWNrIGlmIHRoZQ0KPj4+Pj4+ICsgICAgICogdmVjdG9yIGxlbmd0aCBpcyBhIG11
bHRpcGxlIG9mIDEyOCBhbmQgbm90IGJpZ2dlciB0aGFuIHRoZSBtYXhpbXVtIHZhbHVlDQo+Pj4+
Pj4gKyAgICAgKiAyMDQ4DQo+Pj4+Pj4gKyAgICAgKi8NCj4+Pj4+PiArICAgIGlmICggdmFsIDwg
MCApDQo+Pj4+Pj4gKyAgICAgICAgKm91dCA9IGdldF9zeXNfdmxfbGVuKCk7DQo+Pj4+Pj4gKyAg
ICBlbHNlIGlmICggKCh2YWwgJSBTVkVfVkxfTVVMVElQTEVfVkFMKSA9PSAwKSAmJiAodmFsIDw9
IFNWRV9WTF9NQVhfQklUUykgKQ0KPj4+Pj4+ICsgICAgICAgICpvdXQgPSB2YWw7DQo+Pj4+PiAN
Cj4+Pj4+IFNob3VsZG4ndCB5b3UgYWxzbyBjaGVjayBpZiBpdCBpcyBub3QgZ3JlYXRlciB0aGFu
IHRoZSBtYXhpbXVtIHZlY3RvciBsZW5ndGggPw0KPj4+PiANCj4+Pj4gSSBkb27igJl0IHVuZGVy
c3RhbmQsIEkgYW0gY2hlY2tpbmcgdGhhdCB0aGUgdmFsdWUgaXMgYmVsb3cgb3IgZXF1YWwgdG8g
U1ZFX1ZMX01BWF9CSVRTLA0KPj4+PiBJZiB5b3UgbWVhbiBpZiBpdCBzaG91bGQgYmUgY2hlY2tl
ZCBhbHNvIGFnYWluc3QgdGhlIG1heGltdW0gc3VwcG9ydGVkIGxlbmd0aCBieSB0aGUgcGxhdGZv
cm0sDQo+Pj4+IEkgdGhpbmsgdGhpcyBpcyBub3QgdGhlIHJpZ2h0IHBsYWNlLCB0aGUgY2hlY2sg
aXMgYWxyZWFkeSBpbiBhcmNoX3Nhbml0aXNlX2RvbWFpbl9jb25maWcoKSwgaW50cm9kdWNlZA0K
Pj4+PiBpbiBwYXRjaCAjMg0KPj4+IA0KPj4+IElmIHRoaXMgaXMgbm90IHRoZSByaWdodCBwbGFj
ZSB0byBjaGVjayBpdCB0aGVuIHdoeSBjaGVja2luZyB0aGUgcmVzdCBoZXJlID8NCj4+PiANCj4+
PiBGcm9tIGEgdXNlciBvciBhIGRldmVsb3BlciBwb2ludCBvZiB2aWV3IEkgd291bGQgZXhwZWN0
IHRoZSB2YWxpZGl0eSBvZiB0aGUgaW5wdXQgdG8gYmUgY2hlY2tlZCBvbmx5DQo+Pj4gaW4gb25l
IHBsYWNlLg0KPj4+IElmIGhlcmUgaXMgbm90IHRoZSBwbGFjZSBmb3IgdGhhdCBpdCBpcyBvayBi
dXQgdGhlbiBpIHdvdWxkIGNoZWNrIGV2ZXJ5dGhpbmcgaW4gYXJjaF9zYW5pdGlzZV9kb21haW5f
Y29uZmlnDQo+Pj4gKG11bHRpcGxlLCBtaW4gYW5kIHN1cHBvcnRlZCkgaW5zdGVhZCBvZiBkb2lu
ZyBpdCBwYXJ0bHkgaW4gMiBwbGFjZXMuDQo+PiANCj4+IE9rLCBnaXZlbiB0aGUgd2F5IHdlIGVu
Y29kZWQgdGhlIHZhbHVlIGluIHhlbl9kb21jdGxfY3JlYXRlZG9tYWluIHN0cnVjdHVyZSwgd2Ug
aGF2ZSB0aGF0IHRoZSB2YWx1ZSB0YWtlcw0KPj4gdmVyeSBsaXR0bGUgc3BhY2UsIGJ1dCBhIHNt
YWxsIGlzc3VlIGlzIHRoYXQgd2hlbiB3ZSBlbmNvZGUgaXQsIHdlIGFyZSBkaXZpZGluZyBpdCBi
eSAxMjgsIHdoaWNoIGlzIGZpbmUgZm9yIHVzZXIgcGFyYW1zDQo+PiB0aGF0IGFyZSBtdWx0aXBs
ZSBvZiAxMjgsIGJ1dCBpdOKAmXMgbGVzcyBmaW5lIGlmIHRoZSB1c2VyIHBhc3NlcyDigJwxMjni
gJ0uDQo+PiANCj4+IFRvIG92ZXJjb21lIHRoaXMgaXNzdWUgd2UgYXJlIGNoZWNraW5nIHRoZSB2
YWx1ZSB3aGVuIGl0IGlzIG5vdCBhbHJlYWR5IGVuY29kZWQuIE5vdywgdGhpbmtpbmcgYWJvdXQg
aXQsIHRoZSBjaGVjaw0KPj4gIiYmICh2YWwgPD0gU1ZFX1ZMX01BWF9CSVRTKeKAnSBpcyBub3Qg
cmVhbGx5IG5lZWRlZCwgYmVjYXVzZSBldmVuIGlmIHRoZSB2YWx1ZSBpcyBhYm92ZSwgdGhlbiBp
biBhcmNoX3Nhbml0aXNlX2RvbWFpbl9jb25maWcNCj4+IHdlIHdpbGwgaGl0IHRoZSB0b3AgbGlt
aXQgb2YgdGhlIHBsYXRmb3JtIG1heGltdW0gVkwuDQo+PiANCj4+IGludCBhcmNoX3Nhbml0aXNl
X2RvbWFpbl9jb25maWcoc3RydWN0IHhlbl9kb21jdGxfY3JlYXRlZG9tYWluICpjb25maWcpDQo+
PiB7DQo+PiAgIHVuc2lnbmVkIGludCBtYXhfdmNwdXM7DQo+PiAgIHVuc2lnbmVkIGludCBmbGFn
c19yZXF1aXJlZCA9IChYRU5fRE9NQ1RMX0NERl9odm0gfCBYRU5fRE9NQ1RMX0NERl9oYXApOw0K
Pj4gICB1bnNpZ25lZCBpbnQgZmxhZ3Nfb3B0aW9uYWwgPSAoWEVOX0RPTUNUTF9DREZfaW9tbXUg
fCBYRU5fRE9NQ1RMX0NERl92cG11KTsNCj4+ICAgdW5zaWduZWQgaW50IHN2ZV92bF9iaXRzID0g
c3ZlX2RlY29kZV92bChjb25maWctPmFyY2guc3ZlX3ZsKTsNCj4+IA0KPj4gICBpZiAoIChjb25m
aWctPmZsYWdzICYgfmZsYWdzX29wdGlvbmFsKSAhPSBmbGFnc19yZXF1aXJlZCApDQo+PiAgIHsN
Cj4+ICAgICAgIGRwcmludGsoWEVOTE9HX0lORk8sICJVbnN1cHBvcnRlZCBjb25maWd1cmF0aW9u
ICUjeFxuIiwNCj4+ICAgICAgICAgICAgICAgY29uZmlnLT5mbGFncyk7DQo+PiAgICAgICByZXR1
cm4gLUVJTlZBTDsNCj4+ICAgfQ0KPj4gDQo+PiAgIC8qIENoZWNrIGZlYXR1cmUgZmxhZ3MgKi8N
Cj4+ICAgaWYgKCBzdmVfdmxfYml0cyA+IDAgKQ0KPj4gICB7DQo+PiAgICAgICB1bnNpZ25lZCBp
bnQgemNyX21heF9iaXRzID0gZ2V0X3N5c192bF9sZW4oKTsNCj4+IA0KPj4gICAgICAgaWYgKCAh
emNyX21heF9iaXRzICkNCj4+ICAgICAgIHsNCj4+ICAgICAgICAgICBkcHJpbnRrKFhFTkxPR19J
TkZPLCAiU1ZFIGlzIHVuc3VwcG9ydGVkIG9uIHRoaXMgbWFjaGluZS5cbiIpOw0KPj4gICAgICAg
ICAgIHJldHVybiAtRUlOVkFMOw0KPj4gICAgICAgfQ0KPj4gDQo+PiAgICAgICBpZiAoIHN2ZV92
bF9iaXRzID4gemNyX21heF9iaXRzICkNCj4+ICAgICAgIHsNCj4+ICAgICAgICAgICBkcHJpbnRr
KFhFTkxPR19JTkZPLA0KPj4gICAgICAgICAgICAgICAgICAgIlJlcXVlc3RlZCBTVkUgdmVjdG9y
IGxlbmd0aCAoJXUpID4gc3VwcG9ydGVkIGxlbmd0aCAoJXUpXG4iLA0KPj4gICAgICAgICAgICAg
ICAgICAgc3ZlX3ZsX2JpdHMsIHpjcl9tYXhfYml0cyk7DQo+PiAgICAgICAgICAgcmV0dXJuIC1F
SU5WQUw7DQo+PiAgICAgICB9DQo+PiAgIH0NCj4+ICBbLi4uXQ0KPj4gDQo+PiBOb3csIEkgdW5k
ZXJzdGFuZCB5b3VyIHBvaW50LCB3ZSBjb3VsZCBjaGVjayBldmVyeXRoaW5nIGluIHN2ZV9zYW5p
dGl6ZV92bF9wYXJhbSgpLCBidXQgaXQgd291bGQgbGVhdmUgYSBwcm9ibGVtDQo+PiBmb3IgZG9t
YWlucyBjcmVhdGVkIGJ5IGh5cGVyY2FsbHMgaWYgSSBhbSBub3Qgd3JvbmcuDQo+PiANCj4+IFdo
YXQgZG8geW91IHRoaW5rPw0KDQpTb3JyeSBpIG1pc3NlZCB0aGF0IGFuc3dlci4NCg0KWWVzIGkg
YWdyZWUsIG1heWJlIHdlIGNvdWxkIGZhY3Rvcml6ZSB0aGUgY2hlY2tzIGluIG9uZSBmdW5jdGlv
biBhbmQgdXNlIGl0IGluIHNldmVyYWwgcGxhY2VzID8NCg0KDQo+IA0KPiBJIHRob3VnaHQgYWJv
dXQgdGhhdCBhbmQgYW5vdGhlciBwb3NzaWJpbGl0eSBpcyB0byBzdG9yZSDigJxzdmVfdmzigJ0g
YXMgdWludDE2X3QgaW5zaWRlIHN0cnVjdCB4ZW5fYXJjaF9kb21haW5jb25maWcsIGFuZA0KPiBj
aGVjayBpdCBpbnNpZGUgYXJjaF9zYW5pdGlzZV9kb21haW5fY29uZmlnKCkgZm9yIGl0IHRvIGJl
IG1vZCAxMjggYW5kIGxlc3MgdGhhbiB0aGUgbWF4IHN1cHBvcnRlZCBWTCwgdGhpcyB3aWxsDQo+
IGFsbG93IHRvIGhhdmUgYWxsIHRoZSBjaGVja3MgaW4gb25lIHBsYWNlLCB0YWtpbmcgYSBiaXQg
bW9yZSBzcGFjZSwgYW55d2F5IHdlIHdvdWxkIHRha2UgdGhlIHNwYWNlIGZyb20gdGhlIGltcGxp
Y2l0DQo+IHBhZGRpbmcgYXMgdGhpcyBpcyB0aGUgY3VycmVudCBzdGF0dXM6DQo+IA0KPiBzdHJ1
Y3QgYXJjaF9kb21haW4gew0KPiBlbnVtIGRvbWFpbl90eXBlICAgICAgICAgICB0eXBlOyAgICAg
ICAgICAgICAgICAgLyogICAgIDAgICAgIDQgKi8NCj4gdWludDhfdCAgICAgICAgICAgICAgICAg
ICAgc3ZlX3ZsOyAgICAgICAgICAgICAgIC8qICAgICA0ICAgICAxICovDQo+IA0KPiAvKiBYWFgg
MyBieXRlcyBob2xlLCB0cnkgdG8gcGFjayAqLw0KPiANCj4gc3RydWN0IHAybV9kb21haW4gICAg
ICAgICAgcDJtOyAgICAgICAgICAgICAgICAgIC8qICAgICA4ICAgMzI4ICovDQo+IC8qIC0tLSBj
YWNoZWxpbmUgNSBib3VuZGFyeSAoMzIwIGJ5dGVzKSB3YXMgMTYgYnl0ZXMgYWdvIC0tLSAqLw0K
PiBzdHJ1Y3QgaHZtX2RvbWFpbiAgICAgICAgICBodm07ICAgICAgICAgICAgICAgICAgLyogICAz
MzYgICAzMTIgKi8NCj4gLyogLS0tIGNhY2hlbGluZSAxMCBib3VuZGFyeSAoNjQwIGJ5dGVzKSB3
YXMgOCBieXRlcyBhZ28gLS0tICovDQo+IHN0cnVjdCBwYWdpbmdfZG9tYWluICAgICAgIHBhZ2lu
ZzsgICAgICAgICAgICAgICAvKiAgIDY0OCAgICAzMiAqLw0KPiBzdHJ1Y3Qgdm1taW8gICAgICAg
ICAgICAgICB2bW1pbzsgICAgICAgICAgICAgICAgLyogICA2ODAgICAgMzIgKi8NCj4gLyogLS0t
IGNhY2hlbGluZSAxMSBib3VuZGFyeSAoNzA0IGJ5dGVzKSB3YXMgOCBieXRlcyBhZ28gLS0tICov
DQo+IHVuc2lnbmVkIGludCAgICAgICAgICAgICAgIHJlbF9wcml2OyAgICAgICAgICAgICAvKiAg
IDcxMiAgICAgNCAqLw0KPiANCj4gLyogWFhYIDQgYnl0ZXMgaG9sZSwgdHJ5IHRvIHBhY2sgKi8N
Cj4gDQo+IHN0cnVjdCB7DQo+IHVpbnQ2NF90ICAgICAgICAgICBvZmZzZXQ7ICAgICAgICAgICAg
ICAgLyogICA3MjAgICAgIDggKi8NCj4gc190aW1lX3QgICAgICAgICAgIG5hbm9zZWNvbmRzOyAg
ICAgICAgICAvKiAgIDcyOCAgICAgOCAqLw0KPiB9IHZpcnRfdGltZXJfYmFzZTsgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgLyogICA3MjAgICAgMTYgKi8NCj4gc3RydWN0IHZnaWNfZGlz
dCAgICAgICAgICAgdmdpYzsgICAgICAgICAgICAgICAgIC8qICAgNzM2ICAgMjAwICovDQo+IA0K
PiAvKiBYWFggbGFzdCBzdHJ1Y3QgaGFzIDIgYnl0ZXMgb2YgcGFkZGluZyAqLw0KPiANCj4gLyog
LS0tIGNhY2hlbGluZSAxNCBib3VuZGFyeSAoODk2IGJ5dGVzKSB3YXMgNDAgYnl0ZXMgYWdvIC0t
LSAqLw0KPiBzdHJ1Y3QgdnVhcnQgICAgICAgICAgICAgICB2dWFydDsgICAgICAgICAgICAgICAg
LyogICA5MzYgICAgMzIgKi8NCj4gLyogLS0tIGNhY2hlbGluZSAxNSBib3VuZGFyeSAoOTYwIGJ5
dGVzKSB3YXMgOCBieXRlcyBhZ28gLS0tICovDQo+IHVuc2lnbmVkIGludCAgICAgICAgICAgICAg
IGV2dGNobl9pcnE7ICAgICAgICAgICAvKiAgIDk2OCAgICAgNCAqLw0KPiBzdHJ1Y3Qgew0KPiB1
aW50OF90ICAgICAgICAgICAgcHJpdmlsZWdlZF9jYWxsX2VuYWJsZWQ6MTsgLyogICA5NzI6IDAg
IDEgKi8NCj4gfSBtb25pdG9yOyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IC8qICAgOTcyICAgICAxICovDQo+IA0KPiAvKiBYWFggMyBieXRlcyBob2xlLCB0cnkgdG8gcGFj
ayAqLw0KPiANCj4gc3RydWN0IHZwbDAxMSAgICAgICAgICAgICAgdnBsMDExOyAgICAgICAgICAg
ICAgIC8qICAgOTc2ICAgIDcyICovDQo+IA0KPiAvKiBzaXplOiAxMTUyLCBjYWNoZWxpbmVzOiAx
OCwgbWVtYmVyczogMTMgKi8NCj4gLyogc3VtIG1lbWJlcnM6IDEwMzgsIGhvbGVzOiAzLCBzdW0g
aG9sZXM6IDEwICovDQo+IC8qIHBhZGRpbmc6IDEwNCAqLw0KPiAvKiBwYWRkaW5nczogMSwgc3Vt
IHBhZGRpbmdzOiAyICovDQo+IH0gX19hdHRyaWJ1dGVfXygoX19hbGlnbmVkX18oMTI4KSkpOw0K
DQpUaGF0IHdvdWxkIHdvcmsgYnV0IGl0IGlzIGEgYml0IG9kZCB0byBzYXZlIGEgMTZiaXQgdmFs
dWUganVzdCBzbw0KeW91IGNvdWxkIHNhdmUgaW52YWxpZCB2YWx1ZXMgYW5kIGdpdmUgYW4gZXJy
b3IuDQoNCkNoZWVycw0KQmVydHJhbmQNCg0KDQo=


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:02:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:02:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524222.814978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSzp-00081U-BS; Thu, 20 Apr 2023 12:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524222.814978; Thu, 20 Apr 2023 12:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppSzp-00081N-7s; Thu, 20 Apr 2023 12:02:41 +0000
Received: by outflank-mailman (input) for mailman id 524222;
 Thu, 20 Apr 2023 12:02:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppSzn-000819-Vo; Thu, 20 Apr 2023 12:02:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppSzn-0004X7-Nz; Thu, 20 Apr 2023 12:02:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppSzn-0005M3-7v; Thu, 20 Apr 2023 12:02:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppSzn-0003UI-7W; Thu, 20 Apr 2023 12:02:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oOxoFHpM8ritLVbFkLbQiACGGVp1SgLmtp8QpxnxpEk=; b=CxvbRfSYw0dp8N2vZHGRLC4sUh
	VPHLciWm/38KjDjCGmT2BPmikwJviBwcldZf71I54/5hg2SBvdUOCaoPn4IENllcVkA/YZw8Mo5Ib
	7h03NH/0PEIn2Lj29jzFuYvk7PbqSB44pz0Z8Q4Zn96FegoZ5XOS6JT1cjqr8ZgiJ/s4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180332-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180332: trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:hosts-allocate:broken:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dff17457c482dcb38b7caafd095334f461ef6d35
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 12:02:39 +0000

flight 180332 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180332/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dff17457c482dcb38b7caafd095334f461ef6d35
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    1 days
Failing since        180314  2023-04-19 10:00:24 Z    1 days   10 attempts
Testing same since   180323  2023-04-19 21:01:54 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

(No revision log; it would be 317 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:21:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:21:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524228.814987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTHM-00024d-Qb; Thu, 20 Apr 2023 12:20:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524228.814987; Thu, 20 Apr 2023 12:20:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTHM-00024W-Ns; Thu, 20 Apr 2023 12:20:48 +0000
Received: by outflank-mailman (input) for mailman id 524228;
 Thu, 20 Apr 2023 12:20:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7xv=AL=redhat.com=quintela@srs-se1.protection.inumbo.net>)
 id 1ppTHL-00024Q-KJ
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 12:20:47 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c65612ca-df75-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 14:20:46 +0200 (CEST)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-517-WzpQVsNqN96718_HY60feA-1; Thu, 20 Apr 2023 08:20:43 -0400
Received: by mail-wr1-f69.google.com with SMTP id
 ffacd0b85a97d-2f8c2258b48so218832f8f.1
 for <xen-devel@lists.xenproject.org>; Thu, 20 Apr 2023 05:20:43 -0700 (PDT)
Received: from redhat.com (static-214-39-62-95.ipcom.comunitel.net.
 [95.62.39.214]) by smtp.gmail.com with ESMTPSA id
 w9-20020a05600c474900b003f17e37ce60sm5241991wmo.47.2023.04.20.05.20.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 20 Apr 2023 05:20:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c65612ca-df75-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1681993244;
	h=from:from:reply-to:reply-to:subject:subject:date:date:
	 message-id:message-id:to:to:cc:cc:mime-version:mime-version:
	 content-type:content-type:in-reply-to:in-reply-to:  references:references;
	bh=tl6voZ2jHwhB9nV9y4cRkZPhJXFytEfPJKXZLTHgxaY=;
	b=e9FMSmItkzUaxRkfA7IZufTuoElvL7FRCm1YMXJIKRd5JH3Va26vhiUyPqd+uqgNdRMzbL
	rIfCvXd/1SOJ8uw0Gf3Iv4FBjs8vDKgdCXSYjnE93rhunmLpLDdZN+IdopmmHv5amx/W8Y
	DqR9rbBqXC1dn/jl7yf6eJzWq0HO3u4=
X-MC-Unique: WzpQVsNqN96718_HY60feA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1681993242; x=1684585242;
        h=mime-version:message-id:date:reply-to:user-agent:references
         :in-reply-to:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=tl6voZ2jHwhB9nV9y4cRkZPhJXFytEfPJKXZLTHgxaY=;
        b=hGDlqUGi8QqY77sA39oTXrjr2QXDZj3AtMLmaVmhZLN0Lls4J8EisjuJGDUnqklbC9
         Kfe4oSEwSnsH1c7mERFKIvSxAByynUId2Blk40ROPSUoh/wx+j1/wbNjxnwhHVQFzSrc
         6jMj5xgIMvp5N7+KIbkFwbCT85VWtU9NZk22CJhxXxRdEsQ9Dtj2xcD2XDaXKoNqH8E6
         u5h5KteZqpLZDjzRJjC9k+R5I2egUkE8inzib7ksDz3Jsy2/u7M+1usGWoccL/DqOS/2
         raKMXBakV+SHql0AomsD0QcpYqfwPyZoqlQ+mjQKjkxUg2ZtJKllJyPsbDinC4vzmkqQ
         ATfw==
X-Gm-Message-State: AAQBX9dgZpGg+JMExWltNR0D1E01UKo/+V/a0Fu/BEkDQ5x/J9UuVgy5
	lyqOd3gJxt8tY1GZGf2Ot7vn6bG4OKw33yxFc1NMT7coL96pzaIdMnzDAwcw59R9ejsJUe449zm
	4mJDSOgE9RfEUEpHNgWj38aYD1Ww=
X-Received: by 2002:adf:fe45:0:b0:302:1b72:b951 with SMTP id m5-20020adffe45000000b003021b72b951mr963595wrs.26.1681993242613;
        Thu, 20 Apr 2023 05:20:42 -0700 (PDT)
X-Google-Smtp-Source: AKy350aDAVOfKI9kohboQVeUsjo0MtZrf//nMbnob2NhGWz6dOPqHPY1GQOzVqmYbUEd0harlhNaRg==
X-Received: by 2002:adf:fe45:0:b0:302:1b72:b951 with SMTP id m5-20020adffe45000000b003021b72b951mr963566wrs.26.1681993242210;
        Thu, 20 Apr 2023 05:20:42 -0700 (PDT)
From: Juan Quintela <quintela@redhat.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: Eric Blake <eblake@redhat.com>,  Stefan Hajnoczi <stefanha@redhat.com>,
  qemu-devel@nongnu.org,  Stefano Stabellini <sstabellini@kernel.org>,
  Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,  Fam Zheng
 <fam@euphon.net>,  Julia Suvorova <jusual@redhat.com>,  Hanna Reitz
 <hreitz@redhat.com>,  Daniel P. =?utf-8?Q?Berrang=C3=A9?=
 <berrange@redhat.com>,  Paolo
 Bonzini <pbonzini@redhat.com>,  Coiby Xu <Coiby.Xu@gmail.com>,  Paul
 Durrant <paul@xen.org>,  Ronnie Sahlberg <ronniesahlberg@gmail.com>,
  Eduardo Habkost <eduardo@habkost.net>,  "Michael S. Tsirkin"
 <mst@redhat.com>,  Stefano Garzarella <sgarzare@redhat.com>,  Anthony
 Perard <anthony.perard@citrix.com>,  Kevin Wolf <kwolf@redhat.com>,
  "Richard W.M. Jones" <rjones@redhat.com>,  Richard Henderson
 <richard.henderson@linaro.org>,  xen-devel@lists.xenproject.org,
  qemu-block@nongnu.org,  "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
  Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>,  Peter Lieven
 <pl@kamp.de>,
  eesposit@redhat.com,  Aarushi Mehta <mehta.aaru20@gmail.com>,  Stefan
 Weil <sw@weilnetz.de>,  Xie Yongji <xieyongji@bytedance.com>,  David
 Woodhouse <dwmw2@infradead.org>
Subject: Re: [PATCH v2 16/16] virtio: make it possible to detach host
 notifier from any thread
In-Reply-To: <CAJSP0QVjFcicweDxVvLyhijmdQqQPTN_uhzP2wU7ZS4ZXxKkEQ@mail.gmail.com>
	(Stefan Hajnoczi's message of "Thu, 20 Apr 2023 07:29:12 -0400")
References: <20230419172817.272758-1-stefanha@redhat.com>
	<20230419172817.272758-17-stefanha@redhat.com>
	<msjl3ep44f2dxpno7xw3zxjrkuh5iegyieszertt6ppkhpk62q@xxi7a5shhkc2>
	<CAJSP0QVjFcicweDxVvLyhijmdQqQPTN_uhzP2wU7ZS4ZXxKkEQ@mail.gmail.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)
Reply-To: quintela@redhat.com
Date: Thu, 20 Apr 2023 14:20:40 +0200
Message-ID: <87edoeycbr.fsf@secure.mitica>
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain

Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Wed, 19 Apr 2023 at 14:52, Eric Blake <eblake@redhat.com> wrote:
>>
>> On Wed, Apr 19, 2023 at 01:28:17PM -0400, Stefan Hajnoczi wrote:
>> > virtio_queue_aio_detach_host_notifier() does two things:
>> > 1. It removes the fd handler from the event loop.
>> > 2. It processes the virtqueue one last time.
>> >
>> > The first step can be performed by any thread and without taking the
>> > AioContext lock.
>> >
>> > The second step may need the AioContext lock (depending on the device
>> > implementation) and runs in the thread where request processing takes
>> > place. virtio-blk and virtio-scsi therefore call
>> > virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in
>> > the AioContext.
>> >
>> > Scheduling a BH is undesirable for .drained_begin() functions. The next
>> > patch will introduce a .drained_begin() function that needs to call
>> > virtio_queue_aio_detach_host_notifier().
>> >
>> > Move the virtqueue processing out to the callers of
>> > virtio_queue_aio_detach_host_notifier() so that the function can be
>> > called from any thread. This is in preparation for the next patch.
>> >
>>
>> This mentions a next patch, but is 16/16 in the series.  Am I missing
>> something?
>
> Good thing you caught this. The patch series was truncated because I
> was in the middle of git rebase -i :(.
>
> I will send a v3 with the remaining patches.

I saw that it was not migration/* stuff and thought that I was done O:-)



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:21:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:21:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524230.814998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTHk-0002UH-7A; Thu, 20 Apr 2023 12:21:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524230.814998; Thu, 20 Apr 2023 12:21:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTHk-0002TE-3K; Thu, 20 Apr 2023 12:21:12 +0000
Received: by outflank-mailman (input) for mailman id 524230;
 Thu, 20 Apr 2023 12:21:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fY2H=AL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppTHi-00024Q-LM
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 12:21:10 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20606.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d4a4fde4-df75-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 14:21:09 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7726.eurprd04.prod.outlook.com (2603:10a6:102:ea::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Thu, 20 Apr
 2023 12:21:07 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 12:21:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4a4fde4-df75-11ed-b21f-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RpS6gVlwrf2Wv4DRe60ws3W3kymEUdEaKSORDl2re9iZe+7tv55p7UGHVWMeN+1M+uHIngGMzEfGnDI8YXMFwbi/1tj/3TO2PLRqEGemsdxtpJh4iX/DlyOSW3JWHq3r9wygpFsTJcx+NfNyqyWNp5pmlla0x3csl1BrYJvhWfQs6oRh/nz8ala7Eb9K8QxsawbpgNN6Hb7QXjOApTU5O3tz+Vq4zWj51TkzJ1ozilYdVue7BzlMkecIKa+oVp/AwBXSzpqkmmKBqmXc0zY0GhYTJ8sVrO0uNrzL12yBAoD0ykQlU7QqzlGueqkxeT2yEIqBtz9WamS+aNPL77FtCg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=inHa3VuybAYPx2l8Ci6A+cf7U65p4vG0Esy0HkUw2OU=;
 b=kPajlHjTuHyDkopZlhVa76DFDNH9pf5w6ZtDKYQ52YWlzinbtv5xFOgjtWgDijq3Gtb9jXtK9MVBYYCKnUf+DpySdtt+t9tMQLcrmQdQyjyxsodXQVl9VU6AwjTAn+SWV+tlVsbv6hTGFC8mbz1DI5hlXymMeW+kWLIwTWd8EsbVev+xXFUZ9KzaTlvsGfY2l67IFzooHHx3yqzhVBBmOa9H1S0yyDS9LwfAkOBkq4OtzLNmtS08KR5DiEwBYBIMBxmCzy1F84FZTte/h5wU/Jcz1DkkVMFyXiw52wNwCWRRxQacWvu0zLO6Le2mhM+L763ExRDmhpEdCBShH4Kovg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=inHa3VuybAYPx2l8Ci6A+cf7U65p4vG0Esy0HkUw2OU=;
 b=TdhwtyOvRUSLpgC747uUP8kUbcUSap7C7/xPoeiiKlzJgTmTlJBjANIc4sm7++eEsVCIkuHgX0lwtIyqpjJsKl6UbTNQsh5WgWbS/9kcT8Lzl2bm+lmH+alISjrr+GR7CdkNyGdKQntym4QqyJcwyWYMBionYZHP/ZjNvONPkmWstbBE9IxW4QnyO/s805iF+YRqOIJLA93h5vKGeqBgix6owDX4vF1uTU8taR624RmVJUFTAZeI5wwZmjDPCYZN0zf8ebGMGMA2IJ5buSRO0XyJx89+rivDL3E570Uf2UANi+QxEMN+OdaT7OKjZDkRFo1GEIE/YZZm7X6CMtUfSQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <bcde0374-e309-1e65-e9c2-d20cc6c3e005@suse.com>
Date: Thu, 20 Apr 2023 14:21:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 01/17] xen/arm: use NR_MEM_BANKS to override default
 NR_NODE_MEMBLKS
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-2-Henry.Wang@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230420112521.3272732-2-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0156.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB7726:EE_
X-MS-Office365-Filtering-Correlation-Id: 3dad5e5b-89cf-46b2-4490-08db4199b6ad
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	tigO903+Fm4+n+WXFzj+kspr8hi9XwO6jwgN5OW+5Wq0rIOWYXREfg3Ze+lV4MRnsimVwZG0w4LOQMIOqfBmx7Kp0LhwusNm6cUQ46qwu2Zj+INE3O4rRZCoB3KyYwx+hnseGzZHOiBJx0JerW4RkFS5eD75ZK+9Q7FszJMign8o7czp5yrSAsplPKFDtIklChTwEKYmlKXWxRrOoMV15SZtiR96RzQXgNLdzSXftYWVCIbcUq/pNTPqKz3dBHa34jA8GuU1hUyfPHTL1/UcDnJq+OgFKPMKxENiJFQ8gBJhteunw3bj8p/AmbzSQVKlZhzXB8BeH1426k9G4Zq/gnVads1d+sXm43nPCYz5ZfE67O2J1TxRannCXcX0tzqOD2D94Ft7pX/K62x3XyqM1LQJnvhU3OHCCi/AR39VwIdUq3YMndndp9Ie8P9dtoO+t8l2JvjzC5Tod2844Rl3Dl4KkEtzLh+KJ+0icxJ4on4RXUVKmmyxSa2D9CwYdKGMxwFa5vja7bOcRIDxn24h0ZW2cOXQeyRJHmMhBGA8SXZtphi084Y28P6xyCILxxD67fP7QHG2indR35YEGS1g/JSTQbA32L6FohQcPs5AT26slGQ376AS0koZrjf9hVQORlM4jrDu8KmsfsMrSavGOg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(376002)(346002)(396003)(136003)(39860400002)(451199021)(41300700001)(31696002)(83380400001)(66556008)(66946007)(66476007)(966005)(2616005)(86362001)(316002)(6916009)(4326008)(36756003)(38100700002)(53546011)(54906003)(2906002)(4744005)(6506007)(478600001)(6512007)(8936002)(8676002)(26005)(186003)(7416002)(5660300002)(31686004)(6486002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RS9DWUxXQUVVcG5XdzNtOTFBTTUzN1lMVFFpMHB2VFJwNXhQRlBBY0RsU1F6?=
 =?utf-8?B?QjhvbnV6S1Azd3hSaU44UkxubXdLNUd1UUkyc3dNTGhDeDl2Ly9SVFBPN2da?=
 =?utf-8?B?TXZxTEdQTmVNMTZPOUhJV2VxRlNNMzd3cnNkQXZmcTZQeEhMV0lNL3VzdTMx?=
 =?utf-8?B?L1FoZE0zRlhSRnQ3WkRhbGtJQmxHL0YxenVDaVY4V0RiQ1ErQ1VxOTJNUTNo?=
 =?utf-8?B?TlNoa0kwbW9XdElkejkxK3RlZi90M3kweGUySm5TbEJOcGtwQTk4eXF4OXkw?=
 =?utf-8?B?aHJudHRHOHZRcW1JMmF4MXJBbThlbUI4TDJsVXRtMHNQT1lRR2NEWHRzNTdE?=
 =?utf-8?B?STlVSDYrL3hLSVRqM1NTeU80TGpwVWwyRFhyeGd4SFdUOWZoUk4zdnVhUkVQ?=
 =?utf-8?B?bHV0NUxsZWljUEc3ejZKL0FEK1FkSXNQenlUUjdHRmtDaDVJTnM4YXhRVjRZ?=
 =?utf-8?B?Z2xUWE9ScWZIWmlWajRzeGNGUjMrczhwU1I5ZmppVDArQk9uK0Q1VWNpamt1?=
 =?utf-8?B?eUo5T3RNTkJqeHhKdkpyWFZ3dmYxTWxPeFF2MU83L0Iwc0dhTW5kamNvRi9a?=
 =?utf-8?B?NXFDeldpTTU3RHlOWWU4U3Bvcm1VNWFrMFlxYStrZFMzMGFVWEkwYzR6d1hJ?=
 =?utf-8?B?ekRoc1JrWVJZSzV4TDlCN28yeEhWVE5yZmplajRNMEwyMDk3RzBpeFozY0RR?=
 =?utf-8?B?S2dTcTNnVG5sSk5hSmtqYm82SVptM2Raa2J3MFpXREk3RWI3bmhtbjltUGFI?=
 =?utf-8?B?SDg0N0gyMlpRS2Q4MnNUQXJKa3o4RXNzeXNaV1VneUNRRXlPenFBTUhVdWNL?=
 =?utf-8?B?MGpWMExOdXJqZTRqV3BTd2ZaYlNPUmV2REREMEZlTDFxWlNSWlI5UHBGR0ZT?=
 =?utf-8?B?MDE0TjJ2K1QwK1dwZHA3Qkh4c2dEQ2V0U2NIT2R1dVBqUkxRSDdSTTM4UDdo?=
 =?utf-8?B?cjBKMDNXRWpjWktjWCtKeERWRktLOWYwZytpbmNGZ0ZNVkp6djM3OEpvRVo1?=
 =?utf-8?B?UTVWN3ZPbUtrSWMrRkw4a3lRWktrSnRzSkRTeldURFExNHFHU3hGNWN5QzVQ?=
 =?utf-8?B?L1FpT1pMcEZYZng4NnFLT0o1b01wMVp5QnljeDd2dmcxc29YUUdRWDd6c2FO?=
 =?utf-8?B?d3lZMGhGZ0d0MmRKdDFFMU9CbEFzaWNRUkMwTHh5T1VSS2hmR3R0ak5FQk55?=
 =?utf-8?B?bFpaazdpU1l2T1lqbGVrWE93SjlzQmk2THNJSVZROWNxMnQyRjJTQWhsWWRp?=
 =?utf-8?B?bURrWXBEdEFERkVmVzg1c3ZvRnJaM0d0bk1BK3E3NWx0b3VwQW1vUVNBTjZ6?=
 =?utf-8?B?ZWFwdi82N1BkRTlhQmZFTUswMkZZNkp5QTNVSnpwT0dQUGlTQlpPTi9tdENw?=
 =?utf-8?B?TlR0TGg5eCtoMUZ6WEdYV0RIaU1WQ3pSdnF1akVJa0hQQ2d6RWlaeUh6UGtC?=
 =?utf-8?B?dlUwaTZDdVNoU2s3R2VwYU13M3pMaHRoRnFtdjB6S0ZyL3o5UW9LYmJNM1BG?=
 =?utf-8?B?cmk5SGM3T3FIYTNSdnRsMjNQb1pTb1U4Zk14dHQxTDlmZzNDdzlEUExsZm5K?=
 =?utf-8?B?SkxWSFR4SEhNK0hmMUJ3M2RacHYyUEZtWDBQeUtZRXhRWGVsRmp6UG5vZnRS?=
 =?utf-8?B?Zm56ZWlwb2tVQloyQzVlRnBGUDd6ZFFWSGFuV3BBU3RpeTdGekFnejdxTlZ5?=
 =?utf-8?B?QUF1SVpXVGY1K3ZkUC9aMzM2TlBLK1FlOFl0MmZYbGhkRFdQdlNTUHd0QlZT?=
 =?utf-8?B?QzMvcjFRWDMvT0lYU3BlaGU3MUpGUkswdEh1cDNHeTRTRkMxREtZR1p0MTk2?=
 =?utf-8?B?b0FIOXZFKzFUUkJMTStMVWhUUmR2dEpqQ0s1RVhacHk4K2VaM0ZEVEl0QU9x?=
 =?utf-8?B?cHhUVEpyc3N4emErQkZPL29ESXQxVzRhd0QreTM1dndiSXVCTVdxTFZxbUUv?=
 =?utf-8?B?MlVWMjRKNE9JU1JldHFpb1VwZS9XYjVUL3VFbmZUMDV2WkRacG8yNXVoRHJV?=
 =?utf-8?B?eThKZ0lsL3FxeTA5MTVISUZsa2p6SzlSNEVkTFJGbHZ2TUZxQkQ4ZkI0QUJW?=
 =?utf-8?B?V0tUODRjU1huQnliTHRqNmlJSk1zc2kzRVM4WWo0U3RibDBqK1pOaHlSVWI3?=
 =?utf-8?Q?SNUnQVXvqy3ut5smWda6HCnb6?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3dad5e5b-89cf-46b2-4490-08db4199b6ad
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 12:21:05.5406
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZJHqnn5hOIOPkJNPZreP5FCmnzANLbGKYvQS/RKXT1qSxhy42gSZXscVWQR56xKpQHL8Srv+IXH+fhQWfudWvQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7726

On 20.04.2023 13:25, Henry Wang wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> A memory range described in the device tree cannot be split across
> multiple nodes, and if you have more than 64 nodes it is very likely
> that you need a lot more than 2 regions per node. So the default
> NR_NODE_MEMBLKS value (MAX_NUMNODES * 2) makes no sense on Arm.
> 
> So, for Arm, define NR_NODE_MEMBLKS as an alias of NR_MEM_BANKS.
> In the future NR_MEM_BANKS will be user-configurable via Kconfig,
> but for now leave NR_MEM_BANKS as 128 on Arm. This avoids having a
> different way to define the value for NUMA vs non-NUMA.
> 
> Further discussions can be found here[1].
> 
> [1] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:29:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:29:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524237.815008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTPr-0003Jz-1u; Thu, 20 Apr 2023 12:29:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524237.815008; Thu, 20 Apr 2023 12:29:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTPq-0003Js-Ut; Thu, 20 Apr 2023 12:29:34 +0000
Received: by outflank-mailman (input) for mailman id 524237;
 Thu, 20 Apr 2023 12:29:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ppTPp-0003Jm-ND
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 12:29:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppTPp-0004y4-1t; Thu, 20 Apr 2023 12:29:33 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=[192.168.15.245]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppTPo-0006iX-Pn; Thu, 20 Apr 2023 12:29:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=USVBJklPfw8TJJtOqRW3yDXziEqZrf9nq38LnawyVK8=; b=khYldIgvkWwGUNOIItf+GgH1Ze
	XugnH/0sQKaIGoqNHli1LLOecOl3lW+ptYjPEtLxSSbabXG8ei51z5aA4sv2FhBZGJRG9nm/ly1sK
	lCS4iDBeY6v0whSghUlahKP5zv4H6zUSooKBk4ARB1YngI2TxIaPI0CblaQ9lbKEKlb4=;
Message-ID: <bb6b5288-f123-8d25-3cc3-ef36164ea04c@xen.org>
Date: Thu, 20 Apr 2023 13:29:29 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
 <F30CEF99-693A-4218-AC80-433A183DE866@arm.com>
 <3DE2B914-FA6E-49EF-8748-BB8DE4B2CC11@arm.com>
 <8DA3FECA-DEBD-479E-9E5A-57676B98ADA4@arm.com>
 <DE00F3DB-C6D9-4D90-97A8-FD964FD03099@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <DE00F3DB-C6D9-4D90-97A8-FD964FD03099@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

On 20/04/2023 09:46, Luca Fancellu wrote:
> 
>>>>>> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
>>>>>> +{
>>>>>> +    /*
>>>>>> +     * Negative SVE parameter value means to use the maximum supported
>>>>>> +     * vector length, otherwise if a positive value is provided, check if the
>>>>>> +     * vector length is a multiple of 128 and not bigger than the maximum value
>>>>>> +     * 2048
>>>>>> +     */
>>>>>> +    if ( val < 0 )
>>>>>> +        *out = get_sys_vl_len();
>>>>>> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
>>>>>> +        *out = val;
>>>>>
>>>>> Shouldn't you also check that it is not greater than the maximum vector length?
>>>>
>>>> I don't understand; I am checking that the value is below or equal to SVE_VL_MAX_BITS.
>>>> If you mean it should also be checked against the maximum length supported by the platform,
>>>> I think this is not the right place: that check is already in arch_sanitise_domain_config(), introduced
>>>> in patch #2
>>>
>>> If this is not the right place to check it, then why check the rest here?
>>>
>>>  From a user or a developer point of view I would expect the validity of the input to be checked in only
>>> one place.
>>> If here is not the place for that, that is ok, but then I would check everything in arch_sanitise_domain_config()
>>> (multiple, min and supported) instead of doing it partly in 2 places.
>>
>> Ok, given the way we encoded the value in the xen_domctl_createdomain structure, the value takes
>> very little space; but a small issue is that when we encode it, we divide it by 128, which is fine for user params
>> that are multiples of 128, but less fine if the user passes "129".
>>
>> To overcome this issue we check the value while it is not yet encoded. Now, thinking about it, the check
>> "&& (val <= SVE_VL_MAX_BITS)" is not really needed, because even if the value is above it, in arch_sanitise_domain_config()
>> we will hit the top limit of the platform's maximum VL.
>>
>> int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>> {
>>     unsigned int max_vcpus;
>>     unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>     unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>>     unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>>
>>     if ( (config->flags & ~flags_optional) != flags_required )
>>     {
>>         dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
>>                 config->flags);
>>         return -EINVAL;
>>     }
>>
>>     /* Check feature flags */
>>     if ( sve_vl_bits > 0 )
>>     {
>>         unsigned int zcr_max_bits = get_sys_vl_len();
>>
>>         if ( !zcr_max_bits )
>>         {
>>             dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>>             return -EINVAL;
>>         }
>>
>>         if ( sve_vl_bits > zcr_max_bits )
>>         {
>>             dprintk(XENLOG_INFO,
>>                     "Requested SVE vector length (%u) > supported length (%u)\n",
>>                     sve_vl_bits, zcr_max_bits);
>>             return -EINVAL;
>>         }
>>     }
>>    [...]
>>
>> Now, I understand your point: we could check everything in sve_sanitize_vl_param(), but if I am not
>> mistaken it would leave a problem for domains created via hypercalls.
>>
>> What do you think?
> 
> I thought about that. Another possibility is to store "sve_vl" as a uint16_t inside struct xen_arch_domainconfig and
> check it inside arch_sanitise_domain_config() for being a multiple of 128 and no more than the max supported VL. This
> would allow having all the checks in one place, at the cost of a bit more space; anyway, we would take the space from
> the implicit padding, as this is the current status:

Sorry, I am having trouble following the discussion. If you are checking 
the value in arch_sanitise_domain_config(), then why do you need to take 
more space in arch_domain?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:30:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:30:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524240.815018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTQX-0004fJ-Az; Thu, 20 Apr 2023 12:30:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524240.815018; Thu, 20 Apr 2023 12:30:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTQX-0004fC-6x; Thu, 20 Apr 2023 12:30:17 +0000
Received: by outflank-mailman (input) for mailman id 524240;
 Thu, 20 Apr 2023 12:30:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ppTQW-0004f2-GJ
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 12:30:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppTQT-00050M-Is; Thu, 20 Apr 2023 12:30:13 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=[192.168.15.245]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppTQT-0006jo-Bf; Thu, 20 Apr 2023 12:30:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=nSWYaANsJVA/M6A750YbBpgphrkLqfCBpbXVkB3eAN4=; b=6IMy7mSW79/H5jiqJ8gBO3t6pQ
	Pj4fhPXn6jMVm1Hd1nUO9IzQ1EjBWOOv0n9ixXt1JlIPQDZQdNrUJPjaE+OeyWVIosUKHG6HEXbmR
	oiJ1sXvKjCJJmRpKLKdXi34goY7JPD2JmadbG6i0oOnn1ogLJq35ptJ/UN4fjUMg0gs8=;
Message-ID: <8e3d2240-3741-497e-2318-5a4a4d7bfd7b@xen.org>
Date: Thu, 20 Apr 2023 13:30:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 00/12] SVE feature for arm guests
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Jan Beulich <jbeulich@suse.com>
Cc: Luca Fancellu <Luca.Fancellu@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Nick Rosbrook <rosbrookn@gmail.com>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <08BE4F94-C4B5-4B05-AD92-61C5C5D24F39@arm.com>
 <bdb1b5e3-c3d9-1c39-f7f7-8f48157ba7b3@xen.org>
 <4cbaaf12-bd11-ca04-eed1-f8848290a692@suse.com>
 <C21BD176-AD46-4379-947F-4271D3EE05A1@arm.com>
 <5f5b65eb-d1fc-271a-02db-aa347cc708e9@suse.com>
 <7614AE25-F59D-430A-9C3E-30B1CE0E1580@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <7614AE25-F59D-430A-9C3E-30B1CE0E1580@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 19/04/2023 09:20, Bertrand Marquis wrote:
> Hi Jan,
> 
>> On 19 Apr 2023, at 09:52, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 19.04.2023 09:31, Bertrand Marquis wrote:
>>> Hi Jan,
>>>
>>>> On 19 Apr 2023, at 08:28, Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 18.04.2023 16:25, Julien Grall wrote:
>>>>> On 18/04/2023 14:13, Bertrand Marquis wrote:
>>>>>> On this series I would like to open a discussion on how to handle the vector size
>>>>>> and the corresponding command line / configuration / device tree parameters.
>>>>>>
>>>>>> In general, the user must either specify the vector size they want, or have a way
>>>>>> to simply request the maximum supported size.
>>>>>>
>>>>>> In the current implementation if a size bigger than the supported one is provided:
>>>>>> - we silently disable SVE for dom0
>>>>>> - we silently disable SVE for dom0less
>>>>>> - we do not create a guest when done through tools
>>>>>>
>>>>>> This is not completely coherent, and I think we should aim for coherent behaviour
>>>>>> unless we have arguments for keeping the current state.
>>>>>
>>>>> +1.
>>>>>
>>>>>> Is there any good reason to silently disable SVE for Dom0 and dom0less only?
>>>>>>
>>>>>> I see some possible solutions here:
>>>>>>
>>>>>> - modify the parameter behaviour to fall back to the supported size if the parameter
>>>>>> is bigger. This would at least keep SVE enabled if a VM depends on it, and could
>>>>>> simplify some of the handling by letting 2048 request the maximum supported size.
>>>>>
>>>>> My concern with this approach and the third one is that the user may take
>>>>> some time to realize the problem is in the xl.cfg. So...
>>>>>
>>>>>>
>>>>>> - coherently stop if the parameter value is not supported (including if SVE is
>>>>>> not supported)
>>>>>
>>>>> ... this is my preferred approach because it would be clear that the
>>>>> value passed to Xen is bogus.
>>>>
>>>> I did say earlier on that this comes with its own downside of preventing
>>>> boot from completing for no real reason. It's all Arm code, so you're free
>>>> to ignore me, but in similar situations elsewhere (sorry, I don't recall a
>>>> concrete example off the top of my head) we've aimed to allow the system
>>>> to boot, for the admin to then take corrective action if/as needed.
>>>
>>> But a guest depending on the feature will just crash later when booting.
>>> This assumes that all guests are able to properly adapt to different
>>> hardware capabilities. That might be the case with a full Linux, but in an
>>> embedded use case, if something is activated via command line or
>>> configuration parameters, I think it should not be expected that those are
>>> silently ignored.
>>>
>>> There are definitely two different needs here; maybe we need something
>>> like a "strict" switch to allow both use cases?
>>
>> Possibly. Yet, along the lines of what I've said before - would you then also
>> fail the boot upon encountering entirely unknown command line options?
> 
> I think this should depend:
> - completely unknown: we can ignore it
> - not supported (e.g. SVE while SVE is not supported by the platform or Xen): we should not ignore it
> 
> I agree that one could use custom command line arguments for lots of reasons
> (in Linux you can do that and read them back from /proc, for example), but I do
> not think we should silently ignore a parameter that is known to Xen.
> 
> I think in most cases one could believe their system is running correctly but
> hit problems later (or in some cases never), so having a clear error at the
> beginning is much clearer.

FWIW, I agree with Bertrand.
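
The clamp-vs-strict trade-off discussed in this thread can be sketched as a small validation helper. The names, bounds, and return convention below are purely illustrative (not Xen's actual implementation); the only facts assumed from the architecture are that SVE vector lengths are multiples of 128 bits, up to 2048:

```c
#include <assert.h>
#include <stdbool.h>

#define SVE_VL_MIN  128U
#define SVE_VL_MAX 2048U

/*
 * Validate a requested SVE vector length (in bits) against what the
 * platform supports.  Returns the vector length to use, or 0 on error.
 *
 * strict == true  : reject any unsatisfiable request (fail boot /
 *                   domain creation, the behaviour Bertrand argues for).
 * strict == false : silently clamp to the supported maximum (the
 *                   "keep booting" behaviour Jan describes).
 */
static unsigned int sve_check_vl(unsigned int requested,
                                 unsigned int supported, bool strict)
{
    /* Malformed values (out of range, or not a multiple of 128) always fail. */
    if ( requested < SVE_VL_MIN || requested > SVE_VL_MAX ||
         requested % 128U != 0 )
        return 0;

    if ( requested > supported )
        return strict ? 0 : supported;

    return requested;
}
```

With strict == false, a request of 2048 doubles as "give me the maximum supported size", which matches the first option listed earlier in the thread.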

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:35:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:35:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524246.815027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTVM-0005Ks-SD; Thu, 20 Apr 2023 12:35:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524246.815027; Thu, 20 Apr 2023 12:35:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTVM-0005Kl-PL; Thu, 20 Apr 2023 12:35:16 +0000
Received: by outflank-mailman (input) for mailman id 524246;
 Thu, 20 Apr 2023 12:35:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppTVL-0005Kb-W2; Thu, 20 Apr 2023 12:35:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppTVL-0005GF-NC; Thu, 20 Apr 2023 12:35:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppTVL-00076F-5n; Thu, 20 Apr 2023 12:35:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppTVL-00079e-5M; Thu, 20 Apr 2023 12:35:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SZ6hjhRErb8QiR4/S4QbDDkcri9m+TrgeVAw6tDWPBg=; b=mL/B2m53KHCpguyLpf7CbV8YuO
	d6D5ouguUQLd8nXea929P5rp95krnFy53KJk1DLPp+25e2r1c+paNyVIMpsn5Xb26WGv67oGQXV52
	waL/LKyUjJtSoNXycb3xvpqydoypKN5lrNBzHugTTUnewrpo/B1Lc6BXZ2c549gBQKoU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180331-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180331: trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:broken:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):starved:nonblocking
    qemu-mainline:build-arm64-pvops:hosts-allocate:starved:nonblocking
    qemu-mainline:build-arm64-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=c1eb2ddf0f8075faddc5f7c3d39feae3e8e9d6b4
X-Osstest-Versions-That:
    qemuu=7dbd6f8a27e30fe14adb3d5869097cddf24038d6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 12:35:15 +0000

flight 180331 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180331/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 build-armhf                   2 hosts-allocate        broken starved in 180258
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180258
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180258
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180258
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180258
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180258
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw  1 build-check(1)               starved  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               starved  n/a
 test-arm64-arm64-xl           1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               starved  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               starved  n/a
 build-arm64-pvops             2 hosts-allocate               starved  n/a
 build-arm64-xsm               2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                c1eb2ddf0f8075faddc5f7c3d39feae3e8e9d6b4
baseline version:
 qemuu                7dbd6f8a27e30fe14adb3d5869097cddf24038d6

Last test of basis   180258  2023-04-14 08:48:34 Z    6 days
Testing same since   180320  2023-04-19 16:38:30 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              starved 
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            starved 
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

------------------------------------------------------------
commit c1eb2ddf0f8075faddc5f7c3d39feae3e8e9d6b4
Author: Peter Maydell <peter.maydell@linaro.org>
Date:   Wed Apr 19 17:27:13 2023 +0100

    Update version for v8.0.0 release
    
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:38:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:38:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524253.815037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTYb-0005ym-FN; Thu, 20 Apr 2023 12:38:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524253.815037; Thu, 20 Apr 2023 12:38:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTYb-0005yf-Cj; Thu, 20 Apr 2023 12:38:37 +0000
Received: by outflank-mailman (input) for mailman id 524253;
 Thu, 20 Apr 2023 12:38:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fY2H=AL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppTYa-0005yX-QI
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 12:38:36 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2060f.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::60f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 44842d9d-df78-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 14:38:36 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9123.eurprd04.prod.outlook.com (2603:10a6:102:22e::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Thu, 20 Apr
 2023 12:38:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 12:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44842d9d-df78-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4To2x6HVjUmz8BqSUY476S/NfcKyHY0DK5xyL1YWgTU=;
 b=UCNchg7eq4VwMV+w3E/Ys5ciYBR6bjkbBL/mr/ZlFBKUh6BLr25BHkzHsyKBaSlXgdqrV6AXEJ5x54Do4IfEQgFttVRr4DjbFSw6DP8DXfAGNWEcyMgTcKdaP6VKfFbUIzmbcWVWOC1eotfuFxdODwrvEst8CzgRIWGABP2PTIuEJEyGsrlDfQY031I/yn+RmHXOCrqvb15Sliq8ae95PxQgHOg8Jn9l8cUrmkoOaD819BJyAJJB9YXnxKIPa8P/Wqp6btu6yL17wfjHsAu+At875kGY0fcA82ZWHSU2g/NgNDPl9+bwc7n+gzo4Df4YvRSoBi0lCw/txnawi/TgBg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ac54e04c-58b7-d0c9-2443-bb09258c8bc8@suse.com>
Date: Thu, 20 Apr 2023 14:38:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 03/17] xen/arm: implement node distance helpers for Arm
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-4-Henry.Wang@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230420112521.3272732-4-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0137.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 20.04.2023 13:25, Henry Wang wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> We will parse NUMA node distances from the device tree, so we need
> a matrix to record the distances between any two nodes we parsed.
> Accordingly, in this patch we provide the numa_set_distance API for
> device tree NUMA to set the distance between any two nodes. When
> NUMA initialization has failed, __node_distance will return
> NUMA_REMOTE_DISTANCE; this helps us avoid rolling back the distance
> matrix when NUMA initialization fails.
> 
> As both x86 and Arm have implemented __node_distance, we move
> its definition from asm/numa.h to xen/numa.h.

Nit: You mean "declaration", not "definition".

> At the same time, the
> outdated u8 return value on the x86 side has been changed to unsigned char.
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>

For the non-Arm parts (to more of the patch it isn't applicable anyway):
Acked-by: Jan Beulich <jbeulich@suse.com>

> --- a/xen/arch/arm/numa.c
> +++ b/xen/arch/arm/numa.c
> @@ -28,6 +28,11 @@ enum dt_numa_status {
>  
>  static enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
>  
> +static unsigned char __ro_after_init
> +node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
> +    { 0 }
> +};

Nit: There's no (incomplete or complete) initializer needed here, if
all you're after is having all slots set to zero.

However, looking at the code below, don't you mean to have the array
pre-set to all NUMA_NO_DISTANCE?
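To illustrate the two points outside Xen: `{ 0 }` only restates the zero-initialization that static storage gets anyway, while pre-setting every slot to NUMA_NO_DISTANCE needs an explicit fill, since a brace initializer cannot repeat a non-zero value. A minimal standalone sketch, with stand-in values for the Xen constants:

```c
#include <assert.h>
#include <string.h>

#define MAX_NUMNODES     8      /* stand-in for the Xen configuration value */
#define NUMA_NO_DISTANCE 0xFF   /* "unreachable", as used in the patch */

/* No "= { 0 }" needed: objects with static storage duration are
 * zero-initialized per the C standard. */
static unsigned char node_distance_map[MAX_NUMNODES][MAX_NUMNODES];

/* Pre-setting every slot to NUMA_NO_DISTANCE, by contrast, requires an
 * explicit fill before any distances are parsed. */
static void init_distance_map(void)
{
    memset(node_distance_map, NUMA_NO_DISTANCE, sizeof(node_distance_map));
}
```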

> @@ -48,3 +53,50 @@ int __init arch_numa_setup(const char *opt)
>  {
>      return -EINVAL;
>  }
> +
> +void __init numa_set_distance(nodeid_t from, nodeid_t to,
> +                              unsigned int distance)
> +{
> +    if ( from >= ARRAY_SIZE(node_distance_map) ||
> +         to >= ARRAY_SIZE(node_distance_map[0]) )
> +    {
> +        printk(KERN_WARNING
> +               "NUMA: invalid nodes: from=%"PRIu8" to=%"PRIu8" MAX=%"PRIu8"\n",
> +               from, to, MAX_NUMNODES);
> +        return;
> +    }
> +
> +    /* NUMA defines 0xff as an unreachable node and 0-9 are undefined */
> +    if ( distance >= NUMA_NO_DISTANCE ||
> +         (distance >= NUMA_DISTANCE_UDF_MIN &&

Compilers may warn about a comparison of an "unsigned int" being
always >= 0. I'm not sure about the usefulness of the
NUMA_DISTANCE_UDF_MIN define in the first place, so maybe it's best
to drop it and its use here?
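The warning arises because, with NUMA_DISTANCE_UDF_MIN presumably being 0, comparing an unsigned value against it is always true. One way to phrase the validity check without that define at all, sketched standalone with stand-in constants:

```c
#include <assert.h>
#include <stdbool.h>

#define NUMA_LOCAL_DISTANCE 10    /* smallest defined SLIT-style distance */
#define NUMA_NO_DISTANCE    0xFF  /* unreachable */

/* Distances 0-9 are undefined and 0xFF means unreachable, so the valid
 * range is simply [NUMA_LOCAL_DISTANCE, NUMA_NO_DISTANCE).  Phrased this
 * way there is no always-true ">= 0" comparison for compilers to warn
 * about. */
static bool distance_is_valid(unsigned int distance)
{
    return distance >= NUMA_LOCAL_DISTANCE && distance < NUMA_NO_DISTANCE;
}
```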

> +unsigned char __node_distance(nodeid_t from, nodeid_t to)
> +{
> +    /* When NUMA is off, any distance will be treated as remote. */
> +    if ( numa_disabled() )
> +        return NUMA_REMOTE_DISTANCE;

Wouldn't it make sense to have the "from == to" special case ahead of
this (rather than further down), thus yielding a sensible result for
from == to == 0? And otherwise return NUMA_NO_DISTANCE, thus having a
sensible result also for any from/to != 0?
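One way to read this suggestion, as a standalone sketch (the constants and the `numa_disabled()` stub are assumptions for illustration, not the Xen definitions):

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned char nodeid_t;

#define MAX_NUMNODES        8
#define NUMA_LOCAL_DISTANCE 10    /* distance of a node to itself */
#define NUMA_NO_DISTANCE    0xFF  /* unreachable */

static unsigned char node_distance_map[MAX_NUMNODES][MAX_NUMNODES];

static bool numa_disabled(void) { return true; }  /* stub: NUMA is off */

/* "from == to" handled first: even with NUMA disabled (or out-of-range
 * node IDs) a node's distance to itself stays NUMA_LOCAL_DISTANCE, and
 * every other pair is reported as unreachable. */
static unsigned char node_distance_sketch(nodeid_t from, nodeid_t to)
{
    if ( from == to )
        return NUMA_LOCAL_DISTANCE;

    if ( numa_disabled() || from >= MAX_NUMNODES || to >= MAX_NUMNODES )
        return NUMA_NO_DISTANCE;

    return node_distance_map[from][to];
}
```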

> +    /*
> +     * Check whether the nodes are in the matrix range.
> +     * When any node is out of range, unless the from and to nodes are
> +     * the same, we treat them as unreachable (return 0xFF).
> +     */
> +    if ( from >= ARRAY_SIZE(node_distance_map) ||
> +         to >= ARRAY_SIZE(node_distance_map[0]) )
> +        return from == to ? NUMA_LOCAL_DISTANCE : NUMA_NO_DISTANCE;
> +
> +    return node_distance_map[from][to];
> +}
> +
> +EXPORT_SYMBOL(__node_distance);

What is this needed for?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:41:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:41:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524257.815048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTbL-0007OW-TU; Thu, 20 Apr 2023 12:41:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524257.815048; Thu, 20 Apr 2023 12:41:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTbL-0007OP-Qh; Thu, 20 Apr 2023 12:41:27 +0000
Received: by outflank-mailman (input) for mailman id 524257;
 Thu, 20 Apr 2023 12:41:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fY2H=AL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppTbK-0007O7-7i
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 12:41:26 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20628.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::628])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a9973f91-df78-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 14:41:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9123.eurprd04.prod.outlook.com (2603:10a6:102:22e::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Thu, 20 Apr
 2023 12:41:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 12:41:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9973f91-df78-11ed-b21f-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <97a3bcbe-8f04-e4d6-a4ae-adc45543bc6b@suse.com>
Date: Thu, 20 Apr 2023 14:41:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 02/17] xen/arm: implement helpers to get and update
 NUMA status
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-3-Henry.Wang@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230420112521.3272732-3-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0001.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9123:EE_
X-MS-Office365-Filtering-Correlation-Id: db012417-e18a-4773-b2e6-08db419c8d33
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: db012417-e18a-4773-b2e6-08db419c8d33
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 12:41:24.4133
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EimQkhGv/0VanbFzU0HwN5WOX+iPkNiOVWM0UTcN09yGxyt7Rvt7DiUN9i8nyj2iBEHXtlnFvUf/HqXlKoRc1g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9123

On 20.04.2023 13:25, Henry Wang wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> NUMA has one global switch and one implementation-specific switch.
> For the ACPI NUMA implementation, Xen has acpi_numa, so we introduce
> device_tree_numa for the device tree NUMA implementation, and use an
> enumeration to indicate the init, off and on status.
> 
> arch_numa_disabled will report the device_tree_numa status, but for
> arch_numa_setup we have not provided boot arguments to set up
> device_tree_numa, so we just return -EINVAL in this patch.
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
> ---
> v2 -> v3:
> 1. Rename the first entry of enum dt_numa_status as DT_NUMA_DEFAULT.
> 2. Make enum dt_numa_status device_tree_numa as __ro_after_init and
>    assign it explicitly to DT_NUMA_DEFAULT.
> 3. Update the year in copyright to 2023.
> 4. Don't move the x86 numa_disabled() and make Arm's numa_disabled()
>    a static inline function for !CONFIG_NUMA.
> v1 -> v2:
> 1. Use arch_numa_disabled to replace numa_enable_with_firmware.
> 2. Introduce enumerations for device tree numa status.
> 3. Use common numa_disabled, drop Arm version numa_disabled.
> 4. Introduce arch_numa_setup for Arm.
> 5. Rename bad_srat to numa_bad.
> 6. Add numa_enable_with_firmware helper.
> 7. Add numa_disabled helper.
> 8. Refine commit message.
> ---
>  xen/arch/arm/include/asm/numa.h | 17 +++++++++++
>  xen/arch/arm/numa.c             | 50 +++++++++++++++++++++++++++++++++
>  2 files changed, 67 insertions(+)
>  create mode 100644 xen/arch/arm/numa.c
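The arrangement the commit message describes can be sketched standalone; the enum values and helper semantics below mirror the patch's description but are re-created here as assumptions:

```c
#include <assert.h>
#include <stdbool.h>

#define EINVAL 22  /* stand-in for the Xen errno value */

enum dt_numa_status {
    DT_NUMA_DEFAULT,  /* init: not yet decided */
    DT_NUMA_OFF,
    DT_NUMA_ON,
};

static enum dt_numa_status device_tree_numa = DT_NUMA_DEFAULT;

/* The implementation-specific switch: NUMA counts as disabled only
 * once device tree parsing has explicitly turned it off. */
static bool arch_numa_disabled(void)
{
    return device_tree_numa == DT_NUMA_OFF;
}

/* No boot argument is wired up yet, so any option is rejected. */
static int arch_numa_setup(const char *opt)
{
    (void)opt;
    return -EINVAL;
}
```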

While I was Cc-ed on this one, neither the diffstat nor any possible remarks
make clear whether anything is expected of me here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:43:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:43:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524261.815058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTdj-0007xq-9x; Thu, 20 Apr 2023 12:43:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524261.815058; Thu, 20 Apr 2023 12:43:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTdj-0007xj-77; Thu, 20 Apr 2023 12:43:55 +0000
Received: by outflank-mailman (input) for mailman id 524261;
 Thu, 20 Apr 2023 12:43:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9VJI=AL=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1ppTdh-0007xd-Cm
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 12:43:53 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20630.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ffa87937-df78-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 14:43:50 +0200 (CEST)
Received: from AS9PR06CA0189.eurprd06.prod.outlook.com (2603:10a6:20b:45d::7)
 by PAWPR08MB9830.eurprd08.prod.outlook.com (2603:10a6:102:2e2::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 12:43:47 +0000
Received: from AM7EUR03FT058.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:45d:cafe::34) by AS9PR06CA0189.outlook.office365.com
 (2603:10a6:20b:45d::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.24 via Frontend
 Transport; Thu, 20 Apr 2023 12:43:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT058.mail.protection.outlook.com (100.127.140.247) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.23 via Frontend Transport; Thu, 20 Apr 2023 12:43:47 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Thu, 20 Apr 2023 12:43:46 +0000
Received: from 3d1dc7003da4.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 55E81245-8C00-44B3-9743-7DA7EE059D98.1; 
 Thu, 20 Apr 2023 12:43:40 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3d1dc7003da4.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 Apr 2023 12:43:40 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAWPR08MB8815.eurprd08.prod.outlook.com (2603:10a6:102:337::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Thu, 20 Apr
 2023 12:43:37 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 12:43:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffa87937-df78-11ed-8611-37d641c3527e
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1ee8c370bf54f063
X-CR-MTA-TID: 64aa7808
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Index:
 AQHZbSQ+XccnxtxNG0WbEgpX88e2za8xDZ2AgAE1hQCAAAGYAIAABb6AgAGkXICAAD5kgIAAA+SA
Date: Thu, 20 Apr 2023 12:43:36 +0000
Message-ID: <BE516382-0E45-4D6E-8012-1D75C1F13680@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
 <F30CEF99-693A-4218-AC80-433A183DE866@arm.com>
 <3DE2B914-FA6E-49EF-8748-BB8DE4B2CC11@arm.com>
 <8DA3FECA-DEBD-479E-9E5A-57676B98ADA4@arm.com>
 <DE00F3DB-C6D9-4D90-97A8-FD964FD03099@arm.com>
 <bb6b5288-f123-8d25-3cc3-ef36164ea04c@xen.org>
In-Reply-To: <bb6b5288-f123-8d25-3cc3-ef36164ea04c@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAWPR08MB8815:EE_|AM7EUR03FT058:EE_|PAWPR08MB9830:EE_
X-MS-Office365-Filtering-Correlation-Id: 53a68c21-5ae7-4050-8bc4-08db419ce275
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <4BED82742EB7EB42AAB082F60816584C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB8815
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	113037c3-acd5-4ed7-7440-08db419cdbdb
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	kkgyFiL3EuJAurHYCaQGcCGw+3E2QXBuc1cTpxcQXIVPDaI0jaDObRTSOInaYsJeGRU80K6ep5+ysdm8WWEFWlFcUzBn1YytQvNiV8ax5TJcV1aldf3nsp5P0TQq2oWhMJbKEpO65HdsbiBR/UCBNST8zh0LFXlKnE5HQzHeeGUcVvtpWrd4JhUJz5uX6uFJgRYpLc5/tCZ8lxgtw+oPuxh9jvj0elJFol1Riy9Fb/WlKHNyKI+uPXVMqM9rsgtRKLPN8QrSGAhuGdzXg5CyISAM7UpQS/NsJQSEl0/BQOM1ZJBPdSMfz9HAJrGbfFZ4uAcLlcFXhkL65qdYmqv6PO8YDhJxPD6w/FIZEau5PCd9ExlCeLBz+IXyAExn3b6+tvKFpQyEdUJvXca9x+aaph5eUagClazvQ3rgrFhlD9ul8kxVKZU5CIXvsdkxfk5Vh+CvNGVvOpeXA3AFGpGY6ZxjDN9Zf5C/A0H1nwKX0zioNiq1InFmDSjKasHkQrV6cV60H8ZhkQgjwCpJVbb3mM33MVC8x6xBBrr2HApix3GOc3xmeupCHZGvo5yDsxFONC0FUryRfxRCTlbyigikJqve2KYKLAGgkVdJJqBXN+p/HnPl6f67ekMNE58gBRyfC1+qI8oQ1YQXIbdrpqGhE4Xv8DQzxm9y+6XOGlqIrJreE34bDGq/I9B5aePRmWNkYbf5y8tUTl0VqCDX67MMCw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(136003)(376002)(346002)(451199021)(36840700001)(46966006)(40470700004)(54906003)(83380400001)(47076005)(478600001)(2616005)(36860700001)(40480700001)(86362001)(6486002)(6506007)(6512007)(26005)(53546011)(107886003)(4326008)(316002)(70206006)(70586007)(82740400003)(186003)(336012)(6862004)(8676002)(8936002)(2906002)(81166007)(356005)(41300700001)(5660300002)(40460700003)(33656002)(36756003)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 12:43:47.2333
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 53a68c21-5ae7-4050-8bc4-08db419ce275
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9830

> On 20 Apr 2023, at 13:29, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 20/04/2023 09:46, Luca Fancellu wrote:
>>>>>>> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
>>>>>>> +{
>>>>>>> +    /*
>>>>>>> +     * Negative SVE parameter value means to use the maximum supported
>>>>>>> +     * vector length, otherwise if a positive value is provided, check if the
>>>>>>> +     * vector length is a multiple of 128 and not bigger than the maximum value
>>>>>>> +     * 2048
>>>>>>> +     */
>>>>>>> +    if ( val < 0 )
>>>>>>> +        *out = get_sys_vl_len();
>>>>>>> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
>>>>>>> +        *out = val;
>>>>>> 
>>>>>> Shouldn't you also check if it is not greater than the maximum vector length ?
>>>>> 
>>>>> I don’t understand, I am checking that the value is below or equal to SVE_VL_MAX_BITS,
>>>>> If you mean if it should be checked also against the maximum supported length by the platform,
>>>>> I think this is not the right place, the check is already in arch_sanitise_domain_config(), introduced
>>>>> in patch #2
>>>> 
>>>> If this is not the right place to check it then why checking the rest here ?
>>>> 
>>>> From a user or a developer point of view I would expect the validity of the input to be checked only
>>>> in one place.
>>>> If here is not the place for that it is ok but then i would check everything in arch_sanitise_domain_config
>>>> (multiple, min and supported) instead of doing it partly in 2 places.
>>> 
>>> Ok, given the way we encoded the value in xen_domctl_createdomain structure, we have that the value takes
>>> very little space, but a small issue is that when we encode it, we are dividing it by 128, which is fine for user params
>>> that are multiple of 128, but it’s less fine if the user passes “129”.
>>> 
>>> To overcome this issue we are checking the value when it is not already encoded. Now, thinking about it, the check
>>> "&& (val <= SVE_VL_MAX_BITS)” is not really needed, because even if the value is above, then in arch_sanitise_domain_config
>>> we will hit the top limit of the platform maximum VL.
>>> 
>>> int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>> {
>>>    unsigned int max_vcpus;
>>>    unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>>    unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>>>    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>>> 
>>>    if ( (config->flags & ~flags_optional) != flags_required )
>>>    {
>>>        dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
>>>                config->flags);
>>>        return -EINVAL;
>>>    }
>>> 
>>>    /* Check feature flags */
>>>    if ( sve_vl_bits > 0 )
>>>    {
>>>        unsigned int zcr_max_bits = get_sys_vl_len();
>>> 
>>>        if ( !zcr_max_bits )
>>>        {
>>>            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>>>            return -EINVAL;
>>>        }
>>> 
>>>        if ( sve_vl_bits > zcr_max_bits )
>>>        {
>>>            dprintk(XENLOG_INFO,
>>>                    "Requested SVE vector length (%u) > supported length (%u)\n",
>>>                    sve_vl_bits, zcr_max_bits);
>>>            return -EINVAL;
>>>        }
>>>    }
>>>   [...]
>>> 
>>> Now, I understand your point, we could check everything in sve_sanitize_vl_param(), but it would leave a problem
>>> for domains created by hypercalls if I am g
bm90IHdyb25nLg0KPj4+IA0KPj4+IFdoYXQgZG8geW91IHRoaW5rPw0KPj4gSSB0aG91Z2h0IGFi
b3V0IHRoYXQgYW5kIGFub3RoZXIgcG9zc2liaWxpdHkgaXMgdG8gc3RvcmUg4oCcc3ZlX3Zs4oCd
IGFzIHVpbnQxNl90IGluc2lkZSBzdHJ1Y3QgeGVuX2FyY2hfZG9tYWluY29uZmlnLCBhbmQNCj4+
IGNoZWNrIGl0IGluc2lkZSBhcmNoX3Nhbml0aXNlX2RvbWFpbl9jb25maWcoKSBmb3IgaXQgdG8g
YmUgbW9kIDEyOCBhbmQgbGVzcyB0aGFuIHRoZSBtYXggc3VwcG9ydGVkIFZMLCB0aGlzIHdpbGwN
Cj4+IGFsbG93IHRvIGhhdmUgYWxsIHRoZSBjaGVja3MgaW4gb25lIHBsYWNlLCB0YWtpbmcgYSBi
aXQgbW9yZSBzcGFjZSwgYW55d2F5IHdlIHdvdWxkIHRha2UgdGhlIHNwYWNlIGZyb20gdGhlIGlt
cGxpY2l0DQo+PiBwYWRkaW5nIGFzIHRoaXMgaXMgdGhlIGN1cnJlbnQgc3RhdHVzOg0KDQpIaSBK
dWxpZW4sDQoNCj4gDQo+IFNvcnJ5LCBJIGFtIGhhdmluZyB0cm91YmxlIHRvIGZvbGxvdyB0aGUg
ZGlzY3Vzc2lvbi4gSWYgeW91IGFyZSBjaGVja2luZyB0aGUgdmFsdWUgaW4gYXJjaF9zYW5pdGlz
ZV9kb21haW5fY29uZmlnKCksIHRoZW4gd2h5IGRvIHlvdSBuZWVkIHRvIHRha2UgbW9yZSBzcGFj
ZSBpbiBhcmNoX2RvbWFpbj8NCg0KWWVzIEkgYW0gY2hlY2tpbmcgdGhlIHZhbHVlIGluIGFyY2hf
c2FuaXRpc2VfZG9tYWluX2NvbmZpZywgdGhlIHZhbHVlIGNoZWNrZWQgaXMgZnJvbSBhcmNoX2Rv
bWFpbiBhbmQgaXQgaXMgdGhlIHZlY3RvciBsZW5ndGggZGl2aWRlZCBieSAxMjgsIHNvIGFuIGVu
Y29kZWQgdmFsdWUuDQoNCkJlcnRyYW5kIHdhcyBwdXp6bGVkIGJ5IHRoZSBmYWN0IHRoYXQgSSBh
bHNvIHB1dCBhIGNoZWNrIGluIHN2ZV9zYW5pdGl6ZV92bF9wYXJhbSB0byBzZWUgaWYgdGhlIHZl
Y3RvciBsZW5ndGggcGFzc2VkIGJ5IHRoZSB1c2VyIGlzIG1vZCAxMjgsIGhpcyBwb2ludCBpcyB0
aGF0IHdlIHNob3VsZCBjaGVjayBhIHZhbHVlIG9ubHkgaW4gb25lIHBsYWNlIGFuZCBub3QgaW4g
dHdvLCBhbmQgaXQgaXMgYSB2YWxpZCBwb2ludCBidXQgaW4gdGhpcyBjYXNlIHdlIGNhbuKAmXQg
cHV0IGFsbCB0aGUgY2hlY2sgaW50byBhcmNoX3Nhbml0aXNlX2RvbWFpbl9jb25maWcgYmVjYXVz
ZSB3ZSBkb27igJl0IGhhdmUgdGhlIOKAnGZ1bGzigJ0gdmFsdWUgZnJvbSBhcmNoX2RvbWFpbiwg
YW5kIHdlIGNhbuKAmXQgcHV0IGFsbCB0aGUgY2hlY2tzIGluIHN2ZV9zYW5pdGl6ZV92bF9wYXJh
bSBiZWNhdXNlIGl0IHdpbGwgbGVhdmUgb3V0IGZyb20gdGhlIGNoZWNrIGRvbWFpbnMgY3JlYXRl
ZCBieSBoeXBlciBjYWxscy4NCg0KU28gdG8gaGF2ZSBvbmx5IG9uZSBwb2ludCB3aGVyZSB0aGUg
Y2hlY2tzIGFyZSBkb25lIChtb2QgMTI4IGFuZCA8PSBIVyBzdXBwb3J0ZWQgVkwpLCB3ZSBtaWdo
dCBuZWVkIHRvIGhhdmUgYSBmdWxsIHJlc29sdXRpb24gVkwgdmFsdWUgaW4gc3RydWN0IHhlbl9h
cmNoX2RvbWFpbmNvbmZpZyAoMTYgYml0KS4NCg0KQnV0IGlmIHdlIHdhbnQgdG8gc2F2ZSBzcGFj
ZSBmb3IgdGhlIGZ1dHVyZSwgd2UgY291bGQgbGVhdmUgdGhlIGNvZGUgYXMgaXQgaXMgYW5kIHJl
bHkgb24gdGhlIGZhY3QgdGhhdCB0aGUgdG9vbHMgKG9yIFhlbikgd2hlbiBjcmVhdGluZyB0aGUg
ZG9tYWluIGNvbmZpZ3VyYXRpb24sIHdpbGwgY2hlY2sgdGhhdCB0aGUgU1ZFIFZMIHBhcmFtZXRl
ciBpcyBtb2QgMTI4Lg0KSW4gdGhpcyBsYXN0IGNhc2Ugd2hhdCBpcyBpbiBzdHJ1Y3QgeGVuX2Fy
Y2hfZG9tYWluY29uZmlnIGlzIGltcGxpY2l0bHkgbW9kIDEyOCBhbmQgb25seSB0aGUgdXBwZXIg
Ym91bmRhcnkgb2YgdGhlIG1heCBzdXBwb3J0ZWQgVkwgd2lsbCBiZSBjaGVja2VkIGJ5IFhlbiBp
bnNpZGUgYXJjaF9zYW5pdGlzZV9kb21haW5fY29uZmlnLg0KDQo+IA0KPiBDaGVlcnMsDQo+IA0K
PiAtLSANCj4gSnVsaWVuIEdyYWxsDQoNCg0K
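The truncation problem the thread keeps coming back to (a user-supplied "129" silently becoming 128 once the value is divided by 128 for storage) can be sketched as below. This is a minimal illustration, not the actual Xen code: the constants and the exact signature of sve_sanitize_vl_param are assumptions modelled on the discussion (2048 is the architectural SVE maximum, used here as a stand-in for SVE_VL_MAX_BITS).

```c
#include <assert.h>
#include <stdint.h>

#define SVE_VL_MULTIPLE_VAL 128u   /* SVE VL granule, in bits */
#define SVE_VL_MAX_BITS     2048u  /* assumed architectural maximum */

/* Encoding stores the VL divided by 128; anything that is not a
 * multiple of 128 loses information in this step. */
static inline uint8_t sve_encode_vl(unsigned int bits)
{
    return (uint8_t)(bits / SVE_VL_MULTIPLE_VAL);
}

static inline unsigned int sve_decode_vl(uint8_t enc)
{
    return enc * SVE_VL_MULTIPLE_VAL;
}

/* User-facing check on the not-yet-encoded value: reject a VL that is
 * not a multiple of 128 (e.g. 129) instead of silently truncating it.
 * The platform-maximum check can then live with the decoded value in
 * arch_sanitise_domain_config(), as discussed in the thread. */
static int sve_sanitize_vl_param(unsigned int val, uint8_t *out)
{
    if ( (val % SVE_VL_MULTIPLE_VAL) != 0 || val > SVE_VL_MAX_BITS )
        return -1;
    *out = sve_encode_vl(val);
    return 0;
}
```

With this split, 256 round-trips through encode/decode unchanged, while 129 is rejected up front rather than coming back as 128.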


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:48:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:48:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524267.815068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTiM-0000FB-1b; Thu, 20 Apr 2023 12:48:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524267.815068; Thu, 20 Apr 2023 12:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTiL-0000F4-Ug; Thu, 20 Apr 2023 12:48:41 +0000
Received: by outflank-mailman (input) for mailman id 524267;
 Thu, 20 Apr 2023 12:48:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PE5v=AL=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppTiL-0000Ey-0o
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 12:48:41 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20620.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aca756a4-df79-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 14:48:40 +0200 (CEST)
Received: from AS9PR01CA0048.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:542::22) by VE1PR08MB5568.eurprd08.prod.outlook.com
 (2603:10a6:800:1a8::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 12:48:37 +0000
Received: from AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:542:cafe::75) by AS9PR01CA0048.outlook.office365.com
 (2603:10a6:20b:542::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22 via Frontend
 Transport; Thu, 20 Apr 2023 12:48:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT003.mail.protection.outlook.com (100.127.140.227) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.25 via Frontend Transport; Thu, 20 Apr 2023 12:48:36 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Thu, 20 Apr 2023 12:48:36 +0000
Received: from 89b547ba2ec4.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 673FC421-5324-48C8-910D-D313B7C1C47A.1; 
 Thu, 20 Apr 2023 12:48:30 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 89b547ba2ec4.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 Apr 2023 12:48:30 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB6453.eurprd08.prod.outlook.com (2603:10a6:20b:31b::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 12:48:27 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.021; Thu, 20 Apr 2023
 12:48:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aca756a4-df79-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y9NZXMXyJlhdkvb/UfNppfnwRORp6eLFj1F6loDhpuY=;
 b=9hKB9TA7jiEt2DHJ4XvsbZhHTwJdRjqwVpzgGoGtoj3d5EXKAJ5A4dj6SfyYhMOD13DnFXs8vvsSwex6OLlQNp1m9mjAlF1t2SBDAYHkytjk0GBz2Tg94OVU8fZSZkhaCmiq3tL5Rh0uCt5D52Eg6CVmds1WSgltPpRp4hEFC5M=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v3 02/17] xen/arm: implement helpers to get and update
 NUMA status
Thread-Topic: [PATCH v3 02/17] xen/arm: implement helpers to get and update
 NUMA status
Thread-Index: AQHZc3rmEF3WFnogck6f7H8jIMmqw680I9wAgAAAs+A=
Date: Thu, 20 Apr 2023 12:48:26 +0000
Message-ID:
 <AS8PR08MB7991B9AC0AB673286ED1419192639@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-3-Henry.Wang@arm.com>
 <97a3bcbe-8f04-e4d6-a4ae-adc45543bc6b@suse.com>
In-Reply-To: <97a3bcbe-8f04-e4d6-a4ae-adc45543bc6b@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 2E8DB22E213E4A4E87E93A8B57DB42C3.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB6453:EE_|AM7EUR03FT003:EE_|VE1PR08MB5568:EE_
X-MS-Office365-Filtering-Correlation-Id: 5a103b63-01e0-4e84-30d0-08db419d8f21
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6453
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3b56c9be-f8d4-40c1-99aa-08db419d8916
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 12:48:36.9420
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5a103b63-01e0-4e84-30d0-08db419d8f21
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5568

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v3 02/17] xen/arm: implement helpers to get and update
> NUMA status
> > ---
> > v2 -> v3:
> > 1. Rename the first entry of enum dt_numa_status as DT_NUMA_DEFAULT.
> > 2. Make enum dt_numa_status device_tree_numa as __ro_after_init and
> >    assign it explicitly to DT_NUMA_DEFAULT.
> > 3. Update the year in copyright to 2023.
> > 4. Don't move the x86 numa_disabled() and make Arm's numa_disabled()
> >    a static inline function for !CONFIG_NUMA.
> > v1 -> v2:
> > 1. Use arch_numa_disabled to replace numa_enable_with_firmware.
> > 2. Introduce enumerations for device tree numa status.
> > 3. Use common numa_disabled, drop Arm version numa_disabled.
> > 4. Introduce arch_numa_setup for Arm.
> > 5. Rename bad_srat to numa_bad.
> > 6. Add numa_enable_with_firmware helper.
> > 7. Add numa_disabled helper.
> > 8. Refine commit message.
> > ---
> >  xen/arch/arm/include/asm/numa.h | 17 +++++++++++
> >  xen/arch/arm/numa.c             | 50 ++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 67 insertions(+)
> >  create mode 100644 xen/arch/arm/numa.c
> 
> While I was Cc-ed on this one, neither the diffstat nor any possible remarks
> make clear whether anything is expected of me here.

Sorry for the confusion. I also thought of this when sending the email but
eventually decided to add you in the Cc as you made some correct and helpful
comments in v2 [1], so I think it is a good manner to add you so that you would
know all your (correct) remarks are addressed.

Thanks for reviewing the series.

[1] https://patchwork.kernel.org/project/xen-devel/patch/20230110084930.1095203-3-wei.chen@arm.com/

Kind regards,
Henry

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 12:58:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 12:58:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524271.815078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTs0-0001kj-VP; Thu, 20 Apr 2023 12:58:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524271.815078; Thu, 20 Apr 2023 12:58:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppTs0-0001kc-SA; Thu, 20 Apr 2023 12:58:40 +0000
Received: by outflank-mailman (input) for mailman id 524271;
 Thu, 20 Apr 2023 12:58:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fY2H=AL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppTs0-0001kW-51
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 12:58:40 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2061b.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::61b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 11a37e8d-df7b-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 14:58:39 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7366.eurprd04.prod.outlook.com (2603:10a6:10:1a0::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Thu, 20 Apr
 2023 12:58:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 12:58:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11a37e8d-df7b-11ed-b21f-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QCfGgZl4X27b9QoV88wafyiR3/oTe67CNBLXpNbzCMI=;
 b=BmIgFe/LYF0w0EiIX0Am8ipfXpXnL+neoGOimri+SCA/FymmioQUpK3LtGE5/sDcKo22qa6YKOPan+7TOCyPyY87R7WLQErVbCMURwIPgs6JBDP6gG7abrXvR1XEFolnt5zqRcHcpPaSsW5pbGTFwdyDV44zSUDv2ElyZkcbItWY0aCOuAucy+e6KsvE9LtNHbUKK7XLHr+XAVVTzyANHu87xGqe3asFw5uVPl7ZRvjojo+A6E/73KeU4Brwj6eoasZHQk90QNdrQAnf5VYMvvJSCwOSpMEQgnQMFMmBWX0RUf5RjHUXCHwkNzJx9iUkk8CkztfSh5/cWuMsL/fyzA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
Date: Thu, 20 Apr 2023 14:58:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v5 1/4] xen/riscv: add VM space layout
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
 <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0173.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b4::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7366:EE_
X-MS-Office365-Filtering-Correlation-Id: b81071cd-cc4a-4252-895c-08db419ef43c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b81071cd-cc4a-4252-895c-08db419ef43c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Apr 2023 12:58:36.2524
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KquLQquu9L2IAdM2TSA3u522yDToaGWsNsQrhlK9yiQ4lCm7nAtDot+2fX7U6XOohREFBhuaPzPDznbqwDsLtg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7366

On 19.04.2023 17:42, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -4,6 +4,37 @@
>  #include <xen/const.h>
>  #include <xen/page-size.h>
>  
> +/*
> + * RISC-V64 Layout:
> + *
> + * From the riscv-privileged doc:
> + *   When mapping between narrower and wider addresses,
> + *   RISC-V zero-extends a narrower physical address to a wider size.
> + *   The mapping between 64-bit virtual addresses and the 39-bit usable
> + *   address space of Sv39 is not based on zero-extension but instead
> + *   follows an entrenched convention that allows an OS to use one or
> + *   a few of the most-significant bits of a full-size (64-bit) virtual
> + *   address to quickly distinguish user and supervisor address regions.
> + *
> + * It means that:
> + *   top VA bits are simply ignored for the purpose of translating to PA.
> + *
> + * The similar is true for other Sv{32, 39, 48, 57}.

Personally I find it odd that Sv32 is mentioned here (there's no truncation /
aliasing there, as I understand it), and that Sv39 is mentioned again (when
that's what the earlier paragraph talks about).

> + * ============================================================================
> + *    Start addr    |   End addr        |  Size  | VM area description
> + * ============================================================================
> + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | Xen
> + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | FDT
> + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | Fixmap

These are all in L2 slot 511, as I understand it, which may be worth mentioning,
especially since the top bits don't match the top bits further down in the
table (because of the aliasing).
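For reference, the L2 slot numbers used throughout this layout follow directly from VA bits 38:30 under Sv39, with the upper VA bits ignored for translation (the aliasing mentioned above). A minimal sketch, with an illustrative helper name:

```c
#include <assert.h>
#include <stdint.h>

/* Sv39 root (L2) page-table index: VA bits 38:30. Bits 63:39 are not
 * used for translation, so FFFFFFFFC0000000 and 0000007FC0000000 land
 * in the same slot. */
static unsigned int sv39_l2_slot(uint64_t va)
{
    return (unsigned int)((va >> 30) & 0x1ff);
}
```

Applied to the table: the Xen/FDT/fixmap region at FFFFFFFFC0000000 maps to slot 511, the direct map start 0000003200000000 to slot 200, and the VMAP start 0000003080000000 to slot 194, matching the slot annotations in the layout.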

> + *     .................. unused ..................

This is covering slot 510, which again may be worth mentioning.

> + * 0000003200000000 |  0000007f40000000 | 331 GB | Direct map(L2 slot: 200-509)
> + * 0000003100000000 |  0000003140000000 |  1 GB  | Frametable(L2 slot: 196-197)

1GB is, if I'm not mistaken, a single L2 slot.

Also assuming a 32-byte struct page_info (I don't think you'll get away with
less than that, when even Arm32 requires this much), there's a mismatch
between direct map and frame table size: With 4k pages, the scaling factor
would be 128 if I'm not mistaken. So perhaps you really mean 3Gb here to
cover for (slightly more than) the 331Gb of memory you mean to be able to
map?

> + * 0000003080000000 |  00000030c0000000 |  1 GB  | VMAP (L2 slot: 194-195)
> + *     .................. unused ..................

There are further unused regions between these three entries, which imo will
want making explicit (for the avoidance of doubt, if nothing else).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 13:08:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 13:08:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524275.815087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppU1g-0003HQ-UC; Thu, 20 Apr 2023 13:08:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524275.815087; Thu, 20 Apr 2023 13:08:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppU1g-0003HJ-RB; Thu, 20 Apr 2023 13:08:40 +0000
Received: by outflank-mailman (input) for mailman id 524275;
 Thu, 20 Apr 2023 13:08:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ppU1f-0003HD-EA
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 13:08:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppU1e-00063U-Np; Thu, 20 Apr 2023 13:08:38 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=[192.168.24.51]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ppU1e-0000RA-Fm; Thu, 20 Apr 2023 13:08:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <00fc2213-d315-f4e8-d4bc-bc78028dd9b2@xen.org>
Date: Thu, 20 Apr 2023 14:08:35 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
 <F30CEF99-693A-4218-AC80-433A183DE866@arm.com>
 <3DE2B914-FA6E-49EF-8748-BB8DE4B2CC11@arm.com>
 <8DA3FECA-DEBD-479E-9E5A-57676B98ADA4@arm.com>
 <DE00F3DB-C6D9-4D90-97A8-FD964FD03099@arm.com>
 <bb6b5288-f123-8d25-3cc3-ef36164ea04c@xen.org>
 <BE516382-0E45-4D6E-8012-1D75C1F13680@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <BE516382-0E45-4D6E-8012-1D75C1F13680@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

On 20/04/2023 13:43, Luca Fancellu wrote:
>> On 20 Apr 2023, at 13:29, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Luca,
>>
>> On 20/04/2023 09:46, Luca Fancellu wrote:
>>>>>>>> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
>>>>>>>> +{
>>>>>>>> +    /*
>>>>>>>> +     * Negative SVE parameter value means to use the maximum supported
>>>>>>>> +     * vector length, otherwise if a positive value is provided, check if the
>>>>>>>> +     * vector length is a multiple of 128 and not bigger than the maximum value
>>>>>>>> +     * 2048
>>>>>>>> +     */
>>>>>>>> +    if ( val < 0 )
>>>>>>>> +        *out = get_sys_vl_len();
>>>>>>>> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
>>>>>>>> +        *out = val;
>>>>>>>
>>>>>>> Shouldn't you also check if it is not greater than the maximum vector length ?
>>>>>>
>>>>>> I don’t understand, I am checking that the value is below or equal to SVE_VL_MAX_BITS,
>>>>>> If you mean if it should be checked also against the maximum supported length by the platform,
>>>>>> I think this is not the right place, the check is already in arch_sanitise_domain_config(), introduced
>>>>>> in patch #2
>>>>>
>>>>> If this is not the right place to check it then why checking the rest here ?
>>>>>
>>>>>  From a user or a developer point of view I would expect the validity of the input to be checked only
>>>>> in one place.
>>>>> If here is not the place for that it is ok but then i would check everything in arch_sanitise_domain_config
>>>>> (multiple, min and supported) instead of doing it partly in 2 places.
>>>>
>>>> Ok, given the way we encoded the value in xen_domctl_createdomain structure, we have that the value takes
>>>> very little space, but a small issue is that when we encode it, we are dividing it by 128, which is fine for user params
>>>> that are multiple of 128, but it’s less fine if the user passes “129”.
>>>>
>>>> To overcome this issue we are checking the value when it is not already encoded. Now, thinking about it, the check
>>>> "&& (val <= SVE_VL_MAX_BITS)” is not really needed, because even if the value is above, then in arch_sanitise_domain_config
>>>> we will hit the top limit of the platform maximum VL.
>>>>
>>>> int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>> {
>>>>     unsigned int max_vcpus;
>>>>     unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>>>     unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>>>>     unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>>>>
>>>>     if ( (config->flags & ~flags_optional) != flags_required )
>>>>     {
>>>>         dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
>>>>                 config->flags);
>>>>         return -EINVAL;
>>>>     }
>>>>
>>>>     /* Check feature flags */
>>>>     if ( sve_vl_bits > 0 )
>>>>     {
>>>>         unsigned int zcr_max_bits = get_sys_vl_len();
>>>>
>>>>         if ( !zcr_max_bits )
>>>>         {
>>>>             dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>>>>             return -EINVAL;
>>>>         }
>>>>
>>>>         if ( sve_vl_bits > zcr_max_bits )
>>>>         {
>>>>             dprintk(XENLOG_INFO,
>>>>                     "Requested SVE vector length (%u) > supported length (%u)\n",
>>>>                     sve_vl_bits, zcr_max_bits);
>>>>             return -EINVAL;
>>>>         }
>>>>     }
>>>>    [...]
>>>>
>>>> Now, I understand your point, we could check everything in sve_sanitize_vl_param(), but it would leave a problem
>>>> for domains created by hypercalls if I am not wrong.
>>>>
>>>> What do you think?
>>> I thought about that and another possibility is to store “sve_vl” as uint16_t inside struct xen_arch_domainconfig, and
>>> check it inside arch_sanitise_domain_config() for it to be mod 128 and less than the max supported VL, this will
>>> allow to have all the checks in one place, taking a bit more space, anyway we would take the space from the implicit
>>> padding as this is the current status:
> 
> Hi Julien,
> 
>>
>> Sorry, I am having trouble to follow the discussion. If you are checking the value in arch_sanitise_domain_config(), then why do you need to take more space in arch_domain?
> 
> Yes I am checking the value in arch_sanitise_domain_config, the value checked is from arch_domain and it is the vector length divided by 128, so an encoded value.

I think this is where the disconnect is coming from. 
arch_sanitise_domain_config() doesn't use struct arch_domain because the 
domain itself is not yet allocated. Instead, it is using 
xen_arch_domainconfig.

I care less about the increase of xen_arch_domainconfig because this is 
a one off structure.

> 
> Bertrand was puzzled by the fact that I also put a check in sve_sanitize_vl_param to see if the vector length passed by the user is mod 128; his point is that we should check a value only in one place and not in two. It is a valid point, but in this case we can’t put all the checks into arch_sanitise_domain_config because we don’t have the “full” value from arch_domain, and we can’t put all the checks in sve_sanitize_vl_param because that would leave domains created by hypercalls out of the check.
> 
> So to have only one point where the checks are done (mod 128 and <= HW supported VL), we might need to have a full resolution VL value in struct xen_arch_domainconfig (16 bit).
> 
> But if we want to save space for the future, we could leave the code as it is and rely on the fact that the tools (or Xen), when creating the domain configuration, will check that the SVE VL parameter is mod 128.
> In this last case what is in struct xen_arch_domainconfig is implicitly mod 128 and only the upper boundary of the max supported VL will be checked by Xen inside arch_sanitise_domain_config.

Before answering to the approach, can you explain why you ask the user 
to pass a value that is a multiple of 128 rather than the already 
"divided" value? Is it more natural for the user?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 13:18:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 13:18:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524281.815097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppUBU-0004tH-Vp; Thu, 20 Apr 2023 13:18:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524281.815097; Thu, 20 Apr 2023 13:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppUBU-0004tA-TB; Thu, 20 Apr 2023 13:18:48 +0000
Received: by outflank-mailman (input) for mailman id 524281;
 Thu, 20 Apr 2023 13:18:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9VJI=AL=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1ppUBT-0004t2-Ec
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 13:18:47 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20602.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::602])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e0d85e33-df7d-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 15:18:45 +0200 (CEST)
Received: from AS9PR0301CA0028.eurprd03.prod.outlook.com
 (2603:10a6:20b:468::19) by DU0PR08MB9726.eurprd08.prod.outlook.com
 (2603:10a6:10:446::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 13:18:30 +0000
Received: from AM7EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:468:cafe::99) by AS9PR0301CA0028.outlook.office365.com
 (2603:10a6:20b:468::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.50 via Frontend
 Transport; Thu, 20 Apr 2023 13:18:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT012.mail.protection.outlook.com (100.127.141.26) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.24 via Frontend Transport; Thu, 20 Apr 2023 13:18:29 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Thu, 20 Apr 2023 13:18:29 +0000
Received: from 737b45c5fc05.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D3EABE22-45E1-4B2D-AE34-DD0A1128C1D7.1; 
 Thu, 20 Apr 2023 13:18:22 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 737b45c5fc05.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 Apr 2023 13:18:22 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB6421.eurprd08.prod.outlook.com (2603:10a6:20b:31c::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 13:18:16 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 13:18:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v5 07/12] xen: enable Dom0 to use SVE feature
Thread-Index:
 AQHZbSQ+XccnxtxNG0WbEgpX88e2za8xDZ2AgAE1hQCAAAGYAIAABb6AgAGkXICAAD5kgIAAA+SAgAAHCICAAAKogA==
Date: Thu, 20 Apr 2023 13:18:15 +0000
Message-ID: <D0E8B461-DDE7-49DB-AD34-5F1632DCA614@arm.com>
References: <20230412094938.2693890-1-luca.fancellu@arm.com>
 <20230412094938.2693890-8-luca.fancellu@arm.com>
 <2833DC4A-95F0-482A-AD0F-3E942D1DA223@arm.com>
 <F30CEF99-693A-4218-AC80-433A183DE866@arm.com>
 <3DE2B914-FA6E-49EF-8748-BB8DE4B2CC11@arm.com>
 <8DA3FECA-DEBD-479E-9E5A-57676B98ADA4@arm.com>
 <DE00F3DB-C6D9-4D90-97A8-FD964FD03099@arm.com>
 <bb6b5288-f123-8d25-3cc3-ef36164ea04c@xen.org>
 <BE516382-0E45-4D6E-8012-1D75C1F13680@arm.com>
 <00fc2213-d315-f4e8-d4bc-bc78028dd9b2@xen.org>
In-Reply-To: <00fc2213-d315-f4e8-d4bc-bc78028dd9b2@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0


> On 20 Apr 2023, at 14:08, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 20/04/2023 13:43, Luca Fancellu wrote:
>>> On 20 Apr 2023, at 13:29, Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi Luca,
>>> 
>>> On 20/04/2023 09:46, Luca Fancellu wrote:
>>>>>>>>> +int __init sve_sanitize_vl_param(int val, unsigned int *out)
>>>>>>>>> +{
>>>>>>>>> +    /*
>>>>>>>>> +     * Negative SVE parameter value means to use the maximum supported
>>>>>>>>> +     * vector length, otherwise if a positive value is provided, check if the
>>>>>>>>> +     * vector length is a multiple of 128 and not bigger than the maximum value
>>>>>>>>> +     * 2048
>>>>>>>>> +     */
>>>>>>>>> +    if ( val < 0 )
>>>>>>>>> +        *out = get_sys_vl_len();
>>>>>>>>> +    else if ( ((val % SVE_VL_MULTIPLE_VAL) == 0) && (val <= SVE_VL_MAX_BITS) )
>>>>>>>>> +        *out = val;
>>>>>>>> 
>>>>>>>> Shouldn't you also check if it is not greater than the maximum vector length ?
>>>>>>> 
>>>>>>> I don’t understand, I am checking that the value is below or equal to SVE_VL_MAX_BITS,
>>>>>>> If you mean if it should be checked also against the maximum supported length by the platform,
>>>>>>> I think this is not the right place, the check is already in arch_sanitise_domain_config(), introduced
>>>>>>> in patch #2
>>>>>> 
>>>>>> If this is not the right place to check it then why checking the rest here ?
>>>>>> 
>>>>>> From a user or a developer point of view I would expect the validity of the input to be checked only
>>>>>> in one place.
>>>>>> If here is not the place for that it is ok but then i would check everything in arch_sanitise_domain_config
>>>>>> (multiple, min and supported) instead of doing it partly in 2 places.
>>>>> 
>>>>> Ok, given the way we encoded the value in xen_domctl_createdomain structure, we have that the value takes
>>>>> very little space, but a small issue is that when we encode it, we are dividing it by 128, which is fine for user params
>>>>> that are multiple of 128, but it’s less fine if the user passes “129”.
>>>>> 
>>>>> To overcome this issue we are checking the value when it is not already encoded. Now, thinking about it, the check
>>>>> "&& (val <= SVE_VL_MAX_BITS)” is not really needed, because even if the value is above, then in arch_sanitise_domain_config
>>>>> we will hit the top limit of the platform maximum VL.
>>>>> 
>>>>> int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>>>> {
>>>>>    unsigned int max_vcpus;
>>>>>    unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>>>>>    unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
>>>>>    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>>>>> 
>>>>>    if ( (config->flags & ~flags_optional) != flags_required )
>>>>>    {
>>>>>        dprintk(XENLOG_INFO, "Unsupported configuration %#x\n",
>>>>>                config->flags);
>>>>>        return -EINVAL;
>>>>>    }
>>>>> 
>>>>>    /* Check feature flags */
>>>>>    if ( sve_vl_bits > 0 )
>>>>>    {
>>>>>        unsigned int zcr_max_bits = get_sys_vl_len();
>>>>> 
>>>>>        if ( !zcr_max_bits )
>>>>>        {
>>>>>            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
>>>>>            return -EINVAL;
>>>>>        }
>>>>> 
>>>>>        if ( sve_vl_bits > zcr_max_bits )
>>>>>        {
>>>>>            dprintk(XENLOG_INFO,
>>>>>                    "Requested SVE vector length (%u) > supported length (%u)\n",
>>>>>                    sve_vl_bits, zcr_max_bits);
>>>>>            return -EINVAL;
>>>>>        }
>>>>>    }
>>>>>   [...]
>>>>> 
>>>>> Now, I understand your point, we could check everything in sve_sanitize_vl_param(), but it would leave a problem
>>>>> for domains created by hypercalls if I am not wrong.
>>>>> 
>>>>> What do you think?
>>>> I thought about that and another possibility is to store “sve_vl” as uint16_t inside struct xen_arch_domainconfig, and
>>>> check it inside arch_sanitise_domain_config() for it to be mod 128 and less than the max supported VL, this will
>>>> allow to have all the checks in one place, taking a bit more space, anyway we would take the space from the implicit
>>>> padding as this is the current status:
>> Hi Julien,
>>> 
>>> Sorry, I am having trouble to follow the discussion. If you are checking the value in arch_sanitise_domain_config(), then why do you need to take more space in arch_domain?
>> Yes I am checking the value in arch_sanitise_domain_config, the value checked is from arch_domain and it is the vector length divided by 128, so an encoded value.
> 
> I think this is where the disconnect is coming from. arch_sanitise_domain_config() doesn't use struct arch_domain because the domain itself is not yet allocated. Instead, it is using xen_arch_domainconfig.
> 
> I care less about the increase of xen_arch_domainconfig because this is a one off structure.

I’m sorry, yes, I meant struct xen_domctl_createdomain, sorry I got confused copying from this thread

> 
>> Bertrand was puzzled by the fact that I also put a check in sve_sanitize_vl_param to see if the vector length passed by the user is mod 128, his point is that we should check a value only in one place and not in two, and it is a valid point but in this case we can’t put all the check into arch_sanitise_domain_config because we don’t have the “full” value from arch_domain, and we can’t put all the checks in sve_sanitize_vl_param because it will leave out from the check domains created by hyper calls.
>> So to have only one point where the checks are done (mod 128 and <= HW supported VL), we might need to have a full resolution VL value in struct xen_arch_domainconfig (16 bit).
>> But if we want to save space for the future, we could leave the code as it is and rely on the fact that the tools (or Xen) when creating the domain configuration, will check that the SVE VL parameter is mod 128.
>> In this last case what is in struct xen_arch_domainconfig is implicitly mod 128 and only the upper boundary of the max supported VL will be checked by Xen inside arch_sanitise_domain_config.
> 
> Before answering to the approach, can you explain why you ask the user to pass a value that is a multiple of 128 rather than the already "divided" value? Is it more natural for the user?

Yes I thought is was more natural for the user to think about number of bits (for example 512) instead of an encoded value (4 in this case).

> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 13:40:20 2023
Message-ID: <1e1f3a54-7113-7929-38a1-23d97bfa4d45@linaro.org>
Date: Thu, 20 Apr 2023 15:39:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v3 00/20] block: remove aio_disable_external() API
Content-Language: en-US
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>, Paolo Bonzini <pbonzini@redhat.com>,
 Daniel P. Berrangé <berrange@redhat.com>,
 Xie Yongji <xieyongji@bytedance.com>, Juan Quintela <quintela@redhat.com>,
 qemu-block@nongnu.org, Eduardo Habkost <eduardo@habkost.net>,
 Richard Henderson <richard.henderson@linaro.org>,
 David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>,
 Fam Zheng <fam@euphon.net>, Julia Suvorova <jusual@redhat.com>,
 Ronnie Sahlberg <ronniesahlberg@gmail.com>, xen-devel@lists.xenproject.org,
 Hanna Reitz <hreitz@redhat.com>, "Dr. David Alan Gilbert"
 <dgilbert@redhat.com>, eesposit@redhat.com, Kevin Wolf <kwolf@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Aarushi Mehta <mehta.aaru20@gmail.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 "Richard W.M. Jones" <rjones@redhat.com>, Coiby Xu <Coiby.Xu@gmail.com>,
 Stefano Garzarella <sgarzare@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
From: Philippe Mathieu-Daudé <philmd@linaro.org>
In-Reply-To: <20230420113732.336620-1-stefanha@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefan,

On 20/4/23 13:37, Stefan Hajnoczi wrote:
> v3:
> - Resend full patch series. v2 was sent in the middle of a git rebase and was
>    missing patches. [Eric]
> - Apply Reviewed-by tags.

> Based-on: 087bc644b7634436ca9d52fe58ba9234e2bef026 (kevin/block-next)

It seems kevin/block-next got rebased and doesn't contain 087bc644b76.

Based on 3d1ba50c4b ("vmdk: make vmdk_is_cid_valid a coroutine_fn")
I get:

Applying: hw/qdev: introduce qdev_is_realized() helper
Applying: virtio-scsi: avoid race between unplug and transport event
Applying: virtio-scsi: stop using aio_disable_external() during unplug
Applying: block/export: only acquire AioContext once for vhost_user_server_stop()
error: patch failed: util/vhost-user-server.c:346
error: util/vhost-user-server.c: patch does not apply
Patch failed at 0004 block/export: only acquire AioContext once for vhost_user_server_stop()

Hmm, patch #4 is already merged as commit 2957dc40a2, so let's skip it:

$ git am --skip
Applying: util/vhost-user-server: rename refcount to in_flight counter
Applying: block/export: wait for vhost-user-blk requests when draining
Applying: block/export: stop using is_external in vhost-user-blk server
Applying: hw/xen: do not use aio_set_fd_handler(is_external=true) in xen_xenstore
Applying: block: add blk_in_drain() API
Applying: block: drain from main loop thread in bdrv_co_yield_to_drain()
Applying: xen-block: implement BlockDevOps->drained_begin()
Applying: hw/xen: do not set is_external=true on evtchn fds
Applying: block/export: rewrite vduse-blk drain code
Applying: block/export: don't require AioContext lock around blk_exp_ref/unref()
Applying: block/fuse: do not set is_external=true on FUSE fd
Applying: virtio: make it possible to detach host notifier from any thread
Applying: virtio-blk: implement BlockDevOps->drained_begin()
Applying: virtio-scsi: implement BlockDevOps->drained_begin()
Applying: virtio: do not set is_external=true on host notifiers
Applying: aio: remove aio_disable_external() API
error: patch failed: util/fdmon-epoll.c:131
error: util/fdmon-epoll.c: patch does not apply
Patch failed at 0020 aio: remove aio_disable_external() API

Now this clashes with commit e62da98527 ("aio-posix: fix race between 
epoll upgrade and aio_set_fd_handler()").

Indeed, after reverting both e62da98527 and 2957dc40a2, I can apply your
series.


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 13:44:23 2023
Message-ID: <f7b20c96-be06-2299-5589-11dbf23251f8@linaro.org>
Date: Thu, 20 Apr 2023 15:44:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v3 20/20] aio: remove aio_disable_external() API
Content-Language: en-US
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>, Paolo Bonzini <pbonzini@redhat.com>,
 Daniel P. Berrangé <berrange@redhat.com>,
 Xie Yongji <xieyongji@bytedance.com>, Juan Quintela <quintela@redhat.com>,
 qemu-block@nongnu.org, Eduardo Habkost <eduardo@habkost.net>,
 Richard Henderson <richard.henderson@linaro.org>,
 David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>,
 Fam Zheng <fam@euphon.net>, Julia Suvorova <jusual@redhat.com>,
 Ronnie Sahlberg <ronniesahlberg@gmail.com>, xen-devel@lists.xenproject.org,
 Hanna Reitz <hreitz@redhat.com>, "Dr. David Alan Gilbert"
 <dgilbert@redhat.com>, eesposit@redhat.com, Kevin Wolf <kwolf@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Aarushi Mehta <mehta.aaru20@gmail.com>, "Michael S. Tsirkin"
 <mst@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 "Richard W.M. Jones" <rjones@redhat.com>, Coiby Xu <Coiby.Xu@gmail.com>,
 Stefano Garzarella <sgarzare@redhat.com>
References: <20230420113732.336620-1-stefanha@redhat.com>
 <20230420113732.336620-21-stefanha@redhat.com>
From: Philippe Mathieu-Daudé <philmd@linaro.org>
In-Reply-To: <20230420113732.336620-21-stefanha@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20/4/23 13:37, Stefan Hajnoczi wrote:
> All callers now pass is_external=false to aio_set_fd_handler() and
> aio_set_event_notifier(). The aio_disable_external() API that
> temporarily disables fd handlers that were registered with is_external=true
> is therefore dead code.
> 
> Remove aio_disable_external(), aio_enable_external(), and the
> is_external arguments to aio_set_fd_handler() and
> aio_set_event_notifier().
> 
> The entire test-fdmon-epoll test is removed because its sole purpose was
> testing aio_disable_external().
> 
> Parts of this patch were generated using the following coccinelle
> (https://coccinelle.lip6.fr/) semantic patch:
> 
>    @@
>    expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
>    @@
>    - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
>    + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)
> 
>    @@
>    expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
>    @@
>    - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
>    + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)
> 
> Reviewed-by: Juan Quintela <quintela@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   include/block/aio.h           | 55 --------------------------
>   util/aio-posix.h              |  1 -
>   block.c                       |  7 ----
>   block/blkio.c                 | 15 +++----
>   block/curl.c                  | 10 ++---
>   block/export/fuse.c           |  8 ++--
>   block/export/vduse-blk.c      | 10 ++---
>   block/io.c                    |  2 -
>   block/io_uring.c              |  4 +-
>   block/iscsi.c                 |  3 +-
>   block/linux-aio.c             |  4 +-
>   block/nfs.c                   |  5 +--
>   block/nvme.c                  |  8 ++--
>   block/ssh.c                   |  4 +-
>   block/win32-aio.c             |  6 +--
>   hw/i386/kvm/xen_xenstore.c    |  2 +-
>   hw/virtio/virtio.c            |  6 +--
>   hw/xen/xen-bus.c              |  8 ++--
>   io/channel-command.c          |  6 +--
>   io/channel-file.c             |  3 +-
>   io/channel-socket.c           |  3 +-
>   migration/rdma.c              | 16 ++++----
>   tests/unit/test-aio.c         | 27 +------------
>   tests/unit/test-fdmon-epoll.c | 73 -----------------------------------
>   util/aio-posix.c              | 20 +++-------
>   util/aio-win32.c              |  8 +---
>   util/async.c                  |  3 +-
>   util/fdmon-epoll.c            | 10 -----
>   util/fdmon-io_uring.c         |  8 +---
>   util/fdmon-poll.c             |  3 +-
>   util/main-loop.c              |  7 ++--
>   util/qemu-coroutine-io.c      |  7 ++--
>   util/vhost-user-server.c      | 11 +++---
>   tests/unit/meson.build        |  3 --
>   34 files changed, 76 insertions(+), 290 deletions(-)
>   delete mode 100644 tests/unit/test-fdmon-epoll.c


> -/**
> - * aio_disable_external:
> - * @ctx: the aio context
> - *
> - * Disable the further processing of external clients.
> - */
> -static inline void aio_disable_external(AioContext *ctx)
> -{
> -    qatomic_inc(&ctx->external_disable_cnt);
> -}
> -
> -/**
> - * aio_enable_external:
> - * @ctx: the aio context
> - *
> - * Enable the processing of external clients.
> - */
> -static inline void aio_enable_external(AioContext *ctx)
> -{
> -    int old;
> -
> -    old = qatomic_fetch_dec(&ctx->external_disable_cnt);
> -    assert(old > 0);
> -    if (old == 1) {
> -        /* Kick event loop so it re-arms file descriptors */
> -        aio_notify(ctx);
> -    }
> -}
> -
> -/**
> - * aio_external_disabled:
> - * @ctx: the aio context
> - *
> - * Return true if the external clients are disabled.
> - */
> -static inline bool aio_external_disabled(AioContext *ctx)
> -{
> -    return qatomic_read(&ctx->external_disable_cnt);
> -}

Missing:

-- >8 --
diff --git a/include/block/aio.h b/include/block/aio.h
index d4ce01ea08..266be26f8e 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -224,6 +224,4 @@ struct AioContext {
      QEMUTimerListGroup tlg;

-    int external_disable_cnt;
-
      /* Number of AioHandlers without .io_poll() */
      int poll_disable_cnt;
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index d9d3807062..5c89169e46 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -436,5 +436,4 @@ static void test_graph_change_drain_all(void)
      g_assert_cmpint(bs_b->quiesce_counter, ==, 0);
      g_assert_cmpint(b_s->drain_count, ==, 0);
-    g_assert_cmpint(qemu_get_aio_context()->external_disable_cnt, ==, 0);

      bdrv_unref(bs_b);
---

Once cleaned:
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 14:32:18 2023
Date: Thu, 20 Apr 2023 16:31:42 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v6] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Message-ID: <ZEFMzu8k5wlYt2aD@Air-de-Roger>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <116bb94f-955c-4c46-f16a-d7a5e1cc72b5@suse.com>
 <ZD6AejXJxQxAyrx1@Air-de-Roger>
 <c736271f-96ad-dfdb-48ae-b8e9cc002d9b@suse.com>
 <ZEAO8e9iTjmi86fr@Air-de-Roger>
 <7e3246c7-6de2-b3eb-477f-99ef9bd1b33b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7e3246c7-6de2-b3eb-477f-99ef9bd1b33b@suse.com>

On Thu, Apr 20, 2023 at 10:31:08AM +0200, Jan Beulich wrote:
> On 19.04.2023 17:55, Roger Pau Monné wrote:
> > On Wed, Apr 19, 2023 at 03:58:10PM +0200, Jan Beulich wrote:
> >> @@ -1342,6 +1349,17 @@ unsigned int rtc_guest_read(unsigned int
> >>           * underlying hardware would permit doing so.
> >>           */
> >>          data = currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(0)));
> >> +
> >> +        /*
> >> +         * When there's (supposedly) no RTC/CMOS, we don't intercept the other
> >> +         * ports. While reading the index register isn't normally possible,
> >> +         * play safe and return back whatever can be read (just in case a value
> >> +         * written through an alias would be attempted to be read back here).
> >> +         */
> >> +        if ( port == RTC_PORT(0) &&
> >> +             (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) &&
> >> +             ioports_access_permitted(currd, port, port) )
> >> +            data = inb(port) & 0x7f;
> > 
> > Do we really need to mask the high bit here?  We don't allow setting
> > that bit in the first place.
> 
> I think it's more consistent to mask it off, specifically with the code
> visible in context right above the insertion. The doc isn't really clear
> about readability of that bit: On one hand it says R/W for port 0x70 in
> the NMI_EN section, yet otoh in the RTC section it says "Note that port
> 70h is not directly readable. The only way to read this register is
> through Alt Access mode." (I think the NMI_EN section is more trustworthy,
> but still.) Plus if we were to ever make use of the NMI disable, we
> wouldn't want Dom0 to see the bit set.

I guess so; in the end Xen itself doesn't use the bit so far.  Maybe
at some point we would want to expose the value of the bit to dom0 if
Xen starts using it (more than anything for informative purposes, if
NMIs are disabled).
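For illustration, a minimal host-runnable sketch of the masking being discussed: bit 7 of the RTC index port (0x70) is the NMI-disable latch, which the hunk above strips with `& 0x7f` before handing the value back. The helper name here is made up, not taken from the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Bit 7 of the RTC index port (0x70) is the NMI-disable latch. */
#define RTC_NMI_DISABLE 0x80u

/* Hypothetical helper: sanitize a raw index-port read before
 * returning it to the guest, i.e. the "inb(port) & 0x7f" above. */
static uint8_t sanitize_cmos_idx(uint8_t raw)
{
    return raw & (uint8_t)~RTC_NMI_DISABLE; /* same as raw & 0x7f */
}
```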

Feel free to fold the diff into the existing patch and keep the RB.

I guess you will also add something to the commit message about the
special handling of the NMI enable bit even when the RTC/CMOS is not
present?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 14:36:37 2023
Message-ID: <67d8574f-2e0d-4eb6-19aa-67fe7645e35a@suse.com>
Date: Thu, 20 Apr 2023 16:36:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v5 2/4] xen/riscv: introduce setup_initial_pages
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
 <5b27693bcdf6d64381314aeef72cfe03dee8d73a.1681918194.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <5b27693bcdf6d64381314aeef72cfe03dee8d73a.1681918194.git.oleksii.kurochko@gmail.com>

On 19.04.2023 17:42, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/page-bits.h
> +++ b/xen/arch/riscv/include/asm/page-bits.h
> @@ -1,6 +1,16 @@
>  #ifndef __RISCV_PAGE_BITS_H__
>  #define __RISCV_PAGE_BITS_H__
>  
> +#ifdef CONFIG_RISCV_64
> +#define PAGETABLE_ORDER         (9)
> +#else /* CONFIG_RISCV_32 */
> +#define PAGETABLE_ORDER         (10)
> +#endif
> +
> +#define PAGETABLE_ENTRIES       (1 << PAGETABLE_ORDER)
> +
> +#define PTE_PPN_SHIFT           10
> +
>  #define PAGE_SHIFT              12 /* 4 KiB Pages */
>  #define PADDR_BITS              56 /* 44-bit PPN */

Personally I think these two would better remain at the top. But maybe
that's just me ...
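The two orders fall straight out of the page size and the PTE width: a 4 KiB page holds 512 8-byte PTEs on RV64 but 1024 4-byte PTEs on RV32. A tiny host-runnable sketch (helper name invented here):

```c
#include <assert.h>

#define PAGE_SIZE 4096u

/* Entries per page-table page = PAGE_SIZE / sizeof(pte), so the
 * per-level order is log2 of that: 9 on RV64 (8-byte PTEs),
 * 10 on RV32 (4-byte PTEs). */
static unsigned int pt_order(unsigned int pte_size)
{
    unsigned int entries = PAGE_SIZE / pte_size, order = 0;

    while ( (1u << order) < entries )
        order++;

    return order;
}
```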

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/page.h
> @@ -0,0 +1,63 @@
> +#ifndef _ASM_RISCV_PAGE_H
> +#define _ASM_RISCV_PAGE_H
> +
> +#include <xen/const.h>
> +#include <xen/types.h>
> +
> +#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
> +
> +#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
> +#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
> +#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
> +#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
> +#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
> +
> +#define PTE_VALID                   BIT(0, UL)
> +#define PTE_READABLE                BIT(1, UL)
> +#define PTE_WRITABLE                BIT(2, UL)
> +#define PTE_EXECUTABLE              BIT(3, UL)
> +#define PTE_USER                    BIT(4, UL)
> +#define PTE_GLOBAL                  BIT(5, UL)
> +#define PTE_ACCESSED                BIT(6, UL)
> +#define PTE_DIRTY                   BIT(7, UL)
> +#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
> +
> +#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
> +#define PTE_TABLE                   (PTE_VALID)
> +
> +/* Calculate the offsets into the pagetables for a given VA */
> +#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
> +
> +#define pt_index(lvl, va) pt_linear_offset(lvl, (va) & XEN_PT_LEVEL_MASK(lvl))
> +
> +/* Page Table entry */
> +typedef struct {
> +#ifdef CONFIG_RISCV_64
> +    uint64_t pte;
> +#else
> +    uint32_t pte;
> +#endif
> +} pte_t;
> +
> +#define pte_to_addr(x) (((x) >> PTE_PPN_SHIFT) << PAGE_SHIFT)

This will be at risk of overflow for RV32 without a cast to paddr_t (which
I expect would be a 64-bit type on RV32 just like it is on RV64).
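A host-runnable sketch of the overflow being described, with `uint32_t` standing in for RV32's 32-bit `unsigned long` and `uint64_t` for a 64-bit `paddr_t` (helper names are invented for the demonstration):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT    12
#define PTE_PPN_SHIFT 10

/* pte_to_addr() as in the patch, but in 32-bit arithmetic:
 * shifting the PPN left by PAGE_SHIFT can exceed 32 bits and
 * the result wraps (unsigned arithmetic is modulo 2^32). */
static uint32_t pte_to_addr_32(uint32_t pte)
{
    return (uint32_t)((pte >> PTE_PPN_SHIFT) << PAGE_SHIFT);
}

/* Casting to a 64-bit paddr_t first keeps the full address. */
static uint64_t pte_to_addr_paddr(uint32_t pte)
{
    return (uint64_t)(pte >> PTE_PPN_SHIFT) << PAGE_SHIFT;
}
```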

> +#define addr_to_pte(x) (((x) >> PAGE_SHIFT) << PTE_PPN_SHIFT)
> +
> +static inline pte_t paddr_to_pte(const paddr_t paddr,
> +                                 const unsigned long permissions)
> +{
> +    unsigned long tmp = addr_to_pte(paddr);
> +    return (pte_t) { .pte = tmp | permissions };
> +}
> +
> +static inline paddr_t pte_to_paddr(const pte_t pte)
> +{
> +    return pte_to_addr(pte.pte);
> +}
> +
> +static inline bool pte_is_valid(const pte_t p)
> +{
> +    return p.pte & PTE_VALID;
> +}

A remark on all of the "const" here: It's fine if you want to keep them,
but generally we care about const-correctness only for pointed-to types.
Scalar and compound parameter values are owned by called function anyway,
so all the "const" achieves is that the function can't alter its own
argument(s).

> --- /dev/null
> +++ b/xen/arch/riscv/mm.c
> @@ -0,0 +1,319 @@
> +#include <xen/compiler.h>
> +#include <xen/init.h>
> +#include <xen/kernel.h>
> +
> +#include <asm/early_printk.h>
> +#include <asm/config.h>
> +#include <asm/csr.h>
> +#include <asm/mm.h>
> +#include <asm/page.h>
> +#include <asm/processor.h>
> +
> +struct mmu_desc {
> +    unsigned long num_levels;

Isn't "unsigned int" sufficient here?

> +/*
> + * It is expected that Xen won't be more then 2 MB.
> + * The check in xen.lds.S guarantees that.
> + * At least 4 page (in case when Sv48 or Sv57 are used )
> + * tables are needed to cover 2 MB. One for each page level
> + * table with PAGE_SIZE = 4 Kb
> + *
> + * One L0 page table can cover 2 MB
> + * (512 entries of one page table * PAGE_SIZE).
> + *
> + * It might be needed one more page table in case when
> + * Xen load address isn't 2 MB aligned.
> + *
> + */
> +#define PGTBL_INITIAL_COUNT     (5)

On x86 we have a CONFIG_PAGING_LEVELS constant. If you had something
like this as well, this 5 would better match the comment as
((CONFIG_PAGING_LEVELS - 1) + 1), avoiding making space for the two
levels you won't need as long as only Sv39 is really meant to be used.
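A sketch of the suggested shape; CONFIG_PAGING_LEVELS is hypothetical here (RISC-V doesn't have it yet), set to 3 as it would be for Sv39:

```c
#include <assert.h>

/* Hypothetical, following the x86 precedent: 3 for Sv39. */
#define CONFIG_PAGING_LEVELS 3

/* One non-root table per level, plus one spare in case the load
 * address isn't aligned to a level-1 superpage boundary, instead
 * of a hard-coded 5 sized for Sv48/Sv57. */
#define PGTBL_INITIAL_COUNT  ((CONFIG_PAGING_LEVELS - 1) + 1)
```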

> +#define PGTBL_ENTRY_AMOUNT  (PAGE_SIZE / sizeof(pte_t))
> +
> +pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
> +stage1_pgtbl_root[PGTBL_ENTRY_AMOUNT];
> +
> +pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
> +stage1_pgtbl_nonroot[PGTBL_INITIAL_COUNT * PGTBL_ENTRY_AMOUNT];
> 
> +#define HANDLE_PGTBL(curr_lvl_num)                                          \
> +    index = pt_index(curr_lvl_num, page_addr);                              \
> +    if ( pte_is_valid(pgtbl[index]) )                                       \
> +    {                                                                       \
> +        /* Find L{ 0-3 } table */                                           \
> +        pgtbl = (pte_t *)pte_to_paddr(pgtbl[index]);                        \
> +    }                                                                       \
> +    else                                                                    \
> +    {                                                                       \
> +        /* Allocate new L{0-3} page table */                                \
> +        if ( mmu_desc->pgtbl_count == PGTBL_INITIAL_COUNT )                 \
> +        {                                                                   \
> +            early_printk("(XEN) No initial table available\n");             \
> +            /* panic(), BUG() or ASSERT() aren't ready now. */              \
> +            die();                                                          \
> +        }                                                                   \
> +        mmu_desc->pgtbl_count++;                                            \
> +        pgtbl[index] = paddr_to_pte((unsigned long)mmu_desc->next_pgtbl,    \
> +                                    PTE_VALID);                             \
> +        pgtbl = mmu_desc->next_pgtbl;                                       \
> +        mmu_desc->next_pgtbl += PGTBL_ENTRY_AMOUNT;                         \
> +    }
> +
> +static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
> +                                         unsigned long map_start,
> +                                         unsigned long map_end,
> +                                         unsigned long pa_start,
> +                                         unsigned long permissions)

Wouldn't the last one more sensibly be "unsigned int"?

> +{
> +    unsigned int index;
> +    pte_t *pgtbl;
> +    unsigned long page_addr;
> +    pte_t pte_to_be_written;
> +    unsigned long paddr;
> +    unsigned long tmp_permissions;
> +
> +    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
> +    {
> +        early_printk("(XEN) Xen should be loaded at 4k boundary\n");
> +        die();
> +    }
> +
> +    if ( map_start & ~XEN_PT_LEVEL_MAP_MASK(0) ||
> +         pa_start & ~XEN_PT_LEVEL_MAP_MASK(0) )
> +    {
> +        early_printk("(XEN) map and pa start addresses should be aligned\n");
> +        /* panic(), BUG() or ASSERT() aren't ready now. */
> +        die();
> +    }
> +
> +    page_addr = map_start;
> +    while ( page_addr < map_end )
> +    {
> +        pgtbl = mmu_desc->pgtbl_base;
> +
> +        switch (mmu_desc->num_levels)

Nit: Style (missing blanks).

> +        {
> +            case 4: /* Level 3 */

Nit: Indentation of case labels matches that of the opening brace in our
style.

> +                HANDLE_PGTBL(3);
> +            case 3: /* Level 2 */
> +                HANDLE_PGTBL(2);
> +            case 2: /* Level 1 */
> +                HANDLE_PGTBL(1);
> +            case 1: /* Level 0 */
> +                index = pt_index(0, page_addr);
> +                paddr = (page_addr - map_start) + pa_start;
> +
> +                tmp_permissions = permissions;
> +
> +                if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
> +                     is_kernel_inittext(LINK_TO_LOAD(page_addr)) )
> +                    tmp_permissions =
> +                        PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
> +
> +                if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
> +                    tmp_permissions = PTE_READABLE | PTE_VALID;
> +
> +                pte_to_be_written = paddr_to_pte(paddr, tmp_permissions);
> +
> +                if ( !pte_is_valid(pgtbl[index]) )
> +                    pgtbl[index] = pte_to_be_written;
> +                else
> +                {
> +                    /*
> +                    * get an adresses of current pte and that one to
> +                    * be written without permission flags
> +                    */
> +                    unsigned long curr_pte =
> +                        pgtbl[index].pte & ~((1 << PTE_PPN_SHIFT) - 1);
> +
> +                    pte_to_be_written.pte &= ~((1 << PTE_PPN_SHIFT) - 1);

If you mean to only compare addresses, why don't you use pte_to_paddr()?
Question though is whether it's correct to ignore permission differences.
I'd expect you only want to mask off PTE_ACCESSED and PTE_DIRTY.
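A sketch of such a comparison (a hypothetical helper, not the patch's code): only the hardware-managed A/D flags are ignored, so genuine permission or address mismatches still trip the check.

```c
#include <assert.h>

#define PTE_ACCESSED (1ul << 6)
#define PTE_DIRTY    (1ul << 7)

/* Treat two PTEs as conflicting only if they differ in bits
 * other than the hardware-managed accessed/dirty flags. */
static int pte_conflicts(unsigned long pte_old, unsigned long pte_new)
{
    unsigned long mask = ~(PTE_ACCESSED | PTE_DIRTY);

    return (pte_old & mask) != (pte_new & mask);
}
```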

> +                    if ( curr_pte != pte_to_be_written.pte )
> +                    {
> +                        early_printk("PTE that is intended to write isn't the"
> +                                    "same that the once are overwriting\n");

Nit: One-off indentation. As to the message text - I take it that's
temporary only anyway, and hence there's little point thinking about
improving it?

> +                        /* panic(), <asm/bug.h> aren't ready now. */
> +                        die();
> +                    }
> +                }
> +        }
> +
> +        /* Point to next page */
> +        page_addr += XEN_PT_LEVEL_SIZE(0);

Seeing this as well as the loop heading - maybe more suitable as a
for(;;) loop?
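The loop shape being suggested, sketched with a stand-in constant for XEN_PT_LEVEL_SIZE(0) and a counter in place of the real body:

```c
#include <assert.h>

#define LEVEL0_SIZE 4096ul /* stand-in for XEN_PT_LEVEL_SIZE(0) */

/* The while loop plus trailing increment, folded into a for loop
 * so initialisation, condition and step sit on one line. */
static unsigned int count_pages(unsigned long map_start,
                                unsigned long map_end)
{
    unsigned long page_addr;
    unsigned int n = 0;

    for ( page_addr = map_start; page_addr < map_end;
          page_addr += LEVEL0_SIZE )
        n++; /* real body: walk the levels, install the leaf PTE */

    return n;
}
```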

> +    }
> +}

Since HANDLE_PGTBL() is strictly for use above only, I think you'd better
#undef it here.

> +static void __init calc_pgtbl_lvls_num(struct  mmu_desc *mmu_desc)
> +{
> +    unsigned long satp_mode = RV_STAGE1_MODE;
> +
> +    /* Number of page table levels */
> +    switch (satp_mode)
> +    {
> +        case SATP_MODE_SV32:
> +            mmu_desc->num_levels = 2;
> +            break;
> +        case SATP_MODE_SV39:
> +            mmu_desc->num_levels = 3;
> +            break;
> +        case SATP_MODE_SV48:
> +            mmu_desc->num_levels = 4;
> +            break;
> +        default:
> +            early_printk("(XEN) Unsupported SATP_MODE\n");
> +            die();
> +    }
> +}
> +
> +static bool __init check_pgtbl_mode_support(struct mmu_desc *mmu_desc,
> +                                            unsigned long load_start,
> +                                            unsigned long satp_mode)
> +{
> +    bool is_mode_supported = false;
> +    unsigned int index;
> +    unsigned int page_table_level = (mmu_desc->num_levels - 1);
> +    unsigned level_map_mask = XEN_PT_LEVEL_MAP_MASK(page_table_level);
> +
> +    unsigned long aligned_load_start = load_start & level_map_mask;
> +    unsigned long aligned_page_size = XEN_PT_LEVEL_SIZE(page_table_level);
> +    unsigned long xen_size = (unsigned long)(_end - start);
> +
> +    if ( (load_start + xen_size) > (aligned_load_start + aligned_page_size) )
> +    {
> +        early_printk("please place Xen to be in range of PAGE_SIZE "
> +                     "where PAGE_SIZE is XEN_PT_LEVEL_SIZE( {L3 | L2 | L1} ) "
> +                     "depending on expected SATP_MODE \n"
> +                     "XEN_PT_LEVEL_SIZE is defined in <asm/page.h>\n");
> +        die();
> +    }
> +
> +    index = pt_index(page_table_level, aligned_load_start);
> +    stage1_pgtbl_root[index] = paddr_to_pte(aligned_load_start,
> +                                            PTE_LEAF_DEFAULT | PTE_EXECUTABLE);
> +
> +    asm volatile("sfence.vma");

Nit (here and several times again below): Style (missing three blanks, as
"asm" is a keyword).

> +    csr_write(CSR_SATP,
> +              ((unsigned long)stage1_pgtbl_root >> PAGE_SHIFT) |
> +              satp_mode << SATP_MODE_SHIFT);
> +
> +    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == satp_mode )
> +        is_mode_supported = true;
> +
> +    /* Clean MMU root page table and disable MMU */
> +    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
> +
> +    csr_write(CSR_SATP, 0);
> +    asm volatile("sfence.vma");

I guess what you do in this function could do with some more comments.
Looks like you're briefly enabling the MMU to check that what you wrote
to SATP you can also read back. (Isn't there a register reporting
whether the feature is available?)

If so, I think you cannot clear the stage1_pgtbl_root[] slot before
you've disabled the MMU again.

(As a minor aspect, I'd like to encourage you to write plain zero just
as 0, not as 0x0, unless used in contexts where other hex numbers nearby
make this desirable.)

> +    return is_mode_supported;
> +}
> +
> +/*
> + * setup_initial_pagetables:
> + *
> + * Build the page tables for Xen that map the following:
> + *  1. Calculate page table's level numbers.
> + *  2. Init mmu description structure.
> + *  3. Check that linker addresses range doesn't overlap
> + *     with load addresses range
> + *  4. Map all linker addresses and load addresses ( it shouldn't
> + *     be 1:1 mapped and will be 1:1 mapped only in case if
> + *     linker address is equal to load address ) with
> + *     RW permissions by default.
> + *  5. Setup proper PTE permissions for each section.
> + */
> +void __init setup_initial_pagetables(void)
> +{
> +    struct mmu_desc mmu_desc = { 0, 0, NULL, 0 };

Just {} ought to do as initializer, but if you want to spell things out,
then please use 0 / NULL consistently for integral / pointer type fields.

> +    /*
> +     * Access to _{stard, end } is always PC-relative

I guess you mean _start. For just a leading underscore I also don't think
using this braced form is useful.

> +     * thereby when access them we will get load adresses
> +     * of start and end of Xen
> +     * To get linker addresses LOAD_TO_LINK() is required
> +     * to use
> +     */
> +    unsigned long load_start    = (unsigned long)_start;
> +    unsigned long load_end      = (unsigned long)_end;
> +    unsigned long linker_start  = LOAD_TO_LINK(load_start);
> +    unsigned long linker_end    = LOAD_TO_LINK(load_end);
> +
> +    if ( (linker_start != load_start) &&
> +         (linker_start <= load_end) && (load_start <= linker_end) ) {

Nit: Style (brace placement).

> +        early_printk("(XEN) linker and load address ranges overlap\n");
> +        die();
> +    }
> +
> +    calc_pgtbl_lvls_num(&mmu_desc);
> +
> +    if ( !check_pgtbl_mode_support(&mmu_desc, load_start, RV_STAGE1_MODE) )
> +    {
> +        early_printk("requested MMU mode isn't supported by CPU\n"
> +                     "Please choose different in <asm/config.h>\n");
> +        die();
> +    }
> +
> +    mmu_desc.pgtbl_base = stage1_pgtbl_root;
> +    mmu_desc.next_pgtbl = stage1_pgtbl_nonroot;
> +
> +    setup_initial_mapping(&mmu_desc,
> +                          linker_start,
> +                          linker_end,
> +                          load_start,
> +                          PTE_LEAF_DEFAULT);
> +}
> +
> +void __init noinline enable_mmu()
> +{
> +    /*
> +     * Calculate a linker time address of the mmu_is_enabled
> +     * label and update CSR_STVEC with it.
> +     * MMU is configured in a way where linker addresses are mapped
> +     * on load addresses so in a case when linker addresses are not equal
> +     * to load addresses, after MMU is enabled, it will cause
> +     * an exception and jump to linker time addresses.
> +     * Otherwise if load addresses are equal to linker addresses the code
> +     * after mmu_is_enabled label will be executed without exception.
> +     */
> +    csr_write(CSR_STVEC, LOAD_TO_LINK((unsigned long)&&mmu_is_enabled));
> +
> +    /* Ensure page table writes precede loading the SATP */
> +    asm volatile("sfence.vma");
> +
> +    /* Enable the MMU and load the new pagetable for Xen */
> +    csr_write(CSR_SATP,
> +              ((unsigned long)stage1_pgtbl_root >> PAGE_SHIFT) |

Please try to avoid open-coding of existing constructs: Here you mean
either PFN_DOWN() or paddr_to_pfn() (you see, we have even two). (As I
notice I did overlook at least one similar earlier instance.)
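For reference, PFN_DOWN() (as in Xen's xen/pfn.h) is just the shift being open-coded here, so the SATP write could use it directly:

```c
#include <assert.h>

#define PAGE_SHIFT 12

/* As defined in xen/include/xen/pfn.h. */
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

/* The SATP write then becomes, e.g.:
 *   csr_write(CSR_SATP, PFN_DOWN((unsigned long)stage1_pgtbl_root) |
 *             RV_STAGE1_MODE << SATP_MODE_SHIFT);
 */
```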

> +              RV_STAGE1_MODE << SATP_MODE_SHIFT);
> +
> +    asm volatile(".align 2");
> +      mmu_is_enabled:

How in the world is one to spot this label? Yes, it shouldn't be entirely
unindented. But it also should be indented less than the surrounding code
(with the exception of switch() statement case labels, where a non-case
label might be indented the same as the case ones). Our rule of thumb is to
indent such labels by a single space.

> +    /*
> +     * Stack should be re-inited as:
> +     * 1. Right now an address of the stack is relative to load time
> +     *    addresses what will cause an issue in case of load start address
> +     *    isn't equal to linker start address.
> +     * 2. Addresses in stack are all load time relative which can be an
> +     *    issue in case when load start address isn't equal to linker
> +     *    start address.
> +     */
> +    asm volatile ("mv sp, %0" : : "r"((unsigned long)cpu0_boot_stack + STACK_SIZE));

Nit: Style (overly long line).

What's worse - I don't think it is permitted to alter sp in the middle of
a function. The compiler may maintain local variables on the stack which
don't correspond to any programmer specified ones, including pointer ones
which point into the stack frame. This is specifically why both x86 and
Arm have switch_stack_and_jump() macros.
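
For reference, the construct being referred to amounts to something like the
following (a hypothetical RISC-V rendition, not Xen's actual x86/Arm macros,
which differ in detail):

```c
/*
 * Switch sp and transfer control in one asm statement, so the
 * compiler never executes code on the old stack after the switch.
 * "tail" is the RISC-V far tail-call pseudo-instruction.
 */
#define switch_stack_and_jump(stack, fn)                    \
    do {                                                    \
        asm volatile ( "mv sp, %0\n\t"                      \
                       "tail " #fn                          \
                       :: "r" (stack) : "memory" );         \
        __builtin_unreachable();                            \
    } while ( 0 )
```

With such a macro, enable_mmu() would end in something like
switch_stack_and_jump(cpu0_boot_stack + STACK_SIZE, cont_after_mmu_is_enabled)
rather than open-coding the sp move mid-function.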

> +    /*
> +     * We can't return to the caller because the stack was reseted
> +     * and it may have stash some variable on the stack.
> +     * Jump to a brand new function as the stack was reseted
> +    */

Nit: Style (indentation).

> +    cont_after_mmu_is_enabled();
> +}
> +
> diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
> index 8887f0cbd4..b3309d902c 100644
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,4 +1,5 @@
>  #include <asm/asm.h>
> +#include <asm/asm-offsets.h>
>  #include <asm/riscv_encoding.h>
>  
>          .section .text.header, "ax", %progbits
> @@ -32,3 +33,4 @@ ENTRY(start)
>          add     sp, sp, t0
>  
>          tail    start_xen
> +

???

> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -136,6 +136,7 @@ SECTIONS
>      . = ALIGN(POINTER_ALIGN);
>      __init_end = .;
>  
> +    . = ALIGN(PAGE_SIZE);
>      .bss : {                     /* BSS */
>          __bss_start = .;
>          *(.bss.stack_aligned)

Why do you need this? You properly use __aligned(PAGE_SIZE) for the
page tables you define, and PAGE_SIZE wouldn't be enough here anyway
if STACK_SIZE > PAGE_SIZE (as .bss.stack_aligned comes first). The
only time you'd need such an ALIGN() is if the following label
(__bss_start in this case) needed to be aligned at a certain
boundary. (I'm a little puzzled though that __bss_start isn't
[immediately] preceded by ". = ALIGN(POINTER_ALIGN);" - didn't .bss
clearing rely on such alignment?)
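
A sketch of the arrangement described above (illustrative only, not the
actual xen.lds.S):

```
    . = ALIGN(POINTER_ALIGN);    /* what .bss clearing relies on */
    .bss : {                     /* BSS */
        __bss_start = .;
        /* carries its own, larger alignment via its definition */
        *(.bss.stack_aligned)
        ...
    }
```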

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 14:52:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 14:52:06 +0000
Date: Thu, 20 Apr 2023 07:51:49 -0700
In-Reply-To: <87y1mm3iqz.ffs@tglx>
Mime-Version: 1.0
References: <87r0sh4m7a.ffs@tglx> <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de>
 <87a5z443g2.ffs@tglx> <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
 <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com> <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com>
 <871qkf3qek.ffs@tglx> <26d385da-2ede-5d73-2959-84c8f7d89e03@citrix.com> <87y1mm3iqz.ffs@tglx>
Message-ID: <ZEFRhXua6Jxvit1R@google.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
From: Sean Christopherson <seanjc@google.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Menzel <pmenzel@molgen.mpg.de>, 
	linux-kernel@vger.kernel.org, x86@kernel.org, 
	David Woodhouse <dwmw2@infradead.org>, Brian Gerst <brgerst@gmail.com>, 
	Arjan van de Veen <arjan@linux.intel.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	Paul McKenney <paulmck@kernel.org>, Tom Lendacky <thomas.lendacky@amd.com>, 
	Oleksandr Natalenko <oleksandr@natalenko.name>, "Guilherme G. Piccoli" <gpiccoli@igalia.com>, 
	Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, 
	Usama Arif <usama.arif@bytedance.com>, 
	"=?iso-8859-1?Q?J=FCrgen_Gro=DF?=" <jgross@suse.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>, 
	Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>, 
	linux-csky@vger.kernel.org, Thomas Bogendoerfer <tsbogend@alpha.franken.de>, 
	linux-mips@vger.kernel.org, 
	"James E. J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	linux-parisc@vger.kernel.org, Paul Walmsley <paul.walmsley@sifive.com>, 
	Palmer Dabbelt <palmer@dabbelt.com>, linux-riscv@lists.infradead.org, 
	Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
Content-Type: text/plain; charset="us-ascii"

On Thu, Apr 20, 2023, Thomas Gleixner wrote:
> On Thu, Apr 20 2023 at 10:23, Andrew Cooper wrote:
> > On 20/04/2023 9:32 am, Thomas Gleixner wrote:
> > > On Wed, Apr 19, 2023, Andrew Cooper wrote:
> > > > This was changed in x2APIC, which made the x2APIC_ID immutable.
>
> >> I'm pondering to simply deny parallel mode if x2APIC is not there.
> >
> > I'm not sure if that will help much.
> 
> Spoilsport.

LOL, well let me pile on then.  x2APIC IDs aren't immutable on AMD hardware.  The
ID is read-only when the CPU is in x2APIC mode, but any changes made to the ID
while the CPU is in xAPIC mode survive the transition to x2APIC.  From the APM:

  A value previously written by software to the 8-bit APIC_ID register (MMIO offset
  30h) is converted by hardware into the appropriate format and reflected into the
  32-bit x2APIC_ID register (MSR 802h).

FWIW, my observations from testing on bare metal are that the xAPIC ID is effectively
read-only (writes are dropped) on Intel CPUs as far back as Haswell, while the above
behavior described in the APM holds true on at least Rome and Milan.

My guess is that Intel's uArch specific behavior of the xAPIC ID being read-only
was introduced when x2APIC came along, but I didn't test farther back than Haswell.


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 14:55:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 14:55:51 +0000
Message-ID: <3f79d18f-8289-87df-1bbb-336fac5f381e@suse.com>
Date: Thu, 20 Apr 2023 16:55:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v6] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <116bb94f-955c-4c46-f16a-d7a5e1cc72b5@suse.com>
 <ZD6AejXJxQxAyrx1@Air-de-Roger>
 <c736271f-96ad-dfdb-48ae-b8e9cc002d9b@suse.com>
 <ZEAO8e9iTjmi86fr@Air-de-Roger>
 <7e3246c7-6de2-b3eb-477f-99ef9bd1b33b@suse.com>
 <ZEFMzu8k5wlYt2aD@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZEFMzu8k5wlYt2aD@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 20.04.2023 16:31, Roger Pau Monné wrote:
> On Thu, Apr 20, 2023 at 10:31:08AM +0200, Jan Beulich wrote:
>> On 19.04.2023 17:55, Roger Pau Monné wrote:
>>> On Wed, Apr 19, 2023 at 03:58:10PM +0200, Jan Beulich wrote:
>>>> @@ -1342,6 +1349,17 @@ unsigned int rtc_guest_read(unsigned int
>>>>           * underlying hardware would permit doing so.
>>>>           */
>>>>          data = currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(0)));
>>>> +
>>>> +        /*
>>>> +         * When there's (supposedly) no RTC/CMOS, we don't intercept the other
>>>> +         * ports. While reading the index register isn't normally possible,
>>>> +         * play safe and return back whatever can be read (just in case a value
>>>> +         * written through an alias would be attempted to be read back here).
>>>> +         */
>>>> +        if ( port == RTC_PORT(0) &&
>>>> +             (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) &&
>>>> +             ioports_access_permitted(currd, port, port) )
>>>> +            data = inb(port) & 0x7f;
>>>
>>> Do we really need to mask the high bit here?  We don't allow setting
>>> that bit in the first place.
>>
>> I think it's more consistent to mask it off, specifically with the code
>> visible in context right above the insertion. The doc isn't really clear
>> about readability of that bit: On one hand it says R/W for port 0x70 in
>> the NMI_EN section, yet otoh in the RTC section it says "Note that port
>> 70h is not directly readable. The only way to read this register is
>> through Alt Access mode." (I think the NMI_EN section is more trustworthy,
>> but still.) Plus if we were to ever make use of the NMI disable, we
>> wouldn't want Dom0 to see the bit set.
> 
> I guess so, at the end Xen itself doesn't use the bit so far.  Maybe
> at some point we would want to expose the value of the bit to dom0 if
> Xen starts using it (more than anything for informative purposes if
> NMIs are disabled).
> 
> Feel free to fold the diff to the existing patch and keep the RB.

Thanks.

> I guess you will also add something to the commit message about the
> special handling of the NMI enable bit even when the RTC/CMOS is not
> present?

Of course, albeit not more than a sentence, as the code comments provide
the details.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 15:57:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 15:57:56 +0000
From: Thomas Gleixner <tglx@linutronix.de>
To: Sean Christopherson <seanjc@google.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Menzel
 <pmenzel@molgen.mpg.de>, linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>, Paolo Bonzini
 <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom Lendacky
 <thomas.lendacky@amd.com>, Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <ZEFRhXua6Jxvit1R@google.com>
References: <87r0sh4m7a.ffs@tglx>
 <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de> <87a5z443g2.ffs@tglx>
 <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
 <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
 <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com> <871qkf3qek.ffs@tglx>
 <26d385da-2ede-5d73-2959-84c8f7d89e03@citrix.com> <87y1mm3iqz.ffs@tglx>
 <ZEFRhXua6Jxvit1R@google.com>
Date: Thu, 20 Apr 2023 17:57:31 +0200
Message-ID: <87v8hq35sk.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Thu, Apr 20 2023 at 07:51, Sean Christopherson wrote:
> On Thu, Apr 20, 2023, Thomas Gleixner wrote:
>> On Thu, Apr 20 2023 at 10:23, Andrew Cooper wrote:
>> > On 20/04/2023 9:32 am, Thomas Gleixner wrote:
>> > > On Wed, Apr 19, 2023, Andrew Cooper wrote:
>> > > > This was changed in x2APIC, which made the x2APIC_ID immutable.
>>
>> >> I'm pondering to simply deny parallel mode if x2APIC is not there.
>> >
>> > I'm not sure if that will help much.
>> 
>> Spoilsport.
>
> LOL, well let me pile on then.  x2APIC IDs aren't immutable on AMD hardware.  The
> ID is read-only when the CPU is in x2APIC mode, but any changes made to the ID
> while the CPU is in xAPIC mode survive the transition to x2APIC.  From the APM:
>
>   A value previously written by software to the 8-bit APIC_ID register (MMIO offset
>   30h) is converted by hardware into the appropriate format and reflected into the
>   32-bit x2APIC_ID register (MSR 802h).
>
> FWIW, my observations from testing on bare metal are that the xAPIC ID is effectively
> read-only (writes are dropped) on Intel CPUs as far back as Haswell, while the above
> behavior described in the APM holds true on at least Rome and Milan.
>
> My guess is that Intel's uArch specific behavior of the xAPIC ID being read-only
> was introduced when x2APIC came along, but I didn't test farther back than Haswell.

I'm not so worried about modern hardware. The horrorshow is the old muck
as demonstrated and of course there is virt :)

Something like the completely untested below should just work whatever
APIC ID the BIOS decided to dice.

That might just work on SEV too without that GHCB muck, but what do I
know.

Thanks,

        tglx
---
--- a/arch/x86/include/asm/apicdef.h
+++ b/arch/x86/include/asm/apicdef.h
@@ -138,7 +138,8 @@
 #define		APIC_EILVT_MASKED	(1 << 16)
 
 #define APIC_BASE (fix_to_virt(FIX_APIC_BASE))
-#define APIC_BASE_MSR	0x800
+#define APIC_BASE_MSR		0x800
+#define APIC_X2APIC_ID_MSR	0x802
 #define XAPIC_ENABLE	(1UL << 11)
 #define X2APIC_ENABLE	(1UL << 10)
 
@@ -162,6 +163,7 @@
 #define APIC_CPUID(apicid)	((apicid) & XAPIC_DEST_CPUS_MASK)
 #define NUM_APIC_CLUSTERS	((BAD_APICID + 1) >> XAPIC_DEST_CPUS_SHIFT)
 
+#ifndef __ASSEMBLY__
 /*
  * the local APIC register structure, memory mapped. Not terribly well
  * tested, but we might eventually use this one in the future - the
@@ -435,4 +437,5 @@ enum apic_delivery_modes {
 	APIC_DELIVERY_MODE_EXTINT	= 7,
 };
 
+#endif /* !__ASSEMBLY__ */
 #endif /* _ASM_X86_APICDEF_H */
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -195,14 +195,13 @@ extern void nmi_selftest(void);
 #endif
 
 extern unsigned int smpboot_control;
+extern unsigned long apic_mmio_base;
 
 #endif /* !__ASSEMBLY__ */
 
 /* Control bits for startup_64 */
-#define STARTUP_APICID_CPUID_1F 0x80000000
-#define STARTUP_APICID_CPUID_0B 0x40000000
-#define STARTUP_APICID_CPUID_01 0x20000000
-#define STARTUP_APICID_SEV_ES	0x10000000
+#define STARTUP_READ_APICID	0x80000000
+#define STARTUP_APICID_SEV_ES	0x40000000
 
 /* Top 8 bits are reserved for control */
 #define STARTUP_PARALLEL_MASK	0xFF000000
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -101,6 +101,8 @@ static int apic_extnmi __ro_after_init =
  */
 static bool virt_ext_dest_id __ro_after_init;
 
+unsigned long apic_mmio_base __ro_after_init;
+
 /*
  * Map cpu index to physical APIC ID
  */
@@ -2164,6 +2166,7 @@ void __init register_lapic_address(unsig
 
 	if (!x2apic_mode) {
 		set_fixmap_nocache(FIX_APIC_BASE, address);
+		apic_mmio_base = APIC_BASE;
 		apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
 			    APIC_BASE, address);
 	}
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -24,8 +24,10 @@
 #include "../entry/calling.h"
 #include <asm/export.h>
 #include <asm/nospec-branch.h>
+#include <asm/apicdef.h>
 #include <asm/fixmap.h>
 #include <asm/smp.h>
+
 #include <asm/sev-common.h>
 
 /*
@@ -237,37 +239,24 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 
 #ifdef CONFIG_SMP
 	/*
-	 * For parallel boot, the APIC ID is retrieved from CPUID, and then
-	 * used to look up the CPU number.  For booting a single CPU, the
-	 * CPU number is encoded in smpboot_control.
+	 * For parallel boot, the APIC ID is either retrieved from the
+	 * APIC or from CPUID, and then used to look up the CPU number.
+	 * For booting a single CPU, the CPU number is encoded in
+	 * smpboot_control.
 	 *
-	 * Bit 31	STARTUP_APICID_CPUID_1F flag (use CPUID 0x1f)
-	 * Bit 30	STARTUP_APICID_CPUID_0B flag (use CPUID 0x0b)
-	 * Bit 29	STARTUP_APICID_CPUID_01 flag (use CPUID 0x01)
-	 * Bit 28	STARTUP_APICID_SEV_ES flag (CPUID 0x0b via GHCB MSR)
+	 * Bit 31	STARTUP_READ_APICID (Read APIC ID from APIC)
+	 * Bit 30	STARTUP_APICID_SEV_ES flag (CPUID 0x0b via GHCB MSR)
-	 * Bit 0-23	CPU# if STARTUP_APICID_CPUID_xx flags are not set
+	 * Bit 0-23	CPU# if STARTUP_xx flags are not set
 	 */
 	movl	smpboot_control(%rip), %ecx
+	testl	$STARTUP_READ_APICID, %ecx
+	jnz	.Lread_apicid
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	testl	$STARTUP_APICID_SEV_ES, %ecx
 	jnz	.Luse_sev_cpuid_0b
 #endif
-	testl	$STARTUP_APICID_CPUID_1F, %ecx
-	jnz	.Luse_cpuid_1f
-	testl	$STARTUP_APICID_CPUID_0B, %ecx
-	jnz	.Luse_cpuid_0b
-	testl	$STARTUP_APICID_CPUID_01, %ecx
-	jnz	.Luse_cpuid_01
 	andl	$(~STARTUP_PARALLEL_MASK), %ecx
 	jmp	.Lsetup_cpu
 
-.Luse_cpuid_01:
-	mov	$0x01, %eax
-	cpuid
-	mov	%ebx, %edx
-	shr	$24, %edx
-	jmp	.Lsetup_AP
-
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 .Luse_sev_cpuid_0b:
 	/* Set the GHCB MSR to request CPUID 0x0B_EDX */
@@ -292,24 +281,30 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	jmp	.Lsetup_AP
 #endif
 
-.Luse_cpuid_0b:
-	mov	$0x0B, %eax
-	xorl	%ecx, %ecx
-	cpuid
-	jmp	.Lsetup_AP
+.Lread_apicid:
+	mov	$MSR_IA32_APICBASE, %ecx
+	rdmsr
+	testl	$X2APIC_ENABLE, %eax
+	jnz	.Lread_apicid_msr
+
+	/* Read the APIC ID from the fix-mapped MMIO space. */
+	movq	apic_mmio_base(%rip), %rcx
+	addq	$APIC_ID, %rcx
+	movl	(%rcx), %eax
+	shr	$24, %eax
+	jmp	.Lsetup_AP
 
-.Luse_cpuid_1f:
-	mov	$0x1f, %eax
-	xorl	%ecx, %ecx
-	cpuid
+.Lread_apicid_msr:
+	mov	$APIC_X2APIC_ID_MSR, %ecx
+	rdmsr
 
 .Lsetup_AP:
-	/* EDX contains the APIC ID of the current CPU */
+	/* EAX contains the APIC ID of the current CPU */
 	xorq	%rcx, %rcx
 	leaq	cpuid_to_apicid(%rip), %rbx
 
 .Lfind_cpunr:
-	cmpl	(%rbx,%rcx,4), %edx
+	cmpl	(%rbx,%rcx,4), %eax
 	jz	.Lsetup_cpu
 	inc	%ecx
 #ifdef CONFIG_FORCE_NR_CPUS
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1253,41 +1253,22 @@ bool __init arch_cpuhp_init_parallel_bri
 		return false;
 	}
 
-	/* Encrypted guests require special CPUID handling. */
+	/* Encrypted guests require special handling. */
 	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
 		switch (cc_get_vendor()) {
 		case CC_VENDOR_AMD:
 			ctrl = STARTUP_APICID_SEV_ES;
 			if (topology_extended_leaf == 0x0b)
-				goto setup;
+				break;
 			fallthrough;
 		default:
 			pr_info("Parallel CPU startup disabled due to guest state encryption\n");
 			return false;
 		}
+	} else {
+		ctrl = STARTUP_READ_APICID;
 	}
 
-	switch (topology_extended_leaf) {
-	case 0x0b:
-		ctrl = STARTUP_APICID_CPUID_0B;
-		break;
-	case 0x1f:
-		ctrl = STARTUP_APICID_CPUID_1F;
-		break;
-	case 0x00:
-		/* For !x2APIC mode 8 bits from leaf 0x01 are sufficient. */
-		if (!x2apic_mode) {
-			ctrl = STARTUP_APICID_CPUID_01;
-			break;
-		}
-		fallthrough;
-	default:
-		pr_info("Parallel CPU startup disabled. Unsupported topology leaf %u\n",
-			topology_extended_leaf);
-		return false;
-	}
-
-setup:
 	pr_debug("Parallel CPU startup enabled: 0x%08x\n", ctrl);
 	smpboot_control = ctrl;
 	return true;


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 15:59:18 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180325-mainreport@xen.org>
Subject: [xen-unstable test] 180325: trouble: blocked/broken/fail/pass
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 15:59:04 +0000

flight 180325 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180325/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180309

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 180309 pass in 180325
 test-amd64-i386-libvirt-xsm   7 xen-install                fail pass in 180309
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install          fail pass in 180309

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 180309 blocked in 180325
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail in 180309 blocked in 180325
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 180309 blocked in 180325
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail in 180309 never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 180309 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 180309 never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 180309 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 180309 never pass
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 180309 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 180309 never pass
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 180309 never pass
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 180309 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 180309 never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 180309 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 180309 never pass
 test-armhf-armhf-xl         15 migrate-support-check fail in 180309 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 180309 never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail in 180309 never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail in 180309 never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail in 180309 never pass
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 180309 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 180309 never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 180309 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180309
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180309
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180309
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180309
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180309
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180309
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180309
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180309
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180309
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass

version targeted for testing:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180325  2023-04-20 01:53:23 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Apr 20 16:15:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 16:15:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524331.815188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppWwm-0002PX-Mf; Thu, 20 Apr 2023 16:15:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524331.815188; Thu, 20 Apr 2023 16:15:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppWwm-0002PQ-Is; Thu, 20 Apr 2023 16:15:48 +0000
Received: by outflank-mailman (input) for mailman id 524331;
 Thu, 20 Apr 2023 16:15:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IO+W=AL=citrix.com=prvs=467e7f66c=ross.lagerwall@srs-se1.protection.inumbo.net>)
 id 1ppWwl-0002PK-2e
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 16:15:47 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 985cab62-df96-11ed-b21f-6b7b168915f2;
 Thu, 20 Apr 2023 18:15:43 +0200 (CEST)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 Apr 2023 12:15:01 -0400
Received: from DM6PR03MB5372.namprd03.prod.outlook.com (2603:10b6:5:24f::15)
 by BL1PR03MB6150.namprd03.prod.outlook.com (2603:10b6:208:31e::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Thu, 20 Apr
 2023 16:14:56 +0000
Received: from DM6PR03MB5372.namprd03.prod.outlook.com
 ([fe80::4418:2c5d:f218:e58]) by DM6PR03MB5372.namprd03.prod.outlook.com
 ([fe80::4418:2c5d:f218:e58%6]) with mapi id 15.20.6319.022; Thu, 20 Apr 2023
 16:14:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 985cab62-df96-11ed-b21f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682007342;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=5qYw18nuoylvmGWA8A6EOst6p1DtVudzE1QBsL1wGH8=;
  b=dUZxL6dboTBhHboJiNrn1wvsQMilA7pXXFhfs4dgaNEJ4htqi98eyqCa
   hcxffn3R5y1FGtDHkLxXGPiIBWId+klFuVt8bUacpmZ5KAza9Ehphz13W
   UEHDX6vbKAyyU/439sOsFcksxtTfTEBH9wVL9U5zhFER0VzYWd+g0xny5
   4=;
X-IronPort-RemoteIP: 104.47.58.169
X-IronPort-MID: 106308308
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: =?us-ascii?q?9a23=3Ajq9DaWtoOPXz2xWN4UIapKYx6It7fmee8UvAI3a?=
 =?us-ascii?q?5CEguTbvOaQWd4v9Nxp8=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3AUXY0yQ6ApIvbZMsSg6UwTuNYxoxG3qKCFEJdyqk?=
 =?us-ascii?q?ZvsqlCD12BGuWj2+eF9o=3D?=
X-IronPort-AV: E=Sophos;i="5.99,213,1677560400"; 
   d="scan'208";a="106308308"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CjpzLjh2DlQQl+TspiHwKVVl5R/i7owrGn97wrSak2wbj9jC5IhGk6/cWepv3RJQdyLKkRnmOaHtWdoKBfDEKNKLlWQ8E3+QqICCM4UkpLNCYdpbHqnQK2TLidOGyJTWzsiq9AsjKq2nkO34AIDs/lnLqNFnLMjmWkf40zTCjaXBSKPOQIMIt+y5CO5fbzjc+Oz1yq+B80cgy6rge3GwlJ6AsugsWV97jPSGIK9WG7eAlVWlLrQpMHv4GKdxVsbmc6dXK45UKt11BEvh09mymJT+NzBi1hPe4qy3Aw10ZHSHrxhYZ1aHGAucA+8xw989SMP+yn0kQ2+09y6sUXZvCg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5qYw18nuoylvmGWA8A6EOst6p1DtVudzE1QBsL1wGH8=;
 b=ZoBt7tCildVaKnke2inakBiB7CVNBS7l6GL4LWhhQvAogrGvcAPzWbWB3Zdvkx8z0f4SJxF9U3vKCjddKi7xz52tWrUY96SKyE44+YsZH3eLmw21kBUU/MHZCotNFR7vD8SlnjkzlLRaONe0JMnkoGjM0Wbk9c5CxJ2WGgVvdsq8JFTWrMj2ZBUtzCzaEyrmqwkEKUQgZaaKEA4R7CoMhp9dPDdCzrb+qpd1bY6VEBJEaZPEuch2GhcANsGiAi2ip3dhnK+mwrMXnN4j/isDSfyyLxdBpCHwTLWhpr0OpwbGQjsehKFbs9MYB/4fji1VAQX+5nW/QHonzvyGrD9GwA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5qYw18nuoylvmGWA8A6EOst6p1DtVudzE1QBsL1wGH8=;
 b=sET5pzYrAq2/P/JWfeOFAq1Shw6aMvnjsL8qsf4SqXLw0UB5MIkF14klYCDrt1SXZuCnFfb5IxqZ67xH84cgit29r1I7shF3sUdFyMMtzrUMg/oihXa7j2ShIe2ZejaCOaWiWeng2bK1gHXn1PMwv4Ea7O84yWiozWY/4WwW3ng=
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [PATCH v2] livepatch-tools: remove usage of error.h
Thread-Topic: [PATCH v2] livepatch-tools: remove usage of error.h
Thread-Index: AQHZaHyz8c+PPcqCtEm6H34gcW31tK80dMV2
Date: Thu, 20 Apr 2023 16:14:55 +0000
Message-ID:
 <DM6PR03MB53728797915DB90C57CE03B2F0639@DM6PR03MB5372.namprd03.prod.outlook.com>
References: <20230406114106.54735-1-roger.pau@citrix.com>
In-Reply-To: <20230406114106.54735-1-roger.pau@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
msip_labels:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: DM6PR03MB5372:EE_|BL1PR03MB6150:EE_
x-ms-office365-filtering-correlation-id: 67b3d50d-4bf7-48af-5b18-08db41ba619a
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DM6PR03MB5372.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 67b3d50d-4bf7-48af-5b18-08db41ba619a
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Apr 2023 16:14:56.0032
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: h1AoaiNdVwhRa56V97vrmpc8ke/mA95pxbIVg0B17j/tx66zr6PcUwrI3aLEIZN3CJbkB3KVcRbInu8vR8CN7blc+8lJyUdzbFE0sQF/Sbs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB6150

> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: Thursday, April 6, 2023 12:41 PM
> To: xen-devel@lists.xenproject.org <xen-devel@lists.xenproject.org>
> Cc: Roger Pau Monne <roger.pau@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Ross Lagerwall <ross.lagerwall@citrix.com>
> Subject: [PATCH v2] livepatch-tools: remove usage of error.h
>
> It's a GNU libc specific header which prevents building on musl for
> example.  Instead use errx() in ERROR() and DIFF_FATAL() macros.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Ross Lagerwall <ross.lagerwall@citrix.com>
> ---
> Changes since v1:
>  - Use errx().
> ---
>  common.h             | 9 ++++++---
>  create-diff-object.c | 1 -
>  lookup.c             | 7 +++++--
>  prelink.c            | 1 -
>  4 files changed, 11 insertions(+), 7 deletions(-)
>
> diff --git a/common.h b/common.h
> index 9a9da79..bbaa950 100644
> --- a/common.h
> +++ b/common.h
> @@ -1,18 +1,21 @@
>  #ifndef _COMMON_H_
>  #define _COMMON_H_
>  
> -#include <error.h>
> +#include <err.h>
>  
>  extern char *childobj;
>  
>  #define ERROR(format, ...) \
> -        error(1, 0, "ERROR: %s: %s: %d: " format, childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__)
> +({ \
> +        fflush(stdout); \
> +        errx(1, "ERROR: %s: %s: %d: " format "\n", childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__); \
> +})

Did you mean to add "\n" here? Wouldn't that result in a double new
line?

With that removed (can be done during commit),

Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 16:47:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 16:47:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524337.815197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppXRb-0005ri-6g; Thu, 20 Apr 2023 16:47:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524337.815197; Thu, 20 Apr 2023 16:47:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppXRb-0005rb-41; Thu, 20 Apr 2023 16:47:39 +0000
Received: by outflank-mailman (input) for mailman id 524337;
 Thu, 20 Apr 2023 16:47:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oMKn=AL=molgen.mpg.de=pmenzel@srs-se1.protection.inumbo.net>)
 id 1ppXRa-0005rV-1J
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 16:47:38 +0000
Received: from mx3.molgen.mpg.de (mx3.molgen.mpg.de [141.14.17.11])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0cac3c05-df9b-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 18:47:35 +0200 (CEST)
Received: from [192.168.0.2] (ip5f5aee70.dynamic.kabel-deutschland.de
 [95.90.238.112])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested) (Authenticated sender: pmenzel)
 by mx.molgen.mpg.de (Postfix) with ESMTPSA id C4A9E61E4052B;
 Thu, 20 Apr 2023 18:47:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0cac3c05-df9b-11ed-8611-37d641c3527e
Message-ID: <56e59a4d-a47f-4bfe-7db5-5f921062ad69@molgen.mpg.de>
Date: Thu, 20 Apr 2023 18:47:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Sean Christopherson <seanjc@google.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, linux-kernel@vger.kernel.org,
 x86@kernel.org, David Woodhouse <dwmw2@infradead.org>,
 Brian Gerst <brgerst@gmail.com>, Arjan van de Ven <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 Jürgen Groß <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E. J. Bottomley" <James.Bottomley@hansenpartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <87r0sh4m7a.ffs@tglx>
 <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de> <87a5z443g2.ffs@tglx>
 <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
 <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
 <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com> <871qkf3qek.ffs@tglx>
 <26d385da-2ede-5d73-2959-84c8f7d89e03@citrix.com> <87y1mm3iqz.ffs@tglx>
 <ZEFRhXua6Jxvit1R@google.com> <87v8hq35sk.ffs@tglx>
Content-Language: en-US
From: Paul Menzel <pmenzel@molgen.mpg.de>
In-Reply-To: <87v8hq35sk.ffs@tglx>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Dear Thomas,


On 20.04.23 at 17:57, Thomas Gleixner wrote:
> On Thu, Apr 20 2023 at 07:51, Sean Christopherson wrote:
>> On Thu, Apr 20, 2023, Thomas Gleixner wrote:
>>> On Thu, Apr 20 2023 at 10:23, Andrew Cooper wrote:
>>>> On 20/04/2023 9:32 am, Thomas Gleixner wrote:
>>>>> On Wed, Apr 19, 2023, Andrew Cooper wrote:
>>>>>> This was changed in x2APIC, which made the x2APIC_ID immutable.
>>>
>>>>> I'm pondering to simply deny parallel mode if x2APIC is not there.
>>>>
>>>> I'm not sure if that will help much.
>>>
>>> Spoilsport.
>>
>> LOL, well let me pile on then.  x2APIC IDs aren't immutable on AMD hardware.  The
>> ID is read-only when the CPU is in x2APIC mode, but any changes made to the ID
>> while the CPU is in xAPIC mode survive the transition to x2APIC.  From the APM:
>>
>>    A value previously written by software to the 8-bit APIC_ID register (MMIO offset
>>    30h) is converted by hardware into the appropriate format and reflected into the
>>    32-bit x2APIC_ID register (MSR 802h).
>>
>> FWIW, my observations from testing on bare metal are that the xAPIC ID is effectively
>> read-only (writes are dropped) on Intel CPUs as far back as Haswell, while the above
>> behavior described in the APM holds true on at least Rome and Milan.
>>
>> My guess is that Intel's uArch specific behavior of the xAPIC ID being read-only
>> was introduced when x2APIC came along, but I didn't test farther back than Haswell.
> 
> I'm not so worried about modern hardware. The horrorshow is the old muck
> as demonstrated and of course there is virt :)
> 
> Something like the completely untested below should just work whatever
> APIC ID the BIOS decided to dice.
> 
> That might just work on SEV too without that GHCB muck, but what do I
> know.
> 
> Thanks,
> 
>          tglx
> ---
> --- a/arch/x86/include/asm/apicdef.h
> +++ b/arch/x86/include/asm/apicdef.h
> @@ -138,7 +138,8 @@
>   #define		APIC_EILVT_MASKED	(1 << 16)
>   
>   #define APIC_BASE (fix_to_virt(FIX_APIC_BASE))
> -#define APIC_BASE_MSR	0x800
> +#define APIC_BASE_MSR		0x800
> +#define APIC_X2APIC_ID_MSR	0x802
>   #define XAPIC_ENABLE	(1UL << 11)
>   #define X2APIC_ENABLE	(1UL << 10)
>   
> @@ -162,6 +163,7 @@
>   #define APIC_CPUID(apicid)	((apicid) & XAPIC_DEST_CPUS_MASK)
>   #define NUM_APIC_CLUSTERS	((BAD_APICID + 1) >> XAPIC_DEST_CPUS_SHIFT)
>   
> +#ifndef __ASSEMBLY__
>   /*
>    * the local APIC register structure, memory mapped. Not terribly well
>    * tested, but we might eventually use this one in the future - the
> @@ -435,4 +437,5 @@ enum apic_delivery_modes {
>   	APIC_DELIVERY_MODE_EXTINT	= 7,
>   };
>   
> +#endif /* !__ASSEMBLY__ */
>   #endif /* _ASM_X86_APICDEF_H */
> --- a/arch/x86/include/asm/smp.h
> +++ b/arch/x86/include/asm/smp.h
> @@ -195,14 +195,13 @@ extern void nmi_selftest(void);
>   #endif
>   
>   extern unsigned int smpboot_control;
> +extern unsigned long apic_mmio_base;
>   
>   #endif /* !__ASSEMBLY__ */
>   
>   /* Control bits for startup_64 */
> -#define STARTUP_APICID_CPUID_1F 0x80000000
> -#define STARTUP_APICID_CPUID_0B 0x40000000
> -#define STARTUP_APICID_CPUID_01 0x20000000
> -#define STARTUP_APICID_SEV_ES	0x10000000
> +#define STARTUP_READ_APICID	0x80000000
> +#define STARTUP_APICID_SEV_ES	0x40000000
>   
>   /* Top 8 bits are reserved for control */
>   #define STARTUP_PARALLEL_MASK	0xFF000000
> --- a/arch/x86/kernel/apic/apic.c
> +++ b/arch/x86/kernel/apic/apic.c
> @@ -101,6 +101,8 @@ static int apic_extnmi __ro_after_init =
>    */
>   static bool virt_ext_dest_id __ro_after_init;
>   
> +unsigned long apic_mmio_base __ro_after_init;
> +
>   /*
>    * Map cpu index to physical APIC ID
>    */
> @@ -2164,6 +2166,7 @@ void __init register_lapic_address(unsig
>   
>   	if (!x2apic_mode) {
>   		set_fixmap_nocache(FIX_APIC_BASE, address);
> +		apic_mmio_base = APIC_BASE;
>   		apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
>   			    APIC_BASE, address);
>   	}
> --- a/arch/x86/kernel/head_64.S
> +++ b/arch/x86/kernel/head_64.S
> @@ -24,8 +24,10 @@
>   #include "../entry/calling.h"
>   #include <asm/export.h>
>   #include <asm/nospec-branch.h>
> +#include <asm/apicdef.h>
>   #include <asm/fixmap.h>
>   #include <asm/smp.h>
> +
>   #include <asm/sev-common.h>
>   
>   /*
> @@ -237,37 +239,24 @@ SYM_INNER_LABEL(secondary_startup_64_no_
>   
>   #ifdef CONFIG_SMP
>   	/*
> -	 * For parallel boot, the APIC ID is retrieved from CPUID, and then
> -	 * used to look up the CPU number.  For booting a single CPU, the
> -	 * CPU number is encoded in smpboot_control.
> +	 * For parallel boot, the APIC ID is either retrieved from the APIC
> +	 * or from CPUID, and then used to look up the CPU number.
> +	 * For booting a single CPU, the CPU number is encoded in
> +	 * smpboot_control.
>   	 *
> -	 * Bit 31	STARTUP_APICID_CPUID_1F flag (use CPUID 0x1f)
> -	 * Bit 30	STARTUP_APICID_CPUID_0B flag (use CPUID 0x0b)
> -	 * Bit 29	STARTUP_APICID_CPUID_01 flag (use CPUID 0x01)
> -	 * Bit 28	STARTUP_APICID_SEV_ES flag (CPUID 0x0b via GHCB MSR)
> +	 * Bit 31	STARTUP_APICID_READ (Read APICID from APIC)
> +	 * Bit 30	STARTUP_APICID_SEV_ES flag (CPUID 0x0b via GHCB MSR)
>   	 * Bit 0-23	CPU# if STARTUP_APICID_CPUID_xx flags are not set
>   	 */
>   	movl	smpboot_control(%rip), %ecx
> +	testl	$STARTUP_READ_APICID, %ecx
>   #ifdef CONFIG_AMD_MEM_ENCRYPT
>   	testl	$STARTUP_APICID_SEV_ES, %ecx
>   	jnz	.Luse_sev_cpuid_0b
>   #endif
> -	testl	$STARTUP_APICID_CPUID_1F, %ecx
> -	jnz	.Luse_cpuid_1f
> -	testl	$STARTUP_APICID_CPUID_0B, %ecx
> -	jnz	.Luse_cpuid_0b
> -	testl	$STARTUP_APICID_CPUID_01, %ecx
> -	jnz	.Luse_cpuid_01
>   	andl	$(~STARTUP_PARALLEL_MASK), %ecx
>   	jmp	.Lsetup_cpu
>   
> -.Luse_cpuid_01:
> -	mov	$0x01, %eax
> -	cpuid
> -	mov	%ebx, %edx
> -	shr	$24, %edx
> -	jmp	.Lsetup_AP
> -
>   #ifdef CONFIG_AMD_MEM_ENCRYPT
>   .Luse_sev_cpuid_0b:
>   	/* Set the GHCB MSR to request CPUID 0x0B_EDX */
> @@ -292,24 +281,30 @@ SYM_INNER_LABEL(secondary_startup_64_no_
>   	jmp	.Lsetup_AP
>   #endif
>   
> -.Luse_cpuid_0b:
> -	mov	$0x0B, %eax
> -	xorl	%ecx, %ecx
> -	cpuid
> -	jmp	.Lsetup_AP
> +.Lread_apicid:
> +	mov	$MSR_IA32_APICBASE, %ecx
> +	rdmsr
> +	testl	$X2APIC_ENABLE, %eax
> +	jnz	read_apicid_msr
> +
> +	/* Read the APIC ID from the fix-mapped MMIO space. */
> +	movq	apic_mmio_base(%rip), %rcx
> +	addq	$APIC_ID, %rcx
> +	movl	(%rcx), %eax
> +	shr	$24, %eax
> +	jnz	.Lread_apicid
>   
> -.Luse_cpuid_1f:
> -	mov	$0x1f, %eax
> -	xorl	%ecx, %ecx
> -	cpuid
> +.Lread_apicid_msr:
> +	mov	$APIC_X2APIC_ID_MSR, %ecx
> +	rdmsr
>   
>   .Lsetup_AP:
> -	/* EDX contains the APIC ID of the current CPU */
> +	/* EAX contains the APIC ID of the current CPU */
>   	xorq	%rcx, %rcx
>   	leaq	cpuid_to_apicid(%rip), %rbx
>   
>   .Lfind_cpunr:
> -	cmpl	(%rbx,%rcx,4), %edx
> +	cmpl	(%rbx,%rcx,4), %eax
>   	jz	.Lsetup_cpu
>   	inc	%ecx
>   #ifdef CONFIG_FORCE_NR_CPUS
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -1253,41 +1253,22 @@ bool __init arch_cpuhp_init_parallel_bri
>   		return false;
>   	}
>   
> -	/* Encrypted guests require special CPUID handling. */
> +	/* Encrypted guests require special handling. */
>   	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
>   		switch (cc_get_vendor()) {
>   		case CC_VENDOR_AMD:
>   			ctrl = STARTUP_APICID_SEV_ES;
>   			if (topology_extended_leaf == 0x0b)
> -				goto setup;
> +				break;
>   			fallthrough;
>   		default:
>   			pr_info("Parallel CPU startup disabled due to guest state encryption\n");
>   			return false;
>   		}
> +	} else {
> +		ctrl = STARTUP_READ_APICID;
>   	}
>   
> -	switch (topology_extended_leaf) {
> -	case 0x0b:
> -		ctrl = STARTUP_APICID_CPUID_0B;
> -		break;
> -	case 0x1f:
> -		ctrl = STARTUP_APICID_CPUID_1F;
> -		break;
> -	case 0x00:
> -		/* For !x2APIC mode 8 bits from leaf 0x01 are sufficient. */
> -		if (!x2apic_mode) {
> -			ctrl = STARTUP_APICID_CPUID_01;
> -			break;
> -		}
> -		fallthrough;
> -	default:
> -		pr_info("Parallel CPU startup disabled. Unsupported topology leaf %u\n",
> -			topology_extended_leaf);
> -		return false;
> -	}
> -
> -setup:
>   	pr_debug("Parallel CPU startup enabled: 0x%08x\n", ctrl);
>   	smpboot_control = ctrl;
>   	return true;

I quickly applied it on top of your branch, but I am getting:

```
$ wget https://lore.kernel.org/lkml/87v8hq35sk.ffs@tglx/raw
$ patch -p1 < raw
$ make
[…]
   LD      .tmp_vmlinux.kallsyms1
ld: arch/x86/kernel/head_64.o: in function `secondary_startup_64_no_verify':
(.head.text+0xbf): undefined reference to `read_apicid_msr'
make[1]: *** [scripts/Makefile.vmlinux:35: vmlinux] Error 1
make: *** [Makefile:1249: vmlinux] Error 2
```


Kind regards,

Paul


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 18:20:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 18:20:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524341.815208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppYso-0006db-V0; Thu, 20 Apr 2023 18:19:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524341.815208; Thu, 20 Apr 2023 18:19:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppYso-0006dU-RD; Thu, 20 Apr 2023 18:19:50 +0000
Received: by outflank-mailman (input) for mailman id 524341;
 Thu, 20 Apr 2023 18:19:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppYsn-0006dI-Gk; Thu, 20 Apr 2023 18:19:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppYsn-000552-9H; Thu, 20 Apr 2023 18:19:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppYsm-0007lC-J7; Thu, 20 Apr 2023 18:19:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppYsm-0001ex-Ij; Thu, 20 Apr 2023 18:19:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Tr1EAU8yeNMOvR2eLRcwY8U5ELJGqqDQKsONH0qqzcA=; b=dc/nKGARlUSeVzmdwiaoofGcJD
	2huSh49AZTv+ve8vdNC2Ur1JpaenY/YybjOyyf/IJiwmAhHK49qUQYJ3mtf9T6w8+e2OZsEO4NE09
	amZxi9UWvLKO98Jk+aRd/2ry1O+j0vYPraS1QV96v80kWeKEgbRhJUqEyj7KFFc10XXk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180335-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180335: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dff17457c482dcb38b7caafd095334f461ef6d35
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 18:19:48 +0000

flight 180335 xen-unstable-smoke real [real]
flight 180336 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180335/
http://logs.test-lab.xenproject.org/osstest/logs/180336/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dff17457c482dcb38b7caafd095334f461ef6d35
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    1 days
Failing since        180314  2023-04-19 10:00:24 Z    1 days   11 attempts
Testing same since   180323  2023-04-19 21:01:54 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 317 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 18:39:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 18:39:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524349.815220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppZC8-0000h4-J0; Thu, 20 Apr 2023 18:39:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524349.815220; Thu, 20 Apr 2023 18:39:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppZC8-0000gx-GJ; Thu, 20 Apr 2023 18:39:48 +0000
Received: by outflank-mailman (input) for mailman id 524349;
 Thu, 20 Apr 2023 18:39:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppZC6-0000gn-VL; Thu, 20 Apr 2023 18:39:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppZC6-0005a2-Mp; Thu, 20 Apr 2023 18:39:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppZC6-0008FN-8r; Thu, 20 Apr 2023 18:39:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppZC6-0006Ca-8U; Thu, 20 Apr 2023 18:39:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XZ2icQ5LssUKD9bcTPtkgDAursXoiybuZs4FqmlInRA=; b=rmyr4bt0vKRfMgisiFFPnvOI9n
	R1vkePs8UAe+bXRPCqvhz+Afm9Eimu036dVwuq1t7u9J9lHrxnmXwoOERSIzmk3R7IAYqzzNY20RN
	8LnQ2+NwJ1x0SRQFyZAPHEVCrLTOoz1fJ3m29P50lQ6YbQPDN70gArr/0pqxpw9/4O3Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180337-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180337: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8f4ec0cc433a33967cdbbb945acd37b6ae1d3fce
X-Osstest-Versions-That:
    ovmf=e3d2c08322bc61e9c5b87b3c282dd2af3d52aec6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 18:39:46 +0000

flight 180337 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180337/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8f4ec0cc433a33967cdbbb945acd37b6ae1d3fce
baseline version:
 ovmf                 e3d2c08322bc61e9c5b87b3c282dd2af3d52aec6

Last test of basis   180307  2023-04-19 04:10:44 Z    1 days
Testing same since   180337  2023-04-20 16:41:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Marvin Häuser <mhaeuser@posteo.de>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e3d2c08322..8f4ec0cc43  8f4ec0cc433a33967cdbbb945acd37b6ae1d3fce -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 19:11:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 19:11:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524356.815231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppZg4-0004ua-VS; Thu, 20 Apr 2023 19:10:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524356.815231; Thu, 20 Apr 2023 19:10:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppZg4-0004uT-Rq; Thu, 20 Apr 2023 19:10:44 +0000
Received: by outflank-mailman (input) for mailman id 524356;
 Thu, 20 Apr 2023 19:10:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J46Y=AL=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1ppZg4-0004uN-6F
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 19:10:44 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0aa737d0-dfaf-11ed-8611-37d641c3527e;
 Thu, 20 Apr 2023 21:10:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0aa737d0-dfaf-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1682017840;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=IP9ZNUD6L+xj0tXvEDGg3l5TK/5q2SwHvuD7zakyePA=;
	b=WXvOBhQnrav0UZqBuiQZWizGndl7UnBGE4WEv/h2i7ST7IiAdxEZfLwucan2JIMhkBg01P
	uUN5BsPKTBLEdYA1FDN0qfXk7vkFGDhn+EN/Y4xhpGnXkq59aXs2yqe6OA0c/kkflNeB8x
	M4nW29KgQHanGvT8ypy6QbmjFrkppakOwQhJdF3RPgMsYdigf/wmMjAH+ekKBmfSsv83A6
	DDtyepjU24kHPMCyp6lGMu4+T30tWy2hYs4fFm0so009cR1Vp4crko8h7h67DD72woi9E/
	GajaPaudI4kAd8QwWs3G+B9IQjYV5JonnXf/f2veHQIrSFHnCbuSwV7BCC/AwQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1682017840;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=IP9ZNUD6L+xj0tXvEDGg3l5TK/5q2SwHvuD7zakyePA=;
	b=N6gei/2hTKux/sJnPsYCf5fnEl3UCTMBtgBN8A/utlew7zJA7EnlFuMmSWaMDFGbNzCBtQ
	2LmnohpRtSqZ1yAQ==
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: Sean Christopherson <seanjc@google.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>, Paolo Bonzini
 <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom Lendacky
 <thomas.lendacky@amd.com>, Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <56e59a4d-a47f-4bfe-7db5-5f921062ad69@molgen.mpg.de>
References: <87r0sh4m7a.ffs@tglx>
 <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de> <87a5z443g2.ffs@tglx>
 <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
 <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
 <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com> <871qkf3qek.ffs@tglx>
 <26d385da-2ede-5d73-2959-84c8f7d89e03@citrix.com> <87y1mm3iqz.ffs@tglx>
 <ZEFRhXua6Jxvit1R@google.com> <87v8hq35sk.ffs@tglx>
 <56e59a4d-a47f-4bfe-7db5-5f921062ad69@molgen.mpg.de>
Date: Thu, 20 Apr 2023 21:10:38 +0200
Message-ID: <87sfcu2wup.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Thu, Apr 20 2023 at 18:47, Paul Menzel wrote:
> Am 20.04.23 um 17:57 schrieb Thomas Gleixner:
> I quickly applied it on top of your branch, but I am getting:

As I said, it was untested. I was traveling and did not even have access to
a machine to build it completely. Fixed up and tested version below.

Thanks,

        tglx
---
--- a/arch/x86/include/asm/apicdef.h
+++ b/arch/x86/include/asm/apicdef.h
@@ -138,7 +138,8 @@
 #define		APIC_EILVT_MASKED	(1 << 16)
 
 #define APIC_BASE (fix_to_virt(FIX_APIC_BASE))
-#define APIC_BASE_MSR	0x800
+#define APIC_BASE_MSR		0x800
+#define APIC_X2APIC_ID_MSR	0x802
 #define XAPIC_ENABLE	(1UL << 11)
 #define X2APIC_ENABLE	(1UL << 10)
 
@@ -162,6 +163,7 @@
 #define APIC_CPUID(apicid)	((apicid) & XAPIC_DEST_CPUS_MASK)
 #define NUM_APIC_CLUSTERS	((BAD_APICID + 1) >> XAPIC_DEST_CPUS_SHIFT)
 
+#ifndef __ASSEMBLY__
 /*
  * the local APIC register structure, memory mapped. Not terribly well
  * tested, but we might eventually use this one in the future - the
@@ -435,4 +437,5 @@ enum apic_delivery_modes {
 	APIC_DELIVERY_MODE_EXTINT	= 7,
 };
 
+#endif /* !__ASSEMBLY__ */
 #endif /* _ASM_X86_APICDEF_H */
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -195,14 +195,13 @@ extern void nmi_selftest(void);
 #endif
 
 extern unsigned int smpboot_control;
+extern unsigned long apic_mmio_base;
 
 #endif /* !__ASSEMBLY__ */
 
 /* Control bits for startup_64 */
-#define STARTUP_APICID_CPUID_1F 0x80000000
-#define STARTUP_APICID_CPUID_0B 0x40000000
-#define STARTUP_APICID_CPUID_01 0x20000000
-#define STARTUP_APICID_SEV_ES	0x10000000
+#define STARTUP_READ_APICID	0x80000000
+#define STARTUP_APICID_SEV_ES	0x40000000
 
 /* Top 8 bits are reserved for control */
 #define STARTUP_PARALLEL_MASK	0xFF000000
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -101,6 +101,8 @@ static int apic_extnmi __ro_after_init =
  */
 static bool virt_ext_dest_id __ro_after_init;
 
+unsigned long apic_mmio_base __ro_after_init;
+
 /*
  * Map cpu index to physical APIC ID
  */
@@ -2164,6 +2166,7 @@ void __init register_lapic_address(unsig
 
 	if (!x2apic_mode) {
 		set_fixmap_nocache(FIX_APIC_BASE, address);
+		apic_mmio_base = APIC_BASE;
 		apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
 			    APIC_BASE, address);
 	}
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -24,8 +24,10 @@
 #include "../entry/calling.h"
 #include <asm/export.h>
 #include <asm/nospec-branch.h>
+#include <asm/apicdef.h>
 #include <asm/fixmap.h>
 #include <asm/smp.h>
+
 #include <asm/sev-common.h>
 
 /*
@@ -237,37 +239,25 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 
 #ifdef CONFIG_SMP
 	/*
-	 * For parallel boot, the APIC ID is retrieved from CPUID, and then
-	 * used to look up the CPU number.  For booting a single CPU, the
-	 * CPU number is encoded in smpboot_control.
+	 * For parallel boot, the APIC ID is either retrieved from the APIC
+	 * or from CPUID, and then used to look up the CPU number.
+	 * For booting a single CPU, the CPU number is encoded in
+	 * smpboot_control.
 	 *
-	 * Bit 31	STARTUP_APICID_CPUID_1F flag (use CPUID 0x1f)
-	 * Bit 30	STARTUP_APICID_CPUID_0B flag (use CPUID 0x0b)
-	 * Bit 29	STARTUP_APICID_CPUID_01 flag (use CPUID 0x01)
-	 * Bit 28	STARTUP_APICID_SEV_ES flag (CPUID 0x0b via GHCB MSR)
+	 * Bit 31	STARTUP_READ_APICID flag (read APIC ID from the APIC)
+	 * Bit 30	STARTUP_APICID_SEV_ES flag (CPUID 0x0b via GHCB MSR)
 	 * Bit 0-23	CPU# if STARTUP_APICID_CPUID_xx flags are not set
 	 */
 	movl	smpboot_control(%rip), %ecx
+	testl	$STARTUP_READ_APICID, %ecx
+	jnz	.Lread_apicid
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	testl	$STARTUP_APICID_SEV_ES, %ecx
 	jnz	.Luse_sev_cpuid_0b
 #endif
-	testl	$STARTUP_APICID_CPUID_1F, %ecx
-	jnz	.Luse_cpuid_1f
-	testl	$STARTUP_APICID_CPUID_0B, %ecx
-	jnz	.Luse_cpuid_0b
-	testl	$STARTUP_APICID_CPUID_01, %ecx
-	jnz	.Luse_cpuid_01
 	andl	$(~STARTUP_PARALLEL_MASK), %ecx
 	jmp	.Lsetup_cpu
 
-.Luse_cpuid_01:
-	mov	$0x01, %eax
-	cpuid
-	mov	%ebx, %edx
-	shr	$24, %edx
-	jmp	.Lsetup_AP
-
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 .Luse_sev_cpuid_0b:
 	/* Set the GHCB MSR to request CPUID 0x0B_EDX */
@@ -292,24 +282,30 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	jmp	.Lsetup_AP
 #endif
 
-.Luse_cpuid_0b:
-	mov	$0x0B, %eax
-	xorl	%ecx, %ecx
-	cpuid
+.Lread_apicid:
+	mov	$MSR_IA32_APICBASE, %ecx
+	rdmsr
+	testl	$X2APIC_ENABLE, %eax
+	jnz	.Lread_apicid_msr
+
+	/* Read the APIC ID from the fix-mapped MMIO space. */
+	movq	apic_mmio_base(%rip), %rcx
+	addq	$APIC_ID, %rcx
+	movl	(%rcx), %eax
+	shr	$24, %eax
 	jmp	.Lsetup_AP
 
-.Luse_cpuid_1f:
-	mov	$0x1f, %eax
-	xorl	%ecx, %ecx
-	cpuid
+.Lread_apicid_msr:
+	mov	$APIC_X2APIC_ID_MSR, %ecx
+	rdmsr
 
 .Lsetup_AP:
-	/* EDX contains the APIC ID of the current CPU */
+	/* EAX contains the APIC ID of the current CPU */
 	xorq	%rcx, %rcx
 	leaq	cpuid_to_apicid(%rip), %rbx
 
 .Lfind_cpunr:
-	cmpl	(%rbx,%rcx,4), %edx
+	cmpl	(%rbx,%rcx,4), %eax
 	jz	.Lsetup_cpu
 	inc	%ecx
 #ifdef CONFIG_FORCE_NR_CPUS
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1253,41 +1253,22 @@ bool __init arch_cpuhp_init_parallel_bri
 		return false;
 	}
 
-	/* Encrypted guests require special CPUID handling. */
+	/* Encrypted guests require special handling. */
 	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
 		switch (cc_get_vendor()) {
 		case CC_VENDOR_AMD:
 			ctrl = STARTUP_APICID_SEV_ES;
 			if (topology_extended_leaf == 0x0b)
-				goto setup;
+				break;
 			fallthrough;
 		default:
 			pr_info("Parallel CPU startup disabled due to guest state encryption\n");
 			return false;
 		}
+	} else {
+		ctrl = STARTUP_READ_APICID;
 	}
 
-	switch (topology_extended_leaf) {
-	case 0x0b:
-		ctrl = STARTUP_APICID_CPUID_0B;
-		break;
-	case 0x1f:
-		ctrl = STARTUP_APICID_CPUID_1F;
-		break;
-	case 0x00:
-		/* For !x2APIC mode 8 bits from leaf 0x01 are sufficient. */
-		if (!x2apic_mode) {
-			ctrl = STARTUP_APICID_CPUID_01;
-			break;
-		}
-		fallthrough;
-	default:
-		pr_info("Parallel CPU startup disabled. Unsupported topology leaf %u\n",
-			topology_extended_leaf);
-		return false;
-	}
-
-setup:
 	pr_debug("Parallel CPU startup enabled: 0x%08x\n", ctrl);
 	smpboot_control = ctrl;
 	return true;


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 19:45:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 19:45:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524363.815241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppaDs-0008V9-RU; Thu, 20 Apr 2023 19:45:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524363.815241; Thu, 20 Apr 2023 19:45:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppaDs-0008V2-Mt; Thu, 20 Apr 2023 19:45:40 +0000
Received: by outflank-mailman (input) for mailman id 524363;
 Thu, 20 Apr 2023 19:45:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppaDr-0008Us-Mb; Thu, 20 Apr 2023 19:45:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppaDr-00075w-BI; Thu, 20 Apr 2023 19:45:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppaDq-0001N0-TD; Thu, 20 Apr 2023 19:45:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppaDq-0000En-Sm; Thu, 20 Apr 2023 19:45:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SG957UMzl7xKaxxX9UzqbbXIb+32BPjlVg/BodJwnIs=; b=la3VxbJ95rgfZnygnXloOtnGGb
	jxc6o0iHf3nRVbXf3hvY+j0RpptptKF3KhqVZGmyVMKsup+SpvyKUVwhb69VPRGyVfTjN4BqzgsFU
	ycAs4J8mdiNm26qZ232SpkcIV73+otWeSA5hKr6atpklMrqZSijAUvKlUJ62mgruUUwg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180327-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180327: trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:build-armhf:hosts-allocate:broken:regression
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=cb0856346a60fe3eb837ba5e73588a41f81ac05f
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 19:45:38 +0000

flight 180327 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180327/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                cb0856346a60fe3eb837ba5e73588a41f81ac05f
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    4 days
Failing since        180281  2023-04-17 06:24:36 Z    3 days    7 attempts
Testing same since   180327  2023-04-20 03:43:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Alexander Potapenko <glider@google.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Andrew Morton <akpm@linux-foundation.org>
  Arnd Bergmann <arnd@arndb.de>
  Axel Lin <axel.lin@ingics.com>
  Baokun Li <libaokun1@huawei.com>
  Baoqi Zhang <zhangbaoqi@loongson.cn>
  Bhavya Kapoor <b-kapoor@ti.com>
  Bjorn Andersson <andersson@kernel.org>
  Chong Qiao <qiaochong@loongson.cn>
  Chris Morgan <macromorgan@hotmail.com>
  Chuck Lever <chuck.lever@oracle.com>
  Conor Dooley <conor.dooley@microchip.com>
  Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
  Dan Johansen <strit@manjaro.org>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dragan Simic <dragan.simic@gmail.com>
  Enze Li <lienze@kylinos.cn>
  Fabio Estevam <festevam@denx.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Heiko Stuebner <heiko@sntech.de>
  Huacai Chen <chenhuacai@loongson.cn>
  Javier Martinez Canillas <javierm@redhat.com>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jianqun Xu <jay.xu@rock-chips.com>
  Johan Hovold <johan+linaro@kernel.org>
  Jonathan Toppins <jtoppins@redhat.com>
  JR Gonzalez <jrg@scientiam.org>
  Jules Maselbas <jmaselbas@kalray.eu>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Li Lanzhe <u202212060@hust.edu.cn>
  Liam R. Howlett <Liam.Howlett@oracle.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Mel Gorman <mgorman@suse.de>
  Mel Gorman <mgorman@techsingularity.net>
  Michal Hocko <mhocko@suse.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Naoya Horiguchi <naoya.horiguchi@nec.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Ondrej Mosnacek <omosnace@redhat.com>
  Peng Fan <peng.fan@nxp.com>
  Peng Zhang <zhangpeng.00@bytedance.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Xu <peterx@redhat.com>
  Qi Zheng <zhengqi.arch@bytedance.com>
  Qing Zhang <zhangqing@loongson.cn>
  Ricardo Pardini <ricardo@pardini.net>
  Rob Herring <robh@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Scott Mayhew <smayhew@redhat.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  SeongJae Park <sj@kernel.org>
  Shakeel Butt <shakeelb@google.com>
  Shawn Guo <shawnguo@kernel.org>
  Steev Klimaszewski <steev@kali.org> #Thinkpad X13s
  Steve Chou <steve_chou@pesi.com.tw>
  Sudeep Holla <sudeep.holla@arm.com>
  syzbot+a7c1ec5b1d71ceaa5186@syzkaller.appspotmail.com
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vlastimil Babka <vbabka@suse.cz>
  Will Deacon <will@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

(No revision log; it would be 2146 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 20:50:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 20:50:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524372.815257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppbEE-0007Ir-Gd; Thu, 20 Apr 2023 20:50:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524372.815257; Thu, 20 Apr 2023 20:50:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppbEE-0007Ik-D7; Thu, 20 Apr 2023 20:50:06 +0000
Received: by outflank-mailman (input) for mailman id 524372;
 Thu, 20 Apr 2023 20:50:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppbEC-00070i-Nv; Thu, 20 Apr 2023 20:50:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppbEC-0000J6-Hg; Thu, 20 Apr 2023 20:50:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppbEC-00033D-7G; Thu, 20 Apr 2023 20:50:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppbEC-0001sC-6n; Thu, 20 Apr 2023 20:50:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9PuxDlN66ldRVB7UJzvkOMxKlU3NTRyQFK2NR59t3Tw=; b=1lrUABYQGD/R8embbPx4Dt13eJ
	JMhYnziLHTeXdEWiNUoqx6V4+pS7hV11dFFvSN8onDEJN63H6HJZVrICQJuphMGRHsaGiJg46NgP4
	rtAoewQQMusDMVY8YHRc3xbqufhRxVwctk7PDh1DoskJc2Se7cxb/7q3zPEOuSJUd9lA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180339-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180339: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=45f5341f6de16edc7aed082e15e6afd48a664ee1
X-Osstest-Versions-That:
    ovmf=8f4ec0cc433a33967cdbbb945acd37b6ae1d3fce
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 20:50:04 +0000

flight 180339 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180339/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 45f5341f6de16edc7aed082e15e6afd48a664ee1
baseline version:
 ovmf                 8f4ec0cc433a33967cdbbb945acd37b6ae1d3fce

Last test of basis   180337  2023-04-20 16:41:37 Z    0 days
Testing same since   180339  2023-04-20 18:42:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8f4ec0cc43..45f5341f6d  45f5341f6de16edc7aed082e15e6afd48a664ee1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 22:10:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 22:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524380.815267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppcTy-000735-E3; Thu, 20 Apr 2023 22:10:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524380.815267; Thu, 20 Apr 2023 22:10:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppcTy-00072y-9w; Thu, 20 Apr 2023 22:10:26 +0000
Received: by outflank-mailman (input) for mailman id 524380;
 Thu, 20 Apr 2023 22:10:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppcTw-00072o-VZ; Thu, 20 Apr 2023 22:10:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppcTw-00028H-NS; Thu, 20 Apr 2023 22:10:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppcTv-0004qf-Ny; Thu, 20 Apr 2023 22:10:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppcTv-0001Us-NR; Thu, 20 Apr 2023 22:10:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WqZRkVLsNRxz098nxDFeOD2c3S2k4HHfhnHbVz0XuOo=; b=kKQ9sng4dkB8HhZtr97uS1/Dw4
	Z4xY19px4/9bb7ePhqmIgiNr8k9JGnPEVL9UKyhUM/7rrlJK5T8EtgPVNTD9bBfV9ERz0/pkp8pBY
	VIKFDhI0uVU1DMQLEYifU5+LRsniVrrbGm6yAQtYDHFfsTcfayDwWBXqNWIBbxNlnuaQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180340-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180340: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dff17457c482dcb38b7caafd095334f461ef6d35
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 22:10:23 +0000

flight 180340 xen-unstable-smoke real [real]
flight 180344 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180340/
http://logs.test-lab.xenproject.org/osstest/logs/180344/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dff17457c482dcb38b7caafd095334f461ef6d35
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    2 days
Failing since        180314  2023-04-19 10:00:24 Z    1 days   12 attempts
Testing same since   180323  2023-04-19 21:01:54 Z    1 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 317 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 22:46:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 22:46:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524386.815276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppd2d-00022M-5W; Thu, 20 Apr 2023 22:46:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524386.815276; Thu, 20 Apr 2023 22:46:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppd2d-00022F-2k; Thu, 20 Apr 2023 22:46:15 +0000
Received: by outflank-mailman (input) for mailman id 524386;
 Thu, 20 Apr 2023 22:46:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppd2b-000225-JG; Thu, 20 Apr 2023 22:46:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppd2b-0002wa-DE; Thu, 20 Apr 2023 22:46:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppd2a-0006SB-OV; Thu, 20 Apr 2023 22:46:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppd2a-0003ZR-O0; Thu, 20 Apr 2023 22:46:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pWNlw9r9n1MUT4iJt1TufPIgBVuzPhY5172a75ktsH8=; b=u/URXNTRjVMmhQNqAN7Ls0h9VN
	Wwg6U/S6RqyiB3Zt9cXHwH62pbiypGNV3ZQlqxEixzgm7lNS3ziehDMZxSn2WbCw1JhTykfhHCmGV
	4HXTDsNjs5YGeKXbWjEFGNdOg0+K1yMMxyfVoNRU3FR3Zc6tZFRMR7t3V+/CvtJx6QRw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180328-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180328: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    libvirt:build-armhf:<job status>:broken:regression
    libvirt:build-armhf:hosts-allocate:broken:regression
    libvirt:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    libvirt:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    libvirt:build-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=0d0604a51a5674e432335d201418cbba80899b73
X-Osstest-Versions-That:
    libvirt=b486430db34d0db1dcbf39b0d9840d03cd57f615
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 22:46:12 +0000

flight 180328 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180328/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   2 hosts-allocate         broken REGR. vs. 180308
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 180308
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 180308

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              0d0604a51a5674e432335d201418cbba80899b73
baseline version:
 libvirt              b486430db34d0db1dcbf39b0d9840d03cd57f615

Last test of basis   180308  2023-04-19 04:18:48 Z    1 days
Testing same since   180328  2023-04-20 04:18:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Borecki <pavel.borecki@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

------------------------------------------------------------
commit 0d0604a51a5674e432335d201418cbba80899b73
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Mon Apr 17 10:10:15 2023 +0200

    networkRefreshDhcpDaemon: Get dnsmasq's PID once
    
    This is a relic of commit v3.7.0-rc1~132 when getter/setter APIs
    for dnsmasq's PID were introduced. Previously, obj->dnsmasqPid
    was accessed directly. But the aforementioned commit introduced
    two calls to virNetworkObjGetDnsmasqPid() even though the result
    of the first call is stored in a variable.
    
    Remove the second call as it's unnecessary.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit 004d5141c59b08904396d53efe64ebe15f30c7b6
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Mon Apr 17 10:10:04 2023 +0200

    conf: Initialize _virNetworkObj::dnsmasqPid to -1 in virNetworkObjNew()
    
    Throughout all of our network driver code we assume that
    dnsmasqPid of value -1 means the network has no dnsmasq process
    running. There are plenty of calls to:
    
      virNetworkObjSetDnsmasqPid(obj, -1);
    
    or:
    
      pid_t dnsmasqPid = virNetworkObjGetDnsmasqPid(obj);
      if (dnsmasqPid > 0) ...;
    
    Now, a virNetworkObj is created via virNetworkObjNew() which
    might as well set this de-facto default value.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit 212dfa94eeec88eb1a0bcf0c935a0ce17984306a
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Mon Apr 17 10:09:51 2023 +0200

    networkUpdateState: do not assume dnsmasq_caps
    
    Assume there's a dnsmasq running (because there's an active
    virtual network that spawned it). Now, shut down the daemon,
    remove the dnsmasq binary and start the daemon again. At this
    point, networkUpdateState() is called, but dnsmasq_caps is NULL
    (because networkStateInitialize() called earlier failed to set
    them, rightfully though).
    
    Now, the networkUpdateState() tries to read the dnsmasq's PID
    file using virPidFileReadIfAlive() which takes a path to the
    corresponding binary as one of its arguments. To provide that
    path, dnsmasqCapsGetBinaryPath() is called, but since
    dnsmasq_caps is NULL, it dereferences it and thus causes a crash.
    
    It's true that virPidFileReadIfAlive() can deal with a removed
    binary (well virPidFileReadPathIfAlive() which it calls can), but
    iff the binary path is provided in its absolute form. Otherwise,
    virFileResolveAllLinks() fails to canonicalize the path
    (expected, the path doesn't exist anyway).
    
    Therefore, reading dnsmasq's PID file didn't work before
    v8.1.0-rc1~401 which introduced this crash. It was always set to
    -1. But passing NULL as binary path instead, makes
    virPidFileReadIfAlive() return early, right after the PID file is
    read and it's confirmed the PID exists.
    
    Yes, this may yield wrong results, as the PID might be of a
    completely different binary. But this problem is preexistent and
    until we start locking PID files, there's nothing we can do about
    it. IOW, it would require rework of dnsmasq PID file handling.
    
    Fixes: 4b68c982e283471575bacbf87302495864da46fe
    Resolves: https://gitlab.com/libvirt/libvirt/-/issues/456
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit 03094f8c65f71b8cc55b73d1c2af575b0e84a23d
Author: Pavel Borecki <pavel.borecki@gmail.com>
Date:   Wed Apr 19 07:48:48 2023 +0200

    Translated using Weblate (Czech)
    
    Currently translated at 97.9% (10191 of 10400 strings)
    
    Translation: libvirt/libvirt
    Translate-URL: https://translate.fedoraproject.org/projects/libvirt/libvirt/cs/
    
    Translated using Weblate (Czech)
    
    Currently translated at 97.9% (10189 of 10400 strings)
    
    Translation: libvirt/libvirt
    Translate-URL: https://translate.fedoraproject.org/projects/libvirt/libvirt/cs/
    
    Co-authored-by: Pavel Borecki <pavel.borecki@gmail.com>
    Signed-off-by: Pavel Borecki <pavel.borecki@gmail.com>


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 23:33:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 23:33:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524393.815287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppdly-0007HB-OJ; Thu, 20 Apr 2023 23:33:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524393.815287; Thu, 20 Apr 2023 23:33:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppdly-0007H4-LD; Thu, 20 Apr 2023 23:33:06 +0000
Received: by outflank-mailman (input) for mailman id 524393;
 Thu, 20 Apr 2023 23:33:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VXON=AL=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ppdlx-0007Gy-BN
 for xen-devel@lists.xenproject.org; Thu, 20 Apr 2023 23:33:05 +0000
Received: from mail-yw1-x112a.google.com (mail-yw1-x112a.google.com
 [2607:f8b0:4864:20::112a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b17156f0-dfd3-11ed-b220-6b7b168915f2;
 Fri, 21 Apr 2023 01:33:03 +0200 (CEST)
Received: by mail-yw1-x112a.google.com with SMTP id
 00721157ae682-54fb89e1666so2161497b3.3
 for <xen-devel@lists.xenproject.org>; Thu, 20 Apr 2023 16:33:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b17156f0-dfd3-11ed-b220-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682033582; x=1684625582;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=PN2ixoAK9D3BuA0Ys9lwnMZMdyz2nW/UTyxM+6NXzqI=;
        b=AX4YGgPJjZWyR0kqu580/PVasSF+uNRiA7gr7gHqMdE/2vLkFpdjQpKmKbwxnvzy1W
         4jX/DWRv14Hh6L+AuqxNt9TrfsVNFOA+DMslH6msB9ssOPuKNdU5LH4UlZNRkrQgi/yF
         2C7UqT75y8+OehNqUJfsf2nClBSiv3rZRBTIf/8qXrl8xEYNdOg0mllWNN17Cw7o+BHc
         Dv7TVavT6/AGmzCH4d7Tksroa3F14WaPTNTcwjLqhrbimQGYJu8Di4n0FcYkgCSZZuxf
         BRI5CuXSQtBQDqv2Sdnh5lzey/n44xr+tGNe/P6HPq7sE+h81UtCj457v+cM+mPjpeEU
         pLaA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682033582; x=1684625582;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=PN2ixoAK9D3BuA0Ys9lwnMZMdyz2nW/UTyxM+6NXzqI=;
        b=Yz73ePMtXXqZ+g5gf46nol9YXXf8t6OViB3pd7EZv3pkn+xiyaOezBfmM3Opvc9mML
         E/6vQO2RJpWG07z1/Yncd40xJCxNUC9HqjfYRteQvButfbz8DfAjNTDmJ5wz7WEoJR9j
         sU/8jASlMSZ7BJinX/dcpuQJqvHQhj3y8mfAjJNB53Xk3S8bgmN7+877jBjLCtDoWao1
         YrcxyhglEKhs6hXcdiK42iTXXAat0YzKkMdIQ5jLskZIwQR/ZrDQxriVz45gYAyP7Le6
         sHL55jj2eVkUAD2cPrwj5yyiQRQKhHYJYhyA0rhZmwyaYWKM8DNnuF4wbXYFsjXnnoDl
         zlww==
X-Gm-Message-State: AAQBX9d7Eh83QcKq+KYq+ITpEoV5ziwPPWuvdXXicIZKCIR7LJujOpZ8
	vAFzFVxlTe7k8U4Ej1Yq/YBrmdztSLmivmQHqHY=
X-Google-Smtp-Source: AKy350bopqnJQtT+418DUBi5UUf4Yx/MaaBIpccKeoXBRimnuUzPebu61FMjyfXCRvipIUnUp3uBwA4Azmrc32p/hx0=
X-Received: by 2002:a0d:f205:0:b0:541:8810:8d7b with SMTP id
 b5-20020a0df205000000b0054188108d7bmr466003ywf.15.1682033581993; Thu, 20 Apr
 2023 16:33:01 -0700 (PDT)
MIME-Version: 1.0
References: <20230417205048.15870-1-vishal.moola@gmail.com>
 <20230417205048.15870-2-vishal.moola@gmail.com> <da600570-51c7-8088-b46b-7524c9e66e5d@redhat.com>
 <CAOzc2pwpRhNoFbdzdzuvrqbZdf2OsrTvBGs40QCZJjA5fS_q1A@mail.gmail.com> <e0c0ad67-f23f-ff35-80bf-841dcfd43d99@redhat.com>
In-Reply-To: <e0c0ad67-f23f-ff35-80bf-841dcfd43d99@redhat.com>
From: Vishal Moola <vishal.moola@gmail.com>
Date: Thu, 20 Apr 2023 16:32:50 -0700
Message-ID: <CAOzc2pwDtn836Tf0Egh+Z258hxSTVtvwuyU2qiJa1iLa6vZFjQ@mail.gmail.com>
Subject: Re: [PATCH 01/33] s390: Use _pt_s390_gaddr for gmap address tracking
To: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org, 
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, 
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, 
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Apr 19, 2023 at 12:54 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 18.04.23 23:33, Vishal Moola wrote:
> > On Tue, Apr 18, 2023 at 8:45 AM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 17.04.23 22:50, Vishal Moola (Oracle) wrote:
> >>> s390 uses page->index to keep track of page tables for the guest address
> >>> space. In an attempt to consolidate the usage of page fields in s390,
> >>> replace _pt_pad_2 with _pt_s390_gaddr to replace page->index in gmap.
> >>>
> >>> This will help with the splitting of struct ptdesc from struct page, as
> >>> well as allow s390 to use _pt_frag_refcount for fragmented page table
> >>> tracking.
> >>>
> >>> Since page->_pt_s390_gaddr aliases with mapping, ensure it's set to NULL
> >>> before freeing the pages as well.
> >>>
> >>> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> >>> ---
> >>
> >> [...]
> >>
> >>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> >>> index 3fc9e680f174..2616d64c0e8c 100644
> >>> --- a/include/linux/mm_types.h
> >>> +++ b/include/linux/mm_types.h
> >>> @@ -144,7 +144,7 @@ struct page {
> >>>                struct {        /* Page table pages */
> >>>                        unsigned long _pt_pad_1;        /* compound_head */
> >>>                        pgtable_t pmd_huge_pte; /* protected by page->ptl */
> >>> -                     unsigned long _pt_pad_2;        /* mapping */
> >>> +                     unsigned long _pt_s390_gaddr;   /* mapping */
> >>>                        union {
> >>>                                struct mm_struct *pt_mm; /* x86 pgds only */
> >>>                                atomic_t pt_frag_refcount; /* powerpc */
> >>
> >> The confusing part is that these gmap page tables are not ordinary
> >> process page tables that we would ordinarily place into this section
> >> here. That's why they are also not allocated/freed using the typical
> >> page table constructor/destructor ...
> >
> > I initially thought the same, so I was quite confused when I saw
> > __gmap_segment_gaddr was using pmd_pgtable_page().
> >
> > Although they are not ordinary process page tables, since we
> > eventually want to move them out of struct page, I think shifting them
> > to be in ptdescs, being a memory descriptor for page tables, makes
> > the most sense.
>
> Seeing utilities like tlb_remove_page_ptdesc() that don't really apply
> to such page tables, I wonder if we should much rather treat such
> shadow/auxiliary/... page tables (just like other architectures like
> x86, arm, ... employ as well) as a distinct type.
>
> And have ptdesc be the common type for all process page tables.

Although I do like the idea of having a distinct type for them, I'm not sure
I see the merits of having another type specifically for those types of
page tables.

As it currently is, tlb_remove_table() is only distinct from tlb_remove_page()
when an architecture defines its own removal function. I'm not too familiar
with most of their differences, but we can probably continue to let them do
that. As of now, I'm not too sure what a distinct type would look like that
could meet all their needs holistically.

> >
> > Another option is to leave pmd_pgtable_page() as is just for this case.
> > Or we can revert commit 7e25de77bc5ea which uses the function here
> > then figure out where these gmap pages table pages will go later.
>
> I'm always confused when reading gmap code, so let me have another look :)
>
> The confusing part is that s390x shares the lowest level page tables
> (PTE tables) between the process and gmap ("guest mapping", similar to
> EPT on x86-64). It maps these process PTE tables (covering 1 MiB) into
> gmap-specific PMD tables.

Especially in cases like this. If the architecture wants to share page tables
then everything being in form ptdesc would make that easiest, and
continue to let them define their own niche functions for their needs.

> pmd_pgtable_page() should indeed always give us a gmap-specific
> PMD-table. In fact, something allocated via gmap_alloc_table().
>
> Decoupling both concepts sounds like a good idea.

Yeah, I'm not a fan of how this gmap caller is the only external caller
using this to get a page for their own purposes. I'll update that in v2.


From xen-devel-bounces@lists.xenproject.org Thu Apr 20 23:34:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Apr 2023 23:34:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524397.815297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppdnV-0007oT-2J; Thu, 20 Apr 2023 23:34:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524397.815297; Thu, 20 Apr 2023 23:34:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppdnU-0007oM-VZ; Thu, 20 Apr 2023 23:34:40 +0000
Received: by outflank-mailman (input) for mailman id 524397;
 Thu, 20 Apr 2023 23:34:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppdnT-0007o0-2U; Thu, 20 Apr 2023 23:34:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppdnS-00048h-Nt; Thu, 20 Apr 2023 23:34:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppdnS-0008JD-DU; Thu, 20 Apr 2023 23:34:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppdnS-000816-Cv; Thu, 20 Apr 2023 23:34:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lv+UE+W78G1f83RIPjpWglK/tc061EBehcF9fT6BIEw=; b=OKSYRoX96BwV96C1dEqj1XtGfO
	/Dx2CBFp6oVQkKhGxAvu3zN/l/ITZ5a80H94Mo59nP7hUyYeetKmLT3ilUdUD7YVgkihE1yIQZ0M6
	iTN7qyyLlNaG1dz9UxDyqOgTcaFF6a1giPDlE7QIy8jHvf3Xl5xSa8Qn3moAzgAWQQu4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180343-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180343: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=9bf79303ae5cb4d0e14ed7a219107b53e2ecdcd0
X-Osstest-Versions-That:
    ovmf=45f5341f6de16edc7aed082e15e6afd48a664ee1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Apr 2023 23:34:38 +0000

flight 180343 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180343/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9bf79303ae5cb4d0e14ed7a219107b53e2ecdcd0
baseline version:
 ovmf                 45f5341f6de16edc7aed082e15e6afd48a664ee1

Last test of basis   180339  2023-04-20 18:42:06 Z    0 days
Testing same since   180343  2023-04-20 21:12:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Wu <linusx.wu@intel.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   45f5341f6d..9bf79303ae  9bf79303ae5cb4d0e14ed7a219107b53e2ecdcd0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 02:01:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 02:01:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524408.815313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppg5h-00058l-Hj; Fri, 21 Apr 2023 02:01:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524408.815313; Fri, 21 Apr 2023 02:01:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppg5h-00058V-Bu; Fri, 21 Apr 2023 02:01:37 +0000
Received: by outflank-mailman (input) for mailman id 524408;
 Fri, 21 Apr 2023 02:01:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppg5f-00058J-EG; Fri, 21 Apr 2023 02:01:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppg5f-00073O-7Y; Fri, 21 Apr 2023 02:01:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppg5e-0005eC-Pt; Fri, 21 Apr 2023 02:01:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppg5e-0007Do-PR; Fri, 21 Apr 2023 02:01:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RBSdCcjUAwlzjkvlr33GAN+R88nZ/kmhjnWhQGocF3Y=; b=HMQaqsZAUTWZi6FAohj5mxZwzq
	jZ1U70wcsA28fSpl/2d2IklGzZmXhjev/4igCNDYcB15O9x7vuN4T+tVAUsFUjeCCxAZXtkc4P/5v
	suJOh0Pv46BjocY7xFtIw97m/cgk7pIIJ54cNQFISvtAEydi82tR8i4sJxI1KMG6+5Es=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180346-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180346: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=3163f34a42a5dacaf63499e69bf0fefdc409d89e
X-Osstest-Versions-That:
    ovmf=9bf79303ae5cb4d0e14ed7a219107b53e2ecdcd0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 02:01:34 +0000

flight 180346 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180346/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3163f34a42a5dacaf63499e69bf0fefdc409d89e
baseline version:
 ovmf                 9bf79303ae5cb4d0e14ed7a219107b53e2ecdcd0

Last test of basis   180343  2023-04-20 21:12:04 Z    0 days
Testing same since   180346  2023-04-20 23:40:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9bf79303ae..3163f34a42  3163f34a42a5dacaf63499e69bf0fefdc409d89e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 02:26:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 02:26:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524415.815324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppgU5-0007dF-IO; Fri, 21 Apr 2023 02:26:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524415.815324; Fri, 21 Apr 2023 02:26:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppgU5-0007d8-Dm; Fri, 21 Apr 2023 02:26:49 +0000
Received: by outflank-mailman (input) for mailman id 524415;
 Fri, 21 Apr 2023 02:26:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppgU3-0007cy-Qe; Fri, 21 Apr 2023 02:26:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppgU3-0007RT-Im; Fri, 21 Apr 2023 02:26:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppgU2-0006Cr-Ts; Fri, 21 Apr 2023 02:26:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppgU2-00089M-TN; Fri, 21 Apr 2023 02:26:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PuXktgMOi9IKEALP6OzV/O+QQZr5Nw6cdq7QQmjUThY=; b=apEcVJdk0UQbLQSGmN7OD5y3B0
	xxpH0Mg0KNM+et1h1gEcQiumb8lVU0JTry/lrlfhgSXZiHdTR8U1CovAqdmG6yyqkGKTkX5LKWrK1
	jc2EcVSDGhe1tyum/XduGFwGpTIgbaqPbkCGVXoc5NZdmy2AxU/ElefjW7WxBKla3fzc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180345-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180345: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dff17457c482dcb38b7caafd095334f461ef6d35
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 02:26:46 +0000

flight 180345 xen-unstable-smoke real [real]
flight 180348 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180345/
http://logs.test-lab.xenproject.org/osstest/logs/180348/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dff17457c482dcb38b7caafd095334f461ef6d35
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    2 days
Failing since        180314  2023-04-19 10:00:24 Z    1 days   13 attempts
Testing same since   180323  2023-04-19 21:01:54 Z    1 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 317 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 02:51:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 02:51:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524422.815336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppgrr-0002aq-HH; Fri, 21 Apr 2023 02:51:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524422.815336; Fri, 21 Apr 2023 02:51:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppgrr-0002aj-E7; Fri, 21 Apr 2023 02:51:23 +0000
Received: by outflank-mailman (input) for mailman id 524422;
 Fri, 21 Apr 2023 02:51:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L0IV=AM=gmail.com=htejun@srs-se1.protection.inumbo.net>)
 id 1ppgrp-0002ad-KG
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 02:51:21 +0000
Received: from mail-pf1-x434.google.com (mail-pf1-x434.google.com
 [2607:f8b0:4864:20::434])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 646b1c9d-dfef-11ed-b220-6b7b168915f2;
 Fri, 21 Apr 2023 04:51:20 +0200 (CEST)
Received: by mail-pf1-x434.google.com with SMTP id
 d2e1a72fcca58-63b60366047so1512959b3a.1
 for <xen-devel@lists.xenproject.org>; Thu, 20 Apr 2023 19:51:20 -0700 (PDT)
Received: from localhost
 (2603-800c-1a02-1bae-a7fa-157f-969a-4cde.res6.spectrum.com.
 [2603:800c:1a02:1bae:a7fa:157f:969a:4cde])
 by smtp.gmail.com with ESMTPSA id
 n11-20020a056a00212b00b0063f167b41bdsm366523pfj.38.2023.04.20.19.51.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 20 Apr 2023 19:51:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 646b1c9d-dfef-11ed-b220-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682045479; x=1684637479;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:sender:from:to:cc:subject:date
         :message-id:reply-to;
        bh=HCPjQIn4S0PQfuCNwvELUc3S8eh+s9mUvwJw7/d5uLU=;
        b=XMLFtb9wYP9gqpPSKTS7T/p74xQvZs2nOgKFEigpBUk7eM6Kb8bvSh/sAWoXK1dcSj
         KWvLP7zD7yOkPXgcobpT2jsCaQldUniFt2NYg1yoc43TcDmS4stE8W3P1iJVksjOSFgn
         jFjMkwjkXmYgdw0wK5uHCXH21wcr+eFz6d3UdFMZxRfvvPHtseSbL2d8RWYeNIn0heuJ
         FLmKIh3j/9iUx4YIIjpr7XgLkSBxR4PUF5gMWpa5OT7mIXmR9L4DowA/AYb/EeXijmVO
         hy4tJiy1od2XsdGkrSh8kcquhPEtsLG9Z6cwResFqBkNejkqYJI+lXHC9L6wZvRHe583
         m/KQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682045479; x=1684637479;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:sender:x-gm-message-state:from
         :to:cc:subject:date:message-id:reply-to;
        bh=HCPjQIn4S0PQfuCNwvELUc3S8eh+s9mUvwJw7/d5uLU=;
        b=DWds2gRN4Ngjjm9FhV702yrneRGXuSwpSWOGUuZYIcrCX0Mxz5xQwQj/xa9J3b2sQ+
         TbFmyt34dSfq9dMEm/f8S/stH39DcbBLao2hTjSGJmQs4YycugUKQ9xvrnI9gFtmcph/
         /q5LbuawrRKPmxXN+ZY1Y92USHoiQEWf9k8NjtsZ1L96+E/Cc0pC7NFAXMOKqvGlQ/f+
         xgK0OR1I3UR2/5Pavz9wZyi5WL45mvwLz4xiOPrN/o/LIILQ8Coh7lue55f9vEcQgHXv
         HU9vjuge8kFKwjtB3oEi7o41a5ZzP8Qlj2Pw4bgR0KC35XpekPvAqY3pjyh3nm4ukLud
         Gtjw==
X-Gm-Message-State: AAQBX9e8Gct//Fxsjtutbe8/K5cay/q6fJYihpMAOx6nx/PjwX3v7I+0
	Cf/ikhWYvbKEn1BQ8YXs8oE=
X-Google-Smtp-Source: AKy350ZqQdhTADBSG8sDswV21AqHVDTdbtktu2O/S0Hkg5un6hH0viAxUI3uMxO9h5GAesNZA/ePlA==
X-Received: by 2002:a05:6a00:15ce:b0:63d:3411:f9e3 with SMTP id o14-20020a056a0015ce00b0063d3411f9e3mr5061818pfu.19.1682045478499;
        Thu, 20 Apr 2023 19:51:18 -0700 (PDT)
Sender: Tejun Heo <htejun@gmail.com>
From: Tejun Heo <tj@kernel.org>
To: jiangshanlai@gmail.com
Cc: linux-kernel@vger.kernel.org,
	kernel-team@meta.com,
	Tejun Heo <tj@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	xen-devel@lists.xenproject.org
Subject: [PATCH 15/22] xen/pvcalls: Use alloc_ordered_workqueue() to create ordered workqueues
Date: Thu, 20 Apr 2023 16:50:39 -1000
Message-Id: <20230421025046.4008499-16-tj@kernel.org>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230421025046.4008499-1-tj@kernel.org>
References: <20230421025046.4008499-1-tj@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

BACKGROUND
==========

When multiple work items are queued to a workqueue, their execution order
doesn't match the queueing order. They may get executed in any order and
simultaneously. When fully serialized execution - one by one in the queueing
order - is needed, an ordered workqueue should be used which can be created
with alloc_ordered_workqueue().
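As a sketch of the two creation paths (kernel context only, not a standalone program; `work_a`/`work_b` are assumed to be initialized `struct work_struct`s):

```c
/* Sketch only -- kernel context. An ordered workqueue executes items
 * strictly one at a time, in queueing order: */
struct workqueue_struct *wq;

wq = alloc_ordered_workqueue("my_ordered", 0);
if (!wq)
	return -ENOMEM;

queue_work(wq, &work_a);	/* runs first */
queue_work(wq, &work_b);	/* starts only after work_a finishes */

/* By contrast, a plain unbound workqueue may run both concurrently: */
wq = alloc_workqueue("my_unbound", WQ_UNBOUND, 0);
```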

However, alloc_ordered_workqueue() was a later addition. Before it, an
ordered workqueue could be obtained by creating an UNBOUND workqueue with
@max_active==1. This originally was an implementation side-effect which was
broken by 4c16bd327c74 ("workqueue: implement NUMA affinity for unbound
workqueues"). Because there were users that depended on the ordered execution,
5c0338c68706 ("workqueue: restore WQ_UNBOUND/max_active==1 to be ordered")
made the workqueue allocation path implicitly promote UNBOUND workqueues w/
@max_active==1 to ordered workqueues.

While this has worked okay, overloading the UNBOUND allocation interface
this way creates other issues. It's difficult to tell whether a given
workqueue actually needs to be ordered and users that legitimately want a
min concurrency level wq unexpectedly get an ordered one instead. With
planned UNBOUND workqueue updates to improve execution locality and more
prevalence of chiplet designs which can benefit from such improvements, this
isn't a state we wanna be in forever.

This patch series audits all callsites that create an UNBOUND workqueue w/
@max_active==1 and converts them to alloc_ordered_workqueue() as necessary.

WHAT TO LOOK FOR
================

The conversions are from

  alloc_workqueue(WQ_UNBOUND | flags, 1, args...)

to

  alloc_ordered_workqueue(flags, args...)

which don't cause any functional changes. If you know that fully ordered
execution is not necessary, please let me know. I'll drop the conversion and
instead add a comment noting that fact to reduce confusion while the
conversion is in progress.

If you aren't fully sure, it's completely fine to let the conversion
through. The behavior will stay exactly the same and we can always
reconsider later.

As there are follow-up workqueue core changes, I'd really appreciate it if
the patch could be routed through the workqueue tree w/ your acks. Thanks.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/pvcalls-back.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 1f5219e12cc3..b41516f3f84a 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -361,7 +361,7 @@ static struct sock_mapping *pvcalls_new_active_socket(
 	map->data.in = map->bytes;
 	map->data.out = map->bytes + XEN_FLEX_RING_SIZE(map->ring_order);
 
-	map->ioworker.wq = alloc_workqueue("pvcalls_io", WQ_UNBOUND, 1);
+	map->ioworker.wq = alloc_ordered_workqueue("pvcalls_io", 0);
 	if (!map->ioworker.wq)
 		goto out;
 	atomic_set(&map->io, 1);
@@ -637,7 +637,7 @@ static int pvcalls_back_bind(struct xenbus_device *dev,
 
 	INIT_WORK(&map->register_work, __pvcalls_back_accept);
 	spin_lock_init(&map->copy_lock);
-	map->wq = alloc_workqueue("pvcalls_wq", WQ_UNBOUND, 1);
+	map->wq = alloc_ordered_workqueue("pvcalls_wq", 0);
 	if (!map->wq) {
 		ret = -ENOMEM;
 		goto out;
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Fri Apr 21 03:23:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 03:23:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524429.815349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pphMp-0006B2-3X; Fri, 21 Apr 2023 03:23:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524429.815349; Fri, 21 Apr 2023 03:23:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pphMp-0006Av-0D; Fri, 21 Apr 2023 03:23:23 +0000
Received: by outflank-mailman (input) for mailman id 524429;
 Fri, 21 Apr 2023 03:23:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ii9+=AM=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pphMo-0006Ap-B0
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 03:23:22 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20613.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::613])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id daa51514-dff3-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 05:23:18 +0200 (CEST)
Received: from DUZPR01CA0192.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:4b6::15) by AS2PR08MB9416.eurprd08.prod.outlook.com
 (2603:10a6:20b:594::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.21; Fri, 21 Apr
 2023 03:23:05 +0000
Received: from DBAEUR03FT059.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:4b6:cafe::30) by DUZPR01CA0192.outlook.office365.com
 (2603:10a6:10:4b6::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.27 via Frontend
 Transport; Fri, 21 Apr 2023 03:23:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT059.mail.protection.outlook.com (100.127.142.102) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.26 via Frontend Transport; Fri, 21 Apr 2023 03:23:04 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Fri, 21 Apr 2023 03:23:04 +0000
Received: from 68399d3fb0b3.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 096B505E-256B-43B8-AA24-CAA8DE7B3A84.1; 
 Fri, 21 Apr 2023 03:22:56 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 68399d3fb0b3.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 21 Apr 2023 03:22:56 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by GV1PR08MB7756.eurprd08.prod.outlook.com (2603:10a6:150:57::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Fri, 21 Apr
 2023 03:22:52 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.022; Fri, 21 Apr 2023
 03:22:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: daa51514-dff3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LENz2T83NBmBa89xPiAXm3Qmok3tPn+Fo4PXGwsp4vE=;
 b=MUBi1QJqCZNBJ2T1bmbGoYbSPx9f1miJOxU/XHR2N3jFWyy9HFrEEyH8bK6+4MTlvQFUn/V13jK+KufviGxMhT0xJJEw8rOaHDVb1n10UYlyAaAQOCvUM6huAbH32fypsxRcxIsGyK5U0CQhpm2NVAQosrrDzl4PuNPTIAmr+bM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=F89WwR5p+Rc2JSjatdP0UtxvOCXw/Le5Wiw/l6vALzo4NWOr+74onijUlQtOcFgtLQApbZlt+OhofKuZxWn0FRzuL31dFySGdzD1c8fQ6xYGwD5WU9f+s90a7dk/LgOwa1DNVxLQeG331YlJo8/9bTtI2B7rnm6+QSjh51VYYj3HANPLUVKw0EDbKihMCf6ZXzte7fNXkMBPcu1r+C8oQ3auNKDpg/uDTKHbz8bN4WPC3XySidp2cwLuWugCiSJL8IwOoF6l0FeDajzicWYHN53ukqI2apvTpu6WwzrvmK18pkyu6Dk9THnNc5DABM2DEQWFWqh9VLsl5LmR3p2tRw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LENz2T83NBmBa89xPiAXm3Qmok3tPn+Fo4PXGwsp4vE=;
 b=OU8QtHZwMNisF+g/fL7Etgh6NGRAqg85dMSSvOWR3AFIaklv0eQ9xcRJUNtsTKG0P79cNjyBV7VHVMudOvYm8yrMI/vbvOksEOgrQbFxtWws9wbZA4HVpL9DPMVwiJ10ZPLHbg4mLWgc/iLwYQVELkBOqcLAcgKED6QAROK3F8NhY3EJYx5gBYvBFh9qfOV40s6ULlPGmSPYMQxmYHe/tE7fAp6lbhqM7DUuP0UKOjnO7WJ03p/bTXEPp/xXlUPidLff3Er/SaMh5kOSAVNOrkS/oemh5f7k2bbykVj5NKECPyrmUdYaIf4wsA1zxvpADGkVtJFIwBzR7sVeK+eztg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LENz2T83NBmBa89xPiAXm3Qmok3tPn+Fo4PXGwsp4vE=;
 b=MUBi1QJqCZNBJ2T1bmbGoYbSPx9f1miJOxU/XHR2N3jFWyy9HFrEEyH8bK6+4MTlvQFUn/V13jK+KufviGxMhT0xJJEw8rOaHDVb1n10UYlyAaAQOCvUM6huAbH32fypsxRcxIsGyK5U0CQhpm2NVAQosrrDzl4PuNPTIAmr+bM=
From: Henry Wang <Henry.Wang@arm.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "community.manager@xenproject.org" <community.manager@xenproject.org>,
	Julien Grall <julien@xen.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Anthony PERARD
	<anthony.perard@citrix.com>, George Dunlap <george.dunlap@cloud.com>, Juergen
 Gross <jgross@suse.com>, Wei Chen <Wei.Chen@arm.com>
Subject: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18 release:
 Proposed release schedule)
Thread-Topic: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
 release: Proposed release schedule)
Thread-Index: AQHZdACNvTYSYql02kern4BL0bR1Kw==
Date: Fri, 21 Apr 2023 03:22:49 +0000
Message-ID:
 <AS8PR08MB7991EAA2EF0E381FAFB4C1FD92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
References:
 <AS8PR08MB7991424A3167C70A9B29530C92929@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To:
 <AS8PR08MB7991424A3167C70A9B29530C92929@AS8PR08MB7991.eurprd08.prod.outlook.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 70CBA437B84F104799042B1B96D0274B.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|GV1PR08MB7756:EE_|DBAEUR03FT059:EE_|AS2PR08MB9416:EE_
X-MS-Office365-Filtering-Correlation-Id: 9204668f-4fbd-4602-4a85-08db4217b830
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 +Icr5BS1NYU8t5GVRPPa0TKrMMrOlKMhO7C8sr4Q4Brx4fpwQSMIzD5l8Xvt4QzJuAe/cyW7vq7BGMNctmiGBVcq8s1fIf4Qmhvho/g7EpmtQDOkEuw89Qs/g6APKMqOGionaziQ3Ile6isgfNhXblwRJlBvlYbUWa6FXoTZHHNpC3JPueUDRgojDKdu58Gsa9jXox6GOOjEblABZMuIy1InU/Ag2Q2XWei46Gx4Shh4g3WTklKBHYr3XFDdKh2M8iwX+eeTm0surS+/Rx0/cU359ZpQxa3xPKMb0f2CeJdQgwEbZ5VzBWTaX9Kxn4vJ2ETIZzsBEH8XA1itI9y+LM4r2vy4QVuBoRV8qTIVfnHJR/giHPpO1/eBLQK77giPGnxBDlVxSLkziB6U5L8A/ySIwTRmKyjXgF4mYD7zZSTSiyLm4+XgnOMXOPxxr4fjnCubLDmxCfr6ns1sJoPWqX3Qfmt7nyTGt7jKtVTjXAPSPWLxPNCppAK6e1r+jXjJCPWMOuQbKFd8OpQMsX0G7RQrfIPl1DiTnO5TdpdVrerynqtA7+TJ0L3+O9rhLtoaLiMQ3MUVrJuKp7LQtGS4N+7k7j58YWX6nOpDtMmvgsywmXhs8c3hFN+d9KEaBMdp
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(376002)(366004)(346002)(39860400002)(396003)(451199021)(54906003)(8936002)(8676002)(122000001)(52536014)(15650500001)(7696005)(6666004)(55016003)(7416002)(5660300002)(6506007)(38070700005)(33656002)(41300700001)(26005)(186003)(9686003)(83380400001)(2906002)(38100700002)(76116006)(66946007)(66556008)(66476007)(66446008)(64756008)(4326008)(6916009)(316002)(478600001)(71200400001)(86362001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7756
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	fa5fb66d-8f0e-4b11-69d6-08db4217af92
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Hp2nzMigZpwrYVtjH8I7S9qlU6CpZRVAQH2GaURcLxELXyIP0OtIHUd6ZxG34X9kcmB9qmhoomwXPryObIAkxPoq3svfKY1nVPNujcipDEKwW+k4GjXoJoHyEuSJwXTJkmwkQbUAyQKJ/M8rlJrRwkwbtJpPiaK8iWFXJFSEuWeFIU2Ke1Y7GLGTlBMye13JG+QH4LpUOweUjjVYR0/gA95cWgmbiR3P8u2Hdi9Hoy8thrEUvsRBKrNbzOgTbuSpwQKOQl9ld9jaNhpWyG4BRAoVDlM3K2wdVfVuSwNLL0GxfbEB7oPuvlQDZLJlIqvaFofaiIGOUa22Sq8oPQX8WCIrPc2gZAWexniHzBy2nKXBOnjrX/CTFy2+WH3Fnj42duHTgVndrZOgjG6JOfxkUOMLzIX4z1Ypsr/3lfT6witQxaCiUhR2OmqHrmCF7VtiNfvTaCrU1QUuF5/079W0pjYYvbbiCrnPIwvzjS+tHc2zV19mAU87i92Ddg6GZlCGXzIPU7lwd023jUPS3vgz2AOpcvsj3xu9ty4fvFQhsTwrn9w3ZsAYaEIVMHJgoUadY763wPvWm97/sH/8sZmlOlcXgtrrJlpn4M4aJl9sNTFeZZBmxMAEfMuUB+uJNAZaQb/A/Ka3vtekdB5F1PN+xvkdwLf+5m+1Onp20SjlNRHvnxO6HqPVTUBfzu9+iRHfhBdAv+bWqG3z+b9EY0wqyA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(136003)(376002)(346002)(451199021)(46966006)(36840700001)(40470700004)(33656002)(82740400003)(82310400005)(81166007)(356005)(2906002)(15650500001)(47076005)(36860700001)(83380400001)(336012)(8676002)(52536014)(5660300002)(8936002)(86362001)(26005)(186003)(41300700001)(6506007)(9686003)(316002)(4326008)(6916009)(70586007)(70206006)(40480700001)(6666004)(54906003)(55016003)(7696005)(40460700003)(478600001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Apr 2023 03:23:04.4754
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9204668f-4fbd-4602-4a85-08db4217b830
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9416

Hi all,

Following the discussion in the April community call, here are the two
updated possible release schedule options that I came up with.

Both options satisfy the requirements/concerns that I've received so far,
but I would personally prefer option 2, as we shouldn't expect much
progress in August due to the EU holiday season. I wonder if anyone has
any objections or alternative suggestions.

Please don't hesitate to raise your concerns and opinions. I would suggest
cutting off feedback collection by the middle of May (say May 19). If
nobody has a better proposal by then, let's go with option 2 by "lazy
consensus". Thanks.

** Proposed option 1: Wed Aug 30, 2023 **
(+8 months from Xen 4.17 release)

- Last posting date          Fri Jun 16, 2023

Patches adding new features are expected to be posted to the mailing
list by this date, although perhaps not in their final version.

(Note that Xen Summit is Jun 24 - 26, 2023)

- Feature freeze             Fri Jul 7, 2023 (+3 weeks from Last posting date)

Patches adding new features should be committed by this date.
Straightforward bugfixes may continue to be accepted by maintainers.

- Code freeze                Fri Jul 21, 2023 (+2 weeks from Feature freeze)

Bugfixes only.

- Hard code freeze           Fri Aug 11, 2023 (+3 weeks from Code freeze)

Bugfixes for serious bugs (including regressions), and low-risk fixes only.

- Final commits              Fri Aug 25, 2023 (+2 weeks from Hard code freeze)

Branch off staging-4.18.

- Release                    Wed Aug 30, 2023


** Proposed option 2: Wed Sep 27, 2023 (or the first week of Oct)**
(+9 months from Xen 4.17 release)

- Last posting date          Fri Jul 14, 2023

Patches adding new features are expected to be posted to the mailing
list by this date, although perhaps not in their final version.

- Feature freeze             Fri Aug 4, 2023 (+3 weeks from Last posting date)

Patches adding new features should be committed by this date.
Straightforward bugfixes may continue to be accepted by maintainers.

- Code freeze                Fri Aug 18, 2023 (+2 weeks from Feature freeze)

Bugfixes only.

- Hard code freeze           Fri Sep 8, 2023 (+3 weeks from Code freeze)

Bugfixes for serious bugs (including regressions), and low-risk fixes only.

- Final commits              Fri Sep 22, 2023 (+2 weeks from Hard code freeze)

Branch off staging-4.18.

- Release                    Wed Sep 27, 2023

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 04:04:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 04:04:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524435.815358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pphzt-00028w-5v; Fri, 21 Apr 2023 04:03:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524435.815358; Fri, 21 Apr 2023 04:03:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pphzt-00028p-3L; Fri, 21 Apr 2023 04:03:45 +0000
Received: by outflank-mailman (input) for mailman id 524435;
 Fri, 21 Apr 2023 04:03:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pphzs-00028f-0x; Fri, 21 Apr 2023 04:03:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pphzr-0001Gg-LC; Fri, 21 Apr 2023 04:03:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pphzr-0002k8-1C; Fri, 21 Apr 2023 04:03:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pphzr-00072r-0m; Fri, 21 Apr 2023 04:03:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tnbm1jAepUtbMbD3yBPuFtAhOup8yTScXd1pjpDB/1U=; b=l9DmdsrGARmyWvJ5t79f1ecRDZ
	TUGdw0FRLp3YxTa/nad4it9xS6ZFnxnatFAmR0G6ZlwAp9QJ9jzKTwOqc6oA7xogiU2wF6m0tSHBb
	JV5D44qFLNiEc+EntzBJeHxDYWhlwakaXDWy67YkdqQuKSEbG90i7FwES45BJ/667a24=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180333-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 180333: trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-5.4:build-armhf:<job status>:broken:regression
    linux-5.4:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-5.4:build-armhf:hosts-allocate:broken:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=58f42ed1cd31238745bddd943c4f5849dc83a2ac
X-Osstest-Versions-That:
    linux=32bea3bac5ca484c6f7e302c8c96fc686f62e7b4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 04:03:43 +0000

flight 180333 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180333/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf                   2 hosts-allocate        broken starved in 180149
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180149
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180149
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180149
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180149
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180149
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180149
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180149
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180149
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180149
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                58f42ed1cd31238745bddd943c4f5849dc83a2ac
baseline version:
 linux                32bea3bac5ca484c6f7e302c8c96fc686f62e7b4

Last test of basis   180149  2023-04-05 09:43:16 Z   15 days
Testing same since   180333  2023-04-20 10:12:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Amir Goldstein <amir73il@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Ard Biesheuvel <ardb@kernel.org>
  Arseniy Krasnov <AVKrasnov@sberdevices.ru>
  Bang Li <libang.linuxer@gmail.com>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Basavaraj Natikar <Basavaraj.Natikar@amd.com>
  Biju Das <biju.das.jz@bp.renesas.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Bjørn Mork <bjorn@mork.no>
  Boris Brezillon <boris.brezillon@collabora.com>
  Brian Foster <bfoster@redhat.com>
  Chandan Babu R <chandan.babu@oracle.com>
  Chris Paterson (CIP) <chris.paterson2@renesas.com>
  Christoph Hellwig <hch@lst.de>
  Christophe Kerello <christophe.kerello@foss.st.com>
  Chuck Lever <chuck.lever@oracle.com>
  D Scott Phillips <scott@os.amperecomputing.com>
  Dai Ngo <dai.ngo@oracle.com>
  Darrick J. Wong <darrick.wong@oracle.com>
  Darrick J. Wong <djwong@kernel.org>
  David Howells <dhowells@redhat.com>
  David Lechner <david@lechnology.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Denis Plotnikov <den-plotnikov@yandex-team.ru>
  Dhruva Gole <d-gole@ti.com>
  Emanuele Ghidoli <emanuele.ghidoli@toradex.com>
  Enrico Sau <enrico.sau@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Eric Van Hensbergen <ericvh@kernel.org>
  Felix Fietkau <nbd@nbd.name>
  Florian Fainelli <f.fainelli@gmail.com>
  George Cherian <george.cherian@marvell.com>
  Grant Grundler <grundler@chromium.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Hsin-Yi Wang <hsinyi@chromium.org>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason Gunthorpe <jgg@nvidia.com>
  Jeff Layton <jlayton@kernel.org>
  Jeffrey Mitchell <jeffrey.mitchell@starlab.io>
  Jeremy Soller <jeremy@system76.com>
  Jiri Kosina <jkosina@suse.cz>
  Johan Hovold <johan+linaro@kernel.org>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  John Keeping <john@metanate.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kaixu Xia <kaixuxia@tencent.com>
  Kan Liang <kan.liang@linux.intel.com>
  Kees Cook <keescook@chromium.org>
  Kees Jan Koster <kjkoster@kjkoster.org>
  Kornel Dulęba <korneld@chromium.org>
  Lars-Peter Clausen <lars@metafoo.de>
  Lee Jones <lee.jones@linaro.org>
  Linus Walleij <linus.walleij@linaro.org>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Michal Kolar <mich.k@seznam.cz>
  Min Li <lm0963hack@gmail.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Miquel Raynal <miquel.raynal@bootlin.com> # v5.10, v4.19
  Mirsad Todorovac <mirsad.todorovac@alu.unizg.hr>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Nicolas Schichan <nschichan@freebox.fr>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
  Paolo Abeni <pabeni@redhat.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Pratyush Yadav <ptyadav@amazon.de>
  RD Babiera <rdbabiera@google.com>
  Richard Weinberger <richard@nod.at>
  Robbie Harwood <rharwood@redhat.com>
  Roman Gushchin <roman.gushchin@linux.dev>
  Rongwei Wang <rongwei.wang@linux.alibaba.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sachi King <nakato@nakato.io>
  Saravanan Vajravel <saravanan.vajravel@broadcom.com>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Sherry Sun <sherry.sun@nxp.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Shuangpeng Bai <sjb7183@psu.edu>
  Steve Clevenger <scclevenger@os.amperecomputing.com>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Suzuki K Poulose <suzuki.poulose@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tejun Heo <tj@kernel.org>
  Thierry Reding <thierry.reding@gmail.com>
  Thomas Glanzmann <thomas@glanzmann.de>
  Thomas Gleixner <tglx@linutronix.de>
  Tim Crawford <tcrawford@system76.com>
  Tom Saeger <tom.saeger@oracle.com>
  Tyler Hicks (Microsoft) <code@tyhicks.com>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Waiman Long <longman@redhat.com>
  William Breathitt Gray <william.gray@linaro.org>
  Wim Van Sebroeck <wim@linux-watchdog.org>
  Wolfram Sang <wsa@kernel.org>
  Xin Long <lucien.xin@gmail.com>
  Xu Biang <xubiang@hust.edu.cn>
  Yongchen Yin <wb-yyc939293@alibaba-inc.com>
  ZhaoLong Wang <wangzhaolong1@huawei.com>
  Zheng Wang <zyytlz.wz@163.com>
  Zheng Yejian <zhengyejian1@huawei.com>
  Zhihao Cheng <chengzhihao1@huawei.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken
broken-step build-armhf hosts-allocate

Not pushing.

(No revision log; it would be 2835 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 05:49:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 05:49:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524433.815375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppjdp-0004FW-VE; Fri, 21 Apr 2023 05:49:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524433.815375; Fri, 21 Apr 2023 05:49:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppjdp-0004FP-RA; Fri, 21 Apr 2023 05:49:05 +0000
Received: by outflank-mailman (input) for mailman id 524433;
 Fri, 21 Apr 2023 03:36:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z4Eq=AM=bytedance.com=xieyongji@srs-se1.protection.inumbo.net>)
 id 1pphZN-0007iA-IB
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 03:36:22 +0000
Received: from mail-pf1-x433.google.com (mail-pf1-x433.google.com
 [2607:f8b0:4864:20::433])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ac76b144-dff5-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 05:36:18 +0200 (CEST)
Received: by mail-pf1-x433.google.com with SMTP id
 d2e1a72fcca58-63b70ca0a84so2277553b3a.2
 for <xen-devel@lists.xenproject.org>; Thu, 20 Apr 2023 20:36:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac76b144-dff5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=bytedance.com; s=google; t=1682048176; x=1684640176;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1jAyLYhp4ydL9pXoFgTievz8fgHmpFOhj5xccyxtnbM=;
        b=lp2uPGA/zzrUBtwKYaosqKXYbp1Hfuaopw4Et1t12z8UqIEtWDFJUqvZZm8AXxlaCr
         J3D4BRSEP9UPFrmgBm0cFxhBEUg8LGrs1C4Ooa5DbdTjExLLIKiduWFeFqUZegKhy0lO
         PI+OUij8NLg8tdSDSUzWRwOKx0ey59ZTNH3eq4xYsIb0LEp/mIQ3vti5mCr2ciDZ2JMs
         GEfQiIjGOsYrMyGISiiPL/vdzn7T2jZsXypgv0Nc5Fg+66Fc4ppoZfjvKnb8nkPz0ZFC
         ByQBiMwVApRyDGt0M5Dap5+fvdGoz7hgNghs/fgJ1YsLYUm/2yHyvqNlG69gI7Ik3k7r
         H2mA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682048176; x=1684640176;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=1jAyLYhp4ydL9pXoFgTievz8fgHmpFOhj5xccyxtnbM=;
        b=FfQ2Y09cPLFMWa1M+IoFEqxUKqgGusT7Ajed4NNE1x6PQ1jpeQifx0XEuUofDDAZmB
         Ooy1n/HukOazUpK6rm3IIzktR2qqotmrARzogunAwjBBrxtxt1/0Z8wYOaPgQ0Zn20/A
         jyxQQaJHr6B2p1L3BRUlYYQlP6LG4meC98DBqZl5P+1fSMvnw+YBiZShQl3CbYE5L+ko
         IMJ2fGqpUNwXvIJhAhckLpSoMas457z3G6o4mfvC0noNoTJSodZ0d5kpSvCtBFF5oCNn
         Zg0N3YaV+7v8QrHqKI76sVGbIS+7iWeUADEjJ0111dszDhxM4S8WiJtlq1usTXfZYSFu
         RFKw==
X-Gm-Message-State: AAQBX9cdc6ZKlbtka0fKL77ioUZhW3mmTxB2lWvKQWCIUQkWQFwdgDgB
	OAdPUqeq89stotJlU8zH3Pen3eIIQk+VdTGiyWv1
X-Google-Smtp-Source: AKy350atzH7REymFEakAtJcBDaAv5k9Z2KJkzzHAI/Sc8hjb+qBg15jiqdI8NG3zUUovGGdTavKfB0tnlOpRCjqsQ48=
X-Received: by 2002:a05:6a21:328d:b0:ee:ac3c:d2de with SMTP id
 yt13-20020a056a21328d00b000eeac3cd2demr6256801pzb.28.1682048176395; Thu, 20
 Apr 2023 20:36:16 -0700 (PDT)
MIME-Version: 1.0
References: <20230420113732.336620-1-stefanha@redhat.com> <20230420113732.336620-14-stefanha@redhat.com>
In-Reply-To: <20230420113732.336620-14-stefanha@redhat.com>
From: Yongji Xie <xieyongji@bytedance.com>
Date: Fri, 21 Apr 2023 11:36:02 +0800
Message-ID: <CACycT3suSR+nYhe4z2zuocYsBBVSDBCE+614zT0jfDZCBRveaA@mail.gmail.com>
Subject: Re: [PATCH v3 13/20] block/export: rewrite vduse-blk drain code
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu devel list <qemu-devel@nongnu.org>, Peter Lieven <pl@kamp.de>, 
	=?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@linaro.org>, 
	Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
	Juan Quintela <quintela@redhat.com>, qemu-block@nongnu.org, 
	Eduardo Habkost <eduardo@habkost.net>, Richard Henderson <richard.henderson@linaro.org>, 
	David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>, Fam Zheng <fam@euphon.net>, 
	Julia Suvorova <jusual@redhat.com>, Ronnie Sahlberg <ronniesahlberg@gmail.com>, 
	xen-devel@lists.xenproject.org, Hanna Reitz <hreitz@redhat.com>, 
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>, eesposit@redhat.com, Kevin Wolf <kwolf@redhat.com>, 
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Paul Durrant <paul@xen.org>, Aarushi Mehta <mehta.aaru20@gmail.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Anthony Perard <anthony.perard@citrix.com>, 
	"Richard W.M. Jones" <rjones@redhat.com>, Coiby Xu <Coiby.Xu@gmail.com>, 
	Stefano Garzarella <sgarzare@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Stefan,

On Thu, Apr 20, 2023 at 7:39 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> vduse_blk_detach_ctx() waits for in-flight requests using
> AIO_WAIT_WHILE(). This is not allowed according to a comment in
> bdrv_set_aio_context_commit():
>
>   /*
>    * Take the old AioContext when detaching it from bs.
>    * At this point, new_context lock is already acquired, and we are now
>    * also taking old_context. This is safe as long as bdrv_detach_aio_context
>    * does not call AIO_POLL_WHILE().
>    */
>
> Use this opportunity to rewrite the drain code in vduse-blk:
>
> - Use the BlockExport refcount so that vduse_blk_exp_delete() is only
>   called when there are no more requests in flight.
>
> - Implement .drained_poll() so in-flight request coroutines are stopped
>   by the time .bdrv_detach_aio_context() is called.
>
> - Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
>   .bdrv_detach_aio_context() constraint violation. It's no longer
>   needed due to the previous changes.
>
> - Always handle the VDUSE file descriptor, even in drained sections. The
>   VDUSE file descriptor doesn't submit I/O, so it's safe to handle it in
>   drained sections. This ensures that the VDUSE kernel code gets a fast
>   response.
>
> - Suspend virtqueue fd handlers in .drained_begin() and resume them in
>   .drained_end(). This eliminates the need for the
>   aio_set_fd_handler(is_external=true) flag, which is being removed from
>   QEMU.
>
> This is a long list but splitting it into individual commits would
> probably lead to git bisect failures - the changes are all related.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
>  1 file changed, 93 insertions(+), 39 deletions(-)
>
> diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
> index f7ae44e3ce..35dc8fcf45 100644
> --- a/block/export/vduse-blk.c
> +++ b/block/export/vduse-blk.c
> @@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
>      VduseDev *dev;
>      uint16_t num_queues;
>      char *recon_file;
> -    unsigned int inflight;
> +    unsigned int inflight; /* atomic */
> +    bool vqs_started;
>  } VduseBlkExport;
>
>  typedef struct VduseBlkReq {
> @@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
>
>  static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
>  {
> -    vblk_exp->inflight++;
> +    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {

I wonder why we need to use atomic operations here.
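For context, a counter that is incremented and decremented from different threads (e.g. virtqueue fd handlers running in an IOThread vs. the main loop) does need atomic read-modify-write operations; a plain `++`/`--` would be a data race. A minimal C11 sketch of the fetch-inc/fetch-dec pattern the patch uses (illustrative names, plain `<stdatomic.h>` rather than QEMU's qatomic helpers):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative in-flight counter; mirrors the fetch-inc/fetch-dec
 * pattern in the patch, not QEMU's actual qatomic API. */
typedef struct {
    atomic_uint inflight;
} Export;

/* Returns true when this call took the counter from 0 to 1,
 * i.e. the first in-flight request (take a reference here). */
static bool inflight_inc(Export *exp)
{
    return atomic_fetch_add(&exp->inflight, 1) == 0;
}

/* Returns true when this call took the counter from 1 to 0,
 * i.e. the last in-flight request (drop the reference here). */
static bool inflight_dec(Export *exp)
{
    return atomic_fetch_sub(&exp->inflight, 1) == 1;
}
```

The fetch-and-modify form also lets the 0→1 and 1→0 transitions be detected exactly once even under concurrent callers, which is what makes the ref/unref pairing safe.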

> +        /* Prevent export from being deleted */
> +        aio_context_acquire(vblk_exp->export.ctx);
> +        blk_exp_ref(&vblk_exp->export);
> +        aio_context_release(vblk_exp->export.ctx);
> +    }
>  }
>
>  static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
>  {
> -    if (--vblk_exp->inflight == 0) {
> +    if (qatomic_fetch_dec(&vblk_exp->inflight) == 1) {
> +        /* Wake AIO_WAIT_WHILE() */
>          aio_wait_kick();
> +
> +        /* Now the export can be deleted */
> +        aio_context_acquire(vblk_exp->export.ctx);
> +        blk_exp_unref(&vblk_exp->export);
> +        aio_context_release(vblk_exp->export.ctx);
>      }
>  }
>
> @@ -124,8 +136,12 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
>  {
>      VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
>
> +    if (!vblk_exp->vqs_started) {
> +        return; /* vduse_blk_drained_end() will start vqs later */
> +    }
> +
>      aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
> -                       true, on_vduse_vq_kick, NULL, NULL, NULL, vq);
> +                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
>      /* Make sure we don't miss any kick after reconnecting */
>      eventfd_write(vduse_queue_get_fd(vq), 1);
>  }
> @@ -133,9 +149,14 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
>  static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
>  {
>      VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
> +    int fd = vduse_queue_get_fd(vq);
>
> -    aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
> -                       true, NULL, NULL, NULL, NULL, NULL);
> +    if (fd < 0) {
> +        return;
> +    }
> +
> +    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
> +                       NULL, NULL, NULL, NULL, NULL);
>  }
>
>  static const VduseOps vduse_blk_ops = {
> @@ -152,42 +173,19 @@ static void on_vduse_dev_kick(void *opaque)
>
>  static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
>  {
> -    int i;
> -
>      aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
> -                       true, on_vduse_dev_kick, NULL, NULL, NULL,
> +                       false, on_vduse_dev_kick, NULL, NULL, NULL,
>                         vblk_exp->dev);
>
> -    for (i = 0; i < vblk_exp->num_queues; i++) {
> -        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
> -        int fd = vduse_queue_get_fd(vq);
> -
> -        if (fd < 0) {
> -            continue;
> -        }
> -        aio_set_fd_handler(vblk_exp->export.ctx, fd, true,
> -                           on_vduse_vq_kick, NULL, NULL, NULL, vq);
> -    }
> +    /* Virtqueues are handled by vduse_blk_drained_end() */
>  }
>
>  static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
>  {
> -    int i;
> -
> -    for (i =3D 0; i < vblk_exp->num_queues; i++) {
> -        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
> -        int fd = vduse_queue_get_fd(vq);
> -
> -        if (fd < 0) {
> -            continue;
> -        }
> -        aio_set_fd_handler(vblk_exp->export.ctx, fd,
> -                           true, NULL, NULL, NULL, NULL, NULL);
> -    }
>      aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
> -                       true, NULL, NULL, NULL, NULL, NULL);
> +                       false, NULL, NULL, NULL, NULL, NULL);
>
> -    AIO_WAIT_WHILE(vblk_exp->export.ctx, vblk_exp->inflight > 0);
> +    /* Virtqueues are handled by vduse_blk_drained_begin() */
>  }
>
>
> @@ -220,8 +218,55 @@ static void vduse_blk_resize(void *opaque)
>                              (char *)&config.capacity);
>  }
>
> +static void vduse_blk_stop_virtqueues(VduseBlkExport *vblk_exp)
> +{
> +    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
> +        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
> +        vduse_blk_disable_queue(vblk_exp->dev, vq);
> +    }
> +
> +    vblk_exp->vqs_started = false;
> +}
> +
> +static void vduse_blk_start_virtqueues(VduseBlkExport *vblk_exp)
> +{
> +    vblk_exp->vqs_started = true;
> +
> +    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
> +        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
> +        vduse_blk_enable_queue(vblk_exp->dev, vq);
> +    }
> +}
> +
> +static void vduse_blk_drained_begin(void *opaque)
> +{
> +    BlockExport *exp = opaque;
> +    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
> +
> +    vduse_blk_stop_virtqueues(vblk_exp);
> +}
> +
> +static void vduse_blk_drained_end(void *opaque)
> +{
> +    BlockExport *exp = opaque;
> +    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
> +
> +    vduse_blk_start_virtqueues(vblk_exp);
> +}
> +
> +static bool vduse_blk_drained_poll(void *opaque)
> +{
> +    BlockExport *exp = opaque;
> +    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
> +
> +    return qatomic_read(&vblk_exp->inflight) > 0;
> +}
> +
>  static const BlockDevOps vduse_block_ops = {
> -    .resize_cb = vduse_blk_resize,
> +    .resize_cb     = vduse_blk_resize,
> +    .drained_begin = vduse_blk_drained_begin,
> +    .drained_end   = vduse_blk_drained_end,
> +    .drained_poll  = vduse_blk_drained_poll,
>  };
>
>  static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
> @@ -268,6 +313,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
>      vblk_exp->handler.serial = g_strdup(vblk_opts->serial ?: "");
>      vblk_exp->handler.logical_block_size = logical_block_size;
>      vblk_exp->handler.writable = opts->writable;
> +    vblk_exp->vqs_started = true;
>
>      config.capacity =
>              cpu_to_le64(blk_getlength(exp->blk) >> VIRTIO_BLK_SECTOR_BITS);
> @@ -322,14 +368,20 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
>          vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
>      }
>
> -    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), true,
> +    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
>                         on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
>
>      blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
>                                   vblk_exp);
> -
>      blk_set_dev_ops(exp->blk, &vduse_block_ops, exp);
>
> +    /*
> +     * We handle draining ourselves using an in-flight counter and by disabling
> +     * virtqueue fd handlers. Do not queue BlockBackend requests, they need to
> +     * complete so the in-flight counter reaches zero.
> +     */
> +    blk_set_disable_request_queuing(exp->blk, true);
> +
>      return 0;
>  err:
>      vduse_dev_destroy(vblk_exp->dev);
> @@ -344,6 +396,9 @@ static void vduse_blk_exp_delete(BlockExport *exp)
>      VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
>      int ret;
>
> +    assert(qatomic_read(&vblk_exp->inflight) == 0);
> +
> +    vduse_blk_detach_ctx(vblk_exp);
>      blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
>                                      vblk_exp);
>      blk_set_dev_ops(exp->blk, NULL, NULL);
> @@ -355,13 +410,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
>      g_free(vblk_exp->handler.serial);
>  }
>
> +/* Called with exp->ctx acquired */
>  static void vduse_blk_exp_request_shutdown(BlockExport *exp)
>  {
>      VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
>
> -    aio_context_acquire(vblk_exp->export.ctx);
> -    vduse_blk_detach_ctx(vblk_exp);
> -    aio_context_acquire(vblk_exp->export.ctx);
> +    vduse_blk_stop_virtqueues(vblk_exp);

Can we add an AIO_WAIT_WHILE() here? Then we don't need to
increase/decrease the BlockExport refcount during I/O processing.
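To illustrate the alternative being suggested, here is a toy C model of "stop the virtqueues, then wait for in-flight requests to drain before deleting the export". The names are illustrative, not QEMU code, and the spin loop stands in for QEMU's AIO_WAIT_WHILE(), which also dispatches pending AioContext handlers while it polls:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

/* Toy model: a shutdown path that drains by waiting on the in-flight
 * counter instead of taking a BlockExport reference per I/O burst. */
typedef struct {
    atomic_uint inflight;
} Export;

/* Stands in for request completion running in another thread
 * (e.g. an IOThread finishing outstanding requests). */
static void *complete_requests(void *opaque)
{
    Export *exp = opaque;
    while (atomic_load(&exp->inflight) > 0) {
        atomic_fetch_sub(&exp->inflight, 1); /* one request finishes */
    }
    return NULL;
}

static void request_shutdown(Export *exp)
{
    /* vduse_blk_stop_virtqueues(exp): no new requests can arrive. */
    while (atomic_load(&exp->inflight) > 0) {
        /* AIO_WAIT_WHILE(exp->ctx, inflight > 0) would go here,
         * running AioContext handlers instead of busy-spinning. */
    }
    /* Safe to delete the export now: nothing is in flight. */
}
```

The trade-off under discussion: the patch's ref/unref approach keeps the export alive for as long as any request is in flight, while this model concentrates the waiting in the shutdown path.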

Thanks,
Yongji


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 06:19:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 06:19:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524448.815385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppk6w-0007jN-AN; Fri, 21 Apr 2023 06:19:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524448.815385; Fri, 21 Apr 2023 06:19:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppk6w-0007jG-76; Fri, 21 Apr 2023 06:19:10 +0000
Received: by outflank-mailman (input) for mailman id 524448;
 Fri, 21 Apr 2023 06:19:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=slXh=AM=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1ppk6u-0007jA-Gw
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 06:19:08 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0601.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6b483cba-e00c-11ed-b220-6b7b168915f2;
 Fri, 21 Apr 2023 08:19:06 +0200 (CEST)
Received: from AM6P193CA0107.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:88::48)
 by PAXPR08MB7365.eurprd08.prod.outlook.com (2603:10a6:102:225::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Fri, 21 Apr
 2023 06:19:02 +0000
Received: from AM7EUR03FT022.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:88:cafe::8e) by AM6P193CA0107.outlook.office365.com
 (2603:10a6:209:88::48) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.27 via Frontend
 Transport; Fri, 21 Apr 2023 06:19:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT022.mail.protection.outlook.com (100.127.140.217) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.24 via Frontend Transport; Fri, 21 Apr 2023 06:19:01 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Fri, 21 Apr 2023 06:19:01 +0000
Received: from 14df61cd0fd9.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2D72305C-93DC-424F-A0A7-C0D7CB369AA7.1; 
 Fri, 21 Apr 2023 06:18:50 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 14df61cd0fd9.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 21 Apr 2023 06:18:50 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS2PR08MB9689.eurprd08.prod.outlook.com (2603:10a6:20b:607::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Fri, 21 Apr
 2023 06:18:47 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::8e01:7058:6f40:90e1%7]) with mapi id 15.20.6319.022; Fri, 21 Apr 2023
 06:18:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b483cba-e00c-11ed-b220-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sZbSnU5Dj0hCxeLw/YUMLbImyl1A3Ia5b1rQ7yzBo1w=;
 b=C37k9z3hzeJY4dqoeWaYg0PjAHJE03XPMB354QfbW1AMaXIyoAdcZtaX7S7QezelCDvRGcl7GJHqTlBb3CZMBB2nPPlKeonZ8FmlGMEDrFtuLLbJYkXN4AR57uRHdBwYeq+RGkvjGF/B5RtXvyE1KkJD2oqAk49PXBa/ltSOfBk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6335a298667bd911
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U4cTCIXohqYmd2dOqlM1AuwQ1EKzMR6x/NXRZL69VHCYok5IUSaOxRVhRgEhO07tUB2FY6k9a1fo6wjza1Tie/pTlAnXEf4ILgjTNOWG04b7Gge2bWB6csy3PRISFBs07NwQEtf+Sfy42wFzkDpa8p60TNRZlN/C9kz08puVvjyBZnUM+0aTBEh0D77VIg5a0YdDrPPzjy1Cjv6VTF4kBSIO75pmIWOXPL6ac5X9bGxW0kHwKSWKJvqMnnIBG5AnLlrEJGBFmlIeaJ/FYY0//LnFEp2nyD4GylPCK1/CygVLNLHOHEUNnd2JcHDvDJb8zQYp6WQOCCyk0FVJmd5PBg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sZbSnU5Dj0hCxeLw/YUMLbImyl1A3Ia5b1rQ7yzBo1w=;
 b=WeauND7aKFLCk+fFrW6bIpGFZWmQ1CYtDeK1AMGp/RAocIopuuW7fledtJZH3dJQeAyKLY187CHl2ICPxaNXohNoKKtIUeCrZDbzoblDElld+QbxY8kuimKlzBtSHiEEEa+tBuis/5T1Z1Ey6qoc5h11nFucDk1hYFwk1mBbIrpDLMbc4riHh7cAMKJZn4WwHU2mb3/Sc/NZj+IslacnCtLz3E6cIIcBdVc8PLrqPrMutM+XSmDeW/2d1HstJeAJ4FOhUnuFhSe3QWgguZIWwi04xdTPuI7NT3p/jlTumcz1zW/E57DdhifQo9nqBWCPOg75qN1pxRdbIC8twFXutw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sZbSnU5Dj0hCxeLw/YUMLbImyl1A3Ia5b1rQ7yzBo1w=;
 b=C37k9z3hzeJY4dqoeWaYg0PjAHJE03XPMB354QfbW1AMaXIyoAdcZtaX7S7QezelCDvRGcl7GJHqTlBb3CZMBB2nPPlKeonZ8FmlGMEDrFtuLLbJYkXN4AR57uRHdBwYeq+RGkvjGF/B5RtXvyE1KkJD2oqAk49PXBa/ltSOfBk=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Henry Wang <Henry.Wang@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"community.manager@xenproject.org" <community.manager@xenproject.org>, Julien
 Grall <julien@xen.org>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Anthony PERARD
	<anthony.perard@citrix.com>, George Dunlap <george.dunlap@cloud.com>, Juergen
 Gross <jgross@suse.com>, Wei Chen <Wei.Chen@arm.com>
Subject: Re: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
 release: Proposed release schedule)
Thread-Topic: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
 release: Proposed release schedule)
Thread-Index: AQHZdACYcHm0T1gRME+oVIqMr2d2RK81SjQA
Date: Fri, 21 Apr 2023 06:18:47 +0000
Message-ID: <26AF44F7-A028-4737-9BF9-266CBAA83A18@arm.com>
References:
 <AS8PR08MB7991424A3167C70A9B29530C92929@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <AS8PR08MB7991EAA2EF0E381FAFB4C1FD92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To:
 <AS8PR08MB7991EAA2EF0E381FAFB4C1FD92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AS2PR08MB9689:EE_|AM7EUR03FT022:EE_|PAXPR08MB7365:EE_
X-MS-Office365-Filtering-Correlation-Id: 3ad40d74-2fac-4f81-21ca-08db42304ca2
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 5TcunEkh4Wkvov8ya0WS3YCFnyUdcg4KezEgpFEGdBY+gvrR4l+4n3w+W3oWgLtobdgZj4JaYOhlMBT1h/NZXo4xmtUwXPWnFH0Zc9Z7l++luQJD11Yx2I/HvDcqnoNZSleC5K9Kj2H86BszZF5/jXpUQFhpWfsxXvJ5FeZJaBcO5k3HzfMvlrD1h727AadIWv5gBi+L5FOmE1/clR6b9mrJEMrNb/W74VIQRm9yUFs48RZ2g2N0ozpHjTKkF2xdfvAfU6im7RNKSoAqa/Nw/QF7/CKFAhHVOUJjguyc3glSjxKfmi05xTs/jK+9+rWBYwZK7rn6XG66PTPtyn26YYVEpaOdpt3QvXPlO95NRcOwzwnNUcB8nBZ3a6+N7LuM1yyVHyQpxN58rOJHVlPU6IuO5fgdrOW8FzlSVbjKkVPi+xMEkkTyBZ8UrRdx+jq8JwoUacE0dnUJYSUOhR4V495EuqKpQ8xxBs+w1g4GrJ01x6fqkkuiExfhkzjkcWVT/Gl1hEJLC1g5adEmIfqC2W2e7PaAcjzSr2qRnARZg9dl9soPArQ4me4vMRhk7BJ2XAkGzSevRAxc/wiEH5/ApBY1FTus+Ap1rnUGSogt+/1+g1Kxh9u13xjtc4DB+CT5sz3I4CXlHZaNsAnDVUtZLA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(136003)(396003)(39860400002)(376002)(346002)(451199021)(36756003)(7416002)(15650500001)(38070700005)(2906002)(38100700002)(8936002)(8676002)(5660300002)(6862004)(86362001)(186003)(33656002)(6486002)(71200400001)(6512007)(6506007)(37006003)(54906003)(6636002)(478600001)(2616005)(83380400001)(53546011)(66476007)(66556008)(66946007)(316002)(66446008)(41300700001)(122000001)(91956017)(4326008)(64756008)(76116006)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <F5E1E87777C1574CB4884FF0B18C589D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9689
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	be51319b-2e82-4c63-aa75-08db4230443d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Y5aokEdi++7nvbh2myNeiwgRr+C4BzwtBDlgWbMH/DWAKbZY8efSh4VaLo0VH8Fa6NZ/FXzBS6hPUXMLCtqE718Za9hsXQWtUGCmDhMh+JCiLfX8mlbrksZKSHRq/Tckl/y9TcA750x2GV5ASzsgHlQEVv33NJCBIuw+H+BfuvaK80J9mOPw+Mppb7R0iPNx955rw+WKbOSevZBBTpB4v3jK2X7K7YGo5KY+U6OEKIyd27B0Ald35FlWGRrXnoxPH4f5i+F2azoTms9ffGngJVeEV1+1fv3SURQlwgiQliDZhZIy76sgypOHdLpwVH63KaxkUnNkMXBOytZZ4wLTlGAKc3H+TmqTtd23d9pYwfwyS/B4j8Z3E2kmnGKlNVMCE5FKI578XnzSVyhm/mq51w9eicOqgY2oyu8TSOL1+MY3K1Lzp2ZomZ0b7uPbxUH3KzT/mJEFgJJKGQL9HpeK7dh33aqjoVJvLYJt4wt/rBWaTrYDwyfPlmSaCmSSBr0oMkDK52bLi8XH7kO1On+H2/+idtP3VyUQRTCoCBuhz56zaM8sgkJFB4IULc/9XxntZ6a8KqEpHvH34RmbUa1eJrYY37kwBgi11eH/D5KSY/3x97Hml+dF50P42BLobdxgRlIWPXwNSmkivdDNCX1Wa2fptgb6rgF2ty0BPvbmTuLdt9Dq28OZAm/kCDbfYLjiS5rftzzNff7wbTq7bkt++g==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(39860400002)(376002)(346002)(451199021)(40470700004)(36840700001)(46966006)(36756003)(70206006)(40460700003)(15650500001)(2906002)(8936002)(8676002)(5660300002)(6862004)(40480700001)(86362001)(82310400005)(186003)(33656002)(6486002)(6512007)(26005)(6506007)(37006003)(54906003)(6636002)(478600001)(36860700001)(2616005)(83380400001)(47076005)(336012)(53546011)(316002)(81166007)(41300700001)(82740400003)(4326008)(70586007)(356005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Apr 2023 06:19:01.4145
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ad40d74-2fac-4f81-21ca-08db42304ca2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB7365

Hi,

> On 21 Apr 2023, at 05:22, Henry Wang <Henry.Wang@arm.com> wrote:
> 
> Hi all,
> 
> Following the discussion in the April community call, here are the two
> updated possible release schedule options that I came up with.
> 
> Both options satisfy the requirements/concerns that I've received so
> far, but I would personally prefer option 2, as we shouldn't expect
> much progress in August due to the EU holiday season. I wonder if
> anyone has any objections or alternative suggestions.

Even if the release date is in September, all major deadlines will happen in
August.

So how about introducing an end-of-October possibility?

Cheers
Bertrand

> 
> Please don't hesitate to raise your concerns and opinions. I would
> encourage that the feedback collection is cut off by the middle of May
> (say May 19). If nobody has anything better, then let's go with option 2
> by "lazy consensus". Thanks.
> 
> ** Proposed option 1: Wed Aug 30, 2023 **
> (+8 months from the Xen 4.17 release)
> 
> - Last posting date          Fri Jun 16, 2023
> 
> Patches adding new features are expected to be posted to the mailing
> list by this date, although perhaps not in their final version.
> 
> (Note that Xen Summit is Jun 24 - 26, 2023)
> 
> - Feature freeze             Fri Jul 7, 2023 (+3 weeks from Last posting date)
> 
> Patches adding new features should be committed by this date.
> Straightforward bugfixes may continue to be accepted by maintainers.
> 
> - Code freeze                Fri Jul 21, 2023 (+2 weeks from Feature freeze)
> 
> Bugfixes only.
> 
> - Hard code freeze           Fri Aug 11, 2023 (+3 weeks from Code freeze)
> 
> Bugfixes for serious bugs (including regressions), and low-risk fixes only.
> 
> - Final commits              Fri Aug 25, 2023 (+2 weeks from Hard code freeze)
> 
> Branch off staging-4.18.
> 
> - Release                    Wed Aug 30, 2023
> 
> 
> ** Proposed option 2: Wed Sep 27, 2023 (or the first week of Oct) **
> (+9 months from the Xen 4.17 release)
> 
> - Last posting date          Fri Jul 14, 2023
> 
> Patches adding new features are expected to be posted to the mailing
> list by this date, although perhaps not in their final version.
> 
> - Feature freeze             Fri Aug 4, 2023 (+3 weeks from Last posting date)
> 
> Patches adding new features should be committed by this date.
> Straightforward bugfixes may continue to be accepted by maintainers.
> 
> - Code freeze                Fri Aug 18, 2023 (+2 weeks from Feature freeze)
> 
> Bugfixes only.
> 
> - Hard code freeze           Fri Sep 8, 2023 (+3 weeks from Code freeze)
> 
> Bugfixes for serious bugs (including regressions), and low-risk fixes only.
> 
> - Final commits              Fri Sep 22, 2023 (+2 weeks from Hard code freeze)
> 
> Branch off staging-4.18.
> 
> - Release                    Wed Sep 27, 2023
> 
> Kind regards,
> Henry
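
[Editorial note: the week offsets in the option 2 schedule above can be sanity-checked with a short date-arithmetic sketch. The dates and offsets are taken directly from the proposal; the variable names are illustrative only.]

```python
from datetime import date, timedelta

# Proposed option 2 milestones, derived from the offsets stated above.
last_posting = date(2023, 7, 14)                       # Fri Jul 14, 2023
feature_freeze = last_posting + timedelta(weeks=3)     # +3 weeks -> Fri Aug 4
code_freeze = feature_freeze + timedelta(weeks=2)      # +2 weeks -> Fri Aug 18
hard_code_freeze = code_freeze + timedelta(weeks=3)    # +3 weeks -> Fri Sep 8
final_commits = hard_code_freeze + timedelta(weeks=2)  # +2 weeks -> Fri Sep 22

# All milestones land on Fridays (weekday() == 4), matching the proposal.
print(feature_freeze, code_freeze, hard_code_freeze, final_commits)
```

Each computed date agrees with the Fri Aug 4 / Aug 18 / Sep 8 / Sep 22 milestones listed in the proposal.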



From xen-devel-bounces@lists.xenproject.org Fri Apr 21 06:22:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 06:22:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524452.815395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppkAL-0000iJ-PV; Fri, 21 Apr 2023 06:22:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524452.815395; Fri, 21 Apr 2023 06:22:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppkAL-0000iC-Lm; Fri, 21 Apr 2023 06:22:41 +0000
Received: by outflank-mailman (input) for mailman id 524452;
 Fri, 21 Apr 2023 06:22:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ii9+=AM=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppkAJ-0000i4-Gi
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 06:22:39 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e8818e4f-e00c-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 08:22:37 +0200 (CEST)
Received: from ZR0P278CA0185.CHEP278.PROD.OUTLOOK.COM (2603:10a6:910:44::18)
 by GV1PR08MB8449.eurprd08.prod.outlook.com (2603:10a6:150:81::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Fri, 21 Apr
 2023 06:22:29 +0000
Received: from VI1EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:910:44:cafe::e2) by ZR0P278CA0185.outlook.office365.com
 (2603:10a6:910:44::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.28 via Frontend
 Transport; Fri, 21 Apr 2023 06:22:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT007.mail.protection.outlook.com (100.127.144.86) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.25 via Frontend Transport; Fri, 21 Apr 2023 06:22:28 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Fri, 21 Apr 2023 06:22:27 +0000
Received: from da8bee436a8d.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A698BBFB-E192-4B00-AF60-D88F24204446.1; 
 Fri, 21 Apr 2023 06:22:17 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id da8bee436a8d.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 21 Apr 2023 06:22:17 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB8PR08MB5321.eurprd08.prod.outlook.com (2603:10a6:10:11c::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Fri, 21 Apr
 2023 06:22:14 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.022; Fri, 21 Apr 2023
 06:22:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8818e4f-e00c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fq670Mv16OR02ZgYB5RB2WKv6GaLRrEMgFvHvz1JLBo=;
 b=QAYrWTbcaCXBUjY4KO+T8Mu4K9bZxdW0D32PyyrnoOJy7i/iqMDapQkt5JUlyz+gDE2zWoGBz/AGJMMWzZDbXPBGsAwaDrUOnjVS7sKhE6+831yNBhiLCmaVpZvwrzGYXk5abS8/8ONS8385RyD/AoUiUjXM/+xHjqBSTwA1hn0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SQzK3tvoh89hTJvrWn1T0Tfxv/ebSZbjY+HZxl0E8qawlFAHGp40+fA5ZsfuncUGQc2xRcirmAfhtvbbSZY5u1Vl2uHtA4fxFlgMpkNrze+zWsQXPV9IUxxCGM2/Mz/mwqzApYKBzKDk+GDXn7QZgG7XfRZPe7dOR9mtGR2axL76gP99bEgoPjJFMO2R34j2RBw9b19p1+0hni/cG56tf1RVycg0VpjzdfmeALJ9NcYdYvHLbEXGtB8fDTgZOb75ddkhHpWBB4IsMg4ZRajHC5wR3SSy67emcfSKZN56BUizjC8aJuSN4JVoIg3Vc7VkXRksuIXNGzFGRje3wFhEQg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fq670Mv16OR02ZgYB5RB2WKv6GaLRrEMgFvHvz1JLBo=;
 b=LIePveZGC9eLMi4CsFmY2tRdV+IlVfTYtTZx+wbR8Es6xRODdl5J2Sr8JA2CwV2WiDFLNswj3ODiPx4+WUQql/rDw4i+OZzwV/a9cWlHC92cTJu2Iz5YiEf7/C3O9XGkReFz0WVnN1IA2qSlbPyQbVWW6EtOFf7bfdWRQghhgxcm5lfEWLXP63coGSLRmaNQzJ0Y0U2LYfgsH/WcR3Gj8fcxBtgxAg8bbqGRS4A8TNUNN1kVI3JDl50ysM9J1ujY+18+z5kWSMNySMWEAn5Lb4yTbvzKGL39EqabdfsSPPvrWtUseHQ5CDtq6SWVH5Jj3DteAfEDtqEG71BJ9wBGMg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fq670Mv16OR02ZgYB5RB2WKv6GaLRrEMgFvHvz1JLBo=;
 b=QAYrWTbcaCXBUjY4KO+T8Mu4K9bZxdW0D32PyyrnoOJy7i/iqMDapQkt5JUlyz+gDE2zWoGBz/AGJMMWzZDbXPBGsAwaDrUOnjVS7sKhE6+831yNBhiLCmaVpZvwrzGYXk5abS8/8ONS8385RyD/AoUiUjXM/+xHjqBSTwA1hn0=
From: Henry Wang <Henry.Wang@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "community.manager@xenproject.org" <community.manager@xenproject.org>,
	Julien Grall <julien@xen.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, =?iso-8859-1?Q?Roger_Pau_Monn=E9?=
	<roger.pau@citrix.com>, Anthony PERARD <anthony.perard@citrix.com>, George
 Dunlap <george.dunlap@cloud.com>, Juergen Gross <jgross@suse.com>, Wei Chen
	<Wei.Chen@arm.com>
Subject: RE: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
 release: Proposed release schedule)
Thread-Topic: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
 release: Proposed release schedule)
Thread-Index: AQHZdACNvTYSYql02kern4BL0bR1K681SkGAgAAAU2A=
Date: Fri, 21 Apr 2023 06:22:13 +0000
Message-ID:
 <AS8PR08MB79918CB17D6E1C987B5F048992609@AS8PR08MB7991.eurprd08.prod.outlook.com>
References:
 <AS8PR08MB7991424A3167C70A9B29530C92929@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <AS8PR08MB7991EAA2EF0E381FAFB4C1FD92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <26AF44F7-A028-4737-9BF9-266CBAA83A18@arm.com>
In-Reply-To: <26AF44F7-A028-4737-9BF9-266CBAA83A18@arm.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 16DB2E1EA8C83F4F9A8AE6F489B8CAE7.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB8PR08MB5321:EE_|VI1EUR03FT007:EE_|GV1PR08MB8449:EE_
X-MS-Office365-Filtering-Correlation-Id: 5fd88e59-418b-4f78-8593-08db4230c7fa
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 IvucoydqHLTpMM7mz8EUQlwTjEgaMZYv8olBCHC312xU+88DoezEv0mcpYH0kNoUxJujpx+uwDVx2oT2nFEWh1zUNR4dhpoXSxI0WyYNPd2fszzK7LJDMUV4V4akTLQsBjfJWMCxpjulqg97CVhT/lOZ91kZeKNaCZqMJgJtpDqu9PSrzBERtX1/zepDf2cUy+keA5uqVjjnnUhZXbmpRDMI7ZVMxzBgBW3xT7oD5iYxwO+jVrqtf1q7DIM0xwQUVQU5Ql0RciCaBqlvoeoqb6hTSurgQVPisGeFsRqPDoj1qlCHTzKbL/RO0LCJhHXvXmeJwXegvCjNeCSGaSGtg9r04nYGxapWmzw9dxjc0JCN60JrtefOlG43TqKDcfkEg75FEsnk8t4JMK8RYeKVFab01rrBRXHsiBeai3OiudFzi+ndF2w4uukm7s9J4GmEFDSgJnQyEOKI8QGjlVth+ItjBF91j29Q/aKGkmusnF8gWfz+ndwj1VemSLKwmQJ6VGkbTtb2pKhfkzUNsENg/mfvA1nQanBSSpVIVWevBvNCXX9rLkweqZCXwSeXI2DhwT56485fmi/0Mj22qkeCsEb6VbNmOHAzcb/pAqziZU9+8DNFjSE3JOqwp/y0WB+7
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(366004)(376002)(346002)(396003)(451199021)(86362001)(54906003)(83380400001)(478600001)(55016003)(7696005)(6506007)(26005)(53546011)(9686003)(71200400001)(4326008)(64756008)(76116006)(66946007)(110136005)(316002)(186003)(66476007)(66556008)(66446008)(122000001)(41300700001)(52536014)(5660300002)(38100700002)(38070700005)(15650500001)(7416002)(8676002)(2906002)(8936002)(33656002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5321
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0527b59a-3d27-4c03-6190-08db4230bf63
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KEvsgpXsAUo3s4idUPyU0Gk5AKvkPI5Gnb2RZi1z70x0QgFkOgQ1V2GCIVEBWEL7x4oALErmX+X4sLO6UXUKjIWDVCM331PZN7KzMCxM40qUWbG/INVMbieAzyShSUR0YXNvuMyx2NfILNoRPtHwwtat6jXFveVaYl4qIh8CLhBgNCw4W9Y1CjznoHHneVTjKS6BvVDxOVBkuYtZUWhzb2iJ2F7HJ+DGsHK+C6diIpG4ZwsTElVbLMf8xcGox5upMOvqT5eOrlMr1Vt3fAAz/paWXINdPEUHDVq7wt4fTk8KPeQ/+e7GmMRpx+Aw/TurzWqbtuexRPaVH3qDuQtJiY7+6PkNE24ByUwv1bTf8t6wS7b2ROSlmfjsWQkeyVUtuNOvfCiyxh7XYJYIfVobHO5P6s469AWmYI4L25Rw8H3ULCCX6wewEIL3cy0YnssiNEmNXYb5tjtFmFdd/fg3Yhr8n4E6l8N+ra5dVTsEtl0jaWIYCx8NhXdwKYOyKJHdC5GH4ZWGRYMPZmuxBoRAMIwFZLTIv1jGsXIEQ41/+lJqMMoMz6sMt3joQHbM78OzayqyLnEm8Z4SpdImewSZp4pP2KImtNK6DhgaoMloL31tKz/DG23si1L/JBpRQf3nBvACDgj8rlMP3jbPy7oYJFS88loT/oK6zqIruga1WuoR9vBzaWhOLPL4lo31tkeqUIuCkewrexIKlquqPLqTBg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(346002)(376002)(136003)(451199021)(46966006)(36840700001)(40470700004)(26005)(6506007)(186003)(7696005)(9686003)(15650500001)(53546011)(2906002)(336012)(47076005)(40480700001)(316002)(70206006)(41300700001)(5660300002)(4326008)(70586007)(52536014)(8676002)(8936002)(478600001)(54906003)(110136005)(82310400005)(86362001)(40460700003)(36860700001)(33656002)(83380400001)(55016003)(356005)(81166007)(82740400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Apr 2023 06:22:28.2911
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5fd88e59-418b-4f78-8593-08db4230c7fa
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8449

Hi Bertrand,

> -----Original Message-----
> From: Bertrand Marquis <Bertrand.Marquis@arm.com>
> Subject: Re: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
> release: Proposed release schedule)
> 
> Hi,
> 
> > On 21 Apr 2023, at 05:22, Henry Wang <Henry.Wang@arm.com> wrote:
> >
> > Hi all,
> >
> > Following the discussion in the April community call, here are the two
> > updated possible release schedule options that I came up with.
> >
> > Both options satisfy the requirements/concerns that I've received so
> > far, but I would personally prefer option 2, as we shouldn't expect
> > much progress in August due to the EU holiday season. I wonder if
> > anyone has any objections or alternative suggestions.
> 
> Even if the release date is in September, all major deadlines will happen in
> August.
> 
> So how about introducing an end-of-October possibility?

Thanks for raising this. I am personally fine with this plan. If nobody objects
to this proposal, then yes, why not :-)

Kind regards,
Henry

> 
> Cheers
> Bertrand
> 
> >
> > Please don't hesitate to raise your concerns and opinions. I would
> > encourage that the feedback collection is cut off by the middle of May
> > (say May 19). If nobody has anything better, then let's go with option 2
> > by "lazy consensus". Thanks.
> >
> > ** Proposed option 1: Wed Aug 30, 2023 **
> > (+8 months from the Xen 4.17 release)
> >
> > - Last posting date          Fri Jun 16, 2023
> >
> > Patches adding new features are expected to be posted to the mailing
> > list by this date, although perhaps not in their final version.
> >
> > (Note that Xen Summit is Jun 24 - 26, 2023)
> >
> > - Feature freeze             Fri Jul 7, 2023 (+3 weeks from Last posting date)
> >
> > Patches adding new features should be committed by this date.
> > Straightforward bugfixes may continue to be accepted by maintainers.
> >
> > - Code freeze                Fri Jul 21, 2023 (+2 weeks from Feature freeze)
> >
> > Bugfixes only.
> >
> > - Hard code freeze           Fri Aug 11, 2023 (+3 weeks from Code freeze)
> >
> > Bugfixes for serious bugs (including regressions), and low-risk fixes only.
> >
> > - Final commits              Fri Aug 25, 2023 (+2 weeks from Hard code freeze)
> >
> > Branch off staging-4.18.
> >
> > - Release                    Wed Aug 30, 2023
> >
> >
> > ** Proposed option 2: Wed Sep 27, 2023 (or the first week of Oct) **
> > (+9 months from the Xen 4.17 release)
> >
> > - Last posting date          Fri Jul 14, 2023
> >
> > Patches adding new features are expected to be posted to the mailing
> > list by this date, although perhaps not in their final version.
> >
> > - Feature freeze             Fri Aug 4, 2023 (+3 weeks from Last posting date)
> >
> > Patches adding new features should be committed by this date.
> > Straightforward bugfixes may continue to be accepted by maintainers.
> >
> > - Code freeze                Fri Aug 18, 2023 (+2 weeks from Feature freeze)
> >
> > Bugfixes only.
> >
> > - Hard code freeze           Fri Sep 8, 2023 (+3 weeks from Code freeze)
> >
> > Bugfixes for serious bugs (including regressions), and low-risk fixes only.
> >
> > - Final commits              Fri Sep 22, 2023 (+2 weeks from Hard code freeze)
> >
> > Branch off staging-4.18.
> >
> > - Release                    Wed Sep 27, 2023
> >
> > Kind regards,
> > Henry
> 



From xen-devel-bounces@lists.xenproject.org Fri Apr 21 06:34:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 06:34:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524458.815404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppkLJ-0002Hk-To; Fri, 21 Apr 2023 06:34:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524458.815404; Fri, 21 Apr 2023 06:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppkLJ-0002Hd-Qt; Fri, 21 Apr 2023 06:34:01 +0000
Received: by outflank-mailman (input) for mailman id 524458;
 Fri, 21 Apr 2023 06:34:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ii9+=AM=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppkLI-0002HX-Qt
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 06:34:00 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2060f.outbound.protection.outlook.com
 [2a01:111:f400:fe12::60f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7f6d8e8d-e00e-11ed-b220-6b7b168915f2;
 Fri, 21 Apr 2023 08:33:59 +0200 (CEST)
Received: from AS9PR06CA0270.eurprd06.prod.outlook.com (2603:10a6:20b:45f::31)
 by AM9PR08MB5921.eurprd08.prod.outlook.com (2603:10a6:20b:2d4::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Fri, 21 Apr
 2023 06:33:47 +0000
Received: from AM7EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:45f:cafe::9f) by AS9PR06CA0270.outlook.office365.com
 (2603:10a6:20b:45f::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.26 via Frontend
 Transport; Fri, 21 Apr 2023 06:33:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT034.mail.protection.outlook.com (100.127.140.87) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.14 via Frontend Transport; Fri, 21 Apr 2023 06:33:46 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Fri, 21 Apr 2023 06:33:46 +0000
Received: from 41676200c72b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DBAAE765-0658-4083-90BD-6BA3E169A8E9.1; 
 Fri, 21 Apr 2023 06:33:40 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 41676200c72b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 21 Apr 2023 06:33:40 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS4PR08MB8022.eurprd08.prod.outlook.com (2603:10a6:20b:585::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Fri, 21 Apr
 2023 06:33:26 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.022; Fri, 21 Apr 2023
 06:33:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f6d8e8d-e00e-11ed-b220-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qYzOEI2eF1FWNwl0npH+OXcAlyJMzXWHPO1EsCBi8u4=;
 b=d3hw0VVSLvFST2br9gVBViKAB/isdX+LlicSzVJNyqUvhv70OJzwIOTAgFiHzFWFBEvCk/TUSJDRwyTU0Vk3VBV67V6J9RBKFxcyiXoDhL1oIwsxO3FZFOonqrxyoaePziBGACm9drY0u23z39q3t3E3w/QnkO7SY/rUf6gvV/M=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B1iKJqegWuVQfXEeLcAIqXpt8hZDfiZZaqAyS1J6NHUBw/TnV6Vl/sABLaEiXog/b1R3sHOSwUgE/5dIxbbI0V/suVBU76Dc6/qjtEKv8yqLmm8RINRQ+wpngoTLGRBxYTR75smrDo7aseO5ZTspaXIk/FIDni9149ccIT5VFnakSTziUtBFC0uF533yCuvGuSzzmLMIYcp8bAo80Wzh3ie8Rs/YTflAhTVfS11xzBUC7ADv8R5mOpqUkhAwJBWOqxbTvSSPMDsNOVKPrgAgLTi+RcM0E3hb9QZm76ftjoeqA85mn2GwBeawGJSh0o55HyHuISTXBFZeHWgIzCEFig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qYzOEI2eF1FWNwl0npH+OXcAlyJMzXWHPO1EsCBi8u4=;
 b=SpxXF9Hn4fUR2UaEA+jfmWDo4+A38Xk7FgxWkr9joIlyTFuI4TIRuOiXzJSznUCqWKp7nbjUD4ZYCHDdFItHxYAz5kTdNZINqCdQDOP1xssYls9b7wJI1JWp4Ja15qVJIKaF1WKrhKy8NaDtpcK21VuMI2FieepwOlv+ya42R6vy6QaYhWf3kZKs5jfE+j81cLrS9nP16b4dKy3ABdPAovTyG192UNuJV04zJxNG7kGXL/hGG762wHvr8d3pzgYUVXC+9JVKa+KmNkg4M5yChlzrouhRbqAw05o8bCWfiFzsbv4vsje14Yeah/oDNKrg/aGKqP8h4k7q/iNp0Jghjg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Henry Wang <Henry.Wang@arm.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "community.manager@xenproject.org" <community.manager@xenproject.org>,
	Julien Grall <julien@xen.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, =?iso-8859-1?Q?Roger_Pau_Monn=E9?=
	<roger.pau@citrix.com>, Anthony PERARD <anthony.perard@citrix.com>, George
 Dunlap <george.dunlap@cloud.com>, Juergen Gross <jgross@suse.com>, Wei Chen
	<Wei.Chen@arm.com>
Subject: RE: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
 release: Proposed release schedule)
Thread-Topic: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
 release: Proposed release schedule)
Thread-Index: AQHZdACNvTYSYql02kern4BL0bR1K681SkGAgAAAU2CAAANQoA==
Date: Fri, 21 Apr 2023 06:33:26 +0000
Message-ID:
 <AS8PR08MB79919F9CE0B2BF80E7103FB592609@AS8PR08MB7991.eurprd08.prod.outlook.com>
References:
 <AS8PR08MB7991424A3167C70A9B29530C92929@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <AS8PR08MB7991EAA2EF0E381FAFB4C1FD92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <26AF44F7-A028-4737-9BF9-266CBAA83A18@arm.com>
 <AS8PR08MB79918CB17D6E1C987B5F048992609@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To:
 <AS8PR08MB79918CB17D6E1C987B5F048992609@AS8PR08MB7991.eurprd08.prod.outlook.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 8C47D913FFD3B945863394E97D56FC7A.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS4PR08MB8022:EE_|AM7EUR03FT034:EE_|AM9PR08MB5921:EE_
X-MS-Office365-Filtering-Correlation-Id: a32131dc-efb3-4529-e4d4-08db42325c57
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 8mMGFoApNNgIDQBya5hTk3RzlHtq8y/1wHu0wTmh7I+T3aRdAAw1qogk3pIjO5DxluqpOSXl1kYP/vKwZbcYgURT6ftOw19W8Y9NyikSMIbQALZV3yQj8hPRQKX2WNFKXzAJttOyJQ4docE86+tPqxkJOXH3Rnmr5dJWk0lZFN0UnYvi3jLHGpyXx2hnfdStvirpUzaIB24JlUQDdIBWJrx/WoHyCo3EuwllwmH05c/3dkn+lNe/QdaH+KEvUJ8Ai43wfGqyeatIJNH64SJwMI4dQiai4e3cd9ZoYeJv9WXKO/Fe8iM6yoD8x3bDBujycxupjP6KkdG0YfUpGFwzGJxpx4ZoP83YiwocI3Zr1tnfo3ntn389wqX8GGSKBKhCABXIwHd6IUcNk9K7N6Dy55bOub5UoWmw2/M5KAJW7ve+4kxHR2R7N03R940zXN9ZAHWGOLwuS9PH4LWUUay3Tk+Awe3zrLHVOgl7ANFsRYgoZvSbZ9b66qIzC7SFI1kntjJaOmmHpTq/dcpZd1Ft2Bsn+caypxtCiXrinMK19za0WK6WlMPRLF6WYDCzoDAreY7C4fkINwX6zFDWaIEsw1vwQ0ZcsTKLmGfwCImry36fUhOcqu/ik+bBQMGZwIJc
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(366004)(39860400002)(396003)(376002)(451199021)(186003)(6506007)(71200400001)(53546011)(9686003)(26005)(15650500001)(7696005)(2940100002)(38100700002)(5660300002)(41300700001)(52536014)(4326008)(8936002)(66476007)(76116006)(66446008)(66946007)(66556008)(8676002)(7416002)(110136005)(54906003)(478600001)(2906002)(316002)(38070700005)(64756008)(86362001)(33656002)(83380400001)(122000001)(55016003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8022
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9a2823ec-af35-445c-8a2a-08db42325025
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	AY/FyHoX48I5urUh4zyzWBiU4r1Zge3yiGYeOSNsRQ910nPgHiO397P0+f5xt/LFdL8/aW1Er2TfcqzIp8ngPiOoWer13NAXzfpivLJhbBdbwjg7lPP/XphdDp5KBtlj+uAIzRbf/kGf0RbnXW/AqhYS/XXzfQp4LSs+WcOSieAkVsF6Xu6/rBByVPTPkeqzjCOlVNj9FFKyua5gYwTijqGCDzL6tvOt1I8kMA9yw9AdzQS0Lj6fS5GXUYPZux4RKUaWxHw6c5BAAyXEaS4HKwp4ZAaeh1SwNwakvuWs+l2dIPxnLbxN6Bx4igyXuXzbQ2ds5HuldJLsX+lRX6HGBZmkQxgOfhM3iatKO9hRXHT7wN4Ijkv6p79PV5/+lKUPUsfUGUeU3jhIO1DcFY0993bltT/dZJOc1MdGokW3bH/RMbl33xDHZE7GcqZ+hLMnVeq++t4S5B80PHwj2XqrNTOhx0/1ylKICRHtqkuX2PsTAdJdAIFEznRpOxHb26qCFBN/bfp3rxnNTtXSslymN/tUCThIVhzcCcMdTJOdi5x0x3UmUkG3VXYeRhfKUyXW93bC63copjXlpgPcoaC9k9YL0jeJY+Oxbl37oaIr8PipNM6WFZtTggcE5xlLeGnnpqNQUu8NaBf1NV18PbQgY6xLTQnhCZq0CKcU5G6dESQOFCCUYmo2C65aerJh9ofOF4u0+JCgr4tPQjtMbkrwLA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(376002)(346002)(396003)(451199021)(46966006)(36840700001)(40470700004)(52536014)(40460700003)(7696005)(15650500001)(86362001)(83380400001)(36860700001)(33656002)(53546011)(40480700001)(47076005)(2940100002)(336012)(82740400003)(82310400005)(356005)(81166007)(9686003)(55016003)(8936002)(8676002)(26005)(186003)(6506007)(41300700001)(316002)(4326008)(70586007)(70206006)(2906002)(110136005)(54906003)(5660300002)(478600001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Apr 2023 06:33:46.7338
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a32131dc-efb3-4529-e4d4-08db42325c57
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5921

Hi,

> -----Original Message-----
> Subject: RE: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
> release: Proposed release schedule)
>
> Hi Bertrand,
>
> > -----Original Message-----
> > From: Bertrand Marquis <Bertrand.Marquis@arm.com>
> > Subject: Re: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
> > release: Proposed release schedule)
> >
> > Hi,
> >
> > > On 21 Apr 2023, at 05:22, Henry Wang <Henry.Wang@arm.com> wrote:
> > >
> > > Hi all,
> > >
> > > Following the discussion in April community call, here comes the two
> > > updated possible release schedule options that I came up with.
> > >
> > > Both of these two options will satisfy the requirements/concerns that
> > > I've received so far. But I personally would prefer the option 2 as we
> > > shouldn't expect there will be much progress happen in August due to
> > > the EU holiday season. I wonder if anyone has any objections or alternate
> > > suggestions.
> >
> > Even if the release date is in september, all major deadlines will happen in
> > August.
> >
> > So how about introducing an end of october possibility ?
>
> Thanks for raising this. I am personally good with this plan. If nobody objects
> this proposal then yes why not :-)

And here comes option 3:

** Proposed option 3: Wed Oct 25, 2023 **
(+10 months from Xen 4.17 release)

- Last posting date          Fri Aug 11, 2023

Patches adding new features are expected to be posted to the mailing
list by this date, although perhaps not in their final version.

- Feature freeze             Fri Sep 1, 2023 (+3 weeks from Last posting date)

Patches adding new features should be committed by this date.
Straightforward bugfixes may continue to be accepted by maintainers.

- Code freeze                Fri Sep 15, 2023 (+2 weeks from Feature freeze)

Bugfixes only.

- Hard code freeze           Fri Oct 6, 2023 (+3 weeks from Code freeze)

Bugfixes for serious bugs (including regressions), and low-risk fixes only.

- Final commits              Fri Oct 20, 2023 (+2 weeks from Hard code freeze)

Branch off staging-4.18.

- Release                    Wed Oct 25, 2023

Kind regards,
Henry

> >
> > Cheers
> > Bertrand
> >
> > >
> > > Please don't hesitate to raise your concerns and opinions. I would
> > > encourage that the feedback collection is cut off by the middle of May
> > > (say May 19). If nobody will have anything better, then let's go option 2
> > > by "lazy consensus". Thanks.
> > >
> > > ** Proposed option 1: Wed Aug 30, 2023 **
> > > (+8 months from Xen 4.17 release)
> > >
> > > - Last posting date          Fri Jun 16, 2023
> > >
> > > Patches adding new features are expected to be posted to the mailing
> > > list by this date, although perhaps not in their final version.
> > >
> > > (Note that Xen Summit is Jun 24 - 26, 2023)
> > >
> > > - Feature freeze             Fri Jul 7, 2023 (+3 weeks from Last posting date)
> > >
> > > Patches adding new features should be committed by this date.
> > > Straightforward bugfixes may continue to be accepted by maintainers.
> > >
> > > - Code freeze                Fri Jul 21, 2023 (+2 weeks from Feature freeze)
> > >
> > > Bugfixes only.
> > >
> > > - Hard code freeze           Fri Aug 11, 2023 (+3 weeks from Code freeze)
> > >
> > > Bugfixes for serious bugs (including regressions), and low-risk fixes only.
> > >
> > > - Final commits              Fri Aug 25, 2023 (+2 weeks from Hard code freeze)
> > >
> > > Branch off staging-4.18.
> > >
> > > - Release                    Wed Aug 30, 2023
> > >
> > >
> > > ** Proposed option 2: Wed Sep 27, 2023 (or the first week of Oct)**
> > > (+9 months from Xen 4.17 release)
> > >
> > > - Last posting date          Fri Jul 14, 2023
> > >
> > > Patches adding new features are expected to be posted to the mailing
> > > list by this date, although perhaps not in their final version.
> > >
> > > - Feature freeze             Fri Aug 4, 2023 (+3 weeks from Last posting date)
> > >
> > > Patches adding new features should be committed by this date.
> > > Straightforward bugfixes may continue to be accepted by maintainers.
> > >
> > > - Code freeze                Fri Aug 18, 2023 (+2 weeks from Feature freeze)
> > >
> > > Bugfixes only.
> > >
> > > - Hard code freeze           Fri Sep 8, 2023 (+3 weeks from Code freeze)
> > >
> > > Bugfixes for serious bugs (including regressions), and low-risk fixes only.
> > >
> > > - Final commits              Fri Sep 22, 2023 (+2 weeks from Hard code freeze)
> > >
> > > Branch off staging-4.18.
> > >
> > > - Release                    Wed Sep 27, 2023
> > >
> > > Kind regards,
> > > Henry
> >
>



From xen-devel-bounces@lists.xenproject.org Fri Apr 21 07:24:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 07:24:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524462.815415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppl7Z-0007o7-Mv; Fri, 21 Apr 2023 07:23:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524462.815415; Fri, 21 Apr 2023 07:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppl7Z-0007o0-K7; Fri, 21 Apr 2023 07:23:53 +0000
Received: by outflank-mailman (input) for mailman id 524462;
 Fri, 21 Apr 2023 07:23:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CCD5=AM=citrix.com=prvs=4680c1a37=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ppl7Y-0007ns-3g
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 07:23:52 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7483d348-e015-11ed-b220-6b7b168915f2;
 Fri, 21 Apr 2023 09:23:49 +0200 (CEST)
Received: from mail-mw2nam12lp2047.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.47])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 21 Apr 2023 03:23:45 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by MN2PR03MB5341.namprd03.prod.outlook.com (2603:10b6:208:19f::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Fri, 21 Apr
 2023 07:23:43 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6319.022; Fri, 21 Apr 2023
 07:23:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7483d348-e015-11ed-b220-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682061828;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=ty805mIrLDDFvm0+EkffuLtXpPKRDe73cSYUKkrN17s=;
  b=TO8RCqxpJfnWSKaTGaqi9XiHoCMRvIQ5IQNkJFveRZn80jzPzDTz2CLs
   qzVwCavRxfNqlCT/fRKh8AwpFQbpn9fnFUovjuyPNCUEBJYBl4PW6rfKa
   K7BwwCJX7qZX43jmNvaebw5cf4zyDpYnA3pfom03QsUDYxw0o2tJeSvAM
   8=;
X-IronPort-RemoteIP: 104.47.66.47
X-IronPort-MID: 106760223
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:xuvtYK93uFRLTw8PIqipDrUDnn+TJUtcMsCJ2f8bNWPcYEJGY0x3m
 2UWXj/QPvqIYjakKNkjbNy//ElQ7MLUmNRmT1E6/Cs8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKicYXoZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ire7kI+1BjOkGlA5AdmOaoS5Aa2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDklqy
 9AedHMGUSvElv60yZbjQOhlmv4aeZyD0IM34hmMzBn/JNN/G9XmfP+P4tVVmjAtmspJAPDSI
 dIDbiZiZwjBZBsJPUoLDJU5n6GjgXyXnz9w8QrJ4/ZopTaNilAuuFTuGIO9ltiiX8Jak1zev
 mvb12/4HgsbJJqUzj/tHneE37eRwn2jAthJfFG+3u4t3XyhyW06MxopdGG3uOD+plOUWc0Kf
 iT4/QJr98De7neDXtT7GhG1vnOAlhodQMZLVf037hmXzajZ6BrfAXILJhZCb9o8vcNwWj0u1
 XeOhdriATEpu7qQIU9x7Z+RpDK2fC0Kd2kLYHZeSRNfu4W85oYukhjIU9BvVravicH4Ei3xx
 DbMqzUig7IUjogA0KDTEU37vg9Ab6PhFmYdjjg7lEr8hu+lTOZJv7CV1GU=
IronPort-HdrOrdr: A9a23:h5w5oKi0xTVofIPLD49gAV5Em3BQXh4ji2hC6mlwRA09TyX5ra
 2TdZUgpHrJYVMqMk3I9uruBEDtex3hHP1OkOss1NWZPDUO0VHARO1fBOPZqAEIcBeOldK1u5
 0AT0B/YueAd2STj6zBkXSF+wBL+qj6zEiq792usEuEVWtRGsVdB58SMHfiLqVxLjM2YqYRJd
 6nyedsgSGvQngTZtTTPAh/YwCSz+e78q4PeHQ9dmca1DU=
X-Talos-CUID: =?us-ascii?q?9a23=3A1dAPpGqky1nVM+raWsVG1BrmUeQbXnuBj3zvGAi?=
 =?us-ascii?q?pFWRgbOWfSFaMwpoxxg=3D=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3Aw1cr9g+kyQtXeahXpjam8kOQf9cvvoWcB0EkqpI?=
 =?us-ascii?q?5mvOILDU3Fyq3jiviFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,214,1677560400"; 
   d="scan'208";a="106760223"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U41SZ815dtlfwm3UrFmkIz+fmza5hra7YaWV4u+wNl2xnqyRRfYCuWZ4WHTgMDfZGTjkvPjHEGz1nw5S+0BL2piRNk2rA2u95HOGrg6Ul8e32hz0993UxZyJ7jWEymoRj074NreFIzegU/lsL7auESc78Cv3ySY/ZAwUMBTLH4g630mNFJZmXnyamGJ/pFNC2RZUHTC2Z8ATZQBTNsswy2D+D7mLncFG/1+wyD34uKskejv/tMPYPA6KNlU6/85KCQRKYJeL2Yti2lkzwxKl9fQi6N/7ezPfmzmAe+i48Vs4u8Mc2TBfO1Fd1Oi22b/KkqHtuuqcRg/WXv6KcoIzJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OYpo+T+lEWFbN5SsXB//zMqCdlVvf6dsQVb8KerY+q0=;
 b=jnQBPs91UHzwusOeQLVB36ca2ZX8VzHZCvxIe52eWT3TQHBMHHZXfri7SQLqPQz67VD3YxCUrETluCs3G1FTZ6TMWYXdEaoi7PrtFKcjzinFtbmbAP/HMhVEyA8CGH6/W8mfuzcTMCym5T0UawNYcIEFUP/VgZ8gXXrgboiH4dF0C9+ixvioXaIcl0aa5P6GbmkaN30BHJ2AWzQN7+LCgfA+MdzRCiJ8OKDMmDjWgaBF1D2uuMFljkfghViImFwWe6UmwsbFYay1XdPKpG49xLXh92GcuA44YotJYhy+8lGPugzcNoEnUad5Jb+gAawcjNXlAALJ31puNrI3GR4lyw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OYpo+T+lEWFbN5SsXB//zMqCdlVvf6dsQVb8KerY+q0=;
 b=LKeoKVXI7KUKAqhi1dmIH8o4spr0Wi89Qd02ZW5Gd0MVhDPO34eUKx+xjqSaBUCTTTvJbdZd4bbeMMPG53vcVKjq9mW4e/N3Ib0xigA5MpBvkx2sKjvg40nWJXtI+/HZa0d1OxYn9Xm4g4HJ5zcJy4CI7XCZlxvWXpqLi8Jp/Nk=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Fri, 21 Apr 2023 09:23:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Ross Lagerwall <ross.lagerwall@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [PATCH v2] livepatch-tools: remove usage of error.h
Message-ID: <ZEI5+couvF57DBYc@Air-de-Roger>
References: <20230406114106.54735-1-roger.pau@citrix.com>
 <DM6PR03MB53728797915DB90C57CE03B2F0639@DM6PR03MB5372.namprd03.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <DM6PR03MB53728797915DB90C57CE03B2F0639@DM6PR03MB5372.namprd03.prod.outlook.com>
X-ClientProxiedBy: LO2P265CA0089.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::29) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|MN2PR03MB5341:EE_
X-MS-Office365-Filtering-Correlation-Id: 63814c6f-ffae-48da-b54b-08db42395633
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Pprzdkz6kxa8+NUSfQc6oB0Ipo+K15gZlSprEUV1xyELWUrJ7dBCtx2PGEMQl7OPLXpB3pb+opSrBUCN2p4OJ81Gj58/4nI5nLchihKvUjetGEKkjjDANfEm907YbfeF7/R5um1qv+oh+2Z3696OqV/175rdYVim5RkYXzFsuTqvmVYDhsYvlu17qrhE+HoQaG6Sk4XxDz65owA5zOLcIcTRv2TupKcWLirU1IC8t9nJeLQuR1poNHDFRWlb5TaoMoBvv6CsrG3qpt39KOpMoo5RTRqKKkMOMtDZZxU3ebGb+N3BsO+dHhaAGu0VCvfboc2oSsTiUUoaajFp95cP7MuOLQnzoNlmijZh8cYyEENbMHLj92CnO863F8bgHrmL4qao+liWoOFSBhYMy40Irt9W/+j80X9oVy3Vnvu5Met5NqDQwpyzKeow6aW4tjyNxLFT0HjAidDs3eGRju8fFU1RF6c+ZbLASC26liIXf2Osgkk0VN7BNO/3gyf+W3vA14hUqZSCEIEzRAdgFqnHEXxyg03NUOgL+F9xhHn2i7EPEKZ8FfxFqe8lO7BnXLLJzMfGBLY5L8VMQAR/xA78yXjd/Erw8q0Xn37IdTXdBtc=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(7916004)(136003)(346002)(376002)(366004)(396003)(39860400002)(451199021)(478600001)(54906003)(85182001)(6636002)(38100700002)(5660300002)(8676002)(8936002)(6862004)(2906002)(66946007)(66556008)(82960400001)(4326008)(316002)(66476007)(26005)(6506007)(9686003)(53546011)(6512007)(186003)(86362001)(41300700001)(33716001)(83380400001)(6486002)(6666004)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dkx4dWEyREl6bDAxVGJCTGhiOUZMaDZZOTRSdXhqbU5qc1NRV1JxeEJtQVNM?=
 =?utf-8?B?THV3N0M4Y3ZjTE85bnUzOWdhSjBuWHd2Z2RXaDBaQWFodkxlSTdlMXZTbEJS?=
 =?utf-8?B?L1pBOE5VVFJVSmhGSDcxR2R1dFEzNER0YlBJa3ZSRWFYYk5mTkZUNVNzUWQ2?=
 =?utf-8?B?d0tnSGhia3VJMzE0NmJFRE91UlprckJSUjVrOFdnb1JsVUUzdXZuOWtBcy9T?=
 =?utf-8?B?cmF2RTRtdGlYLzl3bmxJRURyeTJ5dHEzYXkvSTZDaXNLVENvcXBxZHAwdjN2?=
 =?utf-8?B?RHZQNklnQ3pBNzRCWU90bWpoVGVZeGtsejdSQUVGQ1NwaEFOVWZvY0hoNktn?=
 =?utf-8?B?NnZHS0djYkF2Y3hxbUZ6bC9iQldiVjFWQlQrYzU1eCt1ZGliVThuVW1QcVVF?=
 =?utf-8?B?cmxoNmpydkI1VVU3TTFyMXI5RHhubkZBR0ovMVJqYi9rdUQ2VUhxSUluVGlD?=
 =?utf-8?B?Z0Jmby9VU2ZFcUxIMklwMWNLZ2t4V1JLWHhSa2VDME9TeHZhZlI1UG9aRlhP?=
 =?utf-8?B?d3c5RjBiRnZjTHFyZTNaZlhaQUkwSHhOaSs5WTNOQ2QwTEZadEJjd3FUN2ZW?=
 =?utf-8?B?amZERG0rbnppdTRHdllOVitLdXBuRkpmejZ3aVlGQnlHbmR5OTltTEdQSUIz?=
 =?utf-8?B?RThzVU5KQ2JiMnp4U3gwR2dySnYveC8zRWdzbVU4UW9TaTRwNk1XalM0NHRT?=
 =?utf-8?B?TGh1UFBTT0pkZndYOTdTWGgyc0tORDg0U3p5MjJOQkozUVFTMjJrcjduNmFS?=
 =?utf-8?B?b3JUeCtCaitkVjlRT2NKZWIwZnQ3c0orRWZrTnZVNHB2OU43bkp6ZzhXVXVM?=
 =?utf-8?B?cko1WXlCNzhEUG93bmt6RGpkaHRTTmxWQ0hycndOVEtzaTVzY04zS1hTVUQw?=
 =?utf-8?B?MXRQeE1pR2IvbldPSlpEMjRNMmZGRTU1dHZuQkt1K0FjQk9mcDRQbVBndGNF?=
 =?utf-8?B?UXZJRnl6K1RTeUp4blJEcTFWeXcyY3hTNmp3VzgrZG1ZcWh5Mk8rUzJpczFK?=
 =?utf-8?B?QUFPSTV2dUJuUnhpYUoyN3h2UE10TkhUdlM4UVVLQ3hlQU1HZ2NFdjdoeS9m?=
 =?utf-8?B?cVoxeXEwZDZOR2FqTFpoSkFyUHZsRFdPMy9HYW1Cakh6dGFkSzJZcENMdXF4?=
 =?utf-8?B?ajhrcjFHSk9xMWZpMEh6NytzcUJBL2U1WUNmMkZkU0E5UVdKcG91N1ExcmZT?=
 =?utf-8?B?NldPZTNIR2FDbFFsdVc0SkhyUWl2L0NEcjUvcUZRY1pMZUxoZzM5TktibHFR?=
 =?utf-8?B?aTFsSVBrZ042R0RZQVlmd0VTeWczaC83bVBqYnZyazdsS0JtT2o2NHYvRHVy?=
 =?utf-8?B?YXdaSWVBeXZwRHVlUURCQWRHZWg2aVliOEJad2F6bHluNThDbXVGK0Y2MWhE?=
 =?utf-8?B?QlZDczZZL0N4Z2dPRUZLdDZGdDczMW4yNENhWjZWSWRMbmZWWml2OFo0QXRZ?=
 =?utf-8?B?c3E1dmRWVFhVS3AxajhCMDlLWkJWTW8wem1JVUJUZHFGVFhlR3hjcnBtaHEw?=
 =?utf-8?B?WWN1Vk45VU9SOHVBcUVmTm9Ub2xvRWhRR0NIVmNsb0NYNFFwTWFsUDN3QTQr?=
 =?utf-8?B?SWdvekplaHdsbGhJMjZIUVk0K3BXTVFNNEtETVhsbGkxNEd3UDJualRELzRv?=
 =?utf-8?B?RXBNY1VXT2tWdFJkM1RIalRLai9HOXlTeFZxWDdyamltOTM0cVVWMVdPbzgx?=
 =?utf-8?B?SzhsWjZDWDZ5UDMxY0ZwOWtHZ1E4THlJd2lrMlVOMFN2MlhLUDczVEZRbHBX?=
 =?utf-8?B?RmF1RUt4bVdZNWFuZ0ZVUEhuRS9rdVhjeWV1SU12bGZOMlgyaytjcTF2V0Nu?=
 =?utf-8?B?amJrWW1TOVRJMkY1LysyN1dGcThVN3NNT01ML3BMVUJidFNJUzlMY2RMSi8w?=
 =?utf-8?B?L2JzcGF4M0JMM1J0TVU2Szgzc2c4SUlUdEczTDRWT3poTWpHK1c0aGwrQjZw?=
 =?utf-8?B?c0J5M2ZZM0dZK1o1QklQZG5WZzN3NnZNc1dTdno3Z0NqNG45cFlIbTVUa0tC?=
 =?utf-8?B?Ym13bjAxbnVhZkp0MStBejJxRjJWcGJJQWFiMWVKODRYM2ZBNnF1YmZTTzdZ?=
 =?utf-8?B?alJRMkhYaUVGM1g1cjh5cU9EYmVwSml0Z3FhMVo1YVM1Q2FHZzJkZERyNkI4?=
 =?utf-8?Q?As4kur3lJGrtXCco8aNepA8jo?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	nhXCPTB86PqFGQUlG4RFvqzq4MFWNxJjFdQaJNIUi5QsuV57IIA2nwYKK/pQDt3bAmB42saOij6WIQCgqGcuGJFRbJCvddTjwjDMWomDsg5uym1ti8PZ1lLuTeGh12wwA84Y7xdmjpgzxlLbzV0JBuhev3cuVykOYb77rXuVi3lZYLb9ODnNY8OqoJQ6DFTF8vtRUakErTJV1Cj6q6cwRZt3lmO/TIdozCcJm/X11gjm7zfMber1I04nWO2zh5s7xb0rf5d04Mk7FhJXxeSpXJu41GKaTyOEInR5paTztYULdODXlnBzf+60QoMoMenQpl7TyGT9fKY3Xb4iRjJeRFmGDwplAnVHPUnSgTlad2nCIGcRN/eQjahRf2kWLXMQ2f3hxY7+dqKIGuCbW06lufG5dG9aRzvJSu3HyO9GKt7y7xb/2p9UvqoZhzsymWbDgGsb3Vv8IirLntXVV7dB80YA3UZWoBsNaAC+rUHumEJQb8oAdVFV36R/bL+HqY0+SX9LlMRNOqjP2A6Diq4UIwimRxoRsRjcP8ExCPFTzfkxA7BKZAt/fTH+zDiRxFCtgqSwxpSJlLEZoiY0xANKf+TsjtlC5Rxd34qj9kAlySMxaYFhghI5gNHUUIT1gqXtS9jHwYEkMOeLaKm8ed/wk10cwwraeQ4cahjPDERQVV8qtUZtY9bGaoaG8v7AdfAlJxLXHuywh6EVl264OaIz2EvzfB9KzhyzTn3m/XdHEE30PZte6OwH/xqWousBmCaME5ZG7T8XS6UYMdFsdQQbnAGLw2rjMsNZJ/v/4uZya+db+Bixni7tZHvOoQnL3Ug2
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 63814c6f-ffae-48da-b54b-08db42395633
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Apr 2023 07:23:43.3280
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: syhjI1eUUWJW96WctFzoc+EYbxiS0AGar/I3vBn6QHwHXY2h/m8OfK0eHnbFsZTACr74xsP9tCB24KHyksOgOQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5341

On Thu, Apr 20, 2023 at 04:14:55PM +0000, Ross Lagerwall wrote:
> > From: Roger Pau Monne <roger.pau@citrix.com>
> > Sent: Thursday, April 6, 2023 12:41 PM
> > To: xen-devel@lists.xenproject.org <xen-devel@lists.xenproject.org>
> > Cc: Roger Pau Monne <roger.pau@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Ross Lagerwall <ross.lagerwall@citrix.com>
> > Subject: [PATCH v2] livepatch-tools: remove usage of error.h 
> >  
> > It's a GNU libc specific header which prevents building on musl for
> > example.  Instead use errx() in ERROR() and DIFF_FATAL() macros.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: Ross Lagerwall <ross.lagerwall@citrix.com>
> > ---
> > Changes since v1:
> >  - Use errx().
> > ---
> >  common.h             | 9 ++++++---
> >  create-diff-object.c | 1 -
> >  lookup.c             | 7 +++++--
> >  prelink.c            | 1 -
> >  4 files changed, 11 insertions(+), 7 deletions(-)
> > 
> > diff --git a/common.h b/common.h
> > index 9a9da79..bbaa950 100644
> > --- a/common.h
> > +++ b/common.h
> > @@ -1,18 +1,21 @@
> >  #ifndef _COMMON_H_
> >  #define _COMMON_H_
> >  
> > -#include <error.h>
> > +#include <err.h>
> >  
> >  extern char *childobj;
> >  
> >  #define ERROR(format, ...) \
> > -       error(1, 0, "ERROR: %s: %s: %d: " format, childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__)
> > +({ \
> > +       fflush(stdout); \
> > +       errx(1, "ERROR: %s: %s: %d: " format "\n", childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__); \
> > +})
> 
> Did you mean to add "\n" here? Wouldn't that result in a double new
> line?
> 
> With that removed (can be done during commit),
> 
> Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>

Thanks, please adjust at commit.  This is a leftover from v1
when I wasn't using errx.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 07:26:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 07:26:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524468.815425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pplA8-0008SQ-6z; Fri, 21 Apr 2023 07:26:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524468.815425; Fri, 21 Apr 2023 07:26:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pplA8-0008SJ-4B; Fri, 21 Apr 2023 07:26:32 +0000
Received: by outflank-mailman (input) for mailman id 524468;
 Fri, 21 Apr 2023 07:26:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CCD5=AM=citrix.com=prvs=4680c1a37=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pplA6-0008SD-Kt
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 07:26:30 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d431d9b7-e015-11ed-b220-6b7b168915f2;
 Fri, 21 Apr 2023 09:26:29 +0200 (CEST)
Received: from mail-mw2nam12lp2040.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 21 Apr 2023 03:26:22 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by MN2PR03MB5341.namprd03.prod.outlook.com (2603:10b6:208:19f::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Fri, 21 Apr
 2023 07:26:20 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6319.022; Fri, 21 Apr 2023
 07:26:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d431d9b7-e015-11ed-b220-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682061989;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=4pr2RTBSSLxEq0rUD9QO4OvqPaMWVGtIBu38NFEcpEY=;
  b=drjUypkmnBexVfViB1kJxUjxg/2oRkEhS40j37uisHZhAZB12+nciO0D
   3RkSD5Y5GMEZBRU2xLG+EoEQUdP+VfdIXp87hY5FMwj3By4QSWHMlNBMB
   ebBJxWtPOvTVR+A0sau8EbryKwPDhBUwIr3NgpDiUTPYVVMl9k0oy2AgX
   4=;
Date: Fri, 21 Apr 2023 09:26:14 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [xen-unstable-smoke test] 180345: regressions - FAIL
Message-ID: <ZEI6ltv7UNOE/EHY@Air-de-Roger>
References: <osstest-180345-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <osstest-180345-mainreport@xen.org>
MIME-Version: 1.0

On Fri, Apr 21, 2023 at 02:26:46AM +0000, osstest service owner wrote:
> flight 180345 xen-unstable-smoke real [real]
> flight 180348 xen-unstable-smoke real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/180345/
> http://logs.test-lab.xenproject.org/osstest/logs/180348/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 180302

This looks like a real failure; adding Julien, who seems to have
modified the arm32 head code recently.

Found U-Boot script /boot.scr
Apr 21 01:09:28.649633 
1764 bytes read in 49 ms (35.2 KiB/s)
Apr 21 01:09:28.721678 
## Executing script at 50000000
Apr 21 01:09:28.721732 
Loading dtbs/5.4.17+/exynos5250-arndale.dtb
Apr 21 01:09:28.733625 
46257 bytes read in 1849 ms (24.4 KiB/s)
Apr 21 01:09:30.533581 
1048584 bytes read in 156 ms (6.4 MiB/s)
Apr 21 01:09:30.701602 
Loaded xen-4.18-unstable to 0x41000000 (100008)
Apr 21 01:09:30.701634 
command line: conswitch=x watchdog noreboot async-show-all console=dtuart dtuart=serial2 dom0_mem=512M,max:512M ucode=scan
Apr 21 01:09:30.713619 
8651264 bytes read in 489 ms (16.9 MiB/s)
Apr 21 01:09:31.217604 
Loaded vmlinuz-5.4.17+ to 0x42000000 (840200)
Apr 21 01:09:31.217637 
command line: ro root=/dev/mapper/arndale--westfield--vg-root rootdelay=3 ro root=/dev/mapper/arndale--westfield--vg-root rootdelay=3 console=hvc0 net.ifnames=0 clk_ignore_unused clk_ignore_unused
Apr 21 01:09:31.241552 
21158982 bytes read in 1134 ms (17.8 MiB/s)
Apr 21 01:09:32.345607 
Loaded initrd.img-5.4.17+ to 0x43300000 (142dc46)
Apr 21 01:09:32.345639 
chosen {
Apr 21 01:09:32.345662 
	bootargs = "conswitch=x watchdog noreboot async-show-all console=dtuart dtuart=serial2 dom0_mem=512M,max:512M ucode=scan";
Apr 21 01:09:32.357622 
	#size-cells = <0x00000001>;
Apr 21 01:09:32.357650 
	#address-cells = <0x00000001>;
Apr 21 01:09:32.369644 
	stdout-path = "serial2:115200n8";
Apr 21 01:09:32.369672 
	module@1 {
Apr 21 01:09:32.369695 
		reg = <0x43300000 0x0142dc46>;
Apr 21 01:09:32.369719 
		compatible = "xen,linux-initrd", "xen,multiboot-module";
Apr 21 01:09:32.381620 
	};
Apr 21 01:09:32.381647 
	module@0 {
Apr 21 01:09:32.381669 
		bootargs = "ro root=/dev/mapper/arndale--westfield--vg-root rootdelay=3 ro root=/dev/mapper/arndale--westfield--vg-root rootdelay=3 console=hvc0 net.ifnames=0 clk_ignore_unused clk_ignore_unused";
Apr 21 01:09:32.393649 
		reg = <0x42000000 0x00840200>;
Apr 21 01:09:32.405613 
		compatible = "xen,linux-zimage", "xen,multiboot-module";
Apr 21 01:09:32.405643 
	};
Apr 21 01:09:32.405665 
};
Apr 21 01:09:32.405686 
Booting 0x41000000 - 0x43000000
Apr 21 01:09:32.405710 
Kernel image @ 0x41000000 [ 0x000000 - 0x13a298 ]
Apr 21 01:09:32.417611 
## Flattened Device Tree blob at 43000000
Apr 21 01:09:32.417640 
   Booting using the fdt blob at 0x43000000
Apr 21 01:09:32.417665 
   reserving fdt memory region: addr=43000000 size=c000
Apr 21 01:09:32.429691 
   Loading Device Tree to 4fff1000, end 4fffffff ... OK
Apr 21 01:09:32.429748 
Apr 21 01:09:32.429788 
Starting kernel ...(fake run for tracing)
Apr 21 01:09:32.441575 
Apr 21 01:09:32.441601 
Apr 21 01:09:32.441622 
Starting kernel ...
Apr 21 01:09:32.441645 
Apr 21 01:09:32.441666 
Apr 21 01:16:14.755516 <c


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 07:38:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 07:38:11 +0000
Date: Fri, 21 Apr 2023 03:37:46 -0400
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Bernhard Beschow <shentey@gmail.com>
Cc: qemu-devel@nongnu.org, Richard Henderson <richard.henderson@linaro.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Aurelien Jarno <aurelien@aurel32.net>,
	=?iso-8859-1?Q?Herv=E9?= Poussineau <hpoussin@reactos.org>,
	Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v4 2/7] hw/pci/pci.c: Don't leak PCIBus::irq_count[] in
 pci_bus_irqs()
Message-ID: <20230421033735-mutt-send-email-mst@kernel.org>
References: <20230403074124.3925-1-shentey@gmail.com>
 <20230403074124.3925-3-shentey@gmail.com>
MIME-Version: 1.0
In-Reply-To: <20230403074124.3925-3-shentey@gmail.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Mon, Apr 03, 2023 at 09:41:19AM +0200, Bernhard Beschow wrote:
> When calling pci_bus_irqs() multiple times on the same object without calling
> pci_bus_irqs_cleanup() in between, PCIBus::irq_count[] is currently leaked.
> Let's fix this because Xen will do just that in a few commits, and because
> calling pci_bus_irqs_cleanup() in between seems fragile and cumbersome.
> 
> Note that pci_bus_irqs_cleanup() now has to NULL irq_count such that
> pci_bus_irqs() doesn't do a double free.
> 
> Signed-off-by: Bernhard Beschow <shentey@gmail.com>

ok

Reviewed-by: Michael S. Tsirkin <mst@redhat.com>


> ---
>  hw/pci/pci.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/hw/pci/pci.c b/hw/pci/pci.c
> index def5000e7b..be1c5d16ec 100644
> --- a/hw/pci/pci.c
> +++ b/hw/pci/pci.c
> @@ -558,6 +558,7 @@ void pci_bus_irqs(PCIBus *bus, pci_set_irq_fn set_irq,
>      bus->set_irq = set_irq;
>      bus->irq_opaque = irq_opaque;
>      bus->nirq = nirq;
> +    g_free(bus->irq_count);
>      bus->irq_count = g_malloc0(nirq * sizeof(bus->irq_count[0]));
>  }
>  
> @@ -573,6 +574,7 @@ void pci_bus_irqs_cleanup(PCIBus *bus)
>      bus->irq_opaque = NULL;
>      bus->nirq = 0;
>      g_free(bus->irq_count);
> +    bus->irq_count = NULL;
>  }
>  
>  PCIBus *pci_register_root_bus(DeviceState *parent, const char *name,
> -- 
> 2.40.0



From xen-devel-bounces@lists.xenproject.org Fri Apr 21 07:38:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 07:38:21 +0000
Message-ID: <d5cb19c1-33e5-73ed-20fe-c98588eb5abc@amd.com>
Date: Fri, 21 Apr 2023 09:49:37 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 03/10] xen/arm: Introduce a wrapper for
 dt_device_get_address() to handle paddr_t
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-4-ayan.kumar.halder@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230413173735.48387-4-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT034:EE_|IA1PR12MB7520:EE_
X-MS-Office365-Filtering-Correlation-Id: f7bc61f4-1deb-41e9-520f-08db423cf70b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Apr 2023 07:49:41.2304
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f7bc61f4-1deb-41e9-520f-08db423cf70b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT034.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB7520

Hi Ayan,

On 13/04/2023 19:37, Ayan Kumar Halder wrote:
> 
> 
> dt_device_get_address() accepts only uint64_t for the address and size.
> However, the address/size denote physical addresses. Thus, they should
> be represented by 'paddr_t'.
> Consequently, we introduce a wrapper for dt_device_get_address(), i.e.
> dt_device_get_paddr(), which accepts address/size as paddr_t and in
> turn invokes dt_device_get_address() after converting address/size to
> uint64_t.
> 
> The reason for introducing this is that in future 'paddr_t' may not
> always be 64-bit. Thus, we need an explicit wrapper to do the type
> conversion and return an error in case of truncation.
> 
> With this, callers can now invoke dt_device_get_paddr().
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes from -
> 
> v1 - 1. New patch.
> 
> v2 - 1. Extracted part of "[XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size"
> into this patch.
> 
> 2. dt_device_get_address() callers now invoke dt_device_get_paddr() instead.
> 
> 3. Logged error in case of truncation.
> 
> v3 - 1. Modified the truncation checks as "dt_addr != (paddr_t)dt_addr".
> 2. Some sanity fixes.
> 
> v4 - 1. Some sanity fixes.
> 2. Preserved the declaration of dt_device_get_address() in
> xen/include/xen/device_tree.h. The reason being it is currently used by
> ns16550.c. This driver requires some more changes as pointed by Jan in
> https://lore.kernel.org/xen-devel/6196e90f-752e-e61a-45ce-37e46c22b812@suse.com/
> which is to be addressed as a separate series.
> 
>  xen/arch/arm/domain_build.c                | 10 +++----
>  xen/arch/arm/gic-v2.c                      | 10 +++----
>  xen/arch/arm/gic-v3-its.c                  |  4 +--
>  xen/arch/arm/gic-v3.c                      | 10 +++----
>  xen/arch/arm/pci/pci-host-common.c         |  6 ++--
>  xen/arch/arm/platforms/brcm-raspberry-pi.c |  2 +-
>  xen/arch/arm/platforms/brcm.c              |  6 ++--
>  xen/arch/arm/platforms/exynos5.c           | 32 ++++++++++----------
>  xen/arch/arm/platforms/sunxi.c             |  2 +-
>  xen/arch/arm/platforms/xgene-storm.c       |  2 +-
>  xen/common/device_tree.c                   | 35 ++++++++++++++++++++++
>  xen/drivers/char/cadence-uart.c            |  4 +--
>  xen/drivers/char/exynos4210-uart.c         |  4 +--
>  xen/drivers/char/imx-lpuart.c              |  4 +--
>  xen/drivers/char/meson-uart.c              |  4 +--
>  xen/drivers/char/mvebu-uart.c              |  4 +--
>  xen/drivers/char/omap-uart.c               |  4 +--
>  xen/drivers/char/pl011.c                   |  6 ++--
>  xen/drivers/char/scif-uart.c               |  4 +--
>  xen/drivers/passthrough/arm/ipmmu-vmsa.c   |  8 ++---
>  xen/drivers/passthrough/arm/smmu-v3.c      |  2 +-
>  xen/drivers/passthrough/arm/smmu.c         |  8 ++---
>  xen/include/xen/device_tree.h              | 13 ++++++++
>  23 files changed, 116 insertions(+), 68 deletions(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 6c9712ab7b..fdef74e7ff 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -955,6 +955,41 @@ int dt_device_get_address(const struct dt_device_node *dev, unsigned int index,
>      return 0;
>  }
> 
> +int dt_device_get_paddr(const struct dt_device_node *dev, unsigned int index,
> +                        paddr_t *addr, paddr_t *size)
> +{
> +    uint64_t dt_addr = 0, dt_size = 0;
There is no need for these initializers; dt_device_get_address() sets both values on success.

> +    int ret;
> +
> +    ret = dt_device_get_address(dev, index, &dt_addr, &dt_size);
> +    if ( ret )
> +        return ret;
> +
> +    if ( addr )
Since dt_device_get_paddr() is a wrapper for dt_device_get_address(), I think you should maintain the same
error logic for addr, i.e. if no addr was specified (NULL), return -EINVAL:
if ( !addr )
    return -EINVAL;

> +    {
> +        if ( dt_addr != (paddr_t)dt_addr )
> +        {
> +            printk("Error: Physical address 0x%"PRIx64" for node=%s is greater than max width (%zu bytes) supported\n",
> +                   dt_addr, dev->name, sizeof(paddr_t));
> +            return -ERANGE;
> +        }
> +
> +        *addr = dt_addr;
> +    }
> +
> +    if ( size )
> +    {
> +        if ( dt_size != (paddr_t)dt_size )
> +        {
> +            printk("Error: Physical size 0x%"PRIx64" for node=%s is greater than max width (%zu bytes) supported\n",
> +                   dt_size, dev->name, sizeof(paddr_t));
> +            return -ERANGE;
> +        }
Add an empty line here.

~Michal
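To see the truncation check under discussion in isolation, here is a self-contained sketch, not the Xen code itself: paddr_t is deliberately forced to 32 bits so truncation can actually occur, and check_paddr() is a hypothetical stand-in for the address path of dt_device_get_paddr(), combining the dt_addr != (paddr_t)dt_addr test with the -EINVAL behaviour Michal suggests:

```c
#include <stdint.h>

/* Hypothetical stand-ins: in Xen, paddr_t is configuration-dependent;
 * a 32-bit paddr_t is assumed here so the truncation path can fire. */
typedef uint32_t paddr_t;
#define XEN_EINVAL 22
#define XEN_ERANGE 34

/* Model of the address path of dt_device_get_paddr(): reject a NULL
 * destination (matching dt_device_get_address()), reject values that
 * do not survive a round-trip through paddr_t, otherwise store them. */
static int check_paddr(uint64_t dt_addr, paddr_t *addr)
{
    if ( !addr )
        return -XEN_EINVAL;

    if ( dt_addr != (paddr_t)dt_addr )
        return -XEN_ERANGE; /* address wider than paddr_t */

    *addr = (paddr_t)dt_addr;
    return 0;
}
```

With a 32-bit paddr_t, an address below 4GiB is stored unchanged, while one at or above 1ULL << 32 fails the round-trip comparison and yields -ERANGE, which is exactly the silent truncation the wrapper is meant to catch.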


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 07:54:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 07:54:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524490.815471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pplar-0005AG-7n; Fri, 21 Apr 2023 07:54:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524490.815471; Fri, 21 Apr 2023 07:54:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pplar-0005A9-49; Fri, 21 Apr 2023 07:54:09 +0000
Received: by outflank-mailman (input) for mailman id 524490;
 Fri, 21 Apr 2023 07:54:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pplaq-00059z-3I; Fri, 21 Apr 2023 07:54:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pplap-0007En-W3; Fri, 21 Apr 2023 07:54:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pplap-0001XK-M4; Fri, 21 Apr 2023 07:54:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pplap-0005Sq-La; Fri, 21 Apr 2023 07:54:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180351-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180351: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dff17457c482dcb38b7caafd095334f461ef6d35
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 07:54:07 +0000

flight 180351 xen-unstable-smoke real [real]
flight 180355 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180351/
http://logs.test-lab.xenproject.org/osstest/logs/180355/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dff17457c482dcb38b7caafd095334f461ef6d35
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    2 days
Failing since        180314  2023-04-19 10:00:24 Z    1 days   14 attempts
Testing same since   180323  2023-04-19 21:01:54 Z    1 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 317 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 07:57:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 07:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524496.815481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppldy-0005m7-OX; Fri, 21 Apr 2023 07:57:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524496.815481; Fri, 21 Apr 2023 07:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppldy-0005m0-KD; Fri, 21 Apr 2023 07:57:22 +0000
Received: by outflank-mailman (input) for mailman id 524496;
 Fri, 21 Apr 2023 07:57:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v1kD=AM=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1ppldx-0005lu-EP
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 07:57:21 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2061b.outbound.protection.outlook.com
 [2a01:111:f400:7eab::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2314ac4c-e01a-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 09:57:19 +0200 (CEST)
Received: from DM5PR07CA0068.namprd07.prod.outlook.com (2603:10b6:4:ad::33) by
 SA3PR12MB8045.namprd12.prod.outlook.com (2603:10b6:806:31d::5) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6298.45; Fri, 21 Apr 2023 07:57:15 +0000
Received: from DM6NAM11FT087.eop-nam11.prod.protection.outlook.com
 (2603:10b6:4:ad:cafe::5b) by DM5PR07CA0068.outlook.office365.com
 (2603:10b6:4:ad::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.28 via Frontend
 Transport; Fri, 21 Apr 2023 07:57:15 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT087.mail.protection.outlook.com (10.13.172.150) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.26 via Frontend Transport; Fri, 21 Apr 2023 07:57:15 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 21 Apr
 2023 02:57:14 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 21 Apr 2023 02:57:12 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2314ac4c-e01a-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <45b3d1cc-899d-5917-535e-fa6539f88571@amd.com>
Date: Fri, 21 Apr 2023 09:57:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 04/10] xen/arm: smmu: Use writeq_relaxed_non_atomic() for
 writing to SMMU_CBn_TTBR0
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-5-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230413173735.48387-5-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT087:EE_|SA3PR12MB8045:EE_
X-MS-Office365-Filtering-Correlation-Id: 6314be5b-2924-496b-d17c-08db423e05ac
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Apr 2023 07:57:15.2893
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6314be5b-2924-496b-d17c-08db423e05ac
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT087.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB8045

Hi Ayan,

On 13/04/2023 19:37, Ayan Kumar Halder wrote:
> 
> 
> Per ARM IHI 0062D.c ID070116 (SMMU 2.0 spec), section 17.3.9 (page
> 17-360), SMMU_CBn_TTBR0 is a 64-bit register. Thus, one can use
> writeq_relaxed_non_atomic() to write to it instead of invoking
> writel_relaxed() twice, once each for the lower and upper halves of
> the register.
> 
> This also helps us because p2maddr is 'paddr_t' (which may be u32 in
> the future). Thus, one can assign p2maddr to a 64-bit variable and do
> the bit manipulations on it to generate the value for SMMU_CBn_TTBR0.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> Changes from -
> 
> v1 - 1. Extracted the patch from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
> Use writeq_relaxed_non_atomic() to write u64 register in a non-atomic
> fashion.
> 
> v2 - 1. Added R-b.
> 
> v3 - 1. No changes.
> 
> v4 - 1. Reordered the R-b. No further changes.
> (This patch can be committed independent of the series).
> 
>  xen/drivers/passthrough/arm/smmu.c | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 79281075ba..c8ef2a925f 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -499,8 +499,7 @@ enum arm_smmu_s2cr_privcfg {
>  #define ARM_SMMU_CB_SCTLR              0x0
>  #define ARM_SMMU_CB_RESUME             0x8
>  #define ARM_SMMU_CB_TTBCR2             0x10
> -#define ARM_SMMU_CB_TTBR0_LO           0x20
> -#define ARM_SMMU_CB_TTBR0_HI           0x24
> +#define ARM_SMMU_CB_TTBR0              0x20
>  #define ARM_SMMU_CB_TTBCR              0x30
>  #define ARM_SMMU_CB_S1_MAIR0           0x38
>  #define ARM_SMMU_CB_FSR                        0x58
> @@ -1083,6 +1082,7 @@ static void arm_smmu_flush_pgtable(struct arm_smmu_device *smmu, void *addr,
>  static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
>  {
>         u32 reg;
> +       u64 reg64;
Shouldn't you be using uint64_t? Also ...

>         bool stage1;
>         struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
>         struct arm_smmu_device *smmu = smmu_domain->smmu;
> @@ -1177,12 +1177,13 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
>         dev_notice(smmu->dev, "d%u: p2maddr 0x%"PRIpaddr"\n",
>                    smmu_domain->cfg.domain->domain_id, p2maddr);
> 
> -       reg = (p2maddr & ((1ULL << 32) - 1));
> -       writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_LO);
> -       reg = (p2maddr >> 32);
> +       reg64 = p2maddr;
> +
>         if (stage1)
> -               reg |= ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT;
> -       writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_HI);
> +               reg64 |= (((uint64_t) (ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT))
... whereas here you use uint64_t for the cast and not u64, so the two should be made consistent.

~Michal
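The cast Michal points at matters for correctness, not just style: the ASID has to land in the upper word of the 64-bit value, and widening before the final shift is what keeps those bits from being lost in 32-bit arithmetic. A self-contained sketch follows; make_ttbr0() is a hypothetical helper, and the bit layout (ASID in bits [55:48], i.e. starting at bit 16 of the upper 32-bit word, matching the driver's TTBRn_HI_ASID_SHIFT) is an assumption taken from the code under review rather than a restatement of the spec:

```c
#include <stdint.h>

/* Assumed layout: within the upper 32-bit half of SMMU_CBn_TTBR0, the
 * ASID field starts at bit 16 (so bit 48 of the full register). */
#define TTBRn_HI_ASID_SHIFT 16

/* Hypothetical helper assembling the 64-bit SMMU_CBn_TTBR0 value from
 * a page-table base address and an ASID, the way the patch does before
 * a single 64-bit write. The uint64_t cast widens the shifted ASID
 * field before it is moved into the upper word of the register value. */
static uint64_t make_ttbr0(uint64_t p2maddr, uint32_t asid)
{
    uint64_t reg64 = p2maddr;

    reg64 |= ((uint64_t)(asid << TTBRn_HI_ASID_SHIFT)) << 32;

    return reg64;
}
```

Without the widening cast, the << 32 would be applied to a 32-bit intermediate and the ASID bits would vanish, so the stage-1 case would silently program TTBR0 with no ASID.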


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 08:00:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 08:00:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524502.815491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pplh8-0007jN-Em; Fri, 21 Apr 2023 08:00:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524502.815491; Fri, 21 Apr 2023 08:00:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pplh8-0007jG-Br; Fri, 21 Apr 2023 08:00:38 +0000
Received: by outflank-mailman (input) for mailman id 524502;
 Fri, 21 Apr 2023 08:00:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vNOs=AM=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1pplh7-0007jA-MX
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 08:00:37 +0000
Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com
 [2607:f8b0:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 98b46450-e01a-11ed-b220-6b7b168915f2;
 Fri, 21 Apr 2023 10:00:36 +0200 (CEST)
Received: by mail-pl1-x636.google.com with SMTP id
 d9443c01a7336-1a686260adcso21610425ad.0
 for <xen-devel@lists.xenproject.org>; Fri, 21 Apr 2023 01:00:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98b46450-e01a-11ed-b220-6b7b168915f2
X-Received: by 2002:a17:902:fb0b:b0:1a6:9363:1632 with SMTP id
 le11-20020a170902fb0b00b001a693631632mr3240112plb.25.1682064034595; Fri, 21
 Apr 2023 01:00:34 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org> <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com> <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
 <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com> <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
In-Reply-To: <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Fri, 21 Apr 2023 11:04:54 +0300
Message-ID: <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Michal Orzel <michal.orzel@amd.com>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>
Content-Type: multipart/alternative; boundary="00000000000045862f05f9d40d35"

--00000000000045862f05f9d40d35
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello Michal,

Yes, I use Yocto.

Yesterday I spent all day trying to follow your suggestions.
I ran into a problem.
I manually pasted these strings into the Xen build config file:

CONFIG_EARLY_PRINTK
CONFIG_EARLY_PRINTK_ZYNQMP
CONFIG_EARLY_UART_CHOICE_CADENCE
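
For reference, Kconfig symbols normally need an explicit value rather than a bare name; a plausible build-config fragment (hedged: the exact option names and whether they apply to this tree should be verified against the Xen Kconfig in use) would be:

```
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
CONFIG_EARLY_UART_CHOICE_CADENCE=y
```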

The host hangs at build time.
Maybe I did not set something in the build config file?

Regards,
Oleg

Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com>:

> Thanks Michal,
>
> You gave me an idea.
> I am going to try it today.
>
> Regards,
> O.
>
> Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:
>
>> Thanks Stefano.
>>
>> I am going to do it today.
>>
>> Regards,
>> O.
>>
>> Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
>>
>>> On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>>> > Hi Michal,
>>> >
>>> > I corrected xen's command line.
>>> > Now it is
>>> > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>>> > dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>>> > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>>>
>>> 4 colors is way too many for xen, just do xen_colors=0-0. There is no
>>> advantage in using more than 1 color for Xen.
>>>
>>> 4 colors is too few for dom0, if you are giving 1600M of memory to Dom0.
>>> Each color is 256M. For 1600M you should give at least 7 colors. Try:
>>>
>>> xen_colors=0-0 dom0_colors=1-8
>>>
>>>
>>>
>>> > Unfortunately the result was the same.
>>> >
>>> > (XEN)  - Dom0 mode: Relaxed
>>> > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>>> > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>>> > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>>> > (XEN) Coloring general information
>>> > (XEN) Way size: 64kB
>>> > (XEN) Max. number of colors available: 16
>>> > (XEN) Xen color(s): [ 0 ]
>>> > (XEN) alternatives: Patching with alt table 00000000002cc690 ->
>>> 00000000002ccc0c
>>> > (XEN) Color array allocation failed for dom0
>>> > (XEN)
>>> > (XEN) ****************************************
>>> > (XEN) Panic on CPU 0:
>>> > (XEN) Error creating domain 0
>>> > (XEN) ****************************************
>>> > (XEN)
>>> > (XEN) Reboot in five seconds...
>>> >
>>> > I am going to find out how the command line arguments are passed and parsed.
>>> >
>>> > Regards,
>>> > Oleg
>>> >
>>> > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
>>> >       Hi Michal,
>>> >
>>> > You put my nose into the problem. Thank you.
>>> > I am going to use your point.
>>> > Let's see what happens.
>>> >
>>> > Regards,
>>> > Oleg
>>> >
>>> >
>>> > Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
>>> >       Hi Oleg,
>>> >
>>> >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>>> >       >
>>> >       >
>>> >       >
>>> >       > Hello Stefano,
>>> >       >
>>> >       > Thanks for the clarification.
>>> >       > My company uses yocto for image generation.
>>> >       > What kind of information do you need in order to advise me in
>>> >       > this case?
>>> >       >
>>> >       > Maybe the module sizes/addresses which were mentioned by
>>> >       > @Julien Grall <julien@xen.org>?
>>> >
>>> >       Sorry for jumping into the discussion, but FWICS the Xen command
>>> >       line you provided seems to be not the one Xen booted with. The
>>> >       error you are observing is most likely due to the dom0 colors
>>> >       configuration not being specified (i.e. lack of a dom0_colors=<>
>>> >       parameter). Although in the command line you provided this
>>> >       parameter is set, I strongly doubt that this is the actual
>>> >       command line in use.
>>> >
>>> >       You wrote:
>>> >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>>> >       dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native
>>> >       sched=null timer_slop=0 way_szize=65536 xen_colors=0-3
>>> >       dom0_colors=4-7";
>>> >
>>> >       but:
>>> >       1) way_szize has a typo
>>> >       2) you specified 4 colors (0-3) for Xen, but the boot log says
>>> that Xen has only one:
>>> >       (XEN) Xen color(s): [ 0 ]
>>> >
>>> >       This makes me believe that no colors configuration actually
>>> >       ended up in the command line that Xen booted with. A single
>>> >       color for Xen is the "default if not specified", and the way
>>> >       size was probably calculated by querying the HW.
>>> >
>>> >       So I would suggest first cross-checking the command line in use.
>>> >
>>> >       ~Michal
>>> >
>>> >
>>> >       >
>>> >       > Regards,
>>> >       > Oleg
>>> >       >
>>> >       > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini
>>> >       > <sstabellini@kernel.org>:
>>> >       >
>>> >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>>> >       >     > Hi Julien,
>>> >       >     >
>>> >       >     > >> This feature has not been merged in Xen upstream yet
>>> >       >     >
>>> >       >     > > would assume that upstream + the series on the ML [1] work
>>> >       >     >
>>> >       >     > Please clarify this point.
>>> >       >     > Because the two thoughts are controversial.
>>> >       >
>>> >       >     Hi Oleg,
>>> >       >
>>> >       >     As Julien wrote, there is nothing controversial. As you
>>> >       >     are aware, Xilinx maintains a separate Xen tree specific
>>> >       >     for Xilinx here:
>>> >       >     https://github.com/xilinx/xen
>>> >       >
>>> >       >     and the branch you are using (xlnx_rebase_4.16) comes from there.
>>> >       >
>>> >       >
>>> >       >     Instead, the upstream Xen tree lives here:
>>> >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>>> >       >
>>> >       >
>>> >       >     The Cache Coloring feature that you are trying to
>>> >       >     configure is present in xlnx_rebase_4.16, but not yet
>>> >       >     present upstream (there is an outstanding patch series to
>>> >       >     add cache coloring to Xen upstream, but it hasn't been
>>> >       >     merged yet).
>>> >       >
>>> >       >
>>> >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't
>>> >       >     matter too much for you, as you already have Cache
>>> >       >     Coloring as a feature there.
>>> >       >
>>> >       >
>>> >       >     I take it you are using ImageBuilder to generate the boot
>>> >       >     configuration? If so, please post the ImageBuilder config
>>> >       >     file that you are using.
>>> >       >
>>> >       >     But from the boot message, it looks like the colors
>>> >       >     configuration for Dom0 is incorrect.
>>> >       >
>>> >
>>> >
>>> >
>>
>>

--00000000000045862f05f9d40d35--


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 08:03:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 08:03:45 +0000
Message-ID: <cc29fe9f-5df6-816f-aeee-b8a1933cf3e8@amd.com>
Date: Fri, 21 Apr 2023 10:03:23 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 07/10] xen/arm: Restrict zeroeth_table_offset for ARM_64
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-8-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230413173735.48387-8-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi Ayan,

On 13/04/2023 19:37, Ayan Kumar Halder wrote:
> 
> 
> When 32 bit physical addresses are used (ie PHYS_ADDR_T_32=y),
> "va >> ZEROETH_SHIFT" causes an overflow.
> Also, there is no zeroeth level page table on Arm32.
> 
> Also took the opportunity to clean up dump_pt_walk(). One could use
> DECLARE_OFFSETS() macro instead of declaring the declaring an array
s/declaring the declaring/declaring/

> of page table offsets.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal



From xen-devel-bounces@lists.xenproject.org Fri Apr 21 08:16:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 08:16:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d408a436-e01c-11ed-b220-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OssDkbCruEAOQJX/fbrtpZkqT9jK3D3JvrrKTwHknMjv1eqWZj6WWIrEkIT4zhjOU96S65UfiSHjVIJDJEXA42ClnCSbEnZ5WCtfY1JG0Qr5bDA8VfHRx7cBviVt36xx6ROXvTLhI2U9L9JdK6I7wv4X1D9VYRUH8O9WyAGB/Vsm5DVOmgZUePJApSKKtPbsywTuurx7O/o82Ym+t3KhyYo95pXhAfwW6r38qyXL8DgFCF117yNKdJb7vR8zE6wzO21EzlqvD37xSmtrxDXw3f6lhqzjwuiwyX4pHGPBYbqrOgSUDt43apDaFIMvOi8V9ok8hKoEf+Zygjt4ov7xfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mcADbVQFVluqgFlVeU+YPDTVVdfuGOccSVtbWhIqDA4=;
 b=gUTcCfv708vorHTzltvHjbmv44Sgu5Mfz6RiufF+EC7lqjnUPNYMADhyTYfTcOiTHyBjnhIQkijBZJoTAk5+4vWflsADDQQfkAkpbgg0OoAVsyL2siSG5Gb4HZ5wjKCfbbsukQwCJg9l0G2kEa4zLm4BabLl2zUpxs5weoMsAPPbsnVYGro25t4ItHlQ2wFhVcFnSuScrYf4qrTqRVyUx6SLxmUf2aBr571wNp7irrnnJ+0mjIci2nnjHyzJj35xZjEVCmLrBA1YT5nnwQ/j6Fote896hq/QROEn/gKoifhVHI0Kz5XoLE3/o3R6Jof9d43Af+p5dxpOpKdHn7+40A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=gmail.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mcADbVQFVluqgFlVeU+YPDTVVdfuGOccSVtbWhIqDA4=;
 b=bWOFSj94SgMbIoAfnoGUNfXb8u7T8+G+pQai3Im0cbgWS2dE9o/LhIP4id79jW5t8ND05IY7oxnxspd/Jw5op4eyk105xRbt+KTb3DHgpLN2LG9o/dtS+9BwRAh54j9OF8D1yY+6X0XCM1Yr1C1fmCxJwU8N2nDNAxnzz9AoKzc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com>
Date: Fri, 21 Apr 2023 10:16:27 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: xen cache colors in ARM
Content-Language: en-US
To: Oleg Nikitenko <oleshiiwood@gmail.com>, Stefano Stabellini
	<sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org>
 <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org>
 <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com>
 <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
 <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
 <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
 <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000108E8:EE_|MN0PR12MB6174:EE_
X-MS-Office365-Filtering-Correlation-Id: 8d6c9ba6-9502-498a-df56-08db4240b5f7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WTpfNzyrriM2Op4yJD3hp8ucwN6vRJMABgwkylMv73JRVT1LRzBc3lnlhQT+8f6tssNQKmaFM2ltFEsBpI1SUCqmyDlQ+VdaqnpxUBFEyDtCfnxsdHrTbIAQfKcwlUuQwTAzGIVw5KTalpyjddEp+NVmCtxvBQ2OsltO5h5a1kS6QTgQKhqkW9p8VjXFHGmLQzD8tZqd90fFJ1P1mm2zr+qp9sxjXS8cZOQEX3s4VnY65bUa52DxbK4UpugPlUCnjbVWqiyWTMc+808WYue76KIXKibwq4Ufwu4fXZEDHk8sQip1eO4KJ1AUh9HcMkPyitNUh3gUqGKXZpq6rCuzwhnQMje7cO8sXOs+CDE+UFM28dXFK8qYjAj3dqH1+UpiLeeT39Tt6tYTE851HbYkMFBFVj/dp4wf9ryf1rgz7mHQSqGIT64ruzZsQAfef7Ykc8lE2sF+Q6cO+SPygLE2HcfU3kBbO8E6iqr0a/ihbJUSvAsYxwwJcNk4ZeWOkLonoroJVxt8zkT+sRL2gsUQT0vo+rpzKcjNoejUCtXU6RRfGBq/uHGrLKZn88WIquhcWEzh11IHDYLJ4QQSup7rcArYypdaNrEgD6lbHQSRf8ZXJ63X2kz2Sd1MkZP16mlhsSpW8fmZhCVl93zv3UoksKSNXit3HNDiYAeihS0ALN0QDwG2mymeaCUeFWeGgmxF75b1VrI2wVUet6lJez8mf35lf9EUhgZt70rjIEPKgVd1F26CSZ5AZrJfcoW3cfQ9
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(136003)(376002)(396003)(451199021)(36840700001)(46966006)(40470700004)(82310400005)(36756003)(40480700001)(40460700003)(478600001)(54906003)(110136005)(356005)(41300700001)(81166007)(8936002)(8676002)(16576012)(70586007)(316002)(70206006)(4326008)(82740400003)(36860700001)(2616005)(47076005)(83380400001)(336012)(966005)(426003)(31686004)(186003)(26005)(53546011)(86362001)(44832011)(31696002)(2906002)(5660300002)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Apr 2023 08:16:30.0849
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8d6c9ba6-9502-498a-df56-08db4240b5f7
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000108E8.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB6174


On 21/04/2023 10:04, Oleg Nikitenko wrote:
> 
> 
> Hello Michal,
> 
> Yes, I use yocto.
> 
> Yesterday all day long I tried to follow your suggestions.
> I faced a problem.
> Manually, in the Xen build config file, I pasted the strings:
In the .config file, or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
You shouldn't really modify the .config file directly, but if you do, you should run "make olddefconfig" afterwards.

> 
> CONFIG_EARLY_PRINTK
> CONFIG_EARLY_PRINTK_ZYNQMP
> CONFIG_EARLY_UART_CHOICE_CADENCE
I hope you added =y to them.

Anyway, you have at least the following solutions:
1) Run bitbake xen -c menuconfig to properly set early printk
2) Find out how other Kconfig options are enabled in your project (e.g. CONFIG_COLORING=y, which is not enabled by default)
3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
CONFIG_EARLY_PRINTK_ZYNQMP=y
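A sketch of option 3) as shell commands. XEN_SRC here is a placeholder for wherever the Xen sources end up in your build (for Yocto, the recipe's unpacked workdir), so adjust the path to your tree:

```shell
# Append the early-printk options discussed above to the arm64 defconfig.
# XEN_SRC is a placeholder (defaults to the current directory).
XEN_SRC="${XEN_SRC:-.}"
DEFCONFIG="$XEN_SRC/xen/arch/arm/configs/arm64_defconfig"
mkdir -p "$(dirname "$DEFCONFIG")"  # no-op in a real checkout

printf '%s\n' \
    'CONFIG_EARLY_PRINTK=y' \
    'CONFIG_EARLY_PRINTK_ZYNQMP=y' \
    'CONFIG_EARLY_UART_CHOICE_CADENCE=y' >> "$DEFCONFIG"
```

After this, a rebuild picks the options up from the defconfig, which avoids hand-editing the generated .config.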

~Michal

> 
> The host hangs at build time. 
> Maybe I did not set something in the build config file?
> 
> Regards,
> Oleg
> 
> Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com <mailto:oleshiiwood@gmail.com>>:
> 
>     Thanks Michal,
> 
>     You gave me an idea.
>     I am going to try it today.
> 
>     Regards,
>     O.
> 
>     Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com <mailto:oleshiiwood@gmail.com>>:
> 
>         Thanks Stefano.
> 
>         I am going to do it today.
> 
>         Regards,
>         O.
> 
>         Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org <mailto:sstabellini@kernel.org>>:
> 
>             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>             > Hi Michal,
>             >
>             > I corrected xen's command line.
>             > Now it is
>             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>             > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> 
>             4 colors is way too many for xen, just do xen_colors=0-0. There is no
>             advantage in using more than 1 color for Xen.
> 
>             4 colors is too few for dom0, if you are giving 1600M of memory to Dom0.
>             Each color is 256M. For 1600M you should give at least 7 colors. Try:
> 
>             xen_colors=0-0 dom0_colors=1-8
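[The sizing rule above (256M per color is the figure Stefano gives for this 16-color platform) can be sanity-checked with a line of shell arithmetic; the color count needed is just a ceiling division of dom0_mem by the per-color size:]

```shell
# Minimum number of colors for dom0_mem, at 256M per color
# (the per-color size quoted in the mail for this platform).
color_size_mib=256
dom0_mem_mib=1600
colors_needed=$(( (dom0_mem_mib + color_size_mib - 1) / color_size_mib ))
echo "dom0_mem=${dom0_mem_mib}M needs at least ${colors_needed} colors"
```

This matches the advice: at least 7 colors for dom0, e.g. dom0_colors=1-8 alongside xen_colors=0-0.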
> 
> 
> 
>             > Unfortunately the result was the same.
>             >
>             > (XEN)  - Dom0 mode: Relaxed
>             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>             > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>             > (XEN) Coloring general information
>             > (XEN) Way size: 64kB
>             > (XEN) Max. number of colors available: 16
>             > (XEN) Xen color(s): [ 0 ]
>             > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>             > (XEN) Color array allocation failed for dom0
>             > (XEN)
>             > (XEN) ****************************************
>             > (XEN) Panic on CPU 0:
>             > (XEN) Error creating domain 0
>             > (XEN) ****************************************
>             > (XEN)
>             > (XEN) Reboot in five seconds...
>             >
>             > I am going to find out how command line arguments passed and parsed.
>             >
>             > Regards,
>             > Oleg
>             >
>             > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com <mailto:oleshiiwood@gmail.com>>:
>             >       Hi Michal,
>             >
>             > You put my nose into the problem. Thank you.
>             > I am going to use your point.
>             > Let's see what happens.
>             >
>             > Regards,
>             > Oleg
>             >
>             >
>             > Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com <mailto:michal.orzel@amd.com>>:
>             >       Hi Oleg,
>             >
>             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>             >       >
>             >       >
>             >       > Hello Stefano,
>             >       >
>             >       > Thanks for the clarification.
>             >       > My company uses yocto for image generation.
>             >       > What kind of information do you need to consult me in this case ?
>             >       >
>             >       > Maybe modules sizes/addresses which were mentioned by @Julien Grall <mailto:julien@xen.org <mailto:julien@xen.org>> ?
>             >
>             >       Sorry for jumping into the discussion, but FWICS the Xen command line you provided is not the one
>             >       Xen actually booted with. The error you are observing is most likely due to the dom0 colors
>             >       configuration not being specified (i.e. the lack of a dom0_colors=<> parameter). Although this parameter
>             >       is set in the command line you provided, I strongly doubt that it is the actual command line in use.
>             >
>             >       You wrote:
>             >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native
>             >       sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>             >
>             >       but:
>             >       1) way_szize has a typo
>             >       2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
>             >       (XEN) Xen color(s): [ 0 ]
>             >
>             >       This makes me believe that no colors configuration actually ends up in the command line that Xen boots with.
>             >       A single color for Xen is the "default if not specified", and the way size was probably calculated by querying the HW.
>             >
>             >       So I would suggest to first cross-check the command line in use.
>             >
>             >       ~Michal
>             >
>             >
>             >       >
>             >       > Regards,
>             >       > Oleg
>             >       >
>             >       > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>>:
>             >       >
>             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>             >       >     > Hi Julien,
>             >       >     >
>             >       >     > >> This feature has not been merged in Xen upstream yet
>             >       >     >
>             >       >     > > would assume that upstream + the series on the ML [1] work
>             >       >     >
>             >       >     > Please clarify this point.
>             >       >     > Because the two thoughts are controversial.
>             >       >
>             >       >     Hi Oleg,
>             >       >
>             >       >     As Julien wrote, there is nothing controversial. As you are aware,
>             >       >     Xilinx maintains a separate Xen tree specific for Xilinx here:
>             >       >     https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>
>             >       >
>             >       >     and the branch you are using (xlnx_rebase_4.16) comes from there.
>             >       >
>             >       >
>             >       >     Instead, the upstream Xen tree lives here:
>             >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>
>             >       >
>             >       >
>             >       >     The Cache Coloring feature that you are trying to configure is present
>             >       >     in xlnx_rebase_4.16, but not yet present upstream (there is an
>             >       >     outstanding patch series to add cache coloring to Xen upstream but it
>             >       >     hasn't been merged yet.)
>             >       >
>             >       >
>             >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
>             >       >     you as you already have Cache Coloring as a feature there.
>             >       >
>             >       >
>             >       >     I take it you are using ImageBuilder to generate the boot configuration? If
>             >       >     so, please post the ImageBuilder config file that you are using.
>             >       >
>             >       >     But from the boot message, it looks like the colors configuration for
>             >       >     Dom0 is incorrect.
>             >       >
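[For readers following along: an ImageBuilder config for a setup like this might look roughly like the sketch below. The variable names follow ImageBuilder's uboot-script-gen config format as best I recall it, and every path and address is a placeholder — verify against the ImageBuilder documentation and your board before using any of it:]

```shell
# Hypothetical ImageBuilder (uboot-script-gen) config sketch.
# All paths, addresses, and values below are placeholders.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

DEVICE_TREE="mpsoc.dtb"
XEN="xen"
XEN_CMD="console=dtuart dtuart=serial0 dom0_mem=1600M sched=null xen_colors=0-0 dom0_colors=1-8"

DOM0_KERNEL="Image"
DOM0_RAMDISK="dom0-ramdisk.cpio"

NUM_DOMUS=0

UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```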
>             >
>             >
>             > 
> 


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 09:24:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 09:24:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524517.815521 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppn02-0008WA-4c; Fri, 21 Apr 2023 09:24:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524517.815521; Fri, 21 Apr 2023 09:24:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppn02-0008W3-1l; Fri, 21 Apr 2023 09:24:14 +0000
Received: by outflank-mailman (input) for mailman id 524517;
 Fri, 21 Apr 2023 09:24:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ii9+=AM=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1ppn00-0008Vx-OX
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 09:24:12 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0601.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 453246e9-e026-11ed-b220-6b7b168915f2;
 Fri, 21 Apr 2023 11:24:09 +0200 (CEST)
Received: from DB6PR07CA0104.eurprd07.prod.outlook.com (2603:10a6:6:2c::18) by
 DB3PR08MB9087.eurprd08.prod.outlook.com (2603:10a6:10:430::15) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.22; Fri, 21 Apr 2023 09:24:01 +0000
Received: from DBAEUR03FT014.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2c:cafe::71) by DB6PR07CA0104.outlook.office365.com
 (2603:10a6:6:2c::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.11 via Frontend
 Transport; Fri, 21 Apr 2023 09:24:01 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT014.mail.protection.outlook.com (100.127.143.22) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.14 via Frontend Transport; Fri, 21 Apr 2023 09:24:01 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Fri, 21 Apr 2023 09:24:01 +0000
Received: from 9c66a560a4ed.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 856149B8-D419-4996-9CF0-02220F887A67.1; 
 Fri, 21 Apr 2023 09:23:50 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9c66a560a4ed.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 21 Apr 2023 09:23:50 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AM8PR08MB6563.eurprd08.prod.outlook.com (2603:10a6:20b:315::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Fri, 21 Apr
 2023 09:23:42 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.022; Fri, 21 Apr 2023
 09:23:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 453246e9-e026-11ed-b220-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WJzPHrLFK3QUDnsTvRJXvfb1sdWR+FKnafCYtNj4YeQ=;
 b=0cyYsS9hT1OHM9cc8dmVdhIfo5cexYMNXPHozk2L0jhmy2NzE6gleMumvrebExwV1GGJj7BlF6rqNniZI8DiWETXsw7ip1+TrNO/H5f98edeGz+PQYD25uwGTYB1uZZU9JI30Y7gh/t6uZtDfU6Q3SiEdCKVrKj5bhY5Wss0mDM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eN8vbNwyPsSQ8eWUPQGJBV6uqCRk6tVlMIjbUC0dwiPMSMt8/IaxdmW1ZlxRTcGFB4QX00l0kbC4EJxuHEm5W3dJL5L+SHM3a4OwjiOYQhB2gtrRNMEiITm0MZvuLdxcqtvza+fKi4uO8tgb4eaDzg6t8y4mIrEC2fLWdqC7fQ27bDrHlZoz5IMzlf7IrV7tUtOmRDln0bEIOffyCvFuqxWvMiQfx4QYY0Y8Z7ig2DNIseZuOgovn3bzgJCKkqaaoLX3kd5xiHXXV5pxZahm+d49LBoh1zcyuOd7xCzb/f9DYMsvOgGxerwyz7GwgvXg+5Lzf0HE05u/NZ+xuSB79w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=WJzPHrLFK3QUDnsTvRJXvfb1sdWR+FKnafCYtNj4YeQ=;
 b=R1CU0yVvi4R6zX8tIFTSxZGxP3XO/5gxOqv+ndMHBx/wwsqGN1p9l4Sc1CZFqBTAiUZ6CZ7I1fjlhFZ+R2tsjPu7OmuHBVkyDoAkKOeHUCDj8lLcbzLDokEM8NPTUhVaa+v/13W1Gg0IglROy8lNkSSKWV58a7ksQm8mV4GiPqtrUfVLouZlgIhZGPqif9X/0l5dyiVFb76/zSaui+1LtekgKZwpKyxJe+XqFEGakT/rLXq1xbYDzpo8EMW60zpB2taHBJhYyCZirvyvxBVCLvxUUYh6lu5ySFYlWhs+LIeNz/9L9KeEX/vNDIt8p7RFoi638FbAjuzYJ+KIP6bW+Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WJzPHrLFK3QUDnsTvRJXvfb1sdWR+FKnafCYtNj4YeQ=;
 b=0cyYsS9hT1OHM9cc8dmVdhIfo5cexYMNXPHozk2L0jhmy2NzE6gleMumvrebExwV1GGJj7BlF6rqNniZI8DiWETXsw7ip1+TrNO/H5f98edeGz+PQYD25uwGTYB1uZZU9JI30Y7gh/t6uZtDfU6Q3SiEdCKVrKj5bhY5Wss0mDM=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v3 03/17] xen/arm: implement node distance helpers for Arm
Thread-Topic: [PATCH v3 03/17] xen/arm: implement node distance helpers for
 Arm
Thread-Index: AQHZc3rmCrt50C+BaEmmIgaK9uo9ea80IxGAgAFPzpA=
Date: Fri, 21 Apr 2023 09:23:42 +0000
Message-ID:
 <AS8PR08MB79912F294EDAC48F835FBB7A92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-4-Henry.Wang@arm.com>
 <ac54e04c-58b7-d0c9-2443-bb09258c8bc8@suse.com>
In-Reply-To: <ac54e04c-58b7-d0c9-2443-bb09258c8bc8@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 153C2F97E054314A817D8908F5204979.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AM8PR08MB6563:EE_|DBAEUR03FT014:EE_|DB3PR08MB9087:EE_
X-MS-Office365-Filtering-Correlation-Id: 8c796484-9d1d-47d4-39aa-08db424a2498
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 w2zMQc7dFTP1SLRm1RIpYjOP6xRf4xwd7zB0/Gl14RUdWFLmHNIIcpRWvwWHyJFuKk4QYayJBe0pS0I33krwKJJv3zydHC8Vo5E/gPiGVJxfltNMummQTe3HYLEEUhWfrAgR0kdxi1MQD0SV0D8BMQI5rbDxH4pffuS7yGs6x6o/N9k9lXhY52b/bhaBtjiYjxpu7Q/a33KCDc1TDfi2TzhI9ofCIpZePtNktFPa4+VkllRbSSUHsvMvKN3/UerixOIe8+GrsXegGCvWtSYoVOJViirqtbFh3vbQ/CYod1xsefMixx0x/9xpbKjPCE0rtFuk/XVFbFbaLQv+M4R3WTfFuj7+hkXJH7lVW1WtZ4BcYA/qtmldgHynxmHzEJPDs0BMKuTeteOwW1ySKPxLEjQMLeVoDjNRUkllUbHCInaYA4IetLOMuw5DK1AcFxpjpev1NuV1zyer96m7R6LSYKpBdkuPO+aP+Q2OVV+OlaS0MJvMpVFMJnXYnZToKz8wnMeI1tZ3fvxWmUOVfvbp8/1IrP0Gx/VBmJQrG2Y9FBLY6bpCZSZTZWWPRUGsBy2BrfgAunxrq8rSipM9caXMKmeuKBTCBDsEpFX2AW7x8R4Q3lWUowQlDopbwXyQY0ir
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(346002)(136003)(366004)(376002)(451199021)(122000001)(316002)(64756008)(6916009)(66476007)(54906003)(76116006)(66446008)(66946007)(66556008)(4326008)(9686003)(6506007)(26005)(186003)(38100700002)(83380400001)(41300700001)(8936002)(5660300002)(8676002)(55016003)(478600001)(7696005)(71200400001)(86362001)(33656002)(52536014)(2906002)(38070700005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6563
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1fa10a5a-cf7d-40cc-ec41-08db424a193f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2DPgF+NiIuG4WyZYQS1vsxbCb5u5TWETyJi0LbFwkJtKpqN4lWTcnCJCdqlO2pRK2CuKmL4jDrrwo7sDIWr15t+6FBafV0jRGvBtfZMvnn1z33hYJ1/Sz32XBhPCm9tXsa0SJz7j3fzZ8HMYukqKDUCHR/XIYEz7wKS6VJd+CsXxuX/bJTS3KtfkfLiMgODrZAOj/TQIMeMoxFS0AiOF2eA9j+LJRzrE1EarK4FWS/wTWg8xqorjQ1YS5yDgLKyW0FANELnlCJ46tAFO0noE125/vySJlYViy2li07o8iyZ/PGAxSUdNI5SDWh11L2sM+R0W+3nhcgtQP5G/VxKkxp0hs74e4gLxXKT1vSqzDY8baalkHjCpsl5Pocl2swFM+gbbkjRDGO195wR9WFmjAFJCyF1l7I3MpijNPQrmj0AKGHu7gLHmz732iYOq5ysFDvrSxm21LEr9+qogv6S2sDEjEI5+6M1yIf2qQxHvqoojr7Cd1FnPKMXF/uHF0tgmr6SUQ6MBTgeqm63qe/0kjhUH6Qu0kvQijlwUkOoDPzdKKZXJeWrNpp77M+tCjIrBM7CNnccER+aRpemcX37/+tu5MMNotjv3LHyBmyVe/oZ7CLkfKr4vHMtamL2raMFwSZo1DvKZrQGDD1DGPvECLzt6eppLtPEWW0h/g1Ey0aV6RvFdDQh+8JUFlIG7C2N553Ge0F1tmAF7Ktln3hstGA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(136003)(376002)(39860400002)(451199021)(36840700001)(40470700004)(46966006)(186003)(6506007)(9686003)(26005)(7696005)(40480700001)(5660300002)(41300700001)(336012)(47076005)(52536014)(4326008)(70586007)(70206006)(8676002)(8936002)(6862004)(54906003)(478600001)(82740400003)(2906002)(316002)(86362001)(40460700003)(83380400001)(33656002)(81166007)(36860700001)(55016003)(356005)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Apr 2023 09:24:01.1870
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8c796484-9d1d-47d4-39aa-08db424a2498
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB3PR08MB9087

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v3 03/17] xen/arm: implement node distance helpers for
> Arm
> > As both x86 and Arm have implemented __node_distance, so we move
> > its definition from asm/numa.h to xen/numa.h.
> 
> Nit: You mean "declaration", not "definition".

Correct, I overlooked this miswording in commit message while going
through your comments in v2. will correct in v4.

> 
> > At same time, the
> > outdated u8 return value of x86 has been changed to unsigned char.
> >
> > Signed-off-by: Wei Chen <wei.chen@arm.com>
> > Signed-off-by: Henry Wang <Henry.Wang@arm.com>
> 
> non-Arm parts; to more it's not applicable anyway:
> Acked-by: Jan Beulich <jbeulich@suse.com>

I will add # non-Arm parts in the end of the tag and take the tag.

> 
> > --- a/xen/arch/arm/numa.c
> > +++ b/xen/arch/arm/numa.c
> > @@ -28,6 +28,11 @@ enum dt_numa_status {
> >
> >  static enum dt_numa_status __ro_after_init device_tree_numa =
> DT_NUMA_DEFAULT;
> >
> > +static unsigned char __ro_after_init
> > +node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
> > +    { 0 }
> > +};
> 
> Nit: There's no (incomplete or complete) initializer needed here, if
> all you're after is having all slots set to zero.

Yeah, I agree with you, so I will drop the initializer in v4, however...

> 
> However, looking at the code below, don't you mean to have the array
> pre-set to all NUMA_NO_DISTANCE?

...I am a bit puzzled about why pre-setting the array to all
NUMA_NO_DISTANCE matters here, as I think the node distance map will
be populated when parsing the device tree anyway no matter what their
initial values.

> 
> > +    /* NUMA defines 0xff as an unreachable node and 0-9 are undefined */
> > +    if ( distance >= NUMA_NO_DISTANCE ||
> > +         (distance >= NUMA_DISTANCE_UDF_MIN &&
> 
> Compilers may warn about comparison of "unsigned int" to be >= 0. I'm
> not sure about the usefulness of the NUMA_DISTANCE_UDF_MIN define in
> the first place, so maybe best drop it and its use here?

Yeah, will do that in v4.

> 
> > +unsigned char __node_distance(nodeid_t from, nodeid_t to)
> > +{
> > +    /* When NUMA is off, any distance will be treated as remote. */
> > +    if ( numa_disabled() )
> > +        return NUMA_REMOTE_DISTANCE;
> 
> Wouldn't it make sense to have the "from == to" special case ahead of
> this (rather than further down), thus yielding a sensible result for
> from == to == 0? And else return NUMA_NO_DISTANCE, thus having a
> sensible result also for any from/to != 0?

Could you please elaborate a bit more about why 0 matters here? As from
my understanding,
(1) If from == to, then we set the distance to NUMA_LOCAL_DISTANCE
which represents the diagonal of the matrix.
(2) If from and to is in the matrix range, then we return
node_distance_map[from][to].
(3) Other cases we return NUMA_NO_DISTANCE.

So this diff is enough:
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
@@ -98,6 +98,9 @@ void numa_detect_cpu_node(unsigned int cpu)

 unsigned char __node_distance(nodeid_t from, nodeid_t to)
 {
+    if ( from == to )
+        return NUMA_LOCAL_DISTANCE;
+
     /* When NUMA is off, any distance will be treated as remote. */
     if ( numa_disabled() )
         return NUMA_REMOTE_DISTANCE;
@@ -109,7 +112,7 @@ unsigned char __node_distance(nodeid_t from, nodeid_t to)
      */
     if ( from >= ARRAY_SIZE(node_distance_map) ||
          to >= ARRAY_SIZE(node_distance_map[0]) )
-        return from == to ? NUMA_LOCAL_DISTANCE : NUMA_NO_DISTANCE;
+        return NUMA_NO_DISTANCE;

     return node_distance_map[from][to];
 }

Would you mind pointing out why my understanding is wrong? Thanks!

> 
> > +    /*
> > +     * Check whether the nodes are in the matrix range.
> > +     * When any node is out of range, except from and to nodes are the
> > +     * same, we treat them as unreachable (return 0xFF)
> > +     */
> > +    if ( from >= ARRAY_SIZE(node_distance_map) ||
> > +         to >= ARRAY_SIZE(node_distance_map[0]) )
> > +        return from == to ? NUMA_LOCAL_DISTANCE : NUMA_NO_DISTANCE;
> > +
> > +    return node_distance_map[from][to
XTsNCj4gPiArfQ0KPiA+ICsNCj4gPiArRVhQT1JUX1NZTUJPTChfX25vZGVfZGlzdGFuY2UpOw0K
PiANCj4gV2hhdCBpcyB0aGlzIG5lZWRlZCBmb3I/DQoNCldpbGwgZHJvcCBpdC4NCg0KS2luZCBy
ZWdhcmRzLA0KSGVucnkNCg0KPiANCj4gSmFuDQo=


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 09:41:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 09:41:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524524.815531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppnGS-0002Yx-Jy; Fri, 21 Apr 2023 09:41:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524524.815531; Fri, 21 Apr 2023 09:41:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppnGS-0002Yq-HJ; Fri, 21 Apr 2023 09:41:12 +0000
Received: by outflank-mailman (input) for mailman id 524524;
 Fri, 21 Apr 2023 09:41:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LFmw=AM=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ppnGR-0002Yk-DI
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 09:41:11 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2062c.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a06d6fc0-e028-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 11:41:01 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7919.eurprd04.prod.outlook.com (2603:10a6:102:c1::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Fri, 21 Apr
 2023 09:40:59 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.022; Fri, 21 Apr 2023
 09:40:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a06d6fc0-e028-11ed-8611-37d641c3527e
Message-ID: <bdf33169-4e29-8c50-ff76-16d05df81a14@suse.com>
Date: Fri, 21 Apr 2023 11:40:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 03/17] xen/arm: implement node distance helpers for Arm
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-4-Henry.Wang@arm.com>
 <ac54e04c-58b7-d0c9-2443-bb09258c8bc8@suse.com>
 <AS8PR08MB79912F294EDAC48F835FBB7A92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <AS8PR08MB79912F294EDAC48F835FBB7A92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 21.04.2023 11:23, Henry Wang wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>>
>>> --- a/xen/arch/arm/numa.c
>>> +++ b/xen/arch/arm/numa.c
>>> @@ -28,6 +28,11 @@ enum dt_numa_status {
>>>
>>>  static enum dt_numa_status __ro_after_init device_tree_numa =
>> DT_NUMA_DEFAULT;
>>>
>>> +static unsigned char __ro_after_init
>>> +node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
>>> +    { 0 }
>>> +};
>>
>> Nit: There's no (incomplete or complete) initializer needed here, if
>> all you're after is having all slots set to zero.
> 
> Yeah, I agree with you, so I will drop the initializer in v4, however...
> 
>>
>> However, looking at the code below, don't you mean to have the array
>> pre-set to all NUMA_NO_DISTANCE?
> 
> ...I am a bit puzzled about why pre-setting the array to all
> NUMA_NO_DISTANCE matters here, as I think the node distance map will
> be populated when parsing the device tree anyway no matter what their
> initial values.

From this patch alone it doesn't become clear whether indeed all array
slots (and not just ones for valid nodes) would be populated. I think
the code in the patch here would better not make itself dependent on
behavior of code added subsequently (which may change; recall that a
series may be committed in pieces).

>>> +unsigned char __node_distance(nodeid_t from, nodeid_t to)
>>> +{
>>> +    /* When NUMA is off, any distance will be treated as remote. */
>>> +    if ( numa_disabled() )
>>> +        return NUMA_REMOTE_DISTANCE;
>>
>> Wouldn't it make sense to have the "from == to" special case ahead of
>> this (rather than further down), thus yielding a sensible result for
>> from == to == 0? And else return NUMA_NO_DISTANCE, thus having a
>> sensible result also for any from/to != 0?
> 
> Could you please elaborate a bit more about why 0 matters here?

When NUMA is off, there's only one node - node 0. Hence 0 has special
meaning in that case.

> As from my understanding,
> (1) If from == to, then we set the distance to NUMA_LOCAL_DISTANCE
> which represents the diagonal of the matrix.
> (2) If from and to is in the matrix range, then we return
> node_distance_map[from][to].

Provided that's set correctly. IOW this interacts with the other comment
(which really I made only after the one here, just that that's of course
not visible from the reply that I sent).

> (3) Other cases we return NUMA_NO_DISTANCE.

And when NUMA is off, it should be NUMA_NO_DISTANCE in _all_ other cases,
i.e. ...

> So this diff is enough:
> diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
> @@ -98,6 +98,9 @@ void numa_detect_cpu_node(unsigned int cpu)
> 
>  unsigned char __node_distance(nodeid_t from, nodeid_t to)
>  {
> +    if ( from == to )
> +        return NUMA_LOCAL_DISTANCE;
> +
>      /* When NUMA is off, any distance will be treated as remote. */
>      if ( numa_disabled() )
>          return NUMA_REMOTE_DISTANCE;

... this return is wrong in that case (even if in reality this likely
wouldn't matter much).

Jan

> @@ -109,7 +112,7 @@ unsigned char __node_distance(nodeid_t from, nodeid_t to)
>       */
>      if ( from >= ARRAY_SIZE(node_distance_map) ||
>           to >= ARRAY_SIZE(node_distance_map[0]) )
> -        return from == to ? NUMA_LOCAL_DISTANCE : NUMA_NO_DISTANCE;
> +        return NUMA_NO_DISTANCE;
> 
>      return node_distance_map[from][to];
>  }
> 
> Would you mind pointing out why my understanding is wrong? Thanks!
> 
> Kind regards,
> Henry



From xen-devel-bounces@lists.xenproject.org Fri Apr 21 10:11:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 10:11:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524530.815547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppnjt-00060d-2Q; Fri, 21 Apr 2023 10:11:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524530.815547; Fri, 21 Apr 2023 10:11:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppnjs-00060W-Vc; Fri, 21 Apr 2023 10:11:36 +0000
Received: by outflank-mailman (input) for mailman id 524530;
 Fri, 21 Apr 2023 10:11:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppnjr-00060M-Oh; Fri, 21 Apr 2023 10:11:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppnjr-0002sl-Ey; Fri, 21 Apr 2023 10:11:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppnjr-0006KZ-0x; Fri, 21 Apr 2023 10:11:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppnjr-0006BO-0L; Fri, 21 Apr 2023 10:11:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180341-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180341: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6a66fdd29ea1695d615fcc93dccfb6dbe2f53b1d
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 10:11:35 +0000

flight 180341 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180341/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                6a66fdd29ea1695d615fcc93dccfb6dbe2f53b1d
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    4 days
Failing since        180281  2023-04-17 06:24:36 Z    4 days    8 attempts
Testing same since   180341  2023-04-20 20:12:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alexander Aring <aahringo@redhat.com>
  Alexander Potapenko <glider@google.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Alexei Starovoitov <ast@kernel.org>
  Andrea Righi <andrea.righi@canonical.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Asahi Lina <lina@asahilina.net>
  Axel Lin <axel.lin@ingics.com>
  Baokun Li <libaokun1@huawei.com>
  Baoqi Zhang <zhangbaoqi@loongson.cn>
  Bhavya Kapoor <b-kapoor@ti.com>
  Bjorn Andersson <andersson@kernel.org>
  Chen Aotian <chenaotian2@163.com>
  Chong Qiao <qiaochong@loongson.cn>
  Chris Morgan <macromorgan@hotmail.com>
  Christoph Paasch <cpaasch@apple.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Conor Dooley <conor.dooley@microchip.com>
  Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
  Dan Johansen <strit@manjaro.org>
  Daniel Borkmann <daniel@iogearbox.net>
  David Gow <davidgow@google.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dragan Simic <dragan.simic@gmail.com>
  Duoming Zhou <duoming@zju.edu.cn>
  Enze Li <lienze@kylinos.cn>
  Fabio Estevam <festevam@denx.de>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gwangun Jung <exsociety@gmail.com>
  Heiko Stuebner <heiko@sntech.de>
  Huacai Chen <chenhuacai@loongson.cn>
  Ido Schimmel <idosch@nvidia.com>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jianqun Xu <jay.xu@rock-chips.com>
  Johan Hovold <johan+linaro@kernel.org>
  Jonathan Toppins <jtoppins@redhat.com>
  JR Gonzalez <jrg@scientiam.org>
  Jules Maselbas <jmaselbas@kalray.eu>
  Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Li Lanzhe <u202212060@hust.edu.cn>
  Liam R. Howlett <Liam.Howlett@oracle.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Rodriguez Reboredo <yakoyoku@gmail.com>
  Mat Martineau <martineau@kernel.org>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Matthieu Baerts <matthieu.baerts@tessares.net>
  Mel Gorman <mgorman@suse.de>
  Mel Gorman <mgorman@techsingularity.net>
  Michael Chan <michael.chan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Miguel Ojeda <ojeda@kernel.org>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Naama Meir <naamax.meir@linux.intel.com>
  Naoya Horiguchi <naoya.horiguchi@nec.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Ondrej Mosnacek <omosnace@redhat.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Patrick Blass <patrickblass@mailbox.org>
  Pedro Tammela <pctammela@mojatatu.com>
  Peng Fan <peng.fan@nxp.com>
  Peng Zhang <zhangpeng.00@bytedance.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Xu <peterx@redhat.com>
  Petr Machata <petrm@nvidia.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qi Zheng <zhengqi.arch@bytedance.com>
  Qing Zhang <zhangqing@loongson.cn>
  Ricardo Pardini <ricardo@pardini.net>
  Rob Herring <robh@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Scott Mayhew <smayhew@redhat.com>
  Sebastian Basierski <sebastianx.basierski@intel.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Seiji Nishikawa <snishika@redhat.com>
  SeongJae Park <sj@kernel.org>
  Shakeel Butt <shakeelb@google.com>
  Shawn Guo <shawnguo@kernel.org>
  Steev Klimaszewski <steev@kali.org> #Thinkpad X13s
  Steve Chou <steve_chou@pesi.com.tw>
  Sudeep Holla <sudeep.holla@arm.com>
  syzbot+a7c1ec5b1d71ceaa5186@syzkaller.appspotmail.com
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Bamelis <thomas@bamelis.dev>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vadim Fedorenko <vadfed@meta.com>
  Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Will Deacon <will@kernel.org>
  Xuan Zhuo <xuanzhuo@linux.alibaba.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4066 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 10:24:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 10:24:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524538.815557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppnvi-0007aA-9q; Fri, 21 Apr 2023 10:23:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524538.815557; Fri, 21 Apr 2023 10:23:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppnvi-0007a3-6v; Fri, 21 Apr 2023 10:23:50 +0000
Received: by outflank-mailman (input) for mailman id 524538;
 Fri, 21 Apr 2023 10:23:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppnvg-0007Zt-B5; Fri, 21 Apr 2023 10:23:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppnvg-00034N-2D; Fri, 21 Apr 2023 10:23:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppnvf-0006dE-NY; Fri, 21 Apr 2023 10:23:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppnvf-00026v-NA; Fri, 21 Apr 2023 10:23:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eJU1t+jzGPh2ra4Ob3uhQppTua0uEbe2/vyOUZhXSIk=; b=LKthoSNZPnxABVuVZJOv6GSQvD
	rZscENqL1Rx1oGnpXQXqDrMdyokkSiY/bPqb6+yjW8hYIeQaHgmjG42dzwMKT8x2Mek8aO60FklxC
	wHmvlLwRG49OqFy1+rSYpF4P/wPhmgri1Xa18xYffrUpfmwG0Opfrj1t35auizzCnbSE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180334-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180334: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start.2:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=2d82c32b2ceaca3dc3da5e36e10976f34bfcb598
X-Osstest-Versions-That:
    qemuu=7dbd6f8a27e30fe14adb3d5869097cddf24038d6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 10:23:47 +0000

flight 180334 qemu-mainline real [real]
flight 180358 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180334/
http://logs.test-lab.xenproject.org/osstest/logs/180358/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd       22 guest-start.2       fail pass in 180358-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180258
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180258
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180258
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180258
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180258
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180258
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180258
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180258

version targeted for testing:
 qemuu                2d82c32b2ceaca3dc3da5e36e10976f34bfcb598
baseline version:
 qemuu                7dbd6f8a27e30fe14adb3d5869097cddf24038d6

Last test of basis   180258  2023-04-14 08:48:34 Z    7 days
Failing since        180320  2023-04-19 16:38:30 Z    1 days    3 attempts
Testing same since   180334  2023-04-20 12:40:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   7dbd6f8a27..2d82c32b2c  2d82c32b2ceaca3dc3da5e36e10976f34bfcb598 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 10:57:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 10:57:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524545.815567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppoRb-0002ZO-UG; Fri, 21 Apr 2023 10:56:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524545.815567; Fri, 21 Apr 2023 10:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppoRb-0002ZH-RH; Fri, 21 Apr 2023 10:56:47 +0000
Received: by outflank-mailman (input) for mailman id 524545;
 Fri, 21 Apr 2023 10:56:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/IAX=AM=epam.com=prvs=84751abf44=oleksii_moisieiev@srs-se1.protection.inumbo.net>)
 id 1ppoRa-0002ZB-74
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 10:56:46 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 31582fbb-e033-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 12:56:40 +0200 (CEST)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 33LA2Oev029980; Fri, 21 Apr 2023 10:56:29 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2173.outbound.protection.outlook.com [104.47.17.173])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3q3dv2syjn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 21 Apr 2023 10:56:28 +0000
Received: from PA4PR03MB7136.eurprd03.prod.outlook.com (2603:10a6:102:ea::23)
 by DB4PR03MB9481.eurprd03.prod.outlook.com (2603:10a6:10:3f7::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Fri, 21 Apr
 2023 10:56:23 +0000
Received: from PA4PR03MB7136.eurprd03.prod.outlook.com
 ([fe80::bcf5:cd14:fd35:1300]) by PA4PR03MB7136.eurprd03.prod.outlook.com
 ([fe80::bcf5:cd14:fd35:1300%9]) with mapi id 15.20.6298.045; Fri, 21 Apr 2023
 10:56:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31582fbb-e033-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SiS1NndOGA7r5q9A/SY7WIKgj7FNqWV+XklaQZraUy+HrO/ySzq5nz4YOAOZ4YVIfkSHA7nXYUsdtZurTepXWl0w+iZ8KQzWS/e/Dtd+kyADqNPEJvFXUL2Iapb8MfIc4AOTriB/h8rc8qo62UQ6OW6MbsFXK11FCfj8on0vGmk+7qnwtgg8ydS/bM3d6t07WyuZB5m5iODGonHzhEXMJ+oSw3aW74xDLXr2KPCyO0uvKDlE7UUw4WSsW2BUTWFTStH4N31DelvS2Lck9EaUbVqBoXqFVQKnzjNds/i4J5NrhCiII+yq6r/1gkA3BZdvhDDQQoARe01dYHo7ySJo+w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HZkCm/LB3AZnRRvgfgw+vXRZyVlHqz8x1D28X7XVeYY=;
 b=Ic9iW4UswXtDc6EQqUfLHxrC1e/OsqxhwiGeZG3XneHfN1F9oMF66jmWXdBf7gtPjmdftSCG+nZmybzLHFlL7RIJX5vsZDaQlWyzSTO8Zecjc7ogy3jG9z5yG6OEDBJsTzbBBAVRyXykjYxP4QCwdJidsJMNDM9ah3v0NKII3Lu9PnjclA5mbx3nnP3K7s2pl8ffqvINKHlqUt55dT0ZQkPLO05vPkEbihqm3X3L4flHEzpEhxbttfHuagg552QT02jhgU7PYhbHUNKYsFxeLIoOczmZBgdUnkizv26nALdI3RXyGaRTTNWCkAtW6EOqOHhKtYUfO2UmQsyb3jQDIg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HZkCm/LB3AZnRRvgfgw+vXRZyVlHqz8x1D28X7XVeYY=;
 b=Ic9L+Cg26Y4AcPnXyG3Eaw0WtXJLNvDCaV+rkIjwKLgBsCqOMC0ITaNhaO0+J+tuV8ejIx/ea+BF/U2Q9I7EPBSWuqs4VpHsRsXmm1QKjG+e/mNYKR2IfZ95eUkNw3cwG+gJRjl3RsXz4AiKtAmTokoCOix8PbKr6CJ++gnvrYtUKo0MCsa9Ed/MVpBZgQ4z/DbswY+YL6Gwes1fgjAoTtWD+pHvUmCe00CcKjf56pm/gRFEcYMv6DrAI2PawNMQqYSVqodVfF8gpbFCuT+IOF2S23TiO2vG+LJk6c/Y82fYv/u9BBVulaD0r31GPHATxqKJGzqky78HBbpZLFLILw==
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
To: "jgross@suse.com" <jgross@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>,
        Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>,
        Sumit Semwal <sumit.semwal@linaro.org>,
        =?utf-8?B?Q2hyaXN0aWFuIEvDtm5pZw==?= <christian.koenig@amd.com>,
        "linux-media@vger.kernel.org" <linux-media@vger.kernel.org>,
        "dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>,
        "linaro-mm-sig@lists.linaro.org" <linaro-mm-sig@lists.linaro.org>
Subject: Re: [PATCH v1 0/3] Add ioctls to map grant refs on the external
 backing storage
Thread-Topic: [PATCH v1 0/3] Add ioctls to map grant refs on the external
 backing storage
Thread-Index: AQHZHq/2qxwFjlIdpkaeJi9xnPE21a82QnCA
Date: Fri, 21 Apr 2023 10:56:22 +0000
Message-ID: <637dcf09-9348-293f-95d2-6403192f52b1@epam.com>
References: <cover.1672666311.git.oleksii_moisieiev@epam.com>
In-Reply-To: <cover.1672666311.git.oleksii_moisieiev@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: PA4PR03MB7136:EE_|DB4PR03MB9481:EE_
x-ms-office365-filtering-correlation-id: 2422802b-3aab-4bc4-e413-08db42570bc6
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: 
 =?utf-8?B?aGs5MUhwNkFlY3FHRXhDUldlY2Z0MmRxZXBpMjdlRHhGRzRGZWdzV0FSRzBS?=
 =?utf-8?B?NnBPWTZXRjhjSGR6TDhIcytlSjZWTnFFUTJSVUdla1F6eHZ0NDJ1UlJUY3Zx?=
 =?utf-8?B?cUdDRS8wMGpYM1d5ODlqcmk2YzNqNUJMRUh4WStYYXAyQTRyVUtkT0FFbW8w?=
 =?utf-8?B?Mlg5MnJSd2d4UzRmRHcvbCtweU1lSWl4elNaekk3YVNmQ201d25VbnVySTAy?=
 =?utf-8?B?am9iZzhiVDJyWis5NDFQUkhMT2VqWjFvN0NjRTFEdmUvUG9vMjFwUGRRSEdG?=
 =?utf-8?B?bUFsK2MrRXNWQXNNVnB0bnphc2V5L2ZyQ1gxYkxBNUJvK0xHc0lJdkFabWsz?=
 =?utf-8?B?RU1YcmF3VFpCOThsbGJrbzNSQXBYc2N2bE5TdEltY2ZLSklEWm96OHhMeVlD?=
 =?utf-8?B?dU85UU1QdGtrb3NGaFVoRDgvbHhOYmJ5anFucGJTR1V0SmJZYVRJbWdQaitG?=
 =?utf-8?B?cEFNbkJ0ckNhamZBdjFRNEpuWXlaS1ZEamZmYlBBdW1lWGZZdzNLcUhXeWpE?=
 =?utf-8?B?MjhHSit4L1J0V0d4aWtlUFVCUGFFelVMeDFnRXhGUU9RbUgzcjdROC9mYURZ?=
 =?utf-8?B?L2RQcWNONkZZUkFkTHE4L3grbjdFNFZXV3FiVkhhLzE2UjhWTjU3WmF6N21v?=
 =?utf-8?B?djZzbW40aTNlL1FwbWJ2TWFPc1gxT3M4N0RYNzN5OXI1Z2hkREl3aVRublhC?=
 =?utf-8?B?dTh0SHpONEI1dkRNYzhtSzZQck01NEVVVFpEdldCSFZ5bkdUQzR3Z1IrcHJv?=
 =?utf-8?B?M3RDMnZxRjVsb1FUZjZPVmdTQm5NbHNPdCtnS0dpV3FDMnpJMys0OFhucmEr?=
 =?utf-8?B?TitWOXl3eVJMYzl1RTczLzRWbEJROTVDcDdtamEvTzZyNDB0NWFLUE0zNW1X?=
 =?utf-8?B?cGFTQmN5S292Qzk0RmY5RnFINmxLMmQ3eGlabzIvQWtQVGxFUk1IL2w5RzZr?=
 =?utf-8?B?ZE5la3o3WmlhVUEwK3FxTHNFbDZYYnM5elhpOWRacmJFUTRFT3hFQ2xycUNv?=
 =?utf-8?B?d01YeFltNXcxTDQvQmNNbi9CRjZSUUwrSUVQcU8xblRFUzhOMncvVkhUekNL?=
 =?utf-8?B?dEozUmZuaFczOWtoWWZhTXVuTUZBZEd4SVo2Z3lCU2NIWXpJYnhyZ2hSRlpN?=
 =?utf-8?B?QXVBbVpWN0o2OVcxdDFTZ2NlK3JCaHdlQkRsVmM5ZWY1elJWNUN1WFdXMU45?=
 =?utf-8?B?UXorazhMK3JmWlpBa21ReS9EWGU0VGE0U0NiUENoVUtpWkJGcHRCSlFNZ3FH?=
 =?utf-8?B?d2VGQk5oRXIwR2NBREhDWU9MdVNxb3p0bENNWjNJQTF1TVVxd0t0dElJZTJB?=
 =?utf-8?B?dGlWTG9zMmpxb2lxQUZFclZ4cUZwb1NyT2l2S29CM3pybjR0K2FwaHpPNXdr?=
 =?utf-8?B?RHZDNjZGOU1CSXNTZnpablc3MkhJaU1YOWVPOHNOTExjOWk3U3U1OVNuekFp?=
 =?utf-8?B?MnRTRU9rdXE2NVZhV1lBMDg3ZWs1aFhsMHNDbWRuSWxEWTk3SzZVSmlXWmQz?=
 =?utf-8?B?Q1ZFZWUyQ2JZY1R2TEhPNHZiOUNJVDdOc29WRThwS2tHazl2bEI4UnU2THJJ?=
 =?utf-8?B?R0FINysxRE9WckhoclptakF0T3J6bE5FM0NGaW5aZis0UVJXWTJWQWJ1YXZJ?=
 =?utf-8?B?THNNcmZHeFozc3FybGM0ekdNQ3VnMDVDUG1VVnhJQmxQNldGc0VtS2FuQXRL?=
 =?utf-8?B?TTkraEFSUTN6TmRlTy84S042bm0zZWRxdC96YjRUaGErd2xMMm0xSWkwU3dm?=
 =?utf-8?B?V1NBdFF6WUxIYmxscDRHMmtsdWlTV0dwakZjV1ByVmtnWmEzRHhpZDIydHZW?=
 =?utf-8?B?bFlvalVJamhiTFM1VTk0aXZUMktiYW1iRmR3NTZMc2FsWTlOcHBhcGJVUGNM?=
 =?utf-8?B?QTNybkRMOE0waXVJS2p0eEN3WXhPMmRQcEJHenR1Q2cyL2J5THFaMHdxbkxp?=
 =?utf-8?B?alRtZVdYVzREY1FNWnROR0JmZm13N0ZMSC9nMVh1eExRb1NaSElVeUhaUmFE?=
 =?utf-8?B?c2cwcU9BZ2o3c0tDV1ZhNCt6QlVaZTFOMUM1VXZ6V1NaQzZPOUNVYjMrMFZi?=
 =?utf-8?B?WUxCbzNMS3IwMnJjWllCUEpFRGZjKzVnbERkSm4wZFNodlB5ekl6ODZWcHNr?=
 =?utf-8?B?aGQ2eUE2WmNZb1ltQzNDTGVJRVdYMldGTytmb201amhwNEZ0Vk9IcVkwOVVJ?=
 =?utf-8?B?Snc9PQ==?=
Content-Type: multipart/alternative;
	boundary="_000_637dcf099348293f95d26403192f52b1epamcom_"
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PA4PR03MB7136.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2422802b-3aab-4bc4-e413-08db42570bc6
X-MS-Exchange-CrossTenant-originalarrivaltime: 21 Apr 2023 10:56:22.9956
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: C3P9cieVph5lasIFzhR1sSK2SENsXYbBLVrglLJtAcehxNuLtmOp5y8E977SfA8ZzrliNrAeM6TF9zj8PSs1ffS9Yovc2XcnH84gUKuw2aU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR03MB9481

--_000_637dcf099348293f95d26403192f52b1epamcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Hello,

Just wanted to add some additional information, hoping it will help to
start the discussion and find a correct approach.

My goal is to provide a correct buffer for interfaces such as
zwp_linux_dmabuf_v1_interface, so that the zero-copy feature will work.
My suggestion is to allow a specific buffer to be used as backing
storage, where the caller takes responsibility for the provided buffer
having the correct format.

In our case we are using these calls the following way:

1) Get grefs from another domain (we're working on a virtualized system
   with different domains running as standalone VMs that share
   resources);

2) Create a buffer using gbm_bo_create and receive an fd (i.MX8 requires
   the EGL API to be called for the buffer allocation, or
   eglCreateImageKHR will return EGL_NO_IMAGE_KHR during param setting
   for zwp_linux_dmabuf);

3) Call IOCTL_GNTDEV_DMABUF_MAP_REFS_TO_BUF to map the grefs using the
   fd as the backing storage;

4) Call zwp_linux_dmabuf_v1_interface and use the zero-copy feature with
   Wayland;

5) After work is finished, call IOCTL_GNTDEV_DMABUF_MAP_RELEASE to unmap
   the grant refs;

6) Call gbm_bo_destroy to release the allocated BO object;

7) Call IOCTL_GNTDEV_DMABUF_MAP_WAIT_RELEASED to wait until the map is
   released completely (this includes releasing the grant refs on the
   domain side).

I've tested my changes on an IMX8QM board using gbm_bo_create to
allocate the buffer, and on a QEMU setup using DRM_IOCTL_MODE_CREATE_DUMB
for buffer allocation.

There was a suggestion to reverse the roles between exporter and
importer and allocate the buffer on the other side, but this approach
doesn't fit our setup.

I have a board which starts the Xen hypervisor with several virtual
domains: one domain (Dom0) has access to the hardware and is the backend
side for graphics sharing; another domain (DomU) is the frontend side
and runs its Weston instance with zero-copy on the buffer provided by
the backend.

The configuration where Dom0 as driver domain allocates the dma-buf and
then exports it to the DomU is not meant to be supported, see:
https://nvd.nist.gov/vuln/detail/CVE-2021-26934

The vulnerability is that there is no way to free a buffer which was
exported to DomU if DomU crashes. That's why I'm trying to make a
solution which works without the need to switch importer and exporter.

I'm aware that my solution may be (and is) not correct, but my intent is
to start the discussion in order to produce the appropriate solution.

Best regards,

Oleksii.

On 02.01.23 15:41, Oleksii Moisieiev wrote:

Hello,

Let me introduce the new ioctls, which are intended to allow gntdev to
map a scatter-gather table on top of an existing dmabuf, referenced by
file descriptor.

When using a dma-buf exporter to create a dma-buf with backing storage
and map it to the grant refs provided from the domain, we've met a
problem: several HW (the i.MX8 GPU in our case) do not support external
buffers and require the backing storage to be created using their native
tools. That's why new ioctls were added, to be able to pass an existing
dma-buf fd as an input parameter and use it as the backing storage to
export to the refs.

The following calls were added:
IOCTL_GNTDEV_DMABUF_MAP_REFS_TO_BUF - map an existing buffer as the
backing storage and export it to the provided grant refs;
IOCTL_GNTDEV_DMABUF_MAP_RELEASE - detach the buffer from the grant table
and set a notification to unmap the grant refs before releasing the
external buffer. After this call the external buffer should be
destroyed.
IOCTL_GNTDEV_DMABUF_MAP_WAIT_RELEASED - wait, up to a timeout, until the
buffer is completely destroyed and the grant refs are unmapped, so the
domain can free the grant pages. Should be called after the buffer was
destroyed.

Our setup is based on an IMX8QM board. We're trying to implement
zero-copy support for DomU graphics using a Wayland
zwp_linux_dmabuf_v1_interface implementation.

For the dma-buf exporter we used the i.MX8 GPU native tools to create
the backing storage for grant refs received from DomU. The buffer for
the backing storage was allocated using a gbm_bo_create call, because
the GPU does not support external buffers and requires the backing
storage to be created using its native tools (eglCreateImageKHR returns
EGL_NO_IMAGE_KHR for buffers which were not created using
gbm_bo_create).

This behaviour was also tested on a QEMU setup, using the
DRM_IOCTL_MODE_CREATE_DUMB call to create the backing storage buffer.

---
Oleksii Moisieiev (3):
  xen/grant-table: save page_count on map and use it during async
    unmapping
  dma-buf: add dma buffer release notifier callback
  xen/grant-table: add new ioctls to map dmabuf to existing fd

 drivers/dma-buf/dma-buf.c   |  44 ++++
 drivers/xen/gntdev-common.h |   8 +-
 drivers/xen/gntdev-dmabuf.c | 416 +++++++++++++++++++++++++++++++++++-
 drivers/xen/gntdev-dmabuf.h |   7 +
 drivers/xen/gntdev.c        | 101 ++++++++-
 drivers/xen/grant-table.c   |  73 +++++--
 include/linux/dma-buf.h     |  15 ++
 include/uapi/xen/gntdev.h   |  62 ++++++
 include/xen/grant_table.h   |   8 +
 9 files changed, 703 insertions(+), 31 deletions(-)

--_000_637dcf099348293f95d26403192f52b1epamcom_--


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 11:00:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 11:00:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524552.815577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppoVK-000448-HZ; Fri, 21 Apr 2023 11:00:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524552.815577; Fri, 21 Apr 2023 11:00:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppoVK-000441-Dt; Fri, 21 Apr 2023 11:00:38 +0000
Received: by outflank-mailman (input) for mailman id 524552;
 Fri, 21 Apr 2023 11:00:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfVw=AM=epam.com=prvs=8475e47d9c=volodymyr_babchuk@srs-se1.protection.inumbo.net>)
 id 1ppoVI-00043t-HW
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 11:00:36 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bcf94745-e033-11ed-b220-6b7b168915f2;
 Fri, 21 Apr 2023 13:00:34 +0200 (CEST)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 33LAYHED000610; Fri, 21 Apr 2023 11:00:26 GMT
Received: from eur04-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2051.outbound.protection.outlook.com [104.47.14.51])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3q3ruqg39b-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 21 Apr 2023 11:00:26 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com (2603:10a6:803:31::18)
 by PAWPR03MB9763.eurprd03.prod.outlook.com (2603:10a6:102:2f2::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Fri, 21 Apr
 2023 11:00:23 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3]) by VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3%3]) with mapi id 15.20.6298.045; Fri, 21 Apr 2023
 11:00:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bcf94745-e033-11ed-b220-6b7b168915f2
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
        George Dunlap <george.dunlap@citrix.com>,
        Julien Grall
	<julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Paul Durrant
	<paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Topic: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Index: 
 AQHZVrdzsdckomMx4kauxHkZQ597Iq79mAMAgClI2ACAAK/igIAAVGGAgAGe14CAAJvOAIAFXoKAgAAEr4CAAAS6AIAGQ32A
Date: Fri, 21 Apr 2023 11:00:23 +0000
Message-ID: <87354t8pqg.fsf@epam.com>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger> <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger> <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger> <87leivw8qp.fsf@epam.com>
 <ZD0cyXLt1knXyUzA@Air-de-Roger>
 <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>
 <ZD0krtCOrEwiKMFP@Air-de-Roger>
In-Reply-To: <ZD0krtCOrEwiKMFP@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.8.9; emacs 28.2
Content-Type: text/plain; charset="utf-8"
Content-ID: <A9C7CAA54E123547B6EA5B3E53AAC3EB@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB3710.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b5c0f831-f678-4d12-3abf-08db42579aee
X-MS-Exchange-CrossTenant-originalarrivaltime: 21 Apr 2023 11:00:23.1857
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: XiIilNslULr/JeE4s5tfHPWf1AuKhb7ZS19ACn75h9HwjgBDomvbk0HLNPhwsoJvBjXsvjzHL5iJxzv57rc8kJUMPMeIqtfuC1OOaORsrAA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR03MB9763

Hello Roger,

Roger Pau Monné <roger.pau@citrix.com> writes:

> On Mon, Apr 17, 2023 at 12:34:31PM +0200, Jan Beulich wrote:
>> On 17.04.2023 12:17, Roger Pau Monné wrote:
>> > On Fri, Apr 14, 2023 at 01:30:39AM +0000, Volodymyr Babchuk wrote:
>> >> Above I have proposed another view on this. I hope it will work for
>> >> you. Just to reiterate, the idea is to allow "harmless" refcounts to be
>> >> left after returning from pci_remove_device(). By "harmless" I mean that
>> >> owners of those refcounts will not try to access the physical PCI
>> >> device if pci_remove_device() is already finished.
>> > 
>> > I'm not strictly a maintainer of this piece of code, albeit I have an
>> > opinion.  I would also like to hear Jan's opinion, since he is the
>> > maintainer.
>> 
>> I'm afraid I can't really appreciate the term "harmless refcounts". Whoever
>> holds a ref is entitled to access the device. As stated before, I see only
>> two ways of getting things consistent: Either pci_remove_device() is
>> invoked upon dropping of the last ref,
>
> With this approach, what would be the implementation of
> PHYSDEVOP_manage_pci_remove?  Would it just check whether the pdev
> exists and either return 0 or -EBUSY?
>

Okay, I am preparing patches with the behavior you proposed. To test it,
I artificially set the refcount to 2 and indeed PHYSDEVOP_manage_pci_remove
returned -EBUSY, which propagated to the Linux driver. The problem is that
the Linux driver can't do anything with this. It just displayed an error
message and removed the device anyway. This is because Linux sends
PHYSDEVOP_manage_pci_remove in the device_remove() call path and there is no
way to prevent the device removal. So, the admin is not able to try this
again.

As a workaround, I can create a hypercall continuation in case
pci_remove_device() returns -EBUSY. What is your opinion?

-- 
WBR, Volodymyr


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 12:10:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 12:10:39 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Henry Wang <Henry.Wang@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"community.manager@xenproject.org" <community.manager@xenproject.org>, Julien
 Grall <julien@xen.org>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Anthony PERARD
	<anthony.perard@citrix.com>, George Dunlap <george.dunlap@cloud.com>, Juergen
 Gross <jgross@suse.com>, Wei Chen <Wei.Chen@arm.com>
Subject: Re: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
 release: Proposed release schedule)
Thread-Topic: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
 release: Proposed release schedule)
Thread-Index: AQHZdACYcHm0T1gRME+oVIqMr2d2RK81SjQAgAABA4CAAAMiAIAAXd4A
Date: Fri, 21 Apr 2023 12:09:34 +0000
Message-ID: <8E2CF14D-9675-439B-A157-AA18EE4CBD04@arm.com>
References:
 <AS8PR08MB7991424A3167C70A9B29530C92929@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <AS8PR08MB7991EAA2EF0E381FAFB4C1FD92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <26AF44F7-A028-4737-9BF9-266CBAA83A18@arm.com>
 <AS8PR08MB79918CB17D6E1C987B5F048992609@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <AS8PR08MB79919F9CE0B2BF80E7103FB592609@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To:
 <AS8PR08MB79919F9CE0B2BF80E7103FB592609@AS8PR08MB7991.eurprd08.prod.outlook.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <9BFE98376EB8D14D8B39B1E343D360EF@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Hi Henry,

> On 21 Apr 2023, at 08:33, Henry Wang <Henry.Wang@arm.com> wrote:
> 
> Hi,
> 
>> -----Original Message-----
>> Subject: RE: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
>> release: Proposed release schedule)
>> 
>> Hi Bertrand,
>> 
>>> -----Original Message-----
>>> From: Bertrand Marquis <Bertrand.Marquis@arm.com>
>>> Subject: Re: Xen 4.18 release schedule update and poll (was: RE: Xen 4.18
>>> release: Proposed release schedule)
>>> 
>>> Hi,
>>> 
>>>> On 21 Apr 2023, at 05:22, Henry Wang <Henry.Wang@arm.com> wrote:
>>>> 
>>>> Hi all,
>>>> 
>>>> Following the discussion in April community call, here comes the two
>>>> updated possible release schedule options that I came up with.
>>>> 
>>>> Both of these two options will satisfy the requirements/concerns that
>>>> I've received so far. But I personally would prefer the option 2 as we
>>>> shouldn't expect there will be much progress happen in August due to
>>>> the EU holiday season. I wonder if anyone has any objections or alternate
>>>> suggestions.
>>> 
>>> Even if the release date is in september, all major deadlines will happen in
>>> August.
>>> 
>>> So how about introducing an end of october possibility ?
>> 
>> Thanks for raising this. I am personally good with this plan. If nobody objects
>> this proposal then yes why not :-)
> 
> And here comes the option 3:
> 
> ** Proposed option 3: Wed Oct 25, 2023 **
> (+10 months from Xen 4.17 release)
> 
> - Last posting date          Fri Aug 11, 2023
> 
> Patches adding new features are expected to be posted to the mailing
> list by this date, although perhaps not in their final version.
> 
> - Feature freeze             Fri Sep 1, 2023 (+3 weeks from Last posting date)
> 
> Patches adding new features should be committed by this date.
> Straightforward bugfixes may continue to be accepted by maintainers.
> 
> - Code freeze                Fri Sep 15, 2023 (+2 weeks from Feature freeze)
> 
> Bugfixes only.
> 
> - Hard code freeze           Fri Oct 6, 2023 (+3 weeks from Code freeze)
> 
> Bugfixes for serious bugs (including regressions), and low-risk fixes only.
> 
> - Final commits              Fri Oct 20, 2023 (+2 weeks from Hard code freeze)
> 
> Branch off staging-4.18.
> 
> - Release                    Wed Oct 25, 2023

I will vote for 3 :-)

Cheers
Bertrand

> 
> Kind regards,
> Henry
> 
>>> 
>>> Cheers
>>> Bertrand
>>> 
>>>> 
>>>> Please don't hesitate to raise your concerns and opinions. I would
>>>> encourage that the feedback collection is cut off by the middle of May
>>>> (say May 19). If nobody will have anything better, then let's go option 2
>>>> by "lazy consensus". Thanks.
>>>> 
>>>> ** Proposed option 1: Wed Aug 30, 2023 **
>>>> (+8 months from Xen 4.17 release)
>>>> 
>>>> - Last posting date          Fri Jun 16, 2023
>>>> 
>>>> Patches adding new features are expected to be posted to the mailing
>>>> list by this date, although perhaps not in their final version.
>>>> 
>>>> (Note that Xen Summit is Jun 24 - 26, 2023)
>>>> 
>>>> - Feature freeze             Fri Jul 7, 2023 (+3 weeks from Last posting date)
>>>> 
>>>> Patches adding new features should be committed by this date.
>>>> Straightforward bugfixes may continue to be accepted by maintainers.
>>>> 
>>>> - Code freeze                Fri Jul 21, 2023 (+2 weeks from Feature freeze)
>>>> 
>>>> Bugfixes only.
>>>> 
>>>> - Hard code freeze           Fri Aug 11, 2023 (+3 weeks from Code freeze)
>>>> 
>>>> Bugfixes for serious bugs (including regressions), and low-risk fixes only.
>>>> 
>>>> - Final commits              Fri Aug 25, 2023 (+2 weeks from Hard code freeze)
>>>> 
>>>> Branch off staging-4.18.
>>>> 
>>>> - Release                    Wed Aug 30, 2023
>>>> 
>>>> 
>>>> ** Proposed option 2: Wed Sep 27, 2023 (or the first week of Oct)**
>>>> (+9 months from Xen 4.17 release)
>>>> 
>>>> - Last posting date          Fri Jul 14, 2023
>>>> 
>>>> Patches adding new features are expected to be posted to the mailing
>>>> list by this date, although perhaps not in their final version.
>>>> 
>>>> - Feature freeze             Fri Aug 4, 2023 (+3 weeks from Last posting date)
>>>> 
>>>> Patches adding new features should be committed by this date.
>>>> Straightforward bugfixes may continue to be accepted by maintainers.
>>>> 
>>>> - Code freeze                Fri Aug 18, 2023 (+2 weeks from Feature freeze)
>>>> 
>>>> Bugfixes only.
>>>> 
>>>> - Hard code freeze           Fri Sep 8, 2023 (+3 weeks from Code freeze)
>>>> 
>>>> Bugfixes for serious bugs (including regressions), and low-risk fixes only.
>>>> 
>>>> - Final commits              Fri Sep 22, 2023 (+2 weeks from Hard code freeze)
>>>> 
>>>> Branch off staging-4.18.
>>>> 
>>>> - Release                    Wed Sep 27, 2023
>>>> 
>>>> Kind regards,
>>>> Henry




From xen-devel-bounces@lists.xenproject.org Fri Apr 21 12:24:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 12:24:42 +0000
Message-ID: <d9ab412f-b1d1-3fef-a956-05373ce76dd2@suse.com>
Date: Fri, 21 Apr 2023 14:24:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Content-Language: en-US
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Kevin Tian <kevin.tian@intel.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger> <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger> <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger> <87leivw8qp.fsf@epam.com>
 <ZD0cyXLt1knXyUzA@Air-de-Roger>
 <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>
 <ZD0krtCOrEwiKMFP@Air-de-Roger> <87354t8pqg.fsf@epam.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <87354t8pqg.fsf@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 21.04.2023 13:00, Volodymyr Babchuk wrote:
> 
> Hello Roger,
> 
> Roger Pau Monné <roger.pau@citrix.com> writes:
> 
>> On Mon, Apr 17, 2023 at 12:34:31PM +0200, Jan Beulich wrote:
>>> On 17.04.2023 12:17, Roger Pau Monné wrote:
>>>> On Fri, Apr 14, 2023 at 01:30:39AM +0000, Volodymyr Babchuk wrote:
>>>>> Above I have proposed another view on this. I hope it will work for
>>>>> you. Just to reiterate, the idea is to allow "harmless" refcounts to be
>>>>> left after returning from pci_remove_device(). By "harmless" I mean that
>>>>> owners of those refcounts will not try to access the physical PCI
>>>>> device if pci_remove_device() is already finished.
>>>>
>>>> I'm not strictly a maintainer of this piece of code, albeit I have an
>>>> opinion.  I would also like to hear Jan's opinion, since he is the
>>>> maintainer.
>>>
>>> I'm afraid I can't really appreciate the term "harmless refcounts". Whoever
>>> holds a ref is entitled to access the device. As stated before, I see only
>>> two ways of getting things consistent: Either pci_remove_device() is
>>> invoked upon dropping of the last ref,
>>
>> With this approach, what would be the implementation of
>> PHYSDEVOP_manage_pci_remove?  Would it just check whether the pdev
>> exist and either return 0 or -EBUSY?
>>
> 
> Okay, I am preparing patches with the behavior you proposed. To test it,
> I artificially set the refcount to 2 and indeed PHYSDEVOP_manage_pci_remove
> returned -EBUSY, which propagated to the Linux driver. The problem is that
> the Linux driver can't do anything with this: it just displayed an error
> message and removed the device anyway. This is because Linux sends
> PHYSDEVOP_manage_pci_remove in the device_remove() call path and there is no
> way to prevent the device removal. So the admin is not able to try this
> again.

So maybe Linux's issuing of the call needs moving elsewhere? Or we need
a new sub-op, such that PHYSDEVOP_manage_pci_remove can remain purely a
last-moment notification?

> As a workaround, I can create a hypercall continuation in case
> pci_remove_device() returns -EBUSY. What is your opinion?

How would that help? You'd then spin perhaps for hours or days ...

Jan
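[Editorial aside: the refcount-gated removal model discussed above (teardown happens only when the last reference is dropped, otherwise the caller gets -EBUSY) can be sketched roughly as below. All names are hypothetical illustrations, not the actual Xen internals.]

```c
#include <stdatomic.h>

#ifndef EBUSY
#define EBUSY 16  /* the errno value propagated back to the caller */
#endif

/* Hypothetical miniature of a pdev carrying a reference count. */
struct pdev {
    atomic_int refcount;  /* 1 == only the remover holds a reference */
    int removed;
};

/* Remove the device only if we hold the sole remaining reference;
 * otherwise report -EBUSY, as the discussed PHYSDEVOP_manage_pci_remove
 * behavior would. The compare-and-swap makes the check-and-drop atomic. */
int pdev_try_remove(struct pdev *d)
{
    int expected = 1;

    if (!atomic_compare_exchange_strong(&d->refcount, &expected, 0))
        return -EBUSY;
    d->removed = 1;  /* real teardown of the physical device goes here */
    return 0;
}
```

Under this model the operation either succeeds atomically or fails cleanly, which is also why a continuation that merely retries could spin indefinitely if the outstanding references are never dropped.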


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 12:45:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 12:45:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524570.815606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppq8b-0006qK-KA; Fri, 21 Apr 2023 12:45:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524570.815606; Fri, 21 Apr 2023 12:45:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppq8b-0006qD-HT; Fri, 21 Apr 2023 12:45:17 +0000
Received: by outflank-mailman (input) for mailman id 524570;
 Fri, 21 Apr 2023 12:45:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vNOs=AM=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1ppq8Z-0006q7-Pv
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 12:45:16 +0000
Received: from mail-pl1-x629.google.com (mail-pl1-x629.google.com
 [2607:f8b0:4864:20::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5bb817dd-e042-11ed-b220-6b7b168915f2;
 Fri, 21 Apr 2023 14:45:14 +0200 (CEST)
Received: by mail-pl1-x629.google.com with SMTP id
 d9443c01a7336-1a8097c1ccfso24228895ad.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 Apr 2023 05:45:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5bb817dd-e042-11ed-b220-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682081112; x=1684673112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=A4gG0rQ74kXCDWOS46pNxleDjo9didPPqpRTQSkB1RE=;
        b=VKbxMr6bJRA+q/7wsJsfQs37LCnN6Y78Av3U2mLOkilNVkMhI2KwweAveXbOg9hjYe
         LotBSzd5uTahjZrHAOQTE7Rcu7GL7/IdbXoGrn/rXEiYSekpHbK/U20PFbhI3YG/1kmd
         Ci63tdDkAL2sl75WoUdLQWirhtVnVvgxzMp1QM17GgSezHMVBsdh4eBjfC6RR0HN0z5s
         Nrd1k4Ri0cNCW2/nJ9pgt8kFWskYsW3pOhyDYiZfhxb4+u33nZ/fTPBUyfH2Vdans/+Z
         shZ3RUmSdW13KISSZrwehfwaw9/fF6GvSkBvR6/3cvOkS4/4oTjN2DXeTWWvWcPFoNFD
         TSsw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682081112; x=1684673112;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=A4gG0rQ74kXCDWOS46pNxleDjo9didPPqpRTQSkB1RE=;
        b=Tl+f0EZTslC1+mpPRBYklMf1G9Jydyd/XKzCkFtkM7/bb3g3YlsFQQx2UM6vCRa+JX
         PVibvQoJ0NXZU//WnJYIe8WardE+WedPcJcJYpZ1w6oGIOUeNQvXTLxDQe1I+B0Xjn+6
         gYBbN74Nf9dG0CEu8IIRON+MC7iW7Ci2DMZHEV4oAa7oNF8I/qLhfled49eAdptKByxX
         1Pn0YW+UanhSZW21NG/VOu9QjhyXnxRkqrt2Dg46an6xOwf420SPu8VRRSAYWFMmooHR
         Mi2i0D6K5KVI+QgX09fmhcjgadoJBNLaB6j4mBXD4PvwAwAXsG8L/8GbutO7kxaFc52G
         bJmg==
X-Gm-Message-State: AAQBX9emQ2j1m10E/KOPpMQ0oaDsuMLBY9qBq+kYWBhhaTLDdrUEvM/x
	QJT/esRMIlB8PbEJ2vxRH9XUXXMnNdem/Ei8rX0=
X-Google-Smtp-Source: AKy350bIXozbSDqI+IeXGgSHUcUQgbhOi7SfDLZDzi1lDvxOyklUwotWm0VJaZucnZsIMM+IkjNWYh9qM1Emk0ho6e0=
X-Received: by 2002:a17:903:11c4:b0:1a6:81bd:c4d9 with SMTP id
 q4-20020a17090311c400b001a681bdc4d9mr5404301plh.39.1682081112063; Fri, 21 Apr
 2023 05:45:12 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <bad09a6f-d41e-ab8e-2291-7fde3b646710@xen.org> <CA+SAi2uPZ=Dq1GxF9Kj1zCO=nbb55ruVG31kH-TgdpR6bLznvA@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com> <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
 <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
 <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
 <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com> <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com>
In-Reply-To: <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Fri, 21 Apr 2023 15:49:32 +0300
Message-ID: <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Michal Orzel <michal.orzel@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>
Content-Type: multipart/alternative; boundary="0000000000002b03cd05f9d80721"

--0000000000002b03cd05f9d80721
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello Michal,

I was not able to enable earlyprintk in Xen for now,
so I decided to take another approach.
This is the complete Xen command line that I found:

(XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2
dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0

So you are absolutely right about the command line.
Now I am going to find out why Xen did not get the correct parameters from
the device tree.

Regards,
Oleg

On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:

>
> On 21/04/2023 10:04, Oleg Nikitenko wrote:
> >
> >
> >
> > Hello Michal,
> >
> > Yes, I use yocto.
> >
> > Yesterday all day long I tried to follow your suggestions.
> > I faced a problem.
> > Manually in the xen config build file I pasted the strings:
> In the .config file or in some Yocto file (listing additional Kconfig
> options) added to SRC_URI?
> You shouldn't really modify the .config file, but if you do, you should
> execute "make olddefconfig" afterwards.
>
> >
> > CONFIG_EARLY_PRINTK
> > CONFIG_EARLY_PRINTK_ZYNQMP
> > CONFIG_EARLY_UART_CHOICE_CADENCE
> I hope you added =y to them.
>
> Anyway, you have at least the following solutions:
> 1) Run bitbake xen -c menuconfig to properly set early printk
> 2) Find out how you enable other Kconfig options in your project (e.g.
> CONFIG_COLORING=y that is not enabled by default)
> 3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
> CONFIG_EARLY_PRINTK_ZYNQMP=y
>
> ~Michal
>
> >
> > Host hangs in build time.
> > Maybe I did not set something in the config build file ?
> >
> > Regards,
> > Oleg
> >
> > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> >
> >     Thanks Michal,
> >
> >     You gave me an idea.
> >     I am going to try it today.
> >
> >     Regards,
> >     O.
> >
> >     On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> >
> >         Thanks Stefano.
> >
> >         I am going to do it today.
> >
> >         Regards,
> >         O.
> >
> >         On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >
> >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> >             > Hi Michal,
> >             >
> >             > I corrected xen's command line.
> >             > Now it is
> >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
> >             > dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> >             > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> >
> >             4 colors is way too many for Xen, just do xen_colors=0-0. There is
> >             no advantage in using more than 1 color for Xen.
> >
> >             4 colors is too few for dom0, if you are giving 1600M of memory
> >             to Dom0. Each color is 256M. For 1600M you should give at least
> >             7 colors. Try:
> >
> >             xen_colors=0-0 dom0_colors=1-8
> >
> >
> >
> >             > Unfortunately the result was the same.
> >             >
> >             > (XEN)  - Dom0 mode: Relaxed
> >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> >             > (XEN) P2M: 3 levels with order-1 root, VTCR
> 0x0000000080023558
> >             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> >             > (XEN) Coloring general information
> >             > (XEN) Way size: 64kB
> >             > (XEN) Max. number of colors available: 16
> >             > (XEN) Xen color(s): [ 0 ]
> >             > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> >             > (XEN) Color array allocation failed for dom0
> >             > (XEN)
> >             > (XEN) ****************************************
> >             > (XEN) Panic on CPU 0:
> >             > (XEN) Error creating domain 0
> >             > (XEN) ****************************************
> >             > (XEN)
> >             > (XEN) Reboot in five seconds...
> >             >
> >             > I am going to find out how command line arguments are passed
> >             > and parsed.
> >             >
> >             > Regards,
> >             > Oleg
> >             >
> >             > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> >             >       Hi Michal,
> >             >
> >             > You pointed me right at the problem. Thank you.
> >             > I am going to use your point.
> >             > Let's see what happens.
> >             >
> >             > Regards,
> >             > Oleg
> >             >
> >             >
> >             > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
> >             >       Hi Oleg,
> >             >
> >             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
> >             >       >
> >             >       >
> >             >       >
> >             >       > Hello Stefano,
> >             >       >
> >             >       > Thanks for the clarification.
> >             >       > My company uses yocto for image generation.
> >             >       > What kind of information do you need to consult me
> >             >       > in this case?
> >             >       >
> >             >       > Maybe module sizes/addresses which were mentioned
> >             >       > by @Julien Grall <julien@xen.org>?
> >             >
> >             >       Sorry for jumping into the discussion, but FWICS the Xen
> >             >       command line you provided seems to be not the one Xen
> >             >       booted with. The error you are observing is most likely
> >             >       due to the dom0 colors configuration not being specified
> >             >       (i.e. the lack of a dom0_colors=<> parameter). Although in
> >             >       the command line you provided this parameter is set, I
> >             >       strongly doubt that this is the actual command line in use.
> >             >
> >             >       You wrote:
> >             >       xen,xen-bootargs = "console=dtuart dtuart=serial0
> >             >       dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0
> >             >       vwfi=native sched=null timer_slop=0 way_szize=65536
> >             >       xen_colors=0-3 dom0_colors=4-7";
> >             >
> >             >       but:
> >             >       1) way_szize has a typo
> >             >       2) you specified 4 colors (0-3) for Xen, but the boot
> >             >       log says that Xen has only one:
> >             >       (XEN) Xen color(s): [ 0 ]
> >             >
> >             >       This makes me believe that no colors configuration
> >             >       actually ends up in the command line that Xen booted with.
> >             >       A single color for Xen is a "default if not specified"
> >             >       and the way size was probably calculated by asking the HW.
> >             >
> >             >       So I would suggest first cross-checking the command
> >             >       line in use.
> >             >
> >             >       ~Michal
> >             >
> >             >
> >             >       >
> >             >       > Regards,
> >             >       > Oleg
> >             >       >
> >             >       > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >             >       >
> >             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> >             >       >     > Hi Julien,
> >             >       >     >
> >             >       >     > >> This feature has not been merged in Xen
> upstream yet
> >             >       >     >
> >             >       >     > > would assume that upstream + the series on
> >             >       >     > > the ML [1] work
> >             >       >     >
> >             >       >     > Please clarify this point.
> >             >       >     > Because the two thoughts are controversial.
> >             >       >
> >             >       >     Hi Oleg,
> >             >       >
> >             >       >     As Julien wrote, there is nothing controversial.
> >             >       >     As you are aware, Xilinx maintains a separate Xen
> >             >       >     tree specific for Xilinx here:
> >             >       >     https://github.com/xilinx/xen
> >             >       >
> >             >       >     and the branch you are using (xlnx_rebase_4.16)
> >             >       >     comes from there.
> >             >       >
> >             >       >
> >             >       >     Instead, the upstream Xen tree lives here:
> >             >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> >             >       >
> >             >       >
> >             >       >     The Cache Coloring feature that you are trying to
> >             >       >     configure is present in xlnx_rebase_4.16, but not
> >             >       >     yet present upstream (there is an outstanding patch
> >             >       >     series to add cache coloring to Xen upstream, but
> >             >       >     it hasn't been merged yet).
> >             >       >
> >             >       >
> >             >       >     Anyway, if you are using xlnx_rebase_4.16 it
> >             >       >     doesn't matter too much for you, as you already
> >             >       >     have Cache Coloring as a feature there.
> >             >       >
> >             >       >
> >             >       >     I take it you are using ImageBuilder to generate
> >             >       >     the boot configuration? If so, please post the
> >             >       >     ImageBuilder config file that you are using.
> >             >       >
> >             >       >     But from the boot message, it looks like the colors
> >             >       >     configuration for Dom0 is incorrect.
> >             >       >
> >             >
> >             >
> >             >
> >
>
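[Editorial aside: Stefano's sizing rule above, where each color maps 256M on this platform so a 1600M dom0 needs at least ceil(1600/256) = 7 colors, is a simple round-up division. A minimal sketch, with a hypothetical helper name:]

```c
/* Round-up division: how many cache colors a domain needs, given the
 * memory granted per color (256M per color in the setup discussed above). */
unsigned int colors_needed(unsigned long mem_mb, unsigned long mb_per_color)
{
    return (unsigned int)((mem_mb + mb_per_color - 1) / mb_per_color);
}
```

For dom0_mem=1600M this yields 7, consistent with the suggested dom0_colors=1-8 (eight colors, one above the minimum of seven).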

bWFpbnRhaW5zIGEgc2VwYXJhdGUgWGVuIHRyZWUgc3BlY2lmaWMgZm9yIFhpbGlueCBoZXJlOjxi
cj4NCiZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoDxh
IGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRh
cmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhy
ZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdl
dD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7ICZsdDs8YSBo
cmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJn
ZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+ICZsdDs8YSBocmVm
PSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9
Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0OyZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgYW5kIHRoZSBicmFuY2gg
eW91IGFyZSB1c2luZyAoeGxueF9yZWJhc2VfNC4xNikgY29tZXMgZnJvbSB0aGVyZS48YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEluc3RlYWQsIHRoZSB1cHN0cmVhbSBY
ZW4gdHJlZSBsaXZlcyBoZXJlOjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoDxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdl
Yi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+
aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiAm
bHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1z
dW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMu
eGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0OyAmbHQ7PGEgaHJlZj0i
aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3
ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+ICZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMu
eGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFy
Z2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9
c3VtbWFyeTwvYT4mZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgVGhlIENhY2hlIENvbG9yaW5nIGZlYXR1cmUgdGhhdCB5b3UgYXJlIHRyeWluZyB0byBjb25m
aWd1cmUgaXMgcHJlc2VudDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoGluIHhsbnhfcmViYXNlXzQuMTYsIGJ1dCBub3QgeWV0IHByZXNlbnQg
dXBzdHJlYW0gKHRoZXJlIGlzIGFuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgb3V0c3RhbmRpbmcgcGF0Y2ggc2VyaWVzIHRvIGFkZCBjYWNo
ZSBjb2xvcmluZyB0byBYZW4gdXBzdHJlYW0gYnV0IGl0PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgaGFzbiYjMzk7dCBiZWVuIG1lcmdlZCB5
ZXQuKTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgQW55d2F5LCBpZiB5
b3UgYXJlIHVzaW5nIHhsbnhfcmViYXNlXzQuMTYgaXQgZG9lc24mIzM5O3QgbWF0dGVyIHRvbyBt
dWNoIGZvcjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoHlvdSBhcyB5b3UgYWxyZWFkeSBoYXZlIENhY2hlIENvbG9yaW5nIGFzIGEgZmVhdHVy
ZSB0aGVyZS48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEkgdGFrZSB5
b3UgYXJlIHVzaW5nIEltYWdlQnVpbGRlciB0byBnZW5lcmF0ZSB0aGUgYm9vdCBjb25maWd1cmF0
aW9uPyBJZjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoHNvLCBwbGVhc2UgcG9zdCB0aGUgSW1hZ2VCdWlsZGVyIGNvbmZpZyBmaWxlIHRoYXQg
eW91IGFyZSB1c2luZy48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgQnV0IGZyb20gdGhlIGJvb3QgbWVzc2FnZSwgaXQgbG9va3MgbGlrZSB0aGUgY29sb3Jz
IGNvbmZpZ3VyYXRpb24gZm9yPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgRG9tMCBpcyBpbmNvcnJlY3QuPGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCDCoCDC
oCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0OyA8YnI+DQomZ3Q7IDxicj4NCjwvYmxvY2txdW90ZT48L2Rpdj4NCg==
--0000000000002b03cd05f9d80721--


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 12:52:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 12:52:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524576.815616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppqFI-0008Lw-F6; Fri, 21 Apr 2023 12:52:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524576.815616; Fri, 21 Apr 2023 12:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppqFI-0008Lp-CQ; Fri, 21 Apr 2023 12:52:12 +0000
Received: by outflank-mailman (input) for mailman id 524576;
 Fri, 21 Apr 2023 12:52:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v1kD=AM=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1ppqFH-0008Lj-9d
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 12:52:11 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20623.outbound.protection.outlook.com
 [2a01:111:f400:7e89::623])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 521206ad-e043-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 14:52:08 +0200 (CEST)
Received: from BN9PR03CA0660.namprd03.prod.outlook.com (2603:10b6:408:13b::35)
 by BL0PR12MB4946.namprd12.prod.outlook.com (2603:10b6:208:1c5::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Fri, 21 Apr
 2023 12:52:04 +0000
Received: from BN8NAM11FT046.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:13b:cafe::d9) by BN9PR03CA0660.outlook.office365.com
 (2603:10b6:408:13b::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.25 via Frontend
 Transport; Fri, 21 Apr 2023 12:52:03 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT046.mail.protection.outlook.com (10.13.177.127) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6319.23 via Frontend Transport; Fri, 21 Apr 2023 12:52:03 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 21 Apr
 2023 07:52:03 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 21 Apr
 2023 05:52:03 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 21 Apr 2023 07:52:02 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 521206ad-e043-11ed-8611-37d641c3527e
Message-ID: <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com>
Date: Fri, 21 Apr 2023 14:52:01 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: xen cache colors in ARM
Content-Language: en-US
To: Oleg Nikitenko <oleshiiwood@gmail.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org>
 <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com>
 <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
 <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
 <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
 <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com>
 <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com>
 <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hi Oleg,

On 21/04/2023 14:49, Oleg Nikitenko wrote:
> 	
> 
> 
> Hello Michal,
> 
> I have not been able to enable earlyprintk in Xen so far,
> so I decided to take another approach.
> This is the complete Xen command line that I recovered:
> 
> (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
Yes, adding a printk() in Xen was also a good idea.

> 
> So you are absolutely right about a command line.
> Now I am going to find out why Xen did not receive the correct parameters from the device tree.
Maybe you will find this document helpful:
https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
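For reference, that document describes passing the Xen command line through the `xen,xen-bootargs` property of the /chosen node. A minimal sketch (the argument values below are taken from this thread, with Stefano's suggested color ranges; this is not a verified configuration):

```dts
/* Sketch: Xen command line via the device tree, per
 * docs/misc/arm/device-tree/booting.txt. Values mirror this thread. */
/ {
	chosen {
		xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-0 dom0_colors=1-8";
	};
};
```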

~Michal

> 
> Regards,
> Oleg
> 
> On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
> 
> 
>     On 21/04/2023 10:04, Oleg Nikitenko wrote:
>     >       
>     >
>     >
>     > Hello Michal,
>     >
>     > Yes, I use yocto.
>     >
>     > Yesterday all day long I tried to follow your suggestions.
>     > I faced a problem.
>     > Manually, in the Xen build config file, I pasted the following strings:
>     In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>     You shouldn't really modify the .config file directly, but if you do, you should execute "make olddefconfig" afterwards.
> 
>     >
>     > CONFIG_EARLY_PRINTK
>     > CONFIG_EARLY_PRINTK_ZYNQMP
>     > CONFIG_EARLY_UART_CHOICE_CADENCE
>     I hope you added =y to them.
> 
>     Anyway, you have at least the following solutions:
>     1) Run bitbake xen -c menuconfig to properly set early printk
>     2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y that is not enabled by default)
>     3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>     CONFIG_EARLY_PRINTK_ZYNQMP=y
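Option 3 can be sketched as follows (the defconfig path is the one Michal quotes; a scratch copy is used here only so the commands run outside a Xen tree):

```shell
# Sketch of option 3: append the Kconfig symbol to the arm64 defconfig,
# then refresh the configuration. In a real tree the file is
# xen/arch/arm/configs/arm64_defconfig.
defconfig=$(mktemp)
printf 'CONFIG_EARLY_PRINTK_ZYNQMP=y\n' >> "$defconfig"
grep 'CONFIG_EARLY_PRINTK' "$defconfig"   # confirm the symbol landed
# In the Xen tree afterwards: make olddefconfig
```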
> 
>     ~Michal
> 
>     >
>     > The host hangs at build time.
>     > Maybe I did not set something in the build config file?
>     >
>     > Regards,
>     > Oleg
>     >
>     > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>     >
>     >     Thanks Michal,
>     >
>     >     You gave me an idea.
>     >     I am going to try it today.
>     >
>     >     Regards,
>     >     O.
>     >
>     >     On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>     >
>     >         Thanks Stefano.
>     >
>     >         I am going to do it today.
>     >
>     >         Regards,
>     >         O.
>     >
>     >         On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
>     >
>     >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>     >             > Hi Michal,
>     >             >
>     >             > I corrected xen's command line.
>     >             > Now it is
>     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>     >             > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>     >
>     >             4 colors is way too many for Xen; just use xen_colors=0-0. There is no
>     >             advantage in using more than one color for Xen.
>     >
>     >             4 colors are too few for Dom0 if you are giving it 1600M of memory.
>     >             Each color is 256M. For 1600M you should give at least 7 colors. Try:
>     >
>     >             xen_colors=0-0 dom0_colors=1-8
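Stefano's arithmetic above can be checked mechanically (assuming, per his message, 256M of memory per color):

```python
import math

MIB_PER_COLOR = 256    # each color maps 256M of memory (per Stefano)
DOM0_MEM_MIB = 1600    # dom0_mem=1600M from the command line

# Minimum number of colors so that dom0's allocation fits in its colors
min_colors = math.ceil(DOM0_MEM_MIB / MIB_PER_COLOR)
print(min_colors)  # 7, hence a range like dom0_colors=1-7 (1-8 adds headroom)
```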
>     >
>     >
>     >
>     >             > Unfortunately the result was the same.
>     >             >
>     >             > (XEN)  - Dom0 mode: Relaxed
>     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>     >             > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>     >             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>     >             > (XEN) Coloring general information
>     >             > (XEN) Way size: 64kB
>     >             > (XEN) Max. number of colors available: 16
>     >             > (XEN) Xen color(s): [ 0 ]
>     >             > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>     >             > (XEN) Color array allocation failed for dom0
>     >             > (XEN)
>     >             > (XEN) ****************************************
>     >             > (XEN) Panic on CPU 0:
>     >             > (XEN) Error creating domain 0
>     >             > (XEN) ****************************************
>     >             > (XEN)
>     >             > (XEN) Reboot in five seconds...
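A side note on the numbers in this log: with a 4KiB page size (my assumption here, not stated in the thread), the maximum number of colors is the way size divided by the page size, which is consistent with the 64kB/16-color pair reported above:

```python
PAGE_SIZE = 4096          # assumed 4KiB translation granule
WAY_SIZE = 64 * 1024      # "(XEN) Way size: 64kB" from the log

max_colors = WAY_SIZE // PAGE_SIZE
print(max_colors)  # 16, matching "(XEN) Max. number of colors available: 16"
```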
>     >             >
>     >             > I am going to find out how the command line arguments are passed and parsed.
>     >             >
>     >             > Regards,
>     >             > Oleg
>     >             >
>     >             > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>     >             >       Hi Michal,
>     >             >
>     >             > You pointed my nose right at the problem. Thank you.
>     >             > I am going to use your point.
>     >             > Let's see what happens.
>     >             >
>     >             > Regards,
>     >             > Oleg
>     >             >
>     >             >
>     >             > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
>     >             >       Hi Oleg,
>     >             >
>     >             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>     >             >       >       
>     >             >       >
>     >             >       >
>     >             >       > Hello Stefano,
>     >             >       >
>     >             >       > Thanks for the clarification.
>     >             >       > My company uses yocto for image generation.
>     >             >       > What kind of information do you need from me in this case?
>     >             >       >
>     >             >       > Maybe the module sizes/addresses that were mentioned by @Julien Grall <julien@xen.org>?
>     >             >
>     >             >       Sorry for jumping into the discussion, but FWICS the Xen command line you provided seems not to be the one
>     >             >       Xen booted with. The error you are observing is most likely due to the dom0 color configuration not being
>     >             >       specified (i.e. the lack of a dom0_colors=<> parameter). Although this parameter is set in the command line
>     >             >       you provided, I strongly doubt that it is the actual command line in use.
>     >             >
>     >             >       You wrote:
>     >             >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native
>     >             >       sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>     >             >
>     >             >       but:
>     >             >       1) way_szize has a typo
>     >             >       2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
>     >             >       (XEN) Xen color(s): [ 0 ]
>     >             >
>     >             >       This makes me believe that no color configuration actually ended up in the command line that Xen booted with.
>     >             >       A single color for Xen is the "default if not specified", and the way size was probably calculated by querying the HW.
>     >             >
>     >             >       So I would suggest first cross-checking the command line in use.
>     >             >
>     >             >       ~Michal
>     >             >
>     >             >
>     >             >       >
>     >             >       > Regards,
>     >             >       > Oleg
>     >             >       >
>     >             >       > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
>     >             >       >
>     >             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>     >             >       >     > Hi Julien,
>     >             >       >     >
>     >             >       >     > >> This feature has not been merged in Xen upstream yet
>     >             >       >     >
>     >             >       >     > > would assume that upstream + the series on the ML [1] work
>     >             >       >     >
>     >             >       >     > Please clarify this point.
>     >             >       >     > Because the two thoughts are controversial.
>     >             >       >
>     >             >       >     Hi Oleg,
>     >             >       >
>     >             >       >     As Julien wrote, there is nothing controversial. As you are aware,
>     >             >       >     Xilinx maintains a separate Xen tree specific for Xilinx here:
>     >             >       >     https://github.com/xilinx/xen
>     >             >       >
>     >             >       >     and the branch you are using (xlnx_rebase_4.16) comes from there.
>     >             >       >
>     >             >       >
>     >             >       >     Instead, the upstream Xen tree lives here:
>     >             >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>     >             >       >
>     >             >       >
>     >             >       >     The Cache Coloring feature that you are trying to configure is present
>     >             >       >     in xlnx_rebase_4.16, but not yet present upstream (there is an
>     >             >       >     outstanding patch series to add cache coloring to Xen upstream but it
>     >             >       >     hasn't been merged yet.)
>     >             >       >
>     >             >       >
>     >             >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
>     >             >       >     you as you already have Cache Coloring as a feature there.
>     >             >       >
>     >             >       >
>     >             >       >     I take it you are using ImageBuilder to generate the boot configuration? If
>     >             >       >     so, please post the ImageBuilder config file that you are using.
>     >             >       >
>     >             >       >     But from the boot message, it looks like the colors configuration for
>     >             >       >     Dom0 is incorrect.
>     >             >       >
>     >             >
>     >             >
>     >             >
>     >
> 


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 13:03:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 13:03:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524580.815626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppqPZ-0001RP-FK; Fri, 21 Apr 2023 13:02:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524580.815626; Fri, 21 Apr 2023 13:02:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppqPZ-0001RI-Ch; Fri, 21 Apr 2023 13:02:49 +0000
Received: by outflank-mailman (input) for mailman id 524580;
 Fri, 21 Apr 2023 13:02:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfVw=AM=epam.com=prvs=8475e47d9c=volodymyr_babchuk@srs-se1.protection.inumbo.net>)
 id 1ppqPX-0001RB-Vk
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 13:02:48 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cd3de100-e044-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 15:02:43 +0200 (CEST)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 33LA6K2E017901; Fri, 21 Apr 2023 13:02:38 GMT
Received: from eur02-db5-obe.outbound.protection.outlook.com
 (mail-db5eur02lp2107.outbound.protection.outlook.com [104.47.11.107])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3q372hku5u-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 21 Apr 2023 13:02:37 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com (2603:10a6:803:31::18)
 by AM7PR03MB6465.eurprd03.prod.outlook.com (2603:10a6:20b:1b3::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.49; Fri, 21 Apr
 2023 13:02:34 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3]) by VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3%3]) with mapi id 15.20.6298.045; Fri, 21 Apr 2023
 13:02:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd3de100-e044-11ed-8611-37d641c3527e
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
        George
 Dunlap <george.dunlap@citrix.com>,
        Julien Grall <julien@xen.org>,
        Stefano
 Stabellini <sstabellini@kernel.org>,
        Paul Durrant <paul@xen.org>, Kevin Tian
	<kevin.tian@intel.com>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Topic: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Index: 
 AQHZVrdzsdckomMx4kauxHkZQ597Iq79mAMAgClI2ACAAK/igIAAVGGAgAGe14CAAJvOAIAFXoKAgAAEr4CAAAS6AIAGQ32AgAAf0YCAAAVEAA==
Date: Fri, 21 Apr 2023 13:02:33 +0000
Message-ID: <87v8hp75hz.fsf@epam.com>
References: <20230314205612.3703668-1-volodymyr_babchuk@epam.com>
 <20230314205612.3703668-3-volodymyr_babchuk@epam.com>
 <ZBNA9q5DXJYG3KVp@Air-de-Roger> <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger> <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger> <87leivw8qp.fsf@epam.com>
 <ZD0cyXLt1knXyUzA@Air-de-Roger>
 <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>
 <ZD0krtCOrEwiKMFP@Air-de-Roger> <87354t8pqg.fsf@epam.com>
 <d9ab412f-b1d1-3fef-a956-05373ce76dd2@suse.com>
In-Reply-To: <d9ab412f-b1d1-3fef-a956-05373ce76dd2@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.8.9; emacs 28.2
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: VI1PR03MB3710:EE_|AM7PR03MB6465:EE_
x-ms-office365-filtering-correlation-id: f303242d-8dde-42c7-fb88-08db4268ac59
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 vIl5SCmGe0RJu0Qg3cp32rcYwhiL7VlIvxe8PFZl05v2ZXbml2ytoXm8taSCfkfAql7Vp6pprRWI3D0SK88zqTr6pG2EGAbdEMc9hSRbnvc5vWS3TolkD/lyhMxo50Pdop3kG+4/C3YAdHfLYRwX+050NG8BN7XxW3x2Q6QvfjwDRgfod9ysvI3dU0oiCTFM0WaJhampb59jiSfJsKHlVTIZXIhp0vX//SaZRbg+2UN2o23YyPAZG01m8UsKGJcwSckrDWpOEpc6wq2dbaR02POgvUWKg4FB/dOtpeGBAGB6uJuEvwGmhy5bC+OOG+ex7IVSWsT3VHc6j5VR1sEQX7VBjhOvBHKeQZEc3ypm27XXx+pBRiegY1M8abP+yKQA3+o748Qqc498ZKQwgdike+faE5pqQEZITbQzyYULSJtzgzAnZPiKJJVzPk/q/Umm9uUlXSYYXQlr7hqGETlt7ZkqzYig4wVW5jMS7yBKCkbvnCIwXytGjL2t4j59vVyMeSXUYco6CB4i7M6VcC6qs3/4fW9r2OoaK/AaQ3akk4TsG3r1SqiAqluOpbtXU/f9bC0zhhi+VmzOq0/5UZdkjz5NCCKXYRbQd0Gj7FHtBe+CpkQDrQShEJ8ea3hY4brl
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR03MB3710.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(366004)(376002)(346002)(396003)(451199021)(122000001)(64756008)(316002)(6916009)(66476007)(54906003)(76116006)(66446008)(66946007)(66556008)(4326008)(91956017)(6512007)(6506007)(26005)(55236004)(186003)(53546011)(38100700002)(2616005)(83380400001)(8936002)(5660300002)(8676002)(41300700001)(478600001)(71200400001)(6486002)(86362001)(36756003)(2906002)(38070700005)(7416002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0: 
 =?utf-8?B?YTRJdnRZaU40OGw4c1dzbUprMmJpelczVzBLVHc5ZGNnTUttOVRoMkUwa25B?=
 =?utf-8?B?dVZLZllJVlphejRldnZ6V094U2pwMlhDRjFFZkZIQTNtRC9JOVlsMEoyZFVW?=
 =?utf-8?B?ZWNSdlVwaG9GTGY1SVdHdGNRVUgzNG0wWmdTTlUxcnpuUWlBUUdQeGhJTlha?=
 =?utf-8?B?MHNadlZKa3RXeDUwWUZqQTd0dTRncS9xS050azZUNUtKVTMzN3dOZDBrK2Fr?=
 =?utf-8?B?bVJHY0pDZnJTTnpWdUo5TEcxWWwyTUYySzdTQjZsMTAzL2xwTUcxVnl1Smhn?=
 =?utf-8?B?ZXJ3SXpuckNOb2lucTVWVUJDUlpQRHpDeW5ITVoxMlcvbjJrOXZDb0RqSUQv?=
 =?utf-8?B?ekNtd1dZMjFWYWViS1FxZUo4R1VlcDBaVit0RzgwZzNLbVAxZlZwQ0srU1hl?=
 =?utf-8?B?M01DSDdHSGMxaHVQMmtJWlNCYVNxV0NFR2YyT01VeXV2U3A1L3kwQUdMSUxi?=
 =?utf-8?B?aEZkR1BjTXhNZUZZdEFtN2NKVVlJdGVoYjFOMytlMmN6andRNjgxZGlCSVpZ?=
 =?utf-8?B?cEp4amNKeTNTV1JKREJodkpPclI3RU4rR3BkVkVEUHE0NnY0dnE5Y1J5ZFVy?=
 =?utf-8?B?MWh2cUpWM3FoSWMwRE82cWpUa0I0ckU1QVdUL3JsREFDbGV1Slg5QytKR24w?=
 =?utf-8?B?eEZPNk1ERmhMeWRlSlQxaytHZGVxZFkvRWxyRzIwU2o5U1N2by9xRFVuOGZx?=
 =?utf-8?B?Y0p2YW80RFJUTGFGdjQrcHF5NzFkZkRVeW5SWTI2SnBVTTFrYThCQzl4Ky9C?=
 =?utf-8?B?YUtSUnE4emxEVUVsdGVac2FEYVZEMC9lU0xvS1grMDVZRXpXM3R3YllOUmZh?=
 =?utf-8?B?K01YeW9OcUlib0hyNUxQaFl3UmdvTU9VZmppWjlmcWFVL3NoWk9DUkR1cEF6?=
 =?utf-8?B?MVBOVk5NeEtUTzA3anNUTkVYMUJ4MUZRZlVkOWRRZW16NTZHa2p4OEpTdVhQ?=
 =?utf-8?B?a3k3M0pvRXYvMlVyd1pieG5GbG1xckg4MUQxaWhKbm5pY0pPZHd5RU5rRjRI?=
 =?utf-8?B?WlRxVlBBTHFIV3hrenU0dkhqajBXcDlyS28yaTFmZVB0MDB1d2lVcXNVenJY?=
 =?utf-8?B?eU1RaC9VdWdabmhWVTVpUGRFYzB5QXdWak1XMjFVaHUwdnhTblRXOWlzSWlL?=
 =?utf-8?B?R3Q3dXh1aUVxZlZEWE5BdXZoNWpKbW5LMGpaSVVNNktmbEc5V2xvTFNIS3pM?=
 =?utf-8?B?NnVNUG1Ed2J2c2xabkJIbWYrbnFla3lmZ2lDT3NhYVZEWG9KWXltaWloQVFC?=
 =?utf-8?B?S1E4WkxrcmxNcFVrVit0ampHWnU1SkVMV2IxQmE4WHpnMlRGQkorN0F5Zmlt?=
 =?utf-8?B?bjV3RDV0UllPSzA1RGlBTGJQZ0dDUGVDc3hkQkgwV1JidFJNNHB4cGEzeW9y?=
 =?utf-8?B?QVdpMFdhYUdreFNnTzVsb2c2a0YxMFFOQXBvcnh4YXhGK3F6cThqRDdhT0Jt?=
 =?utf-8?B?SWZRYlc1MGZWVUEvN2s4Y3U0dEtwclpFa3pzRkYxVU1GeWxidHVQeE96ZnZO?=
 =?utf-8?B?azg0N3RyMDJWdnVQenFrRFNwYzVIR29nTGI4MU5CdCtLbVNSb3AxUUlaS1hG?=
 =?utf-8?B?dldvRERhWS9RK1I1alJuUkxyZnVRc2hDUUlPQmE1ZDlYR2luQVFQdnNja3h0?=
 =?utf-8?B?RWxJT1E1RjVLZElBQkZpa0ZQUmVRazN5U0MxaWQwUjdvSHNNYUJza0xBR215?=
 =?utf-8?B?OUUwekxXWFdwdE9rMmFsbFhIMTVVV0xZazhUQTVHbG9kS29vbVBrZXFmS2pN?=
 =?utf-8?B?OGxxTkhzVTNWYUhzNmpoU1NVcHJxeGtucHoxSXNqRW5wRXBjN0Iwd1FKVll5?=
 =?utf-8?B?UjJNbHVwTEY3cUJrWWFSOVc1bDJCWCtuY1BWbHRKemxGOWYrdXBOaStFNkwz?=
 =?utf-8?B?MFFaNmZ0Q01jYWJycTlPdjRXa2pndnpQSER1ckNtVEhKcWxkenZKTlJuRitx?=
 =?utf-8?B?WndwNEZUNnVsQ2NXcTVEczVEb3hMN1NBV25CcW0vRDU5MWp3R29EaFRpK1or?=
 =?utf-8?B?ZDJCNGNobnpXc1pmalNiWEZIWHdlYnZlYzBLYVNKajZrM2lyZzRqS2xma3RC?=
 =?utf-8?B?bURYN3YzZ0xPUHlFemVXMndZUnRLb1U5aE0wQ293Nnp1clhUemM4SFlkZGIx?=
 =?utf-8?B?M0JYS2xBNXBIRGZlVkdzWVdMZEF2YitDUlRDb2pPVWRCNjFPdUVjYUlhL04y?=
 =?utf-8?B?U2c9PQ==?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <AFAFC34D145C5D41A275917467F7D243@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB3710.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f303242d-8dde-42c7-fb88-08db4268ac59
X-MS-Exchange-CrossTenant-originalarrivaltime: 21 Apr 2023 13:02:33.8065
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 1yTJl8sAQ1tSIdlIuZfJf/OexF4SIuaCTKMVebEYbZU5MsLfIYp8gGx2zwzxEYalUvE1ExX0OuFVQidTQKSi4QwCToVFkPOvHQJVsM5K0xE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR03MB6465
X-Proofpoint-GUID: Ot-2W9zlodlPxvTyu5iWgxFvM4glXPkV
X-Proofpoint-ORIG-GUID: Ot-2W9zlodlPxvTyu5iWgxFvM4glXPkV
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22
 definitions=2023-04-21_06,2023-04-21_01,2023-02-09_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0
 priorityscore=1501 impostorscore=0 phishscore=0 adultscore=0
 mlxlogscore=999 malwarescore=0 clxscore=1015 spamscore=0
 lowpriorityscore=0 suspectscore=0 mlxscore=0 classifier=spam adjust=0
 reason=mlx scancount=1 engine=8.12.0-2303200000
 definitions=main-2304210113

DQpIaSBKYW4sDQoNCkphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4gd3JpdGVzOg0KDQo+
IE9uIDIxLjA0LjIwMjMgMTM6MDAsIFZvbG9keW15ciBCYWJjaHVrIHdyb3RlOg0KPj4gDQo+PiBI
ZWxsbyBSb2dlciwNCj4+IA0KPj4gUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5j
b20+IHdyaXRlczoNCj4+IA0KPj4+IE9uIE1vbiwgQXByIDE3LCAyMDIzIGF0IDEyOjM0OjMxUE0g
KzAyMDAsIEphbiBCZXVsaWNoIHdyb3RlOg0KPj4+PiBPbiAxNy4wNC4yMDIzIDEyOjE3LCBSb2dl
ciBQYXUgTW9ubsOpIHdyb3RlOg0KPj4+Pj4gT24gRnJpLCBBcHIgMTQsIDIwMjMgYXQgMDE6MzA6
MzlBTSArMDAwMCwgVm9sb2R5bXlyIEJhYmNodWsgd3JvdGU6DQo+Pj4+Pj4gQWJvdmUgSSBoYXZl
IHByb3Bvc2VkIGFub3RoZXIgdmlldyBvbiB0aGlzLiBJIGhvcGUsIGl0IHdpbGwgd29yayBmb3IN
Cj4+Pj4+PiB5b3UuIEp1c3QgdG8gcmVpdGVyYXRlLCBpZGVhIGlzIHRvIGFsbG93ICJoYXJtbGVz
cyIgcmVmY291bnRzIHRvIGJlIGxlZnQNCj4+Pj4+PiBhZnRlciByZXR1cm5pbmcgZnJvbSBwY2lf
cmVtb3ZlX2RldmljZSgpLiBCeSAiaGFybWxlc3MiIEkgbWVhbiB0aGF0DQo+Pj4+Pj4gb3duZXJz
IG9mIHRob3NlIHJlZmNvdW50cyB3aWxsIG5vdCB0cnkgdG8gYWNjZXNzIHRoZSBwaHlzaWNhbCBQ
Q0kNCj4+Pj4+PiBkZXZpY2UgaWYgcGNpX3JlbW92ZV9kZXZpY2UoKSBpcyBhbHJlYWR5IGZpbmlz
aGVkLg0KPj4+Pj4NCj4+Pj4+IEknbSBub3Qgc3RyaWN0bHkgYSBtYWludGFpbmVyIG9mIHRoaXMg
cGllY2UgY29kZSwgYWxiZWl0IEkgaGF2ZSBhbg0KPj4+Pj4gb3Bpbmlvbi4gIEkgd2lsbCBsaWtl
IHRvIGFsc28gaGVhciBKYW5zIG9waW5pb24sIHNpbmNlIGhlIGlzIHRoZQ0KPj4+Pj4gbWFpbnRh
aW5lci4NCj4+Pj4NCj4+Pj4gSSdtIGFmcmFpZCBJIGNhbid0IHJlYWxseSBhcHByZWNpYXRlIHRo
ZSB0ZXJtICJoYXJtbGVzcyByZWZjb3VudHMiLiBXaG9ldmVyDQo+Pj4+IGhvbGRzIGEgcmVmIGlz
IGVudGl0bGVkIHRvIGFjY2VzcyB0aGUgZGV2aWNlLiBBcyBzdGF0ZWQgYmVmb3JlLCBJIHNlZSBv
bmx5DQo+Pj4+IHR3byB3YXlzIG9mIGdldHRpbmcgdGhpbmdzIGNvbnNpc3RlbnQ6IEVpdGhlciBw
Y2lfcmVtb3ZlX2RldmljZSgpIGlzDQo+Pj4+IGludm9rZWQgdXBvbiBkcm9wcGluZyBvZiB0aGUg
bGFzdCByZWYsDQo+Pj4NCj4+PiBXaXRoIHRoaXMgYXBwcm9hY2gsIHdoYXQgd291bGQgYmUgdGhl
IGltcGxlbWVudGF0aW9uIG9mDQo+Pj4gUEhZU0RFVk9QX21hbmFnZV9wY2lfcmVtb3ZlPyAgV291
bGQgaXQganVzdCBjaGVjayB3aGV0aGVyIHRoZSBwZGV2DQo+Pj4gZXhpc3QgYW5kIGVpdGhlciBy
ZXR1cm4gMCBvciAtRUJVU1k/DQo+Pj4NCj4+IA0KPj4gT2theSwgSSBhbSBwcmVwYXJpbmcgcGF0
Y2hlcyB3aXRoIHRoZSBiZWhhdmlvciB5b3UgcHJvcG9zZWQuIFRvIHRlc3QgaXQsDQo+PiBJIGFy
dGlmaWNpYWxseSBzZXQgcmVmY291bnQgdG8gMiBhbmQgaW5kZWVkIFBIWVNERVZPUF9tYW5hZ2Vf
cGNpX3JlbW92ZQ0KPj4gcmV0dXJuZWQgLUVCVVNZLCB3aGljaCBwcm9wYWdhdGVkIHRvIHRoZSBs
aW51eCBkcml2ZXIuIFByb2JsZW0gaXMgdGhhdA0KPj4gTGludXggZHJpdmVyIGNhbid0IGRvIGFu
eXRoaW5nIHdpdGggdGhpcy4gSXQganVzdCBkaXNwbGF5ZWQgYW4gZXJyb3INCj4+IG1lc3NhZ2Ug
YW5kIHJlbW92ZWQgZGV2aWNlIGFueXdheXMuIFRoaXMgaXMgYmVjYXVzZSBMaW51eCBzZW5kcw0K
Pj4gUEhZU0RFVk9QX21hbmFnZV9wY2lfcmVtb3ZlIGluIGRldmljZV9yZW1vdmUoKSBjYWxsIHBh
dGggYW5kIHRoZXJlIGlzIG5vDQo+PiB3YXkgdG8gcHJldmVudCB0aGUgZGV2aWNlIHJlbW92YWwu
IFNvLCBhZG1pbiBpcyBub3QgY2FwYWJsZSB0byB0cnkgdGhpcw0KPj4gYWdhaW4uDQo+DQo+IFNv
IG1heWJlIExpbnV4J2VzIGlzc3Vpbmcgb2YgdGhlIGNhbGwgbmVlZHMgbW92aW5nIGVsc2V3aGVy
ZT8gT3Igd2UgbmVlZA0KPiBhIG5ldyBzdWItb3AsIHN1Y2ggdGhhdCBQSFlTREVWT1BfbWFuYWdl
X3BjaV9yZW1vdmUgY2FuIHJlbWFpbiBwdXJlbHkgYQ0KPiBsYXN0LW1vbWVudCBub3RpZmljYXRp
b24/DQoNCkZyb20gTGludXggcG9pbnQgb2YgdmlldywgaXQgYWxyZWFkeSBjbGVhbmVkIHVwIGFs
bCB0aGUgZGV2aWNlIHJlc291cmNlcw0KYW5kIGl0IGlzIHJlYWR5IHRvIGhvdC11bnBsdWcgdGhl
IGRldmljZS4gWGVuIFBDSSBkcml2ZXIgaW4gTGludXgganVzdA0KZ2V0cyBhIG5vdGlmaWNhdGlv
biB0aGF0IGRldmljZSBpcyBiZWluZyByZW1vdmVkLg0KDQpCVFcsIHhlbl9wY2liYWNrIChBS0Eg
cGNpX3N0dWIpIGRyaXZlciBpbiBMaW51eCB0cmFja3MgdGhhdCBkZXZpY2UgaXMNCmFzc2lnbmVk
IHRvIGFub3RoZXIgZG9tYWluLCBidXQgYWxsIGl0IGNhbiBkbyBpcyB0byBsb3VkbHkgY29tcGxh
aW4gaW4NCmtlcm5lbCBsb2cgaWYgZGV2aWNlIGlzIGJlaW5nIHJlbW92ZWQgd2l0aG91dCBiZWlu
ZyBkZWFzc2lnbmVkIGZyb20NCmFub3RoZXIgZG9tYWluLg0KDQo+DQo+PiBBcyBJIHdvcmthcm91
bmQsIEkgY2FuIGNyZWF0ZSBoeXBlcmNhbGwgY29udGludWF0aW9uIGluIGNhc2UgaWYNCj4+IHBj
aV9yZW1vdmVfZGV2aWNlKCkgcmV0dXJucyAtRUJVU1kuIFdoYXQgaXMgeW91ciBvcGluaW9uPw0K
Pg0KPiBIb3cgd291bGQgdGhhdCBoZWxwPyBZb3UnZCB0aGVuIHNwaW4gcGVyaGFwcyBmb3IgaG91
cnMgb3IgZGF5cyAuLi4NCg0KQXJlIHlvdSBpbXBseWluZyB0aGUgY2FzZSB3aGVuIHdlIGluY3Jl
YXNlIHJlZmNvdW50ZXIgd2hlbiB3ZSBhc3NpZ24gYQ0KUENJIGRldmljZSB0byBhIGRvbWFpbj8g
SW4gdGhpcyBjYXNlIHllcywgaXQgaXMgcXVpdGUgcG9zc2libGUgdGhhdCB3ZQ0Kd2lsbCBzcGlu
IHRoZXJlIGZvciBhbnkgYXJiaXRyYXJ5IGFtb3VudCBvZiB0aW1lLi4uDQoNCi0tIA0KV0JSLCBW
b2xvZHlteXI=


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 13:11:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 13:11:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524587.815637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppqXI-0002zJ-A6; Fri, 21 Apr 2023 13:10:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524587.815637; Fri, 21 Apr 2023 13:10:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppqXI-0002zC-71; Fri, 21 Apr 2023 13:10:48 +0000
Received: by outflank-mailman (input) for mailman id 524587;
 Fri, 21 Apr 2023 13:10:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CCD5=AM=citrix.com=prvs=4680c1a37=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ppqXG-0002z6-GJ
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 13:10:46 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eaf116ff-e045-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 15:10:43 +0200 (CEST)
Received: from mail-mw2nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 21 Apr 2023 09:10:27 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by MW4PR03MB6650.namprd03.prod.outlook.com (2603:10b6:303:12d::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.20; Fri, 21 Apr
 2023 13:10:22 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6319.022; Fri, 21 Apr 2023
 13:10:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eaf116ff-e045-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682082643;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=7SjmdCkQxSqN8/36rX5cL+xkunrfFJGlOH4Cqx2reN4=;
  b=UWHaAWLcI0cKX3W3jXoRaur14pdqOI05vaxYB+coZ5rQ06MnZ7aXONZK
   +1+DJrGmqhQ0ZFI9F/0lBfCjbuDuMRly8VOeDO4Go/IH9BQPmkc3KQuv2
   Q0xzEgpAzd1jDXlCiuifePRLyXu0l2fw8pvq2vd787PWoKZveXpAWalw+
   w=;
X-IronPort-RemoteIP: 104.47.55.100
X-IronPort-MID: 105155355
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:lqkICqln/ZAQUBC8JL6UzTTo5gwPJ0RdPkR7XQ2eYbSJt1+Wr1Gzt
 xIZUGCGOq2CMTH1L4p0bN+y/EJVv5LRzIUwTQQ9pClgRCMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icfHgqH2eIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7aWaVA8w5ARkPqgX5gaGzhH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 c0nCiAQNRDAvuWV76unS/BCvuYOK9a+aevzulk4pd3YJdAPZMmbBoD1v5pf1jp2gd1SF/HDY
 cZfcSBocBnLfxxIPBEQFY46m+CrwHL4dlW0qnrM/fZxvzeVkVE3ieC1WDbWUoXiqcF9hEGXq
 3iA523kKhobKMae2XyO9XfEaurnxHumAd9PT+DlnhJsqF+0yT08OQUbbFyA/vua0mO0YfkHK
 UNBr0LCqoB3riRHVOLVYRq8p3KVuw8GbPBZGeY69QKlx7Ld5kCSAW1sZjxLZcEitcQ2bSc3z
 VLPlNTsbRR/vbvQRX+D+7O8qTKpJTNTPWIEfTUDTwYO/5/kuo5bpg3LZsZuFuiylNKdMTPtx
 zGHqgAuirNVitQEv42g5kzOiT+oopnPTyY26x/RU2bj6Rl2DKa9bpGswUjW67BHNonxZlqMo
 nkC3dSf5eYmDJeRmSjLS+IIdIxF/N6AOTzYxFtwRZ8o8m31/2b5JNgIpjZjOE1uL8AIPyfzZ
 1Pesh9Q45kVO2a2aahwYMS6DMFCIbXcKOkJn8v8NrJmCqWdvifclM2yTSZ8B1zQrXU=
IronPort-HdrOrdr: A9a23:ICbwR6FOIPtgqAgQpLqEHseALOsnbusQ8zAXPiBKJCC9vPb5qy
 nOpoV86faQslwssR4b9uxoVJPvfZqYz+8W3WBzB8bEYOCFghrKEGgK1+KLrwEIWReOk9K1vZ
 0KT0EUMqyVMbEVt6fHCAnTKade/DGEmprY+9s3GR1WPHBXg6IL1XYINu6CeHcGPTWvnfACZe
 ehDswsnUvZRV0nKv6VK1MiROb5q9jChPvdEGI7705O0nj0sduwgoSKaSSl4g==
X-Talos-CUID: 9a23:4s9zmW8mjEvQ2NdmINiVv38rONA4KX2e9XDzDFakCH9HcJ+4TkDFrQ==
X-Talos-MUID: 9a23:SQPDsAWjSoSAD47q/D3Li2BNBd9l2qOVV0IuybgMgeLcNyMlbg==
X-IronPort-AV: E=Sophos;i="5.99,214,1677560400"; 
   d="scan'208";a="105155355"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oHdm2sSKpcnmfZi80qV+wUB+k+l8NQn+2osEYgzwQRNkEqdrSFh0zSEz5BoenEa43fTthId0bxatyLuiIolaC6aFVCmbPTJuFEE81EtdzLKfCG4miPNsJeW8BNjq5u38cNNflrSKZko8R3kubqKrnvETbP+gUTONmTBS13J6/+gLgH6zbRPxmIRxd7PmClCVA9+P9N2SeSAWygDU370JOBXVLYvt0g8QcQNy5Yo8fub4sMRlb5WXNZCcFKeHWkRmS7QgkkI2mSl5jCOcLbIzXubBLDVr5Qjkd+1a/mW80AvJy6DLKxVnF1PakHxz07GlpCV8ZivJzauXxtla6NLhoQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qv5ZHajn0vBl0eLZr5IpUzgdNceGnmYicG2G4mzKcA8=;
 b=i3cMjGp/n0iFPSOmssK4MI72GHL7wUsu7XluEzwSAhvMfsHfKNOSqM1Je6AAhVmEOn0WKUVlVn3xw43fe9aOcOhxhLtWId8TtwncCkYlSMlCqxxgDH0lYgr+0wMuUeg0/b8X1X8g1wFxGaXsCI72cLisXSLmt+DZ3UDf8BDbzaMVQDYPeWqowtaZp/ubsneCVQ9uzn4KXzqk415701WVpfINxivO964qRFJIVBjIcJofOayRmmqIAAZXUvBOmIgOFIK8Y/MBINsXUXZahVQbv5YJQE3gKx4pHt5K9H8tkdYnY/gKNAYVWvj49DYPzlvWJ8Kymb4u38qIjxT052xNOg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qv5ZHajn0vBl0eLZr5IpUzgdNceGnmYicG2G4mzKcA8=;
 b=GQWqcZvgwBEyT0V/MM8zHISh9DWcEYZepq7CHipz9fY03PsD5g6YnJP2HY19/itZodM2QoxaT5vKjEUB6W0p84FnbfJjQH2+jbSKFecWDBFDfT70VbsorV259kScJaGgOCm0i1VIQdL115wVmUuY7KyIwCii7mkmJ4w987Lt6ZM=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Fri, 21 Apr 2023 15:10:15 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Message-ID: <ZEKLN8AlzDUckorU@Air-de-Roger>
References: <ZBNA9q5DXJYG3KVp@Air-de-Roger>
 <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger>
 <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger>
 <87leivw8qp.fsf@epam.com>
 <ZD0cyXLt1knXyUzA@Air-de-Roger>
 <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>
 <ZD0krtCOrEwiKMFP@Air-de-Roger>
 <87354t8pqg.fsf@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <87354t8pqg.fsf@epam.com>
X-ClientProxiedBy: LO4P123CA0488.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1ab::7) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|MW4PR03MB6650:EE_
X-MS-Office365-Filtering-Correlation-Id: f46f449c-afe2-46cc-f947-08db4269c329
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BsagWIORd87UW+DrQ4kmPSiE5I3t19owR7jjmWkI6g6P/rLGt30zRPXesdN2UrjlhPIyQcCksMIQos9U3DgGbNUZn4D/v4zdRJamYBRmpy1z8H+GLnSsF9cGBt3ix9BgLm78bATq84i0DaCO79pRr82h6rGIw6/BGXCwcQ2GRTD0Pc4LD2Iip1EjqKlfeiRYh2uQI7BdfYn+0b1jourolTTqyKp51RvRrCXokLndjVlpgM8Iqp0wqVJ/mTMU+T5Im7Qvt+s8bu6WZX29tCIuXisq4mi6RbHZYRuuTsckyxiA+TNWgth1ixKXNlIznvDtnzrihTcP1SkBp58ic9pXzjpXxLACKxEtchvIcIupWvYpsVR62r3pR1L40Q0yPIfUDZblP3KfYDRxUeFzZHb+pH6T9eQDrSGeaAlGz8m5BO6UFdn5+VLOIt8EAwDz31cwO3aVgfo2310GW1UvZvIdqG5aWhWnulM5MRqKe8pCKKAWdJoN7ZCA+eQPAGF+v1FxJY5ftJe8djnHDByDveXbYzwvT6H7ww86nu1XgU+uuK0TbWLb7cTYf9ibv0HlHFeZ
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(7916004)(376002)(136003)(396003)(366004)(346002)(39860400002)(451199021)(2906002)(66899021)(6666004)(478600001)(26005)(33716001)(86362001)(6506007)(9686003)(6512007)(53546011)(6486002)(186003)(85182001)(41300700001)(82960400001)(54906003)(8936002)(38100700002)(8676002)(4326008)(66476007)(66946007)(83380400001)(6916009)(316002)(66556008)(7416002)(5660300002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bUxXQ1pZVzVzQnJ3WklQekMrOEhrT013QVRFNHdHMWZQYm9TQTl5U2FXdFNh?=
 =?utf-8?B?RFFHc1l3bUkxVng2TXcwaUdnaFQ3YmRKZW1rRGk1Mi8rQkJYbHR3K1REQ1R1?=
 =?utf-8?B?SmdMZDFTMmxFSUo2bndrNThTcTRLTDI1T0orZnhCalBFT3RLWWhJOVRuZXJ1?=
 =?utf-8?B?dkMvR1B2Q2ZKem1YQytXVE5YL1JaQW9zRDFyQ1lCNzFrcGhHMmZEeW9uUW94?=
 =?utf-8?B?bGY2MXNaSk4wdkdHZmVMK1JqODdDcEJRLzFPb1cyTzhnOStMSnc4Y2lTc3V6?=
 =?utf-8?B?REwvUGs5YlE1TXMvb0g5Y2ZOV0ErV1dXUi9zbndzSldKNDhTYWhPRjRiTm1U?=
 =?utf-8?B?WFZaY05sRUp3bzgyaDFMRnVVbjdPMEJBQXVBSS9jdjIyVmp2Z2drQ1NrVlYy?=
 =?utf-8?B?WlY3Mlk2Q0hwMVp1U252ZFdTUG1Rd1Jnb09oQWEySmp2UVBwWFpCTnNieDdj?=
 =?utf-8?B?U2ZtMzRLSTF2TndXZmhVOEZFbmRJWUxYVjc0UTlPOENQZ00wMXJid1VmeG9T?=
 =?utf-8?B?UHo1cW5DLzJQOFhlQmNVWHowVVlZYmZjWnZkS0lzS3ZZelZSOE5QTjV0M2g4?=
 =?utf-8?B?L1Z6TSsxbzFiUVBpYUduU091Z2xQK0Y0TmxCWkppaDg0c09hQmxmNGhHaUFF?=
 =?utf-8?B?QjdNS3Nlb241K2gyZVZHY0IyMmJtZVFoaS9kaFYvUzgzWXRqTVhQbWNQVzRU?=
 =?utf-8?B?c0Q5cnUxVmg0OVZOM1dTdXM4aFhHTXNMOCt0TW9XaGptbDNaczMyRThwdTBG?=
 =?utf-8?B?Mm1ocTdKVURCTTdFVmpvQWhQWUFzMU1FWlg0WjkrN0pVbWkwSEU0WGdYdmR0?=
 =?utf-8?B?Vi9JL0Ixbm9GOEFMNVg5aVljS0huNjBHUjZUTEJVV09DZTNkTlV4ektyZmg3?=
 =?utf-8?B?Ym5WN2t2Wkl2TEV5azlpTEJJNDVhazdwNWhnWUg1ZXlWSGtZL2VDQXJXaGk4?=
 =?utf-8?B?amI5NmpXSDR1WWhwck5YQ1FNeEpyd2JDRyt4QU4rQ3BIcXFmSUNGaVU3WkJJ?=
 =?utf-8?B?RmpYK29GT1lidUl3eXRYSUx2L0ZwSXI1RVA1aHBrZ1dLOWI2Vktrc2NISnhz?=
 =?utf-8?B?THdzbjM2YTJYOTV2KzFFNWpoVE5nNW5QUUc5UE50cy9LQ1RYNG5XU3BPVjhY?=
 =?utf-8?B?cXZvTGU2Y1Y2VUpQcS9qM0RuRDkwYW5nWUVMUGxqbkxXdG1PaFhxVDJsRmM1?=
 =?utf-8?B?Qm1UMTVodmRUUVBVQURlaXRtS0YrUjJxQm90d3dCaVo2NmFCV1p1bWp1NXFr?=
 =?utf-8?B?alJUZlc2NlZqemRpY3o0OVM0Qzk2SXJyekVmcDcrUCtTSWtJaThmM0x5MnNH?=
 =?utf-8?B?NHlDeTJmdzB6MTZRY0VPMmI4dEh6aDhIdGFkVURaWHJPVDNidkNCNXVUM3Vm?=
 =?utf-8?B?ZVhBaDREY1cwQ0FETm9xOVI0OEdtR0dTZmV1ZlE4dGRhekZrVGxYNHlOOEF5?=
 =?utf-8?B?N2dDVlEvMjI5R3lHMmdnWnJqVmpTWE10dXVsWlJXb3oyVENIRDNGYTUwRTJv?=
 =?utf-8?B?aWdJSUdtQ05TQlh0NXlSVlM4dngyRHN5dm4wNS81NkZhSkZjaWRLOTE2M0FV?=
 =?utf-8?B?Z3VEdHVJTmJtb0JMcE10YkxQcFdxSkViWlJNdkxKOTRLd2RiVjZBSkQ4RTNT?=
 =?utf-8?B?RXlMeVZmWU1WTTdRcVRlVjVsVnVacm95QXFEd2tTUUhFVHY1Tkkvd3ZQU2tL?=
 =?utf-8?B?Yk1hbUFOTlhCSVVoeXhROEZhcDhkc1FQSGlsZnY1blo5R1ZJQjZ1eFJ3TU43?=
 =?utf-8?B?WEhtR2hLdHJ6M1pSK2NFU1lhcUlRVW5aMEk5SUlzS2hlaWNKeEhTdnlWVkNK?=
 =?utf-8?B?NDBFS0JuYXFiT2t4VkN2YkwyRWc4SGJSR0crSGpyMzlIT3MxSEVUWGNpV3hK?=
 =?utf-8?B?OTgvRVA3RHRaSThVOHdRcHJ2NFRvczlFTWZFKzN1ZnAyUnh0THFDOE1CMlBD?=
 =?utf-8?B?NU9vdS9ZSTZqSzZqcnpza2ZtNXRtZitWTjNEZWtmK3NCTmFsV3ZVSnQyL2hR?=
 =?utf-8?B?ZS9NR3pCQytaVlVPcGRseVIrdTdmYVhyd2ZhdnBpOU5rdE5aWVAxbVk4SzU5?=
 =?utf-8?B?b2s1Y0tmeXM0cEljei9YK3dXRlllS0lZbnI1U3RCaFJXUVJPS25JRVZibWJB?=
 =?utf-8?Q?2uwfGedDgwtJRTuyDpRi+YgsL?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	CLsO24cNj1BdWY5EwjosCKIbAV6U1ibmxo/blappNdBcG4f/hRmoZXCMZ+YQ2if8ZSyti5714l3q31hFWyFEcnS7hR2piBKH/ztqt7cWL9wIlW7uiLhLMRLR+HWVTx4G8uwy82JmxxL+lHisy1O0RmIZSwg/ws12OO1iON6vPOQc9dFChEHtMwgzSslXJzFLTFlo3Haj2Ep2lBjp/W3z7LAZoM38DxPBwHzy1sr39VQpBPN2pCTxG/Q7fNgPiAgNNPP9bAf8vue1cm9JkAmHVeQVf95eLd6CP/cDC4wjoIlG1WXFMUwHcKe78vyFoZ6UppLuRx8inS5KhnfxgFLVo5flQFryHSYWH/VdHUJm+AmAZYscUpM14hs9GCJiGWpdVVLE0mrEshNiIcIxfyaJKKja8x8ONTJPc05fHdJpNxbql8pF8zy+sWlvEE+KmFnQ74LV+upHYce3yCqdYbXzL1dKO38vby6jK3HFDHsgn45RWM/BBLPkTvzPattbDKFymz6SGx3S/aa0e7wHyQ8YDg71gL/L3FqXpdOgzGwRTDZZF6C5s+d8X7IXy64O2PWpw+iyJsNgNrhqKNczVxD3wlQqzOvxVdXNyYSvBJK1D3OHvElz8e0uyZsUboW9JC3S7MRqNWza9v9h+2xowWdCrKsY9/yDTfGFHacUpPcOgl1zKGkp2lCIvpuyg4r6F4Q2x1CwU3xaV9hl7Vjr+/pFKvaDoMW/t4x/60NehcTwRZTSGWnkz47yfVe4cfnETlNfkPuP+b5v05SqM8XC47gf6VzYR4FNV1W9vjlFZAmZERpnOKtObS0Kdu4Fv4FXEn9TzRtx27ittCvCq3T6jQNkkcM495CycAy4yJS9mQFk57bbMU0b6D006wPohfDT681yHu0bYCtI2j0dlYqDY8fT8A==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f46f449c-afe2-46cc-f947-08db4269c329
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Apr 2023 13:10:21.9250
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lB03F+xyTj/u2VNVqRxh3WNmkkVl3vl3CMbyXScUkzSY8WGK7rHJtmp9jq0pROPXqFA31EYdXDOy6k2FBthQXQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6650

On Fri, Apr 21, 2023 at 11:00:23AM +0000, Volodymyr Babchuk wrote:
> 
> Hello Roger,
> 
> Roger Pau Monné <roger.pau@citrix.com> writes:
> 
> > On Mon, Apr 17, 2023 at 12:34:31PM +0200, Jan Beulich wrote:
> >> On 17.04.2023 12:17, Roger Pau Monné wrote:
> >> > On Fri, Apr 14, 2023 at 01:30:39AM +0000, Volodymyr Babchuk wrote:
> >> >> Above I have proposed another view on this. I hope it will work for
> >> >> you. Just to reiterate, the idea is to allow "harmless" refcounts to be
> >> >> left after returning from pci_remove_device(). By "harmless" I mean that
> >> >> owners of those refcounts will not try to access the physical PCI
> >> >> device once pci_remove_device() has finished.
> >> > 
> >> > I'm not strictly a maintainer of this piece of code, albeit I have an
> >> > opinion.  I would also like to hear Jan's opinion, since he is the
> >> > maintainer.
> >> 
> >> I'm afraid I can't really appreciate the term "harmless refcounts". Whoever
> >> holds a ref is entitled to access the device. As stated before, I see only
> >> two ways of getting things consistent: Either pci_remove_device() is
> >> invoked upon dropping of the last ref,
> >
> > With this approach, what would be the implementation of
> > PHYSDEVOP_manage_pci_remove?  Would it just check whether the pdev
> > exist and either return 0 or -EBUSY?
> >
> 
> Okay, I am preparing patches with the behavior you proposed. To test it,
> I artificially set the refcount to 2, and indeed PHYSDEVOP_manage_pci_remove
> returned -EBUSY, which propagated to the Linux driver. The problem is that
> the Linux driver can't do anything with this. It just displayed an error
> message and removed the device anyway. This is because Linux sends
> PHYSDEVOP_manage_pci_remove in the device_remove() call path, and there is
> no way to prevent the device removal. So the admin has no way to retry
> later.

Ideally Linux won't remove the device, and then the admin would get to
retry.  Maybe the way the Linux hook is placed is not the best one?
The hypervisor should be authoritative on whether a device can be
removed or not, and hence PHYSDEVOP_manage_pci_remove returning an
error (EBUSY or otherwise) shouldn't allow the device unplug in Linux
to continue.

We could add a PHYSDEVOP_manage_pci_test or similar that could be used
programmatically to check whether a device has a matching pdev in the
hypervisor, but I have no idea how Linux could use that in a way that's
exposed to the user, and it seems to just make the interface more
complicated for no real benefit, when the same could be accomplished
by PHYSDEVOP_manage_pci_remove.

Maybe the only feasible solution is for pci_remove_device() to drop a
reference expecting it to be the last one, and if it's not, print a
warning message and return -EBUSY, expecting any remaining references
to be dropped later and the backing pdev to be freed at that point.
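To make the proposed semantics concrete, here is a minimal, self-contained
sketch of that behaviour. It is illustrative only: `struct pdev`, its
`refcount` field, and this `pci_remove_device()` signature are stand-ins for
the discussion, not the actual Xen structures or the patch series' code.

```c
#include <errno.h>
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical pdev carrying only a reference count. */
struct pdev {
    atomic_int refcount;
};

/*
 * Drop the caller's reference.  If other references remain, warn and
 * report -EBUSY so the hypercall can propagate the error; the pdev
 * would only be torn down once the last reference is gone.
 */
static int pci_remove_device(struct pdev *pdev)
{
    /* atomic_fetch_sub() returns the value *before* the decrement. */
    if ( atomic_fetch_sub(&pdev->refcount, 1) != 1 )
    {
        fprintf(stderr, "pdev still has %d reference(s) after removal\n",
                atomic_load(&pdev->refcount));
        return -EBUSY;
    }

    /* Last reference dropped: actual teardown/free would happen here. */
    return 0;
}
```

With an artificial refcount of 2 (as in the test described above), the first
call reports -EBUSY and the second, which drops the last reference, succeeds.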

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 13:59:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 13:59:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524593.815646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pprIZ-0007Nm-Rt; Fri, 21 Apr 2023 13:59:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524593.815646; Fri, 21 Apr 2023 13:59:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pprIZ-0007Nf-PH; Fri, 21 Apr 2023 13:59:39 +0000
Received: by outflank-mailman (input) for mailman id 524593;
 Fri, 21 Apr 2023 13:59:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T2yc=AM=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pprIY-0007NZ-FL
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 13:59:38 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bfc732cb-e04c-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 15:59:36 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6EFD51FDDE;
 Fri, 21 Apr 2023 13:59:35 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 2495713456;
 Fri, 21 Apr 2023 13:59:35 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 4HPgBseWQmSQagAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 21 Apr 2023 13:59:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfc732cb-e04c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682085575; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=Z3rqDaYulloCqtPf8pm8IonGBjyS5zgzvmhN92oY7X0=;
	b=o8WZjHjronVBnkm2O01CKXgMdycbsVs0VjHzRblJ/n62Fb1dXqLcv6+bF5y/rdNvxsAjK5
	DEL8knwySxiEz7JiizC/MqdHYVf1B9lKY06AmIoH0l2ar/Q0oY8TONdbkzdgXlETUkXzgw
	BllkRlTqNE/yPN55ZlqDmVvrZGlIWIw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Roger Pau Monné <roger.pau@citrix.com>
Subject: [PATCH] xen: add support for crash dump analysis with xen.efi
Date: Fri, 21 Apr 2023 15:59:33 +0200
Message-Id: <20230421135933.23353-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today it is not possible to analyse crash dumps of a system in
hypervisor mode when it was booted via EFI, as the crash utility
doesn't understand the file format of xen.efi.

This can easily be solved by creating an ELF file from xen.efi via
objcopy. Using that file as the namelist for crash enables the user to
analyse the dump in hypervisor mode.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/Kconfig.debug     | 5 ++++-
 xen/Makefile          | 4 ++++
 xen/arch/x86/Makefile | 3 +++
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/xen/Kconfig.debug b/xen/Kconfig.debug
index 94e818ee09..4aec0fd3aa 100644
--- a/xen/Kconfig.debug
+++ b/xen/Kconfig.debug
@@ -138,6 +138,9 @@ config DEBUG_INFO
 	  the EFI boot partition (look for "INSTALL_EFI_STRIP" in
 	  docs/misc/efi.pandoc for more information - when not using
 	  "make install-xen" for installing xen.efi, stripping needs to be
-	  done outside the Xen build environment).
+	  done outside the Xen build environment). Note that stripping
+	  xen.efi using "INSTALL_EFI_STRIP" will disable the building of
+	  xen.efi.elf, which is needed for "crash" dump analysis of systems
+	  booted via EFI.
 
 endmenu
diff --git a/xen/Makefile b/xen/Makefile
index 2710d7327e..db50dcb502 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -505,6 +505,9 @@ _install: $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX)
 		if [ -e $(TARGET).efi.map ]; then \
 			$(INSTALL_DATA) $(TARGET).efi.map $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.map; \
 		fi; \
+		if [ -e $(TARGET).efi.elf ]; then \
+			$(INSTALL_DATA) $(TARGET).efi.elf $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.elf; \
+		fi; \
 		ln -sf $(T)-$(XEN_FULLVERSION).efi $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).efi; \
 		ln -sf $(T)-$(XEN_FULLVERSION).efi $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).efi; \
 		ln -sf $(T)-$(XEN_FULLVERSION).efi $(D)$(EFI_DIR)/$(T).efi; \
@@ -539,6 +542,7 @@ _uninstall:
 	rm -f $(D)$(DEBUG_DIR)/$(T)-syms-$(XEN_FULLVERSION).map
 	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_FULLVERSION).efi
 	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).efi
+	rm -f $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.elf
 	rm -f $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.map
 	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).efi
 	rm -f $(D)$(EFI_DIR)/$(T).efi
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index fc9487aa40..87ce00addf 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -224,6 +224,9 @@ endif
 	$(MAKE) $(build)=$(@D) .$(@F).1r.o .$(@F).1s.o
 	$(LD) $(call EFI_LDFLAGS,$(VIRT_BASE)) -T $(obj)/efi.lds -N $< \
 	      $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
+ifeq ($(CONFIG_DEBUG_INFO),y)
+	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),,$(OBJCOPY) -O elf64-x86-64 $@ $@.elf)
+endif
 	$(NM) -pa --format=sysv $(@D)/$(@F) \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort >$(@D)/$(@F).map
 	rm -f $(@D)/.$(@F).[0-9]* $(@D)/..$(@F).[0-9]*
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Apr 21 14:14:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 14:14:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524599.815660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pprWV-0001Qa-3u; Fri, 21 Apr 2023 14:14:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524599.815660; Fri, 21 Apr 2023 14:14:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pprWV-0001QT-0X; Fri, 21 Apr 2023 14:14:03 +0000
Received: by outflank-mailman (input) for mailman id 524599;
 Fri, 21 Apr 2023 14:14:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfVw=AM=epam.com=prvs=8475e47d9c=volodymyr_babchuk@srs-se1.protection.inumbo.net>)
 id 1pprWS-0001QN-SM
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 14:14:01 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c0839127-e04e-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 16:13:57 +0200 (CEST)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 33LA2PWM030003; Fri, 21 Apr 2023 14:13:48 GMT
Received: from eur03-dba-obe.outbound.protection.outlook.com
 (mail-dbaeur03lp2170.outbound.protection.outlook.com [104.47.51.170])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3q3dv2tsd3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 21 Apr 2023 14:13:48 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com (2603:10a6:803:31::18)
 by GV1PR03MB8077.eurprd03.prod.outlook.com (2603:10a6:150:1c::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Fri, 21 Apr
 2023 14:13:44 +0000
Received: from VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3]) by VI1PR03MB3710.eurprd03.prod.outlook.com
 ([fe80::9b45:4d32:a743:d5e3%3]) with mapi id 15.20.6298.045; Fri, 21 Apr 2023
 14:13:44 +0000
X-Inumbo-ID: c0839127-e04e-11ed-8611-37d641c3527e
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Roger Pau Monné <roger.pau@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
        George Dunlap <george.dunlap@citrix.com>,
        Julien Grall <julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Topic: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Index: 
 AQHZVrdzsdckomMx4kauxHkZQ597Iq79mAMAgClI2ACAAK/igIAAVGGAgAGe14CAAJvOAIAFXoKAgAAEr4CAAAS6AIAGQ32AgAAsn4CAAAL9gA==
Date: Fri, 21 Apr 2023 14:13:43 +0000
Message-ID: <87o7nh727c.fsf@epam.com>
References: <ZBNA9q5DXJYG3KVp@Air-de-Roger> <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger> <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger> <87leivw8qp.fsf@epam.com>
 <ZD0cyXLt1knXyUzA@Air-de-Roger>
 <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>
 <ZD0krtCOrEwiKMFP@Air-de-Roger> <87354t8pqg.fsf@epam.com>
 <ZEKLN8AlzDUckorU@Air-de-Roger>
In-Reply-To: <ZEKLN8AlzDUckorU@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.8.9; emacs 28.2
Content-Type: text/plain; charset="utf-8"
Content-ID: <DD736F41E3F6E7459FF421AE203503BC@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Roger,

Roger Pau Monné <roger.pau@citrix.com> writes:

> On Fri, Apr 21, 2023 at 11:00:23AM +0000, Volodymyr Babchuk wrote:
>>
>> Hello Roger,
>>
>> Roger Pau Monné <roger.pau@citrix.com> writes:
>>
>> > On Mon, Apr 17, 2023 at 12:34:31PM +0200, Jan Beulich wrote:
>> >> On 17.04.2023 12:17, Roger Pau Monné wrote:
>> >> > On Fri, Apr 14, 2023 at 01:30:39AM +0000, Volodymyr Babchuk wrote:
>> >> >> Above I have proposed another view on this. I hope it will work for
>> >> >> you. Just to reiterate, the idea is to allow "harmless" refcounts to
>> >> >> be left after returning from pci_remove_device(). By "harmless" I mean
>> >> >> that owners of those refcounts will not try to access the physical PCI
>> >> >> device if pci_remove_device() is already finished.
>> >> >
>> >> > I'm not strictly a maintainer of this piece of code, albeit I have an
>> >> > opinion.  I would like to also hear Jan's opinion, since he is the
>> >> > maintainer.
>> >>
>> >> I'm afraid I can't really appreciate the term "harmless refcounts". Whoever
>> >> holds a ref is entitled to access the device. As stated before, I see only
>> >> two ways of getting things consistent: Either pci_remove_device() is
>> >> invoked upon dropping of the last ref,
>> >
>> > With this approach, what would be the implementation of
>> > PHYSDEVOP_manage_pci_remove?  Would it just check whether the pdev
>> > exists and either return 0 or -EBUSY?
>> >
>>
>> Okay, I am preparing patches with the behavior you proposed. To test it,
>> I artificially set the refcount to 2 and indeed PHYSDEVOP_manage_pci_remove
>> returned -EBUSY, which propagated to the Linux driver. The problem is that
>> the Linux driver can't do anything with this. It just displayed an error
>> message and removed the device anyway. This is because Linux sends
>> PHYSDEVOP_manage_pci_remove in the device_remove() call path and there is
>> no way to prevent the device removal. So, the admin is not able to try
>> this again.
>
> Ideally Linux won't remove the device, and then the admin would get to
> retry.  Maybe the way the Linux hook is placed is not the best one?
> The hypervisor should be authoritative on whether a device can be
> removed or not, and hence PHYSDEVOP_manage_pci_remove returning an
> error (EBUSY or otherwise) shouldn't allow the device unplug in Linux
> to continue.

Yes, ideally it would work that way, but the Linux driver/device model is
written in such a way that the PCI subsystem itself tracks all PCI device
usage, so it can be certain that it can remove the device. Thus, functions
in the device removal path return either void or 0. Of course, the kernel
does not know that the hypervisor has additional uses for the device, so
there is no mechanism to prevent removal.

> We could add a PHYSDEVOP_manage_pci_test or similar that could be
> programmatically used to check whether a device has a matching pdev in
> the hypervisor, but I have no idea how that could be used by Linux so
> it's exposed to the user, and it seems to just make the interface more
> complicated for no real benefit, when the same could be accomplished
> by PHYSDEVOP_manage_pci_remove.

We can ignore the kernel behavior and just call
PHYSDEVOP_manage_pci_remove from the toolstack, with something like "xl
pci-hotunplug SBDF". But yes, this will make the interface more
complicated.

> Maybe the only feasible solution is for pci_remove_device() to drop a
> reference expecting it would be the last one, and print a warning
> message if it's not and return -EBUSY.  Expecting any remaining
> references to be dropped and the backing pdev to be freed.

So, basically in the same way as I proposed initially?

-- 
WBR, Volodymyr


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 14:21:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 14:21:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524606.815673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppre3-0002zZ-3L; Fri, 21 Apr 2023 14:21:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524606.815673; Fri, 21 Apr 2023 14:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppre3-0002zS-0S; Fri, 21 Apr 2023 14:21:51 +0000
Received: by outflank-mailman (input) for mailman id 524606;
 Fri, 21 Apr 2023 14:21:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppre1-0002zI-Ft; Fri, 21 Apr 2023 14:21:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppre1-0008U4-6K; Fri, 21 Apr 2023 14:21:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppre0-0000aq-Od; Fri, 21 Apr 2023 14:21:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppre0-0001s6-Jy; Fri, 21 Apr 2023 14:21:48 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180357-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180357: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-armhf-armhf-xl:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dff17457c482dcb38b7caafd095334f461ef6d35
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 14:21:48 +0000

flight 180357 xen-unstable-smoke real [real]
flight 180362 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180357/
http://logs.test-lab.xenproject.org/osstest/logs/180362/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 180302

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dff17457c482dcb38b7caafd095334f461ef6d35
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    2 days
Failing since        180314  2023-04-19 10:00:24 Z    2 days   15 attempts
Testing same since   180323  2023-04-19 21:01:54 Z    1 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 317 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 14:29:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 14:29:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524612.815682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pprku-0003cg-RX; Fri, 21 Apr 2023 14:28:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524612.815682; Fri, 21 Apr 2023 14:28:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pprku-0003cZ-Os; Fri, 21 Apr 2023 14:28:56 +0000
Received: by outflank-mailman (input) for mailman id 524612;
 Fri, 21 Apr 2023 14:28:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=p1uz=AM=bu.edu=alxndr@srs-se1.protection.inumbo.net>)
 id 1pprkt-0003cT-7u
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 14:28:55 +0000
Received: from esa2.hc2706-39.iphmx.com (esa2.hc2706-39.iphmx.com
 [216.71.152.49]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d4fcb83c-e050-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 16:28:51 +0200 (CEST)
Received: from mail-qk1-f197.google.com ([209.85.222.197])
 by ob1.hc2706-39.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 21 Apr 2023 10:28:48 -0400
Received: by mail-qk1-f197.google.com with SMTP id
 af79cd13be357-74ab517c14bso798699285a.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 Apr 2023 07:28:48 -0700 (PDT)
Received: from mozz.bu.edu (mozz.bu.edu. [128.197.127.33])
 by smtp.gmail.com with ESMTPSA id
 dl9-20020a05620a1d0900b0074d4cf8f9fcsm1353752qkb.107.2023.04.21.07.28.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Apr 2023 07:28:45 -0700 (PDT)
X-Inumbo-ID: d4fcb83c-e050-11ed-8611-37d641c3527e
X-Gm-Message-State: AAQBX9d3ro9ge+WCqNYEh6YVBialVq8VqR/KVuLMrdP+Da2iaeECMSiN
	OwsLGTKZRASCnxFiUZxe+XwYLk8BgoQkycVs7Ih1sbaJbCAAht95PRZAkZIW6TaDPQz+dKIcTGd
	IDWIdQnRu+U43mZO425nNMQbvEk1b2hE4/PHUHXk28g==
X-Received: by 2002:a05:622a:50:b0:3e3:9275:17ad with SMTP id y16-20020a05622a005000b003e3927517admr9704546qtw.12.1682087326639;
        Fri, 21 Apr 2023 07:28:46 -0700 (PDT)
X-Google-Smtp-Source: AKy350bE6B5TAtUa5seb2zqALPEatizHP4+7BdQl588VTLsCS+2itWROZIT4NIgkfrhlgVon+6HVag==
X-Received: by 2002:a05:622a:50:b0:3e3:9275:17ad with SMTP id y16-20020a05622a005000b003e3927517admr9704489qtw.12.1682087326266;
        Fri, 21 Apr 2023 07:28:46 -0700 (PDT)
From: Alexander Bulekov <alxndr@bu.edu>
To: qemu-devel@nongnu.org
Cc: Alexander Bulekov <alxndr@bu.edu>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Mauro Matteo Cascella <mcascell@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Laurent Vivier <lvivier@redhat.com>,
	Bandan Das <bsd@redhat.com>,
	"Edgar E . Iglesias" <edgar.iglesias@gmail.com>,
	Darren Kenny <darren.kenny@oracle.com>,
	Bin Meng <bin.meng@windriver.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	=?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Jon Maloy <jmaloy@redhat.com>,
	Siqi Chen <coc.cyqh@gmail.com>,
	Michael Tokarev <mjt@tls.msk.ru>,
	Paul Durrant <paul@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Amit Shah <amit@kernel.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Keith Busch <kbusch@kernel.org>,
	Klaus Jensen <its@irrelevant.dk>,
	Fam Zheng <fam@euphon.net>,
	Dmitry Fleytman <dmitry.fleytman@gmail.com>,
	"Gonglei (Arei)" <arei.gonglei@huawei.com>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs),
	qemu-block@nongnu.org (open list:virtio-blk),
	qemu-arm@nongnu.org (open list:i.MX31 (kzm)),
	qemu-ppc@nongnu.org (open list:Old World (g3beige))
Subject: [PATCH v8 4/8] hw: replace most qemu_bh_new calls with qemu_bh_new_guarded
Date: Fri, 21 Apr 2023 10:27:32 -0400
Message-Id: <20230421142736.2817601-5-alxndr@bu.edu>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230421142736.2817601-1-alxndr@bu.edu>
References: <20230421142736.2817601-1-alxndr@bu.edu>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-CES-GSUITE_AUTH: bf3aNvsZpxl8

This protects devices from bh->mmio reentrancy issues.

Thanks: Thomas Huth <thuth@redhat.com> for diagnosing OS X test failure.
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Thomas Huth <thuth@redhat.com>
---
 hw/9pfs/xen-9p-backend.c        | 5 ++++-
 hw/block/dataplane/virtio-blk.c | 3 ++-
 hw/block/dataplane/xen-block.c  | 5 +++--
 hw/char/virtio-serial-bus.c     | 3 ++-
 hw/display/qxl.c                | 9 ++++++---
 hw/display/virtio-gpu.c         | 6 ++++--
 hw/ide/ahci.c                   | 3 ++-
 hw/ide/ahci_internal.h          | 1 +
 hw/ide/core.c                   | 4 +++-
 hw/misc/imx_rngc.c              | 6 ++++--
 hw/misc/macio/mac_dbdma.c       | 2 +-
 hw/net/virtio-net.c             | 3 ++-
 hw/nvme/ctrl.c                  | 6 ++++--
 hw/scsi/mptsas.c                | 3 ++-
 hw/scsi/scsi-bus.c              | 3 ++-
 hw/scsi/vmw_pvscsi.c            | 3 ++-
 hw/usb/dev-uas.c                | 3 ++-
 hw/usb/hcd-dwc2.c               | 3 ++-
 hw/usb/hcd-ehci.c               | 3 ++-
 hw/usb/hcd-uhci.c               | 2 +-
 hw/usb/host-libusb.c            | 6 ++++--
 hw/usb/redirect.c               | 6 ++++--
 hw/usb/xen-usb.c                | 3 ++-
 hw/virtio/virtio-balloon.c      | 5 +++--
 hw/virtio/virtio-crypto.c       | 3 ++-
 25 files changed, 66 insertions(+), 33 deletions(-)

diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
index 74f3a05f88..0e266c552b 100644
--- a/hw/9pfs/xen-9p-backend.c
+++ b/hw/9pfs/xen-9p-backend.c
@@ -61,6 +61,7 @@ typedef struct Xen9pfsDev {
 
     int num_rings;
     Xen9pfsRing *rings;
+    MemReentrancyGuard mem_reentrancy_guard;
 } Xen9pfsDev;
 
 static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev);
@@ -443,7 +444,9 @@ static int xen_9pfs_connect(struct XenLegacyDevice *xendev)
         xen_9pdev->rings[i].ring.out = xen_9pdev->rings[i].data +
                                        XEN_FLEX_RING_SIZE(ring_order);
 
-        xen_9pdev->rings[i].bh = qemu_bh_new(xen_9pfs_bh, &xen_9pdev->rings[i]);
+        xen_9pdev->rings[i].bh = qemu_bh_new_guarded(xen_9pfs_bh,
+                                                     &xen_9pdev->rings[i],
+                                                     &xen_9pdev->mem_reentrancy_guard);
         xen_9pdev->rings[i].out_cons = 0;
         xen_9pdev->rings[i].out_size = 0;
         xen_9pdev->rings[i].inprogress = false;
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index b28d81737e..a6202997ee 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -127,7 +127,8 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
     } else {
         s->ctx = qemu_get_aio_context();
     }
-    s->bh = aio_bh_new(s->ctx, notify_guest_bh, s);
+    s->bh = aio_bh_new_guarded(s->ctx, notify_guest_bh, s,
+                               &DEVICE(vdev)->mem_reentrancy_guard);
     s->batch_notify_vqs = bitmap_new(conf->num_queues);
 
     *dataplane = s;
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 734da42ea7..d8bc39d359 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -633,8 +633,9 @@ XenBlockDataPlane *xen_block_dataplane_create(XenDevice *xendev,
     } else {
         dataplane->ctx = qemu_get_aio_context();
     }
-    dataplane->bh = aio_bh_new(dataplane->ctx, xen_block_dataplane_bh,
-                               dataplane);
+    dataplane->bh = aio_bh_new_guarded(dataplane->ctx, xen_block_dataplane_bh,
+                                       dataplane,
+                                       &DEVICE(xendev)->mem_reentrancy_guard);
 
     return dataplane;
 }
diff --git a/hw/char/virtio-serial-bus.c b/hw/char/virtio-serial-bus.c
index 7d4601cb5d..dd619f0731 100644
--- a/hw/char/virtio-serial-bus.c
+++ b/hw/char/virtio-serial-bus.c
@@ -985,7 +985,8 @@ static void virtser_port_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    port->bh = qemu_bh_new(flush_queued_data_bh, port);
+    port->bh = qemu_bh_new_guarded(flush_queued_data_bh, port,
+                                   &dev->mem_reentrancy_guard);
     port->elem = NULL;
 }
 
diff --git a/hw/display/qxl.c b/hw/display/qxl.c
index ec712d3ca2..c0460c4ef1 100644
--- a/hw/display/qxl.c
+++ b/hw/display/qxl.c
@@ -2201,11 +2201,14 @@ static void qxl_realize_common(PCIQXLDevice *qxl, Error **errp)
 
     qemu_add_vm_change_state_handler(qxl_vm_change_state_handler, qxl);
 
-    qxl->update_irq = qemu_bh_new(qxl_update_irq_bh, qxl);
+    qxl->update_irq = qemu_bh_new_guarded(qxl_update_irq_bh, qxl,
+                                          &DEVICE(qxl)->mem_reentrancy_guard);
     qxl_reset_state(qxl);
 
-    qxl->update_area_bh = qemu_bh_new(qxl_render_update_area_bh, qxl);
-    qxl->ssd.cursor_bh = qemu_bh_new(qemu_spice_cursor_refresh_bh, &qxl->ssd);
+    qxl->update_area_bh = qemu_bh_new_guarded(qxl_render_update_area_bh, qxl,
+                                              &DEVICE(qxl)->mem_reentrancy_guard);
+    qxl->ssd.cursor_bh = qemu_bh_new_guarded(qemu_spice_cursor_refresh_bh, &qxl->ssd,
+                                             &DEVICE(qxl)->mem_reentrancy_guard);
 }
 
 static void qxl_realize_primary(PCIDevice *dev, Error **errp)
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 5e15c79b94..66ac9b6cc5 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1339,8 +1339,10 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
 
     g->ctrl_vq = virtio_get_queue(vdev, 0);
     g->cursor_vq = virtio_get_queue(vdev, 1);
-    g->ctrl_bh = qemu_bh_new(virtio_gpu_ctrl_bh, g);
-    g->cursor_bh = qemu_bh_new(virtio_gpu_cursor_bh, g);
+    g->ctrl_bh = qemu_bh_new_guarded(virtio_gpu_ctrl_bh, g,
+                                     &qdev->mem_reentrancy_guard);
+    g->cursor_bh = qemu_bh_new_guarded(virtio_gpu_cursor_bh, g,
+                                       &qdev->mem_reentrancy_guard);
     QTAILQ_INIT(&g->reslist);
     QTAILQ_INIT(&g->cmdq);
     QTAILQ_INIT(&g->fenceq);
diff --git a/hw/ide/ahci.c b/hw/ide/ahci.c
index 55902e1df7..4e76d6b191 100644
--- a/hw/ide/ahci.c
+++ b/hw/ide/ahci.c
@@ -1509,7 +1509,8 @@ static void ahci_cmd_done(const IDEDMA *dma)
     ahci_write_fis_d2h(ad);
 
     if (ad->port_regs.cmd_issue && !ad->check_bh) {
-        ad->check_bh = qemu_bh_new(ahci_check_cmd_bh, ad);
+        ad->check_bh = qemu_bh_new_guarded(ahci_check_cmd_bh, ad,
+                                           &ad->mem_reentrancy_guard);
         qemu_bh_schedule(ad->check_bh);
     }
 }
diff --git a/hw/ide/ahci_internal.h b/hw/ide/ahci_internal.h
index 303fcd7235..2480455372 100644
--- a/hw/ide/ahci_internal.h
+++ b/hw/ide/ahci_internal.h
@@ -321,6 +321,7 @@ struct AHCIDevice {
     bool init_d2h_sent;
     AHCICmdHdr *cur_cmd;
     NCQTransferState ncq_tfs[AHCI_MAX_CMDS];
+    MemReentrancyGuard mem_reentrancy_guard;
 };
 
 struct AHCIPCIState {
diff --git a/hw/ide/core.c b/hw/ide/core.c
index 2d034731cf..50c8935366 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -513,6 +513,7 @@ BlockAIOCB *ide_issue_trim(
         BlockCompletionFunc *cb, void *cb_opaque, void *opaque)
 {
     IDEState *s = opaque;
+    IDEDevice *dev = s->unit ? s->bus->slave : s->bus->master;
     TrimAIOCB *iocb;
 
     /* Paired with a decrement in ide_trim_bh_cb() */
@@ -520,7 +521,8 @@ BlockAIOCB *ide_issue_trim(
 
     iocb = blk_aio_get(&trim_aiocb_info, s->blk, cb, cb_opaque);
     iocb->s = s;
-    iocb->bh = qemu_bh_new(ide_trim_bh_cb, iocb);
+    iocb->bh = qemu_bh_new_guarded(ide_trim_bh_cb, iocb,
+                                   &DEVICE(dev)->mem_reentrancy_guard);
     iocb->ret = 0;
     iocb->qiov = qiov;
     iocb->i = -1;
diff --git a/hw/misc/imx_rngc.c b/hw/misc/imx_rngc.c
index 632c03779c..082c6980ad 100644
--- a/hw/misc/imx_rngc.c
+++ b/hw/misc/imx_rngc.c
@@ -228,8 +228,10 @@ static void imx_rngc_realize(DeviceState *dev, Error **errp)
     sysbus_init_mmio(sbd, &s->iomem);
 
     sysbus_init_irq(sbd, &s->irq);
-    s->self_test_bh = qemu_bh_new(imx_rngc_self_test, s);
-    s->seed_bh = qemu_bh_new(imx_rngc_seed, s);
+    s->self_test_bh = qemu_bh_new_guarded(imx_rngc_self_test, s,
+                                          &dev->mem_reentrancy_guard);
+    s->seed_bh = qemu_bh_new_guarded(imx_rngc_seed, s,
+                                     &dev->mem_reentrancy_guard);
 }
 
 static void imx_rngc_reset(DeviceState *dev)
diff --git a/hw/misc/macio/mac_dbdma.c b/hw/misc/macio/mac_dbdma.c
index 43bb1f56ba..80a789f32b 100644
--- a/hw/misc/macio/mac_dbdma.c
+++ b/hw/misc/macio/mac_dbdma.c
@@ -914,7 +914,7 @@ static void mac_dbdma_realize(DeviceState *dev, Error **errp)
 {
     DBDMAState *s = MAC_DBDMA(dev);
 
-    s->bh = qemu_bh_new(DBDMA_run_bh, s);
+    s->bh = qemu_bh_new_guarded(DBDMA_run_bh, s, &dev->mem_reentrancy_guard);
 }
 
 static void mac_dbdma_class_init(ObjectClass *oc, void *data)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 53e1c32643..447f669921 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -2917,7 +2917,8 @@ static void virtio_net_add_queue(VirtIONet *n, int index)
         n->vqs[index].tx_vq =
             virtio_add_queue(vdev, n->net_conf.tx_queue_size,
                              virtio_net_handle_tx_bh);
-        n->vqs[index].tx_bh = qemu_bh_new(virtio_net_tx_bh, &n->vqs[index]);
+        n->vqs[index].tx_bh = qemu_bh_new_guarded(virtio_net_tx_bh, &n->vqs[index],
+                                                  &DEVICE(vdev)->mem_reentrancy_guard);
     }
 
     n->vqs[index].tx_waiting = 0;
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 49c1210fce..62e4a1d7d9 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -4604,7 +4604,8 @@ static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr,
         QTAILQ_INSERT_TAIL(&(sq->req_list), &sq->io_req[i], entry);
     }
 
-    sq->bh = qemu_bh_new(nvme_process_sq, sq);
+    sq->bh = qemu_bh_new_guarded(nvme_process_sq, sq,
+                                 &DEVICE(sq->ctrl)->mem_reentrancy_guard);
 
     if (n->dbbuf_enabled) {
         sq->db_addr = n->dbbuf_dbs + (sqid << 3);
@@ -5250,7 +5251,8 @@ static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr,
         }
     }
     n->cq[cqid] = cq;
-    cq->bh = qemu_bh_new(nvme_post_cqes, cq);
+    cq->bh = qemu_bh_new_guarded(nvme_post_cqes, cq,
+                                 &DEVICE(cq->ctrl)->mem_reentrancy_guard);
 }
 
 static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req)
diff --git a/hw/scsi/mptsas.c b/hw/scsi/mptsas.c
index c485da792c..3de288b454 100644
--- a/hw/scsi/mptsas.c
+++ b/hw/scsi/mptsas.c
@@ -1322,7 +1322,8 @@ static void mptsas_scsi_realize(PCIDevice *dev, Error **errp)
     }
     s->max_devices = MPTSAS_NUM_PORTS;
 
-    s->request_bh = qemu_bh_new(mptsas_fetch_requests, s);
+    s->request_bh = qemu_bh_new_guarded(mptsas_fetch_requests, s,
+                                        &DEVICE(dev)->mem_reentrancy_guard);
 
     scsi_bus_init(&s->bus, sizeof(s->bus), &dev->qdev, &mptsas_scsi_info);
 }
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index ceceafb2cd..e5c9f7a53d 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -193,7 +193,8 @@ static void scsi_dma_restart_cb(void *opaque, bool running, RunState state)
         AioContext *ctx = blk_get_aio_context(s->conf.blk);
         /* The reference is dropped in scsi_dma_restart_bh.*/
         object_ref(OBJECT(s));
-        s->bh = aio_bh_new(ctx, scsi_dma_restart_bh, s);
+        s->bh = aio_bh_new_guarded(ctx, scsi_dma_restart_bh, s,
+                                   &DEVICE(s)->mem_reentrancy_guard);
         qemu_bh_schedule(s->bh);
     }
 }
diff --git a/hw/scsi/vmw_pvscsi.c b/hw/scsi/vmw_pvscsi.c
index fa76696855..4de34536e9 100644
--- a/hw/scsi/vmw_pvscsi.c
+++ b/hw/scsi/vmw_pvscsi.c
@@ -1184,7 +1184,8 @@ pvscsi_realizefn(PCIDevice *pci_dev, Error **errp)
         pcie_endpoint_cap_init(pci_dev, PVSCSI_EXP_EP_OFFSET);
     }
 
-    s->completion_worker = qemu_bh_new(pvscsi_process_completion_queue, s);
+    s->completion_worker = qemu_bh_new_guarded(pvscsi_process_completion_queue, s,
+                                               &DEVICE(pci_dev)->mem_reentrancy_guard);
 
     scsi_bus_init(&s->bus, sizeof(s->bus), DEVICE(pci_dev), &pvscsi_scsi_info);
     /* override default SCSI bus hotplug-handler, with pvscsi's one */
diff --git a/hw/usb/dev-uas.c b/hw/usb/dev-uas.c
index 88f99c05d5..f013ded91e 100644
--- a/hw/usb/dev-uas.c
+++ b/hw/usb/dev-uas.c
@@ -937,7 +937,8 @@ static void usb_uas_realize(USBDevice *dev, Error **errp)
 
     QTAILQ_INIT(&uas->results);
     QTAILQ_INIT(&uas->requests);
-    uas->status_bh = qemu_bh_new(usb_uas_send_status_bh, uas);
+    uas->status_bh = qemu_bh_new_guarded(usb_uas_send_status_bh, uas,
+                                         &d->mem_reentrancy_guard);
 
     dev->flags |= (1 << USB_DEV_FLAG_IS_SCSI_STORAGE);
     scsi_bus_init(&uas->bus, sizeof(uas->bus), DEVICE(dev), &usb_uas_scsi_info);
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
index 8755e9cbb0..a0c4e782b2 100644
--- a/hw/usb/hcd-dwc2.c
+++ b/hw/usb/hcd-dwc2.c
@@ -1364,7 +1364,8 @@ static void dwc2_realize(DeviceState *dev, Error **errp)
     s->fi = USB_FRMINTVL - 1;
     s->eof_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_frame_boundary, s);
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_work_timer, s);
-    s->async_bh = qemu_bh_new(dwc2_work_bh, s);
+    s->async_bh = qemu_bh_new_guarded(dwc2_work_bh, s,
+                                      &dev->mem_reentrancy_guard);
 
     sysbus_init_irq(sbd, &s->irq);
 }
diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
index d4da8dcb8d..c930c60921 100644
--- a/hw/usb/hcd-ehci.c
+++ b/hw/usb/hcd-ehci.c
@@ -2533,7 +2533,8 @@ void usb_ehci_realize(EHCIState *s, DeviceState *dev, Error **errp)
     }
 
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, ehci_work_timer, s);
-    s->async_bh = qemu_bh_new(ehci_work_bh, s);
+    s->async_bh = qemu_bh_new_guarded(ehci_work_bh, s,
+                                      &dev->mem_reentrancy_guard);
     s->device = dev;
 
     s->vmstate = qemu_add_vm_change_state_handler(usb_ehci_vm_state_change, s);
diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
index 8ac1175ad2..77baaa7a6b 100644
--- a/hw/usb/hcd-uhci.c
+++ b/hw/usb/hcd-uhci.c
@@ -1190,7 +1190,7 @@ void usb_uhci_common_realize(PCIDevice *dev, Error **errp)
                               USB_SPEED_MASK_LOW | USB_SPEED_MASK_FULL);
         }
     }
-    s->bh = qemu_bh_new(uhci_bh, s);
+    s->bh = qemu_bh_new_guarded(uhci_bh, s, &DEVICE(dev)->mem_reentrancy_guard);
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, uhci_frame_timer, s);
     s->num_ports_vmstate = NB_PORTS;
     QTAILQ_INIT(&s->queues);
diff --git a/hw/usb/host-libusb.c b/hw/usb/host-libusb.c
index 176868d345..f500db85ab 100644
--- a/hw/usb/host-libusb.c
+++ b/hw/usb/host-libusb.c
@@ -1141,7 +1141,8 @@ static void usb_host_nodev_bh(void *opaque)
 static void usb_host_nodev(USBHostDevice *s)
 {
     if (!s->bh_nodev) {
-        s->bh_nodev = qemu_bh_new(usb_host_nodev_bh, s);
+        s->bh_nodev = qemu_bh_new_guarded(usb_host_nodev_bh, s,
+                                          &DEVICE(s)->mem_reentrancy_guard);
     }
     qemu_bh_schedule(s->bh_nodev);
 }
@@ -1739,7 +1740,8 @@ static int usb_host_post_load(void *opaque, int version_id)
     USBHostDevice *dev = opaque;
 
     if (!dev->bh_postld) {
-        dev->bh_postld = qemu_bh_new(usb_host_post_load_bh, dev);
+        dev->bh_postld = qemu_bh_new_guarded(usb_host_post_load_bh, dev,
+                                             &DEVICE(dev)->mem_reentrancy_guard);
     }
     qemu_bh_schedule(dev->bh_postld);
     dev->bh_postld_pending = true;
diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
index fd7df599bc..39fbaaab16 100644
--- a/hw/usb/redirect.c
+++ b/hw/usb/redirect.c
@@ -1441,8 +1441,10 @@ static void usbredir_realize(USBDevice *udev, Error **errp)
         }
     }
 
-    dev->chardev_close_bh = qemu_bh_new(usbredir_chardev_close_bh, dev);
-    dev->device_reject_bh = qemu_bh_new(usbredir_device_reject_bh, dev);
+    dev->chardev_close_bh = qemu_bh_new_guarded(usbredir_chardev_close_bh, dev,
+                                                &DEVICE(dev)->mem_reentrancy_guard);
+    dev->device_reject_bh = qemu_bh_new_guarded(usbredir_device_reject_bh, dev,
+                                                &DEVICE(dev)->mem_reentrancy_guard);
     dev->attach_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL, usbredir_do_attach, dev);
 
     packet_id_queue_init(&dev->cancelled, dev, "cancelled");
diff --git a/hw/usb/xen-usb.c b/hw/usb/xen-usb.c
index 66cb3f7c24..38ee660a30 100644
--- a/hw/usb/xen-usb.c
+++ b/hw/usb/xen-usb.c
@@ -1032,7 +1032,8 @@ static void usbback_alloc(struct XenLegacyDevice *xendev)
 
     QTAILQ_INIT(&usbif->req_free_q);
     QSIMPLEQ_INIT(&usbif->hotplug_q);
-    usbif->bh = qemu_bh_new(usbback_bh, usbif);
+    usbif->bh = qemu_bh_new_guarded(usbback_bh, usbif,
+                                    &DEVICE(xendev)->mem_reentrancy_guard);
 }
 
 static int usbback_free(struct XenLegacyDevice *xendev)
diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index 746f07c4d2..d60dd1f61e 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -908,8 +908,9 @@ static void virtio_balloon_device_realize(DeviceState *dev, Error **errp)
         precopy_add_notifier(&s->free_page_hint_notify);
 
         object_ref(OBJECT(s->iothread));
-        s->free_page_bh = aio_bh_new(iothread_get_aio_context(s->iothread),
-                                     virtio_ballloon_get_free_page_hints, s);
+        s->free_page_bh = aio_bh_new_guarded(iothread_get_aio_context(s->iothread),
+                                             virtio_ballloon_get_free_page_hints, s,
+                                             &dev->mem_reentrancy_guard);
     }
 
     if (virtio_has_feature(s->host_features, VIRTIO_BALLOON_F_REPORTING)) {
diff --git a/hw/virtio/virtio-crypto.c b/hw/virtio/virtio-crypto.c
index 802e1b9659..2fe804510f 100644
--- a/hw/virtio/virtio-crypto.c
+++ b/hw/virtio/virtio-crypto.c
@@ -1074,7 +1074,8 @@ static void virtio_crypto_device_realize(DeviceState *dev, Error **errp)
         vcrypto->vqs[i].dataq =
                  virtio_add_queue(vdev, 1024, virtio_crypto_handle_dataq_bh);
         vcrypto->vqs[i].dataq_bh =
-                 qemu_bh_new(virtio_crypto_dataq_bh, &vcrypto->vqs[i]);
+                 qemu_bh_new_guarded(virtio_crypto_dataq_bh, &vcrypto->vqs[i],
+                                     &dev->mem_reentrancy_guard);
         vcrypto->vqs[i].vcrypto = vcrypto;
     }
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Fri Apr 21 14:41:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 14:41:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524618.815693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pprxF-00061y-2U; Fri, 21 Apr 2023 14:41:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524618.815693; Fri, 21 Apr 2023 14:41:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pprxE-00061r-Vs; Fri, 21 Apr 2023 14:41:40 +0000
Received: by outflank-mailman (input) for mailman id 524618;
 Fri, 21 Apr 2023 14:41:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ncdX=AM=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pprxD-00061l-AV
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 14:41:39 +0000
Received: from mail-lf1-x133.google.com (mail-lf1-x133.google.com
 [2a00:1450:4864:20::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9e4732f0-e052-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 16:41:36 +0200 (CEST)
Received: by mail-lf1-x133.google.com with SMTP id
 2adb3069b0e04-4ec81245ae1so1834962e87.0
 for <xen-devel@lists.xenproject.org>; Fri, 21 Apr 2023 07:41:36 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 n15-20020a056512388f00b004e96afb1e9asm581468lft.253.2023.04.21.07.41.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Apr 2023 07:41:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e4732f0-e052-11ed-8611-37d641c3527e
Message-ID: <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
Subject: Re: [PATCH v5 1/4] xen/riscv: add VM space layout
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Fri, 21 Apr 2023 17:41:13 +0300
In-Reply-To: <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
	 <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
	 <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.0 (3.48.0-1.fc38) 
MIME-Version: 1.0

On Thu, 2023-04-20 at 14:58 +0200, Jan Beulich wrote:
> On 19.04.2023 17:42, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/include/asm/config.h
> > +++ b/xen/arch/riscv/include/asm/config.h
> > @@ -4,6 +4,37 @@
> >  #include <xen/const.h>
> >  #include <xen/page-size.h>
> >
> > +/*
> > + * RISC-V64 Layout:
> > + *
> > + * From the riscv-privileged doc:
> > + *   When mapping between narrower and wider addresses,
> > + *   RISC-V zero-extends a narrower physical address to a wider size.
> > + *   The mapping between 64-bit virtual addresses and the 39-bit usable
> > + *   address space of Sv39 is not based on zero-extension but instead
> > + *   follows an entrenched convention that allows an OS to use one or
> > + *   a few of the most-significant bits of a full-size (64-bit) virtual
> > + *   address to quickly distinguish user and supervisor address regions.
> > + *
> > + * It means that:
> > + *   top VA bits are simply ignored for the purpose of translating to PA.
> > + *
> > + * The similar is true for other Sv{32, 39, 48, 57}.
>
> Personally I find it odd that Sv32 is mentioned here (there's no
> truncation / aliasing there aiui), and that Sv39 is mentioned here again
> (when that's what the earlier paragraph talks about).
It looks like my explanation was poor.

I meant that all of the above (that the usable address space isn't based on
zero-extension) applies to Sv{39, 48, 57}, differing only in the size of the
usable address space (39-bit, 48-bit and 57-bit, respectively, out of the
64-bit virtual address space).

And I agree that this does not hold for Sv32, as it covers the full 32-bit
virtual address space.

>=20
> > + *
> > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D
> > =3D=3D=3D=3D=3D=3D=3D=3D=3D
> > + *=C2=A0=C2=A0=C2=A0 Start addr=C2=A0=C2=A0=C2=A0 |=C2=A0=C2=A0 End ad=
dr=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 |=C2=A0 Size=C2=A0 | VM area
> > description
> > + *
> > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D
> > =3D=3D=3D=3D=3D=3D=3D=3D=3D
> > + * FFFFFFFFC0000000 |=C2=A0 FFFFFFFFC0200000 |=C2=A0 2 MB=C2=A0 | Xen
> > + * FFFFFFFFC0200000 |=C2=A0 FFFFFFFFC0600000 |=C2=A0 4 MB=C2=A0 | FDT
> > + * FFFFFFFFC0600000 |=C2=A0 FFFFFFFFC0800000 |=C2=A0 2 MB=C2=A0 | Fixm=
ap
> 
> These are all L2 slot 511 aiui, which may be worth mentioning,
> especially since the top bits don't match the top bits further down
> in the table (because of the aliasing).

Then I'll add one more column with the slot number.

> 
> > + *     .................. unused ..................
> 
> This is covering slot 510, which again may be worth mentioning.
> 
> > + * 0000003200000000 |  0000007f40000000 | 331 GB | Direct map(L2 slot: 200-509)
> > + * 0000003100000000 |  0000003140000000 |  1 GB  | Frametable(L2 slot: 196-197)
>=20
> 1Gb is, if I'm not mistaken, a single L2 slot.
Yeah, it can be misunderstood. I meant [196, 197), so 197 isn't
included. I'll update the table.
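For what it's worth, the slot numbers in the table can be derived directly from the addresses; a small sketch (sv39_l2_slot is a hypothetical helper for illustration, not part of the patch):

```python
# Sketch: with 4 KiB pages and 9-bit VPN fields, the Sv39 root (L2)
# page-table index is VA bits 38:30.
def sv39_l2_slot(va: int) -> int:
    return (va >> 30) & 0x1FF
```

E.g. Xen's FFFFFFFFC0000000 lands in slot 511, the direct map's 0000003200000000 in slot 200, the frametable's 0000003100000000 in slot 196 and VMAP's 0000003080000000 in slot 194, matching the ranges above.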

> 
> Also assuming a 32-byte struct page_info (I don't think you'll get away
> with less than that, when even Arm32 requires this much), there's a
> mismatch between direct map and frame table size: With 4k pages, the
> scaling factor would be 128 if I'm not mistaken. So perhaps you really
> mean 3Gb here to cover for (slightly more than) the 331Gb of memory you
> mean to be able to map?
For RV64 the page_info size will be 56 bytes, and 32 bytes for RV32, but
you are right: it should be 3 GB in that case, which will be enough
(taking into account both available sizes of the page_info structure).
Thanks for the catch.
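The scaling can be checked with a quick calculation; a sketch under the assumptions stated above (4 KiB pages, one struct page_info per page of RAM):

```python
# Sketch of the frame-table scaling discussed above: one struct
# page_info per 4 KiB page of RAM, so the frame table needs
# ram_bytes / (PAGE_SIZE / sizeof(struct page_info)) bytes.
PAGE_SIZE = 4096
GiB = 1 << 30

def frametable_bytes(ram_bytes: int, page_info_size: int) -> int:
    return (ram_bytes // PAGE_SIZE) * page_info_size

scaling = PAGE_SIZE // 32  # 128, the factor mentioned above
```

With 32-byte entries, 331 GiB of direct map needs just under 2.6 GiB of frame table, hence the suggested 3 GiB reservation.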
> 
> > + * 0000003080000000 |  00000030c0000000 |  1 GB  | VMAP (L2 slot: 194-195)
> > + *     .................. unused ..................
> 
> There are further unused regions between these three entries, which imo
> will want making explicit (for the avoidance of doubt, if nothing else).
Yeah, there are some. I'll add them explicitly.


~ Oleksii


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 15:55:04 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180349-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180349: FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 15:54:43 +0000

flight 180349 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180349/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken  in 180325
 build-armhf                 2 hosts-allocate broken in 180325 REGR. vs. 180349

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm   7 xen-install      fail in 180325 pass in 180349
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 180325 pass in 180349
 test-amd64-i386-libvirt       7 xen-install                fail pass in 180325
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180325

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)           blocked in 180325 n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)           blocked in 180325 n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)         blocked in 180325 n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)           blocked in 180325 n/a
 test-armhf-armhf-libvirt      1 build-check(1)           blocked in 180325 n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)           blocked in 180325 n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)           blocked in 180325 n/a
 build-armhf-libvirt           1 build-check(1)           blocked in 180325 n/a
 test-armhf-armhf-examine      1 build-check(1)           blocked in 180325 n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)          blocked in 180325 n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)           blocked in 180325 n/a
 test-amd64-i386-libvirt     15 migrate-support-check fail in 180325 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180309
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180309
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180309
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180325
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180325
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180325
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180325
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180325
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180325
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180325
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180325
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180325
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180349  2023-04-21 01:51:57 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Apr 21 16:01:17 2023
Message-ID: <ea2d5cfabb9ada64eb975369779ca430f38e9eec.camel@gmail.com>
Subject: Re: [PATCH v5 2/4] xen/riscv: introduce setup_initial_pages
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Fri, 21 Apr 2023 19:01:03 +0300
In-Reply-To: <67d8574f-2e0d-4eb6-19aa-67fe7645e35a@suse.com>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
	 <5b27693bcdf6d64381314aeef72cfe03dee8d73a.1681918194.git.oleksii.kurochko@gmail.com>
	 <67d8574f-2e0d-4eb6-19aa-67fe7645e35a@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.48.0 (3.48.0-1.fc38)
MIME-Version: 1.0

On Thu, 2023-04-20 at 16:36 +0200, Jan Beulich wrote:
> On 19.04.2023 17:42, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/include/asm/page-bits.h
> > +++ b/xen/arch/riscv/include/asm/page-bits.h
> > @@ -1,6 +1,16 @@
> >  #ifndef __RISCV_PAGE_BITS_H__
> >  #define __RISCV_PAGE_BITS_H__
> >  
> > +#ifdef CONFIG_RISCV_64
> > +#define PAGETABLE_ORDER         (9)
> > +#else /* CONFIG_RISCV_32 */
> > +#define PAGETABLE_ORDER         (10)
> > +#endif
> > +
> > +#define PAGETABLE_ENTRIES       (1 << PAGETABLE_ORDER)
> > +
> > +#define PTE_PPN_SHIFT           10
> > +
> >  #define PAGE_SHIFT              12 /* 4 KiB Pages */
> >  #define PADDR_BITS              56 /* 44-bit PPN */
> 
> Personally I think these two would better remain at the top. But maybe
> that's just me ...
I'm ok to remain them at the top.

> 
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/page.h
> > @@ -0,0 +1,63 @@
> > +#ifndef _ASM_RISCV_PAGE_H
> > +#define _ASM_RISCV_PAGE_H
> > +
> > +#include <xen/const.h>
> > +#include <xen/types.h>
> > +
> > +#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
> > +
> > +#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
> > +#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
> > +#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
> > +#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
> > +#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
> > +
> > +#define PTE_V
QUxJRMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBCSVQoMCwgVUwpCj4gPiAr
I2RlZmluZSBQVEVfUkVBREFCTEXCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgQklUKDEs
IFVMKQo+ID4gKyNkZWZpbmUgUFRFX1dSSVRBQkxFwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIEJJVCgyLCBVTCkKPiA+ICsjZGVmaW5lIFBURV9FWEVDVVRBQkxFwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqAgQklUKDMsIFVMKQo+ID4gKyNkZWZpbmUgUFRFX1VTRVLCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBCSVQoNCwgVUwpCj4gPiArI2RlZmluZSBQVEVfR0xP
QkFMwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBCSVQoNSwgVUwpCj4gPiArI2Rl
ZmluZSBQVEVfQUNDRVNTRUTCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgQklUKDYsIFVM
KQo+ID4gKyNkZWZpbmUgUFRFX0RJUlRZwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIEJJVCg3LCBVTCkKPiA+ICsjZGVmaW5lIFBURV9SU1fCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgIChCSVQoOCwgVUwpIHwgQklUKDksIFVMKSkKPiA+ICsKPiA+ICsj
ZGVmaW5lIFBURV9MRUFGX0RFRkFVTFTCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIChQVEVfVkFMSUQg
fCBQVEVfUkVBREFCTEUgfAo+ID4gUFRFX1dSSVRBQkxFKQo+ID4gKyNkZWZpbmUgUFRFX1RBQkxF
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIChQVEVfVkFMSUQpCj4gPiArCj4g
PiArLyogQ2FsY3VsYXRlIHRoZSBvZmZzZXRzIGludG8gdGhlIHBhZ2V0YWJsZXMgZm9yIGEgZ2l2
ZW4gVkEgKi8KPiA+ICsjZGVmaW5lIHB0X2xpbmVhcl9vZmZzZXQobHZsLCB2YSnCoMKgICgodmEp
ID4+Cj4gPiBYRU5fUFRfTEVWRUxfU0hJRlQobHZsKSkKPiA+ICsKPiA+ICsjZGVmaW5lIHB0X2lu
ZGV4KGx2bCwgdmEpIHB0X2xpbmVhcl9vZmZzZXQobHZsLCAodmEpICYKPiA+IFhFTl9QVF9MRVZF
TF9NQVNLKGx2bCkpCj4gPiArCj4gPiArLyogUGFnZSBUYWJsZSBlbnRyeSAqLwo+ID4gK3R5cGVk
ZWYgc3RydWN0IHsKPiA+ICsjaWZkZWYgQ09ORklHX1JJU0NWXzY0Cj4gPiArwqDCoMKgIHVpbnQ2
NF90IHB0ZTsKPiA+ICsjZWxzZQo+ID4gK8KgwqDCoCB1aW50MzJfdCBwdGU7Cj4gPiArI2VuZGlm
Cj4gPiArfSBwdGVfdDsKPiA+ICsKPiA+ICsjZGVmaW5lIHB0ZV90b19hZGRyKHgpICgoKHgpID4+
IFBURV9QUE5fU0hJRlQpIDw8IFBBR0VfU0hJRlQpCj4gCj4gVGhpcyB3aWxsIGJlIGF0IHJpc2sg
b2Ygb3ZlcmZsb3cgZm9yIFJWMzIgd2l0aG91dCBhIGNhc3QgdG8gcGFkZHJfdAo+ICh3aGljaAo+
IEkgZXhwZWN0IHdvdWxkIGJlIGEgNjQtYml0IHR5cGUgb24gUlYzMiBqdXN0IGxpa2UgaXQgaXMg
b24gUlY2NCkuClllYWgsIHBhZGRyX3QgaXMgdTY0IGZvciBib3RoIFJWMzIgYW5kIFJWNjQuCkkn
bGwgYWRkIGNhc3QgdG8gcGFkZHJfdCB0byBhdm9pZCBwb3NzaWJsZSByaXNrIG9mIG92ZXJmbG93
LgoKPiAKPiA+ICsjZGVmaW5lIGFkZHJfdG9fcHRlKHgpICgoKHgpID4+IFBBR0VfU0hJRlQpIDw8
IFBURV9QUE5fU0hJRlQpCj4gPiArCj4gPiArc3RhdGljIGlubGluZSBwdGVfdCBwYWRkcl90b19w
dGUoY29uc3QgcGFkZHJfdCBwYWRkciwKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGNvbnN0IHVuc2lnbmVkIGxvbmcg
cGVybWlzc2lvbnMpCj4gPiArewo+ID4gK8KgwqDCoCB1bnNpZ25lZCBsb25nIHRtcCA9IGFkZHJf
dG9fcHRlKHBhZGRyKTsKPiA+ICvCoMKgwqAgcmV0dXJuIChwdGVfdCkgeyAucHRlID0gdG1wIHwg
cGVybWlzc2lvbnMgfTsKPiA+ICt9Cj4gPiArCj4gPiArc3RhdGljIGlubGluZSBwYWRkcl90IHB0
ZV90b19wYWRkcihjb25zdCBwdGVfdCBwdGUpCj4gPiArewo+ID4gK8KgwqDCoCByZXR1cm4gcHRl
X3RvX2FkZHIocHRlLnB0ZSk7Cj4gPiArfQo+ID4gKwo+ID4gK3N0YXRpYyBpbmxpbmUgYm9vbCBw
dGVfaXNfdmFsaWQoY29uc3QgcHRlX3QgcCkKPiA+ICt7Cj4gPiArwqDCoMKgIHJldHVybiBwLnB0
ZSAmIFBURV9WQUxJRDsKPiA+ICt9Cj4gCj4gQSByZW1hcmsgb24gYWxsIG9mIHRoZSAiY29uc3Qi
IGhlcmU6IEl0J3MgZmluZSBpZiB5b3Ugd2FudCB0byBrZWVwCj4gdGhlbSwKPiBidXQgZ2VuZXJh
bGx5IHdlIGNhcmUgYWJvdXQgY29uc3QtY29ycmVjdG5lc3Mgb25seSBmb3IgcG9pbnRlZC10bwo+
IHR5cGVzLgo+IFNjYWxhciBhbmQgY29tcG91bmQgcGFyYW1ldGVyIHZhbHVlcyBhcmUgb3duZWQg
YnkgY2FsbGVkIGZ1bmN0aW9uCj4gYW55d2F5LAo+IHNvIGFsbCB0aGUgImNvbnN0IiBhY2hpZXZl
cyBpcyB0aGF0IHRoZSBmdW5jdGlvbiBjYW4ndCBhbHRlciBpdHMgb3duCj4gYXJndW1lbnQocyku
ClRoZW4gaXQgZG9lc24ndCBtYWtlIGZvciB0aGUgY3VycmVudCBjYXNlIGFuZCByZW1vdmluZyB0
aGVtIHdpbGwKc2ltcGxpZnkgcGFkZHJfdG9fcHRlIGZ1bmN0aW9uLiBTbyBJIHByZWZlciB0byBy
ZW1vdmUgdGhlbS4KCj4gCj4gPiAtLS0gL2Rldi9udWxsCj4gPiArKysgYi94ZW4vYXJjaC9yaXNj
di9tbS5jCj4gPiBAQCAtMCwwICsxLDMxOSBAQAo+ID4gKyNpbmNsdWRlIDx4ZW4vY29tcGlsZXIu
aD4KPiA+ICsjaW5jbHVkZSA8eGVuL2luaXQuaD4KPiA+ICsjaW5jbHVkZSA8eGVuL2tlcm5lbC5o
Pgo+ID4gKwo+ID4gKyNpbmNsdWRlIDxhc20vZWFybHlfcHJpbnRrLmg+Cj4gPiArI2luY2x1ZGUg
PGFzbS9jb25maWcuaD4KPiA+ICsjaW5jbHVkZSA8YXNtL2Nzci5oPgo+ID4gKyNpbmNsdWRlIDxh
c20vbW0uaD4KPiA+ICsjaW5jbHVkZSA8YXNtL3BhZ2UuaD4KPiA+ICsjaW5jbHVkZSA8YXNtL3By
b2Nlc3Nvci5oPgo+ID4gKwo+ID4gK3N0cnVjdCBtbXVfZGVzYyB7Cj4gPiArwqDCoMKgIHVuc2ln
bmVkIGxvbmcgbnVtX2xldmVsczsKPiAKPiBJc24ndCAidW5zaWduZWQgaW50IiBzdWZmaWNpZW50
IGhlcmU/ClllcywgaXQgd2lsbCBiZSBzdWZmaWNpZW50LgoKPiAKPiA+ICsvKgo+ID4gKyAqIEl0
IGlzIGV4cGVjdGVkIHRoYXQgWGVuIHdvbid0IGJlIG1vcmUgdGhlbiAyIE1CLgo+ID4gKyAqIFRo
ZSBjaGVjayBpbiB4ZW4ubGRzLlMgZ3VhcmFudGVlcyB0aGF0Lgo+ID4gKyAqIEF0IGxlYXN0IDQg
cGFnZSAoaW4gY2FzZSB3aGVuIFN2NDggb3IgU3Y1NyBhcmUgdXNlZCApCj4gPiArICogdGFibGVz
IGFyZSBuZWVkZWQgdG8gY292ZXIgMiBNQi4gT25lIGZvciBlYWNoIHBhZ2UgbGV2ZWwKPiA+ICsg
KiB0YWJsZSB3aXRoIFBBR0VfU0laRSA9IDQgS2IKPiA+ICsgKgo+ID4gKyAqIE9uZSBMMCBwYWdl
IHRhYmxlIGNhbiBjb3ZlciAyIE1CCj4gPiArICogKDUxMiBlbnRyaWVzIG9mIG9uZSBwYWdlIHRh
YmxlICogUEFHRV9TSVpFKS4KPiA+ICsgKgo+ID4gKyAqIEl0IG1pZ2h0IGJlIG5lZWRlZCBvbmUg
bW9yZSBwYWdlIHRhYmxlIGluIGNhc2Ugd2hlbgo+ID4gKyAqIFhlbiBsb2FkIGFkZHJlc3MgaXNu
J3QgMiBNQiBhbGlnbmVkLgo+ID4gKyAqCj4gPiArICovCj4gPiArI2RlZmluZSBQR1RCTF9JTklU
SUFMX0NPVU5UwqDCoMKgwqAgKDUpCj4gCj4gT24geDg2IHdlIGhhdmUgYSBDT05GSUdfUEFHSU5H
X0xFVkVMUyBjb25zdGFudC4gSWYgeW91IGhhZCBzb21ldGhpbmcKPiBsaWtlIHRoaXMgYXMgd2Vs
bCwgdGhpcyA1IHdvdWxkIGJldHRlciBtYXRjaCB0aGUgY29tbWVudCBhcwo+ICgoQ09ORklHX1BB
R0lOR19MRVZFTFMgLSAxKSArIDEpLCBhdm9pZGluZyB0byBtYWtlIHNwYWNlIGZvciB0aGUgdHdv
Cj4gbGV2ZWxzIHlvdSB3b24ndCBuZWVkIGFzIGxvbmcgYXMgb25seSBTdjM5IGlzIHJlYWxseSBt
ZWFudCB0byBiZQo+IHVzZWQuClRoYW5rcyBmb3Igbm90ZS4gSSdsbCB1c2UgQ09ORklHX1BBR0lO
R19MRVZFTFMgdG9vLgoKPiAKPiA+ICsjZGVmaW5lIFBHVEJMX0VOVFJZX0FNT1VOVMKgIChQQUdF
X1NJWkUgLyBzaXplb2YocHRlX3QpKQo+ID4gKwo+ID4gK3B0ZV90IF9fc2VjdGlvbigiLmJzcy5w
YWdlX2FsaWduZWQiKSBfX2FsaWduZWQoUEFHRV9TSVpFKQo+ID4gK3N0YWdlMV9wZ3RibF9yb290
W1BHVEJMX0VOVFJZX0FNT1VOVF07Cj4gPiArCj4gPiArcHRlX3QgX19zZWN0aW9uKCIuYnNzLnBh
Z2VfYWxpZ25lZCIpIF9fYWxpZ25lZChQQUdFX1NJWkUpCj4gPiArc3RhZ2UxX3BndGJsX25vbnJv
b3RbUEdUQkxfSU5JVElBTF9DT1VOVCAqIFBHVEJMX0VOVFJZX0FNT1VOVF07Cj4gPiAKPiA+ICsj
ZGVmaW5lCj4gPiBIQU5ETEVfUEdUQkwoY3Vycl9sdmxfbnVtKcKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqAKPiA+IFwKPiA+ICvCoMKgwqAgaW5kZXggPSBwdF9pbmRleChjdXJyX2x2bF9udW0sCj4g
PiBwYWdlX2FkZHIpO8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqAgXAo+ID4gK8KgwqDCoCBpZiAoIHB0ZV9pc192YWxpZChwZ3RibFtpbmRl
eF0pCj4gPiApwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiArwqDCoMKgCj4gPiB7wqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgCj4gPiDCoMKgwqDCoCBcCj4gPiArwqDCoMKgwqDCoMKgwqAgLyogRmluZCBMeyAw
LTMgfSB0YWJsZQo+ID4gKi/CoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgXAo+ID4gK8KgwqDC
oMKgwqDCoMKgIHBndGJsID0gKHB0ZV90Cj4gPiAqKXB0ZV90b19wYWRkcihwZ3RibFtpbmRleF0p
O8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgXAo+ID4gK8Kg
wqDCoAo+ID4gfcKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoAo+ID4gwqDCoMKgwqAgXAo+ID4gK8KgwqDCoAo+
ID4gZWxzZcKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoAo+ID4gwqDCoMKgwqAgXAo+ID4gK8KgwqDCoAo+ID4ge8KgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoAo+ID4gwqDCoMKgwqAgXAo+ID4gK8KgwqDCoMKgwqDCoMKgIC8qIEFsbG9j
YXRlIG5ldyBMezAtM30gcGFnZSB0YWJsZQo+ID4gKi/CoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiArwqDCoMKgwqDCoMKg
wqAgaWYgKCBtbXVfZGVzYy0+cGd0YmxfY291bnQgPT0gUEdUQkxfSU5JVElBTF9DT1VOVAo+ID4g
KcKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIFwKPiA+ICvCoMKgwqDCoMKgwqDCoAo+
ID4ge8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoAo+ID4gXAo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqAg
ZWFybHlfcHJpbnRrKCIoWEVOKSBObyBpbml0aWFsIHRhYmxlCj4gPiBhdmFpbGFibGVcbiIpO8Kg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAvKiBw
YW5pYygpLCBCVUcoKSBvciBBU1NFUlQoKSBhcmVuJ3QgcmVhZHkgbm93Lgo+ID4gKi/CoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoAo+ID4gZGll
KCk7wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIFwKPiA+ICvCoMKgwqDCoMKgwqDCoAo+ID4gfcKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoAo+ID4gXAo+
ID4gK8KgwqDCoMKgwqDCoMKgIG1tdV9kZXNjLQo+ID4gPnBndGJsX2NvdW50Kys7wqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqAgXAo+ID4gK8KgwqDCoMKgwqDCoMKgIHBndGJsW2luZGV4XSA9
IHBhZGRyX3RvX3B0ZSgodW5zaWduZWQgbG9uZyltbXVfZGVzYy0KPiA+ID5uZXh0X3BndGJsLMKg
wqDCoCBcCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoAo+ID4gUFRFX1ZBTElEKTvCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBcCj4gPiArwqDCoMKgwqDC
oMKgwqAgcGd0YmwgPSBtbXVfZGVzYy0KPiA+ID5uZXh0X3BndGJsO8KgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqAgXAo+ID4gK8KgwqDCoMKgwqDCoMKgIG1tdV9kZXNjLT5uZXh0X3BndGJsICs9Cj4gPiBQR1RC
TF9FTlRSWV9BTU9VTlQ7wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgIFwKPiA+ICvCoMKgwqAgfQo+ID4gKwo+ID4gK3N0YXRpYyB2b2lkIF9faW5pdCBzZXR1
cF9pbml0aWFsX21hcHBpbmcoc3RydWN0IG1tdV9kZXNjCj4gPiAqbW11X2Rlc2MsCj4gPiArwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqAgdW5zaWduZWQgbG9uZyBtYXBfc3RhcnQsCj4gPiArwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqAgdW5zaWduZWQgbG9uZyBtYXBfZW5kLAo+ID4gK8KgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgIHVuc2lnbmVkIGxvbmcgcGFfc3RhcnQsCj4gPiArwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqAgdW5zaWduZWQgbG9uZwo+ID4gcGVybWlzc2lvbnMpCj4gCj4gV291bGRuJ3QgdGhl
IGxhc3Qgb25lIG1vcmUgc2Vuc2libHkgYmUgInVuc2lnbmVkIGludCI/Ckl0IHdvdWxkLiBUaGFu
a3MsIEknbGwgdXBkYXRlIHRoZSBjb2RlLgo+IAo+ID4gK3sKPiA+ICvCoMKgwqAgdW5zaWduZWQg
aW50IGluZGV4Owo+ID4gK8KgwqDCoCBwdGVfdCAqcGd0Ymw7Cj4gPiArwqDCoMKgIHVuc2lnbmVk
IGxvbmcgcGFnZV9hZGRyOwo+ID4gK8KgwqDCoCBwdGVfdCBwdGVfdG9fYmVfd3JpdHRlbjsKPiA+
ICvCoMKgwqAgdW5zaWduZWQgbG9uZyBwYWRkcjsKPiA+ICvCoMKgwqAgdW5zaWduZWQgbG9uZyB0
bXBfcGVybWlzc2lvbnM7Cj4gPiArCj4gPiArwqDCoMKgIGlmICggKHVuc2lnbmVkIGxvbmcpX3N0
YXJ0ICUgWEVOX1BUX0xFVkVMX1NJWkUoMCkgKQo+ID4gK8KgwqDCoCB7Cj4gPiArwqDCoMKgwqDC
oMKgwqAgZWFybHlfcHJpbnRrKCIoWEVOKSBYZW4gc2hvdWxkIGJlIGxvYWRlZCBhdCA0awo+ID4g
Ym91bmRhcnlcbiIpOwo+ID4gK8KgwqDCoMKgwqDCoMKgIGRpZSgpOwo+ID4gK8KgwqDCoCB9Cj4g
PiArCj4gPiArwqDCoMKgIGlmICggbWFwX3N0YXJ0ICYgflhFTl9QVF9MRVZFTF9NQVBfTUFTSygw
KSB8fAo+ID4gK8KgwqDCoMKgwqDCoMKgwqAgcGFfc3RhcnQgJiB+WEVOX1BUX0xFVkVMX01BUF9N
QVNLKDApICkKPiA+ICvCoMKgwqAgewo+ID4gK8KgwqDCoMKgwqDCoMKgIGVhcmx5X3ByaW50aygi
KFhFTikgbWFwIGFuZCBwYSBzdGFydCBhZGRyZXNzZXMgc2hvdWxkIGJlCj4gPiBhbGlnbmVkXG4i
KTsKPiA+ICvCoMKgwqDCoMKgwqDCoCAvKiBwYW5pYygpLCBCVUcoKSBvciBBU1NFUlQoKSBhcmVu
J3QgcmVhZHkgbm93LiAqLwo+ID4gK8KgwqDCoMKgwqDCoMKgIGRpZSgpOwo+ID4gK8KgwqDCoCB9
Cj4gPiArCj4gPiArwqDCoMKgIHBhZ2VfYWRkciA9IG1hcF9zdGFydDsKPiA+ICvCoMKgwqAgd2hp
bGUgKCBwYWdlX2FkZHIgPCBtYXBfZW5kICkKPiA+ICvCoMKgwqAgewo+ID4gK8KgwqDCoMKgwqDC
oMKgIHBndGJsID0gbW11X2Rlc2MtPnBndGJsX2Jhc2U7Cj4gPiArCj4gPiArwqDCoMKgwqDCoMKg
wqAgc3dpdGNoIChtbXVfZGVzYy0+bnVtX2xldmVscykKPiAKPiBOaXQ6IFN0eWxlIChtaXNzaW5n
IGJsYW5rcykuClRoYW5rcy4gSWxsIHVwZGF0ZS4KPiAKPiA+ICvCoMKgwqDCoMKgwqDCoCB7Cj4g
PiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBjYXNlIDQ6IC8qIExldmVsIDMgKi8KPiAKPiBOaXQ6
IEluZGVudGF0aW9uIG9mIGNhc2UgbGFiZWxzIG1hdGNoZXMgdGhhdCBvZiB0aGUgb3BlbmluZyBi
cmFjZSBpbgo+IG91cgo+IHN0eWxlLgpTdXJlLiBNaXNzZWQgdGhhdC4KPiAKPiA+ICvCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgSEFORExFX1BHVEJMKDMpOwo+ID4gK8KgwqDCoMKgwqDC
oMKgwqDCoMKgwqAgY2FzZSAzOiAvKiBMZXZlbCAyICovCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgIEhBTkRMRV9QR1RCTCgyKTsKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
IGNhc2UgMjogLyogTGV2ZWwgMSAqLwo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oCBIQU5ETEVfUEdUQkwoMSk7Cj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBjYXNlIDE6IC8q
IExldmVsIDAgKi8KPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgaW5kZXggPSBw
dF9pbmRleCgwLCBwYWdlX2FkZHIpOwo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oCBwYWRkciA9IChwYWdlX2FkZHIgLSBtYXBfc3RhcnQpICsgcGFfc3RhcnQ7Cj4gPiArCj4gPiAr
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHRtcF9wZXJtaXNzaW9ucyA9IHBlcm1pc3Np
b25zOwo+ID4gKwo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBpZiAoIGlzX2tl
cm5lbF90ZXh0KExJTktfVE9fTE9BRChwYWdlX2FkZHIpKSB8fAo+ID4gK8KgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgaXNfa2VybmVsX2luaXR0ZXh0KExJTktfVE9fTE9B
RChwYWdlX2FkZHIpKSApCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqAgdG1wX3Blcm1pc3Npb25zID0KPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgIFBURV9FWEVDVVRBQkxFIHwgUFRFX1JFQURBQkxFIHwgUFRFX1ZBTElE
Owo+ID4gKwo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBpZiAoIGlzX2tlcm5l
bF9yb2RhdGEoTElOS19UT19MT0FEKHBhZ2VfYWRkcikpICkKPiA+ICvCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoCB0bXBfcGVybWlzc2lvbnMgPSBQVEVfUkVBREFCTEUgfCBQ
VEVfVkFMSUQ7Cj4gPiArCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHB0ZV90
b19iZV93cml0dGVuID0gcGFkZHJfdG9fcHRlKHBhZGRyLAo+ID4gdG1wX3Blcm1pc3Npb25zKTsK
PiA+ICsKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgaWYgKCAhcHRlX2lzX3Zh
bGlkKHBndGJsW2luZGV4XSkgKQo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgIHBndGJsW2luZGV4XSA9IHB0ZV90b19iZV93cml0dGVuOwo+ID4gK8KgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoCBlbHNlCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIHsKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAvKgo+ID4g
K8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgICogZ2V0IGFuIGFkcmVzc2Vz
IG9mIGN1cnJlbnQgcHRlIGFuZCB0aGF0IG9uZQo+ID4gdG8KPiA+ICvCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAqIGJlIHdyaXR0ZW4gd2l0aG91dCBwZXJtaXNzaW9uIGZs
YWdzCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgKi8KPiA+ICvC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCB1bnNpZ25lZCBsb25nIGN1cnJf
cHRlID0KPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
IHBndGJsW2luZGV4XS5wdGUgJiB+KCgxIDw8IFBURV9QUE5fU0hJRlQpCj4gPiAtIDEpOwo+ID4g
Kwo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHB0ZV90b19iZV93
cml0dGVuLnB0ZSAmPSB+KCgxIDw8Cj4gPiBQVEVfUFBOX1NISUZUKSAtIDEpOwo+IAo+IElmIHlv
dSBtZWFuIHRvIG9ubHkgY29tcGFyZSBhZGRyZXNzZXMsIHdoeSBkb24ndCB5b3UgdXNlCj4gcHRl
X3RvX3BhZGRyKCk/ClllcywgaXQgd2lsbCBiZSBiZXR0ZXIgdG8gdXNlIHB0ZV90b19wYWRkcigp
Lgo+IFF1ZXN0aW9uIHRob3VnaCBpcyB3aGV0aGVyIGl0J3MgY29ycmVjdCB0byBpZ25vcmUgcGVy
bWlzc2lvbgo+IGRpZmZlcmVuZXMuCj4gSSdkIGV4cGVjdCB5b3Ugb25seSB3YW50IHRvIG1hc2sg
b2ZmIFBURV9BQ0NFU1NFRCBhbmQgUFRFX0RJUlRZLgpJbml0aWFsbHkgSSB3b3VsZCBsaWtlIHRv
IGRvIHJvdWdoIGNoZWNrIGJ1dCBwcm9iYWJseSB5b3UgYXJlIHJpZ2h0IGFuZAppdCB3aWxsIGJl
IGJldHRlciB0byBtYXNrIG9mZiBvbmx5IFBURV9BQ0NFU1NFRCBhbmQgUFRFX0RJUlRZLgoKPiAK
PiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBpZiAoIGN1cnJfcHRl
ICE9IHB0ZV90b19iZV93cml0dGVuLnB0ZSApCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqAgewo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqAgZWFybHlfcHJpbnRrKCJQVEUgdGhhdCBpcyBpbnRlbmRlZCB0bwo+ID4gd3Jp
dGUgaXNuJ3QgdGhlIgo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgInNhbWUgdGhhdCB0aGUgb25jZSBhcmUK
PiA+IG92ZXJ3cml0aW5nXG4iKTsKPiAKPiBOaXQ6IE9uZS1vZmYgaW5kZW50YXRpb24uIEFzIHRv
IHRoZSBtZXNzYWdlIHRleHQgLSBJIHRha2UgaXQgdGhhdCdzCj4gdGVtcG9yYXJ5IG9ubHkgYW55
d2F5LCBhbmQgaGVuY2UgdGhlcmUncyBsaXR0bGUgcG9pbnQgdGhpbmtpbmcgYWJvdXQKPiBpbXBy
b3ZpbmcgaXQ/ClByb2JhYmx5IGl0IHdvdWxkIGJlIG1vcmUgY2xlYXI6CiJQVEUgb3ZlcnJpZGRl
biBoYXMgb2NjdXJyZWQiCj4gCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoCAvKiBwYW5pYygpLCA8YXNtL2J1Zy5oPiBhcmVuJ3QgcmVhZHkgbm93Lgo+
ID4gKi8KPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
IGRpZSgpOwo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIH0KPiA+
ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgfQo+ID4gK8KgwqDCoMKgwqDCoMKgIH0K
PiA+ICsKPiA+ICvCoMKgwqDCoMKgwqDCoCAvKiBQb2ludCB0byBuZXh0IHBhZ2UgKi8KPiA+ICvC
oMKgwqDCoMKgwqDCoCBwYWdlX2FkZHIgKz0gWEVOX1BUX0xFVkVMX1NJWkUoMCk7Cj4gCj4gU2Vl
aW5nIHRoaXMgYXMgd2VsbCBhcyB0aGUgbG9vcCBoZWFkaW5nIC0gbWF5YmUgbW9yZSBzdWl0YWJs
ZSBhcyBhCj4gZm9yKDs7KSBsb29wPwpJIGFtIG5vdCBzdXJlIHRoYXQgSSB1bmRlcnN0YW5kIHRo
ZSBiZW5lZml0cyBvZiBjaGFuZ2luZyAid2hpbGUgKApwYWdlX2FkZHIgPCBtYXBfZW5kICkiIHRv
ICJmb3IoOzspIi4KQ291bGQgeW91IHBsZWFzZSBleHBsYWluIG1lIHdoYXQgdGhlIGJlbmVmaXRz
IGFyZT8KPiAKPiA+ICvCoMKgwqAgfQo+ID4gK30KPiAKPiBTaW5jZSBIQU5ETEVfUEdUQkwoKSBp
cyBzdHJpY3RseSBmb3IgdXNlIGFib3ZlIG9ubHksIEkgdGhpbmsgeW91J2QKPiBiZXR0ZXIKPiAj
dW5kZWYgaXQgaGVyZS4KPiAKPiA+ICtzdGF0aWMgdm9pZCBfX2luaXQgY2FsY19wZ3RibF9sdmxz
X251bShzdHJ1Y3TCoCBtbXVfZGVzYyAqbW11X2Rlc2MpCj4gPiArewo+ID4gK8KgwqDCoCB1bnNp
Z25lZCBsb25nIHNhdHBfbW9kZSA9IFJWX1NUQUdFMV9NT0RFOwo+ID4gKwo+ID4gK8KgwqDCoCAv
KiBOdW1iZXIgb2YgcGFnZSB0YWJsZSBsZXZlbHMgKi8KPiA+ICvCoMKgwqAgc3dpdGNoIChzYXRw
X21vZGUpCj4gPiArwqDCoMKgIHsKPiA+ICvCoMKgwqDCoMKgwqDCoCBjYXNlIFNBVFBfTU9ERV9T
VjMyOgo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqAgbW11X2Rlc2MtPm51bV9sZXZlbHMgPSAy
Owo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqAgYnJlYWs7Cj4gPiArwqDCoMKgwqDCoMKgwqAg
Y2FzZSBTQVRQX01PREVfU1YzOToKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIG1tdV9kZXNj
LT5udW1fbGV2ZWxzID0gMzsKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGJyZWFrOwo+ID4g
K8KgwqDCoMKgwqDCoMKgIGNhc2UgU0FUUF9NT0RFX1NWNDg6Cj4gPiArwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoCBtbXVfZGVzYy0+bnVtX2xldmVscyA9IDQ7Cj4gPiArwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoCBicmVhazsKPiA+ICvCoMKgwqDCoMKgwqDCoCBkZWZhdWx0Ogo+ID4gK8KgwqDCoMKgwqDC
oMKgwqDCoMKgwqAgZWFybHlfcHJpbnRrKCIoWEVOKSBVbnN1cHBvcnRlZCBTQVRQX01PREVcbiIp
Owo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqAgZGllKCk7Cj4gPiArwqDCoMKgIH0KPiA+ICt9
Cj4gPiArCj4gPiArc3RhdGljIGJvb2wgX19pbml0IGNoZWNrX3BndGJsX21vZGVfc3VwcG9ydChz
dHJ1Y3QgbW11X2Rlc2MKPiA+ICptbXVfZGVzYywKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoCB1bnNpZ25lZCBsb25nCj4gPiBsb2FkX3N0YXJ0LAo+ID4gK8KgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgIHVuc2lnbmVkIGxvbmcKPiA+IHNhdHBfbW9kZSkKPiA+ICt7Cj4gPiAr
wqDCoMKgIGJvb2wgaXNfbW9kZV9zdXBwb3J0ZWQgPSBmYWxzZTsKPiA+ICvCoMKgwqAgdW5zaWdu
ZWQgaW50IGluZGV4Owo+ID4gK8KgwqDCoCB1bnNpZ25lZCBpbnQgcGFnZV90YWJsZV9sZXZlbCA9
IChtbXVfZGVzYy0+bnVtX2xldmVscyAtIDEpOwo+ID4gK8KgwqDCoCB1bnNpZ25lZCBsZXZlbF9t
YXBfbWFzayA9Cj4gPiBYRU5fUFRfTEVWRUxfTUFQX01BU0socGFnZV90YWJsZV9sZXZlbCk7Cj4g
PiArCj4gPiArwqDCoMKgIHVuc2lnbmVkIGxvbmcgYWxpZ25lZF9sb2FkX3N0YXJ0ID0gbG9hZF9z
dGFydCAmCj4gPiBsZXZlbF9tYXBfbWFzazsKPiA+ICvCoMKgwqAgdW5zaWduZWQgbG9uZyBhbGln
bmVkX3BhZ2Vfc2l6ZSA9Cj4gPiBYRU5fUFRfTEVWRUxfU0laRShwYWdlX3RhYmxlX2xldmVsKTsK
PiA+ICvCoMKgwqAgdW5zaWduZWQgbG9uZyB4ZW5fc2l6ZSA9ICh1bnNpZ25lZCBsb25nKShfZW5k
IC0gc3RhcnQpOwo+ID4gKwo+ID4gK8KgwqDCoCBpZiAoIChsb2FkX3N0YXJ0ICsgeGVuX3NpemUp
ID4gKGFsaWduZWRfbG9hZF9zdGFydCArCj4gPiBhbGlnbmVkX3BhZ2Vfc2l6ZSkgKQo+ID4gK8Kg
wqDCoCB7Cj4gPiArwqDCoMKgwqDCoMKgwqAgZWFybHlfcHJpbnRrKCJwbGVhc2UgcGxhY2UgWGVu
IHRvIGJlIGluIHJhbmdlIG9mIFBBR0VfU0laRQo+ID4gIgo+ID4gK8KgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgIndoZXJlIFBBR0VfU0laRSBpcyBYRU5fUFRfTEVWRUxf
U0laRSgge0wzIHwKPiA+IEwyIHwgTDF9ICkgIgo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqAgImRlcGVuZGluZyBvbiBleHBlY3RlZCBTQVRQX01PREUgXG4iCj4g
PiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAiWEVOX1BUX0xFVkVM
X1NJWkUgaXMgZGVmaW5lZCBpbgo+ID4gPGFzbS9wYWdlLmg+XG4iKTsKPiA+ICvCoMKgwqDCoMKg
wqDCoCBkaWUoKTsKPiA+ICvCoMKgwqAgfQo+ID4gKwo+ID4gK8KgwqDCoCBpbmRleCA9IHB0X2lu
ZGV4KHBhZ2VfdGFibGVfbGV2ZWwsIGFsaWduZWRfbG9hZF9zdGFydCk7Cj4gPiArwqDCoMKgIHN0
YWdlMV9wZ3RibF9yb290W2luZGV4XSA9IHBhZGRyX3RvX3B0ZShhbGlnbmVkX2xvYWRfc3RhcnQs
Cj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgUFRFX0xFQUZfREVGQVVMVCB8Cj4g
PiBQVEVfRVhFQ1VUQUJMRSk7Cj4gPiArCj4gPiArwqDCoMKgIGFzbSB2b2xhdGlsZSgic2ZlbmNl
LnZtYSIpOwo+IAo+IE5pdCAoaGVyZSBhbmQgc2V2ZXJhbCB0aW1lcyBhZ2FpbiBiZWxvdyk6IFN0
eWxlIChtaXNzaW5nIHRocmVlCj4gYmxhbmtzLCBhcwo+ICJhc20iIGlzIGEga2V5d29yZCkuClRo
YW5rcy4gSSdsbCBhZGQgdGhlbQo+IAo+ID4gK8KgwqDCoCBjc3Jfd3JpdGUoQ1NSX1NBVFAsCj4g
PiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgKCh1bnNpZ25lZCBsb25nKXN0YWdlMV9wZ3Ri
bF9yb290ID4+IFBBR0VfU0hJRlQpIHwKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBz
YXRwX21vZGUgPDwgU0FUUF9NT0RFX1NISUZUKTsKPiA+ICsKPiA+ICvCoMKgwqAgaWYgKCAoY3Ny
X3JlYWQoQ1NSX1NBVFApID4+IFNBVFBfTU9ERV9TSElGVCkgPT0gc2F0cF9tb2RlICkKPiA+ICvC
oMKgwqDCoMKgwqDCoCBpc19tb2RlX3N1cHBvcnRlZCA9IHRydWU7Cj4gPiArCj4gPiArwqDCoMKg
IC8qIENsZWFuIE1NVSByb290IHBhZ2UgdGFibGUgYW5kIGRpc2FibGUgTU1VICovCj4gPiArwqDC
oMKgIHN0YWdlMV9wZ3RibF9yb290W2luZGV4XSA9IHBhZGRyX3RvX3B0ZSgweDAsIDB4MCk7Cj4g
PiArCj4gPiArwqDCoMKgIGNzcl93cml0ZShDU1JfU0FUUCwgMCk7Cj4gPiArwqDCoMKgIGFzbSB2
b2xhdGlsZSgic2ZlbmNlLnZtYSIpOwo+IAo+IEkgZ3Vlc3Mgd2hhdCB5b3UgZG8gaW4gdGhpcyBm
dW5jdGlvbiBjb3VsZCBkbyB3aXRoIHNvbWUgbW9yZQo+IGNvbW1lbnRzLgo+IExvb2tzIGxpa2Ug
eW91J3JlIGJyaWVmbHkgZW5hYmxpbmcgdGhlIE1NVSB0byBjaGVjayB0aGF0IHdoYXQgeW91Cj4g
d3JvdGUKPiB0byBTQVRQIHlvdSBjYW4gYWxzbyByZWFkIGJhY2suIChJc24ndCB0aGVyZSBhIHJl
Z2lzdGVyIHJlcG9ydGluZwo+IHdoZXRoZXIgdGhlIGZlYXR1cmUgaXMgYXZhaWxhYmxlPykKSSBz
dXBwb3NlZCB0aGF0IGl0IGhhcyB0byBiZSBidXQgSSBjb3VsZG4ndCBmaW5kIGEgcmVnaXN0ZXIg
aW4gZG9jcy4KCj4gCj4gSWYgc28sIEkgdGhpbmsgeW91IGNhbm5vdCBjbGVhciB0aGUgc3RhZ2Ux
X3BndGJsX3Jvb3RbXSBzbG90IGJlZm9yZQo+IHlvdSd2ZSBkaXNhYmxlZCB0aGUgTU1VIGFnYWlu
Lgo+IAo+IChBcyBhIG1pbm9yIGFzcGVjdCwgSSdkIGxpa2UgdG8gZW5jb3VyYWdlIHlvdSB0byB3
cml0ZSBwbGFpbiB6ZXJvCj4ganVzdAo+IGFzIDAsIG5vdCBhcyAweDAsIHVubGVzcyB1c2VkIGlu
IGNvbnRleHRzIHdoZXJlIG90aGVyIGhleCBudW1iZXJzCj4gbmVhcmJ5Cj4gbWFrZSB0aGlzIGRl
c2lyYWJsZS4pCllvdSBhcmUgcmlnaHQgaXQgc2hvdWxkIGJlIGNsZWFyZWQgYWZ0ZXIgY3NyX3dy
aXRlKENTUl9TQVRQLCAwKQo+IAo+ID4gK8KgwqDCoCByZXR1cm4gaXNfbW9kZV9zdXBwb3J0ZWQ7
Cj4gPiArfQo+ID4gKwo+ID4gKy8qCj4gPiArICogc2V0dXBfaW5pdGlhbF9wYWdldGFibGVzOgo+
ID4gKyAqCj4gPiArICogQnVpbGQgdGhlIHBhZ2UgdGFibGVzIGZvciBYZW4gdGhhdCBtYXAgdGhl
IGZvbGxvd2luZzoKPiA+ICsgKsKgIDEuIENhbGN1bGF0ZSBwYWdlIHRhYmxlJ3MgbGV2ZWwgbnVt
YmVycy4KPiA+ICsgKsKgIDIuIEluaXQgbW11IGRlc2NyaXB0aW9uIHN0cnVjdHVyZS4KPiA+ICsg
KsKgIDMuIENoZWNrIHRoYXQgbGlua2VyIGFkZHJlc3NlcyByYW5nZSBkb2Vzbid0IG92ZXJsYXAK
PiA+ICsgKsKgwqDCoMKgIHdpdGggbG9hZCBhZGRyZXNzZXMgcmFuZ2UKPiA+ICsgKsKgIDQuIE1h
cCBhbGwgbGlua2VyIGFkZHJlc3NlcyBhbmQgbG9hZCBhZGRyZXNzZXMgKCBpdCBzaG91bGRuJ3QK
PiA+ICsgKsKgwqDCoMKgIGJlIDE6MSBtYXBwZWQgYW5kIHdpbGwgYmUgMToxIG1hcHBlZCBvbmx5
IGluIGNhc2UgaWYKPiA+ICsgKsKgwqDCoMKgIGxpbmtlciBhZGRyZXNzIGlzIGVxdWFsIHRvIGxv
YWQgYWRkcmVzcyApIHdpdGgKPiA+ICsgKsKgwqDCoMKgIFJXIHBlcm1pc3Npb25zIGJ5IGRlZmF1
bHQuCj4gPiArICrCoCA1LiBTZXR1cCBwcm9wZXIgUFRFIHBlcm1pc3Npb25zIGZvciBlYWNoIHNl
Y3Rpb24uCj4gPiArICovCj4gPiArdm9pZCBfX2luaXQgc2V0dXBfaW5pdGlhbF9wYWdldGFibGVz
KHZvaWQpCj4gPiArewo+ID4gK8KgwqDCoCBzdHJ1Y3QgbW11X2Rlc2MgbW11X2Rlc2MgPSB7IDAs
IDAsIE5VTEwsIDAgfTsKPiAKPiBKdXN0IHt9IG91Z2h0IHRvIGRvIGFzIGluaXRpYWxpemVyLCBi
dXQgaWYgeW91IHdhbnQgdG8gc3BlbGwgdGhpbmdzCj4gb3V0LAo+IHRoZW4gcGxlYXNlIHVzZSAw
IC8gTlVMTCBjb25zaXN0ZW50bHkgZm9yIGludGVncmFsIC8gcG9pbnRlciB0eXBlCj4gZmllbGRz
LgpUaGFua3MuCgo+IAo+ID4gK8KgwqDCoCAvKgo+ID4gK8KgwqDCoMKgICogQWNjZXNzIHRvIF97
c3RhcmQsIGVuZCB9IGlzIGFsd2F5cyBQQy1yZWxhdGl2ZQo+IAo+IEkgZ3Vlc3MgeW91IG1lYW4g
X3N0YXJ0LiBGb3IganVzdCBhIGxlYWRpbmcgdW5kZXJzY29yZSBJIGFsc28gZG9uJ3QKPiB0aGlu
awo+IHVzaW5nIHRoaXMgYnJhY2VkIGZvcm0gaXMgdXNlZnVsLgpPSyB0aGVuIEknbGwgdXBkYXRl
IHRoZSBjb21tZW50IHcvbyB1c2FnZSBvZiBicmFjZWQgZm9ybS4KPiAKPiA+ICvCoMKgwqDCoCAq
IHRoZXJlYnkgd2hlbiBhY2Nlc3MgdGhlbSB3ZSB3aWxsIGdldCBsb2FkIGFkcmVzc2VzCj4gPiAr
wqDCoMKgwqAgKiBvZiBzdGFydCBhbmQgZW5kIG9mIFhlbgo+ID4gK8KgwqDCoMKgICogVG8gZ2V0
IGxpbmtlciBhZGRyZXNzZXMgTE9BRF9UT19MSU5LKCkgaXMgcmVxdWlyZWQKPiA+ICvCoMKgwqDC
oCAqIHRvIHVzZQo+ID4gK8KgwqDCoMKgICovCj4gPiArwqDCoMKgIHVuc2lnbmVkIGxvbmcgbG9h
ZF9zdGFydMKgwqDCoCA9ICh1bnNpZ25lZCBsb25nKV9zdGFydDsKPiA+ICvCoMKgwqAgdW5zaWdu
ZWQgbG9uZyBsb2FkX2VuZMKgwqDCoMKgwqAgPSAodW5zaWduZWQgbG9uZylfZW5kOwo+ID4gK8Kg
wqDCoCB1bnNpZ25lZCBsb25nIGxpbmtlcl9zdGFydMKgID0gTE9BRF9UT19MSU5LKGxvYWRfc3Rh
cnQpOwo+ID4gK8KgwqDCoCB1bnNpZ25lZCBsb25nIGxpbmtlcl9lbmTCoMKgwqAgPSBMT0FEX1RP
X0xJTksobG9hZF9lbmQpOwo+ID4gKwo+ID4gK8KgwqDCoCBpZiAoIChsaW5rZXJfc3RhcnQgIT0g
bG9hZF9zdGFydCkgJiYKPiA+ICvCoMKgwqDCoMKgwqDCoMKgIChsaW5rZXJfc3RhcnQgPD0gbG9h
ZF9lbmQpICYmIChsb2FkX3N0YXJ0IDw9IGxpbmtlcl9lbmQpCj4gPiApIHsKPiAKPiBOaXQ6IFN0
eWxlIChicmFjZSBwbGFjZW1lbnQpLgpUaGFua3MsCgo+IAo+ID4gK8KgwqDCoMKgwqDCoMKgIGVh
cmx5X3ByaW50aygiKFhFTikgbGlua2VyIGFuZCBsb2FkIGFkZHJlc3MgcmFuZ2VzCj4gPiBvdmVy
bGFwXG4iKTsKPiA+ICvCoMKgwqDCoMKgwqDCoCBkaWUoKTsKPiA+ICvCoMKgwqAgfQo+ID4gKwo+
ID4gK8KgwqDCoCBjYWxjX3BndGJsX2x2bHNfbnVtKCZtbXVfZGVzYyk7Cj4gPiArCj4gPiArwqDC
oMKgIGlmICggIWNoZWNrX3BndGJsX21vZGVfc3VwcG9ydCgmbW11X2Rlc2MsIGxvYWRfc3RhcnQs
Cj4gPiBSVl9TVEFHRTFfTU9ERSkgKQo+ID4gK8KgwqDCoCB7Cj4gPiArwqDCoMKgwqDCoMKgwqAg
ZWFybHlfcHJpbnRrKCJyZXF1ZXN0ZWQgTU1VIG1vZGUgaXNuJ3Qgc3VwcG9ydGVkIGJ5IENQVVxu
Igo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgIlBsZWFzZSBj
aG9vc2UgZGlmZmVyZW50IGluCj4gPiA8YXNtL2NvbmZpZy5oPlxuIik7Cj4gPiArwqDCoMKgwqDC
oMKgwqAgZGllKCk7Cj4gPiArwqDCoMKgIH0KPiA+ICsKPiA+ICvCoMKgwqAgbW11X2Rlc2MucGd0
YmxfYmFzZSA9IHN0YWdlMV9wZ3RibF9yb290Owo+ID4gK8KgwqDCoCBtbXVfZGVzYy5uZXh0X3Bn
dGJsID0gc3RhZ2UxX3BndGJsX25vbnJvb3Q7Cj4gPiArCj4gPiArwqDCoMKgIHNldHVwX2luaXRp
YWxfbWFwcGluZygmbW11X2Rlc2MsCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqAgbGlua2VyX3N0YXJ0LAo+ID4gK8KgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGxpbmtlcl9lbmQsCj4gPiArwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgbG9hZF9zdGFydCwK
PiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBQ
VEVfTEVBRl9ERUZBVUxUKTsKPiA+ICt9Cj4gPiArCj4gPiArdm9pZCBfX2luaXQgbm9pbmxpbmUg
ZW5hYmxlX21tdSgpCj4gPiArewo+ID4gK8KgwqDCoCAvKgo+ID4gK8KgwqDCoMKgICogQ2FsY3Vs
YXRlIGEgbGlua2VyIHRpbWUgYWRkcmVzcyBvZiB0aGUgbW11X2lzX2VuYWJsZWQKPiA+ICvCoMKg
wqDCoCAqIGxhYmVsIGFuZCB1cGRhdGUgQ1NSX1NUVkVDIHdpdGggaXQuCj4gPiArwqDCoMKgwqAg
KiBNTVUgaXMgY29uZmlndXJlZCBpbiBhIHdheSB3aGVyZSBsaW5rZXIgYWRkcmVzc2VzIGFyZQo+
ID4gbWFwcGVkCj4gPiArwqDCoMKgwqAgKiBvbiBsb2FkIGFkZHJlc3NlcyBzbyBpbiBhIGNhc2Ug
d2hlbiBsaW5rZXIgYWRkcmVzc2VzIGFyZQo+ID4gbm90IGVxdWFsCj4gPiArwqDCoMKgwqAgKiB0
byBsb2FkIGFkZHJlc3NlcywgYWZ0ZXIgTU1VIGlzIGVuYWJsZWQsIGl0IHdpbGwgY2F1c2UKPiA+
ICvCoMKgwqDCoCAqIGFuIGV4Y2VwdGlvbiBhbmQganVtcCB0byBsaW5rZXIgdGltZSBhZGRyZXNz
ZXMuCj4gPiArwqDCoMKgwqAgKiBPdGhlcndpc2UgaWYgbG9hZCBhZGRyZXNzZXMgYXJlIGVxdWFs
IHRvIGxpbmtlciBhZGRyZXNzZXMKPiA+IHRoZSBjb2RlCj4gPiArwqDCoMKgwqAgKiBhZnRlciBt
bXVfaXNfZW5hYmxlZCBsYWJlbCB3aWxsIGJlIGV4ZWN1dGVkIHdpdGhvdXQKPiA+IGV4Y2VwdGlv
bi4KPiA+ICvCoMKgwqDCoCAqLwo+ID4gK8KgwqDCoCBjc3Jfd3JpdGUoQ1NSX1NUVkVDLCBMT0FE
X1RPX0xJTksoKHVuc2lnbmVkCj4gPiBsb25nKSYmbW11X2lzX2VuYWJsZWQpKTsKPiA+ICsKPiA+
ICvCoMKgwqAgLyogRW5zdXJlIHBhZ2UgdGFibGUgd3JpdGVzIHByZWNlZGUgbG9hZGluZyB0aGUg
U0FUUCAqLwo+ID4gK8KgwqDCoCBhc20gdm9sYXRpbGUoInNmZW5jZS52bWEiKTsKPiA+ICsKPiA+
ICvCoMKgwqAgLyogRW5hYmxlIHRoZSBNTVUgYW5kIGxvYWQgdGhlIG5ldyBwYWdldGFibGUgZm9y
IFhlbiAqLwo+ID4gK8KgwqDCoCBjc3Jfd3JpdGUoQ1NSX1NBVFAsCj4gPiArwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqAgKCh1bnNpZ25lZCBsb25nKXN0YWdlMV9wZ3RibF9yb290ID4+IFBBR0Vf
U0hJRlQpIHwKPiAKPiBQbGVhc2UgdHJ5IHRvIGF2b2lkIG9wZW4tY29kaW5nIG9mIGV4aXN0aW5n
IGNvbnN0cnVjdHM6IEhlcmUgeW91IG1lYW4KPiBlaXRoZXIgUEZOX0RPV04oKSBvciBwYWRkcl90
b19wZm4oKSAoeW91IHNlZSwgd2UgaGF2ZSBldmVuIHR3bykuIChBcwo+IEkKPiBub3RpY2UgSSBk
aWQgb3Zlcmxvb2sgYXQgbGVhc3Qgc2ltaWxhciBlYXJsaWVyIGluc3RhbmNlLikKU3VyZS4gVGhh
bmtzLgo+IAo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIFJWX1NUQUdFMV9NT0RFIDw8
IFNBVFBfTU9ERV9TSElGVCk7Cj4gPiArCj4gPiArwqDCoMKgIGFzbSB2b2xhdGlsZSgiLmFsaWdu
IDIiKTsKPiA+ICvCoMKgwqDCoMKgIG1tdV9pc19lbmFibGVkOgo+IAo+IEhvdyBpbiB0aGUgd29y
> ld is one to spot this label? Yes, it shouldn't be entirely
> unindented. But it also should be indented less than the surrounding
> code (with the exception of switch() statement case labels, where a
> non-case label might be indented the same as case ones). Our rule of
> thumb is to indent such labels by a single space.
Thanks. It looks like I misunderstood the previous comment about label
indentation.
> 
> > +    /*
> > +     * Stack should be re-inited as:
> > +     * 1. Right now an address of the stack is relative to load time
> > +     *    addresses what will cause an issue in case of load start
> > +     *    address isn't equal to linker start address.
> > +     * 2. Addresses in stack are all load time relative which can be an
> > +     *    issue in case when load start address isn't equal to linker
> > +     *    start address.
> > +     */
> > +    asm volatile ("mv sp, %0" : : "r"((unsigned long)cpu0_boot_stack + STACK_SIZE));
> 
> Nit: Style (overly long line).
> 
> What's worse - I don't think it is permitted to alter sp in the middle
> of a function. The compiler may maintain local variables on the stack
> which don't correspond to any programmer specified ones, including
> pointer ones which point into the stack frame. This is specifically why
> both x86 and Arm have switch_stack_and_jump() macros.
But the macro (from Arm) looks equivalent to the code mentioned above:

#define switch_stack_and_jump(stack, fn) do {                           \
    asm volatile ("mov sp,%0; b " STR(fn) : : "r" (stack), "X" (fn) : "memory" ); \
    unreachable();                                                      \
} while ( false )

Here is part of the disassembled enable_mmu():

ffffffffc004aedc:       18079073                csrw    satp,a5
ffffffffc004aee0:       00016797                auipc   a5,0x16
ffffffffc004aee4:       12078793                addi    a5,a5,288
ffffffffc004aee8:       813e                    mv      sp,a5
ffffffffc004af00:       0f4000ef                jal     ra,ffffffffc004aff4 <cont_after_mmu_is_enabled>
...


> 
> > +    /*
> > +     * We can't return to the caller because the stack was reseted
> > +     * and it may have stash some variable on the stack.
> > +     * Jump to a brand new function as the stack was reseted
> > +     */
> 
> Nit: Style (indentation).
Thanks.
> 
> > +    cont_after_mmu_is_enabled();
> > +}
> > +
> > diff --git a/xen/arch/riscv/riscv64/head.S
> > b/xen/arch/riscv/riscv64/head.S
> > index 8887f0cbd4..b3309d902c 100644
> > --- a/xen/arch/riscv/riscv64/head.S
> > +++ b/xen/arch/riscv/riscv64/head.S
> > @@ -1,4 +1,5 @@
> >  #include <asm/asm.h>
> > +#include <asm/asm-offsets.h>
> >  #include <asm/riscv_encoding.h>
> >  
> >          .section .text.header, "ax", %progbits
> > @@ -32,3 +33,4 @@ ENTRY(start)
> >          add     sp, sp, t0
> >  
> >          tail    start_xen
> > +
> 
> ???
Shouldn't there be one empty line at the end of a file?

> 
> > --- a/xen/arch/riscv/xen.lds.S
> > +++ b/xen/arch/riscv/xen.lds.S
> > @@ -136,6 +136,7 @@ SECTIONS
> >      . = ALIGN(POINTER_ALIGN);
> >      __init_end = .;
> >  
> > +    . = ALIGN(PAGE_SIZE);
> >      .bss : {                    /* BSS */
> >          __bss_start = .;
> >          *(.bss.stack_aligned)
> 
> Why do you need this? You properly use __aligned(PAGE_SIZE) for the
> page tables you define, and PAGE_SIZE wouldn't be enough here anyway
> if STACK_SIZE > PAGE_SIZE (as .bss.stack_aligned comes first). The
> only time you'd need such an ALIGN() is if the following label
> (__bss_start in this case) needed to be aligned at a certain
> boundary. (I'm a little puzzled though that __bss_start isn't
> [immediately] preceded by ". = ALIGN(POINTER_ALIGN);" - didn't .bss
> clearing rely on such alignment?)
ALIGN(PAGE_SIZE) isn't needed anymore. I used it to get a 4k-aligned
physical address for the PTE when I mapped each section separately
(that was the case in the previous version of the MMU patch series).

Regarding ". = ALIGN(POINTER_ALIGN);", I would say it is enough to have
__bss_end aligned (which was done) to be sure that we can clear
__SIZEOF_POINTER__ bytes in each iteration of the .bss clearing loop,
without worrying that the size of the .bss section may not be divisible
by __SIZEOF_POINTER__.

~ Oleksii



From xen-devel-bounces@lists.xenproject.org Fri Apr 21 16:35:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 16:35:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524647.815723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pptjC-0002Od-UM; Fri, 21 Apr 2023 16:35:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524647.815723; Fri, 21 Apr 2023 16:35:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pptjC-0002OW-Qf; Fri, 21 Apr 2023 16:35:18 +0000
Received: by outflank-mailman (input) for mailman id 524647;
 Fri, 21 Apr 2023 16:35:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FHzF=AM=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pptjC-0002OK-69
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 16:35:18 +0000
Received: from mail-ed1-x52e.google.com (mail-ed1-x52e.google.com
 [2a00:1450:4864:20::52e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7f176ce8-e062-11ed-b220-6b7b168915f2;
 Fri, 21 Apr 2023 18:35:16 +0200 (CEST)
Received: by mail-ed1-x52e.google.com with SMTP id
 4fb4d7f45d1cf-507bdc5ca2aso3226894a12.3
 for <xen-devel@lists.xenproject.org>; Fri, 21 Apr 2023 09:35:16 -0700 (PDT)
Received: from [127.0.0.1] (dynamic-077-011-063-073.77.11.pool.telefonica.de.
 [77.11.63.73]) by smtp.gmail.com with ESMTPSA id
 k2-20020a056402048200b00501c96564b5sm1928829edv.93.2023.04.21.09.35.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 21 Apr 2023 09:35:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f176ce8-e062-11ed-b220-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682094916; x=1684686916;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=M+rZJB4BR8f4VGV6ARLrNeDvBJuGNsvX2XWADjdtv9A=;
        b=JMrLQF24sUfKFSINcYxpUg+75W5bVxl3yiQY57kAUuXvrMiFyKL6XxfFu/5/yyh6TJ
         hKH1rnn/OiRFECC5LMs0RXlRXUm0U6zVVEiGRqgk7tV/xLDPc7IFf7Z2qIBVxJQItyAY
         JprUTCfN8p6JxOFwREyOFpH9QUsN++xY1PUH+YWhiYayx+e7imsZ44eyi90XNYRi6trk
         jApMRyfywwEbUU99qPvU/ZAcu62ZxN2iYxHmyGXPncP6BwHditiq5LRgSdFBZ1uOn2CL
         heqBbLK8LxEvuvwQyeaamCHNR1GVig5SSeVVMdEFKWsE6UDdw/qttNEnN28BdXjaXlPN
         R9Pw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682094916; x=1684686916;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=M+rZJB4BR8f4VGV6ARLrNeDvBJuGNsvX2XWADjdtv9A=;
        b=SL79ljg2m4ftVoUzCKkNLWxushu/gvs6Lqp9tsHDzu3odU7NhvKYkyEBuVp/cz/y0g
         2QJ3ADjjfKhEAxxZbAV6nJ3JTzVmEvff9rzR7v7t4nuRALcFwjA6RdzeTeMgsqklgo95
         iM9Wi11wQ/yiGHRoztGYfFJjZ33YA6g1/7O3pEPQBUZY8nC99tUlgQEQNO7k0nagQMUH
         pk15+CX5OYVkRVr8+Bbzg0Dj/NFyORBpFvGo7ak7iv3roMEfrac68XO6wo6vc5CQ1Ugn
         6UFa3j0+LXY2Hg3zTVLA2n7C7+Q0KaCsY1NyeeooCJZwuGY5O6kWScFgGCkQv+roA+yQ
         d6DA==
X-Gm-Message-State: AAQBX9e8ezV4887tH7USkQ6zF32L0BeqpCQwsIoPziZqH0hiAE7oNGpe
	NpwWnLvK9KLTr7b/vt5Kil8=
X-Google-Smtp-Source: AKy350b/xgJKoEFQP69MIuQi3JGwn0UUyeUhY+hiy9C4pQjzASYm/Wq4WbTDH2cg1uR94ZhLOkADAg==
X-Received: by 2002:a05:6402:18c:b0:505:47a:7ae8 with SMTP id r12-20020a056402018c00b00505047a7ae8mr5197719edv.4.1682094915648;
        Fri, 21 Apr 2023 09:35:15 -0700 (PDT)
Date: Fri, 21 Apr 2023 16:35:10 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
CC: qemu-devel@nongnu.org, Richard Henderson <richard.henderson@linaro.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>, Eduardo Habkost <eduardo@habkost.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Chuck Zmudzinski <brchuckz@aol.com>, Aurelien Jarno <aurelien@aurel32.net>,
 =?ISO-8859-1?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
 =?ISO-8859-1?Q?Philippe_Mathieu-Daud=E9?= <philmd@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v4 0/7] Resolve TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <20230421033757-mutt-send-email-mst@kernel.org>
References: <20230403074124.3925-1-shentey@gmail.com> <20230421033757-mutt-send-email-mst@kernel.org>
Message-ID: <B3B2C264-CFEF-4A8D-AEBA-194038A6E757@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: 8bit



On 21 April 2023 07:38:10 UTC, "Michael S. Tsirkin" <mst@redhat.com> wrote:
>On Mon, Apr 03, 2023 at 09:41:17AM +0200, Bernhard Beschow wrote:
>> There is currently a dedicated PIIX3 device model for use under Xen. By
>> reusing existing PCI API during initialization this device model can be
>> eliminated and the plain PIIX3 device model can be used instead.
>>
>> Resolving TYPE_PIIX3_XEN_DEVICE results in less code while also making
>> Xen agnostic towards the precise south bridge being used in the PC
>> machine. The latter might become particularly interesting once PIIX4
>> becomes usable in the PC machine, avoiding the "Frankenstein" use of
>> PIIX4_ACPI in PIIX3.
>
>xen stuff so I assume that tree?

Anthony?

This series is now fully reviewed. Once it lands in master I'd rebase the
PIIX consolidation series onto it, which is still under discussion.

Best regards,
Bernhard

>
>> Testing done:
>> - `make check`
>> - Run `xl create` with the following config:
>>     name = "Manjaro"
>>     type = 'hvm'
>>     memory = 1536
>>     apic = 1
>>     usb = 1
>>     disk = [ "file:manjaro-kde-21.2.6-220416-linux515.iso,hdc:cdrom,r" ]
>>     device_model_override = "/usr/bin/qemu-system-x86_64"
>>     vga = "stdvga"
>>     sdl = 1
>> - `qemu-system-x86_64 -M pc -m 2G -cpu host -accel kvm \
>>     -cdrom manjaro-kde-21.2.6-220416-linux515.iso`
>>
>> v4:
>> - Add patch fixing latent memory leak in pci_bus_irqs() (Anthony)
>>
>> v3:
>> - Rebase onto master
>>
>> v2:
>> - xen_piix3_set_irq() is already generic. Just rename it. (Chuck)
>>
>> Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
>>
>> Bernhard Beschow (7):
>>   include/hw/xen/xen: Rename xen_piix3_set_irq() to xen_intx_set_irq()
>>   hw/pci/pci.c: Don't leak PCIBus::irq_count[] in pci_bus_irqs()
>>   hw/isa/piix3: Reuse piix3_realize() in piix3_xen_realize()
>>   hw/isa/piix3: Wire up Xen PCI IRQ handling outside of PIIX3
>>   hw/isa/piix3: Avoid Xen-specific variant of piix3_write_config()
>>   hw/isa/piix3: Resolve redundant k->config_write assignments
>>   hw/isa/piix3: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>>
>>  include/hw/southbridge/piix.h |  1 -
>>  include/hw/xen/xen.h          |  2 +-
>>  hw/i386/pc_piix.c             | 36 +++++++++++++++++++--
>>  hw/i386/xen/xen-hvm.c         |  2 +-
>>  hw/isa/piix3.c                | 60 +----------------------------------
>>  hw/pci/pci.c                  |  2 ++
>>  stubs/xen-hw-stub.c           |  2 +-
>>  7 files changed, 39 insertions(+), 66 deletions(-)
>>
>> -- 
>> 2.40.0
>>
>
>


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 16:36:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 16:36:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524651.815733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pptkA-0002tr-7G; Fri, 21 Apr 2023 16:36:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524651.815733; Fri, 21 Apr 2023 16:36:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pptkA-0002tk-4D; Fri, 21 Apr 2023 16:36:18 +0000
Received: by outflank-mailman (input) for mailman id 524651;
 Fri, 21 Apr 2023 16:36:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W9w3=AM=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pptk8-0002tM-RU
 for xen-devel@lists.xenproject.org; Fri, 21 Apr 2023 16:36:17 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a19eab0b-e062-11ed-8611-37d641c3527e;
 Fri, 21 Apr 2023 18:36:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a19eab0b-e062-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1682094973;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=AMzOR95vay5pg0WrO+uJUGs+TLmecn0h01ht4y+YBW4=;
	b=o/TkSlykoJbZmwaCartOT04EgN5C+kODI7gZukI9DsIFy2Dgj3RH1WuWvwMvvYXkyiHIQk
	oxshpYHuRgHC8o/JYqoEGQ0sMgPeCIrVwZGLDjjKa0oD47tebLMPcc0kwKmrYNmhibL69p
	CuPI7WXBesGu+IQZaD33O9Vh8aqxpxbvEKmf8Inm6RIaaj7YSGPtkcXqQaDLBztSRLFiTD
	jAl7P2Gr2fqg2KdBtxFZJUc3m5xmd9qgJZ7OLjPxBmIrUzXSJ1uWPWJLXceKreW9vWpPl+
	SmWK7o2t1K1ok1OKIOZ0yX0u1SEvxXShW0sf3Kg6kxQp0c93gSDpceB1rVFESQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1682094973;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=AMzOR95vay5pg0WrO+uJUGs+TLmecn0h01ht4y+YBW4=;
	b=44dX9VJX+TMEmkgFv0MNfddpnQu4SMuQ4SBx4WdfdKvH1kS+zbYESmYQqIpwSH+YghznlV
	s1xiQKd/rkmpEKCg==
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: Sean Christopherson <seanjc@google.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>, Paolo Bonzini
 <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom Lendacky
 <thomas.lendacky@amd.com>, Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <87sfcu2wup.ffs@tglx>
References: <87r0sh4m7a.ffs@tglx>
 <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de> <87a5z443g2.ffs@tglx>
 <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
 <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
 <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com> <871qkf3qek.ffs@tglx>
 <26d385da-2ede-5d73-2959-84c8f7d89e03@citrix.com> <87y1mm3iqz.ffs@tglx>
 <ZEFRhXua6Jxvit1R@google.com> <87v8hq35sk.ffs@tglx>
 <56e59a4d-a47f-4bfe-7db5-5f921062ad69@molgen.mpg.de> <87sfcu2wup.ffs@tglx>
Date: Fri, 21 Apr 2023 18:36:12 +0200
Message-ID: <87bkjh2nwj.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Thu, Apr 20 2023 at 21:10, Thomas Gleixner wrote:
> On Thu, Apr 20 2023 at 18:47, Paul Menzel wrote:
>> Am 20.04.23 um 17:57 schrieb Thomas Gleixner:
>> I quickly applied it on top of your branch, but I am getting:
>
> As I said it was untested. I was traveling and did not have access to a
> machine to even build it completely. Fixed up and tested version below.

I've updated

  git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git hotplug

for your convenience.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 18:33:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 18:33:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524661.815749 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppvZ9-0006Nr-Mk; Fri, 21 Apr 2023 18:33:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524661.815749; Fri, 21 Apr 2023 18:33:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppvZ9-0006Nk-J7; Fri, 21 Apr 2023 18:33:03 +0000
Received: by outflank-mailman (input) for mailman id 524661;
 Fri, 21 Apr 2023 18:33:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppvZ8-0006Na-FK; Fri, 21 Apr 2023 18:33:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppvZ8-0006VZ-2o; Fri, 21 Apr 2023 18:33:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppvZ7-000135-Rr; Fri, 21 Apr 2023 18:33:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppvZ7-00084q-RW; Fri, 21 Apr 2023 18:33:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=t/IVewr0trq9mN4c9mPh/PFonS1vMfb7elQk2s7Nh80=; b=LNDPpX0625QpqucKr7ZpYqquxy
	nznWHNN6nlm0VSSe8dSGA8/Mx2cE/ips/Q1o0ZRFGYP9awMzxIq01TuNzsWtbisi4uMpqcdHM4dEb
	8W/KB/0GmwZuQ4bv4GfQzXuVf1G8BcetjE+bBEyLggCYbAFuMeDP8QbPq9W8AIvpkp+M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180365-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180365: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ed2ff315db7e800dd7718b1d1320ea8024d4e8b2
X-Osstest-Versions-That:
    ovmf=3163f34a42a5dacaf63499e69bf0fefdc409d89e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 18:33:01 +0000

flight 180365 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180365/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ed2ff315db7e800dd7718b1d1320ea8024d4e8b2
baseline version:
 ovmf                 3163f34a42a5dacaf63499e69bf0fefdc409d89e

Last test of basis   180346  2023-04-20 23:40:41 Z    0 days
Testing same since   180365  2023-04-21 16:12:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3163f34a42..ed2ff315db  ed2ff315db7e800dd7718b1d1320ea8024d4e8b2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 18:51:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 18:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524667.815759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppvqn-0000M8-7h; Fri, 21 Apr 2023 18:51:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524667.815759; Fri, 21 Apr 2023 18:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppvqn-0000M1-41; Fri, 21 Apr 2023 18:51:17 +0000
Received: by outflank-mailman (input) for mailman id 524667;
 Fri, 21 Apr 2023 18:51:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppvql-0000Lr-Up; Fri, 21 Apr 2023 18:51:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppvql-0006o0-Ff; Fri, 21 Apr 2023 18:51:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppvql-0001W4-4y; Fri, 21 Apr 2023 18:51:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppvql-0003AW-4U; Fri, 21 Apr 2023 18:51:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FXFjNndvfhjPTCrZ7SB1Mlfa6jp0Mmhaa5cGzEr4z58=; b=J3JqsH6sc+HQUhAuQTf7iUYv19
	NxnrQbzJPSowvJkmfPKRDlrNhF6LsMVRnzaTecyx2PXp8UrTzKw4BTSOR9t8LsQK6vwujmCyMIDl5
	skTi7W5r9DbzYQDv/O5agMGY6rkqou8BxAn/bCEzfSgcBpAl6/D7twxPub6jFkRctPAM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180364-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180364: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 18:51:15 +0000

flight 180364 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180364/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180302  2023-04-18 20:01:55 Z    2 days
Failing since        180314  2023-04-19 10:00:24 Z    2 days   16 attempts
Testing same since   180364  2023-04-21 15:01:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8676092a0f..c6c8c0808f  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 21:17:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 21:17:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524677.815772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppy7m-00068p-In; Fri, 21 Apr 2023 21:16:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524677.815772; Fri, 21 Apr 2023 21:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppy7m-00068i-G0; Fri, 21 Apr 2023 21:16:58 +0000
Received: by outflank-mailman (input) for mailman id 524677;
 Fri, 21 Apr 2023 21:16:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppy7l-00068Y-LY; Fri, 21 Apr 2023 21:16:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppy7l-0001oI-Dw; Fri, 21 Apr 2023 21:16:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppy7k-0005q4-RE; Fri, 21 Apr 2023 21:16:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppy7k-0000xJ-Qc; Fri, 21 Apr 2023 21:16:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=knNwcF6a/nHaNaVj4U262ppmzAGte+jy5cl/TxCEqxk=; b=dfVbiWpo+s259RdngTSTHLZYVR
	s6ZYwmIllgpG6sCz8FSMpIsiZxyBxAhC+p/exzAlN0hPohkV39AIBHH9TdLxNa+D5OTkkBVkefp2p
	06zP+CJ2YJKQygfWJdxe7vo2g3YsVn/IjOsOaf4TE+gkwOCIvHnrHipTMP1UrcI3cIgo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180352-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 180352: FAIL
X-Osstest-Failures:
    linux-5.4:build-armhf:<job status>:broken:regression
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-5.4:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:build-armhf:hosts-allocate:broken:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start.2:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=58f42ed1cd31238745bddd943c4f5849dc83a2ac
X-Osstest-Versions-That:
    linux=32bea3bac5ca484c6f7e302c8c96fc686f62e7b4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 21:16:56 +0000

flight 180352 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180352/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken  in 180333

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail pass in 180333
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 180333

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-vhd       1 build-check(1)           blocked in 180333 n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)          blocked in 180333 n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)           blocked in 180333 n/a
 test-armhf-armhf-xl           1 build-check(1)           blocked in 180333 n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)         blocked in 180333 n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)           blocked in 180333 n/a
 build-armhf-libvirt           1 build-check(1)           blocked in 180333 n/a
 test-armhf-armhf-libvirt      1 build-check(1)           blocked in 180333 n/a
 test-armhf-armhf-examine      1 build-check(1)           blocked in 180333 n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)           blocked in 180333 n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)           blocked in 180333 n/a
 build-armhf                2 hosts-allocate broken in 180333 starved in 180149
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180149
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180149
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180149
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180149
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180149
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180149
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180149
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180149
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180149
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180149
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail starved in 180149
 test-armhf-armhf-xl-credit1  19 guest-start.2           fail starved in 180149
 test-armhf-armhf-xl-rtds   18 guest-start/debian.repeat fail starved in 180149
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180149
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180149

version targeted for testing:
 linux                58f42ed1cd31238745bddd943c4f5849dc83a2ac
baseline version:
 linux                32bea3bac5ca484c6f7e302c8c96fc686f62e7b4

Last test of basis   180149  2023-04-05 09:43:16 Z   16 days
Testing same since   180333  2023-04-20 10:12:45 Z    1 day     2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Amir Goldstein <amir73il@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Ard Biesheuvel <ardb@kernel.org>
  Arseniy Krasnov <AVKrasnov@sberdevices.ru>
  Bang Li <libang.linuxer@gmail.com>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Basavaraj Natikar <Basavaraj.Natikar@amd.com>
  Biju Das <biju.das.jz@bp.renesas.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Bjørn Mork <bjorn@mork.no>
  Boris Brezillon <boris.brezillon@collabora.com>
  Brian Foster <bfoster@redhat.com>
  Chandan Babu R <chandan.babu@oracle.com>
  Chris Paterson (CIP) <chris.paterson2@renesas.com>
  Christoph Hellwig <hch@lst.de>
  Christophe Kerello <christophe.kerello@foss.st.com>
  Chuck Lever <chuck.lever@oracle.com>
  D Scott Phillips <scott@os.amperecomputing.com>
  Dai Ngo <dai.ngo@oracle.com>
  Darrick J. Wong <darrick.wong@oracle.com>
  Darrick J. Wong <djwong@kernel.org>
  David Howells <dhowells@redhat.com>
  David Lechner <david@lechnology.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Denis Plotnikov <den-plotnikov@yandex-team.ru>
  Dhruva Gole <d-gole@ti.com>
  Emanuele Ghidoli <emanuele.ghidoli@toradex.com>
  Enrico Sau <enrico.sau@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Eric Van Hensbergen <ericvh@kernel.org>
  Felix Fietkau <nbd@nbd.name>
  Florian Fainelli <f.fainelli@gmail.com>
  George Cherian <george.cherian@marvell.com>
  Grant Grundler <grundler@chromium.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Hsin-Yi Wang <hsinyi@chromium.org>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason Gunthorpe <jgg@nvidia.com>
  Jeff Layton <jlayton@kernel.org>
  Jeffrey Mitchell <jeffrey.mitchell@starlab.io>
  Jeremy Soller <jeremy@system76.com>
  Jiri Kosina <jkosina@suse.cz>
  Johan Hovold <johan+linaro@kernel.org>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  John Keeping <john@metanate.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kaixu Xia <kaixuxia@tencent.com>
  Kan Liang <kan.liang@linux.intel.com>
  Kees Cook <keescook@chromium.org>
  Kees Jan Koster <kjkoster@kjkoster.org>
  Kornel Dulęba <korneld@chromium.org>
  Lars-Peter Clausen <lars@metafoo.de>
  Lee Jones <lee.jones@linaro.org>
  Linus Walleij <linus.walleij@linaro.org>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Michal Kolar <mich.k@seznam.cz>
  Min Li <lm0963hack@gmail.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Miquel Raynal <miquel.raynal@bootlin.com> # v5.10, v4.19
  Mirsad Todorovac <mirsad.todorovac@alu.unizg.hr>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Nicolas Schichan <nschichan@freebox.fr>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
  Paolo Abeni <pabeni@redhat.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Pratyush Yadav <ptyadav@amazon.de>
  RD Babiera <rdbabiera@google.com>
  Richard Weinberger <richard@nod.at>
  Robbie Harwood <rharwood@redhat.com>
  Roman Gushchin <roman.gushchin@linux.dev>
  Rongwei Wang <rongwei.wang@linux.alibaba.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sachi King <nakato@nakato.io>
  Saravanan Vajravel <saravanan.vajravel@broadcom.com>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Sherry Sun <sherry.sun@nxp.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Shuangpeng Bai <sjb7183@psu.edu>
  Steve Clevenger <scclevenger@os.amperecomputing.com>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Suzuki K Poulose <suzuki.poulose@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tejun Heo <tj@kernel.org>
  Thierry Reding <thierry.reding@gmail.com>
  Thomas Glanzmann <thomas@glanzmann.de>
  Thomas Gleixner <tglx@linutronix.de>
  Tim Crawford <tcrawford@system76.com>
  Tom Saeger <tom.saeger@oracle.com>
  Tyler Hicks (Microsoft) <code@tyhicks.com>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Waiman Long <longman@redhat.com>
  William Breathitt Gray <william.gray@linaro.org>
  Wim Van Sebroeck <wim@linux-watchdog.org>
  Wolfram Sang <wsa@kernel.org>
  Xin Long <lucien.xin@gmail.com>
  Xu Biang <xubiang@hust.edu.cn>
  Yongchen Yin <wb-yyc939293@alibaba-inc.com>
  ZhaoLong Wang <wangzhaolong1@huawei.com>
  Zheng Wang <zyytlz.wz@163.com>
  Zheng Yejian <zhengyejian1@huawei.com>
  Zhihao Cheng <chengzhihao1@huawei.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

(No revision log; it would be 2835 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Apr 21 22:30:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Apr 2023 22:30:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524686.815782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppzGu-0005l6-1w; Fri, 21 Apr 2023 22:30:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524686.815782; Fri, 21 Apr 2023 22:30:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ppzGt-0005kz-VR; Fri, 21 Apr 2023 22:30:27 +0000
Received: by outflank-mailman (input) for mailman id 524686;
 Fri, 21 Apr 2023 22:30:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppzGs-0005ka-Hh; Fri, 21 Apr 2023 22:30:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppzGs-0003NP-DH; Fri, 21 Apr 2023 22:30:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ppzGr-0000r4-V5; Fri, 21 Apr 2023 22:30:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ppzGr-0004sy-R3; Fri, 21 Apr 2023 22:30:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=K7naathhtQ665aC5wEJ0xDma4SdOPdED2rRJJfUiOLY=; b=GLQQ+pJmbX4Qgu+UWcCXFIFMYa
	6B6q7me4Pqu7x75anL/Otip0Rh66a6MKGbE+3D3oE5gfuYJJM17zIYAPiQXfYMzKzt8JvA0f1Il/U
	/NdEtgIi1ypT+kcejkuK2oJ8eRuBPlVeAn6O2HxC7f0TsXELFK50RJlwyXTznaoH0nfc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180353-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180353: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=d063389f10707a6694fb98a69f403a63e4d22245
X-Osstest-Versions-That:
    libvirt=b486430db34d0db1dcbf39b0d9840d03cd57f615
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Apr 2023 22:30:25 +0000

flight 180353 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180353/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180308
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180308
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180308
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              d063389f10707a6694fb98a69f403a63e4d22245
baseline version:
 libvirt              b486430db34d0db1dcbf39b0d9840d03cd57f615

Last test of basis   180308  2023-04-19 04:18:48 Z    2 days
Failing since        180328  2023-04-20 04:18:54 Z    1 days    2 attempts
Testing same since   180353  2023-04-21 04:18:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ján Tomko <jtomko@redhat.com>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Borecki <pavel.borecki@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   b486430db3..d063389f10  d063389f10707a6694fb98a69f403a63e4d22245 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Apr 22 01:31:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Apr 2023 01:31:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524696.815791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pq25w-0005hj-Aj; Sat, 22 Apr 2023 01:31:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524696.815791; Sat, 22 Apr 2023 01:31:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pq25w-0005hc-78; Sat, 22 Apr 2023 01:31:20 +0000
Received: by outflank-mailman (input) for mailman id 524696;
 Sat, 22 Apr 2023 01:31:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pq25u-0005hS-Rm; Sat, 22 Apr 2023 01:31:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pq25u-0006Ut-Hc; Sat, 22 Apr 2023 01:31:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pq25t-0005Cw-Va; Sat, 22 Apr 2023 01:31:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pq25t-0007Dp-Uy; Sat, 22 Apr 2023 01:31:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lhfzxdW3vabCw5h8oUKDqlauSemujDEW55VV7ykrL/E=; b=aprXE7SunKDN5lseUlGpQTnta7
	5RIwPBaAx0h8+nWcCfcykq8eggIrNBDqSjrmuY+muLlHcN9/hb2q5hp5QDrVSWugzuP9Ledaoh09J
	S6f6BOXFQrcG0+8WhbqMfnZBW0zk1P3wwPriLWWrX8zmZg+2Un2bSWVgqfJPN9yl/VAw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180368-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180368: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=2c2cb235289642775a7c4e6eaeffa6d3828d279c
X-Osstest-Versions-That:
    ovmf=ed2ff315db7e800dd7718b1d1320ea8024d4e8b2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 Apr 2023 01:31:17 +0000

flight 180368 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180368/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 2c2cb235289642775a7c4e6eaeffa6d3828d279c
baseline version:
 ovmf                 ed2ff315db7e800dd7718b1d1320ea8024d4e8b2

Last test of basis   180365  2023-04-21 16:12:18 Z    0 days
Testing same since   180368  2023-04-21 19:10:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ed2ff315db..2c2cb23528  2c2cb235289642775a7c4e6eaeffa6d3828d279c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Apr 22 05:37:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Apr 2023 05:37:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524710.815802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pq5w0-0004sg-0J; Sat, 22 Apr 2023 05:37:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524710.815802; Sat, 22 Apr 2023 05:37:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pq5vz-0004sY-QR; Sat, 22 Apr 2023 05:37:19 +0000
Received: by outflank-mailman (input) for mailman id 524710;
 Sat, 22 Apr 2023 05:37:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pq5vy-0004sO-DB; Sat, 22 Apr 2023 05:37:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pq5vy-00051Q-7z; Sat, 22 Apr 2023 05:37:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pq5vx-0005ax-N1; Sat, 22 Apr 2023 05:37:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pq5vx-0005Ts-MX; Sat, 22 Apr 2023 05:37:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gSHz8QdhSYMEysujXb++MGj2y+LLaBM+2adLDtbD3d4=; b=0eFcKD05Uao9JA6IgqiQmegSnd
	tylurcwdj7JejYtR/ywJ5kCjgIgIldlXO7adCDjVtFv0Zcch6fmPjGGN/q1swoEOOfbex80bnHOdk
	lQRU+8IDe1OcdNiFXu9TDk3uakxIGGRqhuOGMtpeAGe2yTspC0xd15hJUjDk8F0yNIag=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180360-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180360: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2af3e53a4dc08657f1b46f97f04ff4a0ab3cad8d
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 Apr 2023 05:37:17 +0000

flight 180360 linux-linus real [real]
flight 180370 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180360/
http://logs.test-lab.xenproject.org/osstest/logs/180370/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2af3e53a4dc08657f1b46f97f04ff4a0ab3cad8d
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    5 days
Failing since        180281  2023-04-17 06:24:36 Z    4 days    9 attempts
Testing same since   180360  2023-04-21 10:14:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Hung <alex.hung@amd.com>
  Alexander Aring <aahringo@redhat.com>
  Alexander Potapenko <glider@google.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Alexei Starovoitov <ast@kernel.org>
  Andrea Righi <andrea.righi@canonical.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Asahi Lina <lina@asahilina.net>
  Axel Lin <axel.lin@ingics.com>
  Baokun Li <libaokun1@huawei.com>
  Baoqi Zhang <zhangbaoqi@loongson.cn>
  Bhavya Kapoor <b-kapoor@ti.com>
  Bjorn Andersson <andersson@kernel.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Chen Aotian <chenaotian2@163.com>
  Chong Qiao <qiaochong@loongson.cn>
  Chris Morgan <macromorgan@hotmail.com>
  Christoph Paasch <cpaasch@apple.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Conor Dooley <conor.dooley@microchip.com>
  Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
  Dan Johansen <strit@manjaro.org>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Miess <Daniel.Miess@amd.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Airlie <airlied@redhat.com>
  David Gow <davidgow@google.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Donald Hunter <donald.hunter@gmail.com>
  Dragan Simic <dragan.simic@gmail.com>
  Duoming Zhou <duoming@zju.edu.cn>
  Enze Li <lienze@kylinos.cn>
  Fabio Estevam <festevam@denx.de>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gwangun Jung <exsociety@gmail.com>
  Heiko Stuebner <heiko@sntech.de>
  Huacai Chen <chenhuacai@loongson.cn>
  Ido Schimmel <idosch@nvidia.com>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jani Nikula <jani.nikula@intel.com>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jianqun Xu <jay.xu@rock-chips.com>
  Johan Hovold <johan+linaro@kernel.org>
  John Ogness <john.ogness@linutronix.de>
  Jonathan Toppins <jtoppins@redhat.com>
  JR Gonzalez <jrg@scientiam.org>
  Jules Maselbas <jmaselbas@kalray.eu>
  Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
  Karol Herbst <kherbst@redhat.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Li Lanzhe <u202212060@hust.edu.cn>
  Liam R. Howlett <Liam.Howlett@oracle.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Luben Tuikov <luben.tuikov@amd.com>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Rodriguez Reboredo <yakoyoku@gmail.com>
  Mat Martineau <martineau@kernel.org>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Matthieu Baerts <matthieu.baerts@tessares.net>
  Mel Gorman <mgorman@suse.de>
  Mel Gorman <mgorman@techsingularity.net>
  Michael Chan <michael.chan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Miguel Ojeda <ojeda@kernel.org>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Naama Meir <naamax.meir@linux.intel.com>
  Naoya Horiguchi <naoya.horiguchi@nec.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Ondrej Mosnacek <omosnace@redhat.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Patrick Blass <patrickblass@mailbox.org>
  Pedro Tammela <pctammela@mojatatu.com>
  Peng Fan <peng.fan@nxp.com>
  Peng Zhang <zhangpeng.00@bytedance.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Xu <peterx@redhat.com>
  Petr Machata <petrm@nvidia.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qi Zheng <zhengqi.arch@bytedance.com>
  Qing Zhang <zhangqing@loongson.cn>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Ricardo Pardini <ricardo@pardini.net>
  Rob Herring <robh@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Scott Mayhew <smayhew@redhat.com>
  Sebastian Basierski <sebastianx.basierski@intel.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Seiji Nishikawa <snishika@redhat.com>
  SeongJae Park <sj@kernel.org>
  Shakeel Butt <shakeelb@google.com>
  Shawn Guo <shawnguo@kernel.org>
  Steev Klimaszewski <steev@kali.org> #Thinkpad X13s
  Steve Chou <steve_chou@pesi.com.tw>
  Sudeep Holla <sudeep.holla@arm.com>
  syzbot+a7c1ec5b1d71ceaa5186@syzkaller.appspotmail.com
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Bamelis <thomas@bamelis.dev>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vadim Fedorenko <vadfed@meta.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vitaly Prosyak <vitaly.prosyak@amd.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Will Deacon <will@kernel.org>
  Xuan Zhuo <xuanzhuo@linux.alibaba.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4512 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Apr 22 09:18:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Apr 2023 09:18:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524749.815812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pq9O8-00020Z-6v; Sat, 22 Apr 2023 09:18:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524749.815812; Sat, 22 Apr 2023 09:18:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pq9O8-00020S-3P; Sat, 22 Apr 2023 09:18:36 +0000
Received: by outflank-mailman (input) for mailman id 524749;
 Sat, 22 Apr 2023 09:18:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pq9O7-00020G-1F; Sat, 22 Apr 2023 09:18:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pq9O6-00024L-Oh; Sat, 22 Apr 2023 09:18:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pq9O6-00063F-7C; Sat, 22 Apr 2023 09:18:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pq9O6-0006WW-6f; Sat, 22 Apr 2023 09:18:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eNpNB2AyPP+WeKCafiMBfrFx1KfQIkF0PCPjSIfT5U8=; b=dbn+/PeLJrxZio05FSvNUznORe
	J7TI7ITjLUGG5jz9cQZ2qeIdqxqkQec4dwdjjbCjNTwKrQ/Ta0EZu9kssSOmeCVorQtJzf08Q5jcz
	R/mY4gouOkcyq+hX5dhJaXFkOYE3Q74sMEepj8kqH0JyQaEfjS30L+GuiaW1n4+ms5h8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180361-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180361: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:host-ping-check-xen:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=29c343a463450f1aa15609014fb87a0472403d70
X-Osstest-Versions-That:
    qemuu=2d82c32b2ceaca3dc3da5e36e10976f34bfcb598
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 Apr 2023 09:18:34 +0000

flight 180361 qemu-mainline real [real]
flight 180372 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180361/
http://logs.test-lab.xenproject.org/osstest/logs/180372/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-vhd      21 guest-start/debian.repeat fail REGR. vs. 180334

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw 10 host-ping-check-xen fail pass in 180372-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 180372 like 180334
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 180372 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180334
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180334
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180334
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180334
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180334
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180334
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180334
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                29c343a463450f1aa15609014fb87a0472403d70
baseline version:
 qemuu                2d82c32b2ceaca3dc3da5e36e10976f34bfcb598

Last test of basis   180334  2023-04-20 12:40:18 Z    1 days
Testing same since   180361  2023-04-21 11:11:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Axel Heider <axel.heider@hensoldt.net>
  Feng Jiang <jiangfeng@kylinos.cn>
  Guenter Roeck <linux@roeck-us.net>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stefan Weil <sw@weilnetz.de>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 411 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Apr 22 13:17:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Apr 2023 13:17:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524804.815831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqD6t-0000Yi-E6; Sat, 22 Apr 2023 13:17:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524804.815831; Sat, 22 Apr 2023 13:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqD6t-0000Yb-9w; Sat, 22 Apr 2023 13:17:03 +0000
Received: by outflank-mailman (input) for mailman id 524804;
 Sat, 22 Apr 2023 13:17:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqD6s-0000YR-JP; Sat, 22 Apr 2023 13:17:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqD6s-0007mp-BQ; Sat, 22 Apr 2023 13:17:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqD6r-00079Q-SP; Sat, 22 Apr 2023 13:17:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pqD6r-0002FF-S0; Sat, 22 Apr 2023 13:17:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Uyt0pI3kJWEDMxM4KDIFKOBFLJCk/2ibemp+OjI+Jjo=; b=An5iz9fgqhcx+r1MaHjZo4b3tg
	H8i0cdbwELCPEIWph7AKjVa/lHPZyx29y62Wes3HL/AOwlOMuoKYYCCmTcnwGqWIMtWD1XKToLzum
	I+8J4UMzRM9r302iRsslyeaknrGjadjEeRA8cfsSfwHscnOqe3yVadVvMDKou1n1Hq78=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180367-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180367: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start.2:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
X-Osstest-Versions-That:
    xen=8676092a0f16ca6ad188d3fb270784a2caecf542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 Apr 2023 13:17:01 +0000

flight 180367 xen-unstable real [real]
flight 180376 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180367/
http://logs.test-lab.xenproject.org/osstest/logs/180376/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2    22 guest-start.2       fail pass in 180376-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180349
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180349
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180349
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180349
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180349
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180349
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180349
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180349
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 180349
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180349
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180349
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180349
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180349
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
baseline version:
 xen                  8676092a0f16ca6ad188d3fb270784a2caecf542

Last test of basis   180349  2023-04-21 01:51:57 Z    1 days
Testing same since   180367  2023-04-21 19:07:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Kanavin <alex@linutronix.de>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Henry Wang <Henry.Wang@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8676092a0f..c6c8c0808f  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51 -> master


From xen-devel-bounces@lists.xenproject.org Sat Apr 22 15:38:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Apr 2023 15:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524845.815908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqFJC-0006sg-7f; Sat, 22 Apr 2023 15:37:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524845.815908; Sat, 22 Apr 2023 15:37:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqFJC-0006sZ-4x; Sat, 22 Apr 2023 15:37:54 +0000
Received: by outflank-mailman (input) for mailman id 524845;
 Sat, 22 Apr 2023 15:37:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqFJA-0006sP-21; Sat, 22 Apr 2023 15:37:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqFJ9-0002qE-Rk; Sat, 22 Apr 2023 15:37:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqFJ8-0005Gi-Mm; Sat, 22 Apr 2023 15:37:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pqFJ8-0006P0-MC; Sat, 22 Apr 2023 15:37:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P+Vw6DbapAT4iSd3pkEmZX1lBTI44ycd4kk9ok7pGvk=; b=x5Bxtbnz4G717JFr8D+ZeHGukE
	0QLOFZm2OYqEPR8nnhq5LA790vMDJA+QTFu8p8QAvCzB6OBd582Icf0Cxaj3eDtr1p/0mtltgQv/1
	8uB8wnHXv2Hn0AScGNkn+OuOoW4GXJgKarmBzCI8pce/BJ1m8c3zZDutbRFLkT1hS5Cc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180369-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 180369: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start.2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
X-Osstest-Versions-This:
    linux=58f42ed1cd31238745bddd943c4f5849dc83a2ac
X-Osstest-Versions-That:
    linux=32bea3bac5ca484c6f7e302c8c96fc686f62e7b4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 Apr 2023 15:37:50 +0000

flight 180369 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180369/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail in 180352 pass in 180369
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 180352 pass in 180369
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180352

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 180352 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 180352 never pass
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail in 180352 starved in 180149
 test-armhf-armhf-xl-credit1  19 guest-start.2 fail in 180352 starved in 180149
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180149
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180149
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180149
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180149
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180149
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180149
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180149
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180149
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180149
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180149
 test-armhf-armhf-xl-credit2  14 guest-start             fail starved in 180149
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180149
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180149
 test-armhf-armhf-xl-rtds   18 guest-start/debian.repeat fail starved in 180149

version targeted for testing:
 linux                58f42ed1cd31238745bddd943c4f5849dc83a2ac
baseline version:
 linux                32bea3bac5ca484c6f7e302c8c96fc686f62e7b4

Last test of basis   180149  2023-04-05 09:43:16 Z   17 days
Testing same since   180333  2023-04-20 10:12:45 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Amir Goldstein <amir73il@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Ard Biesheuvel <ardb@kernel.org>
  Arseniy Krasnov <AVKrasnov@sberdevices.ru>
  Bang Li <libang.linuxer@gmail.com>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Basavaraj Natikar <Basavaraj.Natikar@amd.com>
  Biju Das <biju.das.jz@bp.renesas.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Bjørn Mork <bjorn@mork.no>
  Boris Brezillon <boris.brezillon@collabora.com>
  Brian Foster <bfoster@redhat.com>
  Chandan Babu R <chandan.babu@oracle.com>
  Chris Paterson (CIP) <chris.paterson2@renesas.com>
  Christoph Hellwig <hch@lst.de>
  Christophe Kerello <christophe.kerello@foss.st.com>
  Chuck Lever <chuck.lever@oracle.com>
  D Scott Phillips <scott@os.amperecomputing.com>
  Dai Ngo <dai.ngo@oracle.com>
  Darrick J. Wong <darrick.wong@oracle.com>
  Darrick J. Wong <djwong@kernel.org>
  David Howells <dhowells@redhat.com>
  David Lechner <david@lechnology.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Denis Plotnikov <den-plotnikov@yandex-team.ru>
  Dhruva Gole <d-gole@ti.com>
  Emanuele Ghidoli <emanuele.ghidoli@toradex.com>
  Enrico Sau <enrico.sau@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Eric Van Hensbergen <ericvh@kernel.org>
  Felix Fietkau <nbd@nbd.name>
  Florian Fainelli <f.fainelli@gmail.com>
  George Cherian <george.cherian@marvell.com>
  Grant Grundler <grundler@chromium.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Hsin-Yi Wang <hsinyi@chromium.org>
  Hulk Robot <hulkrobot@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason Gunthorpe <jgg@nvidia.com>
  Jeff Layton <jlayton@kernel.org>
  Jeffrey Mitchell <jeffrey.mitchell@starlab.io>
  Jeremy Soller <jeremy@system76.com>
  Jiri Kosina <jkosina@suse.cz>
  Johan Hovold <johan+linaro@kernel.org>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  John Keeping <john@metanate.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kaixu Xia <kaixuxia@tencent.com>
  Kan Liang <kan.liang@linux.intel.com>
  Kees Cook <keescook@chromium.org>
  Kees Jan Koster <kjkoster@kjkoster.org>
  Kornel Dulęba <korneld@chromium.org>
  Lars-Peter Clausen <lars@metafoo.de>
  Lee Jones <lee.jones@linaro.org>
  Linus Walleij <linus.walleij@linaro.org>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark-PK Tsai <mark-pk.tsai@mediatek.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Michal Kolar <mich.k@seznam.cz>
  Min Li <lm0963hack@gmail.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Miquel Raynal <miquel.raynal@bootlin.com> # v5.10, v4.19
  Mirsad Todorovac <mirsad.todorovac@alu.unizg.hr>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Nicolas Schichan <nschichan@freebox.fr>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
  Paolo Abeni <pabeni@redhat.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Pratyush Yadav <ptyadav@amazon.de>
  RD Babiera <rdbabiera@google.com>
  Richard Weinberger <richard@nod.at>
  Robbie Harwood <rharwood@redhat.com>
  Roman Gushchin <roman.gushchin@linux.dev>
  Rongwei Wang <rongwei.wang@linux.alibaba.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sachi King <nakato@nakato.io>
  Saravanan Vajravel <saravanan.vajravel@broadcom.com>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Sherry Sun <sherry.sun@nxp.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Shuangpeng Bai <sjb7183@psu.edu>
  Steve Clevenger <scclevenger@os.amperecomputing.com>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Suzuki K Poulose <suzuki.poulose@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tejun Heo <tj@kernel.org>
  Thierry Reding <thierry.reding@gmail.com>
  Thomas Glanzmann <thomas@glanzmann.de>
  Thomas Gleixner <tglx@linutronix.de>
  Tim Crawford <tcrawford@system76.com>
  Tom Saeger <tom.saeger@oracle.com>
  Tyler Hicks (Microsoft) <code@tyhicks.com>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Waiman Long <longman@redhat.com>
  William Breathitt Gray <william.gray@linaro.org>
  Wim Van Sebroeck <wim@linux-watchdog.org>
  Wolfram Sang <wsa@kernel.org>
  Xin Long <lucien.xin@gmail.com>
  Xu Biang <xubiang@hust.edu.cn>
  Yongchen Yin <wb-yyc939293@alibaba-inc.com>
  ZhaoLong Wang <wangzhaolong1@huawei.com>
  Zheng Wang <zyytlz.wz@163.com>
  Zheng Yejian <zhengyejian1@huawei.com>
  Zhihao Cheng <chengzhihao1@huawei.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   32bea3bac5ca..58f42ed1cd31  58f42ed1cd31238745bddd943c4f5849dc83a2ac -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sat Apr 22 23:32:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Apr 2023 23:32:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524897.815918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqMiG-0003w6-8I; Sat, 22 Apr 2023 23:32:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524897.815918; Sat, 22 Apr 2023 23:32:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqMiG-0003vy-5I; Sat, 22 Apr 2023 23:32:16 +0000
Received: by outflank-mailman (input) for mailman id 524897;
 Sat, 22 Apr 2023 23:32:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqMiF-0003vp-Rw; Sat, 22 Apr 2023 23:32:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqMiF-00064S-JN; Sat, 22 Apr 2023 23:32:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqMiF-0004Nl-9Z; Sat, 22 Apr 2023 23:32:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pqMiF-00040q-98; Sat, 22 Apr 2023 23:32:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=//tLuuZNlWJAOaFfWw7EXOv1wguN37IuiLoGkhjaHT8=; b=TSzDfFz7PdEl7z/ljSqbnN727f
	yEh8XAN+z6gSdv8o/8An7GvHacQSGx+k58U2hxohHWzDBU9MV34g+xGw4+InZWojWKYjZZ/L9Dlib
	PBXM7ug+F1CiqIhajerRcwoJsrkmLZplw6KPiScfFfS1UJYzRoYXxzFnjrfzSvb528Fo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180371-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180371: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8e41e0a575664d26bb87e012c39435c4c3914ed9
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 Apr 2023 23:32:15 +0000

flight 180371 linux-linus real [real]
flight 180378 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180371/
http://logs.test-lab.xenproject.org/osstest/logs/180378/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180378-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8e41e0a575664d26bb87e012c39435c4c3914ed9
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    6 days
Failing since        180281  2023-04-17 06:24:36 Z    5 days   10 attempts
Testing same since   180371  2023-04-22 05:41:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Hung <alex.hung@amd.com>
  Alexander Aring <aahringo@redhat.com>
  Alexander Potapenko <glider@google.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Alexei Starovoitov <ast@kernel.org>
  Alexis Lothoré <alexis.lothore@bootlin.com>
  Andrea Righi <andrea.righi@canonical.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Chi <andy.chi@canonical.com>
  Arnd Bergmann <arnd@arndb.de>
  Asahi Lina <lina@asahilina.net>
  Axel Lin <axel.lin@ingics.com>
  Baokun Li <libaokun1@huawei.com>
  Baoqi Zhang <zhangbaoqi@loongson.cn>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Bhavya Kapoor <b-kapoor@ti.com>
  Bjorn Andersson <andersson@kernel.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Boris Burkov <boris@bur.io>
  Brian Masney <bmasney@redhat.com>
  Chancel Liu <chancel.liu@nxp.com>
  Chen Aotian <chenaotian2@163.com>
  Chong Qiao <qiaochong@loongson.cn>
  Chris Morgan <macromorgan@hotmail.com>
  Christoph Paasch <cpaasch@apple.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Conor Dooley <conor.dooley@microchip.com>
  Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
  Dan Carpenter <error27@gmail.com>
  Dan Johansen <strit@manjaro.org>
  Daniel Baluta <daniel.baluta@nxp.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Miess <Daniel.Miess@amd.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Airlie <airlied@redhat.com>
  David Gow <davidgow@google.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Donald Hunter <donald.hunter@gmail.com>
  Dragan Simic <dragan.simic@gmail.com>
  Duoming Zhou <duoming@zju.edu.cn>
  Ekaterina Orlova <vorobushek.ok@gmail.com>
  Enze Li <lienze@kylinos.cn>
  Fabio Estevam <festevam@denx.de>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gwangun Jung <exsociety@gmail.com>
  Heiko Stuebner <heiko@sntech.de>
  Huacai Chen <chenhuacai@loongson.cn>
  Ido Schimmel <idosch@nvidia.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jani Nikula <jani.nikula@intel.com>
  Jaroslav Kysela <perex@perex.cz>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jens Axboe <axboe@kernel.dk>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jianqun Xu <jay.xu@rock-chips.com>
  Johan Hovold <johan+linaro@kernel.org>
  John Ogness <john.ogness@linutronix.de>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Toppins <jtoppins@redhat.com>
  JR Gonzalez <jrg@scientiam.org>
  Jules Maselbas <jmaselbas@kalray.eu>
  Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
  Karol Herbst <kherbst@redhat.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Li Lanzhe <u202212060@hust.edu.cn>
  Liam R. Howlett <Liam.Howlett@oracle.com>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Long Wang <long.wang@analog.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Luben Tuikov <luben.tuikov@amd.com>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Rodriguez Reboredo <yakoyoku@gmail.com>
  Mat Martineau <martineau@kernel.org>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Matthieu Baerts <matthieu.baerts@tessares.net>
  Mel Gorman <mgorman@suse.de>
  Mel Gorman <mgorman@techsingularity.net>
  Michael Chan <michael.chan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Michal Simek <michal.simek@amd.com>
  Miguel Ojeda <ojeda@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Naama Meir <naamax.meir@linux.intel.com>
  Naoya Horiguchi <naoya.horiguchi@nec.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Ondrej Mosnacek <omosnace@redhat.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Patrick Blass <patrickblass@mailbox.org>
  Pedro Tammela <pctammela@mojatatu.com>
  Peng Fan <peng.fan@nxp.com>
  Peng Zhang <zhangpeng.00@bytedance.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Peter Xu <peterx@redhat.com>
  Petr Machata <petrm@nvidia.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qi Zheng <zhengqi.arch@bytedance.com>
  Qing Zhang <zhangqing@loongson.cn>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Ricardo Pardini <ricardo@pardini.net>
  Rob Herring <robh@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Scott Mayhew <smayhew@redhat.com>
  Sebastian Basierski <sebastianx.basierski@intel.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Seiji Nishikawa <snishika@redhat.com>
  SeongJae Park <sj@kernel.org>
  Shakeel Butt <shakeelb@google.com>
  Shawn Guo <shawnguo@kernel.org>
  Shengjiu Wang <shengjiu.wang@gmail.com>
  Steev Klimaszewski <steev@kali.org> #Thinkpad X13s
  Steve Chou <steve_chou@pesi.com.tw>
  Sudeep Holla <sudeep.holla@arm.com>
  syzbot+a7c1ec5b1d71ceaa5186@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Bamelis <thomas@bamelis.dev>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vadim Fedorenko <vadfed@meta.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vitaly Prosyak <vitaly.prosyak@amd.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Will Deacon <will@kernel.org>
  William Breathitt Gray <william.gray@linaro.org>
  Xu Yilun <yilun.xu@intel.com>
  Xuan Zhuo <xuanzhuo@linux.alibaba.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5108 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Apr 23 01:14:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Apr 2023 01:14:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524906.815928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqOIf-0004b8-WF; Sun, 23 Apr 2023 01:13:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524906.815928; Sun, 23 Apr 2023 01:13:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqOIf-0004b1-Rg; Sun, 23 Apr 2023 01:13:57 +0000
Received: by outflank-mailman (input) for mailman id 524906;
 Sun, 23 Apr 2023 01:13:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqOIe-0004ar-JY; Sun, 23 Apr 2023 01:13:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqOIe-0000LL-8O; Sun, 23 Apr 2023 01:13:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqOId-0007ZZ-QX; Sun, 23 Apr 2023 01:13:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pqOId-0004ZY-Po; Sun, 23 Apr 2023 01:13:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UXZLebtEIII5+/I/GJ41rvH+TGggRtdvupI9a8EDkfw=; b=EtHWy0lNszPqpnHx/vyOwCcMOt
	r5b5x/NiMzGi7+Ooo9W+2teSzNeZ1Kh75m0gW6D8HLcYV7OIHDrMktGPnHv9362WQSPrB/NI8Y9vx
	h5QRZlKJ1gZJpGq1rZqaJwFgOct8yVj98j6eD17AMKpm/W+bgc22nDf6D2zwyMX47nc4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180377-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180377: tolerable trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start.2:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-pvshim:debian-fixup:fail:heisenbug
    xen-unstable:test-amd64-amd64-pair:debian-fixup/dst_host:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
X-Osstest-Versions-That:
    xen=c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 Apr 2023 01:13:55 +0000

flight 180377 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180377/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2    22 guest-start.2    fail in 180367 pass in 180377
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 180367 pass in 180377
 test-amd64-i386-libvirt       7 xen-install                fail pass in 180367
 test-amd64-amd64-xl-pvshim   13 debian-fixup               fail pass in 180367
 test-amd64-amd64-pair        21 debian-fixup/dst_host      fail pass in 180367

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt     15 migrate-support-check fail in 180367 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 180367 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 180367 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 180367 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 180367 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 180367 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 180367 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 180367 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 180367 never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 180367 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 180367 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 180367 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 180367 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 180367 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 180367 never pass
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 180367 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 180367 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180367
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180367
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180367
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180367
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180367
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180367
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180367
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180367
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180367
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180367
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180367
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180367
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
baseline version:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51

Last test of basis   180377  2023-04-22 13:20:43 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Apr 23 03:14:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Apr 2023 03:14:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524918.815938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqQBP-0008TN-Rc; Sun, 23 Apr 2023 03:14:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524918.815938; Sun, 23 Apr 2023 03:14:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqQBP-0008TF-NE; Sun, 23 Apr 2023 03:14:35 +0000
Received: by outflank-mailman (input) for mailman id 524918;
 Sun, 23 Apr 2023 03:14:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqQBO-0008T5-SB; Sun, 23 Apr 2023 03:14:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqQBO-0003Oe-Nr; Sun, 23 Apr 2023 03:14:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqQBO-0005Jv-AI; Sun, 23 Apr 2023 03:14:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pqQBO-0007LR-8f; Sun, 23 Apr 2023 03:14:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nnAOQid5BSRMYecyL9oyy8YH1FIr8YgoUHkEaoVzfFs=; b=5hLdjBCgMo/ZO9DlA9nc7n3pxA
	GvzyshdsdBVHeTZiuLk6FWCiAytQzJrVCA6MolOHQ3SXy1tD5Lh77mgvOPrwYj8uVnoWHXef2rtb1
	RK1/InPX6vgEa7sz3DmZ8wGtLPnVqrxVJHwMKSvCqMLSiIucnimsmDWyK3L53AFW7rhQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180373-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180373: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1cc6e1a20144c0ae360cbeb0e035fdee1bd80609
X-Osstest-Versions-That:
    qemuu=2d82c32b2ceaca3dc3da5e36e10976f34bfcb598
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 Apr 2023 03:14:34 +0000

flight 180373 qemu-mainline real [real]
flight 180380 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180373/
http://logs.test-lab.xenproject.org/osstest/logs/180380/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail pass in 180380-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180334
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180334
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180334
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180334
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180334
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180334
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180334
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180334
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1cc6e1a20144c0ae360cbeb0e035fdee1bd80609
baseline version:
 qemuu                2d82c32b2ceaca3dc3da5e36e10976f34bfcb598

Last test of basis   180334  2023-04-20 12:40:18 Z    2 days
Failing since        180361  2023-04-21 11:11:00 Z    1 days    2 attempts
Testing same since   180373  2023-04-22 09:22:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Axel Heider <axel.heider@hensoldt.net>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Feng Jiang <jiangfeng@kylinos.cn>
  Gavin Shan <gshan@redhat.com>
  Guenter Roeck <linux@roeck-us.net>
  Joel Stanley <joel@jms.id.au>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Ninad Palsule <ninad@linux.ibm.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Weil <sw@weilnetz.de>
  Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
  Strahinja Jankovic <strahinjapjankovic@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Vaibhav Jain <vaibhav@linux.ibm.com>
  Yang Zhong <yang.zhong@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   2d82c32b2c..1cc6e1a201  1cc6e1a20144c0ae360cbeb0e035fdee1bd80609 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun Apr 23 05:37:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Apr 2023 05:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524926.815947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqSPD-0005rC-Rv; Sun, 23 Apr 2023 05:36:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524926.815947; Sun, 23 Apr 2023 05:36:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqSPD-0005r5-Oz; Sun, 23 Apr 2023 05:36:59 +0000
Received: by outflank-mailman (input) for mailman id 524926;
 Sun, 23 Apr 2023 05:36:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jp6u=AO=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pqSPB-0005qz-To
 for xen-devel@lists.xenproject.org; Sun, 23 Apr 2023 05:36:58 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0623.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::623])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id da7f4043-e198-11ed-8611-37d641c3527e;
 Sun, 23 Apr 2023 07:36:54 +0200 (CEST)
Received: from AM6PR01CA0039.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::16) by PAWPR08MB9068.eurprd08.prod.outlook.com
 (2603:10a6:102:330::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Sun, 23 Apr
 2023 05:36:51 +0000
Received: from AM7EUR03FT040.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::ae) by AM6PR01CA0039.outlook.office365.com
 (2603:10a6:20b:e0::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.32 via Frontend
 Transport; Sun, 23 Apr 2023 05:36:51 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT040.mail.protection.outlook.com (100.127.140.128) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.18 via Frontend Transport; Sun, 23 Apr 2023 05:36:51 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Sun, 23 Apr 2023 05:36:50 +0000
Received: from 144686ff64e6.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 98DA1B89-C652-46D1-A6F7-613B01CC3A71.1; 
 Sun, 23 Apr 2023 05:36:45 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 144686ff64e6.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sun, 23 Apr 2023 05:36:45 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DU0PR08MB9324.eurprd08.prod.outlook.com (2603:10a6:10:41f::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Sun, 23 Apr
 2023 05:36:42 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.032; Sun, 23 Apr 2023
 05:36:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da7f4043-e198-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CNWTjDvMEVcE72DbqkZ3VeLOC1bIZOdNtEBHrdX2ho0=;
 b=jtfiQAtBRoEBzBlZqKrJdq/YUuXdoYwfhQ7jfeBzywqBiI4rTK5rPZbEqw3RBJ0knoeQWlj9OFSflE5egcXMQdRhWbBcMDB+1BuqAKK2LV6xfUuZAq4RqmpxElWWuR5Dq9Uzx0QNQmQWy7W0BtAxlzfu1uGfy2CiItOKXie5WJI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=X6ohWziq5y2xc8PFRhEscMmexQ+blJ+Km+g46oRlir7dqYsUhAPvFf2WhCeDMwKtOpPPoE4tflpUNU3UdSOX3UTfIHJayD4UUOCL3w0jxGq9w8pxLrR+mikQU9RNwewywvPjcRyJIGuQ0mgc43XlKLWGB7LElPc12vR2fsGmyKfcpQ6dvYBMyO8Bly9uNYGocKm1eQgiDNjYo1jxr3c/l2EfCF7aGYUAH5hEtG+uY9W/YZR7GvGPvP4CN6X9TUxh0Z+ipEO5qu9jwin8+KhXfT7E+Nqd2PjDkpTZyzZm/oaUKhiC2HL4mNJ8gaxAWU0rEQ4fAILwl6gLpO/juEDkJQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=CNWTjDvMEVcE72DbqkZ3VeLOC1bIZOdNtEBHrdX2ho0=;
 b=nKafVI0zyzFIvifOPKfsT2THQQ3frTQ2xJx3aYgwcAxhUaGGx+HAmTPEhrVoyhFFwFh8UZAvkomKE3KZovbjwSuQggfbzefR0icyjlz0nZTrBfhNrz5VL9i5sZAcqlmtELm8QpiAnnjUltoRfdlPtwR/Q3gsM1nhOpgS5Lp2pqUOiFE0RAM38D4kDN5xEjwI/i5Rs3O5P2qqvuw3Aum/wqDvHIsjskz1R4tS9+1ZE801a14R1JlTNGyUVW3FuJZs5+TSCH8/Pex86+hwY1usXBeWOq7uXnkCuuQKEYwX6EtG7F5vLbKEDW/3STrIiMGsFH5Jq5wzHP5mMF3yLJUzpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CNWTjDvMEVcE72DbqkZ3VeLOC1bIZOdNtEBHrdX2ho0=;
 b=jtfiQAtBRoEBzBlZqKrJdq/YUuXdoYwfhQ7jfeBzywqBiI4rTK5rPZbEqw3RBJ0knoeQWlj9OFSflE5egcXMQdRhWbBcMDB+1BuqAKK2LV6xfUuZAq4RqmpxElWWuR5Dq9Uzx0QNQmQWy7W0BtAxlzfu1uGfy2CiItOKXie5WJI=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v3 03/17] xen/arm: implement node distance helpers for Arm
Thread-Topic: [PATCH v3 03/17] xen/arm: implement node distance helpers for
 Arm
Thread-Index: AQHZc3rmCrt50C+BaEmmIgaK9uo9ea80IxGAgAFPzpCAABDqgIAC3lpQ
Date: Sun, 23 Apr 2023 05:36:42 +0000
Message-ID:
 <AS8PR08MB7991576C75D0D4482595E7E292669@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-4-Henry.Wang@arm.com>
 <ac54e04c-58b7-d0c9-2443-bb09258c8bc8@suse.com>
 <AS8PR08MB79912F294EDAC48F835FBB7A92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <bdf33169-4e29-8c50-ff76-16d05df81a14@suse.com>
In-Reply-To: <bdf33169-4e29-8c50-ff76-16d05df81a14@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: A5FC215D96C98549A01D59CE5228927C.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DU0PR08MB9324:EE_|AM7EUR03FT040:EE_|PAWPR08MB9068:EE_
X-MS-Office365-Filtering-Correlation-Id: 6010b7e4-2b39-4f5f-9c76-08db43bcbd47
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 pr9QPEHDmdjnMB1ut6FkARwlEa/NTzMuLvatT0agdSViR1TdVgoYFTMVaeZqq5auIKLFeai6KUFCxy5RQxvTpadXx4BvhQ5FiDkXIdzWHWofFaBizFMIrLqjCL4tnEma1BqqRX9WM6aGOuHjJIlY/AMKBvrMGh0OxbqHGV80qPDp8J9EtydG/HULAZwQO1AIr9+OJYH39iEo+HKdVYQRjw57+T8HFfUNxhUo5Ou23Rahrp4Au366r/N8frFzGj5R+ANmnRL4XJ1beS6BB4MBuiArEUDv0vhnBaJe4xCwN0A8es6yxcOFKwbQA4rkkKktLeWzPM74Cjx4vp+R2qlDLvc5T+nnbKHW/ds6yRdU6LDfw+JEOcPWZ2iixJBZshzXEwV/J5Spq3nlHMcTSWP5dyn905qNHZx8kgcKEXHOvOOs8fAnyLbbycj0uf7wsj/gBOS7iW5NqoMs6wL+QlFD3qV2dbccArD1FMe+bADapduBddDb1zGZcCSf98i+vG65YqwXBRkxvZrY5tNVTVc+bd1XOec2ktVdD1hgXpaJrERcTLtWAATaHh4Zfq48W7fFbq97088kslFLhtMJ3jYgp8xBjl+3hn65j1kLJxzIv7dRzIlSNt7y70Nfs8Fr4IUU
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(366004)(376002)(39860400002)(136003)(451199021)(316002)(76116006)(66946007)(66556008)(66476007)(66446008)(64756008)(6916009)(4326008)(86362001)(478600001)(71200400001)(55016003)(7696005)(8936002)(8676002)(38100700002)(52536014)(122000001)(5660300002)(54906003)(83380400001)(26005)(2906002)(9686003)(41300700001)(6506007)(38070700005)(33656002)(186003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9324
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	63be9fd7-2342-4ef0-951e-08db43bcb818
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3Ky2JtIWC2KOpJcttu/MhwCCQAKOexm9Q8a3PLyLnWr/e4xbOBoxAfGXfKyl8+3pscAaynoJ6AFuUP3oWcoSzw22cvyimIDmNf+Cruhu8C03cUhQtP9gJsKLb+XRRpk8YlYaLD/rpEkF9K/9L+pu6ObrUXGCiCzBSLXzTkxJyKSHYzdsAFw9GNdYwIr01PU0yVlaBTXloLBN2g0aBFIleDeuHPeKfQc5BIHJmGgUhxGbO4sdu962aeC7yeBEnw+lQatcD1aoKFwoOPwCC+czC8Mfb2XOHbfJiE9o7tWX+GORVjFDXD5A4yvLSEXB0gjjgreFsvY97LcAf3310qbUJkYMhIHX7+uKer+Ms810AM9D/XyTEUnOD+21L0Yoj2JoXhMsPY5bJ+oVouymc+oNeUUSr0zONLJqZvpDwXbZPMqvqWBaZDZOJlPX67906dcEuGjhqMtsYIOWT0I91/69ke9bSByZoD5nBBvaFn5E35bnUDwd9OH6GZ+RzZR/ILCwSJiUQTuQMEoJZJ8kh6XA15pgJP8jh9GTuU1d51zKNKFAIjfiRQAw93wrn+z6GhTspSVns7jB5AfO0MBm5OOpZP1mepTzFBqiT6k5gkW6tJ+ku2H2ghPMfkz2e3iQ2qo2rDV5Q1hy1maKhH4ugQx7I3rORKuEQAZXwNr0NdBWkBQ=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(346002)(136003)(396003)(451199021)(36840700001)(46966006)(8936002)(336012)(5660300002)(8676002)(52536014)(6862004)(36860700001)(47076005)(2906002)(83380400001)(9686003)(40480700001)(478600001)(55016003)(7696005)(54906003)(186003)(4326008)(316002)(6506007)(70586007)(41300700001)(70206006)(26005)(33656002)(86362001)(81166007)(82740400003)(356005)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Apr 2023 05:36:51.0619
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6010b7e4-2b39-4f5f-9c76-08db43bcbd47
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9068

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v3 03/17] xen/arm: implement node distance helpers for
> Arm
> >> However, looking at the code below, don't you mean to have the array
> >> pre-set to all NUMA_NO_DISTANCE?
> >
> > ...I am a bit puzzled about why pre-setting the array to all
> > NUMA_NO_DISTANCE matters here, as I think the node distance map will
> > be populated when parsing the device tree anyway no matter what their
> > initial values.
> 
> From this patch alone it doesn't become clear whether indeed all array
> slots (and not just ones for valid nodes) would be populated. I think
> the code in the patch here would better not make itself dependent on
> behavior of code added subsequently (which may change; recall that a
> series may be committed in pieces).

Correct, I agree. I added a numa_init_distance() function (in patch #12) to
set all values to NUMA_NO_DISTANCE. The numa_init_distance() will be
called in the beginning of numa_init().

> 
> >>> +unsigned char __node_distance(nodeid_t from, nodeid_t to)
> >>> +{
> >>> +    /* When NUMA is off, any distance will be treated as remote. */
> >>> +    if ( numa_disabled() )
> >>> +        return NUMA_REMOTE_DISTANCE;
> >>
> >> Wouldn't it make sense to have the "from == to" special case ahead of
> >> this (rather than further down), thus yielding a sensible result for
> >> from == to == 0? And else return NUMA_NO_DISTANCE, thus having a
> >> sensible result also for any from/to != 0?
> >
> > Could you please elaborate a bit more about why 0 matters here?
> 
> When NUMA is off, there's only one node - node 0. Hence 0 has special
> meaning in that case.
> 
> > As from my understanding,
> > (1) If from == to, then we set the distance to NUMA_LOCAL_DISTANCE
> > which represents the diagonal of the matrix.
> > (2) If from and to is in the matrix range, then we return
> > node_distance_map[from][to].
> 
> Provided that's set correctly. IOW this interacts with the other comment
> (which really I made only after the one here, just that that's of course
> not visible from the reply that I sent).
> 
> > (3) Other cases we return NUMA_NO_DISTANCE.
> 
> And when NUMA is off, it should be NUMA_NO_DISTANCE in _all_ other
> cases,
> i.e. ...
> 
> >      /* When NUMA is off, any distance will be treated as remote. */
> >      if ( numa_disabled() )
> >          return NUMA_REMOTE_DISTANCE;
> 
> ... this return is wrong in that case (even if in reality this likely
> wouldn't matter much).

Thanks for the explanation! I think I now understand :) Would this diff below
look good to you then? Appreciate your patience.

unsigned char __node_distance(nodeid_t from, nodeid_t to)
 {
-    /* When NUMA is off, any distance will be treated as remote. */
+    if ( from == to )
+        return NUMA_LOCAL_DISTANCE;
+
+    /* When NUMA is off, any distance will be treated as unreachable (0xFF). */
     if ( numa_disabled() )
-        return NUMA_REMOTE_DISTANCE;
+        return NUMA_NO_DISTANCE;

     /*
      * Check whether the nodes are in the matrix range.
      * When any node is out of range, except from and to nodes are the
-     * same, we treat them as unreachable (return 0xFF)
+     * same, we treat them as unreachable (0xFF)
      */
     if ( from >= ARRAY_SIZE(node_distance_map) ||
          to >= ARRAY_SIZE(node_distance_map[0]) )
-        return from == to ? NUMA_LOCAL_DISTANCE : NUMA_NO_DISTANCE;
+        return NUMA_NO_DISTANCE;

     return node_distance_map[from][to];
 }

Kind regards,
Henry

> 
> Jan
> 


From xen-devel-bounces@lists.xenproject.org Sun Apr 23 09:48:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Apr 2023 09:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524954.815958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqWK9-0005iP-9q; Sun, 23 Apr 2023 09:48:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524954.815958; Sun, 23 Apr 2023 09:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqWK9-0005iI-58; Sun, 23 Apr 2023 09:48:01 +0000
Received: by outflank-mailman (input) for mailman id 524954;
 Sun, 23 Apr 2023 09:47:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqWK7-0005i8-UL; Sun, 23 Apr 2023 09:47:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqWK7-0005Cw-Pz; Sun, 23 Apr 2023 09:47:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqWK7-0000Oa-7s; Sun, 23 Apr 2023 09:47:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pqWK7-0001cV-7N; Sun, 23 Apr 2023 09:47:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y4xu13VaXn7RCaK89WwcqntEniUEE/+7myscGg10ldE=; b=VzKtNWZZ+kDGQ4h3WmdmzSPx/B
	Jtwt9eIgqJ5xAUanK43wEixMDfG/suDqiKx1dFZh/nAYmzijTk37BKZNQmDZDXrP0yjszqFNZh3m/
	TM931HMSqCEKM0acvvjuVQUxPInLRR1qrewXPR2oVe7Ej0qcing4LpQugU+zHYAsimnY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180379-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180379: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2caeeb9d4a1bccd923b7918427f9e9ef7151ddd8
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 Apr 2023 09:47:59 +0000

flight 180379 linux-linus real [real]
flight 180383 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180379/
http://logs.test-lab.xenproject.org/osstest/logs/180383/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180383-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2caeeb9d4a1bccd923b7918427f9e9ef7151ddd8
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    6 days
Failing since        180281  2023-04-17 06:24:36 Z    6 days   11 attempts
Testing same since   180379  2023-04-22 23:41:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Hung <alex.hung@amd.com>
  Alexander Aring <aahringo@redhat.com>
  Alexander Potapenko <glider@google.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Alexei Starovoitov <ast@kernel.org>
  Alexis Lothoré <alexis.lothore@bootlin.com>
  Andrea Righi <andrea.righi@canonical.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Chi <andy.chi@canonical.com>
  Arnd Bergmann <arnd@arndb.de>
  Asahi Lina <lina@asahilina.net>
  Axel Lin <axel.lin@ingics.com>
  Baokun Li <libaokun1@huawei.com>
  Baoqi Zhang <zhangbaoqi@loongson.cn>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Bhavya Kapoor <b-kapoor@ti.com>
  Bjorn Andersson <andersson@kernel.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Boris Burkov <boris@bur.io>
  Brian Masney <bmasney@redhat.com>
  Chancel Liu <chancel.liu@nxp.com>
  Chen Aotian <chenaotian2@163.com>
  Chong Qiao <qiaochong@loongson.cn>
  Chris Morgan <macromorgan@hotmail.com>
  Christoph Paasch <cpaasch@apple.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Conor Dooley <conor.dooley@microchip.com>
  Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
  Dan Carpenter <dan.carpenter@linaro.org>
  Dan Carpenter <error27@gmail.com>
  Dan Johansen <strit@manjaro.org>
  Daniel Baluta <daniel.baluta@nxp.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Miess <Daniel.Miess@amd.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Airlie <airlied@redhat.com>
  David Gow <davidgow@google.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Donald Hunter <donald.hunter@gmail.com>
  Dragan Simic <dragan.simic@gmail.com>
  Duoming Zhou <duoming@zju.edu.cn>
  Ekaterina Orlova <vorobushek.ok@gmail.com>
  Enze Li <lienze@kylinos.cn>
  Fabio Estevam <festevam@denx.de>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gwangun Jung <exsociety@gmail.com>
  Heiko Stuebner <heiko@sntech.de>
  Huacai Chen <chenhuacai@loongson.cn>
  Ido Schimmel <idosch@nvidia.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jamal Hadi Salim<jhs@mojatatu.com>
  Jani Nikula <jani.nikula@intel.com>
  Jaroslav Kysela <perex@perex.cz>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jens Axboe <axboe@kernel.dk>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jianqun Xu <jay.xu@rock-chips.com>
  Johan Hovold <johan+linaro@kernel.org>
  John Ogness <john.ogness@linutronix.de>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Toppins <jtoppins@redhat.com>
  JR Gonzalez <jrg@scientiam.org>
  Jules Maselbas <jmaselbas@kalray.eu>
  Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
  Karol Herbst <kherbst@redhat.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Li Lanzhe <u202212060@hust.edu.cn>
  Liam R. Howlett <Liam.Howlett@oracle.com>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Long Wang <long.wang@analog.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Luben Tuikov <luben.tuikov@amd.com>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Rodriguez Reboredo <yakoyoku@gmail.com>
  Mat Martineau <martineau@kernel.org>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Matthieu Baerts <matthieu.baerts@tessares.net>
  Mel Gorman <mgorman@suse.de>
  Mel Gorman <mgorman@techsingularity.net>
  Michael Chan <michael.chan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Michal Simek <michal.simek@amd.com>
  Miguel Ojeda <ojeda@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Naama Meir <naamax.meir@linux.intel.com>
  Naoya Horiguchi <naoya.horiguchi@nec.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Oliver Upton <oliver.upton@linux.dev>
  Ondrej Mosnacek <omosnace@redhat.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Blass <patrickblass@mailbox.org>
  Paulo Alcantara (SUSE) <pc@manguebit.com>
  Paulo Alcantara <pc@manguebit.com>
  Pedro Tammela <pctammela@mojatatu.com>
  Peng Fan <peng.fan@nxp.com>
  Peng Zhang <zhangpeng.00@bytedance.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Peter Xu <peterx@redhat.com>
  Petr Machata <petrm@nvidia.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qi Zheng <zhengqi.arch@bytedance.com>
  Qing Zhang <zhangqing@loongson.cn>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Ricardo Pardini <ricardo@pardini.net>
  Rob Herring <robh@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Scott Mayhew <smayhew@redhat.com>
  Sebastian Basierski <sebastianx.basierski@intel.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Seiji Nishikawa <snishika@redhat.com>
  SeongJae Park <sj@kernel.org>
  Shakeel Butt <shakeelb@google.com>
  Shawn Guo <shawnguo@kernel.org>
  Shengjiu Wang <shengjiu.wang@gmail.com>
  Steev Klimaszewski <steev@kali.org> #Thinkpad X13s
  Steve Chou <steve_chou@pesi.com.tw>
  Steve French <stfrench@microsoft.com>
  Sudeep Holla <sudeep.holla@arm.com>
  syzbot+a7c1ec5b1d71ceaa5186@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Bamelis <thomas@bamelis.dev>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vadim Fedorenko <vadfed@meta.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vitaly Prosyak <vitaly.prosyak@amd.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Will Deacon <will@kernel.org>
  William Breathitt Gray <william.gray@linaro.org>
  Xu Yilun <yilun.xu@intel.com>
  Xuan Zhuo <xuanzhuo@linux.alibaba.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5319 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Apr 23 12:06:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Apr 2023 12:06:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.524978.815967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqYTf-0002px-BE; Sun, 23 Apr 2023 12:05:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 524978.815967; Sun, 23 Apr 2023 12:05:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqYTf-0002pq-8V; Sun, 23 Apr 2023 12:05:59 +0000
Received: by outflank-mailman (input) for mailman id 524978;
 Sun, 23 Apr 2023 12:05:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqYTd-0002pg-Rv; Sun, 23 Apr 2023 12:05:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqYTd-0008LN-J7; Sun, 23 Apr 2023 12:05:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqYTd-0006T9-6K; Sun, 23 Apr 2023 12:05:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pqYTd-0006dp-5i; Sun, 23 Apr 2023 12:05:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=heJbN0/Z6u8ZX/FFHiUjK/tRaOoxzbRjRcW4njQOBao=; b=3OFP3TV42kuj/eH0GUXPX3Lka0
	R1e/yO2kw0vUcQmMrxXgUuBJJ6gpDhFDLx4T+YNFTEBaZa89GX+PlAFTR48tDlGweCqo5BDQJD/IY
	u8fpWoLIx6HSasnUz40HNF+i2S2tdqd4j+xStIDXZMBHxfeGxfYO5dzYFda+IRfUZFeQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180381-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180381: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-pair:debian-fixup/dst_host:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-pvshim:debian-fixup:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-arndale:debian-fixup:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
X-Osstest-Versions-That:
    xen=c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 Apr 2023 12:05:57 +0000

flight 180381 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180381/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt       7 xen-install      fail in 180377 pass in 180381
 test-amd64-amd64-pair   21 debian-fixup/dst_host fail in 180377 pass in 180381
 test-amd64-amd64-xl-pvshim   13 debian-fixup     fail in 180377 pass in 180381
 test-armhf-armhf-xl-arndale  13 debian-fixup               fail pass in 180377
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180377
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 180377

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 180377 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 180377 never pass
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 180367
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180377
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180377
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180377
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180377
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180377
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180377
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180377
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180377
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180377
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180377
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180377
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180377
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-credit1   3 hosts-allocate           starved in 180377 n/a
 test-arm64-arm64-xl           3 hosts-allocate           starved in 180377 n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate           starved in 180377 n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate           starved in 180377 n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate           starved in 180377 n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate           starved in 180377 n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate           starved in 180377 n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate           starved in 180377 n/a

version targeted for testing:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
baseline version:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51

Last test of basis   180381  2023-04-23 01:52:21 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Apr 23 16:03:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Apr 2023 16:03:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525016.815977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqcBX-0001ZV-Ty; Sun, 23 Apr 2023 16:03:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525016.815977; Sun, 23 Apr 2023 16:03:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqcBX-0001ZO-RO; Sun, 23 Apr 2023 16:03:31 +0000
Received: by outflank-mailman (input) for mailman id 525016;
 Sun, 23 Apr 2023 16:03:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqcBW-0001ZE-Al; Sun, 23 Apr 2023 16:03:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqcBW-0005go-0k; Sun, 23 Apr 2023 16:03:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqcBV-0004vh-D8; Sun, 23 Apr 2023 16:03:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pqcBV-0005Fu-Ch; Sun, 23 Apr 2023 16:03:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5U8V11iMjU2MoKgqokqlXlZ4V+KtYPxe+EIvgMaXCAs=; b=wI/8pMGR6WncimOQ/i0UNJmOL6
	igSqhVGW+SmK3jPcC0wf1Jb7YOSSVP33wyVhcUGG32wuSS7FFEVdkrCrfVk7Zih8llLFvJDH+Cpqv
	yMrLzXOMOw1n4Q55drBbPR4hHR3MVOgYKZUoWuqOeAKvbnjrtYLi0mnvYeE3feXqUfAc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180382-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180382: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6dd06214892d71cbbdd25daed7693e58afcb1093
X-Osstest-Versions-That:
    qemuu=1cc6e1a20144c0ae360cbeb0e035fdee1bd80609
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 Apr 2023 16:03:29 +0000

flight 180382 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180382/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180373
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180373
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180373
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180373
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180373
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180373
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180373
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180373
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                6dd06214892d71cbbdd25daed7693e58afcb1093
baseline version:
 qemuu                1cc6e1a20144c0ae360cbeb0e035fdee1bd80609

Last test of basis   180373  2023-04-22 09:22:27 Z    1 days
Testing same since   180382  2023-04-23 03:16:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Marco Liebel <quic_mliebel@quicinc.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Taylor Simpson <tsimpson@quicinc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   1cc6e1a201..6dd0621489  6dd06214892d71cbbdd25daed7693e58afcb1093 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun Apr 23 22:48:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Apr 2023 22:48:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525038.815988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqiV3-0007VY-5b; Sun, 23 Apr 2023 22:48:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525038.815988; Sun, 23 Apr 2023 22:48:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqiV3-0007VR-2c; Sun, 23 Apr 2023 22:48:05 +0000
Received: by outflank-mailman (input) for mailman id 525038;
 Sun, 23 Apr 2023 22:48:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqiV1-0007VH-Jk; Sun, 23 Apr 2023 22:48:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqiV1-0006mb-7P; Sun, 23 Apr 2023 22:48:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqiV0-0002cu-Ou; Sun, 23 Apr 2023 22:48:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pqiV0-0002c6-OY; Sun, 23 Apr 2023 22:48:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1+m1iOstIBdFxwgUJQke6+3q3iID4u34N4G61QrqX1o=; b=Db5WSPneaIYTK1ERbVlTiClaOg
	DhwK78h18Sl3gJ4dGqojXb2vVsNATEGwW0Ul8FpPTWT+bYyzz/pCgLUZc9sA7IF/zyZQB2nywgcu6
	JZAyKZaGqPE3UOomzVEffKoc4U+esmuMnvM8YX21nrz4x43HYGbzws6vPN/DpRR9obyA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180386-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180386: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:build-arm64-pvops:kernel-build:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=327ec8d6c2a2223b78d311153a471036e474c5c5
X-Osstest-Versions-That:
    qemuu=6dd06214892d71cbbdd25daed7693e58afcb1093
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 Apr 2023 22:48:02 +0000

flight 180386 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180386/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-vhd      21 guest-start/debian.repeat fail REGR. vs. 180382
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180382

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180382
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180382
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180382
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180382
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180382
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180382
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180382
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180382
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                327ec8d6c2a2223b78d311153a471036e474c5c5
baseline version:
 qemuu                6dd06214892d71cbbdd25daed7693e58afcb1093

Last test of basis   180382  2023-04-23 03:16:49 Z    0 days
Testing same since   180386  2023-04-23 16:07:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 327ec8d6c2a2223b78d311153a471036e474c5c5
Merge: 6dd0621489 3ea9be3340
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Sun Apr 23 11:20:36 2023 +0100

    Merge tag 'pull-tcg-20230423' of https://gitlab.com/rth7680/qemu into staging
    
    tcg cleanups:
      - Remove tcg_abort()
      - Split out extensions as known backend interfaces
      - Put the separate extensions together as tcg_out_movext
      - Introduce tcg_out_xchg as a backend interface
      - Clear TCGLabelQemuLdst on allocation
      - Avoid redundant extensions for riscv
    
    # -----BEGIN PGP SIGNATURE-----
    #
    # iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmRE69sdHHJpY2hhcmQu
    # aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV/6jQf6Al9cgeJ6guVMpoRS
    # +sXaTs5U2yaqRvz5gGn2ANFuFgD2QanbWHjS5guTnhbsvq3icyOCpIXIPg/Z04LB
    # fTgAUCF5ut8U8C12HyGq/p4BFoTTWnCGPwY+PB9pMb5LiEcmaSUUz+fSA8xMX1b6
    # EylI8YNd74A9j5PBNbGIXooj8llM71p9YztwQ9V7sPH3ZON4qbPRDgrJsb5TngMa
    # daTpGoW+A9UyG7z0Ie6UuiOyYAzeQqm64WmMlc7UYeb9lL+yxvCq4+MXH2V/SKqg
    # GLOF95DCdqj1EeZCOt0aN1ybZPcYFFkmpXrD1iLu0Mhy7Qo/vghX/eFoFnLleD+Y
    # yM+LTg==
    # =d2hZ
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Sun 23 Apr 2023 09:27:07 AM BST
    # gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
    # gpg:                issuer "richard.henderson@linaro.org"
    # gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [ultimate]
    
    * tag 'pull-tcg-20230423' of https://gitlab.com/rth7680/qemu:
      tcg/riscv: Conditionalize tcg_out_exts_i32_i64
      tcg: Clear TCGLabelQemuLdst on allocation
      tcg: Introduce tcg_out_xchg
      tcg: Introduce tcg_out_movext
      tcg: Split out tcg_out_extrl_i64_i32
      tcg: Split out tcg_out_extu_i32_i64
      tcg: Split out tcg_out_exts_i32_i64
      tcg: Split out tcg_out_ext32u
      tcg: Split out tcg_out_ext32s
      tcg: Split out tcg_out_ext16u
      tcg: Split out tcg_out_ext16s
      tcg: Split out tcg_out_ext8u
      tcg: Split out tcg_out_ext8s
      tcg: Replace tcg_abort with g_assert_not_reached
      tcg: Replace if + tcg_abort with tcg_debug_assert
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 3ea9be33400f14305565a9a094cb6031c07183d5
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 18:43:47 2023 -0700

    tcg/riscv: Conditionalize tcg_out_exts_i32_i64
    
    Since TCG_TYPE_I32 values are kept sign-extended in registers, via "w"
    instructions, we don't need to extend if the register matches.
    This is already relied upon by comparisons.
    
    Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 4745b156b8412ef12af32bd474fee70c25940950
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Thu Apr 6 11:38:56 2023 -0700

    tcg: Clear TCGLabelQemuLdst on allocation
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 767c250310ee0494d37bf7514d24973dd50e38ea
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 21:39:54 2023 -0700

    tcg: Introduce tcg_out_xchg
    
    We will want a backend interface for register swapping.
    This is only properly defined for x86; all others get a
    stub version that always indicates failure.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit b3dfd5fc181433bd43e2163b1a94b11a548edfba
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 21:16:28 2023 -0700

    tcg: Introduce tcg_out_movext
    
    This is common code in most qemu_{ld,st} slow paths, extending the
    input value for the store helper data argument or extending the
    return value from the load helper.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit b8b94ac6753effcfda7880d3b9ac49b530e3d2ab
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 19:58:35 2023 -0700

    tcg: Split out tcg_out_extrl_i64_i32
    
    We will need a backend interface for type truncation.  For those backends
    that did not enable TCG_TARGET_HAS_extrl_i64_i32, use tcg_out_mov.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit b9bfe000f954e1defefb4c917f98bf82c337144b
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 18:56:28 2023 -0700

    tcg: Split out tcg_out_extu_i32_i64
    
    We will need a backend interface for type extension with zero.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 9c6aa274a494ce807e998a3652fa16a3d2da4387
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 18:30:56 2023 -0700

    tcg: Split out tcg_out_exts_i32_i64
    
    We will need a backend interface for type extension with sign.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 9ecf5f61b8f468f17483f325f565802c645983a5
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 18:07:05 2023 -0700

    tcg: Split out tcg_out_ext32u
    
    We will need a backend interface for performing 32-bit zero-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 52bf3398c3a2f51d3eaf8fd30dafcdc0cc7fc571
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 17:50:09 2023 -0700

    tcg: Split out tcg_out_ext32s
    
    We will need a backend interface for performing 32-bit sign-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 379afdff47556f01e75ce2caffd7ae9efa4f1214
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 16:25:22 2023 -0700

    tcg: Split out tcg_out_ext16u
    
    We will need a backend interface for performing 16-bit zero-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 753e42eada5c790bb3727c262f2e368e81cc788f
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 14:49:59 2023 -0700

    tcg: Split out tcg_out_ext16s
    
    We will need a backend interface for performing 16-bit sign-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit d0e66c897f2cdfb0807b76567a17d7811487fac3
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 13:26:51 2023 -0700

    tcg: Split out tcg_out_ext8u
    
    We will need a backend interface for performing 8-bit zero-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 678155b2c50aa3bf37abef6bfe914bf58f49bec2
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 11:17:01 2023 -0700

    tcg: Split out tcg_out_ext8s
    
    We will need a backend interface for performing 8-bit sign-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 732e89f4c401c3cf175aa84c987a029b9729070b
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 12:09:14 2023 -0700

    tcg: Replace tcg_abort with g_assert_not_reached
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 1a057554cc2e3ece8ed166f12a9b85cd5ec4cbe1
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 12:08:46 2023 -0700

    tcg: Replace if + tcg_abort with tcg_debug_assert
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>


From xen-devel-bounces@lists.xenproject.org Sun Apr 23 23:34:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Apr 2023 23:34:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525045.816000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqjD8-0004Nu-KY; Sun, 23 Apr 2023 23:33:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525045.816000; Sun, 23 Apr 2023 23:33:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqjD8-0004Nn-I1; Sun, 23 Apr 2023 23:33:38 +0000
Received: by outflank-mailman (input) for mailman id 525045;
 Sun, 23 Apr 2023 23:33:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqjD7-0004Nd-MK; Sun, 23 Apr 2023 23:33:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqjD7-0007s6-FE; Sun, 23 Apr 2023 23:33:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqjD7-0003cN-46; Sun, 23 Apr 2023 23:33:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pqjD7-0006rt-3i; Sun, 23 Apr 2023 23:33:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ckWxIg64sKxl3Le8yoQT2dSnb28bw9/YYk7BF8c3zY4=; b=62WUiouOyUVlPnREFn0ZMjHuQv
	C/iIysvBaCmVIi4brKR3H0wHFCO3Q2HkUuV0mTfjDfeiNrFknxJ5DHcHZ3ehPdkYrZQGShzvkfARx
	YkNSc9T+6RH4Q077Vx9cT6ZZa1xBDkSt7HtJFTBCQYP1yeK4wf/rCm11bwI4mRgtnui0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180384-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180384: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start.2:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=622322f53c6d9ddd3c2a4aad852b3e1adbd56da7
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 Apr 2023 23:33:37 +0000

flight 180384 linux-linus real [real]
flight 180387 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180384/
http://logs.test-lab.xenproject.org/osstest/logs/180387/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd      22 guest-start.2       fail pass in 180387-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                622322f53c6d9ddd3c2a4aad852b3e1adbd56da7
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    7 days
Failing since        180281  2023-04-17 06:24:36 Z    6 days   12 attempts
Testing same since   180384  2023-04-23 09:52:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Hung <alex.hung@amd.com>
  Alexander Aring <aahringo@redhat.com>
  Alexander Potapenko <glider@google.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Alexei Starovoitov <ast@kernel.org>
  Alexis Lothoré <alexis.lothore@bootlin.com>
  Andrea Righi <andrea.righi@canonical.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Chi <andy.chi@canonical.com>
  Arnd Bergmann <arnd@arndb.de>
  Asahi Lina <lina@asahilina.net>
  Axel Lin <axel.lin@ingics.com>
  Baokun Li <libaokun1@huawei.com>
  Baoqi Zhang <zhangbaoqi@loongson.cn>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Bhavya Kapoor <b-kapoor@ti.com>
  Bjorn Andersson <andersson@kernel.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Boris Burkov <boris@bur.io>
  Brian Masney <bmasney@redhat.com>
  Chancel Liu <chancel.liu@nxp.com>
  Chen Aotian <chenaotian2@163.com>
  Chong Qiao <qiaochong@loongson.cn>
  Chris Morgan <macromorgan@hotmail.com>
  Christoph Paasch <cpaasch@apple.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Conor Dooley <conor.dooley@microchip.com>
  Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
  Dan Carpenter <dan.carpenter@linaro.org>
  Dan Carpenter <error27@gmail.com>
  Dan Johansen <strit@manjaro.org>
  Daniel Baluta <daniel.baluta@nxp.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Miess <Daniel.Miess@amd.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Airlie <airlied@redhat.com>
  David Gow <davidgow@google.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Donald Hunter <donald.hunter@gmail.com>
  Dragan Simic <dragan.simic@gmail.com>
  Duoming Zhou <duoming@zju.edu.cn>
  Ekaterina Orlova <vorobushek.ok@gmail.com>
  Enze Li <lienze@kylinos.cn>
  Fabio Estevam <festevam@denx.de>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gwangun Jung <exsociety@gmail.com>
  Heiko Stuebner <heiko@sntech.de>
  Huacai Chen <chenhuacai@loongson.cn>
  Ido Schimmel <idosch@nvidia.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jani Nikula <jani.nikula@intel.com>
  Jaroslav Kysela <perex@perex.cz>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jens Axboe <axboe@kernel.dk>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jianqun Xu <jay.xu@rock-chips.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Johan Hovold <johan+linaro@kernel.org>
  John Ogness <john.ogness@linutronix.de>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Toppins <jtoppins@redhat.com>
  JR Gonzalez <jrg@scientiam.org>
  Jules Maselbas <jmaselbas@kalray.eu>
  Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
  Karol Herbst <kherbst@redhat.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Li Lanzhe <u202212060@hust.edu.cn>
  Liam R. Howlett <Liam.Howlett@oracle.com>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Long Wang <long.wang@analog.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Luben Tuikov <luben.tuikov@amd.com>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Rodriguez Reboredo <yakoyoku@gmail.com>
  Mat Martineau <martineau@kernel.org>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Matthieu Baerts <matthieu.baerts@tessares.net>
  Mel Gorman <mgorman@suse.de>
  Mel Gorman <mgorman@techsingularity.net>
  Michael Chan <michael.chan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Michal Simek <michal.simek@amd.com>
  Miguel Ojeda <ojeda@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Naama Meir <naamax.meir@linux.intel.com>
  Naoya Horiguchi <naoya.horiguchi@nec.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Oliver Upton <oliver.upton@linux.dev>
  Ondrej Mosnacek <omosnace@redhat.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Blass <patrickblass@mailbox.org>
  Paulo Alcantara (SUSE) <pc@manguebit.com>
  Paulo Alcantara <pc@manguebit.com>
  Pedro Tammela <pctammela@mojatatu.com>
  Peng Fan <peng.fan@nxp.com>
  Peng Zhang <zhangpeng.00@bytedance.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Peter Xu <peterx@redhat.com>
  Petr Machata <petrm@nvidia.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qi Zheng <zhengqi.arch@bytedance.com>
  Qing Zhang <zhangqing@loongson.cn>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Ricardo Pardini <ricardo@pardini.net>
  Rob Herring <robh@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Scott Mayhew <smayhew@redhat.com>
  Sebastian Basierski <sebastianx.basierski@intel.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Seiji Nishikawa <snishika@redhat.com>
  SeongJae Park <sj@kernel.org>
  Shakeel Butt <shakeelb@google.com>
  Shawn Guo <shawnguo@kernel.org>
  Shengjiu Wang <shengjiu.wang@gmail.com>
  Steev Klimaszewski <steev@kali.org> #Thinkpad X13s
  Steve Chou <steve_chou@pesi.com.tw>
  Steve French <stfrench@microsoft.com>
  Sudeep Holla <sudeep.holla@arm.com>
  syzbot+a7c1ec5b1d71ceaa5186@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Bamelis <thomas@bamelis.dev>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vadim Fedorenko <vadfed@meta.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vitaly Prosyak <vitaly.prosyak@amd.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Will Deacon <will@kernel.org>
  William Breathitt Gray <william.gray@linaro.org>
  Xu Yilun <yilun.xu@intel.com>
  Xuan Zhuo <xuanzhuo@linux.alibaba.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5349 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525069.816044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI1-00018m-P9; Mon, 24 Apr 2023 06:03:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525069.816044; Mon, 24 Apr 2023 06:03:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI1-00018f-Il; Mon, 24 Apr 2023 06:03:05 +0000
Received: by outflank-mailman (input) for mailman id 525069;
 Mon, 24 Apr 2023 06:03:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpI0-0000eR-0L
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:04 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id ac52d16c-e265-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 08:03:03 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8173E1477;
 Sun, 23 Apr 2023 23:03:46 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id BC53E3F587;
 Sun, 23 Apr 2023 23:03:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac52d16c-e265-11ed-b223-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v6 03/12] xen/arm: Expose SVE feature to the guest
Date: Mon, 24 Apr 2023 07:02:39 +0100
Message-Id: <20230424060248.1488859-4-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a guest is allowed to use SVE, expose the SVE features through
the identification registers.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v5:
 - given the move of is_sve_domain() into asm/arm64/sve.h, include
   that header in vsysreg.c
 - dropped Bertrand's R-by because of the change
Changes from v4:
 - no changes
Changes from v3:
 - no changes
Changes from v2:
 - no changes
Changes from v1:
 - No changes
Changes from RFC:
 - No changes
---
 xen/arch/arm/arm64/vsysreg.c | 40 ++++++++++++++++++++++++++++++++++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 758750983c11..e1ef927b3347 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -18,6 +18,8 @@
 
 #include <xen/sched.h>
 
+#include <asm/arm64/cpufeature.h>
+#include <asm/arm64/sve.h>
 #include <asm/current.h>
 #include <asm/regs.h>
 #include <asm/traps.h>
@@ -295,7 +297,28 @@ void do_sysreg(struct cpu_user_regs *regs,
     GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
     GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
     GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
-    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
+
+    case HSR_SYSREG_ID_AA64PFR0_EL1:
+    {
+        register_t guest_reg_value = guest_cpuinfo.pfr64.bits[0];
+
+        if ( is_sve_domain(v->domain) )
+        {
+            /* 4 is the SVE field width in id_aa64pfr0_el1 */
+            uint64_t mask = GENMASK(ID_AA64PFR0_SVE_SHIFT + 4 - 1,
+                                    ID_AA64PFR0_SVE_SHIFT);
+            /* sysval is the sve field on the system */
+            uint64_t sysval = cpuid_feature_extract_unsigned_field_width(
+                                system_cpuinfo.pfr64.bits[0],
+                                ID_AA64PFR0_SVE_SHIFT, 4);
+            guest_reg_value &= ~mask;
+            guest_reg_value |= (sysval << ID_AA64PFR0_SVE_SHIFT) & mask;
+        }
+
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
+                                  guest_reg_value);
+    }
+
     GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
     GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
     GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
@@ -306,7 +329,20 @@ void do_sysreg(struct cpu_user_regs *regs,
     GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
     GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
     GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
-    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
+
+    case HSR_SYSREG_ID_AA64ZFR0_EL1:
+    {
+        /*
+         * When the guest has the SVE feature enabled, the whole id_aa64zfr0_el1
+         * needs to be exposed.
+         */
+        register_t guest_reg_value = guest_cpuinfo.zfr64.bits[0];
+        if ( is_sve_domain(v->domain) )
+            guest_reg_value = system_cpuinfo.zfr64.bits[0];
+
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
+                                  guest_reg_value);
+    }
 
     /*
      * Those cases are catching all Reserved registers trapped by TID3 which
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525074.816093 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI7-0002SQ-Lf; Mon, 24 Apr 2023 06:03:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525074.816093; Mon, 24 Apr 2023 06:03:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI7-0002S7-Gi; Mon, 24 Apr 2023 06:03:11 +0000
Received: by outflank-mailman (input) for mailman id 525074;
 Mon, 24 Apr 2023 06:03:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpI6-0000eR-Cy
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:10 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id b0458582-e265-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 08:03:09 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2C3B9FEC;
 Sun, 23 Apr 2023 23:03:53 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 02BD33F587;
 Sun, 23 Apr 2023 23:03:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0458582-e265-11ed-b223-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 08/12] xen/physinfo: encode Arm SVE vector length in arch_capabilities
Date: Mon, 24 Apr 2023 07:02:44 +0100
Message-Id: <20230424060248.1488859-9-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the Arm platform supports SVE, advertise the feature by encoding
the SVE vector length in the arch_capabilities field of
struct xen_sysctl_physinfo.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes from v5:
 - Add R-by from Bertrand
Changes from v4:
 - Write arch_capabilities from arch_do_physinfo instead of using
   stub functions (Jan)
Changes from v3:
 - domainconfig_encode_vl is now named sve_encode_vl
Changes from v2:
 - Remove XEN_SYSCTL_PHYSCAP_ARM_SVE_SHFT, use MASK_INSR and
   protect with ifdef XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK (Jan)
 - Use the helper function sve_arch_cap_physinfo to encode
   the VL into physinfo arch_capabilities field.
Changes from v1:
 - Use only arch_capabilities and some defines to encode SVE VL
   (Bertrand, Stefano, Jan)
Changes from RFC:
 - new patch
---
 xen/arch/arm/sysctl.c       | 4 ++++
 xen/include/public/sysctl.h | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index b0a78a8b10d0..e9a0661146e4 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -11,11 +11,15 @@
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <xen/hypercall.h>
+#include <asm/arm64/sve.h>
 #include <public/sysctl.h>
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
 {
     pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm | XEN_SYSCTL_PHYSCAP_hap;
+
+    pi->arch_capabilities |= MASK_INSR(sve_encode_vl(get_sys_vl_len()),
+                                       XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK);
 }
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 2b24d6bfd00e..9d06e92d0f6a 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -94,6 +94,10 @@ struct xen_sysctl_tbuf_op {
 /* Max XEN_SYSCTL_PHYSCAP_* constant.  Used for ABI checking. */
 #define XEN_SYSCTL_PHYSCAP_MAX XEN_SYSCTL_PHYSCAP_gnttab_v2
 
+#if defined(__arm__) || defined(__aarch64__)
+#define XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK  (0x1FU)
+#endif
+
 struct xen_sysctl_physinfo {
     uint32_t threads_per_core;
     uint32_t cores_per_socket;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525072.816069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI3-0001hk-SV; Mon, 24 Apr 2023 06:03:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525072.816069; Mon, 24 Apr 2023 06:03:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI3-0001g4-JJ; Mon, 24 Apr 2023 06:03:07 +0000
Received: by outflank-mailman (input) for mailman id 525072;
 Mon, 24 Apr 2023 06:03:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpI2-0000eR-Cc
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:06 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id ad978108-e265-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 08:03:05 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AE2CE168F;
 Sun, 23 Apr 2023 23:03:48 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D05AC3F587;
 Sun, 23 Apr 2023 23:03:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad978108-e265-11ed-b223-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v6 05/12] arm/sve: save/restore SVE context switch
Date: Mon, 24 Apr 2023 07:02:41 +0100
Message-Id: <20230424060248.1488859-6-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Save/restore the SVE context on context switch: allocate memory to hold
the Z0-31 registers, whose length is at most 2048 bits each, and FFR,
which can be at most 256 bits. The amount of memory allocated depends
on the vector length configured for the domain and on how many bits
are supported by the platform.

Save P0-15, whose length is at most 256 bits each; in this case the
memory used comes from the fpregs field in struct vfp_state,
because V0-31 are part of Z0-31 and that space would otherwise have
been unused for an SVE domain.

Create zcr_el{1,2} fields in arch_vcpu: initialise zcr_el2 on vcpu
creation from the requested vector length and restore it on
context switch; save/restore the ZCR_EL1 value as well.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v5:
 - use XFREE instead of xfree, keep the headers (Julien)
 - Avoid math computation for every save/restore, store the computation
   in struct vfp_state once (Bertrand)
 - protect access to v->domain->arch.sve_vl inside arch_vcpu_create now
   that sve_vl is available only on arm64
Changes from v4:
 - No changes
Changes from v3:
 - don't use fixed len types when not needed (Jan)
 - now VL is an encoded value, decode it before using.
Changes from v2:
 - No changes
Changes from v1:
 - No changes
Changes from RFC:
 - Moved zcr_el2 field introduction in this patch, restore its
   content inside sve_restore_state function. (Julien)
---
 xen/arch/arm/arm64/sve-asm.S             | 141 +++++++++++++++++++++++
 xen/arch/arm/arm64/sve.c                 |  63 ++++++++++
 xen/arch/arm/arm64/vfp.c                 |  79 +++++++------
 xen/arch/arm/domain.c                    |   9 ++
 xen/arch/arm/include/asm/arm64/sve.h     |  13 +++
 xen/arch/arm/include/asm/arm64/sysregs.h |   3 +
 xen/arch/arm/include/asm/arm64/vfp.h     |  12 ++
 xen/arch/arm/include/asm/domain.h        |   2 +
 8 files changed, 288 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
index 4d1549344733..8c37d7bc95d5 100644
--- a/xen/arch/arm/arm64/sve-asm.S
+++ b/xen/arch/arm/arm64/sve-asm.S
@@ -17,6 +17,18 @@
     .endif
 .endm
 
+.macro _sve_check_zreg znr
+    .if (\znr) < 0 || (\znr) > 31
+        .error "Bad Scalable Vector Extension vector register number \znr."
+    .endif
+.endm
+
+.macro _sve_check_preg pnr
+    .if (\pnr) < 0 || (\pnr) > 15
+        .error "Bad Scalable Vector Extension predicate register number \pnr."
+    .endif
+.endm
+
 .macro _check_num n, min, max
     .if (\n) < (\min) || (\n) > (\max)
         .error "Number \n out of range [\min,\max]"
@@ -26,6 +38,54 @@
 /* SVE instruction encodings for non-SVE-capable assemblers */
 /* (pre binutils 2.28, all kernel capable clang versions support SVE) */
 
+/* STR (vector): STR Z\nz, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_str_v nz, nxbase, offset=0
+    _sve_check_zreg \nz
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0xe5804000                \
+        | (\nz)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* LDR (vector): LDR Z\nz, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_ldr_v nz, nxbase, offset=0
+    _sve_check_zreg \nz
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0x85804000                \
+        | (\nz)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* STR (predicate): STR P\np, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_str_p np, nxbase, offset=0
+    _sve_check_preg \np
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0xe5800000                \
+        | (\np)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* LDR (predicate): LDR P\np, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_ldr_p np, nxbase, offset=0
+    _sve_check_preg \np
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0x85800000                \
+        | (\np)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
 /* RDVL X\nx, #\imm */
 .macro _sve_rdvl nx, imm
     _check_general_reg \nx
@@ -35,11 +95,92 @@
         | (((\imm) & 0x3f) << 5)
 .endm
 
+/* RDFFR (unpredicated): RDFFR P\np.B */
+.macro _sve_rdffr np
+    _sve_check_preg \np
+    .inst 0x2519f000                \
+        | (\np)
+.endm
+
+/* WRFFR P\np.B */
+.macro _sve_wrffr np
+    _sve_check_preg \np
+    .inst 0x25289000                \
+        | ((\np) << 5)
+.endm
+
+.macro __for from:req, to:req
+    .if (\from) == (\to)
+        _for__body %\from
+    .else
+        __for %\from, %((\from) + ((\to) - (\from)) / 2)
+        __for %((\from) + ((\to) - (\from)) / 2 + 1), %\to
+    .endif
+.endm
+
+.macro _for var:req, from:req, to:req, insn:vararg
+    .macro _for__body \var:req
+        .noaltmacro
+        \insn
+        .altmacro
+    .endm
+
+    .altmacro
+    __for \from, \to
+    .noaltmacro
+
+    .purgem _for__body
+.endm
+
+.macro sve_save nxzffrctx, nxpctx, save_ffr
+    _for n, 0, 31, _sve_str_v \n, \nxzffrctx, \n - 32
+    _for n, 0, 15, _sve_str_p \n, \nxpctx, \n
+        cbz \save_ffr, 1f
+        _sve_rdffr 0
+        _sve_str_p 0, \nxzffrctx
+        _sve_ldr_p 0, \nxpctx
+        b 2f
+1:
+        str xzr, [x\nxzffrctx]      // Zero out FFR
+2:
+.endm
+
+.macro sve_load nxzffrctx, nxpctx, restore_ffr
+    _for n, 0, 31, _sve_ldr_v \n, \nxzffrctx, \n - 32
+        cbz \restore_ffr, 1f
+        _sve_ldr_p 0, \nxzffrctx
+        _sve_wrffr 0
+1:
+    _for n, 0, 15, _sve_ldr_p \n, \nxpctx, \n
+.endm
+
 /* Gets the current vector register size in bytes */
 GLOBAL(sve_get_hw_vl)
     _sve_rdvl 0, 1
     ret
 
+/*
+ * Save the SVE context
+ *
+ * x0 - pointer to buffer for Z0-31 + FFR
+ * x1 - pointer to buffer for P0-15
+ * x2 - Save FFR if non-zero
+ */
+GLOBAL(sve_save_ctx)
+    sve_save 0, 1, x2
+    ret
+
+/*
+ * Load the SVE context
+ *
+ * x0 - pointer to buffer for Z0-31 + FFR
+ * x1 - pointer to buffer for P0-15
+ * x2 - Restore FFR if non-zero
+ */
+GLOBAL(sve_load_ctx)
+    sve_load 0, 1, x2
+    ret
+
 /*
  * Local variables:
  * mode: ASM
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index 86a5e617bfca..064832b450ff 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -5,6 +5,8 @@
  * Copyright (C) 2022 ARM Ltd.
  */
 
+#include <xen/sched.h>
+#include <xen/sizes.h>
 #include <xen/types.h>
 #include <asm/arm64/sve.h>
 #include <asm/arm64/sysregs.h>
@@ -13,6 +15,24 @@
 #include <asm/system.h>
 
 extern unsigned int sve_get_hw_vl(void);
+extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
+extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
+                         int restore_ffr);
+
+static inline unsigned int sve_zreg_ctx_size(unsigned int vl)
+{
+    /*
+     * Z0-31 registers size in bytes is computed from VL that is in bits, so VL
+     * in bytes is VL/8.
+     */
+    return (vl / 8U) * 32U;
+}
+
+static inline unsigned int sve_ffrreg_ctx_size(unsigned int vl)
+{
+    /* FFR register size is VL/8, which is in bytes (VL/8)/8 */
+    return (vl / 64U);
+}
 
 register_t compute_max_zcr(void)
 {
@@ -60,3 +80,46 @@ unsigned int get_sys_vl_len(void)
     return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
             SVE_VL_MULTIPLE_VAL;
 }
+
+int sve_context_init(struct vcpu *v)
+{
+    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
+    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
+                             sve_ffrreg_ctx_size(sve_vl_bits),
+                             L1_CACHE_BYTES);
+
+    if ( !ctx )
+        return -ENOMEM;
+
+    /* Point to the end of Z0-Z31 memory, just before FFR memory */
+    v->arch.vfp.sve_zreg_ctx_end = ctx +
+        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
+
+    return 0;
+}
+
+void sve_context_free(struct vcpu *v)
+{
+    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
+
+    /* Point back to the beginning of Z0-Z31 + FFR memory */
+    v->arch.vfp.sve_zreg_ctx_end -=
+        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
+
+    XFREE(v->arch.vfp.sve_zreg_ctx_end);
+}
+
+void sve_save_state(struct vcpu *v)
+{
+    v->arch.zcr_el1 = READ_SYSREG(ZCR_EL1);
+
+    sve_save_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
+}
+
+void sve_restore_state(struct vcpu *v)
+{
+    WRITE_SYSREG(v->arch.zcr_el1, ZCR_EL1);
+    WRITE_SYSREG(v->arch.zcr_el2, ZCR_EL2);
+
+    sve_load_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
+}
diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
index 47885e76baae..2d0d7c2e6ddb 100644
--- a/xen/arch/arm/arm64/vfp.c
+++ b/xen/arch/arm/arm64/vfp.c
@@ -2,29 +2,35 @@
 #include <asm/processor.h>
 #include <asm/cpufeature.h>
 #include <asm/vfp.h>
+#include <asm/arm64/sve.h>
 
 void vfp_save_state(struct vcpu *v)
 {
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
-                 "stp q2, q3, [%1, #16 * 2]\n\t"
-                 "stp q4, q5, [%1, #16 * 4]\n\t"
-                 "stp q6, q7, [%1, #16 * 6]\n\t"
-                 "stp q8, q9, [%1, #16 * 8]\n\t"
-                 "stp q10, q11, [%1, #16 * 10]\n\t"
-                 "stp q12, q13, [%1, #16 * 12]\n\t"
-                 "stp q14, q15, [%1, #16 * 14]\n\t"
-                 "stp q16, q17, [%1, #16 * 16]\n\t"
-                 "stp q18, q19, [%1, #16 * 18]\n\t"
-                 "stp q20, q21, [%1, #16 * 20]\n\t"
-                 "stp q22, q23, [%1, #16 * 22]\n\t"
-                 "stp q24, q25, [%1, #16 * 24]\n\t"
-                 "stp q26, q27, [%1, #16 * 26]\n\t"
-                 "stp q28, q29, [%1, #16 * 28]\n\t"
-                 "stp q30, q31, [%1, #16 * 30]\n\t"
-                 : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
+    if ( is_sve_domain(v->domain) )
+        sve_save_state(v);
+    else
+    {
+        asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
+                     "stp q2, q3, [%1, #16 * 2]\n\t"
+                     "stp q4, q5, [%1, #16 * 4]\n\t"
+                     "stp q6, q7, [%1, #16 * 6]\n\t"
+                     "stp q8, q9, [%1, #16 * 8]\n\t"
+                     "stp q10, q11, [%1, #16 * 10]\n\t"
+                     "stp q12, q13, [%1, #16 * 12]\n\t"
+                     "stp q14, q15, [%1, #16 * 14]\n\t"
+                     "stp q16, q17, [%1, #16 * 16]\n\t"
+                     "stp q18, q19, [%1, #16 * 18]\n\t"
+                     "stp q20, q21, [%1, #16 * 20]\n\t"
+                     "stp q22, q23, [%1, #16 * 22]\n\t"
+                     "stp q24, q25, [%1, #16 * 24]\n\t"
+                     "stp q26, q27, [%1, #16 * 26]\n\t"
+                     "stp q28, q29, [%1, #16 * 28]\n\t"
+                     "stp q30, q31, [%1, #16 * 30]\n\t"
+                     : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
+    }
 
     v->arch.vfp.fpsr = READ_SYSREG(FPSR);
     v->arch.vfp.fpcr = READ_SYSREG(FPCR);
@@ -37,23 +43,28 @@ void vfp_restore_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
-                 "ldp q2, q3, [%1, #16 * 2]\n\t"
-                 "ldp q4, q5, [%1, #16 * 4]\n\t"
-                 "ldp q6, q7, [%1, #16 * 6]\n\t"
-                 "ldp q8, q9, [%1, #16 * 8]\n\t"
-                 "ldp q10, q11, [%1, #16 * 10]\n\t"
-                 "ldp q12, q13, [%1, #16 * 12]\n\t"
-                 "ldp q14, q15, [%1, #16 * 14]\n\t"
-                 "ldp q16, q17, [%1, #16 * 16]\n\t"
-                 "ldp q18, q19, [%1, #16 * 18]\n\t"
-                 "ldp q20, q21, [%1, #16 * 20]\n\t"
-                 "ldp q22, q23, [%1, #16 * 22]\n\t"
-                 "ldp q24, q25, [%1, #16 * 24]\n\t"
-                 "ldp q26, q27, [%1, #16 * 26]\n\t"
-                 "ldp q28, q29, [%1, #16 * 28]\n\t"
-                 "ldp q30, q31, [%1, #16 * 30]\n\t"
-                 : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
+    if ( is_sve_domain(v->domain) )
+        sve_restore_state(v);
+    else
+    {
+        asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
+                     "ldp q2, q3, [%1, #16 * 2]\n\t"
+                     "ldp q4, q5, [%1, #16 * 4]\n\t"
+                     "ldp q6, q7, [%1, #16 * 6]\n\t"
+                     "ldp q8, q9, [%1, #16 * 8]\n\t"
+                     "ldp q10, q11, [%1, #16 * 10]\n\t"
+                     "ldp q12, q13, [%1, #16 * 12]\n\t"
+                     "ldp q14, q15, [%1, #16 * 14]\n\t"
+                     "ldp q16, q17, [%1, #16 * 16]\n\t"
+                     "ldp q18, q19, [%1, #16 * 18]\n\t"
+                     "ldp q20, q21, [%1, #16 * 20]\n\t"
+                     "ldp q22, q23, [%1, #16 * 22]\n\t"
+                     "ldp q24, q25, [%1, #16 * 24]\n\t"
+                     "ldp q26, q27, [%1, #16 * 26]\n\t"
+                     "ldp q28, q29, [%1, #16 * 28]\n\t"
+                     "ldp q30, q31, [%1, #16 * 30]\n\t"
+                     : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
+    }
 
     WRITE_SYSREG(v->arch.vfp.fpsr, FPSR);
     WRITE_SYSREG(v->arch.vfp.fpcr, FPCR);
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 143359d0f313..24c722a4a11e 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -552,7 +552,14 @@ int arch_vcpu_create(struct vcpu *v)
 
     v->arch.cptr_el2 = get_default_cptr_flags();
     if ( is_sve_domain(v->domain) )
+    {
+        if ( (rc = sve_context_init(v)) != 0 )
+            goto fail;
         v->arch.cptr_el2 &= ~HCPTR_CP(8);
+#ifdef CONFIG_ARM64_SVE
+        v->arch.zcr_el2 = vl_to_zcr(sve_decode_vl(v->domain->arch.sve_vl));
+#endif
+    }
 
     v->arch.hcr_el2 = get_default_hcr_flags();
 
@@ -582,6 +589,8 @@ fail:
 
 void arch_vcpu_destroy(struct vcpu *v)
 {
+    if ( is_sve_domain(v->domain) )
+        sve_context_free(v);
     vcpu_timer_destroy(v);
     vcpu_vgic_free(v);
     free_xenheap_pages(v->arch.stack, STACK_ORDER);
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index 730c3fb5a9c8..582405dfdf6a 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -26,6 +26,10 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
 register_t compute_max_zcr(void);
 register_t vl_to_zcr(unsigned int vl);
 unsigned int get_sys_vl_len(void);
+int sve_context_init(struct vcpu *v);
+void sve_context_free(struct vcpu *v);
+void sve_save_state(struct vcpu *v);
+void sve_restore_state(struct vcpu *v);
 
 #else /* !CONFIG_ARM64_SVE */
 
@@ -46,6 +50,15 @@ static inline unsigned int get_sys_vl_len(void)
     return 0;
 }
 
+static inline int sve_context_init(struct vcpu *v)
+{
+    return 0;
+}
+
+static inline void sve_context_free(struct vcpu *v) {}
+static inline void sve_save_state(struct vcpu *v) {}
+static inline void sve_restore_state(struct vcpu *v) {}
+
 #endif /* CONFIG_ARM64_SVE */
 
 #endif /* _ARM_ARM64_SVE_H */
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 4cabb9eb4d5e..3fdeb9d8cdef 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -88,6 +88,9 @@
 #ifndef ID_AA64ISAR2_EL1
 #define ID_AA64ISAR2_EL1            S3_0_C0_C6_2
 #endif
+#ifndef ZCR_EL1
+#define ZCR_EL1                     S3_0_C1_C2_0
+#endif
 
 /* ID registers (imported from arm64/include/asm/sysreg.h in Linux) */
 
diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
index e6e8c363bc16..4aa371e85d26 100644
--- a/xen/arch/arm/include/asm/arm64/vfp.h
+++ b/xen/arch/arm/include/asm/arm64/vfp.h
@@ -6,7 +6,19 @@
 
 struct vfp_state
 {
+    /*
+     * When SVE is enabled for the guest, fpregs memory will be used to
+     * save/restore P0-P15 registers, otherwise it will be used for the V0-V31
+     * registers.
+     */
     uint64_t fpregs[64] __vfp_aligned;
+    /*
+     * When SVE is enabled for the guest, sve_zreg_ctx_end points to memory
+     * where Z0-Z31 registers and FFR can be saved/restored, it points at the
+     * end of the Z0-Z31 space and at the beginning of the FFR space, it's done
+     * like that to ease the save/restore assembly operations.
+     */
+    uint64_t *sve_zreg_ctx_end;
     register_t fpcr;
     register_t fpexc32_el2;
     register_t fpsr;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 331da0f3bcc3..814652d92568 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -195,6 +195,8 @@ struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+    register_t zcr_el1;
+    register_t zcr_el2;
     register_t cptr_el2;
     register_t hcr_el2;
     register_t mdcr_el2;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525073.816083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI6-0002CA-Bm; Mon, 24 Apr 2023 06:03:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525073.816083; Mon, 24 Apr 2023 06:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI6-0002C0-6r; Mon, 24 Apr 2023 06:03:10 +0000
Received: by outflank-mailman (input) for mailman id 525073;
 Mon, 24 Apr 2023 06:03:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpI4-0000mg-U7
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:08 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id aeab044f-e265-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 08:03:07 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 423E21691;
 Sun, 23 Apr 2023 23:03:50 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id F3E183F587;
 Sun, 23 Apr 2023 23:03:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aeab044f-e265-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v6 06/12] xen/common: add dom0 xen command line argument for Arm
Date: Mon, 24 Apr 2023 07:02:42 +0100
Message-Id: <20230424060248.1488859-7-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently x86 defines a Xen command line argument dom0=<list> through
which dom0-controlling sub-options can be specified. To make it usable
on Arm as well, move the code that loops through the list of arguments
from x86 to common code and, from there, call architecture-specific
functions to handle the comma-separated sub-options.

No functional changes are intended.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes from v5:
 - Add Bertrand R-by
Changes from v4:
 - return EINVAL in Arm implementation of parse_arch_dom0_param,
   shorten variable names in the function from str_begin, str_end to
   s, e. Removed variable rc from x86 parse_arch_dom0_param
   implementation. (Jan)
 - Add R-By Jan
Changes from v3:
 - new patch
---
 xen/arch/arm/domain_build.c |  5 ++++
 xen/arch/x86/dom0_build.c   | 48 ++++++++++++++-----------------------
 xen/common/domain.c         | 23 ++++++++++++++++++
 xen/include/xen/domain.h    |  1 +
 4 files changed, 47 insertions(+), 30 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index ffabe567ac3f..d9450416f665 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -60,6 +60,11 @@ static int __init parse_dom0_mem(const char *s)
 }
 custom_param("dom0_mem", parse_dom0_mem);
 
+int __init parse_arch_dom0_param(const char *s, const char *e)
+{
+    return -EINVAL;
+}
+
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 79234f18ff01..9f5300a3efbb 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -266,42 +266,30 @@ bool __initdata opt_dom0_pvh = !IS_ENABLED(CONFIG_PV);
 bool __initdata opt_dom0_verbose = IS_ENABLED(CONFIG_VERBOSE_DEBUG);
 bool __initdata opt_dom0_msr_relaxed;
 
-static int __init cf_check parse_dom0_param(const char *s)
+int __init parse_arch_dom0_param(const char *s, const char *e)
 {
-    const char *ss;
-    int rc = 0;
+    int val;
 
-    do {
-        int val;
-
-        ss = strchr(s, ',');
-        if ( !ss )
-            ss = strchr(s, '\0');
-
-        if ( IS_ENABLED(CONFIG_PV) && !cmdline_strcmp(s, "pv") )
-            opt_dom0_pvh = false;
-        else if ( IS_ENABLED(CONFIG_HVM) && !cmdline_strcmp(s, "pvh") )
-            opt_dom0_pvh = true;
+    if ( IS_ENABLED(CONFIG_PV) && !cmdline_strcmp(s, "pv") )
+        opt_dom0_pvh = false;
+    else if ( IS_ENABLED(CONFIG_HVM) && !cmdline_strcmp(s, "pvh") )
+        opt_dom0_pvh = true;
 #ifdef CONFIG_SHADOW_PAGING
-        else if ( (val = parse_boolean("shadow", s, ss)) >= 0 )
-            opt_dom0_shadow = val;
+    else if ( (val = parse_boolean("shadow", s, e)) >= 0 )
+        opt_dom0_shadow = val;
 #endif
-        else if ( (val = parse_boolean("verbose", s, ss)) >= 0 )
-            opt_dom0_verbose = val;
-        else if ( IS_ENABLED(CONFIG_PV) &&
-                  (val = parse_boolean("cpuid-faulting", s, ss)) >= 0 )
-            opt_dom0_cpuid_faulting = val;
-        else if ( (val = parse_boolean("msr-relaxed", s, ss)) >= 0 )
-            opt_dom0_msr_relaxed = val;
-        else
-            rc = -EINVAL;
-
-        s = ss + 1;
-    } while ( *ss );
+    else if ( (val = parse_boolean("verbose", s, e)) >= 0 )
+        opt_dom0_verbose = val;
+    else if ( IS_ENABLED(CONFIG_PV) &&
+              (val = parse_boolean("cpuid-faulting", s, e)) >= 0 )
+        opt_dom0_cpuid_faulting = val;
+    else if ( (val = parse_boolean("msr-relaxed", s, e)) >= 0 )
+        opt_dom0_msr_relaxed = val;
+    else
+        return -EINVAL;
 
-    return rc;
+    return 0;
 }
-custom_param("dom0", parse_dom0_param);
 
 static char __initdata opt_dom0_ioports_disable[200] = "";
 string_param("dom0_ioports_disable", opt_dom0_ioports_disable);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 626debbae095..7779ba088675 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -364,6 +364,29 @@ static int __init cf_check parse_extra_guest_irqs(const char *s)
 }
 custom_param("extra_guest_irqs", parse_extra_guest_irqs);
 
+static int __init cf_check parse_dom0_param(const char *s)
+{
+    const char *ss;
+    int rc = 0;
+
+    do {
+        int ret;
+
+        ss = strchr(s, ',');
+        if ( !ss )
+            ss = strchr(s, '\0');
+
+        ret = parse_arch_dom0_param(s, ss);
+        if ( ret && !rc )
+            rc = ret;
+
+        s = ss + 1;
+    } while ( *ss );
+
+    return rc;
+}
+custom_param("dom0", parse_dom0_param);
+
 /*
  * Release resources held by a domain.  There may or may not be live
  * references to the domain, and it may or may not be fully constructed.
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 26f9c4f6dd5b..1df8f933d076 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -16,6 +16,7 @@ typedef union {
 struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id);
 
 unsigned int dom0_max_vcpus(void);
+int parse_arch_dom0_param(const char *s, const char *e);
 struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
 
 int vcpu_reset(struct vcpu *);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525067.816023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI0-0000f8-7S; Mon, 24 Apr 2023 06:03:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525067.816023; Mon, 24 Apr 2023 06:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI0-0000ew-1f; Mon, 24 Apr 2023 06:03:04 +0000
Received: by outflank-mailman (input) for mailman id 525067;
 Mon, 24 Apr 2023 06:03:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpHy-0000eR-NX
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:02 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id aa306986-e265-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 08:02:59 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D9DE8D75;
 Sun, 23 Apr 2023 23:03:42 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C0FA63F587;
 Sun, 23 Apr 2023 23:02:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa306986-e265-11ed-b223-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH v6 00/12] SVE feature for arm guests
Date: Mon, 24 Apr 2023 07:02:36 +0100
Message-Id: <20230424060248.1488859-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series introduces the possibility for Dom0 and DomU guests to use
SVE/SVE2 instructions.

The SVE feature introduces new instructions and registers to improve performance
of floating point operations.

The SVE feature is advertised via the SVE field of the ID_AA64PFR0_EL1 register;
when it is available, the ID_AA64ZFR0_EL1 register provides additional
information about the implemented version and other SVE features.

New registers added by the SVE feature are Z0-Z31, P0-P15, FFR, ZCR_ELx.

Z0-Z31 are scalable vector registers whose size is implementation defined,
ranging from 128 bits up to a maximum of 2048; the term vector length (VL) will
be used to refer to this quantity.
P0-P15 are predicate registers whose size is the vector length divided by 8;
the FFR (First Fault Register) has the same size.
ZCR_ELx is a register that can control and restrict the maximum vector length
used by exception level <x> and all lower exception levels, so for
example EL3 can restrict the vector length usable by EL3, EL2, EL1 and EL0.

The platform has a maximum implemented vector length, so for every value
written to a ZCR register, if that value is above the implemented length,
the lower value will be used instead. The RDVL instruction can be used to
check which vector length the hardware is actually using after setting ZCR.
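The clamping behaviour described above can be sketched as a tiny C helper
(illustrative only; `effective_vl` is not a function from this series, and
the implemented maximum is passed in rather than probed via RDVL):

```c
#include <assert.h>

/*
 * Illustrative sketch of ZCR clamping: requesting a vector length above
 * the implemented maximum results in the implemented maximum being used.
 * Vector lengths are in bits and are multiples of 128.
 */
static unsigned int effective_vl(unsigned int requested_vl,
                                 unsigned int implemented_vl)
{
    /* Hardware uses the lower of the two values. */
    return requested_vl < implemented_vl ? requested_vl : implemented_vl;
}
```

For example, on a platform implementing a 512-bit maximum, writing a LEN
value corresponding to 2048 bits still yields a 512-bit vector length.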

For an SVE guest, the V0-V31 registers are part of Z0-Z31, so there is no
need to save them separately: saving Z0-Z31 implicitly saves V0-V31 as well.

SVE usage can be trapped using a flag in CPTR_EL2, hence in this series the
register is added to the domain state so that only the guests that are not
allowed to use SVE are trapped.

This series introduces a command line parameter to enable Dom0 to use SVE
and to set its maximum vector length, which by default is 0, meaning the
guest is not allowed to use SVE. Values from 128 to 2048 mean the guest can
use SVE, with the selected value used as the maximum allowed vector length
(which could be lower if the implemented one is lower).
For DomUs, an XL parameter with the same semantics is introduced and a
dom0less DTB binding is created.
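The parameter semantics can be sketched in plain C (a simplified stand-in
for the series' `sve_domctl_vl_param`; here `hw_max_vl` is passed in
explicitly instead of being read from `get_sys_vl_len()`):

```c
#include <assert.h>
#include <stdbool.h>

#define SVE_VL_MULTIPLE 128

/*
 * Sketch of the sve= parameter semantics used in this series:
 *   val < 0  -> use the hardware maximum vector length,
 *   val > 0  -> explicit vector length, must be a multiple of 128,
 *   otherwise the value is rejected.
 * (val == 0, SVE disabled, is handled by the caller and never reaches
 * this validation step.)
 */
static bool sve_vl_param(int val, unsigned int hw_max_vl, unsigned int *out)
{
    if ( val < 0 )
        *out = hw_max_vl;
    else if ( (val % SVE_VL_MULTIPLE) == 0 )
        *out = (unsigned int)val;
    else
        return false;

    return true;
}
```

Note that an explicit value above the hardware maximum passes this check;
as described in the series, that case is caught later and domain creation
fails.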

The context switch is the most critical part because there can be large
registers to save. In this series an easy approach is used: the context is
saved/restored on every switch for the guests that are allowed to use SVE.
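For a sense of the amount of state involved, the register sizes given
above yield a rough per-vCPU SVE context size (an illustration only; this
is not the exact buffer layout used by the series):

```c
#include <assert.h>

/*
 * Rough per-vCPU SVE register state in bytes, from the sizes above:
 * 32 Z registers of VL bits each, plus 16 predicate registers and the
 * FFR, each of VL/8 bits (i.e. VL/64 bytes).
 */
static unsigned int sve_ctx_bytes(unsigned int vl_bits)
{
    unsigned int z_bytes = 32 * (vl_bits / 8);
    unsigned int p_bytes = 17 * (vl_bits / 64);   /* P0-P15 + FFR */

    return z_bytes + p_bytes;
}
```

At the 2048-bit maximum this is several kilobytes per vCPU, which is why
the cost of the save/restore-every-time approach is worth calling out.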

Luca Fancellu (12):
  xen/arm: enable SVE extension for Xen
  xen/arm: add SVE vector length field to the domain
  xen/arm: Expose SVE feature to the guest
  xen/arm: add SVE exception class handling
  arm/sve: save/restore SVE context switch
  xen/common: add dom0 xen command line argument for Arm
  xen: enable Dom0 to use SVE feature
  xen/physinfo: encode Arm SVE vector length in arch_capabilities
  tools: add physinfo arch_capabilities handling for Arm
  xen/tools: add sve parameter in XL configuration
  xen/arm: add sve property for dom0less domUs
  xen/changelog: Add SVE and "dom0" options to the changelog for Arm

 CHANGELOG.md                                  |   3 +
 SUPPORT.md                                    |   6 +
 docs/man/xl.cfg.5.pod.in                      |  16 ++
 docs/misc/arm/device-tree/booting.txt         |  16 ++
 docs/misc/xen-command-line.pandoc             |  20 +-
 tools/golang/xenlight/helpers.gen.go          |   4 +
 tools/golang/xenlight/types.gen.go            |  24 +++
 tools/include/libxl.h                         |  11 +
 .../include/xen-tools/arm-arch-capabilities.h |  28 +++
 tools/include/xen-tools/common-macros.h       |   2 +
 tools/libs/light/libxl.c                      |   1 +
 tools/libs/light/libxl_arm.c                  |  28 +++
 tools/libs/light/libxl_internal.h             |   1 -
 tools/libs/light/libxl_types.idl              |  23 +++
 tools/ocaml/libs/xc/xenctrl.ml                |   4 +-
 tools/ocaml/libs/xc/xenctrl.mli               |   4 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c           |   8 +-
 tools/python/xen/lowlevel/xc/xc.c             |   8 +-
 tools/xl/xl_info.c                            |   8 +
 tools/xl/xl_parse.c                           |   8 +
 xen/arch/arm/Kconfig                          |  10 +-
 xen/arch/arm/arm64/Makefile                   |   1 +
 xen/arch/arm/arm64/cpufeature.c               |   7 +-
 xen/arch/arm/arm64/domctl.c                   |   4 +
 xen/arch/arm/arm64/sve-asm.S                  | 189 ++++++++++++++++++
 xen/arch/arm/arm64/sve.c                      | 145 ++++++++++++++
 xen/arch/arm/arm64/vfp.c                      |  79 ++++----
 xen/arch/arm/arm64/vsysreg.c                  |  40 +++-
 xen/arch/arm/cpufeature.c                     |   6 +-
 xen/arch/arm/domain.c                         |  47 ++++-
 xen/arch/arm/domain_build.c                   |  61 ++++++
 xen/arch/arm/include/asm/arm64/sve.h          |  86 ++++++++
 xen/arch/arm/include/asm/arm64/sysregs.h      |   4 +
 xen/arch/arm/include/asm/arm64/vfp.h          |  12 ++
 xen/arch/arm/include/asm/cpufeature.h         |  14 ++
 xen/arch/arm/include/asm/domain.h             |   8 +
 xen/arch/arm/include/asm/processor.h          |   3 +
 xen/arch/arm/setup.c                          |   5 +-
 xen/arch/arm/sysctl.c                         |   4 +
 xen/arch/arm/traps.c                          |  37 +++-
 xen/arch/x86/dom0_build.c                     |  48 ++---
 xen/common/domain.c                           |  23 +++
 xen/common/kernel.c                           |  25 +++
 xen/include/public/arch-arm.h                 |   2 +
 xen/include/public/domctl.h                   |   2 +-
 xen/include/public/sysctl.h                   |   4 +
 xen/include/xen/domain.h                      |   1 +
 xen/include/xen/lib.h                         |  10 +
 48 files changed, 995 insertions(+), 105 deletions(-)
 create mode 100644 tools/include/xen-tools/arm-arch-capabilities.h
 create mode 100644 xen/arch/arm/arm64/sve-asm.S
 create mode 100644 xen/arch/arm/arm64/sve.c
 create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525070.816050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI2-0001H4-6w; Mon, 24 Apr 2023 06:03:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525070.816050; Mon, 24 Apr 2023 06:03:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI2-0001Dm-0b; Mon, 24 Apr 2023 06:03:06 +0000
Received: by outflank-mailman (input) for mailman id 525070;
 Mon, 24 Apr 2023 06:03:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpI0-0000eR-U4
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:04 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id acfa86d4-e265-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 08:03:04 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8CF521684;
 Sun, 23 Apr 2023 23:03:47 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C60BC3F587;
 Sun, 23 Apr 2023 23:03:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acfa86d4-e265-11ed-b223-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v6 04/12] xen/arm: add SVE exception class handling
Date: Mon, 24 Apr 2023 07:02:40 +0100
Message-Id: <20230424060248.1488859-5-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SVE introduces a new exception class with code 0x19; add the new code and
handle the exception.
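As a sketch, the dispatch keys off the exception class field held in
HSR/ESR bits [31:26] (`hsr_ec` below is illustrative; Xen's trap handlers
use their own hsr union rather than this helper):

```c
#include <assert.h>
#include <stdint.h>

#define HSR_EC_SHIFT 26
#define HSR_EC_MASK  0x3fU
#define HSR_EC_SVE   0x19

/* Extract the exception class from a raw HSR/ESR value. */
static unsigned int hsr_ec(uint32_t hsr)
{
    return (hsr >> HSR_EC_SHIFT) & HSR_EC_MASK;
}
```

A guest trap with EC 0x19 gets an undef injected; the same EC taken at
EL2 indicates a hypervisor bug, matching the two hunks below.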

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes from v5:
 - modified error messages (Julien)
 - add R-by Bertrand
Changes from v4:
 - No changes
Changes from v3:
 - No changes
Changes from v2:
 - No changes
Changes from v1:
 - No changes
Changes from RFC:
 - No changes
---
 xen/arch/arm/include/asm/processor.h | 1 +
 xen/arch/arm/traps.c                 | 9 +++++++++
 2 files changed, 10 insertions(+)

diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index bc683334125c..7e42ff8811fc 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -426,6 +426,7 @@
 #define HSR_EC_HVC64                0x16
 #define HSR_EC_SMC64                0x17
 #define HSR_EC_SYSREG               0x18
+#define HSR_EC_SVE                  0x19
 #endif
 #define HSR_EC_INSTR_ABORT_LOWER_EL 0x20
 #define HSR_EC_INSTR_ABORT_CURR_EL  0x21
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index c0611c2ef6a5..d672d2c694ef 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2173,6 +2173,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
         perfc_incr(trap_sysreg);
         do_sysreg(regs, hsr);
         break;
+    case HSR_EC_SVE:
+        GUEST_BUG_ON(regs_mode_is_32bit(regs));
+        gprintk(XENLOG_WARNING, "Domain tried to use SVE while not allowed\n");
+        inject_undef_exception(regs, hsr);
+        break;
 #endif
 
     case HSR_EC_INSTR_ABORT_LOWER_EL:
@@ -2202,6 +2207,10 @@ void do_trap_hyp_sync(struct cpu_user_regs *regs)
     case HSR_EC_BRK:
         do_trap_brk(regs, hsr);
         break;
+    case HSR_EC_SVE:
+        /* An SVE exception is a bug somewhere in hypervisor code */
+        do_unexpected_trap("SVE trap at EL2", regs);
+        break;
 #endif
     case HSR_EC_DATA_ABORT_CURR_EL:
     case HSR_EC_INSTR_ABORT_CURR_EL:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525075.816098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI8-0002Xp-48; Mon, 24 Apr 2023 06:03:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525075.816098; Mon, 24 Apr 2023 06:03:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI7-0002Wr-R9; Mon, 24 Apr 2023 06:03:11 +0000
Received: by outflank-mailman (input) for mailman id 525075;
 Mon, 24 Apr 2023 06:03:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpI6-0000mg-Ek
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:10 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id af8040c4-e265-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 08:03:08 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B18381692;
 Sun, 23 Apr 2023 23:03:51 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 87C823F587;
 Sun, 23 Apr 2023 23:03:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af8040c4-e265-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Date: Mon, 24 Apr 2023 07:02:43 +0100
Message-Id: <20230424060248.1488859-8-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a command line parameter to allow Dom0 the use of SVE resources. The
command line parameter sve=<integer>, a sub-argument of dom0=, controls
the feature on this domain and sets the maximum SVE vector length for
Dom0.

Add a new function, parse_signed_integer(), to parse a signed integer
command line argument.
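A simplified userspace sketch of the parsing logic (`parse_named_int` is a
hypothetical stand-in name; the real parse_signed_integer() also takes an
end pointer into the command line and distinguishes "name not found" from
"number not recognised" via -1/-2 return codes):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/*
 * Given a NUL-terminated fragment of the form "name=<number>", return
 * true and store the parsed value on a match, false otherwise.
 */
static bool parse_named_int(const char *name, const char *frag,
                            long long *val)
{
    size_t nlen = strlen(name);
    char *end;
    long long v;

    /* Check that this is the name we're looking for, followed by '='. */
    if ( strncmp(frag, name, nlen) || frag[nlen] != '=' )
        return false;

    v = strtoll(&frag[nlen + 1], &end, 0);

    /* Trailing junk: number not recognised. */
    if ( *end != '\0' )
        return false;

    *val = v;
    return true;
}
```

With this shape, "sve=-1" selects the hardware maximum and "sve=256" an
explicit vector length, matching the dom0= sub-argument semantics.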

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v5:
 - stop the domain if VL error occurs (Julien, Bertrand)
 - update the documentation
 - Rename sve_sanitize_vl_param to sve_domctl_vl_param to
   mark the fact that we are sanitizing a parameter coming from
   the user before encoding it into sve_vl in domctl structure.
   (suggestion from Bertrand in a separate discussion)
 - update comment in parse_signed_integer, return boolean in
   sve_domctl_vl_param (Jan).
Changes from v4:
 - Negative values as user param means max supported HW VL (Jan)
 - update documentation, make use of no_config_param(), rename
   parse_integer into parse_signed_integer and take long long *,
   also put a comment on the -2 return condition, update
   declaration comment to reflect the modifications (Jan)
Changes from v3:
 - Don't use fixed len types when not needed (Jan)
 - renamed domainconfig_encode_vl to sve_encode_vl
 - Use a sub argument of dom0= to enable the feature (Jan)
 - Add parse_integer() function
Changes from v2:
 - xen_domctl_createdomain field has changed into sve_vl and its
   value now is the VL / 128; create a helper function for that.
Changes from v1:
 - No changes
Changes from RFC:
 - Changed docs to explain that the domain won't be created if the
   requested vector length is above the supported one from the
   platform.
---
 docs/misc/xen-command-line.pandoc    | 20 ++++++++++++++++++--
 xen/arch/arm/arm64/sve.c             | 20 ++++++++++++++++++++
 xen/arch/arm/domain_build.c          | 25 +++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/sve.h | 14 ++++++++++++++
 xen/common/kernel.c                  | 25 +++++++++++++++++++++++++
 xen/include/xen/lib.h                | 10 ++++++++++
 6 files changed, 112 insertions(+), 2 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e0b89b7d3319..47e5b4eb6199 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -777,9 +777,9 @@ Specify the bit width of the DMA heap.
 
 ### dom0
     = List of [ pv | pvh, shadow=<bool>, verbose=<bool>,
-                cpuid-faulting=<bool>, msr-relaxed=<bool> ]
+                cpuid-faulting=<bool>, msr-relaxed=<bool> ] (x86)
 
-    Applicability: x86
+    = List of [ sve=<integer> ] (Arm)
 
 Controls for how dom0 is constructed on x86 systems.
 
@@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
 
     If using this option is necessary to fix an issue, please report a bug.
 
+Enables features on dom0 on Arm systems.
+
+*   The `sve` integer parameter enables Arm SVE usage for the Dom0 domain and
+    sets the maximum SVE vector length; the option is applicable only to
+    AArch64 guests.
+    A value equal to 0 disables the feature; this is the default value.
+    Values below 0 mean the feature uses the maximum SVE vector length
+    supported by the hardware, if SVE is supported.
+    Values above 0 explicitly set the maximum SVE vector length for Dom0;
+    allowed values range from 128 to a maximum of 2048 and must be a
+    multiple of 128.
+    Please note that if the user explicitly specifies a value above the
+    maximum SVE vector length supported by the hardware, domain creation
+    will fail and the system will stop; the same occurs if the option is
+    given a non-zero value but the platform doesn't support SVE.
+
 ### dom0-cpuid
     = List of comma separated booleans
 
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index 064832b450ff..4d964f2b56b4 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -14,6 +14,9 @@
 #include <asm/processor.h>
 #include <asm/system.h>
 
+/* opt_dom0_sve: allow Dom0 to use SVE and set maximum vector length. */
+int __initdata opt_dom0_sve;
+
 extern unsigned int sve_get_hw_vl(void);
 extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
 extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
@@ -123,3 +126,20 @@ void sve_restore_state(struct vcpu *v)
 
     sve_load_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
 }
+
+bool __init sve_domctl_vl_param(int val, unsigned int *out)
+{
+    /*
+     * Negative SVE parameter value means to use the maximum supported
+     * vector length, otherwise if a positive value is provided, check if the
+     * vector length is a multiple of 128
+     */
+    if ( val < 0 )
+        *out = get_sys_vl_len();
+    else if ( (val % SVE_VL_MULTIPLE_VAL) == 0 )
+        *out = val;
+    else
+        return false;
+
+    return true;
+}
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d9450416f665..4a6b73348594 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -62,6 +62,21 @@ custom_param("dom0_mem", parse_dom0_mem);
 
 int __init parse_arch_dom0_param(const char *s, const char *e)
 {
+    long long val;
+
+    if ( !parse_signed_integer("sve", s, e, &val) )
+    {
+#ifdef CONFIG_ARM64_SVE
+        if ( (val >= INT_MIN) && (val <= INT_MAX) )
+            opt_dom0_sve = val;
+        else
+            printk(XENLOG_INFO "'sve=%lld' value out of range!\n", val);
+#else
+        no_config_param("ARM64_SVE", "sve", s, e);
+#endif
+        return 0;
+    }
+
     return -EINVAL;
 }
 
@@ -4117,6 +4132,16 @@ void __init create_dom0(void)
     if ( iommu_enabled )
         dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu;
 
+    if ( opt_dom0_sve )
+    {
+        unsigned int vl;
+
+        if ( sve_domctl_vl_param(opt_dom0_sve, &vl) )
+            dom0_cfg.arch.sve_vl = sve_encode_vl(vl);
+        else
+            panic("SVE vector length error\n");
+    }
+
     dom0 = domain_create(0, &dom0_cfg, CDF_privileged | CDF_directmap);
     if ( IS_ERR(dom0) )
         panic("Error creating domain 0 (rc = %ld)\n", PTR_ERR(dom0));
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index 582405dfdf6a..71bddb41f19c 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -19,8 +19,15 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
     return sve_vl * SVE_VL_MULTIPLE_VAL;
 }
 
+static inline unsigned int sve_encode_vl(unsigned int sve_vl_bits)
+{
+    return sve_vl_bits / SVE_VL_MULTIPLE_VAL;
+}
+
 #ifdef CONFIG_ARM64_SVE
 
+extern int opt_dom0_sve;
+
 #define is_sve_domain(d) ((d)->arch.sve_vl > 0)
 
 register_t compute_max_zcr(void);
@@ -30,9 +37,11 @@ int sve_context_init(struct vcpu *v);
 void sve_context_free(struct vcpu *v);
 void sve_save_state(struct vcpu *v);
 void sve_restore_state(struct vcpu *v);
+bool sve_domctl_vl_param(int val, unsigned int *out);
 
 #else /* !CONFIG_ARM64_SVE */
 
+#define opt_dom0_sve     (0)
 #define is_sve_domain(d) (0)
 
 static inline register_t compute_max_zcr(void)
@@ -59,6 +68,11 @@ static inline void sve_context_free(struct vcpu *v) {}
 static inline void sve_save_state(struct vcpu *v) {}
 static inline void sve_restore_state(struct vcpu *v) {}
 
+static inline bool sve_domctl_vl_param(int val, unsigned int *out)
+{
+    return false;
+}
+
 #endif /* CONFIG_ARM64_SVE */
 
 #endif /* _ARM_ARM64_SVE_H */
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index f7b1f65f373c..b67d9056fec3 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -314,6 +314,31 @@ int parse_boolean(const char *name, const char *s, const char *e)
     return -1;
 }
 
+int __init parse_signed_integer(const char *name, const char *s, const char *e,
+                                long long *val)
+{
+    size_t slen, nlen;
+    const char *str;
+    long long pval;
+
+    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
+    nlen = strlen(name);
+
+    /* Check that this is the name we're looking for and a value was provided */
+    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
+        return -1;
+
+    pval = simple_strtoll(&s[nlen + 1], &str, 0);
+
+    /* Number not recognised */
+    if ( str != e )
+        return -2;
+
+    *val = pval;
+
+    return 0;
+}
+
 int cmdline_strcmp(const char *frag, const char *name)
 {
     for ( ; ; frag++, name++ )
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index e914ccade095..5343ee7a944a 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -94,6 +94,16 @@ int parse_bool(const char *s, const char *e);
  */
 int parse_boolean(const char *name, const char *s, const char *e);
 
+/**
+ * Given a specific name, parses a string of the form:
+ *   $NAME=<integer number>
+ * returning 0 and a value in val, for a recognised integer.
+ * Returns -1 if the name is not found or on general errors, or -2 if the
+ * name is found but the number is not recognised.
+ */
+int parse_signed_integer(const char *name, const char *s, const char *e,
+                         long long *val);
+
 /**
  * Very similar to strcmp(), but will declare a match if the NUL in 'name'
  * lines up with comma, colon, semicolon or equals in 'frag'.  Designed for
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525076.816112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI9-0002xT-B4; Mon, 24 Apr 2023 06:03:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525076.816112; Mon, 24 Apr 2023 06:03:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI9-0002vB-4v; Mon, 24 Apr 2023 06:03:13 +0000
Received: by outflank-mailman (input) for mailman id 525076;
 Mon, 24 Apr 2023 06:03:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpI8-0000eR-Be
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:12 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id b140535c-e265-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 08:03:11 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CED1C1477;
 Sun, 23 Apr 2023 23:03:54 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 72AFE3F587;
 Sun, 23 Apr 2023 23:03:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b140535c-e265-11ed-b223-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Christian Lindig <christian.lindig@cloud.com>
Subject: [PATCH v6 09/12] tools: add physinfo arch_capabilities handling for Arm
Date: Mon, 24 Apr 2023 07:02:45 +0100
Message-Id: <20230424060248.1488859-10-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On Arm, the SVE vector length is encoded in the arch_capabilities field
of struct xen_sysctl_physinfo; make use of this field in the tools when
building for Arm.

Create the header arm-arch-capabilities.h to handle the arch_capabilities
field of physinfo for Arm.

Remove the include of xen-tools/common-macros.h in
python/xen/lowlevel/xc/xc.c because it is already included by the
arm-arch-capabilities.h header.
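The tools-side decoding can be sketched as below (the `ARM_SVE_MASK`
value and field placement here are hypothetical; the real mask is
`XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK`, and `MASK_EXTR` is the macro this
patch moves into common-macros.h):

```c
#include <assert.h>

/* Extract a bitfield value: divide by the mask's lowest set bit. */
#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))

/* Hypothetical placement of the SVE VL field in arch_capabilities. */
#define ARM_SVE_MASK 0x1fU

/*
 * The vector length is stored divided by 128, so multiply back after
 * extracting the field.
 */
static unsigned int decode_sve_vl(unsigned int arch_capabilities)
{
    return MASK_EXTR(arch_capabilities, ARM_SVE_MASK) * 128U;
}
```

This mirrors what arch_capabilities_arm_sve() in the new header does on
aarch64 builds.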

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>
Acked-by: Christian Lindig <christian.lindig@cloud.com>
---
Changes from v4:
 - Move arm-arch-capabilities.h into xen-tools/, add LIBXL_HAVE_,
   fixed python return type to I instead of i. (Anthony)
Changes from v3:
 - add Ack-by for the Golang bits (George)
 - add Ack-by for the OCaml tools (Christian)
 - now xen-tools/libs.h is named xen-tools/common-macros.h
 - changed commit message to explain why the header modification
   in python/xen/lowlevel/xc/xc.c
Changes from v2:
 - rename arm_arch_capabilities.h in arm-arch-capabilities.h, use
   MASK_EXTR.
 - Now arm-arch-capabilities.h needs MASK_EXTR macro, but it is
   defined in libxl_internal.h, it doesn't feel right to include
   that header so move MASK_EXTR into xen-tools/libs.h that is also
   included in libxl_internal.h
Changes from v1:
 - now SVE VL is encoded in arch_capabilities on Arm
Changes from RFC:
 - new patch
---
 tools/golang/xenlight/helpers.gen.go          |  2 ++
 tools/golang/xenlight/types.gen.go            |  1 +
 tools/include/libxl.h                         |  6 ++++
 .../include/xen-tools/arm-arch-capabilities.h | 28 +++++++++++++++++++
 tools/include/xen-tools/common-macros.h       |  2 ++
 tools/libs/light/libxl.c                      |  1 +
 tools/libs/light/libxl_internal.h             |  1 -
 tools/libs/light/libxl_types.idl              |  1 +
 tools/ocaml/libs/xc/xenctrl.ml                |  4 +--
 tools/ocaml/libs/xc/xenctrl.mli               |  4 +--
 tools/ocaml/libs/xc/xenctrl_stubs.c           |  8 ++++--
 tools/python/xen/lowlevel/xc/xc.c             |  8 ++++--
 tools/xl/xl_info.c                            |  8 ++++++
 13 files changed, 62 insertions(+), 12 deletions(-)
 create mode 100644 tools/include/xen-tools/arm-arch-capabilities.h

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 0a203d22321f..35397be2f9e2 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -3506,6 +3506,7 @@ x.CapVmtrace = bool(xc.cap_vmtrace)
 x.CapVpmu = bool(xc.cap_vpmu)
 x.CapGnttabV1 = bool(xc.cap_gnttab_v1)
 x.CapGnttabV2 = bool(xc.cap_gnttab_v2)
+x.ArchCapabilities = uint32(xc.arch_capabilities)
 
  return nil}
 
@@ -3540,6 +3541,7 @@ xc.cap_vmtrace = C.bool(x.CapVmtrace)
 xc.cap_vpmu = C.bool(x.CapVpmu)
 xc.cap_gnttab_v1 = C.bool(x.CapGnttabV1)
 xc.cap_gnttab_v2 = C.bool(x.CapGnttabV2)
+xc.arch_capabilities = C.uint32_t(x.ArchCapabilities)
 
  return nil
  }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index a7c17699f80e..3d968a496744 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -1079,6 +1079,7 @@ CapVmtrace bool
 CapVpmu bool
 CapGnttabV1 bool
 CapGnttabV2 bool
+ArchCapabilities uint32
 }
 
 type Connectorinfo struct {
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index cfa1a191318c..4fa09ff7635a 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -525,6 +525,12 @@
  */
 #define LIBXL_HAVE_PHYSINFO_CAP_GNTTAB 1
 
+/*
+ * LIBXL_HAVE_PHYSINFO_ARCH_CAPABILITIES indicates that libxl_physinfo has a
+ * arch_capabilities field.
+ */
+#define LIBXL_HAVE_PHYSINFO_ARCH_CAPABILITIES 1
+
 /*
  * LIBXL_HAVE_MAX_GRANT_VERSION indicates libxl_domain_build_info has a
  * max_grant_version field for setting the max grant table version per
diff --git a/tools/include/xen-tools/arm-arch-capabilities.h b/tools/include/xen-tools/arm-arch-capabilities.h
new file mode 100644
index 000000000000..ac44c8b14344
--- /dev/null
+++ b/tools/include/xen-tools/arm-arch-capabilities.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+
+#ifndef ARM_ARCH_CAPABILITIES_H
+#define ARM_ARCH_CAPABILITIES_H
+
+#include <stdint.h>
+#include <xen/sysctl.h>
+
+#include <xen-tools/common-macros.h>
+
+static inline
+unsigned int arch_capabilities_arm_sve(unsigned int arch_capabilities)
+{
+#if defined(__aarch64__)
+    unsigned int sve_vl = MASK_EXTR(arch_capabilities,
+                                    XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK);
+
+    /* Vector length is divided by 128 before storing it in arch_capabilities */
+    return sve_vl * 128U;
+#else
+    return 0;
+#endif
+}
+
+#endif /* ARM_ARCH_CAPABILITIES_H */
diff --git a/tools/include/xen-tools/common-macros.h b/tools/include/xen-tools/common-macros.h
index 76b55bf62085..d53b88182560 100644
--- a/tools/include/xen-tools/common-macros.h
+++ b/tools/include/xen-tools/common-macros.h
@@ -72,6 +72,8 @@
 #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
 #endif
 
+#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
+
 #ifndef __must_check
 #define __must_check __attribute__((__warn_unused_result__))
 #endif
diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
index a0bf7d186f69..175d6dde0b80 100644
--- a/tools/libs/light/libxl.c
+++ b/tools/libs/light/libxl.c
@@ -409,6 +409,7 @@ int libxl_get_physinfo(libxl_ctx *ctx, libxl_physinfo *physinfo)
         !!(xcphysinfo.capabilities & XEN_SYSCTL_PHYSCAP_gnttab_v1);
     physinfo->cap_gnttab_v2 =
         !!(xcphysinfo.capabilities & XEN_SYSCTL_PHYSCAP_gnttab_v2);
+    physinfo->arch_capabilities = xcphysinfo.arch_capabilities;
 
     GC_FREE;
     return 0;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 5244fde6239a..8aba3e138909 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -132,7 +132,6 @@
 
 #define DIV_ROUNDUP(n, d) (((n) + (d) - 1) / (d))
 
-#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
 #define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))
 
 #define LIBXL__LOGGING_ENABLED
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index c10292e0d7e3..fd31dacf7d5a 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -1133,6 +1133,7 @@ libxl_physinfo = Struct("physinfo", [
     ("cap_vpmu", bool),
     ("cap_gnttab_v1", bool),
     ("cap_gnttab_v2", bool),
+    ("arch_capabilities", uint32),
     ], dir=DIR_OUT)
 
 libxl_connectorinfo = Struct("connectorinfo", [
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index e4096bf92c1d..bf23ca50bb15 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -128,12 +128,10 @@ type physinfo_cap_flag =
   | CAP_Gnttab_v1
   | CAP_Gnttab_v2
 
-type arm_physinfo_cap_flag
-
 type x86_physinfo_cap_flag
 
 type arch_physinfo_cap_flags =
-  | ARM of arm_physinfo_cap_flag list
+  | ARM of int
   | X86 of x86_physinfo_cap_flag list
 
 type physinfo =
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index ef2254537430..ed1e28ea30a0 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -113,12 +113,10 @@ type physinfo_cap_flag =
   | CAP_Gnttab_v1
   | CAP_Gnttab_v2
 
-type arm_physinfo_cap_flag
-
 type x86_physinfo_cap_flag
 
 type arch_physinfo_cap_flags =
-  | ARM of arm_physinfo_cap_flag list
+  | ARM of int
   | X86 of x86_physinfo_cap_flag list
 
 type physinfo = {
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 6ec9ed6d1e6f..526a3610fa42 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -853,13 +853,15 @@ CAMLprim value stub_xc_physinfo(value xch_val)
 	arch_cap_list = Tag_cons;
 
 	arch_cap_flags_tag = 1; /* tag x86 */
-#else
-	caml_failwith("Unhandled architecture");
-#endif
 
 	arch_cap_flags = caml_alloc_small(1, arch_cap_flags_tag);
 	Store_field(arch_cap_flags, 0, arch_cap_list);
 	Store_field(physinfo, 10, arch_cap_flags);
+#elif defined(__aarch64__)
+	Store_field(physinfo, 10, Val_int(c_physinfo.arch_capabilities));
+#else
+	caml_failwith("Unhandled architecture");
+#endif
 
 	CAMLreturn(physinfo);
 }
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 35901c2d63b6..94b0354cf52f 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -22,6 +22,7 @@
 #include <xen/hvm/hvm_info_table.h>
 #include <xen/hvm/params.h>
 
+#include <xen-tools/arm-arch-capabilities.h>
 #include <xen-tools/common-macros.h>
 
 /* Needed for Python versions earlier than 2.3. */
@@ -897,7 +898,7 @@ static PyObject *pyxc_physinfo(XcObject *self)
     if ( p != virt_caps )
       *(p-1) = '\0';
 
-    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s}",
+    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s,s:I}",
                             "nr_nodes",         pinfo.nr_nodes,
                             "threads_per_core", pinfo.threads_per_core,
                             "cores_per_socket", pinfo.cores_per_socket,
@@ -907,7 +908,10 @@ static PyObject *pyxc_physinfo(XcObject *self)
                             "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
                             "cpu_khz",          pinfo.cpu_khz,
                             "hw_caps",          cpu_cap,
-                            "virt_caps",        virt_caps);
+                            "virt_caps",        virt_caps,
+                            "arm_sve_vl",
+                              arch_capabilities_arm_sve(pinfo.arch_capabilities)
+                        );
 }
 
 static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)
diff --git a/tools/xl/xl_info.c b/tools/xl/xl_info.c
index 712b7638b013..ddc42f96b979 100644
--- a/tools/xl/xl_info.c
+++ b/tools/xl/xl_info.c
@@ -27,6 +27,7 @@
 #include <libxl_json.h>
 #include <libxl_utils.h>
 #include <libxlutil.h>
+#include <xen-tools/arm-arch-capabilities.h>
 
 #include "xl.h"
 #include "xl_utils.h"
@@ -224,6 +225,13 @@ static void output_physinfo(void)
          info.cap_gnttab_v2 ? " gnttab-v2" : ""
         );
 
+    /* Print arm SVE vector length only on ARM platforms */
+#if defined(__aarch64__)
+    maybe_printf("arm_sve_vector_length  : %u\n",
+         arch_capabilities_arm_sve(info.arch_capabilities)
+        );
+#endif
+
     vinfo = libxl_get_version_info(ctx);
     if (vinfo) {
         i = (1 << 20) / vinfo->pagesize;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525071.816063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI3-0001by-Cw; Mon, 24 Apr 2023 06:03:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525071.816063; Mon, 24 Apr 2023 06:03:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI3-0001bn-9B; Mon, 24 Apr 2023 06:03:07 +0000
Received: by outflank-mailman (input) for mailman id 525071;
 Mon, 24 Apr 2023 06:03:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpI2-0000mg-Bb
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:06 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id abc8eec8-e265-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 08:03:02 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7953D1042;
 Sun, 23 Apr 2023 23:03:45 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4DD7E3F587;
 Sun, 23 Apr 2023 23:03:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abc8eec8-e265-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 02/12] xen/arm: add SVE vector length field to the domain
Date: Mon, 24 Apr 2023 07:02:38 +0100
Message-Id: <20230424060248.1488859-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add an sve_vl field to struct arch_domain and struct
xen_arch_domainconfig, so that a domain carries information about the
SVE feature and the number of SVE register bits allowed for it.

The sve_vl field is the vector length in bits divided by 128; this
encoding uses less space in the structures.

The field is also used to allow or forbid a domain to use SVE: a value
of zero means the guest is not allowed to use the feature.

Check that the requested vector length is lower than or equal to the
platform-supported vector length, otherwise fail domain creation.

Check that only 64-bit domains have SVE enabled, otherwise fail.

Bump the XEN_DOMCTL_INTERFACE_VERSION because of the new field
in struct xen_arch_domainconfig.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v5:
 - Update commit message stating the interface ver. bump (Bertrand)
 - in struct arch_domain, protect sve_vl with CONFIG_ARM64_SVE,
   given the change, move also is_sve_domain() where it's protected
   inside sve.h and create a stub when the macro is not defined,
   protect the usage of sve_vl where needed.
   (Julien)
 - Add a check that stops domain creation for a 32-bit guest on top of
   a 64-bit host with the sve parameter enabled; added in
   construct_domain() of domain_build.c and subarch_do_domctl of
   domctl.c. (Julien)
Changes from v4:
 - Return 0 in get_sys_vl_len() if SVE is not supported, code style fix,
   removed 'else if' since the conditions can't fall through, removed the
   unneeded check for VL bits validity because it's already covered, so
   deleted the is_vl_valid() function. (Jan)
Changes from v3:
 - don't use fixed types when not needed, use the encoded value also in
   arch_domain, so rename sve_vl_bits to sve_vl. (Jan)
 - rename domainconfig_decode_vl to sve_decode_vl because it will now
   be used also to decode from arch_domain value
 - change sve_vl from uint16_t to uint8_t and move it after "type" field
   to optimize space.
Changes from v2:
 - rename field in xen_arch_domainconfig from "sve_vl_bits" to
   "sve_vl" and use the implicit padding after gic_version to
   store it, now this field is the VL/128. (Jan)
 - Created domainconfig_decode_vl() function to decode the sve_vl
   field and use it as plain bits value inside arch_domain.
 - Changed commit message reflecting the changes
Changes from v1:
 - no changes
Changes from RFC:
 - restore zcr_el2 in sve_restore_state, that will be introduced
   later in this series, so remove zcr_el2 related code from this
   patch and move everything to the later patch (Julien)
 - add explicit padding into struct xen_arch_domainconfig (Julien)
 - Don't lower the vector length; just fail to create the
   domain. (Julien)
---
 xen/arch/arm/arm64/domctl.c          |  4 ++++
 xen/arch/arm/arm64/sve.c             | 12 ++++++++++++
 xen/arch/arm/domain.c                | 29 ++++++++++++++++++++++++++++
 xen/arch/arm/domain_build.c          |  7 +++++++
 xen/arch/arm/include/asm/arm64/sve.h | 16 +++++++++++++++
 xen/arch/arm/include/asm/domain.h    |  5 +++++
 xen/include/public/arch-arm.h        |  2 ++
 xen/include/public/domctl.h          |  2 +-
 8 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
index 0de89b42c448..14fc622e9956 100644
--- a/xen/arch/arm/arm64/domctl.c
+++ b/xen/arch/arm/arm64/domctl.c
@@ -10,6 +10,7 @@
 #include <xen/sched.h>
 #include <xen/hypercall.h>
 #include <public/domctl.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 
 static long switch_mode(struct domain *d, enum domain_type type)
@@ -43,6 +44,9 @@ long subarch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         case 32:
             if ( !cpu_has_el1_32 )
                 return -EINVAL;
+            /* SVE is not supported for 32 bit domain */
+            if ( is_sve_domain(d) )
+                return -EINVAL;
             return switch_mode(d, DOMAIN_32BIT);
         case 64:
             return switch_mode(d, DOMAIN_64BIT);
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index 6f3fb368c59b..86a5e617bfca 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -8,6 +8,7 @@
 #include <xen/types.h>
 #include <asm/arm64/sve.h>
 #include <asm/arm64/sysregs.h>
+#include <asm/cpufeature.h>
 #include <asm/processor.h>
 #include <asm/system.h>
 
@@ -48,3 +49,14 @@ register_t vl_to_zcr(unsigned int vl)
     ASSERT(vl > 0);
     return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
 }
+
+/* Get the system sanitized value for VL in bits */
+unsigned int get_sys_vl_len(void)
+{
+    if ( !cpu_has_sve )
+        return 0;
+
+    /* ZCR_ELx len field is ((len+1) * 128) = vector bits length */
+    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
+            SVE_VL_MULTIPLE_VAL;
+}
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 0350d8c61ed8..143359d0f313 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -13,6 +13,7 @@
 #include <xen/wait.h>
 
 #include <asm/alternative.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpuerrata.h>
 #include <asm/cpufeature.h>
 #include <asm/current.h>
@@ -550,6 +551,8 @@ int arch_vcpu_create(struct vcpu *v)
     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
     v->arch.cptr_el2 = get_default_cptr_flags();
+    if ( is_sve_domain(v->domain) )
+        v->arch.cptr_el2 &= ~HCPTR_CP(8);
 
     v->arch.hcr_el2 = get_default_hcr_flags();
 
@@ -594,6 +597,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
     unsigned int max_vcpus;
     unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
     unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
+    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
 
     if ( (config->flags & ~flags_optional) != flags_required )
     {
@@ -602,6 +606,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }
 
+    /* Check feature flags */
+    if ( sve_vl_bits > 0 )
+    {
+        unsigned int zcr_max_bits = get_sys_vl_len();
+
+        if ( !zcr_max_bits )
+        {
+            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
+            return -EINVAL;
+        }
+
+        if ( sve_vl_bits > zcr_max_bits )
+        {
+            dprintk(XENLOG_INFO,
+                    "Requested SVE vector length (%u) > supported length (%u)\n",
+                    sve_vl_bits, zcr_max_bits);
+            return -EINVAL;
+        }
+    }
+
     /* The P2M table must always be shared between the CPU and the IOMMU */
     if ( config->iommu_opts & XEN_DOMCTL_IOMMU_no_sharept )
     {
@@ -744,6 +768,11 @@ int arch_domain_create(struct domain *d,
     if ( (rc = domain_vpci_init(d)) != 0 )
         goto fail;
 
+#ifdef CONFIG_ARM64_SVE
+    /* Copy the encoded vector length sve_vl from the domain configuration */
+    d->arch.sve_vl = config->arch.sve_vl;
+#endif
+
     return 0;
 
 fail:
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f80fdd1af206..ffabe567ac3f 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -26,6 +26,7 @@
 #include <asm/platform.h>
 #include <asm/psci.h>
 #include <asm/setup.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 #include <asm/domain_build.h>
 #include <xen/event.h>
@@ -3674,6 +3675,12 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
         return -EINVAL;
     }
 
+    if ( is_sve_domain(d) && (kinfo->type == DOMAIN_32BIT) )
+    {
+        printk("SVE is not available for 32-bit domain\n");
+        return -EINVAL;
+    }
+
     if ( is_64bit_domain(d) )
         vcpu_switch_to_aarch64_mode(v);
 
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index 144d2b1cc485..730c3fb5a9c8 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -13,13 +13,24 @@
 /* Vector length must be multiple of 128 */
 #define SVE_VL_MULTIPLE_VAL (128U)
 
+static inline unsigned int sve_decode_vl(unsigned int sve_vl)
+{
+    /* SVE vector length is stored as VL/128 in xen_arch_domainconfig */
+    return sve_vl * SVE_VL_MULTIPLE_VAL;
+}
+
 #ifdef CONFIG_ARM64_SVE
 
+#define is_sve_domain(d) ((d)->arch.sve_vl > 0)
+
 register_t compute_max_zcr(void);
 register_t vl_to_zcr(unsigned int vl);
+unsigned int get_sys_vl_len(void);
 
 #else /* !CONFIG_ARM64_SVE */
 
+#define is_sve_domain(d) (0)
+
 static inline register_t compute_max_zcr(void)
 {
     return 0;
@@ -30,6 +41,11 @@ static inline register_t vl_to_zcr(unsigned int vl)
     return 0;
 }
 
+static inline unsigned int get_sys_vl_len(void)
+{
+    return 0;
+}
+
 #endif /* CONFIG_ARM64_SVE */
 
 #endif /* _ARM_ARM64_SVE_H */
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index e776ee704b7d..331da0f3bcc3 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -67,6 +67,11 @@ struct arch_domain
     enum domain_type type;
 #endif
 
+#ifdef CONFIG_ARM64_SVE
+    /* max SVE encoded vector length */
+    uint8_t sve_vl;
+#endif
+
     /* Virtual MMU */
     struct p2m_domain p2m;
 
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 1528ced5097a..38311f559581 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -300,6 +300,8 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
 struct xen_arch_domainconfig {
     /* IN/OUT */
     uint8_t gic_version;
+    /* IN - Contains SVE vector length divided by 128 */
+    uint8_t sve_vl;
     /* IN */
     uint16_t tee_type;
     /* IN */
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 529801c89ba3..e2e22cb534d6 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -21,7 +21,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000016
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525068.816025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI0-0000gk-Dy; Mon, 24 Apr 2023 06:03:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525068.816025; Mon, 24 Apr 2023 06:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpI0-0000gQ-9w; Mon, 24 Apr 2023 06:03:04 +0000
Received: by outflank-mailman (input) for mailman id 525068;
 Mon, 24 Apr 2023 06:03:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpHy-0000eR-UX
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:02 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id aad9d3b8-e265-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 08:03:00 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 07FD5FEC;
 Sun, 23 Apr 2023 23:03:44 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 29FCB3F587;
 Sun, 23 Apr 2023 23:02:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aad9d3b8-e265-11ed-b223-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Date: Mon, 24 Apr 2023 07:02:37 +0100
Message-Id: <20230424060248.1488859-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Enable Xen to handle the SVE extension: add code in the cpufeature
module to handle the ZCR SVE register, and disable trapping of the SVE
feature on system boot only while SVE resources are accessed.
While there, correct the coding style of the comment on coprocessor
trapping.

cptr_el2 is now part of the domain context and is restored on context
switch. This is a preparation for saving the SVE context, which will
be part of the VFP operations, so restore it before the call that
saves the VFP registers.
To save an additional isb barrier, restore cptr_el2 before an existing
isb barrier and move the call that saves the VFP context after that
barrier.

Change the Kconfig entry to make the ARM64_SVE symbol selectable; it
is not selected by default.

Create an sve module and sve-asm.S containing assembly routines for
the SVE feature. This code is inspired by Linux and uses instruction
encodings to stay compatible with assemblers that do not support SVE.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes from v5:
 - Add R-by Bertrand
Changes from v4:
 - don't use fixed types in vl_to_zcr, forgot to address that in
   v3, by mistake I changed that in patch 2, fixing now (Jan)
Changes from v3:
 - no changes
Changes from v2:
 - renamed sve_asm.S in sve-asm.S, new files should not contain
   underscore in the name (Jan)
Changes from v1:
 - Add assert to vl_to_zcr, it is never called with vl==0, but just
   to be sure it won't in the future.
Changes from RFC:
 - Moved restoring of cptr before an existing barrier (Julien)
 - Marked the feature as unsupported for now (Julien)
 - Trap and un-trap only when using SVE resources in
   compute_max_zcr() (Julien)
---
 xen/arch/arm/Kconfig                     | 10 +++--
 xen/arch/arm/arm64/Makefile              |  1 +
 xen/arch/arm/arm64/cpufeature.c          |  7 ++--
 xen/arch/arm/arm64/sve-asm.S             | 48 +++++++++++++++++++++++
 xen/arch/arm/arm64/sve.c                 | 50 ++++++++++++++++++++++++
 xen/arch/arm/cpufeature.c                |  6 ++-
 xen/arch/arm/domain.c                    |  9 +++--
 xen/arch/arm/include/asm/arm64/sve.h     | 43 ++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/sysregs.h |  1 +
 xen/arch/arm/include/asm/cpufeature.h    | 14 +++++++
 xen/arch/arm/include/asm/domain.h        |  1 +
 xen/arch/arm/include/asm/processor.h     |  2 +
 xen/arch/arm/setup.c                     |  5 ++-
 xen/arch/arm/traps.c                     | 28 +++++++------
 14 files changed, 201 insertions(+), 24 deletions(-)
 create mode 100644 xen/arch/arm/arm64/sve-asm.S
 create mode 100644 xen/arch/arm/arm64/sve.c
 create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c7f..41f45d8d1203 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -112,11 +112,15 @@ config ARM64_PTR_AUTH
 	  This feature is not supported in Xen.
 
 config ARM64_SVE
-	def_bool n
+	bool "Enable Scalable Vector Extension support (UNSUPPORTED)" if UNSUPPORTED
 	depends on ARM_64
 	help
-	  Scalar Vector Extension support.
-	  This feature is not supported in Xen.
+	  Scalable Vector Extension (SVE/SVE2) support for guests.
+
+	  Please be aware that currently, enabling this feature adds latency
+	  to VM context switches that involve SVE-enabled guests (switches
+	  between two SVE-enabled guests, or between an SVE-enabled and a
+	  non-SVE guest), compared to switching between non-SVE guests.
 
 config ARM64_MTE
 	def_bool n
diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 28481393e98f..54ad55c75cda 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -13,6 +13,7 @@ obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += mm.o
 obj-y += smc.o
 obj-y += smpboot.o
+obj-$(CONFIG_ARM64_SVE) += sve.o sve-asm.o
 obj-y += traps.o
 obj-y += vfp.o
 obj-y += vsysreg.o
diff --git a/xen/arch/arm/arm64/cpufeature.c b/xen/arch/arm/arm64/cpufeature.c
index d9039d37b2d1..b4656ff4d80f 100644
--- a/xen/arch/arm/arm64/cpufeature.c
+++ b/xen/arch/arm/arm64/cpufeature.c
@@ -455,15 +455,11 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
 	ARM64_FTR_END,
 };
 
-#if 0
-/* TODO: use this to sanitize SVE once we support it */
-
 static const struct arm64_ftr_bits ftr_zcr[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
 		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
 	ARM64_FTR_END,
 };
-#endif
 
 /*
  * Common ftr bits for a 32bit register with all hidden, strict
@@ -603,6 +599,9 @@ void update_system_features(const struct cpuinfo_arm *new)
 
 	SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
 
+	if ( cpu_has_sve )
+		SANITIZE_REG(zcr64, 0, zcr);
+
 	/*
 	 * Comment from Linux:
 	 * Userspace may perform DC ZVA instructions. Mismatched block sizes
diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
new file mode 100644
index 000000000000..4d1549344733
--- /dev/null
+++ b/xen/arch/arm/arm64/sve-asm.S
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Arm SVE assembly routines
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ *
+ * Some macros and instruction encoding in this file are taken from linux 6.1.1,
+ * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
+ * version.
+ */
+
+/* Sanity-check macros to help avoid encoding garbage instructions */
+
+.macro _check_general_reg nr
+    .if (\nr) < 0 || (\nr) > 30
+        .error "Bad register number \nr."
+    .endif
+.endm
+
+.macro _check_num n, min, max
+    .if (\n) < (\min) || (\n) > (\max)
+        .error "Number \n out of range [\min,\max]"
+    .endif
+.endm
+
+/* SVE instruction encodings for non-SVE-capable assemblers */
+/* (pre binutils 2.28, all kernel capable clang versions support SVE) */
+
+/* RDVL X\nx, #\imm */
+.macro _sve_rdvl nx, imm
+    _check_general_reg \nx
+    _check_num (\imm), -0x20, 0x1f
+    .inst 0x04bf5000                \
+        | (\nx)                     \
+        | (((\imm) & 0x3f) << 5)
+.endm
+
+/* Gets the current vector register size in bytes */
+GLOBAL(sve_get_hw_vl)
+    _sve_rdvl 0, 1
+    ret
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
new file mode 100644
index 000000000000..6f3fb368c59b
--- /dev/null
+++ b/xen/arch/arm/arm64/sve.c
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#include <xen/types.h>
+#include <asm/arm64/sve.h>
+#include <asm/arm64/sysregs.h>
+#include <asm/processor.h>
+#include <asm/system.h>
+
+extern unsigned int sve_get_hw_vl(void);
+
+register_t compute_max_zcr(void)
+{
+    register_t cptr_bits = get_default_cptr_flags();
+    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
+    unsigned int hw_vl;
+
+    /* Remove trap for SVE resources */
+    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
+    isb();
+
+    /*
+     * Set the maximum SVE vector length, doing that we will know the VL
+     * supported by the platform, calling sve_get_hw_vl()
+     */
+    WRITE_SYSREG(zcr, ZCR_EL2);
+
+    /*
+     * Read the maximum VL, which could be lower than what we imposed before,
+     * hw_vl contains VL in bytes, multiply it by 8 to use vl_to_zcr() later
+     */
+    hw_vl = sve_get_hw_vl() * 8U;
+
+    /* Restore CPTR_EL2 */
+    WRITE_SYSREG(cptr_bits, CPTR_EL2);
+    isb();
+
+    return vl_to_zcr(hw_vl);
+}
+
+/* Takes a vector length in bits and returns the ZCR_ELx encoding */
+register_t vl_to_zcr(unsigned int vl)
+{
+    ASSERT(vl > 0);
+    return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
+}
diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index c4ec38bb2554..83b84368f6d5 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -9,6 +9,7 @@
 #include <xen/init.h>
 #include <xen/smp.h>
 #include <xen/stop_machine.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 
 DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
@@ -143,6 +144,9 @@ void identify_cpu(struct cpuinfo_arm *c)
 
     c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
 
+    if ( cpu_has_sve )
+        c->zcr64.bits[0] = compute_max_zcr();
+
     c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
 
     c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
@@ -199,7 +203,7 @@ static int __init create_guest_cpuinfo(void)
     guest_cpuinfo.pfr64.mpam = 0;
     guest_cpuinfo.pfr64.mpam_frac = 0;
 
-    /* Hide SVE as Xen does not support it */
+    /* Hide SVE by default to the guests */
     guest_cpuinfo.pfr64.sve = 0;
     guest_cpuinfo.zfr64.bits[0] = 0;
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index d8ef6501ff8e..0350d8c61ed8 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -181,9 +181,6 @@ static void ctxt_switch_to(struct vcpu *n)
     /* VGIC */
     gic_restore_state(n);
 
-    /* VFP */
-    vfp_restore_state(n);
-
     /* XXX MPU */
 
     /* Fault Status */
@@ -234,6 +231,7 @@ static void ctxt_switch_to(struct vcpu *n)
     p2m_restore_state(n);
 
     /* Control Registers */
+    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
     WRITE_SYSREG(n->arch.cpacr, CPACR_EL1);
 
     /*
@@ -258,6 +256,9 @@ static void ctxt_switch_to(struct vcpu *n)
 #endif
     isb();
 
+    /* VFP */
+    vfp_restore_state(n);
+
     /* CP 15 */
     WRITE_SYSREG(n->arch.csselr, CSSELR_EL1);
 
@@ -548,6 +549,8 @@ int arch_vcpu_create(struct vcpu *v)
 
     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
+    v->arch.cptr_el2 = get_default_cptr_flags();
+
     v->arch.hcr_el2 = get_default_hcr_flags();
 
     v->arch.mdcr_el2 = HDCR_TDRA | HDCR_TDOSA | HDCR_TDA;
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
new file mode 100644
index 000000000000..144d2b1cc485
--- /dev/null
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#ifndef _ARM_ARM64_SVE_H
+#define _ARM_ARM64_SVE_H
+
+#define SVE_VL_MAX_BITS (2048U)
+
+/* Vector length must be multiple of 128 */
+#define SVE_VL_MULTIPLE_VAL (128U)
+
+#ifdef CONFIG_ARM64_SVE
+
+register_t compute_max_zcr(void);
+register_t vl_to_zcr(unsigned int vl);
+
+#else /* !CONFIG_ARM64_SVE */
+
+static inline register_t compute_max_zcr(void)
+{
+    return 0;
+}
+
+static inline register_t vl_to_zcr(unsigned int vl)
+{
+    return 0;
+}
+
+#endif /* CONFIG_ARM64_SVE */
+
+#endif /* _ARM_ARM64_SVE_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 463899951414..4cabb9eb4d5e 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -24,6 +24,7 @@
 #define ICH_EISR_EL2              S3_4_C12_C11_3
 #define ICH_ELSR_EL2              S3_4_C12_C11_5
 #define ICH_VMCR_EL2              S3_4_C12_C11_7
+#define ZCR_EL2                   S3_4_C1_C2_0
 
 #define __LR0_EL2(x)              S3_4_C12_C12_ ## x
 #define __LR8_EL2(x)              S3_4_C12_C13_ ## x
diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
index c62cf6293fd6..6d703e051906 100644
--- a/xen/arch/arm/include/asm/cpufeature.h
+++ b/xen/arch/arm/include/asm/cpufeature.h
@@ -32,6 +32,12 @@
 #define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
 #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
 
+#ifdef CONFIG_ARM64_SVE
+#define cpu_has_sve       (boot_cpu_feature64(sve) == 1)
+#else
+#define cpu_has_sve       (0)
+#endif
+
 #ifdef CONFIG_ARM_32
 #define cpu_has_gicv3     (boot_cpu_feature32(gic) >= 1)
 #define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
@@ -323,6 +329,14 @@ struct cpuinfo_arm {
         };
     } isa64;
 
+    union {
+        register_t bits[1];
+        struct {
+            unsigned long len:4;
+            unsigned long __res0:60;
+        };
+    } zcr64;
+
     struct {
         register_t bits[1];
     } zfr64;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 2a51f0ca688e..e776ee704b7d 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -190,6 +190,7 @@ struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+    register_t cptr_el2;
     register_t hcr_el2;
     register_t mdcr_el2;
 
diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index 54f253087718..bc683334125c 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -582,6 +582,8 @@ void do_trap_guest_serror(struct cpu_user_regs *regs);
 
 register_t get_default_hcr_flags(void);
 
+register_t get_default_cptr_flags(void);
+
 /*
  * Synchronize SError unless the feature is selected.
  * This is relying on the SErrors are currently unmasked.
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 6f9f4d8c8a15..4191a766767a 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -135,10 +135,11 @@ static void __init processor_id(void)
            cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
            cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
            cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
-    printk("    Extensions:%s%s%s\n",
+    printk("    Extensions:%s%s%s%s\n",
            cpu_has_fp ? " FloatingPoint" : "",
            cpu_has_simd ? " AdvancedSIMD" : "",
-           cpu_has_gicv3 ? " GICv3-SysReg" : "");
+           cpu_has_gicv3 ? " GICv3-SysReg" : "",
+           cpu_has_sve ? " SVE" : "");
 
     /* Warn user if we find unknown floating-point features */
     if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index d40c331a4e9c..c0611c2ef6a5 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -93,6 +93,21 @@ register_t get_default_hcr_flags(void)
              HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
 }
 
+register_t get_default_cptr_flags(void)
+{
+    /*
+     * Trap all coprocessor registers (0-13) except cp10 and
+     * cp11 for VFP.
+     *
+     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
+     *
+     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
+     * RES1, i.e. they would trap whether we did this write or not.
+     */
+    return  ((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
+             HCPTR_TTA | HCPTR_TAM);
+}
+
 static enum {
     SERRORS_DIVERSE,
     SERRORS_PANIC,
@@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
 
 void init_traps(void)
 {
+    register_t cptr_bits = get_default_cptr_flags();
     /*
      * Setup Hyp vector base. Note they might get updated with the
      * branch predictor hardening.
@@ -135,17 +151,7 @@ void init_traps(void)
     /* Trap CP15 c15 used for implementation defined registers */
     WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
 
-    /* Trap all coprocessor registers (0-13) except cp10 and
-     * cp11 for VFP.
-     *
-     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
-     *
-     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
-     * RES1, i.e. they would trap whether we did this write or not.
-     */
-    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
-                 HCPTR_TTA | HCPTR_TAM,
-                 CPTR_EL2);
+    WRITE_SYSREG(cptr_bits, CPTR_EL2);
 
     /*
      * Configure HCR_EL2 with the bare minimum to run Xen until a guest
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525077.816123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpIC-0003SJ-Ra; Mon, 24 Apr 2023 06:03:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525077.816123; Mon, 24 Apr 2023 06:03:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpIC-0003S4-Nh; Mon, 24 Apr 2023 06:03:16 +0000
Received: by outflank-mailman (input) for mailman id 525077;
 Mon, 24 Apr 2023 06:03:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpIB-0000mg-5j
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:15 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id b2133e1d-e265-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 08:03:12 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1915E1691;
 Sun, 23 Apr 2023 23:03:56 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1FB4F3F587;
 Sun, 23 Apr 2023 23:03:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2133e1d-e265-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v6 10/12] xen/tools: add sve parameter in XL configuration
Date: Mon, 24 Apr 2023 07:02:46 +0100
Message-Id: <20230424060248.1488859-11-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the sve parameter to the XL configuration to allow guests to use
the SVE feature.
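
For illustration, a guest configuration enabling the new option might look
like the fragment below (guest name, memory, vcpus and kernel path are
placeholders for the example; only the "sve" line relates to this patch):

```
# Illustrative xl configuration fragment for an AArch64 guest.
name   = "sve-guest"
memory = 512
vcpus  = 2
kernel = "/path/to/Image"

# Cap the guest's maximum SVE vector length at 256 bits; alternatively
# sve = "hw" selects the maximum vector length supported by the platform.
sve = "256"
```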

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v5:
 - Update documentation
 - re-generated golang files
Changes from v4:
 - Rename sve field to sve_vl (Anthony), changed type to
   libxl_sve_type
 - Sanity check of sve field in libxl instead of xl, update docs
   (Anthony)
 - drop Ack-by from George because of the changes in the Golang bits
Changes from v3:
 - no changes
Changes from v2:
 - domain configuration field name has changed to sve_vl,
   also its value now is VL/128.
 - Add Ack-by George for the Golang bits
Changes from v1:
 - updated to use arch_capabilities field for vector length
Changes from RFC:
 - changed libxl_types.idl sve field to uint16
 - now toolstack uses info from physinfo to check against the
   sve XL value
 - Changed documentation
---
 docs/man/xl.cfg.5.pod.in             | 16 ++++++++++++++++
 tools/golang/xenlight/helpers.gen.go |  2 ++
 tools/golang/xenlight/types.gen.go   | 23 +++++++++++++++++++++++
 tools/include/libxl.h                |  5 +++++
 tools/libs/light/libxl_arm.c         | 28 ++++++++++++++++++++++++++++
 tools/libs/light/libxl_types.idl     | 22 ++++++++++++++++++++++
 tools/xl/xl_parse.c                  |  8 ++++++++
 7 files changed, 104 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 10f37990be57..2ea996caa256 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2952,6 +2952,22 @@ Currently, only the "sbsa_uart" model is supported for ARM.
 
 =back
 
+=item B<sve="vl">
+
+The B<sve> parameter enables the Arm Scalable Vector Extension (SVE) for the
+guest and sets the maximum SVE vector length. The option is applicable only to
+AArch64 guests.
+The value "disabled" disables the feature; this is the default.
+Allowed values are "disabled", "128", "256", "384", "512", "640", "768", "896",
+"1024", "1152", "1280", "1408", "1536", "1664", "1792", "1920", "2048", "hw".
+Specifying "hw" means that the maximum vector length supported by the platform
+will be used.
+Please be aware that if a specific vector length is passed and its value is
+above the maximum vector length supported by the platform, an error will be
+raised.
+
+=back
+
 =head3 x86
 
 =over 4
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 35397be2f9e2..cd1a16e32eac 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1149,6 +1149,7 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
 x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
+x.ArchArm.SveVl = SveType(xc.arch_arm.sve_vl)
 if err := x.ArchX86.MsrRelaxed.fromC(&xc.arch_x86.msr_relaxed);err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
@@ -1653,6 +1654,7 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion)
 xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
+xc.arch_arm.sve_vl = C.libxl_sve_type(x.ArchArm.SveVl)
 if err := x.ArchX86.MsrRelaxed.toC(&xc.arch_x86.msr_relaxed); err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index 3d968a496744..b131a7eedc9d 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -490,6 +490,28 @@ TeeTypeNone TeeType = 0
 TeeTypeOptee TeeType = 1
 )
 
+type SveType int
+const(
+SveTypeHw SveType = -1
+SveTypeDisabled SveType = 0
+SveType128 SveType = 128
+SveType256 SveType = 256
+SveType384 SveType = 384
+SveType512 SveType = 512
+SveType640 SveType = 640
+SveType768 SveType = 768
+SveType896 SveType = 896
+SveType1024 SveType = 1024
+SveType1152 SveType = 1152
+SveType1280 SveType = 1280
+SveType1408 SveType = 1408
+SveType1536 SveType = 1536
+SveType1664 SveType = 1664
+SveType1792 SveType = 1792
+SveType1920 SveType = 1920
+SveType2048 SveType = 2048
+)
+
 type RdmReserve struct {
 Strategy RdmReserveStrategy
 Policy RdmReservePolicy
@@ -564,6 +586,7 @@ TypeUnion DomainBuildInfoTypeUnion
 ArchArm struct {
 GicVersion GicVersion
 Vuart VuartType
+SveVl SveType
 }
 ArchX86 struct {
 MsrRelaxed Defbool
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 4fa09ff7635a..cac641a7eba2 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -283,6 +283,11 @@
  */
 #define LIBXL_HAVE_BUILDINFO_ARCH_ARM_TEE 1
 
+/*
+ * libxl_domain_build_info has the arch_arm.sve_vl field.
+ */
+#define LIBXL_HAVE_BUILDINFO_ARCH_ARM_SVE_VL 1
+
 /*
  * LIBXL_HAVE_SOFT_RESET indicates that libxl supports performing
  * 'soft reset' for domains and there is 'soft_reset' shutdown reason
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index ddc7b2a15975..1e69dac2c4fa 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -3,6 +3,8 @@
 #include "libxl_libfdt_compat.h"
 #include "libxl_arm.h"
 
+#include <xen-tools/arm-arch-capabilities.h>
+
 #include <stdbool.h>
 #include <libfdt.h>
 #include <assert.h>
@@ -211,6 +213,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
+    /* Parameter is sanitised in libxl__arch_domain_build_info_setdefault */
+    if (d_config->b_info.arch_arm.sve_vl) {
+        /* Vector length is divided by 128 in struct xen_domctl_createdomain */
+        config->arch.sve_vl = d_config->b_info.arch_arm.sve_vl / 128U;
+    }
+
     return 0;
 }
 
@@ -1681,6 +1689,26 @@ int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
     /* ACPI is disabled by default */
     libxl_defbool_setdefault(&b_info->acpi, false);
 
+    /* Sanitise SVE parameter */
+    if (b_info->arch_arm.sve_vl) {
+        unsigned int max_sve_vl =
+            arch_capabilities_arm_sve(physinfo->arch_capabilities);
+
+        if (!max_sve_vl) {
+            LOG(ERROR, "SVE is unsupported on this machine.");
+            return ERROR_FAIL;
+        }
+
+        if (LIBXL_SVE_TYPE_HW == b_info->arch_arm.sve_vl) {
+            b_info->arch_arm.sve_vl = max_sve_vl;
+        } else if (b_info->arch_arm.sve_vl > max_sve_vl) {
+            LOG(ERROR,
+                "Invalid sve value: %d. Platform supports up to %u bits",
+                b_info->arch_arm.sve_vl, max_sve_vl);
+            return ERROR_FAIL;
+        }
+    }
+
     if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
         return 0;
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index fd31dacf7d5a..9e48bb772646 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -523,6 +523,27 @@ libxl_tee_type = Enumeration("tee_type", [
     (1, "optee")
     ], init_val = "LIBXL_TEE_TYPE_NONE")
 
+libxl_sve_type = Enumeration("sve_type", [
+    (-1, "hw"),
+    (0, "disabled"),
+    (128, "128"),
+    (256, "256"),
+    (384, "384"),
+    (512, "512"),
+    (640, "640"),
+    (768, "768"),
+    (896, "896"),
+    (1024, "1024"),
+    (1152, "1152"),
+    (1280, "1280"),
+    (1408, "1408"),
+    (1536, "1536"),
+    (1664, "1664"),
+    (1792, "1792"),
+    (1920, "1920"),
+    (2048, "2048")
+    ], init_val = "LIBXL_SVE_TYPE_DISABLED")
+
 libxl_rdm_reserve = Struct("rdm_reserve", [
     ("strategy",    libxl_rdm_reserve_strategy),
     ("policy",      libxl_rdm_reserve_policy),
@@ -690,6 +711,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
                                ("vuart", libxl_vuart_type),
+                               ("sve_vl", libxl_sve_type),
                               ])),
     ("arch_x86", Struct(None, [("msr_relaxed", libxl_defbool),
                               ])),
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 1f6f47daf4e1..f036e56fc239 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2887,6 +2887,14 @@ skip_usbdev:
         }
     }
 
+    if (!xlu_cfg_get_string (config, "sve", &buf, 1)) {
+        e = libxl_sve_type_from_string(buf, &b_info->arch_arm.sve_vl);
+        if (e) {
+            fprintf(stderr, "Unknown sve \"%s\" specified\n", buf);
+            exit(EXIT_FAILURE);
+        }
+    }
+
     parse_vkb_list(config, d_config);
 
     d_config->virtios = NULL;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525078.816127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpID-0003VP-8p; Mon, 24 Apr 2023 06:03:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525078.816127; Mon, 24 Apr 2023 06:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpID-0003Ud-2E; Mon, 24 Apr 2023 06:03:17 +0000
Received: by outflank-mailman (input) for mailman id 525078;
 Mon, 24 Apr 2023 06:03:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpIB-0000eR-JB
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:15 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id b37a34f0-e265-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 08:03:15 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AB5B3FEC;
 Sun, 23 Apr 2023 23:03:58 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 66CF33F587;
 Sun, 23 Apr 2023 23:03:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b37a34f0-e265-11ed-b223-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 12/12] xen/changelog: Add SVE and "dom0" options to the changelog for Arm
Date: Mon, 24 Apr 2023 07:02:48 +0100
Message-Id: <20230424060248.1488859-13-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Arm can now use the "dom0=" Xen command line option, and support for
guests running SVE instructions has been added; put entries for both
in the changelog.

Mention the "Tech Preview" status and add an entry in SUPPORT.md
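
As a hedged illustration of how the two changes combine (the exact spelling
of the sub-option is defined by the "dom0=" documentation patch in this
series, not shown here), a Xen boot command line might read:

```
# Illustrative Xen command line (boot-loader specific placement assumed):
# cap dom0's SVE vector length at 256 bits via the "dom0=" option.
dom0=sve=256
```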

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v5:
 - Add Tech Preview status and add entry in SUPPORT.md (Bertrand)
Changes from v4:
 - No changes
Change from v3:
 - new patch
---
 CHANGELOG.md | 3 +++
 SUPPORT.md   | 6 ++++++
 2 files changed, 9 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5dbf8b06d72c..c82a03afd2cf 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -9,6 +9,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 ### Changed
  - Repurpose command line gnttab_max_{maptrack_,}frames options so they don't
    cap toolstack provided values.
+ - The "dom0" option is now supported on Arm, and the "sve=" sub-option can
+   be used to enable the dom0 guest to use SVE/SVE2 instructions.
 
 ### Added
  - On x86, support for features new in Intel Sapphire Rapids CPUs:
@@ -18,6 +20,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
      wide impact of a guest misusing atomic instructions.
  - xl/libxl can customize SMBIOS strings for HVM guests.
+ - On Arm, Xen supports guests running SVE/SVE2 instructions. (Tech Preview)
 
 ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
 
diff --git a/SUPPORT.md b/SUPPORT.md
index aa1940e55f09..3711fc83b48a 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -89,6 +89,12 @@ Extension to the GICv3 interrupt controller to support MSI.
 
     Status: Experimental
 
+### ARM Scalable Vector Extension (SVE/SVE2)
+
+AArch64 guests can use the Scalable Vector Extension (SVE/SVE2).
+
+    Status: Tech Preview
+
 ## Guest Type
 
 ### x86/PV
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 06:03:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 06:03:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525079.816133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpID-0003e3-UT; Mon, 24 Apr 2023 06:03:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525079.816133; Mon, 24 Apr 2023 06:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqpID-0003ct-J5; Mon, 24 Apr 2023 06:03:17 +0000
Received: by outflank-mailman (input) for mailman id 525079;
 Mon, 24 Apr 2023 06:03:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pqpIC-0000mg-5s
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 06:03:16 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id b2b197b1-e265-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 08:03:13 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 21A8F1477;
 Sun, 23 Apr 2023 23:03:57 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5C2953F587;
 Sun, 23 Apr 2023 23:03:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2b197b1-e265-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v6 11/12] xen/arm: add sve property for dom0less domUs
Date: Mon, 24 Apr 2023 07:02:47 +0100
Message-Id: <20230424060248.1488859-12-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424060248.1488859-1-luca.fancellu@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a device tree property to the dom0less domU configuration
to enable the guest to use SVE.

Update documentation.
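
For illustration, based on the documentation added below (the node name and
all properties other than "sve" are placeholders for the example), a dom0less
domU node enabling SVE might look like:

```
domU1 {
    compatible = "xen,domain";
    #address-cells = <1>;
    #size-cells = <1>;
    cpus = <2>;
    memory = <0x0 0x20000>;

    /* Cap the domain's SVE vector length at 256 bits; an empty
     * "sve;" property would request the platform maximum instead. */
    sve = <256>;
};
```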

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v5:
 - Stop the domain creation if SVE not supported or SVE VL
   errors (Julien, Bertrand)
 - now sve_sanitize_vl_param is renamed to sve_domctl_vl_param
   and returns a boolean, change the affected code.
 - Reworded documentation.
Changes from v4:
 - Now it is possible to specify the property "sve" for dom0less
   device tree node without any value, that means the platform
   supported VL will be used.
Changes from v3:
 - Now domainconfig_encode_vl is named sve_encode_vl
Changes from v2:
 - xen_domctl_createdomain field name has changed into sve_vl
   and its value is the VL/128, use domainconfig_encode_vl
   to encode a plain VL in bits.
Changes from v1:
 - No changes
Changes from RFC:
 - Changed documentation
---
 docs/misc/arm/device-tree/booting.txt | 16 ++++++++++++++++
 xen/arch/arm/domain_build.c           | 24 ++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 3879340b5e0a..32a0e508c471 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -193,6 +193,22 @@ with the following properties:
     Optional. Handle to a xen,cpupool device tree node that identifies the
     cpupool where the guest will be started at boot.
 
+- sve
+
+    Optional. The `sve` property enables Arm SVE usage for the domain and sets
+    the maximum SVE vector length. The option is applicable only to AArch64
+    guests.
+    A value of 0 disables the feature; this is the default.
+    Specifying this property with no value means that the SVE vector length
+    will be set to the maximum vector length supported by the platform.
+    Values above 0 explicitly set the maximum SVE vector length for the domain;
+    allowed values range from 128 to a maximum of 2048 and must be multiples
+    of 128.
+    Please note that if the user explicitly specifies a value above the
+    maximum SVE vector length supported by the hardware, domain creation will
+    fail and the system will stop. The same occurs if the option is provided
+    with a non-zero value but the platform doesn't support SVE.
+
 - xen,enhanced
 
     A string property. Possible property values are:
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 4a6b73348594..b61cc35b8524 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -4011,6 +4011,30 @@ void __init create_domUs(void)
             d_cfg.max_maptrack_frames = val;
         }
 
+        if ( dt_get_property(node, "sve", &val) )
+        {
+            unsigned int sve_vl_bits;
+            bool ret = false;
+
+            if ( !val )
+            {
+                /* Property found with no value, means max HW VL supported */
+                ret = sve_domctl_vl_param(-1, &sve_vl_bits);
+            }
+            else
+            {
+                if ( dt_property_read_u32(node, "sve", &val) )
+                    ret = sve_domctl_vl_param(val, &sve_vl_bits);
+                else
+                    panic("Error reading 'sve' property");
+            }
+
+            if ( ret )
+                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
+            else
+                panic("SVE vector length error\n");
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 07:23:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 07:23:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525142.816153 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqXH-0007d8-TK; Mon, 24 Apr 2023 07:22:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525142.816153; Mon, 24 Apr 2023 07:22:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqXH-0007d1-QJ; Mon, 24 Apr 2023 07:22:55 +0000
Received: by outflank-mailman (input) for mailman id 525142;
 Mon, 24 Apr 2023 07:22:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qjkp=AP=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pqqXH-0007cv-5B
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 07:22:55 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2062b.outbound.protection.outlook.com
 [2a01:111:f400:fe12::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d400cd49-e270-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 09:22:54 +0200 (CEST)
Received: from AM6P192CA0023.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:83::36)
 by DU0PR08MB9461.eurprd08.prod.outlook.com (2603:10a6:10:42f::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.32; Mon, 24 Apr
 2023 07:22:47 +0000
Received: from AM7EUR03FT058.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:83:cafe::28) by AM6P192CA0023.outlook.office365.com
 (2603:10a6:209:83::36) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 07:22:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT058.mail.protection.outlook.com (100.127.140.247) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 07:22:46 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Mon, 24 Apr 2023 07:22:46 +0000
Received: from 052e517e695c.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 97B7CC79-374B-4004-A543-AA5D64B22E7A.1; 
 Mon, 24 Apr 2023 07:22:40 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 052e517e695c.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 24 Apr 2023 07:22:40 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PAWPR08MB9006.eurprd08.prod.outlook.com (2603:10a6:102:33f::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 07:22:37 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 07:22:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d400cd49-e270-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zWZY2rSM/tCJktyejvgNGix7coLYZlQDSM6wK3mXlsU=;
 b=vn0aJ68HqVkWLoffnAPlZrjijZD612IQMYBu26SxYJSheuNdyhuAV2w+7EpKgDFf8/A+uHvtRwR7JjYAGsx7bh57bPE85WcQqC82s0A6IJWv8USvjKT9EDYsVwQ9msN/DD5THkCZkBXR1fdDDkHu2MDIcziWiP6lGGelXRgmeag=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=azUAZlf9IwESan4bJ5s6O77bw0qj9h8Yd2MP7pjSC8tbMMQ6yZ6DgRykrO0YtqEp9JEIGMnLbPPGSrrMEfpupKlgdkvtTefK7pRhi/mkH+q19RAyvFXo5ZjWePjXzXkF9Rj/iLv7P9qWVeCBXaXZgX3bUy+vFbiZRWT/UQvFkoRoCXY+tDa8j5s6QI7gWZiGAs0kbTfFhG1iywR67YCVYs9dYWl073KzEL+9S7ZDNEjtIJwfsNyh02pF5a/ngxMZt5JQLpa6w+8Kl1q9On9PfKq4BLV8R2K+R4xkhxXzbTXRKeFotOxe27qgtvh9q5qaABLtel73Yx4fGEm0XdW0bA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zWZY2rSM/tCJktyejvgNGix7coLYZlQDSM6wK3mXlsU=;
 b=J4ZSK1OXdZTImoQ8A2eBMbs1vdX0dKxWrfYIyvy1Tx/JqJQGmdWg8RHISWVOrpw0F7nSZVPKJx90GUITWku0hpI2W6QdGilc5XHmYBg8OIc2fZ0Cgx+PSP/RzrwqjTZl8cwOZlhGuYkwFkxRpOMlVmSZ26Aa14UnpbfTitSbzULE4CuuwA8Ifk39a9m4MS7K8Bp+zRAEtZMgoaH3s2EvI0eCSUcRQUQE0awhAegAKF+mGGwsnGsH9JFvoE5Mbo7WeAi3NgUs6VId8fV2Cy/AzFAtqDfRbGHVVv4i6HBFnMNP/RvvHNrEom0PYriLIVaL2g9Ka4pOIMlhVvvuWhZG3g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zWZY2rSM/tCJktyejvgNGix7coLYZlQDSM6wK3mXlsU=;
 b=vn0aJ68HqVkWLoffnAPlZrjijZD612IQMYBu26SxYJSheuNdyhuAV2w+7EpKgDFf8/A+uHvtRwR7JjYAGsx7bh57bPE85WcQqC82s0A6IJWv8USvjKT9EDYsVwQ9msN/DD5THkCZkBXR1fdDDkHu2MDIcziWiP6lGGelXRgmeag=
From: Henry Wang <Henry.Wang@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Community Manager <community.manager@xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: RE: [PATCH v6 12/12] xen/changelog: Add SVE and "dom0" options to the
 changelog for Arm
Thread-Topic: [PATCH v6 12/12] xen/changelog: Add SVE and "dom0" options to
 the changelog for Arm
Thread-Index: AQHZdnJ6OYC31hV5IU6Nadg0A3PyM686De0Q
Date: Mon, 24 Apr 2023 07:22:36 +0000
Message-ID:
 <AS8PR08MB799146450E0BBD6BF1C740BA92679@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-13-luca.fancellu@arm.com>
In-Reply-To: <20230424060248.1488859-13-luca.fancellu@arm.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 14832DCF3D12584084C5229F410F1645.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PAWPR08MB9006:EE_|AM7EUR03FT058:EE_|DU0PR08MB9461:EE_
X-MS-Office365-Filtering-Correlation-Id: 3148ab1c-11dd-4edc-4bbc-08db4494b3e6
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 KQL14xjoFToztO9iKoHvxRLX01JjSRcYgsULmva5nT9Q0i24zf3ryIwayKup3pkFwh8mUbuB41JjJvQISf3XkG84bQkVHqabe8EbusfoB8BIuacnxAY9g6aAIMn9cmysti1F8JGiBMltLaJz8zcv06f018mngRnOaznwntjKEJGUhPWEuM26ooFHdDKw0/nEL4/u+KXJ8eL7u5RKgH7PK1Fv639Oz5g9J45hp+jdUQ9rnI4rJFNg6LZ2ghj1hFx+FZcZRfF02NctkbbTXach2PvCXXqgIAxx92JfKPX/URHVQ0nnNua8TWIpmc8j4DfcQ8ZAv0fPMJcp3pkd1XHgzSP96U/s8WiNmeeh/xXcPv8o0TynjZ15BT0Wnwc2E6C+86/b1TsHGnfaZeV/NFiraX4V6I7JJHXm2kXTEgcZvUJ4UmL6huvIuBpQQ2eXYNgdYFfDO+Wmc/D0uNjsmvJyjIaokdjkSeZt5H+B8/p4+Cf8lL91xGrvGYErgKMCO9gqeGojF7GXWvssAhaRm+NZ4kBDvq1f6nSaScDO3FUknzrs1dh+SNqwo7JoEZ5gLx5DEvGjUqtlNdKbT1kOdaE1cmd+hJW3RTin/PHpzY+BSy3GerHMtFGs8S+GjKKCkQGKmXjJlEgoLwxIAjOWaTC9bpu8qum3uBKgC89ugNREBssZVyAfYjJ0cSe7alj4JfmtmdHmx8f5mIlA+kBJElfE8A==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(346002)(376002)(366004)(136003)(451199021)(38100700002)(122000001)(6506007)(9686003)(26005)(55016003)(186003)(83380400001)(33656002)(2906002)(4744005)(8676002)(8936002)(52536014)(5660300002)(54906003)(478600001)(7696005)(71200400001)(38070700005)(110136005)(316002)(4326008)(41300700001)(64756008)(66446008)(66556008)(66946007)(76116006)(86362001)(66476007)(59356011)(219803003)(207903002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9006
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	36eaa4ac-c16a-42d7-01bf-08db4494ae04
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	jFhWxbUr57iE40xG175KiG8m1aos2Fsa8nhwk8qeJuRo4oId1RyMOFkgKz9n6qDfOg7Km5xneZYYK4yPqMVARGXNR+1gN9x/Fw8PrJtTVxIP1ZaqPuHH3fZ/snbPWrZz6kbJHJISernJNl/vZL5kSEiUhs5Vn89GcAZm2SpB4I9KL8xzKPjOT+fhi778Hf/TAeaB282X+2nlNs4RTUYCqQd5pMW4IiNqFDRNHY9lmKYad5yG7ifJWAdwRXA1rb6aeiMBUVBJEGXnruJeTQMEIu727IeFm5k8ASEr/76xrDsx3Ln1MdwJHHIsELxQDiA5QKlknjE2yyRBtqB9tMRoTNtRS38AxHmJ2EIbUs0Gq7a6oWkHeuGXjIv0wWcy4Kdix2zM7AYNeTLpUttNG0RIOiryj3IaVvnE/WMui9xRuKjjmq8lNWi1JacAnwOV8hTXv0RpzNvloGXtQtrvcC/syrOASU6iR5cH4pP++AV2xs1l/eFGfRyDDkOi21wXXRB89+gYRM7qoeoJVKnEpBOMtyDOXEGXhQO6sghKZ6VZ5iUJubnQYsWZopRjJzAVW44+qGgQBe/iWgF4GjPBHgvgB5yF34aESKRlMg0SpzAEJiKSxzfysCJMYDsfEzSVnh9qSszDyS1bNARSij13YciRdsQM6tuqlVkAc/SGl/SlnKwtwEhoQw+0QI6Zcw4UpjTqBKbRvsISCIE2NYv8N0A/l0PSPGNg/C824FtfFYIkdeqJHSOOlJE3we1gVRYpYEMSclHCDuXBqXZxFEaZn75CqkVY021nJfxH/Mq7l/Iq7vA=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(376002)(346002)(396003)(451199021)(40470700004)(46966006)(36840700001)(54906003)(110136005)(40460700003)(478600001)(82740400003)(316002)(4326008)(40480700001)(70206006)(70586007)(55016003)(356005)(41300700001)(81166007)(4744005)(2906002)(8936002)(8676002)(52536014)(5660300002)(36860700001)(186003)(86362001)(336012)(26005)(33656002)(6506007)(9686003)(7696005)(47076005)(83380400001)(82310400005)(59356011)(207903002)(219803003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 07:22:46.6495
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3148ab1c-11dd-4edc-4bbc-08db4494b3e6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9461

Hi Luca,

> -----Original Message-----
> From: Luca Fancellu <luca.fancellu@arm.com>
> Subject: [PATCH v6 12/12] xen/changelog: Add SVE and "dom0" options to the
> changelog for Arm
>
> Arm now can use the "dom0=" Xen command line option and the support
> for guests running SVE instructions is added, put entries in the
> changelog.
>
> Mention the "Tech Preview" status and add an entry in SUPPORT.md
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Henry Wang <Henry.Wang@arm.com> # CHANGELOG

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 07:42:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 07:42:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525155.816163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqpf-0001lD-IW; Mon, 24 Apr 2023 07:41:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525155.816163; Mon, 24 Apr 2023 07:41:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqpf-0001l6-FS; Mon, 24 Apr 2023 07:41:55 +0000
Received: by outflank-mailman (input) for mailman id 525155;
 Mon, 24 Apr 2023 07:41:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqqpe-0001l0-2u
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 07:41:54 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20621.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7aa44a5f-e273-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 09:41:52 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7889.eurprd04.prod.outlook.com (2603:10a6:20b:24c::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 07:41:51 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 07:41:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7aa44a5f-e273-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MgSAb5kjAxsFdggzGHOXIAFcZh1G+HKgySFx01iKA8NDPCVfd1z2gUPEd5zoluL+PG/bDZ+HN7HM41Y4MWtkifx1Z/IFo4k7cGgCqREPWcBajywcPZWs5l11+zJf5Ff/U0CtFAFr9W5YiBIdC9uuikvjcRpf8HJiDUXT/z1s9Aw5oByjvYkoYAZYVLb6nB8dbgZG/97ABR5Ih7mjzYkQervSzWV96Iykkno/Ny8B1lF8fqU+QGZoNmecpDu7jmDfQzclRCa1bE+8tovgns3pyvrIeU0L6RStJ02M5h/tPHafSRy1aaCWwRnaM9wmX3Y++02DYhgG7wNU+81/oIqnNw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9O3QSKKiMQmag2nEoAv4P6qdeA1sEqoAcHERrvq+sQM=;
 b=LoepIrKTgZkDrK/hSKPMVMavFwrj5HRj530K3LDceOR9J1LTuH2HwIXL3mzC627HzbuuvfYpZiUrzkH/DSI6r+A83w07UnqSNAKeLkhJ/crLve/z+HfAmbM7FMP4FBepAouUwU/tWMp4D63nUCtXG9WChxzjVh3EDr9ZGCHO8ZtulPCb2Yd37pBbnOY/oQjRe2+kPkKMIOgLH73cR12NxbHnDGFFYix5dN23HrU3jlZlva6oZvFya/FYXIkmkhLNe+HoZP7pcN76ck894dF50REAtJuQ81I+K70RRR0CbIzxWkthB8+Xi6/zsEj/mANjptsgIs0vAw8UClGF1wUKgA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9O3QSKKiMQmag2nEoAv4P6qdeA1sEqoAcHERrvq+sQM=;
 b=rkqA1dOUkSf20mSaNEV02uxC5ud9hSAWvoRA6nk/EVgmU+8+5jAU4t3580ldU6QqUUwCjiAobwwj4fMxWBOg5UZe2MFrQzr62P8wGT3eb1n0XfzAjE5c2elawOD8HYSvl3lmrK9P4vYWZf+ib7ZwaQcIqFf4LrLjkpPCKp7KGahql1c9o9FqdHpjtf09kZ/beaj6FKLDVEFjsQLZ14pp9G+k+n8hfgUTCHAO72WmeGH4Q+txmPA0SoxXysjz553k9DklryOp0XLKBllrqmpt1Iaep1FsWPTXPfPTtSxvhMpZpBWXTPcVXpabJ0E2cHiebZsUnjunXbTTa8nhzZVM8g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e06fc93f-293f-a873-c9b9-2d5c941168f9@suse.com>
Date: Mon, 24 Apr 2023 09:41:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 03/17] xen/arm: implement node distance helpers for Arm
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-4-Henry.Wang@arm.com>
 <ac54e04c-58b7-d0c9-2443-bb09258c8bc8@suse.com>
 <AS8PR08MB79912F294EDAC48F835FBB7A92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <bdf33169-4e29-8c50-ff76-16d05df81a14@suse.com>
 <AS8PR08MB7991576C75D0D4482595E7E292669@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <AS8PR08MB7991576C75D0D4482595E7E292669@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0173.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7889:EE_
X-MS-Office365-Filtering-Correlation-Id: 43692ca4-97b6-47d7-dfd4-08db44975e06
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	sLORtVpC/focrFTn97Il6bF7MmbHhCrlgTBQv8rUBGknhFW1RCE1vwvKbgBVvaYLuCkXmXYyhVscZZn4KIaL/b2jfVzmvI06IAFzoKsEDZYNLGBcsyd2rR+SL8gA5Losf4s1wGeYY6J2pofnHlShBYC4tYLj61GqeO31OAwxnhzHLse/+hcSO9FoiC+J2ZIEuZrWxfDEopAjkj5HiOyQEHewP5guC3d7TgY+0FZE9wfsSiLJ6knso+QIzJcqGURo3iL4pVqPVSfq/NRbiXCPpM3/7I47yG/LfYo+KUgR1PWO2m9Hv/bAu3grTDK6g2Tz+sI6Xw7h/hU1+PQ87CJXf+WKLPwAcvucbqVLTxQUu8YPu8DQE6+BKViS1DyfHjg844TedQ5SwtKQMzKJ12TVAjGogM2H17hVaj991dCwCiWJZh3lbfPoEqSrP3MQq0AL2DN4lJc87EnLNK4GLvtL8tR9/IO3NLuzDPFMQP1Im3at2z43EfeI3ffUsxOCACoGUXuIZZysZTBBClsj3LBlIFkSeQ1FB17qZdXUdrbK4tyZgS+9wzY5NdONH5XhK+ZXMJ9/qp9BQg4kGGQFhqnLTJLfRuuI4grAq/6UqIkGjLYzbHyANnAgfeotHOMyq2oMlrI/mpYKMVVDp59EgIpveA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(376002)(136003)(396003)(39860400002)(366004)(451199021)(5660300002)(7416002)(2616005)(86362001)(31696002)(83380400001)(53546011)(186003)(6512007)(6506007)(26005)(38100700002)(8936002)(8676002)(54906003)(478600001)(316002)(6486002)(41300700001)(36756003)(4326008)(6916009)(66476007)(66556008)(66946007)(31686004)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RHJWdXFyeEtkdlVFWWoyN21HaG1QWmkwcjFFY2JOOVovUUladG9WbkUyU2sz?=
 =?utf-8?B?RUpKUHRGc1NzNEt0UDZOQjNwdWw4Szg0Wll2Ny9IcmtmWXkwWThXdXpVVUZ4?=
 =?utf-8?B?UHJCSmRRbXcwb0t4S21RZVpHVitCMEdrcXF4VG1iSlNXbkNJZStFKzVJdkoy?=
 =?utf-8?B?NGhJN3owZzdlaFZDeDdLdXNJK05aRVpuSWtLVmVHcjFoa2RsTDdwc3BvK0l5?=
 =?utf-8?B?Mm8yR3Q5TDNmRG1Oek9zT2pYeTRsU05LRUY0Ym4zcm5MMzI3Y2U1TUQ1Tkc4?=
 =?utf-8?B?alFFK2ViK08xZWxMYnhXVXNsdXdWbjlpdDhMSmcwbDRCRVBNclJWNkVwWXBr?=
 =?utf-8?B?Ynp2TFVjaUY2cTNKOXFhWVVxS2laeUVnZVk3blZ6akJSWXkxOXNiaFhxL284?=
 =?utf-8?B?ampNZGtoS2xnSXM4SlJ2K2pxTGFsbFgwdW01alpvQVg0dUF6MmZyaHJXcFNI?=
 =?utf-8?B?bllQU1lLL1lwZFZWaUpGWlZvWG5hMldRTE9XM3gyMk1QcGh1UTZPWDlPempM?=
 =?utf-8?B?SkliTG1TTjBvMnFmK0hpYmozcmpQRS9OUlFRaGR6UnJXRlRNLzJyV2o2K0p1?=
 =?utf-8?B?T2UvSHVYemJWSkVOamU3dGt6M25WblhRZTI3dVlaTnd3QzVwMW51S2hxUzNI?=
 =?utf-8?B?Z1gvcmpYcWQ3eHBCYTgzaE1TTkFISTlVSXMrOVRsWXZjSzY0cDdiMUptWWpM?=
 =?utf-8?B?L0dWMzdpMXlpQWZ3S2FJYnlDVkhuTlFseFlHdVA1c2dqS2lWVmVDUU5OUmUw?=
 =?utf-8?B?MTBNLzlJemU2Yi9OcDFubGVaVXp2VGVEcVFzc0U1TXlHaGNqRFdIVElMeTIv?=
 =?utf-8?B?S25XemkwVUd3cVp5VTE0UXg0TEY0SExPZDhIZWwyUEdJbG5zZzh0Tm56bWRS?=
 =?utf-8?B?UzFIajhtVUZJcDZLa28zQXpwVjA4SndtZHhETWM2ekpoN2ZPcXlQUUlCNVg1?=
 =?utf-8?B?UDZOUmtIR2NJRTZ6RnVERFZmMUVhczVCKzhKZTZWN0NzRHY5QkJzT0d4VWNR?=
 =?utf-8?B?VTNsaU41MFhDcTFDQnRHcVJ0R0dIME5TcXFKcFpCNHhaOGRwVnNRcW5xZFBG?=
 =?utf-8?B?TGMxRXRoWENCbi9NNnNqVnY5eE1qZjVmWGhsbjIvQTIzLzZMQVRwYWV0ekU4?=
 =?utf-8?B?TUJyRDlxalBuUWYyaENPdk1IaUdIVjk1YVh6aHU5TnlLak54YkhzMndOSTlM?=
 =?utf-8?B?NmxHcDRScm04Yi9NQ1pscUkwdXU5ZkE2RU84bFc3ZWF4NEJ4WlBjdzlJclFs?=
 =?utf-8?B?TzFSeWxWR2dmQlpwbjkwZmNGckZwRWg2c0JHa09DeHh5OEVadHhnUXY0STI4?=
 =?utf-8?B?TGNPSjE2K0xVTmNqakdhb3U2ZFhzRFpTalYrUTVoV2JqWVU5UGxVbUFJVnRj?=
 =?utf-8?B?V2tWNXV5YVpBUTBOeFBHeWRKcEtFYitKZXVHbWczRzFiZzgxSmduZUJQMkhB?=
 =?utf-8?B?MFlEakM4WXh3SUl0a1QrdjV6RnNaS0JmMWNhRFp2SXNyR0RCWitlNXp2R1FL?=
 =?utf-8?B?VzU5WFhLVEh5MnRUc3hNREpkQzJOejRVVmxwdFFRNzVlaFBuQlp5YTFnNGdv?=
 =?utf-8?B?R0lGMzh0eTdUUlNsQUdpZHFCcXduUTdCMmhSRkY4bmZtRWtkT2lWUkcrcHZ1?=
 =?utf-8?B?a05WeDZBd2Q3YTJna2RocE5TcDNHTFZUVlozWVlzSWoxL2lRWDdXbkpiL1dn?=
 =?utf-8?B?b0RGYUxtY1JDK1Avd2F2MEdqTmJMdEhjTE95NE14c3U2VndkTFNZdXdqOFly?=
 =?utf-8?B?aFNkbnlvVzZXcGdNWU5peTVrUEIwOUNQMTEzT3JIandGQXlpWFhDZmJJMFpU?=
 =?utf-8?B?WjJycUdHazltRmxFeVdZamdUL3hNQmU0dTJ4YTZxMmR0clJoNDhYeDNPR3lz?=
 =?utf-8?B?TlhhMmNNUk1Id3doMlAycmxwcWd2bDk4cnBLVWRtOEVvOHprcklMSmR1ajU3?=
 =?utf-8?B?U01yQ2NYcWpIdXBIaDdRZ3VFY3g3UTFLQ0RWd3pwNXZKajJVN2FmVEhSZWFX?=
 =?utf-8?B?UkprdHB1N0d0NkhLd043NjFLVUtFQ1d3UGpuem01R0FvaG5QQlNWTFQ5MjFo?=
 =?utf-8?B?aHdFekFQYmhGa2tEV29CVndIYXdocG9hSU0xU2hDenUrMkpJSk5HZnRodGti?=
 =?utf-8?Q?QSG3xcMriXdnB9RXh1l64k89v?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 43692ca4-97b6-47d7-dfd4-08db44975e06
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 07:41:51.2873
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 05EMZEM+YS+f/0Ldmn84KDUbS3FhgH2sgiWEtWcrwYaI1N+smSr6Bd5OeZEbgusw526XIJPAP82DwX1MJf8oRw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7889

On 23.04.2023 07:36, Henry Wang wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>>>> However, looking at the code below, don't you mean to have the array
>>>> pre-set to all NUMA_NO_DISTANCE?
>>>
>>> ...I am a bit puzzled about why pre-setting the array to all
>>> NUMA_NO_DISTANCE matters here, as I think the node distance map will
>>> be populated when parsing the device tree anyway, regardless of its
>>> initial values.
>>
>> From this patch alone it doesn't become clear whether indeed all array
>> slots (and not just ones for valid nodes) would be populated. I think
>> the code in the patch here would better not make itself dependent on
>> behavior of code added subsequently (which may change; recall that a
>> series may be committed in pieces).
> 
> Correct, I agree. I added a numa_init_distance() function (in patch #12) to
> set all values to NUMA_NO_DISTANCE. numa_init_distance() will be called
> at the beginning of numa_init().

Why do you need a function for this? As said, this array can be pre-set at
compile time (unless I'm overlooking something).

>>>>> +unsigned char __node_distance(nodeid_t from, nodeid_t to)
>>>>> +{
>>>>> +    /* When NUMA is off, any distance will be treated as remote. */
>>>>> +    if ( numa_disabled() )
>>>>> +        return NUMA_REMOTE_DISTANCE;
>>>>
>>>> Wouldn't it make sense to have the "from == to" special case ahead of
>>>> this (rather than further down), thus yielding a sensible result for
>>>> from == to == 0? And else return NUMA_NO_DISTANCE, thus having a
>>>> sensible result also for any from/to != 0?
>>>
>>> Could you please elaborate a bit more about why 0 matters here?
>>
>> When NUMA is off, there's only one node - node 0. Hence 0 has special
>> meaning in that case.
>>
>>> As from my understanding,
>>> (1) If from == to, then we set the distance to NUMA_LOCAL_DISTANCE
>>> which represents the diagonal of the matrix.
>>> (2) If from and to is in the matrix range, then we return
>>> node_distance_map[from][to].
>>
>> Provided that's set correctly. IOW this interacts with the other comment
>> (which really I made only after the one here, just that that's of course
>> not visible from the reply that I sent).
>>
>>> (3) Other cases we return NUMA_NO_DISTANCE.
>>
>> And when NUMA is off, it should be NUMA_NO_DISTANCE in _all_ other
>> cases,
>> i.e. ...
>>
>>>      /* When NUMA is off, any distance will be treated as remote. */
>>>      if ( numa_disabled() )
>>>          return NUMA_REMOTE_DISTANCE;
>>
>> ... this return is wrong in that case (even if in reality this likely
>> wouldn't matter much).
> 
> Thanks for the explanation! I think I now understand :) Would this diff below
> look good to you then? Appreciate your patience.

Looks largely okay, but possibly one part can now be omitted (see below).

> unsigned char __node_distance(nodeid_t from, nodeid_t to)
>  {
> -    /* When NUMA is off, any distance will be treated as remote. */
> +    if ( from == to )
> +        return NUMA_LOCAL_DISTANCE;
> +
> +    /* When NUMA is off, any distance will be treated as unreachable (0xFF). */

Please avoid mentioning the actual value of 0xFF: This serves no real
purpose (afaict) and is liable to go stale at some point.

>      if ( numa_disabled() )
> -        return NUMA_REMOTE_DISTANCE;
> +        return NUMA_NO_DISTANCE;

With the code below this is now only an optimization. Might be worth
saying so in the comment (assuming having this optimization is deemed
worth it).

Jan

>      /*
>       * Check whether the nodes are in the matrix range.
>       * When any node is out of range, except from and to nodes are the
> -     * same, we treat them as unreachable (return 0xFF)
> +     * same, we treat them as unreachable (0xFF)
>       */
>      if ( from >= ARRAY_SIZE(node_distance_map) ||
>           to >= ARRAY_SIZE(node_distance_map[0]) )
> -        return from == to ? NUMA_LOCAL_DISTANCE : NUMA_NO_DISTANCE;
> +        return NUMA_NO_DISTANCE;
> 
>      return node_distance_map[from][to];
>  }
> 
> Kind regards,
> Henry
> 
>>
>> Jan
>>



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 07:44:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 07:44:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525159.816173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqs0-0002Jq-Vb; Mon, 24 Apr 2023 07:44:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525159.816173; Mon, 24 Apr 2023 07:44:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqs0-0002Jj-Sz; Mon, 24 Apr 2023 07:44:20 +0000
Received: by outflank-mailman (input) for mailman id 525159;
 Mon, 24 Apr 2023 07:44:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3wwt=AP=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pqqrz-0002Jb-LJ
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 07:44:19 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on20610.outbound.protection.outlook.com
 [2a01:111:f400:7eb2::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cfb06d29-e273-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 09:44:17 +0200 (CEST)
Received: from BN9P220CA0010.NAMP220.PROD.OUTLOOK.COM (2603:10b6:408:13e::15)
 by PH7PR12MB5781.namprd12.prod.outlook.com (2603:10b6:510:1d0::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 07:44:13 +0000
Received: from BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:13e:cafe::73) by BN9P220CA0010.outlook.office365.com
 (2603:10b6:408:13e::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 07:44:12 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT047.mail.protection.outlook.com (10.13.177.220) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.18 via Frontend Transport; Mon, 24 Apr 2023 07:44:12 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 24 Apr
 2023 02:44:11 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 24 Apr
 2023 00:44:11 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 24 Apr 2023 02:44:09 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfb06d29-e273-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eHPFNujeeqlLsaVHxsahf2KkpvQxHjnZIOOIO+/ALo5r4Bi7ISo3OjbqLBTFvQJT7W/5pUREqxv7RO1Au6RoMSwEsEpboDtxEvC1+XnZ3BPCHCygmTG36x3ggEMyWd+e4tikPQgB0VeTpbNy1Sz9ed+5yRZbWwn/daGnINNpSCDM1dvCf0LQqT/t725MiIq5UxS/uz+5bojQvEl9esiyP0ukhYvVZN+zbJSIJ1iE7v7OPF/AGsQ4LK8SzzEC8FD+4dmkZhHFjmt0V2oY9MsGU4menSFkuIHtVW1YcL2EI9Ua+fRNkZbII1JckAEPqi/UnY3xHcgLm05fPfrQHhgS5A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=grNcrikzXvK4eARuWWUjkz9IC2bJ7kOp0Z+dkBHWOH8=;
 b=Bnz8JYaOcRfyOd8fg6saAQ20K9/auu1p4zuYmdqu6hkGCMtB612iwjFqrlEJslLWSCdVScnKYtp2XuzDyHGpeurV9CADH8F6t2ZoQCnEQvIMQdjSltJ4X9NuVLl7rQxZgdZ1MQ/CL1PWvsrQqAVX74CyhQRsGQyUObyWdmEUxcLE+Flv/upRM66tVFkJecgfGW6OPAV5ux4YBBHflC/P7XKgJT4K8Oz8ayLAjmAT0b+qbs6wo3ZRt0x4vcK8Z0n9hgA/P17CymzPCxcjiu0SBUGPGsZFj++O6Og7whmy+a/CS+/K7c0wuI7ljOKtiBY2vjSt4oVwNvsCEPCZ0GomeQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=grNcrikzXvK4eARuWWUjkz9IC2bJ7kOp0Z+dkBHWOH8=;
 b=jpAnzKCV10j/e/FXJQgqS1OyK6b1hJJjQtc1PcyV+Df5c5iAg+rY/rWNSBULNvbOvzeRD0yGZsr22c7Qz8apgIuB1HTYAqjFabNe6t+C3z7YeB7BxNRPAlcbS0z0PF5z8PkIQ3q00wiVwFWEUiRlCjw/kUN5b21v2OTECYLPb1E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <3bd89851-e09d-1b24-6fde-5a13862f5eb2@amd.com>
Date: Mon, 24 Apr 2023 09:44:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 08/10] xen/arm: domain_build: Check if the address fits
 the range of physical address
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-9-ayan.kumar.halder@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230413173735.48387-9-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT047:EE_|PH7PR12MB5781:EE_
X-MS-Office365-Filtering-Correlation-Id: b5449ea4-dbda-4cff-0846-08db4497b231
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 07:44:12.2428
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b5449ea4-dbda-4cff-0846-08db4497b231
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB5781

Hi Ayan,

On 13/04/2023 19:37, Ayan Kumar Halder wrote:
> 
> 
> handle_pci_range() and map_range_to_domain() take addr and len as uint64_t
> parameters. Then frame numbers are obtained from addr and len by right shifting
> with PAGE_SHIFT. The page frame numbers are saved using unsigned long.
Maybe better to say "expressed" rather than "saved"

> 
> Now, if a 64-bit value is right shifted by PAGE_SHIFT, the result has 52 valid
> bits. On a 32-bit system, 'unsigned long' is 32 bits wide. Thus, there is a
> potential loss of value when the result is stored as 'unsigned long'.
> 
> To mitigate this issue, we check if the starting and end address can be
> contained within the range of physical address supported on the system. If not,
> then an appropriate error is returned.
> 
> Also, the end address is computed once and used when required. And replaced u64
> with uint64_t.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from :-
> v1...v4 - NA. New patch introduced in v5.
> 
>  xen/arch/arm/domain_build.c | 30 +++++++++++++++++++++++-------
>  1 file changed, 23 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 7d28b75517..b98ee506a8 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1637,15 +1637,23 @@ out:
>  }
> 
>  static int __init handle_pci_range(const struct dt_device_node *dev,
> -                                   u64 addr, u64 len, void *data)
> +                                   uint64_t addr, uint64_t len, void *data)
>  {
>      struct rangeset *mem_holes = data;
>      paddr_t start, end;
>      int res;
> +    uint64_t end_addr = addr + len - 1;
> +
> +    if ( addr != (paddr_t)addr || end_addr != (paddr_t)end_addr )
> +    {
> +        printk(XENLOG_ERR "addr (0x%"PRIx64") or end_addr (0x%"PRIx64") exceeds the maximum allowed width (%d bits) for physical address\n",
I don't think it is wise to print variable names (end_addr) to the user. It would be better to say explicitly: start address, end address.
Also, to make the message shorter, you could write: ... exceeds the maximum allowed PA width (%u bits)

> +               addr, end_addr, CONFIG_PADDR_BITS);
Why use CONFIG_PADDR_BITS if you already introduced the PADDR_BITS macro?

> +        return -ERANGE;
> +    }
> 
>      start = addr & PAGE_MASK;
> -    end = PAGE_ALIGN(addr + len);
> -    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end - 1));
> +    end = PAGE_ALIGN(end_addr);
> +    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end));
I doubt PFN_DOWN(end) is the same as PFN_DOWN(end - 1), so I think you should keep the behavior as it was.

>      if ( res )
>      {
>          printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
> @@ -2330,11 +2338,19 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>  }
> 
>  int __init map_range_to_domain(const struct dt_device_node *dev,
> -                               u64 addr, u64 len, void *data)
> +                               uint64_t addr, uint64_t len, void *data)
You changed u64 to uint64_t in the definition but not in the prototype. Please fix.

>  {
>      struct map_range_data *mr_data = data;
>      struct domain *d = mr_data->d;
>      int res;
> +    uint64_t end_addr = addr + len - 1;
> +
> +    if ( addr != (paddr_t)addr || end_addr != (paddr_t)end_addr )
> +    {
> +        printk(XENLOG_ERR "addr (0x%"PRIx64") or end_addr (0x%"PRIx64") exceeds the maximum allowed width (%d bits) for physical address\n",
> +               addr, end_addr, CONFIG_PADDR_BITS);
Please see the remarks above about this code.

> +        return -ERANGE;
> +    }
> 
>      /*
>       * reserved-memory regions are RAM carved out for a special purpose.
> @@ -2345,13 +2361,13 @@ int __init map_range_to_domain(const struct dt_device_node *dev,
>                       strlen("/reserved-memory/")) != 0 )
>      {
>          res = iomem_permit_access(d, paddr_to_pfn(addr),
> -                paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
> +                paddr_to_pfn(PAGE_ALIGN(end_addr)));
>          if ( res )
>          {
>              printk(XENLOG_ERR "Unable to permit to dom%d access to"
>                      " 0x%"PRIx64" - 0x%"PRIx64"\n",
>                      d->domain_id,
> -                    addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
> +                    addr & PAGE_MASK, PAGE_ALIGN(end_addr) - 1);
>              return res;
>          }
>      }
> @@ -2368,7 +2384,7 @@ int __init map_range_to_domain(const struct dt_device_node *dev,
>          {
>              printk(XENLOG_ERR "Unable to map 0x%"PRIx64
>                     " - 0x%"PRIx64" in domain %d\n",
> -                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
> +                   addr & PAGE_MASK, PAGE_ALIGN(end_addr) - 1,
>                     d->domain_id);
>              return res;
>          }
> --
> 2.17.1
> 
> 

~Michal



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 07:46:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 07:46:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525163.816183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqu7-0002uv-C9; Mon, 24 Apr 2023 07:46:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525163.816183; Mon, 24 Apr 2023 07:46:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqu7-0002uo-9Q; Mon, 24 Apr 2023 07:46:31 +0000
Received: by outflank-mailman (input) for mailman id 525163;
 Mon, 24 Apr 2023 07:46:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqqu5-0002ug-Mb
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 07:46:29 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0625.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1e6730cb-e274-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 09:46:27 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6951.eurprd04.prod.outlook.com (2603:10a6:20b:10f::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 07:46:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 07:46:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e6730cb-e274-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KGvhsRyLCMzFUKtWfqS1g2tPuQZ5CnTP3PPX7nK1ABgDT5daAMWzUmebo0v+BgAcpqFU+U9tvGTtv/zb2aQfUyjiIHWMBH5KtGNvtNV2T5Oxl1GJNGC99YMFsU1sbfa6QmdAcn+Z5JiFyoQmCuH5IddqIt+a/drvEO7yVSBrv8Vpn/zCtO12WYirhscpTqB5fe6Ok3NMOhG9W4u74SM7OJ0aYstKcbn39PC6eEGx7ODXXKrhejzGtW+hIg0GtdrGwxgu7Q06b2pQikjE/gOOyFS+t7CR5mXtu5IvW70OmOsWa2J/+OXtIlCXzG8UU4JbvEA2R47IasDa+i15dK15hw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mOigmP19IwE7vzgMkR7QKeTE0ExJfx78fcPYeV2EqxQ=;
 b=GtyXVqYJwBDQ+q2srdoKQWix4NyY3CJztjDgVzIO1yNGF8CUG86XZREhO6bAkjPZZlgqgqvxrJQObkGE8awP5WIQDEhLWbmmoPpar6niW28BCunZmhSLo7sWV4yZw2Fl7DrOx5jwPQ5ahnDW4ydBBHjrW3dfr2cnkPjFeWbFuZxzGWKPjep0ak85r27JfiQ2wmzyGIosYweRakpS6FJl8FnpP18s5gf0UoPtTtwX4HTRbOgMeUr4hszA/kIzqGJwroHVAiKMLPv6glbe4rkdr4UwgKuxlsONH1GDZpsitRKoIelg7LXDBOx7cwhWUtWiNgcf/PQ9Idv7gC6HUtgYLg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mOigmP19IwE7vzgMkR7QKeTE0ExJfx78fcPYeV2EqxQ=;
 b=wx2v6ufij1tFaqt5R+bzn+A4SVRHIOQY6LXEqnyOJ/nu6XKbAOIN6S6++ZSzci3iLpADmFdMRgUBjDRo44KVwNVXVs+onEINKrTOdnOxzcqi3ZEf1xCVDRPWoW+4R96weVV0daa8wrPFR9ORsIvI4cC11i91AnkfCQLg3uGsXOLM34vGkKvBqEkIRwvu75JYV1ovT3PF6yPHPPVgQDkgyYySi1ZrBbylm1nT0YuHTw012xrGvnCzDCf8BUp2WwVA4flPMvpdXdQX6HUFt540Xpimau+PvnZgrhC2SgqFrMxOJpSj83gB7oAjznT4Iq1cwNdGuEGXTc8eCzRpLZmPGQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <bb71c3f3-20a7-b020-6685-879bd4e5786d@suse.com>
Date: Mon, 24 Apr 2023 09:46:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Content-Language: en-US
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Kevin Tian <kevin.tian@intel.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <ZBNA9q5DXJYG3KVp@Air-de-Roger> <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger> <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger> <87leivw8qp.fsf@epam.com>
 <ZD0cyXLt1knXyUzA@Air-de-Roger>
 <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>
 <ZD0krtCOrEwiKMFP@Air-de-Roger> <87354t8pqg.fsf@epam.com>
 <ZEKLN8AlzDUckorU@Air-de-Roger> <87o7nh727c.fsf@epam.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <87o7nh727c.fsf@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0030.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB6951:EE_
X-MS-Office365-Filtering-Correlation-Id: c3011829-d758-4cce-d6c0-08db4498012c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NWshK5QFrJc0gXu1eTXhdlVw7mn/i3LpW2iu0/gu6rF6RTOQr6xJP94z+5rL+4TkminHP6VJygnH4DWoeq0L8D1qrRJ209h7pKdyCEF+XG6UDs1RPhZHlNXyAYdzraWSPzRL+w64EgON0dyg98a0b+/b7yOEzAIzHzjzG8qKaZ8iCtyQOgFHxM0r75gyRKBlmCzBPBnbMqsuQjkEKuLPRztasnX83VpT3iWZ9T2b68vA8T5asqyuaaPawfliLeP/1uI6w2z0pUZ8HvwvYCkJSSmsDGffZjD5xJsRw6dTxW+g/bfH1pkiYoylfkjggxt8GJv7F8IelbMHKXM2KgkVUJux1Fldp7XKLm6UHdbJUOKz+M4ct8t5i+fuuWo/gKSZfu8lZMzRF2vpGML/4Nsr7CsM81fkb6KUTxyUoSC802T6+ilFyh42akM9p9B0I1Wc0DfH84wN4ATECnKiUV8W8dj/KbVAC4vY5wI+6ULgsPkEaApNnDocplUs8B782D7dyfVNkfQ2aqYX8YG34Qw8x0XGA9xicaaun7dROKfGutdU6957LSvEPhwRsiiWyvp2xuv9MwlJV4d/yr6NCXvd6CTT8PNZoJMxEo8GsqMYNpssGr4MXhElL4+4Fq4bTwjl36UrZmvvNmtz7gh9531eVg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(39850400004)(366004)(376002)(346002)(396003)(451199021)(2906002)(66476007)(66556008)(66946007)(6916009)(316002)(4326008)(7416002)(66899021)(8676002)(8936002)(5660300002)(41300700001)(36756003)(31696002)(86362001)(6512007)(26005)(186003)(53546011)(38100700002)(478600001)(6486002)(83380400001)(31686004)(2616005)(6506007)(54906003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UitDWnU0ejZCaERJSitXN0VFTHRWUGNKS044QWFMZSsyZGR3OHBWY3U0ZEJj?=
 =?utf-8?B?aC8rQ1QxTkFCS0pBeW05ZWFJd1NhMWVYQ21PWC9jMnJyTjYyd0d5NTY3aENi?=
 =?utf-8?B?K3JuVGNpZmVqTWYvSjNvMUp1b2o0emhONG5FVmQ2T2N1ZUlCWDByV0RnbndV?=
 =?utf-8?B?aHN5UzViejIrUmJrVzlvQ0hEQnpqTGs5Y0tnaVNQMmljM0V2eDZpQ1dJWmVw?=
 =?utf-8?B?OWhFTTljRWZTWG1GZGlEWmlUU1BHYW1lR1ROREk4bkpJY0dFZndGRVN1VlN5?=
 =?utf-8?B?Q3J4VG9SOFpXQVVrc0d3a2xmZHRIalZVUklqVUVhckNMNVV6SlljR2FkbTJ0?=
 =?utf-8?B?WEx4R0VMRkRndmVId2F3L0xCdmRYc1VzMnlTSHpHS3ZOZGFUUmV0UTY2RWY0?=
 =?utf-8?B?NVZQcnZmdHkvM1oxcUFRMmJjUkl2YW9xNERWa084L2JZRVBKZmZKWlB6dExF?=
 =?utf-8?B?cmpHN2NlM1FCMzVLMk1OWTd5TVNCU05LRnQzZXNLN0Q2cXord1dtVUMzWDkx?=
 =?utf-8?B?dnpnNTF0cVZNRWJjeEtiYXhHVG1HVm9TRDJwTG45OUJzSElOVENFRlZnMDJZ?=
 =?utf-8?B?ejlqWUdpTWFqNkpxWkZZeDIxNC9UOFhLdHB3RnlleXZkNHVhaENFazFVM0or?=
 =?utf-8?B?UFRxZk92czFIQWFkRHBzWGEvS3gwV1RYT1FLS3phMW5UeXNjbTlJa0toclpX?=
 =?utf-8?B?UWpkM0trS2szdURvYXJyWHpwNk9lNENET1BzWUJ1TTNMZkN6RnZrSmdKVlRu?=
 =?utf-8?B?R05IRTNNSkNUcitvQ2dXOWw4OThwSGsza2owNjZNU3dUa2dGcXJvUDdlbFJT?=
 =?utf-8?B?NFBoOXBRQ2VWZGl5QmVZU0V4ZDl6RGxsT213QytyMVZnU2cvMmtSN1JGSHQ2?=
 =?utf-8?B?SWlxUVErR09UQk1FMWprbVRTbGtFTmVNb1JaTWpmdERmSXRFM1VoYzdSK2VW?=
 =?utf-8?B?dVBXTERMT0NsV0JpdllqcXRDTmsxOGRFTXhYdVcyd2dsL1JYbXpyMWQ2ODRH?=
 =?utf-8?B?VHRVcVB6WUNOMzRYRDlwNmVFTU5NV3ZlczVHeVRQZk5zSHFpWGVoK2dkdDlI?=
 =?utf-8?B?QXE3SzRtOEc3anV4RFlwSm10RGRkUzVxcWs4bG5PelRZbnJwVm1ZdkZFbmJv?=
 =?utf-8?B?NVVHUTBGSGFxK3paTXNYSzJSa0xkWStXMVYwenEzUEJpbS90RHhLZjRSNU1q?=
 =?utf-8?B?WVRNZURrbEVjTHVrNVFIc0tNanAyN2U4OUFRTGJmOXVlL0JrYlFSQ1dwQTRo?=
 =?utf-8?B?WG5xbVcyRGpGaVZkL2NTd1RKQXIvalNZeng3U1dCdG1Ra2t1c1c0UldHazd1?=
 =?utf-8?B?Q2ltNFJzQUZOMFpVc1FOcVUvLzVxeUp0aVlPdEpvdW1XUDdaZTZGbDlHZGor?=
 =?utf-8?B?LytzbWdrR3ladnVoZ3Nxa0lCUk5NaThRbkticmV4SXlsTm1PVjdwY1gvYTVx?=
 =?utf-8?B?b2xZSGZlN1cySHZJU1BDZjhTMm4yVm4xSE9XS01WSmRSdFhVZGkyaFhrY2R2?=
 =?utf-8?B?RmJldXBGc2RzNitkZGlVWmZCNEswMlh5S29QSHg0ZmNPZTRtbjB0REpqb1lO?=
 =?utf-8?B?YW1CMHNLTzNNaUM1TDFHa0JQQ2IzWmVETGNib2UyVlhvL1VKcE5CZjBrZk40?=
 =?utf-8?B?c0FCZmdSTW1Qa2JjY0NxSHY5TWZ4bWxVYUJQR2xKUEM2bDRIMFJONCs0Z2R5?=
 =?utf-8?B?b1grTXZUOCsvMjVsWU1XNmJEQmRYVTlIbmJGQU5JZHBVSWluWjYzYjBFQlQv?=
 =?utf-8?B?djBMd05iNmkvVTg0UWhvYjZubVdTV1BQV3BLMU9RQUFqTWdEMHJTM3c3ekRz?=
 =?utf-8?B?d0VrUm44NVRIMEFnWnM1Znd0T0lBWWlVYkdrdThQL2k2ZlZWRzROWUVkeDls?=
 =?utf-8?B?LzdBby9XbHNSbkloc0dvazdSNCtiSmRibVNsN1FRMUdsSkgyL1dKV2RHbFZT?=
 =?utf-8?B?Qkp0R25RS3dsbWNINzZVbVFadUdpRmpWd0Z6djIrTGtVZEpRd3RzSkVMakxF?=
 =?utf-8?B?UUVCWGFwb0YzTVdPTExrRW5YazBJbFZMM0dOeTMzVGoxeFRNNWlSTVh4M3hE?=
 =?utf-8?B?ejQ2UEVEb3VDY0M2dXp2akxjQ0xxQ2I2Zi9DaEpDdjU0NmdnUkg5L1kvbHdV?=
 =?utf-8?Q?T5RhK+42c9PAwxzhMxkGgbIFJ?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c3011829-d758-4cce-d6c0-08db4498012c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 07:46:25.0152
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zx+/xV35TwgtvQMVDjwZoCGCDP9cocR7v6dJ4mZpGk6CLLSALGgXiMcSYK8uFR7U4EsHDd9NHF1gyQ0l+hzN2Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB6951

On 21.04.2023 16:13, Volodymyr Babchuk wrote:
> 
> Hi Roger,
> 
> Roger Pau Monné <roger.pau@citrix.com> writes:
> 
>> On Fri, Apr 21, 2023 at 11:00:23AM +0000, Volodymyr Babchuk wrote:
>>>
>>> Hello Roger,
>>>
>>> Roger Pau Monné <roger.pau@citrix.com> writes:
>>>
>>>> On Mon, Apr 17, 2023 at 12:34:31PM +0200, Jan Beulich wrote:
>>>>> On 17.04.2023 12:17, Roger Pau Monné wrote:
>>>>>> On Fri, Apr 14, 2023 at 01:30:39AM +0000, Volodymyr Babchuk wrote:
>>>>>>> Above I have proposed another view on this. I hope, it will work for
>>>>>>> you. Just to reiterate, idea is to allow "harmless" refcounts to be left
>>>>>>> after returning from pci_remove_device(). By "harmless" I mean that
>>>>>>> owners of those refcounts will not try to access the physical PCI
>>>>>>> device if pci_remove_device() is already finished.
>>>>>>
>>>>>> I'm not strictly a maintainer of this piece code, albeit I have an
>>>>>> opinion.  I will like to also hear Jans opinion, since he is the
>>>>>> maintainer.
>>>>>
>>>>> I'm afraid I can't really appreciate the term "harmless refcounts". Whoever
>>>>> holds a ref is entitled to access the device. As stated before, I see only
>>>>> two ways of getting things consistent: Either pci_remove_device() is
>>>>> invoked upon dropping of the last ref,
>>>>
>>>> With this approach, what would be the implementation of
>>>> PHYSDEVOP_manage_pci_remove?  Would it just check whether the pdev
>>>> exist and either return 0 or -EBUSY?
>>>>
>>>
>>> Okay, I am preparing patches with the behavior you proposed. To test it,
>>> I artificially set the refcount to 2, and indeed PHYSDEVOP_manage_pci_remove
>>> returned -EBUSY, which propagated to the Linux driver. The problem is that
>>> the Linux driver can't do anything with this. It just displayed an error
>>> message and removed the device anyway. This is because Linux sends
>>> PHYSDEVOP_manage_pci_remove in the device_remove() call path, and there is no
>>> way to prevent the device removal. So, the admin is not able to try this
>>> again.
>>
>> Ideally Linux won't remove the device, and then the admin would get to
>> retry.  Maybe the way the Linux hook is placed is not the best one?
>> The hypervisor should be authoritative on whether a device can be
>> removed or not, and hence PHYSDEVOP_manage_pci_remove returning an
>> error (EBUSY or otherwise) shouldn't allow the device unplug in Linux
>> to continue.
> 
> Yes, that would be ideal, but the Linux driver/device model is written in
> such a way that the PCI subsystem tracks all PCI device usage, so it can
> be certain that it can remove the device. Thus, functions in the device
> removal path either return void or 0. Of course, the kernel does not know that
> the hypervisor has additional uses for the device, so there is no mechanism
> to prevent removal.

Could pciback obtain a reference on behalf of the hypervisor, dropping it
when device removal is requested (i.e. much closer to the start of that
operation), and only if it finds that no guests use the device anymore?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 07:51:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 07:51:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525169.816193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqyG-0004R9-0o; Mon, 24 Apr 2023 07:50:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525169.816193; Mon, 24 Apr 2023 07:50:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqyF-0004R2-UK; Mon, 24 Apr 2023 07:50:47 +0000
Received: by outflank-mailman (input) for mailman id 525169;
 Mon, 24 Apr 2023 07:50:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qjkp=AP=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pqqyE-0004Qw-V5
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 07:50:46 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on20604.outbound.protection.outlook.com
 [2a01:111:f400:fe13::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b7d32504-e274-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 09:50:44 +0200 (CEST)
Received: from DBBPR09CA0026.eurprd09.prod.outlook.com (2603:10a6:10:d4::14)
 by DU0PR08MB7883.eurprd08.prod.outlook.com (2603:10a6:10:3b1::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 07:50:38 +0000
Received: from DBAEUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:d4:cafe::d4) by DBBPR09CA0026.outlook.office365.com
 (2603:10a6:10:d4::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 07:50:38 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT020.mail.protection.outlook.com (100.127.143.27) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 07:50:37 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Mon, 24 Apr 2023 07:50:37 +0000
Received: from 175549e814df.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E3BB9576-1ACB-4984-8664-B976C20384B7.1; 
 Mon, 24 Apr 2023 07:50:28 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 175549e814df.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 24 Apr 2023 07:50:28 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB7308.eurprd08.prod.outlook.com (2603:10a6:20b:443::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 07:50:25 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 07:50:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7d32504-e274-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6koWl1EEaDK1bKXULodqi9HRV5gxBShr8+3W08htdeY=;
 b=51UW1+8ccWMi1kD83MJonO2eDWcW+r/R+QZeylsLN9VkPvrh/FbIWPsqYjUKXzoOc0b7Wx4TEKwT2vggRW/2FAslrN6Iflipx/2yGfp/b9tcxdVpesI7TOkiQu9PSuw/5l00lLD9n+xYJigJT++S4KX1S3kbKaWCDQg5UXJ+znE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WDfqucf7bhvNbRAndA80AnSCrVuFWSezA0bXxfLPvw3JugnuJ8CzkCJ9z6Mn2BdTQkhwLuBc7RyIBvNxEmtO/qCLr+ZBw/FY01AYHLniaKaOY7AP9GLSs1A57Z/Ji+xV5x4yYqnSuGtTE9ZjhSs/+XFBZ39QU7zdfwW1x+kv3i/kQn38yrkDxI0hQgghdMqToN2+nTHdpVe6CeEOLZM4q9pygXyuyblEJirBq1hslzX1Mjn3QefXUVHiiHt4J+hLNIVMUy/gP7j9gF3LOo/aYVTYRsSd28L4Yu2b6RlTykcxaMte7egP0TWoTqLiJLvaae4S7YATXbLZkiqA2eonlQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6koWl1EEaDK1bKXULodqi9HRV5gxBShr8+3W08htdeY=;
 b=WNEZczuXFkBXPTSFw5bVS2TDf5yYVvpZiPUG8E93soc3M6lq70YFdZZOfNVLgYxX+UzYz5zD2QjHd6QEBpq0It+xcKkfd8PCKmCbyKvKtekoDBFV6HYOR+lPpkDsFWRrWxa1ARRSWOyZTLnAtV7oDsSc8DTUSOnL3T3oJbzNOh54DnGnNxmSWLlqT85vl/TWPl+Ms0KQLQMK0kl6Im2hYmjhAnhcEmweYhXZsWf+6okpZdodEeGsZJf/mBHemYRqLrFxW9z3HM+qnbQWG0+IjqK1d7MviC0XM5Gkkhj9TN6gFiEPrXzcddjIqGMTSton7trMdwx5Z0tlhZDPbQHhkA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6koWl1EEaDK1bKXULodqi9HRV5gxBShr8+3W08htdeY=;
 b=51UW1+8ccWMi1kD83MJonO2eDWcW+r/R+QZeylsLN9VkPvrh/FbIWPsqYjUKXzoOc0b7Wx4TEKwT2vggRW/2FAslrN6Iflipx/2yGfp/b9tcxdVpesI7TOkiQu9PSuw/5l00lLD9n+xYJigJT++S4KX1S3kbKaWCDQg5UXJ+znE=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v3 03/17] xen/arm: implement node distance helpers for Arm
Thread-Topic: [PATCH v3 03/17] xen/arm: implement node distance helpers for
 Arm
Thread-Index:
 AQHZc3rmCrt50C+BaEmmIgaK9uo9ea80IxGAgAFPzpCAABDqgIAC3lpQgAG3XoCAAACYQA==
Date: Mon, 24 Apr 2023 07:50:25 +0000
Message-ID:
 <AS8PR08MB79911870174FFEC9385FDDAA92679@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-4-Henry.Wang@arm.com>
 <ac54e04c-58b7-d0c9-2443-bb09258c8bc8@suse.com>
 <AS8PR08MB79912F294EDAC48F835FBB7A92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <bdf33169-4e29-8c50-ff76-16d05df81a14@suse.com>
 <AS8PR08MB7991576C75D0D4482595E7E292669@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <e06fc93f-293f-a873-c9b9-2d5c941168f9@suse.com>
In-Reply-To: <e06fc93f-293f-a873-c9b9-2d5c941168f9@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 80439627BB49324B9F6D7B7E8E7CF2C0.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB7308:EE_|DBAEUR03FT020:EE_|DU0PR08MB7883:EE_
X-MS-Office365-Filtering-Correlation-Id: 987ba83b-85f6-45c9-c3f6-08db44989803
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB7308
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	91783264-d382-4b04-8e3b-08db44989062
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 07:50:37.9002
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 987ba83b-85f6-45c9-c3f6-08db44989803
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB7883

SGkgSmFuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IEZyb206IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4NCj4gU3ViamVjdDogUmU6IFtQQVRDSCB2MyAwMy8xN10g
eGVuL2FybTogaW1wbGVtZW50IG5vZGUgZGlzdGFuY2UgaGVscGVycyBmb3INCj4gQXJtDQo+IA0K
PiA+IFRoYW5rcyBmb3IgdGhlIGV4cGxhbmF0aW9uISBJIHRoaW5rIEkgbm93IHVuZGVyc3RhbmQg
OikgV291bGQgdGhpcyBkaWZmDQo+IGJlbG93DQo+ID4gbG9vayBnb29kIHRvIHlvdSB0aGVuPyBB
cHByZWNpYXRlIHlvdXIgcGF0aWVuY2UuDQo+IA0KPiBMb29rcyBsYXJnZWx5IG9rYXksIGJ1dCBw
b3NzaWJseSBvbmUgcGFydCBjYW4gbm93IGJlIG9taXR0ZWQgKHNlZSBiZWxvdykuDQo+IA0KPiA+
IHVuc2lnbmVkIGNoYXIgX19ub2RlX2Rpc3RhbmNlKG5vZGVpZF90IGZyb20sIG5vZGVpZF90IHRv
KQ0KPiA+ICB7DQo+ID4gLSAgICAvKiBXaGVuIE5VTUEgaXMgb2ZmLCBhbnkgZGlzdGFuY2Ugd2ls
bCBiZSB0cmVhdGVkIGFzIHJlbW90ZS4gKi8NCj4gPiArICAgIGlmICggZnJvbSA9PSB0byApDQo+
ID4gKyAgICAgICAgcmV0dXJuIE5VTUFfTE9DQUxfRElTVEFOQ0U7DQo+ID4gKw0KPiA+ICsgICAg
LyogV2hlbiBOVU1BIGlzIG9mZiwgYW55IGRpc3RhbmNlIHdpbGwgYmUgdHJlYXRlZCBhcyB1bnJl
YWNoYWJsZSAoMHhGRikuDQo+ICovDQo+IA0KPiBQbGVhc2UgYXZvaWQgbWVudGlvbmluZyB0aGUg
YWN0dWFsIHZhbHVlIG9mIDB4RkY6IFRoaXMgc2VydmVzIG5vIHJlYWwNCj4gcHVycG9zZSAoYWZh
aWN0KSBhbmQgaXMgbGlhYmxlIHRvIGdvIHN0YWxlIGF0IHNvbWUgcG9pbnQuDQoNCkdvb2QgcG9p
bnQsIEkgd2lsbCBkcm9wIHRoZSAweEZGLg0KDQo+IA0KPiA+ICAgICAgaWYgKCBudW1hX2Rpc2Fi
bGVkKCkgKQ0KPiA+IC0gICAgICAgIHJldHVybiBOVU1BX1JFTU9URV9ESVNUQU5DRTsNCj4gPiAr
ICAgICAgICByZXR1cm4gTlVNQV9OT19ESVNUQU5DRTsNCj4gDQo+IFdpdGggdGhlIGNvZGUgYmVs
b3cgdGhpcyBpcyBub3cgb25seSBhbiBvcHRpbWl6YXRpb24uIE1pZ2h0IGJlIHdvcnRoDQo+IHNh
eWluZyBzbyBpbiB0aGUgY29tbWVudCAoYXNzdW1pbmcgaGF2aW5nIHRoaXMgb3B0aW1pemF0aW9u
IGlzIGRlZW1lZA0KPiB3b3J0aCBpdCkuDQoNClNvdW5kcyBnb29kLCBpZiB5b3UgdGhpbmsgYmVs
b3cgY29tbWVudCBtYWtlcyBzZW5zZSB0byB5b3UsIEkgd2lsbCBhZGQ6DQoiV2hlbiBOVU1BIGlz
IGRpc2FibGVkLCB0aGUgbm9kZSBkaXN0YW5jZSBzaG91bGQgYWx3YXlzIGJlDQpOVU1BX05PX0RJ
U1RBTkNFLCBkaXJlY3RseSByZXR1cm4gaGVyZSBhcyBhbiBvcHRpbWl6YXRpb24uIg0KDQpLaW5k
IHJlZ2FyZHMsDQpIZW5yeQ0KDQo+IA0KPiBKYW4NCg0K


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 07:52:18 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180389-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180389: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 Apr 2023 07:52:02 +0000

flight 180389 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180389/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-pvops             6 kernel-build   fail in 180386 REGR. vs. 180382

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 180386 pass in 180389
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start             fail pass in 180386
 test-amd64-i386-pair 28 guest-migrate/dst_host/src_host/debian.repeat fail pass in 180386
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat  fail pass in 180386

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 180386 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 180386 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 180386 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 180386 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 180386 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 180386 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 180386 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 180386 n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180382
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180382
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180382
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180382
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180382
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180382
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180382
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180382
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                327ec8d6c2a2223b78d311153a471036e474c5c5
baseline version:
 qemuu                6dd06214892d71cbbdd25daed7693e58afcb1093

Last test of basis   180382  2023-04-23 03:16:49 Z    1 days
Testing same since   180386  2023-04-23 16:07:44 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 327ec8d6c2a2223b78d311153a471036e474c5c5
Merge: 6dd0621489 3ea9be3340
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Sun Apr 23 11:20:36 2023 +0100

    Merge tag 'pull-tcg-20230423' of https://gitlab.com/rth7680/qemu into staging
    
    tcg cleanups:
      - Remove tcg_abort()
      - Split out extensions as known backend interfaces
      - Put the separate extensions together as tcg_out_movext
      - Introduce tcg_out_xchg as a backend interface
      - Clear TCGLabelQemuLdst on allocation
      - Avoid redundant extensions for riscv
    
    # -----BEGIN PGP SIGNATURE-----
    #
    # iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmRE69sdHHJpY2hhcmQu
    # aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV/6jQf6Al9cgeJ6guVMpoRS
    # +sXaTs5U2yaqRvz5gGn2ANFuFgD2QanbWHjS5guTnhbsvq3icyOCpIXIPg/Z04LB
    # fTgAUCF5ut8U8C12HyGq/p4BFoTTWnCGPwY+PB9pMb5LiEcmaSUUz+fSA8xMX1b6
    # EylI8YNd74A9j5PBNbGIXooj8llM71p9YztwQ9V7sPH3ZON4qbPRDgrJsb5TngMa
    # daTpGoW+A9UyG7z0Ie6UuiOyYAzeQqm64WmMlc7UYeb9lL+yxvCq4+MXH2V/SKqg
    # GLOF95DCdqj1EeZCOt0aN1ybZPcYFFkmpXrD1iLu0Mhy7Qo/vghX/eFoFnLleD+Y
    # yM+LTg==
    # =d2hZ
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Sun 23 Apr 2023 09:27:07 AM BST
    # gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
    # gpg:                issuer "richard.henderson@linaro.org"
    # gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [ultimate]
    
    * tag 'pull-tcg-20230423' of https://gitlab.com/rth7680/qemu:
      tcg/riscv: Conditionalize tcg_out_exts_i32_i64
      tcg: Clear TCGLabelQemuLdst on allocation
      tcg: Introduce tcg_out_xchg
      tcg: Introduce tcg_out_movext
      tcg: Split out tcg_out_extrl_i64_i32
      tcg: Split out tcg_out_extu_i32_i64
      tcg: Split out tcg_out_exts_i32_i64
      tcg: Split out tcg_out_ext32u
      tcg: Split out tcg_out_ext32s
      tcg: Split out tcg_out_ext16u
      tcg: Split out tcg_out_ext16s
      tcg: Split out tcg_out_ext8u
      tcg: Split out tcg_out_ext8s
      tcg: Replace tcg_abort with g_assert_not_reached
      tcg: Replace if + tcg_abort with tcg_debug_assert
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 3ea9be33400f14305565a9a094cb6031c07183d5
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 18:43:47 2023 -0700

    tcg/riscv: Conditionalize tcg_out_exts_i32_i64
    
    Since TCG_TYPE_I32 values are kept sign-extended in registers, via "w"
    instructions, we don't need to extend if the register matches.
    This is already relied upon by comparisons.
    
    Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 4745b156b8412ef12af32bd474fee70c25940950
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Thu Apr 6 11:38:56 2023 -0700

    tcg: Clear TCGLabelQemuLdst on allocation
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 767c250310ee0494d37bf7514d24973dd50e38ea
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 21:39:54 2023 -0700

    tcg: Introduce tcg_out_xchg
    
    We will want a backend interface for register swapping.
    This is only properly defined for x86; all others get a
    stub version that always indicates failure.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit b3dfd5fc181433bd43e2163b1a94b11a548edfba
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 21:16:28 2023 -0700

    tcg: Introduce tcg_out_movext
    
    This is common code in most qemu_{ld,st} slow paths, extending the
    input value for the store helper data argument or extending the
    return value from the load helper.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit b8b94ac6753effcfda7880d3b9ac49b530e3d2ab
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 19:58:35 2023 -0700

    tcg: Split out tcg_out_extrl_i64_i32
    
    We will need a backend interface for type truncation.  For those backends
    that did not enable TCG_TARGET_HAS_extrl_i64_i32, use tcg_out_mov.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit b9bfe000f954e1defefb4c917f98bf82c337144b
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 18:56:28 2023 -0700

    tcg: Split out tcg_out_extu_i32_i64
    
    We will need a backend interface for type extension with zero.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 9c6aa274a494ce807e998a3652fa16a3d2da4387
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 18:30:56 2023 -0700

    tcg: Split out tcg_out_exts_i32_i64
    
    We will need a backend interface for type extension with sign.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 9ecf5f61b8f468f17483f325f565802c645983a5
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 18:07:05 2023 -0700

    tcg: Split out tcg_out_ext32u
    
    We will need a backend interface for performing 32-bit zero-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 52bf3398c3a2f51d3eaf8fd30dafcdc0cc7fc571
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 17:50:09 2023 -0700

    tcg: Split out tcg_out_ext32s
    
    We will need a backend interface for performing 32-bit sign-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 379afdff47556f01e75ce2caffd7ae9efa4f1214
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 16:25:22 2023 -0700

    tcg: Split out tcg_out_ext16u
    
    We will need a backend interface for performing 16-bit zero-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 753e42eada5c790bb3727c262f2e368e81cc788f
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 14:49:59 2023 -0700

    tcg: Split out tcg_out_ext16s
    
    We will need a backend interface for performing 16-bit sign-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit d0e66c897f2cdfb0807b76567a17d7811487fac3
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 13:26:51 2023 -0700

    tcg: Split out tcg_out_ext8u
    
    We will need a backend interface for performing 8-bit zero-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 678155b2c50aa3bf37abef6bfe914bf58f49bec2
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 11:17:01 2023 -0700

    tcg: Split out tcg_out_ext8s
    
    We will need a backend interface for performing 8-bit sign-extend.
    Use it in tcg_reg_alloc_op in the meantime.
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 732e89f4c401c3cf175aa84c987a029b9729070b
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 12:09:14 2023 -0700

    tcg: Replace tcg_abort with g_assert_not_reached
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 1a057554cc2e3ece8ed166f12a9b85cd5ec4cbe1
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed Apr 5 12:08:46 2023 -0700

    tcg: Replace if + tcg_abort with tcg_debug_assert
    
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 07:52:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 07:52:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525178.816213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqzr-0005Tp-OB; Mon, 24 Apr 2023 07:52:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525178.816213; Mon, 24 Apr 2023 07:52:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqqzr-0005Ti-Kh; Mon, 24 Apr 2023 07:52:27 +0000
Received: by outflank-mailman (input) for mailman id 525178;
 Mon, 24 Apr 2023 07:52:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqqzq-0005TW-PN
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 07:52:26 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f3717dcc-e274-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 09:52:24 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7378.eurprd04.prod.outlook.com (2603:10a6:20b:1d8::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 07:52:22 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 07:52:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3717dcc-e274-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3dwj6XMKKQa/hrCE0rpy93vapUt0q65wfw6orRzgCE8=;
 b=pAPUm9oW4ZaemTBp13Kfs2qHUMc/igB3KIKVvekMoZ/c4llYooa3eJ3BwbYHWWbOTKc6kLQgMtBIPyr3OSiDu0ySaRzL0m+vnASdS+f9P8QRZ9npLY0fp0E0447APBSq2kftJfyZLlKxF5vbmTSt+eSqQUgzQvWQ7FD7yeK08HrqVzNWRS8nAuKYjsoSlo8uu4fMa8vSmpMVuBdi8vdVBVrh0yt91/c9l8IIQfYdlK6d+fIuTlq202SfqGSmsHkEoBPlQNueSqp0Hjm7n7aVRNcgUqGwBfG3Khp5hhObMG0k+uoLwiwjqoEI3rfO5aNsEi4VthL7whd+ZKk2RhvFQw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5329c881-3113-3740-79c9-18dd0afc7011@suse.com>
Date: Mon, 24 Apr 2023 09:52:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [linux-linus test] 180384: regressions - FAIL
Content-Language: en-US
To: xen-devel@lists.xenproject.org
References: <osstest-180384-mainreport@xen.org>
Cc: osstest service owner <osstest-admin@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <osstest-180384-mainreport@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0030.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7378:EE_
X-MS-Office365-Filtering-Correlation-Id: 3f6e6197-51c5-4f5a-cfcf-08db4498d63a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3f6e6197-51c5-4f5a-cfcf-08db4498d63a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 07:52:22.4942
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IulKFUiiRpOJtCfdC1PtLMKI0896uYEoXG4hyK+QBfxOaQH7SwuBO6llashvxQbVJeIZ0b8gd31UmLRIzzAj8A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7378

On 24.04.2023 01:33, osstest service owner wrote:
> flight 180384 linux-linus real [real]
> flight 180387 linux-linus real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/180384/
> http://logs.test-lab.xenproject.org/osstest/logs/180387/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 180278
>  test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Looks like the earlier problem is back, just in a more limited fashion
(affecting only a single kind of system): It's again "Volume group ...
not found". Of course it could also be that this is now due to an
entirely different underlying reason, but in the absence of other error
messages (at least I couldn't spot any) I'm again suspecting that some
required driver can't be loaded, perhaps because of a missing dependency.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 08:12:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 08:12:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525190.816224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrIu-0000Fe-Kq; Mon, 24 Apr 2023 08:12:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525190.816224; Mon, 24 Apr 2023 08:12:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrIu-0000FX-GX; Mon, 24 Apr 2023 08:12:08 +0000
Received: by outflank-mailman (input) for mailman id 525190;
 Mon, 24 Apr 2023 08:12:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pqrIt-0000FR-4a
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 08:12:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqrIs-0004yr-FL; Mon, 24 Apr 2023 08:12:06 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqrIs-0006gy-9O; Mon, 24 Apr 2023 08:12:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=pCgIVEL+Cqz/tjpINl/21T7ppjaJXybvIn8sEOMR5ZM=; b=HeBY3KIE8fg088QfG2fi6ito/i
	DyXLJbjVfcjRNkyoNYbkfQL3TOmiKLHxbZ4y8XYO24xpE8Y6Whe55nvDB3HaCobvHPlJwlXWIuNBP
	+nZIE9vWU+9Ac18EnqBuz3JRi7c+LrCEiy4/K8kZVuppDaIwOAdlWf1l0/UXPRfNs2yI=;
Message-ID: <bac775c3-8a28-bd69-8c2a-bd1c9a8d6d4c@xen.org>
Date: Mon, 24 Apr 2023 09:12:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 01/10] xen/arm: domain_build: Track unallocated pages
 using the frame number
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-2-ayan.kumar.halder@amd.com>
 <77b015b2-cc30-0534-4e0c-c392b5e8a7b3@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <77b015b2-cc30-0534-4e0c-c392b5e8a7b3@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 19/04/2023 13:05, Michal Orzel wrote:
> On 13/04/2023 19:37, Ayan Kumar Halder wrote:
>>
>>
>> rangeset_{xxx}_range() functions are invoked with 'start' and 'size' as
>> arguments which are either 'uint64_t' or 'paddr_t'. However, these functions
>> accept 'unsigned long' for 'start' and 'size', and 'unsigned long' is 32 bits
>> on Arm32. Thus, there is an implicit downcast from 'uint64_t'/'paddr_t' to
>> 'unsigned long' when invoking rangeset_{xxx}_range().
>>
>> So, it may seem there is a possibility of loss of data due to truncation.
>>
>> In reality, 'start' and 'size' are always page aligned, and Arm32 currently
>> supports 40 bits as the width of the physical address.
>> So if the addresses are page aligned, the last 12 bits contain zeroes.
>> Thus, we could instead pass the page frame number, which fits in 28 bits
>> (40 - 12 on Arm32) and can therefore be represented using 'unsigned long'.
>>
>> On Arm64, this change will not induce any adverse side effect as the width of
>> physical address is 48 bits.
> NIT: This reads as const, so it would be better to write: "as the max supported width of ..."
> 
>> Thus, the width of 'gfn' (ie 48 - 12 = 36) can be
>> represented using 'unsigned long' (which is 64 bits wide).
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> 
> With or without (after all this is just a commit msg):

It is always good to have an accurate commit message for the future reader.

> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Acked-by: Julien Grall <jgrall@amazon.com>

-- 
Julien Grall
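
The truncation concern in the quoted commit message can be sketched in C (a minimal illustration with hypothetical names, not the actual Xen rangeset API or types): a page-aligned 40-bit physical address loses no information when converted to a frame number, which then fits comfortably in a 32-bit 'unsigned long'.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: on Arm32, paddr_t is 64 bits wide while
 * unsigned long is 32 bits, so passing a 40-bit address directly
 * to a function taking unsigned long would truncate it. */
#define PAGE_SHIFT 12
typedef uint64_t paddr_t;

/* A page-aligned 40-bit address needs only 40 - 12 = 28 bits once
 * converted to a frame number, so the result fits in 32 bits. */
static uint32_t paddr_to_pfn(paddr_t addr)
{
    return (uint32_t)(addr >> PAGE_SHIFT);
}
```

The same reasoning gives 36-bit frame numbers for a 48-bit address space on Arm64, where 'unsigned long' is 64 bits anyway.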


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 08:13:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 08:13:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525197.816233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrJf-0000ka-T2; Mon, 24 Apr 2023 08:12:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525197.816233; Mon, 24 Apr 2023 08:12:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrJf-0000kT-Pj; Mon, 24 Apr 2023 08:12:55 +0000
Received: by outflank-mailman (input) for mailman id 525197;
 Mon, 24 Apr 2023 08:12:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3wwt=AP=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pqrJd-0000kD-Pc
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 08:12:53 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on20627.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::627])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cdc4ed30-e277-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 10:12:51 +0200 (CEST)
Received: from DM6PR06CA0061.namprd06.prod.outlook.com (2603:10b6:5:54::38) by
 CH2PR12MB4117.namprd12.prod.outlook.com (2603:10b6:610:ae::13) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.33; Mon, 24 Apr 2023 08:12:47 +0000
Received: from DM6NAM11FT075.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:54:cafe::48) by DM6PR06CA0061.outlook.office365.com
 (2603:10b6:5:54::38) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 08:12:47 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT075.mail.protection.outlook.com (10.13.173.42) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 08:12:46 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 24 Apr
 2023 03:12:46 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 24 Apr 2023 03:12:43 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdc4ed30-e277-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3mRq/uiVtxaRxCW5b3OqAYLbyNMTIACsCw6yt1MILfY=;
 b=O3xIby8SjFNWGAdP4GN7hnVddusM2O1FU8KFOoZDmMW28vKxQbGtueSWjBUPrfSI/6C0M5LmjbaIL0Y66IfOu1hJotWaqihe67/qDDlP7DPYB1wUMg7q2Aje+8fpbcLiDhlSeGMM8NWNhzaMf+WHTQra+gQ4fQ2emsk7ReDUX9M=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <47f32844-e26e-01a6-63e1-a774e2038301@amd.com>
Date: Mon, 24 Apr 2023 10:12:43 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 09/10] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-10-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230413173735.48387-10-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT075:EE_|CH2PR12MB4117:EE_
X-MS-Office365-Filtering-Correlation-Id: af791226-6ce2-4d44-633b-08db449bb00b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 08:12:46.6588
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: af791226-6ce2-4d44-633b-08db449bb00b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT075.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4117

Hi Ayan,

On 13/04/2023 19:37, Ayan Kumar Halder wrote:
> 
> 
> Restructure the code so that one can use pa_range_info[] table for both
> ARM_32 as well as ARM_64.
> 
> Also, remove the hardcoding of P2M_ROOT_ORDER and P2M_ROOT_LEVEL, as
> p2m_root_order can be obtained from pa_range_info[].root_order and
> p2m_root_level from pa_range_info[].sl0.
> 
> Refer to ARM DDI 0406C.d ID040418, B3-1345,
> "Use of concatenated first-level translation tables
> 
> ...However, a 40-bit input address range with a translation granularity of 4KB
> requires a total of 28 bits of address resolution. Therefore, a stage 2
> translation that supports a 40-bit input address range requires two concatenated
> first-level translation tables,..."
> 
> Thus, root-order is 1 for 40-bit IPA on ARM_32.
> 
> Refer to ARM DDI 0406C.d ID040418, B3-1348,
> 
> "Determining the required first lookup level for stage 2 translations
> 
> For a stage 2 translation, the output address range from the stage 1
> translations determines the required input address range for the stage 2
> translation. The permitted values of VTCR.SL0 are:
> 
> 0b00 Stage 2 translation lookup must start at the second level.
> 0b01 Stage 2 translation lookup must start at the first level.
> 
> VTCR.T0SZ must indicate the required input address range. The size of the input
> address region is 2^(32-T0SZ) bytes."
> 
> Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = -8 when the size of input
> address region is 2^40 bytes.
> 
> Thus, pa_range_info[].t0sz = 1 (VTCR.S) | 8 (VTCR.T0SZ) ie 11000b which is 24.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from -
> 
> v3 - 1. New patch introduced in v4.
> 2. Restructure the code such that pa_range_info[] is used both by ARM_32 as
> well as ARM_64.
> 
> v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and P2M_ROOT_LEVEL.
> The reason being root_order will not be always 1 (See the next patch).
> 2. Updated the commit message to explain t0sz, sl0 and root_order values for
> 32-bit IPA on Arm32.
> 3. Some sanity fixes.
> 
>  xen/arch/arm/include/asm/p2m.h |  8 +-------
>  xen/arch/arm/p2m.c             | 34 ++++++++++++++++++----------------
>  2 files changed, 19 insertions(+), 23 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
> index 91df922e1c..28c68428d3 100644
> --- a/xen/arch/arm/include/asm/p2m.h
> +++ b/xen/arch/arm/include/asm/p2m.h
> @@ -14,16 +14,10 @@
>  /* Holds the bit size of IPAs in p2m tables.  */
>  extern unsigned int p2m_ipa_bits;
> 
> -#ifdef CONFIG_ARM_64
>  extern unsigned int p2m_root_order;
>  extern unsigned int p2m_root_level;
> -#define P2M_ROOT_ORDER    p2m_root_order
> +#define P2M_ROOT_ORDER p2m_root_order
>  #define P2M_ROOT_LEVEL p2m_root_level
> -#else
> -/* First level P2M is always 2 consecutive pages */
> -#define P2M_ROOT_ORDER    1
> -#define P2M_ROOT_LEVEL 1
> -#endif
> 
>  struct domain;
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 948f199d84..4583658f92 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -19,9 +19,9 @@
> 
>  #define INVALID_VMID 0 /* VMID 0 is reserved */
> 
> -#ifdef CONFIG_ARM_64
>  unsigned int __read_mostly p2m_root_order;
>  unsigned int __read_mostly p2m_root_level;
> +#ifdef CONFIG_ARM_64
>  static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>  /* VMID is by default 8 bit width on AArch64 */
>  #define MAX_VMID       max_vmid
> @@ -2265,16 +2265,6 @@ void __init setup_virt_paging(void)
>      /* Setup Stage 2 address translation */
>      register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
> 
> -#ifdef CONFIG_ARM_32
> -    if ( p2m_ipa_bits < 40 )
> -        panic("P2M: Not able to support %u-bit IPA at the moment\n",
> -              p2m_ipa_bits);
> -
> -    printk("P2M: 40-bit IPA\n");
> -    p2m_ipa_bits = 40;
> -    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
> -    val |= VTCR_SL0(0x1); /* P2M starts at first level */
> -#else /* CONFIG_ARM_64 */
>      static const struct {
>          unsigned int pabits; /* Physical Address Size */
>          unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
> @@ -2283,19 +2273,24 @@ void __init setup_virt_paging(void)
>      } pa_range_info[] __initconst = {
>          /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */
>          /*      PA size, t0sz(min), root-order, sl0(max) */
> -        [0] = { 32,      32/*32*/,  0,          1 },
> -        [1] = { 36,      28/*28*/,  0,          1 },
> -        [2] = { 40,      24/*24*/,  1,          1 },
> +        [0] = { 40,      24/*24*/,  1,          1 },
Something does not add up here.
This table maintains the same order as the PARange field of the MMFR0 register,
so that later on we can do:
pa_range_info[system_cpuinfo.mm64.pa_range]

When PARange is 0, the PA size is 32; when it is 1, 36; and so on.
However, you modified this behavior, so now, if PARange is 0, this table
would return a PA size of 40. This is wrong.

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 08:14:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 08:14:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525201.816243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrKl-0001JD-6B; Mon, 24 Apr 2023 08:14:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525201.816243; Mon, 24 Apr 2023 08:14:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrKl-0001J6-3A; Mon, 24 Apr 2023 08:14:03 +0000
Received: by outflank-mailman (input) for mailman id 525201;
 Mon, 24 Apr 2023 08:14:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pqrKk-0001Iw-64
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 08:14:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqrKj-00050e-PM; Mon, 24 Apr 2023 08:14:01 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqrKj-0006jF-Jy; Mon, 24 Apr 2023 08:14:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=kEq/jg49Uhuq/7+FNqe+k6jASHTbFCtNKGJnVJlIW38=; b=MwPSRSaH1ODDp+/zCzqErRbxqh
	xzXmXBKHbkJmaQBghQTMtfF+Qiy69mZwdzPTchWb8SwtFn1juPWZ80IovfNMCGmBzegNtYSEWcO34
	jXZC2o7YDtQvFhd24Lx12YaWWEEpMXtxoCC+NmaBr9Jf55vb5YpXAAuthM0KYcYF1R4Y=;
Message-ID: <879d527e-16b7-4e19-4864-ddace80597c8@xen.org>
Date: Mon, 24 Apr 2023 09:13:59 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 07/10] xen/arm: Restrict zeroeth_table_offset for ARM_64
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-8-ayan.kumar.halder@amd.com>
 <cc29fe9f-5df6-816f-aeee-b8a1933cf3e8@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <cc29fe9f-5df6-816f-aeee-b8a1933cf3e8@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 21/04/2023 09:03, Michal Orzel wrote:
> On 13/04/2023 19:37, Ayan Kumar Halder wrote:
>>
>>
>> When 32 bit physical addresses are used (ie PHYS_ADDR_T_32=y),
>> "va >> ZEROETH_SHIFT" causes an overflow.
>> Also, there is no zeroeth level page table on Arm32.
>>
>> Also took the opportunity to clean up dump_pt_walk(). One could use
>> DECLARE_OFFSETS() macro instead of declaring the declaring an array
> s/declaring the declaring/declaring/
> 
>> of page table offsets.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 08:21:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 08:21:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525208.816263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrRp-0003Ca-7v; Mon, 24 Apr 2023 08:21:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525208.816263; Mon, 24 Apr 2023 08:21:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrRp-0003CR-4W; Mon, 24 Apr 2023 08:21:21 +0000
Received: by outflank-mailman (input) for mailman id 525208;
 Mon, 24 Apr 2023 08:21:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QOl=AP=amd.com=Xenia.Ragiadakou@srs-se1.protection.inumbo.net>)
 id 1pqrRn-0003By-Qf
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 08:21:19 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2062a.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fbd993db-e278-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 10:21:17 +0200 (CEST)
Received: from BN8PR04CA0041.namprd04.prod.outlook.com (2603:10b6:408:d4::15)
 by DM6PR12MB4927.namprd12.prod.outlook.com (2603:10b6:5:20a::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 24 Apr
 2023 08:21:14 +0000
Received: from BN8NAM11FT017.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:d4:cafe::ec) by BN8PR04CA0041.outlook.office365.com
 (2603:10b6:408:d4::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 08:21:14 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT017.mail.protection.outlook.com (10.13.177.93) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 08:21:13 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 24 Apr
 2023 03:21:13 -0500
Received: from 10.0.2.15 (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 24 Apr 2023 03:21:11 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fbd993db-e278-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KZipKmYEDaWvtt/tucRuUwLiBzI025dBtpTsrwXikUSP5avmUe4hKJzs1G2LldqXap6XO9ZeqMiO1OwzonNaNXx5c/fX4JHS+Smg1qm+j+ZxQLUNDxm17WjwXZFXMlcRn7DSqgBEgF+u7tIw9uklx+7jum+8ef88lMXtFozD6UJRLr2suxM0fBhYbklllDpIdc/5koNc/CiuwpbbpDe5tu+ui8eOqvCuG2pTIJpyUpjt3xXtQ+2D+kTGM9TKSubkUPx+42vlYW9Rft7adYtvAeH7bo67asfESQaxvgcHgCYYsbxPE05TFwKSWjNlQuSAXaYNOtMuRqnupTjmNy39Tg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XWb82N2r/vIWJiXtFNEFWs/ZcXzW3uEgMOIb6Mqkk14=;
 b=e1RCL6otz0kmLETyVCNUj3QS6YSYU1Ohk0SblkS/M1n8AQ7xBt3leEaxPil/okBC1EATWo1xAZEuc9ZlxDqYP62SBxhG/rOPPOw/ljhzVBoVHLHTOfRcUVxlqBRKUVkOpDMfNagFWPGt/S1e6Y5XjzSzwUsxzatJFZicnYWwwlHpgIcoi5h4tvWvWzJ7Qawj/SexIDQijRipYBsmc9BHm6zfk8e1U7Q2rqOF3zMMaWv/XS1zb/La+aamlTpE61Yyn4EGQTggS6M8VArB6zmtlj/VpBSb1poeA+3e09D+hh23wwiFRnH6hQ3jWCrODj7uWq1ticKgBz6d2JRmPlUXzQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XWb82N2r/vIWJiXtFNEFWs/ZcXzW3uEgMOIb6Mqkk14=;
 b=Wtd0XFnLQxCFgDwNmNRjH5XVBmBmyfNkTEP/oHOj912BDcOOvb8FvMrDH23w+to1qCzY5LLe00FjcGSWO+29G/1LZeckpIBGYxUU3mPeHvo3sBAoQPKhVPA0Bd0hezuKon841tdP2I607nIlEhOg20a4vGlWwHDLAYycwYG0AVQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Xenia Ragiadakou <xenia.ragiadakou@amd.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 1/3] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept()
Date: Mon, 24 Apr 2023 11:20:36 +0300
Message-ID: <20230424082038.541122-2-xenia.ragiadakou@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
References: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT017:EE_|DM6PR12MB4927:EE_
X-MS-Office365-Filtering-Correlation-Id: 2d336e64-144f-4b06-daba-08db449cde7b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	R9mfFcbaNDyE1ILb8hPrTj0VzsfYHmpky3OVp0aSVfdJ+PQzoPcybJlOD4qCx/8iKQEgEnDSUPjQIpHezYfnzi0ibX7MvXA/yhmBLj2A8vs6zvJTtEsJ0cm+thLKv+zVAgEKtCzY/56N2m7CD2vpUk0FNTSahRkNTLCJLjmfRl+jj3uQtkH6KGJ8efBSGO6sy2XG03j8bnWHQagEjBnrNr66XWS04skNjw3WoUeNHPucUe7fsnizwe7WkgSlUoXxYBqZnJLVkBWDN0ZYzM1ckxDeHskf939PmXdADzlI1B9xzWQ8IYPoGnduTfSzAJuKm7TpA6CkmVB6DY9FwRY5H7Z12mvOEfqLCclYhCfGYruOE4L9jG0nmJNNH2qTYqLyabkAl+kiEcBH3b0epJGFSWA8HT1sTwkdtPj+Rq2xnzACSLOMjpnRfblu0jlhz9bwCTjF6fAfSU4giZ41l1g9BKt8PNsmHWmbLCm6C6/AhCxV4BQhmTYwo/FyMlWNndv7cy+zAlyze+3th4jMsXHn/PgrVeJsZCF3HqdFLimTIuDEm0QLyf2W9q/MoNk/TB6r/9iFrunT0wdwrGTxpkcwYvRM3M6hwkj6751B+bpQop2LmORyvHQL7noxWRQ/ohUCRT5g65vaVgNwJMMkB10cnPV8J5yht4v45EEizQCjsKNzABkKBjHsM54b6iX6QnX7ty30xCoUjEsZYynL8VGUW859eyn+kHMn+ixWhGNLv04=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(346002)(376002)(396003)(451199021)(46966006)(40470700004)(36840700001)(36756003)(8936002)(81166007)(66899021)(8676002)(40460700003)(5660300002)(44832011)(2906002)(82310400005)(86362001)(336012)(1076003)(70206006)(40480700001)(478600001)(6666004)(54906003)(16576012)(186003)(70586007)(2616005)(36860700001)(82740400003)(356005)(316002)(83380400001)(6916009)(4326008)(47076005)(41300700001)(426003)(26005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 08:21:13.9896
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2d336e64-144f-4b06-daba-08db449cde7b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT017.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4927

This change aims to render the control interface of MSR intercepts identical
between the SVM and VMX code, so that MSR intercepts can be controlled from
common code through an hvm_funcs callback.

Create two new functions:
- svm_set_msr_intercept() enables interception of read/write accesses to the
  given MSR by setting the corresponding read/write bits in the MSRPM, based
  on the flags
- svm_clear_msr_intercept() disables interception of read/write accesses to
  the given MSR by clearing the corresponding read/write bits in the MSRPM,
  based on the flags

More specifically:
- if the flag is MSR_R, the functions {set,clear} the MSRPM bit that controls
  read access to the MSR
- if the flag is MSR_W, the functions {set,clear} the MSRPM bit that controls
  write access to the MSR
- if the flag is MSR_RW, the functions {set,clear} both MSRPM bits

Place the definitions of the flags in asm/hvm/hvm.h because they are intended
to be used by the VMX code as well.

Remove svm_intercept_msr() and MSR_INTERCEPT_* definitions, and use the new
functions and flags instead.

The macros svm_{en,dis}able_intercept_for_msr() will be retained for now, but
they will eventually be open-coded in a follow-up patch, because only one of
them is actually used, and because the meaning of "enabling/disabling" MSR
intercepts is not consistent throughout the code (for instance, the hvm_func
enable_msr_interception() sets only the write MSRPM bit, not both).
In the meantime, take the opportunity to remove excess parentheses.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
---

Changes in v2:
  - restore BUG_ON(), reported by Jan
  - coding style fixes, reported by Jan
  - remove excess parentheses from macros, suggested by Jan
  - change from int to unsigned int the type of param flags, reported by Jan
  - change from uint32_t to unsigned int the type of param msr, reported by Jan

 xen/arch/x86/cpu/vpmu_amd.c             |  9 +--
 xen/arch/x86/hvm/svm/svm.c              | 74 ++++++++++++++++---------
 xen/arch/x86/include/asm/hvm/hvm.h      |  4 ++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h | 15 ++---
 4 files changed, 64 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 18266b9521..da8e906972 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -154,8 +154,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+        svm_clear_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_W);
+        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
     }
 
     msr_bitmap_on(vpmu);
@@ -168,8 +169,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
     }
 
     msr_bitmap_off(vpmu);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 59a6e88dff..3ee0805ff3 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -277,23 +277,35 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
     return msr_bit;
 }
 
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int flags)
+void svm_set_msr_intercept(struct vcpu *v, unsigned int msr, unsigned int flags)
 {
-    unsigned long *msr_bit;
-    const struct domain *d = v->domain;
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
-    msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
     BUG_ON(msr_bit == NULL);
+
     msr &= 0x1fff;
 
-    if ( flags & MSR_INTERCEPT_READ )
+    if ( flags & MSR_R )
          __set_bit(msr * 2, msr_bit);
-    else if ( !monitored_msr(d, msr) )
-         __clear_bit(msr * 2, msr_bit);
-
-    if ( flags & MSR_INTERCEPT_WRITE )
+    if ( flags & MSR_W )
         __set_bit(msr * 2 + 1, msr_bit);
-    else if ( !monitored_msr(d, msr) )
+}
+
+void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             unsigned int flags)
+{
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
+
+    BUG_ON(msr_bit == NULL);
+
+    msr &= 0x1fff;
+
+    if ( monitored_msr(v->domain, msr) )
+        return;
+
+    if ( flags & MSR_R )
+        __clear_bit(msr * 2, msr_bit);
+    if ( flags & MSR_W )
         __clear_bit(msr * 2 + 1, msr_bit);
 }
 
@@ -302,7 +312,10 @@ static void cf_check svm_enable_msr_interception(struct domain *d, uint32_t msr)
     struct vcpu *v;
 
     for_each_vcpu ( d, v )
-        svm_intercept_msr(v, msr, MSR_INTERCEPT_WRITE);
+    {
+        svm_set_msr_intercept(v, msr, MSR_W);
+        svm_clear_msr_intercept(v, msr, MSR_R);
+    }
 }
 
 static void svm_save_dr(struct vcpu *v)
@@ -319,10 +332,10 @@ static void svm_save_dr(struct vcpu *v)
 
     if ( v->domain->arch.cpuid->extd.dbext )
     {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         rdmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         rdmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -350,10 +363,10 @@ static void __restore_debug_registers(struct vmcb_struct *vmcb, struct vcpu *v)
 
     if ( v->domain->arch.cpuid->extd.dbext )
     {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -584,22 +597,29 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     vmcb_set_exception_intercepts(vmcb, bitmap);
 
     /* Give access to MSR_SPEC_CTRL if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_SPEC_CTRL,
-                      cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibrs )
+        svm_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
     /*
      * Always trap write accesses to VIRT_SPEC_CTRL in order to cache the guest
      * setting and avoid having to perform a rdmsr on vmexit to get the guest
      * setting even if VIRT_SSBD is offered to Xen itself.
      */
-    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
-                      cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
-                      !cpu_has_amd_ssbd ?
-                      MSR_INTERCEPT_WRITE : MSR_INTERCEPT_RW);
+    if ( cp->extd.virt_ssbd && cpu_has_virt_ssbd && !cpu_has_amd_ssbd )
+    {
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_W);
+        svm_clear_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_R);
+    }
+    else
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);
 
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_PRED_CMD,
-                      cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibpb )
+        svm_clear_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
 }
 
 void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 04cbd4ff24..5740a64281 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -250,6 +250,10 @@ extern struct hvm_function_table hvm_funcs;
 extern bool_t hvm_enabled;
 extern s8 hvm_port80_allowed;
 
+#define MSR_R       BIT(0, U)
+#define MSR_W       BIT(1, U)
+#define MSR_RW      (MSR_W | MSR_R)
+
 extern const struct hvm_function_table *start_svm(void);
 extern const struct hvm_function_table *start_vmx(void);
 
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index a1a8a7fd25..94deb0a236 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -603,13 +603,14 @@ void svm_destroy_vmcb(struct vcpu *v);
 
 void setup_vmcb_dump(void);
 
-#define MSR_INTERCEPT_NONE    0
-#define MSR_INTERCEPT_READ    1
-#define MSR_INTERCEPT_WRITE   2
-#define MSR_INTERCEPT_RW      (MSR_INTERCEPT_WRITE | MSR_INTERCEPT_READ)
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int enable);
-#define svm_disable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_NONE)
-#define svm_enable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_RW)
+void svm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                           unsigned int flags);
+void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             unsigned int flags);
+#define svm_disable_intercept_for_msr(v, msr) \
+    svm_clear_msr_intercept(v, msr, MSR_RW)
+#define svm_enable_intercept_for_msr(v, msr) \
+    svm_set_msr_intercept(v, msr, MSR_RW)
 
 /*
  * VMCB accessor functions.
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 08:21:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 08:21:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525207.816253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrRl-0002wc-0e; Mon, 24 Apr 2023 08:21:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525207.816253; Mon, 24 Apr 2023 08:21:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrRk-0002wV-UA; Mon, 24 Apr 2023 08:21:16 +0000
Received: by outflank-mailman (input) for mailman id 525207;
 Mon, 24 Apr 2023 08:21:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QOl=AP=amd.com=Xenia.Ragiadakou@srs-se1.protection.inumbo.net>)
 id 1pqrRj-0002wP-Vm
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 08:21:16 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20603.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::603])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f95f1e33-e278-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 10:21:13 +0200 (CEST)
Received: from BN9PR03CA0357.namprd03.prod.outlook.com (2603:10b6:408:f6::32)
 by SA3PR12MB8024.namprd12.prod.outlook.com (2603:10b6:806:312::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.45; Mon, 24 Apr
 2023 08:21:10 +0000
Received: from BN8NAM11FT044.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f6:cafe::77) by BN9PR03CA0357.outlook.office365.com
 (2603:10b6:408:f6::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 08:21:10 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT044.mail.protection.outlook.com (10.13.177.219) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 08:21:09 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 24 Apr
 2023 03:21:09 -0500
Received: from 10.0.2.15 (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 24 Apr 2023 03:21:07 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f95f1e33-e278-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ODVvUb8/A1qqPkqZu9yKtZIpIAy/SDXJ16edM9t6iRCp70Z4YsTdSE1MrzpGUazuraQn6hN2h/dAtob5KI0jepWNeGvS9UJXmWdYxuUj1ihFLDoryXj/KKHkbLNRGNzfSyx45j84Qn4r12zgcooWbA+glFy4+gYhFj9NtVpSeUN16G9amay6gbLyqTL6E0CazQUayVxjSje3cHpYW2eAxmW7+TL9EDc7M5J34h21TNbSPyWcDaGTAj85lIiJhio8L1K8Wq5gPYYuRQZZxsevdu4TOE2P5FQXE9SF8LxbOZg8Sdj8OEnbVfu5uQD4eBz6GYETB8D/zC3j/vERd+a/hg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/AT/BDH1B9LopiCynqMsAL8ehH/tawPNLS5lO/XRNmc=;
 b=P9+3r9+dEY+Tqvz4mJ7HQsag9IMrkDV9ViEZFHw6W7wQHJxjCM0G2yXpXeCgs8fgI+Enz5aV2AH828JeZxtrbVPLVNq01M4K359N4vZc2xNuKxo15V+lE8Dyifq3kVoFosKVlnKLEZf4yi1mv1559l3yn/N+/NIILqzne/Bvmi8ChB9rzB3eKoiBzaTsd0rfgLb0ap1lCnUmkhQXEeAjmdZGkik3buYSfR4h5v6uXRNgYwhT527X603Id0gVCDJWntFHCAltDxlyXv1Zv5AxQiNzQ2Fd53iY0Kr1cAx6bBX/sZxHmqOluhQ54aTE22R+kfMyC5WLoY4dRGhRPM0CVA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/AT/BDH1B9LopiCynqMsAL8ehH/tawPNLS5lO/XRNmc=;
 b=D6JoYHdkJVKqGfvq7KX7gr2DSJ38KW5FloBL+xJNqJ9XaDTjvQOYJqsbHUG/XAA6ZtZcppBEo8Y/Yf/wU2/8nDIKt1vWT2YULtVM/xOh+ImwQmQ758nhaz9SSrenMKaqk2qMsX4QviT+LEOYdpljtMO2a+69UEwz78zNIwDZ9wA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Xenia Ragiadakou <xenia.ragiadakou@amd.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>
Subject: [PATCH v2 0/3] hvm: add hvm_funcs hooks for msr intercept handling
Date: Mon, 24 Apr 2023 11:20:35 +0300
Message-ID: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT044:EE_|SA3PR12MB8024:EE_
X-MS-Office365-Filtering-Correlation-Id: e0c06adc-0a2c-483e-cf36-08db449cdc05
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7Abs+Qrpc1KbgY1r4XDSzOljBor7/pIBCakAwdD3yEypBNfAy1fBSWwbOuNxrrBkmVbh3jBRW1RvPIMdfEgxIN+rWE5y/6FiAeIuw3C8qS4NrOq47y0Y4lguCVyl/ItFm/jqCbwy6xhMRYWVmnm5r90ZwoYrIM+ywpsxUFu2URVLw2whTyYtldL68ajqQ6Yr0TL+9ufMhHUyQxblUmGC18FcbrRtSKRMxrryydAcmcBXPHdvB210rW0MMkWD+wcyQsy61SX25Uf3GSb02H35lKvETj5K5lkby4ctslYGnRWKu5sU4ix9nKDGEmZ/afg+WrWU/AS2P9bbQVlngZ+CGXW6k3MHUuovXG3u1plYdNXs4sWjGqN++Hk1OQ+BEDiQD+d/qPSlG+hY7TKrSSr9RnFkh3C13dOxzxkUigz9rmyBAORgb0/B/Swi5S4CATCdEZoG+hpb5kLVNVSI3Rxo9muor8pkwSzySAnNCzrXV013ZOKDT6A9WoRJveMuBZEHCgDiYArzPGWNQiNSAhHTw4R2ojyRNT7JNgNeQlDxPFCJDyaff4Tl2IuUJJWctpi/oYeGpjtmReqmB8lMi+hQDxGf/OAPA4kLgP3jZ5q+vDYdqNx0JUhILFmd7py8PWmbezT+pHTNX1bw82k8nnBwr3MpAXNrM6q0yHSV8sJQ0H1mMPKkkL4eNb/pgXFzeJh8XLLYsmcellGTCfYPloVcS8GhWLRh/N64bTFN4Zjoiz0=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(396003)(136003)(346002)(451199021)(46966006)(40470700004)(36840700001)(478600001)(6666004)(8936002)(8676002)(316002)(82740400003)(6916009)(4326008)(70206006)(70586007)(40480700001)(41300700001)(54906003)(81166007)(356005)(40460700003)(186003)(83380400001)(2906002)(1076003)(26005)(426003)(47076005)(336012)(82310400005)(36756003)(36860700001)(5660300002)(86362001)(2616005)(16576012)(44832011)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 08:21:09.9543
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e0c06adc-0a2c-483e-cf36-08db449cdc05
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT044.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB8024

This patch series aims to make the MSR intercept handling, performed in
vpmu code, virtualization-technology agnostic.
It creates a common interface for setting/clearing the MSR intercepts and
then adds hooks to the corresponding hvm_funcs table, so that the SVM/VMX
specific handlers can be called through a generic HVM wrapper function.

Version 2 addresses the comments made on version 1 to ease further review.

There is still a pending question, raised by Jan, of whether there could be
use cases other than vpmu that would require MSR intercept handling to be
performed outside of virtualization-technology-specific code, and whether
this abstraction is actually useful to have.

Xenia Ragiadakou (3):
  x86/svm: split svm_intercept_msr() into
    svm_{set,clear}_msr_intercept()
  x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags
  x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept()

 xen/arch/x86/cpu/vpmu_amd.c             |  9 +--
 xen/arch/x86/cpu/vpmu_intel.c           | 24 ++++----
 xen/arch/x86/hvm/svm/svm.c              | 75 ++++++++++++++++---------
 xen/arch/x86/hvm/vmx/vmcs.c             | 40 ++++++-------
 xen/arch/x86/hvm/vmx/vmx.c              | 46 +++++++--------
 xen/arch/x86/include/asm/hvm/hvm.h      | 34 +++++++++++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h | 15 ++---
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 16 ++----
 8 files changed, 155 insertions(+), 104 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 08:21:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 08:21:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525209.816272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrRt-0003Tu-F2; Mon, 24 Apr 2023 08:21:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525209.816272; Mon, 24 Apr 2023 08:21:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrRt-0003Tn-Bt; Mon, 24 Apr 2023 08:21:25 +0000
Received: by outflank-mailman (input) for mailman id 525209;
 Mon, 24 Apr 2023 08:21:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QOl=AP=amd.com=Xenia.Ragiadakou@srs-se1.protection.inumbo.net>)
 id 1pqrRs-0002wP-1Y
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 08:21:24 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2062a.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fdcc21ab-e278-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 10:21:21 +0200 (CEST)
Received: from DM6PR07CA0105.namprd07.prod.outlook.com (2603:10b6:5:330::8) by
 SN7PR12MB7911.namprd12.prod.outlook.com (2603:10b6:806:32a::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.31; Mon, 24 Apr
 2023 08:21:17 +0000
Received: from DM6NAM11FT005.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:330:cafe::b2) by DM6PR07CA0105.outlook.office365.com
 (2603:10b6:5:330::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 08:21:17 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT005.mail.protection.outlook.com (10.13.172.238) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 08:21:16 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 24 Apr
 2023 03:21:16 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 24 Apr
 2023 03:21:16 -0500
Received: from 10.0.2.15 (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 24 Apr 2023 03:21:14 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdcc21ab-e278-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=l8FC0XbDaSGc5RHITZFFOvPU1twvx0+O2uJ/L+UrfaSxFBxo53IW8SjpywggItLJNUFGc5KUm10afLNpO5Qlc6cFpJ5CXjJojjUEvvMbvUw32BnjueLMjDOSH1Mm2LgOBltDaS49GJjQUuv0pjxTliQpRObNxLBkCkL/KPpZybN8jHFVpV16pV6T3nZYfBpuGtagFbx4SynSUlLe+qNo7+DmOFSoeJiHl5hXHUTWlteABN7FAja8KdyCNJEMRYZoxtflPd2waoqZAlh+z9Oo/0m/mzqTKUdMZgCNIDkmghavzqUZFr5RY20A+CdarAufqyd8thDklhHnZJcsaxdi2Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vLJl8NuTEtKiCgcahKscn4Cu5cCYjQA1mE/ixk7iJfk=;
 b=DZO5UJ9J4acgEvg3PjhowLVUmjjMZAXiFQyCBhsCW4RBhbUXElEV0EwApxJ7KYBqHjxEzNmLXpO9BMRMIZ8838rUTWJhIf3PD/D3FX8m3lAoYhG6ClAhkderGWcS8J9D03T/xZQBcko3hTNlSOXRI4208J28R2BNyB0EWrwAYzfS34aRume7zibjualDY32Ukb+9lYhREx2bhxG5+Qt5Xx1c0r1gnWbP1PPbikS8Tmd6Rx+jqnRZ83pC4VM3K/leLRSctIQ3jI7yXQ2RoNjyK/2sA+dGuvolZ1yj/NDvgLzoQPbWuLaU14dUQSyuOSNj0itFm1wuCjn2d5PyZx8xSg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vLJl8NuTEtKiCgcahKscn4Cu5cCYjQA1mE/ixk7iJfk=;
 b=TOWpWF+5TokNpEQNiQF28yIZ0LjQ1W71JL5pK5ez1igh4yfyNrDQeULEq0Af/MDHL1nrCplSAocn+2xrBUIjy/GHROxA4zfoaF1gKPkA0yjwGHkuZlSJ5qp2L70ZZOLX3X4oNQFs6MRJbP5aZOhy+6tRAOVcYP7MOZOY/fe2bME=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Xenia Ragiadakou <xenia.ragiadakou@amd.com>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 2/3] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags
Date: Mon, 24 Apr 2023 11:20:37 +0300
Message-ID: <20230424082038.541122-3-xenia.ragiadakou@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
References: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT005:EE_|SN7PR12MB7911:EE_
X-MS-Office365-Filtering-Correlation-Id: 939b278f-4e43-4a68-1506-08db449ce037
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ShtoRzldRtAD+WGvNnfSe2+t3at5cAj5tYcz9HReR7BB4m2ROu1W0Csm2GqzSgY5qOr/Wj1gNdeGZ74Lx7bFvyZH6oGDw6Se2tq8R+gN7X7KAnVmzAzK9CLOKT/9V+QQOZBzZpWlBpv9ryUtgcOg8A7WyTosTPEqql/0rYjNk2u7/WuzQN4aVpjyKY7WoTB6dGaaepZNNgFF1vgrvsqltFoWdOxhjImn26jE/yJgWyqT5WCi0hHUs8OGyUDd65ZTGgGy/iBpkoyGrkm2f+TzhI9v/8hpPpLFDXKY5yuDo5p+fLkkCXRIc7ysPg0wrz3LA6YGJyIXDv2yxRXMIEnJWcBWGuxFftDL+NiY4yW19s9RDKigZ9qFVtSiqmsmmCIfJIhs8I8va7ifKEVthoc5QUwU1R8m+tPLDpRtyaq430AbPFjrSu1l6+sZS6hveeMY1H7D12P7TSti/aleuAbCOgtXbcNEEU+ql5jKhiwo1DX3LY93Ixo0hCRztF9sutp6cO/Grlsql7I3D+LQD6knAWV1kzYGlyD0Eq3b2HbBXAFf5A1hiXxdZjnlmZ3/ekmgHSNKNKRbXSlvTejL/LCzSBFc5LqjyVu5vGPdKp+67CbtJpl0JgLRQnrfhEEttD5WsV/olfyHtXZiwpCET6KF35iQ5lCetnIt/17OiJpWNGpYe/isqZ09+rfntE9ZlKslALCSf32sqHbPinm37BYvMEtunzWqTMOYAillVYODBxA=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(39860400002)(396003)(376002)(451199021)(40470700004)(46966006)(36840700001)(54906003)(36756003)(4326008)(6916009)(2906002)(186003)(44832011)(70586007)(70206006)(16576012)(316002)(6666004)(30864003)(41300700001)(5660300002)(36860700001)(478600001)(40480700001)(81166007)(356005)(426003)(336012)(82740400003)(47076005)(8676002)(82310400005)(8936002)(86362001)(83380400001)(1076003)(26005)(2616005)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 08:21:16.9412
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 939b278f-4e43-4a68-1506-08db449ce037
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT005.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB7911

Replace enum vmx_msr_intercept_type with the MSR access flags defined in
hvm.h, so that the functions {svm,vmx}_{set,clear}_msr_intercept() share
the same prototype.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
---

Changes in v2:
  - change the type of the 'type' parameter from int to unsigned int, as
    requested by Jan

 xen/arch/x86/cpu/vpmu_intel.c           | 24 +++++++-------
 xen/arch/x86/hvm/vmx/vmcs.c             | 36 ++++++++++----------
 xen/arch/x86/hvm/vmx/vmx.c              | 44 ++++++++++++-------------
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 12 ++-----
 4 files changed, 54 insertions(+), 62 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 35e350578b..395830e803 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -219,22 +219,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 
     /* Allow Read/Write PMU Counters MSR Directly. */
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
+        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
-    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
+    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
@@ -242,21 +242,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
     unsigned int i;
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
+        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
-    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
+    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index b209563625..e7b67313a2 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -892,7 +892,7 @@ static void vmx_set_host_env(struct vcpu *v)
 }
 
 void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             enum vmx_msr_intercept_type type)
+                             unsigned int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
     struct domain *d = v->domain;
@@ -906,17 +906,17 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
 
     if ( msr <= 0x1fff )
     {
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             clear_bit(msr, msr_bitmap->read_low);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             clear_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             clear_bit(msr, msr_bitmap->read_high);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             clear_bit(msr, msr_bitmap->write_high);
     }
     else
@@ -924,7 +924,7 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
 }
 
 void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           enum vmx_msr_intercept_type type)
+                           unsigned int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
 
@@ -934,17 +934,17 @@ void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
 
     if ( msr <= 0x1fff )
     {
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             set_bit(msr, msr_bitmap->read_low);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             set_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             set_bit(msr, msr_bitmap->read_high);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             set_bit(msr, msr_bitmap->write_high);
     }
     else
@@ -1151,17 +1151,17 @@ static int construct_vmcs(struct vcpu *v)
         v->arch.hvm.vmx.msr_bitmap = msr_bitmap;
         __vmwrite(MSR_BITMAP, virt_to_maddr(msr_bitmap));
 
-        vmx_clear_msr_intercept(v, MSR_FS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_GS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_FS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_GS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, MSR_RW);
         if ( paging_mode_hap(d) && (!is_iommu_enabled(d) || iommu_snoop) )
-            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
         if ( (vmexit_ctl & VM_EXIT_CLEAR_BNDCFGS) &&
              (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS) )
-            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, MSR_RW);
     }
 
     /* I/O access bitmap. */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 096c69251d..8a873147a5 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -791,7 +791,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
      */
     if ( cp->feat.ibrsb )
     {
-        vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
         rc = vmx_add_guest_msr(v, MSR_SPEC_CTRL, 0);
         if ( rc )
@@ -799,7 +799,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
     }
     else
     {
-        vmx_set_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
         rc = vmx_del_msr(v, MSR_SPEC_CTRL, VMX_MSR_GUEST);
         if ( rc && rc != -ESRCH )
@@ -809,20 +809,20 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
 
     /* MSR_PRED_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.ibrsb || cp->extd.ibpb )
-        vmx_clear_msr_intercept(v, MSR_PRED_CMD,  VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_PRED_CMD,  VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
 
     /* MSR_FLUSH_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.l1d_flush )
-        vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_FLUSH_CMD, MSR_RW);
 
     if ( cp->feat.pks )
-        vmx_clear_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_PKRS, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_PKRS, MSR_RW);
 
  out:
     vmx_vmcs_exit(v);
@@ -1418,7 +1418,7 @@ static void cf_check vmx_handle_cd(struct vcpu *v, unsigned long value)
 
             vmx_get_guest_pat(v, pat);
             vmx_set_guest_pat(v, uc_pat);
-            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
 
             wbinvd();               /* flush possibly polluted cache */
             hvm_asid_flush_vcpu(v); /* invalidate memory type cached in TLB */
@@ -1429,7 +1429,7 @@ static void cf_check vmx_handle_cd(struct vcpu *v, unsigned long value)
             v->arch.hvm.cache_mode = NORMAL_CACHE_MODE;
             vmx_set_guest_pat(v, *pat);
             if ( !is_iommu_enabled(v->domain) || iommu_snoop )
-                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
             hvm_asid_flush_vcpu(v); /* no need to flush cache */
         }
     }
@@ -1883,9 +1883,9 @@ static void cf_check vmx_update_guest_efer(struct vcpu *v)
      * into hardware, clear the read intercept to avoid unnecessary VMExits.
      */
     if ( guest_efer == v->arch.hvm.guest_efer )
-        vmx_clear_msr_intercept(v, MSR_EFER, VMX_MSR_R);
+        vmx_clear_msr_intercept(v, MSR_EFER, MSR_R);
     else
-        vmx_set_msr_intercept(v, MSR_EFER, VMX_MSR_R);
+        vmx_set_msr_intercept(v, MSR_EFER, MSR_R);
 }
 
 static void nvmx_enqueue_n2_exceptions(struct vcpu *v,
@@ -2312,7 +2312,7 @@ static void cf_check vmx_enable_msr_interception(struct domain *d, uint32_t msr)
     struct vcpu *v;
 
     for_each_vcpu ( d, v )
-        vmx_set_msr_intercept(v, msr, VMX_MSR_W);
+        vmx_set_msr_intercept(v, msr, MSR_W);
 }
 
 static void cf_check vmx_vcpu_update_eptp(struct vcpu *v)
@@ -3479,17 +3479,17 @@ void cf_check vmx_vlapic_msr_changed(struct vcpu *v)
             {
                 for ( msr = MSR_X2APIC_FIRST;
                       msr <= MSR_X2APIC_LAST; msr++ )
-                    vmx_clear_msr_intercept(v, msr, VMX_MSR_R);
+                    vmx_clear_msr_intercept(v, msr, MSR_R);
 
-                vmx_set_msr_intercept(v, MSR_X2APIC_PPR, VMX_MSR_R);
-                vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, VMX_MSR_R);
-                vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, VMX_MSR_R);
+                vmx_set_msr_intercept(v, MSR_X2APIC_PPR, MSR_R);
+                vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, MSR_R);
+                vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, MSR_R);
             }
             if ( cpu_has_vmx_virtual_intr_delivery )
             {
-                vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, VMX_MSR_W);
-                vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, VMX_MSR_W);
-                vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, VMX_MSR_W);
+                vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, MSR_W);
+                vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, MSR_W);
+                vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, MSR_W);
             }
         }
         else
@@ -3500,7 +3500,7 @@ void cf_check vmx_vlapic_msr_changed(struct vcpu *v)
            SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) )
         for ( msr = MSR_X2APIC_FIRST;
               msr <= MSR_X2APIC_LAST; msr++ )
-            vmx_set_msr_intercept(v, msr, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, msr, MSR_RW);
 
     vmx_update_secondary_exec_control(v);
     vmx_vmcs_exit(v);
@@ -3636,7 +3636,7 @@ static int cf_check vmx_msr_write_intercept(
                         return X86EMUL_OKAY;
                     }
 
-                    vmx_clear_msr_intercept(v, lbr->base + i, VMX_MSR_RW);
+                    vmx_clear_msr_intercept(v, lbr->base + i, MSR_RW);
                 }
             }
 
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index 51641caa9f..af6a95b5d9 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -633,18 +633,10 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
     return 0;
 }
 
-
-/* MSR intercept bitmap infrastructure. */
-enum vmx_msr_intercept_type {
-    VMX_MSR_R  = 1,
-    VMX_MSR_W  = 2,
-    VMX_MSR_RW = VMX_MSR_R | VMX_MSR_W,
-};
-
 void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             enum vmx_msr_intercept_type type);
+                             unsigned int type);
 void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           enum vmx_msr_intercept_type type);
+                           unsigned int type);
 void vmx_vmcs_switch(paddr_t from, paddr_t to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 08:21:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 08:21:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525210.816283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrRy-0003qT-Ry; Mon, 24 Apr 2023 08:21:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525210.816283; Mon, 24 Apr 2023 08:21:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrRy-0003qK-Na; Mon, 24 Apr 2023 08:21:30 +0000
Received: by outflank-mailman (input) for mailman id 525210;
 Mon, 24 Apr 2023 08:21:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QOl=AP=amd.com=Xenia.Ragiadakou@srs-se1.protection.inumbo.net>)
 id 1pqrRx-0002wP-Do
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 08:21:29 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2061f.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::61f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 012caa17-e279-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 10:21:27 +0200 (CEST)
Received: from MW4PR04CA0240.namprd04.prod.outlook.com (2603:10b6:303:87::35)
 by SJ0PR12MB8615.namprd12.prod.outlook.com (2603:10b6:a03:484::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 08:21:20 +0000
Received: from CO1NAM11FT018.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:87:cafe::16) by MW4PR04CA0240.outlook.office365.com
 (2603:10b6:303:87::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 08:21:20 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT018.mail.protection.outlook.com (10.13.175.16) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 08:21:19 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 24 Apr
 2023 03:21:18 -0500
Received: from 10.0.2.15 (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 24 Apr 2023 03:21:17 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 012caa17-e279-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hQHOFTg5LDC8AuUINx3rtFIeffax/z53phTYUpiVD8K2y3nWy+TbTEeVw2Q7V9rSLdZJhhKnURCY/3m6iu/PC0eHyaDzGbG9rF5xd8n1nfJT1FmY1YGe7aNFrkt48oOYU8t1PYOLxBAKDH9h33lUE0OPyuEbIQWgha5+R68CmUK9cWwYXTbRVX+BeOIFtOE16MtIVrX9vbwlWP24nGemDHan9MPFdt50EQ7I9+80vTAD5IAwTh+YLnd00L0y3TmQBr6winAc1Ulw6Xj0V36G0rYaKUUnnv6/ByvbwN+fi6/xfPSHtxDHu9qjUW9PLc2r0r38tbTGWWG8svSxb1MZ9A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bsJJg35SdGTTC2bSQZuKwbd3Cr24cZUb+TyVYaHwI5A=;
 b=XEdCcuBUAaBJYU06Iu6o1UZmZObiWQ0laNlsFw3MYfSGacF7SI4Z2OPU2g8E6FAdf+wa1uOCS9OwoR6DTBdmtKuxl+6pmJpX1PWKh0r+R40uF4yVyMix6bb1kaou2trkBp2iW4qteLoAVlDStz7P4Fs9me2S66KC+m2XW1OfIfzVvk+GTGG2NKBST8ltPGRrSdTuzKc2hvSGNsasl9Hqf2Dj5akhNQnFDc1KaTqjxeexX4FwVsAQyCtJEgO43R8/n6hu9S6ulxVr4GBPnaCwSnoQ16eU6LTqyZAaq7oFHIGc+ooaZ2+NdAPBPmE6IwTTWhjdbUv0YYKLZDFyK54wkA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bsJJg35SdGTTC2bSQZuKwbd3Cr24cZUb+TyVYaHwI5A=;
 b=F6X+ymiTsSshlTcG93xG3puoplcraaOGVjV7jbV7n4aZOyFN3jd1uWVH/Ur2U1RX/OU7vhrbWmTRwyr+PDjZu+jTUCCwLvpA9lE8nXgwXpqSRLiIolReafNwOsvrkpb7v9sOrzOPevyxvYqu7fBLyHNjygqb90FUPe5h9uAXU1g=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Xenia Ragiadakou <xenia.ragiadakou@amd.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>
Subject: [PATCH v2 3/3] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept()
Date: Mon, 24 Apr 2023 11:20:38 +0300
Message-ID: <20230424082038.541122-4-xenia.ragiadakou@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
References: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT018:EE_|SJ0PR12MB8615:EE_
X-MS-Office365-Filtering-Correlation-Id: b8bff983-5563-4151-185e-08db449ce1e7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	z9hANcABW7CmQe4DK4uY5GHrSsvcFmfdp/vybTPNqg/6f3aNkBy8k9uE8sn+1u9bd4oEH/TWrbQyf5znFMh/pRaCMT3l1s2/GauMM1XcBc3+a1mj37xzcGhGCaR+FgQb1U5G/okVz656C4Ts1eWX5TNq4XBkn6IPyd6ITg64/6xFNgYFrz6NVT/j1C1cOK3K+WzGcG1f6oaR59FvWZCkIsqbIMgAoZexiW3dusUBrjX7gBgPPe9PTx7VFsMfnqpRPT8TVomsubQxGN3KjkbPkuhyq/gfxLj1UVAw/ZEX8IHgS8OPVCfWshgLJdIbVWvItHkACqh+HnrYrGo53sZJEiTC3ZYSWz91HNDaHykRO+VP2+Dvb1pDZdwIllLeIljq0GkpgHJvApuQB6ARONmbt2+KOUDCq++ZhbtnlHluoXsbIujVUkYX16pBeE6v8qYrAypcg848wju4Hod5Izn6MMEcefMdHCzQ01H2zFaf7xtT7rbo/kMp6M16wbet4Jay8FLTSnKPdwpgOuRG+iRvPt9OL7VGuaY/3BY0OQKWp5TTFe6KwK3+2qo0YCfa94KU5r2qDrDfIvAq3NJnTuEukuGu15rYlmTLFV9OfL7FvJHzgj6xJhUqxX8AIktmpMlOCDc3gqKfPTyftPHsCTGbtmbMDFbOR2qHqoqFFQpCych1+eqaSZ2oS2L2qLmGahSDNx/cakIwZGQDBMX7d/OJ0uYbbTn4FKtLv/oxPlQl9no=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(376002)(346002)(396003)(451199021)(46966006)(36840700001)(40470700004)(40460700003)(2906002)(70206006)(70586007)(6916009)(316002)(4326008)(44832011)(8676002)(8936002)(5660300002)(41300700001)(82310400005)(36756003)(86362001)(40480700001)(356005)(26005)(186003)(1076003)(81166007)(478600001)(6666004)(36860700001)(83380400001)(47076005)(2616005)(336012)(426003)(16576012)(82740400003)(54906003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 08:21:19.7256
 (UTC)

Add hvm_funcs hooks for {set,clear}_msr_intercept(), so that the MSR
intercepts can be controlled from common vPMU code.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
---

Changes in v2:
  - change the parameter types to unsigned int

 xen/arch/x86/cpu/vpmu_amd.c             | 10 ++++-----
 xen/arch/x86/cpu/vpmu_intel.c           | 24 ++++++++++----------
 xen/arch/x86/hvm/svm/svm.c              |  7 +++---
 xen/arch/x86/hvm/vmx/vmcs.c             |  8 +++----
 xen/arch/x86/hvm/vmx/vmx.c              |  2 ++
 xen/arch/x86/include/asm/hvm/hvm.h      | 30 +++++++++++++++++++++++++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h |  8 +++----
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h |  8 +++----
 8 files changed, 65 insertions(+), 32 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index da8e906972..77dee08588 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -154,9 +154,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_clear_msr_intercept(v, counters[i], MSR_RW);
-        svm_set_msr_intercept(v, ctrls[i], MSR_W);
-        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
+        hvm_clear_msr_intercept(v, counters[i], MSR_RW);
+        hvm_set_msr_intercept(v, ctrls[i], MSR_W);
+        hvm_clear_msr_intercept(v, ctrls[i], MSR_R);
     }
 
     msr_bitmap_on(vpmu);
@@ -169,8 +169,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_set_msr_intercept(v, counters[i], MSR_RW);
-        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
+        hvm_set_msr_intercept(v, counters[i], MSR_RW);
+        hvm_set_msr_intercept(v, ctrls[i], MSR_RW);
     }
 
     msr_bitmap_off(vpmu);
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 395830e803..ed32d4d754 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -219,22 +219,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 
     /* Allow Read/Write PMU Counters MSR Directly. */
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
+        hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
+        hvm_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
+            hvm_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
+        hvm_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
-    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
+    hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    hvm_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
@@ -242,21 +242,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
     unsigned int i;
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
+        hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
+        hvm_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
+            hvm_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
+        hvm_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
-    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
+    hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    hvm_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 3ee0805ff3..cbd8eff270 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -277,7 +277,8 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
     return msr_bit;
 }
 
-void svm_set_msr_intercept(struct vcpu *v, unsigned int msr, unsigned int flags)
+void cf_check svm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                    unsigned int flags)
 {
     unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
@@ -291,8 +292,8 @@ void svm_set_msr_intercept(struct vcpu *v, unsigned int msr, unsigned int flags)
         __set_bit(msr * 2 + 1, msr_bit);
 }
 
-void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             unsigned int flags)
+void cf_check svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                      unsigned int flags)
 {
     unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index e7b67313a2..c051bcb91b 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -891,8 +891,8 @@ static void vmx_set_host_env(struct vcpu *v)
               (unsigned long)&get_cpu_info()->guest_cpu_user_regs.error_code);
 }
 
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             unsigned int type)
+void cf_check vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                      unsigned int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
     struct domain *d = v->domain;
@@ -923,8 +923,8 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
         ASSERT(!"MSR out of range for interception\n");
 }
 
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           unsigned int type)
+void cf_check vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                    unsigned int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 8a873147a5..6a33e92b0a 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2742,6 +2742,8 @@ static struct hvm_function_table __initdata_cf_clobber vmx_function_table = {
     .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources,
     .update_vlapic_mode = vmx_vlapic_msr_changed,
     .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
+    .set_msr_intercept    = vmx_set_msr_intercept,
+    .clear_msr_intercept  = vmx_clear_msr_intercept,
     .enable_msr_interception = vmx_enable_msr_interception,
     .altp2m_vcpu_update_p2m = vmx_vcpu_update_eptp,
     .altp2m_vcpu_update_vmfunc_ve = vmx_vcpu_update_vmfunc_ve,
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 5740a64281..96ff235614 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -213,6 +213,10 @@ struct hvm_function_table {
                                 paddr_t *L1_gpa, unsigned int *page_order,
                                 uint8_t *p2m_acc, struct npfec npfec);
 
+    void (*set_msr_intercept)(struct vcpu *v, unsigned int msr,
+                              unsigned int flags);
+    void (*clear_msr_intercept)(struct vcpu *v, unsigned int msr,
+                                unsigned int flags);
     void (*enable_msr_interception)(struct domain *d, uint32_t msr);
 
     /* Alternate p2m */
@@ -647,6 +651,20 @@ static inline int nhvm_hap_walk_L1_p2m(
         v, L2_gpa, L1_gpa, page_order, p2m_acc, npfec);
 }
 
+static inline void hvm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                         unsigned int flags)
+{
+    if ( hvm_funcs.set_msr_intercept )
+        alternative_vcall(hvm_funcs.set_msr_intercept, v, msr, flags);
+}
+
+static inline void hvm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                           unsigned int flags)
+{
+    if ( hvm_funcs.clear_msr_intercept )
+        alternative_vcall(hvm_funcs.clear_msr_intercept, v, msr, flags);
+}
+
 static inline void hvm_enable_msr_interception(struct domain *d, uint32_t msr)
 {
     alternative_vcall(hvm_funcs.enable_msr_interception, d, msr);
@@ -905,6 +923,18 @@ static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
     ASSERT_UNREACHABLE();
 }
 
+static inline void hvm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                         unsigned int flags)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline void hvm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                           unsigned int flags)
+{
+    ASSERT_UNREACHABLE();
+}
+
 #define is_viridian_domain(d) ((void)(d), false)
 #define is_viridian_vcpu(v) ((void)(v), false)
 #define has_viridian_time_ref_count(d) ((void)(d), false)
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index 94deb0a236..5e84b4f4c1 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -603,10 +603,10 @@ void svm_destroy_vmcb(struct vcpu *v);
 
 void setup_vmcb_dump(void);
 
-void svm_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           unsigned int flags);
-void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             unsigned int flags);
+void cf_check svm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                    unsigned int flags);
+void cf_check svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                      unsigned int flags);
 #define svm_disable_intercept_for_msr(v, msr) \
     svm_clear_msr_intercept(v, msr, MSR_RW)
 #define svm_enable_intercept_for_msr(v, msr) \
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index af6a95b5d9..7f7d785977 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -633,10 +633,10 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
     return 0;
 }
 
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             unsigned int type);
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           unsigned int type);
+void cf_check vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                                      unsigned int type);
+void cf_check vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                                    unsigned int type);
 void vmx_vmcs_switch(paddr_t from, paddr_t to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 08:26:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 08:26:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525225.816293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrWf-0005GQ-DU; Mon, 24 Apr 2023 08:26:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525225.816293; Mon, 24 Apr 2023 08:26:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrWf-0005GJ-9t; Mon, 24 Apr 2023 08:26:21 +0000
Received: by outflank-mailman (input) for mailman id 525225;
 Mon, 24 Apr 2023 08:26:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pqrWe-0005GC-Cm
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 08:26:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqrWd-0005FE-Kz; Mon, 24 Apr 2023 08:26:19 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqrWd-00079U-FZ; Mon, 24 Apr 2023 08:26:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <e1769508-287d-a278-56d0-01aacc77b391@xen.org>
Date: Mon, 24 Apr 2023 09:26:17 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 08/10] xen/arm: domain_build: Check if the address fits
 the range of physical address
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-9-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230413173735.48387-9-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 13/04/2023 18:37, Ayan Kumar Halder wrote:
> handle_pci_range() and map_range_to_domain() take addr and len as uint64_t
> parameters. Then frame numbers are obtained from addr and len by right shifting
> with PAGE_SHIFT. The page frame numbers are saved using unsigned long.
> 
> Now if 64-bit >> PAGE_SHIFT, the result will have 52-bits as valid. On a 32-bit
> system, 'unsigned long' is 32-bits. Thus, there is a potential loss of value
> when the result is stored as 'unsigned long'.
> 
> To mitigate this issue, we check if the starting and end address can be
> contained within the range of physical address supported on the system. If not,
> then an appropriate error is returned.
> 
> Also, the end address is computed once and used when required. And replaced u64
> with uint64_t.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> Changes from :-
> v1...v4 - NA. New patch introduced in v5.
> 
>   xen/arch/arm/domain_build.c | 30 +++++++++++++++++++++++-------
>   1 file changed, 23 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 7d28b75517..b98ee506a8 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1637,15 +1637,23 @@ out:
>   }
>   
>   static int __init handle_pci_range(const struct dt_device_node *dev,
> -                                   u64 addr, u64 len, void *data)
> +                                   uint64_t addr, uint64_t len, void *data)
>   {
>       struct rangeset *mem_holes = data;
>       paddr_t start, end;
>       int res;
> +    uint64_t end_addr = addr + len - 1;

I find the difference between end and end_addr a bit confusing. How about...

> +
> +    if ( addr != (paddr_t)addr || end_addr != (paddr_t)end_addr )

... replace the second part with (((paddr_t)~0 - addr) > len)

> +    {
> +        printk(XENLOG_ERR "addr (0x%"PRIx64") or end_addr (0x%"PRIx64") exceeds the maximum allowed width (%d bits) for physical address\n",

In addition to what Michal says, I would replace the "addr .... 
end_addr" with "[start, end]" to further reduce the message.

Also, please use %u rather than %d as the number of bits cannot be negative.

> +               addr, end_addr, CONFIG_PADDR_BITS);
> +        return -ERANGE;
> +    }
>   
>       start = addr & PAGE_MASK;
> -    end = PAGE_ALIGN(addr + len);
> -    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end - 1));
> +    end = PAGE_ALIGN(end_addr);
> +    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end));

And this will not need to be changed.

>       if ( res )
>       {
>           printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
> @@ -2330,11 +2338,19 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>   }
>   
>   int __init map_range_to_domain(const struct dt_device_node *dev,
> -                               u64 addr, u64 len, void *data)
> +                               uint64_t addr, uint64_t len, void *data)
>   {
My comment on the previous function applies for this one as well.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 08:29:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 08:29:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525229.816303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrZX-0005t9-SV; Mon, 24 Apr 2023 08:29:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525229.816303; Mon, 24 Apr 2023 08:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrZX-0005t2-Of; Mon, 24 Apr 2023 08:29:19 +0000
Received: by outflank-mailman (input) for mailman id 525229;
 Mon, 24 Apr 2023 08:29:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pqrZW-0005sw-Iy
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 08:29:18 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqrZV-0005Hg-Ui; Mon, 24 Apr 2023 08:29:17 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.240])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqrZV-0007ET-PB; Mon, 24 Apr 2023 08:29:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <b5b3efd3-8a5d-31be-4919-2d621ef7000a@xen.org>
Date: Mon, 24 Apr 2023 09:29:15 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 08/10] xen/arm: domain_build: Check if the address fits
 the range of physical address
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-9-ayan.kumar.halder@amd.com>
 <3bd89851-e09d-1b24-6fde-5a13862f5eb2@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <3bd89851-e09d-1b24-6fde-5a13862f5eb2@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 24/04/2023 08:44, Michal Orzel wrote:
> Hi Ayan,
> 
> On 13/04/2023 19:37, Ayan Kumar Halder wrote:
>>
>>
>> handle_pci_range() and map_range_to_domain() take addr and len as uint64_t
>> parameters. Then frame numbers are obtained from addr and len by right shifting
>> with PAGE_SHIFT. The page frame numbers are saved using unsigned long.
> Maybe better to say "expressed" rather than "saved"
> 
>>
>> Now if 64-bit >> PAGE_SHIFT, the result will have 52-bits as valid. On a 32-bit
>> system, 'unsigned long' is 32-bits. Thus, there is a potential loss of value
>> when the result is stored as 'unsigned long'.
>>
>> To mitigate this issue, we check if the starting and end address can be
>> contained within the range of physical address supported on the system. If not,
>> then an appropriate error is returned.
>>
>> Also, the end address is computed once and used when required. And replaced u64
>> with uint64_t.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>>
>> Changes from :-
>> v1...v4 - NA. New patch introduced in v5.
>>
>>   xen/arch/arm/domain_build.c | 30 +++++++++++++++++++++++-------
>>   1 file changed, 23 insertions(+), 7 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 7d28b75517..b98ee506a8 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -1637,15 +1637,23 @@ out:
>>   }
>>
>>   static int __init handle_pci_range(const struct dt_device_node *dev,
>> -                                   u64 addr, u64 len, void *data)
>> +                                   uint64_t addr, uint64_t len, void *data)
>>   {
>>       struct rangeset *mem_holes = data;
>>       paddr_t start, end;
>>       int res;
>> +    uint64_t end_addr = addr + len - 1;
>> +
>> +    if ( addr != (paddr_t)addr || end_addr != (paddr_t)end_addr )
>> +    {
>> +        printk(XENLOG_ERR "addr (0x%"PRIx64") or end_addr (0x%"PRIx64") exceeds the maximum allowed width (%d bits) for physical address\n",
> I don't think it is wise to print variable names (end_addr) to the user. Better would be to say explicitly: start, end address.
> Also to make the message shorter you could write: ... exceeds the maximum allowed PA width (%u bits)
> 
>> +               addr, end_addr, CONFIG_PADDR_BITS);
> Why CONFIG_PADDR_BITS if you already introduced PADDR_BITS macro
> 
>> +        return -ERANGE;
>> +    }
>>
>>       start = addr & PAGE_MASK;
>> -    end = PAGE_ALIGN(addr + len);
>> -    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end - 1));
>> +    end = PAGE_ALIGN(end_addr);
>> +    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end));
> I doubt PFN_DOWN(end) is the same as PFN_DOWN(end - 1), so I think you should keep the behavior as it was
> 
>>       if ( res )
>>       {
>>           printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
>> @@ -2330,11 +2338,19 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>>   }
>>
>>   int __init map_range_to_domain(const struct dt_device_node *dev,
>> -                               u64 addr, u64 len, void *data)
>> +                               uint64_t addr, uint64_t len, void *data)
> You changed u64 to uint64_t in a definition but not in a prototype. Please fix.

This function is used as a callback. So if you want to switch from u64 
to uint64_t then you should also update dt_for_each_range().

Such changes would need to be done in a separate patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 08:56:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 08:56:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525239.816313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrzs-0000xO-4r; Mon, 24 Apr 2023 08:56:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525239.816313; Mon, 24 Apr 2023 08:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqrzs-0000xH-1x; Mon, 24 Apr 2023 08:56:32 +0000
Received: by outflank-mailman (input) for mailman id 525239;
 Mon, 24 Apr 2023 08:56:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqrzq-0000xB-CD
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 08:56:30 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20614.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::614])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6f24a5a-e27d-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 10:56:29 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8388.eurprd04.prod.outlook.com (2603:10a6:20b:3f8::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 08:56:19 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 08:56:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6f24a5a-e27d-11ed-b223-6b7b168915f2
Message-ID: <9df50e0a-c1fe-65cd-fc45-12ddb6e57c6b@suse.com>
Date: Mon, 24 Apr 2023 10:56:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2] tools/xenstore: fix quota check in acc_fix_domains()
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230328144327.6562-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230328144327.6562-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0111.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?T2FkRzZ2dlR2V08raFVQMXBkZkt1d2IyZW1QZnJRZXp2aDRZR2JHYm1YdjFG?=
 =?utf-8?B?cFR0bU16Si9DVkpKZUwyWmNYNklWMDhTMitjWW9Jb2pVZTRIb3FJSHBLSVBX?=
 =?utf-8?B?SkNyaW52bHdFZmp3NTB2WldVMDhKeFJSMFR0K0w0RXdweUZMdVpNSTVJQXEy?=
 =?utf-8?B?WVB0SC9sVW5WRDIvNWJINHZ6YmJ6VmJ4Z3FlTmdUK1FCbVBHcHUvYXo5RGhu?=
 =?utf-8?B?TTJTWWZxV2xBWHY4RWVIazViSUFHMUY5azhaRGFtMFlDcEdLbm9xSUh3TEtq?=
 =?utf-8?B?cXRXaGlQdFlIRG5TZmxVM05ZbW9OVStJMzc4MFpPc2ZkMFNpQzJ4WloyNmxG?=
 =?utf-8?B?c09iVnd1QU9tWHM4eGRjYjNvd0gxNS81eGdhZmlWdm1FU0FnY2F4R0R1YnBM?=
 =?utf-8?B?bTBVZmlFMWxldHNCQ0ZqUTdpcnBJM1FtTU43ZmJCaGVDT0h3ZEsrNmNMZ0pp?=
 =?utf-8?B?bjVQa1g1eDVlYVpsS09zMU1Bd1JFT1pIMzVNeEJwc0F3Rjd2ZDZmQjFzQmhK?=
 =?utf-8?B?N2JJNXU1NGpQQXZEVlIrNElURFlCazJWU20wNENYQUgzWUJOSktHM3NrS1kw?=
 =?utf-8?B?NmpNY3ZJOTY5Y1JicUx1QWVHUFFxejBJSHVhMFYyTGhaZnhMcFUwQWJoL050?=
 =?utf-8?B?R2YwRDZuSFc3WC9Id1FYblE0akp2b0JSTzlzSjdONjRyRjdVaElhdldMRVFs?=
 =?utf-8?B?VVBVNVAxYjdsRENjeE0yVWgvUjhVNEkyeE5UMWNaelZUSTc4ZDg4NUtTQjdB?=
 =?utf-8?B?ZTRsZStQYk1VeDRtODBYL2l1dlhuY0NOUy9TYmF3dytldDNtUVljMDMzOXBK?=
 =?utf-8?B?QnV0dEZDV0p2OGxsMVJLMWlxbXJZeVZFclJob09Vb2F3aFo3NHphR0JHU0tv?=
 =?utf-8?B?WkNOYVNKMGpqbzdUMVRjVTVtQytFQkpVTzN2UStBM01Ld0gwQ2gyb2NWZ1Vn?=
 =?utf-8?B?ZUt5blpLMlFIU1I2aUFnUHNEVEgyNWxtVTV3MG5pdU5zQkNJdW1Yd1c5d0Rn?=
 =?utf-8?B?aVlSclhQa0gwT3RGZHYwaDV4YmFmZFdpY0IrTy9mRjVSNUlNdk5LblpzYmY5?=
 =?utf-8?B?QVF4bUpTRDNNWkZSd0RMQ2g3SmkwVHl3Y3U4OW9lallQL1lOV2U0dEl2MFA0?=
 =?utf-8?B?d2hXWG02Y0FXbVdlSEd2N1d2ZWpvS1MzZG9xTUJZRFNGenJqOWI0cUtGQ3VF?=
 =?utf-8?B?dlIvcTg5UHZwSW1JR0RXZzBTTCtaczg5MFNEL0Vwclk2NWVVUUtTc3Mrc1o0?=
 =?utf-8?B?Vmp6VVlxSkpwbXlMMlpkRmM2NGoyVVJCZkg3RVRkdEVzTUtycy9DQUlQN0dZ?=
 =?utf-8?B?YTVERC90Tk1wdlpScm1pWmw2eGw1QkNHaHM3S0Npcys1SmM1WFpjTkUvM0I1?=
 =?utf-8?B?OEw2WEZiM1FCY2NENHVqTXlQVWRTTGg5UytvSTlqK1lTRThXVTRqcy9XN0tO?=
 =?utf-8?B?TnhjUGliUzhqQk9Hb3VkQU4yVVFDYnozbkVkbXpOcXNWdzJOSjgxVlR2bXV5?=
 =?utf-8?B?NU5CWk4vdDdlUFZlcFhUZWxpd0JHUW1sNUVyOWQ0akQ5eXhnZHl5RjE1em42?=
 =?utf-8?B?WkVBQXU3RXd0djFKdWFDMXVnYjMwOUF1R3pRanFwVjVIeFhCZmw0blRDUll4?=
 =?utf-8?B?TXo4MEFOS21ad05wNjBPa1Jkeis4cStmQnFLdHRTTDFpczAvNHBITFRybVdm?=
 =?utf-8?B?TFFBdEFWMGw5Q3g3N0JOTVVtcEQ3WUtKVitUN1FkdWpPellMSXBzdkE2ejRl?=
 =?utf-8?B?d2M2K0xpY1N3eVN3eXhjQ0JtL3VxSC9ZOW9OMjBMbUtGNkJuQmlTL09Jb1pS?=
 =?utf-8?B?eUZBVmJ0S296L2hUNVZyckhZUTFFN0JtKzNkdmlKY2tHRWdyWC9saUMzaVYw?=
 =?utf-8?B?T04wMkp5M3M0Q08wMWg0VUZ1WTBidlF4RDByUVNFOE91T0dPL2xucDAxZ1Jy?=
 =?utf-8?B?Y0dzVThIVHc0LzJqT25YSXN6QlBKZ1czZUhrY280eUNyQ1pkbCt0OWtjdklH?=
 =?utf-8?B?SDVyMVZhSjFJMnFPaXJNZWJhQlR5S0hEK0gyZG1MTGJndVM2OGdXdlFOeGtq?=
 =?utf-8?B?R2lzdm1vaDAwa0NQSkM0TUhoM1g3SThOTlpZU2FvM0dCc25yTFZVMWxweUEy?=
 =?utf-8?Q?xuOnzUECmh8HWqsEKMoNXgEPq?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 06dda0fb-f508-47a6-89b1-08db44a1c290
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 08:56:15.0090
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gsWXiE66c9mlP0kvWAvk+7WRq/RzpoDmKuCmSnw2O2Lf+0TjGlQN6PQCVJnX7bKQQYOyAyxnBv+ner8lK/hjgg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8388

On 28.03.2023 16:43, Juergen Gross wrote:
> Today, when finalizing a transaction, the node quota is checked to
> ensure it is not exceeded after the transaction. This check is always
> done, even if the transaction is being performed by a privileged
> connection, or if no nodes were created in the transaction.
> 
> Correct that by checking quota only if:
> - the transaction is being performed by an unprivileged guest, and
> - at least one node was created in the transaction
> 
> Reported-by: Julien Grall <julien@xen.org>
> Fixes: f2bebf72c4d5 ("xenstore: rework of transaction handling")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Considering the referenced commit is about 6 years old, I thought this
would be a backporting candidate. The function mentioned in the title
doesn't appear to exist on 4.17, though. Therefore, if there is an
intention for this to be backported, may I please ask that a suitable
backport be provided?
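The corrected condition described in the commit message can be sketched as
follows (function and parameter names are hypothetical, not the actual
xenstored code):

```c
#include <stdbool.h>

/* Hypothetical sketch of the fixed transaction-finalize logic:
 * the node quota is only checked for unprivileged connections
 * that actually created nodes in the transaction. */
static bool quota_check_needed(bool conn_is_privileged,
                               unsigned int nodes_created)
{
    return !conn_is_privileged && nodes_created > 0;
}
```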

Thanks, Jan



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 09:25:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 09:25:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525245.816323 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqsRn-0004SF-DS; Mon, 24 Apr 2023 09:25:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525245.816323; Mon, 24 Apr 2023 09:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqsRn-0004S8-9F; Mon, 24 Apr 2023 09:25:23 +0000
Received: by outflank-mailman (input) for mailman id 525245;
 Mon, 24 Apr 2023 09:25:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqsRm-0004S2-Vw
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 09:25:22 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0609.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::609])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ed5cc76e-e281-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 11:25:18 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8364.eurprd04.prod.outlook.com (2603:10a6:10:24c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 09:25:15 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 09:25:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed5cc76e-e281-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=habRExT0SSnUBjvg3WdJK0dZLI4P1d1UwfcixYJx+CYz3NH7fryfKU4eXrVgTUaIXtN+XH/R0ovJJ29ZHRRkGdoBTFzlX8ABucCRLakJrHpDWVhxPopdvdUmp+XpN9twyYIV6vV5upmSWgtfvvoiDJEpra6qlkwB2GzvHq8mywO960LYDvffqyXpkMOV4HUDDKc2XLiez4sL+2nXH0bPXAe/FlyVj0u35ycldHHYpL1CSNrBOuDUSjtkrdml67BMeIY6KpqLw31eqpw1bbQuXKMP/x9djjbkh8UlAfGA9jBuoZEIvAjfMk9jhMV6LtmcuCtH/x+PMk65Zma1Dsejrw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nC/EQbazuieMKA+YFrsvUJIrKhm83FWlyCJoYI8GTUs=;
 b=JeKM6dy6HnNMzwTXs3D+rsmgV9g/aZBx1FP5anqiZI7wEYRW4+3DxJV+j4AVv7EWxpuiQZIkPNYgVIfnwuQ0YrZ7OvYVf9xIQyejQdTjdi1kwX7BYhtiZpi7PDz9FhYljFUfwwgTXfMh342L6KVAj4RHOgdhH0Sbj7wUftf4kaXjXrVG8WvzHLBjeg7ZfapD90fJ9qiG9+hcKuIONxm8OL6wq/hNFWBpVaRfo53cFKNBy146LwfKjUHkcftnN0B28ATUMobpflIcZ+ed8Xg9jOg+Tw/ifvz+R7sqeHn6BRsM4PKrGIp5wuhc576QYCFzaeISQrqM+p9BnFc66fEtrw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nC/EQbazuieMKA+YFrsvUJIrKhm83FWlyCJoYI8GTUs=;
 b=iHhKtbrIpjhYx/Tr2a9rj/asjhaxFBZwCBPdWwhCo2+HPHwPb0Q7X3trW29GuQhkVlGJNSX+dhCGKDLoRg4QKFr+9zortBbkImQQBqNwPVxokPTPpXlZBCuIg+2HTeUuljRWU59o7HdzeawnNe2i8JizB7tRszbRpHZ7A5zm3/XbbOtiLnhmnaw00+iNMAQxlwZyb+Y7koqzlG/GBQBf/5/Dfzw7zxl7fUrwZ90HAaLfk1Xeo0GpQIJX3ve3VQjAI+YachIInwikTfo7Lj03MAExt/n+Ao3P40hDMv3Hf5K4XzSKcDln60+O4E0f/MpNJXD/JLmZrNQdWLXkntVdBw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9a897eb7-f6c7-9e28-4f6b-4c4c43f7a4f1@suse.com>
Date: Mon, 24 Apr 2023 11:25:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v5 1/4] xen/riscv: add VM space layout
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
 <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
 <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
 <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0149.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8364:EE_
X-MS-Office365-Filtering-Correlation-Id: 6b52e452-4735-4c47-c9f3-08db44a5cf16
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	z5FiXkb8ekjJTNYOW0AdjyCPPzjuYCXHhDcQRpOpvPCRPcKnork0LVUSeC0BpnWyCmPYm3lJ745xThNtqaq4jFDbZYNKZjSfs9Va9MhTQ+eAntobyjC93l3NqIAWn2taezCH5llZoAs/pX3B3JAh000Bp/DwATtWwXhNLFVtJq/ofXjcVF4clcwQhtULz1aHQQ3dGVfptjgIjD1rtF4I8TR7wz3Va4sQhVZS/GPy3WfUNTPn1QjQUsL5TTaS2d/gX9UKq3FDFv11CxrC9bVJBh/CHMEGznn4dI0jqohfqh9vFNeylxkE0RBfEOLS9Fal/9nchzXh38IyHoGAXDi5oHf9TQrMeYUWmxv/wHKDEJOrqXpjMJrNV9KztvaSXdjPg974y+iJ059IvATSM37ADP2ngxjFH0e1FtyQJK0N9zkB/npejrOXyaDFatD72VFTMUgf3GFyA+0MDMaHZRNBI2Pq5HNIKOq/jndTXRfgg8hYz6RCn+YIONHG+YnwmCeqVa1vqHpT/Z9M7xUZhIeoTq5S0mwPxyxKn1QeIk0eck3d+fvZmaGKBfTCN1NaM87j6qKiB7A7gQPQyNRGAdUzcMJpxiZ6oAOPjWJ56iPra85s19TdMiTX0gfMV0JnMQLAYo6ANKTqNWAQA6iv3k0U7Q==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(39860400002)(346002)(396003)(366004)(376002)(451199021)(38100700002)(6506007)(6512007)(53546011)(26005)(2616005)(186003)(83380400001)(2906002)(8676002)(8936002)(5660300002)(36756003)(54906003)(478600001)(6486002)(316002)(4326008)(6916009)(41300700001)(66556008)(31696002)(66946007)(86362001)(66476007)(31686004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?c1lrejQwSmhCaGdBTFJITm4rS1JZZFk4UWozSFI3emNpa0QwYVFlYkFkYTBT?=
 =?utf-8?B?aWJuV0VXSGF3WDhBTkw4bUtFZDBwcEhrOUNUSVZDb1hhY1FESEJDNjV3dG1t?=
 =?utf-8?B?TWY2WTh1ZkZqd3JpVFgrRGRYR0ZZa05tL1F3cVJGOVV5Y0lhVng3SldqaUxZ?=
 =?utf-8?B?SmxldEJTR1hUSnhGR1V6SnU2UXNCNGZqYVZ6NGZFb2dkajVSVnVqMDJiKzZy?=
 =?utf-8?B?eW81L2pOSWQ0VE1IR0hGTWVLOWtyVGEzWG45cnBJOUZLZWxXclNTSFR4SnFS?=
 =?utf-8?B?S2NjMlNCeE9UeS83UGxZb2pHSmlIZDNlUENWb0MxNVV1ZzlzV1VNenVBMWlQ?=
 =?utf-8?B?blNIVUZNS0MraGF6SjZpSitoNkNoMmRsbFZCb2hDRUFBa1JaRkRRSVhPR21U?=
 =?utf-8?B?TU50V2NLYzlQUjZkNmZCMmI4SFQ3UGp3OWVIdEg4TEozZW4yN2U1R2craGhs?=
 =?utf-8?B?bCtQSFRPOVpKVEN2bW0wZmVtallDSjJRSDVMSmhNV05UZ2laYmo3dHBiUmZG?=
 =?utf-8?B?RzFWcWNUMklLQmk2ZExtTzhmSk4xTU5FeTFCZjJIWEJaa0NTZlloLzNrZCt1?=
 =?utf-8?B?eWlSc1dZbWgvd2hWSFYraVlxNDlISnlSN1JLTlAybDdXcVY0WTJ1L2QweXBB?=
 =?utf-8?B?YmhBNTNkYi9xelluZytrQU92MFlVQzhST3U2MDB4OW9lck9ENHdUY2o3aEtr?=
 =?utf-8?B?WFZlOVR5NVVFc2NPNmJIblRoL2o3eGptWWNVUHlHMWFTN1lmSU5LTXhoV2dp?=
 =?utf-8?B?dkRkM2xZOVhFeUttT1ROaC9raXBOSEtpalNZZ0lZWHNWRitvVHZINmpTTVly?=
 =?utf-8?B?OEtrTUhpUy92WVVLVHVWNVNSVjg4S282ZkIvWEM4d3JNQUVpeHRjSXpBL2pJ?=
 =?utf-8?B?eG9pNldHdXZFRHI5UjJWM2s0a3lleVBkZDhlMDJKUTBmcDRRQ1Y1aksyeEt6?=
 =?utf-8?B?VTROZnY0emdoYVRpTkpLaXFmWFlLYk52Q3ZmRGF6QUluRDF5bnBFcWhQTWVW?=
 =?utf-8?B?QmhFd0tZQlk5VU8rY09OTlorSThkWUsyVis2MkEvMDNZSGh0cmFDK0V3MTZF?=
 =?utf-8?B?cU9zK1dxZFN1YWsxTkQ1VHpTdWlXbmpkZE5CT29WRE1Qa243K0RkVHhidTJj?=
 =?utf-8?B?eUhqVmYwYXNFSm1heVNaeDFKZnpIMXFaTW1NcDFKRjFQSnNRMEpNWW9WbmRZ?=
 =?utf-8?B?b3NjTkdwWWFwb1lXWnJVQUh2RERVYitHZ0FHZG1FYkErRk9lMjBMM1dDUXor?=
 =?utf-8?B?L2M5L0pjTHBJaW9CY1MwUGhCV0Q0MUJIRlZtNUZIRjFGNmZnR3B6aVBva0pS?=
 =?utf-8?B?OVJOWHZPcmppTG9PMERlRXg2cjIwVFNRZUhRejR6aVIweUZXQVNwbFEzZ0k1?=
 =?utf-8?B?RWp0akdaN0l5SGNBeWNGRW5UOHlrclhvSE5ZUEE1Z1JRb2JrUEFDbFdXeGxw?=
 =?utf-8?B?Qk1EUmFkUTB3VVdydHhQMFp0UGlSTDd1L1B2N0c3K0k5NDk5Sm85NG9xUHgw?=
 =?utf-8?B?bkllZWpERDVCQmU0ZmZkQndzSzhKR2ppWXNKcXhZU0VJMXJzM3BjYkRIc0VY?=
 =?utf-8?B?blFSTk5nVU5wMkcwd3BBbWRqai9iNXNWSllBRUEyT1BSbzBGVXB0SEhjU3Ry?=
 =?utf-8?B?WU44YW56Ym9yUmN1Qk9RN0ltaEtsNkFLZWN5N09KVkJEeC9haER1Z2toNnA2?=
 =?utf-8?B?QXkvWmdNS0JWdmZJMTAzbWV5K3dUMkswZzBkWkh5OGVOYUVzREtSeGN3SEZu?=
 =?utf-8?B?R20vOGdrc3ZvMVMzSzBUWk9OU3N2ZTdyTUEwSjBwRWlKcVBzaldXRWNIQkJu?=
 =?utf-8?B?VWlYSXB5bDJyUEVhd0t0ODBuc0liUnhiTElSai8vY0FPYjd5NFpFY3BuZS9R?=
 =?utf-8?B?ZUQ5WmVleG5SZGZReWJxMXhPRzVBWnFBV3g4SVVKK2tJbFVFQjlmelUvSTRt?=
 =?utf-8?B?MEJiRTEwSUFzOUhINnl3K3REUC8zQkdjZkJoTVRDMmJaK1RwY0wyWCtVWW52?=
 =?utf-8?B?S1BkRy8yNW5mNzhOVWZBOVlDYXFUZW5uN3dSVDRTMmtGNGY0WVN0azVxaGNU?=
 =?utf-8?B?TmNGRisxYzJLVk1sNE5rTWo0SytwQTkyWmlQcUl5Q1pUWmpPallRTXpxMVUz?=
 =?utf-8?Q?pPThqgXkV4HiVQ50yUuzbHbQ1?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6b52e452-4735-4c47-c9f3-08db44a5cf16
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 09:25:13.9311
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tTrOEouoeH0bj6lNvlwi+TUAMcu+NWeNSsjlEJeRFaeXbY/CXQ3w/+zx3Y/8WJqfmNqPZdB1GtTeBg1m8v1Sog==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8364

On 21.04.2023 16:41, Oleksii wrote:
> On Thu, 2023-04-20 at 14:58 +0200, Jan Beulich wrote:
>> On 19.04.2023 17:42, Oleksii Kurochko wrote:
>>> + * ============================================================================
>>> + *    Start addr    |   End addr        |  Size  | VM area description
>>> + * ============================================================================
>>> + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | Xen
>>> + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | FDT
>>> + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | Fixmap
>>
>> These are all L2 slot 511 aiui, which may be worth mentioning,
>> especially since the top bits don't match the top bits further down in
>> the table (because of the aliasing).
> 
> Then I'll add one more column with the slot number.
> 
>>
>>> + *     .................. unused ..................
>>
>> This is covering slot 510, which again may be worth mentioning.
>>
>>> + * 0000003200000000 |  0000007f40000000 | 331 GB | Direct map (L2 slot: 200-509)
>>> + * 0000003100000000 |  0000003140000000 |  1 GB  | Frametable (L2 slot: 196-197)
>>
>> 1Gb is, if I'm not mistaken, a single L2 slot.
> Yeah, it can be misunderstood. I meant [196, 197), so 197 isn't
> included. I'll update the table.
> 
>>
>> Also assuming a 32-byte struct page_info (I don't think you'll get away
>> with less than that, when even Arm32 requires this much), there's a
>> mismatch between direct map and frame table size: With 4k pages, the
>> scaling factor would be 128 if I'm not mistaken. So perhaps you really
>> mean 3Gb here to cover for (slightly more than) the 331Gb of memory you
>> mean to be able to map?
> For RV64 the page_info size will be 56 bytes and 32 bytes for RV32, but
> you are right: it should be 3 GB in that case, which will be enough
> (taking into account both available sizes of the page_info structure).

Since you say "both": I don't expect you will want to use 3Gb for RV32
there? The table is, at least for now, for RV64 / Sv39 only anyway.
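For reference, the L2 slot numbers in the table above follow directly from
the Sv39 layout (9-bit root-level index at bits 38..30, so each slot covers
1 GiB); a minimal sketch:

```c
#include <stdint.h>

/* Sv39: the root (L2) page-table index is bits 38..30 of the
 * virtual address; each L2 slot therefore maps 1 GiB. */
static inline unsigned int sv39_l2_slot(uint64_t va)
{
    return (unsigned int)((va >> 30) & 0x1ff);
}
```

This reproduces the slot numbers under discussion: the Xen/FDT/Fixmap
addresses all land in slot 511, the direct map starts in slot 200, and the
frametable in slot 196.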

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 09:34:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 09:34:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525249.816333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqsaF-0005wi-7D; Mon, 24 Apr 2023 09:34:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525249.816333; Mon, 24 Apr 2023 09:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqsaF-0005wb-4J; Mon, 24 Apr 2023 09:34:07 +0000
Received: by outflank-mailman (input) for mailman id 525249;
 Mon, 24 Apr 2023 09:34:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqsaD-0005wV-Ug
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 09:34:05 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20630.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 275667b5-e283-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 11:34:05 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8618.eurprd04.prod.outlook.com (2603:10a6:20b:439::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 09:33:52 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 09:33:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 275667b5-e283-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=J7B37BzNNMsWBFAfPawM8O2zIx2AbQ0iu0P5bYHXWP/so8yQDcXXprqgcLcUO3qf0XeWPryNKi7plVrxQvJ3Ag2tx3cTiIHUFkcd3aFqBC7kAPH9vDVq9uQkN0ugyQslJH6eT+Ln8PadylCUSs4f457UvbZgEWoFw6pImeKedUFYatxgIEx6KgfFmU1AdS0qCI59Xtb+jM/wX/uMVYWbe+o0p75lZpGmDcMm6qRwXZMBMHYO4FGJ7Ad6Tq8E2s+vwz0gBh24e3O4l5o5kb+dHG014GADrOP925/ooXNz4FHInPF+jlmaEKP7dkOtUGdSHS30bl+xgph0SZI6W0goeg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RVXAoWc6NXaImP3vs70L3HwSuIwZM40S5vdIuobS4Vo=;
 b=GervMZK2mHwniiYr0asNTgA98YR6WjKRmSiYxaSlXkPhQKSg04DWbnMvPUvfXJS2XiWfH/P3UaKlgZFIFJ+aQBM9ILtoxNd/+bJ5G9iodtw9cFEmPFL8mgX/lthl9xovLV7Q+G82BZAsIHjy4mEL6BwnJBRSt5L1dAt4Y8h/RHCmTd1iqRWBOJhacHt5be3gTCYLSVg3i+Y+vA8ddU+hthvCY7ScPl5SQe+NPIZ1amatRzGOciFfUvPziy3M2xouw2v/HHKCDk/ODjh5K0lrO4e5HQ3Jdy8VN+PEdk8+ZX12Asybv42l3DSOJVic2RiJfdjAgEsEbP7eTBYykg3Jgg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RVXAoWc6NXaImP3vs70L3HwSuIwZM40S5vdIuobS4Vo=;
 b=Nj6IgcmbfYdBHybxURBeHZzRvZ/rlEIzmy9s1k/NUcdZ5/AC+insveCCPENthLLnYrmIcyNMSd+A2fQLMsW/W7xrVNFK7bSSh2FHw+T9BjvlAH98d4/Qzb3wd5NuLwcc9Yrw/zpIjaqaiDRXCMCYknPvGOepL7MHTJriHeTv+qLQI9KVNQRGpQhhiF8lZTPcW46cf0S4fjpmlSofq1whxHHvUHGzMdFO5RB1NDqwzXrfI5ee3lqtCHNN1xtYNl14s3brYtAVi60AjavuIe2JhfODvjLqtA+316NeHQC0qHtxKhUUUVCKADffW5Yhx76YXjh25K5sbOHwdydiHIX/sw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <509ba3a2-0b85-d758-6915-7975d31a3437@suse.com>
Date: Mon, 24 Apr 2023 11:33:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v5 1/4] xen/riscv: add VM space layout
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
 <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
 <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
 <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0187.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8618:EE_
X-MS-Office365-Filtering-Correlation-Id: dfd5f1ed-fa7a-4163-732c-08db44a70417
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	m5+QMmwzQt22Y/qH47iM8hoJxL5dE3GISHmTLmPwbs3aph9Kmfr9lKIGJoOhJL989/OrBUDT37iwi8LsC1fLesRmne9N1JkHipj8EUye1LaqKAK7bN7bbO8CMR7OzOaJHz+2FcpOEvIrVkUzOQUwgwar+z5qplHHjDQulsks0ve2s2nGFA4kEupRXE8Fzb9OiHDmwKbznyCHAefKMjVUo+EbRWDUmg9pXVwak0P+Mlam4pKjaS3RYH0DkNkTnXjj+Xl2v3DSsIeUTRQiJaPwgYilSHkodFA6+ca26S5AgAfJqOBiPHugiFGKukh2gDiqA+VuN8cm2UDDiaxsBqmGxxEsuQADrDMJmMYuCqTKGRQJDvWYSqLd0tXjCjgQneOsjgpYzV4yNqCOIaalPexcfjjegHKtcnjvjzXCnoKM/JK0qeIDz4xNFyFyKz6iQa/haTiWr+2LjvMkIsmifnoSNohgc+ZKztu9K0ADQVO9IPtfiIWVWlHpQC8eH/PlxmoaeA5KNjbWJ8LikdXgiUYkEheUZN2OjqLKtxnKNCfXFPDYvSyP8/c1ckcwA/bjdWo1XUHKbPEccp89jS2fBYJP70DXsDNzNOJb4koo1EVa6F03skvF0zJMpTAJ24P+zHBbyjwPoMKQmhdZlLCkJw/H1A==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(136003)(376002)(346002)(366004)(396003)(451199021)(36756003)(54906003)(110136005)(478600001)(316002)(4326008)(66946007)(66476007)(66556008)(41300700001)(2906002)(8936002)(8676002)(5660300002)(38100700002)(186003)(2616005)(86362001)(26005)(31686004)(6506007)(6512007)(53546011)(31696002)(6486002)(83380400001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bUJYdk1adVJzbnlkVXhDNyttL1hjQ0hFUFNqT3gxd05LeCt4TXFjR3ZQbkds?=
 =?utf-8?B?K1pobUhBUkpvUGtRVHk2R1NXN2hmRlVYaks5ZFd4Qk9vNFpOSmJoTFROcStF?=
 =?utf-8?B?bTRJUFhqOFUxZ0VqRDNYSW50eW02c1h4b2FkNTdLL1BUZnZZdG1NQ0ROdjY4?=
 =?utf-8?B?dGxhdDdsbWZrMlp4ejZqcmFuSTY3VVl4ajA1R3ZKdHpGTUgyMzlvdjdBQkhQ?=
 =?utf-8?B?MHpGbTlMb3hFK2o1cUhpOGdNbWdRRytONTI0ZERZOEppTU5pc1cwL0s4N1Ey?=
 =?utf-8?B?Z1paemlHc2VwQWZMMW11bzlETHpZbWVvbVFyQlF5Z0JtOWdid1EzdEluMUNL?=
 =?utf-8?B?RUw0S2dNK3FrWW1nbWJTaThLVjUyRGYrbmpjcDRpVmVjTU9HNnRqY0RJRUFu?=
 =?utf-8?B?MUd6S3VFUXhEVVZTT3lteElyRjdaTWkrTG1vVlFnY2dnMkJYNUU4azUxSVBz?=
 =?utf-8?B?d0Vjdi9Zckk0MzhtVkllTHgyc3ZzSk1pNU1HYkNpbk53UFdxOW0zQmZFMW05?=
 =?utf-8?B?YjVaZWZkYVNMT3FXOHdKcmxmalplL0VaWnNmcng2ZWJpbitOS1FYUUxYakg1?=
 =?utf-8?B?RmdCUjhGT1hvTEhSWDBVNGk1L2RwN3pCRHVhYTMrTDIvMmlJTTA1ZWppOWlT?=
 =?utf-8?B?OEhTYkM5Q3ZVRjBNQ2ZibXJsV1lhdkJqRWp0WE16UStTcmZFbDQwTnpvdGVP?=
 =?utf-8?B?S25ZTXVMZGh2aUFSVUxtdFNiM0E1M2t1RTNsd05TNjJHWTErajVaSXFFbjVp?=
 =?utf-8?B?VFB4ZFR1ZGdtQm5lVWJzMFIrbmJjMm8zUGJrVGZ0MzVncFY4ZXNPQjhNUTlY?=
 =?utf-8?B?MHFxcGpxb3l4WjdPako3ZEZNR0ZORTgxMFc2bTFOeXdwSFFXQlFxdlRPd1kw?=
 =?utf-8?B?cUQrNnNKZVhUdUw2U1ovbVA5Tmk4ekRmbEZ3cWxHOWdvdnowc2pXQnFWN3Jj?=
 =?utf-8?B?b0hJNGhVWFg3MTRSMnZHN2F1am5hOXQ3MHhhQXoxbXhHQUJrWEZqK1hCYUxJ?=
 =?utf-8?B?cWtucWJNSlh4dC8vZzFoakZ2YlZ2NXRrcW5sQ0p6bUp5dmZSWmp6Y1ByQW9l?=
 =?utf-8?B?TzhpUWJzS2FHL21zNVRleEQ1aVF5R2pJRkhGQXQ1NUgvUmhTUVU4QXpPZ1Rw?=
 =?utf-8?B?bGRaMHBmTytiTzdaT2ZxM1JoOEViNW10Q01GZGNzV0d3cjJrUWpoa1ovaE1l?=
 =?utf-8?B?ZDVEaHpkNkM0cGpna2ZNODYvUWRad1pxckl4QmtPTkd0QXByZjJjZlRkODRn?=
 =?utf-8?B?WklaM0lwcmk2UEZzbjd1MVNjdFdvbE8zS2ZyeTBxR0NjOXV1Q2VaY0NZYnc3?=
 =?utf-8?B?RVVBaHVSTncwT0F2RHkybVNBSUxrazYwUmlsK3ovVzcvSTFvdjJMeVhPSWFC?=
 =?utf-8?B?VDNibzN6alQzRkRNYUpGcXJpY1ExVk13SVpza09tbkIzNDVRbnR2OFhrL3R1?=
 =?utf-8?B?emRVVkhWcVpleEJFR2c1aE5xdklsUmkwaUduUWkxOEV6T2JvZzB0ajlEb1I4?=
 =?utf-8?B?Y2hBQWo2QzNuamxXSllyZ0JUbFJGU1ViUEp6Y1MyVVpjd05pc2NGN0FBSUxR?=
 =?utf-8?B?N1JkaDkvSG5nUThvUHhhdmVpYi9YemlLcVlQcTFJYzhsaUdtek9NS090QWc2?=
 =?utf-8?B?Rnh1ZzVFSWlrVXNDZ2w5WDVCMjJLR0ZiQVRVNU5pZ1ZqQTdpeXZ6L3hOUVdG?=
 =?utf-8?B?azlRUmt4cDQyc2kweVVzSFFlUERmWUUwMTF3amZNWnNjTlhtWXhQd212V05J?=
 =?utf-8?B?TXpyWWNoN0dUdFhBNHlWWHBPVnBWamM3b3dEbjBZZHhUNUlHYWJ1OUpCVkxV?=
 =?utf-8?B?M1E0dnVqMDZJamMwak52S093MXFtZGtsb3hiUzIzWkN6UlJWcDBlV2hHSlBX?=
 =?utf-8?B?b1pyaXlLSFBYSXFmQlFjVWpML1UzQVFJNC9GaHdKQ0VLVXpoMzlzRkJPM21w?=
 =?utf-8?B?TjJuUDFnbmYvNTZnY3EzRVZYWDM0TnA5Y3hMK2I4ZmhtLy9QMHZ1NERoQ2ta?=
 =?utf-8?B?WXFjSDU0enBVWVVvRkpPOEk0MitUVTRQZUExNWN0SG5HVFh5NG1jQUJ0UnBq?=
 =?utf-8?B?SlJEbENRTTZud1pkNDE3TjQ3QnM1RnpkVGE5QnRqajhSbDIrbDIyUm1haG5h?=
 =?utf-8?Q?fVFoeBFDl44pp1fPs0gHpj5H8?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dfd5f1ed-fa7a-4163-732c-08db44a70417
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 09:33:52.3251
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5sX9F4niDE1Eym4YB4KPRJ853/IW/phtMUBxkn3FAKGPbwLJJzE8f1Kie8kwcu1JJKUqD2v7pyxF0NH5aTJAtA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8618

On 21.04.2023 16:41, Oleksii wrote:
> On Thu, 2023-04-20 at 14:58 +0200, Jan Beulich wrote:
>> On 19.04.2023 17:42, Oleksii Kurochko wrote:
>>> + * ============================================================================
>>> + *    Start addr    |   End addr        |  Size  | VM area description
>>> + * ============================================================================
>>> + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | Xen
>>> + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | FDT
>>> + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | Fixmap
>>
>> These are all L2 slot 511 aiui, which may be worth mentioning,
>> especially since the top bits don't match the top bits further down in
>> the table (because of the aliasing).
> 
> Then I'll add one more column with the slot number.
> 
>>
>>> + *     .................. unused ..................
>>
>> This is covering slot 510, which again may be worth mentioning.
>>
>>> + * 0000003200000000 |  0000007f40000000 | 331 GB | Direct map (L2 slot: 200-509)
>>> + * 0000003100000000 |  0000003140000000 |  1 GB  | Frametable (L2 slot: 196-197)
>>
>> 1Gb is, if I'm not mistaken, a single L2 slot.
> Yeah, it can be misunderstood. I meant [196, 197), so 197 isn't
> included. I'll update the table.
> 
>>
>> Also assuming a 32-byte struct page_info (I don't think you'll get away
>> with less than that, when even Arm32 requires this much), there's a
>> mismatch between direct map and frame table size: With 4k pages, the
>> scaling factor would be 128 if I'm not mistaken. So perhaps you really
>> mean 3Gb here to cover for (slightly more than) the 331Gb of memory you
>> mean to be able to map?
> For RV64 the page_info size will be 56 bytes and 32 bytes for RV32, but
> you are right: it should be 3 GB in that case, which will be enough
> (taking into account both available sizes of the page_info structure).
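The scaling-factor arithmetic quoted above is easy to verify: with 4 KiB
pages and one page_info entry per page, the ratio of mapped memory to
frame table is PAGE_SIZE / sizeof(page_info). A minimal sketch, using the
sizes mentioned in the discussion:

```c
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define GiB (1ULL << 30)
#define MiB (1ULL << 20)

/* Bytes of frame table needed to describe mem_bytes of memory,
 * with one page_info entry of pginfo_size bytes per page. */
static inline uint64_t frametable_bytes(uint64_t mem_bytes,
                                        uint64_t pginfo_size)
{
    return mem_bytes / PAGE_SIZE * pginfo_size;
}
```

With 32-byte entries the factor is indeed 128, so 331 GiB of direct map
needs roughly 2.6 GiB of frame table (hence the suggested 3 GiB, rather
than 1 GiB); 56-byte entries push that to about 4.5 GiB.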

As to the plan for it to be 56 bytes (i.e. like on Arm): Arm has always
had a 64-bit padding field at the end. My best guess is that the field
was introduced to have a 32-byte struct on Arm32. But then why
artificially increase the struct from 48 to 56 bytes on Arm64? And hence
why have the same oddity on RV64?
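The 48-vs-56-byte point can be illustrated with a hypothetical layout
(field names are illustrative only, not the real struct page_info):

```c
#include <stdint.h>

/* Illustrative only: six 64-bit fields give the 48 bytes a
 * struct page_info would occupy on a 64-bit target without a
 * trailing padding field. */
struct pginfo48 {
    uint64_t flags;
    uint64_t next, prev;    /* list linkage */
    uint64_t type_info;
    uint64_t count_info;
    uint64_t tlbflush_timestamp;
};

/* The same layout with the questioned 64-bit pad appended,
 * artificially growing the struct to 56 bytes. */
struct pginfo56 {
    struct pginfo48 base;
    uint64_t pad;
};
```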

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 09:53:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 09:53:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525257.816343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqssL-00006Q-UG; Mon, 24 Apr 2023 09:52:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525257.816343; Mon, 24 Apr 2023 09:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqssL-00006J-Q4; Mon, 24 Apr 2023 09:52:49 +0000
Received: by outflank-mailman (input) for mailman id 525257;
 Mon, 24 Apr 2023 09:52:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ydgZ=AP=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pqssK-00006C-9P
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 09:52:48 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c450a931-e285-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 11:52:47 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id CBAC521AA0;
 Mon, 24 Apr 2023 09:52:46 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9DBF81390E;
 Mon, 24 Apr 2023 09:52:46 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id cWxBJW5RRmRWGwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 24 Apr 2023 09:52:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c450a931-e285-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682329966; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=sS6bv/OeBR52Q4taFtLeU4TquHlDl+8W7CE+JI7PwCI=;
	b=Fr5Xlex0euR74uTWceSZFFJH8d8tIzOjuDH+XmEcag+FMPkF7T7pPQUMgjtHn/jCrlx4mT
	cAIwdxmI6QZto/kgY96f4x89vpP82ZC6W/Zgc3IAxYKChXsMfcc4TzFBwirzXykwO9TcaU
	t1FLNTAsKsEiFU7tBBbMEDRR0fgp5fE=
Message-ID: <0737360b-b41f-1a0c-45f6-fb402c243db1@suse.com>
Date: Mon, 24 Apr 2023 11:52:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH v2] tools/xenstore: fix quota check in acc_fix_domains()
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230328144327.6562-1-jgross@suse.com>
 <9df50e0a-c1fe-65cd-fc45-12ddb6e57c6b@suse.com>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <9df50e0a-c1fe-65cd-fc45-12ddb6e57c6b@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------0nyaCrGOvSXgypLqMFlkPBFT"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------0nyaCrGOvSXgypLqMFlkPBFT
Content-Type: multipart/mixed; boundary="------------VPS1C4YnhxnY0Pr4rqlD5rGx";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
Message-ID: <0737360b-b41f-1a0c-45f6-fb402c243db1@suse.com>
Subject: Re: [PATCH v2] tools/xenstore: fix quota check in acc_fix_domains()
References: <20230328144327.6562-1-jgross@suse.com>
 <9df50e0a-c1fe-65cd-fc45-12ddb6e57c6b@suse.com>
In-Reply-To: <9df50e0a-c1fe-65cd-fc45-12ddb6e57c6b@suse.com>

--------------VPS1C4YnhxnY0Pr4rqlD5rGx
Content-Type: multipart/mixed; boundary="------------CTL556HBx0dRynUBD49iy8uf"

--------------CTL556HBx0dRynUBD49iy8uf
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.04.23 10:56, Jan Beulich wrote:
> On 28.03.2023 16:43, Juergen Gross wrote:
>> Today when finalizing a transaction the number of node quota is checked
>> to not being exceeded after the transaction. This check is always done,
>> even if the transaction is being performed by a privileged connection,
>> or if there were no nodes created in the transaction.
>>
>> Correct that by checking quota only if:
>> - the transaction is being performed by an unprivileged guest, and
>> - at least one node was created in the transaction
>>
>> Reported-by: Julien Grall <julien@xen.org>
>> Fixes: f2bebf72c4d5 ("xenstore: rework of transaction handling")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Considering the referenced commit is about 6 years old, I thought this
> would be a backporting candidate. The function mentioned in the title
> doesn't appear to exist on 4.17, though. Therefore, if there is an
> intention for this to be backported, may I please ask that a suitable
> backport be provided?

This should do the job.


Juergen
--------------CTL556HBx0dRynUBD49iy8uf
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-tools-xenstore-fix-quota-check-in-acc_fix_domains.patch"
Content-Disposition: attachment;
 filename*0="0001-tools-xenstore-fix-quota-check-in-acc_fix_domains.patch"
Content-Transfer-Encoding: 8bit

From 1e308552989dd3c371d5ff27d8e615d07ca9847d Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Mon, 24 Apr 2023 11:41:26 +0200
Subject: [PATCH] tools/xenstore: fix quota check in acc_fix_domains()

Today when finalizing a transaction the number of node quota is checked
to not being exceeded after the transaction. This check is always done,
even if the transaction is being performed by a privileged connection,
or if there were no nodes created in the transaction.

Correct that by checking quota only if:
- the transaction is being performed by an unprivileged guest, and
- at least one node was created in the transaction

Commit f6b801c36bd5e4ab22a9f80c8d57121b62b139af upstream.

Reported-by: Julien Grall <julien@xen.org>
Fixes: f2bebf72c4d5 ("xenstore: rework of transaction handling")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        |  3 +++
 tools/xenstore/xenstored_transaction.c | 24 ++++++++++++++++++++----
 tools/xenstore/xenstored_transaction.h |  3 +++
 3 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 56dbdc2530..8907c6cb0b 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1474,6 +1474,9 @@ static struct node *create_node(struct connection *conn, const void *ctx,
 	if (!node)
 		return NULL;
 
+	if (conn && conn->transaction)
+		ta_node_created(conn->transaction);
+
 	node->data = data;
 	node->datalen = datalen;
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index ac854197ca..8ebd16aed2 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -172,12 +172,20 @@ struct transaction
 	/* List of changed domains - to record the changed domain entry number */
 	struct list_head changed_domains;
 
+	/* There was at least one node created in the transaction. */
+	bool node_created;
+
 	/* Flag for letting transaction fail. */
 	bool fail;
 };
 
 uint64_t generation;
 
+void ta_node_created(struct transaction *trans)
+{
+	trans->node_created = true;
+}
+
 static struct accessed_node *find_accessed_node(struct transaction *trans,
 						const char *name)
 {
@@ -514,7 +522,12 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	return 0;
 }
 
-static int transaction_fix_domains(struct transaction *trans, bool update)
+/*
+ * Update or check number of nodes per domain at the end of a transaction.
+ * If "update" is true, "chk_quota" is ignored.
+ */
+static int transaction_fix_domains(struct transaction *trans, bool chk_quota,
+				   bool update)
 {
 	struct changed_domain *d;
 	int cnt;
@@ -522,7 +535,7 @@ static int transaction_fix_domains(struct transaction *trans, bool update)
 	list_for_each_entry(d, &trans->changed_domains, list) {
 		cnt = domain_entry_fix(d->domid, d->nbentry, update);
 		if (!update) {
-			if (cnt >= quota_nb_entry_per_domain)
+			if (chk_quota && cnt >= quota_nb_entry_per_domain)
 				return ENOSPC;
 			if (cnt < 0)
 				return ENOMEM;
@@ -538,6 +551,7 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 	const char *arg = onearg(in);
 	struct transaction *trans;
 	bool is_corrupt = false;
+	bool chk_quota;
 	int ret;
 
 	if (!arg || (!streq(arg, "T") && !streq(arg, "F")))
@@ -552,13 +566,15 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 	if (!conn->transaction_started)
 		conn->ta_start_time = 0;
 
+	chk_quota = trans->node_created && domain_is_unprivileged(conn);
+
 	/* Attach transaction to ctx for auto-cleanup */
 	talloc_steal(ctx, trans);
 
 	if (streq(arg, "T")) {
 		if (trans->fail)
 			return ENOMEM;
-		ret = transaction_fix_domains(trans, false);
+		ret = transaction_fix_domains(trans, chk_quota, false);
 		if (ret)
 			return ret;
 		ret = finalize_transaction(conn, trans, &is_corrupt);
@@ -568,7 +584,7 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 		wrl_apply_debit_trans_commit(conn);
 
 		/* fix domain entry for each changed domain */
-		transaction_fix_domains(trans, true);
+		transaction_fix_domains(trans, false, true);
 
 		if (is_corrupt)
 			corrupt(conn, "transaction inconsistency");
diff --git a/tools/xenstore/xenstored_transaction.h b/tools/xenstore/xenstored_transaction.h
index 3417303f94..f57d204cc0 100644
--- a/tools/xenstore/xenstored_transaction.h
+++ b/tools/xenstore/xenstored_transaction.h
@@ -36,6 +36,9 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 
 struct transaction *transaction_lookup(struct connection *conn, uint32_t id);
 
+/* Set flag for created node. */
+void ta_node_created(struct transaction *trans);
+
 /* inc/dec entry number local to trans while changing a node */
 void transaction_entry_inc(struct transaction *trans, unsigned int domid);
 void transaction_entry_dec(struct transaction *trans, unsigned int domid);
-- 
2.35.3
--------------CTL556HBx0dRynUBD49iy8uf
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------CTL556HBx0dRynUBD49iy8uf--

--------------VPS1C4YnhxnY0Pr4rqlD5rGx--

--------------0nyaCrGOvSXgypLqMFlkPBFT
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRGUW0FAwAAAAAACgkQsN6d1ii/Ey+b
jwf7B4cP4TASZHlLcdU2OuNLmP56YpEZ+FPnBIo5dD4sMH7ERooWDSDDoizCWFuuBGD9PR0YVj6U
mpN+rUsIBp+YldGv9XrzZDWezOqitMpqzMd0NZ01AtGc5XSF+YK9Qsb79T40QLq+VW/92nHECfPP
kgh5wuCItc/2mq7gKRL7UALCfTi49LBmxV8WPZwd+alLZRusRNoFomgAUDE82QQSvK8jVKd77Lw3
X5JA1zvXlntvfv+db9acHjAIIm3+lByvIQos6NHBGB+3XlsfMlKsRfcH7gJSOINkjNvzX5KowoEq
xmpXDojwssFpFE33UDlkEUtzDcZIprxCC2Ayis4iRg==
=0qXx
-----END PGP SIGNATURE-----

--------------0nyaCrGOvSXgypLqMFlkPBFT--


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 09:58:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 09:58:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525262.816352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqsxN-0000jj-GB; Mon, 24 Apr 2023 09:58:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525262.816352; Mon, 24 Apr 2023 09:58:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqsxN-0000jc-DR; Mon, 24 Apr 2023 09:58:01 +0000
Received: by outflank-mailman (input) for mailman id 525262;
 Mon, 24 Apr 2023 09:57:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqsxL-0000jS-E2; Mon, 24 Apr 2023 09:57:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqsxL-0007DI-5s; Mon, 24 Apr 2023 09:57:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pqsxK-0000VL-FI; Mon, 24 Apr 2023 09:57:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pqsxK-0003BY-Ec; Mon, 24 Apr 2023 09:57:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Oh3d51nRKtzkNhjxZTCl5EBI9ScQnvoGHx9aHyv7RoI=; b=N4RtyTY8KHQ5maJjN64mZyH+/A
	Ka/9aZFz486/9wuSqUXNz940EPTauluLNgvowxyKBMtcBdXyBbfkArLc4bofSEvom/s4k92RHy4j9
	ZOHGiGzxtWKpJtqabqgN/XGXUiK9G3V3cWLMiywRMURafmeJivqlpoDSs79j+XDdMk1Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180390-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180390: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:<job status>:broken:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:host-reuse/final:broken:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-localmigrate/x10:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=457391b0380335d5e9a5babdec90ac53928b23b4
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 Apr 2023 09:57:58 +0000

flight 180390 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180390/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel    <job status>                 broken
 test-amd64-amd64-dom0pvh-xl-intel 27 host-reuse/final  broken REGR. vs. 180278
 test-amd64-amd64-freebsd11-amd64 19 guest-localmigrate/x10 fail REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                457391b0380335d5e9a5babdec90ac53928b23b4
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    7 days
Failing since        180281  2023-04-17 06:24:36 Z    7 days   13 attempts
Testing same since   180390  2023-04-23 23:41:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Hung <alex.hung@amd.com>
  Alexander Aring <aahringo@redhat.com>
  Alexander Potapenko <glider@google.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Alexei Starovoitov <ast@kernel.org>
  Alexis Lothoré <alexis.lothore@bootlin.com>
  Andrea Righi <andrea.righi@canonical.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Chi <andy.chi@canonical.com>
  Arnd Bergmann <arnd@arndb.de>
  Asahi Lina <lina@asahilina.net>
  Axel Lin <axel.lin@ingics.com>
  Baokun Li <libaokun1@huawei.com>
  Baoqi Zhang <zhangbaoqi@loongson.cn>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Bhavya Kapoor <b-kapoor@ti.com>
  Bjorn Andersson <andersson@kernel.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Boris Burkov <boris@bur.io>
  Borislav Petkov (AMD) <bp@alien8.de>
  Brian Masney <bmasney@redhat.com>
  Chancel Liu <chancel.liu@nxp.com>
  Chen Aotian <chenaotian2@163.com>
  Chong Qiao <qiaochong@loongson.cn>
  Chris Morgan <macromorgan@hotmail.com>
  Christoph Paasch <cpaasch@apple.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Conor Dooley <conor.dooley@microchip.com>
  Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
  Dan Carpenter <dan.carpenter@linaro.org>
  Dan Carpenter <error27@gmail.com>
  Dan Johansen <strit@manjaro.org>
  Daniel Baluta <daniel.baluta@nxp.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Miess <Daniel.Miess@amd.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Airlie <airlied@redhat.com>
  David Gow <davidgow@google.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Donald Hunter <donald.hunter@gmail.com>
  Dragan Simic <dragan.simic@gmail.com>
  Duoming Zhou <duoming@zju.edu.cn>
  Ekaterina Orlova <vorobushek.ok@gmail.com>
  Enze Li <lienze@kylinos.cn>
  Fabio Estevam <festevam@denx.de>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gwangun Jung <exsociety@gmail.com>
  Heiko Stuebner <heiko@sntech.de>
  hrdl <git@hrdl.eu>
  Huacai Chen <chenhuacai@loongson.cn>
  Ido Schimmel <idosch@nvidia.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jani Nikula <jani.nikula@intel.com>
  Jaroslav Kysela <perex@perex.cz>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jens Axboe <axboe@kernel.dk>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jianqun Xu <jay.xu@rock-chips.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingbo Xu <jefflexu@linux.alibaba.com>
  Johan Hovold <johan+linaro@kernel.org>
  John Ogness <john.ogness@linutronix.de>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Toppins <jtoppins@redhat.com>
  JR Gonzalez <jrg@scientiam.org>
  Jules Maselbas <jmaselbas@kalray.eu>
  Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
  Karol Herbst <kherbst@redhat.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Li Lanzhe <u202212060@hust.edu.cn>
  Liam R. Howlett <Liam.Howlett@oracle.com>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Long Wang <long.wang@analog.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Luben Tuikov <luben.tuikov@amd.com>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Rodriguez Reboredo <yakoyoku@gmail.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Mat Martineau <martineau@kernel.org>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Matthieu Baerts <matthieu.baerts@tessares.net>
  Mel Gorman <mgorman@suse.de>
  Mel Gorman <mgorman@techsingularity.net>
  Michael Chan <michael.chan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Michal Simek <michal.simek@amd.com>
  Miguel Ojeda <ojeda@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Naama Meir <naamax.meir@linux.intel.com>
  Naoya Horiguchi <naoya.horiguchi@nec.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Oliver Upton <oliver.upton@linux.dev>
  Ondrej Mosnacek <omosnace@redhat.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Blass <patrickblass@mailbox.org>
  Paulo Alcantara (SUSE) <pc@manguebit.com>
  Paulo Alcantara <pc@manguebit.com>
  Pedro Tammela <pctammela@mojatatu.com>
  Peng Fan <peng.fan@nxp.com>
  Peng Zhang <zhangpeng.00@bytedance.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Peter Xu <peterx@redhat.com>
  Petr Machata <petrm@nvidia.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qi Zheng <zhengqi.arch@bytedance.com>
  Qing Zhang <zhangqing@loongson.cn>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Ricardo Pardini <ricardo@pardini.net>
  Rob Herring <robh@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Scott Mayhew <smayhew@redhat.com>
  Sebastian Basierski <sebastianx.basierski@intel.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Seiji Nishikawa <snishika@redhat.com>
  SeongJae Park <sj@kernel.org>
  Shakeel Butt <shakeelb@google.com>
  Shawn Guo <shawnguo@kernel.org>
  Shengjiu Wang <shengjiu.wang@gmail.com>
  Soumya Negi <soumya.negi97@gmail.com>
  Steev Klimaszewski <steev@kali.org> #Thinkpad X13s
  Steve Chou <steve_chou@pesi.com.tw>
  Steve French <stfrench@microsoft.com>
  Sudeep Holla <sudeep.holla@arm.com>
  syzbot+a7c1ec5b1d71ceaa5186@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Bamelis <thomas@bamelis.dev>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vadim Fedorenko <vadfed@meta.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vitaly Prosyak <vitaly.prosyak@amd.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Will Deacon <will@kernel.org>
  William Breathitt Gray <william.gray@linaro.org>
  Willy Tarreau <w@1wt.eu>
  Woody Suwalski <terraluna977@gmail.com>
  Xu Yilun <yilun.xu@intel.com>
  Xuan Zhuo <xuanzhuo@linux.alibaba.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            broken  
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-dom0pvh-xl-intel broken
broken-step test-amd64-amd64-dom0pvh-xl-intel host-reuse/final

Not pushing.

(No revision log; it would be 5589 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 10:19:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 10:19:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525271.816363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqtHY-0003R9-Cw; Mon, 24 Apr 2023 10:18:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525271.816363; Mon, 24 Apr 2023 10:18:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqtHY-0003R2-9k; Mon, 24 Apr 2023 10:18:52 +0000
Received: by outflank-mailman (input) for mailman id 525271;
 Mon, 24 Apr 2023 10:18:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqtHX-0003Qw-AF
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 10:18:51 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on060a.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 66f1c051-e289-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 12:18:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB7193.eurprd04.prod.outlook.com (2603:10a6:10:121::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 10:18:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 10:18:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66f1c051-e289-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <53257ae8-d306-8c7e-35ff-f3bc3947849b@suse.com>
Date: Mon, 24 Apr 2023 12:18:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v5 2/4] xen/riscv: introduce setup_initial_pages
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
 <5b27693bcdf6d64381314aeef72cfe03dee8d73a.1681918194.git.oleksii.kurochko@gmail.com>
 <67d8574f-2e0d-4eb6-19aa-67fe7645e35a@suse.com>
 <ea2d5cfabb9ada64eb975369779ca430f38e9eec.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ea2d5cfabb9ada64eb975369779ca430f38e9eec.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 21.04.2023 18:01, Oleksii wrote:
> On Thu, 2023-04-20 at 16:36 +0200, Jan Beulich wrote:
>> On 19.04.2023 17:42, Oleksii Kurochko wrote:
>>> +                        /* panic(), <asm/bug.h> aren't ready now. */
>>> +                        die();
>>> +                    }
>>> +                }
>>> +        }
>>> +
>>> +        /* Point to next page */
>>> +        page_addr += XEN_PT_LEVEL_SIZE(0);
>>
>> Seeing this as well as the loop heading - maybe more suitable as a
>> for(;;) loop?
> I am not sure that I understand the benefits of changing
> "while ( page_addr < map_end )" to "for(;;)".
> Could you please explain to me what the benefits are?

The common case of using while() is in situations where one cannot
use for(). for() is (imo) preferable in all cases where the third of
the controlling expressions isn't empty (and is to be carried out
after every iteration): Any use of "continue" inside the loop will
then properly effect loop progress. (Of course there are cases where
this behavior isn't wanted; that's where while() may come into play
then.)
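
Jan's point can be illustrated with a small host-side sketch (the helper names are made up for illustration, not taken from the patch under review): in the for() variant the increment lives in the third controlling expression and runs even across a "continue", while the while() variant has to repeat the increment before every "continue".

```c
#include <assert.h>

/* Count multiples of 3 in [0, n) with a for loop: the increment in the
 * third controlling expression runs even when `continue` is taken. */
static int count_mult3_for(int n)
{
    int count = 0;

    for ( int i = 0; i < n; i++ )
    {
        if ( i % 3 )
            continue;          /* i++ still executes */
        count++;
    }

    return count;
}

/* The equivalent while loop must repeat the increment before every
 * `continue`; forgetting one of them hangs the loop. */
static int count_mult3_while(int n)
{
    int count = 0, i = 0;

    while ( i < n )
    {
        if ( i % 3 )
        {
            i++;               /* easy to forget */
            continue;
        }
        count++;
        i++;
    }

    return count;
}
```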

>>> +    csr_write(CSR_SATP,
>>> +              ((unsigned long)stage1_pgtbl_root >> PAGE_SHIFT) |
>>> +              satp_mode << SATP_MODE_SHIFT);
>>> +
>>> +    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == satp_mode )
>>> +        is_mode_supported = true;
>>> +
>>> +    /* Clean MMU root page table and disable MMU */
>>> +    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
>>> +
>>> +    csr_write(CSR_SATP, 0);
>>> +    asm volatile("sfence.vma");
>>
>> I guess what you do in this function could do with some more
>> comments.
>> Looks like you're briefly enabling the MMU to check that what you
>> wrote
>> to SATP you can also read back. (Isn't there a register reporting
>> whether the feature is available?)
> I supposed there had to be one, but I couldn't find such a register in
> the docs.

Well, yes, interestingly the register is marked WARL, so apparently
intended to be used for probing like you do. (I find the definition of
WARL a little odd though, as such writes supposedly aren't necessarily
value preserving. For SATP this might mean that translation is enabled
by a write of an unsupported mode, with a different number of levels.
This isn't going to work very well, I'm afraid.)
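
The probing sequence under discussion can be sketched in plain C by modeling one legal WARL behavior (a write of an unsupported mode leaves the field unchanged). The register model and helper names below are purely illustrative: real code accesses SATP via csrw/csrr, needs an sfence.vma, and, as noted above, a WARL implementation is not obliged to behave this way.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SATP_MODE_SHIFT 60
#define SATP_MODE_SV39  8
#define SATP_MODE_SV48  9

/* Model of a WARL SATP: an unsupported mode write is silently dropped
 * (one legal behavior; real hardware may instead pick another mode). */
static uint64_t satp_reg;
static uint64_t max_supported_mode = SATP_MODE_SV39;

static void csr_write_satp(uint64_t val)
{
    if ( (val >> SATP_MODE_SHIFT) <= max_supported_mode )
        satp_reg = val;
}

static uint64_t csr_read_satp(void)
{
    return satp_reg;
}

/* Probe a paging mode: write it, then check that it reads back. */
static bool satp_mode_supported(uint64_t mode)
{
    uint64_t old = satp_reg;
    bool ok;

    csr_write_satp(mode << SATP_MODE_SHIFT);
    ok = (csr_read_satp() >> SATP_MODE_SHIFT) == mode;
    csr_write_satp(old);   /* restore; real code would also sfence.vma */

    return ok;
}
```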

>>> +    /*
>>> +     * The stack should be re-initialized because:
>>> +     * 1. Right now the address of the stack is load-time relative,
>>> +     *    which will cause an issue if the load start address isn't
>>> +     *    equal to the linker start address.
>>> +     * 2. Addresses on the stack are all load-time relative, which
>>> +     *    can be an issue when the load start address isn't equal to
>>> +     *    the linker start address.
>>> +     */
>>> +    asm volatile ("mv sp, %0" : : "r"((unsigned long)cpu0_boot_stack + STACK_SIZE));
>>
>> Nit: Style (overly long line).
>>
>> What's worse - I don't think it is permitted to alter sp in the
>> middle of a function. The compiler may maintain local variables on
>> the stack which don't correspond to any programmer specified ones,
>> including pointer ones which point into the stack frame. This is
>> specifically why both x86 and Arm have switch_stack_and_jump()
>> macros.
> but the macro (from Arm) looks equivalent to the code mentioned above:
> #define switch_stack_and_jump(stack, fn) do {                            \
>     asm volatile ("mov sp,%0; b " STR(fn) : : "r" (stack), "X" (fn) : "memory" ); \

Note how the write to SP and the branch are contained in a single asm()
there. By checking ...

>     unreachable();                                                       \
> } while ( false )
> 
> Here is part of disassembled enable_mmu():
> 
> ffffffffc004aedc:       18079073                csrw    satp,a5
> ffffffffc004aee0:       00016797                auipc   a5,0x16
> ffffffffc004aee4:       12078793                addi    a5,a5,288
> ffffffffc004aee8:       813e                    mv      sp,a5
> ffffffffc004af00:       0f4000ef                jal     ra,ffffffffc004aff4 <cont_after_mmu_is_enabled>
> ...

... what the generated code is in your case, you won't guarantee that
things remain that way with future (or simply different) compilers.

>>> --- a/xen/arch/riscv/riscv64/head.S
>>> +++ b/xen/arch/riscv/riscv64/head.S
>>> @@ -1,4 +1,5 @@
>>>  #include <asm/asm.h>
>>> +#include <asm/asm-offsets.h>
>>>  #include <asm/riscv_encoding.h>
>>>  
>>>          .section .text.header, "ax", %progbits
>>> @@ -32,3 +33,4 @@ ENTRY(start)
>>>          add     sp, sp, t0
>>>  
>>>          tail    start_xen
>>> +
>>
>> ???
> Shouldn't there be one empty line at the end of a file?

There should be a newline at the end of a file, but not normally a
blank one. When you introduce a new file, it can be viewed as a matter
of taste whether to have an empty last line, but when you have a
seemingly unrelated change to a file like the one here, this is at
least odd.

>>> --- a/xen/arch/riscv/xen.lds.S
>>> +++ b/xen/arch/riscv/xen.lds.S
>>> @@ -136,6 +136,7 @@ SECTIONS
>>>      . = ALIGN(POINTER_ALIGN);
>>>      __init_end = .;
>>>  
>>> +    . = ALIGN(PAGE_SIZE);
>>>      .bss : {                     /* BSS */
>>>          __bss_start = .;
>>>          *(.bss.stack_aligned)
>>
>> Why do you need this? You properly use __aligned(PAGE_SIZE) for the
>> page tables you define, and PAGE_SIZE wouldn't be enough here anyway
>> if STACK_SIZE > PAGE_SIZE (as .bss.stack_aligned comes first). The
>> only time you'd need such an ALIGN() is if the following label
>> (__bss_start in this case) needed to be aligned at a certain
>> boundary. (I'm a little puzzled though that __bss_start isn't
>> [immediately] preceded by ". = ALIGN(POINTER_ALIGN);" - didn't .bss
>> clearing rely on such alignment?)
> ALIGN(PAGE_SIZE) isn't needed anymore.
> I used it to have a 4k-aligned physical address for the PTE when I
> mapped each section separately (it was like that in the previous
> version of the MMU patch series).
> 
> Regarding ". = ALIGN(POINTER_ALIGN);", I would say it is enough to
> have __bss_end aligned (which was done) to be sure that we can clear
> __SIZEOF_POINTER__ bytes on each iteration of the .bss clearing loop
> without worrying that the size of the .bss section may not be
> divisible by __SIZEOF_POINTER__.

How would guaranteeing this only for __bss_end help? __bss_start could
still be misaligned, and then you'd
(a) use misaligned stores for clearing and
(b) extend clearing to outside of the .bss (as the last of the misaligned
stores would cross the __bss_end boundary).
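
The two failure modes can be demonstrated with a minimal host-side model of a word-at-a-time clearing loop (a hypothetical helper, not the actual Xen code): both end points must be pointer-size aligned, otherwise the loop either issues misaligned stores or writes past the end boundary.

```c
#include <assert.h>
#include <stdint.h>

/* Word-at-a-time clear, as a .bss-clearing loop would do it.  Both
 * `start` and `end` must be sizeof(uintptr_t)-aligned: a misaligned
 * start forces misaligned stores, and the final store would then also
 * run past `end`. */
static void clear_words(uintptr_t start, uintptr_t end)
{
    assert( !(start % sizeof(uintptr_t)) );
    assert( !(end % sizeof(uintptr_t)) );

    for ( uintptr_t p = start; p < end; p += sizeof(uintptr_t) )
        *(uintptr_t *)p = 0;
}
```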

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 10:23:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 10:23:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525275.816372 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqtLe-0004uK-Tq; Mon, 24 Apr 2023 10:23:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525275.816372; Mon, 24 Apr 2023 10:23:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqtLe-0004uB-R3; Mon, 24 Apr 2023 10:23:06 +0000
Received: by outflank-mailman (input) for mailman id 525275;
 Mon, 24 Apr 2023 10:23:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pqtLd-0004tm-KE
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 10:23:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqtLZ-0007zb-1W; Mon, 24 Apr 2023 10:23:01 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=[192.168.3.150]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqtLY-0003pR-Re; Mon, 24 Apr 2023 10:23:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <db3a9b3b-63db-89d1-5386-57eb7044b317@xen.org>
Date: Mon, 24 Apr 2023 11:22:58 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 1/4] xen/riscv: add VM space layout
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>, Oleksii <oleksii.kurochko@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
 <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
 <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
 <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
 <509ba3a2-0b85-d758-6915-7975d31a3437@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <509ba3a2-0b85-d758-6915-7975d31a3437@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 24/04/2023 10:33, Jan Beulich wrote:
> On 21.04.2023 16:41, Oleksii wrote:
>> On Thu, 2023-04-20 at 14:58 +0200, Jan Beulich wrote:
>>> On 19.04.2023 17:42, Oleksii Kurochko wrote:
>>>> + * ============================================================================
>>>> + *    Start addr    |   End addr        |  Size  | VM area description
>>>> + * ============================================================================
>>>> + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | Xen
>>>> + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | FDT
>>>> + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | Fixmap
>>>
>>> These are all L2 slot 511 aiui, which may be worth mentioning,
>>> especially since the top bits don't match the top bits further down
>>> in the table (because of the aliasing).
>>
>> Then I'll add one more column where I'll put the slot number.
>>
>>>
>>>> + *     .................. unused ..................
>>>
>>> This is covering slot 510, which again may be worth mentioning.
>>>
>>>> + * 0000003200000000 |  0000007f40000000 | 331 GB | Direct map (L2 slot: 200-509)
>>>> + * 0000003100000000 |  0000003140000000 |  1 GB  | Frametable (L2 slot: 196-197)
>>>
>>> 1Gb is, if I'm not mistaken, a single L2 slot.
>> Yeah, it can be misunderstood. I meant [196, 197), so 197 isn't
>> included. I'll update the table.
>>
>>>
>>> Also assuming a 32-byte struct page_info (I don't think you'll get
>>> away with less than that, when even Arm32 requires this much),
>>> there's a mismatch between direct map and frame table size: With 4k
>>> pages, the scaling factor would be 128 if I'm not mistaken. So
>>> perhaps you really mean 3Gb here to cover for (slightly more than)
>>> the 331Gb of memory you mean to be able to map?
>> For RV64 the page_info size will be 56 bytes (and 32 bytes for RV32),
>> but you are right, it should be 3 Gb in that case, which will be
>> enough (taking into account both available sizes of the page_info
>> structure).
> 
> As to the plan for it to be 56 bytes (i.e. like on Arm): Arm forever has
> had a 64-bit padding field at the end. My best guess is that the field
> was introduced to have a 32-byte struct on Arm32.

I can't exactly remember. But I would like to rework the struct 
page_info on Arm64 because...

> But then why
> artificially increase the struct from 48 to 56 bytes on Arm64? And hence
> why have the same oddity on RV64?


... with 56 bytes, some struct page_info may cross a cache line boundary.
For RISC-V, I would recommend making sure the struct page_info never
crosses a cache line boundary.
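
To illustrate (just a sketch, assuming a 64-byte cache line and a
cache-line-aligned array of struct page_info): whether an entry straddles a
line boundary depends only on the entry size:

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_LINE 64 /* assumed cache line size */

/* With a cache-line-aligned array of 'size'-byte entries, does entry
 * 'index' straddle a cache line boundary? */
static int crosses_line(size_t size, size_t index)
{
    size_t first = size * index;           /* first byte of the entry */
    size_t last = first + size - 1;        /* last byte of the entry */
    return first / CACHE_LINE != last / CACHE_LINE;
}
```

Only entry sizes that evenly divide the 64-byte line (or are multiples of
it) avoid straddling altogether; with 56-byte entries, already the second
entry (bytes 56..111) spans two lines.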

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 10:32:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 10:32:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525281.816383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqtV0-0006Rv-S1; Mon, 24 Apr 2023 10:32:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525281.816383; Mon, 24 Apr 2023 10:32:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqtV0-0006Ro-Oz; Mon, 24 Apr 2023 10:32:46 +0000
Received: by outflank-mailman (input) for mailman id 525281;
 Mon, 24 Apr 2023 10:32:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=riw3=AP=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1pqtUz-0006Ri-9y
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 10:32:45 +0000
Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com
 [2a00:1450:4864:20::334])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 58a39b88-e28b-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 12:32:43 +0200 (CEST)
Received: by mail-wm1-x334.google.com with SMTP id
 5b1f17b1804b1-3f086770a50so27748705e9.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Apr 2023 03:32:43 -0700 (PDT)
Received: from [192.168.22.129] (54-240-197-234.amazon.com. [54.240.197.234])
 by smtp.gmail.com with ESMTPSA id
 l2-20020a05600c4f0200b003ee74c25f12sm15346187wmq.35.2023.04.24.03.32.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Apr 2023 03:32:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58a39b88-e28b-11ed-b223-6b7b168915f2
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Message-ID: <54a37172-cad5-3b27-36fc-3b7768e39df8@xen.org>
Date: Mon, 24 Apr 2023 11:32:41 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Reply-To: paul@xen.org
Subject: Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
Content-Language: en-US
To: mark.syms@citrix.com, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20230420110205.688689-1-mark.syms@citrix.com>
Organization: Xen Project
In-Reply-To: <20230420110205.688689-1-mark.syms@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 20/04/2023 12:02, mark.syms@citrix.com wrote:
> From: Mark Syms <mark.syms@citrix.com>
> 
> Ensure the PV ring is drained on disconnect. Also ensure all pending
> AIO is complete, otherwise AIO tries to complete into a mapping of the
> ring which has been torn down.
> 
> Signed-off-by: Mark Syms <mark.syms@citrix.com>
> ---
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Anthony Perard <anthony.perard@citrix.com>
> CC: Paul Durrant <paul@xen.org>
> CC: xen-devel@lists.xenproject.org
> 
> v2:
>   * Ensure all inflight requests are completed before teardown
>   * RESEND to fix formatting
> ---
>   hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
>   1 file changed, 25 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
> index 734da42ea7..d9da4090bf 100644
> --- a/hw/block/dataplane/xen-block.c
> +++ b/hw/block/dataplane/xen-block.c
> @@ -523,6 +523,10 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
>   
>       dataplane->more_work = 0;
>   
> +    if (dataplane->sring == 0) {
> +        return done_something;
> +    }
> +

I think you could just return false here... Nothing is ever going to be 
done if there's no ring :-)
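
To illustrate the shape I mean (hypothetical, heavily simplified stand-in
types, not the real QEMU structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, stripped-down stand-in for XenBlockDataPlane -- just
 * enough to show the early-exit shape under discussion. */
struct dataplane_stub {
    void *sring; /* NULL once the ring mapping has been torn down */
};

static bool handle_requests_stub(struct dataplane_stub *dp)
{
    /* With no ring mapped, nothing can ever be done, so report that
     * directly instead of returning a never-set done_something. */
    if (dp->sring == NULL) {
        return false;
    }
    /* ... real ring processing would go here ... */
    return true;
}
```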

>       rc = dataplane->rings.common.req_cons;
>       rp = dataplane->rings.common.sring->req_prod;
>       xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
> @@ -666,14 +670,35 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane
>   void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
>   {
>       XenDevice *xendev;
> +    XenBlockRequest *request, *next;
>   
>       if (!dataplane) {
>           return;
>       }
>   
> +    /* We're about to drain the ring. We can cancel the scheduling of any
> +     * bottom half now */
> +    qemu_bh_cancel(dataplane->bh);
> +
> +    /* Ensure we have drained the ring */
> +    aio_context_acquire(dataplane->ctx);
> +    do {
> +        xen_block_handle_requests(dataplane);
> +    } while (dataplane->more_work);
> +    aio_context_release(dataplane->ctx);
> +

I don't think we want to be taking new requests, do we?

> +    /* Now ensure that all inflight requests are complete */
> +    while (!QLIST_EMPTY(&dataplane->inflight)) {
> +        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
> +            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
> +                        request);
> +        }
> +    }
> +

I think this could possibly be simplified by doing the drain after the 
call to blk_set_aio_context(), as long as we set dataplane->ctx to 
qemu_get_aio_context(). Also, as long as more_work is not set, it should 
still be safe to cancel the bh before the drain AFAICT.

   Paul

>       xendev = dataplane->xendev;
>   
>       aio_context_acquire(dataplane->ctx);
> +
>       if (dataplane->event_channel) {
>           /* Only reason for failure is a NULL channel */
>           xen_device_set_event_channel_context(xendev, dataplane->event_channel,
> @@ -684,12 +709,6 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
>       blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
>       aio_context_release(dataplane->ctx);
>   
> -    /*
> -     * Now that the context has been moved onto the main thread, cancel
> -     * further processing.
> -     */
> -    qemu_bh_cancel(dataplane->bh);
> -
>       if (dataplane->event_channel) {
>           Error *local_err = NULL;
>   



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 10:45:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 10:45:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525286.816393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqthA-00083K-VF; Mon, 24 Apr 2023 10:45:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525286.816393; Mon, 24 Apr 2023 10:45:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqthA-00083C-Rw; Mon, 24 Apr 2023 10:45:20 +0000
Received: by outflank-mailman (input) for mailman id 525286;
 Mon, 24 Apr 2023 10:45:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqth9-000835-Jk
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 10:45:19 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20607.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::607])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1a647f71-e28d-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 12:45:18 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DUZPR04MB9917.eurprd04.prod.outlook.com (2603:10a6:10:4d8::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 10:45:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 10:45:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a647f71-e28d-11ed-b223-6b7b168915f2
Message-ID: <d157b1e2-cfc5-f7b7-9443-16d1db9a4311@suse.com>
Date: Mon, 24 Apr 2023 12:45:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v5 1/4] xen/riscv: add VM space layout
Content-Language: en-US
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org,
 Oleksii <oleksii.kurochko@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
 <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
 <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
 <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
 <509ba3a2-0b85-d758-6915-7975d31a3437@suse.com>
 <db3a9b3b-63db-89d1-5386-57eb7044b317@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <db3a9b3b-63db-89d1-5386-57eb7044b317@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0188.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ab::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 24.04.2023 12:22, Julien Grall wrote:
> Hi,
> 
> On 24/04/2023 10:33, Jan Beulich wrote:
>> On 21.04.2023 16:41, Oleksii wrote:
>>> On Thu, 2023-04-20 at 14:58 +0200, Jan Beulich wrote:
>>>> On 19.04.2023 17:42, Oleksii Kurochko wrote:
>>>>> + * ============================================================================
>>>>> + *    Start addr    |   End addr        |  Size  | VM area description
>>>>> + * ============================================================================
>>>>> + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | Xen
>>>>> + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | FDT
>>>>> + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | Fixmap
>>>>
>>>> These are all L2 slot 511 aiui, which may be worth mentioning, especially
>>>> since the top bits don't match the top bits further down in the table
>>>> (because of the aliasing).
>>>
>>> Then I'll add one more column where I'll put the slot number.
>>>
>>>>
>>>>> + *     .................. unused ..................
>>>>
>>>> This is covering slot 510, which again may be worth mentioning.
>>>>
>>>>> + * 0000003200000000 |  0000007f40000000 | 331 GB | Direct map (L2 slot: 200-509)
>>>>> + * 0000003100000000 |  0000003140000000 |  1 GB  | Frametable (L2 slot: 196-197)
>>>>
>>>> 1Gb is, if I'm not mistaken, a single L2 slot.
>>> Yeah, it can be misunderstood. I meant [196, 197), so 197 isn't
>>> included. I'll update the table.
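
As an aside, the slot numbers in the table are easy to sanity-check with a
tiny sketch (assuming Sv39, where the top-level "L2" slot index is VPN[2],
i.e. virtual address bits 38:30, giving 512 slots of 1 GiB each):

```c
#include <assert.h>
#include <stdint.h>

/* Sv39: three paging levels; the top-level ("L2") slot index is
 * VPN[2] = bits 38:30 of the virtual address. */
static unsigned int l2_slot(uint64_t va)
{
    return (unsigned int)((va >> 30) & 0x1ff);
}
```

Plugging in the table's addresses gives slot 511 for the Xen/FDT/fixmap
area, 200 for the start of the direct map, and 196 for the frametable.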
>>>
>>>>
>>>> Also assuming a 32-byte struct page_info (I don't think you'll get away
>>>> with less than that, when even Arm32 requires this much), there's a
>>>> mismatch between direct map and frame table size: With 4k pages, the
>>>> scaling factor would be 128 if I'm not mistaken. So perhaps you really
>>>> mean 3Gb here to cover for (slightly more than) the 331Gb of memory you
>>>> mean to be able to map?
>>> For RV64 the page_info size will be 56 bytes (and 32 bytes for RV32), but
>>> you are right, it should be 3 GB in that case, which will be enough
>>> (taking into account both available sizes of the page_info structure).
>>
>> As to the plan for it to be 56 bytes (i.e. like on Arm): Arm forever has
>> had a 64-bit padding field at the end. My best guess is that the field
>> was introduced to have a 32-byte struct on Arm32.
> 
> I can't exactly remember. But I would like to rework the struct 
> page_info on Arm64 because...
> 
>> But then why
>> artificially increase the struct from 48 to 56 bytes on Arm64? And hence
>> why have the same oddity on RV64?
> 
> 
> ... with 56 bytes, some struct page_info may cross a cache line boundary.

I guess that's going to be challenging, unless you mean to go further up
to 64 bytes?

> For RISC-V, I would recommend making sure the struct page_info never
> crosses a cache line boundary.

Since going up to 64 bytes is wasteful, and going down to 32 bytes likely
isn't going to be easy, sticking to 48 bytes for now would seem reasonable
to me.
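
For reference, the scaling arithmetic behind this, as a quick sketch
(assuming 4k pages; the sizes are the ones discussed above):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define GiB (1ULL << 30)

/* Frame table bytes needed to cover 'ram' bytes of direct map, with one
 * struct page_info of 'pginfo' bytes per 4k page. */
static uint64_t frametable_bytes(uint64_t ram, uint64_t pginfo)
{
    return ram / PAGE_SIZE * pginfo;
}
```

With a 32-byte struct the scaling factor is 4096/32 = 128, so 331 GiB of
direct map needs roughly 2.6 GiB of frame table (a 3 GiB reservation
suffices), while 48- or 56-byte structs already push past 3 GiB.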

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 11:02:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 11:02:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525292.816403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqtxj-0002E7-GJ; Mon, 24 Apr 2023 11:02:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525292.816403; Mon, 24 Apr 2023 11:02:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqtxj-0002Dy-DT; Mon, 24 Apr 2023 11:02:27 +0000
Received: by outflank-mailman (input) for mailman id 525292;
 Mon, 24 Apr 2023 11:02:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qjkp=AP=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pqtxi-0002Ds-3W
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 11:02:26 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20613.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::613])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7d428243-e28f-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 13:02:23 +0200 (CEST)
Received: from DUZPR01CA0097.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:4bb::10) by AS2PR08MB10296.eurprd08.prod.outlook.com
 (2603:10a6:20b:648::5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 11:02:21 +0000
Received: from DBAEUR03FT048.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:4bb:cafe::59) by DUZPR01CA0097.outlook.office365.com
 (2603:10a6:10:4bb::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 11:02:21 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT048.mail.protection.outlook.com (100.127.142.200) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 11:02:21 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Mon, 24 Apr 2023 11:02:20 +0000
Received: from 15fe30c1be06.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 12F808F9-0FC5-4426-A8DA-9FBCD0EE061B.1; 
 Mon, 24 Apr 2023 11:02:15 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 15fe30c1be06.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 24 Apr 2023 11:02:15 +0000
Received: from GV2PR08MB8001.eurprd08.prod.outlook.com (2603:10a6:150:a9::12)
 by PAVPR08MB9859.eurprd08.prod.outlook.com (2603:10a6:102:30f::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.32; Mon, 24 Apr
 2023 11:02:12 +0000
Received: from GV2PR08MB8001.eurprd08.prod.outlook.com
 ([fe80::b95e:f68f:2747:5b86]) by GV2PR08MB8001.eurprd08.prod.outlook.com
 ([fe80::b95e:f68f:2747:5b86%6]) with mapi id 15.20.6319.032; Mon, 24 Apr 2023
 11:02:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d428243-e28f-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v3 03/17] xen/arm: implement node distance helpers for Arm
Thread-Topic: [PATCH v3 03/17] xen/arm: implement node distance helpers for
 Arm
Thread-Index:
 AQHZc3rmCrt50C+BaEmmIgaK9uo9ea80IxGAgAFPzpCAABDqgIAC3lpQgAG3XoCAADcqoA==
Date: Mon, 24 Apr 2023 11:02:12 +0000
Message-ID:
 <GV2PR08MB8001D4A1EA5B854CA065B34992679@GV2PR08MB8001.eurprd08.prod.outlook.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-4-Henry.Wang@arm.com>
 <ac54e04c-58b7-d0c9-2443-bb09258c8bc8@suse.com>
 <AS8PR08MB79912F294EDAC48F835FBB7A92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <bdf33169-4e29-8c50-ff76-16d05df81a14@suse.com>
 <AS8PR08MB7991576C75D0D4482595E7E292669@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <e06fc93f-293f-a873-c9b9-2d5c941168f9@suse.com>
In-Reply-To: <e06fc93f-293f-a873-c9b9-2d5c941168f9@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 4894548C962FCA4898B989D0104AC710.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	GV2PR08MB8001:EE_|PAVPR08MB9859:EE_|DBAEUR03FT048:EE_|AS2PR08MB10296:EE_
X-MS-Office365-Filtering-Correlation-Id: ecde4c3c-6966-4345-d622-08db44b36068
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:GV2PR08MB8001.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(39860400002)(376002)(366004)(346002)(451199021)(9686003)(53546011)(6506007)(26005)(38070700005)(55016003)(83380400001)(186003)(38100700002)(122000001)(66946007)(478600001)(76116006)(86362001)(6916009)(64756008)(66556008)(66476007)(66446008)(8936002)(8676002)(54906003)(52536014)(5660300002)(71200400001)(7696005)(41300700001)(2906002)(4326008)(33656002)(316002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9859
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b4d105e3-216d-4ee7-e9ea-08db44b35b1d
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(346002)(136003)(39860400002)(451199021)(40470700004)(46966006)(36840700001)(2906002)(7696005)(55016003)(9686003)(6506007)(26005)(53546011)(186003)(40480700001)(70586007)(70206006)(6862004)(8676002)(8936002)(316002)(41300700001)(4326008)(478600001)(5660300002)(52536014)(54906003)(82740400003)(356005)(81166007)(82310400005)(86362001)(40460700003)(33656002)(36860700001)(47076005)(336012)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 11:02:21.0218
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ecde4c3c-6966-4345-d622-08db44b36068
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB10296

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v3 03/17] xen/arm: implement node distance helpers for
> Arm
>
> On 23.04.2023 07:36, Henry Wang wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >>>> However, looking at the code below, don't you mean to have the array
> >>>> pre-set to all NUMA_NO_DISTANCE?
> >>>
> >>> ...I am a bit puzzled about why pre-setting the array to all
> >>> NUMA_NO_DISTANCE matters here, as I think the node distance map will
> >>> be populated when parsing the device tree anyway no matter what their
> >>> initial values.
> >>
> >> From this patch alone it doesn't become clear whether indeed all array
> >> slots (and not just ones for valid nodes) would be populated. I think
> >> the code in the patch here would better not make itself dependent on
> >> behavior of code added subsequently (which may change; recall that a
> >> series may be committed in pieces).
> >
> > Correct, I agree. I added a numa_init_distance() function (in patch #12) to
> > set all values to NUMA_NO_DISTANCE. The numa_init_distance() will be
> > called in the beginning of numa_init().
>
> Why do you need a function for this? As said, this array can be pre-set at
> compile time (unless I'm overlooking something).

Sorry I overlooked this comment, correct me if I am wrong, but IIUC we
can only pre-set the 2D array to 0 at the compile time. Could you please
elaborate a bit more about the code in your mind? Thanks!

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 11:08:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 11:08:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525305.816413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqu3C-0002ra-4A; Mon, 24 Apr 2023 11:08:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525305.816413; Mon, 24 Apr 2023 11:08:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqu3C-0002rD-0t; Mon, 24 Apr 2023 11:08:06 +0000
Received: by outflank-mailman (input) for mailman id 525305;
 Mon, 24 Apr 2023 11:08:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pqu3A-0002r7-UQ
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 11:08:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqu38-0000rf-De; Mon, 24 Apr 2023 11:08:02 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=[192.168.3.150]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pqu38-0005oM-6h; Mon, 24 Apr 2023 11:08:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=thDmyMjK9TuOKkvy1oSR0fJNU3F8rRvSYOHyO8k4/XA=; b=xUIzGhbPHC2jhPmAdmR3Oefk8j
	i95LNJbOM5fhzjqfXKyAjab4dCr6CGvHEhlps8qP8XXmlubuykMcvvNCNkBJkfY5H830BBn5AaFFU
	c1ed+NTJe9mEpMty6q42pUwnFdY8L52DAiEen13d1/JCF3lQNrYajNdLdmg6+hNdbwAs=;
Message-ID: <5176b0bc-3727-e939-9776-ee4bfd732e32@xen.org>
Date: Mon, 24 Apr 2023 12:08:00 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 1/4] xen/riscv: add VM space layout
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org,
 Oleksii <oleksii.kurochko@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
 <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
 <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
 <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
 <509ba3a2-0b85-d758-6915-7975d31a3437@suse.com>
 <db3a9b3b-63db-89d1-5386-57eb7044b317@xen.org>
 <d157b1e2-cfc5-f7b7-9443-16d1db9a4311@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d157b1e2-cfc5-f7b7-9443-16d1db9a4311@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Jan,

On 24/04/2023 11:45, Jan Beulich wrote:
> On 24.04.2023 12:22, Julien Grall wrote:
>> Hi,
>>
>> On 24/04/2023 10:33, Jan Beulich wrote:
>>> On 21.04.2023 16:41, Oleksii wrote:
>>>> On Thu, 2023-04-20 at 14:58 +0200, Jan Beulich wrote:
>>>>> On 19.04.2023 17:42, Oleksii Kurochko wrote:
>>>>>> + * ============================================================================
>>>>>> + *    Start addr    |   End addr        |  Size  | VM area description
>>>>>> + * ============================================================================
>>>>>> + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | Xen
>>>>>> + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | FDT
>>>>>> + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | Fixmap
>>>>>
>>>>> These are all L2 slot 511 aiui, which may be worth mentioning
>>>>> especially since
>>>>> the top bits don't match the top bits further down in the table
>>>>> (because of the
>>>>> aliasing).
>>>>
>>>> Then I'll add one more column where I'll put the slot number
>>>>
>>>>>
>>>>>> + *     .................. unused ..................
>>>>>
>>>>> This is covering slot 510, which again may be worth mentioning.
>>>>>
>>>>>> + * 0000003200000000 |  0000007f40000000 | 331 GB | Direct map (L2 slot: 200-509)
>>>>>> + * 0000003100000000 |  0000003140000000 |  1 GB  | Frametable (L2 slot: 196-197)
>>>>>
>>>>> 1Gb is, if I'm not mistaken, a single L2 slot.
>>>> Yeah, it can be misunderstood. I meant [196, 197), so 197 isn't
>>>> included. I'll update the table.
>>>>
>>>>>
>>>>> Also assuming a 32-byte struct page_info (I don't think you'll get
>>>>> away with
>>>>> less than that, when even Arm32 requires this much), there's a
>>>>> mismatch
>>>>> between direct map and frame table size: With 4k pages, the scaling
>>>>> factor
>>>>> would be 128 if I'm not mistaken. So perhaps you really mean 3Gb here
>>>>> to
>>>>> cover for (slightly more than) the 331Gb of memory you mean to be
>>>>> able to
>>>>> map?
>>>> For RV64 the page_info size will be 56 bytes, and 32 bytes for RV32, but
>>>> you are right, it should be 3 GB in that case, which will be enough (taking
>>>> into account both available sizes of the page_info structure).
>>>
>>> As to the plan to it being 56 bytes (i.e. like on Arm): Arm forever has
>>> had a 64-bit padding field at the end. My best guess is that the field
>>> was introduced to have a 32-byte struct on Arm32.
>>
>> I can't exactly remember. But I would like to rework the struct
>> page_info on Arm64 because...
>>
>> But then why
>>> artificially increase the struct from 48 to 56 bytes on Arm64? And hence
>>> why have the same oddity on RV64?
>>
>>
>> ... with 56 bytes, some struct page_info may cross a cache boundary.
> 
> I guess that's going to be challenging, unless you mean to go further up
> to 64 bytes?

Yes.

> 
>> For
>> RISC-V, I would recommend to make sure the struct page_info will never
>> cross a cache boundary.
> 
> Since going up to 64 bytes is wasteful, 

Well yes. But this is a trade-off between performance and memory usage. 
With the current situation, you may have to pull two cache lines for 
struct page_info.

I suspect you might see some slowdown when using grants. But I don't 
have any concrete numbers.

> and going down to 32 bytes likely
> isn't going to be easy, sticking to 48 bytes for now would seem reasonable
> to me.

It may be more difficult to argue for an increase (if we notice any 
performance degradation) in the future, because this would reduce the 
memory usable for every user.

Anyway, I haven't fully explored the problem on Arm yet, and it is 
possible we could deal with any performance degradation differently 
(e.g. re-ordering the fields and/or slightly increasing/decreasing the size).

I thought I would point it out just in case the RISC-V folks care about it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 11:18:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 11:18:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525310.816423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pquDG-0004RU-3H; Mon, 24 Apr 2023 11:18:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525310.816423; Mon, 24 Apr 2023 11:18:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pquDG-0004RN-0H; Mon, 24 Apr 2023 11:18:30 +0000
Received: by outflank-mailman (input) for mailman id 525310;
 Mon, 24 Apr 2023 11:18:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pquDF-0004RH-5d
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 11:18:29 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20608.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::608])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bbdf6fba-e291-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 13:18:27 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAWPR04MB10031.eurprd04.prod.outlook.com (2603:10a6:102:38b::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.32; Mon, 24 Apr
 2023 11:18:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 11:18:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bbdf6fba-e291-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dOzgqmt9zarBT1KyFH0rtEGb8k71B3sxGZ9bEJJRCHMSh+i0Vp+ExHpEdk2XzRy2ghAtGJguXklCjacDET5PJKBaOT5M3ps312abTITS7Bz0vGhR++xKeQHNKIrZNE9YJK5uYSvyQQ8LOpn6k38fOfetF4XvkEumhn7EH7OJdbtxgQiDzqqkN14+8HQGISoR5SBlqxsnWq8jt5RJugylPDLMttu+J6TKYetEO+D1YYZH39+gftJ7oGVYLTGr1qmOUsMCmSXvcPGvxDbLQNaVBbk+anmH4BIxyMmuQWEDX7adsXWMSXBXXUw3UrOhBiIZDTmX46YvT6spK4g3nBV6Dg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z9dKh8YrF6igLz27Jqrhx+ueGiA8Ku8+DNWjTADSGM0=;
 b=oBswKIVwggv1VdqlYLtj9AlBZDeh7hCjqKk2aFEQhcValC6yWJMym9RY2y2XG0Nxe92CNnzupr9V/rfQVuqyAbQ2wS7PK/07kV6wxBH5c09Pr3RBcxzQnrepQ2UPNXy14T083EOquP1KGujH7xdQusMrkPQDmoGRXuVIx32Mk449Mjcv1nWR/RDHGfdM3IollMDBKRf7LhjA8MiNJQRhjWOT9bvMDPItxytoATbfgSxqyx3tbQQ8urdhnsiy4pLl7JkSqN9yXw7C06wmaGxfaHl61MnngpdRTXnAINM6eHWEqOVd2uuCshgHlDGH2IfRbKWEBZA9s9wS8+vFqIR57g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z9dKh8YrF6igLz27Jqrhx+ueGiA8Ku8+DNWjTADSGM0=;
 b=wKm/OEtmnlolctOBi/Bq1foa73dJ1n2l8z6PVt5pibfcpmrSc/weBUvk6i9rKI793C8XAhHh8trpK1zIRmggFwVo8dB1zXCiQAjtzxge5oHETBpe6TVKrEDif+CF0DaUGn4b8fTUraQC9n74y1/iR1Sf4PJu6eMMlX/0arOCMgpz4+MOH/i+drJ+jedREocLZjjIaYPumifjl4g5eC8Ae2OZDa6AElJXatZi0Uuul7D+pLfuSe4uyTgxkrarwW2YS91i+4RW30q7uDLtJCG5DMRqn42gbaojqv6JQYyP06uyymnvxz9u3sn/0Qy3tMVJ5L92nNWAFuxC8EUmC1Q0cQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <63976a6a-f984-6c75-c3b7-34cd81eff33f@suse.com>
Date: Mon, 24 Apr 2023 13:18:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 03/17] xen/arm: implement node distance helpers for Arm
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-4-Henry.Wang@arm.com>
 <ac54e04c-58b7-d0c9-2443-bb09258c8bc8@suse.com>
 <AS8PR08MB79912F294EDAC48F835FBB7A92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <bdf33169-4e29-8c50-ff76-16d05df81a14@suse.com>
 <AS8PR08MB7991576C75D0D4482595E7E292669@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <e06fc93f-293f-a873-c9b9-2d5c941168f9@suse.com>
 <GV2PR08MB8001D4A1EA5B854CA065B34992679@GV2PR08MB8001.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <GV2PR08MB8001D4A1EA5B854CA065B34992679@GV2PR08MB8001.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0042.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAWPR04MB10031:EE_
X-MS-Office365-Filtering-Correlation-Id: a72c13a7-0a80-463e-a5b8-08db44b59e76
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(366004)(346002)(39860400002)(136003)(396003)(451199021)(54906003)(6486002)(2616005)(83380400001)(6506007)(53546011)(36756003)(6512007)(26005)(186003)(478600001)(8936002)(8676002)(41300700001)(6916009)(66946007)(66476007)(66556008)(4326008)(316002)(2906002)(86362001)(31696002)(31686004)(7416002)(5660300002)(38100700002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a72c13a7-0a80-463e-a5b8-08db44b59e76
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 11:18:24.3032
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Qmg4cfTyEEw3JXP5ing9sn8DuRzOPT/aevIPrO2Tg2Efn/aliEMjnrohZasAjCBrNYKD+aO9O7I0TgTHgHDLYQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR04MB10031

On 24.04.2023 13:02, Henry Wang wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>>
>> On 23.04.2023 07:36, Henry Wang wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>>> However, looking at the code below, don't you mean to have the array
>>>>>> pre-set to all NUMA_NO_DISTANCE?
>>>>>
>>>>> ...I am a bit puzzled about why pre-setting the array to all
>>>>> NUMA_NO_DISTANCE matters here, as I think the node distance map will
>>>>> be populated when parsing the device tree anyway no matter what their
>>>>> initial values.
>>>>
>>>> From this patch alone it doesn't become clear whether indeed all array
>>>> slots (and not just ones for valid nodes) would be populated. I think
>>>> the code in the patch here would better not make itself dependent on
>>>> behavior of code added subsequently (which may change; recall that a
>>>> series may be committed in pieces).
>>>
>>> Correct, I agree. I added a numa_init_distance() function (in patch #12) to
>>> set all values to NUMA_NO_DISTANCE. The numa_init_distance() will be
>>> called in the beginning of numa_init().
>>
>> Why do you need a function for this? As said, this array can be pre-set at
>> compile time (unless I'm overlooking something).
> 
> Sorry I overlooked this comment, correct me if I am wrong, but IIUC we
> can only pre-set the 2D array to 0 at the compile time. Could you please
> elaborate a bit more about the code in your mind? Thanks!

static unsigned char __ro_after_init
node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
    [0 ... MAX_NUMNODES - 1] = { [0 ... MAX_NUMNODES - 1] = NUMA_NO_DISTANCE }
};

or even (limiting redundancy a little)

static unsigned char __ro_after_init
node_distance_map[][MAX_NUMNODES] = {
    [0 ... MAX_NUMNODES - 1] = { [0 ... MAX_NUMNODES - 1] = NUMA_NO_DISTANCE }
};

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 11:28:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 11:28:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525317.816436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pquMf-00065Q-6O; Mon, 24 Apr 2023 11:28:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525317.816436; Mon, 24 Apr 2023 11:28:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pquMf-00065J-3O; Mon, 24 Apr 2023 11:28:13 +0000
Received: by outflank-mailman (input) for mailman id 525317;
 Mon, 24 Apr 2023 11:28:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qjkp=AP=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pquMd-000659-Ox
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 11:28:11 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on061e.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1659c9f2-e293-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 13:28:08 +0200 (CEST)
Received: from AS9PR01CA0006.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:540::9) by AM9PR08MB6177.eurprd08.prod.outlook.com
 (2603:10a6:20b:283::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 11:28:01 +0000
Received: from AM7EUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:540:cafe::82) by AS9PR01CA0006.outlook.office365.com
 (2603:10a6:20b:540::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 11:28:01 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT016.mail.protection.outlook.com (100.127.140.106) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 11:28:01 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Mon, 24 Apr 2023 11:28:00 +0000
Received: from d5ec309458ec.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 77C487B9-331A-47C6-852C-F304CB758D08.1; 
 Mon, 24 Apr 2023 11:27:51 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d5ec309458ec.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 24 Apr 2023 11:27:51 +0000
Received: from GV2PR08MB8001.eurprd08.prod.outlook.com (2603:10a6:150:a9::12)
 by GV2PR08MB8368.eurprd08.prod.outlook.com (2603:10a6:150:bd::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Mon, 24 Apr
 2023 11:27:48 +0000
Received: from GV2PR08MB8001.eurprd08.prod.outlook.com
 ([fe80::b95e:f68f:2747:5b86]) by GV2PR08MB8001.eurprd08.prod.outlook.com
 ([fe80::b95e:f68f:2747:5b86%6]) with mapi id 15.20.6319.032; Mon, 24 Apr 2023
 11:27:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1659c9f2-e293-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2HNfPZNksNZ3bEr9x5KOupK8IIm1iWP0K1b0Fs9XyX8=;
 b=woKwccxJbk5elUvqfpOlJjpYTAFuh8Efbd9ObDinrzmXyPdXsbFtfPMfEryJjVuC1eWDDTOzgudPU7vGxJ25Id0fjmIcKC9lF9htGA36iV2eK8xtR/Y7Z25L87KrGoJKg7+JI85U77yActx5ehK+ziz3rsvwwqB3F+nKg97mcq8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n6hI7WMrWYdH3gTN+iqXsHKEhRVlS5mI8xmfc0rw6/giUTt/NwIAB+jdp6D0EZBhuumXlwloB+1kZQROLRJDpCjm13JRarW7inhpNVXhms1xO+8ZQhBFKTf/yRd6xvkXKQWfSQ9E0txgOQywrNUN7rfEDWGjdrjeV/TYm/xXdjkx9IsVSxso5XfA0w9BHB4TMnZSeZ+BwDBacabfan77LJcquMduS2887iCbnJ79iW0btJx3+kjk1cKJV89ByyATNSqQCAvHaCPzQQmx8ll2X2olk1v/HOucVfKp4siXr8/gHvyaJHG9fGObCQf0pOqLNEFy3CsEzyl+CbZQbTj03A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2HNfPZNksNZ3bEr9x5KOupK8IIm1iWP0K1b0Fs9XyX8=;
 b=VaOkRLj9qPDgih8a4KNEwt4Psd7FNF1ugWSI13BGVXUMaAUnsELs9/SElN1R/qnn+mkSCYCSM0vSi7cS2Na/a1/GwNZHKRBOvCzYnXs4Op0s/4/mZ4g8offUFZaBzPSDxpoJ7WGQQwFHIBPFmF3gyfzeA8SXrVtxfm341vlhFaNtlphsAxwFAIyC2To5JAaCZPaNTBHgEFgJMMZ+HeKU9u1gJadeeYU9UHR6+1LdpWfntTtWIvCep/F7QKAf4OIS5MmR9/5aC5f8NRtG764OPKBK9LfkF2Z0qxF0ME2IL2cLiKVfxHRTkVfsyrAsldSfkegex0W8DzlEx0lVQE5dlg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2HNfPZNksNZ3bEr9x5KOupK8IIm1iWP0K1b0Fs9XyX8=;
 b=woKwccxJbk5elUvqfpOlJjpYTAFuh8Efbd9ObDinrzmXyPdXsbFtfPMfEryJjVuC1eWDDTOzgudPU7vGxJ25Id0fjmIcKC9lF9htGA36iV2eK8xtR/Y7Z25L87KrGoJKg7+JI85U77yActx5ehK+ziz3rsvwwqB3F+nKg97mcq8=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v3 03/17] xen/arm: implement node distance helpers for Arm
Thread-Topic: [PATCH v3 03/17] xen/arm: implement node distance helpers for
 Arm
Thread-Index:
 AQHZc3rmCrt50C+BaEmmIgaK9uo9ea80IxGAgAFPzpCAABDqgIAC3lpQgAG3XoCAADcqoIAABVQAgAABqiA=
Date: Mon, 24 Apr 2023 11:27:48 +0000
Message-ID:
 <GV2PR08MB800157DF99EDBC2447D50EF292679@GV2PR08MB8001.eurprd08.prod.outlook.com>
References: <20230420112521.3272732-1-Henry.Wang@arm.com>
 <20230420112521.3272732-4-Henry.Wang@arm.com>
 <ac54e04c-58b7-d0c9-2443-bb09258c8bc8@suse.com>
 <AS8PR08MB79912F294EDAC48F835FBB7A92609@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <bdf33169-4e29-8c50-ff76-16d05df81a14@suse.com>
 <AS8PR08MB7991576C75D0D4482595E7E292669@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <e06fc93f-293f-a873-c9b9-2d5c941168f9@suse.com>
 <GV2PR08MB8001D4A1EA5B854CA065B34992679@GV2PR08MB8001.eurprd08.prod.outlook.com>
 <63976a6a-f984-6c75-c3b7-34cd81eff33f@suse.com>
In-Reply-To: <63976a6a-f984-6c75-c3b7-34cd81eff33f@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: A1D4F59BF895754CA9E2A022AF22F642.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	GV2PR08MB8001:EE_|GV2PR08MB8368:EE_|AM7EUR03FT016:EE_|AM9PR08MB6177:EE_
X-MS-Office365-Filtering-Correlation-Id: 3ed83273-8952-4da5-311a-08db44b6f673
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 +MidMhPO2ugkkLZdQPGYwu4W0J9goytZPtTwWqijlqC+ey5Cy4c8S/tjoyjBhScqbciLcRA78jDVG7zLuu+FdXkOINA91OfZlwbdwOXEcOsuIVMwwp3i/dgrKTfBLTC9pW87/nUxR014wAKYyxV+VF2XSj1PxxGLf3k5klbiSWQlSrVLrCOVgufMBLdprDPGzZFmFY10IgaLAwOU99dxjKwcggQxyOKXc6zKDDtbwWnXsT1DTB0dgO9cxksHIT50+TdaRpGxeu7tpnW329472HzFDl8g3b8wLATDag6Pv1/ZYxFPsZ7tkh3DvUOuAYGfuJAcT6HoAt/gTpK7OMq5GoJ+Ql6MHlibBAuTlxRAQbIfMklN6zmkL/o+P2HXAEstJ/eBlt4u/6qOO8+dTOW97EEeeXxQvid/L0KkVzU+bf134iEkxLPn0Jg4I3yIG1ccwn91QxhtCKqu2/WqKAYpKXshPVhQoXEUL47Wshl+w+fMmHjFU0Wbg3x9PAGz8scPq189k6A7DK4iQfTkczINKwVFTZecskE+vVxiutv7m5iQguLMCybcdtClkaF/swVwF2dbLLrQp8b7IHFUNcOH5dSDREdl1mpAZ8UxOP94YUs53vmpxoy6W1VvPemjBx8i
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:GV2PR08MB8001.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(346002)(39860400002)(136003)(366004)(451199021)(54906003)(71200400001)(316002)(66476007)(66556008)(66946007)(66446008)(76116006)(64756008)(6916009)(4326008)(86362001)(478600001)(55016003)(7696005)(8936002)(8676002)(38100700002)(52536014)(122000001)(5660300002)(83380400001)(6506007)(9686003)(2906002)(26005)(41300700001)(38070700005)(186003)(33656002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8368
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7d108584-7a59-45bf-50f7-08db44b6ee9f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	N/TjvsqIoAM9hw0RNwwNWGZi4uBcjkP1y5TgXB50QdwK4Y9opX6W8pq6UZXqytgY3TN2pFZiQP8BRBvMQKGdQx90qRaVtabnLru2aC+waHg4ZwOS/FLxPQa5nurSVr00S5BFb9vQq129r/aHmMQS8EDDNxAKhqjE9PA90wZggk0DXWvMubYdkEzTgaY2RjKgQYNK89MHS+M+rGa+jUyYKM3tSdQvIFX5SIWUBkWbs3u/gPY/z3wZB+2KIsaVBxPfQ3Wa7OrSx7VfGGGFKfRq3OTLiyj45ux4CMJ48dgyhgJEzV0hwlDWNbG3Yg37czCEu5XnNFzsqLUqo7HW9CpoOuh5Dqu/bUyFP+O6U8SZ6CzJmnYFUoD4RuMRpocUgo3Hm+N6puVwE4NS4cVcNKeEQlI6xM2b/6nAJ6+pqJL0cLtb4Qk/CjnOnCMRczk41I3sS7DlFjasy5cGRmFfS7mpXhrb/igPAGIYH/inxEf2qZjcG7dqlHIlhB/94y8EL770z4NQM8cQTt7XjhQj3m8PEi5o/MU5IBqau8coiw2Mwd6brhMN8tk7zzhapDh4y0/XvQPBUQRc63Z1ozzzZqeqdI5ln9oY4KlSFP7Bg8Q5S9txxTP1xfSpQhE6WzDdQzlbR2DssAYQXUFbz8Tj6i1/gcEO4Ivdt0HmgiJcob+Jfeu1GaH2k8D7U2OSEzW/oxAzsfzOb8hCZlYiSN38WBLReg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(39860400002)(136003)(396003)(451199021)(36840700001)(46966006)(40470700004)(478600001)(40460700003)(54906003)(86362001)(186003)(7696005)(26005)(82310400005)(9686003)(6506007)(55016003)(40480700001)(33656002)(4326008)(316002)(82740400003)(83380400001)(70206006)(70586007)(36860700001)(2906002)(336012)(356005)(41300700001)(81166007)(8676002)(6862004)(8936002)(5660300002)(47076005)(52536014);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 11:28:01.1913
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ed83273-8952-4da5-311a-08db44b6f673
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6177

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v3 03/17] xen/arm: implement node distance helpers for
> Arm
> >>> Correct, I agree. I added a numa_init_distance() function (in patch #12) to
> >>> set all values to NUMA_NO_DISTANCE. The numa_init_distance() will be
> >>> called in the beginning of numa_init().
> >>
> >> Why do you need a function for this? As said, this array can be pre-set at
> >> compile time (unless I'm overlooking something).
> >
> > Sorry I overlooked this comment, correct me if I am wrong, but IIUC we
> > can only pre-set the 2D array to 0 at the compile time. Could you please
> > elaborate a bit more about the code in your mind? Thanks!
> 
> static unsigned char __ro_after_init
> node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
>     [0 ... MAX_NUMNODES - 1] = { [0 ... MAX_NUMNODES - 1] =
> NUMA_NO_DISTANCE }
> };
> 
> or even (limiting redundancy a little)
> 
> static unsigned char __ro_after_init
> node_distance_map[][MAX_NUMNODES] = {
>     [0 ... MAX_NUMNODES - 1] = { [0 ... MAX_NUMNODES - 1] =
> NUMA_NO_DISTANCE }
> };

Yeah, you are correct. I made a mistake by missing the
"[0 ... MAX_NUMNODES - 1]" on the left side of "=", hence my compiler
complained... Thanks for your patience.

Kind regards,
Henry

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 11:34:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 11:34:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525321.816446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pquSs-0007a0-RS; Mon, 24 Apr 2023 11:34:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525321.816446; Mon, 24 Apr 2023 11:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pquSs-0007Zt-Nw; Mon, 24 Apr 2023 11:34:38 +0000
Received: by outflank-mailman (input) for mailman id 525321;
 Mon, 24 Apr 2023 11:34:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pquSq-0007Zn-Lw
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 11:34:36 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20631.outbound.protection.outlook.com
 [2a01:111:f400:fe12::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fc7add45-e293-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 13:34:34 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9238.eurprd04.prod.outlook.com (2603:10a6:20b:4c6::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 11:34:32 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 11:34:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc7add45-e293-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gWS/1nA0G7bkT+2ZrpzwPIXAKOc5axxRl6seWGVyn4rOoJ/O7Tf9vz5ExSXIcahlEqMaNsFor5sW9nPrbBW1mLc+KtZAvnEYDJ5lYbV3R8JBVAgncGYXc+JelkTQB59aX16AHPO5LNA9zuiBLaZ+bH+5ATs5xyCGq2wOug6CFOqIOT3wbduznMC67aZBqgD2gCANT2iafypAIB1AslkJeLRjtMmZo4j5dKVRDJ8Bs4ahAXhHy8mjQTWe4MDaiQScrzDW+hj+zMY9f0bhV1/Mz7IDsrUymxTxwyalSGI3Y778TvQ2yOm7J20e0Nr8jO7I4QTRhumhqu+/GFSdEHbGcA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=WIkGlrGLB04uWnXxS5IY80jvUApWbQ4Ic4kmXGX/Bvg=;
 b=OCy1nTNuKlI2JHnk+eoOSP28hzyoD2imySUD3jzYRQd6RKhPMFjeEunu5oG5D7/adPnQONx27WaeWPI4JwCzsd4Z2J4fKt6qApaArEFQ7NJ8D3C/q1ZgDskKbH7GwV4DaH/ixt1tJUEJs40NlEi2KPPKf6bQRUpmJPL0GaSgMtFiPP/FZRd6VRZhIDTNAGdd8oSBeivtsqnviv52FbeEcck7R5tVaqml6jwunErCCtPx1mmOP9bDTtGblrG/H4QZ93sdRv+grMO2qnqCfkOQ0yA3f6dbzYn6Heb+raUiGVuaECO7gDLo+xFjtlct+aZY5XGCUaf6yRikjd9dSryHwg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WIkGlrGLB04uWnXxS5IY80jvUApWbQ4Ic4kmXGX/Bvg=;
 b=0s2lmCfSuvA57UZTqUttFgoeo9aW/AfAfq+I5yNAkLJH6sUcuL94s+UXadx27if4iAHr7zfwVhy03Vz7ZJ6cgFreP1KmrnaEOv75UT684x6thcY4mFjOGlVl5sLAlEEENyVHzI1RjO0T5iSKx3TqSaqpWYE2iZU2y9b0geTi+fM+ty44v+zd4/Gd2cRT0sehXfgAbCyYsXiBSHP+C03rZh3hOKmEpqtdHoHQGQNyRcnKx035Eo/dC7DP0IRXMAyUPwX32iSouxXKbNyLwVa9dH6q+manuE5YT1O+3R0RFNA/jcQNOZV+LqpRVhTt69PIWctu0D1QthPxL/zRLijs/Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
Date: Mon, 24 Apr 2023 13:34:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230424060248.1488859-8-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0152.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9238:EE_
X-MS-Office365-Filtering-Correlation-Id: 1007c15b-6f4a-45a2-77f0-08db44b7df1c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	76FRF24syfw+64KfvgA6G3aD65ych9IKSVkxhsYDdr1ztHLdSqim1PoyknMrDtk8uPFmV2/00E9W5a74iVEQbn/H20PFtBTG7d9bwdJZMCCF4BOioZ8fzFrhdQFEmIRpS0eLQmBSxEZ9GbUPdmdYOUwQuMm7/W3X3WljqBPUqjuebjT7iNj9WAnLEjAXH+MM7YoZSoTlQr7iGqZUfbn3PSmHGyw7P4ys4cAiXyjNFSWr6P4o+BaW1sRE+9gNfP0hjFubgUcxcMFaVBkWjDuvk7QzvJkQIEScGL+H19p/WCTNxHxJTlvZJlzSoOikKaU8y5Sj/UFFw3dLysi9rlcOJOS2SFJ7HKMaQe/FwgIlCtO9dEYAyoZC6HDFkr/OC5B11yQUhEaKar/a85+d/6mb05P957BQvnkQgBdkWQ0vAeQh2vfVZ3JRf7D4Ewxb1I4Ee5WDICQUYTrSmv3L3LX0l1kFBbP8RoLrkXrfrjscO6Vp7sBxx1J3CMJcX8R+do6vfgOiXeXa+T3huvVyQGNXNygrdIcUhDd0Z0EGGtwZKWywKTAfoo2dA6LNmYjMnFE1qIF5fscQj2G+pK1A4fzO/4a9+V4NcppIf7sI0BKFh2WtP1asgqx44zu/Wr5a591LhUccJq9WhcCVuaFwRDhkNA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(136003)(376002)(346002)(366004)(396003)(451199021)(36756003)(54906003)(478600001)(316002)(4326008)(6916009)(66946007)(66476007)(66556008)(7416002)(41300700001)(2906002)(8936002)(8676002)(5660300002)(38100700002)(186003)(2616005)(86362001)(26005)(31686004)(6506007)(6512007)(53546011)(31696002)(6486002)(6666004)(83380400001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?LzlGRk1lUmJqbzNEOGFDV3RLVFhyZWNoSkxkeU15MnIrMG82OHAyMWxiNXZS?=
 =?utf-8?B?TEh3cGNFVEJOUk40SHkvdjZlUjJnbmZWY1laMFFaMUJ5Wm1TWnVLOW1pYnNF?=
 =?utf-8?B?MmJjWEhrR1RCVmczd01nT1VIY3J0NWNLQWd0WFFhdmkwSjdzM01VSVJ0YWU5?=
 =?utf-8?B?SnBrWjlsOWJwMEcwbzhQdFdJdCtYSEI4M3lYREgvTitYaUgvaTVTMllKTndi?=
 =?utf-8?B?RWRiRFF6MjVMd3g5WnNTamF0cCtjRGxlWDRkUGxZMkdjOUZpVE43Qnd2eTlT?=
 =?utf-8?B?N3ZqaWhiUlNkbE03WUJKbk8xZ05tcFV4YlJGQWs5dk1VeXd5aHVnYzVGQUpz?=
 =?utf-8?B?TUZFdWdyYkNDamYxMzRzUHZLSC8wNTYyd2Vnd2xLZWIwOWdXZ09lQnFVa2xm?=
 =?utf-8?B?Z2Z6RjVHVG9WN2tBQ2ZWVWVqUjc5RVNqMVhnZjBFYzljU3luVHppQ1loOE40?=
 =?utf-8?B?N1IxVE9FcnZSUGF0RlYwa3JrN1MxOGgvcGRkZXpUN3BrNElxZzdFRTNXZG83?=
 =?utf-8?B?M2ZQaVIwanRFWUoxc3hyMUdNVHF5aW5TWW5MM0U1ZVc3VnI4b1c0eUpBQzVu?=
 =?utf-8?B?ODgyL0FZdmNDK1pnRlBZb0t4b2RqeFlMVjMreEhWdUtpdDh0STlYT2xDMEc5?=
 =?utf-8?B?UVQzcjR1eWJRVHphNXUzN0hxNndRMzlYOStIQ0tHK3YvL2FBTUlKSXAxSmJv?=
 =?utf-8?B?Q2dacHU1N0RLTzZJbnUvY29EY3VCRHpmVER5OVBoaktkOHVONEpUVVNRalFZ?=
 =?utf-8?B?eHoxcmdGbDRwQ1BzMHJZNWZtMVloY1dnRHBlMG9HL2luaEdnUWRyaFZCK0t4?=
 =?utf-8?B?NWF3cjU2enM1OW9Ucm1FR21TTXluUnlKWFJJN0xHb2FNLzlTZkpJYWRWYXFm?=
 =?utf-8?B?MVNiVjNaalBoZkJkZy9FNTl4NzdZU2JROUIveGpEWTlLMko5TElvN1BrMUx2?=
 =?utf-8?B?OGIyS2wyQ29uc01MTmNSMG94a1VvWlJGOE8vVmM2RjFkVExmTHB3alc3Vmha?=
 =?utf-8?B?b01qOVQ1em9WeXYwR0pNTGhRVGNOTkxnRXpzNzl3NmlRcnYxMzFBSHI5d0dX?=
 =?utf-8?B?bWE0enlxSmYxem5XNU1sUk9uR1ZkNEJEakswRlF1Y3U5N1RQUnBNWEZqT2sy?=
 =?utf-8?B?bWhYWW9ORHRzZnpSWTNXK2FFYnJEb2QwZHZpZ0taTlFxcUJKc2lHNEdrOTVJ?=
 =?utf-8?B?M2VDVTk2YVFud0pZaXNlRHdqSW5HTmdoV1h4Vng2VUV1bjdRZDFUUFNSVnpt?=
 =?utf-8?B?L1hUL2NXSHQ1UEZjUnR3bGFKTGtIYjZOcGFZNFlJeGVUcWRPM3FpbXNERGRT?=
 =?utf-8?B?V0ZXZzM0Z0Q5aFpqTHhsdDU5OXg4dDhFQU9kL2w4S2JwSURzV2NMRHdpMlAr?=
 =?utf-8?B?QmtzMmhGejByWTFncmJRbUljL0pkc2p4bE5sR0Y1MjdjQXVJVU8wVEdOZ3c0?=
 =?utf-8?B?amlvYzd0VW1BaXlrZDVLeGdDYnJha29rcjZDYUtyT0xMZlordlprQVB5Z0Mz?=
 =?utf-8?B?bFlIbk83RGpIVlZwSSttTHdBRFZqTVo2MTVCdnROcC9tdmdKenBxS0dOUzNV?=
 =?utf-8?B?UHVxK2VzbWJLbmtqWEtkVWZEWVNwVXRQekl1aG14dEYrcjM2bnpzWjhDalVm?=
 =?utf-8?B?TEpZdjJQb1JZWCswY2FXREpiN1FpaGh3dU5DMytlV0o3aE5tL1JhaXBJY1Zh?=
 =?utf-8?B?UnFubDhqMnA3eFd3T2g5Q2JNSlVGMXJCT0hKTjRsOEpCNGovWUVIQWhDM1Vt?=
 =?utf-8?B?TTJhcnYvcDF0M2YxWGhhNVR3OHBoYzlseVdNb3pOanlTUSttSHgwUXJxNW5I?=
 =?utf-8?B?YThtaUZEbndhc012UVI1YnFSQ1ZpRHhta3NkejI3QlJMZVkyNno4aXNLa2J4?=
 =?utf-8?B?V3p4VnhBTHoyUUU5UitscXRHQitZL3kyS2RBMUpMdWZHRC82SkM1bU5yZjY0?=
 =?utf-8?B?amJJeVFIdEpWR2JoeEZ5ak5WU0xkc1kySnpIYVovS3pTM0JUcDErdFN5RXAz?=
 =?utf-8?B?c0V0MHdiVFpFL1ZxSk9EK1gxakJkRFBtMXF3aDRURHdudnVUU1BsR0FtTGJi?=
 =?utf-8?B?c0RkRHVMWkJtOW83QjZJYjU3M0tnZXo5WHViMXpjbkJrdE1xeVEvOGpwckdD?=
 =?utf-8?Q?IPi9R529vMy1Le4OID/Obc6zb?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1007c15b-6f4a-45a2-77f0-08db44b7df1c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 11:34:31.7553
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ex1LhVyj9k3opO1bXAxayrXN0dQkl0b7TwB3VTcUkpaAOjwd04AZBPZkBjQpD2121i2In7ijpOBmzqN5vIDvLw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9238

On 24.04.2023 08:02, Luca Fancellu wrote:
> @@ -30,9 +37,11 @@ int sve_context_init(struct vcpu *v);
>  void sve_context_free(struct vcpu *v);
>  void sve_save_state(struct vcpu *v);
>  void sve_restore_state(struct vcpu *v);
> +bool sve_domctl_vl_param(int val, unsigned int *out);
>  
>  #else /* !CONFIG_ARM64_SVE */
>  
> +#define opt_dom0_sve     (0)
>  #define is_sve_domain(d) (0)
>  
>  static inline register_t compute_max_zcr(void)
> @@ -59,6 +68,11 @@ static inline void sve_context_free(struct vcpu *v) {}
>  static inline void sve_save_state(struct vcpu *v) {}
>  static inline void sve_restore_state(struct vcpu *v) {}
>  
> +static inline bool sve_domctl_vl_param(int val, unsigned int *out)
> +{
> +    return false;
> +}

Once again I don't see the need for this stub: opt_dom0_sve is #define-d
to plain zero when !ARM64_SVE, so the only call site merely requires a
visible declaration, and DCE will take care of eliminating the actual call.

> --- a/xen/common/kernel.c
> +++ b/xen/common/kernel.c
> @@ -314,6 +314,31 @@ int parse_boolean(const char *name, const char *s, const char *e)
>      return -1;
>  }
>  
> +int __init parse_signed_integer(const char *name, const char *s, const char *e,
> +                                long long *val)
> +{
> +    size_t slen, nlen;
> +    const char *str;
> +    long long pval;
> +
> +    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);

As per this "e" may come in as NULL, meaning that ...

> +    nlen = strlen(name);
> +
> +    /* Check that this is the name we're looking for and a value was provided */
> +    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
> +        return -1;
> +
> +    pval = simple_strtoll(&s[nlen + 1], &str, 0);
> +
> +    /* Number not recognised */
> +    if ( str != e )
> +        return -2;

... this is always going to lead to failure in that case. (I guess I could
have spotted this earlier, sorry.)

As a nit, I'd also appreciate if style here (parenthesization in particular)
could match that of parse_boolean(), which doesn't put parentheses around
the operands of comparison operators (a few lines up from here). With the
other function in mind, I'm then not going to pick on the seemingly
redundant (with the subsequent strncmp()) "slen <= nlen", which has an
equivalent there as well.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 11:59:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 11:59:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525326.816456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pquqM-0001mF-PA; Mon, 24 Apr 2023 11:58:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525326.816456; Mon, 24 Apr 2023 11:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pquqM-0001m8-L3; Mon, 24 Apr 2023 11:58:54 +0000
Received: by outflank-mailman (input) for mailman id 525326;
 Mon, 24 Apr 2023 11:58:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=azYu=AP=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1pquqM-0001m2-1w
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 11:58:54 +0000
Received: from mail-pg1-x532.google.com (mail-pg1-x532.google.com
 [2607:f8b0:4864:20::532])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5ffed148-e297-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 13:58:50 +0200 (CEST)
Received: by mail-pg1-x532.google.com with SMTP id
 41be03b00d2f7-51b6d0b9430so3361423a12.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Apr 2023 04:58:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ffed148-e297-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682337529; x=1684929529;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=KbD1QziJYmCdG7Gp1WnaNfL53EuHpD5qn7OzxM32iAg=;
        b=IQUyCALtvByE9fS44ZHXx8F8RgPMzpRQKR1UOrfcgZH4Xn2mtnlqug0HthbvW2mvZJ
         Wgf9Odcl6vXYKfVLk7Hq1kGs1rEz3guOHjBXuiWV1AyUo7wtMqzrXHu7tV8QNMR0EOnm
         b94UNG9uUv7PPmRc30ZQSDl90kLhI0Ndv+kainf0NY/MyY2QsP9eO5XJYHEhNZFTiSJK
         P1+I2xtKvBPTpDz45Fxl1dzOeshAfQnqLUXmobJb3TAnR6QpuLxJJe3+5LClEyM7jaEf
         AMUSDodCcflaasVPPPFX0kkRMbYrRlVzX6Z4BsqldWcJnNfbLC8GFY2YRwwhhu9wxuV4
         nSow==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682337529; x=1684929529;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=KbD1QziJYmCdG7Gp1WnaNfL53EuHpD5qn7OzxM32iAg=;
        b=e86v7zlkwXIzucbQsxhSajB4QGkWkThzQKEka9bEdkLQtg5Ly2FXcilk8CmqnwFlDq
         gKeUqp3PGW3CHgLJ9jK2JVlrzc6oSQKQCfcI1i07z6k8mjWq/9Tb0nB5XCd6WOi2pBR1
         UKrvW4k62F+0aTE+A3jypoPB5KsnHLiQQHqiG8LAj/3Ieon/+jUh/QXs5okYSu8g1fkE
         oz7LyEZoPkEZuoxqFrJUXwv2TKUbpzE/fkDHBoLJzDk53ilZSld37i/iWRZ0F9hYT79K
         IlHwAY69pA/adY+sbQhI5gQmBfwf7RfnaYsv+P6IwY5TR8N24lwzjfGebeqePV4CKU9e
         vcKw==
X-Gm-Message-State: AAQBX9dBphnik2YQNPa5JRjLwJlgQyZp9PTpLuTyc4Ns3kGKJ4febIks
	oywmgnt3YPSwZ8fbFplDKrJEYJLa+2ZJJlAQTi0=
X-Google-Smtp-Source: AKy350bSYXclEYX8vXymn6EoUn0+E5A9piaUHJHhvyfA7UKWqiXZ4DvWqfhXm2IKOIR7ZfYTVCtc+139GKbCdL2drFE=
X-Received: by 2002:a17:90b:2248:b0:24b:3295:3e23 with SMTP id
 hk8-20020a17090b224800b0024b32953e23mr13819971pjb.19.1682337528855; Mon, 24
 Apr 2023 04:58:48 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <CA+SAi2s4WLiMEVa3u8rJRNZDpCpLTvnDygpObSUKxau-Q8dfyA@mail.gmail.com>
 <64326e46-096e-0f86-2aa9-31a72d3cd004@xen.org> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com> <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
 <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
 <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
 <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com>
 <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com> <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com>
In-Reply-To: <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Mon, 24 Apr 2023 15:03:14 +0300
Message-ID: <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Michal Orzel <michal.orzel@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>
Content-Type: multipart/alternative; boundary="000000000000ccbfa805fa13ba2e"

--000000000000ccbfa805fa13ba2e
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hello,

Thanks, guys.
I found out where the problem was.
Now Dom0 boots further, but I have hit a new problem:
a kernel panic during Dom0 loading.
Maybe someone is able to suggest something?

Regards,
O.

[    3.771362] sfp_register_bus: upstream ops attach
[    3.776119] sfp_register_bus: Bus registered
[    3.780459] sfp_register_socket: register sfp_bus succeeded
[    3.789399] of_cfs_init
[    3.789499] of_cfs_init: OK
[    3.791685] clk: Not disabling unused clocks
[   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
[   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
[   11.010393] Workqueue: events_unbound async_run_entry_fn
[   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[   11.010422] pc : simple_write_end+0xd0/0x130
[   11.010431] lr : generic_perform_write+0x118/0x1e0
[   11.010438] sp : ffffffc00809b910
[   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
[   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
[   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
[   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
[   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
[   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
[   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
[   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
[   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
[   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
[   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
[   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
[   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
[   11.010548] Workqueue: events_unbound async_run_entry_fn
[   11.010556] Call trace:
[   11.010558]  dump_backtrace+0x0/0x1c4
[   11.010567]  show_stack+0x18/0x2c
[   11.010574]  dump_stack_lvl+0x7c/0xa0
[   11.010583]  dump_stack+0x18/0x34
[   11.010588]  panic+0x14c/0x2f8
[   11.010597]  print_tainted+0x0/0xb0
[   11.010606]  arm64_serror_panic+0x6c/0x7c
[   11.010614]  do_serror+0x28/0x60
[   11.010621]  el1h_64_error_handler+0x30/0x50
[   11.010628]  el1h_64_error+0x78/0x7c
[   11.010633]  simple_write_end+0xd0/0x130
[   11.010639]  generic_perform_write+0x118/0x1e0
[   11.010644]  __generic_file_write_iter+0x138/0x1c4
[   11.010650]  generic_file_write_iter+0x78/0xd0
[   11.010656]  __kernel_write+0xfc/0x2ac
[   11.010665]  kernel_write+0x88/0x160
[   11.010673]  xwrite+0x44/0x94
[   11.010680]  do_copy+0xa8/0x104
[   11.010686]  write_buffer+0x38/0x58
[   11.010692]  flush_buffer+0x4c/0xbc
[   11.010698]  __gunzip+0x280/0x310
[   11.010704]  gunzip+0x1c/0x28
[   11.010709]  unpack_to_rootfs+0x170/0x2b0
[   11.010715]  do_populate_rootfs+0x80/0x164
[   11.010722]  async_run_entry_fn+0x48/0x164
[   11.010728]  process_one_work+0x1e4/0x3a0
[   11.010736]  worker_thread+0x7c/0x4c0
[   11.010743]  kthread+0x120/0x130
[   11.010750]  ret_from_fork+0x10/0x20
[   11.010757] SMP: stopping secondary CPUs
[   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
[   11.010788] PHYS_OFFSET: 0x0
[   11.010790] CPU features: 0x00000401,00000842
[   11.010795] Memory Limit: none
[   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---

On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:

> Hi Oleg,
>
> On 21/04/2023 14:49, Oleg Nikitenko wrote:
> >
> >
> >
> > Hello Michal,
> >
> > I was not able to enable earlyprintk in Xen for now.
> > I decided to choose another way.
> > This is Xen's command line, which I found out in full.
> >
> > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2
> dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
> Yes, adding a printk() in Xen was also a good idea.
>
> >
> > So you are absolutely right about a command line.
> > Now I am going to find out why xen did not have the correct parameters
> from the device tree.
> Maybe you will find this document helpful:
>
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
>
> ~Michal
>
> >
> > Regards,
> > Oleg
> >
> > On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
> >
> >
> >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
> >     >
> >     >
> >     >
> >     > Hello Michal,
> >     >
> >     > Yes, I use yocto.
> >     >
> >     > Yesterday all day long I tried to follow your suggestions.
> >     > I faced a problem.
> >     > Manually in the xen config build file I pasted the strings:
> >     In the .config file or in some Yocto file (listing additional
> Kconfig options) added to SRC_URI?
> >     You shouldn't really modify .config file but if you do, you should
> execute "make olddefconfig" afterwards.
> >
> >     >
> >     > CONFIG_EARLY_PRINTK
> >     > CONFIG_EARLY_PRINTK_ZYNQMP
> >     > CONFIG_EARLY_UART_CHOICE_CADENCE
> >     I hope you added =y to them.
> >
> >     Anyway, you have at least the following solutions:
> >     1) Run bitbake xen -c menuconfig to properly set early printk
> >     2) Find out how you enable other Kconfig options in your project
> (e.g. CONFIG_COLORING=y that is not enabled by default)
> >     3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
> >     CONFIG_EARLY_PRINTK_ZYNQMP=y
> >
> >     ~Michal
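[Editor's note] Michal's option 2 (setting extra Kconfig options from the Yocto side) could look like the sketch below. This assumes the Xen recipe merges `.cfg` Kconfig fragments the way linux-yocto does, and the layer name and paths are made up for illustration:

```shell
# Hypothetical layer layout -- the meta-mylayer path is illustrative, and
# this assumes the Xen recipe honours *.cfg fragments in SRC_URI.
mkdir -p meta-mylayer/recipes-extended/xen/files

# Kconfig fragment with the early-printk options from the thread.
cat > meta-mylayer/recipes-extended/xen/files/earlyprintk.cfg <<'EOF'
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
CONFIG_EARLY_UART_CHOICE_CADENCE=y
EOF

# bbappend that makes the fragment visible to the recipe.
cat > meta-mylayer/recipes-extended/xen/xen_%.bbappend <<'EOF'
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://earlyprintk.cfg"
EOF
```

If the recipe does not support fragments, option 3 (appending the lines directly to `xen/arch/arm/configs/arm64_defconfig`) avoids the question entirely.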
> >
> >     >
> >     > Host hangs in build time.
> >     > Maybe I did not set something in the config build file ?
> >     >
> >     > Regards,
> >     > Oleg
> >     >
> >     > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> >     >
> >     >     Thanks Michal,
> >     >
> >     >     You gave me an idea.
> >     >     I am going to try it today.
> >     >
> >     >     Regards,
> >     >     O.
> >     >
> >     >     On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> >     >
> >     >         Thanks Stefano.
> >     >
> >     >         I am going to do it today.
> >     >
> >     >         Regards,
> >     >         O.
> >     >
> >     >         On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >     >
> >     >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> >     >             > Hi Michal,
> >     >             >
> >     >             > I corrected xen's command line.
> >     >             > Now it is
> >     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0
> dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native
> sched=null
> >     >             > timer_slop=0 way_size=65536 xen_colors=0-3
> dom0_colors=4-7";
> >     >
> >     >             4 colors is way too many for xen, just do
> xen_colors=0-0. There is no
> >     >             advantage in using more than 1 color for Xen.
> >     >
> >     >             4 colors is too few for dom0, if you are giving 1600M
> of memory to Dom0.
> >     >             Each color is 256M. For 1600M you should give at least
> 7 colors. Try:
> >     >
> >     >             xen_colors=0-0 dom0_colors=1-8
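[Editor's note] Stefano's sizing rule above is a ceiling division: with 256M of dom0 memory per color (the figure quoted in the thread for this platform; it is platform-specific), 1600M needs at least 7 colors:

```shell
# colors needed = ceil(dom0_mem / mem_per_color), done with integer
# arithmetic. 256M per color is the figure quoted in the thread.
dom0_mem_mb=1600
mb_per_color=256
colors=$(( (dom0_mem_mb + mb_per_color - 1) / mb_per_color ))
echo "$colors"    # prints 7, hence e.g. dom0_colors=1-7 (1-8 for headroom)
```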
> >     >
> >     >
> >     >
> >     >             > Unfortunately the result was the same.
> >     >             >
> >     >             > (XEN)  - Dom0 mode: Relaxed
> >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> >     >             > (XEN) P2M: 3 levels with order-1 root, VTCR
> 0x0000000080023558
> >     >             > (XEN) Scheduling granularity: cpu, 1 CPU per
> sched-resource
> >     >             > (XEN) Coloring general information
> >     >             > (XEN) Way size: 64kB
> >     >             > (XEN) Max. number of colors available: 16
> >     >             > (XEN) Xen color(s): [ 0 ]
> >     >             > (XEN) alternatives: Patching with alt table
> 00000000002cc690 -> 00000000002ccc0c
> >     >             > (XEN) Color array allocation failed for dom0
> >     >             > (XEN)
> >     >             > (XEN) ****************************************
> >     >             > (XEN) Panic on CPU 0:
> >     >             > (XEN) Error creating domain 0
> >     >             > (XEN) ****************************************
> >     >             > (XEN)
> >     >             > (XEN) Reboot in five seconds...
> >     >             >
> >     >             > I am going to find out how command line arguments
> passed and parsed.
> >     >             >
> >     >             > Regards,
> >     >             > Oleg
> >     >             >
> >     >             > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> >     >             >       Hi Michal,
> >     >             >
> >     >             > You put my nose into the problem. Thank you.
> >     >             > I am going to use your point.
> >     >             > Let's see what happens.
> >     >             >
> >     >             > Regards,
> >     >             > Oleg
> >     >             >
> >     >             >
> >     >             > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
> >     >             >       Hi Oleg,
> >     >             >
> >     >             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
> >     >             >       >
> >     >             >       >
> >     >             >       >
> >     >             >       > Hello Stefano,
> >     >             >       >
> >     >             >       > Thanks for the clarification.
> >     >             >       > My company uses yocto for image generation.
> >     >             >       > What kind of information do you need to
> consult me in this case ?
> >     >             >       >
> >     >             >       > Maybe modules sizes/addresses which were
> mentioned by @Julien Grall <julien@xen.org> ?
> >     >             >
> >     >             >       Sorry for jumping into discussion, but FWICS
> the Xen command line you provided seems to be not the one
> >     >             >       Xen booted with. The error you are observing
> most likely is due to dom0 colors configuration not being
> >     >             >       specified (i.e. lack of dom0_colors=<>
> parameter). Although in the command line you provided, this parameter
> >     >             >       is set, I strongly doubt that this is the
> actual command line in use.
> >     >             >
> >     >             >       You wrote:
> >     >             >       xen,xen-bootargs = "console=dtuart
> dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0
> vwfi=native
> >     >             >       sched=null timer_slop=0 way_szize=65536
> xen_colors=0-3 dom0_colors=4-7";
> >     >             >
> >     >             >       but:
> >     >             >       1) way_szize has a typo
> >     >             >       2) you specified 4 colors (0-3) for Xen, but
> the boot log says that Xen has only one:
> >     >             >       (XEN) Xen color(s): [ 0 ]
> >     >             >
> >     >             >       This makes me believe that no colors
> configuration actually ended up in the command line that Xen booted with.
> >     >             >       Single color for Xen is a "default if not
> specified" and way size was probably calculated by asking HW.
> >     >             >
> >     >             >       So I would suggest to first cross-check the
> command line in use.
> >     >             >
> >     >             >       ~Michal
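[Editor's note] One way to do the cross-check Michal suggests is to capture the hypervisor console (e.g. `xl dmesg` from dom0, or a serial-port capture) and grep for the line where Xen echoes its bootargs. The `(XEN) Command line:` marker is an assumption based on stock Xen boot output, and the sample log below stands in for real output:

```shell
# Sample hypervisor console capture -- in practice this would come from
# 'xl dmesg' in dom0 or the serial log, not a heredoc.
cat > /tmp/xen-boot.log <<'EOF'
(XEN) Command line: console=dtuart dtuart=serial0 dom0_mem=1600M
EOF

# If the coloring parameters are missing here, they never reached Xen.
grep '(XEN) Command line:' /tmp/xen-boot.log
```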
> >     >             >
> >     >             >
> >     >             >       >
> >     >             >       > Regards,
> >     >             >       > Oleg
> >     >             >       >
> >     >             >       > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >     >             >       >
> >     >             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko
> wrote:
> >     >             >       >     > Hi Julien,
> >     >             >       >     >
> >     >             >       >     > >> This feature has not been merged in
> Xen upstream yet
> >     >             >       >     >
> >     >             >       >     > > would assume that upstream + the
> series on the ML [1] work
> >     >             >       >     >
> >     >             >       >     > Please clarify this point.
> >     >             >       >     > Because the two thoughts are
> controversial.
> >     >             >       >
> >     >             >       >     Hi Oleg,
> >     >             >       >
> >     >             >       >     As Julien wrote, there is nothing
> controversial. As you are aware,
> >     >             >       >     Xilinx maintains a separate Xen tree
> specific for Xilinx here:
> >     >             >       >     https://github.com/xilinx/xen
> >     >             >       >
> >     >             >       >     and the branch you are using
> (xlnx_rebase_4.16) comes from there.
> >     >             >       >
> >     >             >       >
> >     >             >       >     Instead, the upstream Xen tree lives
> here:
> >     >             >       >
> https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> >     >             >       >
> >     >             >       >
> >     >             >       >     The Cache Coloring feature that you are
> trying to configure is present
> >     >             >       >     in xlnx_rebase_4.16, but not yet present
> upstream (there is an
> >     >             >       >     outstanding patch series to add cache
> coloring to Xen upstream but it
> >     >             >       >     hasn't been merged yet.)
> >     >             >       >
> >     >             >       >
> >     >             >       >     Anyway, if you are using
> xlnx_rebase_4.16 it doesn't matter too much for
> >     >             >       >     you as you already have Cache Coloring
> as a feature there.
> >     >             >       >
> >     >             >       >
> >     >             >       >     I take it you are using ImageBuilder to
> generate the boot configuration? If
> >     >             >       >     so, please post the ImageBuilder config
> file that you are using.
> >     >             >       >
> >     >             >       >     But from the boot message, it looks like
> the colors configuration for
> >     >             >       >     Dom0 is incorrect.
> >     >             >       >
> >     >             >
> >     >             >
> >     >             >
> >     >
> >
>

=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0is set, I strongly doubt that this is the=
 actual command line in use.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0You wrote:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0xen,xen-bootargs =3D &quot;console=3Ddtua=
rt dtuart=3Dserial0 dom0_mem=3D1600M dom0_max_vcpus=3D2 dom0_vcpus_pin boot=
scrub=3D0 vwfi=3Dnative<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0sched=3Dnull timer_slop=3D0 way_szize=3D6=
5536 xen_colors=3D0-3 dom0_colors=3D4-7&quot;;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0but:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A01) way_szize has a typo<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A02) you specified 4 colors (0-3) for Xen, =
but the boot log says that Xen has only one:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0(XEN) Xen color(s): [ 0 ]<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0This makes me believe that no colors conf=
iguration actually end up in command line that Xen booted with.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0Single color for Xen is a &quot;default i=
f not specified&quot; and way size was probably calculated by asking HW.<br=
>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0So I would suggest to first cross-check t=
he command line in use.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0~Michal<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; Regards,<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; Oleg<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; =D0=B2=D1=82, 18 =D0=B0=D0=BF=D1=80.=
 2023=E2=80=AF=D0=B3. =D0=B2 20:44, Stefano Stabellini &lt;<a href=3D"mailt=
o:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a> &lt;=
mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabell=
ini@kernel.org</a>&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org"=
 target=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:=
sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt;&gt=
; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">ss=
tabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.or=
g" target=3D"_blank">sstabellini@kernel.org</a>&gt; &lt;mailto:<a href=3D"m=
ailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a> =
&lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">ssta=
bellini@kernel.org</a>&gt;&gt;&gt;&gt;:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0On Tue, 18 Apr 20=
23, Oleg Nikitenko wrote:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt; Hi Julien,<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt; &gt;&gt; Thi=
s feature has not been merged in Xen upstream yet<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt; &gt; would a=
ssume that upstream + the series on the ML [1] work<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt; Please clari=
fy this point.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt; Because the =
two thoughts are controversial.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0Hi Oleg,<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0As Julien wrote, =
there is nothing controversial. As you are aware,<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0Xilinx maintains =
a separate Xen tree specific for Xilinx here:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0<a href=3D"https:=
//github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://githu=
b.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"n=
oreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt;<a h=
ref=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">=
https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/=
xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>=
&gt;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" t=
arget=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://g=
ithub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.c=
om/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"=
noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=
=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">htt=
ps://github.com/xilinx/xen</a>&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0and the branch yo=
u are using (xlnx_rebase_4.16) comes from there.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0Instead, the upst=
ream Xen tree lives here:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0<a href=3D"https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" targe=
t=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &l=
t;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=
=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.g=
it;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dx=
en.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xe=
n.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xe=
n.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank"=
>https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.or=
g/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">htt=
ps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"htt=
ps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" ta=
rget=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>=
&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0The Cache Colorin=
g feature that you are trying to configure is present<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0in xlnx_rebase_4.=
16, but not yet present upstream (there is an<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0outstanding patch=
 series to add cache coloring to Xen upstream but it<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0hasn&#39;t been m=
erged yet.)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0Anyway, if you ar=
e using xlnx_rebase_4.16 it doesn&#39;t matter too much for<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0you as you alread=
y have Cache Coloring as a feature there.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0I take you are us=
ing ImageBuilder to generate the boot configuration? If<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0so, please post t=
he ImageBuilder config file that you are using.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0But from the boot=
 message, it looks like the colors configuration for<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0Dom0 is incorrect=
.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt; <br>
</blockquote></div>
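Michal's two observations (a misspelled option and an option silently ignored) can be caught mechanically. The sketch below scans the bootargs string from the thread for the coloring options; the expected spelling "way_size" is an assumption inferred from Michal's typo remark, not taken from Xen documentation.

```shell
# Check the bootargs string from the thread for the cache-coloring options.
# "way_size" (the presumed correct spelling of "way_szize") is an assumption.
bootargs='console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7'

for opt in way_size xen_colors dom0_colors; do
    case " $bootargs" in
        *" $opt="*) echo "$opt: present" ;;
        *)          echo "$opt: missing (check for typos)" ;;
    esac
done
```

Run against the string above, this flags way_size as missing, which is exactly the typo Michal spotted.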

--000000000000ccbfa805fa13ba2e--


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 12:01:49 2023
Message-ID: <45bdf36f-93c5-9f7c-a028-0a9443f85013@suse.com>
Date: Mon, 24 Apr 2023 14:01:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] xen: add support for crash dump analysis with xen.efi
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230421135933.23353-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230421135933.23353-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.04.2023 15:59, Juergen Gross wrote:
> Today it is not possible to analyse crash dumps of a system in
> hypervisor mode when it had been booted via EFI, as the crash utility
> doesn't understand the file format of xen.efi.
> 
> This can easily be solved by creating an ELF file from xen.efi via
> objcopy. Using that file as name list for crash enables the user to
> analyse the dump in hypervisor mode.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  xen/Kconfig.debug     | 5 ++++-
>  xen/Makefile          | 4 ++++
>  xen/arch/x86/Makefile | 3 +++
>  3 files changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/Kconfig.debug b/xen/Kconfig.debug
> index 94e818ee09..4aec0fd3aa 100644
> --- a/xen/Kconfig.debug
> +++ b/xen/Kconfig.debug
> @@ -138,6 +138,9 @@ config DEBUG_INFO
>  	  the EFI boot partition (look for "INSTALL_EFI_STRIP" in
>  	  docs/misc/efi.pandoc for more information - when not using
>  	  "make install-xen" for installing xen.efi, stripping needs to be
> -	  done outside the Xen build environment).
> +	  done outside the Xen build environment). Note that stripping
> +	  xen.efi using "INSTALL_EFI_STRIP" will disable the building of
> +	  xen.efi.elf, which is needed for "crash" dump analysis of systems
> +	  booted via EFI.

I'm afraid I don't understand this: INSTALL_EFI_STRIP only affects what
may (optionally) be placed on the EFI partition. The normally installed
xen.efi should be unaffected by this; an intermediate xen.efi.stripped
is created instead.

> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -505,6 +505,9 @@ _install: $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX)
>  		if [ -e $(TARGET).efi.map ]; then \
>  			$(INSTALL_DATA) $(TARGET).efi.map $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.map; \
>  		fi; \
> +		if [ -e $(TARGET).efi.elf ]; then \
> +			$(INSTALL_DATA) $(TARGET).efi.elf $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.elf; \
> +		fi; \

To avoid the risk of the two going out of sync (and also to limit
redundancy), could you wrap the earlier "if" in a "for x in map elf;
do ...; done" loop?
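
Folded together, the suggestion could look roughly like this (a standalone sketch, not the actual Makefile fragment: TARGET, DEBUG_DIR and the version suffix are placeholders, and plain cp stands in for $(INSTALL_DATA)):

```shell
# Sketch: install both optional debug artefacts with one loop instead of
# two copy-pasted "if" blocks. Paths here are stand-ins for the real rule.
TARGET=build/xen
DEBUG_DIR=dest/usr/lib/debug
mkdir -p build "$DEBUG_DIR"
touch "$TARGET.efi.map" "$TARGET.efi.elf"   # pretend the build produced both

for x in map elf; do
    if [ -e "$TARGET.efi.$x" ]; then
        cp "$TARGET.efi.$x" "$DEBUG_DIR/xen-VERSION.efi.$x"
    fi
done
```

Either suffix that is missing is simply skipped, matching the behaviour of the two separate "if" blocks.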

> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -224,6 +224,9 @@ endif
>  	$(MAKE) $(build)=$(@D) .$(@F).1r.o .$(@F).1s.o
>  	$(LD) $(call EFI_LDFLAGS,$(VIRT_BASE)) -T $(obj)/efi.lds -N $< \
>  	      $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
> +ifeq ($(CONFIG_DEBUG_INFO),y)
> +	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),,$(OBJCOPY) -O elf64-x86-64 $@ $@.elf)
> +endif
>  	$(NM) -pa --format=sysv $(@D)/$(@F) \
>  		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort >$(@D)/$(@F).map
>  	rm -f $(@D)/.$(@F).[0-9]* $(@D)/..$(@F).[0-9]*

Personally I think that - in case of build problems - having the map file
created is more important (and less likely to fail) than the ELF one, so
I'd prefer if the two steps could be ordered the other way around.

Further, I vaguely recall possible issues with entirely blank make rule
lines. Plus, having some trace of the command in a verbose log perhaps
also wouldn't hurt. IOW it may be better to add another use of the colon
command here (we already have some, at least through $(MKRELOC)):

ifeq ($(CONFIG_DEBUG_INFO),y)
	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:)$(OBJCOPY) -O elf64-x86-64 $@ $@.elf
endif

?
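
For illustration, the effect of that guard can be reproduced in a throwaway makefile (a sketch only; "echo converting-to-elf" stands in for the objcopy invocation, and demo.mk is a hypothetical file name):

```shell
# Demo of the colon-command idiom: when --strip-debug is in EFI_LDFLAGS,
# $(if $(filter ...),:) expands to ":" and the rest of the recipe line
# becomes arguments to the shell's no-op builtin, so nothing runs, yet the
# line still appears in a non-silent build log.
printf 'stripped:   EFI_LDFLAGS := --strip-debug\nunstripped: EFI_LDFLAGS :=\nstripped unstripped:\n\t$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:) echo converting-to-elf\n' > demo.mk
make -s -f demo.mk unstripped   # the command runs
make -s -f demo.mk stripped     # ":" swallows it; no output
```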

Finally - do you really need to copy all the non-debug sections as well?
Might --only-keep-debug be helpful here (provided it works for a PE/COFF
-> ELF copy operation; it looks to do so in my simple experiments)?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 12:08:50 2023
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nzCZ6OwnIpGRLsrvkEjJ8Q4GOcTCd5aFi9nRFFZQ/J8=;
 b=5ruGh9ZLDtPs01b54FgVh+uqePfcTns1Tq5rKMAAEV4X1V7sdYvQ24KMc/oymaItLQjGdeC3f1AvFg/NWJ+YEz63JRzylrGOP9ISyQOanyGS9jI/QW5lptnoFwmBvWulL3OzgYAUBRWMF95j2BPGCuKrSqiGNP0pME0/CRKQtc8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <ca8fd34c-3755-161d-38c1-651cf08ef589@amd.com>
Date: Mon, 24 Apr 2023 14:08:29 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 05/10] xen/arm: Introduce choice to enable 64/32 bit
 physical addressing
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-6-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230413173735.48387-6-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi Ayan,

On 13/04/2023 19:37, Ayan Kumar Halder wrote:
> 
> 
> Some Arm-based hardware platforms that do not support LPAE
> (e.g. Cortex-R52) use 32-bit physical addresses.
> Also, users may choose to use 32 bits to represent physical addresses
> as an optimization.
> 
> To support the above use cases, we have introduced arch-independent
> configs to choose whether a physical address is represented using
> 32 bits (PHYS_ADDR_T_32) or 64 bits (!PHYS_ADDR_T_32).
> For now, only ARM_32 provides support for enabling 32-bit physical
> addressing.
> 
> When PHYS_ADDR_T_32 is defined, PADDR_BITS is set to 32.
> When PHYS_ADDR_T_32 is not defined for ARM_32, PADDR_BITS is set to 40.
> When PHYS_ADDR_T_32 is not defined for ARM_64, PADDR_BITS is set to 48.
> The last two are the same as the configuration used on Xen today.
> 
> PADDR_BITS is also set to 48 when ARM_64 is defined, because the
> choice between ARM_PA_BITS_32/ARM_PA_BITS_40/ARM_PA_BITS_48 is
> currently only offered when ARM_32 is defined.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes from -
> v1 - 1. Extracted from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
> 
> v2 - 1. Introduced Kconfig choice. ARM_64 can select PHYS_ADDR_64 only whereas
> ARM_32 can select PHYS_ADDR_32 or PHYS_ADDR_64.
> 2. For CONFIG_ARM_PA_32, paddr_t is defined as 'unsigned long'.
> 
> v3 - 1. Allow user to define PADDR_BITS by selecting different config options
> ARM_PA_BITS_32, ARM_PA_BITS_40 and ARM_PA_BITS_48.
> 2. Add the choice under "Architecture Features".
> 
> v4 - 1. Removed PHYS_ADDR_T_64 as !PHYS_ADDR_T_32 means PHYS_ADDR_T_32.
> 
>  xen/arch/Kconfig                     |  3 +++
>  xen/arch/arm/Kconfig                 | 37 ++++++++++++++++++++++++++--
>  xen/arch/arm/include/asm/page-bits.h |  6 +----
>  xen/arch/arm/include/asm/types.h     |  6 +++++
>  xen/arch/arm/mm.c                    |  5 ++++
>  5 files changed, 50 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
> index 7028f7b74f..67ba38f32f 100644
> --- a/xen/arch/Kconfig
> +++ b/xen/arch/Kconfig
> @@ -1,6 +1,9 @@
>  config 64BIT
>         bool
> 
> +config PHYS_ADDR_T_32
> +       bool
> +
>  config NR_CPUS
>         int "Maximum number of CPUs"
>         range 1 4095
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 239d3aed3c..3f6e13e475 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -19,13 +19,46 @@ config ARM
>         select HAS_PMAP
>         select IOMMU_FORCE_PT_SHARE
> 
> +menu "Architecture Features"
> +
> +choice
> +       prompt "Physical address space size" if ARM_32
Why is it protected by ARM_32 but in the next line you add something protected by ARM_64?
This basically means that for arm64, ARM_PA_BITS_XXX is never defined.

> +       default ARM_PA_BITS_48 if ARM_64
> +       default ARM_PA_BITS_40 if ARM_32
> +       help
> +         Choose the width used to represent a physical address. Selecting
> +         a smaller width can sometimes help reduce the size of the image.
> +
> +config ARM_PA_BITS_32
> +       bool "32-bit"
> +       help
> +         On platforms where any physical address can be represented within
> +         32 bits, the user should choose this option. This helps reduce the
> +         size of the binary.
> +       select PHYS_ADDR_T_32
> +       depends on ARM_32
> +
> +config ARM_PA_BITS_40
> +       bool "40-bit"
> +       depends on ARM_32
> +
> +config ARM_PA_BITS_48
> +       bool "40-bit"
40-bit? I think this should be 48-bit.

> +       depends on ARM_48
What is ARM_48? Shouldn't it be ARM_64?
And if so, why bother defining it given everything here is protected by ARM_32.

> +endchoice
> +
> +config PADDR_BITS
> +       int
> +       default 32 if ARM_PA_BITS_32
> +       default 40 if ARM_PA_BITS_40
> +       default 48 if ARM_PA_BITS_48 || ARM_64
This reads as if on arm32 we could have a 48-bit PA space, which is not true (LPAE is 40-bit unless I missed something).
You could get rid of || ARM_64 if the choice wasn't protected by ARM_32 and ARM_48 was fixed to ARM_64.
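For illustration, the shape being suggested here could look roughly like the following. This is an untested sketch, not a patch; the option names are taken from the series, and the exact defaults/dependencies would need checking against the rest of the arm Kconfig:

```kconfig
choice
	prompt "Physical address space size"
	default ARM_PA_BITS_48 if ARM_64
	default ARM_PA_BITS_40 if ARM_32

config ARM_PA_BITS_32
	bool "32-bit"
	select PHYS_ADDR_T_32
	depends on ARM_32

config ARM_PA_BITS_40
	bool "40-bit"
	depends on ARM_32

config ARM_PA_BITS_48
	bool "48-bit"
	depends on ARM_64

endchoice

config PADDR_BITS
	int
	default 32 if ARM_PA_BITS_32
	default 40 if ARM_PA_BITS_40
	default 48 if ARM_PA_BITS_48
```

With the choice visible on both architectures, the `|| ARM_64` special case in PADDR_BITS goes away.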

> +
>  config ARCH_DEFCONFIG
>         string
>         default "arch/arm/configs/arm32_defconfig" if ARM_32
>         default "arch/arm/configs/arm64_defconfig" if ARM_64
> 
> -menu "Architecture Features"
> -
>  source "arch/Kconfig"
> 
>  config ACPI
> diff --git a/xen/arch/arm/include/asm/page-bits.h b/xen/arch/arm/include/asm/page-bits.h
> index 5d6477e599..deb381ceeb 100644
> --- a/xen/arch/arm/include/asm/page-bits.h
> +++ b/xen/arch/arm/include/asm/page-bits.h
> @@ -3,10 +3,6 @@
> 
>  #define PAGE_SHIFT              12
> 
> -#ifdef CONFIG_ARM_64
> -#define PADDR_BITS              48
> -#else
> -#define PADDR_BITS              40
> -#endif
> +#define PADDR_BITS              CONFIG_PADDR_BITS
> 
>  #endif /* __ARM_PAGE_SHIFT_H__ */
> diff --git a/xen/arch/arm/include/asm/types.h b/xen/arch/arm/include/asm/types.h
> index e218ed77bd..e3cfbbb060 100644
> --- a/xen/arch/arm/include/asm/types.h
> +++ b/xen/arch/arm/include/asm/types.h
> @@ -34,9 +34,15 @@ typedef signed long long s64;
>  typedef unsigned long long u64;
>  typedef u32 vaddr_t;
>  #define PRIvaddr PRIx32
> +#if defined(CONFIG_PHYS_ADDR_T_32)
> +typedef unsigned long paddr_t;
> +#define INVALID_PADDR (~0UL)
> +#define PRIpaddr "08lx"
> +#else
>  typedef u64 paddr_t;
>  #define INVALID_PADDR (~0ULL)
>  #define PRIpaddr "016llx"
> +#endif
>  typedef u32 register_t;
>  #define PRIregister "08x"
>  #elif defined (CONFIG_ARM_64)
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index b99806af99..6dc37be97e 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -690,6 +690,11 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>      const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
>      int rc;
> 
> +    /*
> +     * The size of paddr_t should be sufficient for the complete range of
> +     * physical address.
> +     */
> +    BUILD_BUG_ON((sizeof(paddr_t) * 8) < PADDR_BITS);
Just FYI, there is a macro BITS_PER_BYTE defined in bitops.h that you could use instead of 8.
Although I'm not sure if this would be better :)

>      BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
> 
>      if ( frametable_size > FRAMETABLE_SIZE )
> --
> 2.17.1
> 
> 

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 12:09:44 2023
MIME-Version: 1.0
References: <20230420110205.688689-1-mark.syms@citrix.com> <54a37172-cad5-3b27-36fc-3b7768e39df8@xen.org>
In-Reply-To: <54a37172-cad5-3b27-36fc-3b7768e39df8@xen.org>
From: Mark Syms <mark.syms@cloud.com>
Date: Mon, 24 Apr 2023 13:07:44 +0100
Message-ID: <CAPYKksVtGyfv3TbAjLH1G=N6=pH-pH2-FTX5c3+E5PsOKo2aOQ@mail.gmail.com>
Subject: Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
To: paul@xen.org
Cc: mark.syms@citrix.com, qemu-devel@nongnu.org, 
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org, tim.smith@citrix.com
Content-Type: multipart/alternative; boundary="000000000000567bf705fa13db30"

--000000000000567bf705fa13db30
Content-Type: text/plain; charset="UTF-8"

Copying in Tim who did the final phase of the changes.

On Mon, 24 Apr 2023 at 11:32, Paul Durrant <xadimgnik@gmail.com> wrote:
>
> On 20/04/2023 12:02, mark.syms@citrix.com wrote:
> > From: Mark Syms <mark.syms@citrix.com>
> >
> > Ensure the PV ring is drained on disconnect. Also ensure all pending
> > AIO is complete, otherwise AIO tries to complete into a mapping of the
> > ring which has been torn down.
> >
> > Signed-off-by: Mark Syms <mark.syms@citrix.com>
> > ---
> > CC: Stefano Stabellini <sstabellini@kernel.org>
> > CC: Anthony Perard <anthony.perard@citrix.com>
> > CC: Paul Durrant <paul@xen.org>
> > CC: xen-devel@lists.xenproject.org
> >
> > v2:
> >   * Ensure all inflight requests are completed before teardown
> >   * RESEND to fix formatting
> > ---
> >   hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
> >   1 file changed, 25 insertions(+), 6 deletions(-)
> >
> > diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
> > index 734da42ea7..d9da4090bf 100644
> > --- a/hw/block/dataplane/xen-block.c
> > +++ b/hw/block/dataplane/xen-block.c
> > @@ -523,6 +523,10 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
> >
> >       dataplane->more_work = 0;
> >
> > +    if (dataplane->sring == 0) {
> > +        return done_something;
> > +    }
> > +
>
> I think you could just return false here... Nothing is ever going to be
> done if there's no ring :-)
>
> >       rc = dataplane->rings.common.req_cons;
> >       rp = dataplane->rings.common.sring->req_prod;
> >       xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
> > @@ -666,14 +670,35 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane
> >   void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
> >   {
> >       XenDevice *xendev;
> > +    XenBlockRequest *request, *next;
> >
> >       if (!dataplane) {
> >           return;
> >       }
> >
> > +    /* We're about to drain the ring. We can cancel the scheduling of any
> > +     * bottom half now */
> > +    qemu_bh_cancel(dataplane->bh);
> > +
> > +    /* Ensure we have drained the ring */
> > +    aio_context_acquire(dataplane->ctx);
> > +    do {
> > +        xen_block_handle_requests(dataplane);
> > +    } while (dataplane->more_work);
> > +    aio_context_release(dataplane->ctx);
> > +
>
> I don't think we want to be taking new requests, do we?
>
> > +    /* Now ensure that all inflight requests are complete */
> > +    while (!QLIST_EMPTY(&dataplane->inflight)) {
> > +        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
> > +            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
> > +                        request);
> > +        }
> > +    }
> > +
>
> I think this could possibly be simplified by doing the drain after the
> call to blk_set_aio_context(), as long as we set dataplane->ctx to
> qemu_get_aio_context(). Also, as long as more_work is not set then it
> should still be safe to cancel the bh before the drain AFAICT.
>
>    Paul
>
> >       xendev = dataplane->xendev;
> >
> >       aio_context_acquire(dataplane->ctx);
> > +
> >       if (dataplane->event_channel) {
> >           /* Only reason for failure is a NULL channel */
> >           xen_device_set_event_channel_context(xendev, dataplane->event_channel,
> > @@ -684,12 +709,6 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
> >       blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
> >       aio_context_release(dataplane->ctx);
> >
> > -    /*
> > -     * Now that the context has been moved onto the main thread, cancel
> > -     * further processing.
> > -     */
> > -    qemu_bh_cancel(dataplane->bh);
> > -
> >       if (dataplane->event_channel) {
> >           Error *local_err = NULL;
> >
>

--000000000000567bf705fa13db30--


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 13:06:40 2023
Message-ID: <f6d866e9-c313-a0db-f12d-acf285e78c94@suse.com>
Date: Mon, 24 Apr 2023 15:06:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 1/4] x86/msi: passthrough all MSI-X vector ctrl writes
 to device model
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
 <f799fdc6b6899fa65a07eae0d6401753f7d61ef2.1680752649.git-series.marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f799fdc6b6899fa65a07eae0d6401753f7d61ef2.1680752649.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0213.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ac::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 06.04.2023 05:57, Marek Marczykowski-Górecki wrote:
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -272,6 +272,15 @@ out:
>      return r;
>  }
>  
> +/*
> + * This function returns X86EMUL_UNHANDLEABLE even if write is properly
> + * handled, to propagate it to the device model (so it can keep its internal
> + * state in sync).
> + * len==0 means really len==4, but as a write completion that will return
> + * X86EMUL_OKAY on successful processing. Use WRITE_LEN4_COMPLETION to make it
> + * less confusing.
> + */
> +#define WRITE_LEN4_COMPLETION 0
>  static int msixtbl_write(struct vcpu *v, unsigned long address,
>                           unsigned int len, unsigned long val)
>  {
> @@ -283,8 +292,9 @@ static int msixtbl_write(struct vcpu *v, unsigned long address,
>      unsigned long flags;
>      struct irq_desc *desc;
>  
> -    if ( (len != 4 && len != 8) || (address & (len - 1)) )
> -        return r;
> +    if ( (len != 4 && len != 8 && len != WRITE_LEN4_COMPLETION) ||
> +         (len && (address & (len - 1))) )
> +        return X86EMUL_UNHANDLEABLE;
>  
>      rcu_read_lock(&msixtbl_rcu_lock);
>  
> @@ -345,7 +355,7 @@ static int msixtbl_write(struct vcpu *v, unsigned long address,
>  
>  unlock:
>      spin_unlock_irqrestore(&desc->lock, flags);
> -    if ( len == 4 )
> +    if ( len == WRITE_LEN4_COMPLETION )
>          r = X86EMUL_OKAY;
>  
>  out:
> @@ -635,7 +645,7 @@ void msix_write_completion(struct vcpu *v)
>          return;
>  
>      v->arch.hvm.hvm_io.msix_unmask_address = 0;
> -    if ( msixtbl_write(v, ctrl_address, 4, 0) != X86EMUL_OKAY )
> +    if ( msixtbl_write(v, ctrl_address, WRITE_LEN4_COMPLETION, 0) != X86EMUL_OKAY )
>          gdprintk(XENLOG_WARNING, "MSI-X write completion failure\n");
>  }
>  

This part is okay with me, but ...

> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -743,7 +743,8 @@ static int ioreq_server_destroy(struct domain *d, ioservid_t id)
>  static int ioreq_server_get_info(struct domain *d, ioservid_t id,
>                                   unsigned long *ioreq_gfn,
>                                   unsigned long *bufioreq_gfn,
> -                                 evtchn_port_t *bufioreq_port)
> +                                 evtchn_port_t *bufioreq_port,
> +                                 uint16_t *flags)
>  {
>      struct ioreq_server *s;
>      int rc;
> @@ -779,6 +780,9 @@ static int ioreq_server_get_info(struct domain *d, ioservid_t id,
>              *bufioreq_port = s->bufioreq_evtchn;
>      }
>  
> +    /* Advertise supported features/behaviors. */
> +    *flags = XEN_DMOP_all_msix_writes;
> +
>      rc = 0;
>  
>   out:
> @@ -1374,7 +1378,8 @@ int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op)
>                                     NULL : (unsigned long *)&data->ioreq_gfn,
>                                     (data->flags & XEN_DMOP_no_gfns) ?
>                                     NULL : (unsigned long *)&data->bufioreq_gfn,
> -                                   &data->bufioreq_port);
> +                                   &data->bufioreq_port, &data->flags);
> +
>          break;
>      }
>  
> diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
> index acdf91693d0b..490b151c5dd7 100644
> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -70,7 +70,9 @@ typedef struct xen_dm_op_create_ioreq_server xen_dm_op_create_ioreq_server_t;
>   * not contain XEN_DMOP_no_gfns then these pages will be made available and
>   * the frame numbers passed back in gfns <ioreq_gfn> and <bufioreq_gfn>
>   * respectively. (If the IOREQ Server is not handling buffered emulation
> - * only <ioreq_gfn> will be valid).
> + * only <ioreq_gfn> will be valid). When Xen returns XEN_DMOP_all_msix_writes
> + * flag set, it will notify the IOREQ server about all writes to MSI-X table
> + * (if it's handled by this IOREQ server), not only those clearing a mask bit.
>   *
>   * NOTE: To access the synchronous ioreq structures and buffered ioreq
>   *       ring, it is preferable to use the XENMEM_acquire_resource memory
> @@ -81,11 +83,13 @@ typedef struct xen_dm_op_create_ioreq_server xen_dm_op_create_ioreq_server_t;
>  struct xen_dm_op_get_ioreq_server_info {
>      /* IN - server id */
>      ioservid_t id;
> -    /* IN - flags */
> +    /* IN/OUT - flags */
>      uint16_t flags;
>  
> -#define _XEN_DMOP_no_gfns 0
> -#define XEN_DMOP_no_gfns (1u << _XEN_DMOP_no_gfns)
> +#define _XEN_DMOP_no_gfns         0  /* IN */
> +#define _XEN_DMOP_all_msix_writes 1  /* OUT */
> +#define XEN_DMOP_no_gfns         (1u << _XEN_DMOP_no_gfns)
> +#define XEN_DMOP_all_msix_writes (1u << _XEN_DMOP_all_msix_writes)
>  
>      /* OUT - buffered ioreq port */
>      evtchn_port_t bufioreq_port;

... this isn't: What is obtained through this hypercall is information
pertaining to a particular ioreq server. I don't think global Xen
behavior should be reported there. To me this information would much
rather fit what public/features.h offers. And XENVER_get_features isn't
constrained in any way, so any device model ought to be able to retrieve
the information that way.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 13:19:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 13:19:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525366.816515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqw6E-0004f6-Qm; Mon, 24 Apr 2023 13:19:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525366.816515; Mon, 24 Apr 2023 13:19:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqw6E-0004ez-O4; Mon, 24 Apr 2023 13:19:22 +0000
Received: by outflank-mailman (input) for mailman id 525366;
 Mon, 24 Apr 2023 13:17:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZyYO=AP=tibco.com=tismith@srs-se1.protection.inumbo.net>)
 id 1pqw4r-0004Yd-B9
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 13:17:57 +0000
Received: from mail-lf1-x12b.google.com (mail-lf1-x12b.google.com
 [2a00:1450:4864:20::12b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6c74ffe3-e2a2-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 15:17:55 +0200 (CEST)
Received: by mail-lf1-x12b.google.com with SMTP id
 2adb3069b0e04-4ec81773d50so4646501e87.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Apr 2023 06:17:55 -0700 (PDT)
X-Inumbo-ID: 6c74ffe3-e2a2-11ed-b223-6b7b168915f2
MIME-Version: 1.0
References: <20230420110205.688689-1-mark.syms@citrix.com> <54a37172-cad5-3b27-36fc-3b7768e39df8@xen.org>
 <CAPYKksVtGyfv3TbAjLH1G=N6=pH-pH2-FTX5c3+E5PsOKo2aOQ@mail.gmail.com>
In-Reply-To: <CAPYKksVtGyfv3TbAjLH1G=N6=pH-pH2-FTX5c3+E5PsOKo2aOQ@mail.gmail.com>
From: Tim Smith <tismith@tibco.com>
Date: Mon, 24 Apr 2023 14:17:43 +0100
Message-ID: <CALUK5G5T=8MkxaQxdeid_ypo1e4DJ-zBRAMb7D+dcHkVdJt2tQ@mail.gmail.com>
Subject: Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
To: Mark Syms <mark.syms@cloud.com>
Cc: paul@xen.org, mark.syms@citrix.com, qemu-devel@nongnu.org, 
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org, tim.smith@citrix.com
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Apr 24, 2023 at 1:08 PM Mark Syms <mark.syms@cloud.com> wrote:
>
> Copying in Tim who did the final phase of the changes.
>
> On Mon, 24 Apr 2023 at 11:32, Paul Durrant <xadimgnik@gmail.com> wrote:
> >
> > On 20/04/2023 12:02, mark.syms@citrix.com wrote:
> > > From: Mark Syms <mark.syms@citrix.com>
> > >
> > > Ensure the PV ring is drained on disconnect. Also ensure all pending
> > > AIO is complete, otherwise AIO tries to complete into a mapping of the
> > > ring which has been torn down.
> > >
> > > Signed-off-by: Mark Syms <mark.syms@citrix.com>
> > > ---
> > > CC: Stefano Stabellini <sstabellini@kernel.org>
> > > CC: Anthony Perard <anthony.perard@citrix.com>
> > > CC: Paul Durrant <paul@xen.org>
> > > CC: xen-devel@lists.xenproject.org
> > >
> > > v2:
> > >   * Ensure all inflight requests are completed before teardown
> > >   * RESEND to fix formatting
> > > ---
> > >   hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
> > >   1 file changed, 25 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
> > > index 734da42ea7..d9da4090bf 100644
> > > --- a/hw/block/dataplane/xen-block.c
> > > +++ b/hw/block/dataplane/xen-block.c
> > > @@ -523,6 +523,10 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
> > >
> > >       dataplane->more_work = 0;
> > >
> > > +    if (dataplane->sring == 0) {
> > > +        return done_something;
> > > +    }
> > > +
> >
> > I think you could just return false here... Nothing is ever going to be
> > done if there's no ring :-)
> >
> > >       rc = dataplane->rings.common.req_cons;
> > >       rp = dataplane->rings.common.sring->req_prod;
> > >       xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
> > > @@ -666,14 +670,35 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
> > >   void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
> > >   {
> > >       XenDevice *xendev;
> > > +    XenBlockRequest *request, *next;
> > >
> > >       if (!dataplane) {
> > >           return;
> > >       }
> > >
> > > +    /* We're about to drain the ring. We can cancel the scheduling of any
> > > +     * bottom half now */
> > > +    qemu_bh_cancel(dataplane->bh);
> > > +
> > > +    /* Ensure we have drained the ring */
> > > +    aio_context_acquire(dataplane->ctx);
> > > +    do {
> > > +        xen_block_handle_requests(dataplane);
> > > +    } while (dataplane->more_work);
> > > +    aio_context_release(dataplane->ctx);
> > > +
> >
> > I don't think we want to be taking new requests, do we?
> >

If we're in this situation and the guest has put something on the
ring, I think we should do our best with it. We cannot just rely on
the guest to be well-behaved, because they're not :-( We're about to
throw the ring away, so whatever is there would otherwise be lost.
This bit is here to try to handle guests which are less than diligent
about their shutdown. We *should* always be past this fast enough when
the disconnect()/connect() of XenbusStateConnected happens that all
remains well (if not, we were in a worse situation before).

> > > +    /* Now ensure that all inflight requests are complete */
> > > +    while (!QLIST_EMPTY(&dataplane->inflight)) {
> > > +        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
> > > +            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
> > > +                        request);
> > > +        }
> > > +    }
> > > +
> >
> > I think this could possibly be simplified by doing the drain after the
> > call to blk_set_aio_context(), as long as we set dataplane->ctx to
> > qemu_get_aio_context(). Alos, as long as more_work is not set then it
> > should still be safe to cancel the bh before the drain AFAICT.

I'm not sure what you mean by simpler? Possibly I'm not getting something.

We have to make sure that any "aio_bh_schedule_oneshot_full()" which
happens as a result of "blk_aio_flush()" has finished before any
change of AIO context, because it tries to use the one which was
current at the time of being called (I have the SEGVs to prove it
:-)). Whether that happens before or after
"blk_set_aio_context(qemu_get_aio_context())" doesn't seem to be a
change in complexity to me.

Motivation was to get as much as possible to happen in the way it
"normally" would, so that future changes are less likely to regress,
but as mentioned maybe I'm missing something.

The BH needs to be prevented from firing ASAP, otherwise the
disconnect()/connect() which happens when XenbusStateConnected can
have the bh fire from what the guest does next, right in the middle of
juggling contexts for the disconnect() (I have the SEGVs from that
too...).

> >    Paul
> >
> > >       xendev =3D dataplane->xendev;
> > >
> > >       aio_context_acquire(dataplane->ctx);
> > > +
> > >       if (dataplane->event_channel) {
> > >           /* Only reason for failure is a NULL channel */
> > >           xen_device_set_event_channel_context(xendev, dataplane->eve=
nt_channel,
> > > @@ -684,12 +709,6 @@ void xen_block_dataplane_stop(XenBlockDataPlane =
*dataplane)
> > >       blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &er=
ror_abort);
> > >       aio_context_release(dataplane->ctx);
> > >
> > > -    /*
> > > -     * Now that the context has been moved onto the main thread, can=
cel
> > > -     * further processing.
> > > -     */
> > > -    qemu_bh_cancel(dataplane->bh);
> > > -
> > >       if (dataplane->event_channel) {
> > >           Error *local_err =3D NULL;
> > >
> >

Tim (hoping GMail behaves itself with this message...)


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 13:44:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 13:44:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525376.816525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwUY-0008Es-Qd; Mon, 24 Apr 2023 13:44:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525376.816525; Mon, 24 Apr 2023 13:44:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwUY-0008El-Np; Mon, 24 Apr 2023 13:44:30 +0000
Received: by outflank-mailman (input) for mailman id 525376;
 Mon, 24 Apr 2023 13:44:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ydgZ=AP=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pqwUX-0008Ef-H2
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 13:44:29 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 210ad7e7-e2a6-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 15:44:26 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 2BB271FD80;
 Mon, 24 Apr 2023 13:44:26 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E2B1513780;
 Mon, 24 Apr 2023 13:44:25 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id lz7cNbmHRmQ4JgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 24 Apr 2023 13:44:25 +0000
X-Inumbo-ID: 210ad7e7-e2a6-11ed-8611-37d641c3527e
Message-ID: <e3a900e6-ef5e-9ac2-19fc-c276dc31c487@suse.com>
Date: Mon, 24 Apr 2023 15:44:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230421135933.23353-1-jgross@suse.com>
 <45bdf36f-93c5-9f7c-a028-0a9443f85013@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] xen: add support for crash dump analysis with xen.efi
In-Reply-To: <45bdf36f-93c5-9f7c-a028-0a9443f85013@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------5YGvLqDVWAAeuZx5hDz5eFzt"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------5YGvLqDVWAAeuZx5hDz5eFzt
Content-Type: multipart/mixed; boundary="------------uJ00jS94uZG0CIg6E5RACioU";
 protected-headers="v1"

--------------uJ00jS94uZG0CIg6E5RACioU
Content-Type: multipart/mixed; boundary="------------ev5uejv2noG5F6pEpRgidfLb"

--------------ev5uejv2noG5F6pEpRgidfLb
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjQuMDQuMjMgMTQ6MDEsIEphbiBCZXVsaWNoIHdyb3RlOg0KPiBPbiAyMS4wNC4yMDIz
IDE1OjU5LCBKdWVyZ2VuIEdyb3NzIHdyb3RlOg0KPj4gVG9kYXkgaXQgaXMgbm90IHBvc3Np
YmxlIHRvIGFuYWx5c2UgY3Jhc2ggZHVtcHMgb2YgYSBzeXN0ZW0gaW4NCj4+IGh5cGVydmlz
b3IgbW9kZSB3aGVuIGl0IGhhZCBiZWVuIGJvb3RlZCB2aWEgRUZJLCBhcyB0aGUgY3Jhc2gg
dXRpbGl0eQ0KPj4gZG9lc24ndCB1bmRlcnN0YW5kIHRoZSBmaWxlIGZvcm1hdCBvZiB4ZW4u
ZWZpLg0KPj4NCj4+IFRoaXMgY2FuIGVhc2lseSBiZSBzb2x2ZWQgYnkgY3JlYXRpbmcgYW4g
RUxGIGZpbGUgZnJvbSB4ZW4uZWZpIHZpYQ0KPj4gb2JqY29weS4gVXNpbmcgdGhhdCBmaWxl
IGFzIG5hbWUgbGlzdCBmb3IgY3Jhc2ggZW5hYmxlcyB0aGUgdXNlciB0bw0KPj4gYW5hbHlz
ZSB0aGUgZHVtcCBpbiBoeXBlcnZpc29yIG1vZGUuDQo+Pg0KPj4gU2lnbmVkLW9mZi1ieTog
SnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPg0KPj4gLS0tDQo+PiAgIHhlbi9LY29u
ZmlnLmRlYnVnICAgICB8IDUgKysrKy0NCj4+ICAgeGVuL01ha2VmaWxlICAgICAgICAgIHwg
NCArKysrDQo+PiAgIHhlbi9hcmNoL3g4Ni9NYWtlZmlsZSB8IDMgKysrDQo+PiAgIDMgZmls
ZXMgY2hhbmdlZCwgMTEgaW5zZXJ0aW9ucygrKSwgMSBkZWxldGlvbigtKQ0KPj4NCj4+IGRp
ZmYgLS1naXQgYS94ZW4vS2NvbmZpZy5kZWJ1ZyBiL3hlbi9LY29uZmlnLmRlYnVnDQo+PiBp
bmRleCA5NGU4MThlZTA5Li40YWVjMGZkM2FhIDEwMDY0NA0KPj4gLS0tIGEveGVuL0tjb25m
aWcuZGVidWcNCj4+ICsrKyBiL3hlbi9LY29uZmlnLmRlYnVnDQo+PiBAQCAtMTM4LDYgKzEz
OCw5IEBAIGNvbmZpZyBERUJVR19JTkZPDQo+PiAgIAkgIHRoZSBFRkkgYm9vdCBwYXJ0aXRp
b24gKGxvb2sgZm9yICJJTlNUQUxMX0VGSV9TVFJJUCIgaW4NCj4+ICAgCSAgZG9jcy9taXNj
L2VmaS5wYW5kb2MgZm9yIG1vcmUgaW5mb3JtYXRpb24gLSB3aGVuIG5vdCB1c2luZw0KPj4g
ICAJICAibWFrZSBpbnN0YWxsLXhlbiIgZm9yIGluc3RhbGxpbmcgeGVuLmVmaSwgc3RyaXBw
aW5nIG5lZWRzIHRvIGJlDQo+PiAtCSAgZG9uZSBvdXRzaWRlIHRoZSBYZW4gYnVpbGQgZW52
aXJvbm1lbnQpLg0KPj4gKwkgIGRvbmUgb3V0c2lkZSB0aGUgWGVuIGJ1aWxkIGVudmlyb25t
ent). Note that stripping
>> +	  xen.efi using "INSTALL_EFI_STRIP" will disable the building of
>> +	  xen.efi.elf, which is needed for "crash" dump analysis of systems
>> +	  booted via EFI.
> 
> I'm afraid I don't understand this: INSTALL_EFI_STRIP only affects what
> may (optionally) be placed on the EFI partition. The normally installed
> xen.efi should be unaffected by this; an intermediate xen.efi.stripped
> is created instead.

Oh, obviously I misinterpreted something. I'll drop this hunk.

> 
>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -505,6 +505,9 @@ _install: $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX)
>>   		if [ -e $(TARGET).efi.map ]; then \
>>   			$(INSTALL_DATA) $(TARGET).efi.map $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.map; \
>>   		fi; \
>> +		if [ -e $(TARGET).efi.elf ]; then \
>> +			$(INSTALL_DATA) $(TARGET).efi.elf $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.elf; \
>> +		fi; \
> 
> To avoid the risk of the two going out of sync (as also to limit
> redundancy), could you wrap the earlier "if" in a "for x in map elf;
> do ...; done" loop?

Sure, will do.

> 
>> --- a/xen/arch/x86/Makefile
>> +++ b/xen/arch/x86/Makefile
>> @@ -224,6 +224,9 @@ endif
>>   	$(MAKE) $(build)=$(@D) .$(@F).1r.o .$(@F).1s.o
>>   	$(LD) $(call EFI_LDFLAGS,$(VIRT_BASE)) -T $(obj)/efi.lds -N $< \
>>   	      $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
>> +ifeq ($(CONFIG_DEBUG_INFO),y)
>> +	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),,$(OBJCOPY) -O elf64-x86-64 $@ $@.elf)
>> +endif
>>   	$(NM) -pa --format=sysv $(@D)/$(@F) \
>>   		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort >$(@D)/$(@F).map
>>   	rm -f $(@D)/.$(@F).[0-9]* $(@D)/..$(@F).[0-9]*
> 
> Personally I think that - in case of build problems - having the map file
> created is more important (and less likely to fail) than the ELF one, so
> I'd prefer if the two steps could be ordered the other way around.

Fine with me.

> Further I vaguely recall possible issues with entirely blank make rule
> lines. Plus having some trace of the command in a verbose log perhaps
> also wouldn't hurt. IOW maybe better add another use of the colon command
> here (we already have some, at least through $(MKRELOC)):
> 
> ifeq ($(CONFIG_DEBUG_INFO),y)
> 	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:)$(OBJCOPY) -O elf64-x86-64 $@ $@.elf
> endif
> 
> ?

Okay.

> 
> Finally - do you really need to copy all the non-debug sections as well?
> Might --only-keep-debug be helpful here (provided it works for a PE/COFF
> -> ELF copy operation; it looks to do so in my simple experiments)?

No, doesn't work (objcopy does, but not crash with that file):

   crash: xen.efi.elf: no text and data contents


Juergen
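For illustration, the "for x in map elf" loop suggested in the review could look roughly like the following shell sketch. This is not the actual Makefile change: the variable values, the temporary directory, and the use of plain install(1) in place of $(INSTALL_DATA) are all placeholders.

```shell
# Hypothetical sketch of folding the two install "if" blocks into one loop.
# TARGET, D, DEBUG_DIR, T and XEN_FULLVERSION mirror the Makefile's
# variables; the values below are stand-ins for real build outputs.
TARGET=xen
D=/tmp/destdir
DEBUG_DIR=/usr/lib/debug
T=xen
XEN_FULLVERSION=4.18.0

mkdir -p "$D$DEBUG_DIR"
touch "$TARGET.efi.map" "$TARGET.efi.elf"   # stand-ins for build artifacts

# One loop installs whichever debug artifacts exist, so the map and elf
# cases cannot go out of sync.
for x in map elf; do
    if [ -e "$TARGET.efi.$x" ]; then
        install -m 0644 "$TARGET.efi.$x" "$D$DEBUG_DIR/$T-$XEN_FULLVERSION.efi.$x"
    fi
done
```

The loop body is the old "if" block with the `map`/`elf` suffix factored out into `$x`, which is what makes the deduplication safe.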


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 13:46:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 13:46:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525382.816535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwWt-0000SN-BB; Mon, 24 Apr 2023 13:46:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525382.816535; Mon, 24 Apr 2023 13:46:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwWt-0000SG-8P; Mon, 24 Apr 2023 13:46:55 +0000
Received: by outflank-mailman (input) for mailman id 525382;
 Mon, 24 Apr 2023 13:46:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wIvg=AP=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pqwWr-0000SA-O1
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 13:46:53 +0000
Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com
 [2a00:1450:4864:20::12a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 776a1393-e2a6-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 15:46:51 +0200 (CEST)
Received: by mail-lf1-x12a.google.com with SMTP id
 2adb3069b0e04-4efd6e26585so2621371e87.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Apr 2023 06:46:51 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 z23-20020a2e8857000000b002a8c271de33sm1706787ljj.67.2023.04.24.06.46.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Apr 2023 06:46:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 776a1393-e2a6-11ed-8611-37d641c3527e
Message-ID: <00b9ff7d3108c446b5e3bf81348bf4deaa010c1b.camel@gmail.com>
Subject: Re: [PATCH v5 1/4] xen/riscv: add VM space layout
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Mon, 24 Apr 2023 16:46:49 +0300
In-Reply-To: <9a897eb7-f6c7-9e28-4f6b-4c4c43f7a4f1@suse.com>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
	 <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
	 <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
	 <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
	 <9a897eb7-f6c7-9e28-4f6b-4c4c43f7a4f1@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.0 (3.48.0-1.fc38) 
MIME-Version: 1.0

On Mon, 2023-04-24 at 11:25 +0200, Jan Beulich wrote:
> On 21.04.2023 16:41, Oleksii wrote:
> > On Thu, 2023-04-20 at 14:58 +0200, Jan Beulich wrote:
> > > On 19.04.2023 17:42, Oleksii Kurochko wrote:
> > > > + * ===========================================================================
> > > > + *     Start addr    |   End addr        |  Size  | VM area description
> > > > + * ===========================================================================
> > > > + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | Xen
> > > > + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | FDT
> > > > + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | Fixmap
> > >
> > > These are all L2 slot 511 aiui, which may be worth mentioning
> > > especially since the top bits don't match the top bits further down
> > > in the table (because of the aliasing).
> >
> > Then I'll add one more column where I'll put the slot number.
> >
> > > > + *     .................. unused ..................
> > >
> > > This is covering slot 510, which again may be worth mentioning.
> > >
> > > > + * 0000003200000000 |  0000007f40000000 | 331 GB | Direct map (L2 slot: 200-509)
> > > > + * 0000003100000000 |  0000003140000000 |  1 GB  | Frametable (L2 slot: 196-197)
> > >
> > > 1Gb is, if I'm not mistaken, a single L2 slot.
> > Yeah, it can be misunderstood. I meant [196, 197), so 197 isn't
> > included. I'll update the table.
> >
> > > Also assuming a 32-byte struct page_info (I don't think you'll get
> > > away with less than that, when even Arm32 requires this much),
> > > there's a mismatch between direct map and frame table size: with 4k
> > > pages, the scaling factor would be 128 if I'm not mistaken. So
> > > perhaps you really mean 3Gb here to cover for (slightly more than)
> > > the 331Gb of memory you mean to be able to map?
> > For RV64 the page_info size will be 56 bytes, and 32 bytes for RV32,
> > but you are right: it should be 3 GB in that case, which will be
> > enough (taking into account both available sizes of the page_info
> > structure).
>
> Since you say "both": I don't expect you will want to use 3Gb for RV32
> there? The table is, at least for now, for RV64 / Sv39 only anyway.
>
You are right. There is no sense in using 3 GB for RV32.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 13:51:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 13:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525386.816545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwb3-0001yt-Ry; Mon, 24 Apr 2023 13:51:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525386.816545; Mon, 24 Apr 2023 13:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwb3-0001ym-PF; Mon, 24 Apr 2023 13:51:13 +0000
Received: by outflank-mailman (input) for mailman id 525386;
 Mon, 24 Apr 2023 13:51:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=riw3=AP=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1pqwb2-0001yg-Ob
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 13:51:12 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 122cf1a1-e2a7-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 15:51:11 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-3f19c473b9eso46343055e9.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Apr 2023 06:51:11 -0700 (PDT)
Received: from [192.168.22.129] (54-240-197-234.amazon.com. [54.240.197.234])
 by smtp.gmail.com with ESMTPSA id
 c4-20020adffb44000000b00304760c891asm3402769wrs.52.2023.04.24.06.51.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Apr 2023 06:51:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 122cf1a1-e2a7-11ed-b223-6b7b168915f2
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Message-ID: <f1325cdb-9e0f-b955-7041-826fb6c78174@xen.org>
Date: Mon, 24 Apr 2023 14:51:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Reply-To: paul@xen.org
Subject: Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
Content-Language: en-US
To: Tim Smith <tismith@tibco.com>, Mark Syms <mark.syms@cloud.com>
Cc: mark.syms@citrix.com, qemu-devel@nongnu.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 tim.smith@citrix.com
References: <20230420110205.688689-1-mark.syms@citrix.com>
 <54a37172-cad5-3b27-36fc-3b7768e39df8@xen.org>
 <CAPYKksVtGyfv3TbAjLH1G=N6=pH-pH2-FTX5c3+E5PsOKo2aOQ@mail.gmail.com>
 <CALUK5G5T=8MkxaQxdeid_ypo1e4DJ-zBRAMb7D+dcHkVdJt2tQ@mail.gmail.com>
Organization: Xen Project
In-Reply-To: <CALUK5G5T=8MkxaQxdeid_ypo1e4DJ-zBRAMb7D+dcHkVdJt2tQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24/04/2023 14:17, Tim Smith wrote:
> On Mon, Apr 24, 2023 at 1:08 PM Mark Syms <mark.syms@cloud.com> wrote:
>>
>> Copying in Tim who did the final phase of the changes.
>>
>> On Mon, 24 Apr 2023 at 11:32, Paul Durrant <xadimgnik@gmail.com> wrote:
>>>
>>> On 20/04/2023 12:02, mark.syms@citrix.com wrote:
>>>> From: Mark Syms <mark.syms@citrix.com>
>>>>
>>>> Ensure the PV ring is drained on disconnect. Also ensure all pending
>>>> AIO is complete, otherwise AIO tries to complete into a mapping of the
>>>> ring which has been torn down.
>>>>
>>>> Signed-off-by: Mark Syms <mark.syms@citrix.com>
>>>> ---
>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>> CC: Anthony Perard <anthony.perard@citrix.com>
>>>> CC: Paul Durrant <paul@xen.org>
>>>> CC: xen-devel@lists.xenproject.org
>>>>
>>>> v2:
>>>>    * Ensure all inflight requests are completed before teardown
>>>>    * RESEND to fix formatting
>>>> ---
>>>>    hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
>>>>    1 file changed, 25 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
>>>> index 734da42ea7..d9da4090bf 100644
>>>> --- a/hw/block/dataplane/xen-block.c
>>>> +++ b/hw/block/dataplane/xen-block.c
>>>> @@ -523,6 +523,10 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
>>>>
>>>>        dataplane->more_work = 0;
>>>>
>>>> +    if (dataplane->sring == 0) {
>>>> +        return done_something;
>>>> +    }
>>>> +
>>>
>>> I think you could just return false here... Nothing is ever going to be
>>> done if there's no ring :-)
>>>
>>>>        rc = dataplane->rings.common.req_cons;
>>>>        rp = dataplane->rings.common.sring->req_prod;
>>>>        xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
>>>> @@ -666,14 +670,35 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
>>>>    void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
>>>>    {
>>>>        XenDevice *xendev;
>>>> +    XenBlockRequest *request, *next;
>>>>
>>>>        if (!dataplane) {
>>>>            return;
>>>>        }
>>>>
>>>> +    /* We're about to drain the ring. We can cancel the scheduling of any
>>>> +     * bottom half now */
>>>> +    qemu_bh_cancel(dataplane->bh);
>>>> +
>>>> +    /* Ensure we have drained the ring */
>>>> +    aio_context_acquire(dataplane->ctx);
>>>> +    do {
>>>> +        xen_block_handle_requests(dataplane);
>>>> +    } while (dataplane->more_work);
>>>> +    aio_context_release(dataplane->ctx);
>>>> +
>>>
>>> I don't think we want to be taking new requests, do we?
>>>
> 
> If we're in this situation and the guest has put something on the
> ring, I think we should do our best with it.
> We cannot just rely on the guest to be well-behaved, because they're
> not :-( We're about to throw the
> ring away, so whatever is there would otherwise be lost.

We only throw away our mapping. The memory belongs to the guest, and the 
guest should ensure it does not submit requests after the state has left 
'connected'.

> This bit is
> here to try to handle guests which are
> less than diligent about their shutdown. We *should* always be past
> this fast enough when the disconnect()/connect()
> of XenbusStateConnected happens that all remains well (if not, we were
> in a worse situation before).
> 

What about a malicious guest that keeps piling requests into the ring? It 
could keep us in the loop forever, couldn't it?

>>>> +    /* Now ensure that all inflight requests are complete */
>>>> +    while (!QLIST_EMPTY(&dataplane->inflight)) {
>>>> +        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
>>>> +            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
>>>> +                        request);
>>>> +        }
>>>> +    }
>>>> +
>>>
>>> I think this could possibly be simplified by doing the drain after the
>>> call to blk_set_aio_context(), as long as we set dataplane->ctx to
>>> qemu_get_aio_context(). Also, as long as more_work is not set then it
>>> should still be safe to cancel the bh before the drain AFAICT.
> 
> I'm not sure what you mean by simpler? Possibly I'm not getting something.
> 

Sorry, I was referring to the need to do aio_context_acquire() calls but 
they are only around the disputed xen_block_handle_requests() call 
anyway, so there's no simplification in this bit.

> We have to make sure that any "aio_bh_schedule_oneshot_full()" which
> happens as a result of
> "blk_aio_flush()" has finished before any change of AIO context,
> because it tries to use the one which
> was current at the time of being called (I have the SEGVs to prove it
> :-)).

Ok, I had assumed that the issue was the context being picked up inside 
the xen_block_complete_aio() call.

> Whether that happens before or after
> "blk_set_aio_context(qemu_get_aio_context())" doesn't seem to be a
> change in complexity to me.
> 
> Motivation was to get as much as possible to happen in the way it
> "normally" would, so that future changes
> are less likely to regress, but as mentioned maybe I'm missing something.
> 
> The BH needs to be prevented from firing ASAP, otherwise the
> disconnect()/connect() which happens when
> XenbusStateConnected can have the bh fire from what the guest does
> next right in the middle of juggling
> contexts for the disconnect() (I have the SEGVs from that too...).
> 

So if you drop the ring drain then this patch should still stop the 
SEGVs, right?

   Paul



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 13:52:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 13:52:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525389.816555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwbp-0002UM-5Y; Mon, 24 Apr 2023 13:52:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525389.816555; Mon, 24 Apr 2023 13:52:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwbp-0002UF-2d; Mon, 24 Apr 2023 13:52:01 +0000
Received: by outflank-mailman (input) for mailman id 525389;
 Mon, 24 Apr 2023 13:52:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wIvg=AP=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pqwbo-0002Lp-2U
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 13:52:00 +0000
Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com
 [2a00:1450:4864:20::236])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2e59c885-e2a7-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 15:51:58 +0200 (CEST)
Received: by mail-lj1-x236.google.com with SMTP id
 38308e7fff4ca-2a8c28158e2so42421461fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Apr 2023 06:51:58 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 i3-20020a2e9403000000b002a8b5310642sm1771789ljh.5.2023.04.24.06.51.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Apr 2023 06:51:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e59c885-e2a7-11ed-8611-37d641c3527e
Message-ID: <822ea3e8595c21a3528849a68dac35f5cb534934.camel@gmail.com>
Subject: Re: [PATCH v5 1/4] xen/riscv: add VM space layout
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,  Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Gianluca Guida
	 <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
	Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, 
	xen-devel@lists.xenproject.org
Date: Mon, 24 Apr 2023 16:51:56 +0300
In-Reply-To: <509ba3a2-0b85-d758-6915-7975d31a3437@suse.com>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
	 <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
	 <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
	 <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
	 <509ba3a2-0b85-d758-6915-7975d31a3437@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.0 (3.48.0-1.fc38) 
MIME-Version: 1.0

On Mon, 2023-04-24 at 11:33 +0200, Jan Beulich wrote:
> On 21.04.2023 16:41, Oleksii wrote:
> > On Thu, 2023-04-20 at 14:58 +0200, Jan Beulich wrote:
> > > On 19.04.2023 17:42, Oleksii Kurochko wrote:
> > > > + * ============================================================================
> > > > + *    Start addr    |   End addr        |  Size  | VM area description
> > > > + * ============================================================================
> > > > + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | Xen
> > > > + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | FDT
> > > > + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | Fixmap
> > >
> > > These are all L2 slot 511 aiui, which may be worth mentioning,
> > > especially since the top bits don't match the top bits further down
> > > in the table (because of the aliasing).
> >
> > Then I'll add one more column where I'll put the slot number.
> >
> > >
> > > > + *     .................. unused ..................
> > >
> > > This is covering slot 510, which again may be worth mentioning.
> > >
> > > > + * 0000003200000000 |  0000007f40000000 | 331 GB | Direct map(L2 slot: 200-509)
> > > > + * 0000003100000000 |  0000003140000000 |  1 GB  | Frametable(L2 slot: 196-197)
> > >
> > > 1Gb is, if I'm not mistaken, a single L2 slot.
> > Yeah, it can be misunderstood. I meant [196, 197), so 197 isn't
> > included. I'll update the table.
> >
> > >
> > > Also assuming a 32-byte struct page_info (I don't think you'll get
> > > away with less than that, when even Arm32 requires this much),
> > > there's a mismatch between direct map and frame table size: with 4k
> > > pages, the scaling factor would be 128 if I'm not mistaken. So
> > > perhaps you really mean 3Gb here to cover for (slightly more than)
> > > the 331Gb of memory you mean to be able to map?
> > For RV64 the page_info size will be 56 bytes and 32 bytes for RV32,
> > but you are right: it should be 3 GB in that case, which will be
> > enough (taking into account both available sizes of the page_info
> > structure).
>
> As to the plan of it being 56 bytes (i.e. like on Arm): Arm has
> forever had a 64-bit padding field at the end. My best guess is that
> the field was introduced to have a 32-byte struct on Arm32. But then
> why artificially increase the struct from 48 to 56 bytes on Arm64? And
> hence why have the same oddity on RV64?
I didn't think about that; I just re-used the page_info structure from Arm.
I'll think about your comment. Thanks.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 13:59:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 13:59:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525399.816565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwiy-0003Gl-TB; Mon, 24 Apr 2023 13:59:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525399.816565; Mon, 24 Apr 2023 13:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwiy-0003Ge-QL; Mon, 24 Apr 2023 13:59:24 +0000
Received: by outflank-mailman (input) for mailman id 525399;
 Mon, 24 Apr 2023 13:59:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqwix-0003GY-4s
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 13:59:23 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20625.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 35927e32-e2a8-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 15:59:21 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8368.eurprd04.prod.outlook.com (2603:10a6:102:1bf::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 13:59:17 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 13:59:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <69ae7948-64fa-deaa-03b8-30eb84e07b61@suse.com>
Date: Mon, 24 Apr 2023 15:59:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 3/4] x86/hvm: Allow writes to registers on the same
 page as MSI-X table
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
 <3a8f54cf631e0342b144935950c853d1884a7eac.1680752649.git-series.marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <3a8f54cf631e0342b144935950c853d1884a7eac.1680752649.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 06.04.2023 05:57, Marek Marczykowski-Górecki wrote:
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -181,15 +181,21 @@ static bool msixtbl_initialised(const struct domain *d)
>  }
>  
>  static struct msixtbl_entry *msixtbl_find_entry(
> -    struct vcpu *v, unsigned long addr)
> +    struct vcpu *v, unsigned long addr, bool same_page)
>  {
>      struct msixtbl_entry *entry;
>      struct domain *d = v->domain;
>  
>      list_for_each_entry( entry, &d->arch.hvm.msixtbl_list, list )
> +    {
> +        if ( same_page &&
> +             PFN_DOWN(addr) >= PFN_DOWN(entry->gtable) &&
> +             PFN_DOWN(addr) <= PFN_DOWN(entry->gtable + entry->table_len) )

This will allow access to the subsequent page in case entry->gtable +
entry->table_len falls on a page boundary. I think you mean "< PFN_UP()"
or "<= PFN_DOWN(... - 1)".

But I anyway don't understand why you need both this and ...

> +            return entry;
>          if ( addr >= entry->gtable &&
>               addr < entry->gtable + entry->table_len )
>              return entry;

... this (and the new function parameter). Again like in the recently
added vPCI code, all you should need ought to be the new code you add
(see msix.c:msix_find()), with the caller then separating "in table" vs
"other space". Only one of the four callers really cares about _just_
the table range. In no event do you need to do both pairs of checks:

        if ( same_page
             ? PFN_DOWN(addr) >= PFN_DOWN(entry->gtable) &&
               ...
             : ...

(i.e. in case you disagree with moving the original check to the sole
place where it's needed).

> @@ -213,6 +219,144 @@ static struct msi_desc *msixtbl_addr_to_desc(
>      return NULL;
>  }
>  
> +/*
> + * Returns:
> + *  - UINT_MAX if no handling should be done
> + *  - UINT_MAX-1 if write should be discarded
> + *  - a fixmap idx to use for handling
> + */
> +#define ADJACENT_DONT_HANDLE UINT_MAX
> +#define ADJACENT_DISCARD_WRITE (UINT_MAX - 1)
> +static unsigned int adjacent_handle(
> +    const struct msixtbl_entry *entry, unsigned long addr, bool write)
> +{
> +    unsigned int adj_type;
> +
> +    if ( !entry || !entry->pdev )
> +        return ADJACENT_DONT_HANDLE;
> +
> +    if ( PFN_DOWN(addr) == PFN_DOWN(entry->gtable) && addr < entry->gtable )
> +        adj_type = ADJ_IDX_FIRST;
> +    else if ( PFN_DOWN(addr) == PFN_DOWN(entry->gtable + entry->table_len - 1) &&
> +              addr >= entry->gtable + entry->table_len )
> +        adj_type = ADJ_IDX_LAST;
> +    else
> +        return ADJACENT_DONT_HANDLE;
> +
> +    ASSERT(entry->pdev->msix);

Considering the code below, you probably want to latch this pointer into
a local variable.

> +    if ( !entry->pdev->msix->adj_access_table_idx[adj_type] )
> +    {
> +        gprintk(XENLOG_WARNING,
> +                "Page for adjacent(%d) MSI-X table access not initialized for %pp (addr %#lx, gtable %#lx\n",
> +                adj_type, &entry->pdev->sbdf, addr, entry->gtable);
> +
> +        return ADJACENT_DONT_HANDLE;
> +    }
> +
> +    /* If PBA lives on the same page too, discard writes. */
> +    if ( write && (

Nit: Style (no standalone opening parenthesis ought to be last on a line).

> +        (adj_type == ADJ_IDX_LAST &&
> +         entry->pdev->msix->table.last == entry->pdev->msix->pba.first) ||
> +        (adj_type == ADJ_IDX_FIRST &&
> +         entry->pdev->msix->table.first == entry->pdev->msix->pba.last)) )

And these all need indenting by one more blank.

> +    {
> +        gprintk(XENLOG_WARNING,
> +                 "MSI-X table and PBA of %pp live on the same page, "
> +                 "writing to other registers there is not implemented\n",
> +                 &entry->pdev->sbdf);

Whereas here a blank needs dropping. But - don't you discard writes to
more than just the PBA here?

> +        return ADJACENT_DISCARD_WRITE;
> +    }
> +
> +    return entry->pdev->msix->adj_access_table_idx[adj_type];
> +}
> +
> +static int adjacent_read(
> +        unsigned int fixmap_idx,
> +        uint64_t address, uint32_t len, uint64_t *pval)

Nit: Style (too deep indentation). Also while I'm fine with the two
uint64_t (arguably the former may want to be paddr_t), the uint32_t
clearly isn't needed here, and wants to be unsigned int.

> +{
> +    const void __iomem *hwaddr;
> +
> +    *pval = ~0UL;
> +
> +    if ( !IS_ALIGNED(address, len) )
> +    {
> +        gdprintk(XENLOG_WARNING,
> +                "Dropping unaligned read from MSI-X table page at %" PRIx64 "\n",
> +                address);

Here indentation is one too little again.

> +        return X86EMUL_OKAY;
> +    }
> +
> +    ASSERT(fixmap_idx != ADJACENT_DISCARD_WRITE);
> +
> +    hwaddr = fix_to_virt(fixmap_idx) + PAGE_OFFSET(address);
> +
> +    switch ( len )
> +    {
> +    case 1:
> +        *pval = readb(hwaddr);
> +        break;
> +
> +    case 2:
> +        *pval = readw(hwaddr);
> +        break;
> +
> +    case 4:
> +        *pval = readl(hwaddr);
> +        break;
> +
> +    case 8:
> +        *pval = readq(hwaddr);
> +        break;
> +
> +    default:
> +        ASSERT_UNREACHABLE();
> +    }
> +    return X86EMUL_OKAY;
> +}
> +
> +static int adjacent_write(
> +        unsigned int fixmap_idx,
> +        uint64_t address, uint32_t len, uint64_t val)
> +{
> +    void __iomem *hwaddr;
> +
> +    if ( !IS_ALIGNED(address, len) )
> +    {
> +        gdprintk(XENLOG_WARNING,
> +                "Dropping unaligned write to MSI-X table page at %" PRIx64 "\n",
> +                address);
> +        return X86EMUL_OKAY;
> +    }
> +
> +    if ( fixmap_idx == ADJACENT_DISCARD_WRITE )
> +        return X86EMUL_OKAY;
> +
> +    hwaddr = fix_to_virt(fixmap_idx) + PAGE_OFFSET(address);
> +
> +    switch ( len ) {

Nit: Style (brace placement).

> @@ -220,16 +364,27 @@ static int cf_check msixtbl_read(
>      unsigned long offset;
>      struct msixtbl_entry *entry;
>      unsigned int nr_entry, index;
> +    unsigned int adjacent_fixmap;
>      int r = X86EMUL_UNHANDLEABLE;
>  
> -    if ( (len != 4 && len != 8) || (address & (len - 1)) )
> +    if ( !IS_ALIGNED(address, len) )
>          return r;
>  
>      rcu_read_lock(&msixtbl_rcu_lock);
> -
> -    entry = msixtbl_find_entry(current, address);
> +    entry = msixtbl_find_entry(current, address, true);
>      if ( !entry )
>          goto out;
> +
> +    adjacent_fixmap = adjacent_handle(entry, address, false);
> +    if ( adjacent_fixmap != ADJACENT_DONT_HANDLE )
> +    {
> +        r = adjacent_read(adjacent_fixmap, address, len, pval);
> +        goto out;
> +    }

adjacent_handle() may return ADJACENT_DONT_HANDLE for reasons other than
the address not being in the expected ranges, so you can't blindly fall
through to the original code just yet (same in functions further down).

> @@ -375,14 +543,23 @@ static bool cf_check msixtbl_range(
>  {
>      struct vcpu *curr = current;
>      unsigned long addr = r->addr;
> -    const struct msi_desc *desc;
> +    struct msixtbl_entry *entry;

const?

> +    const struct msi_desc *desc = NULL;
> +    unsigned int adjacent_fixmap;
> +
>  

Nit: Please don't introduce double blank lines.

> --- a/xen/arch/x86/include/asm/msi.h
> +++ b/xen/arch/x86/include/asm/msi.h
> @@ -207,6 +207,10 @@ struct msg_address {
>                                         PCI_MSIX_ENTRY_SIZE + \
>                                         (~PCI_MSIX_BIRMASK & (PAGE_SIZE - 1)))
>  
> +/* indexes in adj_access_table_idx[] below */
> +#define ADJ_IDX_FIRST 0
> +#define ADJ_IDX_LAST  1
> +
>  struct arch_msix {
>      unsigned int nr_entries, used_entries;
>      struct {
> @@ -214,6 +218,7 @@ struct arch_msix {
>      } table, pba;
>      int table_refcnt[MAX_MSIX_TABLE_PAGES];
>      int table_idx[MAX_MSIX_TABLE_PAGES];
> +    int adj_access_table_idx[2];

adjacent_handle() returns unsigned int - why is it plain int here?

Also, to keep variable name length somewhat limited, I don't think "table"
adds much value to the name here.

> --- a/xen/arch/x86/msi.c
> +++ b/xen/arch/x86/msi.c
> @@ -961,6 +961,34 @@ static int msix_capability_init(struct pci_dev *dev,
>                  domain_crash(d);
>              /* XXX How to deal with existing mappings? */
>          }
> +
> +        /*
> +         * If the MSI-X table doesn't start at the page boundary, map the first page for
> +         * passthrough accesses.
> +         */
> +        if ( PAGE_OFFSET(table_paddr) )
> +        {
> +            int idx = msix_get_fixmap(msix, table_paddr, table_paddr);
> +
> +            if ( idx > 0 )
> +                msix->adj_access_table_idx[ADJ_IDX_FIRST] = idx;
> +            else
> +                gprintk(XENLOG_ERR, "Failed to map first MSI-X table page: %d\n", idx);
> +        }
> +        /*
> +         * If the MSI-X table doesn't span full page(s), map the last page for
> +         * passthrough accesses.
> +         */
> +        if ( PAGE_OFFSET(table_paddr + msix->nr_entries * PCI_MSIX_ENTRY_SIZE) )
> +        {
> +            uint64_t entry_paddr = table_paddr + msix->nr_entries * PCI_MSIX_ENTRY_SIZE;
> +            int idx = msix_get_fixmap(msix, table_paddr, entry_paddr);
> +
> +            if ( idx > 0 )
> +                msix->adj_access_table_idx[ADJ_IDX_LAST] = idx;
> +            else
> +                gprintk(XENLOG_ERR, "Failed to map last MSI-X table page: %d\n", idx);
> +        }

Wouldn't this better be delayed until / restricted to the device actually
being prepared for guest use? Which may be as simple as making all of this
an "else" to the earlier if()?

Also I think the 2nd comment is somewhat misleading - you don't really mean
"spans full pages" but, along the lines of the first one, "ends on a page
boundary".

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 14:01:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 14:01:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525405.816575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwkn-0004nQ-CL; Mon, 24 Apr 2023 14:01:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525405.816575; Mon, 24 Apr 2023 14:01:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqwkn-0004nJ-92; Mon, 24 Apr 2023 14:01:17 +0000
Received: by outflank-mailman (input) for mailman id 525405;
 Mon, 24 Apr 2023 14:01:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pqwkl-0004nC-Th
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 14:01:16 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2060c.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::60c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 77db332c-e2a8-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 16:01:11 +0200 (CEST)
Received: from DB6PR0201CA0018.eurprd02.prod.outlook.com (2603:10a6:4:3f::28)
 by DB9PR08MB6506.eurprd08.prod.outlook.com (2603:10a6:10:263::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 14:01:02 +0000
Received: from DBAEUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:3f:cafe::ef) by DB6PR0201CA0018.outlook.office365.com
 (2603:10a6:4:3f::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 14:01:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT050.mail.protection.outlook.com (100.127.142.250) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 14:01:02 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Mon, 24 Apr 2023 14:01:01 +0000
Received: from bab51ad44cc3.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 BD53452C-E9C1-4777-9BD4-7927851D3AF9.1; 
 Mon, 24 Apr 2023 14:00:50 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bab51ad44cc3.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 24 Apr 2023 14:00:50 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DBBPR08MB6041.eurprd08.prod.outlook.com (2603:10a6:10:206::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 14:00:48 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 14:00:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Thread-Index: AQHZdnKG11PjGcmQuE2MpJrElBiS3686VJGAgAAo1IA=
Date: Mon, 24 Apr 2023 14:00:48 +0000
Message-ID: <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
In-Reply-To: <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
x-mailer: Apple Mail (2.3731.500.231)
Content-Type: text/plain; charset="utf-8"
Content-ID: <C51CB9FF01CA7C42BE48F1BD0A9368CB@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6041
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	38f82446-425b-4675-a91d-08db44cc4e9b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	k2VP5EpSe5MzDIh/4/lg3QgYkkHf3qa1Q/JL4afoPQG0DnwQ4v8jlOcA6e0dkoDp/rPeLUEi7Ghb4gv6ueS1dsnUsgIL38efhpzS9bM3tZoDI/ERLcGGnjWHBAUwILLRUnMv4n6YtEDML1wHi8R//wWnyKoK9KRD+2bxt0yjD/VQBgs7kE+sKSsOjLNLbQS7F27meMJD8XBri0hGYiH3VHozppP6t19O65/EKzUJLX6bm4D5TkM1lgQ/A3XFP367SfDYjWEtnXI2s5HfHJvxbnx6OEHHJAlM1yPRwdTK+jMZ/Ie4Gw6kqWm20nHeG1+8iJdwer/IpLUxWJObPxlYpcqjoItzxy16D8NWfZmvCjsM06YuE1E7pp635WZI6UBGys55o165fr63SozTL3V7e6lPe+73y+BZZ12sIqw0vxwa4/zVIIDS11ujXD2YAoziiWScrsKsIto2Eo7xtkMq+9hWT5XYVjyKGAQuGz4JzmNNdX2gzECr4KuEDWBX5ga0SZsaq5jOnL3u2GNbt/Q5fXpNjDklr+ELoNUq8Ov5ckHLGTBDM4/BGs4uFYWVPIQbRL1WJzsv/jX+5ptBhqh/iWc/MIXW998tv5NG5kNXyFhcYRxBJ2aG00CzwZLiXema14qspvU3FoTLCjC0tcYWs/LnfvIhZ/N7qp47PSfIh4u677f3ypQyp8HO64HHLzVDylfresNzVbdLgFGSyqAWBA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(376002)(346002)(396003)(451199021)(46966006)(36840700001)(40470700004)(40460700003)(2906002)(70206006)(70586007)(316002)(4326008)(6862004)(8676002)(8936002)(5660300002)(41300700001)(33656002)(82310400005)(36756003)(86362001)(40480700001)(356005)(6512007)(26005)(186003)(81166007)(53546011)(478600001)(6486002)(36860700001)(83380400001)(47076005)(2616005)(336012)(6506007)(82740400003)(54906003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 14:01:02.0117
 (UTC)

Hi Jan,

> On 24 Apr 2023, at 12:34, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 24.04.2023 08:02, Luca Fancellu wrote:
>> @@ -30,9 +37,11 @@ int sve_context_init(struct vcpu *v);
>> void sve_context_free(struct vcpu *v);
>> void sve_save_state(struct vcpu *v);
>> void sve_restore_state(struct vcpu *v);
>> +bool sve_domctl_vl_param(int val, unsigned int *out);
>> 
>> #else /* !CONFIG_ARM64_SVE */
>> 
>> +#define opt_dom0_sve     (0)
>> #define is_sve_domain(d) (0)
>> 
>> static inline register_t compute_max_zcr(void)
>> @@ -59,6 +68,11 @@ static inline void sve_context_free(struct vcpu *v) {}
>> static inline void sve_save_state(struct vcpu *v) {}
>> static inline void sve_restore_state(struct vcpu *v) {}
>> 
>> +static inline bool sve_domctl_vl_param(int val, unsigned int *out)
>> +{
>> +    return false;
>> +}
> 
> Once again I don't see the need for this stub: opt_dom0_sve is #define-d
> to plain zero when !ARM64_SVE, so the only call site merely requires a
> visible declaration, and DCE will take care of eliminating the actual call.

I tried to do that: I put the declaration outside the ifdef so that it was always visible,
and I removed the stub, but I got compilation errors because of the undefined function.
For that reason I left that change out.

> 
>> --- a/xen/common/kernel.c
>> +++ b/xen/common/kernel.c
>> @@ -314,6 +314,31 @@ int parse_boolean(const char *name, const char *s, const char *e)
>>     return -1;
>> }
>> 
>> +int __init parse_signed_integer(const char *name, const char *s, const char *e,
>> +                                long long *val)
>> +{
>> +    size_t slen, nlen;
>> +    const char *str;
>> +    long long pval;
>> +
>> +    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
> 
> As per this "e" may come in as NULL, meaning that ...
> 
>> +    nlen = strlen(name);
>> +
>> +    /* Check that this is the name we're looking for and a value was provided */
>> +    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
>> +        return -1;
>> +
>> +    pval = simple_strtoll(&s[nlen + 1], &str, 0);
>> +
>> +    /* Number not recognised */
>> +    if ( str != e )
>> +        return -2;
> 
> ... this is always going to lead to failure in that case. (I guess I could
> have spotted this earlier, sorry.)
> 
> As a nit, I'd also appreciate if style here (parenthesization in particular)
> could match that of parse_boolean(), which doesn't put parentheses around
> the operands of comparison operators (a few lines up from here). With the
> other function in mind, I'm then not going to pick on the seemingly
> redundant (with the subsequent strncmp()) "slen <= nlen", which has an
> equivalent there as well.

You are right. Do you think this will be OK:

diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index b67d9056fec3..7cd00a4c999a 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -324,11 +324,14 @@ int __init parse_signed_integer(const char *name, const char *s, const char *e,
     slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
     nlen = strlen(name);
 
+    if ( !e )
+        e = s + slen;
+
     /* Check that this is the name we're looking for and a value was provided */
-    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
+    if ( slen <= nlen || strncmp(s, name, nlen) || s[nlen] != '=' )
         return -1;
 
-    pval = simple_strtoll(&s[nlen + 1], &str, 0);
+    pval = simple_strtoll(&s[nlen + 1], &str, 10);
 
     /* Number not recognised */
     if ( str != e )


Please note that I’ve also included your comment about the base, which I forgot to add; apologies for that.

slen <= nlen doesn’t seem redundant to me: I have it because I’m accessing s[nlen] and I would like
the string s to be longer than nlen.

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 14:06:01 2023
Message-ID: <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
Date: Mon, 24 Apr 2023 16:05:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
 <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 24.04.2023 16:00, Luca Fancellu wrote:
>> On 24 Apr 2023, at 12:34, Jan Beulich <jbeulich@suse.com> wrote:
>> On 24.04.2023 08:02, Luca Fancellu wrote:
>>> @@ -30,9 +37,11 @@ int sve_context_init(struct vcpu *v);
>>> void sve_context_free(struct vcpu *v);
>>> void sve_save_state(struct vcpu *v);
>>> void sve_restore_state(struct vcpu *v);
>>> +bool sve_domctl_vl_param(int val, unsigned int *out);
>>>
>>> #else /* !CONFIG_ARM64_SVE */
>>>
>>> +#define opt_dom0_sve     (0)
>>> #define is_sve_domain(d) (0)
>>>
>>> static inline register_t compute_max_zcr(void)
>>> @@ -59,6 +68,11 @@ static inline void sve_context_free(struct vcpu *v) {}
>>> static inline void sve_save_state(struct vcpu *v) {}
>>> static inline void sve_restore_state(struct vcpu *v) {}
>>>
>>> +static inline bool sve_domctl_vl_param(int val, unsigned int *out)
>>> +{
>>> +    return false;
>>> +}
>>
>> Once again I don't see the need for this stub: opt_dom0_sve is #define-d
>> to plain zero when !ARM64_SVE, so the only call site merely requires a
>> visible declaration, and DCE will take care of eliminating the actual call.
> 
> I tried to do that: I put the declaration outside the ifdef so that it was always visible,
> and I removed the stub, but I got compilation errors because of the undefined function.
> For that reason I left that change out.

Interesting. I don't see where the reference would be coming from.

>>> --- a/xen/common/kernel.c
>>> +++ b/xen/common/kernel.c
>>> @@ -314,6 +314,31 @@ int parse_boolean(const char *name, const char *s, const char *e)
>>>     return -1;
>>> }
>>>
>>> +int __init parse_signed_integer(const char *name, const char *s, const char *e,
>>> +                                long long *val)
>>> +{
>>> +    size_t slen, nlen;
>>> +    const char *str;
>>> +    long long pval;
>>> +
>>> +    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
>>
>> As per this "e" may come in as NULL, meaning that ...
>>
>>> +    nlen = strlen(name);
>>> +
>>> +    /* Check that this is the name we're looking for and a value was provided */
>>> +    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
>>> +        return -1;
>>> +
>>> +    pval = simple_strtoll(&s[nlen + 1], &str, 0);
>>> +
>>> +    /* Number not recognised */
>>> +    if ( str != e )
>>> +        return -2;
>>
>> ... this is always going to lead to failure in that case. (I guess I could
>> have spotted this earlier, sorry.)
>>
>> As a nit, I'd also appreciate if style here (parenthesization in particular)
>> could match that of parse_boolean(), which doesn't put parentheses around
>> the operands of comparison operators (a few lines up from here). With the
>> other function in mind, I'm then not going to pick on the seemingly
>> redundant (with the subsequent strncmp()) "slen <= nlen", which has an
>> equivalent there as well.
> 
> You are right. Do you think this will be OK:

It'll do, I guess.

> --- a/xen/common/kernel.c
> +++ b/xen/common/kernel.c
> @@ -324,11 +324,14 @@ int __init parse_signed_integer(const char *name, const char *s, const char *e,
>      slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
>      nlen = strlen(name);
>  
> +    if ( !e )
> +        e = s + slen;
> +
>      /* Check that this is the name we're looking for and a value was provided */
> -    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
> +    if ( slen <= nlen || strncmp(s, name, nlen) || s[nlen] != '=' )
>          return -1;
>  
> -    pval = simple_strtoll(&s[nlen + 1], &str, 0);
> +    pval = simple_strtoll(&s[nlen + 1], &str, 10);
>  
>      /* Number not recognised */
>      if ( str != e )
> 
> 
> Please note that I’ve also included your comment about the base, which I forgot to add; apologies for that.
> 
> slen <= nlen doesn’t seem redundant to me: I have it because I’m accessing s[nlen] and I would like
> the string s to be longer than nlen.

Right, but doesn't strncmp() guarantee that already?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 14:09:53 2023
Message-ID: <2f20d5c0-d21f-ad1d-290c-631230e1f323@suse.com>
Date: Mon, 24 Apr 2023 16:09:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] xen: add support for crash dump analysis with xen.efi
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230421135933.23353-1-jgross@suse.com>
 <45bdf36f-93c5-9f7c-a028-0a9443f85013@suse.com>
 <e3a900e6-ef5e-9ac2-19fc-c276dc31c487@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e3a900e6-ef5e-9ac2-19fc-c276dc31c487@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0094.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a1::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8614:EE_
X-MS-Office365-Filtering-Correlation-Id: ecc4abdc-c84d-4c98-9965-08db44cd8a81
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/draUdBRZLRQz8BwqrpZmOAGgAcLttDjHoM7jPWkQiOoJQLx+ooKoChy2MaC8ViWIWz0o2NSCldAdAygL4oC80RpuM2FZqJ1l4+TdQwVplW1xvdh72uP8LDkrLARIimZnb77+e5BHlANsFclYa54BozLb7TayJXFAt7t1Bdk0xnKZ5v+MSbYFFwC2mZcJOPH0z4XzBqx9ifXOe1aua2O+nj56zv7B4eCZAhZrLK3QN+E+1GaLs2nEjkOUw/czZK3ufZ5uv0KwRKdXcReIyCMGY/zroXjck+aFBtXQP9abxcpFbypRxFjPuChpfoOHUVEyTKPCNWP72MXQJolE/LPWU+YDxNnSKOHMrEApDnEjM/ZQEQ3DpZDMgqoUmpC/VME1TD717Y9L0vJSBpTS8jDs1wpW1eKzKZcBMybY4TIP1MsHgptM5iJGvNUbiDlbHMjvxko2WIURu9tzZa+auJr1mfSTQDBB7nlUVXIAgVeQAJbbvWHuf6Gd/jSGsbCmfGPDFh+4qod8LkFuEWZaDU+v9jZHTHQPhrVZHhGS7bVpFo2x/5iXiTwy+n3bNo1DQ0LUUsG/pBYywqrq1h5bWgaPOtA05Dz7RglIPZozfd1UIk5yWq1i59O3KuE7r+bGPYW5445GXJ5Kb26uIxW+i4rTg==
X-Forefront-Antispam-Report:
X-OriginatorOrg: suse.com

On 24.04.2023 15:44, Juergen Gross wrote:
> On 24.04.23 14:01, Jan Beulich wrote:
>> Finally - do you really need to copy all the non-debug sections as well?
>> Might --only-keep-debug be helpful here (provided it works for a PE/COFF
>> -> ELF copy operation; it looks to do so in my simple experiments)?
> 
> No, that doesn't work (objcopy itself succeeds, but crash cannot use the resulting file):
> 
>    crash: xen.efi.elf: no text and data contents

Oh, wow. What use do they have for the .text / .data contents? I very much
hope they use the in-memory Xen image for analysis, not what they may read
out of the file on disk.

In any event, please add half a sentence to the description to mention
this aspect.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 14:12:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 14:12:27 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180391-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180391: tolerable FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 Apr 2023 14:12:13 +0000

flight 180391 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180391/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale  13 debian-fixup     fail in 180381 pass in 180391
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 180381 pass in 180391
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 180381 pass in 180391
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180381
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install    fail pass in 180381

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 180381 like 180367
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 180381 like 180377
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180381
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180381
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180381
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180381
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180381
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180381
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180381
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180381
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180381
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180381
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180381
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
baseline version:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51

Last test of basis   180391  2023-04-24 01:53:48 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 14:15:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 14:15:55 +0000
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Kevin Tian <kevin.tian@intel.com>, Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Thread-Topic: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Date: Mon, 24 Apr 2023 14:15:27 +0000
Message-ID: <87h6t574e9.fsf@epam.com>
References: <ZBNA9q5DXJYG3KVp@Air-de-Roger> <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger> <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger> <87leivw8qp.fsf@epam.com>
 <ZD0cyXLt1knXyUzA@Air-de-Roger>
 <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>
 <ZD0krtCOrEwiKMFP@Air-de-Roger> <87354t8pqg.fsf@epam.com>
 <ZEKLN8AlzDUckorU@Air-de-Roger> <87o7nh727c.fsf@epam.com>
 <bb71c3f3-20a7-b020-6685-879bd4e5786d@suse.com>
In-Reply-To: <bb71c3f3-20a7-b020-6685-879bd4e5786d@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: mu4e 1.8.9; emacs 28.2
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: VI1PR03MB3710:EE_|AS8PR03MB6790:EE_
x-ms-office365-filtering-correlation-id: 888fb90e-94b2-4311-020c-08db44ce5abf
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <BD6D32B3D8FF9C499F7D85CE4F1C6B5E@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB3710.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 888fb90e-94b2-4311-020c-08db44ce5abf
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 Apr 2023 14:15:27.9065
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR03MB6790
X-Proofpoint-GUID: hE1raLViRgYM09NPYzPKOD0ujMZcs_wu
X-Proofpoint-ORIG-GUID: hE1raLViRgYM09NPYzPKOD0ujMZcs_wu

Hi Jan,

Jan Beulich <jbeulich@suse.com> writes:

> On 21.04.2023 16:13, Volodymyr Babchuk wrote:
>>
>> Hi Roger,
>>
>> Roger Pau Monné <roger.pau@citrix.com> writes:
>>
>>> On Fri, Apr 21, 2023 at 11:00:23AM +0000, Volodymyr Babchuk wrote:
>>>>
>>>> Hello Roger,
>>>>
>>>> Roger Pau Monné <roger.pau@citrix.com> writes:
>>>>
>>>>> On Mon, Apr 17, 2023 at 12:34:31PM +0200, Jan Beulich wrote:
>>>>>> On 17.04.2023 12:17, Roger Pau Monné wrote:
>>>>>>> On Fri, Apr 14, 2023 at 01:30:39AM +0000, Volodymyr Babchuk wrote:
>>>>>>>> Above I have proposed another view on this. I hope, it will work for
>>>>>>>> you. Just to reiterate, idea is to allow "harmless" refcounts to be left
>>>>>>>> after returning from pci_remove_device(). By "harmless" I mean that
>>>>>>>> owners of those refcounts will not try to access the physical PCI
>>>>>>>> device if pci_remove_device() is already finished.
>>>>>>>
>>>>>>> I'm not strictly a maintainer of this piece code, albeit I have an
>>>>>>> opinion.  I will like to also hear Jans opinion, since he is the
>>>>>>> maintainer.
>>>>>>
>>>>>> I'm afraid I can't really appreciate the term "harmless refcounts". Whoever
>>>>>> holds a ref is entitled to access the device. As stated before, I see only
>>>>>> two ways of getting things consistent: Either pci_remove_device() is
>>>>>> invoked upon dropping of the last ref,
>>>>>
>>>>> With this approach, what would be the implementation of
>>>>> PHYSDEVOP_manage_pci_remove?  Would it just check whether the pdev
>>>>> exist and either return 0 or -EBUSY?
>>>>>
>>>>
>>>> Okay, I am preparing patches with the behavior you proposed. To test it,
>>>> I artificially set refcount to 2 and indeed PHYSDEVOP_manage_pci_remove
>>>> returned -EBUSY, which propagated to the linux driver. Problem is that
>>>> Linux driver can't do anything with this. It just displayed an error
>>>> message and removed device anyways. This is because Linux sends
>>>> PHYSDEVOP_manage_pci_remove in device_remove() call path and there is no
>>>> way to prevent the device removal. So, admin is not capable to try this
>>>> again.
>>>
>>> Ideally Linux won't remove the device, and then the admin would get to
>>> retry.  Maybe the way the Linux hook is placed is not the best one?
>>> The hypervisor should be authoritative on whether a device can be
>>> removed or not, and hence PHYSDEVOP_manage_pci_remove returning an
>>> error (EBUSY or otherwise) shouldn't allow the device unplug in Linux
>>> to continue.
>>
>> Yes, it would be ideally, but Linux driver/device model is written in a
>> such way, that PCI subsystem tracks all the PCI device usage, so it can
>> be certain that it can remove the device. Thus, functions in the device
>> removal path either return void or 0. Of course, kernel does not know that
>> hypervisor has additional uses for the device, so there is no mechanisms
>> to prevent removal.
>
> Could pciback obtain a reference on behalf of the hypervisor, dropping it
> when device removal is requested (i.e. much closer to the start of that
> operation), and only if it finds that no guests use the device anymore?

Yes, it can, and it would indeed hold a reference to the PCI device for a
time, but there are some considerations that made this approach not
feasible.

Basically, when a user writes to /sys/bus/pci/SBDF/remove, the following
happens:

1. The /sys/bus/pci/SBDF/remove entry is removed - we can't retry the
operation anymore.

[unimportant things]

N. The pci_stop_dev() function is called. This function unloads the device
driver. Any well-behaved driver should drop all additional references to
the device at this point.

[more unimportant things]

M. The PCI subsystem drops its own reference to the generic device object.

So, as you can see, the admin can't restart a "failed" attempt to remove a
PCI device.

On the other hand, the remove() function can sleep. This allows us to pause
the removal process a bit and check whether the hypervisor has finished
removing the PCI device on its side. But, as you pointed out, this can take
weeks...

-- 
WBR, Volodymyr
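Jan's first consistency option for the pdev refcounting discussed in this
thread (the device teardown running only when the last reference is dropped)
can be sketched as a plain get/put pair with a release hook. This is a
minimal standalone sketch with hypothetical names, not Xen's actual
struct pci_dev interface:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical type for illustration only; not Xen's real pdev. */
struct pdev {
    unsigned int refcnt;
    bool removed;            /* set once the release path has run */
};

/* In the last-ref scheme, the actual teardown work (what
 * pci_remove_device() does today) would run here, on the final put. */
static void pdev_release(struct pdev *p)
{
    p->removed = true;
}

static void pdev_get(struct pdev *p)
{
    ++p->refcnt;
}

static void pdev_put(struct pdev *p)
{
    assert(p->refcnt > 0);
    if ( --p->refcnt == 0 )
        pdev_release(p);
}
```

Real code would of course need atomic counting and locking around the final
put; the sketch only shows the ownership model being debated.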


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 14:19:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 14:19:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525436.816627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqx24-0000XN-Vz; Mon, 24 Apr 2023 14:19:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525436.816627; Mon, 24 Apr 2023 14:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqx24-0000XG-T7; Mon, 24 Apr 2023 14:19:08 +0000
Received: by outflank-mailman (input) for mailman id 525436;
 Mon, 24 Apr 2023 14:19:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqx23-0000Wl-Ln
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 14:19:07 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2061a.outbound.protection.outlook.com
 [2a01:111:f400:7d00::61a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f8029665-e2aa-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 16:19:05 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6978.eurprd04.prod.outlook.com (2603:10a6:208:17d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 14:19:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 14:19:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8029665-e2aa-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GXmh0/GEMJx7IhIBIidUhaoHkRCOKNhG/01qd8ezWcZINbAcKroMz6Q8Vg+PKi3r0UzJkhjpeuXPGsU2Z2YnEm1iG+IPZekaU5DQguRgvdSANaGN1gW3nyyd+uP/svZa1fWDT+nEGukGgWmxLYtthhhbTvznvci/JgYLgkWxyOWOj631QL0wmOltjaCFoC3604aCj+5CXuox3PpikvXrtpuGaOYZJ1f7ROqn9y/Ne/FqBhhea+PE0hFKOeX0IVnw/2tJVRoWm99On3Ko5cOcGWbqSRykRoZbuq8ykyFEjFpIdPDGlZZZGLKBjDwrJzVE+kQjvAREHQuYpdNCHEEbfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9rnbx5yThSfBBNczVjIzAkUl722Qr8yiBHpP9VeIVII=;
 b=V5Rcy0EY4Rzf1TngI3On7gPkHxyG2mI1KalkQeH1YeQBHfddbHxZYd4JhfnKgjaZfXkpMFdXFAO2zSuJOdER9x3nqk4S9vBbpkNoJRcD/as6ynGKeccQQCKMsABRSwbl+rRlQeW2gsMTroWZvIylfoE8NZYfRi0VAh06zQo7Bgcpoalt6aWjqGbKvWjXbRD203ZzH/VALfMu6MaDGWU+D56N8BMeJ312EuC8rXtsUMRkzrFEXlVgZT9dl8lL7Yd7OkTLokTtgOFcNKr2PdZIrNVWNI7yyyz/kx/XeVQvnXUx4Un84Nxx58tYnSE65ySvghzuPFyVwSCuQEjVn8YBSQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9rnbx5yThSfBBNczVjIzAkUl722Qr8yiBHpP9VeIVII=;
 b=dUH33Od6t8Nssy9pvoItr3AFbNJoAg8+TVg3HErDzfdckni6EMo18NKWewpe3vZjo2yI92TPLZQSRsaA5naK06P0h1jwgwOkIcclkTTtZbpDjKF4UkxCwae7I2Ql/icx/NZhWcrKQHL11zZxKSF67rFbZTq7Nj3ue7RWYdbsr0ADS0C79NWTwO5/6DdXFvdYe1La+G6ZNxpSfDL3qDJXd1HDA/53wxlHvmzq9XhEicur3KpJOeikz69d3yuzjMltSO3JkLeCgHvQWtSsZqycihSGlEZQ4pukp/f2moxQcN7NdKr6ZBkn871+3765nQKCuZzw6eci640QzV+eyu/DgQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4eb45940-5615-2398-633d-e5f59dc6987d@suse.com>
Date: Mon, 24 Apr 2023 16:19:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 4/4] x86/msi: clear initial MSI-X state on boot
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Jason Andryuk <jandryuk@gmail.com>
Cc: Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
 <6984a8571dac35d04c85117834d99b00fe1c4184.1680752649.git-series.marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6984a8571dac35d04c85117834d99b00fe1c4184.1680752649.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0192.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ab::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6978:EE_
X-MS-Office365-Filtering-Correlation-Id: 7ea03319-1a8c-4db1-c062-08db44cedad9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7ea03319-1a8c-4db1-c062-08db44cedad9
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 14:19:03.0152
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Dc79pOtpCFvXs9JuI0p7UxbsKYZq51E1EfkDOoaCRiC2z5cmUkwUXrENXWvAKnFWrYmHxQp7eeh7JSlMm4A+dQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6978

On 06.04.2023 05:57, Marek Marczykowski-Górecki wrote:
> Some firmware/devices are found to not reset MSI-X properly, leaving
> MASKALL set. Jason reports on his machine MASKALL persists through a
> warm reboot, but is cleared on cold boot. Xen relies on initial state
> being MASKALL clear. Especially, pci_reset_msix_state() assumes if
> MASKALL is set, it was Xen setting it due to msix->host_maskall or
> msix->guest_maskall. Clearing just MASKALL might be unsafe if ENABLE is
> set, so clear them both.
> 
> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit with a couple of nits (which I'd be happy to address while
committing, so long as you agree). First one being on the last
sentence above: It's surely not just "might"; if resetting already
doesn't work right, nothing says that the individual mask bits all
end up set. Clearing ENABLE as well is only natural imo, if we
already need to fix up after firmware. So maybe "Even if so far not
observed to be left set, clear ENABLE as well"?

> --- a/xen/drivers/passthrough/msi.c
> +++ b/xen/drivers/passthrough/msi.c
> @@ -46,6 +46,23 @@ int pdev_msi_init(struct pci_dev *pdev)
>          spin_lock_init(&msix->table_lock);
>  
>          ctrl = pci_conf_read16(pdev->sbdf, msix_control_reg(pos));
> +
> +        if ( ctrl & (PCI_MSIX_FLAGS_MASKALL|PCI_MSIX_FLAGS_ENABLE) )

Style (missing blanks around |; once more below).

> +        {
> +            /*
> +             * pci_reset_msix_state() relies on MASKALL not being set
> +             * initially, clear it (and ENABLE too - for safety), to meet that
> +             * expectation.
> +             */
> +            printk(XENLOG_WARNING
> +                   "%pp: unexpected initial MSI-X state (MASKALL=%d, ENABLE=%d), fixing\n",
> +                   &pdev->sbdf,
> +                   (ctrl & PCI_MSIX_FLAGS_MASKALL) ? 1 : 0,
> +                   (ctrl & PCI_MSIX_FLAGS_ENABLE) ? 1 : 0);

Our "canonical" way of dealing with this is !!(x & y).
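For illustration, a standalone sketch of that normalization (the constant
values here are made up for the example; the real PCI_MSIX_FLAGS_*
definitions live in Xen's MSI headers):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag values only, not taken from the real headers. */
#define MSIX_MASKALL 0x4000
#define MSIX_ENABLE  0x8000

/* !!(x & y) collapses any non-zero masked result to exactly 1, so the
 * result can be fed directly to a %d format specifier. */
static int bit_set(uint16_t ctrl, uint16_t flag)
{
    return !!(ctrl & flag);
}
```

The ternary form in the patch and !!(x & y) compute the same value; the
latter is just the shorter spelling preferred in the tree.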

> +            ctrl &= ~(PCI_MSIX_FLAGS_ENABLE|PCI_MSIX_FLAGS_MASKALL);
> +            pci_conf_write16(pdev->sbdf, msix_control_reg(pos), ctrl);
> +        }
> +
>          msix->nr_entries = msix_table_size(ctrl);
>  
>          pdev->msix = msix;


Aiui there's no dependency here on the earlier patches in the series;
please confirm (or otherwise).

Jason - any chance of getting a Tested-by: from you?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 14:27:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 14:27:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525441.816637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqx9r-00020R-ON; Mon, 24 Apr 2023 14:27:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525441.816637; Mon, 24 Apr 2023 14:27:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqx9r-00020K-Lk; Mon, 24 Apr 2023 14:27:11 +0000
Received: by outflank-mailman (input) for mailman id 525441;
 Mon, 24 Apr 2023 14:27:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ydgZ=AP=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pqx9q-00020E-Br
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 14:27:10 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 186166c6-e2ac-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 16:27:09 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B8D6E1FD87;
 Mon, 24 Apr 2023 14:27:08 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7601C1390E;
 Mon, 24 Apr 2023 14:27:08 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id m4J+G7yRRmR9PwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 24 Apr 2023 14:27:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 186166c6-e2ac-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682346428; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=v755W+aGz+A0y0POsF4k2I5EYZ+j3vq34CdnqNBgV4g=;
	b=mi7P5vOCp/NNOa0HcdX+42Rp8g8vgzMZwYbMawC0iyewTjNyXe7MOtkebc/NIeEtoSNqg8
	BUNzQuUpAOiB3EwufsCZck1qgpUMUeOc5keNBHni38PCFqp/MiesGWmK60V3HmfhwbchA+
	5xgN9DimJgkcLwZGwu5vyEYXDwK3UFc=
Message-ID: <9e97bf0b-0a98-38ea-9705-1277175e50fc@suse.com>
Date: Mon, 24 Apr 2023 16:27:08 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] xen: add support for crash dump analysis with xen.efi
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230421135933.23353-1-jgross@suse.com>
 <45bdf36f-93c5-9f7c-a028-0a9443f85013@suse.com>
 <e3a900e6-ef5e-9ac2-19fc-c276dc31c487@suse.com>
 <2f20d5c0-d21f-ad1d-290c-631230e1f323@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <2f20d5c0-d21f-ad1d-290c-631230e1f323@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------I3nE0VjNruNiKltxviS0voWI"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------I3nE0VjNruNiKltxviS0voWI
Content-Type: multipart/mixed; boundary="------------pdkI3UMFwU5x0mVfPq1dFgWA";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
Message-ID: <9e97bf0b-0a98-38ea-9705-1277175e50fc@suse.com>
Subject: Re: [PATCH] xen: add support for crash dump analysis with xen.efi
References: <20230421135933.23353-1-jgross@suse.com>
 <45bdf36f-93c5-9f7c-a028-0a9443f85013@suse.com>
 <e3a900e6-ef5e-9ac2-19fc-c276dc31c487@suse.com>
 <2f20d5c0-d21f-ad1d-290c-631230e1f323@suse.com>
In-Reply-To: <2f20d5c0-d21f-ad1d-290c-631230e1f323@suse.com>

--------------pdkI3UMFwU5x0mVfPq1dFgWA
Content-Type: multipart/mixed; boundary="------------t1LjkSM0rpaQ5DtANubeIHNd"

--------------t1LjkSM0rpaQ5DtANubeIHNd
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.04.23 16:09, Jan Beulich wrote:
> On 24.04.2023 15:44, Juergen Gross wrote:
>> On 24.04.23 14:01, Jan Beulich wrote:
>>> Finally - do you really need to copy all the non-debug sections as well?
>>> Might --only-keep-debug be helpful here (provided it works for a PE/COFF
>>> -> ELF copy operation; it looks to do so in my simple experiments)?
>>
>> No, doesn't work (objcopy does, but not crash with that file):
>>
>>     crash: xen.efi.elf: no text and data contents
>
> Oh, wow. What use do they have for the .text / .data contents? I very much
> hope they use the in-memory Xen image for analysis, not what they may read
> out of the image.

I think crash is fine in this regard. At least __start_xen() seems to be all
zeroes in my test dump, which is fine for an .init function after the system
is up and running. :-)

> In any event, please add half a sentence to the description to mention
> this aspect.

Okay.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 14:28:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 14:28:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <b0f574f1-5696-636d-6409-170a6b193f86@suse.com>
Date: Mon, 24 Apr 2023 16:27:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev
Content-Language: en-US
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Kevin Tian <kevin.tian@intel.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <ZBNA9q5DXJYG3KVp@Air-de-Roger> <873556xa0g.fsf@epam.com>
 <ZDZ2S4OxP2e12oSX@Air-de-Roger> <87v8i0wyv0.fsf@epam.com>
 <ZDgZEZIG89oW6rEw@Air-de-Roger> <87leivw8qp.fsf@epam.com>
 <ZD0cyXLt1knXyUzA@Air-de-Roger>
 <963624f1-a36a-5d48-c34f-552d9d6c4950@suse.com>
 <ZD0krtCOrEwiKMFP@Air-de-Roger> <87354t8pqg.fsf@epam.com>
 <ZEKLN8AlzDUckorU@Air-de-Roger> <87o7nh727c.fsf@epam.com>
 <bb71c3f3-20a7-b020-6685-879bd4e5786d@suse.com> <87h6t574e9.fsf@epam.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <87h6t574e9.fsf@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 24.04.2023 16:15, Volodymyr Babchuk wrote:
> 
> Hi Jan,
> 
> Jan Beulich <jbeulich@suse.com> writes:
> 
>> On 21.04.2023 16:13, Volodymyr Babchuk wrote:
>>>
>>> Hi Roger,
>>>
>>> Roger Pau Monné <roger.pau@citrix.com> writes:
>>>
>>>> On Fri, Apr 21, 2023 at 11:00:23AM +0000, Volodymyr Babchuk wrote:
>>>>>
>>>>> Hello Roger,
>>>>>
>>>>> Roger Pau Monné <roger.pau@citrix.com> writes:
>>>>>
>>>>>> On Mon, Apr 17, 2023 at 12:34:31PM +0200, Jan Beulich wrote:
>>>>>>> On 17.04.2023 12:17, Roger Pau Monné wrote:
>>>>>>>> On Fri, Apr 14, 2023 at 01:30:39AM +0000, Volodymyr Babchuk wrote:
>>>>>>>>> Above I have proposed another view on this. I hope it will work for
>>>>>>>>> you. Just to reiterate, the idea is to allow "harmless" refcounts to be
>>>>>>>>> left after returning from pci_remove_device(). By "harmless" I mean that
>>>>>>>>> the owners of those refcounts will not try to access the physical PCI
>>>>>>>>> device once pci_remove_device() has finished.
>>>>>>>>
>>>>>>>> I'm not strictly a maintainer of this piece of code, albeit I have an
>>>>>>>> opinion.  I would also like to hear Jan's opinion, since he is the
>>>>>>>> maintainer.
>>>>>>>
>>>>>>> I'm afraid I can't really appreciate the term "harmless refcounts". Whoever
>>>>>>> holds a ref is entitled to access the device. As stated before, I see only
>>>>>>> two ways of getting things consistent: Either pci_remove_device() is
>>>>>>> invoked upon dropping of the last ref,
>>>>>>
>>>>>> With this approach, what would be the implementation of
>>>>>> PHYSDEVOP_manage_pci_remove?  Would it just check whether the pdev
>>>>>> exist and either return 0 or -EBUSY?
>>>>>>
>>>>>
>>>>> Okay, I am preparing patches with the behavior you proposed. To test it,
>>>>> I artificially set the refcount to 2, and indeed PHYSDEVOP_manage_pci_remove
>>>>> returned -EBUSY, which propagated to the Linux driver. The problem is that
>>>>> the Linux driver can't do anything with this. It just displayed an error
>>>>> message and removed the device anyway. This is because Linux sends
>>>>> PHYSDEVOP_manage_pci_remove in the device_remove() call path and there is
>>>>> no way to prevent the device removal. So, the admin is not able to try
>>>>> this again.
>>>>
>>>> Ideally Linux won't remove the device, and then the admin would get to
>>>> retry.  Maybe the way the Linux hook is placed is not the best one?
>>>> The hypervisor should be authoritative on whether a device can be
>>>> removed or not, and hence PHYSDEVOP_manage_pci_remove returning an
>>>> error (EBUSY or otherwise) shouldn't allow the device unplug in Linux
>>>> to continue.
>>>
>>> Yes, that would be ideal, but the Linux driver/device model is written in
>>> such a way that the PCI subsystem tracks all PCI device usage, so it can
>>> be certain that it can remove the device. Thus, functions in the device
>>> removal path either return void or 0. Of course, the kernel does not know
>>> that the hypervisor has additional uses for the device, so there is no
>>> mechanism to prevent removal.
>>
>> Could pciback obtain a reference on behalf of the hypervisor, dropping it
>> when device removal is requested (i.e. much closer to the start of that
>> operation), and only if it finds that no guests use the device anymore?
> 
> Yes, it can, and this would indeed hold a reference to a PCI device for a
> time, but there are some considerations that made this approach not
> feasible.
> 
> Basically, when an user writes to /sys/bus/pci/SBDF/remove, the
> following happens:
> 
> 1. The /sys/bus/pci/SBDF/remove entry is removed - we can't retry the
> operation anymore

Looking at the comment ahead of pci_stop_and_remove_bus_device(),
isn't this too late already? The text there says "has been removed",
not e.g. "is about to be removed". (Of course chances are that it is
the comment which is wrong; I know too little about Linux's
hot-unplug machinery.)

Jan

> [unimportant things]
> 
> N. The pci_stop_dev() function is called. This function unloads the device
> driver. Any well-behaved driver should drop all additional references
> to the device at this point.
> 
> [more unimportant things]
> 
> M. The PCI subsystem drops its own reference to the generic device object
> 
> So, as you can see, the admin can't restart a "failed" attempt to remove a
> PCI device.
> 
> On the other hand, the remove() function can sleep. This allows us to pause
> the removal process a bit and check whether the hypervisor has finished
> removing the PCI device on its side. But, as you pointed out, this can
> take weeks...
> 
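[The retry semantics under discussion - removal refused with -EBUSY while
extra references remain, succeeding once the last one is dropped - can be
sketched in a few lines of shell. This is purely illustrative; none of the
names below are actual Xen or pciback code.]

```shell
# Illustrative sketch only, not Xen code: removal fails with EBUSY (16)
# while anyone still holds an extra reference, and succeeds once the
# last holder has dropped it.
EBUSY=16
refcount=2            # 1 = only the bookkeeping reference remains

try_remove() {
    if [ "$refcount" -gt 1 ]; then
        return "$EBUSY"            # extra reference still held: refuse
    fi
    refcount=0
    echo "removed"
}

try_remove || echo "attempt 1 failed: EBUSY ($?)"
refcount=$((refcount - 1))         # last user drops its reference
try_remove                         # succeeds this time
```

[In the real flow, the retry would be the admin re-issuing
PHYSDEVOP_manage_pci_remove - which is exactly what the one-shot sysfs
removal path described above does not allow.]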



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 14:31:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 14:31:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2] xen: add support for crash dump analysis with xen.efi
Date: Mon, 24 Apr 2023 16:30:57 +0200
Message-Id: <20230424143057.27469-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today it is not possible to analyse crash dumps of a system in
hypervisor mode when it has been booted via EFI, as the crash utility
doesn't understand the file format of xen.efi.

This can easily be solved by creating an ELF file from xen.efi via
objcopy. Using that file as the name list for crash enables the user to
analyse the dump in hypervisor mode. Note that crash isn't happy with
a file containing no text and data, so using --only-keep-debug is not
an option.
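[As a hypothetical usage sketch - file paths are illustrative, not taken
from the patch - the resulting file would be produced and consumed
roughly like this:]

```shell
# Conversion as performed by the build (see the Makefile rule in this
# patch) when debug info is kept:
objcopy -O elf64-x86-64 xen.efi xen.efi.elf

# Analyse a dump taken on an EFI-booted host, using the ELF as the
# name list for crash:
crash xen.efi.elf /var/crash/vmcore
```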

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- drop Kconfig help text changes (Jan Beulich)
- apply some Makefile changes (Jan Beulich)
- add xen.efi.elf removal to _clean target
---
 xen/Makefile          | 11 +++++++----
 xen/arch/x86/Makefile |  3 +++
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/xen/Makefile b/xen/Makefile
index 2710d7327e..5166403cff 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -502,9 +502,11 @@ _install: $(TARGET)$(CONFIG_XEN_INSTALL_SUFFIX)
 	if [ -r $(TARGET).efi -a -n '$(EFI_DIR)' ]; then \
 		[ -d $(D)$(EFI_DIR) ] || $(INSTALL_DIR) $(D)$(EFI_DIR); \
 		$(INSTALL_DATA) $(TARGET).efi $(D)$(EFI_DIR)/$(T)-$(XEN_FULLVERSION).efi; \
-		if [ -e $(TARGET).efi.map ]; then \
-			$(INSTALL_DATA) $(TARGET).efi.map $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.map; \
-		fi; \
+		for x in map elf; do \
+			if [ -e $(TARGET).efi.$$x ]; then \
+				$(INSTALL_DATA) $(TARGET).efi.$$x $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.$$x; \
+			fi; \
+		done; \
 		ln -sf $(T)-$(XEN_FULLVERSION).efi $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).efi; \
 		ln -sf $(T)-$(XEN_FULLVERSION).efi $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).efi; \
 		ln -sf $(T)-$(XEN_FULLVERSION).efi $(D)$(EFI_DIR)/$(T).efi; \
@@ -539,6 +541,7 @@ _uninstall:
 	rm -f $(D)$(DEBUG_DIR)/$(T)-syms-$(XEN_FULLVERSION).map
 	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_FULLVERSION).efi
 	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).$(XEN_SUBVERSION).efi
+	rm -f $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.elf
 	rm -f $(D)$(DEBUG_DIR)/$(T)-$(XEN_FULLVERSION).efi.map
 	rm -f $(D)$(EFI_DIR)/$(T)-$(XEN_VERSION).efi
 	rm -f $(D)$(EFI_DIR)/$(T).efi
@@ -569,7 +572,7 @@ _clean:
 		-o -name '*.lex.c' -o -name '*.tab.[ch]' -o -name "*.gcno" \
 		-o -name ".*.cmd" -o -name "lib.a" \) -exec rm -f {} \;
 	rm -f include/asm $(TARGET) $(TARGET).gz $(TARGET)-syms $(TARGET)-syms.map
-	rm -f $(TARGET).efi $(TARGET).efi.map $(TARGET).efi.stripped
+	rm -f $(TARGET).efi $(TARGET).efi.map $(TARGET).efi.elf $(TARGET).efi.stripped
 	rm -f asm-offsets.s arch/*/include/asm/asm-offsets.h
 	rm -f .banner .allconfig.tmp include/xen/compile.h
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index fc9487aa40..0c66ba9086 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -226,6 +226,9 @@ endif
 	      $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
 	$(NM) -pa --format=sysv $(@D)/$(@F) \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort >$(@D)/$(@F).map
+ifeq ($(CONFIG_DEBUG_INFO),y)
+	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:,$(OBJCOPY) -O elf64-x86-64 $@ $@.elf)
+endif
 	rm -f $(@D)/.$(@F).[0-9]* $(@D)/..$(@F).[0-9]*
 ifeq ($(CONFIG_XEN_IBT),y)
 	$(SHELL) $(srctree)/tools/check-endbr.sh $@
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 14:51:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 14:51:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <626da7fb-9934-2a85-0022-90ae32f1a748@suse.com>
Date: Mon, 24 Apr 2023 16:50:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2] xen: add support for crash dump analysis with xen.efi
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230424143057.27469-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230424143057.27469-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0137.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8587:EE_
X-MS-Office365-Filtering-Correlation-Id: f3330234-07d6-44a4-b62d-08db44d35131
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f3330234-07d6-44a4-b62d-08db44d35131
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 14:50:59.5299
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: w0yE/zSJIUnu/GtzbuFJESedxqo+UfaxnIUrzPQnkU4EqJiBvxbFNBSENAvP1cj5Anai4RgHWdRgVC13mjenoA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8587

On 24.04.2023 16:30, Juergen Gross wrote:
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -226,6 +226,9 @@ endif
>  	      $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
>  	$(NM) -pa --format=sysv $(@D)/$(@F) \
>  		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort >$(@D)/$(@F).map
> +ifeq ($(CONFIG_DEBUG_INFO),y)
> +	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:,$(OBJCOPY) -O elf64-x86-64 $@ $@.elf)

This only addresses one of the two earlier-raised aspects, as you didn't
use what I proposed:

	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:)$(OBJCOPY) -O elf64-x86-64 $@ $@.elf

Quite possibly because a blank was missing in there, to separate the
colon from $(OBJCOPY). Preferably with that adjustment made (which I'd
be fine doing while committing, as long as you're okay with it):
Reviewed-by: Jan Beulich <jbeulich@suse.com>
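For archive readers, the behaviour of the proposed construct can be shown with a minimal standalone sketch (this is not the Xen build itself: OBJCOPY is mocked with echo, the makefile is a throwaway, and .RECIPEPREFIX is used only to avoid literal tabs). When --strip-debug is present in EFI_LDFLAGS, $(if $(filter ...)) expands to the ':' no-op command, which swallows the rest of the recipe line:

```shell
# Demo of the $(if $(filter ...)) recipe idiom; OBJCOPY is mocked with
# echo so the effect is visible without a real binary.
cat > /tmp/demo.mk <<'EOF'
.RECIPEPREFIX = >
OBJCOPY = echo objcopy
.PHONY: demo
demo:
> $(if $(filter --strip-debug,$(EFI_LDFLAGS)),:) $(OBJCOPY) -O elf64-x86-64 xen.efi xen.efi.elf
EOF

# Without --strip-debug the (mock) objcopy command runs:
make -sf /tmp/demo.mk demo
# With --strip-debug the line expands to ': objcopy ...', so ':' ignores it:
make -sf /tmp/demo.mk demo EFI_LDFLAGS=--strip-debug
```

The first invocation prints the mock objcopy command line; the second prints nothing, because ':' discards its arguments.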

To also mention what we have just discussed: since we're talking about
duplicating over 30 MB of data (at least according to my build), an
option would then be to strip the debug info off of xen.efi itself,
bringing its size back into a reasonable range.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 14:57:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 14:57:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525465.816678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqxdJ-0007KS-1v; Mon, 24 Apr 2023 14:57:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525465.816678; Mon, 24 Apr 2023 14:57:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqxdI-0007KL-V9; Mon, 24 Apr 2023 14:57:36 +0000
Received: by outflank-mailman (input) for mailman id 525465;
 Mon, 24 Apr 2023 14:57:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pqxdH-0007KD-9Z
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 14:57:35 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on062f.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56d7dff7-e2b0-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 16:57:32 +0200 (CEST)
Received: from DU2PR04CA0312.eurprd04.prod.outlook.com (2603:10a6:10:2b5::17)
 by PAXPR08MB6751.eurprd08.prod.outlook.com (2603:10a6:102:136::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 14:57:19 +0000
Received: from DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b5:cafe::b4) by DU2PR04CA0312.outlook.office365.com
 (2603:10a6:10:2b5::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 14:57:18 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT019.mail.protection.outlook.com (100.127.142.129) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 14:57:18 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Mon, 24 Apr 2023 14:57:18 +0000
Received: from 9433716bb5b8.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 02792C05-A91B-44A4-AE31-323E3B7FAF8F.1; 
 Mon, 24 Apr 2023 14:57:07 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9433716bb5b8.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 24 Apr 2023 14:57:07 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB7883.eurprd08.prod.outlook.com (2603:10a6:10:3b1::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 14:57:05 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 14:57:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56d7dff7-e2b0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pKD+Fyz15gU9iw9drqZOddrYVIxIps9dxcfAe7g8E+g=;
 b=nspMgTzQ79M+oDqijfDt+Cq9V3lJHxuPJKe8ORTxnKPCA6Tui7WUL8gUnRSiglmHEXqaUek3GO9kReSpMjUgVqh33n2pc+/qWmzS28i+OwZqz0cV2QOvvWfpKoOiQ4JSnN1DOYBBLaJ1QsSoNGyax8TZXDNmJGfHGqhGoQ2KSUA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: d83dcacbb1f3a013
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e3UbRblaaS77f8/KRVbZigMDycp6R7x9N4sn2Gn8TTmUhp19b6pZKpPNdQn0ps/2NMboOk722dM36ui9J2LBsCiRVsJAPYlMj+2BzLCWsz1zXyRgOyGCFSgU994oh7MF6/pihe6BYxoF8Mt3fAuAfuMvxiXVPbpx2x4+lGqExf8tyMQ15xLKl/bGnZPJKqgbi2//2E12lQ0kvFS8D9kxNrNGYBpO2F5kfmIOCQCGzkX3Ml2Yc0mDEKVNFUvDpqIDvNFX5Ku+LVPCQpZI63+Rw0T0mEybXCD7Ib4fHbstrqcmW/ql63CrJulmoIILy6jxDNfKnPkuWVfZc4Nhr2iLWw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pKD+Fyz15gU9iw9drqZOddrYVIxIps9dxcfAe7g8E+g=;
 b=cgQOu8dQEaYLpfDAfn/7rlMidiiVVjJF2+BvJo3zdNutLL2Ph9gyWmG3qSFvaNaMu0UNnSWWGkELuq8rWJQMBIZxnSxkzEFtFj+qYJ8LawZlrcB5DxXX53g/92HmLh/3WDGFbyaef0+sjWKZlEpNy879o8/jKX1be0QG6huz7Xh5Xb4P5p0vUmCLBrp85gZD4OEnfE8HjmU2rSgnqKHeXxYHi8pEa4VJqvsqbGfOLh2PZN1XGW4lrYUXoZW4rm/0Fq/shs2hkqI/G0phuAY6u8vhz8WwzCMy0TMWXXyNfMSjCx43jy/PCLyzqDIXeEongXl0Mz1b1rBbWRljKhI65w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Thread-Index: AQHZdnKG11PjGcmQuE2MpJrElBiS3686VJGAgAAo1ICAAAFsAIAADk4A
Date: Mon, 24 Apr 2023 14:57:04 +0000
Message-ID: <37C35493-D5DA-4102-9B93-0045732E6F94@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
 <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
 <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
In-Reply-To: <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DU0PR08MB7883:EE_|DBAEUR03FT019:EE_|PAXPR08MB6751:EE_
X-MS-Office365-Filtering-Correlation-Id: f2c44874-e59d-46b8-d195-08db44d4333d
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <23D2DBA80F2E33488FF6E9CDB86A0926@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB7883
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	fc1d2238-3088-41ef-f527-08db44d42b07
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 14:57:18.6345
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f2c44874-e59d-46b8-d195-08db44d4333d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6751

> On 24 Apr 2023, at 15:05, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 24.04.2023 16:00, Luca Fancellu wrote:
>>> On 24 Apr 2023, at 12:34, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 24.04.2023 08:02, Luca Fancellu wrote:
>>>> @@ -30,9 +37,11 @@ int sve_context_init(struct vcpu *v);
>>>> void sve_context_free(struct vcpu *v);
>>>> void sve_save_state(struct vcpu *v);
>>>> void sve_restore_state(struct vcpu *v);
>>>> +bool sve_domctl_vl_param(int val, unsigned int *out);
>>>> 
>>>> #else /* !CONFIG_ARM64_SVE */
>>>> 
>>>> +#define opt_dom0_sve     (0)
>>>> #define is_sve_domain(d) (0)
>>>> 
>>>> static inline register_t compute_max_zcr(void)
>>>> @@ -59,6 +68,11 @@ static inline void sve_context_free(struct vcpu *v) {}
>>>> static inline void sve_save_state(struct vcpu *v) {}
>>>> static inline void sve_restore_state(struct vcpu *v) {}
>>>> 
>>>> +static inline bool sve_domctl_vl_param(int val, unsigned int *out)
>>>> +{
>>>> +    return false;
>>>> +}
>>> 
>>> Once again I don't see the need for this stub: opt_dom0_sve is #define-d
>>> to plain zero when !ARM64_SVE, so the only call site merely requires a
>>> visible declaration, and DCE will take care of eliminating the actual call.
>> 
>> I've tried to do that, I've put the declaration outside the ifdef so that it was always included
>> and I removed the stub, but I got errors on compilation because of an undefined function.
>> For that reason I left that change out.
> 
> Interesting. I don't see where the reference would be coming from.

Could it be because the declaration is visible, outside the ifdef, but the definition is not compiled in?

>>>> --- a/xen/common/kernel.c
>>>> +++ b/xen/common/kernel.c
>>>> @@ -314,6 +314,31 @@ int parse_boolean(const char *name, const char *s, const char *e)
>>>>    return -1;
>>>> }
>>>> 
>>>> +int __init parse_signed_integer(const char *name, const char *s, const char *e,
>>>> +                                long long *val)
>>>> +{
>>>> +    size_t slen, nlen;
>>>> +    const char *str;
>>>> +    long long pval;
>>>> +
>>>> +    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
>>> 
>>> As per this "e" may come in as NULL, meaning that ...
>>> 
>>>> +    nlen = strlen(name);
>>>> +
>>>> +    /* Check that this is the name we're looking for and a value was provided */
>>>> +    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
>>>> +        return -1;
>>>> +
>>>> +    pval = simple_strtoll(&s[nlen + 1], &str, 0);
>>>> +
>>>> +    /* Number not recognised */
>>>> +    if ( str != e )
>>>> +        return -2;
>>> 
>>> ... this is always going to lead to failure in that case. (I guess I could
>>> have spotted this earlier, sorry.)
>>> 
>>> As a nit, I'd also appreciate if style here (parenthesization in particular)
>>> could match that of parse_boolean(), which doesn't put parentheses around
>>> the operands of comparison operators (a few lines up from here). With the
>>> other function in mind, I'm then not going to pick on the seemingly
>>> redundant (with the subsequent strncmp()) "slen <= nlen", which has an
>>> equivalent there as well.
>> 
>> You are right, do you think this will be ok:
> 
> It'll do, I guess.
> 
>> --- a/xen/common/kernel.c
>> +++ b/xen/common/kernel.c
>> @@ -324,11 +324,14 @@ int __init parse_signed_integer(const char *name, const char *s, const char *e,
>>     slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
>>     nlen = strlen(name);
>> 
>> +    if ( !e )
>> +        e = s + slen;
>> +
>>     /* Check that this is the name we're looking for and a value was provided */
>> -    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
>> +    if ( slen <= nlen || strncmp(s, name, nlen) || s[nlen] != '=' )
>>         return -1;
>> 
>> -    pval = simple_strtoll(&s[nlen + 1], &str, 0);
>> +    pval = simple_strtoll(&s[nlen + 1], &str, 10);
>> 
>>     /* Number not recognised */
>>     if ( str != e )
>> 
>> 
>> Please note that I've also included your comment about the base, which I forgot to add; apologies for that.
>> 
>> slen <= nlen doesn't seem redundant to me; I have that because I'm accessing s[nlen] and I would like
>> the string s to be at least > nlen.
> 
> Right, but doesn't strncmp() guarantee that already?

I thought strncmp() guarantees s contains at least nlen chars, meaning from 0 to nlen-1; is my understanding wrong?

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:07:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 15:07:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525470.816688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqxmE-0000VB-Uk; Mon, 24 Apr 2023 15:06:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525470.816688; Mon, 24 Apr 2023 15:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqxmE-0000V4-Ra; Mon, 24 Apr 2023 15:06:50 +0000
Received: by outflank-mailman (input) for mailman id 525470;
 Mon, 24 Apr 2023 15:06:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqxmD-0000Uy-Vv
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 15:06:49 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on062b.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a2af8612-e2b1-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 17:06:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8937.eurprd04.prod.outlook.com (2603:10a6:20b:408::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 15:06:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 15:06:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2af8612-e2b1-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PGP+jhii0DZb6TPsYgyXDI0zYJ6xqqkVIdHqhYYC8abFQoonSJWPBpitVWehtgl3uvEXVisc9T7fyFg5EYHf7/pUbwlhcc1ZYRQ/C9aAf3lo5MCu77zhrsbKlO1OfNI6HEYMdkweWCoE+awOTM2QIolGDM/Il227XDoltYlYEucfNFVTiX+RFKG9QUzexxCJWiUybvhb8zjSMAjLTN3HLLATnPIgPRDngk3ENYb9c0o9XdiUIGebmf8dhiZroLoWarbdBfG1NePod9nXVD0Y3H4M7qtOQ466Nyp9BJ8M7wdorhEeF2MxBjaghexir+SSRHsX1qmqAlmPjk2z4SoERg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jd/MreHNNl56toIgF9vvqrwW1dSYYssWj+BiZVofhWk=;
 b=b5dCU/AkbQRJb0gCyrkc2qqNydtVN7NyqPrQouOEGWKVybkK5VbzpIUbNHTeMCC+aXT00enoEHQffdy6K+TjYDOczkrcZXZigUMWMql11A7s16qdTeuIe0OPSKtGMNEdgNHuF+//ZLfr8lCNSKfQ4Qz0arqIWKWQr73Wmww7fzGtzTtiymZWm9nCgLCRibcYgR5jovcbjLnk1zPVz/uf2j0rkLOmj7/uEx3nhJP2casZHPF4c+SqIoQi9pIR4goYZiovyrS7h+uT4LQwq48boWIgXtz7EHszs9QmjxxG+wQ1PSAJo/pjsFNNFyQ8X6NN2+7sGiWjtZMsO57tRz879g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jd/MreHNNl56toIgF9vvqrwW1dSYYssWj+BiZVofhWk=;
 b=r8KtfXj4ZyA8YHEJ9jyPTL9//X3R4jojUwjt39oEvOvyNU6VGVbnbDBR0WFep/CFXLRJfURkY3JRtyqowJBcbwcTe6KbmtLPN16svrzHlimUQYiuJRLJNB4vmvv3UEufx7pOgC2FogasMAriGudUYdfb/ybJqgcRPRh31mVqfmUhQBxo2h4tEEMecLNuJvdycqmSlJMWukwN7mzjru7D7ox8U4T1deYmuH2ab+K0binAEcHzNMdMPr+nF+V9AScVKBvpWubq4RjmArkew9RTzVRCdUujaOO0av4NHuNht3wz+Aj8sh3BXFV/czLYb2rPZWIBjBDiZXbfsoiWByZ1CQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d49f1df6-ac49-27ef-d55f-b6284c76b055@suse.com>
Date: Mon, 24 Apr 2023 17:06:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
 <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
 <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
 <37C35493-D5DA-4102-9B93-0045732E6F94@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <37C35493-D5DA-4102-9B93-0045732E6F94@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0050.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8937:EE_
X-MS-Office365-Filtering-Correlation-Id: f5156f3b-4548-4dc4-0917-08db44d585c1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f5156f3b-4548-4dc4-0917-08db44d585c1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 15:06:46.7490
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lFykURRcfEBg6lOWGofL9ThW41M/jG+g8dLdn48rm7AYSGg7chH2Qlecq604pS9B3lQ4OpSe2gFs9kbdnEH9Mg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8937

On 24.04.2023 16:57, Luca Fancellu wrote:
>> On 24 Apr 2023, at 15:05, Jan Beulich <jbeulich@suse.com> wrote:
>> On 24.04.2023 16:00, Luca Fancellu wrote:
>>>> On 24 Apr 2023, at 12:34, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 24.04.2023 08:02, Luca Fancellu wrote:
>>>>> @@ -30,9 +37,11 @@ int sve_context_init(struct vcpu *v);
>>>>> void sve_context_free(struct vcpu *v);
>>>>> void sve_save_state(struct vcpu *v);
>>>>> void sve_restore_state(struct vcpu *v);
>>>>> +bool sve_domctl_vl_param(int val, unsigned int *out);
>>>>>
>>>>> #else /* !CONFIG_ARM64_SVE */
>>>>>
>>>>> +#define opt_dom0_sve     (0)
>>>>> #define is_sve_domain(d) (0)
>>>>>
>>>>> static inline register_t compute_max_zcr(void)
>>>>> @@ -59,6 +68,11 @@ static inline void sve_context_free(struct vcpu *v) {}
>>>>> static inline void sve_save_state(struct vcpu *v) {}
>>>>> static inline void sve_restore_state(struct vcpu *v) {}
>>>>>
>>>>> +static inline bool sve_domctl_vl_param(int val, unsigned int *out)
>>>>> +{
>>>>> +    return false;
>>>>> +}
>>>>
>>>> Once again I don't see the need for this stub: opt_dom0_sve is #define-d
>>>> to plain zero when !ARM64_SVE, so the only call site merely requires a
>>>> visible declaration, and DCE will take care of eliminating the actual call.
>>>
>>> I’ve tried to do that: I put the declaration outside the #ifdef so that it
>>> was always visible and removed the stub, but I got build errors because of
>>> the undefined function. For that reason I left that change out.
>>
>> Interesting. I don't see where the reference would be coming from.
> 
> Could it be because the declaration is visible outside the ifdef, but the definition is not compiled in?

Well, yes, likely. But the question isn't that but "Why did the reference
not get removed, when it's inside an if(0) block?"

>>> --- a/xen/common/kernel.c
>>> +++ b/xen/common/kernel.c
>>> @@ -324,11 +324,14 @@ int __init parse_signed_integer(const char *name, const char *s, const char *e,
>>>     slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
>>>     nlen = strlen(name);
>>>
>>> +    if ( !e )
>>> +        e = s + slen;
>>> +
>>>     /* Check that this is the name we're looking for and a value was provided */
>>> -    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
>>> +    if ( slen <= nlen || strncmp(s, name, nlen) || s[nlen] != '=' )
>>>         return -1;
>>>
>>> -    pval = simple_strtoll(&s[nlen + 1], &str, 0);
>>> +    pval = simple_strtoll(&s[nlen + 1], &str, 10);
>>>
>>>     /* Number not recognised */
>>>     if ( str != e )
>>>
>>>
>>> Please note that I’ve also included your comment about the base, which I forgot to add, apologies for that.
>>>
>>> slen <= nlen doesn’t seem redundant to me; I have it because I’m accessing
>>> s[nlen] and I would like the string s to be longer than nlen chars.
>>
>> Right, but doesn't strncmp() guarantee that already?
> 
> I thought strncmp() guarantees that s contains at least nlen chars, i.e. indices 0 to nlen-1. Is my understanding wrong?

That's my understanding too. Translated to C this means "slen >= nlen",
i.e. the "slen < nlen" case is covered. The "slen == nlen" case is then
covered by "s[nlen] != '='", which - due to the earlier guarantee - is
going to be in bounds. That's because even when e is non-NULL and points
at non-nul, it still points into a valid nul-terminated string. (But yes,
I see now that the "slen == nlen" case is a little hairy, so perhaps
indeed best to keep the check as you have it.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:12:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 15:12:01 +0000
Message-ID: <b5495cdc-cfda-6aa2-4d6c-c226dc2b1ccb@suse.com>
Date: Mon, 24 Apr 2023 17:11:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2 0/3] hvm: add hvm_funcs hooks for msr intercept
 handling
Content-Language: en-US
To: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230424082038.541122-1-xenia.ragiadakou@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.04.2023 10:20, Xenia Ragiadakou wrote:
> This patch series aims to make the msr intercept handling, performed in
> vpmu code, virtualization technology agnostic.
> It creates a common interface for setting/clearing the msr intercepts and
> then adds hooks to the corresponding hvm_funcs table to be able to call the
> svm/vmx specific handlers through a generic hvm wrapper function.
> 
> Version 2 addresses the comments made on version 1 to ease further review.
> 
> There is still a pending question raised by Jan: whether there could be use
> cases other than the vpmu one that would require msr intercept handling to
> be performed outside of virtualization-technology-specific code, and
> whether this abstraction is actually useful to have.

Just for reference: The code changes look okay to me now. They could get my
ack, provided we really want to go this route (which I continue to be
unconvinced of).

Jan



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:16:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 15:16:48 +0000
Message-ID: <3d440048717892fe5d3ed7fe3255dc8c9f5d38a3.camel@gmail.com>
Subject: Re: [PATCH v5 2/4] xen/riscv: introduce setup_initial_pages
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Mon, 24 Apr 2023 18:16:36 +0300
In-Reply-To: <53257ae8-d306-8c7e-35ff-f3bc3947849b@suse.com>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
	 <5b27693bcdf6d64381314aeef72cfe03dee8d73a.1681918194.git.oleksii.kurochko@gmail.com>
	 <67d8574f-2e0d-4eb6-19aa-67fe7645e35a@suse.com>
	 <ea2d5cfabb9ada64eb975369779ca430f38e9eec.camel@gmail.com>
	 <53257ae8-d306-8c7e-35ff-f3bc3947849b@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.0 (3.48.0-1.fc38) 
MIME-Version: 1.0

On Mon, 2023-04-24 at 12:18 +0200, Jan Beulich wrote:
> On 21.04.2023 18:01, Oleksii wrote:
> > On Thu, 2023-04-20 at 16:36 +0200, Jan Beulich wrote:
> > > On 19.04.2023 17:42, Oleksii Kurochko wrote:
> > > > +                        /* panic(), <asm/bug.h> aren't ready now. */
> > > > +                        die();
> > > > +                    }
> > > > +                }
> > > > +        }
> > > > +
> > > > +        /* Point to next page */
> > > > +        page_addr += XEN_PT_LEVEL_SIZE(0);
> > >
> > > Seeing this as well as the loop heading - maybe more suitable as a
> > > for(;;) loop?
> > I am not sure that I understand the benefits of changing "while (
> > page_addr < map_end )" to "for(;;)".
> > Could you please explain to me what the benefits are?
>
> The common case of using while() is in situations where one cannot
> use for(). for() is (imo) preferable in all cases where the third of
> the controlling expressions isn't empty (and is to be carried out
> after every iteration): Any use of "continue" inside the loop will
> then properly effect loop progress. (Of course there are cases where
> this behavior isn't wanted; that's where while() may come into play
> then.)
Thanks for the clarification. Now it is clearer.
>
> > > > +    csr_write(CSR_SATP,
> > > > +              ((unsigned long)stage1_pgtbl_root >> PAGE_SHIFT) |
> > > > +              satp_mode << SATP_MODE_SHIFT);
> > > > +
> > > > +    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == satp_mode )
> > > > +        is_mode_supported = true;
> > > > +
> > > > +    /* Clean MMU root page table and disable MMU */
> > > > +    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
> > > > +
> > > > +    csr_write(CSR_SATP, 0);
> > > > +    asm volatile("sfence.vma");
> > >
> > > I guess what you do in this function could do with some more comments.
> > > Looks like you're briefly enabling the MMU to check that what you wrote
> > > to SATP you can also read back. (Isn't there a register reporting
> > > whether the feature is available?)
> > I supposed that there had to be one, but I couldn't find such a register
> > in the docs.
>
> Well, yes, interestingly the register is marked WARL, so apparently
> intended to be used for probing like you do. (I find the definition of
> WARL a little odd though, as such writes supposedly aren't necessarily
> value preserving. For SATP this might mean that translation is enabled
> by a write of an unsupported mode, with a different number of levels.
> This isn't going to work very well, I'm afraid.)
Agreed. It will be an issue in the case of a different number of levels.

Then it looks like there is no way to check whether a SATP mode is supported.

So we have to rely on the developer having specified RV_STAGE1_MODE
correctly in the config file.

>
> > > > +    /*
> > > > +     * Stack should be re-inited as:
> > > > +     * 1. Right now an address of the stack is relative to load time
> > > > +     *    addresses what will cause an issue in case of load start address
> > > > +     *    isn't equal to linker start address.
> > > > +     * 2. Addresses in stack are all load time relative which can be an
> > > > +     *    issue in case when load start address isn't equal to linker
> > > > +     *    start address.
> > > > +     */
> > > > +    asm volatile ("mv sp, %0" : : "r"((unsigned long)cpu0_boot_stack + STACK_SIZE));
> > >
> > > Nit: Style (overly long line).
> > >
> > > What's worse - I don't think it is permitted to alter sp in the middle
> > > of a function. The compiler may maintain local variables on the stack
> > > which don't correspond to any programmer specified ones, including
> > > pointer ones which point into the stack frame. This is specifically why
> > > both x86 and Arm have switch_stack_and_jump() macros.
> > but the macro (from Arm) looks equivalent to the code mentioned above:
> > #define switch_stack_and_jump(stack, fn) do {                          \
> >     asm volatile ("mov sp,%0; b " STR(fn) : : "r" (stack), "X" (fn) : "memory" ); \
>
> Note how writing SP and branch are contained in a single asm() there.
> By checking ...
>
> >     unreachable();                                                     \
> > } while ( false )
> >
> > Here is part of the disassembled enable_mmu():
> >
> > ffffffffc004aedc:       18079073                csrw    satp,a5
> > ffffffffc004aee0:       00016797                auipc   a5,0x16
> > ffffffffc004aee4:       12078793                addi    a5,a5,288
> > ffffffffc004aee8:       813e                    mv      sp,a5
> > ffffffffc004af00:       0f4000ef                jal     ra,ffffffffc004aff4 <cont_after_mmu_is_enabled>
> > ...
>
> ... what the generated code in your case is you won't guarantee that
> things remain that way with future (or simply different) compilers.
Agreed, thanks for the clarification. I'll take this into account in the
next version of the patch series.

>
> > > > --- a/xen/arch/riscv/riscv64/head.S
> > > > +++ b/xen/arch/riscv/riscv64/head.S
> > > > @@ -1,4 +1,5 @@
> > > >  #include <asm/asm.h>
> > > > +#include <asm/asm-offsets.h>
> > > >  #include <asm/riscv_encoding.h>
> > > >
> > > >          .section .text.header, "ax", %progbits
> > > > @@ -32,3 +33,4 @@ ENTRY(start)
> > > >          add     sp, sp, t0
> > > >
> > > >          tail    start_xen
> > > > +
> > >
> > > ???
> > Shouldn't there be one empty line at the end of a file?
>
> There should be a newline at the end of a file, but not normally a
> blank one. When you introduce a new file, it can be viewed as a matter
> of taste whether to have an empty last line, but when you have a
> seemingly unrelated change to a file like the one here, this is at
> least odd.
Agreed. Then I'll remove this change from the patch series.

>
> > > > --- a/xen/arch/riscv/xen.lds.S
> > > > +++ b/xen/arch/riscv/xen.lds.S
> > > > @@ -136,6 +136,7 @@ SECTIONS
> > > >      . = ALIGN(POINTER_ALIGN);
> > > >      __init_end = .;
> > > >
> > > > +    . = ALIGN(PAGE_SIZE);
> > > >      .bss : {                    /* BSS */
> > > >          __bss_start = .;
> > > >          *(.bss.stack_aligned)
> > >
> > > Why do you need this? You properly use __aligned(PAGE_SIZE) for the
> > > page tables you define, and PAGE_SIZE wouldn't be enough here anyway
> > > if STACK_SIZE > PAGE_SIZE (as .bss.stack_aligned comes first). The
> > > only time you'd need such an ALIGN() is if the following label
> > > (__bss_start in this case) needed to be aligned at a certain
> > > boundary. (I'm a little puzzled though that __bss_start isn't
> > > [immediately] preceded by ". = ALIGN(POINTER_ALIGN);" - didn't .bss
> > > clearing rely on such alignment?)
> > ALIGN(PAGE_SIZE)=C2=A0 isn't needed anymore.
> > I used it to have 4k aligned physical address for PTE when I mapped
> > each section separately ( it was so in the previous verstion of MMU
> > patch series )
> >=20
> > Regarding ". =3D ALIGN(POINTER_ALIGN);" I would say that it is enough
> > to
> > have aligned __bss_end ( what was done ) to be sure that we can
> > clear
> > __SIZEOF_POINTER__ bytes each iteration of .bss clearing loop and
> > don't
> > worry that size of .bss section may not be divisible by
> > __SIZEOF_POINTER__.
> 
> How would guaranteeing this only for __bss_end help? __bss_start could
> still be misaligned, and then you'd
> (a) use misaligned stores for clearing and
> (b) extend clearing to outside of the .bss (as the last of the
> misaligned stores would cross the __bss_end boundary).
It seems you are right. I'll create a separate commit to align
__bss_start properly.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:17:47 2023
Message-ID: <f1d30ad1-4612-0838-3064-611c5092c686@suse.com>
Date: Mon, 24 Apr 2023 17:17:39 +0200
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230424143057.27469-1-jgross@suse.com>
 <626da7fb-9934-2a85-0022-90ae32f1a748@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2] xen: add support for crash dump analysis with xen.efi
In-Reply-To: <626da7fb-9934-2a85-0022-90ae32f1a748@suse.com>
On 24.04.23 16:50, Jan Beulich wrote:
> On 24.04.2023 16:30, Juergen Gross wrote:
>> --- a/xen/arch/x86/Makefile
>> +++ b/xen/arch/x86/Makefile
>> @@ -226,6 +226,9 @@ endif
>>   	      $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
>>   	$(NM) -pa --format=sysv $(@D)/$(@F) \
>>   		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort >$(@D)/$(@F).map
>> +ifeq ($(CONFIG_DEBUG_INFO),y)
>> +	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:,$(OBJCOPY) -O elf64-x86-64 $@ $@.elf)
> 
> This only addresses one of the two earlier raised aspects, as you didn't
> use what I proposed:
> 
> 	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:)$(OBJCOPY) -O elf64-x86-64 $@ $@.elf
> 
> Quite possibly because there was a blank missing in there, to separate
> the colon from $(OBJCOPY). Preferably with the adjustment (which I'd
> be fine doing while committing, as long as you're okay)

Took some time to understand your concern here, but finally I've got it. :-)

Yes, I'm fine with this change.

> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> To also mention what we have just discussed: Since we're talking about
> duplicating over 30Mb of data (at least according to my build), an
> option is going to be to then strip debug info off of xen.efi itself,
> getting its size into reasonable range again.

Yes, this would shrink it to a little bit above 3MB.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:19:02 2023
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Date: Mon, 24 Apr 2023 15:18:37 +0000
Message-ID: <5535FDB0-989E-4536-AF7B-8F0BB561667A@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
 <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
 <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
 <37C35493-D5DA-4102-9B93-0045732E6F94@arm.com>
 <d49f1df6-ac49-27ef-d55f-b6284c76b055@suse.com>
In-Reply-To: <d49f1df6-ac49-27ef-d55f-b6284c76b055@suse.com>

> On 24 Apr 2023, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 24.04.2023 16:57, Luca Fancellu wrote:
>>> On 24 Apr 2023, at 15:05, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 24.04.2023 16:00, Luca Fancellu wrote:
>>>>> On 24 Apr 2023, at 12:34, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> On 24.04.2023 08:02, Luca Fancellu wrote:
>>>>>> @@ -30,9 +37,11 @@ int sve_context_init(struct vcpu *v);
>>>>>> void sve_context_free(struct vcpu *v);
>>>>>> void sve_save_state(struct vcpu *v);
>>>>>> void sve_restore_state(struct vcpu *v);
>>>>>> +bool sve_domctl_vl_param(int val, unsigned int *out);
>>>>>> 
>>>>>> #else /* !CONFIG_ARM64_SVE */
>>>>>> 
>>>>>> +#define opt_dom0_sve     (0)
>>>>>> #define is_sve_domain(d) (0)
>>>>>> 
>>>>>> static inline register_t compute_max_zcr(void)
>>>>>> @@ -59,6 +68,11 @@ static inline void sve_context_free(struct vcpu *v) {}
>>>>>> static inline void sve_save_state(struct vcpu *v) {}
>>>>>> static inline void sve_restore_state(struct vcpu *v) {}
>>>>>> 
>>>>>> +static inline bool sve_domctl_vl_param(int val, unsigned int *out)
>>>>>> +{
>>>>>> +    return false;
>>>>>> +}
>>>>> 
>>>>> Once again I don't see the need for this stub: opt_dom0_sve is #define-d
>>>>> to plain zero when !ARM64_SVE, so the only call site merely requires a
>>>>> visible declaration, and DCE will take care of eliminating the actual call.
>>>> 
>>>> I've tried to do that: I put the declaration outside the ifdef so that it
>>>> was always included and removed the stub, but I got compilation errors
>>>> because of an undefined function. For that reason I left that change out.
>>> 
>>> Interesting. I don't see where the reference would be coming from.
>> 
>> Could it be because the declaration is visible, outside the ifdef, but the
>> definition is not compiled in?
> 
> Well, yes, likely. But the question isn't that but "Why did the reference
> not get removed, when it's inside an if(0) block?"

Oh ok, I don't know; here is what I get if, for example, I build arm32:

arm-linux-gnueabihf-ld -EL -T arch/arm/xen.lds -N prelink.o \
./common/symbols-dummy.o -o ./.xen-syms.0
arm-linux-gnueabihf-ld: prelink.o: in function `create_domUs':
(.init.text+0x13464): undefined reference to `sve_domctl_vl_param'
arm-linux-gnueabihf-ld: (.init.text+0x136b4): undefined reference to `sve_domctl_vl_param'
arm-linux-gnueabihf-ld: ./.xen-syms.0: hidden symbol `sve_domctl_vl_param' isn't defined
arm-linux-gnueabihf-ld: final link failed: bad value
make[3]: *** [/data_sdc/lucfan01/kirkstone_xen/xen/xen/arch/arm/Makefile:95: xen-syms] Error 1
make[2]: *** [/data_sdc/lucfan01/kirkstone_xen/xen/xen/./build.mk:90: xen] Error 2
make[1]: *** [/data_sdc/lucfan01/kirkstone_xen/xen/xen/Makefile:590: xen] Error 2
make[1]: Leaving directory '/data_sdc/lucfan01/kirkstone_xen/build/xen-qemu-arm32'
make: *** [Makefile:181: __sub-make] Error 2
make: Leaving directory '/data_sdc/lucfan01/kirkstone_xen/xen/xen'

These are the modifications I've done:

diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index 71bddb41f19c..330c47ea8864 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -24,6 +24,8 @@ static inline unsigned int sve_encode_vl(unsigned int sve_vl_bits)
     return sve_vl_bits / SVE_VL_MULTIPLE_VAL;
 }
 
+bool sve_domctl_vl_param(int val, unsigned int *out);
+
 #ifdef CONFIG_ARM64_SVE
 
 extern int opt_dom0_sve;
@@ -37,7 +39,6 @@ int sve_context_init(struct vcpu *v);
 void sve_context_free(struct vcpu *v);
 void sve_save_state(struct vcpu *v);
 void sve_restore_state(struct vcpu *v);
-bool sve_domctl_vl_param(int val, unsigned int *out);
 
 #else /* !CONFIG_ARM64_SVE */
 
@@ -68,11 +69,6 @@ static inline void sve_context_free(struct vcpu *v) {}
 static inline void sve_save_state(struct vcpu *v) {}
 static inline void sve_restore_state(struct vcpu *v) {}
 
-static inline bool sve_domctl_vl_param(int val, unsigned int *out)
-{
-    return false;
-}
-
 #endif /* CONFIG_ARM64_SVE */
 
 #endif /* _ARM_ARM64_SVE_H */


> 
>>>> --- a/xen/common/kernel.c
>>>> +++ b/xen/common/kernel.c
>>>> @@ -324,11 +324,14 @@ int __init parse_signed_integer(const char *name, const char *s, const char *e,
>>>>     slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
>>>>     nlen = strlen(name);
>>>> 
>>>> +    if ( !e )
>>>> +        e = s + slen;
>>>> +
>>>>     /* Check that this is the name we're looking for and a value was provided */
>>>> -    if ( (slen <= nlen) || strncmp(s, name, nlen) || (s[nlen] != '=') )
>>>> +    if ( slen <= nlen || strncmp(s, name, nlen) || s[nlen] != '=' )
>>>>         return -1;
>>>> 
>>>> -    pval = simple_strtoll(&s[nlen + 1], &str, 0);
>>>> +    pval = simple_strtoll(&s[nlen + 1], &str, 10);
>>>> 
>>>>     /* Number not recognised */
>>>>     if ( str != e )
>>>> 
>>>> 
>>>> Please note that I've also included your comment about the base, which I
>>>> forgot to add; apologies for that.
>>>> 
>>>> slen <= nlen doesn't seem redundant to me; I have it because I'm accessing
>>>> s[nlen] and I would like the string s to be longer than nlen.
>>> 
>>> Right, but doesn't strncmp() guarantee that already?
>> 
>> I thought strncmp() guarantees s contains at least nlen chars, meaning from
>> 0 to nlen-1; is my understanding wrong?
> 
> That's my understanding too. Translated to C this means "slen >= nlen",
> i.e. the "slen < nlen" case is covered. The "slen == nlen" case is then
> covered by "s[nlen] != '='", which - due to the earlier guarantee - is
> going to be in bounds. That's because even when e is non-NULL and points
> at non-nul, it still points into a valid nul-terminated string. (But yes,
> I see now that the "slen == nlen" case is a little hairy, so perhaps
> indeed best to keep the check as you have it.)
> 
> Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:25:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 15:25:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525497.816737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqy41-0005Nc-Ke; Mon, 24 Apr 2023 15:25:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525497.816737; Mon, 24 Apr 2023 15:25:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqy41-0005NQ-Hm; Mon, 24 Apr 2023 15:25:13 +0000
Received: by outflank-mailman (input) for mailman id 525497;
 Mon, 24 Apr 2023 15:25:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqy40-0005MZ-Dh
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 15:25:12 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2060c.outbound.protection.outlook.com
 [2a01:111:f400:fe16::60c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 33da1089-e2b4-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 17:25:11 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6805.eurprd04.prod.outlook.com (2603:10a6:20b:dc::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 15:25:09 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 15:25:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 33da1089-e2b4-11ed-b223-6b7b168915f2
Message-ID: <bd064b44-3531-a1b0-a7a8-1ad7ae434394@suse.com>
Date: Mon, 24 Apr 2023 17:25:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
 <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
 <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
 <37C35493-D5DA-4102-9B93-0045732E6F94@arm.com>
 <d49f1df6-ac49-27ef-d55f-b6284c76b055@suse.com>
 <5535FDB0-989E-4536-AF7B-8F0BB561667A@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <5535FDB0-989E-4536-AF7B-8F0BB561667A@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 24.04.2023 17:18, Luca Fancellu wrote:
>> On 24 Apr 2023, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
>> On 24.04.2023 16:57, Luca Fancellu wrote:
>>>> On 24 Apr 2023, at 15:05, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 24.04.2023 16:00, Luca Fancellu wrote:
>>>>>> On 24 Apr 2023, at 12:34, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>> On 24.04.2023 08:02, Luca Fancellu wrote:
>>>>>>> @@ -30,9 +37,11 @@ int sve_context_init(struct vcpu *v);
>>>>>>> void sve_context_free(struct vcpu *v);
>>>>>>> void sve_save_state(struct vcpu *v);
>>>>>>> void sve_restore_state(struct vcpu *v);
>>>>>>> +bool sve_domctl_vl_param(int val, unsigned int *out);
>>>>>>>
>>>>>>> #else /* !CONFIG_ARM64_SVE */
>>>>>>>
>>>>>>> +#define opt_dom0_sve     (0)
>>>>>>> #define is_sve_domain(d) (0)
>>>>>>>
>>>>>>> static inline register_t compute_max_zcr(void)
>>>>>>> @@ -59,6 +68,11 @@ static inline void sve_context_free(struct vcpu *v) {}
>>>>>>> static inline void sve_save_state(struct vcpu *v) {}
>>>>>>> static inline void sve_restore_state(struct vcpu *v) {}
>>>>>>>
>>>>>>> +static inline bool sve_domctl_vl_param(int val, unsigned int *out)
>>>>>>> +{
>>>>>>> +    return false;
>>>>>>> +}
>>>>>>
>>>>>> Once again I don't see the need for this stub: opt_dom0_sve is #define-d
>>>>>> to plain zero when !ARM64_SVE, so the only call site merely requires a
>>>>>> visible declaration, and DCE will take care of eliminating the actual call.
>>>>>
>>>>> I've tried to do that: I put the declaration outside the ifdef so that it was always visible,
>>>>> and I removed the stub, but I got build errors because of an undefined function reference.
>>>>> For that reason I left that change out.
>>>>
>>>> Interesting. I don't see where the reference would be coming from.
>>>
>>> Could it be because the declaration is visible, outside the ifdef, but the definition is not compiled in? 
>>
>> Well, yes, likely. But the question isn't that but "Why did the reference
>> not get removed, when it's inside an if(0) block?"
> 
> Oh ok, I don't know; here is what I get if, for example, I build arm32:
> 
> arm-linux-gnueabihf-ld -EL -T arch/arm/xen.lds -N prelink.o \
> ./common/symbols-dummy.o -o ./.xen-syms.0
> arm-linux-gnueabihf-ld: prelink.o: in function `create_domUs':
> (.init.text+0x13464): undefined reference to `sve_domctl_vl_param'

In particular, seeing this: is what you copied here from a build with the
series applied only up to this patch? I ask because the patch here adds a
call only from create_dom0().

Jan

> arm-linux-gnueabihf-ld: (.init.text+0x136b4): undefined reference to `sve_domctl_vl_param'
> arm-linux-gnueabihf-ld: ./.xen-syms.0: hidden symbol `sve_domctl_vl_param' isn't defined
> arm-linux-gnueabihf-ld: final link failed: bad value
> make[3]: *** [/data_sdc/lucfan01/kirkstone_xen/xen/xen/arch/arm/Makefile:95: xen-syms] Error 1
> make[2]: *** [/data_sdc/lucfan01/kirkstone_xen/xen/xen/./build.mk:90: xen] Error 2
> make[1]: *** [/data_sdc/lucfan01/kirkstone_xen/xen/xen/Makefile:590: xen] Error 2
> make[1]: Leaving directory '/data_sdc/lucfan01/kirkstone_xen/build/xen-qemu-arm32'
> make: *** [Makefile:181: __sub-make] Error 2
> make: Leaving directory '/data_sdc/lucfan01/kirkstone_xen/xen/xen'
> 
> These are the modifications I've made:
> 
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> index 71bddb41f19c..330c47ea8864 100644
> --- a/xen/arch/arm/include/asm/arm64/sve.h
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -24,6 +24,8 @@ static inline unsigned int sve_encode_vl(unsigned int sve_vl_bits)
>      return sve_vl_bits / SVE_VL_MULTIPLE_VAL;
>  }
>  
> +bool sve_domctl_vl_param(int val, unsigned int *out);
> +
>  #ifdef CONFIG_ARM64_SVE
>  
>  extern int opt_dom0_sve;
> @@ -37,7 +39,6 @@ int sve_context_init(struct vcpu *v);
>  void sve_context_free(struct vcpu *v);
>  void sve_save_state(struct vcpu *v);
>  void sve_restore_state(struct vcpu *v);
> -bool sve_domctl_vl_param(int val, unsigned int *out);
>  
>  #else /* !CONFIG_ARM64_SVE */
>  
> @@ -68,11 +69,6 @@ static inline void sve_context_free(struct vcpu *v) {}
>  static inline void sve_save_state(struct vcpu *v) {}
>  static inline void sve_restore_state(struct vcpu *v) {}
>  
> -static inline bool sve_domctl_vl_param(int val, unsigned int *out)
> -{
> -    return false;
> -}
> -
>  #endif /* CONFIG_ARM64_SVE */
>  
>  #endif /* _ARM_ARM64_SVE_H */



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:25:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 15:25:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525498.816748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqy44-0005dl-2W; Mon, 24 Apr 2023 15:25:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525498.816748; Mon, 24 Apr 2023 15:25:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqy43-0005de-Ty; Mon, 24 Apr 2023 15:25:15 +0000
Received: by outflank-mailman (input) for mailman id 525498;
 Mon, 24 Apr 2023 15:25:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zAuZ=AP=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pqy42-0005MZ-R6
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 15:25:14 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 35bbb1c9-e2b4-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 17:25:14 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-504eac2f0b2so7899648a12.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Apr 2023 08:25:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35bbb1c9-e2b4-11ed-b223-6b7b168915f2
MIME-Version: 1.0
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
 <6984a8571dac35d04c85117834d99b00fe1c4184.1680752649.git-series.marmarek@invisiblethingslab.com>
 <4eb45940-5615-2398-633d-e5f59dc6987d@suse.com>
In-Reply-To: <4eb45940-5615-2398-633d-e5f59dc6987d@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 24 Apr 2023 11:25:01 -0400
Message-ID: <CAKf6xps2nVoYL6LtOqW2UBHadNSQzkb1XAe7WRxXmLzyN3kAGQ@mail.gmail.com>
Subject: Re: [PATCH v3 4/4] x86/msi: clear initial MSI-X state on boot
To: Jan Beulich <jbeulich@suse.com>
Cc: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>, 
	Paul Durrant <paul@xen.org>, Roger Pau Monné <roger.pau@citrix.com>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Apr 24, 2023 at 10:19 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> > On 06.04.2023 05:57, Marek Marczykowski-Górecki wrote:
> > Some firmware/devices are found to not reset MSI-X properly, leaving
> > MASKALL set. Jason reports on his machine MASKALL persists through a
> > warm reboot, but is cleared on cold boot. Xen relies on initial state
> > being MASKALL clear. Especially, pci_reset_msix_state() assumes if
> > MASKALL is set, it was Xen setting it due to msix->host_maskall or
> > msix->guest_maskall. Clearing just MASKALL might be unsafe if ENABLE is
> > set, so clear them both.
> >
> > Reported-by: Jason Andryuk <jandryuk@gmail.com>
> > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> albeit with a couple of nits (which I'd be happy to address while
> committing, so long as you agree). First one being on the last
> sentence above: It's surely not just "might"; if resetting already
> doesn't work right, nothing says that the individual mask bit all
> end up set. Clearing ENABLE as well is only natural imo, if we
> already need to fix up after firmware. So maybe "Even if so far not
> observed to be left set, clear ENABLE as well"?
>
> > --- a/xen/drivers/passthrough/msi.c
> > +++ b/xen/drivers/passthrough/msi.c
> > @@ -46,6 +46,23 @@ int pdev_msi_init(struct pci_dev *pdev)
> >          spin_lock_init(&msix->table_lock);
> >
> >          ctrl = pci_conf_read16(pdev->sbdf, msix_control_reg(pos));
> > +
> > +        if ( ctrl & (PCI_MSIX_FLAGS_MASKALL|PCI_MSIX_FLAGS_ENABLE) )
>
> Style (missing blanks around |; once more below).
>
> > +        {
> > +            /*
> > +             * pci_reset_msix_state() relies on MASKALL not being set
> > +             * initially, clear it (and ENABLE too - for safety), to meet that
> > +             * expectation.
> > +             */
> > +            printk(XENLOG_WARNING
> > +                   "%pp: unexpected initial MSI-X state (MASKALL=%d, ENABLE=%d), fixing\n",
> > +                   &pdev->sbdf,
> > +                   (ctrl & PCI_MSIX_FLAGS_MASKALL) ? 1 : 0,
> > +                   (ctrl & PCI_MSIX_FLAGS_ENABLE) ? 1 : 0);
>
> Our "canonical" way of dealing with this is !!(x & y).
>
> > +            ctrl &= ~(PCI_MSIX_FLAGS_ENABLE|PCI_MSIX_FLAGS_MASKALL);
> > +            pci_conf_write16(pdev->sbdf, msix_control_reg(pos), ctrl);
> > +        }
> > +
> >          msix->nr_entries = msix_table_size(ctrl);
> >
> >          pdev->msix = msix;
>
>
> Aiui there's no dependency here on the earlier patches in the series;
> please confirm (or otherwise).
>
> Jason - any chance of getting a Tested-by: from you?

I'm building v3 now.  v2 worked for clearing MASKALL on initial boot.

I posted in these two messages - a summary is below.
https://lore.kernel.org/xen-devel/CAKf6xpto87QRSKT2qc1yApNfaw2SrLLxPoytYJv_jEbYTAbjCg@mail.gmail.com/
https://lore.kernel.org/xen-devel/CAKf6xptHALLR-Qjf=p5y0o9Ud2V7eFMJuB8Ap-PLjv-N7PAJVQ@mail.gmail.com/

OpenXT has a patch that performs an extra reset after domain shutdown,
and that causes Xen to set MASKALL.  I confirmed by removing it.  So
this patch helps with clearing MASKALL on host boot, but with the
OpenXT patch, rebooting a domain fails.  MASKALL gets set on VM
shutdown and then the subsequent boot can't assign the device.

So this patch is helpful in some scenarios, but the issue I hit was also
caused by the OpenXT patch.  Does that make it unsuitable for inclusion?
I assume the OpenXT patch wasn't an issue previously since MSI-X was
never enabled.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:30:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 15:30:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525507.816758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqy9E-0007Ut-Kb; Mon, 24 Apr 2023 15:30:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525507.816758; Mon, 24 Apr 2023 15:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqy9E-0007Um-Hk; Mon, 24 Apr 2023 15:30:36 +0000
Received: by outflank-mailman (input) for mailman id 525507;
 Mon, 24 Apr 2023 15:30:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqy9C-0007Ug-KV
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 15:30:34 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2089.outbound.protection.outlook.com [40.107.13.89])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f3ec5532-e2b4-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 17:30:33 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6805.eurprd04.prod.outlook.com (2603:10a6:20b:dc::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 15:30:04 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 15:30:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3ec5532-e2b4-11ed-b223-6b7b168915f2
Message-ID: <50a0883c-efb8-9456-7dac-a01cca3a17cf@suse.com>
Date: Mon, 24 Apr 2023 17:30:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v3 4/4] x86/msi: clear initial MSI-X state on boot
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
 Paul Durrant <paul@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
 <6984a8571dac35d04c85117834d99b00fe1c4184.1680752649.git-series.marmarek@invisiblethingslab.com>
 <4eb45940-5615-2398-633d-e5f59dc6987d@suse.com>
 <CAKf6xps2nVoYL6LtOqW2UBHadNSQzkb1XAe7WRxXmLzyN3kAGQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xps2nVoYL6LtOqW2UBHadNSQzkb1XAe7WRxXmLzyN3kAGQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
 =?utf-8?B?K0djS0ZJTlpnQXNRT01rN1NOeU5FV0tIaVhmVXNJVm1uaEcwRktVNy9HYVEr?=
 =?utf-8?B?eFVaM3hvUFpHRHhGS0hPckU5VDlZUkMxcTlELzlDbUdOSHZJb2JCUmlaUjBD?=
 =?utf-8?B?dFVUMUJ2VlZaTmRNWitjTmM0QWlIRENEUmU0WnhVS0Uvd1cyT2FDa05TT2xO?=
 =?utf-8?B?TUtyTjE5cGVuZ2xka1FCZk0vK3dnWXNmdlNOK3J1Qmc4LytVSVU2V09QZmVz?=
 =?utf-8?B?WHhUcjVHbnA5ajVTUFJaZW5pUU1ndjMyUTZ6bGg3RDU4Z2Y3eVNHWGVDT1Rx?=
 =?utf-8?B?c05GTkl4Q0hZTUpQeTVBV205eE5vQzJsaCtWS3J5bHJoYys4TGRZRXBCdWVL?=
 =?utf-8?B?TXhXK29helJNNXVDRTNLbXlnbXp6clNBQit2YWVaQndqeDlOYnlYWGI4WFNt?=
 =?utf-8?B?ZXY2ckxoZGtGd1V5MFVaZFNnTVcvQjVCMVN6UWxJeFYrK21VQ3QycmlxdEwz?=
 =?utf-8?B?cVk4UExnRnc4dDBEQmpCVmhYUU1teGk2ckdqTVV0N1Q5SUVBeVFOMW1QOVJj?=
 =?utf-8?B?RnAvTkNzUlhmSjB5OS9RUk4wNVNTU2Qvb0tpaEdFRXNRblpJaFdOMWhybUtU?=
 =?utf-8?B?L1gxaHFyeDI1SGVsVk9ZWXptWFBSc20wMFdXN1Q0VUNsNHdNS210aC9wb0Fy?=
 =?utf-8?B?L0taT0J3cWFRMnVmT28xWGdiaEpMaHBaSVlKR29mdjFuSE8yU0FJRldkZktX?=
 =?utf-8?B?cGlUMldIUkJoalZiYXdWUkI3TFFmU3hJWDFSTE04aGRiYjJmUEtrZHNBOEJX?=
 =?utf-8?B?M2t1UnV5NmRIQjYwcUlRenhnTXZpZkN6VmdWY0x2dE9rWTh5NkwrTEtHTi9E?=
 =?utf-8?B?UHFDM213YkZvb3lqNVI3dHhaTmUySUhIUnN3NGM5cFBZR3ZMc1QzUkFYSllU?=
 =?utf-8?B?VFk2YnpmMCs3YlRndC9tV1AwTG1IWm12TFFrRmxqZk5IU2YwMHRzQTRJZk13?=
 =?utf-8?B?RWFQK3RLS25ucnBkbjBpc2NxRTR3M1RPci8xU0xYTUhaL2Q0MUJuUmpCd2sx?=
 =?utf-8?B?a2JUZ0kvUWpYZUVXQU15ZmFJaTRPZFFtdmcvZ2p5QXQ0RkIxVVhXcFpsUkV3?=
 =?utf-8?B?OTQ5WXliMU5pdmRiN2FNbm9aR05XR3luL1QvZG1uejEyUERPdmp6Mmtld0hy?=
 =?utf-8?B?WnpGS0dib3ZGL3Z0dWd5L2g2YUtsbmc2ZkJOVS9iYzFtcGdjVENCc2V3Y2lW?=
 =?utf-8?B?MnRPUDJhbHkrelhIYysrT1piU24wUk1HUis0cWU3SDBJbkpoSWxLQXVuTEZR?=
 =?utf-8?B?MUlhaC9sMldUNWRTVkl2ZkxsOWYrWXNCcytNcG5KY3pHSFBFMUlKMWpkcFhr?=
 =?utf-8?B?SWhDOVNsczJHaGxnSlZ2emxra2dUb1BkUVVxSEtZZllkcVo0eFBkUXpMSVlq?=
 =?utf-8?B?OTJoU0czR2tGU09NNEl6UCtmNjhQWk5hZ0Y4bXFqSGk3NThIc0ViNURxUTZH?=
 =?utf-8?B?L3I0cDBEQ3ZMTmdUTklZTFhtL1FxQUlqZXZnWjlQendmUW1Yb21mVC9WVG9C?=
 =?utf-8?B?U0pVZlpaR0l4YzVySG91RUNXYTZIRmlKMVBIQm0yUHBBVDFYVHl2aHNCTGpn?=
 =?utf-8?Q?dP9rzT3jwQExzyXEIl1EhcqzX?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0a3d7d43-ecd0-4beb-d705-08db44d8c70e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 15:30:04.7539
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yazo46s8kccwGR2YE6SeFD5NHXCVCmXMiLhLSzBWtClGVqgHjUoBQ0/JTuF6bsqPMpVphIm+IPiuqQ5NDXs1lQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB6805

On 24.04.2023 17:25, Jason Andryuk wrote:
> On Mon, Apr 24, 2023 at 10:19 AM Jan Beulich <jbeulich@suse.com> wrote:
>> Jason - any chance of getting a Tested-by: from you?
> 
> I'm building v3 now.  v2 worked for clearing MASKALL on initial boot.
> 
> I posted in these two messages - a summary is below.
> https://lore.kernel.org/xen-devel/CAKf6xpto87QRSKT2qc1yApNfaw2SrLLxPoytYJv_jEbYTAbjCg@mail.gmail.com/
> https://lore.kernel.org/xen-devel/CAKf6xptHALLR-Qjf=p5y0o9Ud2V7eFMJuB8Ap-PLjv-N7PAJVQ@mail.gmail.com/
> 
> OpenXT has a patch that performs an extra reset after domain shutdown,
> and that causes Xen to set MASKALL.  I confirmed by removing it.  So
> this patch helps with clearing MASKALL on host boot, but with the
> OpenXT patch, rebooting a domain fails.  MASKALL gets set on VM
> shutdown and then the subsequent boot can't assign the device.
> 
> So this patch is helpful in some scenarios, but it was also an issue
> caused by the OpenXT patch.  Does that make it unsuitable for
> inclusion?

What is "it" here? If I get your reply right, there is a similar issue
left unaddressed by this version of the change (and as was said before,
a device reset changing state that Xen tracks or otherwise cares about
needs to be reported to Xen). Yet that doesn't really fit with the
question, at least the way I read it ...

Jan

>  I assume the OpenXT patch wasn't an issue previously since
> MSI-X was never enabled.
> 
> Regards,
> Jason



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:34:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 15:34:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525512.816768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqyDK-00085P-5n; Mon, 24 Apr 2023 15:34:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525512.816768; Mon, 24 Apr 2023 15:34:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqyDK-00085I-38; Mon, 24 Apr 2023 15:34:50 +0000
Received: by outflank-mailman (input) for mailman id 525512;
 Mon, 24 Apr 2023 15:34:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pqyDJ-00085C-Av
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 15:34:49 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on060f.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::60f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8b5cae82-e2b5-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 17:34:47 +0200 (CEST)
Received: from AM6P194CA0094.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:8f::35)
 by DBBPR08MB6217.eurprd08.prod.outlook.com (2603:10a6:10:201::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 15:34:45 +0000
Received: from AM7EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8f:cafe::c5) by AM6P194CA0094.outlook.office365.com
 (2603:10a6:209:8f::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 15:34:44 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT056.mail.protection.outlook.com (100.127.140.107) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 15:34:44 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Mon, 24 Apr 2023 15:34:44 +0000
Received: from 5140db261521.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A6183F77-F5EA-4CEC-B63A-3E21CFA0806B.1; 
 Mon, 24 Apr 2023 15:34:33 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5140db261521.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 24 Apr 2023 15:34:33 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DB9PR08MB7495.eurprd08.prod.outlook.com (2603:10a6:10:36c::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 15:34:30 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 15:34:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b5cae82-e2b5-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UuJS8RkoXZADJ4VbzV+EtG7uMm7ioWIq2ddvgdwdYNM=;
 b=QXWwHqLLU8gouGGt7ncMOeueBq6CibWwJgFbyiryFrzYQJxHwE1TDW0FrVG8qw7z/sFcNyFyw0ls/IbX+iZBAfNp8sCktfuohPOLwadL82UgohhW/zkdJFqwBX5+80dWAVJHL7XpDYeBr/+5de6FgeNG75Htk+g4KXSw8AD1bHQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5b71ef954be05ccb
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XVsUITlw7UAUIUDam5vwac3ZOJHKcTZXmN3+W2AzQI7+D3xKT5j0xYlA6qauDnXhCa3Xcl6plI+Z/H1S2MqWcMZ+W+Pq86QdtnACxPgpVyKUi4V1ktzwkeJ12kcq7FknzOaHqYImC32JTO78Fk3Z/x2BHNH4HFkwJLDjieHJfMr5724S7ek8FI4Gm71tuVRu/GTAOMfb7quMtUD/WZqGDqR9KMzGNZcf3VVw0/RCSvsnH0Ji506w7FAgLwWfl7zOHSf7df4SRVsfsq0K1Yh9WcI9EhXKS29pI0v9imC6dBw4bKr5LsNvNdKMVdBrVTWnjLL+4E2s1jTzaqKnZGHOnQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=UuJS8RkoXZADJ4VbzV+EtG7uMm7ioWIq2ddvgdwdYNM=;
 b=ZE8gKbl6lrAFqabZzwOq+SDQRNFnwFKD5gTGyjzChCRo2PGddkqvnxcaqqLzqkvAeDsHnKoXQMDcFxorZuxPHg1wGQnxUpWMpIDRXy1+SQN8ou4uTu8Q8X/TMGP/RU85ht+nvNgtZWvNTlkU4DwMJI9yZ3FN+AafDd5cURB8dS7DC7qR6KFkLZFxKhtAUMYitar/sQH3g77P2mRE91Z4FurHf2yxbKErxJiwKE3pwOl8+5dt48/w9GFYOYj8geJw4QdI6XbGeuOztYkuZPHuvf1R8wfaz47oepDrkEgDyP8Ah+ApdnqS8upnllfjzsCj8GJgzdAGmZi4Dca7oKmoLw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UuJS8RkoXZADJ4VbzV+EtG7uMm7ioWIq2ddvgdwdYNM=;
 b=QXWwHqLLU8gouGGt7ncMOeueBq6CibWwJgFbyiryFrzYQJxHwE1TDW0FrVG8qw7z/sFcNyFyw0ls/IbX+iZBAfNp8sCktfuohPOLwadL82UgohhW/zkdJFqwBX5+80dWAVJHL7XpDYeBr/+5de6FgeNG75Htk+g4KXSw8AD1bHQ=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Thread-Index:
 AQHZdnKG11PjGcmQuE2MpJrElBiS3686VJGAgAAo1ICAAAFsAIAADk4AgAACvwCAAANFAIAAAd0AgAACk4A=
Date: Mon, 24 Apr 2023 15:34:29 +0000
Message-ID: <300BE89F-CA37-4A28-9CC5-5875E10D4A0C@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
 <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
 <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
 <37C35493-D5DA-4102-9B93-0045732E6F94@arm.com>
 <d49f1df6-ac49-27ef-d55f-b6284c76b055@suse.com>
 <5535FDB0-989E-4536-AF7B-8F0BB561667A@arm.com>
 <bd064b44-3531-a1b0-a7a8-1ad7ae434394@suse.com>
In-Reply-To: <bd064b44-3531-a1b0-a7a8-1ad7ae434394@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DB9PR08MB7495:EE_|AM7EUR03FT056:EE_|DBBPR08MB6217:EE_
X-MS-Office365-Filtering-Correlation-Id: ca4c93d0-8598-4cf3-ce21-08db44d96ded
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 cnfLwYoF8rSmoNAJBGYipm9XGsrXYVpEw1ELYutUkMDObdStn63artvMjcs12gKxShqHexIoiLtxH2tuEw+a2YLdecA8gRW1a41pmu/wzWnGPyLbZD6C3kuqBq/gV6BDULSbH8KEjwp6KasNxyFCb8AB6Op/0ct+yfI6A4N7jZ2fwJrCb5RQ56xq3RnLcpf4jWyWiXeGpEqOYEV/RTV+B6SWlG9nwkopk+mEEJTmcP/Xc8yb66c/EEv9DNkHatlvXoGjbTP6BJAJPGTv/LonKvyg3uW8uBTSTVK5K5HFNRQFtUFsPnkWksRk25Ea8eECDCgui8lUoRnpv2vSz+yIwwtiLnJQ98py9ulhSUW+5qmoHZG46DB6mKZ/uMZwFEt3D+hr4Cd1A7HlDNC3j2HpVcnyPL14nupRjZ3o0xHjvMdbdvZTe9XtyUzpIS/+kyPibsEzI3NfPF48Hi37DBE+Qq21EeZWpxZFUh5t50myZ3/WrDS0WefitInsJtQgaG+lclrmo6N9GF04cMRK3KsCred+nvyrBfvic2vhFMhvtJt/MJ6pfq/IeQPdcEB2GRcbZ2VO/YIU4QorKKDpRvUPzDu0gAXZyunjbInPrWQYsLr0jVN9VAYKfmYe5N84FEnqf5vwK5FKP1SAJ91S5Qxkew==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(366004)(346002)(376002)(136003)(451199021)(36756003)(8676002)(8936002)(38070700005)(54906003)(478600001)(66446008)(66476007)(66556008)(64756008)(4326008)(6916009)(66946007)(76116006)(91956017)(122000001)(316002)(41300700001)(2906002)(38100700002)(5660300002)(2616005)(86362001)(186003)(26005)(6506007)(6512007)(33656002)(53546011)(6486002)(71200400001)(83380400001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <F2F189D465D08B48A1B1055947FE92ED@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7495
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b28dc74c-1411-4fc4-970e-08db44d9651f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RFJQSnIy9hFi4tOL/kFPvDgUZDs19doly/QipighY5JPkAW5XnYMXMs0l1zM7+8ULxal/rgljB9tPJk9S3qmOiut1grtVW5+1z6haq+cIIgeE15uWSF5xBmmK6D1KLWHAIkRQJawurcr9beJHKoMfKWcASMC2mNU4avo//WR3YqcgkzdI1H1smRoxwUiW5FuhuCpkPkTM6fbTwi3G7PmKexFD+Aft4YKtpWx9WI7y0wemxSE5XsEZ38KqT7iORln4OqAtdC9UbffT+vTuz/GXyTUTjG4cnqjtqVqV7fBYXfd9tjX6s48m2R4wNZecFKXGrKNIIwD7mMFs/KrwyfNOo8+EFyZ4FN27fCbh6sNCCq+s7RuaVYgrUoRlI/RH7yigfSHTo997DKSZRmR37AxQ/M8MSWnwh9ruLNqoe1/KMSZcuVzWTsRAwfb2RysJaQsLR70b+ZdvEGklM7LoQ0fvaFB/m4DNHhwHNhhWhMjuBBthLr+9yIpYP2vSJYH4CgEnWq3XsLIPlokyImxnx8Y9PaHGa9nnfRZs5MFPq8dozhNtmMk9JbIBryH3mo2trz6JH0CHbe2AIc8vjUZxcAk8WXMHWZd3MghvP4l2hj5HuKU2QW19z/4fQLjrj58+c7BgidPUN1f7B6e1MenTJ9Julok4iUhy4Aqclwx+yS5p2tW0pvySiHPhddogKOYNnchCoh5JXkAqNy8RwaMeM4wCg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(376002)(346002)(39860400002)(451199021)(40470700004)(36840700001)(46966006)(81166007)(356005)(82740400003)(336012)(6506007)(6512007)(53546011)(26005)(2616005)(40480700001)(186003)(83380400001)(47076005)(36860700001)(33656002)(2906002)(8676002)(8936002)(6862004)(5660300002)(36756003)(40460700003)(54906003)(478600001)(6486002)(316002)(4326008)(41300700001)(70206006)(70586007)(86362001)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 15:34:44.5169
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ca4c93d0-8598-4cf3-ce21-08db44d96ded
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6217

DQoNCj4gT24gMjQgQXByIDIwMjMsIGF0IDE2OjI1LCBKYW4gQmV1bGljaCA8amJldWxpY2hAc3Vz
ZS5jb20+IHdyb3RlOg0KPiANCj4gT24gMjQuMDQuMjAyMyAxNzoxOCwgTHVjYSBGYW5jZWxsdSB3
cm90ZToNCj4+PiBPbiAyNCBBcHIgMjAyMywgYXQgMTY6MDYsIEphbiBCZXVsaWNoIDxqYmV1bGlj
aEBzdXNlLmNvbT4gd3JvdGU6DQo+Pj4gT24gMjQuMDQuMjAyMyAxNjo1NywgTHVjYSBGYW5jZWxs
dSB3cm90ZToNCj4+Pj4+IE9uIDI0IEFwciAyMDIzLCBhdCAxNTowNSwgSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPiB3cm90ZToNCj4+Pj4+IE9uIDI0LjA0LjIwMjMgMTY6MDAsIEx1Y2Eg
RmFuY2VsbHUgd3JvdGU6DQo+Pj4+Pj4+IE9uIDI0IEFwciAyMDIzLCBhdCAxMjozNCwgSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPiB3cm90ZToNCj4+Pj4+Pj4gT24gMjQuMDQuMjAyMyAw
ODowMiwgTHVjYSBGYW5jZWxsdSB3cm90ZToNCj4+Pj4+Pj4+IEBAIC0zMCw5ICszNywxMSBAQCBp
bnQgc3ZlX2NvbnRleHRfaW5pdChzdHJ1Y3QgdmNwdSAqdik7DQo+Pj4+Pj4+PiB2b2lkIHN2ZV9j
b250ZXh0X2ZyZWUoc3RydWN0IHZjcHUgKnYpOw0KPj4+Pj4+Pj4gdm9pZCBzdmVfc2F2ZV9zdGF0
ZShzdHJ1Y3QgdmNwdSAqdik7DQo+Pj4+Pj4+PiB2b2lkIHN2ZV9yZXN0b3JlX3N0YXRlKHN0cnVj
dCB2Y3B1ICp2KTsNCj4+Pj4+Pj4+ICtib29sIHN2ZV9kb21jdGxfdmxfcGFyYW0oaW50IHZhbCwg
dW5zaWduZWQgaW50ICpvdXQpOw0KPj4+Pj4+Pj4gDQo+Pj4+Pj4+PiAjZWxzZSAvKiAhQ09ORklH
X0FSTTY0X1NWRSAqLw0KPj4+Pj4+Pj4gDQo+Pj4+Pj4+PiArI2RlZmluZSBvcHRfZG9tMF9zdmUg
ICAgICgwKQ0KPj4+Pj4+Pj4gI2RlZmluZSBpc19zdmVfZG9tYWluKGQpICgwKQ0KPj4+Pj4+Pj4g
DQo+Pj4+Pj4+PiBzdGF0aWMgaW5saW5lIHJlZ2lzdGVyX3QgY29tcHV0ZV9tYXhfemNyKHZvaWQp
DQo+Pj4+Pj4+PiBAQCAtNTksNiArNjgsMTEgQEAgc3RhdGljIGlubGluZSB2b2lkIHN2ZV9jb250
ZXh0X2ZyZWUoc3RydWN0IHZjcHUgKnYpIHt9DQo+Pj4+Pj4+PiBzdGF0aWMgaW5saW5lIHZvaWQg
c3ZlX3NhdmVfc3RhdGUoc3RydWN0IHZjcHUgKnYpIHt9DQo+Pj4+Pj4+PiBzdGF0aWMgaW5saW5l
IHZvaWQgc3ZlX3Jlc3RvcmVfc3RhdGUoc3RydWN0IHZjcHUgKnYpIHt9DQo+Pj4+Pj4+PiANCj4+
Pj4+Pj4+ICtzdGF0aWMgaW5saW5lIGJvb2wgc3ZlX2RvbWN0bF92bF9wYXJhbShpbnQgdmFsLCB1
bnNpZ25lZCBpbnQgKm91dCkNCj4+Pj4+Pj4+ICt7DQo+Pj4+Pj4+PiArICAgIHJldHVybiBmYWxz
ZTsNCj4+Pj4+Pj4+ICt9DQo+Pj4+Pj4+IA0KPj4+Pj4+PiBPbmNlIGFnYWluIEkgZG9uJ3Qgc2Vl
IHRoZSBuZWVkIGZvciB0aGlzIHN0dWI6IG9wdF9kb20wX3N2ZSBpcyAjZGVmaW5lLWQNCj4+Pj4+
Pj4gdG8gcGxhaW4gemVybyB3aGVuICFBUk02NF9TVkUsIHNvIHRoZSBvbmx5IGNhbGwgc2l0ZSBt
ZXJlbHkgcmVxdWlyZXMgYQ0KPj4+Pj4+PiB2aXNpYmxlIGRlY2xhcmF0aW9uLCBhbmQgRENFIHdp
bGwgdGFrZSBjYXJlIG9mIGVsaW1pbmF0aW5nIHRoZSBhY3R1YWwgY2FsbC4NCj4+Pj4+PiANCj4+
Pj4+PiBJ4oCZdmUgdHJpZWQgdG8gZG8gdGhhdCwgSeKAmXZlIHB1dCB0aGUgZGVjbGFyYXRpb24g
b3V0c2lkZSB0aGUgaWZkZWYgc28gdGhhdCBpdCB3YXMgYWx3YXlzIGluY2x1ZGVkDQo+Pj4+Pj4g
YW5kIEkgcmVtb3ZlZCB0aGUgc3R1YiwgYnV0IEkgZ290IGVycm9ycyBvbiBjb21waWxhdGlvbiBi
ZWNhdXNlIG9mIHVuZGVmaW5lZCBmdW5jdGlvbi4NCj4+Pj4+PiBGb3IgdGhhdCByZWFzb24gIEkg
bGVmdCB0aGF0IGNoYW5nZSBvdXQuDQo+Pj4+PiANCj4+Pj4+IEludGVyZXN0aW5nLiBJIGRvbid0
IHNlZSB3aGVyZSB0aGUgcmVmZXJlbmNlIHdvdWxkIGJlIGNvbWluZyBmcm9tLg0KPj4+PiANCj4+
Pj4gQ291bGQgaXQgYmUgYmVjYXVzZSB0aGUgZGVjbGFyYXRpb24gaXMgdmlzaWJsZSwgb3V0c2lk
ZSB0aGUgaWZkZWYsIGJ1dCB0aGUgZGVmaW5pdGlvbiBpcyBub3QgY29tcGlsZWQgaW4/IA0KPj4+
IA0KPj4+IFdlbGwsIHllcywgbGlrZWx5LiBCdXQgdGhlIHF1ZXN0aW9uIGlzbid0IHRoYXQgYnV0
ICJXaHkgZGlkIHRoZSByZWZlcmVuY2UNCj4+PiBub3QgZ2V0IHJlbW92ZWQsIHdoZW4gaXQncyBp
bnNpZGUgYW4gaWYoMCkgYmxvY2s/Ig0KPj4gDQo+PiBPaCBvaywgSSBkb27igJl0IGtub3csIGhl
cmUgd2hhdCBJIGdldCBpZiBmb3IgZXhhbXBsZSBJIGJ1aWxkIGFybTMyOg0KPj4gDQo+PiBhcm0t
bGludXgtZ251ZWFiaWhmLWxkIC1FTCAtVCBhcmNoL2FybS94ZW4ubGRzIC1OIHByZWxpbmsubyBc
DQo+PiAuL2NvbW1vbi9zeW1ib2xzLWR1bW15Lm8gLW8gLi8ueGVuLXN5bXMuMA0KPj4gYXJtLWxp
bnV4LWdudWVhYmloZi1sZDogcHJlbGluay5vOiBpbiBmdW5jdGlvbiBgY3JlYXRlX2RvbVVzJzoN
Cj4+ICguaW5pdC50ZXh0KzB4MTM0NjQpOiB1bmRlZmluZWQgcmVmZXJlbmNlIHRvIGBzdmVfZG9t
Y3RsX3ZsX3BhcmFtJw0KPiANCj4gSW4gcGFydGljdWxhciB3aXRoIHNlZWluZyB0aGlzOiBXaGF0
IHlvdSBjb3BpZWQgaGVyZSBpcyBhIGJ1aWxkIHdpdGggdGhlDQo+IHNlcmllcyBhcHBsaWVkIG9u
bHkgdXAgdG8gdGhpcyBwYXRjaD8gSSBhc2sgYmVjYXVzZSB0aGUgcGF0Y2ggaGVyZSBhZGRzIGEN
Cj4gY2FsbCBvbmx5IG91dCBvZiBjcmVhdGVfZG9tMCgpLg0KDQpObyBJ4oCZdmUgZG8gdGhlIGNo
YW5nZXMgb24gdG9wIG9mIHRoZSBzZXJpZSwgSeKAmXZlIHRyaWVkIGl0IG5vdywgb25seSB0byB0
aGlzIHBhdGNoIGFuZCBpdCBidWlsZHMgY29ycmVjdGx5LA0KSXQgd2FzIG15IG1pc3Rha2UgdG8g
ZG9u4oCZdCByZWFkIGNhcmVmdWxseSB0aGUgZXJyb3Igb3V0cHV0Lg0KDQpBbnl3YXkgSSBndWVz
cyB0aGlzIGNoYW5nZSBpcyBub3QgYXBwbGljYWJsZSBiZWNhdXNlIHdlIGRvbuKAmXQgaGF2ZSBh
IHN5bWJvbCB0aGF0IGlzIHBsYWluIDAgZm9yIGRvbVVzDQp0byBiZSBwbGFjZWQgaW5zaWRlIGNy
ZWF0ZV9kb21Vcy4NCg0KPiANCj4gSmFuDQo+IA0KPj4gYXJtLWxpbnV4LWdudWVhYmloZi1sZDog
KC5pbml0LnRleHQrMHgxMzZiNCk6IHVuZGVmaW5lZCByZWZlcmVuY2UgdG8gYHN2ZV9kb21jdGxf
dmxfcGFyYW0nDQo+PiBhcm0tbGludXgtZ251ZWFiaWhmLWxkOiAuLy54ZW4tc3ltcy4wOiBoaWRk
ZW4gc3ltYm9sIGBzdmVfZG9tY3RsX3ZsX3BhcmFtJyBpc24ndCBkZWZpbmVkDQo+PiBhcm0tbGlu
dXgtZ251ZWFiaWhmLWxkOiBmaW5hbCBsaW5rIGZhaWxlZDogYmFkIHZhbHVlDQo+PiBtYWtlWzNd
OiAqKiogWy9kYXRhX3NkYy9sdWNmYW4wMS9raXJrc3RvbmVfeGVuL3hlbi94ZW4vYXJjaC9hcm0v
TWFrZWZpbGU6OTU6IHhlbi1zeW1zXSBFcnJvciAxDQo+PiBtYWtlWzJdOiAqKiogWy9kYXRhX3Nk
Yy9sdWNmYW4wMS9raXJrc3RvbmVfeGVuL3hlbi94ZW4vLi9idWlsZC5tazo5MDogeGVuXSBFcnJv
ciAyDQo+PiBtYWtlWzFdOiAqKiogWy9kYXRhX3NkYy9sdWNmYW4wMS9raXJrc3RvbmVfeGVuL3hl
bi94ZW4vTWFrZWZpbGU6NTkwOiB4ZW5dIEVycm9yIDINCj4+IG1ha2VbMV06IExlYXZpbmcgZGly
ZWN0b3J5ICcvZGF0YV9zZGMvbHVjZmFuMDEva2lya3N0b25lX3hlbi9idWlsZC94ZW4tcWVtdS1h
cm0zMicNCj4+IG1ha2U6ICoqKiBbTWFrZWZpbGU6MTgxOiBfX3N1Yi1tYWtlXSBFcnJvciAyDQo+
PiBtYWtlOiBMZWF2aW5nIGRpcmVjdG9yeSAnL2RhdGFfc2RjL2x1Y2ZhbjAxL2tpcmtzdG9uZV94
ZW4veGVuL3hlbuKAmQ0KPj4gDQo+PiBUaGVzZSBhcmUgdGhlIG1vZGlmaWNhdGlvbiBJ4oCZdmUg
ZG9uZToNCj4+IA0KPj4gZGlmZiAtLWdpdCBhL3hlbi9hcmNoL2FybS9pbmNsdWRlL2FzbS9hcm02
NC9zdmUuaCBiL3hlbi9hcmNoL2FybS9pbmNsdWRlL2FzbS9hcm02NC9zdmUuaA0KPj4gaW5kZXgg
NzFiZGRiNDFmMTljLi4zMzBjNDdlYTg4NjQgMTAwNjQ0DQo+PiAtLS0gYS94ZW4vYXJjaC9hcm0v
aW5jbHVkZS9hc20vYXJtNjQvc3ZlLmgNCj4+ICsrKyBiL3hlbi9hcmNoL2FybS9pbmNsdWRlL2Fz
bS9hcm02NC9zdmUuaA0KPj4gQEAgLTI0LDYgKzI0LDggQEAgc3RhdGljIGlubGluZSB1bnNpZ25l
ZCBpbnQgc3ZlX2VuY29kZV92bCh1bnNpZ25lZCBpbnQgc3ZlX3ZsX2JpdHMpDQo+PiAgICAgcmV0
dXJuIHN2ZV92bF9iaXRzIC8gU1ZFX1ZMX01VTFRJUExFX1ZBTDsNCj4+IH0NCj4+IA0KPj4gK2Jv
b2wgc3ZlX2RvbWN0bF92bF9wYXJhbShpbnQgdmFsLCB1bnNpZ25lZCBpbnQgKm91dCk7DQo+PiAr
DQo+PiAjaWZkZWYgQ09ORklHX0FSTTY0X1NWRQ0KPj4gDQo+PiBleHRlcm4gaW50IG9wdF9kb20w
X3N2ZTsNCj4+IEBAIC0zNyw3ICszOSw2IEBAIGludCBzdmVfY29udGV4dF9pbml0KHN0cnVjdCB2
Y3B1ICp2KTsNCj4+IHZvaWQgc3ZlX2NvbnRleHRfZnJlZShzdHJ1Y3QgdmNwdSAqdik7DQo+PiB2
b2lkIHN2ZV9zYXZlX3N0YXRlKHN0cnVjdCB2Y3B1ICp2KTsNCj4+IHZvaWQgc3ZlX3Jlc3RvcmVf
c3RhdGUoc3RydWN0IHZjcHUgKnYpOw0KPj4gLWJvb2wgc3ZlX2RvbWN0bF92bF9wYXJhbShpbnQg
dmFsLCB1bnNpZ25lZCBpbnQgKm91dCk7DQo+PiANCj4+ICNlbHNlIC8qICFDT05GSUdfQVJNNjRf
U1ZFICovDQo+PiANCj4+IEBAIC02OCwxMSArNjksNiBAQCBzdGF0aWMgaW5saW5lIHZvaWQgc3Zl
X2NvbnRleHRfZnJlZShzdHJ1Y3QgdmNwdSAqdikge30NCj4+IHN0YXRpYyBpbmxpbmUgdm9pZCBz
dmVfc2F2ZV9zdGF0ZShzdHJ1Y3QgdmNwdSAqdikge30NCj4+IHN0YXRpYyBpbmxpbmUgdm9pZCBz
dmVfcmVzdG9yZV9zdGF0ZShzdHJ1Y3QgdmNwdSAqdikge30NCj4+IA0KPj4gLXN0YXRpYyBpbmxp
bmUgYm9vbCBzdmVfZG9tY3RsX3ZsX3BhcmFtKGludCB2YWwsIHVuc2lnbmVkIGludCAqb3V0KQ0K
Pj4gLXsNCj4+IC0gICAgcmV0dXJuIGZhbHNlOw0KPj4gLX0NCj4+IC0NCj4+ICNlbmRpZiAvKiBD
T05GSUdfQVJNNjRfU1ZFICovDQo+PiANCj4+ICNlbmRpZiAvKiBfQVJNX0FSTTY0X1NWRV9IICov
DQoNCg0K


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:35:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 15:35:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525513.816778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqyDY-0008SH-Jo; Mon, 24 Apr 2023 15:35:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525513.816778; Mon, 24 Apr 2023 15:35:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqyDY-0008SA-Gx; Mon, 24 Apr 2023 15:35:04 +0000
Received: by outflank-mailman (input) for mailman id 525513;
 Mon, 24 Apr 2023 15:35:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HiTc=AP=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pqyDX-0008QR-8z
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 15:35:03 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 92853caf-e2b5-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 17:35:00 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 0175D5C0190;
 Mon, 24 Apr 2023 11:34:59 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Mon, 24 Apr 2023 11:34:59 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 24 Apr 2023 11:34:57 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92853caf-e2b5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1682350498; x=1682436898; bh=5z2xL8Dc89vedDFBEfOdP2o8+4s0AnBLxIg
	AbTHdUKs=; b=SMzb+eXJ3/Yc9ihXsw99AGKXjo/GZk7WwjGzLlP2ENTsuyfzDlP
	moqcazokhSGQNXz764EwropL5Y9BBf0ViUSodmO6aE011vaD7+6kEwf5y1Iyxlh1
	KbTo8vLNqhKYYseCFqwX02O65nUM5JwZUN4HJ9Bw5ZQVo7cJ05G8jgz9JV6XfxTD
	QlXT99E9O6WoEuNn46jE/4Uk9ybwLacTFRm9l5g29xv5962lHggzXxkWvgatOscm
	ROkpwvofOzCPWLpw1krrOBikESJLAt/ajSh1XPWx/jrYPYpwlkT3yndl+kzOzoyP
	u/fZEnC3KZjcHyZmpZqxDOnT4u4hiJ07v3w==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1682350498; x=1682436898; bh=5z2xL8Dc89ved
	DFBEfOdP2o8+4s0AnBLxIgAbTHdUKs=; b=BasBoKXeay3N+zSIQKdtetQxxsCjb
	WLo2AZpxcG13UuCiP5lB0igkLMK0lyAVgYjow665gK/Np4jECyIZCeBDqmsDhAS5
	EyqFT0ERMMctPkOEN3de4BKYM4zKGEz9UQm5rpRhK9Mgm4lgmD8GUcvGtiRdjwCF
	EQdZ7q7v7le/X5Ub3IZQ7cZw14Ee/dTI4g0UKdF7ukL4iFq5Na+qNyIAON8hqDfo
	Q9vITvN6UC9poGx9aCF/X+PY7NG5+up3lm11xw0jL7qUbeknMSF5KecwCAk5i0S7
	CdKmvaaUSMxjJ9MNzMpdxgchkU+To47B+s5G/Hncb6j/seHRoIfFh2+yg==
X-ME-Sender: <xms:oqFGZFpqelXkbsdMGf_Qw1MZO81cO5qPvVcpYLAzdKamhUpBDcuLIQ>
    <xme:oqFGZHpAmr_vgKm9HEyO_nKvxXAjsiABcvr2evAg1yPRFo-dKnfD9vu02UOa_6d-j
    9ObNqwt9xoM2A>
X-ME-Received: <xmr:oqFGZCNoc17PXQY0sV5ms7GTtjhMFizxMQoi-7McqFhN0mmboeb9d-iY2eySVsL6-JbpEDb2HaBi7ws5Acutf6dXgzxOh5KfEYU>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedutddgledtucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtroertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepleeu
    tdevfeffueelgfduieevuefftdekheevjeeiiefgtedttdefgfekheduteefnecuffhomh
    grihhnpehkvghrnhgvlhdrohhrghenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgr
    mhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhm
X-ME-Proxy: <xmx:oqFGZA412oV-6fnXs-37gm76-3LQqD68ZhaREG11xH6kAc9i58VmxQ>
    <xmx:oqFGZE7vvlI301n2j3HIoD-vO-ZJJGtcWi9DtAniuJflU4sBC7TCsg>
    <xmx:oqFGZIiUGocgbyhXMSjuxT1561pvU5eiFUh48wKMj8tyniHregqepA>
    <xmx:oqFGZEFYTwadJj8UtBlbh9_pWWwmBOip0RahwsYLMsfcLE5QcMRWeg>
Feedback-ID: i1568416f:Fastmail
Date: Mon, 24 Apr 2023 17:34:52 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3 4/4] x86/msi: clear initial MSI-X state on boot
Message-ID: <ZEahnVfwnDgLwodp@mail-itl>
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
 <6984a8571dac35d04c85117834d99b00fe1c4184.1680752649.git-series.marmarek@invisiblethingslab.com>
 <4eb45940-5615-2398-633d-e5f59dc6987d@suse.com>
 <CAKf6xps2nVoYL6LtOqW2UBHadNSQzkb1XAe7WRxXmLzyN3kAGQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="w/O0S1i4n57pGbhV"
Content-Disposition: inline
In-Reply-To: <CAKf6xps2nVoYL6LtOqW2UBHadNSQzkb1XAe7WRxXmLzyN3kAGQ@mail.gmail.com>


On Mon, Apr 24, 2023 at 11:25:01AM -0400, Jason Andryuk wrote:
> On Mon, Apr 24, 2023 at 10:19 AM Jan Beulich <jbeulich@suse.com> wrote:
> >
> > On 06.04.2023 05:57, Marek Marczykowski-Górecki wrote:
> > > Some firmware/devices are found to not reset MSI-X properly, leaving
> > > MASKALL set. Jason reports on his machine MASKALL persists through a
> > > warm reboot, but is cleared on cold boot. Xen relies on initial state
> > > being MASKALL clear. Especially, pci_reset_msix_state() assumes if
> > > MASKALL is set, it was Xen setting it due to msix->host_maskall or
> > > msix->guest_maskall. Clearing just MASKALL might be unsafe if ENABLE is
> > > set, so clear them both.
> > >
> > > Reported-by: Jason Andryuk <jandryuk@gmail.com>
> > > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> >
> > Reviewed-by: Jan Beulich <jbeulich@suse.com>
> > albeit with a couple of nits (which I'd be happy to address while
> > committing, so long as you agree).

Yes, thanks!

> > First one being on the last
> > sentence above: It's surely not just "might"; if resetting already
> > doesn't work right, nothing says that the individual mask bits all
> > end up set. Clearing ENABLE as well is only natural imo, if we
> > already need to fix up after firmware. So maybe "Even if so far not
> > observed to be left set, clear ENABLE as well"?
> >
> > > --- a/xen/drivers/passthrough/msi.c
> > > +++ b/xen/drivers/passthrough/msi.c
> > > @@ -46,6 +46,23 @@ int pdev_msi_init(struct pci_dev *pdev)
> > >          spin_lock_init(&msix->table_lock);
> > >
> > >          ctrl = pci_conf_read16(pdev->sbdf, msix_control_reg(pos));
> > > +
> > > +        if ( ctrl & (PCI_MSIX_FLAGS_MASKALL|PCI_MSIX_FLAGS_ENABLE) )
> >
> > Style (missing blanks around |; once more below).
> >
> > > +        {
> > > +            /*
> > > +             * pci_reset_msix_state() relies on MASKALL not being set
> > > +             * initially, clear it (and ENABLE too - for safety), to meet that
> > > +             * expectation.
> > > +             */
> > > +            printk(XENLOG_WARNING
> > > +                   "%pp: unexpected initial MSI-X state (MASKALL=%d, ENABLE=%d), fixing\n",
> > > +                   &pdev->sbdf,
> > > +                   (ctrl & PCI_MSIX_FLAGS_MASKALL) ? 1 : 0,
> > > +                   (ctrl & PCI_MSIX_FLAGS_ENABLE) ? 1 : 0);
> >
> > Our "canonical" way of dealing with this is !!(x & y).
> >
> > > +            ctrl &= ~(PCI_MSIX_FLAGS_ENABLE|PCI_MSIX_FLAGS_MASKALL);
> > > +            pci_conf_write16(pdev->sbdf, msix_control_reg(pos), ctrl);
> > > +        }
> > > +
> > >          msix->nr_entries = msix_table_size(ctrl);
> > >
> > >          pdev->msix = msix;
> >
> >
> > Aiui there's no dependency here on the earlier patches in the series;
> > please confirm (or otherwise).

Indeed. An earlier patch uncovered a firmware (or such) issue on some
systems and this patch deals with it, but it doesn't depend on earlier
patches.

> > Jason - any chance of getting a Tested-by: from you?
>
> I'm building v3 now.  v2 worked for clearing MASKALL on initial boot.
>
> I posted in these two messages - a summary is below.
> https://lore.kernel.org/xen-devel/CAKf6xpto87QRSKT2qc1yApNfaw2SrLLxPoytYJv_jEbYTAbjCg@mail.gmail.com/
> https://lore.kernel.org/xen-devel/CAKf6xptHALLR-Qjf=p5y0o9Ud2V7eFMJuB8Ap-PLjv-N7PAJVQ@mail.gmail.com/
>
> OpenXT has a patch that performs an extra reset after domain shutdown,
> and that causes Xen to set MASKALL.  I confirmed by removing it.  So
> this patch helps with clearing MASKALL on host boot, but with the
> OpenXT patch, rebooting a domain fails.  MASKALL gets set on VM
> shutdown and then the subsequent boot can't assign the device.
>=20
> So this patch is helpful in some scenarios, but it was also an issue
> caused by the OpenXT patch.  Does that make it unsuitable for
> inclusion?  I assume the OpenXT patch wasn't an issue previously since
> MSI-X was never enabled.

Upstream Xen IMO should deal with whatever state it gets on boot,
regardless of what was running previously (the actual issue is likely in
the firmware or the device itself, in that it doesn't clear that bit,
but well...). So, rebooting from OpenXT into vanilla upstream Xen should
result in a fully functional system. That's why I included this patch,
but haven't dealt with the issue the OpenXT patch causes on subsequent
domain startups (as it doesn't apply to the upstream code base).

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:36:38 2023
Message-ID: <2c424759-3072-cd07-913d-c45ae6791ce2@suse.com>
Date: Mon, 24 Apr 2023 17:35:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v5 2/4] xen/riscv: introduce setup_initial_pages
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Connor Davis
 <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
 <5b27693bcdf6d64381314aeef72cfe03dee8d73a.1681918194.git.oleksii.kurochko@gmail.com>
 <67d8574f-2e0d-4eb6-19aa-67fe7645e35a@suse.com>
 <ea2d5cfabb9ada64eb975369779ca430f38e9eec.camel@gmail.com>
 <53257ae8-d306-8c7e-35ff-f3bc3947849b@suse.com>
 <3d440048717892fe5d3ed7fe3255dc8c9f5d38a3.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <3d440048717892fe5d3ed7fe3255dc8c9f5d38a3.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 24.04.2023 17:16, Oleksii wrote:
> On Mon, 2023-04-24 at 12:18 +0200, Jan Beulich wrote:
>> On 21.04.2023 18:01, Oleksii wrote:
>>> On Thu, 2023-04-20 at 16:36 +0200, Jan Beulich wrote:
>>>> On 19.04.2023 17:42, Oleksii Kurochko wrote:
>>>>> +    csr_write(CSR_SATP,
>>>>> +              ((unsigned long)stage1_pgtbl_root >> PAGE_SHIFT) |
>>>>> +              satp_mode << SATP_MODE_SHIFT);
>>>>> +
>>>>> +    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == satp_mode )
>>>>> +        is_mode_supported = true;
>>>>> +
>>>>> +    /* Clean MMU root page table and disable MMU */
>>>>> +    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
>>>>> +
>>>>> +    csr_write(CSR_SATP, 0);
>>>>> +    asm volatile("sfence.vma");
>>>>
>>>> I guess what you do in this function could do with some more
>>>> comments. Looks like you're briefly enabling the MMU to check that
>>>> what you wrote to SATP you can also read back. (Isn't there a
>>>> register reporting whether the feature is available?)
>>> I supposed there has to be one, but I couldn't find such a register
>>> in the docs.
>>
>> Well, yes, interestingly the register is marked WARL, so apparently
>> intended to be used for probing like you do. (I find the definition
>> of WARL a little odd though, as such writes supposedly aren't
>> necessarily value preserving. For SATP this might mean that
>> translation is enabled by a write of an unsupported mode, with a
>> different number of levels. This isn't going to work very well, I'm
>> afraid.)
> Agree. It will be an issue in the case of a different number of levels.
> 
> Then it looks like there is no way to check whether a SATP mode is
> supported.
> 
> So we have to rely on the developer having specified RV_STAGE1_MODE
> correctly in the config file.

Well, maybe the spec could be clarified in this regard. That WARL
behavior may be okay for some registers, but as said I think it isn't
enough of a guarantee for SATP probing. Alistair, Bob - any thoughts?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:42:06 2023
Message-ID: <a268313d-03be-9281-3627-c38115d3e5de@suse.com>
Date: Mon, 24 Apr 2023 17:41:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
 <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
 <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
 <37C35493-D5DA-4102-9B93-0045732E6F94@arm.com>
 <d49f1df6-ac49-27ef-d55f-b6284c76b055@suse.com>
 <5535FDB0-989E-4536-AF7B-8F0BB561667A@arm.com>
 <bd064b44-3531-a1b0-a7a8-1ad7ae434394@suse.com>
 <300BE89F-CA37-4A28-9CC5-5875E10D4A0C@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <300BE89F-CA37-4A28-9CC5-5875E10D4A0C@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 24.04.2023 17:34, Luca Fancellu wrote:
>> On 24 Apr 2023, at 16:25, Jan Beulich <jbeulich@suse.com> wrote:
>> On 24.04.2023 17:18, Luca Fancellu wrote:
>>>> On 24 Apr 2023, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 24.04.2023 16:57, Luca Fancellu wrote:
>>>>>> On 24 Apr 2023, at 15:05, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>> On 24.04.2023 16:00, Luca Fancellu wrote:
>>>>>>>> On 24 Apr 2023, at 12:34, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>> On 24.04.2023 08:02, Luca Fancellu wrote:
>>>>>>>>> @@ -30,9 +37,11 @@ int sve_context_init(struct vcpu *v);
>>>>>>>>> void sve_context_free(struct vcpu *v);
>>>>>>>>> void sve_save_state(struct vcpu *v);
>>>>>>>>> void sve_restore_state(struct vcpu *v);
>>>>>>>>> +bool sve_domctl_vl_param(int val, unsigned int *out);
>>>>>>>>>
>>>>>>>>> #else /* !CONFIG_ARM64_SVE */
>>>>>>>>>
>>>>>>>>> +#define opt_dom0_sve     (0)
>>>>>>>>> #define is_sve_domain(d) (0)
>>>>>>>>>
>>>>>>>>> static inline register_t compute_max_zcr(void)
>>>>>>>>> @@ -59,6 +68,11 @@ static inline void sve_context_free(struct vcpu *v) {}
>>>>>>>>> static inline void sve_save_state(struct vcpu *v) {}
>>>>>>>>> static inline void sve_restore_state(struct vcpu *v) {}
>>>>>>>>>
>>>>>>>>> +static inline bool sve_domctl_vl_param(int val, unsigned int *out)
>>>>>>>>> +{
>>>>>>>>> +    return false;
>>>>>>>>> +}
>>>>>>>>
>>>>>>>> Once again I don't see the need for this stub: opt_dom0_sve is #define-d
>>>>>>>> to plain zero when !ARM64_SVE, so the only call site merely requires a
>>>>>>>> visible declaration, and DCE will take care of eliminating the actual call.
>>>>>>>
>>>>>>> I've tried to do that: I've put the declaration outside the ifdef so that it was always included
>>>>>>> and removed the stub, but I got compilation errors because of an undefined function.
>>>>>>> For that reason I left that change out.
>>>>>>
>>>>>> Interesting. I don't see where the reference would be coming from.
>>>>>
>>>>> Could it be because the declaration is visible, outside the ifdef, but the definition is not compiled in? 
>>>>
>>>> Well, yes, likely. But the question isn't that but "Why did the reference
>>>> not get removed, when it's inside an if(0) block?"
>>>
>>> Oh ok, I don't know; here is what I get if, for example, I build arm32:
>>>
>>> arm-linux-gnueabihf-ld -EL -T arch/arm/xen.lds -N prelink.o \
>>> ./common/symbols-dummy.o -o ./.xen-syms.0
>>> arm-linux-gnueabihf-ld: prelink.o: in function `create_domUs':
>>> (.init.text+0x13464): undefined reference to `sve_domctl_vl_param'
>>
>> In particular, seeing this: is what you copied here from a build with the
>> series applied only up to this patch? I ask because the patch here adds a
>> call only from create_dom0().
> 
> No, I’ve done the changes on top of the series. I’ve now tried it with only up to this patch applied, and it builds correctly;
> it was my mistake not to read the error output carefully.
> 
> Anyway, I guess this change is not applicable, because we don’t have a symbol that is plain 0 for domUs
> to be placed inside create_domUs.

Possible, but would you mind first telling me in which other patch(es) the
further reference(s) are being introduced, so I could take a look without
(again) digging through the entire series?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 15:44:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 15:44:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525535.816807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqyMT-0002zm-3v; Mon, 24 Apr 2023 15:44:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525535.816807; Mon, 24 Apr 2023 15:44:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqyMT-0002zf-1K; Mon, 24 Apr 2023 15:44:17 +0000
Received: by outflank-mailman (input) for mailman id 525535;
 Mon, 24 Apr 2023 15:44:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhGg=AP=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pqyMS-0002zX-8P
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 15:44:16 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0605.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dcceec8a-e2b6-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 17:44:14 +0200 (CEST)
Received: from AS8P251CA0030.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:2f2::19)
 by DU0PR08MB9274.eurprd08.prod.outlook.com (2603:10a6:10:41a::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.32; Mon, 24 Apr
 2023 15:44:11 +0000
Received: from AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2f2:cafe::91) by AS8P251CA0030.outlook.office365.com
 (2603:10a6:20b:2f2::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 15:44:11 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT025.mail.protection.outlook.com (100.127.140.199) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 15:44:10 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Mon, 24 Apr 2023 15:44:10 +0000
Received: from 7b026c4c8a00.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B21E1024-AECE-4393-8E46-C486930AEDBA.1; 
 Mon, 24 Apr 2023 15:43:58 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7b026c4c8a00.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 24 Apr 2023 15:43:58 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by GV2PR08MB8701.eurprd08.prod.outlook.com (2603:10a6:150:b9::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Mon, 24 Apr
 2023 15:43:56 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 15:43:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dcceec8a-e2b6-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sFcqQxD4pWX21r8QWLsCpwqkdRYPl2u+rTazAYdJ27A=;
 b=jJYvCJz4gpF0MfvddLguNF3vMYlYRK3MDffAo+fUjpqDHgDYcLXWWz0qTt4RXHK9fZMuexfuv9ZnIl+0DC7Rpjr1Lfyc5yG725cnrMv8sQc9wCj/tMfJHPMsJiAlfH0X2upSdzjiP/qgQxCWBgpJnSnoFGq47ygRsrz4vqqTyo4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2061f3b29d5fb529
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Thread-Index:
 AQHZdnKG11PjGcmQuE2MpJrElBiS3686VJGAgAAo1ICAAAFsAIAADk4AgAACvwCAAANFAIAAAd0AgAACk4CAAAHsAIAAALeA
Date: Mon, 24 Apr 2023 15:43:56 +0000
Message-ID: <B534E482-71BF-4C5F-B9A8-3D567367F7AA@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
 <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
 <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
 <37C35493-D5DA-4102-9B93-0045732E6F94@arm.com>
 <d49f1df6-ac49-27ef-d55f-b6284c76b055@suse.com>
 <5535FDB0-989E-4536-AF7B-8F0BB561667A@arm.com>
 <bd064b44-3531-a1b0-a7a8-1ad7ae434394@suse.com>
 <300BE89F-CA37-4A28-9CC5-5875E10D4A0C@arm.com>
 <a268313d-03be-9281-3627-c38115d3e5de@suse.com>
In-Reply-To: <a268313d-03be-9281-3627-c38115d3e5de@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|GV2PR08MB8701:EE_|AM7EUR03FT025:EE_|DU0PR08MB9274:EE_
X-MS-Office365-Filtering-Correlation-Id: b2808641-cbb5-4269-d4e4-08db44dabf65
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <281D286DE30CA44DAB8373871AC36ABA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8701
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1b812720-6992-432f-58d4-08db44dab6c2
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 15:44:10.6957
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b2808641-cbb5-4269-d4e4-08db44dabf65
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9274



> On 24 Apr 2023, at 16:41, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 24.04.2023 17:34, Luca Fancellu wrote:
>>> On 24 Apr 2023, at 16:25, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 24.04.2023 17:18, Luca Fancellu wrote:
>>>>> On 24 Apr 2023, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> On 24.04.2023 16:57, Luca Fancellu wrote:
>>>>>>> On 24 Apr 2023, at 15:05, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>> On 24.04.2023 16:00, Luca Fancellu wrote:
>>>>>>>>> On 24 Apr 2023, at 12:34, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>> On 24.04.2023 08:02, Luca Fancellu wrote:
>>>>>>>>>> @@ -30,9 +37,11 @@ int sve_context_init(struct vcpu *v);
>>>>>>>>>> void sve_context_free(struct vcpu *v);
>>>>>>>>>> void sve_save_state(struct vcpu *v);
>>>>>>>>>> void sve_restore_state(struct vcpu *v);
>>>>>>>>>> +bool sve_domctl_vl_param(int val, unsigned int *out);
>>>>>>>>>> 
>>>>>>>>>> #else /* !CONFIG_ARM64_SVE */
>>>>>>>>>> 
>>>>>>>>>> +#define opt_dom0_sve     (0)
>>>>>>>>>> #define is_sve_domain(d) (0)
>>>>>>>>>> 
>>>>>>>>>> static inline register_t compute_max_zcr(void)
>>>>>>>>>> @@ -59,6 +68,11 @@ static inline void sve_context_free(struct vcpu *v) {}
>>>>>>>>>> static inline void sve_save_state(struct vcpu *v) {}
>>>>>>>>>> static inline void sve_restore_state(struct vcpu *v) {}
>>>>>>>>>> 
>>>>>>>>>> +static inline bool sve_domctl_vl_param(int val, unsigned int *out)
>>>>>>>>>> +{
>>>>>>>>>> +    return false;
>>>>>>>>>> +}
>>>>>>>>> 
>>>>>>>>> Once again I don't see the need for this stub: opt_dom0_sve is #define-d
>>>>>>>>> to plain zero when !ARM64_SVE, so the only call site merely requires a
>>>>>>>>> visible declaration, and DCE will take care of eliminating the actual call.
>>>>>>>> 
>>>>>>>> I’ve tried that: I put the declaration outside the ifdef, so that it was always visible,
>>>>>>>> and removed the stub, but I got compilation errors because of an undefined function.
>>>>>>>> For that reason I left that change out.
>>>>>>> 
>>>>>>> Interesting. I don't see where the reference would be coming from.
>>>>>> 
>>>>>> Could it be because the declaration is visible, outside the ifdef, but the definition is not compiled in?
>>>>> 
>>>>> Well, yes, likely. But the question isn't that but "Why did the reference
>>>>> not get removed, when it's inside an if(0) block?"
>>>> 
>>>> Oh ok, I don’t know; here is what I get if, for example, I build arm32:
>>>> 
>>>> arm-linux-gnueabihf-ld -EL -T arch/arm/xen.lds -N prelink.o \
>>>> ./common/symbols-dummy.o -o ./.xen-syms.0
>>>> arm-linux-gnueabihf-ld: prelink.o: in function `create_domUs':
>>>> (.init.text+0x13464): undefined reference to `sve_domctl_vl_param'
>>> 
>>> In particular, seeing this: is what you copied here from a build with the
>>> series applied only up to this patch? I ask because the patch here adds a
>>> call only from create_dom0().
>> 
>> No, I’ve done the changes on top of the series. I’ve now tried it with only up to this patch applied, and it builds correctly;
>> it was my mistake not to read the error output carefully.
>> 
>> Anyway, I guess this change is not applicable, because we don’t have a symbol that is plain 0 for domUs
>> to be placed inside create_domUs.
> 
> Possible, but would you mind first telling me in which other patch(es) the
> further reference(s) are being introduced, so I could take a look without
> (again) digging through the entire series?

Sure, the other references to the function are introduced in "xen/arm: add sve property for dom0less domUs" (patch 11).

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 16:10:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 16:10:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525545.816818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqylb-00074i-6I; Mon, 24 Apr 2023 16:10:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525545.816818; Mon, 24 Apr 2023 16:10:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqylb-00074b-39; Mon, 24 Apr 2023 16:10:15 +0000
Received: by outflank-mailman (input) for mailman id 525545;
 Mon, 24 Apr 2023 16:10:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0Hh8=AP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pqylZ-00072v-RE
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 16:10:13 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on061c.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::61c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7c408d3b-e2ba-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 18:10:09 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7882.eurprd04.prod.outlook.com (2603:10a6:10:1e7::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 16:10:07 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Mon, 24 Apr 2023
 16:10:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c408d3b-e2ba-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PKf1rklL4kim2o6be/N/6Qkhxt83x3us1rIf78YHmi8=;
 b=DH1pwA66/ONbVaIDC88JfF8qZgxx3TSKkKTxfBZ+rZwKLIxhGzERmu1n7lWj4IqdHilGyI8jU8X7A6TlxwgJ9wMBkT0R+yN3ZtH6eLTYbWoE8Kf1gXFSi9vWK/sLftUT8TbCT1e/6CSiF6DsNsP5AKOP2FJt4a85fbu6LeziE9zjrnO1Ax5QLuO8VEc6QSYY4dSw4n5pXM/2aeZ2+r5UsDvMeU0ifMmf+gmhOA+E5P/OZOUrfQzEzVX1K1AeSEx3Dx+2nw5e0Kjrkn5HW/EG1/gJSb9D4XpLBZhjcdWM/1WWT/kF3rH9ZGBW5Lz8o1tFWEpt+tultjzqaKwESKVMpA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f9e631e1-02bb-a565-4df4-ccbb66fbaf49@suse.com>
Date: Mon, 24 Apr 2023 18:10:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
 <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
 <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
 <37C35493-D5DA-4102-9B93-0045732E6F94@arm.com>
 <d49f1df6-ac49-27ef-d55f-b6284c76b055@suse.com>
 <5535FDB0-989E-4536-AF7B-8F0BB561667A@arm.com>
 <bd064b44-3531-a1b0-a7a8-1ad7ae434394@suse.com>
 <300BE89F-CA37-4A28-9CC5-5875E10D4A0C@arm.com>
 <a268313d-03be-9281-3627-c38115d3e5de@suse.com>
 <B534E482-71BF-4C5F-B9A8-3D567367F7AA@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <B534E482-71BF-4C5F-B9A8-3D567367F7AA@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0137.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7882:EE_
X-MS-Office365-Filtering-Correlation-Id: 479e3b65-8108-4e9a-ce1f-08db44de5e9f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 479e3b65-8108-4e9a-ce1f-08db44de5e9f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 16:10:06.5220
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 84M04vLhHAZHY0JYCLDxQWIbLVaqg8X6ln2uWcTLWaSVvYhQQpCJicjRwKCfvZiTb0jUoe9mHmOY/Xe8GN37uQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7882

On 24.04.2023 17:43, Luca Fancellu wrote:
>> On 24 Apr 2023, at 16:41, Jan Beulich <jbeulich@suse.com> wrote:
>> On 24.04.2023 17:34, Luca Fancellu wrote:
>>>> On 24 Apr 2023, at 16:25, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 24.04.2023 17:18, Luca Fancellu wrote:
>>>>> Oh ok, I don’t know; here is what I get if, for example, I build arm32:
>>>>>
>>>>> arm-linux-gnueabihf-ld -EL -T arch/arm/xen.lds -N prelink.o \
>>>>> ./common/symbols-dummy.o -o ./.xen-syms.0
>>>>> arm-linux-gnueabihf-ld: prelink.o: in function `create_domUs':
>>>>> (.init.text+0x13464): undefined reference to `sve_domctl_vl_param'
>>>>
>>>> In particular, seeing this: is what you copied here from a build with the
>>>> series applied only up to this patch? I ask because the patch here adds a
>>>> call only from create_dom0().
>>>
>>> No, I’ve done the changes on top of the series. I’ve now tried it with only up to this patch applied, and it builds correctly;
>>> it was my mistake not to read the error output carefully.
>>>
>>> Anyway, I guess this change is not applicable, because we don’t have a symbol that is plain 0 for domUs
>>> to be placed inside create_domUs.
>>
>> Possible, but would you mind first telling me in which other patch(es) the
>> further reference(s) are being introduced, so I could take a look without
>> (again) digging through the entire series?
> 
> Sure, the other references to the function are introduced in "xen/arm: add sve property for dom0less domUs" (patch 11).

Personally I'm inclined to suggest adding "#ifdef CONFIG_ARM64_SVE" there.
But I guess that may again go against your desire not to ignore inapplicable
options. Still, I can't resist at least asking how an "sve" node on Arm32
differs from an entirely unknown one.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 16:42:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 16:42:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525554.816828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqzH1-0002Cg-Oq; Mon, 24 Apr 2023 16:42:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525554.816828; Mon, 24 Apr 2023 16:42:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pqzH1-0002CZ-MH; Mon, 24 Apr 2023 16:42:43 +0000
Received: by outflank-mailman (input) for mailman id 525554;
 Mon, 24 Apr 2023 16:42:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zAuZ=AP=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pqzGz-0002CT-Vi
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 16:42:42 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 06a482e6-e2bf-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 18:42:39 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id
 4fb4d7f45d1cf-5055141a8fdso6798517a12.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Apr 2023 09:42:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06a482e6-e2bf-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682354559; x=1684946559;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=NxfJ7PXp6Ka4kQy3f5Kbc6uM9GAl3Q7uoq/tXeCesYc=;
        b=CCFlJU5L61AcW9mGIxLgclBWO9Tq9H4hfwk+n+GSTwBtaXG4VMJJO63UPFY16YAwnA
         IzDy9kCPZJUfFF+7TLKicB8K3VFNDWeH6X776nGEVelcGXMaiVnbtCgug/40VdQCeG4o
         Ts5eH5c5rC+qTy1kkQZAZfoAmFm4xw3ZJOJy7dMN8kNv8V4o97ub8sor8FJh5UB/ZrCe
         4IjV800l8twen3XqGS/ZXpk++igrOaE/y6g8c2r4iwiAg+r8PKtjsMwHlRMqSlakJ+pE
         Yugiz5udB95mhMyeyOVMkIpNp6ZpTVJuqiFqvHhSRL6zYslPlFC0DJbI00BQRAqSSmPW
         ILkQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682354559; x=1684946559;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=NxfJ7PXp6Ka4kQy3f5Kbc6uM9GAl3Q7uoq/tXeCesYc=;
        b=BeXNFvdw56cRVyGO8waghUPW0SZ+4y7gpb4+Uv635Qp29c7fT9YTOxMubM73oITKlD
         Yyo1LZiIWu/2Oizp5JMXQMyKznJ/Fc4r/yxPmIQIUWF6qbuUNH5rYlqBEYmrCiR3W3sU
         jAuyBB3WHpmS0LNuYze9j1kNS/3cHoCqwKEZ8mOTi9r58T7Mh8bIn1x6pCQVKGxPnCFm
         X3aE8p9/gKCsPiXamghaVqE64l5MnQFIjgyL7pVPqlzHo5IVRcDICwFAY4b1T23sDzFH
         +rBNzfAbFq1njLx/6ivsGS3MBAOyFq8j+tlry6FbCQp+bD2aszEpNCY7kycS7BauzaKG
         n7vQ==
X-Gm-Message-State: AAQBX9fOv8HHwCQRL4CMqqq7I9LPcNwvn61ccCv96rC0cUgDAWnjFHM/
	Tx8tzzM5OR0IuW9z0iVzB0hf6eJr7iHabLxTGFg=
X-Google-Smtp-Source: AKy350aFHfq1BQDh8mDjW8ADJ9SKpwkE+6vwXksE8DDkLzrKKKewt3J1pYje+gp2oZg93dWnvcjIq8muRVUr6GiHO+8=
X-Received: by 2002:aa7:d796:0:b0:506:b8ca:e07e with SMTP id
 s22-20020aa7d796000000b00506b8cae07emr11220329edq.11.1682354559115; Mon, 24
 Apr 2023 09:42:39 -0700 (PDT)
MIME-Version: 1.0
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
 <6984a8571dac35d04c85117834d99b00fe1c4184.1680752649.git-series.marmarek@invisiblethingslab.com>
 <4eb45940-5615-2398-633d-e5f59dc6987d@suse.com> <CAKf6xps2nVoYL6LtOqW2UBHadNSQzkb1XAe7WRxXmLzyN3kAGQ@mail.gmail.com>
 <50a0883c-efb8-9456-7dac-a01cca3a17cf@suse.com>
In-Reply-To: <50a0883c-efb8-9456-7dac-a01cca3a17cf@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 24 Apr 2023 12:42:26 -0400
Message-ID: <CAKf6xpuWfFojO7_CX=uZoJGwRmiPe06DDNyhu4tqFvd8D+WzLg@mail.gmail.com>
Subject: Re: [PATCH v3 4/4] x86/msi: clear initial MSI-X state on boot
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
	Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Apr 24, 2023 at 11:30 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 24.04.2023 17:25, Jason Andryuk wrote:
> > On Mon, Apr 24, 2023 at 10:19 AM Jan Beulich <jbeulich@suse.com> wrote:
> >> Jason - any chance of getting a Tested-by: from you?
> >
> > I'm building v3 now.  v2  worked for clearing MASKALL on initial boot.
> >
> > I posted in these two messages - a summary is below.
> > https://lore.kernel.org/xen-devel/CAKf6xpto87QRSKT2qc1yApNfaw2SrLLxPoytYJv_jEbYTAbjCg@mail.gmail.com/
> > https://lore.kernel.org/xen-devel/CAKf6xptHALLR-Qjf=p5y0o9Ud2V7eFMJuB8Ap-PLjv-N7PAJVQ@mail.gmail.com/
> >
> > OpenXT has a patch that performs an extra reset after domain shutdown,
> > and that causes Xen to set MASKALL.  I confirmed by removing it.  So
> > this patch helps with clearing MASKALL on host boot, but with the
> > OpenXT patch, rebooting a domain fails.  MASKALL gets set on VM
> > shutdown and then the subsequent boot can't assign the device.
> >
> > So this patch is helpful in some scenarios, but it was also an issue
> > caused by the OpenXT patch.  Does that make it unsuitable for
> > inclusion?
>
> What is "it" here? If I get your reply right, there is a similar issue
> left unaddressed by this version of the change (and as was said before,
> a device reset changing state that Xen tracks or otherwise cares about
> needs to be reported to Xen). Yet that doesn't really fit with the
> question, at least the way I read it ...

"So this patch is helpful in some scenarios, but setting MASKALL in
the first place is an issue caused by the OpenXT patch.  Does that
make this patch unsuitable for inclusion?"

I think Marek's response that "Xen IMO should deal with the state it
gets on boot, regardless of what was running previously" makes sense
and means this is worthy of inclusion.

And I tested it.  Without the OpenXT libxl-fix-flr.patch:
(XEN) 0000:00:14.3: unexpected initial MSI-X state (MASKALL=0, ENABLE=1), fixing
With the OpenXT patch:
(XEN) 0000:00:14.3: unexpected initial MSI-X state (MASKALL=1, ENABLE=1), fixing

Tested-by: Jason Andryuk <jandryuk@gmail.com>

The patch is here if anyone wants to look:
https://github.com/OpenXT/xenclient-oe/blob/master/recipes-extended/xen/files/libxl-fix-flr.patch

It's calling libxl__device_pci_reset() from destroy_finish_check(), so
it's not trying to do anything behind Xen's back.  It's just that Xen
sees memory decoding disabled, and then sets MASKALL.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 17:59:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 17:59:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525578.816838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr0Sb-00015v-6w; Mon, 24 Apr 2023 17:58:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525578.816838; Mon, 24 Apr 2023 17:58:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr0Sb-00015o-43; Mon, 24 Apr 2023 17:58:45 +0000
Received: by outflank-mailman (input) for mailman id 525578;
 Mon, 24 Apr 2023 17:58:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SIZu=AP=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pr0SZ-00015i-4D
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 17:58:43 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a45c57c4-e2c9-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 19:58:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a45c57c4-e2c9-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1682359118;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/B8ViPkT84zF77YJVg/Vg+DuX3x6sDAhKrSrx63B0t8=;
	b=zNVUbHm89L6WRj2i94+vPyVn3/KO1LB2C60TWY+kKf2/e8X9/9XtdKbM639oCyDfzo+kF6
	y7T/sJmhL5ytO4DUh414kpf+vt4CvUe/YodEpDD02hfzlBINbTEIvI3SsdLqezsCo+5tFW
	AvbLWyv3qGi42txG7eIeRdoNoCDf86KN/QgJXAPPGmhTl1TqeCcHqSPF7eM9+ujvoD5z/m
	AjrKuWx0mKwm8kHTyjPIB1SSadCZzqxSWvKRYA61NK+fA/i4N78zaZprmbihpUvGoCep3s
	2RQE5w5ABRrYdGBrSZjZPmPvtUXxeogAae/i/g9/9f8KekX5LyUd6mtLrTj6qw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1682359118;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/B8ViPkT84zF77YJVg/Vg+DuX3x6sDAhKrSrx63B0t8=;
	b=ZaMRZ/Y/wVrPL6Kd1U5Zo8HZlUCLMRTrTHQ/7Xvf+abFcSRFeSo9Dw3jXKeMeUtXfr3+up
	aT9nxD885g2bqnAA==
To: Brian Gerst <brgerst@gmail.com>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Arjan van
 de Veen <arjan@linux.intel.com>, Paolo Bonzini <pbonzini@redhat.com>, Paul
 McKenney <paulmck@kernel.org>, Tom Lendacky <thomas.lendacky@amd.com>, Sean
 Christopherson <seanjc@google.com>, Oleksandr Natalenko
 <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 35/37] x86/smpboot: Support parallel startup of
 secondary CPUs
In-Reply-To: <87mt38yhwh.ffs@tglx>
References: <20230414225551.858160935@linutronix.de>
 <20230414232311.379210081@linutronix.de>
 <CAMzpN2hUbYpYrqDL1ViXUWGKGa7mDEG6iHtWEZg9GvrAoRgvKQ@mail.gmail.com>
 <87mt38yhwh.ffs@tglx>
Date: Mon, 24 Apr 2023 19:58:36 +0200
Message-ID: <878reh17sj.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Sat, Apr 15 2023 at 23:06, Thomas Gleixner wrote:

> On Sat, Apr 15 2023 at 09:22, Brian Gerst wrote:
>> On Fri, Apr 14, 2023 at 7:45 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>>> @@ -248,10 +311,20 @@ SYM_INNER_LABEL(secondary_startup_64_no_
>>>          *
>>>          * RDX contains the per-cpu offset
>>>          */
>>> -       movq    pcpu_hot + X86_current_task(%rdx), %rax
>>> -       movq    TASK_threadsp(%rax), %rsp
>>> +       movq    pcpu_hot + X86_top_of_stack(%rdx), %rsp
>>
>> Switching to using pcpu_hot.top_of_stack is ok, but it's not
>> completely equivalent.  top_of_stack points to the end of the pt_regs
>> structure, while the kernel stack starts below pt_regs even for kernel
>> threads.  So you need to subtract PTREGS_SIZE from the stack pointer
>> after this.
>>
>> This change should also be a separate patch.
>
> You're right on both counts.

Actually no. We can't do that as this breaks suspend/resume (again).

/me drops it.


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 18:47:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 18:47:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525583.816848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr1DN-0006V0-Sb; Mon, 24 Apr 2023 18:47:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525583.816848; Mon, 24 Apr 2023 18:47:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr1DN-0006Ut-PI; Mon, 24 Apr 2023 18:47:05 +0000
Received: by outflank-mailman (input) for mailman id 525583;
 Mon, 24 Apr 2023 18:47:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kRCY=AP=molgen.mpg.de=pmenzel@srs-se1.protection.inumbo.net>)
 id 1pr1DM-0006Un-I4
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 18:47:04 +0000
Received: from mx3.molgen.mpg.de (mx3.molgen.mpg.de [141.14.17.11])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 650afa21-e2d0-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 20:47:00 +0200 (CEST)
Received: from [192.168.0.2] (ip5f5aebe8.dynamic.kabel-deutschland.de
 [95.90.235.232])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested) (Authenticated sender: pmenzel)
 by mx.molgen.mpg.de (Postfix) with ESMTPSA id E281A61E4052B;
 Mon, 24 Apr 2023 20:46:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 650afa21-e2d0-11ed-8611-37d641c3527e
Content-Type: multipart/mixed; boundary="------------lVKi9UGmWM0EmezMyrPPBq8L"
Message-ID: <2ff626d6-534e-7b34-8186-7f2612c1c569@molgen.mpg.de>
Date: Mon, 24 Apr 2023 20:46:54 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Sean Christopherson <seanjc@google.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, linux-kernel@vger.kernel.org,
 x86@kernel.org, David Woodhouse <dwmw2@infradead.org>,
 Brian Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, David Woodhouse
 <dwmw@amazon.co.uk>, Usama Arif <usama.arif@bytedance.com>,
 =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E. J. Bottomley" <James.Bottomley@hansenpartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
References: <87r0sh4m7a.ffs@tglx>
 <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de> <87a5z443g2.ffs@tglx>
 <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
 <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
 <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com> <871qkf3qek.ffs@tglx>
 <26d385da-2ede-5d73-2959-84c8f7d89e03@citrix.com> <87y1mm3iqz.ffs@tglx>
 <ZEFRhXua6Jxvit1R@google.com> <87v8hq35sk.ffs@tglx>
 <56e59a4d-a47f-4bfe-7db5-5f921062ad69@molgen.mpg.de> <87sfcu2wup.ffs@tglx>
Content-Language: en-US
From: Paul Menzel <pmenzel@molgen.mpg.de>
In-Reply-To: <87sfcu2wup.ffs@tglx>

This is a multi-part message in MIME format.
--------------lVKi9UGmWM0EmezMyrPPBq8L
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Dear Thomas,


Am 20.04.23 um 21:10 schrieb Thomas Gleixner:
> On Thu, Apr 20 2023 at 18:47, Paul Menzel wrote:
>> Am 20.04.23 um 17:57 schrieb Thomas Gleixner:
>> I quickly applied it on top of your branch, but I am getting:
> 
> As I said it was untested. I was traveling and did not have access to a
> machine to even build it completely. Fixed up and tested version below.

Sorry if it sounded like a complaint. I just wanted to give quick 
feedback.

[…]

I already tested your new version on Friday, and it worked fine – no 
ten-second delay. Please find the messages attached.

Thank you all for your great work.


Kind regards,

Paul


PS: I am going to try to test your updated branch at the end of the week.
--------------lVKi9UGmWM0EmezMyrPPBq8L
Content-Type: text/plain; charset=UTF-8;
 name="kodi-linux-6.3-rc3-smp-tglx-with-apic-fix.txt"
Content-Disposition: attachment;
 filename="kodi-linux-6.3-rc3-smp-tglx-with-apic-fix.txt"
Content-Transfer-Encoding: base64

WyAgICAwLjAwMDAwMF0gTGludXggdmVyc2lvbiA2LjMuMC1yYzMtMDAwNDYtZzhiYTY0M2Q3
ZTFjNyAocm9vdEBiZjE2ZjM2NDZhODQpIChnY2MgKERlYmlhbiAxMS4yLjAtMTIpIDExLjIu
MCwgR05VIGxkIChHTlUgQmludXRpbHMgZm9yIERlYmlhbikgMi40MCkgIzQ1MiBTTVAgUFJF
RU1QVF9EWU5BTUlDIFRodSBBcHIgMjAgMjA6MTU6MDEgVVRDIDIwMjMKWyAgICAwLjAwMDAw
MF0gQ29tbWFuZCBsaW5lOiBCT09UX0lNQUdFPS9ib290L3ZtbGludXotNi4zLjAtcmMzLTAw
MDQ2LWc4YmE2NDNkN2UxYzcgcm9vdD0vZGV2L3NkYTMgcncgcXVpZXQgbm9pc2FwbnAgY3J5
cHRvbWdyLm5vdGVzdHMgaXB2Ni5kaXNhYmxlX2lwdjY9MSBzZWxpbnV4PTAKWyAgICAwLjAw
MDAwMF0geDg2L2ZwdTogU3VwcG9ydGluZyBYU0FWRSBmZWF0dXJlIDB4MDAxOiAneDg3IGZs
b2F0aW5nIHBvaW50IHJlZ2lzdGVycycKWyAgICAwLjAwMDAwMF0geDg2L2ZwdTogU3VwcG9y
dGluZyBYU0FWRSBmZWF0dXJlIDB4MDAyOiAnU1NFIHJlZ2lzdGVycycKWyAgICAwLjAwMDAw
MF0geDg2L2ZwdTogU3VwcG9ydGluZyBYU0FWRSBmZWF0dXJlIDB4MDA0OiAnQVZYIHJlZ2lz
dGVycycKWyAgICAwLjAwMDAwMF0geDg2L2ZwdTogeHN0YXRlX29mZnNldFsyXTogIDU3Niwg
eHN0YXRlX3NpemVzWzJdOiAgMjU2ClsgICAgMC4wMDAwMDBdIHg4Ni9mcHU6IEVuYWJsZWQg
eHN0YXRlIGZlYXR1cmVzIDB4NywgY29udGV4dCBzaXplIGlzIDgzMiBieXRlcywgdXNpbmcg
J3N0YW5kYXJkJyBmb3JtYXQuClsgICAgMC4wMDAwMDBdIHNpZ25hbDogbWF4IHNpZ2ZyYW1l
IHNpemU6IDE3NzYKWyAgICAwLjAwMDAwMF0gQklPUy1wcm92aWRlZCBwaHlzaWNhbCBSQU0g
bWFwOgpbICAgIDAuMDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAwMDAwMDAwMDAw
LTB4MDAwMDAwMDAwMDA5ZmJmZl0gdXNhYmxlClsgICAgMC4wMDAwMDBdIEJJT1MtZTgyMDog
W21lbSAweDAwMDAwMDAwMDAwOWZjMDAtMHgwMDAwMDAwMDAwMDlmZmZmXSByZXNlcnZlZApb
ICAgIDAuMDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAwMDAwMGYwMDAwLTB4MDAw
MDAwMDAwMDBmZmZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gQklPUy1lODIwOiBbbWVt
IDB4MDAwMDAwMDAwMDEwMDAwMC0weDAwMDAwMDAwNWZlNGNmZmZdIHVzYWJsZQpbICAgIDAu
MDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAwMDVmZTRkMDAwLTB4MDAwMDAwMDA3
ZmZmZmZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gQklPUy1lODIwOiBbbWVtIDB4MDAw
MDAwMDBmODAwMDAwMC0weDAwMDAwMDAwZmJmZmZmZmZdIHJlc2VydmVkClsgICAgMC4wMDAw
MDBdIEJJT1MtZTgyMDogW21lbSAweDAwMDAwMDAwZmVjMTAwMDAtMHgwMDAwMDAwMGZlYzEw
ZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBCSU9TLWU4MjA6IFttZW0gMHgwMDAwMDAw
MTAwMDAwMDAwLTB4MDAwMDAwMDE3ZWZmZmZmZl0gdXNhYmxlClsgICAgMC4wMDAwMDBdIE5Y
IChFeGVjdXRlIERpc2FibGUpIHByb3RlY3Rpb246IGFjdGl2ZQpbICAgIDAuMDAwMDAwXSBT
TUJJT1MgMy4wLjAgcHJlc2VudC4KWyAgICAwLjAwMDAwMF0gRE1JOiBBU1VTIEYyQTg1LU1f
UFJPL0YyQTg1LU1fUFJPLCBCSU9TIDQuMTgtOS1nYjY0MGVkNTFiMiAwNC8xNy8yMDIzClsg
ICAgMC4wMDAwMDBdIHRzYzogRmFzdCBUU0MgY2FsaWJyYXRpb24gdXNpbmcgUElUClsgICAg
MC4wMDAwMDBdIHRzYzogRGV0ZWN0ZWQgMzkwMC40NDAgTUh6IHByb2Nlc3NvcgpbICAgIDAu
MDAwNzU2XSBlODIwOiB1cGRhdGUgW21lbSAweDAwMDAwMDAwLTB4MDAwMDBmZmZdIHVzYWJs
ZSA9PT4gcmVzZXJ2ZWQKWyAgICAwLjAwMDc1OV0gZTgyMDogcmVtb3ZlIFttZW0gMHgwMDBh
MDAwMC0weDAwMGZmZmZmXSB1c2FibGUKWyAgICAwLjAwMDc2M10gbGFzdF9wZm4gPSAweDE3
ZjAwMCBtYXhfYXJjaF9wZm4gPSAweDQwMDAwMDAwMApbICAgIDAuMDAwNzY4XSB4ODYvUEFU
OiBDb25maWd1cmF0aW9uIFswLTddOiBXQiAgV0MgIFVDLSBVQyAgV0IgIFdQICBVQy0gV1Qg
IApbICAgIDAuMDAwOTM4XSBsYXN0X3BmbiA9IDB4NWZlNGQgbWF4X2FyY2hfcGZuID0gMHg0
MDAwMDAwMDAKWyAgICAwLjAwNDAwMF0gVXNpbmcgR0IgcGFnZXMgZm9yIGRpcmVjdCBtYXBw
aW5nClsgICAgMC4wMDQwMDBdIEFDUEk6IEVhcmx5IHRhYmxlIGNoZWNrc3VtIHZlcmlmaWNh
dGlvbiBkaXNhYmxlZApbICAgIDAuMDA0MDAwXSBBQ1BJOiBSU0RQIDB4MDAwMDAwMDAwMDBG
NjgzMCAwMDAwMjQgKHYwMiBDT1JFdjQpClsgICAgMC4wMDQwMDBdIEFDUEk6IFhTRFQgMHgw
MDAwMDAwMDVGRTVBMEUwIDAwMDA3NCAodjAxIENPUkV2NCBDT1JFQk9PVCAwMDAwMDAwMCBD
T1JFIDIwMjAwOTI1KQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBGQUNQIDB4MDAwMDAwMDA1RkU1
QkJDMCAwMDAxMTQgKHYwNiBDT1JFdjQgQ09SRUJPT1QgMDAwMDAwMDAgQ09SRSAyMDIwMDky
NSkKWyAgICAwLjAwNDAwMF0gQUNQSTogRFNEVCAweDAwMDAwMDAwNUZFNUEyODAgMDAxOTNB
ICh2MDIgQ09SRXY0IENPUkVCT09UIDAwMDEwMDAxIElOVEwgMjAyMDA5MjUpClsgICAgMC4w
MDQwMDBdIEFDUEk6IEZBQ1MgMHgwMDAwMDAwMDVGRTVBMjQwIDAwMDA0MApbICAgIDAuMDA0
MDAwXSBBQ1BJOiBGQUNTIDB4MDAwMDAwMDA1RkU1QTI0MCAwMDAwNDAKWyAgICAwLjAwNDAw
MF0gQUNQSTogU1NEVCAweDAwMDAwMDAwNUZFNUJDRTAgMDAwMDhBICh2MDIgQ09SRXY0IENP
UkVCT09UIDAwMDAwMDJBIENPUkUgMjAyMDA5MjUpClsgICAgMC4wMDQwMDBdIEFDUEk6IE1D
RkcgMHgwMDAwMDAwMDVGRTVCRDcwIDAwMDAzQyAodjAxIENPUkV2NCBDT1JFQk9PVCAwMDAw
MDAwMCBDT1JFIDIwMjAwOTI1KQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBBUElDIDB4MDAwMDAw
MDA1RkU1QkRCMCAwMDAwNjIgKHYwMyBDT1JFdjQgQ09SRUJPT1QgMDAwMDAwMDAgQ09SRSAy
MDIwMDkyNSkKWyAgICAwLjAwNDAwMF0gQUNQSTogSFBFVCAweDAwMDAwMDAwNUZFNUJFMjAg
MDAwMDM4ICh2MDEgQ09SRXY0IENPUkVCT09UIDAwMDAwMDAwIENPUkUgMjAyMDA5MjUpClsg
ICAgMC4wMDQwMDBdIEFDUEk6IEhFU1QgMHgwMDAwMDAwMDVGRTVCRTYwIDAwMDFEMCAodjAx
IENPUkV2NCBDT1JFQk9PVCAwMDAwMDAwMCBDT1JFIDIwMjAwOTI1KQpbICAgIDAuMDA0MDAw
XSBBQ1BJOiBJVlJTIDB4MDAwMDAwMDA1RkU1QzAzMCAwMDAwNzAgKHYwMiBBTUQgICAgQU1E
SU9NTVUgMDAwMDAwMDEgQU1EICAwMDAwMDAwMCkKWyAgICAwLjAwNDAwMF0gQUNQSTogU1NE
VCAweDAwMDAwMDAwNUZFNUMwQTAgMDAwNTFGICh2MDIgQU1EICAgIEFMSUIgICAgIDAwMDAw
MDAxIE1TRlQgMDQwMDAwMDApClsgICAgMC4wMDQwMDBdIEFDUEk6IFNTRFQgMHgwMDAwMDAw
MDVGRTVDNUMwIDAwMDZCMiAodjAxIEFNRCAgICBQT1dFUk5PVyAwMDAwMDAwMSBBTUQgIDAw
MDAwMDAxKQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBWRkNUIDB4MDAwMDAwMDA1RkU1Q0M4MCAw
MEYyNjkgKHYwMSBDT1JFdjQgQ09SRUJPT1QgMDAwMDAwMDAgQ09SRSAyMDIwMDkyNSkKWyAg
ICAwLjAwNDAwMF0gQUNQSTogUmVzZXJ2aW5nIEZBQ1AgdGFibGUgbWVtb3J5IGF0IFttZW0g
MHg1ZmU1YmJjMC0weDVmZTViY2QzXQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBSZXNlcnZpbmcg
RFNEVCB0YWJsZSBtZW1vcnkgYXQgW21lbSAweDVmZTVhMjgwLTB4NWZlNWJiYjldClsgICAg
MC4wMDQwMDBdIEFDUEk6IFJlc2VydmluZyBGQUNTIHRhYmxlIG1lbW9yeSBhdCBbbWVtIDB4
NWZlNWEyNDAtMHg1ZmU1YTI3Zl0KWyAgICAwLjAwNDAwMF0gQUNQSTogUmVzZXJ2aW5nIEZB
Q1MgdGFibGUgbWVtb3J5IGF0IFttZW0gMHg1ZmU1YTI0MC0weDVmZTVhMjdmXQpbICAgIDAu
MDA0MDAwXSBBQ1BJOiBSZXNlcnZpbmcgU1NEVCB0YWJsZSBtZW1vcnkgYXQgW21lbSAweDVm
ZTViY2UwLTB4NWZlNWJkNjldClsgICAgMC4wMDQwMDBdIEFDUEk6IFJlc2VydmluZyBNQ0ZH
IHRhYmxlIG1lbW9yeSBhdCBbbWVtIDB4NWZlNWJkNzAtMHg1ZmU1YmRhYl0KWyAgICAwLjAw
NDAwMF0gQUNQSTogUmVzZXJ2aW5nIEFQSUMgdGFibGUgbWVtb3J5IGF0IFttZW0gMHg1ZmU1
YmRiMC0weDVmZTViZTExXQpbICAgIDAuMDA0MDAwXSBBQ1BJOiBSZXNlcnZpbmcgSFBFVCB0
YWJsZSBtZW1vcnkgYXQgW21lbSAweDVmZTViZTIwLTB4NWZlNWJlNTddClsgICAgMC4wMDQw
MDBdIEFDUEk6IFJlc2VydmluZyBIRVNUIHRhYmxlIG1lbW9yeSBhdCBbbWVtIDB4NWZlNWJl
NjAtMHg1ZmU1YzAyZl0KWyAgICAwLjAwNDAwMF0gQUNQSTogUmVzZXJ2aW5nIElWUlMgdGFi
bGUgbWVtb3J5IGF0IFttZW0gMHg1ZmU1YzAzMC0weDVmZTVjMDlmXQpbICAgIDAuMDA0MDAw
XSBBQ1BJOiBSZXNlcnZpbmcgU1NEVCB0YWJsZSBtZW1vcnkgYXQgW21lbSAweDVmZTVjMGEw
LTB4NWZlNWM1YmVdClsgICAgMC4wMDQwMDBdIEFDUEk6IFJlc2VydmluZyBTU0RUIHRhYmxl
IG1lbW9yeSBhdCBbbWVtIDB4NWZlNWM1YzAtMHg1ZmU1Y2M3MV0KWyAgICAwLjAwNDAwMF0g
QUNQSTogUmVzZXJ2aW5nIFZGQ1QgdGFibGUgbWVtb3J5IGF0IFttZW0gMHg1ZmU1Y2M4MC0w
eDVmZTZiZWU4XQpbICAgIDAuMDA0MDAwXSBObyBOVU1BIGNvbmZpZ3VyYXRpb24gZm91bmQK
WyAgICAwLjAwNDAwMF0gRmFraW5nIGEgbm9kZSBhdCBbbWVtIDB4MDAwMDAwMDAwMDAwMDAw
MC0weDAwMDAwMDAxN2VmZmZmZmZdClsgICAgMC4wMDQwMDBdIE5PREVfREFUQSgwKSBhbGxv
Y2F0ZWQgW21lbSAweDE3ZWZlOTAwMC0weDE3ZWZmZmZmZl0KWyAgICAwLjAwNDAwMF0gWm9u
ZSByYW5nZXM6ClsgICAgMC4wMDQwMDBdICAgRE1BICAgICAgW21lbSAweDAwMDAwMDAwMDAw
MDEwMDAtMHgwMDAwMDAwMDAwZmZmZmZmXQpbICAgIDAuMDA0MDAwXSAgIERNQTMyICAgIFtt
ZW0gMHgwMDAwMDAwMDAxMDAwMDAwLTB4MDAwMDAwMDBmZmZmZmZmZl0KWyAgICAwLjAwNDAw
MF0gICBOb3JtYWwgICBbbWVtIDB4MDAwMDAwMDEwMDAwMDAwMC0weDAwMDAwMDAxN2VmZmZm
ZmZdClsgICAgMC4wMDQwMDBdICAgRGV2aWNlICAgZW1wdHkKWyAgICAwLjAwNDAwMF0gTW92
YWJsZSB6b25lIHN0YXJ0IGZvciBlYWNoIG5vZGUKWyAgICAwLjAwNDAwMF0gRWFybHkgbWVt
b3J5IG5vZGUgcmFuZ2VzClsgICAgMC4wMDQwMDBdICAgbm9kZSAgIDA6IFttZW0gMHgwMDAw
MDAwMDAwMDAxMDAwLTB4MDAwMDAwMDAwMDA5ZWZmZl0KWyAgICAwLjAwNDAwMF0gICBub2Rl
ICAgMDogW21lbSAweDAwMDAwMDAwMDAxMDAwMDAtMHgwMDAwMDAwMDVmZTRjZmZmXQpbICAg
IDAuMDA0MDAwXSAgIG5vZGUgICAwOiBbbWVtIDB4MDAwMDAwMDEwMDAwMDAwMC0weDAwMDAw
MDAxN2VmZmZmZmZdClsgICAgMC4wMDQwMDBdIEluaXRtZW0gc2V0dXAgbm9kZSAwIFttZW0g
MHgwMDAwMDAwMDAwMDAxMDAwLTB4MDAwMDAwMDE3ZWZmZmZmZl0KWyAgICAwLjAwNDAwMF0g
T24gbm9kZSAwLCB6b25lIERNQTogMSBwYWdlcyBpbiB1bmF2YWlsYWJsZSByYW5nZXMKWyAg
ICAwLjAwNDAwMF0gT24gbm9kZSAwLCB6b25lIERNQTogOTcgcGFnZXMgaW4gdW5hdmFpbGFi
bGUgcmFuZ2VzClsgICAgMC4wMDQwMDBdIE9uIG5vZGUgMCwgem9uZSBOb3JtYWw6IDQzNSBw
YWdlcyBpbiB1bmF2YWlsYWJsZSByYW5nZXMKWyAgICAwLjAwNDAwMF0gT24gbm9kZSAwLCB6
b25lIE5vcm1hbDogNDA5NiBwYWdlcyBpbiB1bmF2YWlsYWJsZSByYW5nZXMKWyAgICAwLjAw
NDAwMF0gQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg4MTgKWyAgICAwLjAwNDAwMF0gQUNQ
STogTEFQSUNfTk1JIChhY3BpX2lkWzB4ZmZdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsgICAg
MC4wMDQwMDBdIElPQVBJQ1swXTogYXBpY19pZCA0LCB2ZXJzaW9uIDMzLCBhZGRyZXNzIDB4
ZmVjMDAwMDAsIEdTSSAwLTIzClsgICAgMC4wMDQwMDBdIEFDUEk6IElOVF9TUkNfT1ZSIChi
dXMgMCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpClsgICAgMC4wMDQwMDBdIEFD
UEk6IElOVF9TUkNfT1ZSIChidXMgMCBidXNfaXJxIDkgZ2xvYmFsX2lycSA5IGxvdyBsZXZl
bCkKWyAgICAwLjAwNDAwMF0gQUNQSTogVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25m
aWd1cmF0aW9uIGluZm9ybWF0aW9uClsgICAgMC4wMDQwMDBdIEFDUEk6IEhQRVQgaWQ6IDB4
MTAyMjgyMTAgYmFzZTogMHhmZWQwMDAwMApbICAgIDAuMDA0MDAwXSBzbXBib290OiBBbGxv
d2luZyAyIENQVXMsIDAgaG90cGx1ZyBDUFVzClsgICAgMC4wMDQwMDBdIFttZW0gMHg4MDAw
MDAwMC0weGY3ZmZmZmZmXSBhdmFpbGFibGUgZm9yIFBDSSBkZXZpY2VzClsgICAgMC4wMDQw
MDBdIGNsb2Nrc291cmNlOiByZWZpbmVkLWppZmZpZXM6IG1hc2s6IDB4ZmZmZmZmZmYgbWF4
X2N5Y2xlczogMHhmZmZmZmZmZiwgbWF4X2lkbGVfbnM6IDc2NDU1MTk2MDAyMTE1NjggbnMK
WyAgICAwLjAwNDAwMF0gc2V0dXBfcGVyY3B1OiBOUl9DUFVTOjY0IG5yX2NwdW1hc2tfYml0
czoyIG5yX2NwdV9pZHM6MiBucl9ub2RlX2lkczoxClsgICAgMC4wMDQwMDBdIHBlcmNwdTog
RW1iZWRkZWQgNTUgcGFnZXMvY3B1IHMxODgzOTIgcjgxOTIgZDI4Njk2IHUxMDQ4NTc2Clsg
ICAgMC4wMDQwMDBdIHBjcHUtYWxsb2M6IHMxODgzOTIgcjgxOTIgZDI4Njk2IHUxMDQ4NTc2
IGFsbG9jPTEqMjA5NzE1MgpbICAgIDAuMDA0MDAwXSBwY3B1LWFsbG9jOiBbMF0gMCAxIApb
ICAgIDAuMDA0MDAwXSBGYWxsYmFjayBvcmRlciBmb3IgTm9kZSAwOiAwIApbICAgIDAuMDA0
MDAwXSBCdWlsdCAxIHpvbmVsaXN0cywgbW9iaWxpdHkgZ3JvdXBpbmcgb24uICBUb3RhbCBw
YWdlczogODk4NDUxClsgICAgMC4wMDQwMDBdIFBvbGljeSB6b25lOiBOb3JtYWwKWyAgICAw
LjAwNDAwMF0gS2VybmVsIGNvbW1hbmQgbGluZTogQk9PVF9JTUFHRT0vYm9vdC92bWxpbnV6
LTYuMy4wLXJjMy0wMDA0Ni1nOGJhNjQzZDdlMWM3IHJvb3Q9L2Rldi9zZGEzIHJ3IHF1aWV0
IG5vaXNhcG5wIGNyeXB0b21nci5ub3Rlc3RzIGlwdjYuZGlzYWJsZV9pcHY2PTEgc2VsaW51
eD0wClsgICAgMC4wMDQwMDBdIFVua25vd24ga2VybmVsIGNvbW1hbmQgbGluZSBwYXJhbWV0
ZXJzICJub2lzYXBucCBCT09UX0lNQUdFPS9ib290L3ZtbGludXotNi4zLjAtcmMzLTAwMDQ2
LWc4YmE2NDNkN2UxYzciLCB3aWxsIGJlIHBhc3NlZCB0byB1c2VyIHNwYWNlLgpbICAgIDAu
MDA0MDAwXSBEZW50cnkgY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA1MjQyODggKG9yZGVy
OiAxMCwgNDE5NDMwNCBieXRlcywgbGluZWFyKQpbICAgIDAuMDA0MDAwXSBJbm9kZS1jYWNo
ZSBoYXNoIHRhYmxlIGVudHJpZXM6IDI2MjE0NCAob3JkZXI6IDksIDIwOTcxNTIgYnl0ZXMs
IGxpbmVhcikKWyAgICAwLjAwNDAwMF0gbWVtIGF1dG8taW5pdDogc3RhY2s6b2ZmLCBoZWFw
IGFsbG9jOm9mZiwgaGVhcCBmcmVlOm9mZgpbICAgIDAuMDA0MDAwXSBzdGFja2RlcG90OiBh
bGxvY2F0aW5nIGhhc2ggdGFibGUgdmlhIGFsbG9jX2xhcmdlX3N5c3RlbV9oYXNoClsgICAg
MC4wMDQwMDBdIHN0YWNrZGVwb3QgaGFzaCB0YWJsZSBlbnRyaWVzOiAyNjIxNDQgKG9yZGVy
OiA5LCAyMDk3MTUyIGJ5dGVzLCBsaW5lYXIpClsgICAgMC4wMDQwMDBdIHNvZnR3YXJlIElP
IFRMQjogYXJlYSBudW0gMi4KWyAgICAwLjAwNDAwMF0gTWVtb3J5OiAzNDc3MTY4Sy8zNjUx
NTAwSyBhdmFpbGFibGUgKDE0MzM2SyBrZXJuZWwgY29kZSwgMjM0MEsgcndkYXRhLCA1MzA4
SyByb2RhdGEsIDI5MDhLIGluaXQsIDExMDY0SyBic3MsIDE3NDA3MksgcmVzZXJ2ZWQsIDBL
IGNtYS1yZXNlcnZlZCkKWyAgICAwLjAwNDAwMF0gU0xVQjogSFdhbGlnbj02NCwgT3JkZXI9
MC0zLCBNaW5PYmplY3RzPTAsIENQVXM9MiwgTm9kZXM9MQpbICAgIDAuMDA0MDAwXSBmdHJh
Y2U6IGFsbG9jYXRpbmcgMzg2NTIgZW50cmllcyBpbiAxNTEgcGFnZXMKWyAgICAwLjAwNDAw
MF0gZnRyYWNlOiBhbGxvY2F0ZWQgMTUxIHBhZ2VzIHdpdGggNSBncm91cHMKWyAgICAwLjAw
NDAwMF0gRHluYW1pYyBQcmVlbXB0OiBmdWxsClsgICAgMC4wMDQwMDBdIHJjdTogUHJlZW1w
dGlibGUgaGllcmFyY2hpY2FsIFJDVSBpbXBsZW1lbnRhdGlvbi4KWyAgICAwLjAwNDAwMF0g
cmN1OiAJUkNVIHJlc3RyaWN0aW5nIENQVXMgZnJvbSBOUl9DUFVTPTY0IHRvIG5yX2NwdV9p
ZHM9Mi4KWyAgICAwLjAwNDAwMF0gCVRyYW1wb2xpbmUgdmFyaWFudCBvZiBUYXNrcyBSQ1Ug
ZW5hYmxlZC4KWyAgICAwLjAwNDAwMF0gCVJ1ZGUgdmFyaWFudCBvZiBUYXNrcyBSQ1UgZW5h
YmxlZC4KWyAgICAwLjAwNDAwMF0gCVRyYWNpbmcgdmFyaWFudCBvZiBUYXNrcyBSQ1UgZW5h
YmxlZC4KWyAgICAwLjAwNDAwMF0gcmN1OiBSQ1UgY2FsY3VsYXRlZCB2YWx1ZSBvZiBzY2hl
ZHVsZXItZW5saXN0bWVudCBkZWxheSBpcyAyNSBqaWZmaWVzLgpbICAgIDAuMDA0MDAwXSBy
Y3U6IEFkanVzdGluZyBnZW9tZXRyeSBmb3IgcmN1X2Zhbm91dF9sZWFmPTE2LCBucl9jcHVf
aWRzPTIKWyAgICAwLjAwNDAwMF0gTlJfSVJRUzogNDM1MiwgbnJfaXJxczogNDQwLCBwcmVh
bGxvY2F0ZWQgaXJxczogMTYKWyAgICAwLjAwNDAwMF0gcmN1OiBzcmN1X2luaXQ6IFNldHRp
bmcgc3JjdV9zdHJ1Y3Qgc2l6ZXMgYmFzZWQgb24gY29udGVudGlvbi4KWyAgICAwLjAwNDAw
MF0gc3B1cmlvdXMgODI1OUEgaW50ZXJydXB0OiBJUlE3LgpbICAgIDAuMDA0MDAwXSBDb25z
b2xlOiBjb2xvdXIgVkdBKyA4MHgyNQpbICAgIDAuMDA0MDAwXSBwcmludGs6IGNvbnNvbGUg
W3R0eTBdIGVuYWJsZWQKWyAgICAwLjAwNDAwMF0gQUNQSTogQ29yZSByZXZpc2lvbiAyMDIy
MTAyMApbICAgIDAuMDA0MDAwXSBjbG9ja3NvdXJjZTogaHBldDogbWFzazogMHhmZmZmZmZm
ZiBtYXhfY3ljbGVzOiAweGZmZmZmZmZmLCBtYXhfaWRsZV9uczogMTMzNDg0ODczNTA0IG5z
ClsgICAgMC4wMDQwMDBdIEFQSUM6IFN3aXRjaCB0byBzeW1tZXRyaWMgSS9PIG1vZGUgc2V0
dXAKWyAgICAwLjAwNDAwMF0gQU1ELVZpOiBVc2luZyBnbG9iYWwgSVZIRCBFRlI6MHgwLCBF
RlIyOjB4MApbICAgIDAuMDA0MDAwXSAuLlRJTUVSOiB2ZWN0b3I9MHgzMCBhcGljMT0wIHBp
bjE9MiBhcGljMj0tMSBwaW4yPS0xClsgICAgMC4wMDQwMDBdIGNsb2Nrc291cmNlOiB0c2Mt
ZWFybHk6IG1hc2s6IDB4ZmZmZmZmZmZmZmZmZmZmZiBtYXhfY3ljbGVzOiAweDcwNzFmNGVk
MThmLCBtYXhfaWRsZV9uczogODgxNTkwNDIwODUwIG5zClsgICAgMC4xNDUzMzJdIENhbGli
cmF0aW5nIGRlbGF5IGxvb3AgKHNraXBwZWQpLCB2YWx1ZSBjYWxjdWxhdGVkIHVzaW5nIHRp
bWVyIGZyZXF1ZW5jeS4uIDc4MDAuODggQm9nb01JUFMgKGxwaj0xNTYwMTc2MCkKWyAgICAw
LjE0NTMzNl0gcGlkX21heDogZGVmYXVsdDogMzI3NjggbWluaW11bTogMzAxClsgICAgMC4x
NDU0MzBdIExTTTogaW5pdGlhbGl6aW5nIGxzbT1jYXBhYmlsaXR5ClsgICAgMC4xNDU1MjZd
IE1vdW50LWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogODE5MiAob3JkZXI6IDQsIDY1NTM2
IGJ5dGVzLCBsaW5lYXIpClsgICAgMC4xNDU1NDJdIE1vdW50cG9pbnQtY2FjaGUgaGFzaCB0
YWJsZSBlbnRyaWVzOiA4MTkyIChvcmRlcjogNCwgNjU1MzYgYnl0ZXMsIGxpbmVhcikKWyAg
ICAwLjE0NTk0MF0gTGFzdCBsZXZlbCBpVExCIGVudHJpZXM6IDRLQiA1MTIsIDJNQiAxMDI0
LCA0TUIgNTEyClsgICAgMC4xNDU5NDNdIExhc3QgbGV2ZWwgZFRMQiBlbnRyaWVzOiA0S0Ig
MTAyNCwgMk1CIDEwMjQsIDRNQiA1MTIsIDFHQiAwClsgICAgMC4xNDU5NDddIFNwZWN0cmUg
VjEgOiBNaXRpZ2F0aW9uOiB1c2VyY29weS9zd2FwZ3MgYmFycmllcnMgYW5kIF9fdXNlciBw
b2ludGVyIHNhbml0aXphdGlvbgpbICAgIDAuMTQ1OTUwXSBTcGVjdHJlIFYyIDogTWl0aWdh
dGlvbjogUmV0cG9saW5lcwpbICAgIDAuMTQ1OTUxXSBTcGVjdHJlIFYyIDogU3BlY3RyZSB2
MiAvIFNwZWN0cmVSU0IgbWl0aWdhdGlvbjogRmlsbGluZyBSU0Igb24gY29udGV4dCBzd2l0
Y2gKWyAgICAwLjE0NTk1Ml0gU3BlY3RyZSBWMiA6IFNwZWN0cmUgdjIgLyBTcGVjdHJlUlNC
IDogRmlsbGluZyBSU0Igb24gVk1FWElUClsgICAgMC4xNDU5NTJdIFNwZWN0cmUgVjIgOiBF
bmFibGluZyBTcGVjdWxhdGlvbiBCYXJyaWVyIGZvciBmaXJtd2FyZSBjYWxscwpbICAgIDAu
MTQ1OTUzXSBSRVRCbGVlZDogTWl0aWdhdGlvbjogdW50cmFpbmVkIHJldHVybiB0aHVuawpb
ICAgIDAuMTQ1OTU1XSBTcGVjdHJlIFYyIDogbWl0aWdhdGlvbjogRW5hYmxpbmcgY29uZGl0
aW9uYWwgSW5kaXJlY3QgQnJhbmNoIFByZWRpY3Rpb24gQmFycmllcgpbICAgIDAuMTQ1OTU3
XSBTcGVjdWxhdGl2ZSBTdG9yZSBCeXBhc3M6IE1pdGlnYXRpb246IFNwZWN1bGF0aXZlIFN0
b3JlIEJ5cGFzcyBkaXNhYmxlZCB2aWEgcHJjdGwKWyAgICAwLjE1MDQ2OV0gRnJlZWluZyBT
TVAgYWx0ZXJuYXRpdmVzIG1lbW9yeTogMzJLClsgICAgMC4yNTg2MDhdIHNtcGJvb3Q6IENQ
VTA6IEFNRCBBNi02NDAwSyBBUFUgd2l0aCBSYWRlb24odG0pIEhEIEdyYXBoaWNzIChmYW1p
bHk6IDB4MTUsIG1vZGVsOiAweDEzLCBzdGVwcGluZzogMHgxKQpbICAgIDAuMjU4ODQ1XSBj
Ymxpc3RfaW5pdF9nZW5lcmljOiBTZXR0aW5nIGFkanVzdGFibGUgbnVtYmVyIG9mIGNhbGxi
YWNrIHF1ZXVlcy4KWyAgICAwLjI1ODg0Nl0gY2JsaXN0X2luaXRfZ2VuZXJpYzogU2V0dGlu
ZyBzaGlmdCB0byAxIGFuZCBsaW0gdG8gMS4KWyAgICAwLjI1ODg3OF0gY2JsaXN0X2luaXRf
Z2VuZXJpYzogU2V0dGluZyBzaGlmdCB0byAxIGFuZCBsaW0gdG8gMS4KWyAgICAwLjI1ODkw
NV0gY2JsaXN0X2luaXRfZ2VuZXJpYzogU2V0dGluZyBzaGlmdCB0byAxIGFuZCBsaW0gdG8g
MS4KWyAgICAwLjI1ODkzM10gUGVyZm9ybWFuY2UgRXZlbnRzOiBGYW0xNWggY29yZSBwZXJm
Y3RyLCBBTUQgUE1VIGRyaXZlci4KWyAgICAwLjI1ODk1NV0gLi4uIHZlcnNpb246ICAgICAg
ICAgICAgICAgIDAKWyAgICAwLjI1ODk1Nl0gLi4uIGJpdCB3aWR0aDogICAgICAgICAgICAg
IDQ4ClsgICAgMC4yNTg5NTddIC4uLiBnZW5lcmljIHJlZ2lzdGVyczogICAgICA2ClsgICAg
MC4yNTg5NThdIC4uLiB2YWx1ZSBtYXNrOiAgICAgICAgICAgICAwMDAwZmZmZmZmZmZmZmZm
ClsgICAgMC4yNTg5NTldIC4uLiBtYXggcGVyaW9kOiAgICAgICAgICAgICAwMDAwN2ZmZmZm
ZmZmZmZmClsgICAgMC4yNTg5NjBdIC4uLiBmaXhlZC1wdXJwb3NlIGV2ZW50czogICAwClsg
ICAgMC4yNTg5NjFdIC4uLiBldmVudCBtYXNrOiAgICAgICAgICAgICAwMDAwMDAwMDAwMDAw
MDNmClsgICAgMC4yNTkwODNdIHJjdTogSGllcmFyY2hpY2FsIFNSQ1UgaW1wbGVtZW50YXRp
b24uClsgICAgMC4yNTkwODRdIHJjdTogCU1heCBwaGFzZSBuby1kZWxheSBpbnN0YW5jZXMg
aXMgMTAwMC4KWyAgICAwLjI1OTY3Nl0gTk1JIHdhdGNoZG9nOiBFbmFibGVkLiBQZXJtYW5l
bnRseSBjb25zdW1lcyBvbmUgaHctUE1VIGNvdW50ZXIuClsgICAgMC4yNTk3NTBdIHNtcDog
QnJpbmdpbmcgdXAgc2Vjb25kYXJ5IENQVXMgLi4uClsgICAgMC4yNTk5NTJdIHg4NjogQm9v
dGluZyBTTVAgY29uZmlndXJhdGlvbjoKWyAgICAwLjI1OTk1M10gLi4uLiBub2RlICAjMCwg
Q1BVczogICAgICAjMQpbICAgIDAuMjU5OTU4XSBzbXBib290OiBLaWNraW5nIEFQIGFsaXZl
OiAxNwpbICAgIDAuMjYwMDg4XSBzbXA6IEJyb3VnaHQgdXAgMSBub2RlLCAyIENQVXMKWyAg
ICAwLjI2MDA4OF0gc21wYm9vdDogTWF4IGxvZ2ljYWwgcGFja2FnZXM6IDEKWyAgICAwLjI2
MDA4OF0gc21wYm9vdDogVG90YWwgb2YgMiBwcm9jZXNzb3JzIGFjdGl2YXRlZCAoMTU2MDEu
NzYgQm9nb01JUFMpClsgICAgMC4yNjE1MTNdIGRldnRtcGZzOiBpbml0aWFsaXplZApbICAg
IDAuMjYxNTEzXSB4ODYvbW06IE1lbW9yeSBibG9jayBzaXplOiAxMjhNQgpbICAgIDAuMjYy
Mzg3XSBjbG9ja3NvdXJjZTogamlmZmllczogbWFzazogMHhmZmZmZmZmZiBtYXhfY3ljbGVz
OiAweGZmZmZmZmZmLCBtYXhfaWRsZV9uczogNzY0NTA0MTc4NTEwMDAwMCBucwpbICAgIDAu
MjYyMzg3XSBmdXRleCBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXI6IDMsIDMyNzY4
IGJ5dGVzLCBsaW5lYXIpClsgICAgMC4yNjIzODddIHBpbmN0cmwgY29yZTogaW5pdGlhbGl6
ZWQgcGluY3RybCBzdWJzeXN0ZW0KWyAgICAwLjI2MjM4N10gUE06IFJUQyB0aW1lOiAyMDo1
NjowNywgZGF0ZTogMjAyMy0wNC0yMQpbICAgIDAuMjYyMzg3XSBORVQ6IFJlZ2lzdGVyZWQg
UEZfTkVUTElOSy9QRl9ST1VURSBwcm90b2NvbCBmYW1pbHkKWyAgICAwLjI2MjYwMF0gYXVk
aXQ6IGluaXRpYWxpemluZyBuZXRsaW5rIHN1YnN5cyAoZGlzYWJsZWQpClsgICAgMC4yNjI2
MTddIGF1ZGl0OiB0eXBlPTIwMDAgYXVkaXQoMTY4MjExMDU2Ny4xNDA6MSk6IHN0YXRlPWlu
aXRpYWxpemVkIGF1ZGl0X2VuYWJsZWQ9MCByZXM9MQpbICAgIDAuMjYyNjE3XSB0aGVybWFs
X3N5czogUmVnaXN0ZXJlZCB0aGVybWFsIGdvdmVybm9yICdmYWlyX3NoYXJlJwpbICAgIDAu
MjYyNjE3XSB0aGVybWFsX3N5czogUmVnaXN0ZXJlZCB0aGVybWFsIGdvdmVybm9yICdiYW5n
X2JhbmcnClsgICAgMC4yNjI2MTddIHRoZXJtYWxfc3lzOiBSZWdpc3RlcmVkIHRoZXJtYWwg
Z292ZXJub3IgJ3N0ZXBfd2lzZScKWyAgICAwLjI2MjYxN10gdGhlcm1hbF9zeXM6IFJlZ2lz
dGVyZWQgdGhlcm1hbCBnb3Zlcm5vciAndXNlcl9zcGFjZScKWyAgICAwLjI2MjYxN10gY3B1
aWRsZTogdXNpbmcgZ292ZXJub3IgbGFkZGVyClsgICAgMC4yNjI2MTddIGNwdWlkbGU6IHVz
aW5nIGdvdmVybm9yIG1lbnUKWyAgICAwLjI2MjYxN10gUENJOiBNTUNPTkZJRyBmb3IgZG9t
YWluIDAwMDAgW2J1cyAwMC0zZl0gYXQgW21lbSAweGY4MDAwMDAwLTB4ZmJmZmZmZmZdIChi
YXNlIDB4ZjgwMDAwMDApClsgICAgMC4yNjI2MTddIFBDSTogTU1DT05GSUcgYXQgW21lbSAw
eGY4MDAwMDAwLTB4ZmJmZmZmZmZdIHJlc2VydmVkIGFzIEU4MjAgZW50cnkKWyAgICAwLjI2
MjYxN10gUENJOiBVc2luZyBjb25maWd1cmF0aW9uIHR5cGUgMSBmb3IgYmFzZSBhY2Nlc3MK
WyAgICAwLjI2MjYxN10ga3Byb2Jlczoga3Byb2JlIGp1bXAtb3B0aW1pemF0aW9uIGlzIGVu
YWJsZWQuIEFsbCBrcHJvYmVzIGFyZSBvcHRpbWl6ZWQgaWYgcG9zc2libGUuClsgICAgMC4y
NzMzNTZdIEh1Z2VUTEI6IHJlZ2lzdGVyZWQgMS4wMCBHaUIgcGFnZSBzaXplLCBwcmUtYWxs
b2NhdGVkIDAgcGFnZXMKWyAgICAwLjI3MzM1Nl0gSHVnZVRMQjogMTYzODAgS2lCIHZtZW1t
YXAgY2FuIGJlIGZyZWVkIGZvciBhIDEuMDAgR2lCIHBhZ2UKWyAgICAwLjI3MzM1Nl0gSHVn
ZVRMQjogcmVnaXN0ZXJlZCAyLjAwIE1pQiBwYWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBw
YWdlcwpbICAgIDAuMjczMzU2XSBIdWdlVExCOiAyOCBLaUIgdm1lbW1hcCBjYW4gYmUgZnJl
ZWQgZm9yIGEgMi4wMCBNaUIgcGFnZQpbICAgIDAuMjgwNTA5XSBjcnlwdGQ6IG1heF9jcHVf
cWxlbiBzZXQgdG8gMTAwMApbICAgIDAuMjgwNTA5XSBBQ1BJOiBBZGRlZCBfT1NJKE1vZHVs
ZSBEZXZpY2UpClsgICAgMC4yODA1MDldIEFDUEk6IEFkZGVkIF9PU0koUHJvY2Vzc29yIERl
dmljZSkKWyAgICAwLjI4MDUwOV0gQUNQSTogQWRkZWQgX09TSSgzLjAgX1NDUCBFeHRlbnNp
b25zKQpbICAgIDAuMjgwNTA5XSBBQ1BJOiBBZGRlZCBfT1NJKFByb2Nlc3NvciBBZ2dyZWdh
dG9yIERldmljZSkKWyAgICAwLjI4NzAzOV0gQUNQSTogNCBBQ1BJIEFNTCB0YWJsZXMgc3Vj
Y2Vzc2Z1bGx5IGFjcXVpcmVkIGFuZCBsb2FkZWQKWyAgICAwLjI4ODA3Ml0gQUNQSTogSW50
ZXJwcmV0ZXIgZW5hYmxlZApbICAgIDAuMjg4MDcyXSBBQ1BJOiBQTTogKHN1cHBvcnRzIFMw
IFMxIFMzIFM1KQpbICAgIDAuMjg4MDcyXSBBQ1BJOiBVc2luZyBJT0FQSUMgZm9yIGludGVy
cnVwdCByb3V0aW5nClsgICAgMC4yODgwNzJdIEhFU1Q6IFRhYmxlIHBhcnNpbmcgaGFzIGJl
ZW4gaW5pdGlhbGl6ZWQuClsgICAgMC4yODgwNzJdIEdIRVM6IEZhaWxlZCB0byBlbmFibGUg
QVBFSSBmaXJtd2FyZSBmaXJzdCBtb2RlLgpbICAgIDAuMjg4MDcyXSBQQ0k6IFVzaW5nIGhv
c3QgYnJpZGdlIHdpbmRvd3MgZnJvbSBBQ1BJOyBpZiBuZWNlc3NhcnksIHVzZSAicGNpPW5v
Y3JzIiBhbmQgcmVwb3J0IGEgYnVnClsgICAgMC4yODgwNzJdIFBDSTogSWdub3JpbmcgRTgy
MCByZXNlcnZhdGlvbnMgZm9yIGhvc3QgYnJpZGdlIHdpbmRvd3MKWyAgICAwLjI4ODA3Ml0g
QUNQSTogRW5hYmxlZCA4IEdQRXMgaW4gYmxvY2sgMDAgdG8gMUYKWyAgICAwLjI5MDkyMl0g
QUNQSTogUENJIFJvb3QgQnJpZGdlIFtQQ0kwXSAoZG9tYWluIDAwMDAgW2J1cyAwMC1mZl0p
ClsgICAgMC4yOTA5MzJdIGFjcGkgUE5QMEEwMzowMDogX09TQzogT1Mgc3VwcG9ydHMgW0V4
dGVuZGVkQ29uZmlnIEFTUE0gQ2xvY2tQTSBTZWdtZW50cyBNU0kgSFBYLVR5cGUzXQpbICAg
IDAuMjkxMDE4XSBhY3BpIFBOUDBBMDM6MDA6IF9PU0M6IE9TIG5vdyBjb250cm9scyBbUE1F
IEFFUiBQQ0llQ2FwYWJpbGl0eSBMVFJdClsgICAgMC4yOTEwMzRdIGFjcGkgUE5QMEEwMzow
MDogW0Zpcm13YXJlIEluZm9dOiBNTUNPTkZJRyBmb3IgZG9tYWluIDAwMDAgW2J1cyAwMC0z
Zl0gb25seSBwYXJ0aWFsbHkgY292ZXJzIHRoaXMgYnJpZGdlClsgICAgMC4yOTExMTddIGFj
cGkgUE5QMEEwMzowMDogaG9zdCBicmlkZ2Ugd2luZG93IGV4cGFuZGVkIHRvIFtpbyAgMHgw
MDAwLTB4MGNmNyB3aW5kb3ddOyBbaW8gIDB4MDNiMC0weDAzZGYgd2luZG93XSBpZ25vcmVk
ClsgICAgMC4yOTEzNjldIFBDSSBob3N0IGJyaWRnZSB0byBidXMgMDAwMDowMApbICAgIDAu
MjkxMzcxXSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFtpbyAgMHgwMDAw
LTB4MGNmNyB3aW5kb3ddClsgICAgMC4yOTEzNzRdIHBjaV9idXMgMDAwMDowMDogcm9vdCBi
dXMgcmVzb3VyY2UgW2lvICAweDBkMDAtMHhmZmZmIHdpbmRvd10KWyAgICAwLjI5MTM3Nl0g
cGNpX2J1cyAwMDAwOjAwOiByb290IGJ1cyByZXNvdXJjZSBbbWVtIDB4MDAwYTAwMDAtMHgw
MDBkZmZmZl0KWyAgICAwLjI5MTM3OV0gcGNpX2J1cyAwMDAwOjAwOiByb290IGJ1cyByZXNv
dXJjZSBbbWVtIDB4ODAwMDAwMDAtMHhmZmZmZmZmZl0KWyAgICAwLjI5MTM4MV0gcGNpX2J1
cyAwMDAwOjAwOiByb290IGJ1cyByZXNvdXJjZSBbYnVzIDAwLWZmXQpbICAgIDAuMjkxNDA2
XSBwY2kgMDAwMDowMDowMC4wOiBbMTAyMjoxNDEwXSB0eXBlIDAwIGNsYXNzIDB4MDYwMDAw
ClsgICAgMC4yOTE1NTldIHBjaSAwMDAwOjAwOjAwLjI6IFsxMDIyOjE0MTldIHR5cGUgMDAg
Y2xhc3MgMHgwODA2MDAKWyAgICAwLjI5MTY0OF0gcGNpIDAwMDA6MDA6MDEuMDogWzEwMDI6
OTk5Nl0gdHlwZSAwMCBjbGFzcyAweDAzMDAwMApbICAgIDAuMjkxNjU2XSBwY2kgMDAwMDow
MDowMS4wOiByZWcgMHgxMDogW21lbSAweGUwMDAwMDAwLTB4ZWZmZmZmZmYgcHJlZl0KWyAg
ICAwLjI5MTY2MV0gcGNpIDAwMDA6MDA6MDEuMDogcmVnIDB4MTQ6IFtpbyAgMHgxMDAwLTB4
MTBmZl0KWyAgICAwLjI5MTY2NV0gcGNpIDAwMDA6MDA6MDEuMDogcmVnIDB4MTg6IFttZW0g
MHhmMDE4MDAwMC0weGYwMWJmZmZmXQpbICAgIDAuMjkxNjgxXSBwY2kgMDAwMDowMDowMS4w
OiBlbmFibGluZyBFeHRlbmRlZCBUYWdzClsgICAgMC4yOTE2OTRdIHBjaSAwMDAwOjAwOjAx
LjA6IFZpZGVvIGRldmljZSB3aXRoIHNoYWRvd2VkIFJPTSBhdCBbbWVtIDB4MDAwYzAwMDAt
MHgwMDBkZmZmZl0KWyAgICAwLjI5MTcxMV0gcGNpIDAwMDA6MDA6MDEuMDogc3VwcG9ydHMg
RDEgRDIKWyAgICAwLjI5MTc3Nl0gcGNpIDAwMDA6MDA6MDEuMTogWzEwMDI6OTkwMl0gdHlw
ZSAwMCBjbGFzcyAweDA0MDMwMApbICAgIDAuMjkxNzg0XSBwY2kgMDAwMDowMDowMS4xOiBy
ZWcgMHgxMDogW21lbSAweGYwMWMwMDAwLTB4ZjAxYzNmZmZdClsgICAgMC4yOTE4MDVdIHBj
aSAwMDAwOjAwOjAxLjE6IGVuYWJsaW5nIEV4dGVuZGVkIFRhZ3MKWyAgICAwLjI5MTgyOF0g
cGNpIDAwMDA6MDA6MDEuMTogc3VwcG9ydHMgRDEgRDIKWyAgICAwLjI5MTkxOF0gcGNpIDAw
MDA6MDA6MTEuMDogWzEwMjI6NzgwMV0gdHlwZSAwMCBjbGFzcyAweDAxMDYwMQpbICAgIDAu
MjkxOTMxXSBwY2kgMDAwMDowMDoxMS4wOiByZWcgMHgxMDogW2lvICAweDE0MTAtMHgxNDE3
XQpbICAgIDAuMjkxOTM4XSBwY2kgMDAwMDowMDoxMS4wOiByZWcgMHgxNDogW2lvICAweDE0
MjAtMHgxNDIzXQpbICAgIDAuMjkxOTQ2XSBwY2kgMDAwMDowMDoxMS4wOiByZWcgMHgxODog
W2lvICAweDE0MTgtMHgxNDFmXQpbICAgIDAuMjkxOTUzXSBwY2kgMDAwMDowMDoxMS4wOiBy
ZWcgMHgxYzogW2lvICAweDE0MjQtMHgxNDI3XQpbICAgIDAuMjkxOTYwXSBwY2kgMDAwMDow
MDoxMS4wOiByZWcgMHgyMDogW2lvICAweDE0MDAtMHgxNDBmXQpbICAgIDAuMjkxOTY4XSBw
Y2kgMDAwMDowMDoxMS4wOiByZWcgMHgyNDogW21lbSAweGYwMWNjMDAwLTB4ZjAxY2M3ZmZd
ClsgICAgMC4yOTIxMjVdIHBjaSAwMDAwOjAwOjEyLjA6IFsxMDIyOjc4MDddIHR5cGUgMDAg
Y2xhc3MgMHgwYzAzMTAKWyAgICAwLjI5MjEzOV0gcGNpIDAwMDA6MDA6MTIuMDogcmVnIDB4
MTA6IFttZW0gMHhmMDFjODAwMC0weGYwMWM4ZmZmXQpbICAgIDAuMjkyMzIyXSBwY2kgMDAw
MDowMDoxMi4yOiBbMTAyMjo3ODA4XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzIwClsgICAgMC4y
OTIzMzVdIHBjaSAwMDAwOjAwOjEyLjI6IHJlZyAweDEwOiBbbWVtIDB4ZjAxY2QwMDAtMHhm
MDFjZDBmZl0KWyAgICAwLjI5MjQwMF0gcGNpIDAwMDA6MDA6MTIuMjogc3VwcG9ydHMgRDEg
RDIKWyAgICAwLjI5MjQwMl0gcGNpIDAwMDA6MDA6MTIuMjogUE1FIyBzdXBwb3J0ZWQgZnJv
bSBEMCBEMSBEMiBEM2hvdApbICAgIDAuMjkyNTQyXSBwY2kgMDAwMDowMDoxMy4wOiBbMTAy
Mjo3ODA3XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAgMC4yOTI1NTVdIHBjaSAwMDAw
OjAwOjEzLjA6IHJlZyAweDEwOiBbbWVtIDB4ZjAxYzkwMDAtMHhmMDFjOWZmZl0KWyAgICAw
LjI5MjczN10gcGNpIDAwMDA6MDA6MTMuMjogWzEwMjI6NzgwOF0gdHlwZSAwMCBjbGFzcyAw
eDBjMDMyMApbICAgIDAuMjkyNzUwXSBwY2kgMDAwMDowMDoxMy4yOiByZWcgMHgxMDogW21l
bSAweGYwMWNlMDAwLTB4ZjAxY2UwZmZdClsgICAgMC4yOTI4MTVdIHBjaSAwMDAwOjAwOjEz
LjI6IHN1cHBvcnRzIEQxIEQyClsgICAgMC4yOTI4MTZdIHBjaSAwMDAwOjAwOjEzLjI6IFBN
RSMgc3VwcG9ydGVkIGZyb20gRDAgRDEgRDIgRDNob3QKWyAgICAwLjI5Mjk1NV0gcGNpIDAw
MDA6MDA6MTQuMDogWzEwMjI6NzgwYl0gdHlwZSAwMCBjbGFzcyAweDBjMDUwMApbICAgIDAu
MjkzMTM5XSBwY2kgMDAwMDowMDoxNC4yOiBbMTAyMjo3ODBkXSB0eXBlIDAwIGNsYXNzIDB4
MDQwMzAwClsgICAgMC4yOTMxNTZdIHBjaSAwMDAwOjAwOjE0LjI6IHJlZyAweDEwOiBbbWVt
IDB4ZjAxYzQwMDAtMHhmMDFjN2ZmZiA2NGJpdF0KWyAgICAwLjI5MzIxMF0gcGNpIDAwMDA6
MDA6MTQuMjogUE1FIyBzdXBwb3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAwLjI5
MzM1NV0gcGNpIDAwMDA6MDA6MTQuMzogWzEwMjI6NzgwZV0gdHlwZSAwMCBjbGFzcyAweDA2
MDEwMApbICAgIDAuMjkzNTQxXSBwY2kgMDAwMDowMDoxNC40OiBbMTAyMjo3ODBmXSB0eXBl
IDAxIGNsYXNzIDB4MDYwNDAxClsgICAgMC4yOTM3MDNdIHBjaSAwMDAwOjAwOjE0LjU6IFsx
MDIyOjc4MDldIHR5cGUgMDAgY2xhc3MgMHgwYzAzMTAKWyAgICAwLjI5MzcxN10gcGNpIDAw
MDA6MDA6MTQuNTogcmVnIDB4MTA6IFttZW0gMHhmMDFjYTAwMC0weGYwMWNhZmZmXQpbICAg
IDAuMjkzOTA2XSBwY2kgMDAwMDowMDoxNS4wOiBbMTAyMjo0M2EwXSB0eXBlIDAxIGNsYXNz
IDB4MDYwNDAwClsgICAgMC4yOTM5MzVdIHBjaSAwMDAwOjAwOjE1LjA6IGVuYWJsaW5nIEV4
dGVuZGVkIFRhZ3MKWyAgICAwLjI5Mzk3NV0gcGNpIDAwMDA6MDA6MTUuMDogc3VwcG9ydHMg
RDEgRDIKWyAgICAwLjI5NDE1MV0gcGNpIDAwMDA6MDA6MTUuMTogWzEwMjI6NDNhMV0gdHlw
ZSAwMSBjbGFzcyAweDA2MDQwMApbICAgIDAuMjk0MTgyXSBwY2kgMDAwMDowMDoxNS4xOiBl
bmFibGluZyBFeHRlbmRlZCBUYWdzClsgICAgMC4yOTQyMjFdIHBjaSAwMDAwOjAwOjE1LjE6
IHN1cHBvcnRzIEQxIEQyClsgICAgMC4yOTQzOThdIHBjaSAwMDAwOjAwOjE1LjI6IFsxMDIy
OjQzYTJdIHR5cGUgMDEgY2xhc3MgMHgwNjA0MDAKWyAgICAwLjI5NDQyNl0gcGNpIDAwMDA6
MDA6MTUuMjogZW5hYmxpbmcgRXh0ZW5kZWQgVGFncwpbICAgIDAuMjk0NDY4XSBwY2kgMDAw
MDowMDoxNS4yOiBzdXBwb3J0cyBEMSBEMgpbICAgIDAuMjk0NTQ3XSBwY2kgMDAwMDowMDox
Ni4wOiBbMTAyMjo3ODA3XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAgMC4yOTQ1NjBd
IHBjaSAwMDAwOjAwOjE2LjA6IHJlZyAweDEwOiBbbWVtIDB4ZjAxY2IwMDAtMHhmMDFjYmZm
Zl0KWyAgICAwLjI5NDczN10gcGNpIDAwMDA6MDA6MTYuMjogWzEwMjI6NzgwOF0gdHlwZSAw
MCBjbGFzcyAweDBjMDMyMApbICAgIDAuMjk0NzUxXSBwY2kgMDAwMDowMDoxNi4yOiByZWcg
MHgxMDogW21lbSAweGYwMWNmMDAwLTB4ZjAxY2YwZmZdClsgICAgMC4yOTQ4MTVdIHBjaSAw
MDAwOjAwOjE2LjI6IHN1cHBvcnRzIEQxIEQyClsgICAgMC4yOTQ4MTddIHBjaSAwMDAwOjAw
OjE2LjI6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDEgRDIgRDNob3QKWyAgICAwLjI5NDk1
OF0gcGNpIDAwMDA6MDA6MTguMDogWzEwMjI6MTQwMF0gdHlwZSAwMCBjbGFzcyAweDA2MDAw
MApbICAgIDAuMjk1MDI0XSBwY2kgMDAwMDowMDoxOC4xOiBbMTAyMjoxNDAxXSB0eXBlIDAw
IGNsYXNzIDB4MDYwMDAwClsgICAgMC4yOTUwODZdIHBjaSAwMDAwOjAwOjE4LjI6IFsxMDIy
OjE0MDJdIHR5cGUgMDAgY2xhc3MgMHgwNjAwMDAKWyAgICAwLjI5NTE1Ml0gcGNpIDAwMDA6
MDA6MTguMzogWzEwMjI6MTQwM10gdHlwZSAwMCBjbGFzcyAweDA2MDAwMApbICAgIDAuMjk1
Mjg5XSBwY2kgMDAwMDowMDoxOC40OiBbMTAyMjoxNDA0XSB0eXBlIDAwIGNsYXNzIDB4MDYw
MDAwClsgICAgMC4yOTUzNTNdIHBjaSAwMDAwOjAwOjE4LjU6IFsxMDIyOjE0MDVdIHR5cGUg
MDAgY2xhc3MgMHgwNjAwMDAKWyAgICAwLjI5NTQzNl0gcGNpX2J1cyAwMDAwOjAxOiBleHRl
bmRlZCBjb25maWcgc3BhY2Ugbm90IGFjY2Vzc2libGUKWyAgICAwLjI5NTQ5OV0gcGNpIDAw
MDA6MDA6MTQuNDogUENJIGJyaWRnZSB0byBbYnVzIDAxXSAoc3VidHJhY3RpdmUgZGVjb2Rl
KQpbICAgIDAuMjk1NTEwXSBwY2kgMDAwMDowMDoxNC40OiAgIGJyaWRnZSB3aW5kb3cgW2lv
ICAweDAwMDAtMHgwY2Y3IHdpbmRvd10gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAwLjI5
NTUxM10gcGNpIDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgwZDAwLTB4
ZmZmZiB3aW5kb3ddIChzdWJ0cmFjdGl2ZSBkZWNvZGUpClsgICAgMC4yOTU1MTVdIHBjaSAw
MDAwOjAwOjE0LjQ6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4MDAwYTAwMDAtMHgwMDBkZmZm
Zl0gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAwLjI5NTUxN10gcGNpIDAwMDA6MDA6MTQu
NDogICBicmlkZ2Ugd2luZG93IFttZW0gMHg4MDAwMDAwMC0weGZmZmZmZmZmXSAoc3VidHJh
Y3RpdmUgZGVjb2RlKQpbICAgIDAuMjk1NTcxXSBwY2kgMDAwMDowMDoxNS4wOiBQQ0kgYnJp
ZGdlIHRvIFtidXMgMDJdClsgICAgMC4yOTU2NTNdIHBjaSAwMDAwOjAzOjAwLjA6IFsxYjIx
OjEwNDJdIHR5cGUgMDAgY2xhc3MgMHgwYzAzMzAKWyAgICAwLjI5NTY4OV0gcGNpIDAwMDA6
MDM6MDAuMDogcmVnIDB4MTA6IFttZW0gMHhmMDAwMDAwMC0weGYwMDA3ZmZmIDY0Yml0XQpb
ICAgIDAuMjk1ODYzXSBwY2kgMDAwMDowMzowMC4wOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQz
aG90IEQzY29sZApbICAgIDAuMjk1OTA3XSBwY2kgMDAwMDowMzowMC4wOiAyLjAwMCBHYi9z
IGF2YWlsYWJsZSBQQ0llIGJhbmR3aWR0aCwgbGltaXRlZCBieSAyLjUgR1QvcyBQQ0llIHgx
IGxpbmsgYXQgMDAwMDowMDoxNS4xIChjYXBhYmxlIG9mIDQuMDAwIEdiL3Mgd2l0aCA1LjAg
R1QvcyBQQ0llIHgxIGxpbmspClsgICAgMC4zMDkzOTNdIHBjaSAwMDAwOjAwOjE1LjE6IFBD
SSBicmlkZ2UgdG8gW2J1cyAwM10KWyAgICAwLjMwOTQwNV0gcGNpIDAwMDA6MDA6MTUuMTog
ICBicmlkZ2Ugd2luZG93IFttZW0gMHhmMDAwMDAwMC0weGYwMGZmZmZmXQpbICAgIDAuMzA5
NDE0XSBwY2kgMDAwMDowMDoxNS4yOiBicmlkZ2UgY29uZmlndXJhdGlvbiBpbnZhbGlkIChb
YnVzIDAwLTAwXSksIHJlY29uZmlndXJpbmcKWyAgICAwLjMwOTU0NV0gcGNpIDAwMDA6MDQ6
MDAuMDogWzEwZWM6ODE2OF0gdHlwZSAwMCBjbGFzcyAweDAyMDAwMApbICAgIDAuMzA5NTYz
XSBwY2kgMDAwMDowNDowMC4wOiByZWcgMHgxMDogW2lvICAweDAwMDAtMHgwMGZmXQpbICAg
IDAuMzA5NTg1XSBwY2kgMDAwMDowNDowMC4wOiByZWcgMHgxODogW21lbSAweDAwMDAwMDAw
LTB4MDAwMDBmZmYgNjRiaXQgcHJlZl0KWyAgICAwLjMwOTU5OF0gcGNpIDAwMDA6MDQ6MDAu
MDogcmVnIDB4MjA6IFttZW0gMHgwMDAwMDAwMC0weDAwMDAzZmZmIDY0Yml0IHByZWZdClsg
ICAgMC4zMDk3MDVdIHBjaSAwMDAwOjA0OjAwLjA6IHN1cHBvcnRzIEQxIEQyClsgICAgMC4z
MDk3MDddIHBjaSAwMDAwOjA0OjAwLjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDEgRDIg
RDNob3QgRDNjb2xkClsgICAgMC4zMjU0MDNdIHBjaSAwMDAwOjAwOjE1LjI6IFBDSSBicmlk
Z2UgdG8gW2J1cyAwNC1mZl0KWyAgICAwLjMyNTQxNF0gcGNpIDAwMDA6MDA6MTUuMjogICBi
cmlkZ2Ugd2luZG93IFtpbyAgMHgwMDAwLTB4MGZmZl0KWyAgICAwLjMyNTQxN10gcGNpIDAw
MDA6MDA6MTUuMjogICBicmlkZ2Ugd2luZG93IFttZW0gMHgwMDAwMDAwMC0weDAwMGZmZmZm
XQpbICAgIDAuMzI1NDIyXSBwY2kgMDAwMDowMDoxNS4yOiAgIGJyaWRnZSB3aW5kb3cgW21l
bSAweDAwMDAwMDAwLTB4MDAwZmZmZmYgNjRiaXQgcHJlZl0KWyAgICAwLjMyNTQyNV0gcGNp
X2J1cyAwMDAwOjA0OiBidXNuX3JlczogW2J1cyAwNC1mZl0gZW5kIGlzIHVwZGF0ZWQgdG8g
MDQKWyAgICAwLjMyNTkxOV0gQUNQSTogUENJOiBJbnRlcnJ1cHQgbGluayBJTlRBIGNvbmZp
Z3VyZWQgZm9yIElSUSAwClsgICAgMC4zMjYwMTFdIEFDUEk6IFBDSTogSW50ZXJydXB0IGxp
bmsgSU5UQiBjb25maWd1cmVkIGZvciBJUlEgMApbICAgIDAuMzI2MTAyXSBBQ1BJOiBQQ0k6
IEludGVycnVwdCBsaW5rIElOVEMgY29uZmlndXJlZCBmb3IgSVJRIDAKWyAgICAwLjMyNjE5
M10gQUNQSTogUENJOiBJbnRlcnJ1cHQgbGluayBJTlREIGNvbmZpZ3VyZWQgZm9yIElSUSAw
ClsgICAgMC4zMjYyODRdIEFDUEk6IFBDSTogSW50ZXJydXB0IGxpbmsgSU5URSBjb25maWd1
cmVkIGZvciBJUlEgMApbICAgIDAuMzI2Mzc1XSBBQ1BJOiBQQ0k6IEludGVycnVwdCBsaW5r
IElOVEYgY29uZmlndXJlZCBmb3IgSVJRIDAKWyAgICAwLjMyNjQ2NV0gQUNQSTogUENJOiBJ
bnRlcnJ1cHQgbGluayBJTlRHIGNvbmZpZ3VyZWQgZm9yIElSUSAwClsgICAgMC4zMjY1NTZd
IEFDUEk6IFBDSTogSW50ZXJydXB0IGxpbmsgSU5USCBjb25maWd1cmVkIGZvciBJUlEgMApb
ICAgIDAuMzI2Nzc3XSBpb21tdTogRGVmYXVsdCBkb21haW4gdHlwZTogVHJhbnNsYXRlZCAK
WyAgICAwLjMyNjc3OF0gaW9tbXU6IERNQSBkb21haW4gVExCIGludmFsaWRhdGlvbiBwb2xp
Y3k6IGxhenkgbW9kZSAKWyAgICAwLjMyNjk3NF0gU0NTSSBzdWJzeXN0ZW0gaW5pdGlhbGl6
ZWQKWyAgICAwLjMyOTM4OV0gbGliYXRhIHZlcnNpb24gMy4wMCBsb2FkZWQuClsgICAgMC4z
MjkzOTVdIEFDUEk6IGJ1cyB0eXBlIFVTQiByZWdpc3RlcmVkClsgICAgMC4zMjk0MThdIHVz
YmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiZnMKWyAgICAwLjMy
OTQyOF0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWIKWyAg
ICAwLjMyOTQzN10gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgZGV2aWNlIGRyaXZlciB1c2IK
WyAgICAwLjMyOTU2NF0gUENJOiBVc2luZyBBQ1BJIGZvciBJUlEgcm91dGluZwpbICAgIDAu
MzMxMTMyXSBQQ0k6IHBjaV9jYWNoZV9saW5lX3NpemUgc2V0IHRvIDY0IGJ5dGVzClsgICAg
MC4zMzExODRdIGU4MjA6IHJlc2VydmUgUkFNIGJ1ZmZlciBbbWVtIDB4MDAwOWZjMDAtMHgw
MDA5ZmZmZl0KWyAgICAwLjMzMTE4N10gZTgyMDogcmVzZXJ2ZSBSQU0gYnVmZmVyIFttZW0g
MHg1ZmU0ZDAwMC0weDVmZmZmZmZmXQpbICAgIDAuMzMxMTg5XSBlODIwOiByZXNlcnZlIFJB
TSBidWZmZXIgW21lbSAweDE3ZjAwMDAwMC0weDE3ZmZmZmZmZl0KWyAgICAwLjMzMTIzNV0g
aHBldDA6IGF0IE1NSU8gMHhmZWQwMDAwMCwgSVJRcyAyLCA4LCAwClsgICAgMC4zMzEyNDBd
IGhwZXQwOiAzIGNvbXBhcmF0b3JzLCAzMi1iaXQgMTQuMzE4MTgwIE1IeiBjb3VudGVyClsg
ICAgMC4zMzQ0MTBdIGNsb2Nrc291cmNlOiBTd2l0Y2hlZCB0byBjbG9ja3NvdXJjZSB0c2Mt
ZWFybHkKWyAgICAwLjMzNDY1Nl0gVkZTOiBEaXNrIHF1b3RhcyBkcXVvdF82LjYuMApbICAg
IDAuMzM0NjgyXSBWRlM6IERxdW90LWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogNTEyIChv
cmRlciAwLCA0MDk2IGJ5dGVzKQpbICAgIDAuMzM0ODAzXSBwbnA6IFBuUCBBQ1BJIGluaXQK
WyAgICAwLjMzNTE0N10gc3lzdGVtIDAwOjAwOiBbbWVtIDB4ZmVjMTAwMDItMHhmZWMxMTAw
MV0gY291bGQgbm90IGJlIHJlc2VydmVkClsgICAgMC4zMzU1MjVdIHBucDogUG5QIEFDUEk6
IGZvdW5kIDIgZGV2aWNlcwpbICAgIDAuMzQyMDcyXSBjbG9ja3NvdXJjZTogYWNwaV9wbTog
bWFzazogMHhmZmZmZmYgbWF4X2N5Y2xlczogMHhmZmZmZmYsIG1heF9pZGxlX25zOiAyMDg1
NzAxMDI0IG5zClsgICAgMC4zNDIyNjBdIE5FVDogUmVnaXN0ZXJlZCBQRl9JTkVUIHByb3Rv
Y29sIGZhbWlseQpbICAgIDAuMzQyNDM0XSBJUCBpZGVudHMgaGFzaCB0YWJsZSBlbnRyaWVz
OiA2NTUzNiAob3JkZXI6IDcsIDUyNDI4OCBieXRlcywgbGluZWFyKQpbICAgIDAuMzQ0MDQx
XSB0Y3BfbGlzdGVuX3BvcnRhZGRyX2hhc2ggaGFzaCB0YWJsZSBlbnRyaWVzOiAyMDQ4IChv
cmRlcjogMywgMzI3NjggYnl0ZXMsIGxpbmVhcikKWyAgICAwLjM0NDA1OF0gVGFibGUtcGVy
dHVyYiBoYXNoIHRhYmxlIGVudHJpZXM6IDY1NTM2IChvcmRlcjogNiwgMjYyMTQ0IGJ5dGVz
LCBsaW5lYXIpClsgICAgMC4zNDQwNzBdIFRDUCBlc3RhYmxpc2hlZCBoYXNoIHRhYmxlIGVu
dHJpZXM6IDMyNzY4IChvcmRlcjogNiwgMjYyMTQ0IGJ5dGVzLCBsaW5lYXIpClsgICAgMC4z
NDQxNDFdIFRDUCBiaW5kIGhhc2ggdGFibGUgZW50cmllczogMzI3NjggKG9yZGVyOiA4LCAx
MDQ4NTc2IGJ5dGVzLCBsaW5lYXIpClsgICAgMC4zNDQ1MDVdIFRDUDogSGFzaCB0YWJsZXMg
Y29uZmlndXJlZCAoZXN0YWJsaXNoZWQgMzI3NjggYmluZCAzMjc2OCkKWyAgICAwLjM0NDU3
Nl0gVURQIGhhc2ggdGFibGUgZW50cmllczogMjA0OCAob3JkZXI6IDQsIDY1NTM2IGJ5dGVz
LCBsaW5lYXIpClsgICAgMC4zNDQ2MDFdIFVEUC1MaXRlIGhhc2ggdGFibGUgZW50cmllczog
MjA0OCAob3JkZXI6IDQsIDY1NTM2IGJ5dGVzLCBsaW5lYXIpClsgICAgMC4zNDQ3MjJdIE5F
VDogUmVnaXN0ZXJlZCBQRl9VTklYL1BGX0xPQ0FMIHByb3RvY29sIGZhbWlseQpbICAgIDAu
MzQ0NzU3XSBwY2kgMDAwMDowMDoxNS4yOiBCQVIgMTU6IGFzc2lnbmVkIFttZW0gMHg4MDAw
MDAwMC0weDgwMGZmZmZmIDY0Yml0IHByZWZdClsgICAgMC4zNDQ3NjNdIHBjaSAwMDAwOjAw
OjE1LjI6IEJBUiAxMzogYXNzaWduZWQgW2lvICAweDIwMDAtMHgyZmZmXQpbICAgIDAuMzQ0
NzY5XSBwY2kgMDAwMDowMDoxNC40OiBQQ0kgYnJpZGdlIHRvIFtidXMgMDFdClsgICAgMC4z
NDQ3ODFdIHBjaSAwMDAwOjAwOjE1LjA6IFBDSSBicmlkZ2UgdG8gW2J1cyAwMl0KWyAgICAw
LjM0NDc5MF0gcGNpIDAwMDA6MDA6MTUuMTogUENJIGJyaWRnZSB0byBbYnVzIDAzXQpbICAg
IDAuMzQ0NzkzXSBwY2kgMDAwMDowMDoxNS4xOiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGYw
MDAwMDAwLTB4ZjAwZmZmZmZdClsgICAgMC4zNDQ4MDddIHBjaSAwMDAwOjA0OjAwLjA6IEJB
UiA0OiBhc3NpZ25lZCBbbWVtIDB4ODAwMDAwMDAtMHg4MDAwM2ZmZiA2NGJpdCBwcmVmXQpb
ICAgIDAuMzQ0ODIxXSBwY2kgMDAwMDowNDowMC4wOiBCQVIgMjogYXNzaWduZWQgW21lbSAw
eDgwMDA0MDAwLTB4ODAwMDRmZmYgNjRiaXQgcHJlZl0KWyAgICAwLjM0NDgzM10gcGNpIDAw
MDA6MDQ6MDAuMDogQkFSIDA6IGFzc2lnbmVkIFtpbyAgMHgyMDAwLTB4MjBmZl0KWyAgICAw
LjM0NDg0MF0gcGNpIDAwMDA6MDA6MTUuMjogUENJIGJyaWRnZSB0byBbYnVzIDA0XQpbICAg
IDAuMzQ0ODQyXSBwY2kgMDAwMDowMDoxNS4yOiAgIGJyaWRnZSB3aW5kb3cgW2lvICAweDIw
MDAtMHgyZmZmXQpbICAgIDAuMzQ0ODQ3XSBwY2kgMDAwMDowMDoxNS4yOiAgIGJyaWRnZSB3
aW5kb3cgW21lbSAweDgwMDAwMDAwLTB4ODAwZmZmZmYgNjRiaXQgcHJlZl0KWyAgICAwLjM0
NDg1NF0gcGNpX2J1cyAwMDAwOjAwOiByZXNvdXJjZSA0IFtpbyAgMHgwMDAwLTB4MGNmNyB3
aW5kb3ddClsgICAgMC4zNDQ4NTddIHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgNSBbaW8g
IDB4MGQwMC0weGZmZmYgd2luZG93XQpbICAgIDAuMzQ0ODYwXSBwY2lfYnVzIDAwMDA6MDA6
IHJlc291cmNlIDYgW21lbSAweDAwMGEwMDAwLTB4MDAwZGZmZmZdClsgICAgMC4zNDQ4NjJd
IHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgNyBbbWVtIDB4ODAwMDAwMDAtMHhmZmZmZmZm
Zl0KWyAgICAwLjM0NDg2NF0gcGNpX2J1cyAwMDAwOjAxOiByZXNvdXJjZSA0IFtpbyAgMHgw
MDAwLTB4MGNmNyB3aW5kb3ddClsgICAgMC4zNDQ4NjddIHBjaV9idXMgMDAwMDowMTogcmVz
b3VyY2UgNSBbaW8gIDB4MGQwMC0weGZmZmYgd2luZG93XQpbICAgIDAuMzQ0ODY5XSBwY2lf
YnVzIDAwMDA6MDE6IHJlc291cmNlIDYgW21lbSAweDAwMGEwMDAwLTB4MDAwZGZmZmZdClsg
ICAgMC4zNDQ4NzFdIHBjaV9idXMgMDAwMDowMTogcmVzb3VyY2UgNyBbbWVtIDB4ODAwMDAw
MDAtMHhmZmZmZmZmZl0KWyAgICAwLjM0NDg3M10gcGNpX2J1cyAwMDAwOjAzOiByZXNvdXJj
ZSAxIFttZW0gMHhmMDAwMDAwMC0weGYwMGZmZmZmXQpbICAgIDAuMzQ0ODc1XSBwY2lfYnVz
IDAwMDA6MDQ6IHJlc291cmNlIDAgW2lvICAweDIwMDAtMHgyZmZmXQpbICAgIDAuMzQ0ODc3
XSBwY2lfYnVzIDAwMDA6MDQ6IHJlc291cmNlIDIgW21lbSAweDgwMDAwMDAwLTB4ODAwZmZm
ZmYgNjRiaXQgcHJlZl0KWyAgICAwLjM0NTAwM10gcGNpIDAwMDA6MDA6MDEuMTogRDAgcG93
ZXIgc3RhdGUgZGVwZW5kcyBvbiAwMDAwOjAwOjAxLjAKWyAgICAwLjM0NTg4N10gcGNpIDAw
MDA6MDA6MTIuMjogUE1FIyBkb2VzIG5vdCB3b3JrIHVuZGVyIEQzLCBkaXNhYmxpbmcgaXQK
WyAgICAwLjM0NjY3MV0gcGNpIDAwMDA6MDA6MTMuMjogUE1FIyBkb2VzIG5vdCB3b3JrIHVu
ZGVyIEQzLCBkaXNhYmxpbmcgaXQKWyAgICAwLjM0Nzk4MF0gcGNpIDAwMDA6MDA6MTYuMjog
UE1FIyBkb2VzIG5vdCB3b3JrIHVuZGVyIEQzLCBkaXNhYmxpbmcgaXQKWyAgICAwLjM0ODM1
M10gUENJOiBDTFMgNjQgYnl0ZXMsIGRlZmF1bHQgNjQKWyAgICAwLjM0ODQ0OV0gcGNpIDAw
MDA6MDA6MDAuMjogQU1ELVZpOiBBcHBseWluZyBlcnJhdHVtIDc0NiB3b3JrYXJvdW5kClsg
ICAgMC4zNDg1NjBdIHBjaSAwMDAwOjAwOjAxLjA6IEFkZGluZyB0byBpb21tdSBncm91cCAw
ClsgICAgMC4zNDg1ODRdIHBjaSAwMDAwOjAwOjAxLjE6IEFkZGluZyB0byBpb21tdSBncm91
cCAwClsgICAgMC4zNDg2MTZdIHBjaSAwMDAwOjAwOjExLjA6IEFkZGluZyB0byBpb21tdSBn
cm91cCAxClsgICAgMC4zNDg2NTldIHBjaSAwMDAwOjAwOjEyLjA6IEFkZGluZyB0byBpb21t
dSBncm91cCAyClsgICAgMC4zNDg2ODFdIHBjaSAwMDAwOjAwOjEyLjI6IEFkZGluZyB0byBp
b21tdSBncm91cCAyClsgICAgMC4zNDg3MjNdIHBjaSAwMDAwOjAwOjEzLjA6IEFkZGluZyB0
byBpb21tdSBncm91cCAzClsgICAgMC4zNDg3NDNdIHBjaSAwMDAwOjAwOjEzLjI6IEFkZGlu
ZyB0byBpb21tdSBncm91cCAzClsgICAgMC4zNDg3OTJdIHBjaSAwMDAwOjAwOjE0LjA6IEFk
ZGluZyB0byBpb21tdSBncm91cCA0ClsgICAgMC4zNDg4MTddIHBjaSAwMDAwOjAwOjE0LjI6
IEFkZGluZyB0byBpb21tdSBncm91cCA0ClsgICAgMC4zNDg4NDBdIHBjaSAwMDAwOjAwOjE0
LjM6IEFkZGluZyB0byBpb21tdSBncm91cCA0ClsgICAgMC4zNDg4NzldIHBjaSAwMDAwOjAw
OjE0LjQ6IEFkZGluZyB0byBpb21tdSBncm91cCA1ClsgICAgMC4zNDg5MDNdIHBjaSAwMDAw
OjAwOjE0LjU6IEFkZGluZyB0byBpb21tdSBncm91cCA2ClsgICAgMC4zNDg5NDZdIHBjaSAw
MDAwOjAwOjE1LjA6IEFkZGluZyB0byBpb21tdSBncm91cCA3ClsgICAgMC4zNDg5NzRdIHBj
aSAwMDAwOjAwOjE1LjE6IEFkZGluZyB0byBpb21tdSBncm91cCA3ClsgICAgMC4zNDkwMDBd
IHBjaSAwMDAwOjAwOjE1LjI6IEFkZGluZyB0byBpb21tdSBncm91cCA3ClsgICAgMC4zNDkw
NDBdIHBjaSAwMDAwOjAwOjE2LjA6IEFkZGluZyB0byBpb21tdSBncm91cCA4ClsgICAgMC4z
NDkwNjNdIHBjaSAwMDAwOjAwOjE2LjI6IEFkZGluZyB0byBpb21tdSBncm91cCA4ClsgICAg
MC4zNDkxMjhdIHBjaSAwMDAwOjAwOjE4LjA6IEFkZGluZyB0byBpb21tdSBncm91cCA5Clsg
ICAgMC4zNDkxNTBdIHBjaSAwMDAwOjAwOjE4LjE6IEFkZGluZyB0byBpb21tdSBncm91cCA5
ClsgICAgMC4zNDkxODhdIHBjaSAwMDAwOjAwOjE4LjI6IEFkZGluZyB0byBpb21tdSBncm91
cCA5ClsgICAgMC4zNDkyMTBdIHBjaSAwMDAwOjAwOjE4LjM6IEFkZGluZyB0byBpb21tdSBn
cm91cCA5ClsgICAgMC4zNDkyMzRdIHBjaSAwMDAwOjAwOjE4LjQ6IEFkZGluZyB0byBpb21t
dSBncm91cCA5ClsgICAgMC4zNDkyNTVdIHBjaSAwMDAwOjAwOjE4LjU6IEFkZGluZyB0byBp
b21tdSBncm91cCA5ClsgICAgMC4zNDkyNjldIHBjaSAwMDAwOjAzOjAwLjA6IEFkZGluZyB0
byBpb21tdSBncm91cCA3ClsgICAgMC4zNDkyNzldIHBjaSAwMDAwOjA0OjAwLjA6IEFkZGlu
ZyB0byBpb21tdSBncm91cCA3ClsgICAgMC4zNTIwMDVdIHBjaSAwMDAwOjAwOjAwLjI6IEFN
RC1WaTogRm91bmQgSU9NTVUgY2FwIDB4NDAKWyAgICAwLjM1MjAxMV0gQU1ELVZpOiBFeHRl
bmRlZCBmZWF0dXJlcyAoMHg4MDAwMDA4NTMsIDB4MCk6IFByZUYgUFBSIEdUIElBClsgICAg
MC4zNTIwMjBdIEFNRC1WaTogSW50ZXJydXB0IHJlbWFwcGluZyBlbmFibGVkClsgICAgMC4z
NTIyMzhdIFBDSS1ETUE6IFVzaW5nIHNvZnR3YXJlIGJvdW5jZSBidWZmZXJpbmcgZm9yIElP
IChTV0lPVExCKQpbICAgIDAuMzUyMjQwXSBzb2Z0d2FyZSBJTyBUTEI6IG1hcHBlZCBbbWVt
IDB4MDAwMDAwMDA1YmU0ZDAwMC0weDAwMDAwMDAwNWZlNGQwMDBdICg2NE1CKQpbICAgIDAu
MzUyMjk3XSBMVlQgb2Zmc2V0IDAgYXNzaWduZWQgZm9yIHZlY3RvciAweDQwMApbICAgIDAu
MzUyMzQzXSBwZXJmOiBBTUQgSUJTIGRldGVjdGVkICgweDAwMDAwMGZmKQpbICAgIDAuMzUy
MzUxXSBhbWRfdW5jb3JlOiA0ICBhbWRfbmIgY291bnRlcnMgZGV0ZWN0ZWQKWyAgICAwLjM1
NjU5OF0gd29ya2luZ3NldDogdGltZXN0YW1wX2JpdHM9MzcgbWF4X29yZGVyPTIwIGJ1Y2tl
dF9vcmRlcj0wClsgICAgMC4zNTY2MjZdIHpidWQ6IGxvYWRlZApbICAgIDAuMzU3MTEyXSBO
RVQ6IFJlZ2lzdGVyZWQgUEZfQUxHIHByb3RvY29sIGZhbWlseQpbICAgIDAuMzU3MTE4XSBL
ZXkgdHlwZSBhc3ltbWV0cmljIHJlZ2lzdGVyZWQKWyAgICAwLjM1NzExOV0gQXN5bW1ldHJp
YyBrZXkgcGFyc2VyICd4NTA5JyByZWdpc3RlcmVkClsgICAgMC4zNTc0NzFdIGFsZzogc2Vs
Zi10ZXN0cyBkaXNhYmxlZApbICAgIDAuMzU3NTY3XSBCbG9jayBsYXllciBTQ1NJIGdlbmVy
aWMgKGJzZykgZHJpdmVyIHZlcnNpb24gMC40IGxvYWRlZCAobWFqb3IgMjUxKQpbICAgIDAu
MzU3NjEwXSBpbyBzY2hlZHVsZXIgbXEtZGVhZGxpbmUgcmVnaXN0ZXJlZApbICAgIDAuMzU3
NjEyXSBpbyBzY2hlZHVsZXIga3liZXIgcmVnaXN0ZXJlZApbICAgIDAuMzU4NzgyXSBwY2ll
cG9ydCAwMDAwOjAwOjE1LjA6IFBNRTogU2lnbmFsaW5nIHdpdGggSVJRIDI1ClsgICAgMC4z
NTg5NDhdIHBjaWVwb3J0IDAwMDA6MDA6MTUuMTogUE1FOiBTaWduYWxpbmcgd2l0aCBJUlEg
MjYKWyAgICAwLjM1OTAxM10gcGNpZXBvcnQgMDAwMDowMDoxNS4yOiBlbmFibGluZyBkZXZp
Y2UgKDAwMDAgLT4gMDAwMykKWyAgICAwLjM1OTIyOF0gcGNpZXBvcnQgMDAwMDowMDoxNS4y
OiBQTUU6IFNpZ25hbGluZyB3aXRoIElSUSAyNwpbICAgIDAuMzU5NTAzXSBpbnB1dDogUG93
ZXIgQnV0dG9uIGFzIC9kZXZpY2VzL0xOWFNZU1RNOjAwL0xOWFBXUkJOOjAwL2lucHV0L2lu
cHV0MApbICAgIDAuMzU5NTY3XSBBQ1BJOiBidXR0b246IFBvd2VyIEJ1dHRvbiBbUFdSRl0K
WyAgICAwLjM1OTYyMV0gQUNQSTogXF9TQl8uUDAwMDogRm91bmQgMiBpZGxlIHN0YXRlcwpb
ICAgIDAuMzU5NzQzXSBBQ1BJOiBcX1NCXy5QMDAxOiBGb3VuZCAyIGlkbGUgc3RhdGVzClsg
ICAgMC4zNjA2NzBdIHRoZXJtYWwgTE5YVEhFUk06MDA6IHJlZ2lzdGVyZWQgYXMgdGhlcm1h
bF96b25lMApbICAgIDAuMzYwNjczXSBBQ1BJOiB0aGVybWFsOiBUaGVybWFsIFpvbmUgW1Ra
MDBdICgxNSBDKQpbICAgIDAuMzYwOTg0XSBOb24tdm9sYXRpbGUgbWVtb3J5IGRyaXZlciB2
MS4zClsgICAgMC4zNjEwNTFdIEFNRC1WaTogQU1EIElPTU1VdjIgbG9hZGVkIGFuZCBpbml0
aWFsaXplZApbICAgIDAuMzYxMDc5XSBBQ1BJOiBidXMgdHlwZSBkcm1fY29ubmVjdG9yIHJl
Z2lzdGVyZWQKWyAgICAwLjM2MTI5Ml0gYWhjaSAwMDAwOjAwOjExLjA6IHZlcnNpb24gMy4w
ClsgICAgMC4zNjE1ODBdIGFoY2kgMDAwMDowMDoxMS4wOiBBSENJIDAwMDEuMDMwMCAzMiBz
bG90cyA4IHBvcnRzIDYgR2JwcyAweDQwIGltcGwgU0FUQSBtb2RlClsgICAgMC4zNjE1ODRd
IGFoY2kgMDAwMDowMDoxMS4wOiBmbGFnczogNjRiaXQgbmNxIHNudGYgaWxjayBsZWQgY2xv
IHBpbyAKWyAgICAwLjM2Mjc0MV0gc2NzaSBob3N0MDogYWhjaQpbICAgIDAuMzYyOTY3XSBz
Y3NpIGhvc3QxOiBhaGNpClsgICAgMC4zNjMxNTddIHNjc2kgaG9zdDI6IGFoY2kKWyAgICAw
LjM2MzM3Ml0gc2NzaSBob3N0MzogYWhjaQpbICAgIDAuMzYzNTcxXSBzY3NpIGhvc3Q0OiBh
aGNpClsgICAgMC4zNjM3NzNdIHNjc2kgaG9zdDU6IGFoY2kKWyAgICAwLjM2Mzk4NF0gc2Nz
aSBob3N0NjogYWhjaQpbICAgIDAuMzY0MTgyXSBzY3NpIGhvc3Q3OiBhaGNpClsgICAgMC4z
NjQyNzVdIGF0YTE6IERVTU1ZClsgICAgMC4zNjQyNzddIGF0YTI6IERVTU1ZClsgICAgMC4z
NjQyNzhdIGF0YTM6IERVTU1ZClsgICAgMC4zNjQyNzhdIGF0YTQ6IERVTU1ZClsgICAgMC4z
NjQyNzldIGF0YTU6IERVTU1ZClsgICAgMC4zNjQyODBdIGF0YTY6IERVTU1ZClsgICAgMC4z
NjQyODJdIGF0YTc6IFNBVEEgbWF4IFVETUEvMTMzIGFiYXIgbTIwNDhAMHhmMDFjYzAwMCBw
b3J0IDB4ZjAxY2M0MDAgaXJxIDE5ClsgICAgMC4zNjQyODRdIGF0YTg6IERVTU1ZClsgICAg
MC4zNjQ1NjRdIGk4MDQyOiBQTlA6IE5vIFBTLzIgY29udHJvbGxlciBmb3VuZC4KWyAgICAw
LjM2NDU2NV0gaTgwNDI6IFByb2JpbmcgcG9ydHMgZGlyZWN0bHkuClsgICAgMC4zNjY5ODFd
IHNlcmlvOiBpODA0MiBLQkQgcG9ydCBhdCAweDYwLDB4NjQgaXJxIDEKWyAgICAwLjM2Njk4
OF0gc2VyaW86IGk4MDQyIEFVWCBwb3J0IGF0IDB4NjAsMHg2NCBpcnEgMTIKWyAgICAwLjM2
NzEyM10gbW91c2VkZXY6IFBTLzIgbW91c2UgZGV2aWNlIGNvbW1vbiBmb3IgYWxsIG1pY2UK
WyAgICAwLjM2NzE4OF0gcnRjX2Ntb3MgMDA6MDE6IFJUQyBjYW4gd2FrZSBmcm9tIFM0Clsg
ICAgMC4zNjc0ODddIHJ0Y19jbW9zIDAwOjAxOiByZWdpc3RlcmVkIGFzIHJ0YzAKWyAgICAw
LjM2NzUxMF0gcnRjX2Ntb3MgMDA6MDE6IHNldHRpbmcgc3lzdGVtIGNsb2NrIHRvIDIwMjMt
MDQtMjFUMjA6NTY6MDcgVVRDICgxNjgyMTEwNTY3KQpbICAgIDAuMzY3NTUzXSBydGNfY21v
cyAwMDowMTogYWxhcm1zIHVwIHRvIG9uZSBkYXksIHkzaywgMTE0IGJ5dGVzIG52cmFtLCBo
cGV0IGlycXMKWyAgICAwLjM2NzU5MF0gZGV2aWNlLW1hcHBlcjogdWV2ZW50OiB2ZXJzaW9u
IDEuMC4zClsgICAgMC4zNjc2NjVdIGRldmljZS1tYXBwZXI6IGlvY3RsOiA0LjQ3LjAtaW9j
dGwgKDIwMjItMDctMjgpIGluaXRpYWxpc2VkOiBkbS1kZXZlbEByZWRoYXQuY29tClsgICAg
MC4zNjc4MjddIGhpZDogcmF3IEhJRCBldmVudHMgZHJpdmVyIChDKSBKaXJpIEtvc2luYQpb
ICAgIDAuMzY3OTEzXSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVy
IHVzYmhpZApbICAgIDAuMzY3OTE0XSB1c2JoaWQ6IFVTQiBISUQgY29yZSBkcml2ZXIKWyAg
ICAwLjM2ODAxMF0gSW5pdGlhbGl6aW5nIFhGUk0gbmV0bGluayBzb2NrZXQKWyAgICAwLjM2
ODAyMF0gTkVUOiBSZWdpc3RlcmVkIFBGX1BBQ0tFVCBwcm90b2NvbCBmYW1pbHkKWyAgICAw
LjM2ODAyMV0geDg2L3BtOiBmYW1pbHkgMHgxNSBjcHUgZGV0ZWN0ZWQsIE1TUiBzYXZpbmcg
aXMgbmVlZGVkIGR1cmluZyBzdXNwZW5kaW5nLgpbICAgIDAuMzY4MzQwXSBtaWNyb2NvZGU6
IENQVTE6IHBhdGNoX2xldmVsPTB4MDYwMDExMWYKWyAgICAwLjM2ODM0Ml0gbWljcm9jb2Rl
OiBDUFUwOiBwYXRjaF9sZXZlbD0weDA2MDAxMTFmClsgICAgMC4zNjgzNTNdIG1pY3JvY29k
ZTogTWljcm9jb2RlIFVwZGF0ZSBEcml2ZXI6IHYyLjIuClsgICAgMC4zNjgzNThdIElQSSBz
aG9ydGhhbmQgYnJvYWRjYXN0OiBlbmFibGVkClsgICAgMC4zNjgzNzBdIEFWWCB2ZXJzaW9u
IG9mIGdjbV9lbmMvZGVjIGVuZ2FnZWQuClsgICAgMC4zNjg0MDFdIEFFUyBDVFIgbW9kZSBi
eTggb3B0aW1pemF0aW9uIGVuYWJsZWQKWyAgICAwLjM3MjQ5NV0gc2NoZWRfY2xvY2s6IE1h
cmtpbmcgc3RhYmxlICgyNTQ1MTE3NDksIDExNzMyODAxMyktPigzNzQ1OTA3MDksIC0yNzUw
OTQ3KQpbICAgIDAuMzcyNzQ1XSByZWdpc3RlcmVkIHRhc2tzdGF0cyB2ZXJzaW9uIDEKWyAg
ICAwLjM3Mjk4N10genN3YXA6IGxvYWRlZCB1c2luZyBwb29sIGx6by96YnVkClsgICAgMC4z
Nzc0NjFdIGttZW1sZWFrOiBLZXJuZWwgbWVtb3J5IGxlYWsgZGV0ZWN0b3IgaW5pdGlhbGl6
ZWQgKG1lbSBwb29sIGF2YWlsYWJsZTogMTU2NzkpClsgICAgMC4zNzc0NjRdIGttZW1sZWFr
OiBBdXRvbWF0aWMgbWVtb3J5IHNjYW5uaW5nIHRocmVhZCBzdGFydGVkClsgICAgMC4zNzc0
NjVdIGRlYnVnX3ZtX3BndGFibGU6IFtkZWJ1Z192bV9wZ3RhYmxlICAgICAgICAgXTogVmFs
aWRhdGluZyBhcmNoaXRlY3R1cmUgcGFnZSB0YWJsZSBoZWxwZXJzClsgICAgMC4zODIyNTBd
IEtleSB0eXBlIGVuY3J5cHRlZCByZWdpc3RlcmVkClsgICAgMC4zODUzOTVdIFBNOiAgIE1h
Z2ljIG51bWJlcjogMzo1Nzk6OTUzClsgICAgMC4zODU0NDldIG1lbW9yeSBtZW1vcnk2OiBo
YXNoIG1hdGNoZXMKWyAgICAwLjQ4MzQyNl0gYXRhNzogU0FUQSBsaW5rIHVwIDYuMCBHYnBz
IChTU3RhdHVzIDEzMyBTQ29udHJvbCAzMDApClsgICAgMC40ODM1ODddIGF0YTcuMDA6IEFU
QS05OiBTYW5EaXNrIFNEU1NEUDA2NEcsIDIuMC4wLCBtYXggVURNQS8xMzMKWyAgICAwLjQ4
MzU5MF0gYXRhNy4wMDogMTI1MDQ1NDI0IHNlY3RvcnMsIG11bHRpIDE6IExCQTQ4IE5DUSAo
ZGVwdGggMzIpClsgICAgMC40ODM3OTJdIGF0YTcuMDA6IGNvbmZpZ3VyZWQgZm9yIFVETUEv
MTMzClsgICAgMC40ODQwMDRdIHNjc2kgNjowOjA6MDogRGlyZWN0LUFjY2VzcyAgICAgQVRB
ICAgICAgU2FuRGlzayBTRFNTRFAwNiAwICAgIFBROiAwIEFOU0k6IDUKWyAgICAwLjQ4NTAw
Nl0gc2QgNjowOjA6MDogW3NkYV0gMTI1MDQ1NDI0IDUxMi1ieXRlIGxvZ2ljYWwgYmxvY2tz
OiAoNjQuMCBHQi81OS42IEdpQikKWyAgICAwLjQ4NTAyOF0gc2QgNjowOjA6MDogW3NkYV0g
V3JpdGUgUHJvdGVjdCBpcyBvZmYKWyAgICAwLjQ4NTAzM10gc2QgNjowOjA6MDogW3NkYV0g
TW9kZSBTZW5zZTogMDAgM2EgMDAgMDAKWyAgICAwLjQ4NTA2Ml0gc2QgNjowOjA6MDogW3Nk
YV0gV3JpdGUgY2FjaGU6IGVuYWJsZWQsIHJlYWQgY2FjaGU6IGVuYWJsZWQsIGRvZXNuJ3Qg
c3VwcG9ydCBEUE8gb3IgRlVBClsgICAgMC40ODUxMDNdIHNkIDY6MDowOjA6IFtzZGFdIFBy
ZWZlcnJlZCBtaW5pbXVtIEkvTyBzaXplIDUxMiBieXRlcwpbICAgIDAuNDg2NTU0XSAgc2Rh
OiBzZGExIHNkYTIgc2RhMwpbICAgIDAuNDg3MDMzXSBzZCA2OjA6MDowOiBbc2RhXSBBdHRh
Y2hlZCBTQ1NJIGRpc2sKWyAgICAwLjUwODU0M10gRVhUNC1mcyAoc2RhMyk6IG1vdW50ZWQg
ZmlsZXN5c3RlbSBmZTI5ZTBkYy02MzAzLTQ0MDEtOTg3Yy04NDcyYmMxYjk1MTYgd2l0aCBv
cmRlcmVkIGRhdGEgbW9kZS4gUXVvdGEgbW9kZTogbm9uZS4KWyAgICAwLjUwODU5MV0gVkZT
OiBNb3VudGVkIHJvb3QgKGV4dDQgZmlsZXN5c3RlbSkgb24gZGV2aWNlIDg6My4KWyAgICAw
LjUxMDYxOF0gZGV2dG1wZnM6IG1vdW50ZWQKWyAgICAwLjUxNDUwNl0gRnJlZWluZyB1bnVz
ZWQga2VybmVsIGltYWdlIChpbml0bWVtKSBtZW1vcnk6IDI5MDhLClsgICAgMC41MzIxNDFd
IFdyaXRlIHByb3RlY3RpbmcgdGhlIGtlcm5lbCByZWFkLW9ubHkgZGF0YTogMjA0ODBrClsg
ICAgMC41MzI0MjVdIEZyZWVpbmcgdW51c2VkIGtlcm5lbCBpbWFnZSAocm9kYXRhL2RhdGEg
Z2FwKSBtZW1vcnk6IDgzNksKWyAgICAwLjU2OTczNl0geDg2L21tOiBDaGVja2VkIFcrWCBt
YXBwaW5nczogcGFzc2VkLCBubyBXK1ggcGFnZXMgZm91bmQuClsgICAgMC41Njk3NDJdIHJv
ZGF0YV90ZXN0OiBhbGwgdGVzdHMgd2VyZSBzdWNjZXNzZnVsClsgICAgMC41Njk3NzNdIFJ1
biAvc2Jpbi9pbml0IGFzIGluaXQgcHJvY2VzcwpbICAgIDAuNTY5Nzc1XSAgIHdpdGggYXJn
dW1lbnRzOgpbICAgIDAuNTY5Nzc2XSAgICAgL3NiaW4vaW5pdApbICAgIDAuNTY5Nzc3XSAg
ICAgbm9pc2FwbnAKWyAgICAwLjU2OTc3OF0gICB3aXRoIGVudmlyb25tZW50OgpbICAgIDAu
NTY5Nzc5XSAgICAgSE9NRT0vClsgICAgMC41Njk3ODBdICAgICBURVJNPWxpbnV4ClsgICAg
MC41Njk3ODFdICAgICBCT09UX0lNQUdFPS9ib290L3ZtbGludXotNi4zLjAtcmMzLTAwMDQ2
LWc4YmE2NDNkN2UxYzcKWyAgICAwLjc1MTI2OV0gc3lzdGVtZFsxXTogSW5zZXJ0ZWQgbW9k
dWxlICdhdXRvZnM0JwpbICAgIDAuNzc3NzY3XSBORVQ6IFJlZ2lzdGVyZWQgUEZfSU5FVDYg
cHJvdG9jb2wgZmFtaWx5ClsgICAgMC43Nzg2NzRdIFNlZ21lbnQgUm91dGluZyB3aXRoIElQ
djYKWyAgICAwLjc3ODcwMl0gSW4tc2l0dSBPQU0gKElPQU0pIHdpdGggSVB2NgpbICAgIDAu
ODA0MTMzXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kIDI1Mi42LTEgcnVubmluZyBpbiBzeXN0ZW0g
bW9kZSAoK1BBTSArQVVESVQgK1NFTElOVVggK0FQUEFSTU9SICtJTUEgK1NNQUNLICtTRUND
T01QICtHQ1JZUFQgLUdOVVRMUyArT1BFTlNTTCArQUNMICtCTEtJRCArQ1VSTCArRUxGVVRJ
TFMgK0ZJRE8yICtJRE4yIC1JRE4gK0lQVEMgK0tNT0QgK0xJQkNSWVBUU0VUVVAgK0xJQkZE
SVNLICtQQ1JFMiAtUFdRVUFMSVRZICtQMTFLSVQgK1FSRU5DT0RFICtUUE0yICtCWklQMiAr
TFo0ICtYWiArWkxJQiArWlNURCAtQlBGX0ZSQU1FV09SSyAtWEtCQ09NTU9OICtVVE1QICtT
WVNWSU5JVCBkZWZhdWx0LWhpZXJhcmNoeT11bmlmaWVkKQpbICAgIDAuODA0MTQzXSBzeXN0
ZW1kWzFdOiBEZXRlY3RlZCBhcmNoaXRlY3R1cmUgeDg2LTY0LgpbICAgIDAuODA4OTAzXSBz
eXN0ZW1kWzFdOiBIb3N0bmFtZSBzZXQgdG8gPGtvZGk+LgpbICAgIDEuMDgxNjE5XSBzeXN0
ZW1kWzFdOiBRdWV1ZWQgc3RhcnQgam9iIGZvciBkZWZhdWx0IHRhcmdldCBncmFwaGljYWwu
dGFyZ2V0LgpbICAgIDEuMTAyNDA0XSBzeXN0ZW1kWzFdOiBDcmVhdGVkIHNsaWNlIHN5c3Rl
bS1nZXR0eS5zbGljZSAtIFNsaWNlIC9zeXN0ZW0vZ2V0dHkuClsgICAgMS4xMDM1MDVdIHN5
c3RlbWRbMV06IENyZWF0ZWQgc2xpY2Ugc3lzdGVtLW1vZHByb2JlLnNsaWNlIC0gU2xpY2Ug
L3N5c3RlbS9tb2Rwcm9iZS4KWyAgICAxLjEwNDM0NV0gc3lzdGVtZFsxXTogQ3JlYXRlZCBz
bGljZSB1c2VyLnNsaWNlIC0gVXNlciBhbmQgU2Vzc2lvbiBTbGljZS4KWyAgICAxLjEwNDUz
MV0gc3lzdGVtZFsxXTogU3RhcnRlZCBzeXN0ZW1kLWFzay1wYXNzd29yZC1jb25zb2xlLnBh
dGggLSBEaXNwYXRjaCBQYXNzd29yZCBSZXF1ZXN0cyB0byBDb25zb2xlIERpcmVjdG9yeSBX
YXRjaC4KWyAgICAxLjEwNDY1NV0gc3lzdGVtZFsxXTogU3RhcnRlZCBzeXN0ZW1kLWFzay1w
YXNzd29yZC13YWxsLnBhdGggLSBGb3J3YXJkIFBhc3N3b3JkIFJlcXVlc3RzIHRvIFdhbGwg
RGlyZWN0b3J5IFdhdGNoLgpbICAgIDEuMTA1MDgyXSBzeXN0ZW1kWzFdOiBTZXQgdXAgYXV0
b21vdW50IHByb2Mtc3lzLWZzLWJpbmZtdF9taXNjLmF1dG9tb3VudCAtIEFyYml0cmFyeSBF
eGVjdXRhYmxlIEZpbGUgRm9ybWF0cyBGaWxlIFN5c3RlbSBBdXRvbW91bnQgUG9pbnQuClsg
ICAgMS4xMDUxMjJdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IGNyeXB0c2V0dXAudGFy
Z2V0IC0gTG9jYWwgRW5jcnlwdGVkIFZvbHVtZXMuClsgICAgMS4xMDUxNzFdIHN5c3RlbWRb
MV06IFJlYWNoZWQgdGFyZ2V0IGludGVncml0eXNldHVwLnRhcmdldCAtIExvY2FsIEludGVn
cml0eSBQcm90ZWN0ZWQgVm9sdW1lcy4KWyAgICAxLjEwNTIxM10gc3lzdGVtZFsxXTogUmVh
Y2hlZCB0YXJnZXQgcGF0aHMudGFyZ2V0IC0gUGF0aCBVbml0cy4KWyAgICAxLjEwNTI0OF0g
c3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgcmVtb3RlLWZzLnRhcmdldCAtIFJlbW90ZSBG
aWxlIFN5c3RlbXMuClsgICAgMS4xMDUyNzZdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0
IHNsaWNlcy50YXJnZXQgLSBTbGljZSBVbml0cy4KWyAgICAxLjEwNTMwMl0gc3lzdGVtZFsx
XTogUmVhY2hlZCB0YXJnZXQgc3dhcC50YXJnZXQgLSBTd2Fwcy4KWyAgICAxLjEwNTM0NF0g
c3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgdmVyaXR5c2V0dXAudGFyZ2V0IC0gTG9jYWwg
VmVyaXR5IFByb3RlY3RlZCBWb2x1bWVzLgpbICAgIDEuMTA3OTExXSBzeXN0ZW1kWzFdOiBM
aXN0ZW5pbmcgb24gc3lzdGVtZC1jb3JlZHVtcC5zb2NrZXQgLSBQcm9jZXNzIENvcmUgRHVt
cCBTb2NrZXQuClsgICAgMS4xMDgxNTRdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBzeXN0
ZW1kLWZzY2tkLnNvY2tldCAtIGZzY2sgdG8gZnNja2QgY29tbXVuaWNhdGlvbiBTb2NrZXQu
ClsgICAgMS4xMDgzMThdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBzeXN0ZW1kLWluaXRj
dGwuc29ja2V0IC0gaW5pdGN0bCBDb21wYXRpYmlsaXR5IE5hbWVkIFBpcGUuClsgICAgMS4x
MDg2MjRdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBzeXN0ZW1kLWpvdXJuYWxkLWF1ZGl0
LnNvY2tldCAtIEpvdXJuYWwgQXVkaXQgU29ja2V0LgpbICAgIDEuMTA4ODg2XSBzeXN0ZW1k
WzFdOiBMaXN0ZW5pbmcgb24gc3lzdGVtZC1qb3VybmFsZC1kZXYtbG9nLnNvY2tldCAtIEpv
dXJuYWwgU29ja2V0ICgvZGV2L2xvZykuClsgICAgMS4xMDkxNTddIHN5c3RlbWRbMV06IExp
c3RlbmluZyBvbiBzeXN0ZW1kLWpvdXJuYWxkLnNvY2tldCAtIEpvdXJuYWwgU29ja2V0Lgpb
ICAgIDEuMTA5NDE0XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gc3lzdGVtZC1uZXR3b3Jr
ZC5zb2NrZXQgLSBOZXR3b3JrIFNlcnZpY2UgTmV0bGluayBTb2NrZXQuClsgICAgMS4xMTAy
MTVdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBzeXN0ZW1kLXVkZXZkLWNvbnRyb2wuc29j
a2V0IC0gdWRldiBDb250cm9sIFNvY2tldC4KWyAgICAxLjExMDQ3NV0gc3lzdGVtZFsxXTog
TGlzdGVuaW5nIG9uIHN5c3RlbWQtdWRldmQta2VybmVsLnNvY2tldCAtIHVkZXYgS2VybmVs
IFNvY2tldC4KWyAgICAxLjExMzE2OF0gc3lzdGVtZFsxXTogTW91bnRpbmcgZGV2LWh1Z2Vw
YWdlcy5tb3VudCAtIEh1Z2UgUGFnZXMgRmlsZSBTeXN0ZW0uLi4KWyAgICAxLjExNjU4NV0g
c3lzdGVtZFsxXTogTW91bnRpbmcgZGV2LW1xdWV1ZS5tb3VudCAtIFBPU0lYIE1lc3NhZ2Ug
UXVldWUgRmlsZSBTeXN0ZW0uLi4KWyAgICAxLjEyMTA4NV0gc3lzdGVtZFsxXTogTW91bnRp
bmcgc3lzLWtlcm5lbC1kZWJ1Zy5tb3VudCAtIEtlcm5lbCBEZWJ1ZyBGaWxlIFN5c3RlbS4u
LgpbICAgIDEuMTI0NTc0XSBzeXN0ZW1kWzFdOiBNb3VudGluZyBzeXMta2VybmVsLXRyYWNp
bmcubW91bnQgLSBLZXJuZWwgVHJhY2UgRmlsZSBTeXN0ZW0uLi4KWyAgICAxLjEyODIyN10g
c3lzdGVtZFsxXTogU3RhcnRpbmcga21vZC1zdGF0aWMtbm9kZXMuc2VydmljZSAtIENyZWF0
ZSBMaXN0IG9mIFN0YXRpYyBEZXZpY2UgTm9kZXMuLi4KWyAgICAxLjEzMjMxMl0gc3lzdGVt
ZFsxXTogU3RhcnRpbmcgbW9kcHJvYmVAY29uZmlnZnMuc2VydmljZSAtIExvYWQgS2VybmVs
IE1vZHVsZSBjb25maWdmcy4uLgpbICAgIDEuMTM1NjEwXSBzeXN0ZW1kWzFdOiBTdGFydGlu
ZyBtb2Rwcm9iZUBkbV9tb2Quc2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBkbV9tb2Qu
Li4KWyAgICAxLjEzODg3Nl0gc3lzdGVtZFsxXTogU3RhcnRpbmcgbW9kcHJvYmVAZHJtLnNl
cnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1bGUgZHJtLi4uClsgICAgMS4xNDc0MjVdIHN5c3Rl
bWRbMV06IFN0YXJ0aW5nIG1vZHByb2JlQGVmaV9wc3RvcmUuc2VydmljZSAtIExvYWQgS2Vy
bmVsIE1vZHVsZSBlZmlfcHN0b3JlLi4uClsgICAgMS4xNTA2MDFdIHN5c3RlbWRbMV06IFN0
YXJ0aW5nIG1vZHByb2JlQGZ1c2Uuc2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBmdXNl
Li4uClsgICAgMS4xNTM5MjFdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIG1vZHByb2JlQGxvb3Au
c2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBsb29wLi4uClsgICAgMS4xNTQwMjRdIHN5
c3RlbWRbMV06IHN5c3RlbWQtZmlyc3Rib290LnNlcnZpY2UgLSBGaXJzdCBCb290IFdpemFy
ZCB3YXMgc2tpcHBlZCBiZWNhdXNlIG9mIGFuIHVubWV0IGNvbmRpdGlvbiBjaGVjayAoQ29u
ZGl0aW9uRmlyc3RCb290PXllcykuClsgICAgMS4xNTQxMTJdIHN5c3RlbWRbMV06IHN5c3Rl
bWQtZnNjay1yb290LnNlcnZpY2UgLSBGaWxlIFN5c3RlbSBDaGVjayBvbiBSb290IERldmlj
ZSB3YXMgc2tpcHBlZCBiZWNhdXNlIG9mIGFuIHVubWV0IGNvbmRpdGlvbiBjaGVjayAoQ29u
ZGl0aW9uUGF0aElzUmVhZFdyaXRlPSEvKS4KWyAgICAxLjE1NDE1OF0gc3lzdGVtZFsxXTog
UmVhY2hlZCB0YXJnZXQgbG9jYWwtZnMudGFyZ2V0IC0gTG9jYWwgRmlsZSBTeXN0ZW1zLgpb
ICAgIDEuMTU0MjU5XSBzeXN0ZW1kWzFdOiBhcHBhcm1vci5zZXJ2aWNlIC0gTG9hZCBBcHBB
cm1vciBwcm9maWxlcyB3YXMgc2tpcHBlZCBiZWNhdXNlIG9mIGFuIHVubWV0IGNvbmRpdGlv
biBjaGVjayAoQ29uZGl0aW9uU2VjdXJpdHk9YXBwYXJtb3IpLgpbICAgIDEuMTY3MTEyXSBm
dXNlOiBpbml0IChBUEkgdmVyc2lvbiA3LjM4KQpbICAgIDEuMTY3MjU1XSBsb29wOiBtb2R1
bGUgbG9hZGVkClsgICAgMS4xNjg1ODZdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHN5c3RlbWQt
YmluZm10LnNlcnZpY2UgLSBTZXQgVXAgQWRkaXRpb25hbCBCaW5hcnkgRm9ybWF0cy4uLgpb
ICAgIDEuMTc0MDM1XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBzeXN0ZW1kLWpvdXJuYWxkLnNl
cnZpY2UgLSBKb3VybmFsIFNlcnZpY2UuLi4KWyAgICAxLjE3NzM1M10gc3lzdGVtZFsxXTog
U3RhcnRpbmcgc3lzdGVtZC1yYW5kb20tc2VlZC5zZXJ2aWNlIC0gTG9hZC9TYXZlIFJhbmRv
bSBTZWVkLi4uClsgICAgMS4xODA4NDhdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHN5c3RlbWQt
c3lzY3RsLnNlcnZpY2UgLSBBcHBseSBLZXJuZWwgVmFyaWFibGVzLi4uClsgICAgMS4xOTUw
MjVdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHN5c3RlbWQtc3lzdXNlcnMuc2VydmljZSAtIENy
ZWF0ZSBTeXN0ZW0gVXNlcnMuLi4KWyAgICAxLjIwMTk1Nl0gc3lzdGVtZFsxXTogU3RhcnRp
bmcgc3lzdGVtZC11ZGV2LXRyaWdnZXIuc2VydmljZSAtIENvbGRwbHVnIEFsbCB1ZGV2IERl
dmljZXMuLi4KWyAgICAxLjIxNzY5NV0gc3lzdGVtZFsxXTogTW91bnRlZCBkZXYtaHVnZXBh
Z2VzLm1vdW50IC0gSHVnZSBQYWdlcyBGaWxlIFN5c3RlbS4KWyAgICAxLjIxODQwOV0gc3lz
dGVtZFsxXTogTW91bnRlZCBkZXYtbXF1ZXVlLm1vdW50IC0gUE9TSVggTWVzc2FnZSBRdWV1
ZSBGaWxlIFN5c3RlbS4KWyAgICAxLjIxODY4M10gc3lzdGVtZFsxXTogTW91bnRlZCBzeXMt
a2VybmVsLWRlYnVnLm1vdW50IC0gS2VybmVsIERlYnVnIEZpbGUgU3lzdGVtLgpbICAgIDEu
MjE4OTU3XSBzeXN0ZW1kWzFdOiBNb3VudGVkIHN5cy1rZXJuZWwtdHJhY2luZy5tb3VudCAt
IEtlcm5lbCBUcmFjZSBGaWxlIFN5c3RlbS4KWyAgICAxLjIyNTU0Nl0gc3lzdGVtZFsxXTog
RmluaXNoZWQga21vZC1zdGF0aWMtbm9kZXMuc2VydmljZSAtIENyZWF0ZSBMaXN0IG9mIFN0
YXRpYyBEZXZpY2UgTm9kZXMuClsgICAgMS4yMjY1MzRdIHN5c3RlbWRbMV06IG1vZHByb2Jl
QGNvbmZpZ2ZzLnNlcnZpY2U6IERlYWN0aXZhdGVkIHN1Y2Nlc3NmdWxseS4KWyAgICAxLjIy
Njg5M10gc3lzdGVtZFsxXTogRmluaXNoZWQgbW9kcHJvYmVAY29uZmlnZnMuc2VydmljZSAt
IExvYWQgS2VybmVsIE1vZHVsZSBjb25maWdmcy4KWyAgICAxLjIyNzY0NV0gc3lzdGVtZFsx
XTogbW9kcHJvYmVAZG1fbW9kLnNlcnZpY2U6IERlYWN0aXZhdGVkIHN1Y2Nlc3NmdWxseS4K
WyAgICAxLjIyODIwOV0gc3lzdGVtZFsxXTogRmluaXNoZWQgbW9kcHJvYmVAZG1fbW9kLnNl
cnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1bGUgZG1fbW9kLgpbICAgIDEuMjMwMzgzXSBzeXN0
ZW1kWzFdOiBtb2Rwcm9iZUBkcm0uc2VydmljZTogRGVhY3RpdmF0ZWQgc3VjY2Vzc2Z1bGx5
LgpbICAgIDEuMjMwNzI1XSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBtb2Rwcm9iZUBkcm0uc2Vy
dmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBkcm0uClsgICAgMS4yMzE0NTldIHN5c3RlbWRb
MV06IG1vZHByb2JlQGVmaV9wc3RvcmUuc2VydmljZTogRGVhY3RpdmF0ZWQgc3VjY2Vzc2Z1
bGx5LgpbICAgIDEuMjMxNzc0XSBzeXN0ZW1kWzFdOiBGaW5pc2hlZCBtb2Rwcm9iZUBlZmlf
cHN0b3JlLnNlcnZpY2UgLSBMb2FkIEtlcm5lbCBNb2R1bGUgZWZpX3BzdG9yZS4KWyAgICAx
LjIzMjQ3NF0gc3lzdGVtZFsxXTogbW9kcHJvYmVAZnVzZS5zZXJ2aWNlOiBEZWFjdGl2YXRl
ZCBzdWNjZXNzZnVsbHkuClsgICAgMS4yMzI3OTVdIHN5c3RlbWRbMV06IEZpbmlzaGVkIG1v
ZHByb2JlQGZ1c2Uuc2VydmljZSAtIExvYWQgS2VybmVsIE1vZHVsZSBmdXNlLgpbICAgIDEu
MjM0MTA1XSBzeXN0ZW1kWzFdOiBtb2Rwcm9iZUBsb29wLnNlcnZpY2U6IERlYWN0aXZhdGVk
IHN1Y2Nlc3NmdWxseS4KWyAgICAxLjIzNDQ2Ml0gc3lzdGVtZFsxXTogRmluaXNoZWQgbW9k
cHJvYmVAbG9vcC5zZXJ2aWNlIC0gTG9hZCBLZXJuZWwgTW9kdWxlIGxvb3AuClsgICAgMS4y
MzUwOTldIHN5c3RlbWRbMV06IHByb2Mtc3lzLWZzLWJpbmZtdF9taXNjLmF1dG9tb3VudDog
R290IGF1dG9tb3VudCByZXF1ZXN0IGZvciAvcHJvYy9zeXMvZnMvYmluZm10X21pc2MsIHRy
aWdnZXJlZCBieSAxNDAgKHN5c3RlbWQtYmluZm10KQpbICAgIDEuMjYxNzAzXSBzeXN0ZW1k
WzFdOiBNb3VudGluZyBwcm9jLXN5cy1mcy1iaW5mbXRfbWlzYy5tb3VudCAtIEFyYml0cmFy
eSBFeGVjdXRhYmxlIEZpbGUgRm9ybWF0cyBGaWxlIFN5c3RlbS4uLgpbICAgIDEuMjcwMTYw
XSBzeXN0ZW1kWzFdOiBNb3VudGluZyBzeXMtZnMtZnVzZS1jb25uZWN0aW9ucy5tb3VudCAt
IEZVU0UgQ29udHJvbCBGaWxlIFN5c3RlbS4uLgpbICAgIDEuMjczMjc0XSBzeXN0ZW1kWzFd
OiBNb3VudGluZyBzeXMta2VybmVsLWNvbmZpZy5tb3VudCAtIEtlcm5lbCBDb25maWd1cmF0
aW9uIEZpbGUgU3lzdGVtLi4uClsgICAgMS4yNzM0MDJdIHN5c3RlbWRbMV06IHN5c3RlbWQt
cHN0b3JlLnNlcnZpY2UgLSBQbGF0Zm9ybSBQZXJzaXN0ZW50IFN0b3JhZ2UgQXJjaGl2YWwg
d2FzIHNraXBwZWQgYmVjYXVzZSBvZiBhbiB1bm1ldCBjb25kaXRpb24gY2hlY2sgKENvbmRp
dGlvbkRpcmVjdG9yeU5vdEVtcHR5PS9zeXMvZnMvcHN0b3JlKS4KWyAgICAxLjI3MzU3NF0g
c3lzdGVtZFsxXTogc3lzdGVtZC1yZXBhcnQuc2VydmljZSAtIFJlcGFydGl0aW9uIFJvb3Qg
RGlzayB3YXMgc2tpcHBlZCBiZWNhdXNlIG5vIHRyaWdnZXIgY29uZGl0aW9uIGNoZWNrcyB3
ZXJlIG1ldC4KWyAgICAxLjI5MDQ4MV0gc3lzdGVtZFsxXTogRmluaXNoZWQgc3lzdGVtZC1z
eXNjdGwuc2VydmljZSAtIEFwcGx5IEtlcm5lbCBWYXJpYWJsZXMuClsgICAgMS4yOTQxNTld
IHN5c3RlbWRbMV06IEZpbmlzaGVkIHN5c3RlbWQtc3lzdXNlcnMuc2VydmljZSAtIENyZWF0
ZSBTeXN0ZW0gVXNlcnMuClsgICAgMS4yOTQ1NTVdIHN5c3RlbWRbMV06IE1vdW50ZWQgc3lz
LWZzLWZ1c2UtY29ubmVjdGlvbnMubW91bnQgLSBGVVNFIENvbnRyb2wgRmlsZSBTeXN0ZW0u
ClsgICAgMS4zMDMzNTZdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHN5c3RlbWQtdG1wZmlsZXMt
c2V0dXAtZGV2LnNlcnZpY2UgLSBDcmVhdGUgU3RhdGljIERldmljZSBOb2RlcyBpbiAvZGV2
Li4uClsgICAgMS4zMDQ4MTddIHN5c3RlbWRbMV06IE1vdW50ZWQgc3lzLWtlcm5lbC1jb25m
aWcubW91bnQgLSBLZXJuZWwgQ29uZmlndXJhdGlvbiBGaWxlIFN5c3RlbS4KWyAgICAxLjMy
MDkwOV0gc3lzdGVtZFsxXTogTW91bnRlZCBwcm9jLXN5cy1mcy1iaW5mbXRfbWlzYy5tb3Vu
dCAtIEFyYml0cmFyeSBFeGVjdXRhYmxlIEZpbGUgRm9ybWF0cyBGaWxlIFN5c3RlbS4KWyAg
ICAxLjMyMzU2MF0gc3lzdGVtZFsxXTogRmluaXNoZWQgc3lzdGVtZC1iaW5mbXQuc2Vydmlj
ZSAtIFNldCBVcCBBZGRpdGlvbmFsIEJpbmFyeSBGb3JtYXRzLgpbICAgIDEuMzY5MzU2XSB0
c2M6IFJlZmluZWQgVFNDIGNsb2Nrc291cmNlIGNhbGlicmF0aW9uOiAzOTAwLjIyNCBNSHoK
WyAgICAxLjM2OTM2N10gY2xvY2tzb3VyY2U6IHRzYzogbWFzazogMHhmZmZmZmZmZmZmZmZm
ZmZmIG1heF9jeWNsZXM6IDB4NzA3MDVhNjQ3MmMsIG1heF9pZGxlX25zOiA4ODE1OTA1ODY4
MTIgbnMKWyAgICAxLjM3MDAzNl0gc3lzdGVtZFsxXTogRmluaXNoZWQgc3lzdGVtZC10bXBm
aWxlcy1zZXR1cC1kZXYuc2VydmljZSAtIENyZWF0ZSBTdGF0aWMgRGV2aWNlIE5vZGVzIGlu
IC9kZXYuClsgICAgMS4zNzMyMDddIGNsb2Nrc291cmNlOiBTd2l0Y2hlZCB0byBjbG9ja3Nv
dXJjZSB0c2MKWyAgICAxLjM4OTgzMV0gc3lzdGVtZFsxXTogU3RhcnRpbmcgc3lzdGVtZC11
ZGV2ZC5zZXJ2aWNlIC0gUnVsZS1iYXNlZCBNYW5hZ2VyIGZvciBEZXZpY2UgRXZlbnRzIGFu
ZCBGaWxlcy4uLgpbICAgIDEuNDI3NDU1XSBzeXN0ZW1kWzFdOiBTdGFydGVkIHN5c3RlbWQt
am91cm5hbGQuc2VydmljZSAtIEpvdXJuYWwgU2VydmljZS4KWyAgICAxLjQ3MjczN10gc3lz
dGVtZC1qb3VybmFsZFsxNDJdOiBSZWNlaXZlZCBjbGllbnQgcmVxdWVzdCB0byBmbHVzaCBy
dW50aW1lIGpvdXJuYWwuClsgICAgMS43NDIyMjNdIHNkIDY6MDowOjA6IEF0dGFjaGVkIHNj
c2kgZ2VuZXJpYyBzZzAgdHlwZSAwClsgICAgMS44NjYwNDJdIGFjcGlfY3B1ZnJlcTogb3Zl
cnJpZGluZyBCSU9TIHByb3ZpZGVkIF9QU0QgZGF0YQpbICAgIDIuMDA1MzQ3XSByYW5kb206
IGNybmcgaW5pdCBkb25lClsgICAgMi4xMzc4MzJdIFFVSVJLOiBFbmFibGUgQU1EIFBMTCBm
aXgKWyAgICAyLjEzNzgzMl0gUVVJUks6IEVuYWJsZSBBTUQgUExMIGZpeApbICAgIDIuMTM3
ODg3XSBlaGNpLXBjaSAwMDAwOjAwOjEzLjI6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAg
Mi4xMzc5MjJdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogbmV3IFVTQiBidXMgcmVnaXN0ZXJl
ZCwgYXNzaWduZWQgYnVzIG51bWJlciAxClsgICAgMi4xMzc5MzZdIGVoY2ktcGNpIDAwMDA6
MDA6MTMuMjogYXBwbHlpbmcgQU1EIFNCNzAwL1NCODAwL0h1ZHNvbi0yLzMgRUhDSSBkdW1t
eSBxaCB3b3JrYXJvdW5kClsgICAgMi4xMzc5NDVdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjog
ZGVidWcgcG9ydCAxClsgICAgMi4xMzgxNDFdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogaXJx
IDE3LCBpbyBtZW0gMHhmMDFjZTAwMApbICAgIDIuMTM5NDkxXSBwaWl4NF9zbWJ1cyAwMDAw
OjAwOjE0LjA6IFNNQnVzIEhvc3QgQ29udHJvbGxlciBhdCAweGIwMCwgcmV2aXNpb24gMApb
ICAgIDIuMTM5NTA1XSBwaWl4NF9zbWJ1cyAwMDAwOjAwOjE0LjA6IFVzaW5nIHJlZ2lzdGVy
IDB4MmUgZm9yIFNNQnVzIHBvcnQgc2VsZWN0aW9uClsgICAgMi4xNDAxMDddIHBpaXg0X3Nt
YnVzIDAwMDA6MDA6MTQuMDogQXV4aWxpYXJ5IFNNQnVzIEhvc3QgQ29udHJvbGxlciBhdCAw
eGIyMApbICAgIDIuMTU0NjMyXSBlaGNpLXBjaSAwMDAwOjAwOjEzLjI6IFVTQiAyLjAgc3Rh
cnRlZCwgRUhDSSAxLjAwClsgICAgMi4xNTUyODRdIHVzYiB1c2IxOiBOZXcgVVNCIGRldmlj
ZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDIsIGJjZERldmljZT0gNi4w
MwpbICAgIDIuMTU1MjkxXSB1c2IgdXNiMTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZy
PTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAgICAyLjE1NTI5NF0gdXNiIHVzYjE6
IFByb2R1Y3Q6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMi4xNTUyOTZdIHVzYiB1c2Ix
OiBNYW51ZmFjdHVyZXI6IExpbnV4IDYuMy4wLXJjMy0wMDA0Ni1nOGJhNjQzZDdlMWM3IGVo
Y2lfaGNkClsgICAgMi4xNTUyOTldIHVzYiB1c2IxOiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6
MTMuMgpbICAgIDIuMTc0NTIxXSBodWIgMS0wOjEuMDogVVNCIGh1YiBmb3VuZApbICAgIDIu
MTc0NTYwXSBodWIgMS0wOjEuMDogNSBwb3J0cyBkZXRlY3RlZApbICAgIDIuMTc1NTgyXSBl
aGNpLXBjaSAwMDAwOjAwOjEyLjI6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMi4xNzU2
MTBdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNz
aWduZWQgYnVzIG51bWJlciAyClsgICAgMi4xNzU2MjNdIGVoY2ktcGNpIDAwMDA6MDA6MTIu
MjogYXBwbHlpbmcgQU1EIFNCNzAwL1NCODAwL0h1ZHNvbi0yLzMgRUhDSSBkdW1teSBxaCB3
b3JrYXJvdW5kClsgICAgMi4xNzU2MzJdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogZGVidWcg
cG9ydCAxClsgICAgMi4xNzU3ODldIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogaXJxIDE3LCBp
byBtZW0gMHhmMDFjZDAwMApbICAgIDIuMTg5MzczXSBlaGNpLXBjaSAwMDAwOjAwOjEyLjI6
IFVTQiAyLjAgc3RhcnRlZCwgRUhDSSAxLjAwClsgICAgMi4xODk3MjBdIHVzYiB1c2IyOiBO
ZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDIsIGJj
ZERldmljZT0gNi4wMwpbICAgIDIuMTg5NzI1XSB1c2IgdXNiMjogTmV3IFVTQiBkZXZpY2Ug
c3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAgICAyLjE4OTcy
N10gdXNiIHVzYjI6IFByb2R1Y3Q6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMi4xODk3
MzBdIHVzYiB1c2IyOiBNYW51ZmFjdHVyZXI6IExpbnV4IDYuMy4wLXJjMy0wMDA0Ni1nOGJh
NjQzZDdlMWM3IGVoY2lfaGNkClsgICAgMi4xODk3MzJdIHVzYiB1c2IyOiBTZXJpYWxOdW1i
ZXI6IDAwMDA6MDA6MTIuMgpbICAgIDIuMTkwMjgzXSBodWIgMi0wOjEuMDogVVNCIGh1YiBm
b3VuZApbICAgIDIuMTkwMzIzXSBodWIgMi0wOjEuMDogNSBwb3J0cyBkZXRlY3RlZApbICAg
IDIuMTkxODAxXSBlaGNpLXBjaSAwMDAwOjAwOjE2LjI6IEVIQ0kgSG9zdCBDb250cm9sbGVy
ClsgICAgMi4xOTE4MzFdIGVoY2ktcGNpIDAwMDA6MDA6MTYuMjogbmV3IFVTQiBidXMgcmVn
aXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAzClsgICAgMi4xOTE4NDVdIGVoY2ktcGNp
IDAwMDA6MDA6MTYuMjogYXBwbHlpbmcgQU1EIFNCNzAwL1NCODAwL0h1ZHNvbi0yLzMgRUhD
SSBkdW1teSBxaCB3b3JrYXJvdW5kClsgICAgMi4xOTE4NTVdIGVoY2ktcGNpIDAwMDA6MDA6
MTYuMjogZGVidWcgcG9ydCAxClsgICAgMi4xOTIwMThdIGVoY2ktcGNpIDAwMDA6MDA6MTYu
MjogaXJxIDE3LCBpbyBtZW0gMHhmMDFjZjAwMApbICAgIDIuMjA1MzczXSBlaGNpLXBjaSAw
MDAwOjAwOjE2LjI6IFVTQiAyLjAgc3RhcnRlZCwgRUhDSSAxLjAwClsgICAgMi4yMDU2NTFd
IHVzYiB1c2IzOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9k
dWN0PTAwMDIsIGJjZERldmljZT0gNi4wMwpbICAgIDIuMjA1NjU1XSB1c2IgdXNiMzogTmV3
IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEK
WyAgICAyLjIwNTY1OF0gdXNiIHVzYjM6IFByb2R1Y3Q6IEVIQ0kgSG9zdCBDb250cm9sbGVy
ClsgICAgMi4yMDU2NjBdIHVzYiB1c2IzOiBNYW51ZmFjdHVyZXI6IExpbnV4IDYuMy4wLXJj
My0wMDA0Ni1nOGJhNjQzZDdlMWM3IGVoY2lfaGNkClsgICAgMi4yMDU2NjJdIHVzYiB1c2Iz
OiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6MTYuMgpbICAgIDIuMjA2MjAwXSBodWIgMy0wOjEu
MDogVVNCIGh1YiBmb3VuZApbICAgIDIuMjA2MjM2XSBodWIgMy0wOjEuMDogNCBwb3J0cyBk
ZXRlY3RlZApbICAgIDIuMjIxMDUzXSBvaGNpLXBjaSAwMDAwOjAwOjEyLjA6IE9IQ0kgUENJ
IGhvc3QgY29udHJvbGxlcgpbICAgIDIuMjIxMDg1XSBvaGNpLXBjaSAwMDAwOjAwOjEyLjA6
IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1iZXIgNApbICAgIDIu
MjIxMjY0XSBvaGNpLXBjaSAwMDAwOjAwOjEyLjA6IGlycSAxOCwgaW8gbWVtIDB4ZjAxYzgw
MDAKWyAgICAyLjIzNzc5Nl0gcjgxNjkgMDAwMDowNDowMC4wOiBlbmFibGluZyBkZXZpY2Ug
KDAwMDAgLT4gMDAwMykKWyAgICAyLjI2MTMwOF0geGhjaV9oY2QgMDAwMDowMzowMC4wOiB4
SENJIEhvc3QgQ29udHJvbGxlcgpbICAgIDIuMjYxMzk0XSB4aGNpX2hjZCAwMDAwOjAzOjAw
LjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1iZXIgNQpbICAg
IDIuMjcyMTI1XSByODE2OSAwMDAwOjA0OjAwLjAgZXRoMDogUlRMODE2OGYvODExMWYsIDA4
OjYwOjZlOjc0OjdhOjUxLCBYSUQgNDgwLCBJUlEgMjgKWyAgICAyLjI3MjE0M10gcjgxNjkg
MDAwMDowNDowMC4wIGV0aDA6IGp1bWJvIGZlYXR1cmVzIFtmcmFtZXM6IDkxOTQgYnl0ZXMs
IHR4IGNoZWNrc3VtbWluZzoga29dClsgICAgMi4yODI2MzNdIHVzYiB1c2I0OiBOZXcgVVNC
IGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDEsIGJjZERldmlj
ZT0gNi4wMwpbICAgIDIuMjgyNjQ0XSB1c2IgdXNiNDogTmV3IFVTQiBkZXZpY2Ugc3RyaW5n
czogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEKWyAgICAyLjI4MjY1MF0gdXNi
IHVzYjQ6IFByb2R1Y3Q6IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcgpbICAgIDIuMjgyNjU1
XSB1c2IgdXNiNDogTWFudWZhY3R1cmVyOiBMaW51eCA2LjMuMC1yYzMtMDAwNDYtZzhiYTY0
M2Q3ZTFjNyBvaGNpX2hjZApbICAgIDIuMjgyNjU5XSB1c2IgdXNiNDogU2VyaWFsTnVtYmVy
OiAwMDAwOjAwOjEyLjAKWyAgICAyLjI4MzM3Ml0gaHViIDQtMDoxLjA6IFVTQiBodWIgZm91
bmQKWyAgICAyLjI4MzQzMV0gaHViIDQtMDoxLjA6IDUgcG9ydHMgZGV0ZWN0ZWQKWyAgICAy
LjI5NzU3Nl0gb2hjaS1wY2kgMDAwMDowMDoxMy4wOiBPSENJIFBDSSBob3N0IGNvbnRyb2xs
ZXIKWyAgICAyLjI5NzYwOV0gb2hjaS1wY2kgMDAwMDowMDoxMy4wOiBuZXcgVVNCIGJ1cyBy
ZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDYKWyAgICAyLjI5NzcyN10gb2hjaS1w
Y2kgMDAwMDowMDoxMy4wOiBpcnEgMTgsIGlvIG1lbSAweGYwMWM5MDAwClsgICAgMi4zMjA5
NTVdIHhoY2lfaGNkIDAwMDA6MDM6MDAuMDogaGNjIHBhcmFtcyAweDAyMDBmMTgwIGhjaSB2
ZXJzaW9uIDB4OTYgcXVpcmtzIDB4MDAwMDAwMDAwMDA4MDAxMApbICAgIDIuMzIyMDk5XSB4
aGNpX2hjZCAwMDAwOjAzOjAwLjA6IHhIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMi4zMjIx
MjFdIHhoY2lfaGNkIDAwMDA6MDM6MDAuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNz
aWduZWQgYnVzIG51bWJlciA3ClsgICAgMi4zMjIxMzZdIHhoY2lfaGNkIDAwMDA6MDM6MDAu
MDogSG9zdCBzdXBwb3J0cyBVU0IgMy4wIFN1cGVyU3BlZWQKWyAgICAyLjMyNDAyOV0gdXNi
IHVzYjU6IE5ldyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9
MDAwMiwgYmNkRGV2aWNlPSA2LjAzClsgICAgMi4zMjQwMzZdIHVzYiB1c2I1OiBOZXcgVVNC
IGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAg
IDIuMzI0MDM5XSB1c2IgdXNiNTogUHJvZHVjdDogeEhDSSBIb3N0IENvbnRyb2xsZXIKWyAg
ICAyLjMyNDA0MV0gdXNiIHVzYjU6IE1hbnVmYWN0dXJlcjogTGludXggNi4zLjAtcmMzLTAw
MDQ2LWc4YmE2NDNkN2UxYzcgeGhjaS1oY2QKWyAgICAyLjMyNDA0M10gdXNiIHVzYjU6IFNl
cmlhbE51bWJlcjogMDAwMDowMzowMC4wClsgICAgMi4zMjQ2ODddIGh1YiA1LTA6MS4wOiBV
U0IgaHViIGZvdW5kClsgICAgMi4zMjQ3MjddIGh1YiA1LTA6MS4wOiAyIHBvcnRzIGRldGVj
dGVkClsgICAgMi4zMjU0NjddIHVzYiB1c2I3OiBXZSBkb24ndCBrbm93IHRoZSBhbGdvcml0
aG1zIGZvciBMUE0gZm9yIHRoaXMgaG9zdCwgZGlzYWJsaW5nIExQTS4KWyAgICAyLjMyNTYy
NF0gdXNiIHVzYjc6IE5ldyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFBy
b2R1Y3Q9MDAwMywgYmNkRGV2aWNlPSA2LjAzClsgICAgMi4zMjU2MjhdIHVzYiB1c2I3OiBO
ZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9
MQpbICAgIDIuMzI1NjMwXSB1c2IgdXNiNzogUHJvZHVjdDogeEhDSSBIb3N0IENvbnRyb2xs
ZXIKWyAgICAyLjMyNTYzMl0gdXNiIHVzYjc6IE1hbnVmYWN0dXJlcjogTGludXggNi4zLjAt
cmMzLTAwMDQ2LWc4YmE2NDNkN2UxYzcgeGhjaS1oY2QKWyAgICAyLjMyNTYzNF0gdXNiIHVz
Yjc6IFNlcmlhbE51bWJlcjogMDAwMDowMzowMC4wClsgICAgMi4zMjYxOTVdIGh1YiA3LTA6
MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMi4zMjcwMzRdIGh1YiA3LTA6MS4wOiAyIHBvcnRz
IGRldGVjdGVkClsgICAgMi4zNDkyMDVdIHNuZF9oZGFfaW50ZWwgMDAwMDowMDowMS4xOiBG
b3JjZSB0byBub24tc25vb3AgbW9kZQpbICAgIDIuMzY2Mzk5XSB1c2IgdXNiNjogTmV3IFVT
QiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxLCBiY2REZXZp
Y2U9IDYuMDMKWyAgICAyLjM2NjQwNl0gdXNiIHVzYjY6IE5ldyBVU0IgZGV2aWNlIHN0cmlu
Z3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAgMi4zNjY0MDldIHVz
YiB1c2I2OiBQcm9kdWN0OiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXIKWyAgICAyLjM2NjQx
MV0gdXNiIHVzYjY6IE1hbnVmYWN0dXJlcjogTGludXggNi4zLjAtcmMzLTAwMDQ2LWc4YmE2
NDNkN2UxYzcgb2hjaV9oY2QKWyAgICAyLjM2NjQxM10gdXNiIHVzYjY6IFNlcmlhbE51bWJl
cjogMDAwMDowMDoxMy4wClsgICAgMi4zNjcyMTBdIGh1YiA2LTA6MS4wOiBVU0IgaHViIGZv
dW5kClsgICAgMi4zNjczMDNdIGh1YiA2LTA6MS4wOiA1IHBvcnRzIGRldGVjdGVkClsgICAg
Mi4zOTE0NDddIHI4MTY5IDAwMDA6MDQ6MDAuMCBlbnA0czA6IHJlbmFtZWQgZnJvbSBldGgw
ClsgICAgMi40MjAwMzddIG9oY2ktcGNpIDAwMDA6MDA6MTQuNTogT0hDSSBQQ0kgaG9zdCBj
b250cm9sbGVyClsgICAgMi40MjAwNzBdIG9oY2ktcGNpIDAwMDA6MDA6MTQuNTogbmV3IFVT
QiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciA4ClsgICAgMi40MjE1NTZd
IG9oY2ktcGNpIDAwMDA6MDA6MTQuNTogaXJxIDE4LCBpbyBtZW0gMHhmMDFjYTAwMApbICAg
IDIuNDQ1MjgxXSByODE2OSAwMDAwOjA0OjAwLjA6IERpcmVjdCBmaXJtd2FyZSBsb2FkIGZv
ciBydGxfbmljL3J0bDgxNjhmLTEuZncgZmFpbGVkIHdpdGggZXJyb3IgLTIKWyAgICAyLjQ0
NTI5Ml0gcjgxNjkgMDAwMDowNDowMC4wOiBVbmFibGUgdG8gbG9hZCBmaXJtd2FyZSBydGxf
bmljL3J0bDgxNjhmLTEuZncgKC0yKQpbICAgIDIuNDQ1ODY4XSBSVEw4MjExRSBHaWdhYml0
IEV0aGVybmV0IHI4MTY5LTAtNDAwOjAwOiBhdHRhY2hlZCBQSFkgZHJpdmVyIChtaWlfYnVz
OnBoeV9hZGRyPXI4MTY5LTAtNDAwOjAwLCBpcnE9TUFDKQpbICAgIDIuNDgxMjUwXSBpbnB1
dDogSERBIEFUSSBIRE1JIEhETUkvRFAscGNtPTMgYXMgL2RldmljZXMvcGNpMDAwMDowMC8w
MDAwOjAwOjAxLjEvc291bmQvY2FyZDAvaW5wdXQxClsgICAgMi40ODE4ODBdIGlucHV0OiBI
REEgQVRJIEhETUkgSERNSS9EUCxwY209NyBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6
MDA6MDEuMS9zb3VuZC9jYXJkMC9pbnB1dDIKWyAgICAyLjQ4ODczM10gdXNiIHVzYjg6IE5l
dyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMSwgYmNk
RGV2aWNlPSA2LjAzClsgICAgMi40ODg3NDBdIHVzYiB1c2I4OiBOZXcgVVNCIGRldmljZSBz
dHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAgIDIuNDg4NzQz
XSB1c2IgdXNiODogUHJvZHVjdDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyClsgICAgMi40
ODg3NDVdIHVzYiB1c2I4OiBNYW51ZmFjdHVyZXI6IExpbnV4IDYuMy4wLXJjMy0wMDA0Ni1n
OGJhNjQzZDdlMWM3IG9oY2lfaGNkClsgICAgMi40ODg3NDddIHVzYiB1c2I4OiBTZXJpYWxO
dW1iZXI6IDAwMDA6MDA6MTQuNQpbICAgIDIuNDkyODE2XSBzbmRfaGRhX2NvZGVjX3JlYWx0
ZWsgaGRhdWRpb0MxRDA6IEFMQzg5MjogU0tVIG5vdCByZWFkeSAweDAwMDAwMTAwClsgICAg
Mi40OTI5MDJdIGh1YiA4LTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMi40OTI5NTNdIGh1
YiA4LTA6MS4wOiAyIHBvcnRzIGRldGVjdGVkClsgICAgMi40OTM2MTJdIHNuZF9oZGFfY29k
ZWNfcmVhbHRlayBoZGF1ZGlvQzFEMDogYXV0b2NvbmZpZyBmb3IgQUxDODkyOiBsaW5lX291
dHM9NCAoMHgxNC8weDE2LzB4MTUvMHgxNy8weDApIHR5cGU6bGluZQpbICAgIDIuNDkzNjIw
XSBzbmRfaGRhX2NvZGVjX3JlYWx0ZWsgaGRhdWRpb0MxRDA6ICAgIHNwZWFrZXJfb3V0cz0w
ICgweDAvMHgwLzB4MC8weDAvMHgwKQpbICAgIDIuNDkzNjIzXSBzbmRfaGRhX2NvZGVjX3Jl
YWx0ZWsgaGRhdWRpb0MxRDA6ICAgIGhwX291dHM9MSAoMHgxYi8weDAvMHgwLzB4MC8weDAp
ClsgICAgMi40OTM2MjZdIHNuZF9oZGFfY29kZWNfcmVhbHRlayBoZGF1ZGlvQzFEMDogICAg
bW9ubzogbW9ub19vdXQ9MHgwClsgICAgMi40OTM2MjhdIHNuZF9oZGFfY29kZWNfcmVhbHRl
ayBoZGF1ZGlvQzFEMDogICAgZGlnLW91dD0weDFlLzB4MApbICAgIDIuNDkzNjI5XSBzbmRf
aGRhX2NvZGVjX3JlYWx0ZWsgaGRhdWRpb0MxRDA6ICAgIGlucHV0czoKWyAgICAyLjQ5MzYz
MV0gc25kX2hkYV9jb2RlY19yZWFsdGVrIGhkYXVkaW9DMUQwOiAgICAgIFJlYXIgTWljPTB4
MTgKWyAgICAyLjQ5MzYzM10gc25kX2hkYV9jb2RlY19yZWFsdGVrIGhkYXVkaW9DMUQwOiAg
ICAgIEZyb250IE1pYz0weDE5ClsgICAgMi40OTM2MzVdIHNuZF9oZGFfY29kZWNfcmVhbHRl
ayBoZGF1ZGlvQzFEMDogICAgICBMaW5lPTB4MWEKWyAgICAyLjQ5MzYzNl0gc25kX2hkYV9j
b2RlY19yZWFsdGVrIGhkYXVkaW9DMUQwOiAgICAgIENEPTB4MWMKWyAgICAyLjQ5NTAwM10g
b2hjaS1wY2kgMDAwMDowMDoxNi4wOiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXIKWyAgICAy
LjQ5NTAzNl0gb2hjaS1wY2kgMDAwMDowMDoxNi4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVk
LCBhc3NpZ25lZCBidXMgbnVtYmVyIDkKWyAgICAyLjQ5NTE3Ml0gb2hjaS1wY2kgMDAwMDow
MDoxNi4wOiBpcnEgMTgsIGlvIG1lbSAweGYwMWNiMDAwClsgICAgMi41MTA1NTZdIFtkcm1d
IHJhZGVvbiBrZXJuZWwgbW9kZXNldHRpbmcgZW5hYmxlZC4KWyAgICAyLjUxMzUxMF0gW2Ry
bV0gaW5pdGlhbGl6aW5nIGtlcm5lbCBtb2Rlc2V0dGluZyAoQVJVQkEgMHgxMDAyOjB4OTk5
NiAweDEwMDI6MHg5OTk2IDB4MDApLgpbICAgIDIuNTEzNTg4XSBBVE9NIEJJT1M6IDExMwpb
ICAgIDIuNTEzODkzXSByYWRlb24gMDAwMDowMDowMS4wOiBWUkFNOiA1MTJNIDB4MDAwMDAw
MDAwMDAwMDAwMCAtIDB4MDAwMDAwMDAxRkZGRkZGRiAoNTEyTSB1c2VkKQpbICAgIDIuNTEz
ODk4XSByYWRlb24gMDAwMDowMDowMS4wOiBHVFQ6IDEwMjRNIDB4MDAwMDAwMDAyMDAwMDAw
MCAtIDB4MDAwMDAwMDA1RkZGRkZGRgpbICAgIDIuNTEzOTExXSBbZHJtXSBEZXRlY3RlZCBW
UkFNIFJBTT01MTJNLCBCQVI9MjU2TQpbICAgIDIuNTEzOTEzXSBbZHJtXSBSQU0gd2lkdGgg
NjRiaXRzIEREUgpbICAgIDIuNTE0ODA1XSBbZHJtXSByYWRlb246IDUxMk0gb2YgVlJBTSBt
ZW1vcnkgcmVhZHkKWyAgICAyLjUxNDgxNV0gW2RybV0gcmFkZW9uOiAxMDI0TSBvZiBHVFQg
bWVtb3J5IHJlYWR5LgpbICAgIDIuNTE0ODg4XSBbZHJtXSBMb2FkaW5nIEFSVUJBIE1pY3Jv
Y29kZQpbICAgIDIuNTE3MTg1XSByODE2OSAwMDAwOjA0OjAwLjAgZW5wNHMwOiBMaW5rIGlz
IERvd24KWyAgICAyLjUxOTkyN10gaW5wdXQ6IEhELUF1ZGlvIEdlbmVyaWMgUmVhciBNaWMg
YXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE0LjIvc291bmQvY2FyZDEvaW5wdXQz
ClsgICAgMi41MjAzMDddIGlucHV0OiBIRC1BdWRpbyBHZW5lcmljIEZyb250IE1pYyBhcyAv
ZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MTQuMi9zb3VuZC9jYXJkMS9pbnB1dDQKWyAg
ICAyLjUyMDgyNF0gaW5wdXQ6IEhELUF1ZGlvIEdlbmVyaWMgTGluZSBhcyAvZGV2aWNlcy9w
Y2kwMDAwOjAwLzAwMDA6MDA6MTQuMi9zb3VuZC9jYXJkMS9pbnB1dDUKWyAgICAyLjUyMTE4
MF0gaW5wdXQ6IEhELUF1ZGlvIEdlbmVyaWMgTGluZSBPdXQgRnJvbnQgYXMgL2RldmljZXMv
cGNpMDAwMDowMC8wMDAwOjAwOjE0LjIvc291bmQvY2FyZDEvaW5wdXQ2ClsgICAgMi41MjE1
OTRdIGlucHV0OiBIRC1BdWRpbyBHZW5lcmljIExpbmUgT3V0IFN1cnJvdW5kIGFzIC9kZXZp
Y2VzL3BjaTAwMDA6MDAvMDAwMDowMDoxNC4yL3NvdW5kL2NhcmQxL2lucHV0NwpbICAgIDIu
NTIyMTYwXSBpbnB1dDogSEQtQXVkaW8gR2VuZXJpYyBMaW5lIE91dCBDTEZFIGFzIC9kZXZp
Y2VzL3BjaTAwMDA6MDAvMDAwMDowMDoxNC4yL3NvdW5kL2NhcmQxL2lucHV0OApbICAgIDIu
NTIyNTM2XSBpbnB1dDogSEQtQXVkaW8gR2VuZXJpYyBMaW5lIE91dCBTaWRlIGFzIC9kZXZp
Y2VzL3BjaTAwMDA6MDAvMDAwMDowMDoxNC4yL3NvdW5kL2NhcmQxL2lucHV0OQpbICAgIDIu
NTIyODg4XSBpbnB1dDogSEQtQXVkaW8gR2VuZXJpYyBGcm9udCBIZWFkcGhvbmUgYXMgL2Rl
dmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE0LjIvc291bmQvY2FyZDEvaW5wdXQxMApbICAg
IDIuNTMyOTkwXSBbZHJtXSBJbnRlcm5hbCB0aGVybWFsIGNvbnRyb2xsZXIgd2l0aG91dCBm
YW4gY29udHJvbApbICAgIDIuNTU4ODQyXSB1c2IgdXNiOTogTmV3IFVTQiBkZXZpY2UgZm91
bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxLCBiY2REZXZpY2U9IDYuMDMKWyAg
ICAyLjU1ODg0OV0gdXNiIHVzYjk6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQ
cm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAgMi41NTg4NTJdIHVzYiB1c2I5OiBQcm9k
dWN0OiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXIKWyAgICAyLjU1ODg1NV0gdXNiIHVzYjk6
IE1hbnVmYWN0dXJlcjogTGludXggNi4zLjAtcmMzLTAwMDQ2LWc4YmE2NDNkN2UxYzcgb2hj
aV9oY2QKWyAgICAyLjU1ODg1N10gdXNiIHVzYjk6IFNlcmlhbE51bWJlcjogMDAwMDowMDox
Ni4wClsgICAgMi41NTg5ODRdIFtkcm1dIHJhZGVvbjogZHBtIGluaXRpYWxpemVkClsgICAg
Mi41NjQ2NjddIFtkcm1dIEZvdW5kIFZDRSBmaXJtd2FyZS9mZWVkYmFjayB2ZXJzaW9uIDUw
LjAuMSAvIDE3IQpbICAgIDIuNTY0NzI4XSBbZHJtXSBHQVJUOiBudW0gY3B1IHBhZ2VzIDI2
MjE0NCwgbnVtIGdwdSBwYWdlcyAyNjIxNDQKWyAgICAyLjU5NDA1MV0gW2RybV0gUENJRSBH
QVJUIG9mIDEwMjRNIGVuYWJsZWQgKHRhYmxlIGF0IDB4MDAwMDAwMDAwMDFENjAwMCkuClsg
ICAgMi41OTQzMTRdIHJhZGVvbiAwMDAwOjAwOjAxLjA6IFdCIGVuYWJsZWQKWyAgICAyLjU5
NDMxOF0gcmFkZW9uIDAwMDA6MDA6MDEuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgMCB1c2Ug
Z3B1IGFkZHIgMHgwMDAwMDAwMDIwMDAwYzAwClsgICAgMi41OTQ2OTddIHJhZGVvbiAwMDAw
OjAwOjAxLjA6IGZlbmNlIGRyaXZlciBvbiByaW5nIDUgdXNlIGdwdSBhZGRyIDB4MDAwMDAw
MDAwMDA3NWExOApbICAgIDIuNTk1NzExXSBodWIgOS0wOjEuMDogVVNCIGh1YiBmb3VuZApb
ICAgIDIuNTk1NzUzXSBodWIgOS0wOjEuMDogNCBwb3J0cyBkZXRlY3RlZApbICAgIDIuNjE4
ODY2XSByYWRlb24gMDAwMDowMDowMS4wOiBmZW5jZSBkcml2ZXIgb24gcmluZyA2IHVzZSBn
cHUgYWRkciAweDAwMDAwMDAwMjAwMDBjMTgKWyAgICAyLjYxODg3NF0gcmFkZW9uIDAwMDA6
MDA6MDEuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgNyB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAw
MDIwMDAwYzFjClsgICAgMi42MTg4NzddIHJhZGVvbiAwMDAwOjAwOjAxLjA6IGZlbmNlIGRy
aXZlciBvbiByaW5nIDEgdXNlIGdwdSBhZGRyIDB4MDAwMDAwMDAyMDAwMGMwNApbICAgIDIu
NjE4ODc5XSByYWRlb24gMDAwMDowMDowMS4wOiBmZW5jZSBkcml2ZXIgb24gcmluZyAyIHVz
ZSBncHUgYWRkciAweDAwMDAwMDAwMjAwMDBjMDgKWyAgICAyLjYxODg4MV0gcmFkZW9uIDAw
MDA6MDA6MDEuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgMyB1c2UgZ3B1IGFkZHIgMHgwMDAw
MDAwMDIwMDAwYzBjClsgICAgMi42MTg4ODNdIHJhZGVvbiAwMDAwOjAwOjAxLjA6IGZlbmNl
IGRyaXZlciBvbiByaW5nIDQgdXNlIGdwdSBhZGRyIDB4MDAwMDAwMDAyMDAwMGMxMApbICAg
IDIuNjQ3ODIzXSByYWRlb24gMDAwMDowMDowMS4wOiByYWRlb246IE1TSSBsaW1pdGVkIHRv
IDMyLWJpdApbICAgIDIuNjQ4MDQ4XSByYWRlb24gMDAwMDowMDowMS4wOiByYWRlb246IHVz
aW5nIE1TSS4KWyAgICAyLjY0ODEyNV0gW2RybV0gcmFkZW9uOiBpcnEgaW5pdGlhbGl6ZWQu
ClsgICAgMi42Njc5NThdIFtkcm1dIHJpbmcgdGVzdCBvbiAwIHN1Y2NlZWRlZCBpbiAzIHVz
ZWNzClsgICAgMi42Njc5NzBdIFtkcm1dIHJpbmcgdGVzdCBvbiAzIHN1Y2NlZWRlZCBpbiA0
IHVzZWNzClsgICAgMi42Njc5NzddIFtkcm1dIHJpbmcgdGVzdCBvbiA0IHN1Y2NlZWRlZCBp
biA0IHVzZWNzClsgICAgMi43MTM3ODRdIFtkcm1dIHJpbmcgdGVzdCBvbiA1IHN1Y2NlZWRl
ZCBpbiAyIHVzZWNzClsgICAgMi43MzM2NjZdIFtkcm1dIFVWRCBpbml0aWFsaXplZCBzdWNj
ZXNzZnVsbHkuClsgICAgMi43NDEzNDFdIHVzYiA0LTE6IG5ldyBsb3ctc3BlZWQgVVNCIGRl
dmljZSBudW1iZXIgMiB1c2luZyBvaGNpLXBjaQpbICAgIDIuODQzMDQwXSBbZHJtXSByaW5n
IHRlc3Qgb24gNiBzdWNjZWVkZWQgaW4gMTggdXNlY3MKWyAgICAyLjg0MzA1Ml0gW2RybV0g
cmluZyB0ZXN0IG9uIDcgc3VjY2VlZGVkIGluIDMgdXNlY3MKWyAgICAyLjg0MzA1M10gW2Ry
bV0gVkNFIGluaXRpYWxpemVkIHN1Y2Nlc3NmdWxseS4KWyAgICAyLjg0MzIwMl0gc25kX2hk
YV9pbnRlbCAwMDAwOjAwOjAxLjE6IGJvdW5kIDAwMDA6MDA6MDEuMCAob3BzIHJhZGVvbl9h
dWRpb19jb21wb25lbnRfYmluZF9vcHMgW3JhZGVvbl0pClsgICAgMi44NDMzNzBdIFtkcm1d
IGliIHRlc3Qgb24gcmluZyAwIHN1Y2NlZWRlZCBpbiAwIHVzZWNzClsgICAgMi44NDM0MjRd
IFtkcm1dIGliIHRlc3Qgb24gcmluZyAzIHN1Y2NlZWRlZCBpbiAwIHVzZWNzClsgICAgMi44
NDM0NzRdIFtkcm1dIGliIHRlc3Qgb24gcmluZyA0IHN1Y2NlZWRlZCBpbiAwIHVzZWNzClsg
ICAgMi45NDI1OTVdIHVzYiA0LTE6IE5ldyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj00
MTNjLCBpZFByb2R1Y3Q9MjEwNiwgYmNkRGV2aWNlPSAxLjAxClsgICAgMi45NDI2MDddIHVz
YiA0LTE6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0xLCBQcm9kdWN0PTIsIFNlcmlh
bE51bWJlcj0wClsgICAgMi45NDI2MTFdIHVzYiA0LTE6IFByb2R1Y3Q6IERlbGwgUXVpZXRL
ZXkgS2V5Ym9hcmQKWyAgICAyLjk0MjYxNV0gdXNiIDQtMTogTWFudWZhY3R1cmVyOiBERUxM
ClsgICAgMi45NTEwMDZdIGlucHV0OiBERUxMIERlbGwgUXVpZXRLZXkgS2V5Ym9hcmQgYXMg
L2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjEyLjAvdXNiNC80LTEvNC0xOjEuMC8wMDAz
OjQxM0M6MjEwNi4wMDAxL2lucHV0L2lucHV0MTEKWyAgICAzLjAxMDc3NV0gaGlkLWdlbmVy
aWMgMDAwMzo0MTNDOjIxMDYuMDAwMTogaW5wdXQsaGlkcmF3MDogVVNCIEhJRCB2MS4xMCBL
ZXlib2FyZCBbREVMTCBEZWxsIFF1aWV0S2V5IEtleWJvYXJkXSBvbiB1c2ItMDAwMDowMDox
Mi4wLTEvaW5wdXQwClsgICAgMy4zODU0NTRdIFtkcm1dIGliIHRlc3Qgb24gcmluZyA1IHN1
Y2NlZWRlZApbICAgIDMuNDA1MzQ5XSB1c2IgNC0yOiBuZXcgbG93LXNwZWVkIFVTQiBkZXZp
Y2UgbnVtYmVyIDMgdXNpbmcgb2hjaS1wY2kKWyAgICAzLjYwMDU5N10gdXNiIDQtMjogTmV3
IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTA0NmQsIGlkUHJvZHVjdD1jMDE2LCBiY2RE
ZXZpY2U9IDMuNDAKWyAgICAzLjYwMDYwOV0gdXNiIDQtMjogTmV3IFVTQiBkZXZpY2Ugc3Ry
aW5nczogTWZyPTEsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTAKWyAgICAzLjYwMDYxM10g
dXNiIDQtMjogUHJvZHVjdDogT3B0aWNhbCBVU0IgTW91c2UKWyAgICAzLjYwMDYxN10gdXNi
IDQtMjogTWFudWZhY3R1cmVyOiBMb2dpdGVjaApbICAgIDMuNjEwOTYzXSBpbnB1dDogTG9n
aXRlY2ggT3B0aWNhbCBVU0IgTW91c2UgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAw
OjEyLjAvdXNiNC80LTIvNC0yOjEuMC8wMDAzOjA0NkQ6QzAxNi4wMDAyL2lucHV0L2lucHV0
MTIKWyAgICAzLjYxMTkxN10gaGlkLWdlbmVyaWMgMDAwMzowNDZEOkMwMTYuMDAwMjogaW5w
dXQsaGlkcmF3MTogVVNCIEhJRCB2MS4xMCBNb3VzZSBbTG9naXRlY2ggT3B0aWNhbCBVU0Ig
TW91c2VdIG9uIHVzYi0wMDAwOjAwOjEyLjAtMi9pbnB1dDAKWyAgICAzLjkyOTQ1NF0gW2Ry
bV0gaWIgdGVzdCBvbiByaW5nIDYgc3VjY2VlZGVkClsgICAgNC40NDE0NTVdIFtkcm1dIGli
IHRlc3Qgb24gcmluZyA3IHN1Y2NlZWRlZApbICAgIDQuNDQ2OTgzXSBbZHJtXSBSYWRlb24g
RGlzcGxheSBDb25uZWN0b3JzClsgICAgNC40NDY5ODldIFtkcm1dIENvbm5lY3RvciAwOgpb
ICAgIDQuNDQ2OTkyXSBbZHJtXSAgIERQLTEKWyAgICA0LjQ0Njk5NV0gW2RybV0gICBIUEQx
ClsgICAgNC40NDY5OTddIFtkcm1dICAgRERDOiAweDY1MzAgMHg2NTMwIDB4NjUzNCAweDY1
MzQgMHg2NTM4IDB4NjUzOCAweDY1M2MgMHg2NTNjClsgICAgNC40NDcwMDNdIFtkcm1dICAg
RW5jb2RlcnM6ClsgICAgNC40NDcwMDVdIFtkcm1dICAgICBERlAxOiBJTlRFUk5BTF9VTklQ
SFkyClsgICAgNC40NDcwMDddIFtkcm1dIENvbm5lY3RvciAxOgpbICAgIDQuNDQ3MDA5XSBb
ZHJtXSAgIFZHQS0xClsgICAgNC40NDcwMTFdIFtkcm1dICAgSFBEMgpbICAgIDQuNDQ3MDEz
XSBbZHJtXSAgIEREQzogMHg2NTQwIDB4NjU0MCAweDY1NDQgMHg2NTQ0IDB4NjU0OCAweDY1
NDggMHg2NTRjIDB4NjU0YwpbICAgIDQuNDQ3MDE4XSBbZHJtXSAgIEVuY29kZXJzOgpbICAg
IDQuNDQ3MDIwXSBbZHJtXSAgICAgQ1JUMTogSU5URVJOQUxfVU5JUEhZMgpbICAgIDQuNDQ3
MDIyXSBbZHJtXSAgICAgQ1JUMTogTlVUTUVHClsgICAgNC40NDcwMjRdIFtkcm1dIENvbm5l
Y3RvciAyOgpbICAgIDQuNDQ3MDI2XSBbZHJtXSAgIEhETUktQS0xClsgICAgNC40NDcwMjhd
IFtkcm1dICAgSFBEMwpbICAgIDQuNDQ3MDMwXSBbZHJtXSAgIEREQzogMHg2NTUwIDB4NjU1
MCAweDY1NTQgMHg2NTU0IDB4NjU1OCAweDY1NTggMHg2NTVjIDB4NjU1YwpbICAgIDQuNDQ3
MDM1XSBbZHJtXSAgIEVuY29kZXJzOgpbICAgIDQuNDQ3MDM2XSBbZHJtXSAgICAgREZQMjog
SU5URVJOQUxfVU5JUEhZClsgICAgNC43MTYxNzFdIFtkcm1dIGZiIG1hcHBhYmxlIGF0IDB4
RTAzRTkwMDAKWyAgICA0LjcxNjE3OV0gW2RybV0gdnJhbSBhcHBlciBhdCAweEUwMDAwMDAw
ClsgICAgNC43MTYxODFdIFtkcm1dIHNpemUgNTI0Mjg4MApbICAgIDQuNzE2MTgzXSBbZHJt
XSBmYiBkZXB0aCBpcyAyNApbICAgIDQuNzE2MTg1XSBbZHJtXSAgICBwaXRjaCBpcyA1MTIw
ClsgICAgNC43MTY2NzFdIGZiY29uOiByYWRlb25kcm1mYiAoZmIwKSBpcyBwcmltYXJ5IGRl
dmljZQpbICAgIDQuOTA4MTk3XSBDb25zb2xlOiBzd2l0Y2hpbmcgdG8gY29sb3VyIGZyYW1l
IGJ1ZmZlciBkZXZpY2UgMTYweDY0ClsgICAgNC45MDk5NzRdIHJhZGVvbiAwMDAwOjAwOjAx
LjA6IFtkcm1dIGZiMDogcmFkZW9uZHJtZmIgZnJhbWUgYnVmZmVyIGRldmljZQpbICAgIDQu
OTMzNjgyXSBbZHJtXSBJbml0aWFsaXplZCByYWRlb24gMi41MC4wIDIwMDgwNTI4IGZvciAw
MDAwOjAwOjAxLjAgb24gbWlub3IgMApbICAgIDUuMTI1MDQzXSByODE2OSAwMDAwOjA0OjAw
LjAgZW5wNHMwOiBMaW5rIGlzIFVwIC0gMUdicHMvRnVsbCAtIGZsb3cgY29udHJvbCByeC90
eApbICAgIDUuMTI1MDU5XSBJUHY2OiBBRERSQ09ORihORVRERVZfQ0hBTkdFKTogZW5wNHMw
OiBsaW5rIGJlY29tZXMgcmVhZHkKWyAgICA3LjY2OTA5M10gW2RybV0gYW1kZ3B1IGtlcm5l
bCBtb2Rlc2V0dGluZyBlbmFibGVkLgpbICAgMTEuMjE2NTQyXSBtZW1mZF9jcmVhdGUoKSB3
aXRob3V0IE1GRF9FWEVDIG5vciBNRkRfTk9FWEVDX1NFQUwsIHBpZD0yNzIgJ3N5c3RlbWQn
Cg==

--------------lVKi9UGmWM0EmezMyrPPBq8L--


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 19:45:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 19:45:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525593.816858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr27C-0004L6-3F; Mon, 24 Apr 2023 19:44:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525593.816858; Mon, 24 Apr 2023 19:44:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr27C-0004Kz-0U; Mon, 24 Apr 2023 19:44:46 +0000
Received: by outflank-mailman (input) for mailman id 525593;
 Mon, 24 Apr 2023 19:44:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zAuZ=AP=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pr27A-0004Kt-3j
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 19:44:44 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 74c87829-e2d8-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 21:44:42 +0200 (CEST)
Received: by mail-ed1-x535.google.com with SMTP id
 4fb4d7f45d1cf-508418b6d59so8904564a12.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Apr 2023 12:44:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74c87829-e2d8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682365481; x=1684957481;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=iqH4nzjLv7bSAZ/EuLYdgg//7mPmqDhtT9z9RXX9OSo=;
        b=bSHHFH2JMdilNt7rvuiUY0AqTmD6WAWbvslRQept8CXOJgOIN5nyVsRbQp2YL3KXCk
         2yC5zhg5SCdkS+DNEQTfhbG59DIeNMw6KCEefPF9vmPb8F0rh1CPDSIeBhyr/8a4jTPG
         h029j+uGOtp8aPB25dCwvh3msxBP3zu1MOlhoc6Ik9BS7ng2jLuZ1GmY+c9efwNNC7gM
         raQgP5NhPdnujeuVPr+YNe2iDpViutSaQ5raZVd48cdThYAH+t7Bo0npPmpHjnhhU7XM
         Y6/RngI0qTZd5g/HptSX4U9qLXgoRAhC9rTfBRj1WvstRcxGAdghZ/k4230mJ9+JNUVz
         tQMg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682365481; x=1684957481;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=iqH4nzjLv7bSAZ/EuLYdgg//7mPmqDhtT9z9RXX9OSo=;
        b=Nu6hdpwLjvfXGNMy3nAqvX71Fgq/BpZ9Ovnfzv+2KWfQEKQmgiI/f4naAVCBJ1bnZ+
         d060/gkc2rzPrJiN9CJWAdxryeIRTbLeuQlYaGtjL87kuxDgEB3zCoUfWYNVNQiYky/b
         npvAELl0/w9yoB8hvfDjTp6uwqpmQz033OQ6pPwgV2mvAobIfV2AckVIxYAxTm93Cmpf
         yCuJFzSxeZ9pxOcafEhLRRfmnh4j/u25EuT0gl0lTpprgAHkWdisaeixqVH77V7M9EuL
         KlwJfTKjXcoQIFqnTGeKB8a5+5jG2ZW69LahpZNkjxXKbX8RwYoxsgpdN5Zp36r9aguR
         7hbw==
X-Gm-Message-State: AAQBX9ewwrurc6hgmB5iPkmJmw35zo66SVIfEyLa7C0z+RpyeO2Cd4d0
	hQEE5Hr5gmULU36MsiZuU16id25eMV1lJT80B8k=
X-Google-Smtp-Source: AKy350ZkoQHmlw44xB4ini7JXYTQSi22u189iUollFTqDEL4T65VRULY6zIR8+kFNykj4/YFg9uRJ9cpdsbHwFV9zyk=
X-Received: by 2002:aa7:d858:0:b0:4fe:97a2:4b86 with SMTP id
 f24-20020aa7d858000000b004fe97a24b86mr12663621eds.8.1682365481348; Mon, 24
 Apr 2023 12:44:41 -0700 (PDT)
MIME-Version: 1.0
References: <20230419100633.13047-1-olaf@aepfle.de> <0fa11dd4-da6f-df95-bff7-dc4a80553b01@suse.com>
In-Reply-To: <0fa11dd4-da6f-df95-bff7-dc4a80553b01@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 24 Apr 2023 15:44:28 -0400
Message-ID: <CAKf6xptdnmKds35e8QH-d0c-9ktajYCsJSmGA7ZMKWkJ1cHK5Q@mail.gmail.com>
Subject: Re: [PATCH v1] tools/libs/guest: assist gcc13's realloc analyzer
To: Juergen Gross <jgross@suse.com>
Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, 
	Anthony PERARD <anthony.perard@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Apr 19, 2023 at 8:55 AM Juergen Gross <jgross@suse.com> wrote:
>
> On 19.04.23 12:06, Olaf Hering wrote:
> > gcc13 fails to track the allocated memory in backup_ptes:
> >
> > xg_offline_page.c: In function 'backup_ptes':
> > xg_offline_page.c:191:13: error: pointer 'orig' may be used after 'realloc' [-Werror=use-after-free]
> >    191 |             free(orig);
> >
> > Assist the analyzer by slightly rearranging the code:
> > In case realloc succeeds, the previous allocation is either extended
> > or released internally. In case realloc fails, the previous allocation
> > is left unchanged. Return an error in this case, the caller will
> > release the currently allocated memory in its error path.
> >
> > http://bugzilla.suse.com/show_bug.cgi?id=1210570
> >
> > Signed-off-by: Olaf Hering <olaf@aepfle.de>
>
> Reviewed-by: Juergen Gross <jgross@suse.com>

Compile-tested-by: Jason Andryuk <jandryuk@gmail.com>

Needed to build on Fedora 38.

Thanks,
Jason
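[Editorial note: the patch itself is not quoted in this thread. The rearrangement it describes is the standard safe-realloc pattern; the sketch below (hypothetical names, not the actual xg_offline_page.c code) shows the shape that keeps gcc13's analyzer happy: assign realloc's result to a temporary, and only overwrite the original pointer on success, so the old pointer is never touched after a successful realloc.]

```c
#include <stdlib.h>

/* Hypothetical helper illustrating the pattern from the patch
 * description: on realloc failure the original allocation is left
 * unchanged and an error is returned, so the caller's error path can
 * still free it; on success the old pointer is replaced and never
 * used again. */
static int grow_buffer(int **buf, size_t *nmemb)
{
    size_t new_nmemb = *nmemb ? *nmemb * 2 : 16;
    int *tmp = realloc(*buf, new_nmemb * sizeof(int));

    if (tmp == NULL)
        return -1;      /* *buf is untouched; caller frees it */

    *buf = tmp;         /* old pointer must not be dereferenced now */
    *nmemb = new_nmemb;
    return 0;
}
```

Writing it this way means no code path reads the pre-realloc pointer after the call, which is exactly what the use-after-free analyzer wants to see.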


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 20:03:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 20:03:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525598.816868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr2P6-0006rb-JG; Mon, 24 Apr 2023 20:03:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525598.816868; Mon, 24 Apr 2023 20:03:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr2P6-0006rU-Gc; Mon, 24 Apr 2023 20:03:16 +0000
Received: by outflank-mailman (input) for mailman id 525598;
 Mon, 24 Apr 2023 20:03:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pr2P4-0006rK-VM; Mon, 24 Apr 2023 20:03:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pr2P4-0005Pe-Px; Mon, 24 Apr 2023 20:03:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pr2P4-0003Cx-By; Mon, 24 Apr 2023 20:03:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pr2P4-0006EJ-AX; Mon, 24 Apr 2023 20:03:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Oei3Qi4+Fmhgg1vyIpmC8JY+KRpdoOeNF5X9aJCg94E=; b=VvxP/D+nSmXIsJOVm4kgoAdCMZ
	VAX4wcxOHGEhWyqELkQ47mhLi7UteXufqhwWwDDILwbMfjCsFRFdNMeqyvGXFZZo5+h1b2pzqiwB+
	t0r6WVk/NQVmFq9LuPXASutWmWijn7I816Gcn70aLH9gyt//w3mWEbgbu2bfJBmcdsV4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180394-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180394: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:guest-migrate/dst_host/src_host/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-shadow:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-coresched-amd64-xl:debian-fixup:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=327ec8d6c2a2223b78d311153a471036e474c5c5
X-Osstest-Versions-That:
    qemuu=6dd06214892d71cbbdd25daed7693e58afcb1093
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 Apr 2023 20:03:14 +0000

flight 180394 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180394/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start   fail in 180389 pass in 180394
 test-amd64-i386-pair 28 guest-migrate/dst_host/src_host/debian.repeat fail in 180389 pass in 180394
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 180389 pass in 180394
 test-amd64-i386-xl-shadow     7 xen-install                fail pass in 180389
 test-amd64-coresched-amd64-xl 13 debian-fixup              fail pass in 180389

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180382
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180382
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180382
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180382
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180382
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180382
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180382
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180382
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                327ec8d6c2a2223b78d311153a471036e474c5c5
baseline version:
 qemuu                6dd06214892d71cbbdd25daed7693e58afcb1093

Last test of basis   180382  2023-04-23 03:16:49 Z    1 days
Testing same since   180386  2023-04-23 16:07:44 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   6dd0621489..327ec8d6c2  327ec8d6c2a2223b78d311153a471036e474c5c5 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 20:40:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 20:40:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525608.816877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr2ye-0001v2-9n; Mon, 24 Apr 2023 20:40:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525608.816877; Mon, 24 Apr 2023 20:40:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr2ye-0001uv-76; Mon, 24 Apr 2023 20:40:00 +0000
Received: by outflank-mailman (input) for mailman id 525608;
 Mon, 24 Apr 2023 20:39:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g3T1=AP=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pr2yc-0001up-D1
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 20:39:58 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2b532cee-e2e0-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 22:39:55 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id EDED261B22;
 Mon, 24 Apr 2023 20:39:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 55934C433D2;
 Mon, 24 Apr 2023 20:39:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b532cee-e2e0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682368793;
	bh=qLIRSD0Y/Jg+pFreHba21RLW+7JMAMNvvTDYgDgsZwU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=YFZ4y/Brf51EZVDAExxH/35yc3CbkeQ4kh0G3/mkU4zY3FL4w+wgoi/JKVtxtvQbd
	 TqIMOx+W7/POg8YrylISeijGE/cWL4Tm/8BJhpEcutbDcChkFLiYKcfchfpIPsEgJo
	 EA2rvTKOmfPnFapC3eVu20etwlD+H+pDVcveN7XGIXGZaM6R+H1UoUXVLuTpsvKwgN
	 0pCUhxP67lueO+4WZSOg5PuRWnfYYcIRi0tJzsiQCn9xsjkCkRcnbnu/I0rJM86Ihq
	 xUtqyt5PjVEIBpI9++K08CTWF47kbPhGWIN/VCAmTr9Y/EdQrulHZs3c3jZc9AzUHF
	 1u1f3Fj8/FVSA==
Date: Mon, 24 Apr 2023 13:39:50 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleg Nikitenko <oleshiiwood@gmail.com>
cc: Michal Orzel <michal.orzel@amd.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Carlo Nonato <carlo.nonato@minervasys.tech>
Subject: Re: xen cache colors in ARM
In-Reply-To: <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
Message-ID: <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com> <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com> <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop> <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com> <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com> <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com> <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com> <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com> <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com> <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com>
 <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com> <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com> <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-150838559-1682368793=:3419"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-150838559-1682368793=:3419
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

Hi Oleg, 

Here is the issue from your logs:

SError Interrupt on CPU0, code 0xbe000000 -- SError

SErrors are special signals to notify software of serious hardware
errors.  Something is going very wrong. Defective hardware is a
possibility.  Another possibility is software accessing address ranges
that it is not supposed to; that can sometimes cause SErrors.
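If it helps narrow things down, the code in that line can be split into the
standard ESR_ELx fields. This is only a sketch, assuming the printed code is
the raw ESR value; the field layout is the generic one from the Arm
architecture manual:

```shell
# Split the SError code from the panic log into ESR_ELx fields.
esr=0xbe000000
ec=$(( (esr >> 26) & 0x3f ))   # exception class: 0x2f means SError
il=$(( (esr >> 25) & 0x1 ))    # instruction length bit
iss=$(( esr & 0x1ffffff ))     # syndrome: 0 means "uncategorized"
printf 'EC=0x%x IL=%d ISS=0x%x\n' "$ec" "$il" "$iss"
# prints: EC=0x2f IL=1 ISS=0x0
```

An ISS of 0 (uncategorized) unfortunately carries no detail about the
underlying fault, which fits the "hardware or stray access" diagnosis above.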

Cheers,

Stefano



On Mon, 24 Apr 2023, Oleg Nikitenko wrote:

> Hello,
> 
> Thanks guys.
> I found out where the problem was.
> Now dom0 boots further, but I have a new problem.
> It is a kernel panic during Dom0 loading.
> Maybe someone is able to suggest something ?
> 
> Regards,
> O.
> 
> [    3.771362] sfp_register_bus: upstream ops attach
> [    3.776119] sfp_register_bus: Bus registered
> [    3.780459] sfp_register_socket: register sfp_bus succeeded
> [    3.789399] of_cfs_init
> [    3.789499] of_cfs_init: OK
> [    3.791685] clk: Not disabling unused clocks
> [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> [   11.010393] Workqueue: events_unbound async_run_entry_fn
> [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [   11.010422] pc : simple_write_end+0xd0/0x130
> [   11.010431] lr : generic_perform_write+0x118/0x1e0
> [   11.010438] sp : ffffffc00809b910
> [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
> [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
> [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
> [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
> [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
> [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
> [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
> [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
> [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
> [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
> [   11.010548] Workqueue: events_unbound async_run_entry_fn
> [   11.010556] Call trace:
> [   11.010558]  dump_backtrace+0x0/0x1c4
> [   11.010567]  show_stack+0x18/0x2c
> [   11.010574]  dump_stack_lvl+0x7c/0xa0
> [   11.010583]  dump_stack+0x18/0x34
> [   11.010588]  panic+0x14c/0x2f8
> [   11.010597]  print_tainted+0x0/0xb0
> [   11.010606]  arm64_serror_panic+0x6c/0x7c
> [   11.010614]  do_serror+0x28/0x60
> [   11.010621]  el1h_64_error_handler+0x30/0x50
> [   11.010628]  el1h_64_error+0x78/0x7c
> [   11.010633]  simple_write_end+0xd0/0x130
> [   11.010639]  generic_perform_write+0x118/0x1e0
> [   11.010644]  __generic_file_write_iter+0x138/0x1c4
> [   11.010650]  generic_file_write_iter+0x78/0xd0
> [   11.010656]  __kernel_write+0xfc/0x2ac
> [   11.010665]  kernel_write+0x88/0x160
> [   11.010673]  xwrite+0x44/0x94
> [   11.010680]  do_copy+0xa8/0x104
> [   11.010686]  write_buffer+0x38/0x58
> [   11.010692]  flush_buffer+0x4c/0xbc
> [   11.010698]  __gunzip+0x280/0x310
> [   11.010704]  gunzip+0x1c/0x28
> [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> [   11.010715]  do_populate_rootfs+0x80/0x164
> [   11.010722]  async_run_entry_fn+0x48/0x164
> [   11.010728]  process_one_work+0x1e4/0x3a0
> [   11.010736]  worker_thread+0x7c/0x4c0
> [   11.010743]  kthread+0x120/0x130
> [   11.010750]  ret_from_fork+0x10/0x20
> [   11.010757] SMP: stopping secondary CPUs
> [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
> [   11.010788] PHYS_OFFSET: 0x0
> [   11.010790] CPU features: 0x00000401,00000842
> [   11.010795] Memory Limit: none
> [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
> 
> On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
>       Hi Oleg,
> 
>       On 21/04/2023 14:49, Oleg Nikitenko wrote:
>       >       
>       >
>       >
>       > Hello Michal,
>       >
>       > I was not able to enable earlyprintk in the xen for now.
>       > I decided to choose another way.
>       > This is a xen's command line that I found out completely.
>       >
>       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>       timer_slop=0
>       Yes, adding a printk() in Xen was also a good idea.
> 
>       >
>       > So you are absolutely right about a command line.
>       > Now I am going to find out why xen did not have the correct parameters from the device tree.
>       Maybe you will find this document helpful:
>       https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> 
>       ~Michal
> 
>       >
>       > Regards,
>       > Oleg
>       >
>       > On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
>       >
>       >
>       >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
>       >     >       
>       >     >
>       >     >
>       >     > Hello Michal,
>       >     >
>       >     > Yes, I use yocto.
>       >     >
>       >     > Yesterday all day long I tried to follow your suggestions.
>       >     > I faced a problem.
>       >     > Manually in the xen config build file I pasted the strings:
>       >     In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>       >     You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
>       >
>       >     >
>       >     > CONFIG_EARLY_PRINTK
>       >     > CONFIG_EARLY_PRINTK_ZYNQMP
>       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
>       >     I hope you added =y to them.
>       >
>       >     Anyway, you have at least the following solutions:
>       >     1) Run bitbake xen -c menuconfig to properly set early printk
>       >     2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y that is not enabled by default)
>       >     3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
>       >
>       >     ~Michal
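For what it's worth, option 3) above is a one-line append. A sketch (the
defconfig path is the one quoted in the mail; the mkdir is only there so the
sketch runs outside a real Xen tree):

```shell
# Append the early-printk option to the arm64 defconfig.
mkdir -p xen/arch/arm/configs
echo 'CONFIG_EARLY_PRINTK_ZYNQMP=y' >> xen/arch/arm/configs/arm64_defconfig
grep 'EARLY_PRINTK' xen/arch/arm/configs/arm64_defconfig
# prints the appended line
```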
>       >
>       >     >
>       >     > The host hangs during the build.
>       >     > Maybe I did not set something in the config build file ?
>       >     >
>       >     > Regards,
>       >     > Oleg
>       >     >
>       >     > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>       >     >
>       >     >     Thanks Michal,
>       >     >
>       >     >     You gave me an idea.
>       >     >     I am going to try it today.
>       >     >
>       >     >     Regards,
>       >     >     O.
>       >     >
>       >     >     On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>       >     >
>       >     >         Thanks Stefano.
>       >     >
>       >     >         I am going to do it today.
>       >     >
>       >     >         Regards,
>       >     >         O.
>       >     >
>       >     >         On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >     >
>       >     >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>       >     >             > Hi Michal,
>       >     >             >
>       >     >             > I corrected xen's command line.
>       >     >             > Now it is
>       >     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin
>       bootscrub=0 vwfi=native sched=null
>       >     >             > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>       >     >
>       >     >             4 colors is way too many for xen, just do xen_colors=0-0. There is no
>       >     >             advantage in using more than 1 color for Xen.
>       >     >
>       >     >             4 colors is too few for dom0, if you are giving 1600M of memory to Dom0.
>       >     >             Each color is 256M. For 1600M you should give at least 7 colors. Try:
>       >     >
>       >     >             xen_colors=0-0 dom0_colors=1-8
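The arithmetic behind that suggestion (assuming, as stated above, one color
covers 256M of memory) can be sketched as:

```shell
# How many colors does a domain with a given memory size need,
# if each color maps 256M of RAM? Ceiling division.
dom0_mem=1600   # from dom0_mem=1600M in the command line
per_color=256
colors=$(( (dom0_mem + per_color - 1) / per_color ))
echo "dom0 needs at least $colors colors"
# prints: dom0 needs at least 7 colors
```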
>       >     >
>       >     >
>       >     >
>       >     >             > Unfortunately the result was the same.
>       >     >             >
>       >     >             > (XEN)  - Dom0 mode: Relaxed
>       >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>       >     >             > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>       >     >             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>       >     >             > (XEN) Coloring general information
>       >     >             > (XEN) Way size: 64kB
>       >     >             > (XEN) Max. number of colors available: 16
>       >     >             > (XEN) Xen color(s): [ 0 ]
>       >     >             > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>       >     >             > (XEN) Color array allocation failed for dom0
>       >     >             > (XEN)
>       >     >             > (XEN) ****************************************
>       >     >             > (XEN) Panic on CPU 0:
>       >     >             > (XEN) Error creating domain 0
>       >     >             > (XEN) ****************************************
>       >     >             > (XEN)
>       >     >             > (XEN) Reboot in five seconds...
>       >     >             >
>       >     >             > I am going to find out how command line arguments passed and parsed.
>       >     >             >
>       >     >             > Regards,
>       >     >             > Oleg
>       >     >             >
>       >     >             > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>       >     >             >       Hi Michal,
>       >     >             >
>       >     >             > You put my nose into the problem. Thank you.
>       >     >             > I am going to use your point.
>       >     >             > Let's see what happens.
>       >     >             >
>       >     >             > Regards,
>       >     >             > Oleg
>       >     >             >
>       >     >             >
>       >     >             > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
>       >     >             >       Hi Oleg,
>       >     >             >
>       >     >             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>       >     >             >       >       
>       >     >             >       >
>       >     >             >       >
>       >     >             >       > Hello Stefano,
>       >     >             >       >
>       >     >             >       > Thanks for the clarification.
>       >     >             >       > My company uses yocto for image generation.
>       >     >             >       > What kind of information do you need to consult me in this case ?
>       >     >             >       >
>       >     >             >       > Maybe the module sizes/addresses which were mentioned by @Julien Grall <julien@xen.org>?
>       >     >             >
>       >     >             >       Sorry for jumping into the discussion, but FWICS the Xen command line you provided does not seem to be
>       >     >             >       the one Xen booted with. The error you are observing is most likely due to the dom0 colors configuration
>       >     >             >       not being specified (i.e. a missing dom0_colors=<> parameter). Although this parameter is set in the
>       >     >             >       command line you provided, I strongly doubt that it is the actual command line in use.
>       >     >             >
>       >     >             >       You wrote:
>       >     >             >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin
>       bootscrub=0 vwfi=native
>       >     >             >       sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>       >     >             >
>       >     >             >       but:
>       >     >             >       1) way_szize has a typo
>       >     >             >       2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
>       >     >             >       (XEN) Xen color(s): [ 0 ]
>       >     >             >
>       >     >             >       This makes me believe that no colors configuration actually ended up in the command line that Xen
>       >     >             >       booted with.
>       >     >             >       A single color for Xen is the default if not specified, and the way size was probably calculated by
>       >     >             >       querying the HW.
>       >     >             >
>       >     >             >       So I would suggest first cross-checking the command line in use.
>       >     >             >
>       >     >             >       ~Michal
>       >     >             >
>       >     >             >
>       >     >             >       >
>       >     >             >       > Regards,
>       >     >             >       > Oleg
>       >     >             >       >
>       >     >             >       > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >     >             >       >
>       >     >             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>       >     >             >       >     > Hi Julien,
>       >     >             >       >     >
>       >     >             >       >     > >> This feature has not been merged in Xen upstream yet
>       >     >             >       >     >
>       >     >             >       >     > > would assume that upstream + the series on the ML [1] work
>       >     >             >       >     >
>       >     >             >       >     > Please clarify this point.
>       >     >             >       >     > Because the two thoughts are controversial.
>       >     >             >       >
>       >     >             >       >     Hi Oleg,
>       >     >             >       >
>       >     >             >       >     As Julien wrote, there is nothing controversial. As you are aware,
>       >     >             >       >     Xilinx maintains a separate Xen tree specific for Xilinx here:
>       >     >             >       >     https://github.com/xilinx/xen
>       >     >             >       >
>       >     >             >       >     and the branch you are using (xlnx_rebase_4.16) comes from there.
>       >     >             >       >
>       >     >             >       >
>       >     >             >       >     Instead, the upstream Xen tree lives here:
>       >     >             >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>       >     >             >       >
>       >     >             >       >
>       >     >             >       >     The Cache Coloring feature that you are trying to configure is present
>       >     >             >       >     in xlnx_rebase_4.16, but not yet present upstream (there is an
>       >     >             >       >     outstanding patch series to add cache coloring to Xen upstream but it
>       >     >             >       >     hasn't been merged yet.)
>       >     >             >       >
>       >     >             >       >
>       >     >             >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
>       >     >             >       >     you as you already have Cache Coloring as a feature there.
>       >     >             >       >
>       >     >             >       >
>       >     >             >       >     I take it you are using ImageBuilder to generate the boot configuration? If
>       >     >             >       >     so, please post the ImageBuilder config file that you are using.
>       >     >             >       >
>       >     >             >       >     But from the boot message, it looks like the colors configuration for
>       >     >             >       >     Dom0 is incorrect.
>       >     >             >       >
>       >     >             >
>       >     >             >
>       >     >             >
>       >     >
>       >
> 
> 
> 
--8323329-150838559-1682368793=:3419--


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 20:57:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 20:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525619.816918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3Fa-00058u-Hn; Mon, 24 Apr 2023 20:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525619.816918; Mon, 24 Apr 2023 20:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3Fa-00058i-Dj; Mon, 24 Apr 2023 20:57:30 +0000
Received: by outflank-mailman (input) for mailman id 525619;
 Mon, 24 Apr 2023 20:57:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HiTc=AP=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pr3FZ-0004NP-EH
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 20:57:29 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9e741b8c-e2e2-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 22:57:27 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id F00563200957;
 Mon, 24 Apr 2023 16:57:25 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Mon, 24 Apr 2023 16:57:26 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 24 Apr 2023 16:57:24 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e741b8c-e2e2-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682369845; x=1682456245; bh=WH
	t+xbfkCam7fQl05pWoTXNVf+J69fv1QL2w/kPcch0=; b=CJnDcwxcB5NhxjwL7n
	7nvz8NLMXsb5jIZlVwYbX6QvTMFgcUEd6VwQ8KHmdVwztRNsgf4AwV8KKTsw4jMw
	dBlBidSiZeKxE3oRLLblhrhqUS6PgDwadMgvEgw4VBgl+oN8Slf1We/VI1/+LZDp
	EwbJXGYwiOL+JDOL5r+DkxUPnx+Qm7plC+EUNOqhVxMvrJGUPK9q8T1i3abhbwOs
	UawuEqLtberLhSXsshLiuaapP+mgXw6auKQ6ng3X0Zp0FPhpsvaabws0oPVd1ekU
	X0RwvOJoB5dxC/7PrZup9iLQB1nXUrPdEuIdA8J4RJSEKbOf5/3hGG/ALq2lFZnO
	yspA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682369845; x=1682456245; bh=WHt+xbfkCam7fQl05pWoTXNVf+J69fv1QL2
	w/kPcch0=; b=HSLT1uWZlSWfCnXI7RpFRbi85eJNnVVDyRLTxBBlKUntVOl2Wsd
	nvKTyHM2xzvGKAuDqUj2yU13fHm3O+Yf7DhKl6adexGeuoRSdTip/LoW48R5MVQa
	s7R+NIcOf3bsjMALvQqey/HWMrJh4xTv9C4682YQZ4pBiGzkidJHIP5KeKLX9WAk
	kJVdz4q9mOzhWmPRu3jjvU5EtksB/MDBJFc/9Vos4g6/JxaLp0M29cVESrDG23UT
	3+1Mk7/bNLL7NRoXTnIJghf4xeDht6a/wyCZKPL6IzckRqjC1oUKaIvHHDVufnYq
	WaLJ83AovfC3u3j8UFxGhMapOIGFnYKxY9Q==
X-ME-Sender: <xms:Ne1GZCIL9S3-htgPJtCT7EFoc6g7mobjttGmQd7FpiYW-dg2WiwOsQ>
    <xme:Ne1GZKJlpMCxNFNn6xkEtxI_H5d2P0Rl2OkOePQtzvFe67B1TzFhKSyurBOZaU9Qu
    Df23coG9pnriA>
X-ME-Received: <xmr:Ne1GZCsqB96008humR4GCKEqx2QvqbYnUNK9OPGuytB87FTTFsndEuVD-8JqwrEDJ2GmmfBwDTc_cfkWOa7EaiO4hyKRGmte5TD5hfq_hVZAIb3uluJS>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedutddgudehhecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepofgr
    rhgvkhcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghkse
    hinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhep
    gfeuudehgfdvfeehhedujeehfeduveeugefhkefhheelgeevudetueeiudfggfffnecuve
    hluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghr
    vghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:Ne1GZHYCd2cy5_LsvaBZdaURaIBqnOqZaAk4nuAYVElWWeTv39-29g>
    <xmx:Ne1GZJbU7c7ySAznl0xBybgXtNnXC0rvjQ6xKj6-axnV1SMqh3O7Sg>
    <xmx:Ne1GZDDkcFdas6jCRnBR6UWi63rWBR429vCuRbrMLgHEg8pczsOwyw>
    <xmx:Ne1GZIymQwGkSdwyF1J0Pn3Lx1eZg1kBRyzKeI_qySplIqVP7f1Zvg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH 4/6] automation: wait for the login prompt as test end marker
Date: Mon, 24 Apr 2023 22:56:59 +0200
Message-Id: <8bdae473db12295b536680283820eb18a7dbd911.1682369736.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The login prompt is printed after all the startup (test) scripts, so
wait for that instead of the "passed" marker, and only then check
whether the test passed. Before this patch there was a race: the
"passed" marker could already be printed, but the final check would
fail because the login prompt wasn't there yet.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/scripts/qubes-x86-64.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index 916dbaae59c3..c0bc71764f73 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -159,7 +159,7 @@ if [ -n "$wait_and_wakeup" ]; then
     ssh $CONTROLLER wake
 fi
 
-until grep "$passed" smoke.serial || [ $timeout -le 0 ]; do
+until grep "^Welcome to Alpine Linux" smoke.serial || [ $timeout -le 0 ]; do
     sleep 1;
     : $((--timeout))
 done
-- 
git-series 0.9.1
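The pattern the patch adopts can be tried standalone; a self-contained sketch
(the file name and markers here are illustrative, not taken from the CI
scripts):

```shell
# Simulate a serial log where the "passed" marker appears before boot
# finishes, then wait for the login prompt before judging the result.
printf 'PASSED\nWelcome to Alpine Linux 3.12\n' > smoke.serial
passed='PASSED'
timeout=10
until grep -q '^Welcome to Alpine Linux' smoke.serial || [ "$timeout" -le 0 ]; do
    sleep 1
    : $((--timeout))
done
grep -q "$passed" smoke.serial && echo 'test passed'
# prints: test passed
```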


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 20:57:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 20:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525618.816907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3FX-0004rM-8B; Mon, 24 Apr 2023 20:57:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525618.816907; Mon, 24 Apr 2023 20:57:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3FX-0004rF-5d; Mon, 24 Apr 2023 20:57:27 +0000
Received: by outflank-mailman (input) for mailman id 525618;
 Mon, 24 Apr 2023 20:57:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HiTc=AP=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pr3FV-0004NP-Iu
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 20:57:25 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9be83945-e2e2-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 22:57:23 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id C418F320096E;
 Mon, 24 Apr 2023 16:57:21 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Mon, 24 Apr 2023 16:57:22 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 24 Apr 2023 16:57:20 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9be83945-e2e2-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682369841; x=1682456241; bh=eE
	TiK0cMY/4VSx0OpwWx69mmU9UYYQGptj+aVps8CVM=; b=bYgbgTjZ7AhwuOzHDM
	HwSTdqfK7GqWCuDuVowuZ9+S9Tghz2UazFv7PIR+leWluDkg/k9cRBBjy1JY/IQo
	pakHHWOfVjWBo3r7EgdD4UcT8Dkx37+E49OVVTNs5OjMgZ/SgwMNbWqu+IHpBh8U
	EcctHC7Ilxt6677A4hgT/mWEHQ9oaMcnqYCSl07RVFkVRSu5D/JKVenhWJbiuhzG
	bp2N8AR0OtbIFr5XJq/brt5F2sPEstxBlBfXYgXF5hO33dG/VGzDdhm5hbbEL9q2
	wuN+5jQZ6iLmPGqabc99awIeqRFT+FBIsCBXrj98nECMzIjPw2vejjezKLO5ircg
	lvRQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682369841; x=1682456241; bh=eETiK0cMY/4VSx0OpwWx69mmU9UYYQGptj+
	aVps8CVM=; b=GBkMNyiG4ksrP+9sQjh/56LH5CsD94WyciK58EChSPZeqtmkZma
	pGjaNKzFAi5/u4Y6uztBFdlnOc75KvfiBzWQrFUM5vB8QYhrfECZt3zS0hf7QKiY
	swf2dw+EPDX7i/u7SFeZh0KEefsZwzOqTQrt1ZuSXiym4nx94SUibPOxsS1o7Xtn
	GS98ctk4kMO0QvLAnFwZ+ut+9ScG/r772YphvIkmq3E3flpE2JT3u6v4USaaOk9Y
	VPbWCbPX+sTRUtxIBMGA3PNes2efO69JHgVy+I2mC6A4Kk9mZIV8YfaReXuTU1pE
	Ub2gi3UnM/L+wb9oUMSSpdLVjOMNYtUR23g==
X-ME-Sender: <xms:Me1GZJvEGqbFybrp3Ks14BWS4CPcgIi3AJL-4YbTRY3ZMlEP_uR5SA>
    <xme:Me1GZCcU47bmAfZBmAJet7be1EYNPLI3yZXGxHFF6RvGe3IjZ1inx4PndqYSW-Ds0
    Q0JH0ggz8lJfg>
X-ME-Received: <xmr:Me1GZMz2HkCsIzUlAnqzp48MXXjLRcL2kykjY4F5jWSxl5Ec9BvSJ_3e9WVRrLoJXpX9CUfUKGPS-2XRd1qYPMGr2Xzil_ZbDljOZKtzWTYsDsi8WGCL>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedutddgudehhecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepofgr
    rhgvkhcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghkse
    hinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhep
    gfeuudehgfdvfeehhedujeehfeduveeugefhkefhheelgeevudetueeiudfggfffnecuve
    hluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghr
    vghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:Me1GZAPJ3c-xELIkDSYjg75_1D-cYoibSJg7gTyqUOGfC0x8sQ7C3w>
    <xmx:Me1GZJ8MD-mNxwVUMOsJyIfX5Fe_Ebz5JV_ynqR02NbXajlb0MbX9A>
    <xmx:Me1GZAXWOcaqLPonG9TuaNuVfxFzfGu8hl4kaCcP8owN_ej5Y73qpw>
    <xmx:Me1GZLnRNZHPuSt40upnj0bIljMeRNMYrLeR4y4dQi-qe9cl2XFPjA>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH 2/6] automation: add runtime qemu dependencies to test container
Date: Mon, 24 Apr 2023 22:56:57 +0200
Message-Id: <a2f2c836e7f444d733f8ce4c1c23fc6be1dc7726.1682369736.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This is necessary to start HVM guests in subsequent tests.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/tests-artifacts/alpine/3.12.dockerfile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/automation/tests-artifacts/alpine/3.12.dockerfile b/automation/tests-artifacts/alpine/3.12.dockerfile
index b3909996b47b..073f16a0d70a 100644
--- a/automation/tests-artifacts/alpine/3.12.dockerfile
+++ b/automation/tests-artifacts/alpine/3.12.dockerfile
@@ -13,6 +13,7 @@ RUN \
   \
   # xen runtime deps
   apk add musl && \
+  apk add libgcc && \
   apk add openrc && \
   apk add busybox && \
   apk add sudo && \
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 20:57:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 20:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525620.816927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3Fb-0005Pc-Ss; Mon, 24 Apr 2023 20:57:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525620.816927; Mon, 24 Apr 2023 20:57:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3Fb-0005PR-Nk; Mon, 24 Apr 2023 20:57:31 +0000
Received: by outflank-mailman (input) for mailman id 525620;
 Mon, 24 Apr 2023 20:57:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HiTc=AP=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pr3Fa-00058D-99
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 20:57:30 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9d2a711b-e2e2-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 22:57:27 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.west.internal (Postfix) with ESMTP id D94FF3200961;
 Mon, 24 Apr 2023 16:57:23 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Mon, 24 Apr 2023 16:57:24 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 24 Apr 2023 16:57:22 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d2a711b-e2e2-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682369843; x=1682456243; bh=3v
	PSyWu0EY5YaauMa+bUaYbaLxnNRQBKTv6Gsjap6kI=; b=BLeF3GwjHE+CvCeiRi
	eFA308uh/jcX075it3+jUIrP6osEBdnHmoK9zFpIPklPWFrVwFNRk0Mh7UiufNlW
	jsSFZp5F3+YBliyjyS8FnhOzEVKzGsBavDsya7u5GEh7bp3dw91dUL38thqoOaOG
	ccQSmis5B0UML0Afn07mC4rOQI8VoQ144CiALOfjOUsOOk+CYaTm5xxT2wVwygFt
	nUfBKa+6vmfa8rLCvGyF5qWXT5dqpeVt9MPG4Rr938ULAbK04tg/OHNag6qzDBJc
	kYhrXk0CKx9G83TKWQzyiqZfQUS+tIUiFles7yEchc0RzBPit29ot+urCzF0ztKS
	p2Ww==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682369843; x=1682456243; bh=3vPSyWu0EY5YaauMa+bUaYbaLxnNRQBKTv6
	Gsjap6kI=; b=AWvo9igIpldZuBJrpP6V5m3r80bMqAdR3BvS2g6Hts+EWfNLZCB
	8bkdYJytCkLjfqjFJ6LdzUJEC/v78OlOek3zvNNDfK7JvAy2PR0jT4ipQX4oeOSZ
	kaqLPPiRNz0NNHPGIUOcM7x4wWgBz99rte1vsb5yJAzYw2z23PHS2rHvSt+6XeVa
	E3wQloBv+hR9oWdNGAWS6arZj9jSVdGKl8Z0CDHG6tS4ZfO9OlKeDJ0dfUreNnKC
	VUrdyBzTT8rp5/rekHcII0cNjrx+h2ic0AIDUUPHEQe42dxK92lendLk8tQb5sqr
	3klhsRw1p91vQS1+eLv9SfnT1qZ3rCmvcTw==
X-ME-Sender: <xms:M-1GZMz0c63FvHxCD7Xnf7XXMfKTzu1QgDQIhWmMcPgJV4fb6Q5Juw>
    <xme:M-1GZASfVOQpTMpiK3pyftXAfzP13N2-Hhzrsa7mKGgnSdjqJF_S-7wJwvbVpAkjS
    qcOeth49bZQpQ>
X-ME-Received: <xmr:M-1GZOWk9UVKC0wCb5CdqHHBTd96SNSvtAHLil4AG7qvqRTOOip3QjpKFIZFLXFA6GUfVYE4cPXw62z1YdK7UszGwY-bi7MBX4ax_jEDPCjlg7lgehCv>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedutddgudehhecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepofgr
    rhgvkhcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghkse
    hinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhep
    gfeuudehgfdvfeehhedujeehfeduveeugefhkefhheelgeevudetueeiudfggfffnecuve
    hluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghr
    vghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:M-1GZKhfvFn6nCB6uktTUMOa5aeCZhvMaIG2fZp3I62gUU3SqbJjCw>
    <xmx:M-1GZOBuSY7S9ou1YM7I4T88i3Ph9xhB_LVu_WR6dKJBQNbqt2EdJw>
    <xmx:M-1GZLIRSIZJSCHG9TrgMQETBcZVwCvi49XwHJz0xY3LRqOfObNciw>
    <xmx:M-1GZP63UNfX41VKmwc27zEPKwny4JZ-iIuLg6b9eoNablUajcNpUA>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH 3/6] automation: re-enable building SeaBIOS in Alpine container
Date: Mon, 24 Apr 2023 22:56:58 +0200
Message-Id: <f28aa73c1db56ccfce23c408283af28195b5eac2.1682369736.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

It seems to build just fine on Alpine 3.12, and SeaBIOS is necessary
for an HVM test (one that uses the Alpine build).
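For reference, the musl special-casing in automation/scripts/build keys off what /bin/ls links against. A simplified stand-in for that check, operating on ldd output fed via stdin (the sample lines are illustrative):

```shell
# classify_libc reads `ldd /bin/ls`-style output on stdin and reports the
# host libc; a simplified stand-in for the check in automation/scripts/build.
classify_libc() {
    if grep -q musl; then
        echo musl    # e.g. this Alpine container
    else
        echo other   # glibc and friends
    fi
}

# Typical ldd output on an Alpine (musl) system:
printf '/lib/ld-musl-x86_64.so.1 (0x7f0000000000)\n' | classify_libc
```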

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/scripts/build | 2 --
 1 file changed, 2 deletions(-)

diff --git a/automation/scripts/build b/automation/scripts/build
index 7d1b19c4250d..d830cff7b7c7 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -77,8 +77,6 @@ else
     if ldd /bin/ls | grep -q musl; then
         # disable --disable-werror for QEMUU when building with MUSL
         cfgargs+=("--with-extra-qemuu-configure-args=\"--disable-werror\"")
-        # SeaBIOS doesn't build on MUSL systems
-        cfgargs+=("--with-system-seabios=/bin/false")
     fi
 
     # Qemu requires Python 3.5 or later, and ninja
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 20:57:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 20:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525616.816888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3FU-0004Nh-Rc; Mon, 24 Apr 2023 20:57:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525616.816888; Mon, 24 Apr 2023 20:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3FU-0004Na-P2; Mon, 24 Apr 2023 20:57:24 +0000
Received: by outflank-mailman (input) for mailman id 525616;
 Mon, 24 Apr 2023 20:57:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HiTc=AP=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pr3FT-0004NP-Uw
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 20:57:24 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 99f3bb4c-e2e2-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 22:57:20 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id 97FC73200805;
 Mon, 24 Apr 2023 16:57:17 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Mon, 24 Apr 2023 16:57:17 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 24 Apr 2023 16:57:16 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99f3bb4c-e2e2-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:message-id:mime-version:reply-to:sender:subject:subject:to:to;
	 s=fm3; t=1682369837; x=1682456237; bh=qOLMsKsB/I/dH81OemoGa+Oq6
	UzSRPHuTT1HinFCzNk=; b=AkRUi/6RJ+kZ54ePAqMKqpPSgLVfkIl54ky53h54N
	5c6nb+vixtH1NhQMSxprlcV8PfYZvieFnPYyaR0MtFg+a7pRBRH3E70kRWKtQ0y1
	9p0y2CY5uCe2iwQ+Kq1idUMGUsFCO/ZDnt7/DGdjY4A+68QcPf+uFuJpV+OKmx09
	mdvxUAfRH8H5LZedFnoCa/qbDy3cu1TsYjrmxorQ1Y8H0oAu5PIlXaFYtHmOel3S
	ITD9Zy6beWQYAKlGTvXDQLfxV78voRI5vvHegIEtgH1zER/NcUCUuemwnw+tWowP
	H5IqglQyLtgORuOzjjJKLlztetcKfGxv++UbCIM10fyOw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:message-id:mime-version:reply-to:sender
	:subject:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm3; t=1682369837; x=1682456237; bh=q
	OLMsKsB/I/dH81OemoGa+Oq6UzSRPHuTT1HinFCzNk=; b=IoSAemiYrAx+KqS2d
	GjdEX9jqW2neSsdpC0RvrNHh68qqwhG02vzhDiMll12geoLbGZJpN2klz8oflYxB
	OzfaTMo7k+ANAzJgokJwXJSL4q6Y+A6Mc3FWVjId+mfQWnyINY7DLE7oQbDkXR5Z
	x7d2ENO+u0uI1KVCss7AZchshXIzNuQnmtM/SWq079e/BxfptNbJSuMZcCsqa1Q3
	6MdJLKhz3s7cIBiB/dA6t9qGGbOGOwgVThssZ8ozpOJ4xWy2y0PSQjcsRd2n1eFw
	flUV9LpMenmnjyZLMsSV8YPZnT+CeeZ/5Nfzq7LS5G3Wj2PRyJ+5PSVNqU8mHnH8
	gwa7Q==
X-ME-Sender: <xms:Le1GZAdKZrHGAz1HMdPeKY6ym1OKMtKC2JEczkGJvr6UO9ekBVGexA>
    <xme:Le1GZCPreQd_n1pKl8rwZA_iKnKJ-Y7Lo7BKfn76dc2whwsOLUevIxWl2eKGqdkeS
    8nxAgjuMG4RLg>
X-ME-Received: <xmr:Le1GZBjozs0Ll6JD6QRlNnNIJW8VFmKZx6f3KsmXO_pZaVoK_POBtkRFeUkIUjKoh-PzUR-dFPe1jTtdC9XM5rhGbxDNkLUxCO_kTOkpTYbjxngz3V0Q>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedutddgudehhecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffogggtgfesthekredtredtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepjeeu
    fefhleeikeegfedtgfeiueeghfduteejtefhfeevheffjefhieeggfejkeelnecuffhomh
    grihhnpehgihhtlhgrsgdrtghomhenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgr
    mhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhm
X-ME-Proxy: <xmx:Le1GZF_OoZAUvVsZ8dFGXtAiz12YHzK0vS7QvJahRPVtwouQVDfwrg>
    <xmx:Le1GZMv8JdIjsRd_7Taz1uwuGo_D0ERPrJfclmAas715J0kf_ySmTQ>
    <xmx:Le1GZMGgVTEVAMeewVMu82wl6RxcBs559D3ornmBEYVmqg1lb2DlIg>
    <xmx:Le1GZC3wl6XoQ507rQWA5CFVkXDzIEWF6dDFV4AwQUF3HWf_K3OqzQ>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH 0/6] automation: add PCI passthrough tests on x86
Date: Mon, 24 Apr 2023 22:56:55 +0200
Message-Id: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This series adds passthrough tests using the ADL x86 hw gitlab runner. Some of
the patches also improve existing hw tests.

Example passing run:
https://gitlab.com/xen-project/people/marmarek/xen/-/pipelines/846920786

Marek Marczykowski-Górecki (6):
  automation: specify explicit dom0 mem size for ADL tests
  automation: add runtime qemu dependencies to test container
  automation: re-enable building SeaBIOS in Alpine container
  automation: wait for the login prompt as test end marker
  automation: PCI passthrough tests on ADL hw
  automation: include tail of serial log in the gitlab output

 automation/gitlab-ci/test.yaml                    | 20 ++++-
 automation/scripts/build                          |  2 +-
 automation/scripts/qubes-x86-64.sh                | 85 +++++++++++++---
 automation/tests-artifacts/alpine/3.12.dockerfile |  1 +-
 4 files changed, 93 insertions(+), 15 deletions(-)

base-commit: 18c128ba66e6308744850aca96dbffd18f91c29b
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 20:57:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 20:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525621.816933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3Fc-0005Ug-7u; Mon, 24 Apr 2023 20:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525621.816933; Mon, 24 Apr 2023 20:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3Fc-0005U3-4u; Mon, 24 Apr 2023 20:57:32 +0000
Received: by outflank-mailman (input) for mailman id 525621;
 Mon, 24 Apr 2023 20:57:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HiTc=AP=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pr3Fb-00058D-05
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 20:57:31 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9fe8ce3f-e2e2-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 22:57:30 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.west.internal (Postfix) with ESMTP id 78B883200708;
 Mon, 24 Apr 2023 16:57:28 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Mon, 24 Apr 2023 16:57:28 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 24 Apr 2023 16:57:26 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9fe8ce3f-e2e2-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682369847; x=1682456247; bh=UR
	iOmP9knqsMpj+/wVJpseuPtx4rqcsH76JJ/F1c++g=; b=jblZFWBKdi1Ci5VHtl
	auVWPBC7czAliJ+3hPx8or3aa26MqOdLqX0qGcMrHDb6bVpqvwWwWCmUTewKjLwv
	ql2jtt0FENerI5uZUc/JVfA8rbL03qgdS5x1ebG6V3sNkOZLzY6GiTQDoj9aCcjz
	AqZhFROCsF/Q4Dyzp52SFTO8tGoxNaL/Fs5KAhkrXu30ZUE91OWGGjCebb/1Nuif
	n6KXP9enLwGFCrJUP6Scezs2UYhstqaSubwQlMsPdFCZ5BlDEnYXjz9gU5sQlqu1
	2CxU9rBGFjBt3iYxkxD30IZaZrnC8Ie7o7yEVhR4EcoNEN/p4lfOB/g9VlGRtpAD
	0PAA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682369847; x=1682456247; bh=URiOmP9knqsMpj+/wVJpseuPtx4rqcsH76J
	J/F1c++g=; b=GJ8ZAogatX/nXT5dL1Kn3xIZ8yFfqxKHZ7ZN39SloRDebf56MKN
	QrcOypkFicWCP7d3RiWDkonVBd2TwEhSaY9Jf9lgO7vZ0OkL/r6DnZfIILAtTzKH
	TbcxKeMQcL6ykE4ebjI2RW2SYdSs4yLWfp7QSxCNT3Wl4I0iXSW/jRgnCd+PNcaK
	0raEtrbApvjQ/IIRjsYH6iMDPUbhMqtqxtKFpJ+ZRfPTmFjelXmvmoT/wRvcXvgx
	hGhxG2S8LsLIDppKBuB9sJNIbvkIvMdncOzdmX5h/Gq53eMsKlNlaONUZQQ65ko3
	NTuUZI5prcKF8ARH3+n+ZJLuIaoTfWIGmug==
X-ME-Sender: <xms:N-1GZBGHZLTG-VvOIen2gfxk8RoPeY54QPD1Kk0biQhKyBWn79awFQ>
    <xme:N-1GZGXkyKlwcpkg8cuLdpEUhgH6hhgC_8h4bocign8zz9-hSYcnBBvihAZYkeqgc
    5o_WNE8kbriLQ>
X-ME-Received: <xmr:N-1GZDJVOOKrYE01NP76th6hnIiCdCMCaEPXD9gIG-r-SylB00pwmifkTks0D9JwGfzGS-TLJEiQTx0CK9RBYrqgyJFgjVv1EbDG7VefKGNXvh9hbuEi>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedutddgudehhecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepofgr
    rhgvkhcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghkse
    hinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhep
    gfeuudehgfdvfeehhedujeehfeduveeugefhkefhheelgeevudetueeiudfggfffnecuve
    hluhhsthgvrhfuihiivgepudenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghr
    vghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:N-1GZHGeghnZNv5jZTnE7ls92kacXCA1H4sOTBo_FMtqtUETuvPi7Q>
    <xmx:N-1GZHXAZoEbpja1Cp6dUQUU-LABw0ZC3dXejLbcOnWvKz5WMN3HgA>
    <xmx:N-1GZCOsa8yZHpfvcR9iT7TydsEQJZOKfnvmCgaGh1FKPlNBAIwpdw>
    <xmx:N-1GZOem6ppg3his-_QQ5nBSzRYLV1FidFnxujKpMyOwGaPGmKMSqw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH 5/6] automation: PCI passthrough tests on ADL hw
Date: Mon, 24 Apr 2023 22:57:00 +0200
Message-Id: <b01494665d1a8cce5c426be70beca2c519215eca.1682369736.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add a simple PCI passthrough test to both a PV and an HVM domU. It passes
through a network adapter (the only one in the system), gets an IP via
DHCP (first basic test) and then pings the gateway (second basic test).
Finally, if the device is supposed to use MSI or MSI-X (as set in the
PCIDEV_INTR test variable), check via /proc/interrupts that it's in use.
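The interrupt-mode check amounts to grepping the guest's /proc/interrupts for the device's lines. A minimal sketch (the interface name eth0 and the sample counter values are illustrative; the patterns mirror the ones used in the test script):

```shell
# check_intr reads /proc/interrupts-style text on stdin and verifies the
# expected interrupt mode for eth0.
check_intr() {
    case "$1" in
        MSI-X) grep -q -- '\(-msi-x\|PCI-MSI-X\).*eth0' ;;
        # match '-msi', 'PCI-MSI ' or 'PCI-MSI-<non-X>', but not MSI-X
        MSI)   grep -q -- '\(-msi\|PCI-MSI\( \|-[^X]\)\).*eth0' ;;
    esac
}

# Illustrative guest lines (values made up):
printf ' 24:  100  PCI-MSI 524288-edge  eth0\n' | check_intr MSI && echo "MSI in use"
printf ' 25:  200  PCI-MSI-X  eth0-rx-0\n' | check_intr MSI-X && echo "MSI-X in use"
```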

On the current runner, the device in question is this:
03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3] (rev 03)
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:7d25]
	Flags: bus master, fast devsel, latency 0, IRQ 18
	Memory at 50400000 (32-bit, non-prefetchable) [size=1M]
	Memory at 50500000 (32-bit, non-prefetchable) [size=16K]
	Capabilities: [40] Power Management version 3
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
	Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
	Capabilities: [a0] Express Endpoint, MSI 00
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [140] Device Serial Number ...
	Capabilities: [1c0] Latency Tolerance Reporting
	Capabilities: [1f0] Precision Time Measurement
	Capabilities: [1e0] L1 PM Substates
	Kernel driver in use: igc
	Kernel modules: igc

With the current Xen version, it uses MSI-X under PV and MSI under HVM.

This patch moves the domU config to a variable, to make it configurable
on a per-test basis. Also add a few comments for visual separation of tests.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/gitlab-ci/test.yaml     | 20 ++++++++-
 automation/scripts/qubes-x86-64.sh | 80 ++++++++++++++++++++++++++-----
 2 files changed, 89 insertions(+), 11 deletions(-)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index d68c584269dd..1ce083e6cd88 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -94,6 +94,8 @@
     # the test controller runs on RPi4
     CONTAINER: alpine:3.12-arm64v8
     LOGFILE: smoke-test.log
+    PCIDEV: "03:00.0"
+    PCIDEV_INTR: "MSI-X"
   artifacts:
     paths:
       - smoke.serial
@@ -147,6 +149,24 @@ adl-suspend-x86-64-gcc-debug:
     - *x86-64-test-needs
     - alpine-3.12-gcc-debug
 
+adl-pci-pv-x86-64-gcc-debug:
+  extends: .adl-x86-64
+  script:
+    - ./automation/scripts/qubes-x86-64.sh pci-pv 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.12-gcc-debug
+
+adl-pci-hvm-x86-64-gcc-debug:
+  extends: .adl-x86-64
+  variables:
+    PCIDEV_INTR: "MSI"
+  script:
+    - ./automation/scripts/qubes-x86-64.sh pci-hvm 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.12-gcc-debug
+
 qemu-smoke-dom0-arm64-gcc:
   extends: .qemu-arm64
   script:
diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index c0bc71764f73..6442f7dda515 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -4,8 +4,21 @@ set -ex
 
 test_variant=$1
 
+### defaults
 wait_and_wakeup=
 timeout=120
+domU_config='
+type = "pvh"
+name = "domU"
+kernel = "/boot/vmlinuz"
+ramdisk = "/boot/initrd-domU"
+extra = "root=/dev/ram0 console=hvc0"
+memory = 512
+vif = [ "bridge=xenbr0", ]
+disk = [ ]
+'
+
+### test: smoke test
 if [ -z "${test_variant}" ]; then
     passed="ping test passed"
     domU_check="
@@ -23,6 +36,8 @@ done
 tail -n 100 /var/log/xen/console/guest-domU.log
 echo \"${passed}\"
 "
+
+### test: S3
 elif [ "${test_variant}" = "s3" ]; then
     passed="suspend test passed"
     wait_and_wakeup="started, suspending"
@@ -48,6 +63,59 @@ xl dmesg | grep 'Finishing wakeup from ACPI S3 state' || exit 1
 ping -c 10 192.168.0.2 || exit 1
 echo \"${passed}\"
 "
+
+### test: pci-pv, pci-hvm
+elif [ "${test_variant}" = "pci-pv" ] || [ "${test_variant}" = "pci-hvm" ]; then
+
+    if [ -z "$PCIDEV" ]; then
+        echo "Please set 'PCIDEV' variable with BDF of test network adapter" >&2
+        echo "Optionally set also 'PCIDEV_INTR' to 'MSI' or 'MSI-X'" >&2
+        exit 1
+    fi
+
+    passed="pci test passed"
+
+    domU_config='
+type = "'${test_variant#pci-}'"
+name = "domU"
+kernel = "/boot/vmlinuz"
+ramdisk = "/boot/initrd-domU"
+extra = "root=/dev/ram0 console=hvc0"
+memory = 512
+vif = [ ]
+disk = [ ]
+pci = [ "'$PCIDEV',seize=1" ]
+on_reboot = "destroy"
+'
+
+    domU_check="
+set -x -e
+ip link set eth0 up
+timeout 30s udhcpc -i eth0
+pingip=\$(ip -o -4 r show default|cut -f 3 -d ' ')
+ping -c 10 \"\$pingip\"
+echo domU started
+cat /proc/interrupts
+"
+    if [ "$PCIDEV_INTR" = "MSI-X" ]; then
+        domU_check="$domU_check
+grep -- '\\(-msi-x\\|PCI-MSI-X\\).*eth0' /proc/interrupts
+"
+    elif [ "$PCIDEV_INTR" = "MSI" ]; then
+        # depending on the kernel version and domain type, the MSI can be
+        # marked as '-msi', 'PCI-MSI', or 'PCI-MSI-<SBDF>'; be careful to not match
+        # -msi-x nor PCI-MSI-X
+        domU_check="$domU_check
+grep -- '\\(-msi\\|PCI-MSI\\( \\|-[^X]\\)\\).*eth0' /proc/interrupts
+"
+    fi
+    domU_check="$domU_check
+echo \"${passed}\"
+"
+
+    dom0_check="
+tail -n 100 -F /var/log/xen/console/guest-domU.log
+"
 fi
 
 # DomU
@@ -97,17 +165,7 @@ xl create /etc/xen/domU.cfg
 ${dom0_check}
 " > etc/local.d/xen.start
 chmod +x etc/local.d/xen.start
-# just PVH for now
-echo '
-type = "pvh"
-name = "domU"
-kernel = "/boot/vmlinuz"
-ramdisk = "/boot/initrd-domU"
-extra = "root=/dev/ram0 console=hvc0"
-memory = 512
-vif = [ "bridge=xenbr0", ]
-disk = [ ]
-' > etc/xen/domU.cfg
+echo "$domU_config" > etc/xen/domU.cfg
 
 echo "rc_verbose=yes" >> etc/rc.conf
 echo "XENCONSOLED_TRACE=all" >> etc/default/xencommons
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 20:57:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 20:57:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525617.816898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3FW-0004cY-2K; Mon, 24 Apr 2023 20:57:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525617.816898; Mon, 24 Apr 2023 20:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3FV-0004cR-Vj; Mon, 24 Apr 2023 20:57:25 +0000
Received: by outflank-mailman (input) for mailman id 525617;
 Mon, 24 Apr 2023 20:57:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HiTc=AP=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pr3FU-0004NP-In
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 20:57:24 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9aaee8e7-e2e2-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 22:57:21 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.west.internal (Postfix) with ESMTP id B52193200957;
 Mon, 24 Apr 2023 16:57:19 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Mon, 24 Apr 2023 16:57:20 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 24 Apr 2023 16:57:18 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9aaee8e7-e2e2-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682369839; x=1682456239; bh=Pz
	68EwyEOJjTpPRR8jDPy/O6zbrph/i2QW9YHpC90nc=; b=FGnOBNyXU/o2d6B1w0
	VCDvVAGlMKlA0uMcYFrZOJMYT3gDRk0wyr9jcvVrrgbY2ozHv6lMQCyOYiU9pf8O
	QKYa71eWE3MESH2nK7XnWKzsK8kp0tqeylU/WGvs0+r2wqyXD5WGkhBticZXt+03
	bTVsMHoKdVKGKDaaIX9onYD2+ralt8lhKZGmq53WYyXnhTIfG+/DnSKHbPeqyiFm
	/b3l8x6yjpYc2CQ3CJ/5TQepPrC2jyFRzG8oV8rXDk+nlXFh8lNXg1DNbMzcrtQB
	M4sOqIdplY9NBIHpDOUitMpnt3N+YQnS5ku3LzM47hMCYl04fNDXPUMlxflEaakn
	bPIw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682369839; x=1682456239; bh=Pz68EwyEOJjTpPRR8jDPy/O6zbrph/i2QW9
	YHpC90nc=; b=CghUsxwCqzmvsSHQ6x3zMznX0oveaK+7q21nf2/vzdcqy2S5g2y
	+ICCJ+bUQUX5ijK6T6CfMNN9Is6jve2cLFcRRHrDiI/TXgpgiMWjgMVeN5KgHjDn
	ame7DfeOGoelLhgnkbNWCNRXohZCfCIyzxXmJ+11pz12Fme4vzrA+l+tHGTBvAp2
	yN4k7fgsCZHqgXeT9RR3k29CQHqNK0hN40UXGSnKhL4tNuOfOw2GY50Pv5ktEbSc
	mAJqMKIuNkPFiToHzzUCYnDAONYAvlT0Nl9xAqx/hB5zqPs0UyH2QchMfuaGnZSx
	9nmlM1ZxuINTGTpACSg3CFzMHMakWskM/nA==
X-ME-Sender: <xms:L-1GZKyVUohbFF2zXS31eTgMpVrFjUe8rZ34yfV9kygNaNzoxHI4Pw>
    <xme:L-1GZGReKKiZOy771bKs8zzjqS-p7kScQqPHgPtfqK1iD9DSFa2hCWtIpfqsrso6I
    PwTxotbTO-4uw>
X-ME-Received: <xmr:L-1GZMVgjpHWuKonX7NEe8fHyBCQTxkmxkvyHHgntk9FFZbCJwLkIZ5h5ClyeAyqmD7GfoH31FQh9cjNxumkHV_Z50KPG_ahvWPMCmRILAsfgGdzKdJN>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedutddgudehhecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepofgr
    rhgvkhcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghkse
    hinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhep
    gfeuudehgfdvfeehhedujeehfeduveeugefhkefhheelgeevudetueeiudfggfffnecuve
    hluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghr
    vghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:L-1GZAieDzXVB4Jtg0j_2VkHL5bshEpuzzU32pGQw90hNE6w5A5ZYg>
    <xmx:L-1GZMA9RXsrMfsyQsf-wZYoDexcph_RJKZXD7ZaHYidesfmX7X3oA>
    <xmx:L-1GZBJn3J8-YE0xrF-aYF4ubTSO8jTpGsaJqsmsNnh2W1jxWf26Bw>
    <xmx:L-1GZF5iqXTACnz4nOmf-gUCmvpElYUt_a100sWy0Wkkd9lUeczWvA>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH 1/6] automation: specify explicit dom0 mem size for ADL tests
Date: Mon, 24 Apr 2023 22:56:56 +0200
Message-Id: <acf4e8fbb74715335a08bf2a5a1008a117fec65f.1682369736.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Avoid memory fragmentation that leads to:
(XEN) common/memory.c:277:d0v10 Could not allocate order=9 extent: id=1 memflags=0xc0 (0 of 4)
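For context (not part of the patch): order=9 in the failed allocation
above means 2^9 contiguous 4 KiB pages, i.e. a 2 MiB extent, which a
fragmented heap may be unable to supply even with plenty of free memory
overall. A quick standalone arithmetic sketch:

```shell
# order=9 means 2^9 contiguous 4 KiB pages; compute the extent size.
order=9
page_kib=4
extent_kib=$(( (1 << order) * page_kib ))
echo "order=${order} extent: ${extent_kib} KiB ($(( extent_kib / 1024 )) MiB)"
```

This prints `order=9 extent: 2048 KiB (2 MiB)`.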

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/scripts/qubes-x86-64.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index 2d4cf2e2268c..916dbaae59c3 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -123,7 +123,7 @@ TFTP=/scratch/gitlab-runner/tftp
 CONTROLLER=control@thor.testnet
 
 echo '
-multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all
+multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all dom0_mem=4G
 module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0
 module2 (http)/gitlab-ci/initrd-dom0
 ' > $TFTP/grub.cfg
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 20:57:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 20:57:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525622.816948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3Ff-00061a-Ji; Mon, 24 Apr 2023 20:57:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525622.816948; Mon, 24 Apr 2023 20:57:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3Ff-00061N-F0; Mon, 24 Apr 2023 20:57:35 +0000
Received: by outflank-mailman (input) for mailman id 525622;
 Mon, 24 Apr 2023 20:57:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HiTc=AP=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pr3Fd-0004NP-SM
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 20:57:33 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a11e4afc-e2e2-11ed-8611-37d641c3527e;
 Mon, 24 Apr 2023 22:57:32 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id 8512C32005C1;
 Mon, 24 Apr 2023 16:57:30 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Mon, 24 Apr 2023 16:57:30 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 24 Apr 2023 16:57:29 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a11e4afc-e2e2-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682369850; x=1682456250; bh=Mh
	8lt/tCiYSiI3kokJmf/Tdax7W7ULmBBzi/97xm0PQ=; b=kibAK+HhiMe0bA5wtL
	cjabupcuAX+uSIBEy1Y2WjGncqXr4glo8dT9TttYbAsof8IhNXfYQ0M/KBffjO27
	PtUfqWT6fG1yr7ds0dgF60kKPEi6qhn/N+V+uyy7hfAU7C7DO6imWQkYoASs+OfA
	S4PEpxQFvg08jVfdXgi+4Vqp8M1Pde87p4W/fel62js0WxRZxA/kjFIm8LfKTUyK
	IJdc2yWDQAgnRaSbiMyvhcYnRM/nhGH7z9xJQ2B9RNkO9dR5T0fcPMrxIwWbvSkR
	TYlce2bbTTT8ua3LGafvY8evRGC75mtdNyAPz3PYksXNJsFkex8uqA/4Zf81IlF6
	cXYw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682369850; x=1682456250; bh=Mh8lt/tCiYSiI3kokJmf/Tdax7W7ULmBBzi
	/97xm0PQ=; b=a4xjzEMJISkljx+e5gK4l8f7h/n6YYwxwEmTvGVIJ/4fWeJYw6+
	fAcvvmuWw6cq6HFpXKRNKef+RE4x2VF5Z/Uq+WFWyL+svTLU+Tg2jVmMeDRqtwXb
	/b8sbYtAB9ABMqS53IFr3gUUXFv7G0ZMR4LLKyufAtbhD1AAYlGWLKGyYgVoGNE+
	e5ySp0y2VkpaLRmu8F8dMxsU59zxxJDV7Ocbo0IyB7Htp7teopWi/NwbV4WDbnRS
	RBYad6GTHGj0PxzPxIlsqnkLTq/gx1L4+TwPNBxcx5Ou7nF+eshmVoCtbXq1bbgW
	8cS2h61HRLZRsaBaDKbtaipWx+qctPjVaDQ==
X-ME-Sender: <xms:Ou1GZN8UwlzA0PXK6UtDE99RyvbHGDmMGVmeqt-L4pe2iVvjpyxwRQ>
    <xme:Ou1GZBtZ7cTCssGHoZqvzWbUlNdCjJyM_cqBCN0nkRdDv9TdD4d9BjEEiA60654eJ
    Zzv-5jvZWb6Kw>
X-ME-Received: <xmr:Ou1GZLCKYZ11fXyAO0HwSPrXHGB7okpBq5HsmQBSLhwmHut7KafwOCxeP4-LHcCmcXvCMseO5GRyCgGASeMnb9GZdEg0VwHMGPWhPT-vQsSJXpxELctk>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedutddgudehhecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepofgr
    rhgvkhcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghkse
    hinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhep
    gfeuudehgfdvfeehhedujeehfeduveeugefhkefhheelgeevudetueeiudfggfffnecuve
    hluhhsthgvrhfuihiivgepvdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghr
    vghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:Ou1GZBeZRWTMPIZowTnyoLFVfYlVEP15U8zwBrFt4lo8gZt67zeGfw>
    <xmx:Ou1GZCOdpIhaJ6uiahl53D56VpSgS534XmHAlrfn5O_ILpYF93Jqqg>
    <xmx:Ou1GZDn2bspwGK8oXmBuJVrTKmRLbhSwl76hEKqGqPfVnuiCtY1tsw>
    <xmx:Ou1GZE0QuKsg6UzAxtQy398HUljR93UjyOT5PjRtrBnyLaFzBbKoDg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH 6/6] automation: include tail of serial log in the gitlab output
Date: Mon, 24 Apr 2023 22:57:01 +0200
Message-Id: <a3d33c869b7fcf4f72047daa4dcbcf4ff97143c3.1682369736.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Make it a bit easier to see what has failed.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/scripts/qubes-x86-64.sh | 1 +
 1 file changed, 1 insertion(+)

diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index 6442f7dda515..9b89d90f653c 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -228,5 +228,6 @@ fi
 
 sleep 1
 
+tail -n 100 smoke.serial
 (grep -q "^Welcome to Alpine Linux" smoke.serial && grep -q "${passed}" smoke.serial) || exit 1
 exit 0
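
For context (not part of the patch), the end of the script now behaves
like this standalone sketch; the fabricated smoke.serial contents and
the "PASSED" marker are placeholders standing in for real boot output
and whatever success string the script actually greps for:

```shell
# Fabricate a smoke.serial (placeholder contents), show its tail, then
# gate success on the two markers, mirroring the post-patch script end.
passed="PASSED"   # placeholder for the real success marker
printf '%s\n' "Welcome to Alpine Linux" "... boot output ..." "$passed" > smoke.serial

tail -n 100 smoke.serial   # the line this patch adds: surface the log tail in CI output

if grep -q "^Welcome to Alpine Linux" smoke.serial && grep -q "$passed" smoke.serial; then
    echo "smoke test passed"
else
    echo "smoke test failed"
fi
```

Because `tail` runs unconditionally before the gating `grep`s, the last
100 lines of the serial log appear in the job output even when the
markers are missing and the job fails.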
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Mon Apr 24 21:01:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 21:01:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525653.816958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3J3-0000mB-4n; Mon, 24 Apr 2023 21:01:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525653.816958; Mon, 24 Apr 2023 21:01:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr3J3-0000m4-0l; Mon, 24 Apr 2023 21:01:05 +0000
Received: by outflank-mailman (input) for mailman id 525653;
 Mon, 24 Apr 2023 21:01:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9eiY=AP=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pr3J1-0000jG-FR
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 21:01:03 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20610.outbound.protection.outlook.com
 [2a01:111:f400:fe59::610])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1cd876e8-e2e3-11ed-b223-6b7b168915f2;
 Mon, 24 Apr 2023 23:01:00 +0200 (CEST)
Received: from MW4PR03CA0122.namprd03.prod.outlook.com (2603:10b6:303:8c::7)
 by PH7PR12MB5711.namprd12.prod.outlook.com (2603:10b6:510:1e2::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Mon, 24 Apr
 2023 21:00:56 +0000
Received: from CO1NAM11FT022.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:8c:cafe::78) by MW4PR03CA0122.outlook.office365.com
 (2603:10b6:303:8c::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Mon, 24 Apr 2023 21:00:56 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT022.mail.protection.outlook.com (10.13.175.199) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.19 via Frontend Transport; Mon, 24 Apr 2023 21:00:55 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 24 Apr
 2023 16:00:54 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 24 Apr
 2023 14:00:54 -0700
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 24 Apr 2023 16:00:53 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1cd876e8-e2e3-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Rs5Ao5SlgVuctQAcbJy75kiz++FkNHMbX42fKnUXNj4BNFnRS6u0Elx+1JdnoC4UIAEuFCcfxWtj8HXubIr4q438iq+/gUwm3Ok1lOXzVSNkhfH7/HBzEUGMw0yBn25bU9juS/fGviK0tg4bKz2545xNTNdGLHYVsJOOBIr/YXCrsDLY1zSOu4zGg9oLKYun1FgpPAbpKIDVuIoEdD7HPMh5qyez3rFKDWImxP3CsIpy5ze1pFkqc9twSqFPmUN/0wRYj7LiTsqfILXIhwwXPrzpr6xx0MH7LC+OP92jHGZWGdzdu9klzhr7uVGJsNNrVDRtB3PhbyR0YiI8RhlrBg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yIY0qqdmHDda78wfplCadbi3y5M95ECBZ9jQJrtNo4g=;
 b=hHmTwD+s8nnmcwfxyFQU0s0nZFKDdyhvbJsBJbW18SiGy6hc1BOVacJMsYGidKLfTzfoZEr5xQK2eapxI/Niv1fxDplgc8hRbnHjWy/ZzbFo68zSCbCneY9/tkBNtCL9doMANh3xhmMvOQ9dc5d6GFxVvuqTg82jzJVULiCIv8Ea8gmOu8ku7woskMF/DhJHccoMfpB2H/dMHvrtuRDcC7kSHPTnnJP9I7TTy33fpfJ8fJgpYjDWcWSs/Mv05K41C+qRVnKsf6dFXaQS6K8+ik+ZBS4mg78bJBlE8sJd2zTJaHUNwyb9hKAnuciRGxXGL3QG6MDKcXQOBFR5yC2yZw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yIY0qqdmHDda78wfplCadbi3y5M95ECBZ9jQJrtNo4g=;
 b=FkwqJdOChon08eo0Nb5Ey8nY42PnoKhh3BPc+J3rF58tExgjzxUN+5VpusKyHhuSvk1MUpelqLu1mOxynHILV7soEzfoPpH7CuN+KB8h/LsQnfben/CVqm16OSAnDHe7j4BFY3CUFsw5gSPK3s1lhMKBL46WSkImQplN7xkT1UU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, George Dunlap
	<george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>
Subject: [RFC PATCH] xen/sched/null: avoid crash after failed domU creation
Date: Mon, 24 Apr 2023 17:00:47 -0400
Message-ID: <20230424210048.786436-1-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT022:EE_|PH7PR12MB5711:EE_
X-MS-Office365-Filtering-Correlation-Id: 3394b3ff-138d-4922-44cd-08db4506ff42
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	QZ/NASekFVA9OBcNGrRES3QVOImGBChFYRys8sDqWzWp88Kr29QhXn0bu8f8ZQienhFIUy+AdPnmdOYss+A0p+u46CZRMfzbIeY23wsmMtcpESBdXPXo18gqvEI3sTjVolaxWkVZNMc0FNbqfFfz0kyBQQsbfQz9TMDCKFV/WI2cAqttmZm5Om4IQ5yqCbhfpcPzFGy3b6Fu0G7OZ0vUIv/06OUrR0TUepj5f1HAgux5LndYB1wbd/J3wWqORxFe7HJHjujjyUzw3ZpBUUzE/Ks3baP5YfAqNHbmgpEJlx3d9u5020CN2tHbP3P3YizrWcxgXIz1DaMw5alISUQCQSiFvV2fY1NRMIY8oYqjehQZgJPPRKznG2Gj3yNP8S9Qt2u7lVAfGpYe/2i8APWWnPibUeZ+DeSbkhLjMSSKgX4NUT5ZEhIxTTgGjCfkJ1FsIqFfmK2/xSY01v5E51heAWM+8Q/ST5l+C+IbgnvS9HbAf7NgzAxgEALlTefz0EaPSPzQtkbzAMyj8dgl/jRRGh+EI2qOgkcgxBDRznAE5PR9XhuR3ZtE1DdH5eFrwxTOm9dfooQJygfzU8fE2Xy8zZBCz/V9EY3R7MaXkdFzQupFFhqn3pVVhHuvYmyOScuoEabhOqBAlvC6jZ/Ydnv9VahrJQ+nWA0hGAwuIYyTAlXmNdimbskvk6oKYOaUfT03s/hm3D3bYUZNpneu9dIOEu+tRmJxEgMrSgWmL6zVlUE=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(136003)(346002)(39860400002)(451199021)(46966006)(40470700004)(36840700001)(2906002)(2616005)(6666004)(186003)(40480700001)(26005)(1076003)(70206006)(70586007)(8676002)(8936002)(41300700001)(6916009)(4326008)(316002)(478600001)(44832011)(5660300002)(54906003)(82740400003)(356005)(81166007)(82310400005)(36756003)(86362001)(40460700003)(36860700001)(47076005)(336012)(426003)(83380400001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Apr 2023 21:00:55.6304
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3394b3ff-138d-4922-44cd-08db4506ff42
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT022.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB5711

When domU creation fails partway through, we may end up in a state where
a vCPU has not yet been added to the null scheduler, yet unit_deassign()
is invoked. In that case, a debug build of Xen hits an ASSERT and
crashes:

(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'npc->unit == unit' failed at common/sched/null.c:379
(XEN) ****************************************

To work around this, replace the ASSERT with a check: when npc->unit is
NULL, simply return false from unit_deassign().

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---

I'm not entirely sure whether this is the correct fix (hence the RFC
tag), but at least it avoids crashing the system for me.

Here are the steps to reproduce on an aarch64 system with 4 cpus:

sudo xl cpupool-cpu-remove Pool-0 3
sudo xl cpupool-create 'name="nullpool"' 'sched="null"' 'cpus=["3"]'

cat > stew.cfg <<EOF
name = "stew"
kernel = "Image"
extra = "console=hvc0"
memory = 768
vcpus = 1
pool= "nullpool"
pci = [ "01:00.0" ]
EOF

sudo xl create stew.cfg

The PCI device 01:00.0 is not assignable, so the domain creation will
fail.

Here is a more detailed crash log:

stew@ubuntu:~$ sudo xl create stew.cfg
Parsing config from stew.cfg
libxl: error: libxl_pci.c:1677:libxl__device_pci_add: Domain 1:PCI device 0:1:0.0 is not assignable
libxl: error: libxl_pci.c:1809:device_pci_add_done: Domain 1:libxl__device_pci_add failed for PCI device 0:1:0.0 (rc -3)
libxl: error: libxl_create.c:1923:domcreate_attach_devices: Domain 1:unable to add pci devices
(XEN) Assertion 'npc->unit == unit' failed at common/sched/null.c:379
(XEN) ----[ Xen-4.18-unstable  arm64  debug=y  Tainted:   C    ]----
(XEN) CPU:    0
(XEN) PC:     000002000023ec30 null.c#unit_deassign+0xdc/0x1cc
(XEN) LR:     000002000023fa1c
(XEN) SP:     00008000fff67bb0
(XEN) CPSR:   00000000800002c9 MODE:64-bit EL2h (Hypervisor, handler)
(XEN)      X0: 0000000000000000  X1: 00008000fff43d70  X2: 0000000000000000
(XEN)      X3: 0000000000000003  X4: 0000000000000000  X5: 00008000ffff84b0
(XEN)      X6: 0000000000000000  X7: 0000000000000000  X8: 00008000fff43fd0
(XEN)      X9: 0000000000000000 X10: f00ff00ff00ff00f X11: 00000000ccccccc0
(XEN)     X12: 00000000ccccccc0 X13: 0000000000000000 X14: 0000000000000000
(XEN)     X15: 0000007fde58e6d0 X16: 0000000000000024 X17: 0000000000000000
(XEN)     X18: 0000000000000000 X19: 00008000fff43cd0 X20: 00008000fffdc120
(XEN)     X21: 00008000fff69a30 X22: 00008000fff69a30 X23: 0000000000000003
(XEN)     X24: 00008000fffecb20 X25: 0000000000000001 X26: 00008000fff69920
(XEN)     X27: 00008000fff43c70 X28: ffffff880188eec0  FP: 00008000fff67bb0
(XEN)
(XEN)   VTCR_EL2: 0000000080023558
(XEN)  VTTBR_EL2: 000100087ff94000
(XEN)
(XEN)  SCTLR_EL2: 0000000030cd183d
(XEN)    HCR_EL2: 00000000807c063f
(XEN)  TTBR0_EL2: 000000000034b000
(XEN)
(XEN)    ESR_EL2: 00000000f2000001
(XEN)  HPFAR_EL2: 0000000000f90100
(XEN)    FAR_EL2: ffffffc0097b0f00
(XEN)
(XEN) Xen stack trace from sp=00008000fff67bb0:
(XEN)    00008000fff67c10 000002000023fa1c 00008000fff43cd0 00008000fff43d70
(XEN)    00008000fff69a30 00008000fffec068 00008000fff68000 00008000fffecb20
(XEN)    0000000000000001 00008000fff43d70 00008000fff69a30 00008000fffec068
(XEN)    00008000fff67c40 0000020000243948 0000000000000003 00008000fff43e10
(XEN)    00008000fff43cd0 00008000fffec068 00008000fff67cb0 000002000023069c
(XEN)    00008000fffecb20 00008000fff68000 00008000fffecb20 00008000fff68000
(XEN)    000000005a000ea1 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffffff880188eec0 00008000fffecb20 00008000fff43ea0
(XEN)    00008000fff67cd0 0000020000231cc0 00000200002bb7e0 00008000fff68000
(XEN)    00008000fff67d00 00000200002099f0 00008000fff68000 0000000000000000
(XEN)    0000007fb5381010 000000005a000ea1 00008000fff67d30 000002000022ec8c
(XEN)    0000000000000001 000000005a000ea1 00008000fff67d30 000002000022e744
(XEN)    00008000fff67e40 000002000027121c 00008000fff67ea0 000000005a000ea1
(XEN)    00008000fff67f20 00008000fff52000 00008000fff67da0 0000020000244650
(XEN)    00008000fff67d90 0000020000224f54 00008000fff67da0 0000020000224f54
(XEN)    00008000fff67dc0 0000020000225088 00008000fff52e16 00008000fff52e00
(XEN)    0000001500000002 0000000000000001 000a752520646a25 742064656c696166
(XEN)    c00cc00cc00cc000 0000000000000000 ff0000ff000000ff 0000000000000000
(XEN)    00000000f00f000f 0000000000000000 f00ff00ff00ff00f f00ff00ff00ff00f
(XEN)    00000000ccccccc0 00000000ccccccc0 0000000000000000 0000000000000000
(XEN)    0000007fb5310018 0000007fb529d8d0 00008000fff67e70 0000020000272134
(XEN)    00008000fff67ea0 000000005a000ea1 00008000fff67fa8 0000000020000005
(XEN)    ffffffc009c9bce0 0000020000255c60 0000000000000000 ffffff8809182a00
(XEN)    ffffffc009c9bce0 0000020000255c54 0000007fb5381010 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffffffc009c9bdc8 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000007fde58e6d0 0000000000000024 0000000000000000
(XEN)    0000000000000000 0000007fde58e6d0 ffffff8809182a00 ffffff8809182a00
(XEN)    0000000000305000 0000007fde58e6d0 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 ffffff880188eec0 ffffffc009c9bce0
(XEN)    ffffffc008648cb8 ffffffffffffffff ffffffc00803799c 0000000020000005
(XEN)    000000005a000ea1 0000000000000000 0000000020000005 0000000000000000
(XEN)    0000000000000000 ffffff880188eec0 ffffffc009c9bce0 ffffffc008037998
(XEN)    0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<000002000023ec30>] null.c#unit_deassign+0xdc/0x1cc (PC)
(XEN)    [<000002000023fa1c>] null.c#null_unit_remove+0x6c/0xe4 (LR)
(XEN)    [<000002000023fa1c>] null.c#null_unit_remove+0x6c/0xe4
(XEN)    [<0000020000243948>] sched_move_domain+0x194/0x39c
(XEN)    [<000002000023069c>] cpupool.c#cpupool_move_domain_locked+0x38/0x70
(XEN)    [<0000020000231cc0>] cpupool_move_domain+0x34/0x54
(XEN)    [<00000200002099f0>] domain_kill+0xc4/0x144
(XEN)    [<000002000022ec8c>] do_domctl+0x6d0/0xfa4
(XEN)    [<000002000027121c>] traps.c#do_trap_hypercall+0x280/0x34c
(XEN)    [<0000020000272134>] do_trap_guest_sync+0x448/0x5c4
(XEN)    [<0000020000255c60>] entry.o#guest_sync_slowpath+0xa4/0xd4
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'npc->unit == unit' failed at common/sched/null.c:379
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
---
 xen/common/sched/null.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index 65a0a6c5312d..71a83c4fb1ad 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -376,7 +376,14 @@ static bool unit_deassign(struct null_private *prv, const struct sched_unit *uni
     struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(list_empty(&null_unit(unit)->waitq_elem));
-    ASSERT(npc->unit == unit);
+
+    if ( !npc->unit )
+    {
+        dprintk(XENLOG_G_INFO, "%d <-- NULL (%pdv%d)\n", cpu, unit->domain,
+                unit->unit_id);
+        return false;
+    }
+
     ASSERT(!cpumask_test_cpu(cpu, &prv->cpus_free));
 
     npc->unit = NULL;
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon Apr 24 23:40:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Apr 2023 23:40:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525664.816968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr5mm-0007de-Cw; Mon, 24 Apr 2023 23:39:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525664.816968; Mon, 24 Apr 2023 23:39:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr5mm-0007dX-9Y; Mon, 24 Apr 2023 23:39:56 +0000
Received: by outflank-mailman (input) for mailman id 525664;
 Mon, 24 Apr 2023 23:39:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HiTc=AP=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pr5mk-0007dR-2g
 for xen-devel@lists.xenproject.org; Mon, 24 Apr 2023 23:39:54 +0000
Received: from wout3-smtp.messagingengine.com (wout3-smtp.messagingengine.com
 [64.147.123.19]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4c7dee5b-e2f9-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 01:39:49 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id D1A32320085B;
 Mon, 24 Apr 2023 19:39:45 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Mon, 24 Apr 2023 19:39:46 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 24 Apr 2023 19:39:43 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c7dee5b-e2f9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:message-id:mime-version:reply-to:sender:subject:subject:to:to;
	 s=fm3; t=1682379585; x=1682465985; bh=qbUiaxfm8UFf7/Yjc2NC8nBUI
	2byOOEqiabdUu9e3WQ=; b=axzkRcf2tyrGoSe5HbeFOq0VEVnPc+dLM7euLhGyD
	fOpzSWKNk+bB5yTGN1O4UY5/hVxHEX2b/8/gkK5/cKWGipzwb32ch7cM4EuT/8tw
	a7Lgx7uPA3K1CvC0oEqtkhIPf4mG/Z0VLYA/U7ZbCtn9at+lUkn8j9jOmIXwM+j6
	BPj9+gTUz7qF0qLzS1S1pbsuq8NWOrCdzRyRsLHoJK7VVf07+FB9WZ9wuix8qKnP
	/JD7zq9jubKhd5rUfITM5x/6k6HDKHOJS+GdRfO6qMrqLgbdxA45mhYme4/hTyzo
	K4NtdI7yM6mbpl37e36tulslE7xe6fjnFJ8SwEx6Czfgw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:message-id:mime-version:reply-to:sender
	:subject:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm3; t=1682379585; x=1682465985; bh=q
	bUiaxfm8UFf7/Yjc2NC8nBUI2byOOEqiabdUu9e3WQ=; b=T3JjEa8q52HJbz1Dm
	z7kgBweJUW4VTo9Hl9bpsS96Gy08zSiqc3BAfNMIX6x2brQHqP/ybIPwBauSPb9m
	fRSHiaNIiPK6WB18k4g0Cj0CjBzbnQjf8pmTGMdfZaDbePy2PnX7u8h36/565Dry
	0IlT7rOsTrSSLER92pniLR/fFlN3J0wisEw+JNjTDJ1OaVBsEpANJEJ78w+MflaB
	o2M80D8XmPpZzPzxnoJBszg9ss/YXfqblP98dI9QTLXys6TVPya4/ELjcOXtmRiK
	1PawCLefIW+uDeCnUCW75m0cigWYssfhiWR0cqlSCJAFtW7VzUD3MmBstnq5vW/c
	IpbUg==
X-ME-Sender: <xms:QRNHZAqTG2n0VzQRIupHzx-krgaDrN83Za11MsqvchmWspedGujuUw>
    <xme:QRNHZGqzBgjLKc21kCHooAEVrNSbYZPmRyeKiw-HII0irRiXMKjmf6ilGDAoIZyF0
    AKPH5uNBAxLIg>
X-ME-Received: <xmr:QRNHZFP4I7lDH1JJvIY0xUmDsM3XF6xhEenB1O_3bX5YY_-O8TwMtONEePsJBlwfv6puJAsgtYhojuHbExqLIi4nY0t8dSWV_Tlu6BLT1EeNC_qSbwJO>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeduuddgvdeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofggtgfgsehtkeertdertdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeelkefh
    udelteelleelteetveeffeetffekteetjeehlefggeekleeghefhtdehvdenucevlhhush
    htvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhes
    ihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:QRNHZH650lDbdW7zDDuT8bbBnk3CD6H6FWVmE3z4-Z4C2wRHD0-tIg>
    <xmx:QRNHZP7yyzxXl53KriApnoVep8q_LtX6dXtSDDca99sRxb__Vs1hkA>
    <xmx:QRNHZHhGNebF-sDFBCkMecgIUt1c0BkO-etHfALB_CiCkq76w7N-Jg>
    <xmx:QRNHZFRSM8yOt0Omlba4rPTTBiYTYiGealqVeGUg-cfHkxc3KcfEfg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] ns16550: enable memory decoding on MMIO-based PCI console card
Date: Tue, 25 Apr 2023 01:39:30 +0200
Message-Id: <20230424233930.129621-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

pci_serial_early_init() enables PCI_COMMAND_IO for I/O-based UART
devices; do the same with PCI_COMMAND_MEMORY for MMIO-based UART
devices.
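
For context (not part of the patch): the PCI command register's bit 0
enables I/O decoding and bit 1 enables memory decoding, and the patch
distinguishes the two cases by whether io_base fits in the 64 KiB I/O
port space. A standalone sketch of that decision (the io_base value is
a placeholder MMIO BAR address, not from the patch):

```shell
# Bit 0 of the PCI command register enables I/O decoding, bit 1 enables
# memory decoding. I/O ports live below 0x10000, so an io_base at or
# above that must be an MMIO BAR.
PCI_COMMAND_IO=0x1
PCI_COMMAND_MEMORY=0x2
io_base=0xfe041000   # placeholder MMIO BAR address
if [ $(( io_base >= 0x10000 )) -eq 1 ]; then
    echo "MMIO UART: set PCI_COMMAND_MEMORY ($PCI_COMMAND_MEMORY)"
else
    echo "port I/O UART: set PCI_COMMAND_IO ($PCI_COMMAND_IO)"
fi
```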

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
This fixes the issue I was talking about on #xendevel. Thanks Roger for
the hint.
---
 xen/drivers/char/ns16550.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 1b21eb93c45f..acfcce1c5d72 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -272,9 +272,17 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
 static void pci_serial_early_init(struct ns16550 *uart)
 {
 #ifdef NS16550_PCI
-    if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
+    if ( !uart->ps_bdf_enable )
         return;
 
+    if ( uart->io_base >= 0x10000 )
+    {
+        pci_conf_write16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
+                                  uart->ps_bdf[2]),
+                         PCI_COMMAND, PCI_COMMAND_MEMORY);
+        return;
+    }
+
     if ( uart->pb_bdf_enable )
         pci_conf_write16(PCI_SBDF(0, uart->pb_bdf[0], uart->pb_bdf[1],
                                   uart->pb_bdf[2]),
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 00:05:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 00:05:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525670.816977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr6BA-00039a-PH; Tue, 25 Apr 2023 00:05:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525670.816977; Tue, 25 Apr 2023 00:05:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr6BA-00039T-Mi; Tue, 25 Apr 2023 00:05:08 +0000
Received: by outflank-mailman (input) for mailman id 525670;
 Tue, 25 Apr 2023 00:05:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rxgq=AQ=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pr6B9-00039N-3D
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 00:05:07 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d456b208-e2fc-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 02:05:04 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 5A73E6204F;
 Tue, 25 Apr 2023 00:05:03 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 28FC7C433D2;
 Tue, 25 Apr 2023 00:05:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d456b208-e2fc-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682381102;
	bh=UZdlInztLLntipbzj0YrWRQPOVmFI//JpMrR4I/LIKs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=G70KWCHQyPfk+KXPUwY/Xi4WBformaSxQCIqglkk4Bkp05EhJG81PXwVVEaYQO4jr
	 f4HU0HpCiJyzb8b9swAcev7HY0Cxc+lVoS1sGQDNa6rCDAzwtgi0D3zqq8tv2U4/8a
	 u0pUNPwr357fzxXyaA6jGqD4fkbviqOHRcP7ZZ6weUQfcZ6J2AJooj4fAtYl5J5wZN
	 UgeKkUmkUUUcDq/VVSAh7UryQ0MPWGIiF7+Kx67ljJP3kKjn07qfSLRNVTPS//RsBJ
	 hO9EgxM31PZ5QZ5eHuNTKfxR6hUz6Q7meDHGxZCVhuJ8kUOTXEKcVDa6MqeTTDURjH
	 kyr7jcwDQHMxA==
Date: Mon, 24 Apr 2023 17:05:00 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH 1/6] automation: specify explicit dom0 mem size for ADL
 tests
In-Reply-To: <acf4e8fbb74715335a08bf2a5a1008a117fec65f.1682369736.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2304241704540.3419@ubuntu-linux-20-04-desktop>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com> <acf4e8fbb74715335a08bf2a5a1008a117fec65f.1682369736.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-524689961-1682381102=:3419"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-524689961-1682381102=:3419
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Mon, 24 Apr 2023, Marek Marczykowski-Górecki wrote:
> Avoid memory fragmentation that leads to:
> (XEN) common/memory.c:277:d0v10 Could not allocate order=9 extent: id=1 memflags=0xc0 (0 of 4)
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/scripts/qubes-x86-64.sh | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
> index 2d4cf2e2268c..916dbaae59c3 100755
> --- a/automation/scripts/qubes-x86-64.sh
> +++ b/automation/scripts/qubes-x86-64.sh
> @@ -123,7 +123,7 @@ TFTP=/scratch/gitlab-runner/tftp
>  CONTROLLER=control@thor.testnet
>  
>  echo '
> -multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all
> +multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all dom0_mem=4G
>  module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0
>  module2 (http)/gitlab-ci/initrd-dom0
>  ' > $TFTP/grub.cfg
> -- 
> git-series 0.9.1
> 
--8323329-524689961-1682381102=:3419--


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 00:05:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 00:05:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525672.816987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr6Ba-0003Zn-1U; Tue, 25 Apr 2023 00:05:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525672.816987; Tue, 25 Apr 2023 00:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr6BZ-0003Zg-V1; Tue, 25 Apr 2023 00:05:33 +0000
Received: by outflank-mailman (input) for mailman id 525672;
 Tue, 25 Apr 2023 00:05:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rxgq=AQ=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pr6BY-00039N-NE
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 00:05:32 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e3eb781e-e2fc-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 02:05:31 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id DEB3C62A5F;
 Tue, 25 Apr 2023 00:05:29 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AD6AAC433D2;
 Tue, 25 Apr 2023 00:05:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3eb781e-e2fc-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682381129;
	bh=tkCFMyrSnqU8AEP8i6iAq1EIpo3gAx4yz8CUZwj2ZCA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=XlWkZVvYtm4lJ+Is46VO/xyh5p+c5mHCjhUjnN4xfSb4vYAildg+VWAAWThAEJjlv
	 Q1wUx5b8Xlb6vpCQt/Wyfn1TxhXPPNtHhx6G30Qmy5yR9uNAGYBE/nF4ZgzK2aXOVX
	 2VzHMjc/fDmAGwc0xeZ7vhtHzTFq9+HtnwViWTr/3u/Gfw0KcAtd+UrI7sxYd2oPuY
	 tvXbtgfafGdDipqbP3D71j3tPEIwIsRMXrZct2tbkkqWrsFnA+/t5mVfwuUz7eLvz1
	 7SGi45uORFD2S/Ug2i51ZHsWPizmb1nMpU79ZInxOc+JQgS79t8VYWo9oB8uWN5PV6
	 FcIz+deO5kafQ==
Date: Mon, 24 Apr 2023 17:05:27 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH 2/6] automation: add runtime qemu dependencies to test
 container
In-Reply-To: <a2f2c836e7f444d733f8ce4c1c23fc6be1dc7726.1682369736.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2304241705150.3419@ubuntu-linux-20-04-desktop>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com> <a2f2c836e7f444d733f8ce4c1c23fc6be1dc7726.1682369736.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-813443502-1682381120=:3419"
Content-ID: <alpine.DEB.2.22.394.2304241705260.3419@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-813443502-1682381120=:3419
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2304241705261.3419@ubuntu-linux-20-04-desktop>

On Mon, 24 Apr 2023, Marek Marczykowski-Górecki wrote:
> This is necessary to start HVM guests in subsequent tests.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/tests-artifacts/alpine/3.12.dockerfile | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/automation/tests-artifacts/alpine/3.12.dockerfile b/automation/tests-artifacts/alpine/3.12.dockerfile
> index b3909996b47b..073f16a0d70a 100644
> --- a/automation/tests-artifacts/alpine/3.12.dockerfile
> +++ b/automation/tests-artifacts/alpine/3.12.dockerfile
> @@ -13,6 +13,7 @@ RUN \
>    \
>    # xen runtime deps
>    apk add musl && \
> +  apk add libgcc && \
>    apk add openrc && \
>    apk add busybox && \
>    apk add sudo && \
> -- 
> git-series 0.9.1
> 
--8323329-813443502-1682381120=:3419--


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 00:30:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 00:30:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525678.816998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr6Zc-00073k-Tg; Tue, 25 Apr 2023 00:30:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525678.816998; Tue, 25 Apr 2023 00:30:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr6Zc-00073d-Qi; Tue, 25 Apr 2023 00:30:24 +0000
Received: by outflank-mailman (input) for mailman id 525678;
 Tue, 25 Apr 2023 00:30:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rxgq=AQ=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pr6Zc-00073X-9W
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 00:30:24 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5ce28327-e300-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 02:30:22 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D403A62587;
 Tue, 25 Apr 2023 00:30:20 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9F62BC433EF;
 Tue, 25 Apr 2023 00:30:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ce28327-e300-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682382620;
	bh=X681ZDG2qL6aCN+Sjd3jm0RF4QgyjGLRKVWVzeLLYp8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=BYXuncdSzsQPGWPRgWVXdYWU8lGShdQfIAfCmR68Y/bOvNTauzQ6K5IW64j7WCAfP
	 xNn4mUFhR5pmF29J6y+kItgT8wxtggyj+ffszNid9H/joyO0V/T4qY5bU99EH+UT2f
	 OVFepdy9Orvuw8KtMOmsV5GWppqlsugqSzr7FU6yDddgMqKvF8OrVoFoZqjBTsU/ic
	 XGpPh7gRVH26M2GIxsgcIZYH3/LGXCmcgnIyiFNHJmJEHJPHvvb6CxP1yfCA9wAxmr
	 2CEutfye6v0oeVldmgkw1/7bshHk6HcbRmB8qh7M0GhfcOu5uyOWcfqkzfiPMGJRhc
	 JZ5be6QjBzCbQ==
Date: Mon, 24 Apr 2023 17:30:18 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH 4/6] automation: wait for the login prompt as test end
 marker
In-Reply-To: <8bdae473db12295b536680283820eb18a7dbd911.1682369736.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2304241717360.3419@ubuntu-linux-20-04-desktop>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com> <8bdae473db12295b536680283820eb18a7dbd911.1682369736.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1035251367-1682381865=:3419"
Content-ID: <alpine.DEB.2.22.394.2304241730150.3419@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1035251367-1682381865=:3419
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2304241730151.3419@ubuntu-linux-20-04-desktop>

On Mon, 24 Apr 2023, Marek Marczykowski-Górecki wrote:
> The login prompt is printed after all the startup (test) scripts, so
> wait for that instead of the "passed" marker, and only then check
> whether the test passed. Before this patch there was a race: the
> "passed" marker could already be printed, but the final check would
> fail because the login prompt wasn't there yet.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/scripts/qubes-x86-64.sh | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
> index 916dbaae59c3..c0bc71764f73 100755
> --- a/automation/scripts/qubes-x86-64.sh
> +++ b/automation/scripts/qubes-x86-64.sh
> @@ -159,7 +159,7 @@ if [ -n "$wait_and_wakeup" ]; then
>      ssh $CONTROLLER wake
>  fi
>  
> -until grep "$passed" smoke.serial || [ $timeout -le 0 ]; do
> +until grep "^Welcome to Alpine Linux" smoke.serial || [ $timeout -le 0 ]; do
>      sleep 1;
>      : $((--timeout))
>  done
> -- 
> git-series 0.9.1
> 
--8323329-1035251367-1682381865=:3419--
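[For illustration: the polling logic the patch above modifies can be sketched as a self-contained POSIX sh function. The file name, marker, and timeout below are examples, not the script's real configuration.]

```shell
#!/bin/sh
# Poll a log file until a marker line appears or a timeout (seconds) expires.
# Returns 0 if the marker was seen, 1 on timeout.
wait_for_marker() {
    log="$1"; marker="$2"; timeout="$3"
    until grep -q "$marker" "$log" || [ "$timeout" -le 0 ]; do
        sleep 1
        timeout=$((timeout - 1))
    done
    # Re-check so a timeout without the marker reports failure.
    grep -q "$marker" "$log"
}

# Illustrative usage with a temporary log file.
log=$(mktemp)
echo "Welcome to Alpine Linux 3.12" >> "$log"
if wait_for_marker "$log" "^Welcome to Alpine Linux" 5; then
    echo "login prompt seen"
fi
rm -f "$log"
```

[Note the deliberate ordering: waiting for the login prompt (the last thing printed at boot) before checking any "passed" marker avoids the race described in the commit message.]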


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 00:39:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 00:39:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525682.817008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr6iM-0007iD-PG; Tue, 25 Apr 2023 00:39:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525682.817008; Tue, 25 Apr 2023 00:39:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr6iM-0007i6-Ls; Tue, 25 Apr 2023 00:39:26 +0000
Received: by outflank-mailman (input) for mailman id 525682;
 Tue, 25 Apr 2023 00:39:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rxgq=AQ=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pr6iL-0007i0-2q
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 00:39:25 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9f60f5be-e301-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 02:39:23 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id ED13662A75;
 Tue, 25 Apr 2023 00:39:21 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C037FC433EF;
 Tue, 25 Apr 2023 00:39:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f60f5be-e301-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682383161;
	bh=W1UJTAecGf24uflNB+IwAZn61OUMilNv7ksRzV5lFeY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ldGJZQSfXT8GGfCQFFf+UpZzWtZ9t/49HjlGnS/JW5/TGGkfJWYiQ2ovXbHeUded0
	 /GNPQD2FqfbJ/MzEtx4ftXjiGQ1B+ueEc5Ph4tbssHtETCIZv3BvII1M7W19vGTkJF
	 9/mucdXa03YqiIQ26bPR/spqEa/D2fjsFUlc/3Jj+7IC88Lh2VOMk9IC4PVBn+PoAE
	 HCRjGgPdVKbATPSD6DFpx66KBXqhJajQopXoMBADvbwT2O6PJ+yCdArjmu9PD27IgP
	 lMP9XZyieUhyC/EWA3hxoS/h1eeeM+fzVXylJadloiAX8v03JCjQbArir19stF9NaC
	 V2S51WiPCN1ug==
Date: Mon, 24 Apr 2023 17:39:19 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH 6/6] automation: include tail of serial log in the gitlab
 output
In-Reply-To: <a3d33c869b7fcf4f72047daa4dcbcf4ff97143c3.1682369736.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2304241718120.3419@ubuntu-linux-20-04-desktop>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com> <a3d33c869b7fcf4f72047daa4dcbcf4ff97143c3.1682369736.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1805575572-1682382124=:3419"
Content-ID: <alpine.DEB.2.22.394.2304241730230.3419@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1805575572-1682382124=:3419
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2304241730231.3419@ubuntu-linux-20-04-desktop>

On Mon, 24 Apr 2023, Marek Marczykowski-Górecki wrote:
> Make it a bit easier to see what has failed.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
>  automation/scripts/qubes-x86-64.sh | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
> index 6442f7dda515..9b89d90f653c 100755
> --- a/automation/scripts/qubes-x86-64.sh
> +++ b/automation/scripts/qubes-x86-64.sh
> @@ -228,5 +228,6 @@ fi
>  
>  sleep 1
>  
> +tail -n 100 smoke.serial
>  (grep -q "^Welcome to Alpine Linux" smoke.serial && grep -q "${passed}" smoke.serial) || exit 1
>  exit 0

Isn't smoke.serial already in stdout and also part of the artifacts? The
user can always click on the full output or on the smoke.serial file
among the artifacts. Maybe the issue is that it is called ".serial"
instead of ".txt", so the browser will not open it directly in a window.
If we rename it to ".txt" the user could just click on "artifacts" and
then on "serial.txt" and it would all be there.

100 lines is not much, but I think in general it is better if we make it
easier to access smoke.serial in its entirety instead.
--8323329-1805575572-1682382124=:3419--


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 00:53:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 00:53:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525686.817018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr6vC-0001cl-To; Tue, 25 Apr 2023 00:52:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525686.817018; Tue, 25 Apr 2023 00:52:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr6vC-0001ce-Qx; Tue, 25 Apr 2023 00:52:42 +0000
Received: by outflank-mailman (input) for mailman id 525686;
 Tue, 25 Apr 2023 00:52:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHzL=AQ=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pr6vB-0001cW-8K
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 00:52:41 +0000
Received: from wout3-smtp.messagingengine.com (wout3-smtp.messagingengine.com
 [64.147.123.19]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 78e5f1e0-e303-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 02:52:38 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id 8D4993200495;
 Mon, 24 Apr 2023 20:52:35 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Mon, 24 Apr 2023 20:52:35 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 24 Apr 2023 20:52:34 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78e5f1e0-e303-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1682383955; x=1682470355; bh=4UNFZP9kqDVw+Elbus8NO1IbwHvk8fPFHGj
	+YDTr5Ok=; b=bq3q0GsU35KUBPRGS3uzB0PTd7696MIJtoITC0jt0fJNovyhmJs
	j1jGtasH/C/cEJebQuOk9PeVdd87qzs6IErzKrTs+5quGgJWHUH4fuYKrAogQqG7
	edZvJCy3sa4+m3FR03nmkoVpUSMps5q0q2K4x1jSJl41qlzf8r+4Cwy2U2NntqNl
	om4am7NcK+dFDdG2F/QVHOPSRsYq0ZZZTk3etr2ZeH+choOgAekrWPviSwvIDnfR
	5D/6Y9q864amMpSfyUj1V5tyIraCZIT6J54pUJ1YlFyjwakb4+P/QLB6MQFUhxop
	gOwjkb/qcLuuhhtYGbOfNKJ7OgN18FtdpEA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1682383955; x=1682470355; bh=4UNFZP9kqDVw+
	Elbus8NO1IbwHvk8fPFHGj+YDTr5Ok=; b=P3XN88dHM4ZPfjR9XC2p8J7rmmIQ0
	S2hlVe2Dpt7uO6bUC6Ucmh9kYbc1Q5iLmPGLWP36n0TEZjmKNQ6eId/qSF6LdlWk
	CVWvRQn517yfXfaKt7DNMLegg55tQ2LkNbWwfgnceAxsq72EMWt9eBBH1NGfgiBp
	01i099v63IXHtouSXEvpMCIByzXtUkkiiZBmnLil1QhY6OdixhXMzpgijIHVAsIG
	l+cXSPf2GsA89cN4PZOnmxvtN3vrJxeIrEbFDIirIiqEBAU0m7Bi/8A59y5vOIy9
	HrhfMKGRLj17Fsyv3Bt3BadCyiBIQ/1OKx/GGRaHsmf61LaIYol95d5WQ==
X-ME-Sender: <xms:UiRHZDycxzqYyz9svoI_BG29w0j0197nW8ZgATteVNnNxAdI48bTsw>
    <xme:UiRHZLSJobTk626Gs9QgHcRr4a09E0ONiiKmxnwl6mLc4mpla3vrQlEBftrj4FOXz
    ojuMJkPjH1dqA>
X-ME-Received: <xmr:UiRHZNWxzwuSO9CZtWG1IYfRCXsl4Gr3UQwyIWnq4max3oYRj8BTTBnES-VqlMUkIZCiVggqK-MbM7_rlktwIzhx7TXJ4q79KfE>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeduuddggeduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:UyRHZNiBc4z1RHN9JnZeOYHvs9lWXUv0eGbVPF6ZXvCzTedAaqNJPA>
    <xmx:UyRHZFAdOtOydd5rKjlPCU46T40Em_iRKSQEmYHMVI5aMucfabpKVQ>
    <xmx:UyRHZGL3ugzlGX7z6oU57md36kZFVqbcOVv83eswOdiiBsSvh3JjmQ>
    <xmx:UyRHZIpUT6yUCX55iQZV1zuSrUe9nKfiEN8M6BNptDt91PmXwzDPkw>
Feedback-ID: i1568416f:Fastmail
Date: Tue, 25 Apr 2023 02:52:31 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH 6/6] automation: include tail of serial log in the gitlab
 output
Message-ID: <ZEckT63hoBPm/Ss3@mail-itl>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
 <a3d33c869b7fcf4f72047daa4dcbcf4ff97143c3.1682369736.git-series.marmarek@invisiblethingslab.com>
 <alpine.DEB.2.22.394.2304241718120.3419@ubuntu-linux-20-04-desktop>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="vUtS7quP9sWuIv9L"
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.22.394.2304241718120.3419@ubuntu-linux-20-04-desktop>


--vUtS7quP9sWuIv9L
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
Date: Tue, 25 Apr 2023 02:52:31 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH 6/6] automation: include tail of serial log in the gitlab
 output

On Mon, Apr 24, 2023 at 05:39:19PM -0700, Stefano Stabellini wrote:
> On Mon, 24 Apr 2023, Marek Marczykowski-Górecki wrote:
> > Make it a bit easier to see what has failed.
> > 
> > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > ---
> >  automation/scripts/qubes-x86-64.sh | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
> > index 6442f7dda515..9b89d90f653c 100755
> > --- a/automation/scripts/qubes-x86-64.sh
> > +++ b/automation/scripts/qubes-x86-64.sh
> > @@ -228,5 +228,6 @@ fi
> >  
> >  sleep 1
> >  
> > +tail -n 100 smoke.serial
> >  (grep -q "^Welcome to Alpine Linux" smoke.serial && grep -q "${passed}" smoke.serial) || exit 1
> >  exit 0
> 
> Isn't smoke.serial already in stdout and also part of the artifacts? The
> user can always click on the full output or on the smoke.serial file
> among artifacts.  Maybe the issue is that it is called ".serial" instead
> of ".txt" so the browser will not try to open it directly in a browser
> window. If we rename it to ".txt" the user could just click on
> "artifacts" and then on "serial.txt" and it would be all there.
> 
> 100 lines is not much, but I think in general it is better if we make it
> easier to access smoke.serial in its entirety instead.

Yes, you can click on it to get it in full. But that's two extra clicks,
and if the thing that matters is just that panic message at the end, you
can also have it right in the job preview.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--vUtS7quP9sWuIv9L
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRHJFAACgkQ24/THMrX
1yzRywf/dxt7WmWAF3GoT6qvr02OD9nqm8EkDGDqqlnesXRF6yNhr0fIIF7VqlT2
dcTbCPgakmNvAElvyJl/8eTk9DTeO5nPD9vMCAQhwrcPewW93fcuHd4rgpDvXnAK
4GO9NCCKrwKqgg58VOsi1LbmWl4Of92ExedPGHr2U5mY7daLdeOu86GtQ1bNRLZe
86sz3adAe5BKLGaibZOq+6TRJI0VlRjNsUsio2K6vQxbd7RHCPdRd8HM7t3P+tnO
LO3z4/V2BtpBCQTKiWesFyqwZOIKcP3dSx6JII5aaKq+Jh3CVT+YbZMUIwCjniTE
tBIOZ7Tohxqzu07tiD/42wCsfhxanw==
=DKjK
-----END PGP SIGNATURE-----

--vUtS7quP9sWuIv9L--


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 01:31:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 01:31:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525694.817028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr7Wr-0004I6-1J; Tue, 25 Apr 2023 01:31:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525694.817028; Tue, 25 Apr 2023 01:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr7Wq-0004Hz-UP; Tue, 25 Apr 2023 01:31:36 +0000
Received: by outflank-mailman (input) for mailman id 525694;
 Tue, 25 Apr 2023 01:31:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pr7Wq-0004Hp-Au; Tue, 25 Apr 2023 01:31:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pr7Wp-0003Pj-Vd; Tue, 25 Apr 2023 01:31:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pr7Wp-0007PV-D0; Tue, 25 Apr 2023 01:31:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pr7Wp-0003El-CU; Tue, 25 Apr 2023 01:31:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DhWmLiUFWTL3idN4ZLR4xHbTq2/MT2s78G2dLL3mzL4=; b=dNvl4wQ0eNZ7JkM8q4fRTmyR6W
	D1HW+W1yKJHYmbAkagR3xEy8ErsSNX8zTPXZt4+4yuT+3VHDcuePGugjgNGmxKPI3oprQAtpFDuq1
	8XKjN0frioQQ/B7ks6qrhX+Z5zd49RYNdnjvkvDMil9U6vxVcxy/Ees41lD3Zd4SRAzU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180395-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180395: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=457391b0380335d5e9a5babdec90ac53928b23b4
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Apr 2023 01:31:35 +0000

flight 180395 linux-linus real [real]
flight 180399 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180395/
http://logs.test-lab.xenproject.org/osstest/logs/180399/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                457391b0380335d5e9a5babdec90ac53928b23b4
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    8 days
Failing since        180281  2023-04-17 06:24:36 Z    7 days   14 attempts
Testing same since   180390  2023-04-23 23:41:41 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Alan Liu <HaoPing.Liu@amd.com>
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Hung <alex.hung@amd.com>
  Alexander Aring <aahringo@redhat.com>
  Alexander Potapenko <glider@google.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Alexei Starovoitov <ast@kernel.org>
  Alexis Lothoré <alexis.lothore@bootlin.com>
  Andrea Righi <andrea.righi@canonical.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Chi <andy.chi@canonical.com>
  Arnd Bergmann <arnd@arndb.de>
  Asahi Lina <lina@asahilina.net>
  Axel Lin <axel.lin@ingics.com>
  Baokun Li <libaokun1@huawei.com>
  Baoqi Zhang <zhangbaoqi@loongson.cn>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Bhavya Kapoor <b-kapoor@ti.com>
  Bjorn Andersson <andersson@kernel.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Boris Burkov <boris@bur.io>
  Borislav Petkov (AMD) <bp@alien8.de>
  Brian Masney <bmasney@redhat.com>
  Chancel Liu <chancel.liu@nxp.com>
  Chen Aotian <chenaotian2@163.com>
  Chong Qiao <qiaochong@loongson.cn>
  Chris Morgan <macromorgan@hotmail.com>
  Christoph Paasch <cpaasch@apple.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Conor Dooley <conor.dooley@microchip.com>
  Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
  Dan Carpenter <dan.carpenter@linaro.org>
  Dan Carpenter <error27@gmail.com>
  Dan Johansen <strit@manjaro.org>
  Daniel Baluta <daniel.baluta@nxp.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Miess <Daniel.Miess@amd.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Airlie <airlied@redhat.com>
  David Gow <davidgow@google.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Ding Hui <dinghui@sangfor.com.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Donald Hunter <donald.hunter@gmail.com>
  Dragan Simic <dragan.simic@gmail.com>
  Duoming Zhou <duoming@zju.edu.cn>
  Ekaterina Orlova <vorobushek.ok@gmail.com>
  Enze Li <lienze@kylinos.cn>
  Fabio Estevam <festevam@denx.de>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gwangun Jung <exsociety@gmail.com>
  Heiko Stuebner <heiko@sntech.de>
  hrdl <git@hrdl.eu>
  Huacai Chen <chenhuacai@loongson.cn>
  Ido Schimmel <idosch@nvidia.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jani Nikula <jani.nikula@intel.com>
  Jaroslav Kysela <perex@perex.cz>
  Jason Wang <jasowang@redhat.com>
  Javier Martinez Canillas <javierm@redhat.com>
  Jay Vosburgh <jay.vosburgh@canonical.com>
  Jens Axboe <axboe@kernel.dk>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jianqun Xu <jay.xu@rock-chips.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingbo Xu <jefflexu@linux.alibaba.com>
  Johan Hovold <johan+linaro@kernel.org>
  John Ogness <john.ogness@linutronix.de>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Toppins <jtoppins@redhat.com>
  JR Gonzalez <jrg@scientiam.org>
  Jules Maselbas <jmaselbas@kalray.eu>
  Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
  Karol Herbst <kherbst@redhat.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Li Lanzhe <u202212060@hust.edu.cn>
  Liam R. Howlett <Liam.Howlett@oracle.com>
  Liang He <windhl@126.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Long Wang <long.wang@analog.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Luben Tuikov <luben.tuikov@amd.com>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Rodriguez Reboredo <yakoyoku@gmail.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Mat Martineau <martineau@kernel.org>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Matthieu Baerts <matthieu.baerts@tessares.net>
  Mel Gorman <mgorman@suse.de>
  Mel Gorman <mgorman@techsingularity.net>
  Michael Chan <michael.chan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Michal Simek <michal.simek@amd.com>
  Miguel Ojeda <ojeda@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Naama Meir <naamax.meir@linux.intel.com>
  Naoya Horiguchi <naoya.horiguchi@nec.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Oliver Upton <oliver.upton@linux.dev>
  Ondrej Mosnacek <omosnace@redhat.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrick Blass <patrickblass@mailbox.org>
  Paulo Alcantara (SUSE) <pc@manguebit.com>
  Paulo Alcantara <pc@manguebit.com>
  Pedro Tammela <pctammela@mojatatu.com>
  Peng Fan <peng.fan@nxp.com>
  Peng Zhang <zhangpeng.00@bytedance.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Peter Xu <peterx@redhat.com>
  Petr Machata <petrm@nvidia.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qi Zheng <zhengqi.arch@bytedance.com>
  Qing Zhang <zhangqing@loongson.cn>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Ricardo Pardini <ricardo@pardini.net>
  Rob Herring <robh@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Scott Mayhew <smayhew@redhat.com>
  Sebastian Basierski <sebastianx.basierski@intel.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Seiji Nishikawa <snishika@redhat.com>
  SeongJae Park <sj@kernel.org>
  Shakeel Butt <shakeelb@google.com>
  Shawn Guo <shawnguo@kernel.org>
  Shengjiu Wang <shengjiu.wang@gmail.com>
  Soumya Negi <soumya.negi97@gmail.com>
  Steev Klimaszewski <steev@kali.org> #Thinkpad X13s
  Steve Chou <steve_chou@pesi.com.tw>
  Steve French <stfrench@microsoft.com>
  Sudeep Holla <sudeep.holla@arm.com>
  syzbot+a7c1ec5b1d71ceaa5186@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Bamelis <thomas@bamelis.dev>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vadim Fedorenko <vadfed@meta.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vitaly Prosyak <vitaly.prosyak@amd.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Will Deacon <will@kernel.org>
  William Breathitt Gray <william.gray@linaro.org>
  Willy Tarreau <w@1wt.eu>
  Woody Suwalski <terraluna977@gmail.com>
  Xu Yilun <yilun.xu@intel.com>
  Xuan Zhuo <xuanzhuo@linux.alibaba.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5589 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 02:57:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 02:57:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525701.817038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr8ry-0004ge-DE; Tue, 25 Apr 2023 02:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525701.817038; Tue, 25 Apr 2023 02:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr8ry-0004gW-8N; Tue, 25 Apr 2023 02:57:30 +0000
Received: by outflank-mailman (input) for mailman id 525701;
 Tue, 25 Apr 2023 02:57:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rxgq=AQ=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pr8rx-0004gO-JP
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 02:57:29 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e7410efb-e314-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 04:57:25 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 11E3662B20;
 Tue, 25 Apr 2023 02:57:23 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8A80BC4339B;
 Tue, 25 Apr 2023 02:57:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7410efb-e314-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682391442;
	bh=M4r5vmYkLXyRQc5/0qxxQmnvwfhEm4K9QtE/5QDP5+c=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QYv2LcKxpL4wKXCBDVjgf51cqTF9teFWCuFV+0K0YzXTBrheSje+sFomKqeiZ/CVO
	 /qVori92nD6HttkqupLBoMhBoekahWBPnv02SPVuUpsa/Svfr5zzudRdNKTXWFkpKE
	 d8GEyotZaf8EbBWWg5h7C6cwmFCzVfvd4hEBtzurxZnc31qocZiE2LGcqmilNvUp7a
	 EnCiRt4FnrHg8MnOguovO8rDfK2XWAchVA4UD9t5gDUPxytqyoZL6ykVe95I4nh+Ke
	 929pqGAXi8X3XNt6Q+9N76Dn7QcoxODS1Z2dm/cYr60oVWGEFyL1CeDsLWdORM9Yl7
	 K+nDbwuh2uLlA==
Date: Mon, 24 Apr 2023 19:57:19 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH 3/6] automation: re-enable building SeaBIOS in Alpine
 container
In-Reply-To: <f28aa73c1db56ccfce23c408283af28195b5eac2.1682369736.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2304241717050.3419@ubuntu-linux-20-04-desktop>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com> <f28aa73c1db56ccfce23c408283af28195b5eac2.1682369736.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1250059942-1682381831=:3419"
Content-ID: <alpine.DEB.2.22.394.2304241730280.3419@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1250059942-1682381831=:3419
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2304241730281.3419@ubuntu-linux-20-04-desktop>

On Mon, 24 Apr 2023, Marek Marczykowski-Górecki wrote:
> It seems to build just fine with Alpine 3.12, and SeaBIOS is necessary
> for an HVM test (the one that uses the Alpine build).
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/scripts/build | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/automation/scripts/build b/automation/scripts/build
> index 7d1b19c4250d..d830cff7b7c7 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -77,8 +77,6 @@ else
>      if ldd /bin/ls | grep -q musl; then
>          # disable --disable-werror for QEMUU when building with MUSL
>          cfgargs+=("--with-extra-qemuu-configure-args=\"--disable-werror\"")
> -        # SeaBIOS doesn't build on MUSL systems
> -        cfgargs+=("--with-system-seabios=/bin/false")
>      fi
>  
>      # Qemu requires Python 3.5 or later, and ninja
> -- 
> git-series 0.9.1
> 
--8323329-1250059942-1682381831=:3419--


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 03:01:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 03:01:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525705.817048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr8wC-00067l-TQ; Tue, 25 Apr 2023 03:01:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525705.817048; Tue, 25 Apr 2023 03:01:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pr8wC-00067e-Qe; Tue, 25 Apr 2023 03:01:52 +0000
Received: by outflank-mailman (input) for mailman id 525705;
 Tue, 25 Apr 2023 03:01:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rxgq=AQ=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pr8wC-00067U-2E
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 03:01:52 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 85d265a4-e315-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 05:01:50 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 6079062086;
 Tue, 25 Apr 2023 03:01:49 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1379EC433D2;
 Tue, 25 Apr 2023 03:01:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85d265a4-e315-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682391708;
	bh=uSCcKvz0LNOXa/OJdwnoYYZ6X+aEYAiIBlARA18YdrY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=FBWTKohnBRYco4Ht9zef7fePIiDpc8+Qf6g7tmBD3D3uyoKfBdVsakGYliJJ/+eAN
	 NFDrTZHkTibWrbRKtJplBWaMS/f9UQ4wPHJ6fgLH/hbFnzWHy9gz/EMWw8iO6V9BLf
	 6aZJzG87pcHf1SHrHQ/3uDX8ZfcrSSHsKOa/NW08MB4Bs0UZnoUOk42zNmQEDkHGW6
	 8SpafsbNJpUU//ezHNz/Qx8oR3GVVNkHfAiSKkwshWEgBmI/yWswmBvD0TOZpKtwAn
	 cdd64ERALaR5y/fnXgNKUaQ2PEFH3DsJRpF+U6x/ENbngIweefvzmBCNACVGy2PUPb
	 Xqod+KwT+fkmg==
Date: Mon, 24 Apr 2023 20:01:46 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH 5/6] automation: PCI passthrough tests on ADL hw
In-Reply-To: <b01494665d1a8cce5c426be70beca2c519215eca.1682369736.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2304241717510.3419@ubuntu-linux-20-04-desktop>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com> <b01494665d1a8cce5c426be70beca2c519215eca.1682369736.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-879240808-1682381878=:3419"
Content-ID: <alpine.DEB.2.22.394.2304241722140.3419@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-879240808-1682381878=:3419
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2304241722141.3419@ubuntu-linux-20-04-desktop>

On Mon, 24 Apr 2023, Marek Marczykowski-Górecki wrote:
> Add a simple PCI passthrough test to both PV and HVM domUs. It passes
> through a network adapter (the only one in the system), gets an IP via
> DHCP (first basic test) and then pings the gateway (second basic test).
> Finally, if the device is supposed to use MSI or MSI-X (as set in the
> PCIDEV_INTR test variable), check that it's in use via /proc/interrupts.
> 
> On the current runner, the device in question is this:
> 03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3] (rev 03)
> 	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:7d25]
> 	Flags: bus master, fast devsel, latency 0, IRQ 18
> 	Memory at 50400000 (32-bit, non-prefetchable) [size=1M]
> 	Memory at 50500000 (32-bit, non-prefetchable) [size=16K]
> 	Capabilities: [40] Power Management version 3
> 	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
> 	Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
> 	Capabilities: [a0] Express Endpoint, MSI 00
> 	Capabilities: [100] Advanced Error Reporting
> 	Capabilities: [140] Device Serial Number ...
> 	Capabilities: [1c0] Latency Tolerance Reporting
> 	Capabilities: [1f0] Precision Time Measurement
> 	Capabilities: [1e0] L1 PM Substates
> 	Kernel driver in use: igc
> 	Kernel modules: igc
> 
> With the current Xen version, it uses MSI-X under PV and MSI under HVM.
> 
> This patch moves the domU config to a variable, to make it configurable
> on a per-test basis. Also add a few comments for visual separation of
> the tests.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
>  automation/gitlab-ci/test.yaml     | 20 ++++++++-
>  automation/scripts/qubes-x86-64.sh | 80 ++++++++++++++++++++++++++-----
>  2 files changed, 89 insertions(+), 11 deletions(-)
> 
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index d68c584269dd..1ce083e6cd88 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -94,6 +94,8 @@
>      # the test controller runs on RPi4
>      CONTAINER: alpine:3.12-arm64v8
>      LOGFILE: smoke-test.log
> +    PCIDEV: "03:00.0"
> +    PCIDEV_INTR: "MSI-X"

This is minor, but I would move PCIDEV_INTR to
adl-pci-pv-x86-64-gcc-debug, given that adl-pci-hvm-x86-64-gcc-debug
already redefines it.

I would also move PCIDEV to adl-pci-pv-x86-64-gcc-debug and
adl-pci-hvm-x86-64-gcc-debug, but I am fine either way.
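
Concretely, the suggestion amounts to something like this in test.yaml
(just a sketch of the placement; the values are copied from the patch):

```yaml
adl-pci-pv-x86-64-gcc-debug:
  extends: .adl-x86-64
  variables:
    PCIDEV: "03:00.0"
    PCIDEV_INTR: "MSI-X"
  script:
    - ./automation/scripts/qubes-x86-64.sh pci-pv 2>&1 | tee ${LOGFILE}
  needs:
    - *x86-64-test-needs
    - alpine-3.12-gcc-debug
```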

However, the two new tests failed for me:

https://gitlab.com/xen-project/people/sstabellini/xen/-/pipelines/847157948


+ grep '^Welcome to Alpine Linux' smoke.serial
+ '[' 0 -le 0 ]
+ '[' 0 -le 0 ]
+ echo 'ERROR: test timeout, aborting'
ERROR: test timeout, aborting

The "Welcome to Alpine Linux" message is missing.
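
For reference, the trace is consistent with a countdown loop of roughly
this shape (a sketch only, not the actual CI script; the function and
variable names are assumptions):

```shell
#!/bin/sh
# Sketch of the kind of wait loop the trace suggests: poll the serial
# log for a marker, counting down until a timeout expires.
wait_for_marker() {
    logfile=$1 marker=$2 timeout=$3
    until grep -q "$marker" "$logfile" 2>/dev/null; do
        if [ "$timeout" -le 0 ]; then
            echo 'ERROR: test timeout, aborting'
            return 1
        fi
        timeout=$((timeout - 1))
        sleep 1
    done
    echo 'marker found'
}
```

With the counter already at 0 and the marker absent, the loop prints the
same "ERROR: test timeout, aborting" seen in the log above.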


>    artifacts:
>      paths:
>        - smoke.serial
> @@ -147,6 +149,24 @@ adl-suspend-x86-64-gcc-debug:
>      - *x86-64-test-needs
>      - alpine-3.12-gcc-debug
>  
> +adl-pci-pv-x86-64-gcc-debug:
> +  extends: .adl-x86-64
> +  script:
> +    - ./automation/scripts/qubes-x86-64.sh pci-pv 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - *x86-64-test-needs
> +    - alpine-3.12-gcc-debug
> +
> +adl-pci-hvm-x86-64-gcc-debug:
> +  extends: .adl-x86-64
> +  variables:
> +    PCIDEV_INTR: "MSI"
> +  script:
> +    - ./automation/scripts/qubes-x86-64.sh pci-hvm 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - *x86-64-test-needs
> +    - alpine-3.12-gcc-debug
> +
>  qemu-smoke-dom0-arm64-gcc:
>    extends: .qemu-arm64
>    script:
> diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
> index c0bc71764f73..6442f7dda515 100755
> --- a/automation/scripts/qubes-x86-64.sh
> +++ b/automation/scripts/qubes-x86-64.sh
> @@ -4,8 +4,21 @@ set -ex
>  
>  test_variant=$1
>  
> +### defaults
>  wait_and_wakeup=
>  timeout=120
> +domU_config='
> +type = "pvh"
> +name = "domU"
> +kernel = "/boot/vmlinuz"
> +ramdisk = "/boot/initrd-domU"
> +extra = "root=/dev/ram0 console=hvc0"
> +memory = 512
> +vif = [ "bridge=xenbr0", ]
> +disk = [ ]
> +'
> +
> +### test: smoke test
>  if [ -z "${test_variant}" ]; then
>      passed="ping test passed"
>      domU_check="
> @@ -23,6 +36,8 @@ done
>  tail -n 100 /var/log/xen/console/guest-domU.log
>  echo \"${passed}\"
>  "
> +
> +### test: S3
>  elif [ "${test_variant}" = "s3" ]; then
>      passed="suspend test passed"
>      wait_and_wakeup="started, suspending"
> @@ -48,6 +63,59 @@ xl dmesg | grep 'Finishing wakeup from ACPI S3 state' || exit 1
>  ping -c 10 192.168.0.2 || exit 1
>  echo \"${passed}\"
>  "
> +
> +### test: pci-pv, pci-hvm
> +elif [ "${test_variant}" = "pci-pv" ] || [ "${test_variant}" = "pci-hvm" ]; then
> +
> +    if [ -z "$PCIDEV" ]; then
> +        echo "Please set 'PCIDEV' variable with BDF of test network adapter" >&2
> +        echo "Optionally set also 'PCIDEV_INTR' to 'MSI' or 'MSI-X'" >&2
> +        exit 1
> +    fi
> +
> +    passed="pci test passed"
> +
> +    domU_config='
> +type = "'${test_variant#pci-}'"
> +name = "domU"
> +kernel = "/boot/vmlinuz"
> +ramdisk = "/boot/initrd-domU"
> +extra = "root=/dev/ram0 console=hvc0"
> +memory = 512
> +vif = [ ]
> +disk = [ ]
> +pci = [ "'$PCIDEV',seize=1" ]
> +on_reboot = "destroy"
> +'
> +
> +    domU_check="
> +set -x -e
> +ip link set eth0 up
> +timeout 30s udhcpc -i eth0
> +pingip=\$(ip -o -4 r show default|cut -f 3 -d ' ')
> +ping -c 10 \"\$pingip\"
> +echo domU started
> +cat /proc/interrupts
> +"
> +    if [ "$PCIDEV_INTR" = "MSI-X" ]; then
> +        domU_check="$domU_check
> +grep -- '\\(-msi-x\\|PCI-MSI-X\\).*eth0' /proc/interrupts
> +"
> +    elif [ "$PCIDEV_INTR" = "MSI" ]; then
> +        # depending on the kernel version and domain type, the MSI can be
> +        # marked as '-msi', 'PCI-MSI', or 'PCI-MSI-<SBDF>'; be careful to not match
> +        # -msi-x nor PCI-MSI-X
> +        domU_check="$domU_check
> +grep -- '\\(-msi\\|PCI-MSI\\( \\|-[^X]\\)\\).*eth0' /proc/interrupts
> +"
> +    fi
> +    domU_check="$domU_check
> +echo \"${passed}\"
> +"
> +
> +    dom0_check="
> +tail -n 100 -F /var/log/xen/console/guest-domU.log
> +"
>  fi
>  
>  # DomU
> @@ -97,17 +165,7 @@ xl create /etc/xen/domU.cfg
>  ${dom0_check}
>  " > etc/local.d/xen.start
>  chmod +x etc/local.d/xen.start
> -# just PVH for now
> -echo '
> -type = "pvh"
> -name = "domU"
> -kernel = "/boot/vmlinuz"
> -ramdisk = "/boot/initrd-domU"
> -extra = "root=/dev/ram0 console=hvc0"
> -memory = 512
> -vif = [ "bridge=xenbr0", ]
> -disk = [ ]
> -' > etc/xen/domU.cfg
> +echo "$domU_config" > etc/xen/domU.cfg
>  
>  echo "rc_verbose=yes" >> etc/rc.conf
>  echo "XENCONSOLED_TRACE=all" >> etc/default/xencommons
> -- 
> git-series 0.9.1
> 
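
One more note on the /proc/interrupts checks: the doubled backslashes in
the script are shell quoting inside the double-quoted domU_check string,
so the patterns grep actually sees are `\(-msi-x\|PCI-MSI-X\).*eth0` and
`\(-msi\|PCI-MSI\( \|-[^X]\)\).*eth0`. They can be exercised locally
against sample lines (the sample lines below are assumptions modelled on
typical igc output, not taken from the test box):

```shell
# Run the patch's two interrupt-type patterns against hand-written
# sample /proc/interrupts lines (illustrative only).
msix_line='129:   0   0   xen-pirq-msi-x   eth0-rx-0'
msi_line=' 28:   0   0   PCI-MSI 524288-edge   eth0'

# MSI-X check: matches "-msi-x" or "PCI-MSI-X" followed by eth0.
echo "$msix_line" | grep -- '\(-msi-x\|PCI-MSI-X\).*eth0'

# MSI check: matches "-msi", or "PCI-MSI" followed by a space or a
# non-X character, then eth0.
echo "$msi_line" | grep -- '\(-msi\|PCI-MSI\( \|-[^X]\)\).*eth0'
```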
--8323329-879240808-1682381878=:3419--


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 05:03:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 05:03:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525718.817078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prApf-0002H9-1J; Tue, 25 Apr 2023 05:03:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525718.817078; Tue, 25 Apr 2023 05:03:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prApe-0002H2-Uk; Tue, 25 Apr 2023 05:03:14 +0000
Received: by outflank-mailman (input) for mailman id 525718;
 Tue, 25 Apr 2023 05:03:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prApd-0002Gp-JM; Tue, 25 Apr 2023 05:03:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prApd-0000s4-Hi; Tue, 25 Apr 2023 05:03:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prApc-0001AP-TN; Tue, 25 Apr 2023 05:03:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prApc-00083O-QC; Tue, 25 Apr 2023 05:03:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YK6k+rbsSvT7h3hCYfSokRSCv79yWSVcO5I+nH2o01s=; b=CTstfsG2hMByOXKyTq0Gp/QNZ+
	n+qjbiU7NZxBEoi01ewamz6yscAU77ukHXaPneFXrI7AccyWTUDW8DgoyH3YLNm4J0bwC+etDQ4fD
	y19A3rEW6J5LquTr1UL+QPfyRwx4TxAq5QwawRpWkErYyPUuugsKK8fWJdz3AkVPFPJ0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180396-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 180396: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.17-testing:test-amd64-i386-xl-vhd:guest-localmigrate:fail:heisenbug
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=aa80e0afaaa5399915d90a80c6dbe1e353664d44
X-Osstest-Versions-That:
    xen=e4a5fb9227889bec99ab212b839680f4d5b51e60
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Apr 2023 05:03:12 +0000

flight 180396 xen-4.17-testing real [real]
flight 180402 xen-4.17-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180396/
http://logs.test-lab.xenproject.org/osstest/logs/180402/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd       17 guest-localmigrate  fail pass in 180402-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180209
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180209
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180209
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180209
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180209
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180209
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180209
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180209
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180209
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180209
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180209
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180209

version targeted for testing:
 xen                  aa80e0afaaa5399915d90a80c6dbe1e353664d44
baseline version:
 xen                  e4a5fb9227889bec99ab212b839680f4d5b51e60

Last test of basis   180209  2023-04-11 22:08:27 Z   13 days
Testing same since   180396  2023-04-24 11:08:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e4a5fb9227..aa80e0afaa  aa80e0afaaa5399915d90a80c6dbe1e353664d44 -> stable-4.17


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 06:05:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 06:05:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525724.817088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prBnX-00007J-K2; Tue, 25 Apr 2023 06:05:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525724.817088; Tue, 25 Apr 2023 06:05:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prBnX-00007C-GT; Tue, 25 Apr 2023 06:05:07 +0000
Received: by outflank-mailman (input) for mailman id 525724;
 Tue, 25 Apr 2023 06:05:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SxzM=AQ=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1prBnW-000073-LW
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 06:05:06 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0603.outbound.protection.outlook.com
 [2a01:111:f400:fe02::603])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1e958e7c-e32f-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 08:05:03 +0200 (CEST)
Received: from DB6PR0202CA0048.eurprd02.prod.outlook.com (2603:10a6:4:a5::34)
 by DB9PR08MB8357.eurprd08.prod.outlook.com (2603:10a6:10:3db::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 06:04:54 +0000
Received: from DBAEUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a5:cafe::77) by DB6PR0202CA0048.outlook.office365.com
 (2603:10a6:4:a5::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Tue, 25 Apr 2023 06:04:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT051.mail.protection.outlook.com (100.127.142.148) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.20 via Frontend Transport; Tue, 25 Apr 2023 06:04:53 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Tue, 25 Apr 2023 06:04:53 +0000
Received: from 12ea708596bb.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A9004B26-6C67-4DFB-A90D-43D5988DECA3.1; 
 Tue, 25 Apr 2023 06:04:42 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 12ea708596bb.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 25 Apr 2023 06:04:42 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DB9PR08MB7867.eurprd08.prod.outlook.com (2603:10a6:10:39e::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 06:04:40 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 06:04:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e958e7c-e32f-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4DXQAmi2cw7qAWPFSwO5nYY4M+EiR6g3vt7i7YeNRxw=;
 b=Cn+eCr/t1gQHwchsED7V3Pkkq/WxBYhNYU782Ee9ozyL2Nz5reVJwHZl4VcvpB1peWRuzJv7ORzD3ks+nBzpmmevemfSnXA6TurmyZI59r49Mb8Cz29qf708l2B8pljpSJUl+toXHjwiaYRfuOmg3Zw5qx/LZfl0pUMEkA/XrEQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 632e7ec6c46a1ba6
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fmJYnnlWtjbpZriVWMfsWwvMl/bpB8Pl5eFli2djydH8I2Djv47PIG0IpQKzYoTCuwAoT1Ea7gZ2Az5qKypHq0Z7xovLR4vEMzb+id0X1BRNK9krWm2rqcKXVgc+NxiUNhwRkdfcuPsyKswd6N3k+R8DIeGB6nwBs/wIwNsLtBAwTBSSAD8/99S8qViiagv0a8JOh/6NAREUs8yoo0/y2dzjwVHGwIYwG3I9gZizSEjEcvRk6C013ubFP9aapGYpspLMiWI6ZmZnraqW9psIXdYITrCZ8V1GSO0ELqggoAqo48Dn78tKQI3WgSg7F2Qt/PZ9YQ13fIDE+YN0HirITg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4DXQAmi2cw7qAWPFSwO5nYY4M+EiR6g3vt7i7YeNRxw=;
 b=PBrSoIR/MJLvMBoCBqyQpmeB6QM68MHHDYTzXRKciBXoSaIwaL2qr9gOWliQH4sf8aB6RQAy4TH6eeMbnUAz9+X3Yndzes336sEJIyYOzJ5AcGTm+DKY9cZy/RWWZFfGJ0+yVmSg8vqzSyknyjYlSC+DCPIEa3pfWNK1+UJa9d2Rn0ZQOaYfAYb5tHCKR4rZ7CRZ5To26scK6bYCBiSNmjaCd2koLk0KFQz25cV9Nk9lVOyT6kVIRkFkauRvbcs+wzCczREEF4oZTM621J+8/s+O/6Pt6z4dRKVeyVB1PrzGLKhgLPN6hQuKryPiaTffRKhaF7znAWG5EjSX4uHKmQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4DXQAmi2cw7qAWPFSwO5nYY4M+EiR6g3vt7i7YeNRxw=;
 b=Cn+eCr/t1gQHwchsED7V3Pkkq/WxBYhNYU782Ee9ozyL2Nz5reVJwHZl4VcvpB1peWRuzJv7ORzD3ks+nBzpmmevemfSnXA6TurmyZI59r49Mb8Cz29qf708l2B8pljpSJUl+toXHjwiaYRfuOmg3Zw5qx/LZfl0pUMEkA/XrEQ=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Thread-Index:
 AQHZdnKG11PjGcmQuE2MpJrElBiS3686VJGAgAAo1ICAAAFsAIAADk4AgAACvwCAAANFAIAAAd0AgAACk4CAAAHsAIAAALeAgAAHWYCAAOkjgA==
Date: Tue, 25 Apr 2023 06:04:39 +0000
Message-ID: <C1815EB3-E875-4D49-831A-56E152BF4B61@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
 <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
 <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
 <37C35493-D5DA-4102-9B93-0045732E6F94@arm.com>
 <d49f1df6-ac49-27ef-d55f-b6284c76b055@suse.com>
 <5535FDB0-989E-4536-AF7B-8F0BB561667A@arm.com>
 <bd064b44-3531-a1b0-a7a8-1ad7ae434394@suse.com>
 <300BE89F-CA37-4A28-9CC5-5875E10D4A0C@arm.com>
 <a268313d-03be-9281-3627-c38115d3e5de@suse.com>
 <B534E482-71BF-4C5F-B9A8-3D567367F7AA@arm.com>
 <f9e631e1-02bb-a565-4df4-ccbb66fbaf49@suse.com>
In-Reply-To: <f9e631e1-02bb-a565-4df4-ccbb66fbaf49@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DB9PR08MB7867:EE_|DBAEUR03FT051:EE_|DB9PR08MB8357:EE_
X-MS-Office365-Filtering-Correlation-Id: 1fd35383-b3eb-48ee-0e7f-08db4552fd24
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 fdnLf2M0D2npr5GHV4czDGNa7GS966hTnlsKC3sTURB3prPWcAZ7j1832i67ec1H3e9uNpk6fkv91YQsxDSo1ixVrvgC40yXfdEyIh5BS1cBt5wMfY4MfzFhE+lZAXfJWWC7Vl+6jJCscxP3vn8m22pFlFNn05lYYawHzjys0+lveWCr/pAhOXefWjbbV4UyNdZMqJwZycpbR7qRiY+bUnkcf7EtwFnmSyY2qDxfM9O57taZsLxtYxLtsaSFSpWM1MBjAFT76KW/z1Kh6iMlDD8g9TaYZn5LlAgkJ5YGosPlwgiNdDmNYlUguSCOPOmC6fGyq49zE55daI7M5rpZkaAV6ZwuvJ+5VBl9ISEmb9byEQjdkPVLRYwsH2TRd7IoXHJzOiTT0pM/3alYvSSM2WNw8ozRT/BtQn46Icnz/OePuJK7iX2DTbzrP7sKDT7CSeHNGrt4vTUzTvySWPTrXUIjLn8E1w6Nw2zkDC9b2nIOFiZoha+iULc17F3CJf2Y8uhpOF5yDSn0Gl1/PmrBHO5AJvIioc5iwtkzYoriEfWG0EtOg7yOTKDJm3uDKNADh7OdMge58EM42MYyU9bJO2D5kZvvaOyuit5iWtO9zR36GgHJ6zPkkbhd1WUKuf1ZJaIiPklfS5XwJIRkyd8h7Q==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(39860400002)(396003)(346002)(376002)(136003)(451199021)(53546011)(6506007)(26005)(6512007)(38070700005)(2616005)(36756003)(186003)(122000001)(38100700002)(66946007)(478600001)(76116006)(86362001)(6916009)(66556008)(66476007)(66446008)(8676002)(8936002)(54906003)(64756008)(91956017)(5660300002)(71200400001)(6486002)(41300700001)(2906002)(4326008)(33656002)(316002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <7294551E31C0B140A41CE443AA6F8FFA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7867
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a629474d-3425-4eaa-0970-08db4552f4a4
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	aSrKzY4nok6aq73gFi4YZeVq5yeGOXZaiAsZzXlUKs4JJ23pDuX9gpIwk+mIEIhmdC42V9ygm38Puiqp10WVGQ87525TrF666OUDGulQpABz+T5u43m+OAsMigsqJ919SVTqeyJ33kqgP3VbfuRdy4+l1NKnH/ZY9oOTldjUWhu412ADyQ6F65zrXprbxOLXvetu+6bMxMY/gejzIT73pHYZLnuw8Aw/fmErFa8AKuK/YLWaLX/AnrYuwfhxE8PuFnx3WjC/c2Z6LGspS/Nb7D6tpy4WdrcJChM8LapwksJcv1aiXnVBBWz+FnYXpX8nvbK1E5hdG5MC66vCfD0yW2L5Su8kRkYWA1y8IMrmbthW6L1o6GqUA+mQQD4EIwlRUBr1em9I1X6Hu0DRI+0nhHT6eviOdfDhJ+BJQ0EKMvpTVmt0BuIZVZxU1Y7uYRnnIwOFFyKeUS4ubJ+70zg2cxqzI6zukkPxKLCKvOCSvIdyUVv1EXYpHI+UgpKnP6tEanBQ+Y+mjyVqzNx6Z3twL0Crz0s0LQCMzRJRoRcCoRN8qIl8pyomoig9oLtqMy0gF/QcIEMTA5KOCWj0ElFUHqnMnP6L7CutCFGXKb2VCnqkkCj3XKlwGaSFgatOPzWXWLV0PjXS00fv3iuLimJsIVLTr7QjQqeJEQDkjjbIY2nh9D4nAtaEUjh3XaeHZ8aSjaTtvVgXEAsBhWgYqTbYWQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(376002)(136003)(39860400002)(451199021)(40470700004)(46966006)(36840700001)(54906003)(86362001)(478600001)(40460700003)(36756003)(186003)(6486002)(53546011)(6512007)(6506007)(8936002)(26005)(40480700001)(82310400005)(33656002)(82740400003)(316002)(356005)(336012)(81166007)(41300700001)(36860700001)(4326008)(70206006)(70586007)(8676002)(5660300002)(47076005)(2906002)(6862004)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 06:04:53.9534
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1fd35383-b3eb-48ee-0e7f-08db4552fd24
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB8357


> On 24 Apr 2023, at 17:10, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 24.04.2023 17:43, Luca Fancellu wrote:
>>> On 24 Apr 2023, at 16:41, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 24.04.2023 17:34, Luca Fancellu wrote:
>>>>> On 24 Apr 2023, at 16:25, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> On 24.04.2023 17:18, Luca Fancellu wrote:
>>>>>> Oh ok, I don’t know, here what I get if for example I build arm32:
>>>>>>
>>>>>> arm-linux-gnueabihf-ld -EL -T arch/arm/xen.lds -N prelink.o \
>>>>>> ./common/symbols-dummy.o -o ./.xen-syms.0
>>>>>> arm-linux-gnueabihf-ld: prelink.o: in function `create_domUs':
>>>>>> (.init.text+0x13464): undefined reference to `sve_domctl_vl_param'
>>>>>
>>>>> In particular with seeing this: What you copied here is a build with the
>>>>> series applied only up to this patch? I ask because the patch here adds a
>>>>> call only out of create_dom0().
>>>>
>>>> No I’ve do the changes on top of the serie, I’ve tried it now, only to this patch and it builds correctly,
>>>> It was my mistake to don’t read carefully the error output.
>>>>
>>>> Anyway I guess this change is not applicable because we don’t have a symbol that is plain 0 for domUs
>>>> to be placed inside create_domUs.
>>>
>>> Possible, but would you mind first telling me in which other patch(es) the
>>> further reference(s) are being introduced, so I could take a look without
>>> (again) digging through the entire series?
>>
>> Sure, the other references to the function are introduced in "xen/arm: add sve property for dom0less domUs” patch 11
>
> Personally I'm inclined to suggest adding "#ifdef CONFIG_ARM64_SVE" there.
> But I guess that may again go against your desire to not ignore inapplicable
> options. Still I can't resist to at least ask how an "sve" node on Arm32 is
> different from an entirely unknown one.

It would be ok for me to use #ifdef CONFIG_ARM64_SVE and fail in the #else branch,
but I had the feeling in the past that Arm maintainers are not very happy with #ifdefs, I might
be wrong so I’ll wait for them to give an opinion and then I will be happy to follow.

>
> Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 06:36:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 06:36:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525730.817098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prCHy-0003aF-5X; Tue, 25 Apr 2023 06:36:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525730.817098; Tue, 25 Apr 2023 06:36:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prCHy-0003a8-2Z; Tue, 25 Apr 2023 06:36:34 +0000
Received: by outflank-mailman (input) for mailman id 525730;
 Tue, 25 Apr 2023 06:36:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lKKS=AQ=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1prCHw-0003a2-OH
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 06:36:32 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 82acadfa-e333-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 08:36:29 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 207E92183A;
 Tue, 25 Apr 2023 06:36:29 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E4C3A13466;
 Tue, 25 Apr 2023 06:36:28 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 0IJpNux0R2S0HwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 25 Apr 2023 06:36:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82acadfa-e333-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682404589; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=mo4JRcoaRz9c6ThDK9Lv6//pnOyGZNbZnRy/NWXctJc=;
	b=SnS2qTQcw+RqnZ31CthQGA48tSaKXfUc7CaH79wPAGBN6pD+n3HbGDJCfUNrgLeEuhb9K9
	mmhKENNGsDVGZzZv4rs3HoYNkK7eioHZaNniGqSkHc0GAV9Lxz+z8QeAex+G5pztDthsTI
	sckQG8qKfkSrntMWKJNmyWOoKK23Qps=
Message-ID: <16b10155-4dfb-6891-dc90-61a6b966ee6d@suse.com>
Date: Tue, 25 Apr 2023 08:36:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 xen-devel@lists.xenproject.org, Dario Faggioli <dfaggioli@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>
References: <20230424210048.786436-1-stewart.hildebrand@amd.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [RFC PATCH] xen/sched/null: avoid crash after failed domU
 creation
In-Reply-To: <20230424210048.786436-1-stewart.hildebrand@amd.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------X0SK0EfBcsr43ngwEtHiFbH5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------X0SK0EfBcsr43ngwEtHiFbH5
Content-Type: multipart/mixed; boundary="------------bVDBD0IsKwVMx33GQmgYqEFt";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 xen-devel@lists.xenproject.org, Dario Faggioli <dfaggioli@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Message-ID: <16b10155-4dfb-6891-dc90-61a6b966ee6d@suse.com>
Subject: Re: [RFC PATCH] xen/sched/null: avoid crash after failed domU
 creation
References: <20230424210048.786436-1-stewart.hildebrand@amd.com>
In-Reply-To: <20230424210048.786436-1-stewart.hildebrand@amd.com>

--------------bVDBD0IsKwVMx33GQmgYqEFt
Content-Type: multipart/mixed; boundary="------------m7LIA0v8Ziqt0mzrNBTpmJL9"

--------------m7LIA0v8Ziqt0mzrNBTpmJL9
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.04.23 23:00, Stewart Hildebrand wrote:
> When creating a domU, but the creation fails, we may end up in a state
> where a vcpu has not yet been added to the null scheduler, but
> unit_deassign() is invoked.

This is not really true. The vcpu has been added, but it was offline at
that time. This resulted in null_unit_insert() returning early and not
calling unit_assign().

Later the vcpu was onlined during XEN_DOMCTL_setvcpucontext handling,
resulting in null_unit_remove() calling unit_deassign().

> In this case, when running a debug build of
> Xen, we will hit an ASSERT and crash Xen:
>
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Assertion 'npc->unit == unit' failed at common/sched/null.c:379
> (XEN) ****************************************
>
> To work around this, remove the ASSERT and introduce a check for the
> case where npc->unit is NULL and simply return false from
> unit_deassign().

I think the correct fix would be to call unit_deassign() from
null_unit_remove() only, if npc->unit isn't NULL. Dario might have a
different opinion, though. :-)


Juergen

>
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
> ---
>
> I'm not entirely sure whether this is the correct fix (hence the RFC
> tag), but at least it avoids crashing the system for me.
>
> Here are the steps to reproduce on an aarch64 system with 4 cpus:
>
> sudo xl cpupool-cpu-remove Pool-0 3
> sudo xl cpupool-create 'name="nullpool"' 'sched="null"' 'cpus=["3"]'
>
> cat > stew.cfg <<EOF
> name = "stew"
> kernel = "Image"
> extra = "console=hvc0"
> memory = 768
> vcpus = 1
> pool= "nullpool"
> pci = [ "01:00.0" ]
> EOF
>
> sudo xl create stew.cfg
>
> The PCI device 01:00.0 is not assignable, so the domain creation will
> fail.
>
> Here is a more detailed crash log:
>
> stew@ubuntu:~$ sudo xl create stew.cfg
> Parsing config from stew.cfg
> libxl: error: libxl_pci.c:1677:libxl__device_pci_add: Domain 1:PCI device 0:1:0.0 is not assignable
> libxl: error: libxl_pci.c:1809:device_pci_add_done: Domain 1:libxl__device_pci_add failed for PCI device 0:1:0.0 (rc -3)
> libxl: error: libxl_create.c:1923:domcreate_attach_devices: Domain 1:unable to add pci devices
> (XEN) Assertion 'npc->unit == unit' failed at common/sched/null.c:379
> (XEN) ----[ Xen-4.18-unstable  arm64  debug=y  Tainted:   C    ]----
> (XEN) CPU:    0
> (XEN) PC:     000002000023ec30 null.c#unit_deassign+0xdc/0x1cc
> (XEN) LR:     000002000023fa1c
> (XEN) SP:     00008000fff67bb0
> (XEN) CPSR:   00000000800002c9 MODE:64-bit EL2h (Hypervisor, handler)
> (XEN)      X0: 0000000000000000  X1: 00008000fff43d70  X2: 0000000000000000
> (XEN)      X3: 0000000000000003  X4: 0000000000000000  X5: 00008000ffff84b0
> (XEN)      X6: 0000000000000000  X7: 0000000000000000  X8: 00008000fff43fd0
> (XEN)      X9: 0000000000000000 X10: f00ff00ff00ff00f X11: 00000000ccccccc0
> (XEN)     X12: 00000000ccccccc0 X13: 0000000000000000 X14: 0000000000000000
> (XEN)     X15: 0000007fde58e6d0 X16: 0000000000000024 X17: 0000000000000000
> (XEN)     X18: 0000000000000000 X19: 00008000fff43cd0 X20: 00008000fffdc120
> (XEN)     X21: 00008000fff69a30 X22: 00008000fff69a30 X23: 0000000000000003
> (XEN)     X24: 00008000fffecb20 X25: 0000000000000001 X26: 00008000fff69920
> (XEN)     X27: 00008000fff43c70 X28: ffffff880188eec0  FP: 00008000fff67bb0
> (XEN)
> (XEN)   VTCR_EL2: 0000000080023558
> (XEN)  VTTBR_EL2: 000100087ff94000
> (XEN)
> (XEN)  SCTLR_EL2: 0000000030cd183d
> (XEN)    HCR_EL2: 00000000807c063f
> (XEN)  TTBR0_EL2: 000000000034b000
> (XEN)
> (XEN)    ESR_EL2: 00000000f2000001
> (XEN)  HPFAR_EL2: 0000000000f90100
> (XEN)    FAR_EL2: ffffffc0097b0f00
> (XEN)
> (XEN) Xen stack trace from sp=00008000fff67bb0:
> (XEN)    00008000fff67c10 000002000023fa1c 00008000fff43cd0 00008000fff43d70
> (XEN)    00008000fff69a30 00008000fffec068 00008000fff68000 00008000fffecb20
> (XEN)    0000000000000001 00008000fff43d70 00008000fff69a30 00008000fffec068
> (XEN)    00008000fff67c40 0000020000243948 0000000000000003 00008000fff43e10
> (XEN)    00008000fff43cd0 00008000fffec068 00008000fff67cb0 000002000023069c
> (XEN)    00008000fffecb20 00008000fff68000 00008000fffecb20 00008000fff68000
> (XEN)    000000005a000ea1 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 ffffff880188eec0 00008000fffecb20 00008000fff43ea0
> (XEN)    00008000fff67cd0 0000020000231cc0 00000200002bb7e0 00008000fff68000
> (XEN)    00008000fff67d00 00000200002099f0 00008000fff68000 0000000000000000
> (XEN)    0000007fb5381010 000000005a000ea1 00008000fff67d30 000002000022ec8c
> (XEN)    0000000000000001 000000005a000ea1 00008000fff67d30 000002000022e744
> (XEN)    00008000fff67e40 000002000027121c 00008000fff67ea0 000000005a000ea1
> (XEN)    00008000fff67f20 00008000fff52000 00008000fff67da0 0000020000244650
> (XEN)    00008000fff67d90 0000020000224f54 00008000fff67da0 0000020000224f54
> (XEN)    00008000fff67dc0 0000020000225088 00008000fff52e16 00008000fff52e00
> (XEN)    0000001500000002 0000000000000001 000a752520646a25 742064656c696166
> (XEN)    c00cc00cc00cc000 0000000000000000 ff0000ff000000ff 0000000000000000
> (XEN)    00000000f00f000f 0000000000000000 f00ff00ff00ff00f f00ff00ff00ff00f
> (XEN)    00000000ccccccc0 00000000ccccccc0 0000000000000000 0000000000000000
> (XEN)    0000007fb5310018 0000007fb529d8d0 00008000fff67e70 0000020000272134
> (XEN)    00008000fff67ea0 000000005a000ea1 00008000fff67fa8 0000000020000005
> (XEN)    ffffffc009c9bce0 0000020000255c60 0000000000000000 ffffff8809182a00
> (XEN)    ffffffc009c9bce0 0000020000255c54 0000007fb5381010 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    ffffffc009c9bdc8 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000007fde58e6d0 0000000000000024 0000000000000000
> (XEN)    0000000000000000 0000007fde58e6d0 ffffff8809182a00 ffffff8809182a00
> (XEN)    0000000000305000 0000007fde58e6d0 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 ffffff880188eec0 ffffffc009c9bce0
> (XEN)    ffffffc008648cb8 ffffffffffffffff ffffffc00803799c 0000000020000005
> (XEN)    000000005a000ea1 0000000000000000 0000000020000005 0000000000000000
> (XEN)    0000000000000000 ffffff880188eec0 ffffffc009c9bce0 ffffffc008037998
> (XEN)    0000000000000000 0000000000000000
> (XEN) Xen call trace:
> (XEN)    [<000002000023ec30>] null.c#unit_deassign+0xdc/0x1cc (PC)
> (XEN)    [<000002000023fa1c>] null.c#null_unit_remove+0x6c/0xe4 (LR)
> (XEN)    [<000002000023fa1c>] null.c#null_unit_remove+0x6c/0xe4
> (XEN)    [<0000020000243948>] sched_move_domain+0x194/0x39c
> (XEN)    [<000002000023069c>] cpupool.c#cpupool_move_domain_locked+0x38/0x70
> (XEN)    [<0000020000231cc0>] cpupool_move_domain+0x34/0x54
> (XEN)    [<00000200002099f0>] domain_kill+0xc4/0x144
> (XEN)    [<000002000022ec8c>] do_domctl+0x6d0/0xfa4
> (XEN)    [<000002000027121c>] traps.c#do_trap_hypercall+0x280/0x34c
> (XEN)    [<0000020000272134>] do_trap_guest_sync+0x448/0x5c4
> (XEN)    [<0000020000255c60>] entry.o#guest_sync_slowpath+0xa4/0xd4
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Assertion 'npc->unit == unit' failed at common/sched/null.c:379
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
> ---
>   xen/common/sched/null.c | 9 ++++++++-
>   1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
> index 65a0a6c5312d..71a83c4fb1ad 100644
> --- a/xen/common/sched/null.c
> +++ b/xen/common/sched/null.c
> @@ -376,7 +376,14 @@ static bool unit_deassign(struct null_private *prv, const struct sched_unit *uni
>       struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
>
>       ASSERT(list_empty(&null_unit(unit)->waitq_elem));
> -    ASSERT(npc->unit == unit);
> +
> +    if ( !npc->unit )
> +    {
> +        dprintk(XENLOG_G_INFO, "%d <-- NULL (%pdv%d)\n", cpu, unit->domain,
> +                unit->unit_id);
> +        return false;
> +    }
> +
>       ASSERT(!cpumask_test_cpu(cpu, &prv->cpus_free));
>
>       npc->unit = NULL;
--------------m7LIA0v8Ziqt0mzrNBTpmJL9
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------m7LIA0v8Ziqt0mzrNBTpmJL9--

--------------bVDBD0IsKwVMx33GQmgYqEFt--

--------------X0SK0EfBcsr43ngwEtHiFbH5
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRHdOwFAwAAAAAACgkQsN6d1ii/Ey8T
JAgAgPv/dkU6lO+US79Y+kv4Xsw1h0AFMJBwL4Yv1qLmifKlXahKRydZn6idRRIt1HbfmjZEr6Ya
7dRJ4AQOIRZvUCO4DG0FHcFf03SbyRrCBwL0UXayKmZAuIaljm5fyEqRFhvlNEgIBFOlwziqsuuc
YXEeY5+xPVhW9QUZU3+zM/obFlMw5gJ5TthSk9dFqjAIb3z1ftB5BbdRW1yEqziWWhJiz30xbNw9
L8JRCaHd+huhVvRytLz35cB6F4HhGS7HxpoQKWZdJHnTCvUasT46aE6vPcAmwRi8CQs/eWVCZk/T
+QTkfa703VWF0VVy7XF5CFLdUEJDd0PQQkVrgjJCZg==
=cDOk
-----END PGP SIGNATURE-----

--------------X0SK0EfBcsr43ngwEtHiFbH5--


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:30:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:30:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525736.817108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prD85-0001MR-3s; Tue, 25 Apr 2023 07:30:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525736.817108; Tue, 25 Apr 2023 07:30:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prD85-0001MK-0h; Tue, 25 Apr 2023 07:30:25 +0000
Received: by outflank-mailman (input) for mailman id 525736;
 Tue, 25 Apr 2023 07:30:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=twGC=AQ=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1prD82-0001ME-UO
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:30:23 +0000
Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com
 [2607:f8b0:4864:20::429])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e7751649-e33a-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 09:29:25 +0200 (CEST)
Received: by mail-pf1-x429.google.com with SMTP id
 d2e1a72fcca58-63b73203e0aso33369458b3a.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Apr 2023 00:29:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7751649-e33a-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682407764; x=1684999764;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=nASlhjDUUbPMalmLLb5ux2L9cQmWWdUsRQQM3o2I4uo=;
        b=Rmim99lzOdp/YJ5XwuKjOJCM91qHrRYi+ZfBZYDxXaz4AzWSLNnR/ol8uGdFO/cq7a
         erDtgA8/sBZIuWjeXa2xCpcZAZEpfF+7GKM+DgMNeYTdT8680qLxO+vdP5GgbPXdH3AO
         WyoAu997Ls7bZy7WhD6kv/iDolEIBdJICf101ZSsneESp7yKoDqbIt0EagOi3BzpZqQn
         mTJnRPSCfSibYqUoevmKV10KUMQR22LlAHlk6UkqwUPzyV5R2jF2PmJ9BAR+8S2WWVqj
         7DTsj1q9KpM29kpXiv8WSRIIVmnwgKaifs3zGuEv5V5A7yjhRDHGWJQqkdoTlaN+loVF
         q3TA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682407764; x=1684999764;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=nASlhjDUUbPMalmLLb5ux2L9cQmWWdUsRQQM3o2I4uo=;
        b=YUnNsuu+7n+YgEiayYUM9cu+6WEwXIS4G9HFZGAMn+sJOttrEClt6mypIQBe7Grhhv
         /PRPO2NIpurnwAgfZNY2+3spTwNWRlm2arleM+JUhFS22FTVIGs/6l2uozpw654LhvbP
         2nIuG+5tz3guW2/yMHfH3aWHqLdahXhvwu2F9dnwcPbs0CuzDvz1OhsXtgvuWyA54cb9
         8DN8OkSNJc7YvHXMmw5ZQCjwlsIzWaBwOB2gJpxdh2hD1pfujMqps6jJiDZR7OI6vHnQ
         L+X3kmGjUVd0huEJQ5+6nNAhW2IgPghvAYvewp/puvzGDNulOqVvxs0zqC0u51lqmT/X
         JgZw==
X-Gm-Message-State: AAQBX9cHA7VeF1hxvkX1ffwMwSQSCkvYTLFlqF+q35lIgqRGdnxasuhC
	B1nEkpMbmFfvjz7L8rApw364CtRyJMyARRI0TNo=
X-Google-Smtp-Source: AKy350aBf4u3gnL8/3hIs+lz7NobO5Zksg4tTMW/gf7fawvG7RbOHpoyLN1SxPokHRJXYEz473vnywLbCcTe6rc11X0=
X-Received: by 2002:a17:903:188:b0:1a9:2a9e:30a8 with SMTP id
 z8-20020a170903018800b001a92a9e30a8mr21566507plg.9.1682407763574; Tue, 25 Apr
 2023 00:29:23 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <CA+SAi2u2=7h=Lo=bTC8YzmzidOErYaQGi=hpoG3w7tdM4LUzFw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304181044080.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com> <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
 <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
 <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
 <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com>
 <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com> <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com> <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
 <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Tue, 25 Apr 2023 10:33:49 +0300
Message-ID: <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Michal Orzel <michal.orzel@amd.com>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>
Content-Type: multipart/alternative; boundary="0000000000001d84a805fa241597"

--0000000000001d84a805fa241597
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Stefano,

Thank you.
If I build Xen without cache coloring support, this error does not occur
and all the domains boot fine, hence it cannot be a hardware issue.
The panic occurred while the rootfs was being unpacked.
I have attached the Xen/Dom0 boot log without coloring below; the
highlighted lines are printed right after the point where the panic
first appeared.
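As a sanity check on the coloring setup, the usable color count follows from the last-level cache geometry (the figures below are my assumptions for the ZynqMP Cortex-A53 cluster: 1 MiB shared L2, 16-way set associative, 4 KiB pages):

```python
# Number of usable cache colors = LLC way size / page size.
# Platform figures are assumptions for a ZynqMP Cortex-A53 cluster
# (1 MiB shared L2, 16-way set associative, 4 KiB pages).
llc_size = 1 * 1024 * 1024    # L2 size in bytes
ways = 16                     # set associativity
page_size = 4 * 1024          # bytes

way_size = llc_size // ways        # bytes covered by one way (64 KiB)
colors = way_size // page_size     # distinct page colors available

print(way_size, colors)
```

So on this SoC at most 16 colors can be partitioned among Xen and the domains; if the config asks for more, coloring setups typically fail or fall back.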

 Xen 4.16.1-pre
(XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc
(GCC) 11.3.0) debug=y 2023-04-21
(XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
(XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
(XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part
0xd03, rev 0x4
(XEN) 64-bit Execution:
(XEN)   Processor Features: 0000000000002222 0000000000000000
(XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
(XEN)     Extensions: FloatingPoint AdvancedSIMD
(XEN)   Debug Features: 0000000010305106 0000000000000000
(XEN)   Auxiliary Features: 0000000000000000 0000000000000000
(XEN)   Memory Model Features: 0000000000001122 0000000000000000
(XEN)   ISA Features:  0000000000011120 0000000000000000
(XEN) 32-bit Execution:
(XEN)   Processor Features: 0000000000000131:0000000000011011
(XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
(XEN)     Extensions: GenericTimer Security
(XEN)   Debug Features: 0000000003010066
(XEN)   Auxiliary Features: 0000000000000000
(XEN)   Memory Model Features: 0000000010201105 0000000040000000
(XEN)                          0000000001260000 0000000002102211
(XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
(XEN)                 0000000001112131 0000000000011142 0000000000011121
(XEN) Using SMC Calling Convention v1.2
(XEN) Using PSCI v1.1
(XEN) SMP: Allowing 4 CPUs
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
(XEN) GICv2 initialization:
(XEN)         gic_dist_addr=00000000f9010000
(XEN)         gic_cpu_addr=00000000f9020000
(XEN)         gic_hyp_addr=00000000f9040000
(XEN)         gic_vcpu_addr=00000000f9060000
(XEN)         gic_maintenance_irq=25
(XEN) GICv2: Adjusting CPU interface base to 0xf902f000
(XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
(XEN) Using scheduler: null Scheduler (null)
(XEN) Initializing null scheduler
(XEN) WARNING: This is experimental software in development.
(XEN) Use at your own risk.
(XEN) Allocated console ring of 32 KiB.
(XEN) CPU0: Guest atomics will try 12 times before pausing the domain
(XEN) Bringing up CPU1
(XEN) CPU1: Guest atomics will try 13 times before pausing the domain
(XEN) CPU 1 booted.
(XEN) Bringing up CPU2
(XEN) CPU2: Guest atomics will try 13 times before pausing the domain
(XEN) CPU 2 booted.
(XEN) Bringing up CPU3
(XEN) CPU3: Guest atomics will try 13 times before pausing the domain
(XEN) Brought up 4 CPUs
(XEN) CPU 3 booted.
(XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
(XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
(XEN) smmu: /axi/smmu@fd800000: stage 2 translation
(XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups,
mask 0x7fff<2>smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
(XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
(XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
(XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) alternatives: Patching with alt table 00000000002cc5c8 ->
00000000002ccb2c
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading d0 kernel from boot module @ 0000000001000000
(XEN) Loading ramdisk from boot module @ 0000000002000000
(XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
(XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
(XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
(XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
(XEN) Grant table range: 0x00000000e00000-0x00000000e40000
(XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Extended region 0: 0x81200000->0xa0000000
(XEN) Extended region 1: 0xb1200000->0xc0000000
(XEN) Extended region 2: 0xc8000000->0xe0000000
(XEN) Extended region 3: 0xf0000000->0xf9000000
(XEN) Extended region 4: 0x100000000->0x600000000
(XEN) Extended region 5: 0x880000000->0x8000000000
(XEN) Extended region 6: 0x8001000000->0x10000000000
(XEN) Loading zImage from 0000000001000000 to
0000000010000000-0000000010e41008
(XEN) Loading d0 initrd from 0000000002000000 to
0x0000000013600000-0x000000001ff3a617
(XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
(XEN) null.c:353: 0 <-- d0v0
(XEN) Freed 356kB init memory.
(XEN) d0v0 Unhandled SMC/HVC: 0x84000050
(XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
[    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host)
(aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils)
2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
[    0.000000] Machine model: D14 Viper Board - White Unit
[    0.000000] Xen 4.16 support found
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
[    0.000000]   DMA32    empty
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
[    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
[    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
[    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
[    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
[    0.000000] Initmem setup node 0 [mem
0x0000000010000000-0x000000007fffffff]
[    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
[    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
[    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
[    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
[    0.000000] psci: probing for conduit method from DT.
[    0.000000] psci: PSCIv1.1 detected in firmware.
[    0.000000] psci: Using standard PSCI v0.2 function IDs
[    0.000000] psci: Trusted OS migration not required
[    0.000000] psci: SMC Calling Convention v1.1
[    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
[    0.000000] Detected VIPT I-cache on CPU0
[    0.000000] CPU features: kernel page table isolation forced ON by KASLR
[    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
[    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
[    0.000000] Kernel command line: console=hvc0 earlycon=xen
earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
[    0.000000] Unknown kernel command line parameters "earlyprintk=xen
fips=1", will be passed to user space.
[    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152
bytes, linear)
[    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576
bytes, linear)
[    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
[    0.000000] mem auto-init: clearing system memory may take some time...
[    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K
rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K
cma-reserved)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.000000] rcu: Hierarchical RCU implementation.
[    0.000000] rcu: RCU event tracing is enabled.
[    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
[    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is
25 jiffies.
[    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
[    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
[    0.000000] Root IRQ handler: gic_handle_irq
[    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
[    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff
max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
[    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every
4398046511100ns
[    0.000258] Console: colour dummy device 80x25
[    0.310231] printk: console [hvc0] enabled
[    0.314403] Calibrating delay loop (skipped), value calculated using
timer frequency.. 200.00 BogoMIPS (lpj=400000)
[    0.324851] pid_max: default: 32768 minimum: 301
[    0.329706] LSM: Security Framework initializing
[    0.334204] Yama: becoming mindful.
[    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes,
linear)
[    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768
bytes, linear)
[    0.354743] xen:grant_table: Grant tables using version 1 layout
[    0.359132] Grant table initialized
[    0.362664] xen:events: Using FIFO-based ABI
[    0.366993] Xen: initializing cpu0
[    0.370515] rcu: Hierarchical SRCU implementation.
[    0.375930] smp: Bringing up secondary CPUs ...
(XEN) null.c:353: 1 <-- d0v1
(XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
[    0.382549] Detected VIPT I-cache on CPU1
[    0.388712] Xen: initializing cpu1
[    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
[    0.388829] smp: Brought up 1 node, 2 CPUs
[    0.406941] SMP: Total of 2 processors activated.
[    0.411698] CPU features: detected: 32-bit EL0 Support
[    0.416888] CPU features: detected: CRC32 instructions
[    0.422121] CPU: All CPU(s) started at EL1
[    0.426248] alternatives: patching kernel code
[    0.431424] devtmpfs: initialized
[    0.441454] KASLR enabled
[    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles:
0xffffffff, max_idle_ns: 7645041785100000 ns
[    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
[    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic
allocations
[    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic
allocations
[    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for
atomic allocations
[    0.519478] audit: initializing netlink subsys (disabled)
[    0.524985] audit: type=2000 audit(0.336:1): state=initialized
audit_enabled=0 res=1
[    0.529169] thermal_sys: Registered thermal governor 'step_wise'
[    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
[    0.545608] ASID allocator initialised with 32768 entries
[    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for
software IO TLB
[    0.559332] software IO TLB: mapped [mem
0x0000000011800000-0x0000000011c00000] (4MB)
[    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
[    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
[    0.636520] DRBG: Continuing without Jitter RNG
[    0.737187] raid6: neonx8   gen()  2143 MB/s
[    0.805294] raid6: neonx8   xor()  1589 MB/s
[    0.873406] raid6: neonx4   gen()  2177 MB/s
[    0.941499] raid6: neonx4   xor()  1556 MB/s
[    1.009612] raid6: neonx2   gen()  2072 MB/s
[    1.077715] raid6: neonx2   xor()  1430 MB/s
[    1.145834] raid6: neonx1   gen()  1769 MB/s
[    1.213935] raid6: neonx1   xor()  1214 MB/s
[    1.282046] raid6: int64x8  gen()  1366 MB/s
[    1.350132] raid6: int64x8  xor()   773 MB/s
[    1.418259] raid6: int64x4  gen()  1602 MB/s
[    1.486349] raid6: int64x4  xor()   851 MB/s
[    1.554464] raid6: int64x2  gen()  1396 MB/s
[    1.622561] raid6: int64x2  xor()   744 MB/s
[    1.690687] raid6: int64x1  gen()  1033 MB/s
[    1.758770] raid6: int64x1  xor()   517 MB/s
[    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
[    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
[    1.767957] raid6: using neon recovery algorithm
[    1.772824] xen:balloon: Initialising balloon driver
[    1.778021] iommu: Default domain type: Translated
[    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
[    1.789149] SCSI subsystem initialized
[    1.792820] usbcore: registered new interface driver usbfs
[    1.798254] usbcore: registered new interface driver hub
[    1.803626] usbcore: registered new device driver usb
[    1.808761] pps_core: LinuxPPS API ver. 1 registered
[    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo
Giometti <giometti@linux.it>
[    1.822903] PTP clock support registered
[    1.826893] EDAC MC: Ver: 3.0.0
[    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox
with TX/RX channels.
[    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox
with TX/RX channels.
[    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox
with TX/RX channels.
[    1.855907] FPGA manager framework
[    1.859952] clocksource: Switched to clocksource arch_sys_counter
[    1.871712] NET: Registered PF_INET protocol family
[    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes,
linear)
[    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2,
16384 bytes, linear)
[    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144
bytes, linear)
[    1.894846] TCP established hash table entries: 16384 (order: 5, 131072
bytes, linear)
[    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes,
linear)
[    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
[    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
[    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes,
linear)
[    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    1.936834] RPC: Registered named UNIX socket transport module.
[    1.942342] RPC: Registered udp transport module.
[    1.947088] RPC: Registered tcp transport module.
[    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    1.958334] PCI: CLS 0 bytes, default 64
[    1.962709] Trying to unpack rootfs image as initramfs...
[    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
[    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[    2.021045] NET: Registered PF_ALG protocol family
[    2.021122] xor: measuring software checksum speed
[    2.029347]    8regs           :  2366 MB/sec
[    2.033081]    32regs          :  2802 MB/sec
[    2.038223]    arm64_neon      :  2320 MB/sec
[    2.038385] xor: using function: 32regs (2802 MB/sec)
[    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded
(major 247)
[    2.050959] io scheduler mq-deadline registered
[    2.055521] io scheduler kyber registered
[    2.068227] xen:xen_evtchn: Event-channel device installed
[    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
[    2.085548] brd: module loaded
[    2.089290] loop: module loaded
[    2.089341] Invalid max_queues (4), will use default max: 2.
[    2.094565] tun: Universal TUN/TAP device driver, 1.6
[    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
[    2.104156] usbcore: registered new interface driver rtl8150
[    2.109813] usbcore: registered new interface driver r8152
[    2.115367] usbcore: registered new interface driver asix
[    2.120794] usbcore: registered new interface driver ax88179_178a
[    2.126934] usbcore: registered new interface driver cdc_ether
[    2.132816] usbcore: registered new interface driver cdc_eem
[    2.138527] usbcore: registered new interface driver net1080
[    2.144256] usbcore: registered new interface driver cdc_subset
[    2.150205] usbcore: registered new interface driver zaurus
[    2.155837] usbcore: registered new interface driver cdc_ncm
[    2.161550] usbcore: registered new interface driver r8153_ecm
[    2.168240] usbcore: registered new interface driver cdc_acm
[    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems
and ISDN adapters
[    2.181358] usbcore: registered new interface driver uas
[    2.186547] usbcore: registered new interface driver usb-storage
[    2.192643] usbcore: registered new interface driver ftdi_sio
[    2.198384] usbserial: USB Serial support registered for FTDI USB Serial
Device
[    2.206118] udc-core: couldn't find an available UDC - added
[g_mass_storage] to list of pending drivers
[    2.215332] i2c_dev: i2c /dev entries driver
[    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
[    2.225923] device-mapper: uevent: version 1.0.3
[    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised:
dm-devel@redhat.com
[    2.239315] EDAC MC0: Giving out device to module 1 controller
synps_ddr_controller: DEV synps_edac (INTERRUPT)
[    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac
controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
[    2.261719] sdhci: Secure Digital Host Controller Interface driver
[    2.267487] sdhci: Copyright(c) Pierre Ossman
[    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
[    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
[    2.283816] zynqmp_firmware_probe Platform Management API v1.1
[    2.289554] zynqmp_firmware_probe Trustzone version v1.0
[    2.327875] securefw securefw: securefw probed
[    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
[    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES
Successfully Registered
[    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
[    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
[    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
[    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
[    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
[    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
[    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info:
1.512.15.0 KeyLen: 32
[    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler.
Retrying...
[    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
[    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
[    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
[    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI
Count: 512 Event Count: 32
[    2.420856] default preset
[    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
[    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
[    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
[    2.441976] vmcu driver init
[    2.444922] VMCU: : (240:0) registered
[    2.444956] In K81 Updater init
[    2.449003] pktgen: Packet Generator for packet performance testing.
Version: 2.75
[    2.468833] Initializing XFRM netlink socket
[    2.468902] NET: Registered PF_PACKET protocol family
[    2.472729] Bridge firewalling registered
[    2.476785] 8021q: 802.1Q VLAN Support v1.8
[    2.481341] registered taskstats version 1
[    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
[    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36,
base_baud = 6250000) is a xuartps
[    2.507103] of-fpga-region fpga-full: FPGA Region probed
[    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver
Probe success
[    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver
Probe success
[    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver
Probe success
[    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver
Probe success
[    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver
Probe success
[    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver
Probe success
[    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver
Probe success
[    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver
Probe success
[    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver
Probe success
[    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver
Probe success
[    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
[    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
[    2.952393] Creating 2 MTD partitions on "spi0.0":
[    2.957231] 0x000004000000-0x000008000000 : "bank A"
[    2.963332] 0x000000000000-0x000004000000 : "bank B"
[    2.968694] macb ff0b0000.ethernet: Not enabling partial store and
forward
[    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at
0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
[    2.984472] macb ff0c0000.ethernet: Not enabling partial store and
forward
[    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at
0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
[    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
[    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate
interface QSGMII
[    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate
interface QSGMII
[    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate
interface type 18
[    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate
interface QSGMII
[    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate
interface QSGMII
[    3.045301] viper_enet viper_enet: Viper enet registered
[    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
[    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
[    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
[    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
[    3.097729] si70xx: probe of 2-0040 failed with error -5
[    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with
timeout 60s
[    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with
timeout 10s
[    3.112457] viper-tamper viper-tamper: Device registered
[    3.117593] active_bank active_bank: boot bank: 1
[    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
[    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
[    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info:
1.512.15.0 KeyLen: 32
[    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
[    3.147438] viper-vdpp a4000000.vdpp: Device registered
[    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
[    3.158582] lpc55_user lpc55_user: The major number for your device is
236
[    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
[    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
[    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
[    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
[    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
[    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
[    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using
ADMA 64-bit
[    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
[    3.215694] lpc55_l2 spi1.0: rx error: -110
[    3.284438] mmc0: new HS200 MMC card at address 0001
[    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
[    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
[    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
[    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
[    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
[    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
[    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware
clock
[    3.591252] cdns-i2c ff020000.i2c: recovery information complete
[    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
[    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
[    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
[    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
[    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
[    3.624224] rtc-rv3028 0-0052: registered as rtc1
[    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
[    3.633253] lpc55_l2 spi1.0: rx error: -110
[    3.639104] k81_bootloader 0-0010: probe
[    3.641628] VMCU: : (235:0) registered
[    3.641635] k81_bootloader 0-0010: probe completed
[    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
[    3.669154] cdns-i2c ff030000.i2c: recovery information complete
[    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
[    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
[    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
[    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
[    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
[    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
[    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C
switch pca9546
[    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
[    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
[    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
[    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
[    3.737549] sfp_register_socket: got sfp_bus
[    3.740709] sfp_register_socket: register sfp_bus
[    3.745459] sfp_register_bus: ops ok!
[    3.749179] sfp_register_bus: Try to attach
[    3.753419] sfp_register_bus: Attach succeeded
[    3.757914] sfp_register_bus: upstream ops attach
[    3.762677] sfp_register_bus: Bus registered
[    3.766999] sfp_register_socket: register sfp_bus succeeded
[    3.775870] of_cfs_init
[    3.776000] of_cfs_init: OK
[    3.778211] clk: Not disabling unused clocks
*[   11.278477] Freeing initrd memory: 206056K*
*[   11.279406] Freeing unused kernel memory: 1536K*
[   11.314006] Checked W+X mappings: passed, no W+X pages found
[   11.314142] Run /init as init process
INIT: version 3.01 booting
fsck (busybox 1.35.0)
/dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
/dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
/dev/mmcblk0p3 was not cleanly unmounted, check forced.
/dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
[   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal.
Opts: (null). Quota mode: disabled.
Starting random number generator daemon.
[   11.580662] random: crng init done
Starting udev
[   11.613159] udevd[142]: starting version 3.2.10
[   11.620385] udevd[143]: starting eudev-3.2.10
[   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
[   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
[   12.063396] ip_local_port_range: prefer different parity for start/end
values.
[   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
hwclock: RTC_RD_TIME: Invalid exchange
Mon Feb 27 08:40:53 UTC 2023
[   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
hwclock: RTC_SET_TIME: Invalid exchange
[   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
Starting mcud
INIT: Entering runlevel: 5
Configuring network interfaces... done.
resetting network interface
[   12.718295] macb ff0b0000.ethernet control_red: PHY
[ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
[   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii
link mode
[   12.732151] pps pps0: new PPS source ptp0
[   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
[   12.745724] macb ff0c0000.ethernet control_black: PHY
[ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
[   12.753469] macb ff0c0000.ethernet control_black: configuring for
phy/gmii link mode
[   12.761804] pps pps1: new PPS source ptp1
[   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
Auto-negotiation: off
Auto-negotiation: off
[   16.828151] macb ff0b0000.ethernet control_red: unable to generate
target frequency: 125000000 Hz
[   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full
- flow control off
[   16.860552] macb ff0c0000.ethernet control_black: unable to generate
target frequency: 125000000 Hz
[   16.867052] macb ff0c0000.ethernet control_black: Link is Up -
1Gbps/Full - flow control off
Starting Failsafe Secure Shell server in port 2222: sshd
done.
Starting rpcbind daemon...done.

[   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
hwclock: RTC_RD_TIME: Invalid exchange
Starting State Manager Service
Start state-manager restarter...
(XEN) d0v1 Forwarding AES operation: 3254779951
Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid
80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned
by udevd (385)
[   17.349933] BTRFS info (device dm-0): disk space caching is enabled
[   17.350670] BTRFS info (device dm-0): has skinny extents
[   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
[   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e
devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
[   17.872699] BTRFS info (device dm-1): using free space tree
[   17.872771] BTRFS info (device dm-1): has skinny extents
[   17.878114] BTRFS info (device dm-1): flagging fs with big metadata
feature
[   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
[   17.895695] BTRFS info (device dm-1): checking UUID tree

Setting domain 0 name, domid and JSON config...
Done setting up Dom0
Starting xenconsoled...
Starting QEMU as disk backend for dom0
Starting domain watchdog daemon: xenwatchdogd startup

[   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921
devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
[done]
[   18.465552] BTRFS info (device dm-2): using free space tree
[   18.465629] BTRFS info (device dm-2): has skinny extents
[   18.471002] BTRFS info (device dm-2): flagging fs with big metadata
feature
Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd
optimizations
[   18.486659] BTRFS info (device dm-2): checking UUID tree
OK
starting rsyslogd ... Log partition ready after 0 poll loops
done
rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable
[v8.2208.0 try https://www.rsyslog.com/e/2027 ]
[   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095
devid 1 transid 608 /dev/dm-3 scanned by udevd (518)

Please insert USB token and enter your role in login prompt.

login:

Regards,
O.


Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org>:

> Hi Oleg,
>
> Here is the issue from your logs:
>
> SError Interrupt on CPU0, code 0xbe000000 -- SError
>
> SErrors are special signals that notify software of serious hardware
> errors.  Something is going very wrong. Defective hardware is one
> possibility.  Another possibility is software accessing address ranges
> that it is not supposed to; that sometimes causes SErrors.
>
> Cheers,
>
> Stefano
>
>
>
> On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
>
> > Hello,
> >
> > Thanks guys.
> > I found out where the problem was.
> > Now dom0 boots further, but I have a new problem.
> > This is a kernel panic during Dom0 loading.
> > Maybe someone is able to suggest something?
> >
> > Regards,
> > O.
> >
> > [    3.771362] sfp_register_bus: upstream ops attach
> > [    3.776119] sfp_register_bus: Bus registered
> > [    3.780459] sfp_register_socket: register sfp_bus succeeded
> > [    3.789399] of_cfs_init
> > [    3.789499] of_cfs_init: OK
> > [    3.791685] clk: Not disabling unused clocks
> > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted
> 5.15.72-xilinx-v2022.1 #1
> > [   11.010393] Workqueue: events_unbound async_run_entry_fn
> > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS
> BTYPE=--)
> > [   11.010422] pc : simple_write_end+0xd0/0x130
> > [   11.010431] lr : generic_perform_write+0x118/0x1e0
> > [   11.010438] sp : ffffffc00809b910
> > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27:
> ffffffef69ba88c0
> > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24:
> 0000000000000000
> > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21:
> ffffff807315a260
> > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18:
> 0000000000000000
> > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15:
> 0000000000000000
> > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12:
> 0000000000000000
> > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 :
> 0000000000000000
> > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 :
> 000000002d89b700
> > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 :
> 0000000000001000
> > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 :
> 0000000000000005
> > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted
> 5.15.72-xilinx-v2022.1 #1
> > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
> > [   11.010548] Workqueue: events_unbound async_run_entry_fn
> > [   11.010556] Call trace:
> > [   11.010558]  dump_backtrace+0x0/0x1c4
> > [   11.010567]  show_stack+0x18/0x2c
> > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> > [   11.010583]  dump_stack+0x18/0x34
> > [   11.010588]  panic+0x14c/0x2f8
> > [   11.010597]  print_tainted+0x0/0xb0
> > [   11.010606]  arm64_serror_panic+0x6c/0x7c
> > [   11.010614]  do_serror+0x28/0x60
> > [   11.010621]  el1h_64_error_handler+0x30/0x50
> > [   11.010628]  el1h_64_error+0x78/0x7c
> > [   11.010633]  simple_write_end+0xd0/0x130
> > [   11.010639]  generic_perform_write+0x118/0x1e0
> > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
> > [   11.010650]  generic_file_write_iter+0x78/0xd0
> > [   11.010656]  __kernel_write+0xfc/0x2ac
> > [   11.010665]  kernel_write+0x88/0x160
> > [   11.010673]  xwrite+0x44/0x94
> > [   11.010680]  do_copy+0xa8/0x104
> > [   11.010686]  write_buffer+0x38/0x58
> > [   11.010692]  flush_buffer+0x4c/0xbc
> > [   11.010698]  __gunzip+0x280/0x310
> > [   11.010704]  gunzip+0x1c/0x28
> > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> > [   11.010715]  do_populate_rootfs+0x80/0x164
> > [   11.010722]  async_run_entry_fn+0x48/0x164
> > [   11.010728]  process_one_work+0x1e4/0x3a0
> > [   11.010736]  worker_thread+0x7c/0x4c0
> > [   11.010743]  kthread+0x120/0x130
> > [   11.010750]  ret_from_fork+0x10/0x20
> > [   11.010757] SMP: stopping secondary CPUs
> > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
> > [   11.010788] PHYS_OFFSET: 0x0
> > [   11.010790] CPU features: 0x00000401,00000842
> > [   11.010795] Memory Limit: none
> > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError
> Interrupt ]---
> >
> > Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com>:
> >       Hi Oleg,
> >
> >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
> >       >
> >       >
> >       >
> >       > Hello Michal,
> >       >
> >       > I was not able to enable earlyprintk in the xen for now.
> >       > I decided to choose another way.
> >       > This is a xen's command line that I found out completely.
> >       >
> >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M
> dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> >       timer_slop=0
> >       Yes, adding a printk() in Xen was also a good idea.
> >
> >       >
> >       > So you are absolutely right about a command line.
> >       > Now I am going to find out why xen did not have the correct
> parameters from the device tree.
> >       Maybe you will find this document helpful:
> >
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> >
> >       ~Michal
> >
> >       >
> >       > Regards,
> >       > Oleg
> >       >
> >       > Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com>:
> >       >
> >       >
> >       >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
> >       >     >
> >       >     >
> >       >     >
> >       >     > Hello Michal,
> >       >     >
> >       >     > Yes, I use yocto.
> >       >     >
> >       >     > Yesterday all day long I tried to follow your suggestions.
> >       >     > I faced a problem.
> >       >     > Manually in the xen config build file I pasted the strings:
> >       >     In the .config file or in some Yocto file (listing
> additional Kconfig options) added to SRC_URI?
> >       >     You shouldn't really modify .config file but if you do, you
> should execute "make olddefconfig" afterwards.
> >       >
> >       >     >
> >       >     > CONFIG_EARLY_PRINTK
> >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
> >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
> >       >     I hope you added =y to them.
> >       >
> >       >     Anyway, you have at least the following solutions:
> >       >     1) Run bitbake xen -c menuconfig to properly set early printk
> >       >     2) Find out how you enable other Kconfig options in your
> project (e.g. CONFIG_COLORING=y that is not enabled by default)
> >       >     3) Append the following to
> "xen/arch/arm/configs/arm64_defconfig":
> >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
> >       >
> >       >     ~Michal
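
Michal's option 3 can be sketched as a few shell commands. This is only an illustration: the ./xen-src path is hypothetical (in a Yocto build the Xen tree lives under the recipe's work directory), and the exact option names are the ones quoted earlier in the thread.

```shell
# Sketch: append the early-printk options to the arm64 defconfig of a Xen
# source tree. The ./xen-src location is illustrative, not a real path.
XEN_SRC=./xen-src
mkdir -p "$XEN_SRC/xen/arch/arm/configs"
touch "$XEN_SRC/xen/arch/arm/configs/arm64_defconfig"
cat >> "$XEN_SRC/xen/arch/arm/configs/arm64_defconfig" <<'EOF'
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
CONFIG_EARLY_UART_CHOICE_CADENCE=y
EOF
# The build's "make olddefconfig" pass then resolves any dependent options.
grep 'CONFIG_EARLY_PRINTK' "$XEN_SRC/xen/arch/arm/configs/arm64_defconfig"
```
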
> >       >
> >       >     >
> >       >     > Host hangs in build time.
> >       >     > Maybe I did not set something in the config build file ?
> >       >     >
> >       >     > Regards,
> >       >     > Oleg
> >       >     >
> >       >     > Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       >     >
> >       >     >     Thanks Michal,
> >       >     >
> >       >     >     You gave me an idea.
> >       >     >     I am going to try it today.
> >       >     >
> >       >     >     Regards,
> >       >     >     O.
> >       >     >
> >       >     >     Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       >     >
> >       >     >         Thanks Stefano.
> >       >     >
> >       >     >         I am going to do it today.
> >       >     >
> >       >     >         Regards,
> >       >     >         O.
> >       >     >
> >       >     >         Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
> >       >     >
> >       >     >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> >       >     >             > Hi Michal,
> >       >     >             >
> >       >     >             > I corrected xen's command line.
> >       >     >             > Now it is
> >       >     >             > xen,xen-bootargs = "console=dtuart
> dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin
> >       bootscrub=0 vwfi=native sched=null
> >       >     >             > timer_slop=0 way_size=65536 xen_colors=0-3
> dom0_colors=4-7";
> >       >     >
> >       >     >             4 colors is way too many for xen, just do
> xen_colors=0-0. There is no
> >       >     >             advantage in using more than 1 color for Xen.
> >       >     >
> >       >     >             4 colors is too few for dom0, if you are
> giving 1600M of memory to Dom0.
> >       >     >             Each color is 256M. For 1600M you should give
> at least 7 colors. Try:
> >       >     >
> >       >     >             xen_colors=0-0 dom0_colors=1-8
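
Stefano's sizing rule can be sanity-checked with shell arithmetic, taking the 256M-per-color figure from his message:

```shell
# Minimum number of cache colors needed for a dom0 memory allocation,
# assuming each color maps 256M of memory (the figure given above).
dom0_mem_mib=1600
color_size_mib=256
colors=$(( (dom0_mem_mib + color_size_mib - 1) / color_size_mib ))  # ceiling division
echo "dom0 needs at least $colors colors"
```

For 1600M this yields 7 colors, which is why the original dom0_colors=4-7 (4 colors, i.e. 1024M) was too few.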
> >       >     >
> >       >     >
> >       >     >
> >       >     >             > Unfortunately the result was the same.
> >       >     >             >
> >       >     >             > (XEN)  - Dom0 mode: Relaxed
> >       >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and
> 8-bit VMID
> >       >     >             > (XEN) P2M: 3 levels with order-1 root, VTCR
> 0x0000000080023558
> >       >     >             > (XEN) Scheduling granularity: cpu, 1 CPU per
> sched-resource
> >       >     >             > (XEN) Coloring general information
> >       >     >             > (XEN) Way size: 64kB
> >       >     >             > (XEN) Max. number of colors available: 16
> >       >     >             > (XEN) Xen color(s): [ 0 ]
> >       >     >             > (XEN) alternatives: Patching with alt table
> 00000000002cc690 -> 00000000002ccc0c
> >       >     >             > (XEN) Color array allocation failed for dom0
> >       >     >             > (XEN)
> >       >     >             > (XEN)
> ****************************************
> >       >     >             > (XEN) Panic on CPU 0:
> >       >     >             > (XEN) Error creating domain 0
> >       >     >             > (XEN)
> ****************************************
> >       >     >             > (XEN)
> >       >     >             > (XEN) Reboot in five seconds...
> >       >     >             >
> >       >     >             > I am going to find out how command line
> arguments passed and parsed.
> >       >     >             >
> >       >     >             > Regards,
> >       >     >             > Oleg
> >       >     >             >
> >       >     >             > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       >     >             >       Hi Michal,
> >       >     >             >
> >       >     >             > You pointed me right at the problem. Thank you.
> >       >     >             > I am going to use your point.
> >       >     >             > Let's see what happens.
> >       >     >             >
> >       >     >             > Regards,
> >       >     >             > Oleg
> >       >     >             >
> >       >     >             >
> >       >     >             > Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
> >       >     >             >       Hi Oleg,
> >       >     >             >
> >       >     >             >       On 19/04/2023 09:03, Oleg Nikitenko
> wrote:
> >       >     >             >       >
> >       >     >             >       >
> >       >     >             >       >
> >       >     >             >       > Hello Stefano,
> >       >     >             >       >
> >       >     >             >       > Thanks for the clarification.
> >       >     >             >       > My company uses yocto for image
> generation.
> >       >     >             >       > What kind of information do you need
> to advise me in this case?
> >       >     >             >       >
> >       >     >             >       > Maybe the module sizes/addresses which
> were mentioned by @Julien Grall <julien@xen.org>?
> >       >     >             >
> >       >     >             >       Sorry for jumping into the discussion, but
> FWICS the Xen command line you provided seems not to be the one
> >       >     >             >       Xen booted with. The error you are
> observing most likely is due to dom0 colors configuration not
> >       being
> >       >     >             >       specified (i.e. lack of dom0_colors=<>
> parameter). Although in the command line you provided, this
> >       parameter
> >       >     >             >       is set, I strongly doubt that this is
> the actual command line in use.
> >       >     >             >
> >       >     >             >       You wrote:
> >       >     >             >       xen,xen-bootargs = "console=dtuart
> dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin
> >       bootscrub=0 vwfi=native
> >       >     >             >       sched=null timer_slop=0
> way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
> >       >     >             >
> >       >     >             >       but:
> >       >     >             >       1) way_szize has a typo
> >       >     >             >       2) you specified 4 colors (0-3) for
> Xen, but the boot log says that Xen has only one:
> >       >     >             >       (XEN) Xen color(s): [ 0 ]
> >       >     >             >
> >       >     >             >       This makes me believe that no colors
> configuration actually ended up in the command line that Xen booted
> >       with.
> >       >     >             >       Single color for Xen is a "default if
> not specified" and way size was probably calculated by asking
> >       HW.
> >       >     >             >
> >       >     >             >       So I would suggest first
> cross-checking the command line in use.
> >       >     >             >
> >       >     >             >       ~Michal
> >       >     >             >
> >       >     >             >
> >       >     >             >       >
> >       >     >             >       > Regards,
> >       >     >             >       > Oleg
> >       >     >             >       >
> >       >     >             >       > Tue, 18 Apr 2023 at 20:44, Stefano
> Stabellini <sstabellini@kernel.org>:
> >       >     >             >       >
> >       >     >             >       >     On Tue, 18 Apr 2023, Oleg
> Nikitenko wrote:
> >       >     >             >       >     > Hi Julien,
> >       >     >             >       >     >
> >       >     >             >       >     > >> This feature has not been
> merged in Xen upstream yet
> >       >     >             >       >     >
> >       >     >             >       >     > > would assume that upstream +
> the series on the ML [1] work
> >       >     >             >       >     >
> >       >     >             >       >     > Please clarify this point.
> >       >     >             >       >     > Because the two thoughts are
> controversial.
> >       >     >             >       >
> >       >     >             >       >     Hi Oleg,
> >       >     >             >       >
> >       >     >             >       >     As Julien wrote, there is
> nothing controversial. As you are aware,
> >       >     >             >       >     Xilinx maintains a separate Xen
> tree specific for Xilinx here:
> >       >     >             >       >     https://github.com/xilinx/xen
> >       >     >             >       >
> >       >     >             >       >     and the branch you are using
> (xlnx_rebase_4.16) comes from there.
> >       >     >             >       >
> >       >     >             >       >
> >       >     >             >       >     Instead, the upstream Xen tree
> lives here:
> >       >     >             >       >
> https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> >       >     >             >       >
> >       >     >             >       >
> >       >     >             >       >     The Cache Coloring feature that
> you are trying to configure is present
> >       >     >             >       >     in xlnx_rebase_4.16, but not yet
> present upstream (there is an
> >       >     >             >       >     outstanding patch series to add
> cache coloring to Xen upstream but it
> >       >     >             >       >     hasn't been merged yet.)
> >       >     >             >       >
> >       >     >             >       >
> >       >     >             >       >     Anyway, if you are using
> xlnx_rebase_4.16 it doesn't matter too much for
> >       >     >             >       >     you as you already have Cache
> Coloring as a feature there.
> >       >     >             >       >
> >       >     >             >       >
> >       >     >             >       >     I take it you are using
> ImageBuilder to generate the boot configuration? If
> >       >     >             >       >     so, please post the ImageBuilder
> config file that you are using.
> >       >     >             >       >
> >       >     >             >       >     But from the boot message, it
> looks like the colors configuration for
> >       >     >             >       >     Dom0 is incorrect.
> >       >     >             >       >
> >       >     >             >
> >       >     >             >
> >       >     >             >
> >       >     >
> >       >
> >
> >
> >

Hi Stefano,

Thank you.
If I build Xen without colors support, this error does not occur.
All the domains boot fine.
Hence it cannot be a hardware issue.
The panic arrives during unpacking of the rootfs.
I attached the xen/Dom0 boot log without coloring.
The highlighted strings are printed exactly after the place where the
panic first arrived.

 Xen 4.16.1-pre
(XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
(XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
(XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
(XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03,rev 0x4
(XEN) 64-bit Execution:
(XEN)   Processor Features: 0000000000002222 0000000000000000
(XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
(XEN)     Extensions: FloatingPoint AdvancedSIMD
(XEN)   Debug Features: 0000000010305106 0000000000000000
(XEN)   Auxiliary Features: 0000000000000000 0000000000000000
(XEN)   Memory Model Features: 0000000000001122 0000000000000000
(XEN)   ISA Features:  0000000000011120 0000000000000000
(XEN) 32-bit Execution:
(XEN)   Processor Features: 0000000000000131:0000000000011011
(XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
(XEN)     Extensions: GenericTimer Security
(XEN)   Debug Features: 0000000003010066
(XEN)   Auxiliary Features: 0000000000000000
(XEN)   Memory Model Features: 0000000010201105 0000000040000000
(XEN)                          0000000001260000 0000000002102211
(XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
(XEN)                 0000000001112131 0000000000011142 0000000000011121
(XEN) Using SMC Calling Convention v1.2
(XEN) Using PSCI v1.1
(XEN) SMP: Allowing 4 CPUs
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
(XEN) GICv2 initialization:
(XEN)         gic_dist_addr=00000000f9010000
(XEN)         gic_cpu_addr=00000000f9020000
(XEN)         gic_hyp_addr=00000000f9040000
(XEN)         gic_vcpu_addr=00000000f9060000
(XEN)         gic_maintenance_irq=25
(XEN) GICv2: Adjusting CPU interface base to 0xf902f000
(XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
(XEN) Using scheduler: null Scheduler (null)
(XEN) Initializing null scheduler
(XEN) WARNING: This is experimental software in development.
(XEN) Use at your own risk.
(XEN) Allocated console ring of 32 KiB.
(XEN) CPU0: Guest atomics will try 12 times before pausing the domain
(XEN) Bringing up CPU1
(XEN) CPU1: Guest atomics will try 13 times before pausing the domain
(XEN) CPU 1 booted.
(XEN) Bringing up CPU2
(XEN) CPU2: Guest atomics will try 13 times before pausing the domain
(XEN) CPU 2 booted.
(XEN) Bringing up CPU3
(XEN) CPU3: Guest atomics will try 13 times before pausing the domain
(XEN) Brought up 4 CPUs
(XEN) CPU 3 booted.
(XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
(XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
(XEN) smmu: /axi/smmu@fd800000: stage 2 translation
(XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
<2>smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
(XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
(XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
(XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading d0 kernel from boot module @ 0000000001000000
(XEN) Loading ramdisk from boot module @ 0000000002000000
(XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
(XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
(XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
(XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
(XEN) Grant table range: 0x00000000e00000-0x00000000e40000
(XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Extended region 0: 0x81200000->0xa0000000
(XEN) Extended region 1: 0xb1200000->0xc0000000
(XEN) Extended region 2: 0xc8000000->0xe0000000
(XEN) Extended region 3: 0xf0000000->0xf9000000
(XEN) Extended region 4: 0x100000000->0x600000000
(XEN) Extended region 5: 0x880000000->0x8000000000
(XEN) Extended region 6: 0x8001000000->0x10000000000
(XEN) Loading zImage from 0000000001000000 to 0000000010000000-0000000010e41008
(XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
(XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
(XEN) null.c:353: 0 <-- d0v0
(XEN) Freed 356kB init memory.
(XEN) d0v0 Unhandled SMC/HVC: 0x84000050
(XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
(XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
[    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
[    0.000000] Machine model: D14 Viper Board - White Unit
[    0.000000] Xen 4.16 support found
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
[    0.000000]   DMA32    empty
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
[    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
[    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
[    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
[    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
[    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
[    0.000000] On node 0, zone DMA: 8192 pages in unavailable ran
ges<br>[ =C2=A0 =C2=A00.000000] On node 0, zone DMA: 184 pages in unavailab=
le ranges<br>[ =C2=A0 =C2=A00.000000] On node 0, zone DMA: 7352 pages in un=
available ranges<br>[ =C2=A0 =C2=A00.000000] cma: Reserved 256 MiB at 0x000=
000006e000000<br>[ =C2=A0 =C2=A00.000000] psci: probing for conduit method =
from DT.<br>[ =C2=A0 =C2=A00.000000] psci: PSCIv1.1 detected in firmware.<b=
r>[ =C2=A0 =C2=A00.000000] psci: Using standard PSCI v0.2 function IDs<br>[=
 =C2=A0 =C2=A00.000000] psci: Trusted OS migration not required<br>[ =C2=A0=
 =C2=A00.000000] psci: SMC Calling Convention v1.1<br>[ =C2=A0 =C2=A00.0000=
00] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536<br>[ =C2=A0 =C2=
=A00.000000] Detected VIPT I-cache on CPU0<br>[ =C2=A0 =C2=A00.000000] CPU =
features: kernel page table isolation forced ON by KASLR<br>[ =C2=A0 =C2=A0=
0.000000] CPU features: detected: Kernel page table isolation (KPTI)<br>[ =
=C2=A0 =C2=A00.000000] Built 1 zonelists, mobility grouping on.=C2=A0 Total=
 pages: 403845<br>[ =C2=A0 =C2=A00.000000] Kernel command line: console=3Dh=
vc0 earlycon=3Dxen earlyprintk=3Dxen clk_ignore_unused fips=3D1 root=3D/dev=
/ram0 maxcpus=3D2<br>[ =C2=A0 =C2=A00.000000] Unknown kernel command line p=
arameters &quot;earlyprintk=3Dxen fips=3D1&quot;, will be passed to user sp=
ace.<br>[ =C2=A0 =C2=A00.000000] Dentry cache hash table entries: 262144 (o=
rder: 9, 2097152 bytes, linear)<br>[ =C2=A0 =C2=A00.000000] Inode-cache has=
h table entries: 131072 (order: 8, 1048576 bytes, linear)<br>[ =C2=A0 =C2=
=A00.000000] mem auto-init: stack:off, heap alloc:on, heap free:on<br>[ =C2=
=A0 =C2=A00.000000] mem auto-init: clearing system memory may take some tim=
e...<br>[ =C2=A0 =C2=A00.000000] Memory: 1121936K/1641024K available (9728K=
 kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K rese=
rved, 262144K cma-reserved)<br>[ =C2=A0 =C2=A00.000000] SLUB: HWalign=3D64,=
 Order=3D0-3, MinObjects=3D0, CPUs=3D2, Nodes=3D1<br>[ =C2=A0 =C2=A00.00000=
0] rcu: Hierarchical RCU implementation.<br>[ =C2=A0 =C2=A00.000000] rcu: 	=
RCU event tracing is enabled.<br>[ =C2=A0 =C2=A00.000000] rcu: 	RCU restric=
ting CPUs from NR_CPUS=3D8 to nr_cpu_ids=3D2.<br>[ =C2=A0 =C2=A00.000000] r=
cu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.<br>[ =
=C2=A0 =C2=A00.000000] rcu: Adjusting geometry for rcu_fanout_leaf=3D16, nr=
_cpu_ids=3D2<br>[ =C2=A0 =C2=A00.000000] NR_IRQS: 64, nr_irqs: 64, prealloc=
ated irqs: 0<br>[ =C2=A0 =C2=A00.000000] Root IRQ handler: gic_handle_irq<b=
r>[ =C2=A0 =C2=A00.000000] arch_timer: cp15 timer(s) running at 100.00MHz (=
virt).<br>[ =C2=A0 =C2=A00.000000] clocksource: arch_sys_counter: mask: 0xf=
fffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns<br>[ =
=C2=A0 =C2=A00.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wra=
ps every 4398046511100ns<br>[ =C2=A0 =C2=A00.000258] Console: colour dummy =
device 80x25<br>[ =C2=A0 =C2=A00.310231] printk: console [hvc0] enabled<br>=
[    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
[    0.324851] pid_max: default: 32768 minimum: 301
[    0.329706] LSM: Security Framework initializing
[    0.334204] Yama: becoming mindful.
[    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
[    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
[    0.354743] xen:grant_table: Grant tables using version 1 layout
[    0.359132] Grant table initialized
[    0.362664] xen:events: Using FIFO-based ABI
[    0.366993] Xen: initializing cpu0
[    0.370515] rcu: Hierarchical SRCU implementation.
[    0.375930] smp: Bringing up secondary CPUs ...
(XEN) null.c:353: 1 <-- d0v1
(XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
[    0.382549] Detected VIPT I-cache on CPU1
[    0.388712] Xen: initializing cpu1
[    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
[    0.388829] smp: Brought up 1 node, 2 CPUs
[    0.406941] SMP: Total of 2 processors activated.
[    0.411698] CPU features: detected: 32-bit EL0 Support
[    0.416888] CPU features: detected: CRC32 instructions
[    0.422121] CPU: All CPU(s) started at EL1
[    0.426248] alternatives: patching kernel code
[    0.431424] devtmpfs: initialized
[    0.441454] KASLR enabled
[    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
[    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
[    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[    0.519478] audit: initializing netlink subsys (disabled)
[    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
[    0.529169] thermal_sys: Registered thermal governor 'step_wise'
[    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
[    0.545608] ASID allocator initialised with 32768 entries
[    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
[    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
[    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
[    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
[    0.636520] DRBG: Continuing without Jitter RNG
[    0.737187] raid6: neonx8   gen()  2143 MB/s
[    0.805294] raid6: neonx8   xor()  1589 MB/s
[    0.873406] raid6: neonx4   gen()  2177 MB/s
[    0.941499] raid6: neonx4   xor()  1556 MB/s
[    1.009612] raid6: neonx2   gen()  2072 MB/s
[    1.077715] raid6: neonx2   xor()  1430 MB/s
[    1.145834] raid6: neonx1   gen()  1769 MB/s
[    1.213935] raid6: neonx1   xor()  1214 MB/s
[    1.282046] raid6: int64x8  gen()  1366 MB/s
[    1.350132] raid6: int64x8  xor()   773 MB/s
[    1.418259] raid6: int64x4  gen()  1602 MB/s
[    1.486349] raid6: int64x4  xor()   851 MB/s
[    1.554464] raid6: int64x2  gen()  1396 MB/s
[    1.622561] raid6: int64x2  xor()   744 MB/s
[    1.690687] raid6: int64x1  gen()  1033 MB/s
[    1.758770] raid6: int64x1  xor()   517 MB/s
[    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
[    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
[    1.767957] raid6: using neon recovery algorithm
[    1.772824] xen:balloon: Initialising balloon driver
[    1.778021] iommu: Default domain type: Translated
[    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
[    1.789149] SCSI subsystem initialized
[    1.792820] usbcore: registered new interface driver usbfs
[    1.798254] usbcore: registered new interface driver hub
[    1.803626] usbcore: registered new device driver usb
[    1.808761] pps_core: LinuxPPS API ver. 1 registered
[    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    1.822903] PTP clock support registered
[    1.826893] EDAC MC: Ver: 3.0.0
[    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
[    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
[    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
[    1.855907] FPGA manager framework
[    1.859952] clocksource: Switched to clocksource arch_sys_counter
[    1.871712] NET: Registered PF_INET protocol family
[    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
[    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
[    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
[    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
[    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
[    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
[    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    1.936834] RPC: Registered named UNIX socket transport module.
[    1.942342] RPC: Registered udp transport module.
[    1.947088] RPC: Registered tcp transport module.
[    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    1.958334] PCI: CLS 0 bytes, default 64
[    1.962709] Trying to unpack rootfs image as initramfs...
[    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
[    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[    2.021045] NET: Registered PF_ALG protocol family
[    2.021122] xor: measuring software checksum speed
[    2.029347]    8regs           :  2366 MB/sec
[    2.033081]    32regs          :  2802 MB/sec
[    2.038223]    arm64_neon      :  2320 MB/sec
[    2.038385] xor: using function: 32regs (2802 MB/sec)
[    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
[    2.050959] io scheduler mq-deadline registered
[    2.055521] io scheduler kyber registered
[    2.068227] xen:xen_evtchn: Event-channel device installed
[    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
[    2.085548] brd: module loaded
[    2.089290] loop: module loaded
[    2.089341] Invalid max_queues (4), will use default max: 2.
[    2.094565] tun: Universal TUN/TAP device driver, 1.6
[    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
[    2.104156] usbcore: registered new interface driver rtl8150
[    2.109813] usbcore: registered new interface driver r8152
[    2.115367] usbcore: registered new interface driver asix
[    2.120794] usbcore: registered new interface driver ax88179_178a
[    2.126934] usbcore: registered new interface driver cdc_ether
[    2.132816] usbcore: registered new interface driver cdc_eem
[    2.138527] usbcore: registered new interface driver net1080
[    2.144256] usbcore: registered new interface driver cdc_subset
[    2.150205] usbcore: registered new interface driver zaurus
[    2.155837] usbcore: registered new interface driver cdc_ncm
[    2.161550] usbcore: registered new interface driver r8153_ecm
[    2.168240] usbcore: registered new interface driver cdc_acm
[    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
[    2.181358] usbcore: registered new interface driver uas
[    2.186547] usbcore: registered new interface driver usb-storage
[    2.192643] usbcore: registered new interface driver ftdi_sio
[    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
[    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
[    2.215332] i2c_dev: i2c /dev entries driver
[    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
[    2.225923] device-mapper: uevent: version 1.0.3
[    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
[    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
[    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
[    2.261719] sdhci: Secure Digital Host Controller Interface driver
[    2.267487] sdhci: Copyright(c) Pierre Ossman
[    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
[    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
[    2.283816] zynqmp_firmware_probe Platform Management API v1.1
[    2.289554] zynqmp_firmware_probe Trustzone version v1.0
[    2.327875] securefw securefw: securefw probed
[    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
[    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
[    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
[    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
[    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
[    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
[    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
[    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
[    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
[    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
[    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
[    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
[    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
[    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
[    2.420856] default preset
[    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
[    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
[    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
[    2.441976] vmcu driver init
[    2.444922] VMCU: : (240:0) registered
[    2.444956] In K81 Updater init
[    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
[    2.468833] Initializing XFRM netlink socket
[    2.468902] NET: Registered PF_PACKET protocol family
[    2.472729] Bridge firewalling registered
[    2.476785] 8021q: 802.1Q VLAN Support v1.8
[    2.481341] registered taskstats version 1
[    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
[    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
[    2.507103] of-fpga-region fpga-full: FPGA Region probed
[    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
[    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
[    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
[    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
[    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
[    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
[    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
[    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
[    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
[    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
[    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
[    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
[    2.952393] Creating 2 MTD partitions on "spi0.0":
[    2.957231] 0x000004000000-0x000008000000 : "bank A"
[    2.963332] 0x000000000000-0x000004000000 : "bank B"
[    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
[    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
[    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
[    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
[    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
[    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
[    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
[    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
[    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
[    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
[    3.045301] viper_enet viper_enet: Viper enet registered
[    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
[    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
[    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
[    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
[    3.097729] si70xx: probe of 2-0040 failed with error -5
[    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
[    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
[    3.112457] viper-tamper viper-tamper: Device registered
[    3.117593] active_bank active_bank: boot bank: 1
[    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
[    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
[    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
[    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
[    3.147438] viper-vdpp a4000000.vdpp: Device registered
[    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
[    3.158582] lpc55_user lpc55_user: The major number for your device is 236
[    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
[    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
[    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
[    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
[    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
[    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
[    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
[    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
[    3.215694] lpc55_l2 spi1.0: rx error: -110
[    3.284438] mmc0: new HS200 MMC card at address 0001
[    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
[    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
[    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
[    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
[    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
[    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
[    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
[    3.591252] cdns-i2c ff020000.i2c: recovery information complete
[    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
[    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
[    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
[    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
[    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
[    3.624224] rtc-rv3028 0-0052: registered as rtc1
[    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
[    3.633253] lpc55_l2 spi1.0: rx error: -110
[    3.639104] k81_bootloader 0-0010: probe
[    3.641628] VMCU: : (235:0) registered
[    3.641635] k81_bootloader 0-0010: probe completed
[    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
[    3.669154] cdns-i2c ff030000.i2c: recovery information complete
[    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
[    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
[    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
[    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
[    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
[    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
[    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
[    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
[    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
[    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
[    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
[    3.737549] sfp_register_socket: got sfp_bus
[    3.740709] sfp_register_socket: register sfp_bus
[    3.745459] sfp_register_bus: ops ok!
[    3.749179] sfp_register_bus: Try to attach
[    3.753419] sfp_register_bus: Attach succeeded
[    3.757914] sfp_register_bus: upstream ops attach
[    3.762677] sfp_register_bus: Bus registered
[    3.766999] sfp_register_socket: register sfp_bus succeeded
[    3.775870] of_cfs_init
[    3.776000] of_cfs_init: OK
[    3.778211] clk: Not disabling unused clocks
[   11.278477] Freeing initrd memory: 206056K
[   11.279406] Freeing unused kernel memory: 1536K
[   11.314006] Checked W+X mappings: passed, no W+X pages found
[   11.314142] Run /init as init process
INIT: version 3.01 booting
fsck (busybox 1.35.0)
/dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
/dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
/dev/mmcblk0p3 was not cleanly unmounted, check forced.
/dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
[   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
Starting random number generator daemon.
[   11.580662] random: crng init done
Starting udev
[   11.613159] udevd[142]: starting version 3.2.10
[   11.620385] udevd[143]: starting eudev-3.2.10
[   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
[   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
[   12.063396] ip_local_port_range: prefer different parity for start/end values.
[   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
hwclock: RTC_RD_TIME: Invalid exchange
Mon Feb 27 08:40:53 UTC 2023
[   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
hwclock: RTC_SET_TIME: Invalid exchange
[   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
Starting mcud
INIT: Entering runlevel: 5
Configuring network interfaces... done.
resetting network interface
[   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
[   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
[   12.732151] pps pps0: new PPS source ptp0
[   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
[   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
[   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
[   12.761804] pps pps1: new PPS source ptp1
[   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
	Auto-negotiation: off
	Auto-negotiation: off
[   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
[   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
[   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
[   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
Starting Failsafe Secure Shell server in port 2222: sshd
done.
Starting rpcbind daemon...done.

[   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
hwclock: RTC_RD_TIME: Invalid exchange
Starting State Manager Service
Start state-manager restarter...
(XEN) d0v1 Forwarding AES operation: 3254779951
Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
[   17.349933] BTRFS info (device dm-0): disk space caching is enabled
[   17.350670] BTRFS info (device dm-0): has skinny extents
[   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
[   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
[   17.872699] BTRFS info (device dm-1): using free space tree
[   17.872771] BTRFS info (device dm-1): has skinny extents
[   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
[   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
[   17.895695] BTRFS info (device dm-1): checking UUID tree

Setting domain 0 name, domid and JSON config...
Done setting up Dom0
Starting xenconsoled...
Starting QEMU as disk backend for dom0
Starting domain watchdog daemon: xenwatchdogd startup

[   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
[done]
[   18.465552] BTRFS info (device dm-2): using free space tree
[   18.465629] BTRFS info (device dm-2): has skinny extents
[   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
[   18.486659] BTRFS info (device dm-2): checking UUID tree
OK
starting rsyslogd ... Log partition ready after 0 poll loops
done
rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
[   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)

Please insert USB token and enter your role in login prompt.

login: 

Regards,
O.

On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
Hi Oleg,

Here is the issue from your logs:

SError Interrupt on CPU0, code 0xbe000000 -- SError

SErrors are special signals that notify software of serious hardware errors. Something is going very wrong; defective hardware is one possibility. Another possibility is software accessing address ranges it is not supposed to, which can sometimes cause SErrors.

Cheers,

Stefano



On Mon, 24 Apr 2023, Oleg Nikitenko wrote:

> Hello,
> 
> Thanks, guys.
> I found out where the problem was.
> Now dom0 boots further, but I have a new one:
> a kernel panic during Dom0 loading.
> Maybe someone is able to suggest something?
> 
> Regards,
> O.
> 
> [    3.771362] sfp_register_bus: upstream ops attach
> [    3.776119] sfp_register_bus: Bus registered
> [    3.780459] sfp_register_socket: register sfp_bus succeeded
> [    3.789399] of_cfs_init
> [    3.789499] of_cfs_init: OK
> [    3.791685] clk: Not disabling unused clocks
> [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> [   11.010393] Workqueue: events_unbound async_run_entry_fn
> [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [   11.010422] pc : simple_write_end+0xd0/0x130
> [   11.010431] lr : generic_perform_write+0x118/0x1e0
> [   11.010438] sp : ffffffc00809b910
> [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
> [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
> [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
> [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
> [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
> [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
> [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
> [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
> [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
> [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
> [   11.010548] Workqueue: events_unbound async_run_entry_fn
> [   11.010556] Call trace:
> [   11.010558]  dump_backtrace+0x0/0x1c4
> [   11.010567]  show_stack+0x18/0x2c
> [   11.010574]  dump_stack_lvl+0x7c/0xa0
> [   11.010583]  dump_stack+0x18/0x34
> [   11.010588]  panic+0x14c/0x2f8
> [   11.010597]  print_tainted+0x0/0xb0
> [   11.010606]  arm64_serror_panic+0x6c/0x7c
> [   11.010614]  do_serror+0x28/0x60
> [   11.010621]  el1h_64_error_handler+0x30/0x50
> [   11.010628]  el1h_64_error+0x78/0x7c
> [   11.010633]  simple_write_end+0xd0/0x130
> [   11.010639]  generic_perform_write+0x118/0x1e0
> [   11.010644]  __generic_file_write_iter+0x138/0x1c4
> [   11.010650]  generic_file_write_iter+0x78/0xd0
> [   11.010656]  __kernel_write+0xfc/0x2ac
> [   11.010665]  kernel_write+0x88/0x160
> [   11.010673]  xwrite+0x44/0x94
> [   11.010680]  do_copy+0xa8/0x104
> [   11.010686]  write_buffer+0x38/0x58
> [   11.010692]  flush_buffer+0x4c/0xbc
> [   11.010698]  __gunzip+0x280/0x310
> [   11.010704]  gunzip+0x1c/0x28
> [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> [   11.010715]  do_populate_rootfs+0x80/0x164
> [   11.010722]  async_run_entry_fn+0x48/0x164
> [   11.010728]  process_one_work+0x1e4/0x3a0
> [   11.010736]  worker_thread+0x7c/0x4c0
> [   11.010743]  kthread+0x120/0x130
> [   11.010750]  ret_from_fork+0x10/0x20
> [   11.010757] SMP: stopping secondary CPUs
> [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
> [   11.010788] PHYS_OFFSET: 0x0
> [   11.010790] CPU features: 0x00000401,00000842
> [   11.010795] Memory Limit: none
> [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
> 
> On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
>        Hi Oleg,
> 
>        On 21/04/2023 14:49, Oleg Nikitenko wrote:
>        >
>        >
>        >
>        > Hello Michal,
>        >
>        > I was not able to enable earlyprintk in the xen for now.
>        > I decided to choose another way.
>        > This is xen's command line, which I found out in full:
>        >
>        > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
>        Yes, adding a printk() in Xen was also a good idea.
> 
>        >
>        > So you are absolutely right about the command line.
>        > Now I am going to find out why xen did not have the correct parameters from the device tree.
>        Maybe you will find this document helpful:
>        https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> 
>        ~Michal
> 
>        >
>        > Regards,
>        > Oleg
>        >
>        > On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
>        >
>        >
>        >      On 21/04/2023 10:04, Oleg Nikitenko wrote:
>        >      >
>        >      >
>        >      >
>        >      > Hello Michal,
>        >      >
>        >      > Yes, I use yocto.
>        >      >
>        >      > Yesterday all day long I tried to follow your suggestions.
>        >      > I faced a problem.
>        >      > Manually in the xen config build file I pasted the strings:
>        >      In the .config file, or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>        >      You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
>        >
>        >      >
>        >      > CONFIG_EARLY_PRINTK
>        >      > CONFIG_EARLY_PRINTK_ZYNQMP
>        >      > CONFIG_EARLY_UART_CHOICE_CADENCE
>        >      I hope you added =y to them.
>        >
>        >      Anyway, you have at least the following solutions:
>        >      1) Run "bitbake xen -c menuconfig" to properly set early printk
>        >      2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y, which is not enabled by default)
>        >      3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>        >      CONFIG_EARLY_PRINTK_ZYNQMP=y
>        >
>        >      ~Michal
>        >
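[Michal's third option can be sketched as a short shell sequence. This is only an illustration: the real file lives at xen/arch/arm/configs/arm64_defconfig inside a Xen tree, but the snippet works on a scratch copy so it can run anywhere, and the rebuild step is shown as a comment.]

```shell
# Illustrative sketch of option 3: append the early-printk Kconfig options
# to the arm64 defconfig. A scratch copy stands in for the real file at
# xen/arch/arm/configs/arm64_defconfig so this snippet is self-contained.
defconfig="$(mktemp -d)/arm64_defconfig"
printf 'CONFIG_DEBUG=y\n' > "$defconfig"   # stand-in for existing content

cat >> "$defconfig" <<'EOF'
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
CONFIG_EARLY_UART_CHOICE_CADENCE=y
EOF

# In a real tree you would then regenerate .config, e.g.:
#   make -C xen olddefconfig
grep -c '=y$' "$defconfig"   # prints 4
```

[The same appended options then flow into the build the next time the defconfig is consumed, which is why this route avoids hand-editing .config.]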
>        >      >
>        >      > Host hangs at build time.
>        >      > Maybe I did not set something in the build config file?
>        >      >
>        >      > Regards,
>        >      > Oleg
>        >      >
>        >      > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>        >      >
>        >      >      Thanks Michal,
>        >      >
>        >      >      You gave me an idea.
>        >      >      I am going to try it today.
>        >      >
>        >      >      Regards,
>        >      >      O.
>        >      >
>        >      >      On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>        >      >
>        >      >           Thanks Stefano.
>        >      >
>        >      >           I am going to do it today.
>        >      >
>        >      >           Regards,
>        >      >           O.
>        >      >
>        >      >           On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
>        >      >
>        >      >           On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>        >      >           > Hi Michal,
>        >      >           >
>        >      >           > I corrected xen's command line.
>        >      >           > Now it is
>        >      >           > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>        >      >           > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>        >      >
>        >      >           4 colors is way too many for Xen; just do xen_colors=0-0. There is no
>        >      >           advantage in using more than 1 color for Xen.
>        >      >
>        >      >           4 colors is too few for dom0 if you are giving 1600M of memory to Dom0.
>        >      >           Each color is 256M. For 1600M you should give at least 7 colors. Try:
>        >      >
>        >      >           xen_colors=0-0 dom0_colors=1-8
>        >      >
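[Stefano's arithmetic can be sanity-checked in one line, taking the figures from his message (256M per color, dom0_mem=1600M):]

```shell
# Minimum dom0 colors = ceil(1600 / 256), computed with integer arithmetic:
echo $(( (1600 + 255) / 256 ))   # prints 7
# dom0_colors=1-8 provides 8 colors, which covers the required 7.
```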
>        >      >
>        >      >
>        >      >           > Unfortunately the result was the same.
>        >      >           >
>        >      >           > (XEN)  - Dom0 mode: Relaxed
>        >      >           > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>        >      >           > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>        >      >           > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>        >      >           > (XEN) Coloring general information
>        >      >           > (XEN) Way size: 64kB
>        >      >           > (XEN) Max. number of colors available: 16
>        >      >           > (XEN) Xen color(s): [ 0 ]
>        >      >           > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>        >      >           > (XEN) Color array allocation failed for dom0
>        >      >           > (XEN)
>        >      >           > (XEN) ****************************************
>        >      >           > (XEN) Panic on CPU 0:
>        >      >           > (XEN) Error creating domain 0
>        >      >           > (XEN) ****************************************
>        >      >           > (XEN)
>        >      >           > (XEN) Reboot in five seconds...
>        >      >           >
>        >      >           > I am going to find out how command line arguments are passed and parsed.
>        >      >           >
>        >      >           > Regards,
>        >      >           > Oleg
>        >      >           >
>        >      >           > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>        >      >           >        Hi Michal,
>        >      >           >
>        >      >           > You pointed my nose right at the problem. Thank you.
>        >      >           > I am going to use your point.
>        >      >           > Let's see what happens.
>        >      >           >
>        >      >           > Regards,
>        >      >           > Oleg
>        >      >           >
>        >      >           >
>        >      >           > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
>        >      >           >      Hi Oleg,
>        >      >           >
>        >      >           >      On 19/04/2023 09:03, Oleg Nikitenko wrote:
>        >      >           >      >
>        >      >           >      >
>        >      >           >      > Hello Stefano,
>        >      >           >      >
>        >      >           >      > Thanks for the clarification.
>        >      >           >      > My company uses yocto for image generation.
>        >      >           >      > What kind of information do you need to consult me in this case?
>        >      >           >      >
>        >      >           >      > Maybe module sizes/addresses, which were mentioned by @Julien Grall <julien@xen.org>?
>        >      >           >
>        >      >           >      Sorry for jumping into the discussion, but FWICS the Xen command line you provided seems to be not the one
>        >      >           >      Xen booted with. The error you are observing is most likely due to the dom0 colors configuration not being
>        >      >           >      specified (i.e. lack of the dom0_colors=<> parameter). Although in the command line you provided this
>        >      >           >      parameter is set, I strongly doubt that this is the actual command line in use.
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0You wrote:=
<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0xen,xen-bo=
otargs =3D &quot;console=3Ddtuart dtuart=3Dserial0 dom0_mem=3D1600M dom0_ma=
x_vcpus=3D2 dom0_vcpus_pin<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0bootscrub=3D0 vwfi=3Dnative<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0sched=3Dnu=
ll timer_slop=3D0 way_szize=3D65536 xen_colors=3D0-3 dom0_colors=3D4-7&quot=
;;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0but:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A01) way_szi=
ze has a typo<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A02) you spe=
cified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:=
<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0(XEN) Xen =
color(s): [ 0 ]<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0This makes=
 me believe that no colors configuration actually end up in command line th=
at Xen booted<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0with.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0Single col=
or for Xen is a &quot;default if not specified&quot; and way size was proba=
bly calculated by asking<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0HW.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0So I would=
 suggest to first cross-check the command line in use.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0~Michal<br=
>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
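[Putting Michal's two observations together with Stefano's suggested ranges, a corrected entry would look like the fragment below. This is only a sketch assembled from values quoted in the thread (the typo fixed to way_size, colors set to xen_colors=0-0 dom0_colors=1-8); the rest of the chosen node is omitted.]

```dts
/* Hypothetical corrected chosen node, values taken from the thread. */
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-0 dom0_colors=1-8";
};
```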
>        >      >           >      >
>        >      >           >      > Regards,
>        >      >           >      > Oleg
>        >      >           >      >
>        >      >           >      > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
>        >      >           >      >
>        >      >           >      >      On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>        >      >           >      >      > Hi Julien,
>        >      >           >      >      >
>        >      >           >      >      > >> This feature has not been merged in Xen upstream yet
>        >      >           >      >      >
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0&gt; &gt; would assume that upstream + the series on the ML [=
1] work<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0&gt; Please clarify this point.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0&gt; Because the two thoughts are controversial.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0Hi Oleg,<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0As Julien wrote, there is nothing controversial. As you are a=
ware,<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0Xilinx maintains a separate Xen tree specific for Xilinx here=
:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" =
target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://=
github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.=
com/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D=
"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://github.com/xilinx/xen=
" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt=
;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" targ=
et=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://gith=
ub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/=
xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"nor=
eferrer" target=3D"_blank">https://github.com/xilinx/xen</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://github.com/xilinx/xen=
" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt=
;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0and the branch you are using (xlnx_rebase_4.16) comes from th=
ere.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0Instead, the upstream Xen tree lives here:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsu=
mmary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/=
?p=3Dxen.git;a=3Dsummary</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://xenbits.xen.org/gitwe=
b/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" targe=
t=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a><br=
>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://xenbits.xen.org/gitwe=
b/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a href=3D"ht=
tps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" t=
arget=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a=
><br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://xenbits.xen.org/gitwe=
b/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" targe=
t=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a><br=
>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://xenbits.xen.org/gitwe=
b/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0The Cache Coloring feature that you are trying to configure i=
s present<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0in xlnx_rebase_4.16, but not yet present upstream (there is a=
n<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0outstanding patch series to add cache coloring to Xen upstrea=
m but it<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0hasn&#39;t been merged yet.)<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0Anyway, if you are using xlnx_rebase_4.16 it doesn&#39;t matt=
er too much for<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0you as you already have Cache Coloring as a feature there.<br=
>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0I take you are using ImageBuilder to generate the boot config=
uration? If<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0so, please post the ImageBuilder config file that you are usi=
ng.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0But from the boot message, it looks like the colors configura=
tion for<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0Dom0 is incorrect.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt; <br>
&gt; <br>
&gt; </blockquote></div></div>

--0000000000001d84a805fa241597--


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:42:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:42:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525742.817117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDK9-0002wu-Cy; Tue, 25 Apr 2023 07:42:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525742.817117; Tue, 25 Apr 2023 07:42:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDK9-0002wn-AL; Tue, 25 Apr 2023 07:42:53 +0000
Received: by outflank-mailman (input) for mailman id 525742;
 Tue, 25 Apr 2023 07:42:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+We=AQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prDK8-0002wh-Nv
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:42:52 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2060d.outbound.protection.outlook.com
 [2a01:111:f400:fe13::60d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c80707eb-e33c-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 09:42:51 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9361.eurprd04.prod.outlook.com (2603:10a6:20b:4e6::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 07:42:49 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 07:42:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c80707eb-e33c-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bFHHp3zFG4JdgOWtNm6Z2cVRaMDg8dij1ZXCxRstriSgziuUDtRyjW3HYDnQ57Qnn6ox7tjGOTBeUYUWsJ14nyLEEJJB4yPG84VSjvUp2Ms3hWTQortCQFseeF2qb53BjJd90ejN7C+ptUNPve3u4aFFKEuVCOooX0LcYnv6xOl4pN6JjliwRyfiVOu0NDn7m8ah6IzJhYFE9jTR2KD9gkEJ/Jv7Ob6Wg1CpGtUHLiK94HG38LSz09Q7k5KxAVqyZ/y1G6N+4hxeBCCDHfC8tXtuoNxvhKS3qnqZ2EbW/igFq+0clB+/tVZ/KuqR0B3YjawoRQxfs72z4HleSqLLCg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DdxazYQaXGeErlEztRPxasszU1ZMO5YgXjTge+0NjuI=;
 b=bCuHrAZzIYBCLKBQ0VZVAng9/nkPFV6dZdAZYripnwjsInJB82MIZHNJnqyfcdZe5ieqhSoUMxg0WQ1NmWf8aOSJ260MLSZeN2xXDjXU0gViyqhuytKkT6kPmTD+oO2usJG2xv5BjeOHwgxEcz5po7uIhfeZ/2ZkNC83NCCKauRmfpLby1Qbgn34BhMF3Kh2xfjW7faV/2M6D9ZMm8mpV5EFfrHeClBJ68erbSCf9ZmL7+YSdHgpi9Eyd7f8RCOYxRmJZvfTCSStIgcmtvaZqP8fbXdS2WAa/m+oba/ONEutgHCm3vpzI/eooY/31Gx0U49SNwdHaQpDn+e4+ujr5Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DdxazYQaXGeErlEztRPxasszU1ZMO5YgXjTge+0NjuI=;
 b=iZYezDL4eqhfGQCh+xjOuC/A1HaFE2LeSDccUMYLAwwkVAQ9Kzeq02fNHBrXyF7pTWXvaojaGX5aAXnDRSSqOXk4zm5B2pyXAKiCkQBTHIcB2rVS6ZcPc7XG6ys7AIu7DsaAsgIJ4yc4nPz3ktTrvswlJ7b0+B41igEawIekHmZDkELA/Xm3SomzIIUCsb0HwY13+rzvPy3hGsLSuPGurVCLyZsJpf7Fb1XEWuMFs7fVocEd/3exZV+kBKX+SPHujHqdJr/a/uNheUiyqjdhWJCSey3tEN2kKkDFoaQz1My+iPtEydHQufcmU40fwp0ddq6rQBkcMY600D61h9lfGQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <fd1cde5e-a12f-85f3-1c21-bc41a483be39@suse.com>
Date: Tue, 25 Apr 2023 09:42:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [RFC PATCH] xen/sched/null: avoid crash after failed domU
 creation
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>,
 Stewart Hildebrand <stewart.hildebrand@amd.com>,
 Dario Faggioli <dfaggioli@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <20230424210048.786436-1-stewart.hildebrand@amd.com>
 <16b10155-4dfb-6891-dc90-61a6b966ee6d@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <16b10155-4dfb-6891-dc90-61a6b966ee6d@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0123.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9361:EE_
X-MS-Office365-Filtering-Correlation-Id: 712dd57b-4d5c-495c-a4ac-08db4560aad9
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	YZgptgTLT1SY5To4bqIHlJt6VhMHbxgysuMVNqKfTgULF8aPM8tfQAX6RJNkBFOzlIfgxxYSLqNgrn03J1erHoOTJQ926FJm1urngHp1G6/RK2Q4sdtlr3hjtmCd3/m0kfI8V926XNiZZYE6C7kjqEHR7tRWWfS/21D1FcvYDexP4bQYF52L0eOF231cKXybraunFLdNx2CyLfFPKzom8dWyD+btNa4BlOZCDKDBrIKLbKzl21IAmHwT4czMtSJWcKYXH1fIKt1nzyFGxfz7s88+uQYoIEnDpmNJWtA6or9xxBa+a341JOmv2MFd/4Phqx2syceelzu070NYfq79hum4nDPMHb7uJ/0SPE/erNxC3ZcqCirobuusUAMZzj75XABOMn5NHJDy03vSCr3EoY8y+uBzmBpoHwnWpmxi5Lgrq2f4kv8UvkjZ47DD9rT3oCa/8ekICdmWto9BqchvEsOPCePzxXCQvmFchT3Mp46zYzwGS0H9JCnzJt7QYVrchlMbi7nyFxavIRUBl22U+uKIdB1JuYelV3v1vXE71kD9fqYNl2VC356FaQXoJyuaPJLRrOpPUlPlLa3vLPSQI9k/qur2YH2HP5BqX/M0US5xiqJmayalip/tYu5zHOFKnsahKxTWZAz94mVZLD8n5g==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(376002)(136003)(346002)(396003)(366004)(451199021)(2906002)(6486002)(6512007)(6506007)(2616005)(6666004)(186003)(53546011)(66556008)(66476007)(66946007)(8676002)(8936002)(41300700001)(4326008)(316002)(26005)(478600001)(6636002)(5660300002)(110136005)(38100700002)(36756003)(86362001)(31696002)(83380400001)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Q1Z1UmpTKzVCRGFyUldrbFdPbFNBSDlSVTk0MURhM2FYRG9tR0F4SDMxUjAw?=
 =?utf-8?B?RFpmYmFKaDAwR2c2ZThFUFpmNnBqNCthUERpOWFWKy9Jc2dIZFZKVkxjL0dB?=
 =?utf-8?B?bEE0aHFPT01NUGYrT1JJQ0NPOGk3cjdvNHdwZG9vQXoyQ2c0SjRPMWYvTmRK?=
 =?utf-8?B?TFJpeUphd3NmOGdaTFFOb0w5U3ZWRTlqQkhaMlM3NjdaMlZQLzlWcTRGSTY3?=
 =?utf-8?B?THhqZVN0TkhDWVg3M1R4cGNxUUJ0cEthVHpXZTZxY3FmRnc4Y0E1QzFoTnNs?=
 =?utf-8?B?RFlLenZyS2plNGd4dHI1b0VWbnpJaDhIaUNwWlN0dEVBU0o3dnZiNWFBL2Zx?=
 =?utf-8?B?d1BobnhINHVWb1M4a3dIbmV3K3FFc3VBSjNPTHYyU0NjRm9rNSsvR1REV1ZJ?=
 =?utf-8?B?NnRmWnF4R3BmMEx3MnQwWDN3d0YzN3FEamdob1ZUanVsZWluSnZlMTdIdXRM?=
 =?utf-8?B?SHY3QXUyYkYzU0l1QzFBK29UQ3JZeHFOamQ0elRvYVp6dXc2WjdtOXVqZ0JJ?=
 =?utf-8?B?K0czQVVMTFRXSXJaT3hod2hZNG5UdHcwYTdPK0tTbm1iZ1RFS3JHeEhDUGZr?=
 =?utf-8?B?U2pnVnh1anBkVEhpU05sWEJSNkdKWVlCdkV2L0JIU0F3aytxZ1MvYk96aFQv?=
 =?utf-8?B?U3pHUFk1UWptbG9jYXZJMmJlZktaZlVsdGVmOHc4S1VzczlRZVlmV0JNQ3c1?=
 =?utf-8?B?UmtHMEVDRmptOFRyL2RGTnByMXhkZUNCNHBwWXlsbXB2SUVqWVZnSFRta3NX?=
 =?utf-8?B?RVV4TlQvbXd6dXl2dlRCZklqNXV5TzBRSXdIUENLMTd0cWVFdkt0OXY0UlJS?=
 =?utf-8?B?U3JZSDFhdjhMd3pxNTBLbmdzTkRSWlNpL2VTQjFwVkhQdkdaZ0ZxMDVCQzlm?=
 =?utf-8?B?UGpEOTQ4b1NDd1RKWDlBRy9BbVVNQVJNSEkxd09Qa2ppbHg2TkpLc0pFVzVr?=
 =?utf-8?B?LzU0WGZhTHZIV2pWbHRrb3BJa2hxKzE5SjNUZVJhdTdadFQrS1lOeGNqSTdU?=
 =?utf-8?B?Mzd4K0pUTWJjdmhaN2dvbDB6ZkZmWmt6RWF5ZTBsTStCNlUwT3dqczBMZndv?=
 =?utf-8?B?ODMzVVlSRjFqQllrRUN5N2hkNm9qb1VBb1RkdUVuZ1hJdHIxN2N1eklaVnZD?=
 =?utf-8?B?eGExOUlpQnFaSXBRazMrMllMTjUvTUJFbFJMM0hWMnJ4M3FCUHJZZmZoL3lU?=
 =?utf-8?B?QkJzVDhlSStsR3NqSVJ5NkN2c0h2alFHdnNjNmVyc2o3Ym5MTWZmYjRzc0lQ?=
 =?utf-8?B?RzM4bVJyL2cyczFScXlhWlpzOE9nQW5nVDhsWHROSklyNGdvVUhoT3hPbnRH?=
 =?utf-8?B?djk3U2RibHFrdEQvWW1RQlZoS1dycm5yL2JwbkFCeVh6Zncxc3lsakFOM3do?=
 =?utf-8?B?MHU1U2VuMlFDSjhhQjVMRlVFK1BuSVdibGdxVWZXVFl0R3BLUGxJclhyNE5N?=
 =?utf-8?B?THVUejRWNGRZRDBkeGJ5ZmU5S1JXalpGR3lybEtMUjA5cFZIWmtkMlBqK2p1?=
 =?utf-8?B?YSsyZS9kR1Y0WTNyUTlGS1lONEpjaE1ja3Y4MmxBRklWbm9aVHdENVlrV1lj?=
 =?utf-8?B?b3R1dWxxSk5yaHdqM3czemlwbnR1bzVXQzlCUmZ0V3RMU25OTnlKUjF2alQ2?=
 =?utf-8?B?U1ZjeEtsUy82cFlvQ0plalFyZGNOUkpFQW83aW01M2NIMEVjUm5qWDQyNzla?=
 =?utf-8?B?U2ZySXd3am8ybkIxcXdWTWw0QWw0NWhlZlhWcXZGNTI1TnFJVExtZWR5Y2o2?=
 =?utf-8?B?cnBlSlFZODh3d1ZWRE5NQnJsTWd3dythOFA3dEN1aTZRS3FIR2VWK2tzVm9m?=
 =?utf-8?B?SFdPRU16UjkwUDAya0k1UlJ3cTZwTWJXMUFFTXBwSUpVWDZCVFBuR1ZKNFZN?=
 =?utf-8?B?c1FIb2QrU21iamVCMWgvM0RoMFdxYjhDazRzU1dnZmNBeG9OeEdWRytHZTJU?=
 =?utf-8?B?YTRVNXpzeXJqajRrM2ZjQkxjem5mVmtWbTQxSUxFMUZscVRhdFBTeHJUd2U1?=
 =?utf-8?B?dTFrQjU5L1U1ODJ3bVdwMmJJVVpRYThNUndreE5yRjk0b2sxNlljYWxzZTRP?=
 =?utf-8?B?Vm9hWVJ4aWpIZXpzMWN5VUhtNjhkV2xHWkswTG55a3d4WFA3Vi84UzRsYVpR?=
 =?utf-8?Q?NUmOAyqkagzHY3tKrgyxkVn2Y?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 712dd57b-4d5c-495c-a4ac-08db4560aad9
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 07:42:49.0351
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qhefaHRC7M6DOyWA0Ly7tz7Vvfk/BmIFpup3fdiB3xgnzM1Ply4eu6e26EQ0dFIMfhNmmEcE75XF/+WKgJN8ig==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9361

On 25.04.2023 08:36, Juergen Gross wrote:
> On 24.04.23 23:00, Stewart Hildebrand wrote:
> >> When domU creation fails, we may end up in a state
>> where a vcpu has not yet been added to the null scheduler, but
>> unit_deassign() is invoked.
> 
> This is not really true. The vcpu has been added, but it was offline at
> that time. This resulted in null_unit_insert() returning early and not
> calling unit_assign().
> 
> Later the vcpu was onlined during XEN_DOMCTL_setvcpucontext handling,
> resulting in null_unit_remove() calling unit_deassign().
> 
>> In this case, when running a debug build of
>> Xen, we will hit an ASSERT and crash Xen:
>>
>> (XEN) ****************************************
>> (XEN) Panic on CPU 0:
>> (XEN) Assertion 'npc->unit == unit' failed at common/sched/null.c:379
>> (XEN) ****************************************
>>
>> To work around this, remove the ASSERT and introduce a check for the
>> case where npc->unit is NULL and simply return false from
>> unit_deassign().
> 
> I think the correct fix would be to call unit_deassign() from
> null_unit_remove() only, if npc->unit isn't NULL. Dario might have a
> different opinion, though. :-)

Furthermore, even if the proposed solution was (roughly) followed, ...

>> --- a/xen/common/sched/null.c
>> +++ b/xen/common/sched/null.c
>> @@ -376,7 +376,14 @@ static bool unit_deassign(struct null_private *prv, const struct sched_unit *uni
>>       struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
>>   
>>       ASSERT(list_empty(&null_unit(unit)->waitq_elem));
>> -    ASSERT(npc->unit == unit);
>> +
>> +    if ( !npc->unit )
>> +    {
>> +        dprintk(XENLOG_G_INFO, "%d <-- NULL (%pdv%d)\n", cpu, unit->domain,
>> +                unit->unit_id);
>> +        return false;
>> +    }
>> +

... shouldn't the assertion be kept, with the new if() inserted ahead of
it? Also, the log message had better not print the unit ID as if it were
a vCPU ID, but instead use e.g. %pdu%u?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:46:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:46:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525746.817127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDN3-0003Xd-Qx; Tue, 25 Apr 2023 07:45:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525746.817127; Tue, 25 Apr 2023 07:45:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDN3-0003XW-OD; Tue, 25 Apr 2023 07:45:53 +0000
Received: by outflank-mailman (input) for mailman id 525746;
 Tue, 25 Apr 2023 07:45:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bwpa=AQ=redhat.com=mst@srs-se1.protection.inumbo.net>)
 id 1prDN2-0003XO-CW
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:45:52 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3228640a-e33d-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 09:45:50 +0200 (CEST)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-589-5FbvCDsMMVucTY83pUDLkw-1; Tue, 25 Apr 2023 03:45:47 -0400
Received: by mail-wm1-f71.google.com with SMTP id
 5b1f17b1804b1-3f250e9e090so6441655e9.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Apr 2023 00:45:47 -0700 (PDT)
Received: from redhat.com ([2.55.61.39]) by smtp.gmail.com with ESMTPSA id
 c21-20020a7bc855000000b003f17300c7dcsm14069408wml.48.2023.04.25.00.45.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Apr 2023 00:45:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3228640a-e33d-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682408749;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=sdqkWiEXuBbe0aMuicHVixExQ0RmC/KmdA8XIHRB1Jw=;
	b=DFpss1RSe5Q3/YBqOYsnaajaQEGoVIfPIwr4u80ZPfnk/SbbxCPTvhR9SDj3Igw2a1WnCT
	Mny3MCBn54YJ2oYz0YRtGXIXMKnXp6DMLi76KI/TvgXHfjs5/7a2mq9MVjNyJuMsuDlh/T
	5tdKoR1zd2QZ0QuMgEvNZx5fY3eIjdk=
X-MC-Unique: 5FbvCDsMMVucTY83pUDLkw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682408746; x=1685000746;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=sdqkWiEXuBbe0aMuicHVixExQ0RmC/KmdA8XIHRB1Jw=;
        b=iSJGD3z+U+0uV4EVgSfI6xvXRP/WbXAvmPkcL7gDfW5RM5Cc3GO+kUgNZV5Pc0fuVF
         Mn+Y104FgnTGRAbbciWBomizKX93GN7CuqiatWLQCtALWNuYEF9mXVckzkEEsDwdYs/Q
         Zi0AirbcdCZn7W027vTK6MX/aYpi3sbV2kjLwZqatC3gyryZKhw8CDlvwbDVU0G69r8h
         8+YJO+ZHqAbhN7EC2U07Q1P7GL/msRxBUtNmgMeBtqOlV0gUxtHhrs5krPFDveIPer3/
         MVGP7kgZq5qtySypUeEnfAammQx2/3PZFZuwoKyEs90XddUl9g8HEFSBq4vqNY25hh+2
         U/zw==
X-Gm-Message-State: AAQBX9e67411ZX0ybk1qH2fdp/Nw+sBTelX4VdXWHFF/ruvVLleVfyZI
	MHvxlwGvUi2EqhHHhyBWqBeMrzRbyZbwn1c+tf2dYFKJ4ALe6BwwcvEKVhbhQYibTNCL0g8mwwc
	qMiyTH1nyWFjKN4zfpsE7oqMEFW0=
X-Received: by 2002:a7b:c393:0:b0:3f1:6458:99a7 with SMTP id s19-20020a7bc393000000b003f1645899a7mr9012014wmj.38.1682408746653;
        Tue, 25 Apr 2023 00:45:46 -0700 (PDT)
X-Google-Smtp-Source: AKy350ad4TCuWCBC+AG461cfDN1Vfgv2g5HSPeZ6e0X8sYuDXur/RoTq2vL2cKfGjWKS10+DKS3e8g==
X-Received: by 2002:a7b:c393:0:b0:3f1:6458:99a7 with SMTP id s19-20020a7bc393000000b003f1645899a7mr9011994wmj.38.1682408746316;
        Tue, 25 Apr 2023 00:45:46 -0700 (PDT)
Date: Tue, 25 Apr 2023 03:45:42 -0400
From: "Michael S. Tsirkin" <mst@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Chuck Zmudzinski <brchuckz@aol.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Artyom Tarasenko <atar4qemu@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Subject: [PULL 16/31] pci: avoid accessing slot_reserved_mask directly
 outside of pci.c
Message-ID: <b93fe7f2ca9aea5ef74db5881aabecd7b1c234ed.1682408661.git.mst@redhat.com>
References: <cover.1682408661.git.mst@redhat.com>
MIME-Version: 1.0
In-Reply-To: <cover.1682408661.git.mst@redhat.com>
X-Mailer: git-send-email 2.27.0.106.g8ac3dc51b1
X-Mutt-Fcc: =sent
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

From: Chuck Zmudzinski <brchuckz@aol.com>

This patch provides accessor functions as replacements for direct
access to slot_reserved_mask, following the comment at the top of
include/hw/pci/pci_bus.h, which advises that PCIBus data structures
should not be accessed directly but only through the accessor
functions in pci.h.

Three accessor functions suffice to replace all direct accesses to
slot_reserved_mask. With this patch, the new accessors are used in
hw/sparc64/sun4u.c and hw/xen/xen_pt.c, and pci_bus.h is removed
from the header files included by those two files.

No functional change intended.

Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
Message-Id: <b1b7f134883cbc83e455abbe5ee225c71aa0e8d0.1678888385.git.brchuckz@aol.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> [sun4u]
---
 include/hw/pci/pci.h |  3 +++
 hw/pci/pci.c         | 15 +++++++++++++++
 hw/sparc64/sun4u.c   |  7 +++----
 hw/xen/xen_pt.c      |  7 +++----
 4 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index d5a40cd058..935b4b91b4 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -287,6 +287,9 @@ void pci_bus_irqs(PCIBus *bus, pci_set_irq_fn set_irq,
 void pci_bus_map_irqs(PCIBus *bus, pci_map_irq_fn map_irq);
 void pci_bus_irqs_cleanup(PCIBus *bus);
 int pci_bus_get_irq_level(PCIBus *bus, int irq_num);
+uint32_t pci_bus_get_slot_reserved_mask(PCIBus *bus);
+void pci_bus_set_slot_reserved_mask(PCIBus *bus, uint32_t mask);
+void pci_bus_clear_slot_reserved_mask(PCIBus *bus, uint32_t mask);
 /* 0 <= pin <= 3 0 = INTA, 1 = INTB, 2 = INTC, 3 = INTD */
 static inline int pci_swizzle(int slot, int pin)
 {
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index def5000e7b..8a87ccc8b0 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -1116,6 +1116,21 @@ static bool pci_bus_devfn_reserved(PCIBus *bus, int devfn)
     return bus->slot_reserved_mask & (1UL << PCI_SLOT(devfn));
 }
 
+uint32_t pci_bus_get_slot_reserved_mask(PCIBus *bus)
+{
+    return bus->slot_reserved_mask;
+}
+
+void pci_bus_set_slot_reserved_mask(PCIBus *bus, uint32_t mask)
+{
+    bus->slot_reserved_mask |= mask;
+}
+
+void pci_bus_clear_slot_reserved_mask(PCIBus *bus, uint32_t mask)
+{
+    bus->slot_reserved_mask &= ~mask;
+}
+
 /* -1 for devfn means auto assign */
 static PCIDevice *do_pci_register_device(PCIDevice *pci_dev,
                                          const char *name, int devfn,
diff --git a/hw/sparc64/sun4u.c b/hw/sparc64/sun4u.c
index a25e951f9d..eae7589462 100644
--- a/hw/sparc64/sun4u.c
+++ b/hw/sparc64/sun4u.c
@@ -31,7 +31,6 @@
 #include "hw/irq.h"
 #include "hw/pci/pci.h"
 #include "hw/pci/pci_bridge.h"
-#include "hw/pci/pci_bus.h"
 #include "hw/pci/pci_host.h"
 #include "hw/qdev-properties.h"
 #include "hw/pci-host/sabre.h"
@@ -608,9 +607,9 @@ static void sun4uv_init(MemoryRegion *address_space_mem,
     /* Only in-built Simba APBs can exist on the root bus, slot 0 on busA is
        reserved (leaving no slots free after on-board devices) however slots
        0-3 are free on busB */
-    pci_bus->slot_reserved_mask = 0xfffffffc;
-    pci_busA->slot_reserved_mask = 0xfffffff1;
-    pci_busB->slot_reserved_mask = 0xfffffff0;
+    pci_bus_set_slot_reserved_mask(pci_bus, 0xfffffffc);
+    pci_bus_set_slot_reserved_mask(pci_busA, 0xfffffff1);
+    pci_bus_set_slot_reserved_mask(pci_busB, 0xfffffff0);
 
     ebus = pci_new_multifunction(PCI_DEVFN(1, 0), true, TYPE_EBUS);
     qdev_prop_set_uint64(DEVICE(ebus), "console-serial-base",
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 2d33d178ad..a540149639 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -57,7 +57,6 @@
 #include <sys/ioctl.h>
 
 #include "hw/pci/pci.h"
-#include "hw/pci/pci_bus.h"
 #include "hw/qdev-properties.h"
 #include "hw/qdev-properties-system.h"
 #include "xen_pt.h"
@@ -951,7 +950,7 @@ void xen_igd_reserve_slot(PCIBus *pci_bus)
     }
 
     XEN_PT_LOG(0, "Reserving PCI slot 2 for IGD\n");
-    pci_bus->slot_reserved_mask |= XEN_PCI_IGD_SLOT_MASK;
+    pci_bus_set_slot_reserved_mask(pci_bus, XEN_PCI_IGD_SLOT_MASK);
 }
 
 static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
@@ -971,7 +970,7 @@ static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
         return;
     }
 
-    if (!(pci_bus->slot_reserved_mask & XEN_PCI_IGD_SLOT_MASK)) {
+    if (!(pci_bus_get_slot_reserved_mask(pci_bus) & XEN_PCI_IGD_SLOT_MASK)) {
         xpdc->pci_qdev_realize(qdev, errp);
         return;
     }
@@ -982,7 +981,7 @@ static void xen_igd_clear_slot(DeviceState *qdev, Error **errp)
         s->real_device.dev == XEN_PCI_IGD_DEV &&
         s->real_device.func == XEN_PCI_IGD_FN &&
         s->real_device.vendor_id == PCI_VENDOR_ID_INTEL) {
-        pci_bus->slot_reserved_mask &= ~XEN_PCI_IGD_SLOT_MASK;
+        pci_bus_clear_slot_reserved_mask(pci_bus, XEN_PCI_IGD_SLOT_MASK);
         XEN_PT_LOG(pci_dev, "Intel IGD found, using slot 2\n");
     }
     xpdc->pci_qdev_realize(qdev, errp);
-- 
MST



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:57:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:57:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525750.817138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDXy-00053R-Os; Tue, 25 Apr 2023 07:57:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525750.817138; Tue, 25 Apr 2023 07:57:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDXy-00053K-M0; Tue, 25 Apr 2023 07:57:10 +0000
Received: by outflank-mailman (input) for mailman id 525750;
 Tue, 25 Apr 2023 07:57:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDXx-00053E-LZ
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:09 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id c6179361-e33e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 09:57:07 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6C6BB4B3;
 Tue, 25 Apr 2023 00:57:50 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id CCFFC3F587;
 Tue, 25 Apr 2023 00:57:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6179361-e33e-11ed-b223-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <wei.chen@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 00/17] Device tree based NUMA support for Arm - Part#3
Date: Tue, 25 Apr 2023 15:56:38 +0800
Message-Id: <20230425075655.4037980-1-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

(Henry: Following the offline discussion with Wei, I will be
the one to follow up on the upstream comments for this series.)

The preparation work to support NUMA on Arm has been merged
and can be found at [1] and [2]. The initial discussions of
the Arm NUMA support can be found at [3].

- Background of this series:

Xen's memory allocation and scheduler modules are NUMA-aware,
but only x86 has implemented the architecture APIs needed to
support NUMA. Arm has been providing a set of stub architecture
APIs to stay compatible with the NUMA-aware memory allocator
and scheduler.

Arm systems worked well as single-node NUMA systems with these
stub APIs, because multi-node NUMA systems did not exist on Arm.
In recent years, however, more and more Arm devices have shipped
with multiple NUMA nodes.

This creates a new problem: when Xen runs on these Arm devices,
it still treats them as single-node SMP systems. The NUMA
affinity capability of Xen's memory allocation and scheduling
becomes meaningless, because it relies on input data that does
not reflect the real NUMA layout.

Xen still assumes that the access time to all memory is the same
for all CPUs, yet it may allocate memory to a VM from NUMA nodes
with different access speeds. Workloads inside the VM can amplify
this difference, causing performance instability and timeouts.

So in this patch series, we implement a set of NUMA APIs that use
the device tree to describe the NUMA layout. We reuse most of the
x86 NUMA code to create and maintain the mapping between memory
and CPUs, and to build the distance matrix between any two NUMA
nodes. Apart from ACPI and some x86-specific code, the remaining
code has been moved to common. In the next stage, when we
implement ACPI-based NUMA for Arm64, we may move the ACPI NUMA
code to common as well, but for now it stays x86-only.
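For illustration only (this fragment is hypothetical and not part of the
series' test configuration), a two-node layout of the kind these patches
parse would be described in the device tree via `numa-node-id` properties
on the CPU and memory nodes plus a `numa-distance-map-v1` node; all node
IDs, addresses, and distances below are made up:

```dts
/* Hypothetical two-node NUMA description. */
cpus {
    cpu@0 {
        device_type = "cpu";
        numa-node-id = <0>;
    };
    cpu@100 {
        device_type = "cpu";
        numa-node-id = <1>;
    };
};

memory@80000000 {
    device_type = "memory";
    reg = <0x0 0x80000000 0x0 0x80000000>;
    numa-node-id = <0>;
};

memory@880000000 {
    device_type = "memory";
    reg = <0x8 0x80000000 0x0 0x80000000>;
    numa-node-id = <1>;
};

distance-map {
    compatible = "numa-distance-map-v1";
    /* <from to distance> triplets: 10 = local, 20 = remote */
    distance-matrix = <0 0 10>, <0 1 20>,
                      <1 0 20>, <1 1 10>;
};
```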

This patch series has been tested: it boots correctly on an FVP
in Arm64 mode with NUMA configured in the device tree, and on an
HPE x86 NUMA machine.

[1] https://lists.xenproject.org/archives/html/xen-devel/2022-06/msg00499.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2022-11/msg01043.html
[3] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg01903.html

v3 -> v4:
1. s/definition/declaration/ in commit message.
2. Add Acked-by tag from Jan for non-Arm parts.
3. Drop unnecessary initializer for node_distance_map. Pre-set the
   distance map to NUMA_NO_DISTANCE.
4. Drop NUMA_DISTANCE_UDF_MIN and its usage.
5. Drop EXPORT_SYMBOL(__node_distance).
6. Rework __node_distance()'s return value logic.
7. The distance map default value is now NUMA_NO_DISTANCE, update
   the logic accordingly and add in-code comment as a note.
8. Add Acked-by tag from Jan for related patches.

Henry Wang (1):
  xen/arm: Set correct per-cpu cpu_core_mask

Wei Chen (16):
  xen/arm: use NR_MEM_BANKS to override default NR_NODE_MEMBLKS
  xen/arm: implement helpers to get and update NUMA status
  xen/arm: implement node distance helpers for Arm
  xen/arm: use arch_get_ram_range to get memory ranges from bootinfo
  xen/arm: build NUMA cpu_to_node map in dt_smp_init_cpus
  xen/arm: Add boot and secondary CPU to NUMA system
  xen/arm: introduce a helper to parse device tree processor node
  xen/arm: introduce a helper to parse device tree memory node
  xen/arm: introduce a helper to parse device tree NUMA distance map
  xen/arm: unified entry to parse all NUMA data from device tree
  xen/arm: keep guests NUMA-unaware
  xen/arm: enable device tree based NUMA in system init
  xen/arm: implement numa_node_to_arch_nid for device tree NUMA
  xen/arm: use CONFIG_NUMA to gate node_online_map in smpboot
  xen/arm: Provide Kconfig options for Arm to enable NUMA
  docs: update numa command line to support Arm

 SUPPORT.md                        |   1 +
 docs/misc/xen-command-line.pandoc |   2 +-
 xen/arch/arm/Kconfig              |  11 ++
 xen/arch/arm/Makefile             |   2 +
 xen/arch/arm/domain_build.c       |   6 +
 xen/arch/arm/include/asm/numa.h   |  91 +++++++++-
 xen/arch/arm/numa.c               | 188 +++++++++++++++++++
 xen/arch/arm/numa_device_tree.c   | 291 ++++++++++++++++++++++++++++++
 xen/arch/arm/setup.c              |  17 ++
 xen/arch/arm/smpboot.c            |  38 ++++
 xen/arch/x86/include/asm/numa.h   |   1 -
 xen/arch/x86/srat.c               |   2 +-
 xen/include/xen/numa.h            |  10 +
 13 files changed, 656 insertions(+), 4 deletions(-)
 create mode 100644 xen/arch/arm/numa.c
 create mode 100644 xen/arch/arm/numa_device_tree.c

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:57:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:57:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525751.817148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDY3-0005Iy-0R; Tue, 25 Apr 2023 07:57:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525751.817148; Tue, 25 Apr 2023 07:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDY2-0005Ir-Td; Tue, 25 Apr 2023 07:57:14 +0000
Received: by outflank-mailman (input) for mailman id 525751;
 Tue, 25 Apr 2023 07:57:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDY1-00053E-Eg
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:13 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id c943116a-e33e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 09:57:12 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BDFD1D75;
 Tue, 25 Apr 2023 00:57:55 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 6B5A63F587;
 Tue, 25 Apr 2023 00:57:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c943116a-e33e-11ed-b223-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 01/17] xen/arm: use NR_MEM_BANKS to override default NR_NODE_MEMBLKS
Date: Tue, 25 Apr 2023 15:56:39 +0800
Message-Id: <20230425075655.4037980-2-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

A memory range described in the device tree cannot be split
across multiple nodes, and it is very likely that with more than
64 nodes you would need far more than 2 regions per node. The
default NR_NODE_MEMBLKS value (MAX_NUMNODES * 2) therefore makes
no sense on Arm.

So, for Arm, simply define NR_NODE_MEMBLKS as an alias for
NR_MEM_BANKS. In the future NR_MEM_BANKS will be user-configurable
via Kconfig, but for now it stays at 128 on Arm. This avoids
defining the value differently for NUMA vs. non-NUMA builds.

Further discussions can be found here[1].

[1] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v3 -> v4:
1. Add Acked-by tag from Jan.
v2 -> v3:
By checking the discussion in [1] and [2]
[1] https://lists.xenproject.org/archives/html/xen-devel/2023-01/msg00595.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
1. No change
v1 -> v2:
1. Add code comments to explain using NR_MEM_BANKS for Arm
2. Refine commit messages.
---
 xen/arch/arm/include/asm/numa.h | 19 ++++++++++++++++++-
 xen/include/xen/numa.h          |  9 +++++++++
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index e2bee2bd82..7d6ae36a19 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -3,9 +3,26 @@
 
 #include <xen/mm.h>
 
+#include <asm/setup.h>
+
 typedef u8 nodeid_t;
 
-#ifndef CONFIG_NUMA
+#ifdef CONFIG_NUMA
+
+/*
+ * It is very likely that if you have more than 64 nodes, you may
+ * need a lot more than 2 regions per node. So, for Arm, we would
+ * just define NR_NODE_MEMBLKS as an alias to NR_MEM_BANKS.
+ * And in the future NR_MEM_BANKS will be bumped for new platforms,
+ * but for now leave NR_MEM_BANKS as it is on Arm. This avoid to
+ * have different way to define the value based NUMA vs non-NUMA.
+ *
+ * Further discussions can be found here:
+ * https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
+ */
+#define NR_NODE_MEMBLKS NR_MEM_BANKS
+
+#else
 
 /* Fake one node for now. See also node_online_map. */
 #define cpu_to_node(cpu) 0
diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index 29b8c2df89..b86d0851fc 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -13,7 +13,16 @@
 #define MAX_NUMNODES 1
 #endif
 
+/*
+ * Some architectures may have different considerations for
+ * number of node memory blocks. They can define their
+ * NR_NODE_MEMBLKS in asm/numa.h to reflect their architectural
+ * implementation. If the arch does not have specific implementation,
+ * the following default NR_NODE_MEMBLKS will be used.
+ */
+#ifndef NR_NODE_MEMBLKS
 #define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)
+#endif
 
 #define vcpu_to_node(v) (cpu_to_node((v)->processor))
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:57:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:57:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525752.817158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDY6-0005ap-Br; Tue, 25 Apr 2023 07:57:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525752.817158; Tue, 25 Apr 2023 07:57:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDY6-0005ag-8s; Tue, 25 Apr 2023 07:57:18 +0000
Received: by outflank-mailman (input) for mailman id 525752;
 Tue, 25 Apr 2023 07:57:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDY4-00053E-Rv
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:16 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id cb5e765d-e33e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 09:57:16 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3D17C4B3;
 Tue, 25 Apr 2023 00:57:59 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id E70743F587;
 Tue, 25 Apr 2023 00:57:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb5e765d-e33e-11ed-b223-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 02/17] xen/arm: implement helpers to get and update NUMA status
Date: Tue, 25 Apr 2023 15:56:40 +0800
Message-Id: <20230425075655.4037980-3-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

NUMA has one global switch and one implementation-specific
switch. For the ACPI NUMA implementation Xen has acpi_numa, so we
introduce device_tree_numa for the device tree NUMA
implementation, using an enumeration for the init, on, and off states.

arch_numa_disabled reports the device_tree_numa status, but no
boot argument to configure device_tree_numa is provided yet, so
arch_numa_setup simply returns -EINVAL in this patch.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. Rename the first entry of enum dt_numa_status as DT_NUMA_DEFAULT.
2. Make enum dt_numa_status device_tree_numa as __ro_after_init and
   assign it explicitly to DT_NUMA_DEFAULT.
3. Update the year in copyright to 2023.
4. Don't move the x86 numa_disabled() and make Arm's numa_disabled()
   a static inline function for !CONFIG_NUMA.
v1 -> v2:
1. Use arch_numa_disabled to replace numa_enable_with_firmware.
2. Introduce enumerations for device tree numa status.
3. Use common numa_disabled, drop Arm version numa_disabled.
4. Introduce arch_numa_setup for Arm.
5. Rename bad_srat to numa_bad.
6. Add numa_enable_with_firmware helper.
7. Add numa_disabled helper.
8. Refine commit message.
---
 xen/arch/arm/include/asm/numa.h | 17 +++++++++++
 xen/arch/arm/numa.c             | 50 +++++++++++++++++++++++++++++++++
 2 files changed, 67 insertions(+)
 create mode 100644 xen/arch/arm/numa.c

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 7d6ae36a19..83f60ad05b 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -22,6 +22,8 @@ typedef u8 nodeid_t;
  */
 #define NR_NODE_MEMBLKS NR_MEM_BANKS
 
+extern bool numa_disabled(void);
+
 #else
 
 /* Fake one node for now. See also node_online_map. */
@@ -39,6 +41,21 @@ extern mfn_t first_valid_mfn;
 #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
 
+static inline bool numa_disabled(void)
+{
+    return true;
+}
+
+static inline bool arch_numa_unavailable(void)
+{
+    return true;
+}
+
+static inline bool arch_numa_broken(void)
+{
+    return true;
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
new file mode 100644
index 0000000000..eb5d0632cb
--- /dev/null
+++ b/xen/arch/arm/numa.c
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm Architecture support layer for NUMA.
+ *
+ * Copyright (C) 2023 Arm Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+#include <xen/init.h>
+#include <xen/numa.h>
+
+enum dt_numa_status {
+    DT_NUMA_DEFAULT,
+    DT_NUMA_ON,
+    DT_NUMA_OFF,
+};
+
+static enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
+
+void __init numa_fw_bad(void)
+{
+    printk(KERN_ERR "NUMA: device tree numa info table not used.\n");
+    device_tree_numa = DT_NUMA_OFF;
+}
+
+bool __init arch_numa_unavailable(void)
+{
+    return device_tree_numa != DT_NUMA_ON;
+}
+
+bool arch_numa_disabled(void)
+{
+    return device_tree_numa == DT_NUMA_OFF;
+}
+
+int __init arch_numa_setup(const char *opt)
+{
+    return -EINVAL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:57:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:57:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525753.817168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYB-0005va-LF; Tue, 25 Apr 2023 07:57:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525753.817168; Tue, 25 Apr 2023 07:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYB-0005vT-Hr; Tue, 25 Apr 2023 07:57:23 +0000
Received: by outflank-mailman (input) for mailman id 525753;
 Tue, 25 Apr 2023 07:57:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYA-0005uB-Uu
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:22 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id ce1616ef-e33e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 09:57:20 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E0A634B3;
 Tue, 25 Apr 2023 00:58:03 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 4ECB33F587;
 Tue, 25 Apr 2023 00:57:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce1616ef-e33e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 03/17] xen/arm: implement node distance helpers for Arm
Date: Tue, 25 Apr 2023 15:56:41 +0800
Message-Id: <20230425075655.4037980-4-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

We will parse NUMA node distances from the device tree, so we
need a matrix to record the distance between any two parsed
nodes. Accordingly, this patch provides the numa_set_distance API
for the device tree NUMA code to set the distance between any two
nodes. When NUMA initialization fails, __node_distance returns
NUMA_NO_DISTANCE, which lets us avoid rolling back the distance
matrix on failure.

As both x86 and Arm implement __node_distance, we move its
declaration from asm/numa.h to xen/numa.h. At the same time,
x86's outdated u8 return type is changed to unsigned char.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com> # non-Arm parts
---
v3 -> v4:
1. s/definition/declaration/ in commit message.
2. Add Acked-by tag from Jan for non-Arm parts.
3. Drop unnecessary initializer for node_distance_map. Pre-set the
   distance map to NUMA_NO_DISTANCE.
4. Drop NUMA_DISTANCE_UDF_MIN and its usage.
5. Drop EXPORT_SYMBOL(__node_distance).
6. Rework __node_distance()'s return value logic.
v2 -> v3:
1. Use __ro_after_init for node_distance_map.
2. Correct format of if condition identation in numa_set_distance().
3. Drop the unnecessary change to the year of copyright.
4. Use ARRAY_SIZE() to determine node_distance_map's row, column size.
v1 -> v2:
1. Use unsigned int/char instead of uint32_t/u8.
2. Re-org the commit message.
---
 xen/arch/arm/Makefile           |  1 +
 xen/arch/arm/include/asm/numa.h | 12 +++++++
 xen/arch/arm/numa.c             | 55 +++++++++++++++++++++++++++++++++
 xen/arch/x86/include/asm/numa.h |  1 -
 xen/arch/x86/srat.c             |  2 +-
 xen/include/xen/numa.h          |  1 +
 6 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 4d076b278b..9073398d6e 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -38,6 +38,7 @@ obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += mem_access.o
 obj-y += mm.o
 obj-y += monitor.o
+obj-$(CONFIG_NUMA) += numa.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += platform.o
diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 83f60ad05b..96c856a9f7 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -22,7 +22,19 @@ typedef u8 nodeid_t;
  */
 #define NR_NODE_MEMBLKS NR_MEM_BANKS
 
+/*
+ * In ACPI spec, 0-9 are the reserved values for node distance,
+ * 10 indicates local node distance, 20 indicates remote node
+ * distance. Set node distance map in device tree will follow
+ * the ACPI's definition.
+ */
+#define NUMA_DISTANCE_UDF_MAX   9
+#define NUMA_LOCAL_DISTANCE     10
+#define NUMA_REMOTE_DISTANCE    20
+
 extern bool numa_disabled(void);
+extern void numa_set_distance(nodeid_t from, nodeid_t to,
+                              unsigned int distance);
 
 #else
 
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index eb5d0632cb..e4f75f314b 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -28,6 +28,12 @@ enum dt_numa_status {
 
 static enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
 
+static unsigned char __ro_after_init
+node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
+    [0 ... MAX_NUMNODES - 1] = { [0 ... MAX_NUMNODES - 1] = NUMA_NO_DISTANCE }
+};
+
+
 void __init numa_fw_bad(void)
 {
     printk(KERN_ERR "NUMA: device tree numa info table not used.\n");
@@ -48,3 +54,52 @@ int __init arch_numa_setup(const char *opt)
 {
     return -EINVAL;
 }
+
+void __init numa_set_distance(nodeid_t from, nodeid_t to,
+                              unsigned int distance)
+{
+    if ( from >= ARRAY_SIZE(node_distance_map) ||
+         to >= ARRAY_SIZE(node_distance_map[0]) )
+    {
+        printk(KERN_WARNING
+               "NUMA: invalid nodes: from=%"PRIu8" to=%"PRIu8" MAX=%"PRIu8"\n",
+               from, to, MAX_NUMNODES);
+        return;
+    }
+
+    /* NUMA defines NUMA_NO_DISTANCE as unreachable and 0-9 are undefined */
+    if ( distance >= NUMA_NO_DISTANCE || distance <= NUMA_DISTANCE_UDF_MAX ||
+         (from == to && distance != NUMA_LOCAL_DISTANCE) )
+    {
+        printk(KERN_WARNING
+               "NUMA: invalid distance: from=%"PRIu8" to=%"PRIu8" distance=%"PRIu32"\n",
+               from, to, distance);
+        return;
+    }
+
+    node_distance_map[from][to] = distance;
+}
+
+unsigned char __node_distance(nodeid_t from, nodeid_t to)
+{
+    if ( from == to )
+        return NUMA_LOCAL_DISTANCE;
+
+    /*
+     * When NUMA is off, any distance will be treated as unreachable, so
+     * directly return NUMA_NO_DISTANCE from here as an optimization.
+     */
+    if ( numa_disabled() )
+        return NUMA_NO_DISTANCE;
+
+    /*
+     * Check whether the nodes are within the matrix range. Any
+     * out-of-range node is treated as unreachable (the from == to
+     * case has already been handled above).
+     */
+    if ( from >= ARRAY_SIZE(node_distance_map) ||
+         to >= ARRAY_SIZE(node_distance_map[0]) )
+        return NUMA_NO_DISTANCE;
+
+    return node_distance_map[from][to];
+}
diff --git a/xen/arch/x86/include/asm/numa.h b/xen/arch/x86/include/asm/numa.h
index 7866afa408..45456ac441 100644
--- a/xen/arch/x86/include/asm/numa.h
+++ b/xen/arch/x86/include/asm/numa.h
@@ -22,7 +22,6 @@ extern void init_cpu_to_node(void);
 #define arch_want_default_dmazone() (num_online_nodes() > 1)
 
 void srat_parse_regions(paddr_t addr);
-extern u8 __node_distance(nodeid_t a, nodeid_t b);
 unsigned int arch_get_dma_bitsize(void);
 
 #endif
diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index 56749ddca5..50faf5d352 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -328,7 +328,7 @@ unsigned int numa_node_to_arch_nid(nodeid_t n)
 	return 0;
 }
 
-u8 __node_distance(nodeid_t a, nodeid_t b)
+unsigned char __node_distance(nodeid_t a, nodeid_t b)
 {
 	unsigned index;
 	u8 slit_val;
diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index b86d0851fc..8356e47b61 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -114,6 +114,7 @@ extern bool numa_memblks_available(void);
 extern bool numa_update_node_memblks(nodeid_t node, unsigned int arch_nid,
                                      paddr_t start, paddr_t size, bool hotplug);
 extern void numa_set_processor_nodes_parsed(nodeid_t node);
+extern unsigned char __node_distance(nodeid_t a, nodeid_t b);
 
 #else
 
-- 
2.25.1
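The validation rules implemented by numa_set_distance and __node_distance above can be exercised in a standalone sketch (plain C mock, not Xen code; the MAX_NUMNODES value and the 0xFF encoding of NUMA_NO_DISTANCE are illustrative assumptions here):

```c
#include <assert.h>

#define MAX_NUMNODES          8    /* illustrative; Xen derives this from config */
#define NUMA_NO_DISTANCE      0xFF /* assumed encoding for "unreachable" */
#define NUMA_DISTANCE_UDF_MAX 9
#define NUMA_LOCAL_DISTANCE   10

static unsigned char node_distance_map[MAX_NUMNODES][MAX_NUMNODES];

/* Default every pair to unreachable, like the __ro_after_init initializer. */
static void init_distance_map(void)
{
    for (unsigned int i = 0; i < MAX_NUMNODES; i++)
        for (unsigned int j = 0; j < MAX_NUMNODES; j++)
            node_distance_map[i][j] = NUMA_NO_DISTANCE;
}

/* Mirrors the patch's checks: reject out-of-range nodes, reserved
 * distances (0-9), unreachable (0xFF), and a non-local self distance. */
static int set_distance(unsigned int from, unsigned int to, unsigned int d)
{
    if (from >= MAX_NUMNODES || to >= MAX_NUMNODES)
        return -1;
    if (d >= NUMA_NO_DISTANCE || d <= NUMA_DISTANCE_UDF_MAX ||
        (from == to && d != NUMA_LOCAL_DISTANCE))
        return -1;
    node_distance_map[from][to] = d;
    return 0;
}

static unsigned char node_distance(unsigned int from, unsigned int to)
{
    if (from == to)
        return NUMA_LOCAL_DISTANCE;
    if (from >= MAX_NUMNODES || to >= MAX_NUMNODES)
        return NUMA_NO_DISTANCE;
    return node_distance_map[from][to];
}
```

A pair that was never set stays at NUMA_NO_DISTANCE, so "unreachable" is the default rather than something that has to be stored explicitly.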



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:57:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:57:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525754.817177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYE-0006G1-UQ; Tue, 25 Apr 2023 07:57:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525754.817177; Tue, 25 Apr 2023 07:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYE-0006Fs-RC; Tue, 25 Apr 2023 07:57:26 +0000
Received: by outflank-mailman (input) for mailman id 525754;
 Tue, 25 Apr 2023 07:57:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYD-0005uB-Ol
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:25 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id d0231bcb-e33e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 09:57:24 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5E193D75;
 Tue, 25 Apr 2023 00:58:07 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 0BA1B3F587;
 Tue, 25 Apr 2023 00:57:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0231bcb-e33e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 04/17] xen/arm: use arch_get_ram_range to get memory ranges from bootinfo
Date: Tue, 25 Apr 2023 15:56:42 +0800
Message-Id: <20230425075655.4037980-5-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Implement the same "arch_get_ram_range" helper as x86 so that the
NUMA code can get memory banks from the Arm bootinfo.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Use arch_get_ram_range instead of arch_get_memory_map.
---
 xen/arch/arm/numa.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index e4f75f314b..3fee3789c7 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -103,3 +103,14 @@ unsigned char __node_distance(nodeid_t from, nodeid_t to)
 
     return node_distance_map[from][to];
 }
+
+int __init arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
+{
+    if ( idx >= bootinfo.mem.nr_banks )
+        return -ENOENT;
+
+    *start = bootinfo.mem.bank[idx].start;
+    *end = *start + bootinfo.mem.bank[idx].size;
+
+    return 0;
+}
-- 
2.25.1
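The contract of arch_get_ram_range above can be illustrated with a standalone mock of the boot memory info (the bank addresses and the struct layout here are hypothetical stand-ins for Arm's real bootinfo):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define NR_MEM_BANKS 4

struct membank {
    uint64_t start;
    uint64_t size;
};

struct meminfo {
    unsigned int nr_banks;
    struct membank bank[NR_MEM_BANKS];
};

/* Two hypothetical RAM banks, as a DT-described bootinfo might hold. */
static struct meminfo bootinfo_mem = {
    .nr_banks = 2,
    .bank = {
        { 0x40000000ULL,  0x40000000ULL },  /* 1 GiB at 1 GiB */
        { 0x100000000ULL, 0x80000000ULL },  /* 2 GiB at 4 GiB */
    },
};

/* Same contract as the patch: fill [start, end) for bank idx, or
 * return -ENOENT once idx runs past the populated banks, which lets
 * callers iterate banks without knowing nr_banks up front. */
static int arch_get_ram_range(unsigned int idx, uint64_t *start, uint64_t *end)
{
    if (idx >= bootinfo_mem.nr_banks)
        return -ENOENT;

    *start = bootinfo_mem.bank[idx].start;
    *end = *start + bootinfo_mem.bank[idx].size;
    return 0;
}
```

Callers (like the common NUMA code) simply loop from idx 0 upward until -ENOENT is returned.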



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:57:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:57:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525759.817188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYI-0006ix-7K; Tue, 25 Apr 2023 07:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525759.817188; Tue, 25 Apr 2023 07:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYI-0006io-3O; Tue, 25 Apr 2023 07:57:30 +0000
Received: by outflank-mailman (input) for mailman id 525759;
 Tue, 25 Apr 2023 07:57:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYH-0005uB-AL
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:29 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id d22f07c2-e33e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 09:57:27 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B4B654B3;
 Tue, 25 Apr 2023 00:58:10 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 6A5033F882;
 Tue, 25 Apr 2023 00:57:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d22f07c2-e33e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 05/17] xen/arm: build NUMA cpu_to_node map in dt_smp_init_cpus
Date: Tue, 25 Apr 2023 15:56:43 +0800
Message-Id: <20230425075655.4037980-6-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

The NUMA implementation has a cpu_to_node array that stores the
CPU-to-node map. Xen uses CPU logical IDs in its runtime
components, so the CPU logical ID is used as the index into
cpu_to_node.

In the device tree case, cpu_logical_map is created in
dt_smp_init_cpus. So, when NUMA is enabled, dt_smp_init_cpus also
fetches each CPU's NUMA node ID to populate cpu_to_node.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Use static inline functions instead of macros to perform type
   checks on function parameters.
2. Add numa_disabled to gate the numa-node-id check for the case
   where CONFIG_NUMA is on but NUMA is disabled by the user.
3. Use a macro instead of a static inline function to stub
   numa_set_node.
---
 xen/arch/arm/include/asm/numa.h |  4 ++++
 xen/arch/arm/smpboot.c          | 36 +++++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 96c856a9f7..97d4a67dea 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -68,6 +68,10 @@ static inline bool arch_numa_broken(void)
     return true;
 }
 
+static inline void numa_set_node(unsigned int cpu, nodeid_t node)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 4a89b3a834..da7f2afd97 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -118,7 +118,12 @@ static void __init dt_smp_init_cpus(void)
     {
         [0 ... NR_CPUS - 1] = MPIDR_INVALID
     };
+    static nodeid_t node_map[NR_CPUS] __initdata =
+    {
+        [0 ... NR_CPUS - 1] = NUMA_NO_NODE
+    };
     bool bootcpu_valid = false;
+    unsigned int nid = 0;
     int rc;
 
     mpidr = system_cpuinfo.mpidr.bits & MPIDR_HWID_MASK;
@@ -169,6 +174,28 @@ static void __init dt_smp_init_cpus(void)
             continue;
         }
 
+        if ( IS_ENABLED(CONFIG_NUMA) )
+        {
+            /*
+             * When CONFIG_NUMA is set, try to fetch the NUMA
+             * information from the CPU's DT node; otherwise nid stays 0.
+             */
+            if ( !dt_property_read_u32(cpu, "numa-node-id", &nid) )
+            {
+                printk(XENLOG_WARNING
+                       "cpu[%d] dts path: %s: doesn't have numa information!\n",
+                       cpuidx, dt_node_full_name(cpu));
+                /*
+                 * During early NUMA initialization, if Xen finds any
+                 * CPU DT node without numa-node-id info, NUMA is
+                 * treated as off and all CPUs are assigned to a fake
+                 * node 0. So if reading numa-node-id fails here, we
+                 * set nid to 0.
+                 */
+                nid = 0;
+            }
+        }
+
         /*
          * 8 MSBs must be set to 0 in the DT since the reg property
          * defines the MPIDR[23:0]
@@ -228,9 +255,13 @@ static void __init dt_smp_init_cpus(void)
         {
             printk("cpu%d init failed (hwid %"PRIregister"): %d\n", i, hwid, rc);
             tmp_map[i] = MPIDR_INVALID;
+            node_map[i] = NUMA_NO_NODE;
         }
         else
+        {
             tmp_map[i] = hwid;
+            node_map[i] = nid;
+        }
     }
 
     if ( !bootcpu_valid )
@@ -246,6 +277,11 @@ static void __init dt_smp_init_cpus(void)
             continue;
         cpumask_set_cpu(i, &cpu_possible_map);
         cpu_logical_map(i) = tmp_map[i];
+
+        nid = node_map[i];
+        if ( nid >= MAX_NUMNODES )
+            nid = 0;
+        numa_set_node(i, nid);
     }
 }
 
-- 
2.25.1
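The node_map bookkeeping and fallback rules added to dt_smp_init_cpus above can be sketched standalone (plain C mock; NR_CPUS, MAX_NUMNODES and the has_prop flag are illustrative assumptions, and the GNU range-initializer syntax matches the patch):

```c
#include <assert.h>

#define NR_CPUS      4
#define MAX_NUMNODES 2
#define NUMA_NO_NODE 0xFF

static unsigned char node_map[NR_CPUS] = {
    [0 ... NR_CPUS - 1] = NUMA_NO_NODE  /* GCC range initializer, as in the patch */
};

/* Record the node parsed from a CPU's DT node; has_prop is zero when
 * the "numa-node-id" property is absent, in which case the patch falls
 * back to a fake node 0. */
static void record_cpu_node(unsigned int cpu, int has_prop, unsigned int nid)
{
    if (!has_prop)
        nid = 0;  /* DT lacks NUMA info: fake node 0 */
    node_map[cpu] = (unsigned char)nid;
}

/* Final sanitisation mirroring the loop over cpu_possible_map:
 * anything out of range (including NUMA_NO_NODE) becomes node 0. */
static unsigned int effective_node(unsigned int cpu)
{
    unsigned int nid = node_map[cpu];

    return (nid >= MAX_NUMNODES) ? 0 : nid;
}
```

The two-stage design (record raw parse result per CPU, then sanitise when building cpu_logical_map) keeps failed-init CPUs from leaking stale node IDs.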



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:57:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525760.817198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYK-00076a-Ho; Tue, 25 Apr 2023 07:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525760.817198; Tue, 25 Apr 2023 07:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYK-00076N-DO; Tue, 25 Apr 2023 07:57:32 +0000
Received: by outflank-mailman (input) for mailman id 525760;
 Tue, 25 Apr 2023 07:57:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYJ-00053E-8K
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:31 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id d41347b7-e33e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 09:57:30 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0E99E4B3;
 Tue, 25 Apr 2023 00:58:14 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id B7A583F587;
 Tue, 25 Apr 2023 00:57:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d41347b7-e33e-11ed-b223-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 06/17] xen/arm: Add boot and secondary CPU to NUMA system
Date: Tue, 25 Apr 2023 15:56:44 +0800
Message-Id: <20230425075655.4037980-7-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Bring each NUMA node online and add each CPU to its NUMA node.
This gives NUMA-aware components the NUMA affinity data they need
to do their work.

To keep mostly the same behavior as x86, we use
numa_detect_cpu_node to bring a node online. The difference is
that cpu_to_node has already been prepared in dt_smp_init_cpus, so
there is no need to set up cpu_to_node in numa_detect_cpu_node.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Use unsigned int instead of int for cpu id.
2. Use static inline for stub to do type check.
---
 xen/arch/arm/include/asm/numa.h |  9 +++++++++
 xen/arch/arm/numa.c             | 10 ++++++++++
 xen/arch/arm/setup.c            |  5 +++++
 3 files changed, 24 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 97d4a67dea..b04ace26db 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -35,6 +35,7 @@ typedef u8 nodeid_t;
 extern bool numa_disabled(void);
 extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
+extern void numa_detect_cpu_node(unsigned int cpu);
 
 #else
 
@@ -72,6 +73,14 @@ static inline void numa_set_node(unsigned int cpu, nodeid_t node)
 {
 }
 
+static inline void numa_add_cpu(unsigned int cpu)
+{
+}
+
+static inline void numa_detect_cpu_node(unsigned int cpu)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 3fee3789c7..7ae9ace15d 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -80,6 +80,16 @@ void __init numa_set_distance(nodeid_t from, nodeid_t to,
     node_distance_map[from][to] = distance;
 }
 
+void numa_detect_cpu_node(unsigned int cpu)
+{
+    nodeid_t node = cpu_to_node[cpu];
+
+    if ( node == NUMA_NO_NODE )
+        node = 0;
+
+    node_set_online(node);
+}
+
 unsigned char __node_distance(nodeid_t from, nodeid_t to)
 {
     if ( from == to )
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 6f9f4d8c8a..09e18d32df 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1205,6 +1205,11 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     for_each_present_cpu ( i )
     {
+        /* Detect and online node based on cpu_to_node[]. */
+        numa_detect_cpu_node(i);
+        /* Set up node_to_cpumask based on cpu_to_node[]. */
+        numa_add_cpu(i);
+
         if ( (num_online_cpus() < nr_cpu_ids) && !cpu_online(i) )
         {
             int ret = cpu_up(i);
-- 
2.25.1
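The node-onlining step performed by numa_detect_cpu_node above can be sketched with a simple bitmap (standalone mock, not Xen code; the cpu_to_node contents and bitmap representation are illustrative assumptions):

```c
#include <assert.h>

#define MAX_NUMNODES 4
#define NUMA_NO_NODE 0xFF

static unsigned long node_online_map;  /* bit n set => node n online */

/* Hypothetical per-CPU node table, as prepared by dt_smp_init_cpus. */
static unsigned char cpu_to_node_tbl[] = { 0, 0, 1, NUMA_NO_NODE };

/* Mirrors numa_detect_cpu_node: a CPU with no recorded node falls
 * back to node 0, and the resulting node is marked online. */
static void detect_cpu_node(unsigned int cpu)
{
    unsigned int node = cpu_to_node_tbl[cpu];

    if (node == NUMA_NO_NODE)
        node = 0;

    node_online_map |= 1UL << node;
}
```

Running this over every present CPU, as start_xen's for_each_present_cpu loop does, leaves exactly the nodes that own at least one CPU marked online.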



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:57:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:57:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525764.817208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYN-0007dk-Ps; Tue, 25 Apr 2023 07:57:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525764.817208; Tue, 25 Apr 2023 07:57:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYN-0007dZ-Mr; Tue, 25 Apr 2023 07:57:35 +0000
Received: by outflank-mailman (input) for mailman id 525764;
 Tue, 25 Apr 2023 07:57:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYM-00053E-U1
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:34 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id d62057d5-e33e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 09:57:34 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5C80D4B3;
 Tue, 25 Apr 2023 00:58:17 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 12B4A3F587;
 Tue, 25 Apr 2023 00:57:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d62057d5-e33e-11ed-b223-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 07/17] xen/arm: introduce a helper to parse device tree processor node
Date: Tue, 25 Apr 2023 15:56:45 +0800
Message-Id: <20230425075655.4037980-8-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Processor NUMA ID information is stored in the device tree's
processor nodes as "numa-node-id". We need a new helper to parse
this ID from a processor node and check its validity. If any
processor node carries an invalid NUMA ID, the device tree's NUMA
information is marked as invalid.

Since new helpers need to know the NUMA status, move the
enum dt_numa_status to the Arm NUMA header.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. Move the enum dt_numa_status to the Arm NUMA header.
2. Update the year in copyright to 2023.
v1 -> v2:
1. Move numa_disabled from fdt_numa_processor_affinity_init
   to fdt_parse_numa_cpu_node.
2. Move invalid NUMA id check to fdt_parse_numa_cpu_node.
3. Return ENODATA for normal dtb without NUMA info.
4. Use NUMA status helpers instead of SRAT functions.
---
 xen/arch/arm/Makefile           |  1 +
 xen/arch/arm/include/asm/numa.h |  8 +++++
 xen/arch/arm/numa.c             |  8 +----
 xen/arch/arm/numa_device_tree.c | 64 +++++++++++++++++++++++++++++++++
 4 files changed, 74 insertions(+), 7 deletions(-)
 create mode 100644 xen/arch/arm/numa_device_tree.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 9073398d6e..bbc68e3735 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -39,6 +39,7 @@ obj-y += mem_access.o
 obj-y += mm.o
 obj-y += monitor.o
 obj-$(CONFIG_NUMA) += numa.o
+obj-$(CONFIG_DEVICE_TREE_NUMA) += numa_device_tree.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += platform.o
diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index b04ace26db..2987158d16 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -22,6 +22,14 @@ typedef u8 nodeid_t;
  */
 #define NR_NODE_MEMBLKS NR_MEM_BANKS
 
+enum dt_numa_status {
+    DT_NUMA_DEFAULT,
+    DT_NUMA_ON,
+    DT_NUMA_OFF,
+};
+
+extern enum dt_numa_status device_tree_numa;
+
 /*
  * In ACPI spec, 0-9 are the reserved values for node distance,
  * 10 indicates local node distance, 20 indicates remote node
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 7ae9ace15d..b953c2574a 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -20,13 +20,7 @@
 #include <xen/init.h>
 #include <xen/numa.h>
 
-enum dt_numa_status {
-    DT_NUMA_DEFAULT,
-    DT_NUMA_ON,
-    DT_NUMA_OFF,
-};
-
-static enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
+enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
 
 static unsigned char __ro_after_init
 node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
new file mode 100644
index 0000000000..83601c83e7
--- /dev/null
+++ b/xen/arch/arm/numa_device_tree.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm Architecture support layer for device tree NUMA.
+ *
+ * Copyright (C) 2023 Arm Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+#include <xen/init.h>
+#include <xen/nodemask.h>
+#include <xen/numa.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/device_tree.h>
+
+/* Callback for device tree processor affinity */
+static int __init fdt_numa_processor_affinity_init(nodeid_t node)
+{
+    numa_set_processor_nodes_parsed(node);
+    device_tree_numa = DT_NUMA_ON;
+
+    printk(KERN_INFO "DT: NUMA node %"PRIu8" processor parsed\n", node);
+
+    return 0;
+}
+
+/* Parse CPU NUMA node info */
+static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
+{
+    unsigned int nid;
+
+    if ( numa_disabled() )
+        return -EINVAL;
+
+    /*
+     * device_tree_get_u32 will return NUMA_NO_NODE when this CPU
+     * DT node doesn't have numa-node-id. This helps us distinguish
+     * a bad DTB from a normal DTB without NUMA info.
+     */
+    nid = device_tree_get_u32(fdt, node, "numa-node-id", NUMA_NO_NODE);
+    if ( nid == NUMA_NO_NODE )
+    {
+        numa_fw_bad();
+        return -ENODATA;
+    }
+    else if ( nid >= MAX_NUMNODES )
+    {
+        printk(XENLOG_ERR "DT: CPU NUMA node id %u is invalid\n", nid);
+        numa_fw_bad();
+        return -EINVAL;
+    }
+
+    return fdt_numa_processor_affinity_init(nid);
+}
-- 
2.25.1
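The three-way return convention of fdt_parse_numa_cpu_node above (missing property vs. out-of-range ID vs. valid ID) can be captured in a small standalone classifier (plain C mock; the NUMA_NO_NODE and MAX_NUMNODES values are illustrative assumptions):

```c
#include <assert.h>
#include <errno.h>

#define MAX_NUMNODES 8
#define NUMA_NO_NODE 0xFFu  /* sentinel returned when the property is absent */

/* Classify the value read for "numa-node-id" the way
 * fdt_parse_numa_cpu_node does: a missing property (reported as
 * NUMA_NO_NODE by the reader's default) means a normal DTB without
 * NUMA info (-ENODATA), while an out-of-range ID means a bad DTB
 * (-EINVAL). Only a valid in-range ID returns 0. */
static int classify_numa_node_id(unsigned int nid)
{
    if (nid == NUMA_NO_NODE)
        return -ENODATA;  /* no NUMA info: not an error per se */
    if (nid >= MAX_NUMNODES)
        return -EINVAL;   /* bad DTB: invalid node ID */
    return 0;
}
```

The order of the checks matters: NUMA_NO_NODE is itself out of range, so the "missing property" case must be tested first to keep -ENODATA distinct from -EINVAL.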



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:57:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:57:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525768.817218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYT-0008EL-BE; Tue, 25 Apr 2023 07:57:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525768.817218; Tue, 25 Apr 2023 07:57:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDYT-0008EC-6Y; Tue, 25 Apr 2023 07:57:41 +0000
Received: by outflank-mailman (input) for mailman id 525768;
 Tue, 25 Apr 2023 07:57:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYS-0005uB-4Y
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:40 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id d81a8885-e33e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 09:57:37 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B31DE4B3;
 Tue, 25 Apr 2023 00:58:20 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 697863F587;
 Tue, 25 Apr 2023 00:57:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d81a8885-e33e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 08/17] xen/arm: introduce a helper to parse device tree memory node
Date: Tue, 25 Apr 2023 15:56:46 +0800
Message-Id: <20230425075655.4037980-9-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Memory blocks' NUMA ID information is stored in the device tree's
memory nodes as "numa-node-id". We need a new helper to parse and
verify this ID from memory nodes.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Move numa_disabled check to fdt_parse_numa_memory_node.
2. Use numa_bad to replace bad_srat.
3. Replace tabs by spaces.
4. Align parameters.
5. return ENODATA for a normal dtb without numa info.
6. Un-addressed comment:
   "Why not parse numa-node-id and call fdt_numa_memory_affinity_init
   from xen/arch/arm/bootfdt.c:device_tree_get_meminfo. Is it because
   device_tree_get_meminfo is called too early?"
   I checked the device_tree_get_meminfo code and I think the answer
   is similar to my reply on the RFC: I prefer a unified NUMA
   initialization entry point and don't want to spread the NUMA
   parsing code across different places.
7. Use node id as dummy PXM for numa_update_node_memblks.
---
 xen/arch/arm/numa_device_tree.c | 89 +++++++++++++++++++++++++++++++++
 1 file changed, 89 insertions(+)

diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
index 83601c83e7..38f5e93d1d 100644
--- a/xen/arch/arm/numa_device_tree.c
+++ b/xen/arch/arm/numa_device_tree.c
@@ -34,6 +34,26 @@ static int __init fdt_numa_processor_affinity_init(nodeid_t node)
     return 0;
 }
 
+/* Callback for parsing of the memory regions affinity */
+static int __init fdt_numa_memory_affinity_init(nodeid_t node,
+                                                paddr_t start, paddr_t size)
+{
+    if ( !numa_memblks_available() )
+    {
+        dprintk(XENLOG_WARNING,
+                "Too many NUMA entries, try bigger NR_NODE_MEMBLKS\n");
+        return -EINVAL;
+    }
+
+    numa_fw_nid_name = "numa-node-id";
+    if ( !numa_update_node_memblks(node, node, start, size, false) )
+        return -EINVAL;
+
+    device_tree_numa = DT_NUMA_ON;
+
+    return 0;
+}
+
 /* Parse CPU NUMA node info */
 static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
 {
@@ -62,3 +82,72 @@ static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
 
     return fdt_numa_processor_affinity_init(nid);
 }
+
+/* Parse memory node NUMA info */
+static int __init fdt_parse_numa_memory_node(const void *fdt, int node,
+                                             const char *name,
+                                             unsigned int addr_cells,
+                                             unsigned int size_cells)
+{
+    unsigned int nid;
+    int ret = 0, len;
+    paddr_t addr, size;
+    const struct fdt_property *prop;
+    unsigned int idx, ranges;
+    const __be32 *addresses;
+
+    if ( numa_disabled() )
+        return -EINVAL;
+
+    /*
+     * device_tree_get_u32 will return NUMA_NO_NODE when this memory
+     * DT node doesn't have numa-node-id. This helps us distinguish
+     * a bad DTB from a normal DTB without NUMA info.
+     */
+    nid = device_tree_get_u32(fdt, node, "numa-node-id", NUMA_NO_NODE);
+    if ( nid == NUMA_NO_NODE )
+    {
+        numa_fw_bad();
+        return -ENODATA;
+    }
+    else if ( nid >= MAX_NUMNODES )
+    {
+        printk(XENLOG_WARNING "Node id %u exceeds maximum value\n", nid);
+        goto invalid_data;
+    }
+
+    prop = fdt_get_property(fdt, node, "reg", &len);
+    if ( !prop )
+    {
+        printk(XENLOG_WARNING
+               "fdt: node `%s': missing `reg' property\n", name);
+        goto invalid_data;
+    }
+
+    addresses = (const __be32 *)prop->data;
+    ranges = len / (sizeof(__be32) * (addr_cells + size_cells));
+    for ( idx = 0; idx < ranges; idx++ )
+    {
+        device_tree_get_reg(&addresses, addr_cells, size_cells, &addr, &size);
+        /* Skip zero size ranges */
+        if ( !size )
+            continue;
+
+        ret = fdt_numa_memory_affinity_init(nid, addr, size);
+        if ( ret )
+            goto invalid_data;
+    }
+
+    if ( idx == 0 )
+    {
+        printk(XENLOG_ERR
               "bad property in memory node, idx=%u ret=%d\n", idx, ret);
+        goto invalid_data;
+    }
+
+    return 0;
+
+invalid_data:
+    numa_fw_bad();
+    return -EINVAL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:59:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:59:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525783.817238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZo-0001pw-2B; Tue, 25 Apr 2023 07:59:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525783.817238; Tue, 25 Apr 2023 07:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZn-0001pl-U4; Tue, 25 Apr 2023 07:59:03 +0000
Received: by outflank-mailman (input) for mailman id 525783;
 Tue, 25 Apr 2023 07:59:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYU-00053E-1l
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:42 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id da1d13b8-e33e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 09:57:40 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 18621D75;
 Tue, 25 Apr 2023 00:58:24 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id C25383F587;
 Tue, 25 Apr 2023 00:57:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da1d13b8-e33e-11ed-b223-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree NUMA distance map
Date: Tue, 25 Apr 2023 15:56:47 +0800
Message-Id: <20230425075655.4037980-10-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

A NUMA-aware device tree provides a "distance-map" node to
describe the distance between any two nodes. This patch introduces
a new helper to parse this distance map.
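
For reference, a hypothetical distance map for two nodes (values made up; 10 is the conventional local distance) could look like:

```dts
/* Illustrative only: a symmetric 2-node distance map */
distance-map {
    compatible = "numa-distance-map-v1";
    distance-matrix = <0 0 10>,
                      <0 1 20>,
                      <1 0 20>,
                      <1 1 10>;
};
```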

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. The distance map default value is now NUMA_NO_DISTANCE, update
   the logic accordingly and add in-code comment as a note.
v2 -> v3:
1. No change.
v1 -> v2:
1. Get rid of useless braces.
2. Use new NUMA status helper.
3. Use PRIu32 to replace u in print messages.
4. Fix opposite = __node_distance(to, from).
5. Disable the DTB NUMA info table when we find invalid data
   in the DTB.
---
 xen/arch/arm/numa_device_tree.c | 108 ++++++++++++++++++++++++++++++++
 1 file changed, 108 insertions(+)

diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
index 38f5e93d1d..9d9d09273e 100644
--- a/xen/arch/arm/numa_device_tree.c
+++ b/xen/arch/arm/numa_device_tree.c
@@ -151,3 +151,111 @@ invalid_data:
     numa_fw_bad();
     return -EINVAL;
 }
+
+/* Parse NUMA distance map v1 */
+static int __init fdt_parse_numa_distance_map_v1(const void *fdt, int node)
+{
+    const struct fdt_property *prop;
+    const __be32 *matrix;
+    unsigned int i, entry_count;
+    int len;
+
+    printk(XENLOG_INFO "NUMA: parsing numa-distance-map\n");
+
+    prop = fdt_get_property(fdt, node, "distance-matrix", &len);
+    if ( !prop )
+    {
+        printk(XENLOG_WARNING
+               "NUMA: No distance-matrix property in distance-map\n");
+        goto invalid_data;
+    }
+
+    if ( len % sizeof(__be32) != 0 )
+    {
+        printk(XENLOG_WARNING
+               "distance-matrix in node is not a multiple of u32\n");
+        goto invalid_data;
+    }
+
+    entry_count = len / sizeof(__be32);
+    if ( entry_count == 0 )
+    {
+        printk(XENLOG_WARNING "NUMA: Invalid distance-matrix\n");
+        goto invalid_data;
+    }
+
+    matrix = (const __be32 *)prop->data;
+    for ( i = 0; i + 2 < entry_count; i += 3 )
+    {
+        unsigned int from, to, distance, opposite;
+
+        from = dt_next_cell(1, &matrix);
+        to = dt_next_cell(1, &matrix);
+        distance = dt_next_cell(1, &matrix);
+        if ( (from == to && distance != NUMA_LOCAL_DISTANCE) ||
+             (from != to && distance <= NUMA_LOCAL_DISTANCE) )
+        {
+            printk(XENLOG_WARNING
+                   "NUMA: Invalid distance: NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
+                   from, to, distance);
+            goto invalid_data;
+        }
+
+        printk(XENLOG_INFO "NUMA: distance: NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
+               from, to, distance);
+
+        /* Get opposite way distance */
+        opposite = __node_distance(to, from);
+        /* The default value in node_distance_map is NUMA_NO_DISTANCE */
+        if ( opposite == NUMA_NO_DISTANCE )
+        {
+            /* Bi-directions are not set, set both */
+            numa_set_distance(from, to, distance);
+            numa_set_distance(to, from, distance);
+        }
+        else
+        {
+            /*
+             * Opposite way distance has been set to a different value.
+             * It may be a firmware device tree bug?
+             */
+            if ( opposite != distance )
+            {
+                /*
+                 * In device tree NUMA distance-matrix binding:
+                 * https://www.kernel.org/doc/Documentation/devicetree/bindings/numa.txt
+                 * There is a note that mentions:
+                 * "Each entry represents distance from first node to
+                 *  second node. The distances are equal in either
+                 *  direction."
+                 *
+                 * That means the device tree doesn't permit this case,
+                 * but the ACPI spec specifically permits it:
+                 * "Except for the relative distance from a System Locality
+                 *  to itself, each relative distance is stored twice in the
+                 *  matrix. This provides the capability to describe the
+                 *  scenario where the relative distances for the two
+                 *  directions between System Localities is different."
+                 *
+                 * That means a real machine may have such a NUMA
+                 * configuration. So, print a WARNING here to let system
+                 * administrators check whether this is the special case
+                 * where they adjusted the device tree to support such a
+                 * rare machine.
+                 */
+                printk(XENLOG_WARNING
+                       "Mismatched bi-directional distances! NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32", NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
+                       from, to, distance, to, from, opposite);
+            }
+
+            /* Opposite way distance has been set, just set this way */
+            numa_set_distance(from, to, distance);
+        }
+    }
+
+    return 0;
+
+invalid_data:
+    numa_fw_bad();
+    return -EINVAL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:59:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:59:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525785.817243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZo-0001sa-Cv; Tue, 25 Apr 2023 07:59:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525785.817243; Tue, 25 Apr 2023 07:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZo-0001rp-7L; Tue, 25 Apr 2023 07:59:04 +0000
Received: by outflank-mailman (input) for mailman id 525785;
 Tue, 25 Apr 2023 07:59:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYw-0005uB-IE
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:58:10 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id eae4ecbe-e33e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 09:58:08 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 50CE84B3;
 Tue, 25 Apr 2023 00:58:52 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7CF703F587;
 Tue, 25 Apr 2023 00:58:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eae4ecbe-e33e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 17/17] docs: update numa command line to support Arm
Date: Tue, 25 Apr 2023 15:56:55 +0800
Message-Id: <20230425075655.4037980-18-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

The numa command in the documentation is currently x86 only. Remove
the x86 arch limitation from the numa command in this patch.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. Add the Acked-by tag from Jan.
v1 -> v2:
1. Update Arm NUMA status in SUPPORT.md to "Tech Preview".
---
 SUPPORT.md                        | 1 +
 docs/misc/xen-command-line.pandoc | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index aa1940e55f..da4e3b9aa2 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -401,6 +401,7 @@ on embedded platforms and the x86 PV shim.
 Enables NUMA aware scheduling in Xen
 
     Status, x86: Supported
+    Status, Arm: Tech Preview
 
 ## Scalability
 
diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e0b89b7d33..2fea22dd70 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1890,7 +1890,7 @@ i.e. a limit on the number of guests it is possible to start each having
 assigned a device sharing a common interrupt line.  Accepts values between
 1 and 255.
 
-### numa (x86)
+### numa
 > `= on | off | fake=<integer> | noacpi`
 
 > Default: `on`
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:59:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:59:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525789.817256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZp-0002Hm-MZ; Tue, 25 Apr 2023 07:59:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525789.817256; Tue, 25 Apr 2023 07:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZp-0002GG-Hc; Tue, 25 Apr 2023 07:59:05 +0000
Received: by outflank-mailman (input) for mailman id 525789;
 Tue, 25 Apr 2023 07:59:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYh-00053E-HI
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:55 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id e206d91b-e33e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 09:57:54 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 79A8EFEC;
 Tue, 25 Apr 2023 00:58:37 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 272B63F587;
 Tue, 25 Apr 2023 00:57:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e206d91b-e33e-11ed-b223-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 13/17] xen/arm: implement numa_node_to_arch_nid for device tree NUMA
Date: Tue, 25 Apr 2023 15:56:51 +0800
Message-Id: <20230425075655.4037980-14-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Device tree based NUMA doesn't have a proximity domain like ACPI,
so we can return the node id directly as the arch nid.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Use numa_node_to_arch_nid instead of dummy node_to_pxm.
---
 xen/arch/arm/include/asm/numa.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 55ac4665db..71b95a9a62 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -47,6 +47,15 @@ extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
 extern void numa_init(void);
 
+/*
+ * Device tree NUMA doesn't have an architectural node id.
+ * So we can just return the node id as the arch nid.
+ */
+static inline unsigned int numa_node_to_arch_nid(nodeid_t n)
+{
+    return n;
+}
+
 #else
 
 /* Fake one node for now. See also node_online_map. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:59:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:59:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525779.817229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZm-0001ae-SE; Tue, 25 Apr 2023 07:59:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525779.817229; Tue, 25 Apr 2023 07:59:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZm-0001aS-LH; Tue, 25 Apr 2023 07:59:02 +0000
Received: by outflank-mailman (input) for mailman id 525779;
 Tue, 25 Apr 2023 07:59:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYb-0005uB-4u
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:49 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id de212a70-e33e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 09:57:47 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D010A4B3;
 Tue, 25 Apr 2023 00:58:30 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 85F5B3F587;
 Tue, 25 Apr 2023 00:57:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de212a70-e33e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 11/17] xen/arm: keep guest NUMA unaware
Date: Tue, 25 Apr 2023 15:56:49 +0800
Message-Id: <20230425075655.4037980-12-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

The NUMA information provided in the host device tree is only for
Xen. For dom0, we want to hide it as it may be different (for now,
dom0 is still not aware of NUMA). The CPU and memory nodes are
recreated from scratch for the domain, so we already skip the
"numa-node-id" property for these two types of nodes.

However, some devices like PCIe may have a "numa-node-id" property
too. We have to skip it as well.
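
As a hypothetical example (node name and values made up), a device node carrying the property that must not reach dom0 could look like:

```dts
/* Illustrative only: a host controller tagged with a NUMA node */
pcie@40000000 {
    compatible = "pci-host-ecam-generic";
    device_type = "pci";
    numa-node-id = <1>; /* skipped when rebuilding dom0's device tree */
};
```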

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Add Rb
---
 xen/arch/arm/domain_build.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f80fdd1af2..2bf586fc45 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1185,6 +1185,10 @@ static int __init write_properties(struct domain *d, struct kernel_info *kinfo,
                 continue;
         }
 
+        /* Dom0 is currently NUMA unaware */
+        if ( dt_property_name_is_equal(prop, "numa-node-id") )
+            continue;
+
         res = fdt_property(kinfo->fdt, prop->name, prop_data, prop_len);
 
         if ( res )
@@ -2567,6 +2571,8 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
         DT_MATCH_TYPE("memory"),
         /* The memory mapped timer is not supported by Xen. */
         DT_MATCH_COMPATIBLE("arm,armv7-timer-mem"),
+        /* NUMA info doesn't need to be exposed to Domain-0 */
+        DT_MATCH_COMPATIBLE("numa-distance-map-v1"),
         { /* sentinel */ },
     };
     static const struct dt_device_match timer_matches[] __initconst =
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:59:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:59:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525792.817268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZs-0002iD-UQ; Tue, 25 Apr 2023 07:59:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525792.817268; Tue, 25 Apr 2023 07:59:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZs-0002i3-RB; Tue, 25 Apr 2023 07:59:08 +0000
Received: by outflank-mailman (input) for mailman id 525792;
 Tue, 25 Apr 2023 07:59:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYl-0005uB-O9
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:59 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id e4132d8b-e33e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 09:57:57 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D17FB4B3;
 Tue, 25 Apr 2023 00:58:40 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7EE673F587;
 Tue, 25 Apr 2023 00:57:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4132d8b-e33e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 14/17] xen/arm: use CONFIG_NUMA to gate node_online_map in smpboot
Date: Tue, 25 Apr 2023 15:56:52 +0800
Message-Id: <20230425075655.4037980-15-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

node_online_map in smpboot is still needed for Arm when NUMA is
turned off by Kconfig.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. No change.
---
 xen/arch/arm/smpboot.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index da7f2afd97..4f71cc974a 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -41,8 +41,10 @@ integer_param("maxcpus", max_cpus);
 /* CPU logical map: map xen cpuid to an MPIDR */
 register_t __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = MPIDR_INVALID };
 
+#ifndef CONFIG_NUMA
 /* Fake one node for now. See also asm/numa.h */
 nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
+#endif
 
 /* Xen stack for bringing up the first CPU. */
 static unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:59:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:59:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525796.817272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZt-0002mY-Dz; Tue, 25 Apr 2023 07:59:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525796.817272; Tue, 25 Apr 2023 07:59:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZt-0002lz-6o; Tue, 25 Apr 2023 07:59:09 +0000
Received: by outflank-mailman (input) for mailman id 525796;
 Tue, 25 Apr 2023 07:59:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYo-0005uB-CH
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:58:02 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id e5dcbc30-e33e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 09:58:00 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D8B0D4B3;
 Tue, 25 Apr 2023 00:58:43 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id CE0DB3F587;
 Tue, 25 Apr 2023 00:57:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5dcbc30-e33e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v4 15/17] xen/arm: Set correct per-cpu cpu_core_mask
Date: Tue, 25 Apr 2023 15:56:53 +0800
Message-Id: <20230425075655.4037980-16-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In the common sysctl command XEN_SYSCTL_physinfo, cores_per_socket is
calculated from the cpu_core_mask of CPU0. Currently on Arm this is a
fixed value of 1 (as can be checked via xl info), which is incorrect.
This is because during the Arm CPU online process,
set_cpu_sibling_map() only sets each CPU's per-cpu cpu_core_mask for
the CPU itself.

cores_per_socket refers to the number of cores that belong to the same
socket (NUMA node). Therefore, this commit introduces a helper function,
numa_set_cpu_core_mask(cpu), which sets the per-cpu cpu_core_mask to
the CPUs in the same NUMA node as cpu. Calling this function at boot
time ensures a correct cpu_core_mask, so that XEN_SYSCTL_physinfo
returns the correct cores_per_socket.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. New patch
---
 xen/arch/arm/include/asm/numa.h |  7 +++++++
 xen/arch/arm/numa.c             | 11 +++++++++++
 xen/arch/arm/setup.c            |  5 +++++
 3 files changed, 23 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 71b95a9a62..d4c89909d0 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -46,6 +46,7 @@ extern void numa_set_distance(nodeid_t from, nodeid_t to,
 extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
 extern void numa_init(void);
+extern void numa_set_cpu_core_mask(int cpu);
 
 /*
  * Device tree NUMA doesn't have architectural node id.
@@ -62,6 +63,12 @@ static inline unsigned int numa_node_to_arch_nid(nodeid_t n)
 #define cpu_to_node(cpu) 0
 #define node_to_cpumask(node)   (cpu_online_map)
 
+static inline void numa_set_cpu_core_mask(int cpu)
+{
+    cpumask_or(per_cpu(cpu_core_mask, cpu),
+               per_cpu(cpu_core_mask, cpu), &cpu_possible_map);
+}
+
 /*
  * TODO: make first_valid_mfn static when NUMA is supported on Arm, this
  * is required because the dummy helpers are using it.
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index f4635e0277..74e4dc2c67 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -53,6 +53,17 @@ int __init arch_numa_setup(const char *opt)
     return -EINVAL;
 }
 
+void numa_set_cpu_core_mask(int cpu)
+{
+    nodeid_t node = cpu_to_node[cpu];
+
+    if ( node == NUMA_NO_NODE )
+        node = 0;
+
+    cpumask_or(per_cpu(cpu_core_mask, cpu),
+               per_cpu(cpu_core_mask, cpu), &node_to_cpumask(node));
+}
+
 void __init numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance)
 {
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index f05e233f3a..7cef913b7c 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1226,6 +1226,11 @@ void __init start_xen(unsigned long boot_phys_offset,
     }
 
     printk("Brought up %ld CPUs\n", (long)num_online_cpus());
+
+    /* Set per-cpu cpu_core_mask to cpus that belong to the same NUMA node. */
+    for_each_online_cpu ( i )
+        numa_set_cpu_core_mask(i);
+
     /* TODO: smp_cpus_done(); */
 
     /* This should be done in a vpmu driver but we do not have one yet. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:59:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:59:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525798.817281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZu-0002zq-Bx; Tue, 25 Apr 2023 07:59:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525798.817281; Tue, 25 Apr 2023 07:59:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZu-0002xu-2V; Tue, 25 Apr 2023 07:59:10 +0000
Received: by outflank-mailman (input) for mailman id 525798;
 Tue, 25 Apr 2023 07:59:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYe-0005uB-Si
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:52 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id e0211fc5-e33e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 09:57:50 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 23EC4FEC;
 Tue, 25 Apr 2023 00:58:34 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id CE20E3F587;
 Tue, 25 Apr 2023 00:57:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0211fc5-e33e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 12/17] xen/arm: enable device tree based NUMA in system init
Date: Tue, 25 Apr 2023 15:56:50 +0800
Message-Id: <20230425075655.4037980-13-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

With this patch, we can start to create a NUMA system that is
based on the device tree.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. replace ~0 by INVALID_PADDR.
2. only print error messages for invalid dtb data.
3. remove unnecessary return.
4. remove the parameter of numa_init.
---
 xen/arch/arm/include/asm/numa.h |  5 +++
 xen/arch/arm/numa.c             | 57 +++++++++++++++++++++++++++++++++
 xen/arch/arm/setup.c            |  7 ++++
 3 files changed, 69 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 15308f5a36..55ac4665db 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -45,6 +45,7 @@ extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
 extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
+extern void numa_init(void);
 
 #else
 
@@ -90,6 +91,10 @@ static inline void numa_detect_cpu_node(unsigned int cpu)
 {
 }
 
+static inline void numa_init(void)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index b953c2574a..f4635e0277 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -18,7 +18,11 @@
  *
  */
 #include <xen/init.h>
+#include <xen/device_tree.h>
+#include <xen/nodemask.h>
 #include <xen/numa.h>
+#include <xen/pfn.h>
+#include <xen/acpi.h>
 
 enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
 
@@ -108,6 +112,59 @@ unsigned char __node_distance(nodeid_t from, nodeid_t to)
     return node_distance_map[from][to];
 }
 
+void __init numa_init(void)
+{
+    unsigned int idx;
+    paddr_t ram_start = INVALID_PADDR;
+    paddr_t ram_size = 0;
+    paddr_t ram_end = 0;
+
+    /* NUMA has been turned off through Xen parameters */
+    if ( numa_off )
+        goto mem_init;
+
+    /* Initialize NUMA from device tree when system is not ACPI booted */
+    if ( acpi_disabled )
+    {
+        int ret = numa_device_tree_init(device_tree_flattened);
+        if ( ret )
+        {
+            numa_off = true;
+            if ( ret == -EINVAL )
+                printk(XENLOG_WARNING
+                       "Init NUMA from device tree failed, ret=%d\n", ret);
+        }
+    }
+    else
+    {
+        /* We don't support NUMA for ACPI boot currently */
+        printk(XENLOG_WARNING
+               "ACPI NUMA has not been supported yet, NUMA off!\n");
+        numa_off = true;
+    }
+
+mem_init:
+    /*
+     * Find the minimum and maximum addresses of RAM; NUMA will
+     * build a memory-to-node mapping table for the whole range.
+     */
+    ram_start = bootinfo.mem.bank[0].start;
+    ram_size  = bootinfo.mem.bank[0].size;
+    ram_end   = ram_start + ram_size;
+    for ( idx = 1 ; idx < bootinfo.mem.nr_banks; idx++ )
+    {
+        paddr_t bank_start = bootinfo.mem.bank[idx].start;
+        paddr_t bank_size = bootinfo.mem.bank[idx].size;
+        paddr_t bank_end = bank_start + bank_size;
+
+        ram_size  = ram_size + bank_size;
+        ram_start = min(ram_start, bank_start);
+        ram_end   = max(ram_end, bank_end);
+    }
+
+    numa_initmem_init(PFN_UP(ram_start), PFN_DOWN(ram_end));
+}
+
 int __init arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
 {
     if ( idx >= bootinfo.mem.nr_banks )
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 09e18d32df..f05e233f3a 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1121,6 +1121,13 @@ void __init start_xen(unsigned long boot_phys_offset,
     /* Parse the ACPI tables for possible boot-time configuration */
     acpi_boot_table_init();
 
+    /*
+     * Try to initialize the NUMA system. If this fails, the system
+     * will fall back to a uniform system, which means the system has
+     * only 1 NUMA node.
+     */
+    numa_init();
+
     end_boot_allocator();
 
     /*
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:59:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:59:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525799.817287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZu-00037K-Sc; Tue, 25 Apr 2023 07:59:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525799.817287; Tue, 25 Apr 2023 07:59:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZu-00035s-Hg; Tue, 25 Apr 2023 07:59:10 +0000
Received: by outflank-mailman (input) for mailman id 525799;
 Tue, 25 Apr 2023 07:59:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYX-0005uB-NO
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:57:45 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id dc0f7b28-e33e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 09:57:44 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 779504B3;
 Tue, 25 Apr 2023 00:58:27 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 2D11D3F587;
 Tue, 25 Apr 2023 00:57:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc0f7b28-e33e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 10/17] xen/arm: unified entry to parse all NUMA data from device tree
Date: Tue, 25 Apr 2023 15:56:48 +0800
Message-Id: <20230425075655.4037980-11-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

In this function, we scan the whole device tree to parse the CPU node
ids, memory node ids and the distance map. Although early_scan_node
already invokes a handler to process memory nodes, parsing the memory
node id in that handler would mean embedding the NUMA parsing code
there, while we would still need to scan the whole device tree for the
CPU NUMA ids and the distance map. So we include the memory NUMA id
parsing in this function too. Another benefit is that we get a single
entry point for parsing the device tree NUMA data.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Fix typos in commit message.
2. Fix code style and align parameters.
3. Use strncmp to replace memcmp.
---
 xen/arch/arm/include/asm/numa.h |  1 +
 xen/arch/arm/numa_device_tree.c | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 2987158d16..15308f5a36 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -44,6 +44,7 @@ extern bool numa_disabled(void);
 extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
 extern void numa_detect_cpu_node(unsigned int cpu);
+extern int numa_device_tree_init(const void *fdt);
 
 #else
 
diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
index 9d9d09273e..bf5f112b92 100644
--- a/xen/arch/arm/numa_device_tree.c
+++ b/xen/arch/arm/numa_device_tree.c
@@ -259,3 +259,33 @@ invalid_data:
     numa_fw_bad();
     return -EINVAL;
 }
+
+static int __init fdt_scan_numa_nodes(const void *fdt, int node,
+                                      const char *uname, int depth,
+                                      unsigned int address_cells,
+                                      unsigned int size_cells, void *data)
+{
+    int len, ret = 0;
+    const void *prop;
+
+    prop = fdt_getprop(fdt, node, "device_type", &len);
+    if ( prop )
+    {
+        if ( strncmp(prop, "cpu", len) == 0 )
+            ret = fdt_parse_numa_cpu_node(fdt, node);
+        else if ( strncmp(prop, "memory", len) == 0 )
+            ret = fdt_parse_numa_memory_node(fdt, node, uname,
+                                address_cells, size_cells);
+    }
+    else if ( fdt_node_check_compatible(fdt, node,
+                                        "numa-distance-map-v1") == 0 )
+        ret = fdt_parse_numa_distance_map_v1(fdt, node);
+
+    return ret;
+}
+
+/* Initialize NUMA from device tree */
+int __init numa_device_tree_init(const void *fdt)
+{
+    return device_tree_for_each_node(fdt, 0, fdt_scan_numa_nodes, NULL);
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 07:59:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 07:59:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525801.817290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZv-0003Cx-4H; Tue, 25 Apr 2023 07:59:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525801.817290; Tue, 25 Apr 2023 07:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDZu-0003BT-W2; Tue, 25 Apr 2023 07:59:10 +0000
Received: by outflank-mailman (input) for mailman id 525801;
 Tue, 25 Apr 2023 07:59:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prDYr-0005uB-FT
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 07:58:05 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id e7d41e6c-e33e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 09:58:03 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 31B9ED75;
 Tue, 25 Apr 2023 00:58:47 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id DBC503F587;
 Tue, 25 Apr 2023 00:58:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7d41e6c-e33e-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v4 16/17] xen/arm: Provide Kconfig options for Arm to enable NUMA
Date: Tue, 25 Apr 2023 15:56:54 +0800
Message-Id: <20230425075655.4037980-17-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230425075655.4037980-1-Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Arm platforms support both ACPI and device tree. We don't want users
to have to select device tree NUMA or ACPI NUMA manually; they should
just be able to enable NUMA for Arm, with device tree NUMA or ACPI
NUMA selected automatically depending on whether the device tree or
ACPI feature is enabled. This way, the two kinds of NUMA support code
can coexist in one Xen binary, and Xen can check the feature flags to
decide whether to use the device tree or ACPI as the NUMA firmware
source.

So in this patch, we introduce a generic option, CONFIG_ARM_NUMA, for
users to enable NUMA for Arm, and a CONFIG_DEVICE_TREE_NUMA option
for ARM_NUMA to select when the HAS_DEVICE_TREE option is enabled.
Once ACPI NUMA for Arm is supported, ACPI_NUMA can be selected here
too.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Remove the condition of selecting DEVICE_TREE_NUMA.
---
 xen/arch/arm/Kconfig | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..e751ad50d1 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -39,6 +39,17 @@ config ACPI
 config ARM_EFI
 	bool
 
+config ARM_NUMA
+	bool "Arm NUMA (Non-Uniform Memory Access) Support (UNSUPPORTED)" if UNSUPPORTED
+	depends on HAS_DEVICE_TREE
+	select DEVICE_TREE_NUMA
+	help
+	  Enable Non-Uniform Memory Access (NUMA) for Arm architectures
+
+config DEVICE_TREE_NUMA
+	bool
+	select NUMA
+
 config GICV3
 	bool "GICv3 driver"
 	depends on !NEW_VGIC
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 08:05:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 08:05:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525832.817318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDg3-0008Fd-Eg; Tue, 25 Apr 2023 08:05:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525832.817318; Tue, 25 Apr 2023 08:05:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDg3-0008FW-Bk; Tue, 25 Apr 2023 08:05:31 +0000
Received: by outflank-mailman (input) for mailman id 525832;
 Tue, 25 Apr 2023 08:05:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+We=AQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prDg3-0008FQ-01
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 08:05:31 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20600.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f183195a-e33f-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 10:05:29 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7958.eurprd04.prod.outlook.com (2603:10a6:20b:2a5::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34; Tue, 25 Apr
 2023 08:05:27 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 08:05:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f183195a-e33f-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oJFmSlHR82JWahAdxwLlwxuXcS0pec7pEY0TMb+mGeePvU8KZnlYDEgPrfd0pH5/xO8fPnApuKqse/T1BOef+k+q2NQAAIQVsCCkDlpriHHSyKU185hqPm3NBGj6PIRQaGWytki43AC6OEfk4iI5EW1OjznqzkKR6+sgekuYgxAInIKsbWZ+9HGfrw9BLn6nc5y4K0j+8NXNTKmAUWv/aA92woaWPkUz+nvHq6XmSRkW+tLeVsdEoj57L4864XtjtCX2+zpqC3rKdiYUYyNWwQKn0wKFVWFeTBHAg6rCk+KPQ7CaHUsVvCcvcbeNC5pvrvT15qHpvl3WTWYVTbslBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=v7JQzoJuQeZ8Mtii+QMOp562cagWTYXdbaqZmDwPa/4=;
 b=icGXMOy9uRvDBiXlUFyw9dlHcYTTx8uUm8+MxtlBQRr/4x1tete2PUgwYcXJe8QAoe/4eJTBYn+PepZU1nyPHghYVw7qUtlCsidsyo542ONlikKUeyaX2IpaRNKhPKWIrdJ8yM0zFz4coVFfU3Tw0lVqZCF/UVnxIn+LiKN6w7OldloHs42VbXBXmyVC3Ye3nFyeDA0T+YmAOnwyGiAZddgSRPoh3FoYxwi4hCI3FW3X69x2Ri3+zhuNDFYiwtZXbsUg/HSEqlc80ffQHQ33Gkpbtl0lGlEq/Sh0hNHY8ERQxHieRSlZrFnJMovWGdddV9j6kIhc6sCCwkHsHTXL/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v7JQzoJuQeZ8Mtii+QMOp562cagWTYXdbaqZmDwPa/4=;
 b=1ytRi7N7E7RajOJHh9/HyNekR7O1EBMzyncuINJjDAZlyfBhk9NyeNvjXeLwQFlZWQZ2mkWMpeXFN2gGhC/Au+cQwTsrrDXWQg44wY5s2yB2RgrgmWx+FwsCPSG+Zg/AZYGFahy3PvIaThwh+fjWJB4thZbrKCZPELmJwsdDFVQqnlRKA3QjtoDa6zE99u6pVh7/I2ciZsAB+KAScyNS/EwZWq7cbiSAzkL0qp377n4WM8ljzfqto1GgS3IK/kfrTzwYUxkFYflYlj+/69lc/qAK5n4sniEa4ZGjNokDtNJ/zbShrdaMk3BFQXiMVN1Y3Jipmrlp5x77Ne8w/kAuOg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e93bd35b-42fb-3979-7e08-0d7f779e9ed1@suse.com>
Date: Tue, 25 Apr 2023 10:05:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] ns16550: enable memory decoding on MMIO-based PCI console
 card
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230424233930.129621-1-marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230424233930.129621-1-marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0079.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7958:EE_
X-MS-Office365-Filtering-Correlation-Id: ef92072c-d319-4917-0d51-08db4563d47b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8rLMtXIsc8KlEwAM1h9BYj+kaNJ5iGK39c1MkC4RP6J9Ave671TBe2vAnPdMkduLvBVu2ENIm0N2UlmDp+Hid0dSHrEo/W2OB524mZlaYm/CSsEAQKeYzN1o8EkY+nBQHMYhMGUCVNd9Mx1KDpfUxzgh0triVxIhK6iUF2lfM1TFs/zGZPAB6suN0qzsfc2YAJScNH1Eqny6uBHZZKSMwzCaQ6ioLVRW5PfujCZ3oy2SuondSgciHiBGE8z3O5lTqia2yQ2YelbvTCD+mn06yknx2gMid2Hcpu20j40atlkWtT58PyIUFjlhPtfF6Ki60MtuNv/GxRmBCeGkzO6HnmIt7/x+QZT8KRdNU+j/tE89r2m2dWoP0vb2fqWzryfUYNxpd+5LOBOn0aoAJ7XSIOxTt+Xt+5iWMPpEi8g22gI66yyyw6DjJk3umtSCKBVEFjB3M6MEMAViTL9ie+23/sBADgUUF4+TMx1T9uRx3eG749wDn4ac/qjalVJyWGCXfV9tsPNWxCPtV55lPbMs8A+hqUOIlJLeR5GTqA6mLrd1j3u644O9OU63CVwoYzw/irdHDRTOaSWluxhniban9LU5HoB7P5xQFXVI/42zz8iTvBhkY2yvPPSDGRtZPRW6D51whJmibMM6ywEI4HekKQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(366004)(396003)(136003)(346002)(39860400002)(451199021)(66899021)(54906003)(478600001)(5660300002)(8936002)(38100700002)(8676002)(36756003)(2906002)(86362001)(31696002)(66556008)(66946007)(66476007)(4326008)(6916009)(316002)(41300700001)(186003)(2616005)(31686004)(53546011)(6512007)(6506007)(26005)(6486002)(66574015)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?V0dpS3lYaWFsZEVsQXFmUFBZNGZsV0FQOTRmaFl4S2ZESHpSWnZuS1F0bFh4?=
 =?utf-8?B?d2FjYlNQMy9RejJJVDFnT2hXSmZ6ZC9zVDk1aTROSXhEcGJpdDNQd242cVRr?=
 =?utf-8?B?eWZNeWV5NnIyS0swYVZnRkh0Q0tQS3NCeFJnVWorT1dUUnlvOHdBRlNPWEV3?=
 =?utf-8?B?Wk1WWjhjZWFGakNJVHJBS1o4RStRbjJjVzAvYnI5Yk9qZXM5Z2FvTmlDcVQv?=
 =?utf-8?B?SU45MEMwcnJ5a1lCYmdtWWg4OVhmcWpkUVFxaFhMcUFDL01mV3B4ZEFSUVZp?=
 =?utf-8?B?NjdEcmVrQ0YyWEczNnVaK3RaODdUeGw5dVN3cHZpZ0dpWGd4ZFdkYmE5ZDky?=
 =?utf-8?B?aWg3MXI1L1kvSXpqOUhaU3BHbGtNeHI2SFk3S3Z1cDF4Rm9MNzVkMnYyYnhB?=
 =?utf-8?B?UWQ0anhyOWpWYTVXT0lUTmJoMDVBNE9MVXJaalo0Y09FNTBJQjQydk5kaG9L?=
 =?utf-8?B?MGVyV0J0OEUrcFVGZ3JPcnhXSFNkQ3liMjZ1eGZhalYrbFp3VlRObUkrNmdm?=
 =?utf-8?B?MnlXMEsyN3k3dm11WXEzVnRPeXpJZThNWWJqdW5JQTRKVHhSNHMxMEY0TWdN?=
 =?utf-8?B?VFdHNlE3NnFaNm50eTA4N09hRlpWenVFQ3FKZlVMR0JwdDZZSXhWdXFidW14?=
 =?utf-8?B?YnpucDI5VUJhZ3FDWkx2NGFPYmc5NEx5dmFwV3pPUmRhbS9VblNhV2V2Wlo1?=
 =?utf-8?B?Z0Z0K0hyZFZJb1BhVmJJaktuOWswaHZxcVpVNXpoY3ZhbzBGVEJnY2FzRVA2?=
 =?utf-8?B?UDF5N1NFVlZGdzdGcmRxT2dEdnEwYm5Hc3JSWUxxdU1PSHd5K1AwbXVoR01x?=
 =?utf-8?B?QlFkNjBwb0VVNDJoUmJ5bzhXRloxRGNWWnJ0MVF2TDRINExzVmRSNXFDRnZl?=
 =?utf-8?B?bXc5YkZoc1RGaHhVN2JZSlFLc1lMYnpTU3NUMzBzUW1JMVdXV08yQUl4cGtp?=
 =?utf-8?B?RW5adkdzMGg5b0ZvbFIwOEN1ZzVuVUZYOVpCTTFyU3QrR25kYW8xVGNrM0hH?=
 =?utf-8?B?bi94ZjVWV1RoWWF4R3Z6NlJ3em16K3NKMkpaU0ZXeGRZajJucWtzK2ZmYnZO?=
 =?utf-8?B?WmQ4REdMMmhNNktRWHoyRHlLSTBBRlZKamcxMHJ1OVF4SkpyK053a0NPcUJ4?=
 =?utf-8?B?dTlhZXJKZm9WRUhZNEU2VjRwQUNVSFpVRDduY1JPamJ5dkphNEZrT1ZMSGxh?=
 =?utf-8?B?djJxb2xJWEplUnRGeFhRUjlqTHNUdkVWTyt1R0VzVjFiOHNBOE5DejJkejFz?=
 =?utf-8?B?dnY5MXBydGZiSDROZkQ1SmxiWG1oZWRTcUNPdlhnTEJCRGg5UnlmajdRaVl2?=
 =?utf-8?B?NzY4Y09tMklXbjh4ZTFMSnFpUGcxeG5iRnRZd3JZaXU2TFpqaDZDQTJ6Yngr?=
 =?utf-8?B?RERuNnBaQ3ZGS2ZaYUNsUmpQTXNJRjg4WEhxYnR1TFRjS21KU282VVhublJv?=
 =?utf-8?B?M0RXZWNuTEswanlsWStKamE4Qms4YjgzejBJaHVyVmVkOHZxemVXYWQxSkQ1?=
 =?utf-8?B?TEJwU055RUVoMEludXFXRG1pSXFpWU8xcTd4cXFyM3IyOE5URCtDK3JXY2dQ?=
 =?utf-8?B?M0FBNjVWTGN3ek1CQ09Kd292OTZsejRDMG9yU0ttZ1NpUFFha1lEYzBuQTlJ?=
 =?utf-8?B?SXJmNkpkTGM5eE80c1Zka0xyRXBFVnU0RE42Z3YvWk9yMElKc1dNOEU0UFQr?=
 =?utf-8?B?REdSZGNmZ1ZPNy9uVWNEaGpKNVNuaklmQ1FqVFBIMUEwcTVSN1VkMDc0VUFa?=
 =?utf-8?B?eUthSCs3c0JwdXpTSzJxVW1xMlBwZnJkSEZpb3ZCKy9DN0JJODRuQmJ2cjVD?=
 =?utf-8?B?QWxpdzNRczJqNUpzbDc5ajhCMWlNTWZXNVM3c2tRYWlCTzJTVHpkMFk3RDZ4?=
 =?utf-8?B?SkZ0cmFwS0h1aXR1dFhteDREQzd4SWxBQnpPYjNkcDFOUDZGLzlCZjRSNjMw?=
 =?utf-8?B?YzgvaXVYYk1zektwdlZlMnljQjdEM3NOTXY4SkUvcG9MYm1CUlZGOThOUmhG?=
 =?utf-8?B?ckpubEtBbXJSWTdDQXFMcjRKMHltcWVISlppNklOZ3MwZU1temllTHlEbXlm?=
 =?utf-8?B?TU9oRjhMTHhKRUlnc3ZIcWZ3WTRRMVpEN2twN2k0TzZaeDg3WHFVWWlXWkZK?=
 =?utf-8?Q?4KyZe+rPHR6BJ51aVx+5wlwC8?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ef92072c-d319-4917-0d51-08db4563d47b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 08:05:27.3597
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zZ2Byq7DcmTLnkXCJBAnYB63zpeNsfeepGcVRmFRt5romYZGRD0ns0vW+nIbfTfFgsiRqw73LPW9J10JmM4osg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7958

On 25.04.2023 01:39, Marek Marczykowski-Górecki wrote:
> pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
> devices, do similar with PCI_COMMAND_MEMORY for MMIO-based UART devices.

While I agree that something like this is needed (and in fact I have been
wondering more than once why this wasn't there), what you say above isn't
really correct imo: You do not really make things similar to port-based
handling. For one you don't respect uart->pb_bdf_enable. And then you also
don't write the BAR. When you use the BDF form of com<N>=, I don't see how
else the BAR could be written to the value that you (necessarily) have to
also specify on the command line. As said on Matrix, as far as I can tell
using the "pci" sub-option together with the BDF one isn't intended (and
would probably better be rejected). Which in turn means that
for the "pci" sub-option alone to also have the effect of - if necessary -
enabling I/O or memory decoding, a further adjustment would be needed
(because merely keying this to uart->ps_bdf_enable then isn't enough). I
guess like e.g. ns16550_init_postirq() you'd want to also check uart->bar.

That said, I don't mean to mandate that you deal with the bridge part of
the setup in particular, not least because I consider what we have there
bogus. But you should at least mention in the description what you leave
untouched (and hence dissimilar to port-based handling).

As to rejecting invalid combinations of sub-options: See e.g. the dev_set
variable in parse_namevalue_pairs(). That's a wee attempt to go in the
intended direction.

Jan

> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -272,9 +272,17 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
>  static void pci_serial_early_init(struct ns16550 *uart)
>  {
>  #ifdef NS16550_PCI
> -    if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
> +    if ( !uart->ps_bdf_enable )
>          return;
>  
> +    if ( uart->io_base >= 0x10000 )
> +    {
> +        pci_conf_write16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
> +                                  uart->ps_bdf[2]),
> +                         PCI_COMMAND, PCI_COMMAND_MEMORY);
> +        return;
> +    }
> +
>      if ( uart->pb_bdf_enable )
>          pci_conf_write16(PCI_SBDF(0, uart->pb_bdf[0], uart->pb_bdf[1],
>                                    uart->pb_bdf[2]),



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 08:18:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 08:18:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525845.817327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDrw-0001NG-HN; Tue, 25 Apr 2023 08:17:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525845.817327; Tue, 25 Apr 2023 08:17:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDrw-0001N9-EM; Tue, 25 Apr 2023 08:17:48 +0000
Received: by outflank-mailman (input) for mailman id 525845;
 Tue, 25 Apr 2023 08:17:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prDrv-0001Mz-SD; Tue, 25 Apr 2023 08:17:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prDrv-0005s8-Mq; Tue, 25 Apr 2023 08:17:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prDrv-0001xS-A9; Tue, 25 Apr 2023 08:17:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prDrv-0001CC-9S; Tue, 25 Apr 2023 08:17:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2Cs7Vul9+aQHjeMrq4MqUnUYqtW4K6JgECnZNIU2dlw=; b=P49q0Ol20Moe4eabiDABYdojZ8
	CCjhNVPMURYEp2aSWObjTXa14ZZ5OxOg0ZziXgsJbbXtrYrFZqkMeZXvfGHiglM2bzsmAZnUHDfBH
	3HOKzoWcq9TwAPtq36X+S7TA5CQq8yBshEbH0AKWuSQ+kgD9dmu19ueYTT28F17dQXec=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180397-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 180397: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.16-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:heisenbug
    xen-4.16-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-start/debianhvm.repeat:fail:heisenbug
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7f2df63f723478dee629b5884cdee9914f88d98c
X-Osstest-Versions-That:
    xen=31627a059c2e186f4ad12d171d964b09abe8a4a9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Apr 2023 08:17:47 +0000

flight 180397 xen-4.16-testing real [real]
flight 180404 xen-4.16-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180397/
http://logs.test-lab.xenproject.org/osstest/logs/180404/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail pass in 180404-retest
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180404-retest
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 20 guest-start/debianhvm.repeat fail pass in 180404-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180219
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180219
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180219
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180219
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180219
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180219
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180219
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180219
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180219
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180219
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180219
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180219

version targeted for testing:
 xen                  7f2df63f723478dee629b5884cdee9914f88d98c
baseline version:
 xen                  31627a059c2e186f4ad12d171d964b09abe8a4a9

Last test of basis   180219  2023-04-12 08:06:49 Z   13 days
Testing same since   180397  2023-04-24 11:36:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   31627a059c..7f2df63f72  7f2df63f723478dee629b5884cdee9914f88d98c -> stable-4.16


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 08:20:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 08:20:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525866.817338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDuJ-0002r1-3A; Tue, 25 Apr 2023 08:20:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525866.817338; Tue, 25 Apr 2023 08:20:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prDuJ-0002qu-0N; Tue, 25 Apr 2023 08:20:15 +0000
Received: by outflank-mailman (input) for mailman id 525866;
 Tue, 25 Apr 2023 08:20:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+We=AQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prDuH-0002q2-L2
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 08:20:13 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on060b.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::60b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ffeab647-e341-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 10:20:12 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8392.eurprd04.prod.outlook.com (2603:10a6:102:1c5::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34; Tue, 25 Apr
 2023 08:20:09 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 08:20:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffeab647-e341-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OzwqqpXSn2uCxHe9I2vxgBwXwST7+fqG3K6uBNIMLm60UW/gs9RzP9z/TNVGgp4MnzyNKDSVwpICWK9b1F/piPtK1lWw7mB0hnFTDhvqr0YSswZx7IOdIESvKw9n2LLb5d3Xua6MLgXPxafD4ZB3Nkf8Suy0rRBNlfllU2UwmGaktyz0TFQxK5W/GjaKmWALTJq1afCZGfBhU/G/KzR4kGx55aAJdHAkXTf8xkIw/xWM6TYFz64G8kZpkuMdQn5NVf1gxuZb8znKh7O948ynGJykkvIrbeQEPDir2CwSiI1rsUC8oQQqHTn0uUqYpYgPDkmMiVx6aWBHVaZpCOPbMg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7D+O7eAjRI6QP88+awE5NKGVnJ/s/oPhYACqqFuXpC4=;
 b=i3GLZu644N4gOMiteDtyw0gqnV3y8xX41xFM83ps+36cbM41JsWS7r3TkdaOTu8SIEefe66HPXCKAd+jNtAx7bLuOBEelwK+btHXO14+GvDM4IDmWH73J++kNlP+O0z+xtH8iiYREdxuHxS4DjAwZLA8ktAXkwEyARojmtWLUMOk7R3Am2eUi13j6rS5UPt4Crb3E+QvUKBTXTSpcFeVRDP6Yi+wmVzTJ31/cSkvk56yIPCo9UKc1e2prgEuFdNywAs2Dd97W+nxDCaEBLXAaXog96RDw9S1F+9tRSujZtnBujiy8D2UdoOJfgDedQTlm9zMGkaPqtBocrz5/mJKeQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7D+O7eAjRI6QP88+awE5NKGVnJ/s/oPhYACqqFuXpC4=;
 b=zBYCcancDUAIzIQ24cKk0o4eLuAWRxOdqTP44VHhqEstE/iwpshDoZSwsGHmzMVXqbEdg6CJDFBfG+jWE4rHQfmsCqZYLB3Jb1Go6moSC+DTXLODb5ua96fyGmgpsNtBJW4pNmerg5smv2plZ04L4ll9CILd4aharu1dD6G5IkjMp668TTehNAvGgAgrjgQ3min8OWsZ+Uu/WErc/hbXhrO0T6HxRY/oaTXbgqpcBAry4xDZu18Wx99oDb8jzi9UsFt9s54fsvDinoLlECOmIcfnF7OBZ86MqNdzXJXcs/KsTyF6TwhmdrXzDNoCLyJCvODhPIwdwBYj8pNvKt7v4w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8ce925a1-dadd-79c3-3b0f-c3ab45b1a669@suse.com>
Date: Tue, 25 Apr 2023 10:20:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v4 07/17] xen/arm: introduce a helper to parse device tree
 processor node
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-8-Henry.Wang@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230425075655.4037980-8-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0079.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 25.04.2023 09:56, Henry Wang wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> Processor NUMA ID information is stored in the device tree's processor
> node as "numa-node-id". We need a new helper to parse this ID from the
> processor node. Once we get this ID from the processor node, its
> validity still needs to be checked. If we get an invalid NUMA ID
> from any processor node, the device tree will be marked as having
> invalid NUMA information.
> 
> Since new helpers need to know the NUMA status, move the
> enum dt_numa_status to the Arm NUMA header.
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
> ---
> v3 -> v4:
> 1. No change.
> v2 -> v3:
> 1. Move the enum dt_numa_status to the Arm NUMA header.
> 2. Update the year in copyright to 2023.
> v1 -> v2:
> 1. Move numa_disabled from fdt_numa_processor_affinity_init
>    to fdt_parse_numa_cpu_node.
> 2. Move invalid NUMA id check to fdt_parse_numa_cpu_node.
> 3. Return ENODATA for normal dtb without NUMA info.
> 4. Use NUMA status helpers instead of SRAT functions.
> ---
>  xen/arch/arm/Makefile           |  1 +
>  xen/arch/arm/include/asm/numa.h |  8 +++++
>  xen/arch/arm/numa.c             |  8 +----
>  xen/arch/arm/numa_device_tree.c | 64 +++++++++++++++++++++++++++++++++
>  4 files changed, 74 insertions(+), 7 deletions(-)
>  create mode 100644 xen/arch/arm/numa_device_tree.c

As asked for in various other contexts, may I please ask that new files
prefer dashes over underscores in their names? Additionally short but
still descriptive names are imo to be generally preferred; in the case
here how about numa-dt.c?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 08:26:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 08:26:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525875.817368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prE0U-0003iv-0T; Tue, 25 Apr 2023 08:26:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525875.817368; Tue, 25 Apr 2023 08:26:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prE0T-0003io-Tx; Tue, 25 Apr 2023 08:26:37 +0000
Received: by outflank-mailman (input) for mailman id 525875;
 Tue, 25 Apr 2023 08:26:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prE0S-0003ii-2i
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 08:26:36 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on20608.outbound.protection.outlook.com
 [2a01:111:f400:fe13::608])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e28411e7-e342-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 10:26:33 +0200 (CEST)
Received: from DU2PR04CA0178.eurprd04.prod.outlook.com (2603:10a6:10:2b0::33)
 by GV2PR08MB9278.eurprd08.prod.outlook.com (2603:10a6:150:d9::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34; Tue, 25 Apr
 2023 08:26:30 +0000
Received: from DBAEUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b0:cafe::60) by DU2PR04CA0178.outlook.office365.com
 (2603:10a6:10:2b0::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34 via Frontend
 Transport; Tue, 25 Apr 2023 08:26:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT054.mail.protection.outlook.com (100.127.142.218) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.20 via Frontend Transport; Tue, 25 Apr 2023 08:26:30 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Tue, 25 Apr 2023 08:26:30 +0000
Received: from 5dd4d10e891f.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 477280AE-0E8D-4396-955F-1D0F3C144578.1; 
 Tue, 25 Apr 2023 08:26:24 +0000
Received: from EUR03-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5dd4d10e891f.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 25 Apr 2023 08:26:24 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS4PR08MB7904.eurprd08.prod.outlook.com (2603:10a6:20b:51f::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 08:26:19 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 08:26:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 07/17] xen/arm: introduce a helper to parse device tree
 processor node
Thread-Topic: [PATCH v4 07/17] xen/arm: introduce a helper to parse device
 tree processor node
Thread-Index: AQHZd0ufawBJr94Nqk2Kxm6Mb2Kjtq87ruWAgAAAlXA=
Date: Tue, 25 Apr 2023 08:26:19 +0000
Message-ID:
 <AS8PR08MB7991CBEDDE9F0B0EC0F8B86092649@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-8-Henry.Wang@arm.com>
 <8ce925a1-dadd-79c3-3b0f-c3ab45b1a669@suse.com>
In-Reply-To: <8ce925a1-dadd-79c3-3b0f-c3ab45b1a669@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v4 07/17] xen/arm: introduce a helper to parse device
> tree processor node
> 
> >  xen/arch/arm/numa_device_tree.c | 64
> +++++++++++++++++++++++++++++++++
> >  4 files changed, 74 insertions(+), 7 deletions(-)
> >  create mode 100644 xen/arch/arm/numa_device_tree.c
> 
> As asked for in various other contexts, may I please ask that new files
> prefer dashes over underscores in their names? Additionally short but
> still descriptive names are imo to be generally preferred; in the case
> here how about numa-dt.c?

Sounds good to me. I will follow your suggestion if there will be no
explicit objection from other maintainers. Thanks for the suggestion
as always :)

Kind regards,
Henry

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 08:35:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 08:35:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525879.817377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prE8S-0005BF-Pi; Tue, 25 Apr 2023 08:34:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525879.817377; Tue, 25 Apr 2023 08:34:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prE8S-0005B8-NC; Tue, 25 Apr 2023 08:34:52 +0000
Received: by outflank-mailman (input) for mailman id 525879;
 Tue, 25 Apr 2023 08:34:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+We=AQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prE8R-0005B2-8q
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 08:34:51 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20611.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::611])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 090278a9-e344-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 10:34:47 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6975.eurprd04.prod.outlook.com (2603:10a6:803:138::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 08:34:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 08:34:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <e03bbb52-1a19-7d18-4abe-75bbef8a0aee@suse.com>
Date: Tue, 25 Apr 2023 10:34:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230425075655.4037980-10-Henry.Wang@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 25.04.2023 09:56, Henry Wang wrote:
> From: Wei Chen <wei.chen@arm.com>
> 
> A NUMA-aware device tree will provide a "distance-map" node to
> describe the distance between any two nodes. This patch introduces a
> new helper to parse this distance map.
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>

While trying to hunt down the caller(s) of numa_set_distance() in the
context of replying to patch 3, I came across this one:

> --- a/xen/arch/arm/numa_device_tree.c
> +++ b/xen/arch/arm/numa_device_tree.c
> @@ -151,3 +151,111 @@ invalid_data:
>      numa_fw_bad();
>      return -EINVAL;
>  }
> +
> +/* Parse NUMA distance map v1 */
> +static int __init fdt_parse_numa_distance_map_v1(const void *fdt, int node)
> +{
> +    const struct fdt_property *prop;
> +    const __be32 *matrix;
> +    unsigned int i, entry_count;
> +    int len;
> +
> +    printk(XENLOG_INFO "NUMA: parsing numa-distance-map\n");
> +
> +    prop = fdt_get_property(fdt, node, "distance-matrix", &len);
> +    if ( !prop )
> +    {
> +        printk(XENLOG_WARNING
> +               "NUMA: No distance-matrix property in distance-map\n");
> +        goto invalid_data;
> +    }
> +
> +    if ( len % sizeof(__be32) != 0 )
> +    {
> +        printk(XENLOG_WARNING
> +               "distance-matrix in node is not a multiple of u32\n");
> +        goto invalid_data;
> +    }
> +
> +    entry_count = len / sizeof(__be32);
> +    if ( entry_count == 0 )
> +    {
> +        printk(XENLOG_WARNING "NUMA: Invalid distance-matrix\n");
> +        goto invalid_data;
> +    }
> +
> +    matrix = (const __be32 *)prop->data;
> +    for ( i = 0; i + 2 < entry_count; i += 3 )
> +    {
> +        unsigned int from, to, distance, opposite;

With these ...

> +        from = dt_next_cell(1, &matrix);
> +        to = dt_next_cell(1, &matrix);
> +        distance = dt_next_cell(1, &matrix);
> +        if ( (from == to && distance != NUMA_LOCAL_DISTANCE) ||
> +             (from != to && distance <= NUMA_LOCAL_DISTANCE) )
> +        {
> +            printk(XENLOG_WARNING
> +                   "NUMA: Invalid distance: NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",

... you don't mean PRIu32 here and ...

> +                   from, to, distance);
> +            goto invalid_data;
> +        }
> +
> +        printk(XENLOG_INFO "NUMA: distance: NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",

... here and yet further down anymore. That'll at the same time shorten
all these lines quite a bit.

> +               from, to, distance);
> +
> +        /* Get opposite way distance */
> +        opposite = __node_distance(to, from);
> +        /* The default value in node_distance_map is NUMA_NO_DISTANCE */
> +        if ( opposite == NUMA_NO_DISTANCE )

And the matrix you're reading from can't hold NUMA_NO_DISTANCE entries?
I ask because you don't check this above; you only check against
NUMA_LOCAL_DISTANCE.

> +        {
> +            /* Bi-directions are not set, set both */
> +            numa_set_distance(from, to, distance);
> +            numa_set_distance(to, from, distance);
> +        }
> +        else
> +        {
> +            /*
> +             * Opposite way distance has been set to a different value.
> +             * It may be a firmware device tree bug?
> +             */
> +            if ( opposite != distance )
> +            {
> +                /*
> +                 * In device tree NUMA distance-matrix binding:
> +                 * https://www.kernel.org/doc/Documentation/devicetree/bindings/numa.txt
> +                 * There is a note that mentions:
> +                 * "Each entry represents distance from first node to
> +                 *  second node. The distances are equal in either
> +                 *  direction."
> +                 *
> +                 * That means device tree doesn't permit this case.
> +                 * But the ACPI spec specifically permits this
> +                 * case:
> +                 * "Except for the relative distance from a System Locality
> +                 *  to itself, each relative distance is stored twice in the
> +                 *  matrix. This provides the capability to describe the
> +                 *  scenario where the relative distances for the two
> +                 *  directions between System Localities is different."
> +                 *
> +                 * That means a real machine allows such NUMA configuration.
> +                 * So, place a WARNING here to notify system administrators:
> +                 * is this the special case where they hijack the device tree
> +                 * to support their rare machines?
> +                 */
> +                printk(XENLOG_WARNING
> +                       "Un-matched bi-direction! NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32", NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
> +                       from, to, distance, to, from, opposite);
> +            }
> +
> +            /* Opposite way distance has been set, just set this way */
> +            numa_set_distance(from, to, distance);

It took me a while to understand what the comment is meant to tell me,
because in this iteration the opposite entry wasn't set. May I
suggest making it more explicit that you refer to an earlier iteration,
e.g. by writing "... was set before, ..."?

> +        }
> +    }
> +
> +    return 0;
> +
> +invalid_data:

Nit: Style (labels to be indented by [at least] one blank).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 08:37:21 2023
Message-ID: <7d2c221b-745b-109e-af1f-2b78504b2e0e@suse.com>
Date: Tue, 25 Apr 2023 10:37:12 +0200
Subject: Re: [PATCH v4 03/17] xen/arm: implement node distance helpers for Arm
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-4-Henry.Wang@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230425075655.4037980-4-Henry.Wang@arm.com>

On 25.04.2023 09:56, Henry Wang wrote:
> --- a/xen/arch/arm/numa.c
> +++ b/xen/arch/arm/numa.c
> @@ -28,6 +28,12 @@ enum dt_numa_status {
>  
>  static enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
>  
> +static unsigned char __ro_after_init
> +node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
> +    [0 ... MAX_NUMNODES - 1] = { [0 ... MAX_NUMNODES - 1] = NUMA_NO_DISTANCE }
> +};
> +
> +

Nit: A stray 2nd blank line has appeared here.

> @@ -48,3 +54,52 @@ int __init arch_numa_setup(const char *opt)
>  {
>      return -EINVAL;
>  }
> +
> +void __init numa_set_distance(nodeid_t from, nodeid_t to,
> +                              unsigned int distance)
> +{
> +    if ( from >= ARRAY_SIZE(node_distance_map) ||
> +         to >= ARRAY_SIZE(node_distance_map[0]) )
> +    {
> +        printk(KERN_WARNING
> +               "NUMA: invalid nodes: from=%"PRIu8" to=%"PRIu8" MAX=%"PRIu8"\n",
> +               from, to, MAX_NUMNODES);
> +        return;
> +    }
> +
> +    /* NUMA defines NUMA_NO_DISTANCE as unreachable and 0-9 are undefined */
> +    if ( distance >= NUMA_NO_DISTANCE || distance <= NUMA_DISTANCE_UDF_MAX ||
> +         (from == to && distance != NUMA_LOCAL_DISTANCE) )
> +    {
> +        printk(KERN_WARNING
> +               "NUMA: invalid distance: from=%"PRIu8" to=%"PRIu8" distance=%"PRIu32"\n",
> +               from, to, distance);
> +        return;
> +    }

I appreciate the checking that node-local references are NUMA_LOCAL_DISTANCE,
but if they're wrongly passed into here, shouldn't the resulting array still
have NUMA_LOCAL_DISTANCE on its diagonal, at least as far as present nodes
go?
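(The suggestion above can be sketched as follows; this is an illustrative
standalone example, with MAX_NUMNODES shrunk purely for the sketch, not the
patch's actual initialisation.)

```c
/*
 * Illustrative sketch: initialise the diagonal of the distance map to
 * NUMA_LOCAL_DISTANCE up front, so a bogus value passed in for a
 * node-local reference can simply be rejected without ever disturbing
 * the diagonal.  Sizes and constants are assumptions for the example.
 */
#define MAX_NUMNODES         8
#define NUMA_LOCAL_DISTANCE  10
#define NUMA_NO_DISTANCE     0xFF

static unsigned char node_distance_map[MAX_NUMNODES][MAX_NUMNODES];

static void init_node_distance_map(void)
{
    unsigned int i, j;

    for ( i = 0; i < MAX_NUMNODES; i++ )
        for ( j = 0; j < MAX_NUMNODES; j++ )
            node_distance_map[i][j] = (i == j) ? NUMA_LOCAL_DISTANCE
                                               : NUMA_NO_DISTANCE;
}
```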

> +    node_distance_map[from][to] = distance;
> +}
> +
> +unsigned char __node_distance(nodeid_t from, nodeid_t to)
> +{
> +    if ( from == to )
> +        return NUMA_LOCAL_DISTANCE;
> +
> +    /*
> +     * When NUMA is off, any distance will be treated as unreachable, so
> +     * directly return NUMA_NO_DISTANCE from here as an optimization.
> +     */
> +    if ( numa_disabled() )
> +        return NUMA_NO_DISTANCE;
> +
> +    /*
> +     * Check whether the nodes are in the matrix range.
> +     * When any node is out of range, except from and to nodes are the
> +     * same, we treat them as unreachable.

I think this "except ..." part is slightly confusing, as it doesn't comment on
the subsequent code, but instead refers to the first check in the function.
If you want to keep it, may I suggest to add something like "(see above)"
before the comma?
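(In context, the reworded comment might read as in the standalone sketch
below; numa_off here stands in for numa_disabled(), and the sizes and
constants are assumptions for the example, not the patch's actual code.)

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_NUMNODES         8
#define NUMA_LOCAL_DISTANCE  10
#define NUMA_NO_DISTANCE     0xFF

typedef uint8_t nodeid_t;

static unsigned char node_distance_map[MAX_NUMNODES][MAX_NUMNODES];
static bool numa_off = true;   /* stand-in for numa_disabled() */

static unsigned char node_distance(nodeid_t from, nodeid_t to)
{
    if ( from == to )
        return NUMA_LOCAL_DISTANCE;

    /* When NUMA is off, any remote distance is treated as unreachable. */
    if ( numa_off )
        return NUMA_NO_DISTANCE;

    /*
     * Check whether the nodes are in the matrix range.  When any node is
     * out of range - except when from and to are the same (see the first
     * check above) - the pair is treated as unreachable.
     */
    if ( from >= MAX_NUMNODES || to >= MAX_NUMNODES )
        return NUMA_NO_DISTANCE;

    return node_distance_map[from][to];
}
```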

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 08:39:55 2023
Message-ID: <188a01f0-a2d1-0f2d-4d01-61a259c790f1@suse.com>
Date: Tue, 25 Apr 2023 10:39:39 +0200
Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230425075655.4037980-10-Henry.Wang@arm.com>

On 25.04.2023 09:56, Henry Wang wrote:
> --- a/xen/arch/arm/numa_device_tree.c
> +++ b/xen/arch/arm/numa_device_tree.c
> @@ -151,3 +151,111 @@ invalid_data:
>      numa_fw_bad();
>      return -EINVAL;
>  }
> +
> +/* Parse NUMA distance map v1 */
> +static int __init fdt_parse_numa_distance_map_v1(const void *fdt, int node)
> +{
> +    const struct fdt_property *prop;
> +    const __be32 *matrix;
> +    unsigned int i, entry_count;
> +    int len;
> +
> +    printk(XENLOG_INFO "NUMA: parsing numa-distance-map\n");
> +
> +    prop = fdt_get_property(fdt, node, "distance-matrix", &len);
> +    if ( !prop )
> +    {
> +        printk(XENLOG_WARNING
> +               "NUMA: No distance-matrix property in distance-map\n");
> +        goto invalid_data;
> +    }
> +
> +    if ( len % sizeof(__be32) != 0 )
> +    {
> +        printk(XENLOG_WARNING
> +               "distance-matrix in node is not a multiple of u32\n");
> +        goto invalid_data;
> +    }
> +
> +    entry_count = len / sizeof(__be32);
> +    if ( entry_count == 0 )
> +    {
> +        printk(XENLOG_WARNING "NUMA: Invalid distance-matrix\n");
> +        goto invalid_data;
> +    }
> +
> +    matrix = (const __be32 *)prop->data;
> +    for ( i = 0; i + 2 < entry_count; i += 3 )
> +    {
> +        unsigned int from, to, distance, opposite;
> +
> +        from = dt_next_cell(1, &matrix);
> +        to = dt_next_cell(1, &matrix);
> +        distance = dt_next_cell(1, &matrix);

Upon second thought I checked what dt_next_cell() returns: You're silently
truncating here and then ...

> +        if ( (from == to && distance != NUMA_LOCAL_DISTANCE) ||
> +             (from != to && distance <= NUMA_LOCAL_DISTANCE) )
> +        {
> +            printk(XENLOG_WARNING
> +                   "NUMA: Invalid distance: NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
> +                   from, to, distance);
> +            goto invalid_data;
> +        }
> +
> +        printk(XENLOG_INFO "NUMA: distance: NODE#%"PRIu32"->NODE#%"PRIu32":%"PRIu32"\n",
> +               from, to, distance);
> +
> +        /* Get opposite way distance */
> +        opposite = __node_distance(to, from);
> +        /* The default value in node_distance_map is NUMA_NO_DISTANCE */
> +        if ( opposite == NUMA_NO_DISTANCE )
> +        {
> +            /* Bi-directions are not set, set both */
> +            numa_set_distance(from, to, distance);
> +            numa_set_distance(to, from, distance);

... here again. Is that really the intention?
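(The truncation being pointed out can be made explicit with a checked
narrowing helper; the sketch below is hypothetical and not part of the patch,
assuming only that dt_next_cell() yields a 64-bit value.)

```c
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Assigning a 64-bit cell value straight to an unsigned int silently
 * drops the upper bits.  This hypothetical helper rejects values that
 * would not survive the narrowing, instead of truncating silently.
 */
static bool narrow_cell(uint64_t cell, unsigned int *out)
{
    if ( cell > UINT_MAX )
        return false;              /* would truncate - reject */
    *out = (unsigned int)cell;
    return true;
}
```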

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 08:40:36 2023
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <36e27df0-7fdc-5f1d-ebb7-0b021bdae2bf@amd.com>
Date: Tue, 25 Apr 2023 10:39:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v4 07/17] xen/arm: introduce a helper to parse device tree
 processor node
To: Jan Beulich <jbeulich@suse.com>, Henry Wang <Henry.Wang@arm.com>
CC: Wei Chen <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<xen-devel@lists.xenproject.org>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-8-Henry.Wang@arm.com>
 <8ce925a1-dadd-79c3-3b0f-c3ab45b1a669@suse.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <8ce925a1-dadd-79c3-3b0f-c3ab45b1a669@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT057:EE_|CY8PR12MB7244:EE_
X-MS-Office365-Filtering-Correlation-Id: 59a2529c-925a-494c-117f-08db4568b13c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 08:40:15.5007
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 59a2529c-925a-494c-117f-08db4568b13c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT057.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB7244

Hi Jan,

On 25/04/2023 10:20, Jan Beulich wrote:
> 
> 
> On 25.04.2023 09:56, Henry Wang wrote:
>> From: Wei Chen <wei.chen@arm.com>
>>
>> Processor NUMA ID information is stored in the device tree's processor
>> nodes as "numa-node-id". We need a new helper to parse this ID from a
>> processor node. Even after reading this ID from a processor node, its
>> validity still needs to be checked. Once we get an invalid NUMA ID
>> from any processor node, the device tree will be marked as having
>> invalid NUMA information.
>>
>> Since new helpers need to know the NUMA status, move the
>> enum dt_numa_status to the Arm NUMA header.
>>
>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
>> ---
>> v3 -> v4:
>> 1. No change.
>> v2 -> v3:
>> 1. Move the enum dt_numa_status to the Arm NUMA header.
>> 2. Update the year in copyright to 2023.
>> v1 -> v2:
>> 1. Move numa_disabled from fdt_numa_processor_affinity_init
>>    to fdt_parse_numa_cpu_node.
>> 2. Move invalid NUMA id check to fdt_parse_numa_cpu_node.
>> 3. Return ENODATA for normal dtb without NUMA info.
>> 4. Use NUMA status helpers instead of SRAT functions.
>> ---
>>  xen/arch/arm/Makefile           |  1 +
>>  xen/arch/arm/include/asm/numa.h |  8 +++++
>>  xen/arch/arm/numa.c             |  8 +----
>>  xen/arch/arm/numa_device_tree.c | 64 +++++++++++++++++++++++++++++++++
>>  4 files changed, 74 insertions(+), 7 deletions(-)
>>  create mode 100644 xen/arch/arm/numa_device_tree.c
> 
> As asked for in various other contexts, may I please ask that new files
> prefer dashes over underscores in their names? Additionally short but
> still descriptive names are imo to be generally preferred; in the case
> here how about numa-dt.c?

Seeing that you have made this request multiple times over the last few months, maybe
it would be better to write it down in CODING_STYLE or CONTRIBUTING? Otherwise, you
will keep making these requests indefinitely, with no way for people to know about
such conventions before submitting new files (and each occurrence requires a respin).

~Michal


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 08:47:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 08:47:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525901.817418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prEKb-0008VK-Ti; Tue, 25 Apr 2023 08:47:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525901.817418; Tue, 25 Apr 2023 08:47:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prEKb-0008VD-QK; Tue, 25 Apr 2023 08:47:25 +0000
Received: by outflank-mailman (input) for mailman id 525901;
 Tue, 25 Apr 2023 08:47:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+We=AQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prEKb-0008V7-DF
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 08:47:25 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0613.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::613])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cba86e65-e345-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 10:47:23 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9287.eurprd04.prod.outlook.com (2603:10a6:20b:4dd::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 08:47:20 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 08:47:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cba86e65-e345-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aEMvFQgCXLCKxepuJhgbqhycbVqswZtWUGmxPApDpetL4mxdtrhCjOSRcwITZSSDmccUXLABBGEBunkwmiv/ZFQ+hyq6bpVVrIzwoK5zw0mbpCH0ppwTsqhKNXD3zchZr/c/ePXoTCzylP3nFVChmRNhY9fE7bB2gXv0d8dpdmWs5VNrqxaectV2V8fBKdLBzSTyct1uiOnxaQdFaxOcSNi8H7zn0E55fit/oB/mOj/ryLMs/rhv3fsFlVElytw/53d4UFj7w9R13x/tbB+qRjNG8+vycyGMZ2TZ215UsQlgqL6O4nKhyFxuxP/wneDEtB35iIsruxfTIG5POYirKA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MEVbymheg/pi0H18lPXe04kKTgelT+WrQ6x2GWgpn9I=;
 b=B9+kgu0oSLcNrIoIQywCf5pe2LBub97KciVvC6/lAOqQUOeKiwJD5lI9w/KQe/ikT/cvf1EqL33Ae+XzuhDNWtzTLev/EeIl5GXMbCYf47y3u2v3WE2C0a5rZ+SX+fMJyVgJJR/fl81DeFgVnI3UK6L0Sf0GkKjqAo7AIA7kaYNi+9oWOk3Gc3IuOpzfzLABpMbOgrwq7kqjRMBY3VPPaYMiOOuvWyt+Md70lMYTaLD3Yco4xY00JbHMiWxl9TiSjdhUyHnNsjhAEsnbwZ2/8Z7yMt9pzOeuIyVLA+Pq++dRKTHzsw7FzxPtlGPqzqziS30rCM30wGalwYsyag8JwQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MEVbymheg/pi0H18lPXe04kKTgelT+WrQ6x2GWgpn9I=;
 b=nHOgBYE2/r0R+4SwA/hXMEiExLNCnRdEJPa1z/HTHu/+VxB+lJ4RNd7tZT18pcWfRYHtcZ0I1xkQK1KH30rVgavLWgISAE0RMZ6jxhdVfBZfzwFDGN5tcCG1gzShc06vQmXe5T/OJZtU0WoJxmAeSPnH764MLHWOX2orXzgaklHqAKGq0pkOQ4OHUhVIRNc+YK/wgX2sQ8k+UWVoV7jzNH9+n/TZwDM3WhDEPoFnMiX+e/Xm/2ZSSy/Tupv+B7iuLgRM3PS2k38/OMUSHPC7dPvcgkEcvfBJltHoEVtAWOq9DVpAi+3oFSu4mFyIfUGJtpdJR7XN9M56t4LdVZ1bLA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d998b0cd-d8da-6022-3ec4-acfb798d5d3f@suse.com>
Date: Tue, 25 Apr 2023 10:47:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v4 07/17] xen/arm: introduce a helper to parse device tree
 processor node
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>
Cc: Wei Chen <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org, Henry Wang <Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-8-Henry.Wang@arm.com>
 <8ce925a1-dadd-79c3-3b0f-c3ab45b1a669@suse.com>
 <36e27df0-7fdc-5f1d-ebb7-0b021bdae2bf@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <36e27df0-7fdc-5f1d-ebb7-0b021bdae2bf@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0170.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:99::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9287:EE_
X-MS-Office365-Filtering-Correlation-Id: 413161c0-c1ec-4661-9f3f-08db4569ad4a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 413161c0-c1ec-4661-9f3f-08db4569ad4a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 08:47:18.5905
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: h5HQMe8wT0hMROxo9ENe9JKcpet1CDf88YxgdjH0DBbs4bTWVQByC+HU0010Tr3fy2wPMGPBsVFbJWQzN5hNWA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9287

On 25.04.2023 10:39, Michal Orzel wrote:
> Hi Jan,
> 
> On 25/04/2023 10:20, Jan Beulich wrote:
>>
>>
>> On 25.04.2023 09:56, Henry Wang wrote:
>>> From: Wei Chen <wei.chen@arm.com>
>>>
>>> Processor NUMA ID information is stored in the device tree's processor
>>> nodes as "numa-node-id". We need a new helper to parse this ID from a
>>> processor node. Even after reading this ID from a processor node, its
>>> validity still needs to be checked. Once we get an invalid NUMA ID
>>> from any processor node, the device tree will be marked as having
>>> invalid NUMA information.
>>>
>>> Since new helpers need to know the NUMA status, move the
>>> enum dt_numa_status to the Arm NUMA header.
>>>
>>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>>> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
>>> ---
>>> v3 -> v4:
>>> 1. No change.
>>> v2 -> v3:
>>> 1. Move the enum dt_numa_status to the Arm NUMA header.
>>> 2. Update the year in copyright to 2023.
>>> v1 -> v2:
>>> 1. Move numa_disabled from fdt_numa_processor_affinity_init
>>>    to fdt_parse_numa_cpu_node.
>>> 2. Move invalid NUMA id check to fdt_parse_numa_cpu_node.
>>> 3. Return ENODATA for normal dtb without NUMA info.
>>> 4. Use NUMA status helpers instead of SRAT functions.
>>> ---
>>>  xen/arch/arm/Makefile           |  1 +
>>>  xen/arch/arm/include/asm/numa.h |  8 +++++
>>>  xen/arch/arm/numa.c             |  8 +----
>>>  xen/arch/arm/numa_device_tree.c | 64 +++++++++++++++++++++++++++++++++
>>>  4 files changed, 74 insertions(+), 7 deletions(-)
>>>  create mode 100644 xen/arch/arm/numa_device_tree.c
>>
>> As asked for in various other contexts, may I please ask that new files
>> prefer dashes over underscores in their names? Additionally short but
>> still descriptive names are imo to be generally preferred; in the case
>> here how about numa-dt.c?
> 
> Seeing that you have made this request multiple times over the last few months, maybe
> it would be better to write it down in CODING_STYLE or CONTRIBUTING? Otherwise, you
> will keep making these requests indefinitely, with no way for people to know about
> such conventions before submitting new files (and each occurrence requires a respin).

Well. I could give it a try, but getting ./CODING_STYLE changed has proven
to be difficult in the past, and I would prefer not to sit on such a patch
for months or years again. IOW I've become quite hesitant to submit such
patches, even more so for things which - imo - should go without saying.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 09:03:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 09:03:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525909.817427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prEZu-0002T1-6k; Tue, 25 Apr 2023 09:03:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525909.817427; Tue, 25 Apr 2023 09:03:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prEZu-0002Sr-3q; Tue, 25 Apr 2023 09:03:14 +0000
Received: by outflank-mailman (input) for mailman id 525909;
 Tue, 25 Apr 2023 09:03:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prEZt-0002Sh-JC; Tue, 25 Apr 2023 09:03:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prEZt-00074L-Cv; Tue, 25 Apr 2023 09:03:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prEZs-00055b-Uq; Tue, 25 Apr 2023 09:03:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prEZs-0007bF-UP; Tue, 25 Apr 2023 09:03:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2uqHjssPFPjD2GeE/Z+UJjtAOfLe6XLoBX1O6YBYgT4=; b=59VRhXUpMRxH0AVSeCPWVUaPHW
	kQHTY64RwkKqDY+NIWr4HcLx70u+/DgfZ+ffz4qd4AzjXQr3uVbIQ5lRhY/IQaWb4KH1JVbbZSK9T
	U0t97gN6hUffd3CJRid9YProVMn6U3dPLgF4ZE1b50Gpd3WS9o0pWu3VKRc7ZhLsHJ8g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180405-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180405: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6127bf1f30a97e8135905d921d7745eb13554815
X-Osstest-Versions-That:
    ovmf=2c2cb235289642775a7c4e6eaeffa6d3828d279c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Apr 2023 09:03:12 +0000

flight 180405 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180405/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6127bf1f30a97e8135905d921d7745eb13554815
baseline version:
 ovmf                 2c2cb235289642775a7c4e6eaeffa6d3828d279c

Last test of basis   180368  2023-04-21 19:10:40 Z    3 days
Testing same since   180405  2023-04-25 07:10:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   2c2cb23528..6127bf1f30  6127bf1f30a97e8135905d921d7745eb13554815 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 09:17:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 09:17:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525920.817440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prEnD-00041Y-EG; Tue, 25 Apr 2023 09:16:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525920.817440; Tue, 25 Apr 2023 09:16:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prEnD-00041R-Bh; Tue, 25 Apr 2023 09:16:59 +0000
Received: by outflank-mailman (input) for mailman id 525920;
 Tue, 25 Apr 2023 09:16:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dmH=AQ=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1prEnB-00041L-Vu
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 09:16:58 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2060d.outbound.protection.outlook.com
 [2a01:111:f400:7e89::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ebd96e39-e349-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 11:16:55 +0200 (CEST)
Received: from MW4PR04CA0254.namprd04.prod.outlook.com (2603:10b6:303:88::19)
 by SN7PR12MB7451.namprd12.prod.outlook.com (2603:10b6:806:29b::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 09:16:51 +0000
Received: from CO1NAM11FT013.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:88:cafe::a3) by MW4PR04CA0254.outlook.office365.com
 (2603:10b6:303:88::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34 via Frontend
 Transport; Tue, 25 Apr 2023 09:16:51 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT013.mail.protection.outlook.com (10.13.174.227) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.20 via Frontend Transport; Tue, 25 Apr 2023 09:16:50 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 25 Apr
 2023 04:16:50 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 25 Apr 2023 04:16:48 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebd96e39-e349-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=h5GdVPIo3ax4hXGsZQRsuER/4Br+JzxTKb+PlKVLKL9hT2dbSV3erDC1BjFk0um4OMlEJvoN5WijlmQvzS+1klmGfef3jGG7YRjUVjvWBOrH2tel2FwtQyWhEFcvmZN4SqahFjiCpuogxfTdbObR2Qkc2a3ssfBDxsgNnGAaBhsNx1fE7hjUp4y8Na0xc0mYMtcrOfgpZT6dMTmN2bAnBXvuUw1XBxT6BJFNjz4/+3S56wxt/HLl57icuQG1D27InRH3+28v5ClqwrXmX4KgLXsM7TuWOKTjTAY8EM5psKNLfY5KSfKfzcnRltP6rRZoSforMuikLp9rzyMHAL/CDw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Am36Z/cVjOhEFeXClzPQpcORJB52iIGKESmRFUjF08g=;
 b=f1W02WmAr3pEE0ia0GlUugWduth2e+ObYIXxZFVL/AvTZ5DBgV2p9RsS21rrdnzAtg5tZ4RG9hmVfVfS358FyCagkKpB6zCg9BT9r3mqXk70bSZAO1agzJoINeusvJfHzGN0lBMuzuZ2RJb/a2eUMGoNxp2sBxMNi4niz+3BAcOlZ/Tes4QYCd8w+6HsYJue2s7Bl6l6YD8T7i7tMDps3JwVMFzEkE8KIPKj1RShL3GwdmsnwuRn9HHvVO0gLQ8Opbr8kQZUjxzPYMtuydCCZzls4gwqvRTmxJn+zn1socsPsI8pxeshaP0C2hDS1uG5AhDFwBc48W/HUaqe9cvJSA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Am36Z/cVjOhEFeXClzPQpcORJB52iIGKESmRFUjF08g=;
 b=jZhn4cGae0sS7n07mCtZ2AMIeEmIqPOI4Duvke3O73CRu06if4rLD2WOrMq7pdZSIEz38T74GC9+OtmM1J4TKH8B+5qAUtgEE0SFQRtdLGtjl2MkaHOxxOU6xWFiLXTN6k18+erQ+PH2mZfNSmsq9OWG9ABnl1eaczn5sLSZpsQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <a9b8b2ee-1b41-7fc4-2cda-3f490e70ada6@amd.com>
Date: Tue, 25 Apr 2023 11:16:43 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v4 07/17] xen/arm: introduce a helper to parse device tree
 processor node
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <wei.chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<xen-devel@lists.xenproject.org>, Henry Wang <Henry.Wang@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-8-Henry.Wang@arm.com>
 <8ce925a1-dadd-79c3-3b0f-c3ab45b1a669@suse.com>
 <36e27df0-7fdc-5f1d-ebb7-0b021bdae2bf@amd.com>
 <d998b0cd-d8da-6022-3ec4-acfb798d5d3f@suse.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <d998b0cd-d8da-6022-3ec4-acfb798d5d3f@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT013:EE_|SN7PR12MB7451:EE_
X-MS-Office365-Filtering-Correlation-Id: d8dde6a4-42ca-4e3a-a122-08db456dcdce
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 09:16:50.8117
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d8dde6a4-42ca-4e3a-a122-08db456dcdce
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT013.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB7451



On 25/04/2023 10:47, Jan Beulich wrote:
> 
> 
> On 25.04.2023 10:39, Michal Orzel wrote:
>> Hi Jan,
>>
>> On 25/04/2023 10:20, Jan Beulich wrote:
>>>
>>>
>>> On 25.04.2023 09:56, Henry Wang wrote:
>>>> From: Wei Chen <wei.chen@arm.com>
>>>>
>>>> Processor NUMA ID information is stored in the device tree's processor
>>>> node as "numa-node-id". We need a new helper to parse this ID from the
>>>> processor node. Once we get this ID from a processor node, its
>>>> validity still needs to be checked. If we get an invalid NUMA ID
>>>> from any processor node, the device tree will be marked as having
>>>> invalid NUMA information.
>>>>
>>>> Since new helpers need to know the NUMA status, move the
>>>> enum dt_numa_status to the Arm NUMA header.
>>>>
>>>> Signed-off-by: Wei Chen <wei.chen@arm.com>
>>>> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
>>>> ---
>>>> v3 -> v4:
>>>> 1. No change.
>>>> v2 -> v3:
>>>> 1. Move the enum dt_numa_status to the Arm NUMA header.
>>>> 2. Update the year in copyright to 2023.
>>>> v1 -> v2:
>>>> 1. Move numa_disabled from fdt_numa_processor_affinity_init
>>>>    to fdt_parse_numa_cpu_node.
>>>> 2. Move invalid NUMA id check to fdt_parse_numa_cpu_node.
>>>> 3. Return ENODATA for normal dtb without NUMA info.
>>>> 4. Use NUMA status helpers instead of SRAT functions.
>>>> ---
>>>>  xen/arch/arm/Makefile           |  1 +
>>>>  xen/arch/arm/include/asm/numa.h |  8 +++++
>>>>  xen/arch/arm/numa.c             |  8 +----
>>>>  xen/arch/arm/numa_device_tree.c | 64 +++++++++++++++++++++++++++++++++
>>>>  4 files changed, 74 insertions(+), 7 deletions(-)
>>>>  create mode 100644 xen/arch/arm/numa_device_tree.c
>>>
>>> As asked for in various other contexts, may I please ask that new files
>>> prefer dashes over underscores in their names? Additionally short but
>>> still descriptive names are imo to be generally preferred; in the case
>>> here how about numa-dt.c?
>>
>> Seeing that you have made this request multiple times within the last months, maybe it would
>> be better to write it down in CODING_STYLE or CONTRIBUTING? Otherwise, you will keep making
>> these requests indefinitely, without people being able to know such things before submitting
>> new files (and this always requires a respin).
> 
> Well. I could give it a try, but getting ./CODING_STYLE changed has proven
> to be difficult in the past, and I would prefer not to again sit on such
> a patch for months or years. IOW I've become quite hesitant to submit
> such patches, even more so for things which - imo - should go without
> saying.

I understand your point. It might be worth asking e.g. on a community call
to see what others think (so far I have not seen any objections). It might be
that everyone is ok with it, given good justification.

~Michal


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 09:23:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 09:23:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525924.817451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prEsw-0005Sv-39; Tue, 25 Apr 2023 09:22:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525924.817451; Tue, 25 Apr 2023 09:22:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prEsv-0005So-Vx; Tue, 25 Apr 2023 09:22:53 +0000
Received: by outflank-mailman (input) for mailman id 525924;
 Tue, 25 Apr 2023 09:22:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHzL=AQ=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1prEsu-0005Sh-Nm
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 09:22:52 +0000
Received: from wout2-smtp.messagingengine.com (wout2-smtp.messagingengine.com
 [64.147.123.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bea29891-e34a-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 11:22:50 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.west.internal (Postfix) with ESMTP id 5C5E332005C1;
 Tue, 25 Apr 2023 05:22:46 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Tue, 25 Apr 2023 05:22:47 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 25 Apr 2023 05:22:44 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bea29891-e34a-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1682414565; x=1682500965; bh=n7mEZHqqlPL4GuAmu+P0KpVYPyPHX0Db6or
	cu4QoUDU=; b=ofpUM5MlX1mzGborQ6U/zTN7T3LNetR3enueMNXKd04sCg4H1HU
	BIkgsY3eXNEF3jvbFGWcn2oR45LCFTxwfOKhpKviwnizVbo/1oyXcEp5aLtocHG/
	ZAvcxN9Owb7uKAJk9+OjNueH+Lleq0b0ETO1vpC6qFt9cxqpjMavsGGZX9z71Gks
	SctfzNlQnH6GYoNg0fmCDEHTRK2chTz4kgH1m69T7AFNPuKZIwcAAuuLXD+h47tZ
	oxDAFlR0mno/kPH/UshVwVBXZvWZ/3irCtlvgjZ5sut4QpsDnr7Y9kV1fnjAbAsk
	ja1R6AnTYcij1r9hOAGWXAzjOjXeVM9/H4A==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1682414565; x=1682500965; bh=n7mEZHqqlPL4G
	uAmu+P0KpVYPyPHX0Db6orcu4QoUDU=; b=iwftkVvaFqonMFO9WW/mjPGa0pFzu
	B1/3fus/3i5hZkIcuba/FKS4huyRKKsheMErqSlnwAnpegAB0aAx/eSYWWW9PS4y
	iL0bpcrulYTJzAyF+xU2x68VXelxlO3pxkkQ2VSqFa4EMJBFdlviiepfQvtTWBQP
	05Gr3phlmbZvU1nDWx7qi6Ls+WeMSDyPhDVsxjppUjyL4UStl97c8iNiVEcd0CE3
	v21ZVIpSDTuzxQBrilfpwfFuNQY/8zKX+qEzHay7uhbChhpuhigUB57mkwWEz1vh
	Iz+f8r1BT6+uqAmnuuTbno3VtgrzdKFWKR0grPMzkUX+L6CtQU80yeKDw==
Feedback-ID: i1568416f:Fastmail
Date: Tue, 25 Apr 2023 11:22:41 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] ns16550: enable memory decoding on MMIO-based PCI
 console card
Message-ID: <ZEeb4ieUq1B1cQsg@mail-itl>
References: <20230424233930.129621-1-marmarek@invisiblethingslab.com>
 <e93bd35b-42fb-3979-7e08-0d7f779e9ed1@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="o+ldYxeBTrHEeD44"
Content-Disposition: inline
In-Reply-To: <e93bd35b-42fb-3979-7e08-0d7f779e9ed1@suse.com>


--o+ldYxeBTrHEeD44
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Tue, 25 Apr 2023 11:22:41 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH] ns16550: enable memory decoding on MMIO-based PCI
 console card

On Tue, Apr 25, 2023 at 10:05:25AM +0200, Jan Beulich wrote:
> On 25.04.2023 01:39, Marek Marczykowski-Górecki wrote:
> > pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
> > devices, do similar with PCI_COMMAND_MEMORY for MMIO-based UART devices.
> 
> While I agree that something like this is needed (and in fact I have been
> wondering more than once why this wasn't there), what you say above isn't
> really correct imo: You do not really make things similar to port-based
> handling. For one you don't respect uart->pb_bdf_enable. And then you also
> don't write the BAR. When you use the BDF form of com<N>=, I don't see how
> else the BAR could be written to the value that you (necessarily) have to
> also specify on the command line.

I don't think an MMIO-based UART is going to work without "pci" on the
command line at all. Setting the BAR is one of the reasons (there is
more to it than just setting (or reading) PCI_BASE_ADDRESS_0, as many
cards have UART registers at an offset), but so are other parameters like
fifo_size. So, I don't think it's a good idea to set PCI_BASE_ADDRESS_0
to what the user provided in io_base.

> As said on Matrix, using the "pci"
> sub-option together with the BDF one isn't intended (and would probably
> better be rejected), according to all I can tell. Which in turn means that
> for the "pci" sub-option alone to also have the effect of - if necessary -
> enabling I/O or memory decoding, a further adjustment would be needed
> (because merely keying this to uart->ps_bdf_enable then isn't enough). I
> guess like e.g. ns16550_init_postirq() you'd want to also check uart->bar.

Yes, checking also uart->bar makes sense.

> That said, I'm not meaning to mandate you to particularly deal with the
> bridge part of the setup, not the least because I consider bogus what we
> have. But you should at least mention in the description what you leave
> untouched (and hence dissimilar to port-based handling).
> 
> As to rejecting invalid combinations of sub-options: See e.g. the dev_set
> variable in parse_namevalue_pairs(). That's a wee attempt to go in the
> intended direction.

That makes sense with the current code shape. At some point IMO it's
worth having an option to choose which PCI device to use, also for
MMIO-based cards, but I don't have a need for this feature right now.

> Jan
> 
> > --- a/xen/drivers/char/ns16550.c
> > +++ b/xen/drivers/char/ns16550.c
> > @@ -272,9 +272,17 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
> >  static void pci_serial_early_init(struct ns16550 *uart)
> >  {
> >  #ifdef NS16550_PCI
> > -    if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
> > +    if ( !uart->ps_bdf_enable )
> >          return;
> > 
> > +    if ( uart->io_base >= 0x10000 )
> > +    {
> > +        pci_conf_write16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
> > +                                  uart->ps_bdf[2]),
> > +                         PCI_COMMAND, PCI_COMMAND_MEMORY);
> > +        return;
> > +    }
> > +
> >      if ( uart->pb_bdf_enable )
> >          pci_conf_write16(PCI_SBDF(0, uart->pb_bdf[0], uart->pb_bdf[1],
> >                                    uart->pb_bdf[2]),
> 

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--o+ldYxeBTrHEeD44
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRHm+IACgkQ24/THMrX
1yyhnAgAmigbGE67ThnOdCcgO7WrstM++HrGTvar4fpC3mK/q1VkFlrxeB+AXRjY
eG++ne4G//wvxAJeP1uQZQSNCwLMGL4PboPQ1/RolhRoW8nj4fenIOjc96Dyc9rB
7FkEfe/SIiyEuu9PxFXbavEnrLEOPCVdfSBj0ZR1JA1bAcYwyVEHbslWG++qUdks
sqHSDa5neq4McV/ucC/JvOdxdxffCVZf15m/is7mqayM0smFuyGOLUTStI9cus1S
jXTJwyw44Foyd4bW+w6zDDEDVdPGwtTRZQvw3jPbwnKYxf7c4bbNpOoR6ysNRXh2
ttIBbaDPtYjwJ8EAa8LcVZsB9z+4dQ==
=KdJE
-----END PGP SIGNATURE-----

--o+ldYxeBTrHEeD44--


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 09:32:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 09:32:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525937.817471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prF20-0007ZK-BJ; Tue, 25 Apr 2023 09:32:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525937.817471; Tue, 25 Apr 2023 09:32:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prF20-0007ZD-8c; Tue, 25 Apr 2023 09:32:16 +0000
Received: by outflank-mailman (input) for mailman id 525937;
 Tue, 25 Apr 2023 09:32:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prF1y-0007Z7-Mt
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 09:32:14 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2061d.outbound.protection.outlook.com
 [2a01:111:f400:7d00::61d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0eed9138-e34c-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 11:32:12 +0200 (CEST)
Received: from DB9PR01CA0014.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:1d8::19) by AS8PR08MB6134.eurprd08.prod.outlook.com
 (2603:10a6:20b:291::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 09:32:08 +0000
Received: from DBAEUR03FT059.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1d8:cafe::ea) by DB9PR01CA0014.outlook.office365.com
 (2603:10a6:10:1d8::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34 via Frontend
 Transport; Tue, 25 Apr 2023 09:32:08 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT059.mail.protection.outlook.com (100.127.142.102) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.18 via Frontend Transport; Tue, 25 Apr 2023 09:32:08 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Tue, 25 Apr 2023 09:32:08 +0000
Received: from c00b1b0e3600.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CDC6313B-9F18-413A-B21E-83DE98492892.1; 
 Tue, 25 Apr 2023 09:31:57 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c00b1b0e3600.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 25 Apr 2023 09:31:57 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS2PR08MB9425.eurprd08.prod.outlook.com (2603:10a6:20b:5eb::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 09:31:54 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 09:31:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0eed9138-e34c-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aLDGvAkRP281Y3BToysipj4WgL+Mh3vkMqKfKo3cS+A=;
 b=n33eRVKuqCK+C7DtznIlnQWHiMuJeGjMlboToR+IM2eWc92D+qM2xwisSaNsv2tDF6AJuY76UczeXF/B+L9fUKrBEN9FgCsn0RIetbs0ypEWUE6Stmj0e6oT73yEoFmoE2ZoFjcyehubNzu8DkFmuiDZ8+hEkOsI2jKk8JUdHVk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A+S/MPiYdpIFK04hhMTy4gIVFWnHn7af2zDtp0nlx9E4ypKkm7tQNr0I5MDISGWgL9Z8nv34zSFzANE3NBzMLtIorFtGlHqrjiO639+HOUnASdS4gNPiXbhVvob6De5QxCD7yDNedzwpJah/6mHoWgS+t/uDTK/IlnYRHAJlto7gHR34xH8SmQj6XxCMKBWubLfzFLeELVYH6YxLZtQfQjPWLq7xR7VvpKP179BD+HU2ZtC43nv8K3ztw0PTPcozE4FmYKNjQpL9GW7r7qlal1QlkYHVJPeRq8GSrfj4g5OutCvOI0oWe6M3ESauPwcf+M4KiU6MttBenBGn1ymqDQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=aLDGvAkRP281Y3BToysipj4WgL+Mh3vkMqKfKo3cS+A=;
 b=ew4n5fY+ppSfUYxDe7/1/764Jp/Qz9lpSo8TzW6L5fB2byzxsPVf7/Teqt/j9S/Xkw2fKSZFkPYkzLj9Qn1h5sBN5fPBxphbeu+5VCS3+b2t9E7/+tC9zl7dsk7SI57tnZuND+uMwP6W8ozfdGaQgPkn2TggaEM51ytVtA/mmlcO6gvxpTwVhcOYIqEf80SWhgIRHjDgduqxZp7lp6wtC0BBSvYP/iqGWOMFQqjYBmF/mw9VkX2xWcM2/YdK0SkEo4fUzLY43ci0qltSVZ6LabSai1LHGWWMJCgzW02eZBu+uw9VfCsm24gIt2EeBIchlO/Ii4mQRYVUybjZLqXoqg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aLDGvAkRP281Y3BToysipj4WgL+Mh3vkMqKfKo3cS+A=;
 b=n33eRVKuqCK+C7DtznIlnQWHiMuJeGjMlboToR+IM2eWc92D+qM2xwisSaNsv2tDF6AJuY76UczeXF/B+L9fUKrBEN9FgCsn0RIetbs0ypEWUE6Stmj0e6oT73yEoFmoE2ZoFjcyehubNzu8DkFmuiDZ8+hEkOsI2jKk8JUdHVk=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, Roger Pau Monné
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 03/17] xen/arm: implement node distance helpers for Arm
Thread-Topic: [PATCH v4 03/17] xen/arm: implement node distance helpers for
 Arm
Thread-Index: AQHZd0uatmvJ8yLEu0+BYdIuaJgbea87s6sAgAANHMA=
Date: Tue, 25 Apr 2023 09:31:54 +0000
Message-ID:
 <AS8PR08MB799137BF5927B402E7B0069392649@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-4-Henry.Wang@arm.com>
 <7d2c221b-745b-109e-af1f-2b78504b2e0e@suse.com>
In-Reply-To: <7d2c221b-745b-109e-af1f-2b78504b2e0e@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 69CFC7BB08AD8546970C9F34FFF82DFE.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS2PR08MB9425:EE_|DBAEUR03FT059:EE_|AS8PR08MB6134:EE_
X-MS-Office365-Filtering-Correlation-Id: 43ddbefb-b2e1-43be-948f-08db456ff09c
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 KpGm5JPjWvDi/1TE9C/RviXWX/opB3PSozGYxrrRSX46bG8Itzq9dQCTbRlYxM0FE31P8HZtjOXJ2soUZ+eqVkPMliI7JY3aoW8cTtrBbxDgnNlB6V02HlwRAOpKR36Fdrp6ZPNBfiIdyFLM3U9qEv9gmikR0rmhS57jtFnR3+4YvofGmy0O2g/3zVmYm7HT57eqfYTSg/0QAEmbZ+Z8orPgmBQWotTy+/Og+9DCYQJ9xbZlMAxjojsgI8TTcFuX+eYgkxKIJDK50QYERUEXQASyd3jkFrjIr7q1RPEpGhwjlD9/4+uT0ue9YwY7Kz9Ebu6WTqNTd95q76K9hXUjzAxxLoB+ujLN3bXrO9o5t9ezTAk7c2aDdF9rL4vD5hglV/jp/Bin2THHKNZfEZ6gte//Gcmx3WAk8F3zRE5ObkR48ntlbM2OkwB5MaNqo1aaxZ0encfl2BBUpYncnVoXeS/TL49do0opj6IMcROefnlP85owne/fYCyB2dxe6rOwtqJdXYIMjkp29+Q+yD+FVzAf4h+xvd2m+TkF1HZuyGji+s5ZUJv6gcgAKnnHe8Kn1iuGtPTxNZD5C+F16Na9HtLuNiGHGVFr+5dSwUcEMBFpL63SKrelnnoJ+Sh+DHaG
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(136003)(396003)(366004)(346002)(451199021)(38100700002)(122000001)(6506007)(9686003)(53546011)(26005)(55016003)(186003)(8936002)(83380400001)(33656002)(2906002)(8676002)(52536014)(5660300002)(478600001)(54906003)(7696005)(71200400001)(38070700005)(316002)(4326008)(6916009)(66556008)(76116006)(64756008)(41300700001)(66446008)(66946007)(86362001)(66476007);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9425
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
 aaf9ad94-d9ab-4b88-ce52-08db456fe880
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
 F9SyNl3Kg0rsqXKaPcJ8xW5w4rNrmPWwZJlmmvRlTZ7uvfEff8TGjsbYJWLV4NDxLR7QqlbBIMC+Rt41050+nUN5YMTF/LMUs49I+lYf9lapZBzY5ftJ37bNonHSSwJ0PDrnS5UYWGFQwsLMW1SMKlfiZXWL8RLMLYn7Pc4ekp8Rrf9OGSZGF9TFdV35u2UG0G5oNxv5nMQ3XbP7ddj3rBaprE2aBGA7VlK0Qi+3NzlNsRVJ/9K1+DJJ5RWh3sg7VQqKTcP7NtnMQ3iI4CvwFjaz6rBBp25dIOT32wYHwUjZ0L0OGnG1I87AbPATSprmnK48E+sLNmc/9fL3EOtQmK48ZT8grJH+1julLm/sqIre9Qy/hGLfrymX5V8/isB1ZoHZhqTHycQgF/bUcxkwQL6+oqFc4zqZ8asqwsdZvrRsh+VwNi5/tw8Z5YBniIP6kbkX9OyuJ4Y32hj2UE9KneKJWu8PA31xlyCXNxUdmoO8WIuykkcvAL/abDzxgdxuKywRRtvXR0+dcz5OWRYrhF8pK3/p1YNDij0Utt4KFJUeVv/N0dCL4g4A8bcP0TDSfKeacTGPWuD9Rd/6SNyuGQLLcfbxrS3KVHfxhyjyEb4+JhiZYqw+r6wRn0IZDz/+DuyTSw2paHryt/+ioscXyxnFlIasGScZWbKq8WnaGkFFl7Yz/Yhnl1YKhtts6hrIGOynvR/fSlI5u+/OyWe9kw==
X-Forefront-Antispam-Report:
 CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(396003)(376002)(346002)(451199021)(46966006)(36840700001)(40470700004)(2906002)(70206006)(70586007)(316002)(4326008)(6862004)(52536014)(5660300002)(8936002)(8676002)(41300700001)(33656002)(82310400005)(86362001)(55016003)(40460700003)(40480700001)(356005)(186003)(9686003)(26005)(53546011)(81166007)(478600001)(7696005)(36860700001)(83380400001)(47076005)(336012)(6506007)(54906003)(82740400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 09:32:08.3380
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 43ddbefb-b2e1-43be-948f-08db456ff09c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
 DBAEUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6134

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v4 03/17] xen/arm: implement node distance helpers for
> Arm
>
> On 25.04.2023 09:56, Henry Wang wrote:
> > --- a/xen/arch/arm/numa.c
> > +++ b/xen/arch/arm/numa.c
> > @@ -28,6 +28,12 @@ enum dt_numa_status {
> >
> >  static enum dt_numa_status __ro_after_init device_tree_numa =
> DT_NUMA_DEFAULT;
> >
> > +static unsigned char __ro_after_init
> > +node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
> > +    [0 ... MAX_NUMNODES - 1] = { [0 ... MAX_NUMNODES - 1] =
> NUMA_NO_DISTANCE }
> > +};
> > +
> > +
>
> Nit: A stray 2nd blank line has appeared here.

Will fix this in v5.

>
> > +    /* NUMA defines NUMA_NO_DISTANCE as unreachable and 0-9 are
> undefined */
> > +    if ( distance >= NUMA_NO_DISTANCE || distance <=
> NUMA_DISTANCE_UDF_MAX ||
> > +         (from == to && distance != NUMA_LOCAL_DISTANCE) )
> > +    {
> > +        printk(KERN_WARNING
> > +               "NUMA: invalid distance: from=%"PRIu8" to=%"PRIu8"
> distance=%"PRIu32"\n",
> > +               from, to, distance);
> > +        return;
> > +    }
>
> I appreciate the checking that node-local references are
> NUMA_LOCAL_DISTANCE,
> but if they're wrongly passed into here, shouldn't the resulting array still
> have NUMA_LOCAL_DISTANCE on its diagonal, at least as far as present nodes
> go?

Apologies in advance to ask more specific details from you as I am not sure
if I can correctly understand the "if they're wrongly passed into here" case. Do you
mean we are always guaranteed that if from == to, the distance will always be
NUMA_LOCAL_DISTANCE so the (from == to && distance != NUMA_LOCAL_DISTANCE)
check is redundant and can be removed? Thanks.

>
> > +    node_distance_map[from][to] = distance;
> > +}

[...]

> > +    /*
> > +     * Check whether the nodes are in the matrix range.
> > +     * When any node is out of range, except from and to nodes are the
> > +     * same, we treat them as unreachable.
>
> I think this "except ..." part is slightly confusing, as it doesn't comment
> the subsequent code, but instead refers to the first check in the function.
> If you want to keep it, may I suggest to add something like "(see above)"
> before the comma?

Sure, I will add "(see above)" in v5.

Kind regards,
Henry

>
> Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 09:38:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 09:38:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525942.817481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prF7T-0008Cl-0T; Tue, 25 Apr 2023 09:37:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525942.817481; Tue, 25 Apr 2023 09:37:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prF7S-0008Ce-U2; Tue, 25 Apr 2023 09:37:54 +0000
Received: by outflank-mailman (input) for mailman id 525942;
 Tue, 25 Apr 2023 09:37:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+We=AQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prF7S-0008CY-H0
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 09:37:54 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20604.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::604])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d976d55b-e34c-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 11:37:52 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB8018.eurprd04.prod.outlook.com (2603:10a6:20b:236::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 09:37:49 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 09:37:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d976d55b-e34c-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OD2hcd+NEEGgFa02CRbSsnvk3yMWF/c2Yu1mSOrwzPj/gRimpox3FIFF1Y5n28eFhnKr7nFbHC15QCIqlqmp9Z6Vtgd5bo7SCdq2q21sBKSNaXpAhFeWZwLD1vY2ntzpbuZtgvZIqH2d4I1d+4KEpuyrPu+92zebLzGvFDZO/9M5vX10gImwepll/ySBta3skOzTC18ZV8UpVc4PuT2A6KbkkJdIGOQ/8k+i0TJjFifj2/DO26Hp8COIreG5ecTdqKPg75fySkgpg1Q0kXA5i5I2rtkSVrESIUmoFFRCL9gMUucbir1a9QzujQkVOqXusc14295AYSa87ZqoqARi5Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3ATNNVVqOrdW2aZ7VmVeCVcEoEOy2KfvMGzUwlyIuyQ=;
 b=ctZH0H5Cc2ov4nSmJ27CE6NJu7hTa0zWZJPH2GUMZNyNiFT/eLTSb+RnFhYA4LAvBysBgoU6eArloVIGHi2rzeaanU8PEOFSAvFEGa8ddLlt1wvlB6/0ws+vaRaZgyniGIIkDMjZsonm2zQeqBM6Ardw/HSJbxEUu1REv/gt8MlJYwMN1lsCgBTnrvZb+nbO0/eF4UCQYP03DcBX+uEkWY5uBChHRhVKbgCHbDGiqUp2AmJBlygmFuiw9yh1eSy0Y+xDYMk47awBWTpLH2//26oN113D/Gpp1JWYcJEdraw5s0/W4AkYgGKV6kiyTJXK8p9wlzM2jcg/dkTjW+X4Rg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3ATNNVVqOrdW2aZ7VmVeCVcEoEOy2KfvMGzUwlyIuyQ=;
 b=qqqsOofUbDz8K/AJCQOA/A65Wt+kYSHyaK8Km29Ht/MuWMYoEThnmJH6byxW1E1M7QiZiu/CGG9227h18qv9HzvMxj0WTchyKFXe2rdqlQGl3A/qbOH364xHI6B0TbL0hCIwpy81jknlXnj8tt/FG6mZvBP+NF0JWS7BDNns+0hdkMqInoyxuQUh6czMMXzpm8STVQ4hDm+NluD4gf+KjahhXjmQePzNGoEdoB5RgsivYAxB8TnWm7XumhgxuWpnyl2qKH12Ai47Fm+tnoCual4zEGzkA9qO37QXR1NljCtNznY3vKfSronl3PTzJp0SvnFFeWnQFinzasR/xfSmew==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <25123de7-c04e-1cf6-6076-6d3dcb9a4306@suse.com>
Date: Tue, 25 Apr 2023 11:37:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] ns16550: enable memory decoding on MMIO-based PCI console
 card
Content-Language: en-US
To: Marek Marczykowski-Górecki
 <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230424233930.129621-1-marmarek@invisiblethingslab.com>
 <e93bd35b-42fb-3979-7e08-0d7f779e9ed1@suse.com> <ZEeb4ieUq1B1cQsg@mail-itl>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZEeb4ieUq1B1cQsg@mail-itl>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0130.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB8018:EE_
X-MS-Office365-Filtering-Correlation-Id: 3ef1ee31-2c22-4fe0-f769-08db4570bbb7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	tnAbaoo+sfrAyONp6513hfGHluSo6IOg3ZUINDQwSxDZP9/sVdNxLxQKG2e+uumY+1r9Jw2/fKFJGkQz9DTJdkO/zKX9kLfk7+gYIYQi2YI4LwfNA6f6PqqA+HRJdrH+GOyX0wnbwOEx0beZ13myvyePKllPFPYdWcqZwDxMLl1lsAwPLGYP6XrmRy/FHspReL6ykD3pAqNM+Y+RsxeDuWHZ1lNV6YLxZkApj2LQJotcz0FF+8zOS5LIaC33wkDcvxlAqhZSvxPk78ERC92fCgyb2cgULjnl87dWUDnln1h2IxoT1fJhKDuNS6X3jN7YSw3ywcyJ/OGt12pWJYw4KooQrnZ8QCYjur9iiCVABXfcWlj4A3X2tEVZ/IkyIWB9hhT3E0mqSch9kxKcj8chzfBn367+vgsmvnXp+j/dxg4kFbVWYiskRVvb2C7cFTuXzurylHKZx7auOyIIzoCE64RkOTBhdqJMqf1Mo31+BqH70thknXnE1qfFpU2gcsOoXON0HwNONMTgjLw+E55UTjqNdy20meoaet4/SAUEod5otY/sTTgZ2viaivIpKJ1uJvukIrTwl5wfE0tHUTqpn8jV0dol27jmTDaXnCaXzxbFU3K15+XNZE42DZ5C960fAxv4emTyp+nQe20iGB3Z3w==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(366004)(376002)(136003)(346002)(39860400002)(451199021)(2906002)(6486002)(6512007)(6506007)(2616005)(186003)(53546011)(66556008)(66476007)(66946007)(8676002)(8936002)(41300700001)(6916009)(4326008)(316002)(26005)(478600001)(5660300002)(54906003)(38100700002)(36756003)(86362001)(31696002)(83380400001)(66574015)(31686004)(66899021)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?MXc4VkpoSzdyZkZyc0Z1anR4WGcvU0IvaWorUVJGUW9ja1FBdDgzeUowUndZ?=
 =?utf-8?B?TDhJZEhpWXl1dmdnNThLYkV5UVRyaUl4eXNDUHJGaE51am5zZmRwSVB5TU1Q?=
 =?utf-8?B?QUJEa1E1MENVTHNHUFZ5YnU2WVZVUmtzQlQxcVRZbDBhMWF0clNmdTYyRHZE?=
 =?utf-8?B?aUxGQUxaeXRlNnd5emVKUEVseGliYVpvVWxGVkoxOXJTKzJQaE1vUjZsOVNh?=
 =?utf-8?B?Yi9EOFJJcHdXK29lYmdoZUxLUG96NElXcHNOY0wyMDFLKzllS3IrZUZnNDZa?=
 =?utf-8?B?dVhEd1dldHl1aFhFamFQUS8yOXNWUUJtbEZqeFdlTzNtaWRkNTBtbnd0TWdW?=
 =?utf-8?B?SHhCeHpJYnpncmVnUFhlVE8rSXMrclRveTM0d3V5b1NWeGdjV1cybUNqRkJW?=
 =?utf-8?B?V2VhYWcwZTI4US84WG1mVHBYOXhiMzF0Q2dscFJtaXlsNUErS2RlUjIycUpW?=
 =?utf-8?B?czFKY0hLVGI0SDRPS0oxZzRNNThqM2pNREkrOE54SFV4cElidUtYOWJNbDFN?=
 =?utf-8?B?VW1COWo3SG1UK05LS1ExRkd2dTRjMmtlRWwvZnMvbEdvUDZDMjhJOE8yRUtD?=
 =?utf-8?B?Zk9NS3dRNUMxNkVTT1dTNzhZNm8ySWV6STRJN3R0U3ZRblB6b2VPRG1UbGVC?=
 =?utf-8?B?L3ZWU0ZZWWNMdDJWdE1zUGtiaHVCeU1hR3NyS0FwVmR2L3BvVndIN0I0d2pD?=
 =?utf-8?B?T3l5dHZZdXZUTzNNVWF4ajlJZU5LcjVhR1hRSUJZQUVrZ01Kc0FrSEJrRHgw?=
 =?utf-8?B?WHJORGsvcWdtVUZsYjQxYVVwbUkrYzJtWkJyR0Uza214WVJWTDJ0WGJ4WjFZ?=
 =?utf-8?B?Uml5MEUvVlkrMFJRckJ1SEdDWWhualliNTZTelhRTG1tTjBndUJnZlF0cTdw?=
 =?utf-8?B?d1N4RzZwZVBEM1VpeUhncnFwa3JUY3l6bXhXaDFYNGZ2SDM3M25KTE1zNHEr?=
 =?utf-8?B?M1k5MUROcXVOVVhoNFBFUWtNV2ZhY284L0lzc21iaXNDOTlFWk1KQUk2ODZE?=
 =?utf-8?B?bE5GSHVOaVNrUFRtT3hRY3N1aG9DNzM5czlOSkx2dWEvUFNRL0k2MkpKT0Nu?=
 =?utf-8?B?cjdhMTd0U1Q3OHJUK1dZT1RkUnQvUXNkbUh0d3NWaG1jOEJ6S1Jtc1RxQlJT?=
 =?utf-8?B?UkFUZWFmY0YrWnpJNmlVNkpkVnJkazVjNjl0MWx4aVF0b1B4b3VLdWxoeHZj?=
 =?utf-8?B?c29WRTY1ZVhwS1VEMXE4MGhqN1IvV3RCS2FibUdNL3RPTW9WQ0RFVTJ1QTNK?=
 =?utf-8?B?UWQ4UkhFRFNwbmxjaExPOHZtSmkxOWp2d25DQnVxUWxIekR2U2NIRG9BSklD?=
 =?utf-8?B?K2FuQ1RKYzBRTkFzeWxTekhQZHZNVUxnZVhHZ05hajJKV0hhYzBiazQ4dy9y?=
 =?utf-8?B?K3ZwdXZuZjI5aUJQVmV3WTAyWDVRVUd1UTBkdE1NZTJVenI3Z0hvLzJlSnMx?=
 =?utf-8?B?M2UxRkNvck1wWkRDSGFYcFpKUndkL3Z3RzlKeVk0anJzeXR5aytKN3lUZ2VN?=
 =?utf-8?B?M2ZGVi9aMnY1TXFrUE5tR3l1allaajJYd29yeUtFYkY4S2k4cUY1S0ZscXJF?=
 =?utf-8?B?SkNaTWtyaVgvd2JnaFdIbzFadjJ6azkwWHNsWWsrQ0hMZW40b0ZXV3A5VTY5?=
 =?utf-8?B?R3grQjFPcSthc3dyV0VjZmdsVHBNME1YTHdZSnlTMXZ2b09NTldocnd4Mnc0?=
 =?utf-8?B?dzBraWl1Z0h3SC8yVjBkUzZuVlJDR3FQVXVJYzV5bVk1WGN5LzlSSDIwczZ6?=
 =?utf-8?B?bmZEVFJ3K0hzbWJnL213a0M1L0Jmc0p5VGVOYVVZNCtudFlrZEVIb0ExRFQ4?=
 =?utf-8?B?UTN0ckt6ZThqMjhpdWlEUTYvdVphQXdsU0tXWlBmYW5zenFISC9ScmEyRnhq?=
 =?utf-8?B?QzFSZ25TeEZNSlQ2UFNRVmZORm9ibWZsZTNWaGZrd3puRTdLejVsQzJQbkgw?=
 =?utf-8?B?NFlhZTltYnNOUU85OGVJT2p0L2dMdCt0M1JkTTY4b2hFclBJVmpHeUN1Mk5F?=
 =?utf-8?B?dTlhOWFYYzlLdzN1bE15U0xscXlscFl6N25wemF6WnpYa2JqZUwrK3c3RHB2?=
 =?utf-8?B?d2d5RlNTVzh0dkxWNktLN3FQR2pZbkwxS0Npb2ZiazNNa0s2T1NsVEplRlNC?=
 =?utf-8?Q?cA7e+RJMT1JAs5LIi9NoMCdH2?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ef1ee31-2c22-4fe0-f769-08db4570bbb7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 09:37:49.3151
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6c0A3eisTY6kgKp9hWuyJ3fG9Q5AcEmugxVFuYloCIARpnB3l4wDy+e/0EY4einG25lf22eeMkr9QmKYQrKdbQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB8018

On 25.04.2023 11:22, Marek Marczykowski-Górecki wrote:
> On Tue, Apr 25, 2023 at 10:05:25AM +0200, Jan Beulich wrote:
>> On 25.04.2023 01:39, Marek Marczykowski-Górecki wrote:
>>> pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
>>> devices, do similar with PCI_COMMAND_MEMORY for MMIO-based UART devices.
>>
>> While I agree that something like this is needed (and in fact I have been
>> wondering more than once why this wasn't there), what you say above isn't
>> really correct imo: You do not really make things similar to port-based
>> handling. For one you don't respect uart->pb_bdf_enable. And then you also
>> don't write the BAR. When you use the BDF form of com<N>=, I don't see how
>> else the BAR could be written to the value that you (necessarily) have to
>> also specify on the command line. 
> 
> I don't think an MMIO-based UART is going to work without "pci" on the
> command line at all. Setting the BAR is one reason (there is more to it
> than just setting (or reading) PCI_BASE_ADDRESS_0, as many cards have
> UART registers at an offset), but there are also other parameters, like
> fifo_size. So, I don't think it's a good idea to set PCI_BASE_ADDRESS_0
> to what the user provided in io_base.

While the BDF way of setting the device to use is meant only for the most
basic configurations anyway, I'm okay with you leaving out that aspect as
well, so long as you mention it as (another) dissimilarity with the port-
based logic.

>> As said on Matrix, using the "pci"
>> sub-option together with the BDF one isn't intended (and would probably
>> better be rejected), according to all I can tell. Which in turn means that
>> for the "pci" sub-option alone to also have the effect of - if necessary -
>> enabling I/O or memory decoding, a further adjustment would be needed
>> (because merely keying this to uart->ps_bdf_enable then isn't enough). I
>> guess like e.g. ns16550_init_postirq() you'd want to also check uart->bar.
> 
> Yes, checking also uart->bar makes sense.
> 
>> That said, I don't mean to mandate that you deal with the bridge part of
>> the setup in particular, not least because I consider what we have there
>> bogus. But you should at least mention in the description what you leave
>> untouched (and hence dissimilar to port-based handling).
>>
>> As to rejecting invalid combinations of sub-options: See e.g. the dev_set
>> variable in parse_namevalue_pairs(). That's a wee attempt to go in the
>> intended direction.
> 
> That makes sense with the current code shape. At some point IMO it's
> worth having an option to choose which PCI device to use, also for
> MMIO-based cards, but I don't have a need for this feature right now.

Well, in a very limited way this is already possible - see pci_uart_config()'s
"idx" parameter. The primary thing needed is extending ns16550_com[] to more
than two entries, or introducing a suitable level of indirection.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 09:43:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 09:43:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525949.817491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prFCc-0001Gd-Nk; Tue, 25 Apr 2023 09:43:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525949.817491; Tue, 25 Apr 2023 09:43:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prFCc-0001GW-Kv; Tue, 25 Apr 2023 09:43:14 +0000
Received: by outflank-mailman (input) for mailman id 525949;
 Tue, 25 Apr 2023 09:43:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=x+We=AQ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prFCb-0001GE-HZ
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 09:43:13 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0614.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::614])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 98486422-e34d-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 11:43:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB8018.eurprd04.prod.outlook.com (2603:10a6:20b:236::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 09:43:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 09:43:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98486422-e34d-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f6TKfXiTQ6QhEqOIEwe4w4dqK7BnNQh0oN74aylur1gysVWWPtXNonkZOIAjKH3+Bggi67wsUUxcCacfm3Npgr3ocLLIK1Q/QddNDLfp6tyLd4wRYUTF+AL4GjhTLZ93mTZtL03UzelNGYJG4l4/sC47Do1jt+yr2OGN+yRH9hvSdjEB1clAm5HaZV+DgZoBY+GMVBR+EY5hzJMTqfiS2ONqu5KQ1uxaj0bT1w78zhrzBI3urZK7LK+xaj9vR5lr/Q2Hl+Ey3AQESMN4q271F2VReMrjiELN6setpJ90nIgTEVF5SzK3W2IAlzHeme9CFupkhj9+AaA8mYD0YRkm3w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3ysSWlggfX7ar5zc11t8m9pp2LqrWQ4SRh3Kl4aeuI8=;
 b=WsSCKU2f9c3ti0g3iCwudq97SCYVxCf/q+kNwgpP29ifptzgesvKfZ1fVqzWlRN0hPbmqIFH7YWb5f0cMfbnzcwUWghz2tE7lvWD0Cv20iXJD2NzJytE1+JHvN2y/gMQKonU8XJOXZqr9RvvF3UxS7rPw6LZHwG1EruBECv+ThyNrH5PFcTVC0x/5s7gArPVSTzxujoTfLU5EQVJbKHB+6NHnxO4ZjOtF3towQ5QqEAEiYJnVldzChkMpDB4vbVlWO7+QWetR8X62fyPXGPrs51+PbDZ/i3AGgg6VlD4kBbOmQ3v6u82KQ4yKZcaTggbyGgyziOJ2Gx+khlfC/fEww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3ysSWlggfX7ar5zc11t8m9pp2LqrWQ4SRh3Kl4aeuI8=;
 b=vOFn1YQKQM2AIrk0wsspSfBmYq6lBUhxBuA0G6De0wOJDaJdxhDqiL2T9b2OGDTtr/uhcesEznzaeQiAd33QF5pE5G6QjurOrBrpvO6PGj4aFF/F36mw2zsu+GEtnn6AVr8Rx87dpddGlsfko9/GaK1fYmOXcyLUru9X8QnSFXNHNW5T0fcP+RBzCs4amsfxzbBui+BbtqPOLPoohQFDM4osxItJQ3ruxxPw6QE1JQ/QjHNG6cAMJ28gMT68l+rg05NRbRgTZ7iqLmLTNv1fwKK853R6rvPwQKPW2twBdjUUWaf3KfszFzfCxIbNUsw7c9kiNBXUWFcOYNh//ezeEA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <426a6ed3-f0fb-afa5-c685-4f8cdb0d2ff8@suse.com>
Date: Tue, 25 Apr 2023 11:43:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v4 03/17] xen/arm: implement node distance helpers for Arm
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-4-Henry.Wang@arm.com>
 <7d2c221b-745b-109e-af1f-2b78504b2e0e@suse.com>
 <AS8PR08MB799137BF5927B402E7B0069392649@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <AS8PR08MB799137BF5927B402E7B0069392649@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0261.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB8018:EE_
X-MS-Office365-Filtering-Correlation-Id: 2767e631-1e04-4058-8790-08db45717b85
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GkFqLaHC+o7a4w6TiAHLeOPRb5ctNWCmJrnUl01UEz9axasA7dNgrruSdB3hoT2YXQCyGBFmMcNQXUThs6yWsPnQYZQDCHGR/X/QMnPn6doPuC+3MC6UfmqCPbc/8PD8MeZ/ieF/bHwGA1udOQtVqtzwWcQqBvSxbFSFCVjPvuBuAaQ7dgfHEwtvfBFiwb6MdVNUwfA8q6lzvhmWt25MH2gcHgKFbJe9DY42eNHEkhzyhhnIEfmSXvaWQY2+g+tJXbkmBbWug0ki7zTlYXaGxAPlRF/FTc+PgEewMtkv89j3UznyybvxHfpZUJmhNZjgWplZRU4SxUi6UzwvX7aNpHUyZ1/bhO/wKIC7cYYoVs3znkYOpvzbY/2JJzY9uVtfQN4vGhLSkEByNMS5uL1IouLaGfCsoFujXzcupTiBB0mTjP/uvX8P8+Y5ixUlNKFNgj0tuASt6vPI+GcpzP4uRpCquJDaL07CspqSfXH8rdQD0BHSNkjV1nl7D81y4/c3Og4486V24dMDwcc8Jol86Vv5FdfCqdd37IZAoNcuZiiB3L6zYGIOjdgtYL/r+yU0ePPe7wNWfsdQAtVPYNWtygIdH5dxouEppukoBDVRUEwFv6janSXeoICP36fa0wCHDY7wDOrigeX2OCuCWC5PZQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(366004)(376002)(136003)(346002)(39860400002)(451199021)(2906002)(6486002)(6512007)(6506007)(2616005)(6666004)(186003)(53546011)(66556008)(66476007)(66946007)(8676002)(8936002)(41300700001)(6916009)(4326008)(316002)(26005)(478600001)(7416002)(5660300002)(54906003)(38100700002)(36756003)(86362001)(31696002)(83380400001)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Rkc0MTM3aThadXJjUGp2MGR5cVN4RWY1WXhMR2hlbnJYYTBlUm5rbSt1S0Qx?=
 =?utf-8?B?MmhEa1liMGdQYm13K25VajVBSXFFcG83c0J2V0tJZEllQlUyVGVnQjNuUGZ6?=
 =?utf-8?B?UVJPS05KMFZuZEpJZm04SUpPTHRicDA5NUNtdzAvanNkdExnRGlqTHhocU8z?=
 =?utf-8?B?VGkrMlNyVVNJSjFObWI5MVUvaFZnQ3lobWpKOExydEF2Tm1pdUYvczVQY0JU?=
 =?utf-8?B?aGZYemxmQTF0MG5yeURpczFrbEcxNzdKS2JESlFNSVlHQU9EczFZbDE0dTgv?=
 =?utf-8?B?RytCL1pqVFM2YmhKengreXFidjVBMnJLbkNPeUZXZjRzTEl6VExUSEE1VHRs?=
 =?utf-8?B?bURnTG03WkI1TDhYNTZ5RVlPaXFNZjhSNUkxS1ZrZWNGd2hqQVdXbW5jQWlB?=
 =?utf-8?B?eU9xczl3Z3E0NXJ5SXc3U2YyaVFOOGxiQlM2RUFFeXUvd20xaGxMNk1MZ2cw?=
 =?utf-8?B?ZXo1dGxaMTBVSDlKUU1wZnd4N2dZQkNOZHFwZW8rZjkxTXNQeDUrTW9MR1Bo?=
 =?utf-8?B?TVdCbzFxL0FrL3B2ckFjSU40VnNicXdSemFoUW9rZk1tTXJlc21EckpyOW4w?=
 =?utf-8?B?YWRpdkh1QjdpUkczbXY4b2pVRmhNR1pkK2VESGN5OVBlWEpYeG5QekFDVm45?=
 =?utf-8?B?bEE5R0VGUmVMUjY2N0ZZc2h6cUhQbERrU09lbTNMVkNoZ2ZVTDB2cmN2ZEpx?=
 =?utf-8?B?NExMOUpleWlRODdEdGk0R0ZXZitrV2pBbFBDczlkblZsS0tqdEw0M1VlM3pK?=
 =?utf-8?B?ZGNEeFkvU3ZvcDhXUmpNUklGUWdFNjc3QkJITDQvbTRGbC93bnQrV2pVckNU?=
 =?utf-8?B?T2c2amdMdkdQS2VEcStoL3NyRVdycWpzSUJaS3YxNGhVQWg1cUcxVjZkblpD?=
 =?utf-8?B?YWN3anFoRVo1TzhFak44YTNoeWZIb25adUhFTktIS1N3ZnFRZm8rWDNKdlBT?=
 =?utf-8?B?dGhMbnpqNzVCcjZoQmRweVVCUHJzUG1IODVzbVpNYkVIa0IvdXk0MEFJcmd3?=
 =?utf-8?B?bTQ5a0RhbGhTbUFXMGhNMGRwY3N3ZHNvcGc0SkxNUWJkbkFXS2Fxc0RwUjFQ?=
 =?utf-8?B?UVkvOUdUNEhHNmpRY0c5Y2p0dE1raVljNml2d08rNlJxa3NNWVJmaUpIVnF0?=
 =?utf-8?B?TDZtd3FsbUpPdEZETk9vWWduSXZ4bHcxTjl2YkJNcUtVdllhL1FFYitFSkZ0?=
 =?utf-8?B?MFNWZ3Vjci9SZWQvL2JXMWNLOTlUN1Fyd3AwUXpUazdSOXpvQnlwK3FidmxQ?=
 =?utf-8?B?dTZUeDVuYXREMDIyRGxJd3dacUs4NjlVYUhySiswaWxOSzVVQUdoZStORW1G?=
 =?utf-8?B?cGs3eVVUUW9Kam1PbVJnSHNWL3NMMGRHQWcrZHlFYS91bTJYeTYyemRsM09w?=
 =?utf-8?B?RzJUa0x4SUFVR0w4TW5vSkowRkRhaGlzRjUyY2tuUWQ5akREckx0TlIvZnUv?=
 =?utf-8?B?bU02aU4vVWRYeWl6cHRiNnJmRWxwOU9EM2ZUTGh4R05kcEdlNUtDOUNPVEgx?=
 =?utf-8?B?QnRaU2xMMGVwVjZWbnR4RVYzc2dOQUc3ZnZoMnZuRERLRFNyMGo1WlhOblZP?=
 =?utf-8?B?c2NSbWgvM3JpdXdnMkhZTmc2Ti9qRFZYeVg2Zlhkc3Rhb3Eza09ZK0l4cVY1?=
 =?utf-8?B?QWE1Uk9vOFVNMHNweFhnVFlsUmhsL2NkU2lYeEJiRzJxVU05RFJ5K3NzbVlI?=
 =?utf-8?B?K0F5L21RdUZUVUt0RWo0bFZKQzZUakVQbFdkTVRvemlIVWs1Y1hCOUJFeFEx?=
 =?utf-8?B?NG9MWW5YeHNqNklkcUdUMlE5Qno5bWlrbHJ4SWtsMFEyaDVtQUs0VEtKdk1s?=
 =?utf-8?B?RmNXZ0dBV0VteEFod1dyblhyN2VySzBHSWlCT2JwQ1N5UFRoeGkzS1JFd1NK?=
 =?utf-8?B?akN4RklGMnVUZy9SenVyQXZMaG83a1VsSjZ6bkFRcFNKSHRFYVBnaElmbTJo?=
 =?utf-8?B?MkZXQ1h2NXpoR01ZL2N5YlhycjZNWjJ6ai9yczNlTzhCWHVyT2I1dXc0ZzFJ?=
 =?utf-8?B?ekEvQ05TLy8yRTNiUlhoYTRBR0dFY3puWEk3WnhBTlBrdXNrU2E5a2JlbHZR?=
 =?utf-8?B?ZEZDQVF0UGRMc2x1RExsN2dna3ptS0M5Z2lmRURuT211TnRyOGNGSWlVcklh?=
 =?utf-8?Q?9MOtYhVp/E352Bn6RtvIrysLm?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2767e631-1e04-4058-8790-08db45717b85
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 09:43:11.0497
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: roTBR/F1wmQsWcASmQ4YbZAsh8xND5XS5eQyvP5aMkkZMlDrk9gIAcOeJI3SFL/ULGzL/g5N91guiIJxZ1VG5g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB8018

On 25.04.2023 11:31, Henry Wang wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>>
>> On 25.04.2023 09:56, Henry Wang wrote:
>>> +    /* NUMA defines NUMA_NO_DISTANCE as unreachable and 0-9 are undefined */
>>> +    if ( distance >= NUMA_NO_DISTANCE || distance <= NUMA_DISTANCE_UDF_MAX ||
>>> +         (from == to && distance != NUMA_LOCAL_DISTANCE) )
>>> +    {
>>> +        printk(KERN_WARNING
>>> +               "NUMA: invalid distance: from=%"PRIu8" to=%"PRIu8" distance=%"PRIu32"\n",
>>> +               from, to, distance);
>>> +        return;
>>> +    }
>>
>> I appreciate the checking that node-local references are
>> NUMA_LOCAL_DISTANCE,
>> but if they're wrongly passed into here, shouldn't the resulting array still
>> have NUMA_LOCAL_DISTANCE on its diagonal, at least as far as present nodes
>> go?
> 
> Apologies in advance for asking you for more specific details, as I am not
> sure I correctly understand the "if they're wrongly passed into here" case.
> Do you mean we are always guaranteed that if from == to, the distance will
> always be NUMA_LOCAL_DISTANCE, so the (from == to && distance !=
> NUMA_LOCAL_DISTANCE) check is redundant and can be removed? Thanks.

It's a little odd that you ask me: it is your code which insists on
NUMA_LOCAL_DISTANCE when from == to. If you insist that your caller pass
only this one value, then I think it is only logical to also set the
respective table entries to that value (rather than leaving them at
NUMA_NO_DISTANCE).

Jan
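
P.S.: The diagonal-handling point above can be illustrated with a
hypothetical, self-contained sketch. This is not the actual Xen code: the
NUMA_* values follow the ACPI SLIT convention, and MAX_NUMNODES plus the
node_distance_map array are assumed here purely for illustration.

```c
/* Hypothetical sketch, not the actual Xen code: NUMA_* values follow the
 * ACPI SLIT convention; MAX_NUMNODES and node_distance_map are assumed
 * here purely for illustration. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define NUMA_NO_DISTANCE       0xFF /* node unreachable */
#define NUMA_DISTANCE_UDF_MAX  9    /* 0-9 are undefined */
#define NUMA_LOCAL_DISTANCE    10   /* a node's distance to itself */
#define MAX_NUMNODES           4

static uint8_t node_distance_map[MAX_NUMNODES][MAX_NUMNODES];

static void numa_set_distance(unsigned int from, unsigned int to,
                              unsigned int distance)
{
    if ( distance >= NUMA_NO_DISTANCE || distance <= NUMA_DISTANCE_UDF_MAX ||
         (from == to && distance != NUMA_LOCAL_DISTANCE) )
    {
        printf("NUMA: invalid distance: from=%u to=%u distance=%u\n",
               from, to, distance);
        /* Keep the diagonal well formed even on bad input, rather than
         * leaving it at 0 / NUMA_NO_DISTANCE. */
        if ( from == to )
            node_distance_map[from][to] = NUMA_LOCAL_DISTANCE;
        return;
    }

    node_distance_map[from][to] = distance;
}
```

With this variant a bogus firmware-provided node-local distance is still
rejected with a warning, but the distance matrix stays self-consistent on
its diagonal for present nodes.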


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 09:49:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 09:49:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525953.817501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prFIA-0001uv-CG; Tue, 25 Apr 2023 09:48:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525953.817501; Tue, 25 Apr 2023 09:48:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prFIA-0001uo-99; Tue, 25 Apr 2023 09:48:58 +0000
Received: by outflank-mailman (input) for mailman id 525953;
 Tue, 25 Apr 2023 09:48:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1xGP=AQ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prFI9-0001ud-6Z
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 09:48:57 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0625.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 640d4e6d-e34e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 11:48:54 +0200 (CEST)
Received: from AS9PR04CA0071.eurprd04.prod.outlook.com (2603:10a6:20b:48b::20)
 by AS2PR08MB9272.eurprd08.prod.outlook.com (2603:10a6:20b:59b::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 09:48:48 +0000
Received: from AM7EUR03FT049.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:48b:cafe::46) by AS9PR04CA0071.outlook.office365.com
 (2603:10a6:20b:48b::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34 via Frontend
 Transport; Tue, 25 Apr 2023 09:48:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT049.mail.protection.outlook.com (100.127.140.234) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.20 via Frontend Transport; Tue, 25 Apr 2023 09:48:48 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Tue, 25 Apr 2023 09:48:47 +0000
Received: from 878cdd0a9ec5.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B3111A1C-299D-4D12-A4EA-E5A447026703.1; 
 Tue, 25 Apr 2023 09:48:37 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 878cdd0a9ec5.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 25 Apr 2023 09:48:37 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS2PR08MB9451.eurprd08.prod.outlook.com (2603:10a6:20b:5e9::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 09:48:35 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 09:48:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 640d4e6d-e34e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z28GOUe5s7WC0tnX/8jCpcrjDY2WrCcBIGidUSVw+hg=;
 b=gSP5S4GGO8zPE0j6pjlZzHsyKihPosHxlCElzfpV7EksXYvMzxb2s8rKOmEjlVzpd372uK2GJn7nCRa2KyMMpnFBoCpgqoLNTjTrebS6aIgeUcHSeRgSPb/2CzMnhfTLp1QP8yxn3jM1f3fds9nfa+5/rOGuPPA4vUZXScEmNyA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kd3wLNk2Z+t2oguNVDoQEBQVXi8vKBCWz9WYTTg4euih07n7Za/qdH+dirTAG4afB6MSQjVuStchziR8LI1iQI82fvanZE2FNimuZjzg9G5QxZVxqOQjRl88iXF7bhoihUqc7jaEGQRjET4sYyuPvEt9fOIKykk1vdFzh5baJoF37e7uGxinhSfhKd7OlJhT0kQhzfDFee6hbWMC7+H9viJC9S0wPAWSEP4Moa9JVKVNu4dU66Ac4XXRAVH41lgyPJ+mqy3HHmt/UbPQzRDzd4WrPJx2stT8ABtx3trsbjSDV3kE/lX02/4KX0XenR7j/CrYi/Qb5ICFFV5hbvdSPw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Z28GOUe5s7WC0tnX/8jCpcrjDY2WrCcBIGidUSVw+hg=;
 b=SAq/U7n652t1JTixnvOpsaG1yheqZ+ByC3ddWqC8UKtC7qdabMi16uoRGsXTZf+7wS3xqtkqVtKzHuUiE34sNNAa5xyTknaAPhfbuaRrOuM1zFWsrw6Nt9L7+PUbljfXWLEgwZAbtOfelVHmn0a8aiN52IHKLOOBwSgKZbJjRGNafHf5nAeIjBAbD53XnWodpVeSZuHbXlfsW2JDTHoxNSQ8+o1joTRGB8DM9Yp79Iu/EGtq+OO79MMdCiIR865kwwIssRxtBtlGvUjr6Qk4QLGPXI1Vmq92DNtoCHHEDFSB8NcuwTpSjAwuLetqoOS4WLC1Jo8vO83+3E8RENXFnQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z28GOUe5s7WC0tnX/8jCpcrjDY2WrCcBIGidUSVw+hg=;
 b=gSP5S4GGO8zPE0j6pjlZzHsyKihPosHxlCElzfpV7EksXYvMzxb2s8rKOmEjlVzpd372uK2GJn7nCRa2KyMMpnFBoCpgqoLNTjTrebS6aIgeUcHSeRgSPb/2CzMnhfTLp1QP8yxn3jM1f3fds9nfa+5/rOGuPPA4vUZXScEmNyA=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Wei
 Liu <wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 03/17] xen/arm: implement node distance helpers for Arm
Thread-Topic: [PATCH v4 03/17] xen/arm: implement node distance helpers for
 Arm
Thread-Index: AQHZd0uatmvJ8yLEu0+BYdIuaJgbea87s6sAgAANHMCAAAVQAIAAAOFg
Date: Tue, 25 Apr 2023 09:48:34 +0000
Message-ID:
 <AS8PR08MB79910D85653598521265BC8992649@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-4-Henry.Wang@arm.com>
 <7d2c221b-745b-109e-af1f-2b78504b2e0e@suse.com>
 <AS8PR08MB799137BF5927B402E7B0069392649@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <426a6ed3-f0fb-afa5-c685-4f8cdb0d2ff8@suse.com>
In-Reply-To: <426a6ed3-f0fb-afa5-c685-4f8cdb0d2ff8@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: C724B423BFFE7D499DCA6C081B2C5B36.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS2PR08MB9451:EE_|AM7EUR03FT049:EE_|AS2PR08MB9272:EE_
X-MS-Office365-Filtering-Correlation-Id: d2871352-fb7a-40dc-fd8a-08db45724487
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 86Qa6v45eKPdi670Xt+hUDIMpkx56c7gllpyzq4OC0wwe09BDCk78efSSUSB7gX87Wq+HgcOXZK78RyQZNh1duOmlVvvDP0Vcyaafzr6ErR6ASjLphw5gczAESiSgSfFyc1boqnD9+uIOdf5zjLoj64XAXyOyj/9DgX7ep4c8gWP4wyzW++Y6p7S9O8yqy1RfQoiJz/caPuZDeBFKHQBPXEvMgVKnKQ0bqzKHYlaTTHt+bWAuwCWuZCdpFJ1E8nJWPJwsEd35BGnFk/VopXCwzj247f3FrvqKj2riTzJSTmrWQyUOXpT1z00k1hLydaQiP20i+t3dTeUlKuWzoJKqQWgTQMyhWe5cmCGVxwOF5SULlAjiBxty0VXwVWsHfJYbJVMtqjKPCbG4dTa+2sY8ZjjgGWHNbKAkgASAnUSDfYyiVldNHIRM2BbOj+6Wb9aRsAyS5YQ9LrqFLn2VPhD/gMVuXqFph1syRUER+xnm2YqmNvuGhbZSdwI79jgP67+yLa9jL8anHiBJWF9tFXbI4JQCabVQRngS+81O6P1pHYJSIAVYr8KT5Mxz8E7zIRosIx8mrY2c7Hx+Hf8yh37d4UuTb69e1bauR+wnaqV8ty9+Cg/qjm9bxP/a58KWOEI
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(376002)(396003)(366004)(39860400002)(451199021)(54906003)(38070700005)(478600001)(76116006)(316002)(66946007)(6916009)(64756008)(66446008)(66476007)(66556008)(55016003)(122000001)(4326008)(41300700001)(2906002)(8936002)(8676002)(52536014)(5660300002)(38100700002)(186003)(86362001)(26005)(33656002)(9686003)(53546011)(6506007)(7696005)(71200400001)(83380400001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9451
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT049.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f4dbdced-cb5a-48c4-f862-08db45723cad
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	POWADSoQ13QNf1nFYS61oMyEk/GHwHBjNTbLEROHBnBU0W4vj9deprpjaF3GeSFbJZVNkFBu70uaQxgB0Bx2ynPh8LyisW4v1Z4dJ5BPlVl75adY2CL6A1Qs543taNh7xypEGHMHvk/Zhb4IBrdfYfSQzaT22Vlp0JES3962SaovwTPje1JLwbbb/iLXqfT9mLSaW93hHi39DwV1aGfo4PPTYZ0r05b7mBem2NTLvU6MqI2lxDiGpRFgLyrcYSSFCZvZOcOQuBlrn+4ZmWRoNYRtE2KrW9XiC9VkxDhYC3MPXj78j3qVdqT6SIR9tQF6wx38MS+sR/kHjG1kgRkqAFSFOAInwNX+qF2oXAqOmxjtY17Zsba/H7J9ddfdRrcJXVs3UlZ1zKnFDR2ZRlGtEZIc3bEE3QvFjL2ue44foi2oJaXyjTSLhJzNJY6dyG7W6X6itrHmYL0MsvhUp3ee10TNZhF76NWBRq6N8jYOxhpBVe2QUXuuQMnTvRDAS+wn4sBD3vJ3YgQgqpI+BTCM6BO7iLJwNQ+ptBbc8CuuD4fVKYCjYc/3BC2Gg8sVBhLd95MFGxDabtClb+Z/6/CKMX1YIKjlHJpefJ7tY8vE4iA6x4BgqgrsILfQ7ZjALix6EAi1QK2cEJ/T8BwU2oNe8sgCS40AUgnBFfwjkO1jtcVfSbwfuNkHQDdd8CNiJbk0lk6yhGtPEkQuq9f3vvuKsg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(39860400002)(136003)(396003)(451199021)(40470700004)(36840700001)(46966006)(336012)(356005)(82740400003)(81166007)(6506007)(9686003)(53546011)(26005)(55016003)(40480700001)(186003)(8936002)(83380400001)(47076005)(36860700001)(33656002)(2906002)(6862004)(8676002)(52536014)(5660300002)(40460700003)(478600001)(54906003)(7696005)(316002)(4326008)(70586007)(41300700001)(70206006)(86362001)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 09:48:48.0783
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d2871352-fb7a-40dc-fd8a-08db45724487
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT049.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9272

SGkgSmFuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IEZyb206IEphbiBCZXVs
aWNoIDxqYmV1bGljaEBzdXNlLmNvbT4NCj4gU3ViamVjdDogUmU6IFtQQVRDSCB2NCAwMy8xN10g
eGVuL2FybTogaW1wbGVtZW50IG5vZGUgZGlzdGFuY2UgaGVscGVycyBmb3INCj4gQXJtDQo+IA0K
PiBPbiAyNS4wNC4yMDIzIDExOjMxLCBIZW5yeSBXYW5nIHdyb3RlOg0KPiA+PiAtLS0tLU9yaWdp
bmFsIE1lc3NhZ2UtLS0tLQ0KPiA+PiBGcm9tOiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5j
b20+DQo+ID4+DQo+ID4+IE9uIDI1LjA0LjIwMjMgMDk6NTYsIEhlbnJ5IFdhbmcgd3JvdGU6DQo+
ID4+PiArICAgIC8qIE5VTUEgZGVmaW5lcyBOVU1BX05PX0RJU1RBTkNFIGFzIHVucmVhY2hhYmxl
IGFuZCAwLTkgYXJlDQo+ID4+IHVuZGVmaW5lZCAqLw0KPiA+Pj4gKyAgICBpZiAoIGRpc3RhbmNl
ID49IE5VTUFfTk9fRElTVEFOQ0UgfHwgZGlzdGFuY2UgPD0NCj4gPj4gTlVNQV9ESVNUQU5DRV9V
REZfTUFYIHx8DQo+ID4+PiArICAgICAgICAgKGZyb20gPT0gdG8gJiYgZGlzdGFuY2UgIT0gTlVN
QV9MT0NBTF9ESVNUQU5DRSkgKQ0KPiA+Pj4gKyAgICB7DQo+ID4+PiArICAgICAgICBwcmludGso
S0VSTl9XQVJOSU5HDQo+ID4+PiArICAgICAgICAgICAgICAgIk5VTUE6IGludmFsaWQgZGlzdGFu
Y2U6IGZyb209JSJQUkl1OCIgdG89JSJQUkl1OCINCj4gPj4gZGlzdGFuY2U9JSJQUkl1MzIiXG4i
LA0KPiA+Pj4gKyAgICAgICAgICAgICAgIGZyb20sIHRvLCBkaXN0YW5jZSk7DQo+ID4+PiArICAg
ICAgICByZXR1cm47DQo+ID4+PiArICAgIH0NCj4gPj4NCj4gPj4gSSBhcHByZWNpYXRlIHRoZSBj
aGVja2luZyB0aGF0IG5vZGUtbG9jYWwgcmVmZXJlbmNlcyBhcmUNCj4gPj4gTlVNQV9MT0NBTF9E
SVNUQU5DRSwNCj4gPj4gYnV0IGlmIHRoZXkncmUgd3JvbmdseSBwYXNzZWQgaW50byBoZXJlLCBz
aG91bGRuJ3QgdGhlIHJlc3VsdGluZyBhcnJheSBzdGlsbA0KPiA+PiBoYXZlIE5VTUFfTE9DQUxf
RElTVEFOQ0Ugb24gaXRzIGRpYWdvbmFsLCBhdCBsZWFzdCBhcyBmYXIgYXMgcHJlc2VudA0KPiBu
b2Rlcw0KPiA+PiBnbz8NCj4gPg0KPiA+IEFwb2xvZ2llcyBpbiBhZHZhbmNlIHRvIGFzayBtb3Jl
IHNwZWNpZmljIGRldGFpbHMgZnJvbSB5b3UgYXMgSSBhbSBub3Qgc3VyZQ0KPiA+IGlmIEkgY2Fu
IGNvcnJlY3RseSB1bmRlcnN0YW5kIHRoZSAiaWYgdGhleSdyZSB3cm9uZ2x5IHBhc3NlZCBpbnRv
IGhlcmUiIGNhc2UuDQo+IERvIHlvdQ0KPiA+IG1lYW4gd2UgYXJlIGFsd2F5cyBndWFyYW50ZWVk
IHRoYXQgaWYgZnJvbSA9PSB0bywgdGhlIGRpc3RhbmNlIHdpbGwgYWx3YXlzDQo+IGJlDQo+ID4g
TlVNQV9MT0NBTF9ESVNUQU5DRSBzbyB0aGUgKGZyb20gPT0gdG8gJiYgZGlzdGFuY2UgIT0NCj4g
TlVNQV9MT0NBTF9ESVNUQU5DRSkNCj4gPiBjaGVjayBpcyByZWR1bmRhbnQgYW5kIGNhbiBiZSBy
ZW1vdmVkPyBUaGFua3MuDQo+IA0KPiBJdCdzIGEgbGl0dGxlIG9kZCB0aGF0IHlvdSBhc2sgbWU6
IEl0IGlzIHlvdXIgY29kZSB3aGljaCBpbnNpc3RzIG9uDQo+IE5VTUFfTE9DQUxfRElTVEFOQ0Ug
d2hlbiBmcm9tID09IHRvLiBJZiB5b3UgaW5zaXN0IG9uIHlvdXIgY2FsbGVyIHRvIHBhc3MNCj4g
b25seSB0aGlzIG9uZSB2YWx1ZSwgdGhlbiBJIHRoaW5rIGl0IGlzIG9ubHkgbG9naWNhbCB0byBh
bHNvIHNldCB0aGUNCj4gcmVzcGVjdGl2ZSB0YWJsZSBlbnRyaWVzIHRvIHRoYXQgdmFsdWUgKHJh
dGhlciB0aGFuIGxlYXZpbmcgdGhlbSBhdA0KPiBOVU1BX05PX0RJU1RBTkNFKS4NCg0KSSB0aGlu
ayBJIHVuZGVyc3RhbmQgd2hhdCBkbyB5b3UgbWVhbiBub3cuIEkgd2FzIG9ubHkgY2hlY2tpbmcg
dGhpcyBzcGVjaWZpYw0KcGF0Y2ggc28gZGlkbid0IG1ha2UgdGhlIGNvbm5lY3Rpb24gd2l0aCB0
aGUgbGF0dGVyIHBhdGNoZXMuDQoNCktpbmQgcmVnYXJkcywNCkhlbnJ5DQoNCj4gDQo+IEphbg0K


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 12:03:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 12:03:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.525998.817528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prHO6-00086m-CK; Tue, 25 Apr 2023 12:03:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 525998.817528; Tue, 25 Apr 2023 12:03:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prHO6-00086f-9F; Tue, 25 Apr 2023 12:03:14 +0000
Received: by outflank-mailman (input) for mailman id 525998;
 Tue, 25 Apr 2023 12:03:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ShTu=AQ=xenbits.xen.org=julieng@srs-se1.protection.inumbo.net>)
 id 1prHO4-0007lz-LW
 for xen-devel@lists.xen.org; Tue, 25 Apr 2023 12:03:12 +0000
Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 247efbca-e361-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 14:03:09 +0200 (CEST)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julieng@xenbits.xen.org>)
 id 1prHNo-0002qJ-6m; Tue, 25 Apr 2023 12:02:56 +0000
Received: from julieng by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <julieng@xenbits.xen.org>)
 id 1prHNo-0005by-4j; Tue, 25 Apr 2023 12:02:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 247efbca-e361-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=ibj/UWzx5+E1wzwy1jgTCdemR93SZRFrGnLb7SZfK7o=; b=vnYZJVaE4MkyrFov4K7A4irp1b
	80DgsXynVJoKBr13ljg7brKClUpR2FxOIqLU+DhZCv38d+6+feq+xArL67pLgNYS4fm2/BjVFFEE4
	RRKv+tRFOcE8y9MleNpUcQZDu7rMQHclo59cnQ+AgDZU4LVZZdAY1MU8XgdF4R5ZveBQ=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 430 v2 (CVE-2022-42335) - x86 shadow paging
 arbitrary pointer dereference
Message-Id: <E1prHNo-0005by-4j@xenbits.xenproject.org>
Date: Tue, 25 Apr 2023 12:02:56 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2022-42335 / XSA-430
                               version 2

             x86 shadow paging arbitrary pointer dereference

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

In environments where host assisted address translation is necessary
but Hardware Assisted Paging (HAP) is unavailable, Xen will run guests
in so called shadow mode.  Due to too lax a check in one of the hypervisor
routines used for shadow page handling, it is possible for a guest with a
PCI device passed through to cause the hypervisor to access an arbitrary
pointer partially under guest control.
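
The guard restored by the attached xsa430.patch can be sketched in a
hypothetical, self-contained form.  The p2m_type_t values and the
p2m_is_valid / p2m_is_grant / mfn_valid stubs below are illustrative
stand-ins, not Xen's real definitions; the key point is the third clause,
which rejects entries whose old MFN has no backing page before any shadow
removal runs.

```c
/* Hypothetical, self-contained sketch of the XSA-430 guard in
 * sh_unshadow_for_p2m_change(); the types and predicates below are
 * illustrative stubs, not Xen's real definitions. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define _PAGE_PRESENT 0x1
#define INVALID_MFN   (~0ULL)

typedef enum {
    p2m_invalid, p2m_ram_rw, p2m_grant_map_rw, p2m_mmio_direct
} p2m_type_t;

static bool p2m_is_valid(p2m_type_t t) { return t == p2m_ram_rw; }
static bool p2m_is_grant(p2m_type_t t) { return t == p2m_grant_map_rw; }
static bool mfn_valid(uint64_t mfn)    { return mfn != INVALID_MFN; }

/* Returns true when the old entry needs unshadow processing.  The
 * !mfn_valid(omfn) clause is the check the patch restores: without it,
 * entries with no backing page reach sh_remove_shadows(). */
static bool needs_processing(unsigned int oflags, p2m_type_t p2mt,
                             uint64_t omfn)
{
    return !( !(oflags & _PAGE_PRESENT) ||
              (!p2m_is_valid(p2mt) && !p2m_is_grant(p2mt)) ||
              !mfn_valid(omfn) );
}
```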

IMPACT
======

Guests running in shadow mode and having a PCI device passed through may be
able to cause Denial of Service and other problems; escalation of privilege
cannot be ruled out.

VULNERABLE SYSTEMS
==================

Only Xen version 4.17 is vulnerable.

Only x86 systems are vulnerable.  The vulnerability can be leveraged only
by HVM guests running with shadow paging and having a PCI device passed
through.

MITIGATION
==========

Not passing through PCI devices to HVM guests will avoid the vulnerability.

Running HVM guests only in HAP (Hardware Assisted Paging) mode will also
avoid the vulnerability.

CREDITS
=======

This issue was discovered by Roger Pau Monné of XenServer.

RESOLUTION
==========

Applying the attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa430.patch           xen-unstable - Xen 4.17.x

$ sha256sum xsa430*
c861cabdf546ec7583f2193f9c4f8a62579047315e5fe9eca3e9e944b67ca852  xsa430.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmRHr/4MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ6UsH/ib0ei76XtojIl9eaNCPoAotcGBXLDQScV133z5e
7UhW3JPUEG79+p22ACL52Km7wVtWwuL5QzbBDJaw47hTD1IwvoOTQ8Dx+KwyZGsK
H8VW8WM70XyqxRJVfA+sEIEfRnxXKfWz6qWV5n2085XzFFwbF9c+ZZ6NafGv/Jd3
75eUwyGaR0o4YEnzKpLzqYFihK56YyJmZ0+rdYYydHKUy+oVcWjrNEh41Xa6lCJX
OdZ60inTu8rizItE+xEsKLatvoKVrO9q/zhAtLm+iWldf8PTgY9tq4S89DRMD/BN
uYIAL1xBCS2HC/IyUXI63PMwHg6fYzq+0JLjtYV0IYDfYE8=
=tInZ
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa430.patch"
Content-Disposition: attachment; filename="xsa430.patch"
Content-Transfer-Encoding: base64

RnJvbSA1N2IzYTJhY2U1YzRhNzgxMThiMzcyYzk1ZjY5YWY0ZjA1ODViNDhk
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBSb2dlciBQYXUgTW9u
bmUgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpEYXRlOiBNb24sIDIwIE1hciAy
MDIzIDEyOjA4OjUyICswMTAwClN1YmplY3Q6IFtQQVRDSF0geDg2L3NoYWRv
dzogcmVzdG9yZSBkcm9wcGVkIGNoZWNrIGluCiBzaF91bnNoYWRvd19mb3Jf
cDJtX2NoYW5nZSgpCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHlwZTog
dGV4dC9wbGFpbjsgY2hhcnNldD1VVEYtOApDb250ZW50LVRyYW5zZmVyLUVu
Y29kaW5nOiA4Yml0CgpBcyBhIHJlc3VsdCBvZiAyNDE3MDJlMDY0NjA0ZGJi
M2UwZDliNzMxYWE4ZjQ1YmU0NDgyNDNiIHRoZQptZm5fdmFsaWQoKSBjaGVj
ayBpbiBzaF91bnNoYWRvd19mb3JfcDJtX2NoYW5nZSgpIHdhcyBsb3N0LiAg
VGhhdAphbGxvd3Mgc2hfcmVtb3ZlX3NoYWRvd3MoKSB0byBiZSBjYWxsZWQg
d2l0aCBnZm5zIHRoYXQgaGF2ZSBubyBiYWNraW5nCnBhZ2UsIGNhdXNpbmcg
YW4gQVNTRVJUIHRvIHRyaWdnZXIgaW4gZGVidWcgYnVpbGRzIG9yIGRlcmVm
ZXJlbmNpbmcgYW4KYXJiaXRyYXJ5IHBvaW50ZXIgcGFydGlhbGx5IHVuZGVy
IGd1ZXN0IGNvbnRyb2wgaW4gbm9uLWRlYnVnIGJ1aWxkczoKClJJUDogICAg
ZTAwODpbPGZmZmY4MmQwNDAyZGNmMmM+XSBzaF9yZW1vdmVfc2hhZG93cysw
eDE5Zi8weDcyMgpSRkxBR1M6IDAwMDAwMDAwMDAwMTAyNDYgICBDT05URVhU
OiBoeXBlcnZpc29yIChkMHYyKQpbLi4uXQpYZW4gY2FsbCB0cmFjZToKICAg
WzxmZmZmODJkMDQwMmRjZjJjPl0gUiBzaF9yZW1vdmVfc2hhZG93cysweDE5
Zi8weDcyMgogICBbPGZmZmY4MmQwNDAyZTI4ZjQ+XSBGIGFyY2gveDg2L21t
L3NoYWRvdy9odm0uYyNzaF91bnNoYWRvd19mb3JfcDJtX2NoYW5nZSsweGFi
LzB4MmI3CiAgIFs8ZmZmZjgyZDA0MDMxMTkzMT5dIEYgYXJjaC94ODYvbW0v
cDJtLXB0LmMjd3JpdGVfcDJtX2VudHJ5KzB4MTliLzB4NGQzCiAgIFs8ZmZm
ZjgyZDA0MDMxMzFiMj5dIEYgYXJjaC94ODYvbW0vcDJtLXB0LmMjcDJtX3B0
X3NldF9lbnRyeSsweDY3Yi8weGE4ZQogICBbPGZmZmY4MmQwNDAzMDJjOTI+
XSBGIHAybV9zZXRfZW50cnkrMHhjYy8weDE0OQogICBbPGZmZmY4MmQwNDAz
MDVhNTA+XSBGIHVubWFwX21taW9fcmVnaW9ucysweDE3Yi8weDJjOQogICBb
PGZmZmY4MmQwNDAyNDFlNWU+XSBGIGRvX2RvbWN0bCsweDExZjMvMHgxOTVl
CiAgIFs8ZmZmZjgyZDA0MDJjN2UxMD5dIEYgaHZtX2h5cGVyY2FsbCsweDVi
MS8weGEyZAogICBbPGZmZmY4MmQwNDAyYWRjNzI+XSBGIHZteF92bWV4aXRf
aGFuZGxlcisweDEzMGYvMHgxY2Q1CiAgIFs8ZmZmZjgyZDA0MDIwMzYwMj5d
IEYgdm14X2FzbV92bWV4aXRfaGFuZGxlcisweGYyLzB4MjEwCgoqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqClBhbmljIG9uIENQ
VSAxOgpBc3NlcnRpb24gJ21mbl92YWxpZChnbWZuKScgZmFpbGVkIGF0IGFy
Y2gveDg2L21tL3NoYWRvdy9jb21tb24uYzoyMjAzCioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioKCkZpeCB0aGlzIGJ5IHJlc3Rv
cmluZyB0aGUgbWZuX3ZhbGlkKCkgY2hlY2sgaW4Kc2hfdW5zaGFkb3dfZm9y
X3AybV9jaGFuZ2UoKSwgdW5pZnlpbmcgaXQgd2l0aCB0aGUgcmVzdCBvZiB0
aGUgY2hlY2tzCnRoYXQgYXJlIGRvbmUgYXQgdGhlIHN0YXJ0IG9mIHRoZSBm
dW5jdGlvbi4KClRoaXMgaXMgWFNBLTQzMCAvIENWRS0yMDIyLTQyMzM1CgpG
aXhlczogMjQxNzAyZTA2NCAoJ3g4Ni9zaGFkb3c6IHNsaWdodGx5IGNvbnNv
bGlkYXRlIHNoX3Vuc2hhZG93X2Zvcl9wMm1fY2hhbmdlKCkgKHBhcnQgSUkp
JykKU2lnbmVkLW9mZi1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1
QGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxp
Y2hAc3VzZS5jb20+Ci0tLQogeGVuL2FyY2gveDg2L21tL3NoYWRvdy9odm0u
YyB8IDMgKystCiAxIGZpbGUgY2hhbmdlZCwgMiBpbnNlcnRpb25zKCspLCAx
IGRlbGV0aW9uKC0pCgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tL3No
YWRvdy9odm0uYyBiL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvaHZtLmMKaW5k
ZXggODhjM2MxNjMyMi4uNmRlNDc5YzAwOCAxMDA2NDQKLS0tIGEveGVuL2Fy
Y2gveDg2L21tL3NoYWRvdy9odm0uYworKysgYi94ZW4vYXJjaC94ODYvbW0v
c2hhZG93L2h2bS5jCkBAIC04MTQsNyArODE0LDggQEAgc3RhdGljIHZvaWQg
Y2ZfY2hlY2sgc2hfdW5zaGFkb3dfZm9yX3AybV9jaGFuZ2UoCiAKICAgICAv
KiBPbmx5IHByZXZpb3VzbHkgcHJlc2VudCAvIHZhbGlkIGVudHJpZXMgbmVl
ZCBwcm9jZXNzaW5nLiAqLwogICAgIGlmICggIShvZmxhZ3MgJiBfUEFHRV9Q
UkVTRU5UKSB8fAotICAgICAgICAgKCFwMm1faXNfdmFsaWQocDJtdCkgJiYg
IXAybV9pc19ncmFudChwMm10KSkgKQorICAgICAgICAgKCFwMm1faXNfdmFs
aWQocDJtdCkgJiYgIXAybV9pc19ncmFudChwMm10KSkgfHwKKyAgICAgICAg
ICFtZm5fdmFsaWQob21mbikgKQogICAgICAgICByZXR1cm47CiAKICAgICBz
d2l0Y2ggKCBsZXZlbCApCi0tIAoyLjQwLjAKCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 13:53:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 13:53:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526083.817555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prJ60-0003v9-Km; Tue, 25 Apr 2023 13:52:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526083.817555; Tue, 25 Apr 2023 13:52:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prJ60-0003v2-Hm; Tue, 25 Apr 2023 13:52:40 +0000
Received: by outflank-mailman (input) for mailman id 526083;
 Tue, 25 Apr 2023 13:52:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prJ5z-0003us-99; Tue, 25 Apr 2023 13:52:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prJ5z-00059E-08; Tue, 25 Apr 2023 13:52:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prJ5x-00081t-MP; Tue, 25 Apr 2023 13:52:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prJ5x-00089U-LV; Tue, 25 Apr 2023 13:52:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=p2iLSHIMKzSHkotMr85dDHiNmOdEJ0brQc4E42jKTsE=; b=0RqE0TQZq4zbcU5a7OhKTCUeqG
	RmsrhNMiP5ODQgHQwpMagLh8GW9Bh+ZfRvW3p6VEXc6lwqwV0cCrZD46xd9i7jyif/MV+37CF8LeT
	JS5RIWBS0G4H51QvvaJFZHNSj+6Es5RnHhaMiLfjrZQ2EU7rspCsH6cOCczD8ZdR+QM0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180408-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180408: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ffc3ca75e25024c05bce7afea694a7446e513c03
X-Osstest-Versions-That:
    xen=c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Apr 2023 13:52:37 +0000

flight 180408 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180408/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ffc3ca75e25024c05bce7afea694a7446e513c03
baseline version:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51

Last test of basis   180364  2023-04-21 15:01:51 Z    3 days
Testing same since   180408  2023-04-25 11:02:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c6c8c0808f..ffc3ca75e2  ffc3ca75e25024c05bce7afea694a7446e513c03 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 13:58:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 13:58:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526090.817565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prJBd-0004Xo-8z; Tue, 25 Apr 2023 13:58:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526090.817565; Tue, 25 Apr 2023 13:58:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prJBd-0004Xd-6M; Tue, 25 Apr 2023 13:58:29 +0000
Received: by outflank-mailman (input) for mailman id 526090;
 Tue, 25 Apr 2023 13:58:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3bG=AQ=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1prJBc-0004XW-Ek
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 13:58:28 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4097ac8e-e371-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 15:58:27 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id
 a640c23a62f3a-94ed7e49541so877107166b.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Apr 2023 06:58:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4097ac8e-e371-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682431107; x=1685023107;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=wIWeMc9o17CKQ0OUZe12x31/iPLhRnin8xVXaGqeYpA=;
        b=Rb+VuDNYs28yW0WYOIvdnYfvtJpGDiQK0y4UhkwQfM7O6IgEl/aezgT2nBCEwX9M5x
         6KmaDprPA4SYNUuOQ4I2EalgNmlvSqSBuK7/Ow/JtFJafmnWkXYsWE34vh2eSTX/IHue
         6XLbChAqjIylc0rSDWMoYoQpZARTtW/OSVRCriH2Zde6LHfhwHnFM3GbA3r/BEMLmE3+
         qYBxzM/fqXJQq1w7gRriI92+e02I0I1FJ3e9KsdT4x7389GPLP+NQnJLddKT4fIaNWPp
         WKINBSE/0rfPTdzW9bNygzBT1Dk9GALMY1si3plWgQtueBSXgRvLmIrw+yYdS1oea4HH
         XQaw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682431107; x=1685023107;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=wIWeMc9o17CKQ0OUZe12x31/iPLhRnin8xVXaGqeYpA=;
        b=OPcq2j/OmG8M4/89jnnZTfLqhY8LVpjKHkWzUEfOAtZaSBXLE+Q14BUCXRbEe0AFOE
         ESQG14wicaVZUYov5tzURKZkDIZsniGbLHxQj060JZDZt7ZpsLN4r/rJo3F87ZrMQoDO
         pTl9yJyyD46r6Pdm8Kyt2F39K7R0g195lKegAqU4iqeuCN2HY28hhULWr1JqMvNrLpX+
         AxpinXBFMWWgq4AUjYkk7BgEOi8Zot4V+ySFu3R8RmLYrR1CISKWz0tnKWJwtJxQgFQu
         of4lTekiygVjRofTAhAyF7buCVJLS7Iqj14KATQZIr2TI6IxMQ9CKk4XtPT+lRPASbXC
         PifA==
X-Gm-Message-State: AAQBX9f+cmkEno5amfkT67iXEHca3PRRSSNg1knDQKxK31T6GpXQVE3Z
	xnWoS34cV8I8Te2rAaNFGLtBhD9xTFlWZUhtW10=
X-Google-Smtp-Source: AKy350b5ev7765OJdt+6XmlRn+b+ZUQ5QqXnDgaS/HE/fDBJbyO2HyTg9as9UCV+Xv7FBEO/Bpf0dCYMoxyz+i6gX3Q=
X-Received: by 2002:a17:906:72dc:b0:94c:dcac:4b24 with SMTP id
 m28-20020a17090672dc00b0094cdcac4b24mr13523937ejl.49.1682431106886; Tue, 25
 Apr 2023 06:58:26 -0700 (PDT)
MIME-Version: 1.0
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
 <b01494665d1a8cce5c426be70beca2c519215eca.1682369736.git-series.marmarek@invisiblethingslab.com>
In-Reply-To: <b01494665d1a8cce5c426be70beca2c519215eca.1682369736.git-series.marmarek@invisiblethingslab.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 25 Apr 2023 09:58:15 -0400
Message-ID: <CAKf6xpvg-NQrJhekDdNi+RS0KyDQBtOOWTYmdQCtpdF5Ggfr2g@mail.gmail.com>
Subject: Re: [PATCH 5/6] automation: PCI passthrough tests on ADL hw
To: =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, 
	Stefano Stabellini <sstabellini@kernel.org>, Doug Goldstein <cardoe@cardoe.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi, Marek,

On Mon, Apr 24, 2023 at 4:57 PM Marek Marczykowski-Górecki
<marmarek@invisiblethingslab.com> wrote:
> +    elif [ "$PCIDEV_INTR" = "MSI" ]; then
> +        # depending on the kernel version and domain type, the MSI can be
> +        # marked as '-msi', 'PCI-MSI', or 'PCI-MSI-<SBDF>'; be careful to
> +        # not match -msi-x nor PCI-MSI-X
> +        domU_check="$domU_check
> +grep -- '\\(-msi\\|PCI-MSI\\( \\|-[^X]\\)\\).*eth0' /proc/interrupts
> +"

This will match -msi-x.  Do you want to make the first part "-msi "?
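The over-match is easy to reproduce with a quick sketch. The pattern below is a translation of the grep BRE into Python re syntax, and the /proc/interrupts sample lines are made up for illustration (real formatting varies by kernel version):

```python
import re

# Translation of the BRE from the patch, and the stricter variant with a
# trailing space after "-msi" suggested above.
loose = re.compile(r'(-msi|PCI-MSI( |-[^X])).*eth0')
strict = re.compile(r'(-msi |PCI-MSI( |-[^X])).*eth0')

# Hypothetical /proc/interrupts lines (illustration only).
msi_line = " 35:   0   PCI-MSI 524288-edge      eth0"
msix_line = " 36:   0   xen-dyn-msi-x            eth0"

assert loose.search(msi_line)            # intended match via "PCI-MSI "
assert loose.search(msix_line)           # unintended: "-msi" is a substring of "-msi-x"
assert strict.search(msi_line)           # still matched via "PCI-MSI "
assert strict.search(msix_line) is None  # MSI-X no longer matched
```

Note that grep (like re.search here) matches substrings anywhere in the line, so the bare "-msi" alternative cannot exclude "-msi-x" on its own.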

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 14:39:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 14:39:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526095.817575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prJpG-0000WI-Dd; Tue, 25 Apr 2023 14:39:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526095.817575; Tue, 25 Apr 2023 14:39:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prJpG-0000WB-At; Tue, 25 Apr 2023 14:39:26 +0000
Received: by outflank-mailman (input) for mailman id 526095;
 Tue, 25 Apr 2023 14:39:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHzL=AQ=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1prJpE-0000Vm-VV
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 14:39:25 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f626a355-e376-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 16:39:21 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.west.internal (Postfix) with ESMTP id 91E7E3200CD9;
 Tue, 25 Apr 2023 10:39:17 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Tue, 25 Apr 2023 10:39:18 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 25 Apr 2023 10:39:15 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f626a355-e376-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:message-id:mime-version:reply-to:sender:subject:subject:to:to;
	 s=fm3; t=1682433557; x=1682519957; bh=Yo6puT7IlnChcJRZUA4S2RX7x
	2nQa/b22dTRjpdjRFk=; b=NMR7MnGJfxRxH+qV7woA8jZrxU/29CsoaLANUQE9R
	q9GxQzDWAJgUn2/yv4/POsMjqQttyxGbeaMJE0E1+DCBc+Hm6YK5qunzPz+OsxTE
	x3Py0KfUOkJsKiJUA6pqcESQIaLnWFCioLuhIIa99FiHsdeJOU90kZwg9ZkX6yaz
	wbxllesRpIou9Qhr8ojMIOL03+Ocx6M5eZmoAnK4YmwW5aU++E2j3UWqVss7VGxN
	j+WESSxbllYl6rYaHA3WSnfGBKCXxaf9SBhIvXs/aO5HKL8fE7zEVKE/n00X+U5Y
	wVcmsOSxmx065wSQ72TnJfH9WivdQZ0oLpgdiufyHKESQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:message-id:mime-version:reply-to:sender
	:subject:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm3; t=1682433557; x=1682519957; bh=Y
	o6puT7IlnChcJRZUA4S2RX7x2nQa/b22dTRjpdjRFk=; b=ItE/ETWrUTce9lCu0
	ezIMyA2r0sxzDYNJENwHoiUSYeTbDB4o0j9jKFDzV++YLj6lXqskQ0D5hDN/9UrA
	WAD4xv+JnQrR85fvhsoFtnzcN258z3bSjZvnp8g963odrg8tWrT5in/8PGikYtkg
	38+NY8nwB4X+dpCGr/m/OWFjogsJCTCU3s2AvmCpvQor8mzCYE4ilo/nf9mHsWuP
	AYnppOLLnNCEHt/VDwB/PmSqfLQXj5Fy4Ts5e7n/WOrKaMJLfHjkr6sn0eRAWV1W
	XGYiR1ZAhH2MVJgRt70RiXfJj9LJ5X/5Ag28quxTADlyExb9SvEkSsPfvfGPYasH
	oG52A==
X-ME-Sender: <xms:FOZHZGbcVn6nrVvpFokIbp0HrcT1-AzZhivSqHmYaY5dxSvSUP9Tkw>
    <xme:FOZHZJaX_b4nwMPkPa8xUs3B7UC4ESeNUzl_2ekk4JHV_PvaV_B6YYN7pFJxE6OAC
    Z5MwZpNjLUx2Q>
X-ME-Received: <xmr:FOZHZA-n0vGhwGj2bFlyIHC52qmclhBcwGj6XKiue9Fs-EAMw7gBJ7uRRgZgNoo9WjpIM0cwzlFOveH5mu5y_iLfnUApjzwkCobCaespYAG7KX0fn00F>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeduvddgjeelucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofggtgfgsehtkeertdertdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeelkefh
    udelteelleelteetveeffeetffekteetjeehlefggeekleeghefhtdehvdenucevlhhush
    htvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhes
    ihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:FOZHZIoHgIHcc3co1pbHz82zevfUDsmrSgjM6pK4ydQkgqgfnNyvEg>
    <xmx:FOZHZBr2xAWv3k-xpzziezsP2Ys1qOEyj-0ZirP0QWJt9bwLNhLYCA>
    <xmx:FOZHZGTB1lZTwDn9MOtKq-N4So9JSQ58n1-W9Bp-2SWKrzeqIykQWg>
    <xmx:FeZHZKACaZ9245IKBwyZ2rAGmcMSCLg-HBLBzUgtED9j8x0NQdZ0Fg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] ns16550: enable memory decoding on MMIO-based PCI console card
Date: Tue, 25 Apr 2023 16:39:02 +0200
Message-Id: <20230425143902.142571-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
devices; add setting PCI_COMMAND_MEMORY for MMIO-based UART devices too.
Note that MMIO-based devices in practice need the "pci" sub-option,
otherwise a few parameters are not initialized (including bar_idx,
reg_shift, reg_width etc). The "pci" sub-option is not supposed to be
used with an explicit BDF, so do not key setting PCI_COMMAND_MEMORY on
an explicit BDF being set. Contrary to the IO-based UART case,
pci_serial_early_init() will not attempt to set the BAR0 address, even
if the user provided io_base manually - in most cases those addresses
carry an offset, and the current cmdline syntax doesn't allow expressing
it. Due to this, enable PCI_COMMAND_MEMORY only if uart->bar is already
populated. In a similar spirit, this patch does not support setting BAR0
of the bridge.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
This fixes the issue I was talking about on #xendevel. Thanks Roger for
the hint.

Changes in v2:
 - check if uart->bar instead of uart->io_base
 - move it ahead of !uart->ps_bdf_enable return
 - expand commit message.
---
 xen/drivers/char/ns16550.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 1b21eb93c45f..34231dcb23ea 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -272,7 +272,15 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
 static void pci_serial_early_init(struct ns16550 *uart)
 {
 #ifdef NS16550_PCI
-    if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
+    if ( uart->bar )
+    {
+        pci_conf_write16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
+                                  uart->ps_bdf[2]),
+                         PCI_COMMAND, PCI_COMMAND_MEMORY);
+        return;
+    }
+
+    if ( !uart->ps_bdf_enable )
         return;
 
     if ( uart->pb_bdf_enable )
-- 
2.39.2
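As a rough model of what the new hunk does: pci_conf_write16() and the
SBDF addressing are Xen internals not reproduced here, so the sketch
below stands in a plain dict for the device's config space and a dict
for struct ns16550; only the control flow mirrors the patch:

```python
# PCI command register (offset 0x04 in config space) and its decode bits.
PCI_COMMAND = 0x04
PCI_COMMAND_IO = 0x1       # respond to I/O space accesses
PCI_COMMAND_MEMORY = 0x2   # respond to memory space accesses (MMIO BARs)

config_space = {}  # stand-in for one device's PCI config space

def pci_conf_write16(reg, val):
    config_space[reg] = val & 0xFFFF

def pci_conf_read16(reg):
    return config_space.get(reg, 0)

def pci_serial_early_init(uart):
    # 'uart' is a dict standing in for struct ns16550.
    if uart.get("bar"):
        # MMIO-based UART: enable memory decoding so the BAR responds.
        pci_conf_write16(PCI_COMMAND, PCI_COMMAND_MEMORY)
        return
    if not uart.get("ps_bdf_enable"):
        return
    # ... IO-based path (PCI_COMMAND_IO / BAR0 setup) elided ...

pci_serial_early_init({"bar": 0xFE000000})  # hypothetical MMIO BAR
cmd = pci_conf_read16(PCI_COMMAND)
assert cmd & PCI_COMMAND_MEMORY
assert not (cmd & PCI_COMMAND_IO)
```

This also shows why the check is keyed on uart->bar rather than on an
explicit BDF: the memory-decode write only makes sense once a BAR value
is known.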



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 15:05:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 15:05:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526102.817584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prKDt-0003rz-Dm; Tue, 25 Apr 2023 15:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526102.817584; Tue, 25 Apr 2023 15:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prKDt-0003rs-B7; Tue, 25 Apr 2023 15:04:53 +0000
Received: by outflank-mailman (input) for mailman id 526102;
 Tue, 25 Apr 2023 15:04:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prKDr-0003ri-LE; Tue, 25 Apr 2023 15:04:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prKDr-0006rW-C3; Tue, 25 Apr 2023 15:04:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prKDq-0001oB-Tn; Tue, 25 Apr 2023 15:04:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prKDq-000229-TL; Tue, 25 Apr 2023 15:04:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qp+2+oWaz9XrE0zXmgywxqgP6er8UD+P7Uh+eiuC+KA=; b=JrBAsAY3TVoVQIdfqWjT4p4kXd
	tn6wm4oqhnh5WTGnb4VVC0+9nAiFwUFRyeSbH6vXrpiInc5jZihgzLhTE/FmBtmpHMnOkdkXRK7iA
	PqqSuOc+cVgQj8A9BxukLzqhGQ1RWndI1WqHxs0wn5mCfEhZ5KYnnCMagSOfm/dWB0OM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180409-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180409: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=18f463edbaa911b6e2c32a3e783bf6c2c9997512
X-Osstest-Versions-That:
    ovmf=6127bf1f30a97e8135905d921d7745eb13554815
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Apr 2023 15:04:50 +0000

flight 180409 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180409/

Perfect :-)
All tests in this flight passed as required

version targeted for testing:
 ovmf                 18f463edbaa911b6e2c32a3e783bf6c2c9997512
baseline version:
 ovmf                 6127bf1f30a97e8135905d921d7745eb13554815

Last test of basis   180405  2023-04-25 07:10:43 Z    0 days
Testing same since   180409  2023-04-25 11:10:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pierre Gondois <pierre.gondois@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6127bf1f30..18f463edba  18f463edbaa911b6e2c32a3e783bf6c2c9997512 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 15:17:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 15:17:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526112.817595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prKPf-0005TX-Ml; Tue, 25 Apr 2023 15:17:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526112.817595; Tue, 25 Apr 2023 15:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prKPf-0005TQ-Jg; Tue, 25 Apr 2023 15:17:03 +0000
Received: by outflank-mailman (input) for mailman id 526112;
 Tue, 25 Apr 2023 15:17:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZje=AQ=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1prKPd-0005TK-PY
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 15:17:02 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7eab::60a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 388ac32f-e37c-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 17:16:59 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by DM6PR12MB4122.namprd12.prod.outlook.com (2603:10b6:5:214::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 15:16:55 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94%6]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 15:16:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 388ac32f-e37c-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=De0SWQ46NhAkP38E+724XomMfKf+K4Y5Ic6upz5SV9EVzUXGZaOCgcbtMX65ZtOpoWlCpuq/Yk9PIDj5NvdCkjnlcFWVeL0TjxwYfaFij9761bRpAtQctriCTT7i5bcJ65ARoL9xrFg8VSxRQMh8rmp9+e1Q+yx4MKeA4tLtYHPoBCYbBEkgnrhZ+Lt8AG40Cr3UmSnz7tJnkv4H7QICLtdkDsjiz6SDgnxrP9ZJX2lTFlgMrjIlKmGXe0PlLL1eVodh+JH9an4H6vrJvsOpCbBCXeHr7vqaBDuJ33pgJFzrozf2eVn8luw05CdGUrobWDNUHIFCbAt7DgrQ0u55vA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JYGHV/hRW7GqSYNHjy07zg15yi8nYQ0nD7DMX5uMqCI=;
 b=BX+/G/msmIzqmA70coqWGof5Q3mVajR79NW0lfpvXhsVqorbaPq7KXeBoDEiFL7Gt1+MvHufay2PRH/IQweCWyjQ3ndxc2MR04ldQ8rghlkbAKkung05RzxsKDtLwOQc+3FCfHMaYd5vGbeVjw0C8qDM1JR+V2ipaDrop70ZLp54NwdWPr3tpBbepB7+cEi3V9ZEVpVrpUaUpcKxLjsxg6c4XZLsDVLYb0ZufEy3qfxUJEWkawhGGV8Kdv6S6MGTNFNYfsMWhhATxtC/egCYAbx/8cwZuozwNz9BIJOjZVWnTTUAtO+i9YAj2JO8AZxtEGX4Wv5OpxT3stojlT4dDQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JYGHV/hRW7GqSYNHjy07zg15yi8nYQ0nD7DMX5uMqCI=;
 b=ArWoWCS7bcR6y6LlloIg8HqOylsDWyxmcWSeT5cCPLl+o651ipEoDROYIw0JwdHJVstU+8cTaPkRtc6m7MGb2TP+h1DrJVhVMhKHo9z/EKE5QSd2znFpixZnV5eSmhKkuI3GZtxYjQUTn//McuB+BslPi6Yjhq6dn0+uJbFWZcw=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <ad2dec59-d35f-3edd-b928-5005b47ebff7@amd.com>
Date: Tue, 25 Apr 2023 16:16:48 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 05/10] xen/arm: Introduce choice to enable 64/32 bit
 physical addressing
To: Michal Orzel <michal.orzel@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com, julien@xen.org,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-6-ayan.kumar.halder@amd.com>
 <ca8fd34c-3755-161d-38c1-651cf08ef589@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <ca8fd34c-3755-161d-38c1-651cf08ef589@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LNXP265CA0078.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:76::18) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|DM6PR12MB4122:EE_
X-MS-Office365-Filtering-Correlation-Id: 5f0eefc6-38bf-4612-82b0-08db45a01b21
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5f0eefc6-38bf-4612-82b0-08db45a01b21
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 15:16:55.8198
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: j6Oq6Df2FsOD2dgVoKwCgXj49K+QXJReRXVcXOLzyq8ogBwR08ihsrZtSshyFEGXnSMrhh4jxcNbUtfi4y4hWQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4122


On 24/04/2023 13:08, Michal Orzel wrote:
> Hi Ayan,

Hi Michal,

A clarification.

>
> On 13/04/2023 19:37, Ayan Kumar Halder wrote:
>>
>> Some Arm based hardware platforms which does not support LPAE
>> (eg Cortex-R52), uses 32 bit physical addresses.
>> Also, users may choose to use 32 bits to represent physical addresses
>> for optimization.
>>
>> To support the above use cases, we have introduced arch independent
>> configs to choose if the physical address can be represented using
>> 32 bits (PHYS_ADDR_T_32) or 64 bits (!PHYS_ADDR_T_32).
>> For now only ARM_32 provides support to enable 32 bit physical
>> addressing.
>>
>> When PHYS_ADDR_T_32 is defined, PADDR_BITS is set to 32.
>> When PHYS_ADDR_T_32 is not defined for ARM_32, PADDR_BITS is set to 40.
>> When PHYS_ADDR_T_32 is not defined for ARM_64, PADDR_BITS is set to 48.
>> The last two are same as the current configuration used today on Xen.
>>
>> PADDR_BITS is also set to 48 when ARM_64 is defined. The reason being
>> the choice to select ARM_PA_BITS_32/ARM_PA_BITS_40/ARM_PA_BITS_48 is
>> currently allowed when ARM_32 is defined.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>> Changes from -
>> v1 - 1. Extracted from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
>>
>> v2 - 1. Introduced Kconfig choice. ARM_64 can select PHYS_ADDR_64 only whereas
>> ARM_32 can select PHYS_ADDR_32 or PHYS_ADDR_64.
>> 2. For CONFIG_ARM_PA_32, paddr_t is defined as 'unsigned long'.
>>
>> v3 - 1. Allow user to define PADDR_BITS by selecting different config options
>> ARM_PA_BITS_32, ARM_PA_BITS_40 and ARM_PA_BITS_48.
>> 2. Add the choice under "Architecture Features".
>>
>> v4 - 1. Removed PHYS_ADDR_T_64 as !PHYS_ADDR_T_32 means PHYS_ADDR_T_32.
>>
>>   xen/arch/Kconfig                     |  3 +++
>>   xen/arch/arm/Kconfig                 | 37 ++++++++++++++++++++++++++--
>>   xen/arch/arm/include/asm/page-bits.h |  6 +----
>>   xen/arch/arm/include/asm/types.h     |  6 +++++
>>   xen/arch/arm/mm.c                    |  5 ++++
>>   5 files changed, 50 insertions(+), 7 deletions(-)
>>
>> diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
>> index 7028f7b74f..67ba38f32f 100644
>> --- a/xen/arch/Kconfig
>> +++ b/xen/arch/Kconfig
>> @@ -1,6 +1,9 @@
>>   config 64BIT
>>          bool
>>
>> +config PHYS_ADDR_T_32
>> +       bool
>> +
>>   config NR_CPUS
>>          int "Maximum number of CPUs"
>>          range 1 4095
>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>> index 239d3aed3c..3f6e13e475 100644
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -19,13 +19,46 @@ config ARM
>>          select HAS_PMAP
>>          select IOMMU_FORCE_PT_SHARE
>>
>> +menu "Architecture Features"
>> +
>> +choice
>> +       prompt "Physical address space size" if ARM_32
> Why is it protected by ARM_32 but in the next line you add something protected by ARM_64?
> This basically means that for arm64, ARM_PA_BITS_XXX is never defined.

Currently, I intend to support the 32-bit physical address configuration 
for ARM_32 only. So in the case of ARM_32, the user will have a choice 
between 32-bit and 40-bit PA.

So, if it is ok with you, can I remove the below line and "config 
ARM_PA_BITS_48 ..."?

Then ...

>
>> +       default ARM_PA_BITS_48 if ARM_64
>> +       default ARM_PA_BITS_40 if ARM_32
>> +       help
>> +         User can choose to represent the width of physical address. This can
>> +         sometimes help in optimizing the size of image when user chooses a
>> +         smaller size to represent physical address.
>> +
>> +config ARM_PA_BITS_32
>> +       bool "32-bit"
>> +       help
>> +         On platforms where any physical address can be represented within 32 bits,
>> +         user should choose this option. This will help is reduced size of the
>> +         binary.
>> +       select PHYS_ADDR_T_32
>> +       depends on ARM_32
>> +
>> +config ARM_PA_BITS_40
>> +       bool "40-bit"
>> +       depends on ARM_32
>> +
>> +config ARM_PA_BITS_48
>> +       bool "40-bit"
> 40-bit? I think this should be 48-bit.
>
>> +       depends on ARM_48
> What is ARM_48? Shouldn't it be ARM_64?
> And if so, why bother defining it given everything here is protected by ARM_32.
>
>> +endchoice
>> +
>> +config PADDR_BITS
>> +       int
>> +       default 32 if ARM_PA_BITS_32
>> +       default 40 if ARM_PA_BITS_40
>> +       default 48 if ARM_PA_BITS_48 || ARM_64

default 48 if ARM_64

- Ayan

> This reads as if on arm32 we could have 48-bit PA space which is not true (LPAE is 40 bit unless I missed something).
> You could get rid of || ARM_64 if the choice wasn't protected by ARM_32 and fixing ARM_48 to ARM_64.
>
>> +
>>   config ARCH_DEFCONFIG
>>          string
>>          default "arch/arm/configs/arm32_defconfig" if ARM_32
>>          default "arch/arm/configs/arm64_defconfig" if ARM_64
>>
>> -menu "Architecture Features"
>> -
>>   source "arch/Kconfig"
>>
>>   config ACPI
>> diff --git a/xen/arch/arm/include/asm/page-bits.h b/xen/arch/arm/include/asm/page-bits.h
>> index 5d6477e599..deb381ceeb 100644
>> --- a/xen/arch/arm/include/asm/page-bits.h
>> +++ b/xen/arch/arm/include/asm/page-bits.h
>> @@ -3,10 +3,6 @@
>>
>>   #define PAGE_SHIFT              12
>>
>> -#ifdef CONFIG_ARM_64
>> -#define PADDR_BITS              48
>> -#else
>> -#define PADDR_BITS              40
>> -#endif
>> +#define PADDR_BITS              CONFIG_PADDR_BITS
>>
>>   #endif /* __ARM_PAGE_SHIFT_H__ */
>> diff --git a/xen/arch/arm/include/asm/types.h b/xen/arch/arm/include/asm/types.h
>> index e218ed77bd..e3cfbbb060 100644
>> --- a/xen/arch/arm/include/asm/types.h
>> +++ b/xen/arch/arm/include/asm/types.h
>> @@ -34,9 +34,15 @@ typedef signed long long s64;
>>   typedef unsigned long long u64;
>>   typedef u32 vaddr_t;
>>   #define PRIvaddr PRIx32
>> +#if defined(CONFIG_PHYS_ADDR_T_32)
>> +typedef unsigned long paddr_t;
>> +#define INVALID_PADDR (~0UL)
>> +#define PRIpaddr "08lx"
>> +#else
>>   typedef u64 paddr_t;
>>   #define INVALID_PADDR (~0ULL)
>>   #define PRIpaddr "016llx"
>> +#endif
>>   typedef u32 register_t;
>>   #define PRIregister "08x"
>>   #elif defined (CONFIG_ARM_64)
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index b99806af99..6dc37be97e 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -690,6 +690,11 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>>       const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
>>       int rc;
>>
>> +    /*
>> +     * The size of paddr_t should be sufficient for the complete range of
>> +     * physical address.
>> +     */
>> +    BUILD_BUG_ON((sizeof(paddr_t) * 8) < PADDR_BITS);
> Just FYI, there is a macro BITS_PER_BYTE defined in bitops.h that you could use instead of 8.
> Although I'm not sure if this would be better :)
>
>>       BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
>>
>>       if ( frametable_size > FRAMETABLE_SIZE )
>> --
>> 2.17.1
>>
>>
> ~Michal


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 15:53:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 15:53:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526116.817605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prKyP-0001K2-Bx; Tue, 25 Apr 2023 15:52:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526116.817605; Tue, 25 Apr 2023 15:52:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prKyP-0001Ju-9I; Tue, 25 Apr 2023 15:52:57 +0000
Received: by outflank-mailman (input) for mailman id 526116;
 Tue, 25 Apr 2023 15:52:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prKyO-0001Jl-V2; Tue, 25 Apr 2023 15:52:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prKyO-0007no-IV; Tue, 25 Apr 2023 15:52:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prKyO-000346-1n; Tue, 25 Apr 2023 15:52:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prKyO-0006mV-1M; Tue, 25 Apr 2023 15:52:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8Fa27G7rYgecI7Cw5eJJACvxX3l7Pd2g6CMj1KdVLME=; b=bCH75SB/kH5aM6QQXObmEQEY58
	Q7yF+Kwt6Bo6faRTZRmBSs1TjxE8OHJbzT/svlP6GjfSDA7Y7wRRqV/i+/tqzAbxzD2ncY5JXdUgw
	SUKNuQcVVf9pEpftx+tQ9+d1Jgx+ly8w9XIEXVHWrjZTYcomLTABa1+djpVP5Yt1dDLE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180398-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180398: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:xen-install:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ac5f7bf8e208cd7893dbb1a9520559e569a4677c
X-Osstest-Versions-That:
    qemuu=327ec8d6c2a2223b78d311153a471036e474c5c5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Apr 2023 15:52:56 +0000

flight 180398 qemu-mainline real [real]
flight 180410 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180398/
http://logs.test-lab.xenproject.org/osstest/logs/180410/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 180394
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 180394

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm   7 xen-install         fail pass in 180410-retest
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail pass in 180410-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail in 180410 never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 180410 never pass
 test-amd64-i386-xl-shadow     7 xen-install                  fail  like 180394
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180394
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180394
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180394
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180394
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180394
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180394
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180394
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180394
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ac5f7bf8e208cd7893dbb1a9520559e569a4677c
baseline version:
 qemuu                327ec8d6c2a2223b78d311153a471036e474c5c5

Last test of basis   180394  2023-04-24 07:55:31 Z    1 days
Testing same since   180398  2023-04-24 20:10:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Eric Blake <eblake@redhat.com>
  Juan Quintela <quintela@redhat.com>
  Lukas Straub <lukasstraub2@web.de>
  Peter Xu <peterx@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  李皆俊 <a_lijiejun@163.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 754 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 15:54:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 15:54:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526124.817614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prKzp-0001v5-Rm; Tue, 25 Apr 2023 15:54:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526124.817614; Tue, 25 Apr 2023 15:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prKzp-0001uy-P5; Tue, 25 Apr 2023 15:54:25 +0000
Received: by outflank-mailman (input) for mailman id 526124;
 Tue, 25 Apr 2023 15:54:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZje=AQ=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1prKzo-0001t3-Ot
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 15:54:24 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2060c.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::60c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 724705ed-e381-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 17:54:23 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by IA0PR12MB8085.namprd12.prod.outlook.com (2603:10b6:208:400::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 15:54:20 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94%6]) with mapi id 15.20.6319.033; Tue, 25 Apr 2023
 15:54:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 724705ed-e381-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VgRekv/FZYlTcLSNv46Vp+C7C3ezYNt7/cNPPGUZQ1xrhlzsFeZZV397qDnSaX2frak1M0WIBi4w/ZtMvVpvwuQFThVRowG5TBy3RW2Ste3MX6OiSKBslvE9dOP0WSN7Y+mFCNtZFt+sg8gsTd0LHyED1LdDlC9Tbp26frvmSAPOTox7yAapoDZsG6IepjFI8Vh+TS2or1z2QqQC0tiwIg+bfDBSwofsNZm4kmSClGnpVTFACOBrkOwILiv5s7/IkLeZMj0/B8ApTMp18OyiXiHXj84JCpErUBkB3+LDgXsLo25sXXImTrtypumW/MdljuXSfucBtLB2B9i9fyNpiQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=f0vLtKMp4mt3w3BEv9GRZqTyAgH4JH8R8pqx0n6Srmo=;
 b=hXxBNQqHGN5DY7yj7YxuXBBS7KeGXhgZAcs3B5oyrAgvQA/Yc2WcdBRcY6Es3DYyxR/DojNaZYQMqwsRfg7dEm/o2UqhXVAWjEnPRKFygyGLQ+rBsQ/3q7nXLXgtQwunZo45N5VvtCtiKhbEP4x6SSY7delyKnTaTS614Y8UzKDx6KJDfiQaN4lAPf/diE1fWr3WZTdIGTFhFca0q3SjSSfQEyOcOTVgaE2dbbsRVZVuStdtLhACtir754WWeKtIqHsfUeiSdi+x3irImASnpc58gwRy8XyBErcTYekD0XCq6ZE4/Fv6CxndelKdh4RQbUn6P9Vxvwl+qXWnkVu4UA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=f0vLtKMp4mt3w3BEv9GRZqTyAgH4JH8R8pqx0n6Srmo=;
 b=AruoKYNSbJE83WRg88H7s9mN0FfTWYxX0u3iNa9UzechEvEfBNosudkl+qpkEJ95FWRB+Pq8ceGCsx0SS6aORxA25lf6nkoHSYrkjshxIoD/ec+PaAGYEwxC+40J5QF3lUruHGEz57p83Gvs0w9tzJdzDLe0JR6bqQuWWyoAxI0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <db111e27-5fba-37b7-334a-6f9f8447d072@amd.com>
Date: Tue, 25 Apr 2023 16:54:12 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v5 08/10] xen/arm: domain_build: Check if the address fits
 the range of physical address
To: Julien Grall <julien@xen.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-9-ayan.kumar.halder@amd.com>
 <e1769508-287d-a278-56d0-01aacc77b391@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <e1769508-287d-a278-56d0-01aacc77b391@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0184.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a4::9) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|IA0PR12MB8085:EE_
X-MS-Office365-Filtering-Correlation-Id: cae711ec-3054-41af-5d80-08db45a554e4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cae711ec-3054-41af-5d80-08db45a554e4
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 15:54:20.1695
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FGSn2bMzR7pXnuxDaT3v53q14CCtnw5F0Fci1UsbGY2al+J+s3cwh/KKRKPa+KJrTzsaUGSpgm2i8E8e0CWY4g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB8085


On 24/04/2023 09:26, Julien Grall wrote:
> Hi,
Hi Julien,
>
> On 13/04/2023 18:37, Ayan Kumar Halder wrote:
>> handle_pci_range() and map_range_to_domain() take addr and len as
>> uint64_t parameters. Then frame numbers are obtained from addr and len
>> by right shifting with PAGE_SHIFT. The page frame numbers are saved
>> using unsigned long.
>>
>> Now if a 64-bit value is shifted right by PAGE_SHIFT, 52 bits of the
>> result are valid. On a 32-bit system, 'unsigned long' is 32 bits wide.
>> Thus, there is a potential loss of value when the result is stored as
>> 'unsigned long'.
>>
>> To mitigate this issue, we check whether the start and end addresses
>> can be contained within the range of physical addresses supported on
>> the system. If not, an appropriate error is returned.
>>
>> Also, the end address is computed once and reused where required, and
>> u64 is replaced with uint64_t.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>>
>> Changes from :-
>> v1...v4 - NA. New patch introduced in v5.
>>
>>   xen/arch/arm/domain_build.c | 30 +++++++++++++++++++++++-------
>>   1 file changed, 23 insertions(+), 7 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 7d28b75517..b98ee506a8 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -1637,15 +1637,23 @@ out:
>>   }
>>     static int __init handle_pci_range(const struct dt_device_node *dev,
>> -                                   u64 addr, u64 len, void *data)
>> +                                   uint64_t addr, uint64_t len, void *data)
>>   {
>>       struct rangeset *mem_holes = data;
>>       paddr_t start, end;
>>       int res;
>> +    uint64_t end_addr = addr + len - 1;
>
> I find the difference between end and end_addr a bit confusing. How 
> about...
>
>> +
>> +    if ( addr != (paddr_t)addr || end_addr != (paddr_t)end_addr )
>
> ... replace the second part with (((paddr_t)~0 - addr) > len))

It should be

if ( ... || (((paddr_t)~0 - addr) < len) )
{
    /* print error */
}
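The check being converged on here can be sketched as stand-alone C. In this sketch paddr_t is modeled as a 32-bit type and check_pa_range() is a hypothetical helper, not Xen code; note the comparison uses len - 1, since the "< len" form quoted above would also reject a range ending exactly at (paddr_t)~0.

```c
#include <stdint.h>

/* Model a 32-bit physical address type, as on an Arm32 system. */
typedef uint32_t paddr_t;

/*
 * Hypothetical stand-alone helper (not Xen code): return 0 when the
 * byte range [addr, addr + len - 1] fits in paddr_t, -1 otherwise.
 * It combines the two conditions from the thread: a truncation check
 * on addr and an overflow check on the end address. Subtracting addr
 * from (paddr_t)~0 avoids computing addr + len, which could itself
 * wrap around in 64 bits.
 */
static int check_pa_range(uint64_t addr, uint64_t len)
{
    if ( addr != (paddr_t)addr )        /* addr must survive the cast */
        return -1;

    /* The room above addr must cover len - 1 more bytes. */
    if ( len == 0 || (uint64_t)(paddr_t)~0 - addr < len - 1 )
        return -1;

    return 0;
}
```

With these definitions, check_pa_range(0xFFFFF000, 0x1000) accepts the last 4 KiB page of the 32-bit space, while check_pa_range(0xFFFFF000, 0x2000) and any addr above 32 bits are rejected.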

>
>> +    {
>> +        printk(XENLOG_ERR "addr (0x%"PRIx64") or end_addr (0x%"PRIx64") exceeds the maximum allowed width (%d bits) for physical address\n",
>
> In addition to what Michal says, I would replace the "addr .... 
> end_addr" with "[start, end]" to further reduce the message.
>
> Also, please use %u rather than %d as the number of bits cannot be 
> negative.

I think this should be fine?

printk(XENLOG_ERR "[%#"PRIx64", %#"PRIx64"] exceeds the maximum allowed PA width (%u bits)\n", start, end, CONFIG_PADDR_BITS);

- Ayan

>
>> +               addr, end_addr, CONFIG_PADDR_BITS);
>> +        return -ERANGE;
>> +    }
>>         start = addr & PAGE_MASK;
>> -    end = PAGE_ALIGN(addr + len);
>> -    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end - 1));
>> +    end = PAGE_ALIGN(end_addr);
>> +    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end));
>
> And this will not need to be changed.
>
>>       if ( res )
>>       {
>>           printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
>> @@ -2330,11 +2338,19 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>>   }
>>     int __init map_range_to_domain(const struct dt_device_node *dev,
>> -                               u64 addr, u64 len, void *data)
>> +                               uint64_t addr, uint64_t len, void *data)
>>   {
> My comment on the previous function applies for this one as well.
>
> Cheers,
>
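The loss of value the commit message above describes is easy to demonstrate in isolation. The helper names below are illustrative only, with 'unsigned long' on a 32-bit system modeled explicitly as a 32-bit type:

```c
#include <stdint.h>

#define PAGE_SHIFT 12

/* A 64-bit shift keeps every significant bit of the frame number. */
static uint64_t pfn_wide(uint64_t addr)
{
    return addr >> PAGE_SHIFT;
}

/*
 * Storing the shifted result in a 32-bit variable, which is what
 * 'unsigned long' is on a 32-bit system, silently drops the upper
 * bits of the frame number.
 */
static uint32_t pfn_narrow(uint64_t addr)
{
    return (uint32_t)(addr >> PAGE_SHIFT);
}
```

For a 52-bit address such as 0xF000000000000, pfn_wide() returns 0xF000000000 while pfn_narrow() returns 0: exactly the silent truncation the patch guards against.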


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 16:22:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 16:22:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526132.817628 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLRA-0005rJ-0z; Tue, 25 Apr 2023 16:22:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526132.817628; Tue, 25 Apr 2023 16:22:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLR9-0005rC-UT; Tue, 25 Apr 2023 16:22:39 +0000
Received: by outflank-mailman (input) for mailman id 526132;
 Tue, 25 Apr 2023 16:22:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prLR8-0005r2-UK; Tue, 25 Apr 2023 16:22:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prLR8-0000gA-Ov; Tue, 25 Apr 2023 16:22:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prLR8-0003uL-7Q; Tue, 25 Apr 2023 16:22:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prLR8-0008C1-6y; Tue, 25 Apr 2023 16:22:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=F068civ1S7z56GyVHstjD96oKkpPYI1kHu6RJIeOqns=; b=vFpcqgnrtI3JL/JyE5X+/KOBD7
	k1ptKaQ4QpRd6eG1+WXVOHYiLq6Oobl5xK7ugoGitTatjfCRFiPnRmh3DdQXb+3BFIJJts270EG5/
	0JD1oKZgkcnSCWZvJQGd+a6rLMKQpNFBs5SkGOaojXPeY5jMXcRrQ/dvx4XnzQcO25+8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180411-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180411: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f6c3cb21628f7bed73cb992da400f6b36630f290
X-Osstest-Versions-That:
    xen=ffc3ca75e25024c05bce7afea694a7446e513c03
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Apr 2023 16:22:38 +0000

flight 180411 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180411/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f6c3cb21628f7bed73cb992da400f6b36630f290
baseline version:
 xen                  ffc3ca75e25024c05bce7afea694a7446e513c03

Last test of basis   180408  2023-04-25 11:02:07 Z    0 days
Testing same since   180411  2023-04-25 14:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ffc3ca75e2..f6c3cb2162  f6c3cb21628f7bed73cb992da400f6b36630f290 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 16:29:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 16:29:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526138.817638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLXp-0006VQ-Oc; Tue, 25 Apr 2023 16:29:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526138.817638; Tue, 25 Apr 2023 16:29:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLXp-0006VJ-Lt; Tue, 25 Apr 2023 16:29:33 +0000
Received: by outflank-mailman (input) for mailman id 526138;
 Tue, 25 Apr 2023 16:29:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prLXp-0006VD-5Z
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 16:29:33 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5a9a9ab9-e386-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 18:29:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-111-R8bTjhhCPLi9WY5oOQOfhw-1; Tue, 25 Apr 2023 12:29:27 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 225DF185A790;
 Tue, 25 Apr 2023 16:29:26 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 562ACC15BA0;
 Tue, 25 Apr 2023 16:29:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a9a9ab9-e386-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682440170;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=42lP/ytKQA/eCGkNRdBo0J2DfrHW7Ye9O5dUPefMkOg=;
	b=hvU607e+l1zQvvJL7Vl099u/mVJ5Pjx+LZ2ifsEsgrcRKM5zNZLPn7yr7fljpbzXtIVhET
	WjsDxjV95dypPPEo12NbeGt8EjynlU2EgI5ZLE/11J0bT6LY/HYuLpw0KS8awg3MULYjHn
	M+VcrYcPHDl2KdnOHifD7lT7p5jxC+I=
X-MC-Unique: R8bTjhhCPLi9WY5oOQOfhw-1
Date: Tue, 25 Apr 2023 12:29:22 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>
Cc: qemu-devel@nongnu.org, Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>, qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>, Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org, Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>, eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: Re: [PATCH v3 00/20] block: remove aio_disable_external() API
Message-ID: <20230425162922.GA725672@fedora>
References: <20230420113732.336620-1-stefanha@redhat.com>
 <1e1f3a54-7113-7929-38a1-23d97bfa4d45@linaro.org>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="lTQFmEIU0pfkRcw9"
Content-Disposition: inline
In-Reply-To: <1e1f3a54-7113-7929-38a1-23d97bfa4d45@linaro.org>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8


--lTQFmEIU0pfkRcw9
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Apr 20, 2023 at 03:39:59PM +0200, Philippe Mathieu-Daudé wrote:
> Hi Stefan,
>
> On 20/4/23 13:37, Stefan Hajnoczi wrote:
> > v3:
> > - Resend full patch series. v2 was sent in the middle of a git rebase and was
> >    missing patches. [Eric]
> > - Apply Reviewed-by tags.
>
> > Based-on: 087bc644b7634436ca9d52fe58ba9234e2bef026 (kevin/block-next)
>
> It seems kevin/block-next got rebased and doesn't contain 087bc644b76.
>
> Based on 3d1ba50c4b ("vmdk: make vmdk_is_cid_valid a coroutine_fn")
> I get:
>
> Applying: hw/qdev: introduce qdev_is_realized() helper
> Applying: virtio-scsi: avoid race between unplug and transport event
> Applying: virtio-scsi: stop using aio_disable_external() during unplug
> Applying: block/export: only acquire AioContext once for
> vhost_user_server_stop()
> error: patch failed: util/vhost-user-server.c:346
> error: util/vhost-user-server.c: patch does not apply
> Patch failed at 0004 block/export: only acquire AioContext once for
> vhost_user_server_stop()
>
> Hmm patch #4 is already merged as commit 2957dc40a2, let's skip it:
>
> $ git am --skip
> Applying: util/vhost-user-server: rename refcount to in_flight counter
> Applying: block/export: wait for vhost-user-blk requests when draining
> Applying: block/export: stop using is_external in vhost-user-blk server
> Applying: hw/xen: do not use aio_set_fd_handler(is_external=true) in
> xen_xenstore
> Applying: block: add blk_in_drain() API
> Applying: block: drain from main loop thread in bdrv_co_yield_to_drain()
> Applying: xen-block: implement BlockDevOps->drained_begin()
> Applying: hw/xen: do not set is_external=true on evtchn fds
> Applying: block/export: rewrite vduse-blk drain code
> Applying: block/export: don't require AioContext lock around
> blk_exp_ref/unref()
> Applying: block/fuse: do not set is_external=true on FUSE fd
> Applying: virtio: make it possible to detach host notifier from any thread
> Applying: virtio-blk: implement BlockDevOps->drained_begin()
> Applying: virtio-scsi: implement BlockDevOps->drained_begin()
> Applying: virtio: do not set is_external=true on host notifiers
> Applying: aio: remove aio_disable_external() API
> error: patch failed: util/fdmon-epoll.c:131
> error: util/fdmon-epoll.c: patch does not apply
> Patch failed at 0020 aio: remove aio_disable_external() API
>
> Now this clashes with commit e62da98527 ("aio-posix: fix race between epoll
> upgrade and aio_set_fd_handler()").
>
> Indeed reverting both e62da98527 / 2957dc40a2, I can apply your
> series.

Thanks, I will rebase to origin/master now to make this patch series
easier to merge.

Stefan

--lTQFmEIU0pfkRcw9
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRH/+IACgkQnKSrs4Gr
c8iYggf/b4293BgOm1R5DDER4USW/3OSKh2P4LwY7cqAodChEJo2dbRViaYECV2Q
bSxEGgsjB6wPdZYKwNFWfPiBpVQswZHeF0JDoupXILM4ZnsMtOXuhQKCIREPvPcj
nwMdkECDC6G/aIyzybY4xHV4lYoUvv61OlvEzuG5p+6jmyiUIB7WYwftBdG0Vfsy
eo5110kPiYRdcAlMsL7QpgS01WT9DBCSw98NhH0x0dWLBYlgkgGhP7GRnIMfriFc
o+v31v9eCHe7J1wvvqdbniWgas9xqxzQyyLhXcvbVffHlUftAZ2KL/fhYIXyRwtk
vl3prMrg4+6AVRr5K4a+zoJ8eK8ETg==
=cT+E
-----END PGP SIGNATURE-----

--lTQFmEIU0pfkRcw9--



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 16:30:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 16:30:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526141.817648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLYI-00073B-2h; Tue, 25 Apr 2023 16:30:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526141.817648; Tue, 25 Apr 2023 16:30:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLYH-00072O-Ua; Tue, 25 Apr 2023 16:30:01 +0000
Received: by outflank-mailman (input) for mailman id 526141;
 Tue, 25 Apr 2023 16:30:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prLYG-0006rg-JA
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 16:30:00 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6aaa6d4a-e386-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 18:29:58 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-307-k5r-hIbGOR-yTKl6ZBF_ag-1; Tue, 25 Apr 2023 12:29:53 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A73FC885624;
 Tue, 25 Apr 2023 16:29:52 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1D32CC15BA0;
 Tue, 25 Apr 2023 16:29:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6aaa6d4a-e386-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682440197;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=E79i0nQNcOhPx9obbDT5boX/wVXS7Z+jrWPuvrkk1Y4=;
	b=FHiVH9DMSiIGrsu1k21xbNJtDh8YSWeUHLMvInVumC/j98dczWJ8FZ7UY9zLmgU9+EhW/W
	ns5kC2Rb1FNMuwFcAzteIJKxFNgXCyrv0BUhQvXP9ABmCFxKLo/7MAfTriXNy7eCc+W2/M
	GQnDgfenuDCr7TFYIDu4usuK9yud6Hs=
X-MC-Unique: k5r-hIbGOR-yTKl6ZBF_ag-1
Date: Tue, 25 Apr 2023 12:29:50 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>
Cc: qemu-devel@nongnu.org, Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Xie Yongji <xieyongji@bytedance.com>,
	Juan Quintela <quintela@redhat.com>, qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>, Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org, Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>, eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: Re: [PATCH v3 20/20] aio: remove aio_disable_external() API
Message-ID: <20230425162950.GB725672@fedora>
References: <20230420113732.336620-1-stefanha@redhat.com>
 <20230420113732.336620-21-stefanha@redhat.com>
 <f7b20c96-be06-2299-5589-11dbf23251f8@linaro.org>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="BIT1XJZKQdQoLFB0"
Content-Disposition: inline
In-Reply-To: <f7b20c96-be06-2299-5589-11dbf23251f8@linaro.org>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8


--BIT1XJZKQdQoLFB0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Apr 20, 2023 at 03:44:06PM +0200, Philippe Mathieu-Daudé wrote:
> On 20/4/23 13:37, Stefan Hajnoczi wrote:
> > All callers now pass is_external=false to aio_set_fd_handler() and
> > aio_set_event_notifier(). The aio_disable_external() API that
> > temporarily disables fd handlers that were registered is_external=true
> > is therefore dead code.
> >
> > Remove aio_disable_external(), aio_enable_external(), and the
> > is_external arguments to aio_set_fd_handler() and
> > aio_set_event_notifier().
> >
> > The entire test-fdmon-epoll test is removed because its sole purpose was
> > testing aio_disable_external().
> >
> > Parts of this patch were generated using the following coccinelle
> > (https://coccinelle.lip6.fr/) semantic patch:
> >
> >    @@
> >    expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
> >    @@
> >    - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
> >    + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)
> >
> >    @@
> >    expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
> >    @@
> >    - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
> >    + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)
> >
> > Reviewed-by: Juan Quintela <quintela@redhat.com>
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >   include/block/aio.h           | 55 --------------------------
> >   util/aio-posix.h              |  1 -
> >   block.c                       |  7 ----
> >   block/blkio.c                 | 15 +++----
> >   block/curl.c                  | 10 ++---
> >   block/export/fuse.c           |  8 ++--
> >   block/export/vduse-blk.c      | 10 ++---
> >   block/io.c                    |  2 -
> >   block/io_uring.c              |  4 +-
> >   block/iscsi.c                 |  3 +-
> >   block/linux-aio.c             |  4 +-
> >   block/nfs.c                   |  5 +--
> >   block/nvme.c                  |  8 ++--
> >   block/ssh.c                   |  4 +-
> >   block/win32-aio.c             |  6 +--
> >   hw/i386/kvm/xen_xenstore.c    |  2 +-
> >   hw/virtio/virtio.c            |  6 +--
> >   hw/xen/xen-bus.c              |  8 ++--
> >   io/channel-command.c          |  6 +--
> >   io/channel-file.c             |  3 +-
> >   io/channel-socket.c           |  3 +-
> >   migration/rdma.c              | 16 ++++----
> >   tests/unit/test-aio.c         | 27 +------------
> >   tests/unit/test-fdmon-epoll.c | 73 -----------------------------------
> >   util/aio-posix.c              | 20 +++-------
> >   util/aio-win32.c              |  8 +---
> >   util/async.c                  |  3 +-
> >   util/fdmon-epoll.c            | 10 -----
> >   util/fdmon-io_uring.c         |  8 +---
> >   util/fdmon-poll.c             |  3 +-
> >   util/main-loop.c              |  7 ++--
> >   util/qemu-coroutine-io.c      |  7 ++--
> >   util/vhost-user-server.c      | 11 +++---
> >   tests/unit/meson.build        |  3 --
> >   34 files changed, 76 insertions(+), 290 deletions(-)
> >   delete mode 100644 tests/unit/test-fdmon-epoll.c
>
>
> > -/**
> > - * aio_disable_external:
> > - * @ctx: the aio context
> > - *
> > - * Disable the further processing of external clients.
> > - */
> > -static inline void aio_disable_external(AioContext *ctx)
> > -{
> > -    qatomic_inc(&ctx->external_disable_cnt);
> > -}
> > -
> > -/**
> > - * aio_enable_external:
> > - * @ctx: the aio context
> > - *
> > - * Enable the processing of external clients.
> > - */
> > -static inline void aio_enable_external(AioContext *ctx)
> > -{
> > -    int old;
> > -
> > -    old = qatomic_fetch_dec(&ctx->external_disable_cnt);
> > -    assert(old > 0);
> > -    if (old == 1) {
> > -        /* Kick event loop so it re-arms file descriptors */
> > -        aio_notify(ctx);
> > -    }
> > -}
> > -
> > -/**
> > - * aio_external_disabled:
> > - * @ctx: the aio context
> > - *
> > - * Return true if the external clients are disabled.
> > - */
> > -static inline bool aio_external_disabled(AioContext *ctx)
> > -{
> > -    return qatomic_read(&ctx->external_disable_cnt);
> > -}
>
> Missing:
>
> -- >8 --
> diff --git a/include/block/aio.h b/include/block/aio.h
> index d4ce01ea08..266be26f8e 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -224,6 +224,4 @@ struct AioContext {
>      QEMUTimerListGroup tlg;
>
> -    int external_disable_cnt;
> -
>      /* Number of AioHandlers without .io_poll() */
>      int poll_disable_cnt;
> diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
> index d9d3807062..5c89169e46 100644
> --- a/tests/unit/test-bdrv-drain.c
> +++ b/tests/unit/test-bdrv-drain.c
> @@ -436,5 +436,4 @@ static void test_graph_change_drain_all(void)
>      g_assert_cmpint(bs_b->quiesce_counter, ==, 0);
>      g_assert_cmpint(b_s->drain_count, ==, 0);
> -    g_assert_cmpint(qemu_get_aio_context()->external_disable_cnt, ==, 0);
>=20
>      bdrv_unref(bs_b);
> ---
>
> Once cleaned:
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>

Oh, yes! Thank you.

Stefan

--BIT1XJZKQdQoLFB0
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRH//4ACgkQnKSrs4Gr
c8hAWAgArFHJSBD/052Y5YxJagR3rp5Ty6TQFZbFR7XqgZm4OKiJUdVsYRjLFAVb
88VPsKWcJcTIjsM6CsLc08dll7kjwWltL1oWglX3chlwNSyFz7JJpkrBEsGi+FTk
QnHEXHYTopX8yvkc63FJy9xMYPUkBRoxRU2out3CeqaPrcBTtkoDRjoUvG0iPi4I
a2KBbyhYim8z4W2OWw3ereSqfBzSHaIc6c16hE74O2NTVFXYKWxj1VV/HNJDmRM9
s+wDgxnywf2oAtec9QZvxI7k/Jkb4B4zqox2QJV/2f2ngq5QqQMEBh4Q8uKMyC8t
qV7BJGwMNoPGtu5fnTYjZVJApmvWNA==
=q6+o
-----END PGP SIGNATURE-----

--BIT1XJZKQdQoLFB0--



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 16:38:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 16:38:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526152.817661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLg7-0000AZ-2L; Tue, 25 Apr 2023 16:38:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526152.817661; Tue, 25 Apr 2023 16:38:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLg6-0000AS-Vy; Tue, 25 Apr 2023 16:38:06 +0000
Received: by outflank-mailman (input) for mailman id 526152;
 Tue, 25 Apr 2023 16:38:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BHzL=AQ=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1prLg4-0000AM-UF
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 16:38:05 +0000
Received: from wout3-smtp.messagingengine.com (wout3-smtp.messagingengine.com
 [64.147.123.19]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 89e220e4-e387-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 18:38:01 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.west.internal (Postfix) with ESMTP id 75E3C3200919;
 Tue, 25 Apr 2023 12:37:57 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Tue, 25 Apr 2023 12:37:57 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 25 Apr 2023 12:37:56 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89e220e4-e387-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1682440677; x=1682527077; bh=8x38kB+ZlYh223Y2lo2Kh7PwV9NIajIpnFg
	diKdPG6g=; b=nPSmog2lvUAoNpQ1EpvOFWdYmmlsa3R1GF9rtqQohKiTz7E16kn
	ICpXMT24n024y24RzWmAhOf67rFlPVWkRE8mzsCelKHbjvH24QPOSqMfWclKw4I3
	jhRWu3U/Gnz4tC+kzH4KZtkhX8gio8fLLD8Qx6lszJrna1gvHuy1W5LLz6DUh0Ui
	kXVtMI7cjSbandfDpXI2xaTMakeDxPmUrKHHTIQId7ngrTnLw4O8vrSa6oDLWXA0
	NYLzR87cTkzQoK6LVK/XJAt3k/Esho3dD9HRdp2tEURD6zJ+6ZwlSzrBKFbjSouf
	R8ItFJqPyU+MmwjyXXx/HiQEac2eGHbgQ3A==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1682440677; x=1682527077; bh=8x38kB+ZlYh22
	3Y2lo2Kh7PwV9NIajIpnFgdiKdPG6g=; b=DFSJFS4HgYkNlTAqMebl8BejZXFUl
	9hxWkcxzq2aoJ0oBnKs6htToNWqHJp1IMZrJZNnzbmXOdkxjvNOfA1VmYNZOzhax
	93GeHNPMtf0jjnxMQpLbQ52qkRe4ym2Dn0IIBQMmPt9piDS5EqtJQozLP/KSVMqa
	4FQ+lDW89fGKMPE87N1SsqOJM9+Rcxd6nqVdPWv94HWg+Nys65I3HFTPL/WSlupR
	racUjAUdzqbWoz18Dz0oWarx6OLFPrGBOsBLM9lmPWnARgNmoo+Yb2564GSKmDiJ
	7i7DbZKabvcbOTRGhGYV6fHgiMyykchMgCrHxQHfAYZEe2/QsgczAXE+Q==
X-ME-Sender: <xms:5AFIZFYEjt0wxiezo9TRAMznqB5mxrlkNHJAPnN1T3u58aw48Agv1w>
    <xme:5AFIZMY4m8ihc0afzc7XfebbxA3qYkRZw2rGMEU3UeBw-DzW4ViwU2UjRay0N6QZx
    q7Rp75CefPgyQ>
X-ME-Received: <xmr:5AFIZH9oAeGVapR5ysm7jZtQg1YSGrlC1rtYYyxo6qekIDYGgIt_aTHWlNb4l0COTIRCIYW3golvacVEETsVoI9rDAfYOeqFk2k>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeduvddguddtfecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeev
    ueejteegleelteduueevhfetgfffjeevtddvgfeiveehteehleegueelvdejveenucffoh
    hmrghinhepghhithhlrggsrdgtohhmnecuvehluhhsthgvrhfuihiivgeptdenucfrrghr
    rghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomh
X-ME-Proxy: <xmx:5AFIZDoEc83D7WqTom5Dvd31P0XzW_gAIzL_-i69X1ML_3P7C7qcDA>
    <xmx:5AFIZAqtgD09XwmohUaWC8GMOhQ131S48-dtniPBXZMyOogsQB77Lw>
    <xmx:5AFIZJRFX33oh92xA9FoBjLJyYROd6XJy7I-JBf7v3U-4T-z9rgSpw>
    <xmx:5QFIZHR2vQR8RUnazMJd-lfExnX_8oKK2qKfAE0wblDHcln4F4l2fQ>
Feedback-ID: i1568416f:Fastmail
Date: Tue, 25 Apr 2023 18:37:52 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH 5/6] automation: PCI passthrough tests on ADL hw
Message-ID: <ZEgB4e9Rfmt6wh2f@mail-itl>
References: <cover.52ddd01da196853766a5b39a89c631f3e4652dd9.1682369736.git-series.marmarek@invisiblethingslab.com>
 <b01494665d1a8cce5c426be70beca2c519215eca.1682369736.git-series.marmarek@invisiblethingslab.com>
 <alpine.DEB.2.22.394.2304241717510.3419@ubuntu-linux-20-04-desktop>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="OS1ldJoc2laumTA5"
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.22.394.2304241717510.3419@ubuntu-linux-20-04-desktop>


--OS1ldJoc2laumTA5
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Tue, 25 Apr 2023 18:37:52 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH 5/6] automation: PCI passthrough tests on ADL hw

On Mon, Apr 24, 2023 at 08:01:46PM -0700, Stefano Stabellini wrote:
> On Mon, 24 Apr 2023, Marek Marczykowski-Górecki wrote:
> > Add a simple PCI passthrough test to both PV and HVM domU. It passes
> > through a network adapter (the only one in the system), gets an IP via
> > DHCP (first basic test) and then pings the gateway (second basic test).
> > Finally, if the device is supposed to use MSI or MSI-X (as set in the
> > PCIDEV_INTR test variable), check that it's in use via /proc/interrupts.
> >
> > On the current runner, the device in question is this:
> > 03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3] (rev 03)
> > 	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:7d25]
> > 	Flags: bus master, fast devsel, latency 0, IRQ 18
> > 	Memory at 50400000 (32-bit, non-prefetchable) [size=1M]
> > 	Memory at 50500000 (32-bit, non-prefetchable) [size=16K]
> > 	Capabilities: [40] Power Management version 3
> > 	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
> > 	Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
> > 	Capabilities: [a0] Express Endpoint, MSI 00
> > 	Capabilities: [100] Advanced Error Reporting
> > 	Capabilities: [140] Device Serial Number ...
> > 	Capabilities: [1c0] Latency Tolerance Reporting
> > 	Capabilities: [1f0] Precision Time Measurement
> > 	Capabilities: [1e0] L1 PM Substates
> > 	Kernel driver in use: igc
> > 	Kernel modules: igc
> >
> > With the current Xen version, it uses MSI-X under PV and MSI under HVM.
> >=20
> > This patch moves the domU config to a variable, to make it configurable
> > on a per-test basis. Also add a few comments for visual separation of tests.
> >=20
> > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > ---
> >  automation/gitlab-ci/test.yaml     | 20 ++++++++-
> >  automation/scripts/qubes-x86-64.sh | 80 ++++++++++++++++++++++++++-----
> >  2 files changed, 89 insertions(+), 11 deletions(-)
> >
> > diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> > index d68c584269dd..1ce083e6cd88 100644
> > --- a/automation/gitlab-ci/test.yaml
> > +++ b/automation/gitlab-ci/test.yaml
> > @@ -94,6 +94,8 @@
> >      # the test controller runs on RPi4
> >      CONTAINER: alpine:3.12-arm64v8
> >      LOGFILE: smoke-test.log
> > +    PCIDEV: "03:00.0"
> > +    PCIDEV_INTR: "MSI-X"
>
> This is minor but I would move PCIDEV_INTR to
> adl-pci-pv-x86-64-gcc-debug given that adl-pci-hvm-x86-64-gcc-debug
> already redefines it.

The device is MSI-X capable and I'd expect Linux to use MSI-X. My
guess is that it's using MSI in HVM because MSI-X doesn't work (and I hope
my other series will fix that), but I have _not_ verified this theory.
If I'm right, the PCIDEV_INTR="MSI" setting will go away from the HVM
test, which will then verify that MSI-X hasn't regressed.

> I would also move PCIDEV to adl-pci-pv-x86-64-gcc-debug and
> adl-pci-hvm-x86-64-gcc-debug, but I am fine either way.

My idea is that these are both properties of the test target, not of an
individual test.


> However the two new tests failed for me:
>
> https://gitlab.com/xen-project/people/sstabellini/xen/-/pipelines/847157948
>
>
> + grep '^Welcome to Alpine Linux' smoke.serial
> + '[' 0 -le 0 ]
> + '[' 0 -le 0 ]
> + echo 'ERROR: test timeout, aborting'
> ERROR: test timeout, aborting
>
> The Welcome to Alpine Linux message is missing

Ah, I forgot to remove debug shell...

I'll post v2 once it passes test run (waiting in the queue...).
-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--OS1ldJoc2laumTA5
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRIAeEACgkQ24/THMrX
1yyZoAf/ecXi0HTG95PO/xvHUpnfDFNZYr07QJXHtWxB05Vf6RWkXUoOujoAec4U
CUt0aQbfdbeWB26CcTpJ/ZdTlyqFt1lWKvPtBzlAPryuKY6X2FLAznsm+VxT4919
I/9reZO8TdVfM6rXmBoe3to+uvn9wbHephterwgp5PCf6nZTnYc/xszJUhn6CVGt
GjLPpdQ8jnmMDOo5CCosKYwAHOWcZBaBhcUhxCEKcFO/a/WdM37Kdo8m+g2ZCBjq
t1gK2de6YigHbEXGcBXrU0/Vc2t4Gq0zajOuquZxw4oAjEq95KPNN22TJ7equa2F
EtCNiqqGaQ4Cy7fpEWJaNrXUrfq/vA==
=dOyz
-----END PGP SIGNATURE-----

--OS1ldJoc2laumTA5--


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 16:43:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 16:43:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526158.817670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLkl-0001bp-Kq; Tue, 25 Apr 2023 16:42:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526158.817670; Tue, 25 Apr 2023 16:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLkl-0001bi-IH; Tue, 25 Apr 2023 16:42:55 +0000
Received: by outflank-mailman (input) for mailman id 526158;
 Tue, 25 Apr 2023 16:42:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prLkk-0001bc-DV
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 16:42:54 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 37c8516a-e388-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 18:42:52 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-36-0_Np5WYHP7OPCu59TXqr_g-1; Tue, 25 Apr 2023 12:42:45 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 1EC7F811E7B;
 Tue, 25 Apr 2023 16:42:44 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 14022492B03;
 Tue, 25 Apr 2023 16:42:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37c8516a-e388-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682440970;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=YU7EZWrc/q5d2oIlsofGQO2gCk+XKR8YPiDYAdpL+Z0=;
	b=dJQFApdILoFuhcQyq7J6uptySBD6w+VnziUOQAG8Tv9aKtWqWjvAGMxe4AlGKTBuVu3GKY
	oxxflVEAMQTCJFoHPqDjfysT1vMencROLGFdfQLS5Qu04oMA1Di90jf+fU/MRVbEOBxJmf
	jGjgg/8cFG3WTZOLGHIb4yHMEq1dtYg=
X-MC-Unique: 0_Np5WYHP7OPCu59TXqr_g-1
Date: Tue, 25 Apr 2023 12:42:41 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Yongji Xie <xieyongji@bytedance.com>
Cc: qemu devel list <qemu-devel@nongnu.org>, Peter Lieven <pl@kamp.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>, qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>,
	Fam Zheng <fam@euphon.net>, Julia Suvorova <jusual@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	xen-devel@lists.xenproject.org, Hanna Reitz <hreitz@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>, eesposit@redhat.com,
	Kevin Wolf <kwolf@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: Re: [PATCH v3 13/20] block/export: rewrite vduse-blk drain code
Message-ID: <20230425164241.GC725672@fedora>
References: <20230420113732.336620-1-stefanha@redhat.com>
 <20230420113732.336620-14-stefanha@redhat.com>
 <CACycT3suSR+nYhe4z2zuocYsBBVSDBCE+614zT0jfDZCBRveaA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="taPYnWD+LbJQX+JL"
Content-Disposition: inline
In-Reply-To: <CACycT3suSR+nYhe4z2zuocYsBBVSDBCE+614zT0jfDZCBRveaA@mail.gmail.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10


--taPYnWD+LbJQX+JL
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Fri, Apr 21, 2023 at 11:36:02AM +0800, Yongji Xie wrote:
> Hi Stefan,
>
> On Thu, Apr 20, 2023 at 7:39 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> > vduse_blk_detach_ctx() waits for in-flight requests using
> > AIO_WAIT_WHILE(). This is not allowed according to a comment in
> > bdrv_set_aio_context_commit():
> >
> >   /*
> >    * Take the old AioContext when detaching it from bs.
> >    * At this point, new_context lock is already acquired, and we are now
> >    * also taking old_context. This is safe as long as bdrv_detach_aio_context
> >    * does not call AIO_POLL_WHILE().
> >    */
> >
> > Use this opportunity to rewrite the drain code in vduse-blk:
> >
> > - Use the BlockExport refcount so that vduse_blk_exp_delete() is only
> >   called when there are no more requests in flight.
> >
> > - Implement .drained_poll() so in-flight request coroutines are stopped
> >   by the time .bdrv_detach_aio_context() is called.
> >
> > - Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
> >   .bdrv_detach_aio_context() constraint violation. It's no longer
> >   needed due to the previous changes.
> >
> > - Always handle the VDUSE file descriptor, even in drained sections. The
> >   VDUSE file descriptor doesn't submit I/O, so it's safe to handle it in
> >   drained sections. This ensures that the VDUSE kernel code gets a fast
> >   response.
> >
> > - Suspend virtqueue fd handlers in .drained_begin() and resume them in
> >   .drained_end(). This eliminates the need for the
> >   aio_set_fd_handler(is_external=true) flag, which is being removed from
> >   QEMU.
> >
> > This is a long list but splitting it into individual commits would
> > probably lead to git bisect failures - the changes are all related.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
> >  1 file changed, 93 insertions(+), 39 deletions(-)
> >
> > diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
> > index f7ae44e3ce..35dc8fcf45 100644
> > --- a/block/export/vduse-blk.c
> > +++ b/block/export/vduse-blk.c
> > @@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
> >      VduseDev *dev;
> >      uint16_t num_queues;
> >      char *recon_file;
> > -    unsigned int inflight;
> > +    unsigned int inflight; /* atomic */
> > +    bool vqs_started;
> >  } VduseBlkExport;
> >
> >  typedef struct VduseBlkReq {
> > @@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
> >
> >  static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
> >  {
> > -    vblk_exp->inflight++;
> > +    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
>
> I wonder why we need to use atomic operations here.

The inflight counter is only modified by the vhost-user export thread,
but it may be read by another thread here:

  static bool vduse_blk_drained_poll(void *opaque)
  {
      BlockExport *exp = opaque;
      VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);

      return qatomic_read(&vblk_exp->inflight) > 0;

BlockDevOps->drained_poll() calls are invoked when BlockDriverStates are
drained (e.g. blk_drain_all() and related APIs).

> > @@ -355,13 +410,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
> >      g_free(vblk_exp->handler.serial);
> >  }
> >
> > +/* Called with exp->ctx acquired */
> >  static void vduse_blk_exp_request_shutdown(BlockExport *exp)
> >  {
> >      VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
> >
> > -    aio_context_acquire(vblk_exp->export.ctx);
> > -    vduse_blk_detach_ctx(vblk_exp);
> > -    aio_context_acquire(vblk_exp->export.ctx);
> > +    vduse_blk_stop_virtqueues(vblk_exp);
>
> Can we add a AIO_WAIT_WHILE() here? Then we don't need to
> increase/decrease the BlockExport refcount during I/O processing.

I don't think so, because vduse_blk_exp_request_shutdown() is not the
only place where we wait for requests to complete. There would still
need to be a way to wait for requests to finish (without calling
AIO_WAIT_WHILE()) in vduse_blk_drained_poll().

Stefan

--taPYnWD+LbJQX+JL
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRIAwEACgkQnKSrs4Gr
c8hhBggAqyQ3QbuCykGssGed+Wva5piRrv+slVXsp6pWuyvi67KwteSenyU8jyVK
0nAuF6fpgfMC44OAy/Cqt5JYbZg0883laglZxoFjJIJ52FvhtMCDOetwnVhjmkN7
U1CdvipfqjL7dKNx316HDrqikr28F8CBTMmgyLP9HLUEegIh1RpGyUyvE8G0FmXJ
cTYoJPC6PJZE1TNwJ0hlsUIsSvR+IL8s5JpqUPEiybpZAl5ro9a3O/4QnExGxsIl
cakAXaxPfK2wgbbbvDirQefgDT1nnexKs7hpMRMoci9FHPUyWv+V1OLNuakp0XMf
gUwutPZE/o7Yw4PJJiH8duPpAKz5wQ==
=rvpQ
-----END PGP SIGNATURE-----

--taPYnWD+LbJQX+JL--



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 16:50:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 16:50:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526164.817684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLrT-0002GG-DO; Tue, 25 Apr 2023 16:49:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526164.817684; Tue, 25 Apr 2023 16:49:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prLrT-0002G9-9N; Tue, 25 Apr 2023 16:49:51 +0000
Received: by outflank-mailman (input) for mailman id 526164;
 Tue, 25 Apr 2023 16:49:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dmH=AQ=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1prLrR-0002G3-SF
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 16:49:49 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2061d.outbound.protection.outlook.com
 [2a01:111:f400:fe59::61d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2fa63e23-e389-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 18:49:48 +0200 (CEST)
Received: from MW4PR03CA0324.namprd03.prod.outlook.com (2603:10b6:303:dd::29)
 by PH7PR12MB7283.namprd12.prod.outlook.com (2603:10b6:510:20a::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Tue, 25 Apr
 2023 16:49:44 +0000
Received: from CO1NAM11FT072.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:dd:cafe::58) by MW4PR03CA0324.outlook.office365.com
 (2603:10b6:303:dd::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.20 via Frontend
 Transport; Tue, 25 Apr 2023 16:49:44 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT072.mail.protection.outlook.com (10.13.174.106) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.20 via Frontend Transport; Tue, 25 Apr 2023 16:49:43 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 25 Apr
 2023 11:49:42 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 25 Apr
 2023 11:49:41 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 25 Apr 2023 11:49:39 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fa63e23-e389-11ed-b223-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AQNxMtaS11MxeDTwcIO8GAusj8FuvNg/sLFL5a9uhrnILuM1BqzRcBTXfqn88Dw5lk5uExi2O198Kys47EH+I0vXwWkPeNYnofW3yTjKtBsDOnxkhNPMEons1MfBynvg3qvniOD0yqYAExtQs3PCb2OtokBqLVLVHVIqMZhtoknZgg1SA0ePdmPH8vD3Vbf6ZQ0ADiK1Tp5vBCyY/tTgyVq/kdEbMhlyrPMOpPVg83Nci2L0iDBE71+RE8etGaYTWlGYg/aJRtFn+/2+KuHrpLFXg25ErTQwZnA2S/PQ8fU+AhZyQLABgO50ExiW5L8O0j9fXQ31t0G4mjNCHeczHA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pTVuCKjfrnLU6F/+3MjKDTLewzSUJKzjfnL0s/DkNxo=;
 b=VNQIsTnoUOdIBEzQWTXhJ7Ftng7G51JrpjtqNlNDFXnYdjyIZDJLPsf2GNHzKbbbcWrSAbVfGVJR4iWO7dSzOdRMPSkKCnasqkR4Z5foW0HF81KHcvWKKwJlx0dZs2u1x/Kz02ZPXEfHR+6iyZ6GwoCFKABXqa9qo6MfaSEvI3OZAdtvFSHvwLC9sVuvXfxmiKnaR3TXGEjJ74/73WFv7Wc3ilu1wKcd98m+iCLeq88biDfvabgp6PEEK6osUaBxoSy60X4eQWn8/y36JgkJKMDEjXoD3R/RKNtjdXiepF1S3GigbicZc1hrJCwmqEgoGvaykcTnR0uR8nAVqtO6EQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pTVuCKjfrnLU6F/+3MjKDTLewzSUJKzjfnL0s/DkNxo=;
 b=TNRTgnr8LGWTfYdvX3239oLMgoGzOxiFs0BPwf89rADoHh5eVssW4w3aiaLTpNaUMxSI8CVuC/NM2Ixoz6n9e9mLKIGWKZJTYu8SQrLDlLQxGUYh0zsZqJrU4rHnVa2ju4Ul4jDpM0A7mW8IlAgFGo2YwNHxrjL8lArbftBbCwM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <56a55bd0-d86e-aa32-ab7f-9ff2bcfb9e1c@amd.com>
Date: Tue, 25 Apr 2023 18:49:38 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN v5 05/10] xen/arm: Introduce choice to enable 64/32 bit
 physical addressing
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230413173735.48387-1-ayan.kumar.halder@amd.com>
 <20230413173735.48387-6-ayan.kumar.halder@amd.com>
 <ca8fd34c-3755-161d-38c1-651cf08ef589@amd.com>
 <ad2dec59-d35f-3edd-b928-5005b47ebff7@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <ad2dec59-d35f-3edd-b928-5005b47ebff7@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT072:EE_|PH7PR12MB7283:EE_
X-MS-Office365-Filtering-Correlation-Id: 2b9c1129-549e-4c7c-4f78-08db45ad1211
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	lYb2HIRxsAAaGbfgmxhmNlMJQ+MOlzetjm1eHdIUD+phr54dx870qBtnljyoFoAKWWjcuSLDn7t5olmcvgwI8V4mtScXtUMotGrFEXvT/tMKVonzUpXcYeVkDlqkIYlWARorbbUhVyxKmH/3jrGaClnvZzqGrshx4r0xec0HoRDGP9Tl0Wk/lP9FUY3zkc2KeVSNzP8z+T/Zxm6rOyFxx4nJOC93kAqmOK3GMBooN6PRN3BTtzTWEoFNBLZh/9cfdLivC2iJ+EYz3ePxrVYHDPTYRasxxI50xSp7K9NzcQnxHaP7p7Mr/LjeHK6WXucewKNLV3kXHioeh8fapAgwhtXcgcQKhRugZDdizpae2+0VoZg+JGYNwUc015x3MtkCoYGYmCba9+Rf1FVW7qOJ0VrbQxENQa7EjekCoxe7Rld46p/HTeLOwlPduif94pGn4rF9rp/HQSwhi1J0li2xHaMHXO5WP725J9SmBShSZrVB9IfClMaHiweX7s6FD4QYT0OtNrnSJGOnCv9aEtcl0pi+e3HYnrus+S/b26y3erUqaT915wT5DAS0LNx9k2mXglWnEkrw+djZTIIAxHhz02nHIsWaSPHBVpuNQMdgm3yi/RWTTSK1fJ2TAHW3zBK23m9HMO++KqeNfhe7hfsTUQGRhQlbOrd5UOWckoQNki39tksGWImVwpnQHEdotmnZJAAAelwjrvS3GvOmVr/dzK0IbpMgMBmOy4nTFX5Fvjq/GnCqOQgIH0EfoiKzMil/BVxuVRv2B3REQZx/ygdsbiA6T4w1fqBOtjxasBkkJvM=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(396003)(39860400002)(136003)(451199021)(46966006)(36840700001)(40470700004)(44832011)(2616005)(8936002)(31686004)(356005)(82740400003)(70206006)(70586007)(316002)(336012)(81166007)(426003)(82310400005)(40480700001)(26005)(36860700001)(83380400001)(5660300002)(7416002)(2906002)(41300700001)(8676002)(4326008)(47076005)(86362001)(54906003)(478600001)(40460700003)(31696002)(16576012)(110136005)(186003)(53546011)(36756003)(21314003)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Apr 2023 16:49:43.6121
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2b9c1129-549e-4c7c-4f78-08db45ad1211
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT072.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB7283

Hi Ayan,

On 25/04/2023 17:16, Ayan Kumar Halder wrote:
> 
> On 24/04/2023 13:08, Michal Orzel wrote:
>> Hi Ayan,
> 
> Hi Michal,
> 
> A clarification.
> 
>>
>> On 13/04/2023 19:37, Ayan Kumar Halder wrote:
>>>
>>> Some Arm-based hardware platforms which do not support LPAE
>>> (e.g. Cortex-R52) use 32-bit physical addresses.
>>> Also, users may choose to use 32 bits to represent physical addresses
>>> for optimization.
>>>
>>> To support the above use cases, we have introduced arch-independent
>>> configs to choose whether the physical address can be represented using
>>> 32 bits (PHYS_ADDR_T_32) or 64 bits (!PHYS_ADDR_T_32).
>>> For now only ARM_32 provides support to enable 32 bit physical
>>> addressing.
>>>
>>> When PHYS_ADDR_T_32 is defined, PADDR_BITS is set to 32.
>>> When PHYS_ADDR_T_32 is not defined for ARM_32, PADDR_BITS is set to 40.
>>> When PHYS_ADDR_T_32 is not defined for ARM_64, PADDR_BITS is set to 48.
>>> The last two are the same as the current configurations used in Xen today.
>>>
>>> PADDR_BITS is also set to 48 when ARM_64 is defined. The reason being
>>> the choice to select ARM_PA_BITS_32/ARM_PA_BITS_40/ARM_PA_BITS_48 is
>>> currently allowed when ARM_32 is defined.
>>>
>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>> ---
>>> Changes from -
>>> v1 - 1. Extracted from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
>>>
>>> v2 - 1. Introduced Kconfig choice. ARM_64 can select PHYS_ADDR_64 only whereas
>>> ARM_32 can select PHYS_ADDR_32 or PHYS_ADDR_64.
>>> 2. For CONFIG_ARM_PA_32, paddr_t is defined as 'unsigned long'.
>>>
>>> v3 - 1. Allow user to define PADDR_BITS by selecting different config options
>>> ARM_PA_BITS_32, ARM_PA_BITS_40 and ARM_PA_BITS_48.
>>> 2. Add the choice under "Architecture Features".
>>>
>>> v4 - 1. Removed PHYS_ADDR_T_64 as !PHYS_ADDR_T_32 means PHYS_ADDR_T_64.
>>>
>>>   xen/arch/Kconfig                     |  3 +++
>>>   xen/arch/arm/Kconfig                 | 37 ++++++++++++++++++++++++++--
>>>   xen/arch/arm/include/asm/page-bits.h |  6 +----
>>>   xen/arch/arm/include/asm/types.h     |  6 +++++
>>>   xen/arch/arm/mm.c                    |  5 ++++
>>>   5 files changed, 50 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
>>> index 7028f7b74f..67ba38f32f 100644
>>> --- a/xen/arch/Kconfig
>>> +++ b/xen/arch/Kconfig
>>> @@ -1,6 +1,9 @@
>>>   config 64BIT
>>>          bool
>>>
>>> +config PHYS_ADDR_T_32
>>> +       bool
>>> +
>>>   config NR_CPUS
>>>          int "Maximum number of CPUs"
>>>          range 1 4095
>>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>>> index 239d3aed3c..3f6e13e475 100644
>>> --- a/xen/arch/arm/Kconfig
>>> +++ b/xen/arch/arm/Kconfig
>>> @@ -19,13 +19,46 @@ config ARM
>>>          select HAS_PMAP
>>>          select IOMMU_FORCE_PT_SHARE
>>>
>>> +menu "Architecture Features"
>>> +
>>> +choice
>>> +       prompt "Physical address space size" if ARM_32
>> Why is it protected by ARM_32 but in the next line you add something protected by ARM_64?
>> This basically means that for arm64, ARM_PA_BITS_XXX is never defined.
> 
> Currently, I intend to support the 32-bit physical address configuration 
> for ARM_32 only. So in the case of ARM_32, the user will have a choice 
> between 32-bit and 40-bit PA.
> 
> So, if it is ok with you, can I remove the below line and "config 
> ARM_PA_BITS_48 ..." ?
I'm ok with that. I'm also ok to keep ARM_PA_BITS_48 but with the ARM_32 protection removed so that the option makes sense.
However, since there is no real choice for arm64, I think the former is better.
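For context, a minimal sketch of the Kconfig choice being discussed (option names taken from the thread; the defaults and exact bodies here are illustrative assumptions, not the patch's actual contents):

```kconfig
# Illustrative sketch only: choice restricted to ARM_32, as proposed
choice
	prompt "Physical address space size" if ARM_32
	default ARM_PA_BITS_40 if ARM_32

config ARM_PA_BITS_32
	bool "32-bit"
	depends on ARM_32
	select PHYS_ADDR_T_32

config ARM_PA_BITS_40
	bool "40-bit"
	depends on ARM_32

endchoice
```

With this shape, arm64 takes no part in the choice and keeps its fixed 48-bit configuration.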

~Michal


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526173.817704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMRw-0006uk-J0; Tue, 25 Apr 2023 17:27:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526173.817704; Tue, 25 Apr 2023 17:27:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMRw-0006ub-EF; Tue, 25 Apr 2023 17:27:32 +0000
Received: by outflank-mailman (input) for mailman id 526173;
 Tue, 25 Apr 2023 17:27:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMRv-0006fQ-Ap
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:31 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7488fe3d-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:27:30 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-557-5iOd8U5FNmmcREp2ZjiRtA-1; Tue, 25 Apr 2023 13:27:24 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E570EA0F39B;
 Tue, 25 Apr 2023 17:27:23 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1B15E201E75D;
 Tue, 25 Apr 2023 17:27:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7488fe3d-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443649;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DrOhiPPcL+d6htjDrKGwRstWfOt7XytfnWLBr7kygzo=;
	b=RV0Qya3+0X9DteFfqKj729EaykumOuLgFvRXFq1vJktqzag+NEdnyYvZtJwlBfubq/kw85
	4iJJHnp9H6EMMvRtR75RrDtVHtaIdjiGM5++g5GRGb8A+MAClpzsbpmkJp2pqn337KI925
	6qrlIpaico8BwTkjqlt5Bp4VWnIIIYw=
X-MC-Unique: 5iOd8U5FNmmcREp2ZjiRtA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 02/20] hw/qdev: introduce qdev_is_realized() helper
Date: Tue, 25 Apr 2023 13:26:58 -0400
Message-Id: <20230425172716.1033562-3-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

Add a helper function to check whether the device is realized without
requiring the Big QEMU Lock. The next patch adds a second caller. The
goal is to avoid spreading DeviceState field accesses throughout the
code.

Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/qdev-core.h | 17 ++++++++++++++---
 hw/scsi/scsi-bus.c     |  3 +--
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
index bd50ad5ee1..4d734cf35e 100644
--- a/include/hw/qdev-core.h
+++ b/include/hw/qdev-core.h
@@ -1,6 +1,7 @@
 #ifndef QDEV_CORE_H
 #define QDEV_CORE_H
 
+#include "qemu/atomic.h"
 #include "qemu/queue.h"
 #include "qemu/bitmap.h"
 #include "qemu/rcu.h"
@@ -164,9 +165,6 @@ struct NamedClockList {
 
 /**
  * DeviceState:
- * @realized: Indicates whether the device has been fully constructed.
- *            When accessed outside big qemu lock, must be accessed with
- *            qatomic_load_acquire()
  * @reset: ResettableState for the device; handled by Resettable interface.
  *
  * This structure should not be accessed directly.  We declare it here
@@ -332,6 +330,19 @@ DeviceState *qdev_new(const char *name);
  */
 DeviceState *qdev_try_new(const char *name);
 
+/**
+ * qdev_is_realized:
+ * @dev: The device to check.
+ *
+ * May be called outside big qemu lock.
+ *
+ * Returns: %true% if the device has been fully constructed, %false% otherwise.
+ */
+static inline bool qdev_is_realized(DeviceState *dev)
+{
+    return qatomic_load_acquire(&dev->realized);
+}
+
 /**
  * qdev_realize: Realize @dev.
  * @dev: device to realize
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index c97176110c..07275fb631 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -60,8 +60,7 @@ static SCSIDevice *do_scsi_device_find(SCSIBus *bus,
      * the user access the device.
      */
 
-    if (retval && !include_unrealized &&
-        !qatomic_load_acquire(&retval->qdev.realized)) {
+    if (retval && !include_unrealized && !qdev_is_realized(&retval->qdev)) {
         retval = NULL;
     }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526172.817693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMRs-0006fd-9s; Tue, 25 Apr 2023 17:27:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526172.817693; Tue, 25 Apr 2023 17:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMRs-0006fW-78; Tue, 25 Apr 2023 17:27:28 +0000
Received: by outflank-mailman (input) for mailman id 526172;
 Tue, 25 Apr 2023 17:27:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMRq-0006fQ-Il
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:26 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 70ed4296-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:27:24 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-614-L0EAQYIeNnCHTrMEG_ewXg-1; Tue, 25 Apr 2023 13:27:20 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 559153C0F19B;
 Tue, 25 Apr 2023 17:27:19 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4D29B492B03;
 Tue, 25 Apr 2023 17:27:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70ed4296-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443643;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=f0ej+5pIhrlcIJ/hZlYkQH8FLS/0nkngHI+ORQgtlrE=;
	b=JlGzrbezcc8mpkqYz782rygcM0vY6jS1dK0S7T99PrgMSOicxs8LUp0ti1jveXbb/Wi54c
	ESVNJL2peB/YguikIFCePb9lei0ugTxZDr2wXnKyW1H4Evnm7YoLJdF1dRjXHHi3ws2OtX
	luTKqSSE5ynWmqsXyrfGvw1zyrF3I2g=
X-MC-Unique: L0EAQYIeNnCHTrMEG_ewXg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 00/20] block: remove aio_disable_external() API
Date: Tue, 25 Apr 2023 13:26:56 -0400
Message-Id: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

v4:
- Remove external_disable_cnt variable [Philippe]
- Add Patch 1 to fix assertion failure in .drained_end() -> blk_get_aio_context()
v3:
- Resend full patch series. v2 was sent in the middle of a git rebase and was
  missing patches. [Eric]
- Apply Reviewed-by tags.
v2:
- Do not rely on BlockBackend request queuing, implement .drained_begin/end()
  instead in xen-block, virtio-blk, and virtio-scsi [Paolo]
- Add qdev_is_realized() API [Philippe]
- Add patch to avoid AioContext lock around blk_exp_ref/unref() [Paolo]
- Add patch to call .drained_begin/end() from main loop thread to simplify
  callback implementations

The aio_disable_external() API temporarily suspends file descriptor monitoring
in the event loop. The block layer uses this to prevent new I/O requests from
being submitted by the guest and elsewhere between bdrv_drained_begin() and
bdrv_drained_end().

While the block layer still needs to prevent new I/O requests in drained
sections, the aio_disable_external() API can be replaced with
.drained_begin/end/poll() callbacks that have been added to BdrvChildClass and
BlockDevOps.

This newer .drained_begin/end/poll() approach is attractive because it works
without specifying a specific AioContext. The block layer is moving towards
multi-queue and that means multiple AioContexts may be processing I/O
simultaneously.

The aio_disable_external() API was always somewhat hacky. It suspends all file
descriptors that were registered with is_external=true, even if they have
nothing to do with the BlockDriverState graph nodes that are being drained.
It's better to solve a block layer problem in the block layer than to have an
odd event loop API solution.

The approach in this patch series is to implement BlockDevOps
.drained_begin/end() callbacks that temporarily stop file descriptor handlers.
This ensures that new I/O requests are not submitted in drained sections.

The first two virtio-scsi patches were already sent as a separate series. I
included them because they are necessary in order to fully remove
aio_disable_external().

Stefan Hajnoczi (20):
  block-backend: split blk_do_set_aio_context()
  hw/qdev: introduce qdev_is_realized() helper
  virtio-scsi: avoid race between unplug and transport event
  virtio-scsi: stop using aio_disable_external() during unplug
  util/vhost-user-server: rename refcount to in_flight counter
  block/export: wait for vhost-user-blk requests when draining
  block/export: stop using is_external in vhost-user-blk server
  hw/xen: do not use aio_set_fd_handler(is_external=true) in
    xen_xenstore
  block: add blk_in_drain() API
  block: drain from main loop thread in bdrv_co_yield_to_drain()
  xen-block: implement BlockDevOps->drained_begin()
  hw/xen: do not set is_external=true on evtchn fds
  block/export: rewrite vduse-blk drain code
  block/export: don't require AioContext lock around blk_exp_ref/unref()
  block/fuse: do not set is_external=true on FUSE fd
  virtio: make it possible to detach host notifier from any thread
  virtio-blk: implement BlockDevOps->drained_begin()
  virtio-scsi: implement BlockDevOps->drained_begin()
  virtio: do not set is_external=true on host notifiers
  aio: remove aio_disable_external() API

 hw/block/dataplane/xen-block.h              |   2 +
 include/block/aio.h                         |  57 ---------
 include/block/export.h                      |   2 +
 include/hw/qdev-core.h                      |  17 ++-
 include/hw/scsi/scsi.h                      |  14 +++
 include/qemu/vhost-user-server.h            |   8 +-
 include/sysemu/block-backend-common.h       |  25 ++--
 include/sysemu/block-backend-global-state.h |   1 +
 util/aio-posix.h                            |   1 -
 block.c                                     |   7 --
 block/blkio.c                               |  15 +--
 block/block-backend.c                       |  78 ++++++------
 block/curl.c                                |  10 +-
 block/export/export.c                       |  13 +-
 block/export/fuse.c                         |  58 ++++++++-
 block/export/vduse-blk.c                    | 128 ++++++++++++++------
 block/export/vhost-user-blk-server.c        |  70 +++++++----
 block/io.c                                  |   5 +-
 block/io_uring.c                            |   4 +-
 block/iscsi.c                               |   3 +-
 block/linux-aio.c                           |   4 +-
 block/nfs.c                                 |   5 +-
 block/nvme.c                                |   8 +-
 block/ssh.c                                 |   4 +-
 block/win32-aio.c                           |   6 +-
 hw/block/dataplane/virtio-blk.c             |  19 ++-
 hw/block/dataplane/xen-block.c              |  42 +++++--
 hw/block/virtio-blk.c                       |  38 +++++-
 hw/block/xen-block.c                        |  24 +++-
 hw/i386/kvm/xen_xenstore.c                  |   2 +-
 hw/scsi/scsi-bus.c                          |  46 ++++++-
 hw/scsi/scsi-disk.c                         |  28 ++++-
 hw/scsi/virtio-scsi-dataplane.c             |  31 +++--
 hw/scsi/virtio-scsi.c                       |  59 +++++++--
 hw/virtio/virtio.c                          |   6 +-
 hw/xen/xen-bus.c                            |  11 +-
 io/channel-command.c                        |   6 +-
 io/channel-file.c                           |   3 +-
 io/channel-socket.c                         |   3 +-
 migration/rdma.c                            |  16 +--
 tests/unit/test-aio.c                       |  27 +----
 tests/unit/test-bdrv-drain.c                |   1 -
 tests/unit/test-fdmon-epoll.c               |  73 -----------
 util/aio-posix.c                            |  20 +--
 util/aio-win32.c                            |   8 +-
 util/async.c                                |   3 +-
 util/fdmon-epoll.c                          |  18 +--
 util/fdmon-io_uring.c                       |   8 +-
 util/fdmon-poll.c                           |   3 +-
 util/main-loop.c                            |   7 +-
 util/qemu-coroutine-io.c                    |   7 +-
 util/vhost-user-server.c                    |  33 ++---
 hw/scsi/trace-events                        |   2 +
 tests/unit/meson.build                      |   3 -
 54 files changed, 612 insertions(+), 480 deletions(-)
 delete mode 100644 tests/unit/test-fdmon-epoll.c

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526175.817725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMRy-0007P5-4d; Tue, 25 Apr 2023 17:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526175.817725; Tue, 25 Apr 2023 17:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMRx-0007OU-VW; Tue, 25 Apr 2023 17:27:33 +0000
Received: by outflank-mailman (input) for mailman id 526175;
 Tue, 25 Apr 2023 17:27:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMRx-0006fQ-B3
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:33 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74f80a3c-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:27:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-649-zygUM1aeMOCsWjm0lHqwCg-1; Tue, 25 Apr 2023 13:27:27 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 92D2A28082B1;
 Tue, 25 Apr 2023 17:27:26 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id A0ED82027043;
 Tue, 25 Apr 2023 17:27:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74f80a3c-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443650;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ct1ICfub90ccCopLja+U3Sq9ZQX/65uvuN+fpfLMIoc=;
	b=EzqUA7UMmqv3z+VBzjQNsA72l2G4m4KEWGgIorpB/vNg/OdtCvsCI1wpRJDeTIN/dbjg1G
	73ol2JnAa5Auqm481++Q4pL3CE2keE2W99drf9/hAGxcd4tXm+vReXj0O5//bIua0zl+no
	wWukRRzv5NF0b8bXwIIfi4KlJThBDIg=
X-MC-Unique: zygUM1aeMOCsWjm0lHqwCg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: [PATCH v4 03/20] virtio-scsi: avoid race between unplug and transport event
Date: Tue, 25 Apr 2023 13:26:59 -0400
Message-Id: <20230425172716.1033562-4-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

Only report a transport reset event to the guest after the SCSIDevice
has been unrealized by qdev_simple_device_unplug_cb().

qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
to false so that scsi_device_find/get() no longer see it.

scsi_target_emulate_report_luns() also needs to be updated to filter out
SCSIDevices that are unrealized.

These changes ensure that the guest driver does not see the SCSIDevice
that's being unplugged if it responds very quickly to the transport
reset event.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/scsi-bus.c    |  3 ++-
 hw/scsi/virtio-scsi.c | 18 +++++++++---------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 07275fb631..64d7311757 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -486,7 +486,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
             DeviceState *qdev = kid->child;
             SCSIDevice *dev = SCSI_DEVICE(qdev);
 
-            if (dev->channel == channel && dev->id == id && dev->lun != 0) {
+            if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
+                qdev_is_realized(&dev->qdev)) {
                 store_lun(tmp, dev->lun);
                 g_byte_array_append(buf, tmp, 8);
                 len += 8;
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..000961446c 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1063,15 +1063,6 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     SCSIDevice *sd = SCSI_DEVICE(dev);
     AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
-        virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, sd,
-                               VIRTIO_SCSI_T_TRANSPORT_RESET,
-                               VIRTIO_SCSI_EVT_RESET_REMOVED);
-        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
-        virtio_scsi_release(s);
-    }
-
     aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
     aio_enable_external(ctx);
@@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
         virtio_scsi_release(s);
     }
+
+    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
+        virtio_scsi_acquire(s);
+        virtio_scsi_push_event(s, sd,
+                               VIRTIO_SCSI_T_TRANSPORT_RESET,
+                               VIRTIO_SCSI_EVT_RESET_REMOVED);
+        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
+        virtio_scsi_release(s);
+    }
 }
 
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526174.817709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMRw-0006y1-RM; Tue, 25 Apr 2023 17:27:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526174.817709; Tue, 25 Apr 2023 17:27:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMRw-0006wt-MR; Tue, 25 Apr 2023 17:27:32 +0000
Received: by outflank-mailman (input) for mailman id 526174;
 Tue, 25 Apr 2023 17:27:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMRw-0006fQ-B3
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:32 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74cbded0-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:27:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-492-aQ75ElyHO6Sd8kPgmp3dug-1; Tue, 25 Apr 2023 13:27:22 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4E5D9A0F391;
 Tue, 25 Apr 2023 17:27:21 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B514514171B8;
 Tue, 25 Apr 2023 17:27:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74cbded0-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443650;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6JkFpuuttzsHF3MmIyOMmXjxDhu1gSfxxf39pT8P3Wk=;
	b=eOuOf6anvtgtNSlbHPqYrIMaflEVxvgijYK0CdXcZ+t+EOilYXID9tCjQSc25BLZATTlVS
	2AEw35AEe5ebwEVKed9ANye+tbEhUMt2kQEBYCqj3lkEkIFBPeG/9eN3DP0/+C6bhSyEP5
	3BjO4/tzM0yUcQLmgy5Ba3iJ08QuMDI=
X-MC-Unique: aQ75ElyHO6Sd8kPgmp3dug-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 01/20] block-backend: split blk_do_set_aio_context()
Date: Tue, 25 Apr 2023 13:26:57 -0400
Message-Id: <20230425172716.1033562-2-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

blk_set_aio_context() is not fully transactional because
blk_do_set_aio_context() updates blk->ctx outside the transaction. Most
of the time this goes unnoticed, but a BlockDevOps.drained_end() callback
that invokes blk_get_aio_context() fails assert(ctx == blk->ctx). This
happens because blk->ctx is only assigned after
BlockDevOps.drained_end() is called, so we are in an intermediate state
where the BlockDriverState nodes already have the new context while the
BlockBackend still has the old one.

Making blk_set_aio_context() fully transactional solves this assertion
failure because the BlockBackend's context is updated as part of the
transaction (before BlockDevOps.drained_end() is called).

Split blk_do_set_aio_context() in order to solve this assertion failure.
This helper function actually serves two different purposes:
1. It drives blk_set_aio_context().
2. It responds to BdrvChildClass->change_aio_ctx().

Get rid of the helper function. Do #1 inside blk_set_aio_context() and
do #2 inside blk_root_set_aio_ctx_commit(). This simplifies the code.

The only drawback of the fully transactional approach is that
blk_set_aio_context() must contend with blk_root_set_aio_ctx_commit()
being invoked as part of the AioContext change propagation. This can be
solved by temporarily setting blk->allow_aio_context_change to true.

Future patches call blk_get_aio_context() from
BlockDevOps.drained_end(), which is why this change is necessary.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/block-backend.c | 71 +++++++++++++++++--------------------------
 1 file changed, 28 insertions(+), 43 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index 5566ea059d..ffd1d66f7d 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -2199,52 +2199,31 @@ static AioContext *blk_aiocb_get_aio_context(BlockAIOCB *acb)
     return blk_get_aio_context(blk_acb->blk);
 }
 
-static int blk_do_set_aio_context(BlockBackend *blk, AioContext *new_context,
-                                  bool update_root_node, Error **errp)
-{
-    BlockDriverState *bs = blk_bs(blk);
-    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
-    int ret;
-
-    if (bs) {
-        bdrv_ref(bs);
-
-        if (update_root_node) {
-            /*
-             * update_root_node MUST be false for blk_root_set_aio_ctx_commit(),
-             * as we are already in the commit function of a transaction.
-             */
-            ret = bdrv_try_change_aio_context(bs, new_context, blk->root, errp);
-            if (ret < 0) {
-                bdrv_unref(bs);
-                return ret;
-            }
-        }
-        /*
-         * Make blk->ctx consistent with the root node before we invoke any
-         * other operations like drain that might inquire blk->ctx
-         */
-        blk->ctx = new_context;
-        if (tgm->throttle_state) {
-            bdrv_drained_begin(bs);
-            throttle_group_detach_aio_context(tgm);
-            throttle_group_attach_aio_context(tgm, new_context);
-            bdrv_drained_end(bs);
-        }
-
-        bdrv_unref(bs);
-    } else {
-        blk->ctx = new_context;
-    }
-
-    return 0;
-}
-
 int blk_set_aio_context(BlockBackend *blk, AioContext *new_context,
                         Error **errp)
 {
+    bool old_allow_change;
+    BlockDriverState *bs = blk_bs(blk);
+    int ret;
+
     GLOBAL_STATE_CODE();
-    return blk_do_set_aio_context(blk, new_context, true, errp);
+
+    if (!bs) {
+        blk->ctx = new_context;
+        return 0;
+    }
+
+    bdrv_ref(bs);
+
+    old_allow_change = blk->allow_aio_context_change;
+    blk->allow_aio_context_change = true;
+
+    ret = bdrv_try_change_aio_context(bs, new_context, NULL, errp);
+
+    blk->allow_aio_context_change = old_allow_change;
+
+    bdrv_unref(bs);
+    return ret;
 }
 
 typedef struct BdrvStateBlkRootContext {
@@ -2256,8 +2235,14 @@ static void blk_root_set_aio_ctx_commit(void *opaque)
 {
     BdrvStateBlkRootContext *s = opaque;
     BlockBackend *blk = s->blk;
+    AioContext *new_context = s->new_ctx;
+    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
 
-    blk_do_set_aio_context(blk, s->new_ctx, false, &error_abort);
+    blk->ctx = new_context;
+    if (tgm->throttle_state) {
+        throttle_group_detach_aio_context(tgm);
+        throttle_group_attach_aio_context(tgm, new_context);
+    }
 }
 
 static TransactionActionDrv set_blk_root_context = {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526176.817734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMS4-0007mh-FL; Tue, 25 Apr 2023 17:27:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526176.817734; Tue, 25 Apr 2023 17:27:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMS4-0007mY-Ae; Tue, 25 Apr 2023 17:27:40 +0000
Received: by outflank-mailman (input) for mailman id 526176;
 Tue, 25 Apr 2023 17:27:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMS2-0006fQ-GZ
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:38 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 78b37e55-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:27:37 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-668-sx8UeKvTNgSkYAJ4YHtjZQ-1; Tue, 25 Apr 2023 13:27:33 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 06A6128082AC;
 Tue, 25 Apr 2023 17:27:32 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5EE9B492C13;
 Tue, 25 Apr 2023 17:27:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78b37e55-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443656;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=B3H8MiZnviPPfHHhNt1UwKF8b8RPyeQv8gQKbNPpxCA=;
	b=Oo9bBjzectlGtdKIB+P6y1Onka2xWwJDAvL6oj+QsDf44S5PJGiYCO4HVsTbmhySVnTMbn
	mECVXgQhzw0zZe3pnbaD9b0D6r/+Ln7UpoERLTKUPZr4yumLSvW1D0eKNMGxlhXy4j7kwR
	IwfWWwlXXsrFV2hQzyuoTeeJBfHRTfk=
X-MC-Unique: sx8UeKvTNgSkYAJ4YHtjZQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 05/20] util/vhost-user-server: rename refcount to in_flight counter
Date: Tue, 25 Apr 2023 13:27:01 -0400
Message-Id: <20230425172716.1033562-6-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9

The VuServer object has a refcount field and ref/unref APIs. The name is
confusing because it's actually an in-flight request counter instead of
a refcount.

Normally a refcount destroys the object upon reaching zero. The VuServer
counter is used to wake up the vhost-user coroutine when there are no
more requests.

Avoid confusion by renaming refcount and ref/unref to in_flight and
inc/dec.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/vhost-user-server.h     |  6 +++---
 block/export/vhost-user-blk-server.c | 11 +++++++----
 util/vhost-user-server.c             | 14 +++++++-------
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index 25c72433ca..bc0ac9ddb6 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -41,7 +41,7 @@ typedef struct {
     const VuDevIface *vu_iface;
 
     /* Protected by ctx lock */
-    unsigned int refcount;
+    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -60,8 +60,8 @@ bool vhost_user_server_start(VuServer *server,
 
 void vhost_user_server_stop(VuServer *server);
 
-void vhost_user_server_ref(VuServer *server);
-void vhost_user_server_unref(VuServer *server);
+void vhost_user_server_inc_in_flight(VuServer *server);
+void vhost_user_server_dec_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index e56b92f2e2..841acb36e3 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -50,7 +50,10 @@ static void vu_blk_req_complete(VuBlkReq *req, size_t in_len)
     free(req);
 }
 
-/* Called with server refcount increased, must decrease before returning */
+/*
+ * Called with server in_flight counter increased, must decrease before
+ * returning.
+ */
 static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
 {
     VuBlkReq *req = opaque;
@@ -68,12 +71,12 @@ static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
                                     in_num, out_num);
     if (in_len < 0) {
         free(req);
-        vhost_user_server_unref(server);
+        vhost_user_server_dec_in_flight(server);
         return;
     }
 
     vu_blk_req_complete(req, in_len);
-    vhost_user_server_unref(server);
+    vhost_user_server_dec_in_flight(server);
 }
 
 static void vu_blk_process_vq(VuDev *vu_dev, int idx)
@@ -95,7 +98,7 @@ static void vu_blk_process_vq(VuDev *vu_dev, int idx)
         Coroutine *co =
             qemu_coroutine_create(vu_blk_virtio_process_req, req);
 
-        vhost_user_server_ref(server);
+        vhost_user_server_inc_in_flight(server);
         qemu_coroutine_enter(co);
     }
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 5b6216069c..1622f8cfb3 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -75,16 +75,16 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
     error_report("vu_panic: %s", buf);
 }
 
-void vhost_user_server_ref(VuServer *server)
+void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->refcount++;
+    server->in_flight++;
 }
 
-void vhost_user_server_unref(VuServer *server)
+void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->refcount--;
-    if (server->wait_idle && !server->refcount) {
+    server->in_flight--;
+    if (server->wait_idle && !server->in_flight) {
         aio_co_wake(server->co_trip);
     }
 }
@@ -192,13 +192,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->refcount) {
+    if (server->in_flight) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
-    assert(server->refcount == 0);
+    assert(server->in_flight == 0);
 
     vu_deinit(vu_dev);
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526177.817740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMS4-0007qM-RY; Tue, 25 Apr 2023 17:27:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526177.817740; Tue, 25 Apr 2023 17:27:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMS4-0007q4-JP; Tue, 25 Apr 2023 17:27:40 +0000
Received: by outflank-mailman (input) for mailman id 526177;
 Tue, 25 Apr 2023 17:27:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMS3-0007l5-Gq
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:39 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 78a1ec62-e38e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 19:27:37 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-319-1GkdNZ8vNFmRijBei6Lasw-1; Tue, 25 Apr 2023 13:27:30 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 57853885626;
 Tue, 25 Apr 2023 17:27:29 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 6B06A2166B41;
 Tue, 25 Apr 2023 17:27:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78a1ec62-e38e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443656;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mEUiDYhTGG6/grGbCAPXx5jwh+m436EKZLWWN7O6BQM=;
	b=LmFcph+6LbhJpiq7MdnyDQOSbqobS1ymcEFCay5kqgB8FYCdBXin46Qdhy395T9c0pNnFe
	5vLtIQraJdzSPf7Ms7pgIK9qCZ6DONXRIzANCQuJv1unTM/n+pTsscdiMInYo3RiHvdKzv
	AekbdPhDwvvQYLCOMHbLr06SszJU1bI=
X-MC-Unique: 1GkdNZ8vNFmRijBei6Lasw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: [PATCH v4 04/20] virtio-scsi: stop using aio_disable_external() during unplug
Date: Tue, 25 Apr 2023 13:27:00 -0400
Message-Id: <20230425172716.1033562-5-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

This patch is part of an effort to remove the aio_disable_external()
API because it does not fit in a multi-queue block layer world where
many AioContexts may be submitting requests to the same disk.

The SCSI emulation code is already in good shape to stop using
aio_disable_external(). It was only used by commit 9c5aad84da1c
("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
disk") to ensure that virtio_scsi_hotunplug() works while the guest
driver is submitting I/O.

Ensure virtio_scsi_hotunplug() is safe as follows:

1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
   device_set_realized() calls qatomic_set(&dev->realized, false) so
   that future scsi_device_get() calls return NULL because they exclude
   SCSIDevices with realized=false.

   That means virtio-scsi will reject new I/O requests to this
   SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
   virtio_scsi_hotunplug() is still executing. We are protected against
   new requests!

2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
   that in-flight requests are cancelled synchronously. This ensures
   that no in-flight requests remain once qdev_simple_device_unplug_cb()
   returns.

Thanks to these two conditions we don't need aio_disable_external()
anymore.

Cc: Zhengui Li <lizhengui@huawei.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/scsi-disk.c   | 1 +
 hw/scsi/virtio-scsi.c | 3 ---
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 97c9b1c8cd..e01bd84541 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2522,6 +2522,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
 
 static void scsi_unrealize(SCSIDevice *dev)
 {
+    scsi_device_purge_requests(dev, SENSE_CODE(RESET));
     del_boot_device_lchs(&dev->qdev, NULL);
 }
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 000961446c..a02f9233ec 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1061,11 +1061,8 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
-    AioContext *ctx = s->ctx ?: qemu_get_aio_context();
 
-    aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
-    aio_enable_external(ctx);
 
     if (s->ctx) {
         virtio_scsi_acquire(s);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526178.817754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMS7-0008Lh-1z; Tue, 25 Apr 2023 17:27:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526178.817754; Tue, 25 Apr 2023 17:27:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMS6-0008LO-Uv; Tue, 25 Apr 2023 17:27:42 +0000
Received: by outflank-mailman (input) for mailman id 526178;
 Tue, 25 Apr 2023 17:27:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMS6-0007l5-6D
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:42 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7a65d7da-e38e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 19:27:40 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-157-4QWj-JVrOH6B29n66eP05w-1; Tue, 25 Apr 2023 13:27:35 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id EAA13185A7A7;
 Tue, 25 Apr 2023 17:27:33 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 6C9B3492B03;
 Tue, 25 Apr 2023 17:27:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a65d7da-e38e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443659;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LYOSkRZsiK3MtWRK+tCyHVAwWhRHynZGItZ8w7U+shA=;
	b=e1m7FHO5UOM3fhDIjBPWfsTN4IXTnQK8OEDQLyxagiL0WEZtgpK5NJ1oOuJKwEQk7zRy8P
	kGu+xXtjAHD0ZEZOc8YK8BehrR/ruKeT7aPjF2II104AXj+da1cL8XwKhBRvfaMhcnOwDx
	/7tcuyb2NQyJOP8TjkXi7gVnjSIbS98=
X-MC-Unique: 4QWj-JVrOH6B29n66eP05w-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 06/20] block/export: wait for vhost-user-blk requests when draining
Date: Tue, 25 Apr 2023 13:27:02 -0400
Message-Id: <20230425172716.1033562-7-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

Each vhost-user-blk request runs in a coroutine. When the BlockBackend
enters a drained section, we need to enter a quiescent state. Currently,
any in-flight requests race with bdrv_drained_begin() because it is
unaware of vhost-user-blk requests.

When blk_co_preadv/pwritev()/etc. returns, it wakes the
bdrv_drained_begin() thread, but vhost-user-blk request processing has
not yet finished. The request coroutine continues executing while the
main loop thread thinks it is in a drained section.

One example where this is unsafe is blk_set_aio_context(), where
bdrv_drained_begin() is called before .aio_context_detached() and
.aio_context_attach(). If request coroutines are still running after
bdrv_drained_begin(), then the AioContext could change underneath them
and they would race with new requests processed in the new AioContext.
This could lead to virtqueue corruption, for example.

(This example is theoretical; I came across it while reading the
code and have not tried to reproduce it.)

It's easy to make bdrv_drained_begin() wait for in-flight requests: add
a .drained_poll() callback that checks the VuServer's in-flight counter.
VuServer just needs an API that returns true when there are requests in
flight. The in-flight counter needs to be atomic.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/vhost-user-server.h     |  4 +++-
 block/export/vhost-user-blk-server.c | 16 ++++++++++++++++
 util/vhost-user-server.c             | 14 ++++++++++----
 3 files changed, 29 insertions(+), 5 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index bc0ac9ddb6..b1c1cda886 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -40,8 +40,9 @@ typedef struct {
     int max_queues;
     const VuDevIface *vu_iface;
 
+    unsigned int in_flight; /* atomic */
+
     /* Protected by ctx lock */
-    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -62,6 +63,7 @@ void vhost_user_server_stop(VuServer *server);
 
 void vhost_user_server_inc_in_flight(VuServer *server);
 void vhost_user_server_dec_in_flight(VuServer *server);
+bool vhost_user_server_has_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 841acb36e3..092b86aae4 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -272,7 +272,20 @@ static void vu_blk_exp_resize(void *opaque)
     vu_config_change_msg(&vexp->vu_server.vu_dev);
 }
 
+/*
+ * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
+ *
+ * Called with vexp->export.ctx acquired.
+ */
+static bool vu_blk_drained_poll(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    return vhost_user_server_has_in_flight(&vexp->vu_server);
+}
+
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_poll  = vu_blk_drained_poll,
     .resize_cb = vu_blk_exp_resize,
 };
 
@@ -314,6 +327,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
                              logical_block_size, num_queues);
 
+    blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vexp);
 
@@ -323,6 +337,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                                  num_queues, &vu_blk_iface, errp)) {
         blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
                                         blk_aio_detach, vexp);
+        blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
     }
@@ -336,6 +351,7 @@ static void vu_blk_exp_delete(BlockExport *exp)
 
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vexp);
+    blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
 
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 1622f8cfb3..2e6b640050 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
 void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->in_flight++;
+    qatomic_inc(&server->in_flight);
 }
 
 void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->in_flight--;
-    if (server->wait_idle && !server->in_flight) {
-        aio_co_wake(server->co_trip);
+    if (qatomic_fetch_dec(&server->in_flight) == 1) {
+        if (server->wait_idle) {
+            aio_co_wake(server->co_trip);
+        }
     }
 }
 
+bool vhost_user_server_has_in_flight(VuServer *server)
+{
+    return qatomic_load_acquire(&server->in_flight) > 0;
+}
+
 static bool coroutine_fn
 vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526179.817764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMS9-0000EU-C0; Tue, 25 Apr 2023 17:27:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526179.817764; Tue, 25 Apr 2023 17:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMS9-0000EK-8O; Tue, 25 Apr 2023 17:27:45 +0000
Received: by outflank-mailman (input) for mailman id 526179;
 Tue, 25 Apr 2023 17:27:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMS8-0006fQ-2L
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:44 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7c392ec3-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:27:43 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-170-F5m3ad6vPW6tX-c71MgQag-1; Tue, 25 Apr 2023 13:27:40 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id BC32D87A9E0;
 Tue, 25 Apr 2023 17:27:38 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 33E42141511D;
 Tue, 25 Apr 2023 17:27:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c392ec3-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443662;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SGtAMiwxZ6UOXxi5QjVUlwj1Vpn4MSdaUDJYH4vesKw=;
	b=LqasbpwQ5qHA4V6xcQcprrq0AyVvPhlhtiwuEzN3T2ZaaHuGj52rWp1ywlMmRsmsjfWHVe
	ScmLVFEOy2NrwRkhV7dpl32MglgQlalCj4A+XuZeeBLEv1UGKGn1MqhYgeJk217gyOXz/5
	q01/cUlg9kUJcce8epQToRj27m6X7w0=
X-MC-Unique: F5m3ad6vPW6tX-c71MgQag-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: [PATCH v4 08/20] hw/xen: do not use aio_set_fd_handler(is_external=true) in xen_xenstore
Date: Tue, 25 Apr 2023 13:27:04 -0400
Message-Id: <20230425172716.1033562-9-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

There is no need to suspend activity between aio_disable_external() and
aio_enable_external(), which are mainly used for the block layer's drain
operation.

This is part of ongoing work to remove the aio_disable_external() API.

Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/i386/kvm/xen_xenstore.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 900679af8a..6e81bc8791 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), true,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526181.817774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSA-0000Yu-Py; Tue, 25 Apr 2023 17:27:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526181.817774; Tue, 25 Apr 2023 17:27:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSA-0000Xq-KC; Tue, 25 Apr 2023 17:27:46 +0000
Received: by outflank-mailman (input) for mailman id 526181;
 Tue, 25 Apr 2023 17:27:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMS9-0007l5-1Y
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:45 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7bec23f2-e38e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 19:27:43 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-499-bHDEV7dPMI2DaMxqNeeCrQ-1; Tue, 25 Apr 2023 13:27:37 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 833723C0F367;
 Tue, 25 Apr 2023 17:27:36 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 637BF40C6E68;
 Tue, 25 Apr 2023 17:27:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bec23f2-e38e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443662;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DTceCTcDIciTAW6Fo+6Plvs7yeu7soK0WgIi8nuOQs4=;
	b=Zj24rqSLt00mldfV74SYf9OodeOZPhP5ykWc+NYJaNJius8+kGmidmxFxfAPOe+VuIZcuS
	iFG3BeNG10bo9TRjnq4ipo//2WAxh+zHIMC9oq+VndTMuhE8bcBqwLMh3B2mEQZzHKZgJo
	FjIsintsF0fPnmk12XP0i8gb88QVJJw=
X-MC-Unique: bHDEV7dPMI2DaMxqNeeCrQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 07/20] block/export: stop using is_external in vhost-user-blk server
Date: Tue, 25 Apr 2023 13:27:03 -0400
Message-Id: <20230425172716.1033562-8-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.

Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used since multiple AioContexts may be processing I/O, not
just one.

Switch to BlockDevOps->drained_begin/end() callbacks.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
 util/vhost-user-server.c             | 10 +++----
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 092b86aae4..d20f69cd74 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -208,22 +208,6 @@ static const VuDevIface vu_blk_iface = {
     .process_msg           = vu_blk_process_msg,
 };
 
-static void blk_aio_attached(AioContext *ctx, void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vexp->export.ctx = ctx;
-    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
-}
-
-static void blk_aio_detach(void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vhost_user_server_detach_aio_context(&vexp->vu_server);
-    vexp->export.ctx = NULL;
-}
-
 static void
 vu_blk_initialize_config(BlockDriverState *bs,
                          struct virtio_blk_config *config,
@@ -272,6 +256,25 @@ static void vu_blk_exp_resize(void *opaque)
     vu_config_change_msg(&vexp->vu_server.vu_dev);
 }
 
+/* Called with vexp->export.ctx acquired */
+static void vu_blk_drained_begin(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
+}
+
+/* Called with vexp->export.blk AioContext acquired */
+static void vu_blk_drained_end(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    vexp->export.ctx = blk_get_aio_context(vexp->export.blk);
+
+    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
+}
+
 /*
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
  *
@@ -285,6 +288,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end   = vu_blk_drained_end,
     .drained_poll  = vu_blk_drained_poll,
     .resize_cb = vu_blk_exp_resize,
 };
@@ -328,15 +333,11 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              logical_block_size, num_queues);
 
     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
-    blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                 vexp);
 
     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
 
     if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
                                  num_queues, &vu_blk_iface, errp)) {
-        blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
-                                        blk_aio_detach, vexp);
         blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
@@ -349,8 +350,6 @@ static void vu_blk_exp_delete(BlockExport *exp)
 {
     VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
 
-    blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                    vexp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 2e6b640050..332aea9306 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                        NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526183.817781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSB-0000j2-KF; Tue, 25 Apr 2023 17:27:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526183.817781; Tue, 25 Apr 2023 17:27:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSB-0000ib-B6; Tue, 25 Apr 2023 17:27:47 +0000
Received: by outflank-mailman (input) for mailman id 526183;
 Tue, 25 Apr 2023 17:27:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMSA-0006fQ-Ca
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:46 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7d5eb1e2-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:27:45 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-318-brbwTRKSPLCfpDHZ0sB8ug-1; Tue, 25 Apr 2023 13:27:41 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 9E4D928082A8;
 Tue, 25 Apr 2023 17:27:40 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 2173440C6E68;
 Tue, 25 Apr 2023 17:27:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d5eb1e2-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443664;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MVghegvQVUaizZCyEWjUSp2wGJFPOHKrCrNbSIUao8c=;
	b=Em/IX24eFFiHfbpx3PDpc2L6jENkaI4OWdNKG28Zks0D/3vIctydnxZ6eAUvNLbIi4Fui8
	ZSleEp5JvBSOxG+CN3ftVtvTBaxbbKxmwSaIifYeWvY7C4uT9jQ14aXH6xL0aSiPGoPYqp
	1B8STAZAQOSXE2RZwOMeiqspiP4vDw4=
X-MC-Unique: brbwTRKSPLCfpDHZ0sB8ug-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 09/20] block: add blk_in_drain() API
Date: Tue, 25 Apr 2023 13:27:05 -0400
Message-Id: <20230425172716.1033562-10-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

The BlockBackend quiesce_counter is greater than zero during drained
sections. Add an API to check whether the BlockBackend is in a drained
section.

The next patch will use this API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/sysemu/block-backend-global-state.h | 1 +
 block/block-backend.c                       | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/include/sysemu/block-backend-global-state.h b/include/sysemu/block-backend-global-state.h
index 2b6d27db7c..ac7cbd6b5e 100644
--- a/include/sysemu/block-backend-global-state.h
+++ b/include/sysemu/block-backend-global-state.h
@@ -78,6 +78,7 @@ void blk_activate(BlockBackend *blk, Error **errp);
 int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags);
 void blk_aio_cancel(BlockAIOCB *acb);
 int blk_commit_all(void);
+bool blk_in_drain(BlockBackend *blk);
 void blk_drain(BlockBackend *blk);
 void blk_drain_all(void);
 void blk_set_on_error(BlockBackend *blk, BlockdevOnError on_read_error,
diff --git a/block/block-backend.c b/block/block-backend.c
index ffd1d66f7d..42721a3592 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1266,6 +1266,13 @@ blk_check_byte_request(BlockBackend *blk, int64_t offset, int64_t bytes)
     return 0;
 }
 
+/* Are we currently in a drained section? */
+bool blk_in_drain(BlockBackend *blk)
+{
+    GLOBAL_STATE_CODE(); /* change to IO_OR_GS_CODE(), if necessary */
+    return qatomic_read(&blk->quiesce_counter);
+}
+
 /* To be called between exactly one pair of blk_inc/dec_in_flight() */
 static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526185.817794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSF-0001Vr-Ub; Tue, 25 Apr 2023 17:27:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526185.817794; Tue, 25 Apr 2023 17:27:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSF-0001VP-Ow; Tue, 25 Apr 2023 17:27:51 +0000
Received: by outflank-mailman (input) for mailman id 526185;
 Tue, 25 Apr 2023 17:27:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMSE-0006fQ-RP
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:50 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 80272708-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:27:50 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-371-trIX9XLKPRWWnULzgTRHCw-1; Tue, 25 Apr 2023 13:27:43 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 82C42A0F397;
 Tue, 25 Apr 2023 17:27:42 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 02082492B03;
 Tue, 25 Apr 2023 17:27:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80272708-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443669;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BWB8WEf8NZeINbe7UA+dJnukN7hLgv/twVbNfXFQN24=;
	b=iBvj65WC46y7UY2FK/n/dWDBPyu25N3/nCSWwwWal3myS92rcpBK8foIhWeDE09MzxuX6t
	mA0QKnN0j8PsZbs08zqJxk4Qdx7zSSEajrrxOxrZls5bz2tFVPge/e/F04FtIJOZU5dLpR
	sG3mr4mo6wv3HixueGYKt5vhbAEkLtQ=
X-MC-Unique: trIX9XLKPRWWnULzgTRHCw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 10/20] block: drain from main loop thread in bdrv_co_yield_to_drain()
Date: Tue, 25 Apr 2023 13:27:06 -0400
Message-Id: <20230425172716.1033562-11-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

For simplicity, always run BlockDevOps .drained_begin/end/poll()
callbacks in the main loop thread. This makes it easier to implement the
callbacks and avoids extra locks.

Move the function pointer declarations from the I/O Code section to the
Global State section in block-backend-common.h.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/sysemu/block-backend-common.h | 25 +++++++++++++------------
 block/io.c                            |  3 ++-
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/include/sysemu/block-backend-common.h b/include/sysemu/block-backend-common.h
index 2391679c56..780cea7305 100644
--- a/include/sysemu/block-backend-common.h
+++ b/include/sysemu/block-backend-common.h
@@ -59,6 +59,19 @@ typedef struct BlockDevOps {
      */
     bool (*is_medium_locked)(void *opaque);
 
+    /*
+     * Runs when the backend receives a drain request.
+     */
+    void (*drained_begin)(void *opaque);
+    /*
+     * Runs when the backend's last drain request ends.
+     */
+    void (*drained_end)(void *opaque);
+    /*
+     * Is the device still busy?
+     */
+    bool (*drained_poll)(void *opaque);
+
     /*
      * I/O API functions. These functions are thread-safe.
      *
@@ -76,18 +89,6 @@ typedef struct BlockDevOps {
      * Runs when the size changed (e.g. monitor command block_resize)
      */
     void (*resize_cb)(void *opaque);
-    /*
-     * Runs when the backend receives a drain request.
-     */
-    void (*drained_begin)(void *opaque);
-    /*
-     * Runs when the backend's last drain request ends.
-     */
-    void (*drained_end)(void *opaque);
-    /*
-     * Is the device still busy?
-     */
-    bool (*drained_poll)(void *opaque);
 } BlockDevOps;
 
 /*
diff --git a/block/io.c b/block/io.c
index 2e267a85ab..4f9fe2f808 100644
--- a/block/io.c
+++ b/block/io.c
@@ -335,7 +335,8 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
     if (ctx != co_ctx) {
         aio_context_release(ctx);
     }
-    replay_bh_schedule_oneshot_event(ctx, bdrv_co_drain_bh_cb, &data);
+    replay_bh_schedule_oneshot_event(qemu_get_aio_context(),
+                                     bdrv_co_drain_bh_cb, &data);
 
     qemu_coroutine_yield();
     /* If we are resumed from some other event (such as an aio completion or a
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526186.817804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSI-00022V-Ag; Tue, 25 Apr 2023 17:27:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526186.817804; Tue, 25 Apr 2023 17:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSI-00021N-5e; Tue, 25 Apr 2023 17:27:54 +0000
Received: by outflank-mailman (input) for mailman id 526186;
 Tue, 25 Apr 2023 17:27:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMSG-0007l5-8O
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:52 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 805102ac-e38e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 19:27:50 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-324-_LlzPYThP92zuiCljAn7kw-1; Tue, 25 Apr 2023 13:27:45 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A638828082A8;
 Tue, 25 Apr 2023 17:27:44 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 27D82492B10;
 Tue, 25 Apr 2023 17:27:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 805102ac-e38e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443669;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pr0VKOkDxuZ5hinADqgnbKz4Kufrp9KJiNLqpg8yv3Q=;
	b=PWBBM/H09rSu7gAuqjo4eV+haZlIkNYbsyfabYgtk4SPsfsSNcQbqxmRMfnToVXvbaxzfg
	inthcnyivnDcl275+0Cn8/xzpZnTSCC40ks1cyKGGpHKDcTDSgosw0dj61Hufkz1MlUhyQ
	cBK9ltLwJ5FHcINPM3sLmhDkP7G1FaY=
X-MC-Unique: _LlzPYThP92zuiCljAn7kw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 11/20] xen-block: implement BlockDevOps->drained_begin()
Date: Tue, 25 Apr 2023 13:27:07 -0400
Message-Id: <20230425172716.1033562-12-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

Detach event channels during drained sections to stop I/O submission
from the ring. xen-block is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.

Extend xen_device_set_event_channel_context() to allow ctx=NULL. The
event channel still exists but the event loop does not monitor the file
descriptor. Event channel processing can resume by calling
xen_device_set_event_channel_context() with a non-NULL ctx.

Factor out xen_device_set_event_channel_context() calls in
hw/block/dataplane/xen-block.c into attach/detach helper functions.
Incidentally, these don't require the AioContext lock because
aio_set_fd_handler() is thread-safe.

It's safer to register BlockDevOps after the dataplane instance has been
created. The BlockDevOps .drained_begin/end() callbacks depend on the
dataplane instance, so move the blk_set_dev_ops() call after
xen_block_dataplane_create().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/xen-block.h |  2 ++
 hw/block/dataplane/xen-block.c | 42 +++++++++++++++++++++++++---------
 hw/block/xen-block.c           | 24 ++++++++++++++++---
 hw/xen/xen-bus.c               |  7 ++++--
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/hw/block/dataplane/xen-block.h b/hw/block/dataplane/xen-block.h
index 76dcd51c3d..7b8e9df09f 100644
--- a/hw/block/dataplane/xen-block.h
+++ b/hw/block/dataplane/xen-block.h
@@ -26,5 +26,7 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
                                unsigned int protocol,
                                Error **errp);
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane);
 
 #endif /* HW_BLOCK_DATAPLANE_XEN_BLOCK_H */
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 734da42ea7..02e0fd6115 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -663,6 +663,30 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
     g_free(dataplane);
 }
 
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         NULL, &error_abort);
+}
+
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         dataplane->ctx, &error_abort);
+}
+
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 {
     XenDevice *xendev;
@@ -673,13 +697,11 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 
     xendev = dataplane->xendev;
 
-    aio_context_acquire(dataplane->ctx);
-    if (dataplane->event_channel) {
-        /* Only reason for failure is a NULL channel */
-        xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                             qemu_get_aio_context(),
-                                             &error_abort);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_detach(dataplane);
     }
+
+    aio_context_acquire(dataplane->ctx);
     /* Xen doesn't have multiple users for nodes, so this can't fail */
     blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(dataplane->ctx);
@@ -818,11 +840,9 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
     blk_set_aio_context(dataplane->blk, dataplane->ctx, NULL);
     aio_context_release(old_context);
 
-    /* Only reason for failure is a NULL channel */
-    aio_context_acquire(dataplane->ctx);
-    xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                         dataplane->ctx, &error_abort);
-    aio_context_release(dataplane->ctx);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_attach(dataplane);
+    }
 
     return;
 
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index f5a744589d..f099914831 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -189,8 +189,26 @@ static void xen_block_resize_cb(void *opaque)
     xen_device_backend_printf(xendev, "state", "%u", state);
 }
 
+/* Suspend request handling */
+static void xen_block_drained_begin(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_detach(blockdev->dataplane);
+}
+
+/* Resume request handling */
+static void xen_block_drained_end(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_attach(blockdev->dataplane);
+}
+
 static const BlockDevOps xen_block_dev_ops = {
-    .resize_cb = xen_block_resize_cb,
+    .resize_cb     = xen_block_resize_cb,
+    .drained_begin = xen_block_drained_begin,
+    .drained_end   = xen_block_drained_end,
 };
 
 static void xen_block_realize(XenDevice *xendev, Error **errp)
@@ -242,8 +260,6 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
         return;
     }
 
-    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
-
     if (conf->discard_granularity == -1) {
         conf->discard_granularity = conf->physical_block_size;
     }
@@ -277,6 +293,8 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     blockdev->dataplane =
         xen_block_dataplane_create(xendev, blk, conf->logical_block_size,
                                    blockdev->props.iothread);
+
+    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
 }
 
 static void xen_block_frontend_changed(XenDevice *xendev,
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c59850b1de..b8f408c9ed 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -846,8 +846,11 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
-                       xen_device_event, NULL, xen_device_poll, NULL, channel);
+    if (ctx) {
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
+                           true, xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
+    }
 }
 
 XenEventChannel *xen_device_bind_event_channel(XenDevice *xendev,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:27:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:27:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526192.817814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSM-0002lf-Pl; Tue, 25 Apr 2023 17:27:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526192.817814; Tue, 25 Apr 2023 17:27:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSM-0002lQ-Jd; Tue, 25 Apr 2023 17:27:58 +0000
Received: by outflank-mailman (input) for mailman id 526192;
 Tue, 25 Apr 2023 17:27:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMSL-0007l5-Cl
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:57 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 818b67b1-e38e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 19:27:52 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-196-6D_FeGHnNl-RzOiCHv9FDA-1; Tue, 25 Apr 2023 13:27:48 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E805487A9E6;
 Tue, 25 Apr 2023 17:27:46 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1CE1E2166B41;
 Tue, 25 Apr 2023 17:27:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 818b67b1-e38e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443671;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l1BfjF5WOabKn8yxsLU/Ixv2u9YtxHZYDFF52BuLnDA=;
	b=R3hAfaGqHbElLEBj5rTWRitcmgkqgDlMBepN/lfTDeTz/0w8D3OgY0R6yH+gWeB6oEVYQ/
	+Y2GqMp4Z6ntVxXawasE9Rngg0vr7/rwddFMsHfjtNn4xB6vuCnJ5L177HT/NgwZtQ4ST5
	VFHHOa61GMzklIFW8oruWmH3oaa4oao=
X-MC-Unique: 6D_FeGHnNl-RzOiCHv9FDA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 12/20] hw/xen: do not set is_external=true on evtchn fds
Date: Tue, 25 Apr 2023 13:27:08 -0400
Message-Id: <20230425172716.1033562-13-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

is_external=true suspends fd handlers between aio_disable_external() and
aio_enable_external(). The block layer's drain operation uses this
mechanism to prevent new I/O from sneaking in between
bdrv_drained_begin() and bdrv_drained_end().

The previous commit converted the xen-block device to use BlockDevOps
.drained_begin/end() callbacks. It no longer relies on is_external=true
so it is safe to pass is_external=false.

This is part of ongoing work to remove the aio_disable_external() API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/xen/xen-bus.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index b8f408c9ed..bf256d4da2 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           true, xen_device_event, NULL, xen_device_poll, NULL,
-                           channel);
+                           false, xen_device_event, NULL, xen_device_poll,
+                           NULL, channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:28:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:28:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526196.817824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSP-0003H3-6w; Tue, 25 Apr 2023 17:28:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526196.817824; Tue, 25 Apr 2023 17:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSP-0003GX-02; Tue, 25 Apr 2023 17:28:01 +0000
Received: by outflank-mailman (input) for mailman id 526196;
 Tue, 25 Apr 2023 17:27:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMSN-0007l5-DC
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:27:59 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8402edc3-e38e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 19:27:56 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-246-UQWURWHJOjqLQ6VvJalAKQ-1; Tue, 25 Apr 2023 13:27:50 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D9ABA3C0F367;
 Tue, 25 Apr 2023 17:27:48 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 58627492B0F;
 Tue, 25 Apr 2023 17:27:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8402edc3-e38e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443675;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Kvgpibm7uG8By4vHBOsqesPFxvQEinDBgjjk4WE12qY=;
	b=admB9LnPDnWhPKv/fVY0i5/DcuQGGpZyske878zB7sZvAtA+P9TcjMeFAaIQnxltzGc99B
	SR7UDwP5M1TdPoiEsoaMjXX+o823+nJ4lwEMqapQ5hJof5t0PXrVw/T/o5kzB4tZ7RA//P
	XZ0ISI4b+41LQkhi/RNeMX6nGxMqleE=
X-MC-Unique: UQWURWHJOjqLQ6VvJalAKQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 13/20] block/export: rewrite vduse-blk drain code
Date: Tue, 25 Apr 2023 13:27:09 -0400
Message-Id: <20230425172716.1033562-14-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

vduse_blk_detach_ctx() waits for in-flight requests using
AIO_WAIT_WHILE(). This is not allowed according to a comment in
bdrv_set_aio_context_commit():

  /*
   * Take the old AioContex when detaching it from bs.
   * At this point, new_context lock is already acquired, and we are now
   * also taking old_context. This is safe as long as bdrv_detach_aio_context
   * does not call AIO_POLL_WHILE().
   */

Use this opportunity to rewrite the drain code in vduse-blk:

- Use the BlockExport refcount so that vduse_blk_exp_delete() is only
  called when there are no more requests in flight.

- Implement .drained_poll() so in-flight request coroutines are stopped
  by the time .bdrv_detach_aio_context() is called.

- Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
  .bdrv_detach_aio_context() constraint violation. It's no longer
  needed due to the previous changes.

- Always handle the VDUSE file descriptor, even in drained sections. The
  VDUSE file descriptor doesn't submit I/O, so it's safe to handle it in
  drained sections. This ensures that the VDUSE kernel code gets a fast
  response.

- Suspend virtqueue fd handlers in .drained_begin() and resume them in
  .drained_end(). This eliminates the need for the
  aio_set_fd_handler(is_external=true) flag, which is being removed from
  QEMU.

This is a long list but splitting it into individual commits would
probably lead to git bisect failures - the changes are all related.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
 1 file changed, 93 insertions(+), 39 deletions(-)

diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index f7ae44e3ce..35dc8fcf45 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
     VduseDev *dev;
     uint16_t num_queues;
     char *recon_file;
-    unsigned int inflight;
+    unsigned int inflight; /* atomic */
+    bool vqs_started;
 } VduseBlkExport;
 
 typedef struct VduseBlkReq {
@@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
 
 static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
-    vblk_exp->inflight++;
+    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
+        /* Prevent export from being deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_ref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
+    }
 }
 
 static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
 {
-    if (--vblk_exp->inflight == 0) {
+    if (qatomic_fetch_dec(&vblk_exp->inflight) == 1) {
+        /* Wake AIO_WAIT_WHILE() */
         aio_wait_kick();
+
+        /* Now the export can be deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_unref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -124,8 +136,12 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
 
+    if (!vblk_exp->vqs_started) {
+        return; /* vduse_blk_drained_end() will start vqs later */
+    }
+
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick after reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -133,9 +149,14 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
+    int fd = vduse_queue_get_fd(vq);
 
-    aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, NULL, NULL, NULL, NULL, NULL);
+    if (fd < 0) {
+        return;
+    }
+
+    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+                       NULL, NULL, NULL, NULL, NULL);
 }
 
 static const VduseOps vduse_blk_ops = {
@@ -152,42 +173,19 @@ static void on_vduse_dev_kick(void *opaque)
 
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
-    int i;
-
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, on_vduse_dev_kick, NULL, NULL, NULL,
+                       false, on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd, true,
-                           on_vduse_vq_kick, NULL, NULL, NULL, vq);
-    }
+    /* Virtqueues are handled by vduse_blk_drained_end() */
 }
 
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
-    int i;
-
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd,
-                           true, NULL, NULL, NULL, NULL, NULL);
-    }
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, NULL, NULL, NULL, NULL, NULL);
+                       false, NULL, NULL, NULL, NULL, NULL);
 
-    AIO_WAIT_WHILE(vblk_exp->export.ctx, vblk_exp->inflight > 0);
+    /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
 
 
@@ -220,8 +218,55 @@ static void vduse_blk_resize(void *opaque)
                             (char *)&config.capacity);
 }
 
+static void vduse_blk_stop_virtqueues(VduseBlkExport *vblk_exp)
+{
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_disable_queue(vblk_exp->dev, vq);
+    }
+
+    vblk_exp->vqs_started = false;
+}
+
+static void vduse_blk_start_virtqueues(VduseBlkExport *vblk_exp)
+{
+    vblk_exp->vqs_started = true;
+
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_enable_queue(vblk_exp->dev, vq);
+    }
+}
+
+static void vduse_blk_drained_begin(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_stop_virtqueues(vblk_exp);
+}
+
+static void vduse_blk_drained_end(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_start_virtqueues(vblk_exp);
+}
+
+static bool vduse_blk_drained_poll(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    return qatomic_read(&vblk_exp->inflight) > 0;
+}
+
 static const BlockDevOps vduse_block_ops = {
-    .resize_cb = vduse_blk_resize,
+    .resize_cb     = vduse_blk_resize,
+    .drained_begin = vduse_blk_drained_begin,
+    .drained_end   = vduse_blk_drained_end,
+    .drained_poll  = vduse_blk_drained_poll,
 };
 
 static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
@@ -268,6 +313,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vblk_exp->handler.serial = g_strdup(vblk_opts->serial ?: "");
     vblk_exp->handler.logical_block_size = logical_block_size;
     vblk_exp->handler.writable = opts->writable;
+    vblk_exp->vqs_started = true;
 
     config.capacity =
             cpu_to_le64(blk_getlength(exp->blk) >> VIRTIO_BLK_SECTOR_BITS);
@@ -322,14 +368,20 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), true,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vblk_exp);
-
     blk_set_dev_ops(exp->blk, &vduse_block_ops, exp);
 
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * virtqueue fd handlers. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->blk, true);
+
     return 0;
 err:
     vduse_dev_destroy(vblk_exp->dev);
@@ -344,6 +396,9 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
     int ret;
 
+    assert(qatomic_read(&vblk_exp->inflight) == 0);
+
+    vduse_blk_detach_ctx(vblk_exp);
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vblk_exp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
@@ -355,13 +410,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     g_free(vblk_exp->handler.serial);
 }
 
+/* Called with exp->ctx acquired */
 static void vduse_blk_exp_request_shutdown(BlockExport *exp)
 {
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
 
-    aio_context_acquire(vblk_exp->export.ctx);
-    vduse_blk_detach_ctx(vblk_exp);
-    aio_context_release(vblk_exp->export.ctx);
+    vduse_blk_stop_virtqueues(vblk_exp);
 }
 
 const BlockExportDriver blk_exp_vduse_blk = {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:28:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:28:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526197.817831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSQ-0003TS-27; Tue, 25 Apr 2023 17:28:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526197.817831; Tue, 25 Apr 2023 17:28:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMSP-0003QU-O7; Tue, 25 Apr 2023 17:28:01 +0000
Received: by outflank-mailman (input) for mailman id 526197;
 Tue, 25 Apr 2023 17:28:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMSO-0007l5-O5
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:28:00 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 855969ce-e38e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 19:27:59 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-166-4yW8uuHRMwqs5zACvwosHQ-1; Tue, 25 Apr 2023 13:27:54 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 444E887A9E8;
 Tue, 25 Apr 2023 17:27:53 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B7927C15BA0;
 Tue, 25 Apr 2023 17:27:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 855969ce-e38e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443677;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AI5kqkdzipCjtseeQPE5/aVyAJxrbcrgigA/+qZzi/8=;
	b=idZ+gkjmvrwYeE6c6KQnRkXUNMjTDeTpBaZZPJgqnsu9vZmQOvSnD+eemSKQjmdE1NBcme
	1Ewy3ltVhx+d25ab+zfTV75mHM51VC6/j0cLXdaeocC/NZZ/+XnNGk/Yp9elP/UMUnoQNG
	+TGFajP/qC0wLb/1Z57F523FHGsb64g=
X-MC-Unique: 4yW8uuHRMwqs5zACvwosHQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 15/20] block/fuse: do not set is_external=true on FUSE fd
Date: Tue, 25 Apr 2023 13:27:11 -0400
Message-Id: <20230425172716.1033562-16-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

This is part of ongoing work to remove the aio_disable_external() API.

Use BlockDevOps .drained_begin/end/poll() instead of
aio_set_fd_handler(is_external=true).

As a side effect, the FUSE export now follows AioContext changes like the
other export types.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/fuse.c | 58 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 56 insertions(+), 2 deletions(-)

diff --git a/block/export/fuse.c b/block/export/fuse.c
index 06fa41079e..65a7f4d723 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -50,6 +50,7 @@ typedef struct FuseExport {
 
     struct fuse_session *fuse_session;
     struct fuse_buf fuse_buf;
+    unsigned int in_flight; /* atomic */
     bool mounted, fd_handler_set_up;
 
     char *mountpoint;
@@ -78,6 +79,42 @@ static void read_from_fuse_export(void *opaque);
 static bool is_regular_file(const char *path, Error **errp);
 
 
+static void fuse_export_drained_begin(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       NULL, NULL, NULL, NULL, NULL);
+    exp->fd_handler_set_up = false;
+}
+
+static void fuse_export_drained_end(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    exp->common.ctx = blk_get_aio_context(exp->common.blk);
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       read_from_fuse_export, NULL, NULL, NULL, exp);
+    exp->fd_handler_set_up = true;
+}
+
+static bool fuse_export_drained_poll(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    return qatomic_read(&exp->in_flight) > 0;
+}
+
+static const BlockDevOps fuse_export_blk_dev_ops = {
+    .drained_begin = fuse_export_drained_begin,
+    .drained_end   = fuse_export_drained_end,
+    .drained_poll  = fuse_export_drained_poll,
+};
+
 static int fuse_export_create(BlockExport *blk_exp,
                               BlockExportOptions *blk_exp_args,
                               Error **errp)
@@ -101,6 +138,15 @@ static int fuse_export_create(BlockExport *blk_exp,
         }
     }
 
+    blk_set_dev_ops(exp->common.blk, &fuse_export_blk_dev_ops, exp);
+
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * the FUSE fd handler. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->common.blk, true);
+
     init_exports_table();
 
     /*
@@ -224,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), true,
+                       fuse_session_fd(exp->fuse_session), false,
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -246,6 +292,8 @@ static void read_from_fuse_export(void *opaque)
 
     blk_exp_ref(&exp->common);
 
+    qatomic_inc(&exp->in_flight);
+
     do {
         ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
     } while (ret == -EINTR);
@@ -256,6 +304,10 @@ static void read_from_fuse_export(void *opaque)
     fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);
 
 out:
+    if (qatomic_fetch_dec(&exp->in_flight) == 1) {
+        aio_wait_kick(); /* wake AIO_WAIT_WHILE() */
+    }
+
     blk_exp_unref(&exp->common);
 }
 
@@ -268,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
         if (exp->fd_handler_set_up) {
             aio_set_fd_handler(exp->common.ctx,
-                               fuse_session_fd(exp->fuse_session), true,
+                               fuse_session_fd(exp->fuse_session), false,
                                NULL, NULL, NULL, NULL, NULL);
             exp->fd_handler_set_up = false;
         }
@@ -287,6 +339,8 @@ static void fuse_export_delete(BlockExport *blk_exp)
 {
     FuseExport *exp = container_of(blk_exp, FuseExport, common);
 
+    blk_set_dev_ops(exp->common.blk, NULL, NULL);
+
     if (exp->fuse_session) {
         if (exp->mounted) {
             fuse_session_unmount(exp->fuse_session);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:29:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:29:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526212.817843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTM-0006II-AP; Tue, 25 Apr 2023 17:29:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526212.817843; Tue, 25 Apr 2023 17:29:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTM-0006IB-7Z; Tue, 25 Apr 2023 17:29:00 +0000
Received: by outflank-mailman (input) for mailman id 526212;
 Tue, 25 Apr 2023 17:28:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMSV-0006fQ-KJ
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:28:07 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8858f246-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:28:04 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-208-MR8Wa4WXPsOHkLnVhYPEbw-1; Tue, 25 Apr 2023 13:28:00 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 33B01858289;
 Tue, 25 Apr 2023 17:27:59 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 8808C1121314;
 Tue, 25 Apr 2023 17:27:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8858f246-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443682;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=V8/JM/4FpUbs1MFSPNPGCdntPIYaUvTzToJc4P5DEws=;
	b=M3Ng3nvp7JIPp1R9UjJSFaygwF1leg0n3idtDawHu5QXB/DrESgiYLD2Ym9svyXor6mjPF
	8vDsTs4dFECE0CkRjryPexyj6sDQwl2xebguegIHnoI/agXYR+HGaqKBdOOhI68UwdAJXG
	rJONw7FvZFzsjP9XHU6hGXf6f6gEQMw=
X-MC-Unique: MR8Wa4WXPsOHkLnVhYPEbw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 18/20] virtio-scsi: implement BlockDevOps->drained_begin()
Date: Tue, 25 Apr 2023 13:27:14 -0400
Message-Id: <20230425172716.1033562-19-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3

The virtio-scsi Host Bus Adapter provides access to devices on a SCSI
bus. Those SCSI devices typically have a BlockBackend. When the
BlockBackend enters a drained section, the SCSI device must temporarily
stop submitting new I/O requests.

Implement this behavior by temporarily stopping virtio-scsi virtqueue
processing when one of the SCSI devices enters a drained section. The
new scsi_device_drained_begin() API allows scsi-disk to message the
virtio-scsi HBA.

scsi_device_drained_begin() uses a drain counter so that multiple SCSI
devices can have overlapping drained sections. The HBA only sees one
pair of .drained_begin/end() calls.

After this commit, virtio-scsi no longer depends on hw/virtio's
ioeventfd aio_set_event_notifier(is_external=true). This commit is a
step towards removing the aio_disable_external() API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/scsi/scsi.h          | 14 ++++++++++++
 hw/scsi/scsi-bus.c              | 40 +++++++++++++++++++++++++++++++++
 hw/scsi/scsi-disk.c             | 27 +++++++++++++++++-----
 hw/scsi/virtio-scsi-dataplane.c | 22 ++++++++++--------
 hw/scsi/virtio-scsi.c           | 38 +++++++++++++++++++++++++++++++
 hw/scsi/trace-events            |  2 ++
 6 files changed, 129 insertions(+), 14 deletions(-)

diff --git a/include/hw/scsi/scsi.h b/include/hw/scsi/scsi.h
index 6f23a7a73e..e2bb1a2fbf 100644
--- a/include/hw/scsi/scsi.h
+++ b/include/hw/scsi/scsi.h
@@ -133,6 +133,16 @@ struct SCSIBusInfo {
     void (*save_request)(QEMUFile *f, SCSIRequest *req);
     void *(*load_request)(QEMUFile *f, SCSIRequest *req);
     void (*free_request)(SCSIBus *bus, void *priv);
+
+    /*
+     * Temporarily stop submitting new requests between drained_begin() and
+     * drained_end(). Called from the main loop thread with the BQL held.
+     *
+     * Implement these callbacks if request processing is triggered by a file
+     * descriptor like an EventNotifier. Otherwise set them to NULL.
+     */
+    void (*drained_begin)(SCSIBus *bus);
+    void (*drained_end)(SCSIBus *bus);
 };
 
 #define TYPE_SCSI_BUS "SCSI"
@@ -144,6 +154,8 @@ struct SCSIBus {
 
     SCSISense unit_attention;
     const SCSIBusInfo *info;
+
+    int drain_count; /* protected by BQL */
 };
 
 /**
@@ -213,6 +225,8 @@ void scsi_req_cancel_complete(SCSIRequest *req);
 void scsi_req_cancel(SCSIRequest *req);
 void scsi_req_cancel_async(SCSIRequest *req, Notifier *notifier);
 void scsi_req_retry(SCSIRequest *req);
+void scsi_device_drained_begin(SCSIDevice *sdev);
+void scsi_device_drained_end(SCSIDevice *sdev);
 void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense);
 void scsi_device_set_ua(SCSIDevice *sdev, SCSISense sense);
 void scsi_device_report_change(SCSIDevice *dev, SCSISense sense);
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 64d7311757..b571fdf895 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -1668,6 +1668,46 @@ void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense)
     scsi_device_set_ua(sdev, sense);
 }
 
+void scsi_device_drained_begin(SCSIDevice *sdev)
+{
+    SCSIBus *bus = DO_UPCAST(SCSIBus, qbus, sdev->qdev.parent_bus);
+    if (!bus) {
+        return;
+    }
+
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    assert(bus->drain_count < INT_MAX);
+
+    /*
+     * Multiple BlockBackends can be on a SCSIBus and each may begin/end
+     * draining at any time. Keep a counter so HBAs only see begin/end once.
+     */
+    if (bus->drain_count++ == 0) {
+        trace_scsi_bus_drained_begin(bus, sdev);
+        if (bus->info->drained_begin) {
+            bus->info->drained_begin(bus);
+        }
+    }
+}
+
+void scsi_device_drained_end(SCSIDevice *sdev)
+{
+    SCSIBus *bus = DO_UPCAST(SCSIBus, qbus, sdev->qdev.parent_bus);
+    if (!bus) {
+        return;
+    }
+
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    assert(bus->drain_count > 0);
+
+    if (bus->drain_count-- == 1) {
+        trace_scsi_bus_drained_end(bus, sdev);
+        if (bus->info->drained_end) {
+            bus->info->drained_end(bus);
+        }
+    }
+}
+
 static char *scsibus_get_dev_path(DeviceState *dev)
 {
     SCSIDevice *d = SCSI_DEVICE(dev);
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index e01bd84541..2249087d6a 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2360,6 +2360,20 @@ static void scsi_disk_reset(DeviceState *dev)
     s->qdev.scsi_version = s->qdev.default_scsi_version;
 }
 
+static void scsi_disk_drained_begin(void *opaque)
+{
+    SCSIDiskState *s = opaque;
+
+    scsi_device_drained_begin(&s->qdev);
+}
+
+static void scsi_disk_drained_end(void *opaque)
+{
+    SCSIDiskState *s = opaque;
+
+    scsi_device_drained_end(&s->qdev);
+}
+
 static void scsi_disk_resize_cb(void *opaque)
 {
     SCSIDiskState *s = opaque;
@@ -2414,16 +2428,19 @@ static bool scsi_cd_is_medium_locked(void *opaque)
 }
 
 static const BlockDevOps scsi_disk_removable_block_ops = {
-    .change_media_cb = scsi_cd_change_media_cb,
+    .change_media_cb  = scsi_cd_change_media_cb,
+    .drained_begin    = scsi_disk_drained_begin,
+    .drained_end      = scsi_disk_drained_end,
     .eject_request_cb = scsi_cd_eject_request_cb,
-    .is_tray_open = scsi_cd_is_tray_open,
     .is_medium_locked = scsi_cd_is_medium_locked,
-
-    .resize_cb = scsi_disk_resize_cb,
+    .is_tray_open     = scsi_cd_is_tray_open,
+    .resize_cb        = scsi_disk_resize_cb,
 };
 
 static const BlockDevOps scsi_disk_block_ops = {
-    .resize_cb = scsi_disk_resize_cb,
+    .drained_begin = scsi_disk_drained_begin,
+    .drained_end   = scsi_disk_drained_end,
+    .resize_cb     = scsi_disk_resize_cb,
 };
 
 static void scsi_disk_unit_attention_reported(SCSIDevice *dev)
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 81643445ed..1060038e13 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -153,14 +153,16 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
     s->dataplane_starting = false;
     s->dataplane_started = true;
 
-    aio_context_acquire(s->ctx);
-    virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
-    virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
+    if (s->bus.drain_count == 0) {
+        aio_context_acquire(s->ctx);
+        virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
+        virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
 
-    for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        for (i = 0; i < vs->conf.num_queues; i++) {
+            virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        }
+        aio_context_release(s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;
 
 fail_host_notifiers:
@@ -206,9 +208,11 @@ void virtio_scsi_dataplane_stop(VirtIODevice *vdev)
     }
     s->dataplane_stopping = true;
 
-    aio_context_acquire(s->ctx);
-    aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
-    aio_context_release(s->ctx);
+    if (s->bus.drain_count == 0) {
+        aio_context_acquire(s->ctx);
+        aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
+        aio_context_release(s->ctx);
+    }
 
     blk_drain_all(); /* ensure there are no in-flight requests */
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index a02f9233ec..eba1e84dac 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1081,6 +1081,42 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     }
 }
 
+/* Suspend virtqueue ioeventfd processing during drain */
+static void virtio_scsi_drained_begin(SCSIBus *bus)
+{
+    VirtIOSCSI *s = container_of(bus, VirtIOSCSI, bus);
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t total_queues = VIRTIO_SCSI_VQ_NUM_FIXED +
+                            s->parent_obj.conf.num_queues;
+
+    if (!s->dataplane_started) {
+        return;
+    }
+
+    for (uint32_t i = 0; i < total_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+    }
+}
+
+/* Resume virtqueue ioeventfd processing after drain */
+static void virtio_scsi_drained_end(SCSIBus *bus)
+{
+    VirtIOSCSI *s = container_of(bus, VirtIOSCSI, bus);
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t total_queues = VIRTIO_SCSI_VQ_NUM_FIXED +
+                            s->parent_obj.conf.num_queues;
+
+    if (!s->dataplane_started) {
+        return;
+    }
+
+    for (uint32_t i = 0; i < total_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+    }
+}
+
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
     .tcq = true,
     .max_channel = VIRTIO_SCSI_MAX_CHANNEL,
@@ -1095,6 +1131,8 @@ static struct SCSIBusInfo virtio_scsi_scsi_info = {
     .get_sg_list = virtio_scsi_get_sg_list,
     .save_request = virtio_scsi_save_request,
     .load_request = virtio_scsi_load_request,
+    .drained_begin = virtio_scsi_drained_begin,
+    .drained_end = virtio_scsi_drained_end,
 };
 
 void virtio_scsi_common_realize(DeviceState *dev,
diff --git a/hw/scsi/trace-events b/hw/scsi/trace-events
index ab238293f0..bdd4e2c7c7 100644
--- a/hw/scsi/trace-events
+++ b/hw/scsi/trace-events
@@ -6,6 +6,8 @@ scsi_req_cancel(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_data(int target, int lun, int tag, int len) "target %d lun %d tag %d len %d"
 scsi_req_data_canceled(int target, int lun, int tag, int len) "target %d lun %d tag %d len %d"
 scsi_req_dequeue(int target, int lun, int tag) "target %d lun %d tag %d"
+scsi_bus_drained_begin(void *bus, void *sdev) "bus %p sdev %p"
+scsi_bus_drained_end(void *bus, void *sdev) "bus %p sdev %p"
 scsi_req_continue(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_continue_canceled(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_parsed(int target, int lun, int tag, int cmd, int mode, int xfer) "target %d lun %d tag %d command %d dir %d length %d"
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:29:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:29:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526216.817848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTM-0006MM-Kj; Tue, 25 Apr 2023 17:29:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526216.817848; Tue, 25 Apr 2023 17:29:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTM-0006LE-F2; Tue, 25 Apr 2023 17:29:00 +0000
Received: by outflank-mailman (input) for mailman id 526216;
 Tue, 25 Apr 2023 17:28:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMST-0006fQ-JU
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:28:05 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 878b602f-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:28:02 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-135-xhkpTfX3OsmcGVRiBiaH6Q-1; Tue, 25 Apr 2023 13:27:58 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2607C1C02C9C;
 Tue, 25 Apr 2023 17:27:57 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 9478314171B8;
 Tue, 25 Apr 2023 17:27:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 878b602f-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443681;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NTRadM71WOmRghvzBom8j3O5LrKwHmY9JQuXgfzMJZs=;
	b=Eo/BY+5G4tvepPg7/VqkGl66Ffv7ln2B9qN0qWNDhg7YHzIIYGcp15WFkSIDMm8Getfjmh
	UdF9L0OLgtkPuN92EVuxhFTCoPRXEdxThsJzyMA1AHBmts9IYIRMe6GqHt8QOpXZxaPQKJ
	KGYWKOLWpkA8+wB7aD7laMgUiE/8BAc=
X-MC-Unique: xhkpTfX3OsmcGVRiBiaH6Q-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 17/20] virtio-blk: implement BlockDevOps->drained_begin()
Date: Tue, 25 Apr 2023 13:27:13 -0400
Message-Id: <20230425172716.1033562-18-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

Detach ioeventfds during drained sections to stop I/O submission from
the guest. virtio-blk is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.

Take extra care to avoid attaching/detaching ioeventfds if the data
plane is started/stopped during a drained section. This should be rare,
but the mirror block job may be able to trigger it.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 17 +++++++++------
 hw/block/virtio-blk.c           | 38 ++++++++++++++++++++++++++++++++-
 2 files changed, 48 insertions(+), 7 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index bd7cc6e76b..d77fc6028c 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -245,13 +245,15 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
     }
 
     /* Get this show started by hooking up our callbacks */
-    aio_context_acquire(s->ctx);
-    for (i = 0; i < nvqs; i++) {
-        VirtQueue *vq = virtio_get_queue(s->vdev, i);
+    if (!blk_in_drain(s->conf->conf.blk)) {
+        aio_context_acquire(s->ctx);
+        for (i = 0; i < nvqs; i++) {
+            VirtQueue *vq = virtio_get_queue(s->vdev, i);
 
-        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+        }
+        aio_context_release(s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;
 
   fail_aio_context:
@@ -317,7 +319,10 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
     trace_virtio_blk_data_plane_stop(s);
 
     aio_context_acquire(s->ctx);
-    aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+
+    if (!blk_in_drain(s->conf->conf.blk)) {
+        aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+    }
 
     /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
     blk_drain(s->conf->conf.blk);
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index cefca93b31..d8dedc575c 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1109,8 +1109,44 @@ static void virtio_blk_resize(void *opaque)
     aio_bh_schedule_oneshot(qemu_get_aio_context(), virtio_resize_cb, vdev);
 }
 
+/* Suspend virtqueue ioeventfd processing during drain */
+static void virtio_blk_drained_begin(void *opaque)
+{
+    VirtIOBlock *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
+    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
+
+    if (!s->dataplane || !s->dataplane_started) {
+        return;
+    }
+
+    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_detach_host_notifier(vq, ctx);
+    }
+}
+
+/* Resume virtqueue ioeventfd processing after drain */
+static void virtio_blk_drained_end(void *opaque)
+{
+    VirtIOBlock *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
+    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
+
+    if (!s->dataplane || !s->dataplane_started) {
+        return;
+    }
+
+    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, ctx);
+    }
+}
+
 static const BlockDevOps virtio_block_ops = {
-    .resize_cb = virtio_blk_resize,
+    .resize_cb     = virtio_blk_resize,
+    .drained_begin = virtio_blk_drained_begin,
+    .drained_end   = virtio_blk_drained_end,
 };
 
 static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:29:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:29:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526219.817863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTO-0006lO-3L; Tue, 25 Apr 2023 17:29:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526219.817863; Tue, 25 Apr 2023 17:29:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTN-0006ke-Te; Tue, 25 Apr 2023 17:29:01 +0000
Received: by outflank-mailman (input) for mailman id 526219;
 Tue, 25 Apr 2023 17:28:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMTI-0007l5-53
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:28:56 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a65dc6eb-e38e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 19:28:54 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-673-veeuRtEqOMmqQHFesBK3Zg-1; Tue, 25 Apr 2023 13:28:41 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 564C310146EC;
 Tue, 25 Apr 2023 17:27:51 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 83601C15BA0;
 Tue, 25 Apr 2023 17:27:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a65dc6eb-e38e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443733;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RuyPP84PtIn0scDzjbw4vF6+iUnekek3zn9Q9Bf1Sn4=;
	b=SHnm/ofxiXQSJJm+mFY5E8cWc45r8bQAyEv0/KXZLZfDBqeKhY4IDi7oQBGl/E6nc0HcMH
	141dYfbtfLt48ubJQPLKKedqs3KDh2AljOUaz4OuXo4k3CB4Z6hOKp6NvAzz8FOT1oR02g
	wRDMFAhpV8CMPyevDb6nAfB2OtDYZHQ=
X-MC-Unique: veeuRtEqOMmqQHFesBK3Zg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 14/20] block/export: don't require AioContext lock around blk_exp_ref/unref()
Date: Tue, 25 Apr 2023 13:27:10 -0400
Message-Id: <20230425172716.1033562-15-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

The FUSE export calls blk_exp_ref/unref() without the AioContext lock.
Instead of fixing the FUSE export, adjust blk_exp_ref/unref() so they
work without the AioContext lock. This way it's less error-prone.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/export.h   |  2 ++
 block/export/export.c    | 13 ++++++-------
 block/export/vduse-blk.c |  4 ----
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/include/block/export.h b/include/block/export.h
index 7feb02e10d..f2fe0f8078 100644
--- a/include/block/export.h
+++ b/include/block/export.h
@@ -57,6 +57,8 @@ struct BlockExport {
      * Reference count for this block export. This includes strong references
      * both from the owner (qemu-nbd or the monitor) and clients connected to
      * the export.
+     *
+     * Use atomics to access this field.
      */
     int refcount;
 
diff --git a/block/export/export.c b/block/export/export.c
index 28a91c9c42..ddaf8036e5 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -201,11 +201,10 @@ fail:
     return NULL;
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_ref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    exp->refcount++;
+    assert(qatomic_read(&exp->refcount) > 0);
+    qatomic_inc(&exp->refcount);
 }
 
 /* Runs in the main thread */
@@ -227,11 +226,10 @@ static void blk_exp_delete_bh(void *opaque)
     aio_context_release(aio_context);
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_unref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    if (--exp->refcount == 0) {
+    assert(qatomic_read(&exp->refcount) > 0);
+    if (qatomic_fetch_dec(&exp->refcount) == 1) {
         /* Touch the block_exports list only in the main thread */
         aio_bh_schedule_oneshot(qemu_get_aio_context(), blk_exp_delete_bh,
                                 exp);
@@ -339,7 +337,8 @@ void qmp_block_export_del(const char *id,
     if (!has_mode) {
         mode = BLOCK_EXPORT_REMOVE_MODE_SAFE;
     }
-    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE && exp->refcount > 1) {
+    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE &&
+        qatomic_read(&exp->refcount) > 1) {
         error_setg(errp, "export '%s' still in use", exp->id);
         error_append_hint(errp, "Use mode='hard' to force client "
                           "disconnect\n");
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index 35dc8fcf45..611430afda 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -44,9 +44,7 @@ static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
     if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
         /* Prevent export from being deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_ref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -57,9 +55,7 @@ static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
         aio_wait_kick();
 
         /* Now the export can be deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_unref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:29:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:29:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526238.817874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTg-0007lG-ES; Tue, 25 Apr 2023 17:29:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526238.817874; Tue, 25 Apr 2023 17:29:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTg-0007l2-AA; Tue, 25 Apr 2023 17:29:20 +0000
Received: by outflank-mailman (input) for mailman id 526238;
 Tue, 25 Apr 2023 17:29:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMSa-0006fQ-LJ
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:28:12 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8c03efce-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:28:10 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-372-LxNHUAMcMq-A2DsTWxkDNA-1; Tue, 25 Apr 2023 13:28:05 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 607D8858289;
 Tue, 25 Apr 2023 17:28:04 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id EA7CF7ADE;
 Tue, 25 Apr 2023 17:28:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c03efce-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443689;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cyIMCQvAF20/OYjye5kCDL3cT6miBWdMHrsXrPBAXnM=;
	b=Ky3kcMjv3SCB6MDR3TZ/UUSnL7T3S2+fv/UKLNxMWza+R1nmgXIH2t9K7zpK/lw74i+xnt
	DAjoceG7tImYAx9vhEoQZkUJqMjY5Q8HCiezwQN3eE9KVU6UliVk+EEzuUX/NV0uuOBFzU
	5MDweU1bdDRAN6heaS0GI5OdX8fEJhQ=
X-MC-Unique: LxNHUAMcMq-A2DsTWxkDNA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 20/20] aio: remove aio_disable_external() API
Date: Tue, 25 Apr 2023 13:27:16 -0400
Message-Id: <20230425172716.1033562-21-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5

All callers now pass is_external=false to aio_set_fd_handler() and
aio_set_event_notifier(). The aio_disable_external() API, which
temporarily disables fd handlers that were registered with
is_external=true, is therefore dead code.

Remove aio_disable_external(), aio_enable_external(), and the
is_external arguments to aio_set_fd_handler() and
aio_set_event_notifier().

The entire test-fdmon-epoll test is removed because its sole purpose was
testing aio_disable_external().

Parts of this patch were generated using the following coccinelle
(https://coccinelle.lip6.fr/) semantic patch:

  @@
  expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
  @@
  - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
  + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)

  @@
  expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
  @@
  - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
  + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/aio.h           | 57 ---------------------------
 util/aio-posix.h              |  1 -
 block.c                       |  7 ----
 block/blkio.c                 | 15 +++----
 block/curl.c                  | 10 ++---
 block/export/fuse.c           |  8 ++--
 block/export/vduse-blk.c      | 10 ++---
 block/io.c                    |  2 -
 block/io_uring.c              |  4 +-
 block/iscsi.c                 |  3 +-
 block/linux-aio.c             |  4 +-
 block/nfs.c                   |  5 +--
 block/nvme.c                  |  8 ++--
 block/ssh.c                   |  4 +-
 block/win32-aio.c             |  6 +--
 hw/i386/kvm/xen_xenstore.c    |  2 +-
 hw/virtio/virtio.c            |  6 +--
 hw/xen/xen-bus.c              |  8 ++--
 io/channel-command.c          |  6 +--
 io/channel-file.c             |  3 +-
 io/channel-socket.c           |  3 +-
 migration/rdma.c              | 16 ++++----
 tests/unit/test-aio.c         | 27 +------------
 tests/unit/test-bdrv-drain.c  |  1 -
 tests/unit/test-fdmon-epoll.c | 73 -----------------------------------
 util/aio-posix.c              | 20 +++-------
 util/aio-win32.c              |  8 +---
 util/async.c                  |  3 +-
 util/fdmon-epoll.c            | 18 +++------
 util/fdmon-io_uring.c         |  8 +---
 util/fdmon-poll.c             |  3 +-
 util/main-loop.c              |  7 ++--
 util/qemu-coroutine-io.c      |  7 ++--
 util/vhost-user-server.c      | 11 +++---
 tests/unit/meson.build        |  3 --
 35 files changed, 82 insertions(+), 295 deletions(-)
 delete mode 100644 tests/unit/test-fdmon-epoll.c

diff --git a/include/block/aio.h b/include/block/aio.h
index 543717f294..bb38f0753f 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -231,8 +231,6 @@ struct AioContext {
      */
     QEMUTimerListGroup tlg;
 
-    int external_disable_cnt;
-
     /* Number of AioHandlers without .io_poll() */
     int poll_disable_cnt;
 
@@ -475,7 +473,6 @@ bool aio_poll(AioContext *ctx, bool blocking);
  */
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -491,7 +488,6 @@ void aio_set_fd_handler(AioContext *ctx,
  */
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            bool is_external,
                             EventNotifierHandler *io_read,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready);
@@ -620,59 +616,6 @@ static inline void aio_timer_init(AioContext *ctx,
  */
 int64_t aio_compute_timeout(AioContext *ctx);
 
-/**
- * aio_disable_external:
- * @ctx: the aio context
- *
- * Disable the further processing of external clients.
- */
-static inline void aio_disable_external(AioContext *ctx)
-{
-    qatomic_inc(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_enable_external:
- * @ctx: the aio context
- *
- * Enable the processing of external clients.
- */
-static inline void aio_enable_external(AioContext *ctx)
-{
-    int old;
-
-    old = qatomic_fetch_dec(&ctx->external_disable_cnt);
-    assert(old > 0);
-    if (old == 1) {
-        /* Kick event loop so it re-arms file descriptors */
-        aio_notify(ctx);
-    }
-}
-
-/**
- * aio_external_disabled:
- * @ctx: the aio context
- *
- * Return true if the external clients are disabled.
- */
-static inline bool aio_external_disabled(AioContext *ctx)
-{
-    return qatomic_read(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_node_check:
- * @ctx: the aio context
- * @is_external: Whether or not the checked node is an external event source.
- *
- * Check if the node's is_external flag is okay to be polled by the ctx at this
- * moment. True means green light.
- */
-static inline bool aio_node_check(AioContext *ctx, bool is_external)
-{
-    return !is_external || !qatomic_read(&ctx->external_disable_cnt);
-}
-
 /**
  * aio_co_schedule:
  * @ctx: the aio context
diff --git a/util/aio-posix.h b/util/aio-posix.h
index 80b927c7f4..4264c518be 100644
--- a/util/aio-posix.h
+++ b/util/aio-posix.h
@@ -38,7 +38,6 @@ struct AioHandler {
 #endif
     int64_t poll_idle_timeout; /* when to stop userspace polling */
     bool poll_ready; /* has polling detected an event? */
-    bool is_external;
 };
 
 /* Add a handler to a ready list */
diff --git a/block.c b/block.c
index d79a52ca74..608c99a219 100644
--- a/block.c
+++ b/block.c
@@ -7268,9 +7268,6 @@ static void bdrv_detach_aio_context(BlockDriverState *bs)
         bs->drv->bdrv_detach_aio_context(bs);
     }
 
-    if (bs->quiesce_counter) {
-        aio_enable_external(bs->aio_context);
-    }
     bs->aio_context = NULL;
 }
 
@@ -7280,10 +7277,6 @@ static void bdrv_attach_aio_context(BlockDriverState *bs,
     BdrvAioNotifier *ban, *ban_tmp;
     GLOBAL_STATE_CODE();
 
-    if (bs->quiesce_counter) {
-        aio_disable_external(new_context);
-    }
-
     bs->aio_context = new_context;
 
     if (bs->drv && bs->drv->bdrv_attach_aio_context) {
diff --git a/block/blkio.c b/block/blkio.c
index 0cdc99a729..72117fa005 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -306,23 +306,18 @@ static void blkio_attach_aio_context(BlockDriverState *bs,
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(new_context,
-                       s->completion_fd,
-                       false,
-                       blkio_completion_fd_read,
-                       NULL,
+    aio_set_fd_handler(new_context, s->completion_fd,
+                       blkio_completion_fd_read, NULL,
                        blkio_completion_fd_poll,
-                       blkio_completion_fd_poll_ready,
-                       bs);
+                       blkio_completion_fd_poll_ready, bs);
 }
 
 static void blkio_detach_aio_context(BlockDriverState *bs)
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(bdrv_get_aio_context(bs),
-                       s->completion_fd,
-                       false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(bdrv_get_aio_context(bs), s->completion_fd, NULL, NULL,
+                       NULL, NULL, NULL);
 }
 
 /* Call with s->blkio_lock held to submit I/O after enqueuing a new request */
diff --git a/block/curl.c b/block/curl.c
index 8bb39a134e..0fc42d03d7 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -132,7 +132,7 @@ static gboolean curl_drop_socket(void *key, void *value, void *opaque)
     CURLSocket *socket = value;
     BDRVCURLState *s = socket->s;
 
-    aio_set_fd_handler(s->aio_context, socket->fd, false,
+    aio_set_fd_handler(s->aio_context, socket->fd,
                        NULL, NULL, NULL, NULL, NULL);
     return true;
 }
@@ -180,20 +180,20 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     trace_curl_sock_cb(action, (int)fd);
     switch (action) {
         case CURL_POLL_IN:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                curl_multi_do, NULL, NULL, NULL, socket);
             break;
         case CURL_POLL_OUT:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                NULL, curl_multi_do, NULL, NULL, socket);
             break;
         case CURL_POLL_INOUT:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                curl_multi_do, curl_multi_do,
                                NULL, NULL, socket);
             break;
         case CURL_POLL_REMOVE:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                NULL, NULL, NULL, NULL, NULL);
             break;
     }
diff --git a/block/export/fuse.c b/block/export/fuse.c
index 65a7f4d723..5c75c9407e 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -84,7 +84,7 @@ static void fuse_export_drained_begin(void *opaque)
     FuseExport *exp = opaque;
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        NULL, NULL, NULL, NULL, NULL);
     exp->fd_handler_set_up = false;
 }
@@ -97,7 +97,7 @@ static void fuse_export_drained_end(void *opaque)
     exp->common.ctx = blk_get_aio_context(exp->common.blk);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 }
@@ -270,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -320,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
         if (exp->fd_handler_set_up) {
             aio_set_fd_handler(exp->common.ctx,
-                               fuse_session_fd(exp->fuse_session), false,
+                               fuse_session_fd(exp->fuse_session),
                                NULL, NULL, NULL, NULL, NULL);
             exp->fd_handler_set_up = false;
         }
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index 611430afda..048bdcbfb6 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -137,7 +137,7 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
     }
 
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick after reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -151,7 +151,7 @@ static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
         return;
     }
 
-    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+    aio_set_fd_handler(vblk_exp->export.ctx, fd,
                        NULL, NULL, NULL, NULL, NULL);
 }
 
@@ -170,7 +170,7 @@ static void on_vduse_dev_kick(void *opaque)
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, on_vduse_dev_kick, NULL, NULL, NULL,
+                       on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
     /* Virtqueues are handled by vduse_blk_drained_end() */
@@ -179,7 +179,7 @@ static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
 
     /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
@@ -364,7 +364,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev),
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
diff --git a/block/io.c b/block/io.c
index 4f9fe2f808..9affddb3a0 100644
--- a/block/io.c
+++ b/block/io.c
@@ -361,7 +361,6 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent,
 
     /* Stop things in parent-to-child order */
     if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
-        aio_disable_external(bdrv_get_aio_context(bs));
         bdrv_parent_drained_begin(bs, parent);
         if (bs->drv && bs->drv->bdrv_drain_begin) {
             bs->drv->bdrv_drain_begin(bs);
@@ -414,7 +413,6 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
             bs->drv->bdrv_drain_end(bs);
         }
         bdrv_parent_drained_end(bs, parent);
-        aio_enable_external(bdrv_get_aio_context(bs));
     }
 }
 
diff --git a/block/io_uring.c b/block/io_uring.c
index 973e15d876..3a07215ded 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -399,7 +399,7 @@ int coroutine_fn luring_co_submit(BlockDriverState *bs, LuringState *s, int fd,
 
 void luring_detach_aio_context(LuringState *s, AioContext *old_context)
 {
-    aio_set_fd_handler(old_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(old_context, s->ring.ring_fd,
                        NULL, NULL, NULL, NULL, s);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
@@ -409,7 +409,7 @@ void luring_attach_aio_context(LuringState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_luring_completion_bh, s);
-    aio_set_fd_handler(s->aio_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(s->aio_context, s->ring.ring_fd,
                        qemu_luring_completion_cb, NULL,
                        qemu_luring_poll_cb, qemu_luring_poll_ready, s);
 }
diff --git a/block/iscsi.c b/block/iscsi.c
index 9fc0bed90b..34f97ab646 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -363,7 +363,6 @@ iscsi_set_events(IscsiLun *iscsilun)
 
     if (ev != iscsilun->events) {
         aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsi),
-                           false,
                            (ev & POLLIN) ? iscsi_process_read : NULL,
                            (ev & POLLOUT) ? iscsi_process_write : NULL,
                            NULL, NULL,
@@ -1540,7 +1539,7 @@ static void iscsi_detach_aio_context(BlockDriverState *bs)
     IscsiLun *iscsilun = bs->opaque;
 
     aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsilun->iscsi),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     iscsilun->events = 0;
 
     if (iscsilun->nop_timer) {
diff --git a/block/linux-aio.c b/block/linux-aio.c
index d2cfb7f523..d4a9e21a11 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -438,7 +438,7 @@ int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
 
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &s->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &s->e, NULL, NULL, NULL);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
 }
@@ -447,7 +447,7 @@ void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_laio_completion_bh, s);
-    aio_set_event_notifier(new_context, &s->e, false,
+    aio_set_event_notifier(new_context, &s->e,
                            qemu_laio_completion_cb,
                            qemu_laio_poll_cb,
                            qemu_laio_poll_ready);
diff --git a/block/nfs.c b/block/nfs.c
index 006045d71a..8f89ece69f 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -195,7 +195,6 @@ static void nfs_set_events(NFSClient *client)
     int ev = nfs_which_events(client->context);
     if (ev != client->events) {
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false,
                            (ev & POLLIN) ? nfs_process_read : NULL,
                            (ev & POLLOUT) ? nfs_process_write : NULL,
                            NULL, NULL, client);
@@ -373,7 +372,7 @@ static void nfs_detach_aio_context(BlockDriverState *bs)
     NFSClient *client = bs->opaque;
 
     aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     client->events = 0;
 }
 
@@ -391,7 +390,7 @@ static void nfs_client_close(NFSClient *client)
     if (client->context) {
         qemu_mutex_lock(&client->mutex);
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false, NULL, NULL, NULL, NULL, NULL);
+                           NULL, NULL, NULL, NULL, NULL);
         qemu_mutex_unlock(&client->mutex);
         if (client->fh) {
             nfs_close(client->context, client->fh);
diff --git a/block/nvme.c b/block/nvme.c
index 5b744c2bda..17937d398d 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -862,7 +862,7 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
     }
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     if (!nvme_identify(bs, namespace, errp)) {
@@ -948,7 +948,7 @@ static void nvme_close(BlockDriverState *bs)
     g_free(s->queues);
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
     event_notifier_cleanup(&s->irq_notifier[MSIX_SHARED_IRQ_IDX]);
     qemu_vfio_pci_unmap_bar(s->vfio, 0, s->bar0_wo_map,
                             0, sizeof(NvmeBar) + NVME_DOORBELL_SIZE);
@@ -1546,7 +1546,7 @@ static void nvme_detach_aio_context(BlockDriverState *bs)
 
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
 }
 
 static void nvme_attach_aio_context(BlockDriverState *bs,
@@ -1556,7 +1556,7 @@ static void nvme_attach_aio_context(BlockDriverState *bs,
 
     s->aio_context = new_context;
     aio_set_event_notifier(new_context, &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     for (unsigned i = 0; i < s->queue_count; i++) {
diff --git a/block/ssh.c b/block/ssh.c
index b3b3352075..2748253d4a 100644
--- a/block/ssh.c
+++ b/block/ssh.c
@@ -1019,7 +1019,7 @@ static void restart_coroutine(void *opaque)
     AioContext *ctx = bdrv_get_aio_context(bs);
 
     trace_ssh_restart_coroutine(restart->co);
-    aio_set_fd_handler(ctx, s->sock, false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(ctx, s->sock, NULL, NULL, NULL, NULL, NULL);
 
     aio_co_wake(restart->co);
 }
@@ -1049,7 +1049,7 @@ static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs)
     trace_ssh_co_yield(s->sock, rd_handler, wr_handler);
 
     aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock,
-                       false, rd_handler, wr_handler, NULL, NULL, &restart);
+                       rd_handler, wr_handler, NULL, NULL, &restart);
     qemu_coroutine_yield();
     trace_ssh_co_yield_back(s->sock);
 }
diff --git a/block/win32-aio.c b/block/win32-aio.c
index ee87d6048f..6327861e1d 100644
--- a/block/win32-aio.c
+++ b/block/win32-aio.c
@@ -174,7 +174,7 @@ int win32_aio_attach(QEMUWin32AIOState *aio, HANDLE hfile)
 void win32_aio_detach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &aio->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &aio->e, NULL, NULL, NULL);
     aio->aio_ctx = NULL;
 }
 
@@ -182,8 +182,8 @@ void win32_aio_attach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *new_context)
 {
     aio->aio_ctx = new_context;
-    aio_set_event_notifier(new_context, &aio->e, false,
-                           win32_aio_completion_cb, NULL, NULL);
+    aio_set_event_notifier(new_context, &aio->e, win32_aio_completion_cb,
+                           NULL, NULL);
 }
 
 QEMUWin32AIOState *win32_aio_init(void)
diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 6e81bc8791..0b189c6ab8 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh),
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 9cdad7e550..d48e240c37 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, NULL, NULL, NULL);
     /* Test and clear notifier after disabling event,
      * in case poll callback didn't have time to run. */
     virtio_queue_host_notifier_read(&vq->host_notifier);
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index bf256d4da2..1e08cf027a 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           false, xen_device_event, NULL, xen_device_poll,
-                           NULL, channel);
+                           xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
diff --git a/io/channel-command.c b/io/channel-command.c
index e7edd091af..7ed726c802 100644
--- a/io/channel-command.c
+++ b/io/channel-command.c
@@ -337,10 +337,8 @@ static void qio_channel_command_set_aio_fd_handler(QIOChannel *ioc,
                                                    void *opaque)
 {
     QIOChannelCommand *cioc = QIO_CHANNEL_COMMAND(ioc);
-    aio_set_fd_handler(ctx, cioc->readfd, false,
-                       io_read, NULL, NULL, NULL, opaque);
-    aio_set_fd_handler(ctx, cioc->writefd, false,
-                       NULL, io_write, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->readfd, io_read, NULL, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->writefd, NULL, io_write, NULL, NULL, opaque);
 }
 
 
diff --git a/io/channel-file.c b/io/channel-file.c
index d76663e6ae..8b5821f452 100644
--- a/io/channel-file.c
+++ b/io/channel-file.c
@@ -198,8 +198,7 @@ static void qio_channel_file_set_aio_fd_handler(QIOChannel *ioc,
                                                 void *opaque)
 {
     QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
-    aio_set_fd_handler(ctx, fioc->fd, false, io_read, io_write,
-                       NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, fioc->fd, io_read, io_write, NULL, NULL, opaque);
 }
 
 static GSource *qio_channel_file_create_watch(QIOChannel *ioc,
diff --git a/io/channel-socket.c b/io/channel-socket.c
index b0ea7d48b3..d99945ebec 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -899,8 +899,7 @@ static void qio_channel_socket_set_aio_fd_handler(QIOChannel *ioc,
                                                   void *opaque)
 {
     QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
-    aio_set_fd_handler(ctx, sioc->fd, false,
-                       io_read, io_write, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, sioc->fd, io_read, io_write, NULL, NULL, opaque);
 }
 
 static GSource *qio_channel_socket_create_watch(QIOChannel *ioc,
diff --git a/migration/rdma.c b/migration/rdma.c
index 0af5e944f0..4149662fc6 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3105,15 +3105,15 @@ static void qio_channel_rdma_set_aio_fd_handler(QIOChannel *ioc,
 {
     QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
     if (io_read) {
-        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
     } else {
-        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
     }
 }
 
diff --git a/tests/unit/test-aio.c b/tests/unit/test-aio.c
index 321d7ab01a..519440eed3 100644
--- a/tests/unit/test-aio.c
+++ b/tests/unit/test-aio.c
@@ -130,7 +130,7 @@ static void *test_acquire_thread(void *opaque)
 static void set_event_notifier(AioContext *ctx, EventNotifier *notifier,
                                EventNotifierHandler *handler)
 {
-    aio_set_event_notifier(ctx, notifier, false, handler, NULL, NULL);
+    aio_set_event_notifier(ctx, notifier, handler, NULL, NULL);
 }
 
 static void dummy_notifier_read(EventNotifier *n)
@@ -383,30 +383,6 @@ static void test_flush_event_notifier(void)
     event_notifier_cleanup(&data.e);
 }
 
-static void test_aio_external_client(void)
-{
-    int i, j;
-
-    for (i = 1; i < 3; i++) {
-        EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
-        event_notifier_init(&data.e, false);
-        aio_set_event_notifier(ctx, &data.e, true, event_ready_cb, NULL, NULL);
-        event_notifier_set(&data.e);
-        for (j = 0; j < i; j++) {
-            aio_disable_external(ctx);
-        }
-        for (j = 0; j < i; j++) {
-            assert(!aio_poll(ctx, false));
-            assert(event_notifier_test_and_clear(&data.e));
-            event_notifier_set(&data.e);
-            aio_enable_external(ctx);
-        }
-        assert(aio_poll(ctx, false));
-        set_event_notifier(ctx, &data.e, NULL);
-        event_notifier_cleanup(&data.e);
-    }
-}
-
 static void test_wait_event_notifier_noflush(void)
 {
     EventNotifierTestData data = { .n = 0 };
@@ -935,7 +911,6 @@ int main(int argc, char **argv)
     g_test_add_func("/aio/event/wait",              test_wait_event_notifier);
     g_test_add_func("/aio/event/wait/no-flush-cb",  test_wait_event_notifier_noflush);
     g_test_add_func("/aio/event/flush",             test_flush_event_notifier);
-    g_test_add_func("/aio/external-client",         test_aio_external_client);
     g_test_add_func("/aio/timer/schedule",          test_timer_schedule);
 
     g_test_add_func("/aio/coroutine/queue-chaining", test_queue_chaining);
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index d9d3807062..5c89169e46 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -435,7 +435,6 @@ static void test_graph_change_drain_all(void)
 
     g_assert_cmpint(bs_b->quiesce_counter, ==, 0);
     g_assert_cmpint(b_s->drain_count, ==, 0);
-    g_assert_cmpint(qemu_get_aio_context()->external_disable_cnt, ==, 0);
 
     bdrv_unref(bs_b);
     blk_unref(blk_b);
diff --git a/tests/unit/test-fdmon-epoll.c b/tests/unit/test-fdmon-epoll.c
deleted file mode 100644
index ef5a856d09..0000000000
--- a/tests/unit/test-fdmon-epoll.c
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * fdmon-epoll tests
- *
- * Copyright (c) 2020 Red Hat, Inc.
- */
-
-#include "qemu/osdep.h"
-#include "block/aio.h"
-#include "qapi/error.h"
-#include "qemu/main-loop.h"
-
-static AioContext *ctx;
-
-static void dummy_fd_handler(EventNotifier *notifier)
-{
-    event_notifier_test_and_clear(notifier);
-}
-
-static void add_event_notifiers(EventNotifier *notifiers, size_t n)
-{
-    for (size_t i = 0; i < n; i++) {
-        event_notifier_init(&notifiers[i], false);
-        aio_set_event_notifier(ctx, &notifiers[i], false,
-                               dummy_fd_handler, NULL, NULL);
-    }
-}
-
-static void remove_event_notifiers(EventNotifier *notifiers, size_t n)
-{
-    for (size_t i = 0; i < n; i++) {
-        aio_set_event_notifier(ctx, &notifiers[i], false, NULL, NULL, NULL);
-        event_notifier_cleanup(&notifiers[i]);
-    }
-}
-
-/* Check that fd handlers work when external clients are disabled */
-static void test_external_disabled(void)
-{
-    EventNotifier notifiers[100];
-
-    /* fdmon-epoll is only enabled when many fd handlers are registered */
-    add_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
-
-    event_notifier_set(&notifiers[0]);
-    assert(aio_poll(ctx, true));
-
-    aio_disable_external(ctx);
-    event_notifier_set(&notifiers[0]);
-    assert(aio_poll(ctx, true));
-    aio_enable_external(ctx);
-
-    remove_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
-}
-
-int main(int argc, char **argv)
-{
-    /*
-     * This code relies on the fact that fdmon-io_uring disables itself when
-     * the glib main loop is in use. The main loop uses fdmon-poll and upgrades
-     * to fdmon-epoll when the number of fds exceeds a threshold.
-     */
-    qemu_init_main_loop(&error_fatal);
-    ctx = qemu_get_aio_context();
-
-    while (g_main_context_iteration(NULL, false)) {
-        /* Do nothing */
-    }
-
-    g_test_init(&argc, &argv, NULL);
-    g_test_add_func("/fdmon-epoll/external-disabled", test_external_disabled);
-    return g_test_run();
-}
diff --git a/util/aio-posix.c b/util/aio-posix.c
index a8be940f76..934b1bbb85 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -99,7 +99,6 @@ static bool aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
 
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -144,7 +143,6 @@ void aio_set_fd_handler(AioContext *ctx,
         new_node->io_poll = io_poll;
         new_node->io_poll_ready = io_poll_ready;
         new_node->opaque = opaque;
-        new_node->is_external = is_external;
 
         if (is_new) {
             new_node->pfd.fd = fd;
@@ -196,12 +194,11 @@ static void aio_set_fd_poll(AioContext *ctx, int fd,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            bool is_external,
                             EventNotifierHandler *io_read,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready)
 {
-    aio_set_fd_handler(ctx, event_notifier_get_fd(notifier), is_external,
+    aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
                        (IOHandler *)io_read, NULL, io_poll,
                        (IOHandler *)io_poll_ready, notifier);
 }
@@ -285,13 +282,11 @@ bool aio_pending(AioContext *ctx)
 
         /* TODO should this check poll ready? */
         revents = node->pfd.revents & node->pfd.events;
-        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
-            aio_node_check(ctx, node->is_external)) {
+        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
             result = true;
             break;
         }
-        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
-            aio_node_check(ctx, node->is_external)) {
+        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
             result = true;
             break;
         }
@@ -350,9 +345,7 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
         QLIST_INSERT_HEAD(&ctx->poll_aio_handlers, node, node_poll);
     }
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
-        poll_ready && revents == 0 &&
-        aio_node_check(ctx, node->is_external) &&
-        node->io_poll_ready) {
+        poll_ready && revents == 0 && node->io_poll_ready) {
         node->io_poll_ready(node->opaque);
 
         /*
@@ -364,7 +357,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
 
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
         (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
-        aio_node_check(ctx, node->is_external) &&
         node->io_read) {
         node->io_read(node->opaque);
 
@@ -375,7 +367,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
     }
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
         (revents & (G_IO_OUT | G_IO_ERR)) &&
-        aio_node_check(ctx, node->is_external) &&
         node->io_write) {
         node->io_write(node->opaque);
         progress = true;
@@ -436,8 +427,7 @@ static bool run_poll_handlers_once(AioContext *ctx,
     AioHandler *tmp;
 
     QLIST_FOREACH_SAFE(node, &ctx->poll_aio_handlers, node_poll, tmp) {
-        if (aio_node_check(ctx, node->is_external) &&
-            node->io_poll(node->opaque)) {
+        if (node->io_poll(node->opaque)) {
             aio_add_poll_ready_handler(ready_list, node);
 
             node->poll_idle_timeout = now + POLL_IDLE_INTERVAL_NS;
diff --git a/util/aio-win32.c b/util/aio-win32.c
index 6bded009a4..948ef47a4d 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -32,7 +32,6 @@ struct AioHandler {
     GPollFD pfd;
     int deleted;
     void *opaque;
-    bool is_external;
     QLIST_ENTRY(AioHandler) node;
 };
 
@@ -64,7 +63,6 @@ static void aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
 
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -111,7 +109,6 @@ void aio_set_fd_handler(AioContext *ctx,
         node->opaque = opaque;
         node->io_read = io_read;
         node->io_write = io_write;
-        node->is_external = is_external;
 
         if (io_read) {
             bitmask |= FD_READ | FD_ACCEPT | FD_CLOSE;
@@ -135,7 +132,6 @@ void aio_set_fd_handler(AioContext *ctx,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *e,
-                            bool is_external,
                             EventNotifierHandler *io_notify,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready)
@@ -161,7 +157,6 @@ void aio_set_event_notifier(AioContext *ctx,
             node->e = e;
             node->pfd.fd = (uintptr_t)event_notifier_get_handle(e);
             node->pfd.events = G_IO_IN;
-            node->is_external = is_external;
             QLIST_INSERT_HEAD_RCU(&ctx->aio_handlers, node, node);
 
             g_source_add_poll(&ctx->source, &node->pfd);
@@ -368,8 +363,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     /* fill fd sets */
     count = 0;
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
-        if (!node->deleted && node->io_notify
-            && aio_node_check(ctx, node->is_external)) {
+        if (!node->deleted && node->io_notify) {
             assert(count < MAXIMUM_WAIT_OBJECTS);
             events[count++] = event_notifier_get_handle(node->e);
         }
diff --git a/util/async.c b/util/async.c
index 21016a1ac7..be0726038e 100644
--- a/util/async.c
+++ b/util/async.c
@@ -377,7 +377,7 @@ aio_ctx_finalize(GSource     *source)
         g_free(bh);
     }
 
-    aio_set_event_notifier(ctx, &ctx->notifier, false, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL, NULL);
     event_notifier_cleanup(&ctx->notifier);
     qemu_rec_mutex_destroy(&ctx->lock);
     qemu_lockcnt_destroy(&ctx->list_lock);
@@ -561,7 +561,6 @@ AioContext *aio_context_new(Error **errp)
     QSLIST_INIT(&ctx->scheduled_coroutines);
 
     aio_set_event_notifier(ctx, &ctx->notifier,
-                           false,
                            aio_context_notifier_cb,
                            aio_context_notifier_poll,
                            aio_context_notifier_poll_ready);
diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c
index 1683aa1105..6b6a1a91f8 100644
--- a/util/fdmon-epoll.c
+++ b/util/fdmon-epoll.c
@@ -64,11 +64,6 @@ static int fdmon_epoll_wait(AioContext *ctx, AioHandlerList *ready_list,
     int i, ret = 0;
     struct epoll_event events[128];
 
-    /* Fall back while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return fdmon_poll_ops.wait(ctx, ready_list, timeout);
-    }
-
     if (timeout > 0) {
         ret = qemu_poll_ns(&pfd, 1, timeout);
         if (ret > 0) {
@@ -133,13 +128,8 @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned npfd)
         return false;
     }
 
-    /* Do not upgrade while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return false;
-    }
-
     if (npfd < EPOLL_ENABLE_THRESHOLD) {
         return false;
     }
 
     /* The list must not change while we add fds to epoll */
diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
index ab43052dd7..17ec18b7bd 100644
--- a/util/fdmon-io_uring.c
+++ b/util/fdmon-io_uring.c
@@ -276,11 +276,6 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
     unsigned wait_nr = 1; /* block until at least one cqe is ready */
     int ret;
 
-    /* Fall back while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return fdmon_poll_ops.wait(ctx, ready_list, timeout);
-    }
-
     if (timeout == 0) {
         wait_nr = 0; /* non-blocking */
     } else if (timeout > 0) {
@@ -315,8 +310,7 @@ static bool fdmon_io_uring_need_wait(AioContext *ctx)
         return true;
     }
 
-    /* Are we falling back to fdmon-poll? */
-    return qatomic_read(&ctx->external_disable_cnt);
+    return false;
 }
 
 static const FDMonOps fdmon_io_uring_ops = {
diff --git a/util/fdmon-poll.c b/util/fdmon-poll.c
index 5fe3b47865..17df917cf9 100644
--- a/util/fdmon-poll.c
+++ b/util/fdmon-poll.c
@@ -65,8 +65,7 @@ static int fdmon_poll_wait(AioContext *ctx, AioHandlerList *ready_list,
     assert(npfd == 0);
 
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
-        if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events
-                && aio_node_check(ctx, node->is_external)) {
+        if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events) {
             add_pollfd(node);
         }
     }
diff --git a/util/main-loop.c b/util/main-loop.c
index e180c85145..3e43a9cd38 100644
--- a/util/main-loop.c
+++ b/util/main-loop.c
@@ -642,14 +642,13 @@ void qemu_set_fd_handler(int fd,
                          void *opaque)
 {
     iohandler_init();
-    aio_set_fd_handler(iohandler_ctx, fd, false,
-                       fd_read, fd_write, NULL, NULL, opaque);
+    aio_set_fd_handler(iohandler_ctx, fd, fd_read, fd_write, NULL, NULL,
+                       opaque);
 }
 
 void event_notifier_set_handler(EventNotifier *e,
                                 EventNotifierHandler *handler)
 {
     iohandler_init();
-    aio_set_event_notifier(iohandler_ctx, e, false,
-                           handler, NULL, NULL);
+    aio_set_event_notifier(iohandler_ctx, e, handler, NULL, NULL);
 }
diff --git a/util/qemu-coroutine-io.c b/util/qemu-coroutine-io.c
index d791932d63..364f4d5abf 100644
--- a/util/qemu-coroutine-io.c
+++ b/util/qemu-coroutine-io.c
@@ -74,8 +74,7 @@ typedef struct {
 static void fd_coroutine_enter(void *opaque)
 {
     FDYieldUntilData *data = opaque;
-    aio_set_fd_handler(data->ctx, data->fd, false,
-                       NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(data->ctx, data->fd, NULL, NULL, NULL, NULL, NULL);
     qemu_coroutine_enter(data->co);
 }
 
@@ -87,7 +86,7 @@ void coroutine_fn yield_until_fd_readable(int fd)
     data.ctx = qemu_get_current_aio_context();
     data.co = qemu_coroutine_self();
     data.fd = fd;
-    aio_set_fd_handler(
-        data.ctx, fd, false, fd_coroutine_enter, NULL, NULL, NULL, &data);
+    aio_set_fd_handler(data.ctx, fd, fd_coroutine_enter, NULL, NULL, NULL,
+                       &data);
     qemu_coroutine_yield();
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 332aea9306..9ba19121a2 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,8 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, false,
-                       NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(server->ioc->ctx, fd, NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
     g_free(vu_fd_watch);
@@ -362,7 +361,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +402,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +416,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
diff --git a/tests/unit/meson.build b/tests/unit/meson.build
index 3bc78d8660..b33298a444 100644
--- a/tests/unit/meson.build
+++ b/tests/unit/meson.build
@@ -122,9 +122,6 @@ if have_block
   if nettle.found() or gcrypt.found()
     tests += {'test-crypto-pbkdf': [io]}
   endif
-  if config_host_data.get('CONFIG_EPOLL_CREATE1')
-    tests += {'test-fdmon-epoll': [testblock]}
-  endif
 endif
 
 if have_system
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:29:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:29:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526239.817881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTg-0007p2-SO; Tue, 25 Apr 2023 17:29:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526239.817881; Tue, 25 Apr 2023 17:29:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTg-0007nQ-IW; Tue, 25 Apr 2023 17:29:20 +0000
Received: by outflank-mailman (input) for mailman id 526239;
 Tue, 25 Apr 2023 17:29:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMSW-0007l5-LW
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:28:08 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8a0f6fa9-e38e-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 19:28:06 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-453-xunltWRNMdyqImzyKHy4uA-1; Tue, 25 Apr 2023 13:28:02 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 315EC185A791;
 Tue, 25 Apr 2023 17:28:01 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 9F13CC15BA0;
 Tue, 25 Apr 2023 17:28:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a0f6fa9-e38e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443685;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8sGNuhNmUwwRYy7CrP+NJgAMgjXbbyx47QaYmFQ/Pvo=;
	b=J9MyOaTReKFIdTcL/owuHs4VIGDofp1Ujq5pw8wLbexx0G8M6Ufc4tWxdFUBcBsjUw9EIY
	5nVJXrqmoEz3fVhpryBo01dLWu/QEMq+MZdRazw0676FtTDRwCsA6eCcPEaYg9FRakj3pE
	mzcWy+CRasB3Mp205NrypT85WlbwhW0=
X-MC-Unique: xunltWRNMdyqImzyKHy4uA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 19/20] virtio: do not set is_external=true on host notifiers
Date: Tue, 25 Apr 2023 13:27:15 -0400
Message-Id: <20230425172716.1033562-20-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

Host notifiers can now use is_external=false since virtio-blk and
virtio-scsi no longer rely on is_external=true for drained sections.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/virtio/virtio.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 272d930721..9cdad7e550 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
     /* Test and clear notifier before after disabling event,
      * in case poll callback didn't have time to run. */
     virtio_queue_host_notifier_read(&vq->host_notifier);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:29:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:29:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526240.817884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTh-0007ya-6p; Tue, 25 Apr 2023 17:29:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526240.817884; Tue, 25 Apr 2023 17:29:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMTh-0007uh-1s; Tue, 25 Apr 2023 17:29:21 +0000
Received: by outflank-mailman (input) for mailman id 526240;
 Tue, 25 Apr 2023 17:29:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Bb6=AQ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1prMSS-0006fQ-JV
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:28:04 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8778516b-e38e-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:28:02 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-307-Lmx5UjrjNP6ebcnT7dlP9g-1; Tue, 25 Apr 2023 13:27:56 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 234BE185A7A7;
 Tue, 25 Apr 2023 17:27:55 +0000 (UTC)
Received: from localhost (unknown [10.39.193.242])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 995BA492B03;
 Tue, 25 Apr 2023 17:27:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8778516b-e38e-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1682443681;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Qs0Wu6hxc7j58DwPPRdDhx+qIxo2MCSBXMmfJ14jr7A=;
	b=h5KKjKc2m6DuNvVJE8O+TqaGUD2iAG6DpNUHa6S1bfdoNUuNkFJ616eo/Un6D6eCy+O5Rr
	S3MEyZ64fCUdEZidqDCitm0xsRXEF5gURknxBufySWSGN/ETm8KfEy8Cg8N6ZgugHuY/gt
	oOQSUp7/OIYgkOOrbIYUbHyiXnEQ4qg=
X-MC-Unique: Lmx5UjrjNP6ebcnT7dlP9g-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>,
	Paul Durrant <paul@xen.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: [PATCH v4 16/20] virtio: make it possible to detach host notifier from any thread
Date: Tue, 25 Apr 2023 13:27:12 -0400
Message-Id: <20230425172716.1033562-17-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

virtio_queue_aio_detach_host_notifier() does two things:
1. It removes the fd handler from the event loop.
2. It processes the virtqueue one last time.

The first step can be performed by any thread and without taking the
AioContext lock.

The second step may need the AioContext lock (depending on the device
implementation) and runs in the thread where request processing takes
place. virtio-blk and virtio-scsi therefore call
virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in
the AioContext.

Scheduling a BH is undesirable for .drained_begin() functions. The next
patch will introduce a .drained_begin() function that needs to call
virtio_queue_aio_detach_host_notifier().

Move the virtqueue processing out to the callers of
virtio_queue_aio_detach_host_notifier() so that the function can be
called from any thread. This is in preparation for the next patch.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 2 ++
 hw/scsi/virtio-scsi-dataplane.c | 9 +++++++++
 2 files changed, 11 insertions(+)
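The detach-then-drain pattern that the callers now follow can be modelled with a plain Linux eventfd and epoll set, independent of the QEMU APIs. This is only a sketch of the idea (the function name and the use of raw eventfd/epoll are illustrative, not QEMU code): step 1 removes the fd from the event loop, which any thread may do; step 2 reads the counter one last time so a "kick" that raced with the detach is not lost.

```c
#include <assert.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Detach an event notifier from an event loop and drain it, returning
 * the counter value observed by the final read (0 if nothing pending). */
static uint64_t detach_and_drain(void)
{
    int epfd = epoll_create1(0);
    int efd = eventfd(0, EFD_NONBLOCK);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = efd };
    assert(epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev) == 0);

    /* A "kick" arrives while the notifier is being detached. */
    uint64_t one = 1;
    assert(write(efd, &one, sizeof(one)) == sizeof(one));

    /* Step 1: remove the fd from the event loop (safe from any thread). */
    assert(epoll_ctl(epfd, EPOLL_CTL_DEL, efd, NULL) == 0);

    /* Step 2: process one last time, in the request-processing thread:
     * read the counter so the pending notification is not lost. */
    uint64_t val = 0;
    ssize_t n = read(efd, &val, sizeof(val));
    if (n != sizeof(val)) {
        val = 0; /* nothing was pending */
    }

    close(efd);
    close(epfd);
    return val;
}
```

Splitting the two steps this way is what lets step 1 move into a .drained_begin() callback while step 2 stays with the device's processing context.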

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index b28d81737e..bd7cc6e76b 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -286,8 +286,10 @@ static void virtio_blk_data_plane_stop_bh(void *opaque)
 
     for (i = 0; i < s->conf->num_queues; i++) {
         VirtQueue *vq = virtio_get_queue(s->vdev, i);
+        EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
 
         virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }
 
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 20bb91766e..81643445ed 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -71,12 +71,21 @@ static void virtio_scsi_dataplane_stop_bh(void *opaque)
 {
     VirtIOSCSI *s = opaque;
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
+    EventNotifier *host_notifier;
     int i;
 
     virtio_queue_aio_detach_host_notifier(vs->ctrl_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->ctrl_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     virtio_queue_aio_detach_host_notifier(vs->event_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->event_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     for (i = 0; i < vs->conf.num_queues; i++) {
         virtio_queue_aio_detach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        host_notifier = virtio_queue_get_host_notifier(vs->cmd_vqs[i]);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:36:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:36:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526257.817903 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMaP-0002e0-8C; Tue, 25 Apr 2023 17:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526257.817903; Tue, 25 Apr 2023 17:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMaP-0002dt-4W; Tue, 25 Apr 2023 17:36:17 +0000
Received: by outflank-mailman (input) for mailman id 526257;
 Tue, 25 Apr 2023 17:36:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1prMaN-0002dn-OE
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:36:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1prMaM-0002fq-FS; Tue, 25 Apr 2023 17:36:14 +0000
Received: from [54.239.6.184] (helo=[192.168.17.85])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1prMaM-0008SB-8l; Tue, 25 Apr 2023 17:36:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=LRP3BF92yluZBIpqFABAu2xc1JQLuJuRtqH56OuTS54=; b=Bx+847RVm8qZMaO+s6ElAIzNIh
	2rGhtulOk3AviFSmbDjs+FspcXeuuWh8Pg+h6DpiAusXEptLl1aCpv/zyFOSQeQPXalLz56AjUj0a
	GWJPA9H9MBSwUYc7Gug0RlnzKjvae/tdfK9yDz6WE302tphX+a4Rw1pd8jaVKwTTM3qw=;
Message-ID: <d27bed23-bd4b-9a50-a0cf-7c3d4d57da22@xen.org>
Date: Tue, 25 Apr 2023 18:36:12 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 03/13] tools/xenstore: introduce accounting data array
 for per-domain values
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-4-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230405070349.25293-4-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 05/04/2023 08:03, Juergen Gross wrote:
> Introduce the scheme of an accounting data array for per-domain
> accounting data and use it initially for the number of nodes owned by
> a domain.
> 
> Make the accounting data type to be unsigned int, as no data is allowed
> to be negative at any time.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:48:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526270.817914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMlp-0004Ba-9K; Tue, 25 Apr 2023 17:48:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526270.817914; Tue, 25 Apr 2023 17:48:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMlp-0004BT-64; Tue, 25 Apr 2023 17:48:05 +0000
Received: by outflank-mailman (input) for mailman id 526270;
 Tue, 25 Apr 2023 17:48:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CnGU=AQ=citrix.com=prvs=4724fc120=jennifer.herbert@srs-se1.protection.inumbo.net>)
 id 1prMlo-0004BN-5n
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:48:04 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 511b254e-e391-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 19:48:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 511b254e-e391-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682444881;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=rAcd38lHfyxjyo5VODfhs5oaQerCPZKZ+8xu0A03MtA=;
  b=htfzPCyROck2+J4gqUtnSgU+rgFJmXaJ5fttxrFBHkx2cemxrgOON64T
   kxX7Lt5zzKwvNmIjOYJflDQ4mKwJzxMM6zxkpDWINCXbVKEXRJwbqKc7s
   FxNzZN3Sa9RYMol5BA7SwUTQDkKgtyrAXz5Q9TG3hZ9PUoOT0WrWiiiwr
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106160674
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:FMuS3KCjaBGeqhVW/yfjw5YqxClBgxIJ4kV8jS/XYbTApDIkhTQHx
 2BJWj3TMv3fM2P8etF+aojgpBwOvJKBndJlQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G9C4wRlDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw8+lNP0dC6
 eQjNBtQTRTborO4y4iFY7w57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTIJAzmuGpiHTlNT1VsliYv7Yf6GnP1g1hlrPqNbI5f/TTHZsMwB7G+
 T6uE2LRDxZFNoOV9Qe++GPy2/XJmWDHZb4YG+jtnhJtqALKnTFCYPEMbnO5rP+/i0CzQZRfJ
 lYe9zAyhaMz6Fa7CNL6WnWQsHOC+xIRRddUO+k78x2WjLrZ5R6DAWoJRSIHb8Yp3Oc0TzE30
 l6Cn/vyGCdi9raSTBqgGqy89G3of3JPdClbOHFCFFFeizX+nG0tphvAdOhFHLKttcHeRBL0m
 xXboiMEuZxG2KbnyJ6HEUD7byOE/8aZFFVku12KDgpJ/SsiOtf7OtXABUzzqK8Zcd3HFgTpU
 G0swZD20QwYMX2aeMVhqs0pFarh2fuKOSa0bbVHT8h4rGTFF5JOkOltDNBCyKRBaJxslcfBO
 hO7hO+ozMY70IGWRaF2eZmtLM8h0LLtE9/oPtiNMIoUOcYoLl/Won0/DaJ144wKuBlErE3CE
 c3DLZbE4YgyUsyLMwZat89CiOR2l0jSNEvYRIzhzgTP7IdykEW9EO9fWHPXN7BR0U9xiFmNm
 zqpH5fQmko3vSyXSnW/zLP/2nhRfCRqXsCm+50OHgNBSyI/cFwc5zbq6etJU+RYc259zY8kI
 lnVtpdk9WfC
IronPort-HdrOrdr: A9a23:H0ew0awlmu5nMriXeLKOKrPwE71zdoMgy1knxilNoNJuA7Wlfq
 GV7YwmPHrP4gr5N0tQ/OxoVJPwI080sKQFgrX5Xo3CYOCFghrNEGgK1+KLqAEIWRefygc379
 YGT0ERMqyXMbG4t6rHCcuDfurIDOPpzElgv4nj80s=
X-Talos-CUID: 9a23:al8FO2+pWIopg9AaYPCVv0cbIP4DfmHF92v7fmy8UUM4Rravd3bFrQ==
X-Talos-MUID: 9a23:UB2oWglFB6sviIPeLERFdnpvM5hqvK6SNXsHgMtboc+AdhBNBz2S2WE=
X-IronPort-AV: E=Sophos;i="5.99,226,1677560400"; 
   d="scan'208";a="106160674"
From: Jennifer Herbert <jennifer.herbert@citrix.com>
To: <jennifer.herbert@citrix.com>, Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jennifer Herbert <jennifer.herbert@citrix.com>
Subject: [PATCH v3 2/2] acpi: Add TPM2 interface definition.
Date: Tue, 25 Apr 2023 17:47:33 +0000
Message-ID: <20230425174733.795961-3-jennifer.herbert@citrix.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230425174733.795961-1-jennifer.herbert@citrix.com>
References: <20230425174733.795961-1-jennifer.herbert@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

This patch introduces an optional TPM 2.0 interface definition to the ACPI
tables, for use as part of a vTPM 2.0 implementation.

Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
---
 docs/misc/xenstore-paths.pandoc |  3 ++-
 tools/firmware/hvmloader/util.c |  9 ++++++++
 tools/libacpi/Makefile          |  3 ++-
 tools/libacpi/acpi2_0.h         | 32 +++++++++++++++++++++++++++
 tools/libacpi/build.c           | 39 +++++++++++++++++++++++++++++++++
 tools/libacpi/libacpi.h         |  1 +
 tools/libacpi/ssdt_tpm2.asl     | 36 ++++++++++++++++++++++++++++++
 7 files changed, 121 insertions(+), 2 deletions(-)
 create mode 100644 tools/libacpi/ssdt_tpm2.asl
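The fixed field offsets of the TPM2 table added below follow from the standard 36-byte ACPI SDT header plus the field list in struct acpi_20_tpm2. As a standalone sanity check of that layout (the header struct here is the generic ACPI one, not copied from libacpi, and both structs are packed because log_area_start_address falls on a 4-byte boundary):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Generic 36-byte ACPI SDT header. */
struct acpi_header {
    uint32_t signature;
    uint32_t length;
    uint8_t  revision;
    uint8_t  checksum;
    char     oem_id[6];
    char     oem_table_id[8];
    uint32_t oem_revision;
    uint32_t creator_id;
    uint32_t creator_revision;
} __attribute__((packed));

/* TPM2 table with the optional log area fields, as in the patch. */
struct acpi_20_tpm2 {
    struct acpi_header header;
    uint16_t platform_class;
    uint16_t reserved;
    uint64_t control_area_address;
    uint32_t start_method;
    uint8_t  start_method_params[12];
    uint32_t log_area_minimum_length;
    uint64_t log_area_start_address;
} __attribute__((packed));
```

With these definitions the table comes out at 76 bytes, the size of a revision-4 TPM2 table that carries the LAML/LASA fields.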

diff --git a/docs/misc/xenstore-paths.pandoc b/docs/misc/xenstore-paths.pandoc
index e67e164855..bffb8ea544 100644
--- a/docs/misc/xenstore-paths.pandoc
+++ b/docs/misc/xenstore-paths.pandoc
@@ -274,7 +274,8 @@ circumstances where the generation ID needs to be changed.
 
 The TPM version to be probed for.
 
-A value of 1 indicates to probe for TPM 1.2.
+A value of 1 indicates to probe for TPM 1.2, whereas a value of 2
+indicates that a TPM 2.0 using CRB should be probed.
 A value of 0 or an invalid value will result in no TPM being probed.
 If unset, a default of 1 is assumed.
 
diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
index f39a8e584f..51272530fe 100644
--- a/tools/firmware/hvmloader/util.c
+++ b/tools/firmware/hvmloader/util.c
@@ -1009,6 +1009,15 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
         config->table_flags |= ACPI_HAS_TPM;
         config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
         break;
+
+    case 2:
+        config->table_flags |= ACPI_HAS_TPM;
+        config->crb_id = (uint16_t *)TPM_CRB_INTF_ID;
+
+        mem_hole_populate_ram(TPM_LOG_AREA_ADDRESS >> PAGE_SHIFT,
+                              TPM_LOG_SIZE >> PAGE_SHIFT);
+        memset((void *)TPM_LOG_AREA_ADDRESS, 0, TPM_LOG_SIZE);
+        break;
     }
 
     config->numa.nr_vmemranges = nr_vmemranges;
diff --git a/tools/libacpi/Makefile b/tools/libacpi/Makefile
index 60860eaa00..23278f6a61 100644
--- a/tools/libacpi/Makefile
+++ b/tools/libacpi/Makefile
@@ -25,7 +25,8 @@ C_SRC-$(CONFIG_X86) = dsdt_anycpu.c dsdt_15cpu.c dsdt_anycpu_qemu_xen.c dsdt_pvh
 C_SRC-$(CONFIG_ARM_64) = dsdt_anycpu_arm.c
 DSDT_FILES ?= $(C_SRC-y)
 C_SRC = $(addprefix $(ACPI_BUILD_DIR)/, $(DSDT_FILES))
-H_SRC = $(addprefix $(ACPI_BUILD_DIR)/, ssdt_s3.h ssdt_s4.h ssdt_pm.h ssdt_tpm.h ssdt_laptop_slate.h)
+H_SRC = $(addprefix $(ACPI_BUILD_DIR)/, ssdt_s3.h ssdt_s4.h ssdt_pm.h)
+H_SRC += $(addprefix $(ACPI_BUILD_DIR)/, ssdt_tpm.h ssdt_tpm2.h ssdt_laptop_slate.h)
 
 MKDSDT_CFLAGS-$(CONFIG_ARM_64) = -DCONFIG_ARM_64
 MKDSDT_CFLAGS-$(CONFIG_X86) = -DCONFIG_X86
diff --git a/tools/libacpi/acpi2_0.h b/tools/libacpi/acpi2_0.h
index 2619ba32db..19a43d4b2e 100644
--- a/tools/libacpi/acpi2_0.h
+++ b/tools/libacpi/acpi2_0.h
@@ -121,6 +121,36 @@ struct acpi_20_tcpa {
 };
 #define ACPI_2_0_TCPA_LAML_SIZE (64*1024)
 
+/*
+ * TPM2
+ */
+struct acpi_20_tpm2 {
+    struct acpi_header header;
+    uint16_t platform_class;
+    uint16_t reserved;
+    uint64_t control_area_address;
+    uint32_t start_method;
+    uint8_t start_method_params[12];
+    uint32_t log_area_minimum_length;
+    uint64_t log_area_start_address;
+};
+#define TPM2_ACPI_CLASS_CLIENT      0
+#define TPM2_START_METHOD_CRB       7
+
+/* TPM memory-mapped register region, whose location is defined in the
+ * TCG PC Client Platform TPM Profile Specification for TPM 2.0.
+ * See table 9 - only Locality 0 is used here. This is emulated by QEMU.
+ * The layout of the register space is defined in table 12.
+ */
+#define TPM_REGISTER_BASE           0xFED40000
+#define TPM_CRB_CTRL_REQ            (TPM_REGISTER_BASE  + 0x40)
+#define TPM_CRB_INTF_ID             (TPM_REGISTER_BASE  + 0x30)
+
+#define TPM_LOG_AREA_ADDRESS        0xFED50000
+
+#define TPM_LOG_AREA_MINIMUM_SIZE   (64 << 10)
+#define TPM_LOG_SIZE                (64 << 10)
+
 /*
  * Fixed ACPI Description Table Structure (FADT) in ACPI 1.0.
  */
@@ -431,6 +461,7 @@ struct acpi_20_slit {
 #define ACPI_2_0_RSDT_SIGNATURE ASCII32('R','S','D','T')
 #define ACPI_2_0_XSDT_SIGNATURE ASCII32('X','S','D','T')
 #define ACPI_2_0_TCPA_SIGNATURE ASCII32('T','C','P','A')
+#define ACPI_2_0_TPM2_SIGNATURE ASCII32('T','P','M','2')
 #define ACPI_2_0_HPET_SIGNATURE ASCII32('H','P','E','T')
 #define ACPI_2_0_WAET_SIGNATURE ASCII32('W','A','E','T')
 #define ACPI_2_0_SRAT_SIGNATURE ASCII32('S','R','A','T')
@@ -444,6 +475,7 @@ struct acpi_20_slit {
 #define ACPI_2_0_RSDT_REVISION 0x01
 #define ACPI_2_0_XSDT_REVISION 0x01
 #define ACPI_2_0_TCPA_REVISION 0x02
+#define ACPI_2_0_TPM2_REVISION 0x04
 #define ACPI_2_0_HPET_REVISION 0x01
 #define ACPI_2_0_WAET_REVISION 0x01
 #define ACPI_1_0_FADT_REVISION 0x01
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index 716cb49624..359a4dbba4 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -19,6 +19,7 @@
 #include "ssdt_s3.h"
 #include "ssdt_s4.h"
 #include "ssdt_tpm.h"
+#include "ssdt_tpm2.h"
 #include "ssdt_pm.h"
 #include "ssdt_laptop_slate.h"
 #include <xen/hvm/hvm_info_table.h>
@@ -352,6 +353,7 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
     struct acpi_20_tcpa *tcpa;
     unsigned char *ssdt;
     void *lasa;
+    struct acpi_20_tpm2 *tpm2;
 
     /* MADT. */
     if ( (config->hvminfo->nr_vcpus > 1) || config->hvminfo->apic_mode )
@@ -450,6 +452,43 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
                              tcpa->header.length);
             }
             break;
+
+        case 2:
+            /* Check the VID stored in bits 47:32 (3rd 16-bit word) of the
+             * CRB interface identifier register.  See table 16 of the TCG
+             * PC Client Platform TPM Profile Specification for TPM 2.0.
+             */
+            if ( config->crb_id[2] == 0 || config->crb_id[2] == 0xffff )
+                break;
+
+            ssdt = ctxt->mem_ops.alloc(ctxt, sizeof(ssdt_tpm2), 16);
+            if (!ssdt) return -1;
+            memcpy(ssdt, ssdt_tpm2, sizeof(ssdt_tpm2));
+            table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
+
+            tpm2 = ctxt->mem_ops.alloc(ctxt, sizeof(struct acpi_20_tpm2), 16);
+            if (!tpm2) return -1;
+            memset(tpm2, 0, sizeof(*tpm2));
+            table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, tpm2);
+
+            tpm2->header.signature = ACPI_2_0_TPM2_SIGNATURE;
+            tpm2->header.length    = sizeof(*tpm2);
+            tpm2->header.revision  = ACPI_2_0_TPM2_REVISION;
+            fixed_strcpy(tpm2->header.oem_id, ACPI_OEM_ID);
+            fixed_strcpy(tpm2->header.oem_table_id, ACPI_OEM_TABLE_ID);
+            tpm2->header.oem_revision = ACPI_OEM_REVISION;
+            tpm2->header.creator_id   = ACPI_CREATOR_ID;
+            tpm2->header.creator_revision = ACPI_CREATOR_REVISION;
+            tpm2->platform_class = TPM2_ACPI_CLASS_CLIENT;
+            tpm2->control_area_address = TPM_CRB_CTRL_REQ;
+            tpm2->start_method = TPM2_START_METHOD_CRB;
+            tpm2->log_area_minimum_length = TPM_LOG_AREA_MINIMUM_SIZE;
+            tpm2->log_area_start_address = TPM_LOG_AREA_ADDRESS;
+
+            set_checksum(tpm2,
+                         offsetof(struct acpi_header, checksum),
+                         tpm2->header.length);
+            break;
         }
     }
 
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index f69452401f..0d19f9fc4d 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -80,6 +80,7 @@ struct acpi_config {
     const struct hvm_info_table *hvminfo;
 
     const uint16_t *tis_hdr;
+    const uint16_t *crb_id;
 
     /*
      * Address where acpi_info should be placed.
diff --git a/tools/libacpi/ssdt_tpm2.asl b/tools/libacpi/ssdt_tpm2.asl
new file mode 100644
index 0000000000..1801c338df
--- /dev/null
+++ b/tools/libacpi/ssdt_tpm2.asl
@@ -0,0 +1,36 @@
+/*
+ * ssdt_tpm2.asl
+ *
+ * Copyright (c) 2018-2022, Citrix Systems, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+/* SSDT for TPM CRB Interface for Xen with Qemu device model. */
+
+DefinitionBlock ("SSDT_TPM2.aml", "SSDT", 2, "Xen", "HVM", 0)
+{
+    Device (TPM)
+    {
+        Name (_HID, "MSFT0101" /* TPM 2.0 Security Device */)  // _HID: Hardware ID
+        Name (_CRS, ResourceTemplate ()  // _CRS: Current Resource Settings
+        {
+            Memory32Fixed (ReadWrite,
+                0xFED40000,         // Address Base
+                0x00001000,         // Address Length
+                )
+        })
+        Method (_STA, 0, NotSerialized)  // _STA: Status
+        {
+            Return (0x0F)
+        }
+    }
+}
-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:48:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526272.817929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMlt-0004Y3-UF; Tue, 25 Apr 2023 17:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526272.817929; Tue, 25 Apr 2023 17:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMlt-0004Va-Qq; Tue, 25 Apr 2023 17:48:09 +0000
Received: by outflank-mailman (input) for mailman id 526272;
 Tue, 25 Apr 2023 17:48:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CnGU=AQ=citrix.com=prvs=4724fc120=jennifer.herbert@srs-se1.protection.inumbo.net>)
 id 1prMls-0004QK-PX
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:48:08 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 565722ed-e391-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:48:07 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 565722ed-e391-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682444887;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=iJpmjT8+u6Su+cj0ubRBWk9TtvgNO4TNYN5OVbKekIY=;
  b=CDb6I9NCCHFbW6lkynZmkTbzed73zFGUKijFKsophjiBr5lkMbT6fLmW
   GTnkdFdRHhWNXAWybF1KMg6rixG+CrxpiRTTTfx7Nz/i8Tmurx/Nc76ae
   tvyZq8dEtL70sk/jAPvdtRGDgzEa4fPXxEjSrQoSRbszqj0rcXb7/gO3N
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107228331
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Jennifer Herbert <jennifer.herbert@citrix.com>
To: <jennifer.herbert@citrx.com>, Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jennifer Herbert <jennifer.herbert@citrix.com>
Subject: [PATCH v3 1/2] acpi: Make TPM version configurable.
Date: Tue, 25 Apr 2023 17:47:32 +0000
Message-ID: <20230425174733.795961-2-jennifer.herbert@citrix.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230425174733.795961-1-jennifer.herbert@citrix.com>
References: <20230425174733.795961-1-jennifer.herbert@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

This patch makes the TPM version for which the ACPI library probes configurable.
If acpi_config.tpm_version is set to 1, TPM 1.2 (TCPA) is probed for.
I have also added an option to hvmloader for setting this new config field,
triggered by the platform/tpm_version xenstore key.

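As a minimal illustration of the intended parsing behaviour (a standalone
sketch, not code from the patch - parse_tpm_version is a hypothetical helper;
the real code reads the key with xenstore_read() and stores the result in
acpi_config.tpm_version):

```c
#include <stdlib.h>

/*
 * Hypothetical helper sketching how hvmloader derives tpm_version from
 * the platform/tpm_version xenstore key.  An unset key is read back as
 * the default string "1"; any value that does not parse to 1 results
 * in no TPM tables being built by this patch.
 */
static unsigned int parse_tpm_version(const char *s)
{
    long long v = strtoll(s ? s : "1", NULL, 0);

    return (v == 1) ? 1 : 0;   /* only TPM 1.2 is handled here */
}
```

The follow-up patch extends the same switch with a case for version 2.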
Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
---
 docs/misc/xenstore-paths.pandoc |  9 +++++
 tools/firmware/hvmloader/util.c | 19 ++++++---
 tools/libacpi/build.c           | 69 +++++++++++++++++++--------------
 tools/libacpi/libacpi.h         |  3 +-
 4 files changed, 64 insertions(+), 36 deletions(-)

diff --git a/docs/misc/xenstore-paths.pandoc b/docs/misc/xenstore-paths.pandoc
index 5cd5c8a3b9..e67e164855 100644
--- a/docs/misc/xenstore-paths.pandoc
+++ b/docs/misc/xenstore-paths.pandoc
@@ -269,6 +269,15 @@ at the guest physical address in HVM_PARAM_VM_GENERATION_ID_ADDR.
 See Microsoft's "Virtual Machine Generation ID" specification for the
 circumstances where the generation ID needs to be changed.
 
+
+#### ~/platform/tpm_version = INTEGER [HVM,INTERNAL]
+
+The TPM version to be probed for.
+
+A value of 1 indicates to probe for TPM 1.2.
+A value of 0 or an invalid value will result in no TPM being probed.
+If unset, a default of 1 is assumed.
+
 ### Frontend device paths
 
 Paravirtual device frontends are generally specified by their own
diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
index 581b35e5cf..f39a8e584f 100644
--- a/tools/firmware/hvmloader/util.c
+++ b/tools/firmware/hvmloader/util.c
@@ -994,13 +994,22 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
     if ( !strncmp(xenstore_read("platform/acpi_laptop_slate", "0"), "1", 1)  )
         config->table_flags |= ACPI_HAS_SSDT_LAPTOP_SLATE;
 
-    config->table_flags |= (ACPI_HAS_TCPA | ACPI_HAS_IOAPIC |
-                            ACPI_HAS_WAET | ACPI_HAS_PMTIMER |
-                            ACPI_HAS_BUTTONS | ACPI_HAS_VGA |
-                            ACPI_HAS_8042 | ACPI_HAS_CMOS_RTC);
+    config->table_flags |= (ACPI_HAS_IOAPIC | ACPI_HAS_WAET |
+                            ACPI_HAS_PMTIMER | ACPI_HAS_BUTTONS |
+                            ACPI_HAS_VGA | ACPI_HAS_8042 |
+                            ACPI_HAS_CMOS_RTC);
     config->acpi_revision = 4;
 
-    config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
+    s = xenstore_read("platform/tpm_version", "1");
+    config->tpm_version = strtoll(s, NULL, 0);
+
+    switch( config->tpm_version )
+    {
+    case 1:
+        config->table_flags |= ACPI_HAS_TPM;
+        config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
+        break;
+    }
 
     config->numa.nr_vmemranges = nr_vmemranges;
     config->numa.nr_vnodes = nr_vnodes;
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index fe2db66a62..716cb49624 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -409,38 +409,47 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
         memcpy(ssdt, ssdt_laptop_slate, sizeof(ssdt_laptop_slate));
         table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
     }
-
-    /* TPM TCPA and SSDT. */
-    if ( (config->table_flags & ACPI_HAS_TCPA) &&
-         (config->tis_hdr[0] != 0 && config->tis_hdr[0] != 0xffff) &&
-         (config->tis_hdr[1] != 0 && config->tis_hdr[1] != 0xffff) )
+    /* TPM and its SSDT. */
+    if ( config->table_flags & ACPI_HAS_TPM )
     {
-        ssdt = ctxt->mem_ops.alloc(ctxt, sizeof(ssdt_tpm), 16);
-        if (!ssdt) return -1;
-        memcpy(ssdt, ssdt_tpm, sizeof(ssdt_tpm));
-        table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
-
-        tcpa = ctxt->mem_ops.alloc(ctxt, sizeof(struct acpi_20_tcpa), 16);
-        if (!tcpa) return -1;
-        memset(tcpa, 0, sizeof(*tcpa));
-        table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, tcpa);
-
-        tcpa->header.signature = ACPI_2_0_TCPA_SIGNATURE;
-        tcpa->header.length    = sizeof(*tcpa);
-        tcpa->header.revision  = ACPI_2_0_TCPA_REVISION;
-        fixed_strcpy(tcpa->header.oem_id, ACPI_OEM_ID);
-        fixed_strcpy(tcpa->header.oem_table_id, ACPI_OEM_TABLE_ID);
-        tcpa->header.oem_revision = ACPI_OEM_REVISION;
-        tcpa->header.creator_id   = ACPI_CREATOR_ID;
-        tcpa->header.creator_revision = ACPI_CREATOR_REVISION;
-        if ( (lasa = ctxt->mem_ops.alloc(ctxt, ACPI_2_0_TCPA_LAML_SIZE, 16)) != NULL )
+        switch ( config->tpm_version )
         {
-            tcpa->lasa = ctxt->mem_ops.v2p(ctxt, lasa);
-            tcpa->laml = ACPI_2_0_TCPA_LAML_SIZE;
-            memset(lasa, 0, tcpa->laml);
-            set_checksum(tcpa,
-                         offsetof(struct acpi_header, checksum),
-                         tcpa->header.length);
+        case 0: /* Assume legacy code wanted tpm 1.2 */
+        case 1:
+            if ( config->tis_hdr[0] == 0 || config->tis_hdr[0] == 0xffff ||
+                 config->tis_hdr[1] == 0 || config->tis_hdr[1] == 0xffff )
+                break;
+
+            ssdt = ctxt->mem_ops.alloc(ctxt, sizeof(ssdt_tpm), 16);
+            if (!ssdt) return -1;
+            memcpy(ssdt, ssdt_tpm, sizeof(ssdt_tpm));
+            table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
+
+            tcpa = ctxt->mem_ops.alloc(ctxt, sizeof(struct acpi_20_tcpa), 16);
+            if (!tcpa) return -1;
+            memset(tcpa, 0, sizeof(*tcpa));
+            table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, tcpa);
+
+            tcpa->header.signature = ACPI_2_0_TCPA_SIGNATURE;
+            tcpa->header.length    = sizeof(*tcpa);
+            tcpa->header.revision  = ACPI_2_0_TCPA_REVISION;
+            fixed_strcpy(tcpa->header.oem_id, ACPI_OEM_ID);
+            fixed_strcpy(tcpa->header.oem_table_id, ACPI_OEM_TABLE_ID);
+            tcpa->header.oem_revision = ACPI_OEM_REVISION;
+            tcpa->header.creator_id   = ACPI_CREATOR_ID;
+            tcpa->header.creator_revision = ACPI_CREATOR_REVISION;
+
+            lasa = ctxt->mem_ops.alloc(ctxt, ACPI_2_0_TCPA_LAML_SIZE, 16);
+            if ( lasa )
+            {
+                tcpa->lasa = ctxt->mem_ops.v2p(ctxt, lasa);
+                tcpa->laml = ACPI_2_0_TCPA_LAML_SIZE;
+                memset(lasa, 0, tcpa->laml);
+                set_checksum(tcpa,
+                             offsetof(struct acpi_header, checksum),
+                             tcpa->header.length);
+            }
+            break;
         }
     }
 
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index a2efd23b0b..f69452401f 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -27,7 +27,7 @@
 #define ACPI_HAS_SSDT_PM           (1<<4)
 #define ACPI_HAS_SSDT_S3           (1<<5)
 #define ACPI_HAS_SSDT_S4           (1<<6)
-#define ACPI_HAS_TCPA              (1<<7)
+#define ACPI_HAS_TPM               (1<<7)
 #define ACPI_HAS_IOAPIC            (1<<8)
 #define ACPI_HAS_WAET              (1<<9)
 #define ACPI_HAS_PMTIMER           (1<<10)
@@ -66,6 +66,7 @@ struct acpi_config {
 
     uint32_t table_flags;
     uint8_t acpi_revision;
+    uint8_t tpm_version;
 
     uint64_t vm_gid[2];
     unsigned long vm_gid_addr; /* OUT parameter */
-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:48:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526271.817924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMlt-0004R4-H7; Tue, 25 Apr 2023 17:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526271.817924; Tue, 25 Apr 2023 17:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMlt-0004Qx-Dk; Tue, 25 Apr 2023 17:48:09 +0000
Received: by outflank-mailman (input) for mailman id 526271;
 Tue, 25 Apr 2023 17:48:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CnGU=AQ=citrix.com=prvs=4724fc120=jennifer.herbert@srs-se1.protection.inumbo.net>)
 id 1prMlr-0004QK-VA
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:48:07 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 543c0e36-e391-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 19:48:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 543c0e36-e391-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682444885;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=rK45VTECOGah3EhG6bjJLWH2dLGpXmW9WnoGplFDXNk=;
  b=CdQR5MUiRfPypL7vhYZqu5vsZVycnFqMehsx0wBvJX4h+wCh/sfBwVo5
   f+mQTlo6M0uELLSI1gLPnIFZL2n+aqT9sKl6MnJJMd6SzCVGJMFECMbgH
   +Ouz1hFuUYktKjjYQ/+0VSiZAOuk6o/t7S/aELdnNkamRT+lE/ObQldE3
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107228329
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Jennifer Herbert <jennifer.herbert@citrix.com>
To: <jennifer.herbert@citrx.com>, Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jennifer Herbert <jennifer.herbert@citrix.com>
Subject: [PATCH v3 0/2] acpi: Make TPM version configurable.
Date: Tue, 25 Apr 2023 17:47:31 +0000
Message-ID: <20230425174733.795961-1-jennifer.herbert@citrix.com>
X-Mailer: git-send-email 2.39.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

This patch series makes the TPM version for which the ACPI library
probes configurable.

Following is version 3 of this patch series.  Apologies for it taking
so long, and for my previous version missing some changes I intended
to share.

I have renamed the TPM_CRB constants to better match the TPM
specification.  (They were previously named for consistency with the
TIS code.)
I have moved some TPM register locations into acpi2_0.h so that both
sets of TPM register offsets are defined together and their
relationship is easier to see, and added comments explaining these
constants.

I have changed the defaults so that behaviour is exactly as before
(attempting to probe for TPM 1.2) unless explicitly set to no TPM or
TPM 2, so as not to regress anything.
Addressed various style issues.
Moved the tpm_version field up in acpi_config for better alignment.

A new xenstore key, 'tpm_version', is added, which xenopsd sets to 2.
If not set, it defaults to '1', probing for TPM 1.2 as before.

A note on the use of CRB: QEMU implements both the TIS and CRB
interfaces for TPM 2.  We use the CRB interface as defined by the TCG
PTP specification, as it is the more modern interface and the
preferred one for a TPM 2.0-only world.  TIS is PC specific,
implemented as an ISA bus device in QEMU, whereas CRB is more generic
and should be suitable for use on other platforms such as ARM.  While
I have read there is some confusion about the Mobile CRB specification
regarding locality, I don't think this is a problem for our use case.
However, should someone decide they need to use TIS with TPM 2, I
don't believe this patch series would exclude that option being added
later, since struct acpi_config does allow a TIS header to be supplied
instead of CRB for version 2.
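The CRB presence probe the series relies on can be sketched as follows
(an illustrative standalone helper, not code from the patches -
crb_tpm_present is a hypothetical name; the series performs this check
inline on config->crb_id):

```c
#include <stdint.h>

/*
 * The vendor ID occupies bits 47:32 of the CRB interface identifier
 * register, i.e. its third 16-bit word.  Reading all-zeros or all-ones
 * there indicates nothing is backing the MMIO region, so no TPM2 table
 * or SSDT should be built.
 */
static int crb_tpm_present(const uint16_t *crb_id)
{
    return crb_id[2] != 0 && crb_id[2] != 0xffff;
}
```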

Jennifer Herbert (2):
  acpi: Make TPM version configurable.
  acpi: Add TPM2 interface definition.

 docs/misc/xenstore-paths.pandoc |  10 +++
 tools/firmware/hvmloader/util.c |  28 +++++++--
 tools/libacpi/Makefile          |   3 +-
 tools/libacpi/acpi2_0.h         |  32 ++++++++++
 tools/libacpi/build.c           | 106 +++++++++++++++++++++++---------
 tools/libacpi/libacpi.h         |   4 +-
 tools/libacpi/ssdt_tpm2.asl     |  36 +++++++++++
 7 files changed, 183 insertions(+), 36 deletions(-)
 create mode 100644 tools/libacpi/ssdt_tpm2.asl

-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:50:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:50:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526284.817944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMoA-0006kd-AP; Tue, 25 Apr 2023 17:50:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526284.817944; Tue, 25 Apr 2023 17:50:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMoA-0006kW-6P; Tue, 25 Apr 2023 17:50:30 +0000
Received: by outflank-mailman (input) for mailman id 526284;
 Tue, 25 Apr 2023 17:50:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prMo9-0006kK-Bh; Tue, 25 Apr 2023 17:50:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prMo9-0002we-3Y; Tue, 25 Apr 2023 17:50:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prMo8-0008Tg-RA; Tue, 25 Apr 2023 17:50:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prMo8-0003Zb-Qh; Tue, 25 Apr 2023 17:50:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iZ4byfovi4j8xg62Y6tKLPMV6tvfOcT2/nLamKLWTzU=; b=qMTCckZAdB7zgy4yXz+0RWBdY7
	yRU7vC2ldFNvmvD7sQbvcwvz42iTba49wsv9QQZTmUQLnD4RUwU6nwfiNZ1VexmazMj5cDrFkq56g
	Z3+WuRen3S31lLrmZu8Uxs+hlMcAPS7YTU7+hPxGxhorMyyEtCTzdCqOnck0hYTNR9P0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180401-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180401: tolerable trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
X-Osstest-Versions-That:
    xen=c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Apr 2023 17:50:28 +0000

flight 180401 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180401/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 180391 pass in 180401
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install fail in 180391 pass in 180401
 test-amd64-i386-pair         11 xen-install/dst_host       fail pass in 180391

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 180391 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 180391 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 180391 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 180391 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 180391 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 180391 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 180391 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 180391 never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 180391 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 180391 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 180391 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 180391 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 180391 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 180391 never pass
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 180391 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 180391 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180381
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180391
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180391
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180391
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180391
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180391
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180391
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180391
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180391
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180391
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180391
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180391
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
baseline version:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51

Last test of basis   180401  2023-04-25 01:51:51 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 17:52:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 17:52:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526292.817954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMph-0007OU-R0; Tue, 25 Apr 2023 17:52:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526292.817954; Tue, 25 Apr 2023 17:52:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prMph-0007ON-Nw; Tue, 25 Apr 2023 17:52:05 +0000
Received: by outflank-mailman (input) for mailman id 526292;
 Tue, 25 Apr 2023 17:52:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1prMpg-0007OF-Nn
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 17:52:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1prMpg-0002yN-0C; Tue, 25 Apr 2023 17:52:04 +0000
Received: from [54.239.6.184] (helo=[192.168.17.85])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1prMpf-0000dA-Pq; Tue, 25 Apr 2023 17:52:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=8rz7ZWQK/maqLdu7iUYZ+4Yz4thkHd8UA1FC4lCo+bw=; b=xdHVcf7gHgJkvpfjDuc3BQDxnC
	42GqgfAr7ouiftKZs1gREO9PmdoZV5EnxKb2xpKUvBCjp00d0ADqzpnb7K8FqTDk73azSKt4wImWQ
	54Dqb4hubhy+1r11HN8PnaRUzCQMbIxQ6ilLHInDox7ozl9k2TpLMDXGlBNnybU2y67U=;
Message-ID: <e8003d2d-5557-f5d9-38ca-793c30637e61@xen.org>
Date: Tue, 25 Apr 2023 18:52:02 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 04/13] tools/xenstore: add framework to commit
 accounting data on success only
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-5-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230405070349.25293-5-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 05/04/2023 08:03, Juergen Gross wrote:
> Instead of modifying accounting data and undo those modifications in
> case of an error during further processing, add a framework for
> collecting the needed changes and commit them only when the whole
> operation has succeeded.
> 
> This scheme can reuse large parts of the per transaction accounting.
> The changed_domain handling can be reused, but the array size of the
> accounting data should be possible to be different for both use cases.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V3:
> - call acc_commit() earlier (Julien Grall)
> - add assert() to acc_commit()
> - use fixed sized acc array in struct changed_domain (Julien Grall)
> ---
>   tools/xenstore/xenstored_core.c   |  9 ++++--
>   tools/xenstore/xenstored_core.h   |  3 ++
>   tools/xenstore/xenstored_domain.c | 53 ++++++++++++++++++++++++++++++-
>   tools/xenstore/xenstored_domain.h |  5 ++-
>   4 files changed, 66 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 3ca68681e3..84335f5f3d 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -1023,6 +1023,9 @@ static void send_error(struct connection *conn, int error)
>   			break;
>   		}
>   	}
> +
> +	acc_drop(conn);
> +
>   	send_reply(conn, XS_ERROR, xsd_errors[i].errstring,
>   			  strlen(xsd_errors[i].errstring) + 1);
>   }
> @@ -1034,6 +1037,9 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
>   
>   	assert(type != XS_WATCH_EVENT);
>   
> +	conn->in = NULL;

AFAIU, you are setting conn->in to NULL in order to please...

> +	acc_commit(conn);

... this call. However in case of an error like...

> +
>   	if ( len > XENSTORE_PAYLOAD_MAX ) {
>   		send_error(conn, E2BIG);

... here, send_reply() will be called again. But the error will not be 
delivered, because conn->in is already NULL.

So I think you want to either restore conn->in or rewrite acc_commit(). The 
ordering would also deserve an explanation in a comment.
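To make the hazard concrete, here is a minimal, self-contained model of the ordering problem described above. The names mirror xenstored, but this is an illustrative sketch, not the real code: `record_error()` stands in for the error-delivery path, which only works while a request is still in flight (conn->in != NULL).

```c
#include <assert.h>
#include <stddef.h>

struct connection {
	void *in;	/* request being processed, or NULL */
	int last_error;	/* stands in for error delivery to the client */
};

static void record_error(struct connection *conn, int err)
{
	/* An error can only be attached to a request still in flight. */
	if (conn->in)
		conn->last_error = err;
}

static void send_reply(struct connection *conn, size_t len, size_t max)
{
	conn->in = NULL;	/* cleared early, as in the patch */
	/* acc_commit(conn) would run here */
	if (len > max) {
		/* Re-entering the error path now silently drops the
		 * error, because conn->in is already NULL. */
		record_error(conn, 7 /* E2BIG */);
		return;
	}
	/* ... normal reply path ... */
}
```

Calling send_reply() with an oversized payload leaves last_error untouched, which is exactly the lost-error symptom: the E2BIG never reaches the request that caused it.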

>   		return;
> @@ -1059,8 +1065,6 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
>   		}
>   	}
>   
> -	conn->in = NULL;
> -
>   	/* Update relevant header fields and fill in the message body. */
>   	bdata->hdr.msg.type = type;
>   	bdata->hdr.msg.len = len;
> @@ -2195,6 +2199,7 @@ struct connection *new_connection(const struct interface_funcs *funcs)
>   	new->is_stalled = false;
>   	new->transaction_started = 0;
>   	INIT_LIST_HEAD(&new->out_list);
> +	INIT_LIST_HEAD(&new->acc_list);
>   	INIT_LIST_HEAD(&new->ref_list);
>   	INIT_LIST_HEAD(&new->watches);
>   	INIT_LIST_HEAD(&new->transaction_list);
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index c59b06551f..1f811f38cb 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -139,6 +139,9 @@ struct connection
>   	struct list_head out_list;
>   	uint64_t timeout_msec;
>   
> +	/* Not yet committed accounting data (valid if in != NULL). */
> +	struct list_head acc_list;
> +
>   	/* Referenced requests no longer pending. */
>   	struct list_head ref_list;
>   
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index 30fb9acec6..144cbafb73 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -91,6 +91,8 @@ struct domain
>   	bool wrl_delay_logged;
>   };
>   
> +#define ACC_CHD_N (ACC_TR_N < ACC_REQ_N ? ACC_REQ_N : ACC_TR_N)

Both ACC_TR_N and ACC_REQ_N are fixed. Can you explain why we need this 
magic?

Related, wouldn't it be better to define it in the enum?
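For illustration, one way the suggestion could look (a hypothetical sketch, not part of the patch): since enumerator values are integer constant expressions, the maximum can live in the enum itself, next to the counts it derives from, instead of in a separate macro.

```c
/* Hypothetical sketch of folding ACC_CHD_N into the enum;
 * values follow the patch: ACC_REQ_N = 0, ACC_TR_N = 1. */
enum accitem {
	ACC_REQ_N,		/* Number of elements per request. */
	ACC_NODES = ACC_REQ_N,
	ACC_TR_N,		/* Number of elements per transaction. */
	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
	/* Array size for struct changed_domain: max of the two counts. */
	ACC_CHD_N = ACC_TR_N > ACC_REQ_N ? ACC_TR_N : ACC_REQ_N,
};
```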

> +
>   struct changed_domain
>   {
>   	/* List of all changed domains. */
> @@ -100,7 +102,7 @@ struct changed_domain
>   	unsigned int domid;
>   
>   	/* Accounting data. */
> -	int acc[ACC_TR_N];
> +	int acc[ACC_CHD_N];
>   };
>   
>   static struct hashtable *domhash;
> @@ -1070,6 +1072,7 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
>   			  enum accitem what, int add, bool no_dom_alloc)
>   {
>   	struct domain *d;
> +	struct changed_domain *cd;
>   	struct list_head *head;
>   	int ret;
>   
> @@ -1090,6 +1093,22 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
>   		}
>   	}
>   
> +	/* Temporary accounting data until final commit? */
> +	if (conn && conn->in && what < ACC_REQ_N) {
> +		/* Consider transaction local data. */
> +		ret = 0;
> +		if (conn->transaction && what < ACC_TR_N) {
> +			head = transaction_get_changed_domains(
> +				conn->transaction);
> +			cd = acc_find_changed_domain(head, domid);
> +			if (cd)
> +				ret = cd->acc[what];
> +		}
> +		ret += acc_add_changed_dom(conn->in, &conn->acc_list, what,
> +					   add, domid);
> +		return errno ? -1 : domain_acc_add_valid(d, what, ret);
> +	}
> +
>   	if (conn && conn->transaction && what < ACC_TR_N) {
>   		head = transaction_get_changed_domains(conn->transaction);
>   		ret = acc_add_changed_dom(conn->transaction, head, what,
> @@ -1106,6 +1125,38 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
>   	return d->acc[what];
>   }
>   
> +void acc_drop(struct connection *conn)
> +{
> +	struct changed_domain *cd;
> +
> +	while ((cd = list_top(&conn->acc_list, struct changed_domain, list))) {
> +		list_del(&cd->list);
> +		talloc_free(cd);
> +	}
> +}
> +
> +void acc_commit(struct connection *conn)
> +{
> +	struct changed_domain *cd;
> +	enum accitem what;
> +
> +	/*
> +	 * Make sure domain_acc_add() below can't add additional data to
> +	 * to be committed accounting records.
> +	 */
> +	assert(!conn->in);
> +
> +	while ((cd = list_top(&conn->acc_list, struct changed_domain, list))) {
> +		list_del(&cd->list);
> +		for (what = 0; what < ACC_REQ_N; what++)
> +			if (cd->acc[what])
> +				domain_acc_add(conn, cd->domid, what,
> +					       cd->acc[what], true);
> +
> +		talloc_free(cd);
> +	}
> +}
> +
>   int domain_nbentry_inc(struct connection *conn, unsigned int domid)
>   {
>   	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index 9d05eb01da..6355ad4f37 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -25,7 +25,8 @@
>    * a per transaction array.
>    */
>   enum accitem {
> -	ACC_NODES,
> +	ACC_REQ_N,		/* Number of elements per request. */
> +	ACC_NODES = ACC_REQ_N,
>   	ACC_TR_N,		/* Number of elements per transaction. */
>   	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
>   };
> @@ -113,6 +114,8 @@ int domain_get_quota(const void *ctx, struct connection *conn,
>    * If "update" is true, "chk_quota" is ignored.
>    */
>   int acc_fix_domains(struct list_head *head, bool chk_quota, bool update);
> +void acc_drop(struct connection *conn);
> +void acc_commit(struct connection *conn);
>   
>   /* Write rate limiting */
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 18:21:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 18:21:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526296.817964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prNHo-0002S0-4o; Tue, 25 Apr 2023 18:21:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526296.817964; Tue, 25 Apr 2023 18:21:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prNHo-0002Rt-1u; Tue, 25 Apr 2023 18:21:08 +0000
Received: by outflank-mailman (input) for mailman id 526296;
 Tue, 25 Apr 2023 18:21:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rxgq=AQ=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1prNHm-0002Rn-FU
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 18:21:06 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ef8fd3a4-e395-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 20:21:03 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E37BF616EA;
 Tue, 25 Apr 2023 18:21:01 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 16EFBC433EF;
 Tue, 25 Apr 2023 18:20:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef8fd3a4-e395-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682446861;
	bh=57/xIHfHp3L5NdxLzTUsuOkNWdo28qc8vE0kOkFyov0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=RZUFbjMeBarEXvKy5N9wjRAlLoMHiqZMesPIasy51qZdkw/ZjLcy9V1JLE139RJQE
	 MS0jysHNDs3V/yugCtbLEOi89mcls8hb+cQ4ArTK7Kko5qLmh8xLTdX55SihjOWf20
	 MvlY1D2o2kX9DMmWLO4TzI7M18DtiJZZoOKyrfGvTWxStSYd40I54rJgA4mXV33tA3
	 4L4l+8YHAnFSLVKjXoxLYXuj+X0xL6bOMpY4fEgpP8tF9Ohn8vOd//qYxC0dUwzadw
	 PaFuM25k7eirHBDxY2TayyI/rUCpgOJ/8Jm2nXQSqFl9KOJI1h7WWDXjDzo7HGmBIy
	 CsjDagwGgHZvg==
Date: Tue, 25 Apr 2023 11:20:58 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleg Nikitenko <oleshiiwood@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Michal Orzel <michal.orzel@amd.com>, Julien Grall <julien@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Carlo Nonato <carlo.nonato@minervasys.tech>, Stewart.Hildebrand@amd.com
Subject: Re: xen cache colors in ARM
In-Reply-To: <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
Message-ID: <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com> <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com> <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com> <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com> <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop> <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com> <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
 <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com> <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com> <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com> <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com>
 <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com> <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop> <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)


This is interesting. Are you using Xilinx hardware by any chance? If so,
which board?

Are you using ImageBuilder to generate your boot.scr boot script? If so,
could you please post your ImageBuilder config file? If not, could you
post the source of your U-Boot boot script?
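For reference, an ImageBuilder config is a plain file of shell-style
variable assignments along these lines (the values below are placeholders
for a ZynqMP-style board and only meant to illustrate the shape, not to be
a verified template):

```
# Hypothetical ImageBuilder config -- all paths and values are examples.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

DEVICE_TREE="system.dtb"
XEN="xen"
XEN_CMD="console=dtuart dom0_mem=1600M"

DOM0_KERNEL="Image"
DOM0_RAMDISK="rootfs.cpio.gz"

NUM_DOMUS=0

UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```

Seeing the actual load addresses and module list in your config would help
rule out an overlap between the kernel, ramdisk, and Xen's colored
allocations.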

SErrors normally indicate a hardware failure of some kind; you are not
supposed to be able to trigger one easily by software "mistake". I have
not seen SErrors caused by a wrong cache coloring configuration on any
Xilinx board before.

The differences between Xen with and without cache coloring from a
hardware perspective are:

- With cache coloring, the SMMU is enabled and does address translations
  even for dom0. Without cache coloring the SMMU could be disabled, and
  even if enabled it doesn't do any address translations for dom0. So a
  hardware failure related to SMMU address translation could only trigger
  with cache coloring. This would normally be my suggestion for you to
  explore, but your failure happens too early, before any DMA-capable
  device is programmed, so I don't think this can be the issue.

- With cache coloring, memory allocation is very different, so dom0 ends
  up in different DDR regions. If your DDR is defective, you might see a
  failure only with cache coloring enabled, simply because different
  regions are in use.
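For background on why the allocations move: coloring restricts each domain
to page frames whose low frame-number bits map to disjoint LLC sets. A
minimal sketch of the arithmetic (illustrative names, not Xen's actual
code):

```python
PAGE_SHIFT = 12  # 4 KiB pages

def colors_for_llc(llc_way_size: int, page_size: int = 1 << PAGE_SHIFT) -> int:
    """Number of available colors = LLC way size / page size."""
    return llc_way_size // page_size

def page_color(paddr: int, num_colors: int) -> int:
    """Low page-frame-number bits select which LLC sets a page occupies."""
    return (paddr >> PAGE_SHIFT) % num_colors

# Example: a 1 MiB LLC way yields 256 colors; pages exactly one way-size
# apart map to the same cache sets (same color).
ncolors = colors_for_llc(1 << 20)
print(ncolors)                                      # 256
print(page_color(0x10000000, ncolors))              # 0
print(page_color(0x10000000 + (1 << 20), ncolors))  # 0
```

A domain confined to a subset of colors only competes for the matching
fraction of the LLC, and the allocator having to pick frames of those
colors is exactly why dom0's RAM banks land in different DDR regions when
coloring is enabled.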


On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
> Hi Stefano,
> 
> Thank you.
> If I build Xen without color support, this error does not occur.
> All the domains boot fine.
> Hence it cannot be a hardware issue.
> The panic occurs while unpacking the rootfs.
> I have attached the Xen/Dom0 boot log without coloring.
> The highlighted lines are printed exactly after the point where the panic first occurred.
> 
>  Xen 4.16.1-pre
> (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
> (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
> (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
> (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03,rev 0x4
> (XEN) 64-bit Execution:
> (XEN)   Processor Features: 0000000000002222 0000000000000000
> (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
> (XEN)     Extensions: FloatingPoint AdvancedSIMD
> (XEN)   Debug Features: 0000000010305106 0000000000000000
> (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
> (XEN)   Memory Model Features: 0000000000001122 0000000000000000
> (XEN)   ISA Features:  0000000000011120 0000000000000000
> (XEN) 32-bit Execution:
> (XEN)   Processor Features: 0000000000000131:0000000000011011
> (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
> (XEN)     Extensions: GenericTimer Security
> (XEN)   Debug Features: 0000000003010066
> (XEN)   Auxiliary Features: 0000000000000000
> (XEN)   Memory Model Features: 0000000010201105 0000000040000000
> (XEN)                          0000000001260000 0000000002102211
> (XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
> (XEN)                 0000000001112131 0000000000011142 0000000000011121
> (XEN) Using SMC Calling Convention v1.2
> (XEN) Using PSCI v1.1
> (XEN) SMP: Allowing 4 CPUs
> (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
> (XEN) GICv2 initialization:
> (XEN)         gic_dist_addr=00000000f9010000
> (XEN)         gic_cpu_addr=00000000f9020000
> (XEN)         gic_hyp_addr=00000000f9040000
> (XEN)         gic_vcpu_addr=00000000f9060000
> (XEN)         gic_maintenance_irq=25
> (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
> (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
> (XEN) Using scheduler: null Scheduler (null)
> (XEN) Initializing null scheduler
> (XEN) WARNING: This is experimental software in development.
> (XEN) Use at your own risk.
> (XEN) Allocated console ring of 32 KiB.
> (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
> (XEN) Bringing up CPU1
> (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
> (XEN) CPU 1 booted.
> (XEN) Bringing up CPU2
> (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
> (XEN) CPU 2 booted.
> (XEN) Bringing up CPU3
> (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
> (XEN) Brought up 4 CPUs
> (XEN) CPU 3 booted.
> (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
> (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
> (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
> (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
> (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
> (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
> (XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> (XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) Loading d0 kernel from boot module @ 0000000001000000
> (XEN) Loading ramdisk from boot module @ 0000000002000000
> (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
> (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
> (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
> (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
> (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
> (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
> (XEN) Allocating PPI 16 for event channel interrupt
> (XEN) Extended region 0: 0x81200000->0xa0000000
> (XEN) Extended region 1: 0xb1200000->0xc0000000
> (XEN) Extended region 2: 0xc8000000->0xe0000000
> (XEN) Extended region 3: 0xf0000000->0xf9000000
> (XEN) Extended region 4: 0x100000000->0x600000000
> (XEN) Extended region 5: 0x880000000->0x8000000000
> (XEN) Extended region 6: 0x8001000000->0x10000000000
> (XEN) Loading zImage from 0000000001000000 to 0000000010000000-0000000010e41008
> (XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
> (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> (XEN) null.c:353: 0 <-- d0v0
> (XEN) Freed 356kB init memory.
> (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
> (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
> [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils)
> 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> [    0.000000] Machine model: D14 Viper Board - White Unit
> [    0.000000] Xen 4.16 support found
> [    0.000000] Zone ranges:
> [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
> [    0.000000]   DMA32    empty
> [    0.000000]   Normal   empty
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
> [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
> [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
> [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
> [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
> [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
> [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
> [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
> [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
> [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
> [    0.000000] psci: probing for conduit method from DT.
> [    0.000000] psci: PSCIv1.1 detected in firmware.
> [    0.000000] psci: Using standard PSCI v0.2 function IDs
> [    0.000000] psci: Trusted OS migration not required
> [    0.000000] psci: SMC Calling Convention v1.1
> [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
> [    0.000000] Detected VIPT I-cache on CPU0
> [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
> [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
> [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
> [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
> [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
> [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
> [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
> [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
> [    0.000000] mem auto-init: clearing system memory may take some time...
> [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved,
> 262144K cma-reserved)
> [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
> [    0.000000] rcu: Hierarchical RCU implementation.
> [    0.000000] rcu: RCU event tracing is enabled.
> [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
> [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
> [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
> [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
> [    0.000000] Root IRQ handler: gic_handle_irq
> [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
> [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
> [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
> [    0.000258] Console: colour dummy device 80x25
> [    0.310231] printk: console [hvc0] enabled
> [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> [    0.324851] pid_max: default: 32768 minimum: 301
> [    0.329706] LSM: Security Framework initializing
> [    0.334204] Yama: becoming mindful.
> [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> [    0.354743] xen:grant_table: Grant tables using version 1 layout
> [    0.359132] Grant table initialized
> [    0.362664] xen:events: Using FIFO-based ABI
> [    0.366993] Xen: initializing cpu0
> [    0.370515] rcu: Hierarchical SRCU implementation.
> [    0.375930] smp: Bringing up secondary CPUs ...
> (XEN) null.c:353: 1 <-- d0v1
> (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> [    0.382549] Detected VIPT I-cache on CPU1
> [    0.388712] Xen: initializing cpu1
> [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
> [    0.388829] smp: Brought up 1 node, 2 CPUs
> [    0.406941] SMP: Total of 2 processors activated.
> [    0.411698] CPU features: detected: 32-bit EL0 Support
> [    0.416888] CPU features: detected: CRC32 instructions
> [    0.422121] CPU: All CPU(s) started at EL1
> [    0.426248] alternatives: patching kernel code
> [    0.431424] devtmpfs: initialized
> [    0.441454] KASLR enabled
> [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
> [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
> [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
> [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
> [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
> [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> [    0.519478] audit: initializing netlink subsys (disabled)
> [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
> [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
> [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
> [    0.545608] ASID allocator initialised with 32768 entries
> [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
> [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
> [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
> [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
> [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
> [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
> [    0.636520] DRBG: Continuing without Jitter RNG
> [    0.737187] raid6: neonx8   gen()  2143 MB/s
> [    0.805294] raid6: neonx8   xor()  1589 MB/s
> [    0.873406] raid6: neonx4   gen()  2177 MB/s
> [    0.941499] raid6: neonx4   xor()  1556 MB/s
> [    1.009612] raid6: neonx2   gen()  2072 MB/s
> [    1.077715] raid6: neonx2   xor()  1430 MB/s
> [    1.145834] raid6: neonx1   gen()  1769 MB/s
> [    1.213935] raid6: neonx1   xor()  1214 MB/s
> [    1.282046] raid6: int64x8  gen()  1366 MB/s
> [    1.350132] raid6: int64x8  xor()   773 MB/s
> [    1.418259] raid6: int64x4  gen()  1602 MB/s
> [    1.486349] raid6: int64x4  xor()   851 MB/s
> [    1.554464] raid6: int64x2  gen()  1396 MB/s
> [    1.622561] raid6: int64x2  xor()   744 MB/s
> [    1.690687] raid6: int64x1  gen()  1033 MB/s
> [    1.758770] raid6: int64x1  xor()   517 MB/s
> [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
> [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
> [    1.767957] raid6: using neon recovery algorithm
> [    1.772824] xen:balloon: Initialising balloon driver
> [    1.778021] iommu: Default domain type: Translated
> [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
> [    1.789149] SCSI subsystem initialized
> [    1.792820] usbcore: registered new interface driver usbfs
> [    1.798254] usbcore: registered new interface driver hub
> [    1.803626] usbcore: registered new device driver usb
> [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> [    1.822903] PTP clock support registered
> [    1.826893] EDAC MC: Ver: 3.0.0
> [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.855907] FPGA manager framework
> [    1.859952] clocksource: Switched to clocksource arch_sys_counter
> [    1.871712] NET: Registered PF_INET protocol family
> [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
> [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
> [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
> [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
> [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
> [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
> [    1.936834] RPC: Registered named UNIX socket transport module.
> [    1.942342] RPC: Registered udp transport module.
> [    1.947088] RPC: Registered tcp transport module.
> [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
> [    1.958334] PCI: CLS 0 bytes, default 64
> [    1.962709] Trying to unpack rootfs image as initramfs...
> [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
> [    2.021045] NET: Registered PF_ALG protocol family
> [    2.021122] xor: measuring software checksum speed
> [    2.029347]    8regs           :  2366 MB/sec
> [    2.033081]    32regs          :  2802 MB/sec
> [    2.038223]    arm64_neon      :  2320 MB/sec
> [    2.038385] xor: using function: 32regs (2802 MB/sec)
> [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
> [    2.050959] io scheduler mq-deadline registered
> [    2.055521] io scheduler kyber registered
> [    2.068227] xen:xen_evtchn: Event-channel device installed
> [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
> [    2.085548] brd: module loaded
> [    2.089290] loop: module loaded
> [    2.089341] Invalid max_queues (4), will use default max: 2.
> [    2.094565] tun: Universal TUN/TAP device driver, 1.6
> [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
> [    2.104156] usbcore: registered new interface driver rtl8150
> [    2.109813] usbcore: registered new interface driver r8152
> [    2.115367] usbcore: registered new interface driver asix
> [    2.120794] usbcore: registered new interface driver ax88179_178a
> [    2.126934] usbcore: registered new interface driver cdc_ether
> [    2.132816] usbcore: registered new interface driver cdc_eem
> [    2.138527] usbcore: registered new interface driver net1080
> [    2.144256] usbcore: registered new interface driver cdc_subset
> [    2.150205] usbcore: registered new interface driver zaurus
> [    2.155837] usbcore: registered new interface driver cdc_ncm
> [    2.161550] usbcore: registered new interface driver r8153_ecm
> [    2.168240] usbcore: registered new interface driver cdc_acm
> [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
> [    2.181358] usbcore: registered new interface driver uas
> [    2.186547] usbcore: registered new interface driver usb-storage
> [    2.192643] usbcore: registered new interface driver ftdi_sio
> [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
> [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
> [    2.215332] i2c_dev: i2c /dev entries driver
> [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
> [    2.225923] device-mapper: uevent: version 1.0.3
> [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
> [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
> [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
> [    2.261719] sdhci: Secure Digital Host Controller Interface driver
> [    2.267487] sdhci: Copyright(c) Pierre Ossman
> [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
> [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
> [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
> [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
> [    2.327875] securefw securefw: securefw probed
> [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
> [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
> [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
> [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
> [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
> [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
> [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
> [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
> [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
> [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
> [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
> [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
> [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
> [    2.420856] default preset
> [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
> [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
> [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
> [    2.441976] vmcu driver init
> [    2.444922] VMCU: : (240:0) registered
> [    2.444956] In K81 Updater init
> [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
> [    2.468833] Initializing XFRM netlink socket
> [    2.468902] NET: Registered PF_PACKET protocol family
> [    2.472729] Bridge firewalling registered
> [    2.476785] 8021q: 802.1Q VLAN Support v1.8
> [    2.481341] registered taskstats version 1
> [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
> [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
> [    2.507103] of-fpga-region fpga-full: FPGA Region probed
> [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
> [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
> [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
> [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
> [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
> [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
> [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
> [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
> [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
> [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
> [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
> [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
> [    2.952393] Creating 2 MTD partitions on "spi0.0":
> [    2.957231] 0x000004000000-0x000008000000 : "bank A"
> [    2.963332] 0x000000000000-0x000004000000 : "bank B"
> [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
> [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
> [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
> [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
> [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
> [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
> [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
> [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
> [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
> [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
> [    3.045301] viper_enet viper_enet: Viper enet registered
> [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
> [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
> [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
> [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
> [    3.097729] si70xx: probe of 2-0040 failed with error -5
> [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
> [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
> [    3.112457] viper-tamper viper-tamper: Device registered
> [    3.117593] active_bank active_bank: boot bank: 1
> [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
> [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
> [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
> [    3.147438] viper-vdpp a4000000.vdpp: Device registered
> [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
> [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
> [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
> [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
> [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
> [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
> [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
> [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
> [    3.215694] lpc55_l2 spi1.0: rx error: -110
> [    3.284438] mmc0: new HS200 MMC card at address 0001
> [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
> [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
> [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
> [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
> [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
> [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
> [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
> [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
> [    3.633253] lpc55_l2 spi1.0: rx error: -110
> [    3.639104] k81_bootloader 0-0010: probe
> [    3.641628] VMCU: : (235:0) registered
> [    3.641635] k81_bootloader 0-0010: probe completed
> [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
> [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
> [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
> [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
> [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
> [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
> [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
> [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> [    3.737549] sfp_register_socket: got sfp_bus
> [    3.740709] sfp_register_socket: register sfp_bus
> [    3.745459] sfp_register_bus: ops ok!
> [    3.749179] sfp_register_bus: Try to attach
> [    3.753419] sfp_register_bus: Attach succeeded
> [    3.757914] sfp_register_bus: upstream ops attach
> [    3.762677] sfp_register_bus: Bus registered
> [    3.766999] sfp_register_socket: register sfp_bus succeeded
> [    3.775870] of_cfs_init
> [    3.776000] of_cfs_init: OK
> [    3.778211] clk: Not disabling unused clocks
> [   11.278477] Freeing initrd memory: 206056K
> [   11.279406] Freeing unused kernel memory: 1536K
> [   11.314006] Checked W+X mappings: passed, no W+X pages found
> [   11.314142] Run /init as init process
> INIT: version 3.01 booting
> fsck (busybox 1.35.0)
> /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
> [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
> Starting random number generator daemon.
> [   11.580662] random: crng init done
> Starting udev
> [   11.613159] udevd[142]: starting version 3.2.10
> [   11.620385] udevd[143]: starting eudev-3.2.10
> [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Mon Feb 27 08:40:53 UTC 2023
> [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> hwclock: RTC_SET_TIME: Invalid exchange
> [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> Starting mcud
> INIT: Entering runlevel: 5
> Configuring network interfaces... done.
> resetting network interface
> [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> [   12.732151] pps pps0: new PPS source ptp0
> [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> [   12.761804] pps pps1: new PPS source ptp1
> [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> Auto-negotiation: off
> Auto-negotiation: off
> [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
> [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
> [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
> Starting Failsafe Secure Shell server in port 2222: sshd
> done.
> Starting rpcbind daemon...done.
> 
> [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Starting State Manager Service
> Start state-manager restarter...
> (XEN) d0v1 Forwarding AES operation: 3254779951
> Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0
> scanned by udevd (385)
> [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> [   17.350670] BTRFS info (device dm-0): has skinny extents
> [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs
> (526)
> [   17.872699] BTRFS info (device dm-1): using free space tree
> [   17.872771] BTRFS info (device dm-1): has skinny extents
> [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> [   17.895695] BTRFS info (device dm-1): checking UUID tree
> 
> Setting domain 0 name, domid and JSON config...
> Done setting up Dom0
> Starting xenconsoled...
> Starting QEMU as disk backend for dom0
> Starting domain watchdog daemon: xenwatchdogd startup
> 
> [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs
> (574)
> [done]
> [   18.465552] BTRFS info (device dm-2): using free space tree
> [   18.465629] BTRFS info (device dm-2): has skinny extents
> [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> [   18.486659] BTRFS info (device dm-2): checking UUID tree
> OK
> starting rsyslogd ... Log partition ready after 0 poll loops
> done
> rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
> 
> Please insert USB token and enter your role in login prompt.
> 
> login:
> 
> Regards,
> O.
> 
> 
> On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
>       Hi Oleg,
> 
>       Here is the issue from your logs:
> 
>       SError Interrupt on CPU0, code 0xbe000000 -- SError
> 
>       SErrors are special signals that notify software of serious hardware
>       errors.  Something is going very wrong.  Defective hardware is one
>       possibility.  Another possibility is software accessing address ranges
>       that it is not supposed to; that can sometimes cause SErrors.
> 
>       Cheers,
> 
>       Stefano
> 
> 
> 
>       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> 
>       > Hello,
>       >
>       > Thanks guys.
>       > I found out where the problem was.
>       > Now Dom0 boots further, but I have a new problem.
>       > This is a kernel panic during Dom0 loading.
>       > Maybe someone is able to suggest something ?
>       >
>       > Regards,
>       > O.
>       >
>       > [    3.771362] sfp_register_bus: upstream ops attach
>       > [    3.776119] sfp_register_bus: Bus registered
>       > [    3.780459] sfp_register_socket: register sfp_bus succeeded
>       > [    3.789399] of_cfs_init
>       > [    3.789499] of_cfs_init: OK
>       > [    3.791685] clk: Not disabling unused clocks
>       > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
>       > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>       > [   11.010393] Workqueue: events_unbound async_run_entry_fn
>       > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>       > [   11.010422] pc : simple_write_end+0xd0/0x130
>       > [   11.010431] lr : generic_perform_write+0x118/0x1e0
>       > [   11.010438] sp : ffffffc00809b910
>       > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
>       > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
>       > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
>       > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
>       > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
>       > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
>       > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
>       > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
>       > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
>       > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
>       > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
>       > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>       > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
>       > [   11.010548] Workqueue: events_unbound async_run_entry_fn
>       > [   11.010556] Call trace:
>       > [   11.010558]  dump_backtrace+0x0/0x1c4
>       > [   11.010567]  show_stack+0x18/0x2c
>       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
>       > [   11.010583]  dump_stack+0x18/0x34
>       > [   11.010588]  panic+0x14c/0x2f8
>       > [   11.010597]  print_tainted+0x0/0xb0
>       > [   11.010606]  arm64_serror_panic+0x6c/0x7c
>       > [   11.010614]  do_serror+0x28/0x60
>       > [   11.010621]  el1h_64_error_handler+0x30/0x50
>       > [   11.010628]  el1h_64_error+0x78/0x7c
>       > [   11.010633]  simple_write_end+0xd0/0x130
>       > [   11.010639]  generic_perform_write+0x118/0x1e0
>       > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
>       > [   11.010650]  generic_file_write_iter+0x78/0xd0
>       > [   11.010656]  __kernel_write+0xfc/0x2ac
>       > [   11.010665]  kernel_write+0x88/0x160
>       > [   11.010673]  xwrite+0x44/0x94
>       > [   11.010680]  do_copy+0xa8/0x104
>       > [   11.010686]  write_buffer+0x38/0x58
>       > [   11.010692]  flush_buffer+0x4c/0xbc
>       > [   11.010698]  __gunzip+0x280/0x310
>       > [   11.010704]  gunzip+0x1c/0x28
>       > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
>       > [   11.010715]  do_populate_rootfs+0x80/0x164
>       > [   11.010722]  async_run_entry_fn+0x48/0x164
>       > [   11.010728]  process_one_work+0x1e4/0x3a0
>       > [   11.010736]  worker_thread+0x7c/0x4c0
>       > [   11.010743]  kthread+0x120/0x130
>       > [   11.010750]  ret_from_fork+0x10/0x20
>       > [   11.010757] SMP: stopping secondary CPUs
>       > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
>       > [   11.010788] PHYS_OFFSET: 0x0
>       > [   11.010790] CPU features: 0x00000401,00000842
>       > [   11.010795] Memory Limit: none
>       > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
>       >
>       > On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
>       >       Hi Oleg,
>       >
>       >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
>       >       >       
>       >       >
>       >       >
>       >       > Hello Michal,
>       >       >
>       >       > I was not able to enable earlyprintk in the xen for now.
>       >       > I decided to choose another way.
>       >       > This is a xen's command line that I found out completely.
>       >       >
>       >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
>       >       Yes, adding a printk() in Xen was also a good idea.
>       >
>       >       >
>       >       > So you are absolutely right about a command line.
>       >       > Now I am going to find out why xen did not have the correct parameters from the device tree.
>       >       Maybe you will find this document helpful:
>       >       https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
>       >
>       >       ~Michal
>       >
>       >       >
>       >       > Regards,
>       >       > Oleg
>       >       >
>       >       > On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
>       >       >
>       >       >
>       >       >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
>       >       >     >       
>       >       >     >
>       >       >     >
>       >       >     > Hello Michal,
>       >       >     >
>       >       >     > Yes, I use yocto.
>       >       >     >
>       >       >     > Yesterday all day long I tried to follow your suggestions.
>       >       >     > I faced a problem.
>       >       >     > Manually in the xen config build file I pasted the strings:
>       >       >     In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>       >       >     You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
>       >       >
>       >       >     >
>       >       >     > CONFIG_EARLY_PRINTK
>       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
>       >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
>       >       >     I hope you added =y to them.
>       >       >
>       >       >     Anyway, you have at least the following solutions:
>       >       >     1) Run bitbake xen -c menuconfig to properly set early printk
>       >       >     2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y that is not enabled by default)
>       >       >     3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
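[Editor's note: option 2 above (enabling extra Kconfig options in a Yocto project) is usually done with a config fragment merged into Xen's .config at build time. The sketch below only generates such a fragment; the file name, the `files/` layout, and the idea of listing it in SRC_URI of a xen .bbappend are assumptions for illustration, not taken from the thread.]

```shell
# Sketch, assuming a Yocto-style config fragment: write the early-printk
# options Michal lists into a fragment file that a xen .bbappend could
# reference via SRC_URI (names and paths are hypothetical).
mkdir -p files
cat > files/early-printk.cfg <<'EOF'
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
CONFIG_EARLY_UART_CHOICE_CADENCE=y
EOF
echo "wrote files/early-printk.cfg"
```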
>       >       >
>       >       >     ~Michal
>       >       >
>       >       >     >
>       >       >     > The host hangs at build time.
>       >       >     > Maybe I did not set something in the build config file?
>       >       >     >
>       >       >     > Regards,
>       >       >     > Oleg
>       >       >     >
>       >       >     > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>       >       >     >
>       >       >     >     Thanks Michal,
>       >       >     >
>       >       >     >     You gave me an idea.
>       >       >     >     I am going to try it today.
>       >       >     >
>       >       >     >     Regards,
>       >       >     >     O.
>       >       >     >
>       >       >     >     On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>       >       >     >
>       >       >     >         Thanks Stefano.
>       >       >     >
>       >       >     >         I am going to do it today.
>       >       >     >
>       >       >     >         Regards,
>       >       >     >         O.
>       >       >     >
>       >       >     >         On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >       >     >
>       >       >     >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>       >       >     >             > Hi Michal,
>       >       >     >             >
>       >       >     >             > I corrected xen's command line.
>       >       >     >             > Now it is
>       >       >     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>       >       >     >             > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>       >       >     >
>       >       >     >             4 colors is way too many for xen, just do xen_colors=0-0. There is no
>       >       >     >             advantage in using more than 1 color for Xen.
>       >       >     >
>       >       >     >             4 colors is too few for dom0, if you are giving 1600M of memory to Dom0.
>       >       >     >             Each color is 256M. For 1600M you should give at least 7 colors. Try:
>       >       >     >
>       >       >     >             xen_colors=0-0 dom0_colors=1-8
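[Editor's note: a quick cross-check of the arithmetic above. The 256M-per-color figure is taken from Stefano's message and is platform-specific; the snippet just computes the ceiling division he describes.]

```shell
# With 256M covered per color (figure from the message above), a dom0 with
# dom0_mem=1600M needs ceil(1600/256) colors; integer ceiling division below.
MB_PER_COLOR=256
DOM0_MEM_MB=1600
COLORS_NEEDED=$(( (DOM0_MEM_MB + MB_PER_COLOR - 1) / MB_PER_COLOR ))
echo "$COLORS_NEEDED"   # 7, matching "at least 7 colors"
```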
>       >       >     >
>       >       >     >
>       >       >     >
>       >       >     >             > Unfortunately the result was the same.
>       >       >     >             >
>       >       >     >             > (XEN)  - Dom0 mode: Relaxed
>       >       >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>       >       >     >             > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>       >       >     >             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>       >       >     >             > (XEN) Coloring general information
>       >       >     >             > (XEN) Way size: 64kB
>       >       >     >             > (XEN) Max. number of colors available: 16
>       >       >     >             > (XEN) Xen color(s): [ 0 ]
>       >       >     >             > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>       >       >     >             > (XEN) Color array allocation failed for dom0
>       >       >     >             > (XEN)
>       >       >     >             > (XEN) ****************************************
>       >       >     >             > (XEN) Panic on CPU 0:
>       >       >     >             > (XEN) Error creating domain 0
>       >       >     >             > (XEN) ****************************************
>       >       >     >             > (XEN)
>       >       >     >             > (XEN) Reboot in five seconds...
>       >       >     >             >
>       >       >     >             > I am going to find out how command line arguments passed and parsed.
>       >       >     >             >
>       >       >     >             > Regards,
>       >       >     >             > Oleg
>       >       >     >             >
>       >       >     >             > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>       >       >     >             >       Hi Michal,
>       >       >     >             >
>       >       >     >             > You pointed me right at the problem. Thank you.
>       >       >     >             > I am going to use your point.
>       >       >     >             > Let's see what happens.
>       >       >     >             >
>       >       >     >             > Regards,
>       >       >     >             > Oleg
>       >       >     >             >
>       >       >     >             >
>       >       >     >             > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
>       >       >     >             >       Hi Oleg,
>       >       >     >             >
>       >       >     >             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>       >       >     >             >       >       
>       >       >     >             >       >
>       >       >     >             >       >
>       >       >     >             >       > Hello Stefano,
>       >       >     >             >       >
>       >       >     >             >       > Thanks for the clarification.
>       >       >     >             >       > My company uses yocto for image generation.
>       >       >     >             >       > What kind of information do you need to consult me in this case ?
>       >       >     >             >       >
>       >       >     >             >       > Maybe module sizes/addresses which were mentioned by @Julien Grall <julien@xen.org> ?
>       >       >     >             >
>       >       >     >             >       Sorry for jumping into the discussion, but FWICS the Xen command line you provided seems not to be the one
>       >       >     >             >       Xen booted with. The error you are observing is most likely due to the dom0 colors configuration not being
>       >       >     >             >       specified (i.e. lack of the dom0_colors=<> parameter). Although in the command line you provided this
>       >       >     >             >       parameter is set, I strongly doubt that this is the actual command line in use.
>       >       >     >             >
>       >       >     >             >       You wrote:
>       >       >     >             >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native
>       >       >     >             >       sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>       >       >     >             >
>       >       >     >             >       but:
>       >       >     >             >       1) way_szize has a typo
>       >       >     >             >       2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
>       >       >     >             >       (XEN) Xen color(s): [ 0 ]
>       >       >     >             >
>       >       >     >             >       This makes me believe that no colors configuration actually ended up in the command line that Xen booted
>       >       >     >             >       with. A single color for Xen is the "default if not specified", and the way size was probably calculated
>       >       >     >             >       by asking the HW.
>       >       >     >             >
>       >       >     >             >       So I would suggest first cross-checking the command line in use.
>       >       >     >             >
>       >       >     >             >       ~Michal
>       >       >     >             >
>       >       >     >             >
>       >       >     >             >       >
>       >       >     >             >       > Regards,
>       >       >     >             >       > Oleg
>       >       >     >             >       >
>       >       >     >             >       > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
>       >       >     >             >       >
>       >       >     >             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>       >       >     >             >       >     > Hi Julien,
>       >       >     >             >       >     >
>       >       >     >             >       >     > >> This feature has not been merged in Xen upstream yet
>       >       >     >             >       >     >
>       >       >     >             >       >     > > would assume that upstream + the series on the ML [1] work
>       >       >     >             >       >     >
>       >       >     >             >       >     > Please clarify this point.
>       >       >     >             >       >     > Because the two statements seem contradictory.
>       >       >     >             >       >
>       >       >     >             >       >     Hi Oleg,
>       >       >     >             >       >
>       >       >     >             >       >     As Julien wrote, there is nothing controversial. As you are aware,
>       >       >     >             >       >     Xilinx maintains a separate Xen tree specific for Xilinx here:
>       >       >     >             >       >     https://github.com/xilinx/xen
>       >       >     >             >       >
>       >       >     >             >       >     and the branch you are using (xlnx_rebase_4.16) comes from there.
>       >       >     >             >       >
>       >       >     >             >       >
>       >       >     >             >       >     Instead, the upstream Xen tree lives here:
>       >       >     >             >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>       >       >     >             >       >
>       >       >     >             >       >
>       >       >     >             >       >     The Cache Coloring feature that you are trying to configure is present
>       >       >     >             >       >     in xlnx_rebase_4.16, but not yet present upstream (there is an
>       >       >     >             >       >     outstanding patch series to add cache coloring to Xen upstream but it
>       >       >     >             >       >     hasn't been merged yet.)
>       >       >     >             >       >
>       >       >     >             >       >
>       >       >     >             >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
>       >       >     >             >       >     you as you already have Cache Coloring as a feature there.
>       >       >     >             >       >
>       >       >     >             >       >
>       >       >     >             >       >     I take it you are using ImageBuilder to generate the boot configuration? If
>       >       >     >             >       >     so, please post the ImageBuilder config file that you are using.
>       >       >     >             >       >
>       >       >     >             >       >     But from the boot message, it looks like the colors configuration for
>       >       >     >             >       >     Dom0 is incorrect.
>       >       >     >             >       >
>       >       >     >             >
>       >       >     >             >
>       >       >     >             >
>       >       >     >
>       >       >
>       >
>       >
>       >
> 
> 
> 


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 19:47:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 19:47:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526311.818000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOcx-0002o2-2b; Tue, 25 Apr 2023 19:47:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526311.818000; Tue, 25 Apr 2023 19:47:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOcw-0002nt-VO; Tue, 25 Apr 2023 19:47:02 +0000
Received: by outflank-mailman (input) for mailman id 526311;
 Tue, 25 Apr 2023 19:47:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3bG=AQ=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1prOcw-0002TT-20
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 19:47:02 +0000
Received: from mail-qk1-x735.google.com (mail-qk1-x735.google.com
 [2607:f8b0:4864:20::735])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f2272a83-e3a1-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 21:47:01 +0200 (CEST)
Received: by mail-qk1-x735.google.com with SMTP id
 af79cd13be357-74e3de79bf2so451277685a.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Apr 2023 12:47:01 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 rv12-20020a05620a688c00b0074c438db55asm1492592qkn.74.2023.04.25.12.46.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Apr 2023 12:46:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2272a83-e3a1-11ed-b223-6b7b168915f2
X-Received: by 2002:ac8:7d94:0:b0:3d9:cb72:3653 with SMTP id c20-20020ac87d94000000b003d9cb723653mr32039016qtd.25.1682452019795;
        Tue, 25 Apr 2023 12:46:59 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH] libxl: Print device_kind as a string
Date: Tue, 25 Apr 2023 15:46:22 -0400
Message-Id: <20230425194622.114869-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230425194622.114869-1-jandryuk@gmail.com>
References: <20230425194622.114869-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Printing the integer isn't particularly informative.  Switch to a
human-readable string when printing the device_kind in
libxl__get_hotplug_script_info().

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libs/light/libxl_linux.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_linux.c b/tools/libs/light/libxl_linux.c
index 27f2bce718..f7c92ba562 100644
--- a/tools/libs/light/libxl_linux.c
+++ b/tools/libs/light/libxl_linux.c
@@ -231,8 +231,8 @@ int libxl__get_hotplug_script_info(libxl__gc *gc, libxl__device *dev,
         break;
     default:
         /* No need to execute any hotplug scripts */
-        LOGD(DEBUG, dev->domid,
-             "backend_kind %d, no need to execute scripts", dev->backend_kind);
+        LOGD(DEBUG, dev->domid, "backend_kind %s, no need to execute scripts",
+             libxl__device_kind_to_string(dev->backend_kind));
         rc = 0;
         break;
     }
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 19:47:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 19:47:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526310.817990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOcs-0002Wz-QX; Tue, 25 Apr 2023 19:46:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526310.817990; Tue, 25 Apr 2023 19:46:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOcs-0002Ws-Nj; Tue, 25 Apr 2023 19:46:58 +0000
Received: by outflank-mailman (input) for mailman id 526310;
 Tue, 25 Apr 2023 19:46:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3bG=AQ=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1prOcr-0002TT-Hp
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 19:46:57 +0000
Received: from mail-qt1-x82c.google.com (mail-qt1-x82c.google.com
 [2607:f8b0:4864:20::82c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ef110557-e3a1-11ed-b223-6b7b168915f2;
 Tue, 25 Apr 2023 21:46:56 +0200 (CEST)
Received: by mail-qt1-x82c.google.com with SMTP id
 d75a77b69052e-3ef6e84945dso14982511cf.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Apr 2023 12:46:56 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 rv12-20020a05620a688c00b0074c438db55asm1492592qkn.74.2023.04.25.12.46.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Apr 2023 12:46:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef110557-e3a1-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682452015; x=1685044015;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=lzwTXjcS78T2tg9v5yGGdamq4ltimB2uXb0D2+o3Noc=;
        b=VI1Lm9Tw+YLYJfU8T1PQAlNDnQPuRYCpdEAfC9ORHiqTh+gkc+EIYeb3v5XvD8XGmW
         H9kz2Ygr5uzgEp0bKg7/Lj9548CJ5+Ql/HZpF6ghzRYdbVuiNRQbi0rkzzUg3Sbuz4j4
         UQ6qyIdQLaZqmzexTmZnY1cqdNy1yLoqTHWOyYR3Lo3wT/Hb8c04tvnqXUmv/5dxaEiN
         8u2wmBXUnhs/w2iAWKPiHfAWQyjCtIcpTKFHT8RtzzEN7rSv0mtVW+GIXL56auCDWA0R
         Om4/QZLYqJ8kMqcjjHn0s+XxBmeJDLI7TxCqDcCY1fj8WY1ApRuOIN/fQ4RXubuVC57U
         C5rg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682452015; x=1685044015;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=lzwTXjcS78T2tg9v5yGGdamq4ltimB2uXb0D2+o3Noc=;
        b=W10GdbvgHEn4n51p0sTgsygfhuHsp8M6fuEsYFVxZARkxe8AFSc8npjAzCND5hY50n
         1dcQrnNUbmnmWfil0DoHlO+zV4u9fuLTmvnQL8nBYFsdt50j0ZBxpFTYNBMjbea9iu4g
         wFi03wZBVfvbUp/JkQfL6DEaSasb4Pya7jTXISxt1UZ4Ek8NzMvYkCMFT1gZAKu2+mNw
         rhsgvrXmOqA2wawCzpwM9x3xNuwfuRWudeY9cgtVEIwa29CVtlXixbyVyEjWueqZyrm+
         hj4pkC4zpOHaTIyp6NpR/pe70aqMkTnN+5duZtXcXtoZEl+rBfoJPBaAEg2PROPo/Qq6
         q3LA==
X-Gm-Message-State: AAQBX9cUjASzGso9Iip4rsyiyrk5OWrLRo0LaA/GxQoulrLRIdONMOmr
	2een4ACe8OAYvwijHLgYtGGCOT0L24k=
X-Google-Smtp-Source: AKy350bFbXfmNjgtxcmFMFJ2OzlUVxB1znImKXDntfy5sNIUMoQGVMD3VwmEoF4MDPRV+p/1GbOJYw==
X-Received: by 2002:ac8:5e46:0:b0:3ef:6798:2a2e with SMTP id i6-20020ac85e46000000b003ef67982a2emr19288011qtx.54.1682452014732;
        Tue, 25 Apr 2023 12:46:54 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH] libxl: device_backend_callback() print rc on error
Date: Tue, 25 Apr 2023 15:46:21 -0400
Message-Id: <20230425194622.114869-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230425194622.114869-1-jandryuk@gmail.com>
References: <20230425194622.114869-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Print the rc when an error is found in device_backend_callback(), so
the user has some indication of why the operation failed.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libs/light/libxl_device.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
index a75c21d433..13da6e0573 100644
--- a/tools/libs/light/libxl_device.c
+++ b/tools/libs/light/libxl_device.c
@@ -1160,9 +1160,10 @@ static void device_backend_callback(libxl__egc *egc, libxl__ev_devstate *ds,
     }
 
     if (rc) {
-        LOGD(ERROR, aodev->dev->domid, "unable to %s device with path %s",
+        LOGD(ERROR, aodev->dev->domid,
+                    "unable to %s device with path %s - rc %d",
                     libxl__device_action_to_string(aodev->action),
-                    libxl__device_backend_path(gc, aodev->dev));
+                    libxl__device_backend_path(gc, aodev->dev), rc);
         goto out;
     }
 
-- 
2.40.0
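The effect of the change can be illustrated outside libxl; a minimal shell sketch (the message format mirrors the patch, but the function name, path, and rc value are made up for illustration):

```shell
# Sketch: the patch appends "- rc %d" to the existing error message, so
# the user can tell *why* a device operation failed rather than only
# that it failed.  (log_device_error and the values below are
# illustrative, not libxl code.)
log_device_error() {
    action="$1" path="$2" rc="$3"
    printf 'unable to %s device with path %s - rc %d\n' \
        "$action" "$path" "$rc"
}
```

With only the action and path, every failure of the same device looks identical in the log; the rc distinguishes, e.g., a timeout from a hotplug script error.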



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 19:47:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 19:47:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526309.817980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOcq-0002I6-IW; Tue, 25 Apr 2023 19:46:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526309.817980; Tue, 25 Apr 2023 19:46:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOcq-0002Hz-Fd; Tue, 25 Apr 2023 19:46:56 +0000
Received: by outflank-mailman (input) for mailman id 526309;
 Tue, 25 Apr 2023 19:46:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3bG=AQ=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1prOcp-0002Ht-Jt
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 19:46:55 +0000
Received: from mail-qt1-x835.google.com (mail-qt1-x835.google.com
 [2607:f8b0:4864:20::835])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eb580ea1-e3a1-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 21:46:50 +0200 (CEST)
Received: by mail-qt1-x835.google.com with SMTP id
 d75a77b69052e-3ef4a74b42dso30582501cf.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Apr 2023 12:46:50 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 rv12-20020a05620a688c00b0074c438db55asm1492592qkn.74.2023.04.25.12.46.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Apr 2023 12:46:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb580ea1-e3a1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682452008; x=1685044008;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=Kepz7/qGHLR5Ko6XQgYK3k4tdVN+H4VU8ZBZpvWm3pg=;
        b=dH1qID43D6X8wz4I74xUfwb48DtMD530JFfXz7XZOXL3O2UxX81y3JBZlhQznPXuH0
         RPR/C+gSQOA+LvseRCqMrBxCNi4pruoPtZKSG2QDNbDOK7cw3/ltnoqXi+0KOcVPGFwH
         8Dx7K+Y6NMy9VXjGRUt+p55RTZc+xVHS0li90TF8ni0v+7RwmN7+6gTtDM89LXTHDeHe
         7EEwzjC0wDMmCEXADoXwujWUTwEyR3Cp/HjZBa533qVkmlYr4etsabGSCTn6wVagjQAz
         6xypRCvHCfCKs/4Iun0kF6Kram9TZMwMHK8RrgjYjr5REX+CnKSnvYFegmv5Vorj5jb+
         BpgA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682452008; x=1685044008;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=Kepz7/qGHLR5Ko6XQgYK3k4tdVN+H4VU8ZBZpvWm3pg=;
        b=KzjaNNBSP1gyTSnLt5Agm1lGZ79VR5nAuWcxsaLopFPFzWyIs/DqkZ49yG5I3sKxde
         5FPT6jW6H6kS3GLk1PHPm0rgQZP7FHw6kGqrD+070u90ES/GKVRPtN+rVj4CcjJyO0aK
         Z/VSxqQOUB+Nto869KGP8R24SkCP/H7fr0+pRn534CvrERbjavBdAvdeS3rbMtR9roq8
         PyQMFPz/0pFHGZiFjaRXug92k50ZwUJ507DNHOpIzcpcrHTPGYaJQDD36ITDL4q/Oko+
         5ouPUdLjy9wvTs5hLEaTo/wOFLoCSWmghGYzSausxmS4WmgoDUabPWzmeoYeUxUv4y1p
         qXfw==
X-Gm-Message-State: AAQBX9ead5LzbYSy6EVvVODlzLMt3g8x6MIdtvgnY7iLuMfNLHQACL8u
	PHPkCBc314nyNf/TCVtMIERHcCfmTnA=
X-Google-Smtp-Source: AKy350YcNREJfRH/nEbSDZMs6QUuHcjJdf65yWhgdMhhxiSp29zfA2++KPWh7/kNE0oPu6Gd6Dlvdw==
X-Received: by 2002:a05:622a:345:b0:3bf:bb1f:3c2b with SMTP id r5-20020a05622a034500b003bfbb1f3c2bmr30583251qtw.6.1682452008545;
        Tue, 25 Apr 2023 12:46:48 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] Fix install.sh for systemd
Date: Tue, 25 Apr 2023 15:46:20 -0400
Message-Id: <20230425194622.114869-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On a Fedora system, running `sudo sh install.sh` breaks the system.
The installation clobbers /var/run, which is a symlink to /run.  A
subsequent boot fails once /var/run and /run differ, since accesses
through /var/run cannot find items that now exist only in /run, and
vice versa.

Skip populating /var/run/xen when systemd is in use.  systemd expects
an empty /run at boot, and everything works properly without the
directory being pre-created.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/Makefile | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/Makefile b/tools/Makefile
index 4906fdbc23..32c8b0a2a2 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -58,9 +58,11 @@ build all: subdirs-all
 install:
 	$(INSTALL_DIR) -m 700 $(DESTDIR)$(XEN_DUMP_DIR)
 	$(INSTALL_DIR) $(DESTDIR)$(XEN_LOG_DIR)
-	$(INSTALL_DIR) $(DESTDIR)$(XEN_RUN_DIR)
 	$(INSTALL_DIR) $(DESTDIR)$(XEN_LIB_DIR)
+ifneq ($(CONFIG_SYSTEMD),y)
+	$(INSTALL_DIR) $(DESTDIR)$(XEN_RUN_DIR)
 	$(INSTALL_DIR) $(DESTDIR)$(XEN_RUN_STORED)
+endif
 	$(INSTALL_DIR) $(DESTDIR)$(PKG_INSTALLDIR)
 	$(MAKE) subdirs-install
 
-- 
2.40.0
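The hazard the patch guards against can be sketched as a pre-install check; a minimal illustration (the helper and path names are assumptions for this sketch, not part of the patch):

```shell
# Sketch of the hazard: on a systemd system /var/run is a symlink to
# /run, and replacing it with a real directory breaks the next boot.
# The helper skips population when the symlink is present, mirroring
# the CONFIG_SYSTEMD guard the patch adds to tools/Makefile.
# (maybe_install_run_dir and the layout are illustrative only.)
maybe_install_run_dir() {
    destdir="$1"
    if [ -L "$destdir/var/run" ]; then
        # systemd-style layout: leave the symlink alone.
        echo "skip: $destdir/var/run is a symlink"
    else
        mkdir -p "$destdir/var/run/xen"
        echo "created: $destdir/var/run/xen"
    fi
}
```

On systemd systems the runtime directory is instead expected to be created at boot (e.g. via tmpfiles or the service itself), which is why skipping it at install time is safe.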



From xen-devel-bounces@lists.xenproject.org Tue Apr 25 19:51:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 19:51:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526321.818009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOh5-0004os-KH; Tue, 25 Apr 2023 19:51:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526321.818009; Tue, 25 Apr 2023 19:51:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOh5-0004ol-Hl; Tue, 25 Apr 2023 19:51:19 +0000
Received: by outflank-mailman (input) for mailman id 526321;
 Tue, 25 Apr 2023 19:51:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KQ8d=AQ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1prOh3-0004oe-FU
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 19:51:17 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 891c3945-e3a2-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 21:51:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 891c3945-e3a2-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1682452273;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=epqrrIP9XVMR1rQYOgUv6OMHFSXsoKyAhdvov9JTG3c=;
	b=hcfCyrdfQ/+yHnKF3T+I49mLvnN7Y7S69gBRQL7PhhKJcZIQM6dUY5pvAbw4x5UtKBksj5
	+gNdAWyqbtppxTS6xk/OOGkQEgCKZ50daZMUpYiSsjHUolnRQ63od8fRIwCIy22iqsL7Jz
	ptuYgLF2Vyi6TxSoYJlRddHoLRBh0aBb/2ut263viqoCn4L8IM34OuGeUFz0vHjf3uDsn9
	kzIwMYp79iEd+cDm0M3l/TvJoQ8Ebljb98ttQPEip+uioS6jdgokPZm34PxDLgh8kTLkX7
	Tk0Brm7tWOMbl31MYKpF+X1fNVmxtHUKna8IUJdeJdt+TiqryaL2tjxt/WxUaA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1682452273;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=epqrrIP9XVMR1rQYOgUv6OMHFSXsoKyAhdvov9JTG3c=;
	b=mlbSRhNUu/9kpluKPqwDbDiOM+hdImwLPmR+/CiZOtx2CU7HgipiQ7bg9V2pgBimVEqMmi
	qqHiWR0zzWYsg/Bg==
To: Mark Rutland <mark.rutland@arm.com>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Catalin
 Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 linux-arm-kernel@lists.infradead.org, David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>, Arnd
 Bergmann <arnd@arndb.de>, Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 22/37] arm64: smp: Switch to hotplug core state
 synchronization
In-Reply-To: <ZD1q3TF2ixVD1f2M@FVFF77S0Q05N>
References: <20230414225551.858160935@linutronix.de>
 <20230414232310.569498144@linutronix.de> <ZD1q3TF2ixVD1f2M@FVFF77S0Q05N>
Date: Tue, 25 Apr 2023 21:51:12 +0200
Message-ID: <87ttx3zqof.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Mon, Apr 17 2023 at 16:50, Mark Rutland wrote:
> On Sat, Apr 15, 2023 at 01:44:49AM +0200, Thomas Gleixner wrote:
> I gave this a spin on arm64 (in a 64-vCPU VM on an M1 host), and it seems to
> work fine with a bunch of vCPUs being hotplugged off and on again randomly.
>
> FWIW:
>
> Tested-by: Mark Rutland <mark.rutland@arm.com>
>
> I also hacked the code to have the dying CPU spin forever before the call to
> cpuhp_ap_report_dead(). In that case I see a warning, and that we don't call
> arch_cpuhp_cleanup_dead_cpu(), and that the CPU is marked as offline (per
> /sys/devices/system/cpu/$N/online).

Nice!

> As a tangent/aside, we might need to improve that for confidential compute
> architectures, and we might want to generically track cpus which might still be
> using kernel text/data. On arm64 we ensure that via our cpu_kill() callback
> (which'll use PSCI CPU_AFFINITY_INFO), but I'm not sure if TDX and/or SEV-SNP
> have a similar mechanism.
>
> Otherwise, a malicious hypervisor can pause a vCPU just before it leaves the
> kernel (e.g. immediately after the arch_cpuhp_cleanup_dead_cpu() call), wait
> for a kexec (or reuse of stack memory), and unpause the vCPU to cause things
> to blow up.

There are a gazillion ways for a malicious hypervisor to blow up a
confidential guest.

The real question is whether it can use such a blow-up to extract
confidential information from the guest.

If not, then it is just yet another DoS vector, which is an "acceptable"
attack, as it affects only availability, not confidentiality.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 19:55:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 19:55:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526324.818020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOlJ-0005R9-5t; Tue, 25 Apr 2023 19:55:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526324.818020; Tue, 25 Apr 2023 19:55:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOlJ-0005R2-1r; Tue, 25 Apr 2023 19:55:41 +0000
Received: by outflank-mailman (input) for mailman id 526324;
 Tue, 25 Apr 2023 19:55:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prOlH-0005Qs-QI; Tue, 25 Apr 2023 19:55:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prOlH-0005l9-Ew; Tue, 25 Apr 2023 19:55:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prOlH-0007Bt-3N; Tue, 25 Apr 2023 19:55:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prOlH-0001UP-2v; Tue, 25 Apr 2023 19:55:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6GofTY2zcgwqWal2gV3ynFS4syq69OYIgsmDC31qgz4=; b=PJaRazu/W6HG94DhEMNSNgq+pm
	w1yjeHKj/kKiuB85dtRgocYgpLgnOIG6+UMg79juUDbQkxnnD35ZGgGgxY0uw0pF107pD8MfCjKlZ
	WS0UyLEG1Qx1bUr9S7sRuuiqF7qtgdneRse24cLXyfCsyYI46/kgEXYBeKHp/lucqXBA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180403-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180403: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=c4bc4d3b82fbe22e03c986ca896090f481df5c10
X-Osstest-Versions-That:
    libvirt=d063389f10707a6694fb98a69f403a63e4d22245
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Apr 2023 19:55:39 +0000

flight 180403 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180403/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180353
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180353
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180353
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              c4bc4d3b82fbe22e03c986ca896090f481df5c10
baseline version:
 libvirt              d063389f10707a6694fb98a69f403a63e4d22245

Last test of basis   180353  2023-04-21 04:18:52 Z    4 days
Testing same since   180403  2023-04-25 04:20:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  K Shiva <shiva_kr@riseup.net>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   d063389f10..c4bc4d3b82  c4bc4d3b82fbe22e03c986ca896090f481df5c10 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 20:07:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 20:07:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526335.818029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOx6-00077c-Dk; Tue, 25 Apr 2023 20:07:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526335.818029; Tue, 25 Apr 2023 20:07:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prOx6-00077V-B7; Tue, 25 Apr 2023 20:07:52 +0000
Received: by outflank-mailman (input) for mailman id 526335;
 Tue, 25 Apr 2023 20:07:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KQ8d=AQ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1prOx5-00077P-C3
 for xen-devel@lists.xenproject.org; Tue, 25 Apr 2023 20:07:51 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id da0433af-e3a4-11ed-8611-37d641c3527e;
 Tue, 25 Apr 2023 22:07:49 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da0433af-e3a4-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1682453268;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=4O15691mL3gwOERb5M8WwZ/BXXDNqH0+MWC0TCs50QE=;
	b=Bwuzz8Nz/BV6G/VL2bb+kha7mVj8/IRkuvbsjwjzk4USnPr2OW7n7gum3YLTay29fSKm4l
	VIaAVviF4DxNGn2wYM6o0v57oY64CRe+VgXsYQv76pd/utkcrBMIA+Hlh315Cm54VqTo7K
	atpyanHwbqEDa1wx0+1kk6JnSgmhiayROO89bvR9xqhG5h5fXOyJHuELCWZOaAN84arMeq
	GGcJ4Ir8foJBLeP8ch9gHIy9jvOiKYJ4asV3QtfyaAgXR2pzDWC3evufPz0x9jkOlulsT0
	C2Q9I3DEe4BIAOmlzxJr0oVN2tLwYUOfQk+hTo1aBm1XqujojvIZxEYvbxiQIw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1682453268;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=4O15691mL3gwOERb5M8WwZ/BXXDNqH0+MWC0TCs50QE=;
	b=RrFYGWkb9+Sr7TGoFVSkR/dCgulbRZuvJgUFeQBLzx1gsWc2Iv3HT6T4XcF1FLb33wIMzq
	mCdHr2bCd+lRBoCA==
To: Sean Christopherson <seanjc@google.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Menzel
 <pmenzel@molgen.mpg.de>, linux-kernel@vger.kernel.org, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>, Paolo Bonzini
 <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom Lendacky
 <thomas.lendacky@amd.com>, Oleksandr Natalenko <oleksandr@natalenko.name>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E. J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <87v8hq35sk.ffs@tglx>
References: <87r0sh4m7a.ffs@tglx>
 <8592a301-9933-1cad-bd61-8d97e7c7493b@molgen.mpg.de> <87a5z443g2.ffs@tglx>
 <877cu83v45.ffs@tglx> <874jpc3s3r.ffs@tglx>
 <0f5463fd-9c4a-6361-adbb-dd89dbb9138d@citrix.com>
 <c2aaa4fb-a5ba-d5bf-634a-dcf4fd8ad246@citrix.com> <871qkf3qek.ffs@tglx>
 <26d385da-2ede-5d73-2959-84c8f7d89e03@citrix.com> <87y1mm3iqz.ffs@tglx>
 <ZEFRhXua6Jxvit1R@google.com> <87v8hq35sk.ffs@tglx>
Date: Tue, 25 Apr 2023 22:07:47 +0200
Message-ID: <87r0s7zpws.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Thu, Apr 20 2023 at 17:57, Thomas Gleixner wrote:
> On Thu, Apr 20 2023 at 07:51, Sean Christopherson wrote:
> Something like the completely untested patch below should just work with
> whatever APIC ID the BIOS decided to dice.
>
> That might just work on SEV too without that GHCB muck, but what do I
> know.

It does not.

RDMSR(X2APIC_ID) is trapped via #VC, which cannot be handled at that
point. Unfortunately, the GHCB protocol does not provide an RDMSR
mechanism similar to the CPUID one. Nor does the secure firmware
enforce consistency between CPUID(0xb):APICID and the real APIC ID.

So the hypervisor can dice the APIC IDs as long as they are consistent
with the provided ACPI/MADT table.

So no parallel startup for SEV for now.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Tue Apr 25 20:15:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Apr 2023 20:15:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526339.818040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prP4W-0000B1-5A; Tue, 25 Apr 2023 20:15:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526339.818040; Tue, 25 Apr 2023 20:15:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prP4W-0000Au-2Z; Tue, 25 Apr 2023 20:15:32 +0000
Received: by outflank-mailman (input) for mailman id 526339;
 Tue, 25 Apr 2023 20:15:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prP4V-0000Ak-8Z; Tue, 25 Apr 2023 20:15:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prP4V-0006Kl-2D; Tue, 25 Apr 2023 20:15:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prP4U-00009E-NQ; Tue, 25 Apr 2023 20:15:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prP4U-0003zN-Mx; Tue, 25 Apr 2023 20:15:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=x0aTubBf8Hl0qoGXbKIkSWdmjybSww8UXbF9a3yK5tw=; b=Kvlvuxvy4NqOl+We2Ds7k+BdFz
	qPUldW8emZpYP2JudiJFjoBnfkHcBGw1N3/k+KX1nkRnVfWIdHEkg/3m2+IP31NC2TB6Wz559p0oN
	xZF3JuczM0GYsPjTG/vEIg74OU+rYhFomZ9Fs/nk4YtvnyPtZGzUrkODtpGqpxO8JqLY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180400-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180400: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=61d325dcbc05d8fef88110d35ef7776f3ac3f68b
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Apr 2023 20:15:30 +0000

flight 180400 linux-linus real [real]
flight 180415 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180400/
http://logs.test-lab.xenproject.org/osstest/logs/180415/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180415-retest
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180415-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                61d325dcbc05d8fef88110d35ef7776f3ac3f68b
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    9 days
Failing since        180281  2023-04-17 06:24:36 Z    8 days   15 attempts
Testing same since   180400  2023-04-25 01:42:44 Z    0 days    1 attempts

------------------------------------------------------------
312 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 14889 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 00:16:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 00:16:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526382.818096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSps-0000Oz-W1; Wed, 26 Apr 2023 00:16:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526382.818096; Wed, 26 Apr 2023 00:16:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSps-0000MW-Lm; Wed, 26 Apr 2023 00:16:40 +0000
Received: by outflank-mailman (input) for mailman id 526382;
 Wed, 26 Apr 2023 00:16:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0jx=AR=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1prSpq-0008T0-Lq
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 00:16:38 +0000
Received: from wout2-smtp.messagingengine.com (wout2-smtp.messagingengine.com
 [64.147.123.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9bcbbfab-e3c7-11ed-b223-6b7b168915f2;
 Wed, 26 Apr 2023 02:16:38 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id 427DD3200918;
 Tue, 25 Apr 2023 20:16:36 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Tue, 25 Apr 2023 20:16:36 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 25 Apr 2023 20:16:34 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bcbbfab-e3c7-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682468195; x=1682554595; bh=AI
	2JxnYviPNAYYE+wMSmcTdckoUvEFPuFDG8Zwtbt2A=; b=n/PtNepDuE337hVfcj
	zAOm8fgI/Q5sZACtn9piV/PwruY0dJ3FA6zdD3yVHdsUiRGSDgJ9Zlr01YtFJJNo
	ixY91Rx6ZZgautdWdRgfrWwK+hDyL1JGDh4U046w8Lx0VySX1CNw7l1USaIg8aCG
	JWvNhrIjOFEqOOmb6JPLk6vyS06y2/vUUKr4CDOXO4HrRbsnNT5h6s1JzniH56Mp
	B9WxmozxzLbTGVInpYD5ZlKLSVNIC7pguyiNj+DMqQaqKgjxQ8yHd3jErzEngFHh
	tFwGt9Mbfkrr7jrInHtS45WF6K+J6HpVFpyyGwKJijatDDOC6NtQACA9lgRNwHwm
	AHiA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682468195; x=1682554595; bh=AI2JxnYviPNAYYE+wMSmcTdckoUvEFPuFDG
	8Zwtbt2A=; b=XTKHwwBFRl+xoJafvnjVhhX8tUp2inan7n83tE1rpI7V55UOXYH
	FMfUm9nuQYZbQVKcIxT0dKYvcCwMgfTvkxp7Dco4enZyADfYz9NS9PVPRPcRI5C0
	ekQesw9Rwxg8ycUXC9M/oLSS+I7hH9SfvjbeBPOh1RB1cdxpmOp5OBSlCKvbxXlu
	LVZEfCOdNeRwF2LVgh/ciFUndfpeQLoP7hN59LeIDziVLpUOA1r2povsGD7D5bb0
	XmfLeKe+8DJVEArq6jo4dmJJZ8oBaAclHImPLRFSqUvp16QFDrOcXSESH4ePCFhb
	sLO21XXl0LBi71nAKpPbqOVm359XHNYLdxA==
X-ME-Sender: <xms:Y21IZNvyLy3PF3P8Ercc5zGiw6rW_0p1Ej6VchhS28S5Av1cQzezlA>
    <xme:Y21IZGdFdWX5Jq-eVBq4utoh9OAvbCbl1yTOT6basUqqKJx6NsQxQv-Ij-76mEKj3
    FLtEW0L9putDA>
X-ME-Received: <xmr:Y21IZAxvndPgxj6CXQhGIqXaJjL6eUR3NRxOfG2Z8SErgBRVruzD-vShc-qCa14mhDIwfPi6Kk0py0w95U9fKsgpXYBWiAVSB-j3Efzje7teywc5nga3>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedufedgfeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:Y21IZEN8wJe6foV98MMi-iK-6VjjlFRWNNk413XdBoijOMRUhRWkGQ>
    <xmx:Y21IZN_eoHCZSiAjvBZFzFTk3NmoBBHqFS8enXQQnfap7PakiLP1kQ>
    <xmx:Y21IZEXbNdWWuNEKxJLfwNNAsbau31IpFICgWvUwwLr-NEH8WAVBAA>
    <xmx:Y21IZPmsA8vy3QnjRrO5hrOk1y1VrUVvGsrXS3MAaWPS6CrTXPWFTg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 3/6] automation: re-enable building SeaBIOS in Alpine container
Date: Wed, 26 Apr 2023 02:16:13 +0200
Message-Id: <9e7fc91744e52b6b2f5ea8a05a92b920ad22451a.1682468126.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
References: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

It seems to build just fine with Alpine 3.12, and SeaBIOS is necessary
for an HVM test (that uses the Alpine build).

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 automation/scripts/build | 2 --
 1 file changed, 2 deletions(-)

diff --git a/automation/scripts/build b/automation/scripts/build
index 7d1b19c4250d..d830cff7b7c7 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -77,8 +77,6 @@ else
     if ldd /bin/ls | grep -q musl; then
         # disable --disable-werror for QEMUU when building with MUSL
         cfgargs+=("--with-extra-qemuu-configure-args=\"--disable-werror\"")
-        # SeaBIOS doesn't build on MUSL systems
-        cfgargs+=("--with-system-seabios=/bin/false")
     fi
 
     # Qemu requires Python 3.5 or later, and ninja
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 00:16:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 00:16:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526380.818080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSps-00005K-0t; Wed, 26 Apr 2023 00:16:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526380.818080; Wed, 26 Apr 2023 00:16:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSpr-00004v-Sn; Wed, 26 Apr 2023 00:16:39 +0000
Received: by outflank-mailman (input) for mailman id 526380;
 Wed, 26 Apr 2023 00:16:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0jx=AR=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1prSpq-0008Sz-0r
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 00:16:38 +0000
Received: from wout2-smtp.messagingengine.com (wout2-smtp.messagingengine.com
 [64.147.123.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a9241dc-e3c7-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 02:16:36 +0200 (CEST)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id 3BE5D3200488;
 Tue, 25 Apr 2023 20:16:34 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Tue, 25 Apr 2023 20:16:34 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 25 Apr 2023 20:16:32 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a9241dc-e3c7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682468193; x=1682554593; bh=PK
	2ZYDGyTV06RxgorcTiEiiY75XG+xFNOCsrrtktg5E=; b=pOx15lYTtT4GMtluk3
	0gaEGiPx01TOWBA5iG5lCkdFvdS+Uf4+VHDm82s+bWM4F86PGNpGeUPwt1IUYVFv
	/EBGEbrCuHXEvw+Rl3NQxScTZlaucS0lMnLSaO8uH/gC5TN0e02G9o/eUHqJbLYC
	W+PyMj9dLwfWTJeO6fOFYtCB8eoBytLVQqHAt91NgPR3ErpLphas7y327QCVNy8b
	uSEt31mzhUHi2STh/XYPB0yXS32ezxtRLk7f+DkwytIaz90BCYSieekO6+nEH0TL
	k87QMuFxgzH3oHUCDCtyP0S3oheYkB1fZsXKPYFhyenzsrX7ZEUbFBssTWmF3z2Z
	eT3g==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682468193; x=1682554593; bh=PK2ZYDGyTV06RxgorcTiEiiY75XG+xFNOCs
	rrtktg5E=; b=VdICgU1dxn5rVC93Ga9Va7F+8Vclljm3R2JsJJ2frsVQB3ADtsY
	Wpx/8lYSyUG8aJrp7bMV05YHcuOjMEIefhhT8oGhj0HK6P4313pWXLUXzZHutEMt
	PXZ2w/e/9teN38aA0puSB24RuL1cGDiwlTvukD+yRbEs7ZmT/j2zdn5C4zu07/Il
	/3wHca95VWkHGAlAK5NrkaOuNEds6cr8ZKoWJL1cldo/dzIxe/LQhDL3QptR259r
	Zki1kHAcYx6OZg0ZhO0PXn33fKvJ3APs0GYbz8CCJ2PJP8KBnoYvDar9I8rGpjAi
	eTpmP8EBoNsA6VHtIUKO6XX195xmL3Yp1Jw==
X-ME-Sender: <xms:YW1IZFzfuiuIzkggTLWC2XPNzxz_yQkrbyAgNVdJKeMyZ5b2CgE3EA>
    <xme:YW1IZFTeA3yofunY5np3vBMqWbEKTE8QWMNaI2y8rEBEQH4JXNzCzWGGAisfwTKOm
    8GG5Hc2wCktnA>
X-ME-Received: <xmr:YW1IZPV5NQzEXfp3qFvqGkKM3jdOs-KBiV_7Tzz3jrJiaQh9GMWetRXKuWNY_n8OcRswcwX3xyMZpYWEYtr4qLQ5Na_en9nUTajZHoEjHtibSmPoXIj1>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedufedgfedvucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:YW1IZHh3Fmc2mdsYD7fBkL2pO4bv-TYZpvAJR49HCEFhcK6UmPZLYg>
    <xmx:YW1IZHByyoK4rSotUOwpK9wd8M8LwKibf6yUu4c3cih97Ccb__EGhA>
    <xmx:YW1IZAL2KbM75f0cWm_UmfLWeM1O9_VeazLLxZuUj0yAegxF8Kkfug>
    <xmx:YW1IZM75xyUv8sGyaia_Lke0LXz1L3El-cStRk3fhlrhs5rIehPoGA>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 2/6] automation: add runtime qemu dependencies to test container
Date: Wed, 26 Apr 2023 02:16:12 +0200
Message-Id: <c2dac6e1feca4410655b36e02354b50cf7fe8ddc.1682468126.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
References: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This is necessary to start HVM guests in subsequent tests.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 automation/tests-artifacts/alpine/3.12.dockerfile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/automation/tests-artifacts/alpine/3.12.dockerfile b/automation/tests-artifacts/alpine/3.12.dockerfile
index b3909996b47b..073f16a0d70a 100644
--- a/automation/tests-artifacts/alpine/3.12.dockerfile
+++ b/automation/tests-artifacts/alpine/3.12.dockerfile
@@ -13,6 +13,7 @@ RUN \
   \
   # xen runtime deps
   apk add musl && \
+  apk add libgcc && \
   apk add openrc && \
   apk add busybox && \
   apk add sudo && \
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 00:16:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 00:16:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526381.818086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSps-0000Ct-8x; Wed, 26 Apr 2023 00:16:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526381.818086; Wed, 26 Apr 2023 00:16:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSps-0000BG-5J; Wed, 26 Apr 2023 00:16:40 +0000
Received: by outflank-mailman (input) for mailman id 526381;
 Wed, 26 Apr 2023 00:16:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0jx=AR=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1prSpq-0008T0-BH
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 00:16:38 +0000
Received: from wout2-smtp.messagingengine.com (wout2-smtp.messagingengine.com
 [64.147.123.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 989da007-e3c7-11ed-b223-6b7b168915f2;
 Wed, 26 Apr 2023 02:16:35 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.west.internal (Postfix) with ESMTP id 04A1A3200495;
 Tue, 25 Apr 2023 20:16:29 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Tue, 25 Apr 2023 20:16:30 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 25 Apr 2023 20:16:28 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 989da007-e3c7-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:message-id:mime-version:reply-to:sender:subject:subject:to:to;
	 s=fm3; t=1682468189; x=1682554589; bh=eI2Vz4ddo+VsiROfqiq/OB+ci
	7pcGaFB1Xo3BY3CtDk=; b=BOgas+Cy3pojbvOPx0/8fEjmjATyPxeinbAiAVXk1
	3KA7QqfKsQE6CpJW6QtAdmzhPaCOVrF5Fmcvh8LKlonbBUQWVF01EsnUB10HWxXW
	gnNXYjJswHHTEbThzlhmsyLRopoKXfeM/rQUDlTIvD5ylz1xZyz8N2KyjnagdOHW
	4wfp1rVOoF7PrbyPN2cxQARdXJ3lJS4PMB4MMXVKPClKbjsPnyog79yoGDyOh8SS
	7iIkT15xN1hkv7XuCbLddfQCA+2F44/ZOB7GEYE6ryg8014syimMx0QmgR8GNNG7
	DuwBdR+X+RDXgrjz9+ieAh3yJVbb+nLLGj7500PTSSTDA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:message-id:mime-version:reply-to:sender
	:subject:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm3; t=1682468189; x=1682554589; bh=e
	I2Vz4ddo+VsiROfqiq/OB+ci7pcGaFB1Xo3BY3CtDk=; b=ZcLrYhdTTlcR1ASje
	cvIb3MfQuxknXmrqRSGZ/DOEcM2mRQpLIMST1Kz4V+Pfl1tomhd3vApZcnTdhpW0
	i5IB+J8S3eRTzB8jC15t6tHazVmPqs4Xl1zJnDg8tzYKc71HrAD/uBK7oczor8e2
	DKPP0wiBh0h2xneDquWyx/HlfcZ4CECxXBl3R5bhO8gWQUQOQfyQJGeXv+W5y2EJ
	kJokogfc+u5rGX/tYaJn2hYBsovf2BjnBCt0cly7sa9ecSbkjwx5g3UrNcwqg/I6
	DQEsYd9GxrNHTu2GJEBjrIwH99UZG3z1CzEjzJz0EnB+XGfa+d94Uy42BLQ7EGx9
	KDIZA==
X-ME-Sender: <xms:XW1IZNrX-eTUaSl_GvWNHoA3-89p7CBdg4bMwV5A2KjC5Zz2iyVGeg>
    <xme:XW1IZPrfPEekCkYKmzKjNB9kM4n68ZfFFJF5xbF-iL5KCOGxcYPX1FhxFPOxVyRLc
    HvvEaLaM-1igA>
X-ME-Received: <xmr:XW1IZKNR_kMKodQm4uQ0iTZJ_yYQjLr2RCp-12i-6ycG4Lh5OusxqRWOGvve9z16YB9C8c6_z1hndnjHStcz40l9jy6ojkY-j8x8h-SQUzTJk45LLc5Z>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedufedgfeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofggtgfgsehtkeertdertdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeejueef
    hfelieekgeeftdfgieeugefhudetjeethfefveehffejhfeigefgjeekleenucffohhmrg
    hinhepghhithhlrggsrdgtohhmnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghm
    pehmrghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslh
    grsgdrtghomh
X-ME-Proxy: <xmx:XW1IZI6gyFH0-qEZqwga-gsv3DrtEP91NtAvrXuqctuTY-S-6VP-EQ>
    <xmx:XW1IZM6013XtmzgFryUtVnYMBtS7800R7HQyHlruizX4Jpwydy6TQQ>
    <xmx:XW1IZAisF8ytDZnyV3fqOvw1qp8T4K-KAXH-qF8We2pRbsrnQXvAjg>
    <xmx:XW1IZBjgWOr2BVmT2RpRkcVIi_6rlbENlPd-9R9A55ylCYctotMQzg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH v2 0/6] automation: add PCI passthrough tests on x86
Date: Wed, 26 Apr 2023 02:16:10 +0200
Message-Id: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This series adds passthrough tests using the ADL x86 hw gitlab runner. Some of
the patches improve existing hw tests too.

Example passing run:
https://gitlab.com/xen-project/people/marmarek/xen/-/pipelines/848907124

Marek Marczykowski-Górecki (6):
  automation: specify explicit dom0 mem size for ADL tests
  automation: add runtime qemu dependencies to test container
  automation: re-enable building SeaBIOS in Alpine container
  automation: wait for the login prompt as test end marker
  automation: PCI passthrough tests on ADL hw
  automation: include tail of serial log in the gitlab output

 automation/gitlab-ci/test.yaml                    | 20 +++-
 automation/scripts/build                          |  2 +-
 automation/scripts/qubes-x86-64.sh                | 93 +++++++++++++---
 automation/tests-artifacts/alpine/3.12.dockerfile |  1 +-
 4 files changed, 100 insertions(+), 16 deletions(-)

base-commit: ffc3ca75e25024c05bce7afea694a7446e513c03
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 00:16:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 00:16:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526383.818116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSpu-0000z3-5x; Wed, 26 Apr 2023 00:16:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526383.818116; Wed, 26 Apr 2023 00:16:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSpu-0000xw-1a; Wed, 26 Apr 2023 00:16:42 +0000
Received: by outflank-mailman (input) for mailman id 526383;
 Wed, 26 Apr 2023 00:16:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0jx=AR=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1prSps-0008T0-LV
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 00:16:40 +0000
Received: from wout2-smtp.messagingengine.com (wout2-smtp.messagingengine.com
 [64.147.123.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9d048d4c-e3c7-11ed-b223-6b7b168915f2;
 Wed, 26 Apr 2023 02:16:40 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.west.internal (Postfix) with ESMTP id 504E7320046F;
 Tue, 25 Apr 2023 20:16:38 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Tue, 25 Apr 2023 20:16:38 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 25 Apr 2023 20:16:36 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d048d4c-e3c7-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682468197; x=1682554597; bh=Cq
	ssG/KC+YvF49hzLtSeRQGnmHXkLm9FnGPzRc0yuiY=; b=FkqRwE5Bn4jO0B0CS/
	TA+FHAAPJoTdI9VsGJaqkUmHip3J6zjja/1Nphj3Lgj67Cfv1kc1dPb3QlVmvZoI
	5by8SuHpWAk8sGhk09E/d7Bwks8yjaFvXfhaXFkoK6A/9sEIGbgOnNjVjf3RtTKA
	1gMcgauVczqX7Pn43DCF081X3hi/agBhsJwx5mq4VjATPJ7NB0VwT4ZnrrjryXUz
	qAI5IypjsCegE3uqK6ip5W34Jo/RQ/s4wJEZSZi1cVTiKpHor3QENZ/z9xOT83h2
	8QHEDmBZn9do+Fj48WK8i0pPsf64cxwNsTXS1gbNHIpsYTltpuhXYqZ2xJzsypv7
	d/yw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682468197; x=1682554597; bh=CqssG/KC+YvF49hzLtSeRQGnmHXkLm9FnGP
	zRc0yuiY=; b=Wbl0Dytlkd0lKjIbe6t9AchnDCRQkS7/w/d126/UMletBGaSAzt
	P9YBZwNZ42PsVn95SD1VGUvIL66ba3WF3OC7wJh97VjuLaa/RJ4qyAQ4+BWzYsU3
	DhSsfRjwCWUShC87Nv+JpoidaJsT/FUaLS+ZA5ddsDTMiq4o9vAxrQY4WDDOEjWK
	FqmGcNHvDJIx1OpOGVdDcyNd5dFNLcXJd1zj2hzd17m73XHk0n64VVVf3vZFL4me
	GClUDumJE3p8wmRaq6hnpe70ExpOjfftXII0K3x9uLqypKhZsY5UGO3zcWF3puyG
	Cm5xmTiA8YXeBkSVq4rGHgk5TcemmZJhBXg==
X-ME-Sender: <xms:ZW1IZOwzQFdqhw1r8sM16xouaVlzQXTqF3O3zzNx8lbOU1EpgG9eOg>
    <xme:ZW1IZKQ_gqIl2nsxMicAZMr7rjOa9xiBmxbvF5sLhB18vGSsnJamDGAeiCCYkykyK
    ntVZvJRM78ZmA>
X-ME-Received: <xmr:ZW1IZAXJMhGxoRw8NSEskFcL8ZeVPN909dTVoWILRRt0acGLRfg8gEItrjTWUp26EePhdOW6eMmoBNDEXXGHMLfwWe-d-YI9Mr2C6cnYJRJCv9s68DzB>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedufedgfeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedunecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:ZW1IZEiBud7mqi_Xjsebc-Q6F9QW6k-JZ6NRnlyfB-gmMXU69-o_fQ>
    <xmx:ZW1IZACAxlkZeRpSheHbleB4RGCSwTseCCk2xuLby3SQOexjYIKjJQ>
    <xmx:ZW1IZFIcaxGYJ5r7rwVchsvQ1ohl8mGr9U_vRC4OLHokmemkLeQGmw>
    <xmx:ZW1IZJ5t9G-M6dk5HNlVstpn6mVWEWr10NniYiI0qIUGAioE_Bndaw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 4/6] automation: wait for the login prompt as test end marker
Date: Wed, 26 Apr 2023 02:16:14 +0200
Message-Id: <7a0e3b0f6373ce9ad0bf66ddb1535ca9c4fed0fc.1682468126.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
References: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The login prompt is printed after all the startup (test) scripts have run,
so wait for it instead of the "passed" marker, and only then check whether
the test passed. Before this patch there was a race: the "passed" marker
could already be printed, but the final check would fail because the login
prompt wasn't there yet.

Also, modify etc/issue in the domU rootfs to avoid confusing domU's banner
with dom0's. Use the dom0 one as the test end marker.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
changes in v2:
 - differentiate dom0 and domU welcome message
---
 automation/scripts/qubes-x86-64.sh | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index 916dbaae59c3..6c0309704661 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -66,6 +66,7 @@ ${domU_check}
 /bin/sh" > etc/local.d/xen.start
 chmod +x etc/local.d/xen.start
 echo "rc_verbose=yes" >> etc/rc.conf
+sed -i -e 's/^Welcome/domU \0/' etc/issue
 find . | fakeroot -i ../fakeroot-save cpio -H newc -o | gzip > ../binaries/domU-rootfs.cpio.gz
 cd ..
 rm -rf rootfs
@@ -159,7 +160,7 @@ if [ -n "$wait_and_wakeup" ]; then
     ssh $CONTROLLER wake
 fi
 
-until grep "$passed" smoke.serial || [ $timeout -le 0 ]; do
+until grep "^Welcome to Alpine Linux" smoke.serial || [ $timeout -le 0 ]; do
     sleep 1;
     : $((--timeout))
 done
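The new ordering can be sketched as a runnable shell fragment. The marker
text and the log contents below are stand-ins, not the real CI values: the
point is only that the script now waits for dom0's login banner (which the
init system prints after all startup scripts) and checks the "passed"
marker afterwards, so the marker can no longer win the race against the
final check.

```shell
# Stand-in marker and log; in CI, smoke.serial is the captured serial log.
passed="SMOKE TEST PASSED"
printf '%s\nWelcome to Alpine Linux 3.12\n' "$passed" > smoke.serial

timeout=5
# Wait for the dom0 login banner, the last thing printed at boot.
until grep -q "^Welcome to Alpine Linux" smoke.serial || [ "$timeout" -le 0 ]; do
    sleep 1
    timeout=$((timeout - 1))
done

# Only now is it safe to look for the pass/fail marker.
if grep -q "$passed" smoke.serial; then
    echo "test passed"
else
    echo "test failed"
fi
```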
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 00:16:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 00:16:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526385.818136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSpz-0001ce-Nr; Wed, 26 Apr 2023 00:16:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526385.818136; Wed, 26 Apr 2023 00:16:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSpz-0001cR-Jz; Wed, 26 Apr 2023 00:16:47 +0000
Received: by outflank-mailman (input) for mailman id 526385;
 Wed, 26 Apr 2023 00:16:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0jx=AR=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1prSpy-0008Sz-AZ
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 00:16:46 +0000
Received: from wout2-smtp.messagingengine.com (wout2-smtp.messagingengine.com
 [64.147.123.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9fbf73ca-e3c7-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 02:16:44 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.west.internal (Postfix) with ESMTP id E2B413200488;
 Tue, 25 Apr 2023 20:16:42 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Tue, 25 Apr 2023 20:16:43 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 25 Apr 2023 20:16:41 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9fbf73ca-e3c7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682468202; x=1682554602; bh=QO
	VEexrNPOXNZ0lPTo4d9Cd3KrC07w6b0cxFok+5DDs=; b=UQt9JUPcYG8T8XYXyn
	cWPw78l2D/P82ymMRLbKF7UnZ8kYD2F+OtczCsRv0vz2YNLhGUj33oXXNbsFczIZ
	ihvsSdlJ9wBkR9IIYwEDYyYJOulpN66KsjzHfmi/fcp/9jeLLH6XgXKFC4Lr1HYW
	QpSL4y2FvnHrKgMYTGQWq4ZDMnmFBjvhBGS1CqEC6QY8QVxrbEq3SQ4cuq5/KBEA
	yqHbb8yZZGntNApnRvn1kTNW0msdRxtfmQOcsA6y2U4/ng4shkq4vM0q5H8a4DIR
	AFyAPXsQes2nk0QO9ZLho40bUbf8YdpVbfc2TgjjpPspj9vBbJi8Al8FWOo57lhJ
	9LtQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682468202; x=1682554602; bh=QOVEexrNPOXNZ0lPTo4d9Cd3KrC07w6b0cx
	Fok+5DDs=; b=XIBbv6pPX5qPNi03kvVdxAHrbWQDBaZJkTexy8KWdyggwgIKZ6r
	/YH4JLgXDJICHSD1wZssFv0JvOSBHjRrWGybSlqN9qLfKSy2JB8gU04wHsqscrvm
	KMIVTWaQ5NqkFjpSp1IYJxcWKhbNdnMo4toaot0bYwIohyLvGVOvBEiCVMffdRxC
	CggxCwqOf/7PPQ88h8wSfj2lcMwuy397TrYhqq0t90yXnXRQ+puqY0PLbhBSunQC
	nQP0AGURXyrEeZHj+h/UxBXwryNsI1oESC5rdGdfP4N+HruuhSg7Zd/uRp0f+PCg
	J6D45qmGRSMvO/iUuAOimeuDE+7kp8hbrMg==
X-ME-Sender: <xms:am1IZMCJDjEp9pY9afx626boJYyYMFkmjtCvXr8hNlfPQrumAD8Dwg>
    <xme:am1IZOhC6bfJUw0ScIRwgE3s7x0maD1_jUIupUhiGLYK2AauX9Vn4V4xgjLO5v5hr
    TiXz_Bu6UqIuA>
X-ME-Received: <xmr:am1IZPlErrV3gVMzDaV3UUK16dV9zrmQExwKAJX0j2AvMk_wfBZW9W4whVoaPJwk3Al0Z6_iU0GJE-ufmmv8ARWX7HV16a6mrx5y78gBGagtGvKSCCXB>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedufedgfeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:am1IZCyAmrWNC5mqrDEm7_Cuw2LDTLBKRN4-X4Nkx9zsTIfyZdEU1w>
    <xmx:am1IZBTfw656o94sxfDgn9BrW70VM2LrVV0NGhG38giOyxGxvynkXw>
    <xmx:am1IZNaPKONjuWDPB8jVSlebX2vnJCus-_EMSyM4As97RPEWZmcamg>
    <xmx:am1IZALlOFQAbRFtDVqKND0JAbB9uL4NXaLWky-artFJh6DYiq4XFA>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 6/6] automation: include tail of serial log in the gitlab output
Date: Wed, 26 Apr 2023 02:16:16 +0200
Message-Id: <8e1799a0e50b5a4b693f92ba26b6fef6154aeb79.1682468126.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
References: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Make it a bit easier to see what has failed.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
Changes in v2:
 - print it also in case of a timeout
---
 automation/scripts/qubes-x86-64.sh | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index a01c571860ee..056faf9e6de8 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -225,6 +225,9 @@ until grep "^Welcome to Alpine Linux" smoke.serial || [ $timeout -le 0 ]; do
     sleep 1;
     : $((--timeout))
 done
+
+tail -n 100 smoke.serial
+
 if [ $timeout -le 0 ]; then
     echo "ERROR: test timeout, aborting"
     exit 1
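The effect of placing the tail before the timeout check can be shown with a
stand-in log (file contents are hypothetical): the last lines of the serial
log reach the gitlab job output whether the run passed, failed, or timed
out, so a timeout is as diagnosable as an ordinary failure.

```shell
# Stand-in serial log; in CI this is the real captured smoke.serial.
printf 'line %s\n' 1 2 3 4 5 > smoke.serial

# Unconditionally surface the end of the log before deciding pass/fail.
tail -n 3 smoke.serial
```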
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 00:16:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 00:16:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526379.818076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSpr-0008TS-N9; Wed, 26 Apr 2023 00:16:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526379.818076; Wed, 26 Apr 2023 00:16:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSpr-0008TL-KI; Wed, 26 Apr 2023 00:16:39 +0000
Received: by outflank-mailman (input) for mailman id 526379;
 Wed, 26 Apr 2023 00:16:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0jx=AR=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1prSpp-0008Sz-BG
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 00:16:37 +0000
Received: from wout2-smtp.messagingengine.com (wout2-smtp.messagingengine.com
 [64.147.123.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 99451605-e3c7-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 02:16:34 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.west.internal (Postfix) with ESMTP id 0DF89320093B;
 Tue, 25 Apr 2023 20:16:31 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Tue, 25 Apr 2023 20:16:32 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 25 Apr 2023 20:16:30 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99451605-e3c7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682468191; x=1682554591; bh=jf
	46EdUWhIamXc+NDEjQxcZWoeWhu8f3Oja5zY0DWbU=; b=VBh9tHhscu0+3/4fjX
	GOIeHRiwDv2DI7A+Ci0rxFlZm1mhH0lRp9PxtOjtMZhxDg5sQIa2mZI7mQ1Q06eC
	MUokEOVVV/ffLrFaytUysuiq4xqtaA1FIIsT1bpNZFuuSsodsfrCZZzVnWBUb9Gj
	8FsOpFTpqZK0ugYltqAIVNV6XvATjZWl7bE7Wn/IvC8F/ViEhxukjRJBkDMrCzDI
	R1//Jsdgb7SP0wLBRaORlfv7grGJSuaoJiW8QmsTPfGZrbiYhObBouaoxLMeNq56
	NLEmqQ1Et+/cboncFyWOaXi85kmxpaWHhzrbr0du+gLXPNYJL/SgzDkGdENs83Yn
	hh2g==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682468191; x=1682554591; bh=jf46EdUWhIamXc+NDEjQxcZWoeWhu8f3Oja
	5zY0DWbU=; b=E7NHURS16ZEhpGlTujlvZJlQ3dxWyvqU/T9B064Eeq18G06cd+T
	UAXdu0nwTFDFuNa9hOWRpIeNRMz7VNqrwfbGB8KvP2HGo8agsyC2ggOqHvdK+jL7
	NUuWqaTPKSVeh9N2J/CKOS/p332qw2BwyL2TmLfGZZDG5V3Cyf5C0hJGbtAjjnHs
	KWemJM+4J6HjNfap+qVKrkAKirDdJ7RXOhlhaDpYn3BrXcLhbjzmRylgvEDM6bow
	vyhfrpGc0dft6t/4/7YT6hOgiD6CaA1x+8P9lSjuir1um3NltIO4w7J15geK3P7h
	JOVBeEuUmm/Bs/hCXMVZdbPXREjM761E3Cw==
X-ME-Sender: <xms:X21IZDFO56Elmh8pUpnRtP37faBfDmfvh5WyPD3tnXzpIoXAXmEZ3w>
    <xme:X21IZAVt3pqtEUejRglBm6iHh9YwDDaa-Bzn32pyzUxcIywFOIagsXXzS4oQLqU7J
    WsHTTsQbB1qNA>
X-ME-Received: <xmr:X21IZFJBo5R5YZCmQCqMBG_WeoT3Pi4xZKRZd_dz63O1Y-JE49ZBR0K0qRp-6XoTESX-AUo_iZWsH1aEiw0kGq-UeShcNGSzLSJxe4QlnoAz3ca5WO1G>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedufedgfeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:X21IZBHorY4FJrrlPc3vxwUEXwwhd3uKDEF1BhAGjf4U3YIzKLuGjg>
    <xmx:X21IZJXZeuiKva2wH6OzXwvjOnbbLIUUWtJbFKLq66deu3srmvXgWw>
    <xmx:X21IZMMeDaiXmkbRRHg4W3xEDTp9VnWeMTAXDiLievZIrujB20Nb2w>
    <xmx:X21IZIeEWZwQDKPncoiFG5BNAvbIQk3TqlIA2hAtXvLYMFd93spW-A>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 1/6] automation: specify explicit dom0 mem size for ADL tests
Date: Wed, 26 Apr 2023 02:16:11 +0200
Message-Id: <9e184123dab430fdf9cb6edf818805b15a4afbc8.1682468126.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
References: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Avoid memory fragmentation that leads to:
(XEN) common/memory.c:277:d0v10 Could not allocate order=9 extent: id=1 memflags=0xc0 (0 of 4)

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 automation/scripts/qubes-x86-64.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index 2d4cf2e2268c..916dbaae59c3 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -123,7 +123,7 @@ TFTP=/scratch/gitlab-runner/tftp
 CONTROLLER=control@thor.testnet
 
 echo '
-multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all
+multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all dom0_mem=4G
 module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0
 module2 (http)/gitlab-ci/initrd-dom0
 ' > $TFTP/grub.cfg
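For scale, the failing allocation in the quoted log line is a 2 MiB
superpage: order=9 means 2^9 contiguous 4 KiB pages. A fixed dom0_mem keeps
dom0 from claiming (and later ballooning out of) most of host memory, which
is a common source of the fragmentation that makes such contiguous
allocations fail; the arithmetic is just:

```shell
# order=9 extent = 2^9 pages * 4 KiB/page = 2 MiB of contiguous memory.
echo "$(( (1 << 9) * 4096 / 1024 / 1024 )) MiB"
```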
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 00:16:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 00:16:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526384.818126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSpw-0001IK-G5; Wed, 26 Apr 2023 00:16:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526384.818126; Wed, 26 Apr 2023 00:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prSpw-0001IB-Bx; Wed, 26 Apr 2023 00:16:44 +0000
Received: by outflank-mailman (input) for mailman id 526384;
 Wed, 26 Apr 2023 00:16:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0jx=AR=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1prSpu-0008T0-TM
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 00:16:42 +0000
Received: from wout2-smtp.messagingengine.com (wout2-smtp.messagingengine.com
 [64.147.123.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9e42371a-e3c7-11ed-b223-6b7b168915f2;
 Wed, 26 Apr 2023 02:16:42 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.west.internal (Postfix) with ESMTP id 639493200488;
 Tue, 25 Apr 2023 20:16:40 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Tue, 25 Apr 2023 20:16:40 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 25 Apr 2023 20:16:38 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e42371a-e3c7-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682468200; x=1682554600; bh=F/
	nKkof+j5Rza7lQDx9tTTYfM+sXNOVmeqYfVz+E/ZI=; b=KP4rI7h1Vw1ZnW0JBo
	P1QgOP5jM3k89/pD6nX/gNVWSdCCfSd3F3KCjb0maoW3lT+ac/m8c95IPGHrY7Ec
	yVvO8OkcjBony1Ts2HVZRl8UV61pb8i6jjHFcxjhYGxxulrhUFIO6KR7DuezBrWw
	rd/voIAgCvFf5x68Z/UKHGGGH2E6u+9tr4oSsZKq+j8C67xe/lVUA4/uFEBPhpIb
	PtpoqTnPKzJ5++Zxb2vLDoBLZH/SHAW8bDTV54Fnqz1Qc75qzG5QCsP6VQwbqBvT
	+0huT6Y9iNRsdW116QJTl7pWw+YuelRIUabM7CI+kho57gF77ygdCvQvadlR/dXh
	ek4g==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682468200; x=1682554600; bh=F/nKkof+j5Rza7lQDx9tTTYfM+sXNOVmeqY
	fVz+E/ZI=; b=ZiO7U0LqpfqK946evgxizbYuvTouHTt4rR/TQfOWuOYRcdE1Pn2
	cAowElQZaKd4akRzcZ3kPPZFWmyOO7M1aJHp9VgedFotIoh84dH/N3HTLRUr76uB
	Dd5BrEkXk/WSet2xMoUog45evoHMGwhJAbVNrEpsgSPzNytAUaC28qD2xdg+unL0
	rAyn3oNxjlWa5DuIb1L7zCoMhDf5oNYcPFInBhDpeKcHdlLVJHnrLDGK8Pt8Zr4y
	oE3tFQ4V1qn00nngr3fiEsqQzKwUd7eurVs12c0CxDR8FKacjZMTKuOUHnEgb41l
	ETWR33pBVUvAnhrsv7PioZBJSMi31nEbqFA==
X-ME-Sender: <xms:Z21IZC4K6ncVgAJfzUfYC-b1R96qu34PHJ9IblT6qL0Nj-JnLyzw6Q>
    <xme:Z21IZL7FKeKHs9wzBvaQ6P-TG_hkLbmuLjUQkY07XVOGlmgD1UQdlzhwoV9_QfgAA
    E3QgIDzWr89UA>
X-ME-Received: <xmr:Z21IZBc--tnvIM9QRPBSVRJ9X0LwDS2vtlDvAODbnFtU2AfnRQmvI3ga2A1qHeNS_Ui4nRrVsqU9ap5l6lEaqKBXQWK-J6IKC92ok9RC7uYTck7ZhSqf>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedufedgfeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedunecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:Z21IZPKFbBd5Hjh1-lenOoTy3qbV9meBtPkxzvzw-9qooXKx4tczlg>
    <xmx:Z21IZGI46AwgT2gxi3-TEU3wln0rts2k1YfKrf-ad-C0wHI7dcqtgg>
    <xmx:Z21IZAyqxpjpaadPYsFfc7nDlc9_v-cha9hvNWA8sCPLqF4f1twn5Q>
    <xmx:aG1IZIidfzHzSJ9vIW7SvlSx_-hrFoiON2douDVn7TQ_yZuTg-DsgA>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 5/6] automation: PCI passthrough tests on ADL hw
Date: Wed, 26 Apr 2023 02:16:15 +0200
Message-Id: <1948952135feb360797da0bb0136e7d42e188e72.1682468126.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
References: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add a simple PCI passthrough test for both PV and HVM domUs. It passes
through a network adapter (the only one in the system), gets an IP via
DHCP (first basic test) and then pings the gateway (second basic test).
Finally, if the device is supposed to use MSI or MSI-X (as set in the
PCIDEV_INTR test variable), check via /proc/interrupts that it is
actually in use.

On the current runner, the device in question is this:
03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3] (rev 03)
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:7d25]
	Flags: bus master, fast devsel, latency 0, IRQ 18
	Memory at 50400000 (32-bit, non-prefetchable) [size=1M]
	Memory at 50500000 (32-bit, non-prefetchable) [size=16K]
	Capabilities: [40] Power Management version 3
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
	Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
	Capabilities: [a0] Express Endpoint, MSI 00
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [140] Device Serial Number ...
	Capabilities: [1c0] Latency Tolerance Reporting
	Capabilities: [1f0] Precision Time Measurement
	Capabilities: [1e0] L1 PM Substates
	Kernel driver in use: igc
	Kernel modules: igc

With the current Xen version, it uses MSI-X under PV and MSI under HVM.
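The MSI vs MSI-X distinction in /proc/interrupts is easy to get wrong, since
"PCI-MSI-X" contains "PCI-MSI" as a prefix and "-msi-x" contains "-msi". The
two grep patterns the test uses can be exercised in isolation against
hypothetical /proc/interrupts lines (the sample lines below are made up; the
real layout varies with kernel version and domain type):

```shell
# Hypothetical /proc/interrupts excerpts: one MSI-X line, one plain MSI line.
cat > interrupts.sample <<'EOF'
 72:          1   xen-pirq    -msi-x              eth0-rx-0
 73:        105   PCI-MSI 524288-edge             eth0
EOF
# MSI-X check: match '-msi-x' or 'PCI-MSI-X' on a line mentioning eth0.
grep -- '\(-msi-x\|PCI-MSI-X\).*eth0' interrupts.sample
# MSI check: match '-msi ' or 'PCI-MSI' followed by a space or '-<non-X>',
# deliberately not matching the MSI-X spellings handled above.
grep -- '\(-msi \|PCI-MSI\( \|-[^X]\)\).*eth0' interrupts.sample
```

Each grep matches exactly one of the two sample lines, which is the property
the test relies on.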

This patch also moves the domU config into a variable, to make it
configurable on a per-test basis, and adds a few comments for visual
separation of the tests.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
changes in v2:
 - drop leftover debug shell
 - fix regex -msi to not match -msi-x
 - fix waiting for domU startup
---
 automation/gitlab-ci/test.yaml     | 20 +++++++-
 automation/scripts/qubes-x86-64.sh | 85 ++++++++++++++++++++++++++-----
 2 files changed, 93 insertions(+), 12 deletions(-)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index d68c584269dd..1ce083e6cd88 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -94,6 +94,8 @@
     # the test controller runs on RPi4
     CONTAINER: alpine:3.12-arm64v8
     LOGFILE: smoke-test.log
+    PCIDEV: "03:00.0"
+    PCIDEV_INTR: "MSI-X"
   artifacts:
     paths:
       - smoke.serial
@@ -147,6 +149,24 @@ adl-suspend-x86-64-gcc-debug:
     - *x86-64-test-needs
     - alpine-3.12-gcc-debug
 
+adl-pci-pv-x86-64-gcc-debug:
+  extends: .adl-x86-64
+  script:
+    - ./automation/scripts/qubes-x86-64.sh pci-pv 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.12-gcc-debug
+
+adl-pci-hvm-x86-64-gcc-debug:
+  extends: .adl-x86-64
+  variables:
+    PCIDEV_INTR: "MSI"
+  script:
+    - ./automation/scripts/qubes-x86-64.sh pci-hvm 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.12-gcc-debug
+
 qemu-smoke-dom0-arm64-gcc:
   extends: .qemu-arm64
   script:
diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index 6c0309704661..a01c571860ee 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -4,8 +4,21 @@ set -ex
 
 test_variant=$1
 
+### defaults
 wait_and_wakeup=
 timeout=120
+domU_config='
+type = "pvh"
+name = "domU"
+kernel = "/boot/vmlinuz"
+ramdisk = "/boot/initrd-domU"
+extra = "root=/dev/ram0 console=hvc0"
+memory = 512
+vif = [ "bridge=xenbr0", ]
+disk = [ ]
+'
+
+### test: smoke test
 if [ -z "${test_variant}" ]; then
     passed="ping test passed"
     domU_check="
@@ -23,6 +36,8 @@ done
 tail -n 100 /var/log/xen/console/guest-domU.log
 echo \"${passed}\"
 "
+
+### test: S3
 elif [ "${test_variant}" = "s3" ]; then
     passed="suspend test passed"
     wait_and_wakeup="started, suspending"
@@ -48,6 +63,62 @@ xl dmesg | grep 'Finishing wakeup from ACPI S3 state' || exit 1
 ping -c 10 192.168.0.2 || exit 1
 echo \"${passed}\"
 "
+
+### test: pci-pv, pci-hvm
+elif [ "${test_variant}" = "pci-pv" ] || [ "${test_variant}" = "pci-hvm" ]; then
+
+    if [ -z "$PCIDEV" ]; then
+        echo "Please set 'PCIDEV' variable with BDF of test network adapter" >&2
+        echo "Optionally set also 'PCIDEV_INTR' to 'MSI' or 'MSI-X'" >&2
+        exit 1
+    fi
+
+    passed="pci test passed"
+
+    domU_config='
+type = "'${test_variant#pci-}'"
+name = "domU"
+kernel = "/boot/vmlinuz"
+ramdisk = "/boot/initrd-domU"
+extra = "root=/dev/ram0 console=hvc0"
+memory = 512
+vif = [ ]
+disk = [ ]
+pci = [ "'$PCIDEV',seize=1" ]
+on_reboot = "destroy"
+'
+
+    domU_check="
+set -x -e
+ip link set eth0 up
+timeout 30s udhcpc -i eth0
+pingip=\$(ip -o -4 r show default|cut -f 3 -d ' ')
+ping -c 10 \"\$pingip\"
+echo domU started
+cat /proc/interrupts
+"
+    if [ "$PCIDEV_INTR" = "MSI-X" ]; then
+        domU_check="$domU_check
+grep -- '\\(-msi-x\\|PCI-MSI-X\\).*eth0' /proc/interrupts
+"
+    elif [ "$PCIDEV_INTR" = "MSI" ]; then
+        # depending on the kernel version and domain type, the MSI can be
+        # marked as '-msi', 'PCI-MSI', or 'PCI-MSI-<SBDF>'; be careful to not match
+        # -msi-x nor PCI-MSI-X
+        domU_check="$domU_check
+grep -- '\\(-msi \\|PCI-MSI\\( \\|-[^X]\\)\\).*eth0' /proc/interrupts
+"
+    fi
+    domU_check="$domU_check
+echo \"${passed}\"
+"
+
+    dom0_check="
+until grep -q \"^domU Welcome to Alpine Linux\" /var/log/xen/console/guest-domU.log; do
+    sleep 1
+done
+tail -n 100 /var/log/xen/console/guest-domU.log
+"
 fi
 
 # DomU
@@ -63,7 +134,7 @@ rm var/run
 echo "#!/bin/sh
 
 ${domU_check}
-/bin/sh" > etc/local.d/xen.start
+" > etc/local.d/xen.start
 chmod +x etc/local.d/xen.start
 echo "rc_verbose=yes" >> etc/rc.conf
 sed -i -e 's/^Welcome/domU \0/' etc/issue
@@ -98,17 +169,7 @@ xl create /etc/xen/domU.cfg
 ${dom0_check}
 " > etc/local.d/xen.start
 chmod +x etc/local.d/xen.start
-# just PVH for now
-echo '
-type = "pvh"
-name = "domU"
-kernel = "/boot/vmlinuz"
-ramdisk = "/boot/initrd-domU"
-extra = "root=/dev/ram0 console=hvc0"
-memory = 512
-vif = [ "bridge=xenbr0", ]
-disk = [ ]
-' > etc/xen/domU.cfg
+echo "$domU_config" > etc/xen/domU.cfg
 
 echo "rc_verbose=yes" >> etc/rc.conf
 echo "XENCONSOLED_TRACE=all" >> etc/default/xencommons
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 01:23:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 01:23:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526418.818146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prTs7-0000gC-T7; Wed, 26 Apr 2023 01:23:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526418.818146; Wed, 26 Apr 2023 01:23:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prTs7-0000g5-Pv; Wed, 26 Apr 2023 01:23:03 +0000
Received: by outflank-mailman (input) for mailman id 526418;
 Wed, 26 Apr 2023 01:23:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9cwp=AR=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1prTs6-0000fz-Sl
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 01:23:02 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e1e6cc18-e3d0-11ed-b223-6b7b168915f2;
 Wed, 26 Apr 2023 03:23:01 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 79D0861776;
 Wed, 26 Apr 2023 01:22:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4075AC433D2;
 Wed, 26 Apr 2023 01:22:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1e6cc18-e3d0-11ed-b223-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682472178;
	bh=e1FgjD0RZ3oYOY6urS6d4ljJIy0WoJwgllrFHvHLzsU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=PY+n0a+VgItot2X7FzNhJ/EVbll4QGTI62hwOdO8yuo+8Rofs1n4BEIIx4RCFVECm
	 V0UscA2pNy8s4mMIVMuLhBXmiFAefPTPMN00PgyD/u5dheSCRV+HBUy5xJzXdzt4wm
	 j+QkZrXmz0ESQNRj+F52ypTLhN7V5TMqH7BOtZq8Xerrxl/I76ZSpZuKizUPcsBQ4y
	 8ZRGuYk93FyMDcAlbDRkqKubbfIpA6pkGctdEWiwslO48C8KFaglLp6USzFG0pnC9G
	 a5qyf9uSGCIWhzlclQ/kinxyMbM1kr4lgMLDZl0KJakEctvDWXPocClwvArTtWZ6Lw
	 YNbb1GV605Vfg==
Date: Tue, 25 Apr 2023 18:22:56 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH v2 6/6] automation: include tail of serial log in the
 gitlab output
In-Reply-To: <8e1799a0e50b5a4b693f92ba26b6fef6154aeb79.1682468126.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2304251821490.3419@ubuntu-linux-20-04-desktop>
References: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com> <8e1799a0e50b5a4b693f92ba26b6fef6154aeb79.1682468126.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1873717104-1682472179=:3419"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1873717104-1682472179=:3419
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 26 Apr 2023, Marek Marczykowski-Górecki wrote:
> Make it a bit easier to see what has failed.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

I am not too happy about this; I would rather make smoke.serial easier
to access. But if you still want this, I won't block it.

Acked-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
> Changes in v2:
>  - print it also in case of a timeout
> ---
>  automation/scripts/qubes-x86-64.sh | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
> index a01c571860ee..056faf9e6de8 100755
> --- a/automation/scripts/qubes-x86-64.sh
> +++ b/automation/scripts/qubes-x86-64.sh
> @@ -225,6 +225,9 @@ until grep "^Welcome to Alpine Linux" smoke.serial || [ $timeout -le 0 ]; do
>      sleep 1;
>      : $((--timeout))
>  done
> +
> +tail -n 100 smoke.serial
> +
>  if [ $timeout -le 0 ]; then
>      echo "ERROR: test timeout, aborting"
>      exit 1
> -- 
> git-series 0.9.1
> 
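The quoted change can be sketched on its own: printing the log tail before
the timeout check means the excerpt shows up in the CI output even when the
wait loop expired. A minimal self-contained sketch (file name and message as
in the script; the pre-populated log stands in for the real serial console):

```shell
log=smoke.serial
printf 'boot line 1\nboot line 2\n' > "$log"   # stand-in for the serial log
timeout=0                                       # simulate an expired timeout
# Print the tail unconditionally, before the timeout check, so the log
# excerpt is visible even for timed-out runs.
tail -n 100 "$log"
if [ "$timeout" -le 0 ]; then
    echo "ERROR: test timeout, aborting"
fi
```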
--8323329-1873717104-1682472179=:3419--


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 01:25:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 01:25:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526422.818155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prTuY-0001G7-9s; Wed, 26 Apr 2023 01:25:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526422.818155; Wed, 26 Apr 2023 01:25:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prTuY-0001G0-7N; Wed, 26 Apr 2023 01:25:34 +0000
Received: by outflank-mailman (input) for mailman id 526422;
 Wed, 26 Apr 2023 01:25:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9cwp=AR=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1prTuX-0001Fu-Fr
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 01:25:33 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3b891c20-e3d1-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 03:25:31 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 2503761776;
 Wed, 26 Apr 2023 01:25:30 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E822CC433D2;
 Wed, 26 Apr 2023 01:25:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b891c20-e3d1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682472329;
	bh=/VXR3+ubC/WgQkXMc/DNqf78a9huRi+gfho5wyE5DCo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bsNYwWHCE+3I58rI+J1ndqKbw4qHtPDPU/Jb/ZPSjfU+2i0yjdZ9mbmWDrF7nVIUm
	 ckyleNZBoTArAA1vNqO9rAs1jfHinuivlpqK4vwryI3HAyHYCZzikyhr5fK79CDkcn
	 ox8pTwawLXQXdTHwcbzxDTd+kftt9hQTQvIK2PvivD9l9fc+ygf61XJMa9IB5ZZhha
	 uvM8llHp2Z44LGHrGvCQosy3dwMYqkzUv45IOu/1N2N91PB5VwGeYC+qpIe5Gt7Jka
	 3mZ/9amAbJpo+ZVTcIWvb5HrqN1joty5YMNGFOQFw9nx8sXKNpk0sTKwo9nA98SwOb
	 PJvUlWUHWGzlA==
Date: Tue, 25 Apr 2023 18:25:27 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH v2 4/6] automation: wait for the login prompt as test
 end marker
In-Reply-To: <7a0e3b0f6373ce9ad0bf66ddb1535ca9c4fed0fc.1682468126.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2304251825050.3419@ubuntu-linux-20-04-desktop>
References: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com> <7a0e3b0f6373ce9ad0bf66ddb1535ca9c4fed0fc.1682468126.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-2132388244-1682472329=:3419"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-2132388244-1682472329=:3419
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 26 Apr 2023, Marek Marczykowski-Górecki wrote:
> The login prompt is printed after all the startup (test) scripts, so wait
> for that instead of the "passed" marker, and only then check whether the
> test passed. Before this patch there was a race: the "passed" marker
> could already be printed, but the final check would fail because the
> login prompt wasn't there yet.
> 
> Also, modify etc/issue in the domU rootfs so the domU welcome message
> cannot be confused with dom0's. Use the dom0 one as the test end marker.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> changes in v2:
>  - differentiate dom0 and domU welcome message
> ---
>  automation/scripts/qubes-x86-64.sh | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
> index 916dbaae59c3..6c0309704661 100755
> --- a/automation/scripts/qubes-x86-64.sh
> +++ b/automation/scripts/qubes-x86-64.sh
> @@ -66,6 +66,7 @@ ${domU_check}
>  /bin/sh" > etc/local.d/xen.start
>  chmod +x etc/local.d/xen.start
>  echo "rc_verbose=yes" >> etc/rc.conf
> +sed -i -e 's/^Welcome/domU \0/' etc/issue
>  find . | fakeroot -i ../fakeroot-save cpio -H newc -o | gzip > ../binaries/domU-rootfs.cpio.gz
>  cd ..
>  rm -rf rootfs
> @@ -159,7 +160,7 @@ if [ -n "$wait_and_wakeup" ]; then
>      ssh $CONTROLLER wake
>  fi
>  
> -until grep "$passed" smoke.serial || [ $timeout -le 0 ]; do
> +until grep "^Welcome to Alpine Linux" smoke.serial || [ $timeout -le 0 ]; do
>      sleep 1;
>      : $((--timeout))
>  done
> -- 
> git-series 0.9.1
> 
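The ordering the quoted patch enforces can be sketched with a tiny
self-contained loop (markers and file name as in the script; the
pre-populated file stands in for the serial console, where "passed" is
printed before the login prompt):

```shell
log=smoke.serial
# Hypothetical log contents: the "passed" marker appears before the login
# prompt, which is printed only after all startup scripts have finished.
printf 'ping test passed\nWelcome to Alpine Linux\n' > "$log"
timeout=10
# Wait for the end-of-boot marker rather than the "passed" marker itself.
until grep -q "^Welcome to Alpine Linux" "$log" || [ "$timeout" -le 0 ]; do
    sleep 1
    : $((timeout -= 1))
done
# Only now is it race-free to check the result marker.
grep "ping test passed" "$log"
```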
--8323329-2132388244-1682472329=:3419--


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 01:29:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 01:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526426.818166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prTy2-0001r5-PH; Wed, 26 Apr 2023 01:29:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526426.818166; Wed, 26 Apr 2023 01:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prTy2-0001qw-MC; Wed, 26 Apr 2023 01:29:10 +0000
Received: by outflank-mailman (input) for mailman id 526426;
 Wed, 26 Apr 2023 01:29:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prTy0-0001qm-T5; Wed, 26 Apr 2023 01:29:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prTy0-0003g7-Fg; Wed, 26 Apr 2023 01:29:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prTxz-0000k8-Uq; Wed, 26 Apr 2023 01:29:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prTxz-0001Q4-UL; Wed, 26 Apr 2023 01:29:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=51ojedJ/IrA8S6czndB8Ed0SCd5o4mD5+3Se0Z7yw3U=; b=PHF836nGN667zbixVmtVCs2hw9
	TtewWw4gjV8q0KdbXQKNaDsNC6sAp+hEVCh/z0PYLyI9Q2uMXDhLOf5OxEoJdSgFzrPT7o5GzuYIk
	Ce8l4Q9tZExK0bdT8xKrE+eOGqje2gdt3zIBOU2zXkr4fPu/dvkby339wQ5hGkfAT0FA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180420-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180420: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=18a36b4a9b088875486cfe33a2d4a8ae7eb4ab47
X-Osstest-Versions-That:
    xen=f6c3cb21628f7bed73cb992da400f6b36630f290
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 01:29:07 +0000

flight 180420 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180420/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  18a36b4a9b088875486cfe33a2d4a8ae7eb4ab47
baseline version:
 xen                  f6c3cb21628f7bed73cb992da400f6b36630f290

Last test of basis   180411  2023-04-25 14:00:26 Z    0 days
Testing same since   180420  2023-04-25 23:02:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f6c3cb2162..18a36b4a9b  18a36b4a9b088875486cfe33a2d4a8ae7eb4ab47 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 02:23:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 02:23:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526433.818176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prUol-0008Rk-Rl; Wed, 26 Apr 2023 02:23:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526433.818176; Wed, 26 Apr 2023 02:23:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prUol-0008Ra-Ll; Wed, 26 Apr 2023 02:23:39 +0000
Received: by outflank-mailman (input) for mailman id 526433;
 Wed, 26 Apr 2023 02:23:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Lx8T=AR=bytedance.com=xieyongji@srs-se1.protection.inumbo.net>)
 id 1prUoj-0008RS-Uj
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 02:23:38 +0000
Received: from mail-pg1-x533.google.com (mail-pg1-x533.google.com
 [2607:f8b0:4864:20::533])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56f67db3-e3d9-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 04:23:33 +0200 (CEST)
Received: by mail-pg1-x533.google.com with SMTP id
 41be03b00d2f7-52863157da6so1633307a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Apr 2023 19:23:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56f67db3-e3d9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=bytedance.com; s=google; t=1682475812; x=1685067812;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=dY6YFGS5YhBPz119emz84807goMWNpCefgDK41V5Ayw=;
        b=erZ8A7biMxxqB/SHMsCaaHRCTf4sgfmB01IUdLmTOqn3XjaYP8AjgSREJDbT1NsNES
         /bBrBS6cxiRpFRyZUvGFLmr143lJiB2sBSk1Fb2cjgYzqdDA9XKitxFo8Vy+WOD1Gz+U
         YCIg/104TE6Wk/PcJYYXX0pc3D0UHcYMt6LTsdfJGSlgIvqqqHp6rpYXCQCHll/dhoWH
         tZx2JzETNasVfnw4wjoJV0520uuCwBCJalEII7Icjv3uRBPf9Nc/wHCFl56Nl4nOe6CL
         +JoxEMGVgBC0NjcSHXE0+CB+Qog3Yb3HVG/POtvzwL93Z2za9ShPng35vggP6wWuKkGZ
         /seA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682475812; x=1685067812;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=dY6YFGS5YhBPz119emz84807goMWNpCefgDK41V5Ayw=;
        b=M0FMx/0jspoO/QOwJpBIChz3712pJX0azQkln8etaRbaG0desq2JDAXrMlyeDvodWx
         oEgCsyfxgDodKvCqsJ2kiOvq0JNnNstJ3ZjSCqiJBsi5psRXs4oc3svyw2+hP7uL9T+W
         R9otaoSp3KEqjVf5laUrwP3uz8zhwTX+yoMcwKgCK6d/nFrpP0AdTsfhG6DFq/BSwcYU
         ZcHDDlxh/dI0YJawQXOn3jpbxEnwDb7q8gNCVCEXlEO97UaTsnHI/STKKohuuxP54RzV
         nXfAHaHLGVV8xTZYCQfZ54HFFCml1njvYvM/Ii/uVxlEy1DqJoLCm/SvSdF6NYCQ0Xiv
         hinA==
X-Gm-Message-State: AAQBX9cWRa6ZgSuYSV0W5+hQHmoNBkGa4O3VoC+ThTXwBwxq+FKAHK4D
	H0Opy7hlQicx9GBcv+Y1+PsxkjbBs5MU9YziNnjb
X-Google-Smtp-Source: AKy350bUdc4Aelq2+pyhyCkb7VYlUy7PjweYo0DP7me+Y3/2hO8ewi4s6aXdU11wFO/OMLnfiF/4gpBlv6/Ezxkkk4I=
X-Received: by 2002:a17:90b:4acc:b0:24b:8b39:cd7f with SMTP id
 mh12-20020a17090b4acc00b0024b8b39cd7fmr13173940pjb.41.1682475811679; Tue, 25
 Apr 2023 19:23:31 -0700 (PDT)
MIME-Version: 1.0
References: <20230420113732.336620-1-stefanha@redhat.com> <20230420113732.336620-14-stefanha@redhat.com>
 <CACycT3suSR+nYhe4z2zuocYsBBVSDBCE+614zT0jfDZCBRveaA@mail.gmail.com> <20230425164241.GC725672@fedora>
In-Reply-To: <20230425164241.GC725672@fedora>
From: Yongji Xie <xieyongji@bytedance.com>
Date: Wed, 26 Apr 2023 10:23:14 +0800
Message-ID: <CACycT3s+jJ7=6+bsvedoBvmUm9U6pVoJgVKMO882gkQJr5Yj4A@mail.gmail.com>
Subject: Re: [PATCH v3 13/20] block/export: rewrite vduse-blk drain code
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu devel list <qemu-devel@nongnu.org>, Peter Lieven <pl@kamp.de>, 
	=?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@linaro.org>, 
	Paolo Bonzini <pbonzini@redhat.com>, =?UTF-8?Q?Daniel_P=2E_Berrang=C3=A9?= <berrange@redhat.com>, 
	Juan Quintela <quintela@redhat.com>, qemu-block@nongnu.org, 
	Eduardo Habkost <eduardo@habkost.net>, Richard Henderson <richard.henderson@linaro.org>, 
	David Woodhouse <dwmw2@infradead.org>, Stefan Weil <sw@weilnetz.de>, Fam Zheng <fam@euphon.net>, 
	Julia Suvorova <jusual@redhat.com>, Ronnie Sahlberg <ronniesahlberg@gmail.com>, 
	xen-devel@lists.xenproject.org, Hanna Reitz <hreitz@redhat.com>, 
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>, eesposit@redhat.com, Kevin Wolf <kwolf@redhat.com>, 
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Paul Durrant <paul@xen.org>, Aarushi Mehta <mehta.aaru20@gmail.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Anthony Perard <anthony.perard@citrix.com>, 
	"Richard W.M. Jones" <rjones@redhat.com>, Coiby Xu <Coiby.Xu@gmail.com>, 
	Stefano Garzarella <sgarzare@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Apr 26, 2023 at 12:43 AM Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> On Fri, Apr 21, 2023 at 11:36:02AM +0800, Yongji Xie wrote:
> > Hi Stefan,
> >
> > On Thu, Apr 20, 2023 at 7:39 PM Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > >
> > > vduse_blk_detach_ctx() waits for in-flight requests using
> > > AIO_WAIT_WHILE(). This is not allowed according to a comment in
> > > bdrv_set_aio_context_commit():
> > >
> > >   /*
> > >    * Take the old AioContext when detaching it from bs.
> > >    * At this point, new_context lock is already acquired, and we are now
> > >    * also taking old_context. This is safe as long as bdrv_detach_aio_context
> > >    * does not call AIO_POLL_WHILE().
> > >    */
> > >
> > > Use this opportunity to rewrite the drain code in vduse-blk:
> > >
> > > - Use the BlockExport refcount so that vduse_blk_exp_delete() is only
> > >   called when there are no more requests in flight.
> > >
> > > - Implement .drained_poll() so in-flight request coroutines are stopped
> > >   by the time .bdrv_detach_aio_context() is called.
> > >
> > > - Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
> > >   .bdrv_detach_aio_context() constraint violation. It's no longer
> > >   needed due to the previous changes.
> > >
> > > - Always handle the VDUSE file descriptor, even in drained sections. The
> > >   VDUSE file descriptor doesn't submit I/O, so it's safe to handle it in
> > >   drained sections. This ensures that the VDUSE kernel code gets a fast
> > >   response.
> > >
> > > - Suspend virtqueue fd handlers in .drained_begin() and resume them in
> > >   .drained_end(). This eliminates the need for the
> > >   aio_set_fd_handler(is_external=true) flag, which is being removed from
> > >   QEMU.
> > >
> > > This is a long list but splitting it into individual commits would
> > > probably lead to git bisect failures - the changes are all related.
> > >
> > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > ---
> > >  block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
> > >  1 file changed, 93 insertions(+), 39 deletions(-)
> > >
> > > diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
> > > index f7ae44e3ce..35dc8fcf45 100644
> > > --- a/block/export/vduse-blk.c
> > > +++ b/block/export/vduse-blk.c
> > > @@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
> > >      VduseDev *dev;
> > >      uint16_t num_queues;
> > >      char *recon_file;
> > > -    unsigned int inflight;
> > > +    unsigned int inflight; /* atomic */
> > > +    bool vqs_started;
> > >  } VduseBlkExport;
> > >
> > >  typedef struct VduseBlkReq {
> > > @@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
> > >
> > >  static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
> > >  {
> > > -    vblk_exp->inflight++;
> > > +    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
> >
> > I wonder why we need to use atomic operations here.
>
> The inflight counter is only modified by the vhost-user export thread,
> but it may be read by another thread here:
>

I see. What I mean is: would the volatile keyword alone be enough here,
since the writers do not access the variable concurrently?

>   static bool vduse_blk_drained_poll(void *opaque)
>   {
>       BlockExport *exp = opaque;
>       VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
>
>       return qatomic_read(&vblk_exp->inflight) > 0;
>
> BlockDevOps->drained_poll() calls are invoked when BlockDriverStates are
> drained (e.g. blk_drain_all() and related APIs).
>
> > > @@ -355,13 +410,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
> > >      g_free(vblk_exp->handler.serial);
> > >  }
> > >
> > > +/* Called with exp->ctx acquired */
> > >  static void vduse_blk_exp_request_shutdown(BlockExport *exp)
> > >  {
> > >      VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
> > >
> > > -    aio_context_acquire(vblk_exp->export.ctx);
> > > -    vduse_blk_detach_ctx(vblk_exp);
> > > -    aio_context_release(vblk_exp->export.ctx);
> > > +    vduse_blk_stop_virtqueues(vblk_exp);
> >
> > Can we add an AIO_WAIT_WHILE() here? Then we don't need to
> > increase/decrease the BlockExport refcount during I/O processing.
>
> I don't think so because vduse_blk_exp_request_shutdown() is not the
> only place where we wait for requests to complete. There would still
> need to be a way to wait for requests to finish (without calling
> AIO_WAIT_WHILE()) in vduse_blk_drained_poll().
>

But the BlockExport would not be freed until we call
vduse_blk_exp_request_shutdown(). If we can ensure that there will be
no inflight I/O after we call vduse_blk_exp_request_shutdown(), the
BlockExport can be freed safely without increasing/decreasing the
BlockExport refcount during I/O processing.

Thanks,
Yongji


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 02:40:30 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180412-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180412: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=a14b8206c5edcbbad1c71256ea9b44c3b382a9f5
X-Osstest-Versions-That:
    qemuu=327ec8d6c2a2223b78d311153a471036e474c5c5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 02:40:22 +0000

flight 180412 qemu-mainline real [real]
flight 180421 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180412/
http://logs.test-lab.xenproject.org/osstest/logs/180421/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail pass in 180421-retest
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install fail pass in 180421-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop   fail in 180421 like 180394
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail in 180421 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180394
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180394
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180394
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180394
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180394
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180394
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180394
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                a14b8206c5edcbbad1c71256ea9b44c3b382a9f5
baseline version:
 qemuu                327ec8d6c2a2223b78d311153a471036e474c5c5

Last test of basis   180394  2023-04-24 07:55:31 Z    1 days
Failing since        180398  2023-04-24 20:10:16 Z    1 days    2 attempts
Testing same since   180412  2023-04-25 15:56:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Carlos López <clopez@suse.de>
  Chuck Zmudzinski <brchuckz@aol.com>
  Cornelia Huck <cohuck@redhat.com>
  David Hildenbrand <david@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eugenio Pérez <eperezma@redhat.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Halil Pasic <pasic@linux.ibm.com>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Lukas Straub <lukasstraub2@web.de>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> [sun4u]
  Michael S. Tsirkin <mst@redhat.com>
  Ming Yang <yangming73@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Qi Xi <xiqi2@huawei.com>
  Richard Henderson <richard.henderson@linaro.org>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
  Thomas Huth <thuth@redhat.com>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Yangming <yangming73@huawei.com>
  李皆俊 <a_lijiejun@163.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   327ec8d6c2..a14b8206c5  a14b8206c5edcbbad1c71256ea9b44c3b382a9f5 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 02:59:40 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180406-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 180406: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.17-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=208dd44299193347d4ececdc1c8f864f6d9a0b9b
X-Osstest-Versions-That:
    xen=aa80e0afaaa5399915d90a80c6dbe1e353664d44
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 02:59:22 +0000

flight 180406 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180406/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180396
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180396
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180396
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180396
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180396
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180396
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180396
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180396
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180396
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180396
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180396
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180396
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  208dd44299193347d4ececdc1c8f864f6d9a0b9b
baseline version:
 xen                  aa80e0afaaa5399915d90a80c6dbe1e353664d44

Last test of basis   180396  2023-04-24 11:08:31 Z    1 days
Testing same since   180406  2023-04-25 07:38:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   aa80e0afaa..208dd44299  208dd44299193347d4ececdc1c8f864f6d9a0b9b -> stable-4.17


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 03:08:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 03:08:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526454.818209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prVWJ-0005eB-ND; Wed, 26 Apr 2023 03:08:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526454.818209; Wed, 26 Apr 2023 03:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prVWJ-0005e4-Kb; Wed, 26 Apr 2023 03:08:39 +0000
Received: by outflank-mailman (input) for mailman id 526454;
 Wed, 26 Apr 2023 03:08:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9cwp=AR=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1prVWI-0005dy-G6
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 03:08:38 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a1cd3d15-e3df-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 05:08:36 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9CB1861193;
 Wed, 26 Apr 2023 03:08:34 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6053AC433EF;
 Wed, 26 Apr 2023 03:08:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1cd3d15-e3df-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682478514;
	bh=PzQJ1UmMeamWaVQGvThlEAXTvAWz6c6yXI6rxtx2GOE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=P2ABSbKv7IUuhZVOd/Nq0/olTd6IIr6nxHIbPhLJLvNLyu3SSWwCm9cf69FuuyWDr
	 +rK8O42bHFpU/TdR9WeRrDVxzocnEfhfFAA3e7YC0WmyX8+1afpq783t1XJb+ZiYEF
	 nffsp2Rt8YrvMGI57Dn37gvPzsW96IHZV1ainkbi1K6nPpL4zyeEBJJGYVEtD7jQhj
	 rIb4IGjy4xuodh/zUt9R4YwFrzB20MS5zO5n5CzkjG/RKEip6AKMVzFV4VJ8WqNtmx
	 1p0A+GnWJfZbiBjFUGzA4k8q00Tixc59xu+m0pgHfg/KGU2WD/Z/Wtf4fBFuhRkNPg
	 UE6WaTrC3rVnQ==
Date: Tue, 25 Apr 2023 20:08:31 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH v2 5/6] automation: PCI passthrough tests on ADL hw
In-Reply-To: <1948952135feb360797da0bb0136e7d42e188e72.1682468126.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2304251825360.3419@ubuntu-linux-20-04-desktop>
References: <cover.ddd9fded43b546af196bcfb4473b62e1fa3864b3.1682468126.git-series.marmarek@invisiblethingslab.com> <1948952135feb360797da0bb0136e7d42e188e72.1682468126.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-601136291-1682472388=:3419"
Content-ID: <alpine.DEB.2.22.394.2304252008090.3419@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-601136291-1682472388=:3419
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2304252008091.3419@ubuntu-linux-20-04-desktop>

On Wed, 26 Apr 2023, Marek Marczykowski-Górecki wrote:
> Add a simple PCI passthrough test to both PV and HVM domUs. It passes
> through a network adapter (the only one in the system), gets an IP via
> DHCP (first basic test) and then pings the gateway (second basic test).
> Finally, if the device is supposed to use MSI or MSI-X (as set in the
> PCIDEV_INTR test variable), check that it is in use via /proc/interrupts.
> 
> On the current runner, the device in question is this:
> 03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3] (rev 03)
> 	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:7d25]
> 	Flags: bus master, fast devsel, latency 0, IRQ 18
> 	Memory at 50400000 (32-bit, non-prefetchable) [size=1M]
> 	Memory at 50500000 (32-bit, non-prefetchable) [size=16K]
> 	Capabilities: [40] Power Management version 3
> 	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
> 	Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
> 	Capabilities: [a0] Express Endpoint, MSI 00
> 	Capabilities: [100] Advanced Error Reporting
> 	Capabilities: [140] Device Serial Number ...
> 	Capabilities: [1c0] Latency Tolerance Reporting
> 	Capabilities: [1f0] Precision Time Measurement
> 	Capabilities: [1e0] L1 PM Substates
> 	Kernel driver in use: igc
> 	Kernel modules: igc
> 
> With the current Xen version, it uses MSI-X under PV and MSI under HVM.
> 
> This patch moves the domU config to a variable, to make it configurable
> on a per-test basis. Also add a few comments for visual separation of the
> tests.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> changes in v2:
>  - drop leftover debug shell
>  - fix regex -msi to not match -msi-x
>  - fix waiting for domU startup
> ---
>  automation/gitlab-ci/test.yaml     | 20 +++++++-
>  automation/scripts/qubes-x86-64.sh | 85 ++++++++++++++++++++++++++-----
>  2 files changed, 93 insertions(+), 12 deletions(-)
> 
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index d68c584269dd..1ce083e6cd88 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -94,6 +94,8 @@
>      # the test controller runs on RPi4
>      CONTAINER: alpine:3.12-arm64v8
>      LOGFILE: smoke-test.log
> +    PCIDEV: "03:00.0"
> +    PCIDEV_INTR: "MSI-X"
>    artifacts:
>      paths:
>        - smoke.serial
> @@ -147,6 +149,24 @@ adl-suspend-x86-64-gcc-debug:
>      - *x86-64-test-needs
>      - alpine-3.12-gcc-debug
>  
> +adl-pci-pv-x86-64-gcc-debug:
> +  extends: .adl-x86-64
> +  script:
> +    - ./automation/scripts/qubes-x86-64.sh pci-pv 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - *x86-64-test-needs
> +    - alpine-3.12-gcc-debug
> +
> +adl-pci-hvm-x86-64-gcc-debug:
> +  extends: .adl-x86-64
> +  variables:
> +    PCIDEV_INTR: "MSI"
> +  script:
> +    - ./automation/scripts/qubes-x86-64.sh pci-hvm 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - *x86-64-test-needs
> +    - alpine-3.12-gcc-debug
> +
>  qemu-smoke-dom0-arm64-gcc:
>    extends: .qemu-arm64
>    script:
> diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
> index 6c0309704661..a01c571860ee 100755
> --- a/automation/scripts/qubes-x86-64.sh
> +++ b/automation/scripts/qubes-x86-64.sh
> @@ -4,8 +4,21 @@ set -ex
>  
>  test_variant=$1
>  
> +### defaults
>  wait_and_wakeup=
>  timeout=120
> +domU_config='
> +type = "pvh"
> +name = "domU"
> +kernel = "/boot/vmlinuz"
> +ramdisk = "/boot/initrd-domU"
> +extra = "root=/dev/ram0 console=hvc0"
> +memory = 512
> +vif = [ "bridge=xenbr0", ]
> +disk = [ ]
> +'
> +
> +### test: smoke test
>  if [ -z "${test_variant}" ]; then
>      passed="ping test passed"
>      domU_check="
> @@ -23,6 +36,8 @@ done
>  tail -n 100 /var/log/xen/console/guest-domU.log
>  echo \"${passed}\"
>  "
> +
> +### test: S3
>  elif [ "${test_variant}" = "s3" ]; then
>      passed="suspend test passed"
>      wait_and_wakeup="started, suspending"
> @@ -48,6 +63,62 @@ xl dmesg | grep 'Finishing wakeup from ACPI S3 state' || exit 1
>  ping -c 10 192.168.0.2 || exit 1
>  echo \"${passed}\"
>  "
> +
> +### test: pci-pv, pci-hvm
> +elif [ "${test_variant}" = "pci-pv" ] || [ "${test_variant}" = "pci-hvm" ]; then
> +
> +    if [ -z "$PCIDEV" ]; then
> +        echo "Please set 'PCIDEV' variable with BDF of test network adapter" >&2
> +        echo "Optionally set also 'PCIDEV_INTR' to 'MSI' or 'MSI-X'" >&2
> +        exit 1
> +    fi
> +
> +    passed="pci test passed"
> +
> +    domU_config='
> +type = "'${test_variant#pci-}'"
> +name = "domU"
> +kernel = "/boot/vmlinuz"
> +ramdisk = "/boot/initrd-domU"
> +extra = "root=/dev/ram0 console=hvc0"
> +memory = 512
> +vif = [ ]
> +disk = [ ]
> +pci = [ "'$PCIDEV',seize=1" ]
> +on_reboot = "destroy"
> +'
> +
> +    domU_check="
> +set -x -e
> +ip link set eth0 up
> +timeout 30s udhcpc -i eth0
> +pingip=\$(ip -o -4 r show default|cut -f 3 -d ' ')
> +ping -c 10 \"\$pingip\"
> +echo domU started
> +cat /proc/interrupts
> +"
> +    if [ "$PCIDEV_INTR" = "MSI-X" ]; then
> +        domU_check="$domU_check
> +grep -- '\\(-msi-x\\|PCI-MSI-X\\).*eth0' /proc/interrupts
> +"
> +    elif [ "$PCIDEV_INTR" = "MSI" ]; then
> +        # depending on the kernel version and domain type, the MSI can be
> +        # marked as '-msi', 'PCI-MSI', or 'PCI-MSI-<SBDF>'; be careful to not match
> +        # -msi-x nor PCI-MSI-X
> +        domU_check="$domU_check
> +grep -- '\\(-msi \\|PCI-MSI\\( \\|-[^X]\\)\\).*eth0' /proc/interrupts
> +"
> +    fi
> +    domU_check="$domU_check
> +echo \"${passed}\"
> +"
> +
> +    dom0_check="
> +until grep -q \"^domU Welcome to Alpine Linux\" /var/log/xen/console/guest-domU.log; do
> +    sleep 1
> +done
> +tail -n 100 /var/log/xen/console/guest-domU.log
> +"
>  fi
>  
>  # DomU
> @@ -63,7 +134,7 @@ rm var/run
>  echo "#!/bin/sh
>  
>  ${domU_check}
> -/bin/sh" > etc/local.d/xen.start
> +" > etc/local.d/xen.start
>  chmod +x etc/local.d/xen.start
>  echo "rc_verbose=yes" >> etc/rc.conf
>  sed -i -e 's/^Welcome/domU \0/' etc/issue
> @@ -98,17 +169,7 @@ xl create /etc/xen/domU.cfg
>  ${dom0_check}
>  " > etc/local.d/xen.start
>  chmod +x etc/local.d/xen.start
> -# just PVH for now
> -echo '
> -type = "pvh"
> -name = "domU"
> -kernel = "/boot/vmlinuz"
> -ramdisk = "/boot/initrd-domU"
> -extra = "root=/dev/ram0 console=hvc0"
> -memory = 512
> -vif = [ "bridge=xenbr0", ]
> -disk = [ ]
> -' > etc/xen/domU.cfg
> +echo "$domU_config" > etc/xen/domU.cfg
>  
>  echo "rc_verbose=yes" >> etc/rc.conf
>  echo "XENCONSOLED_TRACE=all" >> etc/default/xencommons
> -- 
> git-series 0.9.1
> 
--8323329-601136291-1682472388=:3419--
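The v2 changelog above notes the MSI regex was fixed "to not match -msi-x". The two grep patterns from the patch can be exercised in isolation; this is a sketch, not part of the patch, and the sample /proc/interrupts lines are invented for illustration (real entries vary by kernel version and domain type, as the patch comment says):

```shell
#!/bin/sh
# Sample /proc/interrupts lines (invented): one MSI entry, one MSI-X
# entry, and one Xen-style '-msi' entry, all bound to eth0.
interrupts='
 34:   0   PCI-MSI 524288-edge      eth0
 35:   0   PCI-MSI-X 524289-edge    eth0
 36:   0   xen-pirq    -msi         eth0
'

# Patterns as they appear (unescaped) in qubes-x86-64.sh:
msi_pat='\(-msi \|PCI-MSI\( \|-[^X]\)\).*eth0'   # must not match MSI-X
msix_pat='\(-msi-x\|PCI-MSI-X\).*eth0'

# The MSI pattern matches lines 34 and 36 but skips the MSI-X line,
# because after 'PCI-MSI' it requires a space or '-' followed by a
# non-'X' character.
echo "$interrupts" | grep -c -- "$msi_pat"
echo "$interrupts" | grep -c -- "$msix_pat"
```

This shows why the plain `PCI-MSI` prefix alone would be too loose: it is a substring of `PCI-MSI-X`, so the `\( \|-[^X]\)` suffix is what keeps an HVM MSI check from passing on an MSI-X interrupt.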


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 05:34:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 05:34:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526464.818219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prXmw-00047Y-6x; Wed, 26 Apr 2023 05:33:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526464.818219; Wed, 26 Apr 2023 05:33:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prXmw-00047R-4K; Wed, 26 Apr 2023 05:33:58 +0000
Received: by outflank-mailman (input) for mailman id 526464;
 Wed, 26 Apr 2023 05:33:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNUB=AR=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prXmu-00047L-Mm
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 05:33:56 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20616.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eea6aa83-e3f3-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 07:33:54 +0200 (CEST)
Received: from DU2P250CA0010.EURP250.PROD.OUTLOOK.COM (2603:10a6:10:231::15)
 by DU0PR08MB9370.eurprd08.prod.outlook.com (2603:10a6:10:420::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34; Wed, 26 Apr
 2023 05:33:47 +0000
Received: from DBAEUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:231:cafe::2b) by DU2P250CA0010.outlook.office365.com
 (2603:10a6:10:231::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21 via Frontend
 Transport; Wed, 26 Apr 2023 05:33:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT060.mail.protection.outlook.com (100.127.142.238) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.21 via Frontend Transport; Wed, 26 Apr 2023 05:33:47 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Wed, 26 Apr 2023 05:33:47 +0000
Received: from 067581ce04cc.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 12E96CDC-DFC1-43BA-AE7D-D67BA066C3D7.1; 
 Wed, 26 Apr 2023 05:33:41 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 067581ce04cc.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 26 Apr 2023 05:33:41 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB10348.eurprd08.prod.outlook.com (2603:10a6:20b:57c::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Wed, 26 Apr
 2023 05:33:40 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 05:33:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eea6aa83-e3f3-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4HEs9Tlm6pz/CzNAbKA9WmPCBPTsr578UbZpamE1/rY=;
 b=c96dHdgMKujEpOo42hoGqhPiCTJtetSwnfW88+hFab1KUuqB3FSMfhzcD4jKVadtkC+HQy0nDXgnhZEq5q9x+L9P4PjzsqCh7sf7BzXeYtngy/GdnGjo0/1FMhT9P7QiBZZAVJbh36cOCeIiiso2g0mkXx43dLLj2wczY6MMIoQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B9xXODhmXnXnVugNt6gS7Un44U5Th1E45pAKOXxt2e/xrNC36uyMIHKWzQTMI2rNJ7+xG20psq/KdO3a8JqBc9tP9+SBohLPe+G7q/wJei8uq78Zrik2BurZZ8qWSYjP1OOZ9x37b8/MBZ0znYhT3HyEMIdsvj8gvKFX9VVXg3luzv95P6bRlrDIVqchOfut1uM85usOTWNT4WrPHJ31JOCrYKdnk2dH4pB5j/Q++MNRaCHFvSSzEExajmJKcWyLalihHeRVaWtVFcEPceWuJQyUm05Xh3m+dsU7KgXMq1h9qkr4r2S77owSO66+KYTTIxB+7khIIAJLJHIG/OZtgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4HEs9Tlm6pz/CzNAbKA9WmPCBPTsr578UbZpamE1/rY=;
 b=fMv2sD/YKpxAiIqiDLUKlVOQzeD6Ebu2Z5lEGkkhI6yEmw75jGL1U6nTjITafIMJRTHwBPNmZwExnym/WtjbDWXbYLKvG6jN6AiOlfhmNaCzzB+U+WCTEwBMV1HVybqAG3NS/ackREEjtPFEhhIMBUEytEdzHOqQURyin1I4+Mf2Oy+N+oFx5HzC3BlR/TaGBkzPj29R6NK2nUi7ChAtdA77GDxt7fm0J8JbWJpKNS0JG0nop4gbuPJu6/9WSeU65TTzMVJLeaLDmr391EL/hB2QIJ7dOvj2xmOHlQPyaMCcrh9i2ZKUaVy5vcW5PuKZuVutzvwusV+k7ZoNDn7DAQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4HEs9Tlm6pz/CzNAbKA9WmPCBPTsr578UbZpamE1/rY=;
 b=c96dHdgMKujEpOo42hoGqhPiCTJtetSwnfW88+hFab1KUuqB3FSMfhzcD4jKVadtkC+HQy0nDXgnhZEq5q9x+L9P4PjzsqCh7sf7BzXeYtngy/GdnGjo0/1FMhT9P7QiBZZAVJbh36cOCeIiiso2g0mkXx43dLLj2wczY6MMIoQ=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Julien Grall <julien@xen.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Thread-Topic: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
 tree NUMA distance map
Thread-Index: AQHZd0ulMpqFICe/LUiDVq6+aDsW9687sveAgAFTkCA=
Date: Wed, 26 Apr 2023 05:33:39 +0000
Message-ID:
 <AS8PR08MB799117EDD6BAB892CAB870A192659@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <e03bbb52-1a19-7d18-4abe-75bbef8a0aee@suse.com>
In-Reply-To: <e03bbb52-1a19-7d18-4abe-75bbef8a0aee@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 6B56B7C6B8EF6E49B2BD95D9754F6A85.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB10348:EE_|DBAEUR03FT060:EE_|DU0PR08MB9370:EE_
X-MS-Office365-Filtering-Correlation-Id: e3e57fdc-6d65-4662-4001-08db4617cf0d
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB10348
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c87df224-920d-445b-d42d-08db4617ca52
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 05:33:47.4845
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e3e57fdc-6d65-4662-4001-08db4617cf0d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9370

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
> tree NUMA distance map
> 
> > +        unsigned int from, to, distance, opposite;
> 
> With these ...
> 
> > +        from = dt_next_cell(1, &matrix);
> > +        to = dt_next_cell(1, &matrix);
> > +        distance = dt_next_cell(1, &matrix);
> > +        if ( (from == to && distance != NUMA_LOCAL_DISTANCE) ||
> > +             (from != to && distance <= NUMA_LOCAL_DISTANCE) )
> > +        {
> > +            printk(XENLOG_WARNING
> > +                   "NUMA: Invalid distance: NODE#%"PRIu32"-
> >NODE#%"PRIu32":%"PRIu32"\n",
> 
> ... you don't mean PRIu32 here and ...
> 
> > +                   from, to, distance);
> > +            goto invalid_data;
> > +        }
> > +
> > +        printk(XENLOG_INFO "NUMA: distance: NODE#%"PRIu32"-
> >NODE#%"PRIu32":%"PRIu32"\n",
> 
> ... here and yet further down anymore. That'll at the same time shorten
> all these lines quite a bit.

I did a little bit of archeology to check the discussion from 2 years ago,
when Wei originally sent v1 to the mailing list. Changing %u to %"PRIu32" was
one of the comments at that time [1], when these variables were defined as
uint32_t. But now that these variables are unsigned int, I think %u is the
more proper way here. I will make the corresponding changes in v5.

> 
> > +               from, to, distance);
> > +
> > +        /* Get opposite way distance */
> > +        opposite = __node_distance(to, from);
> > +        /* The default value in node_distance_map is NUMA_NO_DISTANCE
> */
> > +        if ( opposite == NUMA_NO_DISTANCE )
> 
> And the matrix you're reading from can't hold NUMA_NO_DISTANCE entries?
> I ask because you don't check this above; you only check against
> NUMA_LOCAL_DISTANCE.

My understanding is that the purpose of this part of the code is to check
whether the opposite-way distance has already been set, so we need to compare
the opposite-way distance with the default value NUMA_NO_DISTANCE here.

Back to your question: I can see your point. However, I don't think
NUMA_NO_DISTANCE is a valid value to describe a node distance in the device
tree. I hunted down the previous discussions and found [2], where we agreed to
keep the values used in the device tree consistent with the ACPI tables. In
the ACPI spec, 0xFF, i.e. NUMA_NO_DISTANCE, means unreachable. I think this is
also why NUMA_NO_DISTANCE can be used as the default value of the distance
map; otherwise we wouldn't have any value to use.

> 
> > +        {
> > +            /* Bi-directions are not set, set both */
> > +            numa_set_distance(from, to, distance);
> > +            numa_set_distance(to, from, distance);
> > +        }
> > +        else
> > +        {
> > +            /*
> > +             * Opposite way distance has been set to a different value.
> > +             * It may be a firmware device tree bug?
> > +             */
> > +            if ( opposite != distance )
> > +            {
> > +                /*
> > +                 * In device tree NUMA distance-matrix binding:
> https://www.kernel.org/doc/Documentation/devicetree/bindings/numa.txt
> > +                 * There is a notes mentions:
> > +                 * "Each entry represents distance from first node to
> > +                 *  second node. The distances are equal in either
> > +                 *  direction."
> > +                 *
> > +                 * That means device tree doesn't permit this case.
> > +                 * But in ACPI spec, it cares to specifically permit this
> > +                 * case:
> > +                 * "Except for the relative distance from a System Locality
> > +                 *  to itself, each relative distance is stored twice in the
> > +                 *  matrix. This provides the capability to describe the
> > +                 *  scenario where the relative distances for the two
> > +                 *  directions between System Localities is different."
> > +                 *
> > +                 * That means a real machine allows such NUMA configuration.
> > +                 * So, place a WARNING here to notice system administrators,
> > +                 * is it the specail case that they hijack the device tree
> > +                 * to support their rare machines?
> > +                 */
> > +                printk(XENLOG_WARNING
> > +                       "Un-matched bi-direction! NODE#%"PRIu32"-
> >NODE#%"PRIu32":%"PRIu32", NODE#%"PRIu32"-
> >NODE#%"PRIu32":%"PRIu32"\n",
> > +                       from, to, distance, to, from, opposite);
> > +            }
> > +
> > +            /* Opposite way distance has been set, just set this way */
> > +            numa_set_distance(from, to, distance);
> 
> It took me a while to understand what the comment is to tell me,
> because in this iteration the opposite entry wasn't set. May I
> suggest to make more explicit that you refer to an earlier iteration,
> e.g. by "... was set before, ..."?

Yes, thanks for pointing this out. I will make the comment more explicit
as you suggested.

> 
> > +        }
> > +    }
> > +
> > +    return 0;
> > +
> > +invalid_data:
> 
> Nit: Style (labels to be indented by [at least] one blank).

Sure. Will add a space before.

[1] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02066.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2023-01/msg00690.html

Kind regards,
Henry

> 
> Jan
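[Editorial note: the logic discussed in this thread can be condensed into a
standalone sketch. This is not the Xen code under review: dt_next_cell(),
numa_set_distance(), __node_distance() and printk() are replaced with plain-C
stand-ins, and MAX_NODES, the flat cell array, and all function names here are
invented for illustration. The validation rules and the symmetric-default
behaviour mirror the quoted patch; NUMA_NO_DISTANCE doubling as the "unset"
sentinel is the point Henry defends above.]

```c
#include <stdint.h>
#include <stdio.h>

#define NUMA_LOCAL_DISTANCE 10   /* conventional local distance, as in ACPI SLIT */
#define NUMA_NO_DISTANCE    0xFF /* "unreachable", also the unset/default entry */
#define MAX_NODES           4

static unsigned int node_distance_map[MAX_NODES][MAX_NODES];

/* Fill the map with the default value, as the thread describes. */
static void init_distance_map(void)
{
    for (unsigned int i = 0; i < MAX_NODES; i++)
        for (unsigned int j = 0; j < MAX_NODES; j++)
            node_distance_map[i][j] = NUMA_NO_DISTANCE;
}

/* Parse a flat (from, to, distance) cell list, applying the same
 * validation as the patch under review. Returns 0 on success. */
static int parse_distance_matrix(const uint32_t *cells, unsigned int n_entries)
{
    for (unsigned int i = 0; i < n_entries; i++) {
        unsigned int from = cells[3 * i];
        unsigned int to = cells[3 * i + 1];
        unsigned int distance = cells[3 * i + 2];

        if (from >= MAX_NODES || to >= MAX_NODES)
            return -1;

        /* A node is at the local distance from itself, and strictly
         * farther than local from every other node. */
        if ((from == to && distance != NUMA_LOCAL_DISTANCE) ||
            (from != to && distance <= NUMA_LOCAL_DISTANCE))
            return -1;

        unsigned int opposite = node_distance_map[to][from];
        if (opposite == NUMA_NO_DISTANCE) {
            /* Opposite direction not seen in an earlier entry:
             * assume symmetry and set both directions. */
            node_distance_map[from][to] = distance;
            node_distance_map[to][from] = distance;
        } else {
            /* Opposite direction was set before. The DT binding says
             * distances are symmetric, so only warn on a mismatch,
             * then record this direction as given. */
            if (opposite != distance)
                fprintf(stderr, "NUMA: asymmetric %u-%u: %u vs %u\n",
                        from, to, distance, opposite);
            node_distance_map[from][to] = distance;
        }
    }
    return 0;
}
```

A matrix entry equal to NUMA_NO_DISTANCE would pass the validation above only
for a remote pair, which is exactly Jan's observation; Henry's answer is that
0xFF is not a value a well-formed device tree should carry.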


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 05:37:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 05:37:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526468.818230 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prXqT-0004iP-N6; Wed, 26 Apr 2023 05:37:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526468.818230; Wed, 26 Apr 2023 05:37:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prXqT-0004iI-KD; Wed, 26 Apr 2023 05:37:37 +0000
Received: by outflank-mailman (input) for mailman id 526468;
 Wed, 26 Apr 2023 05:37:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vDOC=AR=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1prXqR-0004iA-UZ
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 05:37:35 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7218f253-e3f4-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 07:37:34 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1A86F1FDCC;
 Wed, 26 Apr 2023 05:37:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E454413499;
 Wed, 26 Apr 2023 05:37:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id cdn/NZ24SGQpQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 26 Apr 2023 05:37:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7218f253-e3f4-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682487454; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=tgV56S5qiOQ/84KjqJPAHn3Vo9CBCYcj1Ub2WXXrFbs=;
	b=sIIO70hC+bfFaKVEW6yYLT8P6nPEZJ/8RvAKV+5tsGAwExrh3K+uD3ZmyaQoXrvFHZHHyE
	18s32b8Ze926XSgR/exCJfIGh3zZKY8Oec9/5OyMY9cCnggVDn3jexSB1tbqXQQYwrf/5b
	SnDGMOZennRpOEs2roBRguPwz+oJTEQ=
Message-ID: <b317291f-98d3-f028-db75-a8aef8db5f9c@suse.com>
Date: Wed, 26 Apr 2023 07:37:33 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] libxl: device_backend_callback() print rc on error
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230425194622.114869-1-jandryuk@gmail.com>
 <20230425194622.114869-2-jandryuk@gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230425194622.114869-2-jandryuk@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------c4IAEzMtsYnTZHDVMQLUbDxe"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------c4IAEzMtsYnTZHDVMQLUbDxe
Content-Type: multipart/mixed; boundary="------------IFoo0gcLmFeWkxVCxGbVc59U";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <b317291f-98d3-f028-db75-a8aef8db5f9c@suse.com>
Subject: Re: [PATCH] libxl: device_backend_callback() print rc on error
References: <20230425194622.114869-1-jandryuk@gmail.com>
 <20230425194622.114869-2-jandryuk@gmail.com>
In-Reply-To: <20230425194622.114869-2-jandryuk@gmail.com>

--------------IFoo0gcLmFeWkxVCxGbVc59U
Content-Type: multipart/mixed; boundary="------------GCU1EmVOLdBvRV7KobKOGRLm"

--------------GCU1EmVOLdBvRV7KobKOGRLm
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

On 25.04.23 21:46, Jason Andryuk wrote:
> Print the rc when an error is found in device_backend_callback() so the
> user can have some idea of why things went wrong.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------GCU1EmVOLdBvRV7KobKOGRLm
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------GCU1EmVOLdBvRV7KobKOGRLm--

--------------IFoo0gcLmFeWkxVCxGbVc59U--

--------------c4IAEzMtsYnTZHDVMQLUbDxe
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRIuJ0FAwAAAAAACgkQsN6d1ii/Ey9E
MQf9GBtWQvYFemQ+WW1KKzk+vHlYUs3FXrVMJWMHX5a1PzYnmXwnbF1MD63L91Wi0ua0QyouwE8l
+wFDPotfEojZ5jCoU2C3+GMbUEjN9xhfIXWfTRDQEqcqJq1wdHCAM7W5vd2qXosTRoG12FKOOmKU
1TDPtRG54jPnrBfG/7D4tLB437vz71k00bhsso8AmP3gRWENTLsr7WuvF9MoNRMYU2uJUwtl+hN5
48PUbaL4YLMeJVFWsi4WS/S9wZcnTiBZJ8DKlfvUEOwcAbm+KjZlI71lLoUJqT6F28Lqo4QtAQxT
V575vLrs2rcQ+BQfpdazgG2XwA/eYE1Grn8qTVcvIQ==
=+z/X
-----END PGP SIGNATURE-----

--------------c4IAEzMtsYnTZHDVMQLUbDxe--


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 05:38:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 05:38:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526472.818240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prXr7-0005Ch-Vo; Wed, 26 Apr 2023 05:38:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526472.818240; Wed, 26 Apr 2023 05:38:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prXr7-0005Ca-Sx; Wed, 26 Apr 2023 05:38:17 +0000
Received: by outflank-mailman (input) for mailman id 526472;
 Wed, 26 Apr 2023 05:38:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vDOC=AR=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1prXr6-0004iA-H3
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 05:38:16 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8aca9f1f-e3f4-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 07:38:16 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C24EB1FDC9;
 Wed, 26 Apr 2023 05:38:15 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 8F07C13499;
 Wed, 26 Apr 2023 05:38:15 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id oFgAIce4SGRjQwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 26 Apr 2023 05:38:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8aca9f1f-e3f4-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682487495; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=3+1LvzoczVZHA9tNjJWh1uzUv+ITIV58gQTXKySEJAs=;
	b=E5ngDgT/WXnIiiiADnX0nFj9xPxoThk+T33YZhGYNg0q07SKvC2FInS8y4doyK06/tldpU
	SYiNVIGn/gUh2xuKjSxJe7yJx1wv2KmB2NEsrgquO97tUhXPSFZsFjmwXqNk9Ol6e/aAnH
	M1+4ArFq2vBTT0WizIma5pBQCTHRBAI=
Message-ID: <47ba66e3-085f-b6c8-77c6-3a080ca883ee@suse.com>
Date: Wed, 26 Apr 2023 07:38:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Subject: Re: [PATCH] libxl: Print device_kind as a string
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230425194622.114869-1-jandryuk@gmail.com>
 <20230425194622.114869-3-jandryuk@gmail.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230425194622.114869-3-jandryuk@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------cOBoH2z0C60Q0n00Hbw4G0te"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------cOBoH2z0C60Q0n00Hbw4G0te
Content-Type: multipart/mixed; boundary="------------TRU7bDplHhpwZYUhH07j7NmB";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <47ba66e3-085f-b6c8-77c6-3a080ca883ee@suse.com>
Subject: Re: [PATCH] libxl: Print device_kind as a string
References: <20230425194622.114869-1-jandryuk@gmail.com>
 <20230425194622.114869-3-jandryuk@gmail.com>
In-Reply-To: <20230425194622.114869-3-jandryuk@gmail.com>

--------------TRU7bDplHhpwZYUhH07j7NmB
Content-Type: multipart/mixed; boundary="------------ErqHxfrzy03szkk3N7YH9UY0"

--------------ErqHxfrzy03szkk3N7YH9UY0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

On 25.04.23 21:46, Jason Andryuk wrote:
> Printing the integer isn't particularly informative.  Switch to a
> human-readable string when printing the device_kind in
> libxl__get_hotplug_script_info().
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
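[Editorial note: the patch acked here replaces a raw integer device_kind in a
log message with its human-readable name. A minimal sketch of the underlying
technique follows; the enum values, name table, and function name are
hypothetical stand-ins, not libxl's actual generated code.]

```c
#include <stdio.h>

/* Hypothetical stand-in for a device-kind enum; libxl's real enum and
 * its names come from IDL-generated code. */
typedef enum {
    DEVICE_KIND_NONE,
    DEVICE_KIND_VIF,
    DEVICE_KIND_VBD,
    DEVICE_KIND_COUNT, /* sentinel: number of valid values */
} device_kind;

static const char *const device_kind_names[DEVICE_KIND_COUNT] = {
    [DEVICE_KIND_NONE] = "none",
    [DEVICE_KIND_VIF]  = "vif",
    [DEVICE_KIND_VBD]  = "vbd",
};

/* Map an enum value to a stable string, with a fallback for
 * out-of-range input so log statements never print NULL. */
static const char *device_kind_to_string(device_kind k)
{
    if ((unsigned int)k >= DEVICE_KIND_COUNT || !device_kind_names[k])
        return "unknown";
    return device_kind_names[k];
}
```

A log line then reads e.g. "no hotplug script for vif" rather than
"no hotplug script for 1", which is the whole point of the patch.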
--------------ErqHxfrzy03szkk3N7YH9UY0
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------ErqHxfrzy03szkk3N7YH9UY0--

--------------TRU7bDplHhpwZYUhH07j7NmB--

--------------cOBoH2z0C60Q0n00Hbw4G0te
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRIuMcFAwAAAAAACgkQsN6d1ii/Ey+/
Dwf/ZIbJR0e9BN6cana3f/ZVUCCbpodCiY1aRcoDtRXkTc5sr2O77rNWWF25Wqt++Kox3lch/L7j
YUnkUaTZU1F9jItTsOT119RESuFcKthjsv9WSZ89JSYGtbq5B/b05VZ3gnkf2xKgNEPlhRmkAV0e
y6iC0qTrhVdb8QXQOlHJujmW27guo06oLqVp0D6vxsGKoeG0Uc5/av+Zl3uz02E5GfW5pMGWaRKx
xUmy80vfZ5Z+ZeexOoPmCmQFKeTT2pqKwCcbo9SOEWgYRq2CFWm0Jf4h42P6U7VmF9YhNqYOfCij
5qVO/yVUTIohWB8IwucGTwkHjNg8lytyeq4hiepQmg==
=Se84
-----END PGP SIGNATURE-----

--------------cOBoH2z0C60Q0n00Hbw4G0te--


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 05:56:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 05:56:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526479.818254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prY8U-0007qO-I4; Wed, 26 Apr 2023 05:56:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526479.818254; Wed, 26 Apr 2023 05:56:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prY8U-0007qH-DW; Wed, 26 Apr 2023 05:56:14 +0000
Received: by outflank-mailman (input) for mailman id 526479;
 Wed, 26 Apr 2023 05:56:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prY8T-0007q7-LI; Wed, 26 Apr 2023 05:56:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prY8T-0002DO-Jc; Wed, 26 Apr 2023 05:56:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prY8T-0003G0-4q; Wed, 26 Apr 2023 05:56:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prY8T-0007Va-4H; Wed, 26 Apr 2023 05:56:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Pm9VEq0U5x3NupXlhiITNYXBHeJogaqj+x+Tq/ZVEKk=; b=CVqiEYFN6j5jpSlvCumDie6OlF
	EOWFkgeVtyo6vrEIMro5bacs/XdL+UhU/C819ztnPWxrw9+cCzFEMyYz2A6VegEJtHqVn5NewQZtq
	L2H+JkoaV5maNmi/mudSuInH1iFC8DsSbknYw+P8lBxgg8xbze7oyfJKe94aGymv/xL8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180407-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 180407: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.16-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6f6526ac7e342b2f125ffba649d4f13e22bbc860
X-Osstest-Versions-That:
    xen=7f2df63f723478dee629b5884cdee9914f88d98c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 05:56:13 +0000

flight 180407 xen-4.16-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180407/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180397
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180397
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180397
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180397
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180397
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180397
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180397
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180397
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180397
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180397
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180397
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180397
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  6f6526ac7e342b2f125ffba649d4f13e22bbc860
baseline version:
 xen                  7f2df63f723478dee629b5884cdee9914f88d98c

Last test of basis   180397  2023-04-24 11:36:55 Z    1 days
Testing same since   180407  2023-04-25 08:21:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7f2df63f72..6f6526ac7e  6f6526ac7e342b2f125ffba649d4f13e22bbc860 -> stable-4.16


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 06:30:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 06:30:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526486.818264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prYfL-0003kG-0z; Wed, 26 Apr 2023 06:30:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526486.818264; Wed, 26 Apr 2023 06:30:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prYfK-0003k9-UA; Wed, 26 Apr 2023 06:30:10 +0000
Received: by outflank-mailman (input) for mailman id 526486;
 Wed, 26 Apr 2023 06:30:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNUB=AR=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prYfJ-0003k3-HK
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 06:30:09 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0623.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::623])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c9548c3d-e3fb-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 08:30:07 +0200 (CEST)
Received: from AM5PR0502CA0020.eurprd05.prod.outlook.com
 (2603:10a6:203:91::30) by AS2PR08MB9785.eurprd08.prod.outlook.com
 (2603:10a6:20b:606::5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 06:30:05 +0000
Received: from AM7EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:91:cafe::25) by AM5PR0502CA0020.outlook.office365.com
 (2603:10a6:203:91::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Wed, 26 Apr 2023 06:30:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT051.mail.protection.outlook.com (100.127.140.64) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.21 via Frontend Transport; Wed, 26 Apr 2023 06:30:04 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Wed, 26 Apr 2023 06:30:04 +0000
Received: from 3600f1817218.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F3B16A9D-BDA3-46E6-B0E8-8B4ADA33E056.1; 
 Wed, 26 Apr 2023 06:29:54 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3600f1817218.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 26 Apr 2023 06:29:54 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB9PR08MB10378.eurprd08.prod.outlook.com (2603:10a6:10:3da::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.32; Wed, 26 Apr
 2023 06:29:51 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 06:29:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9548c3d-e3fb-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NHlVS/xIpZ4NMvOTHhO8WrzzPUxd4UXeced3AXVdTcQ=;
 b=XbYTDLrgoWps9WRCEV/a/gHy6Qs3HYeys/XTgjoXTRfkJIz0imRCw/NdiUtgQz0URm4pRwtyDqQpAgW4Wtn6PmwcrT85j67xxxrqVIFR8+PrlHORPiCxDcGTzqxdRKtGBPzJ545bOMRZpEz+NE5w8VGR8lDpi4w36wgRgCh8WGY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mNmvRTN0enW9R5+nHOU0sMMi/Y9hdX6uoYuQ1xNsvv3FxSLz0ZmG/p8/1wzyXEpUxXv2SIIUi0HCNkWpH9O0TSfV/oQhErerzk9io3uMFT1WOEWX8rkNCj1yYufLg9t2Im2vVKF5+uS2hfrWZg9kkighGy03vhzIOKtmna/6iQLPegmvxRERDJtAWVWJwG4y+fS8PIXovN0bq8Nf0phenDqyxVwb70CdoK5o/KzIWKrFRz4nZ6C1mPFIGOIkrbUOPkaiU/6WpAmWY54982ygQJjQoMrfclKjPeuJy+2ufJmM5Zj6fXRMspdMjAQz5xfuTMTGFDcWs+Vya1thDB40iQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NHlVS/xIpZ4NMvOTHhO8WrzzPUxd4UXeced3AXVdTcQ=;
 b=N7hr8qgM/Eb0KxhqUrKkmr9qYu++TmqkARiDIHCf3Kb0wb3lzWcBd19Wl8dZ/dmoKJB/jMkCNeylAMQGepYGeOnNuRmxEvGCBW4HBdrAjv4ItLIftox1B1YLbc/KNc6+a2OkjXhhwdezHHQOzTKMwBPFwweXmbYeVMuymG+VRtY5hyjc7diFsuX7sGIFO0MFgVZN2lKjKeuEoXIHpseyOKubxkGBTKBY95SgkkWcZaViHB2lBe0Z84jd8bA0LmPlZRmRuARye7WATpcrhApSvHsSckk8xTUxCIZ6vyWQoqbR2cQ4WmxOHG3VSZ7wEXmuu6gES4A4umaDQQqV485CpA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NHlVS/xIpZ4NMvOTHhO8WrzzPUxd4UXeced3AXVdTcQ=;
 b=XbYTDLrgoWps9WRCEV/a/gHy6Qs3HYeys/XTgjoXTRfkJIz0imRCw/NdiUtgQz0URm4pRwtyDqQpAgW4Wtn6PmwcrT85j67xxxrqVIFR8+PrlHORPiCxDcGTzqxdRKtGBPzJ545bOMRZpEz+NE5w8VGR8lDpi4w36wgRgCh8WGY=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Thread-Topic: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
 tree NUMA distance map
Thread-Index: AQHZd0ulMpqFICe/LUiDVq6+aDsW9687tFqAgAFSIKA=
Date: Wed, 26 Apr 2023 06:29:51 +0000
Message-ID:
 <AS8PR08MB7991F2DDC4C13F33390557EA92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <188a01f0-a2d1-0f2d-4d01-61a259c790f1@suse.com>
In-Reply-To: <188a01f0-a2d1-0f2d-4d01-61a259c790f1@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 7A22E85BB6A0284E94D4B6010A1BA705.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB9PR08MB10378:EE_|AM7EUR03FT051:EE_|AS2PR08MB9785:EE_
X-MS-Office365-Filtering-Correlation-Id: fba4ff46-f873-476f-f9d7-08db461fac30
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB10378
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6e3fab54-047f-4583-ceb5-08db461fa443
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 06:30:04.9253
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fba4ff46-f873-476f-f9d7-08db461fac30
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9785

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
> tree NUMA distance map
> 
> > +        distance = dt_next_cell(1, &matrix);
> 
> Upon second thought I checked what dt_next_cell() returns: You're silently
> truncating here and then ...
> 
> > +            /* Bi-directions are not set, set both */
> > +            numa_set_distance(from, to, distance);
> > +            numa_set_distance(to, from, distance);
> 
> ... here again. Is that really the intention?

By hunting down the historical discussions I found that using dt_next_cell() is
what Julien suggested 2 years ago in the RFC series [1]. Given the truncation
here is for node id (from/to) and distance which I am pretty sure will not
exceed 32-bit range, I think the silent truncation is safe.

However I understand your point here, the silent truncation is not ideal, so
I wonder if you have any suggestions to improve, do you think I should change
these variables to u64 or maybe I need to do the explicit type cast or any
better suggestions from you? Thanks!

[1] https://lists.xenproject.org/archives/html/xen-devel/2021-08/msg01175.html

Kind regards,
Henry

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 06:56:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 06:56:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526495.818274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZ50-0006OW-1a; Wed, 26 Apr 2023 06:56:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526495.818274; Wed, 26 Apr 2023 06:56:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZ4z-0006OP-UF; Wed, 26 Apr 2023 06:56:41 +0000
Received: by outflank-mailman (input) for mailman id 526495;
 Wed, 26 Apr 2023 06:56:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prZ4z-0006OE-De
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 06:56:41 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0605.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7dea7be9-e3ff-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 08:56:39 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8560.eurprd04.prod.outlook.com (2603:10a6:102:217::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 06:56:35 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 06:56:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7dea7be9-e3ff-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kNYkIb7GsHA5yZmZx9LNdft8RH/ndfS3uER6I/lw5rJNzo0AIelm+pMczZPbcb5bSKv8TJGqNrPXvlT9dL8zne1uJLqFGBpjVAlnIvAl1aVHnUOLKDjcbkFM0fRcJR5Qhy0ADJuZWKhvxDt7H3uScj41FmbK7PiIHWcgRYuDi/lZkjB5WBFU7xYtWmyE6JM6fAsY+hI4r96D0+cLpzI0R12i25U8sZQXr+Dv+dcVCuRMmTW0m4A+7sceSqEH9ePF/u3dHl86Dtl2TW5ju8oG+oB3byrDr3vEPfkvApxloTN48xw8K1vynMGWuhmg6QCgKYQySZVPW2W7+OHQw2RgLw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/qG/e5edz6n7E3IYj7TTk5sucdHWBDsJLvmI1nmGUH4=;
 b=ElJbW2vggq9FWRWjZA6f+Lkfa6GLoTf5lQTA1+Q1VP6b4EmX49Rcdze4VNgME3zFDXTNmVQPlLiwa2rTWrbaWPC/giZUQhTvXyIXsZauvuGv+xibL5mhmSbyns1ckk0FzrYLzYV1iQyY9ZbDJq4EldPDyv41tkXxM62ATlSDw31Cmu2skRYpRgu99VApo82ad0IdC2oNo1OG8rrBjR9HOgOoQJqUzfOzBKrWZfMXv0Jm98P7JN1SrtxaRPQoN+nbwTfT+/RKXTXhcMWddpydEB47FwD6FNbF7XC8BLM0zl8Oyf1iegfnDT3MRXi2widpTk+HfW766pFrUDeAJXNiMQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/qG/e5edz6n7E3IYj7TTk5sucdHWBDsJLvmI1nmGUH4=;
 b=W3ATk8iZuwB8N6+ZJtVMbNteLrcvwwMpB4bRK7LCgJGbCnoZAO6CdXYLfiGXnoRgxcQ5KJWTH+lVhDuexvAqwo1V7UnV73AdX9nzSBVoSzPz4zzbOnqk30Hqbb6+zOiYNz/ygYEi9HPXoGuPWs9rCia+HazJuwuOEWv5QnrV/mGk+OKjn8/FQo1Ba3MDYLSCWBZrrUVgqvGtbbdjqCK27tX6ranYsa4ROpOkAKPnTGX43u4r5gpQv1WLEABzCMIM8Tqz7eBkJ1FuH2rF/SX7OoZpcAFlVgUvkwd+kDFXL5L5pFQSaoSVyAm5Zp13lXxOPX7mk6FOBVnOAdh3E3Y3wA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <13635377-e296-370d-121b-5b617dc210bc@suse.com>
Date: Wed, 26 Apr 2023 08:56:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <e03bbb52-1a19-7d18-4abe-75bbef8a0aee@suse.com>
 <AS8PR08MB799117EDD6BAB892CAB870A192659@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <AS8PR08MB799117EDD6BAB892CAB870A192659@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0213.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ac::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8560:EE_
X-MS-Office365-Filtering-Correlation-Id: 00a6f454-3591-43df-a959-08db46235fd4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 00a6f454-3591-43df-a959-08db46235fd4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 06:56:35.1948
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DkxEoc3o8hCcTJ2FDIvSWASSLAQGN0EXiQLwFiPeuFCKNbaoLcE/vIyFPmdpcA0SjBoRQJrm5MccbaUQsSiPLw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8560

On 26.04.2023 07:33, Henry Wang wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>>
>>> +        /* Get opposite way distance */
>>> +        opposite = __node_distance(to, from);
>>> +        /* The default value in node_distance_map is NUMA_NO_DISTANCE
>> */
>>> +        if ( opposite == NUMA_NO_DISTANCE )
>>
>> And the matrix you're reading from can't hold NUMA_NO_DISTANCE entries?
>> I ask because you don't check this above; you only check against
>> NUMA_LOCAL_DISTANCE.
> 
> My understanding for the purpose of this part of code is to check if the opposite
> way distance has already been set, so we need to compare the opposite way
> distance with the default value NUMA_NO_DISTANCE here.
> 
> Back to your question, I can see your point of the question. However I don't think
> NUMA_NO_DISTANCE is a valid value to describe the node distance in the device
> tree. This is because I hunted down the previous discussions and found [2] about
> we should try to keep consistent between the value used in device tree and ACPI
> tables. From the ACPI spec, 0xFF, i.e. NUMA_NO_DISTANCE means unreachable.
> I think this is also the reason why NUMA_NO_DISTANCE can be used as the default
> value of the distance map, otherwise we won't have any value to use.

The [2] link you provided discusses NUMA_LOCAL_DISTANCE. Looking at
Linux's Documentation/devicetree/numa.txt, there's no mention of an
upper bound on the distance values. It only says that the diagonal
entries should be 10 (i.e. matching ACPI, without really saying so).
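
For reference, that binding describes a "numa-distance-map-v1" node whose
matrix is a list of (from node, to node, distance) triplets; a minimal
two-node example (the off-diagonal distance of 20 is illustrative) looks
like:

```dts
distance-map {
        compatible = "numa-distance-map-v1";
        distance-matrix = <0 0 10>,   /* diagonal entries: local distance, 10 */
                          <0 1 20>,
                          <1 0 20>,
                          <1 1 10>;
};
```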

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 07:08:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 07:08:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526499.818284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZFz-0007yd-3b; Wed, 26 Apr 2023 07:08:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526499.818284; Wed, 26 Apr 2023 07:08:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZFy-0007yW-V7; Wed, 26 Apr 2023 07:08:02 +0000
Received: by outflank-mailman (input) for mailman id 526499;
 Wed, 26 Apr 2023 07:08:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prZFy-0007yQ-EA
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 07:08:02 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2071.outbound.protection.outlook.com [40.107.7.71])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 13ff15c4-e401-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 09:08:00 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7340.eurprd04.prod.outlook.com (2603:10a6:102:93::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 07:07:31 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 07:07:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13ff15c4-e401-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eZVY8BTne0gQp7pLdKvsQZwtzZHoQYUn9abIXqhSHbmD02TdeJ35iMVN8hPmLfswdyN3fLPI0zjTD6NS0a90GXvsd2GZNrv4HEKNx3GUPRpxQneaR2dcdImzc+6sb/7tI0LZp50hi57NBIa2c/76HsNvjp4xrNHe5nSfodADAjAjpFc6UOk/cHf1NGeKsXEvsxK5vawq4a8b3MMpMKspthd9OKAbDhRAJO/6RiVfpMd20zUev/eiKuzz5q4MjbA8siuH/V80uZofseoUEUD7qz1deZ8T59Owl5WhM353rG4I76D1pu8LqYHhGoEd1ax7i1m7ZwI4JKTIir+P7/mKOw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=u73J4pR9h6BZim13/FmvTniUy5L8IiACuXft2MAZqT8=;
 b=BevTrURPyQrZ895h3qGNRoedxCloGbOW227P/TAZ9meq+qi4Xd1UtbltxIAdeC+jW1nUILEE1cMGsUeQF8vpGSU8DYkISaVnbTq7HhxFwXYilD+yjs/Lg2cSc0y+4400dxYv4gjSHtVoO3G9x+Q0zKjcG+LdKj/8bR9o7m+PfPzvLfNbTyqu59GAu3T9JIi4GB4Q8t+CR+6UTVNn2id+sg3YFi4dF2+8+Bhb+GR8824zHwQq9n0GstbNa85czSKgO7DKRTsPRuOobnUMqqm1eX0O8a8i2i5SoxNbO8fvAj5FzoT7MDBj7OZD/VPYSTICSBhI1r/cahMqOp2x0jjXTA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u73J4pR9h6BZim13/FmvTniUy5L8IiACuXft2MAZqT8=;
 b=37sCwlCI40Yu+lWcKItHIMCEUIOLRBxlSHyN40DHLFWNPOOZZWtDNoir8FG42QL0eNawx5/LKMhvTWcrxsXG03FKkdQg3y1odRsdUV4FB4oPs1g2WedmcIySH0PW3sPNqcGM4azlTq4ko1z0pxhtKizG1OJPeWb7lWP638zl2/BMNb4ojkeyJf+r8wFNtNWIPBXisIkJb5T3wL65n4qJ8WeJfX0Yxcz0AwD8g6wM+hEvy6W3tSgWWSyfu7X7/ZDb4ekbI9xJiTnya0rAvzMnXypJ9MkVMjfi1iADOgMsaG/6mKMKg7CZqHGN1PQC38yKrFh5sWj2fpxBmkh5miro4Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e783d71f-cc0c-e235-28a8-7ec9ad63d41f@suse.com>
Date: Wed, 26 Apr 2023 09:07:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <188a01f0-a2d1-0f2d-4d01-61a259c790f1@suse.com>
 <AS8PR08MB7991F2DDC4C13F33390557EA92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <AS8PR08MB7991F2DDC4C13F33390557EA92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0193.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PR3PR04MB7340:EE_
X-MS-Office365-Filtering-Correlation-Id: 6cbb8de2-8d54-43c6-23d2-08db4624e6f6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6cbb8de2-8d54-43c6-23d2-08db4624e6f6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 07:07:31.2623
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 49r7xYR6lNQBx/0HXM5yrPoQFuaKM6emBb7AH1NlxlHR1Pm48HafJBcOa6wzjUme7gU5WkcxlIb4cPRHcziq6w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR04MB7340

On 26.04.2023 08:29, Henry Wang wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>>
>>> +        distance = dt_next_cell(1, &matrix);
>>
>> Upon second thought I checked what dt_next_cell() returns: You're silently
>> truncating here and then ...
>>
>>> +            /* Bi-directions are not set, set both */
>>> +            numa_set_distance(from, to, distance);
>>> +            numa_set_distance(to, from, distance);
>>
>> ... here again. Is that really the intention?
> 
> By hunting down the historical discussions I found that using dt_next_cell() is
> what Julien suggested 2 years ago in the RFC series [1]. Given the truncation
> here is for node id (from/to) and distance which I am pretty sure will not
> exceed 32-bit range, I think the silent truncation is safe.

That discussion is orthogonal; the previously used dt_read_number() is no
different in the regard I'm referring to.

> However I understand your point here, the silent truncation is not ideal, so
> I wonder if you have any suggestions to improve, do you think I should change
> these variables to u64 or maybe I need to do the explicit type cast or any
> better suggestions from you? Thanks!

So one thing I overlooked is that by passing 1 as the first argument, you
only request a 32-bit value. Hence there's no (silent) truncation then on
the dt_next_cell() uses. But the numa_set_distance() calls still truncate
to 8 bits. Adding explicit type casts won't help at all; truncation will
remain as silent as it was before. However, numa_set_distance()'s first
two arguments could easily become "unsigned int", so that its node-related
bounds checking takes care of all truncation issues. The
"distance" parameter already is unsigned int, and is already being bounds
checked against NUMA_NO_DISTANCE.
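
To make the suggestion concrete, here is a minimal standalone sketch
(MAX_NUMNODES, the map layout and the return conventions are assumptions
for illustration, not Xen's actual definitions): with the node ids taken
as unsigned int, an oversized value read from the device tree fails the
bounds check instead of being silently truncated to 8 bits, and the
distance is range-checked against NUMA_NO_DISTANCE as discussed above.

```c
#include <stdint.h>

/* Illustrative constants; the real values live in Xen's NUMA headers. */
#define MAX_NUMNODES        8
#define NUMA_NO_DISTANCE    0xFF  /* ACPI: 0xFF means unreachable */
#define NUMA_LOCAL_DISTANCE 10    /* distance of a node to itself */

static uint8_t node_distance_map[MAX_NUMNODES][MAX_NUMNODES];

/*
 * Taking the node ids as unsigned int (rather than a narrower 8-bit
 * nodeid_t) means an out-of-range id coming from the device tree is
 * rejected by the bounds check below instead of being truncated.
 */
int numa_set_distance(unsigned int from, unsigned int to,
                      unsigned int distance)
{
    if ( from >= MAX_NUMNODES || to >= MAX_NUMNODES )
        return -1;                           /* invalid node id */
    if ( distance < NUMA_LOCAL_DISTANCE || distance > NUMA_NO_DISTANCE )
        return -1;                           /* invalid distance */
    node_distance_map[from][to] = distance;
    return 0;
}

unsigned int __node_distance(unsigned int from, unsigned int to)
{
    unsigned int d;

    if ( from >= MAX_NUMNODES || to >= MAX_NUMNODES )
        return NUMA_NO_DISTANCE;
    d = node_distance_map[from][to];
    /* 0 means "not set" here; Xen pre-fills the map with NUMA_NO_DISTANCE. */
    return d ? d : NUMA_NO_DISTANCE;
}
```

With this shape, the dt_next_cell() caller can pass its 32-bit values
through unchanged and rely on numa_set_distance() to reject anything out
of range, rather than casting at each call site.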

Jan

> [1] https://lists.xenproject.org/archives/html/xen-devel/2021-08/msg01175.html
> 
> Kind regards,
> Henry



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 07:09:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 07:09:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526502.818293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZGp-0008Tx-C1; Wed, 26 Apr 2023 07:08:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526502.818293; Wed, 26 Apr 2023 07:08:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZGp-0008Tq-9H; Wed, 26 Apr 2023 07:08:55 +0000
Received: by outflank-mailman (input) for mailman id 526502;
 Wed, 26 Apr 2023 07:08:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vDOC=AR=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1prZGo-0008Td-2f
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 07:08:54 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 32f241c6-e401-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 09:08:52 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A3FC11FDCD;
 Wed, 26 Apr 2023 07:08:51 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7AAA0138F0;
 Wed, 26 Apr 2023 07:08:51 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id leyDHAPOSGTRbAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 26 Apr 2023 07:08:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32f241c6-e401-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682492931; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uo1R/xDtWSySdYzunGr1/3kyitjb7qoGsUF+7mCVkbM=;
	b=nakpPi+OsA4jqXGduY4hWxfYpV6DhFsNy0lOIEDdTi63t/ymwW5NhEpSwH9W4GUYsS1CJ8
	eFhoTJUDi79Be+JQhaS/7g5ONULkONJglUcfMsBMD8owD0hTH93CwOsNqBBryFR+mQDz2o
	63Kyx+7S3drij9tx27oWfj8L1NvWWUQ=
Message-ID: <cb57a654-a766-5354-a122-989f43b440d5@suse.com>
Date: Wed, 26 Apr 2023 09:08:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-5-jgross@suse.com>
 <e8003d2d-5557-f5d9-38ca-793c30637e61@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v4 04/13] tools/xenstore: add framework to commit
 accounting data on success only
In-Reply-To: <e8003d2d-5557-f5d9-38ca-793c30637e61@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------i2SxkbUfotlxBg6v2RwT8EVI"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------i2SxkbUfotlxBg6v2RwT8EVI
Content-Type: multipart/mixed; boundary="------------D0QHExCQktHfRdUz0UEuDZlt";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <cb57a654-a766-5354-a122-989f43b440d5@suse.com>
Subject: Re: [PATCH v4 04/13] tools/xenstore: add framework to commit
 accounting data on success only
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-5-jgross@suse.com>
 <e8003d2d-5557-f5d9-38ca-793c30637e61@xen.org>
In-Reply-To: <e8003d2d-5557-f5d9-38ca-793c30637e61@xen.org>

--------------D0QHExCQktHfRdUz0UEuDZlt
Content-Type: multipart/mixed; boundary="------------TpgdXAoJS5QWpZz20m9R3KN9"

--------------TpgdXAoJS5QWpZz20m9R3KN9
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 25.04.23 19:52, Julien Grall wrote:
> Hi Juergen,
> 
> On 05/04/2023 08:03, Juergen Gross wrote:
>> Instead of modifying accounting data and undo those modifications in
>> case of an error during further processing, add a framework for
>> collecting the needed changes and commit them only when the whole
>> operation has succeeded.
>>
>> This scheme can reuse large parts of the per transaction accounting.
>> The changed_domain handling can be reused, but the array size of the
>> accounting data should be possible to be different for both use cases.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V3:
>> - call acc_commit() earlier (Julien Grall)
>> - add assert() to acc_commit()
>> - use fixed sized acc array in struct changed_domain (Julien Grall)
>> ---
>>  tools/xenstore/xenstored_core.c   |  9 ++++--
>>  tools/xenstore/xenstored_core.h   |  3 ++
>>  tools/xenstore/xenstored_domain.c | 53 ++++++++++++++++++++++++++++++-
>>  tools/xenstore/xenstored_domain.h |  5 ++-
>>  4 files changed, 66 insertions(+), 4 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>> index 3ca68681e3..84335f5f3d 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -1023,6 +1023,9 @@ static void send_error(struct connection *conn, int error)
>>              break;
>>          }
>>      }
>> +
>> +    acc_drop(conn);
>> +
>>      send_reply(conn, XS_ERROR, xsd_errors[i].errstring,
>>                 strlen(xsd_errors[i].errstring) + 1);
>>  }
>> @@ -1034,6 +1037,9 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
>>      assert(type != XS_WATCH_EVENT);
>> +    conn->in = NULL;
> 
> AFAIU, you are setting conn->in to NULL in order to please..
> 
>> +    acc_commit(conn);
> 
> ... this call. However in case of an error like...
> 
>> +
>>      if ( len > XENSTORE_PAYLOAD_MAX ) {
>>          send_error(conn, E2BIG);
> 
> ... here, send_reply() will be called again. But the error will not be set
> because conn->in is NULL.
> 
> So I think you want to restore conn->in rewrite acc_commit(). The ordering would
> also deserve an explanation in a comment.

Just to make sure I understand you correctly (I have some difficulties
parsing "So I think you want to restore conn->in rewrite acc_commit()."
completely):

You are suggesting to move setting conn->in to NULL to acc_commit() and
to restore it before returning? I'm fine with that.

> 
>>          return;
>> @@ -1059,8 +1065,6 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
>>          }
>>      }
>> -    conn->in = NULL;
>> -
>>      /* Update relevant header fields and fill in the message body. */
>>      bdata->hdr.msg.type = type;
>>      bdata->hdr.msg.len = len;
>> @@ -2195,6 +2199,7 @@ struct connection *new_connection(const struct interface_funcs *funcs)
>>      new->is_stalled = false;
>>      new->transaction_started = 0;
>>      INIT_LIST_HEAD(&new->out_list);
>> +    INIT_LIST_HEAD(&new->acc_list);
>>      INIT_LIST_HEAD(&new->ref_list);
>>      INIT_LIST_HEAD(&new->watches);
>>      INIT_LIST_HEAD(&new->transaction_list);
>> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
>> index c59b06551f..1f811f38cb 100644
>> --- a/tools/xenstore/xenstored_core.h
>> +++ b/tools/xenstore/xenstored_core.h
>> @@ -139,6 +139,9 @@ struct connection
>>      struct list_head out_list;
>>      uint64_t timeout_msec;
>> +    /* Not yet committed accounting data (valid if in != NULL). */
>> +    struct list_head acc_list;
>> +
>>      /* Referenced requests no longer pending. */
>>      struct list_head ref_list;
>> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
>> index 30fb9acec6..144cbafb73 100644
>> --- a/tools/xenstore/xenstored_domain.c
>> +++ b/tools/xenstore/xenstored_domain.c
>> @@ -91,6 +91,8 @@ struct domain
>>      bool wrl_delay_logged;
>>  };
>> +#define ACC_CHD_N (ACC_TR_N < ACC_REQ_N ? ACC_REQ_N : ACC_TR_N)
> 
> Both ACC_TR_N and ACC_REQ_N are fixed. Can you explain why we need this magic?
> 
> Related, wouldn't it be better to define it in the enum?

I can do that, of course. I just didn't want to make the enum even more
complex. :-)

But with a comment this should be okay IMO.


Juergen
--------------TpgdXAoJS5QWpZz20m9R3KN9
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------TpgdXAoJS5QWpZz20m9R3KN9--

--------------D0QHExCQktHfRdUz0UEuDZlt--

--------------i2SxkbUfotlxBg6v2RwT8EVI
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRIzgMFAwAAAAAACgkQsN6d1ii/Ey+6
Qgf+IouX8kt/GqNbQiCQ+p7FNpmrElWIqMeKA0i/TooBzhHmRCPkUE+k+1W/nZ3LZmra9NXeke2C
jdQBomqO7vIrdVDX0E4zuozijimkZbtvKnOZ9lvzY6cSa4nbGga9bEIc4HIwMBAXburd7JyvntJf
7giCw78rNxutWhkoonPqniE407PYGnc3wWIy2DPN+ytGRqX+JMk8L7UGcPXkt/cvFEYlGaaQBQUt
fEbVIKfK6wSKxJyfwKPc0PxxY1mawIKoR2iUdJj4U/Rg8FdPjEdjHlLT26+UyxMIJaTN99ApnZIn
VMGzGnSogawUzQ2pChnPBvV2/hTD7EcZg+1goyPA1A==
=4gde
-----END PGP SIGNATURE-----

--------------i2SxkbUfotlxBg6v2RwT8EVI--


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 07:09:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 07:09:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526507.818304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZHF-0000bb-PU; Wed, 26 Apr 2023 07:09:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526507.818304; Wed, 26 Apr 2023 07:09:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZHF-0000bL-LL; Wed, 26 Apr 2023 07:09:21 +0000
Received: by outflank-mailman (input) for mailman id 526507;
 Wed, 26 Apr 2023 07:09:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNUB=AR=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prZHE-0008Td-D5
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 07:09:20 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2060e.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::60e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 423e1730-e401-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 09:09:17 +0200 (CEST)
Received: from AM6P191CA0002.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:8b::15)
 by AS4PR08MB8166.eurprd08.prod.outlook.com (2603:10a6:20b:58d::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Wed, 26 Apr
 2023 07:09:12 +0000
Received: from AM7EUR03FT015.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8b:cafe::9a) by AM6P191CA0002.outlook.office365.com
 (2603:10a6:209:8b::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21 via Frontend
 Transport; Wed, 26 Apr 2023 07:09:12 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT015.mail.protection.outlook.com (100.127.140.173) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.21 via Frontend Transport; Wed, 26 Apr 2023 07:09:12 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Wed, 26 Apr 2023 07:09:11 +0000
Received: from 74e3fce50081.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1058206B-7833-4093-BD0E-DBAA7E87E022.1; 
 Wed, 26 Apr 2023 07:09:01 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 74e3fce50081.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 26 Apr 2023 07:09:01 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB10361.eurprd08.prod.outlook.com (2603:10a6:20b:56d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34; Wed, 26 Apr
 2023 07:08:56 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 07:08:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 423e1730-e401-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0GUU/m+xsfVYzT9TWT2PAJ1kjugLJkqjWPwsCs1aqwg=;
 b=hS4rh9OWjW7bxwtxiiQLmMR5qVdKVxrIvhkQqn8yWGK5t8BLgITKyyViy+24xsutJp1n/5rfKtRHkSulnw7WBEFGhWj7SiuM9Lf+e0PzrvypUeqGt8rU5uoRWi5p22R6FZLghoQGSq8BbOP67R1wXOSmaoKuAs/Fg7F8/4CAWX8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
CC: Wei Chen <Wei.Chen@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Thread-Topic: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
 tree NUMA distance map
Thread-Index: AQHZd0ulMpqFICe/LUiDVq6+aDsW9687sveAgAFTkCCAACNagIAAAPgg
Date: Wed, 26 Apr 2023 07:08:56 +0000
Message-ID:
 <AS8PR08MB7991DCE0DFC850FEA920BF8C92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <e03bbb52-1a19-7d18-4abe-75bbef8a0aee@suse.com>
 <AS8PR08MB799117EDD6BAB892CAB870A192659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <13635377-e296-370d-121b-5b617dc210bc@suse.com>
In-Reply-To: <13635377-e296-370d-121b-5b617dc210bc@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: CA862ADAAD63DD47A292997B3D788A4E.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB10361:EE_|AM7EUR03FT015:EE_|AS4PR08MB8166:EE_
X-MS-Office365-Filtering-Correlation-Id: 79533882-192a-44fd-04ec-08db4625232e
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB10361
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT015.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3511d7f4-0ce7-48ca-e59c-08db462519a5
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 07:09:12.0384
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 79533882-192a-44fd-04ec-08db4625232e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT015.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8166

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
> tree NUMA distance map
> 
> On 26.04.2023 07:33, Henry Wang wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >>
> >>> +        /* Get opposite way distance */
> >>> +        opposite = __node_distance(to, from);
> >>> +        /* The default value in node_distance_map is NUMA_NO_DISTANCE */
> >>> +        if ( opposite == NUMA_NO_DISTANCE )
> >>
> >> And the matrix you're reading from can't hold NUMA_NO_DISTANCE entries?
> >> I ask because you don't check this above; you only check against
> >> NUMA_LOCAL_DISTANCE.
> >
> > My understanding for the purpose of this part of code is to check if the
> > opposite way distance has already been set, so we need to compare the
> > opposite way distance with the default value NUMA_NO_DISTANCE here.
> >
> > Back to your question, I can see your point of the question. However I
> > don't think NUMA_NO_DISTANCE is a valid value to describe the node
> > distance in the device tree. This is because I hunted down the previous
> > discussions and found [2] about we should try to keep consistent between
> > the value used in device tree and ACPI tables. From the ACPI spec, 0xFF,
> > i.e. NUMA_NO_DISTANCE means unreachable.
> > I think this is also the reason why NUMA_NO_DISTANCE can be used as the
> > default value of the distance map, otherwise we won't have any value to use.
> 
> The [2] link you provided discusses NUMA_LOCAL_DISTANCE.

I inferred the discussion as "we should try to keep consistent between the value
used in device tree and ACPI tables". Maybe my inference is wrong.

> Looking at
> Linux'es Documentation/devicetree/numa.txt, there's no mention of an
> upper bound on the distance values. It only says that on the diagonal
> entries should be 10 (i.e. matching ACPI, without really saying so).

I agree that the NUMA device tree binding is a little bit vague. So I cannot
say the case that you provided is not valid. I would like to ask Arm maintainers
(putting them into To:) opinion on this as I think I am not the one to decide the
expected behavior on Arm.

Bertrand/Julien/Stefano: Would you please kindly share your opinion on which
value should be used as the default value of the node distance map? Do you
think reusing the "unreachable" distance, i.e. 0xFF, as the default node distance
is acceptable here? Thanks!

Kind regards,
Henry

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 07:12:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 07:12:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526512.818314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZKC-00029G-6X; Wed, 26 Apr 2023 07:12:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526512.818314; Wed, 26 Apr 2023 07:12:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZKC-000299-3p; Wed, 26 Apr 2023 07:12:24 +0000
Received: by outflank-mailman (input) for mailman id 526512;
 Wed, 26 Apr 2023 07:12:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prZKB-000293-HB
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 07:12:23 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20623.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::623])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b03f10a5-e401-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 09:12:22 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9661.eurprd04.prod.outlook.com (2603:10a6:102:273::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Wed, 26 Apr
 2023 07:12:20 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 07:12:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b03f10a5-e401-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dDeCbMVupOIu9yGJzLKIYQqsWiN8TdenrQRnfrrdSI6A8QBAx2LMbDXiGQQgHUxzBqnzyYy3t8WmIYhCPnr9Ieqjzzrkw7r6qU2GGm7aX4/UWbtQa/JB4fwwjBkjlRZs+7aOwmPE2daOkvkCFeYT/rHRPJ0QtUCnqmQKrnC8tgCWynDJH0EoHg00QyIileA9K1zf0FjqftaHlZkOC0Cs3vClFfyEmksoviUSlU9E7gksDcjulS0ZfAnn/kwphWRJYsHRvgMKKFOTHEbMwUikAQBa4DBTPHATr1Z53Jc3Hguv2b/GsdaK5YKqrqEhJgpQ5wH4FAi3KjoKnlZrhGcfig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yMlC3koo35lzX30uCGzeg4nhB5M3NCgIdTaFQJUOAK0=;
 b=EyO69XKNFp0Y3B3dmlGTDP9Ghlo1mkyOdjKUz+92jGDy+IfLt6CwIemsqvYSHEwO0KOl9ylYPEA0W7kVQ86rCGY3tZv0wByAA8BxrFnYrhAKU7k9iYSCX0VTqJbkbjpB+5tU9y9c4KoTQKyTHltyR77NmrBbv+JymMnMHONON6qwYVt3152ZIsWHZlvsu12UoZSMljlr+ovm2vqQpShM2ghtWDFJ72hs+L3GH71raW4viHOaEvojA/Y3ezE72XJmkEfdN4iHZffgxtRjEDnY3GzOypsdv8A4tEHlT3PANVmopMeE1Ro6lTQ9TJHsZhqPStBnCD2ocd+fC2ybbPL/Gg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yMlC3koo35lzX30uCGzeg4nhB5M3NCgIdTaFQJUOAK0=;
 b=wMCWFKRhYohyWeU6I8sYinLWu3Qsa3KSH8gURYIDgIsaCQ/A1KxOpFg5hhSDcG35gJD5NY2PhC/QUWKBuo3WnVNJR9ue3sXEVShGfh+PCQwVeoYWpCPubQV9EqTsVMaLpWvfo7Eu5/23K+nECuPyHM40DJj/4tV7YZ8igXhYFFyYPghzyqTcyCFHZnFyVwb3bWdh0v8lMxqimI8CVC2kv2z/y5EfZ33vfo3uPvtp0GLt5zEpir/6CyGLS2Zadi+EcwSw57nIzTWiw9vXNIgi3qFHglz2R2CTyZF0E8oP1K5mkr40QecbM0/nsHBVOEeZcIADx5TeRB5yD+ZLGeKhsA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c195ef53-1151-1fb2-0cf9-f6f47d20b75e@suse.com>
Date: Wed, 26 Apr 2023 09:12:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <e03bbb52-1a19-7d18-4abe-75bbef8a0aee@suse.com>
 <AS8PR08MB799117EDD6BAB892CAB870A192659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <13635377-e296-370d-121b-5b617dc210bc@suse.com>
 <AS8PR08MB7991DCE0DFC850FEA920BF8C92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <AS8PR08MB7991DCE0DFC850FEA920BF8C92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0049.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB9661:EE_
X-MS-Office365-Filtering-Correlation-Id: d799c1ca-5b20-469c-2753-08db4625937f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	cF4Zq3yM1P2xuUTcDuBUytSpGDfVqb5Yx5Qe/ZIQXkYs5q0vIcKO3JWGIoOX6/IGjHnClLKAVrsUcyIjU/BK9fyTmdodB65WVqO91aZlzm9flEYhm/5UqlYhg3+n8wndgk/LngwirtA/l9QYA4ynZjU0cAm6/4AAStLBHEriRla3ZZ7c6ZWJe5OHSBehGJV2AXSfZUCkYLqwFRgGWrQ0EULWUSLzlYkym/3Mh0asYJ4EXyS2DgNB84y12yFJYGxwa7AqExCPudURTXLzK2B+6zkSBptAD9Nzy9nGyL8XeA1T7o9jbFpEJQyineC+DrrxzXTExLzk85VrQraDfyKWfLmdebZ174VjFjYpN4A3Qq6ngmLTBPTNoPIZV8Br3vpSVCfr0kM6hpz5NImqAosvZbiDF0OQsCkJ+Ph8aE+vBEju6Vkz4Crc4CC/0/iiw3HaXrTPAGLadTp3n2jEJux3yC843kyxsYt//C8l0RfRxtFnVL1J4/1Cq7alxihAzAzKTVBWCKhoQ5TK2rzivsdHAziSr8/lZgFC7bwVtTbB7ijI360uPC3j4glbrJ4eaabDNs6QyGi5Ne2Q2byXCOPFTJpjBI/SENk0IEQ8S1N3nA4KmrkGVqC58BZUn5/WSqvOMROiBUt7eV69jny5gYtiTg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(136003)(366004)(39860400002)(396003)(346002)(451199021)(478600001)(38100700002)(6486002)(26005)(6512007)(186003)(53546011)(54906003)(83380400001)(31686004)(6506007)(2616005)(5660300002)(8936002)(8676002)(41300700001)(2906002)(316002)(4326008)(66556008)(66476007)(6916009)(66946007)(31696002)(86362001)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?L21aSllrVkFUUURMbk9LTDJkZFlqcG1xOHdUamlEUkJmOEVuL2k0TmJaNjEz?=
 =?utf-8?B?aUtNS09xd3ZWK0NPUThOeGdrR1FUcUIrekREb3dseC9ZcHlMYmpGRDNvYStL?=
 =?utf-8?B?WVpvcHBPRVFLaDhHb3VxR2pCb0txWVI4RzdXWHozcVdWeFNXa05MUGxaeUpV?=
 =?utf-8?B?MytxRzNVZ2NJTmJ4akJ2a1ZhUWN0bHk1b1k3ZHZvaU1VOXV2a2xjeERkWGNu?=
 =?utf-8?B?R1NqbDhnVWs5QUJ1YS9xOW1VRGJMUkdBRTFLWEZaaGFHMlNCVWs2MFRLTXdO?=
 =?utf-8?B?Yjl5dUcvK2xCMFB4WFZNa2hPN2FtcEM0bmZzZGNyUEszbzE5MUgxc2FRMGxE?=
 =?utf-8?B?VUxRbEZ0OWgrY0k1dGNwRXArcGVhMW92NEVmdGQ3VlN4QlB3YWFJWGIyN0Yr?=
 =?utf-8?B?eWU0MCtNMUpJcS9mVkdUd3dPSExwcmlrUG5KNGFuQVd0UkdWeVBMdjloaXBp?=
 =?utf-8?B?MDNRMG1US2lQWUJlSllhMmZZQWF5VWhlbjlOZm9URFBtZC8wZENTb3V4bEJ1?=
 =?utf-8?B?aWFQWCtkZ1FGN3l2OGtPWERreXErd1lOVjVWQXNGYjVlWkVrVmFUbWwyZjUz?=
 =?utf-8?B?a3JpM2NZcmtFSFRadHMwd3d6ZlVyaGJFaVJDem1wMWFoSTY4YmVzZ0sxbUto?=
 =?utf-8?B?aG9ZZEw2OXhWSFJUSy9RTWZsSzJsbVorZmJMeWM1VFNabDhyUTlkRHlZemxI?=
 =?utf-8?B?RkZZSU5DSFlJMXRBMUpFZENKajVDcHZVYzd3US9tODAvN1dLcFNxQTUrWVVT?=
 =?utf-8?B?MnhvcklmbUNya1BtUVpZRy9CcTN5b3BlelRZbDJON2l2Z1drZW9EWmVEWDZr?=
 =?utf-8?B?WUJFSU1kNitSMkFoeHoxS01vNkMvVGxrbWFyVWM4SytkbFd1QlJFVFhGTUNQ?=
 =?utf-8?B?NEdWZG9QeFdKQ0w5bHc5cjFvcXYyRWI4ZllQRG1qM0F2RjZRUGNTcEhQQkMx?=
 =?utf-8?B?cUFFbGxhMDRTSnh3QVlTS1lyMW5XZ21MVkRYdW5CcW4zcHVRSHY3SHB2UU9K?=
 =?utf-8?B?cklnVFErZ3pFZFEva3R3YWtJT2dYdU5XZmJocEZSenpZT0Q2aTNiaVFUbllx?=
 =?utf-8?B?aVFKQWwzalZ3UEU5MGo0ajRQME55UnlENlhWYmNPdnNnVXdSVHgxeWxxYjJp?=
 =?utf-8?B?U05wTEhVYnM4QXZQR0JTRU1UVjhtb1NTVFJVbG91TGVYWlpBOE5CeUluOSsv?=
 =?utf-8?B?bFhJUk5iTGV0amZaMHhXRFB0ZzhXbzVLeW51ZmF0TW5STHUvSmZzMG1teVV5?=
 =?utf-8?B?WWRadk4yUTUraDk1ZS9ET0I5MVg4Q2dnVi9rb1l6MmUrTzh5RkxPdzd2Q3ov?=
 =?utf-8?B?T0x3STRLK0xaZXcwVzR0M0RDS0h5aFNpdTVtMno5Ui9mNk41c2lZbFZUQUhw?=
 =?utf-8?B?MmlpcGFVRGNIS3JkVWY0Tm9jQzFZRWw0SDVrN3JCSUo3MG5HUjk4SnUxS0JB?=
 =?utf-8?B?VEJwOGhrNkdaQlR6eGk2M2VQc0lsR1FEeUpiYzFsVEZzZlA4bG93aG9pOVdw?=
 =?utf-8?B?R1FCSTdEMm5BUjNid1VxZVNsdGQxV3hoQUV4YzcyWVpJbUJneUxlZzI4Mjd6?=
 =?utf-8?B?OVY0b2FzRlNocE1ncHJHdGlXeWJaT3BtNHc5STlubWtqN0hES1JMZnhUTlp0?=
 =?utf-8?B?L3QvdS9QdE1Dc1pidHJWS0pGRHlkbG10NnBWYmdDN1Juemp4QlBpMHBUUjZW?=
 =?utf-8?B?Tko2Y0Vac1lFSGhKM0E3NFp2T0E2OURxUVl5V2J4SHlYL09Vb1BjSHFSejZM?=
 =?utf-8?B?d21lRTJlYjJWRWR1SkV4NlpkdUY1cUtHR0hlZ2czemtxL2pMYmgva3I5bmlH?=
 =?utf-8?B?RCtFYlN6bGZDd3FqZU1ENlVhdDRUYytKQ2VtOXJHYXlXQXkrb2s0Q2FHbFNs?=
 =?utf-8?B?Y1R1SjYwSHprbzEyQUE5d3h2VDRXaERFUmFmeXpjMFRaZzhoT3VlNlZmb3Ni?=
 =?utf-8?B?YnRzcEtRRkpCaFExazk5ZzgwN001QXVrWGppdG9PQVQ2VjV0aUxaM1BGRkJJ?=
 =?utf-8?B?QTZNTnJTQkhCMG8xSFMwaHpZd0lHRnpuVWRsNHI3eEgzVDRBME4yTnREZ1BO?=
 =?utf-8?B?TFNkZDlCUGd6dnAvQnFwdG9qelpnTEM3YlB3QXBsdXdlUjNuWFVpaUd2eXBP?=
 =?utf-8?Q?7gEClZ7KHizVKiv7eJKRfDHw+?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d799c1ca-5b20-469c-2753-08db4625937f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 07:12:20.6893
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6lgakOPIwxd3r9KNXprPR8dCchkUb7ZbivZS4/SNq5xJcsPK2hsQNmf4o7hTS8rNaAa/+Fl+qWaTkQi9wja6sw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9661

On 26.04.2023 09:08, Henry Wang wrote:
> Hi Jan,
> 
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
>> tree NUMA distance map
>>
>> On 26.04.2023 07:33, Henry Wang wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>
>>>>> +        /* Get opposite way distance */
>>>>> +        opposite = __node_distance(to, from);
>>>>> +        /* The default value in node_distance_map is NUMA_NO_DISTANCE */
>>>>> +        if ( opposite == NUMA_NO_DISTANCE )
>>>>
>>>> And the matrix you're reading from can't hold NUMA_NO_DISTANCE entries?
>>>> I ask because you don't check this above; you only check against
>>>> NUMA_LOCAL_DISTANCE.
>>>
>>> My understanding of the purpose of this part of the code is to check whether
>>> the opposite-way distance has already been set, so we need to compare the
>>> opposite-way distance with the default value NUMA_NO_DISTANCE here.
>>>
>>> Back to your question: I can see the point you are raising. However, I don't
>>> think NUMA_NO_DISTANCE is a valid value to describe a node distance in the
>>> device tree. This is because I hunted down the previous discussions and
>>> found [2], about keeping the values used in the device tree consistent with
>>> those in the ACPI tables. Per the ACPI spec, 0xFF, i.e. NUMA_NO_DISTANCE,
>>> means unreachable. I think this is also the reason why NUMA_NO_DISTANCE can
>>> be used as the default value of the distance map; otherwise we wouldn't have
>>> any value to use.
>>
>> The [2] link you provided discusses NUMA_LOCAL_DISTANCE.
> 
> I inferred from the discussion that "we should try to keep the values used in
> the device tree and the ACPI tables consistent". Maybe my inference is wrong.
> 
>> Looking at
>> Linux's Documentation/devicetree/numa.txt, there's no mention of an
>> upper bound on the distance values. It only says that the diagonal
>> entries should be 10 (i.e. matching ACPI, without really saying so).
> 
> I agree that the NUMA device tree binding is a little vague, so I cannot
> say the case you describe is invalid. I would like to ask the Arm maintainers
> (putting them in To:) for their opinion on this, as I don't think I am the one
> to decide the expected behavior on Arm.
> 
> Bertrand/Julien/Stefano: Would you please kindly share your opinion on which
> value should be used as the default value of the node distance map? Do you
> think reusing the "unreachable" distance, i.e. 0xFF, as the default node distance
> is acceptable here? Thanks!

My suggestion would be, rather than rejecting values >= 0xff, to saturate
at 0xfe while keeping 0xff for NUMA_NO_DISTANCE (and overall keeping things
consistent with ACPI).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 07:16:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 07:16:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526515.818323 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZNw-0002kh-Ml; Wed, 26 Apr 2023 07:16:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526515.818323; Wed, 26 Apr 2023 07:16:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZNw-0002ka-KG; Wed, 26 Apr 2023 07:16:16 +0000
Received: by outflank-mailman (input) for mailman id 526515;
 Wed, 26 Apr 2023 07:16:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HkN=AR=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1prZNv-0002kU-4q
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 07:16:15 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.218]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 39e7f914-e402-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 09:16:13 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz3Q7FvGzX
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 26 Apr 2023 09:15:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39e7f914-e402-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1682493364; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=sty/NXQtydWIHacBHDZ7T0YHJt8CxG07FXnQe610sL6e6F8WViWPdf3xSfUFil5+rA
    5YuSYcqlARualv46S0bQj9jqNtwkf2rJwsJiGhGpoAEjqWgJ3Blh6OU0jB7z3WdPcfVv
    s0s6exfVq5BxzKZmahsmXnD9GKS9BW8hvYVx6DYSbDL5zNKK1G8YM5dnGWWkh9+j3zEG
    eFm2Vmx2OH3gkBQ+71PHAbdIAFRL1HrKLz4tztiNOANfxEvxZBLRqeWC6ZGZN5ZirIhJ
    h5b3sA+f8vnUSs+xTt9cQWI9k3T5a5vCVeFS29/1uo8+V/mWzYA38rvYYwbLN1v0gcEb
    SYtQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1682493364;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=WNKNiox5TjB0a6GI59z1cB1C69Xs7WrfUseimkr12qw=;
    b=ZbMohsiIhdGyYSbTey92ZVnLtlQu+tMw3O5NUh1OlOhJEKRVBi/n9aMlsUPTcfgcMi
    kXUG/rAPpG3OPK1Jm0gA4662dC1m3ZuRC5MViw3xydCFYSj6QUNSh8ZblljmRB7TUToR
    0hxvBy26bZnhA7ORFHQX1PV78XhLqEg87JyANR7toCrrRnnsBbKXZl6HuClw9jXJhlpi
    SbNDTSx+sV2J0BzIPQsYFcLPK5qvR82Gp81vGRIuKXcvtounAkydgQwZEwkboLFm011L
    ynwBlXIuBoJ6CZmocwWxnNcki95U7P+8BOYFd2eqrsC0eDS6Jw/B0R6efn5P+u1U+4/h
    2wKg==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1682493364;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=WNKNiox5TjB0a6GI59z1cB1C69Xs7WrfUseimkr12qw=;
    b=di/ShdYwRXkNAxeFlJqhQYM5u63seX1DaPY6aOknDhuQIEyUuqd94P9eifDT9oTkn/
    r6QwlqseskYlWQI7aHMD8t5oDtNOX5wQ4w5OMDHrU1pkOuXD/x9rTJFX19PBuLDQCFjG
    p6OJ/pwGjeN5RysnmyqXVHy7nnktVQvMuzOeeF136ZAd0xLPyW31Ta3RxheD/1Y7nTPE
    9hw0nemRrCVRCwzL7jWHujUZwM/vUmw0nMqzKi8sfy1T/YadCbhsfNAgvdQb4LDYv/1f
    2I25+JkP3HcLd91QVdF0+KeWgoLjZnew+wqCgrz+UdZeM1/7AFAgDwkbvOSY8bE5o7Ji
    HevQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1682493357;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=WNKNiox5TjB0a6GI59z1cB1C69Xs7WrfUseimkr12qw=;
    b=yHwG1YA0RurrIS9P3IGfyxOXoixovzk09uXtuXFk9ZpfY6+lFQsGBnddew3buDMPUO
    uQbPxaGk/NxkcRT/+4Bg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR5BBKIYx7sXVVhU9+brASRK3ZldJTnR7IDHecOJA=="
Date: Wed, 26 Apr 2023 09:15:33 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>
Subject: Re: [PATCH] Fix install.sh for systemd
Message-ID: <20230426091533.68324d8d.olaf@aepfle.de>
In-Reply-To: <20230425194622.114869-1-jandryuk@gmail.com>
References: <20230425194622.114869-1-jandryuk@gmail.com>
X-Mailer: Claws Mail 20220819T065813.516423bc hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/Qo/dB9N+HdOWup4hXo=c7_I";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/Qo/dB9N+HdOWup4hXo=c7_I
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Tue, 25 Apr 2023 15:46:20 -0400 Jason Andryuk <jandryuk@gmail.com>:

> Skip populating /var/run/xen when systemd is being used.

It was wrong to do it like that from day one.
Such a directory has to be populated on demand at runtime.
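For reference, the systemd-native way to create such a runtime directory on demand is a tmpfiles.d fragment; the exact path, mode, and ownership below are illustrative, not necessarily what the Xen install ships:

```
# Illustrative tmpfiles.d fragment, e.g. /usr/lib/tmpfiles.d/xen.conf:
# create /run/xen at boot (and on "systemd-tmpfiles --create") if missing.
d /run/xen 0700 root root -
```

systemd-tmpfiles then creates the directory at every boot, which is the point: /run is a tmpfs, so anything install.sh puts there is gone after a reboot anyway.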


Olaf

--Sig_/Qo/dB9N+HdOWup4hXo=c7_I
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRIz5UACgkQ86SN7mm1
DoB0DA//a6O5l4lWwaoCy4dQScO2afmOK/sxgTUdiq3ufD+KoLDFn4z1OUjQpTrg
oGPHBO78JUcgdK9sxn8DsH+SOoD/k7JZ1nQJ3WEGl2PFndOq+aUM+jXtwyl1QvR8
6FJ7YoxIsFEQaUOYLlZJyDh/cbTVjkQb82hft1tEsL3V6+zc1e3SuyvwEyqL/Fj3
BpxUR+m32V5HTcoPomWxb5J04CWB4w1qxkvdfQ6FIrdIHJ+prNow7yTCqkn6qG7X
ux/6xmbiiYyH56uE0mWBhDx/QXz0MO+MFxMdYxE41PStWT2cXO3sY8/ZGNErkFS1
lBFvKalqbmIQ/s8r9+PpglQ6iMFzP4/rBJdf4daLN5zfL/dwziYZ9JLYV24yAra9
6eFb5ABMbsj5uIGy2UDm8fYE0wu/h4vsNMMUM4GeiGElO48AKK8AQ0MIRnW/c1QQ
LYJdsojKJuuhZMILEePGPGxiygtADGoeiZnyGkNscdiRARqPDJFv8/jKwH7+EHes
1O+/++8q0uLzaCP5IlL0U49PfOfM17FVPwb2mipOMPStOzV7kT+wvBI1+8VA5EWg
yqq2fZNMLaX0gkV7ya874vE8rEHhxyb10fOo3fUklN+149OjAbUrhYNxkw3mBheO
sKP5dFpfng+dKNPs32O/7cKLgd9oQgz1ejezNRmI1SYFlfVGtzk=
=WSB0
-----END PGP SIGNATURE-----

--Sig_/Qo/dB9N+HdOWup4hXo=c7_I--


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 07:20:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 07:20:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526522.818334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZRV-0003Ng-9b; Wed, 26 Apr 2023 07:19:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526522.818334; Wed, 26 Apr 2023 07:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZRV-0003NZ-6p; Wed, 26 Apr 2023 07:19:57 +0000
Received: by outflank-mailman (input) for mailman id 526522;
 Wed, 26 Apr 2023 07:19:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vDOC=AR=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1prZRT-0003NT-VH
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 07:19:55 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bdf1c4ba-e402-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 09:19:54 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 54EE521A0D;
 Wed, 26 Apr 2023 07:19:54 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 13C88138F0;
 Wed, 26 Apr 2023 07:19:54 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id hKEGA5rQSGRecgAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 26 Apr 2023 07:19:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdf1c4ba-e402-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682493594; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=+PlWFiHV5a4YliXjXOzC7vj4vlQL5aM6zw9+kkyleqY=;
	b=QDwa69+tZc+ndop6WVrqDaweyUtlg7SSFbWt0sVBGaYdJgQUUXNyAxEyyRAoNJlmU80OSx
	Orr4yJnI/qZUsJJZcqPk0iVxwpFbh+zwUetWL8xW3htevds4opGgTvP8MyUMgqdbmQ0JGh
	ZItGQ0WAK15+/WVfo8bILZK+io0XrhU=
Message-ID: <6807cae6-16e1-b041-5492-15eda6732275@suse.com>
Date: Wed, 26 Apr 2023 09:19:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230405070349.25293-1-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v4 00/13] tools/xenstore: rework internal accounting
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------N2AiyT4Sy18ABM1cQdpB07nL"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------N2AiyT4Sy18ABM1cQdpB07nL
Content-Type: multipart/mixed; boundary="------------YiLzASuUI6vzbXFOcs2XHep2";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <6807cae6-16e1-b041-5492-15eda6732275@suse.com>
Subject: Re: [PATCH v4 00/13] tools/xenstore: rework internal accounting
References: <20230405070349.25293-1-jgross@suse.com>
In-Reply-To: <20230405070349.25293-1-jgross@suse.com>

--------------YiLzASuUI6vzbXFOcs2XHep2
Content-Type: multipart/mixed; boundary="------------00bIc8B1PT0vCMA93olUOaDm"

--------------00bIc8B1PT0vCMA93olUOaDm
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDUuMDQuMjMgMDk6MDMsIEp1ZXJnZW4gR3Jvc3Mgd3JvdGU6DQo+IFRoaXMgc2VyaWVz
IHJld29ya3MgdGhlIFhlbnN0b3JlIGludGVybmFsIGFjY291bnRpbmcgdG8gdXNlIGEgdW5p
Zm9ybQ0KPiBnZW5lcmljIGZyYW1ld29yay4gSXQgaXMgYWRkaW5nIHNvbWUgYWRkaXRpb25h
bCB1c2VmdWwgZGlhZ25vc3RpYw0KPiBpbmZvcm1hdGlvbiwgbGlrZSBhY2NvdW50aW5nIHRy
YWNlIGFuZCBtYXguIHBlci1kb21haW4gYW5kIGdsb2JhbCBxdW90YQ0KPiB2YWx1ZXMgc2Vl
bi4NCj4gDQo+IENoYW5nZXMgaW4gVjI6DQo+IC0gYWRkZWQgcGF0Y2ggMSAobGVmdG92ZXIg
ZnJvbSBwcmV2aW91cyBzZXJpZXMpDQo+IC0gcmViYXNlDQo+IA0KPiBDaGFuZ2VzIGluIFYz
Og0KPiAtIGFkZHJlc3NlZCBjb21tZW50cw0KPiANCj4gQ2hhbmdlcyBpbiBWNDoNCj4gLSBm
aXhlZCBwYXRjaCAzDQoNCkFub3RoZXIgdGhvdWdodCBmb3IgdGhpcyBzZXJpZXMgYW5kIGZv
bGxvd3VwIG9uZToNCg0KRG8gd2Ugd2FudCB0byBrZWVwIGN1cnJlbnQgY29kaW5nIHN0eWxl
IGluIHRvb2xzL3hlbnN0b3JlIChiYXNpY2FsbHkNCkxpbnV4IGtlcm5lbCBzdHlsZSksIG9y
IGRvIHdlIHdhbnQgdG8gc3dpdGNoIHRvIFhlbiBzdHlsZSBpbnN0ZWFkPw0KDQpJZiBhIHN3
aXRjaCB0byBYZW4gc3R5bGUgaXMgcHJlZmVycmVkIChJIGRvIHByZWZlciB0aGF0IHN3aXRj
aCksIEknZA0KbGlrZSB0byBzdWdnZXN0IHRoYXQgSSBkbyBhIHJld29yayBvZiB0aGlzIHNl
cmllcyBhbmQgdGhlIGZvbGxvd3VwIG9uZQ0KdG8gdXNlIHRoZSBYZW4gc3R5bGUgZm9yIG5l
dyBvciBtb3ZlZCBmdW5jdGlvbnMuDQoNCkEgbW9yZSByYWRpY2FsIGFwcHJvYWNoIHdvdWxk
IGJlIHRvIGRvIGEgbGFyZ2Ugc3R5bGUgc3dpdGNoIHNlcmllcw0KYWZ0ZXIgdGhlIHR3byBz
ZXJpZXMsIGJ1dCBJJ20gaGVzaXRhbnQgYXMgdGhpcyB3b3VsZCBhZmZlY3QgYmFja3BvcnRz
DQpyYXRoZXIgYmFkbHkuDQoNCg0KSnVlcmdlbg0KDQo=
--------------00bIc8B1PT0vCMA93olUOaDm
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------00bIc8B1PT0vCMA93olUOaDm--

--------------YiLzASuUI6vzbXFOcs2XHep2--

--------------N2AiyT4Sy18ABM1cQdpB07nL
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRI0JkFAwAAAAAACgkQsN6d1ii/Ey+C
qgf/SshbV+8/GfftbZLMrnTCF0qihI8HyYQxqcVL0/SXGLr+Tma1Eyd95hRt97GJG6B81obHH4++
sF11mSzOEVzWlIYNTwyc33Z2Rqnk44QwmvXQgrIxs4qB2SCB5i6gY3T91Nzq8+fB0s9UpFuJLPmI
TCN0aFEBNualGZ3Co3JcO9VYtzi7zlykdhGn2ECW2yqEp3X7x84hL108fZQj4yY/v8UF+EBoOlqK
hxrlAHQ3pUhrz1+RR0L0KglKMwwCPy3aRBSdTWdPs/RddZxBjNwXDWFKqL8AnfqKU335aPUSY6x+
iZofwFgAxjEqJikJHTA642a9YW86vytotLd9oRd3Fg==
=SwGC
-----END PGP SIGNATURE-----

--------------N2AiyT4Sy18ABM1cQdpB07nL--


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 07:36:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 07:36:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526528.818344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZhf-0005of-MU; Wed, 26 Apr 2023 07:36:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526528.818344; Wed, 26 Apr 2023 07:36:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZhf-0005oY-JP; Wed, 26 Apr 2023 07:36:39 +0000
Received: by outflank-mailman (input) for mailman id 526528;
 Wed, 26 Apr 2023 07:36:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNUB=AR=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1prZhe-0005oS-BJ
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 07:36:38 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on062c.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 12e8b700-e405-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 09:36:36 +0200 (CEST)
Received: from DUZPR01CA0223.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:4b4::10) by AS2PR08MB8574.eurprd08.prod.outlook.com
 (2603:10a6:20b:55d::5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34; Wed, 26 Apr
 2023 07:36:31 +0000
Received: from DBAEUR03FT035.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:4b4:cafe::9f) by DUZPR01CA0223.outlook.office365.com
 (2603:10a6:10:4b4::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21 via Frontend
 Transport; Wed, 26 Apr 2023 07:36:31 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT035.mail.protection.outlook.com (100.127.142.136) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.20 via Frontend Transport; Wed, 26 Apr 2023 07:36:31 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Wed, 26 Apr 2023 07:36:31 +0000
Received: from 63161baa45d4.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A64543B8-D42A-434D-8D57-7E7432CEBD59.1; 
 Wed, 26 Apr 2023 07:36:21 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 63161baa45d4.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 26 Apr 2023 07:36:21 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB4PR08MB8029.eurprd08.prod.outlook.com (2603:10a6:10:38b::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 07:36:16 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 07:36:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12e8b700-e405-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JGaej4NQ+uswKUQdRV/BuZd/DUx0sCqy/7Xx/koFV/Y=;
 b=21le+bEfe3gEkCb6+RDupx1QUYuxNsf6QEnxbO5yCi8gt/1z0bOQTwjyGX0UzPvH+Yi6GiF3D9rvBBgtDrGAGekATle15eJbp1s9DOzqYRzcJJ3VpOwjcLXOp/pLWRaeS9KupFZrEsdRWHK0ppoMYdlWbFjuFMi9Qap7WAZyQkA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B0qpccbsyImua9+2a7u2zfDoorXpKTcXLlrW1Do6NX1XVGA3NA75smXoTjCaEIYdcXwaEtvH6QUYP16/mJkLCDdPM+7Uw1Q4UIqUoKeSfTmONddF6mvgwCGl4ttbllfJVWs6ioYP8ihWt/zmnbZc4Eh31zpCZhbCcuxkntNmqxhrOx6bNLqxDGMeGZsd73Txn6wyQtwIsGq26UZOO7+EIzCgrFRSLpzfawcDTE/F4B/bsz3cv8Hf/mQbUVViOTfqu+zN+camLAiXuFjT+YbYdhO6NQbO3b2J+N8guObibNBKj2gl5bX7PSw8X/506R/5+1SpkGj0VJjxV72509X38A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JGaej4NQ+uswKUQdRV/BuZd/DUx0sCqy/7Xx/koFV/Y=;
 b=PF2bNFS9ck3ji2QedmH95lbkDyb+Di8FNHtVn28myAMMm0S5sFj2JTbiRtlscWytbu1kCRMdAHKpNszuxC/9RPjfs1FG6xS+2+UMacg5T7xj7CLVURmsn7lkLwXp2sV0rZL9VubFDEz1DrXTPew1Buetgqv2i3IWjS5RyvVMQLutOispjK8GQN2UO7rS8BlpE9Bp0KHfRbCVqeusW/DaeZFMQhQYmmGMNxwI6P8UTIjqyK3y1kV3fXafpUq8bFD/jsdqquB26sHpnYs0T3sdgptEGcbW9/H1Njgs54nMQCzowE4/0/qwuMlgJ9lBFSS+hklrdvqM8TkT1Zw5e1vWvQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Thread-Topic: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
 tree NUMA distance map
Thread-Index: AQHZd0ulMpqFICe/LUiDVq6+aDsW9687tFqAgAFSIKCAACZ1gIAABguQ
Date: Wed, 26 Apr 2023 07:36:15 +0000
Message-ID:
 <AS8PR08MB79919EAEDCD85073CAECE5DD92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <188a01f0-a2d1-0f2d-4d01-61a259c790f1@suse.com>
 <AS8PR08MB7991F2DDC4C13F33390557EA92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <e783d71f-cc0c-e235-28a8-7ec9ad63d41f@suse.com>
In-Reply-To: <e783d71f-cc0c-e235-28a8-7ec9ad63d41f@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: C65C67E0A067E549B0CF886F5CD35384.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB4PR08MB8029:EE_|DBAEUR03FT035:EE_|AS2PR08MB8574:EE_
X-MS-Office365-Filtering-Correlation-Id: 6c8ebd0c-9e46-4431-11e9-08db4628f441
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB8029
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b267ac2b-827b-4dc0-c8fa-08db4628eb11
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 07:36:31.3442
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6c8ebd0c-9e46-4431-11e9-08db4628f441
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8574

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
> tree NUMA distance map
> 
> On 26.04.2023 08:29, Henry Wang wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >>
> >>> +        distance = dt_next_cell(1, &matrix);
> >>
> >> Upon second thought I checked what dt_next_cell() returns: You're silently
> >> truncating here and then ...
> >>
> >>> +            /* Bi-directions are not set, set both */
> >>> +            numa_set_distance(from, to, distance);
> >>> +            numa_set_distance(to, from, distance);
> >>
> >> ... here again. Is that really the intention?
> >
> > By hunting down the historical discussions I found that using dt_next_cell()
> > is what Julien suggested 2 years ago in the RFC series [1]. Given the
> > truncation here is for node id (from/to) and distance, which I am pretty
> > sure will not exceed the 32-bit range, I think the silent truncation is safe.
> 
> That discussion is orthogonal; the previously used dt_read_number() is no
> different in the regard I'm referring to.
> 
> > However I understand your point here, the silent truncation is not ideal,
> > so I wonder if you have any suggestions to improve. Do you think I should
> > change these variables to u64, or maybe I need to do an explicit type
> > cast, or do you have any better suggestions? Thanks!
> 
> So one thing I overlooked is that by passing 1 as the first argument, you
> only request a 32-bit value. Hence there's no (silent) truncation then on
> the dt_next_cell() uses. But the numa_set_distance() calls still truncate
> to 8 bits. Adding explicit type casts won't help at all - truncation will
> remain as silent as it was before. However, numa_set_distance()'s first
> two arguments could easily become "unsigned int", resulting in its node
> related bounds checking to take care of all truncation issues. The
> "distance" parameter already is unsigned int, and is already being bounds
> checked against NUMA_NO_DISTANCE.

Great points! Thanks for pointing out the 8-bit truncation. You are correct.
Somehow I was under the impression that numa_set_distance()'s first two
arguments were already "unsigned int", so I missed this part... Sorry.

In that case, I think I will add a check of "from" and "to" against
MAX_NUMNODES as soon as their values are populated by dt_next_cell().
Hopefully this will address your concern.

Kind regards,
Henry

> 
> Jan
> 
> > [1] https://lists.xenproject.org/archives/html/xen-devel/2021-08/msg01175.html
> >
> > Kind regards,
> > Henry


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 07:48:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 07:48:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526534.818354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZt7-0007Ks-OI; Wed, 26 Apr 2023 07:48:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526534.818354; Wed, 26 Apr 2023 07:48:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prZt7-0007Kl-LZ; Wed, 26 Apr 2023 07:48:29 +0000
Received: by outflank-mailman (input) for mailman id 526534;
 Wed, 26 Apr 2023 07:48:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eKXV=AR=citrix.com=prvs=47345d3e0=roger.pau@srs-se1.protection.inumbo.net>)
 id 1prZt5-0007Kf-Ap
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 07:48:27 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b8727a64-e406-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 09:48:25 +0200 (CEST)
Received: from mail-bn1nam02lp2043.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 26 Apr 2023 03:48:14 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA1PR03MB6465.namprd03.prod.outlook.com (2603:10b6:806:1c0::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.32; Wed, 26 Apr
 2023 07:48:11 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::48bb:fedd:a394:9f39%5]) with mapi id 15.20.6319.029; Wed, 26 Apr 2023
 07:48:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8727a64-e406-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682495305;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=0sS3ir+vHB082WhzG4V1EoXKV+Oe2aLfXKHjgPJdE7c=;
  b=bifDPeJOh5QaI5g8lBkojp3h4tcXPpMgECHgUrh4gd5p3K91Pr6Z/Srp
   gVGpDF2d0/80KRJ7QSVr2/qgyCMA6jxJS7cawPrSUrpZ2DO2C9bMFoF9V
   1yqxyUDEzhs7f+CigivjgHKVinxQAyab59lvLJeS7TOTHpzPu9rMPZrHz
   g=;
X-IronPort-RemoteIP: 104.47.51.43
X-IronPort-MID: 109336669
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,227,1677560400"; 
   d="scan'208";a="109336669"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DXbvAfnkLubOwSTdTHuHYQ7Z69dv97zhDk0oD/nPJzvlnbjWEliYaoGSLm01gWJ8RJ9SDRBfkDo6yC8/IYli5QYxKfxXbhMsaQ3/PHiAVR4GQr7M5Bruqamv8IhbOxuTQaQkauqfz1LEqkIhY1odSOdsltPZzyHYYjjHZORX6mAnvxlKie3qUKCSuW78leNHRu5hxYsu+gdAdxkmkKhRdJzgClOZeSbT/nENFpYxhwrS7lw3B9mvF6EeO18eh6jQnueH/1RSwR0KmsFtVHO1nXGiyLpqT+NzErvc9BBkB+VbwjvxN732nMAIomrxZ4Pa9UdcE4HHMwX4xczsGM4rEg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Zf2xenawkEDBzK8yC2XFCbJS8ncy2Q7HBlgb8stBePE=;
 b=Cjl8NjLrwrhM6tVBdewyy5MpncVSxGeRaujOzE2gTKfXTNKx5oozCa7KdswytnIQRy8r+jEn1KDPq8rAxxdxQxWsXmx1uDonRYqQcK2oes1ahBQRiiSZLHDwypieL0v3RYgxWeyL8YBZzG9HFf057cXKLfI+mHh9QeKgcL2yT6Dl9+Q7b9Dkn38ryrQ+plm6hf6FJTCqtMlf6ZODJ260nDbtpm4nc5js4brq35wdIfohG3P/Cd6G46nlFKwFCOQbh0Umz4Ahu+O1liyutAp7qFwT1PHQBQY+10lpm/OBUv7PP8EY6dgmTysMfI0WqJzmsABMvmmSPor/gPZYmDbanQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Zf2xenawkEDBzK8yC2XFCbJS8ncy2Q7HBlgb8stBePE=;
 b=fqZ64B8zUGUCvzNKnaik5bIMIOfzomdtwo++MADloAUHKEv4R1ZLJ/yu/DpBP3ZG8rRcS8vpLk5r6dU3XYO9oErz7oeVWSFuB7GW5DzHBOlU/f3AiBlQrcF2/IxV47CnRiYpoDQBtBpXf+FPfgSPpXMSasOnxQ/Yx5g0BAkS0ZM=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 26 Apr 2023 09:48:04 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] ns16550: enable memory decoding on MMIO-based PCI
 console card
Message-ID: <ZEjXNLAVCixClGyl@Air-de-Roger>
References: <20230425143902.142571-1-marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230425143902.142571-1-marmarek@invisiblethingslab.com>
X-ClientProxiedBy: LO4P123CA0545.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:319::13) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SA1PR03MB6465:EE_
X-MS-Office365-Filtering-Correlation-Id: 4f6e807c-c7d6-4f5e-3e2c-08db462a9501
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4f6e807c-c7d6-4f5e-3e2c-08db462a9501
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 07:48:10.8826
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: g7Hl+eTeD0z0+k2LhUOK8rsf849mapm/dQAKqBOu44usgAKVi77yrRPpqiNkBlQjSs2B8HWEMlIzV6vA9iRLqg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6465

On Tue, Apr 25, 2023 at 04:39:02PM +0200, Marek Marczykowski-Górecki wrote:
> pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
> devices, add setting PCI_COMMAND_MEMORY for MMIO-based UART devices too.
> Note the MMIO-based devices in practice need a "pci" sub-option,
> otherwise a few parameters are not initialized (including bar_idx,
> reg_shift, reg_width etc). The "pci" option is not supposed to be used with
> an explicit BDF, so do not key setting PCI_COMMAND_MEMORY on an explicit BDF
> being set. Contrary to the IO-based UART, pci_serial_early_init() will
> not attempt to set the BAR0 address, even if the user provided io_base
> manually - in most cases, those come with an offset and the current cmdline syntax
> doesn't allow expressing it. Due to this, enable PCI_COMMAND_MEMORY only
> if uart->bar is already populated. In similar spirit, this patch does
> not support setting BAR0 of the bridge.

FWIW (not that it should be done here), but I think we also want to
disable memory decoding in pci_uart_config() while sizing the BAR.

> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
> This fixes the issue I was talking about on #xendevel. Thanks Roger for
> the hint.
> 
> Changes in v2:
>  - check if uart->bar instead of uart->io_base
>  - move it ahead of !uart->ps_bdf_enable return
>  - expand commit message.
> ---
>  xen/drivers/char/ns16550.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index 1b21eb93c45f..34231dcb23ea 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -272,7 +272,15 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
>  static void pci_serial_early_init(struct ns16550 *uart)
>  {
>  #ifdef NS16550_PCI
> -    if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
> +    if ( uart->bar )
> +    {
> +        pci_conf_write16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
> +                                  uart->ps_bdf[2]),
> +                         PCI_COMMAND, PCI_COMMAND_MEMORY);

Don't you want to read the current command register first and just
OR in PCI_COMMAND_MEMORY?

I see that for IO decoding we already do it this way, but I'm not sure
whether it could cause issues down the road by unintentionally
clearing command flags that were already set.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 07:59:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 07:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526542.818364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pra3x-0000UV-Sd; Wed, 26 Apr 2023 07:59:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526542.818364; Wed, 26 Apr 2023 07:59:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pra3x-0000UO-Pn; Wed, 26 Apr 2023 07:59:41 +0000
Received: by outflank-mailman (input) for mailman id 526542;
 Wed, 26 Apr 2023 07:59:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YPIn=AR=arm.com=mark.rutland@srs-se1.protection.inumbo.net>)
 id 1pra3x-0000UH-95
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 07:59:41 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 4a41ef5e-e408-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 09:59:38 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 680B44B3;
 Wed, 26 Apr 2023 01:00:21 -0700 (PDT)
Received: from FVFF77S0Q05N (unknown [10.57.23.120])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A76593F587;
 Wed, 26 Apr 2023 00:59:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a41ef5e-e408-11ed-8611-37d641c3527e
Date: Wed, 26 Apr 2023 08:59:26 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, linux-arm-kernel@lists.infradead.org,
	David Woodhouse <dwmw@amazon.co.uk>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org, Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 22/37] arm64: smp: Switch to hotplug core state
 synchronization
Message-ID: <ZEjZ3pHjQWn4drs8@FVFF77S0Q05N>
References: <20230414225551.858160935@linutronix.de>
 <20230414232310.569498144@linutronix.de>
 <ZD1q3TF2ixVD1f2M@FVFF77S0Q05N>
 <87ttx3zqof.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87ttx3zqof.ffs@tglx>

On Tue, Apr 25, 2023 at 09:51:12PM +0200, Thomas Gleixner wrote:
> On Mon, Apr 17 2023 at 16:50, Mark Rutland wrote:
> > As a tangent/aside, we might need to improve that for confidential compute
> > architectures, and we might want to generically track cpus which might still be
> > using kernel text/data. On arm64 we ensure that via our cpu_kill() callback
> > (which'll use PSCI CPU_AFFINITY_INFO), but I'm not sure if TDX and/or SEV-SNP
> > have a similar mechanism.
> >
> > Otherwise, a malicious hypervisor can pause a vCPU just before it leaves the
> > kernel (e.g. immediately after the arch_cpuhp_cleanup_dead_cpu() call), wait
> > for a kexec (or reuse of stack memory), and unpause the vCPU to cause things
> > to blow up.
> 
> There are a gazillion ways for a malicious hypervisor to blow up a
> 'squint enough to be confident' guest.
> 
> The real question is whether it can utilize such a blow up to extract
> confidential information from the guest.
>
> If not then it's just yet another way of DoS which is an "acceptable"
> attack as it only affects availability but not confidentiality.

Sure.

My thinking is that this is an attack against the *integrity* of the guest
(since the vCPU that gets unpaused may write to memory), and so it's
potentially more than just a DoS.

I only mention this because I'd like to account for that on arm64, and if other
architectures also wanted to handle that it might make sense to have some
common infrastructure to track whether CPUs are potentially still within the
kernel.

Thanks,
Mark.
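
The tracking Mark is proposing could be sketched roughly as below. This is an illustrative C fragment, not the actual arm64 implementation or any existing kernel API; all names (cpu_mark_in_kernel() etc.) are hypothetical, and a real version would use the kernel's cpumask machinery and tie into cpu_kill()/PSCI CPU_AFFINITY_INFO as described above.

```c
#include <stdbool.h>
#include <stdatomic.h>

/* Hypothetical sketch: one bit per CPU, set while that CPU may still be
 * executing kernel text/data. Names here are illustrative only. */
#define NR_CPUS 64

static _Atomic unsigned long cpus_in_kernel_mask;

/* Set on the dying CPU before it begins teardown. */
static void cpu_mark_in_kernel(int cpu)
{
    if (cpu < 0 || cpu >= NR_CPUS)
        return;
    atomic_fetch_or(&cpus_in_kernel_mask, 1UL << cpu);
}

/* Cleared by a surviving CPU once the architecture confirms the dead
 * CPU has left the kernel (e.g. PSCI CPU_AFFINITY_INFO reports it off). */
static void cpu_mark_left_kernel(int cpu)
{
    if (cpu < 0 || cpu >= NR_CPUS)
        return;
    atomic_fetch_and(&cpus_in_kernel_mask, ~(1UL << cpu));
}

/* kexec (or stack reuse) would check this before reusing memory, closing
 * the window where a paused vCPU could later resume inside stale text. */
static bool any_cpu_still_in_kernel(void)
{
    return atomic_load(&cpus_in_kernel_mask) != 0;
}
```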


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 08:15:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 08:15:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526550.818374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praJS-0003Sw-Jh; Wed, 26 Apr 2023 08:15:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526550.818374; Wed, 26 Apr 2023 08:15:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praJS-0003Sp-FP; Wed, 26 Apr 2023 08:15:42 +0000
Received: by outflank-mailman (input) for mailman id 526550;
 Wed, 26 Apr 2023 08:15:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1praJQ-0003Sj-NT
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 08:15:40 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on061e.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 874f29d5-e40a-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 10:15:39 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6881.eurprd04.prod.outlook.com (2603:10a6:208:18b::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 08:15:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 08:15:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 874f29d5-e40a-11ed-b224-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <cebf1a46-71cb-a700-78ac-f9ee8bb64c22@suse.com>
Date: Wed, 26 Apr 2023 10:15:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <188a01f0-a2d1-0f2d-4d01-61a259c790f1@suse.com>
 <AS8PR08MB7991F2DDC4C13F33390557EA92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <e783d71f-cc0c-e235-28a8-7ec9ad63d41f@suse.com>
 <AS8PR08MB79919EAEDCD85073CAECE5DD92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <AS8PR08MB79919EAEDCD85073CAECE5DD92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0093.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6881:EE_
X-MS-Office365-Filtering-Correlation-Id: 0ab69b8b-2f12-4c91-2c4d-08db462e69c7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0ab69b8b-2f12-4c91-2c4d-08db462e69c7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 08:15:36.1385
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ellOj0OT5eMCtqbZAwhMddo4Vlg+jsPc1RrUAQu8MHmuo15025xvQ0yKuOH19DX1NrAxk4HFSuGyMCrqKlrrIw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6881

On 26.04.2023 09:36, Henry Wang wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>>
>> On 26.04.2023 08:29, Henry Wang wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>
>>>>> +        distance = dt_next_cell(1, &matrix);
>>>>
>>>> Upon second thought I checked what dt_next_cell() returns: You're silently
>>>> truncating here and then ...
>>>>
>>>>> +            /* Bi-directions are not set, set both */
>>>>> +            numa_set_distance(from, to, distance);
>>>>> +            numa_set_distance(to, from, distance);
>>>>
>>>> ... here again. Is that really the intention?
>>>
>>> By hunting down the historical discussions I found that using dt_next_cell()
>> is
>>> what Julien suggested 2 years ago in the RFC series [1]. Given the truncation
>>> here is for node id (from/to) and distance which I am pretty sure will not
>>> exceed 32-bit range, I think the silent truncation is safe.
>>
>> That discussion is orthogonal; the previously used dt_read_number() is no
>> different in the regard I'm referring to.
>>
>>> However I understand your point here, the silent truncation is not ideal, so
>>> I wonder if you have any suggestions to improve, do you think I should
>> change
>>> these variables to u64 or maybe I need to do the explicit type cast or any
>>> better suggestions from you? Thanks!
>>
>> So one thing I overlooked is that by passing 1 as the first argument, you
>> only request a 32-bit value. Hence there's no (silent) truncation then on
>> the dt_next_cell() uses. But the numa_set_distance() calls still truncate
>> to 8 bits. Adding explicit type casts won't help at all - truncation will
>> remain as silent as it was before. However, numa_set_distance()'s first
>> two arguments could easily become "unsigned int", so that its
>> node-related bounds checking takes care of all truncation issues. The
>> "distance" parameter already is unsigned int, and is already being bounds
>> checked against NUMA_NO_DISTANCE.
> 
> Great points! Thanks for pointing the 8-bit truncation out. You are correct.
> Somehow my impression was that numa_set_distance()'s first two arguments
> were already "unsigned int", so I missed this part... Sorry.
> 
> In that case, I think I will add a check between "from, to" and MAX_NUMNODES
> as soon as the values of "from" and "to" are populated by dt_next_cell().
> Hopefully this will address your concern.

While this would address my concern, I don't see why you want to repeat
the checking that numa_set_distance() already does.

Jan
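
Jan's suggestion amounts to something like the following. This is a simplified illustrative sketch, not Xen's actual implementation: the constants, the return convention, and the flat array are all placeholders (the real function and constants live in Xen's NUMA code), but it shows how widening "from"/"to" to unsigned int lets the existing bounds check reject an oversized device-tree value instead of silently truncating it.

```c
/* Illustrative placeholders; the real Xen constants differ. */
#define MAX_NUMNODES     64
#define NUMA_NO_DISTANCE 0xFF

static unsigned char node_distance_map[MAX_NUMNODES][MAX_NUMNODES];

/* Taking "from"/"to" as unsigned int (rather than a narrow node-id
 * type) means a value read from the device tree that exceeds the node
 * range fails the check below instead of wrapping to a small number. */
static int numa_set_distance(unsigned int from, unsigned int to,
                             unsigned int distance)
{
    if ( from >= MAX_NUMNODES || to >= MAX_NUMNODES )
        return -1;
    if ( distance >= NUMA_NO_DISTANCE )
        return -1;
    node_distance_map[from][to] = (unsigned char)distance;
    return 0;
}
```

With this shape, the caller parsing the distance map needs no extra MAX_NUMNODES check of its own, which is exactly Jan's point about not repeating the validation.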


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 08:16:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 08:16:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526551.818384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praJd-0003kH-R8; Wed, 26 Apr 2023 08:15:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526551.818384; Wed, 26 Apr 2023 08:15:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praJd-0003kA-Nn; Wed, 26 Apr 2023 08:15:53 +0000
Received: by outflank-mailman (input) for mailman id 526551;
 Wed, 26 Apr 2023 08:15:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PyPi=AR=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1praJb-0003Sj-G4
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 08:15:51 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8e0ed531-e40a-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 10:15:50 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e0ed531-e40a-11ed-b224-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
To: Mark Rutland <mark.rutland@arm.com>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Catalin
 Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 linux-arm-kernel@lists.infradead.org, David Woodhouse <dwmw@amazon.co.uk>,
 Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>, Arnd
 Bergmann <arnd@arndb.de>, Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Sabin Rapan <sabrapan@amazon.com>
Subject: Re: [patch 22/37] arm64: smp: Switch to hotplug core state
 synchronization
In-Reply-To: <ZEjZ3pHjQWn4drs8@FVFF77S0Q05N>
References: <20230414225551.858160935@linutronix.de>
 <20230414232310.569498144@linutronix.de> <ZD1q3TF2ixVD1f2M@FVFF77S0Q05N>
 <87ttx3zqof.ffs@tglx> <ZEjZ3pHjQWn4drs8@FVFF77S0Q05N>
Date: Wed, 26 Apr 2023 10:15:48 +0200
Message-ID: <87ildjys7f.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Wed, Apr 26 2023 at 08:59, Mark Rutland wrote:
> On Tue, Apr 25, 2023 at 09:51:12PM +0200, Thomas Gleixner wrote:
>> If not then it's just yet another way of DoS which is an "acceptable"
>> attack as it only affects availability but not confidentiality.
>
> Sure.
>
> My thinking is that this is an attack against the *integrity* of the guest
> (since the vCPU that gets unpaused may write to memory), and so it's
> potentially more than just a DoS.
>
> I only mention this because I'd like to account for that on arm64, and if other
> architectures also wanted to handle that it might make sense to have some
> common infrastructure to track whether CPUs are potentially still within the
> kernel.

Fair enough.


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 08:24:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 08:24:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526561.818394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praS3-0005VW-N9; Wed, 26 Apr 2023 08:24:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526561.818394; Wed, 26 Apr 2023 08:24:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praS3-0005VP-KO; Wed, 26 Apr 2023 08:24:35 +0000
Received: by outflank-mailman (input) for mailman id 526561;
 Wed, 26 Apr 2023 08:24:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1praS3-0005V3-2n
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 08:24:35 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0619.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::619])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c4e249d8-e40b-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 10:24:32 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB7601.eurprd04.prod.outlook.com (2603:10a6:20b:285::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 08:24:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 08:24:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4e249d8-e40b-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b41e8eb0-a776-8924-429f-8abe7e70ead7@suse.com>
Date: Wed, 26 Apr 2023 10:24:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2] ns16550: enable memory decoding on MMIO-based PCI
 console card
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <20230425143902.142571-1-marmarek@invisiblethingslab.com>
 <ZEjXNLAVCixClGyl@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZEjXNLAVCixClGyl@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0174.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB7601:EE_
X-MS-Office365-Filtering-Correlation-Id: 619b9401-3af9-4ed3-52c6-08db462fa7ef
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 619b9401-3af9-4ed3-52c6-08db462fa7ef
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 08:24:29.9640
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB7601

On 26.04.2023 09:48, Roger Pau Monné wrote:
> On Tue, Apr 25, 2023 at 04:39:02PM +0200, Marek Marczykowski-Górecki wrote:
>> --- a/xen/drivers/char/ns16550.c
>> +++ b/xen/drivers/char/ns16550.c
>> @@ -272,7 +272,15 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
>>  static void pci_serial_early_init(struct ns16550 *uart)
>>  {
>>  #ifdef NS16550_PCI
>> -    if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
>> +    if ( uart->bar )
>> +    {
>> +        pci_conf_write16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
>> +                                  uart->ps_bdf[2]),
>> +                         PCI_COMMAND, PCI_COMMAND_MEMORY);
> 
> Don't you want to read the current command register first and just
> or PCI_COMMAND_MEMORY?
> 
> I see that for IO decoding we already do it this way, but I'm not sure
> whether it could cause issues down the road by unintentionally
> disabling command flags.

Quite some time ago I asked myself the same question when seeing that
code, but I concluded that perhaps none of the other bits really make
sense to be set for a device as simple as a serial one. I'm actually
thinking that for such a device to be used during early boot, it might
even be helpful if bits like PARITY or SERR get cleared. (Of course, if
any of that really was the intention of the change introducing that
code, it should have come with a code comment.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 08:28:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 08:28:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526567.818404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praW6-0006B7-Bk; Wed, 26 Apr 2023 08:28:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526567.818404; Wed, 26 Apr 2023 08:28:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praW6-0006B0-8z; Wed, 26 Apr 2023 08:28:46 +0000
Received: by outflank-mailman (input) for mailman id 526567;
 Wed, 26 Apr 2023 08:28:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1praW4-0006Au-Pl
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 08:28:44 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2062c.outbound.protection.outlook.com
 [2a01:111:f400:fe13::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5a11eebe-e40c-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 10:28:42 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9308.eurprd04.prod.outlook.com (2603:10a6:10:36c::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34; Wed, 26 Apr
 2023 08:28:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 08:28:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a11eebe-e40c-11ed-8611-37d641c3527e
Message-ID: <650a7f6e-be82-0312-05f2-bb69e51e828d@suse.com>
Date: Wed, 26 Apr 2023 10:28:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] Fix install.sh for systemd
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Jason Andryuk <jandryuk@gmail.com>
References: <20230425194622.114869-1-jandryuk@gmail.com>
 <20230426091533.68324d8d.olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230426091533.68324d8d.olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0207.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com

On 26.04.2023 09:15, Olaf Hering wrote:
> Tue, 25 Apr 2023 15:46:20 -0400 Jason Andryuk <jandryuk@gmail.com>:
> 
>> Skip populating /var/run/xen when systemd is being used.
> 
> It was wrong to do it like that from day one.
> That directory has to be populated on demand at runtime.

Is this to be translated to Reviewed-by: <you>?

Jan
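[Archive note: on systemd hosts, the on-demand population Olaf describes is
conventionally done with a tmpfiles.d fragment, which systemd-tmpfiles applies
at boot. The snippet below is a hypothetical illustration only; the exact
path, mode, and ownership would have to match what the Xen daemons expect.]

```
# /usr/lib/tmpfiles.d/xen.conf (hypothetical example)
# Type  Path       Mode  User  Group  Age
d       /run/xen   0700  root  root   -
```

(On systemd systems /var/run is a symlink to /run, so creating /run/xen
covers both spellings.)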


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 08:30:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 08:30:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526570.818414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praXK-000708-MR; Wed, 26 Apr 2023 08:30:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526570.818414; Wed, 26 Apr 2023 08:30:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praXK-0006zi-J0; Wed, 26 Apr 2023 08:30:02 +0000
Received: by outflank-mailman (input) for mailman id 526570;
 Wed, 26 Apr 2023 08:30:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNUB=AR=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1praXI-0006kv-Vy
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 08:30:01 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2054.outbound.protection.outlook.com [40.107.13.54])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 86e879dc-e40c-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 10:29:57 +0200 (CEST)
Received: from DB6PR0301CA0085.eurprd03.prod.outlook.com (2603:10a6:6:30::32)
 by PAWPR08MB9967.eurprd08.prod.outlook.com (2603:10a6:102:358::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 08:29:21 +0000
Received: from DBAEUR03FT047.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:30:cafe::cc) by DB6PR0301CA0085.outlook.office365.com
 (2603:10a6:6:30::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6298.54 via Frontend
 Transport; Wed, 26 Apr 2023 08:29:21 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT047.mail.protection.outlook.com (100.127.143.25) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.21 via Frontend Transport; Wed, 26 Apr 2023 08:29:21 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Wed, 26 Apr 2023 08:29:20 +0000
Received: from 6893e8ce4ffb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 70254448-2759-4EDF-92B5-2378D5FAEBA6.1; 
 Wed, 26 Apr 2023 08:29:10 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6893e8ce4ffb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 26 Apr 2023 08:29:10 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DU0PR08MB9560.eurprd08.prod.outlook.com (2603:10a6:10:44b::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Wed, 26 Apr
 2023 08:29:08 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 08:29:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86e879dc-e40c-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Thread-Topic: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
 tree NUMA distance map
Thread-Index:
 AQHZd0ulMpqFICe/LUiDVq6+aDsW9687tFqAgAFSIKCAACZ1gIAABguQgAAM+YCAAAMWUA==
Date: Wed, 26 Apr 2023 08:29:08 +0000
Message-ID:
 <AS8PR08MB79911D9836C2954C047BECDF92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <188a01f0-a2d1-0f2d-4d01-61a259c790f1@suse.com>
 <AS8PR08MB7991F2DDC4C13F33390557EA92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <e783d71f-cc0c-e235-28a8-7ec9ad63d41f@suse.com>
 <AS8PR08MB79919EAEDCD85073CAECE5DD92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <cebf1a46-71cb-a700-78ac-f9ee8bb64c22@suse.com>
In-Reply-To: <cebf1a46-71cb-a700-78ac-f9ee8bb64c22@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: arm.com

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
> tree NUMA distance map
>
> > Great points! Thanks for pointing out the 8-bit truncation. You are correct.
> > Somehow I was under the impression that numa_set_distance()'s first two
> > arguments are already "unsigned int", so I missed this part... Sorry.
> >
> > In that case, I think I will add a check between "from, to" and
> > MAX_NUMNODES as soon as the values of "from" and "to" are populated by
> > dt_next_cell(). Hopefully this will address your concern.
>
> While this would address my concern, I don't see why you want to repeat
> the checking that numa_set_distance() already does.

Correct; I think it would be better to move the check from numa_set_distance()
to the caller, fdt_parse_numa_distance_map_v1(), as I believe that once the
truncation has happened it is too late to check in numa_set_distance().

Kind regards,
Henry

>
> Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 08:32:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 08:32:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526575.818423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praZV-0008Am-2E; Wed, 26 Apr 2023 08:32:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526575.818423; Wed, 26 Apr 2023 08:32:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praZU-0008Af-Ve; Wed, 26 Apr 2023 08:32:16 +0000
Received: by outflank-mailman (input) for mailman id 526575;
 Wed, 26 Apr 2023 08:32:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JrF9=AR=tibco.com=tismith@srs-se1.protection.inumbo.net>)
 id 1praZT-0008AX-Fb
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 08:32:15 +0000
Received: from mail-lj1-x229.google.com (mail-lj1-x229.google.com
 [2a00:1450:4864:20::229])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d7d08984-e40c-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 10:32:13 +0200 (CEST)
Received: by mail-lj1-x229.google.com with SMTP id
 38308e7fff4ca-2a8c30ac7e3so62768691fa.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 01:32:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7d08984-e40c-11ed-8611-37d641c3527e
MIME-Version: 1.0
References: <20230420110205.688689-1-mark.syms@citrix.com> <54a37172-cad5-3b27-36fc-3b7768e39df8@xen.org>
 <CAPYKksVtGyfv3TbAjLH1G=N6=pH-pH2-FTX5c3+E5PsOKo2aOQ@mail.gmail.com>
 <CALUK5G5T=8MkxaQxdeid_ypo1e4DJ-zBRAMb7D+dcHkVdJt2tQ@mail.gmail.com> <f1325cdb-9e0f-b955-7041-826fb6c78174@xen.org>
In-Reply-To: <f1325cdb-9e0f-b955-7041-826fb6c78174@xen.org>
From: Tim Smith <tismith@tibco.com>
Date: Wed, 26 Apr 2023 09:32:02 +0100
Message-ID: <CALUK5G4zgRqQrinibOf06vydR1AMwdp=jTvLrC9FWqg385Tw_g@mail.gmail.com>
Subject: Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
To: paul@xen.org
Cc: Mark Syms <mark.syms@cloud.com>, mark.syms@citrix.com, qemu-devel@nongnu.org, 
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org, tim.smith@citrix.com
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Apr 24, 2023 at 2:51 PM Paul Durrant <xadimgnik@gmail.com> wrote:
>
> So if you drop the ring drain then this patch should still stop the
> SEGVs, right?
>

I think that's worth a few test runs. I recall some coredumps in that
condition when I was investigating early on, but I don't have them in
my collection, so maybe I'm misremembering.

Tim


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 08:39:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 08:39:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526580.818434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pragE-0000QA-P5; Wed, 26 Apr 2023 08:39:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526580.818434; Wed, 26 Apr 2023 08:39:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pragE-0000Q3-MP; Wed, 26 Apr 2023 08:39:14 +0000
Received: by outflank-mailman (input) for mailman id 526580;
 Wed, 26 Apr 2023 08:39:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pragE-0000Px-CW
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 08:39:14 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20631.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d1fb7e7b-e40d-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 10:39:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB7073.eurprd04.prod.outlook.com (2603:10a6:208:1a0::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 08:39:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 08:39:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1fb7e7b-e40d-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YGlv30dhuPn0huxAgqoiMdrQ+0PpWjFL0Ep+BN05Lbswe+GzNkeLHVl3O4k1BnwERNYLhhH7YMVPlNi7unkdy7FHkXUy8WRYTDI413Fc4lSz9kQ8P/FEIuqwZCMyNJVd5T+CWCSdYnf8XVxN/+m5lAYN7YTgHGOW4D28pi9O0le00iVb771MtnV9qbE839DVLzYtZKy/MIuOPEo90o1DLY7MZB5egBDiXdxCg+aLtnrcF6c40hIKWLGBvmvrifQ8whDQNHEW5AOEY87vMQ7Yh9moyauWfsiyCij2TR7q4lefc5wcinNUr3MXJdtzd+N94/IR2dwK5ADbgNwJH8acJQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=N4eGMJQwvz7A42R68EwbdgpDYYbgEq5y8U1rWzi1CxQ=;
 b=RHZKa06Xd/0T9iLly6MNeysIwNw8UTwfy+cIW4BOkpCR+m7XWyGCFvua6x5joX+suEBxd2TSnrMa5VoeWF6u6xpDZgkHaK5P8qE1R82sODQmus2wfcd3WrRAlJp8tXdj9uSMFfHSmVlPUdKeyVxD0ldDERlWYPi1awkdf5Dibe9hUufJarNMIkExmo4G3Aa0oBXBrYq0GDo4mbUEhVgpSaIT3BZrlcZqL3SQw8lhLmqQws1aUvYJ3pU+TJvkv6EwtVY88sgvCeMrXQ5ZV5idMdZSWec5kEbvSqyx9bVPEs21r1uNy6qUCia2+NMgjwPSTat9LfLt6AQg8LupfO+8jw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N4eGMJQwvz7A42R68EwbdgpDYYbgEq5y8U1rWzi1CxQ=;
 b=CU225G2Pj5KLNynQQFEtTUah2gm7Rpb8aor/ych4TCPAdQ2PrX9EFccml+SF+MeADRwITV4jQS10qJH09Qyjby7AePcPf4Jzjrefg4mj2hgRpsFrDNUtE7Y7XEufnlQrOwiSdbYkxlnupiEuy32/VjIcEZhxCgMhlK9rUCEdB7q6gcpP4l6DJ4/vw/5orv9XJhX6fIw9LMwWz9A3XP+hKP8K6A8uG2SbLdyi4KrlGJRGytw7nbOqmFpLWEzoYoAo3hxLXtQMqkiB1/2rG7ffvLftaajyDtpY3zLnjIz60lvtGzqtGuDR9Gtq6UEAWMHEAMi5IgdK2fGRepVSsspDPw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1085873c-13dd-aae4-55f2-9d69635b37be@suse.com>
Date: Wed, 26 Apr 2023 10:39:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] libxl: device_backend_callback() print rc on error
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20230425194622.114869-1-jandryuk@gmail.com>
 <20230425194622.114869-2-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230425194622.114869-2-jandryuk@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0141.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB7073:EE_
X-MS-Office365-Filtering-Correlation-Id: a9c2f0d1-dcf3-4d61-3f8c-08db4631b54c
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NQ0Lk3Gjmc6SSbgPv5gzEVaYVgILLbTpt+RKJjU9yLRpAoT/l0tLkxFaBbrZdJEY9OrIwrF/kRxPiG0DuyZRCNHmlSosBpDAbI8te5zIkKtIGxbkro5nz55xMejpzS+aq3tzBSHVkZBfNqIN/KkBd8S1T6LS8UXnvKySY9u17Cpj8EFD0KIX/jfje5keuuISQojh9K44C7z2W1UNSWUMw3NOt8Oau0KT4kjkE/RTC1FUxyRl4xU37dXAmNfgtqofmDW3Drn6jXfzTYESRSYGEemildK4nsxcBm1chfaXpLfhuJ0L5qsvSqaY8/Pk64Hm5fIZQOmbVhNjrscS/57H2rB4VNkVoHlWs68OkUwG0opBpmJpQ77UR76/RWKGcBTGgpbRB0+8z/qoI2VleRknhoqc32L9taONpYu6+KUZgP5zR3EDRiADCgdtlJSJVkqRhha6heGQvVzu2f3xr5seM/foSDkc0gYVoYzO3srpS1DeoL50/Q0BxXzCQnEVbuh5D4hc6zMT6rl87404D6Pbq8d41Ia+nEay+0F8FkdY06LisLtFheUTLRgAtOLrdu/U0Rk6YWKmReJ65lg6HQ7zxIY69KUh9YTjmK0IhqnIs2/7BR8XWUUEpT0/+Xw3gPcyvgSaS2sM8X6PQ+ZXHTKgeg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(346002)(396003)(39860400002)(366004)(376002)(451199021)(66556008)(66946007)(66476007)(316002)(4326008)(6916009)(54906003)(5660300002)(41300700001)(8936002)(8676002)(4744005)(2906002)(186003)(38100700002)(53546011)(31696002)(6512007)(6506007)(26005)(83380400001)(86362001)(31686004)(36756003)(2616005)(478600001)(6486002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?YW1RNXRJVVNOWStNcG5oSm90Yk9ieXM5ZytkOG94cGk0YlpPK0txS0ZsQzZu?=
 =?utf-8?B?RzNaMFoxc2wycDBhME5KUXZYT05QbUxkdis0NTJ3Nmk0UlA3ekozUm94Rkgx?=
 =?utf-8?B?WWk3bUUrcHd0QUlhYllxWDl4aHRmQ1NYZExTY1FFUmdsVmM5VnBaekdBeUdh?=
 =?utf-8?B?amZuQ2RQcVRZdzFhdVhkbngwMi9ZTGR3RWF2eFQ4b29TRThNSVlSSkRVZWps?=
 =?utf-8?B?bkhpQlkzUkFXczcyeTFvKzBqcmZRb0srWEg2aDBZVGphMUxRQnY4R2pydUQ4?=
 =?utf-8?B?dENaUVRDWXlJaTd0ZGxPaTZsZFZMY0VyWnhMNWQ3WEpjUXhKS0J2TXdiYWFD?=
 =?utf-8?B?d1ZZWStWWDdORiszMEt4ZG96UFYzYXJXbjdxWHMvWjNtYkFCclF0bTZwbDVm?=
 =?utf-8?B?djlSdlU4QXdWQnNiWWFMZDRxV1puUzhoeTVqd2U2Z1VnNXpwbnVWL1RuUVY1?=
 =?utf-8?B?eVlwbDJjMjdJR1FBeElWTk0vTXk3UWt5Znk5RXh4SXRJTExRMC9GWmhDcFRu?=
 =?utf-8?B?MjdzYW1PNEFTMWFnUjN1TkxreVIvSVNaR1JEWnZJMkw5K2xKeGNXU1BXaFNU?=
 =?utf-8?B?TTBOQWlSeUVCeGNDbXVvY1V6UU1laldmTHlOVjQrWFF3encvSytjMC9HNnN5?=
 =?utf-8?B?UkJmcWh3VURUUUtGaG4xMlFWdEJlU29JTktIUVVOWktSaVFxa1NETEFna2lH?=
 =?utf-8?B?UXNhSUlsZjlIak1rbEk5TkRmRStSd0lWbE0xWXd6YW02ZVF3Y01hV1BNRklV?=
 =?utf-8?B?ZnB6K1MvdStPOXNQYUFCNEc3alpoNGtQTTFBSVN3YjZiNTVLTEhCazQzZkc4?=
 =?utf-8?B?K3BNWStCaG0reW0wNDBFdUtmWHA0QnRaUnEvc2RUbkQxWUV1S05vUzJ3dTZE?=
 =?utf-8?B?dkhuaEJ6M2M4a1NqZEJIRlJKR0tJNXhjNGVyKzJyc1RuSERnczRrcUFhaGYz?=
 =?utf-8?B?dUQwS1dsY1pyRTNZYWozZzVFSXVxb3h3d0JaTG9OWFo5WVl6VTgwY29WdjQr?=
 =?utf-8?B?cnhGWlluUVJBYyt1Y0g2bXUrL0NqRVhlSU0raUc4bWhjbm5UYjE4Z056MURt?=
 =?utf-8?B?bnA0Nmppcm1RYlRYbC93ajFJdERlQ1NHc0NYSEQyTVN1aTBjd2FjcVQvK3Fv?=
 =?utf-8?B?KzRKMWJFZVBnZngybkRPOXcwMVFFVG1JcVB6a2kyemJSaUs0eEZYTGdHVGI0?=
 =?utf-8?B?cjNGdURYSjlwZVpvSCszbUt6NEszYk5MZURNeEtFQkMxajR2Y3pDNnpXV29X?=
 =?utf-8?B?eFFESHNpUXlGT2NrWHV5VHczQktUQ0d3SzF4VThYV1JJWkFvdXVvSHgwemRG?=
 =?utf-8?B?UUJSSmV2ZG5XemZxdzgraVRXSGUwblEwRG9VdmZuOWhsR2JXRTJaK1p3Ti9a?=
 =?utf-8?B?SldPYmxRTkpYd2t0dUh5bFYyNUVMcEZuN29uNjI5M1FYWmRIa2VMb3UzM1lq?=
 =?utf-8?B?cnBJQ0hkQ2kzUTFkbFpRWWVRTTVWcThaT0grT2d4WjB1Wk9qMHBPU3VpUDZt?=
 =?utf-8?B?QVMrd3FmZVhzMzZvcmpPa3lQN3lmbkxxYzFSdlNaOXFDNVRiS28wTHRJeDd3?=
 =?utf-8?B?SGRGZXJ3NE5QK2wxRUlGSnFLSFRTaWdCVEI4OEJQRnhpdTlNUVFQWlMvcXA3?=
 =?utf-8?B?WmpHQVd2NGNmTTN2d2xpdnp0QkhJMlZCYVIxT0VkanpkSVhONUYzRFRMcU5t?=
 =?utf-8?B?N0xYMmpaMGlqY1VvNDFCRnI3OGFQc2d2QU1taDNIV3RDVzFJSVRlcHo2Nm5a?=
 =?utf-8?B?TUJzV2c2US9TTW0zY21iVmVOTWxadTZvNFlwOEdSYjRoeHFLVXZKdGtVeWRo?=
 =?utf-8?B?Mm9hNEZJT1RFUGttMmdtVlMxZFRhMmhhOFJVU2lHR3hFTU5kRXB2S1lsNExk?=
 =?utf-8?B?TENMc00zc1VtTlRMT0wyMURNSWpwTUxPYklxVlpxM3pNZ1pWN2twZ3A2M0Zo?=
 =?utf-8?B?eGRZcGwxdFVCRVI4azdCK3FGWGx0Wlh6TDlxendGVnhCYjZUQnFDT3ZBU3E2?=
 =?utf-8?B?YTd3a3pZeWRueDJGUVl4b3Mvd1BHYWRJMnI2MWxpbExUUzg3aEJMcU9VbTA0?=
 =?utf-8?B?bTFUZ2RLd1BZbkliSVpoRkJ2aFh2ck12dlRJZUgrTElvSCtYQUJiSlF4dFVR?=
 =?utf-8?Q?KYra822VFHhd3d/avdaUYyGRH?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a9c2f0d1-dcf3-4d61-3f8c-08db4631b54c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 08:39:11.3853
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ii1OExP2RDyU73NW7yPEXcI7i+Ui7VwhNflfo8CySsFvQfo5+PDBl1ErGi8jlZlcimH9tTdi84xv6Rf3eVJyTA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB7073

On 25.04.2023 21:46, Jason Andryuk wrote:
> Print the rc when an error is found in device_backend_callback() so the
> user can have some idea of why things went wrong.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
>  tools/libs/light/libxl_device.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)

While patches which are part of a series should be sent as replies to the
cover letter, may I ask that you do not send individual patches as replies
to other (unrelated) patches (or, in general, really as replies to anything,
i.e. also not as replies to e.g. an earlier discussion)?

Thanks, Jan



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 08:47:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 08:47:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526586.818444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praoN-0001xf-Mt; Wed, 26 Apr 2023 08:47:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526586.818444; Wed, 26 Apr 2023 08:47:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praoN-0001xY-KF; Wed, 26 Apr 2023 08:47:39 +0000
Received: by outflank-mailman (input) for mailman id 526586;
 Wed, 26 Apr 2023 08:47:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1praoL-0001xO-V0; Wed, 26 Apr 2023 08:47:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1praoL-0006gY-Ne; Wed, 26 Apr 2023 08:47:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1praoL-0003f1-5n; Wed, 26 Apr 2023 08:47:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1praoL-0006PG-5G; Wed, 26 Apr 2023 08:47:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pRBsiZ2b6pR3e+Ol59WuAYRIbWtFq27ueAiLTd4YfDo=; b=hj2alqUfq792eS1S/0lVll/22L
	QcVpNVNLhPeK4HFx0OEs7OEHuPL4OY2PNUWDkgflNN+NUJ2rqP/FlQQB3ajhExK4bw/cAgzqaikPV
	ye7jlZZYjxtFBFoTwOtAKjS0PDYMSup9tJjUCU8q7q7v8ZSS3b0pBcOe2Pg5WKxCrk3M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180418-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180418: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=736b378b29d89c8c3567fa4b2e948be5568aebb8
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 08:47:37 +0000

flight 180418 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180418/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                736b378b29d89c8c3567fa4b2e948be5568aebb8
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z    9 days
Failing since        180281  2023-04-17 06:24:36 Z    9 days   16 attempts
Testing same since   180418  2023-04-25 20:19:00 Z    0 days    1 attempts

------------------------------------------------------------
638 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 40512 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 08:48:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 08:48:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526592.818453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prapC-0002TA-0R; Wed, 26 Apr 2023 08:48:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526592.818453; Wed, 26 Apr 2023 08:48:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prapB-0002T3-U5; Wed, 26 Apr 2023 08:48:29 +0000
Received: by outflank-mailman (input) for mailman id 526592;
 Wed, 26 Apr 2023 08:48:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HkN=AR=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1prapB-0002SV-30
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 08:48:29 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.163]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1ca7f6dc-e40f-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 10:48:27 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz3Q8m1HY8
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 26 Apr 2023 10:48:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ca7f6dc-e40f-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1682498888; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=GQdomyordl0bHLukLePWnovJVCsLzJZoq2ZusSO92w3F49JqtHUo7k5rvlpeTzwMnl
    3tmkloljiSOOGXkSX+e/t8I5K9fxy8xjE67dUW/Lpitawe7Eo7Zv+Gcovg50yFj61Gjp
    7eIGxU9+o0Iq+TxXTQFwLWbAyYpzxA0xNR5J2CwWY9YVOy2G0WqgevY+6cIANa0qFdw+
    WxjpiZbO5rzmuLjvdZQS5PdjWUBJCxn9Veb+xJh++RkRKwQYS8s9K2lS1sMMhKGjCuko
    +c0ARFRbXlXdgbXyp99NV0XetUmNeBkKY033rc0QXX6WU6JKzTcEJ4Fdr7W3FOITxLkT
    r3Aw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1682498888;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=QurI9vR3tSuWRRdF4/fGjffLmEynoAziTMR+dZMGWEI=;
    b=erZg9fuhcnXr6pL3POqpZyLwQqH7cgSSfiCqiyso35+zIpqQkH66IChgQIOtHNLQYF
    dGmUXH3P7zmw71hU8sKCJRAdTHoTNgmrFwfMKG2g1GY5jizMgCmJH4g++xTCMi7l3UD0
    M2RZcIbvPQ+CkVeJOG9uMJLFoRbNH8riNkMyRstjRLu+VEdlAHFLzQ56hpW2BfmP3IRb
    B6PqydAeAHa3HVIpkssge9mR4GMPHALL08WL+4bf1VucouTo7PFNVpm9s0lrxkwNGZu6
    ARP+CxtodbfA3KrOGbCUBM5AUbYXiMzDuPFyndhX7/RgOGcDkaID+1mJvE0iBHJs5LCw
    Fzww==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1682498888;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=QurI9vR3tSuWRRdF4/fGjffLmEynoAziTMR+dZMGWEI=;
    b=I6310rjdCQETMPzJxVNKSh0v6IOIeEU1RHtSebKqWoUhdpI+OeRXce6AQKT3zbsFOM
    a5jfmba2vbKiavzg4QjXto5fgZcFwTXwQBe6F/ot5AjJrnS+gNAST49EZZrakRXrfGYF
    U5dskY/RFydhzbGjECITM0IDc5b1zXmzaZfZkSAInXS8xcy7QufCHfXbuv7JywVUgMZs
    6cxHAF93juxLfTikS5E+MVHad1byTDQO8rhA4c+lMXA35hADM5sxIJ44sWbtqpD9wtXA
    VLueEtIWY6j0pjJ1oWWSKR6iKcWUXsYAHVkeSD74xhxGMqICWBnL/7WvqpY7vhqwtN7V
    14CQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1682498888;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=QurI9vR3tSuWRRdF4/fGjffLmEynoAziTMR+dZMGWEI=;
    b=vWRMrMhhUfpK0md7JZWIvYfyWeSULSC7zEUpPRHTValARmvZu8NzMaat51oGLp69vs
    hF0GX2iYVs9KoEwld/Bw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR5BBKIYx7sXVVhU9+brASRK3ZldJTnR7IDHecOJA=="
Date: Wed, 26 Apr 2023 10:47:54 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>, Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH] Fix install.sh for systemd
Message-ID: <20230426104754.78845a19.olaf@aepfle.de>
In-Reply-To: <650a7f6e-be82-0312-05f2-bb69e51e828d@suse.com>
References: <20230425194622.114869-1-jandryuk@gmail.com>
	<20230426091533.68324d8d.olaf@aepfle.de>
	<650a7f6e-be82-0312-05f2-bb69e51e828d@suse.com>
X-Mailer: Claws Mail 20220819T065813.516423bc has a software problem, nothing can be done about it.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/jIE2x=fIqeynXMOA2_HVT_3";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/jIE2x=fIqeynXMOA2_HVT_3
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Wed, 26 Apr 2023 10:28:38 +0200 Jan Beulich <jbeulich@suse.com>:

> Is this to be translated to Reviewed-by: <you>?

It was a Nack, in case such thing exists, and if it has any meaning if sent by me.

There are a few places already which do a mkdir -p XEN_RUN_DIR prior usage.
But a few are missing.
XEN_RUN_DIR and most likely also XEN_RUN_STORED have to be removed from make install.
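
For context, the startup-time pattern referred to here can be sketched in C. The XEN_RUN_DIR value and the helper name below are hypothetical stand-ins for illustration, not Xen's actual code:

```c
#include <errno.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Hypothetical sketch: a daemon ensures its runtime directory exists at
 * startup instead of relying on "make install" having created it.
 * XEN_RUN_DIR stands in for the configure-time path (e.g. /var/run/xen). */
#ifndef XEN_RUN_DIR
#define XEN_RUN_DIR "/tmp/xen-run-demo"
#endif

/* Create the directory if missing; an already existing one is not an error. */
static int ensure_run_dir(const char *path)
{
    if (mkdir(path, 0755) == 0 || errno == EEXIST)
        return 0;
    return -errno;
}
```

Calling ensure_run_dir(XEN_RUN_DIR) early in each daemon is idempotent, which is what would make dropping the directory from make install viable.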


Olaf

--Sig_/jIE2x=fIqeynXMOA2_HVT_3
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRI5ToACgkQ86SN7mm1
DoC4sQ//Vy2cuy2PKtfsFjg2NKsOpIL2dLo3HXZV4ugUzgBml5D70zBgSuKjfMjQ
TGn3MHz5I3u2isUXmN12PB8Q5581hYcS0ZxLW0sJg9u2+RVk48jmFHV6x+99+KFU
sh2AHiZAh6dfKhMTKz/bxKyql2i2SVE28huD8Mjdr0KoQH2S5P8BklEOihMYDQns
J+yZhI92u4P4KavdSBP+V61M0ob3aI4y4aost2KPOTZNzS382gdhcOBEkDJaTCp3
z2SX4IJObNmFyq6P596Sofx+ij8Q0vKNVHtg6SPOUdK8W2FiduhqDG1aX7I+FgQv
TmI2s9GK4IaOITL3jrwVgtYI9iKfZNprmC9+hx1h6F1srzcIfqzALkucMLgsDuum
l2PUOpdBYwfXVQxrW5J3xWuVkz4Tiaeng+o7u2OHt/yUwypbI/47hw6gENA178k9
ml65H5utsBkXachhMgnaZj9kCQ0wpWGMjBqudjsLjzu84MReZPKP4sqtJ3Z/dxe9
8EQ6BAzBimCq2raBFzsLkM5KZWGJL2jZLgGGZ/Vja2ANkcN6Ua1Ks569r4JsmRak
r3/LQ8Zs42ZPRZpSluY8gnjMKY9XuHYK5VEY12xvyu1HRl73hK1YiXm3Y+e8rwQy
rtZyfVtYTGlIr2ZdnlGqWXfKShPk2S8GZDAE2As21zB61Vd2N8M=
=WZaD
-----END PGP SIGNATURE-----

--Sig_/jIE2x=fIqeynXMOA2_HVT_3--


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 08:56:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 08:56:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526598.818463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praxE-00041d-QX; Wed, 26 Apr 2023 08:56:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526598.818463; Wed, 26 Apr 2023 08:56:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1praxE-00041W-Nv; Wed, 26 Apr 2023 08:56:48 +0000
Received: by outflank-mailman (input) for mailman id 526598;
 Wed, 26 Apr 2023 08:56:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNUB=AR=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1praxD-00041Q-Dn
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 08:56:47 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20624.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4452d1eb-e410-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 10:56:44 +0200 (CEST)
Received: from DB6P18901CA0010.EURP189.PROD.OUTLOOK.COM (2603:10a6:4:16::20)
 by DB3PR08MB8915.eurprd08.prod.outlook.com (2603:10a6:10:43d::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Wed, 26 Apr
 2023 08:56:41 +0000
Received: from DBAEUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:16:cafe::23) by DB6P18901CA0010.outlook.office365.com
 (2603:10a6:4:16::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34 via Frontend
 Transport; Wed, 26 Apr 2023 08:56:40 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT023.mail.protection.outlook.com (100.127.142.253) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.20 via Frontend Transport; Wed, 26 Apr 2023 08:56:40 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Wed, 26 Apr 2023 08:56:40 +0000
Received: from 8a331848db47.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 FE694588-77F1-4C2E-A959-D3BCB09D76E4.1; 
 Wed, 26 Apr 2023 08:56:34 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8a331848db47.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 26 Apr 2023 08:56:34 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS8PR08MB8779.eurprd08.prod.outlook.com (2603:10a6:20b:5be::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Wed, 26 Apr
 2023 08:56:32 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%7]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 08:56:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4452d1eb-e410-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Li2dIvkxQIQVgVE27pmrEXty1j/6u0TmS1D0UWIdPNg=;
 b=7hdveCXK0m7Phd/d3gmm8upmHZ2the+IBaiaKh/pQ+uOk/Mmj5GKkShSVTbvgvRVyeSplnaX+FO1leQjGsJuO42jCYtpoPI3sUBlX+EZUQKCI1VFCR7k422T0hIWY/exEpOwtyrK1Aic7Bp88NXWPsTsB4DT5Q0fXJxhni//l6Y=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OcvFf1mT4Nqht5bX25fEbTJFz7QURyys93LiTtBbXdREnDvIJP6FDthNppzl4Fy2e4lfVS2p1RMlPpqWopXu/4X/7vLCYomzWIc//hP3o+3GBp4cZFq1a7E7Q8JcIh9/4wBYKlR/Br+OImI82WZ8/Q40K1aHND7DKXvaH8JLWMalYGa6yEf+ZxitpWxR1Jgxny0VKQy6ygRYyQrS4UUZ6t4HGazEb3bkbD+htVL5/nhF4l0F+ptb3nH9wqBxCNG2XWntYYCRuTm6hPdeBoFYlP57/+Ttho1VBu7AhxL1R5UbR3gF4B26i3LglqiDqW2jcZX731hnEY84GiiMMI622w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Li2dIvkxQIQVgVE27pmrEXty1j/6u0TmS1D0UWIdPNg=;
 b=iQGI0mcvl8JhGq9cO943UOpd1ZrkUsraeMF+TUj60GG9t/EokzMofY9DdpGJPX4/5t9xPPZ2Ci5sU229SyjYzEfE8fkaD8g6lImQInz8jj9vXnSBhTXk/Pf55idvIZyZs1q7fUu/yLmdABQOxNj6TzrIkYA5+Lv5Z3o19en94PmX9Wfyc0KKxwZDh75m1GUPrr6mJhZfVJUAPz0cxzelb0oTlzkPJSsH2V/XTzB/aCy1zN/mN1yVJu6Yz5XKrLRa2Q2HHrFt7+txInEGrnt0W8SRR25/9zYIaB+PNKq+DC24U/HyodGLx0Mgj4amWXcv0g3dOnmfKsFNpUACLBI66Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Li2dIvkxQIQVgVE27pmrEXty1j/6u0TmS1D0UWIdPNg=;
 b=7hdveCXK0m7Phd/d3gmm8upmHZ2the+IBaiaKh/pQ+uOk/Mmj5GKkShSVTbvgvRVyeSplnaX+FO1leQjGsJuO42jCYtpoPI3sUBlX+EZUQKCI1VFCR7k422T0hIWY/exEpOwtyrK1Aic7Bp88NXWPsTsB4DT5Q0fXJxhni//l6Y=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Thread-Topic: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
 tree NUMA distance map
Thread-Index:
 AQHZd0ulMpqFICe/LUiDVq6+aDsW9687tFqAgAFSIKCAACZ1gIAABguQgAAM+YCAAAMWUIAAB/ZA
Date: Wed, 26 Apr 2023 08:56:32 +0000
Message-ID:
 <AS8PR08MB7991108BC21AF3869FE5C03092659@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <188a01f0-a2d1-0f2d-4d01-61a259c790f1@suse.com>
 <AS8PR08MB7991F2DDC4C13F33390557EA92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <e783d71f-cc0c-e235-28a8-7ec9ad63d41f@suse.com>
 <AS8PR08MB79919EAEDCD85073CAECE5DD92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <cebf1a46-71cb-a700-78ac-f9ee8bb64c22@suse.com>
 <AS8PR08MB79911D9836C2954C047BECDF92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To:
 <AS8PR08MB79911D9836C2954C047BECDF92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 61B970A5FF209D4EA5E4A564E725E220.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS8PR08MB8779:EE_|DBAEUR03FT023:EE_|DB3PR08MB8915:EE_
X-MS-Office365-Filtering-Correlation-Id: c2e315ee-41ce-46bc-0dd5-08db463426c7
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 iWkaEvQkOm1cPqY9usIdDNIwH8XPLD4JMCCHWQ/imNIu+CXOkwzBEydNXNIVInGa93vq4KsCRWQC5ThCYqOeVkvic0jucACgsZMpwG2CiIvJtuwFY4U2MW6NkiJlqsSr0WK6+pdDxmQRRc4aVp+40eLECQ0SDxCmVEacCwnAHvzZ80Tvl6L7pxZ8oXnC2SDfwa3NXjuzpMjC3yAtW7ClGHICYXLPqSD9E3uORjIhAhNPGv5lKF9gxBVJASwSY9EkiYeHC0eR2I16mqkAnk+/wzrHoWcN8HpBZ2c7Ladj+DgnoWS/1qtHeKNPyKWI7FgDw9HloiDOFfCAGwj4GzTxCxNce9SC9ksbVGLmC6x1waqgbR2FscLl3pyA2LnEIOuksrLNQ+SCBzYzELLtvKcwOd6rchTDNpsCFCtIU0y+PpjvleLZiRWd7z3cBBe8/CbyW44FMKSgLvdwkzz4jWnE7293hh2uqPiWfEbmT3xGbu8aDwa8MD6xDz9P48bKXzx5vv0JNuXZfh4QWg7i806ENoQWHBPZ9mlGgHyVZUjw3oNyb/kVRcRI5feNoq3r/BMbDCGmS0yl1lmW6Ur9j9gkufB85ZfBPY+rLmxVqLQ/56Yi6+WZ34P6thJcbW1OVio/
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(396003)(39860400002)(376002)(136003)(346002)(451199021)(86362001)(2940100002)(186003)(9686003)(6506007)(26005)(38100700002)(5660300002)(52536014)(7696005)(33656002)(83380400001)(71200400001)(478600001)(54906003)(38070700005)(41300700001)(8936002)(2906002)(8676002)(316002)(64756008)(66446008)(55016003)(66476007)(122000001)(76116006)(6916009)(66556008)(66946007)(4326008);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8779
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a68b72f7-4415-4dd5-e523-08db463421e3
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+xufpFb35lgCtT9tJ51IcY5q+nGW/AeBhIMAchx25qB7zFbSF0TeCspdb8eBEuNcRaxarfMilP15j+F/nIdEsquqX0QQ0YKRLGiOS/0KApXPkeLoZKhV2v4b/0e+2pf+4iy22vpcQnooxyJq5Qv2xePQdCJyZTF2Q/etFqj52dnn81AyHlmi4CK8dknj1FrW4zRbnSmBiZ9fF+Uerhzc3QucMZLvvn3MKLLiwWJckNQ8YwrhFPLr360NOPzOLZAciShmbd+N3WE00DKz8FAGKdVISLCHq9zMBBNeKYL/8FZQei4VpL4+jnt4hSrFOKv/jJUyh/pTzN4N/qfd5IYTM5zYOkYI+If3h/xu5zenVofIph3zo4fPFi4YKry3CCzI9dX1ONYuK9i4QPXbBx07CUxA83VbHMmqKp8+RbNpVv++0XoOzDMGPEAgfxkYJ/saR/+uwoCKAKCgsoaJ90AYe1TJeYfdNOqjfXCFiPbnXKIqLsGxxZgrx9y8R2QurSRV/UAzgj5QIxaST2pZrS754c2trWUfJPotn7UZ3IyGo7y6rq1quI2rweNDLkQCvM/cXv+QQN+N78kSYIHX+zyvkkmo5/7dH5qTxrqpE9rZGJ8T9QPtsmGvXhP5G5GbZVrU6boDu7hC0r8WTYXkcmMTwW8PS5bqZhS25AHLbLyKyvRqhRKggvSbAJopWpYxwsyCMmHrVZAx7UaTT7l3OY726qUBXGPWNYuMlxIKoEeNW9U=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(346002)(136003)(39850400004)(451199021)(40470700004)(46966006)(36840700001)(478600001)(34070700002)(2940100002)(36860700001)(7696005)(40480700001)(81166007)(356005)(9686003)(26005)(186003)(82740400003)(54906003)(83380400001)(6506007)(47076005)(336012)(5660300002)(8936002)(8676002)(41300700001)(70586007)(70206006)(2906002)(4326008)(316002)(6862004)(52536014)(86362001)(40460700003)(55016003)(33656002)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 08:56:40.5724
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c2e315ee-41ce-46bc-0dd5-08db463426c7
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB3PR08MB8915

Hi Jan

> -----Original Message-----
> From: Henry Wang
> Subject: RE: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
> tree NUMA distance map
>
> Hi Jan,
>
> > -----Original Message-----
> > From: Jan Beulich <jbeulich@suse.com>
> > Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
> > tree NUMA distance map
> >
> > > Great points! Thanks for pointing the 8-bit truncation out. You are correct.
> > > Somehow my impression of numa_set_distance()'s first two arguments
> are
> > > already "unsigned int" so I missed this part...Sorry.
> > >
> > > In that case, I think I will add a check between "from, to" and
> > MAX_NUMNODES
> > > as soon as the values of "from" and "to" are populated by dt_next_cell().
> > > Hopefully this will address your concern.
> >
> > While this would address by concern, I don't see why you want to repeat
> > the checking that numa_set_distance() already does.
>
> Correct, I think I would better to move the check in numa_set_distance() to
> the caller fdt_parse_numa_distance_map_v1() as I believe if the truncation
> really happens it is too late to check in numa_set_distance().

On second thought, maybe even remove the same check in __node_distance()
if we do the check in the caller, as I believe they will suffer the same problem...

Kind regards,
Henry

>
> Kind regards,
> Henry
>
> >
> > Jan
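
The sub-thread above concerns bounds-checking the from/to node ids and the distance parsed from the device-tree NUMA distance map before they are truncated into an 8-bit distance table. A minimal, self-contained sketch of such a check follows; the constants and the function name are illustrative and do not reproduce Xen's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical values: MAX_NUMNODES and NUMA_NO_DISTANCE mirror the
 * naming in the Xen/Linux NUMA code, but the numbers here are only
 * illustrative. */
#define MAX_NUMNODES     64
#define NUMA_NO_DISTANCE 0xFF

/* Device-tree cells are 32-bit, while node distances are commonly stored
 * in an 8-bit table.  Validate the raw cell values right where
 * dt_next_cell() would have produced them, before any truncation. */
static bool numa_dt_entry_valid(uint32_t from, uint32_t to, uint32_t distance)
{
    if (from >= MAX_NUMNODES || to >= MAX_NUMNODES)
        return false;            /* node id out of range */
    if (distance >= NUMA_NO_DISTANCE)
        return false;            /* would not fit in the 8-bit distance table */
    return true;
}
```

Checking the raw 32-bit values in the caller avoids both silent truncation and, as discussed, duplicating the range check already done inside numa_set_distance().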


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 09:07:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 09:07:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526604.818474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prb7h-0005e2-Vx; Wed, 26 Apr 2023 09:07:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526604.818474; Wed, 26 Apr 2023 09:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prb7h-0005dv-Ry; Wed, 26 Apr 2023 09:07:37 +0000
Received: by outflank-mailman (input) for mailman id 526604;
 Wed, 26 Apr 2023 09:07:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prb7g-0005dp-Dh
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 09:07:36 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2088.outbound.protection.outlook.com [40.107.7.88])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c80d3cc5-e411-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 11:07:34 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9388.eurprd04.prod.outlook.com (2603:10a6:20b:4eb::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 09:07:19 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 09:07:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c80d3cc5-e411-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n93/cOZQKl4v6rkZfsPsNQmTkqAcPdxnFBtOEYfVk9dQO1QtHdMa8bSM4yFoiKMizkL1lDvm/G1n1T7pMR8xHw0Dj+iJFBYBhDDjoAFb+Hsk4+iu/0QcSW/MYoVdokjXd25rqSKefTsF5gpqQBPQfN0/PBO7dVeMuEHKfcRW2OfeyS5UFxdsL/MS5zl8KJ/YTRKaaDqyCJs4TDvJVko+9nOyP1ADseweDrke6E76o3nC5sAdKMIrJJqshmE9JxQ8/r3Qe4e4zpot9fF11E4VJMR4Bslct7rhmdjTb2H0zxQ6tdsBr0280+dbF0vHQd1XKTzgcKLvGoWVW02ICdSeHw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GkSjVz2yZSYybsLyLMUVxLzZYRjqtAbROe0L+OSR2XU=;
 b=PivZAujCuWwQGytxovwcHSaaS8kvnUTu13Y/bUWDuaqVVQwGQFk2OoqHKu8/Rxop09vKnOTs0w4gQe7pm7PVKiwyG8nV8SGRZSfpE+Is2vDvwqVVJlazBayo34WSv/JHaQUnX8qg06pqj/OMFIlm+RQ4wBGrcQ25jhOLFpLfudQSXu6B8uhjmWdolvpt0XBFdI/O/pA0XwV4EudrlNN70GQ0lu/IxDUiawBzZJatGdog0yBH0VOeD/uq+01/Zd5mn82uh+Zr+waXLJf8pU9ux/24yFaR15Iq/U3EWZfKu1nYZHtRZtx+flo4tOseitaXu2E62QTSUq9F2ivR6xwugw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GkSjVz2yZSYybsLyLMUVxLzZYRjqtAbROe0L+OSR2XU=;
 b=GfA0VwT1TdbOzI9AR2LZxbmBvfVXDSIpyJOi+I4Ky3TYbTolQuvh+DOtewo2pHO0cLjqBOdNMXvVJNOt1I8htkPRPZqj+e++fhtrrrn76GhHXZ9fYjC5vGDCpUiMpH6SXo0blaQosiFQ/2RlWfWd5VbljAGo2QimWid/V2Fa0r4VdkDQj3RUl7SRQwqOy8kkRuxL1wuDGtRzZduRJ6r2M36G7nDXteiSWNU1Lkjt6MxZSdeqfjvOz/cWr5G8OCIz7Ip6p3gCO+g4TkmUux1geyZf5USULf6sTgafIkN8AXSYINXO+8qzzZHXsvd4sO056fGi4S7HeWtE8+eVYDlylQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9dfb4f01-979e-e225-214e-34ddb51a9199@suse.com>
Date: Wed, 26 Apr 2023 11:07:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] Fix install.sh for systemd
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Jason Andryuk <jandryuk@gmail.com>
References: <20230425194622.114869-1-jandryuk@gmail.com>
 <20230426091533.68324d8d.olaf@aepfle.de>
 <650a7f6e-be82-0312-05f2-bb69e51e828d@suse.com>
 <20230426104754.78845a19.olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230426104754.78845a19.olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0089.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9388:EE_
X-MS-Office365-Filtering-Correlation-Id: f3d62308-9855-4c45-32d2-08db4635a349
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RTaTUOIWVDwO4xFYT5PwzzULKPj6ImPbXGBYvVSjBFB7W3IRovjANGm7pPJFOD8v814abX2dn3oD/Ybo6DIdeXWWGS4K3oxOlnfqJLmjmoDA6aM9TUZzedik2UaHWukAzCExwc0o1Zj8P7XvEI0TsEYuQz/L4CaOJya0yXUjIjFWy3Qd8ZtXSIn71ZMFp5cxH0V8Rj7vtAnp6eiqvZIRKljQOKl6RB8csjCqiR3GFmUzsZea8Xco7lPLbmT8MWmsorJlZR9A41ooGAF+ne8x6bNBs1PxrkQOqHrxB9e/42lcNwKhzp6uAxqn+xPXUhEXWz3ZYqX0ySTujd15qX8og5hD4jg0Snm8nbxuxVpUjbNaLTyA5+vclLqQBZh4qGvmRAdXUP2UZustn2dFLoQ06WkWlu3jWucLjpWJ6Df9XmtO44tL+2wy/cG5G/QcuQdMayWP8wXSlKEqmmrUnliVElw/VXGl+rgaHrC/Sqoy6nplY7Rtl8E6AWilHCChicn+rQA/PmFUvkx6i8bk/TNCuIr0VqGeVvXg/HFoJreLNhD4pTkV9HkmH9fIWxhzIjB3fyOzZFbgjPu3B2bt6XJCm5Dn6itUytAph3w7F+keH+ru4Ac0REwzCglTJYAzj14wWNTiFcVsza4cOu8HR0kpLg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(396003)(346002)(39860400002)(136003)(376002)(451199021)(5660300002)(8936002)(38100700002)(8676002)(86362001)(316002)(66946007)(66556008)(66476007)(6916009)(31696002)(4326008)(41300700001)(4744005)(2906002)(186003)(26005)(6506007)(6512007)(6486002)(36756003)(53546011)(2616005)(31686004)(478600001)(54906003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NTNicGwyaEdVeDBERlF2K3hER1o0ZlZGM2hraElRS2ROdTdMQk5VU1pDajBW?=
 =?utf-8?B?dk9jL3pUYTlTQzZPdlF1Rnd4RWxNcThTWHJXLytxS3M4a3Zva29DQ0JXL21z?=
 =?utf-8?B?RWZWaWVGWUFBODFJNXRubW9MdzFXVk9vUkttZE9aRm5aTmZ1R2xmK1h5WU5W?=
 =?utf-8?B?VE42RjBDOENON0M1NDNLa0cyQWdmTmtaRXdWeENMYU1IOUl5bFFQVGFXeXpy?=
 =?utf-8?B?TzBISTRHS2J0WmVKdGxsWjB3NHFib1JNL2pxYXVuMnNBRHhsYlFrZG41L3p1?=
 =?utf-8?B?RVQzbWxLZUJDc0hTekRJOWpQTU5INHJ2S0h4OS9JTUJPNStMbWQ5L3FiTTFZ?=
 =?utf-8?B?TTZzZ2JZdnkxcUltcW84M0k4dWN4RUxPM0l5T0RucDdRRm9vcytIV0RFRmZ2?=
 =?utf-8?B?anMyaTdWTnFkZkFvZDJ6enlqRVZpSEh1Sk1ndEhINUJkU09jTWwxSVVac3hT?=
 =?utf-8?B?dENIUnVXbjdvTkZVcXNxbUdFcXlONjdUYXpCcm44Uk5abHFxQmtGaDFqZk5y?=
 =?utf-8?B?WkxscDlOOWNKelhqQ3VqdDlEZDdWT0ZhNXo4QWJXamJEVU9VN0dCKzlneXRC?=
 =?utf-8?B?SSs3R1hkREpMUUxKZ2owRXhVeUYzQWtQYzVmMTNFWmRCaS80WVJGWXpsZktQ?=
 =?utf-8?B?bkpyaTdzSmxBdXBoT0RqTXdJUUNleGxSdWF0Mlo1ZVRDbVRLL0lZNDFaVkdC?=
 =?utf-8?B?Tnh2OEV6eTFHbkw0NmdNZ1pOdHVkMlhuMmh2d3FMMGw2MWtSYnZmMm1PcG82?=
 =?utf-8?B?WG1QbzJOeFF5NDIrWE5vMFhlSTR5c3QyaVZqY0U1M2RaU3hCVlRqRUU3NlJw?=
 =?utf-8?B?aFR4WWRlbUpEMEFHVGFNMXFDZEpYei9tcDkybUp4Nk4yOGVYcTVzZTJodXA4?=
 =?utf-8?B?bGpxb05iWXI3ZCtSakdEYzVEbmJTekZPVVE2Y3UvYnZJWTYrRDhFUWoyNjR2?=
 =?utf-8?B?ZllPeHJ2OWNXcnhLZkpTZ3BLV3JueWd2RWxteE9EN3ZWWDhNZ2FldXJib2Mv?=
 =?utf-8?B?L1o0SU9SQXVDeUkrME92UGNqZzFwR1pGOHBOUkJ5TnM4ZTB6OUVHakFHbUhx?=
 =?utf-8?B?Y1MreXNGYXZaMXMyblNHbU9LNFRVTTRNbTNzeEoyYllEWnFsdDltSlo5d2Fm?=
 =?utf-8?B?c3NjL2JpWlY5RU1NNHpTY000c0k1eTkzbC90NkgzZHN1NEpwMllWMjhtMEFM?=
 =?utf-8?B?KzFkYjloenJRTzlFRWJXaFFFNlM4SGdEeUxmcndDckpKT0FaNUJIaU1BYWt6?=
 =?utf-8?B?VXNtMmI3K2FPb3BnM1dVdHlNVmsrTlB3R1ZkV0p3NHB6cWlVSFFGQitNWXU2?=
 =?utf-8?B?cEpoeDhSaU5QUU42dHBEYXFyb3EyU0JRbkNyWkQxWVhtS1NoOE1td0xvaE1J?=
 =?utf-8?B?T1FoQ0V2RitCOERCVWFPaCtpNnJuUWRLMy92Zk8vOHl0QWtsbEF5ZHFZajVV?=
 =?utf-8?B?ZzU0bXV2bDgvU3dEV0RvMWd1UmI3MU1lMGdBdzg3cnEyWk5iTFM5d3lkTUg3?=
 =?utf-8?B?ZlNxU1pmdURrdGx6b2NLS0syK2E1SStlZ2cvMkJPTlg3b3QxenlmcEIxbmtJ?=
 =?utf-8?B?MUN1YXBOblFnMi9RVUt2QXVWSTAxMC9BNmVXUVRRQ1dyVC9TbUxFdlZnV29s?=
 =?utf-8?B?OUJHTEl0eXBHMStwQWMzc3NWcXErSi9OdnRvbE9WVXM4R3RLdi9Zdkk4OVNJ?=
 =?utf-8?B?U1ZPYXQ4QWJmU2JFaVpFVEhHZFpZN0c2eEwzSktNTk5XS2NzcGpZOW5zSkFH?=
 =?utf-8?B?alowRHJncUpiN0swS1hHSE50MVpUM3ViL1JHWVJaTWdGU0NYUGdabk54YUx5?=
 =?utf-8?B?WEFOaytSSVVvUFlXTE1OUExJZkJoQ2dSUFNvekUzazl4QXpPdm0vWkIvSjhI?=
 =?utf-8?B?blF4WjRzOGUyd1hOM2ZqRHRtNC82RVVMZSs2NGdQWnRSSVpNWVJIa3JES1lX?=
 =?utf-8?B?Q1ZNT1VGTy9KRmdxclpaMjBpUytSSXhBRGpPUlU4OGVXVTh6N0orMDVpUmJN?=
 =?utf-8?B?ckJQdlFhbHdkZGFwSnRxZWdmNkVLZkpKNHBIeHpBeWttMFI0WTFqcDNkR25h?=
 =?utf-8?B?aTAxUm0vTHBNVE1uNTA4bVZBcFFRU01QYmNFNVVYa1ZURjJXM0RFeDZvMkxx?=
 =?utf-8?Q?lQR+RGA1+r+HsniscMl456sd9?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f3d62308-9855-4c45-32d2-08db4635a349
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 09:07:19.1150
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ix4/YjDGxO49hOJh8/7sT7sS8uph9ngk4TbK3g0rY+ZYe2K7ZfpKOnKHfz+CGFLHtOuGIsExhaJ0QwKoMPAj7Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9388

On 26.04.2023 10:47, Olaf Hering wrote:
> Wed, 26 Apr 2023 10:28:38 +0200 Jan Beulich <jbeulich@suse.com>:
> 
>> Is this to be translated to Reviewed-by: <you>?
> 
> It was a Nack, in case such thing exists, and if it has any meaning if sent by me.

Now I'm confused: Your reply didn't read like a nack at all, at least to
me. Furthermore ...

> There are a few places already which do a mkdir -p XEN_RUN_DIR prior usage.
> But a few are missing.
> XEN_RUN_DIR and most likely also XEN_RUN_STORED have to be removed from make install.

... this suggests to me that you really mean the change doesn't go far
enough, but that's then different from nack-ing a change. Can you please
clarify this for me (and maybe also for Jason, depending on how he has
read your replies)?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 09:28:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 09:28:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526609.818484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prbRl-00088C-Nz; Wed, 26 Apr 2023 09:28:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526609.818484; Wed, 26 Apr 2023 09:28:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prbRl-000885-JW; Wed, 26 Apr 2023 09:28:21 +0000
Received: by outflank-mailman (input) for mailman id 526609;
 Wed, 26 Apr 2023 09:28:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prbRl-00087v-01; Wed, 26 Apr 2023 09:28:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prbRk-0007X5-O4; Wed, 26 Apr 2023 09:28:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prbRk-0005ci-EQ; Wed, 26 Apr 2023 09:28:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prbRk-0006SL-E1; Wed, 26 Apr 2023 09:28:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BW38ziko93v2XHUfx6IUbK/3fFV5RFEYgxJOznr++7g=; b=1RwscuhgFLulJgYCe0tMKZUsYk
	9+W4OgIFZrfOOvGJg3EBO/1iZvKxAMGapXeOeY0CKfkBpZsIObN+XvUcr3MuWsWbiw715U1wHjHtT
	1NiRCjr8PKGk+IxFAawjy1AbJJX3YbkGLfMK2QPADAbPPUQD+1HjFFsX8CcyoQtTum2s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180423-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180423: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=5a349b96b171e85744024904b0c8453d06d2fb45
X-Osstest-Versions-That:
    ovmf=18f463edbaa911b6e2c32a3e783bf6c2c9997512
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 09:28:20 +0000

flight 180423 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180423/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 5a349b96b171e85744024904b0c8453d06d2fb45
baseline version:
 ovmf                 18f463edbaa911b6e2c32a3e783bf6c2c9997512

Last test of basis   180409  2023-04-25 11:10:48 Z    0 days
Testing same since   180423  2023-04-26 03:40:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Igor Kulchytskyy <igork@ami.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   18f463edba..5a349b96b1  5a349b96b171e85744024904b0c8453d06d2fb45 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 10:41:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 10:41:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526618.818494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prcaI-0007xy-U0; Wed, 26 Apr 2023 10:41:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526618.818494; Wed, 26 Apr 2023 10:41:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prcaI-0007xr-RD; Wed, 26 Apr 2023 10:41:14 +0000
Received: by outflank-mailman (input) for mailman id 526618;
 Wed, 26 Apr 2023 10:41:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HkN=AR=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1prcaH-0007xl-Kp
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 10:41:13 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.216]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dbc382dc-e41e-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 12:41:10 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz3QAewIBW
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 26 Apr 2023 12:40:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dbc382dc-e41e-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1682505658; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=EpjqCiV3gnmm42+9O3vkwFVAjru+cgqxjkK6VGJRowFM9MGQQc9IvV/F42T2jNIRWc
    2vYAyB8yJGhJnSC5flbE8J0F5+UGgbRjtMKXuJY+xToQODqSDmUTZsPvn1c/h6xO6dSV
    zJJ5g9kQAcPYKTZ/NZy0MrEJBo2HwX9AhyVaGGhiIcx8WfUPIXUWcpKa+esSmefa6mJF
    2yb0vW2m5jUUebEIEIquT9AckCfySP/uc9odfV0t9Kr5LTn9Oshhim1xD3Jj6zd/rWtp
    jTTaVj34rsorH6AZSNZmxlT002MelAtNiPo97mVXRoyMa740PYevXbiLnQkY4D/D68py
    ID8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1682505658;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=7Cv5/o3BY5zVvZgmulq4jQTU+WB0ze9LCIknuwExZm0=;
    b=F93Rv2xQXExxDBIy3YWb+le5l6JzdNDddaMa7jJyi9qn6cyMxqIwR5xqiy1KOX6Jap
    oir6x5aJRTGhihjItsCCq952i75czRRz63VDJ4nfTnM6JCfsAqk/8FzdWX8LKwXwrhss
    BfdCLQlOTltgV2vJKBSwwDMpA1fRs+Ged7zw03/Awi7djJyZ79OnfJL5v7E14K1FlUPy
    Np3Q2KeW/u5wbapbyqcDhcksZNjZWCvPNp+hPiSbFZCKxrGHOoOSJH0I3x74HKWYtC26
    s0u2zAniIQYaDOZtyb2HgBC0zjuCP5P+Pp1mXmWsHuh0LKV0RBjFYYmRxdnkAGBzzlRg
    +b1w==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1682505658;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=7Cv5/o3BY5zVvZgmulq4jQTU+WB0ze9LCIknuwExZm0=;
    b=H89P/P1s61Mo/Rbu0lXd4HcMbcu/f33z/4MlzvbP5PZM4qGoa1pjChy2OHB6H4BQEh
    ULDYX2SqAKA8OzfGFr8sGfxeLXodUCNLUNy+fTDJQMFfYKwbMwth3dk8wViF0cXIF0CW
    09qbADyB9+ZKhPq9FZ4QS2+fsIjF3WOdRV1Jx4Va2xy3U/eq8bkyA6RGoYF23HuyOCcj
    KxIZjFZmL+XuMd+SyzABERTFSjrvLhKFFjVzmk74ajxoIEV7MMhHg75gMqFQFOyKRpO1
    107KYUvtGCzD0gAnl2krZKe9DJG9uXzT9ZP78cUktc4IgECtvP/RtIzmxy61zmK724TY
    Sjdw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1682505658;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=7Cv5/o3BY5zVvZgmulq4jQTU+WB0ze9LCIknuwExZm0=;
    b=6EsxgS8dTVTP364CxSDuG0RE4/dJ6Hx0x54uY5QvPz/mS33uqjVmmkJqBMQWjw4bX3
    1sYmet2W/vu9MuTks/BA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR5BBKIYx7sXVVhU9+brASRK3ZldJTnR7IDHecOJA=="
Date: Wed, 26 Apr 2023 12:40:51 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>, Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH] Fix install.sh for systemd
Message-ID: <20230426124051.24c2a9a6.olaf@aepfle.de>
In-Reply-To: <9dfb4f01-979e-e225-214e-34ddb51a9199@suse.com>
References: <20230425194622.114869-1-jandryuk@gmail.com>
	<20230426091533.68324d8d.olaf@aepfle.de>
	<650a7f6e-be82-0312-05f2-bb69e51e828d@suse.com>
	<20230426104754.78845a19.olaf@aepfle.de>
	<9dfb4f01-979e-e225-214e-34ddb51a9199@suse.com>
X-Mailer: Claws Mail 20220819T065813.516423bc has a software problem, nothing to be done about it.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/zhIK0kJ9tk+j6u24=l2I7ib";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/zhIK0kJ9tk+j6u24=l2I7ib
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Wed, 26 Apr 2023 11:07:17 +0200 Jan Beulich <jbeulich@suse.com>:

> On 26.04.2023 10:47, Olaf Hering wrote:
> > XEN_RUN_DIR and most likely also XEN_RUN_STORED have to be removed from make install.
> ... this suggests to me that you really mean the change doesn't go far
> enough, but that's then different from nack-ing a change. Can you please
> clarify this for me (and maybe also for Jason, depending on how he has
> read your replies)?

I think the change should look like this; the runtime directories have to be created at runtime.

 tools/Makefile                                     |    2 --
 tools/hotplug/FreeBSD/rc.d/xencommons.in           |    1 +
 tools/hotplug/FreeBSD/rc.d/xendriverdomain.in      |    1 +
 tools/hotplug/Linux/init.d/xendriverdomain.in      |    1 +
 tools/hotplug/Linux/systemd/xenconsoled.service.in |    2 +-
 tools/hotplug/NetBSD/rc.d/xendriverdomain.in       |    2 +-

--- a/tools/Makefile
+++ b/tools/Makefile
@@ -58,9 +58,7 @@ build all: subdirs-all
 install:
 	$(INSTALL_DIR) -m 700 $(DESTDIR)$(XEN_DUMP_DIR)
 	$(INSTALL_DIR) $(DESTDIR)$(XEN_LOG_DIR)
-	$(INSTALL_DIR) $(DESTDIR)$(XEN_RUN_DIR)
 	$(INSTALL_DIR) $(DESTDIR)$(XEN_LIB_DIR)
-	$(INSTALL_DIR) $(DESTDIR)$(XEN_RUN_STORED)
 	$(INSTALL_DIR) $(DESTDIR)$(PKG_INSTALLDIR)
 	$(MAKE) subdirs-install
 
--- a/tools/hotplug/FreeBSD/rc.d/xencommons.in
+++ b/tools/hotplug/FreeBSD/rc.d/xencommons.in
@@ -34,6 +34,7 @@ xen_startcmd()
 	local time=0
 	local timeout=30
 
+	mkdir -p "@XEN_RUN_DIR@"
 	xenstored_pid=$(check_pidfile ${XENSTORED_PIDFILE} ${XENSTORED})
 	if test -z "$xenstored_pid"; then
 		printf "Starting xenservices: xenstored, xenconsoled."
--- a/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in
+++ b/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in
@@ -27,6 +27,7 @@ xendriverdomain_start()
 {
 	printf "Starting xenservices: xl devd."
 
+	mkdir -p "@XEN_RUN_DIR@"
 	PATH="${bindir}:${sbindir}:$PATH" ${sbindir}/xl devd --pidfile ${XLDEVD_PIDFILE} ${XLDEVD_ARGS}
 
 	printf "\n"
--- a/tools/hotplug/Linux/init.d/xendriverdomain.in
+++ b/tools/hotplug/Linux/init.d/xendriverdomain.in
@@ -49,6 +49,7 @@ fi
 
 do_start () {
 	echo Starting xl devd...
+	mkdir -m700 -p ${XEN_RUN_DIR}
 	${sbindir}/xl devd --pidfile=$XLDEVD_PIDFILE $XLDEVD_ARGS
 }
 do_stop () {
--- a/tools/hotplug/Linux/systemd/xenconsoled.service.in
+++ b/tools/hotplug/Linux/systemd/xenconsoled.service.in
@@ -11,7 +11,7 @@ Environment=XENCONSOLED_TRACE=none
 Environment=XENCONSOLED_LOG_DIR=@XEN_LOG_DIR@/console
 EnvironmentFile=-@CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
 ExecStartPre=/bin/grep -q control_d /proc/xen/capabilities
-ExecStartPre=/bin/mkdir -p ${XENCONSOLED_LOG_DIR}
+ExecStartPre=/bin/mkdir -p ${XENCONSOLED_LOG_DIR} @XEN_RUN_DIR@
 ExecStart=@sbindir@/xenconsoled -i --log=${XENCONSOLED_TRACE} --log-dir=${XENCONSOLED_LOG_DIR} $XENCONSOLED_ARGS
 
 [Install]
--- a/tools/hotplug/NetBSD/rc.d/xendriverdomain.in
+++ b/tools/hotplug/NetBSD/rc.d/xendriverdomain.in
@@ -23,7 +23,7 @@ XLDEVD_PIDFILE="@XEN_RUN_DIR@/xldevd.pid"
 
 xendriverdomain_precmd()
 {
-	:
+	mkdir -p "@XEN_RUN_DIR@"
 }
 
 xendriverdomain_startcmd()


Olaf

--Sig_/zhIK0kJ9tk+j6u24=l2I7ib
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRI/7MACgkQ86SN7mm1
DoB+Rw//YYUMzUCf0kQ0lG+62CT2JlCC75TM4vZ5ppndZrfm3qnszQ7Zs4GSRQat
XQiMob1OnrE9LGouv/eoT638fAhNvOSmbZolB7cOLz256/G8kyAO43FiU461ax6e
QARvxABqmIbtgskm1gYovpouXaNAcTiyujPA6VmmMSDmyRFRsOJtQ5vji85YwHTy
u90SeLNAJgC++yXmc9EYznbn/MlRAldHRAxbDU3NqdHSw4PKoWP4R/hnC8JfycdY
pBfRCghrUgbICQYrTZ2HLXxwgKdUkDdAjZn4ca+UNUlmgIiCXh2q+ap7lWbyVgEA
H6HbqInYQy9UMfHwTqAqb+gp/dI1AosL9rd/TUKOy+g3ZAAK0/AeGpQe490UxuC3
6F0/EEXHLVX0Oi+jgtK/h6lyJvxpfk316JuPYr+6EI2YB5sit0C9eQNa2Wrqc5Ku
zsX60Ctxu3ZYOgvUpFFKizBUPJ6hb0gXRiyLLawfzicOE3d8vfpbnJoEolOrjGcL
oH47tXUvXlmFGUpaJz5d1f5lMdx7OstoZVEdE3bIQo4KfBpQyuSkqfI5KTdpimfb
M5ukNWK5Fnub47K0UonzLq8MULLuWAVJOLzYYuffVPrM8iICGHyJDvkKsw2GdgXE
VcCY8k1QbxpR3p8lMBNbUvoviuJvlsgHG+wJwfA3/hCJPkKyLY8=
=to8F
-----END PGP SIGNATURE-----

--Sig_/zhIK0kJ9tk+j6u24=l2I7ib--
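The point argued in the message above, that volatile runtime directories such as XEN_RUN_DIR must be created when the service starts rather than at "make install" time, follows from /run typically being a tmpfs that starts empty on every boot. A minimal Python sketch of the idea (the ensure_run_dir helper, path, and mode are illustrative, not actual Xen tooling):

```python
import os

def ensure_run_dir(path, mode=0o700):
    """Create a runtime directory at daemon startup.

    Anything placed under /run by the package install step disappears
    on the next boot, so the daemon itself must (re)create the
    directory, mirroring the added 'mkdir -p "@XEN_RUN_DIR@"' calls
    in the rc.d and init.d scripts above.
    """
    # exist_ok makes this idempotent, like 'mkdir -p'; mode is only
    # applied when the directory is newly created.
    os.makedirs(path, mode=mode, exist_ok=True)
    return path
```

Each start script can call this unconditionally: it is a no-op when the directory already exists.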


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 10:46:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 10:46:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526624.818504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prcfH-00009d-Gv; Wed, 26 Apr 2023 10:46:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526624.818504; Wed, 26 Apr 2023 10:46:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prcfH-00009W-EA; Wed, 26 Apr 2023 10:46:23 +0000
Received: by outflank-mailman (input) for mailman id 526624;
 Wed, 26 Apr 2023 10:46:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jz9v=AR=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1prcfG-00009P-Bk
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 10:46:22 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 9453c809-e41f-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 12:46:20 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C0DEE4B3;
 Wed, 26 Apr 2023 03:47:03 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8CB0D3F5A1;
 Wed, 26 Apr 2023 03:46:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9453c809-e41f-11ed-b224-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/misra: xen-analysis.py: fix return error on PhaseExceptions
Date: Wed, 26 Apr 2023 11:46:05 +0100
Message-Id: <20230426104605.3447049-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently the script's return code is 0 even when an exception is
raised, because the return code is set only if the exception
object has the errorcode member.

Fix the issue by returning the errorcode member when it exists,
and otherwise using a generic value different from 0.

Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/scripts/xen-analysis.py | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/scripts/xen-analysis.py b/xen/scripts/xen-analysis.py
index 8e50c27cd898..7185c5a06d2c 100755
--- a/xen/scripts/xen-analysis.py
+++ b/xen/scripts/xen-analysis.py
@@ -26,8 +26,7 @@ def main(argv):
             cppcheck_analysis.generate_cppcheck_report()
     except PhaseExceptions as e:
         print("ERROR: {}".format(e))
-        if hasattr(e, "errorcode"):
-            ret_code = e.errorcode
+        ret_code = e.errorcode if hasattr(e, "errorcode") else 1
     finally:
         if settings.step_clean_analysis:
             cppcheck_analysis.clean_analysis_artifacts()
-- 
2.34.1
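The one-line change in this patch is an instance of a common exit-code pattern: prefer the specific code an exception carries, and fall back to a generic non-zero value so a failure can never be reported as success. A minimal self-contained sketch (PhaseError and run_phase are illustrative stand-ins, not the actual xen-analysis.py code):

```python
class PhaseError(Exception):
    """Illustrative stand-in for the script's PhaseExceptions."""
    def __init__(self, message, errorcode=None):
        super().__init__(message)
        if errorcode is not None:
            # Only some failures carry a specific exit code.
            self.errorcode = errorcode

def run_phase(phase):
    """Run one analysis phase, mapping exceptions to an exit code."""
    ret_code = 0
    try:
        phase()
    except PhaseError as e:
        print("ERROR: {}".format(e))
        # Use the specific code if the exception carries one;
        # otherwise return a generic non-zero value so the caller
        # still sees the failure.
        ret_code = e.errorcode if hasattr(e, "errorcode") else 1
    return ret_code
```

With this shape, a phase raising PhaseError("boom", errorcode=7) yields 7, a bare PhaseError yields 1, and a clean run yields 0, which is exactly the behavior the patch restores.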



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 10:52:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 10:52:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526629.818514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prckr-0001cj-5d; Wed, 26 Apr 2023 10:52:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526629.818514; Wed, 26 Apr 2023 10:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prckr-0001cc-1O; Wed, 26 Apr 2023 10:52:09 +0000
Received: by outflank-mailman (input) for mailman id 526629;
 Wed, 26 Apr 2023 10:52:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HkN=AR=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1prckp-0001cW-G3
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 10:52:07 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [85.215.255.54]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 623d8318-e420-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 12:52:05 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz3QApxIF3
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 26 Apr 2023 12:51:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 623d8318-e420-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1682506319; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=YrrrVEKkY1xSgYZ/BuO72DGofk503zHM4gDgcFMX1aKdNWQ72EjkA92+M0TKuOnie1
    v0qtaBCSPUgGrcPhhxbjpUVq4wZq9XnK4o6HLtVNPdiySYLH4g9MLZAnTVXxTUgjSE4l
    yDFIUeudjppMVwvOmTHhsEBILhaaiLAMEI23AEOcxgwCPBgO5OuD9pcNROQeTZff3ocp
    NI2A4x+s62tF3ZwPgCuKA3/aCf/FTMFkZIv2uGSkdfX/mlGyBWPhOVxsAR/AY0++NaM3
    a/0OGXRUe0DueN+YuM0TsM0EW7jlGDl0UEhXLeBEw1zpKVEx3Hunjvd78gFOjSyojD3i
    epvA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1682506319;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=IrwdqgFkX0ZU6V+Dyc1PJ+89WpXrnJ6yDr5eITXGbIk=;
    b=ZXxDPWcAJaD17yqeCLXedK7trTlY678n7MviBf4sp6ZEnUJal1lPTmZedvVfiW33Z6
    FQ7X3VB1EHJTA9czyTUST0oqbtGKyLdSzeDBakJfo+/e3I2GQzpnM454kVVWD3FWTtVX
    xhEaKXKtQmJCivn6nEZ5giPiiNqjO8cV+eVANoQ8ZJi+TiSHBNzn7HIsN8rACmKvsnye
    5yDNv07Z+JH6h2GK7etnkkS2+MeVXT31m1fSaP0Uv9qStvdj1p/QCzVNYYjhc+AvsijN
    4FNOIPHp/S+cmNtsOVYyUHyoVLo7+Ht0owSAXxCXpdjCUVS/8nrqLkmoPxq/ECjk+oB1
    R5qQ==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1682506319;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=IrwdqgFkX0ZU6V+Dyc1PJ+89WpXrnJ6yDr5eITXGbIk=;
    b=GbJB8GvqAlv+3XL1xBzNIrWQ9tnFIpW/gfRcY1r4XefmTeOEhx4SzwLJVZKT6Nfdg9
    M+nSoU0ZOPJUft0a3KJGJJZP2adU8m84kXYPRVbqEkIPrci73stEHWOMlbmyw5gXtTPz
    k4vLOO4XC7ewX6Qu+h3Mbz4UhqGpAasSJLflGXj4R7ZKNeclnICendUjsg0bCViSkOrQ
    L5YwaQBkfbtF0IvSI3Z22iz4cpKOu9avLkoeW50mkExJUyRNUqcKRPtDjVTJcTzkt7J0
    xOPorAfarGORMHSmDBXMeZAgORjKWk/dGUHLfbnI405QJnV9atreSToDWBDSo9j/Gj+E
    sRBw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1682506319;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=IrwdqgFkX0ZU6V+Dyc1PJ+89WpXrnJ6yDr5eITXGbIk=;
    b=250efq1htmE8aU8OOiwXNUCEZS1+ZFbx3H4aGmt34PqEcMm8k5EFcsTJybbfGgxrey
    /NYe7G/9HO7OR8aeL5Bg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4wqlr7GpgtSxIX+ZWs95M7PYKTHoBaxED20qrwFA=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v1] stubdom: fix errors in newlib:cygmon-gmon.c
Date: Wed, 26 Apr 2023 10:51:56 +0000
Message-Id: <20230426105156.2381-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

rpm post-build-checks found a few code bugs in newlib and marked them as
errors. Add another newlib patch and apply it during the stubdom build.

I: A function uses a 'return;' statement, but has actually a value
   to return, like an integer ('return 42;') or similar.
W: xen voidreturn ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:117, 125, 146, 157, 330

I: Program is using implicit definitions of special functions.
   these functions need to use their correct prototypes to allow
   the lightweight buffer overflow checking to work.
     - Implicit memory/string functions need #include <string.h>.
     - Implicit *printf functions need #include <stdio.h>.
     - Implicit *printf functions need #include <stdio.h>.
     - Implicit *read* functions need #include <unistd.h>.
     - Implicit *recv* functions need #include <sys/socket.h>.
E: xen implicit-fortify-decl ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:119

I: Program returns random data in a function
E: xen no-return-in-nonvoid-function ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:362

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 stubdom/Makefile                 |  1 +
 stubdom/newlib-cygmon-gmon.patch | 60 ++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)
 create mode 100644 stubdom/newlib-cygmon-gmon.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index b312f710cd..cddbbe2da0 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -95,6 +95,7 @@ newlib-$(NEWLIB_VERSION): newlib-$(NEWLIB_VERSION).tar.gz
 	patch -d $@ -p0 < newlib-chk.patch
 	patch -d $@ -p1 < newlib-stdint-size_max-fix-from-1.17.0.patch
 	patch -d $@ -p1 < newlib-disable-texinfo.patch
+	patch -d $@ -p1 < newlib-cygmon-gmon.patch
 	find $@ -type f | xargs perl -i.bak \
 		-pe 's/\b_(tzname|daylight|timezone)\b/$$1/g'
 	touch $@
diff --git a/stubdom/newlib-cygmon-gmon.patch b/stubdom/newlib-cygmon-gmon.patch
new file mode 100644
index 0000000000..b2dfbfafe2
--- /dev/null
+++ b/stubdom/newlib-cygmon-gmon.patch
@@ -0,0 +1,60 @@
+
+I: A function uses a 'return;' statement, but has actually a value
+   to return, like an integer ('return 42;') or similar.
+W: xen voidreturn ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:117, 125, 146, 157, 330
+
+I: Program is using implicit definitions of special functions.
+   these functions need to use their correct prototypes to allow
+   the lightweight buffer overflow checking to work.
+     - Implicit memory/string functions need #include <string.h>.
+     - Implicit *printf functions need #include <stdio.h>.
+     - Implicit *printf functions need #include <stdio.h>.
+     - Implicit *read* functions need #include <unistd.h>.
+     - Implicit *recv* functions need #include <sys/socket.h>.
+E: xen implicit-fortify-decl ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:119
+
+I: Program returns random data in a function
+E: xen no-return-in-nonvoid-function ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:362
+
+---
+ libgloss/i386/cygmon-gmon.c |    6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+Index: newlib-1.16.0/libgloss/i386/cygmon-gmon.c
+===================================================================
+--- newlib-1.16.0.orig/libgloss/i386/cygmon-gmon.c
++++ newlib-1.16.0/libgloss/i386/cygmon-gmon.c
+@@ -61,6 +61,8 @@
+ static char sccsid[] = "@(#)gmon.c	5.3 (Berkeley) 5/22/91";
+ #endif /* not lint */
+ 
++#include <string.h>
++#include <unistd.h>
+ #define DEBUG
+ #ifdef DEBUG
+ #include <stdio.h>
+@@ -89,7 +91,7 @@ static int	s_scale;
+ 
+ extern int errno;
+ 
+-int
++void
+ monstartup(lowpc, highpc)
+      char	*lowpc;
+      char	*highpc;
+@@ -199,6 +201,7 @@ _mcleanup()
+ 
+ static char already_setup = 0;
+ 
++void
+ _mcount()
+ {
+   register char			*selfpc;
+@@ -341,6 +344,7 @@ overflow:
+  *	profiling is what mcount checks to see if
+  *	all the data structures are ready.
+  */
++void
+ moncontrol(mode)
+     int mode;
+ {


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 10:52:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 10:52:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526636.818524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prclV-0002Bl-H6; Wed, 26 Apr 2023 10:52:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526636.818524; Wed, 26 Apr 2023 10:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prclV-0002Be-EL; Wed, 26 Apr 2023 10:52:49 +0000
Received: by outflank-mailman (input) for mailman id 526636;
 Wed, 26 Apr 2023 10:52:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2HkN=AR=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1prclU-0001z0-9t
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 10:52:48 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [81.169.146.166]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7a7ba89e-e420-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 12:52:46 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz3QAqgIFZ
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 26 Apr 2023 12:52:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a7ba89e-e420-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1682506362; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=fDSnyS0AnPUWEsBPNKZT8Fza6aye84168oMlcIM8ES38acohynGIhJgSIczWo1zrpX
    hZYCwtxQqZF0pZpG2VAyy309HMj8OKY/TB+AvaRv8w7P15DLVmTaYVnf7Mk2ADLtmXOd
    hzO5T1JklfEO1XZR/VNAN6qj3Ln/qiSqbNgz2vcqfHa2DBBxOlwrgNdDk0yWXFYcd5vx
    26GI4PqpSgYOxp2dtbyqUaFb4OuMwxei52tb4EJmC7vokJc7avT2erKcXLkipq/TAigz
    pKZcSxW9arADq6ypCZVbavWamAQC5R6Xxc6De8+em4vS4g8TUpG06z2mBZo1vGLl9fAW
    LLEw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1682506362;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=FGgNeMXD1fpexIJRy+L7hZ4dIGHGd8+SytNB0T7EbIo=;
    b=D6/CsQ4f0aYGmnI0YT7CETI+l2HWhROAw1BLQt0yUFC8egLStQawlDenTYK993ucbo
    7+VM0xSi43KAo1JzEWGNNeA4rP7HSmR1zn+TM9gU73Y5g+jHgBWfk3cGUsQyXsvtN07T
    i1gAboA4ww9SROyDQU44MGoNcqhI3JCR98FWxLZ6uiKcg/Y58x+zTeffjtpFdiUI1ZAH
    QK3iyo6Xi21+hjKmVsg/V1TDrePyoxJgnQ1Xz4e6joMYbi4R0w74r6XXLi5oqORElRfn
    ZtgYwUkMJoFut1ainqGyN+Hs4o6QlzX3O6VqH5RA8goyXGn+G2riBItZCKCrnWKpGkEL
    Ko/g==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1682506362;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=FGgNeMXD1fpexIJRy+L7hZ4dIGHGd8+SytNB0T7EbIo=;
    b=qcUqHBrMyUw+8k9zC8a2mxDix5kVQgxLggr4op3S+S2ppeKaJ9gbMOQ968dfBnFA3L
    rA9gWFqvNJpxdNLUXMLtUo36MSYOBnMzXr3dkOMBFZYtEJMoh1Xm6Hm1rmzr3GLukdvV
    730t1qqPB7R80E/2bgpS5WKvJ/gTDVqWBGQf1+PW5j7I6J6p0zvaLA6qffloxvnLNFoh
    BZy27wl8q6UHlp5gwkAgAQJ42GYOGIXzAZ64Ia+KBADMzgDAcFDArgDfKhLLz4xc3qVu
    dHRxmFjb2oeWfkxLxT/ExyvZXTCqcS7k0mXIYJqCTqrjAe8QCJVEYMEy2MFSNAEkXoNE
    nXjQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1682506362;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=FGgNeMXD1fpexIJRy+L7hZ4dIGHGd8+SytNB0T7EbIo=;
    b=8Y0W7mwTenpHKm7zfzP8w9KqGro7xUbeRKJJ13DUR8LjhgRkfhCpH0QO6dnr5oFKsC
    yVpNEll8ofCKJxgXnZDA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4wqlr7GpgtSxIX+ZWs95M7PYKTHoBaxED20qrwFA=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v1] stubdom: fix errors in newlib:makedoc
Date: Wed, 26 Apr 2023 10:52:39 +0000
Message-Id: <20230426105239.2496-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

rpm post-build-checks found a few code bugs in newlib and marked them as
errors. Add another newlib patch and apply it during the stubdom build.

[  227s] ../../../../newlib-1.16.0/newlib/doc/makedoc.c: In function 'lookup_word':
[  227s] ../../../../newlib-1.16.0/newlib/doc/makedoc.c:1147:10: warning: implicit declaration of function 'strcmp' [-Wimplicit-function-declaration]
[  227s]       if (strcmp(ptr->word, word) == 0) return ptr;
[  227s]           ^

[  460s] I: Program is using implicit definitions of special functions.
[  460s]    these functions need to use their correct prototypes to allow
[  460s]    the lightweight buffer overflow checking to work.
[  460s]      - Implicit memory/string functions need #include <string.h>.
[  460s]      - Implicit *printf functions need #include <stdio.h>.
[  460s]      - Implicit *printf functions need #include <stdio.h>.
[  460s]      - Implicit *read* functions need #include <unistd.h>.
[  460s]      - Implicit *recv* functions need #include <sys/socket.h>.
[  460s] E: xen implicit-fortify-decl ../../../../newlib-1.16.0/newlib/doc/makedoc.c:1147

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---

Depends on newlib-cygmon-gmon.patch

 stubdom/Makefile             |  1 +
 stubdom/newlib-makedoc.patch | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+)
 create mode 100644 stubdom/newlib-makedoc.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index cddbbe2da0..a21e1c3fa3 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -96,6 +96,7 @@ newlib-$(NEWLIB_VERSION): newlib-$(NEWLIB_VERSION).tar.gz
 	patch -d $@ -p1 < newlib-stdint-size_max-fix-from-1.17.0.patch
 	patch -d $@ -p1 < newlib-disable-texinfo.patch
 	patch -d $@ -p1 < newlib-cygmon-gmon.patch
+	patch -d $@ -p1 < newlib-makedoc.patch
 	find $@ -type f | xargs perl -i.bak \
 		-pe 's/\b_(tzname|daylight|timezone)\b/$$1/g'
 	touch $@
diff --git a/stubdom/newlib-makedoc.patch b/stubdom/newlib-makedoc.patch
new file mode 100644
index 0000000000..90678f1b63
--- /dev/null
+++ b/stubdom/newlib-makedoc.patch
@@ -0,0 +1,35 @@
+stubdom: fix errors in newlib
+
+rpm post-build-checks found a few code bugs in newlib, and marks them as
+errors. Add another newlib patch and apply it during stubdom build.
+
+[  227s] ../../../../newlib-1.16.0/newlib/doc/makedoc.c: In function 'lookup_word':
+[  227s] ../../../../newlib-1.16.0/newlib/doc/makedoc.c:1147:10: warning: implicit declaration of function 'strcmp' [-Wimplicit-function-declaration]
+[  227s]       if (strcmp(ptr->word, word) == 0) return ptr;
+[  227s]           ^
+
+[  460s] I: Program is using implicit definitions of special functions.
+[  460s]    these functions need to use their correct prototypes to allow
+[  460s]    the lightweight buffer overflow checking to work.
+[  460s]      - Implicit memory/string functions need #include <string.h>.
+[  460s]      - Implicit *printf functions need #include <stdio.h>.
+[  460s]      - Implicit *printf functions need #include <stdio.h>.
+[  460s]      - Implicit *read* functions need #include <unistd.h>.
+[  460s]      - Implicit *recv* functions need #include <sys/socket.h>.
+[  460s] E: xen implicit-fortify-decl ../../../../newlib-1.16.0/newlib/doc/makedoc.c:1147
+---
+ newlib/doc/makedoc.c |    1 +
+ 1 file changed, 1 insertion(+)
+
+Index: newlib-1.16.0/newlib/doc/makedoc.c
+===================================================================
+--- newlib-1.16.0.orig/newlib/doc/makedoc.c
++++ newlib-1.16.0/newlib/doc/makedoc.c
+@@ -38,6 +38,7 @@ There is  no
+ #include "ansidecl.h"
+ #include <stdio.h>
+ #include <stdlib.h>
++#include <string.h>
+ #include <ctype.h>
+ 
+ #define DEF_SIZE 5000


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 11:20:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 11:20:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526645.818534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prdC9-0005ix-He; Wed, 26 Apr 2023 11:20:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526645.818534; Wed, 26 Apr 2023 11:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prdC9-0005iq-EZ; Wed, 26 Apr 2023 11:20:21 +0000
Received: by outflank-mailman (input) for mailman id 526645;
 Wed, 26 Apr 2023 11:20:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=laZO=AR=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1prdC8-0005ik-9t
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 11:20:20 +0000
Received: from sonata.ens-lyon.org (domu-toccata.ens-lyon.fr [140.77.166.138])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 523e149e-e424-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 13:20:17 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id 9AA4520149;
 Wed, 26 Apr 2023 13:20:16 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id yqWqQTyUaZDB; Wed, 26 Apr 2023 13:20:16 +0200 (CEST)
Received: from begin (nat-inria-interne-52-gw-01-bso.bordeaux.inria.fr
 [194.199.1.52])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id 4DFA120146;
 Wed, 26 Apr 2023 13:20:16 +0200 (CEST)
Received: from samy by begin with local (Exim 4.96)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1prdC3-00Gt1A-2S;
 Wed, 26 Apr 2023 13:20:15 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 523e149e-e424-11ed-8611-37d641c3527e
Date: Wed, 26 Apr 2023 13:20:15 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v1] stubdom: fix errors in newlib:cygmon-gmon.c
Message-ID: <20230426112015.afh6fhh7ubec2szx@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230426105156.2381-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230426105156.2381-1-olaf@aepfle.de>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Olaf Hering, on Wed, 26 Apr 2023 10:51:56 +0000, wrote:
> rpm post-build-checks found a few code bugs in newlib and marked them as
> errors. Add another newlib patch and apply it during stubdom build.
> 
> I: A function uses a 'return;' statement, but has actually a value
>    to return, like an integer ('return 42;') or similar.
> W: xen voidreturn ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:117, 125, 146, 157, 330
> 
> I: Program is using implicit definitions of special functions.
>    these functions need to use their correct prototypes to allow
>    the lightweight buffer overflow checking to work.
>      - Implicit memory/string functions need #include <string.h>.
>      - Implicit *printf functions need #include <stdio.h>.
>      - Implicit *printf functions need #include <stdio.h>.
>      - Implicit *read* functions need #include <unistd.h>.
>      - Implicit *recv* functions need #include <sys/socket.h>.
> E: xen implicit-fortify-decl ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:119
> 
> I: Program returns random data in a function
> E: xen no-return-in-nonvoid-function ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:362
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Thanks!

> ---
>  stubdom/Makefile                 |  1 +
>  stubdom/newlib-cygmon-gmon.patch | 60 ++++++++++++++++++++++++++++++++
>  2 files changed, 61 insertions(+)
>  create mode 100644 stubdom/newlib-cygmon-gmon.patch
> 
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index b312f710cd..cddbbe2da0 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -95,6 +95,7 @@ newlib-$(NEWLIB_VERSION): newlib-$(NEWLIB_VERSION).tar.gz
>  	patch -d $@ -p0 < newlib-chk.patch
>  	patch -d $@ -p1 < newlib-stdint-size_max-fix-from-1.17.0.patch
>  	patch -d $@ -p1 < newlib-disable-texinfo.patch
> +	patch -d $@ -p1 < newlib-cygmon-gmon.patch
>  	find $@ -type f | xargs perl -i.bak \
>  		-pe 's/\b_(tzname|daylight|timezone)\b/$$1/g'
>  	touch $@
> diff --git a/stubdom/newlib-cygmon-gmon.patch b/stubdom/newlib-cygmon-gmon.patch
> new file mode 100644
> index 0000000000..b2dfbfafe2
> --- /dev/null
> +++ b/stubdom/newlib-cygmon-gmon.patch
> @@ -0,0 +1,60 @@
> +
> +I: A function uses a 'return;' statement, but has actually a value
> +   to return, like an integer ('return 42;') or similar.
> +W: xen voidreturn ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:117, 125, 146, 157, 330
> +
> +I: Program is using implicit definitions of special functions.
> +   these functions need to use their correct prototypes to allow
> +   the lightweight buffer overflow checking to work.
> +     - Implicit memory/string functions need #include <string.h>.
> +     - Implicit *printf functions need #include <stdio.h>.
> +     - Implicit *printf functions need #include <stdio.h>.
> +     - Implicit *read* functions need #include <unistd.h>.
> +     - Implicit *recv* functions need #include <sys/socket.h>.
> +E: xen implicit-fortify-decl ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:119
> +
> +I: Program returns random data in a function
> +E: xen no-return-in-nonvoid-function ../../../../newlib-1.16.0/libgloss/i386/cygmon-gmon.c:362
> +
> +---
> + libgloss/i386/cygmon-gmon.c |    6 +++++-
> + 1 file changed, 5 insertions(+), 1 deletion(-)
> +
> +Index: newlib-1.16.0/libgloss/i386/cygmon-gmon.c
> +===================================================================
> +--- newlib-1.16.0.orig/libgloss/i386/cygmon-gmon.c
> ++++ newlib-1.16.0/libgloss/i386/cygmon-gmon.c
> +@@ -61,6 +61,8 @@
> + static char sccsid[] = "@(#)gmon.c	5.3 (Berkeley) 5/22/91";
> + #endif /* not lint */
> + 
> ++#include <string.h>
> ++#include <unistd.h>
> + #define DEBUG
> + #ifdef DEBUG
> + #include <stdio.h>
> +@@ -89,7 +91,7 @@ static int	s_scale;
> + 
> + extern int errno;
> + 
> +-int
> ++void
> + monstartup(lowpc, highpc)
> +      char	*lowpc;
> +      char	*highpc;
> +@@ -199,6 +201,7 @@ _mcleanup()
> + 
> + static char already_setup = 0;
> + 
> ++void
> + _mcount()
> + {
> +   register char			*selfpc;
> +@@ -341,6 +344,7 @@ overflow:
> +  *	profiling is what mcount checks to see if
> +  *	all the data structures are ready.
> +  */
> ++void
> + moncontrol(mode)
> +     int mode;
> + {
> 

-- 
Samuel
---
For an independent, transparent and rigorous evaluation!
I support Inria's Commission d'Évaluation.


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 11:20:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 11:20:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526646.818544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prdCN-00060i-P0; Wed, 26 Apr 2023 11:20:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526646.818544; Wed, 26 Apr 2023 11:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prdCN-00060Z-Lw; Wed, 26 Apr 2023 11:20:35 +0000
Received: by outflank-mailman (input) for mailman id 526646;
 Wed, 26 Apr 2023 11:20:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=laZO=AR=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1prdCN-0005ik-2E
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 11:20:35 +0000
Received: from sonata.ens-lyon.org (domu-toccata.ens-lyon.fr [140.77.166.138])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5bdf2ffb-e424-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 13:20:33 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id CCF36200FD;
 Wed, 26 Apr 2023 13:20:32 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 6TgDEiTvki3H; Wed, 26 Apr 2023 13:20:32 +0200 (CEST)
Received: from begin (nat-inria-interne-52-gw-01-bso.bordeaux.inria.fr
 [194.199.1.52])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id A6D7D200EE;
 Wed, 26 Apr 2023 13:20:32 +0200 (CEST)
Received: from samy by begin with local (Exim 4.96)
 (envelope-from <samuel.thibault@ens-lyon.org>) id 1prdCK-00Gt1M-1B;
 Wed, 26 Apr 2023 13:20:32 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5bdf2ffb-e424-11ed-8611-37d641c3527e
Date: Wed, 26 Apr 2023 13:20:32 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v1] stubdom: fix errors in newlib:makedoc
Message-ID: <20230426112032.snnxwxd4vjrcom6w@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230426105239.2496-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230426105239.2496-1-olaf@aepfle.de>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Olaf Hering, on Wed, 26 Apr 2023 10:52:39 +0000, wrote:
> rpm post-build-checks found a few code bugs in newlib and marked them as
> errors. Add another newlib patch and apply it during stubdom build.
> 
> [  227s] ../../../../newlib-1.16.0/newlib/doc/makedoc.c: In function 'lookup_word':
> [  227s] ../../../../newlib-1.16.0/newlib/doc/makedoc.c:1147:10: warning: implicit declaration of function 'strcmp' [-Wimplicit-function-declaration]
> [  227s]       if (strcmp(ptr->word, word) == 0) return ptr;
> [  227s]           ^
> 
> [  460s] I: Program is using implicit definitions of special functions.
> [  460s]    these functions need to use their correct prototypes to allow
> [  460s]    the lightweight buffer overflow checking to work.
> [  460s]      - Implicit memory/string functions need #include <string.h>.
> [  460s]      - Implicit *printf functions need #include <stdio.h>.
> [  460s]      - Implicit *printf functions need #include <stdio.h>.
> [  460s]      - Implicit *read* functions need #include <unistd.h>.
> [  460s]      - Implicit *recv* functions need #include <sys/socket.h>.
> [  460s] E: xen implicit-fortify-decl ../../../../newlib-1.16.0/newlib/doc/makedoc.c:1147
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Thanks!

> ---
> 
> Depends on newlib-cygmon-gmon.patch
> 
>  stubdom/Makefile             |  1 +
>  stubdom/newlib-makedoc.patch | 35 +++++++++++++++++++++++++++++++++++
>  2 files changed, 36 insertions(+)
>  create mode 100644 stubdom/newlib-makedoc.patch
> 
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index cddbbe2da0..a21e1c3fa3 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -96,6 +96,7 @@ newlib-$(NEWLIB_VERSION): newlib-$(NEWLIB_VERSION).tar.gz
>  	patch -d $@ -p1 < newlib-stdint-size_max-fix-from-1.17.0.patch
>  	patch -d $@ -p1 < newlib-disable-texinfo.patch
>  	patch -d $@ -p1 < newlib-cygmon-gmon.patch
> +	patch -d $@ -p1 < newlib-makedoc.patch
>  	find $@ -type f | xargs perl -i.bak \
>  		-pe 's/\b_(tzname|daylight|timezone)\b/$$1/g'
>  	touch $@
> diff --git a/stubdom/newlib-makedoc.patch b/stubdom/newlib-makedoc.patch
> new file mode 100644
> index 0000000000..90678f1b63
> --- /dev/null
> +++ b/stubdom/newlib-makedoc.patch
> @@ -0,0 +1,35 @@
> +stubdom: fix errors in newlib
> +
> +rpm post-build-checks found a few code bugs in newlib and marked them as
> +errors. Add another newlib patch and apply it during stubdom build.
> +
> +[  227s] ../../../../newlib-1.16.0/newlib/doc/makedoc.c: In function 'lookup_word':
> +[  227s] ../../../../newlib-1.16.0/newlib/doc/makedoc.c:1147:10: warning: implicit declaration of function 'strcmp' [-Wimplicit-function-declaration]
> +[  227s]       if (strcmp(ptr->word, word) == 0) return ptr;
> +[  227s]           ^
> +
> +[  460s] I: Program is using implicit definitions of special functions.
> +[  460s]    these functions need to use their correct prototypes to allow
> +[  460s]    the lightweight buffer overflow checking to work.
> +[  460s]      - Implicit memory/string functions need #include <string.h>.
> +[  460s]      - Implicit *printf functions need #include <stdio.h>.
> +[  460s]      - Implicit *printf functions need #include <stdio.h>.
> +[  460s]      - Implicit *read* functions need #include <unistd.h>.
> +[  460s]      - Implicit *recv* functions need #include <sys/socket.h>.
> +[  460s] E: xen implicit-fortify-decl ../../../../newlib-1.16.0/newlib/doc/makedoc.c:1147
> +---
> + newlib/doc/makedoc.c |    1 +
> + 1 file changed, 1 insertion(+)
> +
> +Index: newlib-1.16.0/newlib/doc/makedoc.c
> +===================================================================
> +--- newlib-1.16.0.orig/newlib/doc/makedoc.c
> ++++ newlib-1.16.0/newlib/doc/makedoc.c
> +@@ -38,6 +38,7 @@ There is  no
> + #include "ansidecl.h"
> + #include <stdio.h>
> + #include <stdlib.h>
> ++#include <string.h>
> + #include <ctype.h>
> + 
> + #define DEF_SIZE 5000
> 

-- 
Samuel
---
For an independent, transparent and rigorous evaluation!
I support Inria's Commission d'Évaluation.


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 12:57:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 12:57:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526684.818554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prehm-0007hQ-FO; Wed, 26 Apr 2023 12:57:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526684.818554; Wed, 26 Apr 2023 12:57:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prehm-0007hJ-CQ; Wed, 26 Apr 2023 12:57:06 +0000
Received: by outflank-mailman (input) for mailman id 526684;
 Wed, 26 Apr 2023 12:57:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prehk-0007hD-Ps
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 12:57:04 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2052.outbound.protection.outlook.com [40.107.13.52])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d6b127c0-e431-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 14:57:02 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8439.eurprd04.prod.outlook.com (2603:10a6:20b:412::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 12:56:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 12:56:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6b127c0-e431-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EQpS0dA80ffqmwvVTE9NVpcAumnTZqlq8XGGfrTXZdSe/GKApguxqLcB5ZguRS6hdzOlso144hti/B/bnMwh9f6ZjkHPutHbmlms18PPXHkU9zPnccU8zI5S83ZTc1eNVKgKCis126jYaI3jczJSJxbgeYtPEQK3A4JB1LvE+P/977Yc3HC7Fmo9qM53hIYBnj3i6/rHMKTdSM5yxJChTn3vQTh5FJ6ZaNSHgHlgkSZwb1GW7gIifOyhZwfzwaYku1ixB239fx2G0p0u6TF2ZQIS+zuTNkys8CqxyssUf4tHjD3Gk+2jP2NSjiPVKTYKerfxOubHXxLKTmreyK/sVg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FopZvnEv9dxKP71KZ1LKD+oHMxhR06cTUI1wcO4NRyg=;
 b=maK0vOw9FhdrP4BaBew+SCscAuP1K6+DTeFx+Yvdm16mrrlqyb6ZD6alWqAYcCNHYXVy3HBDH0Chd1RR/d//GsBWEGVEDl+rPT1dWIXHCgYC1V9udTEngNAOAX9EqTIASKNyDMteQES2dtXxkNDNtavkmKxa39oVrSQc9CW8l8TPXYTMPdJ1CZZHZIomijMQHopv8QBzN5PJxwxI5bhYjR/r9QwGcPH20r9xBK8IzHKujiL2k7yiYmZODxjyN4fGhYQqtW40PUEEn8wZiDjplcbDx50jmNd/6pgjTrRUUPNywx9p0eL0qZEvWCYeg8ucsXirwmGIEOP8NBdqE4aieQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FopZvnEv9dxKP71KZ1LKD+oHMxhR06cTUI1wcO4NRyg=;
 b=tGVfb3uV5JnSNnzVcQjsOv88YaYMn7Tk6+OTAuQSicZqaGE2IaXYttf9XKBCkWJCCa+hbgIN2HrA7sX2thlkvgW+pcLX0ykCuCOXS+9DZHozOFnEguSQz9dtTPkbpmMz7rMAHLwu9pxHLat1p28Udtc/0oPllroSpzqgavPD0LBWZEMj5xg0yMA1kxHWbmTxYB2J5aXPqLU1x7e4r9D54L+SrskSKxSdF74Th5uCE879Ml3umSTylVsU/otHSVz0i70cIbLlWFb+xPQApz88fmMi8y90gn7Z/f/D2M9UG+dzF0ayFQScTv1qf2p7o/eK0/roXcgN0y4ELVJC4lEgSg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ea294e55-f543-640e-7b12-777941ac4500@suse.com>
Date: Wed, 26 Apr 2023 14:56:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] VMX/cpu-policy: RDTSCP and INVPCID handling
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0097.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8439:EE_
X-MS-Office365-Filtering-Correlation-Id: 893ff0e1-6633-4d85-6885-08db4655a999
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 893ff0e1-6633-4d85-6885-08db4655a999
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 12:56:33.7053
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9ANQ7D0JjUlMO5VzFxfjvXitdi7DLRKlhLfBu7vPhY23WpZfFumb6gMZAK+8w5IuCwMjS5KWA/N6M9JUNKneow==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8439

While putting in place more of the still missing MSRLIST code, I've
noticed two anomalies here.

1: check availability of RDTSCP and INVPCID
2: disable RDTSCP and INVPCID insns as needed

Jan


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 12:58:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 12:58:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526688.818563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1preie-0008BZ-O6; Wed, 26 Apr 2023 12:58:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526688.818563; Wed, 26 Apr 2023 12:58:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1preie-0008BS-LV; Wed, 26 Apr 2023 12:58:00 +0000
Received: by outflank-mailman (input) for mailman id 526688;
 Wed, 26 Apr 2023 12:57:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1preid-00087s-Ka
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 12:57:59 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2061a.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::61a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f82e5e91-e431-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 14:57:59 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8799.eurprd04.prod.outlook.com (2603:10a6:102:20e::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 12:57:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 12:57:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f82e5e91-e431-11ed-b224-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <cae19837-31f6-9fb3-5c90-37aaf8920594@suse.com>
Date: Wed, 26 Apr 2023 14:57:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: [PATCH 1/2] VMX/cpu-policy: check availability of RDTSCP and INVPCID
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <ea294e55-f543-640e-7b12-777941ac4500@suse.com>
In-Reply-To: <ea294e55-f543-640e-7b12-777941ac4500@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0153.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b3::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8799:EE_
X-MS-Office365-Filtering-Correlation-Id: 92ea104e-e69a-46c7-ce3e-08db4655dab6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 92ea104e-e69a-46c7-ce3e-08db4655dab6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 12:57:56.0061
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8799

RDTSCP and INVPCID each have a separate VMX enable bit, and both of
these controls are optional. While on real hardware we can perhaps
expect these VMX controls to be available if (and only if) the base CPU
feature is available, when running virtualized ourselves this may not
be the case.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Afaics we don't ourselves expose the 1-setting of the two enables. (We
also don't constrain guests to set only bits we report as available to
set; there's a respective TODO comment in set_vvmcs_virtual_safe().)
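As background (not part of the patch itself): Xen derives vmx_secondary_exec_control from the "allowed-1" half (bits 63:32) of the IA32_VMX_PROCBASED_CTLS2 capability MSR, and the bit positions below follow the Intel SDM (RDTSCP enable is bit 3, INVPCID enable is bit 12). A minimal Python sketch of the availability checks the patch relies on:

```python
# Bit positions per the Intel SDM for the secondary processor-based
# VM-execution controls.
SECONDARY_EXEC_ENABLE_RDTSCP = 1 << 3
SECONDARY_EXEC_ENABLE_INVPCID = 1 << 12

def secondary_controls(msr_procbased_ctls2: int) -> int:
    """Controls that may be set to 1: the high 32 bits of the capability MSR."""
    return (msr_procbased_ctls2 >> 32) & 0xFFFFFFFF

def cpu_has_vmx_rdtscp(msr: int) -> bool:
    return bool(secondary_controls(msr) & SECONDARY_EXEC_ENABLE_RDTSCP)

def cpu_has_vmx_invpcid(msr: int) -> bool:
    return bool(secondary_controls(msr) & SECONDARY_EXEC_ENABLE_INVPCID)

# A nested setup might expose RDTSCP via CPUID while the outer hypervisor
# withholds the corresponding VMX enable; model that here (value is made up):
msr = SECONDARY_EXEC_ENABLE_INVPCID << 32  # only the INVPCID enable allowed
assert not cpu_has_vmx_rdtscp(msr)
assert cpu_has_vmx_invpcid(msr)
```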

--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -594,6 +594,12 @@ static void __init calculate_hvm_max_pol
      */
     if ( cpu_has_vmx )
     {
+        if ( !cpu_has_vmx_rdtscp )
+            __clear_bit(X86_FEATURE_RDTSCP, fs);
+
+        if ( !cpu_has_vmx_invpcid )
+            __clear_bit(X86_FEATURE_INVPCID, fs);
+
         if ( !cpu_has_vmx_mpx )
             __clear_bit(X86_FEATURE_MPX, fs);
 
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -299,6 +299,8 @@ extern u64 vmx_ept_vpid_cap;
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT)
 #define cpu_has_vmx_dt_exiting \
     (vmx_secondary_exec_control & SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING)
+#define cpu_has_vmx_rdtscp \
+    (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_RDTSCP)
 #define cpu_has_vmx_vpid \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VPID)
 #define cpu_has_monitor_trap_flag \
@@ -314,6 +316,8 @@ extern u64 vmx_ept_vpid_cap;
      SECONDARY_EXEC_UNRESTRICTED_GUEST)
 #define cpu_has_vmx_ple \
     (vmx_secondary_exec_control & SECONDARY_EXEC_PAUSE_LOOP_EXITING)
+#define cpu_has_vmx_invpcid \
+    (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_INVPCID)
 #define cpu_has_vmx_apic_reg_virt \
     (vmx_secondary_exec_control & SECONDARY_EXEC_APIC_REGISTER_VIRT)
 #define cpu_has_vmx_virtual_intr_delivery \



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 12:58:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 12:58:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526691.818575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prej7-0000Ch-2q; Wed, 26 Apr 2023 12:58:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526691.818575; Wed, 26 Apr 2023 12:58:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prej6-0000Ca-U4; Wed, 26 Apr 2023 12:58:28 +0000
Received: by outflank-mailman (input) for mailman id 526691;
 Wed, 26 Apr 2023 12:58:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prej6-0000C2-1N
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 12:58:28 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2060e.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::60e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 085f2683-e432-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 14:58:26 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8799.eurprd04.prod.outlook.com (2603:10a6:102:20e::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 12:58:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 12:58:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 085f2683-e432-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <fa9e9ece-df60-e249-7cc2-ad3af50d26bb@suse.com>
Date: Wed, 26 Apr 2023 14:58:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: [PATCH 2/2] VMX/cpu-policy: disable RDTSCP and INVPCID insns as
 needed
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <ea294e55-f543-640e-7b12-777941ac4500@suse.com>
In-Reply-To: <ea294e55-f543-640e-7b12-777941ac4500@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0151.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b3::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8799:EE_
X-MS-Office365-Filtering-Correlation-Id: ced1774d-3f67-4fa3-6fcc-08db4655ebba
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ced1774d-3f67-4fa3-6fcc-08db4655ebba
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 12:58:24.5815
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8799

When either feature is available in hardware but disabled for a guest,
the respective insn should raise #UD when the guest attempts to use it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -785,6 +785,30 @@ static void cf_check vmx_cpuid_policy_ch
     vmx_vmcs_enter(v);
     vmx_update_exception_bitmap(v);
 
+    if ( cp->extd.rdtscp )
+    {
+        v->arch.hvm.vmx.secondary_exec_control |= SECONDARY_EXEC_ENABLE_RDTSCP;
+        vmx_update_secondary_exec_control(v);
+    }
+    else if ( v->arch.hvm.vmx.secondary_exec_control &
+              SECONDARY_EXEC_ENABLE_RDTSCP )
+    {
+        v->arch.hvm.vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_RDTSCP;
+        vmx_update_secondary_exec_control(v);
+    }
+
+    if ( cp->feat.invpcid )
+    {
+        v->arch.hvm.vmx.secondary_exec_control |= SECONDARY_EXEC_ENABLE_INVPCID;
+        vmx_update_secondary_exec_control(v);
+    }
+    else if ( v->arch.hvm.vmx.secondary_exec_control &
+              SECONDARY_EXEC_ENABLE_INVPCID )
+    {
+        v->arch.hvm.vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_INVPCID;
+        vmx_update_secondary_exec_control(v);
+    }
+
     /*
      * We can safely pass MSR_SPEC_CTRL through to the guest, even if STIBP
      * isn't enumerated in hardware, as SPEC_CTRL_STIBP is ignored.
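The hunk above follows a fixed pattern: flip a secondary-exec-control bit to match the guest's CPUID policy, issuing the VMCS update only when the bit is (or becomes) set. A minimal Python sketch of that pattern, with a stand-in for vmx_update_secondary_exec_control() (the class and names are illustrative, not Xen's actual structures):

```python
SECONDARY_EXEC_ENABLE_RDTSCP = 1 << 3  # bit position per the Intel SDM

class Vcpu:
    """Toy stand-in for struct vcpu's VMX state."""
    def __init__(self, ctl: int = 0):
        self.secondary_exec_control = ctl
        self.vmcs_updates = 0

    def update_secondary_exec_control(self):
        # Stands in for vmx_update_secondary_exec_control(v): write-through
        # of the shadow value into the VMCS.
        self.vmcs_updates += 1

def sync_rdtscp(v: Vcpu, policy_has_rdtscp: bool):
    if policy_has_rdtscp:
        v.secondary_exec_control |= SECONDARY_EXEC_ENABLE_RDTSCP
        v.update_secondary_exec_control()
    elif v.secondary_exec_control & SECONDARY_EXEC_ENABLE_RDTSCP:
        # Only clear (and pay for a VMCS write) when the bit was actually set.
        v.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_RDTSCP
        v.update_secondary_exec_control()

v = Vcpu()
sync_rdtscp(v, False)   # bit already clear: no VMCS write needed
assert v.vmcs_updates == 0
sync_rdtscp(v, True)    # enable: bit set, one VMCS write
assert v.secondary_exec_control & SECONDARY_EXEC_ENABLE_RDTSCP
```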



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 13:18:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 13:18:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526700.818584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prf2c-0002z0-Of; Wed, 26 Apr 2023 13:18:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526700.818584; Wed, 26 Apr 2023 13:18:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prf2c-0002yt-LR; Wed, 26 Apr 2023 13:18:38 +0000
Received: by outflank-mailman (input) for mailman id 526700;
 Wed, 26 Apr 2023 13:18:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+pP=AR=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1prf2b-0002yX-CB
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 13:18:37 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d9da07c5-e434-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 15:18:36 +0200 (CEST)
Received: by mail-ed1-x52d.google.com with SMTP id
 4fb4d7f45d1cf-5067736607fso12339116a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 06:18:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9da07c5-e434-11ed-b224-6b7b168915f2
X-Received: by 2002:a05:6402:158:b0:4fe:9689:96bb with SMTP id
 s24-20020a056402015800b004fe968996bbmr21081563edu.35.1682515115728; Wed, 26
 Apr 2023 06:18:35 -0700 (PDT)
MIME-Version: 1.0
References: <20230425194622.114869-1-jandryuk@gmail.com> <20230425194622.114869-2-jandryuk@gmail.com>
 <1085873c-13dd-aae4-55f2-9d69635b37be@suse.com>
In-Reply-To: <1085873c-13dd-aae4-55f2-9d69635b37be@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 26 Apr 2023 09:18:23 -0400
Message-ID: <CAKf6xpvu68FJKAkoeXOQUFA8XCxGxBHT56-MWxUTz-R8nYAUrQ@mail.gmail.com>
Subject: Re: [PATCH] libxl: device_backend_callback() print rc on error
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, 
	Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Apr 26, 2023 at 4:39 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.04.2023 21:46, Jason Andryuk wrote:
> > Print the rc when an error is found in device_backend_callback() so the
> > user can have some idea of why things went wrong.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > ---
> >  tools/libs/light/libxl_device.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
>
> While patches which are part of a series should be sent as replies to the
> cover letter, may I ask that you do not send individual patches as replies
> to other (unrelated) patches (or, in general, really as replies to anything,
> i.e. also not as replies to e.g. an earlier discussion)?

Certainly.  Sorry about that.  I formatted the patches individually,
but sent them with a single git send-email command.  Looks like I
should have added --no-thread to have them sent individually.
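For reference, the same behaviour can be made the default via the sendemail.thread configuration knob (the persistent equivalent of passing --no-thread each time); a quick sketch in a throwaway repository:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
# Equivalent of passing --no-thread on every git send-email invocation:
# with thread=false, patches are sent without In-Reply-To chaining.
git config sendemail.thread false
git config --get sendemail.thread
```

This prints `false`, confirming the repository-local default.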

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 13:32:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 13:32:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526706.818593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prfFW-0005RP-Se; Wed, 26 Apr 2023 13:31:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526706.818593; Wed, 26 Apr 2023 13:31:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prfFW-0005RI-Pw; Wed, 26 Apr 2023 13:31:58 +0000
Received: by outflank-mailman (input) for mailman id 526706;
 Wed, 26 Apr 2023 13:31:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+pP=AR=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1prfFV-0005RC-T4
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 13:31:57 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b6e5dc1a-e436-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 15:31:56 +0200 (CEST)
Received: by mail-ed1-x52d.google.com with SMTP id
 4fb4d7f45d1cf-5051abd03a7so10565120a12.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 06:31:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6e5dc1a-e436-11ed-b224-6b7b168915f2
X-Received: by 2002:aa7:da41:0:b0:505:4391:398 with SMTP id
 w1-20020aa7da41000000b0050543910398mr19234209eds.33.1682515916052; Wed, 26
 Apr 2023 06:31:56 -0700 (PDT)
MIME-Version: 1.0
References: <20230425194622.114869-1-jandryuk@gmail.com> <20230426091533.68324d8d.olaf@aepfle.de>
 <650a7f6e-be82-0312-05f2-bb69e51e828d@suse.com> <20230426104754.78845a19.olaf@aepfle.de>
 <9dfb4f01-979e-e225-214e-34ddb51a9199@suse.com> <20230426124051.24c2a9a6.olaf@aepfle.de>
In-Reply-To: <20230426124051.24c2a9a6.olaf@aepfle.de>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 26 Apr 2023 09:31:44 -0400
Message-ID: <CAKf6xpt-h7sMjznhbn1RvdT_kn1iri5oXq+_ugjWib6YyuCx+w@mail.gmail.com>
Subject: Re: [PATCH] Fix install.sh for systemd
To: Olaf Hering <olaf@aepfle.de>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, 
	Anthony PERARD <anthony.perard@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Apr 26, 2023 at 6:40 AM Olaf Hering <olaf@aepfle.de> wrote:
>
> Wed, 26 Apr 2023 11:07:17 +0200 Jan Beulich <jbeulich@suse.com>:
>
> > On 26.04.2023 10:47, Olaf Hering wrote:
> > > XEN_RUN_DIR and most likely also XEN_RUN_STORED have to be removed from make install.
> > ... this suggests to me that you really mean the change doesn't go far
> > enough, but that's then different from nack-ing a change. Can you please
> > clarify this for me (and maybe also for Jason, depending on how he has
> > read your replies)?
>
> I think the change should look like this, the runtime directories have to be created at runtime.

Thanks, Olaf.  Yes, I think your approach is better.  Will you submit
it as a formal patch?  I'm happy to test it.

> --- a/tools/hotplug/Linux/init.d/xendriverdomain.in
> +++ b/tools/hotplug/Linux/init.d/xendriverdomain.in
> @@ -49,6 +49,7 @@ fi
>
>  do_start () {
>         echo Starting xl devd...
> +       mkdir -m700 -p ${XEN_RUN_DIR}

This one should be "@XEN_RUN_DIR@"?

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 13:41:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 13:41:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526710.818604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prfP0-00070N-QY; Wed, 26 Apr 2023 13:41:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526710.818604; Wed, 26 Apr 2023 13:41:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prfP0-00070G-Mp; Wed, 26 Apr 2023 13:41:46 +0000
Received: by outflank-mailman (input) for mailman id 526710;
 Wed, 26 Apr 2023 13:41:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m295=AR=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1prfOz-000708-1p
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 13:41:45 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on20628.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::628])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 11d15e6e-e438-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 15:41:40 +0200 (CEST)
Received: from DM6PR13CA0018.namprd13.prod.outlook.com (2603:10b6:5:bc::31) by
 PH7PR12MB6811.namprd12.prod.outlook.com (2603:10b6:510:1b5::9) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6319.33; Wed, 26 Apr 2023 13:41:36 +0000
Received: from DM6NAM11FT093.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:bc:cafe::fb) by DM6PR13CA0018.outlook.office365.com
 (2603:10b6:5:bc::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21 via Frontend
 Transport; Wed, 26 Apr 2023 13:41:36 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT093.mail.protection.outlook.com (10.13.172.235) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.21 via Frontend Transport; Wed, 26 Apr 2023 13:41:36 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 26 Apr
 2023 08:41:35 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 26 Apr
 2023 08:41:35 -0500
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 26 Apr 2023 08:41:34 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11d15e6e-e438-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lZh086fySMqdxCA/R2H/CW2/ZsafL/9K56j15SNW7DRM9a3C1jEUnkFZK7ViBQsgLDJfvzNjkhdBW5r6QRhutHo8dLQ5ZuVtFgITtjPnK/YNAmpxCQvkYPEFj5QvIaeirByQrWfdmqxLXONfVmapVbbbDpB87XrefUIPvkKBhoZzzvniW9roeyOi6UUMHSfisGC3Gsp+aAQdfdZzvGq6HaVt0o4ITpj8xj62UDXJ51fwHR7Wpj22aLdt0OfEzht4ZmaCCTgCc7UvPS9VDD9sERtnQ6ewv8R6rPIZpNNhuQX21fLhssOIqPzBfv7PZaqD+oJJg2NcnTiXGNQ71BFLXw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=E1eEH5BwC384RmlKdS6D0BJCyf0JUhERXAlfmnSiq6M=;
 b=hcJV2+Cq7oEh0GyVRBOnSTXcfjyJLn5TXtSR8tSjMm7OoK5bhzAs0FmZSmhrKhRkM3Ejk2bbCGeGaN0brW6JkmdSZ0sNJP8LiyFGv2be20jWqtB16RmT+LyiwRIF5mpWkHel1pdH4buq+ZHMANzQO6WLajYkIG5AzbRosELOsDdJbtAslMaRISl0o6s4sj68Lxyjn1bNBFY/RwMSbDZfpeN78wVIv8EDfh6LPfDbCLahJkvkRp1NRK3+2k/Qjiuq1L8/BjiW+zs2QiFm609DHRXMNcbDivw6ox2uI8jrtsfieiPyy99Tq83m16GQ1rsp+pYzo5MJRNs8t/V2/o4zxw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=E1eEH5BwC384RmlKdS6D0BJCyf0JUhERXAlfmnSiq6M=;
 b=YowstFeEecwFBdWeBTubyG/7sZ4n5wZVWmrjan1qdQNRb9PNtclQY8ZArnUK0Oo7mREqkGP9/Kf18Go4uWTbOdw1bBBM6rIrdKjLQhmFRs4S6KwAtmpkOeD18QLmzfmtnvwJElqNMOziUOZHsiOFPxyTIpwgl5sH93Z7GGvE864=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <a28b68fd-50e9-016f-71b3-6daa85619dad@amd.com>
Date: Wed, 26 Apr 2023 09:41:34 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [RFC PATCH] xen/sched/null: avoid crash after failed domU
 creation
To: Jan Beulich <jbeulich@suse.com>, Juergen Gross <jgross@suse.com>, "Dario
 Faggioli" <dfaggioli@suse.com>
CC: George Dunlap <george.dunlap@citrix.com>, <xen-devel@lists.xenproject.org>
References: <20230424210048.786436-1-stewart.hildebrand@amd.com>
 <16b10155-4dfb-6891-dc90-61a6b966ee6d@suse.com>
 <fd1cde5e-a12f-85f3-1c21-bc41a483be39@suse.com>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <fd1cde5e-a12f-85f3-1c21-bc41a483be39@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT093:EE_|PH7PR12MB6811:EE_
X-MS-Office365-Filtering-Correlation-Id: 3f3e85f4-70f2-4bf2-53de-08db465bf4a6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hXHPdsxpe8taw6FEy27mJqZJ0H9ky7I0s73JlstGB9KSeavPjdc9OYHjLeXv4InN90XuEO0LTMhklu2gOYdPu0FwBpEvG7i51lX02zxoj4NF8Me7osnWVK+dPq19Tqj9DdHMbAF1t0nY6eaiCLcbhWYq6ryHapXN3S9wknmKE3XHC6Aunwy64m84ILyuK3vhadbz0yJ3aO9dMSDGB1sjz5dfy6o4pgQkcCOaEkD3diGi3pPKTwOTQQnVYilZQhcFxXUf7DoNAWCCU12KYnlqSzzaa10bInFOyPue8YvEvUEAepKTKpQ8vR0sbr4Xw73IQ+gVMXkAXSPbeGHAPUN+TjJfmMd0XefDYxxOtCns/gV2ifRh7xSrU6dvUROcv9E/8USDOvQSErdd5iC+EWiNnisjXCmS723GSkMtm0lQZvFCq8KpPl73MUf8EDBAazTyw0stcVoHw/0KfQnzHuxcGT36qaHeWn02kTOao2VBVfbiZiCt2pRaaRcO6SK2XiyQv4hmZp1f0ntGCDnFXIKhfi6lU0FsL9B5KHB/t04q3Jytv/7yP9XUYTEvCzsJFvun+d0xuqw5Jqn3hMG4ItxNIck7Tmbkzltf6RSDBVwn+YdZvXrHrnTDJLXyLBEiOqhq+ZpxxJiy5Tx3vkjUi+7m4V9kUrlBrnaQJHjktjEB0i4KWYivwoj2sX7UbtB+BpH+Hw+jhRlZKCGT8SwI6MBkGmibQh5CvK/Bz42DGjXPPhpgJf1R8eaEpZkr0zDO3SWqbGBCOkWfUBMImth3yW8iYA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(136003)(39860400002)(346002)(451199021)(46966006)(36840700001)(40470700004)(2906002)(2616005)(53546011)(4326008)(41300700001)(70206006)(8936002)(70586007)(26005)(186003)(316002)(478600001)(40480700001)(44832011)(54906003)(5660300002)(110136005)(8676002)(82740400003)(356005)(81166007)(86362001)(40460700003)(31696002)(36756003)(336012)(47076005)(36860700001)(83380400001)(82310400005)(426003)(31686004)(16576012)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 13:41:36.2785
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3f3e85f4-70f2-4bf2-53de-08db465bf4a6
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT093.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB6811

On 4/25/23 03:42, Jan Beulich wrote:
> On 25.04.2023 08:36, Juergen Gross wrote:
>> On 24.04.23 23:00, Stewart Hildebrand wrote:
>>> When creating a domU, but the creation fails, we may end up in a state
>>> where a vcpu has not yet been added to the null scheduler, but
>>> unit_deassign() is invoked.
>>
>> This is not really true. The vcpu has been added, but it was offline at
>> that time. This resulted in null_unit_insert() returning early and not
>> calling unit_assign().
>>
>> Later the vcpu was onlined during XEN_DOMCTL_setvcpucontext handling,
>> resulting in null_unit_remove() calling unit_deassign().

Makes sense. I'll reword the message in the next revision.

>>> In this case, when running a debug build of
>>> Xen, we will hit an ASSERT and crash Xen:
>>>
>>> (XEN) ****************************************
>>> (XEN) Panic on CPU 0:
>>> (XEN) Assertion 'npc->unit == unit' failed at common/sched/null.c:379
>>> (XEN) ****************************************
>>>
>>> To work around this, remove the ASSERT and introduce a check for the
>>> case where npc->unit is NULL and simply return false from
>>> unit_deassign().
>>
>> I think the correct fix would be to call unit_deassign() from
>> null_unit_remove() only, if npc->unit isn't NULL. Dario might have a
>> different opinion, though. :-)

Yes, this seems cleaner to me, thanks for the suggestion. I did a quick test, and this approach works to avoid the crash too. I'll wait a few days in case anyone else wants to chime in, and if there aren't any more comments I'll send out a new patch following this suggestion.

> Furthermore, even if the proposed solution was (roughly) followed, ...
> 
>>> --- a/xen/common/sched/null.c
>>> +++ b/xen/common/sched/null.c
>>> @@ -376,7 +376,14 @@ static bool unit_deassign(struct null_private *prv, const struct sched_unit *uni
>>>       struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
>>>
>>>       ASSERT(list_empty(&null_unit(unit)->waitq_elem));
>>> -    ASSERT(npc->unit == unit);
>>> +
>>> +    if ( !npc->unit )
>>> +    {
>>> +        dprintk(XENLOG_G_INFO, "%d <-- NULL (%pdv%d)\n", cpu, unit->domain,
>>> +                unit->unit_id);
>>> +        return false;
>>> +    }
>>> +
> 
> ... shouldn't the assertion be kept, with the new if() inserted ahead of
> it? Plus the log message probably better wouldn't print a unit ID like a
> vCPU one, but instead use e.g. %pdu%u?

Sure, although, with Juergen's suggested fix in null_unit_remove(), I think we could simply drop this snippet and leave unit_deassign() unmodified.

Your suggested print format is an improvement, but perhaps it would be better suited for a separate patch since there are several more instances throughout null.c that would also want to be changed.

Example with %pdv%d:
# xl create ...
(XEN) common/sched/null.c:355: 3 <-- d1v0
# xl destroy ...
(XEN) common/sched/null.c:385: 3 <-- NULL (d1v0)

Example with %pdu%u:
# xl create ...
(XEN) common/sched/null.c:355: 3 <-- d1u0
# xl destroy ...
(XEN) common/sched/null.c:385: 3 <-- NULL (d1u0)


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 13:55:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 13:55:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526714.818614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prfby-00008Q-1r; Wed, 26 Apr 2023 13:55:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526714.818614; Wed, 26 Apr 2023 13:55:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prfbx-00008J-Uj; Wed, 26 Apr 2023 13:55:09 +0000
Received: by outflank-mailman (input) for mailman id 526714;
 Wed, 26 Apr 2023 13:55:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prfbw-000088-2b; Wed, 26 Apr 2023 13:55:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prfbv-0005NC-SG; Wed, 26 Apr 2023 13:55:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prfbv-0005wV-Cb; Wed, 26 Apr 2023 13:55:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prfbv-0007P6-6B; Wed, 26 Apr 2023 13:55:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ob/kLy5mW/qWt5rXtU6z2zGsueJsjQ++Wuxs8jLbLDg=; b=KJJ6Eqpa6STN7x9qYxJdA/sELA
	tKoqd62KPFuB085M4pqK2Ot0akwn5hs+ZHdpbMtFF3U4sRFudLdUJ6rHJAKmo3sMmqOmeQNF22VfJ
	YGNqaZm5ko7PALRqXTea3Qgm52iGmA95WvECTBD3qslc7gwl55EpiCqHat8keOLepKfw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180416-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180416: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-pair:xen-install/src_host:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:nonblocking
    xen-unstable:test-amd64-i386-pair:xen-install/dst_host:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f6c3cb21628f7bed73cb992da400f6b36630f290
X-Osstest-Versions-That:
    xen=c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 13:55:07 +0000

flight 180416 xen-unstable real [real]
flight 180430 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180416/
http://logs.test-lab.xenproject.org/osstest/logs/180430/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 180401

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64  7 xen-install      fail pass in 180430-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail like 180381
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180381
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install       fail like 180391
 test-amd64-i386-pair         11 xen-install/dst_host         fail  like 180401
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180401
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180401
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180401
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180401
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180401
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180401
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180401
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180401
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180401
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180401
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180401
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  f6c3cb21628f7bed73cb992da400f6b36630f290
baseline version:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51

Last test of basis   180401  2023-04-25 01:51:51 Z    1 days
Testing same since   180416  2023-04-25 18:06:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f6c3cb21628f7bed73cb992da400f6b36630f290
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Mar 20 12:08:52 2023 +0100

    x86/shadow: restore dropped check in sh_unshadow_for_p2m_change()
    
    As a result of 241702e064604dbb3e0d9b731aa8f45be448243b the
    mfn_valid() check in sh_unshadow_for_p2m_change() was lost.  That
    allows sh_remove_shadows() to be called with gfns that have no backing
    page, causing an ASSERT to trigger in debug builds or dereferencing an
    arbitrary pointer partially under guest control in non-debug builds:
    
    RIP:    e008:[<ffff82d0402dcf2c>] sh_remove_shadows+0x19f/0x722
    RFLAGS: 0000000000010246   CONTEXT: hypervisor (d0v2)
    [...]
    Xen call trace:
       [<ffff82d0402dcf2c>] R sh_remove_shadows+0x19f/0x722
       [<ffff82d0402e28f4>] F arch/x86/mm/shadow/hvm.c#sh_unshadow_for_p2m_change+0xab/0x2b7
       [<ffff82d040311931>] F arch/x86/mm/p2m-pt.c#write_p2m_entry+0x19b/0x4d3
       [<ffff82d0403131b2>] F arch/x86/mm/p2m-pt.c#p2m_pt_set_entry+0x67b/0xa8e
       [<ffff82d040302c92>] F p2m_set_entry+0xcc/0x149
       [<ffff82d040305a50>] F unmap_mmio_regions+0x17b/0x2c9
       [<ffff82d040241e5e>] F do_domctl+0x11f3/0x195e
       [<ffff82d0402c7e10>] F hvm_hypercall+0x5b1/0xa2d
       [<ffff82d0402adc72>] F vmx_vmexit_handler+0x130f/0x1cd5
       [<ffff82d040203602>] F vmx_asm_vmexit_handler+0xf2/0x210
    
    ****************************************
    Panic on CPU 1:
    Assertion 'mfn_valid(gmfn)' failed at arch/x86/mm/shadow/common.c:2203
    ****************************************
    
    Fix this by restoring the mfn_valid() check in
    sh_unshadow_for_p2m_change(), unifying it with the rest of the checks
    that are done at the start of the function.
    
    This is XSA-430 / CVE-2022-42335
    
    Fixes: 241702e064 ('x86/shadow: slightly consolidate sh_unshadow_for_p2m_change() (part II)')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit ffc3ca75e25024c05bce7afea694a7446e513c03
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Apr 25 12:37:25 2023 +0200

    x86/shadow: "monitor table" is a HVM-only concept
    
    It looks like in the combination of aff8bf94ce65 ('x86/shadow: only
    4-level guest code needs building when !HVM') and 0b841314dace
    ('x86/shadow: sh_{make,destroy}_monitor_table() are "even more" HVM-
    only') I didn't go quite far enough: SH_type_monitor_table is also
    effectively unused when !HVM.
    
    The assertion early in sh_destroy_shadow() can have the type dropped
    altogether: it shouldn't make it here in the first place. Pages of
    this type are freed directly from sh_destroy_monitor_table() only.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f514bab30ef8d4ade77a27c926e283c9bbbf9ffd
Author: Juergen Gross <jgross@suse.com>
Date:   Tue Apr 25 12:18:37 2023 +0200

    x86: add support for crash dump analysis with xen.efi
    
    Today it is not possible to analyse crash dumps of a system in
    hypervisor mode when it has been booted via EFI, as the crash utility
    doesn't understand the file format of xen.efi.
    
    This can easily be solved by creating an ELF file from xen.efi via
    objcopy. Using that file as the namelist for crash enables the user to
    analyse the dump in hypervisor mode. Note that crash isn't happy with
    a file containing no text and data, so using --only-keep-debug is not
    an option.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 7c3e99b642d321bfe4eafed4d667e1bdd65ac410
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Apr 25 12:17:26 2023 +0200

    x86: detect CMOS aliasing on ports other than 0x70/0x71
    
    ... in order to also intercept Dom0 accesses through the alias ports.
    
    Also stop intercepting accesses to the CMOS ports if we won't ourselves
    use the CMOS RTC, because of there being none. This doesn't go as far as
    covering port 0x70, as that also has the NMI disable bit, which we don't
    want to permit Dom0 to set.
    
    Note that rtc_init() deliberately uses 16 as the upper loop bound,
    despite probe_cmos_alias() using 8: the higher bound is benign now, and
    would save us touching the code (or, worse, forgetting to touch it) in
    case the lower one was doubled.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

commit 913751d7af6e78d65c1e2adf4887193c827f0c5e
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Tue Apr 25 12:16:17 2023 +0200

    x86/msi: clear initial MSI-X state on boot
    
    Some firmware/devices are found not to reset MSI-X properly, leaving
    MASKALL set. Jason reports that on his machine MASKALL persists through
    a warm reboot, but is cleared on cold boot. Xen relies on the initial
    state having MASKALL clear. In particular, pci_reset_msix_state()
    assumes that if MASKALL is set, it was Xen that set it due to
    msix->host_maskall or msix->guest_maskall. Clearing just MASKALL is
    risky if ENABLE is set, so clear them both.
    
    Reported-by: Jason Andryuk <jandryuk@gmail.com>
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Jason Andryuk <jandryuk@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 14:09:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 14:09:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526722.818624 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prfpW-0001tv-By; Wed, 26 Apr 2023 14:09:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526722.818624; Wed, 26 Apr 2023 14:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prfpW-0001to-92; Wed, 26 Apr 2023 14:09:10 +0000
Received: by outflank-mailman (input) for mailman id 526722;
 Wed, 26 Apr 2023 14:09:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ldec=AR=citrix.com=prvs=473a90206=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1prfpV-0001ti-6j
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:09:09 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e6eedcf1-e43b-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 16:09:06 +0200 (CEST)
Received: from mail-bn7nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 26 Apr 2023 10:09:02 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SN7PR03MB7274.namprd03.prod.outlook.com (2603:10b6:806:2de::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Wed, 26 Apr
 2023 14:09:00 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%4]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 14:09:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6eedcf1-e43b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682518146;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=QAIzhBizAEn3A6H1xGh0lOJhiJJLg1iOQ+Q14zLMrLs=;
  b=JQfXHSLrHf7OR+UVCOkYsRmdRPDIxbckXef2umVYXkZteWOx7hWmn+Dp
   MmjCFHSV2zUoxg+txyfKZIdsPAHhRgRog6xqZ66NJzrwvQ6KFu1+Ekkf4
   tSo03Y/fUZcEDvBSpRR4422sFqI/hiPbSgQfYdisVKJe9hfVcT1dITuFI
   U=;
X-IronPort-RemoteIP: 104.47.70.104
X-IronPort-MID: 106828060
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,228,1677560400"; 
   d="scan'208";a="106828060"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FIsfvn7vHUmyaygHz/bYx2AUse3OvQ/Ew7Xno2+dzY8=;
 b=L7yuPjF51F+7s7KHUb5njFeD9TdSJ/Uu40tsFQaTxwZQUZvCjXQVMkygsHC7htSgUA326RqPaAFYm3b57cWd6m09p6IB5a7NBuez84vDxvcMUzlanBxOk777R8D1Ifs6DQe1bq4BTD8eEygSmCDT0yHTr8930mk9oaPyR+/F+HI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <357e94d0-0f49-7854-7562-9f6550d0fdd8@citrix.com>
Date: Wed, 26 Apr 2023 15:08:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] xen/misra: xen-analysis.py: fix return error on
 PhaseExceptions
Content-Language: en-GB
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230426104605.3447049-1-luca.fancellu@arm.com>
In-Reply-To: <20230426104605.3447049-1-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0035.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SN7PR03MB7274:EE_
X-MS-Office365-Filtering-Correlation-Id: fa26597a-1603-426a-650e-08db465fc83e
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fa26597a-1603-426a-650e-08db465fc83e
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 14:09:00.2227
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Zs8uyWHhCvxE6jHbU+QxUGZb1JU0LvJucVA8bxrwum+Fd5Hn8K/ZXaVTnxXwIuo+Vx/WURfUQzEulspelGeLR3HHNaeksiCeGHB+7Bs6Vhc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR03MB7274

On 26/04/2023 11:46 am, Luca Fancellu wrote:
> Currently the script return code is 0 even if an exception is
> found, because the return code is written only if the exception
> object has the errorcode member.
>
> Fix the issue by returning the errorcode member if it exists,
> otherwise falling back to a generic non-zero value.
>
> Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  xen/scripts/xen-analysis.py | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/xen/scripts/xen-analysis.py b/xen/scripts/xen-analysis.py
> index 8e50c27cd898..7185c5a06d2c 100755
> --- a/xen/scripts/xen-analysis.py
> +++ b/xen/scripts/xen-analysis.py
> @@ -26,8 +26,7 @@ def main(argv):
>              cppcheck_analysis.generate_cppcheck_report()
>      except PhaseExceptions as e:
>          print("ERROR: {}".format(e))
> -        if hasattr(e, "errorcode"):
> -            ret_code = e.errorcode
> +        ret_code = e.errorcode if hasattr(e, "errorcode") else 1

ret_code = getattr(e, "errorcode", 1)

is rather more succinct, and pythonic.
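For illustration, the two spellings behave identically; here is a minimal
self-contained sketch (the PhaseError class is a hypothetical stand-in for
the script's PhaseExceptions, not the real xen-analysis.py code):

```python
class PhaseError(Exception):
    """Stand-in for PhaseExceptions; errorcode is only sometimes present."""
    def __init__(self, msg, errorcode=None):
        super().__init__(msg)
        if errorcode is not None:
            self.errorcode = errorcode

def ret_code_verbose(e):
    # The patch's two-branch form.
    return e.errorcode if hasattr(e, "errorcode") else 1

def ret_code_pythonic(e):
    # The suggested form: one attribute lookup with a default.
    return getattr(e, "errorcode", 1)

# Both forms agree whether or not errorcode is present.
assert ret_code_verbose(PhaseError("boom", errorcode=3)) == 3
assert ret_code_pythonic(PhaseError("boom", errorcode=3)) == 3
assert ret_code_verbose(PhaseError("boom")) == 1
assert ret_code_pythonic(PhaseError("boom")) == 1
```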

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 14:37:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 14:37:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526729.818637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgGt-0005MI-M1; Wed, 26 Apr 2023 14:37:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526729.818637; Wed, 26 Apr 2023 14:37:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgGt-0005MB-IK; Wed, 26 Apr 2023 14:37:27 +0000
Received: by outflank-mailman (input) for mailman id 526729;
 Wed, 26 Apr 2023 14:37:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jz9v=AR=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1prgGr-0005M5-Oq
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:37:25 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0606.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::606])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id db7b37ca-e43f-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 16:37:23 +0200 (CEST)
Received: from AM5PR0201CA0011.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::21) by AM9PR08MB6098.eurprd08.prod.outlook.com
 (2603:10a6:20b:2d9::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Wed, 26 Apr
 2023 14:37:21 +0000
Received: from AM7EUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:3d:cafe::b9) by AM5PR0201CA0011.outlook.office365.com
 (2603:10a6:203:3d::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33 via Frontend
 Transport; Wed, 26 Apr 2023 14:37:21 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT054.mail.protection.outlook.com (100.127.140.133) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.21 via Frontend Transport; Wed, 26 Apr 2023 14:37:20 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Wed, 26 Apr 2023 14:37:20 +0000
Received: from 4e5ff39aef5e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 267AAA95-554E-4F2C-B88F-352A1EC34806.1; 
 Wed, 26 Apr 2023 14:37:13 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4e5ff39aef5e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 26 Apr 2023 14:37:13 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB9080.eurprd08.prod.outlook.com (2603:10a6:10:474::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Wed, 26 Apr
 2023 14:37:12 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6b4f:579f:6dca:8b91%5]) with mapi id 15.20.6340.021; Wed, 26 Apr 2023
 14:37:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db7b37ca-e43f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VpklSiqmCRDey2bBIC0y66olfvI4v9WoIdqkHYIUQW4=;
 b=+AFlFwgonlwE/ufNb6HoL5gyV7C3aMAOgK2+qbq3sVBO4VnH/DclZ9CcjwmDIWRrdQLk2q/oxhoKuEhYzZG0N73dUgMtWvGBOMc/+Hgra8vCeho3U+uWPsHtcdJTfCLHI10CsgMJM+kHkcuPXoMa+CTA9ahh1GM99ARhwKEQS1c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: f669797dadd128ce
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH] xen/misra: xen-analysis.py: fix return error on
 PhaseExceptions
Thread-Topic: [PATCH] xen/misra: xen-analysis.py: fix return error on
 PhaseExceptions
Thread-Index: AQHZeCxqGE3P3XkfFUu99Ot710R/Aa89oOqAgAAH2wA=
Date: Wed, 26 Apr 2023 14:37:12 +0000
Message-ID: <367DCC52-C860-4BBF-A9C2-33CD002819D0@arm.com>
References: <20230426104605.3447049-1-luca.fancellu@arm.com>
 <357e94d0-0f49-7854-7562-9f6550d0fdd8@citrix.com>
In-Reply-To: <357e94d0-0f49-7854-7562-9f6550d0fdd8@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DU0PR08MB9080:EE_|AM7EUR03FT054:EE_|AM9PR08MB6098:EE_
X-MS-Office365-Filtering-Correlation-Id: d912cfea-df70-4efd-53b2-08db4663be03
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 p8v6+hRIZk//5iZ01YwnhGrGP2pBfDjhkJY4AybIROOJ4dKgMD6Ww6b9y89Pq+cuvBFMD/MMS9TBDsWDc1X1T9Du3oWakdtW3ELAcUaa3NcA1TdyBJzd3Wb5BN6ePVR+wkIYYn+o+iLp4HudioIPvGvgDnE+2Rq+AkNzvJeopyMniWxc9WlVWdQrB4cMHncwPNSEptf7Cwd6baJJYBf7K5vVQWWr9wczsbVM/T+UlMVWfsGVdDW+tPLxlvkRILhENEJGfeeK7YN4fxm96SSUrzEFHK6SDnvCpu1FhDmBdR6cguy3xGluu0AgOooP+2ESOQ+IAG2s1zcd2UaaufPNNNu/Sr3OzyQLTcXnZfWg6mrvDWdpIefdYh4x5U6+FMHqkXIcadj9Bn2Oe8tpQ160uPxLDz7zBnlg4Mf1EfSbSbVu+1I64rxrnZ/ABHusJzm3aeOsOibs/riYznwLvZjvAxJ38XB61HWIcvIriHm11s3E253fd1mg1Ao08AVWt4NzKjdIKx8YtU4xq+JaAyfY4E+UeBGNt+knwqiWzlKFL5ceCqt11vhDlkq3MWL0FHtfcCevUCx3FweEkbp2Uwz3ANwyRAhUkU13ivr4qG4dLtS47l7FAlcgizRCDQBSzq7qgKML0WCqhGNxqVU4FHpaHA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(396003)(39860400002)(366004)(376002)(451199021)(6506007)(33656002)(26005)(6512007)(53546011)(86362001)(186003)(2616005)(5660300002)(83380400001)(6486002)(71200400001)(36756003)(8936002)(38070700005)(8676002)(54906003)(41300700001)(316002)(38100700002)(2906002)(66446008)(66476007)(76116006)(66946007)(4326008)(6916009)(478600001)(122000001)(64756008)(91956017)(66556008)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <D0AED5687D9CA2458EB49B4AF1F93BDD@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9080
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b6f9c645-5c71-4100-32ea-08db4663b8f1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	idoP69EUXsKjoM4TcE9PHv5XSbR7lqhwBrT4i2K0M8BdzrJi5Y3nZ46Ivmc/8SwgAjoUMPcy2pqZryY6QWvwGSGfkqXCedxtN5u+nDIJx9AjvqXJHUJmg9UbfVFlC9ndAC/12bKsRYRVooFO/Pjhzc/59sfr8P8OBFNtLu+gcXKErnl9YzP1yVjB+YpIXRjVWO99AVGzjfQVm6cY8FAgHW5Oy1qIoGTc5Eh6ut9yjilEyJSIkfZzk97ugKjUFBqQYGKbcBSilV0Yip6ezFKoV28HLmdyD3kUJwknxH3GfxT48i8rylvit4qATbx9fKGYDK4xHwT52AUXDRXdyQQ0ccM2Do96Wm6UB8LC1OIdHckrCqxWSVlflsgyPm52gFxQ+Ba+wI/aiBfpInGRD5G8koyezOhJf0Oz5ySq989VM+cPJshmJH5yPNSOXwnqdmTvYU61LjQ+6V7cBIwarQ6y6NIUotn8NzrDy5dk+08JjAJAFLOVEDS7zsj8Yc7I/9S1DDSohdhIYvCI0JsuZAh213XyNcWKjH8OH76zfPIiwsCyZOqN8OnocZSeMQ6Abam8bB0qhDFyrTrV7SkZ2WHHMFltRdpkFliEGfbWiz5vuXJ45ESRXOOoIvgW5o3rves/Z0EtxY9QgMWV45kMpoL+YSb3mBWfTXLe6u06oIQIsPOcZTRAySKNqDaeHkkIHzM3DDKBIU3jfcUuaqv3hHlZHOrPJk49suniZxgOM2KnN6I=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(346002)(376002)(39860400002)(451199021)(46966006)(36840700001)(40470700004)(83380400001)(82310400005)(40480700001)(336012)(2906002)(82740400003)(47076005)(478600001)(356005)(54906003)(41300700001)(5660300002)(33656002)(316002)(8936002)(8676002)(70586007)(70206006)(6862004)(34020700004)(4326008)(86362001)(6486002)(36860700001)(36756003)(40460700003)(186003)(6506007)(6512007)(81166007)(26005)(53546011)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 14:37:20.6027
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d912cfea-df70-4efd-53b2-08db4663be03
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6098



> On 26 Apr 2023, at 15:08, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> On 26/04/2023 11:46 am, Luca Fancellu wrote:
>> Currently the script return code is 0 even if an exception is
>> found, because the return code is written only if the exception
>> object has the errorcode member.
>> 
>> Fix the issue returning the errorcode member in case it exists,
>> otherwise use a generic value different from 0.
>> 
>> Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> xen/scripts/xen-analysis.py | 3 +--
>> 1 file changed, 1 insertion(+), 2 deletions(-)
>> 
>> diff --git a/xen/scripts/xen-analysis.py b/xen/scripts/xen-analysis.py
>> index 8e50c27cd898..7185c5a06d2c 100755
>> --- a/xen/scripts/xen-analysis.py
>> +++ b/xen/scripts/xen-analysis.py
>> @@ -26,8 +26,7 @@ def main(argv):
>>             cppcheck_analysis.generate_cppcheck_report()
>>     except PhaseExceptions as e:
>>         print("ERROR: {}".format(e))
>> -        if hasattr(e, "errorcode"):
>> -            ret_code = e.errorcode
>> +        ret_code = e.errorcode if hasattr(e, "errorcode") else 1
> 
> ret_code = getattr(e, "errorcode", 1)
> 
> is rather more succinct, and pythonic.

Yes it looks better, I’ll update the patch

> 
> ~Andrew


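The getattr() suggestion in the thread above can be shown with a minimal, self-contained sketch. PhaseError here is a hypothetical stand-in for the script's PhaseExceptions classes, not the real code from xen-analysis.py:

```python
# Minimal sketch: getattr() with a default collapses the
# hasattr()/conditional pattern into a single call.
# PhaseError is a hypothetical stand-in, not the real class.
class PhaseError(Exception):
    def __init__(self, msg, errorcode=None):
        super().__init__(msg)
        if errorcode is not None:
            self.errorcode = errorcode

def exit_code(e):
    # Equivalent to: e.errorcode if hasattr(e, "errorcode") else 1
    return getattr(e, "errorcode", 1)

print(exit_code(PhaseError("boom", errorcode=7)))  # 7
print(exit_code(PhaseError("boom")))               # 1
```

Both forms behave identically; getattr() simply reads the attribute and falls back to the supplied default when it is absent.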

From xen-devel-bounces@lists.xenproject.org Wed Apr 26 14:48:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 14:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526735.818647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgRB-0006sd-Lc; Wed, 26 Apr 2023 14:48:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526735.818647; Wed, 26 Apr 2023 14:48:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgRB-0006sW-Hy; Wed, 26 Apr 2023 14:48:05 +0000
Received: by outflank-mailman (input) for mailman id 526735;
 Wed, 26 Apr 2023 14:48:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ldec=AR=citrix.com=prvs=473a90206=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1prgRA-0006sP-8D
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:48:04 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56433a00-e441-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 16:48:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56433a00-e441-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682520481;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=GZ9NpsvUTgP/I90kLu9M40l73+ZnSwecSPLdpqLq8z8=;
  b=OR5kxCQ8e5juk9HlPeC6ABJYdOf9ukOP2jlWrvooFuv2vsebN266NmHh
   OSp2EFwJ+DPooet3zsgujBpVWB4LVKPlIEzztFFA8pUuwtFQQDlMqFv3l
   JL0sWWtRiUxnAkGJt+Gkk/p37N5Z0SNbTu4t5DV4hySjm08mEkj4A+Znu
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107351865
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:Pu0twqlf8HIzQI0x986d+pvo5gy5JkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xJKUTqHaaqJNGP9KYonatnl908FsZeDxt9hTwY4qS43RCMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icfHgqH2eIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7aWaVA8w5ARkPqgX5gGGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 ftCMhQTZTKkvcan5uupVvZKou8bEPC+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglHWdTFCpU3Tjq0w+2XJlyR60aT3McqTcduPLSlQth/A+
 TmbpjSnX3n2MvTDzR+qyFeG2tT0siejALwRJua88vVT1Qj7Kms7V0RNCArTTeOCoku/UNJWL
 woT4DgjqYA78UDtRd74NzWzqWSIuRM0UNNKH+o3rgqKz8L85g+DA2EeQz1pado4tdQ3Tzgny
 l+ImdzyATVl9raSTBq17ayIpDm/PSwUK24qZiIeSwYBpd75r+kbnh/JC9puDqOxptn0Ai3rh
 SCHqjAkgLcehtJN0L+0lW0rmBr1+MKPFFRsoFyKACT8tFgRiJOZi5KA7gLByehLDqGjakin5
 WoYhO66x9gyNMTY/MCSe9nhDI1F9t7cbm2G2g8yR8Rxn9i+0yX9JN4NuVmSMG8sa59ZImGxP
 Sc/rCsLvPdu0G2Wgbibim5bI+Aj1uDeGNvsTZg4hfIeM8EqJGdrEMyDDHN8PlwBc2B2y8nTw
 b/BLa6R4Y8yUMyLNgaeSeYHyqMMzSsj327VTp2T5035geDCOCHLFulUYADmggUFAESs+V29z
 jqiH5HSl0U3vBPWOUE7DrL/3XhVdCNmVPgaWuRcd/KZIxoOJVzN/8T5mOt7E6Q8xvQ9qws91
 i3lMqOu4Aal1CKvxMTjQiwLVY4Dqr4l/CtkYHF8YA32s5XhCK72hJoim1IMVeFP3IReITRcF
 JHpp+3o7ixzdwn6
IronPort-HdrOrdr: A9a23:3bjfQqCGylXXJ6rlHelo55DYdb4zR+YMi2TDt3oddfU1SL38qy
 nKpp4mPHDP5wr5NEtPpTniAtjjfZq/z/5ICOAqVN/PYOCPggCVxepZnOjfKlPbehEX9oRmpN
 1dm6oVMqyMMbCt5/yKnDVRELwbsaa6GLjDv5a785/0JzsaE52J6W1Ce2GmO3wzfiZqL7wjGq
 GR48JWzgDQAkj+PqyAdx84t/GonayzqK7b
X-Talos-CUID: 9a23:f70ywW9HXTx4oX70BEmVv0lOAOQ7eV/69lPBBRaqFXlPbJatdWbFrQ==
X-Talos-MUID: =?us-ascii?q?9a23=3Akoqr0A91GPCDWNc+fboyMZKQf5tsxouULWpRqq4?=
 =?us-ascii?q?pv5HcCy9CYDOZhw3iFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,228,1677560400"; 
   d="scan'208";a="107351865"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Anthony PERARD
	<anthony.perard@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Michal Orzel <michal.orzel@amd.com>, Doug Goldstein <cardoe@cardoe.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
	<marmarek@invisiblethingslab.com>
Subject: [PATCH] CI: Remove all use of /bin/false as a ROM
Date: Wed, 26 Apr 2023 15:47:48 +0100
Message-ID: <20230426144748.1236385-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

As the recent work to get PCI Passthrough testing working shows, putting
`/bin/false` as a ROM into guest context doesn't work so well.

For all ROM paths where we're skipping the build, use a slightly-plausible but
likely non-existent path instead.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Anthony PERARD <anthony.perard@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Doug Goldstein <cardoe@cardoe.com>
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/scripts/build | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/automation/scripts/build b/automation/scripts/build
index d830cff7b7c7..197d085f3e07 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -67,9 +67,9 @@ else
 
     if [[ "${cc_is_clang}" == "y" ]]; then
         # SeaBIOS cannot be built with clang
-        cfgargs+=("--with-system-seabios=/usr/share/seabios/bios.bin")
+        cfgargs+=("--with-system-seabios=/usr/share/no-seabios.bin")
         # iPXE cannot be built with clang
-        cfgargs+=("--with-system-ipxe=/usr/lib/ipxe/ipxe.pxe")
+        cfgargs+=("--with-system-ipxe=/usr/share/no-ipxe.pxe")
         # newlib cannot be built with clang so we cannot build stubdoms
         cfgargs+=("--disable-stubdom")
     fi
@@ -87,7 +87,7 @@ else
 
     # SeaBIOS requires GCC 4.6 or later
     if [[ "${cc_is_gcc}" == "y" && "${cc_ver}" -lt 0x040600 ]]; then
-        cfgargs+=("--with-system-seabios=/bin/false")
+        cfgargs+=("--with-system-seabios=/usr/share/no-seabios.bin")
     fi
 
     ./configure "${cfgargs[@]}"
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 14:58:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 14:58:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526741.818657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgav-0008Ue-Qj; Wed, 26 Apr 2023 14:58:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526741.818657; Wed, 26 Apr 2023 14:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgav-0008UX-NA; Wed, 26 Apr 2023 14:58:09 +0000
Received: by outflank-mailman (input) for mailman id 526741;
 Wed, 26 Apr 2023 14:55:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t/Q4=AR=amazon.co.uk=prvs=473b0a973=hakor@srs-se1.protection.inumbo.net>)
 id 1prgYf-0008RQ-On
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:55:49 +0000
Received: from smtp-fw-80006.amazon.com (smtp-fw-80006.amazon.com
 [99.78.197.217]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6bb8d845-e442-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 16:55:46 +0200 (CEST)
Received: from pdx4-co-svc-p1-lb2-vlan2.amazon.com (HELO
 email-inbound-relay-iad-1e-m6i4x-a65ebc6e.us-east-1.amazon.com)
 ([10.25.36.210]) by smtp-border-fw-80006.pdx80.corp.amazon.com with
 ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 26 Apr 2023 14:55:28 +0000
Received: from EX19D020EUA004.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-iad-1e-m6i4x-a65ebc6e.us-east-1.amazon.com (Postfix)
 with ESMTPS id 979826575B; Wed, 26 Apr 2023 14:55:24 +0000 (UTC)
Received: from EX19D037EUC004.ant.amazon.com (10.252.61.170) by
 EX19D020EUA004.ant.amazon.com (10.252.50.56) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.2.1118.26; Wed, 26 Apr 2023 14:55:23 +0000
Received: from EX19MTAUEC001.ant.amazon.com (10.252.135.222) by
 EX19D037EUC004.ant.amazon.com (10.252.61.170) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.2.1118.26; Wed, 26 Apr 2023 14:55:23 +0000
Received: from dev-dsk-hakor-1a-9589d7a9.eu-west-1.amazon.com (172.19.124.154)
 by mail-relay.amazon.com (10.252.135.200) with Microsoft SMTP Server
 id
 15.2.1118.26 via Frontend Transport; Wed, 26 Apr 2023 14:55:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bb8d845-e442-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1682520947; x=1714056947;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=HqpkH6rdqOFcn9B9h8DzALRHycpElJztzr/EKxb5REs=;
  b=QATy8Bc35oNRYIiAA0Ky7W5w+DDbkTJcsyYsTiOVoQSEOyb1ACbwTo0o
   18vrzB2EAzU87o5WZO7K3L3UXR1gxahFividw7tbnEl2P7X9oP9eHYFFU
   ikM79d41GGxROeoTzLz+HkYP/o4JhHeFsX17BUe7HgABnV51uG+Hl6mo3
   Y=;
X-IronPort-AV: E=Sophos;i="5.99,228,1677542400"; 
   d="scan'208";a="208348781"
From: Ruben Hakobyan <hakor@amazon.com>
To: <xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Ruben Hakobyan <hakor@amazon.com>, "Jan
 Beulich" <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH] x86/msi: dynamically map pages for MSI-X tables
Date: Wed, 26 Apr 2023 14:55:20 +0000
Message-ID: <20230426145520.40554-1-hakor@amazon.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Precedence: Bulk

Xen reserves a constant number of pages that can be used for mapping
MSI-X tables. This limit is defined by FIX_MSIX_MAX_PAGES in msi.h.

Reserving a fixed number of pages can result in an -ENOMEM when a device
requests a new page after the fixmap limit is exhausted, and working
around that requires manually adjusting the limit and recompiling.

To avoid the issues with the current fixmap implementation, we modify
the MSI-X page mapping logic to instead dynamically map new pages when
they are needed by making use of ioremap().
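The scheme the patch implements in msix_map_table()/msix_unmap_table() is a refcounted lazy mapping: the first user of a page maps it, the last user unmaps it. A minimal Python model of that bookkeeping, where map_page/unmap_page are hypothetical stand-ins for ioremap()/vunmap() (the names and class are mine, not Xen's):

```python
class TableMapper:
    """Refcounted lazy mapper: the first get() of a page maps it, the
    last put() unmaps it, mirroring the patch's table_refcnt/table_va."""

    def __init__(self, map_page, unmap_page):
        self._map = map_page        # stand-in for ioremap()
        self._unmap = unmap_page    # stand-in for vunmap()
        self._refs = {}             # page number -> refcount
        self._vas = {}              # page number -> mapping handle

    def get(self, page):
        if self._refs.get(page, 0) == 0:
            va = self._map(page)
            if va is None:          # mapping failed: refcount stays 0
                return None
            self._vas[page] = va
        self._refs[page] = self._refs.get(page, 0) + 1
        return self._vas[page]

    def put(self, page):
        self._refs[page] -= 1
        if self._refs[page] == 0:   # last user gone: release the mapping
            self._unmap(self._vas.pop(page))
```

With a counting fake for map_page, two get() calls on the same page trigger only one underlying mapping, and the mapping is released only on the matching final put(), which is exactly the invariant table_refcnt[] maintains in the C code.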

Signed-off-by: Ruben Hakobyan <hakor@amazon.com>
---
 xen/arch/x86/include/asm/fixmap.h |  2 -
 xen/arch/x86/include/asm/msi.h    |  5 +--
 xen/arch/x86/msi.c                | 69 ++++++++-----------------------
 3 files changed, 19 insertions(+), 57 deletions(-)

diff --git a/xen/arch/x86/include/asm/fixmap.h b/xen/arch/x86/include/asm/fixmap.h
index 516ec3fa6c..139c3e2dcc 100644
--- a/xen/arch/x86/include/asm/fixmap.h
+++ b/xen/arch/x86/include/asm/fixmap.h
@@ -61,8 +61,6 @@ enum fixed_addresses {
     FIX_ACPI_END = FIX_ACPI_BEGIN + NUM_FIXMAP_ACPI_PAGES - 1,
     FIX_HPET_BASE,
     FIX_TBOOT_SHARED_BASE,
-    FIX_MSIX_IO_RESERV_BASE,
-    FIX_MSIX_IO_RESERV_END = FIX_MSIX_IO_RESERV_BASE + FIX_MSIX_MAX_PAGES -1,
     FIX_TBOOT_MAP_ADDRESS,
     FIX_APEI_RANGE_BASE,
     FIX_APEI_RANGE_END = FIX_APEI_RANGE_BASE + FIX_APEI_RANGE_MAX -1,
diff --git a/xen/arch/x86/include/asm/msi.h b/xen/arch/x86/include/asm/msi.h
index a53ade95c9..16c80c9883 100644
--- a/xen/arch/x86/include/asm/msi.h
+++ b/xen/arch/x86/include/asm/msi.h
@@ -55,9 +55,6 @@
 #define	 MSI_ADDR_DEST_ID_MASK		0x00ff000
 #define  MSI_ADDR_DEST_ID(dest)		(((dest) << MSI_ADDR_DEST_ID_SHIFT) & MSI_ADDR_DEST_ID_MASK)
 
-/* MAX fixed pages reserved for mapping MSIX tables. */
-#define FIX_MSIX_MAX_PAGES              512
-
 struct msi_info {
     pci_sbdf_t sbdf;
     int irq;
@@ -213,7 +210,7 @@ struct arch_msix {
         unsigned long first, last;
     } table, pba;
     int table_refcnt[MAX_MSIX_TABLE_PAGES];
-    int table_idx[MAX_MSIX_TABLE_PAGES];
+    void __iomem *table_va[MAX_MSIX_TABLE_PAGES];
     spinlock_t table_lock;
     bool host_maskall, guest_maskall;
     domid_t warned;
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index d0bf63df1d..8128274c07 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -24,7 +24,6 @@
 #include <asm/smp.h>
 #include <asm/desc.h>
 #include <asm/msi.h>
-#include <asm/fixmap.h>
 #include <asm/p2m.h>
 #include <mach_apic.h>
 #include <io_ports.h>
@@ -39,75 +38,44 @@ boolean_param("msi", use_msi);
 
 static void __pci_disable_msix(struct msi_desc *);
 
-/* bitmap indicate which fixed map is free */
-static DEFINE_SPINLOCK(msix_fixmap_lock);
-static DECLARE_BITMAP(msix_fixmap_pages, FIX_MSIX_MAX_PAGES);
-
-static int msix_fixmap_alloc(void)
-{
-    int i, rc = -ENOMEM;
-
-    spin_lock(&msix_fixmap_lock);
-    for ( i = 0; i < FIX_MSIX_MAX_PAGES; i++ )
-        if ( !test_bit(i, &msix_fixmap_pages) )
-            break;
-    if ( i == FIX_MSIX_MAX_PAGES )
-        goto out;
-    rc = FIX_MSIX_IO_RESERV_BASE + i;
-    set_bit(i, &msix_fixmap_pages);
-
- out:
-    spin_unlock(&msix_fixmap_lock);
-    return rc;
-}
-
-static void msix_fixmap_free(int idx)
-{
-    spin_lock(&msix_fixmap_lock);
-    if ( idx >= FIX_MSIX_IO_RESERV_BASE )
-        clear_bit(idx - FIX_MSIX_IO_RESERV_BASE, &msix_fixmap_pages);
-    spin_unlock(&msix_fixmap_lock);
-}
-
-static int msix_get_fixmap(struct arch_msix *msix, u64 table_paddr,
+static void __iomem *msix_map_table(struct arch_msix *msix, u64 table_paddr,
                            u64 entry_paddr)
 {
     long nr_page;
-    int idx;
+    void __iomem *va = NULL;
 
     nr_page = (entry_paddr >> PAGE_SHIFT) - (table_paddr >> PAGE_SHIFT);
 
     if ( nr_page < 0 || nr_page >= MAX_MSIX_TABLE_PAGES )
-        return -EINVAL;
+        return NULL;
 
     spin_lock(&msix->table_lock);
     if ( msix->table_refcnt[nr_page]++ == 0 )
     {
-        idx = msix_fixmap_alloc();
-        if ( idx < 0 )
+        va = ioremap(entry_paddr, PAGE_SIZE);
+        if ( va == NULL )
         {
             msix->table_refcnt[nr_page]--;
             goto out;
         }
-        set_fixmap_nocache(idx, entry_paddr);
-        msix->table_idx[nr_page] = idx;
+        msix->table_va[nr_page] = va;
     }
     else
-        idx = msix->table_idx[nr_page];
+        va = msix->table_va[nr_page];
 
  out:
     spin_unlock(&msix->table_lock);
-    return idx;
+    return va;
 }
 
-static void msix_put_fixmap(struct arch_msix *msix, int idx)
+static void msix_unmap_table(struct arch_msix *msix, void __iomem *va)
 {
     int i;
 
     spin_lock(&msix->table_lock);
     for ( i = 0; i < MAX_MSIX_TABLE_PAGES; i++ )
     {
-        if ( msix->table_idx[i] == idx )
+        if ( msix->table_va[i] == va )
             break;
     }
     if ( i == MAX_MSIX_TABLE_PAGES )
@@ -115,9 +83,8 @@ static void msix_put_fixmap(struct arch_msix *msix, int idx)
 
     if ( --msix->table_refcnt[i] == 0 )
     {
-        clear_fixmap(idx);
-        msix_fixmap_free(idx);
-        msix->table_idx[i] = 0;
+        vunmap(va);
+        msix->table_va[i] = NULL;
     }
 
  out:
@@ -568,8 +535,8 @@ int msi_free_irq(struct msi_desc *entry)
     }
 
     if ( entry->msi_attrib.type == PCI_CAP_ID_MSIX )
-        msix_put_fixmap(entry->dev->msix,
-                        virt_to_fix((unsigned long)entry->mask_base));
+        msix_unmap_table(entry->dev->msix,
+                       (void*)((unsigned long)entry->mask_base & PAGE_MASK));
 
     list_del(&entry->list);
     xfree(entry);
@@ -892,10 +859,10 @@ static int msix_capability_init(struct pci_dev *dev,
     {
         /* Map MSI-X table region */
         u64 entry_paddr = table_paddr + msi->entry_nr * PCI_MSIX_ENTRY_SIZE;
-        int idx = msix_get_fixmap(msix, table_paddr, entry_paddr);
+        void __iomem *va = msix_map_table(msix, table_paddr, entry_paddr);
         void __iomem *base;
 
-        if ( idx < 0 )
+        if ( va == NULL )
         {
             if ( zap_on_error )
             {
@@ -907,9 +874,9 @@ static int msix_capability_init(struct pci_dev *dev,
 
             pci_conf_write16(dev->sbdf, msix_control_reg(pos), control);
             xfree(entry);
-            return idx;
+            return -ENOMEM;
         }
-        base = fix_to_virt(idx) + (entry_paddr & (PAGE_SIZE - 1));
+        base = va + (entry_paddr & (PAGE_SIZE - 1));
 
         /* Mask interrupt here */
         writel(1, base + PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 14:59:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 14:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526747.818676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcS-0000rc-C5; Wed, 26 Apr 2023 14:59:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526747.818676; Wed, 26 Apr 2023 14:59:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcS-0000rR-8t; Wed, 26 Apr 2023 14:59:44 +0000
Received: by outflank-mailman (input) for mailman id 526747;
 Wed, 26 Apr 2023 14:59:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vxt2=AR=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1prgcQ-0000bz-OW
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:59:42 +0000
Received: from mail-wr1-x441.google.com (mail-wr1-x441.google.com
 [2a00:1450:4864:20::441])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f8656139-e442-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 16:59:41 +0200 (CEST)
Received: by mail-wr1-x441.google.com with SMTP id
 ffacd0b85a97d-2f7a7f9667bso4529552f8f.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 07:59:41 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 v17-20020a5d43d1000000b003047ea78b42sm6654116wrr.43.2023.04.26.07.59.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Apr 2023 07:59:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8656139-e442-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682521179; x=1685113179;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=GPXQUFmJANWtRSEJizBlmBuJVzkoyDDxbZ36Iy8xMlY=;
        b=cQpl9qbpxf7rta6xIMY73ap19qqUEtWRCoT83Xl2dd7h75ijI6RiGAuRvhdMYY2MDA
         iAX0Ha6+IaSlue/rS2jtE9TQhFEWTNMDKFVgRSY3qIqQqLJcXiASx1ru0stUXI6zeXlx
         8JPSwb4AyTkg0WkXjqfdeOSiSTLL99IOZZfds=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682521179; x=1685113179;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=GPXQUFmJANWtRSEJizBlmBuJVzkoyDDxbZ36Iy8xMlY=;
        b=PuqSMXoqrF3h10OQOGo3VuzwbBXAnT9KHFVeQQZjXwtoxsipEYPl+abgJTwdqRWtpu
         ECNFUn2IHD6nGGbuxAVfgG6kQXJs4B1t+5ZFO302wUmNgGBdlLXDxFvmXJe0QwtIXUbI
         KpeUkVW1pk3rX2uAtLI/1tyhRoyHJHRbD1XWB2QrRQyGZMFtLqIDNesNPPXx4OGv6F3D
         hlD3ddVAbF7AFOIxVdeBr6uM5n+chd3GWl1ikYgVj2nkPhYsAX3yfoxQ0SE0JBMRKhbS
         p1SkRMt6Xuj/U323LABOmzKfXS1QkGmX6z9OhoBIvckXKcfV+Zcdti+MeRtaz9dmji7a
         bVSA==
X-Gm-Message-State: AAQBX9coDEySX5xvNlkONLWaf4itEOKBJz44uLbmQn2KoM9uevmqrYBy
	5VtXl+xeYcvOnT1Zn31PncuWL+QOChImMBpSuyOAPQ==
X-Google-Smtp-Source: AKy350Yamqi6Z9wNCPW6Nlmb5C/dSHeaLKEaMUxtujLbZj+v6bod9X89zyCRsNhe5eHuw1fRY1Ucfw==
X-Received: by 2002:adf:f3d0:0:b0:2ee:fa23:7904 with SMTP id g16-20020adff3d0000000b002eefa237904mr13332616wrp.70.1682521179434;
        Wed, 26 Apr 2023 07:59:39 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 1/7] tools: Make some callers of xc_domain_getinfo use xc_domain_getinfolist
Date: Wed, 26 Apr 2023 15:59:26 +0100
Message-Id: <20230426145932.3340-2-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xc_domain_getinfo() is slow and prone to races because N hypercalls are
needed to find information about N domains. xc_domain_getinfolist() finds
the same information in a single hypercall as long as a big enough buffer
is provided. Plus, xc_domain_getinfo() is disappearing in a future patch,
so migrate the callers interested in more than one domain to the *list()
version.
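The flag decoding this patch switches the Python bindings to (in place of the old pre-chewed booleans) can be sketched as follows. The constant values below are assumptions mirroring my reading of Xen's public domctl.h/sched.h headers; verify them against the real ABI before relying on them:

```python
# Assumed values mirroring XEN_DOMINF_* / SHUTDOWN_* from Xen's public
# headers (domctl.h, sched.h); treat as illustrative, not authoritative.
XEN_DOMINF_shutdown      = 1 << 2
XEN_DOMINF_shutdownshift = 16
XEN_DOMINF_shutdownmask  = 255
SHUTDOWN_crash           = 3

def dominfo_shutdown_reason(flags):
    # Same extraction as the dominfo_shutdown_reason() helper the patch
    # adds to xenctrl.h
    return (flags >> XEN_DOMINF_shutdownshift) & XEN_DOMINF_shutdownmask

def is_crashed(flags):
    # "crashed" is no longer a dedicated bit: it means "shut down, and
    # the recorded shutdown reason is SHUTDOWN_crash"
    return bool(flags & XEN_DOMINF_shutdown) and \
        dominfo_shutdown_reason(flags) == SHUTDOWN_crash

flags = XEN_DOMINF_shutdown | (SHUTDOWN_crash << XEN_DOMINF_shutdownshift)
```

This is why the patch computes "crashed" from two conditions where xc_dominfo_t previously exposed a single field.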

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/include/xenctrl.h           |  5 +++++
 tools/python/xen/lowlevel/xc/xc.c | 29 +++++++++++++++--------------
 tools/xenmon/xenbaked.c           |  6 +++---
 3 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 05967ecc92..90b33aa3a7 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -468,6 +468,11 @@ typedef struct xc_dominfo {
 
 typedef xen_domctl_getdomaininfo_t xc_domaininfo_t;
 
+static inline unsigned int dominfo_shutdown_reason(const xc_domaininfo_t *info)
+{
+    return (info->flags >> XEN_DOMINF_shutdownshift) & XEN_DOMINF_shutdownmask;
+}
+
 typedef union 
 {
 #if defined(__i386__) || defined(__x86_64__)
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 35901c2d63..38212e8091 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -342,7 +342,7 @@ static PyObject *pyxc_domain_getinfo(XcObject *self,
     uint32_t first_dom = 0;
     int max_doms = 1024, nr_doms, i;
     size_t j;
-    xc_dominfo_t *info;
+    xc_domaininfo_t *info;
 
     static char *kwd_list[] = { "first_dom", "max_doms", NULL };
 
@@ -350,11 +350,11 @@ static PyObject *pyxc_domain_getinfo(XcObject *self,
                                       &first_dom, &max_doms) )
         return NULL;
 
-    info = calloc(max_doms, sizeof(xc_dominfo_t));
+    info = calloc(max_doms, sizeof(*info));
     if (info == NULL)
         return PyErr_NoMemory();
 
-    nr_doms = xc_domain_getinfo(self->xc_handle, first_dom, max_doms, info);
+    nr_doms = xc_domain_getinfolist(self->xc_handle, first_dom, max_doms, info);
 
     if (nr_doms < 0)
     {
@@ -368,21 +368,22 @@ static PyObject *pyxc_domain_getinfo(XcObject *self,
         info_dict = Py_BuildValue(
             "{s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i"
             ",s:L,s:L,s:L,s:i,s:i,s:i}",
-            "domid",           (int)info[i].domid,
+            "domid",           (int)info[i].domain,
             "online_vcpus",    info[i].nr_online_vcpus,
             "max_vcpu_id",     info[i].max_vcpu_id,
-            "hvm",             info[i].hvm,
-            "dying",           info[i].dying,
-            "crashed",         info[i].crashed,
-            "shutdown",        info[i].shutdown,
-            "paused",          info[i].paused,
-            "blocked",         info[i].blocked,
-            "running",         info[i].running,
-            "mem_kb",          (long long)info[i].nr_pages*(XC_PAGE_SIZE/1024),
+            "hvm",             !!(info[i].flags & XEN_DOMINF_hvm_guest),
+            "dying",           !!(info[i].flags & XEN_DOMINF_dying),
+            "crashed",         (info[i].flags & XEN_DOMINF_shutdown) &&
+                                 (dominfo_shutdown_reason(&info[i]) == SHUTDOWN_crash),
+            "shutdown",        !!(info[i].flags & XEN_DOMINF_shutdown),
+            "paused",          !!(info[i].flags & XEN_DOMINF_paused),
+            "blocked",         !!(info[i].flags & XEN_DOMINF_blocked),
+            "running",         !!(info[i].flags & XEN_DOMINF_running),
+            "mem_kb",          (long long)info[i].tot_pages*(XC_PAGE_SIZE/1024),
             "cpu_time",        (long long)info[i].cpu_time,
-            "maxmem_kb",       (long long)info[i].max_memkb,
+            "maxmem_kb",       (long long)(info[i].max_pages << (XC_PAGE_SHIFT - 10)),
             "ssidref",         (int)info[i].ssidref,
-            "shutdown_reason", info[i].shutdown_reason,
+            "shutdown_reason", dominfo_shutdown_reason(&info[i]),
             "cpupool",         (int)info[i].cpupool);
         pyhandle = PyList_New(sizeof(xen_domain_handle_t));
         if ( (pyhandle == NULL) || (info_dict == NULL) )
diff --git a/tools/xenmon/xenbaked.c b/tools/xenmon/xenbaked.c
index 4dddbd20e2..8632b10ea4 100644
--- a/tools/xenmon/xenbaked.c
+++ b/tools/xenmon/xenbaked.c
@@ -775,7 +775,7 @@ static void global_init_domain(int domid, int idx)
 static int indexof(int domid)
 {
     int idx;
-    xc_dominfo_t dominfo[NDOMAINS];
+    xc_domaininfo_t dominfo[NDOMAINS];
     xc_interface *xc_handle;
     int ndomains;
   
@@ -797,7 +797,7 @@ static int indexof(int domid)
 
     // call domaininfo hypercall to try and garbage collect unused entries
     xc_handle = xc_interface_open(0,0,0);
-    ndomains = xc_domain_getinfo(xc_handle, 0, NDOMAINS, dominfo);
+    ndomains = xc_domain_getinfolist(xc_handle, 0, NDOMAINS, dominfo);
     xc_interface_close(xc_handle);
 
     // for each domain in our data, look for it in the system dominfo structure
@@ -808,7 +808,7 @@ static int indexof(int domid)
         int jdx;
     
         for (jdx=0; jdx<ndomains; jdx++) {
-            if (dominfo[jdx].domid == domid)
+            if (dominfo[jdx].domain == domid)
                 break;
         }
         if (jdx == ndomains)        // we didn't find domid in the dominfo struct
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 14:59:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 14:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526746.818667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcR-0000cN-5q; Wed, 26 Apr 2023 14:59:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526746.818667; Wed, 26 Apr 2023 14:59:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcR-0000cG-1K; Wed, 26 Apr 2023 14:59:43 +0000
Received: by outflank-mailman (input) for mailman id 526746;
 Wed, 26 Apr 2023 14:59:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vxt2=AR=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1prgcP-0000bz-8C
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:59:41 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f75b1724-e442-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 16:59:38 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-3f09b4a156eso48771165e9.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 07:59:38 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 v17-20020a5d43d1000000b003047ea78b42sm6654116wrr.43.2023.04.26.07.59.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Apr 2023 07:59:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f75b1724-e442-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682521177; x=1685113177;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=mfe9T+Yu+Be8zdjjmeeZpDjZq2wXyt3ymKJgvGq+f2s=;
        b=Zsa40GFw7quWUyzlNKE2/WO9yo3B4xyo7S5MMKiQm0DhEgvT7cWWncoEXg70x/nltN
         PAsoVkBd5CyDtGmzh3/2429VIQEty6yphlQxmnO/PC4h4BCcmfYRYQYuRGE57FuIl/Pz
         n8I/v6uKQj144wnlV54TevTsWGi8lS91DKxm4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682521177; x=1685113177;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=mfe9T+Yu+Be8zdjjmeeZpDjZq2wXyt3ymKJgvGq+f2s=;
        b=EQ5GdgFah1rRepSKpXpbsXwCezNi7BXmq1Lm0pYulk9zR17RT2i9BS8BX2EQfrhrSm
         l/LRw1hyWF6UA2JLyRbc9hHnzArcjfduDqhcl3r/2ICe/i9VV9Glpr/TH246Y83saoHo
         15D7xrRb/IRq6arafRHvzPwHWNm+CkrTgVoR6+BtI0m2pj+DkdaiE8kx1HAM9dwQJHFh
         u3CJ56VBoiqhKK1xdiD93JWJ+YoEBRD+IQDKDjKaZsHiw2R5VrLzANEqJ/ga5JuOLi/9
         rmk/UHaSPw45PMhKPfhqM7vbTCzXa4nHY4DWXUMjy6KKuRYht+uheN1C5lkvnR46OK8E
         VnPw==
X-Gm-Message-State: AAQBX9eUJRYfp6ditbaopw9+dBmOUL6HIH7q7lq9h+H92d6Wt2EERdGL
	LdYqNwYSu/Kk0ZQ65rtj00dHZkxnT4SMIyNDKkutxw==
X-Google-Smtp-Source: AKy350axbG54dQ9LxejbXJX0pZMaL1zCVNK1c1BL+Tm2RDrKrOREH2imfxVIz8Z23wCKRzpe2w5jSw==
X-Received: by 2002:a7b:c386:0:b0:3f1:78d0:fc45 with SMTP id s6-20020a7bc386000000b003f178d0fc45mr13170623wmj.28.1682521177562;
        Wed, 26 Apr 2023 07:59:37 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Tim Deegan <tim@xen.org>
Subject: [PATCH 0/7] Rationalize usage of xc_domain_getinfo{,list}()
Date: Wed, 26 Apr 2023 15:59:25 +0100
Message-Id: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xc_domain_getinfo() returns the list of domains with domid >= first_domid.
It does so by repeatedly invoking XEN_DOMCTL_getdomaininfo, which leads to
unintuitive behaviour (asking for domid=1 might succeed, returning domid=2).
Furthermore, N hypercalls are required, whereas the equivalent functionality
can be achieved with a single XEN_SYSCTL_getdomaininfolist invocation.

Ideally, we want a DOMCTL interface that operates over a single precisely
specified domain and a SYSCTL interface that can be used for bulk queries.

All callers of xc_domain_getinfo() that are better off using SYSCTL are
migrated to use that instead. That includes callers performing domain
discovery and those requesting info for more than 1 domain per hypercall.

A new xc_domain_getinfo_single() is introduced, with stricter semantics than
xc_domain_getinfo() (it fails if the requested domid isn't found), and the
remaining callers are migrated to it.

With no callers left, the xc_dominfo_t structure and the xc_domain_getinfo()
call itself can be cleanly removed, and the DOMCTL interface simplified to
use only its fast path.

With the DOMCTL amended, the new xc_domain_getinfo_single() drops its
stricter check, becoming a simple wrapper around the hypercall.

Alejandro Vallejo (7):
  tools: Make some callers of xc_domain_getinfo use
    xc_domain_getinfolist
  tools: Create xc_domain_getinfo_single()
  tools: Refactor the console/io.c to avoid using xc_domain_getinfo()
  tools: Make init-xenstore-domain use xc_domain_getinfolist()
  tools: Modify single-domid callers of xc_domain_getinfolist
  tools: Use new xc function for some xc_domain_getinfo calls
  domctl: Modify getdomaininfo to fail if domid is not found

 tools/console/client/main.c             |  7 +--
 tools/console/daemon/io.c               | 31 +++++-----
 tools/debugger/kdd/kdd-xen.c            |  6 +-
 tools/helpers/init-xenstore-domain.c    | 14 +++--
 tools/include/xenctrl.h                 | 63 ++++++++------------
 tools/libs/ctrl/xc_domain.c             | 79 +++++--------------------
 tools/libs/ctrl/xc_pagetab.c            |  7 +--
 tools/libs/ctrl/xc_private.c            |  7 +--
 tools/libs/ctrl/xc_private.h            |  6 +-
 tools/libs/guest/xg_core.c              | 21 +++----
 tools/libs/guest/xg_core.h              |  6 +-
 tools/libs/guest/xg_core_arm.c          | 10 ++--
 tools/libs/guest/xg_core_x86.c          | 18 +++---
 tools/libs/guest/xg_cpuid_x86.c         | 28 +++++----
 tools/libs/guest/xg_dom_boot.c          | 12 +---
 tools/libs/guest/xg_domain.c            |  6 +-
 tools/libs/guest/xg_offline_page.c      | 10 ++--
 tools/libs/guest/xg_private.h           |  1 +
 tools/libs/guest/xg_resume.c            | 17 +++---
 tools/libs/guest/xg_sr_common.h         |  2 +-
 tools/libs/guest/xg_sr_restore.c        | 14 ++---
 tools/libs/guest/xg_sr_restore_x86_pv.c |  2 +-
 tools/libs/guest/xg_sr_save.c           | 26 ++++----
 tools/libs/guest/xg_sr_save_x86_pv.c    |  6 +-
 tools/libs/light/libxl_dom.c            | 15 ++---
 tools/libs/light/libxl_dom_suspend.c    |  7 +--
 tools/libs/light/libxl_domain.c         | 12 ++--
 tools/libs/light/libxl_mem.c            |  4 +-
 tools/libs/light/libxl_sched.c          | 28 ++++-----
 tools/libs/light/libxl_x86_acpi.c       |  4 +-
 tools/misc/xen-hvmcrash.c               |  6 +-
 tools/misc/xen-lowmemd.c                |  6 +-
 tools/misc/xen-mfndump.c                | 22 +++----
 tools/misc/xen-vmtrace.c                |  6 +-
 tools/python/xen/lowlevel/xc/xc.c       | 29 ++++-----
 tools/vchan/vchan-socket-proxy.c        |  6 +-
 tools/xenmon/xenbaked.c                 |  6 +-
 tools/xenpaging/xenpaging.c             | 14 ++---
 tools/xenstore/xenstored_domain.c       | 15 +++--
 tools/xentrace/xenctx.c                 |  8 +--
 xen/common/domctl.c                     | 32 +---------
 41 files changed, 245 insertions(+), 374 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 14:59:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 14:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526748.818682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcS-0000uf-NV; Wed, 26 Apr 2023 14:59:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526748.818682; Wed, 26 Apr 2023 14:59:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcS-0000uF-Iq; Wed, 26 Apr 2023 14:59:44 +0000
Received: by outflank-mailman (input) for mailman id 526748;
 Wed, 26 Apr 2023 14:59:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vxt2=AR=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1prgcR-0000cE-4i
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:59:43 +0000
Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com
 [2a00:1450:4864:20::42b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f96807f5-e442-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 16:59:42 +0200 (CEST)
Received: by mail-wr1-x42b.google.com with SMTP id
 ffacd0b85a97d-2f7db354092so4529844f8f.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 07:59:42 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 v17-20020a5d43d1000000b003047ea78b42sm6654116wrr.43.2023.04.26.07.59.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Apr 2023 07:59:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f96807f5-e442-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682521181; x=1685113181;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=mmERS2liqEHDrbW1Ycwm/uU3R2wj9QakSk+FoH5C6bM=;
        b=ilPmWn7nj10VuZmEUluxGvctuWIjxS03O93jjXKqnhTefPbgbOzlLw78W3PZFPE4my
         c8bBGm6m1WFL0Y1ObwSX+3QNp5ORWDjKO22n461KjhoPhB61kwL6n2NEhe7JxVuh275P
         qqyYVXkgzuGPU7NqeyXr0wcExx11XL2JspYSw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682521181; x=1685113181;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=mmERS2liqEHDrbW1Ycwm/uU3R2wj9QakSk+FoH5C6bM=;
        b=XXln4e7H2B8ZCA9jqxHJdrSAO7LaRmEiOhJtCmS8hxWfygHvraJQFTjlU+6gBYcvWZ
         Nc/doUfHJRL8sigqyukHIdYu2ym+2zYK7F4T1cvmR2YBYGlPPNWIvT5xI2215b2G3jRj
         24loZkK8L5jlmfCV3M5Qc4hr/BIZxnXLigEjNXNslAfvDtSN50vyE5CsOTcbmtN68GE+
         ZB3GIwEowdsdP4+8/Ky3lCPCECPEc518YJZRF9/X0lF4NrJeDi+QX8XyDtjOe7vhc11C
         ZNYP83GLIWTWUKR2DhwGBQNwptY7E9m9FyFOT6H70sNSGwXT1nrAup7loUq80d8AIEg5
         AI5w==
X-Gm-Message-State: AAQBX9f1llVT3tvrpCHe+QAnNm5Nx7oBOn3kgryw3D9j+eqA/EtI+RCa
	xen3R829Wmh8gygnsXo9LxYddP4L3FCXs1w7cHk=
X-Google-Smtp-Source: AKy350Zh87+7rMSV0zQ/4l32i4gMVMhSQBMrhTpfpjG0umZMEvTPcYok0gQh6j/HB8qcGkoE4rXcFA==
X-Received: by 2002:a5d:4e01:0:b0:304:6715:8728 with SMTP id p1-20020a5d4e01000000b0030467158728mr10762186wrt.18.1682521181193;
        Wed, 26 Apr 2023 07:59:41 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 2/7] tools: Create xc_domain_getinfo_single()
Date: Wed, 26 Apr 2023 15:59:27 +0100
Message-Id: <20230426145932.3340-3-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It's a stricter version of xc_domain_getinfo(): the returned domid always
matches the requested domid, or an error code is returned instead. A few
patches ahead, the remaining usages of xc_domain_getinfo() are removed until
only xc_domain_getinfo_single() and xc_domain_getinfolist() remain.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/include/xenctrl.h     | 16 ++++++++++++++++
 tools/libs/ctrl/xc_domain.c | 22 ++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 90b33aa3a7..73b07955c6 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -696,6 +696,22 @@ int xc_vcpu_getaffinity(xc_interface *xch,
 int xc_domain_get_guest_width(xc_interface *xch, uint32_t domid,
                               unsigned int *guest_width);
 
+/**
+ * This function will return information about a single domain. It looks
+ * up the domain by the provided domid and succeeds if the domain exists
+ * and is accessible by the current domain, or fails otherwise. A buffer
+ * may optionally be passed in the `info` parameter in order to retrieve
+ * information about the domain. The buffer is ignored if NULL is
+ * passed instead.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domid to lookup
+ * @parm info Optional domain information buffer (may be NULL)
+ * @return 0 on success, otherwise the call failed and info is undefined
+ */
+int xc_domain_getinfo_single(xc_interface *xch,
+                             uint32_t domid,
+                             xc_domaininfo_t *info);
 
 /**
  * This function will return information about one or more domains. It is
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index e939d07157..3ff91023bf 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -345,6 +345,28 @@ int xc_dom_vuart_init(xc_interface *xch,
     return rc;
 }
 
+int xc_domain_getinfo_single(xc_interface *xch,
+                             uint32_t domid,
+                             xc_domaininfo_t *info)
+{
+    struct xen_domctl domctl = {
+        .cmd = XEN_DOMCTL_getdomaininfo,
+        .domain = domid,
+    };
+
+    int rc = do_domctl(xch, &domctl);
+    if (rc < 0)
+        return rc;
+
+    if (domctl.u.getdomaininfo.domain != domid)
+        return -ESRCH;
+
+    if (info)
+        *info = domctl.u.getdomaininfo;
+
+    return rc;
+}
+
 int xc_domain_getinfo(xc_interface *xch,
                       uint32_t first_domid,
                       unsigned int max_doms,
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 14:59:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 14:59:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526749.818696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcU-0001Nd-3M; Wed, 26 Apr 2023 14:59:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526749.818696; Wed, 26 Apr 2023 14:59:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcU-0001NU-0J; Wed, 26 Apr 2023 14:59:46 +0000
Received: by outflank-mailman (input) for mailman id 526749;
 Wed, 26 Apr 2023 14:59:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vxt2=AR=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1prgcS-0000cE-KT
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:59:44 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fa67221c-e442-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 16:59:44 +0200 (CEST)
Received: by mail-wr1-x430.google.com with SMTP id
 ffacd0b85a97d-2f95231618aso4622014f8f.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 07:59:44 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 v17-20020a5d43d1000000b003047ea78b42sm6654116wrr.43.2023.04.26.07.59.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Apr 2023 07:59:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa67221c-e442-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682521183; x=1685113183;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=X/n8iDxY+gFzED/faB3xf5M0UlbwG5Zihhb3i4U/Bo0=;
        b=SL13iuQjTgVU85bnRoAsthjqKmb/1hm7tIYLgNQKtfXUyHNkx7wP6IrbLJpQedK0GJ
         Gh7/nCU/aYrUbDrsVfHLgRuBKx1syn2buHTJS60mKrRbPKZn/mPryg3+xCbpBvXkqPc+
         nEqHFC3wqzIfJJBVgdv2sjTOAb3EsSl51OCKs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682521183; x=1685113183;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=X/n8iDxY+gFzED/faB3xf5M0UlbwG5Zihhb3i4U/Bo0=;
        b=Jj1Tp6nKXQnZq4F1QNLlh4ODwZChQ8DLlspz0rQ0NUILM9q1VYs/Rh7Jt8LOQB1Py2
         kzGtNgw4u6RAVqiFV9ePpga0HhZC0QB9X8yvoRuKUNqHxUhX/+mHhW7irkf2Qw3AP2ns
         c/CKIM4kuD2o68yjn6U07PEyVV3iAegAg+wGUCDGjubi9iU9Uv1Z0E4XFUpExsS/8A74
         WgQbCjbJ4kS1NysWotb/y8kQiN7K3Gd8vz1Ws6vBFQ+IP7ZqUF8i5wYyk5meHRzbN/D7
         RFquYsh+9r9wKxpZ3AtNAbWMb3WA7qG9FaFJFN70zzZaJBzx73hkBXHnqodfFfO6V2oo
         Fn6g==
X-Gm-Message-State: AAQBX9cbItLD2EJCeDZ3JZzev7uYZwzemtdka0xe+DwGKUmq/4r9dD/F
	KYOFDpXAHHHE7L70vPCU7voCp7HIrZU9Jw5p6D8=
X-Google-Smtp-Source: AKy350b7muSWZSZZXgN3C3/C/ZlWpkhbbgWHWwL+RxuioWyLwpejK9lvwcfQqqvtbhaD/R+F0vRCyQ==
X-Received: by 2002:adf:dece:0:b0:2d8:47c7:7b52 with SMTP id i14-20020adfdece000000b002d847c77b52mr14977960wrn.9.1682521182906;
        Wed, 26 Apr 2023 07:59:42 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 3/7] tools: Refactor the console/io.c to avoid using xc_domain_getinfo()
Date: Wed, 26 Apr 2023 15:59:28 +0100
Message-Id: <20230426145932.3340-4-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It has 2 avoidable occurrences:

* Checking whether a domain is valid, which can be done faster with
    xc_domain_getinfo_single().
* Domain discovery, which can be done much faster with the sysctl
    interface through xc_domain_getinfolist().

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/console/daemon/io.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/tools/console/daemon/io.c b/tools/console/daemon/io.c
index 6bfe96715b..1fc56f6643 100644
--- a/tools/console/daemon/io.c
+++ b/tools/console/daemon/io.c
@@ -405,13 +405,7 @@ static void buffer_advance(struct buffer *buffer, size_t len)
 
 static bool domain_is_valid(int domid)
 {
-	bool ret;
-	xc_dominfo_t info;
-
-	ret = (xc_domain_getinfo(xc, domid, 1, &info) == 1 &&
-	       info.domid == domid);
-		
-	return ret;
+	return xc_domain_getinfo_single(xc, domid, NULL) == 0;
 }
 
 static int create_hv_log(void)
@@ -959,26 +953,35 @@ static void shutdown_domain(struct domain *d)
 
 static unsigned enum_pass = 0;
 
+/**
+ * Memory set aside to query the state of every
+ * domain in the hypervisor in a single hypercall.
+ */
+static xc_domaininfo_t domaininfo[DOMID_FIRST_RESERVED - 1];
+
 static void enum_domains(void)
 {
-	int domid = 1;
-	xc_dominfo_t dominfo;
+	int ret;
 	struct domain *dom;
 
 	enum_pass++;
 
-	while (xc_domain_getinfo(xc, domid, 1, &dominfo) == 1) {
-		dom = lookup_domain(dominfo.domid);
-		if (dominfo.dying) {
+	/* Fetch info on every valid domain except for dom0 */
+	ret = xc_domain_getinfolist(xc, 1, DOMID_FIRST_RESERVED - 1, domaininfo);
+	if (ret < 0)
+		return;
+
+	for (size_t i = 0; i < ret; i++) {
+		dom = lookup_domain(domaininfo[i].domain);
+		if (domaininfo[i].flags & XEN_DOMINF_dying) {
 			if (dom)
 				shutdown_domain(dom);
 		} else {
 			if (dom == NULL)
-				dom = create_domain(dominfo.domid);
+				dom = create_domain(domaininfo[i].domain);
 		}
 		if (dom)
 			dom->last_seen = enum_pass;
-		domid = dominfo.domid + 1;
 	}
 }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 14:59:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 14:59:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526750.818707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcX-0001hR-CS; Wed, 26 Apr 2023 14:59:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526750.818707; Wed, 26 Apr 2023 14:59:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcX-0001hI-8w; Wed, 26 Apr 2023 14:59:49 +0000
Received: by outflank-mailman (input) for mailman id 526750;
 Wed, 26 Apr 2023 14:59:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vxt2=AR=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1prgcV-0000bz-LF
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:59:47 +0000
Received: from mail-wm1-x344.google.com (mail-wm1-x344.google.com
 [2a00:1450:4864:20::344])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fb69d864-e442-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 16:59:46 +0200 (CEST)
Received: by mail-wm1-x344.google.com with SMTP id
 5b1f17b1804b1-3f195b164c4so38226505e9.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 07:59:46 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 v17-20020a5d43d1000000b003047ea78b42sm6654116wrr.43.2023.04.26.07.59.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Apr 2023 07:59:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb69d864-e442-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682521184; x=1685113184;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7oGcNamzq83RSDTcW0FBMjk99Wl1dp80foK2pDBAN98=;
        b=Hl26xmLqoalAyuUskJWFwA3NyNZTQ4t0E3QRIP/EjrPvqlwP0GA+lZ5HPpxo22DWOk
         pHhSuN+K9ghqHpKB7yXMaHhl+i220M/gAiv/M5QWyxLBr39B8H0iRdk2SPvXTNyrVhbF
         T4niznkfbKGC1wFypzTlkqYkA76mF1hKmEGXU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682521184; x=1685113184;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=7oGcNamzq83RSDTcW0FBMjk99Wl1dp80foK2pDBAN98=;
        b=SAJFJlpOw58rTu0mo07Lyxx/Dg+nV1xLxEE6nDSDCQf1aBZPBH+yqVYMkzGxnCUEWg
         GMbg2NsxUCFWUsP1/Aj3vTLzsFB8qaumgweXuLFURzy/b3+e1Y3nSxiaCfe7pUW+oErA
         h4zzS0/yUL8twBsHtfc4nII4joF3eEzzf7LRb70plJ7PNpYtknEGhEseyF61O21lk7Yp
         hjtKJB7Vdid9E9jO3krW6UA94UxTeLoj+s1fWPjHYk2KbSdDDuknCBjQIMYonW8WzxKC
         7YwfiNvNaRxXh+b2VQ1CI3tbU6T2gmA9t5yWScqtjarvv423Gyq5EB2GO7Q8wIyPctUu
         NJXA==
X-Gm-Message-State: AAQBX9ft1HIOXr5VFS5MLU8145giN5ET3cx/FUlr+g33YT8FCYKR3Xya
	9WBfrSOuTt4PgjXMHJ6yFEAIvRJnuS7UYXIo65rpwQ==
X-Google-Smtp-Source: AKy350bcd+UvEAzKLO6obU/iO59tbgHrProju74w+psAGKtSZ6hWKlorVHdNsC37N2u/GIcK0atBMw==
X-Received: by 2002:a7b:ce04:0:b0:3f0:f3ee:9e2a with SMTP id m4-20020a7bce04000000b003f0f3ee9e2amr13094550wmc.35.1682521184572;
        Wed, 26 Apr 2023 07:59:44 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.org>
Subject: [PATCH 4/7] tools: Make init-xenstore-domain use xc_domain_getinfolist()
Date: Wed, 26 Apr 2023 15:59:29 +0100
Message-Id: <20230426145932.3340-5-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It currently relies on xc_domain_getinfo() returning the next available
domain past "first_domid", which is a feature that will disappear in a
future patch.

Furthermore, while at it, make the hypercall fetch information about more
than one domain per invocation, so we can (hopefully) get away with a
single hypercall on a typical system.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Juergen Gross <jgross@suse.org>
---
 tools/helpers/init-xenstore-domain.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 0950ba7dc5..5f40901d31 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -21,6 +21,7 @@
 #define LAPIC_BASE_ADDRESS  0xfee00000UL
 #define MB(x)               ((uint64_t)x << 20)
 #define GB(x)               ((uint64_t)x << 30)
+#define ARRAY_SIZE(x)       (sizeof(x) / sizeof((x)[0]))
 
 static uint32_t domid = ~0;
 static char *kernel;
@@ -322,16 +323,19 @@ err:
 
 static int check_domain(xc_interface *xch)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info[8];
     uint32_t dom;
     int ret;
 
     dom = 1;
-    while ( (ret = xc_domain_getinfo(xch, dom, 1, &info)) == 1 )
+    while ( (ret = xc_domain_getinfolist(xch, dom, ARRAY_SIZE(info), info)) > 0 )
     {
-        if ( info.xenstore )
-            return 1;
-        dom = info.domid + 1;
+        for ( size_t i = 0; i < ret; i++ )
+        {
+            if ( info[i].flags & XEN_DOMINF_xs_domain )
+                return 1;
+        }
+        dom = info[ret - 1].domain + 1;
     }
     if ( ret < 0 && errno != ESRCH )
     {
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 14:59:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 14:59:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526751.818713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcX-0001lI-PE; Wed, 26 Apr 2023 14:59:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526751.818713; Wed, 26 Apr 2023 14:59:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcX-0001kH-H2; Wed, 26 Apr 2023 14:59:49 +0000
Received: by outflank-mailman (input) for mailman id 526751;
 Wed, 26 Apr 2023 14:59:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vxt2=AR=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1prgcW-0000cE-0S
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:59:48 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fc6d5dcd-e442-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 16:59:47 +0200 (CEST)
Received: by mail-wr1-x430.google.com with SMTP id
 ffacd0b85a97d-2f4214b430aso4537615f8f.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 07:59:47 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 v17-20020a5d43d1000000b003047ea78b42sm6654116wrr.43.2023.04.26.07.59.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Apr 2023 07:59:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc6d5dcd-e442-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682521186; x=1685113186;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ZFE69b1k1YkDEEzbAUbLCBpAHxB5+1bvQjqKSxJf1mg=;
        b=A6dfKjRU2kBlkF3k6zFQ5CGUQFNd8y5QcBbh4E1VQLKqfFHCtaP8ChsWRfwc00a3T7
         RnIrdBc1q78cIMLoMz1XNyvSpwriRrAPcEdHIzEqYBoJtfYC/sAkNP8dKgn+epyUPc/X
         1ezy/xnu+4O+S5R9mGElTM3WZ9moeyIBzRJVI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682521186; x=1685113186;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ZFE69b1k1YkDEEzbAUbLCBpAHxB5+1bvQjqKSxJf1mg=;
        b=UE3GT9tHdIyuyCo7n+p0Jl5co81X3HFIrrY5SKQSb+FxzPuw0Gp4KhZzaGW+nbUCt3
         9IkQOHc4RZdGka6nmF4CXdMEg+OVbZYhMVfkqpcAINv4fQeC/5svf7A62BdrSFkjcICI
         Pvg8gUnWPid8IiUSZeiZaxGHZJ7/ZO+0RlPFXgiDsypbMVdxt6ydOzMxk102so/UmRHg
         029/rLaHJ8OCzbf0IrsdDCJIQQcZkWPzZ6iDElcdOu5m++5Jb20lCJHqHZ782BaX3AOm
         GMUZBLlBlpB01s3pefTDv3tyKFgo6vov2qO9/ACqYnRQORCg1hOIwNSKvWczKODbqxdD
         UnGw==
X-Gm-Message-State: AAQBX9dMwZGSEoQLn/Mn9IJTtjYzXQkV8q2GiWsCWKPe7yukd/5SXce7
	mwB95lnZoVK76AvU9Ry+1lF61ZmSfBYg3So7lMs=
X-Google-Smtp-Source: AKy350b0GzCrgIs6zTHtQ1zyTrOSU4/QlvDF42ZbSeTg4K3j2ayVq3Lyc4fsir19OKXPyfvR0KyeYg==
X-Received: by 2002:a5d:4f81:0:b0:2fe:c0ea:18b4 with SMTP id d1-20020a5d4f81000000b002fec0ea18b4mr14994538wru.24.1682521186330;
        Wed, 26 Apr 2023 07:59:46 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 5/7] tools: Modify single-domid callers of xc_domain_getinfolist
Date: Wed, 26 Apr 2023 15:59:30 +0100
Message-Id: <20230426145932.3340-6-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xc_domain_getinfolist() internally relies on a sysctl that performs
a linear search for the domids. Many of its callers that require
information about a single, precise domid are better off calling
xc_domain_getinfo_single() instead, which uses the getdomaininfo domctl
and ensures the returned domid matches the requested one. The domctl
also finds the domid faster, because it uses hashed lists.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/libs/light/libxl_dom.c         | 15 +++++----------
 tools/libs/light/libxl_dom_suspend.c |  7 +------
 tools/libs/light/libxl_domain.c      | 12 ++++--------
 tools/libs/light/libxl_mem.c         |  4 ++--
 tools/libs/light/libxl_sched.c       | 12 ++++--------
 tools/xenpaging/xenpaging.c          | 14 +++++++-------
 6 files changed, 23 insertions(+), 41 deletions(-)

diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
index 25fb716084..482e04b38c 100644
--- a/tools/libs/light/libxl_dom.c
+++ b/tools/libs/light/libxl_dom.c
@@ -32,8 +32,8 @@ libxl_domain_type libxl__domain_type(libxl__gc *gc, uint32_t domid)
     xc_domaininfo_t info;
     int ret;
 
-    ret = xc_domain_getinfolist(ctx->xch, domid, 1, &info);
-    if (ret != 1 || info.domain != domid) {
+    ret = xc_domain_getinfo_single(ctx->xch, domid, &info);
+    if (ret < 0) {
         LOG(ERROR, "unable to get domain type for domid=%"PRIu32, domid);
         return LIBXL_DOMAIN_TYPE_INVALID;
     }
@@ -70,15 +70,10 @@ int libxl__domain_cpupool(libxl__gc *gc, uint32_t domid)
     xc_domaininfo_t info;
     int ret;
 
-    ret = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
-    if (ret != 1)
+    ret = xc_domain_getinfo_single(CTX->xch, domid, &info);
+    if (ret < 0)
     {
-        LOGE(ERROR, "getinfolist failed %d", ret);
-        return ERROR_FAIL;
-    }
-    if (info.domain != domid)
-    {
-        LOGE(ERROR, "got info for dom%d, wanted dom%d\n", info.domain, domid);
+        LOGE(ERROR, "getinfo_single failed %d", ret);
         return ERROR_FAIL;
     }
     return info.cpupool;
diff --git a/tools/libs/light/libxl_dom_suspend.c b/tools/libs/light/libxl_dom_suspend.c
index 4fa22bb739..6091a5f3f6 100644
--- a/tools/libs/light/libxl_dom_suspend.c
+++ b/tools/libs/light/libxl_dom_suspend.c
@@ -332,13 +332,8 @@ static void suspend_common_wait_guest_check(libxl__egc *egc,
     /* Convenience aliases */
     const uint32_t domid = dsps->domid;
 
-    ret = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
+    ret = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (ret < 0) {
-        LOGED(ERROR, domid, "unable to check for status of guest");
-        goto err;
-    }
-
-    if (!(ret == 1 && info.domain == domid)) {
         LOGED(ERROR, domid, "guest we were suspending has been destroyed");
         goto err;
     }
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 7f0986c185..33ac8e9ce8 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -349,16 +349,12 @@ int libxl_domain_info(libxl_ctx *ctx, libxl_dominfo *info_r,
     int ret;
     GC_INIT(ctx);
 
-    ret = xc_domain_getinfolist(ctx->xch, domid, 1, &xcinfo);
+    ret = xc_domain_getinfo_single(ctx->xch, domid, &xcinfo);
     if (ret<0) {
-        LOGED(ERROR, domid, "Getting domain info list");
+        LOGED(ERROR, domid, "Getting domain info single");
         GC_FREE;
         return ERROR_FAIL;
     }
-    if (ret==0 || xcinfo.domain != domid) {
-        GC_FREE;
-        return ERROR_DOMAIN_NOTFOUND;
-    }
 
     if (info_r)
         libxl__xcinfo2xlinfo(ctx, &xcinfo, info_r);
@@ -1663,8 +1659,8 @@ libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
     xc_vcpuinfo_t vcpuinfo;
     unsigned int nr_vcpus;
 
-    if (xc_domain_getinfolist(ctx->xch, domid, 1, &domaininfo) != 1) {
-        LOGED(ERROR, domid, "Getting infolist");
+    if (xc_domain_getinfo_single(ctx->xch, domid, &domaininfo) < 0) {
+        LOGED(ERROR, domid, "Getting info single");
         GC_FREE;
         return NULL;
     }
diff --git a/tools/libs/light/libxl_mem.c b/tools/libs/light/libxl_mem.c
index 92ec09f4cf..44e554adba 100644
--- a/tools/libs/light/libxl_mem.c
+++ b/tools/libs/light/libxl_mem.c
@@ -323,8 +323,8 @@ retry_transaction:
     libxl__xs_printf(gc, t, GCSPRINTF("%s/memory/target", dompath),
                      "%"PRIu64, new_target_memkb);
 
-    r = xc_domain_getinfolist(ctx->xch, domid, 1, &info);
-    if (r != 1 || info.domain != domid) {
+    r = xc_domain_getinfo_single(ctx->xch, domid, &info);
+    if (r < 0) {
         abort_transaction = 1;
         rc = ERROR_FAIL;
         goto out;
diff --git a/tools/libs/light/libxl_sched.c b/tools/libs/light/libxl_sched.c
index 7c53dc60e6..19da7c49ea 100644
--- a/tools/libs/light/libxl_sched.c
+++ b/tools/libs/light/libxl_sched.c
@@ -219,13 +219,11 @@ static int sched_credit_domain_set(libxl__gc *gc, uint32_t domid,
     xc_domaininfo_t domaininfo;
     int rc;
 
-    rc = xc_domain_getinfolist(CTX->xch, domid, 1, &domaininfo);
+    rc = xc_domain_getinfo_single(CTX->xch, domid, &domaininfo);
     if (rc < 0) {
-        LOGED(ERROR, domid, "Getting domain info list");
+        LOGED(ERROR, domid, "Getting domain info single");
         return ERROR_FAIL;
     }
-    if (rc != 1 || domaininfo.domain != domid)
-        return ERROR_INVAL;
 
     rc = xc_sched_credit_domain_get(CTX->xch, domid, &sdom);
     if (rc != 0) {
@@ -426,13 +424,11 @@ static int sched_credit2_domain_set(libxl__gc *gc, uint32_t domid,
     xc_domaininfo_t info;
     int rc;
 
-    rc = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
+    rc = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (rc < 0) {
-        LOGED(ERROR, domid, "Getting domain info");
+        LOGED(ERROR, domid, "Getting domain info single");
         return ERROR_FAIL;
     }
-    if (rc != 1 || info.domain != domid)
-        return ERROR_INVAL;
 
     rc = xc_sched_credit2_domain_get(CTX->xch, domid, &sdom);
     if (rc != 0) {
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 6e5490315d..023f2bf295 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -169,10 +169,10 @@ static int xenpaging_get_tot_pages(struct xenpaging *paging)
     xc_domaininfo_t domain_info;
     int rc;
 
-    rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1, &domain_info);
-    if ( rc != 1 )
+    rc = xc_domain_getinfo_single(xch, paging->vm_event.domain_id, &domain_info);
+    if ( rc < 0 )
     {
-        PERROR("Error getting domain info");
+        PERROR("Error getting domain info single");
         return -1;
     }
     return domain_info.tot_pages;
@@ -424,11 +424,11 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     /* Get max_pages from guest if not provided via cmdline */
     if ( !paging->max_pages )
     {
-        rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1,
-                                   &domain_info);
-        if ( rc != 1 )
+        rc = xc_domain_getinfo_single(xch, paging->vm_event.domain_id,
+                                      &domain_info);
+        if ( rc < 0 )
         {
-            PERROR("Error getting domain info");
+            PERROR("Error getting domain info single");
             goto err;
         }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 15:00:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 15:00:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526752.818727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcb-0002ID-3n; Wed, 26 Apr 2023 14:59:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526752.818727; Wed, 26 Apr 2023 14:59:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgca-0002Hs-V3; Wed, 26 Apr 2023 14:59:52 +0000
Received: by outflank-mailman (input) for mailman id 526752;
 Wed, 26 Apr 2023 14:59:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vxt2=AR=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1prgcZ-0000cE-V5
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:59:51 +0000
Received: from mail-wr1-x42a.google.com (mail-wr1-x42a.google.com
 [2a00:1450:4864:20::42a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id febcc53d-e442-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 16:59:51 +0200 (CEST)
Received: by mail-wr1-x42a.google.com with SMTP id
 ffacd0b85a97d-2fc3f1d6f8cso4615864f8f.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 07:59:51 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 v17-20020a5d43d1000000b003047ea78b42sm6654116wrr.43.2023.04.26.07.59.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Apr 2023 07:59:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: febcc53d-e442-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682521190; x=1685113190;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=QApVxERY829pwt09SvcXdDVUNpQVziPVty5UuwjPEJE=;
        b=DrJGhdhuFSloJuoPn17ICOmSFfPV6/16qhqMZWpLECee5iMuYdheSztwuwoVSLH7jJ
         iN1Jmm53mlH6y4YCQnzFCQsm52sVtM5aRBa2tHvFyjVJOetCUOgwDzmrXk+CyfKmKkoJ
         hX7dBsZELH1DglArzkBVwcuAph8RYi853cWaU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682521190; x=1685113190;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=QApVxERY829pwt09SvcXdDVUNpQVziPVty5UuwjPEJE=;
        b=X/LbrXMDVVBuBQRau4+5IC7X3GSKH5A2cYpRzCUGzxsRygZSsCvkTftX3ZulWnS7PZ
         hgwqHAggd+qBOyih9cCe8/8FYfx2CHYcG9MAyS7T/AeFjTjLueXA9DynDgVRUPeOB1RD
         hHaA4tEbyN5V3e50wyE80lhhPCHoVc07upYu8HfXyU10sZm8CniMNbHHMl50cp9ReCP8
         KZ3Iq7KW5mp1Q9A+k9SIvv9Jm4FXMTyhr4mAUXKw8B6Nm4eQy5CojngAwUvmcfql8R0s
         vf5qNfJjWRIhHHiU8JuR8eLdzJOpmdJnPlHVFDPYrmDTV1nrUTbrEOCrSN7wCibMczi5
         Si7w==
X-Gm-Message-State: AAQBX9fqUvxe0koiDUBCS04KDPYLPUoyHgKJ2lsNgci7PY9hGQG0Kkq/
	nss/N5TpB+7COXXqNwc6US84ETYq0lTrmE6/gvQ=
X-Google-Smtp-Source: AKy350ZPFEg7BYMyAAA5IAz4reDgIUpVGkf02JvvGJlWdWUw0qzWqfhqrAyzU7Dol4vA9JrsFjlt8Q==
X-Received: by 2002:a5d:58e3:0:b0:2ee:da1c:381a with SMTP id f3-20020a5d58e3000000b002eeda1c381amr14327620wrd.69.1682521189967;
        Wed, 26 Apr 2023 07:59:49 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 7/7] domctl: Modify getdomaininfo to fail if domid is not found
Date: Wed, 26 Apr 2023 15:59:32 +0100
Message-Id: <20230426145932.3340-8-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It previously mimicked the getdomaininfo sysctl semantics by returning
information for the first existing domid greater than or equal to the
requested one. This unintuitive behaviour causes quite a few mistakes
and makes the call needlessly slow in its error path.

This patch removes the fallback search, returning -ESRCH if the requested
domain doesn't exist. Domain discovery can still be done through the sysctl
interface as that performs a linear search on the list of domains.

With this modification the xc_domain_getinfo() function is removed
outright, to make sure it's not mistakenly used expecting the old
behaviour. The new xc wrapper is xc_domain_getinfo_single().

All previous callers of xc_domain_getinfo() have been updated to use
xc_domain_getinfo_single() or xc_domain_getinfolist() instead. This also
means xc_dominfo_t is no longer used by anything and can be purged.

Resolves: xen-project/xen#105
Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/include/xenctrl.h     | 43 -----------------------
 tools/libs/ctrl/xc_domain.c | 70 -------------------------------------
 xen/common/domctl.c         | 32 ++---------------
 3 files changed, 2 insertions(+), 143 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 755759f0fe..7ed22eff0d 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -444,28 +444,6 @@ typedef struct xc_core_header {
  * DOMAIN MANAGEMENT FUNCTIONS
  */
 
-typedef struct xc_dominfo {
-    uint32_t      domid;
-    uint32_t      ssidref;
-    unsigned int  dying:1, crashed:1, shutdown:1,
-                  paused:1, blocked:1, running:1,
-                  hvm:1, debugged:1, xenstore:1, hap:1;
-    unsigned int  shutdown_reason; /* only meaningful if shutdown==1 */
-    unsigned long nr_pages; /* current number, not maximum */
-    unsigned long nr_outstanding_pages;
-    unsigned long nr_shared_pages;
-    unsigned long nr_paged_pages;
-    unsigned long shared_info_frame;
-    uint64_t      cpu_time;
-    unsigned long max_memkb;
-    unsigned int  nr_online_vcpus;
-    unsigned int  max_vcpu_id;
-    xen_domain_handle_t handle;
-    unsigned int  cpupool;
-    uint8_t       gpaddr_bits;
-    struct xen_arch_domainconfig arch_config;
-} xc_dominfo_t;
-
 typedef xen_domctl_getdomaininfo_t xc_domaininfo_t;
 
 static inline unsigned int dominfo_shutdown_reason(const xc_domaininfo_t *info)
@@ -720,27 +698,6 @@ int xc_domain_getinfo_single(xc_interface *xch,
                              uint32_t domid,
                              xc_domaininfo_t *info);
 
-/**
- * This function will return information about one or more domains. It is
- * designed to iterate over the list of domains. If a single domain is
- * requested, this function will return the next domain in the list - if
- * one exists. It is, therefore, important in this case to make sure the
- * domain requested was the one returned.
- *
- * @parm xch a handle to an open hypervisor interface
- * @parm first_domid the first domain to enumerate information from.  Domains
- *                   are currently enumerate in order of creation.
- * @parm max_doms the number of elements in info
- * @parm info an array of max_doms size that will contain the information for
- *            the enumerated domains.
- * @return the number of domains enumerated or -1 on error
- */
-int xc_domain_getinfo(xc_interface *xch,
-                      uint32_t first_domid,
-                      unsigned int max_doms,
-                      xc_dominfo_t *info);
-
-
 /**
  * This function will set the execution context for the specified vcpu.
  *
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index 5c37dbe200..581e7529e5 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -358,82 +358,12 @@ int xc_domain_getinfo_single(xc_interface *xch,
     if (rc < 0)
         return rc;
 
-    if (domctl.u.getdomaininfo.domain != domid)
-        return -ESRCH;
-
     if (info)
         *info = domctl.u.getdomaininfo;
 
     return rc;
 }
 
-int xc_domain_getinfo(xc_interface *xch,
-                      uint32_t first_domid,
-                      unsigned int max_doms,
-                      xc_dominfo_t *info)
-{
-    unsigned int nr_doms;
-    uint32_t next_domid = first_domid;
-    DECLARE_DOMCTL;
-    int rc = 0;
-
-    memset(info, 0, max_doms*sizeof(xc_dominfo_t));
-
-    for ( nr_doms = 0; nr_doms < max_doms; nr_doms++ )
-    {
-        domctl.cmd = XEN_DOMCTL_getdomaininfo;
-        domctl.domain = next_domid;
-        if ( (rc = do_domctl(xch, &domctl)) < 0 )
-            break;
-        info->domid      = domctl.domain;
-
-        info->dying    = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_dying);
-        info->shutdown = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_shutdown);
-        info->paused   = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_paused);
-        info->blocked  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_blocked);
-        info->running  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_running);
-        info->hvm      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hvm_guest);
-        info->debugged = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_debugged);
-        info->xenstore = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_xs_domain);
-        info->hap      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hap);
-
-        info->shutdown_reason =
-            (domctl.u.getdomaininfo.flags>>XEN_DOMINF_shutdownshift) &
-            XEN_DOMINF_shutdownmask;
-
-        if ( info->shutdown && (info->shutdown_reason == SHUTDOWN_crash) )
-        {
-            info->shutdown = 0;
-            info->crashed  = 1;
-        }
-
-        info->ssidref  = domctl.u.getdomaininfo.ssidref;
-        info->nr_pages = domctl.u.getdomaininfo.tot_pages;
-        info->nr_outstanding_pages = domctl.u.getdomaininfo.outstanding_pages;
-        info->nr_shared_pages = domctl.u.getdomaininfo.shr_pages;
-        info->nr_paged_pages = domctl.u.getdomaininfo.paged_pages;
-        info->max_memkb = domctl.u.getdomaininfo.max_pages << (PAGE_SHIFT-10);
-        info->shared_info_frame = domctl.u.getdomaininfo.shared_info_frame;
-        info->cpu_time = domctl.u.getdomaininfo.cpu_time;
-        info->nr_online_vcpus = domctl.u.getdomaininfo.nr_online_vcpus;
-        info->max_vcpu_id = domctl.u.getdomaininfo.max_vcpu_id;
-        info->cpupool = domctl.u.getdomaininfo.cpupool;
-        info->gpaddr_bits = domctl.u.getdomaininfo.gpaddr_bits;
-        info->arch_config = domctl.u.getdomaininfo.arch_config;
-
-        memcpy(info->handle, domctl.u.getdomaininfo.handle,
-               sizeof(xen_domain_handle_t));
-
-        next_domid = (uint16_t)domctl.domain + 1;
-        info++;
-    }
-
-    if ( nr_doms == 0 )
-        return rc;
-
-    return nr_doms;
-}
-
 int xc_domain_getinfolist(xc_interface *xch,
                           uint32_t first_domain,
                           unsigned int max_domains,
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index ad71ad8a4c..24a14996e6 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -314,7 +314,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         /* fall through */
     default:
         d = rcu_lock_domain_by_id(op->domain);
-        if ( !d && op->cmd != XEN_DOMCTL_getdomaininfo )
+        if ( !d )
             return -ESRCH;
     }
 
@@ -534,42 +534,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     case XEN_DOMCTL_getdomaininfo:
     {
-        domid_t dom = DOMID_INVALID;
-
-        if ( !d )
-        {
-            ret = -EINVAL;
-            if ( op->domain >= DOMID_FIRST_RESERVED )
-                break;
-
-            rcu_read_lock(&domlist_read_lock);
-
-            dom = op->domain;
-            for_each_domain ( d )
-                if ( d->domain_id >= dom )
-                    break;
-        }
-
-        ret = -ESRCH;
-        if ( d == NULL )
-            goto getdomaininfo_out;
-
         ret = xsm_getdomaininfo(XSM_HOOK, d);
         if ( ret )
-            goto getdomaininfo_out;
+            break;
 
         getdomaininfo(d, &op->u.getdomaininfo);
 
         op->domain = op->u.getdomaininfo.domain;
         copyback = 1;
-
-    getdomaininfo_out:
-        /* When d was non-NULL upon entry, no cleanup is needed. */
-        if ( dom == DOMID_INVALID )
-            break;
-
-        rcu_read_unlock(&domlist_read_lock);
-        d = NULL;
         break;
     }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 15:00:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 15:00:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526753.818737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcc-0002bB-JY; Wed, 26 Apr 2023 14:59:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526753.818737; Wed, 26 Apr 2023 14:59:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgcc-0002aK-Fb; Wed, 26 Apr 2023 14:59:54 +0000
Received: by outflank-mailman (input) for mailman id 526753;
 Wed, 26 Apr 2023 14:59:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vxt2=AR=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1prgca-0000bz-NK
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 14:59:52 +0000
Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com
 [2a00:1450:4864:20::42b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fd9d9dd5-e442-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 16:59:49 +0200 (CEST)
Received: by mail-wr1-x42b.google.com with SMTP id
 ffacd0b85a97d-2febac9cacdso4448943f8f.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 07:59:49 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 v17-20020a5d43d1000000b003047ea78b42sm6654116wrr.43.2023.04.26.07.59.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Apr 2023 07:59:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Tim Deegan <tim@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH 6/7] tools: Use new xc function for some xc_domain_getinfo calls
Date: Wed, 26 Apr 2023 15:59:31 +0100
Message-Id: <20230426145932.3340-7-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move calls that require information about a single, precisely identified
domain to the new xc_domain_getinfo_single().

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Tim Deegan <tim@xen.org>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/console/client/main.c             |  7 +++----
 tools/debugger/kdd/kdd-xen.c            |  6 ++++--
 tools/include/xenctrl.h                 |  7 +++++++
 tools/libs/ctrl/xc_domain.c             |  5 ++---
 tools/libs/ctrl/xc_pagetab.c            |  7 +++----
 tools/libs/ctrl/xc_private.c            |  7 +++----
 tools/libs/ctrl/xc_private.h            |  6 +++---
 tools/libs/guest/xg_core.c              | 21 +++++++------------
 tools/libs/guest/xg_core.h              |  6 +++---
 tools/libs/guest/xg_core_arm.c          | 10 ++++-----
 tools/libs/guest/xg_core_x86.c          | 18 ++++++++--------
 tools/libs/guest/xg_cpuid_x86.c         | 28 +++++++++++++------------
 tools/libs/guest/xg_dom_boot.c          | 12 ++---------
 tools/libs/guest/xg_domain.c            |  6 +++---
 tools/libs/guest/xg_offline_page.c      | 10 ++++-----
 tools/libs/guest/xg_private.h           |  1 +
 tools/libs/guest/xg_resume.c            | 17 +++++++--------
 tools/libs/guest/xg_sr_common.h         |  2 +-
 tools/libs/guest/xg_sr_restore.c        | 14 +++++--------
 tools/libs/guest/xg_sr_restore_x86_pv.c |  2 +-
 tools/libs/guest/xg_sr_save.c           | 26 +++++++++--------------
 tools/libs/guest/xg_sr_save_x86_pv.c    |  6 +++---
 tools/libs/light/libxl_sched.c          | 16 +++++++-------
 tools/libs/light/libxl_x86_acpi.c       |  4 ++--
 tools/misc/xen-hvmcrash.c               |  6 +++---
 tools/misc/xen-lowmemd.c                |  6 +++---
 tools/misc/xen-mfndump.c                | 22 ++++++++-----------
 tools/misc/xen-vmtrace.c                |  6 +++---
 tools/vchan/vchan-socket-proxy.c        |  6 +++---
 tools/xenstore/xenstored_domain.c       | 15 +++++++------
 tools/xentrace/xenctx.c                 |  8 +++----
 31 files changed, 146 insertions(+), 167 deletions(-)

diff --git a/tools/console/client/main.c b/tools/console/client/main.c
index 1a6fa162f7..6775006488 100644
--- a/tools/console/client/main.c
+++ b/tools/console/client/main.c
@@ -408,17 +408,16 @@ int main(int argc, char **argv)
 	if (dom_path == NULL)
 		err(errno, "xs_get_domain_path()");
 	if (type == CONSOLE_INVAL) {
-		xc_dominfo_t xcinfo;
+		xc_domaininfo_t xcinfo;
 		xc_interface *xc_handle = xc_interface_open(0,0,0);
 		if (xc_handle == NULL)
 			err(errno, "Could not open xc interface");
-		if ( (xc_domain_getinfo(xc_handle, domid, 1, &xcinfo) != 1) ||
-		     (xcinfo.domid != domid) ) {
+		if (xc_domain_getinfo_single(xc_handle, domid, &xcinfo) < 0) {
 			xc_interface_close(xc_handle);
 			err(errno, "Failed to get domain information");
 		}
 		/* default to pv console for pv guests and serial for hvm guests */
-		if (xcinfo.hvm)
+		if (xcinfo.flags & XEN_DOMINF_hvm_guest)
 			type = CONSOLE_SERIAL;
 		else
 			type = CONSOLE_PV;
diff --git a/tools/debugger/kdd/kdd-xen.c b/tools/debugger/kdd/kdd-xen.c
index e78c9311c4..b1a96bf4e2 100644
--- a/tools/debugger/kdd/kdd-xen.c
+++ b/tools/debugger/kdd/kdd-xen.c
@@ -570,7 +570,7 @@ kdd_guest *kdd_guest_init(char *arg, FILE *log, int verbosity)
     kdd_guest *g = NULL;
     xc_interface *xch = NULL;
     uint32_t domid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
 
     g = calloc(1, sizeof (kdd_guest));
     if (!g) 
@@ -590,7 +590,9 @@ kdd_guest *kdd_guest_init(char *arg, FILE *log, int verbosity)
     g->domid = domid;
 
     /* Check that the domain exists and is HVM */
-    if (xc_domain_getinfo(xch, domid, 1, &info) != 1 || !info.hvm)
+    if (xc_domain_getinfo_single(xch, domid, &info) < 0)
+        goto err;
+    if (!(info.flags & XEN_DOMINF_hvm_guest))
         goto err;
 
     snprintf(g->id, (sizeof g->id) - 1, 
diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 73b07955c6..755759f0fe 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -473,6 +473,13 @@ static inline unsigned int dominfo_shutdown_reason(const xc_domaininfo_t *info)
     return (info->flags >> XEN_DOMINF_shutdownshift) & XEN_DOMINF_shutdownmask;
 }
 
+static inline bool dominfo_shutdown_with(xc_domaininfo_t *info, unsigned int expected_reason)
+{
+    /* The reason doesn't make sense unless the domain is actually shutdown */
+    return (info->flags & XEN_DOMINF_shutdown) &&
+                (dominfo_shutdown_reason(info) == expected_reason);
+}
+
 typedef union 
 {
 #if defined(__i386__) || defined(__x86_64__)
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index 3ff91023bf..5c37dbe200 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -1958,12 +1958,11 @@ int xc_domain_memory_mapping(
     uint32_t add_mapping)
 {
     DECLARE_DOMCTL;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int ret = 0, rc;
     unsigned long done = 0, nr, max_batch_sz;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         PERROR("Could not get info for domain");
         return -EINVAL;
diff --git a/tools/libs/ctrl/xc_pagetab.c b/tools/libs/ctrl/xc_pagetab.c
index db25c20247..d9f886633a 100644
--- a/tools/libs/ctrl/xc_pagetab.c
+++ b/tools/libs/ctrl/xc_pagetab.c
@@ -29,17 +29,16 @@
 unsigned long xc_translate_foreign_address(xc_interface *xch, uint32_t dom,
                                            int vcpu, unsigned long long virt)
 {
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     uint64_t paddr, mask, pte = 0;
     int size, level, pt_levels = 2;
     void *map;
 
-    if (xc_domain_getinfo(xch, dom, 1, &dominfo) != 1 
-        || dominfo.domid != dom)
+    if (xc_domain_getinfo_single(xch, dom, &dominfo) < 0)
         return 0;
 
     /* What kind of paging are we dealing with? */
-    if (dominfo.hvm) {
+    if (dominfo.flags & XEN_DOMINF_hvm_guest) {
         struct hvm_hw_cpu ctx;
         if (xc_domain_hvm_getcontext_partial(xch, dom,
                                              HVM_SAVE_CODE(CPU), vcpu,
diff --git a/tools/libs/ctrl/xc_private.c b/tools/libs/ctrl/xc_private.c
index 2f99a7d2cf..8dcebad401 100644
--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -441,11 +441,10 @@ int xc_machphys_mfn_list(xc_interface *xch,
 
 long xc_get_tot_pages(xc_interface *xch, uint32_t domid)
 {
-    xc_dominfo_t info;
-    if ( (xc_domain_getinfo(xch, domid, 1, &info) != 1) ||
-         (info.domid != domid) )
+    xc_domaininfo_t info;
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
         return -1;
-    return info.nr_pages;
+    return info.tot_pages;
 }
 
 int xc_copy_to_domain_page(xc_interface *xch,
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index 80dc464c93..abd557e861 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -420,12 +420,12 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
 int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...);
 
 #if defined (__i386__) || defined (__x86_64__)
-static inline int xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
+static inline int xc_core_arch_auto_translated_physmap(const xc_domaininfo_t *info)
 {
-    return info->hvm;
+    return info->flags & XEN_DOMINF_hvm_guest;
 }
 #elif defined (__arm__) || defined(__aarch64__)
-static inline int xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
+static inline int xc_core_arch_auto_translated_physmap(const xc_domaininfo_t *info)
 {
     return 1;
 }
diff --git a/tools/libs/guest/xg_core.c b/tools/libs/guest/xg_core.c
index c52f1161c1..33d35715c9 100644
--- a/tools/libs/guest/xg_core.c
+++ b/tools/libs/guest/xg_core.c
@@ -349,7 +349,7 @@ elfnote_dump_none(xc_interface *xch, void *args, dumpcore_rtn_t dump_rtn)
 static int
 elfnote_dump_core_header(
     xc_interface *xch,
-    void *args, dumpcore_rtn_t dump_rtn, const xc_dominfo_t *info,
+    void *args, dumpcore_rtn_t dump_rtn, const xc_domaininfo_t *info,
     int nr_vcpus, unsigned long nr_pages)
 {
     int sts;
@@ -361,7 +361,8 @@ elfnote_dump_core_header(
     
     elfnote.descsz = sizeof(header);
     elfnote.type = XEN_ELFNOTE_DUMPCORE_HEADER;
-    header.xch_magic = info->hvm ? XC_CORE_MAGIC_HVM : XC_CORE_MAGIC;
+    header.xch_magic = (info->flags & XEN_DOMINF_hvm_guest) ? XC_CORE_MAGIC_HVM
+                                                            : XC_CORE_MAGIC;
     header.xch_nr_vcpus = nr_vcpus;
     header.xch_nr_pages = nr_pages;
     header.xch_page_size = PAGE_SIZE;
@@ -423,7 +424,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
                                 void *args,
                                 dumpcore_rtn_t dump_rtn)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     shared_info_any_t *live_shinfo = NULL;
     struct domain_info_context _dinfo = {};
     struct domain_info_context *dinfo = &_dinfo;
@@ -468,7 +469,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         goto out;
     }
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         PERROR("Could not get info for domain");
         goto out;
@@ -476,7 +477,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
     /* Map the shared info frame */
     live_shinfo = xc_map_foreign_range(xch, domid, PAGE_SIZE,
                                        PROT_READ, info.shared_info_frame);
-    if ( !live_shinfo && !info.hvm )
+    if ( !live_shinfo && !(info.flags & XEN_DOMINF_hvm_guest) )
     {
         PERROR("Couldn't map live_shinfo");
         goto out;
@@ -517,12 +518,6 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         dinfo->guest_width = sizeof(unsigned long);
     }
 
-    if ( domid != info.domid )
-    {
-        PERROR("Domain %d does not exist", domid);
-        goto out;
-    }
-
     ctxt = calloc(sizeof(*ctxt), info.max_vcpu_id + 1);
     if ( !ctxt )
     {
@@ -560,9 +555,9 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
      * all the array...
      *
      * We don't want to use the total potential size of the memory map
-     * since that is usually much higher than info.nr_pages.
+     * since that is usually much higher than info.tot_pages.
      */
-    nr_pages = info.nr_pages;
+    nr_pages = info.tot_pages;
 
     if ( !auto_translated_physmap )
     {
diff --git a/tools/libs/guest/xg_core.h b/tools/libs/guest/xg_core.h
index aaca9e0a8b..ff577dad31 100644
--- a/tools/libs/guest/xg_core.h
+++ b/tools/libs/guest/xg_core.h
@@ -134,15 +134,15 @@ typedef struct xc_core_memory_map xc_core_memory_map_t;
 struct xc_core_arch_context;
 int xc_core_arch_memory_map_get(xc_interface *xch,
                                 struct xc_core_arch_context *arch_ctxt,
-                                xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                                xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                                 xc_core_memory_map_t **mapp,
                                 unsigned int *nr_entries);
 int xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo,
-                         xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                         xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                          xen_pfn_t **live_p2m);
 
 int xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo,
-                                  xc_dominfo_t *info,
+                                  xc_domaininfo_t *info,
                                   shared_info_any_t *live_shinfo,
                                   xen_pfn_t **live_p2m);
 
diff --git a/tools/libs/guest/xg_core_arm.c b/tools/libs/guest/xg_core_arm.c
index de30cf0c31..34276152da 100644
--- a/tools/libs/guest/xg_core_arm.c
+++ b/tools/libs/guest/xg_core_arm.c
@@ -33,14 +33,14 @@ xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
 
 int
 xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unused,
-                            xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                            xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                             xc_core_memory_map_t **mapp,
                             unsigned int *nr_entries)
 {
     xen_pfn_t p2m_size = 0;
     xc_core_memory_map_t *map;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &p2m_size) < 0 )
         return -1;
 
     map = malloc(sizeof(*map));
@@ -59,7 +59,7 @@ xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unus
 }
 
 static int
-xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
 {
     errno = ENOSYS;
@@ -67,14 +67,14 @@ xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                               shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
diff --git a/tools/libs/guest/xg_core_x86.c b/tools/libs/guest/xg_core_x86.c
index c5e4542ccc..dbd3a440f7 100644
--- a/tools/libs/guest/xg_core_x86.c
+++ b/tools/libs/guest/xg_core_x86.c
@@ -49,14 +49,14 @@ xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
 
 int
 xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unused,
-                            xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                            xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                             xc_core_memory_map_t **mapp,
                             unsigned int *nr_entries)
 {
     xen_pfn_t p2m_size = 0;
     xc_core_memory_map_t *map;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &p2m_size) < 0 )
         return -1;
 
     map = malloc(sizeof(*map));
@@ -314,24 +314,24 @@ xc_core_arch_map_p2m_tree_rw(xc_interface *xch, struct domain_info_context *dinf
 }
 
 static int
-xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
 {
     xen_pfn_t *p2m_frame_list = NULL;
     uint64_t p2m_cr3;
-    uint32_t dom = info->domid;
+    uint32_t dom = info->domain;
     int ret = -1;
     int err;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &dinfo->p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &dinfo->p2m_size) < 0 )
     {
         ERROR("Could not get maximum GPFN!");
         goto out;
     }
 
-    if ( dinfo->p2m_size < info->nr_pages  )
+    if ( dinfo->p2m_size < info->tot_pages  )
     {
-        ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, info->nr_pages - 1);
+        ERROR("p2m_size < nr_pages -1 (%lx < %"PRIx64, dinfo->p2m_size, info->tot_pages - 1);
         goto out;
     }
 
@@ -366,14 +366,14 @@ out:
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                               shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index bd16a87e48..6d260d2cff 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -281,7 +281,8 @@ static int xc_cpuid_xend_policy(
     xc_interface *xch, uint32_t domid, const struct xc_xend_cpuid *xend)
 {
     int rc;
-    xc_dominfo_t di;
+    bool is_hvm;
+    xc_domaininfo_t di;
     unsigned int nr_leaves, nr_msrs;
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     /*
@@ -291,13 +292,13 @@ static int xc_cpuid_xend_policy(
     xen_cpuid_leaf_t *host = NULL, *def = NULL, *cur = NULL;
     unsigned int nr_host, nr_def, nr_cur;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
-         di.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &di) < 0 )
     {
         ERROR("Failed to obtain d%d info", domid);
         rc = -ESRCH;
         goto fail;
     }
+    is_hvm = di.flags & XEN_DOMINF_hvm_guest;
 
     rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
     if ( rc )
@@ -330,12 +331,12 @@ static int xc_cpuid_xend_policy(
     /* Get the domain type's default policy. */
     nr_msrs = 0;
     nr_def = nr_leaves;
-    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
+    rc = get_system_cpu_policy(xch, is_hvm ? XEN_SYSCTL_cpu_policy_hvm_default
                                            : XEN_SYSCTL_cpu_policy_pv_default,
                                &nr_def, def, &nr_msrs, NULL);
     if ( rc )
     {
-        PERROR("Failed to obtain %s def policy", di.hvm ? "hvm" : "pv");
+        PERROR("Failed to obtain %s def policy", is_hvm ? "hvm" : "pv");
         rc = -errno;
         goto fail;
     }
@@ -428,7 +429,8 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
                           const struct xc_xend_cpuid *xend)
 {
     int rc;
-    xc_dominfo_t di;
+    bool is_hvm;
+    xc_domaininfo_t di;
     unsigned int i, nr_leaves, nr_msrs;
     xen_cpuid_leaf_t *leaves = NULL;
     struct cpu_policy *p = NULL;
@@ -436,13 +438,13 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     uint32_t host_featureset[FEATURESET_NR_ENTRIES] = {};
     uint32_t len = ARRAY_SIZE(host_featureset);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
-         di.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &di) < 0 )
     {
         ERROR("Failed to obtain d%d info", domid);
         rc = -ESRCH;
         goto out;
     }
+    is_hvm = di.flags & XEN_DOMINF_hvm_guest;
 
     rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
     if ( rc )
@@ -475,12 +477,12 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
 
     /* Get the domain's default policy. */
     nr_msrs = 0;
-    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
+    rc = get_system_cpu_policy(xch, is_hvm ? XEN_SYSCTL_cpu_policy_hvm_default
                                            : XEN_SYSCTL_cpu_policy_pv_default,
                                &nr_leaves, leaves, &nr_msrs, NULL);
     if ( rc )
     {
-        PERROR("Failed to obtain %s default policy", di.hvm ? "hvm" : "pv");
+        PERROR("Failed to obtain %s default policy", is_hvm ? "hvm" : "pv");
         rc = -errno;
         goto out;
     }
@@ -514,7 +516,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         p->feat.hle = test_bit(X86_FEATURE_HLE, host_featureset);
         p->feat.rtm = test_bit(X86_FEATURE_RTM, host_featureset);
 
-        if ( di.hvm )
+        if ( is_hvm )
         {
             p->feat.mpx = test_bit(X86_FEATURE_MPX, host_featureset);
         }
@@ -571,7 +573,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     {
         p->extd.itsc = itsc;
 
-        if ( di.hvm )
+        if ( is_hvm )
         {
             p->basic.pae = pae;
             p->basic.vmx = nested_virt;
@@ -579,7 +581,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         }
     }
 
-    if ( !di.hvm )
+    if ( !is_hvm )
     {
         /*
          * On hardware without CPUID Faulting, PV guests see real topology.
diff --git a/tools/libs/guest/xg_dom_boot.c b/tools/libs/guest/xg_dom_boot.c
index 263a3f4c85..59b4d641c9 100644
--- a/tools/libs/guest/xg_dom_boot.c
+++ b/tools/libs/guest/xg_dom_boot.c
@@ -164,7 +164,7 @@ void *xc_dom_boot_domU_map(struct xc_dom_image *dom, xen_pfn_t pfn,
 
 int xc_dom_boot_image(struct xc_dom_image *dom)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int rc;
 
     DOMPRINTF_CALLED(dom->xch);
@@ -174,21 +174,13 @@ int xc_dom_boot_image(struct xc_dom_image *dom)
         return rc;
 
     /* collect some info */
-    rc = xc_domain_getinfo(dom->xch, dom->guest_domid, 1, &info);
+    rc = xc_domain_getinfo_single(dom->xch, dom->guest_domid, &info);
     if ( rc < 0 )
     {
         xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
                      "%s: getdomaininfo failed (rc=%d)", __FUNCTION__, rc);
         return rc;
     }
-    if ( rc == 0 || info.domid != dom->guest_domid )
-    {
-        xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
-                     "%s: Huh? No domains found (nr_domains=%d) "
-                     "or domid mismatch (%d != %d)", __FUNCTION__,
-                     rc, info.domid, dom->guest_domid);
-        return -1;
-    }
     dom->shared_info_mfn = info.shared_info_frame;
 
     /* initial mm setup */
diff --git a/tools/libs/guest/xg_domain.c b/tools/libs/guest/xg_domain.c
index f0e7748449..49e532ee33 100644
--- a/tools/libs/guest/xg_domain.c
+++ b/tools/libs/guest/xg_domain.c
@@ -37,7 +37,7 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
 {
     struct domain_info_context _di;
 
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     shared_info_any_t *live_shinfo;
     xen_capabilities_info_t xen_caps = "";
     unsigned long i;
@@ -49,7 +49,7 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
         return -1;
     }
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         PERROR("Could not get domain info");
         return -1;
@@ -86,7 +86,7 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
                                        info.shared_info_frame);
     if ( !live_shinfo )
     {
-        PERROR("Could not map the shared info frame (MFN 0x%lx)",
+        PERROR("Could not map the shared info frame (MFN 0x%"PRIx64")",
                info.shared_info_frame);
         return -1;
     }
diff --git a/tools/libs/guest/xg_offline_page.c b/tools/libs/guest/xg_offline_page.c
index ccd0299f0f..0ca9ca32ef 100644
--- a/tools/libs/guest/xg_offline_page.c
+++ b/tools/libs/guest/xg_offline_page.c
@@ -370,7 +370,7 @@ static int clear_pte(xc_interface *xch, uint32_t domid,
  */
 
 static int is_page_exchangable(xc_interface *xch, uint32_t domid, xen_pfn_t mfn,
-                               xc_dominfo_t *info)
+                               xc_domaininfo_t *info)
 {
     uint32_t status;
     int rc;
@@ -381,7 +381,7 @@ static int is_page_exchangable(xc_interface *xch, uint32_t domid, xen_pfn_t mfn,
         DPRINTF("Dom0's page can't be LM");
         return 0;
     }
-    if (info->hvm)
+    if (info->flags & XEN_DOMINF_hvm_guest)
     {
         DPRINTF("Currently we can only live change PV guest's page\n");
         return 0;
@@ -462,7 +462,7 @@ err0:
 /* The domain should be suspended when called here */
 int xc_exchange_page(xc_interface *xch, uint32_t domid, xen_pfn_t mfn)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xc_domain_meminfo minfo;
     struct xc_mmu *mmu = NULL;
     struct pte_backup old_ptes = {NULL, 0, 0};
@@ -477,13 +477,13 @@ int xc_exchange_page(xc_interface *xch, uint32_t domid, xen_pfn_t mfn)
     xen_pfn_t *m2p_table;
     unsigned long max_mfn;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Could not get domain info");
         return -1;
     }
 
-    if (!info.shutdown || info.shutdown_reason != SHUTDOWN_suspend)
+    if (!dominfo_shutdown_with(&info, SHUTDOWN_suspend))
     {
         errno = EINVAL;
         ERROR("Can't exchange page unless domain is suspended\n");
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index e729a8106c..d73947094f 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -16,6 +16,7 @@
 #ifndef XG_PRIVATE_H
 #define XG_PRIVATE_H
 
+#include <inttypes.h>
 #include <unistd.h>
 #include <errno.h>
 #include <fcntl.h>
diff --git a/tools/libs/guest/xg_resume.c b/tools/libs/guest/xg_resume.c
index 77e2451a3c..60d682c746 100644
--- a/tools/libs/guest/xg_resume.c
+++ b/tools/libs/guest/xg_resume.c
@@ -26,28 +26,27 @@
 static int modify_returncode(xc_interface *xch, uint32_t domid)
 {
     vcpu_guest_context_any_t ctxt;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     xen_capabilities_info_t caps;
     struct domain_info_context _dinfo = {};
     struct domain_info_context *dinfo = &_dinfo;
     int rc;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         PERROR("Could not get domain info");
         return -1;
     }
 
-    if ( !info.shutdown || (info.shutdown_reason != SHUTDOWN_suspend) )
+    if ( !dominfo_shutdown_with(&info, SHUTDOWN_suspend))
     {
         ERROR("Dom %d not suspended: (shutdown %d, reason %d)", domid,
-              info.shutdown, info.shutdown_reason);
+              info.flags & XEN_DOMINF_shutdown, dominfo_shutdown_reason(&info));
         errno = EINVAL;
         return -1;
     }
 
-    if ( info.hvm )
+    if ( info.flags & XEN_DOMINF_hvm_guest )
     {
         /* HVM guests without PV drivers have no return code to modify. */
         uint64_t irq = 0;
@@ -133,7 +132,7 @@ static int xc_domain_resume_hvm(xc_interface *xch, uint32_t domid)
 static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
 {
     DECLARE_DOMCTL;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int i, rc = -1;
 #if defined(__i386__) || defined(__x86_64__)
     struct domain_info_context _dinfo = { .guest_width = 0,
@@ -146,7 +145,7 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
     xen_pfn_t *p2m = NULL;
 #endif
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         PERROR("Could not get domain info");
         return rc;
@@ -156,7 +155,7 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
      * (x86 only) Rewrite store_mfn and console_mfn back to MFN (from PFN).
      */
 #if defined(__i386__) || defined(__x86_64__)
-    if ( info.hvm )
+    if ( info.flags & XEN_DOMINF_hvm_guest )
         return xc_domain_resume_hvm(xch, domid);
 
     if ( xc_domain_get_guest_width(xch, domid, &dinfo->guest_width) != 0 )
diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 36d45ef56f..2f058ee3a6 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -220,7 +220,7 @@ struct xc_sr_context
     /* Plain VM, or checkpoints over time. */
     xc_stream_type_t stream_type;
 
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
 
     union /* Common save or restore data. */
     {
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 7314a24cf9..a03183f4b9 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -852,6 +852,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
                       xc_stream_type_t stream_type,
                       struct restore_callbacks *callbacks, int send_back_fd)
 {
+    bool is_hvm;
     xen_pfn_t nr_pfns;
     struct xc_sr_context ctx = {
         .xch = xch,
@@ -887,20 +888,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         break;
     }
 
-    if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )
+    if ( xc_domain_getinfo_single(xch, dom, &ctx.dominfo) < 0 )
     {
         PERROR("Failed to get domain info");
         return -1;
     }
 
-    if ( ctx.dominfo.domid != dom )
-    {
-        ERROR("Domain %u does not exist", dom);
-        return -1;
-    }
-
+    is_hvm = !!(ctx.dominfo.flags & XEN_DOMINF_hvm_guest);
     DPRINTF("fd %d, dom %u, hvm %u, stream_type %d",
-            io_fd, dom, ctx.dominfo.hvm, stream_type);
+            io_fd, dom, is_hvm, stream_type);
 
     ctx.domid = dom;
 
@@ -914,7 +910,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
     }
 
     ctx.restore.p2m_size = nr_pfns;
-    ctx.restore.ops = ctx.dominfo.hvm
+    ctx.restore.ops = is_hvm
         ? restore_ops_x86_hvm : restore_ops_x86_pv;
 
     if ( restore(&ctx) )
diff --git a/tools/libs/guest/xg_sr_restore_x86_pv.c b/tools/libs/guest/xg_sr_restore_x86_pv.c
index dc50b0f5a8..eaeb97f4a0 100644
--- a/tools/libs/guest/xg_sr_restore_x86_pv.c
+++ b/tools/libs/guest/xg_sr_restore_x86_pv.c
@@ -903,7 +903,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
         ctx->dominfo.shared_info_frame);
     if ( !guest_shinfo )
     {
-        PERROR("Failed to map Shared Info at mfn %#lx",
+        PERROR("Failed to map Shared Info at mfn %#"PRIx64,
                ctx->dominfo.shared_info_frame);
         goto err;
     }
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 9853d8d846..8fc8e9d3b2 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -336,19 +336,17 @@ static int suspend_domain(struct xc_sr_context *ctx)
     }
 
     /* Refresh domain information. */
-    if ( (xc_domain_getinfo(xch, ctx->domid, 1, &ctx->dominfo) != 1) ||
-         (ctx->dominfo.domid != ctx->domid) )
+    if ( xc_domain_getinfo_single(xch, ctx->domid, &ctx->dominfo) < 0 )
     {
         PERROR("Unable to refresh domain information");
         return -1;
     }
 
     /* Confirm the domain has actually been paused. */
-    if ( !ctx->dominfo.shutdown ||
-         (ctx->dominfo.shutdown_reason != SHUTDOWN_suspend) )
+    if ( !dominfo_shutdown_with(&ctx->dominfo, SHUTDOWN_suspend) )
     {
         ERROR("Domain has not been suspended: shutdown %d, reason %d",
-              ctx->dominfo.shutdown, ctx->dominfo.shutdown_reason);
+              ctx->dominfo.flags & XEN_DOMINF_shutdown, dominfo_shutdown_reason(&ctx->dominfo));
         return -1;
     }
 
@@ -893,8 +891,7 @@ static int save(struct xc_sr_context *ctx, uint16_t guest_type)
         if ( rc )
             goto err;
 
-        if ( !ctx->dominfo.shutdown ||
-             (ctx->dominfo.shutdown_reason != SHUTDOWN_suspend) )
+        if ( !dominfo_shutdown_with(&ctx->dominfo, SHUTDOWN_suspend) )
         {
             ERROR("Domain has not been suspended");
             rc = -1;
@@ -989,6 +986,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
         .fd = io_fd,
         .stream_type = stream_type,
     };
+    bool is_hvm;
 
     /* GCC 4.4 (of CentOS 6.x vintage) can' t initialise anonymous unions. */
     ctx.save.callbacks = callbacks;
@@ -996,17 +994,13 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
     ctx.save.debug = !!(flags & XCFLAGS_DEBUG);
     ctx.save.recv_fd = recv_fd;
 
-    if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )
+    if ( xc_domain_getinfo_single(xch, dom, &ctx.dominfo) < 0 )
     {
         PERROR("Failed to get domain info");
         return -1;
     }
 
-    if ( ctx.dominfo.domid != dom )
-    {
-        ERROR("Domain %u does not exist", dom);
-        return -1;
-    }
+    is_hvm = !!(ctx.dominfo.flags & XEN_DOMINF_hvm_guest);
 
     /* Sanity check stream_type-related parameters */
     switch ( stream_type )
@@ -1018,7 +1012,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
         assert(callbacks->checkpoint && callbacks->postcopy);
         /* Fallthrough */
     case XC_STREAM_PLAIN:
-        if ( ctx.dominfo.hvm )
+        if ( is_hvm )
             assert(callbacks->switch_qemu_logdirty);
         break;
 
@@ -1028,11 +1022,11 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
     }
 
     DPRINTF("fd %d, dom %u, flags %u, hvm %d",
-            io_fd, dom, flags, ctx.dominfo.hvm);
+            io_fd, dom, flags, is_hvm);
 
     ctx.domid = dom;
 
-    if ( ctx.dominfo.hvm )
+    if ( is_hvm )
     {
         ctx.save.ops = save_ops_x86_hvm;
         return save(&ctx, DHDR_TYPE_X86_HVM);
diff --git a/tools/libs/guest/xg_sr_save_x86_pv.c b/tools/libs/guest/xg_sr_save_x86_pv.c
index 4964f1f7b8..f3d7a7a71a 100644
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/guest/xg_sr_save_x86_pv.c
@@ -20,7 +20,7 @@ static int map_shinfo(struct xc_sr_context *ctx)
         xch, ctx->domid, PAGE_SIZE, PROT_READ, ctx->dominfo.shared_info_frame);
     if ( !ctx->x86.pv.shinfo )
     {
-        PERROR("Failed to map shared info frame at mfn %#lx",
+        PERROR("Failed to map shared info frame at mfn %#"PRIx64,
                ctx->dominfo.shared_info_frame);
         return -1;
     }
@@ -943,7 +943,7 @@ static int normalise_pagetable(struct xc_sr_context *ctx, const uint64_t *src,
 #ifdef __i386__
             if ( mfn == INVALID_MFN )
             {
-                if ( !ctx->dominfo.paused )
+                if ( !(ctx->dominfo.flags & XEN_DOMINF_paused) )
                     errno = EAGAIN;
                 else
                 {
@@ -965,7 +965,7 @@ static int normalise_pagetable(struct xc_sr_context *ctx, const uint64_t *src,
 
             if ( !mfn_in_pseudophysmap(ctx, mfn) )
             {
-                if ( !ctx->dominfo.paused )
+                if ( !(ctx->dominfo.flags & XEN_DOMINF_paused) )
                     errno = EAGAIN;
                 else
                 {
diff --git a/tools/libs/light/libxl_sched.c b/tools/libs/light/libxl_sched.c
index 19da7c49ea..ad81f15a97 100644
--- a/tools/libs/light/libxl_sched.c
+++ b/tools/libs/light/libxl_sched.c
@@ -498,10 +498,10 @@ static int sched_rtds_vcpu_get(libxl__gc *gc, uint32_t domid,
 {
     uint32_t num_vcpus;
     int i, r, rc;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -552,10 +552,10 @@ static int sched_rtds_vcpu_get_all(libxl__gc *gc, uint32_t domid,
 {
     uint32_t num_vcpus;
     int i, r, rc;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -602,10 +602,10 @@ static int sched_rtds_vcpu_set(libxl__gc *gc, uint32_t domid,
     int r, rc;
     int i;
     uint16_t max_vcpuid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -662,11 +662,11 @@ static int sched_rtds_vcpu_set_all(libxl__gc *gc, uint32_t domid,
     int r, rc;
     int i;
     uint16_t max_vcpuid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
     uint32_t num_vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
diff --git a/tools/libs/light/libxl_x86_acpi.c b/tools/libs/light/libxl_x86_acpi.c
index 22eb160659..796b009d0c 100644
--- a/tools/libs/light/libxl_x86_acpi.c
+++ b/tools/libs/light/libxl_x86_acpi.c
@@ -87,14 +87,14 @@ static int init_acpi_config(libxl__gc *gc,
 {
     xc_interface *xch = dom->xch;
     uint32_t domid = dom->guest_domid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct hvm_info_table *hvminfo;
     int i, r, rc;
 
     config->dsdt_anycpu = config->dsdt_15cpu = dsdt_pvh;
     config->dsdt_anycpu_len = config->dsdt_15cpu_len = dsdt_pvh_len;
 
-    r = xc_domain_getinfo(xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(xch, domid, &info);
     if (r < 0) {
         LOG(ERROR, "getdomaininfo failed (rc=%d)", r);
         rc = ERROR_FAIL;
diff --git a/tools/misc/xen-hvmcrash.c b/tools/misc/xen-hvmcrash.c
index 4f0dabcb18..1d058fa40a 100644
--- a/tools/misc/xen-hvmcrash.c
+++ b/tools/misc/xen-hvmcrash.c
@@ -48,7 +48,7 @@ main(int argc, char **argv)
 {
     int domid;
     xc_interface *xch;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     int ret;
     uint32_t len;
     uint8_t *buf;
@@ -66,13 +66,13 @@ main(int argc, char **argv)
         exit(1);
     }
 
-    ret = xc_domain_getinfo(xch, domid, 1, &dominfo);
+    ret = xc_domain_getinfo_single(xch, domid, &dominfo);
     if (ret < 0) {
         perror("xc_domain_getinfo");
         exit(1);
     }
 
-    if (!dominfo.hvm) {
+    if (!(dominfo.flags & XEN_DOMINF_hvm_guest)) {
         fprintf(stderr, "domain %d is not HVM\n", domid);
         exit(1);
     }
diff --git a/tools/misc/xen-lowmemd.c b/tools/misc/xen-lowmemd.c
index a3a2741242..b483f63fdc 100644
--- a/tools/misc/xen-lowmemd.c
+++ b/tools/misc/xen-lowmemd.c
@@ -38,7 +38,7 @@ void cleanup(void)
 #define BUFSZ 512
 void handle_low_mem(void)
 {
-    xc_dominfo_t  dom0_info;
+    xc_domaininfo_t  dom0_info;
     xc_physinfo_t info;
     unsigned long long free_pages, dom0_pages, diff, dom0_target;
     char data[BUFSZ], error[BUFSZ];
@@ -58,13 +58,13 @@ void handle_low_mem(void)
         return;
     diff = THRESHOLD_PG - free_pages; 
 
-    if (xc_domain_getinfo(xch, 0, 1, &dom0_info) < 1)
+    if (xc_domain_getinfo_single(xch, 0, &dom0_info) < 0)
     {
         perror("Failed to get dom0 info");
         return;
     }
 
-    dom0_pages = (unsigned long long) dom0_info.nr_pages;
+    dom0_pages = (unsigned long long) dom0_info.tot_pages;
     printf("Dom0 pages: 0x%llx:%llu\n", dom0_pages, dom0_pages);
     dom0_target = dom0_pages - diff;
     if (dom0_target <= DOM0_FLOOR_PG)
diff --git a/tools/misc/xen-mfndump.c b/tools/misc/xen-mfndump.c
index b32c95e262..8863ece3f5 100644
--- a/tools/misc/xen-mfndump.c
+++ b/tools/misc/xen-mfndump.c
@@ -74,7 +74,7 @@ int dump_m2p_func(int argc, char *argv[])
 int dump_p2m_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     unsigned long i;
     int domid;
 
@@ -85,8 +85,7 @@ int dump_p2m_func(int argc, char *argv[])
     }
     domid = atoi(argv[0]);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -158,7 +157,7 @@ int dump_p2m_func(int argc, char *argv[])
 int dump_ptes_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     void *page = NULL;
     unsigned long i, max_mfn;
     int domid, pte_num, rc = 0;
@@ -172,8 +171,7 @@ int dump_ptes_func(int argc, char *argv[])
     domid = atoi(argv[0]);
     mfn = strtoul(argv[1], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -266,7 +264,7 @@ int dump_ptes_func(int argc, char *argv[])
 int lookup_pte_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     void *page = NULL;
     unsigned long i, j;
     int domid, pte_num;
@@ -280,8 +278,7 @@ int lookup_pte_func(int argc, char *argv[])
     domid = atoi(argv[0]);
     mfn = strtoul(argv[1], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -336,7 +333,7 @@ int lookup_pte_func(int argc, char *argv[])
 
 int memcmp_mfns_func(int argc, char *argv[])
 {
-    xc_dominfo_t info1, info2;
+    xc_domaininfo_t info1, info2;
     void *page1 = NULL, *page2 = NULL;
     int domid1, domid2;
     xen_pfn_t mfn1, mfn2;
@@ -352,9 +349,8 @@ int memcmp_mfns_func(int argc, char *argv[])
     mfn1 = strtoul(argv[1], NULL, 16);
     mfn2 = strtoul(argv[3], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid1, 1, &info1) != 1 ||
-         xc_domain_getinfo(xch, domid2, 1, &info2) != 1 ||
-         info1.domid != domid1 || info2.domid != domid2)
+    if ( xc_domain_getinfo_single(xch, domid1, &info1) < 0 ||
+         xc_domain_getinfo_single(xch, domid2, &info2) < 0)
     {
         ERROR("Failed to obtain info for domains\n");
         return -1;
diff --git a/tools/misc/xen-vmtrace.c b/tools/misc/xen-vmtrace.c
index 5b688a54af..ba2ce17a17 100644
--- a/tools/misc/xen-vmtrace.c
+++ b/tools/misc/xen-vmtrace.c
@@ -133,15 +133,15 @@ int main(int argc, char **argv)
 
     while ( !interrupted )
     {
-        xc_dominfo_t dominfo;
+        xc_domaininfo_t dominfo;
 
         if ( get_more_data() )
             goto out;
 
         usleep(1000 * 100);
 
-        if ( xc_domain_getinfo(xch, domid, 1, &dominfo) != 1 ||
-             dominfo.domid != domid || dominfo.shutdown )
+        if ( xc_domain_getinfo_single(xch, domid, &dominfo) < 0 ||
+             (dominfo.flags & XEN_DOMINF_shutdown) )
         {
             if ( get_more_data() )
                 goto out;
diff --git a/tools/vchan/vchan-socket-proxy.c b/tools/vchan/vchan-socket-proxy.c
index e1d959c6d1..9c4c336b03 100644
--- a/tools/vchan/vchan-socket-proxy.c
+++ b/tools/vchan/vchan-socket-proxy.c
@@ -222,7 +222,7 @@ static struct libxenvchan *connect_vchan(int domid, const char *path) {
     struct libxenvchan *ctrl = NULL;
     struct xs_handle *xs = NULL;
     xc_interface *xc = NULL;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     char **watch_ret;
     unsigned int watch_num;
     int ret;
@@ -254,12 +254,12 @@ static struct libxenvchan *connect_vchan(int domid, const char *path) {
         if (ctrl)
             break;
 
-        ret = xc_domain_getinfo(xc, domid, 1, &dominfo);
+        ret = xc_domain_getinfo_single(xc, domid, &dominfo);
         /* break the loop if domain is definitely not there anymore, but
          * continue if it is or the call failed (like EPERM) */
         if (ret == -1 && errno == ESRCH)
             break;
-        if (ret == 1 && (dominfo.domid != (uint32_t)domid || dominfo.dying))
+        if (ret == 0 && (dominfo.flags & XEN_DOMINF_dying))
             break;
     }
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f62be2245c..aeb7595ae1 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -339,15 +339,14 @@ static int destroy_domain(void *_domain)
 	return 0;
 }
 
-static bool get_domain_info(unsigned int domid, xc_dominfo_t *dominfo)
+static bool get_domain_info(unsigned int domid, xc_domaininfo_t *dominfo)
 {
-	return xc_domain_getinfo(*xc_handle, domid, 1, dominfo) == 1 &&
-	       dominfo->domid == domid;
+	return xc_domain_getinfo_single(*xc_handle, domid, dominfo) == 0;
 }
 
 static int check_domain(const void *k, void *v, void *arg)
 {
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 	struct connection *conn;
 	bool dom_valid;
 	struct domain *domain = v;
@@ -360,12 +359,12 @@ static int check_domain(const void *k, void *v, void *arg)
 		return 0;
 	}
 	if (dom_valid) {
-		if ((dominfo.crashed || dominfo.shutdown)
+		if ((dominfo.flags & XEN_DOMINF_shutdown)
 		    && !domain->shutdown) {
 			domain->shutdown = true;
 			*notify = true;
 		}
-		if (!dominfo.dying)
+		if (!(dominfo.flags & XEN_DOMINF_dying))
 			return 0;
 	}
 	if (domain->conn) {
@@ -486,7 +485,7 @@ static struct domain *find_or_alloc_domain(const void *ctx, unsigned int domid)
 static struct domain *find_or_alloc_existing_domain(unsigned int domid)
 {
 	struct domain *domain;
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 
 	domain = find_domain_struct(domid);
 	if (!domain && get_domain_info(domid, &dominfo))
@@ -1010,7 +1009,7 @@ int domain_alloc_permrefs(struct node_perms *perms)
 {
 	unsigned int i, domid;
 	struct domain *d;
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 
 	for (i = 0; i < perms->num; i++) {
 		domid = perms->p[i].id;
diff --git a/tools/xentrace/xenctx.c b/tools/xentrace/xenctx.c
index 85ba0c0fa6..9acb9db460 100644
--- a/tools/xentrace/xenctx.c
+++ b/tools/xentrace/xenctx.c
@@ -92,7 +92,7 @@ static struct xenctx {
     int do_stack;
 #endif
     int kernel_start_set;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
 } xenctx;
 
 struct symbol {
@@ -989,7 +989,7 @@ static void dump_ctx(int vcpu)
 
 #if defined(__i386__) || defined(__x86_64__)
     {
-        if (xenctx.dominfo.hvm) {
+        if (xenctx.dominfo.flags & XEN_DOMINF_hvm_guest) {
             struct hvm_hw_cpu cpuctx;
             xen_capabilities_info_t xen_caps = "";
             if (xc_domain_hvm_getcontext_partial(
@@ -1269,9 +1269,9 @@ int main(int argc, char **argv)
         exit(-1);
     }
 
-    ret = xc_domain_getinfo(xenctx.xc_handle, xenctx.domid, 1, &xenctx.dominfo);
+    ret = xc_domain_getinfo_single(xenctx.xc_handle, xenctx.domid, &xenctx.dominfo);
     if (ret < 0) {
-        perror("xc_domain_getinfo");
+        perror("xc_domain_getinfo_single");
         exit(-1);
     }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 15:20:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 15:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526777.818747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgwA-0007u7-Jx; Wed, 26 Apr 2023 15:20:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526777.818747; Wed, 26 Apr 2023 15:20:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prgwA-0007u0-FK; Wed, 26 Apr 2023 15:20:06 +0000
Received: by outflank-mailman (input) for mailman id 526777;
 Wed, 26 Apr 2023 15:20:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vDOC=AR=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1prgw9-0007pK-Lk
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 15:20:05 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d0b5c262-e445-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 17:20:03 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 3F8831FDD3;
 Wed, 26 Apr 2023 15:20:02 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 0EC97138F0;
 Wed, 26 Apr 2023 15:20:02 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id zDcaAiJBSWQKfwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 26 Apr 2023 15:20:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0b5c262-e445-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682522402; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=doQDFYN3VKAZkjC7VD7tFCLBIqOUq3XK95YVk3QGTOY=;
	b=n8yo9DfJTnztYjWntbkthDRpGNwf3LUq2oXc8YoJEkInn2mXypk1kpgUyOGo4TIkGunYaW
	Y1tN91gH1RvtEB07cJ/kz4nY3rkJlaLieIhaXfmku8NIKPSwB8BNDQZWnM2/Da0i6pq3/4
	ewGV1wHmyliawGexPE2NGEM5Y79fxp8=
Message-ID: <18f4bd31-b26c-5cdc-5798-94ac8b7f282e@suse.com>
Date: Wed, 26 Apr 2023 17:20:01 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.org>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <20230426145932.3340-5-alejandro.vallejo@cloud.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 4/7] tools: Make init-xenstore-domain use
 xc_domain_getinfolist()
In-Reply-To: <20230426145932.3340-5-alejandro.vallejo@cloud.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------uKEOHg9kx7vBXDsA0lco8sHH"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------uKEOHg9kx7vBXDsA0lco8sHH
Content-Type: multipart/mixed; boundary="------------caGaSSQ00IoQW8N2hQtV98RR";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.org>
Message-ID: <18f4bd31-b26c-5cdc-5798-94ac8b7f282e@suse.com>
Subject: Re: [PATCH 4/7] tools: Make init-xenstore-domain use
 xc_domain_getinfolist()
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <20230426145932.3340-5-alejandro.vallejo@cloud.com>
In-Reply-To: <20230426145932.3340-5-alejandro.vallejo@cloud.com>

--------------caGaSSQ00IoQW8N2hQtV98RR
Content-Type: multipart/mixed; boundary="------------DvC0XkO9nd0vQ4Cm4fvU2NCm"

--------------DvC0XkO9nd0vQ4Cm4fvU2NCm
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 26.04.23 16:59, Alejandro Vallejo wrote:
> It currently relies on xc_domain_getinfo() returning the next available
> domain past "first_domid", which is a feature that will disappear in a
> future patch.
> 
> Furthermore and while at it, make it so the hypercall tries to fetch information
> about more than one domain per hypercall so we can (hopefully) get away with a
> single hypercall in a typical system.
> 
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Juergen Gross <jgross@suse.org>
> ---
>   tools/helpers/init-xenstore-domain.c | 14 +++++++++-----
>   1 file changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
> index 0950ba7dc5..5f40901d31 100644
> --- a/tools/helpers/init-xenstore-domain.c
> +++ b/tools/helpers/init-xenstore-domain.c
> @@ -21,6 +21,7 @@
>   #define LAPIC_BASE_ADDRESS  0xfee00000UL
>   #define MB(x)               ((uint64_t)x << 20)
>   #define GB(x)               ((uint64_t)x << 30)
> +#define ARRAY_SIZE(x)       (sizeof(x) / sizeof((x)[0]))

Please include <xen-tools/common-macros.h> instead of defining ARRAY_SIZE().

With that changed:

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------DvC0XkO9nd0vQ4Cm4fvU2NCm--

--------------caGaSSQ00IoQW8N2hQtV98RR--

--------------uKEOHg9kx7vBXDsA0lco8sHH--


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 16:22:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 16:22:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526783.818756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prhud-0006Vg-4N; Wed, 26 Apr 2023 16:22:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526783.818756; Wed, 26 Apr 2023 16:22:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prhud-0006VZ-1V; Wed, 26 Apr 2023 16:22:35 +0000
Received: by outflank-mailman (input) for mailman id 526783;
 Wed, 26 Apr 2023 16:22:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B7RY=AR=bu.edu=alxndr@srs-se1.protection.inumbo.net>)
 id 1prhub-0006VT-41
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 16:22:33 +0000
Received: from esa6.hc2706-39.iphmx.com (esa6.hc2706-39.iphmx.com
 [216.71.137.79]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8911cda6-e44e-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 18:22:29 +0200 (CEST)
Received: from mail-pf1-f200.google.com ([209.85.210.200])
 by ob1.hc2706-39.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 26 Apr 2023 12:20:34 -0400
Received: by mail-pf1-f200.google.com with SMTP id
 d2e1a72fcca58-63b62529864so8509432b3a.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 09:20:34 -0700 (PDT)
Received: from mozz.bu.edu (mozz.bu.edu. [128.197.127.33])
 by smtp.gmail.com with ESMTPSA id
 b19-20020a0cb3d3000000b005e8d802ce32sm4899810qvf.143.2023.04.26.09.20.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Apr 2023 09:20:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8911cda6-e44e-11ed-b224-6b7b168915f2
X-IronPort-RemoteIP: 209.85.210.200
X-IronPort-MID: 278853036
X-IronPort-Reputation: None
X-IronPort-Listener: OutgoingMail
X-IronPort-SenderGroup: RELAY_GSUITE
X-IronPort-MailFlowPolicy: $RELAYED
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=bu.edu; s=s1gsbu; t=1682526033; x=1685118033;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=VcAcKX7/rrkQbLPugxX5FYzGcIPCu/jEZtAHpxtM8bY=;
        b=iE1TTT1FHxaEvPl5pFP7UtQGtLVlQV2LnCGm1vV614edkOz53U26xzHhCl18WuFMvN
         18w7iUk1NRQMjqz9lsnLwQltGYQE2F77sg4CvTfpukR5v7WPZtkxTxsPFzPJnAHlHUQe
         rra6eb4jjOJIAHfS85xwkE9NaeQyZ4uHTUXxsjc6nwSBhkbCCu7nvTWuSusi2rQQ0h+d
         1PdWPIChBeCxO+DQmkXDjvQMLUWMDr932zQox4wVieCD5gohG/xE0j4Mf4gQAIEaNKTt
         bFPKoz07FD8S71o+9HcM5W6Zz7oLhT406LE4R8bZamrwm6L73AO3dU10sF4m/fxMbtaY
         DVzQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682526033; x=1685118033;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=VcAcKX7/rrkQbLPugxX5FYzGcIPCu/jEZtAHpxtM8bY=;
        b=I1SowRimgeLL0+3cs3A2IE455cgP/j3Q7ad0PynjDaGvegk7JazJosHjryW5mA5226
         RCLRilweNdeMSZYJC4z+Uk1zjlI4DciN9QujxNW2nirK/n+gb6Rf9sepHQNw1omthVJf
         ItFVfy/L8fMfW2K1jyYwEXv0CG0rODe7sAAstr3DXh4TPf28bxwqpvbQCUwAh7gkbmgO
         M7feeLohcn93dvJ6ROc0TAmDhmSeQ/bgEzFqgsIJuaPxEJDex4kQzj77gdMCAbi1zCOS
         Qryw/3lgPUYDJFNPooKVwULdh/pel3phgJqVXJrC9yVm0Kgh+t9BVYqhNGGV5c2Qy9gN
         gMxA==
X-Gm-Message-State: AAQBX9dYdO2U3wDdeRAEu8vU0R+ySHHRzR3zP8QAHeInCfhg2ec5kox8
	toZiErlxryMUn+bb5sqs9HeTidoM26aR4td2yyKp/AAuH6nK7aO2FBaRFqkgkOAHmDzfs54c6lk
	Mf0DTu80yPVeT0TYyqEQA0E1qT9loTMcqJGTaRutTTA==
X-Received: by 2002:a05:6214:21e2:b0:5ef:9b22:dc7e with SMTP id p2-20020a05621421e200b005ef9b22dc7emr32955961qvj.8.1682526012245;
        Wed, 26 Apr 2023 09:20:12 -0700 (PDT)
X-Google-Smtp-Source: AKy350aIyo/hhWU+vCFPCfUcLp7e6OYg/xUEp5KMz7jZpd1jbs6u7UCc+TlkRLn285gB+3L9GIoDMw==
X-Received: by 2002:a05:6214:21e2:b0:5ef:9b22:dc7e with SMTP id p2-20020a05621421e200b005ef9b22dc7emr32955882qvj.8.1682526011717;
        Wed, 26 Apr 2023 09:20:11 -0700 (PDT)
From: Alexander Bulekov <alxndr@bu.edu>
To: qemu-devel@nongnu.org
Cc: Alexander Bulekov <alxndr@bu.edu>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Mauro Matteo Cascella <mcascell@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Laurent Vivier <lvivier@redhat.com>,
	Bandan Das <bsd@redhat.com>,
	"Edgar E . Iglesias" <edgar.iglesias@gmail.com>,
	Darren Kenny <darren.kenny@oracle.com>,
	Bin Meng <bin.meng@windriver.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	=?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Jon Maloy <jmaloy@redhat.com>,
	Siqi Chen <coc.cyqh@gmail.com>,
	Michael Tokarev <mjt@tls.msk.ru>,
	Paul Durrant <paul@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Amit Shah <amit@kernel.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Keith Busch <kbusch@kernel.org>,
	Klaus Jensen <its@irrelevant.dk>,
	Fam Zheng <fam@euphon.net>,
	Dmitry Fleytman <dmitry.fleytman@gmail.com>,
	"Gonglei (Arei)" <arei.gonglei@huawei.com>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs),
	qemu-block@nongnu.org (open list:virtio-blk),
	qemu-arm@nongnu.org (open list:i.MX31 (kzm)),
	qemu-ppc@nongnu.org (open list:Old World (g3beige))
Subject: [PATCH v9 4/8] hw: replace most qemu_bh_new calls with qemu_bh_new_guarded
Date: Wed, 26 Apr 2023 12:19:47 -0400
Message-Id: <20230426161951.2948996-5-alxndr@bu.edu>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230426161951.2948996-1-alxndr@bu.edu>
References: <20230426161951.2948996-1-alxndr@bu.edu>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-CES-GSUITE_AUTH: bf3aNvsZpxl8

This protects devices from bh->mmio reentrancy issues: each bottom half
is now created against its device's MemReentrancyGuard, so a guest can
no longer re-enter a device's MMIO handlers through a scheduled bottom
half.

Thanks: Thomas Huth <thuth@redhat.com> for diagnosing the OS X test failure.
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Thomas Huth <thuth@redhat.com>
---
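The guarded-bottom-half idea can be sketched in isolation. This is a
minimal model, not QEMU's actual MemReentrancyGuard or
qemu_bh_new_guarded() implementation; all names here (Guard, BH,
bh_run) are hypothetical:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical minimal model of the re-entrancy guard.  The real
 * guard lives in the QEMU DeviceState and is checked by the bottom-
 * half and MMIO dispatch code; this sketch only shows the idea. */
typedef struct Guard {
    bool engaged_in_io;   /* set while the device is handling I/O */
} Guard;

typedef struct BH {
    void (*cb)(void *opaque);
    void *opaque;
    Guard *guard;         /* shared with the device's MMIO path */
    int deferred;         /* callbacks skipped due to re-entrancy */
} BH;

/* Run a bottom half unless its device is already engaged in I/O.
 * Refusing to nest is what breaks the bh -> mmio -> bh recursion
 * this series defends against. */
static void bh_run(BH *bh)
{
    if (bh->guard->engaged_in_io) {
        bh->deferred++;   /* real code would reschedule instead */
        return;
    }
    bh->guard->engaged_in_io = true;
    bh->cb(bh->opaque);
    bh->guard->engaged_in_io = false;
}
```

In the patches below, passing the device's guard to
qemu_bh_new_guarded()/aio_bh_new_guarded() is what associates each
bottom half with the per-device flag this sketch models.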
 hw/9pfs/xen-9p-backend.c        | 5 ++++-
 hw/block/dataplane/virtio-blk.c | 3 ++-
 hw/block/dataplane/xen-block.c  | 5 +++--
 hw/char/virtio-serial-bus.c     | 3 ++-
 hw/display/qxl.c                | 9 ++++++---
 hw/display/virtio-gpu.c         | 6 ++++--
 hw/ide/ahci.c                   | 3 ++-
 hw/ide/ahci_internal.h          | 1 +
 hw/ide/core.c                   | 4 +++-
 hw/misc/imx_rngc.c              | 6 ++++--
 hw/misc/macio/mac_dbdma.c       | 2 +-
 hw/net/virtio-net.c             | 3 ++-
 hw/nvme/ctrl.c                  | 6 ++++--
 hw/scsi/mptsas.c                | 3 ++-
 hw/scsi/scsi-bus.c              | 3 ++-
 hw/scsi/vmw_pvscsi.c            | 3 ++-
 hw/usb/dev-uas.c                | 3 ++-
 hw/usb/hcd-dwc2.c               | 3 ++-
 hw/usb/hcd-ehci.c               | 3 ++-
 hw/usb/hcd-uhci.c               | 2 +-
 hw/usb/host-libusb.c            | 6 ++++--
 hw/usb/redirect.c               | 6 ++++--
 hw/usb/xen-usb.c                | 3 ++-
 hw/virtio/virtio-balloon.c      | 5 +++--
 hw/virtio/virtio-crypto.c       | 3 ++-
 25 files changed, 66 insertions(+), 33 deletions(-)

diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
index 74f3a05f88..0e266c552b 100644
--- a/hw/9pfs/xen-9p-backend.c
+++ b/hw/9pfs/xen-9p-backend.c
@@ -61,6 +61,7 @@ typedef struct Xen9pfsDev {
 
     int num_rings;
     Xen9pfsRing *rings;
+    MemReentrancyGuard mem_reentrancy_guard;
 } Xen9pfsDev;
 
 static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev);
@@ -443,7 +444,9 @@ static int xen_9pfs_connect(struct XenLegacyDevice *xendev)
         xen_9pdev->rings[i].ring.out = xen_9pdev->rings[i].data +
                                        XEN_FLEX_RING_SIZE(ring_order);
 
-        xen_9pdev->rings[i].bh = qemu_bh_new(xen_9pfs_bh, &xen_9pdev->rings[i]);
+        xen_9pdev->rings[i].bh = qemu_bh_new_guarded(xen_9pfs_bh,
+                                                     &xen_9pdev->rings[i],
+                                                     &xen_9pdev->mem_reentrancy_guard);
         xen_9pdev->rings[i].out_cons = 0;
         xen_9pdev->rings[i].out_size = 0;
         xen_9pdev->rings[i].inprogress = false;
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index b28d81737e..a6202997ee 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -127,7 +127,8 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
     } else {
         s->ctx = qemu_get_aio_context();
     }
-    s->bh = aio_bh_new(s->ctx, notify_guest_bh, s);
+    s->bh = aio_bh_new_guarded(s->ctx, notify_guest_bh, s,
+                               &DEVICE(vdev)->mem_reentrancy_guard);
     s->batch_notify_vqs = bitmap_new(conf->num_queues);
 
     *dataplane = s;
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 734da42ea7..d8bc39d359 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -633,8 +633,9 @@ XenBlockDataPlane *xen_block_dataplane_create(XenDevice *xendev,
     } else {
         dataplane->ctx = qemu_get_aio_context();
     }
-    dataplane->bh = aio_bh_new(dataplane->ctx, xen_block_dataplane_bh,
-                               dataplane);
+    dataplane->bh = aio_bh_new_guarded(dataplane->ctx, xen_block_dataplane_bh,
+                                       dataplane,
+                                       &DEVICE(xendev)->mem_reentrancy_guard);
 
     return dataplane;
 }
diff --git a/hw/char/virtio-serial-bus.c b/hw/char/virtio-serial-bus.c
index 7d4601cb5d..dd619f0731 100644
--- a/hw/char/virtio-serial-bus.c
+++ b/hw/char/virtio-serial-bus.c
@@ -985,7 +985,8 @@ static void virtser_port_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    port->bh = qemu_bh_new(flush_queued_data_bh, port);
+    port->bh = qemu_bh_new_guarded(flush_queued_data_bh, port,
+                                   &dev->mem_reentrancy_guard);
     port->elem = NULL;
 }
 
diff --git a/hw/display/qxl.c b/hw/display/qxl.c
index 80ce1e9a93..f1c0eb7dfc 100644
--- a/hw/display/qxl.c
+++ b/hw/display/qxl.c
@@ -2201,11 +2201,14 @@ static void qxl_realize_common(PCIQXLDevice *qxl, Error **errp)
 
     qemu_add_vm_change_state_handler(qxl_vm_change_state_handler, qxl);
 
-    qxl->update_irq = qemu_bh_new(qxl_update_irq_bh, qxl);
+    qxl->update_irq = qemu_bh_new_guarded(qxl_update_irq_bh, qxl,
+                                          &DEVICE(qxl)->mem_reentrancy_guard);
     qxl_reset_state(qxl);
 
-    qxl->update_area_bh = qemu_bh_new(qxl_render_update_area_bh, qxl);
-    qxl->ssd.cursor_bh = qemu_bh_new(qemu_spice_cursor_refresh_bh, &qxl->ssd);
+    qxl->update_area_bh = qemu_bh_new_guarded(qxl_render_update_area_bh, qxl,
+                                              &DEVICE(qxl)->mem_reentrancy_guard);
+    qxl->ssd.cursor_bh = qemu_bh_new_guarded(qemu_spice_cursor_refresh_bh, &qxl->ssd,
+                                             &DEVICE(qxl)->mem_reentrancy_guard);
 }
 
 static void qxl_realize_primary(PCIDevice *dev, Error **errp)
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 5e15c79b94..66ac9b6cc5 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1339,8 +1339,10 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
 
     g->ctrl_vq = virtio_get_queue(vdev, 0);
     g->cursor_vq = virtio_get_queue(vdev, 1);
-    g->ctrl_bh = qemu_bh_new(virtio_gpu_ctrl_bh, g);
-    g->cursor_bh = qemu_bh_new(virtio_gpu_cursor_bh, g);
+    g->ctrl_bh = qemu_bh_new_guarded(virtio_gpu_ctrl_bh, g,
+                                     &qdev->mem_reentrancy_guard);
+    g->cursor_bh = qemu_bh_new_guarded(virtio_gpu_cursor_bh, g,
+                                       &qdev->mem_reentrancy_guard);
     QTAILQ_INIT(&g->reslist);
     QTAILQ_INIT(&g->cmdq);
     QTAILQ_INIT(&g->fenceq);
diff --git a/hw/ide/ahci.c b/hw/ide/ahci.c
index 55902e1df7..4e76d6b191 100644
--- a/hw/ide/ahci.c
+++ b/hw/ide/ahci.c
@@ -1509,7 +1509,8 @@ static void ahci_cmd_done(const IDEDMA *dma)
     ahci_write_fis_d2h(ad);
 
     if (ad->port_regs.cmd_issue && !ad->check_bh) {
-        ad->check_bh = qemu_bh_new(ahci_check_cmd_bh, ad);
+        ad->check_bh = qemu_bh_new_guarded(ahci_check_cmd_bh, ad,
+                                           &ad->mem_reentrancy_guard);
         qemu_bh_schedule(ad->check_bh);
     }
 }
diff --git a/hw/ide/ahci_internal.h b/hw/ide/ahci_internal.h
index 303fcd7235..2480455372 100644
--- a/hw/ide/ahci_internal.h
+++ b/hw/ide/ahci_internal.h
@@ -321,6 +321,7 @@ struct AHCIDevice {
     bool init_d2h_sent;
     AHCICmdHdr *cur_cmd;
     NCQTransferState ncq_tfs[AHCI_MAX_CMDS];
+    MemReentrancyGuard mem_reentrancy_guard;
 };
 
 struct AHCIPCIState {
diff --git a/hw/ide/core.c b/hw/ide/core.c
index 45d14a25e9..de48ff9f86 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -513,6 +513,7 @@ BlockAIOCB *ide_issue_trim(
         BlockCompletionFunc *cb, void *cb_opaque, void *opaque)
 {
     IDEState *s = opaque;
+    IDEDevice *dev = s->unit ? s->bus->slave : s->bus->master;
     TrimAIOCB *iocb;
 
     /* Paired with a decrement in ide_trim_bh_cb() */
@@ -520,7 +521,8 @@ BlockAIOCB *ide_issue_trim(
 
     iocb = blk_aio_get(&trim_aiocb_info, s->blk, cb, cb_opaque);
     iocb->s = s;
-    iocb->bh = qemu_bh_new(ide_trim_bh_cb, iocb);
+    iocb->bh = qemu_bh_new_guarded(ide_trim_bh_cb, iocb,
+                                   &DEVICE(dev)->mem_reentrancy_guard);
     iocb->ret = 0;
     iocb->qiov = qiov;
     iocb->i = -1;
diff --git a/hw/misc/imx_rngc.c b/hw/misc/imx_rngc.c
index 632c03779c..082c6980ad 100644
--- a/hw/misc/imx_rngc.c
+++ b/hw/misc/imx_rngc.c
@@ -228,8 +228,10 @@ static void imx_rngc_realize(DeviceState *dev, Error **errp)
     sysbus_init_mmio(sbd, &s->iomem);
 
     sysbus_init_irq(sbd, &s->irq);
-    s->self_test_bh = qemu_bh_new(imx_rngc_self_test, s);
-    s->seed_bh = qemu_bh_new(imx_rngc_seed, s);
+    s->self_test_bh = qemu_bh_new_guarded(imx_rngc_self_test, s,
+                                          &dev->mem_reentrancy_guard);
+    s->seed_bh = qemu_bh_new_guarded(imx_rngc_seed, s,
+                                     &dev->mem_reentrancy_guard);
 }
 
 static void imx_rngc_reset(DeviceState *dev)
diff --git a/hw/misc/macio/mac_dbdma.c b/hw/misc/macio/mac_dbdma.c
index 43bb1f56ba..80a789f32b 100644
--- a/hw/misc/macio/mac_dbdma.c
+++ b/hw/misc/macio/mac_dbdma.c
@@ -914,7 +914,7 @@ static void mac_dbdma_realize(DeviceState *dev, Error **errp)
 {
     DBDMAState *s = MAC_DBDMA(dev);
 
-    s->bh = qemu_bh_new(DBDMA_run_bh, s);
+    s->bh = qemu_bh_new_guarded(DBDMA_run_bh, s, &dev->mem_reentrancy_guard);
 }
 
 static void mac_dbdma_class_init(ObjectClass *oc, void *data)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 53e1c32643..447f669921 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -2917,7 +2917,8 @@ static void virtio_net_add_queue(VirtIONet *n, int index)
         n->vqs[index].tx_vq =
             virtio_add_queue(vdev, n->net_conf.tx_queue_size,
                              virtio_net_handle_tx_bh);
-        n->vqs[index].tx_bh = qemu_bh_new(virtio_net_tx_bh, &n->vqs[index]);
+        n->vqs[index].tx_bh = qemu_bh_new_guarded(virtio_net_tx_bh, &n->vqs[index],
+                                                  &DEVICE(vdev)->mem_reentrancy_guard);
     }
 
     n->vqs[index].tx_waiting = 0;
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index f59dfe1cbe..fd917fcda1 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -4607,7 +4607,8 @@ static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr,
         QTAILQ_INSERT_TAIL(&(sq->req_list), &sq->io_req[i], entry);
     }
 
-    sq->bh = qemu_bh_new(nvme_process_sq, sq);
+    sq->bh = qemu_bh_new_guarded(nvme_process_sq, sq,
+                                 &DEVICE(sq->ctrl)->mem_reentrancy_guard);
 
     if (n->dbbuf_enabled) {
         sq->db_addr = n->dbbuf_dbs + (sqid << 3);
@@ -5253,7 +5254,8 @@ static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr,
         }
     }
     n->cq[cqid] = cq;
-    cq->bh = qemu_bh_new(nvme_post_cqes, cq);
+    cq->bh = qemu_bh_new_guarded(nvme_post_cqes, cq,
+                                 &DEVICE(cq->ctrl)->mem_reentrancy_guard);
 }
 
 static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req)
diff --git a/hw/scsi/mptsas.c b/hw/scsi/mptsas.c
index c485da792c..3de288b454 100644
--- a/hw/scsi/mptsas.c
+++ b/hw/scsi/mptsas.c
@@ -1322,7 +1322,8 @@ static void mptsas_scsi_realize(PCIDevice *dev, Error **errp)
     }
     s->max_devices = MPTSAS_NUM_PORTS;
 
-    s->request_bh = qemu_bh_new(mptsas_fetch_requests, s);
+    s->request_bh = qemu_bh_new_guarded(mptsas_fetch_requests, s,
+                                        &DEVICE(dev)->mem_reentrancy_guard);
 
     scsi_bus_init(&s->bus, sizeof(s->bus), &dev->qdev, &mptsas_scsi_info);
 }
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index c97176110c..3c20b47ad0 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -193,7 +193,8 @@ static void scsi_dma_restart_cb(void *opaque, bool running, RunState state)
         AioContext *ctx = blk_get_aio_context(s->conf.blk);
         /* The reference is dropped in scsi_dma_restart_bh.*/
         object_ref(OBJECT(s));
-        s->bh = aio_bh_new(ctx, scsi_dma_restart_bh, s);
+        s->bh = aio_bh_new_guarded(ctx, scsi_dma_restart_bh, s,
+                                   &DEVICE(s)->mem_reentrancy_guard);
         qemu_bh_schedule(s->bh);
     }
 }
diff --git a/hw/scsi/vmw_pvscsi.c b/hw/scsi/vmw_pvscsi.c
index fa76696855..4de34536e9 100644
--- a/hw/scsi/vmw_pvscsi.c
+++ b/hw/scsi/vmw_pvscsi.c
@@ -1184,7 +1184,8 @@ pvscsi_realizefn(PCIDevice *pci_dev, Error **errp)
         pcie_endpoint_cap_init(pci_dev, PVSCSI_EXP_EP_OFFSET);
     }
 
-    s->completion_worker = qemu_bh_new(pvscsi_process_completion_queue, s);
+    s->completion_worker = qemu_bh_new_guarded(pvscsi_process_completion_queue, s,
+                                               &DEVICE(pci_dev)->mem_reentrancy_guard);
 
     scsi_bus_init(&s->bus, sizeof(s->bus), DEVICE(pci_dev), &pvscsi_scsi_info);
     /* override default SCSI bus hotplug-handler, with pvscsi's one */
diff --git a/hw/usb/dev-uas.c b/hw/usb/dev-uas.c
index 88f99c05d5..f013ded91e 100644
--- a/hw/usb/dev-uas.c
+++ b/hw/usb/dev-uas.c
@@ -937,7 +937,8 @@ static void usb_uas_realize(USBDevice *dev, Error **errp)
 
     QTAILQ_INIT(&uas->results);
     QTAILQ_INIT(&uas->requests);
-    uas->status_bh = qemu_bh_new(usb_uas_send_status_bh, uas);
+    uas->status_bh = qemu_bh_new_guarded(usb_uas_send_status_bh, uas,
+                                         &d->mem_reentrancy_guard);
 
     dev->flags |= (1 << USB_DEV_FLAG_IS_SCSI_STORAGE);
     scsi_bus_init(&uas->bus, sizeof(uas->bus), DEVICE(dev), &usb_uas_scsi_info);
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
index 8755e9cbb0..a0c4e782b2 100644
--- a/hw/usb/hcd-dwc2.c
+++ b/hw/usb/hcd-dwc2.c
@@ -1364,7 +1364,8 @@ static void dwc2_realize(DeviceState *dev, Error **errp)
     s->fi = USB_FRMINTVL - 1;
     s->eof_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_frame_boundary, s);
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_work_timer, s);
-    s->async_bh = qemu_bh_new(dwc2_work_bh, s);
+    s->async_bh = qemu_bh_new_guarded(dwc2_work_bh, s,
+                                      &dev->mem_reentrancy_guard);
 
     sysbus_init_irq(sbd, &s->irq);
 }
diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
index d4da8dcb8d..c930c60921 100644
--- a/hw/usb/hcd-ehci.c
+++ b/hw/usb/hcd-ehci.c
@@ -2533,7 +2533,8 @@ void usb_ehci_realize(EHCIState *s, DeviceState *dev, Error **errp)
     }
 
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, ehci_work_timer, s);
-    s->async_bh = qemu_bh_new(ehci_work_bh, s);
+    s->async_bh = qemu_bh_new_guarded(ehci_work_bh, s,
+                                      &dev->mem_reentrancy_guard);
     s->device = dev;
 
     s->vmstate = qemu_add_vm_change_state_handler(usb_ehci_vm_state_change, s);
diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
index 8ac1175ad2..77baaa7a6b 100644
--- a/hw/usb/hcd-uhci.c
+++ b/hw/usb/hcd-uhci.c
@@ -1190,7 +1190,7 @@ void usb_uhci_common_realize(PCIDevice *dev, Error **errp)
                               USB_SPEED_MASK_LOW | USB_SPEED_MASK_FULL);
         }
     }
-    s->bh = qemu_bh_new(uhci_bh, s);
+    s->bh = qemu_bh_new_guarded(uhci_bh, s, &DEVICE(dev)->mem_reentrancy_guard);
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, uhci_frame_timer, s);
     s->num_ports_vmstate = NB_PORTS;
     QTAILQ_INIT(&s->queues);
diff --git a/hw/usb/host-libusb.c b/hw/usb/host-libusb.c
index 176868d345..f500db85ab 100644
--- a/hw/usb/host-libusb.c
+++ b/hw/usb/host-libusb.c
@@ -1141,7 +1141,8 @@ static void usb_host_nodev_bh(void *opaque)
 static void usb_host_nodev(USBHostDevice *s)
 {
     if (!s->bh_nodev) {
-        s->bh_nodev = qemu_bh_new(usb_host_nodev_bh, s);
+        s->bh_nodev = qemu_bh_new_guarded(usb_host_nodev_bh, s,
+                                          &DEVICE(s)->mem_reentrancy_guard);
     }
     qemu_bh_schedule(s->bh_nodev);
 }
@@ -1739,7 +1740,8 @@ static int usb_host_post_load(void *opaque, int version_id)
     USBHostDevice *dev = opaque;
 
     if (!dev->bh_postld) {
-        dev->bh_postld = qemu_bh_new(usb_host_post_load_bh, dev);
+        dev->bh_postld = qemu_bh_new_guarded(usb_host_post_load_bh, dev,
+                                             &DEVICE(dev)->mem_reentrancy_guard);
     }
     qemu_bh_schedule(dev->bh_postld);
     dev->bh_postld_pending = true;
diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
index fd7df599bc..39fbaaab16 100644
--- a/hw/usb/redirect.c
+++ b/hw/usb/redirect.c
@@ -1441,8 +1441,10 @@ static void usbredir_realize(USBDevice *udev, Error **errp)
         }
     }
 
-    dev->chardev_close_bh = qemu_bh_new(usbredir_chardev_close_bh, dev);
-    dev->device_reject_bh = qemu_bh_new(usbredir_device_reject_bh, dev);
+    dev->chardev_close_bh = qemu_bh_new_guarded(usbredir_chardev_close_bh, dev,
+                                                &DEVICE(dev)->mem_reentrancy_guard);
+    dev->device_reject_bh = qemu_bh_new_guarded(usbredir_device_reject_bh, dev,
+                                                &DEVICE(dev)->mem_reentrancy_guard);
     dev->attach_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL, usbredir_do_attach, dev);
 
     packet_id_queue_init(&dev->cancelled, dev, "cancelled");
diff --git a/hw/usb/xen-usb.c b/hw/usb/xen-usb.c
index 66cb3f7c24..38ee660a30 100644
--- a/hw/usb/xen-usb.c
+++ b/hw/usb/xen-usb.c
@@ -1032,7 +1032,8 @@ static void usbback_alloc(struct XenLegacyDevice *xendev)
 
     QTAILQ_INIT(&usbif->req_free_q);
     QSIMPLEQ_INIT(&usbif->hotplug_q);
-    usbif->bh = qemu_bh_new(usbback_bh, usbif);
+    usbif->bh = qemu_bh_new_guarded(usbback_bh, usbif,
+                                    &DEVICE(xendev)->mem_reentrancy_guard);
 }
 
 static int usbback_free(struct XenLegacyDevice *xendev)
diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index fd06fcfb3f..d004cf29d2 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -886,8 +886,9 @@ static void virtio_balloon_device_realize(DeviceState *dev, Error **errp)
         precopy_add_notifier(&s->free_page_hint_notify);
 
         object_ref(OBJECT(s->iothread));
-        s->free_page_bh = aio_bh_new(iothread_get_aio_context(s->iothread),
-                                     virtio_ballloon_get_free_page_hints, s);
+        s->free_page_bh = aio_bh_new_guarded(iothread_get_aio_context(s->iothread),
+                                             virtio_ballloon_get_free_page_hints, s,
+                                             &dev->mem_reentrancy_guard);
     }
 
     if (virtio_has_feature(s->host_features, VIRTIO_BALLOON_F_REPORTING)) {
diff --git a/hw/virtio/virtio-crypto.c b/hw/virtio/virtio-crypto.c
index 802e1b9659..2fe804510f 100644
--- a/hw/virtio/virtio-crypto.c
+++ b/hw/virtio/virtio-crypto.c
@@ -1074,7 +1074,8 @@ static void virtio_crypto_device_realize(DeviceState *dev, Error **errp)
         vcrypto->vqs[i].dataq =
                  virtio_add_queue(vdev, 1024, virtio_crypto_handle_dataq_bh);
         vcrypto->vqs[i].dataq_bh =
-                 qemu_bh_new(virtio_crypto_dataq_bh, &vcrypto->vqs[i]);
+                 qemu_bh_new_guarded(virtio_crypto_dataq_bh, &vcrypto->vqs[i],
+                                     &dev->mem_reentrancy_guard);
         vcrypto->vqs[i].vcrypto = vcrypto;
     }
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 16:40:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 16:40:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526789.818767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1priC3-0000cZ-NK; Wed, 26 Apr 2023 16:40:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526789.818767; Wed, 26 Apr 2023 16:40:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1priC3-0000cS-KT; Wed, 26 Apr 2023 16:40:35 +0000
Received: by outflank-mailman (input) for mailman id 526789;
 Wed, 26 Apr 2023 16:40:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b4Vc=AR=citrix.com=prvs=473c62be2=ross.lagerwall@srs-se1.protection.inumbo.net>)
 id 1priC2-0000cM-0w
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 16:40:34 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0e918790-e451-11ed-b224-6b7b168915f2;
 Wed, 26 Apr 2023 18:40:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e918790-e451-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682527232;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=xhRc0AT4ugGzA8EYup0rcdpey2Q9q4IGlb8wJi+JpaE=;
  b=TjYxTxOdlkxB8YmzrhS/LoTeeJTJvuBW0dNH71wkFL1HCzRnIJ6Jh+n4
   phVSmNLe3GQ7V6jApP7I3HnPO1jPk8pWRh7BFPFAC47/3MvhclQCX4tYd
   Wa3VhFpckVOCJssaI4fFnDHmQdq66n2lRRvHeuVoO93JQunyFGiAZWvIx
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106852690
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:EYMLyaoEVUAP5LGv6qU1xyonDJNeBmIoZRIvgKrLsJaIsI4StFCzt
 garIBmFPPuJZWP1fYx+aIi/pE8CuZDUyoAwSldkryw3QS4T85uZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WJwUmAWP6gR5weCzSdNVvrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXAC0SMRm/ovCp+r74RuBqpP4bC9OxBapK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVCr0mO464+7GXJ0wV11JDmMcbPe8zMTsJQ9qqdj
 jueoTSkWE9LbLRzzxKhtVW3o8X+sB//Sd4QSa+Hy9FK3Rqckzl75Bo+CgLg/KjRZlSFc8JSL
 QkY9zQjqYA29Ve3VZ/tUhugunmGsxUAHd1KHIUS6g6XzbHPyx2EHWVCRTlEAPQ9r9M/TzEu0
 l6PnvvqCCZpvbnTTmiSnp+TqT6xIiETIXU1eT4fTQAF7t/gp6k+lhvKCN1kFcadh83/HjzYw
 D2QqiU6wbkQ5fPnzI3iowqB2Wj14MGUEEhsvF6/sn+ZAh1ReZH6brCN+0fh4bVMJYC7dGGmp
 iI9sp3LhAwRNq1hhBBhUc1UQuHzvKfYaGCM6bJ8N8J/rmrwohZPaagVuWgjfxkxb67obBezO
 CfuVRVtCIi/1ZdARYt+eMqPBssj1sAM/vy1B6mPPrKijnWcHTJrHR2ChmbKhQgBaGB2zckC1
 W6zKK5A90oyB6V91yaRTOwAy7ItzS1W7TqNFcqnk0v5juTPNCX9pVI53LymN7hR0U95iF+Nr
 4Y32zWikH2zr9ESkgGIqNVOfDjm3FAwBIzsqtw/S9Nv1jFOQTl7Y9eImONJRmCQt/gN/gs+1
 i3nCxAwJZuWrSGvFDhmnVg5MuKzDM8u9yxmVcHuVH7xs0UejU+UxP93X/MKkXMPrrELISJcJ
 xXdR/i9Pw==
IronPort-HdrOrdr: A9a23:15M3HaBMqRWZy0PlHemB55DYdb4zR+YMi2TDtnoBMiC9F/bzqy
 nApoV96faZslYssTQb6LO90cq7MBbhHPxOkO8s1N6ZNWGM2VdAbrsSj7cKqAeQYhEWmNQtrZ
 uIsJITNDQzNzVHZArBjzWQIpIA+p2p+LHtr+fSpk0dKT2CopsP0ztE
X-Talos-CUID: 9a23:6JP9AWFlsAhgqPrrqmJ66xEdJuU3bUfRj3vZJW2CMUFzEoS8HAo=
X-Talos-MUID: =?us-ascii?q?9a23=3Ac4IR/A+JddBcYR6XmB0+dLuQf+MvxL+LInEgrZE?=
 =?us-ascii?q?X58vZJxNNFBfAyzviFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,228,1677560400"; 
   d="scan'208";a="106852690"
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <roger.pau@citrix.com>, <jgross@suse.com>, <sstabellini@kernel.org>,
	<oleksandr_tyshchenko@epam.com>, <axboe@kernel.dk>, Ross Lagerwall
	<ross.lagerwall@citrix.com>
Subject: [PATCH] xen/blkfront: Only check REQ_FUA for writes
Date: Wed, 26 Apr 2023 17:40:05 +0100
Message-ID: <20230426164005.2213139-1-ross.lagerwall@citrix.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The existing code silently converts read operations with the
REQ_FUA bit set into write-barrier operations. This results in data
loss as the backend scribbles zeroes over the data instead of returning
it.

While the REQ_FUA bit doesn't make sense on a read operation, at least
one well-known out-of-tree kernel module does set it. Since honouring
it there results in data loss, be safe and only look at REQ_FUA for
writes.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
---
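The fixed predicate can be modelled on its own. The enum values and
names below are illustrative stand-ins only; the real REQ_OP_* and
REQ_FUA definitions live in the kernel's blk_types.h:

```c
/* Illustrative stand-ins for the kernel's request-op and flag
 * values; not the real definitions. */
enum req_op_model { OP_READ, OP_WRITE, OP_FLUSH };
#define FUA_MODEL 0x1u

/* After the fix: FUA is only honoured on writes, so a stray FUA bit
 * on a read can no longer convert it into a write barrier that
 * scribbles over the data it was supposed to return. */
static int wants_barrier(enum req_op_model op, unsigned int flags)
{
    return op == OP_FLUSH ||
           (op == OP_WRITE && (flags & FUA_MODEL));
}
```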
 drivers/block/xen-blkfront.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 23ed258b57f0..c1890c8a9f6e 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -780,7 +780,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		ring_req->u.rw.handle = info->handle;
 		ring_req->operation = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
-		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
+		if (req_op(req) == REQ_OP_FLUSH ||
+		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
 			/*
 			 * Ideally we can do an unordered flush-to-disk.
 			 * In case the backend onlysupports barriers, use that.
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed Apr 26 17:19:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 17:19:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526794.818777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prinB-0004CG-Mj; Wed, 26 Apr 2023 17:18:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526794.818777; Wed, 26 Apr 2023 17:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prinB-0004C9-I7; Wed, 26 Apr 2023 17:18:57 +0000
Received: by outflank-mailman (input) for mailman id 526794;
 Wed, 26 Apr 2023 17:18:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prinB-0004Bz-2w; Wed, 26 Apr 2023 17:18:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prinA-0002Is-So; Wed, 26 Apr 2023 17:18:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prinA-0002wf-CE; Wed, 26 Apr 2023 17:18:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prinA-0007UL-Bj; Wed, 26 Apr 2023 17:18:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sO4+oWVHkLK378sGccd9Kh72L2/iCoorfmIx3eNizY8=; b=Dm7uQkiary6TRYV53f6EzW1xJ8
	ImdpQoqoDcWSxjd+HXDjlXhjyVUtozoHiceT22SNXhewvBpu8SkC4IGMP+LO0LEVb8qGI/U4IBBkq
	/NUd97NoYFKNwsWZyod5Sr2q+1ohG6VeRf0W6bBC+9tzXBEKKy0SzEy05D7CO1KQDxZ0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180429-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180429: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ede0bd1496405f72147308b9570efba0234349b2
X-Osstest-Versions-That:
    ovmf=5a349b96b171e85744024904b0c8453d06d2fb45
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 17:18:56 +0000

flight 180429 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180429/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ede0bd1496405f72147308b9570efba0234349b2
baseline version:
 ovmf                 5a349b96b171e85744024904b0c8453d06d2fb45

Last test of basis   180423  2023-04-26 03:40:45 Z    0 days
Testing same since   180429  2023-04-26 09:41:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dun Tan <dun.tan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5a349b96b1..ede0bd1496  ede0bd1496405f72147308b9570efba0234349b2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 17:28:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 17:28:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526800.818787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1privn-0005hj-I8; Wed, 26 Apr 2023 17:27:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526800.818787; Wed, 26 Apr 2023 17:27:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1privn-0005hc-EF; Wed, 26 Apr 2023 17:27:51 +0000
Received: by outflank-mailman (input) for mailman id 526800;
 Wed, 26 Apr 2023 17:27:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1privm-0005hS-Or; Wed, 26 Apr 2023 17:27:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1privm-0002T2-KR; Wed, 26 Apr 2023 17:27:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1privm-0003LX-76; Wed, 26 Apr 2023 17:27:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1privm-0007zR-4O; Wed, 26 Apr 2023 17:27:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PZAN+ic4wrG8URGdC3CYv+walorZo7bHsmstAGQhzac=; b=QWy52oIoyDPXzGSrshFsKd6Kk8
	qHAbKbZDN0+BjSot6MVZ5diaBlRsUQt1RILUXB7yLJSYiqg0L+t2KuW9C5sB6Dvjs24AP8xGv2A5n
	WY1C45CNk3byfu7MY7LEx9egpLSCQiSY7bSgjNTOuRfKl4ckwEoq9poaVUX+p00VCBi0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180422-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 180422: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:guest-start/debian.repeat:fail:heisenbug
    xen-4.17-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8b5be1fe938f52b5d3682dee7702fd51c8cfb61b
X-Osstest-Versions-That:
    xen=208dd44299193347d4ececdc1c8f864f6d9a0b9b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 17:27:50 +0000

flight 180422 xen-4.17-testing real [real]
flight 180434 xen-4.17-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180422/
http://logs.test-lab.xenproject.org/osstest/logs/180434/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw 17 guest-start/debian.repeat fail pass in 180434-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180406
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180406
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180406
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180406
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180406
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180406
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180406
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180406
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180406
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180406
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180406
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180406
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8b5be1fe938f52b5d3682dee7702fd51c8cfb61b
baseline version:
 xen                  208dd44299193347d4ececdc1c8f864f6d9a0b9b

Last test of basis   180406  2023-04-25 07:38:20 Z    1 days
Testing same since   180422  2023-04-26 03:03:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   208dd44299..8b5be1fe93  8b5be1fe938f52b5d3682dee7702fd51c8cfb61b -> stable-4.17


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 17:30:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 17:30:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526808.818797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1priyS-0007C2-4v; Wed, 26 Apr 2023 17:30:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526808.818797; Wed, 26 Apr 2023 17:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1priyS-0007Bv-0Y; Wed, 26 Apr 2023 17:30:36 +0000
Received: by outflank-mailman (input) for mailman id 526808;
 Wed, 26 Apr 2023 17:30:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1priyR-0007Bl-IG; Wed, 26 Apr 2023 17:30:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1priyR-0002XC-Gf; Wed, 26 Apr 2023 17:30:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1priyQ-0003Z9-Va; Wed, 26 Apr 2023 17:30:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1priyQ-0001MB-Sx; Wed, 26 Apr 2023 17:30:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SJErwP+50TchgSVeeS2voaHbiKYRcz7bn+lg/9tfXL8=; b=V02Ee2n3NCYNKp0bsDek5twr/4
	DNhwrq0bCXukD+p7u88wz8lHB2djS1JABUV2Oq/HMErSLtjLhUR2D3zCxlylMB4ll4W2MauWdQ8hR
	gGvEO9FV0lokfBdsAv+ZnuFmjNRoHSnL/b04MTuLM6Y8wdDNSD7IzdX5L7nrS4CHILmE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180424-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180424: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=74b86146ef8a4ae484d74f104b465bfc8cc73512
X-Osstest-Versions-That:
    libvirt=c4bc4d3b82fbe22e03c986ca896090f481df5c10
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 17:30:34 +0000

flight 180424 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180424/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180403
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180403
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180403
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              74b86146ef8a4ae484d74f104b465bfc8cc73512
baseline version:
 libvirt              c4bc4d3b82fbe22e03c986ca896090f481df5c10

Last test of basis   180403  2023-04-25 04:20:19 Z    1 days
Testing same since   180424  2023-04-26 04:18:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Borecki <pavel.borecki@gmail.com>
  Weblate <noreply@weblate.org>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   c4bc4d3b82..74b86146ef  74b86146ef8a4ae484d74f104b465bfc8cc73512 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 19:39:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 19:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526818.818811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prkye-0002Yq-AT; Wed, 26 Apr 2023 19:38:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526818.818811; Wed, 26 Apr 2023 19:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prkye-0002Yh-61; Wed, 26 Apr 2023 19:38:56 +0000
Received: by outflank-mailman (input) for mailman id 526818;
 Wed, 26 Apr 2023 19:38:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prkyd-0002YY-UG; Wed, 26 Apr 2023 19:38:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prkyd-0005k2-M0; Wed, 26 Apr 2023 19:38:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prkyd-000223-DB; Wed, 26 Apr 2023 19:38:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prkyd-0006cc-Cg; Wed, 26 Apr 2023 19:38:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eoBW8CshTLQgsxShCNtR4mIYSK2nItxU6cfogFE4DqQ=; b=EWjBGBFdD6kJcY8QWHPjlD/mM8
	BsXIp1rmLZfJxI8L4ejshUmO5rLc2jMtID85QxX9fRfO5DAuuU30oKnIzBEd+0U86OTpyHVdaBC6S
	CaMjjKg4upb+ABKF+ooQtm/CZMZm0kBht5Ed6Y37P4IFyusvGYUZnGHNGnfvpP10aeio=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180435-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180435: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dde20f7dc182fdfeeb6c55648979326bb982ca8c
X-Osstest-Versions-That:
    xen=18a36b4a9b088875486cfe33a2d4a8ae7eb4ab47
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 19:38:55 +0000

flight 180435 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180435/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dde20f7dc182fdfeeb6c55648979326bb982ca8c
baseline version:
 xen                  18a36b4a9b088875486cfe33a2d4a8ae7eb4ab47

Last test of basis   180420  2023-04-25 23:02:14 Z    0 days
Testing same since   180435  2023-04-26 17:01:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>
  Olaf Hering <olaf@aepfle.de>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   18a36b4a9b..dde20f7dc1  dde20f7dc182fdfeeb6c55648979326bb982ca8c -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 20:27:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 20:27:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526825.818824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prljT-00082c-VS; Wed, 26 Apr 2023 20:27:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526825.818824; Wed, 26 Apr 2023 20:27:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prljT-00082V-SW; Wed, 26 Apr 2023 20:27:19 +0000
Received: by outflank-mailman (input) for mailman id 526825;
 Wed, 26 Apr 2023 20:27:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+pP=AR=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1prljS-00082M-6M
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 20:27:18 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bc040bdb-e470-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 22:27:16 +0200 (CEST)
Received: by mail-ed1-x52a.google.com with SMTP id
 4fb4d7f45d1cf-50506111a6eso13971734a12.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 13:27:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc040bdb-e470-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682540835; x=1685132835;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=VWpNif3FNqe3UmxNEOuJxk87Z9LX07seP03fmg8sxU8=;
        b=SL3+3BvftlpEcs+uah2jbhQsbbpFGLKnjc1hiD3rZOdICfoNYjHwmn25gJNZLXUoLg
         tC65f8kgWU4DmnTpak0/LRliZc8w633uM9qr0HeWu019pn+hySdQA7j9K7m4VHR17kLS
         F2mRYaykFLnle/Jp9R1i7sEG7wgdbahj6MzInEqZ2U+VLMLHVL3oGPwd0t70ZWWBmj2+
         fEaIs6A6fNADIIkHQJ3ZoG6iLxxM7gyGMFrsjmjD7y6WUObOJ5sf5xavwRKGphofmP+G
         gjuC3D9W+cGkLAQPD+ZoIBVvbi0Ir4MMTsQZ7gSqHcLaWri5NAn8kDgw6b8VPHKmW069
         1XPw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682540835; x=1685132835;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=VWpNif3FNqe3UmxNEOuJxk87Z9LX07seP03fmg8sxU8=;
        b=QgHaoE8ubC9nP8NOIBQPjTZ6UlP5fY004cGyX2Z1SmJIK456u9EtCtRfVj90nG25kN
         ydlmhB1NutIJJgPOYGRXcmeP2UID4Ws1196/B2a/+KCVsKdBgMEGx3AbN+Zqe9yTS97X
         L6trRpgaAXn/0mN36iv+Yqa+Qi5Ef8S9KbwQHIQ6Y3SSKlomfrLpuzGdZDJCcnxwDJ9s
         WeA2cC8pAKbTzaNRP6maszyxi0WRigLHL3CMZq9E6/6y/gx03/kEiaP6A8GDy2Dd1Ikp
         CpFr5dRaHcTmxJ5yvTDDxfEEcUVdGIytYX7uq6Mbl6z5JYnbet7fhEMbopirQHEX6lrt
         KxoA==
X-Gm-Message-State: AAQBX9dXFNyGKdkGyh5fTijJEv3BamsJINp9k3eZdiTImrvTWn4SaJPt
	zPQ7on6dmxTcAduGsNIYKUT9051dphrEQp3kCAI=
X-Google-Smtp-Source: AKy350b7cqRuRX9dpLtZ3KTD9dAUxfUmI87Se/I4Lel61DAiFcfOecdOQjVu5kkOr9suKKq6rMVVvU64COBLhef+occ=
X-Received: by 2002:aa7:d88c:0:b0:506:be07:3473 with SMTP id
 u12-20020aa7d88c000000b00506be073473mr20104843edq.4.1682540835283; Wed, 26
 Apr 2023 13:27:15 -0700 (PDT)
MIME-Version: 1.0
References: <20230425174733.795961-1-jennifer.herbert@citrix.com> <20230425174733.795961-2-jennifer.herbert@citrix.com>
In-Reply-To: <20230425174733.795961-2-jennifer.herbert@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 26 Apr 2023 16:27:03 -0400
Message-ID: <CAKf6xpsbaZMMFCW3Uw0XZ2gm185iwwtT2H+RcAReFrze9UWdAw@mail.gmail.com>
Subject: Re: [PATCH v3 1/2] acpi: Make TPM version configurable.
To: Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: jennifer.herbert@citrx.com, Xen-devel <xen-devel@lists.xenproject.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi, Jennifer,

On Tue, Apr 25, 2023 at 1:48 PM Jennifer Herbert
<jennifer.herbert@citrix.com> wrote:
>
> This patch makes the TPM version, for which the ACPI library probes, configurable.
> If acpi_config.tpm_version is set to 1, it indicates that 1.2 (TCPA) should be probed.
> I have also added to hvmloader an option to allow setting this new config, which can
> be triggered by setting the platform/tpm_version xenstore key.
>
> Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
...
> --- a/tools/libacpi/build.c
> +++ b/tools/libacpi/build.c
> @@ -409,38 +409,47 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
...
> +        switch ( config->tpm_version )
>          {
> -            tcpa->lasa = ctxt->mem_ops.v2p(ctxt, lasa);
> -            tcpa->laml = ACPI_2_0_TCPA_LAML_SIZE;
> -            memset(lasa, 0, tcpa->laml);
> -            set_checksum(tcpa,
> -                         offsetof(struct acpi_header, checksum),
> -                         tcpa->header.length);
> +        case 0: /* Assume legacy code wanted tpm 1.2 */

This shouldn't be reached, since tpm_version == 0 won't have
ACPI_HAS_TPM set.  Still, do you want to make it a break or drop the
case to avoid falling through to the TPM 1.2 code?
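For illustration, the dispatch shape being suggested can be sketched as follows. All names here are hypothetical stand-ins, not the actual tools/libacpi/build.c code:

```c
/* Sketch of an explicit tpm_version dispatch (hypothetical names). */
enum tpm_table_kind { TPM_TABLE_NONE, TPM_TABLE_TCPA, TPM_TABLE_TPM2 };

/* tpm_version == 0 means "no TPM": ACPI_HAS_TPM is never set for it, so
 * this case should be unreachable here.  Returning NONE (or breaking)
 * makes that explicit instead of silently falling through to the 1.2
 * (TCPA) path. */
static enum tpm_table_kind tpm_table_kind(unsigned int tpm_version)
{
    switch ( tpm_version )
    {
    case 1:
        return TPM_TABLE_TCPA;  /* TPM 1.2: build a TCPA table */
    case 2:
        return TPM_TABLE_TPM2;  /* TPM 2.0: build a TPM2 table */
    case 0:                     /* not reached when ACPI_HAS_TPM is set */
    default:
        return TPM_TABLE_NONE;
    }
}
```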

Looks good though.

Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 20:29:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 20:29:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526829.818833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prllr-0000Ao-BL; Wed, 26 Apr 2023 20:29:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526829.818833; Wed, 26 Apr 2023 20:29:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prllr-0000Ah-89; Wed, 26 Apr 2023 20:29:47 +0000
Received: by outflank-mailman (input) for mailman id 526829;
 Wed, 26 Apr 2023 20:29:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+pP=AR=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1prllq-0000Aa-GO
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 20:29:46 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1462bf24-e471-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 22:29:44 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-504efe702d5so11313620a12.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 13:29:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1462bf24-e471-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682540984; x=1685132984;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=EUQ9GkqDQDJrESjM5VMTuG5CP+Vyl+2eVho5xDmCSdw=;
        b=sCYfFNAf0Lg9K6nvQjISB8gH0gBFRqsAsT0AFkv39hd8YhppZH0E/OVe3tUCKUv9bZ
         nkVmhzFc7181nbuIWP5MaSzqKqVpAq3ps8wWKrYGoaN2oSpLxNxKPK/P6KUEJTRr99ES
         dQZyM1oKAsZ1Xd4+TU09P5zsB6EI+RVzRt/rKev44oNwkvT0LuaboxLAn5XKUzh1DYTg
         DWdERbPlaxnTvoC+1XkVITPbCsgoisci2nWBLK4Qb8oeh42fRTAN4kL8RfJB6P645fFi
         fuvcgePzbtqSu6eszRA9Rt1pr8GXdDJD00Yt3LS30HgFsIjrNFI3cnc8+qBk5rqcFoWe
         UHhw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682540984; x=1685132984;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=EUQ9GkqDQDJrESjM5VMTuG5CP+Vyl+2eVho5xDmCSdw=;
        b=Fs0JhOFiV6usVAraKZwne7qy/2KLmj75YVikapVHnt2ZKWBn7BlksIqqTAJg0aRQEj
         IBFQH6QCtatK5a2DL5kRaw+nI0bB9W9dNS/4zY8naLhH/bpL2y1DBnv/zl0svdtUilst
         NPs1W1eU7rn5waOPhT1EeJl32UQ+c9km0deJdXRTgTV0ZN6W6s0yerTH982AY+LvVMJ5
         pl8dRzSRmxCC/7pdJFXiNk4GFbg05WcR8y51hXPyEGkIFXfop/lOvZtKw6vXCvXm2Qen
         GqqJzcQdROpm2q3QC9GGBMAnYca9xiR3BeKSFZWxscrfS7LgZ8blWjucBELqCDrBjujX
         K9dQ==
X-Gm-Message-State: AAQBX9cx5b91fcCzBpQOnMB8NfqcF9ZBPB8OAsJ0dOEKY6uPvP5VNsDX
	HOTBP0p6yxZTU8jmruR462VBxUaXK0oAicp1lyWS4WVAMlc=
X-Google-Smtp-Source: AKy350ZpdJhyFyIA3Bz2DP6f4i+cMx/sEVbUMJVLgiMv9Lt9oNJkx3MFthzoL8FUE6CJZY7jFyuIF873zBvNTb4l2Z0=
X-Received: by 2002:aa7:c0c8:0:b0:508:46d4:898 with SMTP id
 j8-20020aa7c0c8000000b0050846d40898mr21151268edp.4.1682540983689; Wed, 26 Apr
 2023 13:29:43 -0700 (PDT)
MIME-Version: 1.0
References: <20230425174733.795961-1-jennifer.herbert@citrix.com> <20230425174733.795961-3-jennifer.herbert@citrix.com>
In-Reply-To: <20230425174733.795961-3-jennifer.herbert@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 26 Apr 2023 16:29:31 -0400
Message-ID: <CAKf6xpskM2k5aeLoYLfxnR9KFuK7w3NkZaT_4z-SdOQ8VUc8NQ@mail.gmail.com>
Subject: Re: [PATCH v3 2/2] acpi: Add TPM2 interface definition.
To: Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: jennifer.herbert@citrx.com, Xen-devel <xen-devel@lists.xenproject.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Apr 25, 2023 at 1:48 PM Jennifer Herbert
<jennifer.herbert@citrix.com> wrote:
>
> This patch introduces an optional TPM 2 interface definition to the ACPI table,
> which is to be used as part of a vTPM 2 implementation.
>
> Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
> ---
...
> diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
> index f39a8e584f..51272530fe 100644
> --- a/tools/firmware/hvmloader/util.c
> +++ b/tools/firmware/hvmloader/util.c
> @@ -1009,6 +1009,15 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
>          config->table_flags |= ACPI_HAS_TPM;
>          config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
>          break;
> +
> +    case 2:
> +        config->table_flags |= ACPI_HAS_TPM;
> +        config->crb_id = (uint16_t *)TPM_CRB_INTF_ID;
> +
> +        mem_hole_populate_ram(TPM_LOG_AREA_ADDRESS >> PAGE_SHIFT,
> +                              TPM_LOG_SIZE >> PAGE_SHIFT);
> +        memset((void *)TPM_LOG_AREA_ADDRESS, 0, TPM_LOG_SIZE);

TPM_LOG_AREA_ADDRESS is reserved in the e820 table since it is the
high memory range after the ACPI data, correct?
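As an aside for readers, the page arithmetic in the quoted hunk works out as below. PAGE_SHIFT and both TPM_LOG_* constants are illustrative stand-ins, not hvmloader's real values:

```c
#include <stdint.h>

/* Illustrative constants -- the real values live in hvmloader's
 * headers and are not reproduced in this message. */
#define PAGE_SHIFT           12
#define TPM_LOG_AREA_ADDRESS 0xFC100000u   /* hypothetical address */
#define TPM_LOG_SIZE         (64u * 1024)  /* hypothetical 64 KiB */

/* mem_hole_populate_ram() takes a starting page frame number and a page
 * count, so both byte quantities are shifted down by PAGE_SHIFT. */
static uint32_t log_first_pfn(void) { return TPM_LOG_AREA_ADDRESS >> PAGE_SHIFT; }
static uint32_t log_nr_pages(void)  { return TPM_LOG_SIZE >> PAGE_SHIFT; }
```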

Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 20:32:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 20:32:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526833.818844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prlod-0001cD-OT; Wed, 26 Apr 2023 20:32:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526833.818844; Wed, 26 Apr 2023 20:32:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prlod-0001c6-Lc; Wed, 26 Apr 2023 20:32:39 +0000
Received: by outflank-mailman (input) for mailman id 526833;
 Wed, 26 Apr 2023 20:32:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9cwp=AR=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1prloc-0001c0-Vr
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 20:32:39 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7aa760bb-e471-11ed-8611-37d641c3527e;
 Wed, 26 Apr 2023 22:32:36 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 1747463012;
 Wed, 26 Apr 2023 20:32:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 95051C433D2;
 Wed, 26 Apr 2023 20:32:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7aa760bb-e471-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682541154;
	bh=19qsQm6n48p1+w0nyOWsAYHO53X8Qf94yVNlgYZx24s=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=NXaS/3f5+a0FDcjbEXrJDCZfK+PxzA6Zph6PWRbGN0ha3DXsQEav32WtPY4iHu+KC
	 w4iJKaCU6kzslk7/WbR+ZuwBwQefu7s/H5lXcpbYj1fBA9kGVhCO4tjBe19mOeSsZF
	 HF+83tFidfdsCf6IQT8IaLu+cThlDmlnil0DXGIOUPm72MUIgPa/uBoyxS2OYeJM2D
	 XZOuX9HLTuFuNuYNY3sHo1IVtb9OaXkj64A7fOf3NCu8hU1iaM5WsXcaMtuX//ENer
	 0Ii14R+nzm1QOfi5GdE8bloJ++lBYB5yoUPmdr/1WPskDCuE8fh1GHIaeMuuRBpltO
	 uO9bL1siMBCzQ==
Date: Wed, 26 Apr 2023 13:32:32 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Michal Orzel <michal.orzel@amd.com>, Doug Goldstein <cardoe@cardoe.com>, 
    =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [PATCH] CI: Remove all use of /bin/false as a ROM
In-Reply-To: <20230426144748.1236385-1-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2304261332020.3419@ubuntu-linux-20-04-desktop>
References: <20230426144748.1236385-1-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1372857357-1682541154=:3419"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1372857357-1682541154=:3419
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 26 Apr 2023, Andrew Cooper wrote:
> As the recent work to get PCI Passthrough testing working shows, putting
> `/bin/false` as a ROM into guest context doesn't work so well.
> 
> For all ROM paths where we're skipping the build, use a slightly-plausible but
> likely non-existent path instead.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Assuming you (or patchew) tested it:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> CC: Anthony PERARD <anthony.perard@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Doug Goldstein <cardoe@cardoe.com>
> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
>  automation/scripts/build | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/automation/scripts/build b/automation/scripts/build
> index d830cff7b7c7..197d085f3e07 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -67,9 +67,9 @@ else
>  
>      if [[ "${cc_is_clang}" == "y" ]]; then
>          # SeaBIOS cannot be built with clang
> -        cfgargs+=("--with-system-seabios=/usr/share/seabios/bios.bin")
> +        cfgargs+=("--with-system-seabios=/usr/share/no-seabios.bin")
>          # iPXE cannot be built with clang
> -        cfgargs+=("--with-system-ipxe=/usr/lib/ipxe/ipxe.pxe")
> +        cfgargs+=("--with-system-ipxe=/usr/share/no-ipxe.pxe")
>          # newlib cannot be built with clang so we cannot build stubdoms
>          cfgargs+=("--disable-stubdom")
>      fi
> @@ -87,7 +87,7 @@ else
>  
>      # SeaBIOS requires GCC 4.6 or later
>      if [[ "${cc_is_gcc}" == "y" && "${cc_ver}" -lt 0x040600 ]]; then
> -        cfgargs+=("--with-system-seabios=/bin/false")
> +        cfgargs+=("--with-system-seabios=/usr/share/no-seabios.bin")
>      fi
>  
>      ./configure "${cfgargs[@]}"
> -- 
> 2.30.2
> 
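The policy the patch applies can be sketched in shell. The `dummy_rom` helper is illustrative only; the actual automation/scripts/build simply hard-codes the paths:

```shell
# Sketch: ROMs we skip building get a plausible-looking but
# non-existent path.  /bin/false exists, so it used to be silently
# accepted and mapped into the guest as a "ROM"; a missing file
# instead fails fast with a clear "file not found" error.
dummy_rom() {
    printf '/usr/share/no-%s\n' "$1"
}

seabios_arg="--with-system-seabios=$(dummy_rom seabios.bin)"
ipxe_arg="--with-system-ipxe=$(dummy_rom ipxe.pxe)"
```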
--8323329-1372857357-1682541154=:3419--


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 22:31:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 22:31:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526840.818854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prnfN-0005J2-CV; Wed, 26 Apr 2023 22:31:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526840.818854; Wed, 26 Apr 2023 22:31:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prnfN-0005Iv-9j; Wed, 26 Apr 2023 22:31:13 +0000
Received: by outflank-mailman (input) for mailman id 526840;
 Wed, 26 Apr 2023 22:31:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ldec=AR=citrix.com=prvs=473a90206=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1prnfL-0005Ip-Ni
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 22:31:12 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 07391e23-e482-11ed-8611-37d641c3527e;
 Thu, 27 Apr 2023 00:31:05 +0200 (CEST)
Received: from mail-bn7nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 26 Apr 2023 18:30:50 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5773.namprd03.prod.outlook.com (2603:10b6:a03:2db::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.32; Wed, 26 Apr
 2023 22:30:44 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%4]) with mapi id 15.20.6319.033; Wed, 26 Apr 2023
 22:30:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07391e23-e482-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682548265;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=4nw0Cds71RGSE+AxdSb3cmGc1BsiqZaUNI/kvew6Qj8=;
  b=DjXX/85vLU4ZJAhx7uKGrGAKtHBufDs9cRrgy3F9drUy8+OFydDiEiln
   o2OGjW4xa4FIKjE7iGxMaUf3V40EGgV04dozobCb1w0qWsAJ9r+7IjPyx
   ygBlSyrQsmlZN2DtgZ2Y9RiU4aPnTU5EkqsPcAdv8HWTt+Mm+QnHrjp5+
   Y=;
X-IronPort-RemoteIP: 104.47.70.108
X-IronPort-MID: 106888233
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:H9CZwaw4AuBqqRtSbwV6t+eQxyrEfRIJ4+MujC+fZmUNrF6WrkVSz
 WUaDD+DP/eDZzbzf9B3adu+ox4O6JPUytY2TQE++SAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UIHUMja4mtC5QRiP64T5zcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KV9o6
 fEAN2sGVzeCwP2ckeyESapHpe12eaEHPKtH0p1h5RfwKK9+BLrlHODN79Ie2yosjMdTG/qYf
 9AedTdkcBXHZVtIJ0sTD5U92uyvgxETcRUB8A7T+fVxvDCVlVQhuFTuGIO9ltiibMNZhEuH4
 EnB+Hz0GEoyP92D0zuVtHmrg4cjmAuiANxCRO3iraUCbFu72TMKUTMcDUqHnvSljEWdf8BVE
 FUKw397xUQ13AnxJjXnZDWxpHOGtxgQQd0WDeQ+7AyPzYLf5wGECi4PSTspQMwrsoo6SCIn0
 neNnsj1Hnp/vbuNU3Wf+7yI6zSoNkA9L2UPeCsFRgst+MT4rcc4iRenZtR+FK+4iPXlFDe2x
 CqFxAAlnKkah8MP06S9/HjEjiiqq5yPSRQ6ji3IWkq14wU/Y5SqD6Sq5kLc9u1oN5uCQx+Ku
 31ss9Sf6cgeAJfLkzaCKM0oHbqp7vLDFyfOjFpHFoMksT+q/haekZt45Th/IAJjNJkCcDqwO
 EvL41oJtNlUIWegarJxb8SpEcM2wKP8FNPjEPfJct5JZZs3fwiClM1zWXOtM6nWuBBEuckC1
 V2zK53E4aoyYUi/8AeLeg==
IronPort-HdrOrdr: A9a23:54cRXqv9HPYWBXRfNeRMppuy7skDFdV00zEX/kB9WHVpm62j9/
 xG+c5x6faaslsssR0b8+xoW5PgfZqjz/FICOAqVN+ftWLd1FdAQrsN0bff
X-Talos-CUID: 9a23:tSGrUG6a2EeOu/pCE9ssymgIF9g6KXbk63aXPGXpV1lqC5a2YArF
X-Talos-MUID: 9a23:FFelbgTthe1EudkXRXTz2yg7Gs0w8piRBVwLmrYiqcO/LDJJbmI=
X-IronPort-AV: E=Sophos;i="5.99,229,1677560400"; 
   d="scan'208";a="106888233"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=I4eLznXHKioee5hKGGZjQasgte8pt1JlMz+kRdJD38V3woFLrHp0Dq8vQzANrrFBckJEgFW1Mr1aXJ4HH8Sdfq3PEaGV+3KLSyTRzKNGN89W12i7Xs+OwS7AvZSCe1rjkcObCWf8wPAEo+oypGMF1iFFkaF+83j+u8KMSNROt8Qe9jNnXBF28BsvXZFhplWUBQQ0o1fqrAHlvFMzuBEPH1vwAKe8pfm21VGsfoNwRr2oq3+AK4y6iWhwgzQGT4K26gYRwqjsy2wn6s77PyzczBR8+W0YEgfjTKgx6YYf+nf2jTtFqLw+y2OsYex2A20Wcc3t6xyalVoJXNZtWSbtMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9w9bPw8y6S/YFtiIMXqD+Okee94jIAXriIc0tbrE3Vs=;
 b=GLEFExtVbJGogePF/4EAnAzQQvug1C5ojgvtKOp/uobf6Rqn/q+8YICv5fyVL0FaegIL8l/1g3HlW5D5S4wHBzk/IXVwXduPg7WUj39vsWgywj8LDjdZYEgz/xmpoivNPfbGoXMDnviQ9OSiUs0DWRHfWPeNsCsGxNg75+Iftqv+hkjOL96CO+FD6KFPCszeRle1QonygJ8Yt//iHLeXnFJqYhcboJ4Y73Ec1EKGxfvZVGosZFBA87xI92xG36e3/QkjkKUjkC6+cSW08Oz0eu59TG2rzBG8zDjbEFjBOFMTyBgjcn5Fl3rGj8r0oSA2fqvwPr+ZrCzyThEy7rhQMQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9w9bPw8y6S/YFtiIMXqD+Okee94jIAXriIc0tbrE3Vs=;
 b=Cf2wXwt9vaiM1HuNzoA9mNdMm6hCWur96UcoRvdBMRVdlmXOe2s4tHnQ87PB1w5VAna1k1aDPB3qFmlclYEOcJcMsXTOgb5J8KVJlOUxQxjZFBuNg941yhJL+8WzEKEEqwlM8NMzaeybz6hqLSbHk7ctP8JEQztAbvJW2Fmu+WE=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <cf554e7d-5d9a-8beb-b0a6-810267e5c3ad@citrix.com>
Date: Wed, 26 Apr 2023 23:30:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Gitlab curiosity: Was [PATCH 0/7] Rationalize usage of
 xc_domain_getinfo{,list}()
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Jan Beulich
 <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 Tim Deegan <tim@xen.org>, Michal Orzel <Michal.Orzel@arm.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
In-Reply-To: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0118.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c6::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB5773:EE_
X-MS-Office365-Filtering-Correlation-Id: 7e7fd86a-f49c-4017-9e99-08db46a5dfda
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KQCI5Dwed1Hz9d5VUmhvvhzQxh9CMM0a4KKZ98o2i3/rGGzj4KMWldcmJWOVNRaXPBhKRtMNJdJ1miJEZXwyBx2IuU/y1S88/FIhp6eQRoVUPy/abkdDa595Tubl1Ok8vPx4K2xRqP6a6Rrf6YX+ywxHpKX5cewr4uK0nFC9FyirqvLn9XP06gBYyc+N2M0Eq0riR5VYjf56aOoCEL9TRVcn6YZHzcMu5fTxygCvlqtdIJmooLuT6jnWCVgv/3u0GpjIGmMeXgh6iAAvCXdwxXD/UR4jDkYXLAINNG5NTX3JtG4OiqxdjDCAenKfuhbrkTUPB5Ejf3yVpSCDXLo9gezp1z91pDPj+R086UVggcD6Hi1Wv9/uXW07qxAtzaO2yI0eMOVIL+mZkAyXe7PC9//U9pgwjxSfavgsVX1mVnFZymykTED8x+dJ03pqGjKwXdX6y/k8iDbcRpvEr5DIcaQiKyx8lgns3ztChhUxhQgSP42uGldVavxsxKQrpMtfsAQyR1KxYaH9qr4xSCkhnOhj/E/P78tzP9k088vlHTMpdYHAtFvsZi2KfJ25YlvHskqMquCEm2PGlbdw3b1WcJ2QrL9Ohn54LkPDhH8hMNAeW1Ur5VLFesUtYeWARu37qryspXFd6hSgnW4soubqOmyxIouDcKjd6EzKIQ00qOvGho3vF/vnFvdA1BCdl6rl
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(136003)(39860400002)(376002)(346002)(396003)(451199021)(2906002)(66946007)(66556008)(66476007)(66899021)(316002)(4326008)(7416002)(8936002)(8676002)(41300700001)(36756003)(86362001)(31696002)(26005)(6512007)(186003)(53546011)(478600001)(5660300002)(38100700002)(6666004)(966005)(6486002)(31686004)(83380400001)(2616005)(6506007)(82960400001)(110136005)(54906003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WDU3Tko4SExNdkttRy9EcXlSTVMxeWhhV0tJUXN4RzF0QnFPbGJ2bk51bWtR?=
 =?utf-8?B?ZHFaeUxTemx5YUx3N2NPL0xwY09adHpESkQzQ2FMZkJDVWtQRXRGc2JJajQ2?=
 =?utf-8?B?elZIejZzSzh4ZkVLNG9qNG0vZXkydmUzZXJmYTlIc0VTQ2FwYmkrc1ZGNG4w?=
 =?utf-8?B?bG5FZGNuSkJFSUx6bTlFZGhxTTBrNDJ1SENMYStneTliM0wyVTZ0OHRuZjY1?=
 =?utf-8?B?Vy9GcUJMYWZBZTNhWFRtRWp0VTFRQUcvb1BFZDZUZWs2MFJJd2Q1ekRZaHhh?=
 =?utf-8?B?SXA2azNSb0d3OWIyTDRIZElhbzUvSGlIN1dUeWNDN3JsWmRlWmpMait4cmRp?=
 =?utf-8?B?Uy9GNUNJSkQrZ0g0aUE3VDFac05DQXczN3VYblNkdnU0MWR2U25lZUJWby9h?=
 =?utf-8?B?V0ZWeHdBeGJma1VlbVcyV3hSVlNWZFI1NEFJMXVQWENCc201UTloU1dNL2dv?=
 =?utf-8?B?S1BMNk5DMTNmL1RwRWRDSU1qU0tKSG8vcUtKNitJQ2ZRYXhJOVM1NEdrdVds?=
 =?utf-8?B?TDNmUHhXb2pmSmE4SllNdHkvN014d0IybkwxK0JlOG5rTnBZazJna3JnZGdB?=
 =?utf-8?B?V3pGdDZDOG5wTjV5dGZHR1BLTm54L3hmWEVIcWJhL0E2MmV6dStIYWJTZDU3?=
 =?utf-8?B?cENlVHk0T3hrZHRWbDEra1JtUFZ3UExxaUZsVisvTkZRdm92bWpCRGVaZC9L?=
 =?utf-8?B?SjludVoxVE9jUHVhMHRFRjlyWHZIKzd1bmpmSDl6Q1MveXJpS25aM3V3bzFj?=
 =?utf-8?B?djUzT0Y2c0VtWUZTbWV6ZWp4b3JaVU1iVFBhUWNSTEhyNnRBazVScGd2SXJE?=
 =?utf-8?B?cEdNSlVhYnl6T2M5amxlcXpIcnM2bmNwdHBWbnk0UlpOeGFXdDNKRWt1bEwv?=
 =?utf-8?B?aVNWVzByVDlacU5xT1dPM1FzcGIvR1pOS0ZDeU9aOFRUamYyQjk4RDNoUXZU?=
 =?utf-8?B?WDVhVWE4aVJ0VXlpWEcxVCt6SHhWOFNCYWVINHdDeTUyK2o3Uy9SYjB0clY4?=
 =?utf-8?B?RjVNeVhLOStkNzdvTzFEeWxoNzhjYnMwaXlNWXhlQyttR3RWdEtBV1V5dEdm?=
 =?utf-8?B?djhlRjJUU0FhMzVIWGRmcFNLWGpjTk9SNk1DU3p2YVpmWGM0Ujc3VC8zbVE5?=
 =?utf-8?B?YTUwN0VHbFNkY2EyTWx5OTRNZ0R6Qm9tQy82MG9uSGtRRHN2RGlTZWFCa2JD?=
 =?utf-8?B?bzJka2VVWnJNSlJvVjNPU25WNHl5RjJ1d0ZjQ2syMmtjbFplbW4xSE00eFd5?=
 =?utf-8?B?a1pGU01DdlErU2dMUFRNdm9vU25zaG5Kb0l0LzdpK0t5eTltSEd2dXUxRG9z?=
 =?utf-8?B?Uk5QdnNHbGJBNDZFNWpyTENCZFFSYWt1UmxWeXJjdUVpazJ1d1FvbUVuS3VT?=
 =?utf-8?B?eXNaZkJWaW1YZTloMEZIaE93Mno4UXVxRzJmZWZMM0pxQ3luSTQxcUhJeEZF?=
 =?utf-8?B?Z0lWVlBkS3ViVGRzbkhtTEdPby9lblU3ZHREc3FoZ095d1hYTkJVQUpKczYv?=
 =?utf-8?B?ajJZNUFYTllkWjQ3TmhKbkdrMENOaTdncUNaVGRZRnJWcEFVZXhLSERXYkxG?=
 =?utf-8?B?NmxCUHR1UkJudEs2ZVMwQzdGRzZLd0lhN2pxMWxiTmJ4WmpiRUZLSWNZMVNH?=
 =?utf-8?B?YTVMN09wd1QrSi9sZVFjYVo4YW5ySlJtaFVvb1VaL2FwUklhYTlYZXBaMElx?=
 =?utf-8?B?WmRzVWgvVkYwL2dmZVZiOVZsOUhHNEpDRC9aYVY2dnJpYXhGUElvLzlkQXhF?=
 =?utf-8?B?TU9venpFaE9DS284UGVUNFpGSWF3RTFtUk4wNklBQ1NtRXg5UUJXNHQ3K1FN?=
 =?utf-8?B?NUV4ZkpUclhtVC9Fc1Vnc3dpMVp3ZHdxWVE1VFg4U3lSckViZUVKR1NkREZs?=
 =?utf-8?B?UHJoK0trNUxYRVI0czdVSVBEUUxTb3J2THlwT0xKZEc2MVNsTXZxU1pEbHRL?=
 =?utf-8?B?U2JoZitPaFFXY1RuM1NXSWhvdENhMEdGdG9jM0pOSGRSWmVyTlhIRHk3Z2ZM?=
 =?utf-8?B?c3pWbGtpSE9JZzFYTFVqaGtoemRmZzF0WDZTMUpmNkFsYVlxYlhkMHUrWDRj?=
 =?utf-8?B?K0tDVVd4QVRxL1NsUCtscEVJMmZuTWN4M3dVTXZXZUVNVnBZL0dWbENmaTJN?=
 =?utf-8?B?ZWhBcmhKWnZyRCtZNjlVZmtqaVpSNEF3U0VpSHRXSDkxOVBoNG1UR2pXSE4y?=
 =?utf-8?B?WHc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	rMm+nnLZORTsT0sJcbu4EVlTDxLb9zD0Hn43wCvlZVP9KWsHKcE6z+NouW0rXFVPzOk/XaSCyaDiL3DHRx9MxkaFojOlYBEcm21uB927lNuB4U67n837UR/Eu3e4IdhPe9UjCUphkfrkjI6sVV+pd5UJgwdnav8xwXyTCSbKD7Znr6oHdDFwUO6/tdj0Pj8Q7r7+WxRiz7HSu/aM6oLD6s2/R+iPPqwIPtCQcMEEQnYWJhk669v5DJ2t4czc/v0r7hKtzOI9MQiZ11WPJKKqq5dy5LT6Kb1TAU/pXJ+qfRAe6lWo+2rLxBPphanC2lRYry/iMH7p27CzEchdrFyI6Msghr5fnNkPTcx2SoGmv0kE3/lCJHJ8DJTpGlKNrvx32NJkq2AJt3w/I/+1glNFLwXK/mbaccK3Rl1u54AwU3u++oF0fbYsrWnR/L9SQAHatx1K69bLzcDva9O1iA8TvJyeBb/lcxxjT3Brsp9uCZmHumwOGxHJ0g9FSgAvyrwGScJfRQMSnDRA2ThV21bPP14zvUzy8SpakK0sK0Fixug1nK7o1IvQ6fWf9wDjLzdNkizN1nlGsVK9sP1oaVOC5QU+s9VKkhM9hRY1BAGPzIiT0m4NpOYBpTn3Vvr0ROSXAXyRwahqsl74jNvcIDYjIQXjCQC4z2ni5Goju6zELPW0EzQENPT09adz2mclFLMuwHbfgss9Ll/VqtXnUlcPUsQrh1I8wWcsxhFNTWgOtDdryfZLZeKWgtq8OuhN8Mv3LTo7YMu+KoO7NjlMwBOZ9IjCxv7C/5NBWZGErjAuD3R9UBtkGTZSlVKQLU3MD4pKZcNiRInttNgdJN9PoYrJSuELc80rEqWEOlTi0iFC5jya431cEQxD3pPioHD8TEKThZaVod0eNGyYBLjOO3JlQQ==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7e7fd86a-f49c-4017-9e99-08db46a5dfda
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Apr 2023 22:30:44.6215
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zkgRDJZe4fAdjPOjnocSJ6ETnSuIeN1Ja+jgyiVOC/pdTAkwlXBG7Ti1gjHyz9iTgP6QA2/RugTRLQPQX4RGqyq54GQNKZfSVjG/hz6cSz0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5773

On 26/04/2023 3:59 pm, Alejandro Vallejo wrote:
> xc_domain_getinfo() returns the list of domains with domid >= first_domid.
> It does so by repeatedly invoking XEN_DOMCTL_getdomaininfo, which leads to
> unintuitive behaviour (asking for domid=1 might succeed returning domid=2).
> Furthermore, N hypercalls are required whereas the equivalent functionality
> can be achieved with XEN_SYSCTL_getdomaininfo.
>
> Ideally, we want a DOMCTL interface that operates over a single precisely
> specified domain and a SYSCTL interface that can be used for bulk queries.
>
> All callers of xc_domain_getinfo() that are better off using SYSCTL are
> migrated to use that instead. That includes callers performing domain
> discovery and those requesting info for more than 1 domain per hypercall.
>
> A new xc_domain_getinfo_single() is introduced with stricter semantics than
> xc_domain_getinfo() (failing if domid isn't found) to migrate the rest to.
>
> With no callers left the xc_dominfo_t structure and the xc_domain_getinfo()
> call itself can be cleanly removed, and the DOMCTL interface simplified to
> only use its fastpath.
>
> With the DOMCTL amended, the new xc_domain_getinfo_single() drops its
> stricter check, becoming a simple wrapper to invoke the hypercall itself.
>
> Alejandro Vallejo (7):
>   tools: Make some callers of xc_domain_getinfo use
>     xc_domain_getinfolist
>   tools: Create xc_domain_getinfo_single()
>   tools: Refactor the console/io.c to avoid using xc_domain_getinfo()
>   tools: Make init-xenstore-domain use xc_domain_getinfolist()
>   tools: Modify single-domid callers of xc_domain_getinfolist
>   tools: Use new xc function for some xc_domain_getinfo calls
>   domctl: Modify getdomaininfo to fail if domid is not found

The patchew run found a single failure:

https://gitlab.com/xen-project/patchew/xen/-/jobs/4183881202

This part looks reasonably fatal:

 * Starting local ... *   Executing "/etc/local.d/xen.start" ...Starting
/usr/local/sbin/xenstored...
/etc/xen/scripts/launch-xenstore: line 90: echo: write error: Invalid
argument
Setting domain 0 name, domid and JSON config...
Done setting up Dom0
Starting xenconsoled...

except it was only the part trying to set the OOM score after starting
xenstored, and the only plausible way for that to fail is a bad
pidfile.  The other print messages are also clearly out of order.
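
The failing step can be sketched roughly like this (a hedged guess at
what launch-xenstore does around that line; the function name, pidfile
handling and score value are illustrative assumptions, not the actual
script):

```shell
#!/bin/sh
# Sketch of a pidfile-driven OOM-score adjustment.  An empty or stale
# pidfile is the plausible way the write fails with
# "echo: write error: Invalid argument".
adjust_oom_score() {
    pidfile=$1
    score=$2
    pid=$(cat "$pidfile" 2>/dev/null)
    # Bail out if the pidfile is empty/missing or the PID is gone.
    [ -n "$pid" ] && [ -d "/proc/$pid" ] || return 1
    echo "$score" > "/proc/$pid/oom_score_adj"
}
```

If the pidfile holds garbage, or a PID that has already exited, the
write to /proc/<pid>/oom_score_adj fails exactly as seen in the log.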

I've rerun the pipeline a second time,
https://gitlab.com/xen-project/patchew/xen/-/pipelines/850230144, to see
if gitlab thinks it is a reliable or unreliable failure.


But there's plenty of other stuff in this log which is concerning.
Stefano, Michal:

 * Starting networking ...awk: /etc/network/interfaces: No such file or
directory
 * ERROR: networking failed to start

The domains ought to have an interfaces file with "auto eth0", or even
an empty one.  Alpine clearly isn't expecting the file to be absent at
all.  The fact that the ping test passes suggests this error isn't as
fatal as it makes out.
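
For illustration, a minimal interfaces file of the sort Alpine's
ifupdown would be happy with could be generated like this (a sketch;
whether DHCP on eth0 is right for the test-lab images is an
assumption):

```shell
#!/bin/sh
# Write a minimal ifupdown config so Alpine's "Starting networking"
# step has something to parse.  eth0/DHCP is an illustrative guess.
write_default_interfaces() {
    cat > "$1" <<'EOF'
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
EOF
}
```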

Next,

 * Executing: /lib/rc/sh/openrc-run.sh /lib/rc/sh/openrc-run.sh
/etc/init.d/modloop start
 * Mounting modloop  ... [ !! ]
 * ERROR: modloop failed to start

Not sure what modloop is, but this doesn't look healthy.

Next,

 * Loading modules ... *   Processing /etc/modules
modprobe: can't change directory to '/lib/modules': No such file or
directory

This probably just wants an empty /lib/modules dir in the filesystem.
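
If so, the fix might be as simple as (hedged sketch; the exact layout
the images expect is an assumption):

```shell
#!/bin/sh
# Create the per-kernel directory modprobe wants to chdir into; an
# empty directory is enough to silence the error.  $1 is an optional
# filesystem root, defaulting to /.
ensure_modules_dir() {
    root=${1:-}
    mkdir -p "$root/lib/modules/$(uname -r)"
}
```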

I could go on, but I won't.  One thing we do need, however, is
/var/log/xen/* pulled into the job's artefacts, because if there really
is a xenstored problem hiding in this series, there's no way to debug
it with the artefacts currently collected.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 22:58:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 22:58:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526845.818864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pro5L-0007of-Fm; Wed, 26 Apr 2023 22:58:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526845.818864; Wed, 26 Apr 2023 22:58:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pro5L-0007oY-Cv; Wed, 26 Apr 2023 22:58:03 +0000
Received: by outflank-mailman (input) for mailman id 526845;
 Wed, 26 Apr 2023 22:58:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pro5K-0007oO-Dm; Wed, 26 Apr 2023 22:58:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pro5K-0001oY-6G; Wed, 26 Apr 2023 22:58:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pro5J-0005FU-LW; Wed, 26 Apr 2023 22:58:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pro5J-00071M-L3; Wed, 26 Apr 2023 22:58:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7Gqe92FT1OYXjqC1w59XI32XEMNV2C3j/c59wHLBwTY=; b=l0B+s+ybufg16suKmeSEFx7mGH
	jFREcHwDo9EfyKXW8q5qNww2gc/wKzBpUANKJb2Tj+LyeUoDnvvQXzSc4upDi3cWePak080bjZ60g
	j/jl8EuOnsBr5mfmYxs9VFlw+Mplzeo4Mhp3cWUgKBVqIzlg4hAHPmw0b68tbP3X8B50=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180436-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180436: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=edacc551e6586258ab046dd852f65d674e3e2af0
X-Osstest-Versions-That:
    ovmf=ede0bd1496405f72147308b9570efba0234349b2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 22:58:01 +0000

flight 180436 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180436/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 edacc551e6586258ab046dd852f65d674e3e2af0
baseline version:
 ovmf                 ede0bd1496405f72147308b9570efba0234349b2

Last test of basis   180429  2023-04-26 09:41:52 Z    0 days
Testing same since   180436  2023-04-26 17:42:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Jiewen Yao <jiewen.yao@intel.com>
  Michael Roth <michael.roth@amd.com>
  Roth, Michael via groups.io <Michael.Roth=amd.com@groups.io>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ede0bd1496..edacc551e6  edacc551e6586258ab046dd852f65d674e3e2af0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 23:34:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 23:34:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526854.818884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1proeE-0003je-Dl; Wed, 26 Apr 2023 23:34:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526854.818884; Wed, 26 Apr 2023 23:34:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1proeE-0003jX-AP; Wed, 26 Apr 2023 23:34:06 +0000
Received: by outflank-mailman (input) for mailman id 526854;
 Wed, 26 Apr 2023 23:34:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1proeD-0003jN-Mq; Wed, 26 Apr 2023 23:34:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1proeD-0002le-L6; Wed, 26 Apr 2023 23:34:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1proeD-0007BF-9D; Wed, 26 Apr 2023 23:34:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1proeD-0003uK-8Q; Wed, 26 Apr 2023 23:34:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oBaEC5u2PCpuJegX4p7W5Roz36OpFsL90LVUXokADi8=; b=ld5cQiAIMq5tPSRGZOXGRI/1+K
	FFtEVESwFgIs3/+oc9n96q5sCjieBwnpXlBMMoAsOfyiL5isCboTcDdKEZXx+DWyyKpdlDga5LuNH
	n4RXmvoKuYyu6Ft730bn1Rs6CKjqsHVufzxnkmQvAkH51DG1GriWypHMjYhNyouPL0bY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180425-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 180425: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=98ec8ad2eeb96eb9d4b7f9bfd1ef3a994c63af17
X-Osstest-Versions-That:
    xen=622675cdbc5f249bddfe970054f43a867d3ebed0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Apr 2023 23:34:05 +0000

flight 180425 xen-4.14-testing real [real]
flight 180437 xen-4.14-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180425/
http://logs.test-lab.xenproject.org/osstest/logs/180437/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180437-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180217
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180217
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180217
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180217
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180217
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180217
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180217
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180217
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180217
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180217
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180217
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180217
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  98ec8ad2eeb96eb9d4b7f9bfd1ef3a994c63af17
baseline version:
 xen                  622675cdbc5f249bddfe970054f43a867d3ebed0

Last test of basis   180217  2023-04-12 08:06:29 Z   14 days
Testing same since   180425  2023-04-26 07:39:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   622675cdbc..98ec8ad2ee  98ec8ad2eeb96eb9d4b7f9bfd1ef3a994c63af17 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Wed Apr 26 23:44:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Apr 2023 23:44:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526862.818894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pronx-0005Kb-GW; Wed, 26 Apr 2023 23:44:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526862.818894; Wed, 26 Apr 2023 23:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pronx-0005KU-C4; Wed, 26 Apr 2023 23:44:09 +0000
Received: by outflank-mailman (input) for mailman id 526862;
 Wed, 26 Apr 2023 23:44:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0jx=AR=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pronv-0005KO-TJ
 for xen-devel@lists.xenproject.org; Wed, 26 Apr 2023 23:44:08 +0000
Received: from wout3-smtp.messagingengine.com (wout3-smtp.messagingengine.com
 [64.147.123.19]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 38d7f14d-e48c-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 01:44:05 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id EBB6C3200319;
 Wed, 26 Apr 2023 19:43:59 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Wed, 26 Apr 2023 19:44:00 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Wed,
 26 Apr 2023 19:43:57 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38d7f14d-e48c-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1682552639; x=1682639039; bh=juromuWe+xLzqtJizKVtk+5279RWmuI4tdO
	nzfQg3VA=; b=JXcLVRzpk4SmxtgnRfk9WA5rd/olIl1nr2PO3e5i/cUK3p57d/+
	oNl9qWoHVjnJgE5pL0FbJ7xS+kqOl0nyPW/dVz9o/zddWeJE83JQNdKbEt3odaS2
	wtC5KC9ORc38GwN06exiSJvMLShQt2iWHhIgANpg6mH4mJTIxxsi/UU4oQJtQ6X/
	bBg6GmUVQbaVE0Qdwm6xTO7KK0DZeBs/cqVS3hXa+WQo8KDV3jst7DZzOPz6RfF4
	lckF4fksV9qwFAEapASZMJVTMJcpoYQTSMQaEG29SWo7R+Rhm8NFxZPLxdWbslA9
	8avsB+SsKIAy6KeJImsvS+GUaWAC/cC6mAg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1682552639; x=1682639039; bh=juromuWe+xLzq
	tJizKVtk+5279RWmuI4tdOnzfQg3VA=; b=PFHHEo4Fd4lh+ChhhPGd6D5a2oN2E
	jNqZY7c0MwKSwDbnpfHmD33fJ8XMtJ8Bg60PFmzwFnDYsBAZE05JZmpwij36gMof
	H2v2e3pNnc+uhA68A/DyfCi96n2x2XIzb168l2TVxsPmbAKHmnfndalOTwAwtVW3
	ikbya0AR0Uw2m9ibcWaCz0IAhMZXUTfhSoroMm4lhG9B8di1cfgSnVFFuaAM5lN/
	XayEs8HVK4JoIB6wH84xL6TxgA92B5gCIKOHtIxtxFz60tSwYWb85KomMxwc3Oth
	XagDlfAWaeVR3tuuYUDQBpdgsJp6FPfs71VpSsr1bSh5RXvI/bvjwbU4g==
X-ME-Sender: <xms:P7dJZACs70xsi4LxYBfB7AS1_X1MpV2a0a1gtGDRZlqy4QTce5wj2g>
    <xme:P7dJZChYiu4QOAKos3EXBTWOJRtNNBtBf-7kQWYufFv2HTOwSOTsCz8bEyoufuwvf
    WQfF_lYnFvJRg>
X-ME-Received: <xmr:P7dJZDnRW8hADTDwyl6bOJtO1B4L8ijZncPSYqhXXMXBiry3CMYGVm16qzZawWcs4MSmVMy8YW58oHTE_44KFZmtHjueW2ebiRk>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeduhedgvdeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:P7dJZGy3gGBWHp-z_nVoTrOQ93X-pwBpY9PJ0jmkgEFnwsIcJVGPpw>
    <xmx:P7dJZFTlbLeg0XSxwLbxePGzNEyfJ5OUw2BBCGP_WWtKTu26AH0t8A>
    <xmx:P7dJZBa5vbEQcknWI08YipDC-6_iL94uaHNxU0NLkdUKxEbxxLN3gg>
    <xmx:P7dJZOJM4sYgELjXQUBoRuWwdLvTx047N-viXpxCdLI28Hq_K5tPlg>
Feedback-ID: i1568416f:Fastmail
Date: Thu, 27 Apr 2023 01:43:55 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] ns16550: enable memory decoding on MMIO-based PCI
 console card
Message-ID: <ZEm3O2PL0NLWqoMk@mail-itl>
References: <20230425143902.142571-1-marmarek@invisiblethingslab.com>
 <ZEjXNLAVCixClGyl@Air-de-Roger>
 <b41e8eb0-a776-8924-429f-8abe7e70ead7@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="97dlg8z4lmExvfhD"
Content-Disposition: inline
In-Reply-To: <b41e8eb0-a776-8924-429f-8abe7e70ead7@suse.com>


--97dlg8z4lmExvfhD
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Thu, 27 Apr 2023 01:43:55 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] ns16550: enable memory decoding on MMIO-based PCI
 console card

On Wed, Apr 26, 2023 at 10:24:28AM +0200, Jan Beulich wrote:
> On 26.04.2023 09:48, Roger Pau Monné wrote:
> > On Tue, Apr 25, 2023 at 04:39:02PM +0200, Marek Marczykowski-Górecki wrote:
> >> --- a/xen/drivers/char/ns16550.c
> >> +++ b/xen/drivers/char/ns16550.c
> >> @@ -272,7 +272,15 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
> >>  static void pci_serial_early_init(struct ns16550 *uart)
> >>  {
> >>  #ifdef NS16550_PCI
> >> -    if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
> >> +    if ( uart->bar )
> >> +    {
> >> +        pci_conf_write16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
> >> +                                  uart->ps_bdf[2]),
> >> +                         PCI_COMMAND, PCI_COMMAND_MEMORY);
> >
> > Don't you want to read the current command register first and just
> > or PCI_COMMAND_MEMORY?
> >
> > I see that for IO decoding we already do it this way, but I'm not sure
> > whether it could cause issues down the road by unintentionally
> > disabling command flags.
>
> Quite some time ago I asked myself the same question when seeing that
> code, but I concluded that perhaps none of the bits are really sensible
> to be set for a device as simple as a serial one. I'm actually thinking
> that for such a device to be used during early boot, it might even be
> helpful if bits like PARITY or SERR get cleared. (Of course if any of
> that was really the intention of the change introducing that code, it
> should have come with a code comment.)

I have mirrored the approach used for IO ports, with similar reasoning,
and I read the above as agreement. Does that mean this patch is okay as
is, or do you request some change here?

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--97dlg8z4lmExvfhD
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRJtzoACgkQ24/THMrX
1yzVJwf/VoO+AdaYvfNdG/6HU8qru+NLn+kerIjpwDpiYdAZ7fQEoDQ/ZbAg0vv8
c+FRdA+xjs3kooQfY6moIKCeii9mvlkET0/UWSrKFgSNAkMHPALDaWwUR6jsNiq8
pHMn6it+ACN27tiT8oExY5LBCR6JE01WZ8qksmi2uPX3OZw25jSLTBUlPTcDtvNC
FubG5kiImn+eFkYL/cUKfotc7Mbx+LIlqYyq//nQp0uoXPEaq/ppLlXBdt9jJFed
1Ogr3eE3o4udmSQsTS/pwp/Wkmurr7VJU1BYUg8iwVV1gKQjhLB4wjuLkH2pbwlb
yiqf0t8zHfkVK69h4HwpruyF0ENxbQ==
=CNGE
-----END PGP SIGNATURE-----

--97dlg8z4lmExvfhD--


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 00:08:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 00:08:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526867.818904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prpBW-0008TZ-Px; Thu, 27 Apr 2023 00:08:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526867.818904; Thu, 27 Apr 2023 00:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prpBW-0008TS-Mb; Thu, 27 Apr 2023 00:08:30 +0000
Received: by outflank-mailman (input) for mailman id 526867;
 Thu, 27 Apr 2023 00:08:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prpBV-0008TI-44; Thu, 27 Apr 2023 00:08:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prpBU-00049I-PJ; Thu, 27 Apr 2023 00:08:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1prpBU-0000MF-CJ; Thu, 27 Apr 2023 00:08:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1prpBU-0005ei-Bq; Thu, 27 Apr 2023 00:08:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o6QAMcz2W5gG4cVNR5fo8fwFQWc1ViL7hTWtrhxTryg=; b=TganZspEmzLx+cnRNFmPg/KmPe
	BkpMnNXj39wqAyJkdTq9SefPrrLZJA29dGBNgncG1sZznGVZsO2/C5dkCmkBEzSgrz5dvcqbigWE9
	r9QTYTb0ztnEQzJ6qlagTm9tkobFyboIfYnlEihOgaVJ7DsX+qUFKrcFgoYRnayQdmvQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180426-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.15-testing test] 180426: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.15-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.15-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=87cb0fd8757542893336aa2ffce3947451adf144
X-Osstest-Versions-That:
    xen=7963cdbf91d8a8d2f8338171adab3807b20f658a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Apr 2023 00:08:28 +0000

flight 180426 xen-4.15-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180426/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180218
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180218
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180218
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180218
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180218
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180218
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180218
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180218
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180218
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt   16 saverestore-support-check fail starved in 180218
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail starved in 180218
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail starved in 180218

version targeted for testing:
 xen                  87cb0fd8757542893336aa2ffce3947451adf144
baseline version:
 xen                  7963cdbf91d8a8d2f8338171adab3807b20f658a

Last test of basis   180218  2023-04-12 08:06:45 Z   14 days
Testing same since   180426  2023-04-26 07:39:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7963cdbf91..87cb0fd875  87cb0fd8757542893336aa2ffce3947451adf144 -> stable-4.15


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 05:52:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 05:52:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526876.818921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pruXs-0007nd-Ib; Thu, 27 Apr 2023 05:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526876.818921; Thu, 27 Apr 2023 05:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pruXs-0007nG-CP; Thu, 27 Apr 2023 05:51:56 +0000
Received: by outflank-mailman (input) for mailman id 526876;
 Thu, 27 Apr 2023 05:51:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pruXq-0007n6-65; Thu, 27 Apr 2023 05:51:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pruXp-0002pk-RD; Thu, 27 Apr 2023 05:51:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pruXp-0001cF-Br; Thu, 27 Apr 2023 05:51:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pruXp-0008Ko-BB; Thu, 27 Apr 2023 05:51:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CX/1Z3FVcTlc2GmZxq8CWLOOs4ZD/DVq+S26JhUTgA0=; b=0jk+jU0OlrySDwltPlIAa7+9j6
	Eqkh+iHofis40TyY5IfjmJj3GJc/ieHtXh+4pREDbhGcGlWX5ALiL+/K7la+RIgrJqhm1ryyM/qZO
	FtVAY7MOR0LdVB7JiD2eZBebfyYZtvxhS8x0qx+6uDubx2M+4Hrp7vy8pQOFeAhEE5i8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180427-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180427: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0cfd8703e7da687924371e9bc77a025bdeba9637
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Apr 2023 05:51:53 +0000

flight 180427 linux-linus real [real]
flight 180439 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180427/
http://logs.test-lab.xenproject.org/osstest/logs/180439/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0cfd8703e7da687924371e9bc77a025bdeba9637
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   10 days
Failing since        180281  2023-04-17 06:24:36 Z    9 days   17 attempts
Testing same since   180427  2023-04-26 08:52:34 Z    0 days    1 attempts

------------------------------------------------------------
1122 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 95217 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 06:42:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 06:42:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526885.818931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prvKE-0004uN-9E; Thu, 27 Apr 2023 06:41:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526885.818931; Thu, 27 Apr 2023 06:41:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prvKE-0004uG-5B; Thu, 27 Apr 2023 06:41:54 +0000
Received: by outflank-mailman (input) for mailman id 526885;
 Thu, 27 Apr 2023 06:41:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R8CS=AS=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1prvKC-0004uA-Oy
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 06:41:53 +0000
Received: from mail-pg1-x52c.google.com (mail-pg1-x52c.google.com
 [2607:f8b0:4864:20::52c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 94c352b0-e4c6-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 08:41:47 +0200 (CEST)
Received: by mail-pg1-x52c.google.com with SMTP id
 41be03b00d2f7-52079a12451so6062737a12.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Apr 2023 23:41:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94c352b0-e4c6-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682577707; x=1685169707;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=EP2CuVw1sQBqyKNWGbgs/xR1rv9yEs96g44ek0mlGcE=;
        b=Iv+EakVmXXSmeMeWfZiW7FzPdbI+mQ1nB1rIkn35yBqaNqy4vDlpl1FYNKbhn2Ayyr
         +/LArkarvQhV7v41GB0Z5Y4obMIqv8xBUhdXAniyfcAA6mGhvVQb3IzFurnAGn9KBBb7
         NMG41KVz2IG5lc5vthagQOYcUHvkjGLlb6L4m1n5RyAeO3H7a9QjfZ6aMXpui+V7pdqT
         5pKLzV0K2uC5eur3/jHIQ02WP4Syer3L1dhKLp5fbJwUpPPPRqrMrCunHid70XeHQeX8
         x5Ev6li9PLQvNi/wlGGJdM0sBdV2FIRzs7tI1Y2HOaQMWpV7aKhPAXddyP6p05oevSi9
         mrqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682577707; x=1685169707;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=EP2CuVw1sQBqyKNWGbgs/xR1rv9yEs96g44ek0mlGcE=;
        b=U/j+pID0LFRXR845ccbRpTXdTZ+7wMZOzCREWc6kpvYUH6ac21VcPz6QCHpCoUfK8m
         LB15T6sKjfk6HBx38Esf9TAOaytyojeZyz1pBfUfs8HOHveoR49CReEpMdNHqbVH2DNa
         6VR3mpo8CbH79DKgbQxnlQu12YYkLBF8KE4ZogTKHNI9zoyJqdDluHVx/7rhRMy8xQA+
         k10i578icxniMjupQ7zp92bA9KmA8suv3xX3cdYhh8HaxR1sKmLur28E8TaJOhZ0MY3o
         ZuSe+umrbq7lTs0fJq6HiGvxQ+6quxkvnzWkafesyoHqrD6mCA3/OYWLbH5YLbfkEQF+
         qUPA==
X-Gm-Message-State: AC+VfDxa1AMVk+x3s0wb2ipvMdMEiQYjhELHhtuMVTp2AijPgZbEfIWI
	tDTg9CdUBTWn65KoLn/f6MzYFP9/N008QIg7IxE=
X-Google-Smtp-Source: ACHHUZ4tUXVsEGineD0PrAwrJrzskV+t+pwwZvOM3dkEFqDrWLZGDAeap5EXnL18PpA/GkHudmVHp3oMrx8A7WiOv+w=
X-Received: by 2002:a17:90b:1645:b0:247:e4c:d168 with SMTP id
 il5-20020a17090b164500b002470e4cd168mr862046pjb.10.1682577706201; Wed, 26 Apr
 2023 23:41:46 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <CA+SAi2srSq5Vwq8KL4TGc-GC3OjsFf=d-yKLVPw=C0KfBW67eA@mail.gmail.com>
 <58cae772-dd3b-31f4-9849-9c2597f6eae6@amd.com> <CA+SAi2vU0i9trrdgCusB0WYJmYLqjXRk9qSGALjMbKYvmPGcvw@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
 <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
 <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
 <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com>
 <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com> <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com> <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
 <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com> <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Thu, 27 Apr 2023 09:46:16 +0300
Message-ID: <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Michal Orzel <michal.orzel@amd.com>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>, 
	Stewart.Hildebrand@amd.com
Content-Type: multipart/alternative; boundary="0000000000007c382805fa4ba681"

--0000000000007c382805fa4ba681
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello Stefano,

Thanks for the clarification.
We use neither ImageBuilder nor a u-boot boot script.
The model is zcu102-compatible.

Regards,
O.

On Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org> wrote:

> This is interesting. Are you using Xilinx hardware by any chance? If so,
> which board?
>
> Are you using ImageBuilder to generate your boot.scr boot script? If so,
> could you please post your ImageBuilder config file? If not, can you
> post the source of your uboot boot script?
>
> SErrors are supposed to be related to a hardware failure of some kind.
> You are not supposed to be able to trigger an SError easily by
> "mistake". I have not seen SErrors due to wrong cache coloring
> configurations on any Xilinx board before.
>
> The differences between Xen with and without cache coloring from a
> hardware perspective are:
>
> - With cache coloring, the SMMU is enabled and does address translations
>   even for dom0. Without cache coloring the SMMU could be disabled, and
>   if enabled, the SMMU doesn't do any address translations for Dom0. If
>   there is a hardware failure related to SMMU address translation it
>   could only trigger with cache coloring. This would be my normal
>   suggestion for you to explore, but the failure happens too early
>   before any DMA-capable device is programmed. So I don't think this can
>   be the issue.
>
> - With cache coloring, the memory allocation is very different so you'll
>   end up using different DDR regions for Dom0. So if your DDR is
>   defective, you might only see a failure with cache coloring enabled
>   because you end up using different regions.
>
>
> On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
> > Hi Stefano,
> >
> > Thank you.
> > If I build Xen without color support, this error does not appear.
> > All the domains boot well.
> > Hence it cannot be a hardware issue.
> > The panic arrived while unpacking the rootfs.
> > I attached the Xen/Dom0 boot log without coloring.
> > The highlighted strings are printed exactly after the place where the
> panic first arrived.
> >
> >  Xen 4.16.1-pre
> > (XEN) Xen version 4.16.1-pre (nole2390@(none))
> (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
> > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300
> git:321687b231-dirty
> > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
> > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part
> 0xd03,rev 0x4
> > (XEN) 64-bit Execution:
> > (XEN)   Processor Features: 0000000000002222 0000000000000000
> > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
> > (XEN)     Extensions: FloatingPoint AdvancedSIMD
> > (XEN)   Debug Features: 0000000010305106 0000000000000000
> > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
> > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
> > (XEN)   ISA Features:  0000000000011120 0000000000000000
> > (XEN) 32-bit Execution:
> > (XEN)   Processor Features: 0000000000000131:0000000000011011
> > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
> > (XEN)     Extensions: GenericTimer Security
> > (XEN)   Debug Features: 0000000003010066
> > (XEN)   Auxiliary Features: 0000000000000000
> > (XEN)   Memory Model Features: 0000000010201105 0000000040000000
> > (XEN)                          0000000001260000 0000000002102211
> > (XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
> > (XEN)                 0000000001112131 0000000000011142 0000000000011121
> > (XEN) Using SMC Calling Convention v1.2
> > (XEN) Using PSCI v1.1
> > (XEN) SMP: Allowing 4 CPUs
> > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
> > (XEN) GICv2 initialization:
> > (XEN)         gic_dist_addr=00000000f9010000
> > (XEN)         gic_cpu_addr=00000000f9020000
> > (XEN)         gic_hyp_addr=00000000f9040000
> > (XEN)         gic_vcpu_addr=00000000f9060000
> > (XEN)         gic_maintenance_irq=25
> > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
> > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
> > (XEN) Using scheduler: null Scheduler (null)
> > (XEN) Initializing null scheduler
> > (XEN) WARNING: This is experimental software in development.
> > (XEN) Use at your own risk.
> > (XEN) Allocated console ring of 32 KiB.
> > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
> > (XEN) Bringing up CPU1
> > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
> > (XEN) CPU 1 booted.
> > (XEN) Bringing up CPU2
> > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
> > (XEN) CPU 2 booted.
> > (XEN) Bringing up CPU3
> > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
> > (XEN) Brought up 4 CPUs
> > (XEN) CPU 3 booted.
> > (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
> > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
> > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
> > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register
> groups, mask 0x7fff
> > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
> > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
> > (XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
> > (XEN) I/O virtualisation enabled
> > (XEN)  - Dom0 mode: Relaxed
> > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> > (XEN) alternatives: Patching with alt table 00000000002cc5c8 ->
> 00000000002ccb2c
> > (XEN) *** LOADING DOMAIN 0 ***
> > (XEN) Loading d0 kernel from boot module @ 0000000001000000
> > (XEN) Loading ramdisk from boot module @ 0000000002000000
> > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
> > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
> > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
> > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
> > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
> > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
> > (XEN) Allocating PPI 16 for event channel interrupt
> > (XEN) Extended region 0: 0x81200000->0xa0000000
> > (XEN) Extended region 1: 0xb1200000->0xc0000000
> > (XEN) Extended region 2: 0xc8000000->0xe0000000
> > (XEN) Extended region 3: 0xf0000000->0xf9000000
> > (XEN) Extended region 4: 0x100000000->0x600000000
> > (XEN) Extended region 5: 0x880000000->0x8000000000
> > (XEN) Extended region 6: 0x8001000000->0x10000000000
> > (XEN) Loading zImage from 0000000001000000 to
> 0000000010000000-0000000010e41008
> > (XEN) Loading d0 initrd from 0000000002000000 to
> 0x0000000013600000-0x000000001ff3a617
> > (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
> > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > (XEN) Std. Loglevel: All
> > (XEN) Guest Loglevel: All
> > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch
> input)
> > (XEN) null.c:353: 0 <-- d0v0
> > (XEN) Freed 356kB init memory.
> > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
> > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
> > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
> > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host)
> (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils)
> > 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> > [    0.000000] Machine model: D14 Viper Board - White Unit
> > [    0.000000] Xen 4.16 support found
> > [    0.000000] Zone ranges:
> > [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
> > [    0.000000]   DMA32    empty
> > [    0.000000]   Normal   empty
> > [    0.000000] Movable zone start for each node
> > [    0.000000] Early memory node ranges
> > [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
> > [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
> > [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
> > [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
> > [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
> > [    0.000000] Initmem setup node 0 [mem
> 0x0000000010000000-0x000000007fffffff]
> > [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
> > [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
> > [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
> > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
> > [    0.000000] psci: probing for conduit method from DT.
> > [    0.000000] psci: PSCIv1.1 detected in firmware.
> > [    0.000000] psci: Using standard PSCI v0.2 function IDs
> > [    0.000000] psci: Trusted OS migration not required
> > [    0.000000] psci: SMC Calling Convention v1.1
> > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
> > [    0.000000] Detected VIPT I-cache on CPU0
> > [    0.000000] CPU features: kernel page table isolation forced ON by
> KASLR
> > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
> > [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages:
> 403845
> > [    0.000000] Kernel command line: console=hvc0 earlycon=xen
> earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
> > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen
> fips=1", will be passed to user space.
> > [    0.000000] Dentry cache hash table entries: 262144 (order: 9,
> 2097152 bytes, linear)
> > [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576
> bytes, linear)
> > [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
> > [    0.000000] mem auto-init: clearing system memory may take some
> time...
> > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code,
> 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved,
> > 262144K cma-reserved)
> > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
> > [    0.000000] rcu: Hierarchical RCU implementation.
> > [    0.000000] rcu: RCU event tracing is enabled.
> > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
> > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay
> is 25 jiffies.
> > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16,
> nr_cpu_ids=2
> > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
> > [    0.000000] Root IRQ handler: gic_handle_irq
> > [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
> > [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff
> max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
> > [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps
> every 4398046511100ns
> > [    0.000258] Console: colour dummy device 80x25
> > [    0.310231] printk: console [hvc0] enabled
> > [    0.314403] Calibrating delay loop (skipped), value calculated using
> timer frequency.. 200.00 BogoMIPS (lpj=400000)
> > [    0.324851] pid_max: default: 32768 minimum: 301
> > [    0.329706] LSM: Security Framework initializing
> > [    0.334204] Yama: becoming mindful.
> > [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768
> bytes, linear)
> > [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3,
> 32768 bytes, linear)
> > [    0.354743] xen:grant_table: Grant tables using version 1 layout
> > [    0.359132] Grant table initialized
> > [    0.362664] xen:events: Using FIFO-based ABI
> > [    0.366993] Xen: initializing cpu0
> > [    0.370515] rcu: Hierarchical SRCU implementation.
> > [    0.375930] smp: Bringing up secondary CPUs ...
> > (XEN) null.c:353: 1 <-- d0v1
> > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > [    0.382549] Detected VIPT I-cache on CPU1
> > [    0.388712] Xen: initializing cpu1
> > [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
> > [    0.388829] smp: Brought up 1 node, 2 CPUs
> > [    0.406941] SMP: Total of 2 processors activated.
> > [    0.411698] CPU features: detected: 32-bit EL0 Support
> > [    0.416888] CPU features: detected: CRC32 instructions
> > [    0.422121] CPU: All CPU(s) started at EL1
> > [    0.426248] alternatives: patching kernel code
> > [    0.431424] devtmpfs: initialized
> > [    0.441454] KASLR enabled
> > [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles:
> 0xffffffff, max_idle_ns: 7645041785100000 ns
> > [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes,
> linear)
> > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
> > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic
> allocations
> > [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for
> atomic allocations
> > [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for
> atomic allocations
> > [    0.519478] audit: initializing netlink subsys (disabled)
> > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized
> audit_enabled=0 res=1
> > [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
> > [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint
> registers.
> > [    0.545608] ASID allocator initialised with 32768 entries
> > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for
> software IO TLB
> > [    0.559332] software IO TLB: mapped [mem
> 0x0000000011800000-0x0000000011c00000] (4MB)
> > [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0
> pages
> > [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0
> pages
> > [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0
> pages
> > [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0
> pages
> > [    0.636520] DRBG: Continuing without Jitter RNG
> > [    0.737187] raid6: neonx8   gen()  2143 MB/s
> > [    0.805294] raid6: neonx8   xor()  1589 MB/s
> > [    0.873406] raid6: neonx4   gen()  2177 MB/s
> > [    0.941499] raid6: neonx4   xor()  1556 MB/s
> > [    1.009612] raid6: neonx2   gen()  2072 MB/s
> > [    1.077715] raid6: neonx2   xor()  1430 MB/s
> > [    1.145834] raid6: neonx1   gen()  1769 MB/s
> > [    1.213935] raid6: neonx1   xor()  1214 MB/s
> > [    1.282046] raid6: int64x8  gen()  1366 MB/s
> > [    1.350132] raid6: int64x8  xor()   773 MB/s
> > [    1.418259] raid6: int64x4  gen()  1602 MB/s
> > [    1.486349] raid6: int64x4  xor()   851 MB/s
> > [    1.554464] raid6: int64x2  gen()  1396 MB/s
> > [    1.622561] raid6: int64x2  xor()   744 MB/s
> > [    1.690687] raid6: int64x1  gen()  1033 MB/s
> > [    1.758770] raid6: int64x1  xor()   517 MB/s
> > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
> > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
> > [    1.767957] raid6: using neon recovery algorithm
> > [    1.772824] xen:balloon: Initialising balloon driver
> > [    1.778021] iommu: Default domain type: Translated
> > [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
> > [    1.789149] SCSI subsystem initialized
> > [    1.792820] usbcore: registered new interface driver usbfs
> > [    1.798254] usbcore: registered new interface driver hub
> > [    1.803626] usbcore: registered new device driver usb
> > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007
> Rodolfo Giometti <giometti@linux.it>
> > [    1.822903] PTP clock support registered
> > [    1.826893] EDAC MC: Ver: 3.0.0
> > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI
> mbox with TX/RX channels.
> > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI
> mbox with TX/RX channels.
> > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI
> mbox with TX/RX channels.
> > [    1.855907] FPGA manager framework
> > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
> > [    1.871712] NET: Registered PF_INET protocol family
> > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144
> bytes, linear)
> > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order:
> 2, 16384 bytes, linear)
> > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144
> bytes, linear)
> > [    1.894846] TCP established hash table entries: 16384 (order: 5,
> 131072 bytes, linear)
> > [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144
> bytes, linear)
> > [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
> > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes,
> linear)
> > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes,
> linear)
> > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
> > [    1.936834] RPC: Registered named UNIX socket transport module.
> > [    1.942342] RPC: Registered udp transport module.
> > [    1.947088] RPC: Registered tcp transport module.
> > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
> > [    1.958334] PCI: CLS 0 bytes, default 64
> > [    1.962709] Trying to unpack rootfs image as initramfs...
> > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
> > [    2.021045] NET: Registered PF_ALG protocol family
> > [    2.021122] xor: measuring software checksum speed
> > [    2.029347]    8regs           :  2366 MB/sec
> > [    2.033081]    32regs          :  2802 MB/sec
> > [    2.038223]    arm64_neon      :  2320 MB/sec
> > [    2.038385] xor: using function: 32regs (2802 MB/sec)
> > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded
> (major 247)
> > [    2.050959] io scheduler mq-deadline registered
> > [    2.055521] io scheduler kyber registered
> > [    2.068227] xen:xen_evtchn: Event-channel device installed
> > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
> > [    2.085548] brd: module loaded
> > [    2.089290] loop: module loaded
> > [    2.089341] Invalid max_queues (4), will use default max: 2.
> > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
> > [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
> > [    2.104156] usbcore: registered new interface driver rtl8150
> > [    2.109813] usbcore: registered new interface driver r8152
> > [    2.115367] usbcore: registered new interface driver asix
> > [    2.120794] usbcore: registered new interface driver ax88179_178a
> > [    2.126934] usbcore: registered new interface driver cdc_ether
> > [    2.132816] usbcore: registered new interface driver cdc_eem
> > [    2.138527] usbcore: registered new interface driver net1080
> > [    2.144256] usbcore: registered new interface driver cdc_subset
> > [    2.150205] usbcore: registered new interface driver zaurus
> > [    2.155837] usbcore: registered new interface driver cdc_ncm
> > [    2.161550] usbcore: registered new interface driver r8153_ecm
> > [    2.168240] usbcore: registered new interface driver cdc_acm
> > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems
> and ISDN adapters
> > [    2.181358] usbcore: registered new interface driver uas
> > [    2.186547] usbcore: registered new interface driver usb-storage
> > [    2.192643] usbcore: registered new interface driver ftdi_sio
> > [    2.198384] usbserial: USB Serial support registered for FTDI USB
> Serial Device
> > [    2.206118] udc-core: couldn't find an available UDC - added
> [g_mass_storage] to list of pending drivers
> > [    2.215332] i2c_dev: i2c /dev entries driver
> > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
> > [    2.225923] device-mapper: uevent: version 1.0.3
> > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22)
> initialised: dm-devel@redhat.com
> > [    2.239315] EDAC MC0: Giving out device to module 1 controller
> synps_ddr_controller: DEV synps_edac (INTERRUPT)
> > [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac
> controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
> > [    2.261719] sdhci: Secure Digital Host Controller Interface driver
> > [    2.267487] sdhci: Copyright(c) Pierre Ossman
> > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
> > [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
> > [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
> > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
> > [    2.327875] securefw securefw: securefw probed
> > [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
> > [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES
> Successfully Registered
> > [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
> > [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
> > [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
> > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
> > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy
> registered
> > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
> > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info:
> 1.512.15.0 KeyLen: 32
> > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper
> handler. Retrying...
> > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
> > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
> > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
> > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI
> Count: 512 Event Count: 32
> > [    2.420856] default preset
> > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
> > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
> > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
> > [    2.441976] vmcu driver init
> > [    2.444922] VMCU: : (240:0) registered
> > [    2.444956] In K81 Updater init
> > [    2.449003] pktgen: Packet Generator for packet performance testing.
> Version: 2.75
> > [    2.468833] Initializing XFRM netlink socket
> > [    2.468902] NET: Registered PF_PACKET protocol family
> > [    2.472729] Bridge firewalling registered
> > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
> > [    2.481341] registered taskstats version 1
> > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
> > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36,
> base_baud = 6250000) is a xuartps
> > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
> > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA
> driver Probe success
> > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA
> driver Probe success
> > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA
> driver Probe success
> > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA
> driver Probe success
> > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA
> driver Probe success
> > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA
> driver Probe success
> > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA
> driver Probe success
> > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA
> driver Probe success
> > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA
> driver Probe success
> > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA
> driver Probe success
> > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
> > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
> > [    2.952393] Creating 2 MTD partitions on "spi0.0":
> > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
> > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
> > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and
> forward
> > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106
> at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
> > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and
> forward
> > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106
> at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
> > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
> > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate
> interface QSGMII
> > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate
> interface QSGMII
> > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate
> interface type 18
> > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate
> interface QSGMII
> > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate
> interface QSGMII
> > [    3.045301] viper_enet viper_enet: Viper enet registered
> > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
> > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
> > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
> > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
> > [    3.097729] si70xx: probe of 2-0040 failed with error -5
> > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with
> timeout 60s
> > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with
> timeout 10s
> > [    3.112457] viper-tamper viper-tamper: Device registered
> > [    3.117593] active_bank active_bank: boot bank: 1
> > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
> > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
> > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info:
> 1.512.15.0 KeyLen: 32
> > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
> > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
> > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
> > [    3.158582] lpc55_user lpc55_user: The major number for your device
> is 236
> > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
> > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
> > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
> > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
> > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
> > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc]
> using ADMA 64-bit
> > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
> > [    3.215694] lpc55_l2 spi1.0: rx error: -110
> > [    3.284438] mmc0: new HS200 MMC card at address 0001
> > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware
> clock
> > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
> > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
> > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
> > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
> > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
> > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
> > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
> > [    3.633253] lpc55_l2 spi1.0: rx error: -110
> > [    3.639104] k81_bootloader 0-0010: probe
> > [    3.641628] VMCU: : (235:0) registered
> > [    3.641635] k81_bootloader 0-0010: probe completed
> > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
> > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
> > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
> > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C
> switch pca9546
> > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
> > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
> > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
> > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> > [    3.737549] sfp_register_socket: got sfp_bus
> > [    3.740709] sfp_register_socket: register sfp_bus
> > [    3.745459] sfp_register_bus: ops ok!
> > [    3.749179] sfp_register_bus: Try to attach
> > [    3.753419] sfp_register_bus: Attach succeeded
> > [    3.757914] sfp_register_bus: upstream ops attach
> > [    3.762677] sfp_register_bus: Bus registered
> > [    3.766999] sfp_register_socket: register sfp_bus succeeded
> > [    3.775870] of_cfs_init
> > [    3.776000] of_cfs_init: OK
> > [    3.778211] clk: Not disabling unused clocks
> > [   11.278477] Freeing initrd memory: 206056K
> > [   11.279406] Freeing unused kernel memory: 1536K
> > [   11.314006] Checked W+X mappings: passed, no W+X pages found
> > [   11.314142] Run /init as init process
> > INIT: version 3.01 booting
> > fsck (busybox 1.35.0)
> > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
> > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal.
> Opts: (null). Quota mode: disabled.
> > Starting random number generator daemon.
> > [   11.580662] random: crng init done
> > Starting udev
> > [   11.613159] udevd[142]: starting version 3.2.10
> > [   11.620385] udevd[143]: starting eudev-3.2.10
> > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> > [   12.063396] ip_local_port_range: prefer different parity for
> start/end values.
> > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > hwclock: RTC_RD_TIME: Invalid exchange
> > Mon Feb 27 08:40:53 UTC 2023
> > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> > hwclock: RTC_SET_TIME: Invalid exchange
> > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > Starting mcud
> > INIT: Entering runlevel: 5
> > Configuring network interfaces... done.
> > resetting network interface
> > [   12.718295] macb ff0b0000.ethernet control_red: PHY
> [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> > [   12.723919] macb ff0b0000.ethernet control_red: configuring for
> phy/gmii link mode
> > [   12.732151] pps pps0: new PPS source ptp0
> > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock
> registered.
> > [   12.745724] macb ff0c0000.ethernet control_black: PHY
> [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> > [   12.753469] macb ff0c0000.ethernet control_black: configuring for
> phy/gmii link mode
> > [   12.761804] pps pps1: new PPS source ptp1
> > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock
> registered.
> > Auto-negotiation: off
> > Auto-negotiation: off
> > [   16.828151] macb ff0b0000.ethernet control_red: unable to generate
> target frequency: 125000000 Hz
> > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up -
> 1Gbps/Full - flow control off
> > [   16.860552] macb ff0c0000.ethernet control_black: unable to generate
> target frequency: 125000000 Hz
> > [   16.867052] macb ff0c0000.ethernet control_black: Link is Up -
> 1Gbps/Full - flow control off
> > Starting Failsafe Secure Shell server in port 2222: sshd
> > done.
> > Starting rpcbind daemon...done.
> >
> > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > hwclock: RTC_RD_TIME: Invalid exchange
> > Starting State Manager Service
> > Start state-manager restarter...
> > (XEN) d0v1 Forwarding AES operation: 3254779951
> > Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
> > [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> > [   17.350670] BTRFS info (device dm-0): has skinny extents
> > [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> > [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
> > [   17.872699] BTRFS info (device dm-1): using free space tree
> > [   17.872771] BTRFS info (device dm-1): has skinny extents
> > [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> > [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> > [   17.895695] BTRFS info (device dm-1): checking UUID tree
> >
> > Setting domain 0 name, domid and JSON config...
> > Done setting up Dom0
> > Starting xenconsoled...
> > Starting QEMU as disk backend for dom0
> > Starting domain watchdog daemon: xenwatchdogd startup
> >
> > [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
> > [done]
> > [   18.465552] BTRFS info (device dm-2): using free space tree
> > [   18.465629] BTRFS info (device dm-2): has skinny extents
> > [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> > Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> > [   18.486659] BTRFS info (device dm-2): checking UUID tree
> > OK
> > starting rsyslogd ... Log partition ready after 0 poll loops
> > done
> > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> > [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
> >
> > Please insert USB token and enter your role in login prompt.
> >
> > login:
> >
> > Regards,
> > O.
> >
> >
> > Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org>:
> >       Hi Oleg,
> >
> >       Here is the issue from your logs:
> >
> >       SError Interrupt on CPU0, code 0xbe000000 -- SError
> >
> >       SErrors are special signals to notify software of serious hardware
> >       errors.  Something is going very wrong. Defective hardware is a
> >       possibility.  Another possibility is software accessing address
> >       ranges that it is not supposed to; that can also cause SErrors.
> >
> >       Cheers,
> >
> >       Stefano
> >
> >
> >
> >       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> >
> >       > Hello,
> >       >
> >       > Thanks guys.
> >       > I found out where the problem was.
> >       > Now dom0 booted more. But I have a new one.
> >       > This is a kernel panic during Dom0 loading.
> >       > Maybe someone is able to suggest something ?
> >       >
> >       > Regards,
> >       > O.
> >       >
> >       > [    3.771362] sfp_register_bus: upstream ops attach
> >       > [    3.776119] sfp_register_bus: Bus registered
> >       > [    3.780459] sfp_register_socket: register sfp_bus succeeded
> >       > [    3.789399] of_cfs_init
> >       > [    3.789499] of_cfs_init: OK
> >       > [    3.791685] clk: Not disabling unused clocks
> >       > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> >       > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> >       > [   11.010393] Workqueue: events_unbound async_run_entry_fn
> >       > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> >       > [   11.010422] pc : simple_write_end+0xd0/0x130
> >       > [   11.010431] lr : generic_perform_write+0x118/0x1e0
> >       > [   11.010438] sp : ffffffc00809b910
> >       > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
> >       > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
> >       > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
> >       > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
> >       > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
> >       > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> >       > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
> >       > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
> >       > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
> >       > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
> >       > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> >       > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> >       > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
> >       > [   11.010548] Workqueue: events_unbound async_run_entry_fn
> >       > [   11.010556] Call trace:
> >       > [   11.010558]  dump_backtrace+0x0/0x1c4
> >       > [   11.010567]  show_stack+0x18/0x2c
> >       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> >       > [   11.010583]  dump_stack+0x18/0x34
> >       > [   11.010588]  panic+0x14c/0x2f8
> >       > [   11.010597]  print_tainted+0x0/0xb0
> >       > [   11.010606]  arm64_serror_panic+0x6c/0x7c
> >       > [   11.010614]  do_serror+0x28/0x60
> >       > [   11.010621]  el1h_64_error_handler+0x30/0x50
> >       > [   11.010628]  el1h_64_error+0x78/0x7c
> >       > [   11.010633]  simple_write_end+0xd0/0x130
> >       > [   11.010639]  generic_perform_write+0x118/0x1e0
> >       > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
> >       > [   11.010650]  generic_file_write_iter+0x78/0xd0
> >       > [   11.010656]  __kernel_write+0xfc/0x2ac
> >       > [   11.010665]  kernel_write+0x88/0x160
> >       > [   11.010673]  xwrite+0x44/0x94
> >       > [   11.010680]  do_copy+0xa8/0x104
> >       > [   11.010686]  write_buffer+0x38/0x58
> >       > [   11.010692]  flush_buffer+0x4c/0xbc
> >       > [   11.010698]  __gunzip+0x280/0x310
> >       > [   11.010704]  gunzip+0x1c/0x28
> >       > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> >       > [   11.010715]  do_populate_rootfs+0x80/0x164
> >       > [   11.010722]  async_run_entry_fn+0x48/0x164
> >       > [   11.010728]  process_one_work+0x1e4/0x3a0
> >       > [   11.010736]  worker_thread+0x7c/0x4c0
> >       > [   11.010743]  kthread+0x120/0x130
> >       > [   11.010750]  ret_from_fork+0x10/0x20
> >       > [   11.010757] SMP: stopping secondary CPUs
> >       > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
> >       > [   11.010788] PHYS_OFFSET: 0x0
> >       > [   11.010790] CPU features: 0x00000401,00000842
> >       > [   11.010795] Memory Limit: none
> >       > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
> >       >
> >       > Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com>:
> >       >       Hi Oleg,
> >       >
> >       >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
> >       >       >
> >       >       >
> >       >       >
> >       >       > Hello Michal,
> >       >       >
> >       >       > I was not able to enable earlyprintk in Xen for now.
> >       >       > I decided to choose another way.
> >       >       > This is Xen's command line that I found out completely.
> >       >       >
> >       >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
> >       >       Yes, adding a printk() in Xen was also a good idea.
> >       >
> >       >       >
> >       >       > So you are absolutely right about a command line.
> >       >       > Now I am going to find out why xen did not have the
> correct parameters from the device tree.
> >       >       Maybe you will find this document helpful:
> >       >
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> >       >
> >       >       ~Michal
> >       >
> >       >       >
> >       >       > Regards,
> >       >       > Oleg
> >       >       >
> >       >       > Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com>:
> >       >       >
> >       >       >
> >       >       >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
> >       >       >     >
> >       >       >     >
> >       >       >     >
> >       >       >     > Hello Michal,
> >       >       >     >
> >       >       >     > Yes, I use yocto.
> >       >       >     >
> >       >       >     > Yesterday all day long I tried to follow your
> suggestions.
> >       >       >     > I faced a problem.
> >       >       >     > Manually in the xen config build file I pasted the strings:
> >       >       >     In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
> >       >       >     You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
> >       >       >
> >       >       >     >
> >       >       >     > CONFIG_EARLY_PRINTK
> >       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
> >       >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
> >       >       >     I hope you added =y to them.
> >       >       >
> >       >       >     Anyway, you have at least the following solutions:
> >       >       >     1) Run bitbake xen -c menuconfig to properly set early printk
> >       >       >     2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y that is not enabled by default)
> >       >       >     3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
> >       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
> >       >       >
> >       >       >     ~Michal
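
For readers following along, option 3 above can be sketched as a couple of shell commands. This is only a sketch against the usual Xen tree layout (xlnx_rebase_4.16); the `mkdir -p` is a no-op in a real checkout and is only there so the snippet is self-contained:

```shell
# Option 3 sketched: append the early-printk symbols from this thread to the
# arm64 defconfig of a Xen source checkout, then let kconfig resolve the rest
# (e.g. via "make -C xen ... defconfig" or "make olddefconfig").
defcfg=xen/arch/arm/configs/arm64_defconfig
mkdir -p "$(dirname "$defcfg")"   # no-op in a real checkout
cat >> "$defcfg" <<'EOF'
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
CONFIG_EARLY_UART_CHOICE_CADENCE=y
EOF
grep -c '^CONFIG_EARLY' "$defcfg"   # sanity check: at least 3 matching lines
```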
> >       >       >
> >       >       >     >
> >       >       >     > Host hangs at build time.
> >       >       >     > Maybe I did not set something in the build config file?
> >       >       >     >
> >       >       >     > Regards,
> >       >       >     > Oleg
> >       >       >     >
> >       >       >     > Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       >       >     >
> >       >       >     >     Thanks Michal,
> >       >       >     >
> >       >       >     >     You gave me an idea.
> >       >       >     >     I am going to try it today.
> >       >       >     >
> >       >       >     >     Regards,
> >       >       >     >     O.
> >       >       >     >
> >       >       >     >     Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       >       >     >
> >       >       >     >         Thanks Stefano.
> >       >       >     >
> >       >       >     >         I am going to do it today.
> >       >       >     >
> >       >       >     >         Regards,
> >       >       >     >         O.
> >       >       >     >
> >       >       >     >         Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
> >       >       >     >
> >       >       >     >             On Wed, 19 Apr 2023, Oleg Nikitenko
> wrote:
> >       >       >     >             > Hi Michal,
> >       >       >     >             >
> >       >       >     >             > I corrected xen's command line.
> >       >       >     >             > Now it is
> >       >       >     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> >       >       >     >
> >       >       >     >             4 colors is way too many for xen, just do
> >       >       >     >             xen_colors=0-0. There is no advantage in
> >       >       >     >             using more than 1 color for Xen.
> >       >       >     >
> >       >       >     >             4 colors is too few for dom0 if you are
> >       >       >     >             giving 1600M of memory to Dom0. Each color
> >       >       >     >             is 256M. For 1600M you should give at
> >       >       >     >             least 7 colors. Try:
> >       >       >     >
> >       >       >     >             xen_colors=0-0 dom0_colors=1-8
> >       >       >     >
> >       >       >     >
> >       >       >     >
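
[The arithmetic behind the suggestion above, sketched with the figures from this thread (256M per color, dom0_mem=1600M); dom0_colors=1-8 then gives 8 colors, comfortably above the minimum:]

```shell
# Colors needed = ceil(dom0_mem / color_size); both figures are from the thread.
dom0_mem_m=1600    # dom0_mem=1600M
color_size_m=256   # "Each color is 256M"
echo $(( (dom0_mem_m + color_size_m - 1) / color_size_m ))   # prints 7
```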
> >       >       >     >             > Unfortunately the result was the same.
> >       >       >     >             >
> >       >       >     >             > (XEN)  - Dom0 mode: Relaxed
> >       >       >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> >       >       >     >             > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> >       >       >     >             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> >       >       >     >             > (XEN) Coloring general information
> >       >       >     >             > (XEN) Way size: 64kB
> >       >       >     >             > (XEN) Max. number of colors available: 16
> >       >       >     >             > (XEN) Xen color(s): [ 0 ]
> >       >       >     >             > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> >       >       >     >             > (XEN) Color array allocation failed for dom0
> >       >       >     >             > (XEN)
> >       >       >     >             > (XEN) ****************************************
> >       >       >     >             > (XEN) Panic on CPU 0:
> >       >       >     >             > (XEN) Error creating domain 0
> >       >       >     >             > (XEN) ****************************************
> >       >       >     >             > (XEN)
> >       >       >     >             > (XEN) Reboot in five seconds...
> >       >       >     >             >
> >       >       >     >             > I am going to find out how command line arguments are passed and parsed.
> >       >       >     >             >
> >       >       >     >             > Regards,
> >       >       >     >             > Oleg
> >       >       >     >             >
> >       >       >     >             > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       >       >     >             >       Hi Michal,
> >       >       >     >             >
> >       >       >     >             > You pointed my nose at the problem. Thank you.
> >       >       >     >             > I am going to use your point.
> >       >       >     >             > Let's see what happens.
> >       >       >     >             >
> >       >       >     >             > Regards,
> >       >       >     >             > Oleg
> >       >       >     >             >
> >       >       >     >             >
> >       >       >     >             > Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
> >       >       >     >             >       Hi Oleg,
> >       >       >     >             >
> >       >       >     >             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
> >       >       >     >             >       >
> >       >       >     >             >       >
> >       >       >     >             >       >
> >       >       >     >             >       > Hello Stefano,
> >       >       >     >             >       >
> >       >       >     >             >       > Thanks for the clarification.
> >       >       >     >             >       > My company uses Yocto for image generation.
> >       >       >     >             >       > What kind of information do you need to consult me in this case?
> >       >       >     >             >       >
> >       >       >     >             >       > Maybe module sizes/addresses which were mentioned by @Julien Grall <julien@xen.org>?
> >       >       >     >             >
> >       >       >     >             >       Sorry for jumping into the discussion, but FWICS the Xen command
> >       >       >     >             >       line you provided seems to be not the one Xen booted with. The
> >       >       >     >             >       error you are observing is most likely due to the dom0 colors
> >       >       >     >             >       configuration not being specified (i.e. lack of the
> >       >       >     >             >       dom0_colors=<> parameter). Although in the command line you
> >       >       >     >             >       provided this parameter is set, I strongly doubt that this is
> >       >       >     >             >       the actual command line in use.
> >       >       >     >             >
> >       >       >     >             >       You wrote:
> >       >       >     >             >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
> >       >       >     >             >
> >       >       >     >             >       but:
> >       >       >     >             >       1) way_szize has a typo
> >       >       >     >             >       2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
> >       >       >     >             >       (XEN) Xen color(s): [ 0 ]
> >       >       >     >             >
> >       >       >     >             >       This makes me believe that no colors configuration actually
> >       >       >     >             >       ended up in the command line that Xen booted with. A single
> >       >       >     >             >       color for Xen is the "default if not specified", and the way
> >       >       >     >             >       size was probably calculated by asking the HW.
> >       >       >     >             >
> >       >       >     >             >       So I would suggest to first cross-check the command line in use.
> >       >       >     >             >
> >       >       >     >             >       ~Michal
> >       >       >     >             >
> >       >       >     >             >
> >       >       >     >             >       >
> >       >       >     >             >       > Regards,
> >       >       >     >             >       > Oleg
> >       >       >     >             >       >
> >       >       >     >             >       > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org>:
> >       >       >     >             >       >
> >       >       >     >             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> >       >       >     >             >       >     > Hi Julien,
> >       >       >     >             >       >     >
> >       >       >     >             >       >     > >> This feature has not been merged in Xen upstream yet
> >       >       >     >             >       >     >
> >       >       >     >             >       >     > > would assume that upstream + the series on the ML [1] work
> >       >       >     >             >       >     >
> >       >       >     >             >       >     > Please clarify this point,
> >       >       >     >             >       >     > because the two statements seem contradictory.
> >       >       >     >             >       >
> >       >       >     >             >       >     Hi Oleg,
> >       >       >     >             >       >
> >       >       >     >             >       >     As Julien wrote, there is nothing controversial. As you are aware,
> >       >       >     >             >       >     Xilinx maintains a separate Xen tree specific for Xilinx here:
> >       >       >     >             >       >     https://github.com/xilinx/xen
> >       >       >     >             >       >
> >       >       >     >             >       >     and the branch you are using (xlnx_rebase_4.16) comes from there.
> >       >       >     >             >       >
> >       >       >     >             >       >
> >       >       >     >             >       >     Instead, the upstream Xen tree lives here:
> >       >       >     >             >       >
> >       >       >     >             >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> >       >       >     >             >       >
> >       >       >     >             >       >
> >       >       >     >             >       >     The Cache Coloring feature that you are trying to configure is present
> >       >       >     >             >       >     in xlnx_rebase_4.16, but not yet present upstream (there is an
> >       >       >     >             >       >     outstanding patch series to add cache coloring to Xen upstream but it
> >       >       >     >             >       >     hasn't been merged yet.)
> >       >       >     >             >       >
> >       >       >     >             >       >
> >       >       >     >             >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much
> >       >       >     >             >       >     for you, as you already have Cache Coloring as a feature there.
> >       >       >     >             >       >
> >       >       >     >             >       >
> >       >       >     >             >       >     I take it you are using ImageBuilder to generate the boot configuration?
> >       >       >     >             >       >     If so, please post the ImageBuilder config file that you are using.
> >       >       >     >             >       >
> >       >       >     >             >       >     But from the boot message, it looks like the colors configuration for
> >       >       >     >             >       >     Dom0 is incorrect.
> >       >       >     >             >       >
> >       >       >     >             >
> >       >       >     >             >
> >       >       >     >             >
> >       >       >     >
> >       >       >
> >       >
> >       >
> >       >
> >
> >
> >

<div dir=3D"ltr"><div>Hello Stefano,</div><div><br></div><div>Thanks for cl=
arification.</div><div>We nighter use ImageBuilder nor uboot boot script.</=
div><div>A model is zcu102 compatible.</div><div><br></div><div>Regards,</d=
iv><div>O.<br></div></div><br><div class=3D"gmail_quote"><div dir=3D"ltr" c=
lass=3D"gmail_attr">=D0=B2=D1=82, 25 =D0=B0=D0=BF=D1=80. 2023=E2=80=AF=D0=
=B3. =D0=B2 21:21, Stefano Stabellini &lt;<a href=3D"mailto:sstabellini@ker=
nel.org">sstabellini@kernel.org</a>&gt;:<br></div><blockquote class=3D"gmai=
l_quote" style=3D"margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,20=
4,204);padding-left:1ex">This is interesting. Are you using Xilinx hardware=
 by any chance? If so,<br>
which board?<br>
<br>
Are you using ImageBuilder to generate your boot.scr boot script? If so,<br=
>
could you please post your ImageBuilder config file? If not, can you<br>
post the source of your uboot boot script?<br>
<br>
SErrors are supposed to be related to a hardware failure of some kind.<br>
You are not supposed to be able to trigger an SError easily by<br>
&quot;mistake&quot;. I have not seen SErrors due to wrong cache coloring<br=
>
configurations on any Xilinx board before.<br>
<br>
The differences between Xen with and without cache coloring from a<br>
hardware perspective are:<br>
<br>
- With cache coloring, the SMMU is enabled and does address translations<br=
>
=C2=A0 even for dom0. Without cache coloring the SMMU could be disabled, an=
d<br>
=C2=A0 if enabled, the SMMU doesn&#39;t do any address translations for Dom=
0. If<br>
=C2=A0 there is a hardware failure related to SMMU address translation it<b=
r>
=C2=A0 could only trigger with cache coloring. This would be my normal<br>
=C2=A0 suggestion for you to explore, but the failure happens too early<br>
=C2=A0 before any DMA-capable device is programmed. So I don&#39;t think th=
is can<br>
=C2=A0 be the issue.<br>
<br>
- With cache coloring, the memory allocation is very different so you&#39;l=
l<br>
=C2=A0 end up using different DDR regions for Dom0. So if your DDR is<br>
=C2=A0 defective, you might only see a failure with cache coloring enabled<=
br>
=C2=A0 because you end up using different regions.<br>
<br>
<br>
On Tue, 25 Apr 2023, Oleg Nikitenko wrote:<br>
&gt; Hi Stefano,<br>
&gt; <br>
&gt; Thank you.<br>
&gt; If I build xen without colors support there is not this error.<br>
&gt; All the domains are booted well.<br>
&gt; Hense it can not be a hardware issue.<br>
&gt; This panic arrived during unpacking the rootfs.<br>
&gt; Here I attached the boot log xen/Dom0 without color.<br>
&gt; A highlighted strings printed exactly after the place where 1-st time =
panic arrived.<br>
&gt; <br>
&gt; =C2=A0Xen 4.16.1-pre<br>
&gt; (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux=
-gcc (GCC) 11.3.0) debug=3Dy 2023-04-21<br>
&gt; (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-=
dirty<br>
&gt; (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5<br>
&gt; (XEN) Processor: 00000000410fd034: &quot;ARM Limited&quot;, variant: 0=
x0, part 0xd03,rev 0x4<br>
&gt; (XEN) 64-bit Execution:<br>
&gt; (XEN) =C2=A0 Processor Features: 0000000000002222 0000000000000000<br>
&gt; (XEN) =C2=A0 =C2=A0 Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL=
0:64+32<br>
&gt; (XEN) =C2=A0 =C2=A0 Extensions: FloatingPoint AdvancedSIMD<br>
&gt; (XEN) =C2=A0 Debug Features: 0000000010305106 0000000000000000<br>
&gt; (XEN) =C2=A0 Auxiliary Features: 0000000000000000 0000000000000000<br>
&gt; (XEN) =C2=A0 Memory Model Features: 0000000000001122 0000000000000000<=
br>
&gt; (XEN) =C2=A0 ISA Features: =C2=A00000000000011120 0000000000000000<br>
&gt; (XEN) 32-bit Execution:<br>
&gt; (XEN) =C2=A0 Processor Features: 0000000000000131:0000000000011011<br>
&gt; (XEN) =C2=A0 =C2=A0 Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazell=
e<br>
&gt; (XEN) =C2=A0 =C2=A0 Extensions: GenericTimer Security<br>
&gt; (XEN) =C2=A0 Debug Features: 0000000003010066<br>
&gt; (XEN) =C2=A0 Auxiliary Features: 0000000000000000<br>
&gt; (XEN) =C2=A0 Memory Model Features: 0000000010201105 0000000040000000<=
br>
&gt; (XEN) =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A00000000001260000 0000000002102211<br>
&gt; (XEN) =C2=A0 ISA Features: 0000000002101110 0000000013112111 000000002=
1232042<br>
&gt; (XEN) =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 00000000=
01112131 0000000000011142 0000000000011121<br>
&gt; (XEN) Using SMC Calling Convention v1.2<br>
&gt; (XEN) Using PSCI v1.1<br>
&gt; (XEN) SMP: Allowing 4 CPUs<br>
&gt; (XEN) Generic Timer IRQ: phys=3D30 hyp=3D26 virt=3D27 Freq: 100000 KHz=
<br>
&gt; (XEN) GICv2 initialization:<br>
&gt; (XEN) =C2=A0 =C2=A0 =C2=A0 =C2=A0 gic_dist_addr=3D00000000f9010000<br>
&gt; (XEN) =C2=A0 =C2=A0 =C2=A0 =C2=A0 gic_cpu_addr=3D00000000f9020000<br>
&gt; (XEN) =C2=A0 =C2=A0 =C2=A0 =C2=A0 gic_hyp_addr=3D00000000f9040000<br>
&gt; (XEN) =C2=A0 =C2=A0 =C2=A0 =C2=A0 gic_vcpu_addr=3D00000000f9060000<br>
&gt; (XEN) =C2=A0 =C2=A0 =C2=A0 =C2=A0 gic_maintenance_irq=3D25<br>
&gt; (XEN) GICv2: Adjusting CPU interface base to 0xf902f000<br>
&gt; (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).<br>
&gt; (XEN) Using scheduler: null Scheduler (null)<br>
&gt; (XEN) Initializing null scheduler<br>
&gt; (XEN) WARNING: This is experimental software in development.<br>
&gt; (XEN) Use at your own risk.<br>
&gt; (XEN) Allocated console ring of 32 KiB.<br>
&gt; (XEN) CPU0: Guest atomics will try 12 times before pausing the domain<=
br>
&gt; (XEN) Bringing up CPU1<br>
&gt; (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
&gt; (XEN) CPU 1 booted.
&gt; (XEN) Bringing up CPU2
&gt; (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
&gt; (XEN) CPU 2 booted.
&gt; (XEN) Bringing up CPU3
&gt; (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
&gt; (XEN) Brought up 4 CPUs
&gt; (XEN) CPU 3 booted.
&gt; (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
&gt; (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
&gt; (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
&gt; (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
&gt; (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
&gt; (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -&gt; 48-bit PA
&gt; (XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
&gt; (XEN) I/O virtualisation enabled
&gt; (XEN)  - Dom0 mode: Relaxed
&gt; (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
&gt; (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
&gt; (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
&gt; (XEN) alternatives: Patching with alt table 00000000002cc5c8 -&gt; 00000000002ccb2c
&gt; (XEN) *** LOADING DOMAIN 0 ***
&gt; (XEN) Loading d0 kernel from boot module @ 0000000001000000
&gt; (XEN) Loading ramdisk from boot module @ 0000000002000000
&gt; (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
&gt; (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
&gt; (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
&gt; (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
&gt; (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
&gt; (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
&gt; (XEN) Allocating PPI 16 for event channel interrupt
&gt; (XEN) Extended region 0: 0x81200000-&gt;0xa0000000
&gt; (XEN) Extended region 1: 0xb1200000-&gt;0xc0000000
&gt; (XEN) Extended region 2: 0xc8000000-&gt;0xe0000000
&gt; (XEN) Extended region 3: 0xf0000000-&gt;0xf9000000
&gt; (XEN) Extended region 4: 0x100000000-&gt;0x600000000
&gt; (XEN) Extended region 5: 0x880000000-&gt;0x8000000000
&gt; (XEN) Extended region 6: 0x8001000000-&gt;0x10000000000
&gt; (XEN) Loading zImage from 0000000001000000 to 0000000010000000-0000000010e41008
&gt; (XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
&gt; (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
&gt; (XEN) Initial low memory virq threshold set at 0x4000 pages.
&gt; (XEN) Std. Loglevel: All
&gt; (XEN) Guest Loglevel: All
&gt; (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
&gt; (XEN) null.c:353: 0 &lt;-- d0v0
&gt; (XEN) Freed 356kB init memory.
&gt; (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
&gt; (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
&gt; (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
&gt; (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
&gt; (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
&gt; (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
&gt; (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
&gt; (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
&gt; [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
&gt; [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
&gt; [    0.000000] Machine model: D14 Viper Board - White Unit
&gt; [    0.000000] Xen 4.16 support found
&gt; [    0.000000] Zone ranges:
&gt; [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
&gt; [    0.000000]   DMA32    empty
&gt; [    0.000000]   Normal   empty
&gt; [    0.000000] Movable zone start for each node
&gt; [    0.000000] Early memory node ranges
&gt; [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
&gt; [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
&gt; [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
&gt; [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
&gt; [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
&gt; [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
&gt; [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
&gt; [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
&gt; [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
&gt; [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
&gt; [    0.000000] psci: probing for conduit method from DT.
&gt; [    0.000000] psci: PSCIv1.1 detected in firmware.
&gt; [    0.000000] psci: Using standard PSCI v0.2 function IDs
&gt; [    0.000000] psci: Trusted OS migration not required
&gt; [    0.000000] psci: SMC Calling Convention v1.1
&gt; [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
&gt; [    0.000000] Detected VIPT I-cache on CPU0
&gt; [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
&gt; [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
&gt; [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
&gt; [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
&gt; [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
&gt; [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
&gt; [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
&gt; [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
&gt; [    0.000000] mem auto-init: clearing system memory may take some time...
&gt; [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
&gt; [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
&gt; [    0.000000] rcu: Hierarchical RCU implementation.
&gt; [    0.000000] rcu: RCU event tracing is enabled.
&gt; [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
&gt; [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
&gt; [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
&gt; [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
&gt; [    0.000000] Root IRQ handler: gic_handle_irq
&gt; [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
&gt; [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
&gt; [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
&gt; [    0.000258] Console: colour dummy device 80x25
&gt; [    0.310231] printk: console [hvc0] enabled
&gt; [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
&gt; [    0.324851] pid_max: default: 32768 minimum: 301
&gt; [    0.329706] LSM: Security Framework initializing
&gt; [    0.334204] Yama: becoming mindful.
&gt; [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
&gt; [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
&gt; [    0.354743] xen:grant_table: Grant tables using version 1 layout
&gt; [    0.359132] Grant table initialized
&gt; [    0.362664] xen:events: Using FIFO-based ABI
&gt; [    0.366993] Xen: initializing cpu0
&gt; [    0.370515] rcu: Hierarchical SRCU implementation.
&gt; [    0.375930] smp: Bringing up secondary CPUs ...
&gt; (XEN) null.c:353: 1 &lt;-- d0v1
&gt; (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
&gt; [    0.382549] Detected VIPT I-cache on CPU1
&gt; [    0.388712] Xen: initializing cpu1
&gt; [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
&gt; [    0.388829] smp: Brought up 1 node, 2 CPUs
&gt; [    0.406941] SMP: Total of 2 processors activated.
&gt; [    0.411698] CPU features: detected: 32-bit EL0 Support
&gt; [    0.416888] CPU features: detected: CRC32 instructions
&gt; [    0.422121] CPU: All CPU(s) started at EL1
&gt; [    0.426248] alternatives: patching kernel code
&gt; [    0.431424] devtmpfs: initialized
&gt; [    0.441454] KASLR enabled
&gt; [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
&gt; [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
&gt; [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
&gt; [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
&gt; [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
&gt; [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
&gt; [    0.519478] audit: initializing netlink subsys (disabled)
&gt; [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
&gt; [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
&gt; [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
&gt; [    0.545608] ASID allocator initialised with 32768 entries
&gt; [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
&gt; [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
&gt; [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
&gt; [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
&gt; [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
&gt; [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
&gt; [    0.636520] DRBG: Continuing without Jitter RNG
&gt; [    0.737187] raid6: neonx8   gen()  2143 MB/s
&gt; [    0.805294] raid6: neonx8   xor()  1589 MB/s
&gt; [    0.873406] raid6: neonx4   gen()  2177 MB/s
&gt; [    0.941499] raid6: neonx4   xor()  1556 MB/s
&gt; [    1.009612] raid6: neonx2   gen()  2072 MB/s
&gt; [    1.077715] raid6: neonx2   xor()  1430 MB/s
&gt; [    1.145834] raid6: neonx1   gen()  1769 MB/s
&gt; [    1.213935] raid6: neonx1   xor()  1214 MB/s
&gt; [    1.282046] raid6: int64x8  gen()  1366 MB/s
&gt; [    1.350132] raid6: int64x8  xor()   773 MB/s
&gt; [    1.418259] raid6: int64x4  gen()  1602 MB/s
&gt; [    1.486349] raid6: int64x4  xor()   851 MB/s
&gt; [    1.554464] raid6: int64x2  gen()  1396 MB/s
&gt; [    1.622561] raid6: int64x2  xor()   744 MB/s
&gt; [    1.690687] raid6: int64x1  gen()  1033 MB/s
&gt; [    1.758770] raid6: int64x1  xor()   517 MB/s
&gt; [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
&gt; [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
&gt; [    1.767957] raid6: using neon recovery algorithm
&gt; [    1.772824] xen:balloon: Initialising balloon driver
&gt; [    1.778021] iommu: Default domain type: Translated
&gt; [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
&gt; [    1.789149] SCSI subsystem initialized
&gt; [    1.792820] usbcore: registered new interface driver usbfs
&gt; [    1.798254] usbcore: registered new interface driver hub
&gt; [    1.803626] usbcore: registered new device driver usb
&gt; [    1.808761] pps_core: LinuxPPS API ver. 1 registered
&gt; [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti &lt;giometti@linux.it&gt;
&gt; [    1.822903] PTP clock support registered
&gt; [    1.826893] EDAC MC: Ver: 3.0.0
&gt; [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
&gt; [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
&gt; [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
&gt; [    1.855907] FPGA manager framework
&gt; [    1.859952] clocksource: Switched to clocksource arch_sys_counter
&gt; [    1.871712] NET: Registered PF_INET protocol family
&gt; [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
&gt; [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
&gt; [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
&gt; [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
&gt; [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
&gt; [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
&gt; [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
&gt; [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
&gt; [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
&gt; [    1.936834] RPC: Registered named UNIX socket transport module.
&gt; [    1.942342] RPC: Registered udp transport module.
&gt; [    1.947088] RPC: Registered tcp transport module.
&gt; [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
&gt; [    1.958334] PCI: CLS 0 bytes, default 64
&gt; [    1.962709] Trying to unpack rootfs image as initramfs...
&gt; [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
&gt; [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
&gt; [    2.021045] NET: Registered PF_ALG protocol family
&gt; [    2.021122] xor: measuring software checksum speed
&gt; [    2.029347]    8regs           :  2366 MB/sec
&gt; [    2.033081]    32regs          :  2802 MB/sec
&gt; [    2.038223]    arm64_neon      :  2320 MB/sec
&gt; [    2.038385] xor: using function: 32regs (2802 MB/sec)
&gt; [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
&gt; [    2.050959] io scheduler mq-deadline registered
&gt; [    2.055521] io scheduler kyber registered
&gt; [    2.068227] xen:xen_evtchn: Event-channel device installed
&gt; [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
&gt; [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
&gt; [    2.085548] brd: module loaded
&gt; [    2.089290] loop: module loaded
&gt; [    2.089341] Invalid max_queues (4), will use default max: 2.
&gt; [    2.094565] tun: Universal TUN/TAP device driver, 1.6
&gt; [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
&gt; [    2.104156] usbcore: registered new interface driver rtl8150
&gt; [    2.109813] usbcore: registered new interface driver r8152
&gt; [    2.115367] usbcore: registered new interface driver asix
&gt; [    2.120794] usbcore: registered new interface driver ax88179_178a
&gt; [    2.126934] usbcore: registered new interface driver cdc_ether
&gt; [    2.132816] usbcore: registered new interface driver cdc_eem
&gt; [    2.138527] usbcore: registered new interface driver net1080
&gt; [    2.144256] usbcore: registered new interface driver cdc_subset
&gt; [    2.150205] usbcore: registered new interface driver zaurus
&gt; [    2.155837] usbcore: registered new interface driver cdc_ncm
&gt; [    2.161550] usbcore: registered new interface driver r8153_ecm
&gt; [    2.168240] usbcore: registered new interface driver cdc_acm
&gt; [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
&gt; [    2.181358] usbcore: registered new interface driver uas
&gt; [    2.186547] usbcore: registered new interface driver usb-storage
&gt; [    2.192643] usbcore: registered new interface driver ftdi_sio
&gt; [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
&gt; [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
&gt; [    2.215332] i2c_dev: i2c /dev entries driver
&gt; [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
&gt; [    2.225923] device-mapper: uevent: version 1.0.3
&gt; [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
&gt; [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
&gt; [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
&gt; [    2.261719] sdhci: Secure Digital Host Controller Interface driver
&gt; [    2.267487] sdhci: Copyright(c) Pierre Ossman
&gt; [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
&gt; [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
&gt; [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
&gt; [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
&gt; [    2.327875] securefw securefw: securefw probed
&gt; [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
&gt; [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
&gt; [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
&gt; [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
&gt; [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
&gt; [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
&gt; [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
&gt; [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
&gt; [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
&gt; [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
&gt; [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
&gt; [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
&gt; [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
&gt; [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
&gt; [    2.420856] default preset
&gt; [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
&gt; [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
&gt; [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
&gt; [    2.441976] vmcu driver init
&gt; [    2.444922] VMCU: : (240:0) registered
&gt; [    2.444956] In K81 Updater init
&gt; [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
&gt; [    2.468833] Initializing XFRM netlink socket
&gt; [    2.468902] NET: Registered PF_PACKET protocol family
&gt; [    2.472729] Bridge firewalling registered
&gt; [    2.476785] 8021q: 802.1Q VLAN Support v1.8
&gt; [    2.481341] registered taskstats version 1
&gt; [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
&gt; [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
&gt; [    2.507103] of-fpga-region fpga-full: FPGA Region probed
&gt; [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
&gt; [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
&gt; [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
&gt; [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
&gt; [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
&gt; [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
&gt; [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
&gt; [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
&gt; [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
&gt; [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
&gt; [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
&gt; [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
&gt; [    2.952393] Creating 2 MTD partitions on "spi0.0":
&gt; [    2.957231] 0x000004000000-0x000008000000 : "bank A"
&gt; [    2.963332] 0x000000000000-0x000004000000 : "bank B"
&gt; [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
&gt; [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
&gt; [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
&gt; [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
&gt; [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
&gt; [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
&gt; [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
&gt; [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
&gt; [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
&gt; [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
&gt; [    3.045301] viper_enet viper_enet: Viper enet registered
&gt; [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
&gt; [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
&gt; [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
&gt; [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
&gt; [    3.097729] si70xx: probe of 2-0040 failed with error -5
&gt; [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
&gt; [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
&gt; [    3.112457] viper-tamper viper-tamper: Device registered
&gt; [    3.117593] active_bank active_bank: boot bank: 1
&gt; [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
&gt; [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
&gt; [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
&gt; [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
&gt; [    3.147438] viper-vdpp a4000000.vdpp: Device registered
&gt; [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
&gt; [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
&gt; [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
&gt; [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
&gt; [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
&gt; [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
&gt; [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
&gt; [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
&gt; [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
&gt; [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
&gt; [    3.215694] lpc55_l2 spi1.0: rx error: -110
&gt; [    3.284438] mmc0: new HS200 MMC card at address 0001
&gt; [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
&gt; [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
&gt; [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
&gt; [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
&gt; [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
&gt; [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
&gt; [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
&gt; [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
&gt; [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
&gt; [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
&gt; [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
&gt; [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
&gt; [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
&gt; [    3.624224] rtc-rv3028 0-0052: registered as rtc1
&gt; [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
&gt; [    3.633253] lpc55_l2 spi1.0: rx error: -110
&gt; [    3.639104] k81_bootloader 0-0010: probe
&gt; [    3.641628] VMCU: : (235:0) registered
&gt; [    3.641635] k81_bootloader 0-0010: probe completed
&gt; [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
&gt; [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
&gt; [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
&gt; [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
&gt; [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
&gt; [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
&gt; [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
&gt; [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
&gt; [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
&gt; [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
&gt; [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
&gt; [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
&gt; [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
&gt; [    3.737549] sfp_register_socket: got sfp_bus
&gt; [    3.740709] sfp_register_socket: register sfp_bus
&gt; [    3.745459] sfp_register_bus: ops ok!
&gt; [    3.749179] sfp_register_bus: Try to attach
&gt; [    3.753419] sfp_register_bus: Attach succeeded
&gt; [    3.757914] sfp_register_bus: upstream ops attach
&gt; [    3.762677] sfp_register_bus: Bus registered
&gt; [    3.766999] sfp_register_socket: register sfp_bus succeeded
&gt; [    3.775870] of_cfs_init
&gt; [    3.776000] of_cfs_init: OK
&gt; [    3.778211] clk: Not disabling unused clocks
&gt; [   11.278477] Freeing initrd memory: 206056K
&gt; [   11.279406] Freeing unused kernel memory: 1536K
&gt; [   11.314006] Checked W+X mappings: passed, no W+X pages found
&gt; [   11.314142] Run /init as init process
&gt; INIT: version 3.01 booting
&gt; fsck (busybox 1.35.0)
&gt; /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
&gt; /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
&gt; /dev/mmcblk0p3 was not cleanly unmounted, check forced.
&gt; /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
&gt; [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
&gt; Starting random number generator daemon.
&gt; [   11.580662] random: crng init done
&gt; Starting udev
&gt; [   11.613159] udevd[142]: starting version 3.2.10
&gt; [   11.620385] udevd[143]: starting eudev-3.2.10
&gt; [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
&gt; [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
&gt; [   12.063396] ip_local_port_range: prefer different parity for start/end values.
&gt; [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
&gt; hwclock: RTC_RD_TIME: Invalid exchange
&gt; Mon Feb 27 08:40:53 UTC 2023
&gt; [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
&gt; hwclock: RTC_SET_TIME: Invalid exchange
&gt; [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
&gt; Starting mcud
&gt; INIT: Entering runlevel: 5
&gt; Configuring network interfaces... done.
&gt; resetting network interface
&gt; [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
&gt; [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
&gt; [   12.732151] pps pps0: new PPS source ptp0
&gt; [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
&gt; [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
&gt; [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
&gt; [   12.761804] pps pps1: new PPS source ptp1
&gt; [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
&gt; Auto-negotiation: off
&gt; Auto-negotiation: off
&gt; [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
&gt; [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
&gt; [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
&gt; [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
&gt; Starting Failsafe Secure Shell server in port 2222: sshd
&gt; done.
&gt; Starting rpcbind daemon...done.
&gt; 
&gt; [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
&gt; hwclock: RTC_RD_TIME: Invalid exchange
&gt; Starting State Manager Service
&gt; Start state-manager restarter...
&gt; (XEN) d0v1 Forwarding AES operation: 3254779951
&gt; Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
&gt; [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
&gt; [   17.350670] BTRFS info (device dm-0): has skinny extents
&gt; [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
&gt; [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
&gt; [   17.872699] BTRFS info (device dm-1): using free space tree
&gt; [   17.872771] BTRFS info (device dm-1): has skinny extents
&gt; [   17.878114] BTRFS info (device dm-1): flagging fs with big met
adata feature<br>
&gt; [ =C2=A0 17.894289] BTRFS info (device dm-1): enabling ssd optimizatio=
ns<br>
&gt; [ =C2=A0 17.895695] BTRFS info (device dm-1): checking UUID tree<br>
&gt; <br>
&gt; Setting domain 0 name, domid and JSON config...<br>
&gt; Done setting up Dom0<br>
&gt; Starting xenconsoled...<br>
&gt; Starting QEMU as disk backend for dom0<br>
&gt; Starting domain watchdog daemon: xenwatchdogd startup<br>
&gt; <br>
&gt; [ =C2=A0 18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087=
b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs<br>
&gt; (574)<br>
&gt; [done]<br>
&gt; [ =C2=A0 18.465552] BTRFS info (device dm-2): using free space tree<br=
>
&gt; [ =C2=A0 18.465629] BTRFS info (device dm-2): has skinny extents<br>
&gt; [ =C2=A0 18.471002] BTRFS info (device dm-2): flagging fs with big met=
adata feature<br>
&gt; Starting crond: [ =C2=A0 18.482371] BTRFS info (device dm-2): enabling=
 ssd optimizations<br>
&gt; [ =C2=A0 18.486659] BTRFS info (device dm-2): checking UUID tree<br>
&gt; OK<br>
&gt; starting rsyslogd ... Log partition ready after 0 poll loops<br>
&gt; done<br>
&gt; rsyslogd: cannot connect to <a href=3D"http://172.18.0.1:514" rel=3D"n=
oreferrer" target=3D"_blank">172.18.0.1:514</a>: Network is unreachable [v8=
.2208.0 try <a href=3D"https://www.rsyslog.com/e/2027" rel=3D"noreferrer" t=
arget=3D"_blank">https://www.rsyslog.com/e/2027</a> ]<br>
&gt; [ =C2=A0 18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb7=
22095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)<br>
&gt; <br>
&gt; Please insert USB token and enter your role in login prompt.<br>
&gt; <br>
&gt; login:<br>
&gt; <br>
&gt; Regards,<br>
&gt; O.<br>
&gt; <br>
&gt; <br>
> Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org>:
>    Hi Oleg,
>
>    Here is the issue from your logs:
>
>    SError Interrupt on CPU0, code 0xbe000000 -- SError
>
>    SErrors are special signals that notify software of serious hardware
>    errors. Something is going very wrong; defective hardware is one
>    possibility. Another is software accessing address ranges it is not
>    supposed to, which can also cause SErrors.
>
>    Cheers,
>
>    Stefano
>
>
>
>    On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
>
>    > Hello,
>    >
>    > Thanks guys.
>    > I found out where the problem was.
>    > Now dom0 boots further, but I have a new problem:
>    > a kernel panic during Dom0 loading.
>    > Maybe someone is able to suggest something?
>    >
>    > Regards,
>    > O.
>    >
>    > [    3.771362] sfp_register_bus: upstream ops attach
>    > [    3.776119] sfp_register_bus: Bus registered
>    > [    3.780459] sfp_register_socket: register sfp_bus succeeded
>    > [    3.789399] of_cfs_init
>    > [    3.789499] of_cfs_init: OK
>    > [    3.791685] clk: Not disabling unused clocks
>    > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
>    > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>    > [   11.010393] Workqueue: events_unbound async_run_entry_fn
>    > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>    > [   11.010422] pc : simple_write_end+0xd0/0x130
>    > [   11.010431] lr : generic_perform_write+0x118/0x1e0
>    > [   11.010438] sp : ffffffc00809b910
>    > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
>    > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
>    > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
>    > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
>    > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
>    > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
>    > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
>    > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
>    > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
>    > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
>    > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
>    > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>    > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
>    > [   11.010548] Workqueue: events_unbound async_run_entry_fn
>    > [   11.010556] Call trace:
>    > [   11.010558]  dump_backtrace+0x0/0x1c4
>    > [   11.010567]  show_stack+0x18/0x2c
>    > [   11.010574]  dump_stack_lvl+0x7c/0xa0
>    > [   11.010583]  dump_stack+0x18/0x34
>    > [   11.010588]  panic+0x14c/0x2f8
>    > [   11.010597]  print_tainted+0x0/0xb0
>    > [   11.010606]  arm64_serror_panic+0x6c/0x7c
>    > [   11.010614]  do_serror+0x28/0x60
>    > [   11.010621]  el1h_64_error_handler+0x30/0x50
>    > [   11.010628]  el1h_64_error+0x78/0x7c
>    > [   11.010633]  simple_write_end+0xd0/0x130
>    > [   11.010639]  generic_perform_write+0x118/0x1e0
>    > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
>    > [   11.010650]  generic_file_write_iter+0x78/0xd0
>    > [   11.010656]  __kernel_write+0xfc/0x2ac
>    > [   11.010665]  kernel_write+0x88/0x160
>    > [   11.010673]  xwrite+0x44/0x94
>    > [   11.010680]  do_copy+0xa8/0x104
>    > [   11.010686]  write_buffer+0x38/0x58
>    > [   11.010692]  flush_buffer+0x4c/0xbc
>    > [   11.010698]  __gunzip+0x280/0x310
>    > [   11.010704]  gunzip+0x1c/0x28
>    > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
>    > [   11.010715]  do_populate_rootfs+0x80/0x164
>    > [   11.010722]  async_run_entry_fn+0x48/0x164
>    > [   11.010728]  process_one_work+0x1e4/0x3a0
>    > [   11.010736]  worker_thread+0x7c/0x4c0
>    > [   11.010743]  kthread+0x120/0x130
>    > [   11.010750]  ret_from_fork+0x10/0x20
>    > [   11.010757] SMP: stopping secondary CPUs
>    > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
>    > [   11.010788] PHYS_OFFSET: 0x0
>    > [   11.010790] CPU features: 0x00000401,00000842
>    > [   11.010795] Memory Limit: none
>    > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
>    >
>    > Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com>:
>    >    Hi Oleg,
>    >
>    >    On 21/04/2023 14:49, Oleg Nikitenko wrote:
>    >    >
>    >    >
>    >    > Hello Michal,
>    >    >
>    >    > I have not been able to enable earlyprintk in Xen so far,
>    >    > so I decided to take another route.
>    >    > This is the complete Xen command line that I found:
>    >    >
>    >    > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
>    >    Yes, adding a printk() in Xen was also a good idea.
>    >
>    >    >
>    >    > So you are absolutely right about the command line.
>    >    > Now I am going to find out why Xen did not get the correct parameters from the device tree.
>    >    Maybe you will find this document helpful:
>    >    https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
>    >
>    >    ~Michal
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; Regards,=
<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; Oleg<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; =D0=BF=
=D1=82, 21 =D0=B0=D0=BF=D1=80. 2023=E2=80=AF=D0=B3. =D0=B2 11:16, Michal Or=
zel &lt;<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal.or=
zel@amd.com</a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=
=3D"_blank">michal.orzel@amd.com</a>&gt;&gt;:<br>
>    >    >
>    >    >
>    >    >    On 21/04/2023 10:04, Oleg Nikitenko wrote:
>    >    >    >
>    >    >    >
>    >    >    >
>    >    >    > Hello Michal,
>    >    >    >
>    >    >    > Yes, I use Yocto.
>    >    >    >
>    >    >    > Yesterday I spent all day trying to follow your suggestions.
>    >    >    > I ran into a problem.
>    >    >    > Manually, in the Xen build config file, I pasted the strings:
>    >    >    In the .config file, or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>    >    >    You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
>    >    >
>    >    >    >
>    >    >    > CONFIG_EARLY_PRINTK
>    >    >    > CONFIG_EARLY_PRINTK_ZYNQMP
>    >    >    > CONFIG_EARLY_UART_CHOICE_CADENCE
>    >    >    I hope you added =y to them.
>    >    >
>    >    >    Anyway, you have at least the following solutions:
>    >    >    1) Run "bitbake xen -c menuconfig" to properly set early printk
>    >    >    2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y, which is not enabled by default)
>    >    >    3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>    >    >    CONFIG_EARLY_PRINTK_ZYNQMP=y
>    >    >
>    >    >    ~Michal
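As a rough illustration of option 3 above (an editor's sketch, not from the thread: the `append_missing` helper and the defconfig path handling are hypothetical, not part of the Xen or Yocto tooling), the append can be done idempotently so the options are not duplicated on repeated builds:

```python
# Sketch: append the early-printk Kconfig symbols to Xen's arm64_defconfig
# only if each symbol is not already set. Hypothetical helper for illustration.
from pathlib import Path

WANTED = [
    "CONFIG_EARLY_PRINTK=y",
    "CONFIG_EARLY_PRINTK_ZYNQMP=y",
    "CONFIG_EARLY_UART_CHOICE_CADENCE=y",
]

def append_missing(defconfig: Path, wanted=WANTED) -> list:
    """Append each wanted CONFIG_* line unless its symbol already appears."""
    lines = defconfig.read_text().splitlines() if defconfig.exists() else []
    have = {l.split("=", 1)[0] for l in lines if l.startswith("CONFIG_")}
    added = [w for w in wanted if w.split("=", 1)[0] not in have]
    if added:
        defconfig.write_text("\n".join(lines + added) + "\n")
    return added
```

Running it twice is safe: the second call finds all symbols present and changes nothing, which matters when a build system re-executes the configure step.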
>    >    >
>    >    >    >
>    >    >    > The host hangs at build time.
>    >    >    > Maybe I did not set something in the build config file?
>    >    >    >
>    >    >    > Regards,
>    >    >    > Oleg
>    >    >    >
>    >    >    > Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com>:
>    >    >    >
>    >    >    >    Thanks Michal,
>    >    >    >
>    >    >    >    You gave me an idea.
>    >    >    >    I am going to try it today.
>    >    >    >
>    >    >    >    Regards,
>    >    >    >    O.
>    >    >    >
>    >    >    >    Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:
>    >    >    >
>    >    >    >        Thanks Stefano.
>    >    >    >
>    >    >    >        I am going to do it today.
>    >    >    >
>    >    >    >        Regards,
>    >    >    >        O.
>    >    >    >
>    >    >    >        Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
>    >    >    >
>    >    >    >            On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>    >    >    >        > Hi Michal,
>    >    >    >        >
>    >    >    >        > I corrected Xen's command line.
>    >    >    >        > Now it is
>    >    >    >        > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>    >    >    >        >
>    >    >    >            4 colors is way too many for Xen; just do xen_colors=0-0. There is no
>    >    >    >            advantage in using more than 1 color for Xen.
>    >    >    >
>    >    >    >            4 colors is too few for dom0 if you are giving 1600M of memory to Dom0.
>    >    >    >            Each color is 256M. For 1600M you should give at least 7 colors. Try:
>    >    >    >
>    >    >    >            xen_colors=0-0 dom0_colors=1-8
>    >    >    >
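The arithmetic behind this suggestion can be sketched as follows (an editor's illustration, assuming the 256M-per-color figure quoted above; `colors_needed` is a hypothetical helper, not part of Xen): with cache coloring, a domain may only use RAM pages of its assigned colors, so dom0's colors must jointly cover its memory.

```python
# Minimum number of colors needed to hold a domain's memory, assuming
# each color maps ~256 MiB of RAM (figure stated in the reply above).
import math

def colors_needed(dom_mem_mib, mib_per_color=256):
    """ceil(dom_mem_mib / mib_per_color): colors whose pages can hold the domain."""
    return math.ceil(dom_mem_mib / mib_per_color)

print(colors_needed(1600))  # prints: 7
```

For 1600 MiB, ceil(1600/256) = 7, which is why the suggested `dom0_colors=1-8` (eight colors) leaves some headroom while `dom0_colors=4-7` (four colors, 1024 MiB) cannot satisfy a 1600 MiB dom0.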
>    >    >    >
>    >    >    >
>    >    >    >        > Unfortunately the result was the same.
>    >    >    >        >
>    >    >    >        > (XEN)  - Dom0 mode: Relaxed
>    >    >    >        > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>    >    >    >        > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>    >    >    >        > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>    >    >    >        > (XEN) Coloring general information
>    >    >    >        > (XEN) Way size: 64kB
>    >    >    >        > (XEN) Max. number of colors available: 16
>    >    >    >        > (XEN) Xen color(s): [ 0 ]
>    >    >    >        > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>    >    >    >        > (XEN) Color array allocation failed for dom0
>    >    >    >        > (XEN)
>    >    >    >        > (XEN) ****************************************
>    >    >    >        > (XEN) Panic on CPU 0:
>    >    >    >        > (XEN) Error creating domain 0
>    >    >    >        > (XEN) ****************************************
>    >    >    >        > (XEN)
>    >    >    >        > (XEN) Reboot in five seconds...
>    >    >    >        >
>    >    >    >        > I am going to find out how the command line arguments are passed and parsed.
>    >    >    >        >
>    >    >    >
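On the question of how the arguments are passed and parsed: per the booting.txt document linked earlier, Xen on Arm takes its arguments from the `xen,xen-bootargs` property under `/chosen` and splits them into whitespace-separated `key=value` options, with bare words acting as boolean flags. A rough model of that tokenization (an editor's sketch, not Xen's actual parser):

```python
# Sketch: split a bootargs string into an option -> value map.
# Bare words (no "=") are treated as boolean flags set to "true".
def parse_bootargs(cmdline):
    """Tokenize a Xen-style command line into a dict of options."""
    opts = {}
    for tok in cmdline.split():
        key, _, val = tok.partition("=")
        opts[key] = val if val else "true"
    return opts

args = parse_bootargs(
    "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_vcpus_pin "
    "xen_colors=0-0 dom0_colors=1-8"
)
print(args["dom0_mem"], args["dom0_vcpus_pin"])  # prints: 1600M true
```

If the `/chosen` node carries no `xen,xen-bootargs` property (or the wrong one is picked up), Xen simply sees an empty or default command line, which matches the earlier symptom of parameters not reaching Xen from the device tree.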
>    >    >    >        > Regards,
>    >    >    >        > Oleg
>    >    >    >        >
>    >    >    >
>    >    >    >        > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
>    >    >    >        >    Hi Michal,
>    >    >    >        >
>    >    >    >        >    You pointed me right at the problem. Thank you.
>    >    >    >        >    I am going to use your hint.
>    >    >    >        >    Let's see what happens.
>    >    >    >        >
>    >    >    >        >    Regards,
>    >    >    >        >    Oleg
>    >    >    >        >
>    >    >    >        >
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; =D1=
=81=D1=80, 19 =D0=B0=D0=BF=D1=80. 2023=E2=80=AF=D0=B3. =D0=B2 10:37, Michal=
 Orzel &lt;<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal=
.orzel@amd.com</a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" targe=
t=3D"_blank">michal.orzel@amd.com</a>&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;mailto:<a=
 href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal.orzel@amd.co=
m</a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">=
michal.orzel@amd.com</a>&gt;&gt;&gt;:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0Hi Oleg,<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0On 19/04/2023 09:03, Oleg Nikitenko wrote:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt; Hello Stefano,<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt; Thanks for the clarification.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt; My company uses yocto for image generation.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt; What kind of information do you need to consult m=
e in this case ?<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt; Maybe modules sizes/addresses which were mentione=
d by @Julien Grall<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;mailto:<a href=3D"mailto:julien@xen.org"=
 target=3D"_blank">julien@xen.org</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;mailto:<a=
 href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt; &l=
t;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org=
</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@=
xen.org</a>&gt;&gt;&gt; ?<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0Sorry for jumping into discussion, but FWICS the Xen c=
ommand line you provided seems to be<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0not the<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0one<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0Xen booted with. The error you are observing most like=
ly is due to dom0 colors<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0configuration not<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0being<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0specified (i.e. lack of dom0_colors=3D&lt;&gt; paramet=
er). Although in the command line you<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0provided, this<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0parameter<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0is set, I strongly doubt that this is the actual comma=
nd line in use.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0You wrote:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0xen,xen-bootargs =3D &quot;console=3Ddtuart dtuart=3Ds=
erial0 dom0_mem=3D1600M dom0_max_vcpus=3D2<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0dom0_vcpus_pin<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0bootscrub=3D0=
 vwfi=3Dnative<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0sched=3Dnull timer_slop=3D0 way_szize=3D65536 xen_colo=
rs=3D0-3 dom0_colors=3D4-7&quot;;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0but:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A01) way_szize has a typo<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A02) you specified 4 colors (0-3) for Xen, but the boot =
log says that Xen has only one:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0(XEN) Xen color(s): [ 0 ]<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0This makes me believe that no colors configuration act=
ually end up in command line that Xen<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0booted<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0with.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0Single color for Xen is a &quot;default if not specifi=
ed&quot; and way size was probably calculated<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0by asking<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0HW.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0So I would suggest to first cross-check the command li=
ne in use.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0~Michal<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt; Regards,<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt; Oleg<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt; =D0=B2=D1=82, 18 =D0=B0=D0=BF=D1=80. 2023=E2=80=
=AF=D0=B3. =D0=B2 20:44, Stefano Stabellini &lt;<a href=3D"mailto:sstabelli=
ni@kernel.org" target=3D"_blank">sstabellini@kernel.org</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;mailto:<a=
 href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kerne=
l.org</a>&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=
=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabel=
lini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;mailto:<a href=3D"mailto:sstabellini@ker=
nel.org" target=3D"_blank">sstabellini@kernel.org</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;mailto:<a=
 href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kerne=
l.org</a>&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=
=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabel=
lini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt;&gt;&gt;&g=
t;:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0On Tue, 18 Apr 2023, Oleg Niki=
tenko wrote:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt; Hi Julien,<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt; &gt;&gt; This feature has=
 not been merged in Xen upstream yet<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt; &gt; would assume that up=
stream + the series on the ML [1] work<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt; Please clarify this point=
.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt; Because the two thoughts =
are controversial.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0Hi Oleg,<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0As Julien wrote, there is noth=
ing controversial. As you are aware,<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0Xilinx maintains a separate Xe=
n tree specific for Xilinx here:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0<a href=3D"https://github.com/=
xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/=
xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" ta=
rget=3D"_blank">https://github.com/xilinx/xen</a>&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://github.com/xilinx/xen=
" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a><br=
>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=
=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">htt=
ps://github.com/xilinx/xen</a>&gt;&gt; &lt;<a href=3D"https://github.com/xi=
linx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xe=
n</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" targ=
et=3D"_blank">https://github.com/xilinx/xen</a>&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://github.com/xilinx/xen=
" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a><br=
>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=
=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">htt=
ps://github.com/xilinx/xen</a>&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0and the branch you are using (=
xlnx_rebase_4.16) comes from there.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0Instead, the upstream Xen tree=
 lives here:<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0<a href=3D"https://xenbits.xen=
.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">=
https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=
=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"norefer=
rer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsumm=
ary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitwe=
b/?p=3Dxen.git;a=3Dsummary</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=
=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"norefer=
rer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsumm=
ary</a>&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;=
a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/g=
itweb/?p=3Dxen.git;a=3Dsummary</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=
=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"norefer=
rer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsumm=
ary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitwe=
b/?p=3Dxen.git;a=3Dsummary</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=
=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"norefer=
rer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsumm=
ary</a>&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0The Cache Coloring feature tha=
t you are trying to configure is present<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0in xlnx_rebase_4.16, but not y=
et present upstream (there is an<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0outstanding patch series to ad=
d cache coloring to Xen upstream but it<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0hasn&#39;t been merged yet.)<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0Anyway, if you are using xlnx_=
rebase_4.16 it doesn&#39;t matter too much for<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0you as you already have Cache =
Coloring as a feature there.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0I take you are using ImageBuil=
der to generate the boot configuration? If<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0so, please post the ImageBuild=
er config file that you are using.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0But from the boot message, it =
looks like the colors configuration for<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0Dom0 is incorrect.<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0=
 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =
=C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt; <br>
&gt; <br>
&gt; </blockquote></div>
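
[Archive note: as a sketch only, with the "way_szize" typo corrected to the
way_size spelling used by the cache-coloring series, the chosen node quoted
above would read roughly as follows; the values are the ones from the thread
and are not validated here:

    chosen {
        xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
    };

The cross-check suggested above can be done from a booted dom0 with the
standard xl toolstack, which shows the command line Xen actually parsed:

    xl dmesg | grep -i "command line"
]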



From xen-devel-bounces@lists.xenproject.org Thu Apr 27 07:38:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 07:38:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526891.818940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prwCk-00027E-LV; Thu, 27 Apr 2023 07:38:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526891.818940; Thu, 27 Apr 2023 07:38:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prwCk-000277-Iv; Thu, 27 Apr 2023 07:38:14 +0000
Received: by outflank-mailman (input) for mailman id 526891;
 Thu, 27 Apr 2023 07:38:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E+9W=AS=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1prwCi-00026y-U1
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 07:38:12 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 75273c14-e4ce-11ed-8611-37d641c3527e;
 Thu, 27 Apr 2023 09:38:10 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 87AF6210EB;
 Thu, 27 Apr 2023 07:38:09 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5F75C13910;
 Thu, 27 Apr 2023 07:38:09 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ChnZFWEmSmS1LgAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 27 Apr 2023 07:38:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75273c14-e4ce-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682581089; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=Nr7EqN4AM70YmyuoNpfe+PXcas5rhy8+HTE3yW76PeE=;
	b=tcZ3SDL3frCSTs9kfNRn1gulCF03JIxBOSfdNvNV9pmuqasP8RcnwLNU4Ie+SqChQjpt7/
	1Z+9CNH5MFYuoHvtQlPqBtJX+0qA5UWFIZra1y5H4dRnltRWbQI1r6acOVSrfMlYWFg5Oc
	4Cs97+s6ZUGnpWyGL058ZE+gnbn8Atc=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	sstabellini@kernel.org
Subject: [GIT PULL] xen: branch for v6.4-rc1
Date: Thu, 27 Apr 2023 09:38:08 +0200
Message-Id: <20230427073808.12580-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-6.4-rc1-tag

xen: branch for v6.4-rc1

It contains the following changes:

- a 4-patch series doing some cleanups in the Xen blkback driver
- 3 patches fixing potential sleeps under lock in various Xen drivers

Thanks.

Juergen

 drivers/block/xen-blkback/blkback.c | 126 ++++++++++++++++++++++++++++++++----
 drivers/block/xen-blkback/common.h  | 103 +----------------------------
 drivers/xen/pvcalls-front.c         |  46 +++++++------
 drivers/xen/xen-pciback/pci_stub.c  |   6 +-
 drivers/xen/xen-scsiback.c          |  27 ++++----
 5 files changed, 160 insertions(+), 148 deletions(-)

Juergen Gross (8):
      xen/pciback: don't call pcistub_device_put() under lock
      xen/scsiback: don't call scsiback_free_translation_entry() under lock
      xen/pvcalls: don't call bind_evtchn_to_irqhandler() under lock
      xen/blkback: fix white space code style issues
      xen/blkback: remove stale prototype
      xen/blkback: simplify free_persistent_gnts() interface
      xen/blkback: move blkif_get_x86_*_req() into blkback.c


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 07:44:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 07:44:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526895.818951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prwIt-0003Yq-Bi; Thu, 27 Apr 2023 07:44:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526895.818951; Thu, 27 Apr 2023 07:44:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prwIt-0003Yj-8D; Thu, 27 Apr 2023 07:44:35 +0000
Received: by outflank-mailman (input) for mailman id 526895;
 Thu, 27 Apr 2023 07:44:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kqME=AS=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1prwIr-0003Yd-8b
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 07:44:33 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20612.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::612])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 58721d90-e4cf-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 09:44:31 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB7013.eurprd04.prod.outlook.com (2603:10a6:20b:116::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Thu, 27 Apr
 2023 07:44:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Thu, 27 Apr 2023
 07:44:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58721d90-e4cf-11ed-b224-6b7b168915f2
Message-ID: <5cb69afd-044d-deb8-7f81-2f189e4a018b@suse.com>
Date: Thu, 27 Apr 2023 09:44:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2] ns16550: enable memory decoding on MMIO-based PCI
 console card
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230425143902.142571-1-marmarek@invisiblethingslab.com>
 <ZEjXNLAVCixClGyl@Air-de-Roger>
 <b41e8eb0-a776-8924-429f-8abe7e70ead7@suse.com> <ZEm3O2PL0NLWqoMk@mail-itl>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZEm3O2PL0NLWqoMk@mail-itl>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0143.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 27.04.2023 01:43, Marek Marczykowski-Górecki wrote:
> On Wed, Apr 26, 2023 at 10:24:28AM +0200, Jan Beulich wrote:
>> On 26.04.2023 09:48, Roger Pau Monné wrote:
>>> On Tue, Apr 25, 2023 at 04:39:02PM +0200, Marek Marczykowski-Górecki wrote:
>>>> --- a/xen/drivers/char/ns16550.c
>>>> +++ b/xen/drivers/char/ns16550.c
>>>> @@ -272,7 +272,15 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
>>>>  static void pci_serial_early_init(struct ns16550 *uart)
>>>>  {
>>>>  #ifdef NS16550_PCI
>>>> -    if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
>>>> +    if ( uart->bar )
>>>> +    {
>>>> +        pci_conf_write16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
>>>> +                                  uart->ps_bdf[2]),
>>>> +                         PCI_COMMAND, PCI_COMMAND_MEMORY);
>>>
>>> Don't you want to read the current command register first and just
>>> OR in PCI_COMMAND_MEMORY?
>>>
>>> I see that for IO decoding we already do it this way, but I'm not sure
>>> whether it could cause issues down the road by unintentionally
>>> disabling command flags.
>>
>> Quite some time ago I asked myself the same question when seeing that
>> code, but I concluded that perhaps none of the bits really make sense
>> to set for a device as simple as a serial one. I'm actually thinking
>> that for such a device to be used during early boot, it might even be
>> helpful if bits like PARITY or SERR get cleared. (Of course, if any of
>> that was really the intention of the change introducing that code, it
>> should have come with a code comment.)
> 
> I have mirrored the approach used for IO ports, with similar thinking,
> and I read the above as agreement. Does that mean this patch is okay,
> or do you request some change here?

Sorry, I haven't yet had a chance to look at v2 itself. So far I've only
looked at Roger's response.

Jan
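The difference Roger raises is between overwriting the PCI command register (which silently clears any other enabled bits) and a read-modify-write that only adds memory decoding. A hedged sketch using mock config-space accessors follows; the real Xen helpers take a PCI_SBDF and these simplified signatures are assumptions for illustration, while the PCI_COMMAND bit values come from the PCI specification.

```c
/* Mock PCI config space to contrast the two write strategies discussed
 * above. The accessor signatures are simplified stand-ins, not the
 * actual pci_conf_read16()/pci_conf_write16() prototypes in Xen. */
#include <assert.h>
#include <stdint.h>

#define PCI_COMMAND        0x04   /* command register offset */
#define PCI_COMMAND_IO     0x1    /* enable I/O space decoding */
#define PCI_COMMAND_MEMORY 0x2    /* enable memory space decoding */

static uint16_t cfg_space[0x80];  /* mock config space, 16-bit words */

static uint16_t pci_conf_read16(unsigned int reg)
{
    return cfg_space[reg / 2];
}

static void pci_conf_write16(unsigned int reg, uint16_t v)
{
    cfg_space[reg / 2] = v;
}

/* Shape of the patch as posted: a plain write, clearing other bits. */
static void enable_memory_overwrite(void)
{
    pci_conf_write16(PCI_COMMAND, PCI_COMMAND_MEMORY);
}

/* Read-modify-write variant: preserve whatever was already enabled. */
static void enable_memory_rmw(void)
{
    uint16_t cmd = pci_conf_read16(PCI_COMMAND);

    pci_conf_write16(PCI_COMMAND, cmd | PCI_COMMAND_MEMORY);
}
```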


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 09:27:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 09:27:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526903.818961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prxu4-0005ib-UM; Thu, 27 Apr 2023 09:27:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526903.818961; Thu, 27 Apr 2023 09:27:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1prxu4-0005iU-RO; Thu, 27 Apr 2023 09:27:04 +0000
Received: by outflank-mailman (input) for mailman id 526903;
 Thu, 27 Apr 2023 09:27:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=81si=AS=citrix.com=prvs=47455b11e=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1prxu3-0005iO-FZ
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 09:27:03 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aa077e7f-e4dd-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 11:27:02 +0200 (CEST)
Received: from mail-dm6nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.102])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Apr 2023 05:26:48 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN8PR03MB5026.namprd03.prod.outlook.com (2603:10b6:408:d6::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22; Thu, 27 Apr
 2023 09:26:46 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%4]) with mapi id 15.20.6319.033; Thu, 27 Apr 2023
 09:26:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa077e7f-e4dd-11ed-b224-6b7b168915f2
Message-ID: <fbad5a38-ee41-46c4-6c28-135e992c08e5@citrix.com>
Date: Thu, 27 Apr 2023 10:26:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Gitlab curiosity: Was [PATCH 0/7] Rationalize usage of
 xc_domain_getinfo{,list}()
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Jan Beulich
 <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 Tim Deegan <tim@xen.org>, Michal Orzel <Michal.Orzel@arm.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <cf554e7d-5d9a-8beb-b0a6-810267e5c3ad@citrix.com>
Content-Language: en-GB
In-Reply-To: <cf554e7d-5d9a-8beb-b0a6-810267e5c3ad@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0497.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13a::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 26/04/2023 11:30 pm, Andrew Cooper wrote:
> I've rerun the pipeline a second time,
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/850230144, to see
> if gitlab thinks it is a reliable or unreliable failure.

It passed the second time, so I'm pretty confident this is a buggy test
with a rare race condition, rather than a bug in the series under test.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 09:43:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 09:43:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526907.818971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pryA0-000886-9s; Thu, 27 Apr 2023 09:43:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526907.818971; Thu, 27 Apr 2023 09:43:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pryA0-00087z-7C; Thu, 27 Apr 2023 09:43:32 +0000
Received: by outflank-mailman (input) for mailman id 526907;
 Thu, 27 Apr 2023 09:43:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pry9y-00087p-7j; Thu, 27 Apr 2023 09:43:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pry9x-00005N-Uk; Thu, 27 Apr 2023 09:43:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pry9x-0002B3-Bb; Thu, 27 Apr 2023 09:43:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pry9x-0007Pt-Ar; Thu, 27 Apr 2023 09:43:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180428-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 180428: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:regression
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:regression
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:heisenbug
    linux-5.4:test-amd64-i386-pair:guest-start/debian:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ea7862c507eca54ea6caad9dcfc8bba5e749fbde
X-Osstest-Versions-That:
    linux=58f42ed1cd31238745bddd943c4f5849dc83a2ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Apr 2023 09:43:29 +0000

flight 180428 linux-5.4 real [real]
flight 180441 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180428/
http://logs.test-lab.xenproject.org/osstest/logs/180441/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1 18 guest-start/debian.repeat fail REGR. vs. 180369
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail in 180441 REGR. vs. 180369

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1  14 guest-start      fail in 180441 pass in 180428
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail pass in 180441-retest
 test-amd64-i386-pair         25 guest-start/debian  fail pass in 180441-retest
 test-armhf-armhf-xl-multivcpu 14 guest-start        fail pass in 180441-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 180441 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 180441 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180369
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180369
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180369
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180369
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180369
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180369
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180369
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180369
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180369
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180369
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180369
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180369
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ea7862c507eca54ea6caad9dcfc8bba5e749fbde
baseline version:
 linux                58f42ed1cd31238745bddd943c4f5849dc83a2ac

Last test of basis   180369  2023-04-21 21:43:52 Z    5 days
Testing same since   180428  2023-04-26 09:43:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alyssa Ross <hi@alyssa.is>
  Anders Roxell <anders.roxell@linaro.org>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Baokun Li <libaokun1@huawei.com>
  Bhavya Kapoor <b-kapoor@ti.com>
  Brian Masney <bmasney@redhat.com>
  Chandan Babu R <chandan.babu@oracle.com>
  Chris Paterson (CIP) <chris.paterson2@renesas.com>
  Christoph Hellwig <hch@lst.de>
  Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <error27@gmail.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Darrick J. Wong <darrick.wong@oracle.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Young <dyoung@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Douglas Raillard <douglas.raillard@arm.com>
  Ekaterina Orlova <vorobushek.ok@gmail.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Gao Xiang <hsiangkao@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gwangun Jung <exsociety@gmail.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Heiko Carstens <hca@linux.ibm.com>
  Heiko Stuebner <heiko@sntech.de>
  Ingo Molnar <mingo@kernel.org>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim<jhs@mojatatu.com>
  Jason Wang <jasowang@redhat.com>
  Jianqun Xu <jay.xu@rock-chips.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Denose <jdenose@chromium.org>
  Jonathan Denose <jdenose@google.com>
  Juergen Gross <jgross@suse.com>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Mark Brown <broonie@kernel.org>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Naama Meir <naamax.meir@linux.intel.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nick Desaulniers <ndesaulniers@google.com>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Pingfan Liu <kernelfans@gmail.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Ritesh Harjani <riteshh@linux.ibm.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sasha Levin <sashal@kernel.org>
  Sebastian Basierski <sebastianx.basierski@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <thierry.reding@gmail.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Tomas Henzl <thenzl@redhat.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tudor Ambarus <tudor.ambarus@linaro.org>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vasily Gorbik <gor@linux.ibm.com>
  Xuan Zhuo <xuanzhuo@linux.alibaba.com>
  Yanjun Zhang <zhangyanjun@cestc.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>
  Álvaro Fernández Rojas <noltari@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1535 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 09:51:34 2023
Message-ID: <d93a5eaa-1f6f-a2b9-b238-04b8bb1a33dc@citrix.com>
Date: Thu, 27 Apr 2023 10:51:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/7] tools: Make some callers of xc_domain_getinfo use
 xc_domain_getinfolist
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <20230426145932.3340-2-alejandro.vallejo@cloud.com>
In-Reply-To: <20230426145932.3340-2-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Just as a note for the subject, we more commonly write function names
with ()'s.

On 26/04/2023 3:59 pm, Alejandro Vallejo wrote:
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index 05967ecc92..90b33aa3a7 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -468,6 +468,11 @@ typedef struct xc_dominfo {
>  
>  typedef xen_domctl_getdomaininfo_t xc_domaininfo_t;
>  
> +static inline unsigned int dominfo_shutdown_reason(const xc_domaininfo_t *info)
> +{
> +    return (info->flags >> XEN_DOMINF_shutdownshift) & XEN_DOMINF_shutdownmask;
> +}
> +
>  typedef union 
>  {
>  #if defined(__i386__) || defined(__x86_64__)
> diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
> index 35901c2d63..38212e8091 100644
> --- a/tools/python/xen/lowlevel/xc/xc.c
> +++ b/tools/python/xen/lowlevel/xc/xc.c
> @@ -368,21 +368,22 @@ static PyObject *pyxc_domain_getinfo(XcObject *self,
>          info_dict = Py_BuildValue(
>              "{s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i"
>              ",s:L,s:L,s:L,s:i,s:i,s:i}",
> -            "domid",           (int)info[i].domid,
> +            "domid",           (int)info[i].domain,
>              "online_vcpus",    info[i].nr_online_vcpus,
>              "max_vcpu_id",     info[i].max_vcpu_id,
> -            "hvm",             info[i].hvm,
> -            "dying",           info[i].dying,
> -            "crashed",         info[i].crashed,
> -            "shutdown",        info[i].shutdown,
> -            "paused",          info[i].paused,
> -            "blocked",         info[i].blocked,
> -            "running",         info[i].running,
> -            "mem_kb",          (long long)info[i].nr_pages*(XC_PAGE_SIZE/1024),
> +            "hvm",             !!(info[i].flags & XEN_DOMINF_hvm_guest),
> +            "dying",           !!(info[i].flags & XEN_DOMINF_dying),
> +            "crashed",         (info[i].flags & XEN_DOMINF_shutdown) &&
> +                                 (dominfo_shutdown_reason(&info[i]) == SHUTDOWN_crash),

Isn't this your dominfo_shutdown_with() from patch 6?

I'd pull that forward to this patch too, and use it here.

> +            "shutdown",        !!(info[i].flags & XEN_DOMINF_shutdown),
> +            "paused",          !!(info[i].flags & XEN_DOMINF_paused),
> +            "blocked",         !!(info[i].flags & XEN_DOMINF_blocked),
> +            "running",         !!(info[i].flags & XEN_DOMINF_running),
> +            "mem_kb",          (long long)info[i].tot_pages*(XC_PAGE_SIZE/1024),
>              "cpu_time",        (long long)info[i].cpu_time,
> -            "maxmem_kb",       (long long)info[i].max_memkb,
> +            "maxmem_kb",       (long long)(info[i].max_pages << (XC_PAGE_SHIFT - 10)),
>              "ssidref",         (int)info[i].ssidref,
> -            "shutdown_reason", info[i].shutdown_reason,
> +            "shutdown_reason", dominfo_shutdown_reason(&info[i]),
>              "cpupool",         (int)info[i].cpupool);
>          pyhandle = PyList_New(sizeof(xen_domain_handle_t));
>          if ( (pyhandle == NULL) || (info_dict == NULL) )
> diff --git a/tools/xenmon/xenbaked.c b/tools/xenmon/xenbaked.c
> index 4dddbd20e2..8632b10ea4 100644
> --- a/tools/xenmon/xenbaked.c
> +++ b/tools/xenmon/xenbaked.c
> @@ -775,7 +775,7 @@ static void global_init_domain(int domid, int idx)
>  static int indexof(int domid)
>  {
>      int idx;
> -    xc_dominfo_t dominfo[NDOMAINS];
> +    xc_domaininfo_t dominfo[NDOMAINS];
>      xc_interface *xc_handle;
>      int ndomains;
>    
> @@ -797,7 +797,7 @@ static int indexof(int domid)
>  
>      // call domaininfo hypercall to try and garbage collect unused entries
>      xc_handle = xc_interface_open(0,0,0);
> -    ndomains = xc_domain_getinfo(xc_handle, 0, NDOMAINS, dominfo);
> +    ndomains = xc_domain_getinfolist(xc_handle, 0, NDOMAINS, dominfo);
>      xc_interface_close(xc_handle);

Not to do with your patch, but this logic is mad.  xenbaked opens and
closes a xenctrl handle every time its set of domids changes.

I'm very seriously tempted to delete all of tools/xenmon because it
shows no signs of being in use, and right now all it does is spit out an
unending stream of

gotten<100ns in qos_switchout(domid=32767)
gotten<100ns in qos_switchout(domid=0)

to stdout, which is antisocial for something calling itself a daemon.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 09:53:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 09:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526920.818995 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pryJz-0001oj-P8; Thu, 27 Apr 2023 09:53:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526920.818995; Thu, 27 Apr 2023 09:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pryJz-0001oT-Jl; Thu, 27 Apr 2023 09:53:51 +0000
Received: by outflank-mailman (input) for mailman id 526920;
 Thu, 27 Apr 2023 09:53:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=81si=AS=citrix.com=prvs=47455b11e=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pryJy-0001oJ-Ey
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 09:53:50 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 679b6bd9-e4e1-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 11:53:49 +0200 (CEST)
Received: from mail-bn7nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Apr 2023 05:53:46 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB4920.namprd03.prod.outlook.com (2603:10b6:a03:1f0::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22; Thu, 27 Apr
 2023 09:53:41 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%4]) with mapi id 15.20.6319.033; Thu, 27 Apr 2023
 09:53:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <a32fb66f-ac9e-e6f1-04a7-1345ab7f9805@citrix.com>
Date: Thu, 27 Apr 2023 10:53:34 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/7] tools: Create xc_domain_getinfo_single()
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <20230426145932.3340-3-alejandro.vallejo@cloud.com>
In-Reply-To: <20230426145932.3340-3-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 26/04/2023 3:59 pm, Alejandro Vallejo wrote:
> It's a stricter version of xc_domain_getinfo(): the returned domid
> always matches the requested domid, or an error code is returned instead.
> Later patches remove usages of xc_domain_getinfo() until only
> xc_domain_getinfo_single() and xc_domain_getinfolist() remain.
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Juergen Gross <jgross@suse.com>
> ---
>  tools/include/xenctrl.h     | 16 ++++++++++++++++
>  tools/libs/ctrl/xc_domain.c | 22 ++++++++++++++++++++++
>  2 files changed, 38 insertions(+)
>
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index 90b33aa3a7..73b07955c6 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -696,6 +696,22 @@ int xc_vcpu_getaffinity(xc_interface *xch,
>  int xc_domain_get_guest_width(xc_interface *xch, uint32_t domid,
>                                unsigned int *guest_width);
>  
> +/**
> + * This function will return information about a single domain. It looks
> + * up the domain by the provided domid and succeeds if the domain exists
> + * and is accessible by the current domain, or fails otherwise. A buffer
> + * may optionally be passed in the `info` parameter in order to retrieve
> + * information about the domain. The buffer is ignored if NULL is
> + * passed instead.
> + *
> + * @parm xch a handle to an open hypervisor interface
> + * @parm domid domid to lookup
> + * @parm info Optional domain information buffer (may be NULL)
> + * @return 0 on success, otherwise the call failed and info is undefined
> + */
> +int xc_domain_getinfo_single(xc_interface *xch,
> +                             uint32_t domid,
> +                             xc_domaininfo_t *info);
>  
>  /**
>   * This function will return information about one or more domains. It is
> diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
> index e939d07157..3ff91023bf 100644
> --- a/tools/libs/ctrl/xc_domain.c
> +++ b/tools/libs/ctrl/xc_domain.c
> @@ -345,6 +345,28 @@ int xc_dom_vuart_init(xc_interface *xch,
>      return rc;
>  }
>  
> +int xc_domain_getinfo_single(xc_interface *xch,
> +                             uint32_t domid,
> +                             xc_domaininfo_t *info)
> +{
> +    struct xen_domctl domctl = {
> +        .cmd = XEN_DOMCTL_getdomaininfo,
> +        .domain = domid,
> +    };
> +
> +    int rc = do_domctl(xch, &domctl);

Minor style.  Should have a newline here, and drop the one 2 lines up.

By and large, this library is mostly Xen style and we're trying to make
it more consistent than it is, so we want extra spaces in the if
conditions below.

Otherwise, LGTM.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

> +    if (rc < 0)
> +        return rc;
> +
> +    if (domctl.u.getdomaininfo.domain != domid)
> +        return -ESRCH;
> +
> +    if (info)
> +        *info = domctl.u.getdomaininfo;
> +
> +    return rc;
> +}
> +
>  int xc_domain_getinfo(xc_interface *xch,
>                        uint32_t first_domid,
>                        unsigned int max_doms,



From xen-devel-bounces@lists.xenproject.org Thu Apr 27 10:37:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 10:37:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526926.819004 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pryzT-0006NX-2H; Thu, 27 Apr 2023 10:36:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526926.819004; Thu, 27 Apr 2023 10:36:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pryzS-0006NQ-VN; Thu, 27 Apr 2023 10:36:42 +0000
Received: by outflank-mailman (input) for mailman id 526926;
 Thu, 27 Apr 2023 10:36:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=81si=AS=citrix.com=prvs=47455b11e=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pryzQ-0006NF-HT
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 10:36:41 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 62598de4-e4e7-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 12:36:37 +0200 (CEST)
Received: from mail-co1nam11lp2171.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Apr 2023 06:36:30 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB5957.namprd03.prod.outlook.com (2603:10b6:208:310::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Thu, 27 Apr
 2023 10:36:28 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%4]) with mapi id 15.20.6319.033; Thu, 27 Apr 2023
 10:36:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <d66185a7-fb45-3e08-7414-e168c8ee96f6@citrix.com>
Date: Thu, 27 Apr 2023 11:36:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/7] tools: Refactor the console/io.c to avoid using
 xc_domain_getinfo()
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <20230426145932.3340-4-alejandro.vallejo@cloud.com>
In-Reply-To: <20230426145932.3340-4-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BL1PR03MB5957:EE_
X-MS-Office365-Filtering-Correlation-Id: 3c829a5f-ecf4-4b79-02b9-08db470b418c
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3c829a5f-ecf4-4b79-02b9-08db470b418c
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 10:36:27.6546
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Iu4ZZr88zfC37LhJZPkb+Z42fSTW1QWs77KNOcdX56D152HcQpGV77NmfAkshKYyZNXSzqVkq1HZN5VLekZEKLbW/oMeI8PdgHtdo+4oroc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB5957

On 26/04/2023 3:59 pm, Alejandro Vallejo wrote:
> It has 2 avoidable occurrences
>
> * Check whether a domain is valid, which can be done faster with
>     xc_domain_getinfo_single()
> * Domain discovery, which can be done much faster with the sysctl
>     interface through xc_domain_getinfolist().
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> ---
>  tools/console/daemon/io.c | 31 +++++++++++++++++--------------
>  1 file changed, 17 insertions(+), 14 deletions(-)
>
> diff --git a/tools/console/daemon/io.c b/tools/console/daemon/io.c
> index 6bfe96715b..1fc56f6643 100644
> --- a/tools/console/daemon/io.c
> +++ b/tools/console/daemon/io.c
> @@ -405,13 +405,7 @@ static void buffer_advance(struct buffer *buffer, size_t len)
>  
>  static bool domain_is_valid(int domid)
>  {
> -	bool ret;
> -	xc_dominfo_t info;
> -
> -	ret = (xc_domain_getinfo(xc, domid, 1, &info) == 1 &&
> -	       info.domid == domid);
> -		
> -	return ret;
> +	return xc_domain_getinfo_single(xc, domid, NULL) == 0;
>  }
>  
>  static int create_hv_log(void)
> @@ -959,26 +953,35 @@ static void shutdown_domain(struct domain *d)
>  
>  static unsigned enum_pass = 0;
>  
> +/**
> + * Memory set aside to query the state of every
> + * domain in the hypervisor in a single hypercall.
> + */
> +static xc_domaininfo_t domaininfo[DOMID_FIRST_RESERVED - 1];
> +

We prefer to reduce scope where possible, and in this case it's fine to
have this declared inside enum_domains().  Preferred style for that
would be:

static void enum_domains(void)
{
    /**
     * Memory set aside to query the state of every
     * domain in the hypervisor in a single hypercall.
     */
    static xc_domaininfo_t domaininfo[DOMID_FIRST_RESERVED - 1];

    int ret;
    struct domain *dom;


i.e. one blank line between the static and local variable declarations.

>  static void enum_domains(void)
>  {
> -	int domid = 1;
> -	xc_dominfo_t dominfo;
> +	int ret;
>  	struct domain *dom;
>  
>  	enum_pass++;
>  
> -	while (xc_domain_getinfo(xc, domid, 1, &dominfo) == 1) {
> -		dom = lookup_domain(dominfo.domid);
> -		if (dominfo.dying) {
> +	/* Fetch info on every valid domain except for dom0 */
> +	ret = xc_domain_getinfolist(xc, 1, DOMID_FIRST_RESERVED - 1, domaininfo);

This is a correct translation of the prior logic.

But it does highlight that xenconsoled currently depends on running in
dom0.  I bet this is going to be a fun bug for someone down the line...

Also, while going for 32k entries is absolutely the right thing to do,
the entire buffer will be bounced twice to make it hypercall safe. 
Bordering on 4M of data, I think this is quickly going to become a
second improvement to work on.
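
For reference, the back-of-the-envelope arithmetic looks roughly like this
(DOMID_FIRST_RESERVED is 0x7ff0 in Xen's public headers; the 120-byte
entry size is an assumption standing in for sizeof(xc_domaininfo_t),
which may differ by release):

#include <stdio.h>

/* Illustrative sketch only.  DOMID_FIRST_RESERVED matches Xen's public
 * headers; ASSUMED_ENTRY_SIZE is an assumed stand-in for
 * sizeof(xc_domaininfo_t). */
#define DOMID_FIRST_RESERVED 0x7ff0
#define ASSUMED_ENTRY_SIZE   120UL

int main(void)
{
    unsigned long entries = DOMID_FIRST_RESERVED - 1;
    unsigned long bytes = entries * ASSUMED_ENTRY_SIZE;

    /* ~32k entries at ~120 bytes each lands just shy of 4 MiB, which
     * is the buffer that gets bounced twice per hypercall. */
    printf("%lu entries, %lu bytes (%.2f MiB)\n",
           entries, bytes, bytes / (1024.0 * 1024.0));
    return 0;
}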

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 11:14:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 11:14:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526954.819025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1przaK-0002TW-3Z; Thu, 27 Apr 2023 11:14:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526954.819025; Thu, 27 Apr 2023 11:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1przaK-0002TP-0x; Thu, 27 Apr 2023 11:14:48 +0000
Received: by outflank-mailman (input) for mailman id 526954;
 Thu, 27 Apr 2023 11:14:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=81si=AS=citrix.com=prvs=47455b11e=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1przaI-0002TJ-Cu
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 11:14:46 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b5d76ed1-e4ec-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 13:14:44 +0200 (CEST)
Received: from mail-mw2nam10lp2101.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.101])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Apr 2023 07:14:35 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ2PR03MB7166.namprd03.prod.outlook.com (2603:10b6:a03:4f8::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.20; Thu, 27 Apr
 2023 11:14:31 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%4]) with mapi id 15.20.6319.033; Thu, 27 Apr 2023
 11:14:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5d76ed1-e4ec-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682594084;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=wyY/1gXo6wIULriHmmDkpKht/oEPM2Yn3JhjFMcZzSg=;
  b=ASUFOcmHIXP5M8PU9RQpgDOqJgQkfoBQU8enss7dTYQKnehJY/adugjs
   yXaqdlXhAkVqlUU2TUcW1uvAP/KjXaBAqHbr/hffBXsaGoL4csajDFqRb
   HD8H7Xie4LDhEOrNAuqQWZkGmoTrhj0votD2gp/LUhCqBBzVO3n04CEh0
   c=;
X-IronPort-RemoteIP: 104.47.55.101
X-IronPort-MID: 105827684
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Du775kTpuiYYRdC1MCLtJ7Y1tKccXrPTqxmMH3KH9CI=;
 b=Vl292L7AdZ/2RAY+x3pU044rcDrXd09zZ6paI+0jPldtP/JdQPM5T60k4IVUZgrX7Ussx1QeU/cBiT/dSJPd91dYmVQPWbSULaT//KN+Tr3sklKSclEhfPkPwsZLA2mDMQnlLAiv9Y8qJgUhjFxwNXandm6xkFrKeGiMLDW7Qpc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <2bf43f9c-6f24-9997-b676-4a21e1f04e56@citrix.com>
Date: Thu, 27 Apr 2023 12:14:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 5/7] tools: Modify single-domid callers of
 xc_domain_getinfolist
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <20230426145932.3340-6-alejandro.vallejo@cloud.com>
In-Reply-To: <20230426145932.3340-6-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0353.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18d::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ2PR03MB7166:EE_
X-MS-Office365-Filtering-Correlation-Id: 153acacd-8470-4300-af60-08db4710928e
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 153acacd-8470-4300-af60-08db4710928e
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 11:14:30.9860
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oHSlnhhnhTXnEhxJ7j02/Q6V5NkK8Ve/lSZz3E3UU1TCGbC43drNbE0LKKBLAE6HBzzryQQ4iVeZQ+QEyXcx9FpxNUFsKkNpVSGwoxkB+KE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR03MB7166

On 26/04/2023 3:59 pm, Alejandro Vallejo wrote:
> diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
> index 25fb716084..482e04b38c 100644
> --- a/tools/libs/light/libxl_dom.c
> +++ b/tools/libs/light/libxl_dom.c
> @@ -70,15 +70,10 @@ int libxl__domain_cpupool(libxl__gc *gc, uint32_t domid)
>      xc_domaininfo_t info;
>      int ret;
>  
> -    ret = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
> -    if (ret != 1)
> +    ret = xc_domain_getinfo_single(CTX->xch, domid, &info);
> +    if (ret < 0)
>      {
> -        LOGE(ERROR, "getinfolist failed %d", ret);
> -        return ERROR_FAIL;
> -    }
> -    if (info.domain != domid)
> -    {
> -        LOGE(ERROR, "got info for dom%d, wanted dom%d\n", info.domain, domid);
> +        LOGE(ERROR, "getinfo_single failed %d", ret);

These are vaguely for human consumption.  This one wants to be

LOGED(ERROR, domid, "get dominfo failed: %d", ret);

I think.  (This code quite possibly predates LOGED() being introduced.)

> diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
> index 7f0986c185..33ac8e9ce8 100644
> --- a/tools/libs/light/libxl_domain.c
> +++ b/tools/libs/light/libxl_domain.c
> @@ -349,16 +349,12 @@ int libxl_domain_info(libxl_ctx *ctx, libxl_dominfo *info_r,
>      int ret;
>      GC_INIT(ctx);
>  
> -    ret = xc_domain_getinfolist(ctx->xch, domid, 1, &xcinfo);
> +    ret = xc_domain_getinfo_single(ctx->xch, domid, &xcinfo);
>      if (ret<0) {
> -        LOGED(ERROR, domid, "Getting domain info list");
> +        LOGED(ERROR, domid, "Getting domain info single");

Swapping list for single really isn't very helpful here.  "Getting
domain info" would be better than either of these, but all of these
ought to be updated to print ret, because right now I don't think
there's any qualifying information.

Interpreting -ESRCH is the important thing here, because that's the
common "your domain doesn't exist (or no longer exists)" case.
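
Something along these lines, i.e. (sketched with a stub; the assumption,
per the rest of this series, is that xc_domain_getinfo_single() returns
0 on success and -1 with errno set on failure):

#include <errno.h>
#include <stdio.h>

/* Stub standing in for libxc's xc_domain_getinfo_single(); here it
 * pretends every domid except 0 has already been destroyed. */
static int fake_getinfo_single(int domid)
{
    if (domid != 0) {
        errno = ESRCH;
        return -1;
    }
    return 0;
}

int main(void)
{
    int domid = 7;

    if (fake_getinfo_single(domid) < 0) {
        /* ESRCH is the expected "domain gone" case; anything else is
         * a real error worth logging with its errno value. */
        if (errno == ESRCH)
            printf("dom%d does not (or no longer) exist\n", domid);
        else
            printf("get dominfo failed: %d\n", errno);
    }
    return 0;
}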

> diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
> index 6e5490315d..023f2bf295 100644
> --- a/tools/xenpaging/xenpaging.c
> +++ b/tools/xenpaging/xenpaging.c
> @@ -169,10 +169,10 @@ static int xenpaging_get_tot_pages(struct xenpaging *paging)
>      xc_domaininfo_t domain_info;
>      int rc;
>  
> -    rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1, &domain_info);
> -    if ( rc != 1 )
> +    rc = xc_domain_getinfo_single(xch, paging->vm_event.domain_id, &domain_info);
> +    if ( rc < 0 )
>      {
> -        PERROR("Error getting domain info");
> +        PERROR("Error getting domain info single");

These messages I'd just be tempted to leave as they are.  xenpaging
hasn't left experimental status...

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 11:16:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 11:16:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526960.819036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1przbY-00037N-Ho; Thu, 27 Apr 2023 11:16:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526960.819036; Thu, 27 Apr 2023 11:16:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1przbY-00037G-EE; Thu, 27 Apr 2023 11:16:04 +0000
Received: by outflank-mailman (input) for mailman id 526960;
 Thu, 27 Apr 2023 11:16:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=81si=AS=citrix.com=prvs=47455b11e=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1przbX-00037A-60
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 11:16:03 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e2ad50ff-e4ec-11ed-8611-37d641c3527e;
 Thu, 27 Apr 2023 13:16:00 +0200 (CEST)
Received: from mail-mw2nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.102])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Apr 2023 07:15:57 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ2PR03MB7166.namprd03.prod.outlook.com (2603:10b6:a03:4f8::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.20; Thu, 27 Apr
 2023 11:15:55 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%4]) with mapi id 15.20.6319.033; Thu, 27 Apr 2023
 11:15:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2ad50ff-e4ec-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682594160;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Fe8gMXfOsgnswZ2zOO7JRZKORpH+mxR2VdCwSfoEgdg=;
  b=dUt8BZ9Kr3x0W1//SouSwc8rkwV6d6YGH0/8MVdyKQK9dTFnmSyr094V
   CiM2DhTzFGkya4En1f4v8fqqczxTwjiGsL1uAWTqi8XNmvPjfMRUXx8Nf
   O1M4AOiNPhwn8WP+oGsacGit7khgrtLNeeV8WljNbnqGqfzzKZM5Yfaru
   w=;
X-IronPort-RemoteIP: 104.47.55.102
X-IronPort-MID: 107470510
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sM11ygwuR02diH497+xYmRjJrONKzsLSOcjSgvC8yxE=;
 b=bilUk+1t5slc70j5rZcDBFqjoF+ID9+BaxKc8BOWfExUJeUGe4faQQfuzjjH5bjwc7xQa4Zccl1+PW6dTkPnvNxT/bSdz5VXEtWdi6/zSzd/MgTcSpIcNk+l2Dg5E+6Q74ZbS+zwPAL2B5O/RBW/tDJHxVjZNpywXFyb0arXl78=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <7a2b9ce2-a82b-1ef3-ff0f-f8bd479451b4@citrix.com>
Date: Thu, 27 Apr 2023 12:15:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 7/7] domctl: Modify getdomaininfo to fail if domid is not
 found
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Jan Beulich
 <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <20230426145932.3340-8-alejandro.vallejo@cloud.com>
In-Reply-To: <20230426145932.3340-8-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0357.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18d::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ2PR03MB7166:EE_
X-MS-Office365-Filtering-Correlation-Id: c1c32a54-1b00-4a82-98d1-08db4710c4d3
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c1c32a54-1b00-4a82-98d1-08db4710c4d3
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 11:15:55.3037
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hDjxlckX3F9Rjsm1pZyB9zYZy/e8Jg7pLfI22X1lQyRWY1gNZmnmSiYJ6ftvNOTEWa5BYFy/rogkZJzoZLCIlhF8Sm91gTFHtKLrMSuVenA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR03MB7166

On 26/04/2023 3:59 pm, Alejandro Vallejo wrote:
> It previously mimicked the getdomaininfo sysctl semantics by returning
> the first domid higher than the requested domid that does exist. This
> unintuitive behaviour causes quite a few mistakes and makes the call
> needlessly slow in its error path.
>
> This patch removes the fallback search, returning -ESRCH if the requested
> domain doesn't exist. Domain discovery can still be done through the sysctl
> interface as that performs a linear search on the list of domains.
>
> With this modification the xc_domain_getinfo() function is deprecated and
> removed to make sure it's not mistakenly used expecting the old behaviour.
> The new xc wrapper is xc_domain_getinfo_single().
>
> All previous callers of xc_domain_getinfo() have been updated to use
> xc_domain_getinfo_single() or xc_domain_getinfolist() instead. This also
> means xc_dominfo_t is no longer used by anything and can be purged.
>
> Resolves: xen-project/xen#105
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Juergen Gross <jgross@suse.com>
> ---
>  tools/include/xenctrl.h     | 43 -----------------------
>  tools/libs/ctrl/xc_domain.c | 70 -------------------------------------
>  xen/common/domctl.c         | 32 ++---------------
>  3 files changed, 2 insertions(+), 143 deletions(-)

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Good riddance to this disaster of an interface...


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 12:06:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 12:06:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526972.819072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps0O0-0000mq-4H; Thu, 27 Apr 2023 12:06:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526972.819072; Thu, 27 Apr 2023 12:06:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps0O0-0000mj-18; Thu, 27 Apr 2023 12:06:08 +0000
Received: by outflank-mailman (input) for mailman id 526972;
 Thu, 27 Apr 2023 12:06:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UO3s=AS=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1ps0Ny-0000lo-Rx
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 12:06:06 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on20628.outbound.protection.outlook.com
 [2a01:111:f400:7e8d::628])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e051c735-e4f3-11ed-8611-37d641c3527e;
 Thu, 27 Apr 2023 14:06:02 +0200 (CEST)
Received: from BL0PR02CA0046.namprd02.prod.outlook.com (2603:10b6:207:3d::23)
 by IA1PR12MB7494.namprd12.prod.outlook.com (2603:10b6:208:41a::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Thu, 27 Apr
 2023 12:05:58 +0000
Received: from BL02EPF00010207.namprd05.prod.outlook.com
 (2603:10b6:207:3d:cafe::27) by BL0PR02CA0046.outlook.office365.com
 (2603:10b6:207:3d::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22 via Frontend
 Transport; Thu, 27 Apr 2023 12:05:58 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BL02EPF00010207.mail.protection.outlook.com (10.167.241.197) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.15 via Frontend Transport; Thu, 27 Apr 2023 12:05:58 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 27 Apr
 2023 07:05:58 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 27 Apr
 2023 07:05:57 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 27 Apr 2023 07:05:56 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e051c735-e4f3-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OTpC3oWXlPXFsYSFr7V9QcFH2uDaTRRKlA/m2cHl08B3p3m1Aoa3BenqjEf4BuKrtnN5n3J8zCjyQSOcyYgTUUHFzhjBSgC85SFITKHdpnnUrH7iEPE+22DxvBV7zOT/VTXhvIul3h1o8Jy8kokc5lSdXT/F5mSaidl0L2xVQ8Ovr3/BcbrhbG9MinRcYj1eNMmzBew/pCJqphQMXYUoX5+TTaumWxgPsQmthQhqfhtgQueQQijHF9RAu9jl85v1FJ6doL1W1zECM5TpaoiWrrWTDLCpyTVj4xC0Yo/XIQy7pfPIuhMMGMuqeQrMMEjXaFhFJGgxA2+xsiGq9vMtHA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Zv9ymspoEyaD/gOFLfw2UKsgnsHva99GtVEfBw58y0Q=;
 b=UxcIYBM1hKQuE5u9iD4T4zDtaDGjVCiAr0UM0+B3OPwpth0bVA6e0OroQIReTYoPMzw4OGeBdMEzOg7OqIRu7ArOPMTXolEoR1HP3v475/yQ24bjQBBbgvjJvNkDb3K3ld6I2yu/gaK35cctpB6D0bPzDYDmLB3UtMvDUXhE5guRtOvBRE/n0LtBzfFbCcihtEj3tTQ7WbO9XYwv+sOJ7vtrcs5w2NXct0CsL+2b0c0psitGiGdZvWiE/PwNts5FUo6UESkXOqEPrmE0Ysqq+KQO/tnWCk6ph8BH7gfz4o4OFuP+0BaMVCg2PyiJ/A/hRGYHnxldK2XpbuIq+xqdWQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Zv9ymspoEyaD/gOFLfw2UKsgnsHva99GtVEfBw58y0Q=;
 b=GQLZwNzMIVEdnsida90/eR4RKRq6p2JUG0b3tw71CInLaQmzpZopbDRZHcTGI2SzebaCxCt6USb93nW0ayFXffTyNbk527Y5ynCo0FTTjAe2sca3xAPmKAGPeLJp+dix5/AysM/78QH2ixnpC61xUhclcI/rXeSixAIySnpB1+E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 2/2] automation: xilinx: Add GEM passthrough test
Date: Thu, 27 Apr 2023 14:05:53 +0200
Message-ID: <20230427120553.18088-3-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230427120553.18088-1-michal.orzel@amd.com>
References: <20230427120553.18088-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF00010207:EE_|IA1PR12MB7494:EE_
X-MS-Office365-Filtering-Correlation-Id: e733f66f-b032-42b8-b380-08db4717c30c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 12:05:58.4797
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e733f66f-b032-42b8-b380-08db4717c30c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF00010207.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB7494

Being able to access a real board with real resources gives us a great
opportunity to finally test passing devices through to guests. Therefore,
create a new Xilinx job to test GEM (Gigabit Ethernet MAC) controller
passthrough to a dom0less domU.

When "gem-passthrough" is passed as the test variant, the test instructs
ImageBuilder to use "eth0.dtb" (a passthrough dtb stored under the TFTP
server root) as the guest dtb and to add the "xen,passthrough" property to
the "/amba/ethernet@ff0e0000" node. The guest itself will try to bring up
the network interface, obtain an IP address dynamically, and ping the
default gateway.
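
The guest-side check described above is only a few BusyBox commands; the one
non-obvious step is that the gateway address is not hard-coded but parsed out
of the routing table. A standalone sketch of that step (the sample `ip route`
line below is an assumption for illustration, not output captured from the
board):

```shell
# Sketch of the domU network check. Steps 1 and 3 need real hardware and are
# shown as comments; step 2 (gateway discovery) runs anywhere.

# 1) bring up the passed-through NIC and obtain an address via DHCP:
#      ifconfig eth0 up
#      udhcpc -i eth0 -n

# 2) extract the default gateway from "ip route" output; a sample line
#    stands in for the real command here:
route_output="default via 192.168.0.1 dev eth0"
gw=$(echo "${route_output}" | awk '/^default/ {print $3}')
echo "gateway: ${gw}"

# 3) ping the discovered gateway:
#      ping -c 10 "${gw}"
```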

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
Example job:
https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4189922473
---
 automation/gitlab-ci/test.yaml                |  8 ++++++
 .../scripts/xilinx-smoke-dom0less-arm64.sh    | 25 +++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index d68c584269dd..3409d704a7eb 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -131,6 +131,14 @@ xilinx-smoke-dom0less-arm64-gcc:
     - *arm64-test-needs
     - alpine-3.12-gcc-arm64
 
+xilinx-smoke-dom0less-arm64-gcc-gem-passthrough:
+  extends: .xilinx-arm64
+  script:
+    - ./automation/scripts/xilinx-smoke-dom0less-arm64.sh gem-passthrough 2>&1 | tee ${LOGFILE}
+  needs:
+    - *arm64-test-needs
+    - alpine-3.12-gcc-arm64
+
 adl-smoke-x86-64-gcc-debug:
   extends: .adl-x86-64
   script:
diff --git a/automation/scripts/xilinx-smoke-dom0less-arm64.sh b/automation/scripts/xilinx-smoke-dom0less-arm64.sh
index 73ba251f4cc1..075305241c8d 100755
--- a/automation/scripts/xilinx-smoke-dom0less-arm64.sh
+++ b/automation/scripts/xilinx-smoke-dom0less-arm64.sh
@@ -22,6 +22,22 @@ echo \"${passed}\"
 "
 fi
 
+if [[ "${test_variant}" == "gem-passthrough" ]]; then
+    passed="${test_variant} test passed"
+
+    # For a passthroughed GEM:
+    # - bring up the network interface
+    # - dynamically assign IP
+    # - ping the default gateway
+    domU_check="
+set -ex
+ifconfig eth0 up
+udhcpc -i eth0 -n
+ping -c 10 \$(ip route | awk '/^default/ {print \$3}')
+echo \"${passed}\"
+"
+fi
+
 # DomU
 mkdir -p rootfs
 cd rootfs
@@ -96,6 +112,15 @@ cp -f binaries/domU-rootfs.cpio.gz $TFTP/
 # export dtb to artifacts
 cp $TFTP/mpsoc_smmu.dtb .
 
+if [[ "${test_variant}" == "gem-passthrough" ]]; then
+    echo "
+    DOMU_PASSTHROUGH_DTB[0]=\"eth0.dtb\"
+    DOMU_PASSTHROUGH_PATHS[0]=\"/amba/ethernet@ff0e0000\"" >> $TFTP/config
+
+    # export passthrough dtb to artifacts
+    cp $TFTP/eth0.dtb .
+fi
+
 rm -rf imagebuilder
 git clone https://gitlab.com/ViryaOS/imagebuilder
 bash imagebuilder/scripts/uboot-script-gen -t tftp -d $TFTP/ -c $TFTP/config
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 27 12:06:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 12:06:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526971.819062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps0Nv-0000Vz-Sb; Thu, 27 Apr 2023 12:06:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526971.819062; Thu, 27 Apr 2023 12:06:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps0Nv-0000Vs-Ow; Thu, 27 Apr 2023 12:06:03 +0000
Received: by outflank-mailman (input) for mailman id 526971;
 Thu, 27 Apr 2023 12:06:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UO3s=AS=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1ps0Nu-0000GO-Qc
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 12:06:02 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20610.outbound.protection.outlook.com
 [2a01:111:f400:7eab::610])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e02d4034-e4f3-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 14:06:02 +0200 (CEST)
Received: from BN9P220CA0012.NAMP220.PROD.OUTLOOK.COM (2603:10b6:408:13e::17)
 by MW4PR12MB7143.namprd12.prod.outlook.com (2603:10b6:303:222::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.34; Thu, 27 Apr
 2023 12:05:58 +0000
Received: from BN8NAM11FT115.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:13e:cafe::61) by BN9P220CA0012.outlook.office365.com
 (2603:10b6:408:13e::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22 via Frontend
 Transport; Thu, 27 Apr 2023 12:05:57 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT115.mail.protection.outlook.com (10.13.177.151) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.23 via Frontend Transport; Thu, 27 Apr 2023 12:05:57 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 27 Apr
 2023 07:05:56 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 27 Apr
 2023 07:05:56 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 27 Apr 2023 07:05:55 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e02d4034-e4f3-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VKYwXZB1Zmf5AQD3Ogl5wdp0UjYnG5obue5G/OY6ZNdRDWuKYIRO8CasvOO1qunIxOsgbAjMEwKwy8oPcBBGvnQw6COA4TI1qpAqJVb72NW14hWBYToWxe1T6VDLQT6WLq31PHUQBQcjb58CWfeOd4lxZxH3OeR/z5hQnUuCVO0qLQzSRYFiPHU5G624qixL0XvjjYAw4F9zbpQqotC+IdcWsXLdGjOlhobM9d2l6FAFi/f6lTNua15qbitMMpfLaU2V3JHIR16e+ztqHmJUxXk4ujpnTD9y5+7LRujj+rVtjMZDtTaHtuVyYO6xTiVJSJ7Naaq0iAfX1Bjfh0WedQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/Yco+D0ZQTHFIKK4jqyOrQ0GQY+dKN40P9v5wpwSnVw=;
 b=WI7Jq9waBXj1u3xx2MPWj3sIPMIpANwc4BoeSHfPuwyxtr/7+BhPUXJ9gqHc2ie1aqyrmhd8wAJVoSlNWTc+oXtP/ouaXZ/ZkthGc6/ujEx86iFyZYk5j2peKZaC/rXQnd0BuKnGdXVRtwShJzn2Z21czbPpbcUqXHHERGf/udmMdtuJBIPu8StJX8ZRinztCsfn8ElD7fAtG9jmSTKddn/LSaG4uPNKbT0jwhtevTONGO0/tGqhnVu2TugNEN111hGbYEavGea8PWfJxMWTDyBuFIGyZDI1S5uy8pztqEchgSbRd4qtI9FRWHcwcEPj8XjGM60qtO3MVObpWenzNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/Yco+D0ZQTHFIKK4jqyOrQ0GQY+dKN40P9v5wpwSnVw=;
 b=yNBgphgmuCQ0zomRtQAZ9M+tHHp7V5LBJgBzmN31/Sq88qbbtqXq9T5Zo/PmaMtZJgNoog2nB4H9jxKYsAvNhmcYqhgoZsZX/wrws+d88BC3CU93BHFiv7hJnKP2d74EGbFUrdchhJHnndvOpMipkj4nGZPfL4IGVLp+e6e6pIA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 1/2] automation: xilinx: Set up bridging only for a default test case
Date: Thu, 27 Apr 2023 14:05:52 +0200
Message-ID: <20230427120553.18088-2-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230427120553.18088-1-michal.orzel@amd.com>
References: <20230427120553.18088-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT115:EE_|MW4PR12MB7143:EE_
X-MS-Office365-Filtering-Correlation-Id: e9c909b7-ae0d-4728-d379-08db4717c245
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 12:05:57.1764
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e9c909b7-ae0d-4728-d379-08db4717c245
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT115.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB7143

At the moment, setting up the network bridge is placed unconditionally in
the dom0 xen.start script. Since we might want to use the network
interface (there is only one working GEM on the board) for other tests
(e.g. passthrough), move the bridge setup into a dom0_check variable that
is only part of the default ping test (i.e. when no test variant is
specified).
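
The pattern above amounts to a per-variant dispatch: each variant stores its
check commands in a variable and only the selected one is emitted into the
start script. A minimal sketch, with commands mirroring the patch but the
echo messages being illustrative placeholders:

```shell
# Select a dom0-side check snippet based on the requested test variant.
test_variant="$1"

if [ -z "${test_variant}" ]; then
    # Default ping test: the bridge setup now lives with the test that
    # actually needs it.
    dom0_check="
brctl addbr xenbr0
brctl addif xenbr0 eth0
ifconfig eth0 up
ifconfig xenbr0 up
ifconfig xenbr0 192.168.0.1
xl network-attach 1 type=vif
"
    echo "selected: default ping test"
else
    # Other variants leave the interface free for their own use.
    dom0_check=""
    echo "selected: ${test_variant}"
fi
```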

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 automation/scripts/xilinx-smoke-dom0less-arm64.sh | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/automation/scripts/xilinx-smoke-dom0less-arm64.sh b/automation/scripts/xilinx-smoke-dom0less-arm64.sh
index 82158ab7ea1b..73ba251f4cc1 100755
--- a/automation/scripts/xilinx-smoke-dom0less-arm64.sh
+++ b/automation/scripts/xilinx-smoke-dom0less-arm64.sh
@@ -6,6 +6,14 @@ test_variant=$1
 
 if [ -z "${test_variant}" ]; then
     passed="ping test passed"
+    dom0_check="
+brctl addbr xenbr0
+brctl addif xenbr0 eth0
+ifconfig eth0 up
+ifconfig xenbr0 up
+ifconfig xenbr0 192.168.0.1
+xl network-attach 1 type=vif
+"
     domU_check="
 until ifconfig eth0 192.168.0.2 &> /dev/null && ping -c 10 192.168.0.1; do
     sleep 30
@@ -51,13 +59,6 @@ bash /etc/init.d/xencommons start
 
 /usr/local/lib/xen/bin/init-dom0less
 
-brctl addbr xenbr0
-brctl addif xenbr0 eth0
-ifconfig eth0 up
-ifconfig xenbr0 up
-ifconfig xenbr0 192.168.0.1
-
-xl network-attach 1 type=vif
 ${dom0_check}
 " > etc/local.d/xen.start
 chmod +x etc/local.d/xen.start
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 27 12:06:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 12:06:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526970.819051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps0Nu-0000Gb-J1; Thu, 27 Apr 2023 12:06:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526970.819051; Thu, 27 Apr 2023 12:06:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps0Nu-0000GU-G6; Thu, 27 Apr 2023 12:06:02 +0000
Received: by outflank-mailman (input) for mailman id 526970;
 Thu, 27 Apr 2023 12:06:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UO3s=AS=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1ps0Nt-0000GO-1K
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 12:06:01 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2062f.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id df67b3c9-e4f3-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 14:05:59 +0200 (CEST)
Received: from BN0PR04CA0038.namprd04.prod.outlook.com (2603:10b6:408:e8::13)
 by PH7PR12MB7454.namprd12.prod.outlook.com (2603:10b6:510:20d::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.20; Thu, 27 Apr
 2023 12:05:56 +0000
Received: from BN8NAM11FT007.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e8:cafe::fe) by BN0PR04CA0038.outlook.office365.com
 (2603:10b6:408:e8::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22 via Frontend
 Transport; Thu, 27 Apr 2023 12:05:56 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT007.mail.protection.outlook.com (10.13.177.109) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.23 via Frontend Transport; Thu, 27 Apr 2023 12:05:55 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 27 Apr
 2023 07:05:55 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 27 Apr
 2023 05:05:55 -0700
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 27 Apr 2023 07:05:54 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df67b3c9-e4f3-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RLyQvTiFOx3zbzKzJWO9buaNmgMKh6SDfYR31QU3FKoZjY1YLNJiURqqs2ClTihGfAp/ikFalfk+btmh0BDP8OCX8wmswSckKRrkxvhzJhY7g2iQvCb1wJ+zmIVj42AR+Q75s2gtPRjo/o3rvc62T8U28Fp48reysJF48DECmwAoLyr4ten+cDpvuAIXZDCKcS0xypP0YgfA2UEcAiJ654ZwXHGawj+FTMD3JSsGQC8SKG2C0gx55cPXFf8NIvDjuhml8f51vyj5A5GUIVB2WIUxk1lMuw3spanW9FxyNFoQGm3ZjqZ+R4n2l2yS0QzSYjqulmbHcPnX832QNuKgsA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eLRSRq4DYXabUfbBLLYypaisVz196Xrufh+Q8gdSKGM=;
 b=Q7hOCsZxCF6MzDa6WilSTC/dTthxUE/8WJ3X86izk7tglnyNZZrg0n7oelnGSpFUtSY8iyQvUhEwv8BdRzoDtzJBj/fSUVgNrEP/r+8aLAHRJY5vGgcOXnRhvsiy7uJk/xkQCXRETdOC7LWr/uoVVfJsEmWPD+qLjc62YaLR+eQGXiShoUkwpfADnIPFHjrfa50FQcNBgLJrJ77KscwCbxRo3elBWCE6CqZ4GiUSSnOTOLjFj53kwEKj1XoJbShE2YoRdzbGzUDruvppePwcSUkzSWkAboEVcOn9gWkaQy61JSbQJDQ4eXWctjJfdmLw0NWK6Xbm+7C+cBch+9YAnQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eLRSRq4DYXabUfbBLLYypaisVz196Xrufh+Q8gdSKGM=;
 b=T4quccLzDqpSe+Ci3HkFWY2m0heE8wlPGwqcLGERG4fmbpXRM5dMakv9ZspLQXkU201Vyc38eugFNuikL0fjKYfafhPGxvnkI+o0/IYy7i5LJ5jLsSYMUC/nqJmVq+F1kKMDPwG9tfs4mbr2Q3oVT+Iqo4sC5vatFH1+x4eTUUQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 0/2] automation: xilinx: GEM passthrough
Date: Thu, 27 Apr 2023 14:05:51 +0200
Message-ID: <20230427120553.18088-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT007:EE_|PH7PR12MB7454:EE_
X-MS-Office365-Filtering-Correlation-Id: 2fe32b8b-0266-4305-6128-08db4717c183
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MTPx3aPdGORHLiS5YTx5VdXp4ql2SCmJfEfwWBi+REIJIp5i5V8siog7cHiWb9bfHl87hwCu20VIZNzBhqYZTnYaI+vigFVTVHNo0DH4Cga8HzdGAVplurtmxBVKE1vpzuoJRGG+sCW/hpKDqmdl9n1KEmzIQicIhW70xz4t366l5BF+EAlcozcmMvrVJlPAvMhEGV6p9K+jlo8z6/YfrvwsJbkdzoewOYrb8clMe12gnaraCZfFQqvPAYx0hPdav3sEN/vJS8XmoTM8T/nXRWCllPMqf78ZQr2eBHjmB6t1fQ9UKRIELGz+ZKynGwnwDWVmpru9b1jReiJT6r4BhaLzryr8o4mlAMEQBBlwT6g80Zmoi4qLaKFqRPNPn6IziF00O54AYYYJLbpfIhxaC7fyuXD7epBkAnW9NibDVZVWfPVj6eh6q4CEho8t5oAOqnzq5YdlaE9NP3MMWcJNzYhAypRUV55lbUzdBGvCG/dhqdkTA0oVO3kQYJBx+yyOpMwUYBhSNrIO5eC5JnH1EH0zpfGBPk36n22tQF16gSn8P7ENQstdkRR6rd/1Ju2RSywLCslyIQNeoYOswzxipGXdUUlQIMRadJD4EBoh7UMTZMH/qUB2DP8TuSPpQI3XCPGJJESC8Djmafvbt2j+exzDjTLQvAQ6gTmuhybJ/Smuqr0zZzbFxsw6ZJwYg5VeQODYHuTzp4JQcJEKM8W/7tFqh1ivNhbAdyuqw4fR1yU=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(396003)(136003)(346002)(451199021)(40470700004)(46966006)(36840700001)(478600001)(83380400001)(36860700001)(2616005)(1076003)(26005)(40480700001)(6666004)(70586007)(54906003)(70206006)(186003)(47076005)(336012)(426003)(81166007)(41300700001)(40460700003)(356005)(5660300002)(316002)(44832011)(82740400003)(4326008)(6916009)(86362001)(4744005)(2906002)(8676002)(8936002)(82310400005)(36756003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 12:05:55.9021
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2fe32b8b-0266-4305-6128-08db4717c183
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT007.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB7454

Small series adding a GEM passthrough test on Xilinx hardware.

Michal Orzel (2):
  automation: xilinx: Set up bridging only for a default test case
  automation: xilinx: Add GEM passthrough test

 automation/gitlab-ci/test.yaml                |  8 ++++
 .../scripts/xilinx-smoke-dom0less-arm64.sh    | 40 +++++++++++++++----
 2 files changed, 41 insertions(+), 7 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Apr 27 12:29:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 12:29:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526982.819082 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps0kN-0003rY-0G; Thu, 27 Apr 2023 12:29:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526982.819082; Thu, 27 Apr 2023 12:29:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps0kM-0003rR-Tn; Thu, 27 Apr 2023 12:29:14 +0000
Received: by outflank-mailman (input) for mailman id 526982;
 Thu, 27 Apr 2023 12:29:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kqME=AS=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ps0kL-0003rL-Fz
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 12:29:13 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20630.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1cc12cf9-e4f7-11ed-8611-37d641c3527e;
 Thu, 27 Apr 2023 14:29:11 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB7698.eurprd04.prod.outlook.com (2603:10a6:20b:282::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Thu, 27 Apr
 2023 12:29:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Thu, 27 Apr 2023
 12:29:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1cc12cf9-e4f7-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fC2N7b0tQXNUZuDQa9pJJ54MD+zQz1lcVlX8BWi23z0wZAzTexJnWmjVy1KwiJYxOdvh7hybsf3U7LSz0iqMNwRSO3N/ENAe9+oh+0bweL4WoEHury6WjWCsCgG7OJlpyxJjL7Sjg7Z2dc6SC5MwMFTN1oiOdWeOsnZD7iA9uxiSbd0XWzuCOXugxOP/NZHvz1nR8jPeQFjjaXF68MXBWgLUios2Br1uyrTCjL57RxR2sphoUEt4a2EhanO+GI1lko256XaHLB74y3K+1wc6sot5oi2anGCLI5wGoDAIPh8S1jlgcy47YI5/Ir1NXvzUr0vhgsUnYPqMu5aN0b+vwA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bJ3ZejmMgiNwVsLvBG8URgcin/AR9FTILHMG5MwlCUc=;
 b=TGjQ+ZpuvAu0CSnQeIL64l08sfSpWONkJEYxVJy5wzqZHVzsU9LY4LKem8kN1ZtOR1KgRBrmhNPunpzTvhho/C0c6xfybQ4Zxv0igcyKmgE9szXRLxKA30Uquno1dzOFB/IF2RKdPxzS2aisCXXGJMOggwgcEgVErLhPn2bViWI4xrq0J3IHc7V+IjyxM3TG6H38/Wwrkqq1pvxS2+9q8BZ4b9qiJyBOuI/Fe2FbqZCvoWTx7R7NnD4aY5MiGVD47J6mefVS2Wvg3aKry2o6rp53ix9/3P9xUefj1sJo3/FQ6ykGF6h/7x2mkJ2c0M8d/mezbtJwLInoE9gPo2iueQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bJ3ZejmMgiNwVsLvBG8URgcin/AR9FTILHMG5MwlCUc=;
 b=RZb20PL4s0JbvYVo4B2X8i/X5jwL5YS7Q5VDjyVXrOz9CoWyVlGVwRap6KS8MmSNpC9nvF7WWy6JiWC72Uu7bh/+w+2We6e3DZcVoRxsLpFdPdrD2lZjf/5CIwJ3w1qUyZZgFBuC7ImoxWZEXLvlQ8Kc7zU3lrDGz68FHJAGL7MnnnV6Si7OVpFM23us3BqtaPQRQLqATZGYjEfDarzoXqQdUTtn4S/gy77QtlEAXf64SWBfBabNqx/JW2+3ZWIFG0NYY5KPdck8Ju9XBncYgXgdG6PhnwZWHzrqLWRn8FERvte3b+yMxnl+tFqq3Cgm0sLEjSwbJQ2e24vLcDIpWg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <104c3456-03bd-37be-627b-45e614a616c1@suse.com>
Date: Thu, 27 Apr 2023 14:29:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/mm: replace bogus assertion in paging_log_dirty_op()
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0118.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB7698:EE_
X-MS-Office365-Filtering-Correlation-Id: d7bb33cd-9045-459e-2913-08db471aff45
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1tFNJGHVNJsPmMksQFjQLoEnras3mV78PD8vq9tLUzXXdG+Udo3Wpe1nf8fXoKHofoS4MVALGR9iFhtVbqn6j6yzfpJcK1hEN3sf0mPimcRPkljESp0VCj/y96/VCs2cnwzSxp6wzsh46SLMC9rY0urGzHwkG4kxHIasCzcwXkw2wy9k0PdO2gYUHAn+JRPwJrW1YQd+tsWwB02vZfhhS34oUoYrMz5UQwkyBM7WVVruLidBHZghOl+3JM5riXJWoGbmK5mzA6QsJztATPH86WMud0iVA3RzTwRyqyKCta3XthcRv/2NftbY3m+nTUKHiV5vmF4AOXGoo4YQUMQI/DX9bREcfsLDUDBPk5i80F7wueYMbt5PQMkWoAv+wVq9nXiDgv9tOLUe6J3E39zpIhg+e8tRqB6eSc7mF7UA559pCTUSAD0Znt9EryGOzJWzbfoKWfevTLL5CbTnUV0KgFBkqXrx/LjogxEGQ8IJrKCtsIpBEAjtXvuMPaQJdFgaflFrHGeXwnQZwNGGR66ZkkMHIKNiLa5zSedyBhpDI9lP4o3bJRg76JyUZ7OiqgCF9nVKKTkBdaItN+7+RD2fsDK7zxwmJ3qIUU0U2V9a9uCVneYKBN25wzjN9+QlqWMvIulL4CctdlbtTzsIMnAiJpoj6jkquRQbnOjJmLP7veU=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(396003)(366004)(346002)(376002)(136003)(451199021)(6512007)(66946007)(66556008)(478600001)(6916009)(6486002)(4326008)(54906003)(8936002)(66476007)(41300700001)(186003)(38100700002)(8676002)(316002)(83380400001)(2616005)(26005)(6506007)(86362001)(31696002)(36756003)(2906002)(4744005)(5660300002)(31686004)(14143004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WUVGNUNDdWw4RTRYRVpEM0d6MHcwbW1wOFhmS2tqM0ppSnU3T3daajI5QkdD?=
 =?utf-8?B?T2pnaVc5RXZkMFdLRGVwWTB1U2k1YzJjMzJpNEo2MkIzRGVheng1WWRkWWtU?=
 =?utf-8?B?YVQ4dHk3OENvQkg4Y2Jxemk5TGtpVW9GNVlOYW9VTldWV3Y2VnBTR2ppYkk1?=
 =?utf-8?B?dHVpdmJCQUhlczRLcVJmOExuMDNOcisvZXkyNmdLSzNCUDdGcS9GZDE0Y1po?=
 =?utf-8?B?M2dKQlJ2Yk92a0djeXI0ZlVmZVN5d256UGFaaFZ3ZFpULzJya3dMODAwWEVN?=
 =?utf-8?B?U1pNZkxRYm8xSS9MakN6R3FOQTBwYzlUOGlzRjJXb1doZzlXT1puNkZMNlZI?=
 =?utf-8?B?ekZ1OXBnKzFHZE9raGxGNHhxanFyeDZJMmQrWjNCWFZ4cVhtV3FrQzRFTXBu?=
 =?utf-8?B?N3RrWTh5eGFZTHlud3A0VE9tZy9UNVJHYzNIUG5mNlVPeDVDR1AzeG9LdHBX?=
 =?utf-8?B?VDU2dDRGbkdmVDgxb05GZ0h6SVpLK0tFcDAremVRMUtnK21GNW1sWnpwL1Vi?=
 =?utf-8?B?WlRrazNLTXpOTEVYMnZhbWkrTFI2OEdRMnkrYnJJKzF2MDRLODBQcW5yYkZP?=
 =?utf-8?B?eVZ6N2kxOSt3SGZlS3JXWG9VdjBuK0hTVklVb1F1TlZrL0dXNHZoL3VVTlhw?=
 =?utf-8?B?ODMzMlRJbTkzYlgwRTV4Ny80RSt4Vzd4Q1UvVXVzTG1lRDNTcEd4NTdhZndi?=
 =?utf-8?B?OXBob2xINkVIQ0ZNbUx4WklQQy8xbHJqZG0xMHlzUC9RUE9aVy9BMnpnZElE?=
 =?utf-8?B?RHpiRHdGRlVIYVd4Ukh1SDUyc2w0K0hlUU9HWC9NMDdZK3luMmdVL2wxMDgv?=
 =?utf-8?B?OWZTL1dnZ0xMTzgvYytpanpDZG9PUm1CdE1OaktCcjZzOER2K0tkSEtOM1pS?=
 =?utf-8?B?RXBjQXpkMXBZQTNML3hJKzBuWm5FRmNaVFM5Nkx3YzROVlJzT01VWktPTlA4?=
 =?utf-8?B?aUZRNFRWdzUwTHptbzVJblVUZ3h1M1pyUEZzTXRXdHVpNDgzd1pHbmgyaHhO?=
 =?utf-8?B?MGw4UmZxMjdtaW5YQk56R2s5V2xoMkpSVVZVaVp3U0RORkk3SXZ4NTZyVE41?=
 =?utf-8?B?eTdrRzkvNWozcFRkUHliZlR2bXl4TUFEQmJWcU1WT0VyUUdXK2MzVkM0REVC?=
 =?utf-8?B?eDVoMlltOVIzcE13SUtHZCszazd3b3p2U0tieEpzamwzMlduWjQybEhOaldV?=
 =?utf-8?B?dVdVTXkyTDlWVTVieXBmL3N3blMzTU5takwxNjdSMjRROWlqZEhOOWVFM0RX?=
 =?utf-8?B?a05FcURjRnJPYnJlNVAxQlZOU3d1Z21xMm1YUTZlb1IyR2I3Tk9UbmUvbVhT?=
 =?utf-8?B?UjdhZmFyWHdab0JDNTc0MEZEcEwvTkk4Vmw2ZWZaMEo2ejRWL3owR2VTZXRF?=
 =?utf-8?B?bEZEdHU4MFVTRWo1dXBEMUdqTm84bUtCb3lQZXArK2tHbWJ5Y3Y1WFBiRWpr?=
 =?utf-8?B?a3JFc0R0QmpOVGZ4bFFGVDljYzNrSkVGODZJZjF5TWliZi91R01OZS9vdFlM?=
 =?utf-8?B?cmdacFV6TkF3SjdRVkRDUFRONFhiRlgwSFBLQnZURDFIdi92a29HQ3o2Vy9x?=
 =?utf-8?B?T1hpejRYNlJ3TDhVeEk4L2ljd2xMdW1tL0tvK0syaERvMktVdytDU3hWS0Yz?=
 =?utf-8?B?WW41QU11cHlORUdCT0tRWmU1cVlZMHFMbzluei9sTWdLTTJrSE5zR2JyZ01q?=
 =?utf-8?B?VFNkdk9LOGJCOEREU1JQengzeEY0V01tSGU0Ry9JSWhXUHdwRGhObjVjU2NX?=
 =?utf-8?B?MnNCZWp3elBoc0swaU41dnhIeFN3emZRMldaYjM5RlR4dTRZbDVuanVhU0pH?=
 =?utf-8?B?UnZ4ZGs3SzZSYUQyTWF6ZmNmb3hZZkNFcGFEcjU0N0FyTG8rM09VdXhEYnFk?=
 =?utf-8?B?cHJETDllaTMraDJxN3NWWGRRYzB5di9PNHhjVFVTNGZuS0VIMUtFeUhEZGdB?=
 =?utf-8?B?L2tUTjhLUWVzSkl2YVNHYk85Q1NzSVk4U1Z3aWEzL3ZraElPd0hSMUlpY0hi?=
 =?utf-8?B?UnBVSnFualN5RkMySW5URUNjRG1ldWM0b0dyQVJuSHRIRkRvbC9IYnkrOElW?=
 =?utf-8?B?UDVkNWdlUFJUNjJQdWZXRXdMK2xkRHdaTVZyaFVZSFEzOWVFS21RdHdpdGNY?=
 =?utf-8?Q?QEvIWJj1LcbYRLE5oOMzSaoLG?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d7bb33cd-9045-459e-2913-08db471aff45
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 12:29:08.2296
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: O53Gyw1qa38ZNtFQHG79A7td0u0CkHlvCioDGNeJqVYxDoUwYqhtrc4jROLbbVVoif1+vPN2NFGpc4j/VF2kMQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB7698

While I was the one to introduce it, I don't think it is correct: a
bogus continuation call issued by a tool stack domain may find another
continuation in progress. IOW we've been asserting caller-controlled
state (which is reachable only via a domctl), and the early (lock-less)
check in paging_domctl() helps only in a limited way.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -431,8 +431,8 @@ static int paging_log_dirty_op(struct do
               d->arch.paging.preempt.op != sc->op )
     {
         paging_unlock(d);
-        ASSERT(!resuming);
-        domain_unpause(d);
+        if ( !resuming )
+            domain_unpause(d);
         return -EBUSY;
     }
 


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 12:31:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 12:31:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.526988.819092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps0lv-0005IT-DY; Thu, 27 Apr 2023 12:30:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 526988.819092; Thu, 27 Apr 2023 12:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps0lv-0005IM-An; Thu, 27 Apr 2023 12:30:51 +0000
Received: by outflank-mailman (input) for mailman id 526988;
 Thu, 27 Apr 2023 12:30:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kqME=AS=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ps0lu-0005IG-7i
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 12:30:50 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20606.outbound.protection.outlook.com
 [2a01:111:f400:fe12::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 570e67e0-e4f7-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 14:30:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB7698.eurprd04.prod.outlook.com (2603:10a6:20b:282::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Thu, 27 Apr
 2023 12:30:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6319.033; Thu, 27 Apr 2023
 12:30:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 570e67e0-e4f7-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U2GUxjBLQNoYdJvKh8yU8WqSHbgtEprjKthpU4kWYbc0Vdex31787FKr+HEN+FS0UL8PmbsDCYz1w4kbLybJj00dWLojVaICh/1DAhjigkUWyD0HZecwgg3CvLGEX3UvFgfCsAhHwU70bU6T+m7EDrPcZUPG/1WWMp1u+fFv1619+CXsKmJT+zuiynFvbgK5Ng72BlJYfsFyqt1Xykzh8S6A/JyMcn+Nr08GGDcMufTc7wFngayLwMz9eRqfMojza8rEbIEvNzTd/16OpCytHJA4w7EsCksBymgsgJSbu1+U485O0aBiSSkkv1N4gtPrUIBcg5QX3PUw+GabL1UyGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TSCDGoMS8DxV99cjm4knNAU2old1EG2GkTc6sQ76v8M=;
 b=iBUAILYoepAVxZouI/7DytgeFwi/D8nW6HQOYXhspsCvdEACaXSbkqkn0H5gLYr0EracMoir99gB1cxD+jXioTp5FNCAR70vFO80Ql+OQ/9JqYxnjk9uEjeuYkLS6QA/At5KxWJkYBWl7afWtL3knPz+n8JUmd+wxZO6YwwKkv0KnXqJWRoQ899YZZqCt4gO1Iu8wxVdBcRKUsXYY/g/6CAhzWxFP3Cbq4kad9GnEASNNxeMM+jtkOOYyDvjQ1hgYB9uttm4W8QoDG6PBMOQkAukF2jsqHolPlPt+KUgFVwRpG9SpbFMlLm6Ml9RzuE4q1Z2/qsYf2QfFYpRbm8tQA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TSCDGoMS8DxV99cjm4knNAU2old1EG2GkTc6sQ76v8M=;
 b=e4sO/7+87GVi1yaL0Rz3OGqUOVx8oAo05be+x8OhM3RgN7hZkwL/SDX88tqbUoQlUmbwWmWq1OsqCmLLiZ+w3mmTs2WInDSg6E8LsBmPdYEhm8lCb+U9EAtRS071fE/JeEeqGlPjXgyRk2NGgWCDjRKnHREemh5GWgQbpD0eiFbPRQ7dQRvweHTI+1/bJXiJrqQKfnuATqaPF/dzHEo3mGRQEdzl9nZkutU3pRdpDd4Qc1XrLvNWdkZGQWUmxNWv1EHv7Yy5jI43oOJPPUGw1KqNDo1jQ3EmYcxZ0cLQtiGDm64fgrUufWJsVJq3/cUlSxdSVOpnFtUR8TOgqtUSbQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d6803514-7aa1-a927-57e7-299097470bb7@suse.com>
Date: Thu, 27 Apr 2023 14:30:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/mm: drop log-dirty-enable's log_global parameter
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FRYP281CA0017.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::27)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB7698:EE_
X-MS-Office365-Filtering-Correlation-Id: 1251d0c2-2ff2-424e-79fd-08db471b3a59
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	jneGr4KRieLA59UhpXlfTLdK2OkIMlN7BtV6b9qsgT94p4DLZyg1TACB/335hW6d0OOwRJ15Q7NRZEg//IWaACY4HIqqEYE3if4va+MfivbfcVbWUguhUFubes0U5cgAB0fBtCutzyHv4T0vU/Uiu7zsuL1xQMbBdNR/4s1MZBtQkN1asGMDlyvUBqglioF5soc5Dpf3OTMPRYCbiAKMyZHz8kZUSX+MpmqZ4e7KJhosMuYUBRaHqYWx4QSeH/r6yWmi8+IysAp67xib/mwl2qk7Vqkf1jeZQYSSu3ceqbMEFARKR8nQZNOLkFzQ1PB3/PqFO68uB/nfyWbFp+UNI6491XOcJfRZHBNArkjfhVWhjSzUpsFqd9p6x0/P++SG2KM0a5nRW48atG5T5mqRTRZYiZfoMinSPyNB2rt6w1VtHWiBnixe0MDfpUA0c3FMuMwvTpx6G2vj+2wh4+0iPMZ4glqjIE9dt1SPnrZLZlGDcvNkkfe1UpCYPUYGWDBeR9AyT8ZCBU2u1M+xNkV0PEbkDd7ffxave7Qh4KQ1Qr1hjPWjZOpXjrctu8c0hHQsHVySEH3eL7EDmQNgHLE3bq5XlzyePgcG08Td0h3NBGrPbZYqmEqL6NI+xXsOCvk2jgClWxcvmylr5kujNmjZmP/9fxlymVbSZk+f5BIdqcw=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(396003)(366004)(346002)(376002)(136003)(451199021)(6512007)(66946007)(66556008)(478600001)(6916009)(6486002)(4326008)(54906003)(8936002)(66476007)(41300700001)(186003)(38100700002)(8676002)(316002)(2616005)(26005)(6506007)(86362001)(31696002)(36756003)(2906002)(5660300002)(31686004)(14143004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SHAxRGtBQkZya3NMa3daT0Z5YzVBTW1iLy9uVzFjSXdGZFVlMDQzT1RtcTIy?=
 =?utf-8?B?WXBkU1dWcXhaVnkvZzVUMTRkcWtiZXJhUldoQWt1MWlXaXRyN0hxcTRUb291?=
 =?utf-8?B?aFlCWDVMS1I1Z21GTEFyeUhzZVZGZ1VONVlpSjFrMHdPMGlYSkdtUnA2YTFn?=
 =?utf-8?B?YTBkWERqZlRSSHVXdkdSVnEwSFdlNy85K0tWSmNsWXBEWldvZStEWTVFQWFT?=
 =?utf-8?B?ZWJneTExSFo0MFFSRUhuWkZQUC9wN2RhUzRGb3JYakUrS0xiTm1YR3VTVHNY?=
 =?utf-8?B?OFhsVlhHRXFtRGd0cDlCNlhlNUp6VW9aS1JXV1cxL2pwNDRTanEwNkNsYnZN?=
 =?utf-8?B?MzQralBmTDdBQlNDLzY5djhZWGxVdXo2cDB6eXRuTnMvcU9RWGRLdnhJanpr?=
 =?utf-8?B?YlAxSUFLcjBSSUthcGUrRGl6NTF2UkhhdnlGRFJFeWQ4SjhRMnY3U3RVUEJM?=
 =?utf-8?B?MlNmbTNVb2kyUFo3WjZLRlp3cHpaQlMxemhtVVhYUVNKeEZFRUNVQ1hWYllU?=
 =?utf-8?B?c0VnYkRqKzFhVTIwOW5FU1lKV1FEQWo2dFEvc1pSVmtocnJ2Q2NaenVEOTQ2?=
 =?utf-8?B?a0hjNldFNjVGNDZld2k1d29WaTc5L2dxUWFUY09EbDNRM2owUGFNYXB5d3BC?=
 =?utf-8?B?LzhhUjNFMFVSb1dFVGxha2hwYmI0WUFseEFtRE5WZ3Q0dHpmeUNpMFJMM3VE?=
 =?utf-8?B?dVYybnh2QnBQZWNGa0RuU1JCbnpRNisxVS9BOFhZeDFvRnFjWHcxUExOMitv?=
 =?utf-8?B?c1BHWjhxQ1hrS2d0NTJUb2N4V1BVdXJkaUJzUWR4N0l1dWMydTMvQVJUeWU5?=
 =?utf-8?B?LzdjM2lVWkdOdlMzY3NaeEdPemxSTFFEeVYvWkxLcmJXQVAyeFVoZ3Rvb0t0?=
 =?utf-8?B?cjZqY0FNSVpJUmdsYlNYRFh3L205VDBnamJ1aVE5d25hTkM5OGRUUm1najFZ?=
 =?utf-8?B?Ri9VUjNPUk5mUWdvY3gyaGNrVmx4YjdJVjN5REROV20vRzdadlRmODZOc1No?=
 =?utf-8?B?akNSaWxBajBXaXNud21jcjM2R2lDU01rWkt2Q0l5cWNDWU1USk5QMWVjMWZO?=
 =?utf-8?B?QjdsK3BSQkdwMFZnL0FDS3ZYNVR3aC9tWUFOdEZ1QkxXT0hEVnllYWV1YkVT?=
 =?utf-8?B?MXhoYkVUbXBSTVl2c0EvNFcxOW1DcDNVU21rc3J4c1h6cXUyL0R4ZVZrandO?=
 =?utf-8?B?YTRGaVIrUDJ2U1NlL0dyRVVhdHlBME50R1NVYUIrNFllcEx4MEVyMUlremJX?=
 =?utf-8?B?MU8wVjNna3R0TDV1QU5IWHBXY29sNGtxN1FXTWdWSE52WEtRUi8xNEdocXU2?=
 =?utf-8?B?ekZzcHhuZWl5QkMzb1lMdVJQRzRFb2VxQWt6RGIxVmwzVlhNSnBBNFN6Z3Rp?=
 =?utf-8?B?bDJwbGRURTUxa2h3N09PaDNyRnhRRG83VVBhcksyWkpZNTgwd1BRYnlwaDMy?=
 =?utf-8?B?bndjZ2ZFK0dCbG9IMmlId0xxYTRaajUrN096MzlySXJKUFRFZ2l4NVBqQWhl?=
 =?utf-8?B?cW8zWW9aejJKRm9UYkd6QVdLK0I0VXN4ZzZPOVlqdGROa251dExVbHdLTHJJ?=
 =?utf-8?B?MFhIa2pJWERYMWNqRUlzLzdvK3dhY2hET2NrSVlPSHU3cHlTdy9mY2FuUjln?=
 =?utf-8?B?VVRrZCtXKzZ2NDY2aCs0bXdOZzZDYVBKR0ZjNDVtUHdYSEFmcTBXdjZTL1RW?=
 =?utf-8?B?YmRQTVlFZmZKTEpTODYzSXY3azUwQWxxdlpGbWM5emI0akhmcGxHZkRBaHhT?=
 =?utf-8?B?cHJhKzJBdjhLTWluaGYxSXNpL0x2M3krNzRPZzVFQjJKc1hOa1gwVU9tQWMw?=
 =?utf-8?B?R2p0TTV1RmFSMlFEcFdrbWVtQ2FNdVZpcldCZW5rY2ovclRFbFAxNTRCaThu?=
 =?utf-8?B?S3k5Mm01cE9KL2hOY0VQenJRSUh6QkpXZEt4MDEyR0lwTytHUzQ4SUg1dHdY?=
 =?utf-8?B?M1B2akRZMW9wS2lGR1ZHaUMrK0k4VjlnV2U2MmlTbkpTMGJqYW9uVnN2VU5l?=
 =?utf-8?B?RUtqUmF6MHUxdjJWMGEzWEZnZi84VVdRc3lxMGZSWmpGb3RjMHY4enlUcEQ3?=
 =?utf-8?B?clFKTnc4Q2daTFM5M2Z1cUdsLzJnOFdjd0ZjR25UZVpTdkFxYU5qdnArdmJF?=
 =?utf-8?Q?ETtel68mYqULjMHJtbZhrZxmk?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1251d0c2-2ff2-424e-79fd-08db471b3a59
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 12:30:47.3111
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XF2vTIVbsKZSiPnLqferFOnouzILiYmERqEd9S/JqagFRQ8EhBKCCo8FO9ru7nlUXf41DUPZIP4EVCGkKuIXSA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB7698

As of XSA-397 the only caller passes true for it. Simplify things by
getting rid of the parameter for both the internal paging function and
the involved hook.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -186,7 +186,7 @@ struct log_dirty_domain {
 
     /* functions which are paging mode specific */
     const struct log_dirty_ops {
-        int        (*enable  )(struct domain *d, bool log_global);
+        int        (*enable  )(struct domain *d);
         int        (*disable )(struct domain *d);
         void       (*clean   )(struct domain *d);
     } *ops;
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -164,10 +164,10 @@ out:
 /*
  * hap code to call when log_dirty is enable. return 0 if no problem found.
  *
- * NB: Domain that having device assigned should not set log_global. Because
+ * NB: Domains having a device assigned should not come here, because
  * there is no way to track the memory updating from device.
  */
-static int cf_check hap_enable_log_dirty(struct domain *d, bool log_global)
+static int cf_check hap_enable_log_dirty(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
@@ -175,7 +175,7 @@ static int cf_check hap_enable_log_dirty
      * Refuse to turn on global log-dirty mode if
      * there are outstanding p2m_ioreq_server pages.
      */
-    if ( log_global && read_atomic(&p2m->ioreq.entry_count) )
+    if ( read_atomic(&p2m->ioreq.entry_count) )
         return -EBUSY;
 
     /* turn on PG_log_dirty bit in paging mode */
@@ -186,15 +186,13 @@ static int cf_check hap_enable_log_dirty
     /* Enable hardware-assisted log-dirty if it is supported. */
     p2m_enable_hardware_log_dirty(d);
 
-    if ( log_global )
-    {
-        /*
-         * Switch to log dirty mode, either by setting l1e entries of P2M table
-         * to be read-only, or via hardware-assisted log-dirty.
-         */
-        p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
-        guest_flush_tlb_mask(d, d->dirty_cpumask);
-    }
+    /*
+     * Switch to log dirty mode, either by setting l1e entries of P2M table
+     * to be read-only, or via hardware-assisted log-dirty.
+     */
+    p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
+    guest_flush_tlb_mask(d, d->dirty_cpumask);
+
     return 0;
 }
 
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -201,11 +201,11 @@ static int paging_free_log_dirty_bitmap(
     return rc;
 }
 
-static int paging_log_dirty_enable(struct domain *d, bool log_global)
+static int paging_log_dirty_enable(struct domain *d)
 {
     int ret;
 
-    if ( has_arch_pdevs(d) && log_global )
+    if ( has_arch_pdevs(d) )
     {
         /*
          * Refuse to turn on global log-dirty mode
@@ -218,7 +218,7 @@ static int paging_log_dirty_enable(struc
         return -EINVAL;
 
     domain_pause(d);
-    ret = d->arch.paging.log_dirty.ops->enable(d, log_global);
+    ret = d->arch.paging.log_dirty.ops->enable(d);
     domain_unpause(d);
 
     return ret;
@@ -728,7 +728,7 @@ int paging_domctl(struct domain *d, stru
             break;
         /* Else fall through... */
     case XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY:
-        return paging_log_dirty_enable(d, true);
+        return paging_log_dirty_enable(d);
 
     case XEN_DOMCTL_SHADOW_OP_OFF:
         if ( (rc = paging_log_dirty_disable(d, resuming)) != 0 )
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -67,7 +67,7 @@ const uint8_t sh_type_to_size[] = {
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
 
-static int cf_check sh_enable_log_dirty(struct domain *, bool log_global);
+static int cf_check sh_enable_log_dirty(struct domain *);
 static int cf_check sh_disable_log_dirty(struct domain *);
 static void cf_check sh_clean_dirty_bitmap(struct domain *);
 
@@ -3030,7 +3030,7 @@ static int shadow_test_disable(struct do
 /* Shadow specific code which is called in paging_log_dirty_enable().
  * Return 0 if no problem found.
  */
-static int cf_check sh_enable_log_dirty(struct domain *d, bool log_global)
+static int cf_check sh_enable_log_dirty(struct domain *d)
 {
     int ret;
 
--- a/xen/arch/x86/mm/shadow/none.c
+++ b/xen/arch/x86/mm/shadow/none.c
@@ -1,13 +1,7 @@
 #include <xen/mm.h>
 #include <asm/shadow.h>
 
-static int cf_check _enable_log_dirty(struct domain *d, bool log_global)
-{
-    ASSERT(is_pv_domain(d));
-    return -EOPNOTSUPP;
-}
-
-static int cf_check _disable_log_dirty(struct domain *d)
+static int cf_check _toggle_log_dirty(struct domain *d)
 {
     ASSERT(is_pv_domain(d));
     return -EOPNOTSUPP;
@@ -27,8 +21,8 @@ int shadow_domain_init(struct domain *d)
 {
     /* For HVM set up pointers for safety, then fail. */
     static const struct log_dirty_ops sh_none_ops = {
-        .enable  = _enable_log_dirty,
-        .disable = _disable_log_dirty,
+        .enable  = _toggle_log_dirty,
+        .disable = _toggle_log_dirty,
         .clean   = _clean_dirty_bitmap,
     };
 


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 12:35:39 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <91d37778-ec77-6716-61e9-d47b0517508a@citrix.com>
Date: Thu, 27 Apr 2023 13:35:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 6/7] tools: Use new xc function for some xc_domain_getinfo
 calls
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Juergen Gross <jgross@suse.com>
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <20230426145932.3340-7-alejandro.vallejo@cloud.com>
In-Reply-To: <20230426145932.3340-7-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26/04/2023 3:59 pm, Alejandro Vallejo wrote:
> diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
> index bd16a87e48..6d260d2cff 100644
> --- a/tools/libs/guest/xg_cpuid_x86.c
> +++ b/tools/libs/guest/xg_cpuid_x86.c
> @@ -281,7 +281,8 @@ static int xc_cpuid_xend_policy(
>      xc_interface *xch, uint32_t domid, const struct xc_xend_cpuid *xend)
>  {
>      int rc;
> -    xc_dominfo_t di;
> +    bool is_hvm;

I know it makes a slightly larger diff, but simply "bool hvm" is what we
use more commonly elsewhere.

> +    xc_domaininfo_t di;
>      unsigned int nr_leaves, nr_msrs;
>      uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
>      /*
> @@ -291,13 +292,13 @@ static int xc_cpuid_xend_policy(
>      xen_cpuid_leaf_t *host = NULL, *def = NULL, *cur = NULL;
>      unsigned int nr_host, nr_def, nr_cur;
>  
> -    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
> -         di.domid != domid )
> +    if ( xc_domain_getinfo_single(xch, domid, &di) < 0 )
>      {
>          ERROR("Failed to obtain d%d info", domid);
>          rc = -ESRCH;

Now that xc_domain_getinfo_single() has a sane return value, you want to
set it in the if(), and not override to ESRCH here.

These two comments repeat several other times.

> diff --git a/tools/libs/guest/xg_dom_boot.c b/tools/libs/guest/xg_dom_boot.c
> index 263a3f4c85..59b4d641c9 100644
> --- a/tools/libs/guest/xg_dom_boot.c
> +++ b/tools/libs/guest/xg_dom_boot.c
> @@ -164,7 +164,7 @@ void *xc_dom_boot_domU_map(struct xc_dom_image *dom, xen_pfn_t pfn,
>  
>  int xc_dom_boot_image(struct xc_dom_image *dom)
>  {
> -    xc_dominfo_t info;
> +    xc_domaininfo_t info;
>      int rc;
>  
>      DOMPRINTF_CALLED(dom->xch);
> @@ -174,21 +174,13 @@ int xc_dom_boot_image(struct xc_dom_image *dom)
>          return rc;
>  
>      /* collect some info */
> -    rc = xc_domain_getinfo(dom->xch, dom->guest_domid, 1, &info);
> +    rc = xc_domain_getinfo_single(dom->xch, dom->guest_domid, &info);
>      if ( rc < 0 )
>      {
>          xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
>                       "%s: getdomaininfo failed (rc=%d)", __FUNCTION__, rc);
>          return rc;

This needs to change to -1, or you've broken the error convention of
this function.  (Yes, libxc is a giant mess of error handling...)

> diff --git a/tools/libs/guest/xg_resume.c b/tools/libs/guest/xg_resume.c
> index 77e2451a3c..60d682c746 100644
> --- a/tools/libs/guest/xg_resume.c
> +++ b/tools/libs/guest/xg_resume.c
> @@ -26,28 +26,27 @@
>  static int modify_returncode(xc_interface *xch, uint32_t domid)
>  {
>      vcpu_guest_context_any_t ctxt;
> -    xc_dominfo_t info;
> +    xc_domaininfo_t info;
>      xen_capabilities_info_t caps;
>      struct domain_info_context _dinfo = {};
>      struct domain_info_context *dinfo = &_dinfo;
>      int rc;
>  
> -    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
> -         info.domid != domid )
> +    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
>      {
>          PERROR("Could not get domain info");
>          return -1;
>      }
>  
> -    if ( !info.shutdown || (info.shutdown_reason != SHUTDOWN_suspend) )
> +    if ( !dominfo_shutdown_with(&info, SHUTDOWN_suspend))

Needs a space before the closing bracket.

> diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
> index 7314a24cf9..a03183f4b9 100644
> --- a/tools/libs/guest/xg_sr_restore.c
> +++ b/tools/libs/guest/xg_sr_restore.c
> @@ -887,20 +888,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>          break;
>      }
>  
> -    if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )
> +    if ( xc_domain_getinfo_single(xch, dom, &ctx.dominfo) < 0 )
>      {
>          PERROR("Failed to get domain info");
>          return -1;
>      }
>  
> -    if ( ctx.dominfo.domid != dom )
> -    {
> -        ERROR("Domain %u does not exist", dom);
> -        return -1;
> -    }
> -
> +    is_hvm = !!(ctx.dominfo.flags & XEN_DOMINF_hvm_guest);

No need for !! now that you've switched this to bool.

> diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
> index 9853d8d846..8fc8e9d3b2 100644
> --- a/tools/libs/guest/xg_sr_save.c
> +++ b/tools/libs/guest/xg_sr_save.c
> @@ -996,17 +994,13 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
>      ctx.save.debug = !!(flags & XCFLAGS_DEBUG);
>      ctx.save.recv_fd = recv_fd;
>  
> -    if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )
> +    if ( xc_domain_getinfo_single(xch, dom, &ctx.dominfo) < 0 )
>      {
>          PERROR("Failed to get domain info");
>          return -1;
>      }
>  
> -    if ( ctx.dominfo.domid != dom )
> -    {
> -        ERROR("Domain %u does not exist", dom);
> -        return -1;
> -    }
> +    is_hvm = !!(ctx.dominfo.flags & XEN_DOMINF_hvm_guest);

Same here.  Can drop the !!.

> diff --git a/tools/misc/xen-lowmemd.c b/tools/misc/xen-lowmemd.c
> index a3a2741242..b483f63fdc 100644
> --- a/tools/misc/xen-lowmemd.c
> +++ b/tools/misc/xen-lowmemd.c
> @@ -38,7 +38,7 @@ void cleanup(void)
>  #define BUFSZ 512
>  void handle_low_mem(void)
>  {
> -    xc_dominfo_t  dom0_info;
> +    xc_domaininfo_t  dom0_info;

Use this opportunity to remove the double space.

> diff --git a/tools/vchan/vchan-socket-proxy.c b/tools/vchan/vchan-socket-proxy.c
> index e1d959c6d1..9c4c336b03 100644
> --- a/tools/vchan/vchan-socket-proxy.c
> +++ b/tools/vchan/vchan-socket-proxy.c
> @@ -254,12 +254,12 @@ static struct libxenvchan *connect_vchan(int domid, const char *path) {
>          if (ctrl)
>              break;
>  
> -        ret = xc_domain_getinfo(xc, domid, 1, &dominfo);
> +        ret = xc_domain_getinfo_single(xc, domid, &dominfo);
>          /* break the loop if domain is definitely not there anymore, but
>           * continue if it is or the call failed (like EPERM) */
>          if (ret == -1 && errno == ESRCH)

Oh wow... so this bit of vchan was written expecting sane semantics out
of xc_domain_getinfo() in the first place...

This needs adjusting too because of the -1/errno -> -errno change.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 12:37:50 2023
Message-ID: <1f827054-13f7-02f2-032c-c7a1c83b2a90@citrix.com>
Date: Thu, 27 Apr 2023 13:37:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/mm: drop log-dirty-enable's log_global parameter
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Tim Deegan <tim@xen.org>
References: <d6803514-7aa1-a927-57e7-299097470bb7@suse.com>
In-Reply-To: <d6803514-7aa1-a927-57e7-299097470bb7@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4fbec900-c8e4-471a-59ff-08db471c248b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 12:37:20.4029
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0ibHzPV30+WRgZfYt1UUYitctTbJgtg1lgXlyCDkEMktcGWCWfIlG1JKFVpf69WEBuUzEF9FKOMuo6MdXhSntNah8wxlp+RocTx1c/Yoo5Q=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5855

On 27/04/2023 1:30 pm, Jan Beulich wrote:
> As of XSA-397 the only caller passes true for it. Simplify things by
> getting rid of the parameter for both the internal paging function and
> the involved hook.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 12:49:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 12:49:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527002.819122 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps13h-00085l-GI; Thu, 27 Apr 2023 12:49:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527002.819122; Thu, 27 Apr 2023 12:49:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps13h-00085e-D9; Thu, 27 Apr 2023 12:49:13 +0000
Received: by outflank-mailman (input) for mailman id 527002;
 Thu, 27 Apr 2023 12:49:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LZ9G=AS=citrix.com=prvs=474adc217=jennifer.herbert@srs-se1.protection.inumbo.net>)
 id 1ps13g-00085Y-EX
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 12:49:12 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e62e7cb3-e4f9-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 14:49:11 +0200 (CEST)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Apr 2023 08:49:06 -0400
Received: from DS7PR03MB5414.namprd03.prod.outlook.com (2603:10b6:5:2c2::6) by
 MW4PR03MB6524.namprd03.prod.outlook.com (2603:10b6:303:127::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Thu, 27 Apr
 2023 12:49:01 +0000
Received: from DS7PR03MB5414.namprd03.prod.outlook.com
 ([fe80::fdfd:97e5:7c55:82d]) by DS7PR03MB5414.namprd03.prod.outlook.com
 ([fe80::fdfd:97e5:7c55:82d%6]) with mapi id 15.20.6340.022; Thu, 27 Apr 2023
 12:49:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e62e7cb3-e4f9-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682599751;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=MowOelL4WY3AnOtRtFAPyjhQBaxNSL96IxypAZPYVLo=;
  b=RcHJnkYBW1r9h786s4HLs5hfI6NZdFdHXcuZelgfseIgNrbh9oouwOqD
   znIVTUFf4jgbYS/gGZA3JQQQgBsboVc4EE1i8DTz9J4ClHlCNmS3Uf7OY
   SpdoKO17GM65gx8HynUvOVk7pgcJHoFBuc6Bn3fw+M+QtfQa6D6BkjQFZ
   U=;
X-IronPort-RemoteIP: 104.47.58.169
X-IronPort-MID: 107480919
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:R0fmqK1vc0KKiLH4R/bD5Rhwkn2cJEfYwER7XKvMYLTBsI5bpzYCy
 zYaUGzQPvqKamT1fY8iOdyxo0MGv5WGmN83HQNqpC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS+XuDgNyo4GlD5gBkOqgR1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfUDsW0
 9EHMCg0QxWOi/jpmb60EPFRmZF2RCXrFNt3VnBI6xj8VK9ja7aTBqLA6JlfwSs6gd1IEbDGf
 c0FZDFzbRPGJRpSJlMQD5F4l+Ct7pX9W2QA9BTJ+uxqsy6Kkl0ZPLvFabI5fvSISMNTn0iVv
 CTG8n7zDwsGHNee1SCE4jSngeqncSbTAdpNSO3gp6c76LGV7jQsSzkGaGG5nfWorxODXd1Gd
 HwL/yV7+MDe82TuFLERRSaQoneCsgQNRtl4HOgz6QXLwa3Riy6bC24CTzBMcpomudU8SCY2/
 lSIg8n5QzdotdW9WX+bs7uZsz62ESwUNnMZIz8JSxMf5Nvuq511iQjAJv5hGqOoitz+GRnr3
 iuH6iM5gt0uYdUj0qy6+RXNhWKqr52QFwotvFyJDySi8x9zY5Oja8qw81/H4P1cLYGfCF6co
 HwDnMvY5+cLZX2QqBGwrCw2NOnBz5643Pf02DaDw7FJG+yRxkOe
IronPort-HdrOrdr: A9a23:LXssXqCp9hm80YXlHem955DYdb4zR+YMi2TDtnoddfUxSKfzqy
 nApoV56faKskdyZJhNo7690cq7LU80l6QU3WB5B97LYOCMggSVxe9ZjLcKygeQfhHDyg==
X-Talos-CUID: 9a23:VJE8i2CTskMCh176E3g9rGg5Bdg9S1DYwXvKfBa0BjxxdaLAHA==
X-Talos-MUID: 9a23:srDOTQQF5kBj2DbQRXS0u3JGJPpS4ZiiGWVQrJMJopa0Kil/bmI=
X-IronPort-AV: E=Sophos;i="5.99,230,1677560400"; 
   d="scan'208";a="107480919"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oIWum3x6Mp+b/9/kZCaK9Kccn7Eyx/p7DC8O5hOx9rdhSuLMvdqJz79pgb31+qm8ec5efNBlfOq3isNfGXdri9j3B33CkhHBMGj8m93wEZaJFV+nocXQvT+GDNSlZF0tk7wzbYBtdhnGUi/4tOW+aR/bPmtW9odnWeDGbyBt+rJ4qVoQ+FoA8CaiCfDAn81vzwmUdXyneBdFdPqPlBwpNcCQxKzlE+cEYFmCQf8gnVpR/bnCudOPgM72V+WkUZr9qeMkQG0b8F0Z1N6BWYhZbZgv09IrUoUtsNtHFFZMUyv3q7N1leGNWlIQgvAxN0u7euk3alRBusHsa6RXNpadgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hWKRVpndLVHinVcMqV1UFuUnAHGkarNRH1V6BXiQHHY=;
 b=Lf0oG7BpSaxXsSSaRCdQABq+VpHZgT6H6Aud6lHrFc6vflGKQhZWXgbDlI84s5lOGVhqD9cXWYisqe8z8aWVPdhi3aZZqMyKaCSqfX5X+ak3Wl8z0Cr1YJLJpXQuYi0v1kZTQn+s2oAP3L4xZb3E0/safIzfKUQF44H1uGBULvcmtTEZ/n/YtJDjK+0mukBVrpU5G3ONyVWUQuirQxxq95NfuvSRCr0AqB0SOC1ka1UUvdW1IJfbsSPS7qaZosUJjBsCXbEfdClsChpwNAfqMRq/rl9TqQME5/dMwDvCLnHSvIy87UN7oC1txkyovKGjp5U3jYpd484784u6F/5qhg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hWKRVpndLVHinVcMqV1UFuUnAHGkarNRH1V6BXiQHHY=;
 b=PcNjp9kK2nEfOUNRl6h5dwmXmHM/eHaxwz2G1FdKjWwK/K6HFbKgLppVH7uuxju1wAJBIxs6dj5Y6O7a/GiXbTrg+XhCgyfmaYZoiykDwX8RaRrSm3wPucMoTbzxj+auJMwbJoV/CqfPQUoU1LJmoS5qU7J7o8CvQ8DM2MWbUek=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <573ebf81-5565-b127-61c1-fc08854c9064@citrix.com>
Date: Thu, 27 Apr 2023 13:48:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 1/2] acpi: Make TPM version configurable.
To: Jason Andryuk <jandryuk@gmail.com>
Cc: jennifer.herbert@citrx.com, Xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20230425174733.795961-1-jennifer.herbert@citrix.com>
 <20230425174733.795961-2-jennifer.herbert@citrix.com>
 <CAKf6xpsbaZMMFCW3Uw0XZ2gm185iwwtT2H+RcAReFrze9UWdAw@mail.gmail.com>
From: Jennifer Herbert <jennifer.herbert@citrix.com>
In-Reply-To: <CAKf6xpsbaZMMFCW3Uw0XZ2gm185iwwtT2H+RcAReFrze9UWdAw@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0251.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8a::23) To DS7PR03MB5414.namprd03.prod.outlook.com
 (2603:10b6:5:2c2::6)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS7PR03MB5414:EE_|MW4PR03MB6524:EE_
X-MS-Office365-Filtering-Correlation-Id: bf0c0187-ad5b-49d6-fdf4-08db471dc5f6
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	d8ZN2rUPNbMLfj8B7h9BftVGi3n+zl62cP1erHv58oZUg4Zx0JfL0XzeuyJ2W8kUFEgu6iOrG4GXpi26+S+RB8ZI98NMm/o6XyT+iZ2eY7orr7q0qYmzsy0g/z+1BWnZP5SwSXeE8SF/XsHaqhXspz6DbSKRCZ5z9gVPaKw9ZcF02qgHkj7k8HFNqRn/OFpU7Cm9UbfoKPSOz1S238B5Tji2a1QwxCzgZ9PMB69LGyk7WgRqZdI+K/SvnxX5v6bOBD149bycHPuMRkH7J//3H6+BqdsoDL8mkPvZVvCh5dYQpv7aYRxCp3NanyO5UmjRKpqiCyfFeMICmiZe2RFwKN4cs4ImStnGRvk3RfBjrwCR8V/ezqwayICUBOJd/dFS8ue4NLFRgF5/lLWIg1aqV+GSxjzWvykFH3R+WxB+bmMHbamPH/vksniP5TPwizX6N6CEavzpGZBwO6ClNEe23Z4PCrj1ZcXeBKCeVxl3sneFRtTi4DhPUkHcwUZFuXnFegiaQkU9c5rsD3xHjzZW1CV7VJMpJKSw8D44uB4zBLnkKhAD4ULDcqxVoVIAPRQcSAsHTSf6H3evrSdNkZy4qILC4FXXjlJtb4Gpe6LLOmWENchdnBEM7V02hE1ETYnxA/Her9M9zU4a7lvaHzhDgA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5414.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(366004)(376002)(346002)(396003)(39860400002)(451199021)(6666004)(478600001)(6486002)(6916009)(66556008)(316002)(66946007)(54906003)(66476007)(4326008)(107886003)(2616005)(53546011)(186003)(36756003)(26005)(6512007)(6506007)(2906002)(41300700001)(8936002)(5660300002)(8676002)(44832011)(38100700002)(31686004)(31696002)(82960400001)(86362001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RS8vSVAzSnNiU1hMWEl5bHVLN09KMytDdzFDNzFNanRjb3F3ams0UWRtSVpC?=
 =?utf-8?B?Q3JOTmwwK2pSVmVWeS8wQjVZc0FWZXJ6MVdZcnI1K2JwUlRaNWVuUmR6T0Iy?=
 =?utf-8?B?NGc3S0R1WU56OUY5LzdPdC9icmdMWmtSUmdma0tGb1ZrTitBQk1FMHV5UEZX?=
 =?utf-8?B?VkRrTStrR3p4cVdMZUVPZWN2NFIzUlBaSW9OdnRhcndtODQ5NnpjWXlmM3U1?=
 =?utf-8?B?UGtlVDFMZHNXZ0grZlloQ2xSTnBKWmUvempTcmtkTG1yQWNzQlNSMjFtaGFK?=
 =?utf-8?B?Q0tERjQvNEpVcS9odjV6ZU4rYWo3WGV6cWRzdTFLZEtLcXNQVGNmUmdGRWtY?=
 =?utf-8?B?MVg5WHVqSng5b2lHbUdBbW40czJ5M2JRU3NLOS9kUTYvc3RFM2ZNK1QyL3R3?=
 =?utf-8?B?VThpUEhZWWFlV3JPdkg4dm1zRWdhQ3N4emhETWhKc2JQSTBTbjcvZEttZ2pS?=
 =?utf-8?B?MXRyL2lkVmNsTzFKb3lmZmQ0WHp2ZmJxZXlmZDdZQXFHd0FHQXZBYXFUSlUy?=
 =?utf-8?B?V24xTjlxMHpEaUt3d0hrb1JPM0RRVmVLdStVWTY5d0RuUkhMaTl3ekpUYnAz?=
 =?utf-8?B?S1o4VTdpclBrREhSQkJkNlFqY0p0ODR4N1JJb3F4YWRVVmRhdDZMaEtVMkdy?=
 =?utf-8?B?RkZKWWlWNmt1SC9yYjdLdDZ3RmpaUnhBQ3pkVEVvdVA0SHFzd3hUdDJrdldj?=
 =?utf-8?B?ZlZ1OS9NZW5DbVAyaXRUVHczZlJCSjMxd1FxS29qY3cwMDRjUlhnZmdPaE4x?=
 =?utf-8?B?eTRlcGNMR29iaHNmbXRGc0RHNnBITDRadmorL2hMNWhHeXRCcldWeXVFQlJt?=
 =?utf-8?B?c3hLT3Zsa21ZQlNwSERUNDU3YVdJMTBndEIwZWxBc01VUi9kZW5RalhZUTl2?=
 =?utf-8?B?TTN2VXlrY2hIcEU0WXhlSFNkNDZpZitwL2xmbUdYWUNGRFVLbnE4Tnkxbjl6?=
 =?utf-8?B?dTBONHhURmJnaHVNWnZUS2szYTMvUTFwL0lZRDhzc2VZU0s5RlhieC83WGkx?=
 =?utf-8?B?c21uNFRJVysweVN3TXY4SVRFNnE3bEhDQXFJZHp4RzFwbEtzbTdvL05VSXJs?=
 =?utf-8?B?UEFsZ3VOWHBhc2gyN29HWlUvb3puMHVJcEhjWFV3Y3hsRmJ2Znp4R2FBamxl?=
 =?utf-8?B?UnZmTnJnU1ZVdFlicTRZVUpKUEw5VjdSeXBoOUVjalEzS0NOZmZtZVFVdmhj?=
 =?utf-8?B?R3p6K3ZYK3JROTVxa3R5WE5DS1F4NE95cHYwRXVWeFlDWnhrVEVXWkVINXdT?=
 =?utf-8?B?T1BWRlpxL3BVMWQrelphUjlEcXdFVzlDNkdWZFpOOE9WN1d1UGprd1ZMdGZp?=
 =?utf-8?B?czlXc2YxWnJXZDhmSUM4clJaRlVhRnhMMlk1azExbDJqdC85SzVRdE5TK3hp?=
 =?utf-8?B?WUVwbkhBTE45RC84MFkwQ04zY291bkg1VTlEdVl6NDhpWUp3d2pXRlFIN3h4?=
 =?utf-8?B?d1dZR2ZaZ1pRTGhNMVFxaGd5ZGUzT3JheUd6dmlDZEY1WGU5TXljbE8yVzFW?=
 =?utf-8?B?ejdScFV3WTJlbHRlYjJkRXVzWkg2bW1abkdpNWpoeFVYY3BvNU45TmNRT1Ru?=
 =?utf-8?B?SzBZOUYxM0wya2lWOWFZdDhXVzhaU0ZBUmlpSVVvUnNIUVozWmE2dEI1NDhB?=
 =?utf-8?B?Q3dHR0RDbTdzSFZCamxpRlVDQnVtb2Y5SG5wZzBKVlBtV29nWmZTS2dpTEN2?=
 =?utf-8?B?cjBLUnlUcE5IWTJrb3FrL1Z1a0orU0xwK2ZLM1FxWHpKVFpybzhnRVR2V3Q2?=
 =?utf-8?B?OHE1Zi95NElTcUZDUm4ydTJ3QmZKcld3d0Z2dVVQeE54ZWltR3JvQUd3cndX?=
 =?utf-8?B?akp5RWJ4L2ZwWFB6RDRKV2dvQWw1N1JDVzVweHVwaDVTK1ZoU0NGOUFaTEtr?=
 =?utf-8?B?WUtNOXJMa081aTN4RFF1R2hMYW5Sc2Fxbk1XTjhhKzFVVkZSU2syejNZSnhZ?=
 =?utf-8?B?OEhjTDhqSFA3cS9GZlBFWjV1YWZzRE9jNFVjUEZDM0gxa0gvbVJ0dVlzS0Fa?=
 =?utf-8?B?U3JBWU16ZmFyNG9jR1pzSGZ3RGw0RnFMZm8vMkVjUFFIckpRaGRsM001UmFa?=
 =?utf-8?B?WjJRSmlCMHRTQUszM05LSHRmQWJsWkRHY3dhTFN0WE5OMFNWQmxWZUgyWm1i?=
 =?utf-8?B?YTl3cW9wU3F3QnpWa3RSbWw2ZzhkWlpQU21RQ0RpZHlMS0dBeW43YXNsVmxy?=
 =?utf-8?B?RlE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	FeD9rghYNlPg+Pq7zuK+br30K/dZ0UvwpsYydKlxT3MWZOl48uxYB9SohpZ7MqV+iN/j17p0+ia4r/tMrfLCtaWWUQav/4sH91kn/a2shcNMwmPucNF1c9HCdyfzUk0TIsPhimvX7fVYgv1KBAk+aBdC5zTv+WZ5c7nGX2jJfLtPrWuTn0OrkxUElMZmrQtiLeoVJVbPFle9ZdOsG4qDeJ4adulsrmIij2BWDLdMTEKaQ5CIRdNnkDwWdkB7zV9DA1wruMpczFLVLYe1EqXnujv5WPzENv4FRK1qQyaoeMOyyvMoYaQ+OtKSit//qM4R8nAV9mEI2+5JjGX5F2h5aoQ+Q/eCsFdNhDNn+XKenXPiPPLzolOKuJqJ1h5xqhp3VLbymk2fMvFZLm86TpGJ9qIx14QacIYMABUjau/lZ77EUHJXFOfFk7rbPKZjlVPAidnq/LPvPMK53R6dFgu0jeuoW/q3Fn2xfzZyWxfQJrnYqaVcDdMBgO5ib5H1j8kGckjU4nEtBl9RtkHYnjm2IvCs2QQiUOXnEmzXomzkSMMkHbn0PxcVQ5EjKybxjWJDWnTqQykD+0q31Ae1dWzYOorXlVXE4WpnQ5zuZNR0Q1jI+rLDvZX3waKnIziyC9NyBUson1SrgNgEOnLSvEz8sHow/X6pqmUep50A9qBuxknBRinjCvAWV1W0tBtaP8+1jm+qArOcxxGbdwlv7jFj6zzFCaTESame2mC8hR5SOHwBI3MLY+HzbmHzEBOlfDx59siGltmKGgLf7o0tQz6urXRpLrRN6tgk2NnlP6P1OFlCVmt/3EBCWOfCauDB/qi76iT8hCZrVGNWhi8u37ZWVd11Rgg9ndvKWyqM28+HsBo=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bf0c0187-ad5b-49d6-fdf4-08db471dc5f6
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5414.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 12:49:00.5490
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: quXx1z2fdz7yU9E52ODtVIzZOM423nKATfv0MxDRPU4P/yGlLSbVtOj9DsQIyweF0VPd0AndV1aPQw+5ml5xyGaFomaw/pKtRGl97ccrN5g=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6524


On 26/04/2023 21:27, Jason Andryuk wrote:
> Hi, Jennifer,
>
> On Tue, Apr 25, 2023 at 1:48 PM Jennifer Herbert
> <jennifer.herbert@citrix.com> wrote:
>> This patch makes the TPM version, for which the ACPI library probes, configurable.
>> If acpi_config.tpm_version is set to 1, it indicates that 1.2 (TCPA) should be probed.
>> I have also added to hvmloader an option to allow setting this new config, which can
>> be triggered by setting the platform/tpm_version xenstore key.
>>
>> Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
> ...
>> --- a/tools/libacpi/build.c
>> +++ b/tools/libacpi/build.c
>> @@ -409,38 +409,47 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
> ...
>> +        switch ( config->tpm_version )
>>           {
>> -            tcpa->lasa = ctxt->mem_ops.v2p(ctxt, lasa);
>> -            tcpa->laml = ACPI_2_0_TCPA_LAML_SIZE;
>> -            memset(lasa, 0, tcpa->laml);
>> -            set_checksum(tcpa,
>> -                         offsetof(struct acpi_header, checksum),
>> -                         tcpa->header.length);
>> +        case 0: /* Assume legacy code wanted tpm 1.2 */
> This shouldn't be reached, since tpm_version == 0 won't have
> ACPI_HAS_TPM set.  Still, do you want to make it a break or drop the
> case to avoid falling through to the TPM1.2 code?

So there were concerns in v2 about backward compatibility in this
area of code.  The exact nature of the concern was vague, so I made this
very conservative.  This was intended to mitigate against this code
being run with the structure having been set up by something other
than the code in tools/firmware/hvmloader/util.c.  Any such alternate
code would set the tpm flag, but not know about the version field, and
so leave it at zero.  In this case, dropping into 1.2 probing would be
the correct solution.

As you say, in the use cases I'm familiar with, this would not get
reached, so it shouldn't affect anything.
Having a break or dropping the case would result in it silently
ignoring the 'invalid' tpm configuration, so if not compatibility mode,
I'd personally prefer some sort of assert, although I'm not sure how
best to do that in this code.


-jenny


> Looks good though.
>
> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
>
> Thanks,
> Jason


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 13:26:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 13:26:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527006.819132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps1dY-0004BK-5m; Thu, 27 Apr 2023 13:26:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527006.819132; Thu, 27 Apr 2023 13:26:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps1dY-0004BD-2z; Thu, 27 Apr 2023 13:26:16 +0000
Received: by outflank-mailman (input) for mailman id 527006;
 Thu, 27 Apr 2023 13:26:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X9G0=AS=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1ps1dW-0004B7-Rv
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 13:26:14 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 1278e750-e4ff-11ed-8611-37d641c3527e;
 Thu, 27 Apr 2023 15:26:10 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 40FD62F4;
 Thu, 27 Apr 2023 06:26:53 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A48C43F587;
 Thu, 27 Apr 2023 06:26:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1278e750-e4ff-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com
Subject: [PATCH v2] xen/misra: xen-analysis.py: fix return error on PhaseExceptions
Date: Thu, 27 Apr 2023 14:25:59 +0100
Message-Id: <20230427132559.14712-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently the script's return code is 0 even if an exception is
raised, because the return code is set only when the exception
object has the errorcode member.

Fix the issue by returning the errorcode member when it exists,
otherwise falling back to a generic non-zero value.

Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Change-Id: I1b76b8fa4668bef49da3282339fca3052e3379cd
---
Changes from v1:
 - use getattr() (Andrew)
---
 xen/scripts/xen-analysis.py | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/scripts/xen-analysis.py b/xen/scripts/xen-analysis.py
index 8e50c27cd898..5e8f2910cd72 100755
--- a/xen/scripts/xen-analysis.py
+++ b/xen/scripts/xen-analysis.py
@@ -26,8 +26,7 @@ def main(argv):
             cppcheck_analysis.generate_cppcheck_report()
     except PhaseExceptions as e:
         print("ERROR: {}".format(e))
-        if hasattr(e, "errorcode"):
-            ret_code = e.errorcode
+        ret_code = getattr(e, "errorcode", 1)
     finally:
         if settings.step_clean_analysis:
             cppcheck_analysis.clean_analysis_artifacts()
-- 
2.34.1
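For reference, a minimal sketch of the hasattr-vs-getattr change in
isolation.  The exception classes and the run() wrapper here are
hypothetical stand-ins for the script's PhaseExceptions hierarchy and
main() loop; only the getattr() pattern matches the actual diff:

```python
class PhaseExceptions(Exception):
    """Stand-in for the script's exception base class."""

class CppcheckError(PhaseExceptions):
    """Stand-in subclass that carries an errorcode member."""
    def __init__(self, msg, errorcode):
        super().__init__(msg)
        self.errorcode = errorcode

def run(step):
    ret_code = 0
    try:
        step()
    except PhaseExceptions as e:
        print("ERROR: {}".format(e))
        # getattr() with a default covers exceptions that lack the
        # errorcode member, so a failure never returns 0.
        ret_code = getattr(e, "errorcode", 1)
    return ret_code

def fails_with_code():
    raise CppcheckError("cppcheck failed", 2)

def fails_without_code():
    raise PhaseExceptions("generic failure")
```

With the old hasattr() check, fails_without_code() would have left
ret_code at 0 despite the failure; the default argument closes that gap.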



From xen-devel-bounces@lists.xenproject.org Thu Apr 27 14:23:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 14:23:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527016.819148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps2WJ-00025n-K5; Thu, 27 Apr 2023 14:22:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527016.819148; Thu, 27 Apr 2023 14:22:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps2WJ-00025g-GU; Thu, 27 Apr 2023 14:22:51 +0000
Received: by outflank-mailman (input) for mailman id 527016;
 Thu, 27 Apr 2023 14:22:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LZ9G=AS=citrix.com=prvs=474adc217=jennifer.herbert@srs-se1.protection.inumbo.net>)
 id 1ps2WH-00025a-Jh
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 14:22:49 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fab5396c-e506-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 16:22:47 +0200 (CEST)
Received: from mail-mw2nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Apr 2023 10:22:36 -0400
Received: from DS7PR03MB5414.namprd03.prod.outlook.com (2603:10b6:5:2c2::6) by
 BN8PR03MB5026.namprd03.prod.outlook.com (2603:10b6:408:d6::15) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.22; Thu, 27 Apr 2023 14:22:34 +0000
Received: from DS7PR03MB5414.namprd03.prod.outlook.com
 ([fe80::fdfd:97e5:7c55:82d]) by DS7PR03MB5414.namprd03.prod.outlook.com
 ([fe80::fdfd:97e5:7c55:82d%6]) with mapi id 15.20.6340.022; Thu, 27 Apr 2023
 14:22:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fab5396c-e506-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682605367;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ad63oPpzUqUnt70ypGC9WRVmmhmOSVpciSG8wO3Ozz4=;
  b=HnQ//PP5g9h+inAvYqEAcTyYUgi2PvJyFspPGo1xxvWSIIVkAZIF1G8u
   oeCU65jG6ZoGjKhQ1LXmtU7GpK/zAT1+A6mB11pldHuucDTE5CtJ850l2
   3K6px90YOudoiNEPQ3RNrGxiq9IUncixuuz2gBtb5gAkW10uykF+71Mcn
   U=;
X-IronPort-RemoteIP: 104.47.55.104
X-IronPort-MID: 105852491
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:Oq4NjKAqkFuNAhVW/+Liw5YqxClBgxIJ4kV8jS/XYbTApG53gmAEn
 GEdCz/XPfzYZDTyctEjO4qz9kpVuZeDztdnQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G9C4QRkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw5vY0OHF8+
 fciJzEtM0vYi8mM54CmY7w57igjBJGD0II3nFhFlGmcKMl8BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTI+OxuuzS7IA9ZidABNPLXd9qMRMtYhACYq
 3jM8n7lKhobKMae2XyO9XfEaurnxHumCN9ISOzhnhJsqBqdmS88ChMRbGaAqPulkBS9fs9kN
 0NBr0LCqoB3riRHVOLVXRS+rGSVox00VN9ZEul84waIooLW7gCfB2YJVHhBZcYsudUqbTcry
 kWZ2djuAFRHoLCTDH6Q6LqQhTezIjQOa38PYzceSgkI6MWlp5s85jrNRNt+FK++jvXuBCr9h
 TuNqUADa647iMcK0+C+4grBijf1/pzRFFdtukPQQ36v6R5/aMi9fYu05FPH7PFGaoGEUl2Gu
 3tCkM+bhAwTMayweOW2aL1lNNmUCzytaVUwXXYH80EdygmQ
IronPort-HdrOrdr: A9a23:mz1m0qh7w3c9UiuLJrl/S1501nBQXuEji2hC6mlwRA09TyX4rb
 HWoB1/73TJYVkqKRYdcLy7Scq9qArnlaKdgrNhW4tKPjOKhILAFugLh7cKpQeQeREWndQtsZ
 uIHZIObeEYOmIXsS8q2miF+4dJ+re6GP7Bv4jj80s=
X-Talos-CUID: =?us-ascii?q?9a23=3AYzbFImi6iv3PxqBmXA4iMDoA8zJubm2e9WfiL0S?=
 =?us-ascii?q?CBmdJFebORXSq35F5up87?=
X-Talos-MUID: =?us-ascii?q?9a23=3AWVd2RQ2D56PsUlE2hjWp2+Y53TUjpKrxS28Pn5I?=
 =?us-ascii?q?/hvKNNzBqYTS4kQ+rTdpy?=
X-IronPort-AV: E=Sophos;i="5.99,230,1677560400"; 
   d="scan'208";a="105852491"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TJd/WEotfqua+iIaQVGXuX94IiZrFKhWaLJLrLl79HAWMqfaFdXS0UUCxwVE2UvSpjXdNYDlJDhsVrULqdNhHGl2vEsZCvEviBBWQE2MqXjiD+gKmJW1GBldfm46hsJ47UwIFBolDnJwjXfQll0cVCNSeWq4Y530uC67YROAV7xFvf6wbA1Axnlgi/Z2dNRpCfFAW7Grb9NpNhBPLgbp77765ZzDkhcDa9Ye+HlyUwsdPMdC0ox+lp7Uu9Mow+3B7byv5XcLTIsdYedOU3f3mIrNC5jUtf60eX+tZNP5W+FX4YOCixdWDNsRsb4F0OHPtBIjQLOfE83KeBSZbRLr3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DmGZ5QmJqK4pu+gOZzCUk686UjjFCwoRO4ZHGu4/hbY=;
 b=lrGEOpe8T9FiUn7OO8i3O7bRp7dvoK8jTQG/8S0qK1esidxh8tWhIp1ir1usbjj6Lu2RNMRWFI6Fd40YQdEo+93liuXTfjA7YiAnjXSItHRNrLOHNErITtGhpRj2zSuEkCZd+g188kO8pVKkSmraNVjs/5p/ua8qeumvizkfs1QRJZ+5fu+DJvKyyxxezrzfW0P7fR5/zbTh9+IyUPvepovUtfFwBDm38D1cmVc1ZGSD9b/InlkceSslsqJx5imvr/3mChwYwNpcADRln4oafZGJSVIshkkrGHeal1tH7Yp5D2oXzn6ajMhvtz+K0Ssd+8vLCgPopvTEtmARU5QRWg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DmGZ5QmJqK4pu+gOZzCUk686UjjFCwoRO4ZHGu4/hbY=;
 b=PZsFDKLX9TAbFLQtMQ25oMSDAwo6hMbXoiUxbOnT2VkYM/hsIGiExOgxQigAGxwntuDd7gdhgJCQHSdEMcmengQzaOOHtkkGHT7DeZl8P1qDdeWb1CrESfOacvvOYJ9OQVOZC1rFPXMKR1+F/TAFRB4RUZ4EsO1DR95oNJU+4GQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <7971962f-2bfe-e824-980a-cc913ead4580@citrix.com>
Date: Thu, 27 Apr 2023 15:22:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 2/2] acpi: Add TPM2 interface definition.
To: Jason Andryuk <jandryuk@gmail.com>
Cc: jennifer.herbert@citrx.com, Xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20230425174733.795961-1-jennifer.herbert@citrix.com>
 <20230425174733.795961-3-jennifer.herbert@citrix.com>
 <CAKf6xpskM2k5aeLoYLfxnR9KFuK7w3NkZaT_4z-SdOQ8VUc8NQ@mail.gmail.com>
From: Jennifer Herbert <jennifer.herbert@citrix.com>
In-Reply-To: <CAKf6xpskM2k5aeLoYLfxnR9KFuK7w3NkZaT_4z-SdOQ8VUc8NQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0372.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18e::17) To DS7PR03MB5414.namprd03.prod.outlook.com
 (2603:10b6:5:2c2::6)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS7PR03MB5414:EE_|BN8PR03MB5026:EE_
X-MS-Office365-Filtering-Correlation-Id: ab294a8a-52e1-40ef-5c87-08db472ad7ec
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	eWnie+LYwaz7izyNJMAyYsiyTToXR8jZv6eBpXCWCvGbHQXBIaic7gAHEbry2BkP1dL6P7a6ohkZcZtqoHVPR6lrjHdlzPyr5hWXGFZSR9GuPHvO+DmHpABMvFEUjpOXJYP9AcGU17JD8rEdAkPw5aNbyfORpKKvHpx3kRrk7OIPOoxtq5odlsKodBkG6WMR7HKinKrPcCmvTsqgPc1VURaeZy0hiouERnU/FAObGeYuPtwo2qSpRD5DCLtbBEbZrCxdEprTMcbhEZnw2/fIKcPOtFykS/5W6lWSV0EekKK4WBinpmLih8sa1hFq9rdJg0c+PRhKjkqt/JyBgLTjZwpffsskHosclFOpE0IvEvaeo8/qwU94gWwLX3ce1PvN46NYFWzhXKdvuex/bZweQCdkxU5ZWs+YqoR+FU6SEuEeIVNMUkStqjkW/n/eJlrXby3QKBpr5miBrkyPlVeGuB6BiVhNk73TnFZxWyAfAK57iHBvRMSHtq33blCdm1HO6/uwi5LiTLZxJ17FRzxYTi5bk2UvZRbpbaL/SEBvEfC46gW7VKHL/150BisIhwW9KQip6UBM+laledTQB16OA26oRDXVdJvyRSMTADLagMr5vtDRzywuLlz3i3XcQTeHb3MAUHEsA10Sp88XmUd/2Q==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5414.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(39860400002)(136003)(366004)(396003)(451199021)(6666004)(31686004)(82960400001)(478600001)(54906003)(6486002)(186003)(38100700002)(2616005)(66946007)(66556008)(66476007)(316002)(6916009)(4326008)(41300700001)(6512007)(6506007)(26005)(107886003)(86362001)(53546011)(36756003)(8676002)(8936002)(44832011)(31696002)(2906002)(5660300002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NVI5aVNObituU29nSWx6c1BnVVZYbnBOQTdMRisySXRtMnkzblVFRnJvZjN2?=
 =?utf-8?B?N3FrMDB4aUVsUXc3d3R4U2hUcFRPbmw5ZklNWjdOdDZGZFRFRWowLy9nRnRt?=
 =?utf-8?B?RFRRdGZSdGsrMVN5Q2lmSXp0NlJVVC9KRU14cm82K3dLL2QzV1E2YmRaSmMw?=
 =?utf-8?B?bDBHZ1VWK1d4K282OE1pVkhvVzM2OEpJazlWS2d1K2NHUWd2cU03NjBlanVX?=
 =?utf-8?B?Vm00cG94NEFnZStlVlJFNWlXUE9XNUt5Z3drSW9jbjVaQTR0Q1BPMUY2ak9P?=
 =?utf-8?B?cUVTMTdKSGNSSEpsZFN3RWlTQjlwR0V1cFBzdjVUT3JES3czRjlaWjh4WndC?=
 =?utf-8?B?aVBOUDQxR0o3V2pSdDM1S2NuMHMrVHdDMGpXKzU5OHFIdzhITThjTEtKUm5s?=
 =?utf-8?B?WVcwRVRiUlBKZWZJQS9SUlExbjBkdFBCR0hnSzZHRHNmYnBaOFNBRjFocGFW?=
 =?utf-8?B?dDFadFIvdHBOelVBRldqQXdqbW5pODhaWmFVN3dqeUJTUFlaNCtFZTh5VW9E?=
 =?utf-8?B?c2NjUHNiTTNrY29nUzFoMnVOejNkcHlEcVFFL2hTbElmS3FqSzJlYWQzNnR4?=
 =?utf-8?B?SWt1OWFQbjhkbkVkbXF4NXo0ZDNsSDRhU2JrOEUyNjRYUUwzVHlkMzBPcXpv?=
 =?utf-8?B?MnkrNUhBYUJRelkrVnc3NndkZnEra3RpekJWQkVMSXZrSkc5c2g3U2xGZ3lo?=
 =?utf-8?B?TU9FV1pGYkllQysxaG5MNWRFNjNTeXVjSC80Q1h4VkZOUEpZNDBuK2VpTXA3?=
 =?utf-8?B?ZHJrUDE4aVRxbjkyYUpwOHpPZ2tKclhKeUZ3cmhTSVpCTTNVUzB1MWFMZTA2?=
 =?utf-8?B?MTV4SG1jSllEcEtRcklQR1c5SWRRUlhGcHFoSTJtUzFQalcxaEorVUlRVnIr?=
 =?utf-8?B?UWdubktYZjBsS3dVU2NQOWs4czNtNGsxOWFxcUhBMjJ2MmdPUS9yUkcvU0py?=
 =?utf-8?B?RGNXQnp6L3ZyQ0F4SURqelRGRmpjWmhuQWtwUi9YaFZrVmt2dmp1VEZud2dY?=
 =?utf-8?B?UXVQQVFJVWY4eERlQUVSa0puNThoSEpPNTVLaTFCc0pab1dtYUJaUXlJTWpV?=
 =?utf-8?B?ODNZbHc5YzBwODFoTllDNHE5cUtZWTV4KzdkQ0xkR2VNbXY4ME1kNzJldmNY?=
 =?utf-8?B?QkdkalR0NkthYjdpQVJjUTlFdDVNaHJiR09uWmQyK3c4ZUpIMyt1djlFeitH?=
 =?utf-8?B?cFpOUFFDWUM4cU9ZU0pnWTQzSGxEWWEwUjVMUEhBaEdFWHd3cE1lYUxjdzV3?=
 =?utf-8?B?d1FqbmVnVVVySXVENkQ0UGwzUEIvbStFQ21rQkt1RVczQ3ZHWmR5cHBuMjVt?=
 =?utf-8?B?dGV4NU9BOUlCdlY4aWRTd2QxL1I0ZkRJdk9JYjJZaWZtdmcxN3lNdjkvTmtj?=
 =?utf-8?B?djJERjVKTHZGVll2Yitia2lEMFZzVjVGdXdEUlYwYXdKbC9CM2JhdjBNWkYy?=
 =?utf-8?B?d3BxbUU1TjhpZHRPSXB6eWxiaktPU1JQRUZLMEFpdUx0SytENEhkdUZyalE5?=
 =?utf-8?B?STM1K1Y2NVI2TFFxRWd4YzZQTzB5R09sRDR1TnRVSFhzRDRzV3ZPLzV3Snd5?=
 =?utf-8?B?Z21CanQwT1JObTB2T00zRjJhVHdrdERka0kwcXFuajFudnZreWJUY3padDVy?=
 =?utf-8?B?aHpVN2VZNE9RZHgrK3dWeFdtbzg3eml3dXpaNlBCMTd2bFBVSGJaUm5aS2RY?=
 =?utf-8?B?QnRvZ2JidC9lNlZIMHVBakdOMWhEMEJYT0dqcnd1TS9YaUwyUXNtVGI2THFs?=
 =?utf-8?B?WVUrWE8rQU5IdXNlK2thNE9GdFVCTForaS81Mzl5aWN2VnNCTDZLN3VUMHVq?=
 =?utf-8?B?bDZyYUc4elFGV1NZbUhUUUdYYVdLeVZBVWFxSVQwUWRXTmEwKzd1NkZ4Y3F3?=
 =?utf-8?B?UVY4bzNjZzR6NytQR1lRaFp1NGZxenFGZi9EMGpJWnRXdkJHSjNIaXlaS2pD?=
 =?utf-8?B?SW8zNzU5eEptYng5UVUwMVRiaVArWndJdTc4djY2NFZqN29IaXNzM3NwS2ty?=
 =?utf-8?B?UDU4bzBVYytGbk80eWU2eGtlcmx0R2xEUm1RUkhUWmxCdGFpZXN2NFRTRjVR?=
 =?utf-8?B?bEoydzRJdUkzLzJSWDVFTDJrKzZrTmZrL2dXd0VCdi9HeXdLTXpReDJUZGda?=
 =?utf-8?B?RllPMXFONDAwZW1HZUJoREl6bEUzOVdhL0pIVzN0ams0aXlPYnB6a3pPTVlV?=
 =?utf-8?B?YUE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	B2V6jTbwqHuq9gxOLsi1cIFn9AjOi0/LHw8kYBKtEqx2LjQA79fUOPNYjaf4wUwQW6k7QEk+jR+R/Dcf4avBSBADC0Jmu3ZdPsQUgcjtHaawV76QDyd1K4xnXZf8kQJrXRSO+87U8Tf844vCNxJBm2SEqqDAJnUjSNt51V8/pdK6ApMFQgXjmyAN6DzTru3TwBly393gyHXAQaTHp19Vau7mqbbi+QcO7uO8Ihd1fgZT+ZS/QPT+fOCUDBlJjdQtPhhkuW8nEFt62z/s36ufYmkDhqbxTBJ8Xn8YxAnXBiEZk2zDXVnGtirfd3sO5d8oYOLbzUjnGaoXexZNbdBntXlwv95PchAQg3q9zaT2kP7rleoYfElXIjfIOUvFeIsTgHv8n7hr36LiIHphCl8jmf8Doz2QJbe2ZCI2tqLDAXLIZq3XCXMbu64h2tu+9qRMtfBNTAe6RZdtvd5QFClEMkkKMH/ifx4Utgg+uQRbwikmJ1XVmw+XaLAef4LZ2W23Dbm6kxjbgggNBJxJgUBkiLrrl1zEjAgXPeil5oJXYaevBsgigwY5pZmn+pLf26y8gcM8nqT+15U81dJNTFMoZjYt1wkV7brMxFSs9235tm8D6IXHh3S/8CAQh82gGeXfI72meH0Q6BZnL7cKFYO2LdpnskiJUqVCAOrSfR3qsTzydUGpqMReekS6ABgrBkJRY8JHLeehqNHPQ7N0GzdwD5ncRpDgFsqPfA81ylIgqAoqkHwQD/O0/Q99QJQZD/APA0M0/Hae6VmODCGHKu/iVKbY0TXaZVVRfN6zN5m2lWGkGlhliQnHVLRXyIybgkNorAJr+1YoZYDnLsva+aTZ9GPOKFzioP78WH03SzSWt4Q=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ab294a8a-52e1-40ef-5c87-08db472ad7ec
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5414.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 14:22:34.1715
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 12j0gKGNh45nF48292Fow6FqO4TQdM016DqGXVV3Jkk7ipA8TmOec/GOYMaiHtCrPpc22JbdZcGoO+3tSw0r2+YPiib7G0UFAnEkJnDECRQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB5026


On 26/04/2023 21:29, Jason Andryuk wrote:
> On Tue, Apr 25, 2023 at 1:48 PM Jennifer Herbert
> <jennifer.herbert@citrix.com> wrote:
>> This patch introduces an optional TPM 2 interface definition to the ACPI table,
>> which is to be used as part of a vTPM 2 implementation.
>>
>> Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
>> ---
> ...
>> diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
>> index f39a8e584f..51272530fe 100644
>> --- a/tools/firmware/hvmloader/util.c
>> +++ b/tools/firmware/hvmloader/util.c
>> @@ -1009,6 +1009,15 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
>>           config->table_flags |= ACPI_HAS_TPM;
>>           config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
>>           break;
>> +
>> +    case 2:
>> +        config->table_flags |= ACPI_HAS_TPM;
>> +        config->crb_id = (uint16_t *)TPM_CRB_INTF_ID;
>> +
>> +        mem_hole_populate_ram(TPM_LOG_AREA_ADDRESS >> PAGE_SHIFT,
>> +                              TPM_LOG_SIZE >> PAGE_SHIFT);
>> +        memset((void *)TPM_LOG_AREA_ADDRESS, 0, TPM_LOG_SIZE);
> TPM_LOG_AREA_ADDRESS is reserved in the e820 table since it is the
> high memory range after the ACPI data, correct?

This is my understanding, yes.  We made sure to put it well clear of the 
qemu-implemented TPM, just in case it later decided to support more 
locality levels, but still well within the RESERVED area in the e820.
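
As a standalone illustration of the invariant discussed above (the TPM log area lying wholly inside an e820 RESERVED range), here is a minimal sketch; the struct layout and names are simplified for the example and are not hvmloader's actual definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only -- not hvmloader code.  E820 type 2 is "reserved". */
#define E820_RESERVED 2

struct e820entry {
    uint64_t addr;   /* start of the region */
    uint64_t size;   /* length in bytes */
    uint32_t type;   /* E820 region type */
};

/* True if [start, start + len) lies wholly inside a reserved entry. */
static bool range_is_reserved(const struct e820entry *map, unsigned int nr,
                              uint64_t start, uint64_t len)
{
    for ( unsigned int i = 0; i < nr; i++ )
        if ( map[i].type == E820_RESERVED &&
             start >= map[i].addr &&
             start + len <= map[i].addr + map[i].size )
            return true;
    return false;
}
```

A check of this shape is what makes the memset() of the populated log pages above safe: the guest sees the range as reserved, so nothing else will be placed there.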


-jenny



> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
>
> Thanks,
> Jason
>


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 14:29:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 14:29:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527019.819157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps2cO-0002iu-8s; Thu, 27 Apr 2023 14:29:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527019.819157; Thu, 27 Apr 2023 14:29:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps2cO-0002in-69; Thu, 27 Apr 2023 14:29:08 +0000
Received: by outflank-mailman (input) for mailman id 527019;
 Thu, 27 Apr 2023 14:29:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S+EI=AS=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1ps2cN-0002ih-MH
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 14:29:07 +0000
Received: from mail-wr1-x443.google.com (mail-wr1-x443.google.com
 [2a00:1450:4864:20::443])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dd7bc96e-e507-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 16:29:06 +0200 (CEST)
Received: by mail-wr1-x443.google.com with SMTP id
 ffacd0b85a97d-2f55ffdbaedso5487389f8f.2
 for <xen-devel@lists.xenproject.org>; Thu, 27 Apr 2023 07:29:06 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 f4-20020a0560001b0400b002ffbf2213d4sm18827769wrz.75.2023.04.27.07.29.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Apr 2023 07:29:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd7bc96e-e507-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682605746; x=1685197746;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=iCHoVKO38jPFFnKgVLFT/0YW6XWN64qCh8yFkzyuyQU=;
        b=k2Vor43M1aVCbW5lgeziT4Fv+k+5mp1zyPyP1un/khVxtko3PDVyYUvc5VXBP8i07u
         5y38y8Pf/+XFwKmMiF78fVBrQS8XTIbF27Cd/U5xV3bfHYxQICViRpCxf8zKz7xgvX5S
         LhvD4B6wtkjRy3HEhn92CimejNLYNGWmfHJXk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682605746; x=1685197746;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=iCHoVKO38jPFFnKgVLFT/0YW6XWN64qCh8yFkzyuyQU=;
        b=NKNdTVZHOW+l3mED0WkRJWJk2ozdSGTEgmEWTkQc2Nnrwobw55NmN5h1wEpia5AM/0
         562sgZrlYUlnX9woRwY1uUf2UAtLfji9ycRKLMlrBQXyQDmDzbppslnFpu30Mx8Wlzgp
         FI+g/ROghosTnBp4EiWKzgJzo1eToad9f9NZrtXmAqzfDVfwaGH07wrcgkYeUd2zHFYm
         qVr1J3ynjTcuGZVBWAol8IDJq9jF3o83FVfDH0LeM5FwJx4eRxvn2775Ihont9yQ/0he
         X3fZ+GHMIum+6NB9Rn5pWPbT2G9FXyRMNBvpEqPxzfk0VdW4OVzDYeUQCJHjMh8upnTa
         JTHA==
X-Gm-Message-State: AC+VfDzWhZ8y8mmsYk/2PLYeTevziAwljhouYGCuvACPUaXzSGuhoegh
	M7jkw8Hk2LAIxeJLhi4k2ADBvg==
X-Google-Smtp-Source: ACHHUZ6O+94WEXVhDIr329qNmHlojWyFhlfQMjvTjLVyOlIGelUmhMu115E8nv6THqL4j/4Qhfj2rQ==
X-Received: by 2002:adf:f150:0:b0:2d2:d324:e44f with SMTP id y16-20020adff150000000b002d2d324e44fmr1546532wro.16.1682605745851;
        Thu, 27 Apr 2023 07:29:05 -0700 (PDT)
Message-ID: <644a86b0.050a0220.dd247.df2d@mx.google.com>
X-Google-Original-Message-ID: <ZEqGrrkWb9H8GYhH@EMEAENGAAD19049.>
Date: Thu, 27 Apr 2023 15:29:02 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Juergen Gross <jgross@suse.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.org>
Subject: Re: [PATCH 4/7] tools: Make init-xenstore-domain use
 xc_domain_getinfolist()
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <20230426145932.3340-5-alejandro.vallejo@cloud.com>
 <18f4bd31-b26c-5cdc-5798-94ac8b7f282e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <18f4bd31-b26c-5cdc-5798-94ac8b7f282e@suse.com>

Ah, I didn't notice that header. Sure, added locally on v2.

Cheers,
Alejandro

On Wed, Apr 26, 2023 at 05:20:01PM +0200, Juergen Gross wrote:
> Please include <xen-tools/common-macros.h> instead of defining ARRAY_SIZE().
> 
> With that changed:
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>
> 
> 
> Juergen
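
For context, a minimal sketch of the ARRAY_SIZE() idiom under discussion; the exact definition in <xen-tools/common-macros.h> may differ (e.g. it may add a type check), so treat this as illustrative only:

```c
#include <assert.h>
#include <stddef.h>

/* Classic element-count idiom; Xen's <xen-tools/common-macros.h> provides
 * a shared version so callers need not re-define it locally. */
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
```

Note the plain form is only valid on true arrays: applied to a pointer it silently yields sizeof(pointer)/sizeof(element), which is one reason a single, shared definition with extra guards is preferable to ad-hoc local copies.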







From xen-devel-bounces@lists.xenproject.org Thu Apr 27 14:37:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 14:37:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527023.819168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps2kJ-0004Bz-21; Thu, 27 Apr 2023 14:37:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527023.819168; Thu, 27 Apr 2023 14:37:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps2kI-0004Bs-VK; Thu, 27 Apr 2023 14:37:18 +0000
Received: by outflank-mailman (input) for mailman id 527023;
 Thu, 27 Apr 2023 14:37:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S+EI=AS=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1ps2kI-0004Bm-9e
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 14:37:18 +0000
Received: from mail-wm1-x342.google.com (mail-wm1-x342.google.com
 [2a00:1450:4864:20::342])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0143814f-e509-11ed-8611-37d641c3527e;
 Thu, 27 Apr 2023 16:37:15 +0200 (CEST)
Received: by mail-wm1-x342.google.com with SMTP id
 5b1f17b1804b1-3f19afc4fbfso64894705e9.2
 for <xen-devel@lists.xenproject.org>; Thu, 27 Apr 2023 07:37:15 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 d13-20020adfe88d000000b002e55cc69169sm18680949wrm.38.2023.04.27.07.37.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Apr 2023 07:37:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0143814f-e509-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682606235; x=1685198235;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=11U62xqVI6IElR8Ictp5N7XNuJZduJKfVtizCWTyVfc=;
        b=FPPKXnIeJu2yC0JRFeg0dREaPEiLu2M7PE0Q/MZ4kuQGfS53w+51cRn46k3klaKKv0
         BmvCrY1EV7do4eCcZ9T55DsSFfbbNYM2mGRai01k5ncvBVlZAbzSJP3FFMgs4qjXpjT3
         6eHpWpjYatpFIhea06FlzmxI317ytaPCFRVxU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682606235; x=1685198235;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=11U62xqVI6IElR8Ictp5N7XNuJZduJKfVtizCWTyVfc=;
        b=Tct2Ny2ximUmWSdvf3TvRXWvcee66X/IP259Ur9vXAoWkz14/qNyHB+zVh/teGcoRv
         BXRR01pAAKfyjLZ/1Fatr8XL9+N7Ho5AeqKiKIH3S4h8oAiZdj+LSV+95qJ/4UG+Jso/
         L35E90EDqOiO4wnuxLrkhR4qSO9Q2wWg81OVHlw900DRjltvmr+vWqcKI5XOUaD4pNOC
         dr2cQRZ4t+9lduOtlHpTHEuTmH+4lOr2Qyn0H+HQFjWZbIrO63AGVd+SLeSnt104QhOK
         uaHjr6d6jGtestWa6ELZWPPhRos3R70c89xkGCs6OQQSZI6OWUONQTH8nv9N8dqnB4jL
         uOYQ==
X-Gm-Message-State: AC+VfDx9OsG3EMSwCJFXIRU6WegU+UqUfby5idlW5jBXKkM+ISSp80RO
	P8EeOMHPC6uwKt5xzbc8/o9fSA==
X-Google-Smtp-Source: ACHHUZ4mkurR1ZW8hSg4SdtK0W/DqU/twQXbajkV8cxWGlmhNURM+6406XK5JibdB0cXwCrrNl7PGQ==
X-Received: by 2002:a05:600c:213:b0:3f1:8223:6683 with SMTP id 19-20020a05600c021300b003f182236683mr1623512wmi.40.1682606235314;
        Thu, 27 Apr 2023 07:37:15 -0700 (PDT)
Message-ID: <644a889a.df0a0220.9062c.d2f3@mx.google.com>
X-Google-Original-Message-ID: <ZEqImKg1iMJBBRAm@EMEAENGAAD19049.>
Date: Thu, 27 Apr 2023 15:37:12 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 1/7] tools: Make some callers of xc_domain_getinfo use
 xc_domain_getinfolist
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <20230426145932.3340-2-alejandro.vallejo@cloud.com>
 <d93a5eaa-1f6f-a2b9-b238-04b8bb1a33dc@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <d93a5eaa-1f6f-a2b9-b238-04b8bb1a33dc@citrix.com>

Answers inlined

On Thu, Apr 27, 2023 at 10:51:03AM +0100, Andrew Cooper wrote:
> Just as a note for the subject, we more commonly write function names
> with ()'s.

No harm in abiding by that. Done on v2.

> > +            "crashed",         (info[i].flags & XEN_DOMINF_shutdown) &&
> > +                                 (dominfo_shutdown_reason(&info[i]) == SHUTDOWN_crash),
> 
> Isn't this your dominfo_shutdown_with() from patch 6 ?
> 
> I'd pull that forward to this patch too, and use it here.

It is indeed. Done locally on v2.

Cheers,
Alejandro


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 14:48:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 14:48:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527026.819178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps2v9-0005iY-2s; Thu, 27 Apr 2023 14:48:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527026.819178; Thu, 27 Apr 2023 14:48:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps2v8-0005iR-W4; Thu, 27 Apr 2023 14:48:30 +0000
Received: by outflank-mailman (input) for mailman id 527026;
 Thu, 27 Apr 2023 14:48:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2R8u=AS=microsoft.com=mikelley@srs-se1.protection.inumbo.net>)
 id 1ps2v7-0005iL-Be
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 14:48:29 +0000
Received: from DM5PR00CU002.outbound.protection.outlook.com
 (mail-cusazlp170110003.outbound.protection.outlook.com
 [2a01:111:f403:c111::3])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 914ee298-e50a-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 16:48:28 +0200 (CEST)
Received: from BYAPR21MB1688.namprd21.prod.outlook.com (2603:10b6:a02:bf::26)
 by CY5PR21MB3591.namprd21.prod.outlook.com (2603:10b6:930:c::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.19; Thu, 27 Apr
 2023 14:48:23 +0000
Received: from BYAPR21MB1688.namprd21.prod.outlook.com
 ([fe80::89f5:e65e:6ca3:e5a7]) by BYAPR21MB1688.namprd21.prod.outlook.com
 ([fe80::89f5:e65e:6ca3:e5a7%6]) with mapi id 15.20.6363.009; Thu, 27 Apr 2023
 14:48:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 914ee298-e50a-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FidD5mo08ZZMs3qcbCHj/LsG4d+WWfJPvwFEp5DfPQ0DO4cftDKuQEIDFltayIQYWxw+91RLGJxgI4TH71etbkPZ2kAFP/LGvndIEo7qdKiSiO+e2uHwj3QDON06SmYvo7nDi1nK67JGyen8bNFqvr7m9w7YSQS81sAgp0WEF1rWbWofZiLiANR6RHoD8tan97ue7ZDWEU6wa71jenpQz99PF6ux42xqwGtDLdrGsddwQ7Pg5ylAI8Rv7I0VCF9Qjhm06BMnJOhhcs4d2aNEoAidIywGPZWKdkXHXR8UXGSCquO3884Nida3mFgA02pmfo4DuDC3/nUqy6ieM4/r+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Nxt769QptW+s630rckbhmu9SAzkE81OepKrOgTaIYlw=;
 b=Ux63q2WzcXqnYJK2Kvn/0lCJgupO4k0mi8cCf1Br5LnZLFSTHZiOdEEQ4Q5uqWvZKop//nJwm6RdVxR/YyoTjcB8vww3Tw+/Ht0WIaggiMH1v7m8SAEiTbaAMpQU5clbpsXIv2dVTjoTXjdCTieV2bX2rjWjZDUbmkqQEQgLWfJrNGq5F2D2CFmbpMm/gccx3w9Q7sLM4Ubd3p8u2KxprLn7XUSOliqtnpPgKKoECEW4mQPAHim7tHyoHMgGcmVjUFyvw0EuHucoe7uFleQV9cLS4qW42h4db+7sblVoPYU+hhwmX3/VS06vBZ1E7VcjICVjlV0TyUw/X0xNipOM8A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=microsoft.com; dmarc=pass action=none
 header.from=microsoft.com; dkim=pass header.d=microsoft.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Nxt769QptW+s630rckbhmu9SAzkE81OepKrOgTaIYlw=;
 b=EXJJriYI5k6vBBRh7obCiEK823hyn9Go/XSWIcZqT70cQZjkgwcTewkow5/omK1EUhEDklwwPh7Tom9jsFOQbievkExHBOumTULKvyWUrzpWxG5OW1bae7QVOCHmiGUEmUSzOCB4S5K273mpNLpK3xnH05ssPaIvMbCjK5qG4Zw=
From: "Michael Kelley (LINUX)" <mikelley@microsoft.com>
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
CC: "x86@kernel.org" <x86@kernel.org>, David Woodhouse <dwmw@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>, Paolo Bonzini
	<pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom Lendacky
	<thomas.lendacky@amd.com>, Sean Christopherson <seanjc@google.com>, Oleksandr
 Natalenko <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
	<lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama Arif
	<usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Russell King <linux@armlinux.org.uk>, Arnd
 Bergmann <arnd@arndb.de>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>, Catalin Marinas
	<catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
	<guoren@kernel.org>, "linux-csky@vger.kernel.org"
	<linux-csky@vger.kernel.org>, Thomas Bogendoerfer
	<tsbogend@alpha.franken.de>, "linux-mips@vger.kernel.org"
	<linux-mips@vger.kernel.org>, "James E.J. Bottomley"
	<James.Bottomley@HansenPartnership.com>, Helge Deller <deller@gmx.de>,
	"linux-parisc@vger.kernel.org" <linux-parisc@vger.kernel.org>, Paul Walmsley
	<paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
	"linux-riscv@lists.infradead.org" <linux-riscv@lists.infradead.org>, Mark
 Rutland <Mark.Rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
Subject: RE: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Thread-Topic: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Thread-Index: AQHZbysRZA8Vj8T0G0ur4YbixPC7La8/SSgw
Date: Thu, 27 Apr 2023 14:48:23 +0000
Message-ID:
 <BYAPR21MB168888DC5432883D8866BA40D76A9@BYAPR21MB1688.namprd21.prod.outlook.com>
References: <20230414225551.858160935@linutronix.de>
In-Reply-To: <20230414225551.858160935@linutronix.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
msip_labels:
 MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_ActionId=46284e3d-8064-4df4-a070-7e25b2bb6662;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_ContentBits=0;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_Enabled=true;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_Method=Standard;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_Name=Internal;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_SetDate=2023-04-27T14:22:50Z;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_SiteId=72f988bf-86f1-41af-91ab-2d7cd011db47;
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=microsoft.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR21MB1688:EE_|CY5PR21MB3591:EE_
x-ms-office365-filtering-correlation-id: f896ad88-fe39-4f83-266e-08db472e7362
x-ld-processed: 72f988bf-86f1-41af-91ab-2d7cd011db47,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 NWmmkUhHIp7h/hf4wUwLyyRMGm5e99Tu/sxAss9qebk4QYmtuboxufSOw98Xn9Jt0ZxttFSNYvzUSZyK6WpDm2krqaTBg5Jz4l4fqd6c21wrPSQqpSAN8a1LDZh373yxLTlLdwINCjAntW4iPDaP8XDXxor/kXCqhLTwSbo0frTC4K1RY9NrRHp1fd6c99Rcye1jSbfSmge1ONMzgjA1hUENXkZ94m9UgyhzqwW0LoHkTP2ca/0ZExcIX/m3ja+1Q6oqPhhTnrHZKEvK1JCocNBbSUJvnRs4Yph4mOcmis2LF59OTU9rpPxLCw9StiGB/hYVN4Nf9Qza3ZRwT6G0U00fFxNgLICO/zme7wvpcWXsKgWx7h+Nf1i4j7NSRhTBgFw+7hi5U54GWTrTkqN+eJZHizwbTZMZ1eqBltl6+mWbzfn79or9B3rtl6qk8NFavH7UyJiS9c/f3wNmK/HYLbjzH2/YjsLC8wDCFBftMuJdXoMiCcoNuvId5P4pZlPMbt2KJW6w7E4c5Ml2PKFYh2BoPIAv//5ceDyJ74M2RTma8B3dmjREiNZUcZhfHitYDTkevhSc5sqqjWRrscJaWrAX3ov52oaD9cr9gKdkEbtWUSwGjwPW1VPg/+KPbir95r5FTaQ/o7YgAObx6BWjIDqluuzUj4MFghErOS7LzW0=
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR21MB1688.namprd21.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(39860400002)(346002)(136003)(366004)(451199021)(33656002)(38100700002)(4326008)(10290500003)(7696005)(38070700005)(66446008)(316002)(786003)(110136005)(478600001)(66556008)(71200400001)(54906003)(66946007)(8990500004)(66476007)(41300700001)(86362001)(64756008)(76116006)(8936002)(83380400001)(26005)(6506007)(122000001)(55016003)(82960400001)(9686003)(8676002)(82950400001)(7406005)(186003)(5660300002)(2906002)(52536014)(7416002);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?us-ascii?Q?W6lJiT+4w8ZRagviKqWC2A2XgpZ68z3pUr1Iv7/nshr39JdQIWHEB405Ertm?=
 =?us-ascii?Q?NkKNdMQ9DsUKPz0MwW1DILAaR7kNDQQin11j3CHuUZXEAvl+B4KqchkXEyKP?=
 =?us-ascii?Q?nhWdaezRu1854fiKM6cq9SdcQsEpD+0jegHUz7VMn/FAS/N2FZ7p8WkL3OKg?=
 =?us-ascii?Q?tngf7sf9/cCjDkMwQwqrV+MPQDnYncxbXl+ezboIW6ZJw3jmOZ20h7gE9WUy?=
 =?us-ascii?Q?zdXUlu9YOQ7c+rFfl1w5wECNb8rsMfRN3z5zu4ROCXrZs3songstQjaVcaLP?=
 =?us-ascii?Q?BmQ9WME8Dmxg3rArdCboQ2T+eHCQcvZh6z3gDwYt8u+YK/rR91ofsm9qPepO?=
 =?us-ascii?Q?R0qRE+F5QwTOealwAK5FXu1aQrimGigh7M89TSslTSfbYJ45yVuWHktJUa2z?=
 =?us-ascii?Q?NTw3hEOe1EJv4vX/mSodAl0iRtSRwItPFpFbOphIjwZyc8VeZ1PhhOOFKXqW?=
 =?us-ascii?Q?lDgERP+K2IbZRbpPSGKTMII8++SnMjg5gWyZHyk+uhV9jrpcDDRyI32npUNY?=
 =?us-ascii?Q?LoHysFz8GXLNTjMxY1QbnaaC5CSucG3XyEhSqoq/NQhNl4RQak+DYmulnyoo?=
 =?us-ascii?Q?TxagI9B6Fm6agr+qpi1QqwnYQWWVstCL/EKYWKwGUOOwTMMoEOUs8+VpCL5Q?=
 =?us-ascii?Q?8n7CiesCAqYE26j0S7peweAjaj+aNm8quFSGPKqe1aGjbhFkXJj1dVEZh8zC?=
 =?us-ascii?Q?KC9hQC/Pin1sU01HypsMJDdrhG9C1+UEtA2+C/SCm7bnTKeJEK0sdH0wQm5u?=
 =?us-ascii?Q?1KEPteGpPFbKn6Ery7CC9P0cyCwoHHulMEljOdU6x9ufvCujAXqvg9ANe5Vb?=
 =?us-ascii?Q?yjLELto+wNh3/qvle7RaGgCKIyGwRWXHLYZcE1FnaSP6QvKZ71f9YRtcun44?=
 =?us-ascii?Q?HSQCXv5h8esHGH4mG3SI6FCkOikDzkioLgIbN6YgA1tcAB0u03VZZVmlhUZa?=
 =?us-ascii?Q?zlHxa2Nvnl8V1uoBKaTG9jAM4bf+JRf9ag2U9K3Nef9emKMDyNgWkIodwXCb?=
 =?us-ascii?Q?Okcg08k1CJ4pTeWtLpEBpGzFksCmm6dqXDQoLkX+mOS7j5SzoorEl0vN2LGt?=
 =?us-ascii?Q?SsdKojk+Cds6BKyt+95uM95TdefFRpFxue724VdM74edz2cleZIMLVHUD28j?=
 =?us-ascii?Q?HqbzeqR971IqaoKwK1ERDKKIfPkhU9vGFtvYrdNq1Vihq4Zcvxy2aNBM5+Ya?=
 =?us-ascii?Q?SKat6d5Z+Hizrd8DqrAij9oDrqP3qzV0xRFHN0ssVx03wK1vPnhC1dj1ehtP?=
 =?us-ascii?Q?OseYF72nbh86EXpyBrRMbL6L2X7ywYN40zS6VjMzaRLdPeVzLrzct1J4Ldzx?=
 =?us-ascii?Q?yMj2fgYJAAcyuW/jUd8e3X2bxN2ko6oN8wphLeDcBrXF8WpT9H67onVge7MV?=
 =?us-ascii?Q?zbd3PtE1kPwutdsQyfiBNSvClsTExEeHVeitmV5Y8LI3KDQ18w5X9NlIwFWf?=
 =?us-ascii?Q?l3rF4HEjtZ3AGnmXr8kOIj/bgEC8bf1q+BXGgJGgmZTxPMFTA2/Gd/7jj2wN?=
 =?us-ascii?Q?HWlUBrKbiGwic/o91lN3seK+Ob71l59Ngiz0byFNibAuQlKIxc/W75dveaTx?=
 =?us-ascii?Q?ruizcvuy/bDLurjAa3I/0RiS3QmsinyNDuiIfsfytJTuNvJlPIB2EMruHvuC?=
 =?us-ascii?Q?og=3D=3D?=
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: microsoft.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR21MB1688.namprd21.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f896ad88-fe39-4f83-266e-08db472e7362
X-MS-Exchange-CrossTenant-originalarrivaltime: 27 Apr 2023 14:48:23.2371
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 72f988bf-86f1-41af-91ab-2d7cd011db47
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: YrxGibgeGnDk6XVTMicUZCzlePqRgHhRoxhsCC4oaF79GggjOnm7azqCR25lFAik2nmp68DOpWD/cEVwrYZZAWW2JWVxou0cU4qAsNSdxjg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY5PR21MB3591

From: Thomas Gleixner <tglx@linutronix.de> Sent: Friday, April 14, 2023 4:44 PM

[snip]

> 
> Conclusion
> ----------
> 
> Adding the basic parallel bringup mechanism as provided by this series
> makes a lot of sense. Improving particular issues as pointed out in the
> analysis makes sense too.
> 
> But trying to solve an application specific problem fully in the kernel
> with tons of complexity, without exploring straight forward and simple
> approaches first, does not make any sense at all.
> 
> Thanks,
> 
> 	tglx
> 
> ---
>  Documentation/admin-guide/kernel-parameters.txt |   20
>  Documentation/core-api/cpu_hotplug.rst          |   13
>  arch/Kconfig                                    |   23 +
>  arch/arm/Kconfig                                |    1
>  arch/arm/include/asm/smp.h                      |    2
>  arch/arm/kernel/smp.c                           |   18
>  arch/arm64/Kconfig                              |    1
>  arch/arm64/include/asm/smp.h                    |    2
>  arch/arm64/kernel/smp.c                         |   14
>  arch/csky/Kconfig                               |    1
>  arch/csky/include/asm/smp.h                     |    2
>  arch/csky/kernel/smp.c                          |    8
>  arch/mips/Kconfig                               |    1
>  arch/mips/cavium-octeon/smp.c                   |    1
>  arch/mips/include/asm/smp-ops.h                 |    1
>  arch/mips/kernel/smp-bmips.c                    |    1
>  arch/mips/kernel/smp-cps.c                      |   14
>  arch/mips/kernel/smp.c                          |    8
>  arch/mips/loongson64/smp.c                      |    1
>  arch/parisc/Kconfig                             |    1
>  arch/parisc/kernel/process.c                    |    4
>  arch/parisc/kernel/smp.c                        |    7
>  arch/riscv/Kconfig                              |    1
>  arch/riscv/include/asm/smp.h                    |    2
>  arch/riscv/kernel/cpu-hotplug.c                 |   14
>  arch/x86/Kconfig                                |   45 --
>  arch/x86/include/asm/apic.h                     |    5
>  arch/x86/include/asm/cpu.h                      |    5
>  arch/x86/include/asm/cpumask.h                  |    5
>  arch/x86/include/asm/processor.h                |    1
>  arch/x86/include/asm/realmode.h                 |    3
>  arch/x86/include/asm/sev-common.h               |    3
>  arch/x86/include/asm/smp.h                      |   26 -
>  arch/x86/include/asm/topology.h                 |   23 -
>  arch/x86/include/asm/tsc.h                      |    2
>  arch/x86/kernel/acpi/sleep.c                    |    9
>  arch/x86/kernel/apic/apic.c                     |   22 -
>  arch/x86/kernel/callthunks.c                    |    4
>  arch/x86/kernel/cpu/amd.c                       |    2
>  arch/x86/kernel/cpu/cacheinfo.c                 |   21
>  arch/x86/kernel/cpu/common.c                    |   50 --
>  arch/x86/kernel/cpu/topology.c                  |    3
>  arch/x86/kernel/head_32.S                       |   14
>  arch/x86/kernel/head_64.S                       |  121 +++++
>  arch/x86/kernel/sev.c                           |    2
>  arch/x86/kernel/smp.c                           |    3
>  arch/x86/kernel/smpboot.c                       |  508 ++++++++----------------
>  arch/x86/kernel/topology.c                      |   98 ----
>  arch/x86/kernel/tsc.c                           |   20
>  arch/x86/kernel/tsc_sync.c                      |   36 -
>  arch/x86/power/cpu.c                            |   37 -
>  arch/x86/realmode/init.c                        |    3
>  arch/x86/realmode/rm/trampoline_64.S            |   27 +
>  arch/x86/xen/enlighten_hvm.c                    |   11
>  arch/x86/xen/smp_hvm.c                          |   16
>  arch/x86/xen/smp_pv.c                           |   56 +-
>  drivers/acpi/processor_idle.c                   |    4
>  include/linux/cpu.h                             |    4
>  include/linux/cpuhotplug.h                      |   17
>  kernel/cpu.c                                    |  397 +++++++++++++++++-
>  kernel/smp.c                                    |    2
>  kernel/smpboot.c                                |  163 -------
>  62 files changed, 953 insertions(+), 976 deletions(-)
>

I smoke-tested several Linux guest configurations running on Hyper-V,
using the "kernel/git/tglx/devel.git hotplug" tree as updated on April 26th.
No functional issues, but encountered one cosmetic issue (details below).

Configurations tested:
*  16 vCPUs and 32 vCPUs
*  1 NUMA node and 2 NUMA nodes
*  Parallel bring-up enabled and disabled via kernel boot line
*  "Normal" VMs and SEV-SNP VMs running with a paravisor on Hyper-V.
    This config can use parallel bring-up because most of the SNP-ness is
    hidden in the paravisor.  I was glad to see this work properly.

There's not much difference in performance with and without parallel
bring-up on the 32 vCPU VM.   Without parallel, the time is about 26
milliseconds.  With parallel, it's about 24 ms.   So bring-up is already
fast in the virtual environment.

The cosmetic issue is in the dmesg log, and arises because Hyper-V
enumerates SMT CPUs differently from many other environments.  In
a Hyper-V guest, the SMT threads in a core are numbered as <even, odd>
pairs.  Guest CPUs #0 & #1 are SMT threads in a core, as are #2 & #3, etc.  With
parallel bring-up, here's the dmesg output:

[    0.444345] smp: Bringing up secondary CPUs ...
[    0.445139] .... node  #0, CPUs:    #2  #4  #6  #8 #10 #12 #14 #16 #18 #20 #22 #24 #26 #28 #30
[    0.454112] x86: Booting SMP configuration:
[    0.456035]       #1  #3  #5  #7  #9 #11 #13 #15 #17 #19 #21 #23 #25 #27 #29 #31
[    0.466120] smp: Brought up 1 node, 32 CPUs
[    0.467036] smpboot: Max logical packages: 1
[    0.468035] smpboot: Total of 32 processors activated (153240.06 BogoMIPS)

The function announce_cpu() is specifically testing for CPU #1 to output the
"Booting SMP configuration" message.  In a Hyper-V guest, CPU #1 is the second
SMT thread in a core, so it isn't started until all the even-numbered CPUs are
started.
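The interaction described above can be sketched in a few lines. This is a hypothetical model, not the real announce_cpu() from smpboot.c; the bringup order and CPU count are made up to mirror the Hyper-V <even, odd> sibling numbering:

```c
#include <assert.h>
#include <stdio.h>

/* Illustrative sketch: the banner is tied to seeing CPU #1, but with
 * Hyper-V-style numbering the even SMT siblings (#2, #4, ...) are
 * brought up first, so the banner only fires mid-sequence. */
static int banner_at = -1;   /* position in bringup order where banner fired */

static void announce_cpu(int cpu, int position)
{
    if (cpu == 1 && banner_at < 0)
        banner_at = position;     /* stands in for printing the banner */
    printf(" #%d", cpu);
}

/* Simulate a hypothetical 8-CPU guest (CPU 0 is the boot CPU):
 * even-numbered siblings come up first, then the odd ones. */
int simulate_banner_position(void)
{
    int order[] = { 2, 4, 6, 1, 3, 5, 7 };
    banner_at = -1;
    for (int i = 0; i < 7; i++)
        announce_cpu(order[i], i);
    printf("\n");
    return banner_at;   /* banner appears only after #2, #4, #6 */
}
```

With this ordering the banner fires at position 3, after three CPUs have already been announced, which matches the dmesg output shown above.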

I don't know if this cosmetic issue is worth fixing, but I thought I'd point it out.

In any case,

Tested-by: Michael Kelley <mikelley@microsoft.com>


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 14:59:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 14:59:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527032.819188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps35P-0007J6-6U; Thu, 27 Apr 2023 14:59:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527032.819188; Thu, 27 Apr 2023 14:59:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps35P-0007Iz-3P; Thu, 27 Apr 2023 14:59:07 +0000
Received: by outflank-mailman (input) for mailman id 527032;
 Thu, 27 Apr 2023 14:59:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ps35N-0007Im-U9; Thu, 27 Apr 2023 14:59:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ps35N-0007Cn-Mf; Thu, 27 Apr 2023 14:59:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ps35N-0000HY-BU; Thu, 27 Apr 2023 14:59:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ps35N-0006sw-B0; Thu, 27 Apr 2023 14:59:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e/5hIUi7M1LdPINTdvhE1+OUTO027fgXwQFv9rmJhcY=; b=17AsG5KbVkRDKE7AQSfDBBqEGl
	nYIjY0r9JEzqgzh8XQUAU5rZlBVbsUaZQehoxLuKcpl6xzCi9mE72HB0PrI7oXVbsXVodWoxXh+3y
	B45va0fZKmJsd/N9trTM9nh+z9FuoiV+6T6NCbX2gPqVp8k8aSTIr5ewOpTJ9NBlnsQ4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180431-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180431: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c3f9aa8e488db330197c9217e38555f6772e8f07
X-Osstest-Versions-That:
    qemuu=a14b8206c5edcbbad1c71256ea9b44c3b382a9f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Apr 2023 14:59:05 +0000

flight 180431 qemu-mainline real [real]
flight 180447 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180431/
http://logs.test-lab.xenproject.org/osstest/logs/180447/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail pass in 180447-retest
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180447-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop       fail blocked in 180412
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180412
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180412
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180412
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180412
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180412
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180412
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180412
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                c3f9aa8e488db330197c9217e38555f6772e8f07
baseline version:
 qemuu                a14b8206c5edcbbad1c71256ea9b44c3b382a9f5

Last test of basis   180412  2023-04-25 15:56:04 Z    1 days
Testing same since   180431  2023-04-26 11:38:43 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Berrangé <berrange@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stefan Hajnoczi <stefanha@redhat.com>
  Wang Liang <wangliangzz@inspur.com>
  Wilfred Mallawa <wilfred.mallawa@wdc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   a14b8206c5..c3f9aa8e48  c3f9aa8e488db330197c9217e38555f6772e8f07 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 16:32:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 16:32:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527042.819204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps4Xv-00019P-Vj; Thu, 27 Apr 2023 16:32:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527042.819204; Thu, 27 Apr 2023 16:32:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps4Xv-00019I-Sx; Thu, 27 Apr 2023 16:32:39 +0000
Received: by outflank-mailman (input) for mailman id 527042;
 Thu, 27 Apr 2023 16:32:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ps4Xu-00019C-LZ
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 16:32:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ps4Xt-0001SH-5u; Thu, 27 Apr 2023 16:32:37 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=[192.168.9.197]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ps4Xs-0007sf-V4; Thu, 27 Apr 2023 16:32:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=N9FCVVVUvS/22Enw+WdvB2BsaYVQCQjox23NKf05Z4c=; b=dmhIiqgLcQQEI6eQsDJ6eCw3vX
	/ZfA1WusHjRNApZCdUg6g/2FESpZ1X9V3rNaVqRMlHnlgr1xEDxRkuvBJs5r6c4GScS9L59JJZaal
	nU87PqlT6+PYouXAL0vFj1XWtN/2qFH/KHvXjAI6dP25yYkRu8PLVHoFruXpb5QcT4w4=;
Message-ID: <89cbdd73-29e5-f158-4058-668048a0ca60@xen.org>
Date: Thu, 27 Apr 2023 17:32:34 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 04/13] tools/xenstore: add framework to commit
 accounting data on success only
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-5-jgross@suse.com>
 <e8003d2d-5557-f5d9-38ca-793c30637e61@xen.org>
 <cb57a654-a766-5354-a122-989f43b440d5@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <cb57a654-a766-5354-a122-989f43b440d5@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 26/04/2023 08:08, Juergen Gross wrote:
> On 25.04.23 19:52, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 05/04/2023 08:03, Juergen Gross wrote:
>>> Instead of modifying accounting data and undo those modifications in
>>> case of an error during further processing, add a framework for
>>> collecting the needed changes and commit them only when the whole
>>> operation has succeeded.
>>>
>>> This scheme can reuse large parts of the per transaction accounting.
>>> The changed_domain handling can be reused, but the array size of the
>>> accounting data should be possible to be different for both use cases.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V3:
>>> - call acc_commit() earlier (Julien Grall)
>>> - add assert() to acc_commit()
>>> - use fixed sized acc array in struct changed_domain (Julien Grall)
>>> ---
>>>   tools/xenstore/xenstored_core.c   |  9 ++++--
>>>   tools/xenstore/xenstored_core.h   |  3 ++
>>>   tools/xenstore/xenstored_domain.c | 53 ++++++++++++++++++++++++++++++-
>>>   tools/xenstore/xenstored_domain.h |  5 ++-
>>>   4 files changed, 66 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/tools/xenstore/xenstored_core.c 
>>> b/tools/xenstore/xenstored_core.c
>>> index 3ca68681e3..84335f5f3d 100644
>>> --- a/tools/xenstore/xenstored_core.c
>>> +++ b/tools/xenstore/xenstored_core.c
>>> @@ -1023,6 +1023,9 @@ static void send_error(struct connection *conn, 
>>> int error)
>>>               break;
>>>           }
>>>       }
>>> +
>>> +    acc_drop(conn);
>>> +
>>>       send_reply(conn, XS_ERROR, xsd_errors[i].errstring,
>>>                 strlen(xsd_errors[i].errstring) + 1);
>>>   }
>>> @@ -1034,6 +1037,9 @@ void send_reply(struct connection *conn, enum 
>>> xsd_sockmsg_type type,
>>>       assert(type != XS_WATCH_EVENT);
>>> +    conn->in = NULL;
>>
>> AFAIU, you are setting conn->in to NULL in order to please..
>>
>>> +    acc_commit(conn);
>>
>> ... this call. However in case of an error like...
>>
>>> +
>>>       if ( len > XENSTORE_PAYLOAD_MAX ) {
>>>           send_error(conn, E2BIG);
>>
>> ... here, send_reply() will be called again. But the error will not be 
>> set because conn->in is NULL.
>>
>> So I think you want to restore conn->in rewrite acc_commit(). The 
>> ordering would also deserve an explanation in a comment.
> 
> Just to make sure I understand you correctly (I have some difficulties
> parsing "So I think you want to restore conn->in rewrite acc_commit()."
> completely):

Hmmm... Not sure why I wrote "rewrite". I meant to say that you want 
to restore conn->in after acc_commit() is called.

> 
> You are suggesting to move setting conn->in to NULL to acc_commit() and
> to restore it before returning? I'm fine with that.

Either that or what I wrote above. It depends on whether you expect 
other callers to be in the same situation.
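A minimal model of that proposal, i.e. acc_commit() clearing conn->in itself and restoring it before returning so later error paths still see the request, might look like this. The structures and the helper commit_pending_accounting() are simplified stand-ins, not the real xenstored types:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the real xenstored structures. */
struct buffered_data { int len; };

struct connection {
    struct buffered_data *in;   /* request currently being processed */
};

static void commit_pending_accounting(struct connection *conn)
{
    /* The real code would walk conn->acc_list and apply the deltas;
     * it relies on conn->in being NULL while doing so. */
    (void)conn;
}

void acc_commit(struct connection *conn)
{
    struct buffered_data *in = conn->in;

    conn->in = NULL;                  /* accounting treats request as done */
    commit_pending_accounting(conn);
    conn->in = in;                    /* restore so send_error() still works */
}
```

The point of the save/restore is exactly the case discussed above: if send_reply() later hits the E2BIG path and calls send_error(), conn->in must still be set for the error to be attributed to the request.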

> 
>>
>>>           return;
>>> @@ -1059,8 +1065,6 @@ void send_reply(struct connection *conn, enum 
>>> xsd_sockmsg_type type,
>>>           }
>>>       }
>>> -    conn->in = NULL;
>>> -
>>>       /* Update relevant header fields and fill in the message body. */
>>>       bdata->hdr.msg.type = type;
>>>       bdata->hdr.msg.len = len;
>>> @@ -2195,6 +2199,7 @@ struct connection *new_connection(const struct 
>>> interface_funcs *funcs)
>>>       new->is_stalled = false;
>>>       new->transaction_started = 0;
>>>       INIT_LIST_HEAD(&new->out_list);
>>> +    INIT_LIST_HEAD(&new->acc_list);
>>>       INIT_LIST_HEAD(&new->ref_list);
>>>       INIT_LIST_HEAD(&new->watches);
>>>       INIT_LIST_HEAD(&new->transaction_list);
>>> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
>>> index c59b06551f..1f811f38cb 100644
>>> --- a/tools/xenstore/xenstored_core.h
>>> +++ b/tools/xenstore/xenstored_core.h
>>> @@ -139,6 +139,9 @@ struct connection
>>>       struct list_head out_list;
>>>       uint64_t timeout_msec;
>>> +    /* Not yet committed accounting data (valid if in != NULL). */
>>> +    struct list_head acc_list;
>>> +
>>>       /* Referenced requests no longer pending. */
>>>       struct list_head ref_list;
>>> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
>>> index 30fb9acec6..144cbafb73 100644
>>> --- a/tools/xenstore/xenstored_domain.c
>>> +++ b/tools/xenstore/xenstored_domain.c
>>> @@ -91,6 +91,8 @@ struct domain
>>>       bool wrl_delay_logged;
>>>   };
>>> +#define ACC_CHD_N (ACC_TR_N < ACC_REQ_N ? ACC_REQ_N : ACC_TR_N)
>>
>> Both ACC_TR_N and ACC_REQ_N are fixed. Can you explain why we need 
>> this magic?
>>
>> Related, wouldn't it be better to define it in the enum?
> 
> I can do that, of course. I just didn't want to make the enum even more
> complex. :-)

My concern is that there is a disconnect between the enum and this 
macro. What would happen if we grew the enum past ACC_REQ_N/ACC_TR_N? 
Would it be necessary to update ACC_CHD_N?
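One way to remove that disconnect is to derive the maximum inside the 
enum itself, so ACC_CHD_N cannot go stale when either count grows. A 
hedged sketch (the enumerator values here are made up; only the 
in-place max derivation mirrors what the macro computes):

```c
#include <assert.h>

/* Hypothetical counts standing in for the real accounting enum. */
enum accitem {
    ACC_REQ_N = 3,   /* # of per-request accounting entries (made up) */
    ACC_TR_N  = 5,   /* # of per-transaction accounting entries (made up) */
    /* Derived in place: tracks whichever count above is larger. */
    ACC_CHD_N = ACC_TR_N > ACC_REQ_N ? ACC_TR_N : ACC_REQ_N,
};
```

Since enumerators are usable later in the same enum body, the compiler 
recomputes ACC_CHD_N whenever ACC_REQ_N or ACC_TR_N changes, with no 
separate macro to keep in sync.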

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 17:38:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 17:38:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527046.819213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps5Z8-0007Uc-Qa; Thu, 27 Apr 2023 17:37:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527046.819213; Thu, 27 Apr 2023 17:37:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps5Z8-0007UV-O0; Thu, 27 Apr 2023 17:37:58 +0000
Received: by outflank-mailman (input) for mailman id 527046;
 Thu, 27 Apr 2023 17:37:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ps5Z7-0007UP-Fz
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 17:37:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ps5Z7-0002vV-69; Thu, 27 Apr 2023 17:37:57 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=[192.168.9.197]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ps5Z6-0007NK-VB; Thu, 27 Apr 2023 17:37:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=80QuypVd/flItxQaXZXwpsaWYPqJBcbdQNMI2g0RKPA=; b=ce39Es+cd7VCBVDRArsyMTPMrm
	KQKitOW6tUy0EOOKg72bescPnUS3TRjpc5sNaJfTsNMBsQt/JibCuhOckpc8ua5GECvoOL28Xn56y
	LrjF3ZliXmYwRWiJhF/kaQmBxubp9Ui0XJF2aUjNjbhkBFuZ9bFFQW3hmIjZwHpWZ+LA=;
Message-ID: <d9c6df98-2b38-4a4b-8228-04ce072b3b56@xen.org>
Date: Thu, 27 Apr 2023 18:37:54 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 00/13] tools/xenstore: rework internal accounting
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230405070349.25293-1-jgross@suse.com>
 <6807cae6-16e1-b041-5492-15eda6732275@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <6807cae6-16e1-b041-5492-15eda6732275@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 26/04/2023 08:19, Juergen Gross wrote:
> On 05.04.23 09:03, Juergen Gross wrote:
>> This series reworks the Xenstore internal accounting to use a uniform
>> generic framework. It is adding some additional useful diagnostic
>> information, like accounting trace and max. per-domain and global quota
>> values seen.
>>
>> Changes in V2:
>> - added patch 1 (leftover from previous series)
>> - rebase
>>
>> Changes in V3:
>> - addressed comments
>>
>> Changes in V4:
>> - fixed patch 3
> 
> Another thought for this series and followup one:
> 
> Do we want to keep current coding style in tools/xenstore (basically
> Linux kernel style), or do we want to switch to Xen style instead?

I am a bit split on this one. I don't particularly like the Linux 
coding style, but it has the advantage of being well-documented 
(compared to the Xen one).

May I ask what would be the reason to switch?

> 
> If a switch to Xen style is preferred (I do prefer that switch), I'd
> like to suggest that I do a rework of this series and the followup one
> to use the Xen style for new or moved functions.

I think this is a bad idea, because it would make it difficult for a 
developer/reviewer to know the coding style of a given function.

At least in my workflow, it would also mean that I need to open the 
file twice with different settings (e.g. soft vs hard tabs).

> 
> A more radical approach would be to do a large style switch series
> after the two series, but I'm hesitant as this would affect backports
> rather badly.

In general, I would agree with that. But, after your work, I am under 
the impression that Xenstored will become quite different. So I am not 
convinced we will be able to backport a lot of patches without 
significant rework.

Therefore, converting all the files in one pass may not be too bad 
(assuming we agree on switching to the new coding style).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 18:07:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 18:07:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527053.819224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps61q-0002aR-8j; Thu, 27 Apr 2023 18:07:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527053.819224; Thu, 27 Apr 2023 18:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps61q-0002aK-62; Thu, 27 Apr 2023 18:07:38 +0000
Received: by outflank-mailman (input) for mailman id 527053;
 Thu, 27 Apr 2023 18:07:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S+EI=AS=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1ps61p-0002aE-Ah
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 18:07:37 +0000
Received: from mail-wr1-x42a.google.com (mail-wr1-x42a.google.com
 [2a00:1450:4864:20::42a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 62666bf2-e526-11ed-8611-37d641c3527e;
 Thu, 27 Apr 2023 20:07:35 +0200 (CEST)
Received: by mail-wr1-x42a.google.com with SMTP id
 ffacd0b85a97d-2f58125b957so8085521f8f.3
 for <xen-devel@lists.xenproject.org>; Thu, 27 Apr 2023 11:07:34 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 f12-20020adfdb4c000000b002f9ff443184sm19071096wrj.24.2023.04.27.11.07.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Apr 2023 11:07:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62666bf2-e526-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682618853; x=1685210853;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:subject:cc:to:from:date:message-id:from:to
         :cc:subject:date:message-id:reply-to;
        bh=7DibGKgOKBzho3uTV59K1AKg5KmQUAReKXhR2ASaC9g=;
        b=XuFZeiZUF+dtcgpmPaqmtsI14m29aSoBkLupeB1jmDcK3qHtRWGFOpIE7fjZjmD3He
         hOn+tCde9cgPHcq2Q6vGsMpINFJpWuY6VYdjZg2Y4ldk52i+sj311CtGKljUAst38iRN
         LHvZ90QFX2Xes1uh/cGYNOuOAnbDEgKQRFiKA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682618853; x=1685210853;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:subject:cc:to:from:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=7DibGKgOKBzho3uTV59K1AKg5KmQUAReKXhR2ASaC9g=;
        b=FVrcCcmjQ9O6HMEBVSY6UeW5D59dokLq/GXCfC3OpLlSD9tEAGXSEB78yyHUPgkylv
         CDWoRcIanC3HaqP+LfGnqRheRjxJa4mjpor+ZJI5EgHvBWUoAZtH9kYyUnezUQlTLApa
         uJZgDDsiwZxOgdAEeRsyv9AN8NsQ5m6CJcAn7+Zwq2kRDbOqOP3asJ3GVRvyJO/GS8A9
         g+j2XU9fO2IGFvEaYjucCymfJTdMKZCWGseBp/GbGhD3f3seM7hloA7JV5B8RGCrP2oe
         x1tCLwPaFNxVrHKqp6rIDdGK7vxMO/Vxs/dckjR+sRQMJ5Mikr+eQzsiRTUvsqMbSY8i
         GDTA==
X-Gm-Message-State: AC+VfDxHfrXGlvDAYn0Zb9F/ab2tyn9mVOvvGAeT6m+Zs/6YGGEMfrJv
	nbdrqUPLEfWO6nXOD9iVZUS20A==
X-Google-Smtp-Source: ACHHUZ4wWbfDC5vU2SS0087aAFjY++P0Vc9HNEa5//25qFtxlLzogYujIrCKPyGkQPeq9Z+dX9rO1g==
X-Received: by 2002:adf:f810:0:b0:2c5:5a65:79a0 with SMTP id s16-20020adff810000000b002c55a6579a0mr2149125wrp.53.1682618853615;
        Thu, 27 Apr 2023 11:07:33 -0700 (PDT)
Message-ID: <644ab9e4.df0a0220.d3920.fced@mx.google.com>
X-Google-Original-Message-ID: <ZEq54R4z/PWcg3Kg@EMEAENGAAD19049.>
Date: Thu, 27 Apr 2023 19:07:29 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 6/7] tools: Use new xc function for some
 xc_domain_getinfo calls
References: <20230426145932.3340-1-alejandro.vallejo@cloud.com>
 <20230426145932.3340-7-alejandro.vallejo@cloud.com>
 <91d37778-ec77-6716-61e9-d47b0517508a@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <91d37778-ec77-6716-61e9-d47b0517508a@citrix.com>

On Thu, Apr 27, 2023 at 01:35:18PM +0100, Andrew Cooper wrote:
> 
> > +    xc_domaininfo_t di;
> >      unsigned int nr_leaves, nr_msrs;
> >      uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
> >      /*
> > @@ -291,13 +292,13 @@ static int xc_cpuid_xend_policy(
> >      xen_cpuid_leaf_t *host = NULL, *def = NULL, *cur = NULL;
> >      unsigned int nr_host, nr_def, nr_cur;
> >  
> > -    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
> > -         di.domid != domid )
> > +    if ( xc_domain_getinfo_single(xch, domid, &di) < 0 )
> >      {
> >          ERROR("Failed to obtain d%d info", domid);
> >          rc = -ESRCH;
> 
> Now that xc_domain_getinfo_single() has a sane return value, you want to
> set it in the if(), and not override to ESRCH here.
> 
> These two comments repeat several other times.

That override shouldn't be done, true. Removed in v2.

That said, looking again through the callers, it appears some/many of 
them rely on PERROR() to print the error code to stderr on failure. At 
the bottom of the invocation chain there is xencall1(), which is 
defined in xencall.h to return -1 on error and set errno.

I think I'll also modify the error handlers that this patch touches to 
improve their diagnostics, i.e. ERROR() -> PERROR(), and also print the 
domid that failed to be found where it isn't printed yet. And I'll 
change patch 2 to define the new wrapper as returning 0 or -1 
exclusively, rather than forwarding whatever comes from below.
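A hedged sketch of that convention (a stub stands in for the real 
xc_domain_getinfo_single(); everything here is illustrative, not the 
libxc code): the wrapper returns 0 or -1 with errno set, and the 
caller keeps the wrapper's rc instead of overriding it with ESRCH:

```c
#include <assert.h>
#include <errno.h>

/* Stub standing in for xc_domain_getinfo_single(): 0 on success,
 * -1 with errno set on failure (the convention discussed above). */
static int getinfo_single_stub(int domid)
{
    if (domid < 0) {
        errno = ESRCH;          /* no such domain */
        return -1;
    }
    return 0;
}

/* Caller pattern: record rc in the if() and propagate it as-is. */
static int caller(int domid)
{
    int rc = getinfo_single_stub(domid);

    if (rc < 0) {
        /* real code would do: PERROR("Failed to obtain d%d info", domid); */
        return rc;              /* errno is already set for the caller */
    }
    return 0;
}
```

Because errno is set at the point of failure, PERROR()-style reporting 
works at every level without each caller inventing its own error code.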

> 
> > diff --git a/tools/libs/guest/xg_dom_boot.c b/tools/libs/guest/xg_dom_boot.c
> > index 263a3f4c85..59b4d641c9 100644
> > --- a/tools/libs/guest/xg_dom_boot.c
> > +++ b/tools/libs/guest/xg_dom_boot.c
> > @@ -164,7 +164,7 @@ void *xc_dom_boot_domU_map(struct xc_dom_image *dom, xen_pfn_t pfn,
> >  
> >  int xc_dom_boot_image(struct xc_dom_image *dom)
> >  {
> > -    xc_dominfo_t info;
> > +    xc_domaininfo_t info;
> >      int rc;
> >  
> >      DOMPRINTF_CALLED(dom->xch);
> > @@ -174,21 +174,13 @@ int xc_dom_boot_image(struct xc_dom_image *dom)
> >          return rc;
> >  
> >      /* collect some info */
> > -    rc = xc_domain_getinfo(dom->xch, dom->guest_domid, 1, &info);
> > +    rc = xc_domain_getinfo_single(dom->xch, dom->guest_domid, &info);
> >      if ( rc < 0 )
> >      {
> >          xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
> >                       "%s: getdomaininfo failed (rc=%d)", __FUNCTION__, rc);
> >          return rc;
> 
> This needs to change to -1, or you've broken the error convention of this
> function. (Yes, libxc is a giant mess of error handling...)

I think you meant xc_exchange_page() instead? I've also modified this one so
errno lands in the error message.

> > diff --git a/tools/vchan/vchan-socket-proxy.c b/tools/vchan/vchan-socket-proxy.c
> > index e1d959c6d1..9c4c336b03 100644
> > --- a/tools/vchan/vchan-socket-proxy.c
> > +++ b/tools/vchan/vchan-socket-proxy.c
> > @@ -254,12 +254,12 @@ static struct libxenvchan *connect_vchan(int domid, const char *path) {
> >          if (ctrl)
> >              break;
> >  
> > -        ret = xc_domain_getinfo(xc, domid, 1, &dominfo);
> > +        ret = xc_domain_getinfo_single(xc, domid, &dominfo);
> >          /* break the loop if domain is definitely not there anymore, but
> >           * continue if it is or the call failed (like EPERM) */
> >          if (ret == -1 && errno == ESRCH)
> 
> Oh wow... so this bit of vchan was written expecting sane semantics out
> of xc_domain_getinfo() in the first place...
> 
> This needs adjusting too because of the -1/errno -> -errno change.
> 
> ~Andrew

With the code changed to assume -1/errno (so PERROR() can still be 
used), this line will be correct and can be left as-is. Does this all 
sound OK?
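For reference, the vchan break condition stays valid unchanged under 
-1/errno semantics; a small sketch with a stub in place of the real 
call (the stub and function names are illustrative only):

```c
#include <assert.h>
#include <errno.h>

/* Stub standing in for the getinfo call inside connect_vchan(). */
static int getinfo_stub(int domid)
{
    (void)domid;
    errno = ESRCH;              /* pretend the domain is gone */
    return -1;
}

/* Mirrors the loop's exit test: keep retrying on success or transient
 * errors (e.g. EPERM); stop only when the domain is definitely gone. */
static int domain_definitely_gone(int domid)
{
    int ret = getinfo_stub(domid);

    return ret == -1 && errno == ESRCH;
}
```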

Cheers,
Alejandro


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 18:43:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 18:43:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527056.819234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps6a6-0006uU-1B; Thu, 27 Apr 2023 18:43:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527056.819234; Thu, 27 Apr 2023 18:43:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps6a5-0006uN-Ui; Thu, 27 Apr 2023 18:43:01 +0000
Received: by outflank-mailman (input) for mailman id 527056;
 Thu, 27 Apr 2023 18:43:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ps6a5-0006uD-7S; Thu, 27 Apr 2023 18:43:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ps6a4-0004Qc-O4; Thu, 27 Apr 2023 18:43:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ps6a4-0005XT-6R; Thu, 27 Apr 2023 18:43:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ps6a4-0006Dl-5T; Thu, 27 Apr 2023 18:43:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=f6NKi0NgW3H27zRrCh0PqJASOWgvc3ivUPubVCJRnVc=; b=kBVKs1mmIEKZAaENBioSRMB61W
	nkCgK2vOmvcRUX2BxY0GMKSi4aDH+pRra7m5KvfjqSiojtRJ2EIyY6hmKdCyLHTetZb4z6XBAyieX
	jBwmzwZAeQ+QmNYfIGfCWaiOANsAGgWXhYmG98tcMcB4ry0aSO+OHbAzFOD0m7FaRGAs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180433-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180433: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-examine:reboot:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=18a36b4a9b088875486cfe33a2d4a8ae7eb4ab47
X-Osstest-Versions-That:
    xen=c6c8c0808f908911a38bc330cdc7a26ac4bf6d51
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Apr 2023 18:43:00 +0000

flight 180433 xen-unstable real [real]
flight 180451 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180433/
http://logs.test-lab.xenproject.org/osstest/logs/180451/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-examine      8 reboot              fail pass in 180451-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180381
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180401
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180401
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180401
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180401
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180401
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180401
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180401
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180401
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180401
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180401
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180401
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180401
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  18a36b4a9b088875486cfe33a2d4a8ae7eb4ab47
baseline version:
 xen                  c6c8c0808f908911a38bc330cdc7a26ac4bf6d51

Last test of basis   180401  2023-04-25 01:51:51 Z    2 days
Failing since        180416  2023-04-25 18:06:58 Z    2 days    2 attempts
Testing same since   180433  2023-04-26 14:00:59 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   c6c8c0808f..18a36b4a9b  18a36b4a9b088875486cfe33a2d4a8ae7eb4ab47 -> master


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 20:07:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 20:07:38 +0000
Message-ID: <29317463-a5de-2a5d-e217-498d3250ceae@citrix.com>
Date: Thu, 27 Apr 2023 21:07:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2] xen/misra: xen-analysis.py: fix return error on
 PhaseExceptions
Content-Language: en-GB
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com
References: <20230427132559.14712-1-luca.fancellu@arm.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230427132559.14712-1-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 27/04/2023 2:25 pm, Luca Fancellu wrote:
> Currently the script's return code is 0 even if an exception is
> raised, because the return code is set only if the exception
> object has the errorcode member.
>
> Fix this by returning the errorcode member when it exists, and a
> generic non-zero value otherwise.
>
> Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

> Change-Id: I1b76b8fa4668bef49da3282339fca3052e3379cd

although this doesn't look like it should be here.  I've stripped it.

~Andrew

> ---
> Changes from v1:
>  - use getattr() (Andrew)
> ---
>  xen/scripts/xen-analysis.py | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/xen/scripts/xen-analysis.py b/xen/scripts/xen-analysis.py
> index 8e50c27cd898..5e8f2910cd72 100755
> --- a/xen/scripts/xen-analysis.py
> +++ b/xen/scripts/xen-analysis.py
> @@ -26,8 +26,7 @@ def main(argv):
>              cppcheck_analysis.generate_cppcheck_report()
>      except PhaseExceptions as e:
>          print("ERROR: {}".format(e))
> -        if hasattr(e, "errorcode"):
> -            ret_code = e.errorcode
> +        ret_code = getattr(e, "errorcode", 1)
>      finally:
>          if settings.step_clean_analysis:
>              cppcheck_analysis.clean_analysis_artifacts()
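For readers following the review, here is a minimal standalone sketch of
the getattr() idiom the patch adopts.  PhaseExceptions comes from the
script under discussion; the BuildError subclass and its errorcode value
are illustrative assumptions, not part of xen-analysis.py:

```python
class PhaseExceptions(Exception):
    """Base class for analysis-phase errors (as in xen-analysis.py)."""
    pass

class BuildError(PhaseExceptions):
    """Hypothetical subclass that carries an explicit errorcode."""
    def __init__(self, msg, errorcode=2):
        super().__init__(msg)
        self.errorcode = errorcode

def handle(exc):
    # getattr() collapses the hasattr-then-read pattern into one call:
    # return exc.errorcode if the attribute exists, else the fallback 1,
    # so the script always exits non-zero on failure.
    return getattr(exc, "errorcode", 1)

print(handle(BuildError("cppcheck failed")))       # exception carries errorcode
print(handle(PhaseExceptions("no errorcode set"))) # falls back to 1
```

Compared with the v1 hasattr() form, this guarantees a non-zero exit
status even for exception objects that never set errorcode.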



From xen-devel-bounces@lists.xenproject.org Thu Apr 27 20:28:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 20:28:47 +0000
Message-ID: <85cbaee4-bffd-6220-19b3-55f544f9f2ce@citrix.com>
Date: Thu, 27 Apr 2023 21:28:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 1/2] xenalyze: Handle start-of-day ->RUNNING transitions
Content-Language: en-GB
To: George Dunlap <george.dunlap@cloud.com>, xen-devel@lists.xenproject.org
Cc: Anthony Perard <anthony.perard@cloud.com>, Wei Liu <wl@xenproject.org>
References: <20230327161326.48851-1-george.dunlap@cloud.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230327161326.48851-1-george.dunlap@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
 =?utf-8?B?a0crb2Y3Sm5pQnlzV2tlcnhRS29tclVCNHQvSTZtQTkvb25Td0ZFZmhVdVNw?=
 =?utf-8?B?Uk12aGNkM04zZHNJbjgwRFZsT2diaUJtZUFwSTJuZU9ZM2FpdVFFV1V0Q2JR?=
 =?utf-8?B?aFgwZ3JPS0g3L3NTYTc2aEZvUUh3bkJWdkFjSkRWRFR3SkNsYkFITXJENzBL?=
 =?utf-8?B?RE4vZUJhTUw3THR0VWJ3THc1cVRKSUhZci8vM2UxY3FVU3JSK0JqNUZMakFI?=
 =?utf-8?B?cnZPY3hHcU9qbHlvQ3BacTE2dy9EV3YvL1NwNktkWkg0czJOK2dGbThEZloz?=
 =?utf-8?B?OVVtUEhTQlBYS3g5N2JwRnlPYUY0bURXWnpFdG82NFlvS1dwNCtWYkpSU2hi?=
 =?utf-8?B?dit3ZXkwQnh4WTB4U1pvTG5CZDlZemx1eVRpNzhSZEtsSU50VkEyaVMwekx6?=
 =?utf-8?B?NnpnVHBCQ3dLckhhRTNpMEc0aVNBcUZXVDgrSDhsWjBiVmUybm5GWXFCZkI0?=
 =?utf-8?B?cHZOMERKR0Q3cFU0SDRQQU1tM01XenVsaFBSTUlwWmlzaVpKcjdxdkc4c01k?=
 =?utf-8?B?NHgvKzVnbVV1K0gzUXZFblJLMTdTaVFXdE5Wc0FhaVZWbXpUVXMzTGNqWlJr?=
 =?utf-8?B?Q3c9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	RZ/K5O9xBrqw9il9KXyoiQnZAZpSi5+AR6spySwz0h1Cwf2RxYC+UsX71DEJBqEHnm3l4dPduj5BmGFkZBkNkV2OfTDvx2cL6GTe/7MsmoZGkwWsnPFplXDfEQmNNfJBJ4W04vQfDSqqrkFvPjCwB7CzRFHK+DxZqu5Nykwg30JPDohIpODIJGL3PCBDXHcY0q+j++BRVDDtPW0TnTbuXz1r8X36IAc3jtkIJ+wKqffmtZ56H7c6Lss9ICrzQoX7MhvV/bbJSR3pRj6gjLQ3XQ1N+Z1IlaVqsAayhbpuHJu8X9d15Tc05an8Hp3uDZAm8vIMAqvOEd2VD5/YCo0elT5RjKH+3hJFn2OioFp0/vX01Hx6VgEqKCo/1OH5MVwe7y++BOB+tP9tn7R46Gyiy1KV1eS/QN5VyXAbxQ2YPeylXMTDaGzeHre2CUKVxmVHq3UryiN0NTK3fL+VmCQVl8bUDUH7ln6u3ro2AXHy8l5CQeVtdTrwG8mKyBoohSQ3SFOeP7A5hCwjY8O0l/6uKr18kN4TTQSZbWCJnHctdHMQ+ZIT52Y+gdZA+upuIKpemPRgNOOSXNeOf3efEoMeQ9sHoIUstcz9jU3i3GOKk0njwqG4VmtljTGMdz9CHtg/rM+l6BN0gqSHvX8m0wToFgtJVWfLm/Z3jB+t8UALi1CCN1oWOnA4Cc6EKsRzbmfTKXmUNYE4eiM4t6kvbeCNIPIIhutuwG+w9A3aTrCJU9QdzLHyY/U0f4VZJLm9etVXkJHq0eCUoOrRW75puSRkYbltz0Ui+0O6lE0iaoGnyCFgAn40O7miOim+W5irU2vK1HBcaDRN5iQxwS7wjCRICg==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c5c42059-0abb-43af-d350-08db475df0a5
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Apr 2023 20:28:20.3880
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Sm7WWum4scBrLbKyYKD8DZmM9NWqYkR1srD1dGGsi6Gx7M1DRxV2FIa1DLpKa56poZXhM/qYSk99WA53G3FbTSWd9xDb/xlbX1J+Vyp06mo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5141

On 27/03/2023 5:13 pm, George Dunlap wrote:
> A recent xentrace highlighted an unhandled corner case in the vcpu
> "start-of-day" logic, if the trace starts after the last running ->
> non-running transition, but before the first non-running -> running
> transition.  Because start-of-day wasn't handled, vcpu_next_update()
> was expecting p->current to be NULL, and tripping out with the
> following error message when it wasn't:
>
> vcpu_next_update: FATAL: p->current not NULL! (d32768dv$p, runstate RUNSTATE_INIT)
>
> where 32768 is the DEFAULT_DOMAIN, and $p is the pcpu number.
>
> Instead of calling vcpu_start() piecemeal throughout
> sched_runstate_process(), call it at the top of the function if the
> vcpu in question is still in RUNSTATE_INIT, so that we can handle all
> the cases in one place.
>
> Sketch out at the top of the function all cases which we need to
> handle, and what to do in those cases.  Some transitions tell us where
> v is running; some transitions tell us about what is (or is not)
> running on p; some transitions tell us neither.
>
> If a transition tells us where v is now running, update its state;
> otherwise leave it in INIT, in order to avoid having to deal with TSC
> skew on start-up.
>
> If a transition tells us what is or is not running on p, update
> p->current (either to v or NULL).  Otherwise leave it alone.
>
> If neither, do nothing.
>
> Reifying those rules:
>
> - If we're continuing to run, set v to RUNNING, and use p->first_tsc
>   as the runstate time.
>
> - If we're starting to run, set v to RUNNING, and use ri->tsc as the
>   runstate time.
>
> - If v is being descheduled, leave v in the INIT state to avoid
>   dealing with TSC skew; but set p->current to NULL so that whatever
>   is scheduled next won't trigger the assert in vcpu_next_update().
>
> - If a vcpu is waking up (switching from one non-running state to
>   another non-running state), leave v in INIT, and p in whatever
>   state it's in (which may be the default domain, or some other vcpu
>   which has already run).
>
> While here, fix the comment above vcpu_start; it's called when the
> vcpu state is INIT, not when current is the default domain.
>
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>
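The four rules reified above can be sketched as a toy model (hypothetical, simplified types and names; the real xenalyze code tracks considerably more state per vcpu and pcpu):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model: RUNSTATE_INIT marks a vcpu whose location the trace
 * has not yet told us about. */
enum runstate { RUNSTATE_INIT, RUNSTATE_RUNNING, RUNSTATE_BLOCKED };

struct vcpu { enum runstate state; unsigned long long runstate_tsc; };
struct pcpu { struct vcpu *current; unsigned long long first_tsc; };

/* Handle one runstate transition for a vcpu still in RUNSTATE_INIT.
 * was_running: the transition's old state was "running". */
static void vcpu_start(struct pcpu *p, struct vcpu *v,
                       enum runstate new_state, int was_running,
                       unsigned long long tsc)
{
    if (new_state == RUNSTATE_RUNNING && was_running) {
        /* Continuing to run: use p->first_tsc as the runstate time. */
        v->state = RUNSTATE_RUNNING;
        v->runstate_tsc = p->first_tsc;
        p->current = v;
    } else if (new_state == RUNSTATE_RUNNING) {
        /* Starting to run: use the trace record's tsc. */
        v->state = RUNSTATE_RUNNING;
        v->runstate_tsc = tsc;
        p->current = v;
    } else if (was_running) {
        /* Being descheduled: leave v in INIT (avoid TSC skew), but
         * clear p->current so whatever runs next doesn't trip the
         * p->current != NULL assert in vcpu_next_update(). */
        p->current = NULL;
    }
    /* Waking up (non-running -> non-running): leave v and p alone. */
}
```

The point of the model is that only transitions which *say* something about v or p update them; everything else is deliberately a no-op.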

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 20:29:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 20:29:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527078.819299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps8En-0001uT-Ld; Thu, 27 Apr 2023 20:29:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527078.819299; Thu, 27 Apr 2023 20:29:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps8En-0001uM-IM; Thu, 27 Apr 2023 20:29:09 +0000
Received: by outflank-mailman (input) for mailman id 527078;
 Thu, 27 Apr 2023 20:29:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=81si=AS=citrix.com=prvs=47455b11e=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ps8Em-0001sv-Kr
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 20:29:08 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 271862c8-e53a-11ed-8611-37d641c3527e;
 Thu, 27 Apr 2023 22:29:06 +0200 (CEST)
Received: from mail-dm6nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Apr 2023 16:29:03 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM6PR03MB5130.namprd03.prod.outlook.com (2603:10b6:5:1e3::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.23; Thu, 27 Apr
 2023 20:29:01 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::a5a1:8cae:d45b:2030%4]) with mapi id 15.20.6319.033; Thu, 27 Apr 2023
 20:29:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <35e5f7aa-dded-d108-ba26-08aa68f09854@citrix.com>
Date: Thu, 27 Apr 2023 21:28:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 2/2] xenalyze: Basic TRC_HVM_EMUL handling
Content-Language: en-GB
To: George Dunlap <george.dunlap@cloud.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@cloud.com>
References: <20230327161326.48851-1-george.dunlap@cloud.com>
 <20230327161326.48851-2-george.dunlap@cloud.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230327161326.48851-2-george.dunlap@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 27/03/2023 5:13 pm, George Dunlap wrote:
> For now, mainly just do volume analysis and get rid of the warnings.
>
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 21:11:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 21:11:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527085.819309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps8tC-0007Mj-Q7; Thu, 27 Apr 2023 21:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527085.819309; Thu, 27 Apr 2023 21:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps8tC-0007Mc-NW; Thu, 27 Apr 2023 21:10:54 +0000
Received: by outflank-mailman (input) for mailman id 527085;
 Thu, 27 Apr 2023 21:10:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8ICJ=AS=bu.edu=alxndr@srs-se1.protection.inumbo.net>)
 id 1ps8tB-0007MU-An
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 21:10:53 +0000
Received: from esa3.hc2706-39.iphmx.com (esa3.hc2706-39.iphmx.com
 [68.232.154.118]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f9d7d0c6-e53f-11ed-8611-37d641c3527e;
 Thu, 27 Apr 2023 23:10:47 +0200 (CEST)
Received: from mail-qv1-f69.google.com ([209.85.219.69])
 by ob1.hc2706-39.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 27 Apr 2023 17:10:39 -0400
Received: by mail-qv1-f69.google.com with SMTP id
 6a1803df08f44-6164549243eso33233136d6.2
 for <xen-devel@lists.xenproject.org>; Thu, 27 Apr 2023 14:10:39 -0700 (PDT)
Received: from mozz.bu.edu (mozz.bu.edu. [128.197.127.33])
 by smtp.gmail.com with ESMTPSA id
 h11-20020a0cf44b000000b005fd79831ac7sm5655053qvm.84.2023.04.27.14.10.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Apr 2023 14:10:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Alexander Bulekov <alxndr@bu.edu>
To: qemu-devel@nongnu.org
Cc: Alexander Bulekov <alxndr@bu.edu>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Mauro Matteo Cascella <mcascell@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Laurent Vivier <lvivier@redhat.com>,
	Bandan Das <bsd@redhat.com>,
	"Edgar E . Iglesias" <edgar.iglesias@gmail.com>,
	Darren Kenny <darren.kenny@oracle.com>,
	Bin Meng <bin.meng@windriver.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	=?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Jon Maloy <jmaloy@redhat.com>,
	Siqi Chen <coc.cyqh@gmail.com>,
	Michael Tokarev <mjt@tls.msk.ru>,
	Paul Durrant <paul@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Amit Shah <amit@kernel.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	John Snow <jsnow@redhat.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Keith Busch <kbusch@kernel.org>,
	Klaus Jensen <its@irrelevant.dk>,
	Fam Zheng <fam@euphon.net>,
	Dmitry Fleytman <dmitry.fleytman@gmail.com>,
	"Gonglei (Arei)" <arei.gonglei@huawei.com>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs),
	qemu-block@nongnu.org (open list:virtio-blk),
	qemu-arm@nongnu.org (open list:i.MX31 (kzm)),
	qemu-ppc@nongnu.org (open list:New World (mac99))
Subject: [PATCH v10 4/8] hw: replace most qemu_bh_new calls with qemu_bh_new_guarded
Date: Thu, 27 Apr 2023 17:10:09 -0400
Message-Id: <20230427211013.2994127-5-alxndr@bu.edu>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230427211013.2994127-1-alxndr@bu.edu>
References: <20230427211013.2994127-1-alxndr@bu.edu>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This protects devices from bh->mmio reentrancy issues.
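The guard's idea is simply to refuse to re-enter a callback that is already running on the same device. A minimal sketch (hypothetical, simplified names; QEMU's real MemReentrancyGuard lives in DeviceState and is checked by the bh and memory-dispatch code, not by the callback itself):

```c
#include <stdbool.h>

/* Simplified model: one guard flag per device. */
typedef struct { bool engaged_in_io; } MemReentrancyGuard;

static int bh_runs;

/* A guarded bottom half: if the guard is already engaged (i.e. we
 * are being re-entered from within this device's own I/O path, e.g.
 * a DMA write landing on the device's MMIO region), bail out. */
static void guarded_bh(MemReentrancyGuard *g)
{
    if (g->engaged_in_io)
        return;                 /* re-entrant call suppressed */
    g->engaged_in_io = true;
    bh_runs++;                  /* ... actual device work here ... */
    g->engaged_in_io = false;
}
```

This is why the patch below only has to thread a `&...->mem_reentrancy_guard` argument through each `qemu_bh_new`/`aio_bh_new` call site: the suppression logic itself lives in one place.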

Thanks: Thomas Huth <thuth@redhat.com> for diagnosing OS X test failure.
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Thomas Huth <thuth@redhat.com>
---
 hw/9pfs/xen-9p-backend.c        | 5 ++++-
 hw/block/dataplane/virtio-blk.c | 3 ++-
 hw/block/dataplane/xen-block.c  | 5 +++--
 hw/char/virtio-serial-bus.c     | 3 ++-
 hw/display/qxl.c                | 9 ++++++---
 hw/display/virtio-gpu.c         | 6 ++++--
 hw/ide/ahci.c                   | 3 ++-
 hw/ide/ahci_internal.h          | 1 +
 hw/ide/core.c                   | 4 +++-
 hw/misc/imx_rngc.c              | 6 ++++--
 hw/misc/macio/mac_dbdma.c       | 2 +-
 hw/net/virtio-net.c             | 3 ++-
 hw/nvme/ctrl.c                  | 6 ++++--
 hw/scsi/mptsas.c                | 3 ++-
 hw/scsi/scsi-bus.c              | 3 ++-
 hw/scsi/vmw_pvscsi.c            | 3 ++-
 hw/usb/dev-uas.c                | 3 ++-
 hw/usb/hcd-dwc2.c               | 3 ++-
 hw/usb/hcd-ehci.c               | 3 ++-
 hw/usb/hcd-uhci.c               | 2 +-
 hw/usb/host-libusb.c            | 6 ++++--
 hw/usb/redirect.c               | 6 ++++--
 hw/usb/xen-usb.c                | 3 ++-
 hw/virtio/virtio-balloon.c      | 5 +++--
 hw/virtio/virtio-crypto.c       | 3 ++-
 25 files changed, 66 insertions(+), 33 deletions(-)

diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
index 74f3a05f88..0e266c552b 100644
--- a/hw/9pfs/xen-9p-backend.c
+++ b/hw/9pfs/xen-9p-backend.c
@@ -61,6 +61,7 @@ typedef struct Xen9pfsDev {
 
     int num_rings;
     Xen9pfsRing *rings;
+    MemReentrancyGuard mem_reentrancy_guard;
 } Xen9pfsDev;
 
 static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev);
@@ -443,7 +444,9 @@ static int xen_9pfs_connect(struct XenLegacyDevice *xendev)
         xen_9pdev->rings[i].ring.out = xen_9pdev->rings[i].data +
                                        XEN_FLEX_RING_SIZE(ring_order);
 
-        xen_9pdev->rings[i].bh = qemu_bh_new(xen_9pfs_bh, &xen_9pdev->rings[i]);
+        xen_9pdev->rings[i].bh = qemu_bh_new_guarded(xen_9pfs_bh,
+                                                     &xen_9pdev->rings[i],
+                                                     &xen_9pdev->mem_reentrancy_guard);
         xen_9pdev->rings[i].out_cons = 0;
         xen_9pdev->rings[i].out_size = 0;
         xen_9pdev->rings[i].inprogress = false;
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index b28d81737e..a6202997ee 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -127,7 +127,8 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
     } else {
         s->ctx = qemu_get_aio_context();
     }
-    s->bh = aio_bh_new(s->ctx, notify_guest_bh, s);
+    s->bh = aio_bh_new_guarded(s->ctx, notify_guest_bh, s,
+                               &DEVICE(vdev)->mem_reentrancy_guard);
     s->batch_notify_vqs = bitmap_new(conf->num_queues);
 
     *dataplane = s;
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 734da42ea7..d8bc39d359 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -633,8 +633,9 @@ XenBlockDataPlane *xen_block_dataplane_create(XenDevice *xendev,
     } else {
         dataplane->ctx = qemu_get_aio_context();
     }
-    dataplane->bh = aio_bh_new(dataplane->ctx, xen_block_dataplane_bh,
-                               dataplane);
+    dataplane->bh = aio_bh_new_guarded(dataplane->ctx, xen_block_dataplane_bh,
+                                       dataplane,
+                                       &DEVICE(xendev)->mem_reentrancy_guard);
 
     return dataplane;
 }
diff --git a/hw/char/virtio-serial-bus.c b/hw/char/virtio-serial-bus.c
index 7d4601cb5d..dd619f0731 100644
--- a/hw/char/virtio-serial-bus.c
+++ b/hw/char/virtio-serial-bus.c
@@ -985,7 +985,8 @@ static void virtser_port_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    port->bh = qemu_bh_new(flush_queued_data_bh, port);
+    port->bh = qemu_bh_new_guarded(flush_queued_data_bh, port,
+                                   &dev->mem_reentrancy_guard);
     port->elem = NULL;
 }
 
diff --git a/hw/display/qxl.c b/hw/display/qxl.c
index 80ce1e9a93..f1c0eb7dfc 100644
--- a/hw/display/qxl.c
+++ b/hw/display/qxl.c
@@ -2201,11 +2201,14 @@ static void qxl_realize_common(PCIQXLDevice *qxl, Error **errp)
 
     qemu_add_vm_change_state_handler(qxl_vm_change_state_handler, qxl);
 
-    qxl->update_irq = qemu_bh_new(qxl_update_irq_bh, qxl);
+    qxl->update_irq = qemu_bh_new_guarded(qxl_update_irq_bh, qxl,
+                                          &DEVICE(qxl)->mem_reentrancy_guard);
     qxl_reset_state(qxl);
 
-    qxl->update_area_bh = qemu_bh_new(qxl_render_update_area_bh, qxl);
-    qxl->ssd.cursor_bh = qemu_bh_new(qemu_spice_cursor_refresh_bh, &qxl->ssd);
+    qxl->update_area_bh = qemu_bh_new_guarded(qxl_render_update_area_bh, qxl,
+                                              &DEVICE(qxl)->mem_reentrancy_guard);
+    qxl->ssd.cursor_bh = qemu_bh_new_guarded(qemu_spice_cursor_refresh_bh, &qxl->ssd,
+                                             &DEVICE(qxl)->mem_reentrancy_guard);
 }
 
 static void qxl_realize_primary(PCIDevice *dev, Error **errp)
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 5e15c79b94..66ac9b6cc5 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1339,8 +1339,10 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
 
     g->ctrl_vq = virtio_get_queue(vdev, 0);
     g->cursor_vq = virtio_get_queue(vdev, 1);
-    g->ctrl_bh = qemu_bh_new(virtio_gpu_ctrl_bh, g);
-    g->cursor_bh = qemu_bh_new(virtio_gpu_cursor_bh, g);
+    g->ctrl_bh = qemu_bh_new_guarded(virtio_gpu_ctrl_bh, g,
+                                     &qdev->mem_reentrancy_guard);
+    g->cursor_bh = qemu_bh_new_guarded(virtio_gpu_cursor_bh, g,
+                                       &qdev->mem_reentrancy_guard);
     QTAILQ_INIT(&g->reslist);
     QTAILQ_INIT(&g->cmdq);
     QTAILQ_INIT(&g->fenceq);
diff --git a/hw/ide/ahci.c b/hw/ide/ahci.c
index 55902e1df7..4e76d6b191 100644
--- a/hw/ide/ahci.c
+++ b/hw/ide/ahci.c
@@ -1509,7 +1509,8 @@ static void ahci_cmd_done(const IDEDMA *dma)
     ahci_write_fis_d2h(ad);
 
     if (ad->port_regs.cmd_issue && !ad->check_bh) {
-        ad->check_bh = qemu_bh_new(ahci_check_cmd_bh, ad);
+        ad->check_bh = qemu_bh_new_guarded(ahci_check_cmd_bh, ad,
+                                           &ad->mem_reentrancy_guard);
         qemu_bh_schedule(ad->check_bh);
     }
 }
diff --git a/hw/ide/ahci_internal.h b/hw/ide/ahci_internal.h
index 303fcd7235..2480455372 100644
--- a/hw/ide/ahci_internal.h
+++ b/hw/ide/ahci_internal.h
@@ -321,6 +321,7 @@ struct AHCIDevice {
     bool init_d2h_sent;
     AHCICmdHdr *cur_cmd;
     NCQTransferState ncq_tfs[AHCI_MAX_CMDS];
+    MemReentrancyGuard mem_reentrancy_guard;
 };
 
 struct AHCIPCIState {
diff --git a/hw/ide/core.c b/hw/ide/core.c
index 45d14a25e9..de48ff9f86 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -513,6 +513,7 @@ BlockAIOCB *ide_issue_trim(
         BlockCompletionFunc *cb, void *cb_opaque, void *opaque)
 {
     IDEState *s = opaque;
+    IDEDevice *dev = s->unit ? s->bus->slave : s->bus->master;
     TrimAIOCB *iocb;
 
     /* Paired with a decrement in ide_trim_bh_cb() */
@@ -520,7 +521,8 @@ BlockAIOCB *ide_issue_trim(
 
     iocb = blk_aio_get(&trim_aiocb_info, s->blk, cb, cb_opaque);
     iocb->s = s;
-    iocb->bh = qemu_bh_new(ide_trim_bh_cb, iocb);
+    iocb->bh = qemu_bh_new_guarded(ide_trim_bh_cb, iocb,
+                                   &DEVICE(dev)->mem_reentrancy_guard);
     iocb->ret = 0;
     iocb->qiov = qiov;
     iocb->i = -1;
diff --git a/hw/misc/imx_rngc.c b/hw/misc/imx_rngc.c
index 632c03779c..082c6980ad 100644
--- a/hw/misc/imx_rngc.c
+++ b/hw/misc/imx_rngc.c
@@ -228,8 +228,10 @@ static void imx_rngc_realize(DeviceState *dev, Error **errp)
     sysbus_init_mmio(sbd, &s->iomem);
 
     sysbus_init_irq(sbd, &s->irq);
-    s->self_test_bh = qemu_bh_new(imx_rngc_self_test, s);
-    s->seed_bh = qemu_bh_new(imx_rngc_seed, s);
+    s->self_test_bh = qemu_bh_new_guarded(imx_rngc_self_test, s,
+                                          &dev->mem_reentrancy_guard);
+    s->seed_bh = qemu_bh_new_guarded(imx_rngc_seed, s,
+                                     &dev->mem_reentrancy_guard);
 }
 
 static void imx_rngc_reset(DeviceState *dev)
diff --git a/hw/misc/macio/mac_dbdma.c b/hw/misc/macio/mac_dbdma.c
index 43bb1f56ba..80a789f32b 100644
--- a/hw/misc/macio/mac_dbdma.c
+++ b/hw/misc/macio/mac_dbdma.c
@@ -914,7 +914,7 @@ static void mac_dbdma_realize(DeviceState *dev, Error **errp)
 {
     DBDMAState *s = MAC_DBDMA(dev);
 
-    s->bh = qemu_bh_new(DBDMA_run_bh, s);
+    s->bh = qemu_bh_new_guarded(DBDMA_run_bh, s, &dev->mem_reentrancy_guard);
 }
 
 static void mac_dbdma_class_init(ObjectClass *oc, void *data)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 53e1c32643..447f669921 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -2917,7 +2917,8 @@ static void virtio_net_add_queue(VirtIONet *n, int index)
         n->vqs[index].tx_vq =
             virtio_add_queue(vdev, n->net_conf.tx_queue_size,
                              virtio_net_handle_tx_bh);
-        n->vqs[index].tx_bh = qemu_bh_new(virtio_net_tx_bh, &n->vqs[index]);
+        n->vqs[index].tx_bh = qemu_bh_new_guarded(virtio_net_tx_bh, &n->vqs[index],
+                                                  &DEVICE(vdev)->mem_reentrancy_guard);
     }
 
     n->vqs[index].tx_waiting = 0;
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index f59dfe1cbe..fd917fcda1 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -4607,7 +4607,8 @@ static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr,
         QTAILQ_INSERT_TAIL(&(sq->req_list), &sq->io_req[i], entry);
     }
 
-    sq->bh = qemu_bh_new(nvme_process_sq, sq);
+    sq->bh = qemu_bh_new_guarded(nvme_process_sq, sq,
+                                 &DEVICE(sq->ctrl)->mem_reentrancy_guard);
 
     if (n->dbbuf_enabled) {
         sq->db_addr = n->dbbuf_dbs + (sqid << 3);
@@ -5253,7 +5254,8 @@ static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr,
         }
     }
     n->cq[cqid] = cq;
-    cq->bh = qemu_bh_new(nvme_post_cqes, cq);
+    cq->bh = qemu_bh_new_guarded(nvme_post_cqes, cq,
+                                 &DEVICE(cq->ctrl)->mem_reentrancy_guard);
 }
 
 static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req)
diff --git a/hw/scsi/mptsas.c b/hw/scsi/mptsas.c
index c485da792c..3de288b454 100644
--- a/hw/scsi/mptsas.c
+++ b/hw/scsi/mptsas.c
@@ -1322,7 +1322,8 @@ static void mptsas_scsi_realize(PCIDevice *dev, Error **errp)
     }
     s->max_devices = MPTSAS_NUM_PORTS;
 
-    s->request_bh = qemu_bh_new(mptsas_fetch_requests, s);
+    s->request_bh = qemu_bh_new_guarded(mptsas_fetch_requests, s,
+                                        &DEVICE(dev)->mem_reentrancy_guard);
 
     scsi_bus_init(&s->bus, sizeof(s->bus), &dev->qdev, &mptsas_scsi_info);
 }
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index c97176110c..3c20b47ad0 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -193,7 +193,8 @@ static void scsi_dma_restart_cb(void *opaque, bool running, RunState state)
         AioContext *ctx = blk_get_aio_context(s->conf.blk);
         /* The reference is dropped in scsi_dma_restart_bh.*/
         object_ref(OBJECT(s));
-        s->bh = aio_bh_new(ctx, scsi_dma_restart_bh, s);
+        s->bh = aio_bh_new_guarded(ctx, scsi_dma_restart_bh, s,
+                                   &DEVICE(s)->mem_reentrancy_guard);
         qemu_bh_schedule(s->bh);
     }
 }
diff --git a/hw/scsi/vmw_pvscsi.c b/hw/scsi/vmw_pvscsi.c
index fa76696855..4de34536e9 100644
--- a/hw/scsi/vmw_pvscsi.c
+++ b/hw/scsi/vmw_pvscsi.c
@@ -1184,7 +1184,8 @@ pvscsi_realizefn(PCIDevice *pci_dev, Error **errp)
         pcie_endpoint_cap_init(pci_dev, PVSCSI_EXP_EP_OFFSET);
     }
 
-    s->completion_worker = qemu_bh_new(pvscsi_process_completion_queue, s);
+    s->completion_worker = qemu_bh_new_guarded(pvscsi_process_completion_queue, s,
+                                               &DEVICE(pci_dev)->mem_reentrancy_guard);
 
     scsi_bus_init(&s->bus, sizeof(s->bus), DEVICE(pci_dev), &pvscsi_scsi_info);
     /* override default SCSI bus hotplug-handler, with pvscsi's one */
diff --git a/hw/usb/dev-uas.c b/hw/usb/dev-uas.c
index 88f99c05d5..f013ded91e 100644
--- a/hw/usb/dev-uas.c
+++ b/hw/usb/dev-uas.c
@@ -937,7 +937,8 @@ static void usb_uas_realize(USBDevice *dev, Error **errp)
 
     QTAILQ_INIT(&uas->results);
     QTAILQ_INIT(&uas->requests);
-    uas->status_bh = qemu_bh_new(usb_uas_send_status_bh, uas);
+    uas->status_bh = qemu_bh_new_guarded(usb_uas_send_status_bh, uas,
+                                         &d->mem_reentrancy_guard);
 
     dev->flags |= (1 << USB_DEV_FLAG_IS_SCSI_STORAGE);
     scsi_bus_init(&uas->bus, sizeof(uas->bus), DEVICE(dev), &usb_uas_scsi_info);
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
index 8755e9cbb0..a0c4e782b2 100644
--- a/hw/usb/hcd-dwc2.c
+++ b/hw/usb/hcd-dwc2.c
@@ -1364,7 +1364,8 @@ static void dwc2_realize(DeviceState *dev, Error **errp)
     s->fi = USB_FRMINTVL - 1;
     s->eof_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_frame_boundary, s);
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_work_timer, s);
-    s->async_bh = qemu_bh_new(dwc2_work_bh, s);
+    s->async_bh = qemu_bh_new_guarded(dwc2_work_bh, s,
+                                      &dev->mem_reentrancy_guard);
 
     sysbus_init_irq(sbd, &s->irq);
 }
diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
index d4da8dcb8d..c930c60921 100644
--- a/hw/usb/hcd-ehci.c
+++ b/hw/usb/hcd-ehci.c
@@ -2533,7 +2533,8 @@ void usb_ehci_realize(EHCIState *s, DeviceState *dev, Error **errp)
     }
 
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, ehci_work_timer, s);
-    s->async_bh = qemu_bh_new(ehci_work_bh, s);
+    s->async_bh = qemu_bh_new_guarded(ehci_work_bh, s,
+                                      &dev->mem_reentrancy_guard);
     s->device = dev;
 
     s->vmstate = qemu_add_vm_change_state_handler(usb_ehci_vm_state_change, s);
diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
index 8ac1175ad2..77baaa7a6b 100644
--- a/hw/usb/hcd-uhci.c
+++ b/hw/usb/hcd-uhci.c
@@ -1190,7 +1190,7 @@ void usb_uhci_common_realize(PCIDevice *dev, Error **errp)
                               USB_SPEED_MASK_LOW | USB_SPEED_MASK_FULL);
         }
     }
-    s->bh = qemu_bh_new(uhci_bh, s);
+    s->bh = qemu_bh_new_guarded(uhci_bh, s, &DEVICE(dev)->mem_reentrancy_guard);
     s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, uhci_frame_timer, s);
     s->num_ports_vmstate = NB_PORTS;
     QTAILQ_INIT(&s->queues);
diff --git a/hw/usb/host-libusb.c b/hw/usb/host-libusb.c
index 176868d345..f500db85ab 100644
--- a/hw/usb/host-libusb.c
+++ b/hw/usb/host-libusb.c
@@ -1141,7 +1141,8 @@ static void usb_host_nodev_bh(void *opaque)
 static void usb_host_nodev(USBHostDevice *s)
 {
     if (!s->bh_nodev) {
-        s->bh_nodev = qemu_bh_new(usb_host_nodev_bh, s);
+        s->bh_nodev = qemu_bh_new_guarded(usb_host_nodev_bh, s,
+                                          &DEVICE(s)->mem_reentrancy_guard);
     }
     qemu_bh_schedule(s->bh_nodev);
 }
@@ -1739,7 +1740,8 @@ static int usb_host_post_load(void *opaque, int version_id)
     USBHostDevice *dev = opaque;
 
     if (!dev->bh_postld) {
-        dev->bh_postld = qemu_bh_new(usb_host_post_load_bh, dev);
+        dev->bh_postld = qemu_bh_new_guarded(usb_host_post_load_bh, dev,
+                                             &DEVICE(dev)->mem_reentrancy_guard);
     }
     qemu_bh_schedule(dev->bh_postld);
     dev->bh_postld_pending = true;
diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
index fd7df599bc..39fbaaab16 100644
--- a/hw/usb/redirect.c
+++ b/hw/usb/redirect.c
@@ -1441,8 +1441,10 @@ static void usbredir_realize(USBDevice *udev, Error **errp)
         }
     }
 
-    dev->chardev_close_bh = qemu_bh_new(usbredir_chardev_close_bh, dev);
-    dev->device_reject_bh = qemu_bh_new(usbredir_device_reject_bh, dev);
+    dev->chardev_close_bh = qemu_bh_new_guarded(usbredir_chardev_close_bh, dev,
+                                                &DEVICE(dev)->mem_reentrancy_guard);
+    dev->device_reject_bh = qemu_bh_new_guarded(usbredir_device_reject_bh, dev,
+                                                &DEVICE(dev)->mem_reentrancy_guard);
     dev->attach_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL, usbredir_do_attach, dev);
 
     packet_id_queue_init(&dev->cancelled, dev, "cancelled");
diff --git a/hw/usb/xen-usb.c b/hw/usb/xen-usb.c
index 66cb3f7c24..38ee660a30 100644
--- a/hw/usb/xen-usb.c
+++ b/hw/usb/xen-usb.c
@@ -1032,7 +1032,8 @@ static void usbback_alloc(struct XenLegacyDevice *xendev)
 
     QTAILQ_INIT(&usbif->req_free_q);
     QSIMPLEQ_INIT(&usbif->hotplug_q);
-    usbif->bh = qemu_bh_new(usbback_bh, usbif);
+    usbif->bh = qemu_bh_new_guarded(usbback_bh, usbif,
+                                    &DEVICE(xendev)->mem_reentrancy_guard);
 }
 
 static int usbback_free(struct XenLegacyDevice *xendev)
diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index fd06fcfb3f..d004cf29d2 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -886,8 +886,9 @@ static void virtio_balloon_device_realize(DeviceState *dev, Error **errp)
         precopy_add_notifier(&s->free_page_hint_notify);
 
         object_ref(OBJECT(s->iothread));
-        s->free_page_bh = aio_bh_new(iothread_get_aio_context(s->iothread),
-                                     virtio_ballloon_get_free_page_hints, s);
+        s->free_page_bh = aio_bh_new_guarded(iothread_get_aio_context(s->iothread),
+                                             virtio_ballloon_get_free_page_hints, s,
+                                             &dev->mem_reentrancy_guard);
     }
 
     if (virtio_has_feature(s->host_features, VIRTIO_BALLOON_F_REPORTING)) {
diff --git a/hw/virtio/virtio-crypto.c b/hw/virtio/virtio-crypto.c
index 802e1b9659..2fe804510f 100644
--- a/hw/virtio/virtio-crypto.c
+++ b/hw/virtio/virtio-crypto.c
@@ -1074,7 +1074,8 @@ static void virtio_crypto_device_realize(DeviceState *dev, Error **errp)
         vcrypto->vqs[i].dataq =
                  virtio_add_queue(vdev, 1024, virtio_crypto_handle_dataq_bh);
         vcrypto->vqs[i].dataq_bh =
-                 qemu_bh_new(virtio_crypto_dataq_bh, &vcrypto->vqs[i]);
+                 qemu_bh_new_guarded(virtio_crypto_dataq_bh, &vcrypto->vqs[i],
+                                     &dev->mem_reentrancy_guard);
         vcrypto->vqs[i].vcrypto = vcrypto;
     }
 
-- 
2.39.0



From xen-devel-bounces@lists.xenproject.org Thu Apr 27 21:52:03 2023
Date: Thu, 27 Apr 2023 14:51:46 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleg Nikitenko <oleshiiwood@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Michal Orzel <michal.orzel@amd.com>, Julien Grall <julien@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Carlo Nonato <carlo.nonato@minervasys.tech>, Stewart.Hildebrand@amd.com
Subject: Re: xen cache colors in ARM
In-Reply-To: <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
Message-ID: <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com> <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com> <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop> <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
 <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com> <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com> <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com> <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com> <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com> <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop> <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop> <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1685484581-1682632248=:3419"
Content-ID: <alpine.DEB.2.22.394.2304271450530.3419@ubuntu-linux-20-04-desktop>


--8323329-1685484581-1682632248=:3419
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2304271450531.3419@ubuntu-linux-20-04-desktop>

I am familiar with the zcu102, but I don't know how you could possibly
generate an SError.

I suggest trying ImageBuilder [1] to generate the boot configuration as
a test, because it is known to work well on the zcu102.

[1] https://gitlab.com/xen-project/imagebuilder
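
For reference, an ImageBuilder config for a zcu102-style Xen/Dom0 boot is a small shell-sourced file along these lines (illustrative only: the file names, addresses, and ramdisk are placeholders, and the exact variable set should be checked against the ImageBuilder README):

```shell
# config for imagebuilder's scripts/uboot-script-gen
# MEMORY_START/MEMORY_END describe the DDR window u-boot may load into
MEMORY_START="0x0"
MEMORY_END="0x80000000"

LOAD_CMD="tftpb"            # how u-boot fetches the binaries
DEVICE_TREE="system.dtb"    # board device tree (placeholder name)
XEN="xen"                   # Xen hypervisor binary
DOM0_KERNEL="Image"         # dom0 Linux kernel
DOM0_RAMDISK="dom0-ramdisk.cpio"  # placeholder ramdisk

NUM_DOMUS=0                 # no domUs in this minimal example

UBOOT_SOURCE="boot.source"  # generated u-boot script source
UBOOT_SCRIPT="boot.scr"     # compiled boot script output
```

Running `scripts/uboot-script-gen -c <this file> -d . -t tftp` then produces a `boot.scr` with consistent load addresses, which is the part that is easy to get wrong by hand.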


On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
> Hello Stefano,
> 
> Thanks for clarification.
> We use neither ImageBuilder nor a u-boot boot script.
> The model is zcu102-compatible.
> 
> Regards,
> O.
> 
> Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org>:
>       This is interesting. Are you using Xilinx hardware by any chance? If so,
>       which board?
> 
>       Are you using ImageBuilder to generate your boot.scr boot script? If so,
>       could you please post your ImageBuilder config file? If not, can you
>       post the source of your uboot boot script?
> 
>       SErrors are supposed to be related to a hardware failure of some kind.
>       You are not supposed to be able to trigger an SError easily by
>       "mistake". I have not seen SErrors due to wrong cache coloring
>       configurations on any Xilinx board before.
> 
>       The differences between Xen with and without cache coloring from a
>       hardware perspective are:
> 
>       - With cache coloring, the SMMU is enabled and does address translations
>         even for dom0. Without cache coloring the SMMU could be disabled, and
>         if enabled, the SMMU doesn't do any address translations for Dom0. If
>         there is a hardware failure related to SMMU address translation it
>         could only trigger with cache coloring. This would be my normal
>         suggestion for you to explore, but the failure happens too early
>         before any DMA-capable device is programmed. So I don't think this can
>         be the issue.
> 
>       - With cache coloring, the memory allocation is very different so you'll
>         end up using different DDR regions for Dom0. So if your DDR is
>         defective, you might only see a failure with cache coloring enabled
>         because you end up using different regions.
> 
> 
>       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
>       > Hi Stefano,
>       >
>       > Thank you.
>       > If I build Xen without cache coloring support, this error does not occur.
>       > All the domains boot correctly.
>       > Hence it cannot be a hardware issue.
>       > This panic arrived while unpacking the rootfs.
>       > Here I attached the boot log of Xen/Dom0 without coloring.
>       > The highlighted strings are printed exactly after the place where the panic first arrived.
>       >
>       >  Xen 4.16.1-pre
>       > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
>       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
>       > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
>       > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03,rev 0x4
>       > (XEN) 64-bit Execution:
>       > (XEN)   Processor Features: 0000000000002222 0000000000000000
>       > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
>       > (XEN)     Extensions: FloatingPoint AdvancedSIMD
>       > (XEN)   Debug Features: 0000000010305106 0000000000000000
>       > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
>       > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
>       > (XEN)   ISA Features:  0000000000011120 0000000000000000
>       > (XEN) 32-bit Execution:
>       > (XEN)   Processor Features: 0000000000000131:0000000000011011
>       > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
>       > (XEN)     Extensions: GenericTimer Security
>       > (XEN)   Debug Features: 0000000003010066
>       > (XEN)   Auxiliary Features: 0000000000000000
>       > (XEN)   Memory Model Features: 0000000010201105 0000000040000000
>       > (XEN)                          0000000001260000 0000000002102211
>       > (XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
>       > (XEN)                 0000000001112131 0000000000011142 0000000000011121
>       > (XEN) Using SMC Calling Convention v1.2
>       > (XEN) Using PSCI v1.1
>       > (XEN) SMP: Allowing 4 CPUs
>       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
>       > (XEN) GICv2 initialization:
>       > (XEN)         gic_dist_addr=00000000f9010000
>       > (XEN)         gic_cpu_addr=00000000f9020000
>       > (XEN)         gic_hyp_addr=00000000f9040000
>       > (XEN)         gic_vcpu_addr=00000000f9060000
>       > (XEN)         gic_maintenance_irq=25
>       > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
>       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
>       > (XEN) Using scheduler: null Scheduler (null)
>       > (XEN) Initializing null scheduler
>       > (XEN) WARNING: This is experimental software in development.
>       > (XEN) Use at your own risk.
>       > (XEN) Allocated console ring of 32 KiB.
>       > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
>       > (XEN) Bringing up CPU1
>       > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
>       > (XEN) CPU 1 booted.
>       > (XEN) Bringing up CPU2
>       > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
>       > (XEN) CPU 2 booted.
>       > (XEN) Bringing up CPU3
>       > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
>       > (XEN) Brought up 4 CPUs
>       > (XEN) CPU 3 booted.
>       > (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
>       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
>       > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
>       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
>       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
>       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
>       > (XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
>       > (XEN) I/O virtualisation enabled
>       > (XEN)  - Dom0 mode: Relaxed
>       > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>       > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>       > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>       > (XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
>       > (XEN) *** LOADING DOMAIN 0 ***
>       > (XEN) Loading d0 kernel from boot module @ 0000000001000000
>       > (XEN) Loading ramdisk from boot module @ 0000000002000000
>       > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
>       > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
>       > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
>       > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
>       > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
>       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
>       > (XEN) Allocating PPI 16 for event channel interrupt
>       > (XEN) Extended region 0: 0x81200000->0xa0000000
>       > (XEN) Extended region 1: 0xb1200000->0xc0000000
>       > (XEN) Extended region 2: 0xc8000000->0xe0000000
>       > (XEN) Extended region 3: 0xf0000000->0xf9000000
>       > (XEN) Extended region 4: 0x100000000->0x600000000
>       > (XEN) Extended region 5: 0x880000000->0x8000000000
>       > (XEN) Extended region 6: 0x8001000000->0x10000000000
>       > (XEN) Loading zImage from 0000000001000000 to 0000000010000000-0000000010e41008
>       > (XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
>       > (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
>       > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>       > (XEN) Std. Loglevel: All
>       > (XEN) Guest Loglevel: All
>       > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
>       > (XEN) null.c:353: 0 <-- d0v0
>       > (XEN) Freed 356kB init memory.
>       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
>       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
>       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
>       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
>       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
>       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
>       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
>       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>       > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
>       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
>       > [    0.000000] Machine model: D14 Viper Board - White Unit
>       > [    0.000000] Xen 4.16 support found
>       > [    0.000000] Zone ranges:
>       > [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
>       > [    0.000000]   DMA32    empty
>       > [    0.000000]   Normal   empty
>       > [    0.000000] Movable zone start for each node
>       > [    0.000000] Early memory node ranges
>       > [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
>       > [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
>       > [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
>       > [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
>       > [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
>       > [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
>       > [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
>       > [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
>       > [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
>       > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
>       > [    0.000000] psci: probing for conduit method from DT.
>       > [    0.000000] psci: PSCIv1.1 detected in firmware.
>       > [    0.000000] psci: Using standard PSCI v0.2 function IDs
>       > [    0.000000] psci: Trusted OS migration not required
>       > [    0.000000] psci: SMC Calling Convention v1.1
>       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
>       > [    0.000000] Detected VIPT I-cache on CPU0
>       > [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
>       > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
>       > [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
>       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
>       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
>       > [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
>       > [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
>       > [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
>       > [    0.000000] mem auto-init: clearing system memory may take some time...
>       > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
>       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
>       > [    0.000000] rcu: Hierarchical RCU implementation.
>       > [    0.000000] rcu: RCU event tracing is enabled.
>       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
>       > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
>       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
>       > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
>       > [    0.000000] Root IRQ handler: gic_handle_irq
>       > [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
>       > [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
>       > [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
>       > [    0.000258] Console: colour dummy device 80x25
>       > [    0.310231] printk: console [hvc0] enabled
>       > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
>       > [    0.324851] pid_max: default: 32768 minimum: 301
>       > [    0.329706] LSM: Security Framework initializing
>       > [    0.334204] Yama: becoming mindful.
>       > [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>       > [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>       > [    0.354743] xen:grant_table: Grant tables using version 1 layout
>       > [    0.359132] Grant table initialized
>       > [    0.362664] xen:events: Using FIFO-based ABI
>       > [    0.366993] Xen: initializing cpu0
>       > [    0.370515] rcu: Hierarchical SRCU implementation.
>       > [    0.375930] smp: Bringing up secondary CPUs ...
>       > (XEN) null.c:353: 1 <-- d0v1
>       > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>       > [    0.382549] Detected VIPT I-cache on CPU1
>       > [    0.388712] Xen: initializing cpu1
>       > [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
>       > [    0.388829] smp: Brought up 1 node, 2 CPUs
>       > [    0.406941] SMP: Total of 2 processors activated.
>       > [    0.411698] CPU features: detected: 32-bit EL0 Support
>       > [    0.416888] CPU features: detected: CRC32 instructions
>       > [    0.422121] CPU: All CPU(s) started at EL1
>       > [    0.426248] alternatives: patching kernel code
>       > [    0.431424] devtmpfs: initialized
>       > [    0.441454] KASLR enabled
>       > [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
>       > [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
>       > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
>       > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
>       > [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
>       > [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
>       > [    0.519478] audit: initializing netlink subsys (disabled)
>       > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
>       > [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
>       > [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
>       > [    0.545608] ASID allocator initialised with 32768 entries
>       > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
>       > [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
>       > [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>       > [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
>       > [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
>       > [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
>       > [    0.636520] DRBG: Continuing without Jitter RNG
>       > [    0.737187] raid6: neonx8   gen()  2143 MB/s
>       > [    0.805294] raid6: neonx8   xor()  1589 MB/s
>       > [    0.873406] raid6: neonx4   gen()  2177 MB/s
>       > [    0.941499] raid6: neonx4   xor()  1556 MB/s
>       > [    1.009612] raid6: neonx2   gen()  2072 MB/s
>       > [    1.077715] raid6: neonx2   xor()  1430 MB/s
>       > [    1.145834] raid6: neonx1   gen()  1769 MB/s
>       > [    1.213935] raid6: neonx1   xor()  1214 MB/s
>       > [    1.282046] raid6: int64x8  gen()  1366 MB/s
>       > [    1.350132] raid6: int64x8  xor()   773 MB/s
>       > [    1.418259] raid6: int64x4  gen()  1602 MB/s
>       > [    1.486349] raid6: int64x4  xor()   851 MB/s
>       > [    1.554464] raid6: int64x2  gen()  1396 MB/s
>       > [    1.622561] raid6: int64x2  xor()   744 MB/s
>       > [    1.690687] raid6: int64x1  gen()  1033 MB/s
>       > [    1.758770] raid6: int64x1  xor()   517 MB/s
>       > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
>       > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
>       > [    1.767957] raid6: using neon recovery algorithm
>       > [    1.772824] xen:balloon: Initialising balloon driver
>       > [    1.778021] iommu: Default domain type: Translated
>       > [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
>       > [    1.789149] SCSI subsystem initialized
>       > [    1.792820] usbcore: registered new interface driver usbfs
>       > [    1.798254] usbcore: registered new interface driver hub
>       > [    1.803626] usbcore: registered new device driver usb
>       > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
>       > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
>       > [    1.822903] PTP clock support registered
>       > [    1.826893] EDAC MC: Ver: 3.0.0
>       > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
>       > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
>       > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
>       > [    1.855907] FPGA manager framework
>       > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
>       > [    1.871712] NET: Registered PF_INET protocol family
>       > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
>       > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
>       > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
>       > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
>       > [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
>       > [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
>       > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
>       > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
>       > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
>       > [    1.936834] RPC: Registered named UNIX socket transport module.
>       > [    1.942342] RPC: Registered udp transport module.
>       > [    1.947088] RPC: Registered tcp transport module.
>       > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
>       > [    1.958334] PCI: CLS 0 bytes, default 64
>       > [    1.962709] Trying to unpack rootfs image as initramfs...
>       > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
>       > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
>       > [    2.021045] NET: Registered PF_ALG protocol family
>       > [    2.021122] xor: measuring software checksum speed
>       > [    2.029347]    8regs           :  2366 MB/sec
>       > [    2.033081]    32regs          :  2802 MB/sec
>       > [    2.038223]    arm64_neon      :  2320 MB/sec
>       > [    2.038385] xor: using function: 32regs (2802 MB/sec)
>       > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
>       > [    2.050959] io scheduler mq-deadline registered
>       > [    2.055521] io scheduler kyber registered
>       > [    2.068227] xen:xen_evtchn: Event-channel device installed
>       > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
>       > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
>       > [    2.085548] brd: module loaded
>       > [    2.089290] loop: module loaded
>       > [    2.089341] Invalid max_queues (4), will use default max: 2.
>       > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
>       > [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
>       > [    2.104156] usbcore: registered new interface driver rtl8150
>       > [    2.109813] usbcore: registered new interface driver r8152
>       > [    2.115367] usbcore: registered new interface driver asix
>       > [    2.120794] usbcore: registered new interface driver ax88179_178a
>       > [    2.126934] usbcore: registered new interface driver cdc_ether
>       > [    2.132816] usbcore: registered new interface driver cdc_eem
>       > [    2.138527] usbcore: registered new interface driver net1080
>       > [    2.144256] usbcore: registered new interface driver cdc_subset
>       > [    2.150205] usbcore: registered new interface driver zaurus
>       > [    2.155837] usbcore: registered new interface driver cdc_ncm
>       > [    2.161550] usbcore: registered new interface driver r8153_ecm
>       > [    2.168240] usbcore: registered new interface driver cdc_acm
>       > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
>       > [    2.181358] usbcore: registered new interface driver uas
>       > [    2.186547] usbcore: registered new interface driver usb-storage
>       > [    2.192643] usbcore: registered new interface driver ftdi_sio
>       > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
>       > [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
>       > [    2.215332] i2c_dev: i2c /dev entries driver
>       > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
>       > [    2.225923] device-mapper: uevent: version 1.0.3
>       > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
>       > [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
>       > [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
>       > [    2.261719] sdhci: Secure Digital Host Controller Interface driver
>       > [    2.267487] sdhci: Copyright(c) Pierre Ossman
>       > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
>       > [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
>       > [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
>       > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
>       > [    2.327875] securefw securefw: securefw probed
>       > [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
>       > [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
>       > [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
>       > [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
>       > [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
>       > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
>       > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
>       > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
>       > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>       > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
>       > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
>       > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
>       > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
>       > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
>       > [    2.420856] default preset
>       > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
>       > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
>       > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
>       > [    2.441976] vmcu driver init
>       > [    2.444922] VMCU: : (240:0) registered
>       > [    2.444956] In K81 Updater init
>       > [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
>       > [    2.468833] Initializing XFRM netlink socket
>       > [    2.468902] NET: Registered PF_PACKET protocol family
>       > [    2.472729] Bridge firewalling registered
>       > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
>       > [    2.481341] registered taskstats version 1
>       > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
>       > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
>       > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
>       > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
>       > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
>       > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
>       > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
>       > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
>       > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
>       > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
>       > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
>       > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
>       > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
>       > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
>       > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
>       > [    2.952393] Creating 2 MTD partitions on "spi0.0":
>       > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
>       > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
>       > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
>       > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
>       > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
>       > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
>       > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
>       > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
>       > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
>       > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
>       > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
>       > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
>       > [    3.045301] viper_enet viper_enet: Viper enet registered
>       > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
>       > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
>       > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
>       > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
>       > [    3.097729] si70xx: probe of 2-0040 failed with error -5
>       > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
>       > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
>       > [    3.112457] viper-tamper viper-tamper: Device registered
>       > [    3.117593] active_bank active_bank: boot bank: 1
>       > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
>       > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
>       > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>       > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
>       > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
>       > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
>       > [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
>       > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
>       > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>       > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
>       > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
>       > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
>       > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
>       > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
>       > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
>       > [    3.215694] lpc55_l2 spi1.0: rx error: -110
>       > [    3.284438] mmc0: new HS200 MMC card at address 0001
>       > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
>       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
>       > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
>       > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
>       > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
>       > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>       > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
>       > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
>       > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
>       > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
>       > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
>       > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
>       > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
>       > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
>       > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
>       > [    3.633253] lpc55_l2 spi1.0: rx error: -110
>       > [    3.639104] k81_bootloader 0-0010: probe
>       > [    3.641628] VMCU: : (235:0) registered
>       > [    3.641635] k81_bootloader 0-0010: probe completed
>       > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
>       > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
>       > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
>       > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
>       > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
>       > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
>       > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
>       > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
>       > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
>       > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
>       > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
>       > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
>       > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
>       > [    3.737549] sfp_register_socket: got sfp_bus
>       > [    3.740709] sfp_register_socket: register sfp_bus
>       > [    3.745459] sfp_register_bus: ops ok!
>       > [    3.749179] sfp_register_bus: Try to attach
>       > [    3.753419] sfp_register_bus: Attach succeeded
>       > [    3.757914] sfp_register_bus: upstream ops attach
>       > [    3.762677] sfp_register_bus: Bus registered
>       > [    3.766999] sfp_register_socket: register sfp_bus succeeded
>       > [    3.775870] of_cfs_init
>       > [    3.776000] of_cfs_init: OK
>       > [    3.778211] clk: Not disabling unused clocks
>       > [   11.278477] Freeing initrd memory: 206056K
>       > [   11.279406] Freeing unused kernel memory: 1536K
>       > [   11.314006] Checked W+X mappings: passed, no W+X pages found
>       > [   11.314142] Run /init as init process
>       > INIT: version 3.01 booting
>       > fsck (busybox 1.35.0)
>       > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
>       > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
>       > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
>       > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
>       > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
>       > Starting random number generator daemon.
>       > [   11.580662] random: crng init done
>       > Starting udev
>       > [   11.613159] udevd[142]: starting version 3.2.10
>       > [   11.620385] udevd[143]: starting eudev-3.2.10
>       > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
>       > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
>       > [   12.063396] ip_local_port_range: prefer different parity for start/end values.
>       > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>       > hwclock: RTC_RD_TIME: Invalid exchange
>       > Mon Feb 27 08:40:53 UTC 2023
>       > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
>       > hwclock: RTC_SET_TIME: Invalid exchange
>       > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>       > Starting mcud
>       > INIT: Entering runlevel: 5
>       > Configuring network interfaces... done.
>       > resetting network interface
>       > [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>       > [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
>       > [   12.732151] pps pps0: new PPS source ptp0
>       > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
>       > [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>       > [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
>       > [   12.761804] pps pps1: new PPS source ptp1
>       > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
>       > Auto-negotiation: off
>       > Auto-negotiation: off
>       > [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
>       > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
>       > [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
>       > [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
>       > Starting Failsafe Secure Shell server in port 2222: sshd
>       > done.
>       > Starting rpcbind daemon...done.
>       >
>       > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>       > hwclock: RTC_RD_TIME: Invalid exchange
>       > Starting State Manager Service
>       > Start state-manager restarter...
>       > (XEN) d0v1 Forwarding AES operation: 3254779951
>       > Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
>       > [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
>       > [   17.350670] BTRFS info (device dm-0): has skinny extents
>       > [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
>       > [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
>       > [   17.872699] BTRFS info (device dm-1): using free space tree
>       > [   17.872771] BTRFS info (device dm-1): has skinny extents
>       > [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
>       > [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
>       > [   17.895695] BTRFS info (device dm-1): checking UUID tree
>       >
>       > Setting domain 0 name, domid and JSON config...
>       > Done setting up Dom0
>       > Starting xenconsoled...
>       > Starting QEMU as disk backend for dom0
>       > Starting domain watchdog daemon: xenwatchdogd startup
>       >
>       > [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
>       > [done]
>       > [   18.465552] BTRFS info (device dm-2): using free space tree
>       > [   18.465629] BTRFS info (device dm-2): has skinny extents
>       > [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
>       > Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
>       > [   18.486659] BTRFS info (device dm-2): checking UUID tree
>       > OK
>       > starting rsyslogd ... Log partition ready after 0 poll loops
>       > done
>       > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
>       > [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>       >
>       > Please insert USB token and enter your role in login prompt.
>       >
>       > login:
>       >
>       > Regards,
>       > O.
>       >
>       >
>       > пн, 24 апр. 2023 г. в 23:39, Stefano Stabellini <sstabellini@kernel.org>:
>       >       Hi Oleg,
>       >
>       >       Here is the issue from your logs:
>       >
>       >       SError Interrupt on CPU0, code 0xbe000000 -- SError
>       >
>       >       SErrors are special signals that notify software of serious hardware
>       >       errors.  Something is going very wrong. Defective hardware is one
>       >       possibility.  Another possibility is software accessing address ranges
>       >       that it is not supposed to; that sometimes causes SErrors.
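
The SError code quoted from the log can be broken into its architectural
fields. A minimal sketch, assuming the standard Arm ESR_ELx layout (EC in
bits [31:26], IL in bit 25, ISS in bits [24:0]); the function name is
illustrative, not from Xen or Linux:

```python
# Hedged sketch: decode an ESR_ELx syndrome value such as the
# 0xbe000000 reported in the panic. Field positions follow the Arm
# architecture's ESR_ELx encoding.

def decode_serror_esr(esr: int) -> dict:
    """Split an ESR_ELx value into its EC / IL / ISS fields."""
    ec = (esr >> 26) & 0x3F    # Exception Class, bits [31:26]
    il = (esr >> 25) & 0x1     # Instruction Length bit
    iss = esr & 0x1FFFFFF      # Instruction Specific Syndrome, bits [24:0]
    return {
        "ec": ec,
        "is_serror": ec == 0x2F,  # EC 0b101111 = SError interrupt
        "il": il,
        "iss": iss,
    }

info = decode_serror_esr(0xBE000000)
print(hex(info["ec"]), info["is_serror"], info["iss"])
```

For 0xbe000000 the EC decodes to 0x2f (SError interrupt) and the ISS is
zero, i.e. the hardware reported no further syndrome detail about the
error source, which is consistent with the log giving no extra fault
information.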
>       >
>       >       Cheers,
>       >
>       >       Stefano
>       >
>       >
>       >
>       >       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
>       >
>       >       > Hello,
>       >       >
>       >       > Thanks guys.
>       >       > I found out where the problem was.
>       >       > Now dom0 boots further, but I have a new problem:
>       >       > a kernel panic during Dom0 loading.
>       >       > Maybe someone can suggest something?
>       >       >
>       >       > Regards,
>       >       > O.
>       >       >
>       >       > [    3.771362] sfp_register_bus: upstream ops attach
>       >       > [    3.776119] sfp_register_bus: Bus registered
>       >       > [    3.780459] sfp_register_socket: register sfp_bus succeeded
>       >       > [    3.789399] of_cfs_init
>       >       > [    3.789499] of_cfs_init: OK
>       >       > [    3.791685] clk: Not disabling unused clocks
>       >       > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
>       >       > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>       >       > [   11.010393] Workqueue: events_unbound async_run_entry_fn
>       >       > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>       >       > [   11.010422] pc : simple_write_end+0xd0/0x130
>       >       > [   11.010431] lr : generic_perform_write+0x118/0x1e0
>       >       > [   11.010438] sp : ffffffc00809b910
>       >       > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
>       >       > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
>       >       > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
>       >       > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
>       >       > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
>       >       > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
>       >       > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
>       >       > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
>       >       > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
>       >       > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
>       >       > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
>       >       > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>       >       > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
>       >       > [   11.010548] Workqueue: events_unbound async_run_entry_fn
>       >       > [   11.010556] Call trace:
>       >       > [   11.010558]  dump_backtrace+0x0/0x1c4
>       >       > [   11.010567]  show_stack+0x18/0x2c
>       >       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
>       >       > [   11.010583]  dump_stack+0x18/0x34
>       >       > [   11.010588]  panic+0x14c/0x2f8
>       >       > [   11.010597]  print_tainted+0x0/0xb0
>       >       > [   11.010606]  arm64_serror_panic+0x6c/0x7c
>       >       > [   11.010614]  do_serror+0x28/0x60
>       >       > [   11.010621]  el1h_64_error_handler+0x30/0x50
>       >       > [   11.010628]  el1h_64_error+0x78/0x7c
>       >       > [   11.010633]  simple_write_end+0xd0/0x130
>       >       > [   11.010639]  generic_perform_write+0x118/0x1e0
>       >       > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
>       >       > [   11.010650]  generic_file_write_iter+0x78/0xd0
>       >       > [   11.010656]  __kernel_write+0xfc/0x2ac
>       >       > [   11.010665]  kernel_write+0x88/0x160
>       >       > [   11.010673]  xwrite+0x44/0x94
>       >       > [   11.010680]  do_copy+0xa8/0x104
>       >       > [   11.010686]  write_buffer+0x38/0x58
>       >       > [   11.010692]  flush_buffer+0x4c/0xbc
>       >       > [   11.010698]  __gunzip+0x280/0x310
>       >       > [   11.010704]  gunzip+0x1c/0x28
>       >       > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
>       >       > [   11.010715]  do_populate_rootfs+0x80/0x164
>       >       > [   11.010722]  async_run_entry_fn+0x48/0x164
>       >       > [   11.010728]  process_one_work+0x1e4/0x3a0
>       >       > [   11.010736]  worker_thread+0x7c/0x4c0
>       >       > [   11.010743]  kthread+0x120/0x130
>       >       > [   11.010750]  ret_from_fork+0x10/0x20
>       >       > [   11.010757] SMP: stopping secondary CPUs
>       >       > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
>       >       > [   11.010788] PHYS_OFFSET: 0x0
>       >       > [   11.010790] CPU features: 0x00000401,00000842
>       >       > [   11.010795] Memory Limit: none
>       >       > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
>       >       >
>       >       > Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com>:
>       >       >       Hi Oleg,
>       >       >
>       >       >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
>       >       >       >       
>       >       >       >
>       >       >       >
>       >       >       > Hello Michal,
>       >       >       >
>       >       >       > I was not able to enable earlyprintk in Xen so far.
>       >       >       > I decided to take another approach.
>       >       >       > This is the complete Xen command line that I recovered:
>       >       >       >
>       >       >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
>       >       >       Yes, adding a printk() in Xen was also a good idea.
>       >       >
>       >       >       >
>       >       >       > So you are absolutely right about the command line.
>       >       >       > Now I am going to find out why Xen did not get the correct parameters from the device tree.
>       >       >       Maybe you will find this document helpful:
>       >       >       https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
>       >       >
>       >       >       ~Michal
>       >       >
>       >       >       >
>       >       >       > Regards,
>       >       >       > Oleg
>       >       >       >
>       >       >       > Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com>:
>       >       >       >
>       >       >       >
>       >       >       >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
>       >       >       >     >       
>       >       >       >     >
>       >       >       >     >
>       >       >       >     > Hello Michal,
>       >       >       >     >
>       >       >       >     > Yes, I use yocto.
>       >       >       >     >
>       >       >       >     > Yesterday all day long I tried to follow your suggestions.
>       >       >       >     > I faced a problem.
>       >       >       >     > I manually pasted the following lines into the Xen build config file:
>       >       >       >     In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>       >       >       >     You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
>       >       >       >
>       >       >       >     >
>       >       >       >     > CONFIG_EARLY_PRINTK
>       >       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
>       >       >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
>       >       >       >     I hope you added =y to them.
>       >       >       >
>       >       >       >     Anyway, you have at least the following solutions:
>       >       >       >     1) Run bitbake xen -c menuconfig to properly set early printk
>       >       >       >     2) Find out how to enable additional Kconfig options in your project (e.g. CONFIG_COLORING=y, which is not enabled by default)
>       >       >       >     3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>       >       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
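Option 3 above can be sketched as a one-liner. The defconfig path is the one quoted in the thread; a scratch file stands in for it here so the sketch is self-contained:

```shell
# Sketch of option 3: append the early-printk Kconfig option to the arm64
# defconfig, then regenerate the configuration. In a real tree the file is
# xen/arch/arm/configs/arm64_defconfig; a temp file is used here.
defconfig=$(mktemp)
echo 'CONFIG_EARLY_PRINTK_ZYNQMP=y' >> "$defconfig"
grep '^CONFIG_EARLY_PRINTK_ZYNQMP=y' "$defconfig"
# afterwards, in a real tree: make -C xen olddefconfig
rm -f "$defconfig"
```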
>       >       >       >
>       >       >       >     ~Michal
>       >       >       >
>       >       >       >     >
>       >       >       >     > The host hangs at build time.
>       >       >       >     > Maybe I did not set something in the build config file?
>       >       >       >     >
>       >       >       >     > Regards,
>       >       >       >     > Oleg
>       >       >       >     >
>       >       >       >     > Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com>:
>       >       >       >     >
>       >       >       >     >     Thanks Michal,
>       >       >       >     >
>       >       >       >     >     You gave me an idea.
>       >       >       >     >     I am going to try it today.
>       >       >       >     >
>       >       >       >     >     Regards,
>       >       >       >     >     O.
>       >       >       >     >
>       >       >       >     >     Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:
>       >       >       >     >
>       >       >       >     >         Thanks Stefano.
>       >       >       >     >
>       >       >       >     >         I am going to do it today.
>       >       >       >     >
>       >       >       >     >         Regards,
>       >       >       >     >         O.
>       >       >       >     >
>       >       >       >     >         Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
>       >       >       >     >
>       >       >       >     >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>       >       >       >     >             > Hi Michal,
>       >       >       >     >             >
>       >       >       >     >             > I corrected xen's command line.
>       >       >       >     >             > Now it is
>       >       >       >     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>       >       >       >     >
>       >       >       >     >             4 colors is way too many for Xen; just use xen_colors=0-0. There is no
>       >       >       >     >             advantage in using more than 1 color for Xen.
>       >       >       >     >
>       >       >       >     >             4 colors is too few for dom0 if you are giving 1600M of memory to dom0.
>       >       >       >     >             Each color is 256M, so for 1600M you should give at least 7 colors. Try:
>       >       >       >     >
>       >       >       >     >             xen_colors=0-0 dom0_colors=1-8
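The sizing rule above works out as follows (assuming, per this thread, 256M of RAM mapped per color on this platform):

```shell
# Hedged sketch of the color-count arithmetic from the advice above:
# with 256M per color, dom0_mem=1600M needs ceil(1600/256) = 7 colors,
# hence the suggested dom0_colors=1-8 range (8 colors, with headroom).
dom0_mem_mb=1600
color_size_mb=256
min_colors=$(( (dom0_mem_mb + color_size_mb - 1) / color_size_mb ))
echo "minimum dom0 colors: $min_colors"   # minimum dom0 colors: 7
```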
>       >       >       >     >
>       >       >       >     >
>       >       >       >     >
>       >       >       >     >             > Unfortunately the result was the same.
>       >       >       >     >             >
>       >       >       >     >             > (XEN)  - Dom0 mode: Relaxed
>       >       >       >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>       >       >       >     >             > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>       >       >       >     >             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>       >       >       >     >             > (XEN) Coloring general information
>       >       >       >     >             > (XEN) Way size: 64kB
>       >       >       >     >             > (XEN) Max. number of colors available: 16
>       >       >       >     >             > (XEN) Xen color(s): [ 0 ]
>       >       >       >     >             > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>       >       >       >     >             > (XEN) Color array allocation failed for dom0
>       >       >       >     >             > (XEN)
>       >       >       >     >             > (XEN) ****************************************
>       >       >       >     >             > (XEN) Panic on CPU 0:
>       >       >       >     >             > (XEN) Error creating domain 0
>       >       >       >     >             > (XEN) ****************************************
>       >       >       >     >             > (XEN)
>       >       >       >     >             > (XEN) Reboot in five seconds...
>       >       >       >     >             >
>       >       >       >     >             > I am going to find out how the command line arguments are passed and parsed.
>       >       >       >     >             >
>       >       >       >     >             > Regards,
>       >       >       >     >             > Oleg
>       >       >       >     >             >
>       >       >       >     >             > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
>       >       >       >     >             >       Hi Michal,
>       >       >       >     >             >
>       >       >       >     >             > You pointed me right at the problem. Thank you.
>       >       >       >     >             > I am going to use your point.
>       >       >       >     >             > Let's see what happens.
>       >       >       >     >             >
>       >       >       >     >             > Regards,
>       >       >       >     >             > Oleg
>       >       >       >     >             >
>       >       >       >     >             >
>       >       >       >     >             > Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
>       >       >       >     >             >       Hi Oleg,
>       >       >       >     >             >
>       >       >       >     >             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>       >       >       >     >             >       >       
>       >       >       >     >             >       >
>       >       >       >     >             >       >
>       >       >       >     >             >       > Hello Stefano,
>       >       >       >     >             >       >
>       >       >       >     >             >       > Thanks for the clarification.
>       >       >       >     >             >       > My company uses yocto for image generation.
>       >       >       >     >             >       > What kind of information do you need to consult me in this case ?
>       >       >       >     >             >       >
>       >       >       >     >             >       > Maybe the module sizes/addresses that were mentioned by @Julien Grall <julien@xen.org>?
>       >       >       >     >             >
>       >       >       >     >             >       Sorry for jumping into the discussion, but FWICS the Xen command line you provided
>       >       >       >     >             >       seems not to be the one Xen booted with. The error you are observing is most likely
>       >       >       >     >             >       due to the dom0 colors configuration not being specified (i.e. the lack of a
>       >       >       >     >             >       dom0_colors=<> parameter). Although this parameter is set in the command line you
>       >       >       >     >             >       provided, I strongly doubt that it is the actual command line in use.
>       >       >       >     >             >
>       >       >       >     >             >       You wrote:
>       >       >       >     >             >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>       >       >       >     >             >
>       >       >       >     >             >       but:
>       >       >       >     >             >       1) way_szize has a typo
>       >       >       >     >             >       2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
>       >       >       >     >             >       (XEN) Xen color(s): [ 0 ]
>       >       >       >     >             >
>       >       >       >     >             >       This makes me believe that no colors configuration actually ended up in the command
>       >       >       >     >             >       line that Xen booted with. A single color for Xen is the default if not specified,
>       >       >       >     >             >       and the way size was probably calculated by querying the HW.
>       >       >       >     >             >
>       >       >       >     >             >       So I would suggest first cross-checking the command line in use.
>       >       >       >     >             >
>       >       >       >     >             >       ~Michal
>       >       >       >     >             >
>       >       >       >     >             >
>       >       >       >     >             >       >
>       >       >       >     >             >       > Regards,
>       >       >       >     >             >       > Oleg
>       >       >       >     >             >       >
>       >       >       >     >             >       > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org>:
>       >       >       >     >             >       >
>       >       >       >     >             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>       >       >       >     >             >       >     > Hi Julien,
>       >       >       >     >             >       >     >
>       >       >       >     >             >       >     > >> This feature has not been merged in Xen upstream yet
>       >       >       >     >             >       >     >
>       >       >       >     >             >       >     > > would assume that upstream + the series on the ML [1] work
>       >       >       >     >             >       >     >
>       >       >       >     >             >       >     > Please clarify this point.
>       >       >       >     >             >       >     > Because the two statements seem contradictory.
>       >       >       >     >             >       >
>       >       >       >     >             >       >     Hi Oleg,
>       >       >       >     >             >       >
>       >       >       >     >             >       >     As Julien wrote, there is nothing controversial. As you are aware,
>       >       >       >     >             >       >     Xilinx maintains a separate Xen tree specific for Xilinx here:
>       >       >       >     >             >       >     https://github.com/xilinx/xen
>       >       >       >     >             >       >
>       >       >       >     >             >       >     and the branch you are using (xlnx_rebase_4.16) comes from there.
>       >       >       >     >             >       >
>       >       >       >     >             >       >
>       >       >       >     >             >       >     Instead, the upstream Xen tree lives here:
>       >       >       >     >             >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>       >       >       >     >             >       >
>       >       >       >     >             >       >
>       >       >       >     >             >       >     The Cache Coloring feature that you are trying to configure is present
>       >       >       >     >             >       >     in xlnx_rebase_4.16, but not yet present upstream (there is an
>       >       >       >     >             >       >     outstanding patch series to add cache coloring to Xen upstream but it
>       >       >       >     >             >       >     hasn't been merged yet).
>       >       >       >     >             >       >
>       >       >       >     >             >       >
>       >       >       >     >             >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
>       >       >       >     >             >       >     you as you already have Cache Coloring as a feature there.
>       >       >       >     >             >       >
>       >       >       >     >             >       >
>       >       >       >     >             >       >     I take it you are using ImageBuilder to generate the boot configuration? If
>       >       >       >     >             >       >     so, please post the ImageBuilder config file that you are using.
>       >       >       >     >             >       >
>       >       >       >     >             >       >     But from the boot message, it looks like the colors configuration for
>       >       >       >     >             >       >     Dom0 is incorrect.
>       >       >       >     >             >       >
>       >       >       >     >             >
>       >       >       >     >             >
>       >       >       >     >             >
>       >       >       >     >
>       >       >       >
>       >       >
>       >       >
>       >       >
>       >
>       >
>       >
> 
> 
> 
--8323329-1685484581-1682632248=:3419--


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 21:59:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 21:59:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527097.819330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps9dz-00048k-Re; Thu, 27 Apr 2023 21:59:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527097.819330; Thu, 27 Apr 2023 21:59:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps9dz-00048d-Nn; Thu, 27 Apr 2023 21:59:15 +0000
Received: by outflank-mailman (input) for mailman id 527097;
 Thu, 27 Apr 2023 21:59:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUcw=AS=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1ps9dx-00048X-Ps
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 21:59:13 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be0ef246-e546-11ed-b224-6b7b168915f2;
 Thu, 27 Apr 2023 23:59:12 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 1A85660B46;
 Thu, 27 Apr 2023 21:59:11 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D6281C433EF;
 Thu, 27 Apr 2023 21:59:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be0ef246-e546-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682632750;
	bh=9V2totps9QaIaAeRc12FcCixTxaKaBm1Nb6gJmSFR6Q=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GBCO7MobW1PQdVUANy1FsxBKVAgbATQM3DgO62qUKyaASIZ2Iy6I++MuTC1aegIYb
	 kZpUQnjCHrIEx45ZAhyaxECE3nbmMRG1rJEBJcjpRT2vVjl5h80+/cThUDH+nxJUJD
	 JfTNVASi7mxtBOGH0uXLyM8YNdYaHOjPZmS2xg+b14FwoVZ75uahzt1pbpaJEG4Png
	 bDwudD6Nuf0Q95AeyyvhBPkyM3L/FAZIuclsPnWcvKIJHNyLKK3/rfcZMvVxmRstCo
	 yDQI8RKC/B6sDvr6zu3deVz+B/e7hrDcWE3u4JinROMO4M9+DCd/6ncnbAZNWwZVTt
	 FCpoA4aMUP41g==
Date: Thu, 27 Apr 2023 14:59:08 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Michal Orzel <michal.orzel@amd.com>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 1/2] automation: xilinx: Set up bridging only for a
 default test case
In-Reply-To: <20230427120553.18088-2-michal.orzel@amd.com>
Message-ID: <alpine.DEB.2.22.394.2304271459020.3419@ubuntu-linux-20-04-desktop>
References: <20230427120553.18088-1-michal.orzel@amd.com> <20230427120553.18088-2-michal.orzel@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 27 Apr 2023, Michal Orzel wrote:
> At the moment, setting up a network bridge is unconditionally placed
> in the dom0 xen.start script. Since we might want to use the network
> interface (there is only one working GEM on the board) for other tests
> (e.g. passthrough), move the bridge setup into a dom0_check variable that is
> part of the default ping test (i.e. when no test variant is specified).
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/scripts/xilinx-smoke-dom0less-arm64.sh | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/automation/scripts/xilinx-smoke-dom0less-arm64.sh b/automation/scripts/xilinx-smoke-dom0less-arm64.sh
> index 82158ab7ea1b..73ba251f4cc1 100755
> --- a/automation/scripts/xilinx-smoke-dom0less-arm64.sh
> +++ b/automation/scripts/xilinx-smoke-dom0less-arm64.sh
> @@ -6,6 +6,14 @@ test_variant=$1
>  
>  if [ -z "${test_variant}" ]; then
>      passed="ping test passed"
> +    dom0_check="
> +brctl addbr xenbr0
> +brctl addif xenbr0 eth0
> +ifconfig eth0 up
> +ifconfig xenbr0 up
> +ifconfig xenbr0 192.168.0.1
> +xl network-attach 1 type=vif
> +"
>      domU_check="
>  until ifconfig eth0 192.168.0.2 &> /dev/null && ping -c 10 192.168.0.1; do
>      sleep 30
> @@ -51,13 +59,6 @@ bash /etc/init.d/xencommons start
>  
>  /usr/local/lib/xen/bin/init-dom0less
>  
> -brctl addbr xenbr0
> -brctl addif xenbr0 eth0
> -ifconfig eth0 up
> -ifconfig xenbr0 up
> -ifconfig xenbr0 192.168.0.1
> -
> -xl network-attach 1 type=vif
>  ${dom0_check}
>  " > etc/local.d/xen.start
>  chmod +x etc/local.d/xen.start
> -- 
> 2.25.1
> 
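The refactoring above hinges on dom0_check being populated only for the default run and then expanded into the generated xen.start script. A minimal sketch of that dispatch (variable names from the patch; the bridge setup reduced to a single stand-in command):

```shell
# Minimal sketch of the variable-based dispatch the patch introduces:
# dom0_check gets the bridge setup only when no test variant is given,
# and is later expanded into the xen.start script via "${dom0_check}".
test_variant=""          # empty means the default ping test
dom0_check=""
if [ -z "${test_variant}" ]; then
    dom0_check="brctl addbr xenbr0"   # stand-in for the full bridge setup
fi
echo "dom0_check=${dom0_check}"
```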


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 21:59:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 21:59:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527098.819339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps9eE-0004Ss-35; Thu, 27 Apr 2023 21:59:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527098.819339; Thu, 27 Apr 2023 21:59:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ps9eE-0004Sl-0J; Thu, 27 Apr 2023 21:59:30 +0000
Received: by outflank-mailman (input) for mailman id 527098;
 Thu, 27 Apr 2023 21:59:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JUcw=AS=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1ps9eD-0004SQ-7g
 for xen-devel@lists.xenproject.org; Thu, 27 Apr 2023 21:59:29 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c6477c31-e546-11ed-8611-37d641c3527e;
 Thu, 27 Apr 2023 23:59:26 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 43E8D63FB5;
 Thu, 27 Apr 2023 21:59:25 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0E9FEC433D2;
 Thu, 27 Apr 2023 21:59:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6477c31-e546-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682632764;
	bh=BUoSp+weCqvNgrY37BhxpEFM0kusq/eUBbrdMNJcRr0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=nKJrybTSQiBriNRuUnAqCrYfAavb4RRqDgDnJP+8wUPS1I1Ah8oM42hQ7MNqKNiR8
	 AYqxkGSi4/E7ZOVRWhpUCUSykTv7xSsZWdxQ2RsBXURPRhOtRDbaiQZ64w3Qa2cfLT
	 vo7v9QaQtDzUw/6UKcg5jTGHL7ktDZiJWkdvlOGZf8HkuYRZwAGocKetpIC9lkBUah
	 Kcq3/aajkpRalS09RNpYCHDHPRKZDkrXXmcdfJ7RYwQxez0MJX4+q2eAWdVt5kdEXz
	 /sseWLO9POMsM/LvlzrLe6z1xk5eoA1D/fBrAbVc6xmdeykDGl3RSxcsc6Y39daBuU
	 /vXeOy2wrrzHA==
Date: Thu, 27 Apr 2023 14:59:22 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Michal Orzel <michal.orzel@amd.com>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 2/2] automation: xilinx: Add GEM passthrough test
In-Reply-To: <20230427120553.18088-3-michal.orzel@amd.com>
Message-ID: <alpine.DEB.2.22.394.2304271459140.3419@ubuntu-linux-20-04-desktop>
References: <20230427120553.18088-1-michal.orzel@amd.com> <20230427120553.18088-3-michal.orzel@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 27 Apr 2023, Michal Orzel wrote:
> Being able to access a real board with real resources gives a great
> opportunity to finally test passing devices through to guests. Therefore,
> create a new Xilinx job to test GEM (Gigabit Ethernet MAC) controller
> passthrough to a dom0less domU.
> 
> By passing "gem-passthrough" as a test variant, the test will instruct
> ImageBuilder to use "eth0.dtb" (a passthrough dtb stored under the tftp
> server root) as the guest dtb and to add the "xen,passthrough" dtb property
> to the "/amba/ethernet@ff0e0000" node. The guest itself will try to bring up
> the network interface, dynamically obtain an IP address, and ping the
> default gateway.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Example job:
> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4189922473
> ---
>  automation/gitlab-ci/test.yaml                |  8 ++++++
>  .../scripts/xilinx-smoke-dom0less-arm64.sh    | 25 +++++++++++++++++++
>  2 files changed, 33 insertions(+)
> 
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index d68c584269dd..3409d704a7eb 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -131,6 +131,14 @@ xilinx-smoke-dom0less-arm64-gcc:
>      - *arm64-test-needs
>      - alpine-3.12-gcc-arm64
>  
> +xilinx-smoke-dom0less-arm64-gcc-gem-passthrough:
> +  extends: .xilinx-arm64
> +  script:
> +    - ./automation/scripts/xilinx-smoke-dom0less-arm64.sh gem-passthrough 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - *arm64-test-needs
> +    - alpine-3.12-gcc-arm64
> +
>  adl-smoke-x86-64-gcc-debug:
>    extends: .adl-x86-64
>    script:
> diff --git a/automation/scripts/xilinx-smoke-dom0less-arm64.sh b/automation/scripts/xilinx-smoke-dom0less-arm64.sh
> index 73ba251f4cc1..075305241c8d 100755
> --- a/automation/scripts/xilinx-smoke-dom0less-arm64.sh
> +++ b/automation/scripts/xilinx-smoke-dom0less-arm64.sh
> @@ -22,6 +22,22 @@ echo \"${passed}\"
>  "
>  fi
>  
> +if [[ "${test_variant}" == "gem-passthrough" ]]; then
> +    passed="${test_variant} test passed"
> +
> +    # For a passthroughed GEM:
> +    # - bring up the network interface
> +    # - dynamically assign IP
> +    # - ping the default gateway
> +    domU_check="
> +set -ex
> +ifconfig eth0 up
> +udhcpc -i eth0 -n
> +ping -c 10 \$(ip route | awk '/^default/ {print \$3}')
> +echo \"${passed}\"
> +"
> +fi
> +
>  # DomU
>  mkdir -p rootfs
>  cd rootfs
> @@ -96,6 +112,15 @@ cp -f binaries/domU-rootfs.cpio.gz $TFTP/
>  # export dtb to artifacts
>  cp $TFTP/mpsoc_smmu.dtb .
>  
> +if [[ "${test_variant}" == "gem-passthrough" ]]; then
> +    echo "
> +    DOMU_PASSTHROUGH_DTB[0]=\"eth0.dtb\"
> +    DOMU_PASSTHROUGH_PATHS[0]=\"/amba/ethernet@ff0e0000\"" >> $TFTP/config
> +
> +    # export passthrough dtb to artifacts
> +    cp $TFTP/eth0.dtb .
> +fi
> +
>  rm -rf imagebuilder
>  git clone https://gitlab.com/ViryaOS/imagebuilder
>  bash imagebuilder/scripts/uboot-script-gen -t tftp -d $TFTP/ -c $TFTP/config
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Apr 27 22:56:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Apr 2023 22:56:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527106.819354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psAWp-0002cK-7W; Thu, 27 Apr 2023 22:55:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527106.819354; Thu, 27 Apr 2023 22:55:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psAWp-0002cD-4R; Thu, 27 Apr 2023 22:55:55 +0000
Received: by outflank-mailman (input) for mailman id 527106;
 Thu, 27 Apr 2023 22:55:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psAWn-0002c3-E1; Thu, 27 Apr 2023 22:55:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psAWn-0001kN-CK; Thu, 27 Apr 2023 22:55:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psAWm-00058z-RJ; Thu, 27 Apr 2023 22:55:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psAWm-0004Ny-Ov; Thu, 27 Apr 2023 22:55:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MsgpDkD6MQQSE4IS8opdLsCIv3/wIJfqBbUoMdzbwhw=; b=OLnwjdetA9zWz/Se71q6qE2MZ6
	d9F/VXBQaBv92s077s95Gv8FQ/to9AhpqKfeo0EhGqqVUGu5whd/j/EjLT3TuFJahcV78WpUIeQMH
	7PPrknCsBhrg1JdGzSJ1Jhtv0IZdTL68qNo+za/FgWKAwB/dlmsAv18SaubhzraKf7es=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180446-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 180446: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-4.17-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=0880df6f5f905bffc86dd181a8af64f16cc62110
X-Osstest-Versions-That:
    xen=8b5be1fe938f52b5d3682dee7702fd51c8cfb61b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Apr 2023 22:55:52 +0000

flight 180446 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180446/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180422
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180422
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180422
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180422
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180422
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180422
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180422
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180422
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180422
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180422
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180422
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180422
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  0880df6f5f905bffc86dd181a8af64f16cc62110
baseline version:
 xen                  8b5be1fe938f52b5d3682dee7702fd51c8cfb61b

Last test of basis   180422  2023-04-26 03:03:06 Z    1 days
Testing same since   180446  2023-04-27 13:08:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8b5be1fe93..0880df6f5f  0880df6f5f905bffc86dd181a8af64f16cc62110 -> stable-4.17


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 00:31:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 00:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527113.819364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psC1R-00054E-7v; Fri, 28 Apr 2023 00:31:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527113.819364; Fri, 28 Apr 2023 00:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psC1R-000547-5F; Fri, 28 Apr 2023 00:31:37 +0000
Received: by outflank-mailman (input) for mailman id 527113;
 Fri, 28 Apr 2023 00:31:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CQRl=AT=kernel.org=pr-tracker-bot@srs-se1.protection.inumbo.net>)
 id 1psC1P-000541-M7
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 00:31:35 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 066fa2e6-e55c-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 02:31:33 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 18A84640A3;
 Fri, 28 Apr 2023 00:31:32 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPS id 7DDC1C4339B;
 Fri, 28 Apr 2023 00:31:31 +0000 (UTC)
Received: from aws-us-west-2-korg-oddjob-1.ci.codeaurora.org
 (localhost.localdomain [127.0.0.1])
 by aws-us-west-2-korg-oddjob-1.ci.codeaurora.org (Postfix) with ESMTP id
 6C499E5FFC8; Fri, 28 Apr 2023 00:31:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 066fa2e6-e55c-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682641891;
	bh=yQBYUeEWds65eDsZqLypvvncMG/gMCNpVIptuh3Jl/g=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=YxWqsX6TehvwKrSgrxe+kUpRIgAiJCIU2WhF2l4yoaJs5+7zCj5+ydh9LTP+AIm+R
	 RmhxgO738sIFjNJZGStvbF497mhkm3c4qOffKLqN11TAUPv4noc3n08C9vkvQAUFOE
	 SmA/f2OAaasA3WDMm3Td6p8n4dOt+JwahcGYp3Y9m/XCEbtwO3iGDqunzGrWETA3Q/
	 FsxDX/qswRkps/sZYAaqT87SzsPePvSHooI2+HtoSq9PgRDSieKoAZb2YrlnzptfXy
	 11EE6cuT+pf6I8PW7nls/O2v4tNbEtM3Tp+gXcXpXj2v7G0NtOHIouSBSS3Z2wPKp5
	 GzKJMX6v7dxZQ==
Subject: Re: [GIT PULL] xen: branch for v6.4-rc1
From: pr-tracker-bot@kernel.org
In-Reply-To: <20230427073808.12580-1-jgross@suse.com>
References: <20230427073808.12580-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20230427073808.12580-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-6.4-rc1-tag
X-PR-Tracked-Commit-Id: cbfac7707ba16619006a4fd60faac46303fd2f3e
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 35fab9271b7e6d193b47005c4d07369714db4fd1
Message-Id: <168264189143.7031.13138758191360122835.pr-tracker-bot@kernel.org>
Date: Fri, 28 Apr 2023 00:31:31 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org

The pull request you sent on Thu, 27 Apr 2023 09:38:08 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-6.4-rc1-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/35fab9271b7e6d193b47005c4d07369714db4fd1

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 00:38:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 00:38:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527120.819374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psC7c-0005lA-1L; Fri, 28 Apr 2023 00:38:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527120.819374; Fri, 28 Apr 2023 00:38:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psC7b-0005l3-Uv; Fri, 28 Apr 2023 00:37:59 +0000
Received: by outflank-mailman (input) for mailman id 527120;
 Fri, 28 Apr 2023 00:37:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psC7a-0005kt-VC; Fri, 28 Apr 2023 00:37:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psC7a-0004e7-H5; Fri, 28 Apr 2023 00:37:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psC7Z-0002Yz-TC; Fri, 28 Apr 2023 00:37:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psC7Z-0000uj-Sf; Fri, 28 Apr 2023 00:37:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ndEa4VVlZQ1/m4hpv65tI0uBcu1m29Q/49K9GsySXUI=; b=JrI2YTtfXApIX2Rs9Kar1YAcj7
	3MIZRGrdbwIuqRtiLLEInLVhy4g5F0kmw4sas6FHUuYYOEbpWhgP26xenBnlBPmrexomuOhCMFZbQ
	QhTHqLFDq65trD2FW1uH718hog74+2Wiqo2PI/ETTLJvRebdB34SstBElcxYDh8m+MCU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180440-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180440: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6e98b09da931a00bf4e0477d0fa52748bf28fcce
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 00:37:57 +0000

flight 180440 linux-linus real [real]
flight 180455 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180440/
http://logs.test-lab.xenproject.org/osstest/logs/180455/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6e98b09da931a00bf4e0477d0fa52748bf28fcce
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   11 days
Failing since        180281  2023-04-17 06:24:36 Z   10 days   18 attempts
Testing same since   180440  2023-04-27 05:55:33 Z    0 days    1 attempts

------------------------------------------------------------
1623 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 166540 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 00:44:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 00:44:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527126.819384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psCDE-0007D8-NC; Fri, 28 Apr 2023 00:43:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527126.819384; Fri, 28 Apr 2023 00:43:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psCDE-0007D1-JJ; Fri, 28 Apr 2023 00:43:48 +0000
Received: by outflank-mailman (input) for mailman id 527126;
 Fri, 28 Apr 2023 00:43:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psCDE-0007Cr-1v; Fri, 28 Apr 2023 00:43:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psCDD-0004jk-Iu; Fri, 28 Apr 2023 00:43:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psCDC-0002kK-US; Fri, 28 Apr 2023 00:43:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psCDC-0008Px-To; Fri, 28 Apr 2023 00:43:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=J5wFZJGm+fBSPARAxXR5FGqoBdfLM9+5YADp7nyW0u4=; b=2eBEAvQuNJaxqq22GjtfMvnVqY
	YlHaDVk7385VOVwDk9MW+08R8sg26qGB0XG4CIcikUcapvGyGd9ka3hrgNaonPTsiNfVgfoKQPzr2
	2wLtoBOZ4Mh84iXwqxOkpgDeUMDO+oqQilebgIMcwLz7HtOE/d+Z2kXIsJYEvC8NCjuY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180454-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180454: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e5e1cd1a83e2e7aa2179db3de5fc00d76713ec6f
X-Osstest-Versions-That:
    ovmf=edacc551e6586258ab046dd852f65d674e3e2af0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 00:43:46 +0000

flight 180454 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180454/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e5e1cd1a83e2e7aa2179db3de5fc00d76713ec6f
baseline version:
 ovmf                 edacc551e6586258ab046dd852f65d674e3e2af0

Last test of basis   180436  2023-04-26 17:42:39 Z    1 days
Testing same since   180454  2023-04-27 22:12:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gua Guo <gua.guo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   edacc551e6..e5e1cd1a83  e5e1cd1a83e2e7aa2179db3de5fc00d76713ec6f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 01:37:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 01:37:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527136.819394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psD2t-0002aj-OL; Fri, 28 Apr 2023 01:37:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527136.819394; Fri, 28 Apr 2023 01:37:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psD2t-0002ab-In; Fri, 28 Apr 2023 01:37:11 +0000
Received: by outflank-mailman (input) for mailman id 527136;
 Fri, 28 Apr 2023 01:37:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psD2s-0002aR-Pa; Fri, 28 Apr 2023 01:37:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psD2s-0004Gr-EK; Fri, 28 Apr 2023 01:37:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psD2s-0004z1-4P; Fri, 28 Apr 2023 01:37:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psD2s-0006RE-3n; Fri, 28 Apr 2023 01:37:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=A5Rw1pdz6glX+Wipd68utgA0QTwGmHdnTNzEjGCbABc=; b=BGonZOohCeejTo5NlUVTifBVYF
	/zs9RyQS3fEkd0on7kOSeSJddyOkoybg+czGkdeW/ncZJY4hupsNuH16lR0Mv4KZUeu36oFapOMg8
	q6pafzNE3k7JYunqbpK3kSlaFOFFMOjOwWr2YjsYLF/gRRYtqeGnXx+ZpzufVkR4s3eo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180453-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180453: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:guest-start/debian.repeat:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8e974df445807bb4a3629ca51145c7d74ee85c8f
X-Osstest-Versions-That:
    xen=dde20f7dc182fdfeeb6c55648979326bb982ca8c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 01:37:10 +0000

flight 180453 xen-unstable-smoke real [real]
flight 180456 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180453/
http://logs.test-lab.xenproject.org/osstest/logs/180456/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm     18 guest-start/debian.repeat fail REGR. vs. 180435

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8e974df445807bb4a3629ca51145c7d74ee85c8f
baseline version:
 xen                  dde20f7dc182fdfeeb6c55648979326bb982ca8c

Last test of basis   180435  2023-04-26 17:01:58 Z    1 days
Testing same since   180453  2023-04-27 21:02:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8e974df445807bb4a3629ca51145c7d74ee85c8f
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Wed Apr 26 02:16:16 2023 +0200

    automation: include tail of serial log in the gitlab output
    
    Make it a bit easier to see what has failed.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3822b16a17dfa6396009c4acaf2ae660f933566f
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Wed Apr 26 02:16:15 2023 +0200

    automation: PCI passthrough tests on ADL hw
    
    Add a simple PCI passthrough test to both PV and HVM domUs. It passes
    through a network adapter (the only one in the system), gets an IP via
    DHCP (first basic test) and then pings the gateway (second basic test).
    Finally, if the device is supposed to use MSI or MSI-X (as set in the
    PCIDEV_INTR test variable), check via /proc/interrupts that it is in use.
    
    On the current runner, the device in question is this:
    03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3] (rev 03)
            Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:7d25]
            Flags: bus master, fast devsel, latency 0, IRQ 18
            Memory at 50400000 (32-bit, non-prefetchable) [size=1M]
            Memory at 50500000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: [40] Power Management version 3
            Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
            Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
            Capabilities: [a0] Express Endpoint, MSI 00
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Device Serial Number ...
            Capabilities: [1c0] Latency Tolerance Reporting
            Capabilities: [1f0] Precision Time Measurement
            Capabilities: [1e0] L1 PM Substates
            Kernel driver in use: igc
            Kernel modules: igc
    
    With the current Xen version, it uses MSI-X under PV and MSI under HVM.
    
    This patch moves the domU config to a variable, to make it configurable
    on a per-test basis. It also adds a few comments for visual separation
    of tests.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
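A rough illustration of the /proc/interrupts check the commit message describes; the line format below is an assumption based on typical Linux output, and the device names are hypothetical (the actual test script is not reproduced here):

```python
def irq_mode(interrupts_text, device):
    """Scan /proc/interrupts-style text for the given device name and
    report whether its vectors look like MSI-X, MSI, or neither (None)."""
    for line in interrupts_text.splitlines():
        if device not in line:
            continue
        token = line.lower()
        # Check the more specific marker first: MSI-X lines would
        # otherwise also match the plain "msi" substring.
        if "msix" in token or "msi-x" in token:
            return "MSI-X"
        if "msi" in token:
            return "MSI"
    return None

# Sample text mimicking /proc/interrupts lines (format assumed).
sample = (
    " 128:  0  1  PCI-MSI 524288-edge        enp3s0\n"
    " 129:  2  0  PCI-MSIX-0000:03:00.0 0-edge  eth0-rx-0\n"
    "  18:  5  0  IO-APIC 18-fasteoi         legacy0\n"
)
```

For example, `irq_mode(sample, "eth0")` reports `"MSI-X"` while the legacy IO-APIC device reports `None`.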

commit 937e73feca9abaea06ec496cd93f8da8bd3b70bf
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Wed Apr 26 02:16:14 2023 +0200

    automation: wait for the login prompt as test end marker
    
    The login prompt is printed after all the startup (test) scripts, so
    wait for that instead of the "passed" marker, and only then check
    whether the test passed. Before this patch there was a race: the
    "passed" marker could already be printed, but the final check would
    fail because the login prompt wasn't there yet.
    
    Also, modify /etc/issue in the domU rootfs so its login prompt isn't
    confused with dom0's, and use the dom0 prompt as the test end marker.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
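A minimal sketch of the race-free ordering described above, using hypothetical marker strings (the real harness scripts are not reproduced here):

```python
def passed_after_boot(serial_log, end_marker="login:", pass_marker="passed"):
    """Judge the test only once the end marker (the login prompt) has
    appeared. Checking for the pass marker before the end marker shows
    up is exactly the race the commit fixes: "passed" may already be
    printed while boot output is still in flight."""
    idx = serial_log.find(end_marker)
    if idx == -1:
        return None  # run not finished yet: keep waiting, don't judge
    # Only now is it safe to look for the pass marker in the output
    # that preceded the login prompt.
    return pass_marker in serial_log[:idx]
```

So `passed_after_boot("boot ok\npassed\ntestdom0 login:")` is `True`, while a log with no login prompt yet yields `None` rather than a premature failure.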

commit ac58d7fda63fecc6fae24f7824dbe033d001833e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 26 15:34:30 2023 +0100

    CI: Remove all use of /bin/false as a ROM
    
    As the recent work to get PCI Passthrough testing working shows, putting
    `/bin/false` as a ROM into guest context doesn't work so well.
    
    For all ROM paths where we're skipping the build, use a slightly-plausible but
    likely non-existent path instead.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 09c7179f0a2c66d4d1716cc41c498349bf78811b
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu Apr 27 14:25:59 2023 +0100

    xen/misra: xen-analysis.py: fix return error on PhaseExceptions
    
    Currently the script's return code is 0 even if an exception is
    raised, because the return code is set only when the exception
    object has the errorcode member.
    
    Fix the issue by returning the errorcode member when it exists,
    otherwise falling back to a generic non-zero value.
    
    Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)
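The errorcode fallback in the last commit above can be sketched as follows; the exception class name and the fallback value of 1 are illustrative assumptions, not the script's actual definitions:

```python
class PhaseException(Exception):
    """Stand-in for the analysis script's phase exception (name assumed)."""

    def __init__(self, msg, errorcode=None):
        super().__init__(msg)
        if errorcode is not None:
            self.errorcode = errorcode


def exit_code_for(exc):
    # Use the exception's errorcode when it is set; otherwise fall back
    # to a generic non-zero value so the caller still sees a failure
    # instead of the buggy implicit 0.
    return getattr(exc, "errorcode", 1)


print(exit_code_for(PhaseException("cppcheck failed", errorcode=9)))  # -> 9
print(exit_code_for(PhaseException("no errorcode set")))              # -> 1
```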


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 04:37:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 04:37:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527146.819407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psFqw-0003wD-9o; Fri, 28 Apr 2023 04:37:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527146.819407; Fri, 28 Apr 2023 04:37:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psFqw-0003w6-6n; Fri, 28 Apr 2023 04:37:02 +0000
Received: by outflank-mailman (input) for mailman id 527146;
 Fri, 28 Apr 2023 04:37:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psFqu-0003vw-SV; Fri, 28 Apr 2023 04:37:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psFqu-0000lV-Jk; Fri, 28 Apr 2023 04:37:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psFqu-0004ym-7K; Fri, 28 Apr 2023 04:37:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psFqu-00071E-6m; Fri, 28 Apr 2023 04:37:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wlABoyqL8aSdju5O86Ltn7i+2+m+nVQQZOB234o5NdY=; b=OATQTlAJD345f1Mrenfa8tpZ8o
	lp002BPoTX4aSRMIt1WpdIcmjlHRGrSobBY8ci/v9eS9jvEuiGdZ0yPQ7XPSJNDpBI3qVgRIF94Xb
	D6UGzLlx4braXDxIqDXIFG1zMwrqbKsWiAliMGq+doN9xfgIC3iJQZuXlZrmvOk6u0as=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180459-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180459: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8e974df445807bb4a3629ca51145c7d74ee85c8f
X-Osstest-Versions-That:
    xen=dde20f7dc182fdfeeb6c55648979326bb982ca8c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 04:37:00 +0000

flight 180459 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180459/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8e974df445807bb4a3629ca51145c7d74ee85c8f
baseline version:
 xen                  dde20f7dc182fdfeeb6c55648979326bb982ca8c

Last test of basis   180435  2023-04-26 17:01:58 Z    1 days
Testing same since   180453  2023-04-27 21:02:00 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   dde20f7dc1..8e974df445  8e974df445807bb4a3629ca51145c7d74ee85c8f -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 05:24:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 05:24:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527153.819417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psGab-00015c-0P; Fri, 28 Apr 2023 05:24:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527153.819417; Fri, 28 Apr 2023 05:24:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psGaa-00015V-Sv; Fri, 28 Apr 2023 05:24:12 +0000
Received: by outflank-mailman (input) for mailman id 527153;
 Fri, 28 Apr 2023 05:24:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psGaZ-00015L-K2; Fri, 28 Apr 2023 05:24:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psGaZ-0001z8-DD; Fri, 28 Apr 2023 05:24:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psGaZ-0006fe-0r; Fri, 28 Apr 2023 05:24:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psGaZ-0005GS-0N; Fri, 28 Apr 2023 05:24:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mUsoHoapc2bBNEKPPn3SrrEPLoLjOJjMizn9BqhgeQk=; b=VKT8cNrbEZfTCS+Wrxv2WWe7EM
	b0T17MhwnzPdCBhpEmvgNWju/zwL1WQR+Y2KTtOzpTCirQEFiXfcl6xPTkNp8Wv2J1jvfX27cappC
	tHausMOr5FQPhCj3P9LZvTXc2SYYNhb43lGEYxAbaRTTe0e6h7AHkkl0sC7ZIvDg9wRg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180443-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 180443: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-5.4:test-amd64-i386-pair:guest-start/debian:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ea7862c507eca54ea6caad9dcfc8bba5e749fbde
X-Osstest-Versions-That:
    linux=58f42ed1cd31238745bddd943c4f5849dc83a2ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 05:24:11 +0000

flight 180443 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180443/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 180369

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pair       25 guest-start/debian fail in 180428 pass in 180443
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 180428 pass in 180443
 test-armhf-armhf-xl-credit1 18 guest-start/debian.repeat fail in 180428 pass in 180443
 test-amd64-i386-libvirt-pair 10 xen-install/src_host       fail pass in 180428
 test-amd64-i386-libvirt-raw   7 xen-install                fail pass in 180428

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 180428 never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 180428 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 180428 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180369
 test-armhf-armhf-xl-credit2  14 guest-start                  fail  like 180369
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180369
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180369
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180369
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180369
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180369
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180369
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180369
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180369
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180369
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180369
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 180369
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180369
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ea7862c507eca54ea6caad9dcfc8bba5e749fbde
baseline version:
 linux                58f42ed1cd31238745bddd943c4f5849dc83a2ac

Last test of basis   180369  2023-04-21 21:43:52 Z    6 days
Testing same since   180428  2023-04-26 09:43:27 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alyssa Ross <hi@alyssa.is>
  Anders Roxell <anders.roxell@linaro.org>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Baokun Li <libaokun1@huawei.com>
  Bhavya Kapoor <b-kapoor@ti.com>
  Brian Masney <bmasney@redhat.com>
  Chandan Babu R <chandan.babu@oracle.com>
  Chris Paterson (CIP) <chris.paterson2@renesas.com>
  Christoph Hellwig <hch@lst.de>
  Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <error27@gmail.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Darrick J. Wong <darrick.wong@oracle.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Young <dyoung@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Douglas Raillard <douglas.raillard@arm.com>
  Ekaterina Orlova <vorobushek.ok@gmail.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Gao Xiang <hsiangkao@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gwangun Jung <exsociety@gmail.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Heiko Carstens <hca@linux.ibm.com>
  Heiko Stuebner <heiko@sntech.de>
  Ingo Molnar <mingo@kernel.org>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim<jhs@mojatatu.com>
  Jason Wang <jasowang@redhat.com>
  Jianqun Xu <jay.xu@rock-chips.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Denose <jdenose@chromium.org>
  Jonathan Denose <jdenose@google.com>
  Juergen Gross <jgross@suse.com>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Mark Brown <broonie@kernel.org>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Naama Meir <naamax.meir@linux.intel.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nick Desaulniers <ndesaulniers@google.com>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Pingfan Liu <kernelfans@gmail.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Ritesh Harjani <riteshh@linux.ibm.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sasha Levin <sashal@kernel.org>
  Sebastian Basierski <sebastianx.basierski@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <thierry.reding@gmail.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Tomas Henzl <thenzl@redhat.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tudor Ambarus <tudor.ambarus@linaro.org>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vasily Gorbik <gor@linux.ibm.com>
  Xuan Zhuo <xuanzhuo@linux.alibaba.com>
  Yanjun Zhang <zhangyanjun@cestc.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>
  Álvaro Fernández Rojas <noltari@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1535 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 05:32:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 05:32:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527158.819427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psGi3-0002Zf-PK; Fri, 28 Apr 2023 05:31:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527158.819427; Fri, 28 Apr 2023 05:31:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psGi3-0002ZY-Me; Fri, 28 Apr 2023 05:31:55 +0000
Received: by outflank-mailman (input) for mailman id 527158;
 Fri, 28 Apr 2023 05:31:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xpt/=AT=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1psGi2-0002ZS-0D
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 05:31:54 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20631.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fad1bd89-e585-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 07:31:52 +0200 (CEST)
Received: from AM6P194CA0064.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:84::41)
 by AS2PR08MB9596.eurprd08.prod.outlook.com (2603:10a6:20b:608::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.23; Fri, 28 Apr
 2023 05:31:50 +0000
Received: from AM7EUR03FT048.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:84:cafe::83) by AM6P194CA0064.outlook.office365.com
 (2603:10a6:209:84::41) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24 via Frontend
 Transport; Fri, 28 Apr 2023 05:31:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT048.mail.protection.outlook.com (100.127.140.86) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.23 via Frontend Transport; Fri, 28 Apr 2023 05:31:49 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Fri, 28 Apr 2023 05:31:49 +0000
Received: from 354fcd50ed2d.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 63502CC3-B893-4E98-8CAE-7B22DED7272A.1; 
 Fri, 28 Apr 2023 05:31:43 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 354fcd50ed2d.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 28 Apr 2023 05:31:43 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AM9PR08MB6081.eurprd08.prod.outlook.com (2603:10a6:20b:2dd::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24; Fri, 28 Apr
 2023 05:31:39 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::bce1:f206:86af:31be]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::bce1:f206:86af:31be%5]) with mapi id 15.20.6340.024; Fri, 28 Apr 2023
 05:31:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fad1bd89-e585-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C80hBpRfopBuAUbs21hwg3cF5r8UOx9NeTcCdymtSfs=;
 b=S1HDtL1tO/C3KsFLJAZhRxSmqzdJbL3MGCvIgtjJTxvMZZlz3eT/R9/eAVPpI5FrnlzoxAV+tnvFafjRCLGeJPwrPYIs5uEKvrmH6voUMVcTHNw+XC8iMvaDU9tlqrrYMCVqy0XhXDzfg4Xinoe9M54wwJZJHko0I6cITt5RE8M=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3c9f0d3445903b3b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y63QRZwhapNVui/lakBfkpVSbakOVT6+n5ijtvirZKpGX3ZTC9+cfykbXuHxAmy1czBKEcQdj22Gas/lGkpUW9NWQK7liy47JV+QQszMu4+90uDq0/Eom1lGUEsv5Es2vJvfDls20DNSEscBZdhbRVXzMRbCpJ33cm7sZRDRQsF9+PQdZTfTKVfANCWJB6CZCSU/rli1PogJgj63oNagMVF0Nul6ZOZGGR/A4FKcaATbYlfv/DAgTMOxpNZBqxRYZbgVVyQV6RzjlBudBA97Y2FP9qExT0PRIFbVgnvOBFAWhPlZxy9igOHLRtvytF+mUxYHfwhqiLlgwPiMjtZl3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=C80hBpRfopBuAUbs21hwg3cF5r8UOx9NeTcCdymtSfs=;
 b=XH/yv4cprBDFDtxZI1FNL9Gsd0oCL2d9gPLCImjvuv+EKjD5F5Hubza7nICvH3PJ12xH+kR5A/IdYLlSQt5wd7g7pc1VlTIH/9/eBjwpUtzJ56sg2RlDONN+G+a2m3GzUCYa1DXw7kfcWldUNvPNxxpteF+miGIyvD1YAWQnpcHgWLcPpO3uryOH+XceNp8b4tQSeXLLrxLEQ8+NlOXsC+7fvxPBJUKU6JGv3P1tAIUqzOS1SBzgsR1dE+RJ2VWBtCmbrvct2tS4I3r4FVVyez+bUNOZgTXO4R8XV6QS7NbQkqZ7gMxIcVc3ZXMyeUVt9bmCbPXa8C/xBkAwtP5b1A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C80hBpRfopBuAUbs21hwg3cF5r8UOx9NeTcCdymtSfs=;
 b=S1HDtL1tO/C3KsFLJAZhRxSmqzdJbL3MGCvIgtjJTxvMZZlz3eT/R9/eAVPpI5FrnlzoxAV+tnvFafjRCLGeJPwrPYIs5uEKvrmH6voUMVcTHNw+XC8iMvaDU9tlqrrYMCVqy0XhXDzfg4Xinoe9M54wwJZJHko0I6cITt5RE8M=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>
Subject: Re: [PATCH v2] xen/misra: xen-analysis.py: fix return error on
 PhaseExceptions
Thread-Topic: [PATCH v2] xen/misra: xen-analysis.py: fix return error on
 PhaseExceptions
Thread-Index: AQHZeQvr/NpTS3xi9kqkgHy5QIJUF68/lZYAgACdrgA=
Date: Fri, 28 Apr 2023 05:31:39 +0000
Message-ID: <D04634DD-8278-4F48-8A89-65405C261E77@arm.com>
References: <20230427132559.14712-1-luca.fancellu@arm.com>
 <29317463-a5de-2a5d-e217-498d3250ceae@citrix.com>
In-Reply-To: <29317463-a5de-2a5d-e217-498d3250ceae@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AM9PR08MB6081:EE_|AM7EUR03FT048:EE_|AS2PR08MB9596:EE_
X-MS-Office365-Filtering-Correlation-Id: 777ec377-6587-4703-fc2b-08db47a9ddba
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Qw8Er/EA8fHtWgx8wn3gyJEtt47ufSbKs7X6hU5+kqvVvK38v1zl3eXvzKMl2cKRBpArCy0lxTWOyO1CVuKBopQs+lKo0ts76b19zeuZHfPdIhoo5bc/jQawoTK1LoRmr7vfQgNYCzltNH9BoIuBThGtMFBC9KgD/qXofuAwsAtRG5rmz8mtMJzxKWJRnwI6Bn9QyLczhyEN9IaI9ReOf5Cq+HS6FqxuQLWT27XsNzh51Z6qXjEXoWXiQuIj5N1LDrsAQFsK79sVpv2d4HjX2iKkrkLyadUrnOV61OyqB59Wm1wlaipE1rpmwE5ziHyfv4962dyndbiLjpGHuBJPt08F/vL/MoNnmcjhp+3vMOwYktks3XsonRJeezmuGmJOwoHEEQgSfW3308h/LluIoJzTWAa06HfWrdF2YuoOshF6QrRaGS70v0j3SKTHY+Kx1m+rtOlFhSK1VL5proFth5MncwTI7KC9uDVEaBCYUHZ3OIk/WjZqhSdtNb7oTdtvylC93uET1V0t5NnOqiMw2jdHxXf2hVAjV3sDiiuZXCU5Ec1Qucwz9LdvomyGcbMRAwvoALtdlzmGxs+Ka5jGAkRZUd/jmCEcsR5qz9lqaauLoAznGtlyMVR9vZ1ijxk4BIbvtnU8An44M5T405/FSg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(366004)(39860400002)(346002)(136003)(396003)(451199021)(54906003)(478600001)(5660300002)(8676002)(8936002)(36756003)(2906002)(38070700005)(86362001)(33656002)(64756008)(66446008)(66476007)(66556008)(66946007)(76116006)(4326008)(6916009)(316002)(91956017)(38100700002)(41300700001)(122000001)(26005)(53546011)(6512007)(6506007)(83380400001)(2616005)(186003)(71200400001)(6486002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <0F9344BB88E99F499077B741C465CBE2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6081
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3993550e-ff4b-402d-d47c-08db47a9d784
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	A4QlMo8Lco8CkesJZqUGEW33jPEf8Vg5NxpLWBqFeuEF+gqECGp0SSu92wBfx0OSYIiNXtgkJCeQjgfnjZyqfS133CYKqZ3ydrGF/y1C5xyEHVH5kXFVaDOAOxwydCtmRDIu2o0dn4TpSZTAEbEjXchuuJTQ3uQiNSLSkb5le/OSrV/cX4e2u81lxjAsUIqx/ryRTi9Caer91sZ+Q/Y8lvw0M4+djew85erMmXPvCYeecSYM9VHYUd0kY8lNBaVSGpib8X/6icH4kMs3p3Omamgi9iFpJRTBZ3RCaZ11GTb2JhUMF54OzvibQxOv0B66HacwIXVkJ+hOC3APQ0ihpcg/BmGXkWaTp4/8qWJz8MCtZFCUwlV7jEu9bUP+xWpqK28heoTv/YQqzVbLfI+14Wxx0yN0WdsQmXe9TCFokHod3AF20JFtHrIfrMtSwWp2JcK9tOLZ4EaGd77hh/S2nwJL9ANeOYmmFuXdPXJi5gfSeB5yC+oP5TcRM2CBopUcg/CmMOBWuA+vtHpw0zpV6p5/dJBkTkezCRpfniQKrRtKsd5Vh2T3qZ97Ch9HjaE8C5g+hHExa7mcx0YfVXuIVqBuQFDn+0MpKCrC4pG7hi1m5ANrsgoD+ELPCrHzumF3xMQZt8DyMDORHY/XKuFRbGob+eyOD7/d+auVK+s3iXrmKaDSFA7npx20ej+oXOwk101/nHsGmmc9T9XGUNJVR+AqkEKp31JX20JiFL1viOs=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(376002)(39860400002)(346002)(451199021)(46966006)(36840700001)(40470700004)(40480700001)(34020700004)(2616005)(40460700003)(33656002)(82310400005)(6486002)(81166007)(478600001)(82740400003)(6506007)(53546011)(36756003)(356005)(8936002)(8676002)(41300700001)(2906002)(316002)(5660300002)(6862004)(86362001)(70586007)(4326008)(54906003)(70206006)(6512007)(26005)(186003)(47076005)(83380400001)(36860700001)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 05:31:49.7496
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 777ec377-6587-4703-fc2b-08db47a9ddba
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9596



> On 27 Apr 2023, at 21:07, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> On 27/04/2023 2:25 pm, Luca Fancellu wrote:
>> Currently the script return code is 0 even if an exception is
>> raised, because the return code is set only if the exception
>> object has the errorcode member.
>>
>> Fix the issue by returning the errorcode member if it exists;
>> otherwise use a generic non-zero value.
>>
>> Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
>> Change-Id: I1b76b8fa4668bef49da3282339fca3052e3379cd
>
> although this doesn't look like it should be here.  I've stripped it.

My bad! I forgot to remove it before sending. Thanks

>
> ~Andrew
>
>> ---
>> Changes from v1:
>> - use getattr() (Andrew)
>> ---
>> xen/scripts/xen-analysis.py | 3 +--
>> 1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/xen/scripts/xen-analysis.py b/xen/scripts/xen-analysis.py
>> index 8e50c27cd898..5e8f2910cd72 100755
>> --- a/xen/scripts/xen-analysis.py
>> +++ b/xen/scripts/xen-analysis.py
>> @@ -26,8 +26,7 @@ def main(argv):
>>             cppcheck_analysis.generate_cppcheck_report()
>>     except PhaseExceptions as e:
>>         print("ERROR: {}".format(e))
>> -        if hasattr(e, "errorcode"):
>> -            ret_code = e.errorcode
>> +        ret_code = getattr(e, "errorcode", 1)
>>     finally:
>>         if settings.step_clean_analysis:
>>             cppcheck_analysis.clean_analysis_artifacts()
>
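[Editor's note: the pattern this patch adopts can be shown with a minimal standalone sketch. This is not the actual xen-analysis.py code; the `PhaseError` class and `run`/`fail_*` helpers below are hypothetical stand-ins invented for illustration. The point is that `getattr()` with a default collapses the `hasattr()` check and the missing fallback into one expression, so the return code is always non-zero once an exception is caught.]

```python
class PhaseError(Exception):
    """Stand-in for the script's PhaseExceptions hierarchy (hypothetical)."""
    def __init__(self, msg, errorcode=None):
        super().__init__(msg)
        if errorcode is not None:
            self.errorcode = errorcode


def run(step):
    ret_code = 0
    try:
        step()
    except PhaseError as e:
        print("ERROR: {}".format(e))
        # Old code: ret_code stayed 0 unless e happened to carry .errorcode,
        # so a failed analysis could still exit successfully.
        # New code: fall back to 1 when the attribute is missing.
        ret_code = getattr(e, "errorcode", 1)
    return ret_code


def fail_plain():
    raise PhaseError("analysis failed")            # no errorcode attribute


def fail_coded():
    raise PhaseError("cppcheck failed", errorcode=4)


print(run(fail_plain))  # 1 (was 0 before the fix)
print(run(fail_coded))  # 4
```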



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 06:28:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 06:28:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527166.819443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psHaq-00086L-6k; Fri, 28 Apr 2023 06:28:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527166.819443; Fri, 28 Apr 2023 06:28:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psHaq-00086E-2P; Fri, 28 Apr 2023 06:28:32 +0000
Received: by outflank-mailman (input) for mailman id 527166;
 Fri, 28 Apr 2023 06:28:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psHap-000864-3M; Fri, 28 Apr 2023 06:28:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psHao-0003Pv-Rc; Fri, 28 Apr 2023 06:28:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psHao-0000Tk-A4; Fri, 28 Apr 2023 06:28:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psHao-0002hJ-9f; Fri, 28 Apr 2023 06:28:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=keDh5Kl37/Q2x23Nbmkj5PPDCL6vwsVF/LZMjD53mO8=; b=iZT2NTv5HtUUw7W8zkxs+6lK8t
	N/AEho8Fq/YzpsqbUTeX7x3kYAuObGH4Fr80gbM0vmgQJk/Z7D0pYCEWNK64VSS4v2APZknLKRJPe
	4SPnsLBg9k5TtxHWkMRuoypRFixrpk94HFKwQL2CdTKzEWpATi6Q83n5LB9qBBP29l7I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180445-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 180445: regressions - FAIL
X-Osstest-Failures:
    xen-4.16-testing:build-arm64:xen-build:fail:regression
    xen-4.16-testing:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.16-testing:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7251cea957cbe4a0772651a4ab110ed76f689f96
X-Osstest-Versions-That:
    xen=6f6526ac7e342b2f125ffba649d4f13e22bbc860
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 06:28:30 +0000

flight 180445 xen-4.16-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180445/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 180407
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 180407
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 180407

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180407
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180407
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180407
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180407
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180407
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180407
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180407
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180407
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180407
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180407
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180407
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7251cea957cbe4a0772651a4ab110ed76f689f96
baseline version:
 xen                  6f6526ac7e342b2f125ffba649d4f13e22bbc860

Last test of basis   180407  2023-04-25 08:21:48 Z    2 days
Testing same since   180445  2023-04-27 13:08:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7251cea957cbe4a0772651a4ab110ed76f689f96
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Apr 27 14:54:26 2023 +0200

    update Xen version to 4.16.4
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 07:20:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 07:20:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527172.819456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psIP2-0005v5-2w; Fri, 28 Apr 2023 07:20:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527172.819456; Fri, 28 Apr 2023 07:20:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psIP1-0005uy-VV; Fri, 28 Apr 2023 07:20:23 +0000
Received: by outflank-mailman (input) for mailman id 527172;
 Fri, 28 Apr 2023 07:20:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ym/r=AT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1psIP0-0005us-L0
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 07:20:22 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2199f08a-e595-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 09:20:19 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 2D2331FF7C;
 Fri, 28 Apr 2023 07:20:19 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id EFE28138FA;
 Fri, 28 Apr 2023 07:20:18 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id j5EdObJzS2SSUQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 28 Apr 2023 07:20:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2199f08a-e595-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682666419; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=/KmDUKAFfI5mfEPMJCpmv6BWERYYu6iB+QxiLTf4+w4=;
	b=f9/zHM3LrILg3k+gs0WZkR7HSECS3Gu8vX/3b+G+utnpq9Ut5u6jjQDa4VxdWqAdazDGlU
	3dIAD83o1VhNruUzb/C6Hm744RQHpk2Ljd9pJ7A40qq/x/l2hMTxlVGoaIrtah8+T/3za4
	x0OwkWI0J7CfsgpEdjQQ665CweBRSuo=
Message-ID: <cf8ce348-eb61-150a-1e5a-62b0b302cd01@suse.com>
Date: Fri, 28 Apr 2023 09:20:18 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-5-jgross@suse.com>
 <e8003d2d-5557-f5d9-38ca-793c30637e61@xen.org>
 <cb57a654-a766-5354-a122-989f43b440d5@suse.com>
 <89cbdd73-29e5-f158-4058-668048a0ca60@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v4 04/13] tools/xenstore: add framework to commit
 accounting data on success only
In-Reply-To: <89cbdd73-29e5-f158-4058-668048a0ca60@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------CM8ISQ3LXKqmS3qHvDyzCXmb"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------CM8ISQ3LXKqmS3qHvDyzCXmb
Content-Type: multipart/mixed; boundary="------------DqPU2tTqAoprtjlYn5Xz70Z2";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <cf8ce348-eb61-150a-1e5a-62b0b302cd01@suse.com>
Subject: Re: [PATCH v4 04/13] tools/xenstore: add framework to commit
 accounting data on success only
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-5-jgross@suse.com>
 <e8003d2d-5557-f5d9-38ca-793c30637e61@xen.org>
 <cb57a654-a766-5354-a122-989f43b440d5@suse.com>
 <89cbdd73-29e5-f158-4058-668048a0ca60@xen.org>
In-Reply-To: <89cbdd73-29e5-f158-4058-668048a0ca60@xen.org>

--------------DqPU2tTqAoprtjlYn5Xz70Z2
Content-Type: multipart/mixed; boundary="------------N62ZUYeOYAfm1hmPMyBDfT6y"

--------------N62ZUYeOYAfm1hmPMyBDfT6y
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27.04.23 18:32, Julien Grall wrote:
> Hi,
> 
> On 26/04/2023 08:08, Juergen Gross wrote:
>> On 25.04.23 19:52, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 05/04/2023 08:03, Juergen Gross wrote:
>>>> Instead of modifying accounting data and undo those modifications in
>>>> case of an error during further processing, add a framework for
>>>> collecting the needed changes and commit them only when the whole
>>>> operation has succeeded.
>>>>
>>>> This scheme can reuse large parts of the per transaction accounting.
>>>> The changed_domain handling can be reused, but the array size of the
>>>> accounting data should be possible to be different for both use cases.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>> V3:
>>>> - call acc_commit() earlier (Julien Grall)
>>>> - add assert() to acc_commit()
>>>> - use fixed sized acc array in struct changed_domain (Julien Grall)
>>>> ---
>>>>   tools/xenstore/xenstored_core.c   |  9 ++++--
>>>>   tools/xenstore/xenstored_core.h   |  3 ++
>>>>   tools/xenstore/xenstored_domain.c | 53 ++++++++++++++++++++++++++++++++++++-
>>>>   tools/xenstore/xenstored_domain.h |  5 ++-
>>>>   4 files changed, 66 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>>>> index 3ca68681e3..84335f5f3d 100644
>>>> --- a/tools/xenstore/xenstored_core.c
>>>> +++ b/tools/xenstore/xenstored_core.c
>>>> @@ -1023,6 +1023,9 @@ static void send_error(struct connection *conn, int 
>>>> error)
>>>>               break;
>>>>           }
>>>>       }
>>>> +
>>>> +    acc_drop(conn);
>>>> +
>>>>       send_reply(conn, XS_ERROR, xsd_errors[i].errstring,
>>>>              strlen(xsd_errors[i].errstring) + 1);
>>>>   }
>>>> @@ -1034,6 +1037,9 @@ void send_reply(struct connection *conn, enum 
>>>> xsd_sockmsg_type type,
>>>>       assert(type != XS_WATCH_EVENT);
>>>> +    conn->in = NULL;
>>>
>>> AFAIU, you are setting conn->in to NULL in order to please..
>>>
>>>> +    acc_commit(conn);
>>>
>>> ... this call. However in case of an error like...
>>>
>>>> +
>>>>       if ( len > XENSTORE_PAYLOAD_MAX ) { > send_error(conn, E2BIG);
>>>
>>> ... here, send_reply() will be called again. But the error will not be set 
>>> because conn->in is NULL.
>>>
>>> So I think you want to restore conn->in rewrite acc_commit(). The ordering 
>>> would also deserve an explanation in a comment.
>>
>> Just to make sure I understand you correctly (I have some difficulties
>> parsing "So I think you want to restore conn->in rewrite acc_commit()."
>> completely):
> 
> Hmmm... Not sure why I wrote "rewrite". I was meant to say that you want to 
> restore conn->in after acc_commit() is called.
> 
>>
>> You are suggesting to move setting conn->in to NULL to acc_commit() and
>> to restore it before returning? I'm fine with that.
> 
> Either that or what I wrote above. It depends on whether you expect other caller 
> to be in the same situation.

I think it should be local to acc_commit(), as conn->in being NULL is a
requirement of the acc_commit() handling.

> 
>>
>>>
>>>>           return;
>>>> @@ -1059,8 +1065,6 @@ void send_reply(struct connection *conn, enum 
>>>> xsd_sockmsg_type type,
>>>>         }
>>>>       }
>>>> -    conn->in = NULL;
>>>> -
>>>>       /* Update relevant header fields and fill in the message body. */
>>>>       bdata->hdr.msg.type = type;
>>>>       bdata->hdr.msg.len = len;
>>>> @@ -2195,6 +2199,7 @@ struct connection *new_connection(const struct 
>>>> interface_funcs *funcs)
>>>>       new->is_stalled = false;
>>>>       new->transaction_started = 0;
>>>>       INIT_LIST_HEAD(&new->out_list);
>>>> +    INIT_LIST_HEAD(&new->acc_list);
>>>>       INIT_LIST_HEAD(&new->ref_list);
>>>>       INIT_LIST_HEAD(&new->watches);
>>>>       INIT_LIST_HEAD(&new->transaction_list);
>>>> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
>>>> index c59b06551f..1f811f38cb 100644
>>>> --- a/tools/xenstore/xenstored_core.h
>>>> +++ b/tools/xenstore/xenstored_core.h
>>>> @@ -139,6 +139,9 @@ struct connection
>>>>       struct list_head out_list;
>>>>       uint64_t timeout_msec;
>>>> +    /* Not yet committed accounting data (valid if in != NULL). */
>>>> +    struct list_head acc_list;
>>>> +
>>>>       /* Referenced requests no longer pending. */
>>>>       struct list_head ref_list;
>>>> diff --git a/tools/xenstore/xenstored_domain.c 
>>>> b/tools/xenstore/xenstored_domain.c
>>>> index 30fb9acec6..144cbafb73 100644
>>>> --- a/tools/xenstore/xenstored_domain.c
>>>> +++ b/tools/xenstore/xenstored_domain.c
>>>> @@ -91,6 +91,8 @@ struct domain
>>>>       bool wrl_delay_logged;
>>>>   };
>>>> +#define ACC_CHD_N (ACC_TR_N < ACC_REQ_N ? ACC_REQ_N : ACC_TR_N)
>>>
>>> Both ACC_TR_N and ACC_REQ_N are fixed. Can you explain why we need this magic?
>>>
>>> Related, wouldn't it be better to define it in the enum?
>>
>> I can do that, of course. I just didn't want to make the enum even more
>> complex. :-)
> 
> My concern is there is a disconnect between the enum and this macro. What would 
> happen if we increase the enum past ACC_REQ_N/ACC_TR_N?

This expansion is happening in later patches.

> Would it be necessary to update ACC_CHD_N?

With the #define above: no.

With including it in the enum: depends on the values of ACC_REQ_N and ACC_TR_N.
ACC_CHD_N must always be equal to the larger one of both.

In my current V6 variant I've added it to the enum with a comment:

+       ACC_CHD_N = ACC_TR_N,   /* max(ACC_REQ_N, ACC_TR_N), for changed dom. */


Juergen

--------------N62ZUYeOYAfm1hmPMyBDfT6y
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------N62ZUYeOYAfm1hmPMyBDfT6y--

--------------DqPU2tTqAoprtjlYn5Xz70Z2--

--------------CM8ISQ3LXKqmS3qHvDyzCXmb
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRLc7IFAwAAAAAACgkQsN6d1ii/Ey+8
BQf/df6oIzkpy6uXxLOXSCre/QKZPk8at2iA7saka3G61ghi02S39inyNAd8W3g4xvYl8GXy/kWd
052LUDlMc40kudk1/TNfeFfXWe/huR2n7k7dZp7zmNkmfaPPPytfkV+ZjoP6SDBGi7nlN9pOxjMf
TxQYSXnju8UvdTr6tfT5aDxyDjjQQmvUQ26rHKCXf2exfZ4wIP5dvLjvVrUr5hGLY2lZ/tIm1hI/
+3oSfLGjMefJtffqWPswK/zBGtu1VMwH+ww16mlgMoA9Z27sJixrfdbykv8z1TBThXtspi1eftLa
+F4Thvob6bkCjMJ5S3BmKBeJSBAYIV+Pv5yPnBJPpg==
=GdZd
-----END PGP SIGNATURE-----

--------------CM8ISQ3LXKqmS3qHvDyzCXmb--
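
The commit-on-success pattern discussed in this message (acc_commit() requiring conn->in to be NULL, with the NULL handling kept local to acc_commit() and conn->in restored before returning) can be sketched roughly as follows. The struct layout and the integer accounting fields are simplified assumptions for illustration, not the actual xenstored code:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins; the real xenstored structures are richer. */
struct buffered_data { int len; };

struct connection {
    struct buffered_data *in;   /* request currently being processed */
    int pending_acc;            /* accounting changes not yet committed */
    int committed_acc;          /* accounting state already committed */
};

/*
 * Commit the collected accounting changes.  conn->in is temporarily set
 * to NULL so that a nested send_error()/send_reply() on an error path
 * cannot account the same request a second time; it is restored before
 * returning, keeping the NULL requirement local to acc_commit().
 */
static void acc_commit(struct connection *conn)
{
    struct buffered_data *in = conn->in;

    conn->in = NULL;
    conn->committed_acc += conn->pending_acc;
    conn->pending_acc = 0;
    conn->in = in;
}

/* Error path: throw away the uncommitted changes instead of undoing them. */
static void acc_drop(struct connection *conn)
{
    conn->pending_acc = 0;
}
```

With this split, callers never undo accounting by hand: success paths call acc_commit(), error paths call acc_drop().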


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 07:29:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 07:29:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527177.819466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psIY2-0006d0-23; Fri, 28 Apr 2023 07:29:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527177.819466; Fri, 28 Apr 2023 07:29:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psIY1-0006ct-VZ; Fri, 28 Apr 2023 07:29:41 +0000
Received: by outflank-mailman (input) for mailman id 527177;
 Fri, 28 Apr 2023 07:29:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ym/r=AT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1psIY1-0006cm-1d
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 07:29:41 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6edfb62a-e596-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 09:29:39 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 4E00521CF4;
 Fri, 28 Apr 2023 07:29:38 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 01B4A1390E;
 Fri, 28 Apr 2023 07:29:37 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id x2p0OuF1S2RIVgAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 28 Apr 2023 07:29:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6edfb62a-e596-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682666978; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=YcQhtP+aozoW4wXflHn0Stmz4XFnsMTpU25EOuSUzqs=;
	b=nfbti+m+sf/Nbbd+JKbsxEhOnbXOPMcpCdmaq0hWqUV85ujMaaTFK3tOm017O2wCKsV34+
	E68kU8uEbHhM0x0oPlP/HeRbZM0BVOZsX+EZ0HGexWVruz2G9OGs4dHO9pzpLryMz1+Jlt
	QQY0r2BMoOBrplL2lXt8o6zAWsS09ks=
Message-ID: <02859744-da6e-0811-5d14-43b4225d6383@suse.com>
Date: Fri, 28 Apr 2023 09:29:37 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.1
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230405070349.25293-1-jgross@suse.com>
 <6807cae6-16e1-b041-5492-15eda6732275@suse.com>
 <d9c6df98-2b38-4a4b-8228-04ce072b3b56@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v4 00/13] tools/xenstore: rework internal accounting
In-Reply-To: <d9c6df98-2b38-4a4b-8228-04ce072b3b56@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------WEkgQ0Db38CwmIwJ9xg2ZPBM"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------WEkgQ0Db38CwmIwJ9xg2ZPBM
Content-Type: multipart/mixed; boundary="------------lT0MadlvBg02Uhl6U9mjGjWN";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <02859744-da6e-0811-5d14-43b4225d6383@suse.com>
Subject: Re: [PATCH v4 00/13] tools/xenstore: rework internal accounting
References: <20230405070349.25293-1-jgross@suse.com>
 <6807cae6-16e1-b041-5492-15eda6732275@suse.com>
 <d9c6df98-2b38-4a4b-8228-04ce072b3b56@xen.org>
In-Reply-To: <d9c6df98-2b38-4a4b-8228-04ce072b3b56@xen.org>

--------------lT0MadlvBg02Uhl6U9mjGjWN
Content-Type: multipart/mixed; boundary="------------jurzqmzzDcWtknf0eArusff6"

--------------jurzqmzzDcWtknf0eArusff6
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 27.04.23 19:37, Julien Grall wrote:
> Hi Juergen,
> 
> On 26/04/2023 08:19, Juergen Gross wrote:
>> On 05.04.23 09:03, Juergen Gross wrote:
>>> This series reworks the Xenstore internal accounting to use a uniform
>>> generic framework. It is adding some additional useful diagnostic
>>> information, like accounting trace and max. per-domain and global quota
>>> values seen.
>>>
>>> Changes in V2:
>>> - added patch 1 (leftover from previous series)
>>> - rebase
>>>
>>> Changes in V3:
>>> - addressed comments
>>>
>>> Changes in V4:
>>> - fixed patch 3
>>
>> Another thought for this series and followup one:
>>
>> Do we want to keep current coding style in tools/xenstore (basically
>> Linux kernel style), or do we want to switch to Xen style instead?
> 
> I am a bit split on this one. I don't particularly like the Linux coding style, 
> but it has the advantage to be well-documented (if we compare to the Xen one).

I have raised the idea to switch to the Linux style for that reason, but it was
rejected rather firmly.

So we won't get rid of the Xen style.

> May I ask what would be the reason to switch?

According to CODING_STYLE it is the style to be used. We could add a local style
hint in tools/xenstored, but I'd rather not add another local style.

In the end it is about consistency.

>> If a switch to Xen style is preferred (I do prefer that switch), I'd
>> like to suggest that I do a rework of this series and the followup one
>> to use the Xen style for new or moved functions.
> 
> I think this is a bad idea because it would make difficult for a 
> developer/reviewer to know what is the coding style of a given function.
> 
> At least in my workflow, it would also means that I need two open the file twice 
> with different settings (e.g. soft vs hard tab).

Okay. This is a rather good reason not to use different styles in one source.

>> A more radical approach would be to do a large style switch series
>> after the two series, but I'm hesitant as this would affect backports
>> rather badly.
> 
> In general, I would agree with that. But, after your work, I am under the 
> impression that Xenstored will become quite different. So I am not convince we 
> will be able to backports a lot of patch without significant rework.
> 
> Therefore, converting all the files in one pass may not be too bad (assuming we 
> agree on switching to the new coding style).

Fine with me. In any case this should be done in the same Xen release as the
major rework. Otherwise the backport problem will be a real one.


Juergen
--------------jurzqmzzDcWtknf0eArusff6
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------jurzqmzzDcWtknf0eArusff6--

--------------lT0MadlvBg02Uhl6U9mjGjWN--

--------------WEkgQ0Db38CwmIwJ9xg2ZPBM
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRLdeEFAwAAAAAACgkQsN6d1ii/Ey/C
9Qf9EOwf6SASfETGPCmL699MJPXVA98NIBtrjvCEena4PwhCsKGq+z258I0PYtFSlW9P9r+3POYe
6keVB7ZCX2JfaG18HJaA0heKgbxJlk6T2eal3DYxAkYNckes6QmE+qOrIUazpdwUMOp6L70FH4Pt
zfEOGquPHGXpXL7PC0FxeubgUAiY8jhhNHRt+rqchiAf7Ay2endx3y69NhElR3wGTCSRb50LKdYj
L4xFsHyZNAvha/GrHAN8gK13Kj8F00huldErqpbJqOA/I+W+m5iaE0kVAc/Cirth08W0RvLXZHag
2doDy26BRHZhNODCqLJOeGzS+Ieg4Qa9ARjbdy8rQg==
=Iizg
-----END PGP SIGNATURE-----

--------------WEkgQ0Db38CwmIwJ9xg2ZPBM--


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 08:08:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 08:08:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527185.819485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJ9h-00037G-CT; Fri, 28 Apr 2023 08:08:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527185.819485; Fri, 28 Apr 2023 08:08:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJ9h-000379-9W; Fri, 28 Apr 2023 08:08:37 +0000
Received: by outflank-mailman (input) for mailman id 527185;
 Fri, 28 Apr 2023 08:08:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TmhV=AT=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1psJ9g-00036x-G7
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 08:08:36 +0000
Received: from mail-wm1-x335.google.com (mail-wm1-x335.google.com
 [2a00:1450:4864:20::335])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id df9af7b3-e59b-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 10:08:35 +0200 (CEST)
Received: by mail-wm1-x335.google.com with SMTP id
 5b1f17b1804b1-3f178da21afso64382075e9.1
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 01:08:35 -0700 (PDT)
Received: from georged-x-u.eng.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 13-20020a05600c230d00b003f31da39b62sm2569464wmo.18.2023.04.28.01.08.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 01:08:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df9af7b3-e59b-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682669314; x=1685261314;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ruSDN7fkH/OcFoZofz/uE6f8wHRubeKwBT3psKG1zkY=;
        b=Bbk8MHYhSTUiQuilXeTQUUn2+xE9TuMCk0qc3kF6rZaqQAM8riYJ0NuiVz5s8GR3UJ
         1/DzStwJdpKKBPUMDubq4p/j6PdT90q4ac4ZeFbI7o3mdZYlV3igYQf/5PDMIQQPHVrO
         TLw+NXBZ9z6ysOr9wHa0ndsrIeb7ACZl2qjaw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682669314; x=1685261314;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ruSDN7fkH/OcFoZofz/uE6f8wHRubeKwBT3psKG1zkY=;
        b=krgU1RyyxB0FiGhRXpc06/0ghNhfqeHBG7eK2HHMmWo8iHcipuzJwHyuwXKW6ZliN3
         WpLQQ6DZ54nh44QvuLW+nvQczWUq6hQoQIiTe47TPuA+jA0eKeBSsFGMTgYgxkgN5ig5
         uK/D+6pfpdQ/aoST4VXTkQXhC4SRbEh8ceRcwtcZoE4808/x10GTyu4WY4HT+MQC/KuU
         iB0A8HNOKwTNuAd3uqJNHwob0JjL8z0FJ+ZmINrxikJStgNeW0dqPYx+hp4QYvWJ6gBg
         Xsio1SrE4fixiHZYcSrNTzO7GvW0bkJ5EHKz57hn1bJPnPNyNxiIwmOH9gciNqH05IcQ
         RHiw==
X-Gm-Message-State: AC+VfDz1pWFn+t/7B0TRoy3TQSrG/NaMDjrViUeOmHu6F+6OW+mWrqCx
	BJ8v1e1la0lVzddjHJ0u3L0nYwtvmxpesL+0Y/U=
X-Google-Smtp-Source: ACHHUZ662rNa/wbV8EG5ieb17gNhTuOnlw4IBT7Q1xgrvUjLcH2hUUYaFfSxqNOVQlypCm8sE4yC+w==
X-Received: by 2002:a7b:cb94:0:b0:3f1:7ba6:d5ab with SMTP id m20-20020a7bcb94000000b003f17ba6d5abmr3392478wmi.36.1682669314569;
        Fri, 28 Apr 2023 01:08:34 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@cloud.com>
Subject: [PATCH 2/5] xenalyze: Basic TRC_HVM_EMUL handling
Date: Fri, 28 Apr 2023 09:08:29 +0100
Message-Id: <20230428080832.2461044-2-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230428080832.2461044-1-george.dunlap@cloud.com>
References: <20230428080832.2461044-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For now, mainly just do volume analysis and get rid of the warnings.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@cloud.com>
---
 tools/xentrace/xenalyze.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index ff9716cd12..f7f8943079 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -987,6 +987,7 @@ enum {
     HVM_VOL_VMENTRY,
     HVM_VOL_VMEXIT,
     HVM_VOL_HANDLER,
+    HVM_VOL_EMUL,
     HVM_VOL_MAX
 };
 
@@ -1013,6 +1014,7 @@ const char *hvm_vol_name[HVM_VOL_MAX] = {
     [HVM_VOL_VMENTRY]="vmentry",
     [HVM_VOL_VMEXIT] ="vmexit",
     [HVM_VOL_HANDLER]="handler",
+    [HVM_VOL_EMUL]="emul",
 };
 
 enum {
@@ -5275,15 +5277,18 @@ void hvm_process(struct pcpu_info *p)
     if(vcpu_set_data_type(p->current, VCPU_DATA_HVM))
         return;
 
-    if(ri->evt.sub == 2)
-    {
+    switch ( ri->evt.sub ) {
+    case 2: /* HVM_HANDLER */
         UPDATE_VOLUME(p, hvm[HVM_VOL_HANDLER], ri->size);
         hvm_handler_process(ri, h);
-    }
-    else
-    {
+        break;
+    case 4: /* HVM_EMUL */
+        UPDATE_VOLUME(p, hvm[HVM_VOL_EMUL], ri->size);
+        warn_once("WARNING: We don't yet analyze HVM_EMUL events.\n");
+        /* FIXME: Collect analysis on this */
+        break;
+    default:
         switch(ri->event) {
-            /* HVM */
         case TRC_HVM_VMEXIT:
         case TRC_HVM_VMEXIT64:
             UPDATE_VOLUME(p, hvm[HVM_VOL_VMEXIT], ri->size);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 08:08:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 08:08:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527188.819510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJ9k-0003e2-53; Fri, 28 Apr 2023 08:08:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527188.819510; Fri, 28 Apr 2023 08:08:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJ9j-0003dB-WF; Fri, 28 Apr 2023 08:08:40 +0000
Received: by outflank-mailman (input) for mailman id 527188;
 Fri, 28 Apr 2023 08:08:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TmhV=AT=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1psJ9i-0003M3-O4
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 08:08:38 +0000
Received: from mail-wm1-x336.google.com (mail-wm1-x336.google.com
 [2a00:1450:4864:20::336])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id df5aa8d4-e59b-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 10:08:35 +0200 (CEST)
Received: by mail-wm1-x336.google.com with SMTP id
 5b1f17b1804b1-3f182d745deso94547175e9.0
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 01:08:35 -0700 (PDT)
Received: from georged-x-u.eng.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 13-20020a05600c230d00b003f31da39b62sm2569464wmo.18.2023.04.28.01.08.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 01:08:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df5aa8d4-e59b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682669314; x=1685261314;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=oHctl/+Yw4lELN5Eo4ilEJN3AsQPRKgX697uKFaIV2k=;
        b=Rxi3Av75ra1OLpfSrLCkapSZR9wDbu+YuPgRG88Ofyz17/gMJjaL7PYRN6mjsrsrOF
         IZbwtz6LnI7x7AeMMhsaHUy4Kn8lmUy70VhNAfedtUOjJwXeDT+RqT7CKurPQb24hywd
         qqYpvVJ+6JP7kXZFqq5TYbNB05TWFXFXGkPNE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682669314; x=1685261314;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=oHctl/+Yw4lELN5Eo4ilEJN3AsQPRKgX697uKFaIV2k=;
        b=G/ohaeCVdMw2tcyczYPzkWzwcLceRRjjz4x4EIQlz21UkIJ4TZql+3IdnYKTla4/SH
         cBWc5wEtHONyaXxVIJDc/7BSfjDlslXWFdlIHITLnGE6o27hDd3jfSIkflrnhWklSAIP
         YQBJ+mo6gpPAZIpTWjlWZv+bmq3DDY9aekyZynA96DN3VukQjBSF4Lg9BKPUsjt7hwpD
         mGFMqKd3/gVPqop5tfiEGbjeMxQ84LyLwx5OSAtfpAIdK2G0WVCcnj3Pa8yuoqcR/zqe
         RyoVvg72MC+7SswjKgpwdTfxQVji1dAcIUvK0PN8IPYGrw4CzmrBFqaG9aiksz+NGBUt
         F6pA==
X-Gm-Message-State: AC+VfDxWJDEkeCPS6Yng15UHp3cn/bu7EY4eI5VEHwVh/BmKwF5jr4kv
	oSytNuopzy4gGKwiuWRby+emdq9yVXDmEVE6oOI=
X-Google-Smtp-Source: ACHHUZ7gVLP4vqO2Cl4E//bGC+5sMhTO1AwKt98Epp5tTTyGBhb4S9xMh4Qm4crO2WL6jFoaF+w9CA==
X-Received: by 2002:a1c:f717:0:b0:3f2:5999:4f3a with SMTP id v23-20020a1cf717000000b003f259994f3amr3344119wmh.31.1682669314091;
        Fri, 28 Apr 2023 01:08:34 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>,
	Anthony Perard <anthony.perard@cloud.com>,
	Wei Liu <wl@xenproject.org>
Subject: [PATCH 1/5] xenalyze: Handle start-of-day ->RUNNING transitions
Date: Fri, 28 Apr 2023 09:08:28 +0100
Message-Id: <20230428080832.2461044-1-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

A recent xentrace highlighted an unhandled corner case in the vcpu
"start-of-day" logic, if the trace starts after the last running ->
non-running transition, but before the first non-running -> running
transition.  Because start-of-day wasn't handled, vcpu_next_update()
was expecting p->current to be NULL, and tripping out with the
following error message when it wasn't:

vcpu_next_update: FATAL: p->current not NULL! (d32768dv$p, runstate RUNSTATE_INIT)

where 32768 is the DEFAULT_DOMAIN, and $p is the pcpu number.

Instead of calling vcpu_start() piecemeal throughout
sched_runstate_process(), call it at the top of the function if the
vcpu in question is still in RUNSTATE_INIT, so that we can handle all
the cases in one place.

Sketch out at the top of the function all cases which we need to
handle, and what to do in those cases.  Some transitions tell us where
v is running; some transitions tell us about what is (or is not)
running on p; some transitions tell us neither.

If a transition tells us where v is now running, update its state;
otherwise leave it in INIT, in order to avoid having to deal with TSC
skew on start-up.

If a transition tells us what is or is not running on p, update
p->current (either to v or NULL).  Otherwise leave it alone.

If neither, do nothing.

Reifying those rules:

- If we're continuing to run, set v to RUNNING, and use p->first_tsc
  as the runstate time.

- If we're starting to run, set v to RUNNING, and use ri->tsc as the
  runstate time.

- If v is being descheduled, leave v in the INIT state to avoid dealing
  with TSC skew; but set p->current to NULL so that whatever is
  scheduled next won't trigger the assert in vcpu_next_update().

- If a vcpu is waking up (switching from one non-runnable state to
  another non-runnable state), leave v in INIT, and p in whatever
  state it's in (which may be the default domain, or some other vcpu
  which has already run).

While here, fix the comment above vcpu_start; it's called when the
vcpu state is INIT, not when current is the default domain.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
CC: Anthony Perard <anthony.perard@cloud.com>
CC: Wei Liu <wl@xenproject.org>
---
 tools/xentrace/xenalyze.c | 159 ++++++++++++++++++++++++--------------
 1 file changed, 101 insertions(+), 58 deletions(-)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index 12dcca9646..ff9716cd12 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -6885,39 +6885,86 @@ void vcpu_next_update(struct pcpu_info *p, struct vcpu_data *next, tsc_t tsc)
     p->lost_record.seen_valid_schedule = 1;
 }
 
-/* If current is the default domain, we're fixing up from something
- * like start-of-day.  Update what we can. */
-void vcpu_start(struct pcpu_info *p, struct vcpu_data *v) {
-    /* If vcpus are created, or first show up, in a "dead zone", this will
-     * fail. */
-    if( !p->current || p->current->d->did != DEFAULT_DOMAIN) {
-        fprintf(stderr, "Strange, p->current not default domain!\n");
-        error(ERR_FILE, NULL);
-        return;
-    }
+/*
+ * If the vcpu in question is in state INIT, we're fixing up from something
+ * like start-of-day.  Update what we can.
+ */
+void vcpu_start(struct pcpu_info *p, struct vcpu_data *v,
+                int old_runstate, int new_runstate, tsc_t ri_tsc) {
+    tsc_t tsc;
+
+    /*
+     * Cases:
+     * running -> running:
+     *  v -> running, using p->first_tsc
+     * {runnable, blocked} -> running:
+     *  v -> running, using ri->tsc
+     * running -> {runnable, blocked}:
+     *  Leave v INIT, but clear p->current in case another vcpu is scheduled
+     * blocked -> runnable:
+     *  Leave INIT, and also leave p->current, since we still don't know
+     *  who's scheduled here
+     */
+
+    /*
+     * NB that a vcpu won't come out of INIT until it starts running somewhere.
+     * If this event is on a pcpu that has already seen a scheduling event,
+     * p->current should be NULL; if this is the first scheduling event on
+     * this pcpu, p->current should be the default domain.
+     */
+    if( old_runstate == RUNSTATE_RUNNING ) {
+        if ( !p->current ||  p->current->d->did != DEFAULT_DOMAIN) {
+            fprintf(stderr, "Strange, p->current not default domain!\n");
+            error(ERR_FILE, NULL);
+            return;
 
-    if(!p->first_tsc) {
-        fprintf(stderr, "Strange, p%d first_tsc 0!\n", p->pid);
-        error(ERR_FILE, NULL);
+        }
+        
+        if(!p->first_tsc) {
+            fprintf(stderr, "Strange, p%d first_tsc 0!\n", p->pid);
+            error(ERR_FILE, NULL);
+        }
+
+        if(p->first_tsc <= p->current->runstate.tsc) {
+            fprintf(stderr, "Strange, first_tsc %llx < default_domain runstate tsc %llx!\n",
+                    p->first_tsc,
+                    p->current->runstate.tsc);
+            error(ERR_FILE, NULL);
+        }
+    
+        /* Change default domain to 'queued' */
+        runstate_update(p->current, RUNSTATE_QUEUED, p->first_tsc);
+
+        /* 
+         * Set current to NULL, so that if another vcpu (not in INIT)
+         * is scheduled here, we don't trip over the check in
+         * vcpu_next_update()
+         */
+        p->current = NULL;
     }
 
-    if(p->first_tsc <= p->current->runstate.tsc) {
-        fprintf(stderr, "Strange, first_tsc %llx < default_domain runstate tsc %llx!\n",
-                p->first_tsc,
-                p->current->runstate.tsc);
-        error(ERR_FILE, NULL);
+    /* TSC skew at start-of-day is hard to deal with.  Don't
+     * bring a vcpu out of INIT until it's seen to be actually
+     * running somewhere. */
+    if ( new_runstate != RUNSTATE_RUNNING ) {
+        fprintf(warn, "First schedule for d%dv%d doesn't take us into a running state; leaving INIT\n",
+                v->d->did, v->vid);
+
+        return;
     }
 
-    /* Change default domain to 'queued' */
-    runstate_update(p->current, RUNSTATE_QUEUED, p->first_tsc);
+    tsc = ri_tsc;
+    if ( old_runstate == RUNSTATE_RUNNING ) {
+        /* FIXME: Copy over data from the default domain this interval */
+        fprintf(warn, "Using first_tsc for d%dv%d (%lld cycles)\n",
+                v->d->did, v->vid, p->last_tsc - p->first_tsc);
 
-    /* FIXME: Copy over data from the default domain this interval */
-    fprintf(warn, "Using first_tsc for d%dv%d (%lld cycles)\n",
-            v->d->did, v->vid, p->last_tsc - p->first_tsc);
+        tsc = p->first_tsc;
+    }
 
     /* Simulate the time since the first tsc */
-    runstate_update(v, RUNSTATE_RUNNING, p->first_tsc);
-    p->time.tsc = p->first_tsc;
+    runstate_update(v, RUNSTATE_RUNNING, tsc);
+    p->time.tsc = tsc;
     p->current = v;
     pcpu_string_draw(p);
     v->p = p;
@@ -7021,6 +7068,13 @@ void sched_runstate_process(struct pcpu_info *p)
     last_oldstate = v->runstate.last_oldstate;
     v->runstate.last_oldstate.wrong = RUNSTATE_INIT;
 
+    /* Handle all "start-of-day" issues in one place.  This can be
+     * done before any of the other tracks or sanity checks. */
+    if ( v->runstate.state == RUNSTATE_INIT ) {
+        vcpu_start(p, v, sevt.old_runstate, sevt.new_runstate, ri->tsc);
+        return;
+    }
+
     /* Close vmexits when the putative reason for blocking / &c stops.
      * This way, we don't account cpu contention to some other overhead. */
     if(sevt.new_runstate == RUNSTATE_RUNNABLE
@@ -7190,32 +7244,27 @@ update:
      * or stopping actually running on a physical cpu. */
     if ( type == CONTINUE )
     {
-        if( v->runstate.state == RUNSTATE_INIT ) {
-            /* Start-of-day; account first tsc -> now to v */
-            vcpu_start(p, v);
-        } else {
-            /* Continue running.  First, do some sanity checks */
-            if ( v->runstate.state == RUNSTATE_LOST ) {
-                fprintf(warn, "WARNING: continue with d%dv%d in RUNSTATE_LOST.  Resetting current.\n",
-                        v->d->did, v->vid);
-                if ( p->current )
-                    vcpu_prev_update(p, p->current, ri->tsc, RUNSTATE_LOST);
-                vcpu_next_update(p, v, ri->tsc);
-            }
-            else if( v->runstate.state != RUNSTATE_RUNNING ) {
-                /* This should never happen. */
-                fprintf(warn, "FATAL: sevt.old_runstate running, but d%dv%d runstate %s!\n",
-                        v->d->did, v->vid, runstate_name[v->runstate.state]);
-                error(ERR_FILE, NULL);
-            } else if ( v->p != p ) {
-                fprintf(warn, "FATAL: continue on p%d, but d%dv%d p%d!\n",
-                        p->pid, v->d->did, v->vid,
-                        v->p ? v->p->pid : -1);
-                error(ERR_FILE, NULL);
-            }
-
-            runstate_update(v, RUNSTATE_RUNNING, ri->tsc);
+        /* Continue running.  First, do some sanity checks */
+        if ( v->runstate.state == RUNSTATE_LOST ) {
+            fprintf(warn, "WARNING: continue with d%dv%d in RUNSTATE_LOST.  Resetting current.\n",
+                    v->d->did, v->vid);
+            if ( p->current )
+                vcpu_prev_update(p, p->current, ri->tsc, RUNSTATE_LOST);
+            vcpu_next_update(p, v, ri->tsc);
+        }
+        else if( v->runstate.state != RUNSTATE_RUNNING ) {
+            /* This should never happen. */
+            fprintf(warn, "FATAL: sevt.old_runstate running, but d%dv%d runstate %s!\n",
+                    v->d->did, v->vid, runstate_name[v->runstate.state]);
+            error(ERR_FILE, NULL);
+        } else if ( v->p != p ) {
+            fprintf(warn, "FATAL: continue on p%d, but d%dv%d p%d!\n",
+                    p->pid, v->d->did, v->vid,
+                    v->p ? v->p->pid : -1);
+            error(ERR_FILE, NULL);
         }
+
+        runstate_update(v, RUNSTATE_RUNNING, ri->tsc);
     }
     else if ( sevt.old_runstate == RUNSTATE_RUNNING
               || v->runstate.state == RUNSTATE_RUNNING )
@@ -7232,10 +7281,7 @@ update:
          *   # (should never happen)
          */
         if( sevt.old_runstate == RUNSTATE_RUNNING ) {
-            if( v->runstate.state == RUNSTATE_INIT ) {
-                /* Start-of-day; account first tsc -> now to v */
-                vcpu_start(p, v);
-            } else if( v->runstate.state != RUNSTATE_RUNNING
+            if( v->runstate.state != RUNSTATE_RUNNING
                        && v->runstate.state != RUNSTATE_LOST ) {
                 /* This should never happen. */
                 fprintf(warn, "FATAL: sevt.old_runstate running, but d%dv%d runstate %s!\n",
@@ -7264,11 +7310,8 @@ update:
 
         vcpu_next_update(p, v, ri->tsc);
     }
-    else if ( v->runstate.state != RUNSTATE_INIT )
+    else
     {
-        /* TSC skew at start-of-day is hard to deal with.  Don't
-         * bring a vcpu out of INIT until it's seen to be actually
-         * running somewhere. */
         runstate_update(v, sevt.new_runstate, ri->tsc);
     }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 08:08:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 08:08:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527189.819516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJ9k-0003pR-NQ; Fri, 28 Apr 2023 08:08:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527189.819516; Fri, 28 Apr 2023 08:08:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJ9k-0003nA-Jh; Fri, 28 Apr 2023 08:08:40 +0000
Received: by outflank-mailman (input) for mailman id 527189;
 Fri, 28 Apr 2023 08:08:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TmhV=AT=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1psJ9j-0003M3-D7
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 08:08:39 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e0a4c247-e59b-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 10:08:37 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 5b1f17b1804b1-3f1e2555b5aso44959705e9.0
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 01:08:37 -0700 (PDT)
Received: from georged-x-u.eng.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 13-20020a05600c230d00b003f31da39b62sm2569464wmo.18.2023.04.28.01.08.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 01:08:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0a4c247-e59b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682669316; x=1685261316;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ull2YxWBFQl0HliuvqOLTfd5qwQa1W30wkLZFscPDpw=;
        b=c+VOSAyMBxEBJRiQ8XbRHkxqCgy0oM1qlBzbUlrqhy5sOrgE/OmyiX6kEah4qp9Mv1
         0PH0dOE65KsGrnQ4cqQe7QqsgNJuJrDbY5ZtDA3X7mWDTuBb3ADkldtl8Pr82RMs1+QK
         xDvNRBvKGqm1IWFmyCTvrlH4OwlKt15nv/Woc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682669316; x=1685261316;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ull2YxWBFQl0HliuvqOLTfd5qwQa1W30wkLZFscPDpw=;
        b=GX67mYm0XihBfzix7fo1d/rZ2zdad+QYsmvs8dhqqV+N0MAJlQg6pRnvE0yCDH4Z78
         Rzc+Kk1wgyFN4ayEv1YXowbOQnxAUzulI3f1osMNfLTZhnfV7ZvlPuE4kdJT1F6SqmkE
         s5Hzh5JeLWN/vXg3B6R2ZDdWhsGYJbqfyj6fRuh9CIdMFohkQ8URsxtmZJANjuqu++tA
         P0g+UHZbXlMM1LmMUj4WuxBJzw6v+Q4kOsdQfzi11KgtTmlvtvjOE9+WvI3US9Kk5LzG
         sSVIzDK7jcR+vrPkfkRQZZU9a3SH2UxvyF0MFVeJntsegJxZk7icVnYWKoX/cVCiUgJ2
         loTA==
X-Gm-Message-State: AC+VfDxHf7YNzgJW7RfiSiQKQarJiwRGOc6xRNZ2lGqtC3qkftfEqg/f
	1WfmXCFPRp0mRun8B5rwAbVFttFtllTc4MzfNq8=
X-Google-Smtp-Source: ACHHUZ4sKhcZrmWQOmeyk+jAWUc6aoWKQoq2pZkkD16DksbVl/Kln3xeeBjNzDXSKv8Zi17uTg+tkw==
X-Received: by 2002:a05:600c:220d:b0:3f1:9526:22be with SMTP id z13-20020a05600c220d00b003f1952622bemr3282326wml.23.1682669316259;
        Fri, 28 Apr 2023 01:08:36 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@cloud.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 5/5] SUPPORT.md: Make all security support explicit
Date: Fri, 28 Apr 2023 09:08:32 +0100
Message-Id: <20230428080832.2461044-5-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230428080832.2461044-1-george.dunlap@cloud.com>
References: <20230428080832.2461044-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The initial goal of SUPPORT.md was to help both users, and the Xen
Project Security Team, determine what functionality was security
supported; i.e., what kinds of security bugs would trigger an XSA.

Our proposal is that as of 4.18, all functionality not explicitly
listed as security supported will be considered not security
supported.  Add some text to that effect.

The patch as written cannot be applied, since specifying "xl.cfg core
functionality" is a TODO; but it should do to start a discussion.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
CC: Wei Liu <wl@xen.org>
CC: Andrew Cooper <andrew.cooper@cloud.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@cloud.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
---
 SUPPORT.md | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index aa1940e55f..fcbcb44c44 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -17,6 +17,36 @@ for the definitions of the support status levels etc.
 Release Notes
 : <a href="https://wiki.xenproject.org/wiki/Xen_Project_X.YY_Release_Notes">RN</a>
 
+# General security support
+
+An XSA will always be issued for security-related bugs which are
+present in a "plain vanilla" configuration.  A "plain vanilla"
+configuration is defined as follows:
+
+* The Xen hypervisor is built from a tagged release of Xen, or a
+  commit which was on the tip of one of the supported stable branches.
+
+* The Xen hypervisor was built with the default config for the platform
+
+* No Xen command-line parameters were specified
+
+* No parameters for Xen-related drivers in the Linux kernel were specified
+
+* No modifications were made to the default xl.conf
+
+* xl.cfg files use only core functionality
+
+* Alternate toolstacks only activate functionality activated by the
+  core functionality of xl.cfg files.
+
+Any system outside this configuration will only be considered security
+supported if the functionality is explicitly listed as supported in
+this document.
+
+If a security-related bug exists only in a configuration listed as not
+security supported, the security team will generally not issue an XSA;
+the bug will simply be handled in public.
+
 # Feature Support
 
 ## Kconfig
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 08:08:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 08:08:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527187.819505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJ9j-0003bB-QH; Fri, 28 Apr 2023 08:08:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527187.819505; Fri, 28 Apr 2023 08:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJ9j-0003b4-N8; Fri, 28 Apr 2023 08:08:39 +0000
Received: by outflank-mailman (input) for mailman id 527187;
 Fri, 28 Apr 2023 08:08:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TmhV=AT=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1psJ9i-00036x-6D
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 08:08:38 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e0175b10-e59b-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 10:08:36 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-3f09b4a156eso64347975e9.3
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 01:08:36 -0700 (PDT)
Received: from georged-x-u.eng.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 13-20020a05600c230d00b003f31da39b62sm2569464wmo.18.2023.04.28.01.08.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 01:08:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0175b10-e59b-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682669315; x=1685261315;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=lLu7TscClVzVFwnxnkUi2eFFYfLdqDnUzBYuO4yEO4M=;
        b=MS6ycKFAY4yrzKVadA9O0uoVCowNIXOMxRBSbz1Xada8oMpsZ9G5KTsHr1wfuqoZBk
         VeJ3LEyktkgctiUxiKfqCBQhHyqBkQRYChGUwCfe+priBB2XqeKFv3TyNDAnBHwr6kjR
         8fPcrPPxt/z4WgFuY2aMsdfgV0BujO+FwHqNw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682669315; x=1685261315;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=lLu7TscClVzVFwnxnkUi2eFFYfLdqDnUzBYuO4yEO4M=;
        b=OvL51uncN0b8WBa5bHf87Lz0mySOgUB/2uNv/MgO4iE8HZemC7Tcyd/p5AsP/wDhfi
         iGxMqmaoC6DVwSAZNk/HVlgugR5Cb3anpDKljR3PQpj92TI/9R47Vr5LMF62aUZwDLTX
         OPp6JN0y3uO7d+GqmMTQfYAY2OMV841VwnF5mg8co7/OBK3VJeZuDurS7JNueVRiZ3mU
         Uh2lFf2k/C22C0Ge492dc5zPbtaFsT9ThvpiAYnbOcdiBWHyweFSSIk1skCqa5bojDgF
         JVx/HIbk4AYOfFgr3uc1TpM17vuT6T23i1zGGY7ZfVv6QwkQSWR1Iq/3aqbQqTtUjJaZ
         yA6g==
X-Gm-Message-State: AC+VfDwpDAAvUucWzdCXCxBalF0guqThYvJqQNBKaCuiNfj8JriSWW8L
	zBvF/e2tkMXTyDGL21OwKkjGi7ogEKI4eC3cAR8=
X-Google-Smtp-Source: ACHHUZ5awrW5ndtj6h3K+RiEO1oWn0kIUfGYxK1DkO+FZrvVuXWn53EffJsqWvILbOxVSWqykPhuhA==
X-Received: by 2002:a7b:cb94:0:b0:3f1:7ba6:d5ab with SMTP id m20-20020a7bcb94000000b003f17ba6d5abmr3392523wmi.36.1682669315504;
        Fri, 28 Apr 2023 01:08:35 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH 4/5] credit: Don't steal vcpus which have yielded
Date: Fri, 28 Apr 2023 09:08:31 +0100
Message-Id: <20230428080832.2461044-4-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230428080832.2461044-1-george.dunlap@cloud.com>
References: <20230428080832.2461044-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On large systems with many vcpus yielding due to spinlock priority
inversion, it's not uncommon for a vcpu to yield its timeslice, only
to be immediately stolen by another pcpu looking for higher-priority
work.

To prevent this:

* Keep the YIELD flag until a vcpu is removed from a runqueue

* When looking for work to steal, skip vcpus which have yielded

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 xen/common/sched/credit.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index b8bdfd5f6a..70a1a57ba6 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -319,6 +319,11 @@ __runq_remove(struct csched_unit *svc)
 {
     BUG_ON( !__unit_on_runq(svc) );
     list_del_init(&svc->runq_elem);
+
+    /*
+     * Clear YIELD flag when scheduling back in
+     */
+    clear_bit(CSCHED_FLAG_UNIT_YIELD, &svc->flags);
 }
 
 static inline void
@@ -1638,6 +1643,13 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
         if ( speer->pri <= pri )
             break;
 
+        /*
+         * Don't steal a UNIT which has yielded; it's waiting for a
+         * reason
+         */
+        if (test_bit(CSCHED_FLAG_UNIT_YIELD, &speer->flags))
+            continue;
+
         /* Is this UNIT runnable on our PCPU? */
         unit = speer->unit;
         BUG_ON( is_idle_unit(unit) );
@@ -1955,11 +1967,6 @@ static void cf_check csched_schedule(
         dec_nr_runnable(sched_cpu);
     }
 
-    /*
-     * Clear YIELD flag before scheduling out
-     */
-    clear_bit(CSCHED_FLAG_UNIT_YIELD, &scurr->flags);
-
     do {
         snext = __runq_elem(runq->next);
 
@@ -1974,10 +1981,11 @@ static void cf_check csched_schedule(
         /*
          * SMP Load balance:
          *
-         * If the next highest priority local runnable UNIT has already eaten
-         * through its credits, look on other PCPUs to see if we have more
-         * urgent work... If not, csched_load_balance() will return snext, but
-         * already removed from the runq.
+         * If the next highest priority local runnable UNIT has
+         * already eaten through its credits (and we're below the
+         * balancing ratelimit), look on other PCPUs to see if we have
+         * more urgent work... If we don't, csched_load_balance() will
+         * return snext, but already removed from the runq.
          */
         if ( snext->pri <= CSCHED_PRI_TS_OVER
              && now - spc->last_load_balance > prv->load_balance_ratelimit) {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 08:08:53 2023
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH 3/5] credit: Limit load balancing to once per millisecond
Date: Fri, 28 Apr 2023 09:08:30 +0100
Message-Id: <20230428080832.2461044-3-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230428080832.2461044-1-george.dunlap@cloud.com>
References: <20230428080832.2461044-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The credit scheduler tries as hard as it can to ensure that it always
runs scheduling units with positive credit (PRI_TS_UNDER) before
running those with negative credit (PRI_TS_OVER).  If the next
runnable scheduling unit is of priority OVER, it will always run the
load balancer, which will scour the system looking for another
scheduling unit of the UNDER priority.

Unfortunately, as the number of cores on a system has grown, the cost
of the work-stealing algorithm has dramatically increased; a recent
trace on a system with 128 cores showed this taking over 50
microseconds.

Add a parameter, load_balance_ratelimit, to limit the frequency of
load balance operations on a given pcpu.  Default this to 1
millisecond.

Invert the load balancing conditional to make it clearer and line it
up more closely with the comment above it.

Overall it might be cleaner to have the last_load_balance checking
happen inside csched_load_balance(), but that would require either
passing both now and spc into the function, or looking them up again;
either of which seemed worse than simply checking and setting the
values before calling it.
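The check-and-set described above can be sketched as follows. This is a simplified, stand-alone illustration under assumed names: `load_balance_allowed()` is hypothetical (in the patch the check is inlined in csched_schedule()), and times are nanoseconds as in Xen's s_time_t.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef int64_t s_time_t;   /* signed nanoseconds, as in Xen */

/*
 * Only permit a load balance pass if at least `ratelimit` has
 * elapsed on this pcpu since the last one; on success, record the
 * current time so the next check measures from here.  This caps the
 * frequency of the expensive work-stealing scan per pcpu.
 */
bool load_balance_allowed(s_time_t now, s_time_t *last_load_balance,
                          s_time_t ratelimit)
{
    if ( now - *last_load_balance <= ratelimit )
        return false;           /* too soon: skip the balance pass */
    *last_load_balance = now;   /* remember this balance attempt */
    return true;
}
```

With the default 1ms ratelimit, a pcpu that would otherwise run the ~50us work-stealing scan on every OVER-priority schedule spends at most a bounded fraction of its time balancing.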

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 docs/misc/xen-command-line.pandoc |  6 +++++
 xen/common/sched/credit.c         | 40 ++++++++++++++++++++++++++-----
 xen/include/public/sysctl.h       |  6 +++++
 3 files changed, 46 insertions(+), 6 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e0b89b7d33..ae51a8cfa2 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1840,6 +1840,12 @@ By default, Xen will use the INVPCID instruction for TLB management if
 it is available.  This option can be used to cause Xen to fall back to
 older mechanisms, which are generally slower.
 
+### load-balance-ratelimit
+> `= <integer>`
+
+The minimum interval between load balancing events on a given pcpu.
+At the moment only credit honors this parameter.
+
 ### noirqbalance (x86)
 > `= <boolean>`
 
diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index f2cd3d9da3..b8bdfd5f6a 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -50,6 +50,8 @@
 #define CSCHED_TICKS_PER_TSLICE     3
 /* Default timeslice: 30ms */
 #define CSCHED_DEFAULT_TSLICE_MS    30
+/* Default load balancing ratelimit: 1ms */
+#define CSCHED_DEFAULT_LOAD_BALANCE_RATELIMIT_US 1000
 #define CSCHED_CREDITS_PER_MSEC     10
 /* Never set a timer shorter than this value. */
 #define CSCHED_MIN_TIMER            XEN_SYSCTL_SCHED_RATELIMIT_MIN
@@ -153,6 +155,7 @@ struct csched_pcpu {
 
     unsigned int idle_bias;
     unsigned int nr_runnable;
+    s_time_t last_load_balance;
 
     unsigned int tick;
     struct timer ticker;
@@ -218,7 +221,7 @@ struct csched_private {
 
     /* Period of master and tick in milliseconds */
     unsigned int tick_period_us, ticks_per_tslice;
-    s_time_t ratelimit, tslice, unit_migr_delay;
+    s_time_t ratelimit, tslice, unit_migr_delay, load_balance_ratelimit;
 
     struct list_head active_sdom;
     uint32_t weight;
@@ -612,6 +615,8 @@ init_pdata(struct csched_private *prv, struct csched_pcpu *spc, int cpu)
     BUG_ON(!is_idle_unit(curr_on_cpu(cpu)));
     cpumask_set_cpu(cpu, prv->idlers);
     spc->nr_runnable = 0;
+
+    spc->last_load_balance = NOW();
 }
 
 static void cf_check
@@ -1267,7 +1272,8 @@ csched_sys_cntl(const struct scheduler *ops,
                  && (params->ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX
                      || params->ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN))
              || MICROSECS(params->ratelimit_us) > MILLISECS(params->tslice_ms)
-             || params->vcpu_migr_delay_us > XEN_SYSCTL_CSCHED_MGR_DLY_MAX_US )
+             || params->vcpu_migr_delay_us > XEN_SYSCTL_CSCHED_MGR_DLY_MAX_US
+             || params->load_balance_ratelimit_us > XEN_SYSCTL_CSCHED_LB_RATE_MAX_US)
                 goto out;
 
         spin_lock_irqsave(&prv->lock, flags);
@@ -1278,6 +1284,7 @@ csched_sys_cntl(const struct scheduler *ops,
             printk(XENLOG_INFO "Disabling context switch rate limiting\n");
         prv->ratelimit = MICROSECS(params->ratelimit_us);
         prv->unit_migr_delay = MICROSECS(params->vcpu_migr_delay_us);
+        prv->load_balance_ratelimit = MICROSECS(params->load_balance_ratelimit_us);
         spin_unlock_irqrestore(&prv->lock, flags);
 
         /* FALLTHRU */
@@ -1285,6 +1292,7 @@ csched_sys_cntl(const struct scheduler *ops,
         params->tslice_ms = prv->tslice / MILLISECS(1);
         params->ratelimit_us = prv->ratelimit / MICROSECS(1);
         params->vcpu_migr_delay_us = prv->unit_migr_delay / MICROSECS(1);
+        params->load_balance_ratelimit_us = prv->load_balance_ratelimit / MICROSECS(1);
         rc = 0;
         break;
     }
@@ -1676,9 +1684,17 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
     return NULL;
 }
 
+/*
+ * Minimum delay, in microseconds, between load balance operations.
+ * This prevents spending too much time doing load balancing, particularly
+ * when the system has a high number of YIELDs due to spinlock priority inversion.
+ */
+static unsigned int __read_mostly load_balance_ratelimit_us = CSCHED_DEFAULT_LOAD_BALANCE_RATELIMIT_US;
+integer_param("load-balance-ratelimit", load_balance_ratelimit_us);
+
 static struct csched_unit *
 csched_load_balance(struct csched_private *prv, int cpu,
-    struct csched_unit *snext, bool *stolen)
+                    struct csched_unit *snext, bool *stolen)
 {
     const struct cpupool *c = get_sched_res(cpu)->cpupool;
     struct csched_unit *speer;
@@ -1963,10 +1979,12 @@ static void cf_check csched_schedule(
          * urgent work... If not, csched_load_balance() will return snext, but
          * already removed from the runq.
          */
-        if ( snext->pri > CSCHED_PRI_TS_OVER )
-            __runq_remove(snext);
-        else
+        if ( snext->pri <= CSCHED_PRI_TS_OVER
+             && now - spc->last_load_balance > prv->load_balance_ratelimit) {
+            spc->last_load_balance = now;
             snext = csched_load_balance(prv, sched_cpu, snext, &migrated);
+        } else
+            __runq_remove(snext);
 
     } while ( !unit_runnable_state(snext->unit) );
 
@@ -2181,6 +2199,14 @@ csched_global_init(void)
                XEN_SYSCTL_CSCHED_MGR_DLY_MAX_US, vcpu_migration_delay_us);
     }
 
+    if ( load_balance_ratelimit_us > XEN_SYSCTL_CSCHED_LB_RATE_MAX_US )
+    {
+        load_balance_ratelimit_us = CSCHED_DEFAULT_LOAD_BALANCE_RATELIMIT_US;
+        printk("WARNING: load-balance-ratelimit outside of valid range [0,%d]us.\n"
+               "Resetting to default: %u\n",
+               XEN_SYSCTL_CSCHED_LB_RATE_MAX_US, load_balance_ratelimit_us);
+    }
+
     return 0;
 }
 
@@ -2223,6 +2249,8 @@ csched_init(struct scheduler *ops)
 
     prv->unit_migr_delay = MICROSECS(vcpu_migration_delay_us);
 
+    prv->load_balance_ratelimit = MICROSECS(load_balance_ratelimit_us);
+
     return 0;
 }
 
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 2b24d6bfd0..192458d635 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -637,6 +637,12 @@ struct xen_sysctl_credit_schedule {
     */
 #define XEN_SYSCTL_CSCHED_MGR_DLY_MAX_US (100 * 1000)
     uint32_t vcpu_migr_delay_us;
+    /*
+     * Minimum delay, in microseconds, between load balance
+     * operations; max 1 second.
+     */
+#define XEN_SYSCTL_CSCHED_LB_RATE_MAX_US (1000000)
+    uint32_t load_balance_ratelimit_us;
 };
 
 struct xen_sysctl_credit2_schedule {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 08:10:41 2023
MIME-Version: 1.0
References: <20230428080832.2461044-1-george.dunlap@cloud.com> <20230428080832.2461044-3-george.dunlap@cloud.com>
In-Reply-To: <20230428080832.2461044-3-george.dunlap@cloud.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Fri, 28 Apr 2023 09:10:15 +0100
Message-ID: <CA+zSX=ZzhJOkOdbbS3Vr1yino=7k5u3E3r-FvQbi6eopk4tK9w@mail.gmail.com>
Subject: Re: [PATCH 3/5] credit: Limit load balancing to once per millisecond
To: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="0000000000007066f505fa6101ee"

--0000000000007066f505fa6101ee
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Please ignore patches 1-4 of this series...

 -George

On Fri, Apr 28, 2023 at 9:08 AM George Dunlap <george.dunlap@cloud.com>
wrote:

> The credit scheduler tries as hard as it can to ensure that it always
> runs scheduling units with positive credit (PRI_TS_UNDER) before
> running those with negative credit (PRI_TS_OVER).
>
> [...]

<br>
+ * when the system has a high number of YIELDs due to spinlock priority in=
version.<br>
+ */<br>
+static unsigned int __read_mostly load_balance_ratelimit_us =3D CSCHED_DEF=
AULT_LOAD_BALANCE_RATELIMIT_US;<br>
+integer_param(&quot;load-balance-ratelimit&quot;, load_balance_ratelimit_u=
s);<br>
+<br>
=C2=A0static struct csched_unit *<br>
=C2=A0csched_load_balance(struct csched_private *prv, int cpu,<br>
-=C2=A0 =C2=A0 struct csched_unit *snext, bool *stolen)<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 stru=
ct csched_unit *snext, bool *stolen)<br>
=C2=A0{<br>
=C2=A0 =C2=A0 =C2=A0const struct cpupool *c =3D get_sched_res(cpu)-&gt;cpup=
ool;<br>
=C2=A0 =C2=A0 =C2=A0struct csched_unit *speer;<br>
@@ -1963,10 +1979,12 @@ static void cf_check csched_schedule(<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 * urgent work... If not, csched_load_bal=
ance() will return snext, but<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 * already removed from the runq.<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 */<br>
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 if ( snext-&gt;pri &gt; CSCHED_PRI_TS_OVER )<b=
r>
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 __runq_remove(snext);<br>
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 else<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 if ( snext-&gt;pri &lt;=3D CSCHED_PRI_TS_OVER<=
br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&amp;&amp; now - spc-&gt;l=
ast_load_balance &gt; prv-&gt;load_balance_ratelimit) {<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 spc-&gt;last_load_balance =3D no=
w;<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0snext =3D csched_load_balan=
ce(prv, sched_cpu, snext, &amp;migrated);<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 } else<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 __runq_remove(snext);<br>
<br>
=C2=A0 =C2=A0 =C2=A0} while ( !unit_runnable_state(snext-&gt;unit) );<br>
<br>
@@ -2181,6 +2199,14 @@ csched_global_init(void)<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 XEN_SYSCTL_CSCHED_M=
GR_DLY_MAX_US, vcpu_migration_delay_us);<br>
=C2=A0 =C2=A0 =C2=A0}<br>
<br>
+=C2=A0 =C2=A0 if ( load_balance_ratelimit_us &gt; XEN_SYSCTL_CSCHED_LB_RAT=
E_MAX_US )<br>
+=C2=A0 =C2=A0 {<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 load_balance_ratelimit_us =3D CSCHED_DEFAULT_L=
OAD_BALANCE_RATELIMIT_US;<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 printk(&quot;WARNING: load-balance-ratelimit o=
utside of valid range [0,%d]us.\n&quot;<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&quot;Resetting to =
default: %u\n&quot;,<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0XEN_SYSCTL_CSCHED_L=
B_RATE_MAX_US, load_balance_ratelimit_us);<br>
+=C2=A0 =C2=A0 }<br>
+<br>
=C2=A0 =C2=A0 =C2=A0return 0;<br>
=C2=A0}<br>
<br>
@@ -2223,6 +2249,8 @@ csched_init(struct scheduler *ops)<br>
<br>
=C2=A0 =C2=A0 =C2=A0prv-&gt;unit_migr_delay =3D MICROSECS(vcpu_migration_de=
lay_us);<br>
<br>
+=C2=A0 =C2=A0 prv-&gt;load_balance_ratelimit =3D MICROSECS(load_balance_ra=
telimit_us);<br>
+<br>
=C2=A0 =C2=A0 =C2=A0return 0;<br>
=C2=A0}<br>
<br>
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h<br>
index 2b24d6bfd0..192458d635 100644<br>
--- a/xen/include/public/sysctl.h<br>
+++ b/xen/include/public/sysctl.h<br>
@@ -637,6 +637,12 @@ struct xen_sysctl_credit_schedule {<br>
=C2=A0 =C2=A0 =C2=A0*/<br>
=C2=A0#define XEN_SYSCTL_CSCHED_MGR_DLY_MAX_US (100 * 1000)<br>
=C2=A0 =C2=A0 =C2=A0uint32_t vcpu_migr_delay_us;<br>
+=C2=A0 =C2=A0 /*<br>
+=C2=A0 =C2=A0 =C2=A0* Minimum delay, in microseconds, between load balance=
<br>
+=C2=A0 =C2=A0 =C2=A0* operations; max 1 second.<br>
+=C2=A0 =C2=A0 =C2=A0*/<br>
+#define XEN_SYSCTL_CSCHED_LB_RATE_MAX_US (1000000)<br>
+=C2=A0 =C2=A0 uint32_t load_balance_ratelimit_us;<br>
=C2=A0};<br>
<br>
=C2=A0struct xen_sysctl_credit2_schedule {<br>
-- <br>
2.25.1<br>
<br>
</blockquote></div>

--0000000000007066f505fa6101ee--


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 08:12:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 08:12:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527207.819545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJDX-0007MQ-LM; Fri, 28 Apr 2023 08:12:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527207.819545; Fri, 28 Apr 2023 08:12:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJDX-0007MH-IN; Fri, 28 Apr 2023 08:12:35 +0000
Received: by outflank-mailman (input) for mailman id 527207;
 Fri, 28 Apr 2023 08:12:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TmhV=AT=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1psJDV-0007Lx-WB
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 08:12:33 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6d3d61d0-e59c-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 10:12:33 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 5b1f17b1804b1-3f18dacd392so59630545e9.0
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 01:12:33 -0700 (PDT)
Received: from georged-x-u.eng.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 b5-20020a056000054500b002e5ff05765esm20655456wrf.73.2023.04.28.01.12.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 01:12:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d3d61d0-e59c-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682669552; x=1685261552;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=ull2YxWBFQl0HliuvqOLTfd5qwQa1W30wkLZFscPDpw=;
        b=LVsGuSvxSXIKRXj6ZPk4Zx48oWm72M1THdn6GJgbyPbEg3QSyNDe2Iik3tPNCu3YEt
         9Ai0K8fJ9dQAi9dthxVlHBDvRl1/Qj53KZ1PESnDr0I1qRMlblGYCTAHiHDrdl9zBnoT
         Bldgj8thXfWN2LecuFI9jyXSOHI+QtJK37mYU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682669552; x=1685261552;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=ull2YxWBFQl0HliuvqOLTfd5qwQa1W30wkLZFscPDpw=;
        b=hKlUBVx8u2eVojiA/xS3iemEDBfD1P7v45f7KJLEroStJMj/z2zKr647Cz056YQGXQ
         CU3dE1UhABZscn7ZrWNrsi6MIO0zl0gDLhpDaC3/bR+KSt6QYXembi4BF/O0H923unGv
         gbLAvEFs58bNYfanpO8+y38JKPEGGBRrepQ+G7Pz0HR/715R1wA189v0Qh1zneShakUN
         YNn1WAb37WcGdCR+GfYC1r2v+cAAgqPVjBBB70M5yNmmeyfa5lK8KTl8a403KNXWXNT4
         hdDGpAmE1Un/ZHWvYV8RbCt0dKLny+RXTbfSZOCH7ceTRZ0kcyUYvRatDQjfaP1xV++6
         qANQ==
X-Gm-Message-State: AC+VfDzK1TdEq0N21almdehdwDl/nLjw2w4N20femdF1OFnvPYATh7aU
	iSmj68iv0GoEC806vnuxUh2hcS1/k9knJ2HagOU=
X-Google-Smtp-Source: ACHHUZ7VNVaY+I109n2GVJBGOURrb5g+fuYJ/Plbvuhsj8QMP5lDMr1XLZ/RGo8yfQadGyFSGf1pYg==
X-Received: by 2002:a05:600c:299:b0:3f1:8c37:fa79 with SMTP id 25-20020a05600c029900b003f18c37fa79mr3288376wmk.31.1682669552336;
        Fri, 28 Apr 2023 01:12:32 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@cloud.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: [PATCH RFC] SUPPORT.md: Make all security support explicit
Date: Fri, 28 Apr 2023 09:12:31 +0100
Message-Id: <20230428081231.2464275-1-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The initial goal of SUPPORT.md was to help both users, and the Xen
Project Security Team, determine what functionality was security
supported; i.e., what kinds of security bugs would trigger an XSA.

Our proposal is that as of 4.18, all functionality not explicitly
listed as security supported will be considered not security
supported.  Add some text to that effect.

The patch as written cannot be applied, since specifying "xl.cfg core
functionality" is a TODO; but it should do to start a discussion.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
CC: Wei Liu <wl@xen.org>
CC: Andrew Cooper <andrew.cooper@cloud.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@cloud.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
---
 SUPPORT.md | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index aa1940e55f..fcbcb44c44 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -17,6 +17,36 @@ for the definitions of the support status levels etc.
 Release Notes
 : <a href="https://wiki.xenproject.org/wiki/Xen_Project_X.YY_Release_Notes">RN</a>
 
+# General security support
+
+An XSA will always be issued for security-related bugs which are
+present in a "plain vanilla" configuration.  A "plain vanilla"
+configuration is defined as follows:
+
+* The Xen hypervisor is built from a tagged release of Xen, or a
+  commit which was on the tip of one of the supported stable branches.
+
+* The Xen hypervisor was built with the default config for the platform
+
+* No Xen command-line parameters were specified
+
+* No parameters for Xen-related drivers in the Linux kernel were specified
+
+* No modifications were made to the default xl.conf
+
+* xl.cfg files use only core functionality
+
+* Alternate toolstacks activate only functionality which is also
+  activated by the core functionality of xl.cfg files.
+
+Any system outside this configuration will only be considered security
+supported if the functionality is explicitly listed as supported in
+this document.
+
+If a security-related bug exists only in a configuration listed as not
+security supported, the security team will generally not issue an XSA;
+the bug will simply be handled in public.
+
 # Feature Support
 
 ## Kconfig
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 08:14:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 08:14:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527210.819556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJFP-0007xS-2l; Fri, 28 Apr 2023 08:14:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527210.819556; Fri, 28 Apr 2023 08:14:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psJFO-0007xL-UY; Fri, 28 Apr 2023 08:14:30 +0000
Received: by outflank-mailman (input) for mailman id 527210;
 Fri, 28 Apr 2023 08:14:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TmhV=AT=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1psJFO-0007xF-1c
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 08:14:30 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b25781ad-e59c-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 10:14:28 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id
 38308e7fff4ca-2ab25e8a4a7so44492501fa.3
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 01:14:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b25781ad-e59c-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682669668; x=1685261668;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=13tg2gqJpPSUUt+GS2Udyinv/zQw/hIp7g6Znr4hpVM=;
        b=PB4ytY+7btAoE6pQuA0sJgXRigzsCafJhI9grcr9yYL/UPF5pBYVJl6ePwW+55qvfj
         Y1el6oI2T0EaArkOIc//5ty2P3rv7lO3oDLOJoGgynHNQjd90E1hecojiraGyg7nacga
         NBEZtCWgJylqP9mmbGVkoVcxh8uukt/Q3AjHs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682669668; x=1685261668;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=13tg2gqJpPSUUt+GS2Udyinv/zQw/hIp7g6Znr4hpVM=;
        b=OJHsPL8Tdun9W+qzfLRCrrV8yxBLmKS1D1t9CqoDVXopD4CRpgKBBYAFgiM6k4fWf3
         /CChFu5TNhcEPPPWESIrct3zhd0f5djBBNw6pF8Hm97FiFTPZMvRT2uzu5gV312Yn9Uw
         rEsIel6Q0EUTfUlRVyF9pOK5rcAX2d/7A4SCn6RqxlqODdQgwPYEUhrKen/QW/8Jd3DK
         MkgChiirrSN2iZYLE7g06yFGHjoOTArQfPOnZIfWA5tL75z70wCfvp1KPL23J0GsDfGH
         ZQeb/mFTSAGeFF+/8RjBbFV8bIxqVTJ0SflaN73jchlvx/WjIRhZSEkTJNLfvEFPCHbo
         ThMw==
X-Gm-Message-State: AC+VfDyXWFo1ymCcwFFV9zX0cycR2zXJ4WIhQZvhMM9lSoGoY5CiEQcH
	gjrUifrKAgWBHXKCaSl/nBQIUc/3+CIAUiaq0B3dD11hV60auCFtapA=
X-Google-Smtp-Source: ACHHUZ7GhRBxQA/iuw8YkhGm1MR1HeRBYnEyMA2gCTDWZbg4UesQfCZYzdOGnINUHgXeIvEqOKNv29TRlRHNhAzaI0U=
X-Received: by 2002:a2e:b2c7:0:b0:2a8:bc05:1d6e with SMTP id
 7-20020a2eb2c7000000b002a8bc051d6emr1287157ljz.37.1682669668215; Fri, 28 Apr
 2023 01:14:28 -0700 (PDT)
MIME-Version: 1.0
References: <20230428081231.2464275-1-george.dunlap@cloud.com>
In-Reply-To: <20230428081231.2464275-1-george.dunlap@cloud.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Fri, 28 Apr 2023 09:14:17 +0100
Message-ID: <CA+zSX=Z3Sr+OOoM3V-oVG6ooGFG7zmpqnAEdBC4q8pnmgfx7JA@mail.gmail.com>
Subject: Re: [PATCH RFC] SUPPORT.md: Make all security support explicit
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper@cloud.com>, Jan Beulich <jbeulich@suse.com>, 
	Roger Pau Monne <roger.pau@cloud.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Julien Grall <julien@xen.org>
Content-Type: multipart/alternative; boundary="000000000000d9473505fa610f38"

--000000000000d9473505fa610f38
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, Apr 28, 2023 at 9:12 AM George Dunlap <george.dunlap@cloud.com>
wrote:

> The initial goal of SUPPORT.md was to help both users, and the Xen
> Project Security Team, determine what functionality was security
> supported; i.e., what kinds of security bugs would trigger an XSA.
>
> Our proposal is that as of 4.18, all functionality not explicitly
> listed as security supported will be considered not security
> supported.  Add some text to that effect.
>
> The patch as written cannot be applied, since specifying "xl.cfg core
> functionality" is a TODO; but it should do to start a discussion.
>
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>
>
One of the interesting outcomes of this thought process is that
"supported" is really about the configuration of the system (including
guests):

1. Where it came from
2. How its build was configured
3. Xen command-line parameters
4. Configuration of Xen-related kernel drivers
5. Configuration of support infrastructure; namely xenstore
6. Configuration of guests

That means that in particular, we need to somehow make it clear which
of the hundreds of Xen command-line parameters are OK to modify and
how.

It occurred to me that in many (most? all?) cases it would be more
effective to define the security support parameters in the
documentation itself.

e.g.:

```pandoc
### invpcid (x86)
> `= <boolean>`

> Default: `true`

> Supported values: all

By default, Xen will use the INVPCID instruction for TLB management if
it is available.  This option can be used to cause Xen to fall back to
older mechanisms, which are generally slower.
```

or (for example):

```pandoc
### loglvl
> `= <level>[/<rate-limited level>]` where level is `none | error | warning
| info | debug | all`

> Default: `loglvl=warning`

> Supported values: `none, error, warning`

> Can be modified at runtime

Set the logging level for Xen.  Any log message of equal or greater
importance will be printed.

The optional `<rate-limited level>` specifies which severities
should be rate limited.
```

Since people are (at least somewhat) used to documenting their
features, this would prompt people to consider the security
implications explicitly as they're adding features, rather than having
it be in a separate document.

Another option would be to have a section of the doc where we list
supported hypervisor parameters; e.g.:


```markdown
# Xen command-line arguments

...
invpcid
...
loglvl {none, error, warning}
...
```

It's tempting to consider the idea of listing the options that
*aren't* supported; but that puts us back where we are now, where new
features end up supported by default unless we remember to list them
as unsupported.

Finally, what might be particularly useful is a tool which looks at
the Xen Kconfig value from hypfs, the Xen command-line, and a bunch of
other parameters, and tells you if it sees anything running in the
system that's unsupported.  The challenge there is making it reliable
enough that a "clean bill of health" is actually an accurate
indication that nothing unsupported is being run.

 -George

--000000000000d9473505fa610f38--


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 09:23:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 09:23:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527215.819565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKJm-0006ik-5L; Fri, 28 Apr 2023 09:23:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527215.819565; Fri, 28 Apr 2023 09:23:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKJm-0006id-1i; Fri, 28 Apr 2023 09:23:06 +0000
Received: by outflank-mailman (input) for mailman id 527215;
 Fri, 28 Apr 2023 09:23:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psKJl-0006iT-Hv; Fri, 28 Apr 2023 09:23:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psKJl-00080C-9M; Fri, 28 Apr 2023 09:23:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psKJk-0002nj-UN; Fri, 28 Apr 2023 09:23:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psKJk-0005lK-Ts; Fri, 28 Apr 2023 09:23:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UP3CF0w6rn/bV0QsoFth0J3hyoiylRQbcUvajhPJoNU=; b=Wk7FCO7Fi96II+dYqKRq6rqEq0
	dxboiUTgcjZWOciIw3RZsg5gyHaLTWwyVwRoMtjSOiicWu41nrUdDQMXZShjbMmohQo2PFWmtNXls
	ZRt3xTg1bu4lCaCpjihcPVomwJ1R+EABZqzMuZ9YCSdIZfHEbizNVmGMocCT0Lp5jtAM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180463-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180463: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=95ef765839a8d0de52095e3dec3584fc347b94b2
X-Osstest-Versions-That:
    ovmf=e5e1cd1a83e2e7aa2179db3de5fc00d76713ec6f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 09:23:04 +0000

flight 180463 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180463/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 95ef765839a8d0de52095e3dec3584fc347b94b2
baseline version:
 ovmf                 e5e1cd1a83e2e7aa2179db3de5fc00d76713ec6f

Last test of basis   180454  2023-04-27 22:12:22 Z    0 days
Testing same since   180463  2023-04-28 06:10:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  BruceX Wang <brucex.wang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e5e1cd1a83..95ef765839  95ef765839a8d0de52095e3dec3584fc347b94b2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 09:57:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 09:57:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527228.819598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKqm-00028d-B6; Fri, 28 Apr 2023 09:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527228.819598; Fri, 28 Apr 2023 09:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKqm-00025t-31; Fri, 28 Apr 2023 09:57:12 +0000
Received: by outflank-mailman (input) for mailman id 527228;
 Fri, 28 Apr 2023 09:54:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dCeu=AT=antgroup.com=houwenlong.hwl@srs-se1.protection.inumbo.net>)
 id 1psKng-0001in-KE
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 09:54:00 +0000
Received: from out0-208.mail.aliyun.com (out0-208.mail.aliyun.com
 [140.205.0.208]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 96c1f67c-e5aa-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 11:53:57 +0200 (CEST)
Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com
 fp:SMTPD_---.STFoGhB_1682675630) by smtp.aliyun-inc.com;
 Fri, 28 Apr 2023 17:53:50 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96c1f67c-e5aa-11ed-b224-6b7b168915f2
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R111e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047206;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=14;SR=0;TI=SMTPD_---.STFoGhB_1682675630;
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: linux-kernel@vger.kernel.org
Cc: "Thomas Garnier" <thgarnie@chromium.org>,
  "Lai Jiangshan" <jiangshan.ljs@antgroup.com>,
  "Kees Cook" <keescook@chromium.org>,
  "Hou Wenlong" <houwenlong.hwl@antgroup.com>,
  "Juergen Gross" <jgross@suse.com>,
  "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
  "Thomas Gleixner" <tglx@linutronix.de>,
  "Ingo Molnar" <mingo@redhat.com>,
  "Borislav Petkov" <bp@alien8.de>,
  "Dave Hansen" <dave.hansen@linux.intel.com>,
   <x86@kernel.org>,
  "H. Peter Anvin" <hpa@zytor.com>,
   <xen-devel@lists.xenproject.org>
Subject: [PATCH RFC 37/43] x86/xen: Pin up to VSYSCALL_ADDR when vsyscall page is out of fixmap area
Date: Fri, 28 Apr 2023 17:51:17 +0800
Message-Id: <13975abd9b8b2e2e1e2efd3be6c341542b08af24.1682673543.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
References: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If the vsyscall page is moved out of the fixmap area, FIXADDR_TOP will be
below the vsyscall page. So pinning should extend up to VSYSCALL_ADDR when
vsyscall is enabled.

Suggested-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Cc: Thomas Garnier <thgarnie@chromium.org>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/x86/xen/mmu_pv.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index a59bc013ee5b..28392f3478a0 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -587,6 +587,12 @@ static void xen_p4d_walk(struct mm_struct *mm, p4d_t *p4d,
 	xen_pud_walk(mm, pud, func, last, limit);
 }
 
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
+#define __KERNEL_MAP_TOP	(VSYSCALL_ADDR + PAGE_SIZE)
+#else
+#define __KERNEL_MAP_TOP	FIXADDR_TOP
+#endif
+
 /*
  * (Yet another) pagetable walker.  This one is intended for pinning a
  * pagetable.  This means that it walks a pagetable and calls the
@@ -594,7 +600,7 @@ static void xen_p4d_walk(struct mm_struct *mm, p4d_t *p4d,
  * at every level.  It walks the entire pagetable, but it only bothers
  * pinning pte pages which are below limit.  In the normal case this
  * will be STACK_TOP_MAX, but at boot we need to pin up to
- * FIXADDR_TOP.
+ * __KERNEL_MAP_TOP.
  *
  * We must skip the Xen hole in the middle of the address space, just after
  * the big x86-64 virtual hole.
@@ -609,7 +615,7 @@ static void __xen_pgd_walk(struct mm_struct *mm, pgd_t *pgd,
 
 	/* The limit is the last byte to be touched */
 	limit--;
-	BUG_ON(limit >= FIXADDR_TOP);
+	BUG_ON(limit >= __KERNEL_MAP_TOP);
 
 	/*
 	 * 64-bit has a great big hole in the middle of the address
@@ -797,7 +803,7 @@ static void __init xen_after_bootmem(void)
 #ifdef CONFIG_X86_VSYSCALL_EMULATION
 	SetPagePinned(virt_to_page(level3_user_vsyscall));
 #endif
-	xen_pgd_walk(&init_mm, xen_mark_pinned, FIXADDR_TOP);
+	xen_pgd_walk(&init_mm, xen_mark_pinned, __KERNEL_MAP_TOP);
 }
 
 static void xen_unpin_page(struct mm_struct *mm, struct page *page,
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 09:57:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 09:57:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527226.819591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKql-00021c-WD; Fri, 28 Apr 2023 09:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527226.819591; Fri, 28 Apr 2023 09:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKql-0001z7-QA; Fri, 28 Apr 2023 09:57:11 +0000
Received: by outflank-mailman (input) for mailman id 527226;
 Fri, 28 Apr 2023 09:53:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dCeu=AT=antgroup.com=houwenlong.hwl@srs-se1.protection.inumbo.net>)
 id 1psKnH-0001iA-Dx
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 09:53:35 +0000
Received: from out0-200.mail.aliyun.com (out0-200.mail.aliyun.com
 [140.205.0.200]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 88079f65-e5aa-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 11:53:33 +0200 (CEST)
Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com
 fp:SMTPD_---.STFoGYl_1682675602) by smtp.aliyun-inc.com;
 Fri, 28 Apr 2023 17:53:23 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88079f65-e5aa-11ed-8611-37d641c3527e
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R211e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047212;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=17;SR=0;TI=SMTPD_---.STFoGYl_1682675602;
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: linux-kernel@vger.kernel.org
Cc: "Thomas Garnier" <thgarnie@chromium.org>,
  "Lai Jiangshan" <jiangshan.ljs@antgroup.com>,
  "Kees Cook" <keescook@chromium.org>,
  "Hou Wenlong" <houwenlong.hwl@antgroup.com>,
  "Juergen Gross" <jgross@suse.com>,
  "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
  "Darren Hart" <dvhart@infradead.org>,
  "Andy Shevchenko" <andy@infradead.org>,
  "Thomas Gleixner" <tglx@linutronix.de>,
  "Ingo Molnar" <mingo@redhat.com>,
  "Borislav Petkov" <bp@alien8.de>,
  "Dave Hansen" <dave.hansen@linux.intel.com>,
   <x86@kernel.org>,
  "H. Peter Anvin" <hpa@zytor.com>,
   <xen-devel@lists.xenproject.org>,
   <platform-driver-x86@vger.kernel.org>
Subject: [PATCH RFC 29/43] x86/PVH: Adapt PVH booting for PIE support
Date: Fri, 28 Apr 2023 17:51:09 +0800
Message-Id: <ea6994d2ab49a50cb5a8911c24562cd6d223c2b6.1682673543.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
References: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If PIE is enabled, all symbol references are RIP-relative. However, PVH
booting runs in the low address space, which could cause the wrong
x86_init callbacks to be assigned. Since init_top_pgt already builds the
high kernel address mapping, let PVH booting run in the high address
space so that everything works correctly.

PVH booting assumes that no relocation has happened. Since the kernel's
compile-time address is still in the top 2G, it is allowed to use
R_X86_64_32S for symbol references in pvh_start_xen().

Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Cc: Thomas Garnier <thgarnie@chromium.org>
Cc: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/x86/platform/pvh/head.S | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S
index 5842fe0e4f96..09518d4de042 100644
--- a/arch/x86/platform/pvh/head.S
+++ b/arch/x86/platform/pvh/head.S
@@ -94,6 +94,13 @@ SYM_CODE_START_LOCAL(pvh_start_xen)
 	/* 64-bit entry point. */
 	.code64
 1:
+#ifdef CONFIG_X86_PIE
+	movabs  $2f, %rax
+	ANNOTATE_RETPOLINE_SAFE
+	jmp *%rax
+2:
+	ANNOTATE_NOENDBR // above
+#endif
 	/* Set base address in stack canary descriptor. */
 	mov $MSR_GS_BASE,%ecx
 #if defined(CONFIG_STACKPROTECTOR_FIXED)
@@ -149,9 +156,15 @@ SYM_CODE_END(pvh_start_xen)
 	.section ".init.data","aw"
 	.balign 8
 SYM_DATA_START_LOCAL(gdt)
+	/*
+	 * Use _ASM_PTR (quad on x86-64) for _pa(gdt_start) because PIE
+	 * requires a pointer-sized storage slot before the relocation is
+	 * applied. On 32-bit, _ASM_PTR is a long, which matches the space
+	 * needed for the relocation.
+	 */
 	.word gdt_end - gdt_start
-	.long _pa(gdt_start)
-	.word 0
+	_ASM_PTR _pa(gdt_start)
+	.balign 8
 SYM_DATA_END(gdt)
 SYM_DATA_START_LOCAL(gdt_start)
 	.quad 0x0000000000000000            /* NULL descriptor */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 09:57:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 09:57:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527229.819607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKqm-0002Fa-OS; Fri, 28 Apr 2023 09:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527229.819607; Fri, 28 Apr 2023 09:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKqm-0002DX-DO; Fri, 28 Apr 2023 09:57:12 +0000
Received: by outflank-mailman (input) for mailman id 527229;
 Fri, 28 Apr 2023 09:54:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dCeu=AT=antgroup.com=houwenlong.hwl@srs-se1.protection.inumbo.net>)
 id 1psKng-0001in-Rn
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 09:54:00 +0000
Received: from out0-193.mail.aliyun.com (out0-193.mail.aliyun.com
 [140.205.0.193]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9675234d-e5aa-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 11:53:57 +0200 (CEST)
Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com
 fp:SMTPD_---.STFoGgp_1682675627) by smtp.aliyun-inc.com;
 Fri, 28 Apr 2023 17:53:48 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9675234d-e5aa-11ed-b224-6b7b168915f2
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R721e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047188;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=24;SR=0;TI=SMTPD_---.STFoGgp_1682675627;
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: linux-kernel@vger.kernel.org
Cc: "Thomas Garnier" <thgarnie@chromium.org>,
  "Lai Jiangshan" <jiangshan.ljs@antgroup.com>,
  "Kees Cook" <keescook@chromium.org>,
  "Hou Wenlong" <houwenlong.hwl@antgroup.com>,
  "Andy Lutomirski" <luto@kernel.org>,
  "Thomas Gleixner" <tglx@linutronix.de>,
  "Ingo Molnar" <mingo@redhat.com>,
  "Borislav Petkov" <bp@alien8.de>,
  "Dave Hansen" <dave.hansen@linux.intel.com>,
   <x86@kernel.org>,
  "H. Peter Anvin" <hpa@zytor.com>,
  "Juergen Gross" <jgross@suse.com>,
  "=?UTF-8?B?U3JpdmF0c2EgUy4gQmhhdCAoVk13YXJlKQ==?=" <srivatsa@csail.mit.edu>,
  "Alexey Makhalov" <amakhalov@vmware.com>,
  "VMware PV-Drivers Reviewers" <pv-drivers@vmware.com>,
  "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
  "Andrew Morton" <akpm@linux-foundation.org>,
  "=?UTF-8?B?TWlrZSBSYXBvcG9ydCAoSUJNKQ==?=" <rppt@kernel.org>,
  "Liam R. Howlett" <Liam.Howlett@Oracle.com>,
  "Suren Baghdasaryan" <surenb@google.com>,
  "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
   <virtualization@lists.linux-foundation.org>,
   <xen-devel@lists.xenproject.org>
Subject: [PATCH RFC 36/43] x86/vsyscall: Don't use set_fixmap() to map vsyscall page
Date: Fri, 28 Apr 2023 17:51:16 +0800
Message-Id: <a77a84cc7fc4bf70bb8ac7fb6e55110e74bde3ca.1682673543.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
References: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to unify FIXADDR_TOP for x86 and allow the fixmap area to be
movable, the vsyscall page should be mapped individually. However, for a
XENPV guest, the vsyscall page also needs to be mapped into the user
pagetable. So introduce a new PVMMU op to help map the vsyscall page.

Suggested-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Cc: Thomas Garnier <thgarnie@chromium.org>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/x86/entry/vsyscall/vsyscall_64.c |  3 +--
 arch/x86/include/asm/paravirt.h       |  7 +++++++
 arch/x86/include/asm/paravirt_types.h |  4 ++++
 arch/x86/include/asm/vsyscall.h       | 13 +++++++++++++
 arch/x86/kernel/paravirt.c            |  4 ++++
 arch/x86/xen/mmu_pv.c                 | 20 ++++++++++++++------
 6 files changed, 43 insertions(+), 8 deletions(-)

diff --git a/arch/x86/entry/vsyscall/vsyscall_64.c b/arch/x86/entry/vsyscall/vsyscall_64.c
index e0ca8120aea8..4373460ebbde 100644
--- a/arch/x86/entry/vsyscall/vsyscall_64.c
+++ b/arch/x86/entry/vsyscall/vsyscall_64.c
@@ -385,8 +385,7 @@ void __init map_vsyscall(void)
 	 * page.
 	 */
 	if (vsyscall_mode == EMULATE) {
-		__set_fixmap(VSYSCALL_PAGE, physaddr_vsyscall,
-			     PAGE_KERNEL_VVAR);
+		__set_vsyscall_page(physaddr_vsyscall, PAGE_KERNEL_VVAR);
 		set_vsyscall_pgtable_user_bits(swapper_pg_dir);
 	}
 
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 2350ceb43db0..dcc0706287ee 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -576,6 +576,13 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 {
 	pv_ops.mmu.set_fixmap(idx, phys, flags);
 }
+
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
+static inline void __set_vsyscall_page(phys_addr_t phys, pgprot_t flags)
+{
+	pv_ops.mmu.set_vsyscall_page(phys, flags);
+}
+#endif
 #endif
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 982a234f5a06..e79f38232849 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -224,6 +224,10 @@ struct pv_mmu_ops {
 	   an mfn.  We can tell which is which from the index. */
 	void (*set_fixmap)(unsigned /* enum fixed_addresses */ idx,
 			   phys_addr_t phys, pgprot_t flags);
+
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
+	void (*set_vsyscall_page)(phys_addr_t phys, pgprot_t flags);
+#endif
 #endif
 } __no_randomize_layout;
 
diff --git a/arch/x86/include/asm/vsyscall.h b/arch/x86/include/asm/vsyscall.h
index ab60a71a8dcb..73691fc60924 100644
--- a/arch/x86/include/asm/vsyscall.h
+++ b/arch/x86/include/asm/vsyscall.h
@@ -2,6 +2,7 @@
 #ifndef _ASM_X86_VSYSCALL_H
 #define _ASM_X86_VSYSCALL_H
 
+#include <asm/pgtable.h>
 #include <linux/seqlock.h>
 #include <uapi/asm/vsyscall.h>
 
@@ -15,6 +16,18 @@ extern void set_vsyscall_pgtable_user_bits(pgd_t *root);
  */
 extern bool emulate_vsyscall(unsigned long error_code,
 			     struct pt_regs *regs, unsigned long address);
+static inline void native_set_vsyscall_page(phys_addr_t phys, pgprot_t flags)
+{
+	pgprot_val(flags) &= __default_kernel_pte_mask;
+	set_pte_vaddr(VSYSCALL_ADDR, pfn_pte(phys >> PAGE_SHIFT, flags));
+}
+
+#ifndef CONFIG_PARAVIRT_XXL
+#define __set_vsyscall_page	native_set_vsyscall_page
+#else
+#include <asm/paravirt.h>
+#endif
+
 #else
 static inline void map_vsyscall(void) {}
 static inline bool emulate_vsyscall(unsigned long error_code,
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index ac10b46c5832..13c81402f377 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -33,6 +33,7 @@
 #include <asm/tlb.h>
 #include <asm/io_bitmap.h>
 #include <asm/gsseg.h>
+#include <asm/vsyscall.h>
 
 /*
  * nop stub, which must not clobber anything *including the stack* to
@@ -357,6 +358,9 @@ struct paravirt_patch_template pv_ops = {
 	},
 
 	.mmu.set_fixmap		= native_set_fixmap,
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
+	.mmu.set_vsyscall_page	= native_set_vsyscall_page,
+#endif
 #endif /* CONFIG_PARAVIRT_XXL */
 
 #if defined(CONFIG_PARAVIRT_SPINLOCKS)
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index fdc91deece7e..a59bc013ee5b 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -59,6 +59,7 @@
 
 #include <asm/tlbflush.h>
 #include <asm/fixmap.h>
+#include <asm/vsyscall.h>
 #include <asm/mmu_context.h>
 #include <asm/setup.h>
 #include <asm/paravirt.h>
@@ -2020,9 +2021,6 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 	switch (idx) {
 	case FIX_BTMAP_END ... FIX_BTMAP_BEGIN:
-#ifdef CONFIG_X86_VSYSCALL_EMULATION
-	case VSYSCALL_PAGE:
-#endif
 		/* All local page mappings */
 		pte = pfn_pte(phys, prot);
 		break;
@@ -2058,14 +2056,21 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 	vaddr = __fix_to_virt(idx);
 	if (HYPERVISOR_update_va_mapping(vaddr, pte, UVMF_INVLPG))
 		BUG();
+}
 
 #ifdef CONFIG_X86_VSYSCALL_EMULATION
+static void xen_set_vsyscall_page(phys_addr_t phys, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(phys >> PAGE_SHIFT, prot);
+
+	if (HYPERVISOR_update_va_mapping(VSYSCALL_ADDR, pte, UVMF_INVLPG))
+		BUG();
+
 	/* Replicate changes to map the vsyscall page into the user
 	   pagetable vsyscall mapping. */
-	if (idx == VSYSCALL_PAGE)
-		set_pte_vaddr_pud(level3_user_vsyscall, vaddr, pte);
-#endif
+	set_pte_vaddr_pud(level3_user_vsyscall, VSYSCALL_ADDR, pte);
 }
+#endif
 
 static void __init xen_post_allocator_init(void)
 {
@@ -2156,6 +2161,9 @@ static const typeof(pv_ops) xen_mmu_ops __initconst = {
 		},
 
 		.set_fixmap = xen_set_fixmap,
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
+		.set_vsyscall_page = xen_set_vsyscall_page,
+#endif
 	},
 };
 
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 09:57:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 09:57:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527220.819575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKql-0001nM-61; Fri, 28 Apr 2023 09:57:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527220.819575; Fri, 28 Apr 2023 09:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKql-0001nF-37; Fri, 28 Apr 2023 09:57:11 +0000
Received: by outflank-mailman (input) for mailman id 527220;
 Fri, 28 Apr 2023 09:52:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dCeu=AT=antgroup.com=houwenlong.hwl@srs-se1.protection.inumbo.net>)
 id 1psKmQ-0001fa-6g
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 09:52:42 +0000
Received: from out0-199.mail.aliyun.com (out0-199.mail.aliyun.com
 [140.205.0.199]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 66dc1d4b-e5aa-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 11:52:37 +0200 (CEST)
Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com
 fp:SMTPD_---.STCEPV9_1682675548) by smtp.aliyun-inc.com;
 Fri, 28 Apr 2023 17:52:29 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66dc1d4b-e5aa-11ed-8611-37d641c3527e
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R181e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047198;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=17;SR=0;TI=SMTPD_---.STCEPV9_1682675548;
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: linux-kernel@vger.kernel.org
Cc: "Thomas Garnier" <thgarnie@chromium.org>,
  "Lai Jiangshan" <jiangshan.ljs@antgroup.com>,
  "Kees Cook" <keescook@chromium.org>,
  "Hou Wenlong" <houwenlong.hwl@antgroup.com>,
  "Juergen Gross" <jgross@suse.com>,
  "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
  "Darren Hart" <dvhart@infradead.org>,
  "Andy Shevchenko" <andy@infradead.org>,
  "Thomas Gleixner" <tglx@linutronix.de>,
  "Ingo Molnar" <mingo@redhat.com>,
  "Borislav Petkov" <bp@alien8.de>,
  "Dave Hansen" <dave.hansen@linux.intel.com>,
   <x86@kernel.org>,
  "H. Peter Anvin" <hpa@zytor.com>,
   <xen-devel@lists.xenproject.org>,
   <platform-driver-x86@vger.kernel.org>
Subject: [PATCH RFC 15/43] x86/PVH: Use fixed_percpu_data to set up GS base
Date: Fri, 28 Apr 2023 17:50:55 +0800
Message-Id: <4fdb800ce6f1a2315918cb02eec3efbec1032cb8.1682673543.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
References: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

startup_64() and startup_xen() both use fixed_percpu_data to set up the GS
base. So for consistency, use it in the PVH entry too.

Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Cc: Thomas Garnier <thgarnie@chromium.org>
Cc: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/x86/platform/pvh/head.S | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S
index c4365a05ab83..b093996b7e19 100644
--- a/arch/x86/platform/pvh/head.S
+++ b/arch/x86/platform/pvh/head.S
@@ -96,7 +96,7 @@ SYM_CODE_START_LOCAL(pvh_start_xen)
 1:
 	/* Set base address in stack canary descriptor. */
 	mov $MSR_GS_BASE,%ecx
-	mov $_pa(canary), %eax
+	mov $_pa(INIT_PER_CPU_VAR(fixed_percpu_data)), %eax
 	xor %edx, %edx
 	wrmsr
 
@@ -156,8 +156,6 @@ SYM_DATA_START_LOCAL(gdt_start)
 SYM_DATA_END_LABEL(gdt_start, SYM_L_LOCAL, gdt_end)
 
 	.balign 16
-SYM_DATA_LOCAL(canary, .fill 48, 1, 0)
-
 SYM_DATA_START_LOCAL(early_stack)
 	.fill BOOT_STACK_SIZE, 1, 0
 SYM_DATA_END_LABEL(early_stack, SYM_L_LOCAL, early_stack_end)
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 09:57:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 09:57:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527224.819586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKql-0001vW-P6; Fri, 28 Apr 2023 09:57:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527224.819586; Fri, 28 Apr 2023 09:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psKql-0001uC-Gr; Fri, 28 Apr 2023 09:57:11 +0000
Received: by outflank-mailman (input) for mailman id 527224;
 Fri, 28 Apr 2023 09:52:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dCeu=AT=antgroup.com=houwenlong.hwl@srs-se1.protection.inumbo.net>)
 id 1psKmh-0001fa-9d
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 09:52:59 +0000
Received: from out0-197.mail.aliyun.com (out0-197.mail.aliyun.com
 [140.205.0.197]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7280a1d6-e5aa-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 11:52:57 +0200 (CEST)
Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com
 fp:SMTPD_---.STCEPaY_1682675567) by smtp.aliyun-inc.com;
 Fri, 28 Apr 2023 17:52:48 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7280a1d6-e5aa-11ed-8611-37d641c3527e
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R151e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047192;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=32;SR=0;TI=SMTPD_---.STCEPaY_1682675567;
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: linux-kernel@vger.kernel.org
Cc: "Thomas Garnier" <thgarnie@chromium.org>,
  "Lai Jiangshan" <jiangshan.ljs@antgroup.com>,
  "Kees Cook" <keescook@chromium.org>,
  "Hou Wenlong" <houwenlong.hwl@antgroup.com>,
  "Andy Lutomirski" <luto@kernel.org>,
  "Thomas Gleixner" <tglx@linutronix.de>,
  "Ingo Molnar" <mingo@redhat.com>,
  "Borislav Petkov" <bp@alien8.de>,
  "Dave Hansen" <dave.hansen@linux.intel.com>,
   <x86@kernel.org>,
  "H. Peter Anvin" <hpa@zytor.com>,
  "Peter Zijlstra" <peterz@infradead.org>,
  "Josh Poimboeuf" <jpoimboe@kernel.org>,
  "Pawan Gupta" <pawan.kumar.gupta@linux.intel.com>,
  "Dennis Zhou" <dennis@kernel.org>,
  "Tejun Heo" <tj@kernel.org>,
  "Christoph Lameter" <cl@linux.com>,
  "Paolo Bonzini" <pbonzini@redhat.com>,
  "Wanpeng Li" <wanpengli@tencent.com>,
  "Vitaly Kuznetsov" <vkuznets@redhat.com>,
  "Juergen Gross" <jgross@suse.com>,
  "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
  "Nathan Chancellor" <nathan@kernel.org>,
  "Nick Desaulniers" <ndesaulniers@google.com>,
  "Tom Rix" <trix@redhat.com>,
  "David Woodhouse" <dwmw@amazon.co.uk>,
  "Brian Gerst" <brgerst@gmail.com>,
   <linux-mm@kvack.org>,
   <kvm@vger.kernel.org>,
   <xen-devel@lists.xenproject.org>,
   <llvm@lists.linux.dev>
Subject: [PATCH RFC 18/43] x86/percpu: Use PC-relative addressing for percpu variable references
Date: Fri, 28 Apr 2023 17:50:58 +0800
Message-Id: <175116f75c38c15d8d73a03301eab805fea13a0a.1682673543.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
References: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For a PIE binary, all symbol references use PC-relative addressing, even
for percpu variables. So to stay compatible with PIE, add the %rip suffix
in the percpu assembly macros when PIE is enabled. However, relocation of
percpu variable references is currently broken for PIE; it will be fixed
later.

Suggested-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Cc: Thomas Garnier <thgarnie@chromium.org>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/x86/entry/calling.h             | 17 ++++++++++++----
 arch/x86/include/asm/nospec-branch.h | 10 +++++-----
 arch/x86/include/asm/percpu.h        | 29 +++++++++++++++++++++++++---
 arch/x86/kernel/head_64.S            |  2 +-
 arch/x86/kernel/kvm.c                | 21 ++++++++++++++++----
 arch/x86/lib/cmpxchg16b_emu.S        |  8 ++++----
 arch/x86/xen/xen-asm.S               | 10 +++++-----
 7 files changed, 71 insertions(+), 26 deletions(-)

diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index f6907627172b..11328578741d 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -173,7 +173,7 @@ For 32-bit we have the following conventions - kernel is built with
 .endm
 
 #define THIS_CPU_user_pcid_flush_mask   \
-	PER_CPU_VAR(cpu_tlbstate) + TLB_STATE_user_pcid_flush_mask
+	PER_CPU_VAR(cpu_tlbstate + TLB_STATE_user_pcid_flush_mask)
 
 .macro SWITCH_TO_USER_CR3_NOSTACK scratch_reg:req scratch_reg2:req
 	ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
@@ -370,8 +370,8 @@ For 32-bit we have the following conventions - kernel is built with
 .endm
 
 .macro SAVE_AND_SET_GSBASE scratch_reg:req save_reg:req
+	GET_PERCPU_BASE \scratch_reg \save_reg
 	rdgsbase \save_reg
-	GET_PERCPU_BASE \scratch_reg
 	wrgsbase \scratch_reg
 .endm
 
@@ -407,15 +407,24 @@ For 32-bit we have the following conventions - kernel is built with
  * Thus the kernel would consume a guest's TSC_AUX if an NMI arrives
  * while running KVM's run loop.
  */
-.macro GET_PERCPU_BASE reg:req
+#ifdef CONFIG_X86_PIE
+.macro GET_PERCPU_BASE reg:req scratch_reg:req
+	LOAD_CPU_AND_NODE_SEG_LIMIT \reg
+	andq	$VDSO_CPUNODE_MASK, \reg
+	leaq	__per_cpu_offset(%rip), \scratch_reg
+	movq	(\scratch_reg, \reg, 8), \reg
+.endm
+#else
+.macro GET_PERCPU_BASE reg:req scratch_reg:req
 	LOAD_CPU_AND_NODE_SEG_LIMIT \reg
 	andq	$VDSO_CPUNODE_MASK, \reg
 	movq	__per_cpu_offset(, \reg, 8), \reg
 .endm
+#endif /* CONFIG_X86_PIE */
 
 #else
 
-.macro GET_PERCPU_BASE reg:req
+.macro GET_PERCPU_BASE reg:req scratch_reg:req
 	movq	pcpu_unit_offsets(%rip), \reg
 .endm
 
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index edb2b0cb8efe..d8fd935e0697 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -59,13 +59,13 @@
 
 #ifdef CONFIG_CALL_THUNKS_DEBUG
 # define CALL_THUNKS_DEBUG_INC_CALLS				\
-	incq	%gs:__x86_call_count;
+	incq	%gs:(__x86_call_count)__percpu_rel;
 # define CALL_THUNKS_DEBUG_INC_RETS				\
-	incq	%gs:__x86_ret_count;
+	incq	%gs:(__x86_ret_count)__percpu_rel;
 # define CALL_THUNKS_DEBUG_INC_STUFFS				\
-	incq	%gs:__x86_stuffs_count;
+	incq	%gs:(__x86_stuffs_count)__percpu_rel;
 # define CALL_THUNKS_DEBUG_INC_CTXSW				\
-	incq	%gs:__x86_ctxsw_count;
+	incq	%gs:(__x86_ctxsw_count)__percpu_rel;
 #else
 # define CALL_THUNKS_DEBUG_INC_CALLS
 # define CALL_THUNKS_DEBUG_INC_RETS
@@ -95,7 +95,7 @@
 	CALL_THUNKS_DEBUG_INC_CALLS
 
 #define INCREMENT_CALL_DEPTH					\
-	sarq	$5, %gs:pcpu_hot + X86_call_depth;		\
+	sarq	$5, %gs:(pcpu_hot + X86_call_depth)__percpu_rel;\
 	CALL_THUNKS_DEBUG_INC_CALLS
 
 #define ASM_INCREMENT_CALL_DEPTH				\
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index 13c0d63ed55e..a627a073c6ea 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -4,16 +4,26 @@
 
 #ifdef CONFIG_X86_64
 #define __percpu_seg		gs
+#ifdef CONFIG_X86_PIE
+#define __percpu_rel		(%rip)
+#else
+#define __percpu_rel
+#endif /* CONFIG_X86_PIE */
 #else
 #define __percpu_seg		fs
+#define __percpu_rel
 #endif
 
 #ifdef __ASSEMBLY__
 
 #ifdef CONFIG_SMP
-#define PER_CPU_VAR(var)	%__percpu_seg:var
+/* Compatible with Position Independent Code */
+#define PER_CPU_VAR(var)	%__percpu_seg:(var)##__percpu_rel
+/* Rare absolute reference */
+#define PER_CPU_VAR_ABS(var)	%__percpu_seg:var
 #else /* ! SMP */
-#define PER_CPU_VAR(var)	var
+#define PER_CPU_VAR(var)	(var)##__percpu_rel
+#define PER_CPU_VAR_ABS(var)	var
 #endif	/* SMP */
 
 #ifdef CONFIG_X86_64_SMP
@@ -148,10 +158,23 @@ do {									\
 	(typeof(_var))(unsigned long) pfo_val__;			\
 })
 
+/*
+ * Position-independent code uses relative addresses only.
+ * The 'P' modifier prevents RIP-relative addressing in GCC,
+ * so use the 'a' modifier instead. However, the 'P' modifier
+ * allows RIP-relative addressing in Clang, but Clang doesn't
+ * support the 'a' modifier.
+ */
+#if defined(CONFIG_X86_PIE) && defined(CONFIG_CC_IS_GCC)
+#define __percpu_stable_arg	__percpu_arg(a[var])
+#else
+#define __percpu_stable_arg	__percpu_arg(P[var])
+#endif
+
 #define percpu_stable_op(size, op, _var)				\
 ({									\
 	__pcpu_type_##size pfo_val__;					\
-	asm(__pcpu_op2_##size(op, __percpu_arg(P[var]), "%[val]")	\
+	asm(__pcpu_op2_##size(op, __percpu_stable_arg, "%[val]")	\
 	    : [val] __pcpu_reg_##size("=", pfo_val__)			\
 	    : [var] "p" (&(_var)));					\
 	(typeof(_var))(unsigned long) pfo_val__;			\
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 61f1873d0ff7..1eed50b7d1ac 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -396,7 +396,7 @@ SYM_CODE_START(start_cpu0)
 	UNWIND_HINT_END_OF_STACK
 
 	/* Find the idle task stack */
-	movq	PER_CPU_VAR(pcpu_hot) + X86_current_task, %rcx
+	movq	PER_CPU_VAR(pcpu_hot + X86_current_task), %rcx
 	movq	TASK_threadsp(%rcx), %rsp
 
 	jmp	.Ljump_to_C_code
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 1cceac5984da..32d7b201f4f0 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -794,14 +794,27 @@ PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);
 
 extern bool __raw_callee_save___kvm_vcpu_is_preempted(long);
 
+#ifndef CONFIG_X86_PIE
+#define KVM_CHECK_VCPU_PREEMPTED			\
+	"movq	__per_cpu_offset(,%rdi,8), %rax;"	\
+	"cmpb	$0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
+#else
+#define KVM_CHECK_VCPU_PREEMPTED			\
+	"pushq	%rdi;"					\
+	"leaq	__per_cpu_offset(%rip), %rax;"		\
+	"movq	(%rax,%rdi,8), %rax;"			\
+	"leaq	steal_time(%rip), %rdi;"		\
+	"cmpb	$0, (%rax, %rdi, 1);"			\
+	"popq	%rdi;"
+#endif
+
 /*
  * Hand-optimize version for x86-64 to avoid 8 64-bit register saving and
  * restoring to/from the stack.
  */
-#define PV_VCPU_PREEMPTED_ASM						     \
- "movq   __per_cpu_offset(,%rdi,8), %rax\n\t"				     \
- "cmpb   $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax)\n\t" \
- "setne  %al\n\t"
+#define PV_VCPU_PREEMPTED_ASM		\
+	KVM_CHECK_VCPU_PREEMPTED	\
+	"setne  %al\n\t"
 
 DEFINE_PARAVIRT_ASM(__raw_callee_save___kvm_vcpu_is_preempted,
 		    PV_VCPU_PREEMPTED_ASM, .text);
diff --git a/arch/x86/lib/cmpxchg16b_emu.S b/arch/x86/lib/cmpxchg16b_emu.S
index 33c70c0160ea..891c5e9fd868 100644
--- a/arch/x86/lib/cmpxchg16b_emu.S
+++ b/arch/x86/lib/cmpxchg16b_emu.S
@@ -27,13 +27,13 @@ SYM_FUNC_START(this_cpu_cmpxchg16b_emu)
 	pushfq
 	cli
 
-	cmpq PER_CPU_VAR((%rsi)), %rax
+	cmpq PER_CPU_VAR_ABS((%rsi)), %rax
 	jne .Lnot_same
-	cmpq PER_CPU_VAR(8(%rsi)), %rdx
+	cmpq PER_CPU_VAR_ABS(8(%rsi)), %rdx
 	jne .Lnot_same
 
-	movq %rbx, PER_CPU_VAR((%rsi))
-	movq %rcx, PER_CPU_VAR(8(%rsi))
+	movq %rbx, PER_CPU_VAR_ABS((%rsi))
+	movq %rcx, PER_CPU_VAR_ABS(8(%rsi))
 
 	popfq
 	mov $1, %al
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index 9e5e68008785..448958ddbaf8 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -28,7 +28,7 @@
  * non-zero.
  */
 SYM_FUNC_START(xen_irq_disable_direct)
-	movb $1, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
+	movb $1, PER_CPU_VAR(xen_vcpu_info + XEN_vcpu_info_mask)
 	RET
 SYM_FUNC_END(xen_irq_disable_direct)
 
@@ -69,7 +69,7 @@ SYM_FUNC_END(check_events)
 SYM_FUNC_START(xen_irq_enable_direct)
 	FRAME_BEGIN
 	/* Unmask events */
-	movb $0, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
+	movb $0, PER_CPU_VAR(xen_vcpu_info + XEN_vcpu_info_mask)
 
 	/*
 	 * Preempt here doesn't matter because that will deal with any
@@ -78,7 +78,7 @@ SYM_FUNC_START(xen_irq_enable_direct)
 	 */
 
 	/* Test for pending */
-	testb $0xff, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_pending
+	testb $0xff, PER_CPU_VAR(xen_vcpu_info + XEN_vcpu_info_pending)
 	jz 1f
 
 	call check_events
@@ -97,7 +97,7 @@ SYM_FUNC_END(xen_irq_enable_direct)
  * x86 use opposite senses (mask vs enable).
  */
 SYM_FUNC_START(xen_save_fl_direct)
-	testb $0xff, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
+	testb $0xff, PER_CPU_VAR(xen_vcpu_info + XEN_vcpu_info_mask)
 	setz %ah
 	addb %ah, %ah
 	RET
@@ -113,7 +113,7 @@ SYM_FUNC_END(xen_read_cr2);
 
 SYM_FUNC_START(xen_read_cr2_direct)
 	FRAME_BEGIN
-	_ASM_MOV PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_arch_cr2, %_ASM_AX
+	_ASM_MOV PER_CPU_VAR(xen_vcpu_info + XEN_vcpu_info_arch_cr2), %_ASM_AX
 	FRAME_END
 	RET
 SYM_FUNC_END(xen_read_cr2_direct);
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 09:57:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 09:57:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: linux-kernel@vger.kernel.org
Cc: "Thomas Garnier" <thgarnie@chromium.org>,
  "Lai Jiangshan" <jiangshan.ljs@antgroup.com>,
  "Kees Cook" <keescook@chromium.org>,
  "Hou Wenlong" <houwenlong.hwl@antgroup.com>,
  "Brian Gerst" <brgerst@gmail.com>,
  "Thomas Gleixner" <tglx@linutronix.de>,
  "Ingo Molnar" <mingo@redhat.com>,
  "Borislav Petkov" <bp@alien8.de>,
  "Dave Hansen" <dave.hansen@linux.intel.com>,
   <x86@kernel.org>,
  "H. Peter Anvin" <hpa@zytor.com>,
  "Andy Lutomirski" <luto@kernel.org>,
  "Juergen Gross" <jgross@suse.com>,
  "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
  "Darren Hart" <dvhart@infradead.org>,
  "Andy Shevchenko" <andy@infradead.org>,
  "Nathan Chancellor" <nathan@kernel.org>,
  "Nick Desaulniers" <ndesaulniers@google.com>,
  "Tom Rix" <trix@redhat.com>,
  "Peter Zijlstra" <peterz@infradead.org>,
  "Mike Rapoport (IBM)" <rppt@kernel.org>,
  "Ashok Raj" <ashok.raj@intel.com>,
  "Rick Edgecombe" <rick.p.edgecombe@intel.com>,
  "Catalin Marinas" <catalin.marinas@arm.com>,
  "Guo Ren" <guoren@kernel.org>,
  "Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
  "Jason A. Donenfeld" <Jason@zx2c4.com>,
  "Pawan Gupta" <pawan.kumar.gupta@linux.intel.com>,
  "Kim Phillips" <kim.phillips@amd.com>,
  "David Woodhouse" <dwmw@amazon.co.uk>,
  "Josh Poimboeuf" <jpoimboe@kernel.org>,
   <xen-devel@lists.xenproject.org>,
   <platform-driver-x86@vger.kernel.org>,
   <llvm@lists.linux.dev>
Subject: [PATCH RFC 16/43] x86-64: Use per-cpu stack canary if supported by compiler
Date: Fri, 28 Apr 2023 17:50:56 +0800
Message-Id: <7cee0c83225ffd8cf8fd0065bea9348f6db3b12a.1682673543.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
References: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Brian Gerst <brgerst@gmail.com>

If the compiler supports it, use a standard per-cpu variable for the
stack protector instead of the old fixed location.  Keep the fixed
location code for compatibility with older compilers.

[Hou Wenlong: Disabled it on Clang, adapted to new code changes, and
added the missing GS setup path in pvh_start_xen()]

Signed-off-by: Brian Gerst <brgerst@gmail.com>
Co-developed-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Cc: Thomas Garnier <thgarnie@chromium.org>
Cc: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/x86/Kconfig                      | 12 ++++++++++++
 arch/x86/Makefile                     | 21 ++++++++++++++-------
 arch/x86/entry/entry_64.S             |  6 +++++-
 arch/x86/include/asm/processor.h      | 17 ++++++++++++-----
 arch/x86/include/asm/stackprotector.h | 16 +++++++---------
 arch/x86/kernel/asm-offsets_64.c      |  2 +-
 arch/x86/kernel/cpu/common.c          | 15 +++++++--------
 arch/x86/kernel/head_64.S             | 16 ++++++++++------
 arch/x86/kernel/vmlinux.lds.S         |  4 +++-
 arch/x86/platform/pvh/head.S          |  8 ++++++++
 arch/x86/xen/xen-head.S               | 14 +++++++++-----
 11 files changed, 88 insertions(+), 43 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 68e5da464b96..55cce8cdf9bd 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -410,6 +410,18 @@ config CC_HAS_SANE_STACKPROTECTOR
 	  the compiler produces broken code or if it does not let us control
 	  the segment on 32-bit kernels.
 
+config CC_HAS_CUSTOMIZED_STACKPROTECTOR
+	bool
+	# Although clang supports the -mstack-protector-guard-reg option,
+	# it would generate a GOT reference for __stack_chk_guard even
+	# with the -fno-PIE flag.
+	default y if (!CC_IS_CLANG && $(cc-option,-mstack-protector-guard-reg=gs))
+
+config STACKPROTECTOR_FIXED
+	bool
+	depends on X86_64 && STACKPROTECTOR
+	default !CC_HAS_CUSTOMIZED_STACKPROTECTOR
+
 menu "Processor type and features"
 
 config SMP
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index b39975977c03..57e4dbbf501d 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -111,13 +111,7 @@ ifeq ($(CONFIG_X86_32),y)
         # temporary until string.h is fixed
         KBUILD_CFLAGS += -ffreestanding
 
-	ifeq ($(CONFIG_STACKPROTECTOR),y)
-		ifeq ($(CONFIG_SMP),y)
-			KBUILD_CFLAGS += -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard
-		else
-			KBUILD_CFLAGS += -mstack-protector-guard=global
-		endif
-	endif
+	percpu_seg := fs
 else
         BITS := 64
         UTS_MACHINE := x86_64
@@ -167,6 +161,19 @@ else
         KBUILD_CFLAGS += -mcmodel=kernel
         KBUILD_RUSTFLAGS += -Cno-redzone=y
         KBUILD_RUSTFLAGS += -Ccode-model=kernel
+
+	percpu_seg := gs
+endif
+
+ifeq ($(CONFIG_STACKPROTECTOR),y)
+	ifneq ($(CONFIG_STACKPROTECTOR_FIXED),y)
+		ifeq ($(CONFIG_SMP),y)
+			KBUILD_CFLAGS += -mstack-protector-guard-reg=$(percpu_seg) \
+					 -mstack-protector-guard-symbol=__stack_chk_guard
+		else
+			KBUILD_CFLAGS += -mstack-protector-guard=global
+		endif
+	endif
 endif
 
 #
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 6f2297ebb15f..df79b7aa65bb 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -229,6 +229,10 @@ SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL)
 	int3
 SYM_CODE_END(entry_SYSCALL_64)
 
+#ifdef CONFIG_STACKPROTECTOR_FIXED
+#define __stack_chk_guard fixed_percpu_data + FIXED_stack_canary
+#endif
+
 /*
  * %rdi: prev task
  * %rsi: next task
@@ -252,7 +256,7 @@ SYM_FUNC_START(__switch_to_asm)
 
 #ifdef CONFIG_STACKPROTECTOR
 	movq	TASK_stack_canary(%rsi), %rbx
-	movq	%rbx, PER_CPU_VAR(fixed_percpu_data) + FIXED_stack_canary
+	movq	%rbx, PER_CPU_VAR(__stack_chk_guard)
 #endif
 
 	/*
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 2a5ec5750ba7..3890f609569d 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -379,6 +379,8 @@ struct irq_stack {
 } __aligned(IRQ_STACK_SIZE);
 
 #ifdef CONFIG_X86_64
+
+#ifdef CONFIG_STACKPROTECTOR_FIXED
 struct fixed_percpu_data {
 	/*
 	 * GCC hardcodes the stack canary as %gs:40.  Since the
@@ -394,21 +396,26 @@ struct fixed_percpu_data {
 
 DECLARE_PER_CPU_FIRST(struct fixed_percpu_data, fixed_percpu_data) __visible;
 DECLARE_INIT_PER_CPU(fixed_percpu_data);
+#endif /* CONFIG_STACKPROTECTOR_FIXED */
 
 static inline unsigned long cpu_kernelmode_gs_base(int cpu)
 {
+#ifdef CONFIG_STACKPROTECTOR_FIXED
 	return (unsigned long)per_cpu(fixed_percpu_data.gs_base, cpu);
+#else
+#ifdef CONFIG_SMP
+	return per_cpu_offset(cpu);
+#else
+	return 0;
+#endif
+#endif
 }
 
 extern asmlinkage void ignore_sysret(void);
 
 /* Save actual FS/GS selectors and bases to current->thread */
 void current_save_fsgs(void);
-#else	/* X86_64 */
-#ifdef CONFIG_STACKPROTECTOR
-DECLARE_PER_CPU(unsigned long, __stack_chk_guard);
-#endif
-#endif	/* !X86_64 */
+#endif	/* X86_64 */
 
 struct perf_event;
 
diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h
index 00473a650f51..24aa0e2ad0dd 100644
--- a/arch/x86/include/asm/stackprotector.h
+++ b/arch/x86/include/asm/stackprotector.h
@@ -36,6 +36,12 @@
 
 #include <linux/sched.h>
 
+#ifdef CONFIG_STACKPROTECTOR_FIXED
+#define __stack_chk_guard fixed_percpu_data.stack_canary
+#else
+DECLARE_PER_CPU(unsigned long, __stack_chk_guard);
+#endif
+
 /*
  * Initialize the stackprotector canary value.
  *
@@ -51,25 +57,17 @@ static __always_inline void boot_init_stack_canary(void)
 {
 	unsigned long canary = get_random_canary();
 
-#ifdef CONFIG_X86_64
+#ifdef CONFIG_STACKPROTECTOR_FIXED
 	BUILD_BUG_ON(offsetof(struct fixed_percpu_data, stack_canary) != 40);
 #endif
 
 	current->stack_canary = canary;
-#ifdef CONFIG_X86_64
-	this_cpu_write(fixed_percpu_data.stack_canary, canary);
-#else
 	this_cpu_write(__stack_chk_guard, canary);
-#endif
 }
 
 static inline void cpu_init_stack_canary(int cpu, struct task_struct *idle)
 {
-#ifdef CONFIG_X86_64
-	per_cpu(fixed_percpu_data.stack_canary, cpu) = idle->stack_canary;
-#else
 	per_cpu(__stack_chk_guard, cpu) = idle->stack_canary;
-#endif
 }
 
 #else	/* STACKPROTECTOR */
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index bb65371ea9df..f39baf90126c 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -56,7 +56,7 @@ int main(void)
 
 	BLANK();
 
-#ifdef CONFIG_STACKPROTECTOR
+#ifdef CONFIG_STACKPROTECTOR_FIXED
 	OFFSET(FIXED_stack_canary, fixed_percpu_data, stack_canary);
 	BLANK();
 #endif
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 3ea06b0b4570..972b1babf731 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2051,10 +2051,6 @@ DEFINE_PER_CPU_ALIGNED(struct pcpu_hot, pcpu_hot) = {
 EXPORT_PER_CPU_SYMBOL(pcpu_hot);
 
 #ifdef CONFIG_X86_64
-DEFINE_PER_CPU_FIRST(struct fixed_percpu_data,
-		     fixed_percpu_data) __aligned(PAGE_SIZE) __visible;
-EXPORT_PER_CPU_SYMBOL_GPL(fixed_percpu_data);
-
 static void wrmsrl_cstar(unsigned long val)
 {
 	/*
@@ -2102,15 +2098,18 @@ void syscall_init(void)
 	       X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_RF|
 	       X86_EFLAGS_AC|X86_EFLAGS_ID);
 }
-
-#else	/* CONFIG_X86_64 */
+#endif	/* CONFIG_X86_64 */
 
 #ifdef CONFIG_STACKPROTECTOR
+#ifdef CONFIG_STACKPROTECTOR_FIXED
+DEFINE_PER_CPU_FIRST(struct fixed_percpu_data,
+		     fixed_percpu_data) __aligned(PAGE_SIZE) __visible;
+EXPORT_PER_CPU_SYMBOL_GPL(fixed_percpu_data);
+#else
 DEFINE_PER_CPU(unsigned long, __stack_chk_guard);
 EXPORT_PER_CPU_SYMBOL(__stack_chk_guard);
 #endif
-
-#endif	/* CONFIG_X86_64 */
+#endif
 
 /*
  * Clear all 6 debug registers:
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 21f0556d3ac0..61f1873d0ff7 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -68,7 +68,13 @@ SYM_CODE_START_NOALIGN(startup_64)
 
 	/* Setup GSBASE to allow stack canary access for C code */
 	movl	$MSR_GS_BASE, %ecx
+#if defined(CONFIG_STACKPROTECTOR_FIXED)
 	leaq	INIT_PER_CPU_VAR(fixed_percpu_data)(%rip), %rdx
+#elif defined(CONFIG_SMP)
+	movabs	$__per_cpu_load, %rdx
+#else
+	xorl	%edx, %edx
+#endif
 	movl	%edx, %eax
 	shrq	$32,  %rdx
 	wrmsr
@@ -283,16 +289,14 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 	movl %eax,%fs
 	movl %eax,%gs
 
-	/* Set up %gs.
-	 *
-	 * The base of %gs always points to fixed_percpu_data. If the
-	 * stack protector canary is enabled, it is located at %gs:40.
+	/*
+	 * Set up GS base.
 	 * Note that, on SMP, the boot cpu uses init data section until
 	 * the per cpu areas are set up.
 	 */
 	movl	$MSR_GS_BASE,%ecx
-#ifndef CONFIG_SMP
-	leaq	INIT_PER_CPU_VAR(fixed_percpu_data)(%rip), %rdx
+#if !defined(CONFIG_SMP) && defined(CONFIG_STACKPROTECTOR_FIXED)
+	leaq	__per_cpu_load(%rip), %rdx
 #endif
 	movl	%edx, %eax
 	shrq	$32, %rdx
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 25f155205770..f02dcde9f8a8 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -500,12 +500,14 @@ SECTIONS
  */
 #define INIT_PER_CPU(x) init_per_cpu__##x = ABSOLUTE(x) + __per_cpu_load
 INIT_PER_CPU(gdt_page);
-INIT_PER_CPU(fixed_percpu_data);
 INIT_PER_CPU(irq_stack_backing_store);
 
+#ifdef CONFIG_STACKPROTECTOR_FIXED
+INIT_PER_CPU(fixed_percpu_data);
 #ifdef CONFIG_SMP
 . = ASSERT((fixed_percpu_data == 0),
            "fixed_percpu_data is not at start of per-cpu area");
 #endif
+#endif
 
 #endif /* CONFIG_X86_64 */
diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S
index b093996b7e19..5842fe0e4f96 100644
--- a/arch/x86/platform/pvh/head.S
+++ b/arch/x86/platform/pvh/head.S
@@ -96,8 +96,16 @@ SYM_CODE_START_LOCAL(pvh_start_xen)
 1:
 	/* Set base address in stack canary descriptor. */
 	mov $MSR_GS_BASE,%ecx
+#if defined(CONFIG_STACKPROTECTOR_FIXED)
 	mov $_pa(INIT_PER_CPU_VAR(fixed_percpu_data)), %eax
 	xor %edx, %edx
+#elif defined(CONFIG_SMP)
+	mov $__per_cpu_load, %rax
+	cdq
+#else
+	xor %eax, %eax
+	xor %edx, %edx
+#endif
 	wrmsr
 
 	call xen_prepare_pvh
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 643d02900fbb..09eaf59e8066 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -51,15 +51,19 @@ SYM_CODE_START(startup_xen)
 
 	leaq	(__end_init_task - PTREGS_SIZE)(%rip), %rsp
 
-	/* Set up %gs.
-	 *
-	 * The base of %gs always points to fixed_percpu_data.  If the
-	 * stack protector canary is enabled, it is located at %gs:40.
+	/*
+	 * Set up GS base.
 	 * Note that, on SMP, the boot cpu uses init data section until
 	 * the per cpu areas are set up.
 	 */
 	movl	$MSR_GS_BASE,%ecx
-	movq	$INIT_PER_CPU_VAR(fixed_percpu_data),%rax
+#if defined(CONFIG_STACKPROTECTOR_FIXED)
+	leaq	INIT_PER_CPU_VAR(fixed_percpu_data)(%rip), %rdx
+#elif defined(CONFIG_SMP)
+	movabs	$__per_cpu_load, %rdx
+#else
+	xorl	%eax, %eax
+#endif
 	cdq
 	wrmsr
 
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 10:27:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 10:27:35 +0000
From: Ruben Hakobyan <hakor@amazon.com>
To: <xen-devel@lists.xenproject.org>
CC: Ruben Hakobyan <hakor@amazon.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [PATCH] xen/grant-table: Properly acquire the vCPU maptrack freelist lock
Date: Fri, 28 Apr 2023 10:26:33 +0000
Message-ID: <20230428102633.86473-1-hakor@amazon.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Precedence: Bulk

Introduced as part of XSA-228, the maptrack_freelist_lock is meant to
protect all accesses to entries in the vCPU freelist as well as the
head and tail pointers.

However, this principle is violated twice in get_maptrack_handle(),
where the tail pointer is directly accessed without taking the lock.
The first occurrence is when stealing an extra entry for the tail
pointer, and the second occurrence is when directly setting the tail of
an empty freelist after allocating its first page.

Make sure to correctly acquire the freelist lock before accessing and
modifying the tail pointer to fully comply with XSA-228.

It should be noted that with the current setup, it is not possible for
these accesses to race with anything. However, it is still important
to correctly take the lock here to avoid any future possible races. For
example, a race could be possible with put_maptrack_handle() if the
maptrack code is modified to allow vCPU freelists to temporarily
include handles not directly assigned to them in the maptrack.

Note that the tail and head pointers can still be accessed without
taking the lock when initialising the freelist in grant_table_init_vcpu()
as concurrent access will not be possible here.

Signed-off-by: Ruben Hakobyan <hakor@amazon.com>
---
 xen/common/grant_table.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index d87e58a53d..67e346ca64 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -660,23 +660,27 @@ get_maptrack_handle(
     if ( !new_mt )
     {
         spin_unlock(&lgt->maptrack_lock);
+        handle = steal_maptrack_handle(lgt, curr);
+        if ( handle == INVALID_MAPTRACK_HANDLE )
+            return handle;
+
+        spin_lock(&curr->maptrack_freelist_lock);
+        if ( curr->maptrack_tail != MAPTRACK_TAIL )
+        {
+            spin_unlock(&curr->maptrack_freelist_lock);
+            return handle;
+        }
 
         /*
          * Uninitialized free list? Steal an extra entry for the tail
          * sentinel.
          */
-        if ( curr->maptrack_tail == MAPTRACK_TAIL )
-        {
-            handle = steal_maptrack_handle(lgt, curr);
-            if ( handle == INVALID_MAPTRACK_HANDLE )
-                return handle;
-            spin_lock(&curr->maptrack_freelist_lock);
-            maptrack_entry(lgt, handle).ref = MAPTRACK_TAIL;
-            curr->maptrack_tail = handle;
-            if ( curr->maptrack_head == MAPTRACK_TAIL )
-                curr->maptrack_head = handle;
-            spin_unlock(&curr->maptrack_freelist_lock);
-        }
+        maptrack_entry(lgt, handle).ref = MAPTRACK_TAIL;
+        curr->maptrack_tail = handle;
+        if ( curr->maptrack_head == MAPTRACK_TAIL )
+            curr->maptrack_head = handle;
+        spin_unlock(&curr->maptrack_freelist_lock);
+
         return steal_maptrack_handle(lgt, curr);
     }
 
@@ -696,8 +700,10 @@ get_maptrack_handle(
     }
 
     /* Set tail directly if this is the first page for the local vCPU. */
+    spin_lock(&curr->maptrack_freelist_lock);
     if ( curr->maptrack_tail == MAPTRACK_TAIL )
         curr->maptrack_tail = handle + MAPTRACK_PER_PAGE - 1;
+    spin_unlock(&curr->maptrack_freelist_lock);
 
     lgt->maptrack[nr_maptrack_frames(lgt)] = new_mt;
     smp_wmb();
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 10:41:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 10:41:50 +0000
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 2/7] tools: Create xc_domain_getinfo_single()
Date: Fri, 28 Apr 2023 11:41:19 +0100
Message-Id: <20230428104124.1044-3-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It's a stricter version of xc_domain_getinfo(): the returned domid always
matches the requested domid, or the call fails with an error code instead.
Over the next few patches the usages of xc_domain_getinfo() are removed
until only xc_domain_getinfo_single() and xc_domain_getinfolist() remain.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/include/xenctrl.h     | 16 ++++++++++++++++
 tools/libs/ctrl/xc_domain.c | 23 +++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index f5bc7f58b6..685df1c7ba 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -703,6 +703,22 @@ int xc_vcpu_getaffinity(xc_interface *xch,
 int xc_domain_get_guest_width(xc_interface *xch, uint32_t domid,
                               unsigned int *guest_width);
 
+/**
+ * This function will return information about a single domain. It looks
+ * up the domain by the provided domid and succeeds if the domain exists
+ * and is accessible by the current domain, or fails otherwise. A buffer
+ * may optionally be passed in the `info` parameter in order to retrieve
+ * information about the domain. The buffer is ignored if NULL is
+ * passed instead.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domid to lookup
+ * @parm info Optional domain information buffer (may be NULL)
+ * @return 0 on success, otherwise the call failed and info is undefined
+ */
+int xc_domain_getinfo_single(xc_interface *xch,
+                             uint32_t domid,
+                             xc_domaininfo_t *info);
 
 /**
  * This function will return information about one or more domains. It is
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index e939d07157..6b11775d4c 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -345,6 +345,29 @@ int xc_dom_vuart_init(xc_interface *xch,
     return rc;
 }
 
+int xc_domain_getinfo_single(xc_interface *xch,
+                             uint32_t domid,
+                             xc_domaininfo_t *info)
+{
+    struct xen_domctl domctl = {
+        .cmd = XEN_DOMCTL_getdomaininfo,
+        .domain = domid,
+    };
+
+    if ( do_domctl(xch, &domctl) < 0 )
+        return -1;
+
+    if ( domctl.u.getdomaininfo.domain != domid ) {
+        errno = ESRCH;
+        return -1;
+    }
+
+    if ( info )
+        *info = domctl.u.getdomaininfo;
+
+    return 0;
+}
+
 int xc_domain_getinfo(xc_interface *xch,
                       uint32_t first_domid,
                       unsigned int max_doms,
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 10:41:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 10:41:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527250.819644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psLXk-0002Ap-Mo; Fri, 28 Apr 2023 10:41:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527250.819644; Fri, 28 Apr 2023 10:41:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psLXk-0002Ai-KA; Fri, 28 Apr 2023 10:41:36 +0000
Received: by outflank-mailman (input) for mailman id 527250;
 Fri, 28 Apr 2023 10:41:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=19My=AT=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1psLXj-0002Ac-Da
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 10:41:35 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3eb7ffad-e5b1-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 12:41:34 +0200 (CEST)
Received: by mail-wm1-x329.google.com with SMTP id
 5b1f17b1804b1-3f19ab994ccso74494695e9.2
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 03:41:34 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 k6-20020a5d5246000000b002e71156b0fcsm20930378wrc.6.2023.04.28.03.41.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 03:41:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3eb7ffad-e5b1-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682678493; x=1685270493;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=Hrr2ZYnPDaoY9sj7/TY27jsqfX5ZclTyiDVdZcG4M6g=;
        b=kA5v7F3SNuias6hBwlj/q1MkGTJr0FeIhTuqyV4efKSw9lQT20kW+0B0Mtxa8+syxv
         PTslaEubQi3J8fALCxTymNTlf8SPltge8CIxUt0ooE7H/Tg25LHMil9vSoqb526l7/uY
         yshGbERZUbBWSwTaRb2Rtsm5X8oWpLOofB5Wo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682678493; x=1685270493;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=Hrr2ZYnPDaoY9sj7/TY27jsqfX5ZclTyiDVdZcG4M6g=;
        b=Qe2IICJViaBqKw8XIo2PVBIHizsmHNuct5GFFrjMjIimpV/HGhj3f31mXlQdOOc8Nz
         v/W6wWa5AYXzDVeRR+AaiOBtCxJ0rHpS7zJGI4esFeWJDFG8thoLMZwIvzWzMawdtWdV
         CBwmi2lhKO32q4+lmPcx/4div9/K2bgKlBFk4CAH4yR4s+U6Ex7MBEzs3Ay1uU2HRxpj
         Mau+YZaugDUVJaXdnVUe/nocrnu62Ui/ZdSadmQpwr50aCc4E+G6nySkz9RiLjzFW7U9
         GgqMWjZsk2IEbrvty/PtKqSG0zT7ips/z9b/Gcya/t/RNFaMTlOdhvUKXqEAX8l6QqpS
         XUCg==
X-Gm-Message-State: AC+VfDz2vAYGp70sOmIemtnAGo+HMu5ZGz2e7WSd411JFA3QXvhj45AI
	/j0X0gedSnbxnML4uVSt3nSVOH+BciUIETVpnuw=
X-Google-Smtp-Source: ACHHUZ6x1OJKZRo9ALWEgqTISi6PL6Qu4mlIRfa9jivtw9deAVAvSPJ/ATWUJT6KLkltl37QHgC1ag==
X-Received: by 2002:a5d:5490:0:b0:301:8551:446a with SMTP id h16-20020a5d5490000000b003018551446amr3061547wrv.2.1682678492942;
        Fri, 28 Apr 2023 03:41:32 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Tim Deegan <tim@xen.org>
Subject: [PATCH v2 0/7] Rationalize usage of xc_domain_getinfo{,list}()
Date: Fri, 28 Apr 2023 11:41:17 +0100
Message-Id: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

v2:
 * Various stylistic changes to match Xen style
 * Removed hardcoded ARRAY_SIZE() macro in favour of common-macros.h
 * Changed error handling convention of xc_domain_getinfo_single()
   to return {0,-1}/errno
 * Removed error handling overrides on xc_domain_getinfo_single() callers
 * Improved some debug output on the error path of xc_domain_getinfo_single()
   callers
 * Used dominfo_shutdown_with() in lowlevel/xc/xc.c rather than extracting
   the 'crashed' condition manually.

xc_domain_getinfo() returns the list of domains with domid >= first_domid.
It does so by repeatedly invoking XEN_DOMCTL_getdomaininfo, which leads to
unintuitive behaviour (asking for domid=1 might succeed, returning domid=2).
Furthermore, N hypercalls are required, whereas the equivalent functionality
can be achieved with a single XEN_SYSCTL_getdomaininfolist.

Ideally, we want a DOMCTL interface that operates over a single precisely
specified domain and a SYSCTL interface that can be used for bulk queries.

All callers of xc_domain_getinfo() that are better off using SYSCTL are
migrated to use that instead. That includes callers performing domain
discovery and those requesting info for more than 1 domain per hypercall.

A new xc_domain_getinfo_single() is introduced with stricter semantics than
xc_domain_getinfo() (failing if the domid isn't found), and the remaining
callers are migrated to it.

With no callers left, the xc_dominfo_t structure and the xc_domain_getinfo()
call itself can be cleanly removed, and the DOMCTL interface simplified to
use only its fast path.

With the DOMCTL amended, the new xc_domain_getinfo_single() drops its
stricter check, becoming a simple wrapper around the hypercall.

Alejandro Vallejo (7):
  tools: Make some callers of xc_domain_getinfo() use
    xc_domain_getinfolist()
  tools: Create xc_domain_getinfo_single()
  tools: Refactor console/io.c to avoid using xc_domain_getinfo()
  tools: Make init-xenstore-domain use xc_domain_getinfolist()
  tools: Modify single-domid callers of xc_domain_getinfolist()
  tools: Use new xc function for some xc_domain_getinfo() calls
  domctl: Modify XEN_DOMCTL_getdomaininfo to fail if domid is not found

 tools/console/client/main.c             |  7 +--
 tools/console/daemon/io.c               | 31 +++++-----
 tools/debugger/kdd/kdd-xen.c            |  6 +-
 tools/helpers/init-xenstore-domain.c    | 14 +++--
 tools/include/xenctrl.h                 | 63 ++++++++-----------
 tools/libs/ctrl/xc_domain.c             | 82 +++++--------------------
 tools/libs/ctrl/xc_pagetab.c            |  7 +--
 tools/libs/ctrl/xc_private.c            |  7 +--
 tools/libs/ctrl/xc_private.h            |  7 ++-
 tools/libs/guest/xg_core.c              | 23 +++----
 tools/libs/guest/xg_core.h              |  6 +-
 tools/libs/guest/xg_core_arm.c          | 10 +--
 tools/libs/guest/xg_core_x86.c          | 18 +++---
 tools/libs/guest/xg_cpuid_x86.c         | 34 +++++-----
 tools/libs/guest/xg_dom_boot.c          | 16 ++---
 tools/libs/guest/xg_domain.c            |  8 +--
 tools/libs/guest/xg_offline_page.c      | 12 ++--
 tools/libs/guest/xg_private.h           |  1 +
 tools/libs/guest/xg_resume.c            | 19 +++---
 tools/libs/guest/xg_sr_common.h         |  2 +-
 tools/libs/guest/xg_sr_restore.c        | 17 ++---
 tools/libs/guest/xg_sr_restore_x86_pv.c |  2 +-
 tools/libs/guest/xg_sr_save.c           | 26 +++-----
 tools/libs/guest/xg_sr_save_x86_pv.c    |  6 +-
 tools/libs/light/libxl_dom.c            | 17 ++---
 tools/libs/light/libxl_dom_suspend.c    |  7 +--
 tools/libs/light/libxl_domain.c         | 13 ++--
 tools/libs/light/libxl_mem.c            |  4 +-
 tools/libs/light/libxl_sched.c          | 26 ++++----
 tools/libs/light/libxl_x86_acpi.c       |  4 +-
 tools/misc/xen-hvmcrash.c               |  6 +-
 tools/misc/xen-lowmemd.c                |  6 +-
 tools/misc/xen-mfndump.c                | 22 +++----
 tools/misc/xen-vmtrace.c                |  6 +-
 tools/python/xen/lowlevel/xc/xc.c       | 28 ++++-----
 tools/vchan/vchan-socket-proxy.c        |  6 +-
 tools/xenmon/xenbaked.c                 |  6 +-
 tools/xenpaging/xenpaging.c             | 10 +--
 tools/xenstore/xenstored_domain.c       | 15 +++--
 tools/xentrace/xenctx.c                 |  8 +--
 xen/common/domctl.c                     | 32 +---------
 41 files changed, 254 insertions(+), 386 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 10:41:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 10:41:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527251.819654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psLXl-0002Pi-Ul; Fri, 28 Apr 2023 10:41:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527251.819654; Fri, 28 Apr 2023 10:41:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psLXl-0002Pb-S9; Fri, 28 Apr 2023 10:41:37 +0000
Received: by outflank-mailman (input) for mailman id 527251;
 Fri, 28 Apr 2023 10:41:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=19My=AT=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1psLXk-0002Ac-N6
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 10:41:36 +0000
Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com
 [2a00:1450:4864:20::42b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3fb6672f-e5b1-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 12:41:36 +0200 (CEST)
Received: by mail-wr1-x42b.google.com with SMTP id
 ffacd0b85a97d-2f46348728eso5967202f8f.3
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 03:41:36 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 k6-20020a5d5246000000b002e71156b0fcsm20930378wrc.6.2023.04.28.03.41.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 03:41:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3fb6672f-e5b1-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682678495; x=1685270495;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=T4l1yE4xai3KjB0fNKNqHg0rUNasi8EVSaas8lld3vQ=;
        b=gGQYE0wnbv2u/2ujI0WT0Em2d7RlOT6NbB68Ao4yFN1CQrsbPYm3pGRGca5e5S21je
         LK5e5ebdxID3/tapawpLEny5pI6iapNe4ybB65OhR6gmfqjAplOhrEECZ/I5SXl1SmGZ
         oBDzpm8Hu6zDrlFtPZ6rQ19eZvzHj4r5ICLM4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682678495; x=1685270495;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=T4l1yE4xai3KjB0fNKNqHg0rUNasi8EVSaas8lld3vQ=;
        b=FlUEv5OXOJaLPAcAjsAW0UBhhYl+wiot4b+He5o7b7O45Nwx9NmW+EFVtCXHs1v6uF
         n4gspvLK9Xt4tF8vb8+g8iQEFzbVSfBWHxrdg77WE6VXm2v7cu/hCynKV7ue8mwM+eoN
         K3+57mpfVAzTM027R3vdzkbbrjPIExj4bmrLqH/quLN8nZc+kcFAKO+EiNFRuIEAi+b/
         yGUKrQL/9omldubg2Bk6DRgj+S5LQxV3xkNA77cOQ84ff3/NVoOus726uFy3hcogEwz9
         C06mMjAb/sei5v7ke3s4YYhcYCqKPE5KVoGdCxJTbkGtWhx/p/dQIaIWcFjDbiKMEE9z
         0elQ==
X-Gm-Message-State: AC+VfDwjN2/X27Ana9Z520a1nqFIYMGvDJrNV0UWbh+6a0rAp7r5E3Cd
	XOf4rRJuZpqt+dQoGy9ax5ESedgFjKv8ZdwHL3k=
X-Google-Smtp-Source: ACHHUZ7+y4UzEtRzaUR1jtLZQSfoQhM4fDts/+72js+8UWSv9x7HCxBfeQORSOI42k1XUiS6jtyKWQ==
X-Received: by 2002:adf:dc0f:0:b0:2e4:eebe:aee3 with SMTP id t15-20020adfdc0f000000b002e4eebeaee3mr3432484wri.60.1682678494942;
        Fri, 28 Apr 2023 03:41:34 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 1/7] tools: Make some callers of xc_domain_getinfo() use xc_domain_getinfolist()
Date: Fri, 28 Apr 2023 11:41:18 +0100
Message-Id: <20230428104124.1044-2-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xc_domain_getinfo() is slow and prone to races because N hypercalls are
needed to find information about N domains. xc_domain_getinfolist() finds
the same information in a single hypercall, as long as a big enough buffer
is provided. Furthermore, xc_domain_getinfo() is removed in a future patch,
so migrate the callers interested in more than one domain to the *list()
version.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/include/xenctrl.h           | 12 ++++++++++++
 tools/python/xen/lowlevel/xc/xc.c | 28 ++++++++++++++--------------
 tools/xenmon/xenbaked.c           |  6 +++---
 3 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 05967ecc92..f5bc7f58b6 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -468,6 +468,18 @@ typedef struct xc_dominfo {
 
 typedef xen_domctl_getdomaininfo_t xc_domaininfo_t;
 
+static inline unsigned int dominfo_shutdown_reason(const xc_domaininfo_t *info)
+{
+    return (info->flags >> XEN_DOMINF_shutdownshift) & XEN_DOMINF_shutdownmask;
+}
+
+static inline bool dominfo_shutdown_with(xc_domaininfo_t *info, unsigned int expected_reason)
+{
+    /* The reason doesn't make sense unless the domain is actually shutdown */
+    return (info->flags & XEN_DOMINF_shutdown) &&
+           (dominfo_shutdown_reason(info) == expected_reason);
+}
+
 typedef union 
 {
 #if defined(__i386__) || defined(__x86_64__)
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 35901c2d63..d7ce299650 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -342,7 +342,7 @@ static PyObject *pyxc_domain_getinfo(XcObject *self,
     uint32_t first_dom = 0;
     int max_doms = 1024, nr_doms, i;
     size_t j;
-    xc_dominfo_t *info;
+    xc_domaininfo_t *info;
 
     static char *kwd_list[] = { "first_dom", "max_doms", NULL };
 
@@ -350,11 +350,11 @@ static PyObject *pyxc_domain_getinfo(XcObject *self,
                                       &first_dom, &max_doms) )
         return NULL;
 
-    info = calloc(max_doms, sizeof(xc_dominfo_t));
+    info = calloc(max_doms, sizeof(*info));
     if (info == NULL)
         return PyErr_NoMemory();
 
-    nr_doms = xc_domain_getinfo(self->xc_handle, first_dom, max_doms, info);
+    nr_doms = xc_domain_getinfolist(self->xc_handle, first_dom, max_doms, info);
 
     if (nr_doms < 0)
     {
@@ -368,21 +368,21 @@ static PyObject *pyxc_domain_getinfo(XcObject *self,
         info_dict = Py_BuildValue(
             "{s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i"
             ",s:L,s:L,s:L,s:i,s:i,s:i}",
-            "domid",           (int)info[i].domid,
+            "domid",           (int)info[i].domain,
             "online_vcpus",    info[i].nr_online_vcpus,
             "max_vcpu_id",     info[i].max_vcpu_id,
-            "hvm",             info[i].hvm,
-            "dying",           info[i].dying,
-            "crashed",         info[i].crashed,
-            "shutdown",        info[i].shutdown,
-            "paused",          info[i].paused,
-            "blocked",         info[i].blocked,
-            "running",         info[i].running,
-            "mem_kb",          (long long)info[i].nr_pages*(XC_PAGE_SIZE/1024),
+            "hvm",             !!(info[i].flags & XEN_DOMINF_hvm_guest),
+            "dying",           !!(info[i].flags & XEN_DOMINF_dying),
+            "crashed",         dominfo_shutdown_with(&info[i], SHUTDOWN_crash),
+            "shutdown",        !!(info[i].flags & XEN_DOMINF_shutdown),
+            "paused",          !!(info[i].flags & XEN_DOMINF_paused),
+            "blocked",         !!(info[i].flags & XEN_DOMINF_blocked),
+            "running",         !!(info[i].flags & XEN_DOMINF_running),
+            "mem_kb",          (long long)info[i].tot_pages*(XC_PAGE_SIZE/1024),
             "cpu_time",        (long long)info[i].cpu_time,
-            "maxmem_kb",       (long long)info[i].max_memkb,
+            "maxmem_kb",       (long long)(info[i].max_pages << (XC_PAGE_SHIFT - 10)),
             "ssidref",         (int)info[i].ssidref,
-            "shutdown_reason", info[i].shutdown_reason,
+            "shutdown_reason", dominfo_shutdown_reason(&info[i]),
             "cpupool",         (int)info[i].cpupool);
         pyhandle = PyList_New(sizeof(xen_domain_handle_t));
         if ( (pyhandle == NULL) || (info_dict == NULL) )
diff --git a/tools/xenmon/xenbaked.c b/tools/xenmon/xenbaked.c
index 4dddbd20e2..8632b10ea4 100644
--- a/tools/xenmon/xenbaked.c
+++ b/tools/xenmon/xenbaked.c
@@ -775,7 +775,7 @@ static void global_init_domain(int domid, int idx)
 static int indexof(int domid)
 {
     int idx;
-    xc_dominfo_t dominfo[NDOMAINS];
+    xc_domaininfo_t dominfo[NDOMAINS];
     xc_interface *xc_handle;
     int ndomains;
   
@@ -797,7 +797,7 @@ static int indexof(int domid)
 
     // call domaininfo hypercall to try and garbage collect unused entries
     xc_handle = xc_interface_open(0,0,0);
-    ndomains = xc_domain_getinfo(xc_handle, 0, NDOMAINS, dominfo);
+    ndomains = xc_domain_getinfolist(xc_handle, 0, NDOMAINS, dominfo);
     xc_interface_close(xc_handle);
 
     // for each domain in our data, look for it in the system dominfo structure
@@ -808,7 +808,7 @@ static int indexof(int domid)
         int jdx;
     
         for (jdx=0; jdx<ndomains; jdx++) {
-            if (dominfo[jdx].domid == domid)
+            if (dominfo[jdx].domain == domid)
                 break;
         }
         if (jdx == ndomains)        // we didn't find domid in the dominfo struct
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 10:41:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 10:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527253.819675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psLXq-0002xG-Kh; Fri, 28 Apr 2023 10:41:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527253.819675; Fri, 28 Apr 2023 10:41:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psLXq-0002x7-HU; Fri, 28 Apr 2023 10:41:42 +0000
Received: by outflank-mailman (input) for mailman id 527253;
 Fri, 28 Apr 2023 10:41:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=19My=AT=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1psLXp-0002tZ-Di
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 10:41:41 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 41b60cc2-e5b1-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 12:41:39 +0200 (CEST)
Received: by mail-wr1-x430.google.com with SMTP id
 ffacd0b85a97d-2fc3f1d6f8cso6128360f8f.3
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 03:41:39 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 k6-20020a5d5246000000b002e71156b0fcsm20930378wrc.6.2023.04.28.03.41.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 03:41:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41b60cc2-e5b1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682678498; x=1685270498;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ZUhXin/OPZZXRGBUuqUwhDZyM+uQc0DAORSurlrGu/k=;
        b=DbzyUqqc6Cw2R8dr9vtDR53PFmBj/RuOITN0eOZONHdDlyEjXuN8SgfQV9y3x3WQUd
         Gx+j2mLhyNZRtRgHwB6UDFDSN9ROX1Kr/3PmRwwCts2TXhztfqeGuzzrBOMJh0iBplqA
         A6nM4qi7FXDYzCR+VP6cewiBdvF9NK0xzzCGA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682678498; x=1685270498;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ZUhXin/OPZZXRGBUuqUwhDZyM+uQc0DAORSurlrGu/k=;
        b=WhQpPhuqU+k7k3ehiQK8YNZqE1e/aK0If0pfgQ4WiK4pDPUAcKKtZ6C8fg65ukd9KH
         6le6o6jnhIoNOn7APhMSCqE5wBOzwiJJih6Ss9BYuK7xceb35F5oj1HA2z9PXsJ7nSdC
         ev5Jo/aLsWkESGcKGz64BNzN9BpYyaq4j+MlDY9QEXR6zDYBoRI8VYbMZkBOMYVCNsXs
         47FuGH41meOYxXh+LR8U80upKFpS0xMIP4qM69s4u68mBB1W/x+gjAwpNKA5N5/CtNe7
         WBwkY+YLOqRVlvBbm1GWmFHBUojlHnLLr/g87mc758SPgrfe/PX3HcptxKzoHvjbQr8f
         vEFw==
X-Gm-Message-State: AC+VfDxJ5QSMRnS8k87rHdH/rB1uPSa+DtVhwV0vzV+OlsuYqwZiS4UY
	1DIW2wl4W80TpYtHiQG+Rcf1YwhdfVtGGT39f20=
X-Google-Smtp-Source: ACHHUZ4zH2UeGBP+lfx0B2dTM3GHzvvs90ipnwa4Jzx0cFPfM1bp7lgtg0zeJbuzU+pqxW/QVhNW7Q==
X-Received: by 2002:a5d:49c3:0:b0:2d8:47c7:7b52 with SMTP id t3-20020a5d49c3000000b002d847c77b52mr3576746wrs.9.1682678498391;
        Fri, 28 Apr 2023 03:41:38 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2 3/7] tools: Refactor console/io.c to avoid using xc_domain_getinfo()
Date: Fri, 28 Apr 2023 11:41:20 +0100
Message-Id: <20230428104124.1044-4-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It has two avoidable occurrences:

* Checking whether a domain is valid, which can be done faster with
  xc_domain_getinfo_single().
* Domain discovery, which can be done much faster through the sysctl
  interface with xc_domain_getinfolist().

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/console/daemon/io.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/tools/console/daemon/io.c b/tools/console/daemon/io.c
index 6bfe96715b..c5972cb721 100644
--- a/tools/console/daemon/io.c
+++ b/tools/console/daemon/io.c
@@ -405,13 +405,7 @@ static void buffer_advance(struct buffer *buffer, size_t len)
 
 static bool domain_is_valid(int domid)
 {
-	bool ret;
-	xc_dominfo_t info;
-
-	ret = (xc_domain_getinfo(xc, domid, 1, &info) == 1 &&
-	       info.domid == domid);
-		
-	return ret;
+	return xc_domain_getinfo_single(xc, domid, NULL) == 0;
 }
 
 static int create_hv_log(void)
@@ -961,24 +955,33 @@ static unsigned enum_pass = 0;
 
 static void enum_domains(void)
 {
-	int domid = 1;
-	xc_dominfo_t dominfo;
+	/**
+	 * Memory set aside to query the state of every
+	 * domain in the hypervisor in a single hypercall.
+	 */
+	static xc_domaininfo_t domaininfo[DOMID_FIRST_RESERVED - 1];
+
+	int ret;
 	struct domain *dom;
 
 	enum_pass++;
 
-	while (xc_domain_getinfo(xc, domid, 1, &dominfo) == 1) {
-		dom = lookup_domain(dominfo.domid);
-		if (dominfo.dying) {
+	/* Fetch info on every valid domain except for dom0 */
+	ret = xc_domain_getinfolist(xc, 1, DOMID_FIRST_RESERVED - 1, domaininfo);
+	if (ret < 0)
+		return;
+
+	for (size_t i = 0; i < ret; i++) {
+		dom = lookup_domain(domaininfo[i].domain);
+		if (domaininfo[i].flags & XEN_DOMINF_dying) {
 			if (dom)
 				shutdown_domain(dom);
 		} else {
 			if (dom == NULL)
-				dom = create_domain(dominfo.domid);
+				dom = create_domain(domaininfo[i].domain);
 		}
 		if (dom)
 			dom->last_seen = enum_pass;
-		domid = dominfo.domid + 1;
 	}
 }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 10:41:54 2023
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.org>
Subject: [PATCH v2 4/7] tools: Make init-xenstore-domain use xc_domain_getinfolist()
Date: Fri, 28 Apr 2023 11:41:21 +0100
Message-Id: <20230428104124.1044-5-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It currently relies on xc_domain_getinfo() returning the next available
domain past "first_domid", a behaviour that will disappear in a future
patch.

While at it, make each hypercall fetch information about several domains
at once, so that a typical system can (hopefully) be covered by a single
hypercall.
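The chunked scan this introduces can be exercised in isolation with a simulated domain list. The struct, flag value, and `sim_*` names below are illustrative stand-ins, not the real interface; the loop shape mirrors the patched check_domain():

```c
#include <assert.h>
#include <stddef.h>

#define XS_FLAG 0x1  /* stand-in for XEN_DOMINF_xs_domain */

struct sim_info { int domain; unsigned flags; };

/* 10 live domains; dom 9 is the (simulated) xenstore domain. */
static struct sim_info sim_doms[10];

static void sim_setup(void)
{
    for (int i = 0; i < 10; i++) {
        sim_doms[i].domain = i + 1;
        sim_doms[i].flags = (i + 1 == 9) ? XS_FLAG : 0;
    }
}

/* Stub for xc_domain_getinfolist(): fill up to max entries with
 * ascending domids >= first, returning how many were filled. */
static int sim_getinfolist(int first, size_t max, struct sim_info *info)
{
    size_t n = 0;
    for (int i = 0; i < 10 && n < max; i++)
        if (sim_doms[i].domain >= first)
            info[n++] = sim_doms[i];
    return (int)n;
}

/* The batched scan from check_domain(): 8 domains per (stubbed)
 * hypercall, resuming each round past the last domid seen. */
static int sim_check_domain(void)
{
    struct sim_info info[8];
    int dom = 1, ret;

    while ( (ret = sim_getinfolist(dom, sizeof(info) / sizeof(info[0]),
                                   info)) > 0 )
    {
        for ( int i = 0; i < ret; i++ )
        {
            if ( info[i].flags & XS_FLAG )
                return 1;
        }
        dom = info[ret - 1].domain + 1;
    }
    return 0;
}
```

With 10 domains and a batch of 8, the scan takes two calls instead of ten, which is the whole point of the batching.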

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Juergen Gross <jgross@suse.org>
---
 tools/helpers/init-xenstore-domain.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 0950ba7dc5..e210a2677e 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -13,6 +13,7 @@
 #include <xentoollog.h>
 #include <libxl.h>
 #include <xen/sys/xenbus_dev.h>
+#include <xen-tools/common-macros.h>
 #include <xen-xsm/flask/flask.h>
 #include <xen/io/xenbus.h>
 
@@ -322,16 +323,19 @@ err:
 
 static int check_domain(xc_interface *xch)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info[8];
     uint32_t dom;
     int ret;
 
     dom = 1;
-    while ( (ret = xc_domain_getinfo(xch, dom, 1, &info)) == 1 )
+    while ( (ret = xc_domain_getinfolist(xch, dom, ARRAY_SIZE(info), info)) > 0 )
     {
-        if ( info.xenstore )
-            return 1;
-        dom = info.domid + 1;
+        for ( size_t i = 0; i < ret; i++ )
+        {
+            if ( info[i].flags & XEN_DOMINF_xs_domain )
+                return 1;
+        }
+        dom = info[ret - 1].domain + 1;
     }
     if ( ret < 0 && errno != ESRCH )
     {
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 10:41:55 2023
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 5/7] tools: Modify single-domid callers of xc_domain_getinfolist()
Date: Fri, 28 Apr 2023 11:41:22 +0100
Message-Id: <20230428104124.1044-6-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xc_domain_getinfolist() internally relies on a sysctl that performs a
linear search for the domids. Callers that require information about one
precise domid are much better off calling xc_domain_getinfo_single(),
which uses the getdomaininfo domctl and ensures the returned domid
matches the requested one. The domctl also finds the domid faster,
because it uses hashed lists.
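The speed claim can be illustrated with a toy hashed domain list. The bucket count, names, and layout below are made up for the sketch and only loosely modelled on the hypervisor's structure:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NBUCKETS 16  /* illustrative size only */

struct dom {
    int domid;
    struct dom *next;
};

static struct dom *buckets[NBUCKETS];

static unsigned hash_domid(int domid)
{
    return (unsigned)domid % NBUCKETS;
}

static void insert_dom(struct dom *d)
{
    unsigned b = hash_domid(d->domid);
    d->next = buckets[b];
    buckets[b] = d;
}

/* getdomaininfo-style lookup: only the short chain behind one bucket is
 * walked, instead of every domain in the system. */
static bool lookup_dom(int domid, unsigned *steps)
{
    *steps = 0;
    for (struct dom *d = buckets[hash_domid(domid)]; d; d = d->next) {
        (*steps)++;
        if (d->domid == domid)
            return true;
    }
    return false;
}

/* Populate 64 domains and report how many nodes a lookup of dom63
 * visits; a linear scan could visit up to all 64. */
static unsigned demo_steps(void)
{
    static struct dom pool[64];
    unsigned steps;

    for (int i = 0; i < 64; i++) {
        pool[i].domid = i;
        insert_dom(&pool[i]);
    }
    return lookup_dom(63, &steps) ? steps : 0;
}
```

A miss is equally cheap: only the one chain selected by the hash is walked before reporting the domid absent.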

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/libs/light/libxl_dom.c         | 15 +++++----------
 tools/libs/light/libxl_dom_suspend.c |  7 +------
 tools/libs/light/libxl_domain.c      | 13 +++++--------
 tools/libs/light/libxl_mem.c         |  4 ++--
 tools/libs/light/libxl_sched.c       | 12 ++++--------
 tools/xenpaging/xenpaging.c          | 10 +++++-----
 6 files changed, 22 insertions(+), 39 deletions(-)

diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
index 25fb716084..bd5d823581 100644
--- a/tools/libs/light/libxl_dom.c
+++ b/tools/libs/light/libxl_dom.c
@@ -32,8 +32,8 @@ libxl_domain_type libxl__domain_type(libxl__gc *gc, uint32_t domid)
     xc_domaininfo_t info;
     int ret;
 
-    ret = xc_domain_getinfolist(ctx->xch, domid, 1, &info);
-    if (ret != 1 || info.domain != domid) {
+    ret = xc_domain_getinfo_single(ctx->xch, domid, &info);
+    if (ret < 0) {
         LOG(ERROR, "unable to get domain type for domid=%"PRIu32, domid);
         return LIBXL_DOMAIN_TYPE_INVALID;
     }
@@ -70,15 +70,10 @@ int libxl__domain_cpupool(libxl__gc *gc, uint32_t domid)
     xc_domaininfo_t info;
     int ret;
 
-    ret = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
-    if (ret != 1)
+    ret = xc_domain_getinfo_single(CTX->xch, domid, &info);
+    if (ret < 0)
     {
-        LOGE(ERROR, "getinfolist failed %d", ret);
-        return ERROR_FAIL;
-    }
-    if (info.domain != domid)
-    {
-        LOGE(ERROR, "got info for dom%d, wanted dom%d\n", info.domain, domid);
+        LOGED(ERROR, domid, "get domaininfo failed: %d", ret);
         return ERROR_FAIL;
     }
     return info.cpupool;
diff --git a/tools/libs/light/libxl_dom_suspend.c b/tools/libs/light/libxl_dom_suspend.c
index 4fa22bb739..6091a5f3f6 100644
--- a/tools/libs/light/libxl_dom_suspend.c
+++ b/tools/libs/light/libxl_dom_suspend.c
@@ -332,13 +332,8 @@ static void suspend_common_wait_guest_check(libxl__egc *egc,
     /* Convenience aliases */
     const uint32_t domid = dsps->domid;
 
-    ret = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
+    ret = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (ret < 0) {
-        LOGED(ERROR, domid, "unable to check for status of guest");
-        goto err;
-    }
-
-    if (!(ret == 1 && info.domain == domid)) {
         LOGED(ERROR, domid, "guest we were suspending has been destroyed");
         goto err;
     }
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 7f0986c185..9fa4091859 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -349,16 +349,12 @@ int libxl_domain_info(libxl_ctx *ctx, libxl_dominfo *info_r,
     int ret;
     GC_INIT(ctx);
 
-    ret = xc_domain_getinfolist(ctx->xch, domid, 1, &xcinfo);
+    ret = xc_domain_getinfo_single(ctx->xch, domid, &xcinfo);
     if (ret<0) {
-        LOGED(ERROR, domid, "Getting domain info list");
+        LOGED(ERROR, domid, "Getting domain info: %d", ret);
         GC_FREE;
         return ERROR_FAIL;
     }
-    if (ret==0 || xcinfo.domain != domid) {
-        GC_FREE;
-        return ERROR_DOMAIN_NOTFOUND;
-    }
 
     if (info_r)
         libxl__xcinfo2xlinfo(ctx, &xcinfo, info_r);
@@ -1657,14 +1653,15 @@ int libxl__resolve_domid(libxl__gc *gc, const char *name, uint32_t *domid)
 libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
                                        int *nr_vcpus_out, int *nr_cpus_out)
 {
+    int rc;
     GC_INIT(ctx);
     libxl_vcpuinfo *ptr, *ret;
     xc_domaininfo_t domaininfo;
     xc_vcpuinfo_t vcpuinfo;
     unsigned int nr_vcpus;
 
-    if (xc_domain_getinfolist(ctx->xch, domid, 1, &domaininfo) != 1) {
-        LOGED(ERROR, domid, "Getting infolist");
+    if ((rc = xc_domain_getinfo_single(ctx->xch, domid, &domaininfo)) < 0) {
+        LOGED(ERROR, domid, "Getting dominfo: %d", rc);
         GC_FREE;
         return NULL;
     }
diff --git a/tools/libs/light/libxl_mem.c b/tools/libs/light/libxl_mem.c
index 92ec09f4cf..44e554adba 100644
--- a/tools/libs/light/libxl_mem.c
+++ b/tools/libs/light/libxl_mem.c
@@ -323,8 +323,8 @@ retry_transaction:
     libxl__xs_printf(gc, t, GCSPRINTF("%s/memory/target", dompath),
                      "%"PRIu64, new_target_memkb);
 
-    r = xc_domain_getinfolist(ctx->xch, domid, 1, &info);
-    if (r != 1 || info.domain != domid) {
+    r = xc_domain_getinfo_single(ctx->xch, domid, &info);
+    if (r < 0) {
         abort_transaction = 1;
         rc = ERROR_FAIL;
         goto out;
diff --git a/tools/libs/light/libxl_sched.c b/tools/libs/light/libxl_sched.c
index 7c53dc60e6..b8d0b9ccd7 100644
--- a/tools/libs/light/libxl_sched.c
+++ b/tools/libs/light/libxl_sched.c
@@ -219,13 +219,11 @@ static int sched_credit_domain_set(libxl__gc *gc, uint32_t domid,
     xc_domaininfo_t domaininfo;
     int rc;
 
-    rc = xc_domain_getinfolist(CTX->xch, domid, 1, &domaininfo);
+    rc = xc_domain_getinfo_single(CTX->xch, domid, &domaininfo);
     if (rc < 0) {
-        LOGED(ERROR, domid, "Getting domain info list");
+        LOGED(ERROR, domid, "Getting domain info: %d", rc);
         return ERROR_FAIL;
     }
-    if (rc != 1 || domaininfo.domain != domid)
-        return ERROR_INVAL;
 
     rc = xc_sched_credit_domain_get(CTX->xch, domid, &sdom);
     if (rc != 0) {
@@ -426,13 +424,11 @@ static int sched_credit2_domain_set(libxl__gc *gc, uint32_t domid,
     xc_domaininfo_t info;
     int rc;
 
-    rc = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
+    rc = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (rc < 0) {
-        LOGED(ERROR, domid, "Getting domain info");
+        LOGED(ERROR, domid, "Getting domain info: %d", rc);
         return ERROR_FAIL;
     }
-    if (rc != 1 || info.domain != domid)
-        return ERROR_INVAL;
 
     rc = xc_sched_credit2_domain_get(CTX->xch, domid, &sdom);
     if (rc != 0) {
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 6e5490315d..c7a9a82477 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -169,8 +169,8 @@ static int xenpaging_get_tot_pages(struct xenpaging *paging)
     xc_domaininfo_t domain_info;
     int rc;
 
-    rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1, &domain_info);
-    if ( rc != 1 )
+    rc = xc_domain_getinfo_single(xch, paging->vm_event.domain_id, &domain_info);
+    if ( rc < 0 )
     {
         PERROR("Error getting domain info");
         return -1;
@@ -424,9 +424,9 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     /* Get max_pages from guest if not provided via cmdline */
     if ( !paging->max_pages )
     {
-        rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1,
-                                   &domain_info);
-        if ( rc != 1 )
+        rc = xc_domain_getinfo_single(xch, paging->vm_event.domain_id,
+                                      &domain_info);
+        if ( rc < 0 )
         {
             PERROR("Error getting domain info");
             goto err;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 10:41:57 2023
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Tim Deegan <tim@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 6/7] tools: Use new xc function for some xc_domain_getinfo() calls
Date: Fri, 28 Apr 2023 11:41:23 +0100
Message-Id: <20230428104124.1044-7-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move calls that require information about a single, precisely identified
domain to the new xc_domain_getinfo_single().
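The recurring mechanical change in this patch is that the pre-decoded booleans of xc_dominfo_t (info.hvm, info.xenstore, ...) give way to raw flag tests on xc_domaininfo_t. A minimal sketch of the new idiom follows; the struct and flag value are cut-down stand-ins, the real XEN_DOMINF_* bits are defined in xen/public/domctl.h:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit value; consult xen/public/domctl.h for the real one. */
#define TOY_DOMINF_hvm_guest (1u << 1)

/* Cut-down stand-in for xc_domaininfo_t. */
struct toy_domaininfo {
    uint32_t flags;
};

/* New idiom: test the raw flag bit instead of a decoded boolean field. */
static bool is_hvm(const struct toy_domaininfo *info)
{
    return info->flags & TOY_DOMINF_hvm_guest;
}
```

Callers such as the console client's serial-vs-pv choice then become a single bit test on the flags word.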

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Tim Deegan <tim@xen.org>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/console/client/main.c             |  7 +++--
 tools/debugger/kdd/kdd-xen.c            |  6 +++--
 tools/libs/ctrl/xc_domain.c             |  9 +++----
 tools/libs/ctrl/xc_pagetab.c            |  7 +++--
 tools/libs/ctrl/xc_private.c            |  7 +++--
 tools/libs/ctrl/xc_private.h            |  7 ++---
 tools/libs/guest/xg_core.c              | 23 +++++++----------
 tools/libs/guest/xg_core.h              |  6 ++---
 tools/libs/guest/xg_core_arm.c          | 10 ++++----
 tools/libs/guest/xg_core_x86.c          | 18 ++++++-------
 tools/libs/guest/xg_cpuid_x86.c         | 34 ++++++++++++-------------
 tools/libs/guest/xg_dom_boot.c          | 16 +++---------
 tools/libs/guest/xg_domain.c            |  8 +++---
 tools/libs/guest/xg_offline_page.c      | 12 ++++-----
 tools/libs/guest/xg_private.h           |  1 +
 tools/libs/guest/xg_resume.c            | 19 +++++++-------
 tools/libs/guest/xg_sr_common.h         |  2 +-
 tools/libs/guest/xg_sr_restore.c        | 17 +++++--------
 tools/libs/guest/xg_sr_restore_x86_pv.c |  2 +-
 tools/libs/guest/xg_sr_save.c           | 26 ++++++++-----------
 tools/libs/guest/xg_sr_save_x86_pv.c    |  6 ++---
 tools/libs/light/libxl_dom.c            |  4 +--
 tools/libs/light/libxl_domain.c         |  4 +--
 tools/libs/light/libxl_sched.c          | 20 +++++++--------
 tools/libs/light/libxl_x86_acpi.c       |  4 +--
 tools/misc/xen-hvmcrash.c               |  6 ++---
 tools/misc/xen-lowmemd.c                |  6 ++---
 tools/misc/xen-mfndump.c                | 22 +++++++---------
 tools/misc/xen-vmtrace.c                |  6 ++---
 tools/vchan/vchan-socket-proxy.c        |  6 ++---
 tools/xenstore/xenstored_domain.c       | 15 +++++------
 tools/xentrace/xenctx.c                 |  8 +++---
 32 files changed, 157 insertions(+), 187 deletions(-)

diff --git a/tools/console/client/main.c b/tools/console/client/main.c
index 1a6fa162f7..6775006488 100644
--- a/tools/console/client/main.c
+++ b/tools/console/client/main.c
@@ -408,17 +408,16 @@ int main(int argc, char **argv)
 	if (dom_path == NULL)
 		err(errno, "xs_get_domain_path()");
 	if (type == CONSOLE_INVAL) {
-		xc_dominfo_t xcinfo;
+		xc_domaininfo_t xcinfo;
 		xc_interface *xc_handle = xc_interface_open(0,0,0);
 		if (xc_handle == NULL)
 			err(errno, "Could not open xc interface");
-		if ( (xc_domain_getinfo(xc_handle, domid, 1, &xcinfo) != 1) ||
-		     (xcinfo.domid != domid) ) {
+		if (xc_domain_getinfo_single(xc_handle, domid, &xcinfo) < 0) {
 			xc_interface_close(xc_handle);
 			err(errno, "Failed to get domain information");
 		}
 		/* default to pv console for pv guests and serial for hvm guests */
-		if (xcinfo.hvm)
+		if (xcinfo.flags & XEN_DOMINF_hvm_guest)
 			type = CONSOLE_SERIAL;
 		else
 			type = CONSOLE_PV;
diff --git a/tools/debugger/kdd/kdd-xen.c b/tools/debugger/kdd/kdd-xen.c
index e78c9311c4..b1a96bf4e2 100644
--- a/tools/debugger/kdd/kdd-xen.c
+++ b/tools/debugger/kdd/kdd-xen.c
@@ -570,7 +570,7 @@ kdd_guest *kdd_guest_init(char *arg, FILE *log, int verbosity)
     kdd_guest *g = NULL;
     xc_interface *xch = NULL;
     uint32_t domid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
 
     g = calloc(1, sizeof (kdd_guest));
     if (!g) 
@@ -590,7 +590,9 @@ kdd_guest *kdd_guest_init(char *arg, FILE *log, int verbosity)
     g->domid = domid;
 
     /* Check that the domain exists and is HVM */
-    if (xc_domain_getinfo(xch, domid, 1, &info) != 1 || !info.hvm)
+    if (xc_domain_getinfo_single(xch, domid, &info) < 0)
+        goto err;
+    if (!(info.flags & XEN_DOMINF_hvm_guest))
         goto err;
 
     snprintf(g->id, (sizeof g->id) - 1, 
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index 6b11775d4c..533e3c1314 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -1959,15 +1959,14 @@ int xc_domain_memory_mapping(
     uint32_t add_mapping)
 {
     DECLARE_DOMCTL;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int ret = 0, rc;
     unsigned long done = 0, nr, max_batch_sz;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get info for domain");
-        return -EINVAL;
+        PERROR("Could not get info for dom%u", domid);
+        return -1;
     }
     if ( !xc_core_arch_auto_translated_physmap(&info) )
         return 0;
diff --git a/tools/libs/ctrl/xc_pagetab.c b/tools/libs/ctrl/xc_pagetab.c
index db25c20247..d9f886633a 100644
--- a/tools/libs/ctrl/xc_pagetab.c
+++ b/tools/libs/ctrl/xc_pagetab.c
@@ -29,17 +29,16 @@
 unsigned long xc_translate_foreign_address(xc_interface *xch, uint32_t dom,
                                            int vcpu, unsigned long long virt)
 {
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     uint64_t paddr, mask, pte = 0;
     int size, level, pt_levels = 2;
     void *map;
 
-    if (xc_domain_getinfo(xch, dom, 1, &dominfo) != 1 
-        || dominfo.domid != dom)
+    if (xc_domain_getinfo_single(xch, dom, &dominfo) < 0)
         return 0;
 
     /* What kind of paging are we dealing with? */
-    if (dominfo.hvm) {
+    if (dominfo.flags & XEN_DOMINF_hvm_guest) {
         struct hvm_hw_cpu ctx;
         if (xc_domain_hvm_getcontext_partial(xch, dom,
                                              HVM_SAVE_CODE(CPU), vcpu,
diff --git a/tools/libs/ctrl/xc_private.c b/tools/libs/ctrl/xc_private.c
index 2f99a7d2cf..8dcebad401 100644
--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -441,11 +441,10 @@ int xc_machphys_mfn_list(xc_interface *xch,
 
 long xc_get_tot_pages(xc_interface *xch, uint32_t domid)
 {
-    xc_dominfo_t info;
-    if ( (xc_domain_getinfo(xch, domid, 1, &info) != 1) ||
-         (info.domid != domid) )
+    xc_domaininfo_t info;
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
         return -1;
-    return info.nr_pages;
+    return info.tot_pages;
 }
 
 int xc_copy_to_domain_page(xc_interface *xch,
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index 80dc464c93..8faabaea67 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -16,6 +16,7 @@
 #ifndef XC_PRIVATE_H
 #define XC_PRIVATE_H
 
+#include <inttypes.h>
 #include <unistd.h>
 #include <stdarg.h>
 #include <stdio.h>
@@ -420,12 +421,12 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
 int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...);
 
 #if defined (__i386__) || defined (__x86_64__)
-static inline int xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
+static inline int xc_core_arch_auto_translated_physmap(const xc_domaininfo_t *info)
 {
-    return info->hvm;
+    return info->flags & XEN_DOMINF_hvm_guest;
 }
 #elif defined (__arm__) || defined(__aarch64__)
-static inline int xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
+static inline int xc_core_arch_auto_translated_physmap(const xc_domaininfo_t *info)
 {
     return 1;
 }
diff --git a/tools/libs/guest/xg_core.c b/tools/libs/guest/xg_core.c
index c52f1161c1..f83436d6cb 100644
--- a/tools/libs/guest/xg_core.c
+++ b/tools/libs/guest/xg_core.c
@@ -349,7 +349,7 @@ elfnote_dump_none(xc_interface *xch, void *args, dumpcore_rtn_t dump_rtn)
 static int
 elfnote_dump_core_header(
     xc_interface *xch,
-    void *args, dumpcore_rtn_t dump_rtn, const xc_dominfo_t *info,
+    void *args, dumpcore_rtn_t dump_rtn, const xc_domaininfo_t *info,
     int nr_vcpus, unsigned long nr_pages)
 {
     int sts;
@@ -361,7 +361,8 @@ elfnote_dump_core_header(
     
     elfnote.descsz = sizeof(header);
     elfnote.type = XEN_ELFNOTE_DUMPCORE_HEADER;
-    header.xch_magic = info->hvm ? XC_CORE_MAGIC_HVM : XC_CORE_MAGIC;
+    header.xch_magic = (info->flags & XEN_DOMINF_hvm_guest) ? XC_CORE_MAGIC_HVM
+                                                            : XC_CORE_MAGIC;
     header.xch_nr_vcpus = nr_vcpus;
     header.xch_nr_pages = nr_pages;
     header.xch_page_size = PAGE_SIZE;
@@ -423,7 +424,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
                                 void *args,
                                 dumpcore_rtn_t dump_rtn)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     shared_info_any_t *live_shinfo = NULL;
     struct domain_info_context _dinfo = {};
     struct domain_info_context *dinfo = &_dinfo;
@@ -468,15 +469,15 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         goto out;
     }
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get info for domain");
+        PERROR("Could not get info for dom%u", domid);
         goto out;
     }
     /* Map the shared info frame */
     live_shinfo = xc_map_foreign_range(xch, domid, PAGE_SIZE,
                                        PROT_READ, info.shared_info_frame);
-    if ( !live_shinfo && !info.hvm )
+    if ( !live_shinfo && !(info.flags & XEN_DOMINF_hvm_guest) )
     {
         PERROR("Couldn't map live_shinfo");
         goto out;
@@ -517,12 +518,6 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         dinfo->guest_width = sizeof(unsigned long);
     }
 
-    if ( domid != info.domid )
-    {
-        PERROR("Domain %d does not exist", domid);
-        goto out;
-    }
-
     ctxt = calloc(sizeof(*ctxt), info.max_vcpu_id + 1);
     if ( !ctxt )
     {
@@ -560,9 +555,9 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
      * all the array...
      *
      * We don't want to use the total potential size of the memory map
-     * since that is usually much higher than info.nr_pages.
+     * since that is usually much higher than info.tot_pages.
      */
-    nr_pages = info.nr_pages;
+    nr_pages = info.tot_pages;
 
     if ( !auto_translated_physmap )
     {
diff --git a/tools/libs/guest/xg_core.h b/tools/libs/guest/xg_core.h
index aaca9e0a8b..ff577dad31 100644
--- a/tools/libs/guest/xg_core.h
+++ b/tools/libs/guest/xg_core.h
@@ -134,15 +134,15 @@ typedef struct xc_core_memory_map xc_core_memory_map_t;
 struct xc_core_arch_context;
 int xc_core_arch_memory_map_get(xc_interface *xch,
                                 struct xc_core_arch_context *arch_ctxt,
-                                xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                                xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                                 xc_core_memory_map_t **mapp,
                                 unsigned int *nr_entries);
 int xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo,
-                         xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                         xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                          xen_pfn_t **live_p2m);
 
 int xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo,
-                                  xc_dominfo_t *info,
+                                  xc_domaininfo_t *info,
                                   shared_info_any_t *live_shinfo,
                                   xen_pfn_t **live_p2m);
 
diff --git a/tools/libs/guest/xg_core_arm.c b/tools/libs/guest/xg_core_arm.c
index de30cf0c31..34276152da 100644
--- a/tools/libs/guest/xg_core_arm.c
+++ b/tools/libs/guest/xg_core_arm.c
@@ -33,14 +33,14 @@ xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
 
 int
 xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unused,
-                            xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                            xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                             xc_core_memory_map_t **mapp,
                             unsigned int *nr_entries)
 {
     xen_pfn_t p2m_size = 0;
     xc_core_memory_map_t *map;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &p2m_size) < 0 )
         return -1;
 
     map = malloc(sizeof(*map));
@@ -59,7 +59,7 @@ xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unus
 }
 
 static int
-xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
 {
     errno = ENOSYS;
@@ -67,14 +67,14 @@ xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                               shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
diff --git a/tools/libs/guest/xg_core_x86.c b/tools/libs/guest/xg_core_x86.c
index c5e4542ccc..dbd3a440f7 100644
--- a/tools/libs/guest/xg_core_x86.c
+++ b/tools/libs/guest/xg_core_x86.c
@@ -49,14 +49,14 @@ xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
 
 int
 xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unused,
-                            xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                            xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                             xc_core_memory_map_t **mapp,
                             unsigned int *nr_entries)
 {
     xen_pfn_t p2m_size = 0;
     xc_core_memory_map_t *map;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &p2m_size) < 0 )
         return -1;
 
     map = malloc(sizeof(*map));
@@ -314,24 +314,24 @@ xc_core_arch_map_p2m_tree_rw(xc_interface *xch, struct domain_info_context *dinf
 }
 
 static int
-xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
 {
     xen_pfn_t *p2m_frame_list = NULL;
     uint64_t p2m_cr3;
-    uint32_t dom = info->domid;
+    uint32_t dom = info->domain;
     int ret = -1;
     int err;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &dinfo->p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &dinfo->p2m_size) < 0 )
     {
         ERROR("Could not get maximum GPFN!");
         goto out;
     }
 
-    if ( dinfo->p2m_size < info->nr_pages  )
+    if ( dinfo->p2m_size < info->tot_pages )
     {
-        ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, info->nr_pages - 1);
+        ERROR("p2m_size < tot_pages (%lx < %"PRIx64")", dinfo->p2m_size, info->tot_pages);
         goto out;
     }
 
@@ -366,14 +366,14 @@ out:
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                               shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index bd16a87e48..1519b5d556 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -281,7 +281,8 @@ static int xc_cpuid_xend_policy(
     xc_interface *xch, uint32_t domid, const struct xc_xend_cpuid *xend)
 {
     int rc;
-    xc_dominfo_t di;
+    bool hvm;
+    xc_domaininfo_t di;
     unsigned int nr_leaves, nr_msrs;
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     /*
@@ -291,13 +292,13 @@
     xen_cpuid_leaf_t *host = NULL, *def = NULL, *cur = NULL;
     unsigned int nr_host, nr_def, nr_cur;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
-         di.domid != domid )
+    if ( (rc = xc_domain_getinfo_single(xch, domid, &di)) < 0 )
     {
-        ERROR("Failed to obtain d%d info", domid);
-        rc = -ESRCH;
+        PERROR("Failed to obtain d%d info", domid);
+        rc = -errno;
         goto fail;
     }
+    hvm = di.flags & XEN_DOMINF_hvm_guest;
 
     rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
     if ( rc )
@@ -330,12 +330,12 @@ static int xc_cpuid_xend_policy(
     /* Get the domain type's default policy. */
     nr_msrs = 0;
     nr_def = nr_leaves;
-    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
-                                           : XEN_SYSCTL_cpu_policy_pv_default,
+    rc = get_system_cpu_policy(xch, hvm ? XEN_SYSCTL_cpu_policy_hvm_default
+                                        : XEN_SYSCTL_cpu_policy_pv_default,
                                &nr_def, def, &nr_msrs, NULL);
     if ( rc )
     {
-        PERROR("Failed to obtain %s def policy", di.hvm ? "hvm" : "pv");
+        PERROR("Failed to obtain %s def policy", hvm ? "hvm" : "pv");
         rc = -errno;
         goto fail;
     }
@@ -428,7 +428,8 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
                           const struct xc_xend_cpuid *xend)
 {
     int rc;
-    xc_dominfo_t di;
+    bool hvm;
+    xc_domaininfo_t di;
     unsigned int i, nr_leaves, nr_msrs;
     xen_cpuid_leaf_t *leaves = NULL;
     struct cpu_policy *p = NULL;
@@ -436,13 +437,13 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     uint32_t host_featureset[FEATURESET_NR_ENTRIES] = {};
     uint32_t len = ARRAY_SIZE(host_featureset);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
-         di.domid != domid )
+    if ( (rc = xc_domain_getinfo_single(xch, domid, &di)) < 0 )
    {
-        ERROR("Failed to obtain d%d info", domid);
-        rc = -ESRCH;
+        PERROR("Failed to obtain d%d info", domid);
+        rc = -errno;
         goto out;
     }
+    hvm = di.flags & XEN_DOMINF_hvm_guest;
 
     rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
     if ( rc )
@@ -475,12 +475,12 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
 
     /* Get the domain's default policy. */
     nr_msrs = 0;
-    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
-                                           : XEN_SYSCTL_cpu_policy_pv_default,
+    rc = get_system_cpu_policy(xch, hvm ? XEN_SYSCTL_cpu_policy_hvm_default
+                                        : XEN_SYSCTL_cpu_policy_pv_default,
                                &nr_leaves, leaves, &nr_msrs, NULL);
     if ( rc )
     {
-        PERROR("Failed to obtain %s default policy", di.hvm ? "hvm" : "pv");
+        PERROR("Failed to obtain %s default policy", hvm ? "hvm" : "pv");
         rc = -errno;
         goto out;
     }
@@ -514,7 +514,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         p->feat.hle = test_bit(X86_FEATURE_HLE, host_featureset);
         p->feat.rtm = test_bit(X86_FEATURE_RTM, host_featureset);
 
-        if ( di.hvm )
+        if ( hvm )
         {
             p->feat.mpx = test_bit(X86_FEATURE_MPX, host_featureset);
         }
@@ -571,7 +571,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     {
         p->extd.itsc = itsc;
 
-        if ( di.hvm )
+        if ( hvm )
         {
             p->basic.pae = pae;
             p->basic.vmx = nested_virt;
@@ -579,7 +579,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         }
     }
 
-    if ( !di.hvm )
+    if ( !hvm )
     {
         /*
          * On hardware without CPUID Faulting, PV guests see real topology.
diff --git a/tools/libs/guest/xg_dom_boot.c b/tools/libs/guest/xg_dom_boot.c
index 263a3f4c85..1dea534bba 100644
--- a/tools/libs/guest/xg_dom_boot.c
+++ b/tools/libs/guest/xg_dom_boot.c
@@ -164,7 +164,7 @@ void *xc_dom_boot_domU_map(struct xc_dom_image *dom, xen_pfn_t pfn,
 
 int xc_dom_boot_image(struct xc_dom_image *dom)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int rc;
 
     DOMPRINTF_CALLED(dom->xch);
@@ -174,19 +174,11 @@ int xc_dom_boot_image(struct xc_dom_image *dom)
         return rc;
 
     /* collect some info */
-    rc = xc_domain_getinfo(dom->xch, dom->guest_domid, 1, &info);
-    if ( rc < 0 )
-    {
-        xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
-                     "%s: getdomaininfo failed (rc=%d)", __FUNCTION__, rc);
-        return rc;
-    }
-    if ( rc == 0 || info.domid != dom->guest_domid )
+    if ( xc_domain_getinfo_single(dom->xch, dom->guest_domid, &info) < 0 )
     {
         xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
-                     "%s: Huh? No domains found (nr_domains=%d) "
-                     "or domid mismatch (%d != %d)", __FUNCTION__,
-                     rc, info.domid, dom->guest_domid);
+                     "%s: getdomaininfo failed (errno=%d)",
+                     __FUNCTION__, errno);
         return -1;
     }
     dom->shared_info_mfn = info.shared_info_frame;
diff --git a/tools/libs/guest/xg_domain.c b/tools/libs/guest/xg_domain.c
index f0e7748449..198f6f904a 100644
--- a/tools/libs/guest/xg_domain.c
+++ b/tools/libs/guest/xg_domain.c
@@ -37,7 +37,7 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
 {
     struct domain_info_context _di;
 
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     shared_info_any_t *live_shinfo;
     xen_capabilities_info_t xen_caps = "";
     unsigned long i;
@@ -49,9 +49,9 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
         return -1;
     }
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get domain info");
+        PERROR("Could not get dominfo for dom%u", domid);
         return -1;
     }
 
@@ -86,7 +86,7 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
                                        info.shared_info_frame);
     if ( !live_shinfo )
     {
-        PERROR("Could not map the shared info frame (MFN 0x%lx)",
+        PERROR("Could not map the shared info frame (MFN 0x%"PRIx64")",
                info.shared_info_frame);
         return -1;
     }
diff --git a/tools/libs/guest/xg_offline_page.c b/tools/libs/guest/xg_offline_page.c
index ccd0299f0f..292f73a3e7 100644
--- a/tools/libs/guest/xg_offline_page.c
+++ b/tools/libs/guest/xg_offline_page.c
@@ -370,7 +370,7 @@ static int clear_pte(xc_interface *xch, uint32_t domid,
  */
 
 static int is_page_exchangable(xc_interface *xch, uint32_t domid, xen_pfn_t mfn,
-                               xc_dominfo_t *info)
+                               xc_domaininfo_t *info)
 {
     uint32_t status;
     int rc;
@@ -381,7 +381,7 @@ static int is_page_exchangable(xc_interface *xch, uint32_t domid, xen_pfn_t mfn,
         DPRINTF("Dom0's page can't be LM");
         return 0;
     }
-    if (info->hvm)
+    if (info->flags & XEN_DOMINF_hvm_guest)
     {
         DPRINTF("Currently we can only live change PV guest's page\n");
         return 0;
@@ -462,7 +462,7 @@ err0:
 /* The domain should be suspended when called here */
 int xc_exchange_page(xc_interface *xch, uint32_t domid, xen_pfn_t mfn)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xc_domain_meminfo minfo;
     struct xc_mmu *mmu = NULL;
     struct pte_backup old_ptes = {NULL, 0, 0};
@@ -477,13 +477,13 @@ int xc_exchange_page(xc_interface *xch, uint32_t domid, xen_pfn_t mfn)
     xen_pfn_t *m2p_table;
     unsigned long max_mfn;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        ERROR("Could not get domain info");
+        PERROR("Could not get domain info for dom%u", domid);
         return -1;
     }
 
-    if (!info.shutdown || info.shutdown_reason != SHUTDOWN_suspend)
+    if (!dominfo_shutdown_with(&info, SHUTDOWN_suspend))
     {
         errno = EINVAL;
         ERROR("Can't exchange page unless domain is suspended\n");
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index e729a8106c..d73947094f 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -16,6 +16,7 @@
 #ifndef XG_PRIVATE_H
 #define XG_PRIVATE_H
 
+#include <inttypes.h>
 #include <unistd.h>
 #include <errno.h>
 #include <fcntl.h>
diff --git a/tools/libs/guest/xg_resume.c b/tools/libs/guest/xg_resume.c
index 77e2451a3c..2965965e5b 100644
--- a/tools/libs/guest/xg_resume.c
+++ b/tools/libs/guest/xg_resume.c
@@ -26,28 +26,27 @@
 static int modify_returncode(xc_interface *xch, uint32_t domid)
 {
     vcpu_guest_context_any_t ctxt;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     xen_capabilities_info_t caps;
     struct domain_info_context _dinfo = {};
     struct domain_info_context *dinfo = &_dinfo;
     int rc;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get domain info");
+        PERROR("Could not get info for dom%u", domid);
         return -1;
     }
 
-    if ( !info.shutdown || (info.shutdown_reason != SHUTDOWN_suspend) )
+    if ( !dominfo_shutdown_with(&info, SHUTDOWN_suspend) )
     {
         ERROR("Dom %d not suspended: (shutdown %d, reason %d)", domid,
-              info.shutdown, info.shutdown_reason);
+              !!(info.flags & XEN_DOMINF_shutdown), dominfo_shutdown_reason(&info));
         errno = EINVAL;
         return -1;
     }
 
-    if ( info.hvm )
+    if ( info.flags & XEN_DOMINF_hvm_guest )
     {
         /* HVM guests without PV drivers have no return code to modify. */
         uint64_t irq = 0;
@@ -133,7 +132,7 @@ static int xc_domain_resume_hvm(xc_interface *xch, uint32_t domid)
 static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
 {
     DECLARE_DOMCTL;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int i, rc = -1;
 #if defined(__i386__) || defined(__x86_64__)
     struct domain_info_context _dinfo = { .guest_width = 0,
@@ -146,7 +145,7 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
     xen_pfn_t *p2m = NULL;
 #endif
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         PERROR("Could not get domain info");
         return rc;
@@ -156,7 +155,7 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
      * (x86 only) Rewrite store_mfn and console_mfn back to MFN (from PFN).
      */
 #if defined(__i386__) || defined(__x86_64__)
-    if ( info.hvm )
+    if ( info.flags & XEN_DOMINF_hvm_guest )
         return xc_domain_resume_hvm(xch, domid);
 
     if ( xc_domain_get_guest_width(xch, domid, &dinfo->guest_width) != 0 )
diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 36d45ef56f..2f058ee3a6 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -220,7 +220,7 @@ struct xc_sr_context
     /* Plain VM, or checkpoints over time. */
     xc_stream_type_t stream_type;
 
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
 
     union /* Common save or restore data. */
     {
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 7314a24cf9..6767c9f5cc 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -852,6 +852,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
                       xc_stream_type_t stream_type,
                       struct restore_callbacks *callbacks, int send_back_fd)
 {
+    bool hvm;
     xen_pfn_t nr_pfns;
     struct xc_sr_context ctx = {
         .xch = xch,
@@ -887,20 +888,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         break;
     }
 
-    if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )
+    if ( xc_domain_getinfo_single(xch, dom, &ctx.dominfo) < 0 )
     {
-        PERROR("Failed to get domain info");
-        return -1;
-    }
-
-    if ( ctx.dominfo.domid != dom )
-    {
-        ERROR("Domain %u does not exist", dom);
+        PERROR("Failed to get info for dom%u", dom);
         return -1;
     }
 
+    hvm = ctx.dominfo.flags & XEN_DOMINF_hvm_guest;
     DPRINTF("fd %d, dom %u, hvm %u, stream_type %d",
-            io_fd, dom, ctx.dominfo.hvm, stream_type);
+            io_fd, dom, hvm, stream_type);
 
     ctx.domid = dom;
 
@@ -914,8 +910,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
     }
 
     ctx.restore.p2m_size = nr_pfns;
-    ctx.restore.ops = ctx.dominfo.hvm
-        ? restore_ops_x86_hvm : restore_ops_x86_pv;
+    ctx.restore.ops = hvm ? restore_ops_x86_hvm : restore_ops_x86_pv;
 
     if ( restore(&ctx) )
         return -1;
diff --git a/tools/libs/guest/xg_sr_restore_x86_pv.c b/tools/libs/guest/xg_sr_restore_x86_pv.c
index dc50b0f5a8..eaeb97f4a0 100644
--- a/tools/libs/guest/xg_sr_restore_x86_pv.c
+++ b/tools/libs/guest/xg_sr_restore_x86_pv.c
@@ -903,7 +903,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
         ctx->dominfo.shared_info_frame);
     if ( !guest_shinfo )
     {
-        PERROR("Failed to map Shared Info at mfn %#lx",
+        PERROR("Failed to map Shared Info at mfn %#"PRIx64,
                ctx->dominfo.shared_info_frame);
         goto err;
     }
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 9853d8d846..b0b30b4bc2 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -336,19 +336,18 @@ static int suspend_domain(struct xc_sr_context *ctx)
     }
 
     /* Refresh domain information. */
-    if ( (xc_domain_getinfo(xch, ctx->domid, 1, &ctx->dominfo) != 1) ||
-         (ctx->dominfo.domid != ctx->domid) )
+    if ( xc_domain_getinfo_single(xch, ctx->domid, &ctx->dominfo) < 0 )
     {
         PERROR("Unable to refresh domain information");
         return -1;
     }
 
     /* Confirm the domain has actually been paused. */
-    if ( !ctx->dominfo.shutdown ||
-         (ctx->dominfo.shutdown_reason != SHUTDOWN_suspend) )
+    if ( !dominfo_shutdown_with(&ctx->dominfo, SHUTDOWN_suspend) )
     {
         ERROR("Domain has not been suspended: shutdown %d, reason %d",
-              ctx->dominfo.shutdown, ctx->dominfo.shutdown_reason);
+              !!(ctx->dominfo.flags & XEN_DOMINF_shutdown),
+              dominfo_shutdown_reason(&ctx->dominfo));
         return -1;
     }
 
@@ -893,8 +891,7 @@ static int save(struct xc_sr_context *ctx, uint16_t guest_type)
         if ( rc )
             goto err;
 
-        if ( !ctx->dominfo.shutdown ||
-             (ctx->dominfo.shutdown_reason != SHUTDOWN_suspend) )
+        if ( !dominfo_shutdown_with(&ctx->dominfo, SHUTDOWN_suspend) )
         {
             ERROR("Domain has not been suspended");
             rc = -1;
@@ -989,6 +986,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
         .fd = io_fd,
         .stream_type = stream_type,
     };
+    bool hvm;
 
     /* GCC 4.4 (of CentOS 6.x vintage) can' t initialise anonymous unions. */
     ctx.save.callbacks = callbacks;
@@ -996,17 +994,13 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
     ctx.save.debug = !!(flags & XCFLAGS_DEBUG);
     ctx.save.recv_fd = recv_fd;
 
-    if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )
+    if ( xc_domain_getinfo_single(xch, dom, &ctx.dominfo) < 0 )
     {
         PERROR("Failed to get domain info");
         return -1;
     }
 
-    if ( ctx.dominfo.domid != dom )
-    {
-        ERROR("Domain %u does not exist", dom);
-        return -1;
-    }
+    hvm = ctx.dominfo.flags & XEN_DOMINF_hvm_guest;
 
     /* Sanity check stream_type-related parameters */
     switch ( stream_type )
@@ -1018,7 +1012,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
         assert(callbacks->checkpoint && callbacks->postcopy);
         /* Fallthrough */
     case XC_STREAM_PLAIN:
-        if ( ctx.dominfo.hvm )
+        if ( hvm )
             assert(callbacks->switch_qemu_logdirty);
         break;
 
@@ -1028,11 +1022,11 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
     }
 
     DPRINTF("fd %d, dom %u, flags %u, hvm %d",
-            io_fd, dom, flags, ctx.dominfo.hvm);
+            io_fd, dom, flags, hvm);
 
     ctx.domid = dom;
 
-    if ( ctx.dominfo.hvm )
+    if ( hvm )
     {
         ctx.save.ops = save_ops_x86_hvm;
         return save(&ctx, DHDR_TYPE_X86_HVM);
diff --git a/tools/libs/guest/xg_sr_save_x86_pv.c b/tools/libs/guest/xg_sr_save_x86_pv.c
index 4964f1f7b8..f3d7a7a71a 100644
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/guest/xg_sr_save_x86_pv.c
@@ -20,7 +20,7 @@ static int map_shinfo(struct xc_sr_context *ctx)
         xch, ctx->domid, PAGE_SIZE, PROT_READ, ctx->dominfo.shared_info_frame);
     if ( !ctx->x86.pv.shinfo )
     {
-        PERROR("Failed to map shared info frame at mfn %#lx",
+        PERROR("Failed to map shared info frame at mfn %#"PRIx64,
                ctx->dominfo.shared_info_frame);
         return -1;
     }
@@ -943,7 +943,7 @@ static int normalise_pagetable(struct xc_sr_context *ctx, const uint64_t *src,
 #ifdef __i386__
             if ( mfn == INVALID_MFN )
             {
-                if ( !ctx->dominfo.paused )
+                if ( !(ctx->dominfo.flags & XEN_DOMINF_paused) )
                     errno = EAGAIN;
                 else
                 {
@@ -965,7 +965,7 @@ static int normalise_pagetable(struct xc_sr_context *ctx, const uint64_t *src,
 
             if ( !mfn_in_pseudophysmap(ctx, mfn) )
             {
-                if ( !ctx->dominfo.paused )
+                if ( !(ctx->dominfo.flags & XEN_DOMINF_paused) )
                     errno = EAGAIN;
                 else
                 {
diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
index bd5d823581..94fef37401 100644
--- a/tools/libs/light/libxl_dom.c
+++ b/tools/libs/light/libxl_dom.c
@@ -34,7 +34,7 @@ libxl_domain_type libxl__domain_type(libxl__gc *gc, uint32_t domid)
 
     ret = xc_domain_getinfo_single(ctx->xch, domid, &info);
     if (ret < 0) {
-        LOG(ERROR, "unable to get domain type for domid=%"PRIu32, domid);
+        LOGED(ERROR, domid, "unable to get dominfo");
         return LIBXL_DOMAIN_TYPE_INVALID;
     }
     if (info.flags & XEN_DOMINF_hvm_guest) {
@@ -73,7 +73,7 @@ int libxl__domain_cpupool(libxl__gc *gc, uint32_t domid)
     ret = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (ret < 0)
     {
-        LOGED(ERROR, domid, "get domaininfo failed: %d", ret);
+        LOGED(ERROR, domid, "get domaininfo failed");
         return ERROR_FAIL;
     }
     return info.cpupool;
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 9fa4091859..5709b3e62f 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -351,7 +351,7 @@ int libxl_domain_info(libxl_ctx *ctx, libxl_dominfo *info_r,
 
     ret = xc_domain_getinfo_single(ctx->xch, domid, &xcinfo);
     if (ret<0) {
-        LOGED(ERROR, domid, "Getting domain info: %d", ret);
+        LOGED(ERROR, domid, "Getting domain info");
         GC_FREE;
         return ERROR_FAIL;
     }
@@ -1661,7 +1661,7 @@ libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
     unsigned int nr_vcpus;
 
     if ((rc = xc_domain_getinfo_single(ctx->xch, domid, &domaininfo)) < 0) {
-        LOGED(ERROR, domid, "Getting dominfo: %d", rc);
+        LOGED(ERROR, domid, "Getting dominfo");
         GC_FREE;
         return NULL;
     }
diff --git a/tools/libs/light/libxl_sched.c b/tools/libs/light/libxl_sched.c
index b8d0b9ccd7..b87e490d12 100644
--- a/tools/libs/light/libxl_sched.c
+++ b/tools/libs/light/libxl_sched.c
@@ -221,7 +221,7 @@ static int sched_credit_domain_set(libxl__gc *gc, uint32_t domid,
 
     rc = xc_domain_getinfo_single(CTX->xch, domid, &domaininfo);
     if (rc < 0) {
-        LOGED(ERROR, domid, "Getting domain info: %d", rc);
+        LOGED(ERROR, domid, "Getting domain info");
         return ERROR_FAIL;
     }
 
@@ -426,7 +426,7 @@ static int sched_credit2_domain_set(libxl__gc *gc, uint32_t domid,
 
     rc = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (rc < 0) {
-        LOGED(ERROR, domid, "Getting domain info: %d", rc);
+        LOGED(ERROR, domid, "Getting domain info");
         return ERROR_FAIL;
     }
 
@@ -498,10 +498,10 @@ static int sched_rtds_vcpu_get(libxl__gc *gc, uint32_t domid,
 {
     uint32_t num_vcpus;
     int i, r, rc;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -552,10 +552,10 @@ static int sched_rtds_vcpu_get_all(libxl__gc *gc, uint32_t domid,
 {
     uint32_t num_vcpus;
     int i, r, rc;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -602,10 +602,10 @@ static int sched_rtds_vcpu_set(libxl__gc *gc, uint32_t domid,
     int r, rc;
     int i;
     uint16_t max_vcpuid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -662,11 +662,11 @@ static int sched_rtds_vcpu_set_all(libxl__gc *gc, uint32_t domid,
     int r, rc;
     int i;
     uint16_t max_vcpuid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
     uint32_t num_vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
diff --git a/tools/libs/light/libxl_x86_acpi.c b/tools/libs/light/libxl_x86_acpi.c
index 22eb160659..796b009d0c 100644
--- a/tools/libs/light/libxl_x86_acpi.c
+++ b/tools/libs/light/libxl_x86_acpi.c
@@ -87,14 +87,14 @@ static int init_acpi_config(libxl__gc *gc,
 {
     xc_interface *xch = dom->xch;
     uint32_t domid = dom->guest_domid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct hvm_info_table *hvminfo;
     int i, r, rc;
 
     config->dsdt_anycpu = config->dsdt_15cpu = dsdt_pvh;
     config->dsdt_anycpu_len = config->dsdt_15cpu_len = dsdt_pvh_len;
 
-    r = xc_domain_getinfo(xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(xch, domid, &info);
     if (r < 0) {
         LOG(ERROR, "getdomaininfo failed (rc=%d)", r);
         rc = ERROR_FAIL;
diff --git a/tools/misc/xen-hvmcrash.c b/tools/misc/xen-hvmcrash.c
index 4f0dabcb18..1d058fa40a 100644
--- a/tools/misc/xen-hvmcrash.c
+++ b/tools/misc/xen-hvmcrash.c
@@ -48,7 +48,7 @@ main(int argc, char **argv)
 {
     int domid;
     xc_interface *xch;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     int ret;
     uint32_t len;
     uint8_t *buf;
@@ -66,13 +66,13 @@ main(int argc, char **argv)
         exit(1);
     }
 
-    ret = xc_domain_getinfo(xch, domid, 1, &dominfo);
+    ret = xc_domain_getinfo_single(xch, domid, &dominfo);
     if (ret < 0) {
         perror("xc_domain_getinfo");
         exit(1);
     }
 
-    if (!dominfo.hvm) {
+    if (!(dominfo.flags & XEN_DOMINF_hvm_guest)) {
         fprintf(stderr, "domain %d is not HVM\n", domid);
         exit(1);
     }
diff --git a/tools/misc/xen-lowmemd.c b/tools/misc/xen-lowmemd.c
index a3a2741242..9d5cb549a8 100644
--- a/tools/misc/xen-lowmemd.c
+++ b/tools/misc/xen-lowmemd.c
@@ -38,7 +38,7 @@ void cleanup(void)
 #define BUFSZ 512
 void handle_low_mem(void)
 {
-    xc_dominfo_t  dom0_info;
+    xc_domaininfo_t dom0_info;
     xc_physinfo_t info;
     unsigned long long free_pages, dom0_pages, diff, dom0_target;
     char data[BUFSZ], error[BUFSZ];
@@ -58,13 +58,13 @@ void handle_low_mem(void)
         return;
     diff = THRESHOLD_PG - free_pages; 
 
-    if (xc_domain_getinfo(xch, 0, 1, &dom0_info) < 1)
+    if (xc_domain_getinfo_single(xch, 0, &dom0_info) < 0)
     {
         perror("Failed to get dom0 info");
         return;
     }
 
-    dom0_pages = (unsigned long long) dom0_info.nr_pages;
+    dom0_pages = (unsigned long long) dom0_info.tot_pages;
     printf("Dom0 pages: 0x%llx:%llu\n", dom0_pages, dom0_pages);
     dom0_target = dom0_pages - diff;
     if (dom0_target <= DOM0_FLOOR_PG)
diff --git a/tools/misc/xen-mfndump.c b/tools/misc/xen-mfndump.c
index b32c95e262..8863ece3f5 100644
--- a/tools/misc/xen-mfndump.c
+++ b/tools/misc/xen-mfndump.c
@@ -74,7 +74,7 @@ int dump_m2p_func(int argc, char *argv[])
 int dump_p2m_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     unsigned long i;
     int domid;
 
@@ -85,8 +85,7 @@ int dump_p2m_func(int argc, char *argv[])
     }
     domid = atoi(argv[0]);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -158,7 +157,7 @@ int dump_p2m_func(int argc, char *argv[])
 int dump_ptes_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     void *page = NULL;
     unsigned long i, max_mfn;
     int domid, pte_num, rc = 0;
@@ -172,8 +171,7 @@ int dump_ptes_func(int argc, char *argv[])
     domid = atoi(argv[0]);
     mfn = strtoul(argv[1], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -266,7 +264,7 @@ int dump_ptes_func(int argc, char *argv[])
 int lookup_pte_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     void *page = NULL;
     unsigned long i, j;
     int domid, pte_num;
@@ -280,8 +278,7 @@ int lookup_pte_func(int argc, char *argv[])
     domid = atoi(argv[0]);
     mfn = strtoul(argv[1], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -336,7 +333,7 @@ int lookup_pte_func(int argc, char *argv[])
 
 int memcmp_mfns_func(int argc, char *argv[])
 {
-    xc_dominfo_t info1, info2;
+    xc_domaininfo_t info1, info2;
     void *page1 = NULL, *page2 = NULL;
     int domid1, domid2;
     xen_pfn_t mfn1, mfn2;
@@ -352,9 +349,8 @@ int memcmp_mfns_func(int argc, char *argv[])
     mfn1 = strtoul(argv[1], NULL, 16);
     mfn2 = strtoul(argv[3], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid1, 1, &info1) != 1 ||
-         xc_domain_getinfo(xch, domid2, 1, &info2) != 1 ||
-         info1.domid != domid1 || info2.domid != domid2)
+    if ( xc_domain_getinfo_single(xch, domid1, &info1) < 0 ||
+         xc_domain_getinfo_single(xch, domid2, &info2) < 0)
     {
         ERROR("Failed to obtain info for domains\n");
         return -1;
diff --git a/tools/misc/xen-vmtrace.c b/tools/misc/xen-vmtrace.c
index 5b688a54af..ba2ce17a17 100644
--- a/tools/misc/xen-vmtrace.c
+++ b/tools/misc/xen-vmtrace.c
@@ -133,15 +133,15 @@ int main(int argc, char **argv)
 
     while ( !interrupted )
     {
-        xc_dominfo_t dominfo;
+        xc_domaininfo_t dominfo;
 
         if ( get_more_data() )
             goto out;
 
         usleep(1000 * 100);
 
-        if ( xc_domain_getinfo(xch, domid, 1, &dominfo) != 1 ||
-             dominfo.domid != domid || dominfo.shutdown )
+        if ( xc_domain_getinfo_single(xch, domid, &dominfo) < 0 ||
+             (dominfo.flags & XEN_DOMINF_shutdown) )
         {
             if ( get_more_data() )
                 goto out;
diff --git a/tools/vchan/vchan-socket-proxy.c b/tools/vchan/vchan-socket-proxy.c
index e1d959c6d1..9c4c336b03 100644
--- a/tools/vchan/vchan-socket-proxy.c
+++ b/tools/vchan/vchan-socket-proxy.c
@@ -222,7 +222,7 @@ static struct libxenvchan *connect_vchan(int domid, const char *path) {
     struct libxenvchan *ctrl = NULL;
     struct xs_handle *xs = NULL;
     xc_interface *xc = NULL;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     char **watch_ret;
     unsigned int watch_num;
     int ret;
@@ -254,12 +254,12 @@ static struct libxenvchan *connect_vchan(int domid, const char *path) {
         if (ctrl)
             break;
 
-        ret = xc_domain_getinfo(xc, domid, 1, &dominfo);
+        ret = xc_domain_getinfo_single(xc, domid, &dominfo);
         /* break the loop if domain is definitely not there anymore, but
          * continue if it is or the call failed (like EPERM) */
         if (ret == -1 && errno == ESRCH)
             break;
-        if (ret == 1 && (dominfo.domid != (uint32_t)domid || dominfo.dying))
+        if (ret == 0 && (dominfo.flags & XEN_DOMINF_dying))
             break;
     }
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f62be2245c..aeb7595ae1 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -339,15 +339,14 @@ static int destroy_domain(void *_domain)
 	return 0;
 }
 
-static bool get_domain_info(unsigned int domid, xc_dominfo_t *dominfo)
+static bool get_domain_info(unsigned int domid, xc_domaininfo_t *dominfo)
 {
-	return xc_domain_getinfo(*xc_handle, domid, 1, dominfo) == 1 &&
-	       dominfo->domid == domid;
+	return xc_domain_getinfo_single(*xc_handle, domid, dominfo) == 0;
 }
 
 static int check_domain(const void *k, void *v, void *arg)
 {
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 	struct connection *conn;
 	bool dom_valid;
 	struct domain *domain = v;
@@ -360,12 +359,12 @@ static int check_domain(const void *k, void *v, void *arg)
 		return 0;
 	}
 	if (dom_valid) {
-		if ((dominfo.crashed || dominfo.shutdown)
+		if ((dominfo.flags & XEN_DOMINF_shutdown)
 		    && !domain->shutdown) {
 			domain->shutdown = true;
 			*notify = true;
 		}
-		if (!dominfo.dying)
+		if (!(dominfo.flags & XEN_DOMINF_dying))
 			return 0;
 	}
 	if (domain->conn) {
@@ -486,7 +485,7 @@ static struct domain *find_or_alloc_domain(const void *ctx, unsigned int domid)
 static struct domain *find_or_alloc_existing_domain(unsigned int domid)
 {
 	struct domain *domain;
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 
 	domain = find_domain_struct(domid);
 	if (!domain && get_domain_info(domid, &dominfo))
@@ -1010,7 +1009,7 @@ int domain_alloc_permrefs(struct node_perms *perms)
 {
 	unsigned int i, domid;
 	struct domain *d;
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 
 	for (i = 0; i < perms->num; i++) {
 		domid = perms->p[i].id;
diff --git a/tools/xentrace/xenctx.c b/tools/xentrace/xenctx.c
index 85ba0c0fa6..9acb9db460 100644
--- a/tools/xentrace/xenctx.c
+++ b/tools/xentrace/xenctx.c
@@ -92,7 +92,7 @@ static struct xenctx {
     int do_stack;
 #endif
     int kernel_start_set;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
 } xenctx;
 
 struct symbol {
@@ -989,7 +989,7 @@ static void dump_ctx(int vcpu)
 
 #if defined(__i386__) || defined(__x86_64__)
     {
-        if (xenctx.dominfo.hvm) {
+        if (xenctx.dominfo.flags & XEN_DOMINF_hvm_guest) {
             struct hvm_hw_cpu cpuctx;
             xen_capabilities_info_t xen_caps = "";
             if (xc_domain_hvm_getcontext_partial(
@@ -1269,9 +1269,9 @@ int main(int argc, char **argv)
         exit(-1);
     }
 
-    ret = xc_domain_getinfo(xenctx.xc_handle, xenctx.domid, 1, &xenctx.dominfo);
+    ret = xc_domain_getinfo_single(xenctx.xc_handle, xenctx.domid, &xenctx.dominfo);
     if (ret < 0) {
-        perror("xc_domain_getinfo");
+        perror("xc_domain_getinfo_single");
         exit(-1);
     }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 10:41:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 10:41:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527257.819710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psLXw-0003tJ-39; Fri, 28 Apr 2023 10:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527257.819710; Fri, 28 Apr 2023 10:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psLXv-0003sp-R1; Fri, 28 Apr 2023 10:41:47 +0000
Received: by outflank-mailman (input) for mailman id 527257;
 Fri, 28 Apr 2023 10:41:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=19My=AT=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1psLXu-0002Ac-Uz
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 10:41:46 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 45b8f029-e5b1-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 12:41:46 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id
 ffacd0b85a97d-2f95231618aso6131763f8f.1
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 03:41:46 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 k6-20020a5d5246000000b002e71156b0fcsm20930378wrc.6.2023.04.28.03.41.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 03:41:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45b8f029-e5b1-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682678505; x=1685270505;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=elEycC0m0QwhjrI/cCyPmElzCeDcTnaeFu39NnQQmM4=;
        b=hC/PZiuATFhzTmZYjkhbaeHmZK6LjbbDcfC9oLcOccrJjMDYraxZuYkn0v1xMmz/Gx
         +tu2+qJvczbTi8wVU4opXXncC+RZGlNwMBd1mgrJwajRv3oDJ3L+ag7JQKQ4GxyXjRr0
         MqvqZ2+GO8pgAswESbH3H1NLjAroNuar5opew=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682678505; x=1685270505;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=elEycC0m0QwhjrI/cCyPmElzCeDcTnaeFu39NnQQmM4=;
        b=NgW8j/U+7lNThrnqILMuiyWtJed3ShEUYHaNhHLYng3A4vJUbEdLxZMBvYEqLSeUYa
         Zb6dvZphIWgH0mTTCg5MTkOtGp2HEQNsNy8vhSWRuNhrFgkGYvIs7z9ZkwpWBvITfKQo
         zyeZi7Tm4gjiB6NFmVQJ3j9hZu0bYbgsOOBgE5TkE8nz+7ZHvPt/SDda+HttSrN3vty2
         8ryvfaFmKuOQzgbs6VoVUmsEGuVtbrt7EdMd/N82s20d+ou/JudyyVXpkmXadUmyVELv
         hvBB7VM8EaGmpU94TkTiqKs24shWjdFkAus0ihnv/vosHf/dB3X/K31w5qOyvAYApzJw
         trVQ==
X-Gm-Message-State: AC+VfDzCWlMA+bP6yvuCB8Qr6XMLf2fYthPQ42gaTV5Pu3ZzTmJQKyfD
	XK33KU1WP5/weyElCUkoHJW9QPDagHa3LJ8QmSQ=
X-Google-Smtp-Source: ACHHUZ4EX2u22zMI6e8UKRVUTZvFp8QMKnPqfpUIQgBLJ/iq77hFmLT3SbznD0IrDjoU6i6pKGJ06A==
X-Received: by 2002:adf:f70c:0:b0:304:a7ca:1052 with SMTP id r12-20020adff70c000000b00304a7ca1052mr3342986wrp.35.1682678505001;
        Fri, 28 Apr 2023 03:41:45 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2 7/7] domctl: Modify XEN_DOMCTL_getdomaininfo to fail if domid is not found
Date: Fri, 28 Apr 2023 11:41:24 +0100
Message-Id: <20230428104124.1044-8-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Previously it mimicked the getdomaininfo sysctl semantics: if the requested
domid did not exist, it returned information for the next existing domid
above it. This unintuitive behaviour has caused quite a few mistakes and
makes the call needlessly slow in its error path.

This patch removes the fallback search, returning -ESRCH if the requested
domain doesn't exist. Domain discovery can still be done through the sysctl
interface as that performs a linear search on the list of domains.

With this modification the xc_domain_getinfo() function is removed outright,
rather than merely deprecated, to make sure it's not mistakenly used
expecting the old behaviour. The new xc wrapper is xc_domain_getinfo_single().

All previous callers of xc_domain_getinfo() have been updated to use
xc_domain_getinfo_single() or xc_domain_getinfolist() instead. This also
means xc_dominfo_t is no longer used by anything and can be purged.

Resolves: xen-project/xen#105
Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/include/xenctrl.h        | 43 --------------------
 tools/libs/ctrl/xc_domain.c    | 72 ----------------------------------
 tools/libs/guest/xg_dom_boot.c |  2 +-
 xen/common/domctl.c            | 32 +--------------
 4 files changed, 3 insertions(+), 146 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 685df1c7ba..04d33db305 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -444,28 +444,6 @@ typedef struct xc_core_header {
  * DOMAIN MANAGEMENT FUNCTIONS
  */
 
-typedef struct xc_dominfo {
-    uint32_t      domid;
-    uint32_t      ssidref;
-    unsigned int  dying:1, crashed:1, shutdown:1,
-                  paused:1, blocked:1, running:1,
-                  hvm:1, debugged:1, xenstore:1, hap:1;
-    unsigned int  shutdown_reason; /* only meaningful if shutdown==1 */
-    unsigned long nr_pages; /* current number, not maximum */
-    unsigned long nr_outstanding_pages;
-    unsigned long nr_shared_pages;
-    unsigned long nr_paged_pages;
-    unsigned long shared_info_frame;
-    uint64_t      cpu_time;
-    unsigned long max_memkb;
-    unsigned int  nr_online_vcpus;
-    unsigned int  max_vcpu_id;
-    xen_domain_handle_t handle;
-    unsigned int  cpupool;
-    uint8_t       gpaddr_bits;
-    struct xen_arch_domainconfig arch_config;
-} xc_dominfo_t;
-
 typedef xen_domctl_getdomaininfo_t xc_domaininfo_t;
 
 static inline unsigned int dominfo_shutdown_reason(const xc_domaininfo_t *info)
@@ -720,27 +698,6 @@ int xc_domain_getinfo_single(xc_interface *xch,
                              uint32_t domid,
                              xc_domaininfo_t *info);
 
-/**
- * This function will return information about one or more domains. It is
- * designed to iterate over the list of domains. If a single domain is
- * requested, this function will return the next domain in the list - if
- * one exists. It is, therefore, important in this case to make sure the
- * domain requested was the one returned.
- *
- * @parm xch a handle to an open hypervisor interface
- * @parm first_domid the first domain to enumerate information from.  Domains
- *                   are currently enumerate in order of creation.
- * @parm max_doms the number of elements in info
- * @parm info an array of max_doms size that will contain the information for
- *            the enumerated domains.
- * @return the number of domains enumerated or -1 on error
- */
-int xc_domain_getinfo(xc_interface *xch,
-                      uint32_t first_domid,
-                      unsigned int max_doms,
-                      xc_dominfo_t *info);
-
-
 /**
  * This function will set the execution context for the specified vcpu.
  *
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index 533e3c1314..724fa6f753 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -357,84 +357,12 @@ int xc_domain_getinfo_single(xc_interface *xch,
     if ( do_domctl(xch, &domctl) < 0 )
         return -1;
 
-    if ( domctl.u.getdomaininfo.domain != domid ) {
-        errno = ESRCH;
-        return -1;
-    }
-
     if ( info )
         *info = domctl.u.getdomaininfo;
 
     return 0;
 }
 
-int xc_domain_getinfo(xc_interface *xch,
-                      uint32_t first_domid,
-                      unsigned int max_doms,
-                      xc_dominfo_t *info)
-{
-    unsigned int nr_doms;
-    uint32_t next_domid = first_domid;
-    DECLARE_DOMCTL;
-    int rc = 0;
-
-    memset(info, 0, max_doms*sizeof(xc_dominfo_t));
-
-    for ( nr_doms = 0; nr_doms < max_doms; nr_doms++ )
-    {
-        domctl.cmd = XEN_DOMCTL_getdomaininfo;
-        domctl.domain = next_domid;
-        if ( (rc = do_domctl(xch, &domctl)) < 0 )
-            break;
-        info->domid      = domctl.domain;
-
-        info->dying    = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_dying);
-        info->shutdown = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_shutdown);
-        info->paused   = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_paused);
-        info->blocked  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_blocked);
-        info->running  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_running);
-        info->hvm      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hvm_guest);
-        info->debugged = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_debugged);
-        info->xenstore = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_xs_domain);
-        info->hap      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hap);
-
-        info->shutdown_reason =
-            (domctl.u.getdomaininfo.flags>>XEN_DOMINF_shutdownshift) &
-            XEN_DOMINF_shutdownmask;
-
-        if ( info->shutdown && (info->shutdown_reason == SHUTDOWN_crash) )
-        {
-            info->shutdown = 0;
-            info->crashed  = 1;
-        }
-
-        info->ssidref  = domctl.u.getdomaininfo.ssidref;
-        info->nr_pages = domctl.u.getdomaininfo.tot_pages;
-        info->nr_outstanding_pages = domctl.u.getdomaininfo.outstanding_pages;
-        info->nr_shared_pages = domctl.u.getdomaininfo.shr_pages;
-        info->nr_paged_pages = domctl.u.getdomaininfo.paged_pages;
-        info->max_memkb = domctl.u.getdomaininfo.max_pages << (PAGE_SHIFT-10);
-        info->shared_info_frame = domctl.u.getdomaininfo.shared_info_frame;
-        info->cpu_time = domctl.u.getdomaininfo.cpu_time;
-        info->nr_online_vcpus = domctl.u.getdomaininfo.nr_online_vcpus;
-        info->max_vcpu_id = domctl.u.getdomaininfo.max_vcpu_id;
-        info->cpupool = domctl.u.getdomaininfo.cpupool;
-        info->gpaddr_bits = domctl.u.getdomaininfo.gpaddr_bits;
-        info->arch_config = domctl.u.getdomaininfo.arch_config;
-
-        memcpy(info->handle, domctl.u.getdomaininfo.handle,
-               sizeof(xen_domain_handle_t));
-
-        next_domid = (uint16_t)domctl.domain + 1;
-        info++;
-    }
-
-    if ( nr_doms == 0 )
-        return rc;
-
-    return nr_doms;
-}
-
 int xc_domain_getinfolist(xc_interface *xch,
                           uint32_t first_domain,
                           unsigned int max_domains,
diff --git a/tools/libs/guest/xg_dom_boot.c b/tools/libs/guest/xg_dom_boot.c
index 1dea534bba..dc858a1567 100644
--- a/tools/libs/guest/xg_dom_boot.c
+++ b/tools/libs/guest/xg_dom_boot.c
@@ -178,7 +178,7 @@ int xc_dom_boot_image(struct xc_dom_image *dom)
     {
         xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
                      "%s: getdomaininfo failed (errno=%d)",
-                     __FUNCTION__, rc, errno);
+                     __FUNCTION__, errno);
         return -1;
     }
     dom->shared_info_mfn = info.shared_info_frame;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index ad71ad8a4c..24a14996e6 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -314,7 +314,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         /* fall through */
     default:
         d = rcu_lock_domain_by_id(op->domain);
-        if ( !d && op->cmd != XEN_DOMCTL_getdomaininfo )
+        if ( !d )
             return -ESRCH;
     }
 
@@ -534,42 +534,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     case XEN_DOMCTL_getdomaininfo:
     {
-        domid_t dom = DOMID_INVALID;
-
-        if ( !d )
-        {
-            ret = -EINVAL;
-            if ( op->domain >= DOMID_FIRST_RESERVED )
-                break;
-
-            rcu_read_lock(&domlist_read_lock);
-
-            dom = op->domain;
-            for_each_domain ( d )
-                if ( d->domain_id >= dom )
-                    break;
-        }
-
-        ret = -ESRCH;
-        if ( d == NULL )
-            goto getdomaininfo_out;
-
         ret = xsm_getdomaininfo(XSM_HOOK, d);
         if ( ret )
-            goto getdomaininfo_out;
+            break;
 
         getdomaininfo(d, &op->u.getdomaininfo);
 
         op->domain = op->u.getdomaininfo.domain;
         copyback = 1;
-
-    getdomaininfo_out:
-        /* When d was non-NULL upon entry, no cleanup is needed. */
-        if ( dom == DOMID_INVALID )
-            break;
-
-        rcu_read_unlock(&domlist_read_lock);
-        d = NULL;
         break;
     }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:19:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:19:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527276.819725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psN4f-00073i-4b; Fri, 28 Apr 2023 12:19:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527276.819725; Fri, 28 Apr 2023 12:19:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psN4f-00073b-0u; Fri, 28 Apr 2023 12:19:41 +0000
Received: by outflank-mailman (input) for mailman id 527276;
 Fri, 28 Apr 2023 12:19:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DSG/=AT=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1psN4c-00073V-WC
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 12:19:39 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eeecb492-e5be-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 14:19:35 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id 0A6FA5C0165;
 Fri, 28 Apr 2023 08:19:33 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Fri, 28 Apr 2023 08:19:33 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 28 Apr 2023 08:19:31 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eeecb492-e5be-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1682684373; x=1682770773; bh=4T5RRApkW6b7ElBIKdGiQjp3SasNWBWNqmz
	TjiICWXg=; b=T3xgOfVCs2pthgRcV3Hw8hSW3plk0AFyYlJUMrsD8aHc7nmjbE5
	0EeU1hZzd0wfxs4oVsGDU9mCCwvgMrw/0aQbiEqaKNIBhWFh4eNnuTIjouHlAa6m
	Iq3FN2uqjwAudOkSYaDmWnJ0L0HvXRmLuz+SR88biz4jb15KbK4krQOjyr7LrYlx
	vK8VfavpEI/QxC00HmXpvGFHpae0lv/LfX+kzFXcxtvATkmzPZiROur3VcNB0mj1
	tVdd55Vk94ouz4gtGR1DtxYYdVwifWP8ZuCT4kURp+/RtwzyqSqCyKvoODhxeQ3k
	oBNxiLZ6rxJuPiQxsnF8vtV/9I157J5GLTw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1682684373; x=1682770773; bh=4T5RRApkW6b7E
	lBIKdGiQjp3SasNWBWNqmzTjiICWXg=; b=G7+4qG8DzpAANK2InTztfHRCnqwji
	aFA2Q813N5xXKBzLLqcEuPPZyHy4d/9sC2c01uJsbAd3rQvhL+uINTHjyFr8h3jH
	cvYQ+6UQ/2qTIvlNTUw4kw3IT26RDaEPgXbxLKgyWLMq12RS28Ps092cZpMPwGSv
	cLeIOQcMSpicVOeBUuM82AC5x86L0Bhl0mbkzAR4eifmlgiY8UFjJYAuDjesMqmC
	km2hwmREQV9Be0/FS8+4vjWGbX3Y5Dscqqqb0Cf1VlyqDnpwhgkYS6MgfiInRRP7
	fZ7MOIRRYBY0P1Q/8HJnGJ6k0udsyn63W8YzaJJsu575Sid5BLqpEb0/g==
X-ME-Sender: <xms:1LlLZONhivo1dCZHWsDo27M6pj6LgG1Geagm5_6jKk-EFtw7L4_OpQ>
    <xme:1LlLZM-7F2BI8pTb_X1NJV0vBzz14369-O2K0pdr0_dyn4lzFvjgQi5ZFa766Koad
    CfG9hI7rNbAgQ>
X-ME-Received: <xmr:1LlLZFRJC9li75idbUBHX5BTBsNuTA7KLYO_JtUqQebNFJhtP7jvt7yhpzNXg3nQdpvaNi6wu76vhcA_QNpij_w44ogkgUPudiE>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedukedghedvucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:1LlLZOscA0TUWiiTguOVO8S8haIsKHUFXBkyb_HMQ5-NLAUKfU-MeA>
    <xmx:1LlLZGc2NnBYEUp0O1TaoSVCQdbWfzysjOKaa2ypdl4xwr-hx7ugSg>
    <xmx:1LlLZC1Q0HX3-hM1UQazHxSFGBlrn4w51tW5K_q0SNUjMZ_Q8-Bm9A>
    <xmx:1blLZG6uzwCyC8sXFLI52F4pB-JH9HVErpK5-eatHFax4WVgzE-N3g>
Feedback-ID: i1568416f:Fastmail
Date: Fri, 28 Apr 2023 14:19:25 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 1/7] tools: Make some callers of xc_domain_getinfo()
 use xc_domain_getinfolist()
Message-ID: <ZEu5ziJNe6v80VE0@mail-itl>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-2-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="YAJPkbW/82GaqTzN"
Content-Disposition: inline
In-Reply-To: <20230428104124.1044-2-alejandro.vallejo@cloud.com>


--YAJPkbW/82GaqTzN
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 28 Apr 2023 14:19:25 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 1/7] tools: Make some callers of xc_domain_getinfo()
 use xc_domain_getinfolist()

On Fri, Apr 28, 2023 at 11:41:18AM +0100, Alejandro Vallejo wrote:
> xc_domain_getinfo() is slow and prone to races because N hypercalls are
> needed to find information about N domains. xc_domain_getinfolist() finds
> the same information in a single hypercall as long as a big enough buffer
> is provided. Plus, xc_domain_getinfo() is disappearing in a future patch,
> so migrate the callers interested in more than 1 domain to the *list()
> version.
>=20
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>

For the Python part:
Acked-by: Marek Marczykowski-G=C3=B3recki <marmarek@invisiblethingslab.com>

> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Juergen Gross <jgross@suse.com>
> ---
>  tools/include/xenctrl.h           | 12 ++++++++++++
>  tools/python/xen/lowlevel/xc/xc.c | 28 ++++++++++++++--------------
>  tools/xenmon/xenbaked.c           |  6 +++---
>  3 files changed, 29 insertions(+), 17 deletions(-)
>
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index 05967ecc92..f5bc7f58b6 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -468,6 +468,18 @@ typedef struct xc_dominfo {
>
>  typedef xen_domctl_getdomaininfo_t xc_domaininfo_t;
>
> +static inline unsigned int dominfo_shutdown_reason(const xc_domaininfo_t *info)
> +{
> +    return (info->flags >> XEN_DOMINF_shutdownshift) & XEN_DOMINF_shutdownmask;
> +}
> +
> +static inline bool dominfo_shutdown_with(xc_domaininfo_t *info, unsigned int expected_reason)
> +{
> +    /* The reason doesn't make sense unless the domain is actually shutdown */
> +    return (info->flags & XEN_DOMINF_shutdown) &&
> +           (dominfo_shutdown_reason(info) == expected_reason);
> +}
> +
>  typedef union
>  {
>  #if defined(__i386__) || defined(__x86_64__)
> diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
> index 35901c2d63..d7ce299650 100644
> --- a/tools/python/xen/lowlevel/xc/xc.c
> +++ b/tools/python/xen/lowlevel/xc/xc.c
> @@ -342,7 +342,7 @@ static PyObject *pyxc_domain_getinfo(XcObject *self,
>      uint32_t first_dom = 0;
>      int max_doms = 1024, nr_doms, i;
>      size_t j;
> -    xc_dominfo_t *info;
> +    xc_domaininfo_t *info;
>
>      static char *kwd_list[] = { "first_dom", "max_doms", NULL };
>
> @@ -350,11 +350,11 @@ static PyObject *pyxc_domain_getinfo(XcObject *self,
>                                        &first_dom, &max_doms) )
>          return NULL;
>
> -    info = calloc(max_doms, sizeof(xc_dominfo_t));
> +    info = calloc(max_doms, sizeof(*info));
>      if (info == NULL)
>          return PyErr_NoMemory();
>
> -    nr_doms = xc_domain_getinfo(self->xc_handle, first_dom, max_doms, info);
> +    nr_doms = xc_domain_getinfolist(self->xc_handle, first_dom, max_doms, info);
>
>      if (nr_doms < 0)
>      {
> @@ -368,21 +368,21 @@ static PyObject *pyxc_domain_getinfo(XcObject *self,
>          info_dict = Py_BuildValue(
>              "{s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i,s:i"
>              ",s:L,s:L,s:L,s:i,s:i,s:i}",
> -            "domid",           (int)info[i].domid,
> +            "domid",           (int)info[i].domain,
>              "online_vcpus",    info[i].nr_online_vcpus,
>              "max_vcpu_id",     info[i].max_vcpu_id,
> -            "hvm",             info[i].hvm,
> -            "dying",           info[i].dying,
> -            "crashed",         info[i].crashed,
> -            "shutdown",        info[i].shutdown,
> -            "paused",          info[i].paused,
> -            "blocked",         info[i].blocked,
> -            "running",         info[i].running,
> -            "mem_kb",          (long long)info[i].nr_pages*(XC_PAGE_SIZE/1024),
> +            "hvm",             !!(info[i].flags & XEN_DOMINF_hvm_guest),
> +            "dying",           !!(info[i].flags & XEN_DOMINF_dying),
> +            "crashed",         dominfo_shutdown_with(&info[i], SHUTDOWN_crash),
> +            "shutdown",        !!(info[i].flags & XEN_DOMINF_shutdown),
> +            "paused",          !!(info[i].flags & XEN_DOMINF_paused),
> +            "blocked",         !!(info[i].flags & XEN_DOMINF_blocked),
> +            "running",         !!(info[i].flags & XEN_DOMINF_running),
> +            "mem_kb",          (long long)info[i].tot_pages*(XC_PAGE_SIZE/1024),
>              "cpu_time",        (long long)info[i].cpu_time,
> -            "maxmem_kb",       (long long)info[i].max_memkb,
> +            "maxmem_kb",       (long long)(info[i].max_pages << (XC_PAGE_SHIFT - 10)),
>              "ssidref",         (int)info[i].ssidref,
> -            "shutdown_reason", info[i].shutdown_reason,
> +            "shutdown_reason", dominfo_shutdown_reason(&info[i]),
>              "cpupool",         (int)info[i].cpupool);
>          pyhandle = PyList_New(sizeof(xen_domain_handle_t));
>          if ( (pyhandle == NULL) || (info_dict == NULL) )
> diff --git a/tools/xenmon/xenbaked.c b/tools/xenmon/xenbaked.c
> index 4dddbd20e2..8632b10ea4 100644
> --- a/tools/xenmon/xenbaked.c
> +++ b/tools/xenmon/xenbaked.c
> @@ -775,7 +775,7 @@ static void global_init_domain(int domid, int idx)
>  static int indexof(int domid)
>  {
>      int idx;
> -    xc_dominfo_t dominfo[NDOMAINS];
> +    xc_domaininfo_t dominfo[NDOMAINS];
>      xc_interface *xc_handle;
>      int ndomains;
>
> @@ -797,7 +797,7 @@ static int indexof(int domid)
>
>      // call domaininfo hypercall to try and garbage collect unused entries
>      xc_handle = xc_interface_open(0,0,0);
> -    ndomains = xc_domain_getinfo(xc_handle, 0, NDOMAINS, dominfo);
> +    ndomains = xc_domain_getinfolist(xc_handle, 0, NDOMAINS, dominfo);
>      xc_interface_close(xc_handle);
>
>      // for each domain in our data, look for it in the system dominfo structure
> @@ -808,7 +808,7 @@ static int indexof(int domid)
>          int jdx;
>
>          for (jdx=0; jdx<ndomains; jdx++) {
> -            if (dominfo[jdx].domid == domid)
> +            if (dominfo[jdx].domain == domid)
>                  break;
>          }
>          if (jdx == ndomains)        // we didn't find domid in the dominfo struct
> --
> 2.34.1
>
>

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--YAJPkbW/82GaqTzN
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRLuc4ACgkQ24/THMrX
1ywZ9Qf+PLiM87ReV41LAFBAA6DWUKUf7OLSlOz4vi/2sX6ALcsNUH43j5qnJPvK
Ag4RQwt1hahx+lsBU2R+6USST3AErNlxeOS0G45Utw2QIQ3NazH7JT++8AbPCXC8
vv0qZ7cs4RQnbWAngiaR2MI5TjkCOGxICSMAGIySwGnntqwSsplQzw6E21oZYl9c
7jEChJ/TJWP/4AI5f0f1sIOkQo2pgo18bakzDuQwYcfKXc0q7D6vXVXErUm50gFt
oJI0l+51TwSGFlNUhaJirxu7UA4z13xPQFKCifuFnx9oT1V/b3Os4RRQ0nZz0YW9
FUYte/Kru5nVryuCcaMIcybnjI08zQ==
=9Uct
-----END PGP SIGNATURE-----

--YAJPkbW/82GaqTzN--


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:21:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:21:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527280.819735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psN6O-0008R4-FG; Fri, 28 Apr 2023 12:21:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527280.819735; Fri, 28 Apr 2023 12:21:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psN6O-0008Qx-CB; Fri, 28 Apr 2023 12:21:28 +0000
Received: by outflank-mailman (input) for mailman id 527280;
 Fri, 28 Apr 2023 12:21:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psN6M-0008Qi-Ct; Fri, 28 Apr 2023 12:21:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psN6M-0004DA-8K; Fri, 28 Apr 2023 12:21:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psN6L-0004xa-Ua; Fri, 28 Apr 2023 12:21:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psN6L-0000yt-UA; Fri, 28 Apr 2023 12:21:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DSG7FFoq2INCoQh9buaKvYHAPxW5ZY9fiuNDPPwHhpo=; b=2zK80gB1Z8CK2eKPK76aOKGmJ/
	T/Zyqg7uqQDhSxuKRtj4OuxC/xJ/Ov7TXGLPgdMLBc0k5xd5wEfSoxVtyyIWpVV0tjljPWNNE9aXj
	+YvXGJEH5b8d9FobbNpNgKZUH4cK0lUIqZGV6O+etIwCsAh0Nq6xeMlhsx63Y+9ZpmWk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180467-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180467: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ecbcff0f4935395f66ecc9e9ac76b804ecdec2e8
X-Osstest-Versions-That:
    ovmf=95ef765839a8d0de52095e3dec3584fc347b94b2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 12:21:25 +0000

flight 180467 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180467/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ecbcff0f4935395f66ecc9e9ac76b804ecdec2e8
baseline version:
 ovmf                 95ef765839a8d0de52095e3dec3584fc347b94b2

Last test of basis   180463  2023-04-28 06:10:48 Z    0 days
Testing same since   180467  2023-04-28 09:42:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nhi Pham <nhi@os.amperecomputing.com>
  Nickle Wang <nicklew@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   95ef765839..ecbcff0f49  ecbcff0f4935395f66ecc9e9ac76b804ecdec2e8 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:22:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:22:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527286.819744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psN6z-0000Wd-OS; Fri, 28 Apr 2023 12:22:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527286.819744; Fri, 28 Apr 2023 12:22:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psN6z-0000WW-LT; Fri, 28 Apr 2023 12:22:05 +0000
Received: by outflank-mailman (input) for mailman id 527286;
 Fri, 28 Apr 2023 12:22:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CzbF=AT=citrix.com=prvs=4752babc1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1psN6x-0000HN-4r
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 12:22:03 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 45cc2a06-e5bf-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 14:22:00 +0200 (CEST)
Received: from mail-co1nam11lp2171.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 28 Apr 2023 08:21:57 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB4984.namprd03.prod.outlook.com (2603:10b6:a03:1ea::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22; Fri, 28 Apr
 2023 12:21:56 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.024; Fri, 28 Apr 2023
 12:21:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45cc2a06-e5bf-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682684520;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=11oaI3nPUD3u3AnIH+q8E2vEmJZI4B/jcKW+/7AaCfQ=;
  b=fAP5kf9ISdkXhkNrBEVdjBYalwZOokdKD9LRx8xQEhwWbv9cFeIJNkqf
   MbmxYoqQPbQDQ6sVzH9DJSdzmG8RcdNlHToF0zCDgdapGIULeZqTtsqaF
   VuIYOM6ENkt2CTk5hEymAGV8SXIuuBVqR1/pgO+l2B9iB/rPR0Z5nAfxy
   8=;
X-IronPort-RemoteIP: 104.47.56.171
X-IronPort-MID: 107617826
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,234,1677560400"; 
   d="scan'208";a="107617826"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VZARcrrNsSpOaAi/TWOFQ2dJ68aQcnDrioh1yFlS0Nv3xMO0YhuPWIL5LsdYjrdI6uw7SYDTAb+1RF8GH8x6kOT6EwMdxBbvfNYLdo2tv7E33aaTCsYKmRle13lf6mMLAbbX9EthX5c5wfuhRvPEyi3jonQfy0Vcrr3H+zbvr6F/dLN3TkARLMPSfiJ5hZApV0eRel1r199w+5MrDKT6t4rCkXNPiGEvzCnH3gHSn2d5s+Zrpc3KjkgqP2BYAIT3mfDwE9scQtiWibSpA4jsL+oHCw/4xkpxOodQ1usZwWBesZs/OcCsNyQKnRvMjKbmFjspVqT96sGXhquU7GtLvA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=11oaI3nPUD3u3AnIH+q8E2vEmJZI4B/jcKW+/7AaCfQ=;
 b=RdqWZwoFN1oLitZyi8a15YteMjPTN151P/UQU8SNlXwlxod8bm2B1mz3cpE8JMbwOcxWVTSN7y2xUvTfIPoRykE5aUzOK8jfylKnvwtRRn9Q/YcK2GyyUvLGUJmcJIdHbmi3YpVakzFWoUYdy1JF3C7dSlTNeCEh5Mrbt0NUGqa0yepGtxlRuEMIvDn361MuEcZDAOsuxbjYn0cVAggKMHCOWRjrYzaoTljH1dZw1ixtkI/jzhIVPxLocjzlM9dOyQYBqFq4srMp8H0c5ztK1fBBiJgBB2ZpFAmxeyA77uUBR4ss5mn3iXRAMbiAyig40mjVpfG86U0kK+4yAvF3gQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=11oaI3nPUD3u3AnIH+q8E2vEmJZI4B/jcKW+/7AaCfQ=;
 b=t6YdlonKbRbg6fBeo30fYvActPxDjVYgRQOWvz89Q8AISMQ54M1jYTYwudwjT+oupV6cSZ0YKiMTqhrkp9y9moHKm5nQ6liAcE4lwmw/nReZT54E5ipcXS+kg2BHyuxopQ4z/5QdGFED7wEV/37/MfiMGYyyfrCK4GrfVuZquzU=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <09696f50-cbfa-794d-2057-13de0a55252e@citrix.com>
Date: Fri, 28 Apr 2023 13:21:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 1/7] tools: Make some callers of xc_domain_getinfo()
 use xc_domain_getinfolist()
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-2-alejandro.vallejo@cloud.com>
In-Reply-To: <20230428104124.1044-2-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0046.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ac::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BY5PR03MB4984:EE_
X-MS-Office365-Filtering-Correlation-Id: 4b791def-240a-45d0-891d-08db47e327c3
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b791def-240a-45d0-891d-08db47e327c3
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 12:21:55.7091
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XtoF0ofm7RfxhWrEsG5RYMAirGPaIJqwEB92b4eUJFpUi8nf3EjEriTh+Eeq3mNs9Q/vaCQs6sIfnF5hljcQau7Wcre1s2kl8qSAmg3c4Nc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4984

On 28/04/2023 11:41 am, Alejandro Vallejo wrote:
> xc_domain_getinfo() is slow and prone to races because N hypercalls are
> needed to find information about N domains. xc_domain_getinfolist() finds
> the same information in a single hypercall as long as a big enough buffer
> is provided. Plus, xc_domain_getinfo() is disappearing in a future patch,
> so migrate the callers interested in more than 1 domain to the *list()
> version.
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Juergen Gross <jgross@suse.com>

FYI, expected practice is to put your v2 notes in each patch, here in
the commit message.  They're more accessible here than in the cover letter.

However, this patch is now fine, so Reviewed-by: Andrew Cooper
<andrew.cooper3@citrix.com>



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:23:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:23:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527291.819755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psN8U-0001BZ-8i; Fri, 28 Apr 2023 12:23:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527291.819755; Fri, 28 Apr 2023 12:23:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psN8U-0001BS-5p; Fri, 28 Apr 2023 12:23:38 +0000
Received: by outflank-mailman (input) for mailman id 527291;
 Fri, 28 Apr 2023 12:23:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CzbF=AT=citrix.com=prvs=4752babc1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1psN8S-0001BI-QJ
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 12:23:36 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7dee6d28-e5bf-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 14:23:34 +0200 (CEST)
Received: from mail-dm6nam11lp2177.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 28 Apr 2023 08:23:32 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB4984.namprd03.prod.outlook.com (2603:10b6:a03:1ea::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22; Fri, 28 Apr
 2023 12:23:30 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.024; Fri, 28 Apr 2023
 12:23:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7dee6d28-e5bf-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682684614;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=c4g8OMBQyK2XZl71iDoSI61NhSymN4enTpC4Vy8hndo=;
  b=Xkmmb8B/ewBK6JdzvGRwJrQlXY6YcxHLZmRy3vuyegpyMFXS/kBJWHJi
   h9qnYdMtgm5y1B/D2XsVI8lFnd439/xiAOhqPZVb5V82n/O5yEsTPLR9u
   8fS6VHvdyejTw/UJ4aFJDDPwZbwjm3AFufUkVRTUC74/hDSfUtQJ/8nvZ
   M=;
X-IronPort-RemoteIP: 104.47.57.177
X-IronPort-MID: 107101978
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,234,1677560400"; 
   d="scan'208";a="107101978"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nH+qUEsE+oI+W1qNFdJosl5KV/CLYfMyLnP+eIACX9+G3aFfkBFFdkKIFHsestvXBdlBV5+nSVeu//QwDizj2Cy7RmVYoxnipWH9r677k1lzkBrbuaD+dEKdCsYe1ZJ6n/v+ANocqG3t+kW4s8+heRRJ6i+azZxu2ET6oRE0Nazpt5HC8S++Blm5Vqxxiu5zixJt/+2KpmWyFmtmYyvwH4iByEd2VT5GkLZE5JBeAsMoZA4g7lG2ZAmfFiYdNRBG2lkIgPnj3JsQsQVxuaPZpSwmkJ2o8xmAd1yatofWR5IL9R/DBMzB9xHtjoNw1ZtggNtUeMxCWPnTHdUST87jlg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jFWB7fPYNURP88i6cGvb3OG8iyV8YeJBR9miWlCarbU=;
 b=OVxUmvvtLVmCmU/4A2p16cPe5RnvK29gh0KGfstItOQBX3IaRfK2vHLswarefXbkVxgLVbV6pop+2S+XN6tCz1uFu2CWqS33xtwv0ItIMlTFFWoZvtEsQAtXhh+FmuMLmoiCRCxzJij9/zKUclQHz4vNthapGnjknMbQx0Z2Wnsi0vwwJh4xxLfl9lzs5842GbiUR1m59K/4rvHrAk4nJ1rfRusq88IkpphrMixPSVeY2pv7UIHxAmfzQitoxc0z6oj0XxDEPBMEFS4FVXJhqLv295aKQQs+wVR2uZTU0MSUOOhgEBSNjvf2CI6Z8B8qITVC3OdVqDsyQRmiRAzf0w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jFWB7fPYNURP88i6cGvb3OG8iyV8YeJBR9miWlCarbU=;
 b=fRVleyHr3CfIT+qIPBl7/6hGHmdnGkSWWFi8NzkeeC7G4v5iBH7cnJR7o3nvie4A0yg68YQJF17fRs9hhGUfllP5dXFNpka2b49hp97YmV0tNWW5vQ9z79+73up5HBnbn4WW8HLT8zV0dDxmBnOjeJ8UErMruHx/ExAZ807YlT4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <b67921bf-032a-8baf-f2aa-0776ca5b085f@citrix.com>
Date: Fri, 28 Apr 2023 13:23:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 2/7] tools: Create xc_domain_getinfo_single()
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-3-alejandro.vallejo@cloud.com>
In-Reply-To: <20230428104124.1044-3-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0047.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ac::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BY5PR03MB4984:EE_
X-MS-Office365-Filtering-Correlation-Id: c6f414bb-34ef-4762-5cb8-08db47e36005
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c6f414bb-34ef-4762-5cb8-08db47e36005
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 12:23:29.9786
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KTMqmRZ7b7gW5cbsku1nomaUIbCkgWfOtBsQSRDvKmGIeCYQ1mKK8vYtLw2noUSYa/5zj4I+89cJ1PRdygGJXApUMgg/QJzrE8/oWYEg8uE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4984

On 28/04/2023 11:41 am, Alejandro Vallejo wrote:
> diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
> index e939d07157..6b11775d4c 100644
> --- a/tools/libs/ctrl/xc_domain.c
> +++ b/tools/libs/ctrl/xc_domain.c
> @@ -345,6 +345,29 @@ int xc_dom_vuart_init(xc_interface *xch,
>      return rc;
>  }
>  
> +int xc_domain_getinfo_single(xc_interface *xch,
> +                             uint32_t domid,
> +                             xc_domaininfo_t *info)
> +{
> +    struct xen_domctl domctl = {
> +        .cmd = XEN_DOMCTL_getdomaininfo,
> +        .domain = domid,
> +    };
> +
> +    if ( do_domctl(xch, &domctl) < 0 )
> +        return -1;
> +
> +    if ( domctl.u.getdomaininfo.domain != domid ) {

One tiny style issue.  This brace should be on the next line.

I'll fix on commit.
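For illustration, a minimal compilable sketch of what the check looks like with the brace moved onto its own line, as Xen's coding style wants. The structures and do_domctl() below are simplified stand-ins for the real libxc definitions, not the actual API:

```c
#include <stdint.h>

/* Cut-down stand-ins for the real libxc types and the do_domctl()
 * hypercall wrapper (hypothetical, for illustration only). */
struct xen_domctl {
    int cmd;
    uint32_t domain;
    struct {
        struct { uint32_t domain; } getdomaininfo;
    } u;
};

/* Fake hypercall: echo the requested domid back, as the real
 * XEN_DOMCTL_getdomaininfo does when the domain exists. */
static int do_domctl(struct xen_domctl *domctl)
{
    domctl->u.getdomaininfo.domain = domctl->domain;
    return 0;
}

/* The shape of the hunk after the style fix: the opening brace of
 * the if body sits on its own line. */
static int getinfo_single_sketch(uint32_t domid)
{
    struct xen_domctl domctl = {
        .domain = domid,
    };

    if ( do_domctl(&domctl) < 0 )
        return -1;

    if ( domctl.u.getdomaininfo.domain != domid )
    {
        /* Hypervisor answered for a different domain. */
        return -1;
    }

    return 0;
}
```

The domid comparison guards against the hypercall legitimately succeeding but returning information for a different domain than the one asked about.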

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:30:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:30:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527295.819765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNEj-0002Vj-1D; Fri, 28 Apr 2023 12:30:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527295.819765; Fri, 28 Apr 2023 12:30:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNEi-0002VA-RY; Fri, 28 Apr 2023 12:30:04 +0000
Received: by outflank-mailman (input) for mailman id 527295;
 Fri, 28 Apr 2023 12:30:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZgR+=AT=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1psNEh-0002H3-Qh
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 12:30:03 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6609aa96-e5c0-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 14:30:02 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-50a14564d17so16740261a12.0
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 05:30:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6609aa96-e5c0-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682685002; x=1685277002;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=AuPCMNYSGWT14kmdLajotbLRxh0Bhc5GAJSMt+VdK8A=;
        b=VWBbw826lKjtBd3HVPlKPaIgM78uNeDqHUuXKuJFbf/H8aicwb1Nk1XoFpEOphWkgo
         eKTm/YEsTuAlAdkqkulB5Kxjha8rXza2SVTQXN8H+NkKb93dqgIBNxh1unx4qQhcpIjM
         Y2ix2p/F3+aSxUlzzbeuvf04tG53ufrPr+NSBTXF2V9yLsJ/I91/gLjFXyaspYcwCyC1
         AFP1jiIhwArUzN9Ql6Zs8kNGsvXXOWatfoQTUF8HXaA17l1qffoRzAjp45l33tKjiNuQ
         nnSWLB87ZEc3wg04L5Q10rNrtVaHarks1n38AAJJNepJYIGTRE0qeu4ySd4v38/vhl0l
         LvYA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682685002; x=1685277002;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=AuPCMNYSGWT14kmdLajotbLRxh0Bhc5GAJSMt+VdK8A=;
        b=QAoVT5QLiPrTT/kNYoGU+8Lm+V20B19XgpjD741bhAk6YCjj3YjuxhyLqO/Lo7Xqhw
         z68lCl/a8tixUpQzSg3ZX8xrApcNzYa/K0aLN4tr1W+yE2bC/YO63Uvc8swRgyoo5t7S
         1vnovAGvfP0L987ojkb+L58TnO3OBXbj+pdjtTPu5TiAQs7LuCH/rZmxwuvKK9pm7qqH
         bKrV5IWDu6kaLpOObNAk1vq9+lmmW7EjP+L1X1G6PDHr7eHqM6f03MLkC13/0Yx5Jvpl
         ASv1lmNZ6xqnIC59ratTRXyIVH8NoLxzdMqi7AX62cFKoS9YDfj6Ym6CIsaKTZKr/x/x
         N6rg==
X-Gm-Message-State: AC+VfDxDeROaRR9IHWPIQAbfBIrCqujJqlKLHPI8D//jdTRlpSkBdzq2
	C05n/pfunqbRw8LQueK0oklY8XmaTdHn5knZ+GU=
X-Google-Smtp-Source: ACHHUZ67O0z6utftdqDDSWnFcRX3tuAn3yczRfnZt2GBSL/M2YvdJBEGUwxa2dm+GbSfgvlw1O2yaKhheC4sS7c66r4=
X-Received: by 2002:a05:6402:40cc:b0:506:bbf8:5152 with SMTP id
 z12-20020a05640240cc00b00506bbf85152mr9206160edb.9.1682685002102; Fri, 28 Apr
 2023 05:30:02 -0700 (PDT)
MIME-Version: 1.0
References: <20230425174733.795961-1-jennifer.herbert@citrix.com>
 <20230425174733.795961-2-jennifer.herbert@citrix.com> <CAKf6xpsbaZMMFCW3Uw0XZ2gm185iwwtT2H+RcAReFrze9UWdAw@mail.gmail.com>
 <573ebf81-5565-b127-61c1-fc08854c9064@citrix.com>
In-Reply-To: <573ebf81-5565-b127-61c1-fc08854c9064@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 28 Apr 2023 08:29:49 -0400
Message-ID: <CAKf6xpvmyfa6TYk1UWzpqsBbPxqytyuiyjhCoUhVE7jabj0smA@mail.gmail.com>
Subject: Re: [PATCH v3 1/2] acpi: Make TPM version configurable.
To: Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Apr 27, 2023 at 8:49 AM Jennifer Herbert
<jennifer.herbert@citrix.com> wrote:
>
>
> On 26/04/2023 21:27, Jason Andryuk wrote:
> > Hi, Jennifer,
> >
> > On Tue, Apr 25, 2023 at 1:48 PM Jennifer Herbert
> > <jennifer.herbert@citrix.com> wrote:
> >> This patch makes the TPM version, for which the ACPI library probes, configurable.
> >> If acpi_config.tpm_version is set to 1, it indicates that 1.2 (TCPA) should be probed.
> >> I have also added to hvmloader an option to allow setting this new config, which can
> >> be triggered by setting the platform/tpm_version xenstore key.
> >>
> >> Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
> > ...
> >> --- a/tools/libacpi/build.c
> >> +++ b/tools/libacpi/build.c
> >> @@ -409,38 +409,47 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
> > ...
> >> +        switch ( config->tpm_version )
> >>           {
> >> -            tcpa->lasa = ctxt->mem_ops.v2p(ctxt, lasa);
> >> -            tcpa->laml = ACPI_2_0_TCPA_LAML_SIZE;
> >> -            memset(lasa, 0, tcpa->laml);
> >> -            set_checksum(tcpa,
> >> -                         offsetof(struct acpi_header, checksum),
> >> -                         tcpa->header.length);
> >> +        case 0: /* Assume legacy code wanted tpm 1.2 */
> > This shouldn't be reached, since tpm_version == 0 won't have
> > ACPI_HAS_TPM set.  Still, do you want to make it a break or drop the
> > case to avoid falling through to the TPM1.2 code?
>
> There were concerns in v2 about backward compatibility in this area of
> the code.  The exact nature of the concern was vague, so I made this
> very conservative.  This was intended to mitigate against this code
> being run with the structure having been set up by something other
> than the code in tools/firmware/hvmloader/util.c.  Any such alternate
> code would set the tpm flag but not know about the version field, and
> so leave it at zero.  In that case, dropping into 1.2 probing would be
> the correct behaviour.
>
> As you say, in the use cases I'm familiar with, this would not get
> reached, so it shouldn't affect anything.
> Having a break or dropping the case would result in it silently
> ignoring the 'invalid' tpm configuration, so outside of the
> compatibility case I'd personally prefer some sort of assert, although
> I'm not sure how best to do that in this code.

Ok, sounds good to leave it as you wrote it here.  The split of
hvmloader and libacpi requires the backwards compatible approach
inside libacpi.
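The fallthrough behaviour being agreed on can be sketched as a small standalone version of the switch. Only the 1.2 path visible in the quoted hunk is modelled here; the enum and function names are hypothetical stand-ins, and how other version values are handled lies outside this hunk:

```c
enum tpm_probe { TPM_PROBE_NONE, TPM_PROBE_V1_2 };

/* Version 0 deliberately falls through to the TPM 1.2 (TCPA) path:
 * an older caller that sets the ACPI_HAS_TPM flag but predates the
 * tpm_version field will leave it zeroed, and probing for 1.2 is the
 * backwards-compatible behaviour in that case. */
static enum tpm_probe tpm_probe_for_version(unsigned int tpm_version)
{
    switch ( tpm_version )
    {
    case 0: /* Assume a legacy caller wanted TPM 1.2. */
        /* Fallthrough */
    case 1:
        return TPM_PROBE_V1_2;
    default:
        return TPM_PROBE_NONE;
    }
}
```

In the hvmloader-built case the version field is always set explicitly, so `case 0` is only ever reached by an out-of-tree user of libacpi, which is exactly the caller the compatibility fallthrough exists for.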

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:33:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:33:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527298.819775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNHv-0003KD-DF; Fri, 28 Apr 2023 12:33:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527298.819775; Fri, 28 Apr 2023 12:33:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNHv-0003K6-AF; Fri, 28 Apr 2023 12:33:23 +0000
Received: by outflank-mailman (input) for mailman id 527298;
 Fri, 28 Apr 2023 12:33:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psNHt-0003Jw-Ld; Fri, 28 Apr 2023 12:33:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psNHt-0004bg-BX; Fri, 28 Apr 2023 12:33:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psNHs-0005Fu-So; Fri, 28 Apr 2023 12:33:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psNHs-0005sP-SJ; Fri, 28 Apr 2023 12:33:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5tzdwUr2A+jwloJE9P2/qrMS9rCN/+o1zm4zmMye/uk=; b=FOJf7ECIMcNNiz66pr48r1Oemd
	RLYWvYllNQEuSc6c5k3y+KfrYDCB+Xyd5XMPRk7gq0XLvTafB5iEaltk1UmTvAPq8mau4z6hrZmkP
	C7W5487GPfQrJ+vu8Q6PmqgUt02jspCNvVXdFb8p5+0xNNBqFw73OfbObaMK5vuo4/nQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180449-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180449: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1eb95e1baef852d0971a1dd62a3293cd68f1ec35
X-Osstest-Versions-That:
    qemuu=c3f9aa8e488db330197c9217e38555f6772e8f07
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 12:33:20 +0000

flight 180449 qemu-mainline real [real]
flight 180468 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180449/
http://logs.test-lab.xenproject.org/osstest/logs/180468/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 180468-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180431
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180431
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180431
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180431
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180431
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180431
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180431
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180431
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1eb95e1baef852d0971a1dd62a3293cd68f1ec35
baseline version:
 qemuu                c3f9aa8e488db330197c9217e38555f6772e8f07

Last test of basis   180431  2023-04-26 11:38:43 Z    2 days
Testing same since   180449  2023-04-27 15:03:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juan Quintela <quintela@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Peter Xu <peterx@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   c3f9aa8e48..1eb95e1bae  1eb95e1baef852d0971a1dd62a3293cd68f1ec35 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:33:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:33:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527304.819785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNIT-0003pR-Rg; Fri, 28 Apr 2023 12:33:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527304.819785; Fri, 28 Apr 2023 12:33:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNIT-0003pK-Ob; Fri, 28 Apr 2023 12:33:57 +0000
Received: by outflank-mailman (input) for mailman id 527304;
 Fri, 28 Apr 2023 12:33:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CzbF=AT=citrix.com=prvs=4752babc1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1psNIS-0003km-J2
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 12:33:56 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id efe9e873-e5c0-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 14:33:55 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 28 Apr 2023 08:33:53 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA2PR03MB5819.namprd03.prod.outlook.com (2603:10b6:806:113::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22; Fri, 28 Apr
 2023 12:33:51 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.024; Fri, 28 Apr 2023
 12:33:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efe9e873-e5c0-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682685235;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=cRdkIQCG4dT0BZ5JT04N1f1IEKk2pM21yKZ+abY5WEY=;
  b=PCdkrvs6mrWu1DRx5DvVD9bQzZgOdHieGdUWLsuNsZxVFcmXtzxxZsMl
   iiPBNsrJkss2InFun9pHxRYULnZlQVKy3g46VXhdGfj7DaZtcHkck46PR
   JSzj3Wf5ySrrOLqKGQj4mUOQi54icZGTABwL6vctR9yzSFbuJPwPFljJF
   o=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 107103674
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: =?us-ascii?q?9a23=3AmDd3zGt1kJ+RlwGZHZR5lbmf6IsATSDywHjPCXW?=
 =?us-ascii?q?UU1R5WbzFew6oqJt7xp8=3D?=
X-Talos-MUID: 9a23:0OTFFgkwBcXc6QueDI/9dnohBed5wpulJHwXsrgcmsihN2tzARa02WE=
X-IronPort-AV: E=Sophos;i="5.99,234,1677560400"; 
   d="scan'208";a="107103674"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mGFDx/lWdvjNsatouccgaRvVTZGlF2BDE9f8Vsaax396V7A852Fike+BI4IKrF44X/l2/zMxgpLu3XmPAol+LMSpYXS6OtP0EvxOD5eQUeAwbyPJb2G3z9uBtp1DQTlhmM9jRiUZwmgBwywUadXz6Oj2/kUM4/evbDXRv7iIjj5YQQdPzEdBuUX0n+Pl3bm7WWngbLrrv5hNdYwdPD3h8Ft3nD5ZOK7Uoi6cx1QAyg7DyfFSDyoNMlZUEyPMzy9kYwGdhw8qGcmDtB5At24aoIszSb5oipPl6lv5MMxXkv70TkT4555ojZdxRHHNrksK/h+XvE5a+gZUggn6IQumPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MOB0Ijva4oi5TiSj74GAVfoKZMQqT6eHBNEr/tiIN/U=;
 b=KMheG4jeNWtePIl44sKM4GdocCzWvDjL/m8dM7OwpUH/c0HHkWYmZip0BO3GX2VVs85PYR1aqcZiv3csFTO6G9FdHeuY+0LEXTSIgpfiFrbMrSqFpKQRsRcTlYo1h/dPg+vtDt0Di7ayEPP90jcLuC89sMazqmuzHVASOngM/FtabwnPN9tlfnhN+TNhamcXf6YFVW4w/4TzRusi3fkTnvjllhxsxTp1rhC5ZXjrTOZodMiypp0ZrYHklA2WXTh2lz6hTHZbhqryzg1coqLFBxx+sBt3iOelp5eJJ8wSY2S5veiiMe3HXEJDxDqJE3I8FDWkQb8WoL8Qx9vMuGyB7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MOB0Ijva4oi5TiSj74GAVfoKZMQqT6eHBNEr/tiIN/U=;
 b=Ba++kDtoiUxsaYaRpFUKtaq+AqPT/9OU9WgZ/a0YU1Aocik0oljPokYVZUhxg+rJPZ5OuGcZ3quBG/tuI4jCoZuo+q3Ey5LdZeLuvVIpwfAY3knB4Z6PGlrWh2hDSdyaKmekm4xAB7TJDyWg/k4ceyd5iCea82IbCoYRGQ9qpXw=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <91438b54-df82-f790-7154-c76feea90f18@citrix.com>
Date: Fri, 28 Apr 2023 13:33:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 3/7] tools: Refactor console/io.c to avoid using
 xc_domain_getinfo()
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-4-alejandro.vallejo@cloud.com>
In-Reply-To: <20230428104124.1044-4-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO6P265CA0013.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:339::19) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SA2PR03MB5819:EE_
X-MS-Office365-Filtering-Correlation-Id: 2aae1e69-7769-4de4-d46b-08db47e4d217
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2aae1e69-7769-4de4-d46b-08db47e4d217
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 12:33:50.9920
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Ufajj0jPNftsz7dhR0LpJoHRHYXFWHK//M2RMOL64GQJP0pE/voSl78w603tRlMNgKZkeAa7H/KbK8V/OnyDyaRnNm2cGQ6bbn1/2sOXghI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5819

On 28/04/2023 11:41 am, Alejandro Vallejo wrote:
> It has 2 avoidable occurences
>
> * Check whether a domain is valid, which can be done faster with
>     xc_domain_getinfo_single()
> * Domain discovery, which can be done much faster with the sysctl
>     interface through xc_domain_getinfolist().

It occurs to me that this isn't really right here.

It's true in principle, but switching to requesting all domains at once
is a fix for a race condition.

I'd suggest "which can be done in a race free way through ..." and avoid
saying faster.  It's likely not faster now with the 4M bounce, but we
can fix that in due course.


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:37:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:37:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527308.819794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNLU-0004Xb-93; Fri, 28 Apr 2023 12:37:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527308.819794; Fri, 28 Apr 2023 12:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNLU-0004XU-6C; Fri, 28 Apr 2023 12:37:04 +0000
Received: by outflank-mailman (input) for mailman id 527308;
 Fri, 28 Apr 2023 12:37:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y+ir=AT=arm.com=rahul.singh@srs-se1.protection.inumbo.net>)
 id 1psNLS-0004XO-RB
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 12:37:02 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 5f05277b-e5c1-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 14:37:00 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 23FBFC14;
 Fri, 28 Apr 2023 05:37:44 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.62])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B971B3F7D8;
 Fri, 28 Apr 2023 05:36:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f05277b-e5c1-11ed-b224-6b7b168915f2
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Samuel Holland <samuel@sholland.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Marc Zyngier <maz@kernel.org>,
	Jane Malalane <jane.malalane@citrix.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: [PATCH] xen/evtchn: Introduce new IOCTL to bind static evtchn
Date: Fri, 28 Apr 2023 13:36:48 +0100
Message-Id: <48d30a439e37f6917b9a667289792c2b3f548d6d.1682685294.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Xen 4.17 supports the creation of static evtchns. To allow user space
applications to bind static evtchns, introduce a new ioctl,
"IOCTL_EVTCHN_BIND_STATIC". The existing bind IOCTLs do more than just
bind (they also create or allocate the event channel), which is why a
new IOCTL is needed that only binds an already-created static event
channel.

Also, static evtchns must remain available for use during the entire
lifetime of the guest. Currently, when the application exits,
__unbind_from_irq() ends up being called from the release() fop, and
the static evtchns get closed as a result. To avoid this, add a new
"is_static" flag to "struct irq_info", set it when the event channel
is bound, and skip closing the event channel in __unbind_from_irq()
when the flag is set.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 drivers/xen/events/events_base.c |  7 +++++--
 drivers/xen/evtchn.c             | 22 +++++++++++++++++-----
 include/uapi/xen/evtchn.h        |  9 +++++++++
 include/xen/events.h             |  2 +-
 4 files changed, 32 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index c7715f8bd452..31f2d3634ad5 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -112,6 +112,7 @@ struct irq_info {
 	unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
 	u64 eoi_time;           /* Time in jiffies when to EOI. */
 	raw_spinlock_t lock;
+	u8 is_static;           /* Is event channel static */
 
 	union {
 		unsigned short virq;
@@ -982,7 +983,8 @@ static void __unbind_from_irq(unsigned int irq)
 		unsigned int cpu = cpu_from_irq(irq);
 		struct xenbus_device *dev;
 
-		xen_evtchn_close(evtchn);
+		if (!info->is_static)
+			xen_evtchn_close(evtchn);
 
 		switch (type_from_irq(irq)) {
 		case IRQT_VIRQ:
@@ -1574,7 +1576,7 @@ int xen_set_irq_priority(unsigned irq, unsigned priority)
 }
 EXPORT_SYMBOL_GPL(xen_set_irq_priority);
 
-int evtchn_make_refcounted(evtchn_port_t evtchn)
+int evtchn_make_refcounted(evtchn_port_t evtchn, bool is_static)
 {
 	int irq = get_evtchn_to_irq(evtchn);
 	struct irq_info *info;
@@ -1590,6 +1592,7 @@ int evtchn_make_refcounted(evtchn_port_t evtchn)
 	WARN_ON(info->refcnt != -1);
 
 	info->refcnt = 1;
+	info->is_static = is_static;
 
 	return 0;
 }
diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index c99415a70051..47681d4c696b 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -366,7 +366,8 @@ static int evtchn_resize_ring(struct per_user_data *u)
 	return 0;
 }
 
-static int evtchn_bind_to_user(struct per_user_data *u, evtchn_port_t port)
+static int evtchn_bind_to_user(struct per_user_data *u, evtchn_port_t port,
+			bool is_static)
 {
 	struct user_evtchn *evtchn;
 	struct evtchn_close close;
@@ -402,7 +403,7 @@ static int evtchn_bind_to_user(struct per_user_data *u, evtchn_port_t port)
 	if (rc < 0)
 		goto err;
 
-	rc = evtchn_make_refcounted(port);
+	rc = evtchn_make_refcounted(port, is_static);
 	return rc;
 
 err:
@@ -456,7 +457,7 @@ static long evtchn_ioctl(struct file *file,
 		if (rc != 0)
 			break;
 
-		rc = evtchn_bind_to_user(u, bind_virq.port);
+		rc = evtchn_bind_to_user(u, bind_virq.port, false);
 		if (rc == 0)
 			rc = bind_virq.port;
 		break;
@@ -482,7 +483,7 @@ static long evtchn_ioctl(struct file *file,
 		if (rc != 0)
 			break;
 
-		rc = evtchn_bind_to_user(u, bind_interdomain.local_port);
+		rc = evtchn_bind_to_user(u, bind_interdomain.local_port, false);
 		if (rc == 0)
 			rc = bind_interdomain.local_port;
 		break;
@@ -507,7 +508,7 @@ static long evtchn_ioctl(struct file *file,
 		if (rc != 0)
 			break;
 
-		rc = evtchn_bind_to_user(u, alloc_unbound.port);
+		rc = evtchn_bind_to_user(u, alloc_unbound.port, false);
 		if (rc == 0)
 			rc = alloc_unbound.port;
 		break;
@@ -536,6 +537,17 @@ static long evtchn_ioctl(struct file *file,
 		break;
 	}
 
+	case IOCTL_EVTCHN_BIND_STATIC: {
+		struct ioctl_evtchn_bind bind;
+
+		rc = -EFAULT;
+		if (copy_from_user(&bind, uarg, sizeof(bind)))
+			break;
+
+		rc = evtchn_bind_to_user(u, bind.port, true);
+		break;
+	}
+
 	case IOCTL_EVTCHN_NOTIFY: {
 		struct ioctl_evtchn_notify notify;
 		struct user_evtchn *evtchn;
diff --git a/include/uapi/xen/evtchn.h b/include/uapi/xen/evtchn.h
index 7fbf732f168f..aef2b75f3413 100644
--- a/include/uapi/xen/evtchn.h
+++ b/include/uapi/xen/evtchn.h
@@ -101,4 +101,13 @@ struct ioctl_evtchn_restrict_domid {
 	domid_t domid;
 };
 
+/*
+ * Bind statically allocated @port.
+ */
+#define IOCTL_EVTCHN_BIND_STATIC			\
+	_IOC(_IOC_NONE, 'E', 7, sizeof(struct ioctl_evtchn_bind))
+struct ioctl_evtchn_bind {
+	unsigned int port;
+};
+
 #endif /* __LINUX_PUBLIC_EVTCHN_H__ */
diff --git a/include/xen/events.h b/include/xen/events.h
index 44c2855c76d1..962f0bbc7ce1 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -69,7 +69,7 @@ int xen_set_irq_priority(unsigned irq, unsigned priority);
 /*
  * Allow extra references to event channels exposed to userspace by evtchn
  */
-int evtchn_make_refcounted(evtchn_port_t evtchn);
+int evtchn_make_refcounted(evtchn_port_t evtchn, bool is_static);
 int evtchn_get(evtchn_port_t evtchn);
 void evtchn_put(evtchn_port_t evtchn);
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:41:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:41:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527312.819805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNPY-000602-QJ; Fri, 28 Apr 2023 12:41:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527312.819805; Fri, 28 Apr 2023 12:41:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNPY-0005zv-MT; Fri, 28 Apr 2023 12:41:16 +0000
Received: by outflank-mailman (input) for mailman id 527312;
 Fri, 28 Apr 2023 12:41:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CzbF=AT=citrix.com=prvs=4752babc1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1psNPY-0005zp-2V
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 12:41:16 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f5b75082-e5c1-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 14:41:14 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 28 Apr 2023 08:40:57 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA3PR03MB7234.namprd03.prod.outlook.com (2603:10b6:806:2f6::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Fri, 28 Apr
 2023 12:40:55 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.024; Fri, 28 Apr 2023
 12:40:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5b75082-e5c1-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682685674;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=+/orVMwf9ZoiiQHEJa/4dgnnbEpOlYdCAmwHJb6sJ/A=;
  b=gCjMtA8xRfCwFcM4/yu66ulu87IXml8dCSOcGuuKUrxu5JTxDbSVxUBh
   ETMV54p2QbLorefID2jDi4qKxTzJO0wSX5wls8iGOK7nxAqWNT0KRtByd
   FhNTpf4Z91wNgt8g13djtK8UakxN0PawInZImOMnsEySoy3orIIqn3TTr
   g=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 106546004
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:/XSxg2437jn/+XzRxdsszlQ0WfoPb0Xk1HrJL3aFMX9JdvqYVgrF
X-Talos-MUID: 9a23:4APMGQsAsG522qV8782nrz9+Cp83+aKULUEyrMQaouKlPzVrJGLI
X-IronPort-AV: E=Sophos;i="5.99,234,1677560400"; 
   d="scan'208";a="106546004"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fZYMGiBk0dgItz90vl1YTpbPQOJnRM5gauSyZNyrWU7OupgGftnpOvBVKY0YTSs7RylrPstSqoUeHnGOGL2jMsjRRLQWKOMXtTceTOrDN0hXU/VT5JrNznx8FSwPSl3aL+YqR09pNngr2J4E2NI5UCWheQIalTnLsfoeOfhC8ZzDG1IgFx4GVnwrsk5PS98JpIdhGdaQmkpSSYVgs/bcDKUzvzA9tnMIEtc88XMXWIJlA809kzn3Pj2FdDOcs4l1Tg9qKr7H6L+woEKlCueZOOzjPlKj36Gjz7k7dY4lnWJlVYYj1gNQEwP0d0aG7q3g43OXoSIGqY0bvRghhdD4dw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DpKa27AMU53aPXOAAvM5i4G0m4kr25JUg9B4mnhTNiI=;
 b=T3Q1lrCK+BXBevCz97ZWtmpPryP14uSNVBHP32OGdcTQ5F9limyTDImIGTSUF+WgOfYG5KO6XhagYkIMG+kxyPVcxM8Fr2JlceekH8YfAixZyb29jIEyUSpOf3qDL/B1aXWxVP0XcYyYjLbTW1uKLb8Sl57W6DN2tAMHd3CPXYDh8DcC5jjE4IR0EnPX5aJINr+O/0O0m3ZMxaKcOpAlqKhDu/s4pAXbIqFNq1PXwnfzJJpTfQhPmKI9nguhfSXeSSPJv0syHQSKGR30H7vkpo/lru2fcmJYei0lC0deBh14CJGI8JE41JGslFjDQ9pOd1aeG8CafRipfY4paIwcjg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DpKa27AMU53aPXOAAvM5i4G0m4kr25JUg9B4mnhTNiI=;
 b=ZAgbotknmbb1Qr+h0fcTVqk7m64+Ch1xjQo0H+AdwGLEIZMaqPOokOoyjaOmNAQbjkHjjDUBnvo49GJgKqwbRNiNM2sCCh7jrM4s35Fpp4DyaQcuGZWK9c3r0gDeE0WhrZHHoS8EfkEaeCGw2MQKhAF7QSh0s7JsV2LldxZa920=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <8b7b7fe0-f814-ba53-eca9-a9341665cc7f@citrix.com>
Date: Fri, 28 Apr 2023 13:40:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 4/7] tools: Make init-xenstore-domain use
 xc_domain_getinfolist()
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.org>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-5-alejandro.vallejo@cloud.com>
In-Reply-To: <20230428104124.1044-5-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0441.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:e::21) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SA3PR03MB7234:EE_
X-MS-Office365-Filtering-Correlation-Id: c8f8ee26-d882-4ee3-5d07-08db47e5cefe
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1btQ8YaRWJXExHHuRnckIg17q5M5EXr45rJ3/kuFdVevVomKzzEZ8M7zDt2/1ncCYrylQ2hG/GWFQEz3yHePVj2qKx0exuQuuY3ARd1e9FNG/fNs7P8BoW5aGHcmpntbJVqMF6ftqf9BKBih+A/e4Ds7w+jnJlsd84YuD2Avng3KyevQqwkOU/g6aQLxLfUXAxl9I+3FGSPyCs7MFY7HJiMRnc+IaS0EZx0Ndtm0FsXsfB2kxYkfXF1rLh/xQytMTD2WS56hCkULRpVYM79eQIC4z4wvS8oez6VoZ6NVrm2dIAhdDeNpI2N+5YNuF66TZU1zBGBvPZ42sAr+mPuxJbfNqlvf52roTAEzYW7VdFu/VrQ06i97H36cynG8Xdyvk/skG2HmJj7grHHwlguKmXrDCxmyPP0GZ+luwRwHJn+zenqAzRo1sVKG2kl843+/7TVReX8vZsFc6SLlRTlH9oiGXk+B3jgAaLx/aflAN+Zq+RGcpC6Xsxdwk5bbQ2Ru/1ghPU8LFUNaMktPFyvX875w7R9UOGe/nYHpGWUUNd83WsqOFef+yXd/o3dCkI3J/PamELoqiLobn9bHTmd5WAQkROeFYph6AKh9/pwBTu0CllhwU8wSUVlnCcGfXaH6R8uWWxMl4QlCerx+MnzP+g==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(396003)(376002)(136003)(39860400002)(346002)(451199021)(31686004)(86362001)(186003)(6506007)(26005)(6512007)(38100700002)(5660300002)(53546011)(2616005)(31696002)(6486002)(6666004)(478600001)(36756003)(54906003)(110136005)(41300700001)(4744005)(8676002)(8936002)(2906002)(82960400001)(316002)(66556008)(66476007)(66946007)(4326008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZzRJQzBLZEdnSXowQjlVSjA2blhLOEtVVWQ4NnZKY0tEbnZTS1ROMjN3WVYz?=
 =?utf-8?B?VTNPVHFkd0hHUlg5TmRNUlluOEZualFBM0NDZzB2K0FiVS85WHdIZVB6QzJv?=
 =?utf-8?B?TStWUWFxRVRtK1lqYzJHMGJOSUU0Vm93d2pVY2trTzA2NktUVjRCYk9uais0?=
 =?utf-8?B?TkZ6Z2VYWEYyYkVpL3VHWnVURXhLWUs0V0lNMGIzTFI2Z1BHK3RIcUs1YW1j?=
 =?utf-8?B?WExVQTEzc1Q1VlEzcm40ZHBHUDIvb2JNcDZJcHlLSmFsZ0FVMFlMYWs2M3U1?=
 =?utf-8?B?UzdGcGcxcXVkQWRCMDJ1aCtXYi9IbVAxdlRVNENPSWVKdWU5Q0k3UktMSHNM?=
 =?utf-8?B?RWJkQkNpd09mTWs5RHVVb241OUVyMjVZQ3VjYzFIdnFaV2hseHlLb0VERk1Z?=
 =?utf-8?B?TVpkRWN6M0pKZmpzL3ROMTQ2T0tjbUFGSlBvL0FncUt1ZGZDRnRwU0w1VE4y?=
 =?utf-8?B?bFd0Q3BOUXdsZ2FrbDF0a0FWa2ZVeldYNUFXUkNkUUVCM1JqWjhQQkpWWUFk?=
 =?utf-8?B?c2NKcmpCdlVPRW9jdnZiM2V0dUdvTEZSR1ZLMnF6cHh4NGh2M0FOUlVzZUZG?=
 =?utf-8?B?SEFGNjFQdWFQQnZHZktTUW9kdW9OQUtnZ1ZqWktEWGs5aTNQbVUwcktXODh2?=
 =?utf-8?B?UVRqYm5YNUhQQTRuQnJJemFSRUJrS1A3L2t3MDBDeGpwVGRGVGV1THpGQW5k?=
 =?utf-8?B?ZWN3M3VBd1ZlY1doSTNzNnozK3p3UW5qWGYxTGE2UVdBMnJWNWNqMFBlZWd1?=
 =?utf-8?B?ZHVnMUVSTHY0Y0k4ZklqYmhEVVZZZkJlRnFzNlIzaTdpeGxlaXFFODkxeG93?=
 =?utf-8?B?TWF6QVNLbGRWNHZ2c3VaWVp5bHBOOHpsOGRLZXkyTXRYQnNmQXAyNVZjcW1Y?=
 =?utf-8?B?MWlIS090QmhHZWx2VEZWakpCUmhQejFSdVY5MURNVFpYa3VxeS9BdjlTQSsw?=
 =?utf-8?B?Mm5FTENlT0hpSHNTQ1BRUWZOelR2emw3R2dTSEZhZ1k1bVpVdUFaOWtGU1pF?=
 =?utf-8?B?cTFxcm0rOGM5K3NEaElRK0pKYkVtOE5MVGtVaU4rQTVvWVFLa3JmUTRLVnd2?=
 =?utf-8?B?YXFzMGdUellvdk1uWDVzR3NpckZrWU1TdmVydjUvMEpHK3hMWkRmREpOVitt?=
 =?utf-8?B?bXhEMFNEZW1aempJSXFFVFk4UVY0UzIzaGZBRDBEVElTTXJmNm9seW5CQnBH?=
 =?utf-8?B?QVhkMzRBdFV5RHJLNVVOVmxubi9zWDlVWGJGeTlybWtrYlZkVncrK05OZWY4?=
 =?utf-8?B?NG5ZT0hQZEplMWRqR3V3eTN1enIxYjFqWDFlUFp3ZFQ3OGtlbVlKaUFRYjVW?=
 =?utf-8?B?WWd4cW50aGY2MmNudmIzWlMwT3BpQ2gzOHd5Ymc2TEdKN0VubFk5QTdWd1Jw?=
 =?utf-8?B?SDB4V20vZVdxRU5uMG1uUnFJTnVhSWNIREtDb2JLYm96MGNaQzl6Q1YzRGlQ?=
 =?utf-8?B?bjlXYWFseVVFTFRWOWxWTjFQNDJ6ZUhXQVNyUEJDQ3NMUVhUU2l1N3NFNjFs?=
 =?utf-8?B?bmhBSFAyOW40Nm9NUWVFZXQybTg0bkdpdVhzais4MWJjNE1tK0daL0grbGVY?=
 =?utf-8?B?Rm80YUw5NHF2TzhpdEoxN2R5azAzN0VodERSTjExU1VlTFJSVUFyWjR5YStz?=
 =?utf-8?B?VG5FR0hkd2VncEEvV1owbG10QW03ZVZaZHNUT21GVGVBMmVXdjNtdWNCK1NY?=
 =?utf-8?B?WHA5MzdINUxhYStqcm04eDhmT2pWdWpab2c2T2k3RXBIdnpvT1duaTIwZTVG?=
 =?utf-8?B?ZXFzT2ZWeHhZd3NUeHV5cFdrbGQ2eiswaVBOZFBXcWNuMXZJL1JyTW9ta1J5?=
 =?utf-8?B?U2dVTTVQNnNGNUs2SXRSTkZvUkZURXRLRUN5a3BBUGMrdzNiZDdDSWVsNnRM?=
 =?utf-8?B?dkk2ZVg4R3NPSHV2TDhmRkpjM0ZuSkpDK2M4V21rQWk0cW5GRzJpUk5RcnRq?=
 =?utf-8?B?T0Fwd3d2d2xieEhqcUVFWTJPbHIzYjlhWW9Pdkc4MkJNV0w1OUdVUmxFcEVa?=
 =?utf-8?B?UDY5R3YvMEpFV3NILzgyY3lJbWVLVG1GWE9JRG5VQldzV3JicmV0bkRTZFV2?=
 =?utf-8?B?UUFOelpWeG5Nd2toVGZoZUM3bkhFRVhLUlJSWW02c3hxZ2FFUDdzYWtJRkph?=
 =?utf-8?B?OFFLSHdUbk8vWGN5OW0wdDB5SzBXYzhFakJmUnEvMGErWVN0bHpvSlJTUXFE?=
 =?utf-8?B?L0E9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	ww0Bk9ozffMvWwGlz4o8ryBPLyUWfqXtRxqcXJtMfke6RbJ18xAysUH4wG9yBHdoeo7qJ5PRJfp2ibjdCDfrRGbHInVp/baJ9r6t4n9NMNH2/M9mSddYNkwx6fc0ZmozkZUsR8XzduZ+LqcZx+gapw4jTEbH7huQkFdJYoB3/ZTf/qR/mcnqxcW79Vls/U/1S/hA6sGXPy0bvfE8YNl3CT8KCCxn40Wqcpfd9hGShHR6VB3fu4gcDfwraitkmsFMWCUhGUmxpPnpXOYS3tLifR6at7pUZC8lwOzoKhr/gyZQ/r5OuSu48DgTovyQYhnS6u9B5QAENt2o+otYpF9dkdJswnV+KFW5dQq3r0DDn/VS8S//m/KE5/piw7fAdFnzAX0UnNSQuNdi9scMe6tJCjptje+8xMNNGo/yhycZQyRKtBPN9B89y2rqklcrQZpRAY3TJGGF8KEJJFemg0400diM0XFI1cRMlEfX+re2Fv84ups0ziKQkCotSmViWud9075JmrMKeVTjWCwL7NTIVQB4ln3SMNt7GQIz1B6pl4+9U/7p6Z/TlAK4LvZx0fMZzmIOCZ9CC4lZtp42HvM1HWBz7UftpCnaPVfE7WaXffYgMdnlN89BLPdut2u/1IrDZX4VYQyyeHTOGlytoYngDVZtEDfuR7YmN+HVhY9KXrG6hCMPrg6oEb0O/uVKatFuqAD5WnZrcR0ouuxuMraD0AarQX+iiRlUY9k3N9zU1G4aZNF0cqrrrIZWDBi5EjNTYKUZ3KCJo24pzCD8QjA2XDtGghU8ooiaICvmml/Z2+pCzoerztLyjQE99ZnOoi9d2cK0Xmj9HUMRGJEE/Zxb8KWcKdRVuy0iQ5hvWAyZ0c8=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c8f8ee26-d882-4ee3-5d07-08db47e5cefe
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 12:40:55.1766
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +u/9msdq+jOctVyu8HoBnTefhl6pYu7yEj2rC0LpRjhdGSY/AgKseS+iI2a19wM8m/FhtAsMqaW7CpxEkX6rX98kYj1wkUJNCnaF/S3e+jM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR03MB7234

On 28/04/2023 11:41 am, Alejandro Vallejo wrote:
> diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
> index 0950ba7dc5..e210a2677e 100644
> --- a/tools/helpers/init-xenstore-domain.c
> +++ b/tools/helpers/init-xenstore-domain.c
> @@ -322,16 +323,19 @@ err:
>  
>  static int check_domain(xc_interface *xch)
>  {
> -    xc_dominfo_t info;
> +    xc_domaininfo_t info[8];

I'd recommend having a comment here, saying something like /* Commonly
dom0 is the only domain, but buffer a little for efficiency. */

Because this is also the justification for why we don't need to ask for
32k domains at once to find XEN_DOMINF_xs_domain in a race-free way.

Can be fixed on commit if you're happy with the adjustment.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:43:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:43:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527317.819814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNRo-0006cJ-99; Fri, 28 Apr 2023 12:43:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527317.819814; Fri, 28 Apr 2023 12:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNRo-0006cC-6R; Fri, 28 Apr 2023 12:43:36 +0000
Received: by outflank-mailman (input) for mailman id 527317;
 Fri, 28 Apr 2023 12:43:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CzbF=AT=citrix.com=prvs=4752babc1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1psNRn-0006c4-EF
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 12:43:35 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4869d867-e5c2-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 14:43:34 +0200 (CEST)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 28 Apr 2023 08:43:30 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA3PR03MB7234.namprd03.prod.outlook.com (2603:10b6:806:2f6::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.33; Fri, 28 Apr
 2023 12:43:28 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.024; Fri, 28 Apr 2023
 12:43:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4869d867-e5c2-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682685814;
  h=message-id:date:subject:from:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=5TmE0ib5nYT3dzY1XFmJSniHrWs41bPMrCeg4Qzl6eo=;
  b=f6VhkBaY8XDN7gHbdl0edq96NMVhAzZfRabJCGbbW94RU1B7HNlalVFZ
   fbjZrCalFrrd5TEoHR5ZtZZpi7GkEuYEnfKp31tCN090K5Jw/tHVGDzas
   smhVXyRhWRpqr991pBdiAwKxjdGsGi7RbQPq3PmJOK2kMuoC/jyEtOs7Y
   w=;
X-IronPort-RemoteIP: 104.47.58.169
X-IronPort-MID: 109660540
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:ecO916DMQaZg2RVW/x7iw5YqxClBgxIJ4kV8jS/XYbTApGhz1zcHx
 mcaX22DO62IYmvwfNEnYdnk8kpU75/dxt82QQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G9C7gRiDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw4e10JH4V8
 +4iEW5TUjXarOC34JGiY7w57igjBJGD0II3nFhFlGicJ9B2BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTI9exuvTm7IA9ZidABNPL8fNCQSNoTtUGfv
 m/cpEzyAw0ANczZwj2Amp6prraXzH2lAN9OTNVU8NZ1i1nLmFRUJSERcmSiguvmiwmZd81Qf
 hl8Fi0G6PJaGFaQZtD5Uh+xpnKeuVgCUt5UHu89wAqJzbfYpQ2eAwAsXjNHLdArqsIybTgrz
 UOS2cPkAyR1t7+YQm7b8a2bxRutPQAFIGlEYjULJTbp+PHmqYA3yxjJHtBqFffsisWvQG+hh
 TeXsCI5mrMfy9YR0Lm29kzGhDTqoYXVSgky5UPcWWfNAh5FWbNJrreAsTDzhcus5q7AJrVdl
 BDoQ/Sj0d0=
IronPort-HdrOrdr: A9a23:FKXPnKssI11ZsHIT3uKLFg+J7skDstV00zEX/kB9WHVpm6yj+v
 xG/c5rsCMc7Qx6ZJhOo7+90cW7L080lqQFg7X5X43DYOCOggLBQL2KhbGI/9SKIVycygcy78
 Zdm6gVMqyLMbB55/yKnTVRxbwbsaW6GKPDv5ag8590JzsaD52Jd21Ce36m+ksdfnggObMJUK
 Cyy+BgvDSadXEefq2AdwI4t7iqnaysqHr+CyR2fiIa1A==
X-Talos-CUID: 9a23:7m2GxWBzJIa4HkT6EzJs92sLQcY3SESDkkv1emS6CEZKUZTAHA==
X-Talos-MUID: 9a23:f8H9gwttvbJKn2NBKM2nmzpmbJlQpImSD2cyiLYU5MjDEgBvNGLI
X-IronPort-AV: E=Sophos;i="5.99,234,1677560400"; 
   d="scan'208";a="109660540"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gq2WgbflQ3wloA0pXqSWechOBg/mYwrIgSKmrLCo6EfKgH8q6oTYPVht2DqTvx1UDFU9WfDKtCgcHzw8BFXu293uGePUwsCUH70I+9vCmgoj3URKZFnDiI622nDNZnmUzU8h/E3VOGQmL5dKCW7at/CdG1l+vkKywBbAH2M7ZgVgIBUUOKi1Y3ykKn5WLx1PQbDVRaZ0CncCKxCRfqHY6boqJs6AILLhnAR0yRaIbd/Ykn2YJGulhyd+iFHK/I+1i+sACE6rCqasSJd2HifeCMMXoObUcgmnsivq44xN4NcAmRKKvjHvBAQP/5ydhvQUXQP0UuxIzOQx4qATb2p9Aw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=p87CTcF+HpANUKE9O88aZaLB9i6E73I4ciXgdDOh8O4=;
 b=QQgsb5IkjNByUBXTc23ohjqqQdQ9qUShTIYLo7YG9VtpJZ7NRMdDJeGrq0tyWF6kxYspPRHuekvTSmNIwbfTZ9/g3q9vbKaSpE/4Frvp51XLdF5leRSJ7M96Rez/wwU+hl+/k/NEGfNrrewAhAalhagW+07lY43k0a6hTziQHtVHOVEd+/U6fIs/ce8Cugb1rQUt7v4K3mZMjBjDNmeSyFG+rpfHSVUqQ+gmQa58KtRI2VBj5ul+AoivPR1QsM3+we8ARA/xkeML5ZqVBO6C9vCTzqZ1Utm4ghrXxnDp+dWQuFqgAZ+eqW0VGdBdcLEORvdqMvLsngLCs1eMoBdpFw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p87CTcF+HpANUKE9O88aZaLB9i6E73I4ciXgdDOh8O4=;
 b=JwPhTnceeraZ1G6vTdZkew5dIb4wF7wakNj5jNhEWGUNlnko/N604+JQZtKyHn+WK3AqdwJijCuyKUp1RLPDSru5k/1tQ2aCKuZcsCOCL6hPz1Gcg5A7SrzaLGfYziHUMZGQjbGarbsKdeB8aCJzWT/TYqjBSR1jMj8QVgo8Nj8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <2a55b80d-e3f3-ad26-57a2-9c153c6e7334@citrix.com>
Date: Fri, 28 Apr 2023 13:43:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2 3/7] tools: Refactor console/io.c to avoid using
 xc_domain_getinfo()
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-4-alejandro.vallejo@cloud.com>
 <91438b54-df82-f790-7154-c76feea90f18@citrix.com>
In-Reply-To: <91438b54-df82-f790-7154-c76feea90f18@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0513.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13b::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SA3PR03MB7234:EE_
X-MS-Office365-Filtering-Correlation-Id: 49321b15-57b9-4b3f-2946-08db47e62a76
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TByvI7hszIkhiTpwxG5TGLbY/Pq+MqhT25J1GbMkj0JdJSK75AtlCQCsX3OeozzWrcxFQkGIkybWWd+0LG0kBgQLXDaTHmi+w+ZVbKajT75/iNiFjXnDr/fzSGvlSL4ABdq/bLV+2j9dklw4d94pOJhsKj/c/7m/AOjStvn1eJWVFl+NomKSMyvgHoALe3Tv73OFe4fltLkrq+3m0ImtMNf5vkspAYX/vqtJ5SupbM/H+Xs8c0jKrSegrI4WxJ9ftrqukumvIfvG8f2tSDdac1vbt1u09mXjBsBWJrVG4EsT2k6AxSWEW1wUWTD5p/gS0PSbvibwcQltkA/TfazBm47zINiZEW//+w9Wyp0ptzeNGqKjVqJ6vIkGmth2b9XnGtgbKEFRmvla8Zy9iKxZMDjOBp2XaVozgkLX+9p0ORDgy1oSQO0DdCxpMCKinMRY6l1x+2QMavsYMtx11uiupnJUqvj4xNJBRbq6GBWWjA4P0kWBxCrrjfI5C5eeXcmnWgOh8aLsbYA3e8MPf9u8pIr4/Bnzrb8eeQpKjkliZyU37UaFGBsO2kFyvt79aKpvfp30ykGYN/bVWf3vj4lgeoot7wjhASJ30wxlhMcLlQyFvkYVJtRg/hj7bKISnBhszaPz18n5wJBPsrzoiCU1Bg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(396003)(376002)(136003)(39860400002)(346002)(451199021)(31686004)(86362001)(186003)(6506007)(26005)(6512007)(38100700002)(5660300002)(53546011)(2616005)(31696002)(6486002)(107886003)(6666004)(83380400001)(478600001)(36756003)(54906003)(110136005)(41300700001)(4744005)(8676002)(8936002)(2906002)(82960400001)(316002)(66556008)(66476007)(66946007)(4326008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?R1R6VkFpRDZGRXlaUnhkdE5hWGUrN0N6TURqSmZuNnp2R09yV0JxMUdHM2FP?=
 =?utf-8?B?Q0M1WXZjRGx3ckJOaGNhcmlpcXZueWtEampSd0Joc2xkcUhOdGNoL0orbnUy?=
 =?utf-8?B?SjBDNUl6VHBCU2dkZEpGZ09zdjE4RGhsTU43azVVRlA2OTBaQ3hDL0xoaFhY?=
 =?utf-8?B?N1JvYjI2VlBSTDRDaTErTytwblYvYjcvaElkTEp4T0krbW1MTUhnSmp4SHk5?=
 =?utf-8?B?Z3hsVE5aaXpuWnM0TnpFVUdSbjVpMGZETWsvUUVpR0ZvZ3FmbGhmNWlEVWdv?=
 =?utf-8?B?VWtYcUR0ZmpPR3dZbnJkUFR6MTI4b3IvMER2ZnlhblVUTEYySWszRDZzeE5u?=
 =?utf-8?B?QkdNOGYrN0JHd2NzK1FFcFU4VnFjZU5Ua1hVZ2dqRThKKzlSUm8xLytRcmhK?=
 =?utf-8?B?Q0x2RiswWGRGTlY0M09BN1NQUlVzZVVYY2drMWM2a1UxOHJxQVJuVGJUUk9O?=
 =?utf-8?B?VlRtYTg2ZXM4bkJxZWVOcXFVQzd6dFM1VEN5L0VTcnRJUVhEenJMS2tLQnh0?=
 =?utf-8?B?aUNGbldwVDBBUml1S0J4OGVpbXJBZGp5Wm9vTjl2ZXlOWEdIUVFjTC8vVmhB?=
 =?utf-8?B?d2NPeWVqdlk1eTBEMkR1b09jU2JGaElTNzgwWjE0QnJGSEpqQ2VHNGFmVzFV?=
 =?utf-8?B?eEg5Wjg0Y052eXplN2k3amJDajVOODIzS2NxVzRZYVRubmV4cTRZeGhxQk5a?=
 =?utf-8?B?aEpGWlBvMDN0aXdGUHByaDdvTmhXMTNUZUVpMGNTRlUrbHdVT1AzdklTdDk5?=
 =?utf-8?B?dFRVeGtad3BVMWtuaDFEQWZGVFFibHB2UkE3dzY0OXNzWVBrSDYwaDhUYjZt?=
 =?utf-8?B?cGxCNDFydkxJcmkzeTd5Yk9KbHJ3OXVmWTlzUEpHd1NBYTNYVHdoU0ExNjRR?=
 =?utf-8?B?UXBUdTNpMGl5NjhDL0hHNTk5VGlJOGYrQVdBNUx4Y3QrSTZKbmFNdThqSkVr?=
 =?utf-8?B?cHhnS2t6Qi9OVUlFVlNwcmhLWFhTcENTQnAxc3p0ckJxQmVGN1JsNmh2b3Y0?=
 =?utf-8?B?RzlqQnAyeHdqckIyWmp0UjBXdFlBZ2lnblhUNTZFNGljRjBNem5rTlgzSGJS?=
 =?utf-8?B?VGN6RnlORnhVampIOFZIaXBiZUsxTDFhMUZFaWt5a2hicGhUeHJFVTZiWkl6?=
 =?utf-8?B?dUF1NEE0b1I2ZmNXb0VPN2VXSjhjdlYxUDViRS9Pb25XN3FHc3dFQkJ6QzNP?=
 =?utf-8?B?clpsV1Z2YllaaVpmUXBlZC9KZ0F1ZDNtS2gxZENrenl5Z3NPRUtuR080NDRG?=
 =?utf-8?B?MUtlVWk5aEZKU1lua0ZhdnpVMS9oR09BOU5ya3dPODU3RmNvUnJnbFI4SFR0?=
 =?utf-8?B?d05mZTFpdG1tQ1FkQmUwcGJPQ3RrOHNMVVM5YTBMcEJjODRvNDdHT0wxZlhW?=
 =?utf-8?B?eE5LTDl6RzFvZjZwL2hyVDVHR0xUQVI4R3NsQ2RtcUdLQWJsVXhxY1Z1M3N1?=
 =?utf-8?B?WTZoOFdPa3RPL21MeHYrdE9VOW1SRXd4NCt0bTdLNDFOeElRUFJlU1NTUkk3?=
 =?utf-8?B?enJpTkZWUFQ3T2cwaTlKZGRjbVBJQVlIdzc4TWl1UXIzYzhNL3RtUVc2ZitO?=
 =?utf-8?B?MFI1TGVQbTlaWDh3UDdoQlBVaFlVRFhseVFYYldMdGNrSmtoa1Z1OFBFblNw?=
 =?utf-8?B?aXJxR1h6eW9pbkN2a3pIRWFiNVVnN0pmK1ZmSUZNbjYyN1hHczZqQVRSTk1G?=
 =?utf-8?B?eEJHaVZRN1k1VUEwWnhlYUJrOHVlSHhETVVJM0ZSWEFUci9HWTB6NTV3dm5D?=
 =?utf-8?B?dXpRdXErU2krQnIrT2VmS2FPK1VwQWp2dk0rdVY2dTdsdVB5dXlXd3BDVmRU?=
 =?utf-8?B?TGhmSEQ0ZU0wNnFxbGUvZVlYLzFyYVl0WmJlWkM2elJwaVlCdDI4NUZtRCs1?=
 =?utf-8?B?WWtzTEZxcUhlK1BBWGtLVmNiZXBIOEVMMHlFNzZ3RDg1VEh2VkVadWZkZjk4?=
 =?utf-8?B?aE1HMUYvYmE3TmhTajNJNFpHN3o5YnpoWWJYM1A1TUNrOEIwY1JOYnhxMDl0?=
 =?utf-8?B?NmprNWpRSGEwS04ydDNBVUJhQzNWTEk1YVlKTTBkS3RIVWxvTEVDWEZ3dlpF?=
 =?utf-8?B?RWMrRDA5RFl6cC9yNkxhSDQxdFFTQzI3Zkl2a0JWazdDMm13S1E3M0Z0RHVv?=
 =?utf-8?B?dGdkajBpUERSWGpZMG5oMlo0QkdMdW1HMzM4aW1ycVdEOFM3TlVIRElVUU5o?=
 =?utf-8?B?RHc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	tc0auVU61KOyzzoGCP6ZObbF5SJ+5iCrb0PfZsinwrlKU+C7kSayc0RJdxMQUiphCNrhgJ7UdVHQQAEzLg7UctsM2LT+D7qUWBceW123n6OGwZTH3k9Nn2xew+j2uw5I2IUxWItamV4n291XP3P2t5AZeqzMSIWMuUubaOuNvffcavfG/sGrtC9FNqBgqbHV0iHg4G+rukuuZYL1D8xX9hI7cVj9hGfgeB4jiDztQl3r3Mpkq2+2oMWUzEPrgh+ibTOBPKwgLm+7/ndqiXPsRB7+isZoenw0Wu6MjDdReir84NA0huhrv82uWfihnItJfgDbPBicXA61OeBGvazooAcqVqc/RW5er6dni/+3ZqSEAkOh66qa4U0mraLTJLp+F115pWeWL4l7fZP6cMSvfvH3sal2t1cWg0Ramw17qGVgGH4qlA8x7RycQ5ThkgjWw3bjipWz1qAXgGJb0EK5HqwFJd3VPs+Yg5E0exrvh6y0rQFPM5z17aZ7DwFZdUOFL9yYZAmq8HF/uU0j98JvmT04DJOP/u+E+yoYKz0ajNjTyriD+5PfG7gZjjcuVUy7aRbQfCxpAksz0VlVO7ywgDCCglDqHKU4bIijk/q8Jup++vY+Zz2SBS3NY8axL9c8R90nk9pYzvCoAo9Igm75HblTRy+G195Js9elY2uqRTIxSx8kqJ60vniwDJvPk0JQQ25mRfXrXlgFYtAwswYKhTr5wbyRXeixML2x8pZitf+4lzCADUGJqJ/5CoxzPqKC58UJ65f/8Xf5KZgkTUxbUeIPe7Bp0NP1X3SgFfLxrK0bL22UYPE0XF6uqreBdxCixp+sUFEnvhFnNBWtYQrU4A==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 49321b15-57b9-4b3f-2946-08db47e62a76
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 12:43:28.6064
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YHH8MtrEsaMm+38jT1fjJsmD2dXiOhrQzgep1xnfLI5sClvbh00Qf6vJVwofrLSVJyMP9oRJySeeqfFRzcxTuYrtaFZCIOKVBWUi2xyOSdc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR03MB7234

On 28/04/2023 1:33 pm, Andrew Cooper wrote:
> On 28/04/2023 11:41 am, Alejandro Vallejo wrote:
>> It has 2 avoidable occurrences
>>
>> * Check whether a domain is valid, which can be done faster with
>>     xc_domain_getinfo_single()
>> * Domain discovery, which can be done much faster with the sysctl
>>     interface through xc_domain_getinfolist().
> It occurs to me that this isn't really right here.
>
> It's true in principle, but switching to requesting all domains at once
> is a fix for a race condition.
>
> I'd suggest "which can be done in a race free way through ..." and avoid
> saying faster.  It's likely not faster now with the 4M bounce, but we
> can fix that in due course.

Oh, there's also one tabs/spaces indentation hiccup.  That can be fixed
on commit too.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:45:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:45:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527320.819825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNTj-0007Cv-LL; Fri, 28 Apr 2023 12:45:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527320.819825; Fri, 28 Apr 2023 12:45:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNTj-0007Co-IU; Fri, 28 Apr 2023 12:45:35 +0000
Received: by outflank-mailman (input) for mailman id 527320;
 Fri, 28 Apr 2023 12:45:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=19My=AT=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1psNTi-0007Cg-EI
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 12:45:34 +0000
Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com
 [2a00:1450:4864:20::42c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 90af33cd-e5c2-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 14:45:33 +0200 (CEST)
Received: by mail-wr1-x42c.google.com with SMTP id
 ffacd0b85a97d-2f625d52275so9469358f8f.3
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 05:45:33 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 v11-20020a5d43cb000000b002ff77b033b1sm21021014wrr.33.2023.04.28.05.45.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 05:45:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90af33cd-e5c2-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1682685933; x=1685277933;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=CPPLZGllxogODxD/PVNpAMECSxpd7F8e1+DC4KSwfXU=;
        b=ftc7pjM1VxG3C6fcoI0q0pI7waJGLslNGT9WeiqvT172xDh9kEwFxlLivj+4ekH4zu
         8BpfJNeD0hDu18ZQaob4qRKVTPG3/sD354IxONy1je2haKsH6QI1uAPPHDrNAnQacLtg
         pHY5I5E0dEqKOkyhql1eQO2GK9cXYdfx5UAP0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682685933; x=1685277933;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=CPPLZGllxogODxD/PVNpAMECSxpd7F8e1+DC4KSwfXU=;
        b=IfY2iqrLYgQfByQhiEbd88ki4RaW6KjDcZ8/YcRhwPbjV48vBoQiA3E2sHTjBvBR8Z
         Eo4KRKpHmPHtitxZrVT7eamH9ZRfjJV7DTAhSmsDfqY/EpBVno1H7x1rHMP0p/NvCi4M
         qje4lLdMI8HMjSDeufpfcf1/qH/Hbeg21mgg42AzzCSHuWSRjUIVjwhRUvenaP7PS7ww
         TLbEfeE2l8wAih9cmbNFIvTYhkCKmXbCqrIGqlovSqIOjK20uO6VLhiqJXaRCzUrkEG+
         0M8OrfPyYDRsUMC+RuWrThbx0hj/2CHJbJtm2LBWuotLErXEvZi9P0oDjErEcz+8wPwO
         Cdlg==
X-Gm-Message-State: AC+VfDwc1Kfv/G8bRpHNe4Cfxm39AvKZJBLx+j2vCqQY8oHW/Eig/LFP
	+fJW2J2si8/s3i5x/qPhfgO0oQ==
X-Google-Smtp-Source: ACHHUZ622D/ODo8CwhixTjH626Gk3gSCLFN2Mi4mmjXDtK0cISb1oWJyIqFQ/RXg/i9NoSGO10/Eqg==
X-Received: by 2002:adf:e6d1:0:b0:303:ba27:4366 with SMTP id y17-20020adfe6d1000000b00303ba274366mr3401582wrm.49.1682685932859;
        Fri, 28 Apr 2023 05:45:32 -0700 (PDT)
Message-ID: <644bbfec.5d0a0220.4f622.d428@mx.google.com>
X-Google-Original-Message-ID: <ZEu/6f1YHOTof4UY@EMEAENGAAD19049.>
Date: Fri, 28 Apr 2023 13:45:29 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.org>
Subject: Re: [PATCH v2 4/7] tools: Make init-xenstore-domain use
 xc_domain_getinfolist()
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-5-alejandro.vallejo@cloud.com>
 <8b7b7fe0-f814-ba53-eca9-a9341665cc7f@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <8b7b7fe0-f814-ba53-eca9-a9341665cc7f@citrix.com>

Sounds good to me.

Cheers,
Alejandro

On Fri, Apr 28, 2023 at 01:40:50PM +0100, Andrew Cooper wrote:
> I'd recommend having a comment here, saying something like /* Commonly
> dom0 is the only domain, but buffer a little for efficiency. */
> 
> Because this is also the justification for why we don't need to ask for
> 32k domains at once to find XEN_DOMINF_xs_domain in a race-free way.
> 
> Can be fixed on commit if you're happy with the adjustment.
> 
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 12:58:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 12:58:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527322.819835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNg5-0000K0-Mh; Fri, 28 Apr 2023 12:58:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527322.819835; Fri, 28 Apr 2023 12:58:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNg5-0000Js-Jv; Fri, 28 Apr 2023 12:58:21 +0000
Received: by outflank-mailman (input) for mailman id 527322;
 Fri, 28 Apr 2023 12:58:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=19My=AT=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1psNg4-0000Jm-01
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 12:58:20 +0000
Received: from mail-wm1-x32c.google.com (mail-wm1-x32c.google.com
 [2a00:1450:4864:20::32c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 58175972-e5c4-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 14:58:17 +0200 (CEST)
Received: by mail-wm1-x32c.google.com with SMTP id
 5b1f17b1804b1-3f178da21afso67067555e9.1
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 05:58:17 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 t12-20020a5d460c000000b002f6962ee703sm21079612wrq.61.2023.04.28.05.58.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 05:58:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58175972-e5c4-11ed-8611-37d641c3527e
Message-ID: <644bc2e7.5d0a0220.3e72d.ff3d@mx.google.com>
X-Google-Original-Message-ID: <ZEvC5ODtE7W+hPL+@EMEAENGAAD19049.>
Date: Fri, 28 Apr 2023 13:58:12 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2 3/7] tools: Refactor console/io.c to avoid using
 xc_domain_getinfo()
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-4-alejandro.vallejo@cloud.com>
 <91438b54-df82-f790-7154-c76feea90f18@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <91438b54-df82-f790-7154-c76feea90f18@citrix.com>

On Fri, Apr 28, 2023 at 01:33:45PM +0100, Andrew Cooper wrote:
> On 28/04/2023 11:41 am, Alejandro Vallejo wrote:
> > It has 2 avoidable occurrences
> >
> > * Check whether a domain is valid, which can be done faster with
> >     xc_domain_getinfo_single()
> > * Domain discovery, which can be done much faster with the sysctl
> >     interface through xc_domain_getinfolist().
> 
> It occurs to me that this isn't really right here.
> 
> It's true in principle, but switching to requesting all domains at once
> is a fix for a race condition.
> 
> I'd suggest "which can be done in a race free way through ..." and avoid
> saying faster. It's likely not faster now with the 4M bounce, but we
> can fix that in due course.

I agree, yes.

Alejandro


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 13:10:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 13:10:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527324.819845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNs8-0002jP-QK; Fri, 28 Apr 2023 13:10:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527324.819845; Fri, 28 Apr 2023 13:10:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psNs8-0002jI-N5; Fri, 28 Apr 2023 13:10:48 +0000
Received: by outflank-mailman (input) for mailman id 527324;
 Fri, 28 Apr 2023 13:10:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CzbF=AT=citrix.com=prvs=4752babc1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1psNs7-0002jC-4a
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 13:10:47 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1446efb0-e5c6-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 15:10:45 +0200 (CEST)
Received: from mail-co1nam11lp2170.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 28 Apr 2023 09:10:41 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5456.namprd03.prod.outlook.com (2603:10b6:a03:28c::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Fri, 28 Apr
 2023 13:10:40 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.024; Fri, 28 Apr 2023
 13:10:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1446efb0-e5c6-11ed-b224-6b7b168915f2
Message-ID: <cbeeb484-22f7-873c-e0b4-e9094e127ee7@citrix.com>
Date: Fri, 28 Apr 2023 14:10:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 4/7] tools: Make init-xenstore-domain use
 xc_domain_getinfolist()
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.org>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-5-alejandro.vallejo@cloud.com>
In-Reply-To: <20230428104124.1044-5-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0494.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1ab::13) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 28/04/2023 11:41 am, Alejandro Vallejo wrote:
> It currently relies on xc_domain_getinfo() returning the next available
> domain past "first_domid", which is a feature that will disappear in a
> future patch.
>
> Furthermore, while at it, make each hypercall fetch information about more
> than one domain at a time so we can (hopefully) get away with a single
> hypercall on a typical system.
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Juergen Gross <jgross@suse.org>

Oh, also, you should have retained the Reviewed-by: that Juergen gave
you on v1, seeing as you did precisely what he asked.

Same for my R-by on patch 7, except there's a different hiccup there now...

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 13:28:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 13:28:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527329.819855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psO8p-0004My-Cn; Fri, 28 Apr 2023 13:28:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527329.819855; Fri, 28 Apr 2023 13:28:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psO8p-0004Mr-9p; Fri, 28 Apr 2023 13:28:03 +0000
Received: by outflank-mailman (input) for mailman id 527329;
 Fri, 28 Apr 2023 13:28:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ym/r=AT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1psO8n-0004Ml-VX
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 13:28:01 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7e476650-e5c8-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 15:27:59 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 04EB521B36;
 Fri, 28 Apr 2023 13:27:59 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BBE64138FA;
 Fri, 28 Apr 2023 13:27:58 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id NO6GLN7JS2SmHAAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 28 Apr 2023 13:27:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e476650-e5c8-11ed-8611-37d641c3527e
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] MAINTAINERS: add more xenstore files
Date: Fri, 28 Apr 2023 15:27:56 +0200
Message-Id: <20230428132756.8763-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xenstore consists of more files than just the tools/xenstore directory.

Add them to the XENSTORE block.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 MAINTAINERS | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 0e5eba2312..f2f1881b32 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -653,6 +653,11 @@ M:	Wei Liu <wl@xen.org>
 M:	Juergen Gross <jgross@suse.com>
 R:	Julien Grall <julien@xen.org>
 S:	Supported
+F:	tools/helpers/init-xenstore-domain.c
+F:	tools/include/xenstore-compat/
+F:	tools/include/xenstore.h
+F:	tools/include/xenstore_lib.h
+F:	tools/libs/store/
 F:	tools/xenstore/
 
 XENTRACE
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 13:30:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 13:30:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527332.819865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psOAp-0005h2-Qn; Fri, 28 Apr 2023 13:30:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527332.819865; Fri, 28 Apr 2023 13:30:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psOAp-0005gX-MK; Fri, 28 Apr 2023 13:30:07 +0000
Received: by outflank-mailman (input) for mailman id 527332;
 Fri, 28 Apr 2023 13:30:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CzbF=AT=citrix.com=prvs=4752babc1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1psOAo-0005ba-1Q
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 13:30:06 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c7dde610-e5c8-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 15:30:04 +0200 (CEST)
Received: from mail-mw2nam12lp2048.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 28 Apr 2023 09:29:56 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by LV8PR03MB7445.namprd03.prod.outlook.com (2603:10b6:408:191::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.23; Fri, 28 Apr
 2023 13:29:53 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.024; Fri, 28 Apr 2023
 13:29:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7dde610-e5c8-11ed-b224-6b7b168915f2
Message-ID: <e5d00d25-b395-39d0-6fc7-c596c2249477@citrix.com>
Date: Fri, 28 Apr 2023 14:29:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] MAINTAINERS: add more xenstore files
Content-Language: en-GB
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>, Jan Beulich
 <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230428132756.8763-1-jgross@suse.com>
In-Reply-To: <20230428132756.8763-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0581.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:276::11) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|LV8PR03MB7445:EE_
X-MS-Office365-Filtering-Correlation-Id: f8e01f13-3102-4ead-656b-08db47eca54e
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f8e01f13-3102-4ead-656b-08db47eca54e
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 13:29:51.7580
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: P620so7b3ty8PH8cN+CtlgroP+ms81S+QA8owbYQxg/917hXGTnXTcDWObqqz8ON1wnevFRlIQiVi/9I6fAxzEwaDFzV3JbphnLmfhLGhtE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV8PR03MB7445

On 28/04/2023 2:27 pm, Juergen Gross wrote:
> Xenstore consists of more files than just the tools/xenstore directory.
>
> Add them to the XENSTORE block.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 13:30:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 13:30:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527333.819875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psOB6-00066j-2H; Fri, 28 Apr 2023 13:30:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527333.819875; Fri, 28 Apr 2023 13:30:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psOB5-00066a-VA; Fri, 28 Apr 2023 13:30:23 +0000
Received: by outflank-mailman (input) for mailman id 527333;
 Fri, 28 Apr 2023 13:30:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psOB4-00064Y-7w
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 13:30:22 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on20628.outbound.protection.outlook.com
 [2a01:111:f400:7e8d::628])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d10aca20-e5c8-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 15:30:19 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by MN6PR12MB8592.namprd12.prod.outlook.com (2603:10b6:208:478::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24; Fri, 28 Apr
 2023 13:30:15 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94%7]) with mapi id 15.20.6340.022; Fri, 28 Apr 2023
 13:30:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d10aca20-e5c8-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lgrslIRO8/FhGbv9gaZvRC2vzwNQ8HxdJf0PbIfaLU3HweP+KK4zBlMlc8wYoJG9NBP/l3w3jahBAysw8ZC+0cFI3Hhpu5pPrQ8KTXAhVGAEWcb4dBY//8kmY8oSva8jz5pqtFAXKv3e/e9V5fDXIlPC1lF9+PpO+gGrFEiZddj4YongYkZ+iNBf58eGZONxMJRP90HHgOM+0CMHAFjCUp1LIrSnxLnmDNO3BRcmxKhvGSQpnrgyLIICB17Cx+aHi3E8nUOa5ZFQ9f28VADrI9aRQ8VLWtCRW9Fb7fPHotChuxmlKQCtjIaUkbiWx/txAgGkuMIBqhxbCnfeloafhg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=E4dwnUD2oOXA8EyxFkVyoHOHzMDnxWz+nMqD96ZpiXA=;
 b=MKeU40eMmdN3/dEiunJhnfbGcIG84VsYo36XT8IsVYGTB25AXN58k4/ZbNbqr5NOlxudtBRs5455jgxNgQcYMgZ8Ikom2pQ8mPfMp/oN6VlFSpOM5AQ/E7G2gYMbaSQLnO1RWsoS2w3yhxMTInFKHoSlmfWDUTqDP1F/d1gDL8pTAZq57tZ5dZqr+8GYzVjwtciXrR3AV6KP4+NBerl5RaMbI5VPnHuf7z0DbQTWMWvchuCK9bVvx3myUr4Sn2jWJJeqbY+GBwyS+roFmY1bmkMgRcNsJNUO3w84U6c7Uknqqn6Ar8wKnXWw4omTULt+XkIJYYYK3vV3BYAr18X6Uw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=E4dwnUD2oOXA8EyxFkVyoHOHzMDnxWz+nMqD96ZpiXA=;
 b=Vz8V5KhbLfTpW2CcUlOc+9HpLKiRPI9ANOA9l85+rLkiIHjwS2pdp0qCtjFGgRnf/Ktu4dbJhhBWNTFyjFp+dFoj/KDva87C9PcsYgPNaoyz+z1hM/WK6N+2RwgxfVcUgcFPEbiC10U5F+AA34GIDUC/Sv5WPwUEzc8qodm7syI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <a3a0dba5-a231-7cc2-dbad-79df7ad9a136@amd.com>
Date: Fri, 28 Apr 2023 14:30:09 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH] xen/evtchn: Introduce new IOCTL to bind static evtchn
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Samuel Holland <samuel@sholland.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Marc Zyngier <maz@kernel.org>,
 Jane Malalane <jane.malalane@citrix.com>, David Woodhouse <dwmw@amazon.co.uk>
References: <48d30a439e37f6917b9a667289792c2b3f548d6d.1682685294.git.rahul.singh@arm.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <48d30a439e37f6917b9a667289792c2b3f548d6d.1682685294.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO3P123CA0003.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:ba::8) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|MN6PR12MB8592:EE_
X-MS-Office365-Filtering-Correlation-Id: dba4d9a2-6f47-4cd7-f9ab-08db47ecb3b2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dba4d9a2-6f47-4cd7-f9ab-08db47ecb3b2
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 13:30:15.8155
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3T7J/KCBcuDu0YAlEb1ya/+voJAp7mK1XtWY/2DnWVRX4CRZBdQvJdiM7MZJiSgS/BWKaW08YNWtC6jX3uZL4Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN6PR12MB8592

Hi Rahul,

On 28/04/2023 13:36, Rahul Singh wrote:
> Xen 4.17 supports the creation of static evtchns. To allow user space
> applications to bind static evtchns, introduce a new ioctl,
> "IOCTL_EVTCHN_BIND_STATIC". The existing ioctls do more than just
> binding, which is why a new ioctl is needed that only binds the static
> event channels.
>
> Also, static evtchns have to stay available for the lifetime of the
> guest. When the application exits, __unbind_from_irq() ends up being
> called from the release() fop, and as a result the static evtchns get
> closed. To avoid this, add a new bool variable "is_static" in
> "struct irq_info", set when the event channel is created, marking the
> event channel as static so that it is not closed.
>
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>   drivers/xen/events/events_base.c |  7 +++++--
>   drivers/xen/evtchn.c             | 22 +++++++++++++++++-----
>   include/uapi/xen/evtchn.h        |  9 +++++++++
>   include/xen/events.h             |  2 +-
>   4 files changed, 32 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
> index c7715f8bd452..31f2d3634ad5 100644
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -112,6 +112,7 @@ struct irq_info {
>          unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
>          u64 eoi_time;           /* Time in jiffies when to EOI. */
>          raw_spinlock_t lock;
> +       u8 is_static;           /* Is event channel static */

I think we should avoid u8/u16/u32 and instead use
uint8_t/uint16_t/uint32_t.

However, in this case you can use bool.
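
The hunk in __unbind_from_irq() is easy to model outside the kernel. A
minimal sketch of the suggested shape, assuming a bool field; the struct
and function names below are invented for illustration and are not the
kernel's:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model only, not kernel code: a per-channel record with
 * the proposed `bool is_static`, set once at bind time. */
struct evtchn_model {
    unsigned int port;
    bool is_static;   /* true for channels created statically by Xen */
    bool closed;      /* set by the teardown path below */
};

/* Mirrors the patch's change to __unbind_from_irq(): the close is
 * skipped for static channels so they survive application exit. */
static void model_unbind(struct evtchn_model *ch)
{
    if (!ch->is_static)
        ch->closed = true;
}
```

Reading `if (!info->is_static)` as a plain truth value is the
readability argument for bool over u8 here.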

- Ayan

>
>          union {
>                  unsigned short virq;
> @@ -982,7 +983,8 @@ static void __unbind_from_irq(unsigned int irq)
>                  unsigned int cpu = cpu_from_irq(irq);
>                  struct xenbus_device *dev;
>
> -               xen_evtchn_close(evtchn);
> +               if (!info->is_static)
> +                       xen_evtchn_close(evtchn);
>
>                  switch (type_from_irq(irq)) {
>                  case IRQT_VIRQ:
> @@ -1574,7 +1576,7 @@ int xen_set_irq_priority(unsigned irq, unsigned priority)
>   }
>   EXPORT_SYMBOL_GPL(xen_set_irq_priority);
>
> -int evtchn_make_refcounted(evtchn_port_t evtchn)
> +int evtchn_make_refcounted(evtchn_port_t evtchn, bool is_static)
>   {
>          int irq = get_evtchn_to_irq(evtchn);
>          struct irq_info *info;
> @@ -1590,6 +1592,7 @@ int evtchn_make_refcounted(evtchn_port_t evtchn)
>          WARN_ON(info->refcnt != -1);
>
>          info->refcnt = 1;
> +       info->is_static = is_static;
>
>          return 0;
>   }
> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
> index c99415a70051..47681d4c696b 100644
> --- a/drivers/xen/evtchn.c
> +++ b/drivers/xen/evtchn.c
> @@ -366,7 +366,8 @@ static int evtchn_resize_ring(struct per_user_data *u)
>          return 0;
>   }
>
> -static int evtchn_bind_to_user(struct per_user_data *u, evtchn_port_t port)
> +static int evtchn_bind_to_user(struct per_user_data *u, evtchn_port_t port,
> +                       bool is_static)
>   {
>          struct user_evtchn *evtchn;
>          struct evtchn_close close;
> @@ -402,7 +403,7 @@ static int evtchn_bind_to_user(struct per_user_data *u, evtchn_port_t port)
>          if (rc < 0)
>                  goto err;
>
> -       rc = evtchn_make_refcounted(port);
> +       rc = evtchn_make_refcounted(port, is_static);
>          return rc;
>
>   err:
> @@ -456,7 +457,7 @@ static long evtchn_ioctl(struct file *file,
>                  if (rc != 0)
>                          break;
>
> -               rc = evtchn_bind_to_user(u, bind_virq.port);
> +               rc = evtchn_bind_to_user(u, bind_virq.port, false);
>                  if (rc == 0)
>                          rc = bind_virq.port;
>                  break;
> @@ -482,7 +483,7 @@ static long evtchn_ioctl(struct file *file,
>                  if (rc != 0)
>                          break;
>
> -               rc = evtchn_bind_to_user(u, bind_interdomain.local_port);
> +               rc = evtchn_bind_to_user(u, bind_interdomain.local_port, false);
>                  if (rc == 0)
>                          rc = bind_interdomain.local_port;
>                  break;
> @@ -507,7 +508,7 @@ static long evtchn_ioctl(struct file *file,
>                  if (rc != 0)
>                          break;
>
> -               rc = evtchn_bind_to_user(u, alloc_unbound.port);
> +               rc = evtchn_bind_to_user(u, alloc_unbound.port, false);
>                  if (rc == 0)
>                          rc = alloc_unbound.port;
>                  break;
> @@ -536,6 +537,17 @@ static long evtchn_ioctl(struct file *file,
>                  break;
>          }
>
> +       case IOCTL_EVTCHN_BIND_STATIC: {
> +               struct ioctl_evtchn_bind bind;
> +
> +               rc = -EFAULT;
> +               if (copy_from_user(&bind, uarg, sizeof(bind)))
> +                       break;
> +
> +               rc = evtchn_bind_to_user(u, bind.port, true);
> +               break;
> +       }
> +
>          case IOCTL_EVTCHN_NOTIFY: {
>                  struct ioctl_evtchn_notify notify;
>                  struct user_evtchn *evtchn;
> diff --git a/include/uapi/xen/evtchn.h b/include/uapi/xen/evtchn.h
> index 7fbf732f168f..aef2b75f3413 100644
> --- a/include/uapi/xen/evtchn.h
> +++ b/include/uapi/xen/evtchn.h
> @@ -101,4 +101,13 @@ struct ioctl_evtchn_restrict_domid {
>          domid_t domid;
>   };
>
> +/*
> + * Bind statically allocated @port.
> + */
> +#define IOCTL_EVTCHN_BIND_STATIC                       \
> +       _IOC(_IOC_NONE, 'E', 7, sizeof(struct ioctl_evtchn_bind))
> +struct ioctl_evtchn_bind {
> +       unsigned int port;
> +};
> +
>   #endif /* __LINUX_PUBLIC_EVTCHN_H__ */
> diff --git a/include/xen/events.h b/include/xen/events.h
> index 44c2855c76d1..962f0bbc7ce1 100644
> --- a/include/xen/events.h
> +++ b/include/xen/events.h
> @@ -69,7 +69,7 @@ int xen_set_irq_priority(unsigned irq, unsigned priority);
>   /*
>    * Allow extra references to event channels exposed to userspace by evtchn
>    */
> -int evtchn_make_refcounted(evtchn_port_t evtchn);
> +int evtchn_make_refcounted(evtchn_port_t evtchn, bool is_static);
>   int evtchn_get(evtchn_port_t evtchn);
>   void evtchn_put(evtchn_port_t evtchn);
>
> --
> 2.25.1
>
>
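As a side note on the uapi hunk above, the request number it defines can
be decoded with a self-contained sketch. The layout below is re-derived
locally from the generic Linux ioctl encoding (8 nr bits, 8 type bits,
14 size bits, 2 dir bits); the MY_-prefixed names are local stand-ins
for illustration, not the kernel macros:

```c
/* Self-contained re-derivation of the generic ioctl number layout.
 * MY_-prefixed names are illustrative, not the kernel's macros. */
#define MY_IOC_NRSHIFT    0
#define MY_IOC_TYPESHIFT  8
#define MY_IOC_SIZESHIFT  16
#define MY_IOC_DIRSHIFT   30
#define MY_IOC_NONE       0UL
#define MY_IOC(dir, type, nr, size) \
    (((unsigned long)(dir) << MY_IOC_DIRSHIFT) |   \
     ((unsigned long)(type) << MY_IOC_TYPESHIFT) | \
     ((unsigned long)(nr) << MY_IOC_NRSHIFT) |     \
     ((unsigned long)(size) << MY_IOC_SIZESHIFT))

/* The structure and request from the quoted include/uapi/xen/evtchn.h
 * hunk: _IOC(_IOC_NONE, 'E', 7, sizeof(struct ioctl_evtchn_bind)). */
struct ioctl_evtchn_bind {
    unsigned int port;
};
#define IOCTL_EVTCHN_BIND_STATIC \
    MY_IOC(MY_IOC_NONE, 'E', 7, sizeof(struct ioctl_evtchn_bind))
```

In user space this request would then be passed to ioctl() on the
evtchn device together with a struct ioctl_evtchn_bind carrying the
static port.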


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 13:49:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 13:49:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527340.819885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psOTb-00080h-Nb; Fri, 28 Apr 2023 13:49:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527340.819885; Fri, 28 Apr 2023 13:49:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psOTb-00080a-KX; Fri, 28 Apr 2023 13:49:31 +0000
Received: by outflank-mailman (input) for mailman id 527340;
 Fri, 28 Apr 2023 13:49:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=plFI=AT=citrix.com=prvs=475a2a817=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1psOTb-00080U-7J
 for xen-devel@lists.xen.org; Fri, 28 Apr 2023 13:49:31 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7e01e082-e5cb-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 15:49:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e01e082-e5cb-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682689769;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=RHamvegccrGji3l8RtMeLDfhu/Zy0+5QMeZ4/KMRwgE=;
  b=AfEc8eqeg7Vmen4dY5CjzCOTD+iSS9ZUBl4wO24LwtodgNuDmiVkpYVn
   Qh1m2VQSn/SXfOY3AFgPaxBPdK7LhbNLIfhvBPg5weQuogVhNWz3TpUcR
   mnoRJbR2ov/Nr6PRA6Bw2wrv/BoNDTBoTQD85Jb3WkYSTntD3UwbUoY5z
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109669889
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
Date: Fri, 28 Apr 2023 14:49:15 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
CC: <xen-devel@lists.xen.org>, Juergen Gross <jgross@suse.com>, Julien Grall
	<julien@xen.org>, Vincent Guittot <vincent.guittot@linaro.org>,
	<stratos-dev@op-lists.linaro.org>, Alex =?iso-8859-1?Q?Benn=E9e?=
	<alex.bennee@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>, Erik Schilling
	<erik.schilling@linaro.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH V3 1/2] docs: Allow generic virtio device types to
 contain device-id
Message-ID: <9395f0e5-3fe6-4235-a6a8-738c45041a82@perard>
References: <18458fa39433ce4ac950a0a20cc64da93db0b03a.1680771422.git.viresh.kumar@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <18458fa39433ce4ac950a0a20cc64da93db0b03a.1680771422.git.viresh.kumar@linaro.org>

On Thu, Apr 06, 2023 at 02:28:17PM +0530, Viresh Kumar wrote:
> For generic virtio devices, where we don't need to add compatible or
> other special DT properties, the type field is set to "virtio,device".
> 
> But this misses the case where the user sets the type with a valid
> virtio device id as well, like "virtio,device1a" for the file system
> device. The complete list of virtio device ids is available here:
> 
> https://docs.oasis-open.org/virtio/virtio/v1.2/cs01/virtio-v1.2-cs01.html#x1-2160005
> 
> Update documentation to support that as well.
> 
> Fixes: dd54ea500be8 ("docs: add documentation for generic virtio devices")
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD
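
For context, the doc change above means a guest config can name a
concrete device id directly in the generic type string: device id 26
(0x1a) is the file system device in the OASIS list, hence
"virtio,device1a". A hypothetical config fragment under that assumption
(the backend name and exact key syntax are illustrative and should be
checked against the patched xl.cfg documentation):

```
# Hypothetical fragment: a generic virtio device with an explicit
# device id (0x1a = file system) instead of the bare "virtio,device".
virtio = [ "backend=domD,type=virtio,device1a,transport=mmio" ]
```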


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 13:50:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 13:50:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527341.819894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psOTv-0008Mi-Vi; Fri, 28 Apr 2023 13:49:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527341.819894; Fri, 28 Apr 2023 13:49:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psOTv-0008Mb-T5; Fri, 28 Apr 2023 13:49:51 +0000
Received: by outflank-mailman (input) for mailman id 527341;
 Fri, 28 Apr 2023 13:49:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=plFI=AT=citrix.com=prvs=475a2a817=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1psOTv-00080U-3S
 for xen-devel@lists.xen.org; Fri, 28 Apr 2023 13:49:51 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8a2f385a-e5cb-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 15:49:49 +0200 (CEST)
Date: Fri, 28 Apr 2023 14:49:42 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
CC: <xen-devel@lists.xen.org>, Juergen Gross <jgross@suse.com>, Julien Grall
	<julien@xen.org>, Vincent Guittot <vincent.guittot@linaro.org>,
	<stratos-dev@op-lists.linaro.org>, Alex =?iso-8859-1?Q?Benn=E9e?=
	<alex.bennee@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>, Erik Schilling
	<erik.schilling@linaro.org>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>
Subject: Re: [PATCH V3 2/2] libxl: fix matching of generic virtio device
Message-ID: <5e98d465-be8f-4050-a988-2a0829a71a2e@perard>
References: <18458fa39433ce4ac950a0a20cc64da93db0b03a.1680771422.git.viresh.kumar@linaro.org>
 <888e60d2ec49f53230bc82df393b6bed4180cb8a.1680771422.git.viresh.kumar@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <888e60d2ec49f53230bc82df393b6bed4180cb8a.1680771422.git.viresh.kumar@linaro.org>

On Thu, Apr 06, 2023 at 02:28:18PM +0530, Viresh Kumar wrote:
> The strings won't be an exact match, as we are only looking to match the
> prefix here, i.e. "virtio,device". This is already done properly in
> libxl_virtio.c, so let's do the same here too.
> 
> Fixes: 43ba5202e2ee ("libxl: add support for generic virtio device")
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
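[For context, the prefix match being discussed can be sketched as follows; this is a hypothetical helper, not the actual libxl code, assuming compatible strings such as "virtio,device26" that carry a device ID after the "virtio,device" prefix:]

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch: a compatible string like "virtio,device26" must
 * match on the "virtio,device" prefix, not compare equal to it. */
#define VIRTIO_DEVICE_PREFIX "virtio,device"

static bool is_generic_virtio(const char *compatible)
{
    /* strncmp() bounded to the prefix length accepts any trailing
     * device ID; a plain strcmp() here would wrongly reject
     * "virtio,device26" because of the suffix. */
    return strncmp(compatible, VIRTIO_DEVICE_PREFIX,
                   strlen(VIRTIO_DEVICE_PREFIX)) == 0;
}
```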

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 14:05:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 14:05:59 +0000
Message-ID: <b2339ed9-59ab-e758-0180-f3deca061198@citrix.com>
Date: Fri, 28 Apr 2023 15:05:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 7/7] domctl: Modify XEN_DOMCTL_getdomaininfo to fail if
 domid is not found
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Jan Beulich
 <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-8-alejandro.vallejo@cloud.com>
In-Reply-To: <20230428104124.1044-8-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 28/04/2023 11:41 am, Alejandro Vallejo wrote:
> It previously mimicked the getdomaininfo sysctl semantics by returning
> the first domid higher than the requested domid that does exist. This
> unintuitive behaviour causes quite a few mistakes and makes the call
> needlessly slow in its error path.
>
> This patch removes the fallback search, returning -ESRCH if the requested
> domain doesn't exist. Domain discovery can still be done through the sysctl
> interface as that performs a linear search on the list of domains.
>
> With this modification the xc_domain_getinfo() function is deprecated and
> removed to make sure it's not mistakenly used expecting the old behaviour.
> The new xc wrapper is xc_domain_getinfo_single().
>
> All previous callers of xc_domain_getinfo() have been updated to use
> xc_domain_getinfo_single() or xc_domain_getinfolist() instead. This also
> means xc_dominfo_t is no longer used by anything and can be purged.
>
> Resolves: xen-project/xen#105
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>

You haven't (in theory) changed this patch, so should have retained my
R-by from v1, except...

> diff --git a/tools/libs/guest/xg_dom_boot.c b/tools/libs/guest/xg_dom_boot.c
> index 1dea534bba..dc858a1567 100644
> --- a/tools/libs/guest/xg_dom_boot.c
> +++ b/tools/libs/guest/xg_dom_boot.c
> @@ -178,7 +178,7 @@ int xc_dom_boot_image(struct xc_dom_image *dom)
>      {
>          xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
>                       "%s: getdomaininfo failed (errno=%d)",
> -                     __FUNCTION__, rc, errno);
> +                     __FUNCTION__, errno);
>          return -1;
>      }
>      dom->shared_info_mfn = info.shared_info_frame;

... this hunk means the patch 6 build is broken.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 14:23:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 14:23:32 +0000
Date: Fri, 28 Apr 2023 16:22:55 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v4 04/20] virtio-scsi: stop using aio_disable_external()
 during unplug
Message-ID: <ZEvWv8dF78Jpb6CQ@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-5-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230425172716.1033562-5-stefanha@redhat.com>

On 25.04.2023 at 19:27, Stefan Hajnoczi wrote:
> This patch is part of an effort to remove the aio_disable_external()
> API because it does not fit in a multi-queue block layer world where
> many AioContexts may be submitting requests to the same disk.
> 
> The SCSI emulation code is already in good shape to stop using
> aio_disable_external(). It was only used by commit 9c5aad84da1c
> ("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
> disk") to ensure that virtio_scsi_hotunplug() works while the guest
> driver is submitting I/O.
> 
> Ensure virtio_scsi_hotunplug() is safe as follows:
> 
> 1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
>    device_set_realized() calls qatomic_set(&dev->realized, false) so
>    that future scsi_device_get() calls return NULL because they exclude
>    SCSIDevices with realized=false.
> 
>    That means virtio-scsi will reject new I/O requests to this
>    SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
>    virtio_scsi_hotunplug() is still executing. We are protected against
>    new requests!
> 
> 2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
>    that in-flight requests are cancelled synchronously. This ensures
>    that no in-flight requests remain once qdev_simple_device_unplug_cb()
>    returns.
> 
> Thanks to these two conditions we don't need aio_disable_external()
> anymore.
> 
> Cc: Zhengui Li <lizhengui@huawei.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
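[The two safety conditions above can be sketched in miniature as follows; this is a highly simplified model with invented names, not the actual QEMU API (the real logic lives in device_set_realized() and scsi_device_purge_requests()):]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model of the two unplug-safety conditions described above;
 * all names here are hypothetical. */
struct toy_scsi_device {
    atomic_bool realized;
    int in_flight;      /* count of pending requests */
};

/* Condition 1: submission checks realized, so new I/O is rejected as
 * soon as unplug clears the flag (analogous to returning
 * VIRTIO_SCSI_S_BAD_TARGET for an unrealized SCSIDevice). */
static bool toy_submit(struct toy_scsi_device *dev)
{
    if (!atomic_load(&dev->realized))
        return false;
    dev->in_flight++;
    return true;
}

/* Condition 2: unplug first clears the flag, then synchronously
 * cancels whatever was already in flight, so no requests remain
 * once it returns. */
static void toy_unplug(struct toy_scsi_device *dev)
{
    atomic_store(&dev->realized, false);
    dev->in_flight = 0; /* "purge": cancel in-flight requests */
}
```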

qemu-iotests 040 starts failing for me after this patch, with what looks
like a use-after-free error of some kind.

(gdb) bt
#0  0x000055b6e3e1f31c in job_type (job=0xe3e3e3e3e3e3e3e3) at ../job.c:238
#1  0x000055b6e3e1cee5 in is_block_job (job=0xe3e3e3e3e3e3e3e3) at ../blockjob.c:41
#2  0x000055b6e3e1ce7d in block_job_next_locked (bjob=0x55b6e72b7570) at ../blockjob.c:54
#3  0x000055b6e3df6370 in blockdev_mark_auto_del (blk=0x55b6e74af0a0) at ../blockdev.c:157
#4  0x000055b6e393e23b in scsi_qdev_unrealize (qdev=0x55b6e7c04d40) at ../hw/scsi/scsi-bus.c:303
#5  0x000055b6e3db0d0e in device_set_realized (obj=0x55b6e7c04d40, value=false, errp=0x55b6e497c918 <error_abort>) at ../hw/core/qdev.c:599
#6  0x000055b6e3dba36e in property_set_bool (obj=0x55b6e7c04d40, v=0x55b6e7d7f290, name=0x55b6e41bd6d8 "realized", opaque=0x55b6e7246d20, errp=0x55b6e497c918 <error_abort>)
    at ../qom/object.c:2285
#7  0x000055b6e3db7e65 in object_property_set (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", v=0x55b6e7d7f290, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1420
#8  0x000055b6e3dbd84a in object_property_set_qobject (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=0x55b6e74c1890, errp=0x55b6e497c918 <error_abort>)
    at ../qom/qom-qobject.c:28
#9  0x000055b6e3db8570 in object_property_set_bool (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=false, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1489
#10 0x000055b6e3daf2b5 in qdev_unrealize (dev=0x55b6e7c04d40) at ../hw/core/qdev.c:306
#11 0x000055b6e3db509d in qdev_simple_device_unplug_cb (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/qdev-hotplug.c:72
#12 0x000055b6e3c520f9 in virtio_scsi_hotunplug (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/scsi/virtio-scsi.c:1065
#13 0x000055b6e3db4dec in hotplug_handler_unplug (plug_handler=0x55b6e81c3630, plugged_dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/hotplug.c:56
#14 0x000055b6e3a28f84 in qdev_unplug (dev=0x55b6e7c04d40, errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:935
#15 0x000055b6e3a290fa in qmp_device_del (id=0x55b6e74c1760 "scsi0", errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:955
#16 0x000055b6e3fb0a5f in qmp_marshal_device_del (args=0x7f61cc005eb0, ret=0x7f61d5a8ae38, errp=0x7f61d5a8ae40) at qapi/qapi-commands-qdev.c:114
#17 0x000055b6e3fd52e1 in do_qmp_dispatch_bh (opaque=0x7f61d5a8ae08) at ../qapi/qmp-dispatch.c:128
#18 0x000055b6e4007b9e in aio_bh_call (bh=0x55b6e7dea730) at ../util/async.c:155
#19 0x000055b6e4007d2e in aio_bh_poll (ctx=0x55b6e72447c0) at ../util/async.c:184
#20 0x000055b6e3fe3b45 in aio_dispatch (ctx=0x55b6e72447c0) at ../util/aio-posix.c:421
#21 0x000055b6e4009544 in aio_ctx_dispatch (source=0x55b6e72447c0, callback=0x0, user_data=0x0) at ../util/async.c:326
#22 0x00007f61ddc14c7f in g_main_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:3454
#23 g_main_context_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:4172
#24 0x000055b6e400a7e8 in glib_pollfds_poll () at ../util/main-loop.c:290
#25 0x000055b6e400a0c2 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:313
#26 0x000055b6e4009fa2 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:592
#27 0x000055b6e3a3047b in qemu_main_loop () at ../softmmu/runstate.c:731
#28 0x000055b6e3dab27d in qemu_default_main () at ../softmmu/main.c:37
#29 0x000055b6e3dab2b8 in main (argc=24, argv=0x7ffec55196a8) at ../softmmu/main.c:48
(gdb) p jobs
$4 = {lh_first = 0x0}

Kevin



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 15:22:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 15:22:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fbb83eb-e5d8-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682695355;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=RUiVyvfJjpvdivuD7I+aDU4+0VWMoKV3foRZe1OOZv0=;
  b=ZUULZPCneMQUqx0CuMfWCrIMS74TXcIBbsZWVPtSXjuwgWwkLeDYEjM0
   uEZpLDRmdgVdR1NhXPmt8e6/WVR74KqcYiT5VnF4sg08AYNHy7xcAsNme
   35UgxNprsov9IJMJF4CtDo/M8BkEP0aTKrolIDHL2pP/JSQgGh9eVlfvX
   M=;
X-IronPort-RemoteIP: 104.47.59.174
X-IronPort-MID: 107643009
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:55hbYau7xEdyMEUs4+XnRf6+W+fnVJBfMUV32f8akzHdYApBsoF/q
 tZmKT2PM/uONjCmLo8lb4+xoRgDsZPWnd9hTwc/riA2EioV+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg3HVQ+IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4bKj6Vv0gnRkPaoQ5AOExyFJZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwMzJOfE6917+Kw7ecReAv3d8ccJHhI9ZK0p1g5Wmx4fcOZ7nmGv+PyfoGmTA6i4ZJAOrUY
 NcfZXx3dhPcbhZTO1ARTpUjgOOvgXq5eDpdwL6XjfNvvy6Pk0osgf60b4a9lt+iHK25mm6xo
 G7c8nu/KRYdLNGFkhKO8262h/+JliT+MG4XPOTgqaA22wXLlwT/DjUbRRySm9WFknS3cP19J
 xQuwRc+8IYtoRnDot7VGkfQTGS/lhcYVthZFeEg70eTw67Q7gSeLmMASSNNLtchsaceVTEsk
 1OEgd7tLThuq6GOD2KQ8K+OqjG/MjRTKnUNDQcbSSMV7t+lp5s85jrNQcxkC7WdlcDuFHf7x
 DXikcQlr7AajMpO26Dl+1nC2miovsKQEVJz4RjLVGW46A8/fJSie4Gj9Vnc67BHMZqdSV6C+
 nMDnqBy8dwzMH1ErwTVKM1lIV1jz6zt3OH06bK3I6Qcyg==
IronPort-HdrOrdr: A9a23:25TXOaxbNjpZA4i7RifbKrPwEr1zdoMgy1knxilNoH1uA7Wlfq
 WV954mPHDP+VUssU8b6Le90cW7IU80jKQFh7X5Xo3NYOCFghrREGgK1+KLrgEIfReOk9K1vp
 0AT0ERMqyTMbD75vyKhDVRkb0brOVuq8iT6tvj8w==
X-Talos-CUID: 9a23:m98mUWAWX3KXEZr6EzJlt1xKJJB4Tn7Y63jZYBSjNjoxQqLAHA==
X-Talos-MUID: =?us-ascii?q?9a23=3AUrv2zgzrmhBejj1YTgIJQONl+X6aqKm+T2k3vak?=
 =?us-ascii?q?egcCvGg58Yw2msSTqbJByfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,234,1677560400"; 
   d="scan'208";a="107643009"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WHxWobuNQQKqXDHqlkEQtqhMToQj/bc7FRdcmE843mMqTeAK3TXS7UAibBxsvnwWyADd8LtmrDrZc+KBU++c77iRKt39uFXGlnEowAT8ThwWUw7szXxVthATDNxl92Rcq5qojXtBByvaVcVnliSIm28dPM6Zx8QUzDqPyNiV3FVUT4Qp0DP7oO3mvznkrgygOvmdkIkIZjcVOOlb2dGMs/oWTDILw8SvXwA2OYUrNpmrQ7mRd/yJ0rZ48Qk5chhlxsiICTCmRSgRPnqhPDgy5OwblG2y6YgqAyC9Zhy1eQTHxsUOopLALY71bY05izNc798bSSSvmQta17vC0pJb6A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xSdch7VJ9CsOzzeQi4IbtZS6Bu3WNkTNR1mj+fvzyaQ=;
 b=AHrhwsQEkO0RR0QL7Nh9cXIyAxaOn7TztT/a0yy6HN61cvOlpgiAI/ndOgZmwuZt6IfCCGon90n0wKbQQjqL2gGMghbdV7+ko+1l3lt+mFyHoE8VLHdmXNwiu9OIH4rfpwdp8bXFo8uHmjGSqd67vRAHZ9jZXEfVxFxes0FmfM4lNxRmIyQAAc7ttyHof0DrwkXydQovakvUVbQ2k81bmvGmZKp+QNR83psAVqyAdw3rka6yJ0hPbdW4s89OREaPgXLWaVYo046lOVCDKGVsEs0t6r8hRNL7rTN9QLQXM8IVozKLJhZprDzr5/q6f9llQ+9nWHB+dLnGLnY6ID/ynA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xSdch7VJ9CsOzzeQi4IbtZS6Bu3WNkTNR1mj+fvzyaQ=;
 b=A9fy3EFixf7hZdEBgvi5LO2ki+ddKXqSFwIA6QqQoD4lVpH89RqmBrT/T+3rIDiTewOt94e2LICsK5TIRfAJOn/IJXvUSHl7m0lyDD6oEVJwmb6uhPstSp8qLF5LkrEF4LtGWsnLAAzswUgfjip/8J0IG3buXysXTcpKeZevf+0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <aaf6d2c4-b01b-ff26-104c-0b3689c0e076@citrix.com>
Date: Fri, 28 Apr 2023 16:22:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 5/7] tools: Modify single-domid callers of
 xc_domain_getinfolist()
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-6-alejandro.vallejo@cloud.com>
In-Reply-To: <20230428104124.1044-6-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0635.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:294::12) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DM4PR03MB6078:EE_
X-MS-Office365-Filtering-Correlation-Id: 0a64e46a-9d4e-4d6b-24a7-08db47fc6148
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Vi343JGhL46wSQR2VbQ4F29k0zM6ojXFxR0HZhW3QDvsRGuaZooTSfP5ZP3bQykC5Uos2DhKE4Rilx/aay+3v3I7+6QaN0oNywF3CNenC8CsZFU9k1+6mNz5vxbEN27J9VWndhun0B6WvU/6enbP7HWBAnOjoOA1d8LhEi4HApUQ93BsJXg+ykd5/59jXOqdKhfBpftnsuCZ/V+OWDg05Nwu9Y+HadnlKZyKzgpwbFzWxV5u4PmbEOKonttfaax4RzL2IPLD1UzPsc5eRovbXO1uwMKE89zcxt35PjmGwGhgFq3aR1Mhd6a7yJlWnDU8aHskm+PBCkzk+hUN6ApsGU+jsocoPFU82sUDXt0beKlwKJxWOtLOBun5ZxQD8kDRCr8Ldt7MO8FaEM6dqfhoO4I2HZEr0v1/8PSA5SkLaFZM9qvOml+eRFcSuX9jMqBr4UsWViyfi/QVEQjrzZQOvfkhJO6Kavl7VnyVZPm/4sDkIhgK/3oAG1i2/hySQJwieiIl4iFWIKXXQHzstgEx2TzF/GeUXGTmbU32i0LtJgK92cbpusHup2Xct835g5ijZL/Nhs+1IA/L+Ld2riRH93C3ppX2mEGWBFeINfKeTmVhfTandSuZS9IYSoKI7FPZ7CK+AUq8jKkQrd5+E6Hf7g==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(396003)(366004)(376002)(346002)(451199021)(53546011)(186003)(6506007)(26005)(6512007)(2616005)(31686004)(83380400001)(5660300002)(66476007)(66946007)(66556008)(38100700002)(82960400001)(110136005)(41300700001)(316002)(2906002)(4326008)(31696002)(86362001)(478600001)(54906003)(8936002)(8676002)(6486002)(6666004)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZXFaVE1oejZzcldObXRNdFNvWEMzcmhST3hmQ2Q0cDBoQWFHeXVWUVhIMndI?=
 =?utf-8?B?R201anZSWU5xd1l0NnlieTdFb1pKcE1GS1BIZmRVdzNVdUtoYnFuNXRGV1F3?=
 =?utf-8?B?eko1TGN4cHpvbkV2U3VZSUJsaERhU0ttbG1EVU1mVTNnRGdLbWxzekM1L1Ft?=
 =?utf-8?B?amdlWURjZEttMUNLbXRmOG0xdWYxSVloS3Y3dUJNTFNFRkNmNmFBUkFMSGYz?=
 =?utf-8?B?czB1aVBoNDN4R1FLSjVMWnJWRmVDK1Y2TmdOdnZIcU4rYlBEMnphMGdMQzk5?=
 =?utf-8?B?TjE5eHU0RkNyai9VL0tKdU9RV25rcVh2dEpad1B2QmFaSGxieklrc0c3MXpj?=
 =?utf-8?B?b1NOdzZ5VTkyU0w5NmlGUnVKbDlPeGl5UzBsT1VzcmQ2dkdiempYRW1GL3gv?=
 =?utf-8?B?WHBJRzlXZ3lDYWgwbE94MnVvd01OeGQ4RHFHZFF2aEdqa1IxT3g0Z2dRVUxI?=
 =?utf-8?B?RDc3VWhLQjNQNTdkWVM0RzBUeEJpMWc0M3VnMlBXTXA5NHFMMmRWTnlqNGF3?=
 =?utf-8?B?UE5DYnFZYmdmWEpGR3NDZHhqSjNVeWZIVGVka0pnZUZIQzg5eVNiNEtFdTEz?=
 =?utf-8?B?dHp0bE5oRTNyY1VFc1p6SDVzSUdNTnNPajByUG1FVksvNUJycFJEN2FlckJw?=
 =?utf-8?B?VTJtUnVOL2pPbE9KTkZGMnJYRzdEMmNZdk1UN0gyVXM2Qjk3eVpWVkxSNlZa?=
 =?utf-8?B?bjA4bzVuMGErMWFXMU15TkhlRUMxVlNYdFRVdlhRVXY5TVBlaGJpNVhLT0JD?=
 =?utf-8?B?bTl6eUMvbko5eDRpaUErU3VyZG5wYWszcFo2VWFjaFEzNnJOZ2JTcjM2TDFv?=
 =?utf-8?B?bE1ySzUwaWVFd1UzK05Ualllb0FBMlA0R2FKcnllOGhtWGVOU2VyeUtzNW1U?=
 =?utf-8?B?RitVckoxcmcybFdHL3dwOEdrZmNEQ3JVcjZJb01RR3NEbDhNS2NxVkk1ajNx?=
 =?utf-8?B?THpLTDhVVTN1LzBoQzZTeThoMmhxbk0wTzNIZGs2L2xuZVZIV2hWekwrNVpD?=
 =?utf-8?B?NFdhSGgxMlJyMmNnc0FmMUNPcVJpOHlxMTNTWkxVQW9RWmVKSk1zVVY2WVVn?=
 =?utf-8?B?K2c5eWlkaU9CV1BOUzZ3ZW9DcWZTd0pGd1ZJTTkxME5FaXRlcGVRT2pXZGNN?=
 =?utf-8?B?QlVSMjJPQkJvMkJBblBrbUx0Y1MzL1dzMFNjdWN3dWEvaEJqclVHeWYya212?=
 =?utf-8?B?aS9mbUVIRDU2VWJoanpWWVdwSlcrT3ZVRit4cG41QlQ5aTVpT1BmaGFoWGdM?=
 =?utf-8?B?TVZmeEtOWXlhVFF6cXhqUFdVODJZU0wzQmh2UnFkUFcwYko0NmI4T0RaSTRt?=
 =?utf-8?B?QzdTdUxUdFRINVJxNDh2RUpLTWgxOGZXcDVjME55QnA0ejBmd0IycEFtZnR2?=
 =?utf-8?B?M254STR0Tmt5NW92SkhwZ2o2QzRkSk8rWjA5L3o2Mm5ydzJJa1dWN1JXVmJr?=
 =?utf-8?B?aTR6SGpqR3piY3FweWRicFZmeVVmWWwvM0gzcXVNcUJQMnlVbTRtRy9KZ21G?=
 =?utf-8?B?bnUrUno3R0VvRmxnMXN3dUtSbmZjbGZQSllLM0ZtQXZIRG5JUTlPczkvbktl?=
 =?utf-8?B?WDV0bFJ6Sm1UT0syUzhuSXRRN2RHNlMzMlE3akJ5M2J5cDAwV2lFeTRqMExI?=
 =?utf-8?B?SEg2eEJHT2FoN2Y1Nm9GSWxXQ3lYajZSMHlFR2Q0TXp3aEdYZUsvNUpEVDl3?=
 =?utf-8?B?VUJ0Qm9oTzkzV3JLcFoxdGNlOWxlOHd4N1dicHBqTEZORmtMVXo5elNCdWtR?=
 =?utf-8?B?eDFHK1lZUHplRDBsb09ybHJ5RGlsY2RZbm4rSFRaczJHaUpsYzM5UDFITDdL?=
 =?utf-8?B?WU9aQTBXbjl3S08vTk1RZCtJV3BIeDZOOC9QOXh3Q3ZoRXBwNG1uVWJwVElT?=
 =?utf-8?B?R3QrNXhaTGRBSGxwOWhnZ3M5VDZMTnpXbUZMZE1IdC8zdW43WU56bGlYbjBY?=
 =?utf-8?B?SnlqZ1Y4S1JlU3hCMCt6SW1jV0NJN0dpbUNGek0vUUlXSEhQamcwWHdKUDkv?=
 =?utf-8?B?MVZzbnNRdjl3ZjQ0b0QrSytseXJSa0E0Y2Z2bU1vV3J3V09jM1lPMlBmTjBi?=
 =?utf-8?B?TUxFWFAyWm12YVZKbHNVWDgreHV3TXBrT0xNNXpXK0FKbXBBVFVrLzByZ0tn?=
 =?utf-8?B?MlJkZ0hmNTNieURQWTNrVWNRRFVlazZyd0RaSEFoNjVBaVFJK01oTnFTSUZo?=
 =?utf-8?B?QXc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	4Qyszt4LuzLrJiELToiEwzIwOqAhjsoSIdZ4vPsq5ckDUfwh6FSpEl/8d44p9fpBGE0lxHLQH7TByalWrafdck4pHZAYj0J/GMrVZ0AdUHfhOb7o61VodKvPLtsyLU/ZPVwvRZiPkaPreP0z2kx1pcDXRY93pnlGwI8bfsEbH8A0bN54eRzbDYLlojvvB4zEi0ckmzHLtb2yA4uyBppRB7/LwF14YDvlhg0qPDTUuUhxd/S7fvkW4bsWZY97UDYRp6dYu6dATOXxsWsqvSueZ3rEFBj4kl1oBDycLQHREYuL5B68cHzj1Gq6+xSF6fL1aHPl5+Ojns+j11t/ARL1JUHCWAze9xV2efAfk6spZ6qeaYnhAk/sIPw6t7u+ZfIAFOfolj9VTAfLzoWn3u744XhpVR4eOByWEmsbd/patXuQNqVG3qkNRO7U7FYvpT9NWTrQmuPRSWk9VcFzE8KnsqTJXvX1MIBPJhI3j8mdbUUsnDv62SUjQR9x+2l/vophwU6A5vf+xsgppFCslQg1e0n2kIb5lVH+7PcZjKp1moyU/hZ0/XnYWCllhnuP7GR4sbosQkD/bgiGoQOIzymhpovtkBal0RIGw7/b9NKo7qFvmeuYdGhotZG4hMJR9w5F0R8PVSdmMU20cgSnw22qGWjeeqE+kYGA0Gle437nrl8J3pC8hhr3iH5Tvcd8MAz8QAjPKTu/JfPc7lLMoNczmH3/GjLYYNo7boiBBo5O5OwO4e1fAg1ohJ3MqSWnJr25YlNyZsxKUuumzUsyLaHdQH6W2C7QxnmHeM+QRfDWPKECY0CIAOv/o7gRbfMVyCFkxBLgRNteoG/91ivVs6zHolE0AJbKzZxSWvvKlaUgYk8=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0a64e46a-9d4e-4d6b-24a7-08db47fc6148
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 15:22:29.5662
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ym2o3PDj0WyWSwPPaLZJ0b3oXTP02IQrKPUjFItqVzKQz556oT4+OFqa1mUyf1bJGAgn+fNL1Xo+diqa9ZMxWtowFxql5vmA+HtNZ3qKjoY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6078

On 28/04/2023 11:41 am, Alejandro Vallejo wrote:
> xc_domain_getinfolist() internally relies on a sysctl that performs
> a linear search for the domids. Many callers of xc_domain_getinfolist()
> that require information about a specific domid are better off calling
> xc_domain_getinfo_single() instead, which uses the getdomaininfo domctl
> and ensures the returned domid matches the requested one. The domctl
> will also find the domid faster, because it uses hashed lists.
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Juergen Gross <jgross@suse.com>
> ---
>  tools/libs/light/libxl_dom.c         | 15 +++++----------
>  tools/libs/light/libxl_dom_suspend.c |  7 +------
>  tools/libs/light/libxl_domain.c      | 13 +++++--------
>  tools/libs/light/libxl_mem.c         |  4 ++--
>  tools/libs/light/libxl_sched.c       | 12 ++++--------
>  tools/xenpaging/xenpaging.c          | 10 +++++-----
>  6 files changed, 22 insertions(+), 39 deletions(-)
>
> diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
> index 25fb716084..bd5d823581 100644
> --- a/tools/libs/light/libxl_dom.c
> +++ b/tools/libs/light/libxl_dom.c
> @@ -32,8 +32,8 @@ libxl_domain_type libxl__domain_type(libxl__gc *gc, uint32_t domid)
>      xc_domaininfo_t info;
>      int ret;
>  
> -    ret = xc_domain_getinfolist(ctx->xch, domid, 1, &info);
> -    if (ret != 1 || info.domain != domid) {
> +    ret = xc_domain_getinfo_single(ctx->xch, domid, &info);
> +    if (ret < 0) {
>          LOG(ERROR, "unable to get domain type for domid=%"PRIu32, domid);

I think this LOG() would benefit from turning into a LOGED() like the
others.

Otherwise, everything LGTM.  I'm happy to adjust on commit.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 16:15:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 16:15:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527412.819969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psQkG-0002VX-UP; Fri, 28 Apr 2023 16:14:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527412.819969; Fri, 28 Apr 2023 16:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psQkG-0002VQ-Rm; Fri, 28 Apr 2023 16:14:52 +0000
Received: by outflank-mailman (input) for mailman id 527412;
 Fri, 28 Apr 2023 16:14:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psQkF-0002VG-KD; Fri, 28 Apr 2023 16:14:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psQkF-000264-8a; Fri, 28 Apr 2023 16:14:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psQkE-0007Es-NX; Fri, 28 Apr 2023 16:14:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psQkE-0001yl-NA; Fri, 28 Apr 2023 16:14:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RIH2Dh6ZS2FQ94Kneyvcg0a/uXn8LPila2L3Fvapevo=; b=4B+jnu9I5x5WDI/BL0LEbFnvcH
	zSF1aL1O/S50RcqXxnbq6eM2WrhSjpNEJ1gQopVY1hlcvk8baGKYqf5GCGBi0zfHk8QeB940zIlyc
	H1fH7To0Ecc+3LEe0U4M1Ic1lDwRiT3g/m7h+BRbYLz/4LeVdFQ1h0Z0O4I9pmDUpXOs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180457-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180457: regressions - FAIL
X-Osstest-Failures:
    linux-linus:build-arm64:xen-build:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=35fab9271b7e6d193b47005c4d07369714db4fd1
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 16:14:50 +0000

flight 180457 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180457/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 180278
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 test-amd64-amd64-xl-vhd     21 guest-start/debian.repeat fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                35fab9271b7e6d193b47005c4d07369714db4fd1
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   11 days
Failing since        180281  2023-04-17 06:24:36 Z   11 days   19 attempts
Testing same since   180457  2023-04-28 00:42:29 Z    0 days    1 attempts

------------------------------------------------------------
1873 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 210735 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 16:29:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 16:29:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527422.819979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psQyL-00047o-6k; Fri, 28 Apr 2023 16:29:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527422.819979; Fri, 28 Apr 2023 16:29:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psQyL-00047h-47; Fri, 28 Apr 2023 16:29:25 +0000
Received: by outflank-mailman (input) for mailman id 527422;
 Fri, 28 Apr 2023 16:29:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CzbF=AT=citrix.com=prvs=4752babc1=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1psQyJ-00047b-0N
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 16:29:23 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d1e947e6-e5e1-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 18:29:18 +0200 (CEST)
Received: from mail-co1nam11lp2170.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 28 Apr 2023 12:29:15 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN8PR03MB4996.namprd03.prod.outlook.com (2603:10b6:408:7e::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22; Fri, 28 Apr
 2023 16:29:13 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.024; Fri, 28 Apr 2023
 16:29:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1e947e6-e5e1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682699358;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=mVXk2JvhTVmbrxmBGwfrJ9NN+eP9NWyG/G563xcBgqs=;
  b=fJ+1f3bJsTyJ96Vhj5NN2ueic1Lmqn0hnff8Ou4Wf0Bdamp3dSnffXUL
   9gkYy85cjyIxFVIcoKAoyTYbrANEzVml52OZiRRnf1L6uvaoCM5mOrj1l
   SQzgTDUfP7IdDnmgzfaMEUHrhTGh6KrlD38Yfv1TmJU81Rq4uJU/6f8q/
   Y=;
X-IronPort-RemoteIP: 104.47.56.170
X-IronPort-MID: 107651768
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:Y1ByMKOwAz8C9MvvrR1wlsFynXyQoLVcMsEvi/4bfWQNrUog1WYHm
 zRNWGvXP/6JZTagf9glPdznoU8A7ZCHz9Q3SAto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tE5AxmOJingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0stzLmB1r
 fMpET9TQiCe3emyz7y2b/Y506zPLOGzVG8ekldJ6GiASN0BGNXESaiM4sJE1jAtgMwIBezZe
 8cSdTtoalLHfgFLPVAUTpk5mY9EhFGmK2Ee9A3T+PRxujeLpOBy+OGF3N79U9qGX8hK2G2fo
 XrL5T/RCRAGLt2PjzGC9xpAg8eWxXylBdtNSu3QGvhCgmWMxDMrKTkqZQG34tuduFeDfsBgN
 BlBksYphe1onKCxdfH0WxC6qXiIpBlaRdNUF+A47ymGzq3J70CSAW1sZi5MbpkqudE7QRQu1
 0SVhJX5CDp3qrqXRHmBsLCOoluaJiw9PWIEIygeQmMt/9jmiJE+iFTIVNkLOKy6lNruAhnr3
 iuH6iM5gt0uYdUj0qy6+RXMhGuqr52QFwotvFyIBiSi8x9zY5Oja8qw81/H4P1cLYGfCF6co
 HwDnMvY5+cLZX2QqBGwrCw2NOnBz5643Pf02DaDw7FJG+yRxkOe
IronPort-HdrOrdr: A9a23:q1ZQka1gruN0g4f3vGRz1QqjBKUkLtp133Aq2lEZdPUzSL37qy
 nOpoV56faQsl16ZJhOo7290da7MBbhHPJOjbX5X43NYOCWgguVxehZhOPfKlbbehEWmNQz6U
 5oSdkbNOHN
X-Talos-CUID: 9a23:PmO5VmE8bn37Xdr7qmJK8GEVIZgqQ0TSklvgPhWzDWlnRZiKHAo=
X-Talos-MUID: 9a23:ydUFuwgiAYT4pq+Ymx5JW8MpNv934fuWKWQxoc8K+OugBzVNHiWik2Hi
X-IronPort-AV: E=Sophos;i="5.99,235,1677560400"; 
   d="scan'208";a="107651768"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KjfWmV4wpdm/nF/hJDvkgOMxCyGaCwmTAKnxe5Xi0PyxirSlW/RMlKf193RT48IZOoqe3loO9EUZoH9N5LPi+uAyCprmZMTFP2vrIhJk0dco+5yCPeVJPaa6gUSXwX28PDWbaqs6OZh8NVKI5z4YhOzFbk0tRZjuS31tFoH2mYFnZeYm05+4w2Z8ptOoYaxLltdJhvOgPVH2B3uwk+yGUAt7d6b5+PikRxabuW6lpssbYa1xug8HO0bAGTJsXyUo19zCg4/f/gGF3fncTPZLKsfEWmVo2Gq8NJX7RZD5X2llnmXeIZY0KBFdhbe679sCLwSA+mB7wf1KRhbT1zoykg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dxCCJDFIlnfu1ie0kTBSfHr4JZQgZ2HEEvmCizB2PXI=;
 b=dTcnaWULBmntr5HDMKaSSOoBEc4UF55VJ2vw0i4lzmdL6ozWL1FgB+HEeurXd/Idt/nWXGVnPmntQtB4o+FugDFWjLFXdB1HPjuMM7nqqg/rj7176Zi50AAkVCFIBva1s4s371DQs28v4uDS9LVp7evRwLv/+UiuLuAZtJwhIWiaIJAA49/IjTiWCAc/AdOnr+Zgt9bjr615FpYxKZqxcCUMMTgIZlrm2DY3vOu1JtmnNF1xZ3BmWihHoAIuEoxMX0Go3amTTpzq3YLqDxhFojPa58KMkWOv8FCskgHwHQlROr4RBdGDO0jr507xHwny2KZORZIq06uK30x2xs7eNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dxCCJDFIlnfu1ie0kTBSfHr4JZQgZ2HEEvmCizB2PXI=;
 b=Nzh8B1Qbhy6A+aavvfyrEGPmTFS6UXsVcWpdvP4fg3Xr/IpIlXC7g+3b2AjjrLZmY0L4p6HbKWy9GraHMh477cE6H1se5sSMG1KDa6JBT6bFPK/ibG5FVM7wuDxH4XMwodP8wNAD78OY//WsrhCH7+psjCGXkPzpBZ53rFu910I=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <13e96e00-85e8-03d9-86e6-7692ca0f8bdd@citrix.com>
Date: Fri, 28 Apr 2023 17:29:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 6/7] tools: Use new xc function for some
 xc_domain_getinfo() calls
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Juergen Gross <jgross@suse.com>
References: <20230428104124.1044-1-alejandro.vallejo@cloud.com>
 <20230428104124.1044-7-alejandro.vallejo@cloud.com>
In-Reply-To: <20230428104124.1044-7-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0547.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:319::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BN8PR03MB4996:EE_
X-MS-Office365-Filtering-Correlation-Id: 30e6eadb-915d-4ca9-e032-08db4805b39a
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fP2ykMGNx7+KtNtWkU2mtwukad0m1R6cAAH5A1sQzxBEGbIZcneLITWY6M/9aGvjTKY5IGEpKwl3D0WWv2R1olmK6BxdEhQL5edxAqKbkbvcJ2JesWDJOkD/2NmCsS8hZIZdbetN7eBGP3MNHjXhEUXdwcIV0Me8jaCh2RyaPJ0wK1rDGnJSrap54Dllp9D3P23AdbFTMDCOg0FtlfpSpEs5kEDq8vdBMfZJ1n7h1Ej4FL0nUZERTjyUFe9qNEOx6ei4e6OkInZwWos+3pBsJkPcF/YlcDTgs0eHl/8CkoLtkDn15fKidv1Lps5Y2McSPAklMbHVzWV9NdksvCo7cVY3PHoeg219bojhMCtTOUVJwTOK3Ubqbf63p65kAD5y3DA9xD3Nnm1rLYy1vXYXUEhkqKIp6zaCKj4YDIHZDbsUTQpUVvxJdtLyQqa9gdwISpoNu5SZr+cnXUks6FLgvyQGUo+e+9Q+D5K4Gmz+MbCKE9aMMNjQRV6NYSdLi2n9XWCOJGNlzZTL1mZslY54y/JE+QxYGq1koEhzHOViDHw+ptoX7FsKdRVcZzOxjswkOwNC770vb1IpB/q0McT2VBcpL4PVTyz1P29RY2L1YEDV586M0+iR8sIKZS96BdGZ0rx15dT/zhHzgz8LM++01g==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(366004)(136003)(346002)(376002)(39860400002)(451199021)(86362001)(53546011)(26005)(41300700001)(6512007)(6506007)(2906002)(5660300002)(8936002)(36756003)(8676002)(31696002)(83380400001)(38100700002)(186003)(478600001)(54906003)(6486002)(110136005)(6666004)(82960400001)(31686004)(316002)(66556008)(66476007)(4326008)(66946007)(2616005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TWNZSlJva3VocXVmV1pMVjIzNFBSZ0RBM3R3RDBDSFVQNGd2YXJTMm1VSDdp?=
 =?utf-8?B?Q1hGVEFWQmF3SmhRa0lMSTZ2RC83ZFMwbG8rRDEwV0V4WHU2MlpRbWFUYnIy?=
 =?utf-8?B?SUFNMHp4RkpQUlVXd2c0UlUvNVBQZEJ0bTJlQXNOSW1tYmRxTlkxa0s4RWhR?=
 =?utf-8?B?dU91Q1RIb01WcmprcmxGUDYvdUZBeGVROXFaOGxyRHp6ZkdqYUpSU211djRl?=
 =?utf-8?B?dXFSYzZIRzlKYmpGd0FIYWd5L3N0R2V5VnR6ajY5YjV6V1EyV0FJdC9zRit2?=
 =?utf-8?B?aHJnL1l1YlJZek9PeFdiZ2tudSt4bko5MHhiQUxaK2FFSi9pMG93Rk9oZnVv?=
 =?utf-8?B?dDFvRUdkU21aTFJzSjdSY1FtaitDeXpZbkp3akFIbG5JdlhjTklqdEhsYWFJ?=
 =?utf-8?B?WDZMRnRBM3ZiNE9yWU02aUYvMU9JOVpRWFNZK2ZudU81OGp4enVzU0toL2to?=
 =?utf-8?B?TmtucFZWOFBZVkQ5RUc5eEJSTUwycGJ1b3BmaWFsMnhZZkRoU044VmExYmcz?=
 =?utf-8?B?c1pNVFlLZTVHMHB6N0lLN0RYRnlSaTFZYlRZdFhtUWNaVjNXNVlGT2NZWEdM?=
 =?utf-8?B?OHY0ZXlSdStQL2hTMmttOEdZT01uMjVGS0VENFlxbSt0U2lldSswa2syR0Q2?=
 =?utf-8?B?d0haRnorMS9NUmc4ZkdMejN3aUdyeERUaFd1N2FzNXRjbHJXUVVERHNDTGNQ?=
 =?utf-8?B?Vll6NmJka0YrVnU2NXYwN0VSTkNLWm9rTTE0Q01RWlJYRHVUVEh3bHY1TEhO?=
 =?utf-8?B?MzcvOXRYTlIvUS9pdTlCbG1nR1RBN1BFUnR4dnNiNml0TmJTcEtGLzEwejBz?=
 =?utf-8?B?SGVVT1RRQ1FLUWhEdFdOOVp1Mmp0SnpxaCtUTzJqZ3RzN3B1bC9XK0IvUkpG?=
 =?utf-8?B?dWtUUWhRMEkrZHFUTTlSZ2FnZXJxMXVsY2pEeWxkaWM3L1p2SGE3Nm5yQjVT?=
 =?utf-8?B?SXFaZVdRNXdaY1FrTnhyVloxVWN2dU5ISTJRNlNEaVcxWGd5bENvK1MyLzV6?=
 =?utf-8?B?REhvajRjeTBKM0grQk5hcmlkNnFmRHV6MXZNNW1XakNMZS9sdyszL1hlcUFU?=
 =?utf-8?B?LzdPSGgwakJiNGdLYVU3cU1sSUlodU84aGNhWDdoMUZQS1YrSUFmQlNlK29J?=
 =?utf-8?B?eHNlYWJ3ZVNyQVFxRU50bzUxY2pIdE9lYUQxSE0wSWxzWEJON25sSVl2R1hs?=
 =?utf-8?B?NzI3MmpxdFpsRGxhVmd5QnNPaWc3aEUyMmlndDZpVlEwNDluc0hVV2NobEJo?=
 =?utf-8?B?L1FwbVhoeUZ1NlF3RXhja1hIbFZBZmh2d3g0QUFYYUd6S1p2TDA4WkE1NTAy?=
 =?utf-8?B?M1pQN3B4Z2lNRW12TlJINWtLTG1RbVhqT05NTTJnTlMzNTN5RXFreTBYM2FS?=
 =?utf-8?B?VkkwTjR5aVkvWXlWSk1JVExHdWFhWUxYQVd1OXoxdDk4dnZtYmNYOVVsTWtC?=
 =?utf-8?B?YWdRMC9IdU1mZFZPUnYwUDduRStDMGR4b09WR2k5UElLaXpFMG9qaHpUd25J?=
 =?utf-8?B?Tkh2dzFGVXF1OVJRVEV2QVFiOUVWUlRzdkhpQ0h4WlVtSlBZak41cGFnaThT?=
 =?utf-8?B?SWVqNEk3MWpGSFduV3pjZmFoOHlmeFhPNk1QdE9CVGhSWXZGYjNlSWdsc3k3?=
 =?utf-8?B?VVJ4c2JPTmhMT2V3TWpqVHBIZklRWjBYbHNaWXB1M2xKZWd3MDVnMjRvejdW?=
 =?utf-8?B?OVpRbkhMcURDSDZMUHBGTUVFNm9JNkExVUxoUXB3NEpvMklwd0YwV1V3NHhz?=
 =?utf-8?B?b3RPVWlpV2FncVMvK1BaaG0xSzFMWkR3VitZYXRPZjYwd3Nnd2VsUmpzMlQy?=
 =?utf-8?B?aGZiejRwYWlDMmpkUThXVC9FZHFSaUJyNGFHU01GblA4eU9DTERucmIxempW?=
 =?utf-8?B?VEpuYzFUVmlmZFV1TFBRelFFcS85OG1xVDFZeVBETmV4bk9vcDNUbkJJSVdN?=
 =?utf-8?B?cjgvTkhHQ0RHQXNySGNsVGhyODhmem9NWUxFTmF6dXMzSDF3MmVXbk40WWlK?=
 =?utf-8?B?SkhNUFhCN1BleWY5bzlmSi90WGxySDlYUEZUOGtJVmVyMEpsR0FtRERzR1E4?=
 =?utf-8?B?K3JzVHpNYUR1cDJkdXpSWU1rSnFKbk11M3NtUW51aHFRUGh5cEF1NkNZZ0p4?=
 =?utf-8?B?aG1yVWFSZUlBZ09Sb1dqRzlzL0tkakJyMk8zWG9ScUVFK2wwOWJaSnRiODli?=
 =?utf-8?B?N1E9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	4aW6XcyVnGi+Eb7ja/rf8lhD5v3HZzMq7jB7QiqhOP3t3g/JFFs32y1xs+Rn20DfLGZx7bWmXGn1DRqaTubrH1REKU/STCftJITokiE410kQAcGyM3epAmldB7uM+v+yqlnE3ydfleMhjtU6k3FCJGAt4YDFzvJLHuOFfquDL/uuhVNNfcSQ6QD9l16b5mJGsqcdjQGcHVeKJA5n3BaTPaUJSeDqoPALMC7oq8Z7zGhWWokq4IBJ9JF0EeBwnFnmlCLyLwt7BFXtnvwrXqFlazF5czhaD6b4/uMnb30e8ispuSdCKyRZttQHwh5EIH7rOdp5ba9Jn9Y2UNssSFn8royq5DRPCucGyFZDnLnbwznGAobsN3xIKHbQG+WZOBFT+tWZVhgyvCRe9d3FTnoI6WN+oA/gZy8JNh/WOBTHBmuaXrw4azAFrl6uLtzt62edlaUC5J1EWYbSteNTT6W5TlsoOxoMnUeJYn6INB1Y3ePMcrVgEWsW+3aEIuNFuInGOcZYYVDyiGa7njRQXHyCR3fERae6bsUZ/+B9/c2Vt+KDdpP5Of8R2bdJugh/yu1W109wPOyWtXV8QTiSQFScVZLUV6iRcU+weZznZ6ZSX2fHkXTnHwMyf9ASZneqmoUP4tS5EaY4H3ZmYnE9vlKlR+t76Ky4wVAuCW7erfhT8ALGF/AbOUMqs82eAG0wI8hcqTPOcuvWAFK3vg1dgkKkgDIPH1uWKtgyR5vcxtoozQ1Y1kGlYUUF2eUC4fi6rYzZMdZC3PbEVOaX9oYlLsgWhvywwRvtr0l2zYrEwlnc6Bhf8CeMf9irMO1zGvnmqFNKSjP3NYyWK/TDuVkiNc92LB5ms5YjpFXZIJEIf3X+K0Y=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 30e6eadb-915d-4ca9-e032-08db4805b39a
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 16:29:13.2088
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zYJh1OhhQzQphOKwUSFUpJZ8rfn0jKFriisiADIz695bUkI2KjvsEbIH4GmyHBAq0RXH8rUJtPvjeRQh+Jw5AqxHBpck2oBNiDuEtOLnwoE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR03MB4996

On 28/04/2023 11:41 am, Alejandro Vallejo wrote:
> diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
> index 6b11775d4c..533e3c1314 100644
> --- a/tools/libs/ctrl/xc_domain.c
> +++ b/tools/libs/ctrl/xc_domain.c
> @@ -1959,15 +1959,14 @@ int xc_domain_memory_mapping(
>      uint32_t add_mapping)
>  {
>      DECLARE_DOMCTL;
> -    xc_dominfo_t info;
> +    xc_domaininfo_t info;
>      int ret = 0, rc;
>      unsigned long done = 0, nr, max_batch_sz;
>  
> -    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
> -         info.domid != domid )
> +    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
>      {
> -        PERROR("Could not get info for domain");
> -        return -EINVAL;
> +        PERROR("Could not get info for dom%u", domid);
> +        return -1;

I think this needs to be "return -errno" to have the same semantics as
before.
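
As a sketch of that pattern (with a hypothetical stub standing in for xc_domain_getinfo_single(), assumed to return a negative value and set errno on failure, as libxc hypercall wrappers conventionally do):

```c
#include <errno.h>

/* Hypothetical stand-in for xc_domain_getinfo_single(): assume it
 * returns a negative value and sets errno on failure. */
static int getinfo_stub(unsigned int domid)
{
    if ( domid >= 100 )
    {
        errno = ESRCH;
        return -1;
    }
    return 0;
}

/* Returning -errno rather than a bare -1 preserves the old style of
 * handing a negative errno value (e.g. -EINVAL) back to the caller. */
static int mapping_check(unsigned int domid)
{
    if ( getinfo_stub(domid) < 0 )
        return -errno;
    return 0;
}
```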

> diff --git a/tools/libs/ctrl/xc_private.c b/tools/libs/ctrl/xc_private.c
> index 2f99a7d2cf..8dcebad401 100644
> --- a/tools/libs/ctrl/xc_private.c
> +++ b/tools/libs/ctrl/xc_private.c
> @@ -441,11 +441,10 @@ int xc_machphys_mfn_list(xc_interface *xch,
>  
>  long xc_get_tot_pages(xc_interface *xch, uint32_t domid)
>  {
> -    xc_dominfo_t info;
> -    if ( (xc_domain_getinfo(xch, domid, 1, &info) != 1) ||
> -         (info.domid != domid) )
> +    xc_domaininfo_t info;
> +    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
>          return -1;
> -    return info.nr_pages;
> +    return info.tot_pages;

As we're modifying every line in the function, take the opportunity to
add two extra blank lines to improve readability.
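
Concretely, the reworked function might read as below (a sketch only; the stand-in types and stub are invented so the fragment compiles, the real definitions live in the libxc headers):

```c
#include <stdint.h>

/* Minimal stand-in types so the sketch compiles. */
typedef struct { uint64_t tot_pages; } xc_domaininfo_t;
typedef void xc_interface;

/* Stub for illustration only. */
static int xc_domain_getinfo_single(xc_interface *xch, uint32_t domid,
                                    xc_domaininfo_t *info)
{
    (void)xch;
    if ( domid == 0xFFFFFFFFu )
        return -1;
    info->tot_pages = 42;
    return 0;
}

/* The same body with the two extra blank lines, separating the
 * declaration, the error check, and the return. */
long xc_get_tot_pages(xc_interface *xch, uint32_t domid)
{
    xc_domaininfo_t info;

    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
        return -1;

    return info.tot_pages;
}
```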

> diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
> index bd16a87e48..1519b5d556 100644
> --- a/tools/libs/guest/xg_cpuid_x86.c
> +++ b/tools/libs/guest/xg_cpuid_x86.c
> @@ -281,7 +281,8 @@ static int xc_cpuid_xend_policy(
>      xc_interface *xch, uint32_t domid, const struct xc_xend_cpuid *xend)
>  {
>      int rc;
> -    xc_dominfo_t di;
> +    bool hvm;
> +    xc_domaininfo_t di;
>      unsigned int nr_leaves, nr_msrs;
>      uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
>      /*
> @@ -291,13 +292,12 @@ static int xc_cpuid_xend_policy(
>      xen_cpuid_leaf_t *host = NULL, *def = NULL, *cur = NULL;
>      unsigned int nr_host, nr_def, nr_cur;
>  
> -    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
> -         di.domid != domid )
> +    if ( (rc = xc_domain_getinfo_single(xch, domid, &di)) < 0 )
>      {
> -        ERROR("Failed to obtain d%d info", domid);
> -        rc = -ESRCH;
> +        PERROR("Failed to obtain d%d info", domid);
>          goto fail;

Sorry, I gave you bad advice last time around.  We need rc = -errno to
maintain the behaviour here, and that pattern is used outside this context too.

Same in related hunks too.
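
For the goto-fail shape this hunk uses, the pattern might look like the following sketch (stub and names hypothetical; the point is that errno is captured into rc immediately after the failing call, before anything can clobber it):

```c
#include <errno.h>

/* Hypothetical stand-in: assume the real call sets errno and returns
 * a negative value on failure. */
static int getinfo_sketch(unsigned int domid)
{
    if ( domid >= 100 )
    {
        errno = ESRCH;
        return -1;
    }
    return 0;
}

/* The rc = -errno pattern with a shared fail path. */
static int xend_policy_sketch(unsigned int domid)
{
    int rc;

    if ( (rc = getinfo_sketch(domid)) < 0 )
    {
        rc = -errno;
        goto fail;
    }

    rc = 0;

 fail:
    return rc;
}
```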

> @@ -330,12 +330,12 @@ static int xc_cpuid_xend_policy(
>      /* Get the domain type's default policy. */
>      nr_msrs = 0;
>      nr_def = nr_leaves;
> -    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
> +    rc = get_system_cpu_policy(xch, hvm ? XEN_SYSCTL_cpu_policy_hvm_default
>                                             : XEN_SYSCTL_cpu_policy_pv_default,

We like to keep the ? and : vertically aligned if possible.

> diff --git a/tools/libs/guest/xg_dom_boot.c b/tools/libs/guest/xg_dom_boot.c
> index 263a3f4c85..1dea534bba 100644
> --- a/tools/libs/guest/xg_dom_boot.c
> +++ b/tools/libs/guest/xg_dom_boot.c
> @@ -174,19 +174,11 @@ int xc_dom_boot_image(struct xc_dom_image *dom)
>          return rc;
>  
>      /* collect some info */
> -    rc = xc_domain_getinfo(dom->xch, dom->guest_domid, 1, &info);
> -    if ( rc < 0 )
> -    {
> -        xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
> -                     "%s: getdomaininfo failed (rc=%d)", __FUNCTION__, rc);
> -        return rc;
> -    }
> -    if ( rc == 0 || info.domid != dom->guest_domid )
> +    if ( xc_domain_getinfo_single(dom->xch, dom->guest_domid, &info) < 0 )
>      {
>          xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
> -                     "%s: Huh? No domains found (nr_domains=%d) "
> -                     "or domid mismatch (%d != %d)", __FUNCTION__,
> -                     rc, info.domid, dom->guest_domid);
> +                     "%s: getdomaininfo failed (errno=%d)",
> +                     __FUNCTION__, rc, errno);

Ah yes, this is where your stray hunk from patch 7 wants to live.

> diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
> index 7314a24cf9..6767c9f5cc 100644
> --- a/tools/libs/guest/xg_sr_restore.c
> +++ b/tools/libs/guest/xg_sr_restore.c
> @@ -887,20 +888,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>          break;
>      }
>  
> -    if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )
> +    if ( xc_domain_getinfo_single(xch, dom, &ctx.dominfo) < 0 )
>      {
> -        PERROR("Failed to get domain info");
> -        return -1;
> -    }
> -
> -    if ( ctx.dominfo.domid != dom )
> -    {
> -        ERROR("Domain %u does not exist", dom);
> +        PERROR("Failed to get info for dom%u", dom);

This is somewhat ambiguous, because "info" could be anything.  "dominfo"
would be better.

> diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
> index 9853d8d846..b0b30b4bc2 100644
> --- a/tools/libs/guest/xg_sr_save.c
> +++ b/tools/libs/guest/xg_sr_save.c
> @@ -336,19 +336,17 @@ static int suspend_domain(struct xc_sr_context *ctx)
>      }
>  
>      /* Refresh domain information. */
> -    if ( (xc_domain_getinfo(xch, ctx->domid, 1, &ctx->dominfo) != 1) ||
> -         (ctx->dominfo.domid != ctx->domid) )
> +    if ( xc_domain_getinfo_single(xch, ctx->domid, &ctx->dominfo) < 0 )
>      {
>          PERROR("Unable to refresh domain information");
>          return -1;
>      }
>  
>      /* Confirm the domain has actually been paused. */
> -    if ( !ctx->dominfo.shutdown ||
> -         (ctx->dominfo.shutdown_reason != SHUTDOWN_suspend) )
> +    if ( !dominfo_shutdown_with(&ctx->dominfo, SHUTDOWN_suspend) )
>      {
>          ERROR("Domain has not been suspended: shutdown %d, reason %d",
> -              ctx->dominfo.shutdown, ctx->dominfo.shutdown_reason);
> +              ctx->dominfo.flags & XEN_DOMINF_shutdown, dominfo_shutdown_reason(&ctx->dominfo));

This likely wants wrapping onto the next line.

> diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
> index bd5d823581..94fef37401 100644
> --- a/tools/libs/light/libxl_dom.c
> +++ b/tools/libs/light/libxl_dom.c
> @@ -34,7 +34,7 @@ libxl_domain_type libxl__domain_type(libxl__gc *gc, uint32_t domid)
>  
>      ret = xc_domain_getinfo_single(ctx->xch, domid, &info);
>      if (ret < 0) {
> -        LOG(ERROR, "unable to get domain type for domid=%"PRIu32, domid);
> +        LOGED(ERROR, domid, "unable to get dominfo");
>          return LIBXL_DOMAIN_TYPE_INVALID;
>      }

Ah, this is the answer to my review on patch 5.

Quite a few of the following hunks look like they want to be in patch 5 too.

Everything else looks good, as best as I can tell.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:27:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 17:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527427.819989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psRsB-00027k-Ng; Fri, 28 Apr 2023 17:27:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527427.819989; Fri, 28 Apr 2023 17:27:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psRsB-00027d-JA; Fri, 28 Apr 2023 17:27:07 +0000
Received: by outflank-mailman (input) for mailman id 527427;
 Fri, 28 Apr 2023 17:27:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psRsA-00027T-Pe; Fri, 28 Apr 2023 17:27:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psRsA-0003hd-Mg; Fri, 28 Apr 2023 17:27:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psRsA-0001sr-B4; Fri, 28 Apr 2023 17:27:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psRsA-00040N-Ac; Fri, 28 Apr 2023 17:27:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zEoH3ROC2+vPjXmIHA0C7J5uCqiqgxVhCfHX5FOY0Pw=; b=FGH3I8yEfAmA1a4jL3W41UmBl1
	StEwc0MikBjQKbaqp+NxJEiBRRLE5jJX8D0vvpqI/o8BDboOi3cdkat/ei7bUOMDGl1lxDy+lg31m
	/wbnGYBui2dFFXnuSwfkcxjcsLV6cypYOzaqI0K8va7bLJPsQNZFedDNV1kAQbXwcJ2c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180470-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180470: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=56e9828380b7425678a080bd3a08e7c741af67ba
X-Osstest-Versions-That:
    ovmf=ecbcff0f4935395f66ecc9e9ac76b804ecdec2e8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 17:27:06 +0000

flight 180470 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180470/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 56e9828380b7425678a080bd3a08e7c741af67ba
baseline version:
 ovmf                 ecbcff0f4935395f66ecc9e9ac76b804ecdec2e8

Last test of basis   180467  2023-04-28 09:42:13 Z    0 days
Testing same since   180470  2023-04-28 12:40:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nickle Wang <nicklew@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ecbcff0f49..56e9828380  56e9828380b7425678a080bd3a08e7c741af67ba -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:56:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 17:56:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527433.820001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSKC-0005Wo-Vr; Fri, 28 Apr 2023 17:56:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527433.820001; Fri, 28 Apr 2023 17:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSKC-0005Wh-Sk; Fri, 28 Apr 2023 17:56:04 +0000
Received: by outflank-mailman (input) for mailman id 527433;
 Fri, 28 Apr 2023 17:56:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psSKB-0005Wb-EK
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 17:56:03 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2061e.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eda7fe4b-e5ed-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 19:55:59 +0200 (CEST)
Received: from MW4PR04CA0200.namprd04.prod.outlook.com (2603:10b6:303:86::25)
 by BL1PR12MB5804.namprd12.prod.outlook.com (2603:10b6:208:394::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22; Fri, 28 Apr
 2023 17:55:52 +0000
Received: from CO1NAM11FT061.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:86:cafe::51) by MW4PR04CA0200.outlook.office365.com
 (2603:10b6:303:86::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24 via Frontend
 Transport; Fri, 28 Apr 2023 17:55:52 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT061.mail.protection.outlook.com (10.13.175.200) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.23 via Frontend Transport; Fri, 28 Apr 2023 17:55:51 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:55:50 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 28 Apr 2023 12:55:49 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eda7fe4b-e5ed-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=V0mg1DeMCcYHqDm55/RmWCoqZ4fbvyrWB99KpbjQ3WhdYO6xfBTlCAlnrd+HLhTHRgZAs1gkGmqKnwsxpc9hnvG+dBboJer4NEbjP/gcUdvSDr7GeifSqp214dcKR6KnFd0bFVT8TKZtA6EM8gRbEHA4HBRet5kOy2Bl9ljvTwvXb39Pkf3RfhHQ+5+dSy9SLoH19RMLfZQr8USt/uTRWnDHTwdy/9m4B5/IUNUgE6EG2FxbOBnoHaqZTiEXGO5F+Q3gSw9DP+xN737lcD9bFWAZo3pfKG8m4q+kj8EMTC1GC3lvqoS2/Koh17KdBqEUvM9m+KwRwA+gWqeJUYx68g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gDZVjVx8ORl4uOYkRL2blaaUXr4kPF8yScK/mrAYFOk=;
 b=Lu5TZIWWACl2jyYO3zgrV1IjnsEcuKXLivstDLtdtBiaXXgF/UE5abyKiYgzzy+SDJFad4P+0uAD/Gg1SmBrTWh0yGRTUBmFbT4/vMHdDTdJxEPz0himQoh6fzbij0T2iil+NjbGSwl1oQe9FO7H4J+ZDFeMnvD/+NkQiWH4i8t5lSGvhxd/bB2dgOaKOpTRWwcx0CBKPqhH/YyY2VzQewklTnppBzbE2Uk8qrzSSzj3fojVAC13M5bkIkgdDG7tcWNgb+qZoVSrBVV5eiExIZ0EVe6HkTXFbXSoJHiBmMhLtXAB0HaQ+5ds1qMDes+Pg6NA1BQrBsBF+3d1C8RVJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gDZVjVx8ORl4uOYkRL2blaaUXr4kPF8yScK/mrAYFOk=;
 b=pUyoTQc/KjRcmLh86un7uU+yS/QQwFnAjA4FuxJN+D/FTM3NqzC6KGW+nIoOPlzf7u1iw0FcQI3wMoSyd0HGeV7xKIUVPhc3HbGvCIu9d94wGKnkH09D15MIRk7cjGf/MXkUTHdxCp/ehmJPX5oFyFOFz8ZRJEKl3XTZtk05PdA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 00/12] Add support for 32-bit physical address
Date: Fri, 28 Apr 2023 18:55:31 +0100
Message-ID: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT061:EE_|BL1PR12MB5804:EE_
X-MS-Office365-Filtering-Correlation-Id: 8eeca503-858c-4be7-31d9-08db4811ce78
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uD9q40pRhG27/lfJ1Sto0RoVaHJaLTrcvZ6gqUBm0fMkwCtNS+/kLq8rT4LYWiCo6nhmqnEKaUJGtZAwXJSADj8HuXFjZgmQTJ0V5e1mTW56EC/l48/+8jcxJaXQYwhITtI3CglhLYt/7vNOvzJRUB6cTDyCpk2NgW1+bhE+prSH1zPg7f6igz9RBNrBpi6eQKBGab0NlGNupZvbmDofnI7jBYeuau8TxNlcjdij7uMqzVOcZ15P3U7+JgG/Z/8c4XtwBiJq0ii8eRvZFE1SwiF2eQhU1QJYl4qUWgnS6qi1ZnGzAnus0TEYoLgoI7SM/0cZXNzlFZ8WH3lhmPAFB7lvTcZ1HvqI+o5pEveJMW1195jmyAAG+rogczS3wGLvvtAYP4nxE8WtYJ5A97qKkzBSAMA2WnpqJLgrXrSlXUBzFKXVczbgCG35TBfKg4B9aI0YaExqt8mW+N1utcqQ440k/56HrWtoqdw/9e7yrZT2iqrGHhz0vffb3l4L/M7THuCHqmzWKUUHA1o3rxhubGag+sq8VmJRDzy537libW/4lWYjjqEbDHcW4/JghOTEqHb//oxHCJExBncdTo/sbQrNXzmwUEAHF4n18bqgnEuZJT/CiS5TjMJEgcqiMPMYR+EaFFzq60Aol0Voaiiq0IDLvoImk7nrH8PLaNQiQcKc1sH8E1s90bRil3M41C6dD0XFNFFLhSwMKsD3VVy+yMB75nQa8+LhueiUh90oJPkEtoGrcsevYvm1MubCZWLnW51SN04mKum5VDOmxIqjhw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(346002)(376002)(136003)(451199021)(36840700001)(46966006)(40470700004)(86362001)(7416002)(41300700001)(26005)(1076003)(47076005)(2906002)(5660300002)(966005)(40480700001)(8936002)(336012)(36756003)(8676002)(426003)(81166007)(40460700003)(103116003)(83380400001)(356005)(54906003)(186003)(478600001)(82740400003)(6666004)(70206006)(36860700001)(82310400005)(6916009)(4326008)(316002)(2616005)(70586007)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 17:55:51.6966
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8eeca503-858c-4be7-31d9-08db4811ce78
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT061.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR12MB5804

Hi All,

Please have a look at https://lists.xenproject.org/archives/html/xen-devel/2022-11/msg01465.html
for the context.

The benefits of using 32-bit physical addresses are as follows:

1. It helps to use Xen on platforms (e.g. the Cortex-R52) which support 32-bit
physical addresses and have no support for the large physical address extension.
On 32-bit MPU systems which support flat-mapping (e.g. the Cortex-R52), it helps
to translate a 32-bit VA into a 32-bit PA.

2. It also enables code optimization when the underlying platform does not
use the large physical address extension.

The following points are to be noted:
1. The device tree always uses uint64_t for addresses and sizes. The caller needs
to translate between uint64_t and uint32_t (when 32-bit physical addressing is used).
2. Currently, we have enabled this option only for Arm_32, as the MMU for Arm_64
uses 48-bit physical addressing.
3. https://lists.xenproject.org/archives/html/xen-devel/2022-12/msg00117.html
has been added to this series.
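As an illustration of point 1 (a minimal sketch, not Xen's actual helper -
the name dt_u64_to_paddr() and the uint32_t paddr_t are assumptions for the
example), the translation with a truncation check could look like:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical 32-bit paddr_t, as with 32-bit physical addressing
 * on Arm_32. Device-tree addresses/sizes always arrive as uint64_t.
 */
typedef uint32_t paddr_t;

/* Return true and store the translated value iff 'val' fits in paddr_t. */
static bool dt_u64_to_paddr(uint64_t val, paddr_t *out)
{
    if ( val != (uint64_t)(paddr_t)val )
        return false; /* the upper 32 bits would be truncated */

    *out = (paddr_t)val;
    return true;
}
```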

Changes from :

v1 - 1. Reordered the patches such that the first three patches fix issues in
the existing codebase. These can be applied independently of the remaining
patches in this series.

2. Dropped translate_dt_address_size() for the address/size translation between
paddr_t and u64 (as parsed from the device tree). Also dropped the check for
truncation (while converting u64 to paddr_t).
Instead, we have now modified device_tree_get_reg() and typecast the return of
dt_read_number() to obtain paddr_t. Also introduced wrappers for
fdt_get_mem_rsv() and dt_device_get_address() for the same purpose. These can be
found in patch 4/11 and patch 6/11.

3. Split "Other adaptations required to support 32bit paddr" into the following
individual patches, one for each adaptation:
  xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to
    SMMU_CBn_TTBR0
  xen/arm: guest_walk: LPAE specific bits should be enclosed within
    "ifndef CONFIG_ARM_PA_32"

4. Introduced "xen/arm: p2m: Enable support for 32bit IPA".

v2 - 1. Dropped patches 1/11, 2/11 and 3/11 from v2 as they have already been
committed (except 2/11 - "[XEN v5] xen/arm: Use the correct format specifier",
which is waiting to be committed).

2. Introduced a new patch "xen/drivers: ns16550: Use paddr_t for io_base/io_size".

v3 - 1. Combined the patches from https://lists.xenproject.org/archives/html/xen-devel/2023-02/msg00656.html into this series.

v4 - 1. Dropped "xen/drivers: ns16550: Use paddr_t for io_base/io_size" from the patch series.

2. Introduced "xen/arm: domain_build: Check if the address fits the range of physical address".

3. "xen/arm: Use the correct format specifier" has been committed in v4.

v5 - 1. Based on the comments on "[XEN v5 08/10] xen/arm: domain_build: Check if the address fits the range of physical address",
the patch has been modified and split into the following:

a.  xen: dt: Replace u64 with uint64_t as the callback function parameters
    for dt_for_each_range()
b.  xen/arm: pci: Use 'uint64_t' as the datatype for the function
    parameters.
c.  xen/arm: domain_build: Check if the address fits the range of physical
    address


Ayan Kumar Halder (12):
  xen/arm: domain_build: Track unallocated pages using the frame number
  xen/arm: Typecast the DT values into paddr_t
  xen/arm: Introduce a wrapper for dt_device_get_address() to handle
    paddr_t
  xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to
    SMMU_CBn_TTBR0
  xen/arm: Introduce choice to enable 64/32 bit physical addressing
  xen/arm: guest_walk: LPAE specific bits should be enclosed within
    "ifndef CONFIG_PHYS_ADDR_T_32"
  xen/arm: Restrict zeroeth_table_offset for ARM_64
  xen: dt: Replace u64 with uint64_t as the callback function parameters
    for dt_for_each_range()
  xen/arm: pci: Use 'uint64_t' as the datatype for the function
    parameters.
  xen/arm: domain_build: Check if the address fits the range of physical
    address
  xen/arm: p2m: Use the pa_range_info table to support ARM_32 and ARM_64
  xen/arm: p2m: Enable support for 32bit IPA for ARM_32

 xen/arch/Kconfig                           |  3 ++
 xen/arch/arm/Kconfig                       | 32 +++++++++++-
 xen/arch/arm/bootfdt.c                     | 46 +++++++++++++----
 xen/arch/arm/domain_build.c                | 57 +++++++++++++++-------
 xen/arch/arm/gic-v2.c                      | 10 ++--
 xen/arch/arm/gic-v3-its.c                  |  4 +-
 xen/arch/arm/gic-v3.c                      | 10 ++--
 xen/arch/arm/guest_walk.c                  |  2 +
 xen/arch/arm/include/asm/lpae.h            |  4 ++
 xen/arch/arm/include/asm/p2m.h             |  8 +--
 xen/arch/arm/include/asm/page-bits.h       |  6 +--
 xen/arch/arm/include/asm/setup.h           |  6 +--
 xen/arch/arm/include/asm/types.h           |  6 +++
 xen/arch/arm/mm.c                          | 12 ++---
 xen/arch/arm/p2m.c                         | 32 ++++++------
 xen/arch/arm/pci/pci-host-common.c         |  8 +--
 xen/arch/arm/platforms/brcm-raspberry-pi.c |  2 +-
 xen/arch/arm/platforms/brcm.c              |  6 +--
 xen/arch/arm/platforms/exynos5.c           | 32 ++++++------
 xen/arch/arm/platforms/sunxi.c             |  2 +-
 xen/arch/arm/platforms/xgene-storm.c       |  2 +-
 xen/arch/arm/setup.c                       | 14 +++---
 xen/arch/arm/smpboot.c                     |  2 +-
 xen/common/device_tree.c                   | 43 +++++++++++++++-
 xen/drivers/char/cadence-uart.c            |  4 +-
 xen/drivers/char/exynos4210-uart.c         |  4 +-
 xen/drivers/char/imx-lpuart.c              |  4 +-
 xen/drivers/char/meson-uart.c              |  4 +-
 xen/drivers/char/mvebu-uart.c              |  4 +-
 xen/drivers/char/omap-uart.c               |  4 +-
 xen/drivers/char/pl011.c                   |  6 +--
 xen/drivers/char/scif-uart.c               |  4 +-
 xen/drivers/passthrough/arm/ipmmu-vmsa.c   |  8 +--
 xen/drivers/passthrough/arm/smmu-v3.c      |  2 +-
 xen/drivers/passthrough/arm/smmu.c         | 23 ++++-----
 xen/include/xen/device_tree.h              | 42 +++++++++++++++-
 xen/include/xen/libfdt/libfdt-xen.h        | 55 +++++++++++++++++++++
 37 files changed, 367 insertions(+), 146 deletions(-)
 create mode 100644 xen/include/xen/libfdt/libfdt-xen.h

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:57:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 17:57:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527436.820011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSLS-000630-9l; Fri, 28 Apr 2023 17:57:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527436.820011; Fri, 28 Apr 2023 17:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSLS-00062t-6a; Fri, 28 Apr 2023 17:57:22 +0000
Received: by outflank-mailman (input) for mailman id 527436;
 Fri, 28 Apr 2023 17:57:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psSLR-0005xG-IF
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 17:57:21 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2060e.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::60e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1dd676a4-e5ee-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 19:57:19 +0200 (CEST)
Received: from BN9PR03CA0781.namprd03.prod.outlook.com (2603:10b6:408:13f::6)
 by PH0PR12MB5450.namprd12.prod.outlook.com (2603:10b6:510:e8::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24; Fri, 28 Apr
 2023 17:57:14 +0000
Received: from BL02EPF000145B8.namprd05.prod.outlook.com
 (2603:10b6:408:13f:cafe::a2) by BN9PR03CA0781.outlook.office365.com
 (2603:10b6:408:13f::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24 via Frontend
 Transport; Fri, 28 Apr 2023 17:57:13 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BL02EPF000145B8.mail.protection.outlook.com (10.167.241.208) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.15 via Frontend Transport; Fri, 28 Apr 2023 17:57:12 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:57:12 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:57:12 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 28 Apr 2023 12:57:10 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1dd676a4-e5ee-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 01/12] xen/arm: domain_build: Track unallocated pages using the frame number
Date: Fri, 28 Apr 2023 18:55:32 +0100
Message-ID: <20230428175543.11902-2-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000145B8:EE_|PH0PR12MB5450:EE_
X-MS-Office365-Filtering-Correlation-Id: fa3189f5-981d-4e8f-e130-08db4811feb5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 17:57:12.7564
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fa3189f5-981d-4e8f-e130-08db4811feb5
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000145B8.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB5450

The rangeset_{xxx}_range() functions are invoked with 'start' and 'size' as
arguments, which are either 'uint64_t' or 'paddr_t'. However, the functions
accept 'unsigned long' for 'start' and 'size'. 'unsigned long' is 32 bits on
Arm32. Thus, there is an implicit downcast from 'uint64_t'/'paddr_t' to
'unsigned long' when invoking rangeset_{xxx}_range().

So, it may seem that there is a possibility of data loss due to truncation.

In reality, 'start' and 'size' are always page aligned, and Arm32 currently
supports 40 bits as the width of a physical address.
As the addresses are page aligned, the last 12 bits contain zeroes.
Thus, we can instead pass the page frame number, which needs only 28 bits
(40 - 12) on Arm32 and can be represented using 'unsigned long'.

On Arm64, this change will not induce any adverse side effects, as the maximum
supported width of a physical address is 48 bits. Thus, the width of a 'gfn'
(i.e. 48 - 12 = 36 bits) can be represented using 'unsigned long' (which is
64 bits wide).
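The round trip described above can be sketched as follows (simplified
stand-ins for Xen's PFN_DOWN() and pfn_to_paddr() macros; PAGE_SHIFT = 12
as in Xen):

```c
#include <stdint.h>

#define PAGE_SHIFT 12

/* Wide enough to hold a 40-bit Arm32 physical address in this sketch. */
typedef uint64_t paddr_t;

/* Stand-in for PFN_DOWN(): physical address -> page frame number. */
static unsigned long pfn_down(paddr_t pa)
{
    return (unsigned long)(pa >> PAGE_SHIFT);
}

/* Stand-in for pfn_to_paddr(): page frame number -> physical address. */
static paddr_t frame_to_paddr(unsigned long pfn)
{
    return (paddr_t)pfn << PAGE_SHIFT;
}
```

A 40-bit, page-aligned address shifts down to a 28-bit frame number, so no
information is lost even when 'unsigned long' is only 32 bits wide.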

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
Changes from -

v3 - 1. Extracted the patch from https://lists.xenproject.org/archives/html/xen-devel/2023-02/msg00657.html
and added it to this series.
2. Modified add_ext_regions() to accept a frame number instead of a physical
address.

v4 - 1. Reworded the commit message to use Arm32/Arm64
(32-bit/64-bit Arm architecture).
2. Replaced pfn with gfn to denote guest frame number in add_ext_regions().
3. Use pfn_to_paddr() to return a physical address from the guest frame number.

v5 - 1. Updated the commit message. Added R-b and A-b.

 xen/arch/arm/domain_build.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f80fdd1af2..494611a3e5 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1500,10 +1500,13 @@ static int __init make_resv_memory_node(const struct domain *d,
     return res;
 }
 
-static int __init add_ext_regions(unsigned long s, unsigned long e, void *data)
+static int __init add_ext_regions(unsigned long s_gfn, unsigned long e_gfn,
+                                  void *data)
 {
     struct meminfo *ext_regions = data;
     paddr_t start, size;
+    paddr_t s = pfn_to_paddr(s_gfn);
+    paddr_t e = pfn_to_paddr(e_gfn);
 
     if ( ext_regions->nr_banks >= ARRAY_SIZE(ext_regions->bank) )
         return 0;
@@ -1566,7 +1569,8 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
     {
         start = bootinfo.mem.bank[i].start;
         end = bootinfo.mem.bank[i].start + bootinfo.mem.bank[i].size;
-        res = rangeset_add_range(unalloc_mem, start, end - 1);
+        res = rangeset_add_range(unalloc_mem, PFN_DOWN(start),
+                                 PFN_DOWN(end - 1));
         if ( res )
         {
             printk(XENLOG_ERR "Failed to add: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1580,7 +1584,8 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
     {
         start = assign_mem->bank[i].start;
         end = assign_mem->bank[i].start + assign_mem->bank[i].size;
-        res = rangeset_remove_range(unalloc_mem, start, end - 1);
+        res = rangeset_remove_range(unalloc_mem, PFN_DOWN(start),
+                                    PFN_DOWN(end - 1));
         if ( res )
         {
             printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1595,7 +1600,8 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
         start = bootinfo.reserved_mem.bank[i].start;
         end = bootinfo.reserved_mem.bank[i].start +
             bootinfo.reserved_mem.bank[i].size;
-        res = rangeset_remove_range(unalloc_mem, start, end - 1);
+        res = rangeset_remove_range(unalloc_mem, PFN_DOWN(start),
+                                    PFN_DOWN(end - 1));
         if ( res )
         {
             printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1607,7 +1613,7 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
     /* Remove grant table region */
     start = kinfo->gnttab_start;
     end = kinfo->gnttab_start + kinfo->gnttab_size;
-    res = rangeset_remove_range(unalloc_mem, start, end - 1);
+    res = rangeset_remove_range(unalloc_mem, PFN_DOWN(start), PFN_DOWN(end - 1));
     if ( res )
     {
         printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1617,7 +1623,7 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
 
     start = 0;
     end = (1ULL << p2m_ipa_bits) - 1;
-    res = rangeset_report_ranges(unalloc_mem, start, end,
+    res = rangeset_report_ranges(unalloc_mem, PFN_DOWN(start), PFN_DOWN(end),
                                  add_ext_regions, ext_regions);
     if ( res )
         ext_regions->nr_banks = 0;
@@ -1639,7 +1645,7 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
 
     start = addr & PAGE_MASK;
     end = PAGE_ALIGN(addr + len);
-    res = rangeset_remove_range(mem_holes, start, end - 1);
+    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end - 1));
     if ( res )
     {
         printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1677,7 +1683,7 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
     /* Start with maximum possible addressable physical memory range */
     start = 0;
     end = (1ULL << p2m_ipa_bits) - 1;
-    res = rangeset_add_range(mem_holes, start, end);
+    res = rangeset_add_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end));
     if ( res )
     {
         printk(XENLOG_ERR "Failed to add: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1708,7 +1714,8 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
 
             start = addr & PAGE_MASK;
             end = PAGE_ALIGN(addr + size);
-            res = rangeset_remove_range(mem_holes, start, end - 1);
+            res = rangeset_remove_range(mem_holes, PFN_DOWN(start),
+                                        PFN_DOWN(end - 1));
             if ( res )
             {
                 printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1735,7 +1742,7 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
 
     start = 0;
     end = (1ULL << p2m_ipa_bits) - 1;
-    res = rangeset_report_ranges(mem_holes, start, end,
+    res = rangeset_report_ranges(mem_holes, PFN_DOWN(start), PFN_DOWN(end),
                                  add_ext_regions,  ext_regions);
     if ( res )
         ext_regions->nr_banks = 0;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:58:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 17:58:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527443.820036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSMb-000708-9T; Fri, 28 Apr 2023 17:58:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527443.820036; Fri, 28 Apr 2023 17:58:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSMb-0006yf-59; Fri, 28 Apr 2023 17:58:33 +0000
Received: by outflank-mailman (input) for mailman id 527443;
 Fri, 28 Apr 2023 17:58:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psSMZ-0006vK-08
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 17:58:31 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20626.outbound.protection.outlook.com
 [2a01:111:f400:7eab::626])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 47697246-e5ee-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 19:58:29 +0200 (CEST)
Received: from DM6PR03CA0065.namprd03.prod.outlook.com (2603:10b6:5:100::42)
 by SJ0PR12MB7067.namprd12.prod.outlook.com (2603:10b6:a03:4ae::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.21; Fri, 28 Apr
 2023 17:58:25 +0000
Received: from DM6NAM11FT064.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:100:cafe::9) by DM6PR03CA0065.outlook.office365.com
 (2603:10b6:5:100::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24 via Frontend
 Transport; Fri, 28 Apr 2023 17:58:25 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT064.mail.protection.outlook.com (10.13.172.234) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.25 via Frontend Transport; Fri, 28 Apr 2023 17:58:25 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:58:24 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 10:58:24 -0700
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 28 Apr 2023 12:58:23 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47697246-e5ee-11ed-b224-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 04/12] xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to SMMU_CBn_TTBR0
Date: Fri, 28 Apr 2023 18:55:35 +0100
Message-ID: <20230428175543.11902-5-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT064:EE_|SJ0PR12MB7067:EE_
X-MS-Office365-Filtering-Correlation-Id: ec4cd5b0-aef2-4b62-08a1-08db481229e7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 17:58:25.1894
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ec4cd5b0-aef2-4b62-08a1-08db481229e7
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT064.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR12MB7067

Refer ARM IHI 0062D.c ID070116 (SMMU 2.0 spec), 17-360, 17.3.9,
SMMU_CBn_TTBR0 is a 64 bit register. Thus, one can use
writeq_relaxed_non_atomic() to write to it instead of invoking
writel_relaxed() twice for lower half and upper half of the register.

This also helps us as p2maddr is 'paddr_t' (which may become u32 in the
future). Thus, one can assign p2maddr to a 64-bit variable and do the bit
manipulations on it to generate the value for SMMU_CBn_TTBR0.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes from -

v1 - 1. Extracted the patch from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
2. Used writeq_relaxed_non_atomic() to write the u64 register in a
non-atomic fashion.

v2 - 1. Added R-b.

v3 - 1. No changes.

v4 - 1. Reordered the R-b. No further changes.
(This patch can be committed independent of the series).

v5 - Used 'uint64_t' instead of u64. As the change looked trivial to me, I
retained the R-b.

 xen/drivers/passthrough/arm/smmu.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 79281075ba..fb8bef5f69 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -499,8 +499,7 @@ enum arm_smmu_s2cr_privcfg {
 #define ARM_SMMU_CB_SCTLR		0x0
 #define ARM_SMMU_CB_RESUME		0x8
 #define ARM_SMMU_CB_TTBCR2		0x10
-#define ARM_SMMU_CB_TTBR0_LO		0x20
-#define ARM_SMMU_CB_TTBR0_HI		0x24
+#define ARM_SMMU_CB_TTBR0		0x20
 #define ARM_SMMU_CB_TTBCR		0x30
 #define ARM_SMMU_CB_S1_MAIR0		0x38
 #define ARM_SMMU_CB_FSR			0x58
@@ -1083,6 +1082,7 @@ static void arm_smmu_flush_pgtable(struct arm_smmu_device *smmu, void *addr,
 static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
 {
 	u32 reg;
+	uint64_t reg64;
 	bool stage1;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
@@ -1177,12 +1177,13 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
 	dev_notice(smmu->dev, "d%u: p2maddr 0x%"PRIpaddr"\n",
 		   smmu_domain->cfg.domain->domain_id, p2maddr);
 
-	reg = (p2maddr & ((1ULL << 32) - 1));
-	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_LO);
-	reg = (p2maddr >> 32);
+	reg64 = p2maddr;
+
 	if (stage1)
-		reg |= ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT;
-	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_HI);
+		reg64 |= (((uint64_t) (ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT))
+		         << 32);
+
+	writeq_relaxed_non_atomic(reg64, cb_base + ARM_SMMU_CB_TTBR0);
 
 	/*
 	 * TTBCR
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:58:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 17:58:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527442.820032 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSMa-0006wW-WA; Fri, 28 Apr 2023 17:58:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527442.820032; Fri, 28 Apr 2023 17:58:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSMa-0006wP-TH; Fri, 28 Apr 2023 17:58:32 +0000
Received: by outflank-mailman (input) for mailman id 527442;
 Fri, 28 Apr 2023 17:58:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psSMY-0006fW-MZ
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 17:58:30 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20602.outbound.protection.outlook.com
 [2a01:111:f400:fe59::602])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 46a057dd-e5ee-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 19:58:28 +0200 (CEST)
Received: from BN0PR03CA0009.namprd03.prod.outlook.com (2603:10b6:408:e6::14)
 by MW4PR12MB7336.namprd12.prod.outlook.com (2603:10b6:303:21a::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.20; Fri, 28 Apr
 2023 17:58:23 +0000
Received: from BL02EPF000145B9.namprd05.prod.outlook.com
 (2603:10b6:408:e6:cafe::6a) by BN0PR03CA0009.outlook.office365.com
 (2603:10b6:408:e6::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24 via Frontend
 Transport; Fri, 28 Apr 2023 17:58:22 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BL02EPF000145B9.mail.protection.outlook.com (10.167.241.209) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.16 via Frontend Transport; Fri, 28 Apr 2023 17:58:21 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:58:21 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 10:58:21 -0700
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 28 Apr 2023 12:58:19 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46a057dd-e5ee-11ed-8611-37d641c3527e
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 03/12] xen/arm: Introduce a wrapper for dt_device_get_address() to handle paddr_t
Date: Fri, 28 Apr 2023 18:55:34 +0100
Message-ID: <20230428175543.11902-4-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000145B9:EE_|MW4PR12MB7336:EE_
X-MS-Office365-Filtering-Correlation-Id: 941913dc-7749-4620-501f-08db481227f6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 17:58:21.9653
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 941913dc-7749-4620-501f-08db481227f6
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000145B9.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB7336

dt_device_get_address() accepts only uint64_t for the address and size.
However, the address/size denote physical addresses, so they should be
represented by 'paddr_t'.
Consequently, we introduce a wrapper for dt_device_get_address(), i.e.
dt_device_get_paddr(), which accepts the address/size as paddr_t and in
turn invokes dt_device_get_address() after converting the address/size to
uint64_t.

The reason for introducing this is that in the future 'paddr_t' may not
always be 64-bit. Thus, we need an explicit wrapper to do the type
conversion and return an error in case of truncation.

With this, callers can now invoke dt_device_get_paddr().

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v1 - 1. New patch.

v2 - 1. Extracted part of "[XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size"
into this patch.

2. dt_device_get_address() callers now invoke dt_device_get_paddr() instead.

3. Logged error in case of truncation.

v3 - 1. Modified the truncation checks to "dt_addr != (paddr_t)dt_addr".
2. Some sanity fixes.

v4 - 1. Some sanity fixes.
2. Preserved the declaration of dt_device_get_address() in
xen/include/xen/device_tree.h. The reason being it is currently used by
ns16550.c. This driver requires some more changes as pointed by Jan in
https://lore.kernel.org/xen-devel/6196e90f-752e-e61a-45ce-37e46c22b812@suse.com/
which is to be addressed as a separate series.

v5 - 1. Removed initialization of variables.
2. In dt_device_get_paddr(), added the check
if ( !addr )
    return -EINVAL;

 xen/arch/arm/domain_build.c                | 10 +++---
 xen/arch/arm/gic-v2.c                      | 10 +++---
 xen/arch/arm/gic-v3-its.c                  |  4 +--
 xen/arch/arm/gic-v3.c                      | 10 +++---
 xen/arch/arm/pci/pci-host-common.c         |  6 ++--
 xen/arch/arm/platforms/brcm-raspberry-pi.c |  2 +-
 xen/arch/arm/platforms/brcm.c              |  6 ++--
 xen/arch/arm/platforms/exynos5.c           | 32 +++++++++---------
 xen/arch/arm/platforms/sunxi.c             |  2 +-
 xen/arch/arm/platforms/xgene-storm.c       |  2 +-
 xen/common/device_tree.c                   | 39 ++++++++++++++++++++++
 xen/drivers/char/cadence-uart.c            |  4 +--
 xen/drivers/char/exynos4210-uart.c         |  4 +--
 xen/drivers/char/imx-lpuart.c              |  4 +--
 xen/drivers/char/meson-uart.c              |  4 +--
 xen/drivers/char/mvebu-uart.c              |  4 +--
 xen/drivers/char/omap-uart.c               |  4 +--
 xen/drivers/char/pl011.c                   |  6 ++--
 xen/drivers/char/scif-uart.c               |  4 +--
 xen/drivers/passthrough/arm/ipmmu-vmsa.c   |  8 ++---
 xen/drivers/passthrough/arm/smmu-v3.c      |  2 +-
 xen/drivers/passthrough/arm/smmu.c         |  8 ++---
 xen/include/xen/device_tree.h              | 13 ++++++++
 23 files changed, 120 insertions(+), 68 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 270fb06139..1c558fca0c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1698,13 +1698,13 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
     dt_for_each_device_node( dt_host, np )
     {
         unsigned int naddr;
-        u64 addr, size;
+        paddr_t addr, size;
 
         naddr = dt_number_of_address(np);
 
         for ( i = 0; i < naddr; i++ )
         {
-            res = dt_device_get_address(np, i, &addr, &size);
+            res = dt_device_get_paddr(np, i, &addr, &size);
             if ( res )
             {
                 printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
@@ -2479,7 +2479,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
     unsigned int naddr;
     unsigned int i;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
     bool own_device = !dt_device_for_passthrough(dev);
     /*
      * We want to avoid mapping the MMIO in dom0 for the following cases:
@@ -2534,7 +2534,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
     /* Give permission and map MMIOs */
     for ( i = 0; i < naddr; i++ )
     {
-        res = dt_device_get_address(dev, i, &addr, &size);
+        res = dt_device_get_paddr(dev, i, &addr, &size);
         if ( res )
         {
             printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
@@ -2965,7 +2965,7 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
         if ( res )
         {
             printk(XENLOG_ERR "Unable to permit to dom%d access to"
-                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
+                   " 0x%"PRIpaddr" - 0x%"PRIpaddr"\n",
                    kinfo->d->domain_id,
                    mstart & PAGE_MASK, PAGE_ALIGN(mstart + size) - 1);
             return res;
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 5d4d298b86..6476ff4230 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -993,7 +993,7 @@ static void gicv2_extension_dt_init(const struct dt_device_node *node)
             continue;
 
         /* Get register frame resource from DT. */
-        if ( dt_device_get_address(v2m, 0, &addr, &size) )
+        if ( dt_device_get_paddr(v2m, 0, &addr, &size) )
             panic("GICv2: Cannot find a valid v2m frame address\n");
 
         /*
@@ -1018,19 +1018,19 @@ static void __init gicv2_dt_init(void)
     paddr_t vsize;
     const struct dt_device_node *node = gicv2_info.node;
 
-    res = dt_device_get_address(node, 0, &dbase, NULL);
+    res = dt_device_get_paddr(node, 0, &dbase, NULL);
     if ( res )
         panic("GICv2: Cannot find a valid address for the distributor\n");
 
-    res = dt_device_get_address(node, 1, &cbase, &csize);
+    res = dt_device_get_paddr(node, 1, &cbase, &csize);
     if ( res )
         panic("GICv2: Cannot find a valid address for the CPU\n");
 
-    res = dt_device_get_address(node, 2, &hbase, NULL);
+    res = dt_device_get_paddr(node, 2, &hbase, NULL);
     if ( res )
         panic("GICv2: Cannot find a valid address for the hypervisor\n");
 
-    res = dt_device_get_address(node, 3, &vbase, &vsize);
+    res = dt_device_get_paddr(node, 3, &vbase, &vsize);
     if ( res )
         panic("GICv2: Cannot find a valid address for the virtual CPU\n");
 
diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index 1ec9934191..3aa4edda10 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -1004,12 +1004,12 @@ static void gicv3_its_dt_init(const struct dt_device_node *node)
      */
     dt_for_each_child_node(node, its)
     {
-        uint64_t addr, size;
+        paddr_t addr, size;
 
         if ( !dt_device_is_compatible(its, "arm,gic-v3-its") )
             continue;
 
-        if ( dt_device_get_address(its, 0, &addr, &size) )
+        if ( dt_device_get_paddr(its, 0, &addr, &size) )
             panic("GICv3: Cannot find a valid ITS frame address\n");
 
         add_to_host_its_list(addr, size, its);
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index bb59ea94cd..4e6c98bada 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1377,7 +1377,7 @@ static void __init gicv3_dt_init(void)
     int res, i;
     const struct dt_device_node *node = gicv3_info.node;
 
-    res = dt_device_get_address(node, 0, &dbase, NULL);
+    res = dt_device_get_paddr(node, 0, &dbase, NULL);
     if ( res )
         panic("GICv3: Cannot find a valid distributor address\n");
 
@@ -1393,9 +1393,9 @@ static void __init gicv3_dt_init(void)
 
     for ( i = 0; i < gicv3.rdist_count; i++ )
     {
-        uint64_t rdist_base, rdist_size;
+        paddr_t rdist_base, rdist_size;
 
-        res = dt_device_get_address(node, 1 + i, &rdist_base, &rdist_size);
+        res = dt_device_get_paddr(node, 1 + i, &rdist_base, &rdist_size);
         if ( res )
             panic("GICv3: No rdist base found for region %d\n", i);
 
@@ -1417,10 +1417,10 @@ static void __init gicv3_dt_init(void)
      * For GICv3 supporting GICv2, GICC and GICV base address will be
      * provided.
      */
-    res = dt_device_get_address(node, 1 + gicv3.rdist_count,
+    res = dt_device_get_paddr(node, 1 + gicv3.rdist_count,
                                 &cbase, &csize);
     if ( !res )
-        dt_device_get_address(node, 1 + gicv3.rdist_count + 2,
+        dt_device_get_paddr(node, 1 + gicv3.rdist_count + 2,
                               &vbase, &vsize);
 }
 
diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
index a8ece94303..5550f9478d 100644
--- a/xen/arch/arm/pci/pci-host-common.c
+++ b/xen/arch/arm/pci/pci-host-common.c
@@ -93,7 +93,7 @@ gen_pci_init(struct dt_device_node *dev, const struct pci_ecam_ops *ops)
         cfg_reg_idx = 0;
 
     /* Parse our PCI ecam register address */
-    err = dt_device_get_address(dev, cfg_reg_idx, &addr, &size);
+    err = dt_device_get_paddr(dev, cfg_reg_idx, &addr, &size);
     if ( err )
         goto err_exit;
 
@@ -349,10 +349,10 @@ int __init pci_host_bridge_mappings(struct domain *d)
 
         for ( i = 0; i < dt_number_of_address(dev); i++ )
         {
-            uint64_t addr, size;
+            paddr_t addr, size;
             int err;
 
-            err = dt_device_get_address(dev, i, &addr, &size);
+            err = dt_device_get_paddr(dev, i, &addr, &size);
             if ( err )
             {
                 printk(XENLOG_ERR
diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
index 811b40b1a6..407ec07f63 100644
--- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
+++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
@@ -64,7 +64,7 @@ static void __iomem *rpi4_map_watchdog(void)
     if ( !node )
         return NULL;
 
-    ret = dt_device_get_address(node, 0, &start, &len);
+    ret = dt_device_get_paddr(node, 0, &start, &len);
     if ( ret )
     {
         printk("Cannot read watchdog register address\n");
diff --git a/xen/arch/arm/platforms/brcm.c b/xen/arch/arm/platforms/brcm.c
index d481b2c60f..951e4d6cc3 100644
--- a/xen/arch/arm/platforms/brcm.c
+++ b/xen/arch/arm/platforms/brcm.c
@@ -40,7 +40,7 @@ static __init int brcm_get_dt_node(char *compat_str,
                                    u32 *reg_base)
 {
     const struct dt_device_node *node;
-    u64 reg_base_64;
+    paddr_t reg_base_paddr;
     int rc;
 
     node = dt_find_compatible_node(NULL, NULL, compat_str);
@@ -50,7 +50,7 @@ static __init int brcm_get_dt_node(char *compat_str,
         return -ENOENT;
     }
 
-    rc = dt_device_get_address(node, 0, &reg_base_64, NULL);
+    rc = dt_device_get_paddr(node, 0, &reg_base_paddr, NULL);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "%s: missing \"reg\" prop\n", __func__);
@@ -61,7 +61,7 @@ static __init int brcm_get_dt_node(char *compat_str,
         *dn = node;
 
     if ( reg_base )
-        *reg_base = reg_base_64;
+        *reg_base = reg_base_paddr;
 
     return 0;
 }
diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
index 6560507092..c48093cd4f 100644
--- a/xen/arch/arm/platforms/exynos5.c
+++ b/xen/arch/arm/platforms/exynos5.c
@@ -42,8 +42,8 @@ static int exynos5_init_time(void)
     void __iomem *mct;
     int rc;
     struct dt_device_node *node;
-    u64 mct_base_addr;
-    u64 size;
+    paddr_t mct_base_addr;
+    paddr_t size;
 
     node = dt_find_compatible_node(NULL, NULL, "samsung,exynos4210-mct");
     if ( !node )
@@ -52,14 +52,14 @@ static int exynos5_init_time(void)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, &mct_base_addr, &size);
+    rc = dt_device_get_paddr(node, 0, &mct_base_addr, &size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in \"samsung,exynos4210-mct\"\n");
         return -ENXIO;
     }
 
-    dprintk(XENLOG_INFO, "mct_base_addr: %016llx size: %016llx\n",
+    dprintk(XENLOG_INFO, "mct_base_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr"\n",
             mct_base_addr, size);
 
     mct = ioremap_nocache(mct_base_addr, size);
@@ -97,9 +97,9 @@ static int __init exynos5_smp_init(void)
     struct dt_device_node *node;
     void __iomem *sysram;
     char *compatible;
-    u64 sysram_addr;
-    u64 size;
-    u64 sysram_offset;
+    paddr_t sysram_addr;
+    paddr_t size;
+    paddr_t sysram_offset;
     int rc;
 
     node = dt_find_compatible_node(NULL, NULL, "samsung,secure-firmware");
@@ -125,13 +125,13 @@ static int __init exynos5_smp_init(void)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, &sysram_addr, &size);
+    rc = dt_device_get_paddr(node, 0, &sysram_addr, &size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in %s\n", compatible);
         return -ENXIO;
     }
-    dprintk(XENLOG_INFO, "sysram_addr: %016llx size: %016llx offset: %016llx\n",
+    dprintk(XENLOG_INFO,"sysram_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr"offset: 0x%"PRIpaddr"\n",
             sysram_addr, size, sysram_offset);
 
     sysram = ioremap_nocache(sysram_addr, size);
@@ -189,7 +189,7 @@ static int exynos5_cpu_power_up(void __iomem *power, int cpu)
     return 0;
 }
 
-static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
+static int exynos5_get_pmu_baseandsize(paddr_t *power_base_addr, paddr_t *size)
 {
     struct dt_device_node *node;
     int rc;
@@ -208,14 +208,14 @@ static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, power_base_addr, size);
+    rc = dt_device_get_paddr(node, 0, power_base_addr, size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in \"samsung,exynos5XXX-pmu\"\n");
         return -ENXIO;
     }
 
-    dprintk(XENLOG_DEBUG, "power_base_addr: %016llx size: %016llx\n",
+    dprintk(XENLOG_DEBUG, "power_base_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr"\n",
             *power_base_addr, *size);
 
     return 0;
@@ -223,8 +223,8 @@ static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
 
 static int exynos5_cpu_up(int cpu)
 {
-    u64 power_base_addr;
-    u64 size;
+    paddr_t power_base_addr;
+    paddr_t size;
     void __iomem *power;
     int rc;
 
@@ -256,8 +256,8 @@ static int exynos5_cpu_up(int cpu)
 
 static void exynos5_reset(void)
 {
-    u64 power_base_addr;
-    u64 size;
+    paddr_t power_base_addr;
+    paddr_t size;
     void __iomem *pmu;
     int rc;
 
diff --git a/xen/arch/arm/platforms/sunxi.c b/xen/arch/arm/platforms/sunxi.c
index e8e4d88bef..2b2c215f20 100644
--- a/xen/arch/arm/platforms/sunxi.c
+++ b/xen/arch/arm/platforms/sunxi.c
@@ -50,7 +50,7 @@ static void __iomem *sunxi_map_watchdog(bool *new_wdt)
         return NULL;
     }
 
-    ret = dt_device_get_address(node, 0, &wdt_start, &wdt_len);
+    ret = dt_device_get_paddr(node, 0, &wdt_start, &wdt_len);
     if ( ret )
     {
         dprintk(XENLOG_ERR, "Cannot read watchdog register address\n");
diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index befd0c3c2d..6fc2f9679e 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -50,7 +50,7 @@ static void __init xgene_check_pirq_eoi(void)
     if ( !node )
         panic("%s: Can not find interrupt controller node\n", __func__);
 
-    res = dt_device_get_address(node, 0, &dbase, NULL);
+    res = dt_device_get_paddr(node, 0, &dbase, NULL);
     if ( res )
         panic("%s: Cannot find a valid address for the distributor\n", __func__);
 
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 6c9712ab7b..2163cf26d0 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -955,6 +955,45 @@ int dt_device_get_address(const struct dt_device_node *dev, unsigned int index,
     return 0;
 }
 
+int dt_device_get_paddr(const struct dt_device_node *dev, unsigned int index,
+                        paddr_t *addr, paddr_t *size)
+{
+    uint64_t dt_addr, dt_size;
+    int ret;
+
+    ret = dt_device_get_address(dev, index, &dt_addr, &dt_size);
+    if ( ret )
+        return ret;
+
+    if ( !addr )
+        return -EINVAL;
+
+    if ( addr )
+    {
+        if ( dt_addr != (paddr_t)dt_addr )
+        {
+            printk("Error: Physical address 0x%"PRIx64" for node=%s is greater than max width (%zu bytes) supported\n",
+                   dt_addr, dev->name, sizeof(paddr_t));
+            return -ERANGE;
+        }
+
+        *addr = dt_addr;
+    }
+
+    if ( size )
+    {
+        if ( dt_size != (paddr_t)dt_size )
+        {
+            printk("Error: Physical size 0x%"PRIx64" for node=%s is greater than max width (%zu bytes) supported\n",
+                   dt_size, dev->name, sizeof(paddr_t));
+            return -ERANGE;
+        }
+
+        *size = dt_size;
+    }
+
+    return ret;
+}
 
 int dt_for_each_range(const struct dt_device_node *dev,
                       int (*cb)(const struct dt_device_node *,
diff --git a/xen/drivers/char/cadence-uart.c b/xen/drivers/char/cadence-uart.c
index 22905ba66c..c38d7ed143 100644
--- a/xen/drivers/char/cadence-uart.c
+++ b/xen/drivers/char/cadence-uart.c
@@ -158,14 +158,14 @@ static int __init cuart_init(struct dt_device_node *dev, const void *data)
     const char *config = data;
     struct cuart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &cuart_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("cadence: Unable to retrieve the base"
diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 43aaf02e18..2503392ccd 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -303,7 +303,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     const char *config = data;
     struct exynos4210_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
@@ -316,7 +316,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     uart->parity    = PARITY_NONE;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("exynos4210: Unable to retrieve the base"
diff --git a/xen/drivers/char/imx-lpuart.c b/xen/drivers/char/imx-lpuart.c
index 9c1f3b71a3..77f70c2719 100644
--- a/xen/drivers/char/imx-lpuart.c
+++ b/xen/drivers/char/imx-lpuart.c
@@ -204,7 +204,7 @@ static int __init imx_lpuart_init(struct dt_device_node *dev,
     const char *config = data;
     struct imx_lpuart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
@@ -216,7 +216,7 @@ static int __init imx_lpuart_init(struct dt_device_node *dev,
     uart->parity = 0;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("imx8-lpuart: Unable to retrieve the base"
diff --git a/xen/drivers/char/meson-uart.c b/xen/drivers/char/meson-uart.c
index b1e25e0468..c627328122 100644
--- a/xen/drivers/char/meson-uart.c
+++ b/xen/drivers/char/meson-uart.c
@@ -209,14 +209,14 @@ static int __init meson_uart_init(struct dt_device_node *dev, const void *data)
     const char *config = data;
     struct meson_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &meson_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("meson: Unable to retrieve the base address of the UART\n");
diff --git a/xen/drivers/char/mvebu-uart.c b/xen/drivers/char/mvebu-uart.c
index a00618b96f..cc55173513 100644
--- a/xen/drivers/char/mvebu-uart.c
+++ b/xen/drivers/char/mvebu-uart.c
@@ -231,14 +231,14 @@ static int __init mvebu_uart_init(struct dt_device_node *dev, const void *data)
     const char *config = data;
     struct mvebu3700_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &mvebu3700_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("mvebu3700: Unable to retrieve the base address of the UART\n");
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index d6a5d59aa2..8e643cb039 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -324,7 +324,7 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     struct omap_uart *uart;
     u32 clkspec;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
@@ -344,7 +344,7 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     uart->parity = UART_PARITY_NONE;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("omap-uart: Unable to retrieve the base"
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index be67242bc0..052a651251 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -222,7 +222,7 @@ static struct uart_driver __read_mostly pl011_driver = {
     .vuart_info   = pl011_vuart,
 };
 
-static int __init pl011_uart_init(int irq, u64 addr, u64 size, bool sbsa)
+static int __init pl011_uart_init(int irq, paddr_t addr, paddr_t size, bool sbsa)
 {
     struct pl011 *uart;
 
@@ -258,14 +258,14 @@ static int __init pl011_dt_uart_init(struct dt_device_node *dev,
 {
     const char *config = data;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
     {
         printk("WARNING: UART configuration is not supported\n");
     }
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("pl011: Unable to retrieve the base"
diff --git a/xen/drivers/char/scif-uart.c b/xen/drivers/char/scif-uart.c
index 2fccafe340..1b28ba90e9 100644
--- a/xen/drivers/char/scif-uart.c
+++ b/xen/drivers/char/scif-uart.c
@@ -311,14 +311,14 @@ static int __init scif_uart_init(struct dt_device_node *dev,
     const char *config = data;
     struct scif_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &scif_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("scif-uart: Unable to retrieve the base"
diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
index 091f09b217..611d9eeba5 100644
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -794,7 +794,7 @@ static void ipmmu_device_reset(struct ipmmu_vmsa_device *mmu)
 static __init bool ipmmu_stage2_supported(void)
 {
     struct dt_device_node *np;
-    uint64_t addr, size;
+    paddr_t addr, size;
     void __iomem *base;
     uint32_t product, cut;
     bool stage2_supported = false;
@@ -806,7 +806,7 @@ static __init bool ipmmu_stage2_supported(void)
         return false;
     }
 
-    if ( dt_device_get_address(np, 0, &addr, &size) )
+    if ( dt_device_get_paddr(np, 0, &addr, &size) )
     {
         printk(XENLOG_ERR "ipmmu: Failed to get PRR MMIO\n");
         return false;
@@ -884,7 +884,7 @@ static int ipmmu_probe(struct dt_device_node *node)
 {
     const struct dt_device_match *match;
     struct ipmmu_vmsa_device *mmu;
-    uint64_t addr, size;
+    paddr_t addr, size;
     uint32_t reg;
     int irq, ret;
 
@@ -905,7 +905,7 @@ static int ipmmu_probe(struct dt_device_node *node)
     bitmap_zero(mmu->ctx, IPMMU_CTX_MAX);
 
     /* Map I/O memory and request IRQ. */
-    ret = dt_device_get_address(node, 0, &addr, &size);
+    ret = dt_device_get_paddr(node, 0, &addr, &size);
     if ( ret )
     {
         dev_err(&node->dev, "Failed to get MMIO\n");
diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index bfdb62b395..b7fa2e90f7 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -2428,7 +2428,7 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 	}
 
 	/* Base address */
-	ret = dt_device_get_address(np, 0, &ioaddr, &iosize);
+	ret = dt_device_get_paddr(np, 0, &ioaddr, &iosize);
 	if (ret)
 		goto out_free_smmu;
 
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0a514821b3..79281075ba 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -73,8 +73,8 @@
 /* Xen: Helpers to get device MMIO and IRQs */
 struct resource
 {
-	u64 addr;
-	u64 size;
+	paddr_t addr;
+	paddr_t size;
 	unsigned int type;
 };
 
@@ -101,7 +101,7 @@ static struct resource *platform_get_resource(struct platform_device *pdev,
 
 	switch (type) {
 	case IORESOURCE_MEM:
-		ret = dt_device_get_address(pdev, num, &res.addr, &res.size);
+		ret = dt_device_get_paddr(pdev, num, &res.addr, &res.size);
 
 		return ((ret) ? NULL : &res);
 
@@ -169,7 +169,7 @@ static void __iomem *devm_ioremap_resource(struct device *dev,
 	ptr = ioremap_nocache(res->addr, res->size);
 	if (!ptr) {
 		dev_err(dev,
-			"ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n",
+			"ioremap failed (addr 0x%"PRIpaddr" size 0x%"PRIpaddr")\n",
 			res->addr, res->size);
 		return ERR_PTR(-ENOMEM);
 	}
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 5f8f61aec8..ce25b89c4b 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -585,6 +585,19 @@ int dt_find_node_by_gpath(XEN_GUEST_HANDLE(char) u_path, uint32_t u_plen,
  */
 const struct dt_device_node *dt_get_parent(const struct dt_device_node *node);
 
+/**
+ * dt_device_get_paddr - Resolve an address for a device
+ * @device: the device whose address is to be resolved
+ * @index: index of the address to resolve
+ * @addr: address filled by this function
+ * @size: size filled by this function
+ *
+ * This function resolves an address, walking the tree, for a given
+ * device-tree node. It returns 0 on success.
+ */
+int dt_device_get_paddr(const struct dt_device_node *dev, unsigned int index,
+                        paddr_t *addr, paddr_t *size);
+
 /**
  * dt_device_get_address - Resolve an address for a device
  * @device: the device whose address is to be resolved
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:58:34 2023
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 02/12] xen/arm: Typecast the DT values into paddr_t
Date: Fri, 28 Apr 2023 18:55:33 +0100
Message-ID: <20230428175543.11902-3-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

The DT functions (dt_read_number(), device_tree_get_reg(), fdt_get_mem_rsv())
currently accept or return 64-bit values.

In the future, when we support 32-bit physical addresses, these DT functions
are expected to accept/return 32-bit or 64-bit values, depending on the width
of the physical address. We also wish to detect whether any truncation has
occurred (i.e. while parsing 32-bit physical addresses from 64-bit values read
from the DT).

device_tree_get_reg() now returns paddr_t values. It is invoked by various
callers to obtain the DT address and size.

For fdt_get_mem_rsv(), we have introduced a wrapper named
fdt_get_mem_rsv_paddr() which invokes fdt_get_mem_rsv() and translates
uint64_t to paddr_t. We cannot modify fdt_get_mem_rsv() itself, as it has been
imported from an external source.

For dt_read_number(), we have likewise introduced a wrapper named
dt_read_paddr() to read physical addresses. We chose not to modify the
original function, as it is used in places that specifically need to read
64-bit values from the DT (e.g. dt_property_read_u64()).

Xen prints a warning when it detects truncation in cases where it is not able
to return an error.

Also, u32/u64 have been replaced with uint32_t/uint64_t in the functions
touched by these changes.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

Changes from

v1 - 1. Dropped "[XEN v1 2/9] xen/arm: Define translate_dt_address_size() for the translation between u64 and paddr_t" and
"[XEN v1 4/9] xen/arm: Use translate_dt_address_size() to translate between device tree addr/size and paddr_t", instead
this approach achieves the same purpose.

2. No need to check for truncation while converting values from u64 to paddr_t.

v2 - 1. Use "( (dt_start >> (PADDR_SHIFT - 1)) > 1 )" to detect truncation.
2. Introduced libfdt_xen.h to implement fdt_get_mem_rsv_paddr
3. Logged error messages in case truncation is detected.

v3 - 1. Renamed libfdt_xen.h to libfdt-xen.h.
2. Replaced u32/u64 with uint32_t/uint64_t
3. Use "(paddr_t)val != val" to check for truncation.
4. Removed the alias "#define PADDR_SHIFT PADDR_BITS". 

v4 - 1. Added a WARN() when truncation is detected.
2. Always check the return value of fdt_get_mem_rsv().

v5 - 1. Removed the initialization of variables in fdt_get_mem_rsv_paddr().
The warning has been fixed by checking "if (ret < 0)", similar to how it was
being done for fdt_get_mem_rsv().

2. Removed printing "Error:" before WARN().
3. Added the note about implicit casting before dt_read_number()
4. Sanity fixes.

 xen/arch/arm/bootfdt.c              | 46 +++++++++++++++++++-----
 xen/arch/arm/domain_build.c         |  2 +-
 xen/arch/arm/include/asm/setup.h    |  4 +--
 xen/arch/arm/setup.c                | 14 ++++----
 xen/arch/arm/smpboot.c              |  2 +-
 xen/include/xen/device_tree.h       | 27 ++++++++++++++
 xen/include/xen/libfdt/libfdt-xen.h | 55 +++++++++++++++++++++++++++++
 7 files changed, 130 insertions(+), 20 deletions(-)
 create mode 100644 xen/include/xen/libfdt/libfdt-xen.h

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index e2f6c7324b..b6f92a174f 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -11,7 +11,7 @@
 #include <xen/efi.h>
 #include <xen/device_tree.h>
 #include <xen/lib.h>
-#include <xen/libfdt/libfdt.h>
+#include <xen/libfdt/libfdt-xen.h>
 #include <xen/sort.h>
 #include <xsm/xsm.h>
 #include <asm/setup.h>
@@ -52,11 +52,37 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
     return false;
 }
 
-void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
-                                u32 size_cells, u64 *start, u64 *size)
+void __init device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
+                                uint32_t size_cells, paddr_t *start,
+                                paddr_t *size)
 {
-    *start = dt_next_cell(address_cells, cell);
-    *size = dt_next_cell(size_cells, cell);
+    uint64_t dt_start, dt_size;
+
+    /*
+     * dt_next_cell will return uint64_t whereas paddr_t may not be 64-bit.
+     * Thus, there is an implicit cast from uint64_t to paddr_t.
+     */
+    dt_start = dt_next_cell(address_cells, cell);
+    dt_size = dt_next_cell(size_cells, cell);
+
+    if ( dt_start != (paddr_t)dt_start )
+    {
+        printk("Physical address greater than max width supported\n");
+        WARN();
+    }
+
+    if ( dt_size != (paddr_t)dt_size )
+    {
+        printk("Physical size greater than max width supported\n");
+        WARN();
+    }
+
+    /*
+     * Xen will truncate the address/size if it is greater than the maximum
+     * supported width and it will give an appropriate warning.
+     */
+    *start = dt_start;
+    *size = dt_size;
 }
 
 static int __init device_tree_get_meminfo(const void *fdt, int node,
@@ -329,7 +355,7 @@ static int __init process_chosen_node(const void *fdt, int node,
         printk("linux,initrd-start property has invalid length %d\n", len);
         return -EINVAL;
     }
-    start = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
+    start = dt_read_paddr((void *)&prop->data, dt_size_to_cells(len));
 
     prop = fdt_get_property(fdt, node, "linux,initrd-end", &len);
     if ( !prop )
@@ -342,7 +368,7 @@ static int __init process_chosen_node(const void *fdt, int node,
         printk("linux,initrd-end property has invalid length %d\n", len);
         return -EINVAL;
     }
-    end = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
+    end = dt_read_paddr((void *)&prop->data, dt_size_to_cells(len));
 
     if ( start >= end )
     {
@@ -593,9 +619,11 @@ static void __init early_print_info(void)
     for ( i = 0; i < nr_rsvd; i++ )
     {
         paddr_t s, e;
-        if ( fdt_get_mem_rsv(device_tree_flattened, i, &s, &e) < 0 )
+
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &s, &e) < 0 )
             continue;
-        /* fdt_get_mem_rsv returns length */
+
+        /* fdt_get_mem_rsv_paddr returns length */
         e += s;
         printk(" RESVD[%u]: %"PRIpaddr" - %"PRIpaddr"\n", i, s, e);
     }
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 494611a3e5..270fb06139 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -949,7 +949,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         BUG_ON(!prop);
         cells = (const __be32 *)prop->value;
         device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
-        psize = dt_read_number(cells, size_cells);
+        psize = dt_read_paddr(cells, size_cells);
         if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
         {
             printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 38e2ce255f..47ce565d87 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -159,8 +159,8 @@ const char *boot_module_kind_as_string(bootmodule_kind kind);
 extern uint32_t hyp_traps_vector[];
 void init_traps(void);
 
-void device_tree_get_reg(const __be32 **cell, u32 address_cells,
-                         u32 size_cells, u64 *start, u64 *size);
+void device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
+                         uint32_t size_cells, paddr_t *start, paddr_t *size);
 
 u32 device_tree_get_u32(const void *fdt, int node,
                         const char *prop_name, u32 dflt);
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 6f9f4d8c8a..74b40e527f 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -29,7 +29,7 @@
 #include <xen/virtual_region.h>
 #include <xen/vmap.h>
 #include <xen/trace.h>
-#include <xen/libfdt/libfdt.h>
+#include <xen/libfdt/libfdt-xen.h>
 #include <xen/acpi.h>
 #include <xen/warning.h>
 #include <asm/alternative.h>
@@ -222,11 +222,11 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
     {
         paddr_t r_s, r_e;
 
-        if ( fdt_get_mem_rsv(device_tree_flattened, i, &r_s, &r_e ) < 0 )
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &r_s, &r_e ) < 0 )
             /* If we can't read it, pretend it doesn't exist... */
             continue;
 
-        r_e += r_s; /* fdt_get_mem_rsv returns length */
+        r_e += r_s; /* fdt_get_mem_rsv_paddr returns length */
 
         if ( s < r_e && r_s < e )
         {
@@ -592,13 +592,13 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
     {
         paddr_t mod_s, mod_e;
 
-        if ( fdt_get_mem_rsv(device_tree_flattened,
-                             i - mi->nr_mods,
-                             &mod_s, &mod_e ) < 0 )
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened,
+                                   i - mi->nr_mods,
+                                   &mod_s, &mod_e ) < 0 )
             /* If we can't read it, pretend it doesn't exist... */
             continue;
 
-        /* fdt_get_mem_rsv returns length */
+        /* fdt_get_mem_rsv_paddr returns length */
         mod_e += mod_s;
 
         if ( s < mod_e && mod_s < e )
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 4a89b3a834..e107b86b7b 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -159,7 +159,7 @@ static void __init dt_smp_init_cpus(void)
             continue;
         }
 
-        addr = dt_read_number(prop, dt_n_addr_cells(cpu));
+        addr = dt_read_paddr(prop, dt_n_addr_cells(cpu));
 
         hwid = addr;
         if ( hwid != addr )
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 19a74909ce..5f8f61aec8 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -241,6 +241,33 @@ static inline u64 dt_read_number(const __be32 *cell, int size)
     return r;
 }
 
+/* Wrapper for dt_read_number() to return paddr_t (instead of uint64_t) */
+static inline paddr_t dt_read_paddr(const __be32 *cell, int size)
+{
+    uint64_t dt_r;
+    paddr_t r;
+
+    /*
+     * dt_read_number will return uint64_t whereas paddr_t may not be 64-bit.
+     * Thus, there is an implicit cast from uint64_t to paddr_t.
+     */
+    dt_r = dt_read_number(cell, size);
+
+    if ( dt_r != (paddr_t)dt_r )
+    {
+        printk("Physical address greater than max width supported\n");
+        WARN();
+    }
+
+    /*
+     * Xen will truncate the address/size if it is greater than the maximum
+     * supported width and it will give an appropriate warning.
+     */
+    r = dt_r;
+
+    return r;
+}
+
 /* Helper to convert a number of cells to bytes */
 static inline int dt_cells_to_size(int size)
 {
diff --git a/xen/include/xen/libfdt/libfdt-xen.h b/xen/include/xen/libfdt/libfdt-xen.h
new file mode 100644
index 0000000000..a5340bc9f4
--- /dev/null
+++ b/xen/include/xen/libfdt/libfdt-xen.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xen/include/xen/libfdt/libfdt-xen.h
+ *
+ * Wrapper functions for device tree. This helps to convert dt values
+ * between uint64_t and paddr_t.
+ *
+ * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
+ */
+
+#ifndef LIBFDT_XEN_H
+#define LIBFDT_XEN_H
+
+#include <xen/libfdt/libfdt.h>
+
+static inline int fdt_get_mem_rsv_paddr(const void *fdt, int n,
+                                        paddr_t *address,
+                                        paddr_t *size)
+{
+    uint64_t dt_addr;
+    uint64_t dt_size;
+    int ret;
+
+    ret = fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size);
+    if ( ret < 0 )
+        return ret;
+
+    if ( dt_addr != (paddr_t)dt_addr )
+    {
+        printk("Error: Physical address greater than max width supported\n");
+        return -FDT_ERR_MAX;
+    }
+
+    if ( dt_size != (paddr_t)dt_size )
+    {
+        printk("Error: Physical size greater than max width supported\n");
+        return -FDT_ERR_MAX;
+    }
+
+    *address = dt_addr;
+    *size = dt_size;
+
+    return ret;
+}
+
+#endif /* LIBFDT_XEN_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:59:02 2023
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 05/12] xen/arm: Introduce choice to enable 64/32 bit physical addressing
Date: Fri, 28 Apr 2023 18:55:36 +0100
Message-ID: <20230428175543.11902-6-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT066:EE_|SJ1PR12MB6028:EE_
X-MS-Office365-Filtering-Correlation-Id: 6197a4b4-40b1-41d1-38bb-08db48123b3c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	74Q5XoRJHEgustXl6j5yOuhWmwwJV6Xckfp9z9TTYKSaSBs0LNteWH32UBF1O9WCV3pIbNLIkXqG76eTWIluOy7JqxHnd88+qTlCTE7Bg3lJz3X0uz9JFqFYdgUzDcQU1dkkhrzKpZTEyQ96bS/Z12ERdL5tlCMRY2JuXPjbLNDXHb4GsoNfPvOiIjaop4xB/V9bmH7VtZy153czcbhx/h6IU1Gy4wUiQLfaxb/VWf/7eacJUooOU0z5zw1Uqhvpirl8m5bdCrrIKebsmw5+BSxvq/X1l0BsivuqWJJ6asQC6tjJJiv8yiBfQBpTODXNCiauXWfrmMtePBsDQhImDBJlDUZuOY1dJTDyVdNZ1MKrRqv29ALtjwJCsjDlQUW1iZbvZNHtqmSKorVt3pVglJFwT9Vfmj7CSKKOhiVKdVq7Zu7JsLAvyd8Q9sXHZTsCmaBYmUkp5HuX4Nc9kwLC1x2BMNtUTAQ827OaLC/zql6nslwRKEhZUV/WhC5gMuk0b72grPPevRHOhDDzzGWU4TMEUZdHRlfxxB8ZjV9NeoLx6S5XjFhdttfQwDtusGK7PWrsedOo2l/+x1GgdfIGbgYJZAFcFX+yuagTCw6qN88J3tMScuEAt537QKG/d2PVo0/YyAb45Da413GavavYKz9j3vizR3kaI243NKrnmmcXvkWbHwspvZMb6sebVRxUVmrMwVM4VlFd8w0/UMYphYngOwYVyPT4AZl8VZKo6As=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(346002)(39860400002)(376002)(451199021)(40470700004)(36840700001)(46966006)(86362001)(103116003)(2906002)(36756003)(82310400005)(40460700003)(40480700001)(36860700001)(6666004)(2616005)(336012)(186003)(47076005)(83380400001)(426003)(1076003)(4326008)(6916009)(70586007)(70206006)(478600001)(26005)(316002)(82740400003)(41300700001)(5660300002)(356005)(8676002)(54906003)(81166007)(8936002)(7416002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 17:58:54.2689
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6197a4b4-40b1-41d1-38bb-08db48123b3c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT066.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ1PR12MB6028

Some Arm-based hardware platforms which do not support LPAE
(e.g. Cortex-R52) use 32-bit physical addresses.
Also, users may choose to use 32 bits to represent physical addresses
as an optimization.

To support the above use cases, introduce arch-independent configs to
choose whether a physical address is represented using 32 bits
(PHYS_ADDR_T_32) or 64 bits (!PHYS_ADDR_T_32).
For now, only ARM_32 provides support for enabling 32-bit physical
addressing.

When PHYS_ADDR_T_32 is defined, PADDR_BITS is set to 32.
When PHYS_ADDR_T_32 is not defined on ARM_32, PADDR_BITS is set to 40.
For ARM_64, PADDR_BITS is set to 48.
The last two match the configurations Xen uses today.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -
v1 - 1. Extracted from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".

v2 - 1. Introduced Kconfig choice. ARM_64 can select PHYS_ADDR_64 only whereas
ARM_32 can select PHYS_ADDR_32 or PHYS_ADDR_64.
2. For CONFIG_ARM_PA_32, paddr_t is defined as 'unsigned long'. 

v3 - 1. Allow user to define PADDR_BITS by selecting different config options
ARM_PA_BITS_32, ARM_PA_BITS_40 and ARM_PA_BITS_48.
2. Add the choice under "Architecture Features".

v4 - 1. Removed PHYS_ADDR_T_64, as !PHYS_ADDR_T_32 already implies 64-bit physical addressing.

v5 - 1. Removed ARM_PA_BITS_48 as there is no choice for ARM_64.
2. In ARM_PA_BITS_32, "help" is moved to last, and "depends on" before "select".

 xen/arch/Kconfig                     |  3 +++
 xen/arch/arm/Kconfig                 | 32 ++++++++++++++++++++++++++--
 xen/arch/arm/include/asm/page-bits.h |  6 +-----
 xen/arch/arm/include/asm/types.h     |  6 ++++++
 xen/arch/arm/mm.c                    |  5 +++++
 5 files changed, 45 insertions(+), 7 deletions(-)

diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
index 7028f7b74f..67ba38f32f 100644
--- a/xen/arch/Kconfig
+++ b/xen/arch/Kconfig
@@ -1,6 +1,9 @@
 config 64BIT
 	bool
 
+config PHYS_ADDR_T_32
+	bool
+
 config NR_CPUS
 	int "Maximum number of CPUs"
 	range 1 4095
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..192582b61d 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -19,13 +19,41 @@ config ARM
 	select HAS_PMAP
 	select IOMMU_FORCE_PT_SHARE
 
+menu "Architecture Features"
+
+choice
+	prompt "Physical address space size" if ARM_32
+	default ARM_PA_BITS_40 if ARM_32
+	help
+	  Choose the width used to represent a physical address. On platforms
+	  where every physical address fits within the smaller width, choosing
+	  it can reduce the size of the hypervisor image.
+
+config ARM_PA_BITS_32
+	bool "32-bit"
+	depends on ARM_32
+	select PHYS_ADDR_T_32
+	help
+	  Choose this option on platforms where every physical address can be
+	  represented within 32 bits. This helps reduce the size of the
+	  binary.
+
+config ARM_PA_BITS_40
+	bool "40-bit"
+	depends on ARM_32
+endchoice
+
+config PADDR_BITS
+	int
+	default 32 if ARM_PA_BITS_32
+	default 40 if ARM_PA_BITS_40
+	default 48 if ARM_64
+
 config ARCH_DEFCONFIG
 	string
 	default "arch/arm/configs/arm32_defconfig" if ARM_32
 	default "arch/arm/configs/arm64_defconfig" if ARM_64
 
-menu "Architecture Features"
-
 source "arch/Kconfig"
 
 config ACPI
diff --git a/xen/arch/arm/include/asm/page-bits.h b/xen/arch/arm/include/asm/page-bits.h
index 5d6477e599..deb381ceeb 100644
--- a/xen/arch/arm/include/asm/page-bits.h
+++ b/xen/arch/arm/include/asm/page-bits.h
@@ -3,10 +3,6 @@
 
 #define PAGE_SHIFT              12
 
-#ifdef CONFIG_ARM_64
-#define PADDR_BITS              48
-#else
-#define PADDR_BITS              40
-#endif
+#define PADDR_BITS              CONFIG_PADDR_BITS
 
 #endif /* __ARM_PAGE_SHIFT_H__ */
diff --git a/xen/arch/arm/include/asm/types.h b/xen/arch/arm/include/asm/types.h
index e218ed77bd..e3cfbbb060 100644
--- a/xen/arch/arm/include/asm/types.h
+++ b/xen/arch/arm/include/asm/types.h
@@ -34,9 +34,15 @@ typedef signed long long s64;
 typedef unsigned long long u64;
 typedef u32 vaddr_t;
 #define PRIvaddr PRIx32
+#if defined(CONFIG_PHYS_ADDR_T_32)
+typedef unsigned long paddr_t;
+#define INVALID_PADDR (~0UL)
+#define PRIpaddr "08lx"
+#else
 typedef u64 paddr_t;
 #define INVALID_PADDR (~0ULL)
 #define PRIpaddr "016llx"
+#endif
 typedef u32 register_t;
 #define PRIregister "08x"
 #elif defined (CONFIG_ARM_64)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 74f6ff2c6f..5ef5fd8c49 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -703,6 +703,11 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
     int rc;
 
+    /*
+     * The size of paddr_t should be sufficient to represent the complete
+     * range of physical addresses.
+     */
+    BUILD_BUG_ON((sizeof(paddr_t) * BITS_PER_BYTE) < PADDR_BITS);
     BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
 
     if ( frametable_size > FRAMETABLE_SIZE )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:59:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 17:59:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527451.820062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSN6-0008Qj-4T; Fri, 28 Apr 2023 17:59:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527451.820062; Fri, 28 Apr 2023 17:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSN6-0008QW-0x; Fri, 28 Apr 2023 17:59:04 +0000
Received: by outflank-mailman (input) for mailman id 527451;
 Fri, 28 Apr 2023 17:59:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psSN4-0006vK-NU
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 17:59:02 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20628.outbound.protection.outlook.com
 [2a01:111:f400:7e88::628])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5acc86fe-e5ee-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 19:59:01 +0200 (CEST)
Received: from BN9PR03CA0152.namprd03.prod.outlook.com (2603:10b6:408:f4::7)
 by DM6PR12MB4281.namprd12.prod.outlook.com (2603:10b6:5:21e::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.23; Fri, 28 Apr
 2023 17:58:58 +0000
Received: from BN8NAM11FT061.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f4:cafe::31) by BN9PR03CA0152.outlook.office365.com
 (2603:10b6:408:f4::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24 via Frontend
 Transport; Fri, 28 Apr 2023 17:58:58 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT061.mail.protection.outlook.com (10.13.177.144) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.25 via Frontend Transport; Fri, 28 Apr 2023 17:58:57 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:58:57 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 28 Apr 2023 12:58:56 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5acc86fe-e5ee-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nrwn3ccYQHQI9TX+zs2bgb1Gx5Y9vuLDx8x0W2MjHB3B7LU3ypiDMjuK1IUVwL+V65K0HiSry4ezRULdqtERBGvvdehDV9O7/t8G8bOchfhEPuyUma3Z8l6nOOzn5siZnK+EgJqmpcRAWbM6RCiNDExe6DMInghq8cHbrzz2/QPGgsaqESw5KcnXx3HATmf3GgK6aBCqldVH3LwCBWwK9YCzqaxcmxiO4NQQhXKwd8fX7Dx1bYnq11b8lTX83RnpwwtQKgSkG7f3zooraaTZ+7fPvo9D5F8cWslJUgJWN25Z+VpOMoWRDyV28qVLuCjkwaiGDRvW8oZKpXnth3NZpg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=R8UR0Sc7OfEDIVIoizf2pWazrciBj/wXI6Khj5u0U5s=;
 b=R10jD4q1l7S2yp1dmFuXHHDlQ7kyfZrEA7lp/bJK+upL7e2weyv7gD6Q4h4kan7xhCXm08Qc4aCKz67C+lNJAGzXMiAY1HI4j4PwAtCmpwi5LMB6VWWQWucC4c5bVb663Pese7H/fCBh4iwJvO13CEU8iuOpxJ8hMZNi5D8AWq5VeUS0X5CXWtKWLb2wE89CZQLJ3xBqVKuqUuQ5ETLVIRwjcA4bYOlTomqT/lFIHV1MHC3Umb6dbkp0UTK79NLIg4gG9p9YG62r9JoeFFdvF0S8N+9z9CNgP4XlYVwlkV1fLxxPww7Wj9q14xVtO2IdacE2d9dLpnKtOIRQUfULaw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=R8UR0Sc7OfEDIVIoizf2pWazrciBj/wXI6Khj5u0U5s=;
 b=pRA0Hp1CDSM7uXFM7GwYnZhGAxuANrlT+1R1QMPPMC/XFTselGjG6P9JS53rwyvflvLgCs1mXg3H3y9S3n25CuhytZqaKjwdAe0LkN5EYg9o0ymv3IBZyw11RjJQl1+uAh0wRkiIjHkNeplTJ2229cMYB8/6axp6IWszZ12CrFc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 06/12] xen/arm: guest_walk: LPAE specific bits should be enclosed within "ifndef CONFIG_PHYS_ADDR_T_32"
Date: Fri, 28 Apr 2023 18:55:37 +0100
Message-ID: <20230428175543.11902-7-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT061:EE_|DM6PR12MB4281:EE_
X-MS-Office365-Filtering-Correlation-Id: 436a378f-56cc-45aa-5f0a-08db48123d5f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	evl6DS4tKw5v8G8yo12wqc2ACZzNvyAWHI95RHglsfTYn865W+tTw5VKZZSUdORZPDHXSYeLJQPWNbDhK4N8tX9a+nBjmxW6ISFtOMkW6sjy65zJmswtXuj/kfMal/VF8fC7w8F05+vhIl9JSpYawOKg0bbUzZzi1kGS0lVzL9wg6NL81JXBb+75GejBRXOseI3K2rvRLwJfFWF0/d+fruzcjaka9e8/0NHeXXNOiJrzAuwEpt5fzUph05D25D2yCZpE8LU+9LdJdvEHmmSMMZ/qlTCUSi4hkrgaihEdjcPg6keeFfiml7Trcb3PZr5WyRRCAuAdSgNpXp/dkl23YC2XjKydd9qdmakJR0iz5B/m2eM8NMipb3TNNlKMKaCJu+46I3FMmGeBwVMIV7FGZCXF4Ud9IqO1nzFHYsYLx4cSXrWjIQkMWxdWQvDEvpKz20bT83rDd2L5JdkZSz1cxUlh1Lw630VWXlPViTHTIfBOCItIsFOVHvhiqMkeufSXwEVRJvxVI5ojhgw1lQopRePBfFcdtTkglpc4L7B9r6K9S+pM/BrlARryZpAyb8m5RiFhkWEFe0e72ZU+WtlHDwbhQ10nbjW1grRM/U73tu0G5TlFGPIdBiUD0HKTYaasMNXBhZCZj3mzqpNSqQx9F8Uo25rXZrH/I+A1x5kA43+RTRsxr/ttP80haz0GYAoBARQO6+9AAHgryV5iMylNa1jiknu3Yn3u6wqDMaCyumQ=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(136003)(346002)(39860400002)(451199021)(46966006)(40470700004)(36840700001)(40460700003)(54906003)(478600001)(5660300002)(8676002)(8936002)(36756003)(2906002)(103116003)(86362001)(82310400005)(7416002)(70206006)(70586007)(6916009)(40480700001)(4326008)(316002)(82740400003)(81166007)(41300700001)(356005)(186003)(1076003)(26005)(36860700001)(336012)(47076005)(83380400001)(426003)(2616005)(6666004)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 17:58:57.8860
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 436a378f-56cc-45aa-5f0a-08db48123d5f
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT061.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4281

As the previous patch introduced CONFIG_PHYS_ADDR_T_32 to support 32-bit
physical addresses, code specific to the Large Physical Address Extension
(i.e. LPAE) should be enclosed within "#ifndef CONFIG_PHYS_ADDR_T_32".

Refer to xen/arch/arm/include/asm/short-desc.h, where
"short_desc_l1_supersec_t" contains:
unsigned int extbase1:4;    /* Extended base address, PA[35:32] */
unsigned int extbase2:4;    /* Extended base address, PA[39:36] */

Thus, extbase1 and extbase2 are not valid when only 32-bit physical
addresses are supported.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---

Changes from -
v1 - 1. Extracted from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".

v2 - 1. Reordered this patch so that it appears after CONFIG_ARM_PA_32 is
introduced (in 6/9).

v3 - 1. Updated the commit message.
2. Added Ack.

v4 - 1. No changes.

v5 - 1. No changes.

 xen/arch/arm/guest_walk.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index 43d3215304..c80a0ce55b 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -154,8 +154,10 @@ static bool guest_walk_sd(const struct vcpu *v,
             mask = (1ULL << L1DESC_SUPERSECTION_SHIFT) - 1;
             *ipa = gva & mask;
             *ipa |= (paddr_t)(pte.supersec.base) << L1DESC_SUPERSECTION_SHIFT;
+#ifndef CONFIG_PHYS_ADDR_T_32
             *ipa |= (paddr_t)(pte.supersec.extbase1) << L1DESC_SUPERSECTION_EXT_BASE1_SHIFT;
             *ipa |= (paddr_t)(pte.supersec.extbase2) << L1DESC_SUPERSECTION_EXT_BASE2_SHIFT;
+#endif /* !CONFIG_PHYS_ADDR_T_32 */
         }
 
         /* Set permissions so that the caller can check the flags by herself. */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:59:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 17:59:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527452.820072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSNC-0000Qa-LM; Fri, 28 Apr 2023 17:59:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527452.820072; Fri, 28 Apr 2023 17:59:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSNC-0000QR-II; Fri, 28 Apr 2023 17:59:10 +0000
Received: by outflank-mailman (input) for mailman id 527452;
 Fri, 28 Apr 2023 17:59:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psSNB-0006vK-FE
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 17:59:09 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on20625.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5e94cecf-e5ee-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 19:59:08 +0200 (CEST)
Received: from BN9PR03CA0586.namprd03.prod.outlook.com (2603:10b6:408:10d::21)
 by IA1PR12MB6044.namprd12.prod.outlook.com (2603:10b6:208:3d4::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24; Fri, 28 Apr
 2023 17:59:04 +0000
Received: from BN8NAM11FT056.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10d:cafe::73) by BN9PR03CA0586.outlook.office365.com
 (2603:10b6:408:10d::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24 via Frontend
 Transport; Fri, 28 Apr 2023 17:59:04 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT056.mail.protection.outlook.com (10.13.177.26) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.25 via Frontend Transport; Fri, 28 Apr 2023 17:59:04 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:59:04 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 10:59:03 -0700
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 28 Apr 2023 12:59:02 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e94cecf-e5ee-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HBXlB1ZG6/vmMhaoU+FTj7J3FLddgv8NMNmGzN+5zsBrlXJskBWgla0qsKCXzXKQb3jeFfJl3RJ2EVZnWjV5UP16frOlDMLlkhtJRHmKm37TDsfulB9eGeqUkntWvCQ308UhlqxXRCRgLmf/us2OF8pgbB4rklum8PDXC7ZEmYQCMgMjq0wlWV5n2Z/iuSgXIp0xATKc4ojRTwSGjaaOPXG0ocrVWrZLubX/L7t3dLDKGqyZKpNzlR+9Mt6aW1DD8ZRjKmEDYb7vVUR4EGtaLATJ2Wh2Fpe+FdWBcuVR8Hu7Av40Z4AgNIx+YzNAOrxUYbdr2H7sgME1BFPYKbs/DA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YOm2SpQiMUEOdheU2722eg0DN0nkyL2aqaeiPOs0j1M=;
 b=PFU3zbn4b2Z7u47mU7vqTKQxNb+/E1YpQcr8BrMnJo1u4PZKfAQGS1BQZa/wJ2vAt41OTkLjpdmiKKRZjZvLUSYLtylUjsi0p9LGIH/AjLa60YkRGV34ZTTk/fCych3TbuTU8AI51GXw1ZlFP6OZJbuQ37Q+X05RuvQoBQ2T2VX1ZLOwDCJfWPNvh6bNwjI/TjBGlsyCAnhW/R2vnbjvTvFgC3KR1d+4YS/yd2aTVQMPbACY93YUAFobHSRjhRnFrQxUZRTXUYRIldr1dXsBXFAPG3+Khq/lhU1irqvgj9YgawNaIbQvQGq5HDZU0coSLMuhEi8eGyu2QF+gWBl2og==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YOm2SpQiMUEOdheU2722eg0DN0nkyL2aqaeiPOs0j1M=;
 b=U7ZT1wluL6xXL64gowWh5nZLZzZ345LZEmQrSclzzb+7NqRxPuC7sG6HgSjI07kkd1Y2QltaQQlM2co2T/Oxhs9zXHbgq3Pz2clZC53BC5F38IjljLGKnGQiqa70doNtmHxuRd5aUZQ5B3ckCqGbY001HkTSybPPjhPW6bd/ISQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 08/12] xen: dt: Replace u64 with uint64_t as the callback function parameters for dt_for_each_range()
Date: Fri, 28 Apr 2023 18:55:39 +0100
Message-ID: <20230428175543.11902-9-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT056:EE_|IA1PR12MB6044:EE_
X-MS-Office365-Filtering-Correlation-Id: d7398eb0-bf9e-433f-65d7-08db48124147
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	d71H0XCuBBnFgA4kyQ1y2Fhn2MDWnhQEAjsBhycx7F/cnP1Z5Peixv9KVmHxbt7fAompvA1yTA3iDqRtWIKytyHTiCUs+gmvtFgYWoCtM2yNUGZrqj8Jc2x5yAQHy57ryBuMqFC+gcqJZR5HuA4D/5S9huoToJN6BsvLx7hgOxlT16vi7hgjtlfmEmWGquuSXExz9VaeuxuQmwHPArO6xbpqwIdRNW2B6+0hzME8RyXS17iPaCEB21woZdlwTHFKdjFVFaqaW/bZqbfar3rCdfSK7F27cgssAzEtKrqX6N9A0djW/Z5AbDk/kTzJFrIX0BIGB5101O0NK05/7S1G0tqqRRgIy1LTonYy2Vdj6EjhBj/fliayahV31V8pYc46dyfaUZ0CHw3oNlstkL48p/zAHG+pXGOcpbxP7ZiqrxY4CCp6M4Oo720r+QUfHWk3eXbd9Pu7/Xig/UNX5U663cNnU46I+GpNQ6RNuWpLWmRl6J6PsG/Jcaw9ZPt6Xz7/C+CPN514fX9aRG4A5gdmv674M9yx9oz195+WXWL7G5lB3HPG5LCg9oY1LYvUThd1+Dq3n/89DOp807ws9OV5ib5ONsqWdI1Z+8tuN2qU4ZqyYvgD9k5AqY/RrSDngY/fPu8lmM8NbLJagEa6tb6vx7yPVxZYQd1qOdLOCnOv6z2OkpwwkRBPO6UA7bz5V1pDFNxqFIEvhmVjW06BYhm5u2RdJJHZiDCMFVMbMY8cRrc=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(376002)(346002)(39860400002)(451199021)(46966006)(40470700004)(36840700001)(81166007)(103116003)(70206006)(40460700003)(82310400005)(70586007)(6666004)(2906002)(356005)(40480700001)(36756003)(1076003)(26005)(316002)(82740400003)(186003)(41300700001)(4326008)(6916009)(478600001)(36860700001)(5660300002)(86362001)(54906003)(8676002)(8936002)(336012)(426003)(83380400001)(7416002)(47076005)(2616005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 17:59:04.4554
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d7398eb0-bf9e-433f-65d7-08db48124147
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT056.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB6044

In the callback functions invoked by dt_for_each_range(), i.e.
handle_pci_range() and map_range_to_domain(), 'u64' should be replaced with
'uint64_t' as the data type of the parameters, since dt_for_each_range()
invokes these callbacks with 'uint64_t' arguments.

There is another callback function, i.e. is_bar_valid(), which uses 'paddr_t'
instead of 'u64' or 'uint64_t'. It will be changed in a subsequent commit.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from :-

v1-v5 - New patch introduced in v6.

 xen/arch/arm/domain_build.c      | 4 ++--
 xen/arch/arm/include/asm/setup.h | 2 +-
 xen/common/device_tree.c         | 4 ++--
 xen/include/xen/device_tree.h    | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 1c558fca0c..9865340eac 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1637,7 +1637,7 @@ out:
 }
 
 static int __init handle_pci_range(const struct dt_device_node *dev,
-                                   u64 addr, u64 len, void *data)
+                                   uint64_t addr, uint64_t len, void *data)
 {
     struct rangeset *mem_holes = data;
     paddr_t start, end;
@@ -2331,7 +2331,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
 }
 
 int __init map_range_to_domain(const struct dt_device_node *dev,
-                               u64 addr, u64 len, void *data)
+                               uint64_t addr, uint64_t len, void *data)
 {
     struct map_range_data *mr_data = data;
     struct domain *d = mr_data->d;
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 47ce565d87..fe17cb0a4a 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -166,7 +166,7 @@ u32 device_tree_get_u32(const void *fdt, int node,
                         const char *prop_name, u32 dflt);
 
 int map_range_to_domain(const struct dt_device_node *dev,
-                        u64 addr, u64 len, void *data);
+                        uint64_t addr, uint64_t len, void *data);
 
 extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
 
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 2163cf26d0..ab5f8df66c 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -997,7 +997,7 @@ int dt_device_get_paddr(const struct dt_device_node *dev, unsigned int index,
 
 int dt_for_each_range(const struct dt_device_node *dev,
                       int (*cb)(const struct dt_device_node *,
-                                u64 addr, u64 length,
+                                uint64_t addr, uint64_t length,
                                 void *),
                       void *data)
 {
@@ -1060,7 +1060,7 @@ int dt_for_each_range(const struct dt_device_node *dev,
 
     for ( ; rlen >= rone; rlen -= rone, ranges += rone )
     {
-        u64 a, s;
+        uint64_t a, s;
         int ret;
 
         memcpy(addr, ranges + na, 4 * pna);
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index ce25b89c4b..b3888c1b96 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -681,7 +681,7 @@ int dt_for_each_irq_map(const struct dt_device_node *dev,
  */
 int dt_for_each_range(const struct dt_device_node *dev,
                       int (*cb)(const struct dt_device_node *,
-                                u64 addr, u64 length,
+                                uint64_t addr, uint64_t length,
                                 void *),
                       void *data);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:59:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 17:59:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527453.820078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSND-0000UR-3P; Fri, 28 Apr 2023 17:59:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527453.820078; Fri, 28 Apr 2023 17:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSNC-0000TP-RP; Fri, 28 Apr 2023 17:59:10 +0000
Received: by outflank-mailman (input) for mailman id 527453;
 Fri, 28 Apr 2023 17:59:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psSNB-0006fW-RM
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 17:59:09 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on2062c.outbound.protection.outlook.com
 [2a01:111:f400:7e83::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5e802ccc-e5ee-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 19:59:08 +0200 (CEST)
Received: from BN9PR03CA0581.namprd03.prod.outlook.com (2603:10b6:408:10d::16)
 by SA3PR12MB9198.namprd12.prod.outlook.com (2603:10b6:806:39f::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22; Fri, 28 Apr
 2023 17:59:04 +0000
Received: from BN8NAM11FT056.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10d:cafe::9d) by BN9PR03CA0581.outlook.office365.com
 (2603:10b6:408:10d::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24 via Frontend
 Transport; Fri, 28 Apr 2023 17:59:04 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT056.mail.protection.outlook.com (10.13.177.26) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.25 via Frontend Transport; Fri, 28 Apr 2023 17:59:03 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:59:00 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 28 Apr 2023 12:58:59 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e802ccc-e5ee-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fYR+z5uiPu6EGYFFfewNlpphFFM4RDdAwZHuucFBtv9oa+7PAwyZwZ1L+f/U7YaVnzQFGonBb51kWqzwIy7WY1whwWOaH19zMeg+NBsOssWsKl6GmnU9K9OmPHux7TKjbYeQoJw96/zIFUTVHGRZdvBkKWjBdzd/bdd412wAX4igaAu3cz7lpSfiS6czkU6aTg3r1AUpw6I0MZAJSn1F38iomYyZu6Ct2TPbpeWYtf6J81aZ9eTNm5fom7RtjyR4c+Bf2y3pcomAPFSH6BSjG4DatWkxbjOn1BU+YWcLKhgwIDXdihy6MBDdxek1fG1SumTfdvVNm7pbYGEFBjO6dA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=S5GvJVPCSXr6/uw/3PeMNt7ZXD0H8lgklyZTFsf2XdI=;
 b=JYEDpKYQpFn84JeqqziijSb5sgt6f0oAt0A+//EMGcj+MMRQjigksO9QxH1HBFlc63W/q0fU2JhRxpt5vkiht499JCUywN1E+j1ZuNzcH1etzQucfYykMRPHPZMCBwJea+3X9g52IGdxIGRO6tT4ZUpVa7BWO4SzJKSJZb7llFxw2zdazhwkvr14XsnfYoM2rqT2OPkZPx6bk5EjZAYg3NEtgSWfcV05GvUDVb00itmbHynMy03zx3KMakfyor/Zel2P8zr3wNWZ/XwRgy4VCBeVqMDuWN5yAYKVeEZYxEgwvP7+qk/PbOLJ+OhDINr38C6YIkO9+zIWkIvL07GkYA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=S5GvJVPCSXr6/uw/3PeMNt7ZXD0H8lgklyZTFsf2XdI=;
 b=kQY65WoqSMq4XCRqqf4rfeXGQgk5b5fSi7xQd4IJdrOgJeEzgf2+UEnizkiDILSajjV6s2RgXnLB1AaDnVVQCZabNdR/Qure8OMQfw0e8Z5Tlnqc7+C+VI1Y5qsEzToSVEJNBBARlA0C/7ynKTkIhezZ5rC6V0Sbz6vISQDDo0I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 07/12] xen/arm: Restrict zeroeth_table_offset for ARM_64
Date: Fri, 28 Apr 2023 18:55:38 +0100
Message-ID: <20230428175543.11902-8-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT056:EE_|SA3PR12MB9198:EE_
X-MS-Office365-Filtering-Correlation-Id: b47ea684-a41c-4919-09a9-08db481240e8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VMYm6LxIACBOR/HPyHIdKzZNbO0NvBPKS8U5S8HdUAlqf09/Uq5gjhDKX9yH8m53EFdMIW4j9UsDsdHManayjSDBFeaGRS82QxhhP5YfkRoQlDb8hy5SfqyXPzQ3nbyfgnF/ugFElU+qhAcjZ7uUf/gxNywITTQ0MkxNXwTBGrIPTfX0B8oMtJXl+hd3JOkhNr4Z6kSCQ9cZQxsUB4PKMFof7vmbrJSAjz6JZOLc9qlGm5Tfb4RnWHXsF+EZWfJuzQdSPq1xbUIy4OLCbEl89Xwm75a7hYFn24uwhki8jC8MFm08rRSPn6QhKLDL97c4oVIIWA+aXy/rh2OJVAiP3EakPO/N5/xPw4mArxdg1Z9gy4yUbn0py5l2pKhI1EsLutjrSSTfVmFNTIS8+FzZlQa5Slv+p9kgUisDU8wVb7zF7zGgltM1t7pQ9XMjg+BSiD5WIzCHELXgxRy00RpI5ohBN5qdDYHL9Hrz6SP49rt9tUldhrvYPNDeLDhTgS+aqeLEntBEMoaGM/WvbweBVBxFKwymibIyustWAYk6UdgsL1YImmUQFx+04+cxpOXhLaQpIly8Qrt0hS2y+yYZ1VY67ggC3UGkZD0Y5wJ++Mv49XLIAEJtwYfkqsm5T5w+u0pgXCvvEN4crEb06D9sGAXQuQstSmaNswg8MTftAgPdYPBiqF5W7RXtHvic33ajBJqd+0Lq8IrewkNT/uO5Fj409sTN9pwm7AcGZKCvB/E=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(136003)(396003)(346002)(451199021)(40470700004)(46966006)(36840700001)(70586007)(2906002)(70206006)(2616005)(86362001)(82310400005)(5660300002)(7416002)(8676002)(8936002)(54906003)(40460700003)(36756003)(41300700001)(4326008)(6916009)(83380400001)(316002)(6666004)(478600001)(40480700001)(103116003)(426003)(186003)(1076003)(336012)(81166007)(26005)(356005)(82740400003)(36860700001)(47076005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 17:59:03.8304
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b47ea684-a41c-4919-09a9-08db481240e8
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT056.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB9198

When 32-bit physical addresses are used (i.e. PHYS_ADDR_T_32=y),
"va >> ZEROETH_SHIFT" causes an overflow.
Also, there is no zeroeth level page table on Arm32.

Also take the opportunity to clean up dump_pt_walk(): the DECLARE_OFFSETS()
macro can be used instead of open-coding an array of page table offsets.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
Changes from -

v1 - Removed the duplicate declaration for DECLARE_OFFSETS.

v2 - 1. Reworded the commit message. 
2. Use CONFIG_ARM_PA_32 to restrict zeroeth_table_offset.

v3 - 1. Added R-b and Ack.

v4 - 1. Removed R-b and Ack as we use CONFIG_PHYS_ADDR_T_32
instead of CONFIG_ARM_PA_BITS_32. This is to be in parity with our earlier
patches where we use CONFIG_PHYS_ADDR_T_32 to denote 32-bit physical addr
support.

v5 - 1. Added R-b and Ack.

 xen/arch/arm/include/asm/lpae.h | 4 ++++
 xen/arch/arm/mm.c               | 7 +------
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
index 3fdd5d0de2..7d2f6fd1bd 100644
--- a/xen/arch/arm/include/asm/lpae.h
+++ b/xen/arch/arm/include/asm/lpae.h
@@ -259,7 +259,11 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
 #define first_table_offset(va)  TABLE_OFFSET(first_linear_offset(va))
 #define second_table_offset(va) TABLE_OFFSET(second_linear_offset(va))
 #define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
+#ifdef CONFIG_PHYS_ADDR_T_32
+#define zeroeth_table_offset(va)  0
+#else
 #define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
+#endif
 
 /*
  * Macros to define page-tables:
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 5ef5fd8c49..e460249736 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -233,12 +233,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
 {
     static const char *level_strs[4] = { "0TH", "1ST", "2ND", "3RD" };
     const mfn_t root_mfn = maddr_to_mfn(ttbr);
-    const unsigned int offsets[4] = {
-        zeroeth_table_offset(addr),
-        first_table_offset(addr),
-        second_table_offset(addr),
-        third_table_offset(addr)
-    };
+    DECLARE_OFFSETS(offsets, addr);
     lpae_t pte, *mapping;
     unsigned int level, root_table;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:59:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 17:59:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527454.820092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSNH-00015s-9o; Fri, 28 Apr 2023 17:59:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527454.820092; Fri, 28 Apr 2023 17:59:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSNH-00015Y-5A; Fri, 28 Apr 2023 17:59:15 +0000
Received: by outflank-mailman (input) for mailman id 527454;
 Fri, 28 Apr 2023 17:59:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psSNF-0006fW-6H
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 17:59:13 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20604.outbound.protection.outlook.com
 [2a01:111:f400:fe59::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6083cdc4-e5ee-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 19:59:11 +0200 (CEST)
Received: from BN9P222CA0030.NAMP222.PROD.OUTLOOK.COM (2603:10b6:408:10c::35)
 by MN0PR12MB5761.namprd12.prod.outlook.com (2603:10b6:208:374::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22; Fri, 28 Apr
 2023 17:59:08 +0000
Received: from BN8NAM11FT073.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:10c:cafe::3) by BN9P222CA0030.outlook.office365.com
 (2603:10b6:408:10c::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.23 via Frontend
 Transport; Fri, 28 Apr 2023 17:59:08 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT073.mail.protection.outlook.com (10.13.177.231) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.24 via Frontend Transport; Fri, 28 Apr 2023 17:59:07 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:59:07 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:59:07 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 28 Apr 2023 12:59:06 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6083cdc4-e5ee-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CtX7F0/Sv6FUz9DPXDsb5qskn+XhgEyne1KJwZnnRqfkIyrj0BHe3j2h5f757NDkmdEBwpC481/wwKcLxM1o9TM4Qe3trN+VJz2UeJFao+ltsnKJJIB9Yq11yv8kR6H2D1SWauByNMt1fYmkqYqacDSGEb2PpeR6py1yekeau7cYSpnBHYfUCfgwMNgI3XzpFjDIHWVFWWmjV9p8H7WVvaz6ZcbRAVp1Zdh0YnKbFyMG3JCYSC+LmM2JHxmDRwHTF2SSEmoskL3ngzlrT3+W0Qpd/bJDw/RcdQJPH8+2PU6KXmfLgswYamMvA30MkDuypPeypbNZe3F2lb9DjFJe1w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=aVRoIcVr9qIryp7aBKt+Cf5kSuhKxlTPFt/EQaROMMA=;
 b=WgIRelK+J3NITr5k+RIFXZdX+LtSg92rECCDtMQlr/OvAAGa/YLI0EC8wiGYhO0cLMUDTW3cp8Cp4OO57HM+869CANaSOgBIVqo6Xv5QoJAFQtA0EvUqzlnDpPdX7TkaI+/tE/eQzXLpPL2Y0U/UALRwMVVCDJW/BhyZdzteeKKb1Gu5wDTwUsip/rxhIqWIQrVnJMeGa3W8jdXazgs0khglldxIgjzMjDNm2rWfmktiZcKpERzfAgd2a3rQHoC+MEcjdur9WeRT4aEkk0m15UibrHoWZjR6Q+LLPB1dOXoVwaF4vbLDpiUmFZEvZ5xLFtLBDELQrLxZnaO1EMCelA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aVRoIcVr9qIryp7aBKt+Cf5kSuhKxlTPFt/EQaROMMA=;
 b=HNJa2mBVdn+btQ7DDOEgMtw23GaGUS2jtRK8nMfvtzGmfhXZq6M0eB0S0vDKfFOzwbUmiGmVVI/Y4pxnGsSHjuFT6ZlsFdKQCkFUot4/zYmXkvWLP56ew4oBz3sshaYeEsS3eFyAQLedi6oriJWWk7pCk7nppEB9dPbkGWmbdko=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 09/12] xen/arm: pci: Use 'uint64_t' as the datatype for the function parameters.
Date: Fri, 28 Apr 2023 18:55:40 +0100
Message-ID: <20230428175543.11902-10-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT073:EE_|MN0PR12MB5761:EE_
X-MS-Office365-Filtering-Correlation-Id: 7206b6de-4e5f-45da-4de9-08db4812435d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	424IDj+UHl4gy/IRVolTWpbDRnpdvlBHOfOw3MTD/Sm74whe1xOkzf5ORJkNkDQoj6IRE9j3848GC1ozdaNYKIvl/5tKWnqlgFtB+yx1IXc9sNYjfdlUUv6Nu2gw3EfzySiAjRU9OTRYgQsfsQ3AjwnDAjpejlcNBmRlUp2X9NKdmlSlrHYTNHn0I6OP6uOfehSW+pfDNsvNcB9urv/Z05wyf2gMq40WfTeXDAMVO9Ij6qMjdAnsgWPARKGVOHGnPPy6nf49jDnOFRMempQyWsCDgpMPbmlEro6Sg23E0jWHKaDPkQjpYVnCYp0xSSpISSEn4WV2gQH11T8VfA7zjP/nJFAk+7V6dgD5oBE7qhrYZ9FZwsMCTf7nFs0gqEgfDPab9b4GuwgTW6nDcpa+1H74quKdScSR+mMGoVug/WORCuB26+k05dQcDTeMt1RiTosW1asyu6qodveUEqsrZE5KjEPBovu/fETbOyj5+gRDqH/0BB19xtanywWIIJn4D1qq+gBCGJdYSIOMClh2OkfO7BeK6BY2WKcYlyttokM013P1H4JqKAiX+AP+8sPU/rvjdzGgeg8+M1fyO51O0fMm2JWE54sqA2qr9GBw+cEb7GHBPy6upzFv+lflKJX5xk0f2M/S2gkooTkSLYoBJD2ZMDhal2V1d8d7jUaJqcnzsylIMYmcECrYM9LdlcMR6Zxa0vR2TWw1uwbEXsRyd1iSu0+uuPJNvieckFa8a0j7dV303MZT//6hCMXBrdn8gntOrPJziayTJlFbiTWVm+g/lFppiAZVR9CdalK5Mlk=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(136003)(376002)(39860400002)(451199021)(46966006)(40470700004)(36840700001)(36860700001)(40460700003)(103116003)(36756003)(5660300002)(41300700001)(4744005)(7416002)(2906002)(8936002)(8676002)(86362001)(40480700001)(316002)(82740400003)(6916009)(70586007)(81166007)(70206006)(4326008)(356005)(82310400005)(426003)(47076005)(83380400001)(336012)(26005)(1076003)(186003)(478600001)(54906003)(2616005)(6666004)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 17:59:07.9387
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7206b6de-4e5f-45da-4de9-08db4812435d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT073.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB5761

As the callback invoked by dt_for_each_range() uses 'uint64_t' as the datatype
for its address and length arguments, the parameters of is_bar_valid() need to
be changed to 'uint64_t' as well.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from :-

v1-v5 - New patch introduced in v6.

 xen/arch/arm/pci/pci-host-common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
index 5550f9478d..de915aa590 100644
--- a/xen/arch/arm/pci/pci-host-common.c
+++ b/xen/arch/arm/pci/pci-host-common.c
@@ -379,7 +379,7 @@ int __init pci_host_bridge_mappings(struct domain *d)
  * right place for alignment check.
  */
 static int is_bar_valid(const struct dt_device_node *dev,
-                        paddr_t addr, paddr_t len, void *data)
+                        uint64_t addr, uint64_t len, void *data)
 {
     struct pdev_bar_check *bar_data = data;
     paddr_t s = bar_data->start;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:59:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 17:59:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527455.820101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSNI-0001Rh-Ol; Fri, 28 Apr 2023 17:59:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527455.820101; Fri, 28 Apr 2023 17:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSNI-0001QR-L2; Fri, 28 Apr 2023 17:59:16 +0000
Received: by outflank-mailman (input) for mailman id 527455;
 Fri, 28 Apr 2023 17:59:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psSNH-0006vK-J6
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 17:59:15 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20604.outbound.protection.outlook.com
 [2a01:111:f400:7e89::604])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 623c960d-e5ee-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 19:59:14 +0200 (CEST)
Received: from BN9PR03CA0799.namprd03.prod.outlook.com (2603:10b6:408:13f::24)
 by BN9PR12MB5051.namprd12.prod.outlook.com (2603:10b6:408:134::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24; Fri, 28 Apr
 2023 17:59:11 +0000
Received: from BL02EPF000145B8.namprd05.prod.outlook.com
 (2603:10b6:408:13f:cafe::1d) by BN9PR03CA0799.outlook.office365.com
 (2603:10b6:408:13f::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24 via Frontend
 Transport; Fri, 28 Apr 2023 17:59:11 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BL02EPF000145B8.mail.protection.outlook.com (10.167.241.208) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.15 via Frontend Transport; Fri, 28 Apr 2023 17:59:11 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:59:10 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:59:10 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 28 Apr 2023 12:59:09 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 623c960d-e5ee-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=V6ieHHG+7eRz/D2vOKRn93Ub9PW1l6KtbpVevoYAmKa6uMkfS/n/FhCH1siIMTTgW8yGZbKX6jby71GcnE4wjAD2/MdSMhEzhnZkuXRRP9IrTB6SfEM93sD3/gVaPAaLQZ4WLFRqURWLvsVJCT1INDLa7szAuoOaeu9Na799S8UNAzwB3jqIG+WzQWhncJBKKva9Z9DO0yhvB3c7HHbEq+9piaXblQqZ8YfbKsLvKyDc/9cWEWNEqUqeGtVijNYJ4qMXpdSDEy9j1n53AbQGsRe9fKQB439MeT/mZMt8BbJIqymdUCW2TwLV53j43o9OZ8nvgwyXXSzosWrb0+Kd5g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qUQ3wxOwdNc9vYtQ3TKA2WsU1jIK+EVT1oaYvAZOlv8=;
 b=eJgWR40tEeRAW7W+aqCGa4Nkd3VQxq9rPOA73le29P7GY9yfCq4J+Mg1Kg3PsqrK3lPatsHMJ/g7njyveHgv5HgVSUvEALy7/mwB0I4h8O0LVV4uIzLjCEb98KwRSs6ZlJyeCUPvGVGtRzVPjAZS7Jrmnid0jSKMUtaJCJLyYbAO82g1Qq0uhrMjP05sI1AsVQqRtAtxd+deKtzYAN5DaSNMeZ6pzQWLONiTUpY4cEe19OVLuDoEv90XnjzAOeiBq61xApqZwjfrqlTpyJ2qk7b9z6//zD3+6IIKgAiW4OakLZygBJKKEJkqXTul6VHuoeqKNz3+xGqrrW3TU/R5wg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qUQ3wxOwdNc9vYtQ3TKA2WsU1jIK+EVT1oaYvAZOlv8=;
 b=el8UuNA7dV42j09VKpbSshjVsgfnbW/nq/FU59mhlk2OExzIvvD62DRsYtBRS5QpeNAXb4F3sK7A9mH5lguJplOU7aiN4gzgebrygRHMO91EXbZwdlz0iqr0zuqIIULNItHQmaFdzReOUAluhraOFRor4D2jOlYYtruUoOXlq7E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 10/12] xen/arm: domain_build: Check if the address fits the range of physical address
Date: Fri, 28 Apr 2023 18:55:41 +0100
Message-ID: <20230428175543.11902-11-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000145B8:EE_|BN9PR12MB5051:EE_
X-MS-Office365-Filtering-Correlation-Id: f163562c-bc26-436f-d1ba-08db4812453f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Uzuj9bnuFMC3N94up0WidahJ6o9QvI4hRGpDGZbIEcsQyyov7mRhhYaabw18pHHC12kngopsssqrdumdyLz3ahckfA0WifffSSyfSMAy2PGeXygwwmPc45mHKtUMo1L7ARKJ5s/vjObWyU9upl3ngkf75Mahb38wqmfz5nsu8QNOYuQzsttE9tqHZTcDN4vzyTaAjmmLsj0vUtWHRrD6pz7pdQG3kqy2d3qbIMCeC4cparLu8ILq9G03nfqHbM0Qd/AGSMhcoIKCsTziTjmH1/KWiwNK6kS2p7Ivi19LAsYKGTNu+sET7B9KGDYcDVsAbCzWzNXhG4a0cdy6UV9faUSxugF0Y3FIDO61jSngbOx5qk0TpokEXb6pwEJQe1xPRiAgxT8HScVPk1lGI1dICc89HGOnn63ZCkvYGRHpfAbAp0bVRoi+x3k8yWE5TG9KvZ0OLJ+HOv3u+MGaPZCuZ0MTkcyBsIhImTsXfbNlsu42iSTEw7g02BmnhjaS0R1jU5e5oWQZgrypQXwot+g4Sr7HOkgHvqpHS0DVutcngYKTZ/hvUkUXGcSmzbB09lj2n2VOP5fA4jDkWovJBtCNMo5ko7P6fYLUBclZPQ7WaSjsaOuR7Sjjh/ofAebv4NWIivx5nkEZufneiE1OGoZd9iwkWQ47iZNc7V5dv/iCnILK+JdXXSkM7kwwn1OttqevKaiomF+pbjlWFRzBn5BFAwPYSNyRQ56FExgbHrHwTts=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(346002)(136003)(39860400002)(451199021)(36840700001)(46966006)(40470700004)(40460700003)(54906003)(478600001)(5660300002)(8936002)(8676002)(36756003)(86362001)(2906002)(103116003)(7416002)(82310400005)(70586007)(70206006)(40480700001)(4326008)(6916009)(316002)(82740400003)(81166007)(41300700001)(356005)(186003)(26005)(1076003)(336012)(36860700001)(47076005)(426003)(83380400001)(2616005)(6666004)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 17:59:11.1009
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f163562c-bc26-436f-d1ba-08db4812453f
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000145B8.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5051

handle_pci_range() and map_range_to_domain() take addr and len as uint64_t
parameters. Frame numbers are then obtained from addr and len by right
shifting by PAGE_SHIFT, and are expressed as 'unsigned long'.

When a 64-bit value is shifted right by PAGE_SHIFT, up to 52 bits of the
result are still valid, but on a 32-bit system 'unsigned long' is only
32 bits wide. Thus, there is a potential loss of value when the result is
stored as 'unsigned long'.

To mitigate this issue, check whether the start and end addresses fit within
the range of physical addresses supported on the system. If not, an
appropriate error is returned.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from :-
v1...v4 - NA. New patch introduced in v5.

v5 - 1. Updated the error message
2. Used "(((paddr_t)~0 - addr) < len)" to check the limit on len.
3. Changes in the prototype of "map_range_to_domain()" have been
addressed by patch 8.

 xen/arch/arm/domain_build.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9865340eac..719bb09845 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1643,6 +1643,13 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
     paddr_t start, end;
     int res;
 
+    if ( addr != (paddr_t)addr || (((paddr_t)~0 - addr) < len) )
+    {
+        printk(XENLOG_ERR "%s: [0x%"PRIx64", 0x%"PRIx64"] exceeds the maximum allowed PA width (%u bits)",
+               dt_node_full_name(dev), addr, (addr + len), PADDR_BITS);
+        return -ERANGE;
+    }
+
     start = addr & PAGE_MASK;
     end = PAGE_ALIGN(addr + len);
     res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end - 1));
@@ -2337,6 +2344,13 @@ int __init map_range_to_domain(const struct dt_device_node *dev,
     struct domain *d = mr_data->d;
     int res;
 
+    if ( addr != (paddr_t)addr || (((paddr_t)~0 - addr) < len) )
+    {
+        printk(XENLOG_ERR "%s: [0x%"PRIx64", 0x%"PRIx64"] exceeds the maximum allowed PA width (%u bits)",
+               dt_node_full_name(dev), addr, (addr + len), PADDR_BITS);
+        return -ERANGE;
+    }
+
     /*
      * reserved-memory regions are RAM carved out for a special purpose.
      * They are not MMIO and therefore a domain should not be able to
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 17:59:22 2023
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 11/12] xen/arm: p2m: Use the pa_range_info table to support ARM_32 and ARM_64
Date: Fri, 28 Apr 2023 18:55:42 +0100
Message-ID: <20230428175543.11902-12-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

Restructure the code so that the pa_range_info[] table can be used by both
ARM_32 and ARM_64.

Also, remove the hardcoded definitions of P2M_ROOT_ORDER and P2M_ROOT_LEVEL,
as p2m_root_order can be obtained from pa_range_info[].root_order and
p2m_root_level from pa_range_info[].sl0.

Refer to ARM DDI 0406C.d ID040418, B3-1345,
"Use of concatenated first-level translation tables

...However, a 40-bit input address range with a translation granularity of 4KB
requires a total of 28 bits of address resolution. Therefore, a stage 2
translation that supports a 40-bit input address range requires two concatenated
first-level translation tables,..."

Thus, root-order is 1 for 40-bit IPA on ARM_32.

Refer to ARM DDI 0406C.d ID040418, B3-1348,

"Determining the required first lookup level for stage 2 translations

For a stage 2 translation, the output address range from the stage 1
translations determines the required input address range for the stage 2
translation. The permitted values of VTCR.SL0 are:

0b00 Stage 2 translation lookup must start at the second level.
0b01 Stage 2 translation lookup must start at the first level.

VTCR.T0SZ must indicate the required input address range. The size of the input
address region is 2^(32-T0SZ) bytes."

Thus VTCR.SL0 = 1 (the maximum value) and VTCR.T0SZ = -8 when the size of the
input address region is 2^40 bytes.

Thus pa_range_info[].t0sz packs VTCR.S = 1 (the sign bit) together with the
4-bit two's-complement encoding of T0SZ = -8 (0b1000), i.e. 0b11000, which is 24.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v3 - 1. New patch introduced in v4.
2. Restructure the code such that pa_range_info[] is used both by ARM_32 as
well as ARM_64.

v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and P2M_ROOT_LEVEL.
The reason being root_order will not be always 1 (See the next patch).
2. Updated the commit message to explain t0sz, sl0 and root_order values for
32-bit IPA on Arm32.
3. Some sanity fixes.

v5 - pa_range_info[] is indexed by system_cpuinfo.mm64.pa_range, i.e. when
PARange is 0 the PA size is 32 bits, when 1 it is 36 bits, and so on. So
pa_range_info[] has been updated accordingly.
For ARM_32, pa_range_info[0] and pa_range_info[1] are marked invalid (0) as
we do not support the 32-bit or 36-bit physical address ranges yet.

 xen/arch/arm/include/asm/p2m.h |  8 +-------
 xen/arch/arm/p2m.c             | 32 ++++++++++++++++++--------------
 2 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index f67e9ddc72..4ddd4643d7 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -14,16 +14,10 @@
 /* Holds the bit size of IPAs in p2m tables.  */
 extern unsigned int p2m_ipa_bits;
 
-#ifdef CONFIG_ARM_64
 extern unsigned int p2m_root_order;
 extern unsigned int p2m_root_level;
-#define P2M_ROOT_ORDER    p2m_root_order
+#define P2M_ROOT_ORDER p2m_root_order
 #define P2M_ROOT_LEVEL p2m_root_level
-#else
-/* First level P2M is always 2 consecutive pages */
-#define P2M_ROOT_ORDER    1
-#define P2M_ROOT_LEVEL 1
-#endif
 
 struct domain;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 418997843d..1fe3cccf46 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -19,9 +19,9 @@
 
 #define INVALID_VMID 0 /* VMID 0 is reserved */
 
-#ifdef CONFIG_ARM_64
 unsigned int __read_mostly p2m_root_order;
 unsigned int __read_mostly p2m_root_level;
+#ifdef CONFIG_ARM_64
 static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
 /* VMID is by default 8 bit width on AArch64 */
 #define MAX_VMID       max_vmid
@@ -2247,16 +2247,6 @@ void __init setup_virt_paging(void)
     /* Setup Stage 2 address translation */
     register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
 
-#ifdef CONFIG_ARM_32
-    if ( p2m_ipa_bits < 40 )
-        panic("P2M: Not able to support %u-bit IPA at the moment\n",
-              p2m_ipa_bits);
-
-    printk("P2M: 40-bit IPA\n");
-    p2m_ipa_bits = 40;
-    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
-    val |= VTCR_SL0(0x1); /* P2M starts at first level */
-#else /* CONFIG_ARM_64 */
     static const struct {
         unsigned int pabits; /* Physical Address Size */
         unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
@@ -2265,19 +2255,26 @@ void __init setup_virt_paging(void)
     } pa_range_info[] __initconst = {
         /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */
         /*      PA size, t0sz(min), root-order, sl0(max) */
+        [2] = { 40,      24/*24*/,  1,          1 },
+#ifdef CONFIG_ARM_64
         [0] = { 32,      32/*32*/,  0,          1 },
         [1] = { 36,      28/*28*/,  0,          1 },
-        [2] = { 40,      24/*24*/,  1,          1 },
         [3] = { 42,      22/*22*/,  3,          1 },
         [4] = { 44,      20/*20*/,  0,          2 },
         [5] = { 48,      16/*16*/,  0,          2 },
         [6] = { 52,      12/*12*/,  4,          2 },
         [7] = { 0 }  /* Invalid */
+#else
+        [0] = { 0 },  /* Invalid */
+        [1] = { 0 },  /* Invalid */
+        [3] = { 0 }  /* Invalid */
+#endif
     };
 
     unsigned int i;
     unsigned int pa_range = 0x10; /* Larger than any possible value */
 
+#ifdef CONFIG_ARM_64
     /*
      * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
      * with IPA bits == PA bits, compare against "pabits".
@@ -2291,6 +2288,9 @@ void __init setup_virt_paging(void)
      */
     if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
         max_vmid = MAX_VMID_16_BIT;
+#else
+    p2m_ipa_bits = PADDR_BITS;
+#endif
 
     /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits". */
     for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
@@ -2306,24 +2306,28 @@ void __init setup_virt_paging(void)
     if ( pa_range >= ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_range].pabits )
         panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", pa_range);
 
+#ifdef CONFIG_ARM_64
     val |= VTCR_PS(pa_range);
     val |= VTCR_TG0_4K;
 
     /* Set the VS bit only if 16 bit VMID is supported. */
     if ( MAX_VMID == MAX_VMID_16_BIT )
         val |= VTCR_VS;
+
+    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
+#endif
+
     val |= VTCR_SL0(pa_range_info[pa_range].sl0);
     val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
 
     p2m_root_order = pa_range_info[pa_range].root_order;
     p2m_root_level = 2 - pa_range_info[pa_range].sl0;
-    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
 
     printk("P2M: %d-bit IPA with %d-bit PA and %d-bit VMID\n",
            p2m_ipa_bits,
            pa_range_info[pa_range].pabits,
            ( MAX_VMID == MAX_VMID_16_BIT ) ? 16 : 8);
-#endif
+
     printk("P2M: %d levels with order-%d root, VTCR 0x%"PRIregister"\n",
            4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 18:08:35 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180452-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180452: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 18:08:22 +0000

flight 180452 xen-unstable real [real]
flight 180472 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180452/
http://logs.test-lab.xenproject.org/osstest/logs/180472/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvshim 20 guest-localmigrate/x10 fail pass in 180472-retest
 test-amd64-amd64-libvirt-pair 13 xen-boot/dst_host  fail pass in 180472-retest
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180472-retest
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail pass in 180472-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180433
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180433
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180433
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180433
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180433
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180433
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180433
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180433
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180433
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180433
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180433
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180433
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  dde20f7dc182fdfeeb6c55648979326bb982ca8c
baseline version:
 xen                  18a36b4a9b088875486cfe33a2d4a8ae7eb4ab47

Last test of basis   180433  2023-04-26 14:00:59 Z    2 days
Testing same since   180452  2023-04-27 18:45:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>
  Olaf Hering <olaf@aepfle.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   18a36b4a9b..dde20f7dc1  dde20f7dc182fdfeeb6c55648979326bb982ca8c -> master


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 18:09:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 18:09:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527476.820131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSWm-0005bT-Ap; Fri, 28 Apr 2023 18:09:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527476.820131; Fri, 28 Apr 2023 18:09:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSWm-0005bM-87; Fri, 28 Apr 2023 18:09:04 +0000
Received: by outflank-mailman (input) for mailman id 527476;
 Fri, 28 Apr 2023 18:09:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psSNg-0006fW-56
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 17:59:40 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on20618.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::618])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 714437a0-e5ee-11ed-8611-37d641c3527e;
 Fri, 28 Apr 2023 19:59:38 +0200 (CEST)
Received: from BN0PR04CA0185.namprd04.prod.outlook.com (2603:10b6:408:e9::10)
 by DM4PR12MB5119.namprd12.prod.outlook.com (2603:10b6:5:392::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.23; Fri, 28 Apr
 2023 17:59:35 +0000
Received: from BL02EPF000145BA.namprd05.prod.outlook.com
 (2603:10b6:408:e9:cafe::9e) by BN0PR04CA0185.outlook.office365.com
 (2603:10b6:408:e9::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.25 via Frontend
 Transport; Fri, 28 Apr 2023 17:59:35 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BL02EPF000145BA.mail.protection.outlook.com (10.167.241.210) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.21 via Frontend Transport; Fri, 28 Apr 2023 17:59:34 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 12:59:34 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 28 Apr 2023 12:59:33 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 714437a0-e5ee-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZNJFPjwP0pZxrbqYJFpKRMh++hOK7cQZCZ1R7MIqhcdVpBTHrQfKWfiWykIV1RQMB2sJ88Gc668TW3520fee6QG66UA5gc3gw4V0VRcyq7sLAxkGWhdxrr/H5yEWuOcC94blk4KlWURuwbvoiOZG0KT3TQD/uX7V+/L7DyiVORZ+fok/pG6lDbRJdngiUMXwt3yYtbfGSaA5EP4KE2qhRwOzUdyt5ixduvWS7w+rMFAtiEwXloxFsTzNkdPfX+eB7yv2dpTepeRTKzzQF8BQihYxCX2iSS3D5dfv0aC4wViDYznNYFcDdJdQVZgV95Ns6oKFtjio5pVph07ONeoPUw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z6Q5ypvhP+q/whBG3MtdTSywgqoOEnfxXXcGmqYwu3k=;
 b=Y7N8UCp3DlCAIXy4CpTb+Ek6w93IpEGfox2FUgYPecwBpV6t0GVBNTPwFcbExIKYNhAhro9qwlJwtTKKwMaDIDni20K2d/YGUIVfgoMDNab6SyFBOgiOeIvYnTrEuwFM9Np492LEDHWbqBN3eZCP2mKrSD22TelGcIj1nP9wFxafx4DN8nCOSLumUleNemDzkDmystlzbuMJfnGxMaDNNueC/7jVl2eOU8bPGnZKP7lLvlnLtX17qdftzazCU2T361sokcvE4ajjSyeDxMKsV2KZSfIA2+okQnzGhz/3WsXaBtshb408Tb2hUamFxrRCUsGOlFJIzVrUUV3fDy+4Rw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z6Q5ypvhP+q/whBG3MtdTSywgqoOEnfxXXcGmqYwu3k=;
 b=mLGU3sDV/qaWmiKR9iYGmfZr1UvqMu01Btkd47beQ7oa7mjxXOrNRYllpB8NIQUwIvepwKhgZZH4l57XyUiHlQONzfk7I5L7voRPjTLqHFix7hty66uWXQu9x+h/57/zHsVDGIj04MJkPikFR+WNn0htIaoqexjqDuVcvmPNJqk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [XEN v6 12/12] xen/arm: p2m: Enable support for 32bit IPA for ARM_32
Date: Fri, 28 Apr 2023 18:55:43 +0100
Message-ID: <20230428175543.11902-13-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF000145BA:EE_|DM4PR12MB5119:EE_
X-MS-Office365-Filtering-Correlation-Id: 3971b29a-7fbf-41fd-fdde-08db4812536d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	DWLYNZ4mdaXyrDy44o5pO4nuzXBOGZEtLPYFscggbmlZtxJ9wyw+eJqOcOtdlufEtVTrZG4ykzuiEZwQCU+tnGq0ht18PIR4IGQdjspt6y+xtH72sU/DMRANHXEzN59hb35ht3gB2BZj5q1GtxgeT3jIBxIQ5+g4JwgANn6YnhlyxJdtXzWAemfIXshC4MnqYlFps3hYLqWi7Uo5mEOs8kFB4114kWDqJ3nVgUT5F7xacBGtAMzcgE9WzG8CtOVZFHCuN8PdD+f2gvEfbRnx6a8174W4UVzrPbzzqMQOfVtMKr3K9ONqoopTYRsBYBhuBJiNsUD3Ztb+pqwod0jwFOB8d3mImBNgmC7ngwckDzOjm9heyGRiiiHHbEmTKJN2G5xsfFnRakSHxvHCKbke+F60pNonUxHA87yQ120t4ORBaXw6c+LBStNdQecWQmBiNfQWKBD3OwsPwzeUcfv10t3KNBGLGPahLn4YMYoGiy6Htv1T3J/ehIGWtI1gxty3pcNE98GdXqzQMqo+pgUXPdH/2npdtSDCPqZLJ/ZLx8Zu7zICAhMc9O6T90MeIiEt48VWC4gy1iT0s4C5h41JrsZppJFd3M29OY8BB7iFPBeT+4MxvrP4CaQ2JgLcaMdi6gPZVcUy5QZnPzeyVe95zD6AhHFX9p7OZkThXocpO5BgBlZalU77cdVdzxUul8NsIgDMsODZiyLAoOqbsmCW5r8HNpSkpwX7zZe9XUeSe3A=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(136003)(376002)(396003)(451199021)(40470700004)(36840700001)(46966006)(186003)(26005)(1076003)(41300700001)(81166007)(356005)(6666004)(426003)(83380400001)(47076005)(336012)(36860700001)(2616005)(478600001)(54906003)(40460700003)(82310400005)(7416002)(82740400003)(70206006)(70586007)(4326008)(6916009)(40480700001)(316002)(5660300002)(36756003)(2906002)(103116003)(86362001)(8936002)(8676002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 17:59:34.8916
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3971b29a-7fbf-41fd-fdde-08db4812536d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF000145BA.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB5119

Refer to ARM DDI 0406C.d ID040418, B3-1345:

"A stage 2 translation with an input address range of 31-34 bits can
start the translation either:

- With a first-level lookup, accessing a first-level translation
  table with 2-16 entries.

- With a second-level lookup, accessing a set of concatenated
  second-level translation tables"

Thus, for a 32-bit IPA, there are no concatenated root-level tables,
so the root order is 0.

Also, refer to ARM DDI 0406C.d ID040418, B3-1348:
"Determining the required first lookup level for stage 2 translations

For a stage 2 translation, the output address range from the stage 1
translations determines the required input address range for the stage 2
translation. The permitted values of VTCR.SL0 are:
0b00 Stage 2 translation lookup must start at the second level.
0b01 Stage 2 translation lookup must start at the first level.

VTCR.T0SZ must indicate the required input address range. The size of
the input address region is 2^(32-T0SZ) bytes."

Thus, VTCR.SL0 = 1 (its maximum value) and VTCR.T0SZ = 0 when the size
of the input address region is 2^32 bytes.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v1 - New patch.

v2 - 1. Added Ack.

v3 - 1. Dropped Ack.
2. Rebased the patch based on the previous change.

v4 - 1. t0sz is 0 for 32-bit IPA on Arm32.
2. Updated the commit message to explain t0sz, sl0 and root_order.

v5 - 1. Rebased on top of the changes in the previous patch.

 xen/arch/arm/p2m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1fe3cccf46..6e13772cbc 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2265,7 +2265,7 @@ void __init setup_virt_paging(void)
         [6] = { 52,      12/*12*/,  4,          2 },
         [7] = { 0 }  /* Invalid */
 #else
-        [0] = { 0 },  /* Invalid */
+        [0] = { 32,      0/*0*/,    0,          1 },
         [1] = { 0 },  /* Invalid */
         [3] = { 0 }  /* Invalid */
 #endif
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 18:10:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 18:10:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527487.820144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSXw-00072b-MB; Fri, 28 Apr 2023 18:10:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527487.820144; Fri, 28 Apr 2023 18:10:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSXw-00072U-JO; Fri, 28 Apr 2023 18:10:16 +0000
Received: by outflank-mailman (input) for mailman id 527487;
 Fri, 28 Apr 2023 18:10:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SG5/=AT=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1psSXv-00072O-EG
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 18:10:15 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on20622.outbound.protection.outlook.com
 [2a01:111:f400:7e8d::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb62dfd4-e5ef-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 20:10:13 +0200 (CEST)
Received: from MW4PR04CA0303.namprd04.prod.outlook.com (2603:10b6:303:82::8)
 by CH0PR12MB5282.namprd12.prod.outlook.com (2603:10b6:610:d5::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24; Fri, 28 Apr
 2023 18:10:10 +0000
Received: from CO1NAM11FT085.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:82:cafe::d1) by MW4PR04CA0303.outlook.office365.com
 (2603:10b6:303:82::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.24 via Frontend
 Transport; Fri, 28 Apr 2023 18:10:09 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT085.mail.protection.outlook.com (10.13.174.137) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.24 via Frontend Transport; Fri, 28 Apr 2023 18:10:09 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 28 Apr
 2023 13:10:08 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 28 Apr 2023 13:10:08 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb62dfd4-e5ef-11ed-b224-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ET0JH+k70M+bUmA91gAXKdEp6HFlvmxEQOHMMCCZcrJnmjazsu4Sm8qL2pmdUGW8nhw/7wH5GcUMyWhh1TZEKXDErsON0JipUo0km+Nzpc3tUCmflmY2vawIcJYVBQ84sFBguRZUFwk/s2gHzfd10l4GQLcMxqFBXkoh23IyA5aPH8i7mkvpVw9zbHmI6MoXpBZ47+76IxYyI7MV6u7bCi9X+pWz/j9cILWxiFTdLE89dLT8cMcQwHu4g2aseIvYyAts713nN+4y+nZggWVOE4iojt/wKEtPVArWuWMNNaRaUZbTF2zjxv1JzxbG8QhjoHDAQJCy/K0ugbs0X5/3+w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=myh+PukQA6Z4gYCi8cMS0oYES1gEx4Q0mtnGq4k4UKY=;
 b=Cey6eQ6eCRKJdRbTdUwWohear6sl5O75BHw4jAIsj9rZyzSWcd9niWuqcWvxFjHljTuCEDK4AtoAKK2nJcrgqIySTvPB+7POAgS2Qp3Zo5L0oGtqH9kgi9vAGVG8fTo+n+tiiRyHL4THBu7TbYgvvoNJmFxvFxkdOda0qd1qFWkGA+nL4xMli78RDwe1br/sk34/2z//2Efhe6APB4S/VV37t7C2GGpnJV4BrDWDfBtLBsqNKxkbzbInmaEbHcFDdHxLXQpIt4zp54EMeKmUN3B9wi0ij4PYpwTcVWsarb4+CCdrAZH3iHtEmdMhWd5zEVz/NbfCaqbAFsdyip/p+g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=myh+PukQA6Z4gYCi8cMS0oYES1gEx4Q0mtnGq4k4UKY=;
 b=FnQQB28wgsrICb6IDg07DASzdOhOkCcM7bsx/n/RypEN4IeHCB1YgcXajPjH20e7YphLRq4A8fYLsyWXIos7HhUSsoD9ZJ4AsziQRQ6ILEMyGANJw5CwkZFrdkSzws9VyneZONCkIuTLMSPpHKbHUqn3LlOBGl+/jHh2qVG9OOw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <bobbyeshleman@gmail.com>, <alistair.francis@wdc.com>,
	<connojdavis@gmail.com>, Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Subject: [XEN] xen/riscv: Updated the license header
Date: Fri, 28 Apr 2023 19:09:52 +0100
Message-ID: <20230428180952.22708-1-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT085:EE_|CH0PR12MB5282:EE_
X-MS-Office365-Filtering-Correlation-Id: cb8bf205-d5bc-440f-bb13-08db4813cda1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uLCtab+tHg6IRGqgtppQDhuHevMCBSL0bsxylVvAW0i5WA9Bg5c+HBMm/v/I+LOmk20iNH9bUhGU5DY1gw5D/QKg1XxnrLU9tKu2Rd/gNjK6Up07fA4H4ViupTSBwBaXpYa51R/iQlYcEznnaX4pkrR+teVEij/zKRSJyERFAprefIYdgoWEsL0o2wfpa+6xtFIBK8ixHCG7UTrW9rjQYhO2blBx8+D3IZ2u6j206kyqb2qQycyIdJymWAi8wyp/y2R5tbBp7Ziy7jjFMqUx6O5ka505fhOFC1Rc2jklj+o4giUipMSf7bkq3BsD39uFVT2oO2PxJSzRzs82juhl0xzzBcAwiR7l1b2PJLlkIbQbN2pp5/uoOzjpBNCKho6gmnYeJxcIl3yOSQ8r6CmPbkUgwQd3+LcmgMtvAr8Zy4Ibn5cYlU0oqP9grCIqj9TLUxsyygr3OhXi1HtMqtYmxRx1/duwctgGV23Okqw4j/+8EN4L1vsvpIYDmPkEnWN/K877QVtrIX9/BudhN4fj9d3ePmcxc/ZYl47rMsR0fCF9g3apnbhIJj83WB7pVvtn1qg2CcfUhIMlUXR1cr2bkifB1TTM4CdNl6lf0GkQ1uvNQbQrmzx8Zy22VQA4OcOVt7OX6bF91sykRfkRN6F+f8VPDHa8ZLgmR3Fk29BlJoaAX19TCI9yaKSQXZZirWKNO2+f0Azp2odtO7a/0WSuVEvD3esfvk7gZS+Wd+paK1A=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(376002)(346002)(396003)(451199021)(36840700001)(40470700004)(46966006)(186003)(426003)(336012)(6666004)(966005)(36860700001)(54906003)(1076003)(82740400003)(6916009)(4326008)(26005)(83380400001)(2616005)(70206006)(70586007)(478600001)(356005)(316002)(47076005)(81166007)(41300700001)(2906002)(5660300002)(36756003)(15650500001)(40460700003)(86362001)(103116003)(82310400005)(8676002)(8936002)(40480700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Apr 2023 18:10:09.2845
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cb8bf205-d5bc-440f-bb13-08db4813cda1
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT085.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR12MB5282

Moved the SPDX license identifier into a separate comment of its own.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

This was highlighted in the following review -
https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg141930.html

 xen/arch/riscv/include/asm/csr.h            | 3 +--
 xen/arch/riscv/include/asm/riscv_encoding.h | 3 +--
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/xen/arch/riscv/include/asm/csr.h b/xen/arch/riscv/include/asm/csr.h
index 8215562343..be57dcce1c 100644
--- a/xen/arch/riscv/include/asm/csr.h
+++ b/xen/arch/riscv/include/asm/csr.h
@@ -1,6 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * SPDX-License-Identifier: GPL-2.0-only
- *
  * Copyright (C) 2015 Regents of the University of California
  */
 
diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h b/xen/arch/riscv/include/asm/riscv_encoding.h
index 43dd4f6981..58abe5eccc 100644
--- a/xen/arch/riscv/include/asm/riscv_encoding.h
+++ b/xen/arch/riscv/include/asm/riscv_encoding.h
@@ -1,6 +1,5 @@
+/* SPDX-License-Identifier: BSD-2-Clause */
 /*
- * SPDX-License-Identifier: BSD-2-Clause
- *
  * Copyright (c) 2019 Western Digital Corporation or its affiliates.
  *
  * Authors:
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Apr 28 18:16:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 18:16:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527495.820171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSdq-0007sb-Fn; Fri, 28 Apr 2023 18:16:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527495.820171; Fri, 28 Apr 2023 18:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psSdq-0007sU-CY; Fri, 28 Apr 2023 18:16:22 +0000
Received: by outflank-mailman (input) for mailman id 527495;
 Fri, 28 Apr 2023 18:16:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psSdp-0007sK-3a; Fri, 28 Apr 2023 18:16:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psSdo-0004y3-RE; Fri, 28 Apr 2023 18:16:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psSdo-0004UT-Eh; Fri, 28 Apr 2023 18:16:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psSdo-00007h-EG; Fri, 28 Apr 2023 18:16:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7802yRmMn4Oah+1rNby1/jU/P+xcrAEaNVJ4smMs764=; b=iD53B7HaudV1052+AP7uzN4qGs
	nV61vNwZETGmat/rhEkMOzDON6473cVYa702MCm+FRcMhjlNEG+Qq+rQT1G24qwFyRaQHLxEH4bbF
	roq2lFNZGCYMapUZWa5U0JkNieGOby9W9KxTuHIjpsaHYEqZN0bwbpTJVeWr9rEEkmhU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180471-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180471: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6a47ba2f7855f8fc094ec4f837e71a34ededb77b
X-Osstest-Versions-That:
    xen=8e974df445807bb4a3629ca51145c7d74ee85c8f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 18:16:20 +0000

flight 180471 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180471/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6a47ba2f7855f8fc094ec4f837e71a34ededb77b
baseline version:
 xen                  8e974df445807bb4a3629ca51145c7d74ee85c8f

Last test of basis   180459  2023-04-28 02:00:26 Z    0 days
Testing same since   180471  2023-04-28 15:02:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alejandro Vallejo <alejandro.vallejo@cloud.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8e974df445..6a47ba2f78  6a47ba2f7855f8fc094ec4f837e71a34ededb77b -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 20:02:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 20:02:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527505.820187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psUI1-0002ds-7G; Fri, 28 Apr 2023 20:01:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527505.820187; Fri, 28 Apr 2023 20:01:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psUI1-0002dl-4O; Fri, 28 Apr 2023 20:01:57 +0000
Received: by outflank-mailman (input) for mailman id 527505;
 Fri, 28 Apr 2023 20:01:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psUI0-0002db-7t; Fri, 28 Apr 2023 20:01:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psUI0-0007XL-1r; Fri, 28 Apr 2023 20:01:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psUHz-00005S-JC; Fri, 28 Apr 2023 20:01:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psUHz-0005nC-Ig; Fri, 28 Apr 2023 20:01:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b/qiAcFBVCv5Jcm8SwmbrXlqISw5C5eV9U0ALkYr4Tk=; b=XDfj+fX9z8ReTTgCxZ38AAaFXC
	jk7SVjvQDBde5xUWV9BzZHZL7jvDNWuGA1rZJ0xIJjpwKS4D18h0Mt5UunwbfqWfLhsSnkknoyPAl
	wmxBHg4yB0MobYYs7YO0LBPfwBCPg/72xPpGbLiYjTvQurpxFYG0DS2YebmEXbItEWH4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180460-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180460: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=0324adb647885932efc97eefcfe08f6a8db60ae1
X-Osstest-Versions-That:
    libvirt=74b86146ef8a4ae484d74f104b465bfc8cc73512
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Apr 2023 20:01:55 +0000

flight 180460 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180460/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180424
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180424
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180424
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              0324adb647885932efc97eefcfe08f6a8db60ae1
baseline version:
 libvirt              74b86146ef8a4ae484d74f104b465bfc8cc73512

Last test of basis   180424  2023-04-26 04:18:59 Z    2 days
Testing same since   180460  2023-04-28 04:20:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   74b86146ef..0324adb647  0324adb647885932efc97eefcfe08f6a8db60ae1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Apr 28 20:05:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Apr 2023 20:05:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527512.820196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psULY-0003Jo-Pm; Fri, 28 Apr 2023 20:05:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527512.820196; Fri, 28 Apr 2023 20:05:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psULY-0003Jh-Ms; Fri, 28 Apr 2023 20:05:36 +0000
Received: by outflank-mailman (input) for mailman id 527512;
 Fri, 28 Apr 2023 20:05:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xwQI=AT=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1psULX-0003Jb-Iv
 for xen-devel@lists.xenproject.org; Fri, 28 Apr 2023 20:05:35 +0000
Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com
 [2a00:1450:4864:20::12a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0918a969-e600-11ed-b224-6b7b168915f2;
 Fri, 28 Apr 2023 22:05:34 +0200 (CEST)
Received: by mail-lf1-x12a.google.com with SMTP id
 2adb3069b0e04-4efe8991bafso386021e87.0
 for <xen-devel@lists.xenproject.org>; Fri, 28 Apr 2023 13:05:34 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 q2-20020ac25a02000000b004b4cbc942a3sm3471056lfn.127.2023.04.28.13.05.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Apr 2023 13:05:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0918a969-e600-11ed-b224-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682712334; x=1685304334;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=0aND+n7/PKHOEQPKVPWNXIrcaugL76T49hwzjxzmNGU=;
        b=Rl7+/L59pveNTCO1IV9amW2mqiYZ0nJSn7ixOASwqOKUdakKz42WY9nXIZlVo+u9/8
         DKtCLkaQnfcK83PfVocSVv/7GeFpjC/9IjVYoHFDZWfT3fFwczZ0g62GdlmxCafyLHNE
         bLDdQ4XniROIJcwh2RrhsYTNsGoxUi7/g7XjpZL1IOoAeXtpOdZ4p0jRtcGwosPjQn+9
         1OYOdPEobvQWmNKbI1YG58rjIYB6ujtSV9F8MRZyUUHiwyhIRB2LX+d0GQDHJ4W4D5d7
         DpFEfji58BhHThcL3YM6jIUCf85SWd3IWp78WPY3UB5KHR/0r6UAFcO++JgUR7IJVed4
         0Qiw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682712334; x=1685304334;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=0aND+n7/PKHOEQPKVPWNXIrcaugL76T49hwzjxzmNGU=;
        b=FDVQVEp/sQRBD7IFLSIlf6mjbzr8j8gPjKaRjIj1QFasdsV+TZOcIT6MKdgxYIN5S/
         4xVSqOh0suEMwmiUA2xYvwE8gUEhH5We2hsy+6XMvMpaJ09wJGeD/P9G6oTIF5iU08DG
         NYNvrTvUT5o8PxyYoGLZ09ATAbTxbVG9MZufsP5bXBxfRwOFl6VOQTgfR0AbYfbUf1ef
         bUaTfW3w+ytaIyqACVyHc0fEoOZGSeNCIZWSabMvBtFvXjTc6EMvNiXfwurTbnhxjaQ1
         +sy77K/kn1dlGnJmgIfrD/eHopAryNl0SVk2kRknH+hO0p/sJ2OeplkQ6baX8tX+TTAH
         FalQ==
X-Gm-Message-State: AC+VfDyEEdtCsYEMTkXDrCAUahhx7NI9sMVI+Fqe1BZT3WGRxKdmbErW
	qXNbEHZ4Mt8PjAGr8h0ClHQ=
X-Google-Smtp-Source: ACHHUZ5j1F8D2DcPWLwMybnj8vOPjyp9/mfCHNRpZdfNlqJyx4IUH/FlROB0BT00LeSbblrTFz6r/g==
X-Received: by 2002:ac2:46f3:0:b0:4ec:8362:1880 with SMTP id q19-20020ac246f3000000b004ec83621880mr1911756lfo.48.1682712333975;
        Fri, 28 Apr 2023 13:05:33 -0700 (PDT)
Message-ID: <5316859bf081d2c00dae784e6700f55747a6635d.camel@gmail.com>
Subject: Re: [PATCH v5 2/4] xen/riscv: introduce setup_initial_pages
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Bob Eshleman <bobbyeshleman@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Connor Davis
 <connojdavis@gmail.com>,  xen-devel@lists.xenproject.org
Date: Fri, 28 Apr 2023 23:05:32 +0300
In-Reply-To: <2c424759-3072-cd07-913d-c45ae6791ce2@suse.com>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
	 <5b27693bcdf6d64381314aeef72cfe03dee8d73a.1681918194.git.oleksii.kurochko@gmail.com>
	 <67d8574f-2e0d-4eb6-19aa-67fe7645e35a@suse.com>
	 <ea2d5cfabb9ada64eb975369779ca430f38e9eec.camel@gmail.com>
	 <53257ae8-d306-8c7e-35ff-f3bc3947849b@suse.com>
	 <3d440048717892fe5d3ed7fe3255dc8c9f5d38a3.camel@gmail.com>
	 <2c424759-3072-cd07-913d-c45ae6791ce2@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.1 (3.48.1-1.fc38) 
MIME-Version: 1.0

Hi Jan,

On Mon, 2023-04-24 at 17:35 +0200, Jan Beulich wrote:
> On 24.04.2023 17:16, Oleksii wrote:
> > On Mon, 2023-04-24 at 12:18 +0200, Jan Beulich wrote:
> > > On 21.04.2023 18:01, Oleksii wrote:
> > > > On Thu, 2023-04-20 at 16:36 +0200, Jan Beulich wrote:
> > > > > On 19.04.2023 17:42, Oleksii Kurochko wrote:
> > > > > > +    csr_write(CSR_SATP,
> > > > > > +              ((unsigned long)stage1_pgtbl_root >> PAGE_SHIFT) |
> > > > > > +              satp_mode << SATP_MODE_SHIFT);
> > > > > > +
> > > > > > +    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == satp_mode )
> > > > > > +        is_mode_supported = true;
> > > > > > +
> > > > > > +    /* Clean MMU root page table and disable MMU */
> > > > > > +    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
> > > > > > +
> > > > > > +    csr_write(CSR_SATP, 0);
> > > > > > +    asm volatile("sfence.vma");
> > > > >
> > > > > I guess what you do in this function could do with some more
> > > > > comments. Looks like you're briefly enabling the MMU to check
> > > > > that what you wrote to SATP you can also read back. (Isn't
> > > > > there a register reporting whether the feature is available?)
> > > > I supposed there had to be one, but I couldn't find such a
> > > > register in the docs.
> > >
> > > Well, yes, interestingly the register is marked WARL, so it is
> > > apparently intended to be used for probing like you do. (I find
> > > the definition of WARL a little odd though, as such writes
> > > supposedly aren't necessarily value preserving. For SATP this
> > > might mean that translation is enabled by a write of an
> > > unsupported mode, with a different number of levels. This isn't
> > > going to work very well, I'm afraid.)
> > Agree. It will be an issue in the case of a different number of
> > levels.
> >
> > Then it looks like there is no way to check whether a SATP mode is
> > supported.
> >
> > So we have to rely on the fact that the developer specified
> > RV_STAGE1_MODE correctly in the config file.
>
> Well, maybe the spec could be clarified in this regard. That WARL
> behavior may be okay for some registers, but as said I think it isn't
> enough of a guarantee for SATP probing. Alistair, Bob - any thoughts?
I've re-read the manual regarding CSR_SATP, and the code detecting the
SATP mode will work fine. From the manual (4.1.11 Supervisor Address
Translation and Protection (satp) Register):

“Implementations are not required to support all MODE settings, and if
satp is written with an unsupported MODE, the entire write has no
effect; no fields in satp are modified.”

So that leaves no open questions; I'll provide a new patch series.

~ Oleksii




From xen-devel-bounces@lists.xenproject.org Sat Apr 29 00:02:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 00:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527520.820216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psY2b-0003Af-6d; Sat, 29 Apr 2023 00:02:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527520.820216; Sat, 29 Apr 2023 00:02:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psY2b-0003AX-1t; Sat, 29 Apr 2023 00:02:17 +0000
Received: by outflank-mailman (input) for mailman id 527520;
 Sat, 29 Apr 2023 00:02:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psY2Z-0003A1-OV; Sat, 29 Apr 2023 00:02:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psY2Y-0005M9-0T; Sat, 29 Apr 2023 00:02:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psY2X-0004AP-La; Sat, 29 Apr 2023 00:02:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psY2X-0002fa-L5; Sat, 29 Apr 2023 00:02:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=clc0cp2dg9JTeM2Qb5pdXLFcgYInEN+ZcBOQ7extyiQ=; b=cqLyAP69k9Bzk7u6jB/qUpEAuK
	0NE2UmgMsgCPqez1+P/9Z48YJCvlUZBC/D0IRf0h5YcAmD5e/bL0haFPdDb38oHPl9aAxMOk5Kz/y
	wyg6wvPsTUajaJWVSYDaCQIYMd0lpozSpqjDIOg1QHKDEYwVmwjvAa27VZmQC66q5Nwg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180461-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 180461: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:heisenbug
    linux-5.4:test-amd64-i386-pair:guest-start/debian:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ea7862c507eca54ea6caad9dcfc8bba5e749fbde
X-Osstest-Versions-That:
    linux=58f42ed1cd31238745bddd943c4f5849dc83a2ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Apr 2023 00:02:13 +0000

flight 180461 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180461/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail in 180428 pass in 180461
 test-amd64-i386-pair       25 guest-start/debian fail in 180428 pass in 180461
 test-armhf-armhf-xl-multivcpu 14 guest-start     fail in 180428 pass in 180461
 test-amd64-i386-libvirt-raw   7 xen-install      fail in 180443 pass in 180461
 test-amd64-i386-libvirt-pair 10 xen-install/src_host       fail pass in 180428
 test-armhf-armhf-xl-credit1  18 guest-start/debian.repeat  fail pass in 180443

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2  14 guest-start         fail in 180443 like 180369
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 180443 like 180369
 test-armhf-armhf-xl-credit2  18 guest-start/debian.repeat    fail  like 180352
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180352
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180369
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180369
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180369
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180369
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180369
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180369
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180369
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180369
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180369
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180369
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180369
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180369
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ea7862c507eca54ea6caad9dcfc8bba5e749fbde
baseline version:
 linux                58f42ed1cd31238745bddd943c4f5849dc83a2ac

Last test of basis   180369  2023-04-21 21:43:52 Z    7 days
Testing same since   180428  2023-04-26 09:43:27 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alyssa Ross <hi@alyssa.is>
  Anders Roxell <anders.roxell@linaro.org>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Baokun Li <libaokun1@huawei.com>
  Bhavya Kapoor <b-kapoor@ti.com>
  Brian Masney <bmasney@redhat.com>
  Chandan Babu R <chandan.babu@oracle.com>
  Chris Paterson (CIP) <chris.paterson2@renesas.com>
  Christoph Hellwig <hch@lst.de>
  Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
  Damien Le Moal <damien.lemoal@opensource.wdc.com>
  Dan Carpenter <error27@gmail.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Darrick J. Wong <darrick.wong@oracle.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Young <dyoung@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Douglas Raillard <douglas.raillard@arm.com>
  Ekaterina Orlova <vorobushek.ok@gmail.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Gao Xiang <hsiangkao@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gwangun Jung <exsociety@gmail.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Heiko Carstens <hca@linux.ibm.com>
  Heiko Stuebner <heiko@sntech.de>
  Ingo Molnar <mingo@kernel.org>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jason Wang <jasowang@redhat.com>
  Jianqun Xu <jay.xu@rock-chips.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Denose <jdenose@chromium.org>
  Jonathan Denose <jdenose@google.com>
  Juergen Gross <jgross@suse.com>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Marc Gonzalez <mgonzalez@freebox.fr>
  Mark Brown <broonie@kernel.org>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr>
  Naama Meir <naamax.meir@linux.intel.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nick Desaulniers <ndesaulniers@google.com>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Pingfan Liu <kernelfans@gmail.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Ritesh Harjani <riteshh@linux.ibm.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sasha Levin <sashal@kernel.org>
  Sebastian Basierski <sebastianx.basierski@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <thierry.reding@gmail.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Tomas Henzl <thenzl@redhat.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tudor Ambarus <tudor.ambarus@linaro.org>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vasily Gorbik <gor@linux.ibm.com>
  Xuan Zhuo <xuanzhuo@linux.alibaba.com>
  Yanjun Zhang <zhangyanjun@cestc.com>
  Ziyang Xuan <william.xuanziyang@huawei.com>
  Álvaro Fernández Rojas <noltari@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   58f42ed1cd31..ea7862c507ec  ea7862c507eca54ea6caad9dcfc8bba5e749fbde -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 03:05:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 03:05:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527527.820226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psath-0003az-B0; Sat, 29 Apr 2023 03:05:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527527.820226; Sat, 29 Apr 2023 03:05:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psath-0003as-78; Sat, 29 Apr 2023 03:05:17 +0000
Received: by outflank-mailman (input) for mailman id 527527;
 Sat, 29 Apr 2023 03:05:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PDeG=AU=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1psatf-0003ad-OV
 for xen-devel@lists.xenproject.org; Sat, 29 Apr 2023 03:05:15 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a75a8d12-e63a-11ed-8611-37d641c3527e;
 Sat, 29 Apr 2023 05:05:11 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4AC1060B68;
 Sat, 29 Apr 2023 03:05:10 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B76CEC433EF;
 Sat, 29 Apr 2023 03:05:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a75a8d12-e63a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682737509;
	bh=HhmEzj+Ra0V1lJ0JXB1vLQGIJK7CQHfjNVMp9u4weR8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=B+hJc5H85Lit+Q7ez48jXGr9nqIsL2fQePpHVL733ReLhd/LXYyE8y5T599ef8Gfn
	 KLjCkgrpN1zjVXYD5ELKC9tBfcwDbkbpTXtS5Ry8GN6/ZmZYfbUM/nCJ29eycSc42i
	 5HxhOxYNUYGiurHzBWKKxPmJFyof4XVJPoIKdIocLMJMitJC12UwxmzVblTbYsUAcI
	 sLQ1ZVp/f3RtUlXsKizwldkp+n+k5C2hnRWqXOgC5OT/STDG2RtbDtJlK3NDemYJDo
	 c5CeiEk3KakKphTl/dvTg0P90htRbH0cGNjswKoAAC/YGwzt3rp6LFMLUihKgoWHoJ
	 kt8cgp8FSb1Tg==
Date: Fri, 28 Apr 2023 20:05:07 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: alejandro.vallejo@cloud.com
cc: committers@xenproject.org, michal.orzel@amd.com, 
    xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com, 
    sstabellini@kernel.org
Subject: Re: xen | Failed pipeline for staging | 6a47ba2f
In-Reply-To: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
Message-ID: <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-960710983-1682733920=:974517"
Content-ID: <alpine.DEB.2.22.394.2304281905250.974517@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-960710983-1682733920=:974517
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2304281905251.974517@ubuntu-linux-20-04-desktop>

On Fri, 28 Apr 2023, GitLab wrote:
> Pipeline #852233694 triggered by
> Ganis
> had 3 failed jobs
> Failed jobs
> ✖
> test
> qemu-smoke-dom0less-arm64-gcc

This is a real failure on staging. Unfortunately it is intermittent. It
usually happens once every 3-8 tests for me.

The test script is:
automation/scripts/qemu-smoke-dom0less-arm64.sh

and for this test it is invoked without arguments. It starts 2
dom0less VMs in parallel, then dom0 does an xl network-attach and the
domU is supposed to set up eth0 and ping.

The failure is that nothing happens after "xl network-attach". The domU
never hotplugs any interfaces. I have logs that show that eth0 never
shows up and the only interface is lo no matter how long we wait.
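The check described above amounts to polling the guest's interface list until the hotplugged NIC appears. A minimal sketch (not the actual harness code; `wait_for_iface` and its injectable listing command are hypothetical, added here only to illustrate the failure mode where eth0 never shows up and only lo is present):

```shell
#!/bin/sh
# Hypothetical helper, not taken from qemu-smoke-dom0less-arm64.sh:
# poll an interface listing until IFACE appears, or give up after TIMEOUT
# seconds. LIST_CMD defaults to "ip -o link"; a different command (or shell
# function) can be injected, which also makes the helper easy to test.
#
# Usage: wait_for_iface IFACE [TIMEOUT_SECS] [LIST_CMD]
wait_for_iface() {
    iface=$1
    timeout=${2:-30}
    list_cmd=${3:-"ip -o link"}
    i=0
    while [ "$i" -lt "$timeout" ]; do
        # Success as soon as the interface shows up in the listing.
        if $list_cmd | grep -q "$iface"; then
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    # Timed out: the interface never appeared (the symptom described above).
    return 1
}
```

In the failing runs, a loop like this would exhaust its timeout because the domU only ever reports lo.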


On a hunch, I removed Alejandro's patches. Without them, I ran 20 tests
without any failures. I have not investigated further but it looks like
one of these 4 commits is the problem:

2023-04-28 11:41 Alejandro Vallejo    tools: Make init-xenstore-domain use xc_domain_getinfolist()
2023-04-28 11:41 Alejandro Vallejo    tools: Refactor console/io.c to avoid using xc_domain_getinfo()
2023-04-28 11:41 Alejandro Vallejo    tools: Create xc_domain_getinfo_single()
2023-04-28 11:41 Alejandro Vallejo    tools: Make some callers of xc_domain_getinfo() use xc_domain_getinfol 
--8323329-960710983-1682733920=:974517--


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 03:19:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 03:19:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527530.820236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psb74-0005DL-GZ; Sat, 29 Apr 2023 03:19:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527530.820236; Sat, 29 Apr 2023 03:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psb74-0005DE-Dl; Sat, 29 Apr 2023 03:19:06 +0000
Received: by outflank-mailman (input) for mailman id 527530;
 Sat, 29 Apr 2023 03:19:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psb73-0005D2-5K; Sat, 29 Apr 2023 03:19:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psb72-0008S7-So; Sat, 29 Apr 2023 03:19:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psb72-0000Ku-CQ; Sat, 29 Apr 2023 03:19:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psb72-0005ux-Bz; Sat, 29 Apr 2023 03:19:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=S+rNsLaSiqZwcMlz88sKbD1YbbF5ZmRpbuNkKb7Pq/M=; b=1dZWs/zkubUSXU3SUxYDhUlwSd
	Cv90AyAIynAOJVoWdTQFUfydooEAE/spkrTl4ic4/KWkfio6OZgLPWsDZxft/KOzBpT4Pg7/1QTz0
	Lyv++ynpN97O/YUdf65S1zPz0yele91ucxNRZMdEV28RCQQUdlVG3VSTZzMKmgxfQBKM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180464-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 180464: regressions - FAIL
X-Osstest-Failures:
    xen-4.16-testing:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    xen-4.16-testing:build-arm64:xen-build:fail:regression
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:heisenbug
    xen-4.16-testing:test-amd64-amd64-xl-pvhv2-intel:debian-fixup:fail:heisenbug
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:guest-start/debianhvm.repeat:fail:heisenbug
    xen-4.16-testing:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.16-testing:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7251cea957cbe4a0772651a4ab110ed76f689f96
X-Osstest-Versions-That:
    xen=6f6526ac7e342b2f125ffba649d4f13e22bbc860
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Apr 2023 03:19:04 +0000

flight 180464 xen-4.16-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180464/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 180407
 build-arm64                   6 xen-build      fail in 180445 REGR. vs. 180407

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ws16-amd64 7 xen-install fail in 180445 pass in 180464
 test-amd64-amd64-xl-pvhv2-intel 13 debian-fixup            fail pass in 180445
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 14 guest-start/debianhvm.repeat fail pass in 180445

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 180445 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 180445 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 180445 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 180445 n/a
 build-arm64-libvirt           1 build-check(1)           blocked in 180445 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 180445 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 180445 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 180445 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180407
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180407
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180407
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180407
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180407
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180407
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180407
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180407
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180407
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180407
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180407
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180407
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7251cea957cbe4a0772651a4ab110ed76f689f96
baseline version:
 xen                  6f6526ac7e342b2f125ffba649d4f13e22bbc860

Last test of basis   180407  2023-04-25 08:21:48 Z    3 days
Testing same since   180445  2023-04-27 13:08:28 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7251cea957cbe4a0772651a4ab110ed76f689f96
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Apr 27 14:54:26 2023 +0200

    update Xen version to 4.16.4
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 09:19:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 09:19:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527566.820246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psgjk-0001YF-8k; Sat, 29 Apr 2023 09:19:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527566.820246; Sat, 29 Apr 2023 09:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psgjk-0001Y8-5i; Sat, 29 Apr 2023 09:19:24 +0000
Received: by outflank-mailman (input) for mailman id 527566;
 Sat, 29 Apr 2023 09:19:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psgjj-0001Xy-CI; Sat, 29 Apr 2023 09:19:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psgji-0000Ss-OO; Sat, 29 Apr 2023 09:19:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psgji-0007fN-E6; Sat, 29 Apr 2023 09:19:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psgji-00015i-De; Sat, 29 Apr 2023 09:19:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gwWrv+5AgGaE20T+bgv+jR3WBmO/eMnOnahXBS6PdYk=; b=zy8rUnAJNhigmN/p6ZRCzSVUTc
	4Bns+zwS8OUvhd/EpGJ0MJj30ulSyDdotBLM8ndQn9T8w9sxbiVXvizFPWM+KaslEJlU1GmnIMf8i
	i+YzEv7Nxiy0wPik7dX6tle+KTE9PojQpo2rygLTS8JtqZnyyJnNVYPf+ddcfMJSLJXM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180469-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180469: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=05d50ba2d4668d43a835c5a502efdec9b92646e6
X-Osstest-Versions-That:
    qemuu=1eb95e1baef852d0971a1dd62a3293cd68f1ec35
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Apr 2023 09:19:22 +0000

flight 180469 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180469/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180449
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180449
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180449
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180449
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180449
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180449
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180449
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180449
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180449
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                05d50ba2d4668d43a835c5a502efdec9b92646e6
baseline version:
 qemuu                1eb95e1baef852d0971a1dd62a3293cd68f1ec35

Last test of basis   180449  2023-04-27 15:03:28 Z    1 days
Testing same since   180469  2023-04-28 12:35:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Hariharan T S <hariharan.ts@linux.vnet.ibm.com>
  Juan Quintela <quintela@redhat.com>
  Kautuk Consul <kconsul@linux.vnet.ibm.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Yohei Kojima <y-koj@outlook.jp>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   1eb95e1bae..05d50ba2d4  05d50ba2d4668d43a835c5a502efdec9b92646e6 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 09:27:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 09:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527573.820257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psgrZ-00036s-AJ; Sat, 29 Apr 2023 09:27:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527573.820257; Sat, 29 Apr 2023 09:27:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psgrZ-00036l-5v; Sat, 29 Apr 2023 09:27:29 +0000
Received: by outflank-mailman (input) for mailman id 527573;
 Sat, 29 Apr 2023 09:27:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NroF=AU=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1psgrX-00036f-BP
 for xen-devel@lists.xenproject.org; Sat, 29 Apr 2023 09:27:27 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0d4cc94c-e670-11ed-8611-37d641c3527e;
 Sat, 29 Apr 2023 11:27:25 +0200 (CEST)
Received: by mail-lf1-x12c.google.com with SMTP id
 2adb3069b0e04-4ec8eca56cfso970277e87.0
 for <xen-devel@lists.xenproject.org>; Sat, 29 Apr 2023 02:27:25 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 c15-20020a19760f000000b004d57a760e4dsm3677721lff.37.2023.04.29.02.27.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 29 Apr 2023 02:27:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d4cc94c-e670-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682760445; x=1685352445;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=mYJiKwSRaUWtPwC5UKGwfjq/gWyGJiPpfL51KGBsvn0=;
        b=ntVhqD6Ip3+xEqbqSIGyxZH4SDl0AFXLl7mdkMjZKMWggTtTnfMgDSN6SHPr1+cyKp
         xn94Ay//sHuvaR4TrurBuagDaE4/zExoSsfiPqCYp+oOWIc47v/t7LsdjqYA4WVN2DZY
         cnVxHFSTqXPRuNPUv7C5eosZeD4vhbSjPDj2kOOFKc9nP91YomExGvVTiGj7uW/sm1Wx
         uD86VHBXuRm9M3tn8rcp6Vjf/C2YshxK0YZ/NPJv3Yrcm8ymndRuNay5a6Fe/kzVFuLA
         DQqgaOPgMJ62T1o5h9X5gwWwrnPxM7wxiZooY30NN1AGbdF9mZnJ2dPCe66ZCjh2V/6S
         yzMA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682760445; x=1685352445;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=mYJiKwSRaUWtPwC5UKGwfjq/gWyGJiPpfL51KGBsvn0=;
        b=lcEGYRdLGX3k9suNdyYemIgu29VReILD1oJf9JQhMZhx2Yq+pA6F0GQGaoWKyPOUq+
         zVGCQ6r5+RFs5kjXuGvtN3/JU9dmrA3r9zlVTTC9iNCi8d+Q1uRUpAdCSdFpo42fcjWM
         lh9go7XxM8C5W5aTac0DHWpRPNLxdTWZZkN3h7RE8gX3UqSu2eOydALAkZQ5sd6cQRQ2
         2YHWq4iWYbiuiXM/emXRJ4gkHj9Lumm/66RIum7xaoM018SO42sfzToljym7D6B/Ffoi
         qXgpgfFMJMI9WeteVAYf92Yzx3huc/f2z0Ens+wviL2IwmFNcLtF+yxrO/DYpLGIyRRO
         aXVA==
X-Gm-Message-State: AC+VfDyORr/o+33pJovc6McbU9UiJ6AqUolwgIxyNEdB/kzsgWNbciNI
	MFTaUCJOEUkrkLulwf8L5Eg=
X-Google-Smtp-Source: ACHHUZ7jwvXhsOgAR0JgUljjJd0nqT17FmV2pViDO7ZjeTlMffZpFhqrQsVYjjS2RvRqFLtqGIohhg==
X-Received: by 2002:a19:f018:0:b0:4eb:3bb5:81c5 with SMTP id p24-20020a19f018000000b004eb3bb581c5mr2002476lfc.15.1682760444501;
        Sat, 29 Apr 2023 02:27:24 -0700 (PDT)
Message-ID: <6de9bb09b395ae15316a6c7fc523d72d8570c5ef.camel@gmail.com>
Subject: Re: [XEN] xen/riscv: Updated the license header
From: Oleksii <oleksii.kurochko@gmail.com>
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>, 
	xen-devel@lists.xenproject.org
Cc: bobbyeshleman@gmail.com, alistair.francis@wdc.com, connojdavis@gmail.com
Date: Sat, 29 Apr 2023 12:27:23 +0300
In-Reply-To: <20230428180952.22708-1-ayan.kumar.halder@amd.com>
References: <20230428180952.22708-1-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.1 (3.48.1-1.fc38) 
MIME-Version: 1.0

Hi Ayan,

On Fri, 2023-04-28 at 19:09 +0100, Ayan Kumar Halder wrote:
> Updated the license header in a separate comment of its own.
>
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
>
> This was highlighted in the following review -
> https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg141930.html
>
>  xen/arch/riscv/include/asm/csr.h            | 3 +--
>  xen/arch/riscv/include/asm/riscv_encoding.h | 3 +--
>  2 files changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/riscv/include/asm/csr.h
> b/xen/arch/riscv/include/asm/csr.h
> index 8215562343..be57dcce1c 100644
> --- a/xen/arch/riscv/include/asm/csr.h
> +++ b/xen/arch/riscv/include/asm/csr.h
> @@ -1,6 +1,5 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
>  /*
> - * SPDX-License-Identifier: GPL-2.0-only
> - *
>   * Copyright (C) 2015 Regents of the University of California
>   */
>  
> diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h
> b/xen/arch/riscv/include/asm/riscv_encoding.h
> index 43dd4f6981..58abe5eccc 100644
> --- a/xen/arch/riscv/include/asm/riscv_encoding.h
> +++ b/xen/arch/riscv/include/asm/riscv_encoding.h
> @@ -1,6 +1,5 @@
> +/* SPDX-License-Identifier: BSD-2-Clause */
>  /*
> - * SPDX-License-Identifier: BSD-2-Clause
> - *
>   * Copyright (c) 2019 Western Digital Corporation or its affiliates.
>   *
>   * Authors:

Looks fine to me.

I'll do my best to stick to what was mentioned here:
https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg141930.html

Reviewed-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 10:06:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 10:06:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527578.820266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pshSd-0007Zw-9V; Sat, 29 Apr 2023 10:05:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527578.820266; Sat, 29 Apr 2023 10:05:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pshSd-0007Zp-4z; Sat, 29 Apr 2023 10:05:47 +0000
Received: by outflank-mailman (input) for mailman id 527578;
 Sat, 29 Apr 2023 10:05:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NroF=AU=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pshSb-0007Zj-NY
 for xen-devel@lists.xenproject.org; Sat, 29 Apr 2023 10:05:45 +0000
Received: from mail-lf1-x129.google.com (mail-lf1-x129.google.com
 [2a00:1450:4864:20::129])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 66123d41-e675-11ed-b225-6b7b168915f2;
 Sat, 29 Apr 2023 12:05:41 +0200 (CEST)
Received: by mail-lf1-x129.google.com with SMTP id
 2adb3069b0e04-4edcdfa8638so963080e87.2
 for <xen-devel@lists.xenproject.org>; Sat, 29 Apr 2023 03:05:41 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 f12-20020ac2532c000000b004eb0dcc52ddsm3690891lfh.41.2023.04.29.03.05.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 29 Apr 2023 03:05:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66123d41-e675-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682762741; x=1685354741;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Qd7fTbCml8yhI5ilGvALDKsOLdO6KOdtx4cyQfkrzpU=;
        b=nbXCnd6Y7yjeA8IQ6HOg4ayKBn8S2DiRzyprTC4bwUSix5NN9DXyyUAkLyRvCLMy5x
         SU+/9jKRy3VOGc6jwDBLaod1wCJLNgA7wTrENUrn8R5ifRxMYLyo8lQwIlF3V2sdeGQS
         bf351Lx7jaBwUrlKgzt4yajLvV3mKnlR1SADjsf7sAN3QbZxk5+S1Efth7Rj8AcmaEc5
         fcEMrHuksXYG3Wy7EjwJoIXWEfadPMIH2YwCVfGkIVNKL7+ZVZf/3BD6l9ih7GyeW4Ff
         Sq71S97/h0eRn66AMKEGTNx+EyX1rliII7h1PhivdhiWff7x5Js74idekhCKdAqX5qwI
         CG4w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682762741; x=1685354741;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Qd7fTbCml8yhI5ilGvALDKsOLdO6KOdtx4cyQfkrzpU=;
        b=CQs04d9oXznU9xQqLHszYMbyKabaB+s1iysFGM9o9CdWkaRZ2P0NIJBOwAVT7MxDrM
         UOMZ5cumJrYFGQeykwsUukQ7FV9obE0Sci/cV+iAmoWsvaIef6r1u1BufNRaGbrv4ip3
         3zOU530/4/JZ6XxP5eKiRnhGW9hd3fEVTJ2Q8vlul31FZPxYMUVvCHf+ndSjMMdMOmQU
         +a1/G4M2FW4Xp1ynhWgApnXbfFGL1StfzCk0WomXi8wLdMU+6cu/C5eZkwJtB0f3ZWrE
         3cCSxdMKGJo5TX/oPv+PBLUIEq7PVKP4TE/68b5ZfAqYfuXSQrl+bGg4TMJaDZDdjkBH
         1niw==
X-Gm-Message-State: AC+VfDytTuTpHaeFNPfxwB0XsbEz6NT1aMlqD2GkLHCXIbosXglpsmFW
	3bdDRIjaP5t/i4aixcoVenA=
X-Google-Smtp-Source: ACHHUZ5RhWJRxhYkvatJGvgS1bNo0jV+DNqkvg2UjDGE3y9yKGRmyGemb0beF6vaC4oGBqrCrIMgOA==
X-Received: by 2002:a05:6512:20a:b0:4dd:9ddc:4463 with SMTP id a10-20020a056512020a00b004dd9ddc4463mr2246783lfo.5.1682762740929;
        Sat, 29 Apr 2023 03:05:40 -0700 (PDT)
Message-ID: <016a95e8cc1be45ce1821aba0570ff87973c4c35.camel@gmail.com>
Subject: Re: [PATCH v5 1/4] xen/riscv: add VM space layout
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Gianluca Guida
	 <gianluca@rivosinc.com>, Bob Eshleman <bobbyeshleman@gmail.com>, Alistair
	Francis <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, 
	xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>
Date: Sat, 29 Apr 2023 13:05:39 +0300
In-Reply-To: <5176b0bc-3727-e939-9776-ee4bfd732e32@xen.org>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
	 <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
	 <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
	 <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
	 <509ba3a2-0b85-d758-6915-7975d31a3437@suse.com>
	 <db3a9b3b-63db-89d1-5386-57eb7044b317@xen.org>
	 <d157b1e2-cfc5-f7b7-9443-16d1db9a4311@suse.com>
	 <5176b0bc-3727-e939-9776-ee4bfd732e32@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.1 (3.48.1-1.fc38) 
MIME-Version: 1.0

Hi Julien,
On Mon, 2023-04-24 at 12:08 +0100, Julien Grall wrote:
> > > > > > On 19.04.2023 17:42, Oleksii Kurochko wrote:
> > > > > > > + *
> > > > > > > =====================================================================
> > > > > > > + *    Start addr    |   End addr        |  Size  | VM area description
> > > > > > > + *
> > > > > > > =====================================================================
> > > > > > > + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | Xen
> > > > > > > + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | FDT
> > > > > > > + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | Fixmap
> > > > > >
> > > > > > These are all L2 slot 511 aiui, which may be worth mentioning,
> > > > > > especially since the top bits don't match the top bits further
> > > > > > down in the table (because of the aliasing).
> > > > >
> > > > > Then I'll add one more column where I'll put the slot number.
> > > > >
> > > > > >
> > > > > > > + *     .................. unused ..................
> > > > > >
> > > > > > This is covering slot 510, which again may be worth mentioning.
> > > > > >
> > > > > > > + * 0000003200000000 |  0000007f40000000 | 331 GB | Direct map(L2 slot: 200-509)
> > > > > > > + * 0000003100000000 |  0000003140000000 |  1 GB  | Frametable(L2 slot: 196-197)
> > > > > >
> > > > > > 1 GB is, if I'm not mistaken, a single L2 slot.
> > > > > Yeah, it can be misunderstood. I meant [196, 197), so 197 isn't
> > > > > included. I'll update the table.
> > > > >
> > > > > >
> > > > > > Also assuming a 32-byte struct page_info (I don't think you'll
> > > > > > get away with less than that, when even Arm32 requires this
> > > > > > much), there's a mismatch between direct map and frame table
> > > > > > size: with 4k pages, the scaling factor would be 128 if I'm
> > > > > > not mistaken. So perhaps you really mean 3 GB here, to cover
> > > > > > (slightly more than) the 331 GB of memory you mean to be able
> > > > > > to map?
> > > > > For RV64 the page_info size will be 56 bytes (and 32 bytes for
> > > > > RV32), but you are right: it should be 3 GB in that case, which
> > > > > will be enough (taking into account both available sizes of the
> > > > > page_info structure).
> > > >
> > > > As to the plan of it being 56 bytes (i.e. like on Arm): Arm
> > > > forever has had a 64-bit padding field at the end. My best guess
> > > > is that the field was introduced to have a 32-byte struct on
> > > > Arm32.
> > >
> > > I can't exactly remember. But I would like to rework the struct
> > > page_info on Arm64 because...
> > >
> > > But then why
> > > > artificially increase the struct from 48 to 56 bytes on Arm64?
> > > > And hence why have the same oddity on RV64?
> > >
> > >
> > > ... with 56 bytes, some struct page_info may cross a cache
> > > boundary.
> >
> > I guess that's going to be challenging, unless you mean to go
> > further up to 64 bytes?
>
> Yes.
>
> >
> > > For RISC-V, I would recommend to make sure the struct page_info
> > > will never cross a cache boundary.
Do you mean that sizeof(struct page_info) <= cache line size?

> >
> > Since going up to 64 bytes is wasteful,
>
> Well yes. But this is a trade-off between performance and memory
> usage.
> With the current situation, you may have to pull two cache lines for
> struct page_info.
Just for my understanding:

struct page_info will consume two cache lines (this happens when
consecutive instances cross a cache line boundary) if they are not
aligned on a 64-byte boundary, as the cache line size on Arm is 128
bytes (if one believes Arm's asm/cache.h).

Shouldn't alignment (by adding padding fields inside page_info) fix
the issue? It looks like if the size of page_info is not aligned, it
will always be a performance issue.

>
> I suspect you might see some slowdown when using grants. But I don't
> have any concrete numbers.
>
> > and going down to 32 bytes likely
> > isn't going to be easy, sticking to 48 bytes for now would seem
> > reasonable to me.
>
> It may be more difficult to argue for an increase (if we notice any
> performance degradation) in the future, because this would reduce
> the memory usable for every user.
>
> Anyway, I haven't fully explored the problem on Arm yet, and it is
> possible we could deal with any performance degradation differently
> (e.g. re-order the fields and/or slightly increase/decrease the
> size).
>
> I thought I would point it out just in case the RISC-V folks care
> about it.
I think it will be useful, as RISC-V uses the same page_info
structure.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 10:25:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 10:25:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527584.820280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pshlb-0001ec-TH; Sat, 29 Apr 2023 10:25:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527584.820280; Sat, 29 Apr 2023 10:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pshlb-0001eV-QA; Sat, 29 Apr 2023 10:25:23 +0000
Received: by outflank-mailman (input) for mailman id 527584;
 Sat, 29 Apr 2023 10:25:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pshlb-0001e5-7I; Sat, 29 Apr 2023 10:25:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pshlb-0001wY-4w; Sat, 29 Apr 2023 10:25:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pshla-0001tr-N4; Sat, 29 Apr 2023 10:25:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pshla-0006el-Mf; Sat, 29 Apr 2023 10:25:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6nHhCZU6X5B6IkqKCyvxaG4oZQLPR4/8NN15CuhXJvg=; b=CZFZ6NxKWIHeShcTnWcrRB7bLz
	5kE/2pmCl3ojnNzUodOA8DEdVMbjGGMEOKh63NMT2BSLYtv4jd0QKXWNCvoOYeHiUKwqr6s52Dtpo
	y1Fs+RkgC5/1ymQxQpfrp9xaHL0PAkti8KmqkIaM8P/otvD68JqkxG8wsOEl2WVk0wl8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180478-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 180478: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=7251cea957cbe4a0772651a4ab110ed76f689f96
X-Osstest-Versions-That:
    xen=6f6526ac7e342b2f125ffba649d4f13e22bbc860
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Apr 2023 10:25:22 +0000

flight 180478 xen-4.16-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180478/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180407
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180407
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180407
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180407
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180407
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180407
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180407
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180407
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180407
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180407
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180407
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180407
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  7251cea957cbe4a0772651a4ab110ed76f689f96
baseline version:
 xen                  6f6526ac7e342b2f125ffba649d4f13e22bbc860

Last test of basis   180407  2023-04-25 08:21:48 Z    4 days
Testing same since   180445  2023-04-27 13:08:28 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6f6526ac7e..7251cea957  7251cea957cbe4a0772651a4ab110ed76f689f96 -> stable-4.16


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 11:42:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 11:42:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527598.820290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psixU-0001SM-IP; Sat, 29 Apr 2023 11:41:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527598.820290; Sat, 29 Apr 2023 11:41:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psixU-0001SF-Ek; Sat, 29 Apr 2023 11:41:44 +0000
Received: by outflank-mailman (input) for mailman id 527598;
 Sat, 29 Apr 2023 11:41:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/eqU=AU=citrix.com=prvs=4767ec71a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1psixT-0001S9-70
 for xen-devel@lists.xenproject.org; Sat, 29 Apr 2023 11:41:43 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cd6f0b23-e682-11ed-8611-37d641c3527e;
 Sat, 29 Apr 2023 13:41:40 +0200 (CEST)
Received: from mail-bn7nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 Apr 2023 07:41:37 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5634.namprd03.prod.outlook.com (2603:10b6:208:285::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22; Sat, 29 Apr
 2023 11:41:33 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.024; Sat, 29 Apr 2023
 11:41:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd6f0b23-e682-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682768500;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=l9kbN7uogL6OLJNgtXYrZFuBuR84eLX0U/jpea1E274=;
  b=c2mYqWt/r0NJLzsIydvaz2AN/O6uIvUbmjwuTTHo/QIz0BuO0fv1lLzL
   SgVkqV9pWHv1uETf9MqUUT2yfdo+TZfOoplWgO8opO7tjxiqeKskmQ8YE
   QyLwj6qpyxZ3MRNkQ74xnFsk97tEpcybMsqiao83jNJ3UBFqjXV9409mS
   I=;
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <ca0144a6-2c57-0cc3-fd27-5dbe59491ef3@citrix.com>
Date: Sat, 29 Apr 2023 12:41:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: andrew.cooper3@citrix.com
Subject: Re: xen | Failed pipeline for staging | 6a47ba2f
Content-Language: en-GB
To: Stefano Stabellini <sstabellini@kernel.org>, alejandro.vallejo@cloud.com
Cc: committers@xenproject.org, michal.orzel@amd.com,
 xen-devel@lists.xenproject.org
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
 <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0342.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18c::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BLAPR03MB5634:EE_
X-MS-Office365-Filtering-Correlation-Id: 62dd8ce5-6926-4cc1-1e92-08db48a6ae3b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 62dd8ce5-6926-4cc1-1e92-08db48a6ae3b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Apr 2023 11:41:33.1639
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Dn9LalAbkNt4I2vuVowT8iyKaPo37ixW4wkoaWCsFiNe6gyNbetHDNK/26LeLw9c8JtbYsL0lADEUwTGlIjZpSFsQl0QBfanEVDNXx5JoVY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5634

On 29/04/2023 4:05 am, Stefano Stabellini wrote:
> On Fri, 28 Apr 2023, GitLab wrote:
>> Pipeline #852233694 triggered by
>> [568538936b4ac45a343cb3a4ab0c6cda?s=48&d=identicon]
>> Ganis
>> had 3 failed jobs
>> Failed jobs
>> ✖
>> test
>> qemu-smoke-dom0less-arm64-gcc
> This is a real failure on staging. Unfortunately it is intermittent. It
> usually happens once every 3-8 tests for me.
>
> The test script is:
> automation/scripts/qemu-smoke-dom0less-arm64.sh
>
> and for this test it is invoked without arguments. It is starting 2
> dom0less VMs in parallel, then dom0 does a xl network-attach and the
> domU is supposed to setup eth0 and ping.
>
> The failure is that nothing happens after "xl network-attach". The domU
> never hotplugs any interfaces. I have logs that show that eth0 never
> shows up and the only interface is lo no matter how long we wait.
>
>
> On a hunch, I removed Alejandro patches. Without them, I ran 20 tests
> without any failures. I have not investigated further but it looks like
> one of these 4 commits is the problem:
>
> 2023-04-28 11:41 Alejandro Vallejo    tools: Make init-xenstore-domain use xc_domain_getinfolist()
> 2023-04-28 11:41 Alejandro Vallejo    tools: Refactor console/io.c to avoid using xc_domain_getinfo()
> 2023-04-28 11:41 Alejandro Vallejo    tools: Create xc_domain_getinfo_single()
> 2023-04-28 11:41 Alejandro Vallejo    tools: Make some callers of xc_domain_getinfo() use xc_domain_getinfol 

In commit order (reverse of above), these patches are:

1) Modify the python bindings and xenbaked
2) Introduce a new library function with a better API/ABI
3) Modify xenconsoled
4) Modify init-xenstore-domain

The test isn't using anything from 4 or 1, and 2 definitely isn't
breaking anything on its own.

That just leaves 3.  This test does activate xenconsoled by virtue
of invoking xencommons, but that doesn't help explain why a change in
xenconsoled interferes (and only intermittently on this one single test)
with `xl network-attach`.

The xenconsoled change does have a correctness fix in it, requiring
xenconsoled to ask for all domains' info in one go.  This does mean it's
hypercall-buffering (i.e. bouncing) a 4M array now, where previously it
racily pieced together which VMs had come and gone.

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 12:48:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 12:48:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527627.820301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psjzZ-0008Hj-RO; Sat, 29 Apr 2023 12:47:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527627.820301; Sat, 29 Apr 2023 12:47:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psjzZ-0008Hc-MS; Sat, 29 Apr 2023 12:47:57 +0000
Received: by outflank-mailman (input) for mailman id 527627;
 Sat, 29 Apr 2023 12:47:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psjzX-0008HS-VL; Sat, 29 Apr 2023 12:47:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psjzX-000548-M5; Sat, 29 Apr 2023 12:47:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psjzX-0007xC-65; Sat, 29 Apr 2023 12:47:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psjzX-0005ih-5c; Sat, 29 Apr 2023 12:47:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ijaHJmv/HMztDBKRzNvZQJCiZr315DECEZuR0n2DrVo=; b=haOwZMn0ewah4NQlBvvTgAe5Pj
	rtege8+RPeMZQshm4lnz7sw8TVODl3HyXPF7T2ff831Lxpv7T9T0lMLE2Oc1BgxL0um19/oegPA3P
	bRUUs+Bkj70P9lLjzvxZiRe8Qfxte+c19SFzV4DH7x0Jg+4dJfUjtmbCZQ48ppXUcrnQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180475-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180475: regressions - trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    xen-unstable:build-armhf:host-build-prep:fail:regression
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=8e974df445807bb4a3629ca51145c7d74ee85c8f
X-Osstest-Versions-That:
    xen=dde20f7dc182fdfeeb6c55648979326bb982ca8c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Apr 2023 12:47:55 +0000

flight 180475 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180475/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 180452
 build-armhf                   5 host-build-prep          fail REGR. vs. 180452

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180452
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180452
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180452
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180452
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180452
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180452
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180452
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180452
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180452
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  8e974df445807bb4a3629ca51145c7d74ee85c8f
baseline version:
 xen                  dde20f7dc182fdfeeb6c55648979326bb982ca8c

Last test of basis   180452  2023-04-27 18:45:25 Z    1 days
Testing same since   180475  2023-04-28 18:13:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

------------------------------------------------------------
commit 8e974df445807bb4a3629ca51145c7d74ee85c8f
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Wed Apr 26 02:16:16 2023 +0200

    automation: include tail of serial log in the gitlab output
    
    Make it a bit easier to see what has failed.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3822b16a17dfa6396009c4acaf2ae660f933566f
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Wed Apr 26 02:16:15 2023 +0200

    automation: PCI passthrough tests on ADL hw
    
    Add a simple PCI passthrough test to both PV and HVM domUs. It passes
    through a network adapter (the only one in the system), gets an IP via
    DHCP (first basic test) and then pings the gateway (second basic test).
    Finally, if the device is supposed to use MSI or MSI-X (as set in the
    PCIDEV_INTR test variable), check that it is in use via /proc/interrupts.
    
    On the current runner, the device in question is this:
    03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3] (rev 03)
            Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:7d25]
            Flags: bus master, fast devsel, latency 0, IRQ 18
            Memory at 50400000 (32-bit, non-prefetchable) [size=1M]
            Memory at 50500000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: [40] Power Management version 3
            Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
            Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
            Capabilities: [a0] Express Endpoint, MSI 00
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Device Serial Number ...
            Capabilities: [1c0] Latency Tolerance Reporting
            Capabilities: [1f0] Precision Time Measurement
            Capabilities: [1e0] L1 PM Substates
            Kernel driver in use: igc
            Kernel modules: igc
    
    With the current Xen version, it uses MSI-X under PV and MSI under HVM.
    
    This patch moves the domU config to a variable, to make it configurable
    on a per-test basis. It also adds a few comments for visual separation
    of the tests.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
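The final check the commit message describes (confirming MSI/MSI-X usage via
/proc/interrupts) can be sketched as below. This is an illustrative stand-in,
not the actual osstest/gitlab test script; the function name, the sample log
lines, and the string matching are all assumptions for demonstration.

```python
from typing import Optional

def intr_mode_in_use(proc_interrupts: str, driver: str) -> Optional[str]:
    """Return 'MSI-X', 'MSI' or None depending on how `driver` gets interrupts."""
    for line in proc_interrupts.splitlines():
        if driver not in line:
            continue
        # /proc/interrupts lines name the interrupt chip, e.g.:
        #  126:  4213  7  PCI-MSIX-0000:03:00.0  1-edge  igc-rx-0
        # Check MSI-X first, since "PCI-MSIX" also contains "MSI".
        if "MSIX" in line or "MSI-X" in line:
            return "MSI-X"
        if "MSI" in line:
            return "MSI"
    return None

# Hypothetical sample output, shaped like /proc/interrupts:
sample = """\
  24:    100    0   xen-pirq   -ioapic-level   eth0
 126:   4213    7   PCI-MSIX-0000:03:00.0   1-edge   igc-rx-0
"""
print(intr_mode_in_use(sample, "igc"))  # MSI-X
```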

commit 937e73feca9abaea06ec496cd93f8da8bd3b70bf
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Wed Apr 26 02:16:14 2023 +0200

    automation: wait for the login prompt as test end marker
    
    The login prompt is printed after all the startup (test) scripts, so
    wait for that instead of the "passed" marker, and only then check
    whether the test passed. Before this patch there was a race: the
    "passed" marker could already have been printed, but the final check
    would fail because the login prompt wasn't there yet.
    
    Also, modify etc/issue in the domU rootfs to avoid confusing the domU
    prompt with dom0's, and use the dom0 one as the test end marker.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit ac58d7fda63fecc6fae24f7824dbe033d001833e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 26 15:34:30 2023 +0100

    CI: Remove all use of /bin/false as a ROM
    
    As the recent work to get PCI Passthrough testing working shows, putting
    `/bin/false` as a ROM into guest context doesn't work so well.
    
    For all ROM paths where we're skipping the build, use a slightly-plausible but
    likely non-existent path instead.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 09c7179f0a2c66d4d1716cc41c498349bf78811b
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu Apr 27 14:25:59 2023 +0100

    xen/misra: xen-analysis.py: fix return error on PhaseExceptions
    
    Currently the script's return code is 0 even if an exception is
    raised, because the return code is set only if the exception object
    has the errorcode member.
    
    Fix the issue by returning the errorcode member when it exists, and
    otherwise using a generic value different from 0.
    
    Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
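The fix amounts to making the "errorcode member may be missing" case explicit.
A minimal sketch, assuming a stand-in PhaseExceptions class (the real one lives
in xen-analysis.py) and an illustrative helper name:

```python
class PhaseExceptions(Exception):
    """Stand-in for the script's exception type; errorcode is optional."""
    def __init__(self, msg, errorcode=None):
        super().__init__(msg)
        if errorcode is not None:
            self.errorcode = errorcode

def exit_code_for(exc: Exception) -> int:
    # getattr with a default covers the case where the member doesn't
    # exist, instead of silently leaving the return code at 0. The script
    # would then do sys.exit(exit_code_for(exc)).
    return getattr(exc, "errorcode", 1)

print(exit_code_for(PhaseExceptions("cppcheck failed", errorcode=9)))  # 9
print(exit_code_for(PhaseExceptions("unexpected failure")))            # 1
```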
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 13:19:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 13:19:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527632.820310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pskTq-0003Ie-9v; Sat, 29 Apr 2023 13:19:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527632.820310; Sat, 29 Apr 2023 13:19:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pskTq-0003IX-6p; Sat, 29 Apr 2023 13:19:14 +0000
Received: by outflank-mailman (input) for mailman id 527632;
 Sat, 29 Apr 2023 13:19:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pskTp-0003IN-P0; Sat, 29 Apr 2023 13:19:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pskTp-0005jb-Es; Sat, 29 Apr 2023 13:19:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pskTp-0001Gu-4o; Sat, 29 Apr 2023 13:19:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pskTp-0007ZR-4N; Sat, 29 Apr 2023 13:19:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wscOpBL5Rl/6rtyFhHXQEyxPXfjSBcBrTXBQCKkHSx4=; b=r+wDLGl8VsFFwJGw8nMAdBwZSw
	8j4OKHWA4uwmtGEBE/2VBFskz76NTsVkH/X+v0S6ZZL8HoKWDlWhxaRzdPo6reXgFJl2hK7zO0+tS
	xwuXIYWHcCBzF+5q/ZDvEF9E3hSKN1JIatYJrVqw5P2LLyMFOGvreWYRgU0nE3xeACtU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180473-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180473: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-armhf:<job status>:broken:regression
    linux-linus:build-armhf:host-build-prep:fail:regression
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=33afd4b76393627477e878b3b195d606e585d816
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Apr 2023 13:19:13 +0000

flight 180473 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180473/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   5 host-build-prep          fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                33afd4b76393627477e878b3b195d606e585d816
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   12 days
Failing since        180281  2023-04-17 06:24:36 Z   12 days   20 attempts
Testing same since   180473  2023-04-28 16:17:49 Z    0 days    1 attempts

------------------------------------------------------------
1949 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

(No revision log; it would be 225289 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 13:35:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 13:35:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527640.820320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pskjA-0005jG-Ro; Sat, 29 Apr 2023 13:35:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527640.820320; Sat, 29 Apr 2023 13:35:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pskjA-0005j9-Nc; Sat, 29 Apr 2023 13:35:04 +0000
Received: by outflank-mailman (input) for mailman id 527640;
 Sat, 29 Apr 2023 13:35:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Ech=AU=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pskj9-0005j3-2K
 for xen-devel@lists.xenproject.org; Sat, 29 Apr 2023 13:35:03 +0000
Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com
 [66.111.4.27]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a20c844d-e692-11ed-b225-6b7b168915f2;
 Sat, 29 Apr 2023 15:35:00 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 4D8AC5C0176;
 Sat, 29 Apr 2023 09:34:57 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Sat, 29 Apr 2023 09:34:57 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sat,
 29 Apr 2023 09:34:55 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a20c844d-e692-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1682775297; x=1682861697; bh=f/hYQRnJOgnhGx13UV29iWjW/ZCGlPtK+TI
	sFDedytI=; b=D2DDCqtwAhN9/mPazwUU4k/Pg8dDPCBFEygSBB5aVGoiPtXQiyM
	6AFkK948/ZYcFea0B5ZZFn7plCjZdYLJqaRJ6zuNCuEp5Rp49ZnhalLnk077fNoU
	mQwZfsxJW+rqor3OwU00WHJVC2VxUQJgVQq0p6ty8gWK6F3JCZdT0oQTxFTByPyy
	OVIKoorpVhl3EaS3xEUAowFmp5J0+B2uXzKvTCHcALpWGsW3dWwBWRo8yJOHZpMy
	mH/4GIZrBr6cNtfNCv13Mvra1IaIrT4dH2vVEGZMCMWH/fntyF9QaRS3rLKi0lxs
	jX+W4tgRAok/Am3waNOvaN3dvGBxy1zKWQA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1682775297; x=1682861697; bh=f/hYQRnJOgnhG
	x13UV29iWjW/ZCGlPtK+TIsFDedytI=; b=SMr/73E4C6EDPsgl11xUalGpabatM
	Pf3lo/9iF+4KGyFA38SkhWkPcfb1ysVHHISTLPAtJ/oUCkn2hZHEakAkI8Adxyjf
	NbYMD+4ZEXOTAfnqr+wah9wSwz8IAdGCsOQCEAN0MabpSjfEZuHCHRMKc2UlB2Rh
	40ZgdlVDfsW45Qoxt/beeHtq+0UYtqqHohMf4sNuWl8Kmb8UbirjGShF5wjOsIvl
	9uPbEoXsTvOcBQwGJko5VC/hAAMh9aADuuWczlEcdI3f9yap/8olQFyz/pIxSH+e
	W2t5vYxivb+sUIg7JgOrvZScNYU92xL8HK2yI4sA96U61y4q/EDM1Up/A==
X-ME-Sender: <xms:AB1NZPYHmYj09dKZ9LPJDPJ2Zd7FyvTenC0SSHSzEw4PM_dNqfZn8w>
    <xme:AB1NZOaDDZkxkmjP9JMUEnvXtFq3sRN8JasY1061J4m9AffmQ-TwceeUsCD7uj3Bi
    Rw8S6YSU70roQ>
X-ME-Received: <xmr:AB1NZB8lqlJOk_-EtXcxArmyLeD5BA_vC7GLegxDe9DizVES_4xTg0NqvBpMSwr5CZQLjGJdLk778YetkoT3zmNLgR5B0Qpl_4c>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedvtddggeduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:AB1NZFoUQ2_0oEKt7b3PRFzRM-EyaruV4gLzesHabUYSefZWG7krPg>
    <xmx:AB1NZKpKFK80a4z9ACfAK7YNAIyRH32XAQFZVLPiLEZOb0-3IYOQrQ>
    <xmx:AB1NZLQiV4hB5sV7fZavbCObicSABx688fmwmFBq8BVK5UhYwke18w>
    <xmx:AR1NZNWqdWd1sMaXdpd0ZPuF5VhXxNAp8fhFAPWT9jsivJenqTvh6A>
Feedback-ID: i1568416f:Fastmail
Date: Sat, 29 Apr 2023 15:34:53 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: andrew.cooper3@citrix.com
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	alejandro.vallejo@cloud.com, committers@xenproject.org,
	michal.orzel@amd.com, xen-devel@lists.xenproject.org
Subject: Re: xen | Failed pipeline for staging | 6a47ba2f
Message-ID: <ZE0c/dEaIUglww+g@mail-itl>
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
 <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
 <ca0144a6-2c57-0cc3-fd27-5dbe59491ef3@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="zRbvTpPLxZ4MpD3L"
Content-Disposition: inline
In-Reply-To: <ca0144a6-2c57-0cc3-fd27-5dbe59491ef3@citrix.com>


--zRbvTpPLxZ4MpD3L
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Sat, 29 Apr 2023 15:34:53 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: andrew.cooper3@citrix.com
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	alejandro.vallejo@cloud.com, committers@xenproject.org,
	michal.orzel@amd.com, xen-devel@lists.xenproject.org
Subject: Re: xen | Failed pipeline for staging | 6a47ba2f

On Sat, Apr 29, 2023 at 12:41:26PM +0100, andrew.cooper3@citrix.com wrote:
> On 29/04/2023 4:05 am, Stefano Stabellini wrote:
> > On Fri, 28 Apr 2023, GitLab wrote:
> >> Pipeline #852233694 triggered by
> >> [568538936b4ac45a343cb3a4ab0c6cda?s=48&d=identicon]
> >> Ganis
> >> had 3 failed jobs
> >> Failed jobs
> >> ✖
> >> test
> >> qemu-smoke-dom0less-arm64-gcc
> > This is a real failure on staging. Unfortunately it is intermittent. It
> > usually happens once every 3-8 tests for me.
> >
> > The test script is:
> > automation/scripts/qemu-smoke-dom0less-arm64.sh
> >
> > and for this test it is invoked without arguments. It is starting 2
> > dom0less VMs in parallel, then dom0 does a xl network-attach and the
> > domU is supposed to setup eth0 and ping.
> >
> > The failure is that nothing happens after "xl network-attach". The domU
> > never hotplugs any interfaces. I have logs that show that eth0 never
> > shows up and the only interface is lo no matter how long we wait.
> >
> >
> > On a hunch, I removed Alejandro patches. Without them, I ran 20 tests
> > without any failures. I have not investigated further but it looks like
> > one of these 4 commits is the problem:
> >
> > 2023-04-28 11:41 Alejandro Vallejo    tools: Make init-xenstore-domain use xc_domain_getinfolist()
> > 2023-04-28 11:41 Alejandro Vallejo    tools: Refactor console/io.c to avoid using xc_domain_getinfo()
> > 2023-04-28 11:41 Alejandro Vallejo    tools: Create xc_domain_getinfo_single()
> > 2023-04-28 11:41 Alejandro Vallejo    tools: Make some callers of xc_domain_getinfo() use xc_domain_getinfol
> 
> In commit order (reverse of above), these patches are:
> 
> 1) Modify the python bindings and xenbaked
> 2) Introduce a new library function with a better API/ABI
> 3) Modify xenconsoled
> 4) Modify init-xenstore-domain
> 
> The test isn't using anything from 4 or 1, and 2 definitely isn't
> breaking anything on its own.
> 
> That just leaves 3.  This test does activate xenconsoled by virtue
> of invoking xencommons, but that doesn't help explain why a change in
> xenconsoled interferes (and only intermittently on this one single test)
> with `xl network-attach`.
> 
> The xenconsoled change does have a correctness fix in it, requiring
> xenconsoled to ask for all domains' info in one go.  This does mean it's
> hypercall-buffering (i.e. bouncing) a 4M array now where previously it
> was racy figuring out which VMs had come and gone.

Can it be that xl network-attach fails and that failure is silently
ignored by the test?
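One way to rule that out would be to have the script propagate the exit
status instead of discarding it. A minimal sketch, assuming nothing about
the actual contents of qemu-smoke-dom0less-arm64.sh (the helper name and
the example device spec below are illustrative only):

```shell
# Hypothetical helper: run a command and fail loudly on error, instead
# of letting a non-zero exit status be silently discarded.
attach_or_die() {
    "$@" && return 0
    rc=$?
    # Report which command failed and with what status, then propagate it.
    echo "FAIL: '$*' exited with status $rc" >&2
    return "$rc"
}

# Illustrative usage in the test script (domain name/spec are made up):
#   attach_or_die xl network-attach domU type=vif
```

With `set -e` in effect (or an explicit `|| exit 1`), a failing
`xl network-attach` would then abort the test immediately rather than
leaving the domU waiting for an interface that never appears.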

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--zRbvTpPLxZ4MpD3L
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRNHPwACgkQ24/THMrX
1yzpdwf/d0oZw1B7pzKxk5LdNmZPy/bwZEO5/VLG1PAFAD9vnRB4DTH9vOab5eC2
AlE5C8Omhxnf5yCE8HyGprDEC2ghnwIhowFvSjWMVrR9OEiPWsT5j3ClEprpHbyE
+XVAdNIrAExCXU/vEJR9frlydf3UD5nCN0/4QW+HffSL+VZ6rr69fFVdpKDPD943
jj7qfQmDb6xtMUp8cwH2DFXqMxk1pD/PPwZL2uPPD8OeQoN7LNNrOoKl1xpTQ/Gb
6B6RnQ5P9QLx/Quw8M94/eSOj5kogHMICv7V9ATifiUKuQjknTBj3kmtE1saoOpl
kgeSqvPfw3z4EmLxKmlm3boWOGSvjg==
=xGZo
-----END PGP SIGNATURE-----

--zRbvTpPLxZ4MpD3L--


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 14:27:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 14:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527651.820330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pslXp-0002jf-N9; Sat, 29 Apr 2023 14:27:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527651.820330; Sat, 29 Apr 2023 14:27:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pslXp-0002jY-Jz; Sat, 29 Apr 2023 14:27:25 +0000
Received: by outflank-mailman (input) for mailman id 527651;
 Sat, 29 Apr 2023 14:27:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Ech=AU=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pslXo-0002jS-6P
 for xen-devel@lists.xenproject.org; Sat, 29 Apr 2023 14:27:24 +0000
Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com
 [66.111.4.27]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f372da01-e699-11ed-8611-37d641c3527e;
 Sat, 29 Apr 2023 16:27:21 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 632395C0161;
 Sat, 29 Apr 2023 10:27:20 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Sat, 29 Apr 2023 10:27:20 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sat,
 29 Apr 2023 10:27:19 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f372da01-e699-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:message-id:mime-version:reply-to:sender:subject:subject:to:to;
	 s=fm3; t=1682778440; x=1682864840; bh=9uCHS2zYFXZb16zRhgtNL5MuA
	4xe2dD4D2irqiSKyXM=; b=AHyK1OqHeuvyN/T83wHJePcKyNSIK+9jc5Dpuoula
	CzKUSEGhC3/bO/UA1m8GJ0M8CHkIOjZesfdDDRM95vBiz3olbnsaVovcY4zikodE
	91Ai8zS4YXT2c0gflRYKbnNRmo9jq1XBwFoEYYjQoHIp58+PzFI/GKQyn5ksX9eW
	XMSCTs6N+xBs5RVWUha9h+jRAHSrkl08Y5MyoLZODafU8vjXLaZAPUnOmryJhWA7
	1kCgfbxgtnQI3CBMZuIkXeRcaqA/9cADOXAEu5qmTrB2ESaiZ4QmxGtkxdUx1/z/
	2S9g1lRGqaTp7LWCO4LFQ9szAh5UiKzvJAiVopj5m4LpQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:message-id:mime-version:reply-to:sender
	:subject:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm3; t=1682778440; x=1682864840; bh=9
	uCHS2zYFXZb16zRhgtNL5MuA4xe2dD4D2irqiSKyXM=; b=Oo2NN5k0uxUo/UZaA
	jR3z4fDxtj1Ze6HX0O2OSX+pexYZjy4u3l4hCd9DHVIVYnzoebwE9DSvhyvOZbkH
	cxAIc/vdwbv4GHOckWE/dGLWl30nwAPCiZc7rYwtR83CnK61zSaKKc8cPcqTwL/1
	uUW+iQfP0vs22goAfBfympoIrJywzEJ2koWESSuPyR6ASkisCjOvDGRsVecCNMwl
	YYGF7Y5Qx1z6JxzUNbCapL2dc8mMUwSX9yE840zauwkTWtEReUgmWeRGjXHohlOa
	QiqaErNgNYOr0RAeweTzVx8m+65an1adp/7QcFAIQssmko1YFz1tye1OIqC6v16N
	YI3Lw==
X-ME-Sender: <xms:SClNZOiB0OZlGBgMS_7NraZiHE2hPGDh8Oz9VtJGFYE1ozjnIFMzLg>
    <xme:SClNZPDbU9VTqp1wrQ7dnv3ldAIq9mdorKD0Eptpwk2FSYiXbIerpmf0zooJqZKjk
    nzIGPoT3BSh7A>
X-ME-Received: <xmr:SClNZGGd7gSAHxpn6A5C8PYtzf1EqAON-sxyh4jypbuow3FmIkCtokN4vh0bZYKSGsR1mGuRge3dJdyrYFJgZgKlibOetsvWWA648PeK7UhJAruD-2jK>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedvtddgheduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofggtgfgsehtkeertdertdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeejueef
    hfelieekgeeftdfgieeugefhudetjeethfefveehffejhfeigefgjeekleenucffohhmrg
    hinhepghhithhlrggsrdgtohhmnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghm
    pehmrghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslh
    grsgdrtghomh
X-ME-Proxy: <xmx:SClNZHSUTfOrRTeYHg8uKoFMMPMpcjqaHn3MXpjYgte9DFeoEuZxNQ>
    <xmx:SClNZLwxnPqLHEbqEHAAAt9DyoWXEws6ZAMeAdjYN34w7VhWapAInQ>
    <xmx:SClNZF6xZmsojHsfdLBSGYXKvV-HfBVuTeCm8x_1TZhTRgmR6u-Q5w>
    <xmx:SClNZEo6TygQAF7D5rNXepqXg9PrOKH8e2jsSqXpHXfvTP8HEQxbXg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] automation: optimize build jobs order
Date: Sat, 29 Apr 2023 16:27:07 +0200
Message-Id: <20230429142707.176299-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Put the jobs that are needed for (any) test earlier, so the tests can
start running in parallel with the builds.
This commit splits only the x86 build jobs into two sections (one at the
top and one at the bottom), but keeps the Arm build jobs in one section,
as most of them have a test attached and the few that do not are not
worth reducing the readability of the file for.

Also, move the artifact jobs to the very beginning instead of the very end.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
This made the pipeline complete within 45 minutes. That isn't a big
improvement on its own, but it should make adding more runners more
beneficial: watching the pipeline in real time, most jobs were waiting
for available runners rather than being stuck on dependencies.
---
 automation/gitlab-ci/build.yaml | 735 ++++++++++++++++----------------
 1 file changed, 370 insertions(+), 365 deletions(-)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index d323c30a8304..32dec45b8b9a 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -236,359 +236,211 @@
   variables:
     <<: *gcc
 
-# Jobs below this line
+## Test artifacts common
 
-archlinux-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: archlinux:current
+.test-jobs-artifact-common:
+  stage: build
+  except: !reference [.test-jobs-common, except]
 
-archlinux-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: archlinux:current
+# Arm test artifacts
 
-centos-7-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: centos:7
+alpine-3.12-arm64-rootfs-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12-arm64v8
+  script:
+    - mkdir binaries && cp /initrd.tar.gz binaries/initrd.tar.gz
+  artifacts:
+    paths:
+      - binaries/initrd.tar.gz
+  tags:
+    - arm64
 
-centos-7-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: centos:7
+kernel-5.19-arm64-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:5.19-arm64v8
+  script:
+    - mkdir binaries && cp /Image binaries/Image
+  artifacts:
+    paths:
+      - binaries/Image
+  tags:
+    - arm64
 
-debian-stretch-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: debian:stretch
+qemu-system-aarch64-6.0.0-arm64-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/qemu-system-aarch64:6.0.0-arm64v8
+  script:
+    - mkdir binaries && cp /qemu-system-aarch64 binaries/qemu-system-aarch64
+  artifacts:
+    paths:
+      - binaries/qemu-system-aarch64
+  tags:
+    - arm64
 
-debian-stretch-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: debian:stretch
+qemu-system-aarch64-6.0.0-arm32-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/qemu-system-aarch64:6.0.0-arm64v8
+  script:
+    - mkdir binaries && cp /qemu-system-arm binaries/qemu-system-arm
+  artifacts:
+    paths:
+      - binaries/qemu-system-arm
+  tags:
+    - arm64
 
-debian-stretch-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: debian:stretch
+# x86_64 test artifacts
 
-debian-stretch-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: debian:stretch
+alpine-3.12-rootfs-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12
+  script:
+    - mkdir binaries && cp /initrd.tar.gz binaries/initrd.tar.gz
+  artifacts:
+    paths:
+      - binaries/initrd.tar.gz
+  tags:
+    - x86_64
 
-debian-stretch-32-clang-debug:
-  extends: .clang-x86-32-build-debug
-  variables:
-    CONTAINER: debian:stretch-i386
+kernel-6.1.19-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:6.1.19
+  script:
+    - mkdir binaries && cp /bzImage binaries/bzImage
+  artifacts:
+    paths:
+      - binaries/bzImage
+  tags:
+    - x86_64
 
-debian-stretch-32-gcc-debug:
-  extends: .gcc-x86-32-build-debug
-  variables:
-    CONTAINER: debian:stretch-i386
+# Jobs below this line
 
-debian-buster-gcc-ibt:
+# Build jobs needed for tests
+
+alpine-3.12-gcc:
   extends: .gcc-x86-64-build
   variables:
-    CONTAINER: debian:buster-gcc-ibt
-    RANDCONFIG: y
-    EXTRA_FIXED_RANDCONFIG: |
-      CONFIG_XEN_IBT=y
+    CONTAINER: alpine:3.12
 
-debian-unstable-clang:
-  extends: .clang-x86-64-build
+alpine-3.12-gcc-debug:
+  extends: .gcc-x86-64-build-debug
   variables:
-    CONTAINER: debian:unstable
+    CONTAINER: alpine:3.12
+
+debian-stretch-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: debian:stretch
 
 debian-unstable-clang-debug:
   extends: .clang-x86-64-build-debug
   variables:
     CONTAINER: debian:unstable
 
-debian-unstable-gcc:
-  extends: .gcc-x86-64-build
+# Arm32 cross-build
+
+debian-unstable-gcc-arm32:
+  extends: .gcc-arm32-cross-build
   variables:
-    CONTAINER: debian:unstable
+    CONTAINER: debian:unstable-arm64v8-arm32-gcc
+    HYPERVISOR_ONLY: y
 
-debian-unstable-gcc-debug:
-  extends: .gcc-x86-64-build-debug
+debian-unstable-gcc-arm32-debug:
+  extends: .gcc-arm32-cross-build-debug
   variables:
-    CONTAINER: debian:unstable
+    CONTAINER: debian:unstable-arm64v8-arm32-gcc
+    HYPERVISOR_ONLY: y
 
-debian-unstable-gcc-randconfig:
-  extends: .gcc-x86-64-build
+debian-unstable-gcc-arm32-randconfig:
+  extends: .gcc-arm32-cross-build
   variables:
-    CONTAINER: debian:unstable
+    CONTAINER: debian:unstable-arm64v8-arm32-gcc
+    HYPERVISOR_ONLY: y
     RANDCONFIG: y
 
-debian-unstable-gcc-debug-randconfig:
-  extends: .gcc-x86-64-build-debug
+debian-unstable-gcc-arm32-debug-randconfig:
+  extends: .gcc-arm32-cross-build-debug
   variables:
-    CONTAINER: debian:unstable
+    CONTAINER: debian:unstable-arm64v8-arm32-gcc
+    HYPERVISOR_ONLY: y
     RANDCONFIG: y
 
-debian-unstable-32-clang-debug:
-  extends: .clang-x86-32-build-debug
+debian-unstable-gcc-arm32-staticmem:
+  extends: .gcc-arm32-cross-build
   variables:
-    CONTAINER: debian:unstable-i386
+    CONTAINER: debian:unstable-arm64v8-arm32-gcc
+    HYPERVISOR_ONLY: y
+    EXTRA_XEN_CONFIG: |
+      CONFIG_EXPERT=y
+      CONFIG_UNSUPPORTED=y
+      CONFIG_STATIC_MEMORY=y
 
-debian-unstable-32-gcc-debug:
-  extends: .gcc-x86-32-build-debug
+debian-unstable-gcc-arm32-debug-staticmem:
+  extends: .gcc-arm32-cross-build-debug
   variables:
-    CONTAINER: debian:unstable-i386
+    CONTAINER: debian:unstable-arm64v8-arm32-gcc
+    HYPERVISOR_ONLY: y
+    EXTRA_XEN_CONFIG: |
+      CONFIG_EXPERT=y
+      CONFIG_UNSUPPORTED=y
+      CONFIG_STATIC_MEMORY=y
 
-fedora-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: fedora:29
+# Arm builds
 
-fedora-gcc-debug:
-  extends: .gcc-x86-64-build-debug
+debian-unstable-gcc-arm64:
+  extends: .gcc-arm64-build
   variables:
-    CONTAINER: fedora:29
-
-# Ubuntu Trusty's Clang is 3.4 while Xen requires 3.5
+    CONTAINER: debian:unstable-arm64v8
 
-ubuntu-trusty-gcc:
-  extends: .gcc-x86-64-build
+debian-unstable-gcc-debug-arm64:
+  extends: .gcc-arm64-build-debug
   variables:
-    CONTAINER: ubuntu:trusty
+    CONTAINER: debian:unstable-arm64v8
 
-ubuntu-trusty-gcc-debug:
-  extends: .gcc-x86-64-build-debug
+debian-unstable-gcc-arm64-randconfig:
+  extends: .gcc-arm64-build
   variables:
-    CONTAINER: ubuntu:trusty
+    CONTAINER: debian:unstable-arm64v8
+    RANDCONFIG: y
 
-ubuntu-xenial-clang:
-  extends: .clang-x86-64-build
+debian-unstable-gcc-debug-arm64-randconfig:
+  extends: .gcc-arm64-build-debug
   variables:
-    CONTAINER: ubuntu:xenial
+    CONTAINER: debian:unstable-arm64v8
+    RANDCONFIG: y
 
-ubuntu-xenial-clang-debug:
-  extends: .clang-x86-64-build-debug
+alpine-3.12-gcc-arm64:
+  extends: .gcc-arm64-build
   variables:
-    CONTAINER: ubuntu:xenial
+    CONTAINER: alpine:3.12-arm64v8
 
-ubuntu-xenial-gcc:
-  extends: .gcc-x86-64-build
+alpine-3.12-gcc-debug-arm64:
+  extends: .gcc-arm64-build-debug
   variables:
-    CONTAINER: ubuntu:xenial
+    CONTAINER: alpine:3.12-arm64v8
 
-ubuntu-xenial-gcc-debug:
-  extends: .gcc-x86-64-build-debug
+alpine-3.12-gcc-arm64-randconfig:
+  extends: .gcc-arm64-build
   variables:
-    CONTAINER: ubuntu:xenial
+    CONTAINER: alpine:3.12-arm64v8
+    RANDCONFIG: y
 
-ubuntu-bionic-clang:
-  extends: .clang-x86-64-build
+alpine-3.12-gcc-debug-arm64-randconfig:
+  extends: .gcc-arm64-build-debug
   variables:
-    CONTAINER: ubuntu:bionic
+    CONTAINER: alpine:3.12-arm64v8
+    RANDCONFIG: y
 
-ubuntu-bionic-clang-debug:
-  extends: .clang-x86-64-build-debug
+alpine-3.12-gcc-arm64-staticmem:
+  extends: .gcc-arm64-build
   variables:
-    CONTAINER: ubuntu:bionic
+    CONTAINER: alpine:3.12-arm64v8
+    EXTRA_XEN_CONFIG: |
+      CONFIG_EXPERT=y
+      CONFIG_UNSUPPORTED=y
+      CONFIG_STATIC_MEMORY=y
 
-ubuntu-bionic-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: ubuntu:bionic
-
-ubuntu-bionic-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: ubuntu:bionic
-
-ubuntu-focal-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: ubuntu:focal
-
-ubuntu-focal-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: ubuntu:focal
-
-ubuntu-focal-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: ubuntu:focal
-
-ubuntu-focal-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: ubuntu:focal
-
-opensuse-leap-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: suse:opensuse-leap
-
-opensuse-leap-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: suse:opensuse-leap
-
-opensuse-leap-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: suse:opensuse-leap
-
-opensuse-leap-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: suse:opensuse-leap
-
-opensuse-tumbleweed-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: suse:opensuse-tumbleweed
-  allow_failure: true
-
-opensuse-tumbleweed-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: suse:opensuse-tumbleweed
-  allow_failure: true
-
-opensuse-tumbleweed-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: suse:opensuse-tumbleweed
-  allow_failure: true
-
-opensuse-tumbleweed-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: suse:opensuse-tumbleweed
-  allow_failure: true
-
-alpine-3.12-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: alpine:3.12
-
-alpine-3.12-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: alpine:3.12
-
-alpine-3.12-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: alpine:3.12
-
-alpine-3.12-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: alpine:3.12
-
-# Arm32 cross-build
-
-debian-unstable-gcc-arm32:
-  extends: .gcc-arm32-cross-build
-  variables:
-    CONTAINER: debian:unstable-arm64v8-arm32-gcc
-    HYPERVISOR_ONLY: y
-
-debian-unstable-gcc-arm32-debug:
-  extends: .gcc-arm32-cross-build-debug
-  variables:
-    CONTAINER: debian:unstable-arm64v8-arm32-gcc
-    HYPERVISOR_ONLY: y
-
-debian-unstable-gcc-arm32-randconfig:
-  extends: .gcc-arm32-cross-build
-  variables:
-    CONTAINER: debian:unstable-arm64v8-arm32-gcc
-    HYPERVISOR_ONLY: y
-    RANDCONFIG: y
-
-debian-unstable-gcc-arm32-debug-randconfig:
-  extends: .gcc-arm32-cross-build-debug
-  variables:
-    CONTAINER: debian:unstable-arm64v8-arm32-gcc
-    HYPERVISOR_ONLY: y
-    RANDCONFIG: y
-
-debian-unstable-gcc-arm32-staticmem:
-  extends: .gcc-arm32-cross-build
-  variables:
-    CONTAINER: debian:unstable-arm64v8-arm32-gcc
-    HYPERVISOR_ONLY: y
-    EXTRA_XEN_CONFIG: |
-      CONFIG_EXPERT=y
-      CONFIG_UNSUPPORTED=y
-      CONFIG_STATIC_MEMORY=y
-
-debian-unstable-gcc-arm32-debug-staticmem:
-  extends: .gcc-arm32-cross-build-debug
-  variables:
-    CONTAINER: debian:unstable-arm64v8-arm32-gcc
-    HYPERVISOR_ONLY: y
-    EXTRA_XEN_CONFIG: |
-      CONFIG_EXPERT=y
-      CONFIG_UNSUPPORTED=y
-      CONFIG_STATIC_MEMORY=y
-
-# Arm builds
-
-debian-unstable-gcc-arm64:
-  extends: .gcc-arm64-build
-  variables:
-    CONTAINER: debian:unstable-arm64v8
-
-debian-unstable-gcc-debug-arm64:
-  extends: .gcc-arm64-build-debug
-  variables:
-    CONTAINER: debian:unstable-arm64v8
-
-debian-unstable-gcc-arm64-randconfig:
-  extends: .gcc-arm64-build
-  variables:
-    CONTAINER: debian:unstable-arm64v8
-    RANDCONFIG: y
-
-debian-unstable-gcc-debug-arm64-randconfig:
-  extends: .gcc-arm64-build-debug
-  variables:
-    CONTAINER: debian:unstable-arm64v8
-    RANDCONFIG: y
-
-alpine-3.12-gcc-arm64:
-  extends: .gcc-arm64-build
-  variables:
-    CONTAINER: alpine:3.12-arm64v8
-
-alpine-3.12-gcc-debug-arm64:
-  extends: .gcc-arm64-build-debug
-  variables:
-    CONTAINER: alpine:3.12-arm64v8
-
-alpine-3.12-gcc-arm64-randconfig:
-  extends: .gcc-arm64-build
-  variables:
-    CONTAINER: alpine:3.12-arm64v8
-    RANDCONFIG: y
-
-alpine-3.12-gcc-debug-arm64-randconfig:
-  extends: .gcc-arm64-build-debug
-  variables:
-    CONTAINER: alpine:3.12-arm64v8
-    RANDCONFIG: y
-
-alpine-3.12-gcc-arm64-staticmem:
-  extends: .gcc-arm64-build
-  variables:
-    CONTAINER: alpine:3.12-arm64v8
-    EXTRA_XEN_CONFIG: |
-      CONFIG_EXPERT=y
-      CONFIG_UNSUPPORTED=y
-      CONFIG_STATIC_MEMORY=y
-
-alpine-3.12-gcc-debug-arm64-staticmem:
-  extends: .gcc-arm64-build-debug
+alpine-3.12-gcc-debug-arm64-staticmem:
+  extends: .gcc-arm64-build-debug
   variables:
     CONTAINER: alpine:3.12-arm64v8
     EXTRA_XEN_CONFIG: |
@@ -706,78 +558,231 @@ debian-unstable-gcc-arm64-cppcheck:
     CPPCHECK: y
     HYPERVISOR_ONLY: y
 
-## Test artifacts common
+# Build jobs not needed for tests
 
-.test-jobs-artifact-common:
-  stage: build
-  except: !reference [.test-jobs-common, except]
+alpine-3.12-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: alpine:3.12
 
-# Arm test artifacts
+alpine-3.12-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: alpine:3.12
 
-alpine-3.12-arm64-rootfs-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12-arm64v8
-  script:
-    - mkdir binaries && cp /initrd.tar.gz binaries/initrd.tar.gz
-  artifacts:
-    paths:
-      - binaries/initrd.tar.gz
-  tags:
-    - arm64
+archlinux-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: archlinux:current
 
-kernel-5.19-arm64-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:5.19-arm64v8
-  script:
-    - mkdir binaries && cp /Image binaries/Image
-  artifacts:
-    paths:
-      - binaries/Image
-  tags:
-    - arm64
+archlinux-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: archlinux:current
 
-qemu-system-aarch64-6.0.0-arm64-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/qemu-system-aarch64:6.0.0-arm64v8
-  script:
-    - mkdir binaries && cp /qemu-system-aarch64 binaries/qemu-system-aarch64
-  artifacts:
-    paths:
-      - binaries/qemu-system-aarch64
-  tags:
-    - arm64
+centos-7-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: centos:7
 
-qemu-system-aarch64-6.0.0-arm32-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/qemu-system-aarch64:6.0.0-arm64v8
-  script:
-    - mkdir binaries && cp /qemu-system-arm binaries/qemu-system-arm
-  artifacts:
-    paths:
-      - binaries/qemu-system-arm
-  tags:
-    - arm64
+centos-7-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: centos:7
 
-# x86_64 test artifacts
+debian-stretch-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: debian:stretch
 
-alpine-3.12-rootfs-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12
-  script:
-    - mkdir binaries && cp /initrd.tar.gz binaries/initrd.tar.gz
-  artifacts:
-    paths:
-      - binaries/initrd.tar.gz
-  tags:
-    - x86_64
+debian-stretch-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: debian:stretch
+
+debian-stretch-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: debian:stretch
+
+debian-stretch-32-clang-debug:
+  extends: .clang-x86-32-build-debug
+  variables:
+    CONTAINER: debian:stretch-i386
+
+debian-stretch-32-gcc-debug:
+  extends: .gcc-x86-32-build-debug
+  variables:
+    CONTAINER: debian:stretch-i386
+
+debian-buster-gcc-ibt:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: debian:buster-gcc-ibt
+    RANDCONFIG: y
+    EXTRA_FIXED_RANDCONFIG: |
+      CONFIG_XEN_IBT=y
+
+debian-unstable-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: debian:unstable
+
+debian-unstable-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: debian:unstable
+
+debian-unstable-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: debian:unstable
+
+debian-unstable-gcc-randconfig:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: debian:unstable
+    RANDCONFIG: y
+
+debian-unstable-gcc-debug-randconfig:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: debian:unstable
+    RANDCONFIG: y
+
+debian-unstable-32-clang-debug:
+  extends: .clang-x86-32-build-debug
+  variables:
+    CONTAINER: debian:unstable-i386
+
+debian-unstable-32-gcc-debug:
+  extends: .gcc-x86-32-build-debug
+  variables:
+    CONTAINER: debian:unstable-i386
+
+fedora-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: fedora:29
+
+fedora-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: fedora:29
+
+# Ubuntu Trusty's Clang is 3.4 while Xen requires 3.5
+
+ubuntu-trusty-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: ubuntu:trusty
+
+ubuntu-trusty-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:trusty
+
+ubuntu-xenial-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: ubuntu:xenial
+
+ubuntu-xenial-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:xenial
+
+ubuntu-xenial-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: ubuntu:xenial
+
+ubuntu-xenial-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:xenial
+
+ubuntu-bionic-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: ubuntu:bionic
+
+ubuntu-bionic-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:bionic
+
+ubuntu-bionic-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: ubuntu:bionic
+
+ubuntu-bionic-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:bionic
+
+ubuntu-focal-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: ubuntu:focal
+
+ubuntu-focal-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:focal
+
+ubuntu-focal-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: ubuntu:focal
+
+ubuntu-focal-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:focal
+
+opensuse-leap-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: suse:opensuse-leap
+
+opensuse-leap-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: suse:opensuse-leap
+
+opensuse-leap-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: suse:opensuse-leap
+
+opensuse-leap-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: suse:opensuse-leap
+
+opensuse-tumbleweed-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: suse:opensuse-tumbleweed
+  allow_failure: true
+
+opensuse-tumbleweed-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: suse:opensuse-tumbleweed
+  allow_failure: true
+
+opensuse-tumbleweed-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: suse:opensuse-tumbleweed
+  allow_failure: true
+
+opensuse-tumbleweed-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: suse:opensuse-tumbleweed
+  allow_failure: true
 
-kernel-6.1.19-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:6.1.19
-  script:
-    - mkdir binaries && cp /bzImage binaries/bzImage
-  artifacts:
-    paths:
-      - binaries/bzImage
-  tags:
-    - x86_64
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Sat Apr 29 17:32:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 17:32:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527680.820339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psoR3-0005GK-2R; Sat, 29 Apr 2023 17:32:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527680.820339; Sat, 29 Apr 2023 17:32:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psoR2-0005GD-W1; Sat, 29 Apr 2023 17:32:36 +0000
Received: by outflank-mailman (input) for mailman id 527680;
 Sat, 29 Apr 2023 17:32:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psoR1-0005G3-Eg; Sat, 29 Apr 2023 17:32:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psoR1-0003o8-4R; Sat, 29 Apr 2023 17:32:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psoR0-0007Wu-Ks; Sat, 29 Apr 2023 17:32:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psoR0-0000aE-KL; Sat, 29 Apr 2023 17:32:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180479-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180479: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=844a3b48d6161560eddc1e1f85719b67659f1ea9
X-Osstest-Versions-That:
    libvirt=0324adb647885932efc97eefcfe08f6a8db60ae1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Apr 2023 17:32:34 +0000

flight 180479 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180479/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180460
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180460
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180460
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              844a3b48d6161560eddc1e1f85719b67659f1ea9
baseline version:
 libvirt              0324adb647885932efc97eefcfe08f6a8db60ae1

Last test of basis   180460  2023-04-28 04:20:20 Z    1 days
Testing same since   180479  2023-04-29 04:20:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Yuri Chornoivan <yurchor@ukr.net>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   0324adb647..844a3b48d6  844a3b48d6161560eddc1e1f85719b67659f1ea9 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 19:49:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Apr 2023 19:49:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527690.820349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psqYm-0001kb-Kv; Sat, 29 Apr 2023 19:48:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527690.820349; Sat, 29 Apr 2023 19:48:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psqYm-0001kU-I7; Sat, 29 Apr 2023 19:48:44 +0000
Received: by outflank-mailman (input) for mailman id 527690;
 Sat, 29 Apr 2023 19:48:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/eqU=AU=citrix.com=prvs=4767ec71a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1psqYk-0001kO-MP
 for xen-devel@lists.xenproject.org; Sat, 29 Apr 2023 19:48:43 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d5b92ab9-e6c6-11ed-b225-6b7b168915f2;
 Sat, 29 Apr 2023 21:48:40 +0200 (CEST)
Received: from mail-co1nam11lp2171.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 Apr 2023 15:48:36 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5520.namprd03.prod.outlook.com (2603:10b6:a03:282::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.26; Sat, 29 Apr
 2023 19:48:31 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.024; Sat, 29 Apr 2023
 19:48:31 +0000
X-Inumbo-ID: d5b92ab9-e6c6-11ed-b225-6b7b168915f2
Message-ID: <6bbc5ee4-66e0-f81f-112d-b136da71e7aa@citrix.com>
Date: Sat, 29 Apr 2023 20:48:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: andrew.cooper3@citrix.com
Subject: Re: xen | Failed pipeline for staging | 6a47ba2f
Content-Language: en-GB
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, alejandro.vallejo@cloud.com,
 committers@xenproject.org, michal.orzel@amd.com,
 xen-devel@lists.xenproject.org
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
 <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
 <ca0144a6-2c57-0cc3-fd27-5dbe59491ef3@citrix.com> <ZE0c/dEaIUglww+g@mail-itl>
In-Reply-To: <ZE0c/dEaIUglww+g@mail-itl>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 97a0c597-971d-4c69-de67-08db48eab577
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Apr 2023 19:48:31.0925
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dptnQs+yZfG2BN/8HvsokFHJkqV2iyl/JAj0iFLd9UNoOm7AghbcBxrTfVxQWZsSbxPOIQxiTkRO16BXYD+tAqALVQubB7dwcUMGWX3Bkfk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5520

On 29/04/2023 2:34 pm, Marek Marczykowski-Górecki wrote:
> On Sat, Apr 29, 2023 at 12:41:26PM +0100, andrew.cooper3@citrix.com wrote:
>> On 29/04/2023 4:05 am, Stefano Stabellini wrote:
>>> On Fri, 28 Apr 2023, GitLab wrote:
>>>> Pipeline #852233694 triggered by
>>>> Ganis
>>>> had 3 failed jobs
>>>> Failed jobs
>>>> ✖
>>>> test
>>>> qemu-smoke-dom0less-arm64-gcc
>>> This is a real failure on staging. Unfortunately it is intermittent. It
>>> usually happens once every 3-8 tests for me.
>>>
>>> The test script is:
>>> automation/scripts/qemu-smoke-dom0less-arm64.sh
>>>
>>> and for this test it is invoked without arguments. It starts 2
>>> dom0less VMs in parallel, then dom0 does an xl network-attach and the
>>> domU is supposed to set up eth0 and ping.
>>>
>>> The failure is that nothing happens after "xl network-attach". The domU
>>> never hotplugs any interfaces. I have logs that show that eth0 never
>>> shows up and the only interface is lo no matter how long we wait.
>>>
>>>
>>> On a hunch, I removed Alejandro's patches. Without them, I ran 20 tests
>>> without any failures. I have not investigated further but it looks like
>>> one of these 4 commits is the problem:
>>>
>>> 2023-04-28 11:41 Alejandro Vallejo    tools: Make init-xenstore-domain use xc_domain_getinfolist()
>>> 2023-04-28 11:41 Alejandro Vallejo    tools: Refactor console/io.c to avoid using xc_domain_getinfo()
>>> 2023-04-28 11:41 Alejandro Vallejo    tools: Create xc_domain_getinfo_single()
>>> 2023-04-28 11:41 Alejandro Vallejo    tools: Make some callers of xc_domain_getinfo() use xc_domain_getinfolist()
>> In commit order (reverse of above), these patches are:
>>
>> 1) Modify the python bindings and xenbaked
>> 2) Introduce a new library function with a better API/ABI
>> 3) Modify xenconsoled
>> 4) Modify init-xenstore-domain
>>
>> The test isn't using anything from 4 or 1, and 2 definitely isn't
>> breaking anything on its own.
>>
>> That just leaves 3.  This test does activate xenconsoled by virtue
>> of invoking xencommons, but that doesn't help explain why a change in
>> xenconsoled interferes (and only intermittently on this one single test)
>> with `xl network-attach`.
>>
>> The xenconsoled change does have a correctness fix in it, requiring
>> xenconsoled to ask for all domains' info in one go.  This does mean it's
>> hypercall-buffering (i.e. bouncing) a 4M array now where previously it
>> was racy figuring out which VMs had come and gone.
> Can it be that xl network-attach fails and that failure is silently
> ignored by the test?

Well, it's ultimately doing a ping test between the two VMs, so the
network-attach is rather important.  I don't see an obvious way for us
to get false negatives like this.

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 20:01:20 2023
Message-ID: <6a2d1b46-9ea4-ace5-f26f-d373685956e9@citrix.com>
Date: Sat, 29 Apr 2023 21:00:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] automation: optimize build jobs order
Content-Language: en-GB
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230429142707.176299-1-marmarek@invisiblethingslab.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230429142707.176299-1-marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 29/04/2023 3:27 pm, Marek Marczykowski-Górecki wrote:
> Put jobs that are needed for (any) test earlier, so the tests can start
> running in parallel to builds.
> This commit splits only the x86 build jobs into two sections (one at
> the top and one at the bottom), but keeps the ARM build jobs in one
> section, as most of them have a test connected to them, and the few
> that do not are not worth reducing the readability of the file for.
>
> It also puts the artifacts jobs at the very beginning, rather than at
> the very end.
>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
> This made the pipeline complete within 45 minutes. This isn't a big
> improvement on its own, but it should make adding more runners more
> beneficial. Watching it in real time, most jobs were waiting for
> available runners rather than being stuck on dependencies.

That's still 1/4 better than before.  I'd say that's a good improvement
all on its own.

As for the patch, it's not the easiest to review.

The test artefacts section is new, and just moves various jobs forwards
in the file.

I suspect that if you split the patch into two, first forming the test
artefacts section, and second rearranging the existing x86 tests, the
result might be readable (or at least, more readable).

The key (I think) will be to keep the "# Jobs below this line" comment,
and the following archlinux tests, unmodified in patch 1, at which point
the diff ought to render as one block insertion, then scattered deletions.

I suspect the second patch is going to be a mess however you try to
rearrange it, so I wouldn't worry too much if this approach doesn't work.

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat Apr 29 21:09:51 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180480-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180480: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Apr 2023 21:09:36 +0000

flight 180480 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180480/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180469
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180469
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180469
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180469
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180469
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180469
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180469
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180469
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180469
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                9b112b1b79f0e93242a9ce9bffd1113458e93e03
baseline version:
 qemuu                05d50ba2d4668d43a835c5a502efdec9b92646e6

Last test of basis   180469  2023-04-28 12:35:39 Z    1 days
Testing same since   180480  2023-04-29 09:24:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Bulekov <alxndr@bu.edu>
  David Hildenbrand <david@redhat.com>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Markus Armbruster <armbru@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  zhenwei pi <pizhenwei@bytedance.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   05d50ba2d4..9b112b1b79  9b112b1b79f0e93242a9ce9bffd1113458e93e03 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun Apr 30 01:21:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Apr 2023 01:21:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527724.820380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psvkY-0002Ma-GD; Sun, 30 Apr 2023 01:21:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527724.820380; Sun, 30 Apr 2023 01:21:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psvkY-0002MT-DI; Sun, 30 Apr 2023 01:21:14 +0000
Received: by outflank-mailman (input) for mailman id 527724;
 Sun, 30 Apr 2023 01:21:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1psvkX-0002MN-TP
 for xen-devel@lists.xenproject.org; Sun, 30 Apr 2023 01:21:14 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4ab3a7da-e6f5-11ed-b225-6b7b168915f2;
 Sun, 30 Apr 2023 03:21:12 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 81A0960A2A;
 Sun, 30 Apr 2023 01:21:10 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 01214C433EF;
 Sun, 30 Apr 2023 01:21:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ab3a7da-e6f5-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682817669;
	bh=zvdiHVD+yQZ6YgLLxkqDm2WUcDmMrw3uH/G3jHJgrnE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=WkdxA0MsIkRlVTvyBg7Zv2f9vLtXaQ7swIawx3jhreSRj4v0nFkIppGpJ//SL50+w
	 1Hf/YR2dghTSynRaX46B0HR9kEtK/iCccTfKkZ1oUlInkOCG23qqjVplT1H4egWfGV
	 5z/CVel353VtfOjiI1umyw9aIE8IKytFFajZaQjgoufODcMHg+p0ir+PnL6N6SI2Di
	 kUVDcedzaVdvXlhk3AN6IIw8OFZmOalJYMoxaYr3dGccPPZUlqutvJuSya5PRzSM2/
	 yLgDOMSmSAw/bM7gUvWWYXLM8TCk+g/3kx0vzeSZ+GhLuO8drzFtCgsxGk3Xxq8Vjf
	 sJVBeTAN+RKBw==
Date: Sat, 29 Apr 2023 18:21:07 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: andrew.cooper3@citrix.com
cc: Stefano Stabellini <sstabellini@kernel.org>, alejandro.vallejo@cloud.com, 
    committers@xenproject.org, michal.orzel@amd.com, 
    xen-devel@lists.xenproject.org
Subject: Re: xen | Failed pipeline for staging | 6a47ba2f
In-Reply-To: <ca0144a6-2c57-0cc3-fd27-5dbe59491ef3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2304291808420.974517@ubuntu-linux-20-04-desktop>
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail> <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop> <ca0144a6-2c57-0cc3-fd27-5dbe59491ef3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1229086743-1682817254=:974517"
Content-ID: <alpine.DEB.2.22.394.2304291814590.974517@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1229086743-1682817254=:974517
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2304291814591.974517@ubuntu-linux-20-04-desktop>

On Sat, 29 Apr 2023, andrew.cooper3@citrix.com wrote:
> On 29/04/2023 4:05 am, Stefano Stabellini wrote:
> > On Fri, 28 Apr 2023, GitLab wrote:
> >> Pipeline #852233694 triggered by
> >> Ganis
> >> had 3 failed jobs
> >> Failed jobs
> >> ✖
> >> test
> >> qemu-smoke-dom0less-arm64-gcc
> > This is a real failure on staging. Unfortunately it is intermittent. It
> > usually happens once every 3-8 tests for me.
> >
> > The test script is:
> > automation/scripts/qemu-smoke-dom0less-arm64.sh
> >
> > and for this test it is invoked without arguments. It starts 2
> > dom0less VMs in parallel, then dom0 does an xl network-attach and the
> > domU is supposed to set up eth0 and ping.
> >
> > The failure is that nothing happens after "xl network-attach". The domU
> > never hotplugs any interfaces. I have logs that show that eth0 never
> > shows up and the only interface is lo no matter how long we wait.
> >
> >
> > On a hunch, I removed Alejandro's patches. Without them, I ran 20 tests
> > without any failures. I have not investigated further, but it looks like
> > one of these 4 commits is the problem:
> >
> > 2023-04-28 11:41 Alejandro Vallejo    tools: Make init-xenstore-domain use xc_domain_getinfolist()
> > 2023-04-28 11:41 Alejandro Vallejo    tools: Refactor console/io.c to avoid using xc_domain_getinfo()
> > 2023-04-28 11:41 Alejandro Vallejo    tools: Create xc_domain_getinfo_single()
> > 2023-04-28 11:41 Alejandro Vallejo    tools: Make some callers of xc_domain_getinfo() use xc_domain_getinfolist()
> 
> In commit order (reverse of above), these patches are:
> 
> 1) Modify the python bindings and xenbaked
> 2) Introduce a new library function with a better API/ABI
> 3) Modify xenconsoled
> 4) Modify init-xenstore-domain
> 
> The test isn't using anything from 4 or 1, and 2 definitely isn't
> breaking anything on its own.
> 
> That just leaves 3.  This test does activate xenconsoled by virtue
> of invoking xencommons, but that doesn't help explain why a change in
> xenconsoled interferes (and only intermittently on this one single test)
> with `xl network-attach`.
> 
> The xenconsoled change does have a correctness fix in it, requiring
> xenconsoled to ask for all domains' info in one go.  This does mean it is
> hypercall-buffering (i.e. bouncing) a 4M array now, where previously it
> was racy in figuring out which VMs had come and gone.

Your guess was correct. I have done more bisecting today. The culprit is
the following commit (I reverted only this commit and ran 25 tests
successfully; without the revert it usually fails within 5 runs):

e522c98c3    tools: Refactor console/io.c to avoid using xc_domain_getinfo()

I don't know why. Traditionally, if this were OSSTest, we would revert
the commit until we understood what was going on, to unblock
master/staging. I suggest we do the same here, both for consistency and
because otherwise future failures in this test caused by other bugs
might be masked by this unresolved issue.

I have nothing against this commit, and I'd be happy for it to go in
again as soon as things are, if not necessarily resolved, at least
better understood.
--8323329-1229086743-1682817254=:974517--


From xen-devel-bounces@lists.xenproject.org Sun Apr 30 03:19:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Apr 2023 03:19:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527730.820390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psxan-0005oG-Ty; Sun, 30 Apr 2023 03:19:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527730.820390; Sun, 30 Apr 2023 03:19:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1psxan-0005o9-Qd; Sun, 30 Apr 2023 03:19:17 +0000
Received: by outflank-mailman (input) for mailman id 527730;
 Sun, 30 Apr 2023 03:19:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psxam-0005nz-P3; Sun, 30 Apr 2023 03:19:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psxam-0007pJ-Gp; Sun, 30 Apr 2023 03:19:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1psxam-00078Y-45; Sun, 30 Apr 2023 03:19:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1psxam-0003uy-3b; Sun, 30 Apr 2023 03:19:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j+ZTbTpQNWZ/NZU/7Ytbt26zCJMEAnpFGIZEjc6ITxE=; b=nLmRUXLFb1JOT/zNADhY39ub/M
	gy8kvUtT4lT5ca/WFx2YmKrEkLlxAulIdAVSPjyj24W3Bc0YYctcJt6BWUMRN2DR9ObhyjowjB8xf
	6N3proGeoiRTFNmAmUoh+2mqQOlH6aL61gRrl6RRHMdJ8MSYjr3Pcaor0/6TyK1X2Nx4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180481-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180481: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6a47ba2f7855f8fc094ec4f837e71a34ededb77b
X-Osstest-Versions-That:
    xen=dde20f7dc182fdfeeb6c55648979326bb982ca8c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 Apr 2023 03:19:16 +0000

flight 180481 xen-unstable real [real]
flight 180483 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180481/
http://logs.test-lab.xenproject.org/osstest/logs/180483/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install   fail pass in 180483-retest
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail pass in 180483-retest
 test-armhf-armhf-libvirt-qcow2  8 xen-boot          fail pass in 180483-retest
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 180483-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail in 180483 like 180452
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail in 180483 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180452
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180452
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180452
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180452
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180452
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180452
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180452
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180452
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180452
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180452
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180452
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  6a47ba2f7855f8fc094ec4f837e71a34ededb77b
baseline version:
 xen                  dde20f7dc182fdfeeb6c55648979326bb982ca8c

Last test of basis   180452  2023-04-27 18:45:25 Z    2 days
Failing since        180475  2023-04-28 18:13:52 Z    1 days    2 attempts
Testing same since   180481  2023-04-29 12:51:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alejandro Vallejo <alejandro.vallejo@cloud.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Juergen Gross <jgross@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   dde20f7dc1..6a47ba2f78  6a47ba2f7855f8fc094ec4f837e71a34ededb77b -> master


From xen-devel-bounces@lists.xenproject.org Sun Apr 30 05:27:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Apr 2023 05:27:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527748.820439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pszaC-0003KC-8U; Sun, 30 Apr 2023 05:26:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527748.820439; Sun, 30 Apr 2023 05:26:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pszaC-0003K5-5p; Sun, 30 Apr 2023 05:26:48 +0000
Received: by outflank-mailman (input) for mailman id 527748;
 Sun, 30 Apr 2023 05:26:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pszaB-0003Jv-3P; Sun, 30 Apr 2023 05:26:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pszaA-00032v-S4; Sun, 30 Apr 2023 05:26:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pszaA-0002gG-BH; Sun, 30 Apr 2023 05:26:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pszaA-0001Ak-Ao; Sun, 30 Apr 2023 05:26:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vGjm4f95wezVKNXKT47DzfDFBpUbhmZXdtUUGr6HoaM=; b=7AqM3e9esR34h/qGDAUXYHVzfq
	JnAb2iIVe7uRNXzTbwkcOfcVS+QbbECrYMdoatOIlvVK7Fk9uGAGdGC/8vqfXhii2IQeDqua4jwVx
	smXndX5gadwgK0jGoaIhHqoxuHTYyHDLlWerxuUfAUJ5kWiHXoYbHnRciuy0PaCW/Hr0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180482-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180482: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=89d77f71f493a3663b10fa812d17f472935d24be
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 Apr 2023 05:26:46 +0000

flight 180482 linux-linus real [real]
flight 180485 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180482/
http://logs.test-lab.xenproject.org/osstest/logs/180485/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                89d77f71f493a3663b10fa812d17f472935d24be
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   13 days
Failing since        180281  2023-04-17 06:24:36 Z   12 days   21 attempts
Testing same since   180482  2023-04-29 13:22:08 Z    0 days    1 attempts

------------------------------------------------------------
2006 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 233651 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Apr 30 11:30:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Apr 2023 11:30:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527799.820450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pt5G6-0007Xj-UB; Sun, 30 Apr 2023 11:30:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527799.820450; Sun, 30 Apr 2023 11:30:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pt5G6-0007Xb-QT; Sun, 30 Apr 2023 11:30:26 +0000
Received: by outflank-mailman (input) for mailman id 527799;
 Sun, 30 Apr 2023 11:30:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pt5G4-0007XM-Ky; Sun, 30 Apr 2023 11:30:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pt5G4-00036W-Fl; Sun, 30 Apr 2023 11:30:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pt5G4-0000TM-0c; Sun, 30 Apr 2023 11:30:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pt5G4-0003SE-0B; Sun, 30 Apr 2023 11:30:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+qUTtRwD09gQ5zLq2G9JI5Zl2AdJ6RRUDteX6H/LIYk=; b=KCA/yzB1Ba8Tl652mekpqmztBi
	EoKQTDVb5bOjPbMs/fgtSgDIKJRcXNkJMSZm83jx6WesZWb4T2lK41KWg6DeR2OwEAFouDpI2wAwh
	es8nvQ+uPd3j5u0N+BVyEmOLin+uZ8kBYYk4j0/h6TZ44oc5kHCDQYSBmTJRgzjBvy54=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180484-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180484: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6a47ba2f7855f8fc094ec4f837e71a34ededb77b
X-Osstest-Versions-That:
    xen=6a47ba2f7855f8fc094ec4f837e71a34ededb77b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 Apr 2023 11:30:24 +0000

flight 180484 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180484/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail in 180481 pass in 180484
 test-armhf-armhf-libvirt-qcow2  8 xen-boot       fail in 180481 pass in 180484
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 180481 pass in 180484
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install    fail pass in 180481

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail blocked in 180481
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail in 180481 blocked in 180484
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install             fail like 180481
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180481
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180481
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180481
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180481
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180481
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180481
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180481
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180481
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180481
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180481
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  6a47ba2f7855f8fc094ec4f837e71a34ededb77b
baseline version:
 xen                  6a47ba2f7855f8fc094ec4f837e71a34ededb77b

Last test of basis   180484  2023-04-30 03:21:27 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Apr 30 14:47:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Apr 2023 14:47:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527816.820460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pt8KD-0001aG-3g; Sun, 30 Apr 2023 14:46:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527816.820460; Sun, 30 Apr 2023 14:46:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pt8KD-0001a9-0u; Sun, 30 Apr 2023 14:46:53 +0000
Received: by outflank-mailman (input) for mailman id 527816;
 Sun, 30 Apr 2023 14:46:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AY6w=AV=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pt8KB-0001a3-Hj
 for xen-devel@lists.xenproject.org; Sun, 30 Apr 2023 14:46:51 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d635aa53-e765-11ed-8611-37d641c3527e;
 Sun, 30 Apr 2023 16:46:49 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B4F1F221D9;
 Sun, 30 Apr 2023 14:46:48 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7550913319;
 Sun, 30 Apr 2023 14:46:48 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 3fcbG1h/TmTjAgAAMHmgww
 (envelope-from <jgross@suse.com>); Sun, 30 Apr 2023 14:46:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d635aa53-e765-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1682866008; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=4DXzQGU0fs1LGWAuV3shMeqRhZhnDtfQ/XOdGvqODYM=;
	b=pWkkDKbSxXXQwv/hwENRh+9WOto6rFtihpC9/qu40QG0BmehJc2RiMteckOom74m91CxS+
	DYWnaLnwcD5VveJjvN9AlSDEDURAQAHbcc0CFRAuc2fuPwtv4MospZ+iBn30t6lnu9ZRzv
	QzYH08NE5A/AjoEaJChhPZurCCs/11s=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/sysctl: fix XEN_SYSCTL_getdomaininfolist handling with XSM
Date: Sun, 30 Apr 2023 16:46:46 +0200
Message-Id: <20230430144646.13624-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In case XSM is active, the handling of XEN_SYSCTL_getdomaininfolist
can fail if the calling domain isn't allowed to access the last
domain scanned (i.e. xsm_getdomaininfo(XSM_HOOK, d) fails).

Fix that by simply ignoring scanned domains for which
xsm_getdomaininfo() returns an error, as is already effectively done
when such a domain isn't the last one scanned.

Fixes: d046f361dc93 ("Xen Security Modules: XSM")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sysctl.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index 02505ab044..0cbfe8bd44 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -89,8 +89,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
             if ( num_domains == op->u.getdomaininfolist.max_domains )
                 break;
 
-            ret = xsm_getdomaininfo(XSM_HOOK, d);
-            if ( ret )
+            if ( xsm_getdomaininfo(XSM_HOOK, d) )
                 continue;
 
             getdomaininfo(d, &info);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Sun Apr 30 17:16:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Apr 2023 17:16:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527833.820470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptAf9-0000kE-Sc; Sun, 30 Apr 2023 17:16:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527833.820470; Sun, 30 Apr 2023 17:16:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptAf9-0000k7-P4; Sun, 30 Apr 2023 17:16:39 +0000
Received: by outflank-mailman (input) for mailman id 527833;
 Sun, 30 Apr 2023 17:16:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3rtS=AV=m5p.com=ehem@srs-se1.protection.inumbo.net>)
 id 1ptAf9-0000k1-3S
 for xen-devel@lists.xenproject.org; Sun, 30 Apr 2023 17:16:39 +0000
Received: from mailhost.m5p.com (mailhost.m5p.com [74.104.188.4])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c1fd6230-e77a-11ed-8611-37d641c3527e;
 Sun, 30 Apr 2023 19:16:36 +0200 (CEST)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 33UHGPvO054819
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Sun, 30 Apr 2023 13:16:31 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 33UHGPQs054818;
 Sun, 30 Apr 2023 10:16:25 -0700 (PDT) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1fd6230-e77a-11ed-8611-37d641c3527e
Date: Sun, 30 Apr 2023 10:16:25 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [BUG] x2apic broken with current AMD hardware
Message-ID: <ZE6iaaUvScHUjoKy@mattapan.m5p.com>
References: <a2e5cb62-9aef-4f91-b5e9-35fee6739fc8@suse.com>
 <ZAkVVhIldUv/xQqt@mattapan.m5p.com>
 <21436010-8212-7b09-a577-09d3f57156bf@suse.com>
 <ZAvGvokloPf+ltr9@mattapan.m5p.com>
 <f33c9b8a-f25d-caab-659d-d34ba21ebc25@suse.com>
 <ZBOSKo+sT/FtWY9C@mattapan.m5p.com>
 <e5b28dae-3699-cb0d-ab7e-42fdd42d3222@suse.com>
 <ZBSi2KfoQXo7hr6z@mattapan.m5p.com>
 <b2eaeacc-de5f-ebe9-a330-fbf9e20626b1@suse.com>
 <a2de5d87-ada8-46b9-090b-00dc43309362@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <a2de5d87-ada8-46b9-090b-00dc43309362@suse.com>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.6
X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on mattapan.m5p.com

On Mon, Mar 20, 2023 at 09:28:20AM +0100, Jan Beulich wrote:
> On 20.03.2023 09:14, Jan Beulich wrote:
> > On 17.03.2023 18:26, Elliott Mitchell wrote:
> >> On Fri, Mar 17, 2023 at 09:22:09AM +0100, Jan Beulich wrote:
> >>> On 16.03.2023 23:03, Elliott Mitchell wrote:
> >>>> On Mon, Mar 13, 2023 at 08:01:02AM +0100, Jan Beulich wrote:
> >>>>> On 11.03.2023 01:09, Elliott Mitchell wrote:
> >>>>>> On Thu, Mar 09, 2023 at 10:03:23AM +0100, Jan Beulich wrote:
> >>>>>>>
> >>>>>>> In any event you will want to collect a serial log at maximum verbosity.
> >>>>>>> It would also be of interest to know whether turning off the IOMMU avoids
> >>>>>>> the issue as well (on the assumption that your system has less than 255
> >>>>>>> CPUs).
> >>>>>>
> >>>>>> I think I might have figured out the situation in a different fashion.
> >>>>>>
> >>>>>> I was taking a look at the BIOS manual for this motherboard and noticed
> >>>>>> a mention of a "Local APIC Mode" setting.  Four values are listed
> >>>>>> "Compatibility", "xAPIC", "x2APIC", and "Auto".
> >>>>>>
> >>>>>> That is the sort of setting I likely left at "Auto" and that may well
> >>>>>> result in x2 functionality being disabled.  Perhaps the x2APIC
> >>>>>> functionality on AMD is detecting whether the hardware is present, and
> >>>>>> failing to test whether it has been enabled?  (could be useful to output
> >>>>>> a message suggesting enabling the hardware feature)
> >>>>>
> >>>>> Can we please move to a little more technical terms here? What is "present"
> >>>>> and "enabled" in your view? I don't suppose you mean the CPUID bit (which
> >>>>> we check) and the x2APIC-mode-enable one (which we drive as needed). It's
> >>>>> also left unclear what the four modes of BIOS operation evaluate to. Even
> >>>>> if we knew that, overriding e.g. "Compatibility" (which likely means some
> >>>>> form of "disabled" / "hidden") isn't normally an appropriate thing to do.
> >>>>> In "Auto" mode Xen likely should work - the only way I could interpret
> >>>>> the other modes is "xAPIC" meaning no x2APIC ACPI table entries (and
> >>>>> presumably the CPUID bit also masked), "x2APIC" meaning x2APIC mode pre-
> >>>>> enabled by firmware, and "Auto" leaving it to the OS to select. Yet that's
> >>>>> speculation on my part ...
> >>>>
> >>>> I provided the information I had discovered.  There is a setting for this
> >>>> motherboard (likely present on some similar motherboards) which /may/
> >>>> effect the issue.  I doubt I've tried "compatibility", but none of the
> >>>> values I've tried have gotten the system to boot without "x2apic=false"
> >>>> on Xen's command-line.
> >>>>
> >>>> When setting to "x2APIC" just after "(XEN) AMD-Vi: IOMMU Extended Features:"
> >>>> I see the line "(XEN) - x2APIC".  Later is the line
> >>>> "(XEN) x2APIC mode is already enabled by BIOS."  I'll guess "Auto"
> >>>> leaves the x2APIC turned off since neither line is present.
> >>>
> >>> When "(XEN) - x2APIC" is absent the IOMMU can't be switched into x2APIC
> >>> mode. Are you sure that's the case when using "Auto"?
> >>
> >> grep -eAPIC\ driver -e-\ x2APIC:
> >>
> >> "Auto":
> >> (XEN) Using APIC driver default
> >> (XEN) Overriding APIC driver with bigsmp
> >> (XEN) Switched to APIC driver x2apic_cluster
> >>
> >> "x2APIC":
> >> (XEN) Using APIC driver x2apic_cluster
> >> (XEN) - x2APIC
> >>
> >> Yes, I'm sure.
> > 
> > Okay, this then means we're running in a mode we don't mean to run
> > in: When the IOMMU claims to not support x2APIC mode (which is odd in
> > the first place when at the same time the CPU reports x2APIC mode as
> > supported), amd_iommu_prepare() is intended to switch interrupt
> > remapping mode to "restricted" (which in turn would force x2APIC mode
> > to "physical", not "clustered"). I notice though that there are a
> > number of error paths in the function which bypass this setting. Could
> > you add a couple of printk()s to understand which path is taken (each
> > time; the function can be called more than once)?
> 
> I think I've spotted at least one issue. Could you give the patch below
> a try please? (Patch is fine for master and 4.17 but would need context
> adjustment for 4.16.)

Since the patch didn't fix the problem, that wasn't the issue.  I did,
though, manage to try another variant of BIOS settings for this
motherboard.  Setting "Local APIC Mode" to "x2APIC" in the BIOS neither
breaks anything additional nor fixes the issue.  What appeared in Xen's
dmesg did change slightly and looks somewhat better for my purposes.
Some more snippets from the 4.17 Xen dmesg, with "x2apic_phys=true":

(XEN) AMD-Vi: IOMMU Extended Features:
(XEN) - Peripheral Page Service Request
(XEN) - x2APIC
(XEN) - NX bit
(XEN) - Guest APIC Physical Processor Interrupt
(XEN) - Invalidate All Command
(XEN) - Guest APIC
(XEN) - Performance Counters
(XEN) - Host Address Translation Size: 0x2
(XEN) - Guest Address Translation Size: 0
(XEN) - Guest CR3 Root Table Level: 0x1
(XEN) - Maximum PASID: 0xf
(XEN) - SMI Filter Register: 0x1
(XEN) - SMI Filter Register Count: 0x1
(XEN) - Guest Virtual APIC Modes: 0x1
(XEN) - Dual PPR Log: 0x2
(XEN) - Dual Event Log: 0x2
(XEN) - Secure ATS
(XEN) - User / Supervisor Page Protection
(XEN) - Device Table Segmentation: 0x3
(XEN) - PPR Log Overflow Early Warning
(XEN) - PPR Automatic Response
(XEN) - Memory Access Routing and Control: 0x1
(XEN) - Block StopMark Message
(XEN) - Performance Optimization
(XEN) - MSI Capability MMIO Access
(XEN) - Guest I/O Protection
(XEN) - Enhanced PPR Handling
(XEN) - Invalidate IOTLB Type
(XEN) - VM Table Size: 0x2
(XEN) - Guest Access Bit Update Disable
(XEN) AMD-Vi: Disabled HAP memory map sharing with IOMMU
(XEN) AMD-Vi: IOMMU 0 Enabled.


(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) nr_sockets: 1
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) Enabling APIC mode:  Physical.  Using 2 I/O APICs
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method


(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - VMCB Clean Bits
(XEN)  - DecodeAssists
(XEN)  - Virtual VMLOAD/VMSAVE
(XEN)  - Virtual GIF
(XEN)  - Pause-Intercept Filter
(XEN)  - Pause-Intercept Filter Threshold
(XEN)  - TSC Rate MSR
(XEN)  - NPT Supervisor Shadow Stack
(XEN)  - MSR_SPEC_CTRL virtualisation
(XEN) HVM: SVM enabled

If I'm reading that correctly, everything is there for x2APIC.  As such
there seem to be one or two bugs:

The definite bug is that the x2apic_cluster APIC driver fails on recent
AMD processors.

I'm unsure whether selecting the x2apic_cluster APIC driver is correct
or not.  Capabilities you used to find only on multi-socket server
motherboards are now appearing on desktop motherboards.  My
understanding is this processor does NUMA on a single die, not merely
on a single socket.  As such it may well need the features of
x2apic_cluster; perhaps the driver assumes nr_sockets > 1, which is
untrue here?

It does appear "x2apic_phys=true" plus "tsc_mode = 'always_emulate'"
are adequate workarounds all the way back to 4.14.  Now for the second,
correct bugfix.
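
[Editor's note: a concrete illustration of the workaround settings quoted
above; the file locations are illustrative, the option names are the ones
from the message.]

```
# Xen hypervisor command line (e.g. appended to the Xen entry in the
# bootloader configuration):
x2apic_phys=true

# Per-guest xl configuration file (e.g. /etc/xen/guest.cfg):
tsc_mode = 'always_emulate'
```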


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sun Apr 30 17:18:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Apr 2023 17:18:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527837.820480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptAgf-0001Gj-5S; Sun, 30 Apr 2023 17:18:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527837.820480; Sun, 30 Apr 2023 17:18:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptAgf-0001Gc-2k; Sun, 30 Apr 2023 17:18:13 +0000
Received: by outflank-mailman (input) for mailman id 527837;
 Sun, 30 Apr 2023 17:18:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptAgd-0001GI-VD; Sun, 30 Apr 2023 17:18:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptAgd-0002wW-Ja; Sun, 30 Apr 2023 17:18:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptAgc-0001S3-U4; Sun, 30 Apr 2023 17:18:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ptAgc-0007D3-Te; Sun, 30 Apr 2023 17:18:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2lfgOs7OVD5wL6GP0J90UiqbFKVG1XZm65OIUCga/1Q=; b=ielrN0mlOp8ZHXCWKocM+sNp0H
	7cDB7i0wvLGGHxDEJsz/clEnSYoUZs5J8a96CsrW4OAvZOFrCjzOvH9W5Gx/aWMWhx01nf3oJzPJ/
	xqHKfkoLoLTuxorrRBKdxyAGnLlvFO+9oRg8m/xu+ivuF8CHA04Aqj/jCQVRoK4PrgKE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180486-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180486: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=825a0714d2b3883d4f8ff64f6933fb73ee3f1834
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 Apr 2023 17:18:10 +0000

flight 180486 linux-linus real [real]
flight 180488 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180486/
http://logs.test-lab.xenproject.org/osstest/logs/180488/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                825a0714d2b3883d4f8ff64f6933fb73ee3f1834
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   13 days
Failing since        180281  2023-04-17 06:24:36 Z   13 days   22 attempts
Testing same since   180486  2023-04-30 05:33:05 Z    0 days    1 attempts

------------------------------------------------------------
2087 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 247816 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Apr 30 21:09:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Apr 2023 21:09:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527882.820490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptEI0-0007tE-GU; Sun, 30 Apr 2023 21:09:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527882.820490; Sun, 30 Apr 2023 21:09:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptEI0-0007t7-Dh; Sun, 30 Apr 2023 21:09:00 +0000
Received: by outflank-mailman (input) for mailman id 527882;
 Sun, 30 Apr 2023 21:08:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptEHy-0007sx-K7; Sun, 30 Apr 2023 21:08:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptEHy-00011C-EF; Sun, 30 Apr 2023 21:08:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptEHx-0001Mr-Ud; Sun, 30 Apr 2023 21:08:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ptEHw-0003wV-7O; Sun, 30 Apr 2023 21:08:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eYRQND/PdFwSm73A4JKz1cGRsY6aDwq3n7kd0F2TU1U=; b=NTfyrvlRzb3YowphKdJtzR0QwM
	ofavZgfnXdHsvfq6Sb7LTeTBp/h9r0dwhyIBANuc+TU7MC7clkLrwvkUo48zMlUFg3Afq1/e6zgt2
	R+F5WeJpC7ELTn+adUmFuZuXMMu6mA81i6rfrZh8EL2bh9YVPLfZXsd5AWpPzn96LVqw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180487-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180487: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7c18f2d663521f1b31b821a13358ce38075eaf7d
X-Osstest-Versions-That:
    qemuu=9b112b1b79f0e93242a9ce9bffd1113458e93e03
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 Apr 2023 21:08:56 +0000

flight 180487 qemu-mainline real [real]
flight 180490 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180487/
http://logs.test-lab.xenproject.org/osstest/logs/180490/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 180490-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180480
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180480
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180480
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180480
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180480
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180480
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180480
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180480
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                7c18f2d663521f1b31b821a13358ce38075eaf7d
baseline version:
 qemuu                9b112b1b79f0e93242a9ce9bffd1113458e93e03

Last test of basis   180480  2023-04-29 09:24:45 Z    1 days
Testing same since   180487  2023-04-30 07:39:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Cédric Le Goater <clg@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  David 'Digit' Turner <digit@google.com>
  Jiaxi Chen <jiaxi.chen@linux.intel.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Tao Su <tao1.su@linux.intel.com>
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   9b112b1b79..7c18f2d663  7c18f2d663521f1b31b821a13358ce38075eaf7d -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun Apr 30 22:42:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Apr 2023 22:42:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527889.820500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptFkg-0001A8-Hn; Sun, 30 Apr 2023 22:42:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527889.820500; Sun, 30 Apr 2023 22:42:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptFkg-0001A1-Ev; Sun, 30 Apr 2023 22:42:42 +0000
Received: by outflank-mailman (input) for mailman id 527889;
 Sun, 30 Apr 2023 22:42:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MVg7=AV=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1ptFkf-00019p-Ge
 for xen-devel@lists.xenproject.org; Sun, 30 Apr 2023 22:42:41 +0000
Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com
 [66.111.4.27]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4e11efa6-e7a8-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 00:42:38 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id 6E43E5C0089;
 Sun, 30 Apr 2023 18:42:36 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Sun, 30 Apr 2023 18:42:36 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 30 Apr 2023 18:42:35 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e11efa6-e7a8-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:message-id:mime-version:reply-to:sender:subject:subject:to:to;
	 s=fm3; t=1682894556; x=1682980956; bh=3sUQj9tWV99SSb2ZZEQJJXH/Z
	420sQTp6kjDCThm5zk=; b=D/cULiKOMTF8Ghso+8uj+1fm+169QRaMlNbVB5lH7
	IqR0olX1rA4PvV5XfIjHwH5AcP3zDHGvsSkchkK87M5BG5qR7zyLqB9RFYuQ6VVe
	oLSqSQrVrddODSYIcQEyXM9HGDistjqhjc8x4NOT/eK1z+yHfKro2rIj4gQinw2I
	8HSRfQ9P60xwFhdASSog9mWyLG0zdOwnI8gdZNrVwuSIgLFrJROCSTJOGeoW3H+F
	Emc1kU3zdjJFrS3B9ehi7I6gdpLpC4KPVE5ifK0MX9EIyT4hJsrHJbgDUXBrv941
	jXR9P2eP3fh6w8BvMTjErOf/8JDhka9C1aBFu388xcuXg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:message-id:mime-version:reply-to:sender
	:subject:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm3; t=1682894556; x=1682980956; bh=3
	sUQj9tWV99SSb2ZZEQJJXH/Z420sQTp6kjDCThm5zk=; b=Iolcj6+8GfwnM05Cj
	+GJjgv/6sih3Bbr3g/0daDFBXmjWbP3kyheZ0vxcESSWCE+/2CgWj3T1zwCl+dBI
	w/d0m0B27LkuadcxEaUmtIspBW8srQSIS5VNhx9d4HWbwfzw9HHBf4P2JvCj3N2r
	18f+JOkXRbj2rXkvGTG1YyJ7cJkRenEbMIsTb7zxvew1I6hmZke+DirESyRqj6yf
	VpfC4Ncv9L5S1C2gH9/H0C193k2YLdS5W1Vnk0owHXLFeWnaHyBPVJBtcndHTxXd
	YOnpHeZMeXjdni75FxazR3UDg6JwS6asToGwN2B/rsO4cL2a7vxFAWjK4V05iMOX
	GUUow==
X-ME-Sender: <xms:3O5OZHYe_ThKxY-qGzXLykW8PRpHxGNbhSZAAzLwYC6UzoKVtK8JiQ>
    <xme:3O5OZGYY4l1spRZaBZ1N8FuYsUJFIUGMJcXPGQdEBtrUucY-OyH71eMm-x5bpqesh
    xIfD4adb2viaA>
X-ME-Received: <xmr:3O5OZJ9fmAQeO-c5yaz2ZsIMMAp7_EG5qbw5CxEBJW8CbAfSkybRpCBn5cz3WhCcSMTw6DLp_ZM8b9C47b4sbijbS5-Zxwd9M_9vt9mjmRUGmQhsWD0w>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedvfedgudegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucenucfjughrpefhvfevufffkffogggtgfesthekre
    dtredtjeenucfhrhhomhepofgrrhgvkhcuofgrrhgtiiihkhhofihskhhiqdfikphrvggt
    khhiuceomhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
    eqnecuggftrfgrthhtvghrnhepleekhfduleetleelleetteevfeefteffkeetteejheel
    gfegkeelgeehhfdthedvnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrg
    hilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdr
    tghomh
X-ME-Proxy: <xmx:3O5OZNr8QCg_tmyZGazqn_t4rVz2Y6VNDujDqgRnLw41PGVg32_pMA>
    <xmx:3O5OZCpJE-hEOBnZHOIyJmPqyKJKh6lHw-2J9Tp-NwFvWXaFYN6T2Q>
    <xmx:3O5OZDR_kIN0YzVC6IyrXde6xDOfr_yBg-cQ_SiH34DhfOTZalUEYg>
    <xmx:3O5OZAGyztfFCYkfFgSOwK8K7FGPaISCcTq82QqjoXPbXyDbmjvfFw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH v2 0/2] automation: optimize build jobs order
Date: Mon,  1 May 2023 00:42:21 +0200
Message-Id: <cover.ff073811df470285fc1011952c6cc28e9e77607e.1682894502.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This made the pipeline complete within 45 minutes. That isn't a big
improvement on its own, but it should make adding more runners more
beneficial: watching the pipeline in real time, most jobs were waiting
for available runners rather than being stuck on dependencies.
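An illustrative sketch of the ordering idea (not part of the series, and simplified relative to the real build.yaml): within a stage, GitLab creates jobs in file order, and with scarce runners the oldest pending jobs tend to be picked up first, so jobs that tests depend on are moved to the top of the file. Job names mirror the series; the exact keys are reduced for brevity.

```yaml
# Sketch only: assumes runners pick up pending jobs roughly oldest-first,
# so jobs defined earlier in the file start earlier when runners are scarce.
stages:
  - build
  - test

# Defined first: artifacts the test stage needs, so tests unblock early.
alpine-3.12-rootfs-export:
  stage: build
  script:
    - mkdir binaries && cp /initrd.tar.gz binaries/initrd.tar.gz
  artifacts:
    paths:
      - binaries/initrd.tar.gz

# Defined later: build jobs no test depends on.
archlinux-gcc:
  stage: build
  script:
    - ./automation/scripts/build.sh   # hypothetical script path
```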

Marek Marczykowski-Górecki (2):
  automation: move test artifacts jobs to the top
  automation: optimize build jobs order

 automation/gitlab-ci/build.yaml | 734 ++++++++++++++++-----------------
 1 file changed, 369 insertions(+), 365 deletions(-)

base-commit: 6a47ba2f7855f8fc094ec4f837e71a34ededb77b
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Sun Apr 30 22:42:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Apr 2023 22:42:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527891.820520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptFkm-0001fm-2X; Sun, 30 Apr 2023 22:42:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527891.820520; Sun, 30 Apr 2023 22:42:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptFkl-0001fd-Vr; Sun, 30 Apr 2023 22:42:47 +0000
Received: by outflank-mailman (input) for mailman id 527891;
 Sun, 30 Apr 2023 22:42:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MVg7=AV=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1ptFkk-0001dl-C4
 for xen-devel@lists.xenproject.org; Sun, 30 Apr 2023 22:42:46 +0000
Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com
 [66.111.4.27]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4eb274d7-e7a8-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 00:42:42 +0200 (CEST)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id B542E5C00A0;
 Sun, 30 Apr 2023 18:42:37 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Sun, 30 Apr 2023 18:42:37 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 30 Apr 2023 18:42:36 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4eb274d7-e7a8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682894557; x=1682980957; bh=v2
	Vi7FgEgNWCjyUl2feZ+z266kJ26xqC5Cqb4uQACSc=; b=E4U3sDTC6au7lRyETH
	63MPUAdEiBt0XwlbbNo7uEOuD9zihvr+6ai76H6yQRNyj8j/0+AZKq9D4BTSECj9
	dP6X/E4n/Y/C9UGskyKxGwEPOsy1whCEwa1OLCV/uELFuWvdstm/Yx4TqOrzUK08
	9u7j/M3pKNSvEPdBTTMulMIoAwOKEZDJKKtYcRMgpwoMwtJxhtC2WRKfxMeqy9yS
	YrbLcqCmPeTWiUzZs2vHczaPzk7iYAnegxyzQOhcdsyfgLxfaf86mmqd1AqfWuhd
	y86LDiUF10ALwVtqqKfovqUuJgnb2C338IF8ncNfFG1q2IWcIOL8aB7pgm9BY33Z
	eIcA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682894557; x=1682980957; bh=v2Vi7FgEgNWCjyUl2feZ+z266kJ26xqC5Cq
	b4uQACSc=; b=iGH9evVcNV1k9ggHEWeyVjxyQfEQFKAZkwih711T/Hty4cGhfcM
	sqnrLH556V8nwbQjhUcTnQSM4TezNWgUVAroa0xOSpfk01fxTFGA5DweEnF6GYUy
	U8qYPuXgxFhEoL/aRPLWXR+UywXM0ZqFA6doa6gRBDpWWhMl4tm4kBqKybKKZTs4
	4I6huimvPmRCN70+GShjHupKaaAMTl6Yj2K+OIBZDybErjdoS5oDgk6HS60fnK7U
	LEPhqcaP8EGF4D2L67nKfaNaQxY7LsJ/EeMTuAIbALr7nEzGqnKq94fJQe6Hnb8j
	z9x50tkGzSHA50In48LJAwTRiFGyAof6GvA==
X-ME-Sender: <xms:3e5OZCpFOwU3hWJKGUwqsEk1SdlAuPly12rqwVgu24veq0XB7gel3A>
    <xme:3e5OZApLDvGRgKwDb0AoDpQ4BTCNKX_tf47N_2aQW2_21Z-i_RenZHHRhDE3fdmwG
    8fDFO8zp7z9CA>
X-ME-Received: <xmr:3e5OZHNHGUuIAxVkbSwanR9WOezF_IKIiiSEv1COoEC-4Kib8T4iVtkzLFVc9afdoxMwZWKejTtqpFmCOWJG4hhgrfxTTJ9Ki6lnVqKv_lJL-3mEHaLr>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedvfedgudegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeel
    fedtfefgfffghfevhfehvdeileehheffueekieetfeffhfetgefggfejudfggeenucffoh
    hmrghinhepghhithhlrggsrdgtohhmnecuvehluhhsthgvrhfuihiivgeptdenucfrrghr
    rghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomh
X-ME-Proxy: <xmx:3e5OZB4q3Iz4UFI4kmCZKYIu7tI49ngOCIbC0pKE-zTd-kEnm6nGJg>
    <xmx:3e5OZB6ngNuiX-wzATpSTNO6B4bx-ZB9J8C2TDWhax_azpzxWidbPw>
    <xmx:3e5OZBi4eqD9mikA2JPnS-1zc9BCaxhh94BlSUOJOlfwv7A33pf6qg>
    <xmx:3e5OZDQbN7qLSss2lj1AYYUPlynaQ4A30JGwg7jCbUUaYes082V9Cw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 1/2] automation: move test artifacts jobs to the top
Date: Mon,  1 May 2023 00:42:22 +0200
Message-Id: <1529fdbdd083aa64c2d234b7ee88206bec774972.1682894502.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.ff073811df470285fc1011952c6cc28e9e77607e.1682894502.git-series.marmarek@invisiblethingslab.com>
References: <cover.ff073811df470285fc1011952c6cc28e9e77607e.1682894502.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Make them run earlier, so tests can start sooner, in parallel with the
build jobs.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/gitlab-ci/build.yaml | 152 ++++++++++++++++-----------------
 1 file changed, 76 insertions(+), 76 deletions(-)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index d323c30a8304..3f44902c44d0 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -236,6 +236,82 @@
   variables:
     <<: *gcc
 
+## Test artifacts common
+
+.test-jobs-artifact-common:
+  stage: build
+  except: !reference [.test-jobs-common, except]
+
+# Arm test artifacts
+
+alpine-3.12-arm64-rootfs-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12-arm64v8
+  script:
+    - mkdir binaries && cp /initrd.tar.gz binaries/initrd.tar.gz
+  artifacts:
+    paths:
+      - binaries/initrd.tar.gz
+  tags:
+    - arm64
+
+kernel-5.19-arm64-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:5.19-arm64v8
+  script:
+    - mkdir binaries && cp /Image binaries/Image
+  artifacts:
+    paths:
+      - binaries/Image
+  tags:
+    - arm64
+
+qemu-system-aarch64-6.0.0-arm64-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/qemu-system-aarch64:6.0.0-arm64v8
+  script:
+    - mkdir binaries && cp /qemu-system-aarch64 binaries/qemu-system-aarch64
+  artifacts:
+    paths:
+      - binaries/qemu-system-aarch64
+  tags:
+    - arm64
+
+qemu-system-aarch64-6.0.0-arm32-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/qemu-system-aarch64:6.0.0-arm64v8
+  script:
+    - mkdir binaries && cp /qemu-system-arm binaries/qemu-system-arm
+  artifacts:
+    paths:
+      - binaries/qemu-system-arm
+  tags:
+    - arm64
+
+# x86_64 test artifacts
+
+alpine-3.12-rootfs-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12
+  script:
+    - mkdir binaries && cp /initrd.tar.gz binaries/initrd.tar.gz
+  artifacts:
+    paths:
+      - binaries/initrd.tar.gz
+  tags:
+    - x86_64
+
+kernel-6.1.19-export:
+  extends: .test-jobs-artifact-common
+  image: registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:6.1.19
+  script:
+    - mkdir binaries && cp /bzImage binaries/bzImage
+  artifacts:
+    paths:
+      - binaries/bzImage
+  tags:
+    - x86_64
+
 # Jobs below this line
 
 archlinux-gcc:
@@ -705,79 +781,3 @@ debian-unstable-gcc-arm64-cppcheck:
     CONTAINER: debian:unstable-cppcheck
     CPPCHECK: y
     HYPERVISOR_ONLY: y
-
-## Test artifacts common
-
-.test-jobs-artifact-common:
-  stage: build
-  except: !reference [.test-jobs-common, except]
-
-# Arm test artifacts
-
-alpine-3.12-arm64-rootfs-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12-arm64v8
-  script:
-    - mkdir binaries && cp /initrd.tar.gz binaries/initrd.tar.gz
-  artifacts:
-    paths:
-      - binaries/initrd.tar.gz
-  tags:
-    - arm64
-
-kernel-5.19-arm64-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:5.19-arm64v8
-  script:
-    - mkdir binaries && cp /Image binaries/Image
-  artifacts:
-    paths:
-      - binaries/Image
-  tags:
-    - arm64
-
-qemu-system-aarch64-6.0.0-arm64-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/qemu-system-aarch64:6.0.0-arm64v8
-  script:
-    - mkdir binaries && cp /qemu-system-aarch64 binaries/qemu-system-aarch64
-  artifacts:
-    paths:
-      - binaries/qemu-system-aarch64
-  tags:
-    - arm64
-
-qemu-system-aarch64-6.0.0-arm32-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/qemu-system-aarch64:6.0.0-arm64v8
-  script:
-    - mkdir binaries && cp /qemu-system-arm binaries/qemu-system-arm
-  artifacts:
-    paths:
-      - binaries/qemu-system-arm
-  tags:
-    - arm64
-
-# x86_64 test artifacts
-
-alpine-3.12-rootfs-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12
-  script:
-    - mkdir binaries && cp /initrd.tar.gz binaries/initrd.tar.gz
-  artifacts:
-    paths:
-      - binaries/initrd.tar.gz
-  tags:
-    - x86_64
-
-kernel-6.1.19-export:
-  extends: .test-jobs-artifact-common
-  image: registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:6.1.19
-  script:
-    - mkdir binaries && cp /bzImage binaries/bzImage
-  artifacts:
-    paths:
-      - binaries/bzImage
-  tags:
-    - x86_64
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Sun Apr 30 22:42:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Apr 2023 22:42:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527890.820506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptFkg-0001De-R2; Sun, 30 Apr 2023 22:42:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527890.820506; Sun, 30 Apr 2023 22:42:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptFkg-0001Ck-MY; Sun, 30 Apr 2023 22:42:42 +0000
Received: by outflank-mailman (input) for mailman id 527890;
 Sun, 30 Apr 2023 22:42:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MVg7=AV=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1ptFkg-00019p-3f
 for xen-devel@lists.xenproject.org; Sun, 30 Apr 2023 22:42:42 +0000
Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com
 [66.111.4.27]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4f83c576-e7a8-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 00:42:39 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 229C95C009B;
 Sun, 30 Apr 2023 18:42:39 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Sun, 30 Apr 2023 18:42:39 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 30 Apr 2023 18:42:37 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f83c576-e7a8-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1682894559; x=1682980959; bh=NJ
	LqfqVgmi3zQgvQJ5K3XZAUDkzQPr1TvscaLzZrSXU=; b=YZ4Q3mc5CnUMF7RF0R
	4WHC7rNFyxOYP9tAHcuOoWlE+1JqLOQMOmzFgpW3koTsXTUVRhEwvCGQxjOVwcm7
	ENpeH/kct620U2hdRlXcrbYR6HXR5f7CG8mbIDsd6T6f7/05eYYcx91HsNlmy/hx
	YmrYQO3oJTBTAf+nJEVkHSLU6JrtrMus0cfYCKFdVl5XJ8mvLJk3tkWaEp9o5R7E
	zsLv4R1dq/OdcIEtN3eDxK/DBaqqKPZwbXBvWlO5rgnAV0WS39EFUJ5hZuRwdtO8
	Kbqfvhrk9Z3LTf00KITQHecEnSvNwWa0xVsvV2FIV6U0ng2GxlpRQKQUbhfhQjcg
	Fd8g==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1682894559; x=1682980959; bh=NJLqfqVgmi3zQgvQJ5K3XZAUDkzQPr1Tvsc
	aLzZrSXU=; b=gi5mbo7vg7vITLcySDxpDghMzIbA6/U9WgbT6aQFeBAgFTvK5pU
	IpfezgtTjXuttKoBw72kDuX11FpU7HtU0Brbe0gCe3RmuDZzxfYg4aiQSyJXqKmK
	jxZ8oWK2ZOh4dfhPlz0IdpImDz4ivCwHuS9m9iqH8s4LVa7QLMW9HMAQgrn60QfZ
	IpPTC+Ah2E1G9jN2mxmuPZPpK+q9N2SQEYNyPPJWHtFTGF/+bkBxYcaSL5t6sryg
	ikckw+Y2j+mk+NIQ+XmIczOZa85mieen9cfBZtN0kSBh0picXGk8N49+o3mA5BVD
	jBg2ZZEUsb+znCwoEFPOxbo9X9TQXcWBbbQ==
X-ME-Sender: <xms:3u5OZOqdK_VXXiEdw35i-P1Aat0tM_zdVzBJKNpwOg80Q9NHoZkCkw>
    <xme:3u5OZMqUIFs-U0Fu1vBpRezFSYNFpLK_BThuFQMZCMbCcQaCVuOX_yJDDwa25UDxV
    rFkDLsAXtRwaA>
X-ME-Received: <xmr:3u5OZDMuJWC6dygdadYvkJqdqBMeVV3wBkTS54FKboPpF2RQuCc9MngTH3uUxTiYfUM1gGNPQ1diGHpN7PfwPPjDnVZzf6JOZy4MPAAcBN8MFCKZ_e10>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfedvfedgudegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:3-5OZN7pPW0uehpfBwjM3Ifs9tlZbJyOHxrk1JlKItioe12dpoO-Ew>
    <xmx:3-5OZN7LZXxZhzTWPNBxEndkiMvA0OHkjtxfc_FBFXW0so8RMtuDfg>
    <xmx:3-5OZNjdD2KxMiU3kVZ-d_xPB1c6g5SYMuqAjiMeTRK-FpsOAD7uiw>
    <xmx:3-5OZPQSqVPhh4qDY7qTjP_V728Nj-rgxKpg3vZYqTMYii5HuwD4eQ>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 2/2] automation: optimize build jobs order
Date: Mon,  1 May 2023 00:42:23 +0200
Message-Id: <da4e7c1303754d50e03a6bcf97eb2e4b53503c4b.1682894502.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.ff073811df470285fc1011952c6cc28e9e77607e.1682894502.git-series.marmarek@invisiblethingslab.com>
References: <cover.ff073811df470285fc1011952c6cc28e9e77607e.1682894502.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Put jobs that are needed for (any) test earlier, so the tests can start
running in parallel with the builds.
This commit splits only the x86 build jobs into two sections (one at the
top and one at the bottom), but keeps the Arm build jobs in one section:
most of them have an associated test, and the few that do not are not
worth reducing the readability of the file for.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/gitlab-ci/build.yaml | 462 ++++++++++++++++-----------------
 1 file changed, 233 insertions(+), 229 deletions(-)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index 3f44902c44d0..420ffa5acb47 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -314,252 +314,28 @@ kernel-6.1.19-export:
 
 # Jobs below this line
 
-archlinux-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: archlinux:current
+# Build jobs needed for tests
 
-archlinux-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: archlinux:current
-
-centos-7-gcc:
+alpine-3.12-gcc:
   extends: .gcc-x86-64-build
   variables:
-    CONTAINER: centos:7
+    CONTAINER: alpine:3.12
 
-centos-7-gcc-debug:
+alpine-3.12-gcc-debug:
   extends: .gcc-x86-64-build-debug
   variables:
-    CONTAINER: centos:7
-
-debian-stretch-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: debian:stretch
-
-debian-stretch-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: debian:stretch
-
-debian-stretch-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: debian:stretch
+    CONTAINER: alpine:3.12
 
 debian-stretch-gcc-debug:
   extends: .gcc-x86-64-build-debug
   variables:
     CONTAINER: debian:stretch
 
-debian-stretch-32-clang-debug:
-  extends: .clang-x86-32-build-debug
-  variables:
-    CONTAINER: debian:stretch-i386
-
-debian-stretch-32-gcc-debug:
-  extends: .gcc-x86-32-build-debug
-  variables:
-    CONTAINER: debian:stretch-i386
-
-debian-buster-gcc-ibt:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: debian:buster-gcc-ibt
-    RANDCONFIG: y
-    EXTRA_FIXED_RANDCONFIG: |
-      CONFIG_XEN_IBT=y
-
-debian-unstable-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: debian:unstable
-
 debian-unstable-clang-debug:
   extends: .clang-x86-64-build-debug
   variables:
     CONTAINER: debian:unstable
 
-debian-unstable-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: debian:unstable
-
-debian-unstable-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: debian:unstable
-
-debian-unstable-gcc-randconfig:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: debian:unstable
-    RANDCONFIG: y
-
-debian-unstable-gcc-debug-randconfig:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: debian:unstable
-    RANDCONFIG: y
-
-debian-unstable-32-clang-debug:
-  extends: .clang-x86-32-build-debug
-  variables:
-    CONTAINER: debian:unstable-i386
-
-debian-unstable-32-gcc-debug:
-  extends: .gcc-x86-32-build-debug
-  variables:
-    CONTAINER: debian:unstable-i386
-
-fedora-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: fedora:29
-
-fedora-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: fedora:29
-
-# Ubuntu Trusty's Clang is 3.4 while Xen requires 3.5
-
-ubuntu-trusty-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: ubuntu:trusty
-
-ubuntu-trusty-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: ubuntu:trusty
-
-ubuntu-xenial-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: ubuntu:xenial
-
-ubuntu-xenial-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: ubuntu:xenial
-
-ubuntu-xenial-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: ubuntu:xenial
-
-ubuntu-xenial-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: ubuntu:xenial
-
-ubuntu-bionic-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: ubuntu:bionic
-
-ubuntu-bionic-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: ubuntu:bionic
-
-ubuntu-bionic-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: ubuntu:bionic
-
-ubuntu-bionic-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: ubuntu:bionic
-
-ubuntu-focal-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: ubuntu:focal
-
-ubuntu-focal-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: ubuntu:focal
-
-ubuntu-focal-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: ubuntu:focal
-
-ubuntu-focal-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: ubuntu:focal
-
-opensuse-leap-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: suse:opensuse-leap
-
-opensuse-leap-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: suse:opensuse-leap
-
-opensuse-leap-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: suse:opensuse-leap
-
-opensuse-leap-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: suse:opensuse-leap
-
-opensuse-tumbleweed-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: suse:opensuse-tumbleweed
-  allow_failure: true
-
-opensuse-tumbleweed-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: suse:opensuse-tumbleweed
-  allow_failure: true
-
-opensuse-tumbleweed-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: suse:opensuse-tumbleweed
-  allow_failure: true
-
-opensuse-tumbleweed-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: suse:opensuse-tumbleweed
-  allow_failure: true
-
-alpine-3.12-gcc:
-  extends: .gcc-x86-64-build
-  variables:
-    CONTAINER: alpine:3.12
-
-alpine-3.12-gcc-debug:
-  extends: .gcc-x86-64-build-debug
-  variables:
-    CONTAINER: alpine:3.12
-
-alpine-3.12-clang:
-  extends: .clang-x86-64-build
-  variables:
-    CONTAINER: alpine:3.12
-
-alpine-3.12-clang-debug:
-  extends: .clang-x86-64-build-debug
-  variables:
-    CONTAINER: alpine:3.12
-
 # Arm32 cross-build
 
 debian-unstable-gcc-arm32:
@@ -781,3 +557,231 @@ debian-unstable-gcc-arm64-cppcheck:
     CONTAINER: debian:unstable-cppcheck
     CPPCHECK: y
     HYPERVISOR_ONLY: y
+
+# Build jobs not needed for tests
+
+alpine-3.12-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: alpine:3.12
+
+alpine-3.12-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: alpine:3.12
+
+archlinux-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: archlinux:current
+
+archlinux-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: archlinux:current
+
+centos-7-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: centos:7
+
+centos-7-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: centos:7
+
+debian-stretch-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: debian:stretch
+
+debian-stretch-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: debian:stretch
+
+debian-stretch-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: debian:stretch
+
+debian-stretch-32-clang-debug:
+  extends: .clang-x86-32-build-debug
+  variables:
+    CONTAINER: debian:stretch-i386
+
+debian-stretch-32-gcc-debug:
+  extends: .gcc-x86-32-build-debug
+  variables:
+    CONTAINER: debian:stretch-i386
+
+debian-buster-gcc-ibt:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: debian:buster-gcc-ibt
+    RANDCONFIG: y
+    EXTRA_FIXED_RANDCONFIG: |
+      CONFIG_XEN_IBT=y
+
+debian-unstable-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: debian:unstable
+
+debian-unstable-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: debian:unstable
+
+debian-unstable-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: debian:unstable
+
+debian-unstable-gcc-randconfig:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: debian:unstable
+    RANDCONFIG: y
+
+debian-unstable-gcc-debug-randconfig:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: debian:unstable
+    RANDCONFIG: y
+
+debian-unstable-32-clang-debug:
+  extends: .clang-x86-32-build-debug
+  variables:
+    CONTAINER: debian:unstable-i386
+
+debian-unstable-32-gcc-debug:
+  extends: .gcc-x86-32-build-debug
+  variables:
+    CONTAINER: debian:unstable-i386
+
+fedora-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: fedora:29
+
+fedora-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: fedora:29
+
+# Ubuntu Trusty's Clang is 3.4 while Xen requires 3.5
+
+ubuntu-trusty-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: ubuntu:trusty
+
+ubuntu-trusty-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:trusty
+
+ubuntu-xenial-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: ubuntu:xenial
+
+ubuntu-xenial-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:xenial
+
+ubuntu-xenial-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: ubuntu:xenial
+
+ubuntu-xenial-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:xenial
+
+ubuntu-bionic-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: ubuntu:bionic
+
+ubuntu-bionic-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:bionic
+
+ubuntu-bionic-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: ubuntu:bionic
+
+ubuntu-bionic-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:bionic
+
+ubuntu-focal-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: ubuntu:focal
+
+ubuntu-focal-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:focal
+
+ubuntu-focal-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: ubuntu:focal
+
+ubuntu-focal-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: ubuntu:focal
+
+opensuse-leap-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: suse:opensuse-leap
+
+opensuse-leap-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: suse:opensuse-leap
+
+opensuse-leap-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: suse:opensuse-leap
+
+opensuse-leap-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: suse:opensuse-leap
+
+opensuse-tumbleweed-clang:
+  extends: .clang-x86-64-build
+  variables:
+    CONTAINER: suse:opensuse-tumbleweed
+  allow_failure: true
+
+opensuse-tumbleweed-clang-debug:
+  extends: .clang-x86-64-build-debug
+  variables:
+    CONTAINER: suse:opensuse-tumbleweed
+  allow_failure: true
+
+opensuse-tumbleweed-gcc:
+  extends: .gcc-x86-64-build
+  variables:
+    CONTAINER: suse:opensuse-tumbleweed
+  allow_failure: true
+
+opensuse-tumbleweed-gcc-debug:
+  extends: .gcc-x86-64-build-debug
+  variables:
+    CONTAINER: suse:opensuse-tumbleweed
+  allow_failure: true
-- 
git-series 0.9.1